
Joonas Lehtinen

Turku Centre for Computer Science

TUCS Dissertations No 62, June 2005

Coding of Wavelet-Transformed Images


CODING OF

WAVELET-TRANSFORMED

IMAGES

by

Joonas Lehtinen

ACADEMIC DISSERTATION

To be presented, with the permission of the Faculty of Mathematics and Natural Sciences of the University of Turku, for public criticism in the Auditorium of the Department of Information Technology on July 1, 2005, at 12 noon.

University of Turku
Department of Information Technology

Turku, Finland
2005


Supervised by

Professor Olli Nevalainen, Ph.D.
Department of Information Technology
University of Turku

Reviewed by

Professor Jussi Parkkinen, Ph.D.
Department of Computer Science
University of Joensuu

and

Professor Ioan Tabus, Ph.D.
Signal Processing Laboratory
Tampere University of Technology

ISBN 952-12-1568-2
ISSN 1239-1883
Painosalama Oy
Turku, Finland


Abstract

Compression methods are widely used for reducing storage and enhancing transfer of digital images. By selectively discarding visually subtle details from images, it is possible to represent images with only a fraction of the bits required for the uncompressed images. The best lossy image compression methods currently used are based on quantization, modeling and entropy coding of transformed images.

This thesis studies quantization and modeling of wavelet-transformed natural images. The use of different quantization levels for different regions of the image is discussed and a new variable quality image compression method is introduced. The benefits of variable quality image coding are demonstrated for the coding of mammography images. In most lossy image coding algorithms, the quantization of the transform coefficients is controlled by setting a limit on the size of the compressed image or by directly defining the magnitude of the quantizer. It is shown here how the distortion in the decompressed image can be used as the quantization criterion, and a new image coding algorithm that implements this criterion is introduced.

While a wavelet-transformed image is being encoded, both the coder and the decoder know the values of the already encoded coefficients. The thesis studies how this coding context can be used for compression. It is shown that conventional prediction methods and scalar quantization can be used for modeling coefficients, and a new coding algorithm is introduced that predicts the number of significant bits in the coefficients from their context. A general method is given for adaptively modeling the probability distributions of encoded coefficients from a property vector calculated from the coefficient context. This method is based on vector quantization of the property vectors and achieves excellent compression performance. The formation of high quality code books for vector quantization is also studied. Self-adaptive genetic algorithms are used for the optimization of the code books, and a new model for the parallelization of the algorithm is introduced. The model allows efficient distribution of the optimization problem to multiple networked processors and flexible reconfiguration of the network topology.


Acknowledgements

This work has been carried out at the Turku Centre for Computer Science. As well as funding for research, the Centre has provided an inspiring environment to work in. I am grateful for the flexibility of the graduate school, which has made it possible for me to combine research with working at IT Mill Ltd. I would especially like to thank Prof. Timo Jarvi for his support and Prof. Ralf Back for spurring me on with this work.

Above all, I thank my instructor and collaborator Prof. Olli Nevalainen for his support and guidance throughout my studies, from the beginning to the completion of this dissertation. Even at times when most of my time was spent on other projects, he has persistently encouraged me to continue and has offered his help throughout the process. I am also very grateful to Prof. Jukka Teuhola for his comments and advice on writing the introduction to this dissertation, as well as to Prof. Ioan Tabus and Prof. Jussi Parkkinen for their comments and corrections.

I would like to thank my colleagues Antero Jarvi and Juha Kivijarvi for collaboration and many interesting discussions; you have both shown me a good example of how algorithm research should be carried out and of how much attention one should give to details.

Finally, I would like to thank my parents and grandparents for believing in me and in this project, even though I have changed my estimate of the schedule many times over the years.


Contents

1 Introduction
  1.1 Motivation
  1.2 Introduction to transform coding
    1.2.1 Image representation
    1.2.2 Image compression
    1.2.3 Entropy coding
    1.2.4 Image quality metrics
    1.2.5 Standards
  1.3 Outline of the thesis

2 Wavelet transforms
  2.1 Basis for linear expansions
  2.2 Wavelet transform
  2.3 Boundary handling for finite signals
  2.4 Two-dimensional signal decomposition

3 Transform coding
  3.1 Embedded Zerotree Wavelet coding
    3.1.1 Significance maps
    3.1.2 Coding with zerotrees
  3.2 Set partitioning in hierarchical trees
    3.2.1 Coding bitplanes by sorting
    3.2.2 List based sorting algorithm
  3.3 Context based coding
    3.3.1 Context classification
    3.3.2 Vector quantization of the context space
  3.4 Code book generation for vector quantization
    3.4.1 Clustering by k-means
    3.4.2 Genetic algorithms


4 Summary of publications
  4.1 Variable quality image compression system based on SPIHT
  4.2 Distortion limited wavelet image codec
  4.3 Predictive depth coding of wavelet transformed images
  4.4 Clustering context properties of wavelet coefficients in automatic modelling and image coding
  4.5 Clustering by a parallel self-adaptive genetic algorithm
  4.6 Performance comparison

5 Conclusions

Bibliography

Publication reprints

Publication errata


Chapter 1

Introduction

1.1 Motivation

Digital images and video are rapidly replacing most analogue imaging technologies in all phases of imaging: production, transfer, consumption and storage. Even though the capacity of computers to store and transfer data has been growing exponentially, as predicted by Moore's law¹, the need for advanced image compression techniques is still increasing in most imaging applications. The reasons for adopting new compression technology range from using the available computing capacity to enable higher image quality to adding new application-specific features to the compression techniques.

The most obvious reason for using compression when storing and transferring digital images and video is that the storage and bandwidth requirements for the compressed image data may be only a fraction of the requirements for the original content. Image compression has been one of the key technologies that have enabled digital television, distribution of video over the Internet, high resolution digital cameras and digital archiving of medical images in hospitals. The challenges of image and video compression research are not limited to absolute storage and bandwidth savings. Often, modest computational complexity of the coding and decoding may be even more important for practical applications. As image resolution grows, the memory requirements of the compression algorithms may grow in a way that limits the use of compression techniques in embedded applications, such as high resolution printers and scanners. Many application-specific features are also being investigated to make compression algorithms perform some tasks better than a general purpose algorithm would, or to guarantee that the quality of the images matches the required standards.

¹In 1965 Gordon Moore, co-founder of Intel, predicted that the number of transistors per square inch on integrated circuits would double yearly. The currently expected rate of doubling the transistor density is once every 18 months.

From the above it is obvious that research into efficient compression methods is an important issue in all fields that require storage and transfer of still images and video. In the present study we focus on compression of natural still images. The aim is to search for more efficient compression methods as well as to provide more flexibility in the selection of compression parameters. We restrict ourselves to lossy wavelet transform coding techniques [62] due to their excellent compression performance. For other image compression techniques, see [5, 34, 55].

1.2 Introduction to transform coding

This section gives a brief overview of transform coding of natural images. All the steps of the process are discussed, but only in minimal detail. An introduction to wavelet transforms is given in Chapter 2, and techniques for coding the transformed data are discussed in Chapter 3.

1.2.1 Image representation

The spatial representation of a digital image can be seen as a matrix of pixels (picture elements), where each pixel represents the average colour of the image over the area the pixel covers. In most applications, pixels are rectangular and evenly sized in all regions of the image. The spatial resolution of the image is defined as the number of pixels per unit length, for example 150 dots per inch (DPI). The terms image resolution and image size are often used, somewhat erroneously, as synonyms for the dimensions of the image in pixels.

In black and white or grayscale images, pixel values represent the luminance (brightness) of the pixels. Each luminance value is represented using a fixed number of bits, which is often referred to as the luminance resolution; typically 1, 2, 4, 8, 12 or 16 bits per pixel (BPP). In color images, each pixel is represented by a set of values representing the different components of the color system used. Commonly used color systems [22] include RGB, where the values represent the intensity of red, green and blue light; YUV, where the values represent luminance and 2D coordinates on the chrominance (color) plane; and CMYK, a subtractive color model often used in printing. Typical color resolutions include 15, 16, 24 and 36 BPP, where each color component is expressed with 5, 6, 8 or 12 BPP. In image compression systems, the different color components are often compressed separately as different grayscale images, and they can be represented with different spatial resolutions. Because any grayscale image compression algorithm can easily be extended to the compression of color images, this dissertation does not discuss the compression


of color images.

Many signal processing applications require information about the frequency content of the signal, which has led to the development of different frequency representations for the signal. In the spatial domain, the image luminance is given as a function of location, whereas in the frequency domain representation the amplitudes of the different frequency components of the image are given as a function of those frequencies. Thus the discrete image signal in the spatial domain can be represented as a linear combination of all its frequency components. For example, in the discrete Fourier transform [4], a one-dimensional discrete signal x[n] (n ∈ {0, 1, . . . , N − 1}) can be represented as a linear combination of the frequency components defined by the Fourier series:

\[
x[n] = \frac{1}{N} \sum_{k=0}^{N-1} X[k]\, W_N^{-nk}, \tag{1.1}
\]

where X[k] (k ∈ {0, 1, . . . , N − 1}) is the frequency representation of the signal and $W_N = e^{-2\pi\sqrt{-1}/N}$.

To produce the frequency representation of non-continuous signals, such as images, the signal is normally divided spatially into windows of equal size. Each window is then individually transformed to the frequency domain, most often using the fast Fourier transform (FFT) [5] or the discrete cosine transform (DCT) [1, 25]. The typical window size in most DCT-based image and video compression techniques is 8 × 8 pixels. Because each window is processed separately, some image compression algorithms may fail to preserve signal continuity at window borders, leading to visible blocking artifacts.

The windowed frequency representation uses fixed-size spatial regions when analyzing the signal for different frequency components. In natural images, high frequency details tend to be spatially smaller than the low frequency details, which makes the analysis of different frequencies with equally sized filters inefficient. Multi-resolution representations [44, 48] combine some features of the frequency representation and the spatial representation by analyzing spatially smaller regions of the image for high frequency components and larger regions for low frequency components.

The multi-resolution representation is usually formed by analyzing the signal with wavelet functions [23]. Wavelet functions are localized functions that have a value of zero, or very near zero, outside a bounded domain, and an average of zero. There exist many kinds of wavelet functions, but unlike sine waves, most wavelets are irregular and asymmetric.

The representation of a video builds on the image representation: the most trivial representation of a video sequence is simply the concatenation of the individual frame images of the video. Practical video formats may also include multiple audio tracks and subtitles that are synchronized with the video. Video


compression algorithms commonly include features that exploit the similarity of consecutive frames in the video by predicting some of the frames from neighboring frames and storing only the prediction errors [58]. Even in these techniques, some of the frames are stored as separate images to make random access to the video stream possible. Moreover, fragments of the predicted frames or the prediction errors are often stored using conventional still image compression algorithms. The field of video compression can thus be seen as an extension of still image compression. This dissertation does not cover video compression techniques, but we note that the algorithms presented for still image compression can be applied to video compression as well.

1.2.2 Image compression

The basic goal of image compression is to find a representation of an image that uses only a minimal number of bits. This both allows more image data to be stored in a limited storage space and makes it possible to transfer images faster over a channel with limited bandwidth. Compression efficiency can be measured with the compression ratio R = So/Sc, where Sc is the size of the compressed representation of the image (in bits) and So = WHB, where W and H are the width and height of the image in pixels and B is its luminance resolution in bits. An even more widely used metric is bits per pixel (BPP), defined simply as Br = B/R.

Image compression algorithms can be divided into two classes: lossless and lossy. If the compression is completely reversible, it is said to be lossless. If the decompressed image is only an approximation of the original image, the compression is said to be lossy. Lossless image compression techniques generally achieve compression ratios in the range 1-5 on natural images [34, 13], while lossy methods typically achieve several times better compression ratios. For example, the compression ratio for the comic image on page 68 is 1.8 when using lossless GIF [55] compression, while the lossy compression techniques JPEG [51] and SPIHT [54] achieve compression ratios of 8 and 16, respectively, with good image quality.

Most lossy image compression methods are based on some kind of coding of a transformed and quantized (approximated) image representation. Generic compression and decompression processes are illustrated in Figure 1.1. First, the original spatial image is transformed to a frequency or multi-resolution representation with a transform T. Part of the information is lost when the coefficients of the transformed image representation are quantized with a quantizer Q. A probability model of the quantized coefficients is created with a modeling algorithm M. The probability model approximates the frequencies of the coefficients in the coded image in such a way that the model can be coded with as few bits as possible while at the same time being accurate enough to be used in the encoding of the coefficients.


Figure 1.1: Structure of a general image compression algorithm. T represents an image transform, Q a quantizer, M a modeling method and E an entropy coder.

Finally, the coefficients are coded with an entropy coder E using the created model. The purpose of the entropy coder is to code the quantized coefficients using as few bits as possible. Any of the four phases T, Q, M, E can be combined to achieve application-specific features or better compression performance. It is also possible to stream data through all the compression phases to save the memory needed for buffering intermediate results [10].
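As an illustration of this structure only, the following sketch wires four stand-in stages together in the T → Q → M → E order of Figure 1.1; every function here (identity "transform", uniform quantizer, frequency-count model, ideal-code-length "entropy coder") is a placeholder chosen for brevity and is not one of the methods studied in this thesis.

```python
import numpy as np

# Placeholder stages for the generic pipeline of Figure 1.1 (illustrative stand-ins only).
def T(image):                      # transform: identity stand-in for a wavelet transform
    return image.astype(float)

def Q(coeffs, q=0.1):              # quantizer: uniform scalar quantization
    return np.floor(q * coeffs).astype(int)

def M(q_coeffs):                   # model: symbol frequencies of the quantized coefficients
    values, counts = np.unique(q_coeffs, return_counts=True)
    return dict(zip(values.tolist(), (counts / counts.sum()).tolist()))

def E(q_coeffs, model):            # entropy-coder stand-in: report the ideal code length in bits
    return sum(-np.log2(model[v]) for v in q_coeffs.ravel().tolist())

image = np.arange(64).reshape(8, 8)
coeffs = Q(T(image))
bits = E(coeffs, M(coeffs))
print(f"ideal code length: {bits:.1f} bits for {image.size} pixels")
```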

Decoding of the image is done in the reverse order of the coding process. First, the entropy coded bits are decoded back into quantized coefficients using the same model that was used in the encoding phase. The model is either saved as (entropy coded) side information or generated dynamically from the decoded quantized coefficients during decoding. The quantized coefficients are then scaled to represent the original transformed image, and finally an inverse transformation is applied to obtain the decoded spatial domain image. As with encoding, all the phases of the decoding may be combined and the data can be streamed through the phases. In symmetric algorithms, decoding is computationally equivalent to coding; in asymmetric algorithms, the modeling cost required in coding may be considerably higher than in decoding.

The purpose of quantization is to remove unimportant details of the image in such a way that the compression ratio is optimal for the selected image quality. Scalar quantizers are the simplest type of quantizer: all transformed


image coefficients are scaled with some constant q to get the scaled coefficients ⌊qci⌋, where ci are the original coefficients.

Vector quantization is a more general quantization method. There the signal x of length NK is divided into N K-sized vectors xi = [x(iK), x(iK + 1), . . . , x(iK + K − 1)]>, and the quantizer tries to find similar code vectors cj ∈ C to represent each xi. The quantized image can then be represented with just N indexes that identify the code vectors from C. The set C is called the code book. Construction of a code book is a hard problem, which limits the use of vector quantization in some applications. Moreover, both the coder and the decoder must use the same code book; thus, either the code book must be transmitted alongside the code vector indexes or a static code book must be used for all images.
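The two quantizer types can be illustrated with a short sketch; the toy code book and the signals below are arbitrary examples, and the function names are ours.

```python
import numpy as np

def scalar_quantize(c, q):
    """Uniform scalar quantization: floor(q * c) of the transform coefficients."""
    return np.floor(q * np.asarray(c)).astype(int)

def vector_quantize(x, codebook):
    """Map each K-dimensional vector of x to the index of its nearest code vector."""
    x = np.asarray(x, dtype=float)                 # shape (N, K)
    codebook = np.asarray(codebook, dtype=float)   # shape (|C|, K)
    # squared Euclidean distance between every signal vector and every code vector
    d = ((x[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)                        # N code-vector indexes

signal = np.array([3.2, 3.0, -1.1, -0.9, 0.1, 0.0]).reshape(3, 2)  # N = 3 vectors, K = 2
codebook = np.array([[0.0, 0.0], [3.0, 3.0], [-1.0, -1.0]])        # toy code book C
print(scalar_quantize([12.7, -3.4], q=0.5))   # -> [ 6 -2]
print(vector_quantize(signal, codebook))       # -> [1 2 0]
```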

Most transform coding techniques could be turned into lossless methods by skipping the quantization step, provided that the transform itself is lossless. Still, many lossless image compression techniques [63, 29] do not rely on transforms, but code the image directly from its spatial representation. These lossless coding algorithms rely on predictive coding, where the luminance of each image pixel is predicted from values already known to the decoder. This allows both the coder and the decoder to make the same prediction for the pixel. If the prediction is good, coding only the prediction error can provide an efficient coding system.
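As a toy illustration of prediction coding (not one of the referenced algorithms), the sketch below predicts each pixel from its left neighbour and shows that the residuals are small and the scheme is fully reversible.

```python
import numpy as np

def predict_left(pixels):
    """Predict each pixel from its left neighbour and keep only the prediction error.
    Both coder and decoder can form the same prediction, so only residuals need coding."""
    pixels = np.asarray(pixels, dtype=int)
    prediction = np.concatenate([[0], pixels[:-1]])   # decoder already knows everything to the left
    return pixels - prediction                        # residuals, small if the prediction is good

def reconstruct(residuals):
    return np.cumsum(residuals)                       # undo the prediction step exactly

row = np.array([100, 101, 101, 104, 110, 111])
res = predict_left(row)
print(res)                                    # [100   1   0   3   6   1] -- mostly small values
print(np.array_equal(reconstruct(res), row))  # True: prediction coding is lossless
```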

1.2.3 Entropy coding

Entropy coding is a mapping C from an m-sized alphabet X = {xi | 0 ≤ i < m} to a set of unique codewords {ci = C(xi) | 0 ≤ i < m}, with the purpose of minimizing the size of the coded message. If the probability of symbol xi in the alphabet is p(xi), the expected length (in bits) of the message composed of n symbols coded with the mapping C is:

\[
R = n \sum_{i} p(x_i)\, l(C(x_i)), \tag{1.2}
\]

where l(ci) is the length (in bits) of the codeword ci. It is required that the sequence of the codewords is uniquely decodable: the mapping C must be reversible and no codeword is allowed to be a prefix of another codeword. The entropy [15] of an information source defines a lower bound for the message length

\[
H = -n \sum_{i} p(x_i) \log_2(p(x_i)), \tag{1.3}
\]

where p(xi)n is the frequency (number of occurrences) of symbol xi in the message.

Huffman coding [30, 37] provides a simple way of building binary codes with coding efficiency near the optimum defined by the entropy.


Figure 1.2: Huffman coding of the message "ACDCCBCA" (coded as 00101111010100) and the corresponding decision tree for decoding.

The code is built iteratively by selecting the two symbols with the lowest probabilities and assigning them the codes 0 and 1 to select between them. The two symbols are then removed from the alphabet and replaced by a single symbol whose probability equals the sum of the probabilities of the removed symbols. The process is iterated until all symbols have been assigned codes, which happens when the alphabet has been reduced to a single symbol. The result of the process can be visualized with a decision tree that is used in decoding by reading the coded message one bit at a time and following the decisions in the tree, as illustrated in Figure 1.2.
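The iterative pairing procedure is easy to state in code. The following is a small illustrative sketch (not the thesis's implementation) that builds a Huffman code for the message of Figure 1.2 using Python's heapq; the helper name huffman_code is ours.

```python
import heapq, itertools
from collections import Counter

def huffman_code(message):
    """Build a Huffman code by repeatedly merging the two least probable symbol groups."""
    counts = Counter(message)
    tie = itertools.count()          # unique tie-breaker so the heap never compares dicts
    heap = [(n / len(message), next(tie), {s: ""}) for s, n in counts.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)   # least probable group: prepend bit '0'
        p1, _, c1 = heapq.heappop(heap)   # next least probable group: prepend bit '1'
        merged = {s: "0" + c for s, c in c0.items()}
        merged.update({s: "1" + c for s, c in c1.items()})
        heapq.heappush(heap, (p0 + p1, next(tie), merged))
    return heap[0][2]

msg = "ACDCCBCA"
code = huffman_code(msg)
print(code)                                    # individual codes may differ from Figure 1.2
print(sum(len(code[s]) for s in msg), "bits")  # 14 bits for this 8-symbol message
```

For this particular message the symbol probabilities (0.5, 0.25, 0.125, 0.125) are all powers of one half, so the entropy bound of equation (1.3), 1.75 bits per symbol, is met exactly by the 14-bit code.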

One limitation of the Huffman code, and of all similar binary codes, is that at least one bit is used to represent each symbol, which makes the coding of alphabets with very skewed probability distributions inefficient. More generally, only symbols xi for which

\[
\log_2(1/p(x_i)) = \lfloor \log_2(1/p(x_i)) \rfloor \tag{1.4}
\]

can have codes of optimal length in these systems. One way to overcome this limitation is to transform the original alphabet into another one for which equation (1.4) holds better, and to use the new alphabet for generating the Huffman codes. This can be done by combining letters of the original alphabet into longer words and adding the new words as letters of the new alphabet.

Arithmetic coding [65] provides optimal compression performance as defined by the entropy equation (1.3). The idea of arithmetic coding is to represent the whole message with only one codeword so that its probability equals the combined probability of all the symbols in the message. The coding process is illustrated in Figure 1.3.


Figure 1.3: Arithmetic coding process of the message "ACB" into the binary string "10101", with symbol probabilities p(A) = 0.7, p(B) = 0.05 and p(C) = 0.25.

The length of the black region equals the probability of the corresponding message. The message is finally coded as the shortest binary number that lies in the probability range corresponding to the message. For example, in the case of Figure 1.3, the probability of the message "ACB" is 0.7 · 0.25 · 0.05 = 0.00875 and thus the corresponding entropy of the message is 6.837 bits. In the example the message is coded as "10101" using only five bits, but on average the code length equals the entropy. In practical implementations of arithmetic coding, the coder receives the message as a stream of symbols and updates the range for each symbol. In order to store the limits of the range with practical accuracy, the coder re-scales the range every time it has become short enough and sends the bits corresponding to the scalings to the output code stream.
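The interval narrowing at the heart of arithmetic coding can be sketched as follows. The symbol ordering of the cumulative intervals and the function name narrow are assumptions made for this illustration; the sketch reproduces the interval width and ideal code length quoted above rather than the exact codeword of Figure 1.3.

```python
import math

# Probabilities from Figure 1.3; the symbol order used for the cumulative
# intervals below is an assumption made for this sketch.
probs = {"A": 0.7, "B": 0.05, "C": 0.25}

def narrow(interval, symbol):
    """Narrow the current coding interval by the chosen symbol's probability slice."""
    low, high = interval
    width, start = high - low, low
    for s, p in probs.items():              # cumulative walk over the (assumed) symbol order
        if s == symbol:
            return (start, start + p * width)
        start += p * width
    raise ValueError(symbol)

interval = (0.0, 1.0)
for s in "ACB":
    interval = narrow(interval, s)
    print(s, interval)

width = interval[1] - interval[0]
# Width 0.00875 and about 6.84 bits of information content, matching the text above;
# a practical coder would emit only a bit or two more than this for the whole message.
print(f"interval width = {width:.5f}, ideal code length = {-math.log2(width):.2f} bits")
```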

In many applications, the probability distribution of the alphabet varies greatly in different parts of the coded message. For example, in the message "AAAAAAABBBBBBB" the probabilities of A and B are both 0.5, but it might still be possible to code the message with less than one bit per symbol if different probability distributions were allowed in different parts of the message. Because both the coder and the decoder must use the


same probability distribution for the alphabet, sending a distribution for a large alphabet to the decoder may be impractical in some applications. For these reasons, a dynamically varying distribution for the alphabet is often preferred to a static probability model. This distribution is learned adaptively in the course of the coding process. Supporting such a dynamic probability model with Huffman coding is rather impractical, because the coding tables and decoding trees must be rebuilt at each update of the probability model. Arithmetic coding supports the use of a dynamic probability model very well, because the coder and decoder only have to know the cumulative probability of each symbol in the alphabet in order to update the range limits. Speed-optimized versions of arithmetic coding with dynamic probability distributions have been developed based on state automata, where each state represents one distribution. One of the most popular is the binary arithmetic coder named the QM-coder, which is currently used in the JPEG image compression standard [51].

1.2.4 Image quality metrics

In lossy image compression the challenge is to represent the image using a minimal number of bits while at the same time preserving the image quality as well as possible. Measuring the compressed image size is trivial, but there exists no universal quality metric, because the definition of quality depends on the application. In special fields, such as medical imaging, image quality is measured by its impact on the diagnosis based on the image. In most medical applications the compression is required to be diagnostically lossless: no information may be lost that could disturb the diagnosis made from the image. The goal of lossy compression of natural images is often to preserve the overall visual quality of the image as observed by humans. Unfortunately, this definition of quality does not specify the exact viewing conditions for the image, and there exists no widely adopted procedure for easily doing the required quantitative testing [45]. Objective quality measurement is an active field of research, and new methods [53, 38] are being developed for automated quality assessment. Such methods can provide a new basis for the development of more efficient image and video compression methods in the future.

In the image compression research literature the most popular distortion metric is the mean square error (MSE), calculated as:

\[
d_{\mathrm{MSE}}(u, \hat{u}) = \frac{1}{HW} \sum_{x=0}^{W-1} \sum_{y=0}^{H-1} \left[ u(x, y) - \hat{u}(x, y) \right]^2, \tag{1.5}
\]

where the functions u and û represent the original and the decompressed H × W sized images, respectively. The MSE is commonly normalized to the


dynamic range of the image and expressed on a logarithmic scale as the peak signal-to-noise ratio (PSNR), which is defined as follows:

\[
\mathrm{PSNR}(u, \hat{u}) = 10 \log_{10} \frac{(2^{R_{\mathrm{lum}}} - 1)^2}{d_{\mathrm{MSE}}(u, \hat{u})}, \tag{1.6}
\]

where Rlum is the luminance resolution of the image in bits.

More advanced distortion metrics have been developed that model the human visual system better than the simple MSE [67, 8, 9]. When advanced distortion metrics are used as optimization criteria in the design of image compression algorithms, they can have a tremendous impact on the visual quality of the decompressed images. Unfortunately, many advanced distortion metrics are computationally unsuitable for image compression. Furthermore, PSNR is widely used in the image compression literature, and using it as the optimization criterion for coding therefore makes performance comparison of coding algorithms easier.
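For concreteness, the two metrics of equations (1.5) and (1.6) can be computed as in the sketch below; the 8-bit default and the randomly generated test images are arbitrary choices for illustration.

```python
import numpy as np

def mse(u, u_hat):
    """Mean square error of equation (1.5)."""
    u, u_hat = np.asarray(u, dtype=float), np.asarray(u_hat, dtype=float)
    return np.mean((u - u_hat) ** 2)

def psnr(u, u_hat, lum_bits=8):
    """Peak signal-to-noise ratio of equation (1.6) for a lum_bits-bit image."""
    peak = (2 ** lum_bits - 1) ** 2
    return 10.0 * np.log10(peak / mse(u, u_hat))

original = np.random.default_rng(0).integers(0, 256, size=(64, 64))
noisy = np.clip(original + np.random.default_rng(1).normal(0, 5, size=(64, 64)), 0, 255)
print(f"MSE  = {mse(original, noisy):.2f}")
print(f"PSNR = {psnr(original, noisy):.2f} dB")
```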

1.2.5 Standards

Most practical applications of image and video compression techniques are based on widely accepted standards. New standards are usually created in a process where the best available ideas are collected from the literature and adapted to form the basis of the new standard. In the context of image and video compression, the most recognized standardization bodies are the Joint Photographic Experts Group (JPEG), the Joint Bi-level Image experts Group (JBIG) and the Moving Picture Experts Group (MPEG).

The ISO/IEC 10918-1 standard, "Digital compression and coding of continuous-tone still images" [51], commonly called JPEG, is the most widely adopted image compression standard. It is based on the quantization of DCT-transformed images with an 8 × 8 transform block size. The quantized image coefficients are coded using either Huffman coding or the QM-coder. The standard supports color images by compressing the different color components of the YUV color space separately and by scaling down the chrominance components spatially by a factor of two in both dimensions.

The newer standard ISO/IEC 15444-1:2000 developed by JPEG, called the "JPEG 2000 image coding system" [60, 46], is a complete revision of the previous JPEG standard and provides improved compression performance and an extensive list of advanced compression features. The new standard is based on a wavelet image coding algorithm similar to the coding algorithms discussed in Chapter 3. JPEG 2000 is targeted at a wide range of image compression applications, including general still image coding, video coding, variable quality coding, volumetric imaging, document imaging and wireless applications. The core part of the standard was published in March 2002, but other parts of the standard are still under development [24].


1.3 Outline of the thesis

This work is based on the following hypotheses: 1) variable quality coding of wavelet transformed images can provide considerably better compression performance than conventional methods; 2) MSE can be used as a control criterion for wavelet image compression; 3) both scalar and vector quantization of context information derived from the neighbors of a wavelet coefficient can be used for efficient compression; and 4) parallel processing can be used effectively in the generation of code books for vector quantization.

In this thesis we introduce four new algorithms for coding of wavelet transformed natural images. We show how the image quality can be controlled in embedded coding of images and introduce new, effective implementations of zerotree based compression algorithms. We also show that it is possible to achieve good compression performance with a simple context based coefficient prediction method, and how clustering methods can be applied efficiently to context based classification and prediction. In addition, we present a new state-of-the-art clustering algorithm that supports parallel processing.

This thesis is divided into two parts: an introductory summary of the research and reprints of the publications. The idea is to first provide sufficient background information in the introductory part and then present the actual compression algorithms and results in the reprints.

This chapter has given a brief introduction to the basic concepts of transform coding of grayscale images as well as the motivation and outline of the thesis. The next chapter explains the wavelet transforms used in the algorithms presented. The third chapter provides a detailed overview of the transform coding process and discusses the algorithmic techniques used as the basis for the work presented in the reprints. The fourth chapter provides an overview of the publications, and finally the results are summarized in Chapter 5.


Chapter 2

Wavelet transforms

The purpose of this chapter is to give the reader a quick summary of the wavelet transforms used by the algorithms presented in this thesis. The chapter is not a general introduction to the theory behind linear expansions and wavelet functions; instead, the goal is only to summarize the concepts that are necessary for understanding the structure of the transformed image data and the algorithms for coding it. While the material of this chapter has been collected from various sources using diverse notation conventions, the presentation has been unified in order to provide a consistent summary without going into unnecessary details. For more detail and the theory behind the concepts presented, references to the literature are given.

2.1 Basis for linear expansions

This section introduces the basic linear expansion of discrete signals that is needed for computing the wavelet transforms. Furthermore, some properties of the signals and expansions are defined here because they are needed for explaining the wavelet transforms later in this chapter.

The Hilbert space of square-summable sequences l2(Z) ⊂ C∞ is the set of all infinite-dimensional vectors x that satisfy the norm constraint 〈x, x〉 < ∞, where the notation 〈u, v〉 stands for the dot product defined as Σn∈Z u[n]v[n]. A linear expansion [62] of a discrete signal x ∈ l2(Z) can be formulated as

\[
x[n] = \sum_{k \in \mathbb{Z}} X[k]\, \varphi_k[n], \tag{2.1}
\]

where the set of vectors {ϕk | ϕk ∈ l2(Z) ∧ k ∈ Z} is called the basis of the expansion and the vector X contains the expansion coefficients corresponding to the original signal. The basis is said to be complete for the space S if all signals x ∈ S can be expanded as defined by equation (2.1). It can be shown


that for each complete basis there exists a dual set {ϕ̃k | ϕ̃k ∈ l2(Z) ∧ k ∈ Z} that can be used to compute the transformed signal X ∈ l2(Z) as

\[
X[k] = \sum_{n \in \mathbb{Z}} \tilde{\varphi}_k[n]\, x[n]. \tag{2.2}
\]

The basis {ϕk} is said to be orthogonal if 〈ϕi, ϕj〉 = 0 for all i, j (i ≠ j). Furthermore, if the norm √〈ϕk, ϕk〉 = 1 for all k ∈ Z and {ϕk} is an orthogonal basis, it is called orthonormal. An important property of an orthonormal basis is that ϕ̃k = ϕk for all k ∈ Z, and the energy of the signal x is conserved in the transform defined in equation (2.2):

\[
\langle x, x \rangle = \langle X, X \rangle. \tag{2.3}
\]

In the field of signal compression, another commonly used type of basis is the biorthogonal basis, where the set {ϕk} is complete and linearly independent but not orthonormal. A biorthogonal basis and its dual {ϕ̃k} satisfy

\[
\forall i, j \in \mathbb{Z} : \langle \varphi_i, \tilde{\varphi}_j \rangle = \delta[i - j], \tag{2.4}
\]

where the Dirac delta function δ[i] is defined as δ[0] = 1 and δ[i] = 0 for all i ≠ 0. In biorthogonal transforms, the conservation of energy is defined as

\[
\langle x, x \rangle = \langle X, \tilde{X} \rangle, \tag{2.5}
\]

where X[k] = 〈ϕ̃k, x〉 and X̃[k] = 〈ϕk, x〉.

In most signal compression applications, the dimensionality d of the signal vector x ∈ S ⊂ l2(Z) can be large, because the dimension corresponds directly to the number of samples in the signal. If each sample of the signal x is represented with b bits, the size |S| of the space is 2^{bd}. When a transformation is used to map all signals x → X ∈ S̃, and if no information is lost in the transform, |S| ≤ |S̃|. If the number of bits needed to represent each element of the transformed signal X is B, the dimension of the transformed signal is D ≥ log_{2^B}|S̃| = db/B. From equation (2.1) it can be seen that the number of basis vectors ϕk equals D, each having d elements. Because of the large number of basis vectors, the transformation is practical only if the basis vectors ϕk can be easily computed during the transform process.

2.2 Wavelet transform

This section briefly introduces wavelet functions and shows how they are used for constructing transforms for infinite discrete 1D signals. First an example of a simple transform is given, and later it is shown how filter banks can be used for transforming signals iteratively. Finally, the wavelet filters used by the new coding algorithms described in this work are given.


Figure 2.1: Haar expansion of the signal x = (−2, 5, 5, 0, −9, 1)> into its different components X[a, b]ϕa,b (shown for scales a = 1, 2, 4 and translations b = 0, 2, 4, 6).

The complete wavelet transforms can be implemented when the filtering equations and the filter coefficients are combined with the details on boundary handling and the two-dimensional extensions described later in this chapter.

Wavelets [62, 7, 17] are a class of functions that can be used as a basis for a linear expansion with good localization both in space and in scale. A discrete wavelet basis {ϕa,b ∈ l2(Z)} can be constructed from a finite mother wavelet function ϕ ∈ l2(Z) by modulation (a) and translation (b) as

\[
\varphi_{a,b}[n] = \frac{1}{\sqrt{a}}\, \varphi\!\left[\left\lfloor \frac{n - b}{a} \right\rfloor\right]. \tag{2.6}
\]

The Haar wavelet basis can be defined with equation (2.6), where the mother wavelet is

\[
\varphi[n] =
\begin{cases}
\frac{1}{\sqrt{2}} & \text{if } n = 0 \\
-\frac{1}{\sqrt{2}} & \text{if } n = 1 \\
0 & \text{otherwise.}
\end{cases}
\tag{2.7}
\]

The orthonormal basis defined by the mother function ϕ is {ϕ_{2^i, j·2^i} | i, j ∈ Z}. It can be shown that this basis spans the space S ⊂ l2(Z) of signals x ∈ S for which Σn∈Z x[n] = 0. An example of the linear expansion of the signal x = (−2, 5, 5, 0, −9, 1)> is shown in Figures 2.1 and 2.2.


Figure 2.2: Different iterations of the Haar expansion of the signal x = (−2, 5, 5, 0, −9, 1)>, showing the residuals x − Σ_{a≥4, b∈Z} X[a, b]ϕa,b, x − Σ_{a≥2, b∈Z} X[a, b]ϕa,b and x − Σ_{a≥1, b∈Z} X[a, b]ϕa,b. It can be seen how the different frequency components of the signal x are separated by iteratively subtracting them from the signal, starting from the lowest frequencies. Finally, when all the frequency components are summed together, the original signal is perfectly reconstructed, as shown in the last image.


In order to expand the Haar basis to cover any discrete signal x[n] ∈ l2(Z) for which Σn∈Z x[n] is not constrained to be 0, another definition of the Haar basis is used:

\[
\varphi_{2k}[n] =
\begin{cases}
\frac{1}{\sqrt{2}} & \text{if } n \in \{2k, 2k+1\} \\
0 & \text{otherwise}
\end{cases}
\qquad
\varphi_{2k+1}[n] =
\begin{cases}
\frac{1}{\sqrt{2}} & \text{if } n = 2k \\
-\frac{1}{\sqrt{2}} & \text{if } n = 2k+1 \\
0 & \text{otherwise.}
\end{cases}
\tag{2.8}
\]

For this definition of the basis functions, one can define the transform as

\[
\begin{aligned}
X[2k] &= \langle \varphi_{2k}, x \rangle = \tfrac{1}{\sqrt{2}}\,(x[2k] + x[2k+1]) \\
X[2k+1] &= \langle \varphi_{2k+1}, x \rangle = \tfrac{1}{\sqrt{2}}\,(x[2k] - x[2k+1])
\end{aligned}
\tag{2.9}
\]

and the corresponding inverse transform as

\[
x[n] = \sum_{k \in \mathbb{Z}} X[k]\, \varphi_k[n]. \tag{2.10}
\]

The transform (2.9) using the basis (2.8) defined above analyzes the signal x[n] at one scale, whereas the basis defined with equation (2.6) from the mother function (2.7) analyzes the signal at multiple scales. However, the transform (2.9) can easily be extended to multi-resolution analysis by recursively applying the same transform to the low frequency component y0[k] = X[2k] of the transform result. The frequency components are computed iteratively as follows:

\[
\begin{aligned}
y_1^{(i)}[k] &= \langle \varphi_{2k+1},\, y_0^{(i-1)} \rangle \\
y_0^{(i)}[k] &= \langle \varphi_{2k},\, y_0^{(i-1)} \rangle,
\end{aligned}
\tag{2.11}
\]

where y_0^{(0)}[k] = y_0[k] = X[2k] and y_1^{(0)}[k] = y_1[k] = X[2k+1].

The iteration results y_1^{(0)}, y_1^{(1)}, . . . , y_1^{(M)} and y_0^{(M)} are called the bands or the components of the signal decomposition, which analyzes the signal at M + 1 scales. Here y_0^{(M)} is the low-frequency or scaling component and y_1^{(0)}, y_1^{(1)}, . . . , y_1^{(M)} are the high-frequency components of the signal, listed from the highest to the lower frequencies. It should be noted that the components y_0^{(0)}, y_0^{(1)}, . . . , y_0^{(M−1)} need not be stored in signal compression applications, because they can be reconstructed iteratively from the other frequency components. This is done by reconstructing each of the low frequency components y_0^{(i)} from the already known components y_0^{(i+1)} and y_1^{(i+1)} by reversing the iteration step defined by equation (2.11).
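A minimal sketch of the single-scale transform (2.9) and its inverse, applied to the example signal of Figures 2.1 and 2.2; the function names are ours, and repeating haar_analyze on the low band y0 alone gives the multi-resolution analysis of equation (2.11).

```python
import numpy as np

SQRT2 = np.sqrt(2.0)

def haar_analyze(x):
    """One analysis step of equation (2.9): split x into a low band y0 and a high band y1."""
    x = np.asarray(x, dtype=float)
    y0 = (x[0::2] + x[1::2]) / SQRT2      # X[2k]   = (x[2k] + x[2k+1]) / sqrt(2)
    y1 = (x[0::2] - x[1::2]) / SQRT2      # X[2k+1] = (x[2k] - x[2k+1]) / sqrt(2)
    return y0, y1

def haar_synthesize(y0, y1):
    """Invert equation (2.9): reconstruct x from the two bands."""
    x = np.empty(2 * len(y0))
    x[0::2] = (y0 + y1) / SQRT2
    x[1::2] = (y0 - y1) / SQRT2
    return x

x = np.array([-2.0, 5.0, 5.0, 0.0, -9.0, 1.0])   # example signal of Figures 2.1 and 2.2
y0, y1 = haar_analyze(x)
print("low band :", np.round(y0, 3))
print("high band:", np.round(y1, 3))
print("reconstruction ok:", np.allclose(haar_synthesize(y0, y1), x))
# Multi-resolution analysis (2.11) would simply repeat haar_analyze on y0.
```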

An implementation of the transform can be created with filter banks. The idea is to use a low-pass filter h0[n] and a high-pass filter h1[n] for computing the


transform result X[k] as a convolution of the analysis filters with the original signal x[n]:

\[
\begin{aligned}
y_0[k] = X[2k] &= (h_0 * x)[2k] = \sum_{l \in \mathbb{Z}} h_0[2k - l]\, x[l] \\
y_1[k] = X[2k+1] &= (h_1 * x)[2k] = \sum_{l \in \mathbb{Z}} h_1[2k - l]\, x[l].
\end{aligned}
\tag{2.12}
\]

The analysis filters can be formed from the basis functions by time reversal:

\[
h_0[n] = \varphi_0[-n], \qquad h_1[n] = \varphi_1[-n]. \tag{2.13}
\]

For the transform defined by equation (2.9), the analysis filters are

\[
h_0[n] =
\begin{cases}
\frac{1}{\sqrt{2}} & \text{if } n \in \{-1, 0\} \\
0 & \text{otherwise}
\end{cases}
\qquad
h_1[n] =
\begin{cases}
\frac{1}{\sqrt{2}} & \text{if } n = 0 \\
-\frac{1}{\sqrt{2}} & \text{if } n = -1 \\
0 & \text{otherwise.}
\end{cases}
\tag{2.14}
\]

Applying these filters to equation (2.12) leads back to the earlier definition (2.9) of the transform.

One of the most frequently used orthonormal bases for multi-resolution analysis is defined by the Daubechies family of orthonormal wavelet mother functions [62]. The filters used to implement the discrete wavelet transform for this family are constructed by iteratively solving the roots of a polynomial function that defines the filters [16, 18]. The length of the filtering functions depends on the number of iterations used. For example, the coefficients of the Daubechies low-pass filters h_0^{(D4)} and h_0^{(D6)} of lengths 4 and 6, respectively, are

\[
h_0^{(D4)} =
\begin{pmatrix}
0.48296291 \\ 0.8365163 \\ 0.22414386 \\ -0.129409522
\end{pmatrix}
\tag{2.15}
\]

and

\[
h_0^{(D6)} =
\begin{pmatrix}
0.33267 \\ 0.806891 \\ 0.459877 \\ -0.135011 \\ -0.08544 \\ 0.03522
\end{pmatrix}.
\tag{2.16}
\]

The corresponding high-pass filters for an orthonormal Daubechies filter bank can be computed as

\[
h_1[n] = (-1)^n\, h_0[-n + L - 1], \tag{2.17}
\]


where L is the length of the low-pass filter. The wavelet transform can then be computed by recursive convolutions as shown in equations (2.11) and (2.12).
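The following sketch builds the D4 high-pass filter with equation (2.17) and performs one analysis step in the spirit of equation (2.12). Circular extension is used here only to keep the example self-contained (boundary handling is discussed in Section 2.3), and the final check relies on the energy conservation property (2.3) of an orthonormal filter bank.

```python
import numpy as np

h0_D4 = np.array([0.48296291, 0.8365163, 0.22414386, -0.129409522])   # equation (2.15)

# Equation (2.17): h1[n] = (-1)^n * h0[-n + L - 1], i.e. reverse h0 and alternate the signs.
L = len(h0_D4)
h1_D4 = np.array([(-1) ** n * h0_D4[L - 1 - n] for n in range(L)])

def analyze(x, h0, h1):
    """One level of (2.12): circular convolution with h0/h1, keeping every second sample."""
    N = len(x)
    def band(h):
        return np.array([sum(h[l] * x[(k - l) % N] for l in range(len(h)))
                         for k in range(0, N, 2)])
    return band(h0), band(h1)   # low band, high band

x = np.arange(8, dtype=float)
low, high = analyze(x, h0_D4, h1_D4)
print("low :", np.round(low, 4))
print("high:", np.round(high, 4))
# Energy conservation of the orthonormal transform, equation (2.3):
print(np.isclose((low ** 2).sum() + (high ** 2).sum(), (x ** 2).sum()))
```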

A popular basis for wavelet transforms used in lossy compression applications is often called simply B9/7. The name refers to a biorthogonal Daubechies wavelet basis in which the lengths of the low- and high-pass analysis filters h_0^{(B97)} and h_1^{(B97)} are 9 and 7, respectively. These filters provide a very efficient representation of natural images while being relatively short. Both filters are symmetric, so they can be defined by means of the one-sided filters

\[
h_0'^{(B97)} =
\begin{pmatrix}
0.852699 \\ 0.377403 \\ -0.110624 \\ -0.023849 \\ 0.037829
\end{pmatrix}
\tag{2.18}
\]

and

\[
h_1'^{(B97)} =
\begin{pmatrix}
0.788485 \\ -0.418092 \\ -0.040690 \\ 0.064539 \\ 0
\end{pmatrix},
\tag{2.19}
\]

where ∀n ∈ {−4, −3, . . . , 3, 4} : h_i^{(B97)}[n] = h_i'^{(B97)}[|n|].

Because of the biorthogonality, the synthesis filters g_0^{(B97)} and g_1^{(B97)}, which are used to compute the inverse transform, can be calculated from the analysis filters as

\[
\begin{aligned}
g_0^{(B97)}[n] &= (-1)^n\, h_1^{(B97)}[n] \\
g_1^{(B97)}[n] &= (-1)^n\, h_0^{(B97)}[n].
\end{aligned}
\tag{2.20}
\]

2.3 Boundary handling for finite signals

The transforms above cover only the case where the discrete signal x belongs to the infinite-dimensional space l2(Z). If the impulse responses of the filters used to implement the recursive transform are short, a finite-dimensional signal x′ ∈ R^N can be transformed by constructing an infinite-dimensional signal x from x′ with zero padding:

\[
x[n] =
\begin{cases}
x'[n] & \text{if } 0 \le n < N \\
0 & \text{otherwise.}
\end{cases}
\tag{2.21}
\]

The problem with zero-padded borders is that when the filter length is longer than 2, the transformed signal X may contain non-zero elements outside the original index range (0, N − 1) of x′. This signal growth happens on each iteration


in recursive transforms. As the purpose of signal compression is to represent the signal with as few bits as possible, it is not feasible to store the additional elements of the transformed signal X that are required for the reconstruction of the original signal.

A strategy for constructing a transform where the length of the signal does not grow is called circular convolution. There the transform is done with an infinite-dimensional signal x formed from the finite-dimensional signal x′ as

\[
x[n] = x'[n \bmod N]. \tag{2.22}
\]

The transform results in a periodic transformed signal X with period N. A weakness of circular border handling is that it may introduce a discontinuity at the signal border. This should be avoided in signal compression applications, because signal discontinuities appear as large coefficients in the high-frequency bands of the transformed signal.

If the filter is symmetric, it is possible to extend the original signal x′ with a symmetric extension [6]. The symmetric extension of x′ is calculated by mirroring the signal at the borders to construct the corresponding infinite signal x. The details of the mirroring depend on the lengths of the filters used, the length of the signal x′, and whether we are considering the analysis or the synthesis filter bank. Mirroring does not introduce harmful discontinuities into the signal and is thus the preferred extension for signal compression applications.

Mirroring of the signal borders can be done either as a whole point extension or as a half point extension [10]. The whole point extension of a signal x′ ∈ R^N is defined as

\[
x[n] =
\begin{cases}
x[-n] & \text{if } n < 0 \\
x'[n] & \text{if } 0 \le n < N \\
x[2N - n - 2] & \text{if } n \ge N
\end{cases}
\tag{2.23}
\]

and the half point extension is defined as

\[
x[n] =
\begin{cases}
x[-n - 1] & \text{if } n < 0 \\
x'[n] & \text{if } 0 \le n < N \\
x[2N - n - 1] & \text{if } n \ge N.
\end{cases}
\tag{2.24}
\]

The difference between the whole and half point extensions is whether or not the coefficient at the border is duplicated when the signal extension is created by mirroring.
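A small sketch of the two extensions of equations (2.23) and (2.24), assuming the padding length pad is at most N − 1; the function names are ours.

```python
import numpy as np

def whole_point_extend(x, pad):
    """Whole point extension, equation (2.23): mirror around the border sample
    without duplicating it, e.g. [a b c d] -> c b | a b c d | c b."""
    x = np.asarray(x)
    N = len(x)
    left  = x[1:pad + 1][::-1]            # x[-n]  = x[n]          for n = 1..pad
    right = x[N - pad - 1:N - 1][::-1]    # x[N+m] = x[N - 2 - m]  for m = 0..pad-1
    return np.concatenate([left, x, right])

def half_point_extend(x, pad):
    """Half point extension, equation (2.24): mirror and duplicate the border sample,
    e.g. [a b c d] -> b a | a b c d | d c."""
    x = np.asarray(x)
    N = len(x)
    left  = x[:pad][::-1]                 # x[n < 0]  = x[-n - 1]
    right = x[N - pad:][::-1]             # x[n >= N] = x[2N - n - 1]
    return np.concatenate([left, x, right])

x = np.array([1, 2, 3, 4])
print(whole_point_extend(x, 2))   # [3 2 1 2 3 4 3 2]
print(half_point_extend(x, 2))    # [2 1 1 2 3 4 4 3]
```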

In the common case of the B9/7 wavelet transform with an even-length signal [3], the whole point extension is used for the analysis, as well as for the synthesis of the left border in low pass filtering and for the synthesis of the right border in high pass filtering. The half point extension is used for the synthesis of the right border in low pass filtering and for the synthesis of the left border in high pass filtering.


2.4 Two-dimensional signal decomposition

One-dimensional discrete wavelet transforms can be extended directly to two-dimensional discrete signals by using matrices instead of vectors for the signal representation [39]. For reasons of implementation efficiency and complexity, the most popular approach for constructing a multi-resolution representation of an image is to apply one-dimensional wavelet filtering to the columns and the rows of the image.

One of the most frequently used two-dimensional signal decompositions is the dyadic split decomposition [62], which is also called the Mallat decomposition or the octave-band decomposition. The dyadic split decomposition is very effective for natural images and is the one used in the coding algorithms presented in this work. Another popular two-dimensional signal decomposition used with natural images is the wavelet packet decomposition, which can be seen as a generalization of the dyadic split decomposition [14, 57].

The dyadic split decomposition with L levels can be computed for an image z ∈ R^{W×H} with the following algorithm (a code sketch of the procedure is given after the listing):

• For all l ∈ {0, . . . , L − 1}
  – For all r ∈ {0, . . . , H/2^l − 1}
    ∗ Transform x = [z_{0,r}, . . . , z_{W/2^l−1,r}]> → {y0, y1} as defined in equation (2.12).
    ∗ For all i ∈ {0, . . . , W/2^{l+1} − 1}:
      · z_{i,r} ← y0[i]
      · z_{i+W/2^{l+1},r} ← y1[i]
  – For all c ∈ {0, . . . , W/2^l − 1}
    ∗ Transform x = [z_{c,0}, . . . , z_{c,H/2^l−1}]> → {y0, y1} as defined in equation (2.12).
    ∗ For all i ∈ {0, . . . , H/2^{l+1} − 1}:
      · z_{c,i} ← y0[i]
      · z_{c,i+H/2^{l+1}} ← y1[i],

where L is the number of iteration levels used.
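The sketch below is a compact rendering of the listing above, using the Haar step of equation (2.9) as the 1D transform purely to keep the example short (the thesis uses longer filters such as B9/7); array indexing is [row, column], and the function names are ours.

```python
import numpy as np

def haar_step(x):
    """One 1D analysis step with the Haar filters of equation (2.14)."""
    x = np.asarray(x, dtype=float)
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def dyadic_split(z, levels):
    """Dyadic (Mallat / octave-band) split of an image, rows first, then columns,
    re-applying the 1D transform to the shrinking low-frequency corner."""
    z = np.asarray(z, dtype=float).copy()
    H, W = z.shape
    for l in range(levels):
        w, h = W // 2 ** l, H // 2 ** l
        for r in range(h):                       # transform the rows of the current low band
            y0, y1 = haar_step(z[r, :w])
            z[r, :w // 2], z[r, w // 2:w] = y0, y1
        for c in range(w):                       # then its columns
            y0, y1 = haar_step(z[:h, c])
            z[:h // 2, c], z[h // 2:h, c] = y0, y1
    return z

image = np.random.default_rng(0).integers(0, 256, size=(8, 8)).astype(float)
coeffs = dyadic_split(image, levels=2)
print(np.round(coeffs, 1))
```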

The goal of the dyadic split decomposition is to separate the image into bands that analyze the image at different resolutions and have minimal correlation. The organization of the bands is illustrated in Figure 2.3. The bands can be divided into three orientation pyramids A, B and C, storing the vertical, horizontal and diagonal high frequency details of the image, respectively. The bottom levels of the pyramids store the highest frequencies, and each upper level stores details with half the frequency of the level immediately below it. In this way, the wavelet coefficients stored at the same spatial location of the different orientation pyramids describe the same spatial location in the original image.


Figure 2.3: Illustration of the dyadic split decomposition. The different orientation components A, B and C are enumerated from the highest to the lowest frequency components. The low frequency component is denoted by S.

In natural images, the discrete signal represents samples of a continuous and fairly smooth light intensity function, and thus the energy of the high frequency details is generally much lower than that of the low frequency details. Because of this and the recursive nature of the decomposition, the magnitudes of the wavelet coefficients on the lower pyramid levels are generally considerably smaller than on the higher levels.


Figure 2.4: Example of the dyadic split decomposition. The image on the left is transformed with a 4-level dyadic split decomposition using an orthonormal Daubechies wavelet filter [62]. For illustrative purposes, the transformation result on the right is presented on a logarithmic scale.


Chapter 3

Transform coding

This chapter introduces the transform coding concepts on which the algorithms presented in this thesis are built. The concepts are summarized at a level of detail that gives only sufficient background knowledge for reading the thesis; for more details and for the results of the algorithms, the reader is referred to the original publications.

First, the principles of the embedded zerotree wavelet (EZW) coding introduced by J. M. Shapiro are discussed. Then the SPIHT sorting algorithm by A. Said and W. Pearlman, which builds on EZW, is introduced. These two algorithms are the basis for the first two of our coding algorithms introduced later in this work. The latter two of our coding algorithms utilize coefficient coding contexts and prediction. A related context based coding algorithm by C. Chrysafis is described in order to help the understanding of these techniques. Furthermore, the basics of the scalar and vector quantization used for the prediction of the probability distribution are introduced.

Finally, optimization techniques necessary for generating code books usedin vector quantization are briefly discussed. These techniques are used by ourfourth coding algorithm. They also lay the basis for our clustering algorithmdescribed in the final part of this work.

3.1 Embedded Zerotree Wavelet coding

The embedded zerotree wavelet (EZW) algorithm is an efficient image coder introduced by J. M. Shapiro [56]. It exploits the self-similarity of the bands in the octave-band decomposition on different scales and codes the wavelet coefficients as series of successive approximations using arithmetic coding.

EZW produces an embedded bit stream, which is a bit stream where the bits are ordered by their impact on the MSE of the decoded image. This guarantees that any prefix of the resulting bit stream can be decoded into the best possible


approximation of the original image with a given number of bits. This property makes it possible to meet the bit-rate or quality constraints exactly and to progressively view the image with increasing quality when receiving the coded image over a slow connection.

3.1.1 Significance maps

When coding the scalar quantized wavelet coefficients, estimating whether each coefficient is zero or non-zero is one of the problems with large significance to the resulting bit-rate. This is emphasized when the target coding rate is less than 1 BPP, because the value of most coefficients must then be 0. EZW focuses on this problem: the wavelet coefficient matrix is coded as a sequence of significance maps for different scalar quantizations. In other words, the non-quantized absolute values of the coefficients cj in the octave-band decomposition are compared against a set of thresholds {Ti | i ∈ N}, where

Ti = T0 / 2^i    (3.1)

and T0 is chosen so that for all cj : |cj| < 2T0.

When progressively sending better approximations of the coefficients in the octave-band decomposition, each coefficient can be in two different states: the insignificant state and the significant state. All coefficients cj with |cj| < Ti are defined to be insignificant for the current threshold Ti and all the others significant. Because Ti > Ti+1, the state of a coefficient can change only from the insignificant to the significant state. When sending significance maps for T0, T1, . . . sequentially, only the state changes for coefficients which have been insignificant in the previously sent map must be coded.
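As an illustration (not part of the thesis), the following minimal Python sketch shows one way the thresholds and the per-pass significance maps could be produced; the function names and the use of NumPy are assumptions made here for readability.

    import numpy as np

    def initial_threshold(coeffs):
        # Choose T0 = 2**k so that |c| < 2*T0 holds for every coefficient c.
        max_abs = np.abs(coeffs).max()
        k = int(np.floor(np.log2(max_abs))) if max_abs > 0 else 0
        return 2.0 ** k

    def significance_maps(coeffs, passes):
        # Yield (T_i, map_i) pairs, where map_i flags the coefficients that
        # first become significant at T_i = T0 / 2**i (equation (3.1)).
        # Only these state changes need to be coded.
        T = initial_threshold(coeffs)
        already_significant = np.zeros(coeffs.shape, dtype=bool)
        for _ in range(passes):
            newly = (np.abs(coeffs) >= T) & ~already_significant
            yield T, newly
            already_significant |= newly
            T /= 2.0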

If the coefficient values are evenly distributed, the best approximation a(cj) of the coefficient cj is

a(cj) = s(cj) Σ_{i=0}^{N} bi(|cj|) Ti    (3.2)

where the sign s(cj) of cj is defined as

s(cj) =
  1    if cj > 0
  −1   if cj < 0
  0    if the sign is unknown or cj = 0    (3.3)

and bi(cj) is the significance for the unknown part of cj after i comparisons:

bi(cj) =
  1     if Ti ≤ |cj| − Σ_{k=0}^{i−1} bk(|cj|) Tk
  0     if Ti > |cj| − Σ_{k=0}^{i−1} bk(|cj|) Tk
  0.5   if the threshold comparison result is unknown.    (3.4)


By combining the above definitions of bi(cj) and s(cj), an embedded bit stream can be produced by sending bi(cj) for each i and each j, and also sending s(cj) after the first 1-bit sent for the corresponding coefficient cj. The decoder can calculate an approximation of the octave-band decomposition coefficient for any prefix of such a bit stream by using the definition of a(cj) above.
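A small decoder-side sketch of this reconstruction, written in Python for illustration only (not the implementation used in the thesis), applies equations (3.1)–(3.4) to a prefix of the bit stream:

    def decode_approximation(sign, received_bits, T0, total_passes):
        # Approximate a coefficient from its sign and a prefix of its
        # significance bits (equations (3.2)-(3.4)).  Bits not yet received
        # count as 0.5 once the coefficient has become significant; the sign
        # of a still-insignificant coefficient is treated as 0.
        approx = 0.0
        significant = False
        for i in range(total_passes):
            T_i = T0 / 2.0 ** i                  # equation (3.1)
            if i < len(received_bits):
                b = received_bits[i]
                significant = significant or b == 1
            else:
                b = 0.5 if significant else 0.0  # unknown refinement bit
            approx += b * T_i
        return (sign if significant else 0) * approx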

3.1.2 Coding with zerotrees

In natural images, the absolute values of the coefficients of the octave-band decomposition at high frequency bands tend to be smaller than the coefficients on the lower frequency bands. More specifically, a coefficient on a band in the octave-band decomposition is likely to have a larger absolute value than the four coefficients directly "below" it in the same orientation, see the pyramid decomposition in Figure 2.3.

EZW exploits this phenomenon by defining that, for a given threshold Ti, a coefficient cj is a root of a zerotree if and only if

∀cl ∈ Zj : |cl| < Ti, (3.5)

where Zj is the set of all coefficients below cj in the pyramid decomposition, including cj itself. All coefficients in Zj thus have the same orientation and are in the same spatial location as cj on the bands corresponding to the same or higher frequencies than the band where cj resides. With this definition it is possible to code the insignificance state of all the coefficients in the set Zj with a single bit by marking cj to be a root of the zerotree.

The EZW coding algorithm combines the idea of marking the zerotrees with the embedded bit stream defined in Section 3.1.1. The algorithm can be summarized as follows (a small sketch of the per-coefficient classification follows the list):

• Set the initial threshold T0 by finding the smallest k for which all cj : |cj| < 2^(k+1), and set T0 = 2^k.

• For all i ∈ {0, . . . , k}

– For each coefficient cj for which the significance status is not known, encode the status with arithmetic coding as one of the following:
∗ Top of a zerotree (coding for all the other coefficients in Zj can be omitted for this i);
∗ Insignificant, but not the top of a zerotree;
∗ Significant with a positive sign;
∗ Significant with a negative sign.

– For each significant coefficient cj with |cj| ≥ Ti−1, code bi(cj) with arithmetic coding. For significant coefficients cj with |cj| < Ti−1, bi(cj) = 1 and thus coding of bi(cj) can be omitted.
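For illustration, a minimal Python sketch of the per-coefficient classification in one pass is given below. It assumes that the maximum absolute value of the coefficients below cj in the orientation pyramid is available (the real coder obtains it from the tree structure); the symbol names are made up here.

    def classify(c, subtree_abs_max, T):
        # Classify one not-yet-significant coefficient for threshold T into
        # the four symbols listed above.  subtree_abs_max is max(|cl|) over
        # all coefficients below c in the same orientation pyramid.
        if abs(c) >= T:
            return "SIGNIFICANT_POSITIVE" if c > 0 else "SIGNIFICANT_NEGATIVE"
        if subtree_abs_max < T:
            return "ZEROTREE_ROOT"     # whole subtree can be skipped for this pass
        return "ISOLATED_ZERO"         # insignificant, but not a zerotree root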


3.2 Set partitioning in hierarchical trees

The set partitioning in hierarchical trees (SPIHT) algorithm presented by A. Said and W. Pearlman [54] is an extension and a different implementation of the ideas introduced in EZW. The SPIHT algorithm produces an embedded bit stream by storing significance maps and successive approximation information so compactly that, without entropy coding, it achieves the same MSE as EZW with entropy coding at the same BPP. By entropy coding the resulting bit stream, the compression efficiency is improved further.

3.2.1 Coding bitplanes by sorting

If the thresholds Ti are selected to be powers of two in the method of sequentially submitting better approximations of all the coefficients described in Section 3.1.1, the method equals sending the bit-planes of the matrix of quantized absolute coefficient values. Because the thresholds are computed from the initial threshold, this is achieved by selecting T0 so that there exists m ∈ N : T0 = 2^m.

When coding natural images, most bits sent by a bit-plane coding algorithm are insignificant zero-bits in front of the first one-bit that occurs in the coefficients. If the location and the number of significant bits for each significant coefficient are known, only the small number of significant bits (s2, s3, . . .) after the first one-bit (s1 = 1) in the significant coefficients, along with their signs, must be coded. A good approximation of the coefficients would then be zero for the insignificant coefficients, and the other coefficients could be calculated as

s (2^(M−1) + s2·2^(M−2) + . . . + s_(M−m)·2^m + 0·2^(m−1) + 2^(m−2) + 2^(m−3) + . . . + 2^0),    (3.6)

where s ∈ {−1, 1} is the sign, M is the number of significant bits, and T = 2^m is the threshold. The sub-threshold part T(|c|/T − ⌊|c|/T⌋) of the coefficient absolute value |c| is approximated to be ⌊T · 0.01111...₂⌋ instead of the more intuitive alternative ⌊T/2⌋, because the distribution of the coefficient values is centered around zero.
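A minimal Python sketch of this reconstruction rule (illustrative only, not the thesis implementation): given the sign, the known significant bits s1, s2, ... (s1 = 1) and the quantizer exponent m with T = 2^m, the unknown sub-threshold part is filled with the pattern 0.0111...₂.

    def reconstruct(sign, bits, m):
        # bits = [s1, s2, ...] are the known significant bits of |c|, s1 == 1;
        # M = m + len(bits) is the total number of significant bits.
        M = m + len(bits)
        value = 0
        for i, b in enumerate(bits):          # known top bits of |c|
            value += b << (M - 1 - i)
        # sub-threshold part: 0*2**(m-1) + 2**(m-2) + ... + 2**0 = 2**(m-1) - 1
        value += (1 << max(m - 1, 0)) - 1
        return sign * value

    # e.g. reconstruct(-1, [1, 0, 1], 3) == -(32 + 8 + 3) == -43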

Both the EZW and the SPIHT work basically in the same manner. They send the coefficient magnitudes, as measured by the number of their significant bits, by sorting the coefficients into the order of their magnitudes and sending the results of the comparisons done in the sorting. If the decoder knows the number of coefficients with each magnitude, it can reconstruct the coefficient magnitudes by reversing the sorting process using the received comparison results. An example of the coding of bit-planes by sorting is illustrated in Figure 3.1.


[Figure 3.1 here: example coefficient matrices shown as bit-planes with signs, the quantizer, the sorting algorithm producing the sorting decisions and refinement bits, and the rebuilt matrices; see the caption below.]

Figure 3.1: An example of coding bit-planes by sending sorting decisions. The top-left matrix represents the bits of the original wavelet coefficients stored as absolute values with separate signs. Insignificant digits are grayed. The sorting information is created by sorting the coefficients by their magnitudes. One possible sorting result is illustrated in the bottom-left matrix. The magnitude comparisons made by the sorting algorithm, together with the number of coefficients of each magnitude, are used to reconstruct an approximation of the original coefficients, as illustrated in the top-right matrix. The sorting information for a given quantization level contains the information on where the coefficients significant on that level are located. Finally, the significant signs and the unknown significant bits marked with question marks are added to the approximation, as illustrated by the bottom-right matrix.


3.2.2 List based sorting algorithm

The SPIHT sorting algorithm defines four sets of coefficients (a small illustrative sketch of the tree-structured sets follows the list):

• H contains all the coefficients on the low frequency band of the octave-band decomposition: this is the same as all the coefficients of the band S in Figure 2.3.

• Oi,j is the set of four coefficients below the coefficient in the coordinates (i, j) on the coefficient matrix:

Oi,j = {c2i,2j , c2i,2j+1, c2i+1,2j , c2i+1,2j+1}.

• Di,j is the set of all coefficients below the coefficient in the coordinates (i, j) on the coefficient matrix:

Di,j = Oi,j ∪ {ci′,j′ | ∃i′′, j′′ : ci′′,j′′ ∈ Di,j ∧ ci′,j′ ∈ Oi′′,j′′}.

• Li,j = Di,j \ Oi,j .
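The tree sets can be illustrated with the following small Python sketch (an assumption of this presentation, not code from the thesis); the special treatment of the lowest frequency band in the actual SPIHT trees is omitted.

    def offspring(i, j):
        # O(i,j): the four coefficients directly below (i, j).
        return [(2 * i, 2 * j), (2 * i, 2 * j + 1),
                (2 * i + 1, 2 * j), (2 * i + 1, 2 * j + 1)]

    def descendants(i, j, height, width):
        # D(i,j): the offspring and, recursively, their descendants,
        # bounded by the size of the coefficient matrix.
        result, stack = [], [(i, j)]
        while stack:
            a, b = stack.pop()
            for child in offspring(a, b):
                if child[0] < height and child[1] < width:
                    result.append(child)
                    stack.append(child)
        return result

    def grand_descendants(i, j, height, width):
        # L(i,j) = D(i,j) \ O(i,j).
        o = set(offspring(i, j))
        return [p for p in descendants(i, j, height, width) if p not in o]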

The magnitude sorting algorithm in the SPIHT is based on maintaining three lists of coefficients: the list of insignificant sets (LIS), the list of insignificant points (LIP) and the list of significant points (LSP). The algorithm traverses through the coefficient matrix comparing the significance of the coefficients to the thresholds {T0, T1, . . .}. After going through the matrix for each threshold, refinement details (signs and successive approximation bits) are sent for each coefficient in the LSP list.

The SPIHT algorithm can be summarized as follows:

• Select m for which all cj : |cj| < T0 = 2^m.

• Initialize lists: LSP ← ∅, LIP ← H, LIS ← {Di,j : ∀ci,j ∈ H}.

• For all i ∈ {0, . . . , m − 1}

– For all ci,j ∈ LIP : Send the result of |ci,j| ≥ Ti. If |ci,j| ≥ Ti, move ci,j from LIP to LSP and send the sign of ci,j.

– For all Di,j ∈ LIS:

∗ Send the result of the test (∀ci′,j′ ∈ Di,j : |ci′,j′| < Ti). If the result is false:
· For all ci′,j′ ∈ Oi,j : Send the result of |ci′,j′| ≥ Ti. If the result is true, send the sign of ci′,j′ and add the coefficient to LSP; otherwise, add it to LIP.
· Remove Di,j from LIS.


· If Li,j ≠ ∅, add Li,j to LIS.
– For all Li,j ∈ LIS:
∗ Send the result of the test (∀ci′,j′ ∈ Li,j : |ci′,j′| < Ti). If the result is false:
· ∀ci′,j′ ∈ Oi,j : Add Di′,j′ to LIS.
· Remove Li,j from LIS.

– For all ci,j ∈ LSP : if |ci,j| ≥ Ti−1, send the result of the comparison (a small helper sketch follows the algorithm):

|ci,j| − Ti−1 ⌊|ci,j|/Ti−1⌋ ≥ Ti.
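The two comparisons that drive the pass can be written out as follows (an illustrative Python sketch with assumed helper names, not the thesis code):

    def set_is_insignificant(values, T):
        # The test sent for a set (D or L): true when every |coefficient|
        # in the set is below the current threshold T.
        return all(abs(v) < T for v in values)

    def refinement_bit(c, T_prev, T):
        # Refinement comparison for a coefficient already in LSP:
        # |c| - T_prev * floor(|c| / T_prev) >= T.  With T = T_prev / 2
        # this is simply the next bit of |c|.
        remainder = abs(c) - T_prev * (abs(c) // T_prev)
        return 1 if remainder >= T else 0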

3.3 Context based coding

The idea of context based coding is to use already coded information, available both in the decoder and the encoder, for predicting the symbols that must be coded next. The use of context information can be applied to a wide range of coding applications and it has been used both in lossless image coding as well as in lossy coding algorithms.

Context based prediction can be used for predicting the values to be coded, the distribution of the values, or both of these. If the values are predicted directly, the prediction results can be subtracted from the originals and the difference can then be coded. This gives a coding scheme which is commonly referred to as prediction coding. The context can also be used for classifying the values into different classes with similar properties. If the values in a class are statistically similar, the classes can be coded separately with an arithmetic coder using one statistical model for each class.
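As a small illustration of prediction coding (not from the thesis), a causal predictor agreed on by the coder and the decoder can be applied as follows; the previous-value predictor used in the example is only one possible choice.

    def prediction_residuals(values, predict):
        # Replace each value by its difference to a prediction computed
        # from the already coded values; the decoder recovers v = p + r.
        residuals, decoded = [], []
        for v in values:
            p = predict(decoded)      # uses only already coded values
            residuals.append(v - p)
            decoded.append(v)
        return residuals

    # example with a previous-value predictor:
    # prediction_residuals([5, 7, 6, 6], lambda past: past[-1] if past else 0)
    # -> [5, 2, -1, 0]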

An efficient implementation of context based wavelet coding was introduced by C. Chrysafis and A. Ortega [11, 10], referred to here as C/B. The basic idea of C/B is to quantize the octave-band decomposition using a scalar quantizer and then classify each coefficient to be coded using information from the surrounding already coded coefficients. The coefficients are coded with an arithmetic coder using the adaptive probability model selected by the classification for each coefficient. The compression performance of C/B is very good and in many cases better than that of SPIHT and EZW.

3.3.1 Context classification

Context based wavelet coefficient classification is a function Ci → k, where k denotes the index of the coefficient class as defined by the context Ci. The context Ci for coding the i:th coefficient ci consists of all the previously coded coefficients {c0, c1, . . . , ci−1} that are already known by the decoder. This classification Ci → k can be used for selecting the probability distribution that is used for entropy coding the coefficient ci.


[Figure 3.2 here: the already coded neighboring positions, enumerated 0–12, around the coefficient marked with '?'; the annotated position is the coefficient on the previous pyramid level in the same position as the coefficient marked with '?'.]

Figure 3.2: The context used by the C/B algorithm to classify the coefficient denoted with '?'. The neighboring coefficient positions that have already been coded and are thus known by the decoder are enumerated from 0 to 12.

In practical implementations, Ci might be too large to be processed efficiently for each coefficient. This is why most context based systems only consider a close neighborhood of the predicted coefficient. This smaller J-sized neighborhood Ni for the coefficient ci can be defined by mapping the index i of the predicted coefficient to a set of indexes {n(i, j)} of the coefficients belonging to the neighborhood as

Ni = {cn(i,j)|0 ≤ j < J} ⊆ Ci. (3.7)

For example, C/B defines the context used for the classification as illustrated in Figure 3.2.

C/B uses scalar quantization when selecting the class for the context. First, an estimate c̄i of the absolute value of the coefficient ci is calculated as the weighted average of the context coefficients:

c̄i = Σ_{j=0}^{J−1} wj |cn(i,j)|,    (3.8)

where the weights {wj} are specific to the different context positions. Then, the estimate c̄i is quantized to calculate the class index k. Quantization is done by partitioning the estimate space into K ranges that each correspond to the class with index k ∈ [0, K − 1] in such a way that qk ≤ c̄i < qk+1 when ci belongs to class k. One class is assigned for each range in such a way that the ranges cover the estimate space fully:

∀i : c̄i ∈ [0,∞) = [q0, q1) ∪ [q1, q2) ∪ · · · ∪ [qK−1,∞),    (3.9)


where ∀k : qk < qk+1 ∧ qk ∈ Z. In C/B, the quantization borders {qk} are selected to model c̄i as an exponential random variable, but always allocating one class for zero predictions by selecting q0 = 0, q1 = 1.
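A minimal Python sketch of this class selection (equations (3.8) and (3.9)) is given below; the weights and borders shown are illustrative placeholders, not the values used by C/B.

    from bisect import bisect_right

    def context_class(neighbour_abs, weights, borders):
        # Weighted average of the |coefficients| in the causal neighbourhood
        # (equation (3.8)), scalar quantized against the sorted borders
        # q0 < q1 < ... (equation (3.9)).
        estimate = sum(w * a for w, a in zip(weights, neighbour_abs))
        return bisect_right(borders, estimate) - 1  # index of the last qk <= estimate

    # example borders with q0 = 0, q1 = 1 (class 0 reserved for zero predictions):
    # borders = [0, 1, 2, 4, 8, 16]  ->  classes 0..5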

A context classifier based on the scalar quantization ignores the signs of the coefficients because sign prediction of wavelet coefficients is hard to do reliably. It is possible to use context based methods also for predicting signs [41], but the results have not been very encouraging.

3.3.2 Vector quantization of the context space

A general way of classifying context information is to use vector quantization instead of scalar quantization. While in scalar quantization the one-dimensional estimate space is partitioned into ranges, in vector quantization the J-dimensional context space of all the contexts {Ni} in the image is partitioned into K clusters {Tk | 0 ≤ k < K}, where Ni ∈ Tk if and only if ci belongs to class k. Vector quantization can be used in many applications that require classification of complex data. In many cases the performance of vector quantization is very good when compared to application specific methods.

The most commonly used distance metric in vector quantization is the Euclidean distance:

d(x, y) = √( Σ_{j=0}^{J−1} (x[j] − y[j])² ).    (3.10)

For each cluster Tk a centroid (mean vector) mk ∈ R^J is defined as the cluster's representative item:

mk = (1/|Tk|) Σ_{Ni∈Tk} Ni.    (3.11)

The partition is defined so that each vector Ni is assigned to the cluster with the centroid mk nearest to it:

Ni ∈ Tk ⇒ ∀l ≠ k : d(ml, Ni) ≥ d(mk, Ni).    (3.12)
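For illustration, the classification of a single J-dimensional context vector against a given set of centroids can be written as the following Python sketch (an assumption of this presentation, not code from the thesis):

    import numpy as np

    def nearest_centroid(context, centroids):
        # Assign the context vector to the cluster whose centroid is closest
        # in Euclidean distance (equations (3.10) and (3.12)); the returned
        # index selects the probability model used for entropy coding.
        diff = np.asarray(centroids, dtype=float) - np.asarray(context, dtype=float)
        return int(np.argmin(np.linalg.norm(diff, axis=1)))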

3.4 Code book generation for vector quantization

The partition of the space together with the representative items for each cluster is called a code book. When a code book exists, mapping any vector to the clusters is simple. If the vectors are coded by storing only the indexes of the clusters they belong to, the decoder can use the code book for approximating the coded vectors with the representative items of their clusters. Finding an optimal partition for the context space is a hard optimization problem. In this section we define the problem and discuss two optimization techniques for finding solutions.


We define the indices of the cluster for each context as P = (P0, P1, . . . , PWH−1), where the number of contexts is determined by the dimensions W and H of the coefficient matrix. One possible way of defining the partition of the context space is by the centroids M = (m0, m1, . . . , mK−1) of the clusters. The problem of finding an optimal partitioning can then be defined as finding such a solution ω∗ = (P, M) for the set of different contexts {Ni | i ∈ [0,WH)} that minimizes the given objective function e(ω). The most commonly used objective function is the mean square error (MSE) of the quantization:

eMSE(ω) = (1/(KWH)) Σ_{i=0}^{WH−1} d(Ni, mPi)²    (3.13)

The number of clusters K in the partition is selected so that the classification is as good as possible, but at the same time the number of clusters is as small as possible. Usually each cluster has its own dynamic probability distribution that is used for encoding the coefficients whose context belongs to that cluster. If K is too large, the number of coefficients per probability distribution is too small to adjust the probability distributions properly, which leads to inefficient entropy encoding. If the number of clusters is too small, there are not enough clusters available for the different types of contexts, which also leads to inefficient entropy encoding. A good selection for K depends on the application and the data and is often made by hand.

3.4.1 Clustering by k-means

The well-known k-means algorithm is a simple and efficient technique for approximately optimizing the clustering solution. The algorithm is also often referred to as the Generalized Lloyd algorithm (GLA), the Linde-Buzo-Gray algorithm (LBG) and the iterative self-organizing data analysis technique (ISODATA) [47, 42]. The algorithm improves the existing solution iteratively and thus needs an initial solution to begin with. The quality of the result therefore depends on the quality of the initial solution. The k-means algorithm is often used as the final or an intermediate step in other more complicated clustering algorithms to guarantee that the result is locally optimal.

The k-means algorithm consists of repeatedly recalculating the cluster centroids until a local optimum is found; a minimal sketch of the iteration follows the algorithm description.

K-means algorithm:

• ω0 ← given initial solution or a random solution of the clustering (P,M);

• i← 0

• While i ≤ 1 ∨ eMSE(ωi) < eMSE(ωi−1)


– Calculate the new centroids M = (m0, m1, . . . , mK−1) from the mapping defined in ωi as shown by equation (3.11).

– Calculate a new context to cluster mapping P = (P0, P1, . . . , PWH−1) by assigning the contexts {Nj | j ∈ [0,WH)} to the new centroids M as shown by equation (3.12).

– i← i + 1;

– ωi ← (P,M).
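The sketch below (illustrative Python, not the thesis implementation) shows the same iteration over a matrix of context vectors; empty clusters simply keep their previous centroid.

    import numpy as np

    def kmeans(contexts, centroids, max_iter=100):
        # Alternate between assigning every context to its nearest centroid
        # (equation (3.12)) and recomputing the centroids as cluster means
        # (equation (3.11)) until the assignment no longer changes.
        contexts = np.asarray(contexts, dtype=float)
        centroids = np.asarray(centroids, dtype=float).copy()
        assignment = None
        for _ in range(max_iter):
            dists = np.linalg.norm(contexts[:, None, :] - centroids[None, :, :], axis=2)
            new_assignment = dists.argmin(axis=1)
            if assignment is not None and np.array_equal(new_assignment, assignment):
                break
            assignment = new_assignment
            for k in range(len(centroids)):
                members = contexts[assignment == k]
                if len(members) > 0:
                    centroids[k] = members.mean(axis=0)
        return assignment, centroids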

3.4.2 Genetic algorithms

Genetic algorithms [28, 21, 49] are optimization techniques inspired by the evolutionary process in nature and they have been successfully applied to clustering [31]. In genetic algorithms, solutions, often called individuals, can be seen as chromosomes that consist of genes corresponding to the features of the individual solutions. The simplest coding for genes is to use one gene to represent each bit in the binary representation of the solution, but in many applications it is more efficient to select a higher-level application-specific gene assignment. Genetic algorithms iteratively produce generations of solutions, where the overall goal is to find new generations that have at least one solution that is better than the best solution of the previous generations.

New solutions are produced by genetic operations, which include selection, crossover and mutation. Selection is used to preserve selected individuals between generations. It can preserve the best solutions, try to preserve the needed dissimilarity in the next generation or be completely random. Crossover is an operation where two chromosomes are sliced into gene sequences that are combined to compose one or more new individuals. Mutation simulates random changes to the genes in the chromosome.

An example of a simple genetic algorithm [32] is defined as follows:

• i← 0;

• Create the first generation G0 of S individual solutions.

• Repeat until a stopping condition is met

– Preserve SB surviving individuals from the previous generation Gi to the new generation Gi+1.

– Select (S − SB)/Sγ pairs of individuals from Gi to produce offspring, where Sγ is the number of offspring produced by a single crossover. Add the offspring to the new generation Gi+1;

– Mutate some individuals in Gi+1;

– i← i + 1;


• Output the best individual ωbest ∈ Gi.

The parameters of the above algorithm include the number of individuals preserved between generations (SB), the size of the generations (S), the number of individuals created by a crossover operation (Sγ) and the selection of the genetic operations.
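A minimal Python sketch of this generational loop is given below for illustration; the problem specific operators (fitness, crossover, mutate) and the 10% mutation probability are assumptions of the sketch, not parameters from the publications.

    import random

    def simple_ga(initial_population, fitness, crossover, mutate,
                  generations=100, survivors=2, offspring_per_pair=2):
        # Keep the best `survivors` individuals, fill the rest of the new
        # generation with offspring of randomly selected parents and mutate
        # some of the individuals (smaller fitness value = better solution).
        population = list(initial_population)
        size = len(population)
        for _ in range(generations):
            population.sort(key=fitness)
            next_gen = population[:survivors]            # selection
            while len(next_gen) < size:
                a, b = random.sample(population, 2)
                next_gen.extend(crossover(a, b)[:offspring_per_pair])
            population = [mutate(x) if random.random() < 0.1 else x
                          for x in next_gen[:size]]
        return min(population, key=fitness)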

Genetic algorithms can be applied to a clustering problem directly, but in many approaches they have been combined with a local optimization algorithm, such as k-means, for better performance [20]. Also, the use of genetic algorithms is not limited to the optimization of the individual solutions: in algorithms with many parameters it is possible to use a genetic algorithm for the optimization of the parameter set to produce algorithms that are more efficient and easier to use. The self-adaptive genetic algorithm (SAGA) [33] shows that excellent performance can be achieved by including optimization strategy parameters, such as the crossover method, the mutation probability and the mutation noise range.


Chapter 4

Summary of publications

The thesis consists of five publications. The first two publications introduce new techniques for limiting distortion in embedded lossy wavelet compression methods. The third publication introduces a simple new method for coefficient prediction and coding using contexts. The fourth publication introduces an advanced new method for coding the wavelet coefficients by clustering of the context space. The final paper introduces a new method for clustering using a distributed self-adaptive genetic algorithm.

This chapter outlines the contents of the included articles. A brief performance comparison of the described algorithms with standard compression algorithms is given at the end of the chapter. The included publications are:

1. Antero Jarvi, Joonas Lehtinen and Olli Nevalainen, Variable quality image compression system based on SPIHT, Signal Processing: Image Communications, vol. 14, pages 683–696, 1999.

2. Joonas Lehtinen, Distortion limited wavelet image codec, Acta Cybernetica, vol. 14, no. 2, pages 341–356, 1999.

3. Joonas Lehtinen, Predictive depth coding of wavelet transformed images, Proceedings of SPIE: Wavelet Applications in Signal and Image Processing, vol. 3813, no. 102, Denver, USA, 1999.

4. Joonas Lehtinen and Juha Kivijarvi, Clustering context properties of wavelet coefficients in automatic modelling and image coding, Proceedings of IEEE 11th International Conference on Image Analysis and Processing, pages 151–156, Palermo, Italy, 2001.

5. Juha Kivijarvi, Joonas Lehtinen and Olli Nevalainen, A parallel genetic algorithm for clustering, in Kay Chen Tan, Meng Hiot Lim, Xin Yao and Lipo Wang (editors), Recent Advances in Simulated Evolution and Learning, World Scientific, Singapore, 2004 (to appear).


4.1 Variable quality image compression system based on SPIHT

The SPIHT algorithm [54] outlined in Chapter 3 provides an efficient compression system that produces an embedded bit stream. A new variable quality compression system based on SPIHT (vqSPIHT) is presented. The system supports marking a region of interest (ROI) in the image as a list of rectangles and preserving more details in those regions. This is achieved by selecting a point in the embedded bit stream after which only bits contributing to the details in the ROI are included in the final image.

An application of the vqSPIHT compression system to mammography image coding is given. In most medical imaging applications, it is necessary to preserve the diagnostic quality of the compressed images, that is, to guarantee that no details contributing to the diagnosis are lost. When the mammogram images are coded with SPIHT, the most easily lost diagnostically important details are the microcalcifications. The microcalcifications often occur in groups and the size of each microcalcification can be only a few pixels. The compression system must preserve the exact shapes and locations of the microcalcifications, as they are important in the diagnosis of breast cancer. We integrate an automatic microcalcification detection algorithm [19] into vqSPIHT in order to construct the ROI automatically for the compression of mammogram images. This compression system achieves considerably better compression efficiency than SPIHT, when both algorithms are constrained to be diagnostically lossless. After publishing this, other studies [50, 27] have agreed that variable quality compression systems can be more suitable for mammography compression than traditional lossy compression.

The original SPIHT algorithm uses three lists for maintaining the sorting state information during the compression. When compressing high-resolution images, such as mammograms, the memory consumption of a straightforward implementation of SPIHT is very high. We present a new memory efficient implementation of the algorithm by modifying it to use matrices for state maintenance during the sorting. The memory requirements of the matrix based vqSPIHT are less than half of the memory requirements of the original SPIHT. Later, similar reduced memory implementations of SPIHT have been published [64].

4.2 Distortion limited wavelet image codec

In this paper a new EZW based wavelet coding scheme is introduced. We refer to the new coding scheme with the name Distortion Limited Wavelet Image Codec (DLWIC). The codec is designed to be simple to implement, fast and to have modest memory requirements. It is also shown how the distortion of the result can


be calculated while progressively coding a transformed image, if the transform is unitary.

EZW exploits spatial inter-band correlations of the wavelet coefficient magnitudes by coding the bit-planes of the coefficient matrix in a hierarchical order. The order is defined by a quad-tree structure where the completely zero subtrees (zerotrees) can be represented by only one symbol. In DLWIC the correlations between different orientations are also taken into account by binding together the coefficients on three different orientation bands in the same spatial locations. The maximum absolute values of the coefficients in all subtrees are stored in a two-dimensional heap structure. This allows the coder to test the zerotree property of a subtree with only one comparison. A binary arithmetic coder with multiple separate probability distributions (states) is used to reach compression performance that is similar to the previously known EZW variants.

A wavelet transform is used to construct an octave band decomposition. The value of the square error (SE) is updated in the compression process. We start by calculating the initial SE as the total energy of the image and decrease it while bits are sent, according to the information content of the bits. For every bit sent, the change in the SE of the image is defined by the difference between the predictions for the coefficient before and after the bit is sent. The calculations can be implemented efficiently with table lookups. This has the advantage that we know the MSE of the final decompressed image already in the coding phase and can stop the transmission of bits when an acceptable distortion level is reached.

The efficiency of the DLWIC is compared to an advanced EZW variant and the industry standard JPEG using a set of test images. An estimation of the speed and memory requirements of the DLWIC algorithm is made.

A simplified implementation of the DLWIC algorithm was published as the GNU Wavelet Image Codec (GWIC) [40]; it does not include the distortion control features, but adds support for color images. Because of the simplicity of DLWIC and the availability of the implementation, it has been used as a basis for various projects [59, 12, 35], including a wavelet-based video codec [36], a mobile thin-client [2] and a low-power embedded system [52].

4.3 Predictive depth coding of wavelet transformed images

In this paper, a new prediction based method for lossy wavelet image compression is presented. It uses dependencies between the subbands of different scales as do the more complex tree-based coding methods. Furthermore, the compression method uses dependencies between spatially neighbouring coefficients on a subband and between subbands representing different orientations on the same scale. Despite its simplicity, the prediction based method achieves good


compression performance.

Coding of the octave band decomposition created by the biorthogonal 9-7 wavelet

transform is straightforward. The coefficients of the transformed image are quantized with a simple uniform scalar quantizer. For each quantized coefficient that is not zero, the sign and the significant bits are coded. In addition to these, the decompressor must know the number of coded bits for each coefficient, which we call the depth. The depth of each coefficient is predicted from nine related coefficient depths using a linear predictor and the prediction error is then coded using arithmetic coding.

The linear predictor uses six known spatial neighbors of the coefficient, two other coefficients on the same level (scale) and spatial location but on different subbands, and the coefficient on the previous level in the same location. The weights for the linear predictor are approximated adaptively during the compression.

The compression method is tested with a standard set of images and the results are compared with SFQ, SPIHT, EZW and context based algorithms. The compression performance of PDC is found to be comparable with the compared methods, which shows that simple linear predictors can be used for wavelet coefficient modeling.

4.4 Clustering context properties of wavelet coefficients in automatic modelling and image coding

A new method for modeling and coding wavelet coefficients by clustering is proposed. The method shows how any properties calculated from the coding context can be used for modeling the probability distribution of the coefficient being coded.

The coding system first transforms the image using a wavelet based transform. Then, optimal coding parameters (∆, W, C) meeting the given target bit-rate are iteratively searched for. The symbol ∆ stands for the scalar quantization step size, W is the set of weights for the context and C is the set of the cluster centroids that define the partitioning. The iteration process alternates between finding a suitable ∆ for the clustering C found in the previous iteration and constructing a new clustering for a given ∆. A formula for calculating the weights W is given. The coefficient is modeled by selecting the cluster centroid nearest to the weighted context parameters of the coefficient and using the probability model connected to that cluster for entropy coding. The coefficients are quantized using rate distortion quantization [10] and finally coded using an arithmetic coder.

The coding system is tested against the state of the art methods [54, 11, 66] with a simple set of context properties. The coding efficiency of the system


matches the state of the art methods, but the implementation of the proposed iterative optimizer is fairly complex and slow. Because the framework is fully self-adaptive, it can be used as a research tool for testing the efficiency of different selections of context properties.

4.5 Clustering by a parallel self-adaptive genetic algorithm

The quality of a clustering solution is critical for the compression performance in context classification based wavelet image compression. We describe a new algorithm for finding high-quality clustering solutions. The algorithm utilizes self-adaptive genetic algorithms for solution optimization and a new island model for parallelization.

The self-adaptive genetic algorithm for clustering (SAGA) [33] is based on individual level self-adaptation [26, 43], where each individual consists of a solution candidate for the problem and a set of strategy parameters. The candidate solution includes both the cluster centroids and the individual to cluster mapping. The strategy parameters include the cross-over method, the mutation probability and the noise range used in the mutations. The actual genetic optimization algorithm is fairly similar to the algorithm described in Section 3.4.2.

The parallel SAGA (parSAGA) algorithm uses the island parallelization model [61], where a number of genetic algorithms run independently but communicate with each other. This simulates a biological model where separate populations live on geographically remotely located islands and occasionally send individuals to other islands. To easily control the process centrally, we implement the island model by using a common so-called genebank, where all the islands can send their individuals and receive new individuals. The probability of each individual traveling from one island to another is controlled by an island topology model where the locations of the islands on a two-dimensional virtual sea are taken into account and moving in one direction can be favored over another.

Two new statistical measures are proposed for the island model and the algorithm is tested with different parameters against other algorithms. The tests show that both the SAGA and parSAGA algorithms outperform the compared algorithms when solving hard problems. While the quality of the solutions produced by the SAGA and parSAGA algorithms is equal, the distribution model of the parSAGA algorithm allows the solutions to be achieved faster.


4.6 Performance comparison

The goals of the compression algorithms summarized in the previous sections vary: the first two algorithms (vqSPIHT and DLWIC) demonstrate how to add quality control features into an embedded wavelet coder and the latter two (PDC and ACPC) show how scalar and vector quantization can be applied to wavelet image coding. While the compression performance has not been a high priority goal, it is important to achieve an acceptable compression performance when adding new features.

Figures 4.2–4.5 profile the performance of several compression algorithms for the set of test images shown in Figure 4.1. For each image, the quality, measured as the PSNR of the decompressed image, is shown for all compared algorithms with bit rates between 0 and 0.5 BPP. Figure 4.6 includes magnifications of the compression results for the test image Barbara compressed with different bit-rates and algorithms.
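For reference, the PSNR values in the figures follow the standard definition for 8-bit images; a small Python sketch (an assumption of this presentation) is:

    import numpy as np

    def psnr(original, decoded, peak=255.0):
        # PSNR in dB: 10 * log10(peak^2 / MSE) between the original and the
        # decoded image.
        mse = np.mean((np.asarray(original, float) - np.asarray(decoded, float)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)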

The following algorithms are included in the comparison: JPEG, JPEG 2000, SPIHT, vqSPIHT, ACPC, PDC and DLWIC. The JPEG implementation used is provided by the Independent JPEG Group's libjpeg package version 6b using the default options for compression. The JPEG 2000 implementation used is the Java-based JPEG verification model version 4.1 provided by the JPEG Group with default options. The SPIHT algorithm implementation used is provided by A. Said and W. Pearlman. Note that this version of SPIHT gives slightly better results than the original algorithm [54] used for comparison in the original papers. Our variable quality SPIHT algorithm is used with an empty ROI. Our clustering based context modeling algorithm, here referred to as ACPC, is used for measuring compression performance. The compression performance calculations use a code book with 16 clusters and the context described in [11] with zeros for unknown coefficients. The PDC and DLWIC algorithms of this work use the parameters described in the included publications.

The compression performance of the JPEG 2000, SPIHT, vqSPIHT, PDC and ACPC algorithms is fairly similar for all tested images. JPEG 2000 and ACPC consistently give the best results, but they are not comparable because the ACPC implementation is an order of magnitude slower than the other algorithms tested. DLWIC gives good compression results for bit rates below 0.1 BPP, but is left behind the more complex wavelet compression algorithms at higher bit rates. The quality produced by JPEG is about 2-4 dB lower in PSNR than that of the wavelet based algorithms at the same bit rates. When the bit rate is lowered below 0.2 BPP, the quality of JPEG drops at a considerably faster pace than for the wavelet-based algorithms.


Figure 4.1: Test images for the compression performance comparison. From left to right the images are the standard test images Barbara, Goldhill and Lenna of size 512 × 512. The landscape image below is of size 2048 × 1024. All images use 8 bits of luminance resolution.


[Figure 4.2 here: PSNR (dB) as a function of BPP for vqSPIHT, SPIHT, JPEG, JPEG2000, PDC, DLWIC and ACPC.]

Figure 4.2: Compression performance comparison for the test image Barbara.


[Figure 4.3 here: PSNR (dB) as a function of BPP for vqSPIHT, SPIHT, JPEG, JPEG2000, PDC, DLWIC and ACPC.]

Figure 4.3: Compression performance comparison for the test image Goldhill.


[Figure 4.4 here: PSNR (dB) as a function of BPP for vqSPIHT, SPIHT, JPEG, JPEG2000, PDC, DLWIC and ACPC.]

Figure 4.4: Compression performance comparison for the test image Lenna.


[Figure 4.5 here: PSNR (dB) as a function of BPP for vqSPIHT, SPIHT, JPEG, JPEG2000, PDC and DLWIC.]

Figure 4.5: Compression performance comparison for the test image Landscape.


Figure 4.6: Magnifications of the compression results for the test image Barbara compressed with different bit-rates and algorithms. The images in the left column are compressed with 0.05 BPP, in the middle with 0.1 BPP and with 0.2 BPP in the right column. The algorithms tested are, from top to bottom: DLWIC, PDC, ACPC, vqSPIHT, SPIHT, JPEG2000 and JPEG. Because the JPEG algorithm could not reach bit-rates under 0.1 BPP, the original image is presented instead of JPEG at 0.05 BPP.


Chapter 5

Conclusions

In this work, four new methods for coding wavelet transformed images and one new method for clustering have been proposed. It was shown how the different regions of an image can be coded in variable quality in an embedded wavelet image coding method and how this can be applied to mammography image compression. A method was proposed for limiting distortion by maintaining an estimate of the MSE of the image throughout the coding process in the embedded method. It was shown how context based prediction can be applied to wavelet image compression for constructing a simple but efficient coding system. A novel framework for wavelet image compression by clustering a given set of context properties was proposed and it was shown to provide excellent compression performance. Finally, a new clustering method based on self-adaptive distributed genetic optimization was proposed.

The proposed variable quality image coding algorithm based on set partitioning in hierarchical trees (vqSPIHT) extends the state of the art SPIHT algorithm both by adding a novel method for enhancing image quality in selected regions and by proposing a memory efficient alternative implementation of the SPIHT sorting algorithm. The efficiency and the benefits of variable quality image compression were demonstrated with an integrated system for mammography compression with automated ROI detection. The compression efficiency of vqSPIHT was shown to be superior to conventional methods in mammography image compression and the memory efficiency of the implementation was observed to be significantly better than that of the original SPIHT algorithm.

The proposed distortion limited wavelet image codec (DLWIC) provides a simplified implementation of EZW and shows how an embedded wavelet image coder can be controlled by a target distortion instead of a target bit-rate. The implementation of the algorithm is based on an efficient two-dimensional heap structure for zerotree property testing.


The compression efficiency of the DLWIC algorithm was shown to be comparable to that of SPIHT, while the compression algorithm is considerably simpler.

The proposed predictive depth coding (PDC) algorithm demonstrates how conventional prediction coding principles, originally used in lossless image coding, can be applied to lossy wavelet transform coding. It was shown how the number of significant bits in the wavelet coefficients can be successfully predicted from the context by using a simple linear predictor. The compression efficiency of the proposed method is comparable to other wavelet based methods. It was also demonstrated that context based methods can be used for wavelet coefficient sign prediction, but the achieved coding efficiency benefits are questionable.

The proposed automatic context property based coding (ACPC) algorithm shows how clustering methods can be used for the classification of wavelet coefficient context properties. Our algorithm provides an automated framework for using any context properties for modeling and coding the wavelet coefficients. The framework makes it easy to test the suitability and efficiency of any context based statistical measure for compression systems. It was observed that the selection of context properties can produce excellent compression results with the proposed ACPC coder. The speed and complexity of the optimization algorithm limit the use of the system in practical applications.

The parallelization of genetic algorithms for clustering was studied and a new parallel self-adaptive genetic algorithm (parSAGA) was proposed. A general model that allows the implementation of different island model topologies by parametrization was given. The parSAGA was observed to achieve the same quality results as the sequential SAGA algorithm, but in considerably shorter time. Both algorithms were observed to outperform the other tested methods in large clustering problems. The speedup of the parSAGA over the sequential SAGA was in some cases observed to be superlinear, because the proposed genebank model retains more diversity in the populations and thus keeps finding better solutions more efficiently. Finally, two new statistical diversity measures for parallel genetic algorithms were proposed and used for studying the behavior of the distribution model.


References

[1] N. Ahmed, T. Natarajan, and K. R. Rao. Discrete Cosine Transform. IEEE Transactions on Computers, 23(1):90–93, January 1973.

[2] M. Al-Turkistany and A. Helal. Intelligent adaptation framework for wireless thin-client environments. In IEEE Symposium on Computers and Communications - ISCC'2003, Kemer - Antalya, Turkey, June–July 2003.

[3] M. Antonini, M. Barlaud, P. Mathieu, and I. Daubechies. Image coding using wavelet transform. IEEE Trans. on Image Proc., 1(2):205–220, April 1992.

[4] R. N. Bracewell. The Fourier Transform and Its Applications. McGraw-Hill, 2000.

[5] E. O. Brigham. The Fast Fourier Transform. Prentice Hall, 1974.

[6] C. M. Brislawn. Preservation of subband symmetry in multirate signal coding. IEEE Trans. Signal Processing, 43(12):3046–3050, December 1995.

[7] C. Burrus. Introduction to Wavelets and Wavelet Transforms. Prentice Hall, 1997.

[8] P. Le Callet and D. Barba. A robust quality metric for color image quality assessment. In ICIP03, pages I: 437–440, 2003.

[9] M. Carnec, P. Le Callet, and D. Barba. An image quality assessment method based on perception of structural information. In ICIP03, pages III: 185–188, 2003.

[10] C. Chrysafis. Wavelet Image Compression Rate Distortion Optimizations and Complexity Reductions. PhD thesis, December 1999.

[11] C. Chrysafis and A. Ortega. Efficient context-based entropy coding for lossy wavelet image compression. In DCC, Data Compression Conference, Snowbird, UT, March 1997.


[12] S. Chukov. JWIC - Java based wavelet image codec. http://wavlet.chat.ru/, 1999.

[13] D. A. Clunie. Lossless compression of grayscale medical images - effectiveness of traditional and state of the art approaches.

[14] R. Coifman and M. V. Wickerhauser. Entropy-based algorithms for best basis selection. IEEE Transactions on Information Theory, 38(2), March 1992.

[15] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley-Interscience, August 1991.

[16] I. Daubechies. Orthonormal bases of compactly supported wavelets. Commun. of Pure and Appl. Math., 41:909–996, November 1988.

[17] I. Daubechies. Ten lectures on wavelets. In CBMS-NSF Regional Conference Series in Applied Mathematics, volume 61. Society for Industrial & Applied Mathematics, 1992.

[18] I. Daubechies. Ten Lectures on Wavelets. Society for Industrial & Applied Mathematics, May 1992.

[19] J. Dengler, S. Behrens, and J. F. Desaga. Segmentation of microcalcifications in mammograms. IEEE Transactions on Medical Imaging, 12(4), December 1993.

[20] P. Franti, J. Kivijarvi, T. Kaukoranta, and O. Nevalainen. Genetic algorithms for large-scale clustering problems. The Computer Journal, 40(9):547–554, 1997.

[21] D. E. Goldberg. Genetic Algorithms in Search, Optimization & Machine Learning. Addison-Wesley, Reading, USA, 1989.

[22] R. Gonzalez and R. Woods. Digital Image Processing. Addison-Wesley Publishing Company, 1992.

[23] A. Graps. An introduction to wavelets. IEEE Computational Sciences and Engineering, 2(2):50–61, 1995.

[24] Joint Photographic Experts Group. Official site of the Joint Photographic Experts Group: JPEG 2000 status. http://www.jpeg.org/jpeg2000/, April 2005.

[25] M. A. Haque. A two-dimensional fast cosine transform. ASSP, 33:1532–1539, 1985.


[26] R. Hinterding, Z. Michalewicz, and A. E. Eiben. Adaptation in evolutionary computation: A survey. In Proceedings of the 4th IEEE International Conference on Evolutionary Computation, pages 65–69, April 1997.

[27] C. Ho, D. Hailey, R. Warburton, J. MacGregor, E. Pisano, and J. Joyce. Digital mammography versus film-screen mammography: technical, clinical and economic assessments. Technology report no 30, 2002.

[28] J. H. Holland. Adaptation in Natural and Artificial Systems. University of Michigan Press, Ann Arbor, USA, 1975.

[29] P. G. Howard and J. S. Vitter. Fast and efficient lossless image compression. In Data Compression Conference, pages 351–36, 1993.

[30] D. A. Huffman. A method for the construction of minimum-redundancy codes. In Proc. IRE, pages 1098–1101, September 1952.

[31] J. Kivijarvi. Optimization Methods for Clustering. PhD thesis, University of Turku, Department of Information Technology, 2004.

[32] J. Kivijarvi. Optimization Methods for Clustering. PhD thesis, Turku Centre for Computer Science, January 2004.

[33] J. Kivijarvi, P. Franti, and O. Nevalainen. Self-adaptive genetic algorithm for clustering. Journal of Heuristics, 9(2):113–129, 2003.

[34] J. Kivijarvi, T. Ojala, T. Kaukoranta, A. Kuba, L. Nyul, and O. Nevalainen. The comparison of lossless compression methods in the case of a medical image database. Technical Report 171, Turku Centre for Computer Science, April 1998.

[35] S. Knoblich. GNU wavelet image codec with ELS coder. http://home.t-online.de/home/stefan.knoblich/gwic prj.html, May 2000.

[36] S. Knoblich. Wavelet video codec based on GWIC. http://home.t-online.de/home/stefan.knoblich/gwic codec.html, May 2000.

[37] D. Knuth. Dynamic Huffman coding. J. Algorithms, 2:163–180, 1985.

[38] C. Lee and O. Kwon. Objective measurements of video quality using the wavelet transform. Optical Engineering, 42(1):265–272, January 2003.

[39] T. S. Lee. Image representation using 2D Gabor wavelets. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(10):959–971, 1996.

[40] J. Lehtinen. GWIC - GNU wavelet image codec. http://www.jole.fi/research/gwic/, 1998.


[41] J. Lehtinen. Predictive depth coding of wavelet transformed images. In Proceedings of SPIE: Wavelet Applications in Signal and Image Processing, volume 3813, Denver, USA, 1999.

[42] S. P. Lloyd. Least squares quantization in PCM. IEEE Trans. on Information Theory, 28(2):129–137, 1982.

[43] G. Magyar, M. Johnsson, and O. Nevalainen. An adaptive hybrid genetic algorithm for the three-matching problem. IEEE Transactions on Evolutionary Computation, 4:135–146, 2000.

[44] S. G. Mallat. A theory for multiresolution signal decomposition: The wavelet representation. IEEE Trans. Pattern Anal. Mach. Intell., 11(7):674–693, 1989.

[45] J. L. Mannos and D. J. Sakrison. The effects of a visual fidelity criterion on the encoding of images. IEEE Trans. Information Theory, IT-20(4), July 1974.

[46] M. W. Marcellin, M. J. Gormish, A. B., and M. P. Boliek. An overview of JPEG-2000. In Proc. of IEEE Data Compression Conference, pages 523–541, 2000.

[47] J. B. McQueen. Some methods of classification and analysis of multivariate observations. In Proc. 5th Berkeley Symposium Mathemat. Statist. Probability, volume 1, pages 281–296, University of California, Berkeley, CA, 1967.

[48] Y. Meyer. Wavelets and operators, volume 37 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 1992. Translated from the 1990 French original by D. H. Salinger.

[49] M. Mitchell. An Introduction to Genetic Algorithms. The MIT Press, Cambridge, USA, 1996.

[50] M. Penedo, W. A. Pearlman, P. G. Tahoces, M. Souto, and J. J. Vidal. Region-based wavelet coding methods for digital mammography. IEEE Trans. on Medical Imaging, 22:1288–1296, October 2003.

[51] W. Pennebaker and J. Mitchell. JPEG: Still Image Data Compression Standard. Van Nostrand Reinhold, 1992.

[52] C. Pereira, R. Gupta, and M. Srivastava. PASA: A software architecture for building power aware embedded systems. In Proceedings of the IEEE CAS Workshop on Wireless Communications and Networking - Power efficient wireless ad hoc networks, California, USA, September 2002.


[53] M. Pinson and S. Wolf. A new standardized method for objectively measuring video quality. IEEE Transactions on Broadcasting, 50(3):312–322, September 2004.

[54] A. Said and W. A. Pearlman. A new fast and efficient image codec based on set partitioning in hierarchical trees. IEEE Transactions on Circuits and Systems for Video Technology, 6:243–250, June 1996.

[55] K. Sayood. Introduction to Data Compression. Morgan Kaufmann, 1996.

[56] J. M. Shapiro. Embedded image coding using zerotrees of wavelet coefficients. IEEE Transactions on Signal Processing, 41(12), December 1993.

[57] J. R. Smith and S. Chang. Frequency and spatially adaptive wavelet packets. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, May 1995.

[58] P. Symes. Digital Video Compression. McGraw-Hill, 2003.

[59] T. Szirányi. Self-Organizing Image Fields. PhD thesis, Magyar Tudományos Akadémia, 2001.

[60] D. S. Taubman and M. W. Marcellin. JPEG 2000: Image Compression Fundamentals, Standards and Practice. Kluwer Academic Publishers, Massachusetts, USA, 2002.

[61] M. Tomassini. Parallel and distributed evolutionary algorithms, 1999.

[62] M. Vetterli and J. Kovačević. Wavelets and Subband Coding. Prentice Hall, Englewood Cliffs, NJ, 1995.

[63] M. Weinberger, G. Seroussi, and G. Sapiro. LOCO-I: A low complexity, context-based, lossless image compression algorithm. In Proceedings of the IEEE Data Compression Conference, Snowbird, Utah, March–April 1996.

[64] F. W. Wheeler and W. A. Pearlman. SPIHT image compression without lists. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2000), Istanbul, Turkey, June 2000.

[65] I. H. Witten, R. M. Neal, and J. G. Cleary. Arithmetic coding for data compression. Communications of the ACM, 30(6):520–540, 1987.

[66] Z. Xiong, K. Ramchandran, and M. T. Orchard. Space-frequency quantization for wavelet image coding. IEEE Transactions on Image Processing, 1997.

[67] R. D. Zampolo and R. Seara. A measure for perceptual image quality assessment. In Proceedings of the IEEE International Conference on Image Processing (ICIP 2003), pages I:433–436, 2003.


Publication reprints


Variable quality image compression system based on SPIHT

Antero Järvi, Joonas Lehtinen and Olli Nevalainen

Published in Signal Processing: Image Communication, vol. 14, pages 683–696, 1999 (submitted 1997).


Signal Processing: Image Communication 14 (1999) 683–696

Variable quality image compression system based on SPIHT

A. Järvi a,b,*, J. Lehtinen a,b,1, O. Nevalainen b

a Turku Centre for Computer Science, Lemminkäisenkatu 14 A, FIN-20520 Turku, Finland
b Department of Computer Science, University of Turku, Lemminkäisenkatu 14 A, FIN-20520 Turku, Finland

Received 10 July 1997

* Corresponding author. Tel.: +358 2 3338795; fax: +358 2 2410154.
1 Joonas Lehtinen acknowledges support by the Academy of Finland.

Abstract

An algorithm for variable quality image compression is given. The idea is to encode different parts of an image with different bit-rates depending on their importance. Variable quality image compression (VQIC) can be applied when a priori knowledge is available that some regions or details are more important than others. Our target application is digital mammography, where the high compression ratios achieved with lossy compression are necessary due to the vast image sizes, while relatively small regions containing signs of cancer must remain practically unchanged. We show how VQIC can be implemented on top of SPIHT (Said and Pearlman, 1996), an embedded wavelet encoding scheme. We have revised the algorithm to use matrices, which gives a more efficient implementation both in terms of memory usage and execution time. The effect of VQIC on the quality of compressed images is demonstrated with two test pictures: a drawing and a more relevant mammogram image. © 1999 Elsevier Science B.V. All rights reserved.

Keywords: Image compression; SPIHT; Wavelet transform; Region of interest; Variable quality image compression (VQIC)

1. Introduction

The number of large digital image archives is increasing rapidly in many fields, including health care. Cost-efficient archiving causes a need for high-quality image compression techniques aiming at major savings in storage space and network bandwidth when transmitting the images. Despite the potentially critical nature of medical images, some degradation of the image quality must be allowed, since the best lossless compression methods can only reduce a typical medical image to about half of its original size. For example, a digital mammogram with a pixel size of 50 µm is approximately of size 5000×5000 pixels with 12 bits per pixel, and thus needs about 50 MB of storage without compression. This clearly demonstrates the need for lossy compression.

Lossy image compression methods are usually designed to preserve perceived image quality by removing subtle details that are difficult to see with the human eye. The quality measure frequently used for evaluation of the distortion in a compressed image is the mean-square error (MSE). However, in medical imaging, the distortion of an image is defined as the impact that compression causes to diagnostic accuracy, and finally to clinical actions taken on the basis of the image [1]. In this application area, there is an evident conflict between two opposite goals: achieving a high compression ratio and maintaining diagnostically lossless reconstruction accuracy. One possible way to alleviate this conflict is to design an image compression method that uses lossy compression, but saves more details in important regions of the image than in other regions. General-purpose image compression methods can also be considered to fit into this scheme; important regions and details are those that the human visual system is sensitive to. In medical imaging, the definition of important regions involves application-specific medical knowledge. Obviously, the accuracy of this knowledge is crucial for good performance of the system. If the criteria for important regions are too loose, the gain in compression ratio is lost. On the other hand, with too strict criteria the quality requirements are not met, since some important regions are treated erroneously as unimportant.

Today, the standard in lossy image compression is the JPEG [2] algorithm, which is based on scalar quantization of the coefficients of windowed discrete cosine transforms (DCT), followed by entropy coding of the quantization results. JPEG is generally accepted and works well in most cases, but because it uses the DCT and divides the image into blocks of fixed size, it may distort or even eliminate small and subtle details. This can be a serious drawback in digital mammography, where images contain a large number of diagnostically important small low-contrast details that must preserve their shape and intensity.

Wavelet-based image compression methods are popular and some of them can be considered to be "state of the art" in general-purpose lossy image compression (see [6] for an introduction to the topic). Wavelet compression methods can be divided into three stages: the wavelet transform, lossy quantization and encoding of the wavelet coefficients, and lossless entropy coding [14]. The wavelet transform is used to de-correlate the coefficients representing the image. The transform collects the image energy into a relatively small number of coefficients, compared to the original highly correlated pixel representation. In the quantization phase this sparse representation and the dependencies between coefficients are exploited with specially tailored quantization and coding schemes. The widely known embedded zerotree encoding (EZW) by Shapiro [12] is an excellent example of such a coding scheme, and also a good reference to wavelet-based image compression in general.

One of the most advanced wavelet-based image compression techniques is SPIHT (Set Partitioning In Hierarchical Trees) by Said and Pearlman [7,11]. SPIHT is clearly a descendant of EZW, using a similar zerotree structure and bitplane coding. In bitplane coding the bits of the wavelet coefficients are transmitted in the order of their importance, i.e. the coefficients are encoded gradually with increasing accuracy. Because of this, the encoding can be stopped at any stage. The decoder then approximates the values of the original coefficients with a precision depending on the number of bits coded for each coefficient. This property, called embedded coding, is the main reason for choosing SPIHT as the basis of the Variable Quality Image Compression system (VQIC). The implementation of the variable quality property is straightforward in embedded coding: the encoding of the coefficients of the whole image is ceased somewhere in the middle, and subsequently only bits of coefficients that influence the important regions are encoded.

Another reason for choosing SPIHT is that it appears to perform very well in terms of compression ratio; in the case of digital chest X-rays, a compression ratio of 40:1 has been reported to cause no significant difference when compared to the original in a radiologist evaluation [4]. In another study, SPIHT compression of mammograms to a ratio of 80:1 has been found to yield image quality with no statistically significant differences from the original mammogram [9]. A further indication of good performance is the fact that the output of SPIHT encoding is so dense that additional compression with lossless entropy coding gives an extra gain of only a few per cent [11].

The VQIC technique can be applied to any image that is spatially segmentable into a set of regions that must be saved in better quality. We do not cover the segmentation problem in this paper since it is completely application-specific. Since we are targeting applications where the important regions are small, the segmentation is done with some kind of feature detector. The feature detection task for the VQIC purpose is considerably easier compared to applications where the interest is in the presence or absence of the feature. In VQIC, a moderate number of false positive detections can be tolerated, as long as all of the true features are detected. Thus, most existing feature detection algorithms are suitable, because they can be tuned to be oversensitive. Several suitable fully automatic segmentation methods exist for this purpose in the field of medical imaging [5], especially for mammography [3].

Before describing further details, we briefly discuss relevant research. VQIC has been used in the compression of image sequences in video conferencing [10]. In this work, the pixel values are predicted and the prediction errors are transformed by 2D-DCT. Coefficients of blocks with minor importance are quantized at a coarser level than more important details, heads and shoulders. In another study, the importance of a region is determined on the basis of the visibility of distortions to the human eye [8]. This information is used in the construction of a constant quality MPEG stream by adjusting quantization parameters defined by the MPEG standard. The focus in both papers is the segmentation of important regions, and VQIC is achieved by variable quantization of DCT coefficients. A recent paper by Shin et al. describes a selective compression technique which integrates detection and compression algorithms into one system [13]. The compression technique used is intraband coding of wavelet packet transform coefficients, where variable quality is achieved by scaling the wavelet coefficients in important regions to increase their priority in coding. The method is tested with a digital mammogram, where detected microcalcifications are considered as ROIs. Even though the aims of this work are close to ours, the actual methods differ considerably.

Our work is organized as follows. In Section 2 we introduce an algorithm called variable quality SPIHT (vqSPIHT), which is basically a reimplementation of SPIHT with the added VQIC functionality. We revise the memory organization of SPIHT to use matrix-based data structures. This new implementation reduces the working storage requirements of the algorithm considerably. In Section 3 we discuss the compression performance of the vqSPIHT algorithm, and show with two examples that it can be superior to SPIHT or JPEG. We first demonstrate this with a set of details in a high contrast drawing compressed to several bit-rates with all three compression algorithms. We also discuss a more relevant application for VQIC, the compression of digital mammograms, and make a comparison between SPIHT and vqSPIHT compressed images.

2. vqSPIHT algorithm

In this section, we explain informally the basic ideas behind the SPIHT and vqSPIHT algorithms to facilitate the reading of the vqSPIHT algorithm in a pseudo-code format. We also discuss the implementation based on matrix data structures and its implications for practical memory requirements.

2.1. Structure of the wavelet transformed coefficient table

The wavelet transform converts an image into a coefficient table with approximately the same dimensions as the original image. Fig. 1 shows the structure of the wavelet coefficient table, which contains three wavelet coefficient pyramids (pyramids A, B and C) and one table of scaling coefficients S. The scaling coefficients represent roughly the mean values of larger parts of the image, and the wavelet coefficients represent details of various sizes. Since in practice the transform is stopped before the scaling table S would shrink to a single coefficient, table S looks like a miniature version of the original image. The top levels of the three wavelet pyramids are located adjacent to the scaling table (level three in Fig. 1), and contain coefficients representing large details, whereas coefficients at level zero contribute mainly to the smallest details in the image. Pyramid A contains coefficients for vertical details and pyramid B respectively for horizontal details. The third pyramid C contains correction coefficients needed in the reconstruction of the image from pyramids A and B and table S.

Fig. 1. An example of a wavelet coefficient table that contains three four-level pyramids: A, B and C. Scaling coefficients are located on the square denoted by S.

In the SPIHT algorithm, the pyramids are divided into sub-pyramids, which correspond to zerotrees in EZW. A sub-pyramid has a top element somewhere in the coefficient table, and contains four coefficients one level lower in the corresponding spatial location in the same pyramid, 16 elements two levels lower, and so on. The sub-pyramids are extended to the scaling coefficients S in the following way. The scaling coefficients are grouped into groups of four. The coefficient in the upper left corner has no descendants, whereas the three remaining coefficients in the group (upper right corner, lower left corner and lower right corner) serve as top elements of three sub-pyramids in pyramids A, B and C in corresponding order.

In our approach to VQIC, we must determine which coefficients contribute to the value of a given pixel in the original image. In an octave-band decomposition, which we use, a coefficient of any pyramid on a higher level corresponds to four coefficients on the next level at the same spatial location. We use this rule of multiples of four in choosing the important coefficients.
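As an illustration of this parent-child rule, the following sketch (our own, not part of the original implementation) assumes the usual index convention in which a coefficient at position (i, j) on one pyramid level covers the 2x2 block of coefficients starting at (2i, 2j) on the next finer level of the same orientation pyramid:

    def children(i, j):
        """Positions of the four children of the coefficient at (i, j).

        Assumes the conventional octave-band indexing where (i, j) on a
        coarser level corresponds to the 2x2 block starting at (2i, 2j)
        on the next finer level of the same pyramid (an assumption made
        for illustration only).
        """
        return [(2 * i, 2 * j), (2 * i + 1, 2 * j),
                (2 * i, 2 * j + 1), (2 * i + 1, 2 * j + 1)]

    def descendants(i, j, levels):
        """All positions below (i, j) within the given number of finer levels."""
        found, frontier = [], [(i, j)]
        for _ in range(levels):
            frontier = [c for p in frontier for c in children(*p)]
            found.extend(frontier)
        return found

For example, descendants(3, 5, 2) lists the 4 + 16 = 20 coefficients of the corresponding sub-pyramid over two finer levels.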

2.2. The basic functioning of SPIHT

To understand how SPIHT works, one must keep in mind that it is not a complete image compression scheme, but a method tailored for optimal embedded encoding of wavelet transformed image coefficients. The encoding is optimal in the sense of MSE. SPIHT does not presuppose any particular wavelet transform. The only requirement is that the transform has the octave band decomposition structure, as described above. Also, the optimal encoding with respect to MSE is achieved only if the transform has the energy conservation property. In the implementation of vqSPIHT, we use the same biorthogonal B97 wavelet transform [14] that is used in SPIHT.

2.2.1. Bitplane coding in SPIHT

The optimal progressive coding of SPIHT is implemented with a bitplane coding scheme. The order of coding is based on the energy saving property stating that the larger a wavelet coefficient is, the more its transmission reduces the MSE. Furthermore, since SPIHT uses uniform scalar quantization, transmission of a more significant bit in any coefficient reduces the MSE more than transmission of a less significant bit in a possibly larger coefficient [11].

According to this principle, all coefficients are sorted into decreasing order by the number of significant bits. The number of significant bits in the coefficient having the largest absolute value is denoted by n. The output is generated by transmitting first all the nth bits of coefficients that have at least n significant bits, then the (n-1)th bits of coefficients that have at least (n-1) significant bits, and so on. Because the most significant bit of a coefficient is always one, the sign of the coefficient is transmitted in place of the most significant bit.

In addition to the transmitted bitplanes, the sorting order and the length of each bitplane are needed in the decoder to resolve the location of each transmitted bit in the reconstructed wavelet coefficient table. This information is not transmitted explicitly; instead, the same algorithm is used in both the encoder and the decoder, and all branching decisions made in the encoder are transmitted to the decoder. The branching decisions are transmitted interleaved with the bitplane coded bits and the signs of coefficients. Because of the progressive nature of SPIHT coding, the transmitted bit-stream can be truncated at any point and the original coefficient matrix approximated with optimal accuracy with respect to the number of transmitted bits. For each coefficient, the most significant not yet transmitted bit is set to one, and the rest to zero, thus achieving a good approximation under uniform quantization.
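A minimal sketch of this reconstruction rule (our own illustration; the function and its arguments are not taken from the paper): given how many low bitplanes of a coefficient are still missing, the decoder keeps the received high-order bits, sets the most significant missing bit to one and clears the rest.

    def reconstruct_magnitude(received_high_bits, missing_bitplanes):
        """Approximate |c| when the lowest `missing_bitplanes` bitplanes
        have not been transmitted yet.

        received_high_bits -- the transmitted bits, already in place
                              (i.e. the true magnitude with the missing
                              low bits forced to zero)
        """
        if missing_bitplanes <= 0:
            return received_high_bits          # everything was transmitted
        if received_high_bits == 0:
            return 0                           # coefficient still insignificant
        # Set the most significant untransmitted bit to one, rest to zero.
        return received_high_bits | (1 << (missing_bitplanes - 1))

With a coefficient of magnitude 19 (10011 in binary) truncated after its two highest bitplanes, received_high_bits is 16 and the rule yields 16 + 4 = 20, the midpoint of the interval [16, 24) in which the decoder knows the value lies.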

2.2.2. Exploitation of the pyramid structure of the wavelet transform

An important property of most natural images is that the low- and high-frequency components are spatially clustered together. In the wavelet coefficient pyramid this means that there is high correlation between the magnitudes of the coefficients of different levels at corresponding locations. Also, since the variance of the frequency components tends to decrease with increasing frequency, it is very probable that the coefficients representing fine details in a particular spatial location will be small if there is a region of small coefficients in the corresponding location on a coarser level of the pyramid. Thus, it is probable that there exist sub-pyramids containing only zeroes on the current bitplane. These zerotrees can be encoded with one bit, thus cutting down considerably the number of sorting decisions and also the branching decisions that must be transmitted to the decoder. The way these dependencies between coefficients are exploited in SPIHT coding is described in the presentation of the vqSPIHT algorithm.

2.3. The vqSPIHT algorithm

2.3.1. Extension of SPIHT to vqSPIHT

To expand the SPIHT algorithm to vqSPIHT, we define a Region Of Interest (ROI) as a region in the image that should be preserved in better quality than the rest of the image. The ROIs can be represented as a binary map that is highly compressible with simple run-length encoding and thus does not affect the bit-rate significantly.
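A minimal run-length encoder for such a binary map, given here only as an illustration (the paper does not specify the exact RLE variant used):

    def rle_encode(mask_rows):
        """Run-length encode a binary ROI map scanned in row-major order.

        Returns (first_value, run_lengths); a map consisting of a few
        compact ROIs collapses to a short list of long runs.
        """
        flat = [int(bool(v)) for row in mask_rows for v in row]
        runs, current, length = [], flat[0], 0
        for value in flat:
            if value == current:
                length += 1
            else:
                runs.append(length)
                current, length = value, 1
        runs.append(length)
        return flat[0], runs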

In the selective coding mode of vqSPIHT, only coefficients affecting the ROIs are coded. This mode is triggered when a certain percentage α of the wanted final output file size has been reached. The choice of α is important, and its best value is highly application-dependent. Some applications might demand a more sophisticated definition of α, depending for example on the file size, the area of the ROIs and some indicator of how the ROIs are scattered.

To implement the selective coding, we construct a look-up table (LUT) that is used in the function influence(i, j), which defines whether a coefficient (i, j) contributes to any ROI or not. The LUT is constructed by scaling each level of all three pyramids to the same size as the original image, and comparing the map of ROIs to the scaled levels. All the coefficients that overlap with any ROI are marked in the LUT.
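One possible way to build such a LUT is sketched below (our own formulation, assuming the coefficient-table layout of Fig. 1, image dimensions divisible by two at every level, and ignoring the spatial spread of the wavelet filters): the ROI map is OR-reduced over 2x2 blocks once per decomposition level and the reduced map is written into the three subbands of that level.

    import numpy as np

    def influence_lut(roi_map, levels):
        """Mark every coefficient whose spatial block overlaps an ROI.

        A simplification for illustration: a coefficient is taken to
        influence exactly the pixel block it is aligned with, so the ROI
        map is OR-reduced by a factor of two per level and copied into
        the three detail subbands of that level.
        """
        roi = np.asarray(roi_map, dtype=bool)
        lut = np.zeros_like(roi)
        for _ in range(levels):
            roi = (roi[0::2, 0::2] | roi[1::2, 0::2] |
                   roi[0::2, 1::2] | roi[1::2, 1::2])
            h, w = roi.shape
            lut[0:h, w:2 * w] = roi        # one detail orientation
            lut[h:2 * h, 0:w] = roi        # second detail orientation
            lut[h:2 * h, w:2 * w] = roi    # diagonal detail orientation
        lut[0:roi.shape[0], 0:roi.shape[1]] = roi   # scaling coefficients
        return lut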

2.3.2. Implementation with matrices

Instead of the lists that are used in the original implementation of SPIHT, our implementation of vqSPIHT uses two matrices for keeping track of significant and insignificant coefficients and sub-pyramids. With the matrix data structures, we can considerably reduce the working storage requirements of encoding and decoding.

We introduce a point significance matrix (PSM) to indicate whether a coefficient is known to be significant, insignificant or still has an unknown state. The labels of the PSM are coded with two bits and the dimensions of the PSM are the same as those of the coefficient table. We also need a sub-pyramid list matrix (SPLM), which is used for maintaining an implicit list of the sub-pyramids containing only insignificant coefficients. The list structure is needed because the order of the sub-pyramids must be preserved in the sorting algorithm. The dimensions of the SPLM are half of the dimensions of the coefficient matrix, because the coefficients on the lowest level of the pyramids cannot be top elements of sub-pyramids. There are two types of sub-pyramids. A sub-pyramid of type A contains all descendants of a particular coefficient, excluding the top coefficient itself. A sub-pyramid of type B is otherwise similar, but the immediate offspring of the top coefficient is excluded in addition to the top coefficient. The list structure within the SPLM is simple: the lower bits of an element tell the index of the next element in the list, and the type of the sub-pyramid is coded with the highest bit.
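For illustration, the packing of one SPLM element could look like the following sketch (assuming 32-bit entries; the entry width and the end-of-list convention are not specified in the paper):

    TYPE_B_FLAG = 1 << 31          # highest bit stores the sub-pyramid type

    def pack_splm(next_index, is_type_b):
        """Pack the index of the next list element and the type flag."""
        return next_index | (TYPE_B_FLAG if is_type_b else 0)

    def unpack_splm(entry):
        """Return (next_index, is_type_b) for a packed SPLM element."""
        return entry & (TYPE_B_FLAG - 1), bool(entry & TYPE_B_FLAG)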


2.3.3. Pseudo-code of vqSPIHT

The algorithms for the encoder and the decoder are similar. We use the notation "input/output x", which consists of two steps: in the encoder x is first calculated and then transmitted to the entropy coder; in the decoder x is received from the entropy decoder and then used in the construction of a new estimate for the coefficient. The variable nbc indicates the number of bits transmitted or received thus far. The constant filesize indicates the requested final file size in bits.

The function influence(i, j) is defined to be true if the element (i, j) in the LUT is marked to influence an ROI, and false otherwise. Let c_{i,j} be the value of the coefficient (i, j) in the wavelet coefficient table, and let the coordinate pair (i, j) denote either a single coefficient or a whole sub-pyramid of type A or B having the coefficient at (i, j) as its top element. The meaning of (i, j) will be evident from the context. Finally, we define the significance of a coefficient with the function S_n(c_{i,j}) as follows: S_n(c_{i,j}) = 1 if the number of bits after the first 1-bit in the absolute value of c_{i,j} is at least n-1, otherwise S_n(c_{i,j}) = 0. The significance of a sub-pyramid is defined with the function S_n(i, j): S_n(i, j) = 0 if S_n(c) = 0 for all coefficients c belonging to sub-pyramid (i, j), otherwise S_n(i, j) = 1.
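In other words, S_n(c_{i,j}) = 1 exactly when |c_{i,j}| has at least n significant bits, i.e. |c_{i,j}| >= 2^(n-1). A small sketch of both significance tests (our own illustration; the positions of a sub-pyramid could be produced, for instance, by a descendant enumeration such as the children/descendants sketch given earlier):

    def coefficient_significant(c, n):
        """S_n for a single coefficient: 1 iff |c| >= 2**(n-1)."""
        return 1 if abs(c) >= (1 << (n - 1)) else 0

    def subpyramid_significant(table, positions, n):
        """S_n for a sub-pyramid given the positions of its coefficients."""
        return 1 if any(coefficient_significant(table[i][j], n)
                        for (i, j) in positions) else 0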

The input for both the encoder and the decoder is the wavelet coefficient table, the ROI map, α and filesize. The output of the decoder is an approximation of the original wavelet coefficient table. See Fig. 2 for an outline of the algorithm.

1. Initialization
   - Input/output n, the number of significant bits in the coefficient having the largest absolute value.
   - Construct the LUT according to the given ROIs.
   - Set nbc = 0.
   - Set the PSM label of all scaling coefficients (coefficients in the area S in Fig. 1) to insignificant and the PSM label of all other coefficients to unknown.
   - Create in SPLM a list of all scaling coefficients that have descendants and make them of type A.

2. Sorting step for PSM
   - For every element (i, j) in PSM do:
     - If (nbc/filesize < α OR influence(i, j)) then
       - If (i, j) is labeled insignificant do:
         - Input/output S_n(c_{i,j}).
         - If S_n(c_{i,j}) = 1 then set the PSM label of (i, j) to significant and input/output the sign of c_{i,j}.
       - Else if (i, j) is labeled significant do:
         - Input/output the n-th most significant bit of |c_{i,j}|.

3. Sorting step for SPLM
   - For each element (i, j) in the list in SPLM do:
     - If sub-pyramid (i, j) is of type A AND (nbc/filesize < α OR influence(i, j)) then
       - Input/output S_n(i, j).
       - If S_n(i, j) = 1 then
         - For each (k, l) belonging to the immediate offspring of (i, j) do:
           - If (nbc/filesize < α OR influence(k, l)) then
             - Input/output S_n(c_{k,l}).
             - If S_n(c_{k,l}) = 1 then set the PSM label of (k, l) to significant and input/output the sign of c_{k,l}, else set the PSM label of (k, l) to insignificant.
         - If (i, j) is not on one of the two lowest levels of the pyramid then move (i, j) to the end of the list in SPLM and change its type to B, else remove (i, j) from the list in SPLM.
     - If sub-pyramid (i, j) is of type B AND (nbc/filesize < α OR influence(i, j)) then
       - Input/output S_n(i, j).
       - If S_n(i, j) = 1 then
         - For each (k, l) belonging to the immediate offspring of (i, j) do:
           - If (nbc/filesize < α OR influence(k, l)) then add (k, l) to the end of the list in SPLM as a sub-pyramid of type A.
         - Remove (i, j) from the list in SPLM.

4. Quantization-step update
   - If n > 0 then
     - Decrement n by 1.
     - Jump to the beginning of the PSM sorting step 2.


Fig. 2. The vqSPIHT matrix implementation is illustrated in the upper half of the picture: the wavelet coefficients on the left are processed one bit-level at a time. Each bitplane is processed in two steps (2 and 3). The first step tests the significance of each coefficient that is labeled insignificant in the PSM matrix and transmits the necessary information. The second step processes all the trees in SPLM and sets new points in PSM as significant or insignificant. The bottom part of the picture illustrates the thresholding of the transmitted data. After the alpha limit has been reached, only bits of coefficients and decision information that affect the ROI are sent.

The original SPIHT algorithm always transmits bits in two alternating phases: in the first phase the branching decisions of the sorting step and the signs of new significant coefficients are transmitted. In the second phase the bits of all significant coefficients on the current bitplane are transmitted. Interrupting the first phase can cause transmission of branching decisions that cannot be used in the reconstruction. In vqSPIHT, we have therefore combined the sending of the significant coefficient bits into the sorting phase to avoid this problem.

3. Test results

3.1. Numerical quality indicators

As a measure of image quality, we use the peak signal-to-noise ratio (PSNR), computed for the whole image and separately for the ROIs:

    PSNR = 10 log10( (2^b - 1)^2 / MSE ) dB,

where b is the number of bits per pixel of the original image and MSE is the mean-square error of the reconstruction.
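A small sketch of both quality measures as used here (our own code; the peak value is taken from the nominal bit depth, 8 bpp for the comic image and 12 bpp for the mammogram):

    import numpy as np

    def psnr(original, reconstructed, bits_per_pixel):
        """Peak signal-to-noise ratio in dB for images of the given bit depth."""
        a = np.asarray(original, dtype=np.float64)
        b = np.asarray(reconstructed, dtype=np.float64)
        mse = np.mean((a - b) ** 2)
        if mse == 0.0:
            return float("inf")                      # lossless reconstruction
        peak = (1 << bits_per_pixel) - 1             # 255 or 4095
        return 10.0 * np.log10(peak ** 2 / mse)

    def psnr_in_rois(original, reconstructed, roi_map, bits_per_pixel):
        """PSNR restricted to the pixels marked in a boolean ROI map."""
        m = np.asarray(roi_map, dtype=bool)
        return psnr(np.asarray(original)[m],
                    np.asarray(reconstructed)[m], bits_per_pixel)

The infinite PSNR reported for GIF in Table 1 corresponds to the lossless case (zero MSE).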


Fig. 3. The original comic test image with three ROIs marked.

Table 1
Comparison between SPIHT, vqSPIHT and JPEG for the comic image (Fig. 3)

    Method              bpp     PSNR    PSNR in ROIs
    GIF ²               4.456   ∞       ∞
    JPEG                1.01    26.01   23.76
    JPEG                0.50    23.09   20.21
    JPEG                0.29    20.43   17.81
    SPIHT               1.00    27.56   26.58
    SPIHT               0.50    24.06   22.13
    SPIHT               0.25    21.00   18.63
    vqSPIHT α=90%       1.00    27.26   39.85
    vqSPIHT α=90%       0.50    23.83   29.76
    vqSPIHT α=90%       0.25    20.75   23.41
    vqSPIHT α=80%       1.00    26.72   42.46
    vqSPIHT α=80%       0.50    23.40   34.36
    vqSPIHT α=80%       0.25    20.37   26.85
    vqSPIHT α=80%       0.15    18.55   22.13
    vqSPIHT α=80%       0.10    17.45   18.94
    vqSPIHT α=80%       0.05    15.15   14.55

² A common lossless image compression method.

It is well known that the PSNR does not give an objective estimate of image quality, but it correlates with the amount of distortion in the image and it can be used for comparing the quality of images compressed with algorithms causing similar distortion. As a measure of the amount of compression we use the number of bits per pixel (bpp).

3.2. The comic test image

The vqSPIHT algorithm was constructed for the needs of digital mammography. However, mammograms are rather smooth and thus easily hide compression artifacts. To better illustrate the effect of VQIC, we use a comic picture of size 420×480 with 8 bpp as the first test image (Fig. 3). The GIF ², JPEG, SPIHT and vqSPIHT algorithms are used to compress the image, with the three ROIs covering 2.4% of the image marked on Fig. 3. The compression results are presented in Table 1 and Fig. 4.

As seen in Table 1, the PSNR values of SPIHT and vqSPIHT are similar. With less compression (large bpp), JPEG is also comparable in terms of PSNR, but its visual quality decreases rapidly with decreasing values of bpp (Fig. 4). In JPEG, the resulting file size cannot be specified exactly in advance, and thus the bpp values of JPEG are slightly different from those of SPIHT and vqSPIHT. There are no big differences in the performance of these techniques when only the overall PSNR is evaluated.

When considering the PSNR of the ROIs, the situation changes radically. SPIHT performs significantly better than JPEG, but the improvement achieved with vqSPIHT is even greater. We have used quite high α values: 80% and 90% of the size of the output file. Even with these values, the PSNR in the ROIs is considerably higher in the vqSPIHT compressed images than in the SPIHT compressed images, while good overall quality (PSNR of the whole image) is still maintained. This is partly due to the fact that the coefficients that influence the ROIs also contribute to the areas outside the ROIs. Thus, the overall image quality is still improving outside the ROIs after the trigger value α has been reached.

Fig. 4 shows an 88×66 pixel region taken from the comic image and compressed with JPEG, SPIHT and vqSPIHT with different bpp values. The original part of the image is shown in the lower right corner. The selected part includes an ROI, marked on the original image.


Fig. 4. A region of the comic test image containing an ROI compressed with JPEG, SPIHT and vqSPIHT. The sub-regions of the figure are, from left to right and top to bottom:

    JPEG 1.00 bpp             JPEG 0.50 bpp             JPEG 0.25 bpp
    SPIHT 1.00 bpp            SPIHT 0.50 bpp            SPIHT 0.25 bpp
    vqSPIHT α=90% 1.00 bpp    vqSPIHT α=90% 0.50 bpp    vqSPIHT α=90% 0.25 bpp
    vqSPIHT α=80% 1.00 bpp    vqSPIHT α=80% 0.50 bpp    vqSPIHT α=80% 0.25 bpp
    vqSPIHT α=80% 0.15 bpp    vqSPIHT α=80% 0.10 bpp    ROI in original

The "rst row of Fig. 4 shows the limit bpp value,

where JPEG clearly fails to produce acceptable

quality. The image compressed to 0.50bpp is still

recognizable, but the 0.25 bpp image is not. Even

the 1.00 bpp image compressed with JPEG has

high-frequency noise around the sharp edges. In the

0.5 bpp and 0.25 bpp images the blocking e!ect

introduces additional artifacts. In the 1.00 bpp

SPIHT image there is no high-frequency noise.

When the bpp value gets smaller, the image gets

smoother, and it thus loses small high-frequency

details. However, even the 0.25 bpp SPIHT image is

recognizable.

The overall image quality of the 90% and 80% 1.00 bpp vqSPIHT images is very close to that of the 1 bpp SPIHT image. The visual quality of the 0.50 bpp SPIHT image is similar to that of the 0.25 bpp vqSPIHT (α=80%) image on the ROI. Note that the ROI of the 0.10 bpp vqSPIHT image is visually better than the ROI of the 0.25 bpp JPEG image, and of comparable quality with the ROI of the 0.25 bpp SPIHT image. In this image, 0.25 bpp corresponds to a compression ratio of 32:1. It should be noted that SPIHT is designed to perform well on natural images. A comic drawing is a difficult case for SPIHT and thus also for vqSPIHT.

The performance of vqSPIHT was good when the ROI covered only 2.4% of the picture. With an increase of the ROI, α must decrease to compensate for the larger number of coefficients in the ROI in order to maintain the same quality. Because bits are coded in the order of their importance, the bits used in coding of the ROI can add considerably less to the whole-image PSNR than the bits outside the ROI. As seen in Fig. 5, the benefits of VQIC rapidly disappear with a large ROI.

Fig. 5. The two images on the left are ROI masks used in the compression of the two rightmost images, where the ROI covers 16% (the upper image) and 46% of the image. The first gray-scale image is compressed without an ROI (α=100%), while the 16% ROI (α=50%) is used in the second image and the 46% ROI (α=50%) in the last image. All the images are compressed with the same 0.25 bpp bit-rate.

Fig. 6. The original mammogram test image.

3.3. The mammogram test image

The second test image (Fig. 6) is a mammogram of size 2185×2925 with 12 bpp. The mammogram test image has been compressed only with vqSPIHT. However, setting α to 100% makes vqSPIHT function similarly to SPIHT.

In this example, we assume that the micro-calcifications are the only important diagnostic details of a mammogram that are easily lost in compression. Note that in a study of the applicability of vqSPIHT to digital mammograms, other signs of cancer, like stellate lesions and nodules, should also be considered. A micro-calcification location map, shown in Fig. 7, was generated with a micro-calcification detection algorithm slightly modified from the morphological segmentation algorithm of Dengler et al. [3]. The detection was tuned to be oversensitive to make sure that all micro-calcifications were detected. Because of this, the algorithm also detected a large number of false calcifications, including the skin-line of the breast. In this test case, there were 323 ROIs covering 5% of the whole mammogram.

Fig. 7. Micro-calcifications found in the test mammogram (shown as black), and the ROIs (dotted rectangles around the micro-calcifications).

Fig. 8. PSNR of the whole mammogram for vqSPIHT as a function of α and bpp.

Fig. 9. PSNR of the ROIs in the mammogram for vqSPIHT as a function of α and bpp.

We used the bpp values 0.05, 0.10, 0.15, 0.25, 0.50, 0.75 and 1.00, and let α take the values 30, 40, 50, 60, 70, 80, 90 and 100 per cent of the resulting file size. Fig. 8 shows the PSNR of the whole image as a function of α and the bpp. Lowering α decreases the PSNR of the whole image, but the effect remains moderate with reasonable α values.

Fig. 9 shows the PSNR calculated only on the ROIs as a function of α and the bpp. The benefit of VQIC on the ROIs is clearly seen in comparison with Fig. 8. To point out, the PSNR of the ROIs in the 1.00 bpp mammogram jumps from 40.29 to 54.87 dB when α decreases from 100% (i.e. SPIHT) to 80%. This causes a very moderate change in the PSNR of the whole image, which decreases from 38.98 to 38.38 dB.

Fig. 10. A region of vqSPIHT compressed mammograms with different α and bpp rates. The bpp values of the columns from left to right are 1.00, 0.50 and 0.15. The values of α starting from the uppermost row are 100%, 90%, 70% and 50%. All the images have been histogram equalized to ease the evaluation. The three images in the last row from left to right are: the histogram equalized uncompressed region, the original uncompressed region and a bit map of the detected micro-calcifications with the ROIs marked.

Fig. 10 shows a region containing a micro-calcification cluster taken from a mammogram that has been compressed using various bpp and α values. A visual comparison between SPIHT and vqSPIHT shows that the mammogram can be compressed to a significantly lower bpp value with vqSPIHT than with SPIHT (α=100%) while achieving similar preservation of the micro-calcifications in the ROIs. The region in the upper left corner has been compressed with SPIHT to a compression ratio of 12:1. Even with this rather modest compression, a comparison with the original (lower left corner) reveals that the edges of the calcifications have become blurred, some small calcifications have disappeared and some have merged together. When keeping the same bpp of 1.00, we notice that with α=70% the micro-calcifications are virtually indistinguishable from the original. With this choice of α, the PSNR of the whole image decreases from 38.98 to 37.51 dB. Now, keeping α=70%, the bpp value 0.15 (compression ratio 80:1) gives a visually comparable reconstruction to the 1.00 bpp SPIHT image (compression ratio 12:1). In this case, the PSNR of the whole mammogram decreases to 34.08 dB. This is, however, virtually the same as the PSNR of the SPIHT 0.15 bpp compressed image, which is 34.10 dB.

3.4. Practical memory requirements of the implementation

We first implemented the algorithm using the list data structures of the original SPIHT algorithm [11], but found that this required a large amount of internal memory. The amount of memory needed was very dependent on the values of bpp and α. Typically, the compression of a 12 MB mammogram required at least 120 MB of internal memory during encoding, but with some combinations of bpp and α the memory requirement was considerably larger. The memory is mainly used for representing the coefficient table and the lists that are constantly scanned through. Thus, paging the memory to hard disk increases the execution time drastically. The memory requirements can be made independent of the bpp ratio and α by reimplementing the algorithm using the matrix data structures presented previously. The working memory space dropped to about 50 MB, and about 40% of that could be paged to disk without a significant increase in the execution time. All of the needed memory could be allocated at once, which made the memory management efficient in comparison to the slow per-node dynamic memory management of explicit list structures.

4. Summary and conclusions

The idea of VQIC is to use more bits for important details at the cost of unimportant details such as noise. The compression method can be applied in applications where certain small regions in the image are especially important. We have shown that in our target application, the compression of digital mammograms, the variable quality compression scheme can improve the compression efficiency considerably. The variable quality property has been integrated into SPIHT, which is one of the best general-purpose compression techniques. We have also simplified the implementation of SPIHT and reduced the working storage requirements significantly compared to the original implementation. Our version of the algorithm allows the compression of large images such as mammograms with a standard PC. Research on the clinical applicability of the VQIC technique in the context of a very large digital mammogram archive is planned.

Acknowledgements

The authors would like to thank M.Sc. J. Näppi for providing the micro-calcification detection software.

References

[1] C.N. Adams, A. Aiyer, B.J. Betts et al., Image quality in lossy compressed digital mammograms, in: Proc. 3rd Internat. Workshop on Digital Mammography, Chicago, USA, 1996.

[2] V. Bhaskaran, K. Konstantinides, Image and Video Compression Standards, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1995, Chapter 5.

[3] J. Dengler, S. Behrens, J.F. Beraga, Segmentation of microcalcifications in mammograms, IEEE Trans. Med. Imaging 12 (4) (December 1993).

[4] B.J. Erickson, A. Manduca, K.R. Persons, Clinical evaluation of wavelet compression of digitized chest X-rays, in: Proc. SPIE 3031 Medical Imaging: Image Display, Newport Beach, CA, 1997.

[5] M. Giger, H. MacMahon, Image processing and computer-aided diagnosis, Radiologic Clinics of North America 34 (3) (May 1996) 565–596.

[6] M. Hilton, B.D. Jawerth, A.N. Sengupta, Compressing still and moving images, Multimedia Systems 2 (December 1994) 218–227.

[7] A. Manduca, A. Said, Wavelet compression of medical images with set partitioning in hierarchical trees, in: Proc. SPIE 2704 Medical Imaging: Image Display, Newport Beach, CA, 1996.

[8] P.J. Meer, R.L. Lagendijk, J. Biemond, Local adaptive thresholding to reduce the bit rate in constant quality MPEG coding, in: Proc. Internat. Picture Coding Symp., Melbourne, Australia, 1996.

[9] S.M. Perlmutter, P.C. Cosman, R.M. Gray, R.A. Olshen, D. Ikeda, C.N. Adams, B.J. Betts, M.B. Williams, K.O. Perlmutter, J. Li, A. Aiyer, L. Fajardo, R. Birdwell, B.L. Daniel, Image quality in lossy compressed digital mammograms, Signal Processing 59 (2) (June 1997) 189–210.

[10] R. Plompen, J. Groenveld, F. Booman, D. Boekee, An image knowledge based video codec for low bitrates, in: Proc. SPIE 804 Advances in Image Processing, 1987.

[11] A. Said, W.A. Pearlman, A new fast and efficient image codec based on set partitioning in hierarchical trees, IEEE Trans. Circuits Systems Video Technol. 6 (June 1996) 243–250.

[12] J.M. Shapiro, Embedded image coding using zerotrees of wavelet coefficients, IEEE Trans. Signal Process. 41 (12) (December 1993).

[13] D. Shin, H. Wu, J. Liu, A region of interest (ROI) based wavelet compression scheme for medical images, in: Proc. SPIE 3031 Medical Imaging: Image Display, Newport Beach, CA, 1997.

[14] M. Vetterli, J. Kovačević, Wavelets and Subband Coding, Prentice-Hall, Englewood Cliffs, NJ, 1995.


Limiting distortion of a wavelet image codec

Joonas Lehtinen

Published in Acta Cybernetica, vol. 14, no. 2, pages 341–356, 1999.


Limiting Distortion of a Wavelet Image Codec

Joonas Lehtinen∗

Acta Cybernetica 14 (1999) 341-356

Abstract

A new image compression algorithm, Distortion Limited Wavelet Image Codec (DLWIC), is introduced. The codec is designed to be simple to implement, fast and to have modest requirements for the working storage. It is shown how the distortion of the result can be calculated while progressively coding a transformed image, and thus how the mean square error of the result can be limited to a predefined value. The DLWIC uses zerotrees for efficient coding of the wavelet coefficients. Correlations between different orientation components are also taken into account by binding together the coefficients of the three different orientation components in the same spatial location. The maximum numbers of significant bits in the coefficients of all subtrees are stored in a two-dimensional heap structure that allows the coder to test the zerotree property of a subtree with only one comparison. The compression performance of the DLWIC is compared to the industry standard JPEG compression and to an advanced wavelet image compression algorithm, vqSPIHT. An estimation of execution speed and memory requirements for the algorithm is given. The compression performance of the algorithm seems to exceed the performance of the JPEG and to be comparable with the vqSPIHT.

1 Introduction

In some digital image archiving and transferring applications, especially in medical imaging, the quality of images must meet predefined constraints. The quality must often be guaranteed by using a lossless image compression technique. This is somewhat problematic, because the compression performance of the best known lossless image compression algorithms is fairly modest; the compression ratio typically ranges from 1:2 to 1:4 for medical images [5].

Lossy compression techniques generally offer much higher compression ratios than lossless ones, but this is achieved by losing details and thus decreasing the quality of the reconstructed image. Compression performance and also the amount of distortion are usually controlled with parameters which are not directly connected to the image quality as defined by the mean square error [1] (MSE). If a lossy technique is used, the quality constraints can often be met by overestimating the control parameters, which results in worse compression performance.

∗ Turku Centre for Computer Science, University of Turku, Lemminkäisenkatu 14 A, 20520 Turku, Finland, email: [email protected], WWW: http://jole.fi/


In this paper a new lossy image compression technique called Distortion Limited Wavelet Image Codec (DLWIC) is presented. The DLWIC is related to the embedded zerotree wavelet coding (EZW) [9] technique introduced by J.M. Shapiro in 1993. Also some ideas from SPIHT [8] and vqSPIHT [3] have been used. DLWIC solves the problem of distortion limiting (DL) by allowing the user of the algorithm to specify the MSE of the decompressed image as the controlling parameter for the compression algorithm.

The algorithm is designed to be as simple as possible, which is achieved by binding together the orientation bands of the octave band composition and coding the zerotree structures and the wavelet coefficient bits in the same pass. A special auxiliary data structure called a two-dimensional heap is introduced to make the zerotree coding simple and fast. The DLWIC uses only little extra memory in the compression and is thus suitable for the compression of very large images. The technique also seems to provide competitive compression performance in comparison with the vqSPIHT.

In the DLWIC, the image to be compressed is first converted to the wavelet domain with the orthonormal Daubechies wavelet transform [10]. The transformed data is then coded by bit-levels using a scanning algorithm presented in this paper. The output of the scanning algorithm is coded using the QM-coder [7], an advanced binary arithmetic coder.

The scanning algorithm processes the bits of the wavelet transformed image data in decreasing order of their significance in terms of MSE, as in the EZW. This produces a progressive output stream: the algorithm can be stopped at any phase of the coding and the already coded output can be used to construct an approximation of the original image. This feature can be used when a user browses images over a slow connection to the image archive: the image can be viewed immediately after only a few bits have been received; the subsequent bits then make it more accurate. The DLWIC uses the progressivity by stopping the coding when the quality of the reconstruction exceeds a threshold given as a parameter to the algorithm. The coding can also be stopped when the size of the coded output exceeds a given threshold. This way both the MSE and the bits per pixel (BPP) value of the output can be accurately controlled.

After the introduction, the structure of the DLWIC is explained. A quick overview of the octave band composition is given and it is shown with an example how the wavelet coefficients are connected to each other in different parts of the coefficient matrix.

Some general ideas of the bit-level coding are then explained (2.3) and it is shown how the unknown bits should be approximated in the decoder. The meaning of zerotrees in DLWIC is then discussed (2.4). After that an auxiliary data structure called the two-dimensional heap is introduced (2.5). The scanning algorithm is given as pseudo code (2.6).

The distortion limiting feature is introduced and the stopping of the algorithm on certain stopping conditions is discussed (2.7). Finally we show how separate probability distributions are allocated for coding the bits with the QM-coder in different contexts (2.8).

The algorithm is tested with a set of images and the compression performance is compared to the JPEG and the vqSPIHT compression algorithms (3). The variation in the quality achieved by the constant quantization in the JPEG is demonstrated with an example. Also an estimation of the speed and memory usage is given (3.2).

Figure 1: The structure of the DLWIC compression algorithm (block diagram: spatial image data, wavelet transform, wavelet domain image data, scanning, binary scanning decisions and bits of the coefficients, statistical coding, compressed image; decompression uses statistical decoding, scanning using precalculated decisions and the inverse wavelet transform).

2 DLWIC algorithm

2.1 Structure of the DLWIC and the wavelet transform

The DLWIC algorithm consists of three steps (Figure 1): 1) the wavelet transform, 2) scanning the wavelet coefficients by bit-levels and 3) coding the binary decisions made by the scanning algorithm and the bits of the coefficients with the statistical coder. The decoding algorithm is almost identical: 1) the binary decisions and coefficient bits are decoded, 2) the coefficient data is generated using the same scanning algorithm as in the coding phase, but using the previously coded decision information, 3) the coefficient matrix is converted to a spatial image with the inverse wavelet transform.

The original spatial domain picture is transformed to the wavelet domain using the Daubechies wavelet transform [10]. The transform is applied recursively to the rows and columns of the matrix representing the original spatial domain image. This operation gives us an octave band composition (Figure 2). The left side (B) of the resulting coefficient matrix contains horizontal components of the spatial domain image, the vertical components of the image are on the top (A) and the diagonal components are along the diagonal axis (C). Each orientation pyramid is divided into levels; for example the horizontal orientation pyramid (B) consists of three levels (B0, B1 and B2). Each level contains details of different size; the lowest level (B0), for example, contains the smallest horizontal details of the spatial image. The three orientation pyramids have one shared top level (S), which contains the scaling coefficients of the image, representing essentially the average intensity of the corresponding region in the image. Usually the coefficients in the wavelet transform of a natural image are small on the lower levels and bigger on the upper levels (Figure 3). This property is very important for the compression: the coefficients of this highly skewed distribution can be coded using fewer bits.

Figure 2: Octave band composition produced by the recursive wavelet transform is illustrated on the left and the pyramid structure inside the coefficient matrix is shown on the right.
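A minimal sketch of producing such an octave band composition with the PyWavelets library (an illustration only; the library, the filter choice "db4" and the boundary mode are our assumptions, as the paper only states that an orthonormal Daubechies transform is used):

    import numpy as np
    import pywt

    def octave_band_table(image, levels, wavelet="db4"):
        """Stack a multi-level 2D wavelet decomposition into one matrix
        with the scaling coefficients S in a corner and the three
        orientation subbands of each level around it."""
        coeffs = pywt.wavedec2(np.asarray(image, dtype=np.float64),
                               wavelet, mode="periodization", level=levels)
        # 'periodization' keeps the subband sizes halving exactly per level,
        # so the standard stacked layout can be assembled directly.
        table, _ = pywt.coeffs_to_array(coeffs)
        return table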

2.2 Connection between orientation pyramids

Each level in the coefficient matrix represents certain property of the spatial domainimage in its different locations. Structures in the natural image contain almostalways both big and small details. In the coefficient matrix this means that ifsome coefficient is small, it is most likely that also the coefficients, representingsmaller features of the same spatial location, are small. This can be seen in Figure3: different levels of the same coefficient pyramid look similar, but are in differentscales. The EZW takes advantage of this by scanning the image in depth first order,i.e. it scans all the coefficients related to one spatial location in one orientationpyramid before moving to another location. This way it can code a group of smallcoefficients together, and thus achieves better compression performance.

In a natural image, most of the features are not strictly horizontal or vertical, but contain both components. The DLWIC takes advantage of this by binding all three orientation pyramids together: the scanning is done only for the horizontal orientation pyramid (B), but the bits of all three coefficients, representing the three orientations of the same location and scale, are coded together. Surprisingly, this only slightly enhances the compression performance. The feature is however included in the DLWIC because of its advantages: it simplifies the scanning, makes the implementation faster and reduces the size of the auxiliary data structures.


Figure 3: An example of the Daubechies wavelet transform. The original 512×512 sized picture is on the left and its transform is presented with absolute values of the coefficients in logarithmic scale on the right.

2.3 Bit-level coding

The coefficient matrix of size W × H is scanned by bit-levels beginning from the highest bit-level nmax required for coding the biggest coefficient in the matrix (i.e. the number of significant bits in the biggest coefficient):

nmax = ⌊log₂(max{ |c_{i,j}| : 0 ≤ i < W ∧ 0 ≤ j < H }) + 1⌋,    (1)

where the coefficient in (i, j) is marked with c_{i,j}. The coefficients are represented using positive integers and sign bits that are stored separately. The coder first codes all the bits on the bit-level nmax of all the coefficients, then all the bits on bit-level nmax − 1 and so on, until the least significant bit-level 1 is reached or the scanning algorithm is stopped (Section 2.7). The sign is coded together with the most significant bit (the first 1-bit) of a coefficient. For example, the three coefficients c_{0,0} = −19₁₀ = −10011₂, c_{1,0} = 9₁₀ = 01001₂, c_{2,0} = −2₁₀ = −00010₂ would be coded as

    1100   0100   000   1011   110 ,    (2)
      5      4     3      2     1

where the corresponding bit-level numbers are marked under the bits coded on that level (without signs it would be 100 010 000 101 110, grouped by the same bit-levels 5, 4, 3, 2 and 1).
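As a small illustration of this bit-level representation (a sketch only; the helper names and the fixed three-coefficient example are not taken from the DLWIC implementation), the following C fragment computes nmax of equation (1) for the example coefficients and lists the magnitude bit of each coefficient level by level; in the real coder the signs are additionally coded together with the first 1-bits:

#include <stdio.h>
#include <stdlib.h>

/* Number of significant bits in |c|. */
static int significant_bits(long c)
{
    int n = 0;
    for (c = labs(c); c > 0; c >>= 1)
        n++;
    return n;
}

int main(void)
{
    long coeff[3] = { -19, 9, -2 };   /* the example coefficients above */
    int nmax = 0, n, i;

    for (i = 0; i < 3; i++)
        if (significant_bits(coeff[i]) > nmax)
            nmax = significant_bits(coeff[i]);

    /* Emit the magnitude bit of every coefficient, one bit-level at a time. */
    for (n = nmax; n >= 1; n--) {
        printf("level %d:", n);
        for (i = 0; i < 3; i++)
            printf(" %ld", (labs(coeff[i]) >> (n - 1)) & 1);
        printf("\n");
    }
    return 0;
}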

Because of the progressivity, the code stream can be truncated at any position and the decoder can approximate the coefficient matrix using the received information. The easiest way of approximating the unknown bits in the coefficient matrix would be to fill them with zeroes. In the DLWIC algorithm a more accurate estimation is used:


the first unknown bit of each coefficient for which the sign is known is filled with one and the remaining bits are filled with zeroes. For example, if the first seven bits of the bit-stream (2) have been received, the coefficients would be approximated as c_{0,0} = −20₁₀ = −10100₂, c_{1,0} = 12₁₀ = 01100₂, c_{2,0} = 0₁₀ = 00000₂.
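A sketch of this approximation rule in C (the function and argument names are illustrative assumptions, not the original implementation):

/* Approximate a coefficient when bits down to bit-level n have been decoded.
 * known_magnitude holds the decoded high bits of |c| (lower bits are zero),
 * sign is -1 or +1 and sign_known tells whether the first 1-bit has been seen. */
static long approximate(long known_magnitude, int sign, int sign_known, int n)
{
    long approx;

    if (!sign_known)
        return 0;                 /* nothing known yet: approximate by zero */

    approx = known_magnitude;
    if (n > 1)
        approx |= 1L << (n - 2);  /* set the first unknown bit to one */

    return sign < 0 ? -approx : approx;
}

For the example above, approximate(16, -1, 1, 4) gives −20, approximate(8, +1, 1, 4) gives 12 and approximate(0, 0, 0, 4) gives 0.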

2.4 Zerotrees in DLWIC

A bit-level is scanned by first coding a bit of a scaling coefficient (on the level S in Figure 2). Then recursively the three bits of the coefficients in the same spatial location on the next level of the orientation pyramids (A2, B2, C2) are coded. The scanning continues to the next scaling coefficient after all the coefficients in the previous spatial location on all the pyramid levels have been scanned.

We will define that a coefficient c is insignificant on a bit-level n if and only if |c| < 2^{n−1}. Because the coefficients on the lower pyramid levels tend to be smaller than on the higher levels and different sized details are often spatially clustered, the probability of a coefficient being insignificant is high if the coefficient on the higher level at the same spatial location is insignificant.

If an insignificant coefficient is found in the scanning, the compression algorithm will check whether any of the coefficients below the insignificant one is significant. If no significant coefficients are found, all the bits of those coefficients on the current bit-level are zeroes and can thus be coded with only one bit. This structure is called a zerotree.

One difference to the EZW algorithm is that the DLWIC scans all the orientations simultaneously and thus constructs only one shared zerotree for all the orientation pyramids. Also, the significance information is coded in the same pass as the significant bits of the coefficients, whereas the EZW and SPIHT algorithms use separate passes for the significance information.

2.5 Two dimensional significance heap

Performing a significance check for all the coefficients at a specific spatial location on all the pyramid levels is a slow operation. The DLWIC algorithm uses a new auxiliary data structure, which we call the two dimensional significance heap, to eliminate the slow significance checks.

The heap is a two dimensional data structure of the same size (number of elements) and shape as the horizontal orientation pyramid in the coefficient matrix. Each element in the heap gives the number of bits needed to represent the largest coefficient in any orientation pyramid at the same location on the same level or below it. Thus the scanning algorithm can find out whether there is a zerotree starting from a particular coefficient on a certain bit-level by comparing the number of the bit-level to the corresponding value in the heap.
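Assuming the heap is stored as a row-major array of 8-bit values indexed like the horizontal orientation pyramid (an illustrative layout, not necessarily that of the original code), the zerotree test performed during scanning reduces to a single comparison:

/* A zerotree rooted at heap position (x, y) exists on bit-level n exactly when
 * every coefficient covered by that element needs fewer than n bits. */
static int is_zerotree(const unsigned char *heap, int width, int x, int y, int n)
{
    return heap[y * width + x] < n;
}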

Here and in the rest of this paper we denote the height of the coefficient matrix with H, the width with W and the number of levels in the pyramid excluding the scaling coefficient level (S) with L. Thus the dimensions of the scaling coefficient level are Hs = H/2^L and Ws = W/2^L. Furthermore, the dimensions of the level in the two-dimensional heap where (x, y) resides are



W_{x,y} = Ws · 2^⌊log₂(max{1, y/Hs})⌋
H_{x,y} = Hs · 2^⌊log₂(max{1, y/Hs})⌋    (3)

Now the heap elements h_{x,y} can be defined with the functions ht(x, y), hc(x, y) and hs(x, y):

ht(x, y) = max{ h_{x, y+Hs}, ⌊log₂(|c_{x,y}|)⌋ + 1 }
hc(x, y) = max{ h_{2x,2y}, h_{2x+1,2y}, h_{2x,2y+1}, h_{2x+1,2y+1} }
hs(x, y) = ⌊log₂(max{ |c_{x,y}|, |c_{x+W_{x,y},y}|, |c_{x+W_{x,y}, y−H_{x,y}}| })⌋ + 1

h_{x,y} = { ht(x, y),                  if x < Ws ∧ y < Hs
          { hs(x, y),                  if x ≥ W/2 ∧ y ≥ H/2
          { max{ hs(x, y), hc(x, y) }, otherwise.    (4)

Note that the definitions (3) and (4) are only valid for the elements of the heap, where 0 ≤ y < H and 0 ≤ x < Ws · 2^⌊log₂(max{1, y/Hs})⌋. While the definition of the heap looks complex, we can construct the heap with a very simple and fast algorithm (Alg. 1).

2.6 Coding algorithm

The skeleton of the compression algorithm (Alg. 2) is straightforward: 1) the spatial domain image is transformed to the wavelet domain by constructing the octave band composition, 2) the two dimensional heap is constructed (Alg. 1), 3) the QM-coder is initialized, 4) the coefficient matrix is scanned by bit-levels by executing the scanning algorithm (Alg. 3) for each top level coefficient on each bit-level.

The decoding algorithm is similar. First an empty two dimensional heap is created by filling it with zeroes. Then the QM-decoder is initialized and the same scanning algorithm is executed in such a way that instead of calculating the decisions, it extracts the decision information from the coded data.

The scanning algorithm (Alg. 3) is the core of the compression scheme. It tries to minimize correlations between the saved bits by coding as many bits as possible with zerotrees. In the pseudo-code, Bit(x, n) returns the n:th bit of the absolute value of x and s_{i,j} denotes the sign of the coefficient in the matrix element (i, j). Bits are coded with the function QMCode(b, context), where b is the bit to be coded and context is the context used, as explained in Section 2.8. The context can be either a constant or some function of variables known to both the coder and the decoder. In both cases the value of the context is not important, but it should be unique for each combination of parameters. Stopping of the algorithm is queried with the function ContinueCoding(), which returns true if the coding should be continued. In order to calculate the stopping condition, the quality of the approximated resulting image must be tracked while coding. This is achieved by calling the function DLUpdate(n, x) every time after coding the n:th bit of the coefficient x. Both calculations are explained in Section 2.7. The dimensions of the matrix and its levels are denoted in the same way as in Section 2.5.


The scanning algorithm first checks the stopping condition. Then we check from the two dimensional heap whether there is a zerotree starting from this location, and code the result. If the coefficient had become significant earlier, the decoder also knows that, and thus we can omit the coding of the result. If we are coding a scaling coefficient (l = 0), we only process that coefficient and then recursively scan the coefficient in the same location on the next level below this one. If we are coding a coefficient below the top level, we must process all three coefficients in the three orientation pyramids at this spatial location and then recursively scan all four coefficients on the next level, if that level exists.

When a coefficient is processed using the ScanCoeff algorithm (Alg. 4), we first check whether it had become significant earlier. If that is the case, we just code the bit on the current bit-level and then do the distortion calculation. If the coefficient is smaller than 2ⁿ, we code the bit on the current bit-level and also check whether that was the first 1-bit of the coefficient. If it was, we also code the sign of the coefficient and do the distortion calculation.

2.7 Stopping condition and distortion limiting

The DLWIC continues coding until one of the following conditions occurs: 1) all the bits of the coefficient matrix have been coded, 2) the number of bits produced by the QM-coder reaches a user specified threshold, or 3) the distortion of the output image that can be constructed from the sent data decreases below the user specified threshold. The binary stopping decisions made before coding each bit of a coefficient are coded, as the decoder must know exactly when to stop decoding.
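In terms of the square error cse tracked as described below, the stopping test amounts to a pair of comparisons. The sketch below is illustrative only (the names are assumptions); it covers conditions 2 and 3, while condition 1 is handled by the main loop of Alg. 2 simply running out of bit-levels:

/* Coding continues while the bit budget has not been exhausted and the
 * tracked square error of the approximated image is still above the target. */
static int continue_coding(long bits_produced, long max_bits,
                           double cse, double target_square_error)
{
    return bits_produced < max_bits && cse > target_square_error;
}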

The first condition is trivial, as the main loop (Alg. 2) ends when all the bits have been coded. The second condition is also easy to implement: the output routine of the QM-coder can easily count the number of bits or bytes produced. To check the third condition, the algorithm must know the MSE of the decompressed image. The MSE of the decompressed image could be calculated by doing the inverse wavelet transform for the whole coefficient matrix and then calculating the MSE from the result. Unfortunately this would be extremely slow, because the algorithm must check the stopping condition very often.

The reason for using the Daubechies wavelet transform is its orthonormality. For orthonormal transforms, the square sums of the pixel values of the image before and after the transform are equal:

∑_{i,j} (x_{i,j})² = ∑_{i,j} (c_{i,j})²,    (5)

where x_{i,j} stands for the spatial domain image intensity and c_{i,j} is the wavelet coefficient. Furthermore, the mean square error between the original image and some approximation of it can be calculated equally in the wavelet and spatial domains. Thus we do not have to do the inverse wavelet transform to calculate the MSE.

Instead of tracking the MSE, we track the current square error, cse, of the approximated image because it is computationally easier.


The initial approximation of the image is the zero coefficient matrix, as we have to approximate the coefficients to be zero when we do not know their signs. Thus the initial cse equals the energy of the coefficient matrix:

cse ← ∑_{i,j} (c_{i,j})²    (6)

After sending each bit of a coefficient c, we must update cse by subtracting the error produced by the previous approximation of c and adding the error of its new approximation. The error of an approximation of c depends only on the level of the last known bit and on the coefficient c itself. If we code the n:th bit of c, then cse should be updated as

cse ← cse −
  { [ (|c| AND₂ (2ⁿ − 1)) − 2^{n−1} ]² − [ (|c| AND₂ (2^{n−1} − 1)) − 2^{n−2} ]²,   if ⌊log₂|c|⌋ > n − 1
  {  |c|² − (2^{n−1} + 2^{n−2} − |c|)²,                                             if ⌊log₂|c|⌋ = n − 1
  {  0,                                                                             if ⌊log₂|c|⌋ < n − 1,    (7)

where AND₂ is the bitwise and-operation. The first case gives the error reduction from finding out one more bit of a coefficient whose sign is already known. The second case gives the error reduction from finding out the sign of a coefficient, and the last case states that cse does not change if only a zero bit before the coefficient's first 1-bit is found. Equation (7) holds only when n > 1.
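A compact sketch of the corresponding DLUpdate step, following equation (7) directly (the variable names and the inlined depth computation are assumptions, not the original C implementation):

#include <stdlib.h>

/* Update the current square error after coding bit-level n of coefficient c.
 * A direct transcription of equation (7); valid for n > 1 as stated above. */
static void dl_update(double *cse, long c, int n)
{
    long a = labs(c);
    long t;
    int depth = 0;                 /* number of significant bits in |c| */
    double e_old, e_new;

    for (t = a; t > 0; t >>= 1)
        depth++;

    if (depth > n) {
        /* Sign already known: one more magnitude bit becomes known. */
        e_old = (double)((a & ((1L << n) - 1)) - (1L << (n - 1)));
        e_new = (double)((a & ((1L << (n - 1)) - 1)) - (1L << (n - 2)));
        *cse -= e_old * e_old - e_new * e_new;
    } else if (depth == n) {
        /* The first 1-bit (and the sign) of the coefficient was just coded. */
        e_new = (double)((1L << (n - 1)) + (1L << (n - 2)) - a);
        *cse -= (double)a * (double)a - e_new * e_new;
    }
    /* depth < n: the coefficient is still insignificant and cse is unchanged. */
}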

2.8 The use of contexts in QM-coder

The QM-coder is a binary arithmetic coding algorithm that tries to code binary data following some probability distribution as efficiently as possible. Theoretically an arithmetic coder compresses data according to its entropy [4], but the QM-coder uses a dynamical probability estimation technique [6, 7, 2] based on state automata, and its compression performance can even exceed the entropy if the local probability distribution differs from the global distribution used in the entropy calculation.

The DLWIC codes different types of information with differing probability distributions. For example, the signs of the coefficients are highly random, i.e. the probability of a plus sign is approximately 0.5, but the probability of hitting the stopping condition is only 1/N, where N is the number of stopping condition evaluations. If bits following both distributions were coded using the same probability distribution, the compression performance would obviously not be acceptable.

To achieve better compression performance, the DLWIC uses separate contexts for binary data following different probability distributions. Contexts are defined for coding the following types of data: 1) signs, 2) stopping conditions, 3) the bits of the coefficients after the first 1-bit, 4) the bits of the scaling coefficients, 5) zerotrees on the different levels of the pyramid, and 6) the significance checks of the insignificant coefficients on the different pyramid levels in the different orientation pyramids.


Figure 4: Test images from top left: 1) barb (512×512), 2) bird (256×256), 3) boat (512×512), 4) bridge (256×256), 5) camera (256×256), 6) circles (256×256), 7) crosses (256×256), 8) france (672×496) and 9) frog (621×498).

The number of separate contexts is 4·(l + 1), where l defines the number of levels in the pyramids. It would also be possible to define different contexts for each bit-level, but the dynamical probability estimation in the QM-coder seems to be so efficient that this is not necessary.

3 Test results

The performance of the DLWIC algorithm is compared to the JPEG and the vqSPIHT [3] algorithms with a set (Fig. 4) of 8-bit grayscale test images. The vqSPIHT is an efficient implementation of the SPIHT [8] compression algorithm. The vqSPIHT algorithm uses the biorthogonal B97 wavelet transform [10], the QM-coder and a more complicated image scanning algorithm than the DLWIC. Image quality is measured in terms of the peak signal to noise ratio [1] (PSNR), which is an inverse logarithmic measure calculated from the MSE.

3.1 Compression efficiency

To compare the compression performance of the algorithms, the test image set is compressed with different BPP-rates from 0.1 to 3.0 and the PSNR is calculated as the mean over all the images. Because it is not possible to specify BPP as a parameter for the JPEG compression algorithm, various quantization parameters are used and the BPP value is calculated as the mean value over the image set for each quantization value.

As can be seen in Figure 5, the performance of the vqSPIHT and the DLWIC algorithms is highly similar. This is somewhat surprising because of the greater complexity and better wavelet transform used in the vqSPIHT. The quality of the images compressed with the EZW variants seems to exceed the quality produced by the JPEG.



Figure 5: Compression performance comparison of DLWIC, vqSPIHT and JPEG. The PSNR-values correspond to the mean value obtained from the test image set (Fig. 4).

This is especially true when low bit-rates are used. The poor scalability of the JPEG to low BPP values is probably caused by the fixed block size used in the DCT transform of the JPEG, as opposed to the multi-resolution approach of the wavelet based methods.

One might expect that a conventional image compression algorithm such as the JPEG would give similar PSNR and BPP values for similar images when a fixed quantization parameter is used. This is not the case, as demonstrated in Figure 6, where all the test images are compressed using the same quantization parameter (20) with the standard JPEG.

3.2 Speed and memory usage

The speed of the implementation is not compared to other techniques, because the implementation of the algorithm is not highly optimized. Instead, an example of the time consumption of the different components of the compression process is examined using the GNU profiler. The frog test image is compressed using a 400 MHz Intel Pentium II workstation running Linux; the algorithm is implemented in the C language and compiled with GNU C 2.7.2.1 using the “-O4 -p” options. The cumulative CPU time used in the different parts of the algorithm is shown in Figure 7.

When the image is compressed with a low BPP-rate, most of the time is consumed by the wavelet transform. When the BPP-rate increases, the time used by the QM-coder, the scanning algorithm and the distortion calculations increases in a somewhat linear manner.



Figure 6: All the images of the test image set are compressed by JPEG with the same quantization value (20), and the BPP (left) and the PSNR (right) of the resulting images are shown.

Construction of the two dimensional heap seems to be quite a fast operation and the distortion limiting is not very time consuming.

If we want to optimize the implementation of the DLWIC, the biggest problem would probably be the extensive use of the QM-coder, which is already highly optimized. One way to alleviate the problem would be to store the stopping condition in some other way than by compressing a binary decision after each bit received. Also the transform would have to be optimized to achieve faster compression, because it consumes nearly half of the processing time when higher compression ratios are used.

Probably the biggest advantage of the DLWIC over the SPIHT and even the vqSPIHT is its low auxiliary memory usage. The only auxiliary data structure used, the two dimensional heap, can be represented using 8-bit integers and thus only consumes approximately 8N/3 bits of memory, where N is the number of coefficients. If the coefficients are stored as 32-bit integers, this implies an 8% auxiliary memory overhead, which is very reasonable when compared to the 32% overhead of the vqSPIHT, or the even much higher overhead of the SPIHT algorithm, which depends on the target BPP-rate.

4 Summary and conclusion

In this paper a new general purpose wavelet image compression scheme, DLWIC, was introduced. It was also shown how the distortion of the resulting decompressed image can be calculated while compressing the image and thus how the distortion of the compressed image can be limited. The scanning algorithm in the DLWIC is very simple and it was shown how it can be efficiently implemented using a two dimensional heap structure.

The compression performance of the DLWIC was tested with a set of images and seems promising when compared to a more complex compression algorithm, the vqSPIHT.



Figure 7: Running time of the different components in the DLWIC compression/decompression algorithm when compressing the frog test image (Fig. 4). The graph shows the cumulative CPU time consumption when different BPP-rates are used.

Furthermore, the compression performance easily exceeds the performance of the JPEG, especially when high compression ratios are used.

Further research for extending the DLWIC algorithm to be used in lossless or nearly lossless multidimensional medical image compression is planned. Also the implementation of the DLWIC will be optimized and the usage of some other wavelet transforms will be considered.

References

[1] R. Gonzalez and R. Woods. Digital Image Processing. Addison-Wesley Publishing Company, 1992.

[2] ITU-T. Progressive bi-level image compression, Recommendation T.82. Technical report, International Telecommunication Union, 1993.

[3] A. Jarvi, J. Lehtinen, and O. Nevalainen. Variable quality image compression system based on SPIHT. To appear in Signal Processing: Image Communication, 1998.

[4] K. Sayood. Introduction to Data Compression. Morgan Kaufmann, 1996.

[5] Juha Kivijärvi, Tiina Ojala, Timo Kaukoranta, Attila Kuba, Laszlo Nyul, and Olli Nevalainen. The comparison of lossless compression methods in the case of a medical image database. Technical Report 171, Turku Centre for Computer Science, April 1998.

[6] W.B. Pennebaker and J.L. Mitchell. Probability estimation for the Q-coder. IBM Journal of Research and Development, 32(6):737–752, 1988.


[7] William Pennebaker and Joan Mitchell. JPEG: Still Image Data Compression Standard. Van Nostrand Reinhold, 1992.

[8] Amir Said and William A. Pearlman. A new fast and efficient image codec based on set partitioning in hierarchical trees. IEEE Transactions on Circuits and Systems for Video Technology, 6:243–250, June 1996.

[9] J.M. Shapiro. Embedded image coding using zerotrees of wavelet coefficients. IEEE Transactions on Signal Processing, 41(12), December 1993.

[10] M. Vetterli and J. Kovacevic. Wavelets and Subband Coding. Prentice Hall, Englewood Cliffs, NJ, 1995.


Algorithm 1 Construct2DHeap

for l ← 1 to L + 1 do
    Ht ← H/2^min{l,L}, Wt ← W/2^min{l,L}
    for j ← 0 to Ht − 1 do
        for i ← 0 to Wt − 1 do
            if l = 1 then
                t ← 0
                u ← ⌊1 + log₂(max{ |c_{i,j+Ht}|, |c_{i+Wt,j}|, |c_{i+Wt,j+Ht}| })⌋
            else if l ≤ L then
                t ← max{ h_{2i,2j}, h_{2i+1,2j}, h_{2i,2j+1}, h_{2i+1,2j+1} }
                u ← ⌊1 + log₂(max{ |c_{i,j+Ht}|, |c_{i+Wt,j}|, |c_{i+Wt,j+Ht}| })⌋
            else
                t ← max{ h_{i,j+Hs}, h_{i+Ws,j}, h_{i+Ws,j+Hs} }
                u ← ⌊1 + log₂(|c_{i,j}|)⌋
            h_{i,j} ← max{t, u}

Algorithm 2 CompressDLWIC

Transform the spatial image with the Daubechies wavelet transform, constructing the octave band composition where the coefficients c_{i,j} are represented with positive integers and a separate sign bit.
Construct the two dimensional heap (Alg. 1).
Initialize the QM-coder.
Calculate the initial distortion of the image (Section 2.7).
nmax ← max{ h_{i,j} : 0 ≤ i < Ws ∧ 0 ≤ j < Hs }
for n ← nmax downto 1 do
    for j ← 0 to Hs − 1 do
        for i ← 0 to Ws − 1 do
            Scan(i, j, 0, n)    (Alg. 3)


Algorithm 3 Scan(i, j, l, n)

if ContinueCoding() then
    if h_{i,j} < n then
        QMCode(insignificant, significance-test(l))
    else
        if h_{i,j} = n then
            QMCode(significant, significance-test(l))
        if l = 0 then
            ScanCoeff(i, j, toplevel, n)
            Scan(i, j + Hs, 1, n)
        else
            ScanCoeff(i, j, horizontal(l, c_{i+W_{i,j}, j} < 2ⁿ, c_{i+W_{i,j}, j−H_{i,j}} < 2ⁿ), n)
            ScanCoeff(i + W_{i,j}, j, diagonal(l, c_{i,j} < 2^{n−1}, c_{i+W_{i,j}, j−H_{i,j}} < 2ⁿ), n)
            ScanCoeff(i + W_{i,j}, j − H_{i,j}, vertical(l, c_{i,j} < 2^{n−1}, c_{i+W_{i,j}, j} < 2^{n−1}), n)
            if 2j < H then
                Scan(2i, 2j, l + 1, n)
                Scan(2i + 1, 2j, l + 1, n)
                Scan(2i, 2j + 1, l + 1, n)
                Scan(2i + 1, 2j + 1, l + 1, n)

Algorithm 4 ScanCoeff(x, y, context, n)

if c_{x,y} < 2ⁿ then
    QMCode(Bit(c_{x,y}, n), context)
    if Bit(c_{x,y}, n) = 1 then
        QMCode(s_{x,y}, sign)
        DLUpdate(n, c_{x,y})
else
    QMCode(Bit(c_{x,y}, n), coefficientbit)
    DLUpdate(n, c_{x,y})


Predictive depth coding of wavelet transformed images

Joonas Lehtinen

Published in Proceedings of SPIE: Wavelet Applications in Signal and Image Processing, vol. 3813, no. 102, Denver, USA, 1999.


Predictive depth coding of wavelet transformed images

J. Lehtinen

Turku Centre for Computer Science and
Department of Mathematical Sciences, University of Turku,
Lemminkäisenkatu 14 A, 20520 Turku, Finland

ABSTRACT

In this paper, a new prediction based method, predictive depth coding (PDC), for lossy wavelet image compression is presented. It compresses a wavelet pyramid composition by predicting the number of significant bits in each wavelet coefficient quantized by universal scalar quantization and then coding the prediction error with arithmetic coding. The adaptively found linear prediction context covers spatial neighbors of the coefficient to be predicted and the corresponding coefficients on the lower scale and in the different orientation pyramids. In addition to the number of significant bits, the sign and the bits of non-zero coefficients are coded. The compression method is tested with a standard set of images and the results are compared with the SFQ, SPIHT, EZW and context based algorithms. Even though the algorithm is very simple and does not require any extra memory, the compression results are relatively good.

Keywords: wavelet, image, compression, lossy, predictive, depth, coding

1. INTRODUCTION

While the DCT-based JPEG [1] is currently the most widely used lossy image compression method, much of the lossy still image compression research has moved towards wavelet-transform [2] based compression algorithms. The benefits of wavelet-based techniques are well known and many different compression algorithms with good compression performance have been documented in the literature [3-6].

A general system for lossy wavelet image compression usually consists of three steps: 1) signal decomposition, 2) quantization and 3) lossless coding. Signal decomposition separates different frequency components from the original spatial domain image by filtering the image with a subband filter bank. Quantization is the phase where some of the image information is purposely lost to achieve better compression performance. In the third phase, the quantized wavelet coefficients are coded using as few bits as possible. Usually this is done in two phases: modelling the information to symbols and then entropy encoding the symbols. The entropy coding can be optimally implemented with arithmetic coding [7], and thus the compression performance largely depends on the quantization and modelling algorithm.

Signal decomposition can be done in different ways, using a 1-dimensional wavelet transform for the columns and the rows of the image in some order or by directly using a 2-dimensional transform [8]. One of the most popular compositions is to use 1-dimensional transforms for the rows and columns of the image and thus divide the image into four different orientation bands. Then the same decomposition is done recursively to the upper left quadrant of the image to construct a pyramid (dyadic) composition (Fig. 1). It is also possible to apply the decomposition to other orientation subbands and in this way construct more complicated wavelet packet compositions [2].

The 1-dimensional discrete wavelet transform (DWT) is basically very simple: the discrete signal is filtered using low- and high-pass filters, and the results are scaled down and concatenated. The filters are constructed from a mother wavelet function by scaling and translating the function. Some of the most used functions in lossy image compression are the biorthogonal 9-7 wavelet, the Daubechies wavelet family and coiflets. Usually the transforms themselves are not lossy, but they produce a floating point result, which is often rounded to integers for coding.

The quantization of the wavelet coefficient matrix can be done in many ways. The most obvious way of quantizing the matrix is to use a uniform scalar quantizer for the whole matrix, which usually gives surprisingly good results. It is also possible to use different quantizers for the different subbands or even regions of the image, which leads to variable quality image compression [9]. A more sophisticated way of quantizing the matrix would be to use vector quantization, which is computationally more intensive but in some cases gives better results [10].

Further author information: E-mail: [email protected], WWW: http://jole.fi/


Many techniques also implement progressive compression by integrating the modelling and the scalar quantization [3].

Many different approaches for modelling have been documented in the literature. The most obvious one is to entropy code the quantized coefficients directly using some static, adaptively found or explicitly transmitted probability distribution. Unfortunately this leads to relatively poor compression performance, as it does not use any dependencies between the coefficients. Shapiro's landmark paper [4] used quad-trees to code zero regions (called zerotrees) of the scalar-quantized coefficient matrix. A. Said and W. Pearlman developed the zerotree approach further by implicitly sending zerotrees embedded into the code stream and achieved very good compression performance with their SPIHT algorithm [3]. Also spatial contexts have successfully been used to select the probability distribution for coding the coefficients [5]. One of the best methods is based on space-frequency quantization [6], which tries to find the optimal balance between scalar quantization and spatial quantization achieved with zerotree-like structures.

In lossless image compression, signal prediction and prediction-error coding are efficient and widely used methods. In this paper the usage of these methods in lossy wavelet image coding is evaluated and a new method for modelling the quantized pyramid composition is introduced. The modelling algorithm is evaluated by implementing a wavelet image compression scheme with the following components: 1) the biorthogonal 9-7 transform is used to construct a pyramid composition, 2) uniform scalar quantization is used for the quantization of the coefficients, 3) an adaptive prediction model is used to estimate the coefficient values and 4) the prediction errors are coded with an adaptive arithmetic coder.

It would be possible to try to predict the values of the wavelet coefficients directly, but the prediction results would probably be very inaccurate. Because the goal is to code the quantized coefficients accurately, the prediction errors would be quite big and efficient error coding would be impossible. Instead of trying to predict the exact coefficients, the PDC algorithm estimates the number of bits needed to represent the coefficients and then codes the coefficients separately.

In this paper the number of significant bits in a coefficient is called its depth. Each coefficient is coded as a triplet of the depth-prediction error, the sign and the actual bits. The prediction is done with a simple linear predictor which covers six spatial neighbors, a coefficient on the lower scale band and two coefficients on the different orientation bands. The weights of the predictor are adaptively trained in the course of the compression. The signs are coded using a simple context model. The coding of the depths predicted to be near zero utilizes several contexts.

In section 2 we give a brief introduction to all components of the PDC compression scheme. The original signal is processed by the biorthogonal 9-7 wavelet transform. As a result we get a pyramid composition of the image. We analyze the dependencies between the coefficients and finally apply scalar quantization and entropy coding to the coefficients. Section 3 explains the modelling algorithm in detail. First we define the concepts of coefficient depth and a depth prediction context. Then we give an adaptive method for the coefficient estimation and discuss the modelling of the estimation errors. Finally a spatial sign coding context is defined and the coding of the absolute values of the actual coefficients is explained. The compression performance of the PDC is evaluated experimentally and compared to some existing image compression techniques in section 4. Conclusions are drawn in section 5.

2. BACKGROUND

2.1. The structure of the compression method

The PDC-compression algorithm consists of six steps:

1. The spatial domain image is transformed to a pyramid composition structure (section 2.2).

2. The coefficient matrix of step 1 is quantized using scalar quantization (section 2.3)

3. Quantized coefficients are converted to the depth representation and average values of the depth are calculated for the different subbands (section 3.2)

4. The significant bits of the absolute coefficient values (section 3.5) and the signs (section 3.4) are entropy-coded (section 2.4) using several different coding contexts.

5. The depth of each coefficient is predicted using the same information as will be available at the corresponding stage of decompression when predicting the same coefficient (section 3.2)



Figure 1. Images from left to right: 1) pyramid composition structure and the three orientation pyramids A, B and C, 2) spatial domain test image and 3) the same image in the wavelet domain: the absolute values of the coefficients are represented in logarithmic scale.

6. The prediction errors are calculated and entropy-coded (section 3.3)

The decompression is done similarly, but in the reverse order: 1) the depth-prediction errors are decoded, 2) the depths are predicted and the correct depth values are calculated using the decoded prediction errors, 3) the signs and the absolute values of the coefficients are decoded, 4) the different components of the coefficients are combined and 5) the reverse transform is applied to produce the decompressed spatial domain image.

2.2. Pyramid composition structure

In the PDC, the image is transformed to a subband pyramid composition using the biorthogonal 9-7 [2] wavelet functions. The transform is done hierarchically by filtering the rows with low- and high-pass filters and then combining the down-scaled results side by side. Then the same operation is done to the columns of the image. The result of these two operations is a matrix with the same dimensions as the original image, but it is divided into four parts: S) The top left part contains scaled down low-frequency components of the image. This part is also called the scaling coefficients and basically it looks like a scaled down version of the image. A) The top right part contains the vertical high-frequency details of the image. B) The bottom left corner contains the horizontal details. C) The bottom right corner of the image contains the diagonal high-frequency components.

The pyramid composition (Fig. 1) is created by applying the transformation described above recursively to the scaling coefficients in the top left corner of the coefficient matrix. This can be repeated while the dimensions of the scaling coefficients are even. The three pyramids A, B and C can be seen in the pyramid composition. Pyramid levels correspond to details of different scale in the spatial domain image and the different pyramids correspond to details having different orientations. Usually the details in a spatial image have features of multiple orientations and scales and thus they affect the coefficients in all pyramids on multiple levels. These dependencies imply that the coefficients are not independent and that their values or magnitudes can be predicted from the other coefficients.

2.3. Scalar quantization

Scalar quantization of the coefficients is defined by a mapping from the set of all possible coefficients to a smaller set of representative coefficients, which approximates the original coefficients. Uniform scalar quantization is a simplification of the general case, where each quantized coefficient represents a uniform set of original coefficients. Thus the uniform scalar quantization Q of a coefficient c can be simply implemented as

Q(c, q) = ⌊q(c + 0.5)⌋/q,

where the quantization parameter q determines the accuracy of the mapping.
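A minimal C sketch of this mapping (the function name is illustrative; the formula is used exactly as given above):

#include <math.h>

/* Uniform scalar quantization: map c onto a grid with spacing 1/q,
 * following Q(c, q) = floor(q * (c + 0.5)) / q. */
static double quantize(double c, double q)
{
    return floor(q * (c + 0.5)) / q;
}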



Figure 2. The spatial context of the predicted coefficient (marked with X) includes the depths of the six surrounding coefficients.

2.4. Entropy coding

Entropy H gives a lower bound [7] on the average code length of a symbol having probability p:

H(p) = p log₂(1/p).

This implies that if our modelling system can model the image as a stream of N symbols having independent probabilities p₀, p₁, . . . , p_{N−1}, an optimal entropy coder can compress it to ∑_{i=0}^{N−1} p_i log₂(1/p_i) bits.

The assumption here is that the probabilities of the symbols are known. It is of course possible to first do all the modelling and calculate the probabilities for the symbols, but usually adaptive probability estimation gives very good results. A benefit of the adaptive approach is that the estimate for the probability distribution can be calculated from the symbols sent earlier.

An arithmetic coder [7] represents a stream of symbols as a number in the range [0, 1). This is done by recursively dividing the range in proportion to the symbol probabilities and selecting the subrange corresponding to the symbol to be coded. The output of the algorithm is the binary representation of some number in the final range. The compression performance is very close to the optimum defined by the entropy.

3. PREDICTIVE DEPTH CODING

3.1. PDC algorithm

In the PDC algorithm, the coefficients are compressed one by one starting from the highest pyramid level and continuing downwards levelwise in all orientation pyramids at the same time. More precisely, the scaling coefficients (on the level S in Fig. 1) are compressed first in row major order, then similarly the bands on the next pyramid level in the order C, B, A (that is C2, B2, A2 in the example) and finally, in the same manner, all the other levels below it.

For each coefficient, we predict its depth and calculate the prediction error as the difference of the actual and predicted values. The prediction error is then coded together with the sign and the actual absolute value of the coefficient. In addition to an individual coefficient, the compression algorithm considers coefficients in the spatial neighbourhood of the coefficient, in the different orientation pyramids at the same spatial location and on the parent level. Also the information on the mean depth of the coefficients on the current level and its parent level is utilized in the prediction.

3.2. Depth prediction

The depth D of the coefficient c is defined as the number of bits needed for expressing its absolute value:

D(c) = { ⌊log₂(|c|) + 1⌋,  if c ≠ 0
       {  0,               if c = 0

Thus the coefficient can be represented as its depth, followed by the absolute value and the sign. This representation avoids storing the insignificant bits.
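A small C sketch of the depth computation (an illustrative helper, not the original PDC code):

#include <stdlib.h>

/* Depth of a quantized coefficient: the number of bits needed for |c|. */
static int depth(long c)
{
    int d = 0;
    for (c = labs(c); c > 0; c >>= 1)
        d++;
    return d;                       /* D(0) = 0 */
}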

In the PDC, the prediction is made with the linear predictor P(p₁, p₂, . . . , p_N, w₁, w₂, . . . , w_N) = ∑_{i=1}^{N} w_i p_i, where the p_i form the prediction context and are weighted by the corresponding w_i (∑_{i=1}^{N} w_i = 1). The prediction context is divided into three parts: 1) a spatial context describing the depths of the six surrounding coefficients, 2) an orientational context consisting of the depths of the two orientational coefficients on the same level and at the same spatial location, and 3) a depth estimate calculated from the depth of the parent coefficient and the mean depths of the levels.


This gives us a prediction context of 9 coefficients. The six spatial neighbours (Fig. 2) are selected so that they give the best results with the PDC algorithm. Experiments with different combinations of the orientational context led us to include in the context two coefficients at the same spatial location in the different orientation pyramids. One coefficient is used on the upper level at the same spatial location in the same orientation pyramid as the coefficient to be predicted. Because the mean depths of the levels are very different, the depth of the parent coefficient is divided by the mean depth of its level and multiplied by the mean depth of the current level.

In the compression phase, we can use only the depths known to the decompressor, and for the rest of the prediction context we apply zero weights. For example, for the first coefficient of a band the whole spatial context is undefined and only the coefficients of the orientational context and a coefficient on the previous level can be used (if they are known).

The weights W_i for the linear prediction context are calculated adaptively in the prediction algorithm:

• ∀i ∈ [1, 9] : W_i ← 1
• while not finished
    – ∀i ∈ [1, 9] : w_i ← W_i
    – ∀i ∈ {unknown, undefined} : w_i ← 0
    – s ← ∑_{i=1}^{9} w_i
    – P ← (∑_{i=1}^{9} w_i D_i)/s if s > 0, otherwise P ← 0
    – ∀i ∈ [1, 9] : W_i ← W_i · α if |p_i − D| ≤ |P − D|, otherwise W_i ← W_i/α
    – ∀i ∈ [1, 9] : W_i ← β if W_i > β, W_i ← γ if W_i < γ, otherwise W_i is kept unchanged

One good setting for the parameters is α = 1.1, β = 100 and γ = 0.3. The symbol D denotes the depth of the current coefficient and D_i the depth of the neighbour i. The resulting prediction is denoted with P. For the first prediction, s is 0, and thus the first coefficient should not be predicted at all.
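A compact C sketch of one prediction/update step (a sketch only: the array layout, the known[] flags and the function name are assumptions, not the original PDC code):

#include <math.h>

#define CTX_SIZE 9

/* One adaptation step of the linear depth predictor.  d[] holds the context
 * depths, known[] marks the entries available to the decoder, D is the true
 * depth of the coefficient just coded and W[] are the persistent weights.
 * Returns the prediction made before the weights are adjusted. */
static double predict_and_learn(const double d[CTX_SIZE],
                                const int known[CTX_SIZE],
                                double D, double W[CTX_SIZE],
                                double alpha, double beta, double gamma)
{
    double s = 0.0, P = 0.0;
    int i;

    for (i = 0; i < CTX_SIZE; i++)
        if (known[i]) {                /* unknown entries get zero weight */
            s += W[i];
            P += W[i] * d[i];
        }
    P = (s > 0.0) ? P / s : 0.0;

    /* Reward context entries that were closer to the true depth than the
     * combined prediction, punish the rest, and clamp to [gamma, beta]. */
    for (i = 0; i < CTX_SIZE; i++) {
        if (!known[i])
            continue;
        W[i] = (fabs(d[i] - D) <= fabs(P - D)) ? W[i] * alpha : W[i] / alpha;
        if (W[i] > beta)  W[i] = beta;
        if (W[i] < gamma) W[i] = gamma;
    }
    return P;
}

With the parameter values suggested above this would be called as predict_and_learn(d, known, D, W, 1.1, 100.0, 0.3).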

3.3. Coding the prediction errors

The linear prediction produces a floating-point value, which must be rounded before the error calculation. The prediction error can then be directly coded with an arithmetic coder.

On the lower pyramid levels most of the quantized coefficient values are zero and thus the zero prediction is also very common. We can increase the performance of the compression scheme for small predictions by adjusting the probability distribution of the error coding. In the PDC this is implemented by using six independent probability distributions. The distribution is selected by classifying the predictions into the classes [0, 0.01), [0.01, 0.05), [0.05, 0.13), [0.13, 0.25), [0.25, 0.5) and [0.5, ∞). This selection of coding contexts seems to work quite well for natural images, but the compression performance is rather insensitive to the number of classes and their boundaries.
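A small sketch of the class selection (a hypothetical helper; the thresholds are the ones listed above):

/* Map a (non-negative) prediction to one of the six error-coding contexts:
 * [0,0.01), [0.01,0.05), [0.05,0.13), [0.13,0.25), [0.25,0.5) and [0.5,inf). */
static int error_context(double prediction)
{
    static const double bounds[5] = { 0.01, 0.05, 0.13, 0.25, 0.5 };
    int k;

    for (k = 0; k < 5; k++)
        if (prediction < bounds[k])
            return k;
    return 5;
}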

3.4. Coding the sign

Although the signs of the coefficients seem to be random and their distribution is almost even, some benefit can be gained by using two neighbours (north and west) as a coding context. Thus nine different probability distributions are used, as each sign can be either +, − or 0 (if the quantized coefficient is zero, the sign is unknown and insignificant).
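A sketch of the resulting context selection (illustrative names only):

/* Sign coding context from the north and west neighbours: each neighbour is
 * mapped to 0 (zero coefficient), 1 (positive) or 2 (negative), giving
 * 3 x 3 = 9 distinct contexts. */
static int sign_code(long c) { return c == 0 ? 0 : (c > 0 ? 1 : 2); }

static int sign_context(long north, long west)
{
    return 3 * sign_code(north) + sign_code(west);
}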


3.5. Coding the quantized coefficients

In addition to the sign and the depth of the coefficient, the actual quantized absolute value must be coded. The value could be coded simply with arithmetic coding using the depth as a context, but in the PDC we take an even simpler approach: the bits of the absolute value are coded as such, one by one. Note that a binary arithmetic coder could be applied here as well, but its benefit is marginal.

As the depth of the coefficient is known, only the bits after the first one-bit of the absolute value must be coded. Even though the distribution of these bits is almost even, the distribution of the absolute values of the coefficients is skewed. For this reason the distribution of the bits right after the first one-bits is also somewhat skewed and thus entropy coding is used to code these bits.

4. TESTING

4.1. Compression performance

As the purpose of the PDC is to apply the prediction coding principles in wavelet coding, the absolute compression performance results are compared to some advanced wavelet compression techniques. Furthermore, as the compression scheme consists of many different sub-components, their influence on compression performance is evaluated separately. The peak signal to noise ratio (PSNR) [11] is used as the image quality measure.

The compression performance (Table 1) of the PDC is measured by compressing three 512 × 512, 8-bit test images: lena, barbara and goldhill, using different numbers of bits per pixel (bpp). The performance is compared to the compression results of the standard JPEG [1], a context based wavelet compression technique (C/B) [5], space frequency quantization (SFQ) [6] and a wavelet image compression technique based on set partitioning in hierarchical trees (SPIHT) [3].

The JPEG codes the image in fixed 8 × 8 blocks by first transforming the image blocks using the discrete cosine transform (DCT) and then applying scalar quantization to the coefficients. The quantized coefficients are coded in zig-zag order using Huffman coding. The context based technique compresses the scalar quantized coefficients of the wavelet pyramid composition using independent probability distributions that depend on the spatial context. The technique is relatively simple, yet very efficient. SFQ is based on the usage of zerotrees [4] in wavelet pyramid composition coding. The idea of the SFQ is to find the optimal balance between spatial quantization done with zerotrees and scalar quantization. The resulting compression performance is very good. The SPIHT is also based on the idea of zerotrees, but it develops the zerotree structure coding further by coding the tree information implicitly as the decision information of the set partitioning algorithm, which constructs the zerotree. Although the method is somewhat complicated, the results are very good; moreover, the modelling algorithm is so powerful that the entropy coding can be omitted at least in some applications. The algorithm can also be easily optimized with minor modifications [9].

The compression performance graphs (Fig. 3) show that the PDC algorithm is not as good as the current state of the art algorithms, but the differences in the compression performance are small. The fixed sized block structure of the JPEG restricts its scalability to the smaller bit-rates and thus its compression performance is poor when compared to wavelet-transform based algorithms.

BPP     PSNR (dB)
        lena    barbara    goldhill
0.50    49.8    49.8       49.8
0.25    45.8    45.8       45.8
0.15    42.3    42.5       42.1
0.10    39.9    40.0       39.2
0.05    36.6    35.8       34.9
0.01    29.5    26.4       27.6

Table 1. The PDC compression results



Figure 3. The compression performance of the PDC is measured with three different 512×512 8-bit test images. The test images used are, from top to bottom: lena, barbara and goldhill.



Figure 4. The compression performance of the PDC on the lena test-image, when leaving out some of the features.

4.2. The effect of different components

The modeller used in the PDC uses several different components to gain better compression performance. Many of these components can be left out from the PDC or integrated into other wavelet image compression systems. The coding gain can be measured by leaving the corresponding coding component out from the PDC and comparing the resulting compression performance (Fig. 4) with that of the complete algorithm.

It turns out that the sign prediction and the use of a separate context for the second bit coding are not strictly necessary. If one would like to reduce the amount of arithmetic coding done, the performance loss from directly saving the sign bits and the coefficient bits is not very big. On the other hand, it seems to be important to use several contexts for coding the coefficients that are predicted to be near zero.

4.3. Efficiency

One of the advantages of the PDC is that the algorithm does not require any extra memory, unlike many other wavelet transform coding algorithms. Still, the wavelet transformed image itself must reside in memory at the prediction step. New methods for wavelet transforms with reduced memory requirements have been documented in the literature [5], and the PDC algorithm could probably be easily changed to work with them, as only the prediction context must be changed.

The speed of the PDC depends on three different steps: 1) the wavelet transform, 2) modelling and 3) arithmetic coding. The two most time consuming parts of the modelling step are the coefficient prediction and the prediction context weight adjustment. In the prediction one must calculate each estimate as a linear combination of the prediction context. This requires 9 floating point multiplications and additions and several memory address calculations and reads. The learning phase is about three to five times more complex than the prediction. One way to speed up the learning phase could be the use of approximate updates. In this approach we would update the weights of the prediction context only once for every n predictions, where the parameter n would determine the speed of learning.

5. CONCLUSIONS

A new method for wavelet transform coding has been proposed. The predictive depth coding technique demonstrates that the conventional prediction coding principles, originally used in lossless image coding, can be applied to lossy wavelet transform coding. We have found that the number of significant bits in the wavelet coefficients can be predicted with a small linear prediction context. A simple method for learning the weights of the linear prediction context is introduced. The PDC also demonstrates that sign compression with a simple two-sign context is possible, yet relatively inefficient.


A significant benefit of the new compression technique is that it is very simple and does not need any extra memory. As the prediction context is very small, the applicability of the technique for use with reduced memory wavelet transforms should be researched. Also the possibilities of integrating the prediction coding techniques into some other wavelet coefficient modelling technique for the construction of a hybrid method should be investigated.

REFERENCES

1. W. Pennebaker and J. Mitchell, JPEG: Still Image Data Compression Standard, Van Nostrand Reinhold, 1992.

2. M. Vetterli and J. Kovacevic, Wavelets and Subband Coding, Prentice Hall, Englewood Cliffs, NJ, 1995.

3. A. Said and W. A. Pearlman, “A new fast and efficient image codec based on set partitioning in hierarchical trees,” IEEE Transactions on Circuits and Systems for Video Technology 6, pp. 243–250, June 1996.

4. J. Shapiro, “Embedded image coding using zerotrees of wavelet coefficients,” IEEE Transactions on Signal Processing 41, December 1993.

5. C. Chrysafis and A. Ortega, “Efficient context-based entropy coding for lossy wavelet image compression,” in DCC, Data Compression Conference, (Snowbird, UT), March 25–27, 1997.

6. Z. Xiong, K. Ramchandran, and M. T. Orchard, “Space-frequency quantization for wavelet image coding,” IEEE Trans. Image Processing, 1997.

7. K. Sayood, Introduction to Data Compression, Morgan Kaufmann, 1996.

8. M. Kopp, “Lossless wavelet based image compression with adaptive 2D decomposition,” in Proceedings of the Fourth International Conference in Central Europe on Computer Graphics and Visualization 96 (WSCG96), pp. 141–149, (Plzen, Czech Republic), February 1996.

9. A. Jarvi, J. Lehtinen, and O. Nevalainen, “Variable quality image compression system based on SPIHT,” to appear in Signal Processing: Image Communication, 1999.

10. A. Gersho and R. M. Gray, Vector Quantization and Signal Compression, Kluwer Publishers, 1996.

11. R. Gonzalez and R. Woods, Digital Image Processing, Addison-Wesley Publishing Company, 1992.


Clustering context properties of wavelet coefficients in automatic modelling and image coding

Joonas Lehtinen and Juha Kivijärvi

Published in Proceedings of IEEE 11th International Conference on Image Analysis and Processing, pages 151–156, Palermo, Italy, 2001.


Clustering context properties of wavelet coefficients in automatic modelling andimage coding

Joonas Lehtinen and Juha KivijärviTurku Centre for Computer Science (TUCS)

Lemminäisenkatu 14 A, 20520 Turku, [email protected]

Abstract

An algorithm for automatic modelling of wavelet coeffi-cients from context properties is presented. The algorithm isused to implement an image coder, in order to demonstrateits image coding efficiency. The modelling of wavelet co-efficients is performed by partitioning the weighted contextproperty space to regions. Each of the regions has a dy-namic probability distribution stating the predictions of themodelled coefficients. The coding performance of the al-gorithm is compared to other efficient wavelet-based imagecompression methods.

1. Introduction

Although the amount of available bandwidth and stor-age space is continuously increasing, there is still growingdemand for more efficient image coders in the industrialand medical applications. Most recent image coders arebased on the coding ofwavelet representationof the spa-tial image [8]. This is because wavelet transforms representimages very efficiently and the applicability oftransformbased codersis much better than that of alternative efficientmethods, such as fractal image coders. The compressionefficiency of a lossy wavelet image coder depends on fourthings: 1) the efficiency ofimage transformationmethod,2) thewavelet coefficient quantizationmethod, 3) theco-efficient modellingalgorithm, and 4) theentropy codingmethod. The image transformation converts the pixel rep-resentation of an image from the spatial domain towaveletdomain. The quantization method maps the wavelet coef-ficients to their approximations by reducing the number ofpossible values of the coefficient. This reduces the num-ber of bits needed, but also introduces some errors into theimage. The modelling step maps the quantized coefficientsinto a set of symbols with certain probabilities. The entropycoder encodes the symbols using these probabilities from

the modelling step. The result of this process is a binaryrepresentation of the image data.

Several efficient wavelet image transforms have beenproposed. In the context of lossy image compression, oneof the most popular of these is recursive filtering withDaubechies 9-7 wavelet filtersto produceMallat composi-tion [8]. Many coders use simple scalar quantization whichgives fairly good results, but more advancedrate distortionquantizationhas also been used [2]. Arithmetic coding [1]can be used to perform entropy coding optimally, and thusit is widely used in high-performance image coders. Coef-ficient modelling algorithms vary from coder to coder, butmost of them try to exploit the prior knowledge from thepreviously coded parts of the image to achieve a more effi-cient representation of the coefficients.

We introduce in this paper a new method for modellingthe wavelet coefficients. The modelling decision is basedon a set ofcontext propertiescalculated from the previouslycoded coefficients. Context properties are expected to sup-ply information relevant to the prediction of a coefficient.Some possible choices for context properties are individ-ual previously coded coefficients, their absolute values andmagnitudes, mean and variance of the neighbouring coeffi-cients, measures for local gradient and other values describ-ing the local properties of the image.

The method is independent of the context properties. Inthis way the problem of designing modelling heuristics isreduced to the problem of finding the most relevant contextproperty measures. Because the other key components ofa wavelet coding algorithm can be effectively implementedusing existing techniques, one can construct efficient com-pression algorithm by selecting a good set of context prop-erties.

The quantized coefficients are modelled by partitioningthe weighted context property space with a clustering algo-rithm [4]. A dynamic probability distribution is assignedto each cluster. Such a distribution assigns a probability toeach possible value of a coefficient mapped to the corre-sponding cluster. The coefficient is coded by encoding the

151

107

Page 115: Coding of Wavelet- Transformed Images - Amazon Web Servicesshare.jole.fi.s3.amazonaws.com/...Coding_of_Wavelet-Transformed_Images.pdf · The best lossy image compression meth-ods

probability of the corresponding symbol by arithmetic cod-ing.

Suitable weights for the context properties are usuallyselected by hand. In order to support automatic modelling,our method calculates the weights automatically from thecorrelations of the properties. This allows the coder to adaptmore efficiently to different types of images, as the optimalselection of weights is image-dependent.

Rate distortion criterion [2] is used in the quantization ofcoefficients. Because the quantization depends on the par-titioning of the context property space and the rate distor-tion quantization is used to construct the coefficient prop-erty space, optimal partitioning is not achievable. Further-more, the quantization depends on the target bitrate, whichmakes selecting good quantization parameters and cluster-ing difficult. Good quantization parameters can be founditeratively, but at the cost of a large time consumption.

We test the modelling algorithm and the optimizationframework with a set of coefficient property selections, andcompare the results to existing wavelet image compressionalgorithms. We also demonstrate the effect of different pa-rameter selections on the compression performance.

The rest of the paper is organised as follows. In Section2 we present the image coding algorithm used. Section 3explains the clustering algorithm, and Section 4 summarizesthe results of the test runs. Finally in the Section 5 we makethe conclusions and discuss further research possibilities.

2. Image encoding algorithm

Our image encoding framework consists of three steps:1) wavelet image transform, 2) finding good coding param-eters and 3) coding the wavelet transformed image by usingthe parameters. In the first step the image is transformedto wavelet domain using Daubechies 9-7 wavelet filter re-cursively in order to construct a Mallat composition of theimage.

The coding algorithm quantizes, models and encodes thewavelet coefficient matrixM considering thecoding param-eter set(∆,W,C), where∆ is thescalar quantization step,W = {w1, w2, . . . , wm} is a set ofweights form contextpropertiesandC = {c1, c2, . . . , cn} is a set ofn contextproperty partition centroids. The context properties are de-fined asPi,j = {p

(i,j)

1, p

(i,j)

2, . . . , p

(i,j)m }, where(i, j) are

the indices of the corresponding coefficientxi,j in matrixM .

In the second step we try to find such a parameter set(∆,W,C) that the compressed signal qualityS measuredwith peak signal to noise ratio(PSNR) is maximized forthe selected bitrate. We calculate the weights as

wk =|rk|

4

1

|M |

i,j (p(i,j)

k − pk)2, (1)

whererk is the Pearson’s correlation betweenpk and thecoefficients,|M | is the number of elements inM , andpk isthe mean value of the propertypk. The context propertiesare calculated directly fromM . The denominator normal-izes the context properties by removing the effect of possi-bly different value ranges. The nominator assigns a largerweight to properties which have a strong correlation withthe coefficients.

In order to find suitable∆ andC, we must determineboth of them simultaneously, as they depend on each other.In stepi we first search for a∆i, that satisfies the bitrateconstraint by calculating the entropy of codingM withCi−1. Then the coefficient properties of all the coefficientscoded with parameter set(∆i,M,Ci−1) are used as a train-ing set for the clustering algorithm (see Section 3) to findCi. The steps for finding∆ andC are iterated in order tofind a good solution. The initial selections∆0 andC0 arenot critical, as the algorithm quickly finds a reasonable so-lution.

The coding algorithm (see Alg. 2) can be used for per-forming two different tasks. First, we can calculate the en-tropyH, peak signal qualityS, and context parameter spaceT , which can be used as a training set for the clusteringalgorithm. Second, we can encode the wavelet coefficientmatrix M with arithmetic encoder by usind the describedmodelling algorithm.

The coding starts with an empty reconstruction pyramidR and a training setT . The coefficient matrixM is tra-versed starting from the top of the pyramid, subband bysubband. For each subband, the dynamic distributions areinitialized to contain zero probabilities. For each waveletcoefficientxi,j of a subband, the context propertiesPi,j arecalculated from the reconstruction pyramidR and the set ofproperty weightsW . These parameters are known in thedecoding step as the decoder must be able to repeat the cal-culations.

We define a distinctzero-property setP in order to moreefficiently model the coefficients, which have highly quan-tized neighbours. IfPi,j = P , the zero-property distribu-tion D0 is selected. Otherwise, the distributionDk con-nected to the partition centroidck nearest toPi,j is chosenandPi,j is inserted into the training setT , which is usedby the clustering algorithm for finding the set of partitioncentroidsC.

Coefficientxi,j is quantized using rate distortion quan-tization s = Q(|xi,j |,∆, Dk), in which the idea is to find

such a symbols that the rate distortion criterion| |xi,j |

∆−s|−

ln 2

4log

2P(s|Dk) is minimized.P(s|Dk) is the probability

of symbols in probability distributionDk.The symbols is finally coded by arithmetic coding us-

ing the probabilityP(s|Dk). Furthermore, the sign of thecoefficient must be coded ifs 6= 0. The encoder keeps thereconstruction pyramidR up-to-date by inserting the new

152

108

Page 116: Coding of Wavelet- Transformed Images - Amazon Web Servicesshare.jole.fi.s3.amazonaws.com/...Coding_of_Wavelet-Transformed_Images.pdf · The best lossy image compression meth-ods

encoded value sgn(xi,j)∆s into R. The increase of the en-tropy is calculated to support the search for∆, and the se-lected probability distributionDk is adjusted to include theencoded symbols. At the end of the coding process the sig-nal quality of the compressed image is calculated fromM

andR. Calculations can be made by transforming the ma-trices to spatial domain or by calculating the error directlyfrom the wavelet representations, if the applied filters areorthonormal.

Algorithm 1 The coding algorithm: (∆,M,C, W ) →(T, S, H,B), where∆ is the quantization step size,M isthe wavelet coefficient matrix to be coded,C is a set of con-text property partition centroids,W is a set of weights forproperties,T is the training set containing the context prop-erties,S is the signal quality,H is the resulting entropy andB is the generated bitstream.

� Zero entropyH, bitstreamB, training setT and recon-struction pyramidRfor each pyramid levell starting from the topdo

for each subbandb in level l do� Clear distributionsD0, D1, . . . , Dn

for each coefficientxi,j in the subbandb do� Calculate the context property vectorPi,j fromR using weightsWif Pi,j = P then

� Select the zero-property probability distribu-tion D0

else� Select the probability distributionDk withthe corresponding centroidck nearest to thecontext property vectorPi,j

� Add contextPi,j to the training setT� Find the representative symbols by rate distor-tion quantizing the absolute value of the coeffi-cient|xi,j | using the selected distributionDk and∆� Code the symbols with arithmetic coder toBusing the probability given by the selected distri-butionDk

� If the value of the symbol is non-zero, encodethe sign of the coefficient toB� Insert the quantized coefficient valuesgn(xi,j)∆s to R

� Increase the entropyH by the number of bitsused for coding thes and the sign� Increase the probability of the symbols in dis-tributionDk

� Calculate the signal qualityS from M andR

3. Partitioning the context property space

Clustering means dividing a set of data objects intoclus-ters in such a way that each cluster contains data objectsthat are similar to each other [5]. This idea can be appliedin selection of a suitable probability distribution for codingthe coefficientxi,j . We want to partition the context prop-erty space by performing the clustering for a representativetraining set. This is feasible because clustering also definesa partitioning of the entire space. In this paper we are in-terested in clustering the context property vectorsPi, sincewe assume that the values of the coefficients with similarcontext properties are more likely to be close to each otherthan coefficients with dissimilar context properties.

In particular, we cluster a given set ofN property vectors{P1, . . . , PN} into n clusters. Since each vector is assignedto exactly one cluster, we can represent a clustering by amapping U = {u1, . . . , uN}, whereui defines the indexof the cluster to whichPi is assigned. Furthermore, eachclusterk has arepresentative data object ck.

There are three selections to make before clustering: weshould select the evaluation criterion, the number of clus-ters and the clustering algorithm. The evaluation criterionspecifies which solutions are desirable. In this paper we aimto find solutions giving high compression ratios. Unfortu-nately, using the final compression ratio as the evaluationcriterion is not a practical solution, because it would taketoo long to compute. Thus, we have assumed that mini-mizing the simple and frequently usedmean square error(MSE) of the clustering will serve as an acceptable eval-uation criterion. Given a mappingU and a set of clusterrepresentativesC = {c1, . . . , cn}, the evaluation criterionis calculated as:

f(U,C) =1

Nm

N∑

i=1

d(Pi, cui)2 (2)

whered is a distance function. We letd to be theEuclideandistance since it is the most commonly used distance func-tion in clustering.

Thereafter, given a mappingU , the optimal choices forthe cluster representativesC minimizing the function (2)are the clustercentroids:

ck =

ui=k Pi∑

ui=k 1, 1 ≤ k ≤ n (3)

This formula is an integral part of the widely usedk-meansalgorithm [6]. In this method, an initial solution is it-eratively improved by mapping each vector to the clusterwhich has the nearest representative and then recalculatingthe cluster representatives according to formula (3). Thisnever increases the MSE of the solution. The selection ofthe number of clusters is discussed in Section 4.

153

109

Page 117: Coding of Wavelet- Transformed Images - Amazon Web Servicesshare.jole.fi.s3.amazonaws.com/...Coding_of_Wavelet-Transformed_Images.pdf · The best lossy image compression meth-ods

The problem of finding the clustering with minimal MSEis too complex to be performed exactly. However, many ef-ficient heuristical algorithms have been developed for clus-tering. Thus, even though the optimal solution is unreach-able, satisfactory solutions can usually be found.

We userandomised local search algorithm (RLS-2) aspresented in [4]. The reason for this selection is that RLS-2effectively utilizes the computation time. It finds reasonablygood solutions quickly, but also carries on the optimizationprocess and finally ends up with a very good solution.

RLS-2 starts with a random initial solution and itera-tively tries to find a better one until the desired number ofiterations has been reached. In each iteration, a new solu-tion is created from the current solution by replacing a ran-domly selected cluster centroid with a randomly selectedvector from the training set and applying two iterations ofthe k-means algorithm. The new solution is accepted as thenew current solution if it is better than the old solution.

4. Test results

The above algorithm for clustering the context proper-ties was tested by creating a coder (automatic context prop-erty based coding, ACPC) utilizing it. We used rate distor-tion quantization, Daubechies 9-7 filtering, Mallat composi-tion and arithmetic coder as discussed in previous sections.To represent the context of the coefficient to be coded, weused the absolute values of one coefficient from the previouspyramid level in the Mallat composition and the 12 nearestneighbouring coefficients of the same subband as contextproperties.

If the number of context property partitions is too small,the clustering algorithm is unable to effectively model thedifferent properties of the image, which results to poor com-pression performance. On the other hand, if the numberof partitions is too large, the dynamic distributions do nothave enough data to adapt accurately. The amount of extraheader information required to represent the partitioning ofthe context property space also increases. In Figure 1 wedemonstrate this behaviour by showing the results of com-pressing the test imageLenna using 0.25 bits per pixel andseveral selections of the number of partitions. One can seethat a good choice of this number depends only slightly onthe compression bitrate and seems to be between 10 and 20.

Partitioning of the context property space is demon-strated in Figure 2. Lenna is coded using 16 clusters andonly two context properties: average magnitude and vari-ability. We observe that the clustering algorithm has allo-cated more clusters (i.e. smaller partitions) to high densityareas. Furthermore, the properties are not independent ofeach other.

To test the compression performance of our coder, wecompared the compression results for three standard test im-

33

33.5

34

34.5

35

35.5

36

36.5

37

37.5

10 20 30 40 50 60 70 80

PS

NR

(dB

)

number of clusters

Lenna 0.50bppLenna 0.25bpp

Figure 1. The signal quality for Lenna in PSNRusing two bitrates and different numbers ofclusters

0

0.2

0.4

0.6

0.8

1

1.2

1.4

1.6

1.8

0 0.2 0.4 0.6 0.8 1 1.2 1.4 1.6 1.8 2

Var

iabi

lity

Average magnitude

trainingvector

codevector

Figure 2. The average magnitude and variabil-ity context properties with partition centroidsfor Lenna, 0.25 bpp

ages against three state of the art algorithms: SPIHT [7],SFQ [9] and C/B [3], see Table 1. Our algorithm uses 16partitions for context property classification.

The SPIHT algorithm is based on partitioning of hierar-chical trees. It models the scalar quantized coefficients soefficiently that a separate entropy coding step is not nec-essarily required. Still, the results given here for SPIHTinclude an arithmetic compression step. Space frequencyquantization (SFQ) includes a powerful modelling methodwhich balances between spatial and scalar quantization andachieves excellent compression performance.

Context-based method (C/B) is somewhat similar to ourmethod since it uses a set of dynamic discrete distributionsfor modelling the coefficients, and selects the proper dis-tribution by evaluating the context. The partition selectionmethod is straightforward in C/B: a weighted sum of nearbycoefficients is quantized and the result is used to identify thedistribution. The weights and the quantizer are carefully se-

154

110

Page 118: Coding of Wavelet- Transformed Images - Amazon Web Servicesshare.jole.fi.s3.amazonaws.com/...Coding_of_Wavelet-Transformed_Images.pdf · The best lossy image compression meth-ods

Barbarabpp SPIHT SFQ C/B ACPC0.20 26.64 26.26 27.33 27.710.25 27.57 27.20 28.48 28.790.50 31.39 31.33 32.37 32.731.00 36.41 36.96 37.61 37.82

Lennabpp SPIHT SFQ C/B ACPC0.20 33.16 33.32 33.24 33.120.25 34.13 34.33 34.31 34.130.50 37.24 37.36 37.52 37.231.00 40.45 40.52 40.80 40.62

Goldhillbpp SPIHT SFQ C/B ACPC0.20 29.84 29.86 29.94 29.810.25 30.55 30.71 30.67 30.550.50 33.12 33.37 33.41 33.201.00 36.54 36.70 36.90 36.59

Table 1. Performance of SPIHT[7], SFQ[9],C/B[3] and our automatic context propertybased coding (ACPC) algorithm measured asPSNR for different bitrates.

lected.

The results show that the compression performance ofour method is comparable with the other algorithms. It per-forms very well with test imageBarbara which has a lotof high contrast edges. The performance in lower contrastimages is slightly weaker.

Unfortunately the combination of the clustering algo-rithm and iterative search of the coding parameters is com-putationally very intensive. Our unoptimized coder imple-mentation is much slower than the comparative programsand thus it is not practical for most real world applications.The speed of the coding algorithm without parameter searchand clustering is comparable to C/B method. Each iterationin the parameter search requires the coder to be rerun withthe new parameters and thus the actual coding time dependson the constraints set for the parameters and clustering. Webelieve that a faster implementation of the compression sys-tem can be made by relaxing the constraints and using fastersearch methods. The decoding process is much faster be-cause it does not include the clustering and parameter searchsteps.

5. Conclusions

A new algorithm for automatic modelling the wavelet co-efficients from arbitrary context properties was introduced.The algorithm was applied for implemention of an imagecoder. The coder achieves results comparable to state of theart methods. However, the encoding is rather slow, so thepractical applications are limited to cases where the com-pression speed is irrelevant. Speed could be enhanced by abetter implementation of the parameter optimization step.

The method is able to automatically find usable weightsfor context properties by utilizing correlation calculations.The presented compression framework allows one to easilyevaluate different context property selections and thus helpsto develop new efficient compression methods.

References

[1] V. Bhaskaran and K. Konstantinides.Image and Video Com-pression Standards. Kluwer Academic Publishers, Dordrecht,The Netherlands, 1995.

[2] C. Chrysafis. Wavelet Image Compression Rate DistortionOptimizations and Complexity Reductions. PhD thesis, De-cember 1999.

[3] C. Chrysafis and A. Ortega. Efficient context-based entropycoding for lossy wavelet image compression. InDCC, DataCompression Conference, Snowbird, UT, March 1997.

[4] P. Fränti and J. Kivijärvi. Randomised local search algorithmfor the clustering problem.Pattern Analysis & Applications,3:358–369, 2000.

[5] L. Kaufman and P. J. Rousseeuw.Finding Groups in Data:An Introduction to Cluster Analysis. John Wiley & Sons, NewYork, 1990.

[6] J. B. McQueen. Some methods of classification and analysisof multivariate observations. InProc. 5th Berkeley Sympo-sium Mathemat. Statist. Probability, volume 1, pages 281–296, University of California, Berkeley, CA, 1967.

[7] A. Said and W. A. Pearlman. A new fast and efficient imagecodec based on set partitioning in hierarchical trees.IEEETransactions on Circuits and Systems for Video Technology,6:243–250, June 1996.

[8] M. Vettereli and J. Kovacevic. Wavelets and Subband Coding.Prentice Hall, Englewood Cliffs, NJ, 1995.

[9] Z. Xiong, K. Ramchandran, and M. T. Orchard. Space-frequency quantization for wavelet image coding.IEEETrans. Image Processing, 1997.

155

111

Page 119: Coding of Wavelet- Transformed Images - Amazon Web Servicesshare.jole.fi.s3.amazonaws.com/...Coding_of_Wavelet-Transformed_Images.pdf · The best lossy image compression meth-ods

112

Page 120: Coding of Wavelet- Transformed Images - Amazon Web Servicesshare.jole.fi.s3.amazonaws.com/...Coding_of_Wavelet-Transformed_Images.pdf · The best lossy image compression meth-ods

A parallel genetic algorithm for clustering

Juha Kivijarvi, Joonas Lehtinen and Olli Nevalainen

To appear in Kay Chen Tan, Meng Hiot Lim, Xin Yao and LipoWang (editors), Recent Advances in Simulated Evolution and Learn-ing, World Scientific, Singapore, 2004.

113

Page 121: Coding of Wavelet- Transformed Images - Amazon Web Servicesshare.jole.fi.s3.amazonaws.com/...Coding_of_Wavelet-Transformed_Images.pdf · The best lossy image compression meth-ods

114

Page 122: Coding of Wavelet- Transformed Images - Amazon Web Servicesshare.jole.fi.s3.amazonaws.com/...Coding_of_Wavelet-Transformed_Images.pdf · The best lossy image compression meth-ods

������� �� � �� �� � �� � ����� ���� � ��� � ��� ��� ��� ��� �� �� � !��"�#�

$ % & ' ( ) * +

& '& * & , , ) , - ) . ) ( /$ & , - 0 * /( % 1 2 0 * $ , 3 4( ) * /. -

5678 9 :; :< =>;: ? 5@ @A 8B CD7E:ADA 8AF GHH: I J K D;8H8:ADALMNOM PQRSNQ T UN PUVW MSQN XYZQR YQ [L \PX]

^ QW _NSV QRS UT `RT UNV _SZUR LQYaRUbUcd\RZe QNfZSd UT LMNOMg h iijk LMNOMg l ZR b_Rm

n oV _Zbp q Ma _rO ZeZq _NeZ sMSM rt

u8>8HHDH:v8E:@A @w xDADE:y 8Hx@>:E7z B {| }B~ 78B >DyD:;DF y@AB:FD>8�HD 8E�EDAE:@A :A >DyDAE �D8>B J � D8B@AB w@> E7 :B 8>D E7D 8;8:H8� :H:E� @w B6:E8� HDy@z �6E8E:@A 8H >DB@6>yDB 8AF E7D ADDF w@> B@H; :Ax 78>FD> �>@�HDz B :A>D8B@A 8�HD E:z D J �D FDBy>:� D 8 AD� �8>8HHDH BDHw�8F8�E:;D | } w@> B@H;:AxE7D F8E8 yH6BED>:Ax �>@�HDz J �7D 8Hx@>:E7z 6E:H:vDB :BH8AF �8>8HHDH:v8�E:@A 6B:Ax 8 xDAD�8A� z @ FDH ? :A � 7:y7 | } �>@yDBBDB y@z z 6A:y8ED � :E7D8y7 @E7D> E7>@6x7 E7D xDAD� 8A� �>@yDBB J �7:B z @ FDH 8HH@� B @AD E@ :z ��HDz DAE F:�D>DAE z :x>8E:@A E@� @H@x:DB :A 8A D8B� z 8AAD> J ��� D>:z DAEBB7@� E7 8E B:xA:� y8AE B� DDF6� :B >D8y7DF �� �8>8HHDH:v8E:@A J �7D D�DyE @wz :x>8E:@A � 8>8z DED>B :B 8HB@ BE6F:DF 8AF E7D FD;DH@�z DAE @w � @�6H8E:@AF:;D>B:E� :B D� 8z :ADF �� BD;D>8H z D8B6>DB ? B@z D @w � 7:y7 8>D AD� J

�� /���� � � ���� �� �� ��� ������ �  �¡¢ £��¤� ¥ �£ �� ¦���¦� § ¥���  £�� �¨ ¦§�§ ��� ���£ � �� § ¢© � �¤ �¨ ¥¤�¢ª £ �§¡¡�¦ «¬­®¯°±® �  £¢ �� § ² §³ �� §� £�© �¡§¤ ��� ���£ � �¡� ¥�� ��� £§© � �¡¢ £��¤ ² ��¤�§£ ¦�££�© �¡§¤ ��� ���£ §¤� �  ¦�´�¤� � � �£

1,2 µ � ��ª¤��¡�© §ª ª �§¤£ §£ © § ³ �§¤�§���  £ �   ¢© �¤�¢£ ¶ �¡¦£ �¨ £���  �� £¢�� §£¦§�§ ��© ª¤�££��  · ª §���¤  ¤���¥ ����  · �© §¥� § §¡³£�£ · © �¦��§¡ ¦§�§ § §¡³¸£�£ · ¦§�§ © � � ¥ · £� ��§¡ £���  ��£ · ���� ¨�¤© §���£ · ��� µ � �� ª¤��¡�© � £�§ ��£§¤� ��© © �  ¡³ ¡§¤¥� �  £���¤§¡ ¤�£ª ���£ ¹ ��� ¦�© � £�� §¡��³ �¨ ¦§�§ ��� ���£© §³ � � ��¥� · �� ��¤  ¢© � �¤ © §³ � � ���¢£§ ¦£ �¤ © �¡¡�� £ · § ¦ ���  ¢© ¸� �¤ �¨ �¡¢ £��¤£ © §³ � � £���¤§¡ �¢   ¦¤�¦£ µ � �¢£ ��� §© �¢ � �¨ ��© ª¢�§���  ��¦�¦ ¨�¤ ¶ ¦� ¥ £§��£¨§���¤³ £�¡¢��� £ �£ �¨��  ��¥� · ���  �¨ ��� ��ª � �¨¶  ¦� ¥ § �¤¢� ¥¡�� §¡ �ª��© ¢© �£ §� §  ¦� �¦ µ

º  ��� ª¤�£� � £�¢ ¦³ ²� �� £�¦�¤ ��� �§£� �¨ » ­ «¬¼½°¾¿ �¡¢ £��¤� ¥ µ º 

115

Page 123: Coding of Wavelet- Transformed Images - Amazon Web Servicesshare.jole.fi.s3.amazonaws.com/...Coding_of_Wavelet-Transformed_Images.pdf · The best lossy image compression meth-ods

������� �� � �� �� � �� ������ ���� � ��� � ��� ��� ��� ��� �� �� � !��"�#�

$ % & '(') *+(', $ % -./ 0'1.1 213 4%5 % 6 .(272'1.1

89:;<=> ?9: @ AB 9CC>D B ;E9; ;EB F9;9 GHI B=;C =9J H B =GJ C<FB:BF 9C 8 G<J;C <J9 K > =?<FB9J C89=B 9J F =9?=> ?9;<GJ GL 9:;<M =<9? =?>C;B: =BJ;B:C <C D B9J<JNL> ? OP>:;EB:D G:B @ ;EB J>D H B: GL =?>C;B:C <C BQ 8 B=;BF ;G H B RJGA J O S E<C C<;>9;<GJ<C D B; LG: BQ9D 8?B <J TB=;G: U>9J;<V9;<GJ3 O

S EB:B BQ<C;C 9 N:B9; J>D H B: GL 9?NG:<;ED C LG: =?> C;B:<JN4,5 O S EBCB =9J H B=?9CC<MBF 9C W XYZ[Z[\] X^ 9JF _ [`YXYa_ [aX^O b9:;<;<GJ9? 9?NG:<;ED C 9<D ;G F<cT<FB ;EB N<TBJ F9;9 <J;G 9 J>D H B: GL =?>C;B:C A EB:B9C E<B:9:=E<=9? D B;EG FCNBJB:9;B 9 E<B:9:=Ed GL =?>C;B:<JNC GL F<eB:BJ; C<VBC O b9:;<;<GJ9? 9?NG:<;ED C9:B =GD D GJ?d [Z`YXZ[f`@ < OB O ;EBd C;9:; A <;E 9J <J<;<9? CG ?>;<GJ 9JF <;B:9;<TB?d<D 8:GTB <; O g [`YXYa_ [aX^ D B;EG FC =9J H B F<T<FBF <J;G h[f[i[f` 9J F Xjj ^\k l`YXZ[f` D B;EG FC O S EBd 988?d C8?<; 9JF D B:NB G8 B:9;<GJC @ :BC8 B=;<TB?d@ >J;<?9 =?>C;B:<JN A <;E ;EB FBC<:BF J>D H B: GL =?>C;B:C E 9C H BBJ :B9=EBF6 O

m BJB:9? EB>:<C;<= CB9:=E ;B=EJ<U>BC7 E9TB N9<JBF 8 G8>?9:<;d <J CG?T<JNE9:F =GD H<J9;G:<9? G8;<D <V9;<GJ 8:GH?BD C 9JF =?>C;B:<J N <C J G; 9J BQ =B8c;<GJ O n <NE U> 9?<;d :BC>?;C E 9TB H BBJ :B8 G:;BF LG: B ON O i[k o^XZ`h X]] `X^[]j @ZXpo i`XYa_8 9JF BC8 B=<9??d j `] `Z[a X^j \Y[Z_ k i qm r Cs9,10 O tJ ;EB 8:BCBJ;C;>Fd AB =GJ=BJ;:9;B GJ m r C C<J=B ;EBd 9:B TB:d Be B=;<TB A E<?B C;<?? =GJ=B8c;>9??d C<D 8?B 9JF E9TB H BBJ CEGA J ;G 9=E<BTB BQ =B??BJ; :BC> ?;C <J =?> C;B:<J N8:GH?BD C11 O

m r C 8 B:LG:D iZ\a_ XiZ[a \W Z[k [u XZ[\] Hd 988?d<JN C;G =E9C;<= BTG?>;<GJ<J C8<:BF G8 B:9;G:C ;G 9 CB; GL =9JF<F9;B CG?>;<GJC O S EBCB G8 B:9;<GJ C <Jc=?>FB k oZXZ[\]@ aY\ii\f`Y 9J F i`^`aZ[\] O S EB:B 9:B CBTB:9? 8 :G8 B:;<BC A E<=EE9TB <J =:B9CBF ;EB 8 G8>?9:<;d GL m r C 9C 9 NBJB:9? L:9D BAG:R LG: CG?T<J NE9:F G8;<D <V9;<GJ 8:GH?BD C O S EB U>9?<;d GL CG?>;<GJ C LG>JF Hd m r C <C <JD 9Jd =9CBC BQ =B??BJ; O S EB D B;EG F <C 9?CG B9Cd ;G >J FB:C;9J F 9J F 9J BQ 9=;D 9;E BD 9;<=9? LG:D >?9;<GJ <C JG; JBBFBF v <; C>w =BC ;G FB;B:D <JB 9 C><;9H?B:B8:BCBJ;9;<GJ LG: ;EB <JF<T<F>9?C 9JF 9 8 B:;<JBJ; =:GCCGTB: G8 B:9;G: O

r ?? ;EB 9H GTB H BJBM;C @ EGABTB: @ 9:B JG; B9:JBF LG: L:BB x m r C GL;BJ C>LcLB: L:GD TB:d ?GJN :>JJ<JN ;<D BC CG ;E9; 9 =GD D GJ =GD 8?9<J; GJ ;EB<:>CBL>?JBCC FB9?C A <;E ;EB 8:9=;<=9?<;d GL ;EB 988:G9=E O S E<C F:9A H 9=R <C >JcFB:?<JBF <J D 9Jd 8:9=;<=9? 988 ?<=9;<GJC GL m r C A E<=E <J=?> FB =GD 8?<=9;BFGHI B=;<TB L>J=;<GJC G: ;<D B =GJ C;:9<J;C LG: 8:GH?BD CG?T<JN O tJ 9FF<;<GJ @;EB:B 9:B D 9Jd FBC<NJ 9?;B:J9;<TBC ;G =EG GCB L:GD @ 9J F ;EB MJ 9? Bw =<BJ =dGL;BJ C;:GJN?d FB8 BJ FC GJ FB;9<?C GL ;EB FBC<NJ 9JF 8 9:9D B;B: T9?>BC O

SG GTB:=GD B ;EB ?9;;B: F<w =>?;d@ 9F98;<TB m r C E9TB H BBJFBTB?G8 BF12,13 O r i` y lXhXW Z[f` j `] `Z[a X^j \Y[Z_ k y \Y a^oiZ`Y[]j qz{ |{ s <CFBC=:<H BF <J } BL O ~� O tJ ;E<C 9?NG:<;ED @ B9=E <J F<T<F> 9? =GJ;9<J C CBTB:9? 89c:9D B;B: T9?>BC <J 9FF<;<GJ ;G ;EB 9=;>9? CG?>;<GJ O �r m r A 9C FBD GJC;:9;BF;G H B TB:d :GH>C; 9JF ;G 9=E<BTB BQ =B??BJ; :BC> ?;C O S EB D 9<J F:9A H9=R GL

116

Page 124: Coding of Wavelet- Transformed Images - Amazon Web Servicesshare.jole.fi.s3.amazonaws.com/...Coding_of_Wavelet-Transformed_Images.pdf · The best lossy image compression meth-ods

������� �� � �� �� � �� � ����� ���� � ��� � ��� ��� ��� ��� �� �� � !��"�#�

$ % &'&( ()( *)+),-. $ (/ 0'-,12 3 0' 4(56,)'-+/ �789 : 978; < => 789 ?;@A BC@@=@ A 7=: 9 D E;B7C@F79?GH I J > FB9 K@;L @ 7; M 9 9F>N=?G O FBF??9?=PFM?9 D Q 8C > H C >=@A >9R9BF? =@79BS;@@9S79< OB; S9>>;B> ;@9 L;C?<9TO 9S7 7; M 9 FM ?9 7; B9<C S9 789 FS7C F? BC@@=@A 7=: 9 S;@>=<9BFM?GD UCB OB=N: FBG A;F? => 7; >O 99< C O VJ I J H MC7 =7 => F?>; =@79B9>7=@A 7; >99 L 89789BOFBF??9?=PF7=;@ ?9F<> 7; F?A;B=78: =S M 9@9W7> F> ; SSF>=;@ F??G >CAA9>79< D E;B F<=>SC >>=;@ ;X <=Y 9B9@7 : ; <9?> ;X O FBF??9?=P=@A I J > F@< F ?=79BFBG >CBR9GH >99Z 9X D [\ D ]@ Z 9X D [^ ;@9 SF@ W@ < =@N<9O78 : F789: F7=SF? F@F?G>=> ;@ <=Y 9B9@7F>O 9S7> ;X O FBF??9? I J > D

_ ` a bc defghi j k glm bfnQ 89 S?C >79B=@A OB;M?9: => <9W @9< F> X; ??;L > D I =R9@ F >97 ;X

N<F7F ;Mo 9S7>

xi

H OFB7=7 =;@ 789 < F7F >97 =@7;M

S?C >79B> =@ >CS8 F L FG 78F7 >=: =?FB ;Mo 9S7>FB9 AB;CO 9< 7;A9789B F@ < <=>>=: =?FB ;Mo 9S7> M 9?;@A 7; <=Y9B9@7 AB;CO> D p FS8;Mo 9S7

xi

8F>K q rstuvrw x

(k)

i

D Q 89 X9F7CB9> FB9 F>>C: 9< 7; M 9 @C: 9B=SF?F@ < ;X 789 >F: 9 >SF?9 D x FOO=@A

P<9W@9> F S?C >79B=@A MG A=R=@A X;B 9FS8

<F7F ;Mo 9S7x

i

789 =@ <9Tp

i

;X 789 S?C>79B =7 => F>>=A@9< 7; D ECB789B: ;B9 H9FS8 S?C>79B

j8 F> F yzuwtrv vr{ vrwr| tst}~r c

j

D�9 : 9F>CB9 789 �}ww}� }zsv}t� ��}wts| yr� M 97L99@ ;Mo 9S7>

xi

F@<x

j

MG789 � u yz}�rs| �}wts| yr

d(xi, x

j) =

K

k=1

(x(k)

i

− x(k)

j

)2. �[�UCB B9OB9>9@7F7=;@ ;X F >;?C7=;@ 7; 789 S?C>79B=@A OB;M?9: =@S?C<9> M ;78

: FOO=@A F@ < S?C >79B B9OB9>9@7F7=R9> H = D9 D F >;?C7=;@ => ;X 789 X;B:ω = (P, C)L 89B9

P = (p1, . . . , pN)

F@ <C = (c1, . . . , cM

)D Q 89 ;Mo 9S7=R9 => 7; W@< F

>;?C7=;@ L =78 : =@=: F? � rs| w �u svr rvv�v �x Vp � H L 8=S8 => SF?SC?F79< F>

e(ω) =1

NK

N

i=1

d(xi, c

pi)2. ���

I =R9@ F : FOO=@AP

H 789 ;O7=: F? S?C>79B B9OB9>9@7F7=R9> FB9 789 S?C>79Byr|tv�}�w

cj

=

Pi=j

xi

Pi=j

1, 1 ≤ j ≤ M, 1 ≤ i ≤ N. ���

Q 89 ;O7=: F?=7G ;X S9@7B;=<> ?9F<> 7; F >=: O?9 F@< L =<9?G C>9< S?C >79B=@A: 978; < H 789 � �� rs|w F?A;B=78:

17D ]7 =: OB;R9> F@ =@=7=F? >;?C7=;@ MG B9O 9F7N

9<?G B9SF?SC?F7=@A 789 S?C>79B B9O B9>9@7F7=R9> C>=@A p � D � F@< B9W@=@A 789: FOO=@A MG F>>=A@=@A 9FS8 ;Mo 9S7 L =78 789 S?C>79B L 8=S8 8 F> 789 @9FB9>7

117

Page 125: Coding of Wavelet- Transformed Images - Amazon Web Servicesshare.jole.fi.s3.amazonaws.com/...Coding_of_Wavelet-Transformed_Images.pdf · The best lossy image compression meth-ods

������� �� � �� �� � �� ������ ���� � ��� � ��� ��� ��� ��� �� �� � !��"�#�

$ % & '(') *+(', $ % -./ 0'1.1 213 4%5 % 6 .(272'1.1

89:89;9<=>=?@9 A B @9< =CDEFC =C9 89;E G=; DH =C9 IJK 9>< ; >89 E;E >GGL K D M9;= N?= ?; C?FC GL E;9HEG >; > C?GGJOG?K P?<F K 9=CD M ?< K D89 ODK :G?O>=9M >GFD8?=CK ; A

Q R STUVWX Y Z[\]^T _ T`T\]a X Ubcd]\e f Vcd g Uhi\Td]` bj C9 ;9GHJ>M>:=?@9 F9<9=?O >GFD8?=C K klm n m o >::G?9M C98914 E ;9; pq rpsprt uvvwswv xwvy zuru{ |u|p}q12,13 N ~ C989 9>OC ?<M?@?ME >G OD< ;?;=; DH > O><M?M>=9 ;DJGE=?D< =D =C9 :8DPG9K ><M > ;9= DH x|�u|w� � { u�u� w|w�xA m < ?< M?@?ME>G

ι?;

DH =C9 HD8K ι = (ωι, γ

ι, ψ

ι, µ

ι) N ~ C989 ω

ι= (P

ωι, C

ωι) ?; > ;DGE=?D< A j C9

?< OGE;?D< DH P D=C K >::?<F ><M 89:89;9<=>=?@9; >GGD~ ; E; =D K >I9 ;9@98>G;: 99M D:=?K ?�>=?D<; A j C9 ;=8>=9FL :>8>K 9=98; ?<OGE M9M ?< ι >89 ��}xx}sw�� w|� }r γ

ι

N � t|u|p}q { �}�u�pvp|� ψι

>< M q }pxw �uq� w µι

Aj C9 F9<98>G ;=8EO=E89 DH lm n m ?; ;CD~ ?< m GF A �A l?� O8D;;D@98 K 9=CD M;

>89 >@>?G>P G9 � 8>< MDK K EG=?: D?<= N O9<=8D?M M?;=><O9 N G>8F9;= :>8=?=?D<; N K EGJ=?: D?<= : >?8~ ?;9 N D<9J: D?<= :>?8~ ?;9 >< M : >?8~ ?;9 <9>89;= <9?FCP D8 O8D;;D@98 Am GG =C9;9 K 9=CD M; 9�:GD?= ;DK 9 :8DPG9K J;: 9O?� O I<D~ G9MF9 ?< ;=9>M DH OD<J;?M98?<F =C9 ;DGE=?D< ; >; :G>?< P?= ;=8?<F ; A m M9=>?G9M M9;O8?:=?D< ><M M?;JOE;;?D< DH =C9 >GFD8?=CK O>< P 9 HDE<M ?< � 9H A �� A

k�o n 9<98>=9 S 8><MDK ?<M?@?ME>G; =D HD8K =C9 ?<?=?>G F9<98>=?D< Ak�o �=98>=9 =C9 HDGGD~ ?<F

T=?K 9; A

k>o l9G9O= SB

;E8@?@?<F ?< M?@?ME >G; HD8 =C9 <9~ F9<98>=?D< AkP o l9G9O=

S − SB

:>?8; DH ?<M?@?ME>G; >; =C9 ;9= DH :>89<=; AkOo �D8 9>OC : >?8 DH :>89<=; (ι

a, ι

b) MD =C9 HDGGD~ ?<F �

? A � 9=98K ?<9 =C9 ;=8>=9FL : >8>K 9=98 @>GE9; (γιn, ψ

ιn, ν

ιn) HD8 =C9 D�J

;:8?<F ιn

PL ?<C98?=?<F 9>OC DH =C9K 8><MDK GL H8DK ιa

D8 ιb

A?? A � E=>=9 =C9 ;=8>=9FL :>8>K 9=98 @>GE9; DH ι

n

~ ?=C =C9 :8DP>P?G?=L Ψk> :89M9�<9M OD<;=><=o A??? A �89>=9 =C9 ;DGE=?D< ω

ιn

PL O8D;;?<F =C9 ;DGE=?D<; DH =C9 :>89<=; Aj C9 O8D;;?<F K 9=CD M ?; M9=98K ?<9M PL γ

ιn

A?@ A � E=>=9 =C9 ;DGE=?D< DH =C9 D� ;:8?<F ~ ?=C =C9 :8DP >P?G?=L ψ

ιn

A@ A mMM <D?;9 =D

ωιn

A j C9 K >�?K >G <D?;9 ?;ν

ιn

A@ ? A m ::GL IJK 9><; ?=98>=?D<; =D ω

ιn

A@?? A mMM ι

n

=D =C9 <9~ F9<98>=?D< AkM o � 9:G>O9 =C9 OE889<= F9<98>=?D< PL =C9 <9~ F9<98>=?D< A

k�o �E=:E= =C9 P 9;= ;DGE=?D< DH =C9 � <>G F9<98>=?D< AX Ubcd]\ef �� l9GHJ>M>:=?@9 F9<9=?O >GFD8?=CK HD8 OGE ;=98?<F klm n m o A

118

Page 126: Coding of Wavelet- Transformed Images - Amazon Web Servicesshare.jole.fi.s3.amazonaws.com/...Coding_of_Wavelet-Transformed_Images.pdf · The best lossy image compression meth-ods

������� �� � �� �� � �� � ����� ���� � ��� � ��� ��� ��� ��� �� �� � !��"�#�

$ % &'&( ()( *)+),-. $ (/ 0'-,12 3 0' 4(56,)'-+/ 7

8 9 :;<; ==>= ?>=@AB C;D EFG> H B @I< J =K LE><FMNOPQ RSQSTTUT V W XY Z[\] ^] _ P`U` abU cdeZf g h Z[Ze eiecj Zkclf m n oUT

18 p q bUQUQ

V W ` QPr sroUR Ur oUraTt Sr o unm m PrsuSaU q sab USub nabUQ v w bU RQn uU``U`SQU `UUr S` s`TSro` q bsub n uuS`snr STTt `Uro sr osxsoPST` XyUm szQSra`{ _ annabUQ s`TSro` v |U b SxU sm RTUm UraUo s`TSr o RSQSTTUTs}Sasnr P`srz S ~ if i�Zf�� lgiev �r abU zUrU� Sr� m n oUT p sr`aUSo n� `Urosrz Um szQSra` osQUuaTt an nabUQs`TSro` p s`TSr o` unm m PrsuSaU nr Tt q sab abU ~ if i�Zf� h [l�iddv w bU zUrU�Sr�RQn uU`` m SsraSsr ` S ~ if i�Zf� p S R nR PTSasnr n� abU � U`a

BsrosxsoPST` QU�

uUsxUo �Qnm s`TSro` v �nQ abU unm m P rsuSasnr RPQR n`U` p abQUU `aUR` rUUo an� U SooUo an �W V W p `UU W Tz v � v w bU zUrU� Sr� RQn uU`` p `UU W Tz v � p QU�P sQU`xUQt TsaaTU RQn uU``nQ asm U Sr o abP` s� U vz v

QRQn uU``nQ` SQU SxSsTS� TU p sa unPTo

� U QPr sr `soU n� nrU n� abU s`TSr o RQn uU``U` v

X�_ XU_ �Uro Sr srosxsoP ST an abU zUrU� Sr� vX� _ � UuUsxU Sr sr osxsoP ST �Qnm abU zUrU�Sr� Sro Soo sa an abU uPQQUra

R nRPTSasnr vXz _ � Um nxU Sr srosxsoPST �Qnm abU uPQQUra R nRPTSasnr v

B =N I<FE� � � � �aUR ` SooUo an �W V W �nQ s`TSro RQn uU``U` v

X�_ �UTUua un nQosrSaU`κ

q

�nQ USub s`TSroq

vX�_ � UR USa ab U �nTTnq srz `aUR` P rasT S `anRRsrz unrosasnr s` �P T� TTUo v

XS_ �TUUR PrasT Sr s`TSr o RQn uU``r

m S�U` S unm m PrsuSasnr QU�PU`a vX� _ � UuUsxU Sr sr osxsoP ST

ιr

�Qnmr

vXu_ �UTUua Sr sr osxsoP ST

ιs

�Qnm abU zUrU�Sr� vXo_ �Uro

ιs

an s`TSror

vXU _ W o o

ιr

an abU zUrU� Sr� vX� _ �� abU zUrU� Sr� unraSsr `

B + 1srosxsoPST` p QUm nxU abU qnQ`a sros�

xsoPST �Qnm abU zUrU� Sr � vX�_ � UaPQr abU `nTPasnr n� ab U � U`a sr osxsoP ST sr abU zUrU� Sr� v

B =NI<FE� � � � V UrU� Sr� RQn uU`` v

� Sub s`TSro RQn uU`` s` S``szr Uo S aqn�osm Ur`snrST un nQosrSaUκ

i=

(xi, y

i) p q b UQU

0 ≤ xi, y

i≤ 1 p unQQU`R nrosrz an abU y Tn uSasnr n� abU s`TSro

119

Page 127: Coding of Wavelet- Transformed Images - Amazon Web Servicesshare.jole.fi.s3.amazonaws.com/...Coding_of_Wavelet-Transformed_Images.pdf · The best lossy image compression meth-ods

������� �� � �� �� � �� ������ ���� � ��� � ��� ��� ��� ��� �� �� � !��"�#�

� $ % & '(') *+(', $ % -./ 0'1.1 213 4%5 % 6 .(272'1.1

i8 9 : ;< =>?@ >A @BCD<EFGH AB>I E> =C@F>G κi

@> κj

F? J<KG<J C? L

dt(κ

i, κ

j) =

min(|yi− y

j|, 1 − |y

i− y

j|)2+

max

(

min(xi−xj

w

, 1 − xi+ x

j),

min(xj− x

i,

1+xi−xj

w

)

)2

. MN O

: ;< PQRSTUQVW TVWURVX Y ZRZ[ SUSR w ∈ [0, 1] =>G@B>E? ;>\ I ] =; @BCD<EFGHAB>I E<A@ @> BFH;@ F? ACD>B<J 9 : ;< ?I CEE<B @;< DCE]< >A w ^ @;< ?@B>GH<B @;<FI _CECG =< FG JFB<=@F>G? 9 `<@@FGH w = 1 HFD<? G> <I a;C?F? >G @;< JFB<=@F>GCGJ w = 0 M\ ;F=; ?;>]EJ _ < FG@<BaB<@<J ?> @;C@ min(a

0, b) = b

O =>I aE<@<EbA>B_FJ? @BCD<EFGH AB>I BFH;@ @> E<A@ 9c >@< @;C@ FA CEE @;< FGJFDFJ] CE? FG @;< H<G<_CGd >BFH FG C@< AB>I @;< F?ECG JI CdFG H @;< =>I I ]GF=C@F>G B<e]<?@ ^ @;< H<G<_ CGd aB> =<?? FGA>BI ? @;< F?ECG JC_ >]@ @; F? CG J ?<GJ? G> FGJFDFJ]CE 9 f ? C B<?]E@ >A @; F? ^ @;< F?ECG J a B> =<???dFa? @;< ?@<a? g MA O CGJ g MH O 9

: ;< J<AC] E@ ^ B<?]E@FGH FG QhXZW P UVY VXVi j^ F? @> =;> >?< @;< => >BJFGC@<? κi

BCGJ>I Eb^ E<@ w = 1 CGJ ]?< RVkXSUUS lm SSX h SXSTUQVW \ F@; \ <FH;@? 1

dt(κs,κr)A>B ?<E<=@FGH CG FGJFDFJ] CE ιs

@> _ < ?<G@ @> F?ECGJ r AB>I @;< H<G<_CGd 9n <B< κs

F? @;< E> =C@F>G >A @;< F?ECGJ ιs

>BFHFGC@<? AB>I 9 : ;< FG JFDFJ] CE?>BFH FG C@FGH AB>I F?ECGJ r CB< G>@ =>G?FJ<B<J FG ?<E<=@F>G 9

: ;F? I > J<E F? ?]o =F<G@Eb H<G<BCE @> CEE>\ ]? @> <I aE>b ?<D<BCE CE@<BpGC@FD< G<@\>Bd @>a >E>HF<? 9 q>B <r CI aE< ^ @;< @BCJF@F>G CE RQWi UVY VXVi j F?C=;F<D<J _b ?<@@FGH κ

i= ( i−1

Q−1, 0) A>B i = 1, . . . , Q 9 : ;< JFB<=@F>G =>G@B>E

aCBCI <@<B w =>G@B>E? \ ;<@;<B @;< BFGH I > J<E F? ]GFJFB<=@F>GCE Mw = 0O >B

_FJFB<=@F>GCE Mw = 1O 9 q]B@;<BI >B< ^ F?ECG J i F? CEE>\<J @> B<=<FD< CG FGp

JFDFJ]CE AB>I F?ECGJ j >GEb FA dt(κ

i, κ

j) ≤

1

Q−1M@;< JF?@CG=< _ <@\<<G

G<FH;_ >BFGH F?ECGJ?O 9 sVRkh UVY VXVi j^ \ ;<B< <C=; aB> =<??>B F? =>GG<=@<J @>A>]B G<FH;_ >BFGH aB> =<??>B? ^ =CG _ < B<CEFt<J _b C ?FI FECB ?<@@FGH FG @\>JFI <G?F>G? 9u <HCBJFGH @;< =EC??FK =C@F>G >A a CBCEE<E F?ECGJ v f ? HFD<G _b ` 9pw 9 xFGSU ZXy19 ^ >]B aCBCEE<E `f v f F? CG ZhjW Tm RVW Vkh m SUSRVi SW SVkh QhXZW P z{l QUm Z hUZUQT TVW W STUQVW hTm S[ S9 : ;< A>EE>\ FG H ?<@@FGH? A>B @;< I FHBC@F>GaCBCI <@<B? CB< CaaEF<J L• I FHBC@F>G BC@< L >G< FGJFDFJ] CE I FHBC@<?• I FHBC@F>G AB<e]<G =b L I FHBC@F>G > ==]B? >G=< FG <C=; H<G<BC@F>G• I FHBC@F>G @>a >E>Hb L CJ| ]?@C_E<• I FHBC@F>G a >EF=b L ?<D<BCE a >EF=F<? ;CD< _ <<G FI aE<I <G@<J ^ ?<< `<= 9 } 9

~]B I > J<E ?>I <\ ;C@ B<?<I _E<? @;< F?ECGJ I > J<E v f \ F@; C I C?@<B aB>p

120

Page 128: Coding of Wavelet- Transformed Images - Amazon Web Servicesshare.jole.fi.s3.amazonaws.com/...Coding_of_Wavelet-Transformed_Images.pdf · The best lossy image compression meth-ods

������� �� � �� �� � �� � ����� ���� � ��� � ��� ��� ��� ��� �� �� � !��"�#�

$ % &'&( ()( *)+),-. $ (/ 0'-,12 3 0' 4(56,)'-+/ 7

89::;< = :9> ?@ A B C B D E<FG HI JKL20 B M ;N9O9< P QR9@ >FS 9< FG QR<99 FT U ;<QEGQE:U 98Q: B A F<:Q P ;=< T ; >9V EVV;N : 9T =VEQFGW :9O9<EV >FS9<9GQ Q;U ;V;WF9: ?@E>X =:QFGW QR9 E8QFO FQF9: ;Y QR9 8;GQ<;V U<; 89:: B ZQ EV:; EUUVF9: E:@G 8R<;G;= :9T FW<EQF;G P N RF8R :FT UVF[9: QR9 T EG EW9T 9GQ ;Y QR9 F:VEG > U<; 89::9: EG>E>>: Q; 9\ 8F9G 8@B A FG EVV@P QR9 ] ^ : ;G QR9 F:VEG>: E<9 :9VY_E>EUQFO9 B ` R9E>EUQFO9 UE<EVV9V ] ^ ?@ a B `;GW 8RFT EG> bB cR;GW:QFQOEQEGE21 F: EV:;d= FQ9 >FS9<9GQ B ` R9F< EVW;<FQRT =QFVFe9: U ;U=VEQF;G V9O9V E>EUQEQF;G N R9<9E:bE<a^ ] ^ E>EUQ: ;G FG>FOF>= EV V9O9V B A=<QR9<T ;<9 P bE<a^ ] ^ EUUVF9: E O9<@f9g F?V9 G 9FWR? ;<R;; > Q;U ;V;W@B

h O9G QR;=WR QR9 T 9QR; > :8EV9: =U N9VV P ;G9 8;=V> 8;G:F>9< E 8E:9 N FQRE O9<@ VE<W9 G=T ? 9< ;Y ;?X 98Q: P YE:Q U<; 89::;<: EG> E :V;N 8;T T =GF8EQF;G:G9QN;<i B ` R9G P U<;?V9T : T E@ ? 9 8E= :9> ?@ QR9 Q<EG:T F::F;G ;Y QR9 ;?X 98Q_Q;_8V=:Q9< T EUUFGW ;Y :Fe9 Θ(N) N RF8R R EUU 9G: FG 9E8R 8;T T =GF8EQF;G BA;<Q=G EQ9V@P ;G9 8EG :U 99> =U QR 9 8;T T =GF8EQF;G :FT UV@ ?@ >F:8E<>FGW QR9T EUUFGW Y<;T QR9 :9GQ FG >FOF>= EV EG > <98EV8= VEQFGW FQ 9E8R QFT 9 EG F:VEG><989FO9: EG FG >FOF>= EV B

j k lmnmopmoqnr s tnpuvtp wxv my t zprn{| s x |tr` R9 ;U 9<EQF;G ;Y E U E<EVV9V ] ^ 8EG ? 9 9OEV=EQ9> ?@ } H~ �I�� �� � HJ���H� ;<� � H~ �I�� �� � HJ���H�B ] 9G;Q@UF8 T 9E:=<9: 8;G:F>9< QR9 >EQE <9U<9:9GQEQF;G;Y ;?X 98Q: N R9<9E: UR9G;Q@UF8 T 9E:=<9: 8;G:F>9< U<;U 9<QF9: ;Y :;V=QF;G : PFG U<E8QF89 = := EVV@ QR9 [QG9:: B D B c EU 8E<<�<9 HI JKL22 REO9 8;G :F>9<9> QR98E:9 ;Y 89VV= VE< UE<EVV9V ] ^ : EG> >9[ G 9> :9O9<EV T 9E:=<9: Y;< >9:8<F?FGW QR9E>OEG 89T 9GQ ;Y EG 9O;V=QF;GE<@ EVW;<FQRT B ` R9 W9G;Q@U F8 T 9E:=<9: P FG 8V=>_FGW Y<9d=9G 8@ ;Y Q<EG :FQF;G : P 9GQ<;U@ ;Y U ;U=VEQF;G EG> >FO9<:FQ@ FG>F89: P E<9<9VEQ9> Q; QR9 G=T ? 9< ;Y >=UVF8EQ9 :;V=QF;G: B ` R9 UR9G;Q@UF8 T 9E:=<9: FG_8V=>9 U 9<Y;<T EG 89 �F B9 B EO9<EW9 9<<;< ;Y :;V=QF;G:� P >FO9<:FQ@ EG> <=WW9>G9:: PN RF8R T 9E:=<9: QR 9 >9U 9G >9G8@ ;Y FG >FOF>= EV �: [QG9:: Y<;T FQ: G9FWR? ;< �:[QG9:: B

` R9 W9G;Q@UF8 T 9E:=<9: :99T G;Q Q; ? 9 EUUVF8E?V9 Q; EG F:VEG> T ; >9VN FQR O9<@ Y9N >=UVF8EQ9: P EG > N 9 QR9<9Y;<9 WFO9 G9N T 9E:=<9: Y;< QR9 F:VEG>T ; >9V B �G QR9 ;QR9< R EG > P QR9 UR9G;Q@UF8 T 9E:=<9: E<9 EUUVF8E?V9 EV:; Y;<QR9 8;E<:9_W<EFG9> 8E:9 EG> N9 <98EVV QR9T :R;<QV@B

�9 E::=T 9 QR EQ QR 9<9 E<9Q

F:VEG>: P EG> F:VEG>q

RE: E U ;U=VEQF;G ;Y:Fe9

sq

B ` R9 FG>FOF>=EV: ;G F:VEG>q

E<9Iq

= {ιq,i|i = 1, . . . , s

q}

B

121

Page 129: Coding of Wavelet- Transformed Images - Amazon Web Servicesshare.jole.fi.s3.amazonaws.com/...Coding_of_Wavelet-Transformed_Images.pdf · The best lossy image compression meth-ods

������� �� � �� �� � �� ������ ���� � ��� � ��� ��� ��� ��� �� �� � !��"�#�

$ % & ' ()(* +,)(- % & ./0 1(2/2 324 5&6 & 7 /)383(2/29 :;: <=> ?@AB CD E =FGHI=GJ KLMNOPQR S KTUVWKU XKTY Z QN[ N[K WKPWKUKLNTNQML M\ QLXQ]QXV TYU ^ _ [KO TWKN[KWK\MWK UP KRQ` R NM N[K PTWNQRVYTW PWMaYKS QL bVKUNQML TLX NM N[K RM XQLcM\ QLXQ]QXV TYU ^ dL MWXKW NM XK`LK MVW cKLMNOPQR S KTUVWKU ZK `WUN LKKX NMXK` LK N[K XQUUQS QYTWQNO M\ NZM QLXQ]QXVTYU

ι1TLX

ι2^ eK QcLMWK N[ K UNWTNKcO

PTWTS KNKW ]TYVKU TL X RMLRKLNWTNK ML RTYRVYTNQL c N[K XQf KWKLRK a KNZKKLUMYVNQMLU

ωι1

TLXω

ι2

^ g O XK`LQLc T aQh KRNQ]K ijjkl m n ompa1↔2(ωι1

, ωι2

) =

〈i, α1↔2(i)〉(i = 1, . . . , M)\MW N[K RYV UNKWU QL N[K UMYVNQMLU q ZK RTL RTYRVYTNK

N[K XQUUQS QYTWQNO M\ QLXQ]QXV TYUι1

TLXι2

UQS P YO aO UVS S QLc VP N[K XQUNTL RKUa KNZKKL N[K WKPWKUKLNTNQ]KU M\ N[K TUUM RQTNKX RYV UNKWU r

δb(ι1, ι2) =

M

i=1

d(cωι1

,i, c

ωι2,α1↔2(i)

) stuZ [KWK

cωι1

,i

QU N[K WKPWKUKLNTNQ]K M\ N[KiN[ RYVUNKW QL N[K UMYVNQML M\

ι1^ _ [K

PWMaYKS QL VUQLc v b ^ t QU N[K PWMP KW UKYKRNQML M\ N[K TUUQcLS KLN ^ _ [K LTNwVWTY R[MQRK ZMVYX a K N[K TUUQcLS KLN WKUVYNQLc NM N[K US TYYKUN XQUUQS QYTWQNO^x L\MWNVLTNKYOq N[K P WMaYKS M\ ` L XQLc N[K MPNQS TY TUUQcLS KLN QU XQy RVYN ^zLK RMVYX UKNNYK \MW T [KVWQUNQRTYYO UKYKRNKX UVa MPNQS TY TUUQcLS KLN q aVNN[QU ZMVYX S T{K N[K XQUUQS QYTWQNO S KTUV WK XKP KL X ML N[K UKYKRNQML M\ N[K[KVWQUNQR ^

_ [QU PWMaYKS RTL a K UMY]KX aO TaTLXMLQL c N[K XKS TL X \MW aQh KRNQ] QNO^eK XK` LK TUUQcLS KLN

a1→2

UM N[TN KTR[ RYVUNKW M\ω

ι1

QU TUUQcLKX Z QN[ N[KLKTWKUN RYV UNKW QL

ωι2

S KTUVWKX aO N[K XQUNTLRK a KNZKKL RYV UNKW WKPWKUKLNTwNQ]KU ^ | UUQcLS KLN

a2→1

QU XK`LKX RMWWKUP MLXQLcYO^ } MZ ZK RTL XK` LK N[KXQUUQS QYTWQNO a KNZKKL

ι1TLX

ι2TU N[K T]KWTcK M\ N[ K XQUNTLRKU RTYRVYTNKX

VUQLc N[KUK NZM TUUQcLS KLNU r

δi(ι1, ι2) =

1

2

[

M

i=1

d(cωι1

,i, c

ωι2,α1→2(i)

)

+

M

i=1

d(cωι2

,i, c

ωι1,α2→1(i)

)

]

. s~u_ [QU QU T RMS PVNTNQMLTYYO \KTUQaYK XK`LQNQML UQLRK N[K NZM TUUQcLS KLNU RTLa K XKNKWS QLKX QL

O(M2K)NQS K ^

| RMS PYKNKYO XQfKWKLN TPPWMTR[ NM XK` LQLc XQUUQS QYTWQNO QU NM RML RKLwNWTNK ML S TPPQLcU QLUNKTX M\ RYV UNKW WKPWKUKLNTNQ]KU ^ | UNWTQc[N\MWZ TWX Z TONM XK` LK XQUUQS QYTWQNO VUQLc S TPPQLcU QU NM XK`LK TL

N ×NaQL TWO S TNWQ�

B(ι)

\MW S TPPQLcP

ωι

UM N[TNB

(ι)

i,j

= 1Qf

pωι,i

= pωι,j

q Q ^K ^ Mah KRNUi

TLXjTWK S TPP KX NM N[K UTS K RYVUNKW QL UMYVNQML

ωι

^ _ [K S TPPQL c XQUUQS QYTWQNO

122

Page 130: Coding of Wavelet- Transformed Images - Amazon Web Servicesshare.jole.fi.s3.amazonaws.com/...Coding_of_Wavelet-Transformed_Images.pdf · The best lossy image compression meth-ods

������� �� � �� �� � �� � ����� ���� � ��� � ��� ��� ��� ��� �� �� � !��"�#�

$ % &'&( ()( *)+),-. $ (/ 0'-,12 3 0' 4(56,)'-+/ �

78 9:7 ;< =;>;=? @AB ;B 9CD <?E F DG 78 =;HDG;<I DADE D<9B ;< 9CD J7GGDBK 7< =;<IE @9G;JDB L

δm

(ι1, ι2) =N

i=1

N

j=1

|B(ι1)

i,j

− B(ι2)

i,j

|. MNOP ?D 97 9CD B;QD 78 9C D E @9G;JDB 9C D J@AJ?A@9;7< 78 9C;B =;BB;E ;A@G;9R E D@B?GD9@SDB

O(N2) 9;E D TU 7: :D J@< =DV <D 9CD WXYZW[ Y \]^^]_ ]`WZ]abA(δ)(q) 78 ;BA@< =

q@B

A(δ)(q) =

sq∑

i=1

i−1∑

j=1

2δ(ιq,i

, ιq,j

)

sq(s

q− 1)

McO@< = 9CD WXYZW[ Y \]^^]_ ]`WZ]ab de af Y ]^`Wg \ _ d\Y` @B

A(δ) =

Q

q=1s

q(s

q− 1)A(δ)(q)

Q

q=1s

q(s

q− 1)

. MhOi DGD @<R B?;9@FAD =;BB;E ;A@G;9R E D@B? GD

δJ@< F D @KKA;D= GDB?A9;<I 97 D TI T

WXYZW[ Y W^^][ g_ Yga \]^^]_ ]`WZ]abA(δi) @<= WXYZW[ Y _ Wjj ]g[ \]^^]_ ]`WZ]ab

A(δm) T k CD @>DG@ID =;BB;E ;A@G;9R E D@B?GDB 9CD =;>DGB;9R 78 9CD ;<=;>;=?@AB TlF>;7? BARm ;9 9D< =B 97 QDG7 @B 9CD K 7K?A@9;7< J7<>DGIDB 97 BD>DG@A J7K;DB 78@ B;<IAD ;< =;>;=? @A T n R 7F BDG>;<I 9CD =D>DA7KE D<9 78

A(δ) :D J@< ID9 @<?<=DGB9@<=;<I 7< 9CD BK DD= 78 J7<>DGID<JD T

o pq p r s tu vwxy z{ | t}~��t~�;< JD � � T � ;B 9CD 7K9;E ;Q@9;7< JG;9DG;7< 78 9CD B7A?9;7< m :D J@< D>@A? @9D9CD K 7K? A@9;7< 78 @< ;BA@<= FR J@AJ?A@9;<I 9CD WXYZW[ Y \]^adZa]dg dg ]^`Wg \q

A(e)(q) =1

sq

sq∑

i=1

e(ωιq,i

) M��O@< = 9CD ^aWg \WZ\ \YX]Wa]dg de \]^adZa]dg dg ]^`Wg \

qL

σ(e)(q) =

1

sq

sq∑

i=1

(

A(e)(q) − e(ωιq,i

))2

. M��O�?G9CDGE 7GD m :D J@< =DV <D 9CD WXYZW[ Y \]^adZa]dg de af Y ]^`Wg \ _ d\Y`

A(e) =

Q

q=1s

qA(e)(q)

Q

q=1s

q

M��O

123

Page 131: Coding of Wavelet- Transformed Images - Amazon Web Servicesshare.jole.fi.s3.amazonaws.com/...Coding_of_Wavelet-Transformed_Images.pdf · The best lossy image compression meth-ods

������� �� � �� �� � �� ������ ���� � ��� � ��� ��� ��� ��� �� �� � !��"�#�

�� $ % & '(') *+(', $ % -./ 0'1.1 213 4%5 % 6 .(272'1.1

89 : ;<= >?@A B@CB BDEF@?FGA GH BF>?GC?FGA GH ?I D F>J@A B K GBDJ

σ(e) =

Q

q=1s

q

[

(σ(e)(q))2 +(

A(e)(q) − A(e))2

]

Q

q=1s

q

. LMNOP <= 8Q=R8S= :TU;VR;TV9 STQ=U T9WVRX 8;TV9 V9 ;<= YRVSR=UU VW ;<= 8ZSVRT;<X89: T; [89 \ = [VX Y 8R=: ;V ;<= :TU;VR;TV9 VW ;<= \ =U; UVZ];TV9 WVR 8 RV]S<QT=^ V9 :TQ=RUT;_` P <= U;89:8R: :=QT8;TV9 VW :TQ=RUT;_ X =8U]R=U Y<=9V;_YT[:TQ=RUT;_ X VR= 8[[]R8;=Z_`a b c def ghea bib jklm lkmmnopq= <8Q= ]U=: WV]R :Tr=R=9; ;=U; YRV\ Z=X U s U== P8\Z= M` P <R== VW ;<=XVRTS T9 8;= WRVX ;<= t=Z: VW Q=[;VR u] 89;Tv8;TV9 89: V9= WRVX 8 \TVZVST[8Z8YY ZT[8;TV9 VW [Z]U;=RT9S ` w CFBx D [V9UTU;U VW 4×4 YTy=Z \ ZV [z U U8X YZ=: WRVX8 SR8_{U[8Z= TX 8S= ^ T;< TX 8S= :=Y;< VW | \ T;U Y =R YTy =Z ` w CFBx D} < 8U ;<=\ZV [z U VW w CFBx D 8W;=R 8 ~ P �{ZTz= u]89;Tv8;TV9 T9;V ;^V Q8Z]=U 8[[VR:T9S ;V;<= 8Q=R8S= YTy=Z Q8Z]= VW 8 \ZV [z ` P <= [Z] U;=R R=YR=U=9;8;TQ=U WVR w CFBx D}8R= RV]9:=: ;V \T98R_ Q=[;VRU ` �@?D> K @CF@D [V9;8T9 U :8;8 WRVX Y =Z8ST[t U<=U VW � 8z= P89S89_Tz8 ` P <= :8;8 VRTST98;=U WRVX 8 R=U=8R[< s T9 ^ <T[<;<= V [[]RR=9[= VW �� :Tr=R=9; � � � WR8SX =9;U ^ 8U ;=U;=: WVR =8[< t U< U8X {YZ= ]UT9S � � � � 898Z_UTU 89: 8 \T9 8R_ :=[TUTV9 ^ 8U V\;8T9=: ^ <=;<=R ;<=WR8SX =9; ^ 8U YR=U=9; VR 8\U=9; ` P <= [Z]U;=R R=YR=U=9;8;TQ=U 8R= R=8Z Q=[{;VRU ` � F>> � K DCF�@ <8U \ ==9 V\;8T9=: \_ U]\;R8[;T9S ;^V U]\ U=u]=9; TX 8S=WR8X =U VW 8 QT:=V TX 8S= U=u]=9[= 89 : [V9U;R][;T9S 4 × 4 YTy=Z \ZV [zU WRVX;<= R=UT:]8ZU `

��� � �� � �� ��"���" �� ��� ��"� !��� �� "���� "�� ���������" ��� ���" � �"���"� +'3� . �� ��� ��� +'3� .� �� ��� ��-20.� � 2+'2. � �� �� '�� � � .+'�2 �� ��� ��

P <= Y8R8X =;=RU WVR �� � � 8R= ;<= :=W8]Z; Y 8R8X =;=RU WRVX � =W ` M  ¡Y8R8X =;=R X ];8;TV9 YRV\8\TZT;_ Ψ = 5% s 9]X \ =R VW z{X =89U T;=R8;TV9 UG = 2 s Y VY]Z8;TV9 UTv=

S = 45 89: ;<= RV]Z=;;= ^ <==Z U=Z=[;TV9 X =;<V : `P <= ;=U;U ^=R= R]9 V9 8 UT9SZ= ;^ V{YRV [=UUVR L ¢¢ £ ¤ v =8[< O [VX Y];=R]UT9S M¢ TUZ89: Y RV [=UU=U ;<]U =X ]Z8;T9S ;<= UT;]8;TV9 VW M¢ T9;=R[V99=[;=:

124

Page 132: Coding of Wavelet- Transformed Images - Amazon Web Servicesshare.jole.fi.s3.amazonaws.com/...Coding_of_Wavelet-Transformed_Images.pdf · The best lossy image compression meth-ods

������� �� � �� �� � �� � ����� ���� � ��� � ��� ��� ��� ��� �� �� � !��"�#�

$ % &'&( ()( *)+),-. $ (/ 0'-,12 3 0' 4(56,)'-+/ ��

789 :;<=>? @ A B= 789 9 ;CD7E<D8C 78?<? E>= F=>G H8I 789 : E>=J <8 <B= 789 K:;<E<D8CEH 78 ?<? L ?8 <B= >=?;H<? ?B8; HJ M = I=HH 789 : E>EMH= <8 <B= ?D<; E<D8CI D<B EC E7<;EH 789 :;<=> C=<I8>N @ A B= E9 8;C< 8O :>8 7=??8> <D9 = 78C?;9 =JMG =E7B :>8 7=?? I E? 78C ?DJ=>=J DC ?<=EJ 8O <B= >=EH <D9 = @

A B= ?<E<D?<D7EH ?DPCDQ 7EC7= 8O JDR =>=C7=? BE? M ==C F=>DQ=J MG S<; J=C< T?<K<=?< L

p < 0.05 @

U VW V XYZ[ \YZ] ^[Z_E>S` a ` I E? 789 : E>=J <8 ?=F=C 8<B=> 7H; ?<=>DCP EHP8>D<B9 ? L ?== AEMH=b @ A B= >=?;H<? 8O NK9 =EC ?17,23 EC J cdefg hcdif jklhm hdien oSp q24 E>= EF=>KEP=? 8O rss >;C? I D<B >EC J89 DC D<DEHDtE<D8C? @ u DFD?DF= EC J EPPH89 =>E<DF=BD=>E>7B D7EH 9 =<B8 J? E>= >=?: =7<DF=HG >=:>=?=C<=J MG cv liddinw x kdg ey z idglefhl jkv hjdidieninw oS{ p q25 EC J |hjy }c x kdg ey26 @ A B=?= >=?;H<? I=>= C8<>=: =E<=J ?DC 7= <B= 9 =<B8 J? E>= J=<=>9 DCD?<D7 @ A B= >=?;H<? 8O jhn yex ickylefhl ckhjfg op { SKbq27 L P=C=<D7 EHP8>D<B9 oa ` q11 L ?=HOKEJE:<DF= P=C=<D7 EHKP8>D<B9 oS` a ` q14 EC J _ E>S` a ` E>= EF=>EP=? 8O bs DCJ=: =C J=C< >;C? @ a `ECJ S` a ` I=>= >;C rsss P=C=>E<D8C ? I D<B E : 8:;HE<D8C 8O ~� DCJDFDJ;KEH? @ _E>S` a ` I E? >;C rss P=C=>E<D8C? I D<B rs D?HECJ? 8O ~� DCJDFDJ;EH? @A B= >=?;H< 8O E ?DCPH= _ E>S` a ` >;C D? <B= � S� 8O <B= M =?< ?8H;<D8C DC <B=P=C=MECN EO<=> EHH D?HEC J? BEF= 789 :H=<=J <B=D> >;C @

�= 8M ?=>F= <B E< _ E>S` a ` E7BD=F=? >=?;H<? ?D9 DHE> <8 S` a ` L D @= @ >=?;H<?8O =� 7=HH=C< �;EHD<G I B=>=E? ?D9 :H=> 9 =<B8 J? PDF= 78C ?DJ=>EM HG I=EN=> >=K?;H<? @ ` C =� 7=:<D8C B=>= D? � hdkc x hjihk O8> I BD7B L DC EJJD<D8C <8 <B= a ` ? Lp { SKb E>>DF=? =E7B <D9 = E< <B = ?E9 = >=?;H< @ � BDH= <B=>= D? C8 ?DPCDQ 7EC<JDR =>=C 7= M =<I==C <B= >=?;H<? 8O _E>S` a ` ECJ S` a ` L <B= JDR=>=C7= M =K<I==C _ E>S` a ` EC J a ` D? ?DPCDQ 7EC< 8C 8<B=> ?=<? <BEC � hdkc x hjihkop = 4.6 ∗ 10−10 L 7.3 ∗ 10−10 L 3.0 ∗ 10−9 O8> � jiyw kL � jiyw k� EC J � icc � x kj�

ifhL >=?: =7<DF=HG q @ � B=C 789 :E>DCP <8 8<B=> 9 =<B8 J? L <B= ?DPCDQ 7EC 7= 8OJDR =>=C 7= D? 8MFD8;? @ � 8I=F=> L �E>J T? 9 =<B8 J >=E7B=J EH9 8?< E? P8 8 J E>=?;H< O8> <B= =E?G :>8M H=9 �hdkc x hjihk@ �8> <B= 8<B=> :>8MH=9 ? �E>J T?9 =<B8 J I E? H=?? ?; 77=??O;H @

A B= >=?< 8O <B= >=?;H<? E>= O8> � jiyw k 8CHG EC J <B=G E>= EF=>EP=? 8Obs >;C? 8O rss P=C=>E<D8C ? EC J rs D?HECJ? 8O ~� DCJDFDJ;EH? E? EM 8F= @ S==S=7 @ ~ O8> <B= J=OE;H< 9 DP>E<D8C <8: 8H8PG :E>E9 =<=>? @ AEMH= � 789 :E>=?JDR =>=C< 9 DP>E<D8C : 8HD7D=? @ �=?< >=?;H<? E>= E7BD=F=J MG ?=CJDCP <B= M =?<DCJDFDJ; EH? ECJ >=:HE7DCP <B= I8>?< L E? 8C= I8;HJ =�: =7< @ S=C JDCP <B= M =?<PDF=? 78C?<EC<HG M =<<=> >=?;H<? <B EC ?=CJDCP >ECJ89 DCJDFDJ;EH? o

p = 8.3 ∗

10−8 L 0.036 L 6.5 ∗ 10−5 O8> >=:HE7DCP I8>?< L ?=C< ECJ >EC J89 L >=?: =7<DF=HG q @

125

Page 133: Coding of Wavelet- Transformed Images - Amazon Web Servicesshare.jole.fi.s3.amazonaws.com/...Coding_of_Wavelet-Transformed_Images.pdf · The best lossy image compression meth-ods

������� �� � �� �� � �� ������ ���� � ��� � ��� ��� ��� ��� �� �� � !��"�#�

� $ % & '(') *+(', $ % -./ 0'1.1 213 4%5 % 6 .(272'1.18 9:; < =>?9@ABCD EF> :>CE BC GBHBGI@9 :>>J : E; K > @ K@G ? ;9BAL M F>C EF> K >:EB: :>CE N

p = 0.0077O P��Q � R ��� !���"�� �� S �"T����# � �TU�V" R �U� Q�"T ��"� T��V TU� ��"� T" ��T "T�T�"T �S� � "�#��WS��T � ���"� X

p < 0.05Y��� Q � V��S�V RZ +'3[ . Z +'3[ .\

�� R ] �^ "T R V�� �� R ] �^ "T R V��_`� ���" �a� R�b� �R �Ra� � R� �c�d� �b� R � � R��� �R�� � R��� ��V e" � �TU�V ��� Rc� � R��� �R� � R����� �� R��b � Rbc �R�� � R� ��d �` �� R� � Rc � �R� � R��fg �� �R�� � R�a� �R�� � R���gfg hi hjhkl � R�� hjmnm � R�������g fg hi hjhnl � R��� hjmno � R���

-20.p q 2+'2. r 'pp s q .+'t2�� R ] �^ "T R V�� �� R ] �^ "T R V��

_`� ���" � R�b�� � R��c c R��� � R�c��d� � R��� � R���� c R��a � R��� ��V e" � �TU�V � R��b � R���� c Rc�b � R����� � R��a� � R��c c R�c � R� ���d �` u juimi � R���� c R� � R� ��fg u juimi � R���� c R��a � R���gfg u juimi � R���� n jhuu � R�������g fg u juimi � R���� n juvv � R��

��Q � � R ��� !���"�� �� V�w����T � �#��`T��� !� �S��" R"��V ��! �S� �� R ] �^ "T RV�� RQ �"T ���"T �� �R�c� � R���Q �"T "��T �� �R� � R��Q�"T ���V�� �� �R��c � R��a���V�� ���"T �� �R�cb � R��c���V�� "��T �� �R�b � R��c���V�� ���V�� �� �R� � R���

x@K9> y A;J ?@=>: EF=>> GBz >=>CE J BD=@EB;C E;? ;9;DB>: P x F> E;? ;9;DLG; >:C {E :>>J E; F@H> @ D=>@E >z>AE ;C =>:I9E: < @E 9>@:E M BEF EFB: :J @99 @CIJ K >= ;| B:9@C G: < :BCA> EF> GBz >=>C A>: @=> C;E :E@EB:EBA@99L :BDC B} A@CE Px F> >z >AE ;| EF> GB=>AEB;C A;CE=;9 ?@=@J >E>= A@C K > :>>C BC x@K9> ~ P �E

126

Page 134: Coding of Wavelet- Transformed Images - Amazon Web Servicesshare.jole.fi.s3.amazonaws.com/...Coding_of_Wavelet-Transformed_Images.pdf · The best lossy image compression meth-ods

������� �� � �� �� � �� � ����� ���� � ��� � ��� ��� ��� ��� �� �� � !��"�#�

$ % &'&( ()( *)+),-. $ (/ 0'-,12 3 0' 4(56,)'-+/ ����7 � 8 ��� !���"�� �� 9�: ����;� �#��;��� ;�!� �#��" 8

�� 8 < �= "; 89�� 8�" ��9 �� �8�>� � 8������# �� �8�?� � 8���;���" @�>A �� �8� � 8���

BCDE F GCB BH IB DJFBDKLBKE M BHJ NKDJLBKGE GO JP KMDIBKGE Qw = 0R KF NKFINSIEBIT

MJGCF Qp = 0.040, 0.043

OGD KFUIEN IE N DKEM BGV GUGMWX DJFV JLBKSJUWX OGDw = 1SJDFCF

w = 0R Y��7 � > 8 =:�Z; �� ;[� 9���Z;��� Z��;�� !���� �;�� 8

�" ��9 ;�! � �#� ���# ;�!� �#�w

�� 8 < �= "; 89�� 8 �� 8 < �= "; 8 9�� 8� �� �8�>� � 8��� �� �8�?� � 8���� 8> �� �8 �� � 8��? �� �8�� � 8���� �� �8�� � 8� �� �8� � 8���

\ELDJIFKEM BHJ MJEJ] IE ^ FK_J ODGP BHJ NJOICUB `a BG baa NKN EGB MKSJ FKMTEKc LIEB KP VDGSJP JEB Y d Ge JSJD X BHJ NKf JDJE LJ ] JBeJJE BHKF DJFCUB Qbg bYbbhe KBH FBIE NIDN NJSKIBKGE GO a YaijR IE N BHJ DJFCUB GO kl m l QFJJ nI]UJ `R KFFKMEKc LIEB Q

p = 0.044R YoKMC DJ b FHGe F BHJ FV JJNCV GO pIDkl m l KE LGP V IDKFGE BG kl m l IBSIDKGC F P GP JEBF GO BKP J KE BeG LIFJF q baaa MJEJDIBKGE F GO kl m l e KBH IV GVCUIBKGE GO r s KE NKSKNC IUF IEN baa MJEJDIBKGEF GO kl m l e KBH I V GVCUIBKGEGO r sa KE NKSKNC IUF Y kV JJNCV KF LIULCUIBJN IF I OCELBKGE GO BKP J FG BHIBFV JJNCV

s(t) =t

TParSAGA

(RSAGA

(t))

QbrRe HJDJ

RSAGA

(t)KF BHJ t ku GO BHJ ] JFB DJFCUB OGCE N ]W kl m l IOBJD DCEEKEM

tFJLGE NF QISJDIMJN GSJD `a DCEFR IE N

TParSAGA

(R)KF BHJ BKP J pIDkl m l

EJJNF BG cEN FC LH I FGUCBKGEω

BH IBe(ω) ≤ R

QIUFG ISJDIMJN GSJD `aDCE FR Y n HKF IVV DG ILH HIF ] JJE FJUJLBJN ] JLICFJ BHJ P JBHG NF IDJ I]UJ BG^JJV GE c E NKEM ] JBBJD DJFC UBF IE N BHC F I FKEMUJ V GKEB GO BKP J OGD FV JJNCVG] FJDSIBKGE F LIE EGB ] J LHG FJE v CFBKc I]UWY kKELJ pIDkl m l KF DCE e KBH baKFUIENF X FV JJNCV GO ba e GCUN ] J UKEJID YwJ G] FJDSJ BH IB p IDkl m l KF SJDW OIFB KE LGP VIDKFGE BG kl m l e KBH IUIDMJ V GVCUIBKGE JSJE BHGCMH BH KF kl m l FJBBKEM DJFJP ]UJF P GDJ LUGFJUW BHJFJBBKEM GO p IDkl m l Y n HKF KF ] JLIC FJ kl m l eGCUN EJJN P IEW P GDJ MJEJDIT

127

Page 135: Coding of Wavelet- Transformed Images - Amazon Web Servicesshare.jole.fi.s3.amazonaws.com/...Coding_of_Wavelet-Transformed_Images.pdf · The best lossy image compression meth-ods

������� �� � �� �� � �� ������ ���� � ��� � ��� ��� ��� ��� �� �� � !��"�#�

� $ % & '(') *+(', $ % -./ 0'1.1 213 4%5 % 6 .(272'1.1

89:;< 8: <=>>?<<@= AAB CD; EA? 8C9< ADFG? D H :H=AD89:; I 98C 8C ? F:= A?88?JI C ??A<?A?>89:; K L @8?F MNN G?;?FD89:; < 8C? DO?FDG? F?<=A8 9< :;AB MP MKQPR I 98C<8D;EDFE E?O9D89:; :@ N KMSS K T; UDFVL W L X 8C? ADFG? H :H=AD89:; 9< ?<<?;J89DAAB E9O9E?E 9;8: <Y DAA?F 9;8?F>:Y Y =;9>D89;G H :H=AD89:; < 8C= < F?<=A89;G8: ?Z>?AA?;8 <H ??E=H K [ ; 8C? :8C?F CD;E X <9; >? UDFVL W L 9; HFD>89>? DHJHA9?< D >:; <9E?FD\ AB ADFG?F H :H=AD89:; 8C D; VL W L I 98C D H :H=AD89:; :@ ] ^9; E9O9E=DA< X 98 >D; HF?<?FO? G?;?89> ODF9D89:; A:; G?F D; E 8C=< 9< <89AA D\A? 8:D>C9?O? F?G=ADF HF:GF?<< I C?; VL W L >D; :;AB _; E \ ?88?F <:A=89:; < : >>DJ<9:; DAABK ` C9< 9< 9AA= <8FD8?E \B 8C? DAY :<8 A9;?DF _ ; DA H :F89:; :@ 8C? MNNNa]^>=FO? K

[Fig. 1 (two panels, speedup vs. time, labelled 1000*45 and 100*450): Speedup of ParSAGA when 1000 generations of SAGA with a population of 45 individuals and 100 generations of SAGA with a population of 450 individuals are used as the point of comparison.]

l9G=F?< m n ] <C:I 8C? E?O?A:HY ?;8 :@ <?O?FDA <8D89<89>DA Y ?D<=F?< 9;8CF?? E9o ?F?;8 >D<?< K T; l9G K m 8C? E?@D=A8 H DFDY ?8?F< DF? =<?E X 9; l 9G K RY 9GFD89:; 9< H ?F@:FY ?E :;AB :;>? ?O?FB MN G?;?FD89:; < p9;<8?DE :@ ?O?FBG?;?FD89:; q D;E 9; l9G K ] mN rJY ?D;< 98?FD89:; < DF? DHHA9?E 8: ?D>C <:A=89:;9; <8?DE :@ E?@D=A8 m K ` C? E9O?F<98B Y ?D<=F?< >A?DFAB <C:I 8CD8 F?E=>9;G 8C?Y 9GFD89:; @F?s=?; >B <A:I < E:I ; 8C? E?>A9; ? :@ E9O?F<98BK ` C9< 9< HF:\ D\ ABY :<8 DHH DF?;8 9; 8C? GFDHC :@ DO?FDG? D<<9G;Y ?;8 E9<<9Y 9ADF98B p

A(δi)q X \=8

This is probably most apparent in the graph of the average assignment dissimilarity (A(δi)), but also the difference between the average distortion (A(e)) and the best distortion, as well as the standard deviation of the distortion (σ(e)), portray the same behavior. Increasing the number of k-means iterations leads to a rapid decline of diversity in the beginning, as one would expect. However, after a while the diversity settles on roughly the same level as with the default parameters. It should be noted that, due to the smart k-means implementation (Ref. 23), 20 k-means iterations per solution are only slightly slower than the default two. The increase in the k-means iteration count does not change the quality of the results significantly.

[Fig. 2 (four panels plotted against time: average distortion, standard deviation of distortion, average assignment dissimilarity, best distortion): Development of the statistical measures for the default parameters.]

The development of the average mapping dissimilarity (A(δm)) is shown in Fig. 5. Here, all the previous cases have been plotted in a single figure. The same observations as above can also be made from this figure.

One further thing to notice about the measures is the fact that even though the phenotypic diversity declines steadily, genotypic diversity can still occasionally increase noticeably. This might suggest that the search has found new promising areas of the problem space to study closer.

[Fig. 3 (four panels plotted against time: average distortion, standard deviation of distortion, average assignment dissimilarity, best distortion): Development of the statistical measures for a migration frequency of once in 10 generations.]

[Fig. 4 (four panels plotted against time: average distortion, standard deviation of distortion, average assignment dissimilarity, best distortion): Development of the statistical measures for 20 k-means iterations per solution.]

[Fig. 5 (three panels, dissimilarity vs. time, labelled default parameters, migration every 10 generations, G=20): Development of the average mapping dissimilarity for the default parameters, for migration once every 10 generations and for 20 k-means iterations per solution.]

Conclusions

Parallelization of a self-adaptive genetic algorithm for clustering was studied. Our parallel algorithm applied the gene bank model for organizing the emigration. In the model, all the communication between SAGA processes is directed through the gene bank process, which maintains a collection of the best individuals received. This general model has the advantage of allowing one to implement different topologies flexibly by simple parameter adjustments.
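A rough sketch of this communication pattern is given below. The bounded pool, the random withdrawal and the exchange rule (send the island's best, replace its worst) are illustrative assumptions, and the actual message passing between processes is abstracted away.

    import random

    class GeneBank:
        """Central process: keeps a bounded collection of the best individuals received."""
        def __init__(self, capacity):
            self.capacity = capacity
            self.pool = []                              # (distortion, individual) pairs

        def deposit(self, distortion, individual):
            self.pool.append((distortion, individual))
            self.pool.sort(key=lambda entry: entry[0])  # lower distortion is better
            del self.pool[self.capacity:]               # keep only the best ones

        def withdraw(self):
            return random.choice(self.pool)[1] if self.pool else None

    def migrate(island, bank):
        """One migration step of a single island through the gene bank."""
        best = min(island, key=lambda ind: ind["distortion"])
        bank.deposit(best["distortion"], best)
        incoming = bank.withdraw()
        if incoming is not None:
            worst = max(island, key=lambda ind: ind["distortion"])
            island[island.index(worst)] = incoming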

The parallel SAGA achieved results of the same quality as the sequential SAGA, but in considerably shorter time. SAGA and ParSAGA outperform the other tested methods in all the cases except for the easiest problem instance, where several algorithms consistently reached the same solution.

The speedup of ParSAGA was studied against two different SAGA setups (Fig. 1). When ParSAGA is compared to SAGA with a population of corresponding size, i.e. the number of islands times the population size of an island, ParSAGA is remarkably efficient. Superlinear speedup could be claimed here, even though it is obviously due to the different functioning of the algorithms.

This efficiency is explained by SAGA's inability to handle such a large population reasonably fast. On the other hand, when the population size of SAGA is set to the population size of a single island and the difference in the amount of work is compensated by increasing the number of generations, the speedup is close to linear. However, in this case the larger effective population size of ParSAGA allows it to retain more diversity and thus to keep on finding better solutions more efficiently in the later stages of the search process. Thus, slight superlinearity could also be claimed here when the speedup is observed near the end of the chosen search time.

A comparison of different migration policies showed that sending the best and replacing the worst individuals is the most effective migration policy. This policy causes the highest selection pressure among the methods studied. Restricting the direction of migration turned out to be disadvantageous, even though the selection of the actual topology was found insignificant.

We gave two new genotypic diversity measures for the parallel GA. Average assignment dissimilarity measures the average distance between matching cluster centroids of two individuals, and average mapping dissimilarity compares two mappings.
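The exact matching rule is not spelled out in this summary; the sketch below reads "matching cluster centroids" as an optimal one-to-one assignment between the two centroid sets, which is only one plausible interpretation.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def average_assignment_dissimilarity(centroids_a, centroids_b):
        """Average distance between optimally matched centroids of two individuals."""
        # pairwise Euclidean distances between the two centroid sets
        dist = np.linalg.norm(centroids_a[:, None, :] - centroids_b[None, :, :], axis=2)
        rows, cols = linear_sum_assignment(dist)    # minimum-cost one-to-one matching
        return dist[rows, cols].mean()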

Several things can be learned by observing the development of the statistical measures in Figs. 2–5. First, the measures clearly demonstrate that by reducing the frequency of migration, the decline of genetic variation can be effectively slowed down. Furthermore, the presented genotypic measures show that even though there are no considerable changes in the diversity measured by phenotypic measures, genotypic diversity may still increase noticeably. This may be due to finding new promising areas to search.

The 20-iteration case demonstrates another interesting phenomenon. When the number of k-means iterations per solution is increased to 20, diversity drops very fast in the beginning, as expected. However, even though the phenotypic measures suggest that genetic variation stays very low, the genotypic measures reveal that the diversity of solutions is actually similar to the default case. This also explains why this setting does not lead to inferior results even though the genetic variation seems to decrease rapidly when examined by ordinary phenotypic measures.

Finally, the usefulness of the statistical measures is not limited to learning important things about the behavior of the algorithm and the effect of different parameter settings. They could also be used in guiding the algorithm: parameters controlling the operation could be adjusted according to the values of these measures. However, this would be more appropriate with other adaptation schemes, see Ref. 12.

Since the calculation of the measures is rather slow, one might consider calculating an approximation from a randomly selected sample instead of applying the full measures.
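A sampled approximation of such a population-wide measure could look like the sketch below; the number of sampled pairs is an arbitrary illustrative choice.

    import random

    def sampled_diversity(population, dissimilarity, pairs=100):
        """Estimate an average pairwise diversity measure from randomly drawn pairs."""
        total = 0.0
        for _ in range(pairs):
            a, b = random.sample(population, 2)
            total += dissimilarity(a, b)
        return total / pairs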

References

1. B. S. Everitt, Cluster Analysis, 3rd edn. (Edward Arnold / Halsted Press, London, 1993).
2. L. Kaufman and P. J. Rousseeuw, Finding Groups in Data: An Introduction to Cluster Analysis (John Wiley & Sons, New York, 1990).
3. A. Gersho and R. M. Gray, Vector Quantization and Signal Compression (Kluwer, Dordrecht, 1992).
4. A. K. Jain and R. Dubes, Algorithms for Clustering Data (Prentice Hall, Englewood Cliffs, 1988).
5. A. K. Jain, M. N. Murty and P. J. Flynn, ACM Comput. Surv. 31, 264 (1999).
6. T. Kaukoranta, Iterative and Hierarchical Methods for Codebook Generation in Vector Quantization (Turku Centre for Computer Science, Turku, 1999).
7. C. R. Reeves, Ed., Modern Heuristic Techniques for Combinatorial Problems (Blackwell, Oxford, 1993).
8. P. Fränti, J. Kivijärvi and O. Nevalainen, Patt. Rec. 31, 1139 (1998).
9. J. H. Holland, Adaptation in Natural and Artificial Systems (University of Michigan Press, Ann Arbor, 1975).
10. D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning (Addison-Wesley, Reading, 1989).
11. P. Fränti, J. Kivijärvi, T. Kaukoranta and O. Nevalainen, Comput. J. 40, 547 (1997).
12. R. Hinterding, Z. Michalewicz and A. E. Eiben, in Proc. 1997 IEEE International Conference on Evolutionary Computation (IEEE, New York, 1997), p. 65.
13. G. Magyar, M. Johnsson and O. Nevalainen, IEEE Trans. Evol. Comp. 4, 135 (2000).
14. J. Kivijärvi, P. Fränti and O. Nevalainen, J. Heur. 9, 113 (2003).
15. J. Kivijärvi, J. Lehtinen and O. Nevalainen, TUCS Technical Report 469 (Turku Centre for Computer Science, Turku, 2002).
16. E. Cantú-Paz, Efficient and Accurate Parallel Genetic Algorithms (Kluwer, Boston, 2000).
17. J. B. MacQueen, in Proc. 5th Berkeley Symposium on Mathematical Statistics and Probability, Eds. L. M. Le Cam and J. Neyman (University of California Press, Berkeley, 1967), p. 281.
18. M. Tomassini, in Evolutionary Algorithms in Engineering and Computer Science, Eds. K. Miettinen, M. M. Mäkelä, P. Neittaanmäki and J. Periaux (John Wiley & Sons, Chichester, 1999), p. 113.
19. S.-C. Lin, W. F. Punch and E. D. Goodman, in Proc. 6th IEEE Symposium on Parallel and Distributed Processing (IEEE, New York, 1994), p. 28.
20. F. J. Marin, O. Trelles-Salazar and F. Sandoval, in Parallel Problem Solving from Nature - PPSN III, International Conference on Evolutionary Computation, Eds. Y. Davidor, H.-P. Schwefel and R. Männer (Springer-Verlag, New York, 1994), p. 534.
21. S. Tongchim and P. Chongstitvatana, in Proc. International Conference on Intelligent Technologies, Eds. V. Kreinovich and J. Daengdej (Assumption University, Bangkok, 2000), p. 94.
22. M. Capcarrère, A. Tettamanzi, M. Tomassini and M. Sipper, Evol. Comp. 7, 255 (1999).
23. T. Kaukoranta, P. Fränti and O. Nevalainen, IEEE Trans. Image Proc. 9, 1337 (2000).
24. K. Zeger and A. Gersho, Electronics Lett. 25, 896 (1989).
25. P. Fränti, T. Kaukoranta and O. Nevalainen, Optical Engineering 36, 3043 (1997).
26. J. H. Ward, J. American Stat. Ass. 58, 236 (1963).
27. P. Fränti and J. Kivijärvi, Pattern Analysis & Appl. 3, 358 (2000).


Publication errata

Distortion limited wavelet image codec

Page 88: The PSNR values given in Figure 6 are incorrect. The correct values are given in the diagram below.

[Diagram: corrected PSNR values for images 0-10; vertical axis PSNR (24-36 dB), horizontal axis Image #.]
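PSNR here is presumably the usual peak signal-to-noise ratio for 8-bit images, i.e. PSNR = 10 log10(255^2 / MSE); a minimal sketch:

    import numpy as np

    def psnr(original, decoded, peak=255.0):
        """Peak signal-to-noise ratio in decibels."""
        mse = np.mean((original.astype(float) - decoded.astype(float)) ** 2)
        return 10.0 * np.log10(peak ** 2 / mse)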

Predictive depth coding of wavelet transformed images

Page 100: The column marked as BPP in Table 1 contains the scalar quantization constant q, not BPP values. The corresponding BPP values for each q are given in the table below.

            BPP
q       lena    barbara   goldhill
0.50    3.28    3.66      3.87
0.25    2.29    2.65      2.90
0.15    1.57    1.92      2.15
0.10    1.05    1.42      1.59
0.05    0.48    0.84      0.81
0.01    0.10    0.19      0.11
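Bits per pixel is simply the compressed size in bits divided by the number of pixels; the sketch below assumes the standard 512 x 512 versions of these test images.

    def bits_per_pixel(compressed_bytes, width=512, height=512):
        """BPP of a compressed image, assuming 512 x 512 test images by default."""
        return 8.0 * compressed_bytes / (width * height)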


Turku Centre for Computer Science
TUCS Dissertations

24. Linas Laibinis, Mechanised Formal Reasoning About Modular Programs
25. Shuhua Liu, Improving Executive Support in Strategic Scanning with Software Agent Systems
26. Jaakko Järvi, New Techniques in Generic Programming: C++ is More Intentional than Intended
27. Jan-Christian Lehtinen, Reproducing Kernel Splines in the Analysis of Medical Data
28. Martin Büchi, Safe Language Mechanisms for Modularization and Concurrency
29. Elena Troubitsyna, Stepwise Development of Dependable Systems
30. Janne Näppi, Computer-Assisted Diagnosis of Breast Calcifications
31. Jianming Liang, Dynamic Chest Images Analysis
32. Tiberiu Seceleanu, Systematic Design of Synchronous Digital Circuits
33. Tero Aittokallio, Characterization and Modelling of the Cardiorespiratory System in Sleep-disordered Breathing
34. Ivan Porres, Modeling and Analyzing Software Behavior in UML
35. Mauno Rönkkö, Stepwise Development of Hybrid Systems
36. Jouni Smed, Production Planning in Printed Circuit Board Assembly
37. Vesa Halava, The Post Correspondence Problem for Marked Morphisms
38. Ion Petre, Commutation Problems on Sets of Words and Formal Power Series
39. Vladimir Kvassov, Information Technology and the Productivity of Managerial Work
40. Franck Tétard, Managers, Fragmentation of Working Time, and Information Systems
41. Jan Manuch, Defect Theorems and Infinite Words
42. Kalle Ranto, Z4-Goethals Codes, Decoding and Designs
43. Arto Lepistö, On Relations between Local and Global Periodicity
44. Mika Hirvensalo, Studies on Boolean Functions Related to Quantum Computing
45. Pentti Virtanen, Measuring and Improving Component-Based Software Development
46. Adekunle Okunoye, Knowledge Management and Global Diversity - A Framework to Support Organisations in Developing Countries
47. Antonina Kloptchenko, Text Mining Based on the Prototype Matching Method
48. Juha Kivijärvi, Optimization Methods for Clustering
49. Rimvydas Rukšėnas, Formal Development of Concurrent Components
50. Dirk Nowotka, Periodicity and Unbordered Factors of Words
51. Attila Gyenesei, Discovering Frequent Fuzzy Patterns in Relations of Quantitative Attributes
52. Petteri Kaitovaara, Packaging of IT Services – Conceptual and Empirical Studies
53. Petri Rosendahl, Niho Type Cross-Correlation Functions and Related Equations
54. Péter Majlender, A Normative Approach to Possibility Theory and Soft Decision Support
55. Seppo Virtanen, A Framework for Rapid Design and Evaluation of Protocol Processors
56. Tomas Eklund, The Self-Organizing Map in Financial Benchmarking
57. Mikael Collan, Giga-Investments: Modelling the Valuation of Very Large Industrial Real Investments
58. Dag Björklund, A Kernel Language for Unified Code Synthesis
59. Shengnan Han, Understanding User Adoption of Mobile Technology: Focusing on Physicians in Finland
60. Irina Georgescu, Rational Choice and Revealed Preference: A Fuzzy Approach
61. Ping Yan, Limit Cycles for Generalized Liénard-type and Lotka-Volterra Systems
62. Joonas Lehtinen, Coding of Wavelet-Transformed Images


Turku Centre for Computer Science
Lemminkäisenkatu 14 A, 20520 Turku, Finland | www.tucs.fi

University of Turku
    Department of Information Technology
    Department of Mathematics

Åbo Akademi University
    Department of Computer Science
    Institute for Advanced Management Systems Research

Turku School of Economics and Business Administration
    Institute of Information Systems Sciences

ISBN 952-12-1568-2
ISSN 1239-1883