WAVELET BASED COMPRESSION

    ABSTRACT

    Wavelets are mathematical functions that cut up data into different frequency components, and

    then study each component with a resolution matched to its scale. They have advantages over

    traditional Fourier methods in analyzing physical situations where the signal contains

    discontinuities and sharp spikes. Wavelets were developed independently in the fields of

    mathematics, quantum physics, electrical engineering, and seismic geology. Interchanges

    between these fields during the last ten years have led to many new wavelet applications such as

    image compression, turbulence, human vision, radar, and earthquake prediction.

Wavelet methods constitute the underpinning of a new comprehension of time-frequency analysis. They emerged independently within different scientific branches of study until all these different viewpoints were subsumed under the common terms of wavelets and time-scale analysis, or scale-space analysis in the context of image processing. Wavelet theory is closely connected to the Fourier transform: the continuous wavelet transform is an integral transform similar to the Fourier transform. But whereas the Fourier transform analyzes the global regularity of a function, the wavelet transform analyzes its pointwise regularity. Wavelet theory involves representing general functions in terms of simpler, fixed building blocks at different scales and positions.

    CONTENTS


1 Introduction
  1.1 Describing compression
  1.2 Lossy compression
  1.3 Lossless compression
  1.4 Effects of lossy compression on medical images
2 Wavelet transforms
  2.1 Scale dependence in images
  2.2 Inter- and intra-scale dependencies
3 Compression methods
  3.1 SPIHT algorithm
  3.2 EZW algorithm
  3.3 SOFM algorithm
4 Simulation and results
  4.1 Performance analysis
  4.2 Conclusion
References


    CHAPTER-I

    INTRODUCTION

Digital images in their original state require considerable storage capacity and transmission bandwidth. Image compression is the process of encoding images such that less storage space is required to archive them and less transmission time is required to retrieve them over a network. Compression is possible because most images contain large sections (e.g. backgrounds) that are often smooth, containing nearly identical pixel values that carry duplicate information. This is referred to as statistical redundancy. Ideally, an image compression technique strives to remove redundant information and to encode and preserve the remaining data efficiently.

    1.1 Describing Compression

The compression ratio between the original image and the compressed version is typically used to describe the degree of compression of a compressed image. It is the ratio of the computer storage required to store the original image to that required for the compressed data. While lossless algorithms achieve only modest reductions without any damage to image quality, many lossy algorithms can reach compression ratios on the order of 30:1 (when compared to the uncoded image) with barely noticeable effects. Image compression algorithms may be broadly categorized into two types: lossy and lossless.
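As a concrete illustration, the compression ratio can be computed directly from the sizes of the stored files. A minimal sketch follows; the file paths are hypothetical placeholders.

```python
import os

def compression_ratio(original_path: str, compressed_path: str) -> float:
    """Ratio of the storage needed for the original image to that of the compressed data."""
    return os.path.getsize(original_path) / os.path.getsize(compressed_path)

# A 512x512 8-bit greyscale image occupies 262144 bytes uncompressed;
# compressing it to roughly 8738 bytes would give a ratio of about 30:1.
```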

    1.2 Lossy Compression

With lossy image compression, redundant pixel data are discarded during the compression process, so the compressed image is only an approximation of the original material. Quite often, adjusting the compression parameters can vary the degree of lossiness, allowing the image-maker to trade off file size against image quality.

    1.3 Lossless Compression

    In lossless compression schemes, the reconstructed image (after compression) is numerically

    identical to the original image and may be displayed as an exact digital replica of the original.

    Only the statistical redundancy is exploited to achieve compression. In general, lossless

    techniques provide far lower compression ratios than lossy techniques with the bonus of

    preserving all image content.
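To make the lossless property concrete, here is a minimal round trip through zlib, a general-purpose lossless coder standing in for an image-specific scheme; the synthetic image is an assumption chosen to exhibit the statistical redundancy described above.

```python
import zlib
import numpy as np

# A synthetic 8-bit "image" with a smooth, repetitive background,
# i.e. plenty of statistical redundancy.
image = np.tile(np.arange(256, dtype=np.uint8), (256, 1))

compressed = zlib.compress(image.tobytes(), 9)
restored = np.frombuffer(zlib.decompress(compressed), dtype=np.uint8).reshape(image.shape)

assert np.array_equal(image, restored)  # numerically identical reconstruction
print(image.nbytes / len(compressed))   # achieved ratio; depends entirely on the data's redundancy
```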


    1.4 Effects of Lossy Compression on Medical Images

    At low compression rates, high frequency noise is the first to be lost; however, this is generally

    imperceptible and referred to as visually lossless. As the compression ratio is increased, the

    first perceptible change in medical images is typically the removal of salt-and-pepper noise

    which is generally preferred by observers and may actually improve diagnostic quality. At

    moderate levels of compression, blurring of image content becomes apparent and at high levels

    of compression, blurring will increase and artifacts characteristic of the compression algorithm

    will become evident.

    1.4.1 Specific Issues Associated with Image Compression in a PACS Environment

    With recent advances in PACS and telemedicine, the quantity of medical volumetric data

    generated by modalities such as magnetic resonance imaging (MRI) or computed tomography

    (CT) is ever increasing. Although the cost of data storage is falling as the capacity per device

    increases, there remains a strong demand for efficient image compression techniques to

    accommodate the rapid growth of these data and to reduce associated storage and bandwidth

    costs. Within a typical digital radiology setup, data are stored in a central server or, in larger

    schemes, in one of many server nodes. This large repository of archived patient data greatly

    increases the frequency with which a user can access patient images. However, the client, in

    many cases, is a low-to-mid range computer with modest memory and bandwidth. This type of

    setup imposes constraints that must be addressed:

    1. Choice of compression method - lossy or lossless: If such data are to be transferred over

    low-bandwidth networks, efficient compression is essential. Currently, hospitals are

    reluctant to use lossy compression due to the potential for legal ramifications given

    incorrect diagnoses. Lossless compression does not carry these ramifications, but does

    not compress the image data to the magnitude achieved by lossy compression.

    2. Proprietary or open standards: Should the hospital employ proprietary compression

    schemes or open standards? Proprietary compression schemes have a cost and risk

    associated with their support, end of life and interoperability. Standards reduce this cost

    and risk but might not be the most efficient or advanced option.

3. Scalability: As some clients will be limited in computer memory, a client receiving scaled data may browse low-resolution versions of the image for selection of a region or volume of interest (ROI and VOI, respectively) for preferred download. Further, reduced-resolution viewing decreases the client's dependence on high-bandwidth connections while simultaneously decreasing rendering times.

4. Internet communication protocols: The client-server communication protocol must be generic enough to be easily deployed on a variety of computer operating systems.

    CHAPTER-II

    WAVELET TRANSFORMS


The wavelet transform, applied to a whole variety of different signals, has emerged as a new tool in signal processing. Wavelets are able to model characteristics of signals not previously modelled by existing statistical approaches. In particular, they are able to model the scale dependence that is present in most, if not all, real-life signals such as speech or image signals. In the next section we introduce what we mean by scale dependence in images.

    2.1 Scale dependence in images

    A key feature in the design of good data compression algorithms is that they capture the

    correlations in the data. For example, in an image, there is a high correlation between a pixel

    and its neighbours, something which is exploited in simple predictive coding algorithms. We

    call this correlation spatial dependence since there is a statistical dependence between pixels

    which are spatially close to each other. However, there is a more subtle dependence in signals

    and that is one across different scales. This is something that is brought out by the wavelet

decomposition of a signal. As an example, Fig. 1 below shows an image of an outdoor scene which is 512x512 pixels. Fig. 2 shows this image displayed at a coarser scale (in other words, the same image is displayed on a smaller grid, in this case 256x256 pixels). Obviously, because we are displaying the same image on a smaller grid, many of the finer details in Fig. 1 are lost in Fig. 2, but we can still easily recognise the scene in Fig. 1 from that in Fig. 2. The image displayed in Fig. 2 was produced from that displayed in Fig. 1 by sub-sampling the pixels after applying a simple low-pass filter in order to avoid aliasing effects. We can apply this process again to the image in Fig. 2, and the result is Fig. 3, which is the same (recognisable) scene but displayed at an even coarser scale. Obviously, we could repeat this process and create a pyramid of images where each level of the pyramid represents the image at a different scale.
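A minimal sketch of this pyramid construction follows, assuming a simple 2x2 averaging filter as the anti-aliasing low-pass filter (the text does not specify which filter was used):

```python
import numpy as np

def coarser_scale(image: np.ndarray) -> np.ndarray:
    """Low-pass filter (2x2 average, an assumed choice) then subsample by 2."""
    h, w = image.shape
    img = image[: h - h % 2, : w - w % 2].astype(np.float64)
    return (img[0::2, 0::2] + img[0::2, 1::2] +
            img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

def pyramid(image: np.ndarray, levels: int) -> list:
    """Each level holds the scene at a coarser scale, e.g. 512 -> 256 -> 128."""
    out = [image.astype(np.float64)]
    for _ in range(levels):
        out.append(coarser_scale(out[-1]))
    return out

# e.g. pyramid(img512, 2) gives the 512x512, 256x256 and 128x128 versions (Figs. 1-3).
```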

    The term scale dependence means that there is a correlation between features at one scale (in

    other words, at one level in the pyramid) and features at another scale at the same location

    (taking into account the different sampling grid sizes). The word feature is used loosely in this

    context (although it will have a more precise meaning in the context of the wavelet

    transform).


For example, the presence of an edge at one scale means that there is an increased probability of an edge being at the corresponding position in coarser scale images. Figs. 4 and 5 demonstrate this point by displaying edge maps computed from Figs. 1 and 2 respectively. By examining Figs. 4 and 5 closely, we can see that this edge correlation does indeed exist. However, it is not a 100% correlation. Thus an edge at a fine scale might appear at a coarser scale, but it might not. This is important from a compression viewpoint. Efficient compression methods are all about making good predictions. If we compress the coarser scale image first (Fig. 5) and then predict the presence of edge features in Fig. 4 from those at corresponding locations in Fig. 5, this will be a fairly good approximation. All that remains is to add the extra detail (in other words, those edge features that are in the fine scale image but not the coarser scale image) to get the fine scale compressed image. This is the basis (very crudely!) of wavelet compression algorithms: they predict what is at the finer scale from information computed at the coarser scale on a reduced grid size, and encode the extra added detail (so-called inter-scale prediction). Of course, compression algorithms shouldn't ignore the spatial correlation of neighboring pixels, and wavelet compression algorithms combine this inter-scale prediction with intra-scale prediction, in other words prediction of information within a single image scale.

[Figs. 1-3: the outdoor scene at successively coarser scales. Figs. 4-5: edge maps computed from Figs. 1 and 2.]

We will show why it is more efficient to compress the wavelet coefficients after a wavelet transform. To begin with, it is worth looking at the images produced by a wavelet transform. Fig. 6 shows an original image, the famous Lena image much used by image compression researchers to evaluate their algorithms. This image is 512x512 pixels in size. Fig. 7 is the result of applying the 2D Haar transform to this image and shows the coarse scale approximation image and the 3 detail images (only the magnitudes of the detail images are shown, as the values can be positive or negative). Fig. 8 shows the result of the 2D linear spline wavelet transform for comparison. In Fig. 9, a 2-level wavelet decomposition using the Haar wavelet is shown, where a wavelet decomposition of the approximation image is applied. This results in one approximation image and 6 detail images, but notice that all of these images fit exactly into the original 512x512 pixel container. From a compression viewpoint, the striking feature of the two wavelet transforms is that all three of the detail images are mainly dark in the smooth image regions, with higher frequency regions (such as edges and textured regions) highlighted. In terms of the distribution of grey levels in the detail images, a pixel in the detail images has a higher probability of having a small magnitude (close to zero) than a pixel in the original image (this ignores any inter- or intra-scale dependencies for the moment). Hence we can postulate that the zeroth-order entropy of the wavelet decomposition image (the coarse scale approximation image and the three detail images) will be lower than the zeroth-order entropy of the original image, and hence entropy coding (Huffman or arithmetic coding) should result in significant compression if applied to the wavelet image. Fig. 10 shows the grey level histograms of the original image along with that of the Haar wavelet image; the histogram of the wavelet image is much more compact than that of the original image, particularly around zero, indicating a lower entropy value. A careful comparison of Figs. 7 and 8 shows that the linear wavelet does a better job of approximating the original image, as the three detail images are clearly darker


and contain less energy than for the Haar wavelet. Remember, the detail signal is essentially the difference between the original and its coarse scale approximation. The better the approximation, the lower the energy in the detail image and the more efficiently the detail images can be compressed. Table 1 lists the entropies of the original, Haar wavelet and linear spline wavelet images, where this is confirmed. The entropies are shown for both 1- and 2-level wavelet decompositions, where it can be seen that the entropy is further reduced for the 2-level wavelet pyramid. This can be continued and, typically, for image compression applications, 5- or 6-level wavelet pyramids are used. Beyond this point there is little decrease in entropy, since the coarse scale approximation image occupies a smaller and smaller proportion of the wavelet image size and hence approximating it further makes little difference (for example, for a 5-level wavelet decomposition, the coarse level approximation image is 16x16 pixels for a 512x512 original image size).

[Fig. 6: the original Lena image. Fig. 7: its 1-level 2D Haar transform.]


    2.2 Inter and intra-scale dependencies

    The above results are for zeroth order entropy only in other words, we still have not taken

    into account inter-scale or intra-scale dependencies of the coefficients in the

    wavelet images.

[Fig. 8: 1-level linear spline wavelet transform. Fig. 9: 2-level Haar decomposition. Fig. 10: grey level histograms of the original and Haar wavelet images.]


                                   Entropy
    Original image                 7.22
    1-level Haar wavelet           5.96
    1-level linear spline wavelet  5.53
    2-level Haar wavelet           5.02
    2-level linear spline wavelet  4.57

    Table 1: Zeroth-order entropies of the original and wavelet images

These dependencies are exploited in these algorithms. However, being able to explicitly express these dependencies algebraically is not easy. Papers usually attempt to do this by expressing the mutual information between spatially neighboring or parent-child wavelet coefficients; this can lead to interesting results, although the mathematics is tricky and often only simplified cases can be considered. We will consider a simplified scenario where we classify wavelet coefficients into two groups: those that are large and those that are small. This sounds too simple, but it is actually quite relevant when it comes to wavelet compression, because modern algorithms essentially put more effort into coding large-valued coefficients at the expense of ignoring small-valued coefficients. This of course raises the question of how we encode the positions of groups of large or small valued coefficients, and it is the dependencies (inter- or intra-scale) that allow us to do this quite efficiently.

Before we look at this simple model, we need to be clear about just what dependencies we are trying to model. Fig. 11 shows a wavelet decomposition into 2 levels: the coarse approximation image is the small square at the top left of the figure, and the 6 detail images are also shown. A wavelet coefficient in one of the detail images is labelled X. We are considering dependencies between this coefficient and its 8 neighboring coefficients in the same detail image. This is intra-scale dependency. We are also considering the dependence between X and its parent P(X), which is the corresponding coefficient at the next level. Thus if X is the detail image pixel $d^{j}_{n,k,l}$ (k, l are the spatial coordinates within the detail image), then its parent coefficient is $d^{j}_{n-1,\lceil k/2 \rceil,\lceil l/2 \rceil}$.

[Fig. 11: a 2-level wavelet decomposition, showing a coefficient X and its parent P(X).]


Let us denote by S and L the sets of wavelet coefficients with small and large absolute values respectively. We will classify wavelet coefficients into these two classes by comparing their absolute values with a threshold. $\#S$ and $\#L$ are the numbers of coefficients in each set, with $\#S + \#L = N^2$ for an image consisting of N rows by N columns.

Let us focus on the inter-scale dependency first. A wavelet coefficient X is a member of either set S or set L. Its parent coefficient is also a member of set S or set L. We want to determine if there is any dependency between the memberships of these two coefficients. In other words, if the parent of X is in set S, is it more likely that X will be in set S also? Thus we want to determine the following probabilities:

$P(X \in S \mid P(X) \in S)$ = Prob. that X is a member of set S if its parent is a member of set S.

$P(X \in L \mid P(X) \in S)$ = Prob. that X is a member of set L if its parent is a member of set S.

$P(X \in S \mid P(X) \in L)$ = Prob. that X is a member of set S if its parent is a member of set L.

$P(X \in L \mid P(X) \in L)$ = Prob. that X is a member of set L if its parent is a member of set L.

Without any dependencies we would expect that:

$$P(X \in S \mid P(X) \in S) = P(X \in S \mid P(X) \in L) = \frac{\#S}{N^2} \qquad (2.1)$$

$$P(X \in L \mid P(X) \in S) = P(X \in L \mid P(X) \in L) = \frac{\#L}{N^2} \qquad (2.2)$$



We can take simple measurements from a wavelet image to see if any dependencies exist between the memberships of sets S and L. We estimate the probabilities in equations 2.1 and 2.2 from the histogram of the wavelet coefficients. In order to determine the set membership of a wavelet coefficient, the histogram shown in Fig. 10 for a 1-level Haar decomposition shows that there is a large cluster around zero and a much broader cluster starting at a value of around 5. Interestingly, this 2-cluster model of the wavelet coefficient distribution has recently been developed and used in areas such as image filtering as well as compression. Thus, for this image, we will choose a threshold of 5 in order to partition the coefficients into the two sets. We then simply count the coefficients in the respective sets to estimate the probabilities $P(X \in S \mid P(X) \in S)$ and $P(X \in L \mid P(X) \in L)$. Table 2 lists these values for the 2-level Haar decomposition of the Lena image. For comparison, Table 2 also lists the values of $\#S/N^2$ and $\#L/N^2$, which are the probability estimates assuming no inter-scale dependency.

    $P(X \in S \mid P(X) \in S)$   $P(X \in L \mid P(X) \in L)$   $\#S/N^2$   $\#L/N^2$
    0.886                          0.529                          0.781       0.219

    Table 2: Measured inter-scale conditional probabilities for the Lena image

From the above table we see that, ignoring inter-scale dependencies, the probability of a coefficient being small is 0.781, but if we take into account that its parent coefficient is small, this increases to 0.886. The probability of a coefficient being large, ignoring inter-scale dependencies, is 0.219, but if we take into account that its parent is large, this increases to 0.529.
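A sketch of how such measurements can be made, reusing the haar2d routine from the earlier sketch. The threshold of 5 and the use of a 2-level Haar decomposition are taken from the text; restricting to a single detail orientation and the parent indexing are assumptions.

```python
import numpy as np
# haar2d as defined in the earlier sketch.

def parent_child_probs(img: np.ndarray, T: float = 5.0):
    """Estimate P(X in S | parent in S) and P(X in L | parent in L)
    for one detail orientation across two Haar levels."""
    LL1, LH1, _, _ = haar2d(img)       # fine-scale detail (children)
    _,   LH2, _, _ = haar2d(LL1)       # coarse-scale detail (parents)
    child_small = np.abs(LH1) < T
    # Each parent at (k, l) covers the 2x2 block of children at (2k..2k+1, 2l..2l+1).
    parent_small = np.repeat(np.repeat(np.abs(LH2) < T, 2, axis=0), 2, axis=1)
    parent_small = parent_small[: child_small.shape[0], : child_small.shape[1]]
    p_s_given_s = child_small[parent_small].mean()       # fraction of small children among small parents
    p_l_given_l = (~child_small)[~parent_small].mean()   # fraction of large children among large parents
    return float(p_s_given_s), float(p_l_given_l)
```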

    From an image compression viewpoint, when encoding a wavelet coefficient, we can

    usefully take into account the state of a parent wavelet coefficient (in other words whether it

    is small or large). Since most (if not all) wavelet-based image compression algorithms

    encode coarser scales before finer scales (we will discuss more about this in the next section),


    the state of the parent of a wavelet coefficient is known to the decoder as well as the encoder

    and so this does not have to be transmitted.

For the case of intra-scale dependency, the situation is a bit more complicated, as a wavelet coefficient at any level has 8 spatial neighbors, as shown in Fig. 12. Hence there are $2^8$ possible states of the surrounding 8 neighbors, which makes characterizing the dependence unmanageable. A simple solution is to characterize the state of the neighborhood in terms of the average absolute value of the neighboring wavelet coefficients.

Let $c(X)$ be the absolute value of a wavelet coefficient X and $\{X_n : 1 \le n \le 8\}$ the neighboring coefficients of X. The state of the neighborhood of X is then characterized by the value:

$$\bar{c} = \frac{1}{8} \sum_{n=1}^{8} c(X_n) \qquad (2.3)$$

The state of the neighborhood is then this value thresholded:

$$\{X_n\} \in S \ \text{ if } \ \bar{c} < T, \qquad \{X_n\} \in L \ \text{ if } \ \bar{c} \ge T$$


Such neighborhood states can be used to predict the significance of a wavelet coefficient (in other words, whether its absolute value is greater than a threshold), which in turn determines whether it should be coded or not. It should also be mentioned that the full neighborhoods shown in Fig. 12 cannot, in fact, be used in predicting whether the coefficient at the centre of the neighborhood is significant or not. This is because, in coding wavelet coefficients, a raster scan of each wavelet level is used, which means coefficients are scanned in a top-left to bottom-right order. Thus, not all of the coefficients in the neighborhood will have been coded before the centre coefficient X is coded, and the decoder will have no way of computing the prediction probabilities $P(X \in S \mid \{X_n\} \in S)$ and $P(X \in L \mid \{X_n\} \in L)$. Hence a causal neighborhood is used in determining these probabilities. Fig. 13 shows a causal neighborhood where, in this case, all of the coefficients in the neighborhood are coded before X.

    CHAPTER-III

    COMPRESSION METHODS

Uncompressed multimedia data require considerable storage capacity and transmission bandwidth. The data are in the form of graphics, audio, video and images. These types of data have to be compressed during the transmission process. Large amounts of data cannot be stored when only low storage capacity is present. Compression offers a means to reduce the cost of storage and increase the speed of transmission. Image compression is used to minimize the size


in bytes of a graphics file without degrading the quality of the image. There are two types of image compression: lossy and lossless. Some compression algorithms were in use in the earlier days of the field, and wavelet methods were among the first alternatives proposed. For still image compression, the JPEG (Joint Photographic Experts Group) standard is established; the JPEG technique is based mainly upon the Discrete Cosine Transform. Over the past few years, a variety of powerful and sophisticated wavelet-based schemes for image compression have been developed and implemented, and these coders provide better picture quality. Wavelet-based image compression using set partitioning in hierarchical trees (SPIHT) is a powerful, efficient and yet computationally simple image compression algorithm, and it provides better performance than the Embedded Zerotree Wavelet (EZW) approach.

Two passes are involved in the SPIHT and EZW techniques: a sorting pass and a refinement pass. The image quality is measured objectively using the Peak Signal-to-Noise Ratio (PSNR) and the Mean Squared Error (MSE). In a SOFM, three types of layers are present: an input layer, a competitive layer and an output layer. The input layer accepts multidimensional input patterns from the environment. In the competitive layer, each neuron node receives a sum of weighted inputs from the input layer. The organization of the output layer is application dependent.

    3.1 SPIHT ALGORITHM

The SPIHT algorithm was introduced by Said and Pearlman. It is a powerful, efficient and yet computationally simple image compression algorithm. Using this algorithm, the highest PSNR values for given compression ratios for a variety of images can be obtained, and it provides a comparison standard for all subsequent algorithms. SPIHT stands for Set Partitioning in Hierarchical Trees. SPIHT was designed for optimal progressive transmission as well as for compression. One of the important features of SPIHT is that at any point during the decoding of an image, the quality of the displayed image is the best that can be achieved for the number of bits input by the decoder up to that moment. The wavelet coefficients are referred to as $C_{i,j}$. In a progressive transmission method, the decoder starts by setting the reconstruction image to zero. It then inputs (encoded) transform coefficients, decodes them, and uses them to generate an improved reconstruction image. The main aim in progressive transmission is to transmit the most important image information first. SPIHT uses the mean squared error (MSE) distortion measure:


$$\mathrm{MSE} = \frac{1}{N} \sum_{i} \sum_{j} \left( \hat{C}_{i,j} - C_{i,j} \right)^2 \qquad (3.1)$$

where N is the total number of pixels and $\hat{C}_{i,j}$ are the reconstructed coefficients. So the largest coefficients contain the information that most reduces the MSE distortion.

    A. SPIHT Coding

    It is important to have the encoder and decoder test sets for significance in the same way, so the

    coding algorithm uses three lists called list of significant pixels (LSP), list of insignificant pixels

    (LIP), and list of insignificant sets (LIS).

1. Initialization: Set $n = \lfloor \log_2 \max_{i,j} |C_{i,j}| \rfloor$ and transmit n. Set the LSP to empty. Set the LIP to the coordinates of all the roots $(i, j) \in H$. Set the LIS to the coordinates of all the roots $(i, j) \in H$ that have descendants.

2. Sorting pass:
   2.1 For each entry (i, j) in the LIP do:
       2.1.1 output $S_n(i, j)$;
       2.1.2 if $S_n(i, j) = 1$, move (i, j) to the LSP and output the sign of $C_{i,j}$;
   2.2 For each entry (i, j) in the LIS do:
       2.2.1 if the entry is of type A, then output $S_n(D(i, j))$;
             if $S_n(D(i, j)) = 1$, then for each $(k, l) \in O(i, j)$ do: output $S_n(k, l)$;
                 if $S_n(k, l) = 1$, add (k, l) to the LSP and output the sign of $C_{k,l}$;
                 if $S_n(k, l) = 0$, append (k, l) to the LIP;
             if $L(i, j) \neq \emptyset$, move (i, j) to the end of the LIS as a type-B entry and go to step 2.2.2; else, remove entry (i, j) from the LIS;
       2.2.2 if the entry is of type B, then output $S_n(L(i, j))$;
             if $S_n(L(i, j)) = 1$, then append each $(k, l) \in O(i, j)$ to the LIS as a type-A entry and remove (i, j) from the LIS;

3. Refinement pass: for each entry (i, j) in the LSP, except those included in the last sorting pass (the one with the same n), output the nth most significant bit of $|C_{i,j}|$;

4. Loop: decrement n by 1 and go to step 2 if needed.
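The significance test $S_n$ that drives the sorting pass simply asks whether any coefficient in a set reaches the current bit-plane threshold $2^n$. A minimal sketch follows; the bookkeeping for the descendant sets D(i, j) and O(i, j) is omitted.

```python
import numpy as np

def significance(coeffs: np.ndarray, coords, n: int) -> int:
    """S_n for a set of coordinates: 1 if any |C_{i,j}| in the set
    reaches the current bit-plane threshold 2**n, else 0."""
    return int(any(abs(coeffs[i, j]) >= 2 ** n for (i, j) in coords))

def initial_n(coeffs: np.ndarray) -> int:
    """n = floor(log2 of the largest coefficient magnitude), as in step 1."""
    return int(np.floor(np.log2(np.abs(coeffs).max())))
```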


    3.2 EZW ALGORITHM

The EZW algorithm was one of the first and most powerful algorithms for wavelet-based image compression; later algorithms were built upon its fundamental concepts. The EZW algorithm was introduced in the paper of Shapiro. EZW stands for Embedded Zerotree Wavelet. The core of EZW compression is the exploitation of self-similarity across the different scales of an image wavelet transform; in other words, EZW approximates the higher frequency coefficients of a wavelet transformed image. Because the wavelet transform coefficients contain information about both the spatial and frequency content of an image, discarding a high-frequency coefficient leads to some image degradation in a particular location of the restored image rather than across the whole image. Here, a threshold is used to compute a significance map of significant and insignificant wavelet coefficients, and zerotrees are used to represent the significance map in an efficient way. The main steps are as follows:

1. Initialization: Set the threshold T to the smallest power of 2 that is greater than $\max_{i,j} |C_{i,j}| / 2$, where $C_{i,j}$ are the wavelet coefficients.

2. Significance map coding: Scan all the coefficients in a predefined way and output a symbol when $|C_{i,j}| > T$. When the decoder inputs this symbol, it sets $\hat{C}_{i,j} = \pm 1.5T$ according to the transmitted sign.

3. Refinement: Refine each significant coefficient by sending one more bit of its binary representation. When the decoder receives this, it increments or decrements the current coefficient value by $0.25T$.

4. Set T = T/2, and go to step 2 if more iterations are needed.
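Seen from the decoder's side, steps 1-4 amount to successive approximation: a coefficient is first reconstructed at 1.5T and then nudged by T/4, T/8, ... as refinement bits arrive. A sketch under those assumptions:

```python
def initial_threshold(max_abs_coeff: float) -> float:
    """Smallest power of two greater than max|C|/2 (step 1)."""
    T = 1.0
    while T <= max_abs_coeff / 2:
        T *= 2
    return T

def decode_magnitude(T: float, refinement_bits) -> float:
    """Decoder-side reconstruction: start at 1.5T when the coefficient
    first becomes significant, then move up/down by T/4, T/8, ... as
    each refinement bit arrives."""
    value, step = 1.5 * T, 0.25 * T
    for bit in refinement_bits:
        value += step if bit else -step
        step /= 2
    return value

# e.g. a coefficient of magnitude 13 gives T = 8: first reconstructed as 12.0,
# then 14.0 and 13.0 as refinement bits arrive, narrowing toward 13.
```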

    A. EZW Coding

A wavelet coefficient $C_{i,j}$ is considered insignificant with respect to the current threshold T if $|C_{i,j}| \le T$. The zerotree data structure is built on the following experimental result: if a wavelet coefficient at a coarse scale (i.e., high in the image pyramid) is insignificant with respect to a given threshold T, then all of the coefficients of the same orientation at the same spatial location at finer scales (i.e., lower in the pyramid) are very likely to be insignificant with respect to T as well. In each iteration, all the coefficients are scanned in the order shown in Fig. 14. This guarantees that when a node is visited, all its parents will already have been scanned. Each coefficient visited in the scan is classified as a zerotree root (ZTR), an isolated zero (IZ), positive significant (POS), or negative significant (NEG).

[Fig. 14: scan order of the wavelet coefficients. Fig. 15: classification of coefficients as ZTR, IZ, POS or NEG.]

A zerotree root is a coefficient that is insignificant and all of whose descendants (in the same spatial orientation tree) are also insignificant. Such a coefficient becomes the root of a zerotree. It is encoded with a special symbol (denoted by ZTR). When the decoder inputs a ZTR symbol, it assigns a zero value to the


coefficient and to all its descendants in the spatial orientation tree. Their values are improved in subsequent iterations. Fig. 15 illustrates this classification.

Two lists are used by the encoder (and also by the decoder, which works in lockstep) in the scanning process. The dominant list contains the coordinates of the coefficients that have not yet been found to be significant. They are stored in scan order, by pyramid levels, and within each level by subbands. The subordinate list contains the magnitudes of the coefficients that have been found to be significant. Each list is scanned once per iteration. An iteration consists of a dominant pass followed by a subordinate pass. In the dominant pass, coefficients from the

    dominant list are tested for significance. If a coefficient is found significant, then i) its sign is

    determined, ii) it is classified as either POS or NEG, iii) its magnitude is appended to the

    subordinate list, and iv) it is set to zero in memory (in the array containing all the wavelet

    coefficients). The last step is done so that the coefficient does not prevent the occurrence of a

    Zerotree in subsequent dominant passes at smaller thresholds. At the end of the subordinate pass,

    the encoder sorts the magnitudes in the subordinate list in decreasing order. The encoder stops

the loop when a certain condition is met, and the decoder stops decoding when the maximum acceptable distortion level has been reached.

    3.3 SOFM ALGORITHM

Self-Organizing Feature Maps (SOFMs), also known as Kohonen maps, were first introduced by von der Malsburg and, in their present form, by Kohonen. The SOFM algorithm is based on competitive learning. Neurons are placed at the nodes of a lattice and become selectively tuned to various input patterns. Output neurons compete among themselves to be activated, so that only one neuron (or one neuron per group) wins. The locations of the winning neurons tend to become ordered in such a way that a meaningful coordinate system for different input features is created.

    A. SOFM Coding

The SOFM algorithm consists of four basic steps, shown in the following.

1. Initialization: Choose random values for the initial weight vectors $C_j(0)$. $C_j(0)$ must be different for $j = 1, 2, \ldots, k$.

2. Sampling: Draw a sample c from the input distribution with a certain probability.

3. Similarity matching: The best-matching criterion is equivalent to the minimum Euclidean distance between vectors. The mapping q(c) identifies the neuron that best matches the input vector c:

$$q(c) = \arg\min_{j} \| c - C_j \|, \qquad j = 1, 2, \ldots, k \qquad (3.2)$$

4. Updating: Adjust the winning neuron and its lattice neighbors toward the input:

$$C_j(t+1) = C_j(t) + \eta(t)\, h_{j,q(c)}(t)\, \left[ c - C_j(t) \right] \qquad (3.3)$$

where $\eta(t)$ is the learning rate and $h_{j,q(c)}(t)$ is the neighborhood function centred on the winner. Continue until no noticeable changes are observed.
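A minimal sketch of these four steps for a 1-D lattice of k neurons. The Gaussian neighborhood function and the decay schedules are assumptions, since the text does not specify them; in image compression the trained weight vectors would typically serve as a vector-quantization codebook.

```python
import numpy as np

def train_sofm(samples: np.ndarray, k: int, epochs: int = 20,
               eta: float = 0.5, sigma: float = 1.0, seed: int = 0) -> np.ndarray:
    """Minimal 1-D Kohonen map over p-dimensional input vectors.
    Steps: init -> sample -> similarity matching (Eq. 3.2) -> update (Eq. 3.3)."""
    rng = np.random.default_rng(seed)
    C = rng.random((k, samples.shape[1]))                 # 1. random initial weights C_j(0)
    for _ in range(epochs):
        for c in rng.permutation(samples):                # 2. draw a sample c
            q = np.argmin(np.linalg.norm(C - c, axis=1))  # 3. winner q(c), Eq. 3.2
            h = np.exp(-((np.arange(k) - q) ** 2) / (2 * sigma ** 2))
            C += eta * h[:, None] * (c - C)               # 4. update, Eq. 3.3
        eta *= 0.9                                        # decaying learning rate (assumed schedule)
        sigma *= 0.9                                      # shrinking neighborhood (assumed schedule)
    return C
```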

    CHAPTER-IV

SIMULATION AND RESULTS

The images Lena, Baboon, Cameraman, Peppers, Barbara and Bridge are used for the experiments. The results of the experiments are used to find the PSNR (Peak Signal-to-Noise Ratio) and MSE (Mean Squared Error) values for the reconstructed images. The results obtained using the SPIHT technique are shown in Fig. 16 and Fig. 17. Some of the best results, the highest PSNR values for given compression ratios for the sample images, were obtained with SPIHT.

    Fig. 16 SPIHT Compression of Lena, Baboon & Cameraman image


    Fig. 17 SPIHT Compression of Peppers, Barbara & Bridge image

SPIHT provides better results when compared to EZW. Fig. 18 and Fig. 19 show the results obtained using the EZW technique. EZW is used to produce a fully embedded bit stream. The main features of EZW are the discrete wavelet transform, zerotree coding of wavelet coefficients and successive approximation quantization. Here, embedding is accomplished via a series of decisions that distinguish the reconstructed image from the null image.

    Fig. 18 EZW Compression of Lena, Baboon & Cameraman image


    Fig. 19 EZW Compression of Peppers, Barbara & Bridge image

The objective of the learning algorithm for a SOFM neural network is the formation of a feature map which captures the essential characteristics of the p-dimensional input data and maps them onto an l-dimensional feature space. The learning algorithm consists of two essential aspects of map formation, namely competition and cooperation between neurons of the output lattice. The results obtained using the SOFM technique are shown in Fig. 20 and Fig. 21. The images are of lower quality when compared to the other techniques.

    Fig. 20 SOFM Compression of Lena, Baboon & Cameraman image


    Fig. 21 SOFM Compression of Peppers, Barbara & Bridge image

    4.1 PERFORMANCE ANALYSIS

    The above algorithms are compared and the results are shown in the figures. The PSNR and

    MSE values for the images compressed by SPIHT are tabulated in Table 3. The PSNR value is

    calculated by using the following formula.

$$\mathrm{PSNR} = 10 \log_{10} \left( \frac{255^2}{\mathrm{MSE}} \right) \qquad (4.1)$$
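A direct implementation of equations 3.1 and 4.1 for 8-bit images (the peak value of 255 is the standard assumption for 8-bit data):

```python
import numpy as np

def mse(original: np.ndarray, reconstructed: np.ndarray) -> float:
    """Mean squared error between the original and reconstructed images (Eq. 3.1)."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(original: np.ndarray, reconstructed: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB for 8-bit images (Eq. 4.1)."""
    return float(10 * np.log10(255.0 ** 2 / mse(original, reconstructed)))
```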

The SPIHT method is not a simple extension of traditional methods for image compression; it represents an important advance in the field. The method deserves special attention because it provides the highest image quality, progressive image transmission, a fully embedded coded file, simple quantization, exact bit-rate coding and error protection. Furthermore, its embedded coding process has proved effective across a broad range of reconstruction qualities.

    Image       PSNR     MSE
    Lena        39.85    5.98
    Cameraman   34.98    19
    Bridge      29.5     72.23
    Barbara     38.892   15.22
    Peppers     37.25    11
    Baboon      27.73    92

    Table 3: PSNR & MSE values for SPIHT


The main features of EZW include a compact multiresolution representation of images by the discrete wavelet transform, zerotree coding of the significant wavelet coefficients providing compact binary maps, successive approximation quantization of the wavelet coefficients, adaptive multilevel arithmetic coding, and the capability of meeting an exact target bit rate with a corresponding rate-distortion function (RDF). This algorithm may not yield optimal distortion, but it does provide a practical and general high-compression algorithm for a variety of image classes.

    Image       PSNR    MSE
    Lena        25.6    161
    Cameraman   24.2    234
    Bridge      23.68   280.3
    Barbara     22.7    340.33
    Peppers     23.11   82.67
    Baboon      21.33   138.11

    Table 4: PSNR & MSE values for EZW

The PSNR and MSE values for the images compressed by EZW are tabulated in Table 4. SOFM can greatly reduce computational complexity and provides new ways of associating related data. The PSNR and MSE values for the images compressed by SOFM are tabulated in Table 5. With SOFM, however, the error rate may be unacceptable.

    Image       PSNR      MSE
    Lena        11.502    4.6543e+003
    Cameraman   10.98     4.2439e+003
    Bridge      10.5548   4.4457e+003
    Barbara     10.38     4.5537e+003
    Peppers     10.7      4.1261e+003
    Baboon      10.89     4.8735e+003

    Table 5: PSNR & MSE values for SOFM


    Fig. 22 Comparison of PSNR values for SPIHT, EZW & SOFM

    Fig. 23 Comparison of MSE values for SPIHT, EZW

The comparisons of SPIHT, EZW and SOFM using the PSNR and MSE values are shown in Fig. 22 and Fig. 23. The compression ratio is taken as 2:1 to reduce the time needed for subjective testing.

    4.2 CONCLUSION

The results of different wavelet-based image compression techniques have been compared. The effects of different wavelet functions, filter orders, numbers of decompositions, image contents and compression ratios were examined. The results of the techniques SPIHT, EZW and SOFM were compared using two parameters, the PSNR and MSE values of the reconstructed images. These compression algorithms provide good picture quality at low bit rates, and the techniques have been successfully tested on many images. One of the important features of SPIHT is its use of progressive transmission and embedded coding. It is observed that SPIHT provides better results when compared to EZW and SOFM. The EZW algorithm, coupled with the power of multiresolution analysis, yields significant compression with little quality loss. Because of their inherent multiresolution nature, wavelet-based coders facilitate progressive transmission of images, thereby allowing variable bit rates. The above algorithms can be used to compress images used in web applications. SOFM can reduce computational complexity and has no need of supervised learning rules, but many problems cannot be effectively represented by a SOFM. Arithmetic coding will be combined with the SPIHT algorithm in future work to obtain better results.

    REFERENCES

[1] N. Ahmed, T. Natarajan, and K. R. Rao, "Discrete Cosine Transform," IEEE Trans. Computers, Vol. C-23, pp. 90-93, Jan. 1974.

[2] M. Antonini, M. Barlaud, P. Mathieu, and I. Daubechies, "Image coding using wavelet transform," IEEE Trans. Image Proc., Vol. 1, No. 2, pp. 205-220, 1992.

[3] A. Said and W. A. Pearlman, "Image compression using the spatial-orientation tree," IEEE Int. Symp. on Circuits and Systems, Chicago, IL, pp. 279-282, 1993.

[4] J. M. Shapiro, "Embedded image coding using zerotrees of wavelet coefficients," IEEE Trans. Signal Proc., Vol. 41, No. 12, pp. 3445-3462, 1993.

[5] M. Vetterli and J. Kovacevic, Wavelets and Subband Coding, Prentice Hall, Englewood Cliffs, NJ, 1995.

[6] A. Said and W. A. Pearlman, "A new, fast, and efficient image codec based on set partitioning in hierarchical trees," IEEE Trans. on Circuits and Systems for Video Technology, Vol. 6, No. 3, pp. 243-250, 1996.

[7] C. D. Creusere, "A new method of robust image compression based on the embedded zerotree wavelet algorithm," IEEE Trans. on Image Processing, Vol. 6, No. 10, pp. 1436-1442, Oct. 1997.

[8] G. M. Davis and A. Nosratinia, "Wavelet-based image coding: an overview," Applied and Computational Control, Signals and Circuits, Vol. 1, No. 1, 1998.

[9] S. Mallat, A Wavelet Tour of Signal Processing, Academic Press, New York, NY, 1998.

[10] J. Tian and R. O. Wells, Jr., "Embedded image coding using wavelet-difference reduction," Kluwer Academic Publishers, Norwell, MA, pp. 289-301, 1998.

[11] J. Li and S. Lei, "An embedded still image coder with rate-distortion optimization," IEEE Trans. on Image Proc., Vol. 8, No. 7, pp. 913-924, 1999.

[12] H. Malvar, "Progressive wavelet coding of images," Proc. of IEEE Data Compression Conf., pp. 336-343, Mar. 1999.

[13] K. Sayood, Introduction to Data Compression, 2nd ed., Morgan Kaufmann Publishers / Academic Press, 2000.

[14] S. P. Raja and A. Suruliandi, "Analysis of efficient wavelet based image compression techniques," IEEE Conf., 2010.