
235 Vol. 15 No. 6 December 2003

WAVELET-BASED LOSSY-TO-LOSSLESS MEDICAL IMAGE COMPRESSION USING DYNAMIC VQ AND

SPIHT CODING

SHAOU-GANG MIAOU, SHIH-TSE CHEN, SHU-NIEN CHAO

Multimedia Computing and Telecommunication Lab. Department of Electronic Engineering

Chung Yuan Christian University Chung-Li, Taiwan

ABSTRACT

With the coming era of digitized medical information, a close-at-hand challenge is the storage and transmission of enormous amounts of data, including medical images. Compression is one of the indispensable techniques for solving this problem. In this work, we propose a dynamic vector quantization (DVQ) scheme with a distortion-constrained codebook replenishment (DCCR) mechanism in the wavelet domain. In the DVQ-DCCR mechanism, a novel tree-structured vector and the well-known SPIHT technique are combined to provide excellent lossy coding performance in terms of compression ratio and peak signal-to-noise ratio. For lossless compression in the same scheme, we replace the traditional 9/7 wavelet filters with 5/3 filters and implement the wavelet transform in the lifting structure. Furthermore, a detection strategy is proposed to stop the SPIHT coding at the less significant bit planes, where SPIHT begins to lose its coding efficiency. Experimental results show that the proposed algorithm is superior to SPIHT with arithmetic coding in both lossy and lossless compression for all tested images.

Biomed Eng Appl Basis Comm, 2003(December); 15: 235-242. Keywords: medical image compression, lossy and lossless, distortion-constrained codebook replenishment, dynamic vector quantization, SPIHT.

1. INTRODUCTION

The volume of medical diagnostic data produced by hospitals has recently been increasing exponentially. In an average-sized hospital, many terabytes (10^12 bytes) of digital data are generated each year, almost all of which have to be

Received: Oct 3, 2003; Accepted: Nov 6, 2003 Correspondence: Shaou-Gang Miaou, Professor Department of Electronic Engineering, Chung Yuan Christian University, Chung-Li, Taiwan E-mail:[email protected]

kept and archived [1]. Furthermore, for telemedicine or telebrowsing applications, transmitting a large amount of digital data through a bandwidth-limited channel becomes a heavy burden. A digital compression technique can be used to solve both the storage and the transmission problems.

It is not difficult to see that wavelet-based approaches are the mainstream in the signal compression community, and many wavelet-based image compression algorithms have been proposed in the literature. In particular, techniques based on the principle of embedded zerotree coding, such as embedded image coding using zerotrees of wavelet coefficients (EZW) [2], set partitioning in hierarchical trees (SPIHT) [3], and their derivatives [4-6],


BIOMEDICAL ENGINEERING-APPLICATIONS, BASIS & COMMUNICATIONS

receive the greatest attention. These techniques demonstrate not only excellent performance but also attractive features such as signal-to-noise ratio (SNR) scalability (progressive decoding). For medical image compression, more features are needed. In particular, it is indispensable for a compression method to have a lossy-to-lossless coding scheme in order to meet the various requirements of different applications: lossless compression is often needed for legal consideration and in some critical diagnostic cases, whereas lossy compression is all we need for browsing applications. How to design an integrated scheme that has excellent performance in terms of compression ratio (CR) and peak SNR (PSNR) and allows both lossy and lossless coding is one of the important issues in the recent study of medical image compression [7-9].

Recently, Abu-Hajar and Sankar proposed a lossless image compression scheme using partial SPIHT and bit-plane based arithmetic coding (AC) [10]. The coding philosophy is as follows. After analyzing the occurrence probability of zeros/ones in each bit plane, it is shown that SPIHT is efficient only in the more significant bit planes and inefficient in the less significant ones. Therefore, the great advantage of SPIHT zerotree coding is exploited for the bit planes whose probability of entry one is less than a pre-specified threshold, and an AC scheme is applied to the other bit planes. Following that study, the same authors provided a follow-up work [7] discussing the integration of lossy and lossless compression in one single coding scheme, but no new coding framework was proposed. In [8], images are transformed using the reversible 5/3 wavelet filters for the lossless mode and 9/7 filters for the lossy mode. Also, to enhance the compression performance of the arithmetic coder, bit planes are sorted into three categories, where each category uses its own probability model within the coder. However, the additional sorting and modeling for AC shows only trivial improvement.

Combining the progressive coding capability of SPIHT and a novel dynamic vector quantization (DVQ) mechanism, a record-breaking result for lossy electrocardiogram (ECG) compression was presented in [11]. That work, along with the ideas presented in [7-8] and [10], motivates this follow-up work. However, the direct extension of [11] to medical images faces the following four major technical challenges. (1) How could the lossy compression approach in [11] be modified to allow lossless compression as well? (2) For lossy/lossless compression of medical images, different wavelet transforms may need to be considered and implemented. (3) The vector formation from the combination of various 1-D wavelet coefficients (for vector quantization of ECG signals) has a specific closed-form relationship [11], but such a


relationship for 2-D image signals is more difficult to derive than its 1-D counterpart. (4) Since ECG signals and images are totally different sources, methods that work for ECG may not necessarily perform well for images. We attempt to address and solve all four of these problems in this paper.

The organization of this paper is as follows. The principle of reversible integer wavelet transforms is given in Section 2. A short summary of conventional VQ and the codebook replenishment mechanism is given in Section 3. In Section 4, the SPIHT algorithm is summarized. In Section 5, the details of the proposed method are presented. In Section 6, the proposed method is tested and simulation results are illustrated along with discussions. Finally, conclusions are given in Section 7.

2. REVERSIBLE INTEGER WAVELET TRANSFORMS

The wavelet transform (WT), in general, produces floating-point coefficients. Although these coefficients can be used to reconstruct an original image perfectly in theory, the use of finite-precision arithmetic and quantization results in a lossy scheme.

Recently, reversible integer WTs (WTs that map integers to integers and allow perfect reconstruction of the original signal) have been introduced [9][12-14]. In [13], Calderbank et al. showed how to use the lifting scheme presented in [15], where Sweldens showed that the convolution-based biorthogonal WT can be implemented in a lifting-based scheme, as shown in Fig. 1, to reduce the computational complexity. Note that only the decomposition part of the WT is depicted in Fig. 1, because the reconstruction process is simply its reverse. The lifting-based WT consists of splitting, lifting, and scaling modules, and the WT is treated as a prediction-error decomposition. It provides a complete spatial interpretation of the WT. In Fig. 1, let x denote the input signal and x_L and x_H the decomposed output signals, obtained through the following three modules of the lifting-based 1-D WT:

(1) Splitting: In this module, the original signal x is divided into two disjoint parts, i.e., x_e(n) = x(2n) and x_o(n) = x(2n+1), which denote all even-indexed and odd-indexed samples of x, respectively.

(2) Lifting: In this module, the prediction operation P is used to estimate x_o(n) from x_e(n), resulting in an error signal d(n) that represents the detail part of the original signal. Then d(n) is applied to the update operation U, and the resulting signal is


Fig. 1 The lifting-based WT.

combined with x_e(n) to estimate s(n), which represents the smooth part of the original signal.

(3) Scaling: A normalization factor is applied to d(n) and s(n), respectively. In the even-indexed part, s(n) is multiplied by a normalization factor K_e to produce the wavelet subband x_L. Similarly, in the odd-indexed part, the error signal d(n) is multiplied by K_o to obtain the wavelet subband x_H.
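The three modules above can be sketched for the reversible integer 5/3 (LeGall) transform, the lossless filter pair adopted later in this paper. The code below is a minimal illustration, not the authors' implementation: for the integer 5/3 case the scaling factors are 1, integer rounding is folded into the predict and update steps, and edge-sample replication is assumed for boundary handling.

```python
def lift_53_forward(x):
    """One level of the reversible integer 5/3 (LeGall) lifting transform.
    Returns (s, d): the smooth (lowpass) and detail (highpass) halves."""
    even, odd = x[0::2], x[1::2]
    # Predict: estimate each odd sample from its two even neighbours.
    d = [odd[n] - (even[n] + even[min(n + 1, len(even) - 1)]) // 2
         for n in range(len(odd))]
    # Update: smooth the even samples using the neighbouring details.
    s = [even[n] + (d[max(n - 1, 0)] + d[min(n, len(d) - 1)] + 2) // 4
         for n in range(len(even))]
    return s, d

def lift_53_inverse(s, d):
    """Exact inverse: undo the update, then the prediction, then merge."""
    even = [s[n] - (d[max(n - 1, 0)] + d[min(n, len(d) - 1)] + 2) // 4
            for n in range(len(s))]
    odd = [d[n] + (even[n] + even[min(n + 1, len(even) - 1)]) // 2
           for n in range(len(d))]
    x = [0] * (len(even) + len(odd))
    x[0::2], x[1::2] = even, odd
    return x
```

Because the inverse undoes the integer lifting steps in reverse order, the round trip is exact for any integer input, which is what makes the lossless mode possible.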

Note that the outputs x_L and x_H obtained using the lifting-based WT are the same as those obtained using the convolution approach for the same input, even though the two have completely different functional structures. Compared with the traditional convolution-based WT, the lifting-based scheme has several advantages. First, it makes optimal use of the similarities between the highpass and lowpass filters, so the computational complexity can be reduced by a factor of two. Second, it allows a fully in-place calculation of the wavelet transform; in other words, no auxiliary memory is needed.

3. CONVENTIONAL VQ AND CODEBOOK REPLENISHMENT MECHANISM

A collection of codevectors called a codebook plays an essential role in VQ. Both the encoder and the decoder need to have an identical copy of the codebook C = {c_j | c_j ∈ R^K, j = 1, 2, ..., N}, where c_j, K, and N represent a codevector, the vector dimension, and the codebook size of C, respectively. Let x_t be a K-dimensional input vector for VQ at some time t. The encoder finds the best-matched codevector c_{i*} in codebook C for x_t according to the nearest-neighbor rule:

i* = argmin_{c_j ∈ C} d(x_t, c_j)    (1)


where d(·, ·) represents a distance or distortion measure between the two vectors under consideration, and i* denotes the index of c_{i*}, which is transmitted or stored for decoding. Given the index i*, a simple table-lookup decoding step retrieves the corresponding codevector c_{i*} from the codebook and uses it as the reconstructed output vector x̂_t for the input vector x_t. The induced quantization distortion d(x_t, x̂_t) = d(x_t, c_{i*}) is generally nonzero and varies according to the varying characteristics of the input source. To maintain the diagnostic features in reconstructed images for a wide variety of image signals, the distortion for any input vector is confined by a pre-determined distortion bound in a distortion-constrained codebook replenishment (DCCR) mechanism, such as the simple yet effective one in [16], introduced next.
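The nearest-neighbor rule of Equation (1) can be sketched as follows. The squared Euclidean distance is assumed here for d(·, ·); the text leaves the measure generic, so this is an illustrative choice.

```python
def nearest_codevector(x, codebook):
    """Nearest-neighbor rule of Eq. (1): return (i*, c_i*) minimizing the
    distortion d(x, c) over all codevectors c in the codebook."""
    def d(a, b):
        # Squared Euclidean distortion (an assumed, common choice).
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    i_star = min(range(len(codebook)), key=lambda i: d(x, codebook[i]))
    return i_star, codebook[i_star]
```

Only the index i* needs to be transmitted; the decoder performs the same table lookup `codebook[i_star]` to recover the reconstruction.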

Given the codebook C_t at time t, d(x_t, c_{i*}) is checked constantly to see if its value is larger than the distortion bound or threshold d_th. If not, the index i* is transmitted or stored to obtain the decoded vector x̂_t. At the same time, the codebook C_t is updated in the following manner: c_{i*} is promoted to the first position of C_t, and all the codevectors in front of it, i.e., c_0 to c_{i*-1}, are pushed down by one notch. If d(x_t, c_{i*}) > d_th, x_t or its approximation must be transmitted or stored as x̂_t. In this case, x_t or its approximation is treated as a new codevector and is inserted at the first position of the codebook C_t; all the original codevectors are pushed down by one notch, and the last one is discarded.
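A minimal sketch of this update procedure, assuming a list-based codebook and a caller-supplied distortion measure; the function and event names are illustrative, not from the paper.

```python
def dccr_update(codebook, x, d_th, distance):
    """Distortion-constrained codebook replenishment (Section 3 sketch).
    Returns (event, i_star). 'match': the best codevector is within d_th
    and is promoted to the front. 'replenish': x (or its approximation)
    is inserted at the front and the last codevector is discarded."""
    i_star, c_star = min(enumerate(codebook),
                         key=lambda ic: distance(x, ic[1]))
    if distance(x, c_star) <= d_th:
        # Promote c_i* to the first position; earlier entries shift down.
        codebook.insert(0, codebook.pop(i_star))
        return 'match', i_star
    # Insert the new codevector at the front; the last one is discarded,
    # so the codebook size N stays constant.
    codebook.insert(0, list(x))
    codebook.pop()
    return 'replenish', i_star
```

Because the decoder sees the same indices and the same new codevectors, it can mirror every promotion and insertion and keep its codebook copy identical to the encoder's.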

As the name 'DCCR' implies, the distortion is 'constrained' by the pre-determined threshold d_th, and the insertion of new codevectors and the codebook update account for 'codebook replenishment'. The distortion-constrained nature keeps the reconstruction distortion of VQ under control and guarantees the preservation of diagnostic information if the distortion threshold is set properly. The codebook replenishment feature allows the contents of a codebook to be updated on-line, so that the vector quantizer can adapt itself to match the input source. These two features ensure that the DCCR-VQ can be applied to medical images with essentially any characteristics.

4. SPIHT CODING ALGORITHM

The set partitioning in hierarchical trees (SPIHT) coding algorithm was proposed by Said and Pearlman [3]. It has been successfully applied to several forms of data, such as images [3] and electrocardiograms [17]. Experimental results show that the SPIHT algorithm is among the best in terms of compression performance. Originally, SPIHT was designed for lossy data compression. By



combining the integer-to-integer WT with the SPIHT, both the lossy and lossless compression modes are now supported.

The SPIHT coding is based on three concepts:
i) partial ordering of the image coefficients by their magnitudes, with the ordering information transmitted by a set-partitioning algorithm that is duplicated at the decoder;
ii) ordered bit-plane transmission of refinement bits; and
iii) exploitation of the self-similarity of the WT coefficients across different scales.
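Concepts (i) and (ii) can be illustrated with a stripped-down bit-plane pass that omits the set-partitioning bookkeeping (the LIS/LIP/LSP lists of the actual SPIHT algorithm): a coefficient becomes significant at plane n when |c| >= 2^n, and thereafter contributes one refinement bit per plane, so the reconstruction improves progressively and is exact after the last plane.

```python
def bitplane_passes(coeffs):
    """Visit bit planes from most to least significant, yielding
    (plane, running_reconstruction) after each pass. Illustrates the
    bit-plane ordering of SPIHT, without its set-partitioning step."""
    n_max = max(abs(c) for c in coeffs).bit_length() - 1
    recon = [0] * len(coeffs)
    for n in range(n_max, -1, -1):
        for k, c in enumerate(coeffs):
            if (abs(c) >> n) & 1:  # bit n of |c|: significance/refinement
                recon[k] += (1 if c >= 0 else -1) * (1 << n)
        yield n, list(recon)
```

Decoding can stop after any pass, which is exactly the progressive (SNR-scalable) behavior the text attributes to SPIHT.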

5. PROPOSED METHOD

A new compression method with lossy-to-lossless capability is proposed in this section. The method includes a novel WT-based tree structure and the corresponding codevector arrangement for DCCR-VQ. By using the proposed method, we require just one codebook for various WT subbands.

5.1 Overview

A block diagram of the proposed codec is shown in Fig. 2. First, the kth image in a medical image sequence is taken as the input matrix x_k. Then, an n-level WT decomposition is performed on x_k, resulting in 3n+1 wavelet coefficient subbands: x_{LLn}, x_{HLn}, ..., x_{LH1}, and x_{HH1}. We use the 9/7 WT and the 5/3 integer WT for lossy and lossless compression, respectively. We extract m tree vectors x_{TVj}, j = 1, 2, ..., m, from these subbands in a manner to be explained later. For each j, x_{TVj} is vector quantized to obtain its reconstructed version x̂_{TVj}. The DCCR-VQ contains both lossy and lossless parts, as shown in Fig. 2, and is a modified version of the basic operations discussed in Section 3. More details on our DCCR-VQ will be given later. Continuing in this way, the entire image sequence can be encoded on-line. The decoder side is not shown explicitly because it simply performs the reverse operations of the encoder.

5.2 Novel Tree Vector Architecture and Corresponding VQ Coder

Given the 3n+1 coefficient subbands, we extract m tree vectors x_{TVj}, where each tree vector is composed of the wavelet coefficients taken from these subbands in a hierarchical tree order, as shown in Fig. 3. Take the first tree vector x_{TV0} as an example. The first four coefficient groups, from x_{LLn}, x_{HLn}, x_{LHn}, and x_{HHn}, respectively, are assigned to the first 16 coefficients of the tree vector. Next, the coefficients from each subband are assigned to the rest of the corresponding positions in zig-zag scan order. This process continues until the coefficients from subband x_{HH1} are assigned to the bottom portion of the tree vector, which completes the formation of x_{TV0}. For x_{TV1}, the second coefficient group from x_{LLn} is assigned to the first four positions of the tree vector; continuing in this way and following the example above forms x_{TV1}. It is trivial to show that the vector dimension of a tree vector is 4^{n+1}. Thus m, the number of tree vectors for an M×N image, is MN/4^{n+1}.
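The dimension bookkeeping in this paragraph can be checked directly; the helper below is illustrative only.

```python
def tree_vector_params(M, N, n):
    """Dimensions implied by the tree-vector construction: each tree
    vector collects 4**(n+1) coefficients across the 3n+1 subbands of an
    n-level WT, so an M x N image yields m = M*N / 4**(n+1) tree vectors."""
    K = 4 ** (n + 1)
    assert (M * N) % K == 0, "image size must be divisible by the dimension"
    return K, (M * N) // K
```

For the 256×256 MRI images and n = 3 used in Section 6, this gives a vector dimension of 256 and m = 256 tree vectors per image, matching the numbers quoted there.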

With this architecture of the tree vector, the corresponding VQ codebook C_{TV} is shown in Fig. 4(a). Note that the well-known pyramid structures are used to highlight the hierarchical relationship among coefficient subbands. This special codebook, consisting of N tree codevectors c_{TVi}, i = 0, 1, ..., N-1, is available to both encoder and decoder. The initial C_{TV} can be obtained using any appropriate codebook training algorithm, say LBG [18]. The required training vectors for codebook generation are extracted from WT decompositions of some images. For an input tree vector x_{TV}, its closest tree codevector c_{TVi*} can be found using Equation (1). As an example, for a vector dimension of 256, we show numerical values of the components of an input tree vector x_{TV}, its closest codevector c_{TVi*}, and their vector difference in Figs. 4(b), 4(c), and 4(d), respectively. The vector difference will be used later in the DCCR mechanism.

5.3 Dynamic Bit Allocation in DCCR Mechanism

Recall that an input vector or its approximation must be sent or stored as a new codevector when the distortion between x_{TV} and c_{TVi*} is beyond the distortion threshold in the DCCR mechanism. In this case, the coding performance can drop

[Block diagram: the input image takes either the lossy branch (9/7 WT) or the lossless branch (5/3 WT), followed by tree vector extraction, VQ, and the lossy/lossless DCCR mechanism.]

Fig. 2 A block diagram of the proposed codec.


Fig. 3. The hierarchical structure of our tree vector (component indices 0, 1, ..., 4^{n+1} - 1).

significantly, especially when n is large, unless an extremely effective coding strategy is available. One such strategy is discussed next.

Since a tree vector x_{TV} consists of WT coefficients with great magnitude differences, it is not efficient to assign a fixed number of bits to every coefficient. Even if we assign different but fixed numbers of bits to WT coefficients according to their magnitudes, the coding efficiency may still be poor because of the varying characteristics of different images. Therefore, we need a dynamic bit allocation scheme. Furthermore, the reconstruction distortion does not have to be zero if a nonzero distortion threshold is set, so it is not necessary to represent the new codevector with more precision than needed. With these considerations, we found the SPIHT coding to be a perfect solution to our problem. The SPIHT coding strategy not only performs dynamic bit allocation efficiently according to the magnitudes of the wavelet coefficients, but also sends or stores the encoded data progressively and can be stopped at the point where the distortion falls within the pre-specified d_th. Therefore, we propose a DCCR mechanism with the SPIHT coding strategy as follows.

Lossy Compression

Case (1): If d(x_{TV}, c_{TVi*}) ≤ d_th, the index i* and a bit "1" indicating Case (1) are transmitted or stored. The codebook is rearranged according to the original DCCR mechanism.

Fig. 4. The architecture of the proposed tree codebook and typical numerical examples of tree vectors with 256 components. (a) Tree codebook architecture; (b) an input tree vector; (c) the tree codevector closest to the vector in (b); (d) the vector difference of (b) and (c).

Case (2): If d(x_{TV}, c_{TVi*}) > d_th, the index i* and a bit "0" indicating Case (2) are transmitted or stored. Then, the components of the difference vector e_{TV} = x_{TV} - c_{TVi*} are scalar quantized using the SPIHT strategy to obtain ê_{TV}. The SPIHT coding process continues until d(x_{TV}, x̂_{TV}) ≤ d_th, where x̂_{TV} = c_{TVi*} + ê_{TV}. Finally, the resulting bit stream is transmitted or stored following the bit representation of i* and the bit "0". Here, x̂_{TV} is treated as a new codevector and is inserted into the codebook according to the corresponding procedure in the original DCCR mechanism. Note that every SPIHT bit stream for an e_{TV} includes a 4-bit (2^4 = 16) header to indicate the number of bit planes, ⌊log2(max_k |e_{TV}(k)|)⌋, necessary for SPIHT coding, where e_{TV}(k) is the kth component of e_{TV}, and a 10-bit (2^10 = 1024) header to denote the bit length of the bit stream. The 14-bit header is selected because it is sufficient for all tested medical images


Fig. 5. Performance comparison between SPIHT and our method with different d_th values.

considered in the simulation study of the next section.

Lossless Compression

If d(x_{TV}, c_{TVi*}) ≤ d_th, the index i* and a bit "1" indicating Case (1) are transmitted or stored. Otherwise, the index i* and a bit "0" indicating Case (2) are transmitted or stored. Then, the components of the difference vector e_{TV} = x_{TV} - c_{TVi*} are scalar quantized using the lossless SPIHT strategy to obtain ê_{TV}. In this lossless case, ê_{TV} = e_{TV}.

By investigating the SPIHT coding efficiency and observing the probability of zero/one occurrence in each bit plane, we propose here a SPIHT switching strategy that is not found elsewhere. In this strategy, when the SPIHT coding length of a bit plane is greater than the total number of bits in that bit plane, we shut the SPIHT coding off and switch to the direct processing of each bit without further coding (say, entropy coding). Once switched to the direct processing mode, it is never switched back, because as we move from the more significant bit planes to the least significant one, the zero/one occurrences in the bit planes become noisier and noisier, and the coding efficiency of SPIHT becomes lower and lower. This switching strategy avoids the inefficiency of SPIHT in the less significant bit planes and thus enhances the coding performance. Again, we have x̂_{TV} = c_{TVi*} + ê_{TV}; in this lossless case, x̂_{TV} = x_{TV}. The codebook is rearranged or updated according to the original DCCR mechanism. Finally, the resulting bit stream is transmitted or stored following the bit representation of i* and the bit "1" or bit "0". Again, the 14-bit header is needed as in the lossy part.
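The per-bit-plane switching test described above can be sketched as follows, assuming each bit plane's SPIHT output and raw contents are available as bit strings; the names are illustrative, not from the paper.

```python
def spiht_or_raw(spiht_bits, plane_bits, switched):
    """Per-bit-plane switch: once the SPIHT output for a plane would
    exceed the raw size of that plane, emit the plane verbatim and never
    switch back (less significant planes only get noisier).
    Returns (bits_to_emit, switched)."""
    if switched or len(spiht_bits) > len(plane_bits):
        return plane_bits, True  # direct (raw) mode from here on
    return spiht_bits, switched
```

Because the decoder can recompute the same length comparison plane by plane, no extra signaling is strictly required beyond what it already receives.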

6. EXPERIMENTAL RESULTS

Fig. 6 A typical image from each image sequence. (a) MRI; (b) Flower Garden; (c) Salesman; (d) CT Chest.

In the following experiments, four groups of volumetric MRI head-scan images, one for each of four different subjects, are taken. The 420 images in each group are obtained in 90 seconds. The scan time interval between adjacent image slices is 1.5 seconds. The head scan is divided into seven layers, where the first image of each layer is scanned first, followed by the second image of each layer, and so on, until 60 images for each layer are obtained. The image size and pixel precision are 256×256 and 16 bits, respectively.
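The interleaved acquisition order described here can be sketched as follows; this helper is purely illustrative and not part of the compression method.

```python
def acquisition_order(layers=7, images_per_layer=60):
    """Slice ordering described for the MRI groups: the first image of
    every layer is scanned first, then the second image of every layer,
    and so on. Returns (layer, image) pairs in scan order."""
    return [(lay, img)
            for img in range(images_per_layer)
            for lay in range(layers)]
```

With seven layers of 60 images each, this yields the 420 images per group quoted above.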

6.1 Lossy Compression

Since the biorthogonal 9/7-tap filters for the WT are proven to offer excellent coding performance, they are also adopted here. As for the vector dimension, the consideration is as follows. The dimension of a tree vector is 4^{n+1}, where n is the WT decomposition level. A larger vector dimension seems as if it should increase the CR, but in practice it does not. The reason is that the variance of an input vector will also be larger in general, and thus more codevectors in a VQ codebook, or a larger codebook size N, must be used to maintain reasonable quality of the reconstructed signals. Moreover, the computational complexity of VQ increases exponentially with the vector dimension and codebook size. In fact, it is a trade-off in any VQ coder. Here, we


chose n to be 3 because it exploits sufficient energy-compaction capability of the WT for 256×256 MRI images and provides sufficient coding gain with a vector dimension of 4^4 = 256. Besides, if n is 4, the dimension of 4^5 = 1024 is too large to allow high coding efficiency or low computational complexity, according to our experience.

We take one layer of MRI images (60 images) in Group 1 to initialize a single tree codebook using the LBG training algorithm. Some images in Group 2 are used for outside testing. Taking image quality and CR into consideration, we found that a codebook size of 1024 for the vector dimension of 256 is suitable. Fig. 5 gives the average coding performance for an image sequence of 120 images under different values of d_th, and the results are compared with those obtained using SPIHT coding. The proposed method outperforms SPIHT coding by a significant margin, especially in the high-CR region. This is intuitively plausible because VQ generally performs well under high-CR conditions.

6.2 Lossless Compression

The proposed algorithm was implemented and compared to the lossless SPIHT, where the S+P transform and the arithmetic codec are used [19]. Table


1 shows the four sources used to obtain the initial codebooks: Codebook 1 was trained on MRI Head image sequences from Group 3, Codebook 2 on Flower Garden image sequences, Codebook 3 on Salesman image sequences, and Codebook 4 on CT Chest image sequences. A typical image from each sequence is shown in Fig. 6. Some MRI head images from Group 4 are used for testing. Table 2 gives the CR performance for three randomly selected MRI images using the lossless SPIHT with AC and the proposed method. Fig. 7 shows the results of using 120 MRI head images to evaluate the CR performance of our method and the lossless SPIHT with AC. Table 3 summarizes the average CR performance over the 120 images for lossless compression. All results show that our method achieves better CR performance than lossless SPIHT with AC, no matter which initial codebook is used. This demonstrates the robustness of the proposed approach.

7. CONCLUSIONS

It is not difficult to see that all four problems presented in Section 1 have been solved in the proposed

Table 1. Different image sources for the training of initial codebooks

Codebook    | No. of images | Source of image sequence
Codebook 1  | 50            | MRI Head
Codebook 2  | 60            | Flower Garden
Codebook 3  | 100           | Salesman
Codebook 4  | 70            | CT Chest

Table 2. Comparison of CR performance between lossless SPIHT and the proposed method with different initial codebooks

Images      | SPIHT+Arithmetic | Codebook 1 | Codebook 2 | Codebook 3 | Codebook 4
MRI head 1  | 2.6995           | 2.7578     | 2.7113     | 2.7279     | 2.7314
MRI head 2  | 2.6870           | 2.7024     | 2.7299     | 2.7165     | 2.7173
MRI head 3  | 2.6675           | 2.7286     | 2.6858     | 2.6996     | 2.7032

Table 3. Average CR performance for lossless compression

Images      | SPIHT+Arithmetic | Codebook 1 | Codebook 2 | Codebook 3 | Codebook 4
16-bit Set  | 2.7324           | 2.8375     | 2.7988     | 2.8146     | 2.8181


Fig. 7 CR performances of lossless SPIHT with AC and the proposed method for medical image sequences (CR versus frame number, frames 1-120; curves: Codebook 1, Codebook 2, Codebook 3, Codebook 4, and SPIHT+AC).

approach. The experimental results show that the proposed method is superior to both lossy and lossless SPIHT with AC for the MRI image sequences under test. The winning margin could be wider if we also used AC in our approach; however, in that case, we might incur the disadvantage of higher computational complexity. Finally, the proposed framework is flexible enough to include other functions, for example, region of interest (ROI) coding, SNR scalability, and resolution scalability. In addition, image types other than MRI may also be suitable for the proposed approach, but this requires further study.

REFERENCES

1. Kim Y and Pearlman WA: Stripe-based SPIHT lossy compression of volumetric medical images for low memory usage and uniform reconstruction quality. Proc of Int Conf on Image Processing 2000; 3: 652-655.

2. Shapiro JM: Embedded image coding using zerotrees of wavelet coefficients. IEEE Trans Signal Processing 1993; 41: 3445-3462.

3. Said A and Pearlman WA: A new, fast and efficient image codec based on set partitioning in hierarchical trees. IEEE Trans Circuits Syst for Video Technol 1996; 6: 243-250.

4. Hang X, Greenberg NL, Shiota T, Firstenberg MS and Thomas JD: Compression of real time volumetric echocardiographic data using modified SPIHT based on the three-dimensional wavelet packet transform. Computers in Cardiology 2000; 123-126.

5. Bruckmann A and Uhl A: Selective medical image compression techniques for telemedical and archiving applications. Computers in Biology and Medicine 2000; 30: 153-169.

6. Miaou SG and Lin CL: A quality-on-demand algorithm for wavelet-based compression of electrocardiogram signals. IEEE Trans Biomed Eng 2002; 49: 233-239.

7. Abu-Hajar A and Sankar R: Enhanced partial-SPIHT for lossless and lossy image compression. ICASSP 2003; 3: 253-256.

8. Liu Z, Hua J, Xiong Z and Wu Q: Lossy-to-lossless ROI coding of chromosome images using modified SPIHT and EBCOT. IEEE Int Symposium on Biomedical Imaging 2002; 317-320.

9. Said A and Pearlman WA: An image multiresolution representation for lossless and lossy compression. IEEE Trans on Image Processing 1996; 5: 1303-1310.

10. Abu-Hajar A and Sankar R: Wavelet based lossless image compression using partial SPIHT and bit plane based arithmetic coder. ICASSP 2002; 4: 3497-3500.

11. Miaou SG, Yen HY, and Lin CL: Wavelet-based ECG compression using dynamic vector quantization with tree codevectors in single codebook. IEEE Trans Biomed Eng 2002; 49: 671-680.

12. Zandi A, Allen JD, Schwartz EL, and Boliek M: CREW: Compression with reversible embedded wavelets. Proc of Data Compression Conference 1995; 212-221.

13. Calderbank AR, Daubechies I, Sweldens W, and Yeo BL: Wavelet transforms that map integers to integers. Appl Comput Harmon Anal 1998; 5: 332-369.

14. Dewitte S and Cornelis J: Lossless integer wavelet transform. IEEE Signal Processing Letters 1997; 4: 158-160.

15. Sweldens W: The lifting scheme: A custom-design construction of biorthogonal wavelets. Appl Comput Harmon Anal 1996; 3(2): 186-200.

16. Goodman RM, Gupta B, and Sayana M: Neural network implementation of adaptive vector quantization for image compression. Tech. Rep., Dept. of Electrical Eng., California Institute of Technology, Pasadena, CA, 1991.

17. Lu Z, Kim DY and Pearlman WA: Wavelet compression of ECG signals by the set partitioning in hierarchical trees algorithm. IEEE Trans on Biomed Eng 2000; 47: 849-856.

18. Linde Y, Buzo A, and Gray RM: An algorithm for vector quantizer design. IEEE Trans Commun 1980; 28: 84-95.

19. http://www.cipr.rpi.edu/research/SPIHT/spihtO.html
