
Lossless Image Compression Using Super-Spatial Structure Prediction

Abstract:

We recognize that the key challenge in image compression is to efficiently

represent and encode high-frequency image structure components, such as edges,

patterns, and textures. In this work, we develop an efficient lossless image compression

scheme called super-spatial structure prediction. This super-spatial prediction is

motivated by motion prediction in video coding, attempting to find an optimal prediction

of structure components within previously encoded image regions. We find that this

super-spatial prediction is very efficient for image regions with significant structure

components. Our extensive experimental results demonstrate that the proposed scheme is

very competitive and even outperforms the state-of-the-art lossless image compression

methods.

Introduction:

SPATIAL image prediction has been a key component in efficient lossless image

compression. Existing lossless image compression schemes attempt to predict image data

using their spatial neighborhood. We observe that this will limit the image compression

efficiency. A natural image often contains a large number of structure components, such

as edges, contours, and textures. These structure components may repeat themselves at

various locations and scales. Therefore, there is a need to develop a more efficient image

prediction scheme to exploit this type of image correlation. The idea of improving image

prediction and coding efficiency by relaxing the neighborhood constraint can be traced

back to sequential data compression and vector quantization for image compression. In

sequential data compression, a substring of text is represented by a displacement/length

reference

to a substring previously seen in the text. Storer extended the sequential data compression

to lossless image compression. However, the algorithm is not competitive with the state of the art, such as context-based adaptive lossless image coding (CALIC), in terms of

coding efficiency. In vector quantization (VQ) for lossless image compression, the input image is processed as

vectors of image pixels. The encoder takes in a vector and finds the best match from its

stored codebook.

The address of the best match and the residual between the original vector and its best match

are then transmitted to the decoder. The decoder uses the address to access an identical

codebook, and obtains the reconstructed vector. Recently, researchers have extended the

VQ method to visual pattern image coding (VPIC) and visual pattern vector quantization

(VPVQ) [8]. The encoding performance of VQ-based methods largely depends on the

codebook design. To the best of our knowledge, these methods still suffer from lower coding

efficiency, when compared with the state-of-the-art image coding schemes. In the intra

prediction scheme proposed by Nokia [9], there are ten possible prediction methods: DC

prediction, directional extrapolations, and block matching. DC and directional prediction

methods are very similar to those of H.264 intra prediction [10]. The block matching

tries to find the best match of the current block by searching within a certain range of its

neighboring blocks. As mentioned earlier, this neighborhood constraint will limit the

image compression efficiency since image structure components may repeat themselves

at various locations. In fractal image compression [11], the self-similarity between

different parts of an image is used for image compression based on the contractive mapping fixed-point theorem. However, fractal image compression focuses on contractive

transform design, which makes it usually not suitable for lossless image compression.

Moreover, it is extremely computationally expensive due to the search of optimum

transformations. Even with high complexity, most fractal-based schemes are not

competitive with the current state of the art [12]. In this work, we develop an efficient

image compression scheme based on super-spatial prediction of structure units. This so-

called super-spatial structure prediction breaks the neighborhood constraint, attempting to

find an optimal prediction


of structure components within the previously encoded image regions. It borrows the idea

of motion prediction from video coding, which predicts a block in the current frame using

its previous encoded frames. In order to “enjoy the best of both worlds”, we also propose

a classification scheme to partition

an image into two types of regions: structure regions (SRs) and nonstructure regions

(NSRs). Structure regions are encoded with super-spatial prediction while NSRs can be

efficiently encoded with conventional image compression methods, such as CALIC. It is

also important to point out that no codebook is required in this compression scheme,

since the best matches of structure components are simply searched within encoded

image regions. Our extensive experimental results demonstrate

that the proposed scheme is very competitive and even outperforms the state-of-the-art

lossless image compression methods.

Image Processing:

Image Processing is a technique to enhance raw images received from cameras/sensors placed on satellites, space probes, and aircraft, or pictures taken in normal day-to-day life, for various applications.

Various techniques have been developed in Image Processing during the last four to five decades. Most of the techniques were developed for enhancing images obtained from unmanned spacecraft, space probes, and military reconnaissance flights. Image Processing systems are becoming popular due to the easy availability of powerful personal computers, large-size memory devices, graphics software, etc.

Image Processing is used in various applications such as:

• Remote Sensing

• Medical Imaging

• Non-destructive Evaluation

• Forensic Studies

• Textiles

• Material Science.


• Military

• Film industry

• Document processing

• Graphic arts

• Printing Industry

The common steps in image processing are image scanning, storing, enhancing and

interpretation.

Methods of Image Processing

There are two methods available in Image Processing.

i) Analog Image Processing

Analog Image Processing refers to the alteration of an image through electrical means. The

most common example is the television image.

The television signal is a voltage level which varies in amplitude to represent brightness

through the image. By electrically varying the signal, the displayed image appearance is

altered. The brightness and contrast controls on a TV set serve to adjust the amplitude

and reference of the video signal, resulting in the brightening, darkening and alteration of

the brightness range of the displayed image.

ii) Digital Image Processing

In this case, digital computers are used to process the image. The image will be converted

to digital form using a scanner-digitizer [6] (as shown in Figure 1) and then processed. It is defined as subjecting numerical representations of objects to a series of

operations in order to obtain a desired result. It starts with one image and produces a

modified version of the same. It is therefore a process that takes an image into another.

The term digital image processing generally refers to processing of a two-dimensional

picture by a digital computer. In a broader context, it implies digital processing of any

two-dimensional data. A digital image is an array of real numbers represented by a finite

number of bits. The principal advantages of Digital Image Processing methods are their versatility, repeatability, and the preservation of original data precision.


The various Image Processing techniques are:

• Image representation

• Image preprocessing

• Image enhancement

• Image restoration

• Image analysis

• Image reconstruction

• Image data compression

Image Representation

An image defined in the "real world" is considered to be a function of two real

variables, for example, f(x,y) with f as the amplitude (e.g. brightness) of the image at the

real coordinate position (x,y).

Image Preprocessing

Scaling

Magnification gives a closer view by zooming in on the part of the imagery that is of interest. By reduction, we can bring an unmanageable amount of data down to a manageable size. For resampling an image, nearest-neighbor, linear, or cubic convolution techniques [5] are used.

I. Magnification

This is usually done to improve the scale of display for visual interpretation or

sometimes to match the scale of one image to another. To magnify an image by a factor

of 2, each pixel of the original image is replaced by a block of 2x2 pixels, all with the

same brightness value as the original pixel.
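As an illustration of this block-replication approach, the following C# sketch magnifies an image by a factor of 2. It assumes, purely for illustration, that a grayscale image is held as a 2-D byte array; it is not taken from the project code.

static class ScalingOps
{
    // Sketch: 2x magnification by pixel replication. Each source pixel is
    // copied into a 2x2 block of the output with the same brightness value.
    public static byte[,] Magnify2x(byte[,] src)
    {
        int h = src.GetLength(0), w = src.GetLength(1);
        byte[,] dst = new byte[2 * h, 2 * w];
        for (int y = 0; y < h; y++)
        {
            for (int x = 0; x < w; x++)
            {
                byte v = src[y, x];
                dst[2 * y, 2 * x] = v;
                dst[2 * y, 2 * x + 1] = v;
                dst[2 * y + 1, 2 * x] = v;
                dst[2 * y + 1, 2 * x + 1] = v;
            }
        }
        return dst;
    }
}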

II. Reduction

To reduce a digital image by a factor m, every mth row and mth column of the original imagery is selected and displayed. Another way of accomplishing the same is by taking the average in each 'm x m' block and displaying this average after proper rounding of the resultant value.
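A corresponding reduction sketch, again assuming a 2-D byte array whose dimensions are multiples of m, averages each m x m block and rounds the result:

using System;

static class ReductionOps
{
    // Sketch: reduce an image by a factor m by averaging each m x m block
    // and rounding the average to the nearest gray level.
    public static byte[,] ReduceByAveraging(byte[,] src, int m)
    {
        int h = src.GetLength(0) / m, w = src.GetLength(1) / m;
        byte[,] dst = new byte[h, w];
        for (int y = 0; y < h; y++)
        {
            for (int x = 0; x < w; x++)
            {
                int sum = 0;
                for (int dy = 0; dy < m; dy++)
                    for (int dx = 0; dx < m; dx++)
                        sum += src[y * m + dy, x * m + dx];
                dst[y, x] = (byte)Math.Round(sum / (double)(m * m));
            }
        }
        return dst;
    }
}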

Rotation

Rotation is used in image mosaicking, image registration, etc. One of the techniques of rotation is 3-pass shear rotation, where the rotation matrix is decomposed into three separable shear matrices, as shown below.
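For reference, one standard three-shear decomposition (Paeth's rotation) expresses a rotation by an angle theta as the product of three shear matrices:

\[
\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}
=
\begin{pmatrix} 1 & -\tan(\theta/2) \\ 0 & 1 \end{pmatrix}
\begin{pmatrix} 1 & 0 \\ \sin\theta & 1 \end{pmatrix}
\begin{pmatrix} 1 & -\tan(\theta/2) \\ 0 & 1 \end{pmatrix}
\]

Because each factor is a pure shear along a single axis, each pass can be implemented as a simple shift of rows or columns, which is what makes the 3-pass method attractive for raster images.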

Mosaic

Mosaic is a process of combining two or more images to form a single large image

without radiometric imbalance. A mosaic is required to get a synoptic view of an entire area that would otherwise be captured only as small images.

Image Enhancement Techniques

Sometimes images obtained from satellites and conventional and digital cameras

lack contrast and brightness because of the limitations of imaging subsystems and illumination conditions while capturing the image. Images may also contain different types of noise.

In image enhancement, the goal is to accentuate certain image features for subsequent

analysis or for image display[1,2]. Examples include contrast and edge enhancement,

pseudo-coloring, noise filtering, sharpening, and magnifying. Image enhancement is

useful in feature extraction, image analysis, and image display. The enhancement

process itself does not increase the inherent information content in the data. It simply

emphasizes certain specified image characteristics. Enhancement algorithms are generally

interactive and application-dependent.

Some of the enhancement techniques are:

Contrast Stretching

Noise Filtering

Histogram modification

Contrast Stretching:


Some images (dense forests, snow, clouds and under hazy conditions over

heterogeneous regions) are homogeneous, i.e., they do not have much change in their gray levels. In terms of histogram representation, they are characterized by the occurrence of

very narrow peaks. The homogeneity can also be due to the incorrect illumination of the

scene.

Ultimately the images hence obtained are not easily interpretable due to poor human

perceptibility. This is because the image occupies only a narrow range of the gray levels available to it. Contrast stretching methods are designed for such frequently encountered situations. Different stretching

techniques have been developed to stretch the narrow range to the whole of the available

dynamic range.
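A minimal sketch of linear contrast stretching, assuming (for illustration only) an 8-bit grayscale image stored as a 2-D byte array:

static class ContrastOps
{
    // Sketch: linearly map the occupied gray-level range [min, max]
    // onto the full dynamic range [0, 255].
    public static void StretchContrast(byte[,] img)
    {
        byte min = 255, max = 0;
        foreach (byte v in img)
        {
            if (v < min) min = v;
            if (v > max) max = v;
        }
        if (max == min) return;   // perfectly homogeneous image: nothing to stretch

        int h = img.GetLength(0), w = img.GetLength(1);
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                img[y, x] = (byte)((img[y, x] - min) * 255 / (max - min));
    }
}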

Noise Filtering

Noise filtering is used to filter the unnecessary information from an image. It is

also used to remove various types of noise from the images. Mostly this feature is

interactive. Various filters like low pass, high pass, mean, median etc., are available.

Histogram Modification

The histogram has a lot of importance in image enhancement. It reflects the characteristics of the image. By modifying the histogram, image characteristics can be

modified. One such example is Histogram Equalization. Histogram equalization is a

nonlinear stretch that redistributes pixel values so that there is approximately the same

number of pixels with each value within a range. The result approximates a flat

histogram. Therefore, contrast is increased at the peaks and lessened at the tails.
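The following C# sketch illustrates histogram equalization for an 8-bit grayscale image (again assumed to be a 2-D byte array): each gray level is remapped through the normalized cumulative histogram.

using System;

static class HistogramOps
{
    // Sketch: histogram equalization via the cumulative distribution function.
    public static void Equalize(byte[,] img)
    {
        int h = img.GetLength(0), w = img.GetLength(1);
        int total = h * w;

        int[] hist = new int[256];
        foreach (byte v in img) hist[v]++;

        // Cumulative histogram and the resulting gray-level mapping.
        byte[] map = new byte[256];
        int running = 0;
        for (int i = 0; i < 256; i++)
        {
            running += hist[i];
            map[i] = (byte)Math.Round(255.0 * running / total);
        }

        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                img[y, x] = map[img[y, x]];
    }
}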

Image Analysis

Image analysis is concerned with making quantitative measurements from an

image to produce a description of it [8]. In the simplest form, this task could be reading a

label on a grocery item, sorting different parts on an assembly line, or measuring the size

and orientation of blood cells in a medical image. More advanced image analysis systems

measure quantitative information and use it to make a sophisticated decision, such as


controlling the arm of a robot to move an object after identifying it or navigating an

aircraft with the aid of images acquired along its trajectory.

Image analysis techniques require extraction of certain features that aid in the

identification of the object. Segmentation techniques are used to isolate the desired object

from the scene so that measurements can be made on it subsequently. Quantitative

measurements of object features allow classification and description of the image.

Image Segmentation

Image segmentation is the process that subdivides an image into its constituent

parts or objects. The level to which this subdivision is carried out depends on the problem

being solved, i.e., the segmentation should stop when the objects of interest in an

application have been isolated. For example, in autonomous air-to-ground target acquisition, if our interest lies in identifying vehicles on a road, the first step is to segment the road from the image and then to segment the contents of the road down to potential vehicles. Image thresholding techniques are used for image segmentation, as sketched below.
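As a simple example of thresholding-based segmentation, the following sketch labels every pixel brighter than a caller-supplied threshold as object (255) and the rest as background (0); choosing the threshold automatically, e.g. from the histogram, is a separate problem.

static class SegmentationOps
{
    // Sketch: global thresholding of a grayscale image into a binary mask.
    public static byte[,] Threshold(byte[,] img, byte threshold)
    {
        int h = img.GetLength(0), w = img.GetLength(1);
        byte[,] mask = new byte[h, w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                mask[y, x] = img[y, x] > threshold ? (byte)255 : (byte)0;
        return mask;
    }
}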

Classification

Classification is the labeling of a pixel or a group of pixels based on its grey value [9,10]. Classification is one of the most often used methods of information extraction. In Classification, usually multiple features are used for a set of pixels, i.e., many images of a particular object are needed. In the Remote Sensing area, this procedure

assumes that the imagery of a specific geographic area is collected in multiple regions of

the electromagnetic spectrum and that the images are in good registration. Most of the

information extraction techniques rely on analysis of the spectral reflectance properties of

such imagery and employ special algorithms designed to perform various types of

'spectral analysis'. The process of multispectral classification can be performed using

either of the two methods: Supervised or Unsupervised.

In Supervised classification, the identity and location of some of the land cover

types such as urban, wetland, forest, etc., are known a priori through a combination of field work and toposheets. The analyst attempts to locate specific sites in the remotely


sensed data that represents homogeneous examples of these land cover types. These areas

are commonly referred to as TRAINING SITES because the spectral characteristics of these known areas are used to 'train' the classification algorithm for eventual land cover mapping of the remainder of the image. Multivariate statistical parameters are calculated for

each training site. Every pixel both within and outside these training sites is then

evaluated and assigned to a class of which it has the highest likelihood of being a

member.

In an Unsupervised classification, the identities of the land cover classes within a scene are not generally known a priori because ground truth is lacking or surface features within the scene are not well defined. The computer is

required to group pixel data into different spectral classes according to some statistically

determined criteria.

The analogous task in the medical area is the labeling of cells based on their shape, size, color

and texture, which act as features. This method is also useful for MRI images.

Image Restoration

Image restoration refers to removal or minimization of degradations in an image. This

includes de-blurring of images degraded by the limitations of a sensor or its environment,

noise filtering, and correction of geometric distortion or non-linearity due to sensors.

Image is restored to its original quality by inverting the physical degradation

phenomenon such as defocus, linear motion, atmospheric degradation and additive noise.

Image Reconstruction from Projections

Image reconstruction from projections [3] is a special class of image restoration problems

where a two- (or higher) dimensional object is reconstructed from several one-

dimensional projections. Each projection is obtained by projecting a parallel X-ray (or

other penetrating radiation) beam through the object. Planar projections are thus obtained

by viewing the object from many different angles. Reconstruction algorithms derive an

image of a thin axial slice of the object, giving an inside view otherwise unobtainable


without performing extensive surgery. Such techniques are important in medical imaging

(CT scanners), astronomy, radar imaging, geological exploration, and nondestructive

testing of assemblies.

Image Compression

Compression is an essential tool for archiving image data, transferring image data over the network, etc. There are various techniques available for lossy and lossless compression.

One of most popular compression techniques, JPEG (Joint Photographic Experts Group)

uses Discrete Cosine Transformation (DCT) based compression technique. Currently

wavelet based compression techniques are used for higher compression ratios with

minimal loss of data.

Existing System:

Existing lossless image compression schemes attempt to predict image data using

their spatial neighborhood. We observe that this will limit the image compression

efficiency.

The existing fractal image compression has high complexity.

Redundancy occurs.

Low flexibility.

Proposed System:

In our proposed system, we have developed a simple yet efficient image prediction

scheme, called super-spatial prediction.

This super-spatial structure prediction breaks the neighborhood constraint,

attempting to find an optimal prediction of structure components within the

previously encoded image regions.

We also propose a classification scheme to partition an image into two types of

regions: structure regions (SRs) and nonstructure regions (NSRs). Structure


regions are encoded with super-spatial prediction while NSRs can be efficiently

encoded with conventional image compression methods, such as CALIC. It is also

important to point out that no codebook is required.

It eliminates the redundancy.

It is more flexible compared to VQ-based image encoders.

HARDWARE CONFIGURATION:

PROCESSOR - INTEL PENTIUM III

HARD DISK SPACE - 40 GB

DISPLAY - 15” COLOR MONITOR

MEMORY - 256 MB

FLOPPY DISK DRIVE - 1.44 MB

KEYBOARD - 104 KEYS

COMPONENTS - MOUSE ATTACHED

SOFTWARE CONFIGURATION:

OPERATING SYSTEM - WINDOWS 2000 SERVER

FRONT END - C#.NET(WINDOWS)

BACK END - MS SQL SERVER 7.0


Architecture:

Algorithm and Techniques:

Super-Spatial Structure Prediction:

It is an efficient lossless image compression technique. This super-spatial prediction is motivated by motion prediction in video coding. It is used to encode the structure regions.

Context-based adaptive lossless image coding:

CALIC is a spatial prediction based scheme in which GAP (gradient-adjusted prediction) is used for adaptive image prediction. It is used to encode the nonstructure regions.

H.264:

H.264 or AVC (Advanced Video Coding) is a standard for video compression, and

is currently one of the most commonly used formats for the recording, compression, and

distribution of high definition video. The intra prediction proposed in H.264 video coding is a very efficient spatial prediction scheme.

Modules:


IMAGE BLOCK CLASSIFICATION:

We will see that super-spatial prediction works very efficiently for image regions

with a significant amount of structure components. However, due to its large overhead

and high computational complexity, its efficiency will degrade in non structure or smooth

image regions. Therefore, we propose a block-based image classification scheme. More

specifically, we partition the image into blocks. We then classify these blocks into two

categories: structure and non structure blocks. Structure blocks are encoded with super-

spatial prediction (Method_A). Non structure blocks are encoded with conventional

lossless image compression methods, such as CALIC (Method_B).
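The text does not spell out the exact classification rule, so the following C# sketch is only illustrative: a hypothetical gradient-activity measure is thresholded to decide whether a block is treated as a structure block (Method_A) or a nonstructure block (Method_B). The block size, threshold, and activity measure are all assumptions, not the paper's criterion.

using System;

static class BlockClassifierSketch
{
    // Illustrative only: sum of absolute horizontal and vertical gradients
    // inside the block, compared against a hypothetical threshold.
    public static bool IsStructureBlock(byte[,] img, int bx, int by, int blockSize, int threshold)
    {
        int activity = 0;
        for (int y = by; y < by + blockSize; y++)
        {
            for (int x = bx; x < bx + blockSize; x++)
            {
                if (x + 1 < bx + blockSize)
                    activity += Math.Abs(img[y, x + 1] - img[y, x]);
                if (y + 1 < by + blockSize)
                    activity += Math.Abs(img[y + 1, x] - img[y, x]);
            }
        }
        return activity > threshold;   // true: encode with Method_A, else Method_B
    }
}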

Structural Region:

In this module, the selected image is converted into structure regions.

SUPER-SPATIAL STRUCTURE PREDICTION:

The proposed super-spatial prediction borrows the idea of motion prediction from

video coding. In motion prediction, we search an area in the reference frame to find the

best match of the current block, based on some distortion metric. The chosen reference

block becomes the predictor of the current block. The prediction residual and the motion

vector are then encoded and sent to the decoder. In super-spatial prediction, we search

within the previously encoded image region to find the prediction of an image block. In

order to find the optimal prediction, at this moment, we apply brute-force search. The

reference block that results in the minimum block difference is selected as the optimal

prediction. For simplicity, we use the sum of absolute difference (SAD) to measure the

block difference. Besides this direct block difference, we can also introduce additional H.264-like prediction modes, such as horizontal, vertical, and diagonal prediction. The best prediction mode will be encoded and sent to the decoder.
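A minimal sketch of the brute-force SAD search described above, under two simplifying assumptions that are mine rather than the paper's: the image is a 2-D byte array, and the "previously encoded region" is taken to be every candidate block whose rows lie entirely above the current block.

using System;

static class SuperSpatialSearchSketch
{
    // Sketch: find the reference block (top-left corner) with the minimum
    // sum of absolute differences (SAD) against the current n x n block.
    public static void FindBestMatch(byte[,] img, int cx, int cy, int n,
                                     out int bestX, out int bestY, out int bestSad)
    {
        bestX = 0; bestY = 0; bestSad = int.MaxValue;
        int width = img.GetLength(1);
        for (int ry = 0; ry + n <= cy; ry++)          // rows strictly above the current block
        {
            for (int rx = 0; rx + n <= width; rx++)
            {
                int sad = 0;
                for (int dy = 0; dy < n && sad < bestSad; dy++)   // early exit once worse than the best
                    for (int dx = 0; dx < n; dx++)
                        sad += Math.Abs(img[cy + dy, cx + dx] - img[ry + dy, rx + dx]);
                if (sad < bestSad) { bestSad = sad; bestX = rx; bestY = ry; }
            }
        }
    }
}

The chosen (bestX, bestY) plays the role of the motion vector in video coding: it is transmitted together with the prediction residual so the decoder can locate the same reference block.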

IMAGE CODING BASED ON SUPER-SPATIAL STRUCTURE PREDICTION:

The proposed image coding system has the following major steps. First, we

classify the image into NSRs and structure regions. The classification map is encoded

with an arithmetic encoder. Second, based on the classification map, our encoder switches

between the super-spatial prediction scheme to encode structure regions and the CALIC scheme

to encode NSRs. We observe that the super-spatial prediction has relatively high

computational complexity because it needs to find the best match of the current block

from previously reconstructed image regions. Using the Method_B classification, we can

classify the image into structure and NSRs and then apply the super-spatial prediction

just within the structure regions since its prediction gain in the nonstructure smooth

regions will be very limited. This will significantly reduce overall computational

complexity.

Verification:

We compare the coding bit rate of the proposed lossless image coding method

based on super-spatial prediction with CALIC, one of the best encoders for lossless image compression. We can see that the proposed scheme outperforms CALIC and saves the bit rate by up to 13%, especially for images with significant high-frequency components. The total bit rate consists of SRs, NSRs, and overhead information (including the classification map, reference block addresses, and prediction modes). We can see that although Method_B has low computational complexity, its performance loss is very small when compared to Method_A. We also examine prediction from previously reconstructed structure image regions. We can see

that limiting the prediction reference to structure regions only increases the overall

coding bit rate by less than 1%. This implies that most structure blocks can find their best matches

in the structure regions.

LANGUAGE SURVEY

Overview Of The .Net Framework:

The .NET Framework is a new computing platform that simplifies application

development in the highly distributed environment of the Internet. The .NET Framework

is designed to fulfill the following objectives:

• To provide a consistent object-oriented programming environment whether object code is stored and executed locally, executed locally but Internet-distributed, or executed remotely.

• To provide a code-execution environment that minimizes software deployment and versioning conflicts.

• To provide a code-execution environment that guarantees safe execution of code, including code created by an unknown or semi-trusted third party.

• To provide a code-execution environment that eliminates the performance problems of scripted or interpreted environments.

• To make the developer experience consistent across widely varying types of applications, such as Windows-based applications and Web-based applications.

• To build all communication on industry standards to ensure that code based on the .NET Framework can integrate with any other code.

The .NET Framework has two main components: the common language runtime

and the .NET Framework class library. The common language runtime is the foundation

of the .NET Framework. It manages code at execution time, while also enforcing strict type safety and other forms of code

accuracy that ensure security and robustness. In fact, the concept of code management is

a fundamental principle of the runtime. Code that targets the runtime is known as

managed code, while code that does not target the runtime is known as unmanaged code.

The class library, the other main component of the .NET Framework, is a comprehensive,

object-oriented collection of reusable types that you can use to develop applications

ranging from traditional command-line or graphical user interface (GUI) applications to

applications based on the latest innovations provided by ASP.NET, such as Web Forms

and XML Web services.


The .NET Framework can be hosted by unmanaged components that load the

common language runtime into their processes and initiate the execution of managed

code, thereby creating a software environment that can exploit both managed and

unmanaged features. The .NET Framework not only provides several runtime hosts, but

also supports the development of third-party runtime hosts. For example, ASP.NET hosts

the runtime to provide a scalable, server-side environment for managed code. ASP.NET

works directly with the runtime to enable Web Forms applications and XML Web

services, both of which are discussed later in this topic, with significant improvements that only managed code can offer, such as semi-trusted execution and secure isolated file storage. The following illustration shows the relationship of the common language runtime and the class library to your applications and to the overall

system.

Features Of The Common Language Runtime:

The common language runtime manages memory, thread execution, code

execution, code safety verification, compilation, and other system services. These

features are intrinsic to the managed code that runs on the common language runtime. The

following sections describe the main components and features of the .NET Framework in

greater detail. With regard to security, managed components are awarded varying

degrees of trust, depending on a number of factors that include their origin (such as the

Internet, enterprise network, or local computer). This means that a managed component

might or might not be able to perform file-access operations, registry-access operations,

or other sensitive functions, even if it is being used in the same active application. The

runtime enforces code access security.

The runtime also enforces code robustness by implementing a strict type- and

code-verification infrastructure called the common type system (CTS). The CTS ensures

that all managed code is self-describing. The various Microsoft and third-party language

compilers generate managed code that conforms to the CTS. This means that managed

code can consume other managed types and instances, while strictly enforcing type


fidelity and type safety.

Architecture of .NET Framework

In addition, the managed environment of the runtime eliminates many common

software issues. For example, the runtime automatically handles object layout and

manages references to objects, releasing them when they are no longer being used. This

automatic memory management resolves the two most common application errors,

memory leaks and invalid memory references. The runtime also accelerates developer

productivity. For example, programmers can write applications in their development

language of choice, yet take full advantage of the runtime, the class library, and


components written in other languages by other developers. Any compiler vendor who

chooses to target the runtime can do so. Language compilers that target the .NET

Framework make the features of the .NET Framework available to existing code written

in that language, greatly easing the migration process for existing applications.

While the runtime is designed for the software of the future, it also supports

software of today and yesterday. Interoperability between managed and unmanaged code

enables developers to continue to use necessary COM components and DLLs. The

runtime is designed to enhance performance. Although the common language runtime

provides many standard runtime services, managed code is never interpreted. A feature

called just-in-time (JIT) compiling enables all managed code to run in the native machine

language of the system on which it is executing. Meanwhile, the memory manager

removes the possibilities of fragmented memory and increases memory locality-of-

reference to further increase performance. Finally, the runtime can be hosted by high-

performance, server-side applications, such as Microsoft® SQL Server™ and Internet

Information Services (IIS).

Net Framework Class Library:

The .NET Framework class library is a collection of reusable types that tightly

integrate with the common language runtime. The class library is object oriented,

providing types from which your own managed code can derive functionality. This not

only makes the .NET Framework types easy to use, but also reduces the time associated

with learning new features of the .NET Framework. In addition, third-party components

can integrate seamlessly with classes in the .NET Framework. For example, the .NET

Framework collection classes implement a set of interfaces that you can use to develop

your own collection classes. Your collection classes will blend seamlessly with the

classes in the .NET Framework. As you would expect from an object-oriented class

library, the .NET Framework types enable you to accomplish a range of common


programming tasks, including tasks such as string management, data collection, database

connectivity, and file access. In addition to these common tasks, the class library includes

types that support a variety of specialized development scenarios. For example, you can

use the .NET Framework to develop the following types of applications and services:

Console applications, Windows GUI applications (Windows Forms), ASP.NET applications, XML Web services, and Windows services. For example, the Windows Forms

classes are a comprehensive set of reusable types that vastly simplify Windows GUI

development. If you write an ASP.NET Web Form application, you can use the Web

Forms classes. Client applications are the closest to a traditional style of application in

Windows-based programming. These are the types of applications that display windows

or forms on the desktop, enabling a user to perform a task.

Client applications include applications such as word processors and spreadsheets,

as well as custom business applications such as data-entry tools, reporting tools, and so

on. Client applications usually employ windows, menus, buttons, and other GUI elements,

and they likely access local resources such as the file system and peripherals such as

printers. Another kind of client application is the traditional ActiveX control (now

replaced by the managed Windows Forms control) deployed over the Internet as a Web

page. This application is much like other client applications: it is executed natively, has

access to local resources, and includes graphical elements. In the past, developers created

such applications using C/C++ in conjunction with the Microsoft Foundation Classes

(MFC) or with a rapid application development (RAD) environment such as Microsoft®

Visual Basic®. The .NET Framework incorporates aspects of these existing products into

a single, consistent development environment that drastically simplifies the development

of client applications.

The Windows Forms classes contained in the .NET Framework are designed to be

used for GUI development. You can easily create command windows, buttons, menus,

toolbars, and other screen elements with the flexibility necessary to accommodate

shifting business needs. For example, the .NET Framework provides simple properties to


adjust visual attributes associated with forms.

In some cases the underlying operating system does not support changing these

attributes directly, and in these cases the .NET Framework automatically recreates the

forms. This is one of many ways in which the .NET Framework integrates the developer

interface, making coding simpler and more consistent. Unlike ActiveX controls, Windows

Forms controls have semi-trusted access to a user's computer. This means that binary or

natively executing code can access some of the resources on the user's system (such as

GUI elements and limited file access) without being able to access or compromise other

resources.

Because of code access security, many applications that once needed to be

installed on a user's system can now be safely deployed through the Web. Your

applications can implement the features of a local application while being deployed like a

Web page.

Creating and Using Threads:

You create a new thread in Visual Basic .NET by declaring a variable of type

System.Threading.Thread and calling the constructor with the AddressOf statement and

the name of the procedure or method you want to execute on the new thread.
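Since the project's front end is C#.NET, the C# equivalent of the Visual Basic .NET pattern just described looks roughly like this (the worker method name is arbitrary):

using System;
using System.Threading;

class ThreadDemo
{
    static void Work()
    {
        Console.WriteLine("Running on a separate thread.");
    }

    static void Main()
    {
        // A delegate passed to the Thread constructor plays the role of
        // VB.NET's AddressOf operator.
        Thread myThread = new Thread(new ThreadStart(Work));
        myThread.Start();
        myThread.Join();   // wait for the worker to finish
    }
}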

Starting and Stopping Threads

To start the execution of a new thread, use the Start method, as in the following

code:

MyThread.Start()

To stop the execution of a thread, use the Abort method, as in the following code:

MyThread.Abort()

Besides starting and stopping threads, you can also pause threads by calling the Sleep or

Suspend methods, resume a suspended thread with the Resume method, and destroy a


thread using the Abort method, as in the following code:

Thread.Sleep(1000)   ' Sleep is Shared: it pauses the current thread for one second

MyThread.Suspend()

MyThread.Abort()

Thread Priorities:

Every thread has a Priority property that determines how large a time slice it

gets to execute. The operating system allocates longer time slices to high-priority threads

than it does to low-priority threads. New threads are created with the value of Normal,

but you can adjust the Priority property to any of the other values in the

System.Threading.ThreadPriority enumeration. See the ThreadPriority enumeration for a

detailed description of the various thread priorities.

Foreground and Background Threads:

A foreground thread runs indefinitely, while a background thread terminates once

the last foreground thread has stopped. You can use the IsBackground property to

determine or change the background status of a thread.

Changing Thread States:

Once a thread has started, you can call its methods to change its state. For

example, you can cause a thread to pause for a fixed number of milliseconds by calling

Thread.Sleep. The Sleep method takes as a parameter a timeout, which is the number of

milliseconds that the thread remains blocked. Calling

Thread.Sleep(System.Threading.Timeout.Infinite) causes a thread to sleep until it is

interrupted by another thread that calls Thread.Interrupt. The Thread.Interrupt method


wakes the destination thread out of any wait it may be in and causes an exception to be

raised. You can also pause a thread by calling Thread.Suspend. When a thread calls

Thread.Suspend on itself, the call blocks until another thread resumes it by calling

Thread.Resume. When a thread calls Thread.Suspend on another thread, the call is non-

blocking and causes the other thread to pause. Calling Thread.Resume breaks another

thread out of its suspended state and causes it to resume execution.

Unlike Thread.Sleep, Thread.Suspend does not immediately stop a thread; the

suspended thread does not pause until the common language runtime determines that it

has reached a safe point. The Thread.Abort method stops a running thread by raising a

ThreadAbortException exception that causes the thread to die.
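A small C# sketch of the Sleep/Interrupt interaction described above; the timing values are arbitrary:

using System;
using System.Threading;

class InterruptDemo
{
    static void Sleeper()
    {
        try
        {
            // Block indefinitely until some other thread interrupts us.
            Thread.Sleep(Timeout.Infinite);
        }
        catch (ThreadInterruptedException)
        {
            Console.WriteLine("Woken up by Thread.Interrupt.");
        }
    }

    static void Main()
    {
        Thread t = new Thread(Sleeper);
        t.Start();
        Thread.Sleep(500);   // give the worker time to block
        t.Interrupt();       // raises ThreadInterruptedException in the sleeping thread
        t.Join();
    }
}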

Overview Of Ado.Net:

ADO.NET provides consistent access to data sources such as Microsoft SQL

Server, as well as data sources exposed via OLE DB and XML. Data-sharing consumer

applications can use ADO.NET to connect to these data sources and retrieve, manipulate,

and update data.

ADO.NET cleanly factors data access from data manipulation into discrete

components that can be used separately or in tandem. ADO.NET includes .NET data

providers for connecting to a database, executing commands, and retrieving results.

Those results are either processed directly, or placed in an ADO.NET DataSet object in

order to be exposed to the user in an ad-hoc manner, combined with data from multiple

sources, or remoted between tiers. The ADO.NET DataSet object can also be used

independently of a .NET data provider to manage data local to the application or sourced

from XML.


The ADO.NET classes are found in System.Data.dll, and are integrated with the

XML classes found in System.Xml.dll. When compiling code that uses the System.Data

namespace, reference both System.Data.dll and System.Xml.dll. ADO.NET provides functionality to developers writing managed code similar to the functionality provided to native COM developers by ADO, although there are differences between ADO and ADO.NET.

Design Goals for ADO.NET

As application development has evolved, new applications have become loosely

coupled based on the Web application model. More and more of today's applications use

XML to encode data to be passed over network connections. Web applications use HTTP

as the fabric for communication between tiers, and therefore must explicitly handle

maintaining state between requests. This new model is very different from the connected,

tightly coupled style of programming that characterized the client/server era, where a

connection was held open for the duration of the program's lifetime and no special

handling of state was required.

In designing tools and technologies to meet the needs of today's developer,

Microsoft recognized that an entirely new programming model for data access was

needed, one that is built upon the .NET Framework. Building on the .NET Framework

ensures that the data access technology would be uniform — components would share a

common type system, design patterns, and naming conventions. ADO.NET was designed

to meet the needs of this new programming model: disconnected data architecture, tight

integration with XML, common data representation with the ability to combine data from

multiple and varied data sources, and optimized facilities for interacting with a database,

all native to the .NET Framework.


Leverage Current Ado Knowledge:

The design for ADO.NET addresses many of the requirements of today's

application development model. At the same time, the programming model stays as

similar as possible to ADO, so current ADO developers do not have to start from the

beginning in learning a brand-new data access technology.

ADO.NET is an intrinsic part of the .NET Framework without seeming

completely foreign to the ADO programmer. ADO.NET coexists with ADO. While most

new .NET-based applications will be written using ADO.NET, ADO remains available to

the .NET programmer through .NET COM interoperability services.

Support The N-Tier Programming Model:

ADO.NET provides first-class support for the disconnected, n-tier programming

environment for which many new applications are written. The concept of working with a

disconnected set of data has become a focal point in the programming model. The

ADO.NET solution for n-tier programming is the DataSet.

Integrate XML Support

XML and data access are intimately tied — XML is all about encoding data, and

data access is increasingly becoming all about XML. The .NET Framework does not just

support Web standards — it is built entirely on top of them. XML support is built into

ADO.NET at a very fundamental level.

The XML classes in the .NET Framework and ADO.NET are part of the same

architecture — they integrate at many different levels. You no longer have to choose

between the data access set of services and their XML counterparts; the ability to cross

over from one to the other is inherent in the design of both.

Ado.Net Architecture:

Data processing has traditionally relied primarily on a connection-based, two-tier

model. As data processing increasingly uses multi-tier architectures, programmers are


switching to a disconnected approach to provide better scalability for their applications.

Xml And Ado.Net:

ADO.NET leverages the power of XML to provide disconnected access to data.

ADO.NET was designed hand-in-hand with the XML classes in the .NET Framework —

both are components of a single architecture. ADO.NET and the XML classes in the .NET

Framework converge in the DataSet object. The DataSet can be populated with data from

an XML source, whether it is a file or an XML stream. The DataSet can be written as

World Wide Web Consortium (W3C) compliant XML, including its schema as XML

Schema definition language (XSD) schema, regardless of the source of the data in the

DataSet. Because the native serialization format of the DataSet is XML, it is an excellent

medium for moving data between tiers, making the DataSet an optimal choice for

remoting data and schema context to and from an XML Web service.
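For instance, here is a minimal sketch using the DataSet's standard XML methods; the table, column, and file names are made up for illustration:

using System.Data;

class DataSetXmlDemo
{
    static void Main()
    {
        DataSet ds = new DataSet("Blocks");
        DataTable t = ds.Tables.Add("Block");
        t.Columns.Add("Id", typeof(int));
        t.Columns.Add("IsStructure", typeof(bool));
        t.Rows.Add(1, true);

        // The DataSet serializes itself, optionally with its schema, as XML.
        ds.WriteXml("blocks.xml", XmlWriteMode.WriteSchema);

        DataSet copy = new DataSet();
        copy.ReadXml("blocks.xml");   // rebuilt from the XML file or stream
    }
}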

Ado.Net Components:

The ADO.NET components have been designed to factor data access from data

manipulation. There are two central components of ADO.NET that accomplish this: the

DataSet, and the .NET data provider, which is a set of components including the

Connection, Command, DataReader, and DataAdapter objects. The ADO.NET DataSet is

the core component of the disconnected architecture of ADO.NET. The DataSet is

explicitly designed for data access independent of any data source. As a result it can be

used with multiple and differing data sources, used with XML data, or used to manage

data local to the application. The DataSet contains a collection of one or more DataTable

objects made up of rows and columns of data, as well as primary key, foreign key,

constraint, and relation information about the data in the DataTable objects. The

Connection object provides connectivity to a data source.

The Command object enables access to database commands to return data, modify

data, run stored procedures, and send or retrieve parameter information. The DataReader

provides a high-performance stream of data from the data source. Finally, the

DataAdapter provides the bridge between the DataSet object and the data source. The


DataAdapter uses Command objects to execute SQL commands at the data source to both

load the DataSet with data, and reconcile changes made to the data in the DataSet back to

the data source. You can write .NET data providers for any data source.
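A minimal sketch of these components working together, assuming the System.Data.SqlClient provider; the connection string, table name, and column name are placeholders, not part of the project:

using System;
using System.Data;
using System.Data.SqlClient;

class AdoNetDemo
{
    static void Main()
    {
        // Placeholder connection string for a local SQL Server database.
        string connStr = "Data Source=(local);Initial Catalog=ImageDb;Integrated Security=true";
        using (SqlConnection conn = new SqlConnection(connStr))
        {
            // The DataAdapter bridges the data source and the disconnected DataSet.
            SqlDataAdapter adapter = new SqlDataAdapter("SELECT * FROM Images", conn);
            DataSet ds = new DataSet();
            adapter.Fill(ds, "Images");   // Fill opens and closes the connection itself

            foreach (DataRow row in ds.Tables["Images"].Rows)
                Console.WriteLine(row["Name"]);   // "Name" is a hypothetical column
        }
    }
}

To push edits back to the database, a SqlCommandBuilder can generate the adapter's INSERT, UPDATE, and DELETE commands before calling adapter.Update(ds, "Images").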

ADO.NET Architecture

Remoting Or Marshaling Data Between Tiers And Clients:

The design of the DataSet enables you to easily transport data to clients over the

Web using XML Web services, as well as allowing you to marshal data between .NET

components using .NET Remoting services. You can also remote a strongly typed DataSet in this fashion. Note that DataTable objects can also be used with remoting services, but

cannot be transported via an XML Web service.


Data Flow Diagram:

The original image is classified into blocks, converted into structure regions, and encoded with super-spatial prediction at the server; the receiver then decodes the image.


UML DIAGRAM:

Unified Modeling Language (UML):

UML is a method for describing the system architecture in detail using a blueprint. UML represents a collection of best engineering practices that have proven successful in the modeling of large and complex systems. The UML is a very important part of developing object-oriented software and the software development process. The

UML uses mostly graphical notations to express the design of software projects. Using

the UML helps project teams communicate, explore potential designs, and validate the

architectural design of the software.

Definition:

UML is a general-purpose visual modeling language that is used to specify,

visualize, construct, and document the artifacts of the software system.

UML is a language:


It provides a vocabulary and rules for communication, operating on conceptual and physical representations. So it is a modeling language.

UML Specifying:

Specifying means building models that are precise, unambiguous and complete. In

particular, the UML addresses the specification of all the important analysis, design, and

implementation decisions that must be made in developing and displaying a software

intensive system.

UML Visualization:

The UML includes both graphical and textual representations. It makes it easy to visualize the system and supports better understanding.

UML Constructing:

UML models can be directly connected to a variety of programming languages

and it is sufficiently expressive and free from any ambiguity to permit the direct

execution of models.

UML Documenting:

UML provides a variety of documentation in addition to raw executable code.


Goal of UML:

The primary goals in the design of the UML were:

Provide users with a ready-to-use, expressive visual modeling language so

they can develop and exchange meaningful models.

Provide extensibility and specialization mechanisms to extend the core

concepts.

Be independent of particular programming languages and development

processes.

Provide a formal basis for understanding the modeling language.

Encourage the growth of the OO tools market.

Support higher-level development concepts such as collaborations,

frameworks, patterns and components.

Integrate best practices.

Uses of UML:

The UML is intended primarily for software intensive systems. It has been used

effectively for such domains as

Enterprise Information System

Banking and Financial Services

Telecommunications

Transportation

Defense/Aerospace

Retails


Rules of UML:

The UML has semantic rules for

NAMES: What we call things, relationships, and diagrams.

SCOPE: The context that gives specific meaning to a name.

VISIBILITY: How those names can be seen and used by others.

INTEGRITY: How things properly and consistently relate to one another.

EXECUTION: What it means to run or simulate a dynamic model.

Building blocks of UML

The vocabulary of the UML encompasses 3 kinds of building blocks

1. Things

2. Relationships

3. Diagrams

Things:

Things are the data abstractions that are first-class citizens in a model. Things are of four types:

Structural Things

Behavioral Things

Grouping Things

Annotational Things

Relationships:


Relationships tie the things together. Relationships in the UML are

Dependency

Association

Generalization

Realization

UML diagrams:

A diagram is the graphical presentation of a set of elements, most often rendered

as a connected graph of vertices (things) and arcs (relationships).

There are two types of diagrams, they are:

structural diagrams

behavioral diagrams

Structural Diagrams:

The UML‘s four structural diagrams exist to visualize, specify, construct and document the static aspects of a system. We can view the static parts of a system using

one of the following diagrams.

Class Diagram:

A class diagram is a type of static structure diagram that describes the structure of a

system by showing the system's classes, their attributes, and the relationships between the

classes.


Class: Class is a description of a set of objects that share the same

attributes, operations, relationships and semantics. A class implements one

or more interfaces.

Interface: Interface is a collection of operations that specify a service of a

class or component. An interface describes the externally visible behavior

of that element. An interface might represent the complete behavior of a

class or component.

Collaboration: Collaboration defines an interaction and is a society of roles

and other elements that work together to provide some cooperative

behavior. So collaborations have structural as well as behavioral dimensions. These collaborations represent the implementation of patterns that make up a system. Relationships include:

Dependency: Dependency is a semantic relationship between two things in

which a change to one thing may affect the semantics of the other thing.

Generalization: A generalization is a specialization / generalization

relationship in which objects of the specialized element (child) are

substitutable for objects of the generalized element (parent).

Association: An association is a structural relationship that describes a set

of links, a link being a connection among objects. Aggregation is a special

kind of association, representing a structural relationship between a whole

and its parts.

Use case Diagram:

A use case in software engineering and systems engineering is a description of steps or

actions between a user or "actor" and a software system which lead the user towards

something useful. The user or actor might be a person or something more abstract, such

as an external software system or manual process. Use cases are a software modeling


technique that helps developers determine which features to implement and how to

gracefully resolve errors.

Use case: the Sender selects an image, converts it into structure regions, and encodes it with super-spatial prediction; the Receiver decodes it.

CLASS DIAGRAM:

A class diagram is a type of static structure diagram that describes the structure of

a system by showing the system's classes, their attributes, and the relationships between

the classes.

The class diagram contains the following classes, each listed with its attributes and operations:

Source1 (DataSize; Sending()), Transmission (Node IPData; Transfer()), Destination1 (DataSize; Receiving()),

Source (Image; Sending()), Image Selection (Image, Structural Region; Selection()), Super Spatial (Encode; Encoding()), Destination (Decode; Receiving()).

SEQUENCE DIAGRAM:

Sequence diagram in Unified Modeling Language (UML) is a kind of interaction

diagram that shows how processes operate with one another and in what order. It is a

construct of a Message Sequence Chart.


Sequence: the Sender passes the image to Image Selection, which converts it into structure regions; Super Spatial encodes it; the Receiver decodes it.

Collaboration Diagram:

Collaboration defines an interaction and is a society of roles and other elements that work

together to provide some cooperative behavior. So collaborations have structural as well

as behavioral dimensions. These collaborations represent the implementation of patterns

that make up a system.


Collaboration: the numbered messages exchanged among Sender, Image Selection, Super Spatial, and Receiver are 1: Image, 2: Convert into structural region, 3: Encode with super spatial, 4: Decode.

Activity Diagram:

Activity flow: Sender, Image Selection, Convert to Structural Region, Encode with Super-spatial, Decode, Receiver.


Literature Survey:

Context-Based, Adaptive, Lossless Image Coding

Abstract:

We propose a context-based, adaptive, lossless image codec (CALIC). The codec obtains

higher lossless compression of continuous-tone images than other lossless image coding

techniques in the literature. This high coding efficiency is accomplished with relatively

low time and space complexities. CALIC puts heavy emphasis on image data modeling.

A unique feature of CALIC is the use of a large number of modeling contexts (states) to

condition a nonlinear predictor and adapt the predictor to varying source statistics. The

nonlinear predictor can correct itself via an error feedback mechanism by learning from

its mistakes under a given context in the past. In this learning process, CALIC estimates

only the expectation of prediction errors conditioned on a large number of different

contexts rather than estimating a large number of conditional error probabilities. The


former estimation technique can afford a large number of modeling contexts without

suffering from the context dilution problem of insufficient counting statistics as in the

latter approach, nor from excessive memory use. The low time and space complexities

are also attributed to efficient techniques for forming and quantizing modeling contexts.

Conclusion:

CALIC was designed to be practical, and suitable for both software and hardware

implementations. Compared with preceding context-based lossless image compression

algorithms, CALIC is more efficient, and also simpler in algorithm structure. Even more

significantly, the low complexity and algorithm simplicity are achieved with compression

performance superior to all existing methods known to us. Since the entropy coding step

is independent of source modeling in CALIC, and is required by all lossless image codecs, the arithmetic and logic operations per pixel involved in the remaining steps are counted and tabulated. Note that the complexity analysis above only applies to the continuous-tone mode; the binary mode is much simpler. Steps 1 and 3, the two most expensive ones, can easily be executed in parallel and by simple hardware, as is self-evident in (2) and (7). For instance, the two quantities can be computed simultaneously by two identical circuits, cutting the computation time in half; in context quantization, all the elements in a texture pattern of (7) in the continuous-tone mode, or of (12) in the binary mode, can be set simultaneously by simple circuitry. Because the modeling and prediction algorithms do not require the support of complex data structures (only one-dimensional arrays are

used), their hardware implementations should not be difficult.

The LOCO-I Lossless Image Compression Algorithm: Principles and

Standardization into JPEG-LS

Abstract:

LOCO-I (LOw COmplexity LOssless Compression for Images) is the algorithm at the

core of the new ISO/ITU standard for lossless and near-lossless compression of

continuous-tone images, JPEG-LS. It is conceived as a “low complexity projection” of


the universal context modeling paradigm, matching its modeling unit to a simple coding

unit. By combining simplicity with the compression potential of context models, the

algorithm “enjoys the best of both worlds.” It is based on a simple fixed context model,

which approaches the capability of the more complex universal techniques for capturing

high-order dependencies. The model is tuned for efficient performance in conjunction

with an extended family of Golomb-type codes, which are adaptively chosen, and an

embedded alphabet extension for coding of low-entropy image regions. LOCO-I attains

compression ratios similar or superior to those obtained with state-of-the-art schemes

based on arithmetic coding. Moreover, it is within a few percentage points of the best

available compression ratios, at a much lower complexity level. We discuss the principles

underlying the design of LOCO-I, and its standardization into JPEG-LS.
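For intuition about the Golomb-type codes mentioned above, the sketch below encodes non-negative mapped prediction errors with Golomb-Rice codes, choosing the parameter k from running accumulators. It only loosely resembles JPEG-LS and is not the normative procedure.

# Simplified Golomb-Rice coding of non-negative mapped errors (illustrative).
def rice_encode(value, k):
    q, r = value >> k, value & ((1 << k) - 1)
    # Unary quotient, '0' terminator, then k remainder bits.
    return "1" * q + "0" + (format(r, "0{}b".format(k)) if k else "")

def choose_k(acc_magnitude, count):
    # Smallest k with (count << k) >= accumulated magnitude.
    k = 0
    while (count << k) < acc_magnitude:
        k += 1
    return k

acc, n, bits = 1, 1, []
for e in [3, 5, 2, 8, 1, 4]:           # example mapped prediction errors
    k = choose_k(acc, n)
    bits.append(rice_encode(e, k))
    acc += e
    n += 1
print(bits)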

Conclusion:

Bit Stream: The compressed data format for JPEG-LS closely follows the one specified

for JPEG [20]. The same

high level syntax applies, with the bit stream organized into frames, scans, and restart

intervals within a scan, markers specifying the various structural parts, and marker

segments specifying the various parameters. New marker assignments are compatible

with [20]. One difference, however, is the existence of default values for many of the

JPEG-LS coding parameters (e.g., gradient quantization thresholds, reset threshold), with

marker segments used to override these values. In addition, the method for easy detection

of marker segments differs from the one in [20]. Specifically, a single byte of coded data

with the hexadecimal value “FF” is followed by the insertion of a

single bit ‘0,’ which occupies the most significant bit position of the next byte. This

technique is based on the fact that all markers start with an “FF” byte, followed by a bit

“1.”
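The marker-detection rule above amounts to simple bit stuffing. The sketch below is a hypothetical bit writer, not the reference JPEG-LS implementation: after emitting a 0xFF data byte it forces a '0' bit into the most significant position of the next byte, so coded data can never imitate a marker, which always starts with 0xFF followed by a '1' bit.

# Schematic bit writer with 0xFF bit stuffing (illustrative only).
class StuffingBitWriter:
    def __init__(self):
        self.bytes_out = bytearray()
        self.cur = 0        # bits accumulated for the byte being built
        self.nbits = 0

    def put_bit(self, b):
        self.cur = (self.cur << 1) | (b & 1)
        self.nbits += 1
        if self.nbits == 8:
            self._flush_byte()

    def _flush_byte(self):
        self.bytes_out.append(self.cur)
        was_ff = (self.cur == 0xFF)
        self.cur, self.nbits = 0, 0
        if was_ff:
            # Stuff a single '0' bit as the MSB of the following byte.
            self.nbits = 1

w = StuffingBitWriter()
for _ in range(16):          # sixteen '1' bits would otherwise give FF FF
    w.put_bit(1)
print(w.bytes_out.hex())     # -> 'ff7f': second byte carries the stuffed 0 MSB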

Color Images: For encoding images with more than one component (e.g., color images),

JPEG-LS supports combinations of single-component and multicomponent scans. Section

III describes the encoding process for a single-component scan. For multicomponent

scans, a single set of context counters is used for the regular-mode contexts across all components in the scan. Prediction and context determination are performed as

in the single component case, and are component independent. Thus, the use of possible

correlation between color planes is limited to sharing statistics, collected from all planes.

For some color spaces (e.g., RGB),

good decorrelation can be obtained through simple lossless color transforms as a pre-processing step to JPEG-LS. For example, compressing the (R-G, G, B-G) representations of the images of Table I, with differences taken modulo the alphabet size, yields savings between 25% and 30% over compressing the respective RGB

representations. These savings are similar to those obtained with more elaborate schemes

which do not assume prior knowledge of the color space [26]. Given the variety of color

spaces, the standardization of specific filters was considered beyond the scope of JPEG-

LS, and color transforms are handled at the application level.
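As an illustration of such a pre-processing transform, the sketch below applies the reversible (R-G, G, B-G) de-correlation with differences reduced modulo a 256-symbol alphabet; it is a generic application-level example, not part of JPEG-LS itself.

import numpy as np

# Reversible (R-G, G, B-G) color transform over an 8-bit alphabet.
def forward(rgb):
    r, g, b = (rgb[..., i].astype(np.int16) for i in range(3))
    return np.stack([(r - g) % 256, g, (b - g) % 256], axis=-1).astype(np.uint8)

def inverse(t):
    d1, g, d2 = (t[..., i].astype(np.int16) for i in range(3))
    return np.stack([(d1 + g) % 256, g, (d2 + g) % 256], axis=-1).astype(np.uint8)

rgb = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
assert np.array_equal(inverse(forward(rgb)), rgb)   # lossless round trip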

In JPEG-LS, the data in a multicomponent scan can be interleaved either by lines (line-

interleaved mode) or by samples (sample-interleaved mode). In line-interleaved mode,

assuming an image that is not subsampled in the vertical direction, a full line of each

component is encoded before starting the encoding of the next line (for images

subsampled in the vertical direction, more than one line from a component are interleaved

before starting with the next component). The index to the table used to adapt the

elementary Golomb code in run mode is component-

dependent. In sample-interleaved mode, one sample from each component is processed in

turn, so that all components which belong to the same scan must have the same

dimensions. The runs are common to all the components in the scan, with run mode

selected only when the corresponding condition is satisfied for

all the components. Likewise, a run is interrupted whenever so dictated by any of the

components. Thus, a single run length, common to all components, is encoded. This

approach is convenient for images in which runs tend to be synchronized between

components (e.g., synthetic images), but should be avoided in cases where run statistics

differ significantly across components, since a component may systematically cause run

interruptions for another component with otherwise long runs. For example, in a CMYK representation, the runs in the K plane tend to be longer than in the other planes, so it is best to encode that plane in a separate scan.
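The two interleaving modes can be summarized by their traversal orders. The generators below are only a schematic rendering of the description above, assuming uniform component dimensions and no subsampling.

# Traversal order of (component, row, column) samples in the two modes.
def line_interleaved(h, w, ncomp):
    for y in range(h):
        for c in range(ncomp):        # a full line of each component in turn
            for x in range(w):
                yield (c, y, x)

def sample_interleaved(h, w, ncomp):
    for y in range(h):
        for x in range(w):
            for c in range(ncomp):    # one sample from each component in turn
                yield (c, y, x)

print(list(line_interleaved(1, 2, 2)))    # [(0,0,0), (0,0,1), (1,0,0), (1,0,1)]
print(list(sample_interleaved(1, 2, 2)))  # [(0,0,0), (1,0,0), (0,0,1), (1,0,1)]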

A Universal Algorithm for Sequential Data Compression

Abstract:

A universal algorithm for sequential data compression is presented. Its

performance is investigated with respect to a nonprobabilistic model of constrained

sources. The compression ratio achieved by the proposed universal code uniformly

approaches the lower bounds on the compression ratios attainable by block-to-variable

codes and variable-to-block codes designed to match a completely specified source.

Conclusion:

IN MANY situations arising in digital communications and data processing, the

encountered strings of data display various structural regularities or are otherwise subject

to certain constraints, thereby allowing for storage and time-saving techniques of data

compression. Given a discrete data source, the problem of data compression is first to

identify the limitations of the source, and second to devise a coding scheme which,

subject to certain performance criteria, will best compress the given source.

Once the relevant source parameters have been identified, the problem reduces to one of

minimum-redundancy coding. This phase of the problem has received extensive

treatment in the literature [1]–[7].

When no a priori knowledge of the source characteristics is available, and if

statistical tests are either impossible or unreliable, the problem of data compression

becomes considerably more complicated. In order to overcome these difficulties one must

resort to universal coding schemes whereby the coding process is interlaced with a

learning process for the varying source characteristics [8], [9]. Such

coding schemes inevitably require a larger working memory space and generally employ

performance criteria that are appropriate for a wide variety of sources. In this paper, we

describe a universal coding scheme which can be applied to any discrete source and


whose performance is comparable to certain optimal fixed code book schemes designed

for completely specified sources. For lack of adequate criteria, we do not attempt to rank

the proposed scheme with respect to other possible universal coding schemes. Instead, for

the broad class of sources defined in Section III, we derive upper bounds on the

compression efficiency attainable with full a priori knowledge of the source by fixed

code book schemes, and then show that the efficiency of our universal code with no a

priori knowledge of the source approaches those bounds.

The proposed compression algorithm is an adaptation of a simple copying

procedure discussed recently [10] in a study on the complexity of finite sequences.

Basically, we employ the concept of encoding future segments of the source-output via

maximum-length copying from a buffer containing the recent past output. The

transmitted codeword consists of the buffer address and the length of the copied segment.

With a predetermined initial load of the buffer and the information contained in the

codewords, the source data can readily be reconstructed at the decoding end of the

process. The main drawback of the proposed algorithm is its susceptibility to error

propagation in the event of a channel error.
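The copying procedure described above is the scheme now commonly known as LZ77. The toy parser below, a simplified sliding-window implementation rather than the paper's exact construction, emits (offset, length, next-symbol) codewords that reference the recent past output.

# Toy LZ77-style parser: each codeword is (offset, length, next_symbol).
def lz77_encode(data, window=255):
    out, i = [], 0
    while i < len(data):
        best_len, best_off = 0, 0
        for j in range(max(0, i - window), i):
            length = 0
            # Extend the match while it stays inside the already-seen buffer.
            while (i + length < len(data) and j + length < i and
                   data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_len, best_off = length, i - j
        nxt = data[i + best_len] if i + best_len < len(data) else ""
        out.append((best_off, best_len, nxt))
        i += best_len + 1
    return out

print(lz77_encode("abababcabab"))
# [(0, 0, 'a'), (0, 0, 'b'), (2, 2, 'a'), (4, 1, 'c'), (7, 4, '')]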

Screen Shots:

(Application screenshots.)

Conclusion:

In this work, we have developed a simple yet efficient image prediction scheme,

called super-spatial prediction. It is motivated by motion prediction in video coding,

attempting to find an optimal prediction of structure components within previously

encoded image regions. When compared to VQ-based image encoders, it has the

flexibility to incorporate multiple H.264-style prediction modes. When compared to other

neighborhood-based prediction methods, such as GAP and H.264 Intra prediction, it allows a block to find its best match from the whole previously encoded image, which significantly reduces

the prediction residual by up to 79%. We classified an image into structure regions and

NSRs. The structure regions are encoded with super-spatial prediction while the NSRs

are encoded with existing image compression schemes, such as CALIC. Our extensive

experimental results demonstrated that the proposed scheme is very efficient in lossless

image compression, especially for images with significant structure components. In our

future work, we shall develop fast and efficient algorithms to further reduce the

complexity of super-spatial prediction. We also notice that when the encoder switches between structure regions and NSRs, the prediction context is broken, which degrades the overall coding

performance. In our future work, we shall investigate more efficient schemes for context

switching.
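As a minimal illustration of the matching step only, the sketch below searches the previously encoded part of the image for the best match of one structure block and returns the displacement and residual. It assumes blocks are coded in raster order, uses assumed names and parameters, and omits the report's H.264-style directional modes and the CALIC coding of the NSRs.

import numpy as np

# Minimal super-spatial matching sketch for a block at (by, bx):
# search candidate positions that lie entirely in the already-encoded
# region (fully above the block, or in the same block row but strictly
# to its left) and keep the smallest sum of absolute differences (SAD).
def best_match(img, by, bx, bs=8):
    img = img.astype(np.int32)
    target = img[by:by + bs, bx:bx + bs]
    h, w = img.shape
    best = None                      # (sad, dy, dx)
    for y in range(0, by + 1):
        max_x = w - bs if y + bs <= by else bx - bs
        for x in range(0, max_x + 1):
            cand = img[y:y + bs, x:x + bs]
            sad = int(np.abs(cand - target).sum())
            if best is None or sad < best[0]:
                best = (sad, y - by, x - bx)
    dy, dx = best[1], best[2]        # undefined only for the very first block
    residual = target - img[by + dy:by + dy + bs, bx + dx:bx + dx + bs]
    return dy, dx, residual          # displacement + residual to be entropy coded

img = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
dy, dx, res = best_match(img, 16, 16)
print(dy, dx, float(np.abs(res).mean()))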

REFERENCES

[1] X. Wu and N. Memon, “Context-based, adaptive, lossless image coding,” IEEE Trans. Commun., vol. 45, no. 4, pp. 437–444, Apr. 1997.

[2] M. J. Weinberger, G. Seroussi, and G. Sapiro, “The LOCO-I lossless image compression algorithm: Principles and standardization into JPEG-LS,” IEEE Trans. Image Process., vol. 9, no. 8, pp. 1309–1324, Aug. 2000.

[3] J. Ziv and A. Lempel, “A universal algorithm for sequential data compression,” IEEE Trans. Inform. Theory, vol. IT-23, no. 3, pp. 337–343, May 1977.

[4] J. Ziv and A. Lempel, “Compression of individual sequences via variable-rate coding,” IEEE Trans. Inform. Theory, vol. IT-24, no. 5, pp. 530–536, Sep. 1978.

[5] V. Sitaram, C. Huang, and P. Israelsen, “Efficient codebooks for vector quantization image compression with an adaptive tree search algorithm,” IEEE Trans. Commun., vol. 42, no. 11, pp. 3027–3033, Nov. 1994.

[6] J. Storer, “Lossless image compression using generalized LZ1-type methods,” in Proc. IEEE Data Compression Conference (DCC’96), pp. 290–299.

[7] D. Chen and A. Bovik, “Visual pattern image coding,” IEEE Trans. Commun., vol. 38, no. 12, pp. 2137–2146, Dec. 1990.

[8] F. Wu and X. Sun, “Image compression by visual pattern vector quantization (VPVQ),” in Proc. IEEE Data Compression Conf. (DCC’2008), pp. 123–131.

[9] [Online]. Available: http://wftp3.itu.int/av-arch/video-site/0005_Osa/a15j19.doc

[10] T. Wiegand, G. J. Sullivan, G. Bjøntegaard, and A. Luthra, “Overview of the H.264/AVC video coding standard,” IEEE Trans. Circuits Syst. Video Technol., vol. 13, no. 7, Jul. 2003.