DIP Image Processing Fundamentals


Image Processing Fundamentals

CS308 Data Structures

Fall 2001

How are images represented in the computer?

Color images

Image formation

• There are two parts to the image formation process:

– The geometry of image formation, which determines where in the image plane the projection of a point in the scene will be located.

– The physics of light, which determines the brightness of a point in the image plane as a function of illumination and surface properties.

A Simple model of image formation

• The scene is illuminated by a single source.

• The scene reflects radiation towards the camera.

• The camera senses it via chemicals on film.

Pinhole camera

• This is the simplest device to form an image of a 3D scene on a 2D surface.

• Straight rays of light pass through a “pinhole” and form an inverted image of the object on the image plane.

x = f·X / Z

y = f·Y / Z

(where (X, Y, Z) is a scene point, f is the focal length, and (x, y) is its projection on the image plane)
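The perspective equations above can be checked numerically; a minimal sketch (the function and struct names are hypothetical, not part of the course code):

```cpp
#include <cassert>

// Perspective projection of a 3D scene point (X, Y, Z) through a pinhole
// with focal length f onto the image plane: x = f*X/Z, y = f*Y/Z.
struct Point2D { double x, y; };

Point2D project(double f, double X, double Y, double Z) {
    return { f * X / Z, f * Y / Z };
}
```

A point twice as far away projects half as large, which is the familiar perspective foreshortening.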

Camera optics

• In practice, the aperture must be larger to admit more light.

• Lenses are placed in the aperture to focus the bundle of rays from each scene point onto the corresponding point in the image plane.

Image formation (cont’d)

• Optical parameters of the lens
– lens type
– focal length
– field of view

• Photometric parameters
– type, intensity, and direction of illumination
– reflectance properties of the viewed surfaces

• Geometric parameters
– type of projection
– position and orientation of camera in space
– perspective distortions introduced by the imaging process

Image distortion

What is light?

• The visible portion of the electromagnetic (EM) spectrum.

• It occurs between wavelengths of approximately 400 and 700 nanometers.

Short wavelengths

• Different wavelengths of radiation have different properties.

• In the x-ray region of the spectrum, radiation carries sufficient energy to penetrate a significant volume of material.

Long wavelengths

• Copious quantities of infrared (IR) radiation are emitted from warm objects (e.g., locate people in total darkness).

Long wavelengths (cont’d)

• “Synthetic aperture radar” (SAR) imaging techniques use an artificially generated source of microwaves to probe a scene.

• SAR is unaffected by weather conditions and clouds (e.g., has provided us images of the surface of Venus).

Range images

• An array of distances to the objects in the scene.

• They can be produced by sonar or by using laser rangefinders.

Sonic images

• Produced by the reflection of sound waves off an object.

• High sound frequencies are used to improve resolution.

CCD (Charge-Coupled Device) cameras

• Tiny solid-state cells convert light energy into electrical charge.

• The image plane acts as a digital memory that can be read row by row by a computer.

Frame grabber

• Usually, a CCD camera plugs into a computer board (frame grabber).

• The frame grabber digitizes the signal and stores it in its memory (frame buffer).

Image digitization

• Sampling means measuring the value of an image at a finite number of points.

• Quantization is the representation of the measured value at the sampled point by an integer.

Image digitization (cont’d)

Image quantization (example)

• 256 gray levels (8 bits/pixel), 32 gray levels (5 bits/pixel), 16 gray levels (4 bits/pixel)

• 8 gray levels (3 bits/pixel), 4 gray levels (2 bits/pixel), 2 gray levels (1 bit/pixel)
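Reducing an 8-bit image to fewer gray levels, as in the example above, can be sketched as follows (the helper name is hypothetical):

```cpp
#include <cassert>

// Requantize an 8-bit gray value (0..255) to 2^q levels, q in 1..8.
// The result is mapped back into the 0..255 range so images remain comparable.
int requantize(int val, int q) {
    int levels = 1 << q;        // number of gray levels, 2^q
    int step   = 256 / levels;  // width of each quantization bin
    int bin    = val / step;    // which bin the value falls into
    return bin * step;          // representative value of the bin
}
```

With q = 1 only two levels survive (0 and 128 here), which produces the binary-looking result shown in the 1 bit/pixel example.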

Image sampling (example) original image sampled by a factor of 2

sampled by a factor of 4 sampled by a factor of 8

Digital image

• An image is represented by a rectangular array of integers.

• An integer represents the brightness or darkness of the image at that point.

• N: # of rows, M: # of columns, Q: # of gray levels
– N = 2^n, M = 2^m, Q = 2^q (q is the # of bits/pixel)
– Storage requirements: N·M·q bits (e.g., N = M = 1024, q = 8 requires 1 MB)

f(0,0)    f(0,1)    ...  f(0,M-1)
f(1,0)    f(1,1)    ...  f(1,M-1)
...       ...       ...  ...
f(N-1,0)  f(N-1,1)  ...  f(N-1,M-1)

Image coordinate system


Image file formats

• Many image formats adhere to the simple model shown below (line by line, no breaks between lines).

• The header contains at least the width and height of the image.

• Most headers begin with a signature or “magic number” - a short sequence of bytes for identifying the file format.

Common image file formats

• GIF (Graphics Interchange Format)
• PNG (Portable Network Graphics)
• JPEG (Joint Photographic Experts Group)
• TIFF (Tagged Image File Format)
• PGM (Portable Gray Map)
• FITS (Flexible Image Transport System)

Comparison of image formats

PGM format

• A popular format for grayscale images (8 bits/pixel)

• Closely-related formats are:

– PBM (Portable Bitmap), for binary images (1 bit/pixel)

– PPM (Portable Pixelmap), for color images (24 bits/pixel)

• ASCII or binary (raw) storage

ASCII vs Raw format

• ASCII format has the following advantages:

– Pixel values can be examined or modified very easily using a standard text editor.

– Files in raw format cannot be modified in this way since they contain many unprintable characters.

• Raw format has the following advantages:

– It is much more compact compared to the ASCII format.

– Pixel values are coded using only a single character!
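For illustration, the ASCII variant of PGM (magic number "P2") can be produced as in the sketch below; the raw ("P5") variant used in the code later in these notes writes the same header but binary pixel bytes instead. The function name is hypothetical:

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Build a minimal ASCII PGM (magic "P2") for an N-row, M-column image whose
// pixels are stored row by row in 'pix', with maximum gray value Q.
std::string makeAsciiPGM(const int* pix, int N, int M, int Q) {
    std::ostringstream out;
    out << "P2\n" << M << " " << N << "\n" << Q << "\n";
    for (int i = 0; i < N; i++)
        for (int j = 0; j < M; j++)
            out << pix[i * M + j] << (j + 1 < M ? ' ' : '\n');
    return out.str();
}
```

The resulting file is plain text, so the pixel values really can be inspected and edited in any text editor, as noted above.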

Image Class

class ImageType {
public:
  ImageType();
  ~ImageType();
  // more functions ...
private:
  int N, M, Q;       // N: # rows, M: # columns, Q: # gray levels
  int **pixelValue;
};

An example - Threshold.cpp

void readImageHeader(char[], int&, int&, int&, bool&);
void readImage(char[], ImageType&);
void writeImage(char[], ImageType&);

int main(int argc, char *argv[])
{
  int i, j;
  int M, N, Q;
  bool type;
  int val;
  int thresh;

  // read image header
  readImageHeader(argv[1], N, M, Q, type);

  // allocate memory for the image array
  ImageType image(N, M, Q);

  // read image
  readImage(argv[1], image);

Threshold.cpp (cont’d)

cout << "Enter threshold: ";

cin >> thresh;

// threshold image

for(i=0; i<N; i++)

for(j=0; j<M; j++) {

image.getVal(i, j, val);

if(val < thresh)

image.setVal(i, j, 255);

else

image.setVal(i, j, 0);

}

// write image

writeImage(argv[2], image);

}

Reading/Writing PGM images

Writing a PGM image to a file

void writeImage(char fname[], ImageType& image)
{
  int i, j;
  int N, M, Q;
  unsigned char *charImage;
  ofstream ofp;

  image.getImageInfo(N, M, Q);

  charImage = (unsigned char *) new unsigned char [M*N];

  // convert the integer values to unsigned char
  int val;
  for(i=0; i<N; i++)
    for(j=0; j<M; j++) {
      image.getVal(i, j, val);
      charImage[i*M+j] = (unsigned char)val;
    }

Writing a PGM image... (cont’d)

ofp.open(fname, ios::out);

if (!ofp) {

cout << "Can't open file: " << fname << endl;

exit(1);

}

ofp << "P5" << endl;

ofp << M << " " << N << endl;

ofp << Q << endl;

ofp.write( reinterpret_cast<char *>(charImage), (M*N)*sizeof(unsigned char));

if (ofp.fail()) {

cout << "Can't write image " << fname << endl;

exit(1);

}

ofp.close();

}

Reading a PGM image from a file

void readImage(char fname[], ImageType& image)
{
  int i, j;
  int N, M, Q;
  unsigned char *charImage;
  char header[100], *ptr;
  ifstream ifp;

  ifp.open(fname, ios::in);
  if (!ifp) {
    cout << "Can't read image: " << fname << endl;
    exit(1);
  }

  // read header
  ifp.getline(header, 100, '\n');
  if ( (header[0]!=80) ||   /* 'P' */
       (header[1]!=53) ) {  /* '5' */
    cout << "Image " << fname << " is not PGM" << endl;
    exit(1);
  }

Reading a PGM image …. (cont’d)

ifp.getline(header,100,'\n');

while(header[0]=='#')

ifp.getline(header,100,'\n');

M=strtol(header,&ptr,0);

N=atoi(ptr);

ifp.getline(header,100,'\n');

Q=strtol(header,&ptr,0);

charImage = (unsigned char *) new unsigned char [M*N];

ifp.read( reinterpret_cast<char *>(charImage), (M*N)*sizeof(unsigned char));

if (ifp.fail()) {

cout << "Image " << fname << " has wrong size" << endl;

exit(1);

}

ifp.close();

Reading a PGM image…(cont’d)

// convert the unsigned characters to integers

int val;

for(i=0; i<N; i++)
  for(j=0; j<M; j++) {
    val = (int)charImage[i*M+j];
    image.setVal(i, j, val);
  }

}

How do I “see” images on the computer?

• Unix: xv, gimp
• Windows: Photoshop

How do I display an image from within my C++ program?

• Save the image into a file with a default name (e.g., tmp.pgm) using the WriteImage function.

• Put the following command in your C++ program:

system("xv tmp.pgm");

• This is a system call !!

• It passes the command within the quotes to the Unix operating system.

• You can execute any Unix command this way ….

How do I convert an image from one format to another?

• Use xv's "save" option
• It can also convert images

How do I print an image?

• Load the image using “xv”

• Save the image in “postscript” format

• Print the postscript file (e.g., lpr -Pname image.ps)

Image processing software

• CVIPtools (Computer Vision and Image Processing tools)
• Intel Open Computer Vision Library
• Microsoft Vision SDK Library
• Matlab
• Khoros

• For more information, see
– http://www.cs.unr.edu/~bebis/CS791E
– http://www.cs.unr.edu/CRCD/ComputerVision/cv_resources.html

Image Processing

What is an image?

An image is a two-dimensional light intensity function, f(x,y), where x and y are spatial coordinates, and the amplitude of f at any pair of coordinates (x,y) is called the intensity or grey level of the image at that point.

Digital Image?

A digital image is an image that has been discretised both in spatial coordinates and in brightness:

Sample the 2D space on a regular grid.
Quantize each sample (round to the nearest integer).

How are images represented in the computer?

Origin of DIP

Picture of the Earth's moon taken by a space probe in 1964. The picture was made with a television camera (vidicon), transmitted to the Earth by analog modulation, and digitized on the ground.

Medical Images

Multispectral Satellite Images

Image Processing in Manufacturing

Radar Image


Important steps in a typical image processing system

Step 1: Image acquisition (capturing visual data by an image sensor)

Step 2: Discretization / digitization, quantization, compression (converting the data into discrete form and compressing it for efficient storage/transmission)

Step 3: Image enhancement and restoration (improving image quality: low contrast, blur, noise)

Step 4: Image segmentation (partitioning an image into objects or constituent parts)

Step 5: Feature selection (extracting pertinent features from an image that are important for differentiating one class of objects from another)

Step 6: Image representation (assigning labels to an object based on information provided by descriptors)

Step 7: Image interpretation (assigning meaning to an ensemble of recognized objects)

Image Acquisition Tools

Human Eye
Ordinary Camera
X-Ray Machine
Positron Emission Tomography
Infrared Imaging
Geophysical Imaging

Human eye

Average diameter: 20 mm. When the eye is properly focused, light from the object is imaged on the retina.

Pattern vision is afforded by the receptors called rods and cones.

Lens

Structure of Human Eye

Image Formation in the Eye

Light Receptors over Retina

Sensors

Sensors are used to transform illumination energy into digital images.

There are three types of sensors:

Single imaging sensor
Line sensor
Array sensor

Image Formation

f(x,y) = reflectance(x,y) * illumination(x,y)

Reflectance in [0,1], illumination in [0,inf]

Array Sensor

Numerous electromagnetic and some ultrasonic sensing devices frequently are arranged in an array format. This arrangement is also found in digital cameras.

The response of each sensor is proportional to the integral of the light energy projected onto the surface of the sensor, a property that is used in astronomical and other applications requiring low-noise images.

The illumination source reflects energy from a scene element, and the first function performed by the imaging system is to collect the incoming energy and focus it onto an image plane. If the illumination is light, the front end of the imaging system is a lens, which projects the viewed scene onto the lens focal plane. The sensor array, which is coincident with the focal plane, produces outputs proportional to the integral of the light received at each sensor.

A specially developed CCD used for ultraviolet imaging in a wire bonded package.

Structure of a CCD

The image area of the CCD is positioned at the focal plane of the telescope. An image then builds up that consists of a pattern of electric charge. At the end of the exposure this pattern is then transferred, pixel at a time, by way of the serial register to the on-chip amplifier. Electrical connections are made to the outside world via a series of bond pads and thin gold wires positioned around the chip periphery.

(Figure labels: connection pins, gold bond wires, bond pads, silicon chip, metal/ceramic/plastic package, image area, serial register, on-chip amplifier)

Image Sampling and Quantization

The output of most sensors is a continuous voltage waveform whose amplitude and spatial behavior are related to the physical phenomenon being sensed.

To create a digital image, we need to convert the continuous sensed data into digital form. This involves two processes: sampling and quantization.

Sampling and Quantization

To convert it to digital form, we have to sample the function in both coordinates and in amplitude.

Digitizing the coordinate values is called sampling.

Digitizing the amplitude values is called quantization.
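The two steps can be sketched together: sample a continuous intensity function on a grid, then quantize each sample. The function names below are hypothetical:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Sample a continuous image f(x, y) on an N x M grid with spacing dx, dy
// (sampling), then round each sample to the nearest integer gray level
// (quantization).
std::vector<int> digitize(double (*f)(double, double),
                          int N, int M, double dx, double dy) {
    std::vector<int> img(N * M);
    for (int i = 0; i < N; i++)
        for (int j = 0; j < M; j++)
            img[i * M + j] = (int)std::lround(f(j * dx, i * dy));
    return img;
}

// A simple test pattern: intensity increases linearly with x and y.
double ramp(double x, double y) { return 10.0 * x + y; }
```

Choosing dx, dy fixes the spatial resolution; the rounding step fixes the gray-level resolution.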

Sampling and Quantization

Image before sampling and quantization

Result of sampling and quantization

Digital Image?

When x, y, and the amplitude values of f are finite, discrete quantities, the image is called a digital image.

A digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements, pels, and pixels.

Pixel values in highlighted region

Image Size

This digitization process requires decisions about values for M, N, and for the number, L, of discrete gray levels allowed for each pixel, where M and N are positive integers. Due to processing, storage, and sampling hardware considerations, the number of gray levels typically is an integer power of 2:

L = 2^k

where k is the number of bits required to represent a gray value.

The discrete levels should be equally spaced, and they are integers in the interval [0, L-1].

The number, b, of bits required to store a digitized image is

b = M * N * k
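The storage formula is direct to evaluate; a small sketch (function name hypothetical):

```cpp
#include <cassert>

// Number of bits b = M * N * k needed to store an M x N image with
// k bits per pixel; long long avoids overflow for large images.
long long storageBits(long long M, long long N, int k) {
    return M * N * k;
}
```

For example, a 1024 x 1024 image with k = 8 needs 8,388,608 bits, i.e. exactly 1 MB, matching the earlier storage example.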

Number of storage bits for various values of N and k

Resolution

Spatial Resolution: Spatial resolution is the smallest detectable detail in an image.

Grey level Resolution: Gray-level resolution similarly refers to the smallest detectable change in gray level.

A 1024*1024, 8-bit image subsampled down to size 32*32 pixels. The number of allowable gray levels was kept at 256.

(a) 1024*1024, 8-bit image. (b) 512*512 image resampled into 1024*1024 pixels by row and column duplication. (c) through (f) 256*256, 128*128, 64*64, and 32*32 images resampled into 1024*1024 pixels.

(a) 452*374, 256-level image (b)–(d) Image displayed in 128, 64, and 32 graylevels, while keeping the spatial resolution constant.

Coordinate Convention

Digital Image Representation

f(x,y) =

f(0,0)    f(0,1)    ...  f(0,M-1)
f(1,0)    f(1,1)    ...  f(1,M-1)
...       ...       ...  ...
f(N-1,0)  f(N-1,1)  ...  f(N-1,M-1)

Digital Image

Image Elements (Pixels)

Pixel Notation

Indexed Images

MATLAB stores the RGB values of an indexed image as values of type double, with values between 0 and 1.

Digital Image Processing
Chapter 11: Image Description and Representation

Image Representation and Description?

Objective: To represent and describe information embedded in an image in other forms that are more suitable than the image itself.

Benefits:
- Easier to understand
- Requires less memory, faster to process
- More "ready to be used"

What kind of information can we use?
- Boundary, shape
- Region
- Texture
- Relation between regions

Shape Representation by Using Chain Codes

Chain codes: represent an object boundary by a connected sequence of straight line segments of specified length and direction.

4-directional chain code

8-directional chain code

Why do we focus on a boundary? The boundary is a good representation of an object's shape and also requires little memory.

(Images from Rafael C. Gonzalez and Richard E. Wood, Digital Image Processing, 2nd Edition.)

Examples of Chain Codes

Object boundary

(resampling)

Boundary vertices

4-directionalchain code

8-directionalchain code


The First Difference of a Chain Code

Problem of a chain code: a chain code sequence depends on the starting point.

Solution: treat the chain code as a circular sequence and redefine the starting point so that the resulting sequence of numbers forms an integer of minimum magnitude.

The first difference of a chain code: counting the number of direction changes (counterclockwise) between 2 adjacent elements of the code.

Example (4-directional code: 0 = east, 1 = north, 2 = west, 3 = south):
- a chain code: 10103322
- the first difference: 3133030
- treating the chain code as a circular sequence, the first difference: 33133030

Chain code : first difference
0 1 : 1
0 2 : 2
0 3 : 3
2 3 : 1
2 0 : 2
2 1 : 3

The first difference is rotation invariant.

Polygon Approximation

Object boundary

Minimum perimeter polygon

Represent an object boundary by a polygon

The minimum perimeter polygon consists of line segments that minimize the distances between boundary pixels.


Polygon Approximation: Splitting Techniques

0. Object boundary
1. Find the line joining two extreme points
2. Find the farthest points from the line
3. Draw a polygon

Distance-Versus-Angle Signatures

Represent a 2-D object boundary in terms of a 1-D function of radial distance r(θ) with respect to the centroid.


Boundary Segments

Concept: partitioning an object boundary by using the vertices of its convex hull. This decomposition reduces the boundary's complexity.

Convex deficiency

Object boundary

Partitioned boundary

• Used when the boundary contains one or more concavities that carry shape information.

• The convex hull H of a set S (object) is the smallest convex set containing S.

• D = H - S is called the convex deficiency (shaded region) of set S.

• The region boundary can be partitioned by following the contour of S and marking the points at which a transition is made into or out of a component of the convex deficiency.

Skeletons

Used for representing the structural shape of a plane region: the region is reduced to a graph via a thinning (skeletonizing) algorithm. The skeleton of a region may be defined via the medial axis transformation (MAT).

Medial axes (dashed lines)

Skeletons

Based on the prairie fire concept: consider an image region as a prairie (grassland) of uniform dry grass, and suppose a fire is lit along its border. All fire fronts advance into the region at the same speed. The MAT of the region is the set of points reached by more than one fire front at the same time.

• The MAT of a region R with border B is defined as follows:
– For each point p in R, we find p's closest neighbor in B.
– If p has more than one such neighbor, it is said to belong to the medial axis.
• Note: this definition depends on how we define our distance measure (closest); generally Euclidean distance, but that is not a restriction.

Thinning Algorithm

Neighborhood arrangement for the thinning algorithm

Concept:
1. Do not remove end points
2. Do not break connectivity
3. Do not cause excessive erosion

Apply only to contour pixels: pixels valued "1" having at least one of their 8 neighbor pixels valued "0".

p9 p2 p3
p8 p1 p4
p7 p6 p5

Notation:

Let N(p1) = p2 + p3 + ... + p8 + p9 (the number of nonzero neighbors of p1)

Let T(p1) = the number of 0-1 transitions in the ordered sequence p2, p3, ..., p8, p9, p2.

Example:

0  0  1
1  p1 0
1  0  1

N(p1) = 4
T(p1) = 3

Thinning Algorithm (cont.)

(Apply to all border pixels)

Step 1. Mark pixels for deletion if all the following conditions are true:
a) 2 ≤ N(p1) ≤ 6
b) T(p1) = 1
c) p2 · p4 · p6 = 0
d) p4 · p6 · p8 = 0

Step 2. Delete marked pixels and go to Step 3.

Step 3. Mark pixels for deletion if all the following conditions are true:
a) 2 ≤ N(p1) ≤ 6
b) T(p1) = 1
c) p2 · p4 · p8 = 0
d) p2 · p6 · p8 = 0

Step 4. Delete marked pixels and repeat Step 1 until no change occurs.

Example: Skeletons Obtained from the Thinning Alg.

Skeleton

Boundary Descriptors

1. Simple boundary descriptors, for example:

- Length of the boundary
- The size of the smallest circle or box that totally encloses the object

2. Shape number

3. Fourier descriptor

4. Statistical moments

Shape Number

Shape number of a boundary: the first difference of smallest magnitude.

The order n of the shape number: the number of digits in the sequence


Shape Number (cont.)

Shape numbers of order 4, 6 and 8


Example: Shape Number

Chain code:       0 0 0 0 3 0 0 3 2 2 3 2 2 2 1 2 1 1

First difference: 3 0 0 0 3 1 0 3 3 0 1 3 0 0 3 1 3 0

Shape number:     0 0 0 3 1 0 3 3 0 1 3 0 0 3 1 3 0 3

1. Original boundary
2. Find the smallest rectangle that fits the shape
3. Create a grid
4. Find the nearest grid points

Fourier Descriptor

Fourier descriptor: view a coordinate (x,y) as a complex number (x = real part and y = imaginary part), then apply the Fourier transform to a sequence of boundary points:

s(k) = x(k) + j y(k)

Let s(k), k = 0, 1, ..., K-1, be the coordinates of the boundary points. The Fourier descriptors a(u) are:

a(u) = (1/K) Σ_{k=0}^{K-1} s(k) e^{-j2πuk/K},   u = 0, 1, ..., K-1

Reconstruction formula:

s(k) = Σ_{u=0}^{K-1} a(u) e^{j2πuk/K},   k = 0, 1, ..., K-1

Example: Fourier Descriptor

Examples of reconstruction from Fourier descriptors, using only the first P coefficients:

ŝ(k) = Σ_{u=0}^{P-1} a(u) e^{j2πuk/K}

P is the number of Fourier coefficients used to reconstruct the boundary.
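A direct O(K²) implementation of these formulas is a few lines with std::complex; this sketch assumes the 1/K factor sits on the forward transform, as written above (function names hypothetical):

```cpp
#include <cassert>
#include <cmath>
#include <complex>
#include <vector>

using cd = std::complex<double>;
const double PI = 3.14159265358979323846;

// Fourier descriptors a(u) of boundary points s(k) = x(k) + j*y(k):
// a(u) = (1/K) * sum_k s(k) * exp(-j*2*pi*u*k/K)
std::vector<cd> descriptors(const std::vector<cd>& s) {
    size_t K = s.size();
    std::vector<cd> a(K);
    for (size_t u = 0; u < K; u++)
        for (size_t k = 0; k < K; k++)
            a[u] += s[k] * std::exp(cd(0, -2 * PI * u * k / K)) / (double)K;
    return a;
}

// Reconstruction using only the first P descriptors:
// s_hat(k) = sum_{u=0}^{P-1} a(u) * exp(j*2*pi*u*k/K)
std::vector<cd> reconstruct(const std::vector<cd>& a, size_t P) {
    size_t K = a.size();
    std::vector<cd> s(K);
    for (size_t k = 0; k < K; k++)
        for (size_t u = 0; u < P; u++)
            s[k] += a[u] * std::exp(cd(0, 2 * PI * u * k / K));
    return s;
}
```

With P = K the boundary is reconstructed exactly (up to rounding); smaller P keeps only low frequencies and yields a smoothed boundary.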


Fourier Descriptor Properties


Some properties of Fourier descriptors

Texture Descriptors

Purpose: to describe “texture” of the region.

Examples: optical microscope images:

Superconductor(smooth texture)

Cholesterol(coarse texture)

Microprocessor(regular texture)


Terms of image

Pixel
Resolution
Neighbors of pixels
Space and Frequency
Vector

PIXEL

A pixel is the smallest component of a digital image.

A pixel is a color point of a digital image. An image is comprised of many pixels.

Neighbors of a pixel

Neighbors of a pixel are the pixels that are adjacent to an identified pixel.

Each neighbor is a unit distance from the particular pixel.

Some of the neighbors of a pixel lie outside the digital image if the pixel is on the border of the image.

A pixel at coordinate (column x, row y) can be represented by f(x,y).

(Example, zoomed 1600%: the pixel at the 7th column and 4th row is yellow.)

4-neighbors of a pixel

The 4-neighbors of a pixel p are denoted by N4(p): the set of its horizontal and vertical neighbors.

f(x,y) is the yellow circle:
f(x,y-1) is the top one
f(x-1,y) is the left one
f(x+1,y) is the right one
f(x,y+1) is the bottom one


Diagonal neighbors of pixel

The diagonal neighbors of a pixel p are denoted by ND(p): the set of its diagonal neighbors.

f(x,y) is the yellow circle:
f(x-1,y-1) is the top-left one
f(x+1,y-1) is the top-right one
f(x-1,y+1) is the bottom-left one
f(x+1,y+1) is the bottom-right one


8-neighbors of pixel

The 8-neighbors of a pixel p are denoted by N8(p): the union of the 4-neighbors and the diagonal neighbors.

f(x,y) is the yellow circle; its 8-neighborhood covers (x-1,y-1), (x,y-1), (x+1,y-1), (x-1,y), (x,y), (x+1,y), (x-1,y+1), (x,y+1), (x+1,y+1).


Connectivity

Connectivity is used for establishing boundaries of objects and components of regions in an image.

Pixels are grouped into the same region under the assumption that pixels of the same color or equal intensity belong to the same region.

Connectivity

Let C be the set of colors used to define connectivity. There are three types of connectivity:

4-connectivity: 2 pixels p and q with values in C are 4-connected if q is in the set N4(p).

8-connectivity: 2 pixels p and q with values in C are 8-connected if q is in the set N8(p).

m-connectivity: 2 pixels p and q with values in C are m-connected if

(i) q is in N4(p), or

(ii) q is in ND(p) and the set N4(p) ∩ N4(q) contains no pixels with values in C.
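These definitions translate directly into code; a sketch for binary images with C = {1}, both pixels assumed to hold value 1 (all function names hypothetical):

```cpp
#include <cassert>
#include <cstdlib>

// Is q = (x2, y2) a 4-neighbor of p = (x1, y1)?
bool in4Neighborhood(int x1, int y1, int x2, int y2) {
    return std::abs(x1 - x2) + std::abs(y1 - y2) == 1;
}

// Is q a diagonal neighbor of p?
bool inDiagNeighborhood(int x1, int y1, int x2, int y2) {
    return std::abs(x1 - x2) == 1 && std::abs(y1 - y2) == 1;
}

// m-adjacency: q in N4(p), or q in ND(p) and the common 4-neighbors of p
// and q contain no pixel of C. 'img' is row-major N rows x M cols, C = {1}.
bool mAdjacent(const int* img, int N, int M,
               int x1, int y1, int x2, int y2) {
    if (in4Neighborhood(x1, y1, x2, y2)) return true;
    if (!inDiagNeighborhood(x1, y1, x2, y2)) return false;
    // the two common 4-neighbors of diagonal pixels are (x1, y2) and (x2, y1)
    auto val = [&](int x, int y) {
        return (x >= 0 && x < M && y >= 0 && y < N) ? img[y * M + x] : 0;
    };
    return val(x1, y2) == 0 && val(x2, y1) == 0;
}
```

m-adjacency exists precisely to break the ambiguous double paths that plain 8-adjacency allows between diagonal pixels.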

Binary Image Representation

0 0 0 0 0 0 0 0 0 0
0 0 0 0 1 1 0 0 0 0
0 0 1 1 1 1 1 1 0 0
0 0 0 1 0 0 1 0 0 0
0 0 0 1 0 0 1 0 0 0
0 0 0 1 0 0 1 0 0 0
0 0 0 1 1 1 1 0 0 0
0 1 1 1 0 0 1 1 1 0
0 0 0 0 0 0 0 0 0 0

Example 4-Connectivity

The set of colors consists of color 1; C = {1}.


Example 8-Connectivity

The set of colors consists of color 1; C = {1}.


Example M-Connectivity

The set of colors consists of color 1; C = {1}.


Distance Measure

For pixels p, q, and z, with coordinates (x,y), (s,t) and (u,v) respectively, D is a distance function or metric if

(a) D(p,q) ≥ 0 and

(b) D(p,q) = 0 iff p = q and

(c) D(p,q) = D(q,p) and

(d) D(p,z) ≤ D(p,q) + D(q,z)

The D4 Distance

Also called city-block distance. The distance between p and q is defined as

D4(p,q) = | px- qx | + | py- qy |

Pixels with D4 ≤ 2 from the center point:

      2
   2  1  2
2  1  0  1  2
   2  1  2
      2

The D8 Distance

Also called chessboard distance. The distance between p and q is defined as

D8(p,q) = max(| px- qx |, | py- qy |)

Pixels with D8 ≤ 2 from the center point:

2  2  2  2  2
2  1  1  1  2
2  1  0  1  2
2  1  1  1  2
2  2  2  2  2
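Both metrics are one-liners in code; a small sketch (function names hypothetical):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdlib>

// City-block distance: D4(p, q) = |px - qx| + |py - qy|
int d4(int px, int py, int qx, int qy) {
    return std::abs(px - qx) + std::abs(py - qy);
}

// Chessboard distance: D8(p, q) = max(|px - qx|, |py - qy|)
int d8(int px, int py, int qx, int qy) {
    return std::max(std::abs(px - qx), std::abs(py - qy));
}
```

D4 counts horizontal plus vertical steps (hence the diamond of equal-distance pixels), while D8 also allows diagonal steps (hence the square).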

Labeling of Connected Components

Scan the image pixel by pixel from left to right and top to bottom.

Two types of connectivity are of interest: 4-connectivity and 8-connectivity.

Equivalent labeling

Steps: 4-connected components

Let p be the pixel currently scanned:
If pixel p has color value 0, move on to the next pixel.
If pixel p has color value 1, examine the pixels above (top) and to the left:
– If top and left are 0, assign a new label to p.
– If only one of them is 1, assign its label to p.
– If both of them are 1 and have the same label, assign that label to p.
– If they have different labels, assign the label of the top pixel to p and note that the two labels are equivalent.

Finally, sort all pairs of equivalent labels and assign each equivalence class the same label.
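The two-pass procedure above can be sketched with a small equivalence table (union-find standing in for the "sort equivalent pairs" step); this version handles 4-connectivity, and all names are hypothetical:

```cpp
#include <cassert>
#include <vector>

// Two-pass 4-connected component labeling of a binary image (row-major,
// N rows x M cols, values 0/1). Returns a label image; 0 = background.
std::vector<int> label4(const std::vector<int>& img, int N, int M) {
    std::vector<int> lab(N * M, 0), parent(1, 0);
    // representative of a label in the equivalence table (path halving)
    auto find = [&](int x) {
        while (parent[x] != x) x = parent[x] = parent[parent[x]];
        return x;
    };
    int next = 1;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < M; j++) {
            if (img[i * M + j] == 0) continue;
            int up   = (i > 0) ? lab[(i - 1) * M + j] : 0;
            int left = (j > 0) ? lab[i * M + j - 1]   : 0;
            if (up == 0 && left == 0) {         // both unlabeled: new label
                parent.push_back(next);
                lab[i * M + j] = next++;
            } else if (up != 0 && left != 0) {  // both labeled: note equivalence
                lab[i * M + j] = up;
                parent[find(left)] = find(up);
            } else {                            // exactly one labeled
                lab[i * M + j] = up + left;
            }
        }
    // second pass: replace each label by its equivalence-class representative
    for (int& v : lab) if (v) v = find(v);
    return lab;
}
```

The 8-connected variant only changes which previously scanned pixels are examined (top-left, top, top-right, and left), as described below.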

Example 4-connected components


Steps: 8-connected components

The steps are the same as for 4-connected components, but the pixels examined are the 4 previous pixels (top-left, top, top-right, and left).

Example 8-connected components


Multi-Color Labeling

First define the groups of colors clearly and independently.

Then label each group independently.

Example Multi-Color labeling

(Legend: symbols representing the values 0, 1, 2, and 3)

Groups of color:
1st group consists of 1
2nd group consists of 2
3rd group consists of 3

Color Labeling 1st Group


Color Labeling 2nd Group


Color Labeling 3rd Group


Exercise

If we want to label red and yellow as the same group, how many labels do we get?

Image Processing I

15-463: Rendering and Image Processing

Alexei Efros

…with most slides shamelessly stolen from Steve Seitz and Gonzalez & Woods

Today

Image Formation

Range Transformations
• Point Processing

Programming Assignment #1 OUT

Reading for this week:
• Gonzalez & Woods, Ch. 3
• Forsyth & Ponce, Ch. 7

Image Formation

f(x,y) = reflectance(x,y) * illumination(x,y)

Reflectance in [0,1], illumination in [0,inf]

Sampling and Quantization

Sampling and Quantization

What is an image?

We can think of an image as a function, f, from R2 to R:
• f(x, y) gives the intensity at position (x, y)
• Realistically, we expect the image only to be defined over a rectangle, with a finite range:
– f: [a,b] x [c,d] → [0,1]

A color image is just three functions pasted together. We can write this as a "vector-valued" function:

f(x, y) = [ r(x, y), g(x, y), b(x, y) ]

Images as functions

What is a digital image?

We usually operate on digital (discrete) images:• Sample the 2D space on a regular grid• Quantize each sample (round to nearest integer)

If our samples are Δ apart, we can write this as:

f[i, j] = Quantize{ f(iΔ, jΔ) }

The image can now be represented as a matrix of integer values

Image processing

An image processing operation typically defines a new image g in terms of an existing image f.

We can transform either the range of f.

Or the domain of f:

What kinds of operations can each perform?

Point Processing

The simplest kind of range transformations are these independent of position x,y:

g = t(f)

This is called point processing.

What can they do?

What’s the form of t?

Important: every pixel for himself – spatial information completely lost!
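Point operations such as the negative and power-law (gamma) transforms below apply the same function t to every pixel independently; a minimal sketch (function names hypothetical):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Image negative: t(r) = (L - 1) - r, for gray levels 0..L-1.
int negative(int r, int L) { return (L - 1) - r; }

// Power-law (gamma) transform: t(r) = (L-1) * (r / (L-1))^gamma.
int powerLaw(int r, int L, double gamma) {
    return (int)std::lround((L - 1) * std::pow(r / double(L - 1), gamma));
}

// Point processing: apply t independently to every pixel.
template <typename T>
std::vector<int> pointProcess(const std::vector<int>& img, T t) {
    std::vector<int> out(img.size());
    for (size_t i = 0; i < img.size(); i++) out[i] = t(img[i]);
    return out;
}
```

Because t sees one pixel at a time, no spatial information is used, which is exactly the limitation discussed later under neighborhood processing.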

Basic Point Processing

Negative

Log

Power-law transformations

Gamma Correction

Gamma Measuring Applet: http://www.cs.berkeley.edu/~efros/java/gamma/gamma.html

Image Enhancement

Contrast Stretching

Image Histograms

Cumulative Histograms

Histogram Equalization

Histogram Matching

Match-histogram code

Neighborhood Processing (filtering)

Q: What happens if I reshuffle all pixels within the image?

A: Its histogram won't change, so no point processing will be affected…

Need spatial information to capture this.

Programming Assignment #1

Easy stuff to get you started with Matlab
• James will hold a tutorial this week

Distance Functions
• SSD
• Normalized Correlation

Bells and Whistles
• Point Processing (color?)
• Neighborhood Processing
• Using your data (3 copies!)
• Using your data (other images)