Page 1: saikiran manchikatla

CCM 4900

TITLE: FACE RECOGNITION SYSTEM USING DIGITAL IMAGE PROCESSING

STUDENT NAME SAIKIRAN MANCHIKATLA

STUDENT NUMBER M00258505 SUPERVISOR NAME Dr. ARSENIA CHORTI

Campus: Hendon

Department of Computer Communications

School of Computing Science

8th January, 2010

A thesis submitted in partial fulfillment of the requirements for the degree of

Master of Science in Telecommunication Engineering

Page 1

ABSTRACT

The main aim of this project is to identify a particular image or icon, which is processed

using Digital image processing techniques. The identification of image is done by the

comparison of captured image with the preloaded images.

The process of the digital images by the use of the digital computer is known as the

digital image processing. And the digital image has a finite number of the components or the

elements, and every part has the particular location and the particular value. Thus these are

denoted as the image elements.

The most advanced of our senses is the vision so the images which are played the single

most role which is important is not much surprising. Processing an image involves various levels

like image capturing, image restoration, image compression and image recognition i.e. assigning

a label to an object based on its descriptors.

This project involves of digital camera for capturing the image. The captured image is

processed in MATLAB using image processing techniques and the processed image is further

compared with the images, which are already stored in the system. Whenever it recognizes the

image or icon, that information is displayed on the console.

ACKNOWLEDGEMENTS

First and foremost, I would like to thank my supervisor, Dr. Arsenia Chorti, for her valuable help, guidance and encouragement; her feedback improved the quality of this dissertation.

As always, I owe a great deal to my dad and mum, my brother and my sister. They were a great inspiration through all the challenges of my course and the completion of my dissertation.

Finally, I would like to thank all of my friends for their kindness and great support during the course.

TABLE OF CONTENTS

ABSTRACT…………………………………………………………………………… 2

ACKNOWLEDGEMENTS……………………………………………………………. 3

TABLE OF CONTENTS……………………………………………………………… 4

LIST OF TABLES………………………………………………………………………. 7

LIST OF FIGURES……………………………………………………………………… 8

CHAPTER 1: INTRODUCTION………………………………………………………. 9

1.1. AIMS AND OBJECTIVES……………………………………………………… 10

1.2. EVIDENCE OF REQUIREMENTS…………………………………………… 10

1.3. RESEARCH METHOD…………………………………………………………… 10

1.4. DELIVERABLES………………………………………………………………. 10

1.5. RESOURCES……………………………………………………………………. 11

CHAPTER 2: ARCHITECTURE OF THE PROJECT………………………………. 12

2.1. DESCRIPTION OF THE PROJECT…………………………………………….. 12

2.1.1. BLOCK DIAGRAM…………………………………………………………… 12

2.2. WORKING OF THE PROJECT……………………………………………………. 13

2.2.1. BLOCK DIAGRAM OF WORKING PROCEDURE……………………….. 13

2.2.2. WORKING PRINCIPLE………………………………………………………. 13

2.3 IMAGE PROCESSING TECHNIQUES USED IN THE PROJECT……………….. 14

CHAPTER 3: DIGITAL IMAGE PROCESSING………………………………………… 15

3.1. INTRODUCTION…………………………………………………………………. 15

3.2. DEFINITION OF DIGITAL IMAGE PROCESSING………………………….. 15

3.3. THE OPERATIONS OF DIGITAL IMAGE PROCESSING…………………… 15

3.4. FUTURE OF DIGITAL IMAGE PROCESSING…………………………………. 15

3.5. APPLICATIONS OF DIGITAL IMAGE PROCESSING………………………. 16

CHAPTER 4: INTRODUCTION AND EXPLANATION OF MATLAB……………….. 17

4.1. WHAT IS MATLAB?............................................................................................. 17

4.2. THE MAIN FEATURES OF MATLAB………………………………………… 18

4.3. THE MATLAB SYSTEM………………………………………………………… 18

4.3.1. DEVELOPMENT ENVIRONMENT………………………………………… 18

4.3.2. THE MATLAB MATHEMATICAL FUNCTION LIBRARY…………… 18

4.3.3. THE MATLAB LANGUAGE………………………………………………. 19

4.3.4. GRAPHICS………………………………………………………………….. 19

4.3.5. THE MATLAB APPLICATION PROGRAM INTERFACE (API)…… 19

4.4. MATLAB WORKING ENVIRONMENT……………………………………………. 20

4.4.1. MATLAB DESKTOP……………………………………………………… 20

4.4.2. CREATING M-FILES USING THE MATLAB EDITOR………………. 21

4.4.3. MATLAB GETTING HELP…………………………………………… 21

4.5 MATLAB TOOLBOXES………………………………………………………………… 22

4.5.1. IMAGE ACQUISITION TOOLBOX…………………………………….. 22

4.5.2. BASIC IMAGE ACQUISITION PROCEDURE……………………… 22

4.6 IMAGE PROCESSING TOOLBOX………………………………………………… 33

4.6.1. READING IMAGES…………………………………………………… 33

4.7 GRAPHICAL USER INTERFACE (GUI) …………………………………………… 42

4.8 THE SUCCESSIVE MEAN QUANTIZATION TRANSFORM (SMQT)…………. 51

4.8.1. INTRODUCTION TO SMQT…………………………………………………… 51

4.8.2. DESCRIPTION OF SMQT………………………………………………………… 52

4.8.3. SMQT IN SPEECH PROCESSING……………………………………………… 55

4.8.4. SMQT IN IMAGE PROCESSING……………………………………………….. 56

4.8.5. APPLICATION OF SMQT IN IMAGE PROCESSING………………………... 58

CHAPTER 5: SOURCE CODE……………………………………………………………… 59

5.1: YSN FACEFIND.M FILE………………………………………………………… 59

5.2: FACEFIND.M FILE………………………………………………………………… 60

5.3: PLOTBOX.M FILE…………………………………………………………………. 60

5.4: PLOTSIZE.M FILE……………………………………………………………… 62

CHAPTER 6: APPLICATIONS AND FUTURE SCOPE………………………………… 64

6.1. APPLICATIONS……………………………………………………………………. 64

6.2. CONCLUSION AND FUTURE SCOPE…………………………………………. 65

REFERENCES:……………………………………………………………………………… 66

LIST OF TABLES

TABLE 4.1: IMAGE ACQUISITION PROCEDURE

TABLE 4.2: DEVICE INFORMATION

TABLE 4.3: DILATION AND EROSION OPERATION

LIST OF FIGURES

FIGURE 2.1: BLOCK DIAGRAM OF THE PROJECT

FIGURE 2.2: BLOCK DIAGRAM OF WORKING PROCEDURE

FIGURE 4.1: VIDEO PREVIEW WINDOW

FIGURE 4.3: SHOWING THE OUTPUT FOR THE DILATION OPERATION

FIGURE 4.4: CONNECTIVITIES IN AN IMAGE

FIGURE 4.5: INPUT IMAGE

FIGURE 4.6: CONVERSION TO BINARY IMAGES

FIGURE 4.7: TRACING THE BOUNDARY FOR ONLY ONE PARTICULAR OBJECT

IN AN IMAGE

FIGURE 4.8: STARTING POINT BOUNDARY NOTATION

FIGURE 4.9: INPUT IMAGE

FIGURE 4.10: ADDITION OF NOISE

FIGURE 4.11: AVERAGE FILTERING

FIGURE 4.12: MEDIAN FILTERING IMAGE

FIGURE 4.7.1: AXES

FIGURE 4.8.2: THE SMQT AS A BINARY TREE OF MQUs

FIGURE 4.8.3: THE STEPS FROM SPEECH COEFFICIENTS BY MFCC AND SMQT-MFCC

FIGURE 4.8.4: COMPARISON BETWEEN MFCC AND SMQT-MFCC

FIGURE 4.8.5: EXAMPLE OF SMQT ON WHOLE IMAGE

FIGURE 4.8.6: DIFFERENCE BETWEEN THE IMAGES IN SMQT

CHAPTER NO.1

INTRODUCTION

Face recognition is not so much about face recognition at all - it is much more about face detection! Many scientists strongly believe that the step prior to face recognition, the accurate detection of human faces in arbitrary scenes, is the most important process involved. If faces could be located exactly in any scene, the subsequent recognition step would be far less complicated.

The images are acquired using a camera. We use images with a plain, mono-color background, or with a predefined static background; removing the background then directly yields the face boundaries. Detection algorithms are then applied to find the faces in the image, which provides the basic step for face recognition.

In this project the image is obtained with the help of a camera (frames are captured every two seconds) and loaded into the MATLAB workspace. Each frame is in the RGB space, i.e. a 3-D array. Since processing 3-D images is in general more complicated, the frames are first converted into grayscale images. The converted images are then passed through a manual window for scaling the region of interest. After the grayscale conversion, the faces are found with the Successive Mean Quantization Transform algorithm. Once a face is found, a box is drawn along the face coordinates with a specified line thickness and color; this is done with the help of vector coordinates. The resulting image is then displayed on the monitor.

1.1. Aims and Objectives

The thesis aims:

• To identify the human faces in each frame of streaming video and to plot rectangular boxes that mark the detected faces.

The objectives of the thesis are:

• To obtain the images with the help of a camera (frames are captured every two seconds).

• To convert the frames into grayscale images and find the faces in each image with the Successive Mean Quantization Transform algorithm.

• To plot boxes along the face coordinates with a specified line thickness and color, using vector coordinates.

1.2. Evidence of Requirements

Illumination and sensor variation are major concerns in visual object detection. It is desirable to transform the raw, illumination- and sensor-varying image so that the remaining information only describes the structure of the object. The Successive Mean Quantization Transform (SMQT) can be viewed as a tunable trade-off between the number of quantization levels in the result and the computational load. The SMQT is used to extract features from local areas of an image; the sensor- and illumination-insensitive properties of these local SMQT features are derived later in this thesis.

1.3. Research Method

• Improving relevant technical knowledge by reading books and by using help resources on the internet.

• Holding discussions with the lecturer in charge of the project.

• Holding discussions, meetings and chats with engineers to increase the MATLAB knowledge needed for this project.

• Searching for and studying existing work in the related area.

1.4. Deliverables

• MATLAB source code

• Documentation

1.5. Resources

• For documentation

– Hardware

Pentium Core2Duo 2.0 GHz processor

4 GB RAM

512 MB VGA

250 GB hard disk

– Software

Windows XP SP2

Microsoft Word 2007

Internet Explorer 8.0

• For the user

Web camera

Computer

CHAPTER NO.2

ARCHITECTURE OF THE PROJECT

2.1 DESCRIPTION

The main objective of the project is to detect the faces in the image in front of the camera. A USB web camera is used to acquire the images; the acquired images are passed to the PC via the USB port. The PC's USB controllers, which control this hardware, are connected to the CPU through the PCI bus. The CPU processes the images with the help of the MATLAB tool: the images are taken into the MATLAB workspace with the help of the Image Acquisition Toolbox.

The PC's monitor is used as the display device, and hence the results are displayed on it.

2.1.1 Block Diagram:

Figure 2.1: Block Diagram of the Project

2.2 WORKING

2.2.1 Block Diagram:

Figure 2.2: Block Diagram of the Working Procedure

2.2.2 Working Principle:

In this project the image is obtained with the help of a camera (frames are captured every two seconds) and loaded into the MATLAB workspace. Each frame is in the RGB space, i.e. a 3-D array. Since processing 3-D images is in general more complicated, the frames are first converted into grayscale images. The converted images are then passed through a manual window for scaling the region of interest.

After the grayscale conversion, the faces are found with the Successive Mean Quantization Transform algorithm. Once a face is found, a box is drawn along the face coordinates with a specified line thickness and color; this is done with the help of vector coordinates. The resulting image is then displayed on the monitor.

2.3 IMAGE PROCESSING TECHNIQUES USED

1. Image conversion

2. Image windowing

3. Image scaling

4. Algorithm implementation

First, the image conversion technique converts each RGB frame to grayscale in order to reduce complexity. Image scaling is done to extract the particular color range of interest, and image windowing is used to display the image in the windows. The algorithm implementation then finds the face; finally, a box is drawn along the face coordinates and the final image is displayed on the monitor.
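As a minimal per-frame sketch, the techniques above can be put together as follows. Here facefind and plotbox stand in for the project's own M-files (listed in Chapter 5); their argument lists, and the example file name and crop rectangle, are assumptions for illustration rather than the project's actual API.

```matlab
% Per-frame pipeline sketch: convert, window, detect, draw, display.
% facefind and plotbox are placeholders for the project's M-files;
% their signatures are assumptions, not the project's actual interface.

rgbFrame  = imread('frame.jpg');                 % one captured RGB frame (3-D array)
grayFrame = rgb2gray(rgbFrame);                  % image conversion: RGB -> grayscale
roi       = imcrop(grayFrame, [50 50 200 200]);  % image windowing (example rectangle)

faceBoxes = facefind(roi);                       % SMQT-based face detection (assumed API)
outFrame  = plotbox(rgbFrame, faceBoxes);        % draw boxes along the face coordinates

imshow(outFrame)                                 % display the final image on the monitor
```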

CHAPTER NO. 3

DIGITAL IMAGE PROCESSING

3.1. INTRODUCTION

What is digital? Representing data in the form of numbers and operating on it by the use of discrete signals.

What is an image? An image is a 2-D representation, or an artifact that reproduces the appearance of some subject, usually a person or a physical object.

What is processing? Operations performed on data according to programmed instructions.

3.2. DEFINITION OF DIGITAL IMAGE PROCESSING

Digital image processing is the use of computer algorithms to perform image processing on digital images.

Equivalently, digital image processing is electronic data processing on a two-dimensional array of numbers, where the array is a numeric representation of an image.

3.3. THE OPERATIONS OF DIGITAL IMAGE PROCESSING

The operations of image processing can be divided into three major categories:

• Image extraction

• Image enhancement and restoration

• Image compression

3.4. FUTURE OF DIGITAL IMAGE PROCESSING

Nowadays digital image processing finds a wide range of uses in different modern applications; the following are some of them:

• Neural networks

• Expert systems

• Parallel processing

3.5. APPLICATIONS OF DIGITAL IMAGE PROCESSING

• Feature detection

• Remote sensing

• Computer vision

• Face detection

• Non-photorealistic rendering

• Lane departure warning system

• Morphological image processing

• Medical image processing

• Microscope image processing

CHAPTER NO.4

INTRODUCTION AND EXPLANATION OF

MATLAB

4.1. WHAT IS MATLAB?

MATLAB is a high-performance language for technical computing. It integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation. Typical uses include: math and computation; algorithm development; data acquisition; modeling, simulation, and prototyping; data analysis, exploration, and visualization; scientific and engineering graphics; and application development, including graphical user interface building.

Nowadays MATLAB has evolved considerably and is very widely used; in university courses it is the introductory tool for engineering, mathematics and many other subjects.

MATLAB features a family of add-on application-specific solutions called toolboxes.

Very important to most users of MATLAB, toolboxes allow you to learn and apply specialized

technology. Toolboxes are comprehensive collections of MATLAB functions (M-files) that

extend the MATLAB environment to solve particular classes of problems. Areas in which

toolboxes are available include signal processing, control systems, neural networks, fuzzy logic,

wavelets, simulation, and many others.

4.2. THE MAIN FEATURES OF MATLAB:

1. The ability to define one's own functions, together with a large collection of predefined mathematical functions.

2. Advanced algorithms for high-performance numerical computation in the field of matrix algebra.

3. Toolboxes for solving advanced problems in several application areas.

4. A powerful, matrix/vector-oriented high-level programming language for individual applications.

5. A complete online help system.

6. Two- and three-dimensional graphics for plotting and displaying data.

4.3. THE MATLAB SYSTEM:

The MATLAB system consists of five main parts:

4.3.1. The Development Environment:

This is the set of tools and facilities that help you use MATLAB functions and files.

Many of these tools are graphical user interfaces. It includes the MATLAB desktop and

Command Window, a command history, an editor and debugger, and browsers for viewing help,

the workspace, files, and the search path.

4.3.2. The MATLAB Mathematical Function Library:

This is a vast collection of computational algorithms ranging from elementary functions,

like sum, sine, cosine, and complex arithmetic, to more sophisticated functions like matrix

inverse, matrix eigenvalues, Bessel functions, and fast Fourier transforms.
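A few one-line calls illustrate the range of the library, from the elementary functions to the more sophisticated ones:

```matlab
% Sample calls into the mathematical function library.
A = [2 1; 1 3];

s = sum([1 2 3]);     % elementary: sums the vector
y = sin(pi/2);        % elementary trigonometric function
B = inv(A);           % matrix inverse
e = eig(A);           % matrix eigenvalues
F = fft([1 0 0 0]);   % fast Fourier transform of a unit impulse
```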

4.3.3. The MATLAB Language:

This is a high-level matrix/array language with control flow statements, functions, data

structures, input/output, and object-oriented programming features. It allows both "programming

in the small" to rapidly create quick and dirty throw-away programs, and "programming in the

large" to create large and complex application programs.

4.3.4. Graphics:

MATLAB has extensive facilities for displaying vectors and matrices as graphs, as well

as annotating and printing these graphs. It includes high-level functions for two-dimensional and

three-dimensional data visualization, image processing, animation, and presentation graphics. It

also includes low-level functions that allow you to fully customize the appearance of graphics as well as to build complete graphical user interfaces for your MATLAB applications. These facilities make it possible to plot the input and output images, and the histogram levels of each image, on their own plots, which makes the task easier for the human viewer.

4.3.5. The MATLAB Application Program Interface (API):

This is a library that allows you to write C and FORTRAN programs that interact with

MATLAB. It includes facilities for calling routines from MATLAB (dynamic linking), calling

MATLAB as a computational engine, and for reading and writing MAT-files.

4.4. MATLAB WORKING ENVIRONMENT:

4.4.1. MATLAB Desktop:

The MATLAB Desktop is the main MATLAB application window. It contains five sub-windows: the Command Window, the Current Directory window, the Workspace browser, the Command History window, and one or more figure windows, which are shown only when the user displays graphics.

The Command Window is where the user types expressions at the prompt '>>'. The Workspace browser shows the variables, and some information about them, that the user creates during a session and that MATLAB defines.

MATLAB uses the search path to find M-files, which are stored in directories on the computer. Any file to be used in MATLAB must either be in the current directory or in a directory on the search path; the MathWorks toolboxes are also on the search path. The Set Path dialog box is an important and much-used command in MATLAB: it shows which directories are on the path, and lets you modify the path or add to it. To use it, select Set Path from the File menu on the desktop, and the Set Path dialog box opens. A good practice is to add all commonly used directories to the MATLAB path, so that repeatedly used directories are always available.

The Command History window contains a record of the commands a user has entered in the Command Window, from both the current and previous MATLAB sessions. Previously entered MATLAB commands can be selected and re-executed by right-clicking on a command or sequence of commands in the Command History window; this action produces a menu from which to choose various options for executing those commands.

4.4.2. Creating M-Files Using the MATLAB Editor:

An M-file is denoted by the extension .m, as in pixelup.m. The MATLAB editor window has a number of pull-down menus for tasks such as debugging, viewing and saving files. The MATLAB editor is both a text editor specialized for creating M-files and a graphical MATLAB debugger. The editor can appear in a window by itself, or it can be a sub-window in the desktop. Because it performs some simple checks and uses color to differentiate between various elements of code, this text editor is the recommended tool of choice for writing and editing M-functions. Typing edit filename at the prompt opens the M-file filename.m in an editor window, ready for editing; the file must be in the current directory, or in a directory on the search path, as noted earlier.

4.4.3. MATLAB Getting Help:

The principal way to get help online is to use the MATLAB Help Browser, which is opened as a separate window either by typing helpbrowser at the prompt in the Command Window or by clicking the question-mark symbol (?) on the desktop toolbar. The Help Browser is a web browser integrated into the MATLAB desktop that displays Hypertext Markup Language (HTML) documents. The Help Browser consists of two panes: the display pane, used to view the information, and the help navigator pane, used to find information. The navigator pane's self-explanatory tabs are used, among other things, to perform searches.

4.5. MATLAB TOOLBOXES

4.5.1 Image Acquisition Toolbox

The Image Acquisition Toolbox supports:

1. Acquiring images through many types of image acquisition devices, from professional-grade frame grabbers to USB-based webcams

2. Viewing a preview of the live video stream

3. Triggering acquisitions (including external hardware triggers)

4. Configuring callback functions that execute when certain events occur

5. Bringing the image data into the MATLAB workspace

Many of the toolbox functions are MATLAB M-files. You can view the MATLAB code for

these functions using the statement

type function_name

You can extend the capabilities of the Image Acquisition Toolbox by writing your own

M-files, or by using the toolbox in combination with other toolboxes, such as the Image

Processing Toolbox and the Data Acquisition Toolbox. The Image Acquisition Toolbox also

includes a Simulink block, called the Video Input block, that can be used to bring live video data into a model.

4.5.2. Basic Image Acquisition Procedure

This section illustrates the basic steps required to create an image acquisition application

by implementing a simple motion detection application. The application detects movement in a

scene by performing a pixel-to-pixel comparison in pairs of incoming image frames. If nothing

moves in the scene, pixel values remain the same in each frame. When something moves in the

image, the application displays the pixels that have changed values. To use the Image

Acquisition Toolbox to acquire image data, you must perform the following basic steps:

Table 4.1: Image Acquisition Procedure

Step 1: Install and configure your image acquisition device

Step 2: Retrieve information that uniquely identifies your image acquisition device to the Image Acquisition Toolbox

Step 3: Create a video input object

Step 4: Preview the video stream (optional)

Step 5: Configure image acquisition object properties (optional)

Step 6: Acquire image data

Step 7: Clean up
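The steps above can be sketched end to end as follows. The 'winvideo' adaptor name and device ID 1 are assumptions for a generic Windows webcam; they should be replaced with the values retrieved in step 2.

```matlab
% Steps 2-7 for a generic Windows webcam (adaptor name and device ID assumed).

info = imaqhwinfo;                  % step 2: list the installed adaptors
vid  = videoinput('winvideo', 1);   % step 3: create the video input object
preview(vid)                        % step 4: preview the live video stream
set(vid, 'FramesPerTrigger', 1);    % step 5: configure object properties
start(vid)                          % step 6: acquire image data...
frame = getdata(vid, 1);            %         ...and bring one frame into the workspace
closepreview(vid)                   % step 7: clean up
delete(vid)
clear vid
```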

Step 1: Install Your Image Acquisition Device

Setup typically involves following the instructions that come with the image acquisition device:

1. Install the frame grabber board in your computer.

2. Install the software drivers required by the device.

3. Connect the camera to the connector on the frame grabber board.

4. Verify that the camera is working properly by running the application that came with it on your laptop or computer.

Generic Windows image acquisition devices do not require a frame grabber board: they can be connected directly to the laptop or computer over a FireWire port or USB. Once the image acquisition hardware is installed on your computer, start MATLAB by double-clicking the MATLAB icon on your desktop. No other MATLAB configuration is needed to perform image acquisition.

Step 2: Retrieve Hardware Information

To access a particular image acquisition device, the toolbox needs several pieces of information that uniquely identify it. This information is used when creating the image acquisition object. The following table lists these items, each of which is retrieved using the imaqhwinfo function.

TABLE 4.2: DEVICE INFORMATION

Adaptor name: An adaptor is the software that the toolbox uses to communicate with an image acquisition device via its device driver. The toolbox includes adaptors for certain vendors of image acquisition equipment and for particular classes of image acquisition devices.

Device ID: The number that the adaptor assigns to uniquely identify each image acquisition device with which it can communicate.

Video format: The video format specifies the image resolution (width and height) and other aspects of the video stream. Image acquisition devices typically support multiple video formats.

Determining the Adaptor Name

To determine the adaptor name, enter the imaqhwinfo function at the MATLAB prompt without any arguments.

Determining the Device ID

To find the device ID of a particular image acquisition device, enter the

imaqhwinfo function at the MATLAB prompt, specifying the name of the adaptor as the

only argument.

Determining the Supported Video Formats

To determine which video formats an image acquisition device supports, look at the DeviceInfo field of the data returned by imaqhwinfo. The DeviceInfo field is an array of structures, where each structure provides information about a particular device. To view the device information for a particular device, you can use the device ID as an index into the structure array. Alternatively, you can view the information for a particular device by calling the imaqhwinfo function, specifying the adaptor name and the device ID as arguments.

To get a list of the video formats supported by a device, look at the SupportedFormats field in the device information structure. The SupportedFormats field is a cell array of strings, where each string is the name of a video format supported by the device. In this process we obtain the properties of the webcam we are using.
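For example, a session for a generic Windows webcam might look as follows; the 'winvideo' adaptor name and device ID 1 are assumptions for illustration, and should be replaced with the values reported on your own system.

```matlab
% Querying hardware information for an assumed Windows webcam.

hw   = imaqhwinfo;              % installed adaptor names, e.g. {'winvideo'}
dev  = imaqhwinfo('winvideo');  % information about this adaptor's devices
info = dev.DeviceInfo(1);       % index into the structure array by device ID

info.DefaultFormat              % the device's default video format
info.SupportedFormats           % cell array of supported video format strings
```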

Step 3: Creating a Video Input Object

In this step you create the video input object that the toolbox uses to represent the connection between MATLAB and the image acquisition device. Using the properties of the video input object, you can control many aspects of the acquisition. (Connecting to the hardware provides more information about the acquisition objects.)

To create a video input object, use the videoinput function at the MATLAB prompt. The DeviceInfo structure returned by the imaqhwinfo function contains the default videoinput constructor syntax for a device in its ObjectConstructor field. For more information about the device information structure, see Determining the Supported Video Formats.

The following example creates a video input object for the DCAM adaptor; substitute the name of an adaptor available on your system.

vid = videoinput('dcam',1,'Y8_1024x768')

The videoinput function accepts three arguments: the adaptor name, the device ID, and the video format. This information was retrieved in step 2. The adaptor name is the only required argument. To determine the default video format, look at the DefaultFormat field in the device information structure. See Determining the Supported Video Formats for more information.

Instead of specifying the video format, you can optionally specify the name of a device configuration file, also known as a camera file. Device configuration files are typically supplied by frame grabber vendors, and contain all the configuration settings required to use a particular camera with the device. See Using Device Configuration Files (Camera Files) for more information.

Viewing the Video Input Object Summary

To view a summary of the video input object, enter the variable name vid at the MATLAB command prompt. The summary shows many of the object's characteristics, such as the trigger type, the number of frames to be acquired, and the current state of the object.

Step 4: Preview the Video Stream (Optional)

After you create the video input object, MATLAB is able to access the image acquisition device and is ready to acquire data. However, before you begin, you might want to see a preview of the video stream to make sure that the image is satisfactory. For example, you might want to change the position of the camera, change the lighting, correct the focus, or make some other change to your image acquisition setup.

To preview the video stream in this example, enter the preview function at the MATLAB prompt, specifying the video input object created in step 3 as an argument.

preview(vid)

The preview function opens a Video Preview figure window on your screen containing the live video stream. To stop the stream of live video, you can call the stoppreview function. To restart the preview stream, call preview again on the same video input object. While a preview window is open, the video input object sets the value of its Previewing property to 'on'. If you change characteristics of the image by setting image acquisition object properties, the image displayed in the preview window reflects the change.

To close the Video Preview window, click the Close button in the title bar or use the closepreview function, specifying the video input object as an argument.

closepreview(vid)

Calling closepreview without any arguments closes all open Video Preview windows.

FIGURE 4.1: Video Preview Window

Step 5: Configure Object Properties (Optional)

After creating the video input object and previewing the video stream, you might

want to modify characteristics of the image or other aspects of the acquisition process.

You accomplish this by setting the values of image acquisition object properties. This section:

1. Describes the types of image acquisition objects used by the toolbox

2. Describes how to view all the properties supported by these objects, with their

current values

3. Describes how to set the values of object properties

Types of Image Acquisition Objects

The toolbox uses two types of objects to represent the connection with the image acquisition device:

1. Video input objects

2. Video source objects

A video input object represents the connection between MATLAB and a video

acquisition device at a high level. The properties supported by the video input object are

the same for every type of device. You created a video input object using the videoinput

function in step 3.

When you create a video input object, the toolbox automatically creates one or

more video source objects associated with the video input object. Each video source

object represents a collection of one or more physical data sources that are treated as a

single entity. The number of video source objects the toolbox creates depends on the

device and the video format you specify. At any one time, only one of the video source

objects, called the selected source, can be active. This is the source used for acquisition.

For more information about these image acquisition objects, see Creating Image

Acquisition Objects.


Viewing Object Properties

To view a complete list of all the properties supported by a video input object or a

video source object, use the get function. To list the properties of the video input object

created in step 3, enter this code at the MATLAB prompt.

get(vid)

The get function lists all the properties of the object with their current values.

To view the properties of the currently selected video source object associated

with this video input object, use the getselectedsource function in conjunction with the get

function. The getselectedsource function returns the currently active video source. To list

the properties of the currently selected video source object associated with the video input

object created in step 3, enter this code at the MATLAB prompt.

get(getselectedsource(vid))

The get function lists all the properties of the object with their current values.


Setting Object Properties

To set the value of a video input object property or a video source object property,

use the set function, or you can reference the object properties as you would a field

in a structure, using dot notation.

Some properties are read only; you cannot set their values. These properties

typically provide information about the state of the object. Other properties become read

only when the object is running. To view a list of all the properties you can set, use the set

function, specifying the object as the only argument.

To implement continuous image acquisition, the example sets the TriggerRepeat property to Inf. To set this property using the set function, enter this code at the

MATLAB prompt.

set(vid,'TriggerRepeat',Inf);

To help the application keep up with the incoming video stream while processing

data, the example sets the FrameGrabInterval property to 5. This specifies that the object

acquire every fifth frame in the video stream. (You might need to experiment with the

value of the FrameGrabInterval property to find a value that provides the best response

with your image acquisition setup.) This example shows how you can set the value of an

object property by referencing the property as you would reference a field in a MATLAB

structure.

vid.FrameGrabInterval = 5;
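The effect of this property can be sketched outside MATLAB as plain decimation of the frame stream (a Python illustration, not toolbox code; the convention that counting starts at the first frame is an assumption here):

```python
# Illustration of frame-grab decimation: with an interval of 5,
# only every fifth frame of the incoming stream is logged.
def grab_frames(stream, interval):
    """Keep every `interval`-th frame (assumed to start from the first)."""
    return stream[::interval]

frames = list(range(1, 21))        # 20 incoming frames, numbered 1..20
print(grab_frames(frames, 5))      # [1, 6, 11, 16]
```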

To set the value of a video source object property, you must first use the

getselectedsource function to retrieve the object. (You can also get the selected source by


searching the video input object Source property for the video source object that has the

Selected property set to 'on'.)

vid_src = getselectedsource(vid);

set(vid_src,'Tag','motion detection setup');

Step 6: Acquiring the Image Data

After you create the video input object and configure its properties, you can acquire data. This is typically the core of any image acquisition application, and it involves these steps:

Starting the video input object

You start an object by calling the start function. Starting an object prepares the

object for data acquisition. For example, starting an object locks the values of certain

object properties (they become read only). Starting an object does not initiate the

acquiring of image frames, however. The initiation of data logging depends on the

execution of a trigger.

Triggering the acquisition

To acquire data, a video input object must execute a trigger. Triggers can occur in

several ways, depending on how the TriggerType property is configured. For

example, if you specify an immediate trigger, the object executes a trigger automatically,

immediately after it starts. If you specify a manual trigger, the object waits for a call to

the trigger function before it initiates data acquisition. For more information, see

Acquiring Image Data. A familiar analogy is a GUI in which the program does not begin executing until you press the Start button.

Bringing data into the MATLAB workspace

The toolbox stores acquired data in a memory buffer, a disk file, or both,

depending on the value of the video input object's LoggingMode property. To work with


this data, you must bring it into the MATLAB workspace. To bring multiple frames into

the workspace, use the getdata function.

Step 7: Clean Up

When you finish using your image acquisition objects, you can remove them from

memory and clear the MATLAB workspace of the variables associated with these objects.

delete(vid)

clear

close(gcf)

4.6 IMAGE PROCESSING TOOLBOX

4.6.1. Reading Images

Images are read into the MATLAB environment using the imread function. The syntax is

imread('filename')

MORPHOLOGICAL OPERATIONS

Morphology is a broad set of image processing operations that process images

based on shapes. Morphological operations apply a structuring element to an input image,

creating an output image of the same size. The most basic morphological operations are

dilation and erosion. In a morphological operation, the value of each pixel in the output

image is based on a comparison of the corresponding pixel in the input image with its

neighbors. By choosing the size and shape of the neighborhood, you can construct a

morphological operation that is sensitive to specific shapes in the input image. You can

use these functions to perform common image processing tasks, such as contrast

enhancement, noise removal, thinning, skeletonization, filling, and segmentation.


Dilation and Erosion

Dilation and erosion are two fundamental morphological operations. Dilation adds

pixels to the boundaries of objects in an image, while erosion removes pixels on object

boundaries. The number of pixels added or removed from the objects in an image depends

on the size and shape of the structuring element used to process the image.

Understanding Dilation and Erosion

The rule used to process the pixels defines the operation as a dilation or an erosion. The table below lists the rules for both operations. In morphological dilation and erosion, the state of any given pixel in the output image is determined by applying the rule to the corresponding pixel and its neighbors in the input image.

TABLE 4.3: DILATION AND EROSION OPERATIONS

Operation Rule

Dilation The value of the output pixel is the maximum value of all the pixels in the input pixel's neighborhood. In a binary image, if any of the pixels in the neighborhood is set to the value 1, the output pixel is set to 1.

Erosion The value of the output pixel is the minimum value of all the pixels in the input pixel's neighborhood. In a binary image, if any of the pixels in the neighborhood is set to 0, the output pixel is set to 0.
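These rules can be sketched directly in code: each output pixel is the max (dilation) or min (erosion) over the neighborhood selected by the structuring element. The following is a NumPy illustration of the rules, not the toolbox implementation:

```python
import numpy as np

def morph(img, se, op, fill):
    """Apply the Table 4.3 rule: each output pixel is op (max for
    dilation, min for erosion) over the neighborhood marked by 1's
    in the structuring element se (odd size, origin at its centre).
    `fill` pads the border so it does not bias the result."""
    pad = se.shape[0] // 2
    padded = np.pad(img, pad, constant_values=fill)
    out = np.zeros_like(img)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            window = padded[r:r + se.shape[0], c:c + se.shape[1]]
            out[r, c] = op(window[se == 1])
    return out

bw = np.zeros((7, 7), dtype=int)
bw[2:5, 2:5] = 1                        # a 3-by-3 square object
se = np.ones((3, 3), dtype=int)         # 3-by-3 square structuring element
dil = morph(bw, se, np.max, 0)          # adds a rank of 1's on every side
ero = morph(bw, se, np.min, 1)          # shrinks the object to its centre
```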

Structuring Elements

An essential part of the dilation and erosion operations is the structuring element

used to probe the input image. A structuring element is a matrix consisting of only 0's and

1's that can have any arbitrary shape and size. The pixels with values of 1 define the

neighborhood.

Two-dimensional, or flat, structuring elements are typical and are usually much smaller than the image being processed. The center pixel of the structuring element, called the origin, identifies the pixel of the image being processed. The pixels in the structuring element that contain 1's define the neighborhood of the structuring element, and these pixels are the ones considered during dilation or erosion processing.


Nonflat, or three-dimensional, structuring elements use 0's and 1's to define the extent of the structuring element in the x- and y-planes and add height values to define the third dimension.

Dilating an Image

You can use the imdilate function to dilate an image. The imdilate function accepts two primary arguments: the input image to be processed (grayscale, binary, or packed binary), and a structuring element object, returned by the strel function, or a binary matrix defining the neighborhood of a structuring element.

imdilate also accepts two optional arguments: PACKOPT and PADOPT. The PACKOPT argument identifies the input image as packed binary. (Packing is a method of compressing binary images that can speed up the processing of the image; see the bwpack reference page for information.) The PADOPT argument affects the size of the output image. A simple example dilates a binary image containing one rectangular object.

The example uses a 3-by-3 square structuring element object to expand all sides of the

foreground component.


To dilate the image, pass the image BW and the structuring element SE to the imdilate function. Note how dilation adds a rank of 1's to all sides of the foreground object.

FIGURE 4.4 SHOWING THE OUTPUT FOR THE DILATION OPERATION

IMAGE RECONSTRUCTION

Morphological reconstruction is another important part of morphological image processing. Based on dilation, morphological reconstruction has these unique properties:

1. Processing repeats until the image values stop changing.

2. Processing is based on two images, a marker and a mask, rather than on one image and a structuring element.

3. Processing is based on the concept of connectivity, rather than a structuring element.

Marker and Mask

Morphological reconstruction processes one image, called the marker, based on

the characteristics of another image, called the mask. The high points, or peaks, in the

marker image specify where processing begins. The processing continues until the image

values stop changing.
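Reconstruction by dilation can be sketched as repeatedly dilating the marker and clipping the result against the mask until the image stops changing (a NumPy illustration under 4-connectivity; the toolbox provides this operation as imreconstruct):

```python
import numpy as np

def reconstruct(marker, mask):
    """Reconstruction by dilation: dilate the marker with a
    4-connected neighbourhood, clip against the mask, and repeat
    until the image values stop changing."""
    prev = None
    cur = np.minimum(marker, mask)
    while prev is None or not np.array_equal(cur, prev):
        prev = cur
        p = np.pad(prev, 1, constant_values=0)
        # 4-connected dilation: max over centre and N/S/E/W neighbours
        dil = np.maximum.reduce([p[1:-1, 1:-1], p[:-2, 1:-1],
                                 p[2:, 1:-1], p[1:-1, :-2], p[1:-1, 2:]])
        cur = np.minimum(dil, mask)     # never grow beyond the mask
    return cur

mask = np.array([[0, 1, 1, 0, 1],
                 [0, 1, 1, 0, 1],
                 [0, 0, 0, 0, 1]])
marker = np.zeros_like(mask)
marker[0, 1] = 1                        # a peak inside the left object
out = reconstruct(marker, mask)         # recovers only the marked object
```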

Pixel Connectivity

Morphological processing starts at the peaks in the marker image and spreads

throughout the rest of the image based on the connectivity of the pixels. Connectivity

defines which pixels are connected to other pixels. A set of pixels in a binary image that

form a connected group is called an object or a connected component.


Choosing a Connectivity

The type of neighborhood you choose affects the number of objects found in an

image and the boundaries of those objects. For this reason, the results of many

morphology operations often differ depending upon the type of connectivity you specify.

Specifying Custom Connectivities

You can also define custom neighborhoods by specifying a 3-by-3-by-...-by-3

array of 0's and 1's. The 1-valued elements define the connectivity of the neighborhood

relative to the center element.
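The effect of the chosen connectivity can be demonstrated by counting objects: two diagonally touching pixels form one object under 8-connectivity but two objects under 4-connectivity. A small Python flood-fill illustration:

```python
def count_objects(img, neighbours):
    """Count connected components in a binary image (list of lists)
    by flood fill, for a given list of neighbour offsets."""
    h, w = len(img), len(img[0])
    seen, count = set(), 0
    for r in range(h):
        for c in range(w):
            if img[r][c] and (r, c) not in seen:
                count += 1
                stack = [(r, c)]
                while stack:
                    y, x = stack.pop()
                    # skip visited, out-of-bounds, or background pixels
                    if ((y, x) in seen or not (0 <= y < h and 0 <= x < w)
                            or not img[y][x]):
                        continue
                    seen.add((y, x))
                    stack += [(y + dy, x + dx) for dy, dx in neighbours]
    return count

N4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]           # 4-connected
N8 = N4 + [(-1, -1), (-1, 1), (1, -1), (1, 1)]    # 8-connected
bw = [[1, 0, 0],
      [0, 1, 0],
      [0, 0, 0]]
print(count_objects(bw, N4), count_objects(bw, N8))   # 2 1
```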

FIGURE 4.4.CONNECTIVITIES IN AN IMAGE

Tracing Boundaries

The toolbox includes two functions you can use to find the boundaries of objects

in a binary image:

bwtraceboundary

bwboundaries


I = imread('colornumber.JPG');

imshow(I)

FIGURE 4.5 : INPUT IMAGE


BW = im2bw(I);

imshow(BW)


FIGURE 4.6: CONVERSION TO BINARY IMAGES

FIGURE 4.7: TRACING THE BOUNDARY FOR ONLY ONE PARTICULAR

OBJECT IN AN IMAGE


Figure 4.8: Starting-point boundary notation

Using Median Filtering

The following example compares an averaging filter and medfilt2 for removing salt and pepper noise. This type of noise consists of random pixels being set to black or white (the extremes of the data range). In both cases the size of the neighborhood used for filtering is 3-by-3. An input image and a threshold value are supplied, and values above that threshold are eliminated.
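The difference between the two filters is visible on a single 3-by-3 neighborhood: one corrupted extreme value drags the average but leaves the median untouched (a plain Python illustration with made-up pixel values):

```python
def mean3x3(window):
    """Average of a 3-by-3 neighborhood (averaging filter output)."""
    flat = [v for row in window for v in row]
    return sum(flat) / len(flat)

def median3x3(window):
    """Median of a 3-by-3 neighborhood (median filter output)."""
    flat = sorted(v for row in window for v in row)
    return flat[len(flat) // 2]

# A flat grey patch (value 100) with one "salt" pixel set to 255.
w = [[100, 100, 100],
     [100, 255, 100],
     [100, 100, 100]]
print(mean3x3(w))     # ~117.2 -- the outlier leaks into the output
print(median3x3(w))   # 100    -- the outlier is rejected
```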


Figure 4.10: Addition of Noise

3. Filter the noisy image with an averaging filter and display the result.

K = filter2(fspecial('average',3),J)/255;

figure, imshow(K)

Figure 4.11: Average Filtering

4. Now use the median filter to filter the noisy image and display the result. Notice that medfilt2 does a better job of removing noise, with less blurring of the edges.

L = medfilt2(J,[3 3]);

figure, imshow(K)

figure, imshow(L)


Figure 4.12: MEDIAN FILTERING IMAGE

4.7. GRAPHICAL USER INTERFACE

GUIDE, the MATLAB Graphical User Interface development environment, provides a

set of tools for creating graphical user interfaces (GUIs). These tools simplify the process of

laying out and programming GUIs.

Laying Out a GUI

The GUIDE Layout Editor enables you to lay out a GUI by clicking and dragging GUI components, such as sliders and text fields, into the layout area. Other tools accessible from the Layout Editor enable you to modify the look of components, set the GUI size, set the tab order and list the component objects. The following topic, Laying Out a Simple GUI, uses some of these tools to show you the basics of laying out a GUI. GUIDE Tools Summary describes the tools.

Programming the GUI

When you save your GUI layout, GUIDE automatically generates an M-file that you can

use to control how the GUI works. This M-file provides code to initialize the GUI and contains a

framework for the GUI callbacks -- the routines that execute in response to user-generated events

such as a mouse click. Using the M-file editor, you can add code to the callbacks to perform the

functions you want them to. Programming the GUI shows you what code to add to the example

M-file to make the GUI work.


Guide

Open GUI Layout Editor

Syntax

guide

guide('filename.fig')

guide(figure_handles)

Panel

Panels group GUI components and can make a GUI easier to understand by visually

grouping related controls. A panel can contain panels and button groups as well as axes and user

interface controls such as push buttons, sliders, pop-up menus, etc. The position of each

component within a panel is interpreted relative to the lower left corner of the panel.

If the GUI is resized, the panel and its components are also resized, but you may want to

adjust the size and position of the panel's components. You can do this if you set the GUI Resize

behavior to Other (Use ResizeFcn) and provide a ResizeFcn callback for the panel.

To set Resize behavior for the figure to Other (Use ResizeFcn), select GUI Options from

the Layout Editor Tools menu. This example of a ResizeFcn callback retrieves the handles of its children from its Children property. (The order in which components appear in a child list is the reverse of the order in which they tab; see Setting Tab Order for information about using the Tab Order Editor to determine the tab order.) It then modifies the Position properties of the child components relative to the panel's new size. The Position property is a four-element vector that

specifies the size of the component and its location with respect to the bottom left corner of its

parent. The vector is of the form:

[left bottom width height]

The example assumes the panel Units property is normalized. That is, units map the

lower-left corner of the panel to (0,0) and the upper-right corner to (1.0,1.0). Because of this, the

example can modify the component size relative to the panel size without knowing the actual


size of the panel. It positions the left edge of a button 20% of the panel's width from the left edge

of the panel and its bottom edge 40% of the panel's height from the bottom of the panel. It sets

the button's width to 50% of the panel's width and its height to 20% of the panel's height.

function uipanel1_ResizeFcn(hObject, eventdata, handles)

children = get(hObject,'Children');

% Get the current position of the first child component.

child_position = get(children(1),'Position');

% Reposition it relative to the panel's normalized units.

child_position = [.20,.40,.50,.20];

set(children(1),'Position',child_position);

% Proceed with callback...
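With normalized units, the four-element Position vector is just a set of fractions of the panel size; converting it to pixels is a simple scaling (a Python illustration with a hypothetical 300-by-200-pixel panel):

```python
def to_pixels(norm_pos, panel_w, panel_h):
    """Convert a normalized [left, bottom, width, height] Position
    vector into pixel coordinates of the parent panel."""
    left, bottom, width, height = norm_pos
    return [round(left * panel_w), round(bottom * panel_h),
            round(width * panel_w), round(height * panel_h)]

# The button from the example: left edge 20% in, bottom edge 40% up,
# width 50% of the panel's width, height 20% of its height.
print(to_pixels([0.20, 0.40, 0.50, 0.20], 300, 200))   # [60, 80, 150, 40]
```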

Button Group

Button groups are like panels, but can be used to manage exclusive selection behavior for

radio buttons and toggle buttons. The button group's SelectionChangeFcn callback is called

whenever a selection is made.

For radio buttons and toggle buttons that are managed by a button group, you must

include the code to control them in the button group's SelectionChangeFcn callback function, not

in the individual component Callback functions. A button group overwrites the Callback

properties of radio buttons and toggle buttons that it manages.

This example of a SelectionChangeFcn callback uses the Tag property of the selected

object to choose the appropriate code to execute.

function uibuttongroup1_SelectionChangeFcn(hObject,eventdata,handles)

switch get(hObject,'Tag') % Get Tag of selected object

case 'radiobutton1'

% Code for when radiobutton1 is selected goes here.

case 'radiobutton2'


% Code for when radiobutton2 is selected goes here.

% Continue with more cases as necessary.

end

Axes

Axes components enable your GUI to display graphics, such as graphs and images. This topic provides two examples: Plotting to a GUI with a Single Axes, and Plotting to a GUI with Multiple Axes.

Plotting to a GUI with a Single Axes

If a GUI contains only one axes, MATLAB automatically generates the plot in that axes.

In most cases, you create a plot in an axes from a callback that belongs to some other component

in the GUI. For example, pressing a button might trigger the plotting of a graph to an axes. In

this case, the button's Callback callback contains the code that generates the plot. This example

plots a graph in an axes when a button is pressed. The following figure shows a push button and

an axes as they might appear in the Layout Editor.

For a button group, contrary to the automatically generated comment, hObject is the

handle of the selected radio button or toggle button, and not the handle of the button group. Use

handles.uipaneln as the handle of the button group, where uipaneln is the Tag of the button

group. The code above refers to the handle of uipanel1 rather than the handle of uibuttongroup1,

because a button group is a kind of panel.

Figure 4.7.1: Axes


Creation of Graphical User Interfaces (GUIs)

Edit Text Components and Programming the Slider

The GUI employs a useful combination of components: each slider is coupled to an edit text component that displays the slider's current value. The user can also update the slider's value by entering a number into the edit text box. Both components update the model parameters when activated by the user.

Slider Callback

Two sliders are used to specify the model parameters; sliders are well suited to selecting a value from a continuous range. When the user changes the slider value, the callback executes the following steps: it calls model_open to ensure that the Simulink model is open so that its parameters can be set, gets the new value for the Kf gain from the slider, sets the current-value edit text component to the new value, and sets the appropriate block parameter to the new value (set_param).

Here is the callback for the Proportional (Kf) slider.

function KfValueSlider_Callback(hObject, eventdata, handles)

% Ensure model is open.

model_open(handles)

% Get the new value for the Kf Gain from the slider.

NewVal = get(hObject, 'Value');

% Set the value of the KfCurrentValue to the new value

% set by slider.

set(handles.KfCurrentValue,'String',NewVal)


% Set the Gain parameter of the Kf Gain Block to the new value.

set_param('f14/Controller/Gain','Gain',num2str(NewVal))

Note that uicontrols do not automatically convert values to the correct type: the slider returns a number, while the edit text component requires a string, so the value must be converted explicitly. The callback for the integral (Ki) slider follows a similar approach.

Current Value Edit Text Callback

When the user clicks on another component in the GUI after typing into the text box, the edit text callback executes the following steps: it calls model_open to ensure that the Simulink model is open so that it can set simulation parameters; it converts the string returned by the edit box's String property to a double; it checks whether the value lies within the allowable range of the slider; if it does, it updates the slider's Value property and sets the appropriate block parameter to the new value (set_param); if it does not, it resets the edit text String property to the slider's current value, rejecting the number the user typed.

Here is the callback for the Kf Current value text box.

function KfCurrentValue_Callback(hObject, eventdata, handles)

% Ensure model is open.

model_open(handles)

% Get the new value for the Kf Gain.

NewStrVal = get(hObject, 'String');

NewVal = str2double(NewStrVal);

% Check that the entered value falls within the allowable range.


if isempty(NewVal) || (NewVal< -5) || (NewVal>0),

% Revert to last value, as indicated by KfValueSlider.

OldVal = get(handles.KfValueSlider,'Value');

set(hObject, 'String',OldVal)

else % Use new Kf value

% Set the value of the KfValueSlider to the new value.

set(handles.KfValueSlider,'Value',NewVal)

% Set the Gain parameter of the Kf Gain Block

% to the new value.

set_param('f14/Controller/Gain','Gain',NewStrVal)

end

The callback for the Ki Current value follows a similar approach.
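The validate-or-revert logic of such an edit text callback can be sketched in a few lines (a Python illustration; the allowable range -5 to 0 matches the Kf example above):

```python
def on_edit(text, slider_value, lo=-5.0, hi=0.0):
    """Parse the typed text; if it is a number in [lo, hi] accept it,
    otherwise revert to the slider's current value."""
    try:
        val = float(text)
    except ValueError:
        return slider_value            # not a number: revert
    if lo <= val <= hi:
        return val                     # accept: slider would be updated
    return slider_value                # out of range: revert

print(on_edit("-2.5", -1.0))   # -2.5  (accepted)
print(on_edit("3", -1.0))      # -1.0  (out of range, reverted)
print(on_edit("abc", -1.0))    # -1.0  (not a number, reverted)
```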

Running the Simulation from the GUI

The GUI's Simulate and store results button callback runs the model simulation and stores the results in the handles structure. Storing data in the handles structure simplifies the process of passing data to other callback functions, since the structure can be passed as an argument. When the user clicks the Simulate and store results button, the callback executes the following steps: it calls sim, which runs the simulation and returns the data used for plotting; it creates a structure containing the simulation results, the current slider values, and the run name and number; it stores this structure in the handles structure; and it updates the results list to show the most recent run.

Here is the Simulate and store results button callback.


function SimulateButton_Callback(hObject, eventdata, handles)
[timeVector,stateVector,outputVector] = sim('f14');
% Retrieve old results data structure
if isfield(handles,'ResultsData') && ~isempty(handles.ResultsData)
ResultsData = handles.ResultsData;
% Determine the maximum run number currently used.
maxNum = ResultsData(length(ResultsData)).RunNumber;
ResultNum = maxNum+1;
else % Set up the results data structure
ResultsData = struct('RunName',[],'RunNumber',[],...
'KiValue',[],'KfValue',[],'timeVector',[],...
'outputVector',[]);
ResultNum = 1;
end
if isequal(ResultNum,1)
% Enable the Plot and Remove buttons
set([handles.RemoveButton,handles.PlotButton],'Enable','on')
end
% Get Ki and Kf values to store with the data and put in the
% results list.
Ki = get(handles.KiValueSlider,'Value');
Kf = get(handles.KfValueSlider,'Value');
ResultsData(ResultNum).RunName = ['Run',num2str(ResultNum)];
ResultsData(ResultNum).RunNumber = ResultNum;
ResultsData(ResultNum).KiValue = Ki;
ResultsData(ResultNum).KfValue = Kf;
ResultsData(ResultNum).timeVector = timeVector;
ResultsData(ResultNum).outputVector = outputVector;
% Build the new results list string for the listbox
ResultsStr = get(handles.ResultsList,'String');
if isequal(ResultNum,1)
ResultsStr = {['Run1 ',num2str(Kf),' ',num2str(Ki)]};
else
ResultsStr = [ResultsStr;...
{['Run',num2str(ResultNum),' ',num2str(Kf),' ', ...
num2str(Ki)]}];
end
set(handles.ResultsList,'String',ResultsStr);
% Store the new ResultsData
handles.ResultsData = ResultsData;


guidata(hObject, handles)

4.8 THE SUCCESSIVE MEAN QUANTIZATION TRANSFORM

The Successive Mean Quantization Transform (SMQT) has been described and applied in both speech processing and image processing. In speech recognition it is used as an additional processing step for the mel-frequency cepstral coefficients. The SMQT reveals the structure of the data while removing properties such as bias and gain. In image processing the transform is applied for dynamic range compression and automatic image enhancement.

4.8.1 Introduction

Robust and reliable feature extraction is an important task in pattern recognition. Different sensors and analog-to-digital conversions can have different impacts on the performance of a system; such discrepancies may occur due to differences in bias and gain. The main aim of the SMQT is to remove these sensor disparities in bias and gain while extracting the structure of the data in an efficient manner. The problem of dynamic range compression can also be viewed as a structure extraction problem [1]. More recently, the Modified Census Transform [3] has emerged; the first level of the SMQT has similarities with the modified census transform, but there the structure kernels are only one-bit structures. The SMQT extends the structure representation to an arbitrary predefined number of levels and to arbitrary-dimensional data, and it has been applied in both speech processing and image processing. In speech recognition, MFCCs are commonly used as the front end to a hidden Markov model (HMM) [4, 5], and a mismatch between training and testing conditions is a problem that commonly occurs.

Techniques to overcome this mismatch adjust the parameters of the hidden Markov models, for example by parallel model combination (PMC): fast PMC, PMC with Jacobian adaptation, numerical integration, log-add approximation and weighted PMC [6, 7]. Some such operation is mandatory if the gain and bias of the signal vary, since variations in bias and gain propagate from the signal into the MFCCs. An extra step in the calculation of the MFCCs can remove this signal change. In image processing, rendering detail well is a strong requirement in several areas; detailed images with good contrast are required, for example, in remote sensing, fault detection and biomedical analysis [8]. Performing these tasks automatically, without human intervention, calls for automatic image enhancement, and different techniques and approaches to this problem exist [8, 9, 10, 11]. The Successive Mean Quantization Transform is an approach that performs an automatic structural breakdown of the information, with a progressive focus on the details of the image.

4.8.2 Description of the SMQT

Let x denote a data point and V(x) the value of that data point, and let D denote the set of data points to be transformed. The data points can be organized arbitrarily, i.e. D can be a vector, a matrix or some other arbitrary form. The transform, denoted SMQT_L, takes the set D and one input parameter, the level L; the output set, denoted M(x), has the same structure as the input.

The SMQT_L can be described by a binary tree whose vertices are Mean Quantization Units (MQUs). A Mean Quantization Unit (MQU) consists of three steps: a mean calculation, a quantization and a split of the input set. As the first step, the MQU finds the mean of the values of the data,

m = (1/|D|) * sum of V(x) over all x in D.

The next step uses this mean to quantize the value of each data point into {0,1},

U(x) = 1 if V(x) > m, and U(x) = 0 otherwise.

Letting (+) denote concatenation, the mean quantized set

U = (+) over x in D of U(x)

is the main output of the MQU. The third step splits the input set into the two subsets

D_0 = {x | V(x) <= m} and D_1 = {x | V(x) > m},

which become the inputs of the two MQUs at the next level of the tree; see Figure 4.8.1.


Figure 4.8.1: The operation of one Mean Quantization Unit (MQU).

Figure 4.8.2: The SMQT as a binary tree of MQUs.

The final SMQT_L output is found by weighting the quantized values of the data points from all the levels: the outputs at level l are weighted by 2^(L-l). Thus the result can be found as

M(x) = sum over l = 1, ..., L of U_l(x) * 2^(L-l).

As a consequence of this weighting, the number of quantization levels, denoted Q_L, is Q_L = 2^L for a transform of level L. The MQU is insensitive to bias and gain, and the MQUs are the basic building blocks of the SMQT.
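The binary tree of MQUs can be sketched compactly in code: each level contributes one bit, weighted by 2^(L-l) (a Python illustration following the description above; for level L each data point receives a code in {0, ..., 2^L - 1}):

```python
def smqt(values, L):
    """Successive Mean Quantization Transform of a list of values.
    Returns integer codes in {0, ..., 2**L - 1}."""
    out = [0] * len(values)

    def mqu(indices, level):
        if level > L or not indices:
            return
        m = sum(values[i] for i in indices) / len(indices)   # mean calculation
        d0, d1 = [], []
        for i in indices:
            if values[i] > m:                                # quantization
                out[i] += 1 << (L - level)                   # weight 2^(L-l)
                d1.append(i)
            else:
                d0.append(i)
        mqu(d0, level + 1)                                   # split: the subsets
        mqu(d1, level + 1)                                   # feed the next level
    mqu(list(range(len(values))), 1)
    return out

x = [10, 20, 30, 40]
print(smqt(x, 2))                       # [0, 1, 2, 3]
```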

4.8.3 SMQT in Speech Processing

The MFCCs are a frequently used speech parameterization in speech recognizers. Since the MFCCs are calculated directly from the speech frames, differences in bias and gain in the speech signal propagate into the coefficients and can create a mismatch between frames; another definite source of difference is a change in the shape, or structure, of the speech frames. The motivation for the Successive Mean Quantization Transform Mel-Frequency Cepstral Coefficients (SMQT-MFCC) is to remove the bias and gain disparity between training and testing. The basic steps in the calculation of the MFCC and the SMQT-MFCC can be found in figure 4.8.3.


Fig 4.8.3: The steps from speech signal to coefficients for the MFCC and the SMQT-MFCC.

The test signal is band-limited speech, correctly sampled at 16 kHz, of the word "one" pronounced by a male speaker. The signal is then modified in bias, gain and signal-to-noise ratio (SNR) level. As fig 4.8.4 shows, speech frames of 20 ms are used, and the SMQT8 is applied to the frames.

Fig 4.8.4: Comparison between the SMQT-MFCC and the MFCC.

The SMQT-MFCC has an exact match between (A), (B) and (C), which the MFCC does not, since the SMQT is independent of bias and gain. Both the SMQT-MFCC and the MFCC are, however, affected by different degrees of white Gaussian noise. An augmentation of the SMQT-MFCC with information about the signal level might therefore be desirable. It should be emphasized that this separation between the level and the structure of the signal can also be used elsewhere in speech processing.
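The per-frame processing described above can be sketched as follows. This is an illustrative Python fragment (the helper names `frames` and `mqu` are my own, and the thesis code itself is MATLAB): it cuts a 16 kHz signal into 20 ms frames and applies one mean-quantization step per frame, whose output is unchanged by a positive gain and a bias:

```python
def frames(x, fs=16000, frame_ms=20):
    """Split a sampled signal into non-overlapping frames.

    At fs = 16 kHz and 20 ms per frame, each frame holds 320 samples.
    """
    n = fs * frame_ms // 1000
    return [x[i:i + n] for i in range(0, len(x) - n + 1, n)]

def mqu(frame):
    """One Mean Quantization Unit: quantize each sample around the frame mean.

    A positive gain and a bias shift the mean in the same way as the samples,
    so each comparison (and hence the output) is unaffected.
    """
    m = sum(frame) / len(frame)
    return [1 if v > m else 0 for v in frame]
```

A full SMQT8-MFCC front end would apply the SMQT to this depth 8 per frame before the mel filterbank and cepstral steps.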

4.8.4 SMQT in Image Processing

Applying the transform to a whole image at different levels gives an illustrative demonstration of the operation of the SMQT; see Figure 4.8.5.

[Figure 4.8.5: six image panels, (A)-(F).]

Figure 4.8.5. Example of the SMQT applied to a whole image, dynamic range 0-255 (8 bit). (A) original image.

The number of bits used to describe the transformed image is denoted as the level L of the SMQT. Thus an SMQT1 image has a one-bit representation {0,1} and an SMQT2 image has a two-bit representation {0,1,2,3}; see images (B) and (C) respectively. An SMQT8 image covers the full 8-bit dynamic range and yields an uncompressed image with enhanced details.

For comparison, histogram equalization [9] was conducted; see fig 4.8.6. Histogram equalization produces artifacts and oversaturation in several areas of the images, and notice how it has a tendency to yield unnatural equalized images. These effects do not occur, or are easily limited, in the SMQT-enhanced images. The SMQT also requires fewer adjustments than comparable techniques [8,10,11] and has less computational complexity.

Figure 4.8.6: In the figure above, the top row is the actual image, the central row shows the SMQT8 image, and the bottom row shows the histogram-equalized image. Note the difference on the forehead in the face images.
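For reference, the histogram equalization baseline compared against in fig 4.8.6 can be sketched as the classic CDF remapping. This is an illustrative Python version, not the code used in the thesis, and it assumes the image contains at least two distinct grey levels:

```python
def hist_equalize(pixels, levels=256):
    """Classic histogram equalization for grayscale pixel values 0..levels-1.

    Remaps each value through the cumulative histogram (CDF), stretching
    the used intensity range across the full dynamic range.
    Assumes at least two distinct grey levels (otherwise n == cdf_min).
    """
    hist = [0] * levels
    for v in pixels:
        hist[v] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)  # smallest nonzero CDF value
    n = len(pixels)
    return [round((cdf[v] - cdf_min) / (n - cdf_min) * (levels - 1))
            for v in pixels]
```

A low-contrast patch such as [100, 100, 101, 102] is stretched to the full 0-255 range; this aggressive stretching is what produces the oversaturation artifacts noted above.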

4.8.5 Application of SQMT in Image Processing

A new transform, denoted the Successive Mean Quantization Transform (SMQT), has been presented; its steps reveal the organization of the underlying structure of the data. The transform extracts the structure of the signal in a robust manner and in a way that makes it insensitive to changes in the gain and the bias of the signal.

The transform has been applied as an extension of the MFCC, and the SMQT-MFCC was introduced. The main advantage of the SMQT-MFCC is that it separates the structure from the signal level and rejects the latter, which implies that the SMQT-MFCC is robust to gain and bias differences in the speech signal. The transform has also been applied to automatic enhancement of images, where its performance has been compared with that of histogram equalization, showing the advantage of the SMQT.

CHAPTER NO.5

SOURCE CODE

5.1. ysn_facefind.m file

clear % note: "clear all" may cause an error (link to webcam lost) if VFM is activated at run start
close all

vid = videoinput('winvideo');
preview(vid);
pause(2);

% Extra check that VFM works
try
    x = getsnapshot(vid);
catch
    disp('VFM not found (vfm.dll) or webcam not connected.')
    disp('Download VFM from: http://www2.cmp.uea.ac.uk/~fuzz/vfm/default.html')
    disp('If you forgot to connect your webcam: restart Matlab and run again.')
    return % "break" is only valid inside a loop
end

disp('Press CTRL-C to break.')
while 1
    tic
    x = getsnapshot(vid);
    x = rgb2gray(x); % Image Processing Toolbox dependent
    x = double(x);
    hold off
    clf
    cla
    imagesc(x); colormap(gray)
    ylabel('Press CTRL-C to break.')
    % speed up detection: jump 2 pixels and set the minimum face size to 48 pixels
    [output,count,m] = facefind(x,48,[],2,2);
    % plotsize(x,m)
    plotbox(output)
    drawnow; drawnow;
    t = toc;
    % note that the FPS calculation includes grabbing the image,
    % displaying it and the detection
    title(['Frames Per Second (FPS): ' num2str(1/t) ...
        ' Number of patches analyzed: ' num2str(count)])
    drawnow; drawnow;
end

5.2. facefind.m file

function [output,count,m,svec]=facefind(x,minf,maxf,dx,dy,sens);

5.3. plotbox.m file

function plotbox(output,col,t);


%function plotbox(output,col,t);

%

%INPUT:

%output - output from face detection (see facefind.m)

%col - color for detection boxes [default [1 0 0]]

%t - thickness of lines [default 2]

if ((nargin<2) || isempty(col))

col=[1 0 0];

end

if ((nargin<3) || isempty(t))

t=2;

end

N=size(output,2);

if (N>0)

for i=1:N

x1=output(1,i);

x2=output(2,i);

y1=output(3,i);

y2=output(4,i);

boxv([x1 x2 y1 y2],col,t)

end

end

function boxv(vec,col,t);

%function boxv(vec,col,t);

%

%INPUT:


%vec - vector with corner coordinates ([x1 x2 y1 y2]) for a box

%col - color [default [1 0 0]]

%t - thickness of lines [default 0.5]

if ((nargin<2) || isempty(col))

col=[1 0 0];

end

if ((nargin<3) || isempty(t))

t=0.5;

end

ind=find(isinf(vec));%special case if coordinate is Inf

a=200;%should be realmax, but bug in Matlab? (strange lines occur)

vec(ind)=sign(vec(ind))*a;

h1=line([vec(1) vec(2)],[vec(3) vec(3)]);

h2=line([vec(2) vec(2)],[vec(3) vec(4)]);

h3=line([vec(1) vec(2)],[vec(4) vec(4)]);

h4=line([vec(1) vec(1)],[vec(3) vec(4)]);

h=[h1 h2 h3 h4];

set(h,'Color',col);

set(h,'LineWidth',t)

5.4. plotsize.m file

function plotsize(x,m);

%function plotsize(x,m);

%

%INPUT:


%x - image

%m - min and max face size in vector, see 3rd output from facefind.m

minf=m(1);

maxf=m(2);

ex1=size(x,1)*0.01;

ex1e=size(x,1)*0.02;

ex2=size(x,1)*0.04;

ex2e=size(x,1)*0.05;

bx1=[0 maxf maxf 0];

by1=[ex1e ex1e ex1 ex1];

bx2=[0 minf minf 0];

by2=[ex2e ex2e ex2 ex2];

hold on

fill(bx1,by1,[0 1 0])

fill(bx2,by2,[0 1 0])

hold off


CHAPTER NO.6

APPLICATIONS AND FUTURE SCOPE

6.1 APPLICATIONS:

Biometrics

Video surveillance

Human-computer interfaces

Image database management

Digital cameras

Home security systems

Industrial security systems

Airports

Government buildings

Research facilities

Military facilities

Medical imaging

Artificial intelligence

Machine vision applications


6.2 CONCLUSION AND FUTURE SCOPE

CONCLUSION:

The main objective of this thesis was to research face recognition, or face identification. The thesis examined several different algorithms, of which the best performing was the Successive Mean Quantization Transform (SMQT) algorithm.

Finally, I conclude that during the research I have demonstrated the plot size and plot box routines based on the SMQT, and that the best of the studied algorithms for face recognition or identification is the SMQT.

FUTURE SCOPE:

The face recognition techniques in use today work well under good conditions: all the techniques and systems perform best with frontal images and constant illumination. Under vastly varying conditions, in which humans can still identify faces, the face recognition algorithms fail.

Much remains to be done before an identification system that works in real time, outside constrained situations, is robust in natural environments and trusted by people. The technology in use today is unobtrusive and allows users to act freely; among such systems are speaker identification systems and identification systems that use face recognition.

Nowadays microphones and cameras are very small and light-weight, and can be integrated successfully into wearable systems. Identification systems based on both video and audio have the critical advantage of coming closer to the way humans recognize each other.

Finally, researchers are starting to demonstrate audio- and video-based person identification systems that can aim at high recognition rates without requiring the user to be in a highly controlled environment.

REFERENCES

References for part 4.8:

[1] R. J. Cassidy, "Dynamic range compression of audio signals consistent with recent time-varying loudness models," in IEEE International Conference on Acoustics, Speech, and Signal Processing, May 2004, vol. 4, pp. 213-216.

[2] R. Zabih and J. Woodfill, "Non-parametric local transforms for computing visual correspondence," in ECCV (2), 1994, pp. 151-158.

[3] B. Froba and A. Ernst, "Face detection with the modified census transform," in Sixth IEEE International Conference on Automatic Face and Gesture Recognition, May 2004, pp. 91-96.

[4] J. R. Deller Jr., J. H. L. Hansen, and J. G. Proakis, Discrete-Time Processing of Speech Signals, IEEE Press, 1993, ISBN 0-7803-5386-2.

[5] L. Rabiner and B. H. Juang, Fundamentals of Speech Recognition, Prentice-Hall, 1993, ISBN 0-13-015157-2.

[6] H. K. Kim and R. C. Rose, "Cepstrum-domain model combination based on decomposition of speech and noise for noisy speech recognition," in Proceedings of ICASSP, May 2002, pp. 209-212.

[7] M. J. F. Gales and S. J. Young, "Cepstral parameter compensation for HMM recognition in noise," Speech Communication, vol. 12, no. 3, pp. 231-239, July 1993.

[8] C. Munteanu and A. Rosa, "Towards automatic image enhancement using genetic algorithms," in Proceedings of the 2000 Congress on Evolutionary Computation, vol. 2, pp. 1535-1542, July 2000.

[9] W. K. Pratt, Digital Image Processing, John Wiley & Sons, 3rd edition, 2001.

[10] Z. Rahman, D. J. Jobson, and G. A. Woodell, "Multiscale retinex for color image enhancement," in International Conference on Image Processing, vol. 3, pp. 1003-1006, September 1996.

[11] D. J. Jobson, Z. Rahman, and G. A. Woodell, "Properties and performance of a center/surround retinex," IEEE Transactions on Image Processing, vol. 6, pp. 451-462, March 1997.

[12] Y. Gong, "Speech recognition in noisy environments: A survey," Speech Communication, 1995, vol. 16, pp. 261-291.

[13] A. S. Georghiades, P. N. Belhumeur, and D. J. Kriegman, "From few to many: Generative models for recognition under variable pose and illumination," in IEEE Int. Conf. on Automatic Face and Gesture Recognition, 2000, pp. 277-284.
