software for IP webcams

download software for IP webcams

of 65

Transcript of software for IP webcams

  • 8/7/2019 software for IP webcams

    1/65

    Software for IP WebCams

    Abdulla Alzaabi

    May 6, 2009

  • 8/7/2019 software for IP webcams

    2/65

    Abstract

    WebCams are usually connected to the computers using USB connection,

    but they cannot be easily shared in a network of computers. However, theyrequire the installation of drives before plugging in the cameras and usingthem.

    The project is to develop software in java that allow a network of computersto easily access a webcam that is connected to the network by Ethernet ca-ble and enable them to use all the features of the camera and explore all theimages and videos that are recorded by the camera. Moreover, the softwarewill add features that are not originally provided by the camera such asvideo recording and image recognition without the installation of any extrahardware support. The software does not require any type of drivers to be

    installed and it works in all platforms on the java virtual machine.

    The project has successfully achieved all the required features and supportsall types of IP cameras that can be directly connected to a network accesspoint.

  • 8/7/2019 software for IP webcams

    3/65

    acknowledgment

    All praise it to Allah Lord of the Worlds, from whom our aid and victorycomes. Peace and prayer be upon our prophet, Muhammad, his family, andhis companions. Especially, I would like to give my special thanks to my par-ents Mr. Obaid Alzaabi and Mrs. Aisha Alzaabi whose patience love enabledme to complete this work. I am deeply indebted to their stimulating encour-agement and support. My family, thank you for your support throughoutthese years especially my grandmother Amna Alzaabi, my beloved brothersMohammed and Omar Alzaabi, and uncles. Finally, not forgetting my bestfriends whom great help in difficult times and who always been there for me,without them it would not be possible, especially Miss Huda Al?Shammari.

    God bless you all. I would like to express my gratitude to all those who gaveme the possibility to complete this thesis. I want to thank my supervisor,Dr. Douglas Edwards who was abundantly helpful and offered invaluableassistance, support and guidance, without his knowledge and assistance thisstudy would not have been successful. Special thanks also to my colleaguesthat had shared this unforgettable chapter of my life who supported methroughout my years in the University Of Manchester. I want to thankthem for all their help, support, interest and valuable hints. Our sharedlaughter and sleepless nights studying at the library moments will be cher-ished in my heart. Deepest gratitude to the class of 2009, especially SaeedAlsaadi, Juel Hussain, and Khalid Alnaqbi. The author wishes to express

    her love and gratitude to her beloved families; for their understanding &endless love, through the duration of her studies. The author would alsolike to convey thanks to the Ministry and Faculty for providing the financialmeans and laboratory facilities.

    1

  • 8/7/2019 software for IP webcams

    4/65

    Contents

    1 Introduction 5

    1.1 IP cameras . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51.1.1 Motion Detection . . . . . . . . . . . . . . . . . . . . . 51.1.2 Video Recording . . . . . . . . . . . . . . . . . . . . . 5

    1.2 Pro ject Proposal . . . . . . . . . . . . . . . . . . . . . . . . . 61.3 Pro ject Ob jectives . . . . . . . . . . . . . . . . . . . . . . . . 6

    1.3.1 Software for IP cameras: . . . . . . . . . . . . . . . . . 61.3.2 Software for browsing captured images: . . . . . . . . 61.3.3 Image sharing in a network: . . . . . . . . . . . . . . . 71.3.4 Motion Detection: . . . . . . . . . . . . . . . . . . . . 71.3.5 Video Recording: . . . . . . . . . . . . . . . . . . . . . 71.3.6 User Interface: . . . . . . . . . . . . . . . . . . . . . . 7

    1.4 Development Platform . . . . . . . . . . . . . . . . . . . . . . 81.5 Chapter Synopsis . . . . . . . . . . . . . . . . . . . . . . . . . 8

    2 Background Research 102.1 Digital Images: . . . . . . . . . . . . . . . . . . . . . . . . . . 10

    2.1.1 Colour Space Models: . . . . . . . . . . . . . . . . . . 112.1.2 Noise in digital images: . . . . . . . . . . . . . . . . . 11

    2.2 Image segmentation: Thresholding . . . . . . . . . . . . . . . 122.3 Motion Detection . . . . . . . . . . . . . . . . . . . . . . . . . 13

    3 Design 153.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

    3.2 Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . 153.3 Running the IP camera on a network . . . . . . . . . . . . . . 163.4 Image Browser . . . . . . . . . . . . . . . . . . . . . . . . . . 173.5 Image Sharing . . . . . . . . . . . . . . . . . . . . . . . . . . . 18

    3.5.1 Server Side: . . . . . . . . . . . . . . . . . . . . . . . . 183.5.2 Client Side: . . . . . . . . . . . . . . . . . . . . . . . . 19

    3.6 Video Recording . . . . . . . . . . . . . . . . . . . . . . . . . 203.7 Motion Detection . . . . . . . . . . . . . . . . . . . . . . . . . 20

    3.7.1 Algorithms for motion detection: . . . . . . . . . . . . 21

    2

  • 8/7/2019 software for IP webcams

    5/65

    3.7.2 Different modes for detecting motions . . . . . . . . . 25

    3.8 The Software Interface . . . . . . . . . . . . . . . . . . . . . . 26

    4 Implementation 284.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284.2 Running the IP camera and capturing images . . . . . . . . . 28

    4.2.1 cameraPanel . . . . . . . . . . . . . . . . . . . . . . . 284.2.2 checkDirectory . . . . . . . . . . . . . . . . . . . . . . 294.2.3 MainGUI . . . . . . . . . . . . . . . . . . . . . . . . . 29

    4.3 Image Browser . . . . . . . . . . . . . . . . . . . . . . . . . . 294.3.1 ImagePanel . . . . . . . . . . . . . . . . . . . . . . . . 304.3.2 ImageBrowser . . . . . . . . . . . . . . . . . . . . . . . 30

    4.4 Image Sharing . . . . . . . . . . . . . . . . . . . . . . . . . . . 314.4.1 Server side . . . . . . . . . . . . . . . . . . . . . . . . 314.4.2 Client side . . . . . . . . . . . . . . . . . . . . . . . . . 32

    4.5 Video Recording . . . . . . . . . . . . . . . . . . . . . . . . . 334.5.1 cameraPane . . . . . . . . . . . . . . . . . . . . . . . . 354.5.2 VideoGUI . . . . . . . . . . . . . . . . . . . . . . . . . 354.5.3 JpegImageToMovie . . . . . . . . . . . . . . . . . . . . 37

    4.6 Motion Detection . . . . . . . . . . . . . . . . . . . . . . . . . 374.6.1 Red Marking Mode . . . . . . . . . . . . . . . . . . . . 384.6.2 Mailing Mode . . . . . . . . . . . . . . . . . . . . . . . 414.6.3 Video Recording Mode . . . . . . . . . . . . . . . . . . 43

    5 Results 445.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445.2 Graphics User Interface Layout and features . . . . . . . . . . 44

    5.2.1 Main Application . . . . . . . . . . . . . . . . . . . . . 445.2.2 Running the IP camera . . . . . . . . . . . . . . . . . 455.2.3 Image Browser . . . . . . . . . . . . . . . . . . . . . . 455.2.4 Video Recording . . . . . . . . . . . . . . . . . . . . . 475.2.5 Image Sharing . . . . . . . . . . . . . . . . . . . . . . 475.2.6 Motion Detector . . . . . . . . . . . . . . . . . . . . . 48

    6 Testing and Evaluation 506.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 506.2 System Performance . . . . . . . . . . . . . . . . . . . . . . . 506.3 GUI Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 506.4 Image Sharing Testing . . . . . . . . . . . . . . . . . . . . . . 516.5 Motion Detection Testing . . . . . . . . . . . . . . . . . . . . 52

    6.5.1 Red Marking Mode . . . . . . . . . . . . . . . . . . . . 526.5.2 Video Recording Mode . . . . . . . . . . . . . . . . . . 546.5.3 Mailing Mode . . . . . . . . . . . . . . . . . . . . . . . 54

    6.6 Evaluation of system . . . . . . . . . . . . . . . . . . . . . . . 55

    3

  • 8/7/2019 software for IP webcams

    6/65

    6.6.1 Running the IP camera . . . . . . . . . . . . . . . . . 55

    6.6.2 Image Browser . . . . . . . . . . . . . . . . . . . . . . 556.6.3 motion detection . . . . . . . . . . . . . . . . . . . . . 56

    7 Conclusion 577.1 Performance Analysis . . . . . . . . . . . . . . . . . . . . . . 577.2 Areas for improvements and extensions . . . . . . . . . . . . . 57

    7.2.1 Motion Detection . . . . . . . . . . . . . . . . . . . . . 577.2.2 Image Sharing . . . . . . . . . . . . . . . . . . . . . . 587.2.3 Video Streaming . . . . . . . . . . . . . . . . . . . . . 58

    4

  • 8/7/2019 software for IP webcams

    7/65

    Chapter 1

    Introduction

    1.1 IP cameras

    IP cameras are Closed-circuit television cameras, which are also known asCCTV cameras, they basically utilize Internet Protocol to transmit dataover fast Ethernet connection. They are also known as network camerasbecause users can easily assigned these cameras to IP addresses and actas computers, but instead they provide stream of image data. IP camerasare mainly used for surveillance in the same manner as analog closed-circuittelevision that uses video cameras to monitor a specific area and sends signalto number of monitors. Network cameras or IP surveillance cameras are not

    all the same, each IP camera will differ in terms of the features and functions,video encoding schemes and supported network protocols.

    1.1.1 Motion Detection

    Motion detection is the action of sensing physical movement in a given area.Motion simply is a change in the occurrence of change in the target scene.Motion can be detected by detection an object that changes in its speedor vector. This can be achieved by either mechanical devices or electronicdevices by interacting physically with the monitored environment to detectany motion that may occur in their fields. However, software can achieve

    motion detection by developing some methods and techniques to enable itto detection motion through observing properties of images in a sequence ofimages and process them in some way. The IP camera used in this projecthas stream of JPEG images, one of the aims of the project is to developsome algorithms to enable the software to achieve motion detection.

    1.1.2 Video Recording

    Video is the technology of reconstructing a sequence of still images repre-senting scenes in motion by electronically capturing sequences of still images,

    5

  • 8/7/2019 software for IP webcams

    8/65

    process them and finally the reconstruction of images that will result in the

    construction of video. There are many characteristics of video streams, theones that are important for the project are the number of frames per second,display resolution and video compression method. In this project, the devel-oped software tool constructs a compressed video format from a sequence ofstill images that can be captured from the IP camera, then process them toconstruct the video file. The camera does not any supporting tools to sup-port voice recording, so the developed software tool only constructs videoswith no sounds.

    1.2 Project Proposal

    The aim of this project is to develop software in java that allow number ofcomputers in a network to connect directly to any IP camera that is directlyconnected through Fast Ethernet connection without the need of installingany drives for the camera. Moreover, the software should be able to captureimages from the camera, store them in a directory, and detect any motionthat may occur in the area that the camera is monitoring. The method ofperforming this has been outlined in the existing paper and this project isto provide an application in java that satisfies all the features detailed inthe paper.

    1.3 Project Objectives

    1.3.1 Software for IP cameras:

    Develop java software that allows any computer in a network to communi-cate with any IP camera with the knowledge of its IP Address. It has thefollowing features:

    Manually capture images with user input. Take a picture with a click of a button. Capture sequence of pictures with a click of a button. Store all the captured images in a central storage.

    1.3.2 Software for browsing captured images:

    Develop java software to explore and browse all stored images that are cap-tured by the IP camera. It has the following features:

    The software will load all the stored images from the central storage. The software ill convert all images to internal representations. The user will be able to replay the images and fast forward/back and

    freeze frame.

    6

  • 8/7/2019 software for IP webcams

    9/65

    1.3.3 Image sharing in a network:

    Develop java software to act as a server in the computer that holds as thecentral storage. All the computers in the network can connect to the serverto view all the images captured by the camera and they will have samefeatures as the browsing software of the camera that outlined in the abovesection. The server will be up and running all the time and accepts anyconnection from the private network or from offsite machine.

    1.3.4 Motion Detection:

    Develop java software that implements motion detection with different al-gorithms and features that will be chosen by the user. This technique will

    be achieved by developing different algorithms working in different ways,however, resulting in same results but with different efficiency depending inthe selected algorithm. Some algorithm will provide video recording of themotion the software detects, and some of them will provide sending notifi-cations to email accounts of any change that occurred in the scene in thetime of their occurrence.

    1.3.5 Video Recording:

    Develop java software that implement capture image method of the mainapplication to temporarily store sequence of images, process them and then

    compress them into a video format. The compressed video format for thisapplication is MOV which is the Quick Time file format for videos andsounds.

    1.3.6 User Interface:

    A user interface is required to each application or component of the projectas well as the final interface which will put all the individual applicationsinto one main application that the user can freely choose what part of theapplication to run.

    Software for IP camera: A user interface required for this application toprovide the user with live view from the camera, offers a method to capturea single image or a sequence of images, with providing the state of the cam-era, and information about the directory once an image is captured.Software for browsing captured images: A user interface required tooffer the user ease in browsing the captured images, providing list of im-ages that are captured by the camera and to replay the images and fastforward/back and freeze frame.

    7

  • 8/7/2019 software for IP webcams

    10/65

    Motion detection: A user interface required for this application to pro-

    vide a live view from the camera and an effective visual output of the systemstate whether a motion detected or not depending on the mode of detectionthe user selects.

    Video recording: A user interface required for this application to providethe user with methods to start and to stop video recording, and choosing aname for the video file to be saved in the central storage.

    1.4 Development Platform

    The chosen development platform must be able to provide the performancerequired to allow capturing images from the network camera to operate withreal-time image sequence, as well as working freely in any operating systemof any of the computers in the network. The main environment suited for thisproject is Java. Java is a high programming language originally developedat Sun Microsystems. The language derives much of its syntax from C andC++. Java applications are typically compiled to bytecode that can runon any Java Virtual Machine (JVM) regardless of the architecture of thecomputer or the operating system that runs on the computer.

    1.5 Chapter SynopsisHere is a summary of the structure of the paper and contents of each chapter:

    Chapter 2: Background Research This chapter provides details of differenttechniques and algorithms to overcome the problems outlined in this paper.

    Chapter 3: Design Shows the main design aspects of each project as wellas providing an overview of each component of the overall application.

    Chapter 4: Implementation This chapters contains the details of the im-

    plementation along with the important, difficult or interesting aspects of theimplementation.

    Chapter 5: Results This chapter presents the designed system in prac-tice and how it works.

    Chapter6: Testing and Evaluation Illustrates the results obtained fromtesting the developed system as well as the evaluation of different parts ofthe system.

    8

  • 8/7/2019 software for IP webcams

    11/65

    Chapter7: Conclusion Include what has been accomplished and achieved,

    and highlight the areas were improvements could be added to achieve betterresults.

    9

  • 8/7/2019 software for IP webcams

    12/65

    Chapter 2

    Background Research

    2.1 Digital Images:

    Digital images consist of a 2-Dimension (or 3-Dimension) array of numbers,in the case of binary images; the representation of binary digital images of a2-D image is by using zeroes and ones. Digital images consist of small pictureelements called pixels; each pixel (picture element) contains information ofthe region of the pixel. The term digital images usually refer to raster im-ages. Raster images have finite set of digital values, each value correspondsto a picture element. The digital image contains fixed number of columnsand rows of pixels, the number of columns and rows is determined by the

    resolution of the digital image, higher resolution results in larger number ofcolumns and rows, hence, higher level of details. Each pixel of the imageholds a value that represents the brightness of a specific colour at a specificpoint. For each digital image of type greyscale, each pixel represents theamount of light reflected towards the observer at that position. Generallyspeaking, the pixels values of digital images are represented in the computermemory as raster images or taste maps, these values are usually transmittedor stored in a compressed form resulting in different type of image formatssuch as JPEG, PNG, etc. Raster images can be created or captured bydifferent electronic devices such as digital cameras and scanners. There aredifferent types of digital images classified according to pixels values and their

    positions. Here is the list of raster images types:

    Binary Greyscale Colour False-colour Multi-spectral Thematic Picture function

    10

  • 8/7/2019 software for IP webcams

    13/65

    Figure 2.1: illustrates noise comparison of 2 pictures of the same object

    2.1.1 Colour Space Models:

    In this project, we are only interested in colour images. Colour image isa digital image where each pixel holds information of colour. It is vital toprovide three different colour channels for each pixel, which are interpretedas coordinates in some colour space. In this project, we will present alldigital images in RGB colour space because it is commonly used in com-puter displays and it is the colour mode of choice of most digital cameras.There are different other colour spaces such as HSV and YUV. Here is abrief description of RGB colour space. RGB color space has three different

    color channels, which are: Red, Green and Blue. This color space storesvalues of the three different components of color and represents the ratiosof wavelengths of light perceived.

    2.1.2 Noise in digital images:

    When pictures are captured by electronic devices such as digital cameras andscanners, they often have common feature, which is image noise. Image noiseis a random, usually unwanted variation in brightness or colour informationin an image. Noise in images is often introduced during the acquisition ofimage from variety of image sources. Noise is often modelled as random

    process with a gaussian distribution causing small scale change throughoutthe image. If we take 2 pictures of the same scene, it is most likely that largenumber of pixels will have different values to their corresponding pixels ofthe second image even if the 2 images look identical, the reason for that isimage noise. An example of image noise is shown in Figure 2.1

    There are different methods and techniques to reduce noise in imagesthat they take place in image processing. Examples of some of the techniquesare: Smoothing and local averaging.

    11

  • 8/7/2019 software for IP webcams

    14/65

  • 8/7/2019 software for IP webcams

    15/65

    FT[i, j] =

    1 F[i, j] Z0 otherwise

    (2.4)

    Where Z is set of intensity values for object components.

    2.3 Motion Detection

    Most biological vision systems can easily cope with the changing world.For computers, computer vision, the ability of coping with moving objects,changing in scene, and changing viewpoints is essential to perform differenttasks and it is known as motion detection. The input to analysing a dynamicvision is a sequence of frames taken to a scene. Each frame represents thescene at a particular time. The changes of the scene could be due to themotion of the camera, moving objects, illumination changes or changing thesize or shape of the objects in the scene. A scene usually has several objects.The image of the scene at a given time represents a projection of the scene,which depends on the position of the camera.There are four possibilities for the dynamic nature of the camera and theworld setup:

    1. Stationary camera, stationary objects (SCSO)

    2. Stationary camera, moving objects (SCMO)

    3. Moving camera, stationary objects (MCSO)

    4. Moving camera, moving objects (MCMO)

    In order to detect motion in each of the above cases, we need to analyzesequence of images and different techniques required for each of the abovecase. We are only interested in the second case when the camera is station-ary and only the objects are moving (SCMO).

    Change detection or motion detection can be determined or performed by

    analyzing changes in two frames, the first frame represents projection of thescene the camera is monitoring, and the second frame represents the currentprojection of the scene. The most obvious method of comparing the twoframes for determining if any change has taken place in the second frameis by directly comparing corresponding pixels. In the simplest form, com-paring two colour pictures, if the RGB value of two corresponding pixelsfrom the two frames is equal, then no change is detected, otherwise changeis detected. However, it is never that simple, if two frames represent exactlysame scene and taken successively, they will often have many or even mostof the values of corresponding pixels different due to the universal feature

    13

  • 8/7/2019 software for IP webcams

    16/65

    of pictures which is image noise. On the other hand, it will be easier and

    more efficient to compare two binary images that are obtained from twoframes that have are converted to greyscale images then to binary images.Before going ahead with converting the type of the frames, it will be usefulto reduce the noise of images using different techniques such as blurring orsmoothing. Having done all that, now we can directly compare the valueof the pixels, but this time the pixels that have different values with theircorresponding pixels represent objects or the occurrence of change in thescene.

    14

  • 8/7/2019 software for IP webcams

    17/65

    Chapter 3

    Design

    3.1 Introduction

    This chapter covers the main design aspects of the system, detailing differenttechniques and modes for motion detection as well as the algorithms usedfor each mode. This section also represents the requirements driving thedesign decisions made throughout the development of the system.

    3.2 Requirements

    In the development of any system, it is essential to cover both functionaland non?functional requirements. Functional requirements represent the be-haviour of the system in terms of the functionality the system is expectedto deliver, whereas the non?functional requirements represent the behaviourof the system in terms of its performance.

    The following list of requirements is the functional requirements:

    1. FR1. Display live feed of the scene2. FR2. Allow the user to capture images

    3. FR3. Allow the user to capture sequence of images4. FR4. Convert sequence of images into a compressed video format5. FR5. A browser application for the captured images6. FR6. Detect any motions that occur in the scene the camera monitor-

    ing7. FR7. Provide different modes for detecting motions8. FR8. Send notifications of detecting motions to an email account9. FR9. Allow all the computers in the network to share the captured

    images

    15

  • 8/7/2019 software for IP webcams

    18/65

    Figure 3.1: class diagram for running the IP camera application

    The following list of requirements is the non?functional requirements.These requirements enhance the performance of the system:

    1. NFR1. Capture images in real time2. NFR2. Record changes in compressed video format3. NFR3. Recorded images are only accessible in the local network

    3.3 Running the IP camera on a network

    The IP camera used in this project is AXIS 2100 Network Camera, it isconnected to the network through Ethernet Cable and it provides a streamof JPEG images. In order to get a live feed from the JPEG stream, we haveto constantly get images from the camera and display them successivelyto the user interface screen. By using supporting classes of java, a singleimage of JPEG can be obtained from the stream of data that representsthe projection of the scene at a particular instant of time. After obtaininga single JPEG frame, obtaining live feed from the camera can be achieved

    through drawing the image in the display screen then constantly refreshingthe contents of the screen to get a live feed of the scene. Having done whatmentioned above, the process of capturing images can be carried out easilythough saving them to the central storage, that is done by using the sameframe that is been drawn in the screen, is being saved to the memory. Here isa class diagram to show the architecture of the application explained above.

    The classes used in running the network camera application are shown inFigure 3.1. The application checks all the images stored in the central stor-age to avoid overwriting the images that are stored previously by the user.Images in Java can be presented using different classes such as ImageIcon,

    16

  • 8/7/2019 software for IP webcams

    19/65

    Figure 3.2: Class diagram for Image Browser application

    Image and BufferedImage. The reason for choosing BufferedImage objectsis that it is easier to retrieve images data, as they will be needed in differenttasks.

    3.4 Image Browser

    Running IP Camera application captures images directly from the cam-

    era, and stores them in a directory that is accessible by Image Browserapplication. All images that are stored in the directory must be in JPEGformat. This application first checks the directory and then loads all theimages as BufferedImage objects in the application. As mentioned above,BufferedImage objects are used to represent the images for the reasonthat data from images is needed in performing other tasks, another rea-son for behind choosing BufferedImage objects, is that if the applicationuses ImageIcon objects to represent the images, it is very likely that thememory associated with Java Virtual Machine will get full when executingall the images and rise an OutOfMemoryError exception, this is caused dueto the way ImageIcon objects are represented in java. Image Browser ap-

    plication loads all the images and allow the user to freely choose a way toexplore and navigate through the images. It provides a list holding all thenames of images and shows the image on click, it also provides buttons tonavigate forward and backward through the frames. Here is a class diagramfor the Image Browser application.

    The classes used in the image browser application are shown in Figure3.2. As I mentioned in this paper that the use of ImageIcon objects raisessome memory problems in the Java Virtual Machine, it is very efficient touse BufferedImage objects to represent all the images. The application willhave references to all images that are stored in the central storage and will

    17

  • 8/7/2019 software for IP webcams

    20/65

    Figure 3.3: Class diagram for server side of image sharing application

    only update the contents of the current viewing image to replace it with theselected image by the user. Only the images that are stored in JPEG formatwill be loaded to the application.

    3.5 Image Sharing

    The project involves number of computer in a network. Each of the com-puters should be able to view all the stored images from the camera. Theeasiest way to work around this is through socket connections in java, be-cause they are efficient and secured and there is no need for installing anytransfer server. The computer that has all the images stored in it acts asa server to transfer all the stored images over java sockets. All the othercomputers are clients, they will have all the images stored in them tem-porarily only for browsing the images, after closing the browser of images,automatically all the images will be deleted. All the client computers needthe knowledge of the server IP address to connect to it. There are two sides

    of Image Sharing application, which are: server side and client side, eachside of the application is explained in the next section.

    3.5.1 Server Side:

    There is no direct way of sending image objects over java sockets same wayas other java objects like strings and integers, the easiest way to work aroundthis is to implement serialization for each image object. In this way, imageswill be serializable objects that can be directly transferred over sockets. Hereis the class diagram for the server side of image sharing.

    18

  • 8/7/2019 software for IP webcams

    21/65

  • 8/7/2019 software for IP webcams

    22/65

    3.6 Video Recording

    The Java Media Framework API (JMF) provides various media classes that enable different sorts of media, such as audio, video and other time-based media, to be added to applications built using Java. This optional package, which handles a variety of media formats, provides a powerful toolkit for developing scalable, cross-platform technology. Using some classes of the Java Media Framework API such as Processor, DataSource, DataSink, PullBufferDataSource, and mediaBuffer, we can convert a set of static, compressed JPEG images into a compressed video in QuickTime format. The Video Recording application provides a mechanism to start storing JPEG images into a temporary directory and to stop recording on user input. The Java Media Framework provides a class, called JpegImagesToMovie, that uses the above classes to convert a set of images into a video format. Using the same mechanism of capturing images and storing them in a directory, we can add a button to the main class that views the live feed from the camera to start saving images into a temporary directory; as soon as the user presses the stop button, an object of the class JpegImagesToMovie handles processing the images, with the video parameters passed to it. After processing the images there is no need to leave them in the directory, because they use storage space, so the main class deletes all the contents of the temporary directory. Here is a class diagram of the Video Recording application; it shows all the classes used to build up the application.

    3.7 Motion Detection

    After implementing all the basic features of the IP camera, we can use the knowledge of these features to help develop an algorithm to detect any motion or change that may occur in the area or scene the camera is monitoring. There are different ways and techniques to achieve software motion detection; however, only a few of them work efficiently, as we are seeking an algorithm that can process images in real time while the camera captures more than 10 frames a second. As discussed before, there are different aspects or features of images that affect calculating the difference between two successive frames. This section introduces the different algorithms developed while building this application. At the end of this section, the best working algorithm that suits the purpose of the application in terms of performance and efficiency is stated.


    Figure 3.5: class diagram for video recording application

    3.7.1 Algorithms for motion detection:

    First Algorithm:

    The first algorithm undertakes a series of steps, starting with getting successive frames of the scene that the camera is monitoring. The second step is getting the greatest difference between the values of corresponding pixels of the frames, to get hold of the range each value will fall in due to image noise. The third step is image segmentation, a process in which the image is divided into regions containing pixels that are similar to the pixels in the same region of different frames; there are nine regions spread across the image. At this point we have nine different segments of the image representing the scene, together with the knowledge of the range of values for each pixel. After that we keep passing frames to the algorithm to look for the image difference between the target frame and the scene frame. A change is detected when a difference occurs in any of the regions; however, the difference should be greater than 5000 pixels, because we are not interested in very small objects that may be caused by image noise or anything else. The reason for image segmentation is that we do not want to compare every single pixel of the image, since that takes a long time and is therefore not time efficient. The image size is 640x480, so we would have around 307,200 pixels to compare, whereas when we apply image segmentation the process results in nine segments, each of size 100x100 pixels, which gives 90,000 pixels in total. This process will


    Figure 3.6: shows image segmentation process

    improve the speed and performance of the algorithm by almost 4 times. The result of the image segmentation process is shown below; the areas highlighted in grey correspond to the segments, or regions, of the picture that are the areas of interest. All the frames captured from the camera are represented in this application as BufferedImage objects, because it is easy to pull pixel data and colour models from them. Another technique that helps improve the efficiency of this process is comparing the regions at the corners first, because they are the most likely to encounter any change once it occurs. This algorithm has been developed and tested; it performs well and fast, but it has a number of issues. One issue is that it does not detect objects which are far from the camera, because it considers them to be image noise. Another issue is that getting the range of values for each pixel is not ideal if we want to extend this algorithm to identify the detected objects in the scene.
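    The region comparison at the heart of the first algorithm can be sketched in Java as follows. The class and method names are illustrative, not taken from the project's source, and a single per-channel tolerance stands in for the per-pixel noise range described above:

```java
import java.awt.image.BufferedImage;

// Illustrative sketch of the first algorithm's region test: count pixels in a
// 100x100 region whose RGB channels differ beyond a noise tolerance, and
// report motion only when more than 5000 pixels in the region differ.
public class RegionDiff {

    // Counts the pixels inside one region whose channels differ from the
    // scene frame by more than the noise tolerance.
    public static int countDifferingPixels(BufferedImage scene, BufferedImage frame,
                                           int x, int y, int w, int h, int tolerance) {
        int count = 0;
        for (int j = y; j < y + h; j++) {
            for (int i = x; i < x + w; i++) {
                int p1 = scene.getRGB(i, j);
                int p2 = frame.getRGB(i, j);
                int dr = Math.abs(((p1 >> 16) & 0xFF) - ((p2 >> 16) & 0xFF));
                int dg = Math.abs(((p1 >> 8) & 0xFF) - ((p2 >> 8) & 0xFF));
                int db = Math.abs((p1 & 0xFF) - (p2 & 0xFF));
                if (dr > tolerance || dg > tolerance || db > tolerance) {
                    count++;
                }
            }
        }
        return count;
    }

    // Motion is reported only above 5000 differing pixels, so noise-sized
    // objects are ignored, as described in the text.
    public static boolean motionInRegion(BufferedImage scene, BufferedImage frame,
                                         int x, int y, int tolerance) {
        return countDifferingPixels(scene, frame, x, y, 100, 100, tolerance) > 5000;
    }
}
```

In the full algorithm this test would be run over the nine regions, starting with the corner regions for the efficiency reason given above.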

    Second Algorithm:

    The second algorithm works slightly differently from the first algorithm. It was introduced to overcome some of the issues raised by the first algorithm, and it performs a series of steps before starting to detect any change in the scene. The first step this algorithm undertakes is setting the scene frame: it gets frames from the camera for around two seconds and then takes the last frame, because the first few images have more noise than the rest of the images as the camera starts to record. After setting a frame for the scene, all the pixel values


    will be stored in an array so they can be referred to in later steps. The second step is

    called getting the difference. This method takes up to 100 frames to try to capture all the possible noise that may appear in any frame. Whenever a frame is captured, the method checks the value of each pixel and stores the overall lowest and greatest value of each pixel in an array. So far the algorithm has two main arrays, one holding the lowest values and the other the highest values; subtracting them gives the greatest difference for each pixel. Since we want to compare every pixel, we use images of a small size instead of doing image segmentation; all the frames processed in this algorithm are 320x240 pixels. In order to get the range of values of each pixel, we calculated the greatest difference for each pixel by subtracting the lowest value from the greatest value, and stored the results in an array to be used in the next step. The third and final step is to look for the image difference, that is, to detect any change in the scene. This step takes frames successively from the camera, gets the data of one frame at a time, and then compares the current frame with the scene frame, checking the difference for each pixel. For each pixel, if the difference between the two values is equal to or less than its corresponding value stored in the difference array, the pixels are considered equal; otherwise they are considered different. The logic of the third step of the algorithm is expressed below:

    currentDifference = sceneRGBvalue(x, y) − currentRGBvalue(x, y)    (3.1)

    |currentDifference| ≤ difference(x, y)  ⟹  same pixels    (3.2)

    |currentDifference| > difference(x, y)  ⟹  different pixels    (3.3)

    At this point, the algorithm is able to determine whether a change has occurred in the scene, and, as mentioned for the first algorithm, we are not interested in very small objects, so we need to calculate the number of different pixels while also considering the possibility that pixels differ due to image noise. This algorithm has been developed and tested; it processes each frame and correctly detects any motion, but the problem is that it is not time efficient. The process of calculating the difference and comparing it with the differences stored in step 2 takes time, so the camera compares only about 3 to 5 frames a second, and that is not good enough.
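    The per-pixel range logic of equations (3.1)–(3.3) can be sketched in Java as follows. The class and method names are illustrative (not from the project's source), and the frames are modelled as flat arrays of pixel values:

```java
// Illustrative sketch of the second algorithm: learn each pixel's noise range
// from calibration frames (step 2), then count pixels whose difference from
// the scene frame exceeds that range (step 3, equations 3.1-3.3).
public class PixelRangeDetector {
    private final int[] low, high, diff, scene;

    public PixelRangeDetector(int[] sceneFrame) {
        scene = sceneFrame.clone();
        low = sceneFrame.clone();
        high = sceneFrame.clone();
        diff = new int[sceneFrame.length];
    }

    // Step 2: widen each pixel's observed value range with another
    // calibration frame (the report uses up to 100 of them).
    public void learn(int[] frame) {
        for (int i = 0; i < frame.length; i++) {
            low[i] = Math.min(low[i], frame[i]);
            high[i] = Math.max(high[i], frame[i]);
            diff[i] = high[i] - low[i];   // greatest observed noise difference
        }
    }

    // Step 3: a pixel counts as changed only when |scene - current| exceeds
    // its learned noise range, exactly as in equations (3.2) and (3.3).
    public int changedPixels(int[] frame) {
        int changed = 0;
        for (int i = 0; i < frame.length; i++) {
            if (Math.abs(scene[i] - frame[i]) > diff[i]) changed++;
        }
        return changed;
    }
}
```

The per-frame cost of this comparison over 320x240 pixels is what limits the algorithm to the 3 to 5 frames a second mentioned above.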

    Third Algorithm:

    Noise in images is the main obstacle preventing motion detection algorithms from recognising the actual image difference rather than the representation of noise in images.


    in the previous algorithms, because it uses binary images instead of colour images, which means that there is no need to get the difference or the range of each pixel, since each pixel has either the value 0 or 1. The image size used in this algorithm is 320x240 pixels, so the algorithm does not require any operation to segment the images. The third step is checking the values of the binary image pixels, which determines whether changes have occurred in the scene. Each frame obtained from the camera is processed to produce a binary image. The operation of checking the difference between images is performed by directly comparing the frames obtained from the camera with the scene frame; the comparison method is an equality test. Noise will be present in the obtained binary images in less than 26% of the pixels, because noise fades out into either the background or the objects. The method has been developed and tested properly; converting the images using BufferedImage objects is very fast, so the overall performance of this algorithm is satisfying in terms of performance and time efficiency. As mentioned above, noise will be present in the binary images in less than 26% of the pixels; consequently, if the percentage goes above this limit, that indicates a change in the scene has been detected.
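    The binary comparison and the 26% limit can be sketched in Java as follows; the class name, the fixed threshold, and the flat-array frame model are illustrative assumptions, not the project's actual code:

```java
// Illustrative sketch of the third algorithm: threshold each frame into a
// binary image, then run an equality test against the scene frame and report
// motion when more than 26% of the pixels differ.
public class BinaryDetector {

    // Convert a grayscale frame to a binary image with a fixed threshold;
    // each pixel becomes either 0 (false) or 1 (true).
    public static boolean[] toBinary(int[] gray, int threshold) {
        boolean[] bin = new boolean[gray.length];
        for (int i = 0; i < gray.length; i++) {
            bin[i] = gray[i] >= threshold;
        }
        return bin;
    }

    // Equality test between the scene and current binary frames; noise is
    // assumed to flip fewer than 26% of the pixels, so anything above that
    // limit counts as detected motion.
    public static boolean motionDetected(boolean[] scene, boolean[] frame) {
        int differ = 0;
        for (int i = 0; i < scene.length; i++) {
            if (scene[i] != frame[i]) differ++;
        }
        return differ * 100.0 / scene.length > 26.0;
    }
}
```

Because each pixel holds a single bit, the equality test is far cheaper than the per-pixel range comparison of the second algorithm, which is why this approach keeps up with the camera's frame rate.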

    Summary:

    Most of these algorithms provide satisfying results, but some of them work faster than the others. The motion detection application provides different modes of detecting motion, using the most suitable algorithm for each mode. In this project, given the list of detection modes, the most suitable algorithms are the third and the fourth algorithms, as they produce the desired output in real time.

    3.7.2 Different modes for detecting motions

    The user needs to be informed when a change has occurred or a motion is detected. There are a number of methods providing different features; these methods are called modes of detection. The choice of detection mode is made available to the user, who selects the most suitable detection mode to monitor an area. In this section, the different modes of detection are introduced, as well as the algorithm used in each of the modes.

    Video Recording Mode (VRM):

    Video Recording Mode provides a mechanism to record all changes that occur in the scene as video files, and then store all these videos in a directory. This mode uses the fourth algorithm to detect motion; the reason for choosing this algorithm is its accuracy and efficiency in detecting motion. As soon as a change is detected by the algorithm, the application starts recording the frames of the scene that are considered changed, and stores


    them in a temporary directory. Once the object has left the scene the camera is monitoring, or the scene no longer shows any motion, the application stops storing the frames and makes use of the video recording application to convert the images into a video format. When using the video recording application, the algorithm configures the video properties to make the videos shorter by increasing the frame rate, which reduces the space requirement. After a successful creation of the video file, all the frames stored in the temporary directory are deleted and the application goes back to checking the frames for any detected change.

    Red Marking Mode (RMM):

    This mode does not provide any method to record changes that may occur; it marks any object that appears in the area the camera is monitoring but did not appear when the application was first started. The purpose of this mode is to detect, or mark, moving objects. This mode uses the third algorithm, as the RGB values of the scene and the frames are needed to process the marking feature.

    Mailing Mode (MM):

    Mailing Mode allows the user to be alerted to any change in the scene by sending emails to an email account the user provides. Each message shows the date and the time the motion was detected. The mode uses the JavaMail API to enable the application to send emails. It uses the fourth algorithm, as it is the most accurate of the algorithms introduced earlier in this chapter and it is very efficient in terms of time and performance.

    Summary:

    When the Motion Detection application is launched, it provides the user with the list of the three motion detection modes, and the user freely chooses whichever mode is most suitable. In order to get a live feed from the camera and process the images at the same time, the application uses Java threads to allow the selected algorithm to run at the same time as the application provides a live feed of the scene. The reason for this is to avoid delays in either the actual viewing of the camera or the processing of the algorithm. Figure 3.7 shows the class diagram for the motion detection application, which has all the different modes of detection.

    3.8 The Software Interface

    Integration is a very important part of this project: it groups all the applications and features of this project and puts them into one main


    Figure 3.7: class diagram for motion detection application

    interface, allowing the user to choose the application he/she wants to use. The system has a list of the applications provided by the project, and it calls any of the individual applications to run according to the user's selection.


    Chapter 4

    Implementation

    4.1 Introduction

    This section presents the implementation details and how each component of each application has been implemented. As described in section 1.3, all the applications that build up the whole system have been written in Java, with the support of optional packages such as the JMF API and the JAI API for processing the images.

    4.2 Running the IP camera and capturing images

    A high level description of the process flow in this application is given below:

    1. Getting the stream of images and capturing them in a cameraPanel object.

    2. cameraPanel is part of the main GUI interface for this application.

    3. To avoid overwriting images, the checkDirectory object gets the names of the images.

    4. The checkDirectory object uses a formatFilter object to filter the files in the central storage.

    4.2.1 cameraPanel

    The cameraPanel class has been created to provide the functionality to control and view the live feed from the IP camera. A cameraPanel object reads the stream of JPEG images provided by the network camera as a URL object; the URL object represents the image, in JPEG format, at a particular instant in time. The class creates a BufferedImage object from the URL object. cameraPanel extends JPanel and overrides its paint method, so that whenever the object, or panel, is repainted it provides a projection of the scene the camera is monitoring at that time. This object handles all the exceptions that may


    occur when reading the URL address, constructing a BufferedImage object, or saving the image, and it also provides the user with feedback when each of these exceptions is triggered.
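    The idea behind cameraPanel can be sketched as follows. The class structure, method names, and the camera URL are assumptions made for illustration; the real class additionally handles exceptions and image saving as described above:

```java
import java.awt.Graphics;
import java.awt.image.BufferedImage;
import java.io.IOException;
import java.net.URL;
import javax.imageio.ImageIO;
import javax.swing.JPanel;

// Illustrative sketch of the cameraPanel idea: read the JPEG the camera
// serves at a URL into a BufferedImage, and paint it whenever the panel is
// repainted, so a repaint loop behaves like a live feed.
public class CameraPanel extends JPanel {
    private final URL cameraUrl;
    private BufferedImage current;

    public CameraPanel(URL cameraUrl) {
        this.cameraUrl = cameraUrl;
    }

    // Fetch the frame the camera serves at this instant; the caller calls
    // this and repaint() in a loop.
    public void refresh() throws IOException {
        current = ImageIO.read(cameraUrl);
    }

    public BufferedImage currentImage() {
        return current;
    }

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        if (current != null) {
            g.drawImage(current, 0, 0, getWidth(), getHeight(), null);
        }
    }
}
```

The camera address used when constructing the panel is whatever URL the IP camera serves its still JPEG image from; that address is camera-specific.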

    4.2.2 checkDirectory

    This class provides methods to check the contents of the main directory responsible for storing the captured images. The reason for this is that the computer saves any captured JPEG image with the name IMAGEXXX.jpg, where the symbols XXX represent the number of the image; the program needs to know the index, or number, of the last saved image so that it avoids overwriting the images already stored in the directory. The checkDirectory object uses a formatFilter object, which is created to filter all the files and only return the files that are in JPEG format. The checkDirectory object has one method, getIndex(), which returns the integer corresponding to the index of the last image saved.
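    A minimal sketch of this getIndex() idea is shown below; the class name and the lambda-based filter are illustrative, not the project's actual implementation:

```java
import java.io.File;
import java.io.FilenameFilter;

// Illustrative sketch of checkDirectory/formatFilter: filter the directory
// down to JPEG files, parse the numeric part of IMAGEXXX.jpg names, and
// return the highest index found so the next capture does not overwrite
// an existing file.
public class CheckDirectory {

    public static int getIndex(File dir) {
        FilenameFilter jpegOnly = (d, name) -> name.toLowerCase().endsWith(".jpg");
        String[] names = dir.list(jpegOnly);
        int last = 0;
        if (names != null) {
            for (String name : names) {
                // Expect names such as IMAGE042.jpg; skip anything else.
                if (name.startsWith("IMAGE")) {
                    try {
                        int n = Integer.parseInt(name.substring(5, name.length() - 4));
                        last = Math.max(last, n);
                    } catch (NumberFormatException ignored) {
                        // Not a numbered capture file; ignore it.
                    }
                }
            }
        }
        return last;
    }
}
```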

    4.2.3 MainGUI

    The MainGUI class is responsible for putting all the objects of this application together to create the graphical user interface, which displays the live feed from the camera as well as providing features such as capturing an image and capturing a sequence of images, and which provides a status bar to show the state of the camera. The MainGUI object has two main panels: the first is the display panel and the second shows the status of the camera, so if an image is captured, the status shows that an image was captured and also provides the path of the central storage where the image is stored. The image capturing feature is provided by cameraPanel, because it is the only object of this application that has a direct connection with the IP camera; however, the method is invoked by the MainGUI object. This object creates an instance of the cameraPanel object and then invokes some of its methods, such as capturing an image and painting the panel. The paint method runs continually, through a while loop, as long as the MainGUI object is running, to provide a live feed from the camera. Here is an activity diagram of running the IP camera application to illustrate the logic and activities of this application.

    4.3 Image Browser

    The image browser application provides features that allow users to access and browse the images in the central storage. A high level description of the process flow in this application is given below:

    1. Reading all JPEG format files in the directory and loading them into the ImagePanel object.


    Figure 4.1: Activity diagram for running the IP camera application

    2. The ImageBrowser object contains the ImagePanel object as a panel, providing functions that make browsing the images easy.

    4.3.1 ImagePanel

    This object uses the formatFilter described in section 4.2.2, which checks the format of the files in the directory and only loads the files that are in JPEG format. This object creates an array of strings holding the names of the images, to provide a list of all the image names as a way of navigating through the images. It represents the images stored in the central storage as BufferedImage objects, to be able to paint an image to the display panel whenever needed. This object extends the functionality of JPanel; it also has features that allow the main program to send messages to change the image currently in view. The method is named setImage, and it requires the image name to be passed as a parameter when it is invoked. This object is instantiated and controlled by the ImageBrowser object.

    4.3.2 ImageBrowser

    The ImageBrowser class has been created to provide the user with a graphical interface for viewing and navigating through the stored images. As explained in section 4.3.1, objects of this class have an array of strings provided


    by the ImagePanel object to let the user select an image to view. This object contains a panel to view the current image, a JList object listing all the images in the array of image names, and buttons to help browse to the next or the previous image. An action listener is added to the list of names, so when a name is clicked, a method of the ImagePanel object is fired with the name of the image, and the image panel immediately updates the painted image to show the target image. Moreover, the object provides two buttons to help navigate through the images by playing them forward and backward.

    4.4 Image Sharing

    As mentioned in an earlier section of this paper, the IP camera is connected to a network of computers; nevertheless, the computers should be able to connect to the computer that contains the central storage of the captured images. As outlined in the design chapter, the connections between computers and the data transfer for image sharing in the network are achieved using Java sockets. The image sharing application consists of two main parts that establish the connection and process it: the server side and the client side.

    4.4.1 Server side

    A high level description of the process flow in this application is given below:

    1. The Server object opens a port and keeps listening for connection attempts.

    2. Once the Server receives a successful connection, it gets the input/output streams.

    3. The Server object loads all the images in the main directory and constructs ImageSerialaization objects.

    4. The Server object writes the ImageSerialaization objects to the socket.

    ImageSerialaization

    Java sockets can only transfer serializable objects. This class has been created to provide serializable versions of BufferedImage objects that can be transferred over Java sockets. The class implements the Serializable interface of Java, and the constructor requires a BufferedImage object, from which it pulls out information about the image, such as the width, the height and the RGB values of each pixel, to allow a BufferedImage object to be constructed when required by the retrieval method. The retrieval method is getImage(); with the knowledge of the width, the height and the RGB values of an image, it can construct a new BufferedImage object of RGB type and then load the RGB values into it to construct a new image. The result is a serializable version of a BufferedImage object.
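    The idea can be sketched as follows; the class name differs from the project's ImageSerialaization, and this is a minimal illustration of the store-dimensions-plus-RGB-array approach described above:

```java
import java.awt.image.BufferedImage;
import java.io.Serializable;

// Illustrative sketch of the serializable-image idea: BufferedImage itself is
// not serializable, but its width, height and RGB pixel array are, so we
// store those and rebuild the image on the receiving side.
public class SerializableImage implements Serializable {
    private final int width, height;
    private final int[] rgb;

    public SerializableImage(BufferedImage image) {
        width = image.getWidth();
        height = image.getHeight();
        rgb = image.getRGB(0, 0, width, height, null, 0, width);
    }

    // Retrieval method: reconstruct an equivalent BufferedImage of RGB type
    // and load the stored pixel values back into it.
    public BufferedImage getImage() {
        BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        image.setRGB(0, 0, width, height, rgb, 0, width);
        return image;
    }
}
```

An object of this class can be passed directly to ObjectOutputStream.writeObject on the server and cast back after ObjectInputStream.readObject on the client.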


    Server

    Objects of the Server class have no graphical user interface; the server runs only as a console application. A Server object performs a series of steps to successfully establish a connection with a client. First it constructs a new ServerSocket on an unoccupied port to start listening for connection attempts, also specifying the length of the queue of clients that can wait to connect to the server. The Server object then runs a set of methods successively, as follows:

    1. waitForConnection
    2. getStreams
    3. transferProcess
    4. closeConnection

    The first method listens on the port for any connection; when it receives a successful connection, the server gets the input stream and the output stream to read from and write to the socket. The second method, getStreams, is responsible for getting these streams. After that, the Server object runs the transfer process, which includes the LoadImages method. The LoadImages method filters the files in the directory and loads only the JPEG images, as BufferedImage objects. Once the object has references to all the images as BufferedImage objects, it starts constructing serializable objects; these are ImageSerialaization objects. Once the server has successfully constructed the serializable objects, it performs the transfer process by writing the objects to the connection socket. However, the first item written to the connection socket is an integer that represents the number of objects the Server object is about to write; this gives the client the knowledge of the number of objects that are expected to be transferred. After a successful transfer of all the images, which is handled by the transferProcess method, the Server object closes all the streams and the connection socket, to allow the ServerSocket to listen for a new connection attempt. An activity diagram is shown in Figure 4.4 to illustrate the process flow of the server side of the image sharing application.
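    The transfer step, with its count-first protocol, can be sketched as follows. The class and method names are illustrative; the transfer logic is parameterised on a stream so it works identically over a socket or in a test:

```java
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.OutputStream;
import java.io.Serializable;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.List;

// Illustrative sketch of the server side: write the object count first so
// the client knows how many objects to expect, then write the objects.
public class ImageServer {

    // The essentials of transferProcess, reduced to one method.
    public static void transferProcess(List<? extends Serializable> images,
                                       OutputStream out) throws IOException {
        ObjectOutputStream oos = new ObjectOutputStream(out);
        oos.writeInt(images.size());          // first item: the count
        for (Serializable img : images) {
            oos.writeObject(img);
        }
        oos.flush();
    }

    // waitForConnection / getStreams / closeConnection, serving one client
    // at a time, then returning to listen for the next attempt.
    public static void serve(int port, List<? extends Serializable> images)
            throws IOException {
        try (ServerSocket server = new ServerSocket(port)) {
            while (true) {
                try (Socket client = server.accept()) {
                    transferProcess(images, client.getOutputStream());
                }
            }
        }
    }
}
```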

    4.4.2 Client side

    A high level description of the process flow in this application is given below:

    1. The Client object opens a new socket to connect to the Server object.

    2. Once the Client makes a successful connection to the Server object, it gets the input stream to read data from the socket.

    3. The Client object writes all the images to a temporary directory.

    4. The Client object starts the ImageBrowser object to browse the images.

    As mentioned in section 4.4.1, all the images are transferred as ImageSerialaization objects. Thus, the client side should be able to retrieve


    the images from the received ImageSerialaization objects. The objects used to start the client side are the same as the objects of the server side of this application, except for the Client object. The operations carried out by these objects are explained in section 4.4.1.

    Client

    This class has been created to allow the computers in the network to temporarily copy the images captured by the camera in order to view them. The first thing this object does is create a Java socket to connect to the server. Connecting to the server is handled by the connectToServer method, which is created in this object. The connectToServer method attempts a connection to the ServerSocket using the IP address of the host machine and the port number. Once a successful connection is established with the server, it sends information about the host machine to confirm that the connection has been established successfully. After a successful connection, the Client object gets the input stream from the socket to start reading data from it. The first item read by the Client object is an integer that represents the number of objects that will be transferred to it. With the knowledge of the number of objects that are expected, the Client object creates a loop over that number. As the object iterates through the received objects, it casts every received object to an ImageSerialaization object. After receiving and casting all the objects to ImageSerialaization objects, the Client object invokes getImage() on the ImageSerialaization objects, which constructs BufferedImage objects from the serializable objects, and then saves all the files into a temporary directory to allow the ImageBrowser object to view the images. When the Client object finishes receiving all the objects, it closes the connection to the server, to allow other clients to connect, and starts the ImageBrowser object. The Client object deletes all the images that were transferred and saved to the temporary directory when the user closes the ImageBrowser object. Writing the objects to files and deleting the files are handled by the methods writeToFile and clearDir respectively. Here is an activity diagram to illustrate the activities carried out in the Image Sharing application.
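    The client's receive loop can be sketched as follows; the class and method names are illustrative, and the stream parameter stands in for the socket's input stream:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.ObjectInputStream;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the client's receive loop: read the count first,
// then read exactly that many objects from the stream.
public class ImageClient {

    public static List<Object> receiveAll(InputStream in)
            throws IOException, ClassNotFoundException {
        ObjectInputStream ois = new ObjectInputStream(in);
        int expected = ois.readInt();        // number of objects to come
        List<Object> received = new ArrayList<>();
        for (int i = 0; i < expected; i++) {
            // In the real application each object would be cast to
            // ImageSerialaization and its getImage() written to a temp file.
            received.add(ois.readObject());
        }
        return received;
    }
}
```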

    4.5 Video Recording

    One of the applications the system provides records video by converting a set of static images into a compressed video format. A high level description of the process flow in this application is given below:

    1. Getting the stream of images and capturing frames in the cameraPanel object.

    2. cameraPanel is part of the main GUI interface for this application.


    3. The VideoGUI object starts the process of recording video.

    4. The JpegImagesToMovie object is responsible for generating the QuickTime movie file.

    The video recording application has a set of classes, and each class participates in the process of video recording. The main class responsible for building the graphical user interface of this application is VideoGUI. This object instantiates a cameraPanel object to get a live feed from the camera as well as to capture frames. After capturing the frames required to create a video file, the VideoGUI object uses a JpegImagesToMovie object to process the set of frames, resulting in a video format file. There are other objects involved in this application, such as Background and checkDirectory; these objects are explained in the previous sections. This section presents the implementation of the application, especially the main objects responsible for the video recording process, which are: VideoGUI, cameraPanel, and JpegImagesToMovie.

    4.5.1 cameraPanel

    This object extends the functionality of the JPanel object of Java; its main responsibility is to provide the VideoGUI object with a live feed from the camera. It has a method to capture frames from the camera.

    4.5.2 VideoGUI

    This object extends the functionality of the JFrame object of Java. It is responsible for building the graphical user interface of this application and also handles the operations for directing and controlling the other objects to start capturing frames and processing videos. The object starts by initializing the GUI components and instantiating the cameraPanel object to get a live feed from the camera. After that, it calls the method that runs this object, called the Running method. This method has a set of statements that are constantly executed to check whether a video recording command has been issued; otherwise it keeps repainting the cameraPanel object. It has a main Boolean variable to indicate that a video recording operation has been triggered. The Boolean variable is updated by a button in the main interface of the application: once the button is clicked, the value of the Boolean is set to true, and the method responsible for capturing images in the cameraPanel object starts being called. Below is the code used for the Running method of the VideoGUI object.

    while (true) {
        if (recording) {
            ((cameraPanel) camPan).captureFrame(frameIndex);
            frameIndex++;
        }
        camPan.repaint();
    }

    The Boolean variable is recording; two buttons, the start video recording and stop video recording buttons, update it. When the stop video recording button is clicked, the VideoGUI object calls the methods that handle processing the images and creating a video file. The first thing the object executes is updating the Boolean variable to false, to stop recording frames, and resetting the frameIndex variable to zero. After that, the object prompts the user with a dialogue box, using JOptionPane, to enter a name for the recorded video; once the name is entered, it is passed to createVideo, the method responsible for creating the video file. This method starts by loading all the JPEG files in the Frames directory, then adds all the files into a Vector object, which is then used in creating the video file. After that, the VideoGUI object creates a MediaLocator object from the string entered by the user as the name of the video file. When it has executed all the above steps, it creates an instance of the JpegImagesToMovie object and invokes its doIt method, which is the main method implementing the video creation functionality from JPEG images. The createVideo method of the VideoGUI object passes the following parameters to the doIt method of the JpegImagesToMovie object: the width, the height, the frame rate, the Vector object containing all the JPEG images, and the MediaLocator object that has been created. At the end, after successfully creating the video file, the method empties the contents of the Frames directory by calling the clearDir method. Below is the code segment for the createVideo method.

    public void createVideo(String st) {
        File fileDir = new File("./Frames");
        FilenameFilter filter = new formatFilter("jpg");
        File[] imgFiles = fileDir.listFiles(filter);
        Vector inputFiles = new Vector();
        MediaLocator oml = JpegImagesToMovie.createMediaLocator(st);
        if (imgFiles.length > 0) {
            for (int i = 0; i < imgFiles.length; i++) {
                // add each captured frame to the list passed to JMF
                inputFiles.addElement(imgFiles[i].getPath());
            }
            JpegImagesToMovie imageToMovie = new JpegImagesToMovie();
            imageToMovie.doIt(640, 480, 10, inputFiles, oml);
            clearDir();
        }
    }

    After processing the images and creating a video file, the application goes back to performing the Running method with the Boolean variable set to false, ready for another video recording operation.

    4.5.3 JpegImagesToMovie

    This object is responsible for generating a QuickTime movie from still images, starting with a list of compressed JPEG image files stored in a directory. A custom PullBufferDataSource is designed to read the list of compressed JPEG image files from a Vector object and is then used to create a Processor. The output of the Processor is set to generate QuickTime data. The output DataSource of the Processor is then connected to a DataSink object to save the bits to a movie file. The DataSink is generated from the MediaLocator, which is passed to the object when calling the doIt method.

    4.6 Motion Detection

    The Motion Detection application has different modes of detection and a set of algorithms that are used by the different modes, as described in the design chapter. A high level description of the process flow in this application is given below:

    1. The GUI object starts by providing a list of the available detection modes.
    2. It calls the correct algorithm to run for each mode when selected.
    3. When the RedMarkingMode object is selected, it takes a set of frames of the scene as a training set to run the algorithm.
    4. The MailingMode and videoRecordingMode objects use the same algorithm but provide different features.
    5. Each algorithm object uses SwingWorker to allow it to run in the background as a thread controlled by the GUI object.

    The GUI object determines which of the modes to load and controls the flow of processes in the application. Before describing it, we will first introduce the cam object and the SwingWorker object.

    SwingWorker

    The SwingWorker class is a simple utility for computing a new value, or running a new process, on a separate thread. The SwingWorker object overrides the SwingWorker.construct() method to compute the value or run the process. This


    object has a method called start, which starts computing the value or executing the process introduced in the constructor of the object. This object is part of the main class responsible for running any of the detection modes, since it is used in running the object responsible for getting the live feed of the scene.
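    The SwingWorker described above is the older Sun utility class with a construct() method and a start() method; the standard javax.swing.SwingWorker shipped since Java 6 follows the same pattern (doInBackground() in place of construct(), execute() in place of start()). A minimal sketch of the idea, with a placeholder result standing in for a fetched camera frame:

```java
import javax.swing.SwingWorker;

// Background worker in the style described above: the overridden method
// runs on a worker thread, and get() retrieves the computed value.
public class FrameWorker extends SwingWorker<String, Void> {
    @Override
    protected String doInBackground() {
        // placeholder for fetching and processing a camera frame
        return "frame captured";
    }
}
```

    Calling execute() on an instance starts the background computation, and get() blocks until the result is available; this is how the live-feed object can run alongside the detection algorithm without freezing the GUI.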

    cam

    This class has been created to provide the GUI object with a live feed from the camera while the algorithm is running. Instead of processing the images with the algorithm and then viewing the feed from the camera, which would run both operations successively, threads are used to run the object driving the camera and the object running the algorithm in parallel. This object uses the SwingWorker object to prevent the application from updating the view and processing the images one after the other, which would cause delays in either the view from the camera or the computation of the algorithm. This object provides methods to get the images from the camera and then paint them in the panel.

    GUI

    This class has been created to hook up all the components needed for motion detection in this application. As mentioned before, this object contains two main objects running in parallel using the SwingWorker object: the first is the cam object for the live feed, and the second is the algorithm that processes the images to detect changes. When this object starts, it uses a JOption object to present the list of available detection modes, along with a description of each mode, allowing the user to choose a detection mode. A screenshot illustrates the JOption object. Once the user selects a mode, the GUI object initializes the application for the selected detection mode. The next section introduces the processes carried out once a detection mode is selected, along with the algorithm for computing the mode. For each mode, the application loads the correct background image using the BackgroundImage object, and then processes the remaining steps differently depending on the selected mode.

    4.6.1 Red Marking Mode

    When the Red Marking Mode is selected, the object loads the appropriate image as a background image. Since this mode only notifies the user of a change by marking the detected object with red pixels, it overrides the cam object of the application and processes all the images while painting each frame. This object starts by initializing the scene image of the detection area, and then performs the initialization state to get an array of


    the maximum difference that could occur between frames for each pixel due to image noise. The pixel values of the scene frame are stored in an array in the object and compared with the other frames to produce the difference array in the initialization state. This state is performed by invoking the getDifference method, which provides the user with a progress bar to show the state of the application while initializing the detection mode. The object takes up to 30 frames to measure the difference in pixel values and stores the greatest and lowest value of each pixel in separate arrays. After processing 30 different frames to get the highest and lowest values, the object calculates the difference for each pixel. Here is pseudo code to illustrate the process of initializing the scene when the getDifference method is invoked:

    BufferedImage scene = setScene();
    int[] sceneRGBs = scene.getRGBs();
    int[] lowest, highest

    loop (30 frames)
        iterate through all pixels
            value = getRGB(x,y)
            if value > highest
                highest = value
            else if value < lowest
                lowest = value
        end of iteration
    end loop

    iterate through elements of highest
        difference[x] = highest[x] - lowest[x]
    end of iteration

    return difference[]
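    The initialization state above can be sketched in plain Java. Here each frame is reduced to a flat int[] of pixel values, a simplification of the BufferedImage handling in the application for illustration only; the method tracks the highest and lowest value seen per pixel and returns the per-pixel noise range:

```java
import java.util.Arrays;

public class NoiseRange {
    // Given a series of frames (flat pixel arrays of equal length), compute
    // the per-pixel range between the highest and lowest observed value.
    public static int[] difference(int[][] frames) {
        int n = frames[0].length;
        int[] highest = new int[n];
        int[] lowest = new int[n];
        Arrays.fill(highest, Integer.MIN_VALUE);
        Arrays.fill(lowest, Integer.MAX_VALUE);
        for (int[] frame : frames) {          // the report uses up to 30 frames
            for (int i = 0; i < n; i++) {
                if (frame[i] > highest[i]) highest[i] = frame[i];
                if (frame[i] < lowest[i])  lowest[i] = frame[i];
            }
        }
        int[] diff = new int[n];
        for (int i = 0; i < n; i++) diff[i] = highest[i] - lowest[i];
        return diff;
    }
}
```

    In the real application the frames would come from the camera stream; any per-pixel deviation inside the returned range can then be attributed to image noise rather than motion.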

    After getting the difference array, the object starts running the algorithm for detecting any change. The method that actually implements motion detection is the checkImage method; it is responsible for processing the images and returns a BufferedImage object, since it checks the images coming directly from the camera against the scene image. This method is invoked in the paint method, because the result of the detection needs to be painted on the JPanel object and then shown to the user. The method checks the image passed as its argument against the scene image to calculate the difference in pixel values. If the difference is greater than the difference calculated in the initialization state, the pixel is considered a changed pixel and is coloured red by replacing its value with the RGB value representing the colour red. Otherwise, the method skips the pixel and processes the next pixel to check its difference


    value. If the number of changed pixels is greater than 4000, the method returns the modified image; otherwise the original frame being processed is returned. Here is pseudo code for the checkImage process explained above:

    int[] sceneRGBs = scene.getRGBs();
    int[] currentRGBs
    BufferedImage currentImage
    BufferedImage redImage
    int notSame;

    iterate through the pixel values
        currentDifference = |currentRGBs(x,y) - sceneRGBs(x,y)|
        if currentDifference > difference(x,y)
            redImage at (x,y) = red pixel
            notSame++
    end of iteration

    if notSame > 4000
        return redImage
    else
        return currentImage
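    A compact Java sketch of this checkImage logic is given below. Comparing packed RGB integers with a single subtraction is a simplification (a faithful implementation would compare the colour channels separately), and the class and parameter names are illustrative, not taken from the report:

```java
import java.awt.image.BufferedImage;

public class RedMarker {
    // Mark pixels whose deviation from the scene exceeds the per-pixel
    // noise range; return the marked image only if enough pixels changed.
    static BufferedImage checkImage(BufferedImage scene, BufferedImage current,
                                    int[] noiseRange, int threshold) {
        int w = scene.getWidth(), h = scene.getHeight();
        BufferedImage red = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        int notSame = 0;
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int cur = current.getRGB(x, y);
                int diff = Math.abs(cur - scene.getRGB(x, y));
                if (diff > noiseRange[y * w + x]) {
                    red.setRGB(x, y, 0xFFFF0000); // paint the changed pixel red
                    notSame++;
                } else {
                    red.setRGB(x, y, cur);        // keep the original pixel
                }
            }
        }
        return notSame > threshold ? red : current;
    }
}
```

    The report uses 4000 as the threshold; smaller values make the mode more sensitive to noise.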

    As explained earlier, when comparing for image difference, the method deals with arrays of pixels extracted from each BufferedImage object. Some of the important arrays and variables in the method are:

    sceneRGBs: the array holding the pixel values of the scene image, against which all the frames are compared.
    currentRGBs: the array holding the current frame being compared with the scene image to find image differences.
    difference: the array holding the difference values calculated in the previous method, getDifference.

    Once the process of comparing all the pixels of the current image is complete, the method replaces the original pixel values in the current image with the modified pixel values to show the object marked in red pixels. The Red Marking Mode only colours the pixels that fall outside the range in the difference array; if a pixel value is outside this range, the difference in value is not caused by image noise, but is more likely caused by an object entering the monitored area.


    4.6.2 Mailing Mode

    When the Mailing Mode is selected, the application loads the appropriate image as a background image using the BackgroundImage object. The program needs an email account to send the notifications to, so the GUI object loads another JOption object to ask the user for an email address. After an email address is entered, the application validates it using pattern matching. The application checks whether the address contains the character @ and then checks its format, for example that the domain name contains a dot. If the email address supplied is invalid, the application shows an error message informing the user and then exits automatically. Otherwise the application continues executing the list of commands in the object until it reaches the algorithm part, and starts running the correct algorithm as a thread in the application. The algorithm used in this mode is different from the one outlined in the previous section; it works more accurately than the previous one because it uses binary images when comparing images for differences. The object responsible for running the algorithm is the MailingMode object, and the object responsible for sending the notifications of motion detection is the sendEmail object.
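    The validation step can be sketched with a regular expression. The exact pattern the application uses is not given here, so the expression below is an assumption that captures the checks described (an @ character and a dotted domain):

```java
import java.util.regex.Pattern;

public class EmailValidator {
    // Hypothetical pattern: local part, an @, then a domain containing
    // at least one dot, matching the checks described in the text.
    private static final Pattern EMAIL =
        Pattern.compile("^[\\w.+-]+@[\\w-]+(\\.[\\w-]+)+$");

    public static boolean isValid(String address) {
        return address != null && EMAIL.matcher(address).matches();
    }
}
```

    An address such as "user@example" would fail this check because the domain part has no dot, which matches the behaviour described above.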

    MailingMode

    This object compares binary image objects. When retrieving images from the camera, the object converts each image from a colour image to a greyscale image, then converts the greyscale image into a binary image. The operations carried out to convert images are in the getBinaryImage method; below is pseudo code illustrating the processes of the getBinaryImage method.

    read JPEG stream to extract a color image

    create an instance of grayscale type BufferedImage

    paint the color image into the grayscale image

    create an instance of binary type BufferedImage

    paint the grayscale image into the binary image

    return BufferedImage object of binary image type
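    The steps above map directly onto Java2D: painting a colour image into a TYPE_BYTE_GRAY buffer greyscales it, and painting the greyscale result into a TYPE_BYTE_BINARY buffer thresholds it to black and white. A self-contained sketch (the class name is illustrative):

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class BinaryConverter {
    public static BufferedImage toBinary(BufferedImage colour) {
        // Step 1: paint the colour image into a greyscale buffer.
        BufferedImage grey = new BufferedImage(colour.getWidth(),
                colour.getHeight(), BufferedImage.TYPE_BYTE_GRAY);
        Graphics2D g = grey.createGraphics();
        g.drawImage(colour, 0, 0, null);
        g.dispose();

        // Step 2: paint the greyscale image into a binary buffer.
        BufferedImage binary = new BufferedImage(colour.getWidth(),
                colour.getHeight(), BufferedImage.TYPE_BYTE_BINARY);
        g = binary.createGraphics();
        g.drawImage(grey, 0, 0, null);
        g.dispose();
        return binary;
    }
}
```

    The resulting image holds only black and white pixels, which is what makes the comparison in the next step resistant to small colour variations.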

    The first step the algorithm performs when running is setting the scene image, as a binary image, to compare against all the other images. The second step is comparing the images to test for any change. In binary images, objects are represented with the value 1 and the background with the value 0. When comparing two successive frames of binary image type, the difference in pixel values due to image noise will


    be minimal, between 10% and 20%. If the total number of differing pixels is greater than 4000 pixels, a motion is detected and the object invokes methods of the sendEmail object to send the notification. Before invoking any method for sending emails, the object stores a BufferedImage object of colour image type to be used by the object responsible for sending emails. After sending an email, the object deletes the colour image. Here is pseudo code describing the processes carried out by the algorithm for detecting motion in the MailingMode object.

    currentImage = getBinaryImage();
    currentRGBs = get RGBs of currentImage
    sceneRGBs = RGBs of the scene image

    iterate through the elements of sceneRGBs
        if sceneRGBs at (x,y) is not equal to currentRGBs at (x,y)
            add 1 to notSame variable
    end of iteration

    if notSame > 3500
        motion detected
        save a colour image into the directory
        invoke send email method of sendEmail object
    end if
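    The comparison loop above can be sketched as a straightforward pixel count over two binary BufferedImage objects; the class name is illustrative, and the trigger value is the 3500 used in the pseudo code:

```java
import java.awt.image.BufferedImage;

public class BinaryDiff {
    // Count the pixels that differ between the scene and the current
    // binary frame; report motion when the count exceeds the trigger.
    public static boolean motionDetected(BufferedImage scene,
                                         BufferedImage current, int trigger) {
        int notSame = 0;
        for (int y = 0; y < scene.getHeight(); y++)
            for (int x = 0; x < scene.getWidth(); x++)
                if (scene.getRGB(x, y) != current.getRGB(x, y))
                    notSame++;
        return notSame > trigger;
    }
}
```

    Because both images are binary, a simple equality test per pixel is enough; there is no need for the per-pixel noise range used by the Red Marking Mode.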

    SendEmail

    This class has been created to implement the email sending function in the Java application. It uses classes that are not provided by the Java SE package; instead it uses extension classes provided by the JavaMail API and the Java Activation API to send messages by email from Java. The JavaMail API can use a properties file for reading server names and related configuration; these settings override any system defaults. Alternatively, the configuration can be set directly in code using the JavaMail API. Examples of these configuration values are the SMTP host name, SMTP port, SSL factory, etc. The email account used for sending messages to the target email address is: [email protected]. The email account of the application is configured and ready for use by the application. All the configuration for Google Mail is set directly in the code of this class. There are a number of methods in this class, but the method responsible for sending a message is sendSSLMessage. Once an instance of this class is created, the recipient email address is added directly to the object by invoking the setEmailAddress method. The method that actually implements the send message functionality is only called once a change occurs. When it is called, the method checks a directory called attachments to attach an image of the detected object to the message. Adding files as attachments to the


    email is achieved by creating an instance of a DataSource object for the image and then adding the object to the email message. When the message is about to be sent, the method gets the time and date from the system and adds them to the body of the email message to show when the object was detected. After sending a notification email to the recipient, the contents of the attachments directory are deleted to avoid any conflict when sending the next notification email.

    4.6.3 Video Recording Mode

    As in the previous modes, once this mode is selected, the appropriate background image is loaded into the application and the correct algorithm is run. The Video Recording Mode is very similar to the Mailing Mode because they use the same algorithm; the only difference between the two modes is what gets invoked after detecting a motion. As discussed in the previous mode, a message is sent to an email account once a motion is detected. In this mode, once a motion is detected the application records a video of the motion and stores it in the directory. This mode uses the same application explained in section 4.5, the video recording application, though it differs slightly. In the video recording application, video recording is triggered by pressing a start button and ends with a stop button, whereas in this application, video recording starts when a motion is detected and ends when the camera no longer detects the motion that triggered the recording. Another difference is the output video file: in this application the output video file has a shorter length and hence a faster frame rate, to reduce the storage space used. Once a motion is detected, the videoRecordingMode object starts capturing images continually and stores them in a temporary directory; once the detected motion is cleared, the object processes the images using the same objects used in the Video Recording Application to produce a video file, then clears the contents of the temporary directory.


    Chapter 5

    Results

    5.1 Introduction

    This chapter illustrates the designed system, how it works in practice, and how it is intended to be used. The system consists of different applications for webcams as outlined in this paper; the main application of the system is intended to connect all the individual components of the system. Here is a list of the applications of the system:

    - Running the IP camera
    - Image Browser
    - Video Recording
    - Image Sharing
    - Motion Detector

    This section illustrates the GUI layout of each application of the system, as well as the function each application provides.

    5.2 Graphics User Interface Layout and features

    5.2.1 Main Application

    The main application is the software for networked webcams, which hooks up all the parts of the system. The Main Application shows the list of the applications provided by the system to allow the user to select an application to run. Figure 5.1 shows the layout of the application.

    The application provides a list of panels, each of which corresponds to a specific application of the camera. Once a panel is clicked, the corresponding application starts running and the main application is shut down.


    Figure 5.1: The main application of the system

    5.2.2 Running the IP camera

    The Running the IP camera application is responsible for viewing the live feed from the camera by reading the stream of data. Figure 5.2 is a screenshot illustrating the GUI layout of the application.

    As we can see in Figure 5.2, the application provides the user with a panel to view a live feed from the camera, together with a status panel to show whether the camera is connected or not. These panels are labelled view and status. The features of the application are provided through two buttons, a capture image button and a capture sequence button; each of these buttons captures images from the camera immediately once clicked, the difference being that one button captures a single frame while the other captures a sequence of frames successively. Once a frame is captured, the status panel is updated to show the last action carried out.

    5.2.3 Image Browser

    The Image Browser application is used to view and navigate through the images captured by the Running the IP camera application. The application contains two panels: the first panel contains a list of all the available images in the directory, and the second panel shows the selected image. Figure 5.3 is a screenshot showing the GUI layout of the application.

    The application provides another way of navigating, through the buttons: once a button is clicked, the selection moves one step up or down, depending on the selected button, until it reaches the top or bottom of the list. The application highlights the selected image in the list to show which image is currently selected.


    Figure 5.2: GUI Layout of Running the IP camera application

    Figure 5.3: The Image Browser Application


    Figure 5.4: The GUI interface of video recording application

    5.2.4 Video Recording

    The Video Recording application has a GUI layout very similar to the Running the IP camera application, and likewise provides the user with the status of the camera. It offers a simple interface with two main panels: a view panel, which shows a live feed from the camera, and a status panel to show the status of the application to the user. The interface of the application is shown in Figure 5.4.

    The application provides the user with a button to start the video recording operation. The operations provided by this application are tested and explained in chapter 6.

    5.2.5 Image Sharing

    The Image Sharing application has two parts, a server side and a client side. The client side is responsible for viewing the images, and it is the same application as the Image Browser; the only difference between the Image Browser application and this one is the directory from which the program loads the files. The server side does not have a GUI interface and does not provide any functions to the user; it should be up and running


    Figure 5.5: Server console application

    Figure 5.6: Image detection modes

    all the time to accept connections and transfer images. Figure 5.5 shows the server console application when it is started.

    As we can see, when the server starts it shows the status of the server,and it gets updated once a client is connected to it.

    5.2.6 Motion Detector

    The Motion Detection application has different modes, as we explained earlier. When the application is started, it shows the list of the available detection modes, as illustrated in Figure 5.6.

    Each mode is briefly explained in terms of its functionality. Once a mode is selected, the application starts the correct mode to run the motion detection algorithm. Each mode, and the results obtained when a motion is detected, is explained in chapter 6. All the modes share the same GUI layout design, with a different title for each mode. However, they differ in the


    Figure 5.7: Red Marking Detection Mode GUI Layout

    result of motion detection. Figure 5.7 illustrates the GUI layout design of the Red Marking Mode, which is the same for the rest of the modes.

    All the other detection modes, and the results obtained when a motion is detected for each of them, are explained in the next chapter.


    Chapter 6

    Testing and Evaluation

    6.1 Introduction

    This chapter presents the results obtained from testing each application of the system, confirming that each part of the system works correctly. Moreover, the chapter covers the performance of each detection mode, and hence the efficiency and performance of the algorithms used in those modes.

    6.2 System Performance

    The overall performance of the system is highly acceptable. The quality of the live feed of the scene shown in the system depends on the IP camera used and its frame rate. The performance of the motion detection feature depends on the quality of the images obtained from the camera, and thus on the amount of noise present in the images. Each application of the system runs correctly and produces appropriate error messages when something goes wrong. The efficiency of the system allows the programs to run smoothly.

    6.3 GUI Testing

    In order to test the interaction between the user and the GUI, and to ensure that the state of the GUI was always consistent with the state of the program, a series of tests was performed. The complete series can be found in Appendix A. The output of the program when tested can also be seen in Appendix A.


    6.5 Motion Detection Testing

    The performance of the motion detection algorithms can be measured by the performance of each mode. In order to test this application, each detection mode provided by the application was tested individually. Once the application is run, it provides a list of detection modes as shown in the results chapter. The results obtained from testing each of the detection modes are shown in this section. The performance of the algorithm behind each mode is critical to the success of the system in detecting motion and starting the services provided by each mode. There are a wide number of variables that can affect the performance of the algorithm:

    - Thresholds performed by Java.
    - Similar coloured objects in the scene.
    - Image noise.
    - Li