IOSR Journal of Electrical and Electronics Engineering (IOSRJEEE)
ISSN : 2278-1676 Volume 1, Issue 1 (May-June 2012), PP 01-19
www.iosrjournals.org
Review of Graph, Medical and Color Image base Segmentation
Techniques
Patel Janakkumar Baldevbhai1, R.S. Anand2
1Research Scholar, Image & Signal Processing Group, Electrical Engineering Department, Indian Institute of Technology Roorkee, Uttarakhand, India, Pin: 247 667.
2Professor, Electrical Engineering Department, Indian Institute of Technology Roorkee, Uttarakhand, India
Abstract—This literature review attempts to provide a brief overview of some of the most common
segmentation techniques, and a comparison between them. It discusses Graph based methods, Medical image
segmentation research papers and Color Image based Segmentation Techniques. With the growing research
on image segmentation, it has become important to categorise the research outcomes and provide readers
with an overview of the existing segmentation techniques in each category. In this paper, different image
segmentation techniques starting from graph based approach to color image segmentation and medical image segmentation, which covers the application of both techniques, are reviewed.Information about open
source software packages for image segmentation and standard databases are provided. Finally, summaries
and review of research work for image segmentation techniques along with quantitative comparisons for
assessing the segmentation results with different parameters are represented in tabular format, which are the
extracts of many research papers.
Index Terms—Graph based segmentation technique, medical image segmentation, color image
segmentation, watershed (WS) method, F-measure, computerized tomography (CT) images
I. Introduction
Image segmentation is the process of separating or grouping an image into different parts. There are
currently many different ways of performing image segmentation, ranging from the simple thresholding
method to advanced color image segmentation methods. These parts normally correspond to something that
humans can easily separate and view as individual objects. Computers have no means of intelligently
recognizing objects, and so many different methods have been developed in order to segment images. The
segmentation process is based on various features found in the image, such as color information, boundaries, or segments of an image. The aim of image segmentation is the domain-independent partition of
the image into a set of regions, which are visually distinct and uniform with respect to some property, such as
grey level, texture or colour. Segmentation can be considered the first step and key issue in object
recognition, scene understanding and image understanding.
Its application area varies from industrial quality control to medicine, robot navigation, geophysical
exploration, military applications, etc. In all these areas, the quality of the final results depends largely on the
quality of the segmentation. In this review paper we discuss graph-based segmentation techniques, color image segmentation techniques and medical image segmentation, which is a real-time application and a
very important field of research. The mathematical details are avoided for simplicity.
II. SEGMENTATION METHODS
2.1 Graph Based Methods:
A family of graph-theoretical algorithms based on the minimal spanning tree is capable of detecting
several kinds of cluster structure in arbitrary point sets [1][2]. In the year 1971, C. T. Zahn presented a paper,
where brief discussion is made of the application of cluster detection to taxonomy and the selection of good
feature spaces for pattern recognition. Detailed analyses of several planar cluster detection problems are
illustrated by text and figures. The well-known Fisher iris data, in four-dimensional space, have been analysed by these methods. During 1993, N.R. Pal and S.K. Pal [3] presented a review paper on image
segmentation techniques. They mentioned that the selection of an appropriate segmentation technique largely
depends on the type of images and application areas. It has been established by Pal and Pal that the grey level
distributions within the object and the background can be more closely approximated by Poisson
distributions than that by the normal distributions. An interesting area of investigation is to find methods of
objective evaluation of segmentation results. It is very difficult to find a single quantitative index for this
purpose because such an index should take into account many factors like homogeneity, contrast,
compactness, continuity, psycho-visual perception, etc. Possibly the human being is the best judge for
this. N.R. Pal and D. Bhandari [4] presented a paper in 1993. The segmentation of an image is a partitioning
of the picture into connected subsets, each of which is uniform, but no union of adjacent subsets is uniform.
Kittler and Illingworth suggested an iterative method for minimum error thresholding assuming normal distributions for the gray level variations within the object and background. The computational steps required to implement the method of Kittler and Illingworth have been derived assuming a Poisson distribution for the gray level variation within the object/background. The method proposed by Pal and Pal requires less
computation time than the method of Kittler and Illingworth. Attempts have also been made for the objective
evaluation of the segmented images using measures of correlation, divergence, entropy and uniformity.
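The minimum-error thresholding scheme of Kittler and Illingworth discussed above can be computed directly from the grey-level histogram. The following is a minimal sketch under the normal (Gaussian) assumption, not the Poisson variant of Pal and Pal; the criterion form is the standard J(t) from Kittler and Illingworth's paper:

```python
import numpy as np

def min_error_threshold(hist):
    """Kittler-Illingworth minimum-error thresholding (normal assumption).

    hist: 1-D array of grey-level counts. Returns the threshold t that
    minimises J(t) = 1 + 2(P1 ln s1 + P2 ln s2) - 2(P1 ln P1 + P2 ln P2).
    """
    hist = np.asarray(hist, dtype=float)
    levels = np.arange(hist.size)
    total = hist.sum()
    best_t, best_j = None, np.inf
    for t in range(1, hist.size - 1):
        h1, h2 = hist[:t], hist[t:]
        p1, p2 = h1.sum() / total, h2.sum() / total
        if p1 == 0 or p2 == 0:
            continue
        m1 = (levels[:t] * h1).sum() / h1.sum()
        m2 = (levels[t:] * h2).sum() / h2.sum()
        v1 = ((levels[:t] - m1) ** 2 * h1).sum() / h1.sum()
        v2 = ((levels[t:] - m2) ** 2 * h2).sum() / h2.sum()
        if v1 <= 0 or v2 <= 0:   # degenerate class, skip
            continue
        j = 1 + 2 * (p1 * np.log(np.sqrt(v1)) + p2 * np.log(np.sqrt(v2))) \
              - 2 * (p1 * np.log(p1) + p2 * np.log(p2))
        if j < best_j:
            best_t, best_j = t, j
    return best_t
```

On a clearly bimodal histogram the selected threshold falls in the valley between the two modes.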
Objective evaluation of the thresholds has been done by them using divergence, region uniformity,
correlation between original image and the segmented image and second order entropy. In his paper (1996),
Y. J. Zhang [5] investigates how the results of object feature measurement are affected by image
segmentation. Seven features have been examined by Yu Jin Zhang under different image conditions using a
common segmentation procedure. The results indicate that accurate measures of image properties profoundly
depend on the quality of image segmentation. As there is currently no general segmentation theory, the empirical methods are more suitable and useful than the analytical methods for performance evaluation of
segmentation algorithms [6]. A number of controlled tests are carried out to examine the dependence of
feature measurements on the quality of segmented images. In the year 1997, Mark Tabb et al.[7] proposed an
algorithm for image segmentation at multiple scales, concerned with the detection of low level structure in
images. Jianbo Shi and Jitendra Malik
[8] in the year 2000, proposed a novel approach for solving the perceptual grouping problem in vision.
Normalized cut is an unbiased measure of disassociation between subgroups of a graph and it has the nice
property that minimizing normalized cut leads directly to maximizing the normalized association, which is an
unbiased measure for total association within the subgroups. Rather than focusing on local features and their consistencies in the image data, Shi and Malik's approach aims at extracting the global impression of an image: they treat image segmentation as a graph partitioning problem and propose the normalized cut as a novel global criterion for segmenting the graph, measuring both the total dissimilarity between the different groups and the total similarity within the groups. Bir Bhanu and Jing Peng [9] presented a general approach to image segmentation and object recognition that can adapt the image segmentation algorithm parameters to changing environmental conditions. Leo Grady and Eric L. Schwartz
[10] introduced in the year 2006 an alternate idea that finds partitions with a small isoperimetric constant,
requiring solution to a linear system rather than an eigenvector problem. Suggestions for future work are
applications to segmentation in space-variant architectures, supervised or unsupervised learning, three-
dimensional segmentation, and the segmentation/clustering of other areas that can be naturally modelled with
graphs. In Aug. 2010, Lei Zhang et al. [11] first proposed to employ Conditional Random Field (CRF) to
model the spatial relationships among image super pixel regions and their measurements. They then
introduced a multilayer Bayesian Network (BN) to model the causal dependencies that naturally exist among
different image entities, including image regions, edges, and vertices. The CRF model and the BN model are then systematically and seamlessly combined through the theories of Factor Graph to form a unified
probabilistic graphical model that captures the complex relationships among different image entities. Using
the unified graphical model, image segmentation can be performed through a principled probabilistic
inference.
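The normalized-cut formulation above lends itself to a compact spectral sketch. The example below, which assumes a small precomputed affinity matrix rather than image data, performs one bipartition by solving the generalized eigenproblem (D - W)y = λDy from Shi and Malik's paper and splitting on the sign of the second-smallest eigenvector:

```python
import numpy as np
from scipy.linalg import eigh

def normalized_cut_bipartition(W):
    """One level of the Shi-Malik normalized-cut recursion (a sketch).

    W: symmetric affinity matrix of a weighted graph (n x n).
    Solves (D - W) y = lambda * D y and splits the nodes by the sign
    of the second-smallest generalized eigenvector.
    """
    d = W.sum(axis=1)
    D = np.diag(d)
    # eigh solves the symmetric generalized problem A y = lambda B y
    vals, vecs = eigh(D - W, D)
    fiedler = vecs[:, 1]          # second-smallest eigenvector
    return fiedler >= 0           # boolean partition labels

# two weakly coupled cliques should be separated by the cut
W = np.full((6, 6), 0.01)
W[:3, :3] = 1.0
W[3:, 3:] = 1.0
np.fill_diagonal(W, 0.0)
labels = normalized_cut_bipartition(W)
```

Recursive application of this bipartition to each subgroup yields the full multi-way segmentation described in [8].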
2.2 COLOR IMAGE SEGMENTATION:
Healey [12] includes the edge information to guide the colour segmentation process, while in [13]
the authors combine the texture features extracted from each sub-band of the colour space with the colour
features using heuristic merging rules. In [14], the authors discuss a method that employs the colour–texture information for the model-based coding of human images, while Shigenaga [15] adds the spatial frequency
texture features sampled by Gabor filters to complement the CIE Lab (CIE is the acronym for the Commission Internationale de l'Éclairage) colour image information. In order to capture the colour–texture
content, Rosenfeld et al. [16] calculated the absolute difference distributions of pixels in multi-band images,
while Hild et al. [17] proposed a bottom-up segmentation framework where the colour and texture feature
vectors were separately extracted and then combined for knowledge indexing. In papers [113,115–122,127,128,132,133] the extraction of colour and texture features is performed as a sequence of serial processes.
The study of chromatic content has been conducted by Paschos and Valavanis [18]. Shafarenko et al. [19]
explored segmentation of randomly textured colour images. In this approach the segmentation process is
implemented using a watershed transform that is applied to the image data converted to the CIE Luv colour
representation. The application of the watershed transform results in over-segmentation and to compensate
for this problem the resulting regions are merged according to a colour contrast measure until a termination
criterion is met. Hoang et al. [20] proposed a different approach to include the colour and texture information
in the segmentation process and they applied the resulting algorithm to the segmentation of synthetic and
natural images. Their approach proceeds with the conversion of the RGB image into a Gaussian colour model and this is followed by the extraction of the primary colour–texture features from each colour channel using a
set of Gabor filters. They applied Principal Component Analysis (PCA) to reduce the dimension of the
feature space from sixty to four. The resulting feature vectors are used as inputs for a K-means algorithm that
is employed to provide the initial segmentation that is further refined by a region-merging procedure.
Segmentation results are obtained by the method using a set of images from Berkeley database [21]. Wang et
al. [22] proposed quaternion Gabor filters to sample the colour–texture properties in the image. In their
approach the input image is initially converted to the Intensity Hue Saturation (IHS) colour space and then
transformed into a reduced biquaternion representation. In the paper presented by Jain and Healey [23], a multi-scale classification scheme in the colour–texture domain has been investigated. The experiments
were conducted using several images from the MIT VisTex database (Vision Texture—Massachusetts
Institute of Technology) [24] and the segmentation results were visually compared against those returned by JSEG (J-image Segmentation) proposed by Deng and Manjunath [25].
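To make the colour–texture pipelines above concrete, here is a minimal sketch in the spirit of Hoang et al.'s approach [20]: Gabor filter magnitudes are computed per colour channel, the feature space is reduced with PCA, and the pixels are clustered with k-means. The filter bank, kernel sizes and parameter values are illustrative assumptions, not the settings of the original paper, and a tiny deterministic k-means stands in for a library implementation:

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(freq, theta, sigma=3.0, size=15):
    """Real part of a Gabor kernel at the given frequency and orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return envelope * np.cos(2.0 * np.pi * freq * xr)

def _kmeans(X, k, iters=20):
    """Tiny Lloyd k-means, deterministically seeded along the first axis."""
    order = np.argsort(X[:, 0])
    cent = X[order[np.linspace(0, len(X) - 1, k).astype(int)]]
    for _ in range(iters):
        dists = ((X[:, None, :] - cent[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(1)
        for j in range(k):
            if (labels == j).any():
                cent[j] = X[labels == j].mean(0)
    return labels

def colour_texture_segment(img, n_classes=2, freqs=(0.1, 0.25), n_orient=4):
    """Hoang-style sketch: Gabor texture energy per colour channel,
    PCA reduction to four components, then k-means on pixel features."""
    feats = []
    for c in range(img.shape[2]):
        chan = img[..., c].astype(float)
        feats.append(chan)                       # raw colour feature
        for f in freqs:
            for k in range(n_orient):
                kern = gabor_kernel(f, np.pi * k / n_orient)
                feats.append(np.abs(convolve(chan, kern)))
    X = np.stack(feats, axis=-1).reshape(-1, len(feats))
    X = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)   # PCA via SVD
    return _kmeans(X @ Vt[:4].T, n_classes).reshape(img.shape[:2])
```

On a synthetic image with two differently coloured halves the two clusters recover the two regions, up to boundary pixels affected by the filter support.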
2.2.1 VARIOUS APPROACHES FOR COLOUR EXTRACTION
Among various strategies, the evaluation of the colour attributes in succession was one of the most
popular directions of research. A segmentation algorithm has been proposed by Mirmehdi and Petrou [26]. In
this paper the authors introduced a colour–texture segmentation scheme where the feature integration is
approached from a perceptual point of view. Here, convolution matrices are calculated using a weighted sum
of Gaussian kernels and are applied to each colour plane of the opponent colour space (intensity, red–green
and blue–yellow planes, respectively) to obtain the image data that make up the perceptual tower. A related
segmentation approach was proposed by Huang et al. [27]. In this paper the authors applied Scale Space
Filters (SSF) to partition the image histogram into regions that are bordered by salient peaks and valleys. A
well-known colour–texture feature integration approach was proposed by Deng and Manjunath
method is referred to as JSEG and consists of two computational stages, namely colour quantisation and
spatial segmentation. The segmentation process is defined as a multi-scale region growing strategy that is
applied to the J-images, where the initial seeds required by the region growing procedure correspond to
minima of local J values. This multi-scale region growing process often generates an over-segmented result,
and to address this problem, a post-processing technique is applied to merge the adjacent regions based on
colour similarity and the Euclidean distance in the CIE Luv colour space. In general, the overall performance
of the JSEG algorithm is very good. Results depend on the optimal selection of three parameters specified by
the user: the colour quantisation threshold, the number of scales and the merge threshold. In [28,29] the authors replaced the colour quantisation phase of the JSEG algorithm with an adaptive Mean Shift clustering method and evaluated the performances of the new JSEG-based algorithm and the original JSEG implementation when applied to 150 images randomly chosen from the Berkeley database [21], while Zheng et al.
[30] followed the same idea and combined the quantisation phase of the JSEG algorithm with fuzzy
connectedness. Yu et al. [31] attempted to address the over-segmentation problems associated with the
standard JSEG algorithm. The authors validated the proposed algorithm on 200 images from the Berkeley
database [21], reporting results in terms of the Local Consistency Error (LCE). Krinidis and Pitas [32] introduced
an approach called MIS (Modal Image Segmentation). The MIS algorithm was evaluated on all 300 images
contained in the Berkeley database [21] using measures such as the Probabilistic Rand (PR) [33], Boundary
Displacement Error (BDE) [35], Variation of Information (VI) [36] and Global Consistency Error (GCE)
The MIS colour segmentation technique was compared against four state-of-the-art image segmentation
algorithms namely Mean-Shift [26], Normalised Cuts [37], Nearest Neighbour Graphs [27] and Compression
based Texture Merging [25]. The unsupervised segmentation of textured colour images has been recently
addressed in the paper by Hedjam and Mignotte [38]. They proposed a hierarchical graph-based Markovian
proposed technique applied on the Berkeley Database [21] by calculating performance metrics such as
Probabilistic Rand Index (PR) [33], Variation of Information (VI) [36], Global Consistency Error (GCE) [34]
and Border Displacement Error (BDE) [35]. A split and merge image segmentation technique was
implemented for the retrieval of complex images by Gevers [39]. Ojala and Pietikainen [40,41] proposed a
split and merge segmentation algorithm. The split process is evaluated using a metric called Merging
Importance (MI). If the colour distribution is homogeneous, the weights w1 and w2, which control the contribution of the texture and colour during the merge process, are adjusted in order to give the colour information more
importance. If the colour distribution is heterogeneous it is assumed that the texture is the dominant feature in
the image and the algorithm allocates more weight to texture in the calculation of the MI values. The merge
process is iteratively applied until the minimum value for MI is higher than a pre-defined threshold. Chen
and Chen [42] also implemented a split and merge approach for colour–texture segmentation. In this algorithm
a colour quantisation based on a cellular decomposition strategy was applied in the HSV (Hue Saturation Value) colour space to extract the dominant colours in the image. Next, the extraction of the colour and local
edge pattern histograms is performed. P. Nammalwar et al. [43,44] present a similar strategy for a split and
merge segmentation scheme. Garcia Ugarriza et al. [45] proposed an automatic Gradient SEGmentation
algorithm (referred to as GSEG) for the segmentation of colour images. J. Chen et al. [46] implemented an algorithm for the
segmentation application of natural images to content-based image retrieval (CBIR). In [47] G. Paschos and
Valavanis proposed an approach using a region growing procedure. Fondon et al. [48] performed colour
analysis in the CIE Lab space using multi-step region growing, and Grinias et al. [49] proposed a
segmentation framework using K-Means clustering followed by region growing in the spatial domain.
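The region-growing principle that recurs in several of the works above (e.g. [48,49]) reduces to a simple flood from a seed pixel. The sketch below, on a grey-scale image with a fixed tolerance around the running region mean (an illustrative simplification of the multi-step schemes cited), makes the idea explicit:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10.0):
    """Seeded region growing (sketch): starting from `seed`, absorb
    4-connected neighbours whose intensity is within `tol` of the
    running region mean."""
    h, w = img.shape
    grown = np.zeros((h, w), dtype=bool)
    grown[seed] = True
    total, count = float(img[seed]), 1
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not grown[nr, nc]:
                if abs(img[nr, nc] - total / count) <= tol:
                    grown[nr, nc] = True
                    total += float(img[nr, nc])
                    count += 1
                    queue.append((nr, nc))
    return grown
```

Starting from a seed in a homogeneous region, the flood stops exactly at pixels whose intensity departs from the region mean by more than the tolerance.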
Freixenet et al. [50] proposed to combine the region and boundary information for colour image
segmentation. Sail-Allili and Ziou [51] proposed an approach where the colour–image information is
sampled by compound Gaussian Mixture Models. Another related technique was developed by Luis-Garcia et al. [52], where the local structure tensors and the image colour components were combined within an
energy minimisation framework to accomplish colour–texture segmentation. A graph-based implementation
has been proposed by Han et al. [53]. The colour and texture features are modelled by Gaussian Mixture
Models (GMMs) and integrated into a framework based on the Grab Cut algorithm. Kim and Hong [54] also
defined the colour image segmentation as the problem of finding the minimum cut in a weighted graph. The
experiments were conducted using images selected from the VisTex [24] and Berkeley databases [21]. Level
sets approaches have also been evaluated in the context of colour image analysis in the review presented by
Cremers et al. [55]. Examples of active contours capturing complex geometries and dealing with difficult initialisations
can be found in [56,57]. Brox et al. [58] propose to combine colour, texture and motion in a level-set image
segmentation framework. Zoller et al. [59] formulated the image segmentation as a data clustering process. A
related image segmentation approach was proposed by Ooi and Lim [60]. The clustering methods [61] have
been widely applied in the development of colour image segmentation algorithms due to their simplicity and low computational cost. A colour–texture segmentation framework (referred to as CTex) was proposed in [62,63], where colour
and texture are investigated on separate channels. The colour image segmentation involves filtering the input
data using a Gradient-Boosted Forward and Backward (GB-FAB) anisotropic diffusion algorithm [64] that is
applied to eliminate the influence of the image noise and improve the local colour coherence. Campbell and
Thomas [65] proposed an algorithm where the extracted colour and texture features are concatenated into a
feature vector and then clustered using a randomly initialised Self Organising Map (SOM) network. The
proposed segmentation technique was tested on natural images taken from the Bristol Image Database [66]
and the segmentation results were compared against manually annotated ground-truth data. Other related
approaches that integrate colour and texture features using clustering methods can be found in [67–73].
Liapis and Tziritas [74] proposed an algorithm for image retrieval where the colour and texture are
independently extracted. Liapis and Tziritas' first tests were conducted on Brodatz images [75], a database composed of 112 grayscale textures. In addition, the authors also obtained results using 55 colour natural
images from VisTex [24] and 210 colour images from Corel Photo Gallery. Martin et al. [76] adopted a
different approach for the boundary identification of objects present in natural images. The authors proposed
a supervised learning procedure to combine colour, texture and brightness features by training a classifier
using the manually generated ground-truth data taken from the Berkeley segmentation dataset [21]. The
Normalised Cuts segmentation algorithm [77] was used for comparison, and the numerical evaluation was carried out by computing the
mean GCE [34], precision, recall and F-measure values for all images in the Berkeley database [21]. Carson
et al. [78] proposed a colour–texture segmentation scheme (that is referred to as Blobworld) that was
designed to achieve the partition of the input image in perceptual coherent regions. The colour features are
extracted on an independent channel from the CIE Lab converted image that has been filtered with a
Gaussian operator. For automatic colour–texture image segmentation, the authors proposed to jointly model
the distribution of the colour, texture and position features using Gaussian Mixture Models (GMMs). The main advantage of the Blobworld algorithm consists in its ability to segment the image into compact regions.
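Blobworld's joint modelling of pixel features with GMMs can be sketched with an off-the-shelf mixture implementation. The following toy version (scikit-learn's GaussianMixture standing in for the authors' EM procedure, with the texture features omitted for brevity) models colour and (x, y) position jointly and labels each pixel with its most likely component:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def blobworld_like_segment(img, n_regions=2, seed=0):
    """Blobworld-flavoured sketch: jointly model the colour and (x, y)
    position of every pixel with a Gaussian Mixture Model, then label
    each pixel with its most likely mixture component."""
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # feature vector per pixel: colour channels plus spatial position
    feats = np.column_stack([img.reshape(-1, img.shape[2]).astype(float),
                             xs.ravel(), ys.ravel()])
    gmm = GaussianMixture(n_components=n_regions, covariance_type='full',
                          random_state=seed).fit(feats)
    return gmm.predict(feats).reshape(h, w)
```

Because position enters the joint density, the resulting components tend to be spatially compact, which is the property highlighted in the Blobworld paper.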
In the approach proposed by Manduchi [79], the colour and texture features are extracted on independent channels, where the mixture model for each class is estimated using an Expectation-Maximisation (EM)
algorithm. Jolly and Gupta [80] followed an approach where the colour and texture features were extracted
on separate channels. A similar approach was adopted by Khan et al. [81]. In their implementation, the input image
is converted to the CIE Lab colour space prior to the calculation of the colour and texture features. A related
approach was adopted by Fukuda et al. [82] with the purpose of segmenting an object of interest from the
background information. The RGB colour features are combined into a multi-dimensional feature vector with
the local texture features given by the wavelet coefficients. The colour–texture integration is modelled using
Gaussian Mixture Models (GMMs) and the segmentation process is carried out in a coarse-to-fine fashion
using an iterative process to find a minimum cost in a graph. Approaches based on Markov Random Field
(MRF) models were often employed for texture analysis [83–85]. Other approaches that employed MRF
models for colour–texture segmentation include the works detailed in [86–92]. The Local Consistency Error (LCE) and the Global Consistency Error (GCE) [34] are examples of area-based metrics that sample the level
of similarity between the segmented and ground-truth data by summing the local inconsistencies for each
pixel in the image. The paper [93] describes a multiresolution image segmentation algorithm based on
adaptive gradient thresholding and progressive region growing. The results of the proposed Multiresolution Adaptive and Progressive Gradient-based colour image SEGmentation (MAPGSEG) algorithm were evaluated on the Berkeley segmentation database of 300 images (of dimensions 321×481) by computing the Normalized Probabilistic Rand index (NPR) [93]. Table 1 represents quantitative evaluation of research work for
segmentation techniques.
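Several of the consistency metrics cited in this section can be computed directly from a pair of label maps. As an illustration, the Global Consistency Error [34] admits a compact contingency-table implementation; the following is a sketch following the definition of Martin et al.:

```python
import numpy as np

def gce(seg_a, seg_b):
    """Global Consistency Error of Martin et al. [34] (sketch).

    seg_a, seg_b: integer label maps of the same shape. Returns a value
    in [0, 1]; 0 means one segmentation is a refinement of the other.
    """
    a, b = seg_a.ravel(), seg_b.ravel()
    n = a.size
    # contingency table: table[i, j] = pixels with label i in A, j in B
    labels_a, a_idx = np.unique(a, return_inverse=True)
    labels_b, b_idx = np.unique(b, return_inverse=True)
    table = np.zeros((labels_a.size, labels_b.size))
    np.add.at(table, (a_idx, b_idx), 1)
    na = table.sum(axis=1, keepdims=True)   # region sizes in A
    nb = table.sum(axis=0, keepdims=True)   # region sizes in B
    e_ab = (table * (na - table) / na).sum() / n  # refinement error A->B
    e_ba = (table * (nb - table) / nb).sum() / n  # refinement error B->A
    return min(e_ab, e_ba)
```

Identical segmentations score 0, as does any pair in which one label map refines the other, which is why GCE is usually reported together with boundary metrics such as BDE.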
2.3 Video Segmentation:
Luciano et al. [94] performed experiments on real and synthetic sequences in April 2010. Their work described an approach for object-oriented video segmentation based on motion coherence, and their experiments with real and synthetic sequences suggest that the method could also be used in other image processing and computer vision tasks besides video coding, such as video information retrieval and video understanding.
2.4 MEDICAL IMAGE BASED RESEARCH:
In the year 1990, Lawrence M. Lifshitz et al. [95] presented a paper on a multiresolution hierarchical approach to image segmentation based on intensity extrema for abdominal CT images. They showed that stack-based image segmentation correctly isolates anatomical structures in abdominal CT images. The aim of their research has been to create a computer algorithm to segment gray scale images
into regions of interest (objects). These regions can provide the basis for scene analysis (including shape
parameter calculation) or surface-based shaded graphics display. The algorithm creates a tree structure for image description by defining a linking relationship between pixels in successively blurred versions of the
initial image. In the year 2000, Dzung Pham, Chenyang Xu and Jerry Prince [93] presented a critical appraisal of the current status
of semi-automated and automated methods for the segmentation of anatomical medical images and the
methods in medical image segmentation. Tonsillitis is a cause of cardiac valvular disease and pneumonia, as the bacteria causing tonsillitis are responsible for future mitral valve stenosis and pneumonia. These sequelae can be prevented by early diagnosis of the disease and identification of the causative pathogens. To improve data transfer rates, Pranithan Phensadsaeng et al. [96] proposed a VLSI architecture using a color model for
early-state tonsillitis detection. Anusha Achuthan et al. [97] presented a new segmentation method that
integrates a wavelet based feature, which is able to enhance the dissimilarity between regions with low
variations in intensity. This feature is integrated to formulate a new level set based active contour model that
addresses the segmentation of regions with highly similar intensities in medical images, which do not have clear boundaries between them. A major difficulty that is specific to the segmentation of MR images is the
'intensity inhomogeneity' artefact, which causes a shading effect to appear over the image. Pham et al. [93] divided
segmentation methods into eight categories: (a) thresholding approaches, (b) region growing approaches, (c)
classifiers, (d) clustering approaches, (e) Markov random field (MRF) models, (f) artificial neural networks,
(g) deformable models, and (h) atlas-guided approaches. Soumik Ukil et al. [98] developed in Feb. 2009, an
automatic method for the segmentation and analysis of the fissures, based on the information provided by the
segmentation and analysis of the airway and vascular trees. This information is used to provide a close initial
approximation to the fissures, using a watershed transform on a distance map of the vasculature. In a further
refinement step, this estimate is used to construct a region of interest (ROI) encompassing the fissures. The
human lungs are divided into five distinct anatomical compartments called the lobes, which are separated by
the pulmonary fissures. The accurate identification of the fissures is of increasing importance in the early
detection of pathologies, and in the regional functional analysis of the lungs. Soumik Ukil and Joseph M. Reinhardt developed a method to detect incomplete fissures, using a fast-marching based segmentation of a
projection of the optimal surface. A new interactive image processing method is proposed by Zhanli Hu et al.
[99]. Using that method, image smoothing, sharpening, histogram processing, pseudo-colour processing,
segmentation, reading, local amplification and measurement for medical images in DICOM format can be
realized. The ROI is enhanced using a ridgeness measure, which is followed by a 3-D graph search to find
the optimal surface within the ROI. The paper (in the year 2010) presented by Yao-Tien Chen [100] proposes
an alternative criterion derived from the Bayesian risk classification error for image segmentation and also
proposed a model based on a level set method and the Bayesian risk for medical image segmentation. At first,
the image segmentation is formulated as a classification of pixels. Then the Bayesian risk is formed by the
losses of pixel classification. The proposed model introduces a region-based force determined through the
difference of the posterior image densities for the different classes, a term based on the prior probability
derived from the Kullback–Leibler information number, and a regularity term adopted to avoid the generation of excessively irregular and small segmented regions. Arnau Oliver et al. [101] presented a paper in 2010. They
presented and reviewed different approaches to the automatic and semi-automatic detection and segmentation
of masses in mammographic images. Specific emphasis has been placed on the different strategies. Region-
based segmentation relies on the principle of homogeneity, which means that there has to be at least one
feature that remains uniform for all pixels within a region. Region-based methods can be split in two basic
strategies: the well-known region growing and split and merge approaches. A classification of both detection
and segmentation techniques has been proposed, describing several algorithms and pointing out their specific
features. Dae Sik Jeong et al. [102] presented a paper in the year 2010 proposing a new iris segmentation method that can be used to accurately extract iris regions from non-ideal quality iris images. Chuan-Yu Chang et al. [103]
proposed a method in June 2010 that includes image enhancement processing to remove speckle noise, which greatly affects the segmentation results of the thyroid gland region obtained from US images. They proposed
a complete solution to estimate the volume of the thyroid gland directly from US images. The radial basis
function neural network is used to classify blocks of the thyroid gland. The integral region is acquired by
applying a specific-region-growing method to potential points of interest. Mustafa et al. [104] presented a
new active contour-based, statistical method for simultaneous volumetric segmentation of multiple
subcortical structures in the brain. Early papers include the work of Funakubo [105] where a region-based
colour–texture segmentation method was introduced for the purpose of biomedical tissue segmentation and
Harms et al. [106] where the colour information has been employed in the computation of texton values for
blood cell analysis. Celenk and Smith [107] proposed an algorithm that integrates the spatial and spectral
information for the segmentation of colour images detailing natural scenes, while Garbay discussed in [108]
a number of methods that were developed for the segmentation of colour bone marrow cell images. Katz et
al. [109] introduced an algorithm for the automatic retina diagnosis where combinations of features including colour and edge patterns were employed to identify the vascular structures in the input image. Table 2
represents quantitative evaluation for medical image based research work.
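The watershed-on-a-distance-map idea used by Ukil et al. [98] to initialise the fissures can be illustrated on synthetic data. The sketch below uses SciPy's IFT-based watershed on two touching discs rather than real CT volumes; the distance transform of the foreground is inverted so that the structure centres become the lowest points, and flooding from seed markers separates the two structures:

```python
import numpy as np
from scipy import ndimage as ndi

# synthetic binary mask: two discs that touch and partially overlap
yy, xx = np.mgrid[0:60, 0:90]
mask = ((yy - 30) ** 2 + (xx - 30) ** 2 < 20 ** 2) | \
       ((yy - 30) ** 2 + (xx - 60) ** 2 < 20 ** 2)

# distance map of the foreground; its maxima sit at the disc centres
dist = ndi.distance_transform_edt(mask)

# elevation for flooding: invert the distance so centres are lowest
elev = np.uint8(255 * (1 - dist / dist.max()))

# markers: one seed per disc centre plus a background seed
markers = np.zeros(mask.shape, dtype=np.int16)
markers[30, 30] = 1
markers[30, 60] = 2
markers[0, 0] = 3            # background marker
labels = ndi.watershed_ift(elev, markers)
```

The flood from each seed claims one basin of the inverted distance map, splitting the touching objects near their waist, which is the same mechanism that provides the initial fissure approximation from the vasculature distance map in [98].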
III. OPEN SOURCE SOFTWARE PACKAGES FOR IMAGE SEGMENTATION
Several open source software packages are available for performing image segmentation [110]; they are listed below:
1. ITK - Insight Segmentation and Registration Toolkit (Open Source).
2. ITK-SNAP is a GUI tool that combines manual and semi-automatic segmentation with level sets.
3. GIMP which includes among other tools SIOX (Simple Interactive Object Extraction).
4. VXL is a computer vision library.
5. ImageMagick segments using the Fuzzy C-Means algorithm.
6. 3DSlicer includes automatic image segmentation.
7. MITK has a program module for manual segmentation of gray-scale images.
8. OpenCV is a computer vision library originally developed by Intel.
9. GRASS GIS has the program module i.smap for image segmentation.
10. Fiji ("Fiji Is Just ImageJ"), an image processing package which includes different segmentation plug-ins.
11. AForge.NET - an open source C# framework.
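Several of the tools above implement classical clustering segmenters; ImageMagick, for instance, segments with fuzzy c-means (FCM). As a minimal, hedged sketch of the standard FCM iteration in NumPy (illustrative only, not the code of any package listed), the algorithm alternates a centroid update and a membership update:

```python
import numpy as np

def fuzzy_c_means(pixels, c=2, m=2.0, iters=50, seed=0):
    """Standard fuzzy c-means on an (N, d) array of pixel features.
    Returns (centroids, memberships) where memberships is (c, N)."""
    rng = np.random.default_rng(seed)
    n = pixels.shape[0]
    u = rng.random((c, n))
    u /= u.sum(axis=0)                      # memberships sum to 1 per pixel
    for _ in range(iters):
        um = u ** m
        # centroids: fuzzy-weighted means of the pixels
        centroids = um @ pixels / um.sum(axis=1, keepdims=True)
        # distances from every centroid to every pixel
        d = np.linalg.norm(pixels[None, :, :] - centroids[:, None, :], axis=2)
        d = np.fmax(d, 1e-12)               # avoid division by zero
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=0)           # standard FCM membership update
    return centroids, u
```

A hard segmentation is obtained by taking the argmax of the membership matrix over the cluster axis; the fuzziness exponent m controls how soft the partition is.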
There are also software packages available free of charge for academic purposes:
1. GemIdent
2. CVIPtools
3. MegaWave
Open source software for medical image analysis [110]:
Several open source software packages are also available for performing analysis of medical images; they are listed below:
Review of Graph, Medical and Color Image base Segmentation Techniques
www.iosrjournals.org 7 | Page
1. ImageJ
2. 3D Slicer
3. ITK
4. OsiriX
5. GemIdent
6. MicroDicom
7. FreeSurfer
8. ClearCanvas
9. Seg3D
10. NumPy + SciPy + MayaVi/Visvis
11. InVesalius
3.1 DATABASES & THEIR WEBSITES
Several databases have been proposed by the computer vision community that consist of a large
variety of colour images that can be employed in the quantification of colour image segmentation
algorithms. A standard database is required when applying image segmentation techniques to input images; such databases are also useful for evaluating methods and for validating results by comparison against published benchmarks. Several publicly available databases suitable for colour image segmentation are listed below with their respective websites and the research papers that reference them.
Name of Database: Database Web address
Berkeley Segmentation Dataset and Benchmark (2001) [21]: http://www.eecs.berkeley.edu/Research/Projects/CS/vision/bsds/
McGill calibrated colour image database (2004) [111]: http://tabby.vision.mcgill.ca
Outex database (2002) [112]: http://www.outex.oulu.fi/
VisTex database (1995) [24]: http://vismod.media.mit.edu/vismod/imagery/VisionTexture/vistex.html
Caltech-256 (2007) [113]: http://www.vision.caltech.edu/Image_Datasets/Caltech256/
The Prague Texture Segmentation Data Generator and Benchmark (2008) [114]: http://mosaic.utia.cas.cz/
Pascal VOC (updated 2009) [115]: http://pascallin.ecs.soton.ac.uk/challenges/VOC/voc2009/#devkit
CUReT (1999) [116]: http://www.cs.columbia.edu/CAVE/software/curet/
SIMPLIcity (2001) [117]: http://wang.ist.psu.edu/docs/related/
Minerva (2001) [118]: http://www.paaonline.net/benchmarks/minerva/
The Texture Library Database (updated 2009): http://textures.forrest.cz
BarkTex (1998) [119]: ftp://ftphost.uni-koblenz.de/outgoing/vision/Lakmann/BarkTex
Corel: commercially available
TABLE 1: QUANTITATIVE EVALUATION OF RESEARCH WORK FOR SEGMENTATION TECHNIQUES
(Sr. No. / Segmentation algorithm or paper title / Techniques, algorithms, methods / Quantitative results)
1. Segmentation of colour textures, Mirmehdi and Petrou [26]
Edge flow; error percentage (%) of incorrectly classified pixels (average error)
Test Image   MM97    Proposed method
T1           2.8%    0.002%
T2           2.6%    0.06%
T27          11.4%   1.6%
2. Colour texture segmentation based on the modal energy of deformable surfaces, MIS (Krinidis and Pitas [32])
Proposed technique MIS compared with MS, NC
               PR       BDE      VI       GCE
Ground truth   0.8754   4.9940   1.1040   0.0797
MIS γ=0        0.7869   8.0930   2.0117   0.2110
MIS γ=50       0.7981   7.8263   1.9348   0.1942
MIS γ=100      0.7912   7.9213   1.9801   0.2022
MS             0.7550   9.7001   2.4770   0.2598
NC             0.7229   9.6038   2.9329   0.2182
NNG            0.7841   9.9497   2.6647   0.1895
3. Automatic image segmentation by dynamic region growth and multiresolution merging (GSEG) [45]
Proposed technique GSEG compared with JSEG, GRF
                  GRF     JSEG    GSEG
Avg. Time (sec)   240     16      24
Avg. NPR          0.357   0.439   0.495
Std. Dev. NPR     0.345   0.318   0.306
Environment       C       C       MATLAB
4. Colour–texture segmentation using unsupervised graph cuts, UGC [54]
Proposed technique UGC compared with JSEG (precision, recall, F-measure (avg.))
            Human   JSEG   UGC (colour)   UGC (colour-texton)
Precision   0.77    0.42   0.57           0.64
Recall      0.75    0.46   0.49           0.51
F-measure   0.76    0.44   0.53           0.57
5. CTex—an adaptive unsupervised segmentation algorithm based on colour–texture coherence [62]
Proposed technique CTex compared with JSEG
       PR Index (mean)   PR Index (std. dev.)
JSEG   0.77              0.12
CTex   0.80              0.10
6. Segmentation of natural images using self-organising feature maps [65]
Area overlap
Features         Dimensionality   Avg. Seg. Accuracy
Texture only     16               36.4%
Colour only      3                55.7%
Colour+Texture   19               62.2%
7. B-JSEG (Wang et al. [120])
Proposed technique B-JSEG compared with JSEG
                    JSEG   HSEG   B-JSEG
Average error (%)   33.1   29.3   24.1
8. Unsupervised segmentation of natural images via lossy data compression, CTM [123]
Proposed technique CTM compared with MS, NC, FH
            PRI      VoI      GCE      BDE
Humans      0.8754   1.1040   0.0797   4.9940
CTM γ=0.1   0.7561   2.4640   0.1767   9.4211
CTM γ=0.15  0.7627   2.2035   0.1846   9.4902
CTM γ=0.2   0.7617   2.0236   0.1877   9.8962
MS          0.7550   2.4770   0.2598   9.7001
NC          0.7229   2.9329   0.2182   9.6038
FH          0.7841   2.6647   0.1895   9.9497
9. Unsupervised multiscale colour image segmentation based on MDL principle [126]
F-measure as a function of hs (rows) and hr (columns)
         hr=4.0   4.5      5.0      5.5      6.0
hs=3.0   0.6587   0.6644   0.6720   0.6713   0.6755
hs=3.5   0.6585   0.6631   0.6698   0.6753   0.6802
hs=4.0   0.6625   0.6684   0.6700   0.6714   0.6801
hs=4.5   0.6624   0.6684   0.6699   0.6734   0.6756
hs=5.0   0.6587   0.6680   0.6706   0.6696   0.6742
10. Colour Image Segmentation Using Soft Computing Techniques [127]
Comparison of HCM, FCM, PFCM and CNN techniques
       PSNR    Compression Ratio   Execution time
HCM    28.24   11.15               253.14
FCM    30.57   16.03               1617.1
PFCM   29.53   12.00               2.281
CNN    39.39   17.95               2.125
11. Morphological Description of Color Images for Content-Based Image Retrieval [129]
Comparison of AC, CSD, CDE, CSG, MHW, MHL to the standard color histogram
Histogram   AC    CSD   CDE   CSG   MHW   MHL
1           127   89    9     548   470   443
12. Morphological segmentation on learned boundaries (WF and WS) [132]
Proposed techniques WF and WS compared with NC
Method             GCE    Precision   Recall   F-measure
WF level 2         0.19   0.64        0.37     0.44
NC (6 reg.)        0.29   0.52        0.38     0.42
WS Vol (18 reg.)   0.22   0.54        0.60     0.55
NC (18 reg.)       0.23   0.44        0.58     0.48
13. Textured image segmentation based on modulation models [137]
Proposed algorithm compared with DCA+K-means, DCA+WCE
Algorithm     Avg. BCE   Median BCE
Proposed      0.4876     0.5124
DCA+K-Means   0.5454     0.5478
DCA+WCE       0.5044     0.5143
14. An adaptive and progressive approach for efficient gradient-based multiresolution color image segmentation [141]
Proposed technique MAPGSEG compared with GRF, JSEG and EG
                     GRF     JSEG    EG      GSEG    MAPGSEG
Avg. Time (sec)      240     16.2    7.0     24.1    11.1
Avg. NPR             0.357   0.439   0.497   0.495   0.495
NPR>0.7 (# images)   38      65      68      91      85
Environment          C       C       C       MATLAB
15. Image Segmentation with a Unified Graphical Model [143]
BN and CRF segmentation model consistency
Algorithm      Overall accuracy
TextonBoost    72.2%
Yang et al.    75.1%
Auto-Context   74.5%
Our approach   75.4%
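Several entries in Table 1 report precision, recall and the F-measure. The F-measure is the (weighted) harmonic mean of precision and recall, which can be checked against the table: the Human column of entry 4 (precision 0.77, recall 0.75) gives F ≈ 0.76. A small sketch:

```python
def f_measure(precision, recall, beta=1.0):
    """F-measure: weighted harmonic mean of precision and recall.
    beta=1 gives the standard F1 reported in the tables above."""
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

print(round(f_measure(0.77, 0.75), 2))  # entry 4, "Human" column -> 0.76
```

Setting beta above or below 1 weights recall or precision more heavily, a convention used in some boundary-detection benchmarks.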
TABLE 2: QUANTITATIVE EVALUATION FOR MEDICAL IMAGE BASED RESEARCH WORK
(Sr. No. / Segmentation algorithm or paper title / Techniques, algorithms, methods / Quantitative results)
1. Anatomy-Guided Lung Lobe Segmentation in X-Ray CT Images [98]
An automatic method for the segmentation of the lobar fissures on chest CT scans. The method uses the interactive watershed transform, calculated on a vessel distance map, to obtain an initial segmentation.
Subject   Resolution factor   Graph size (X, Y, Z)   Time (s)   RMS error (mm)
1         1.0                 215  93  406           40.06      1.55
2         1.0                 222  94  400           27.08      1.88
3         1.0                 228  104 442           34.55      1.37
4         1.0                 230  123 391           47.47      1.49
5         1.0                 229  98  417           32.51      1.70
2. A new iris segmentation method that can be used to accurately extract iris regions from non-ideal quality iris images [102]
Proposed technique: a new iris segmentation method
E1 error rate   E2 error rate   FPR   FNR
2.8             14.4            1.2   27.6
3. Thyroid Segmentation and Volume Estimation in Ultrasound Images [103]
Proposed method compared with AWMF+ACM and AWMF+Watershed models
Segmentation Measurement Indices [%]
Method            Accuracy   Sensitivity   Specificity   PPV     NPV
Proposed method   96.54      91.98         97.69         91.17   97.91
AWMF+ACM          94.56      85.66         96.90         88.42   92.27
AWMF+Watershed    88.27      78.80         90.78         70.19   94.14
4. Coupled Non-Parametric Shape and Moment-Based Inter-Shape Pose Priors for Multiple Basal Ganglia Structure Segmentation [104]
Quantitative accuracy results for the 3D experiments
                                  TPR      FPR      1-DC
C&V                               0.7301   0.0075   0.2680
Coupled Shape                     0.7299   0.0050   0.2351
Coupled Shape and Relative Pose   0.7291   0.0049   0.2335
5. Image structure representation and processing: a discussion of some segmentation methods in cytology [108]
Segmentation performances compared, as evaluated against human observer diagnosis
Process              Correct seg.   Incorrect seg.
Relaxation           70             30
Region Growing       93             7
Frontier detection   89             11
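The accuracy, sensitivity, specificity, PPV and NPV figures reported in Table 2 all derive from the confusion matrix of a binary segmentation against ground truth. A brief sketch of the standard definitions (illustrative only, with made-up counts in the usage note):

```python
def confusion_metrics(tp, fp, tn, fn):
    """Standard binary-segmentation indices from confusion-matrix counts
    (true/false positives and negatives), as percentages to match Table 2."""
    return {
        "accuracy":    100.0 * (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": 100.0 * tp / (tp + fn),   # true positive rate (recall)
        "specificity": 100.0 * tn / (tn + fp),   # true negative rate
        "PPV":         100.0 * tp / (tp + fp),   # positive predictive value
        "NPV":         100.0 * tn / (tn + fn),   # negative predictive value
    }
```

For example, hypothetical counts tp=90, fp=20, tn=80, fn=10 yield 85% accuracy, 90% sensitivity and 80% specificity.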
IV. CONCLUSION
Color has been widely used for image and video analysis. Since the introduction of color distributions as
descriptors of image content, various research projects have addressed the problems of color spaces,
illumination invariance, color quantization, and color similarity functions. Many different methods have been
developed to enhance the limited descriptive capacity of color distributions. We have presented here the state
of the art of color-based methods that can be used to segment color images. The most surprising element that
emerges from our study of color indexing and segmentation is that most of the methods analysed do not
explore the problem of how to deal with color in a device independent way. Very seldom are details given, or
references made to image acquisition and management in terms of standard color coordinates, although it is
reasonable to assume that the image database contains images acquired from many sources, and subjected to
a number of processing steps before indexing and display. Quantization and segmentation reduce acquisition
noise, and may to some extent cope with changes in imaging conditions, but a more rigorous approach to
color imaging is surely desirable. Several of the algorithms proposed have been designed to implement
machine color constancy, but their application in real world conditions is still under investigation. The very definition of machine color constancy is still a matter for future research, rather than an effective tool that can
be employed in current content-based image segmentation engines. Most color based algorithms make it
possible to perform searches for the presence of specific colours, but not, with very few exceptions, for their
absence. Consequently, it would be desirable to modify segmentation methods so that users can specify
which colours should be excluded from the image segmentation. The definition of feature similarity also
plays a fundamental role in image segmentation, as images with "similar" feature distributions are often
considered similar in appearance without requiring any semantic expression of this.
The major objective of this paper was to analyse the main directions of research in the field of
colour image segmentation and to categorise the main approaches with respect to the colour image
segmentation process. After evaluating a large number of papers, we identified three major trends in
the development of colour image segmentation, namely algorithms based on feature integration, approaches that integrate the colour and texture attributes in succession and finally methods that extract the colour
image features on independent channels and combine them using various integration schemes. As we
discussed, the methods that fall in the latter categories proved to be more promising when viewed from
algorithmic and practical perspectives. However, since the level of algorithmic sophistication and the
application domain of the newly proposed algorithms are constantly increasing it is very difficult to predict
which approach will dominate the field of colour image analysis in the medium to long term but we believe
that the next generation of algorithms will attempt to bridge the gaps between approaches based on sequential
feature integration and those that extract the colour image features on independent channels. Currently, the
main research area in the field of colour image segmentation is focused on methods that integrate the features
using statistic/probabilistic schemes and methods based on energy minimisation. However in line with the
development of new algorithms an important emphasis should be placed on methodologies that are applied to
evaluate the performance of the image segmentation algorithms. We feel that this issue has not received the attention it deserves, and as a result the lack of metrics widely accepted by the computer vision
community made the task of evaluating the appropriateness of the developed algorithms extremely difficult.
Although substantial work needs to be done in the area of performance evaluation, it is useful to mention that
most of the algorithms that have been recently published had been evaluated on standard databases and using
well-established metrics. Also, it is fair to mention that the publicly available datasets are not sufficiently
generic to allow a comprehensive evaluation, but with the emergence of benchmark suites such as Berkeley
database this issue starts to finally find an answer. We believe that this review has thoroughly sampled the
field of colour image segmentation using a systematic evaluation of a large number of representative
approaches with respect to feature integration and has also presented a useful overview about past and
contemporary directions of research. To further broaden the scope of this review, we have also provided a
detailed discussion about the evaluation metrics, we examined the most important data collections that are currently available to test the image segmentation algorithms and we analysed the performance attained by
the state of the art implementations. Finally, we cannot conclude this paper without mentioning the
tremendous development of this field of research during the past decade and due to the vast spectrum of
applications, we predict that colour image segmentation analysis will remain one of the fundamental research
topics in the foreseeable future.
Most of the methods described here have been tested on different databases, of very different sizes,
ranging from two to more than a thousand images, for different segmentation tasks. This makes it extremely
difficult, if not impossible, to provide an absolute ranking of the effectiveness of the algorithms. We can
make the general observation that color alone cannot suffice to index large, heterogeneous image databases.
The combination of colour with other visual features is a necessary approach that merits further study. Much
more dubious, instead, are methods that assume that affordable, unsupervised image segmentation is always
possible, and that the evaluation of image similarity can be dealt with as a graph matching problem, with a reasonable computational burden. The design of a segmentation system must address issues of efficiency in
addition to those of effectiveness. Promising directions for future research are, in our opinion, the exploitation of colour image similarity for image database navigation and visualization, and the segmentation of colour images based on psychological effects. We would like to see new
generation systems that support querying by emotions and open-ended searches in the image database where
similar images are located next to each other. We would also like to see new generation systems by
combination of various segmentation techniques (hybrid techniques) along with solving content based query
on real time colour images as well as solving online applications. There is still no universal theory of colour image segmentation. All of the existing colour image segmentation approaches are by nature ad
hoc. They are strongly application dependent, in other words, there are no general algorithms and colour
space that are good for all colour images. An image segmentation problem is basically one of psychophysical
perception, and it is essential to supplement any mathematical solution with a priori knowledge about the picture. Most gray level image segmentation techniques can be extended to colour images, such as histogram thresholding, clustering, region growing, edge detection and fuzzy based approaches. They can
be directly applied to each component of a colour space, and then the results can be combined in some way to
obtain the final segmentation result. However, one of the problems is how to employ the colour information
as a whole for each pixel. When colour is projected onto three components the colour information is so
scattered that the colour image becomes simply a multispectral image and the colour information that humans can perceive is lost. Another problem is how to choose the colour representation for segmentation, since each
colour representation has its advantages and disadvantages. In most of the existing colour image
segmentation approaches, the definition of a region is based on similar colour. This assumption often makes
it difficult for many algorithms to separate objects with highlights, shadows, shading or texture, which cause inhomogeneous colours on the object's surface. Using HSI can solve this problem to some extent, except that hue is unstable at low saturation. Some physics-based models have been proposed to find object boundaries based on the type of materials, but these models have too many restrictions to be extensively employed. Fuzzy set theory has attracted more and more attention in the area of image
processing. Fuzzy set theory provides us with a suitable tool which can represent the uncertainties arising in
image segmentation and can model the cognitive activity of the human beings. Fuzzy operators, properties,
mathematics and inference (IF-THEN) rules have found more and more applications in image
segmentation. Despite the computational cost, fuzzy approaches perform as well as or better than their crisp
counterparts. The more important advantage of a fuzzy methodology lies in that the fuzzy membership
function provides a natural means to model the uncertainty in an image. Subsequently, fuzzy segmentation
results can be utilized in feature extraction and object recognition phases of image processing and computer
vision. The fuzzy approach provides a promising means for colour image segmentation.
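The hue instability at low saturation mentioned above is visible directly in the standard RGB-to-HSI conversion: the denominator of the hue term vanishes as a colour approaches the grey axis. A sketch of the usual formulas (r, g, b assumed in [0, 1]):

```python
import math

def rgb_to_hsi(r, g, b):
    """Standard RGB -> HSI conversion (r, g, b in [0, 1]).
    Returns (H in degrees, S, I). Note that the hue denominator vanishes
    as the colour approaches grey, which is why hue is unstable at low
    saturation."""
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:                        # grey axis: hue is undefined
        h = 0.0
    else:
        h = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
        if b > g:
            h = 360.0 - h
    return h, s, i
```

For a mid-grey pixel (0.5, 0.5, 0.5) the denominator is zero and hue is arbitrary, whereas a tiny perturbation of any channel snaps the hue to a definite angle, which is exactly the instability that complicates HSI-based segmentation.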
In summary, we have covered state-of-the-art color image segmentation techniques, compared segmentation techniques, briefly described the assessment parameters, and reviewed the research work in tabular form for quick reference. We have also listed open source software for implementing image segmentation, as well as standard databases for validating algorithms, keeping in mind the larger interest of the research community.
References
[1] C. T. Zahn, "Graph-theoretical methods for detecting and describing gestalt clusters," IEEE Transactions on Computers, Vol. 20, No. 1, pp. 68-86, 1971.
[2] Heng-Da Cheng and Ying Sun, "A Hierarchical Approach to Colour Image Segmentation Using Homogeneity," IEEE Transactions on Image Processing, vol. 9, no. 12, December 2000.
[3] N.R. Pal and S.K. Pal, "A review on image segmentation techniques," Pattern Recognition 26 (1993), pp. 1277–1294.
[4] N.R. Pal and D. Bhandari, "Image thresholding: some new techniques," Signal Processing 33 (1993), pp. 139–158.
[5] Y. J. Zhang, "A survey on evaluation methods for image segmentation," Pattern Recognition, Volume 29, Issue 8, August 1996, Pages 1335-1346.
[6] Y. Zhang, "Influence of image segmentation over feature measurement," Pattern Recognition Letters 16 (1995), pp. 201–206.
[7] M. Tabb and N. Ahuja, "Unsupervised multiscale image segmentation by integrated edge and region detection," IEEE Transactions on Image Processing, Vol. 6, No. 5, pp. 642-655, 1997.
[8] Jianbo Shi and Jitendra Malik, "Normalized Cuts and Image Segmentation", IEEE Transactions on pattern analysis and
machine intelligence, pp 888-905, Vol. 22, No. 8, 2000.
[9] Bir Bhanu and Jing Peng, "Adaptive Integrated Image Segmentation and Object Recognition," IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 30, no. 4, November 2000.
[10] Leo Grady and Eric L. Schwartz, "Isoperimetric Graph Partitioning for Image Segmentation," IEEE Transactions on Pattern
Analysis and Machine Intelligence, pp. 469-475, Vol. 28, No. 3, 2006.
[11] Lei Zhang and Qiang Ji, "Image Segmentation with a Unified Graphical Model," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 32, No. 8, August 2010.
[12] G. Healey, Segmenting images using normalized colour, IEEE Transactions on Systems, Man and Cybernetics 22 (1)
(1992) 64–73.
[13] A.P. Dhawan, A. Sicsu, Segmentation of images of skin lesions using colour and texture information of surface pigmentation,
Computerized Medical Imaging and Graphics 16 (3) (1992) 163–177.
[14] S. Ishibashi, F. Kishino, Colour–texture analysis and synthesis for model based human image coding, Proceedings of the
SPIE—The International Society for Optical Engineering 1605 (1) (1991) 242–252.
[15] A. Shigenaga, Image segmentation using colour and spatial-frequency representations, in: Proceedings of the Second
International Conference on Automation, Robotics and Computer Vision (ICARCV ‘92), vol. 1, 1992, pp. CV-1.3/1-5.
[16] A. Rosenfeld, C.Y. Wang, A.Y. Wu, Multispectral texture, IEEE Transactions on Systems, Man and Cybernetics SMC-12 (1)
(1982) 79–84.
[17] M. Hild, Y. Shirai, M. Asada, Initial segmentation for knowledge indexing, in: Proceedings of the 11th IAPR International
Conference on Pattern Recognition, vol. 1, 1992, pp. 587–590.
[18] G. Paschos, K.P. Valavanis, Chromatic measures for colour texture description and analysis, in: Proceedings of the IEEE
International Symposium on Intelligent Control, 1995, pp. 319–325.
[19] L. Shafarenko, M. Petrou, J. Kittler, Automatic watershed segmentation of randomly textured colour images, IEEE
Transactions on Image Processing 6 (11) (1997) 1530–1544.
[20] M.A. Hoang, J.M. Geusebroek, A.W. Smeulders, Colour texture measurement and segmentation, Signal Processing 85 (2)
(2005) 265–275.
[21] D. Martin, C. Fowlkes, D. Tal, J. Malik, A database of human segmented natural images and its application to evaluating
segmentation algorithms and measuring ecological statistics, in: Proceedings of the IEEE International Conference on
Computer Vision (ICCV ‘01), 2001, pp. 416–425.
[22] H. Wang, X.H. Wang, Y. Zhou, J. Yang, Colour texture segmentation using quaternion-Gabor filters, IEEE International
Conference on Image Processing, 2006, pp. 745–748.
[23] A. Jain, G. Healey, A multiscale representation including opponent colour features for texture recognition, IEEE Transactions
on Image Processing 7 (1) (1998) 124–128.
[24] VisTex database (1995) http://vismod.media.mit.edu/vismod/imagery/VisionTexture/vistex.html
[25] Y. Deng, B.S. Manjunath, Unsupervised segmentation of colour–texture regions in images and video, IEEE Transactions on
Pattern Analysis and Machine Intelligence 23 (8) (2001) 800–810. JSEG source code is available online at: http://vision.ece.ucsb.edu/segmentation/jseg.
[26] M. Mirmehdi, M. Petrou, Segmentation of colour textures, IEEE Transactions on Pattern Analysis and Machine Intelligence
22(2)(2000) 142–159.
[27] C.L. Huang, T.Y. Cheng, C.C. Chen, Colour images' segmentation using scale space filter and Markov random field, Pattern
Recognition 25 (10) (1992) 1217–1229.
[28] Y. Wang, J. Yang, N. Peng, Unsupervised colour–texture segmentation based on soft criterion with adaptive mean-shift
clustering, Pattern Recognition Letters 27 (5) (2006) 386–392.
[29] T. Gevers, Image segmentation and similarity of colour–texture objects, IEEE Transactions on Multimedia 4 (4) (2002) 509–
516.
[30] Y. Zheng, J. Yang, Y. Zhou, Y. Wang, Colour–texture based unsupervised segmentation using JSEG with fuzzy
connectedness, Journal of Systems Engineering and Electronics 17 (1) (2006) 213–219.
[31] S.Y. Yu, Y. Zhang, Y.G. Wang, J. Yang, Unsupervised colour–texture image segmentation, Journal of Shanghai Jiaotong
University (Science) 13E (1) (2008) 71–75.
[32] M. Krinidis, I. Pitas, Colour texture segmentation based on the modal energy of deformable surfaces, IEEE Transactions on
Image Processing 18 (7) (2009) 1613–1622.
[33] R. Unnikrishnan, M. Hebert, Measures of similarity, in: Proceedings of the Seventh IEEE Workshop on Computer Vision
Applications, 2005, pp. 394–400.
[34] D. Martin, an Empirical Approach to Grouping and Segmentation, Ph.D. Dissertation, U.C. Berkeley, 2002.
[35] J. Freixenet, X. Munoz, D. Raba, J. Marti, X. Cufi, Yet another survey on image segmentation: region and boundary
information integration, in: Proceedings of the Seventh European Conference on Computer Vision, 2002, pp. 408–422.
[36] M. Meila, Comparing clusterings by the variation of information, learning theory and kernel machines, Lecture Notes in
Computer Science, vol. 2777, Springer, Berlin, Heidelberg, 2003, pp. 173–187.
[37] W.Y. Ma, B.S. Manjunath, Edge flow: a framework of boundary detection and image segmentation, IEEE Computer Society
Conference on Computer Vision and Pattern Recognition (1997) 744–749.
[38] R. Hedjam, M. Mignotte, A hierarchical graph-based Markovian clustering approach for the unsupervised segmentation of
textured colour images, in: Proceedings of the International Conference on Image Processing (ICIP 09),2009, pp. 1365–1368.
[39] T. Gevers, Image segmentation and similarity of colour–texture objects, IEEE Transactions on Multimedia 4 (4) (2002) 509–
516.
[40] T. Ojala, M. Pietikäinen, Unsupervised texture segmentation using feature distributions, Pattern Recognition 32 (3) (1999) 477–486.
[41] M. Pietikäinen, T. Mäenpää, J. Viertola, Colour texture classification with colour histograms and local binary patterns, in: Proceedings of the Second International Workshop on Texture Analysis and Synthesis, Copenhagen, Denmark, 2006, pp. 109–112.
[42] K.M. Chen, S.Y. Chen, Colour texture segmentation using feature distributions, Pattern Recognition Letters 23(7) (2002)
755–771.
[43] P. Nammalwar, O. Ghita, P.F. Whelan, Integration of feature distributions for colour texture segmentation, in: Proceedings of
the 17th International Conference on Pattern Recognition (ICPR 2004), vol. 1, 2004, pp. 716–719.
[44] P. Nammalwar, O. Ghita, P.F. Whelan, A generic framework for colour texture segmentation, Sensor Review 30 (1) (2010)
69–79.
[45] L. Garcia Ugarriza, E. Saber, S.R. Vantaram, V. Amuso, M. Shaw, R. Bhaskar, Automatic image segmentation by dynamic
region growth and multiresolution merging, IEEE Transactions on Image Processing 18 (10) (2009) 2275–2288.
[46] J. Chen, T.N. Pappas, A. Mojsilovic, B.E. Rogowitz, Adaptive perceptual colour–texture image segmentation, IEEE
Transactions on Image Processing 14 (10) (2005) 1524–1536.
[47] G. Paschos, F.P. Valavanis, A colour texture based visual monitoring system for automated surveillance, IEEE Transactions
on Systems, Man and Cybernetics, Part C (Applications and Reviews) 29 (2) (1999) 298–307.
[48] I. Fondon, C. Serrano, B. Acha, Colour–texture image segmentation based on multistep region growing, Optical Engineering
45 (5) (2006).
[49] I. Grinias, N. Komodakis, G. Tziritas, Bayesian region growing and MRFbased minimisation for texture and colour
segmentation, in: Proceedings of the Eighth International Workshop on Image Analysis for Multimedia Interactive Services
(WIAMIS), 2008.
[50] J. Freixenet, X. Muñoz, J. Martí, X. Lladó, Colour texture segmentation by region-boundary cooperation, Proceedings of the European Conference on Computer Vision 2 (2004) 250–261.
[51] M. Saïd Allili, D. Ziou, Globally adaptive region information for automatic colour–texture image segmentation, Pattern Recognition Letters 28 (15) (2007) 1946–1956.
[52] R. Luis-García, R. Deriche, C. Alberola-López, Texture and colour segmentation based on the combined use of the structure tensor and the image components, Signal Processing 88 (4) (2008) 776–795.
[53] S. Han, W. Tao, D. Wang, X.C. Tai, X. Wu, Image segmentation based on GrabCut framework integrating multiscale
nonlinear structure tensor, IEEE Transactions on Image Processing 18 (10) (2009) 2289–2302.
[54] J.S. Kim, K.S. Hong, Colour–texture segmentation using unsupervised graph cuts, Pattern Recognition 42 (5) (2009) 735–
750.
[55] D. Cremers, M. Rousson, R. Deriche, A review of statistical approaches to level set segmentation: integrating colour, texture,
motion and shape, International Journal of Computer Vision 72 (2) (2007) 195–215.
[56] X. Xie, Active contouring based on gradient vector interaction and constrained level set diffusion, IEEE Transactions on
Image Processing 19 (1) (2010) 154–164.
[57] X. Xie, M. Mirmehdi, MAC: Magnetostatic Active Contour, IEEE Transactions on Pattern Analysis and Machine Intelligence 30 (4) (2008) 632–646.
[58] T. Brox, M. Rousson, R. Deriche, J. Weickert, Colour, texture, and motion in level set based segmentation and tracking,
Image and Vision Computing 28 (3) (2010) 376–390.
[59] T. Zoller, L. Hermes, J.M. Buhmann, Combined colour and texture segmentation by parametric distributional clustering, in:
Proceedings of the 16th International Conference on Pattern Recognition (ICPR 2002), vol. 2, 2002, pp. 627–630.
[60] W.S. Ooi, C.P. Lim, Fusion of colour and texture features in image segmentation: an empirical study, Imaging Science
Journal 57 (1) (2009) 8–18.
[61] A.K. Jain, R.C. Dubes, Algorithms for Clustering Data, Prentice-Hall, 1998.
[62] D.E. Ilea, P.F. Whelan, CTex—an adaptive unsupervised segmentation algorithm based on colour–texture coherence, IEEE
Transactions on Image Processing 17 (10) (2008) 1926–1939.
[63] D.E. Ilea, P.F. Whelan, Colour saliency-based parameter optimisation for adaptive colour segmentation, Proceedings of the
IEEE International Conference on Image Processing (2009) 973–976.
[64] D.E. Ilea, P.F. Whelan, Adaptive pre-filtering techniques for colour image analysis, Proceedings of the International Machine
Vision and Image Processing Conference (IMVIP 2007), IEEE Computer Society Press, 2007, pp. 150–157.
[65] N.W. Campbell, B.T. Thomas, Segmentation of natural images using self-organising feature maps, in: Proceedings of the
Seventh BMVC British Machine Vision Conference, vol. 1, 1996, pp. 222-232.
[66] W.P.J. Mackeown, A Labelled Image Database and its Application to Outdoor Scene Analysis, Ph.D. Thesis, University of
Bristol, UK, 1994.
[67] M. Niskanen, O. Silven, H. Kauppinen, Colour and texture based wood inspection with non-supervised clustering, in:
Proceedings of the 12th Scandinavian Conference on Image Analysis (SCIA 01), 2001, pp. 336–342.
[68] P.Gorecki, L. Caponetti, Colour texture segmentation with local fuzzy patterns and spatially constrained fuzzy C-means,
Applications of Fuzzy Sets Theory (2007)362–369.
[69] Y. Chang, Y. Zhou, Y. Wang, Combined colour and texture segmentation based on Fibonacci lattice sampling and mean
shift, in: Proceedings of the Second International Conference on Image Analysis and Recognition (ICIAR 2005), Lecture
Notes in Computer Science, vol. 3656, 2005, pp. 24–31.
[70] J. Cheng, Y.W. Chen, H. Lu, X.Y. Zeng, Colour- and texture-based image segmentation using local feature analysis
approach, Proceedings of the SPIE—The International Society for Optical Engineering 5286 (1) (2003) 600–604.
[71] A. Khotanzad, O.J. Hernandez, Colour image retrieval using multispectral random field texture model and colour content
features, Pattern Recognition 36 (8) (2003) 1679–1694.
[72] W.S. Ooi, C.P. Lim, Fuzzy clustering of colour and texture features for image segmentation: a study on satellite image
retrieval, Journal of Intelligent & Fuzzy Systems 17 (3) (2006) 297–311.
[73] M. Datar, D. Padfield, H. Cline, Colour and texture based segmentation of molecular pathology images using HSOMS, in:
IEEE International Symposium on Biomedical Imaging: From Macro to Nano (ISBI ‘08), 2008, pp. 292–294.
[74] S. Liapis, G. Tziritas, Colour and texture image retrieval using chromaticity histograms and wavelet frames, IEEE
Transactions on Multimedia 6 (5) (2004) 676–686.
[75] P. Brodatz, A Photographic Album for Artists and Designers, Dover Publications, New York, 1966.
[76] D.R. Martin, C.C. Fowlkes, J. Malik, Learning to detect natural image boundaries using local brightness, colour, and texture
cues, IEEE Transactions on Pattern Analysis and Machine Intelligence 26 (5) (2004) 530–549.
[77] J. Shi, J. Malik, Normalized cuts and image segmentation, IEEE Transactions on Pattern Analysis and Machine Intelligence
22 (8) (2000) 888–905.
Review of Graph, Medical and Color Image base Segmentation Techniques
www.iosrjournals.org 16 | Page
[78] C. Carson, S. Belongie, H. Greenspan, J. Malik, Blobworld: image segmentation using expectation-maximization and its
application to image querying, IEEE Transactions on Pattern Analysis and Machine Intelligence 24 (8) (2002) 1026–1038.
[79] R. Manduchi, Bayesian fusion of colour and texture segmentations, in: Proceedings of the IEEE International Conference on
Computer Vision, vol. 2, 1999, pp. 956–962.
[80] M.P. Dubuisson-Jolly, A. Gupta, Colour and texture fusion: application to aerial image segmentation and GIS updating,
Image and Vision Computing (18) (2000) 823–832.
[81] J.F. Khan, R.R. Adhami, S.M.A. Bhuiyan, A customized Gabor filter for unsupervised colour image segmentation, Image
and Vision Computing 27 (4) (2009) 489–501.
[82] K. Fukuda, T. Takiguchi, Y. Ariki, Graph cuts segmentation by using local texture features of multiresolution analysis,
IEICE Transactions on Information and Systems E92-D (7) (2009) 1453–1461.
[83] B.S. Manjunath, R. Chellappa, Unsupervised texture segmentation using Markov random fields models, IEEE Transactions
on Pattern Analysis and Machine Intelligence 13 (5) (1991) 478–482.
[84] G.C. Cross, A.K. Jain, Markov random field texture models, IEEE Transactions on Pattern Analysis and Machine
Intelligence 5 (1) (1983) 25–39.
[85] Z. Kato, T.C. Pong, A Markov random field image segmentation model for colour textured images, Image and Vision
Computing 24 (2006) 1103–1114.
[86] Z. Kato, T.C. Pong, S.G. Qiang, Multicue MRF image segmentation: combining texture and colour features, in: Proceedings
of the 16th International Conference on Pattern Recognition, vol. 1, 2002, pp. 660–663.
[87] D. Huawu, D.A. Clausi, Unsupervised image segmentation using a simple MRF model with a new implementation scheme,
Pattern Recognition 37 (12) (2004) 2323–2335.
[88] T. Echigo, S. Iisaku, Unsupervised segmentation of coloured texture images by using multiple GMRF models and a
hypothesis of merging primitives, Systems and Computers in Japan 31 (2)(2000)29–39.
[89] C. Serrano, B. Acha, Pattern analysis of dermoscopic images based on Markov random fields, Pattern Recognition 42 (6)
(2009) 1052–1057.
[90] F. Destrempes, J.F. Angers, M. Mignotte, Fusion of hidden Markov random field models and its Bayesian estimation, IEEE
Transactions on Image Processing 15 (10) (2006) 2920–2935.
[91] Y. Xia, D. Feng, R. Zhao, Adaptive segmentation of textured images by using the coupled Markov random field model, IEEE
Transactions on Image Processing 15 (11) (2006) 3559–3566.
[92] A.H. Kam, W.J. Fitzgerald, General unsupervised multiscale segmentation of images, in: Proceedings of the 33rd Asilomar
Conference on Signals, Systems and Computers, vol. 1, 1999, pp. 63–67.
[93] Dzung L. Pham, Chenyang Xu, and Jerry L. Prince, ―Current Methods in Medical Image Segmentation‖, Annual Review of
Biomedical Engineering, volume 2, pp 315-337, 2000.
[94] Luciano S. Silva and Jacob Scharcanski,―Video Segmentation Based on Motion Coherence of Particles in a Video
Sequence,‖ IEEE Transactions On Image Processing, Vol. 19, No. 4, April 2010.
[95] Lifshitz, L. and Pizer, S., ―A multiresolution hierarchical approach to image segmentation based on intensity extrema,‖ IEEE
Transactions on Pattern Analysis and Machine Intelligence, 12:6, 529 - 540, 1990.
[96] Pranithan Phensadsaeng, Werapon Chiracharit and Kosin Chamnongthai, A VLSI Architecture of Colour Model–based
Tonsillitis Detection, 978-1-4244-4522-6, IEEE ISCIT 2009.
[97] Anusha Achuthan, Mandava Rajeswari, Dhanesh Ramachandram, Mohd Ezane Aziz, Ibrahim Lutfi, ― Wavelet Energy
Guided Level Set for Segmenting, Highly Similar Regions in Medical Images,‖ IEEE 978-1-4244-4134-1/2009.
[98] Soumik Ukil and Joseph M. Reinhardt, ―Anatomy-Guided Lung Lobe Segmentation in X-Ray CT Images,‖ IEEE
Transactions on Medical Imaging, Vol. 28, No. 2, February 2009.
[99] Zhanli Hu, Hairong Zheng, Jianbao Gui, ―A Novel Interactive Image Processing Approach for DICOM Medical Image
Data.‖ IEEE Biomedical Engineering and Informatics 2nd International Conference, Oct 2009.
Review of Graph, Medical and Color Image base Segmentation Techniques
www.iosrjournals.org 17 | Page
[100] Yao-Tien Chen, ―A level set method based on the Bayesian risk for medical image segmentation,‖ Pattern Recognition 43
(2010)3699–3711.
[101] Arnau Oliver, Jordi Freixenet, Joan Marti, Elsa Perez, Josep Pont, Erika R.E. Denton, Reyer Zwiggelaar, ―A review of
automatic mass detection and segmentation in mammographic images,‖ Medical Image Analysis 14 (2010) 87–110.
[102] Dae Sik Jeong , Jae Won Hwang , Byung Jun Kang , Kang Ryoung Park , Chee Sun Won, Dong-Kwon Park , Jaihie Kim, ―A
new iris segmentation method for non-ideal iris images,‖ Image and Vision Computing 28 (2010) 254–260.
[103] Chuan-Yu Chang, Yue-Fong Lei, Chin-Hsiao Tseng, and Shyang-Rong Shih, ―Thyroid Segmentation and Volume
Estimation in Ultrasound Images,‖ IEEE Transactions on Biomedical Engineering, Vol. 57, No. 6, June 2010.
[104] Mustafa Go khan Uzunbas¸, Octavian Soldea, Devrim U¨ nay, Mu¨jdat C¸ etin, Go¨zde U¨ nal, Ayt¨ul Erc¸il, and Ahmet
Ekin, ―Coupled Non-Parametric Shape and Moment-Based Inter-Shape Pose Priors for Multiple Basal Ganglia Structure
Segmentation,‖ IEEE TRANSACTIONS ON MEDICAL IMAGING, VOL. 29, NO. 12, DECEMBER 2010.
[105] N. Funakubo, Region segmentation of biomedical tissue image using colour texture features, in: Proceedings of the Seventh
International Conference on Pattern Recognition, vol. 1, 1984, pp. 30–32.
[106] H. Harms, U. Gunzer, H.M. Aus, Combined local colour and texture analysis of stained cells, Computer Vision, Graphics,
and Image Processing 33 (3) (1986) 364–376.
[107] M. Celenk, S.H. Smith,Modeling of human colour perception of visual patterns for feature extraction, in: Proceedings of the
15th International Symposium on Automotive Technology and Automation (ISATA 86), vol. 2, 1986.
[108] C. Garbay, Image structure representation and processing: a discussion of some segmentation methods in cytology, IEEE
Transactions on Pattern Analysis and Machine Intelligence 8 (2) (1986) 140–146.
[109] N. Katz, M. Goldbaum, M. Nelson, S. Chaudhuri, An image processing system for automatic retina diagnosis, Proceedings of
the SPIE—The International Society for Optical Engineering 902 (1988) 131–137.
[110] www.wikipedia.org
[111] A. Olmos, F.A.A. Kingdom, McGill Calibrated Colour Image Database, http://tabby.vision.mcgill.caS, 2004.
[112] T. Ojala, T. M¨aenp¨a¨ a, M. Pietik¨ainen, J. Viertola, J. Kyll ¨onen, S. Huovinen, Outex—new framework for empirical
evaluation of texture analysis algorithms, in: Proceedings of the 16th International Conference on Pattern Recognition (ICPR
‗02), Quebec, Canada, vol. 1, 2002, pp. 701–706.
[113] G. Griffin, A.D. Holub, P. Perona, the Caltech-256, Caltech Technical Report, 2007.
[114] M. Haindl, S. Mikeˇs, Texture segmentation benchmark, in: Proceedings of the 19th International Conference on Pattern
Recognition, 2008, pp. 1–4.
[115] M. Everingham, L. van Gool, C.K.I. Williams, J. Winn, A. Zisserman, The PASCAL visual object classes challenge
workshop 2009, in: Proceedings of the International Conference on Computer Vision, Kyoto, Japan, 2009.
[116] K.J. Dana, B. Van-Ginneken, S.K. Nayar, J.J. Koenderink, Reflectance and texture of real world surfaces, ACM Transactions
on Graphics (TOG) 18 (1) (1999) 1–34.
[117] J.Z. Wang, J. Li, G. Wiederhold, SIMPLIcity: Semantics-sensitive Integrated Matching for Picture Libraries, IEEE
Transactions on Pattern Analysis and Machine Intelligence 23 (9) (2001) 947–963.
[118] M. Sharma, S. Singh, Minerva scene analysis benchmark, in: Proceedings of the Seventh Australian and New Zealand
Intelligent Information Systems Conference, 2001, pp. 231–235.
[119] R. Lakmann, Statistische Modellierung von Farbtexturen, Ph.D. Thesis, University Koblenz-Landau, Koblenz, 1998.
[120] Y.G. Wang, J. Yang, Y.C. Chang, Colour–texture Image segmentation by integrating directional operators into JSEG
method, Pattern Recognition Letters 27 (16) (2006) 1983–1990.
[121] Jia-nan wang, Jun kong, Ying-hua lu, Wen-xiang gu, Ming-hao Yin, Yong-peng Xiao, A region-based srg algorithm for color
image segmentation, Proceedings of the Sixth International Conference on Machine Learning and Cybernetics, Hong Kong,
19-22 August 2007.
Review of Graph, Medical and Color Image base Segmentation Techniques
www.iosrjournals.org 18 | Page
[122] Jia-Hong Yang, Jie Liu, Jian-Cheng Zhong,‖ Anisotropic Diffusion with Morphological Reconstruction and Automatic
Seeded Region Growing for Colour Image Segmentation,‖ IEEE, Computer Society, International Symposium on
Information Science and Engineering, 2008.
[123] A.Y. Yang, J. Wright, Y. Ma, S. Sastry, Unsupervised segmentation of natural images via lossy data compression, Computer
Vision and Image Understanding 110 (2) (2008) 212–225.
[124] Zhiding Yu, Ruobing Zou, and Simin Yu, A modified fuzzy c-means algorithm with adaptive spatial information for color
image segmentation, 978-1-4244-2760-4/09/ IEEE 2009.
[125] David A. Kay and Alessandro Tomasi, ―Color Image Segmentation by the Vector-Valued Allen–Cahn Phase-Field Model: A
Multigrid Solution,‖ IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 18, NO. 10, pp. 2330-2339, OCTOBER
2009.
[126] Q. Luo, T.M. Khoshgoftaar, Unsupervised multiscale colour image segmentation based on MDL principle, IEEE
Transactions on Image Processing 15 (9) (2006) 2755–2761.
[127] B.Sowmya, B.Sheelarani, Colour Image Segmentation Using Soft Computing Techniques, and International Journal of Soft
Computing Applications ISSN: 1453-2277 Issue 4 (2009), pp.69-80, Euro Journals Publishing, Inc.
2009.http://www.eurojournals.com/IJSCA.htm.
[128] L. Shi, B. Funt, Quaternion colour texture segmentation, Computer Vision and Image Understanding 107 (1–2) (2007) 88–
96.
[129] Erchan Aptoula and Sebastian Lefebvre, Morphological Description of Color Images for Content-Based Image Retrieval,
IEEE transactions on image processing, vol. 18, no. 11, November 2009.
[130] Dr. S.V.Kasmir Raja, A. Shaik Abdul Khadir, Dr. S.S.Riaz Ahamed,‖ Moving toward region-based image segmentation
techniques: a study,‖ journal of theoretical and applied information technology, Vol. 5, no. 13, 2005 – 2009.
[131] Hesam Izakian, Ajith Abraham, Fuzzy C-Means and Fuzzy Swarm for Fuzzy Clustering Problem, Expert Systems with
Applications (2010), DOI: 10.1016/j.eswa.2010.07.112.
[132] A. Hanbury, B. Marcotegui, Morphological segmentation on learned boundaries, Pattern Recognition and Image Processing
Group 2009
[133] M. Ozden, E. Polat, A colour image segmentation approach for content based image retrieval, Pattern Recognition 40(4)
(2007) 1318–1325.
[134] M. Mignotte, A label field fusion Bayesian model and its penalized maximum Rand estimator for image segmentation, IEEE
Transactions on Image Processing 19 (6) (2010) 1610–1624.
[135] J. Cousty a, L. Najman a, M. couprie a, S. Clément-Guinaudeau b, T. Goissen B, J. Garot, Segmentation of 4D cardiac MRI:
Automated method based on spatio-temporal watershed cuts, Image Vis.Comput. (2010), doi:10.1016/j.imavis.2010.01.001.
[136] Khang Siang Tan, Nor Ashidi Mat Isa, Color image segmentation using histogram thresholding – Fuzzy C-means hybrid
approach, Pattern Recognition 44 (2011) 1–15.
[137] Qingqing Zheng, Nong Sang, Leyuan Liu,Changxin Gao, Textured image segmentation based on modulation models, Optical
Engineering 49(9), 097009 September 2010.
[138] Lidi Wang, Tao Yang and Youwen Tian,‖ Crop disease leaf image segmentation method based on colour features,‖
Computer and Computing Technologies in Agriculture, Vol. 1; Daoliang Li; (Boston: Springer), pp. 713–717, 2008.
[139] E. Mehdizadeh, S. Sadi-Nezhad and R. Tavakkoli-Moghaddam, optimization of fuzzy clustering Criteria by a hybrid PSO
and fuzzy c-means clustering algorithm, Iranian Journal of Fuzzy Systems Vol.5,No. 3, (2008) pp. 1-14.
[140] Aznul Qalid Md Sabri, Random Walker Segmentation Algorithm for Medical Images, Erasmus Mundus Masters in Vision &
Robotics (ViBot) 5th of June, 2009.
[141] Sreenath rao vantaram, Eli saber, Sohail Dianat, Mark Shaw and Ranjit bhaskar, An adaptive and progressive approach for
efficient gradient-based multiresolution color image segmentation, 978-1-4244-5654-3/09/ IEEE ICIP 2009.
[142] Camille Couprie, Leo Grady, Laurent Najman, Hugues, Power Watersheds A unifying graph-based optimization framework,
INRIA, November 4, 2009.
Review of Graph, Medical and Color Image base Segmentation Techniques
www.iosrjournals.org 19 | Page
[143] Lei Zhang, Qiang Ji, Image Segmentation with a Unified Graphical Model, IEEE Transactions On Pattern Analysis And
Machine Intelligence, Vol. 32, No. 8, August 2010.
Janak B. Patel (born in 1971) received his B.E. in Electronics & Communication Engineering from L.D. College of Engineering, Ahmedabad, affiliated with Gujarat University, and his M.E. in Electronics Communication & System Engineering in 2000 from Dharmsinh Desai Institute of Technology, Nadiad, affiliated with Gujarat University. He is Assistant Professor and Head of Department at L.D.R.P. Institute of Technology & Research, Gandhinagar, Gujarat. He is currently pursuing his Ph.D. at the Indian Institute of Technology Roorkee under the Quality Improvement Programme of AICTE, India. His research interests include digital signal processing, image processing, and biomedical signal and image processing. He has 7 years of industrial and 12 years of teaching experience at engineering colleges and has taught many subjects in the EC, CS and IT disciplines. He is a life member of CSI, ISTE and IETE.
R.S. Anand received his B.E., M.E. and Ph.D. in Electrical Engineering from the University of Roorkee (now Indian Institute of Technology Roorkee) in 1985, 1987 and 1992, respectively. He is a Professor at the Indian Institute of Technology Roorkee and has published more than 100 research papers in the areas of image and signal processing. He has supervised 10 Ph.D. and 60 M.Tech. theses and has organized conferences and workshops. His research areas are biomedical signal and image processing and ultrasonic applications in non-destructive evaluation and medical diagnosis. He is a life member of the Ultrasonic Society of India.
IOSR Journal of Electrical and Electronics Engineering (IOSRJEEE)
ISSN : 2278-1676 Volume 1, Issue 1 (May-June 2012), PP 20-30 www.iosrjournals.org
Analytical approaches for Optimal Placement and sizing of
Distributed generation in Power System
Mohit Mittal*, Rajat Kamboj**, Shivani Sehgal***
*Student of Electrical & Electronics Engg., Doon Valley Institute of Engg. & Technology, Karnal, India.
**Assistant Professor, Department of EEE, Doon Valley Institute of Engg. & Technology, Karnal, India.
***Assistant Professor, Department of EEE, Doon Valley Institute of Engg. & Technology, Karnal, India.
Abstract- This work proposes a new algorithm that investigates the performance of a distribution system with multiple DG sources for the reduction of line losses, given the total number of DG units that the user intends to connect. Strategic placement of multiple DG sources is a complex combinatorial optimization problem for a distribution system planner.
A new and fast algorithm is developed for solving the power flow of radial distribution feeders, taking into account embedded distributed generation sources. New approximation formulas are also proposed to reduce the number of required solution iterations. Power flow (PF) techniques are used for calculating the Network Performance Index (NPI), and a genetic algorithm searches for the best locations, with the NPI as the fitness function.
I. INTRODUCTION
The impact of DG on system operating characteristics, such as electric losses, voltage profile, stability and reliability, needs to be appropriately evaluated, so the problem of DG allocation and sizing is of great importance. The installation of DG units at non-optimal places can result in an increase in system losses, implying an increase in costs and thus having an effect opposite to the desired one. For that reason, an optimization method capable of indicating the best solution for a given distribution network can be very useful to the system planning engineer when dealing with the increasing DG penetration that is happening nowadays.
II. MATHEMATICAL FORMULATION
System Description
Consider a three-phase, balanced radial distribution feeder with n buses, l laterals and sublaterals, nDG distributed generators and nC shunt capacitors, as shown in Fig. 1.
Fig. 1: Radial distribution feeder model including DG and capacitor
The three recursive branch power flow equations are:

P_{i+1} = P_i − r_{i+1}(P_i² + Q_i²)/V_i² − P_{L,i+1} + μ_p·AP_{i+1}    (2.1a)

Q_{i+1} = Q_i − x_{i+1}(P_i² + Q_i²)/V_i² − Q_{L,i+1} + μ_p·AQ_{i+1}    (2.1b)

V_{i+1}² = V_i² − 2(r_{i+1}P_i + x_{i+1}Q_i) + (r_{i+1}² + x_{i+1}²)(P_i² + Q_i²)/V_i²    (2.1c)
The following terminal conditions should be satisfied [6]:
i. At the end of the main feeder, laterals and sublaterals, as shown in Fig. 1:
P_n = Q_n = 0    (2.2)
P_km = Q_km = 0    (2.3)
ii. The voltage at bus k is the same as the voltage of its lateral, i.e.:
V_k = V_k0    (2.4)
The real and reactive power losses of each section connecting two buses are:

P_{loss,i+1} = r_{i+1}(P_i² + Q_i²)/V_i²    (2.5)

Q_{loss,i+1} = x_{i+1}(P_i² + Q_i²)/V_i²    (2.6)
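The recursions (2.1a)-(2.1c) together with the loss expressions (2.5) and (2.6) can be sketched as a forward sweep along a single main feeder. The sketch below is illustrative only: it drops the DG/capacitor injection terms of (2.1a)-(2.1b), and the function name and the per-unit data are our own hypothetical choices, not values from the paper.

```python
def branch_flow_sweep(P0, Q0, V0, r, x, PL, QL):
    """Forward sweep of the recursive branch flow equations (2.1a)-(2.1c)
    for one radial feeder, ignoring DG/capacitor injection terms.
    P0, Q0: power entering the feeder (p.u.); V0: substation voltage (p.u.);
    r, x: branch resistances/reactances; PL, QL: bus loads.
    Returns bus voltages and per-branch real/reactive losses."""
    P, Q, V = P0, Q0, V0
    volts, p_loss, q_loss = [V0], [], []
    for i in range(len(r)):
        flow2 = (P**2 + Q**2) / V**2
        p_loss.append(r[i] * flow2)                    # eq. (2.5)
        q_loss.append(x[i] * flow2)                    # eq. (2.6)
        V2 = V**2 - 2*(r[i]*P + x[i]*Q) + (r[i]**2 + x[i]**2) * flow2  # (2.1c)
        P = P - r[i]*flow2 - PL[i]                     # (2.1a), no DG term
        Q = Q - x[i]*flow2 - QL[i]                     # (2.1b), no DG term
        V = V2 ** 0.5
        volts.append(V)
    return volts, p_loss, q_loss

# Tiny 3-branch feeder with hypothetical per-unit data
volts, pl, ql = branch_flow_sweep(
    P0=0.06, Q0=0.04, V0=1.0,
    r=[0.01, 0.02, 0.015], x=[0.02, 0.03, 0.02],
    PL=[0.02, 0.02, 0.02], QL=[0.013, 0.013, 0.013])
```

The voltage magnitude drops monotonically along the feeder in this sketch, as expected for a purely load-serving radial line.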
III. DG SIZING ISSUES
For the single DG case, the optimal DG size is found using an analytical method based on the exact loss formula.
The real power loss in a system is given by

P_L = Σ_{i=1}^{N} Σ_{j=1}^{N} [α_ij(P_i·P_j + Q_i·Q_j) + β_ij(Q_i·P_j − P_i·Q_j)]    (2.7)

where

α_ij = (r_ij/(V_i·V_j))·cos(δ_i − δ_j),   β_ij = (r_ij/(V_i·V_j))·sin(δ_i − δ_j),

and r_ij + j·x_ij = z_ij is the ij-th element of the [Z_bus] matrix.
The analytical method is based on the fact that the power loss against injected power is a parabolic function, and at minimum loss the rate of change of loss with respect to injected power becomes zero:

∂P_L/∂P_i = 2·Σ_{j=1}^{N} (α_ij·P_j − β_ij·Q_j) = 0    (2.8)

where P_i is the real power injection at node i, which is the difference between the real power generation and the real power demand at that node:

P_i = P_DGi − P_Di    (2.9)

where P_DGi is the real power injection from the DG placed at node i and P_Di is the load at node i. Combining the above equations gives

P_DGi = P_Di + (1/α_ii)·[β_ii·Q_i − Σ_{j=1, j≠i}^{N} (α_ij·P_j − β_ij·Q_j)]    (2.10)
Equation (2.10) gives the optimum size of DG at each bus i for the loss to be minimum; any DG size other than P_DGi placed at bus i will lead to a higher loss. In calculating the optimum DG sizes at various locations using equation (2.10), it was assumed that the values of the other variables remain unchanged. This results in a small difference between the optimum sizes obtained by this approach and those obtained by repeated load flow.
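Equation (2.10) is a direct closed-form expression, so it can be sketched in a few lines. The function name and the two-bus input data below are hypothetical; in practice P, Q, V, δ and the [Z_bus] entries would come from a converged load flow.

```python
import math

def optimal_dg_size(i, P, Q, PD, V, delta, r):
    """Optimum real-power DG size at bus i from eq. (2.10).
    P, Q: bus real/reactive injections; PD: real power demands;
    V, delta: bus voltage magnitudes and angles;
    r: real part of the [Zbus] matrix."""
    N = len(P)
    alpha = lambda a, b: r[a][b] * math.cos(delta[a] - delta[b]) / (V[a] * V[b])
    beta = lambda a, b: r[a][b] * math.sin(delta[a] - delta[b]) / (V[a] * V[b])
    # Summation term of eq. (2.10) over all buses j != i
    acc = sum(alpha(i, j) * P[j] - beta(i, j) * Q[j] for j in range(N) if j != i)
    return PD[i] + (beta(i, i) * Q[i] - acc) / alpha(i, i)

# Two-bus toy example (flat voltages, zero angles, so the beta terms vanish)
size = optimal_dg_size(
    i=1,
    P=[-0.5, 0.3], Q=[-0.3, 0.2], PD=[0.5, 0.2],
    V=[1.0, 1.0], delta=[0.0, 0.0],
    r=[[0.02, 0.01], [0.01, 0.03]])
```

With zero angle differences, β_ii = 0 and the optimum reduces to PD[i] minus the weighted influence of the other buses' injections.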
IV. PROPOSED ALGORITHM
4.1 Algorithm for the Single DG Case
The developed algorithm is explained stepwise as follows:
Step 1: Read the distribution network data and the DG size.
Step 2: Give the network data to the power flow algorithm to get the base-case power flow.
Step 3: Save the base-case power flow for later use.
Step 4: Calculate the network performance index for the different DG placements.
Step 5: Insert the DG at the bus for which the NPI value is closest to unity, and optimize the DG size.
Step 6: Evaluate the NPI with the help of the base-case power flow and the power flow with the single DG inserted.
Step 7: Print the results and stop.
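Steps 4-5 amount to an exhaustive search over candidate buses, keeping the bus whose NPI is closest to unity. The sketch below illustrates just that selection step; the function name is our own, and `npi_for_bus` stands in for the full power-flow and index evaluation, which is not reproduced here.

```python
def best_single_dg_bus(candidate_buses, npi_for_bus):
    """Evaluate the NPI for a DG at every candidate bus and keep the
    bus whose NPI is closest to unity (i.e. maximal, since NPI <= 1)."""
    return max(candidate_buses, key=npi_for_bus)

# NPI values taken from the five best rows of Table 2.2
npi_table = {31: 0.5489, 32: 0.5477, 33: 0.5411, 30: 0.5328, 6: 0.5331}
best_bus = best_single_dg_bus(npi_table, npi_table.get)
```

With the Table 2.2 scores, the search picks bus 31, matching the paper's best single-DG location.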
4.2 Algorithm for Multiple DG
The following algorithm is developed with the help of the new and fast power flow solution algorithm and a genetic algorithm, and is used to obtain the appropriate results. The developed algorithm is explained stepwise as follows:
Step 1: Read the distribution network data.
Step 2: Give the network data to the new and fast power flow algorithm to get the base-case power flow.
Step 3: Save the base-case power flow for later use.
Step 4: Read the capacities of the distributed generators to be inserted. [Market-available DG sizes: 100 kW, 220 kW, 300 kW, 500 kW, 750 kW, 1 MW, 1.6 MW]
Step 5: Read the bus numbers for DG insertion from the genetic algorithm. The first time, the GA gives the initial population; otherwise, it gives a new population.
Step 6: Apply the power flow again for the distribution system with the DGs inserted at the positions given by the GA, using the new and fast power flow algorithm.
Step 7: Evaluate the multi-objective index with the help of the base-case power flow and the power flow with the DGs inserted.
Step 8: Repeat Steps 6 and 7 for all combinations in the GA population.
Step 9: Check the number of generations completed.
Step 10: Give the NPI values as fitness values to the GA.
Step 11: With the help of the fitness function values, the GA performs its operations (selection, crossover, mutation and elitism) and generates a new population; then go to Step 5.
Step 12: Save the n best results and go to the next step.
Step 13: For every best result, decrease the capacity of each DG by a fixed percentage of its individual capacity, and repeat for the maximum number of iterations.
Step 14: The lower limit of DG capacity is 1% of the total load.
Step 15: Run the power flow for all combinations of the new DG capacities and calculate the NPI.
Step 16: Compare these results with the previously saved fitness values, and print the results in order of best fitness.
Step 17: Stop.
The corresponding flowchart is as follows:
V. CASE STUDY AND ANALYSIS
5.1 TEST SYSTEM
The radial system with 33 buses and 32 branches, with a total load of 3.715 MW and 2.28 MVAR at 11 kV, is taken as the test system. The single-line diagram of the 33-bus distribution system is shown in Fig. 2. The system data is given in Appendix B.
Fig. 2. Single-line diagram of the 33-bus distribution system
5.2 GENETIC ALGORITHM SET UP
Representation
A binary genetic algorithm (BGA) is employed to generate the combinations of bus numbers.
Initialize Population
An initial population of size 30 is selected.
Selection
Tournament selection is chosen for testing.
Reproduction
Two-point crossover with probability 0.8 [typical range 60% to 95%] and binary mutation with ratio 0.05 [typical range 0.5% to 1%] are used.
Generation Elitism
Generation elitism is taken as 5, which copies the best chromosomes into the next generation.
Fitness Evaluation
The network performance index for the distribution system with DG sources is to be maximized and is selected for fitness evaluation.
Termination
The algorithm stops when the number of generations reaches 300. Each simulation is a fairly lengthy process, but given that this process is a strategic one, the duration is reasonable.
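Under the settings above (population 30, tournament selection, two-point crossover at 0.8, mutation at 0.05, elitism of 5, 300 generations), the GA loop of Section 4.2 can be sketched as follows. This is a minimal illustration, not the paper's implementation: the NPI evaluation is replaced by a dummy scoring function, since the power flow is not reproduced here, duplicate buses in a child are not repaired, and all names are our own.

```python
import random

random.seed(1)  # reproducible sketch

N_BUS, N_DG = 33, 3              # candidate buses 2..33, three DGs
POP, GENS = 30, 300              # population size and generation limit
PC, PM, ELITE = 0.8, 0.05, 5     # crossover, mutation, elitism settings

def npi(placement):
    # Placeholder fitness: the real code runs the fast power flow with DGs
    # at these buses and evaluates the NPI; this hypothetical score simply
    # rewards placements near the weak end of the feeder.
    return 1.0 / (1.0 + sum(abs(b - 30) for b in placement) / 100.0)

def tournament(pop, fits, k=2):
    picks = random.sample(range(len(pop)), k)
    return pop[max(picks, key=lambda i: fits[i])]

def two_point_crossover(a, b):
    i, j = sorted(random.sample(range(1, N_DG), 2))
    return a[:i] + b[i:j] + a[j:]

population = [random.sample(range(2, N_BUS + 1), N_DG) for _ in range(POP)]
for _ in range(GENS):
    fits = [npi(c) for c in population]
    ranked = [c for _, c in sorted(zip(fits, population), reverse=True)]
    children = []
    while len(children) < POP - ELITE:
        p1, p2 = tournament(population, fits), tournament(population, fits)
        child = two_point_crossover(p1, p2) if random.random() < PC else p1[:]
        if random.random() < PM:                      # binary-style mutation
            child[random.randrange(N_DG)] = random.randint(2, N_BUS)
        children.append(child)
    population = ranked[:ELITE] + children            # generation elitism

best = max(population, key=npi)
```

The elite copies survive unchanged each generation, while tournament selection biases the remaining 25 offspring toward high-NPI placements.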
5.3 RELEVANCE FACTORS OF NPI
The NPI numerically describes the impact of a DG, of a given location and size, on a distribution network. NPI values close to unity mean higher DG benefits. Table 1 shows the values of the relevance factors used here, considering a normal-operation-stage analysis.

Index    ILP        ILQ        IVD        IVR        VSI
Weight   W1 = 0.33  W2 = 0.10  W3 = 0.15  W4 = 0.10  W5 = 0.32

Table 1. NPI relevance factors
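With the Table 1 weights, the NPI is a weighted sum of the partial indices. The sketch below assumes each partial index is already normalised to [0, 1]; the sample index values fed in are hypothetical, chosen only to illustrate the computation.

```python
# Relevance factors W1..W5 from Table 1 (they sum to 1)
WEIGHTS = {"ILP": 0.33, "ILQ": 0.10, "IVD": 0.15, "IVR": 0.10, "VSI": 0.32}

def network_performance_index(indices):
    """Weighted sum of the partial indices; values close to unity
    mean higher DG benefits."""
    return sum(WEIGHTS[k] * indices[k] for k in WEIGHTS)

# Hypothetical partial-index values for one candidate placement
npi_value = network_performance_index(
    {"ILP": 0.67, "ILQ": 0.62, "IVD": 0.95, "IVR": 0.95, "VSI": 0.80})
```

Because the weights sum to one, the NPI stays in [0, 1] whenever the partial indices do, which is what makes "closeness to unity" a meaningful fitness target.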
5.4. Results and analysis
A series of simulations was run to evaluate the performance of the distribution system with a defined number of potential DG units. The DG capacities are considered in two ways: with constant capacity and with tuned capacity. The simulations were run for the best sets of 1, 3 and 5 DG units located within the 32 possible sites, and the corresponding distribution system performance results are reported below.
Base case

PLOAD     QLOAD      PLOSSES   QLOSSES     PUTILITY  QUTILITY    VSI
3.715 MW  2.28 MVAR  210.3 kW  137.3 kVAR  3.916 MW  2.417 MVAR  0.675

Table 2. Base system load flow data
Bus Number   Voltage (p.u.)
1 1.0000
2 0.9972
3 0.9836
4 0.9763
5 0.9691
6 0.9511
7 0.9475
8 0.9339
9 0.9276
10 0.9217
11 0.9208
12 0.9193
13 0.9132
14 0.9109
15 0.9095
16 0.9088
17 0.9067
18 0.9067
19 0.9061
20 0.9967
20 0.9931
21 0.9924
22 0.9918
23 0.9801
24 0.99734
25 0.9701
26 0.9491
27 0.9466
28 0.9351
29 0.9269
30 0.9234
31 0.9192
32 0.9183
33 0.9180
Table 2.1 Base case Voltage Profile
Fig. 3. Base case Voltage Profile Plot
Single DG Case
The following table represents the IVD and IVR values for the best combination:

IVD      IVR
0.95499  0.95393
NPI     Bus No.  PDG (kW)  PLoss (kW)  QLoss (kVAR)  VSI     PUTILITY (kW)  QUTILITY (kVAR)
0.5489  31       1296.190  105.496     78.833        0.7971  2524.298       2378.832
0.5477  32       1244.690  105.718     79.501        0.7923  2576.028       2379.506
0.5411  33       1183.540  107.125     82.370        0.7865  2638.548       2382.369
0.5328  30       1471.350  108.178     80.041        0.8134  2351.828       2380.041
0.5331  6        1491.520  108.671     80.375        0.7923  2332.152       2380.376

Table 2.2. Single DG test results, 5 best combinations
Bus Number   Voltage (p.u.)
1 1.0000
2 0.9988
3 0.9936
4 0.9925
5 0.9917
6 0.9877
7 0.9844
8 0.9713
9 0.9652
10 0.9596
11 0.9587
12 0.9573
13 0.9514
14 0.9492
15 0.9478
16 0.9472
17 0.9452
18 0.9446
19 0.9982
20 0.9947
21 0.9940
22 0.9933
23 0.9900
24 0.9834
25 0.9802
26 0.9875
27 0.9874
28 0.9852
29 0.9840
30 0.9848
31 0.9889
32 0.9881
33 0.9878
Table 2.3 Voltage Profile with Single DG at bus 31
Fig.4. Voltage Profile plot with Single DG at bus 31
3DG Case
The following table represents the IVD and IVR values for the best combination:

IVD     IVR
0.9675  0.95657
Without Tuning
DG sizes are 1000 kW, 750 kW and 500 kW.

NPI      Buses     DG (kW)         PDG total (kW)  PLoss (kW)  QLoss (kVAR)  VSI      PUTILITY (kW)  QUTILITY (kVAR)
0.79106  31,14,25  1000, 750, 500  2250            69.339      51.6276       0.9176   1534.339       2351.623
0.78828  30,25,16  1000, 750, 500  2250            70.8966     51.962        0.9104   1535.890       2351.962
0.78611  29,15,16  1000, 750, 500  2250            71.741      52.3996       0.89038  1536.743       2352.400
0.77022  11,31,4   1000, 750, 500  2250            75.8641     54.9118       0.89486  1540.665       2354.912
0.7644   30,9,25   1000, 750, 500  2250            77.088      55.8658       0.86670  1542.087       2355.866

Table 2.4. 3DG test results without tuning, 5 best combinations
With Tuning

NPI      Buses     DG (kW)                    PDG total (kW)  PLoss (kW)  QLoss (kVAR)  VSI      PUTILITY (kW)  QUTILITY (kVAR)
0.79612  31,14,25  899.977, 678.183, 449.838  2027.699        68.4845     48.2754       0.9273   1755.786       2348.276
0.78903  30,25,16  955.671, 724.049, 480.026  2159.98         69.031      49.207        0.9138   1624.328       2349.021
0.78657  29,15,16  955.744, 713.188, 477.872  2146.805        70.181      49.479        0.9119   1638.377       2349.48
0.77024  11,31,4   980, 731.25, 490           2201.25         72.1742     51.4284       0.8987   1585.924       2351.428
0.7625   30,9,25   931.994, 706.110, 465.997  2104.102        74.52       52.297        0.88297  1685.421       2352.298

Table 2.5. 3DG test results with tuning, 5 best combinations
Bus Number   Voltage (p.u.)
1 1.0000
2 0.9995
3 0.9980
4 0.9974
5 0.9971
6 0.9942
7 0.9918
8 0.9871
9 0.9861
10 0.9874
11 0.9858
12 0.9861
13 0.9874
14 0.9879
15 0.9865
16 0.9859
17 0.9840
18 0.9835
19 0.9989
20 0.9954
21 0.9947
22 0.9940
23 0.9959
24 0.9922
25 0.9918
26 0.9936
27 0.9930
28 0.9889
29 0.9863
30 0.9862
31 0.9884
32 0.9876
33 0.9873
Table 2.6. Voltage profile with 3 DGs at buses 31, 14, 25
5 DG Case
The following table represents the IVD and IVR values for the best combination:

IVD     IVR
0.9949  0.9852
NPI     Buses          PDG (kW)                  PDG total (kW)  PLoss (kW)  QLoss (kVAR)  VSI      PUTILITY (kW)  QUTILITY (kVAR)
0.7877  24,18,33,8,9   1000, 250, 750, 300, 500  2800            74.9616     51.5129       0.91643  989.9644       2351.515
0.7847  32,25,2,15,12  1000, 250, 750, 300, 500  2800            75.2615     53.1826       0.91241  990.262        2353.184
0.7761  3,12,13,32,31  1000, 250, 750, 300, 500  2800            78.5738     54.9075       0.89104  993.574        2354.902
0.7664  30,25,14,27,21 1000, 250, 750, 300, 500  2800            86.984      60.672        0.88632  1001.984       2360.675

Table 2.7. 5DG test results without tuning, 5 best combinations
NPI      Buses           PDG (kW)                                          PDG total (kW)  PLoss (kW)  QLoss (kVAR)  VSI       PUTILITY (kW)  QUTILITY (kVAR)
0.79027  24,18,33,8,9    868.4808, 222.6871, 671.4525, 264.5321, 438.6601  2465.813        68.7794     47.6779       0.92094   1317.967       2347.678
0.78721  32,25,2,15,12   922.604, 234.181, 695.465, 279.598, 461.302       2593.151        71.703      50.63         0.91868   1193.553       2350.631
0.7772   3,12,13,32,31   940.3366, 237.7294, 709.5862, 283.8345, 477.8723  2650.359        71.9391     50.8451       0.908348  1136.580       2350.845
0.77432  30,25,14,27,21  821.808, 206.4949, 603.998, 244.058, 410.904      2287.264        72.488      51.145        0.91399   1500.224       2351.145

Table 2.8. 5DG test results with tuning, 5 best combinations
Table 2.9. Voltage profile with 5 DGs at buses 24, 18, 33, 8, 9
In Table 2, the base system load data, losses, voltages, voltage stability index and utility-generated real and reactive power are given. The base-case voltage profile and the corresponding voltage profile plot are given in Table 2.1 and Fig. 3. In Table 2.2, the single DG case results are given for the five best combinations in NPI priority order. This is the case in which the user wants to connect a single DG unit to the utility in order to reduce losses and to improve the voltage profile and voltage stability index of the distribution system. The voltage improvement and the voltage profile plot for this case are shown in Table 2.3 and Fig. 4.
In Table 2.4, the 3DG case results are given for the five best combinations in NPI priority order. This is the case in which the user wants to connect 3 DGs with market-available capacities to the utility in order to reduce losses and to improve the voltage profile, the voltage stability index and hence the NPI of the distribution system relative to the single DG case. The voltage improvement for this case is shown in Table 2.6. Table 2.5 presents the 3DG case results for the five best combinations with tuning of the market-available DG sizes, which results in a reduction in losses, an improvement in the voltage stability index and hence in the NPI of the distribution system compared with the case of fixed DG capacities.
Bus Number    Voltage (p.u.)
1 1.0000
2 1.0001
3 0.9989
4 0.9996
5 1.0007
6 1.0009
7 0.9985
8 0.9939
9 0.9929
10 0.9924
11 0.9925
12 0.9929
13 0.9941
14 0.9946
15 0.9933
16 0.9927
17 0.9908
18 0.9902
19 1.0001
20 1.0012
21 1.0018
22 1.0012
23 0.9961
24 0.9910
25 0.9891
26 1.0007
27 1.0006
28 0.9966
29 0.9940
30 0.9939
31 0.9900
32 0.9892
33 0.9889
Similarly, the 5DG case results for the best five combinations are given in Tables 2.7 and 2.8, with market-available DGs and with tuning of the market-available DGs, respectively. These cases yield a reduction in losses and improvements in the voltage profile and voltage stability index of the system, and hence in the NPI.
VI CONCLUSION This work presents a method of combining a new, fast power flow with genetic algorithms, with the aim of finding the best combination of sites within a distribution network for connecting a predefined number of DGs. The network performance index (NPI) is used to find the best combination of sites within the network: the method evaluates the distribution-system performance for the candidate DG capacities and maximizes the NPI. The voltage stability index is used to determine the weak branches in the distribution network. Its use would enable a distribution-system planner to search a network for the best sites at which to strategically connect a small number of DGs among a large number of potential combinations, in order to improve distribution-system performance.
This work concentrated on the technical constraints on DG development, such as voltage limits, thermal limits, and especially loss reduction (the impact of DG on losses is an area that is being extensively researched at present).
This work can easily be adapted to cope with variable energy sources.
REFERENCES
[1] H. B. Puttgen, P. R. MacGregor, F. C. Lambert, "Distributed generation: Semantic hype or the dawn of a new era?", IEEE Power & Energy Magazine, Vol. 1, Issue 1, pp. 22-29, 2003.
[2] W. El-Khattam, M. M. A. Salama, "Distributed generation technologies, definitions and benefits", Electric Power Systems Research, Vol. 71, Issue 2, pp. 119-128, October 2004.
[3] Thomas Ackermann, Goran Andersson, Lennart Soder, "Distributed generation: a definition", Electric Power Systems Research, Vol. 57, Issue 3, pp. 195-204, 20 April 2001.
[4] J. L. Del Monaco, "The role of distributed generation in the critical electric power infrastructure", Power Engineering Society Winter Meeting, 2001, IEEE, Vol. 1, pp. 144-145, 2001.
[5] Xu Ding, A. A. Girgis, "Optimal load shedding strategy in power systems with distributed generation", Power Engineering Society Winter Meeting, 2001, IEEE, Vol. 2, pp. 788-793, 2001.
[6] N. S. Rau and Yih-Huei Wan, "Optimum location of resources in distributed planning", IEEE Transactions on Power Systems, Vol. 9, Issue 4, pp. 2014-2020, 1994.
[7] J. O. Kim, S. W. Nam, S. K. Park, C. Singh, "Dispersed generation planning using improved Hereford ranch algorithm", Electric Power Systems Research, Vol. 47, Issue 1, pp. 47-55, October 1998.
[8] T. Griffin, K. Tomsovic, D. Secrest, A. Law, "Placement of dispersed generation systems for reduced losses", Proceedings of the 33rd Annual Hawaii International Conference on System Sciences, IEEE, Jan. 2000.
[9] K. Nara, Y. Hayashi, K. Ikeda, T. Ashizawa, "Application of tabu search to optimal placement of distributed generators", Power Engineering Society Winter Meeting, 2001, IEEE, Vol. 2, pp. 918-923, 2001.
[10] Caisheng Wang, M. H. Nehrir, "Analytical approaches for optimal placement of distributed generation sources in power systems", IEEE Transactions on Power Systems, Vol. 19, Issue 4, pp. 2068-2076, 2004.
[11] T. Gozel, M. H. Hocaoglu, U. Eminoglu, A. Balikci, "Optimal placement and sizing of distributed generation on radial feeder with different static load models", International Conference on Future Power Systems, IEEE, pp. 1-6, Nov. 2005.
[12] Naresh Acharya, Pukar Mahat, N. Mithulananthan, "An analytical approach for DG allocation in primary distribution network", Electrical Power and Energy Systems, Vol. 28, pp. 669-678, Feb. 2006.
IOSR Journal of Electrical and Electronics Engineering (IOSRJEEE)
ISSN : 2278-1676 Volume 1, Issue 1 (May-June 2012), PP 31-37 www.iosrjournals.org
Depth Image Processing and Operator Imitation Using a Custom
Made Semi Humanoid
Ghanshyam Bhutra1, Piyush Routray2, Subrat Rath3, Sankalp Mohanty4
1(Dept. of EEE, S'o'A University, India) 2(Dept. of E&I, S'o'A University, India)
3(Dept. of EEE, S'o'A University, India) 4(Dept. of EEE, S'o'A University, India)
ABSTRACT : The motivation for creating humanoids arises from diverse socio-economic interests, ranging from restoring the day-to-day activities of the differently abled to assisting humans in nearly inaccessible areas such as mines, radiation sites and military sites. Recent developments in image processing have made depth imaging and skeleton tracking readily available, greatly increasing the potential for accurately inferring the signals of a human operator. The time constraints on various jobs make them grossly dependent on real-time data processing and execution, and acceptance by the industrial community depends on the accuracy of the complete system. The objective is to develop a proven, accurate system to assist human operation in potentially inaccessible areas. The system analyzes the image feed from the camera and deduces the gestures of the operator; it then communicates wirelessly with the self-designed semi-humanoid, which in turn imitates the operator with maximum accuracy.
Keywords – Humanoid Robot, MATLAB, Microsoft Kinect, Servo Motor, Skeleton Tracking.
I. INTRODUCTION Humanoid robots seem, in the future, to be the best alternative to human labor. Communication in the workplace depends greatly on body movements, apart from direct audio signals. While verbal signals give a brief idea of the details of the work to be done, it is the physical demonstration that tends to have the maximum impact. Much work has progressively been done in the fields of gesture recognition and implementation [1]-[3]. However, with the advent of comparatively cheap and easy-to-use depth-imaging devices such as the Microsoft Kinect, the accuracy of gesture recognition can be increased considerably. Perfect imitation of a human by a robot requires tremendous computation and the harmonious operation of many different parts of the system. A common form of mobile robot today is semi-autonomous: the robot acts partially on its own, but there is always a human in the control loop through a link, i.e., telerobotics. In this technique there are no sensors on the bot from which it could take decisions by itself. One possible method of controlling such a bot is a combination of depth imaging and skeletal tracking of the human operator.
To extend the bot's area of application, it has to be wirelessly controlled and equipped with fine maneuvering capability. The signal transmission demands many-to-many communication with optimal use of bandwidth and data protection. If achieved with maximum accuracy, this technology stands to be of great use in outer-space exploration, remote surgery, hostile industrial and military conditions, security, and of course gaming and recreational activities.
II. APPROACH The problem was approached on a calculated basis by first simulating in software and then implementing in hardware. Broadly, the approach can be divided into four stages dealing with different tasks; successive combination of the acquired results leads to the final goal. The first step is to model the humanoid. This is followed by the design of the control architecture, which is the suite of control required for behavioral synchronization between the human and the bot. The next step caters to the acquisition and processing of the image feed from the Kinect sensor; the controlling parameters are determined here. The final step involves wirelessly transmitting the inferred values of the controlling parameters to the bot: the microcontrollers (MCUs) used in the robot decode these parameters and direct the robot to react accordingly. The approach is discussed thoroughly in the following sections.
2.1 Hardware Modeling of the Humanoid
The software design was readied with 6 degrees of freedom. The model was divided into various links, and every link was modeled individually in the SolidWorks software. All the links were mated, and ADAMS was used to test the model under real physics by simultaneously solving the kinematic, static, quasi-static and dynamic equations for the multi-body system. Specifically for simulation purposes, the virtual model is designed on the principles of kinematic modeling by the Denavit-Hartenberg methodology for robotic manipulators [4]. There are in total 18 degrees of freedom (DOF): each arm has 5 DOF and each leg has 4 DOF (Fig. 1).
Figure 1. ADAMS plant model of the humanoid for simulation.
The destination coordinates are fed into the ADAMS plant, which has been configured to take angular velocities as input and to output the current joint angles. The ADAMS plant is executed in the MATLAB (Simulink) environment with a function file to control it. The stability of the humanoid was informed by previous work in this regard [5]-[8]. There are two separate methods of controlling the motion of a humanoid: Direct Control (DC) and Command Control (CC). Direct Control converts the 3D joint coordinates of the user to servo angles, whereas Command Control converts the information to a single command for the robot: the information sent is the destination positions for the tool frame, and the calculation of the joint angles is left to the model. To gain maneuvering options in terms of speed and simplicity for most cases, biped locomotion was rejected in favor of a wheeled maneuvering system.
The next step was to construct the hardware model. The construction material chosen was aluminium, and 6 servo motors rated 60 g / 17 kg / 0.14 s were used. For the base of the bot, four motors were used, two each rated 12 V, 250 rpm, 60 kg-cm and 12 V, 250 rpm, 30 kg-cm. To regulate the speed of the driving motors, PWM was implemented with motor drivers. The gross weight of the bot is approximately 10 kg, including the BECs (Battery Eliminator Circuits) and the power supply. LiPo batteries rated 11.1 V, 6000 mAh, 25-50C (qty 2) and 11.1 V, 850 mAh, 20C (qty 4) have been used for the power supply. The servos operate in the range 4.8-6.0 V, and thus BECs were used to convert 11.1 V to 6.0 V. The completed structure of the robot is shown in Fig. 2.
Figure 2. Complete structure of the robot.    Figure 3. Control architecture.
2.2 Control Architecture
As shown above (Fig. 3), for convenience the architecture is described in two parts: the AVR controller and ZigBee communication. The AVR microcontroller used is the ATmega32, an 8-bit MCU. It controls the servo motors and the driving motors. There are in total 10 motors to be controlled (6 servo + 4 shunt). Four MCUs have been used, of which three control the servos and the remaining one controls the driving motors. Each MCU has been interfaced with an LCD to continuously display its status.
ZigBee is the standard that defines a set of communication protocols for low-data-rate, very low-power applications. We use XBee RF modules for communication. These modules interface to a host device through a logic-level asynchronous serial port. The link between the XBee and the ATmega32 is via the UART (Universal Asynchronous Receiver/Transmitter). The data format and transmission speeds are configurable. The AVR CPU connects to the AVR UART via six registers: UDR, UCSRA, UCSRB, UCSRC, UBRRH and UBRRL.
2.3 Depth Image Processing & Gesture Interpretation.
Depth imaging is the central point of this publication, as it can be considered a giant leap in boosting accuracy during operator imitation. The sensor used is the Kinect, a motion-sensing input device by Microsoft for the Xbox 360 video game console. It is a horizontal bar connected to a small base with a motorized pivot, designed to be positioned lengthwise. The device features an "RGB camera, depth sensor and multi-array microphone running proprietary software", which provide full-body 3D motion capture, facial recognition and voice recognition capabilities. The depth sensor consists of an infrared laser projector combined with a monochrome CMOS sensor, which captures video data in 3D under any ambient light conditions. The sensing range of the depth sensor is adjustable, and the Kinect software is capable of automatically calibrating the sensor based on the physical environment, accommodating the presence of the user.
Figure 4. Kinect Sensor.
Third-party software and open-source drivers were used to gain compatibility with MATLAB. At the heart of the Kinect lies a depth camera that estimates the distance of any given point from the sensor from the near-IR light reflected off the object. In addition, an IR grid is projected across the scene, and the deformation of the grid is used to model surface curvature. Cues from the RGB and depth streams from the sensor are used to fit a stick-skeleton model to the human body. The reference frame for the Kinect sensor is a predefined location approximately at the center of the view: the horizontal right-hand direction of the observation denotes the x axis, the transverse axis denotes the y axis, and the z axis is the linear distance between the Kinect and the user (the farther the user, the higher the z value). We monitor the real-time positions and orientations of 15 different data points on the user's body, namely: Head, Neck, Left Shoulder, Left Elbow, Left Hand, Right Shoulder, Right Elbow, Right Hand, Torso, Left Hip, Left Knee, Left Foot, Right Hip, Right Knee and Right Foot. The sensor generates a 2D matrix of dimension 225 x 7 with the above-mentioned data points for a single person. The seven columns are User ID, Tracking Confidence, x, y, z (coordinates in mm) and X, Y (the pixel coordinates of the corresponding data point). Using the raw data points from the Kinect, vectors are constructed, and from those the angles between different links and the lengths of the links are found. The skeletal map is drawn with user-specified link and node colours, as shown in Figs. 5a and 5b.
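As a sketch of how such a frame might be unpacked (the 15-joint ordering and the column layout follow the description above; the function name and the list-of-lists frame representation are our own):

```python
JOINTS = ["Head", "Neck", "Left Shoulder", "Left Elbow", "Left Hand",
          "Right Shoulder", "Right Elbow", "Right Hand", "Torso",
          "Left Hip", "Left Knee", "Left Foot",
          "Right Hip", "Right Knee", "Right Foot"]

def joints_of_user(frame, user_id):
    """Pick out the 15 (x, y, z) joint positions (in mm) of one tracked
    user from the 225x7 frame: rows are data points, and the seven
    columns are [user id, tracking confidence, x, y, z, X, Y]."""
    rows = [row for row in frame if row[0] == user_id]
    return {name: tuple(row[2:5]) for name, row in zip(JOINTS, rows)}
```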
Figure 5a. Skeletal mapping in low-light conditions.
Figure 5b. Skeletal mapping in sufficient lighting.
The data points are then used to calculate and derive values ranging from link lengths to angular velocities, as discussed later.
2.4 Data transmission and Robot Control
As mentioned earlier, the technology defined by the ZigBee specification is intended to be simpler and less expensive than other WPANs, such as Bluetooth. ZigBee is targeted at radio-frequency (RF) applications that require a low data rate, long battery life and secure networking. One XBee of the pair is connected to the PC using an XBee USB adapter board, and the other is interfaced to the microcontroller. The data enters the module's UART from the serial port of the PC through the DI pin as an asynchronous serial signal. The signal idles high when no data is being transmitted. Each data byte consists of a start bit (low), 8 data bits (least significant first) and a stop bit.
Now that the information to be sent and the method of communication are ready, the next step is to work out an addressing technique so that the data transmitted stays in synchronization with the data received. To address the 10 actuated joints over a single duplex channel with an 8-bit word length, an addressing strategy had to be devised. Addressing 10 individual motors would ideally require 4 bits, and transmitting 8 bits of data to each motor would make the word length 12 bits, which is not available. So, instead of delivering the word in a single transmission, the information is broken into 2 nibbles of 4 bits each, and 2 consecutive transmissions are required to successfully transmit the 8 bits of data. With this split, 1 bit would have to be assigned to distinguish between the higher and the lower nibble, which again creates a problem, as the word length of a transmission is limited to 8 bits. Instead, two separate 4-bit addresses per motor are used to send the higher and the lower nibble of data from the source. This scheme allows a maximum of 256 different addressees to be addressed and 8 bits of information to be transmitted to each one in 2 consecutive transmissions. An analogy, for better understanding, is presented in Fig. 6.
Figure 6. Diagrammatic analogy of the communication method.
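A minimal sketch of this two-transmission scheme (the concrete address assignment, two consecutive 4-bit addresses per motor, is our assumption for illustration; the paper does not spell out the mapping):

```python
def encode_servo_value(motor_id, value):
    """Split an 8-bit value into two transmitted bytes.  Each byte
    carries a 4-bit address in its high nibble and a 4-bit data nibble
    in its low nibble; every motor owns two addresses, one for each
    half of its data (address assignment assumed)."""
    hi_addr, lo_addr = 2 * motor_id, 2 * motor_id + 1
    return [(hi_addr << 4) | ((value >> 4) & 0xF),
            (lo_addr << 4) | (value & 0xF)]

def decode_byte(byte):
    """Receiver side: recover (address, data nibble) from one byte."""
    return (byte >> 4) & 0xF, byte & 0xF
```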
After the reception of the control signal at the remotely located manipulators, the microcontrollers controlling the servos decode the servo angles from the control signal and implement them. Controlling a servo from a microcontroller requires no external driver such as an H-bridge; only a control signal needs to be generated to position the servo at any fixed angle. The control signal is 50 Hz (i.e., the period is 20 ms), and the width of the positive pulse controls the angle. This implementation of the servo angle completes one cycle of the process, which ends as the Kinect sensor begins feeding the next joint information to the control system. In this way, behavioral synchronization between the operator and the bot is achieved.
III. IMPLEMENTATION The dynamic process of behavioral synchronization through a humanoid begins with the derivation of real-time information about the joints of the user, be they prismatic (linear) or revolute (rotational). With as many as 10 degrees of freedom, the task becomes extremely tedious and requires tremendous processing power. One of the major problems in monitoring information about multiple joints is the definition of the reference frame with respect to which information about the other frames can be derived. Even though the reference for individual joints may vary, it is convenient to process all the joints with a single reference point. The Kinect sensor provides this information to the central control block using a combination of infrared imaging and an RGB camera: the infrared laser creates a depth profile of the area in view, and the RGB camera returns the RGB matrix. The sensor generates a 2D matrix of dimension 225 x 7 with the above-mentioned data points for a single person, with a maximum tracking capability of 15 people. The seven columns are User ID, Tracking Confidence, x, y, z (coordinates in mm) and X, Y (the pixel coordinates of the corresponding data point). Using this information we calculate the joint angles in different planes between different links, as well as the link lengths between data points.
Raw joint information alone is not enough for further operations, so the next step is to construct vectors between the joints. As an example, the construction of the vectors for the right arm is given below: connecting the right shoulder, the right elbow and the palm determines the joint angle at the right elbow.

Right shoulder = $(x_{rs}, y_{rs}, z_{rs})$
Right elbow = $(x_{re}, y_{re}, z_{re})$
Right palm = $(x_{rp}, y_{rp}, z_{rp})$

$\vec{V}_1 = (x_{rs}, y_{rs}, z_{rs}) - (x_{re}, y_{re}, z_{re})$
$\vec{V}_2 = (x_{rp}, y_{rp}, z_{rp}) - (x_{re}, y_{re}, z_{re})$

The angle between the two vectors is then calculated with a simple geometrical operation:

$$\text{Joint angle} = \cos^{-1}\!\left(\frac{\vec{V}_1 \cdot \vec{V}_2}{\lVert\vec{V}_1\rVert \, \lVert\vec{V}_2\rVert}\right)$$

Using the x, y and z coordinates derived from the Kinect sensor and the distance formula, the link lengths are found:

$$\text{length} = \sqrt{(x_{rs} - x_{re})^2 + (y_{rs} - y_{re})^2 + (z_{rs} - z_{re})^2}$$
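The two formulas above, the joint angle and the link length, can be sketched directly in code (the function names are ours):

```python
import math

def _norm(v):
    """Euclidean norm of a 3-vector."""
    return math.sqrt(sum(c * c for c in v))

def joint_angle(shoulder, elbow, palm):
    """Angle (radians) between Vect1 = shoulder - elbow and
    Vect2 = palm - elbow, as in the formulas above."""
    v1 = tuple(s - e for s, e in zip(shoulder, elbow))
    v2 = tuple(p - e for p, e in zip(palm, elbow))
    dot = sum(a * b for a, b in zip(v1, v2))
    return math.acos(dot / (_norm(v1) * _norm(v2)))

def link_length(p, q):
    """Euclidean distance between two data points (coordinates in mm)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
```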
After calculating the joint angles and link lengths, the next step in the behavioral synchronization is to realize these real-time parameters in the desired form. This step converts the parameters to actual servo angles; it is necessary for matching the real joint-angle values to the robot's servo values, because the real human joint angles range over 0-360 degrees while the robot's servo motors accept values from 0 to 255.
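A one-line version of that conversion (a simple linear scaling is assumed; the paper does not state the exact mapping):

```python
def to_servo_units(angle_deg):
    """Map a joint angle in [0, 360) degrees onto the robot's 8-bit
    servo range [0, 255], assuming linear scaling."""
    return round((angle_deg % 360.0) * 255.0 / 360.0)
```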
Now that the destination coordinates of the individual tool frames for the humanoid robot have been derived, two simultaneous steps follow: embedding the gesture movements into the virtual (software) model and into the real (hardware) model.
Since each arm has 3 DOF, the tool-frame coordinates x, y and z depend on the three angles $\theta_1$, $\theta_2$ and $\theta_3$, which can be expressed as

$$\begin{pmatrix} x \\ y \\ z \end{pmatrix} = T \begin{pmatrix} \theta_1 \\ \theta_2 \\ \theta_3 \end{pmatrix}$$

Here $T$ is the transformation matrix that relates the coordinates to the joint angles. Differentiating with respect to time,

$$\begin{pmatrix} \dot{x} \\ \dot{y} \\ \dot{z} \end{pmatrix} = J \begin{pmatrix} \dot{\theta}_1 \\ \dot{\theta}_2 \\ \dot{\theta}_3 \end{pmatrix}$$

where $J$ denotes the Jacobian matrix, $\dot{\theta}_1$, $\dot{\theta}_2$ and $\dot{\theta}_3$ are the joint angular velocities, and $\dot{x}$, $\dot{y}$ and $\dot{z}$ are the products of the error signal along the corresponding axis and the proportionality constant $k_p$:

$$\dot{x} = (x_{destination} - x)\,k_p$$
$$\dot{y} = (y_{destination} - y)\,k_p$$
$$\dot{z} = (z_{destination} - z)\,k_p$$

To evaluate the angular velocities, we use

$$\begin{pmatrix} \dot{\theta}_1 \\ \dot{\theta}_2 \\ \dot{\theta}_3 \end{pmatrix} = J^{-1} \begin{pmatrix} \dot{x} \\ \dot{y} \\ \dot{z} \end{pmatrix}$$

These angular velocities are then converted as discussed earlier and transmitted to the robot. The servo motors attached at the designated joints move accordingly, and operator imitation is achieved.
Figure 7. Imitation of its human operator by ACE, the semi-humanoid robot.
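One control step of the scheme above might look like this (a sketch assuming a square, invertible 3x3 Jacobian; `numpy` stands in for the MATLAB environment used in the paper, and the function name is ours):

```python
import numpy as np

def joint_rates(J, pos, dest, kp=1.0):
    """Resolved-rate step: the Cartesian error scaled by k_p gives the
    desired tool-frame velocity (x_dot, y_dot, z_dot), and the inverse
    Jacobian maps it to the joint angular velocities."""
    xdot = kp * (np.asarray(dest, float) - np.asarray(pos, float))
    # Solving J @ theta_dot = xdot is preferred over forming J^{-1}.
    return np.linalg.solve(np.asarray(J, float), xdot)
```

Near singular configurations of the arm, a damped least-squares solve would be the usual refinement of this step.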
IV. CONCLUSION The implemented architecture offers a simple, low-cost and robust solution for controlling a semi-humanoid with human gestures. The complete setup requires the synchronization of software and hardware, and the problems mentioned above were faced in setting it up. This project can serve as a development platform for interaction between a human and a bot via gestures, and with better-quality resources more sophistication can be built in. Presently, biped motion and a holding mechanism for the hands are being worked on. Legged motion is expected to let the robot maneuver over uneven terrain more easily than wheels, and the picking/holding mechanism would increase its applicability in industry manifold. Thus the greater goal of attaining better man-machine teams for dynamic environments can be realized with precision.
ACKNOWLEDGEMENTS We acknowledge the research facilities extended to us at the Centre for Artificial Intelligence and Robotics (DRDO), Bengaluru, for carrying out the simulations and learning other mechanical nuances of humanoids. We also thank the faculty advisor of the ITER Robotics Club, Asst. Prof. Farida A. Ali, for her continuous guidance and support over the period of this research.
REFERENCES
[1] David Kortenkamp, Eric Huber and R. Peter Bonasso, "Recognizing and Interpreting Gestures on a Mobile Robot", Thirteenth National Conference on Artificial Intelligence (AAAI), 1996.
[2] Stefan Waldherr, Roseli Romero and Sebastian Thrun, "A Gesture Based Interface for Human-Robot Interaction", Kluwer Academic Publishers, 2000.
[3] Hideaki Kuzuoka, Shin'ya Oyama, Keiichi Yamazaki, Kenji Suzuki and Mamoru Mitsuishi, "GestureMan: A Mobile Robot that Embodies a Remote Instructor's Actions", Proceedings of CSCW '00, December 2-6, 2000.
[4] John J. Craig, Introduction to Robotics, pp. 70-74, 3rd Edition, Pearson Education International, 2005.
[5] Vitor M. F. Santos and Filipe M. T. Silva, "Engineering Solutions to Build an Inexpensive Humanoid Robot Based on a Distributed Control Architecture", 5th IEEE-RAS International Conference on Humanoid Robots, 2005.
[6] "Engineering Solutions to Build an Inexpensive Humanoid Robot Based on a Distributed Control Architecture", 5th IEEE-RAS International Conference on Humanoid Robots, 2005.
[7] V. Santos, F. Silva, "Development of a Low Cost Humanoid Robot: Components and Technological Solutions", in Proc. 8th International Conference on Climbing and Walking Robots, CLAWAR 2005, London, UK, 2005.
[8] J.-H. Kim et al., "Humanoid Robot HanSaRam: Recent Progress and Developments", J. of Comp. Intelligence, Vol. 8, No. 1, pp. 45-55, 2004.
[9] Jung-Hoon Kim and Jun-Ho Oh, "Realization of dynamic walking for the humanoid robot platform KHR-1", Advanced Robotics, Vol. 18, No. 7, pp. 749-768, 2004.
[10] K. Hirai et al., "The Development of Honda Humanoid Robot", Proc. IEEE Int. Conf. on Robotics and Automation, pp. 1321-1326, 1998.
[11] [Kinect] Use With Matlab, www.timzaman.nl
[12] KINECTHACKS, www.kinecthacks.com
IOSR Journal of Electrical and Electronics Engineering (IOSRJEEE)
ISSN : 2278-1676 Volume 1, Issue 1 (May-June 2012), PP 38-48 www.iosrjournals.org
DVR Based Compensation of Voltage Sag due to Variations of
Load: A Study on Analysis of Active Power
Anita Pakharia1, Manoj Gupta2
1Assistant Professor, Department of Electrical Engineering, Global College of Technology, Jaipur, Rajasthan, India
2Professor, Department of Electrical Engineering, Poornima College of Engineering, Jaipur, Rajasthan, India
ABSTRACT : The Dynamic Voltage Restorer (DVR) has become very popular in recent years for the compensation of voltage sag and swell. Voltage sag and swell are severe power quality problems for an industrial customer and need urgent attention. Among the various methods for their compensation, one of the most popular is the DVR, which is used in both low-voltage and medium-voltage applications, and it is the main focus of this work. The DVR compensates for voltage sag by injecting voltage, as well as power, into the system. Its compensation capability is mainly influenced by the load conditions and by the voltage dip to be compensated. In this work the DVR is designed and simulated for sag compensation with the help of MATLAB Simulink. An efficient control technique (Park's transformation) is used for the mitigation of voltage sag, through which optimized performance of the DVR is obtained. The performance of the DVR is analyzed under various conditions of load active and reactive power at a particular level of DC energy storage: the load parameters are varied and the results are analyzed on the basis of the output voltages.
Keywords — Structure and control technique for DVR, DVR test system, power quality.
I. INTRODUCTION Power quality (PQ) has attained considerable attention in the last decade due to the large penetration of power-electronics-based loads and microprocessor-controlled loads. On the one hand these devices introduce power quality problems, and on the other hand they mal-operate due to the induced power quality problems. PQ disturbances cover a broad frequency range with significantly different magnitude variations and can be non-stationary; thus, appropriate techniques are required to compensate for these events/disturbances [1]. The growing concern for power quality has led to the development of a variety of devices designed to mitigate power disturbances, primarily voltage sag and swell. Voltage sag and swell are the most widespread power quality issues affecting distribution systems, especially industries, where the associated losses can reach very high values; even a short and shallow voltage sag can produce the dropout of a whole industry [2]. In general, it is possible to consider voltage sag and swell as the origin of 10 to 90% of power quality problems. The main causes of voltage sag are faults and short circuits, lightning strokes, and inrush currents; swell can occur due to a single line-to-ground fault on the system and can also be generated by sudden load decreases, which can result in a temporary voltage rise on the unfaulted phases [1] [2].
Voltage sag and swell are severe problems for an industrial customer and need urgent attention. Among several devices, the Dynamic Voltage Restorer (DVR) is a novel custom power device proposed to compensate for voltage disturbances in a distribution system. The DVR is among the most efficient and effective power devices used in power distribution networks: its appeal includes lower cost, smaller size, and fast dynamic response to the disturbance. Our main focus in this work is the DVR [3]. This device uses a series-connected VSC to inject a controlled voltage (controlled amplitude and phase angle) between the point of common coupling (PCC) and the load [3] [4].
For proper voltage sag and swell compensation, it is necessary to derive a suitable and fast control scheme for the inverter switching. Here, we develop the simulation model using MATLAB SIMULINK and discuss simulation results under different load conditions.
II. STRUCTURE AND CONTROL TECHNIQUE FOR DVR The DVR is connected in the utility's primary distribution feeder. In this location the DVR protects a certain group of customers from faults on the adjacent feeder, as shown in Fig. 1. The point of common coupling (PCC) feeds the load and the fault. The voltage sag in the system is calculated by using the voltage divider rule [5]. The general configuration of the DVR consists of:
(a) Series injection transformer (b) Energy storage unit
(c) Inverter circuit (d) Filter unit
(e) DC charging circuit (f) A control and protection system
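The voltage-divider estimate of the sag mentioned above can be sketched as follows (the impedance split in the example comment is illustrative, not a value from the paper):

```python
def sag_pu(z_source, z_fault):
    """Voltage-divider rule for the during-fault PCC voltage:
    V_sag = Z_fault / (Z_source + Z_fault), in per unit of the
    pre-fault voltage.  Impedances may be complex."""
    return z_fault / (z_source + z_fault)

# Example: a fault impedance one ninth of the source impedance
# leaves only 10% of nominal voltage at the PCC.
```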
The energy storage device is the most expensive component of the DVR; therefore it is essential to use a mitigation strategy with which the DVR can operate with the minimum energy storage requirement. Different voltage sag mitigation strategies, including pre-sag, in-phase, and phase-advance compensation, are described in this work. The injection of active power by the DVR is tied to the energy storage: a DVR injecting a large amount of active power requires a larger energy storage, leading to a more expensive scheme. Therefore, the energy storage can be optimized by optimizing the DVR's active power injection. In the case of zero active power injection, the DVR injects reactive power only to compensate for the voltage sag [6] [7].
Fig. 1 Location of DVR
Control of DVR is performed by using d-q coordinate system. This transformation allows DC components,
which is much simpler than AC components.
The dqo transformation or Park’s transformation is used to control of DVR. The dqo method gives the sag depth
and phase shift information with start and end times. The quantities are expressed as the instantaneous space
vectors. Firstly convert the voltage from a-b-c reference frame to d-q-o reference [8].
Vd
Vq
Vo
=
cos θ cos θ −2π
3 1
−sin(θ) −sin θ −2π
3 1
1
2
1
2
1
2
Va
Vb
Vc
(1)
Equation (1) defines the transformation from the three-phase a, b, c system to the stationary dq0 frame. In this
transformation, phase A is aligned with the d-axis, which is in quadrature with the q-axis [9]. The angle θ is
defined between phase A and the d-axis. The error signal is used as a modulation signal to generate a
commutation pattern for the power switches (IGBTs) constituting the voltage source converter. The
commutation pattern is generated by means of the sinusoidal pulse-width modulation (SPWM) technique, and
the voltages are controlled through the modulation [8], [9].
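The abc-to-dq0 mapping of Eq. (1) can be sketched numerically; the snippet below uses the amplitude-invariant 2/3 scaling and shows why the d-q frame simplifies control: a balanced three-phase set maps to constant DC values.

```python
import math

def abc_to_dq0(va, vb, vc, theta):
    """Park (abc -> dq0) transform of Eq. (1); phase A aligned with the d-axis."""
    k = 2.0 / 3.0  # amplitude-invariant scaling
    vd = k * (va * math.cos(theta)
              + vb * math.cos(theta - 2 * math.pi / 3)
              + vc * math.cos(theta + 2 * math.pi / 3))
    vq = -k * (va * math.sin(theta)
               + vb * math.sin(theta - 2 * math.pi / 3)
               + vc * math.sin(theta + 2 * math.pi / 3))
    v0 = (va + vb + vc) / 3.0
    return vd, vq, v0

# A balanced three-phase set maps to constant DC values in the rotating frame.
theta = 0.7  # arbitrary angle
va = math.cos(theta)
vb = math.cos(theta - 2 * math.pi / 3)
vc = math.cos(theta + 2 * math.pi / 3)
vd, vq, v0 = abc_to_dq0(va, vb, vc, theta)
print(round(vd, 6), round(vq, 6), round(v0, 6))  # vd ≈ 1, vq ≈ 0, v0 ≈ 0
```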
III. DVR TEST SYSTEM
The electrical circuit model of the DVR test system is shown in fig. 2, and the system parameters are listed in
table 1. Voltage sag is created at the load terminals via a three-phase fault. The load voltage is sensed and
passed through a sequence analyzer [10], and its magnitude is compared with a reference voltage. The
MATLAB simulation model of the DVR is shown in fig. 3, which is displayed at the end of this paper. The
system comprises a 15 kV, 50 Hz generator feeding transmission lines through a 3-winding transformer
connected in Y/Δ/Δ, 15/115/11 [10], [11].
DVR Based Compensation of Voltage Sag due to Variations of Load: A Study on Analysis of Active
Power
www.iosrjournals.org 40 | Page
Fig. 2 Circuit Model of DVR Test System
TABLE 1 SYSTEM PARAMETERS
S.No.  System Quantities               Ratings
1.     Main supply voltage per phase   15 kV
2.     Line impedance                  Ls = 0.006 H, Rs = 0.002 Ω
3.     Series transformer turns ratio  1:1
4.     Filter inductance               1 mH
5.     Filter capacitance              0.50 µF
6.     Load resistance                 40 Ω
7.     Load inductance                 0.05 H
8.     Line frequency                  50 Hz
9.     Inverter specifications         IGBT based, 3 arms, 12 pulse, carrier frequency = 1024 Hz, sample time = 0.5 s
Here, the outputs of a three-phase half-bridge inverter are connected to the utility supply series transformer.
Once a voltage disturbance occurs, with the aid of the dq0-transformation-based control scheme (Park's
transformation), the inverter output can be steered in phase with the incoming ac source while the load is
maintained constant. As for the filtering scheme of the proposed method, the inverter output is fitted with
capacitors and inductors [11], [12].
IV. SIMULATION RESULTS
The Dynamic Voltage Restorer is simulated using MATLAB SIMULINK, and the results are analyzed on the
basis of the output voltage. Various cases of different active power of load at different dc energy storage are
considered to study the impact on the sag waveform and the compensated waveform, as shown in figs. 4 to 15.
These cases are listed in table 2 and discussed below.
Case I: A three-phase fault is created via a fault resistance of 0.55 Ω, with load 1 at 5 kW, 100 VAR and load 2
at 10 kW, 100 VAR, which results in a voltage sag of 10.07 %. The transition time for the fault is from 0.1 s to
0.14 s, as shown in fig. 4. Figs. 5 and 6 show the voltage injected by the DVR and the corresponding load
voltage. The simulation results reveal that 99.43 % of the sag is compensated, with a deviation of 0.57 % from
the three-phase source voltage, using 600 V of dc energy storage.
Fig. 4. Three phase voltage sag at load 5 kW, 100 VAR: (a) source voltage; (b) RMS value of source voltage.
Fig. 5. Three phase injected voltage at load 5 kW, 100 VAR: (a) injected voltage; (b) RMS value of injected voltage.
Fig. 6. Three phase compensated voltage at load 5 kW, 100 VAR: (a) load (improved) voltage; (b) RMS value of load (improved) voltage.
Case II: A three-phase fault is created via a fault resistance of 0.55 Ω, with load 1 at 10 kW, 100 VAR and load
2 at 10 kW, 100 VAR, which results in a voltage sag of 28 %. The transition time for the fault is from 0.1 s to
0.14 s, as shown in fig. 7. Figs. 8 and 9 show the voltage injected by the DVR and the corresponding load
voltage. The simulation results reveal that 98.46 % of the sag is compensated, with a deviation of 1.54 % from
the three-phase source voltage, using 3.1 kV of dc energy storage.
Fig. 7. Three phase voltage sag at load 10 kW, 100 VAR: (a) source voltage; (b) RMS value of source voltage.
Fig. 8. Three phase injected voltage at load 10 kW, 100 VAR: (a) injected voltage; (b) RMS value of injected voltage.
Fig. 9. Three phase compensated voltage at load 10 kW, 100 VAR: (a) load (improved) voltage; (b) RMS value of load (improved) voltage.
Case III: A three-phase fault is created via a fault resistance of 0.55 Ω, with load 1 at 50 kW, 100 VAR and
load 2 at 10 kW, 100 VAR, which results in a voltage sag of 80 %. The transition time for the fault is from 0.1 s
to 0.14 s, as shown in fig. 10. Figs. 11 and 12 show the voltage injected by the DVR and the corresponding load
voltage. The simulation results reveal that 97.97 % of the sag is compensated, with a deviation of 2.03 % from
the three-phase source voltage, using 8.5 kV of dc energy storage.
Fig. 10. Three phase voltage sag at load 50 kW, 100 VAR: (a) source voltage; (b) RMS value of source voltage.
Fig. 11. Three phase injected voltage at load 50 kW, 100 VAR: (a) injected voltage; (b) RMS value of injected voltage.
Fig. 12. Three phase compensated voltage at load 50 kW, 100 VAR: (a) load (improved) voltage; (b) RMS value of load (improved) voltage.
Case IV: A three-phase fault is created via a fault resistance of 0.55 Ω, with load 1 at 1 MW, 100 VAR and
load 2 at 10 kW, 100 VAR, which results in a voltage sag of 97.12 %. The transition time for the fault is from
0.1 s to 0.14 s, as shown in fig. 13. Figs. 14 and 15 show the voltage injected by the DVR and the corresponding
load voltage. The simulation results reveal that 99.39 % of the sag is compensated, with a deviation of 0.61 %
from the three-phase source voltage, using 13 kV of dc energy storage.
Fig. 13. Three phase voltage sag at load 1 MW, 100 VAR: (a) source voltage; (b) RMS value of source voltage.
Fig. 14. Three phase injected voltage at load 1 MW, 100 VAR: (a) injected voltage; (b) RMS value of injected voltage.
Fig. 15. Three phase compensated voltage at load 1 MW, 100 VAR: (a) load (improved) voltage; (b) RMS value of load (improved) voltage.
TABLE 2 VARIATIONS WITH PARAMETERS OF ACTIVE POWER OF LOAD
Case  Load 1          Load 2          Fault Resistance  Sag      DC Energy Storage  Compensated Voltage  Deviation
I     5 kW, 100 VAR   10 kW, 100 VAR  0.55 Ω            10.07 %  600 V              99.43 %              0.57 %
II    10 kW, 100 VAR  10 kW, 100 VAR  0.55 Ω            28 %     3.1 kV             98.46 %              1.54 %
III   50 kW, 100 VAR  10 kW, 100 VAR  0.55 Ω            80 %     8.5 kV             97.97 %              2.03 %
IV    1 MW, 100 VAR   10 kW, 100 VAR  0.55 Ω            97.12 %  13 kV              99.39 %              0.61 %
Hence, the simulation results show that increasing the dc storage decreases the effect of the voltage sag on the
output voltage. As can be seen from table 2, incrementing load 1 while keeping load 2 constant continuously
increases the voltage sag, which can be compensated by a proportional increase in the dc energy storage.
V. CONCLUSION
The foregoing analysis shows the impact of the DVR and dc energy storage on voltage sag compensation.
Considering different active power of load with varying dc energy storage shows that increasing the load
increases the voltage sag, which can be compensated by adjusting the dc energy storage. The simulation results
are compared on the basis of the output voltages, and the output waveforms show that the DVR is fairly
efficient at compensating sag and swell. Further dimensions can be added to this work by analyzing other power
quality disturbances such as voltage swell, harmonics, etc. Various sag compensation cases are discussed while
varying the active power of the load.
Fig. 3. Simulation model of the DVR
REFERENCES
[1] Mladen Kezunovic, "A Novel Software Implementation Concept for Power Quality Study", IEEE Transactions on Power Delivery, vol. 17, no. 2, April 2002.
[2] Boris Bizjak, Peter Planinšič, "Classification of Power Disturbances using Fuzzy Logic", University of Maribor - FERI, Smetanova 17, Maribor, SI-2000.
[3] J. G. Nielsen, M. Newman, H. Nielsen, and F. Blaabjerg, "Control and Testing of DVR at Medium Voltage", IEEE Transactions on Power Electronics, vol. 19, no. 3, May 2004.
[4] Burke J.J., Grifith D.C., and Ward J., "Power quality - Two different perspectives", IEEE Trans. on Power Delivery, vol. 5, pp. 1501-1513, 1990.
[5] Margo P., M. Heri P., M. Ashari, Hendrik M. and T. Hiyama, "Compensation of Balanced and Unbalanced Voltage Sags using Dynamic Voltage Restorer Based on Fuzzy Polar Control", International Journal of Applied Engineering Research, ISSN 0973-4562, vol. 3, no. 3, pp. 879-890, 2008.
[6] Y. W. Lie, F. Blaabjerg, D. M. Vilathgamuwa, P. C. Loh, "Design and Comparison of High Performance Stationary-Frame Controllers for DVR Implementation", IEEE Transactions on Power Electronics, vol. 22, no. 2, March 2007.
[7] P.W. Lehn and M.R. Iravani, "Experimental Evaluation of STATCOM Closed Loop Dynamics", IEEE Trans. Power Delivery, vol. 13, no. 4, pp. 1378-1384, October 1998.
[8] Rosli Omar, Nasrudin Abd Rahim, Marizan Sulaiman, "Modeling and simulation for voltage sags/swells mitigation using dynamic voltage restorer (DVR)", Journal of Theoretical and Applied Information Technology, JATIT, 2005-2009.
[9] S. Chen, G. Joos, L. Lopes, and W. Guo, "A nonlinear control method of dynamic voltage restorers", IEEE 33rd Annual Power Electronics Specialists Conference, pp. 88-93, 2002.
[10] H.P. Tiwari, Sunil Kumar Gupta, "Dynamic Voltage Restorer Based on Load Condition", International Journal of Innovation, Management and Technology, vol. 1, no. 1, ISSN: 2010-0248, April 2010.
[11] Toni Wunderline and Peter Dhler, "Power Supply Quality Improvement with A Dynamic Voltage Restorer (DVR)", IEEE Transaction, 1998.
[12] Carl N.M. Ho, Henery and S.H. Chaung, "Fast Dynamic Control Scheme for Capacitor-Supported Dynamic Voltage Restorer: Design Issues, Implementation and Analysis", IEEE Transaction, 2007.
1 Anita Pakharia obtained her B.E. (Electrical Engineering) in 2005 and is pursuing an M. Tech. in Power Systems
(batch 2009) from Poornima College of Engineering, Jaipur. She is presently Assistant Professor in the Department of
Electrical Engineering at the Global College of Technology, Jaipur. She has more than 6 years of teaching experience
and has published five papers in national and international conferences. Her fields of interest include power systems,
generation of electrical power, power electronics, drives, and non-conventional energy sources.
2 Manoj Gupta received his B.E. (Electrical) and M. Tech. degrees from Malaviya National Institute of Technology
(MNIT), Jaipur, India in 1996 and 2006 respectively. In 1997, he joined Pyrites, Phosphates and Chemicals Ltd. (a
Govt. of India undertaking), Sikar, Rajasthan as an Electrical Engineer. In 2001, he joined the Department of Electrical
Engineering, Poornima College of Engineering, Jaipur, India as a Lecturer, and he now works there as an Associate
Professor. His fields of interest include power quality, signal/image processing, and electrical machines and drives.
Mr. Gupta is a life member of the Indian Society for Technical Education (ISTE) and the Indian Society of Lighting
Engineers (ISLE).
IOSR Journal of Electrical and Electronics Engineering (IOSRJEEE)
ISSN : 2278-1676 Volume 1, Issue 1 (May-June 2012), PP 49-52 www.iosrjournals.org
www.iosrjournals.org 49 | Page
Performance Analysis of Coded OFDM Signal for Radio over
Fiber Transmission
Shikha Mahajan1, Naresh Kumar2
1(ECE, U.I.E.T/ Panjab University, India) 2(Assistant Professor, ECE, U.I.E.T/Panjab University, India)
ABSTRACT: The main aim of this paper is to analyze the influence of the modulation techniques on the
performance of a coded orthogonal frequency-division multiplexing (COFDM) based Radio over Fiber (RoF)
system. The modelling of the COFDM scheme for the RoF system has been done using the Optisystem
software. Both analysis and simulation results are presented for the 16-QAM and QPSK modulation
techniques, and performance is analyzed for an optical amplifier and an Erbium-Doped Fiber Amplifier
(EDFA). The quality factor (Q) of the COFDM signal transmitted through the fiber-optic link is examined.
The analysis for radio over multimode fiber with a link length of 2 km (medium-haul communication) shows
that the optical amplifier gives better results than the EDFA for either modulation scheme over the set of bit rates.
Keywords – COFDM, Convolutional Encoder, Multimode Fiber, Quality Factor, Radio over Fiber
I. INTRODUCTION
RoF technology is one of the recent advancements in optical communication, finding applications in cable TV
networks, base-station links for mobile communication, and antenna remoting [1]. In particular, the radio over
multimode fiber system (ROMMF) has recently gained much attention for its suitability to deliver high-purity
signals in analog and digital format for short-reach coverage such as wireless local area networks and
ultrawideband radio signals [2].
COFDM has been specified for digital broadcasting systems, for both Digital Audio Broadcasting
(DAB) and Digital Video Broadcasting (DVB-T) [3]. COFDM is particularly well matched to these applications,
since it is very tolerant of the effects of multipath (provided a suitable guard interval is used). Indeed, it is not
limited to 'natural' multipath as it can also be used in so-called Single-Frequency Networks (SFNs) in which all
transmitters radiate the same signal on the same frequency. A receiver may thus receive signals from several
transmitters, normally with different delays and thus forming a kind of 'unnatural' additional multipath.
Provided the range of delays of the multipath (natural or 'unnatural') does not exceed the designed tolerance of
the system (slightly greater than the guard interval) all the received-signal components contribute usefully. For
short-distance applications, COFDM can be used efficiently for high data rate signal delivery and to tolerate the
effect of modal dispersion. Also, the exponent refractive index profile effect has been studied and shown that
the best performance can be achieved near the optimum value of exponent refractive index profile [4]. COFDM system employs conventional forward error correction codes and interleaving for protection
against burst errors caused by deep fades in the channel. Often it is concatenated with a block code for
improving the performance [5]. A Viterbi decoder for decoding a convolutional code is easy to implement. Codes
of different rates are used in conjunction with different modulation schemes to support different quality of
services. While error correction codes provide coding gain in the system, interleaving provides diversity gain.
For COFDM, when a deep fade occurs in the channel the bits within the deep fade are erased. Interleaving the
bits across different frequency bins distributes the energy within a symbol among different sub-carriers. Since
distinct sub-carriers undergo different fading conditions, the probability that all the bits corresponding to a
symbol are lost, decreases significantly. An uncoded OFDM system cannot exploit frequency diversity.
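The interleaving argument above can be illustrated with a toy block interleaver; the matrix dimensions below are illustrative, since the paper does not specify its interleaver.

```python
def interleave(bits, rows, cols):
    """Block interleaver: write row-wise into a rows x cols array, read column-wise."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    """Inverse operation: write column-wise, read row-wise."""
    assert len(bits) == rows * cols
    out = [0] * (rows * cols)
    i = 0
    for c in range(cols):
        for r in range(rows):
            out[r * cols + c] = bits[i]
            i += 1
    return out

data = list(range(12))
tx = interleave(data, rows=3, cols=4)
assert deinterleave(tx, 3, 4) == data
# A burst hitting tx[0:3] lands on original positions 0, 4 and 8 after
# de-interleaving, so adjacent channel errors are spread across codewords.
```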
Modal dispersion is the dominant performance-limiting factor in MMFs. The advantage of MMFs over
single mode fibers (SMFs) is their relaxed coupling tolerance and their larger core diameters, which lead to reduced system-wide installation and maintenance costs. MMFs allow the propagation of multiple guided modes, albeit
with different propagation constants. The difference in the mode propagation times leads to intermodal
dispersion, which severely limits the fiber bandwidth.
In ROF, COFDM impact has been demonstrated with MMF as a promising modulation scheme for
mitigating the modal dispersion penalty [6]. By implementing coding and interleaving across sub-carriers, the
effect of multipath fading is minimised, since interleaving reorganises the bits so as to avoid the effects of
fading. Convolutional coding is commonly used in COFDM systems, and it can give coding gain at different
coding rates [7]. Thus, COFDM makes the transmitted signal robust to multipath fading and distortion. It has
also been observed in [8] that the frequency selectivity becomes significant with increasing length of the MMF.
This reveals the need for COFDM.
II. SIMULATION SETUP FOR RADIO OVER FIBER SYSTEM
An RoF system is very cost effective because signal processing is localized in the central station and a simple
base station is used. It realizes transparent transformation between the RF signal and the optical signal. Fig. 1
shows the implementation approach for transmitting convolutionally encoded data over a multimode fiber.
Fig. 1. Block Diagram of COFDM based RoF System
A pseudo-random bit sequence generator (PRBSG) data format is used, which is convolutionally encoded at
rate 1/2 with generator polynomials 133 and 171 (octal). For downlink simulation, a narrow-bandwidth
continuous wave (CW) at 1300 nm from a laser diode is modulated via a Mach-Zehnder Modulator (MZM).
An ideal pre-amplifier before optoelectronic conversion by a PIN photodiode with centre frequency of 1300 nm,
and a post-amplifier, are used in the uplink simulation. The optical link is a multimode fiber of length 2 km. In
the receiver section, the optical signal is detected by a PIN photodiode. Q-factor values of the signals are
measured by a BER analyzer at the base station.
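The BER analyzer's Q-factor readings relate to bit-error rate through the usual Gaussian-noise approximation, BER = ½ erfc(Q/√2). A small sketch; the threshold Q = 6 for BER ≈ 1e-9 is a common rule of thumb, not a figure from this paper:

```python
import math

def q_to_ber(q):
    """Estimated BER for a linear Q factor under the Gaussian-noise approximation."""
    return 0.5 * math.erfc(q / math.sqrt(2))

# Q = 6 corresponds to a BER of roughly 1e-9, a common acceptance
# threshold for optical links.
for q in (4, 5, 6, 7):
    print(q, f"{q_to_ber(q):.2e}")
```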
III. COFDM MODEL
The implementation process of COFDM is as follows:
The data bits are convolutionally encoded with a code rate 1/2.
The encoded bits are bit-interleaved and then mapped into the in-phase and quadrature components of the
complex symbol.
The complex symbols are modulated by OFDM.
At the receiver, direct detection is used, and the decoding process employs the hard-decision Viterbi decoding
algorithm. Table 1 summarises the COFDM parameters.
Table 1. COFDM parameters
Parameter              Value
Code rate              1/2
Generator polynomials  133, 171 (octal)
Constraint length      7
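The encoder of Table 1 can be sketched as below, reading the source text's "1338, 1718" as 133 and 171 with a subscript 8, i.e. octal. The tap ordering (newest bit at the MSB end of each generator) is one common convention; other tools may reverse it.

```python
# Rate-1/2 convolutional encoder with constraint length k = 7 and generator
# polynomials 133 and 171 in octal.

def conv_encode(bits, g1=0o133, g2=0o171, k=7):
    reg = [0] * k  # shift register, reg[0] = newest input bit
    out = []
    for b in bits:
        reg = [b] + reg[:-1]
        for g in (g1, g2):
            taps = [(g >> (k - 1 - i)) & 1 for i in range(k)]
            out.append(sum(t * r for t, r in zip(taps, reg)) % 2)
    return out

# The impulse response interleaves the two generator tap patterns.
print(conv_encode([1, 0, 0, 0, 0, 0, 0]))
# [1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1]
```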
IV. MMF RESPONSE MODEL
The implementation process of the MMF is as follows:
The COFDM signal generated by the OFDM transmitter is injected into the laser diode (LD).
The optical intensity signal out of the LD is fed into an MMF followed by a photodetector.
Performance Analysis of Coded OFDM Signal for Radio over Fiber Transmission
www.iosrjournals.org 51 | Page
The received COFDM signal is demodulated and decoded to retrieve the data. Table 2 summarizes MMF
parameters.
Table 2. MMF parameters
Parameter          Value
Fiber dispersion   -100 ps/(nm·km)
Fiber attenuation  1.25 dB/km
Wavelength         1300 nm
Fiber length       2 km
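The per-link figures implied by Table 2 can be checked with a few lines of arithmetic:

```python
# Link figures implied by Table 2; values are taken straight from the table.
length_km = 2.0
attenuation_db_per_km = 1.25
dispersion_ps_per_nm_km = -100.0

total_loss_db = attenuation_db_per_km * length_km                 # 2.5 dB over the link
total_dispersion_ps_per_nm = dispersion_ps_per_nm_km * length_km  # -200 ps/nm
surviving_power_fraction = 10 ** (-total_loss_db / 10)            # linear power ratio
print(total_loss_db, total_dispersion_ps_per_nm, round(surviving_power_fraction, 3))
```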
V. RESULTS AND DISCUSSION
In this section we investigate the impact of the modulation techniques on the RoF system. For the RoF
approach, multiple data signals are analyzed using optical and RF spectrum analyzers.
Fig. 2. Comparative analysis of QAM using optical amplifier and EDFA
From Fig. 2 it can be seen that, for the bit rates used in the simulation, QAM with the optical amplifier
gives a better quality factor than QAM with the EDFA. As the bit rate increases, the quality factor degrades for
both configurations; for values greater than or equal to 800 Mbps it falls below the value required for
communication purposes. The fall in the Q factor is due to the increase in dispersion as the bit rate increases.
Fig. 3. Comparative analysis of QPSK using optical amplifier and EDFA
From Fig. 3 it can be seen that, for the bit rates used in the simulation, QPSK with the optical amplifier
gives better quality factor values than QPSK with the EDFA. As the bit rate increases, the quality factor
degrades for both configurations; for values greater than or equal to 800 Mbps it falls below the value required
for communication purposes. The fall in the Q factor is due to the increase in dispersion as the bit rate increases.
From figures 2 and 3, it can be concluded that either modulation technique, QAM or QPSK, gives
better performance with the optical amplifier than with the EDFA.
VI. CONCLUSION
From the results for radio over MMF with a link length of 2 km (medium-haul communication), based upon the
comparative analysis between the QAM and QPSK modulation techniques with an optical amplifier and an
EDFA for the same set of increasing bit rates, the optical amplifier gives better results than the EDFA for either
modulation technique. Beyond 800 Mbps the quality factor falls below the permissible value for
communication.
REFERENCES
[1] Hamed Al Raweshidy, Shozo Komaki, Radio over fiber technologies for mobile communications networks, pp. 128-132 (2002).
[2] Guennec, Y.l., Pizzinat, A., Meyer, S., et al., Low-cost transparent radio-over-fiber system for in-building distribution of UWB signals, pp. 2649-2657 (2009).
[3] Stott, J.H., The DVB terrestrial (DVB-T) specification and its implementation in a practical modem, pp. 255-260 (1996).
[4] A.M. Matarneh, S.S.A. Obayya, Bit-error ratio performance for radio over multimode fiber system using coded orthogonal frequency division multiplexing, pp. 151–157 (2011).
[5] Arun Agarwal, Kabita Agarwal, Design and Simulation of COFDM for High Speed Wireless Communication and Performance
Analysis, pp. 22-28 (2011).
[6] Dixon, B.J., R.D., Iezekiel, S., Orthogonal frequency-division multiplexing in wireless communication systems with multimode fiber feeds, pp. 1404–1409 (2001).
[7] Prasad, R., OFDM for wireless communication systems, Artech House, (2004).
[8] Aser M. Matarneh, S. S. A. Obayya, I. D. Robertson, Coded orthogonal frequency division multiplexing transmission over graded
index multimode fiber, (2007).
IOSR Journal of Electrical and Electronics Engineering (IOSRJEEE)
ISSN : 2278-1676 Volume 1, Issue 1 (May-June 2012), PP 53-61 www.iosrjournals.org
www.iosrjournals.org 53 | Page
Performance Analysis of Linear Block Code, Convolution code
and Concatenated code to Study Their Comparative
Effectiveness.
Mr. Vishal G. Jadhao1, Prof. Prafulla D. Gawande2
1, 2 (Department of Electronics and Telecommunication Engineering, Sipna College of Engineering and
Technology, Amravati - 444701 (M.S.), India)
ABSTRACT: In data communication, codes are used for security and effectiveness throughout the network.
Here the linear block code (LBC), convolution code (COC), and concatenated code (CC) are used. The work
presented here makes a comparative study of the effectiveness of these codes for secure data analysis. Error
coding is a method of detecting and correcting errors to ensure information is transferred intact from its
source to its destination. Error coding uses mathematical formulas to encode data bits at the source into
longer bit words for transmission, and the code word can be decoded at the receiver side. The extra bits in the
code word provide redundancy that, according to the coding scheme used, allows the destination to detect and
correct errors, subject to the communication medium's expected error rate, the signal-to-noise ratio, and
whether or not data retransmission is possible. The implemented system can show the exact erroneous bit in
the given data, displaying errors bitwise. Faster processors and better communications technology make
more complex coding schemes, with better error detecting and correcting capabilities, possible for smaller
embedded systems, allowing for more robust communications.
Keywords: Concatenated code error control coding technique, Convolutional codes error control coding
technique, Embedded Communication, Fault Tolerant Computing, linear block codes error control coding
technique, Real-Time System, Software Reliability
I. INTRODUCTION Environmental interference and physical defects in the communication medium can cause random bit
errors during data transmission. Error coding is a method of detecting and correcting these errors to ensure
information is transferred intact from its source to its destination. Error coding is used for fault tolerant
computing in computer memory, magnetic and optical data storage media, satellite and deep space
communications, network communications, cellular telephone networks, and almost any other form of digital
data communication. Error coding uses mathematical formulas to encode data bits at the source into longer bit
words for transmission. The “code word” can then be decoded at the destination to retrieve the information. The
extra bits in the code word provide redundancy that, according to the coding scheme used, allows the
destination to use the decoding process to detect and correct errors, subject to the communication medium's
expected error rate and whether or not data retransmission is possible. Faster processors and better
communications technology make more complex coding schemes, with better error detecting and correcting
capabilities, possible for smaller
embedded systems, allowing for more robust communications. During digital data transmission, errors will
occur; the system can detect and correct these errors and display the erroneous binary signal and the original
signal bitwise. The system is implemented using MATLAB.
II. CODING TECHNIQUES
1. Overview of Error Control Coding: Error-correcting codes are used to reliably transmit digital data over
unreliable communication channels subject to channel noise. When a sender wants to transmit a possibly very
long data stream using a block code, the sender breaks the stream up into pieces of some fixed size. Each such
piece is called a message, and the procedure given by the block code encodes each message individually into a
codeword, also called a block in the context of block codes. The sender then transmits all blocks to the receiver,
who can in turn use some decoding mechanism to (hopefully) recover the original messages from the possibly
corrupted received blocks. The performance and success of the overall transmission depends on the parameters
of the channel and the block code.
The theoretical contribution of Shannon's work in the area of channel coding was a useful definition of
"information" and several "channel coding theorems", which gave explicit upper bounds, called the channel
capacity, on the rate at which "information" could be transmitted reliably on a given channel. In the context of
this paper, the result of primary interest is the "noisy channel coding theorem for continuous channels with
average power limitations". This theorem gives the capacity C of a band-limited additive white Gaussian noise
(AWGN) channel with bandwidth W and signal-to-noise ratio S/N as C = W log2(1 + S/N); this channel model
approximately represents many practical digital communication and storage systems.
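The capacity bound just described can be evaluated directly; the bandwidth and SNR below are illustrative, not figures from the paper.

```python
import math

def awgn_capacity(bandwidth_hz, snr_linear):
    """Shannon capacity of a band-limited AWGN channel: C = W * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative numbers: a 3.1 kHz telephone-grade channel at 30 dB SNR.
snr = 10 ** (30 / 10)  # 30 dB as a linear power ratio (1000)
c = awgn_capacity(3100, snr)
print(round(c))  # roughly 31 kbit/s
```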
1.1. Linear Block Codes: Linear block codes are so named because each codeword in the set is a linear combination of a set of generator codewords. If the messages are k bits long and the codewords are n bits long (where n > k), there are k linearly independent codewords of length n that form a generator matrix. To encode any message of k bits, you simply multiply the message vector u by the generator matrix to produce a codeword vector v that is n bits long. Linear block codes are very easy to implement in hardware, and since they are algebraically determined, they can be decoded in constant time. They have very high code rates, usually above 0.95, and low coding overhead, but they have limited error correction capabilities. They are very useful in situations where the BER of the channel is relatively low, bandwidth availability is limited, and it is easy to retransmit data.
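The encoding step v = uG (mod 2) can be sketched concretely. The generator matrix below is the systematic form of the (7,4) Hamming code, chosen here only as an illustration; it is not a code discussed in the paper:

```python
# Generator matrix G = [I | P] of the systematic (7,4) Hamming code.
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def encode(u):
    # v_j = sum_i u_i * G[i][j] (mod 2): multiply message vector by G.
    return [sum(u[i] * G[i][j] for i in range(len(u))) % 2
            for j in range(len(G[0]))]

print(encode([1, 0, 1, 1]))  # → [1, 0, 1, 1, 0, 1, 0]
```

The first k = 4 bits of the codeword reproduce the message (systematic form); the last n - k = 3 are parity checks.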
One class of linear block codes used for high-speed computer memory is the class of SEC/DED (single-error-correcting/double-error-detecting) codes. In high-speed memory, bandwidth is limited because the cost per bit is relatively high compared to low-speed memory like disks. The error rates are usually low and tend to occur by the byte, so a SEC/DED coding scheme for each byte provides sufficient error protection. Error coding must be fast in this situation because high throughput is desired. SEC/DED codes are extremely simple and do not cause a high coding delay.
1.2. Convolutional Codes: Convolutional codes are generally more complicated than linear block codes, more difficult to implement, and have lower code rates (usually below 0.90), but they have powerful error correcting capabilities. They are popular in satellite and deep space communications, where bandwidth is essentially unlimited but the BER is much higher and retransmissions are infeasible. Convolutional codes are more difficult to decode because they are encoded using finite state machines that have branching paths for encoding each bit in the data sequence. A well-known process for decoding convolutional codes quickly is the Viterbi algorithm. The Viterbi algorithm is a maximum likelihood decoder, meaning that the output codeword from decoding a transmission is always the one with the highest probability of being the correct word transmitted from the source.
The main difference between block codes and convolutional (or recurrent) codes is the following: in a block code, the block of n digits generated by the encoder in a particular time unit depends only on the block of k input message digits within that time unit. In a convolutional code, the block of n digits generated by the encoder in a time unit depends not only on the block of k message digits within that time unit, but also on the preceding (N-1) blocks of message digits (N > 1). Usually the values of k and n will be small.
1.3. Concatenated codes: The field of channel coding is concerned with sending a stream of data at the highest
possible rate over a given communications channel, and then decoding the original data reliably at the receiver,
using encoding and decoding algorithms that are feasible to implement in a given technology.
Shannon's channel coding theorem shows that over many common channels there exist channel coding
schemes that are able to transmit data reliably at all rates R less than a certain threshold C, called the channel
capacity of the given channel. In fact, the probability of decoding error can be made to decrease exponentially as
the block length N of the coding scheme goes to infinity. However, the complexity of a naive optimum decoding
scheme that simply computes the likelihood of every possible transmitted codeword increases exponentially
with N, so such an optimum decoder rapidly becomes infeasible.
Dave Forney showed that concatenated codes could be used to achieve exponentially decreasing error
probabilities at all data rates less than capacity, with decoding complexity that increases only polynomially with
the code block length.
III. INDENTATIONS AND EQUATIONS
1. Linear block code:
Theorem 1: Let a linear block code C have a parity check matrix H. The minimum distance of C is equal to the
smallest positive number of columns of H which are linearly dependent.
This concept should be distinguished from that of rank, which is the largest number of columns of H
which are linearly independent.
Proof:
Let the columns of H be designated d_0, d_1, ..., d_{n-1}. Since cH^T = 0 for any codeword c, we have

0 = c_0 d_0 + c_1 d_1 + ... + c_{n-1} d_{n-1}

Let c be the codeword of smallest weight, w = w(c) = d_min. Then the columns of H corresponding to the nonzero elements of c are linearly dependent. Based on this, we can determine a bound on the distance of a code:

d_min <= n - k + 1 (the Singleton bound)

This follows since H has n - k linearly independent rows (the row rank equals the column rank), so any combination of n - k + 1 columns of H must be linearly dependent.
For a received vector r, the syndrome is

s = rH^T

Obviously, for a codeword the syndrome is equal to zero, so we can determine whether a received vector is a codeword. Furthermore, the syndrome is independent of the transmitted codeword: if r = c + e, then

s = (c + e)H^T = cH^T + eH^T = eH^T

Furthermore, if two error vectors e and e' have the same syndrome, then the error vectors must differ by a nonzero codeword. That is, if eH^T = e'H^T, then

(e - e')H^T = 0

which means e - e' must be a codeword.
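The syndrome computation s = rH^T can be sketched as follows. The parity-check matrix used here is that of the (7,4) Hamming code, whose columns are the binary representations of 1 through 7; this is an illustrative choice, not a code taken from the paper. With this H, a single-bit error yields a syndrome that spells out the error position in binary:

```python
# Parity-check matrix H of the (7,4) Hamming code; column j holds the
# binary representation of j+1, least significant bit in the top row.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def syndrome(r):
    # s = r H^T (mod 2): one parity check per row of H.
    return [sum(h * b for h, b in zip(row, r)) % 2 for row in H]

valid = [1, 1, 1, 0, 0, 0, 0]           # a codeword: syndrome is zero
print(syndrome(valid))                   # → [0, 0, 0]

received = [1, 1, 1, 0, 1, 0, 0]         # the same codeword, bit 5 flipped
s = syndrome(received)
print(s, s[0] + 2 * s[1] + 4 * s[2])     # → [1, 0, 1] 5  (error position)
```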
2. Convolutional Code:
2.1 Coding and decoding with Convolutional Codes: Convolutional codes are commonly specified by three parameters (n, k, m):

n = number of output bits
k = number of input bits
m = number of memory registers

The quantity k/n, called the code rate, is a measure of the efficiency of the code. Commonly the k and n parameters range from 1 to 8, m from 2 to 10, and the code rate from 1/8 to 7/8, except for deep space applications, where code rates as low as 1/100 or even lower have been employed. Often the manufacturers of convolutional code chips specify the code by the parameters (n, k, L), where the quantity L, called the constraint length of the code, is defined by

Constraint Length, L = k(m - 1)
The constraint length L represents the number of bits in the encoder memory that affect the generation of the n output bits. The constraint length is also referred to by the capital letter K, which can be confused with the lower case k, which represents the number of input bits. Sometimes K is defined as equal to the product of k and m. Often in commercial specifications, the codes are specified by (r, K), where r is the code rate k/n and K is the constraint length. The constraint length K, however, is equal to L - 1.
2.2 Code Parameters and the Structure of the Convolutional code: The convolutional code structure is easy to draw from its parameters. First draw m boxes representing the m memory registers. Then draw n modulo-2 adders to represent the n output bits. Now connect the memory registers to the adders using the generator polynomials.

This is a rate 1/3 code: each input bit is coded into 3 output bits. The constraint length of the code is 2. The 3 output bits are produced by the 3 modulo-2 adders by adding up certain bits in the memory registers. The selection of which bits are to be added to produce each output bit is called the generator polynomial (g) for that output bit. For example, the first output bit has a generator polynomial of (1, 1, 1), the second output bit has a generator polynomial of (0, 1, 1), and the third output bit has a polynomial of (1, 0, 1). The output bits are just the modulo-2 sums of these bits:

v1 = u1 + u0 + u-1 (mod 2)
v2 = u0 + u-1 (mod 2)
v3 = u1 + u-1 (mod 2)

from fig. (1)
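The rate-1/3 encoder described above can be sketched as a shift-register simulation. This is a minimal illustration; initializing the memory registers to zero is an assumption, not stated in the paper:

```python
# Rate-1/3 convolutional encoder with generator polynomials
# (1,1,1), (0,1,1), (1,0,1) acting on [u1, u0, u-1]: the current
# input bit followed by the contents of the m = 2 memory registers.
def conv_encode(bits, generators=((1, 1, 1), (0, 1, 1), (1, 0, 1))):
    memory = [0, 0]                    # u0, u-1, assumed to start at zero
    out = []
    for u in bits:
        window = [u] + memory          # [u1, u0, u-1]
        for g in generators:
            # modulo-2 sum of the positions selected by the polynomial
            out.append(sum(b for b, tap in zip(window, g) if tap) % 2)
        memory = [u] + memory[:-1]     # shift the registers
    return out

print(conv_encode([1, 0, 1]))  # → [1, 0, 1, 1, 1, 0, 0, 1, 0]
```

Each input bit produces n = 3 output bits, so a length-k input yields a length-3k output, matching the 1/3 code rate.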
The polynomials give the code its unique error protection quality. One (3, 1, 4) code can have completely different properties from another, depending on the polynomials chosen.
3. Concatenated Codes: Let Cin be an [n, k, d] code, that is, a block code of length n, dimension k, minimum Hamming distance d, and rate r = k/n, over an alphabet A. Let Cout be an [N, K, D] code over an alphabet B with |B| = |A|^k symbols.

The inner code Cin takes one of |A|^k = |B| possible inputs, encodes it into an n-tuple over A, transmits, and decodes into one of |B| possible outputs. We regard this as a (super) channel which can transmit one symbol from the alphabet B. We use this channel N times to transmit each of the N symbols in a codeword of Cout. The concatenation of Cout (as outer code) with Cin (as inner code), denoted Cout ∘ Cin, is thus a code of length Nn over the alphabet A [1].

It maps each input message m = (m1, m2, ..., mK) to a codeword (Cin(m'1), Cin(m'2), ..., Cin(m'N)), where (m'1, m'2, ..., m'N) = Cout(m1, m2, ..., mK).
The key insight in this approach is that if Cin is decoded using a maximum-likelihood approach (thus showing an exponentially decreasing error probability with increasing length), and Cout is a code with length N = 2^(nr) that can be decoded in time polynomial in N, then the concatenated code can be decoded in time polynomial in its combined length n·2^(nr) = O(N log N), and shows an exponentially decreasing error probability, even if Cin has exponential decoding complexity [1].

In a generalization of the above concatenation, there are N possible inner codes Cin,i, and the i-th symbol in a codeword of Cout is transmitted across the inner channel using the i-th inner code. The Justesen codes are examples of generalized concatenated codes, where the outer code is a Reed-Solomon code.
3.1. Theorem: The distance of the concatenated code Cout ∘ Cin is at least dD; that is, it is an [nN, kK, D'] code with D' ≥ dD.
Proof: Consider two different messages m1 ≠ m2 ∈ B^K. Let Δ denote the Hamming distance between two codewords. Then

Δ(Cout(m1), Cout(m2)) ≥ D

Thus, there are at least D positions in which the sequences of N symbols of the codewords Cout(m1) and Cout(m2) differ. For these positions, denoted i, we have

Δ(Cin(Cout(m1)_i), Cin(Cout(m2)_i)) ≥ d

Consequently, there are at least d·D positions in the sequence of n·N symbols taken from the alphabet A in which the two codewords differ, and hence

Δ(Cout∘Cin(m1), Cout∘Cin(m2)) ≥ dD

2. If Cout and Cin are linear block codes, then Cout ∘ Cin is also a linear block code. This property can be shown easily by defining a generator matrix for the concatenated code in terms of the generator matrices of Cout and Cin.
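The two-level encoding Cout ∘ Cin can be sketched with a toy pair of codes. Both codes below are illustrative stand-ins, not the codes analyzed in the paper: the outer code is a length-3 repetition code over 4-bit symbols (|B| = 2^4) and the inner code is the systematic (7,4) Hamming code over bits (A = {0,1}), giving an [nN, kK] = [21, 4] binary code:

```python
# Toy concatenation C_out ∘ C_in with illustrative component codes.
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def inner_encode(symbol):   # C_in: one 4-bit symbol of B -> 7 bits of A
    return [sum(symbol[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]

def outer_encode(message):  # C_out: repeat the 4-bit symbol N = 3 times
    return [message] * 3

def concat_encode(message):
    # encode with C_out, then pass each outer symbol through C_in
    return [bit for sym in outer_encode(message) for bit in inner_encode(sym)]

codeword = concat_encode([1, 0, 1, 1])
print(len(codeword))  # → 21 = n * N
```

Here d = 3 and D = 3, so by the theorem above the concatenated distance is at least dD = 9.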
Figures and Tables:
1. Convolutional code:
Fig.1 Sequential Statement
1.1. Generator polynomials selected for given constraint lengths:

Constraint Length    G1           G2
3                    110          111
4                    1101         1110
5                    11010        11101
6                    110101       111011
7                    110101       110101
8                    110111       1110011
9                    110111       111001101
10                   110111001    1110011001

1.2. Convolutional code resulting binary message and bitwise graph
Fig.2 Binary message of convolutional code
Fig.3 Bitwise graph of convolutional code
2. Linear block code resulted binary message and bitwise graph
Fig.4 Binary message of linear block code
Fig.5 Bitwise graph of linear block code
3. Concatenated code resulted bitwise graph
Fig.6 Concatenated code resulted graph bitwise.
4. BER to Eb/No graph
Fig.7 BER to Eb/No
IV. CONCLUSION

1. As shown in the BER to Eb/No graph, the convolutional code has better error correction capability than the linear block code.
2. The convolutional code is more difficult to implement than the linear block code.
3. With the concatenated code, both binary and non-binary data can be sent through the encoder, which is most useful for error correction capability.
4. The convolutional code performs better than the linear block code and the concatenated code.
5. Important reasons to use coding are achieving dependable data storage in the face of minor data corruption/loss and gaining the ability to provide high precision I/O even on noisy transmission lines such as cellular phones (error coding decouples message precision from analog noise level).
6. It is most useful in real-time systems, where the result can show bitwise exactly which bit is in error.
7. The error signal can easily be displayed on screen in the MATLAB command window.
8. The error coding technique for an application should be picked based on the types of errors expected on the channel (e.g., burst errors or random bit errors),
9. whether or not it is possible to retransmit the information (codes for error detection only, or error detection and correction),
10. the expected error rate on the communication channel (high or low), and
11. the tradeoffs between channel efficiency and the amount of coding/decoding logic implemented.
FUTURE APPLICATIONS AND DEVELOPMENT

1. Fault tolerant computing in computer memory
2. Magnetic and optical data storage media
3. Satellite and deep space communications
4. Network communications
5. Cellular telephone networks
6. Almost any other form of digital data communication
ACKNOWLEDGEMENTS

We are thankful to Prof. A. P. Thakare, HOD (Department of Electronics and Telecommunication), and Dr. Prof. A. A. Gurjar, Sipna C.O.E.T., Amravati, for their encouragement and support.
References
1. Kai Yang and Xiaodong Wang, "Analysis of Message-Passing Decoding of Finite-Length Concatenated Codes," IEEE Transactions on Communications, Vol. 59, No. 8, August 2011.
2. Daniel J. Costello, Jr., Joachim Hagenauer, Hideki Imai, and Stephen B. Wicker, "Applications of Error Control Coding," IEEE Transactions on Information Theory, Vol. 44, No. 6, October 1998.
3. Wang, Wanjun Zhao, and Georgios B. Giannakis, "Error Correction in a Convolutionally Coded System," IEEE Transactions on Communications, Vol. 56, No. 11, November 2008.
4. S. Lin and D. J. Costello, Error Control Coding. Englewood Cliffs, NJ: Prentice Hall, 1982.
5. W. W. Peterson and E. J. Weldon, Jr., Error Correcting Codes, 2nd ed. Cambridge, MA: The MIT Press, 1972.
6. V. Pless, Introduction to the Theory of Error-Correcting Codes, 3rd ed. New York: John Wiley & Sons, 1998.
7. K. Sam Shanmugam, Digital and Analog Communication Systems.
8. S. Lin and D. J. Costello, Error Control Coding. Englewood Cliffs, NJ: Prentice Hall, 1982.
9. M. Michelson and A. H. Levesque, Error Control Techniques for Digital Communication. New York: John Wiley & Sons, 1985.
10. W. W. Peterson and E. J. Weldon, Jr., Error Correcting Codes, 2nd ed. Cambridge, MA: The MIT Press, 1972.
11. V. Pless, Introduction to the Theory of Error-Correcting Codes, 3rd ed. New York: John Wiley & Sons, 1998.
12. C. Schlegel and L. Perez, Trellis Coding. Piscataway, NJ: IEEE Press, 1997.
13. S. B. Wicker, Error Control Systems for Digital Communication and Storage. Englewood Cliffs, NJ: Prentice Hall, 1995.
First Author - Mr. Vishal G. Jadhao
He is pursuing his M.E. in Digital Electronics from Sipna's College of Engineering and Technology, Amravati (India). His areas of interest are Digital Communication Systems and Digital Signal Processing.
Second Author-Prof.Prafulla D. Gawande
He is currently working as Professor in Electronics and Telecommunication
Engineering Department, Sipna’s College of Engineering, Amravati (India)
IOSR Journal of Electrical and Electronics Engineering (IOSRJEEE)
ISSN : 2278-1676 Volume 1, Issue 1 (May-June 2012), PP 62-67 www.iosrjournals.org
Utilization of renewable energy resources
Vishnu vardhan.ch1
1(ECE department, AL-Aman College of engineering/ Jawaharlal Nehru technological university Kakinada,
India )
ABSTRACT: Man has gloriously innovated, discovered, profoundly invented some state-of-the-art technologies, and designed modern structures that stand apart from those of other civilizations. Yet there has been growing concern over the deterioration of mother Earth. This doomsday scenario can be kept to a sci-fi story only if practical, readily available alternatives are implemented for this desperately needed eco-friendly transformation.

Human civilization has led the path to qualitative technology with greater efficiency than previously achieved. Yet there has been a demerit in all the technologies developed ever since the idea of having technology came about. With this paper we put before you some exquisite ways to make use of the pristine forces of nature, which have been trivialized since the era of the modern age.

Keywords: EO, EPGSP, ME, SIE, ST.
I. INTRODUCTION

This paper explains two working models, called the simplified traveler (ST) and enhanced power generation from solar panels (EPGSP). There are working models partially resembling the simplified traveler, but the ST is far more advanced: it is capable of converting solar energy and mechanical energy into electrical energy in three different ways. The simplified traveler travels with the help of three power sources, which are renewable and non-polluting, so that if one fails another can be used. On the other hand, using three power sources does not mean overexploitation; this is best understood in the later pages of the paper. As for enhanced power generation from solar panels: when a solar panel is placed under the sun, it converts solar energy into electrical energy. Say, for example, 10 volts can be produced with a single solar panel; through enhanced power generation we can produce 100 volts of electrical energy with the help of 10 concave reflecting surfaces directed onto the single solar panel. Due to the reflecting surfaces, a huge amount of light falls on the solar panel, and as a result it produces 10 times its previous output. [1]
II. ENERGY CONVERSION EFFICIENCY

The efficiency of a device that converts one energy form into another is simply defined as the ratio of useful energy output (EO) to total energy input. It is a number between 0 and 1, or between 0 and 100%. The key word in the definition is useful energy output. Were it not there, the efficiency of any device would be 100%, because of the law of conservation of energy (which states that energy cannot be created or destroyed). Instead, we see that quite a few devices used in daily life have very low efficiencies. The purpose of a device determines its useful energy output. For example, we want light from a lamp, but we get mostly heat; only 5% of the energy input (electricity) is converted into light, so the efficiency of a conventional incandescent light bulb is 5%.

It is obvious why fluorescent lights are preferable for heavy-duty use (e.g., basements, kitchens, public buildings). Similarly, diesel engines (the kinds used in most trucks and in some cars, especially European ones) are typically more efficient than conventional spark-ignition engines (SIE) [2].

The familiar term 'fuel economy' or 'mileage' (distance traveled per liter of fuel consumed) is equivalent to the efficiency of an automobile; it is just expressed in different units. Instead of being expressed in units of mechanical energy (work), the useful energy output is given as the average number of kilometers traveled. And instead of being expressed in units of chemical energy, the energy input is given as the number of liters of fuel [3]. Refer to figure (1).
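The definition above reduces to a single ratio. A minimal sketch using the incandescent-bulb figures from the text (the 100 J input is an assumed round number):

```python
# Efficiency = useful energy output (EO) / total energy input.
def efficiency(useful_output, energy_input):
    return useful_output / energy_input

# The incandescent-bulb example: of 100 J of electrical input,
# only 5 J emerge as visible light.
print(efficiency(5, 100))  # → 0.05, i.e. 5%
```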
III. MAXIMUM POWER TRANSFER THEOREM

The maximum power transfer theorem states that, to obtain maximum external power from a source with a finite internal resistance, the resistance of the load must equal the resistance of the source as viewed from the output terminals. The theorem results in maximum power transfer, not maximum efficiency. If the resistance of the load is made larger than the resistance of the source, then efficiency is higher, since a higher percentage of the source power is transferred to the load, but the magnitude of the load power is lower since the total circuit resistance goes up. [4]
If the load resistance is smaller than the source resistance, then most of the power ends up being dissipated in the source, and although the total power dissipated is higher, due to a lower total resistance, the amount dissipated in the load is reduced. The theorem states how to choose the load resistance (so as to maximize power transfer) once the source resistance is given, not the opposite. It does not say how to choose the source resistance once the load resistance is given: given a certain load resistance, the source resistance that maximizes power transfer is always zero, regardless of the value of the load resistance. [5]

The theorem was originally misunderstood (notably by Joule) to imply that a system consisting of an electric motor driven by a battery could not be more than 50% efficient, since, when the impedances were matched, the power lost as heat in the battery would always be equal to the power delivered to the motor. In 1880 this assumption was shown to be false by either Edison or his colleague Francis Robbins Upton, who realized that maximum efficiency was not the same as maximum power transfer. To achieve maximum efficiency, the resistance of the source (whether a battery or a dynamo) could be made close to zero. Using this new understanding, they obtained an efficiency of about 90%, and proved that the electric motor was a practical alternative to the heat engine [6]. The condition of maximum power transfer does not result in maximum efficiency (ME).

Consider three particular cases. The efficiency is only 50% when maximum power transfer is achieved, but approaches 100% as the load resistance approaches infinity, though the total power level tends towards zero. Efficiency also approaches 100% if the source resistance can be made close to zero. When the load resistance is zero, all the power is consumed inside the source (the power dissipated in a short circuit is zero), so the efficiency is zero. [7]
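The three cases can be checked numerically. The source values below (V = 10 V, internal resistance Rs = 5 Ω) are assumed for illustration only:

```python
# Load power P = V^2 * RL / (Rs + RL)^2 peaks at RL = Rs,
# where the efficiency RL / (Rs + RL) is only 50%.
V, Rs = 10.0, 5.0   # assumed source voltage and internal resistance

def load_power(RL):
    return V**2 * RL / (Rs + RL)**2

def efficiency(RL):
    return RL / (Rs + RL)

for RL in [1, 2, 5, 10, 100]:
    print(RL, round(load_power(RL), 3), round(efficiency(RL), 3))
# The matched load RL = Rs = 5 gives the largest power (5 W) at only
# 50% efficiency; as RL grows, efficiency keeps rising while power falls.
```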
IV. LAW OF CONSERVATION OF ENERGY

The law of conservation of energy states that the amount of energy in a system stays constant through time. No matter how much time goes by, the amount of energy in a certain state of matter remains the same, hence conserved. Energy cannot be created or destroyed, but it can be transformed from one form to another. [8]

A motor is fixed to one end of a rod and a dynamo is fixed to the other end. The axle of the motor is perpendicular to the rod, and the same holds for the dynamo. Now the tires are fixed to the axles of the motor and the dynamo. The positive terminal of the motor is connected to the positive terminal of the dynamo and vice versa. Let us for a while consider that the efficiency of the dynamo and the motor in converting the given input is only 75%; that means 25% of the given input is wasted whenever the input is converted into another form of energy. [9]

Consider an electric motor and a dynamo of matching ratings and parameters, connected to either end of a supporting rod. Care must be taken that these devices are port-wise synchronized (with parallel terminals). Let each of these two devices be attached to one rotating tire, so that the arrangement constitutes a bicycle-like movement. Now the arrangement is kept on the ground and given an initial manual thrust, which causes the whole arrangement to move forward; because of this movement the dynamo rotates. It produces 75% of electrical energy from the 100% of input mechanical energy given (from pushing). The 75% of electrical output produced by the dynamo is given as input to the motor, which converts its input into 50% of mechanical energy. As a result the motor rotates the wheel and the whole arrangement moves forward. Again the dynamo rotates and converts its 50% of mechanical input into 25% of electrical output, which is in turn given to the motor, whose mechanical conversion will be 0%. This results in deceleration of the system, and finally the system stops. This example shows that the conversion of energy involves wastage of energy and that 100% energy conversion is not possible. [10] Refer to fig. (2) and table 1.
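The dynamo-motor thought experiment reduces to simple arithmetic: with the 75% conversion efficiency assumed above, each stage retains 75% of the energy of the previous stage (the text rounds these figures downward to 50%, 25%, 0%), so the stored energy decays geometrically and the system must eventually stop:

```python
# Energy remaining after n successive 75%-efficient conversions.
def remaining(initial, conversions, eff=0.75):
    return initial * eff ** conversions

for n in range(5):
    print(n, remaining(100.0, n))  # 100.0, 75.0, 56.25, 42.1875, ...
```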
V. SIMPLIFIED TRAVELLER

There are many vehicles capable of converting solar power into mechanical power and electrical power into mechanical power. The vehicle most popular for converting electrical power into mechanical energy is the E-bike. Due to its limited battery charge, it cannot travel long distances, as there are no alternatives except finding a power source (like E-bunks); therefore the only solution is to be precautious. Hence arises the necessity of a vehicle which is capable of converting both solar energy and mechanical energy into electrical energy, and which does not need any external source to be recharged midway. This paper puts forward the model called the simplified traveler (ST), which possesses all the above features.

The simplified traveler is basically a bicycle with advanced features. It is powered by a solar panel, a dynamo, and simple muscular pedaling, so it has three power sources; when one kind of energy source fails, the others can be utilized. In this scenario the ST has two alternate power producing sources which under any circumstances can set the ST in motion. For instance, if it is cloudy, the solar panel cannot produce sufficient energy to set the ST in motion. In this case, when the ST is pedaled, the dynamo also rotates,
which produces energy that is stored in the battery; when pedaling is stopped, the stored energy in the power storage can be utilized to set the ST in motion.

There is a chain from the pedal and also from the dynamo, both of which drive the rear twin-chain crank wheel, which can accommodate two chains. Pedaling the ST corresponds to rotation of the dynamo and the wheel; this results in forward movement of the ST and production of electrical energy by the dynamo, and this generated electrical energy is stored in the power storage device. If the person traveling on the ST is unable to pedal, then the solar panel can be used.

The simplified traveler is very simple in its working and mechanism. The efficiency of the simplified traveler depends on the kind of dynamo used to produce electricity when rotated [11]. There are different ways to increase the efficiency of the dynamo:
1. by increasing the number of windings
2. by increasing the pole strength and the magnetic flux density of the magnet
3. by increasing the thickness of the windings

Refer to fig. (3).
VI. ENHANCED POWER GENERATION FROM A SOLAR PANEL

This is a simple arrangement of mirrors cut in small uniform dimensions, such as squares. It involves positioning such mirrors on a concave surface which is in turn tilted towards the sun. The arrangement is such that the reflected light gets focused onto a particular area, where we arrange a solar panel of high power or any heat-to-energy conversion device.

This arrangement produces, from a single solar panel assisted by concave reflecting surfaces, the same amount of energy as N individual solar panels [9]. As a result the cost of production is reduced many times over. Because of this arrangement, thousands of joules of energy can be concentrated at one point, from where thousands of watts of power can be produced with less loss in energy conversion [12]. Refer to figs. (4) and (5). Thus EPGSP is very useful in power generation.
VII. FIGURES AND TABLES
Fig-(1) Efficiency of conversion device.
Fig (2): Example for law of conservation of energy.
Figure (3): Simplified traveler
Fig(4): Circular array of reflecting surfaces around solar tower.
Figure (5): Enhanced power generation from solar panels
TABLE 1: Energy transformation table
VIII. SIMULATION RESULTS
Fig (6): output voltage of a solar panel
Fig(7): output voltage of a motor
Fig(8): output voltage of a dynamo
IX. CONCLUSION

The paper finally explains the utilization of renewable energy resources in ways that do not pollute the environment. Moreover, the simplified traveler can be constructed and used with little effort; the only limitation of the ST is that many people cannot travel on it at once. Enhanced power generation from a solar panel helps in producing power at a higher rate with the help of a single panel. This reduces the number of solar panels needed to produce the same amount of energy which is obtained with the help of enhanced power generation from the solar panel.
REFERENCES
[1] Georg Hille, Werner Roth, and Heribert Schmidt, "Photovoltaic systems," Fraunhofer Institute for Solar Energy Systems, Freiburg, Germany, 1995.
[2] OKA Heinrich Wilk, "Utility connected photovoltaic systems," contribution to design handbook, Expert meeting Montreux, October 19-21, 1992, International Energy Agency (IEA): Solar Heating and Cooling Program.
[3] Stuart R. Wenham, Martin A. Green, and Muriel E. Watt, "Applied photovoltaics," Centre for Photovoltaic Devices and Systems, UNSW.
[4] N. Ashari, W. W. L. Keerthipala, and C. V. Nayar, "A single phase parallel connected uninterruptible power supply/demand side management system," PE-275-EC (08-99), IEEE Transactions on Energy Conversion, August 1999.
[5] C. V. Nayar and M. Ashari, "Phase power balancing of a diesel generator using a bidirectional PWM inverter," IEEE Power Engineering Review 19 (1999).
[6] C. V. Nayar, J. Perahia, S. J. Philips, S. Sadler, and U. Duetchler, "Optimized power electronic device for a solar powered centrifugal pump," Journal of the Solar Energy Society of India, SESI Journal 3(2), 87-98 (1993).
[7] Ziyad M. Salameh and Fouad Dagher, "The effect of electrical array reconfiguration on the performance of a PV-powered volumetric water pump," IEEE Transactions on Energy Conversion 5, 653-658 (1990).
[8] C. V. Nayar, S. J. Phillips, W. L. James, T. L. Pryor, and D. Remmer, "Novel wind/diesel/battery hybrid system," Solar Energy 51, 65-78 (1993).
[9] W. Bower, "Merging photovoltaic hardware development with hybrid applications in the U.S.A.," Proceedings Solar '93 ANZSES, Fremantle, Western Australia (1993).
[10] D. E. Carlson, "Recent advances in photovoltaics," Proceedings of the Intersociety Engineering Conference on Energy Conversion, 1995, pp. 621-626.
[11] www.google.com
[12] www.en-wikipedia.org