Ijciis May 2011 Vol. 2 No. 5
International Journal of Computational Intelligence and Information Security
ISSN: 1837-7823
May 2011
Vol. 2 No. 5
IJCIIS Publication
International Journal of Computational Intelligence and Information Security, May 2011 Vol. 2, No. 5
IJCIIS Editor and Publisher
P Kulkarni
Publisher's Address:
5 Belmar Crescent, Canadian
Victoria, Australia
Phone: +61 3 5330 3647
E-mail Address: [email protected]
Publishing Date: May 31, 2011
Members of IJCIIS Editorial Board
Prof. A Govardhan, Jawaharlal Nehru Technological University, India
Dr. A V Senthil Kumar, Hindusthan College of Arts and Science, India
Dr. Awadhesh Kumar Sharma, Madan Mohan Malviya Engineering College, India
Prof. Ayyaswamy Kathirvel, BS Abdur Rehman University, India
Dr. Binod Kumar, Lakshmi Narayan College of Technology, India
Prof. Deepankar Sharma, D. J. College of Engineering and Technology, India
Dr. D. R. Prince Williams, Sohar College of Applied Sciences, Oman
Prof. Durgesh Kumar Mishra, Acropolis Institute of Technology and Research, India
Dr. Imen Grida Ben Yahia, Telecom SudParis, France
Dr. Himanshu Aggarwal, Punjabi University, India
Dr. Jagdish Lal Raheja, Central Electronics Engineering Research Institute, India
Prof. Natarajan Meghanathan, Jackson State University, USA
Dr. Oluwaseyitanfunmi Osunade, University of Ibadan, Nigeria
Dr. Ousmane Thiare, Gaston Berger University, Senegal
Dr. K. D. Verma, S. V. College of Postgraduate Studies and Research, India
Prof. M. Thiyagarajan, Sastra University, India
Dr. Manjaiah D. H., Mangalore University, India
Dr. N. Ch. Sriman Narayana Iyengar, VIT University, India
Prof. Nirmalendu Bikas Sinha, College of Engineering and Management, Kolaghat, India
Dr. Rajesh Kumar, National University of Singapore, Singapore
Dr. Raman Maini, University College of Engineering, Punjabi University, India
Dr. Seema Verma, Banasthali University, India
Dr. Shahram Jamali, University of Mohaghegh Ardabili, Iran
Dr. Shishir Kumar, Jaypee University of Engineering and Technology, India
Dr. Sujisunadaram Sundaram, Anna University, India
Dr. Sukumar Senthilkumar, National Institute of Technology, India
Prof. V. Umakanta Sastry, Sreenidhi Institute of Science and Technology, India
Dr. Venkatesh Prasad, Lingaya's University, India
Journal Website: https://sites.google.com/site/ijciisresearch/
Contents
1. Comparative Analysis Of Different Image Sharpening Techniques Using Different
Quality Metrics (pages 5-16)
2. Data Hiding for Medical Images: Issues and Challenges (pages 17-26)
3. A Survey on Ontology-Based Approach for Context Modelling and Reasoning (pages
27-36)
4. A proposed scheme for implementation of password authentication mechanism in the
security architecture of MANETs (pages 37-42)
5. A Survey on Recovery Techniques in Self-Healing Systems (pages 43-54)
6. Software Defect Prediction Based On Data Mining Techniques and Statistical Models
(pages 55-61)
7. Intrusion Response System with Self-Healing Intelligence (pages 62-70)
8. Fractal Geometry of Polynomial Surfaces (pages 71-80)
9. Survey On Self-Adaptation In Context-Aware Systems (pages 81-89)
10. Reliability Forecast For Sugar Plant With Standby Redundant Boiler (pages 90-99)
11. Experimental Results Of Multilevel Inverter Based Statcom (pages 100-105)
COMPARATIVE ANALYSIS OF DIFFERENT IMAGE SHARPENING
TECHNIQUES USING DIFFERENT QUALITY METRICS
G. P. Hegde1 and Dr. I. V. Muralikrisna2
1Assistant Professor, SDMIT, Ujire
Email: [email protected]
2Retd. Professor, JNTU, Hyderabad
Email: [email protected]
Abstract
In this paper we focus on pan-sharpening algorithms, especially for remote sensing satellite imaging applications, and employ experimental testing to compare their performance. Four different image sharpening techniques were applied to fuse higher-resolution panchromatic and lower-spatial-resolution multispectral images from the SPOT satellite. The pan-sharpening results were evaluated according to five measures of performance: Mannons quality index, difference quality index, objective measure, mutual information, and image quality index. Finally, quality evaluation of the fused images was carried out, and the experimental results show that the proposed PHLST image sharpening technique yields more information from pan-sharpened images.
Keywords: Image sharpening, Performance metric, Polyharmonic local sine transform (PHLST), NSCT.
1. Introduction
Image sharpening or fusion is the process by which two or more images are combined into a single image retaining the important features from each of the original images. It aims at the integration of complementary data to enhance the information apparent in the images as well as to increase the reliability of the interpretation. The successful fusion of images acquired from different modalities or instruments is of great importance in many applications such as remote sensing, medical imaging, microscopic imaging, computer vision, and robotics.
Nowadays, with the rapid development of high technology and modern instrumentation, satellite imaging has become a vital component of a large number of applications, including remote sensing, space research, and military applications. In order to provide more accurate earth and space information, satellite images were properly registered and corrected for evaluation, and image sharpening was carried out by combining the features of high-resolution SPOT panchromatic (SPOT-PAN) images with multispectral (SPOT-XS) images. These remote sensing satellite images usually provide complementary and occasionally conflicting information. The SPOT-PAN sensed images can render dense structures such as water areas and trees with little distortion but capture spectral changes poorly, while the SPOT-XS image can provide more spectral information but cannot supply high-resolution spatial information. In this case, one kind of image alone may not be sufficient to provide accurate analysis of earth observation for researchers and astronomers. Therefore, the pan-sharpening or fusion of multimodal remote sensing satellite images is necessary, and it has become a promising and very challenging research area in recent years [3].
This paper presents four different pan-sharpening or fusion techniques. In section 2, the four image pan-sharpening techniques are introduced. In section 3, the five evaluation parameters are described. Section 4 presents quantitative analysis and experimental results of applying these image fusion techniques to SPOT-PAN and SPOT-XS images.
2. Image Pan-sharpening Techniques
In this paper we propose the Polyharmonic Local Sine Transform (PHLST) as an image pan-sharpening or fusion technique; it is compared with other fusion techniques qualitatively and quantitatively.
2.1 Wavelet Transformation Technique
The two-dimensional Discrete Wavelet Transform (DWT) is one of the standard pan-sharpening techniques, computed by successive lowpass and highpass filtering of the digital images. Its significance lies in the manner it connects the continuous-time multiresolution to discrete-time filters. The principle of image fusion using wavelets is to merge the wavelet decompositions of the two original images using fusion methods applied to the approximation coefficients and detail coefficients [13]. Figure 1 shows the process of image fusion merging two different images into a new image.
The wavelet transform decomposes the image into low-high, high-low and high-high spatial frequency bands at different scales and the low-low band at the coarsest scale. The L-L band contains the average image information, whereas the other bands contain directional information due to spatial orientation. Higher absolute values of wavelet coefficients in the high bands correspond to salient features such as edges or lines.
Fig. 1. Block diagram of a DWT-based image fusion approach
2.2 Spatial Frequency (SF) Techniques
Spatial frequency (SF) is used to measure the overall activity level of an image [11][16]. For an M×N image F, with the gray value at pixel position (m, n) denoted by F(m, n), the spatial frequency is defined as

SF = sqrt(RF^2 + CF^2)    (1)

where RF and CF are the row frequency and column frequency:

RF = sqrt( (1/MN) * sum_{m=1..M} sum_{n=2..N} [F(m, n) - F(m, n-1)]^2 )    (2)

CF = sqrt( (1/MN) * sum_{n=1..N} sum_{m=2..M} [F(m, n) - F(m-1, n)]^2 )    (3)

The basic algorithm may be written as follows: (i) Decompose the source images into blocks of size M×N; (ii) Compute the spatial frequency for each block; (iii) Compare the spatial frequencies of the two corresponding blocks Ai and Bi, and construct the ith block Fi of the fused image as
Fi = Ai,            if SF_Ai > SF_Bi + TH
Fi = Bi,            if SF_Bi > SF_Ai + TH
Fi = (Ai + Bi)/2,   otherwise    (4)

where TH is the threshold; and (iv) Verify and correct the fusion result of step (iii) with saliency checking. The aim of this step is to avoid isolated blocks; the process is illustrated in figure 2.
Fig.2. Flow chart of the technique with SF as a parameter of clarity of images.
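Steps (i)-(iii) can be sketched directly. In the snippet below the block size and threshold TH are illustrative assumptions, the selection rule follows the common "pick the block with the clearly higher SF, otherwise average" scheme, and the saliency check of step (iv) is omitted for brevity:

```python
# Hedged sketch of spatial-frequency (SF) based block fusion, per Eqs. (1)-(4).
import numpy as np

def spatial_frequency(block):
    """SF of one block: root of mean squared row and column differences."""
    m, n = block.shape
    rf = np.sqrt(np.sum((block[:, 1:] - block[:, :-1]) ** 2) / (m * n))
    cf = np.sqrt(np.sum((block[1:, :] - block[:-1, :]) ** 2) / (m * n))
    return np.sqrt(rf ** 2 + cf ** 2)

def sf_fuse(img_a, img_b, block=8, th=1.0):
    """Blockwise fusion: keep the clearer block, average when SFs are close."""
    fused = np.empty_like(img_a, dtype=float)
    for i in range(0, img_a.shape[0], block):
        for j in range(0, img_a.shape[1], block):
            a = img_a[i:i + block, j:j + block]
            b = img_b[i:i + block, j:j + block]
            sfa, sfb = spatial_frequency(a), spatial_frequency(b)
            if sfa > sfb + th:
                fused[i:i + block, j:j + block] = a
            elif sfb > sfa + th:
                fused[i:i + block, j:j + block] = b
            else:
                fused[i:i + block, j:j + block] = (a + b) / 2.0
    return fused
```

A flat block has SF 0, while a block containing an edge has high SF, so the rule tends to copy edge detail into the fused result.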
2.3 (NSCT+HIS) Techniques
The Non-Subsampled Contourlet Transform (NSCT) combines nonsubsampled pyramids and nonsubsampled directional filter banks (DFBs). The pyramids provide multiscale decomposition and the DFBs provide directional decomposition. This process is iterated repeatedly on the lowpass subband outputs of the nonsubsampled pyramids, resulting in the non-subsampled contourlet transform [6]. In this paper a fusion method based on NSCT combined with HIS is briefly explained. This method yields notably better results for edges and contours than the DWT technique [7]. The core of the NSCT is the non-separable two-channel nonsubsampled filter bank. It is easier and more flexible to design the needed filter banks that lead to an NSCT with better frequency selectivity and regularity when compared to the contourlet transform. Based on the mapping approach and a fast ladder-structure implementation, the NSCT frame elements are regular and symmetric, and the frame is close to a tight frame. The multiresolution decomposition of the NSCT is realized by the nonsubsampled pyramid (NSP), which achieves a subband decomposition structure similar to the Laplacian pyramid [23].
A general scheme for the NSCT+HIS fusion method is shown in figure 3. This method can be performed in the following steps:
Step 1: Perform the HIS transform on the SPOT-XS image to get the saturation, hue and intensity components;
Step 2: Apply histogram matching between the SPOT-PAN image and the intensity component to get a histogram-matched PAN image;
Step 3: Apply the NSCT to the intensity component and the histogram-matched SPOT-PAN image to get the low-frequency subband and the high-frequency subbands;
Step 4: Fuse the intensity component and the histogram-matched SPOT-PAN image. The fused low-frequency data employ the low-frequency coefficients of the intensity component; for each high-frequency coefficient of each subband obtained in Step 3, the fused result adopts the coefficient with the maximum region energy;
Step 5: Apply the NSCT reconstruction with the new coefficients to obtain the new intensity;
Step 6: Perform the inverse HIS transform to obtain the fused image.
Fig 3. Image fusion flow chart of NSCT+HIS
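The HIS substitution at the heart of steps 1-6 can be sketched in a simplified form. The NSCT stage is omitted here for brevity (the histogram-matched PAN image directly replaces the intensity component, which is the classic IHS pan-sharpening special case), and the moment-based histogram matching is an illustrative stand-in for full histogram matching:

```python
# Simplified sketch of IHS (HIS) substitution pan-sharpening.
# Assumptions: a linear intensity I = (R+G+B)/3, mean/std histogram matching,
# and no NSCT decomposition of the intensity and PAN images.
import numpy as np

def histogram_match_moments(pan, intensity):
    """Crude histogram matching: align mean and standard deviation."""
    return (pan - pan.mean()) / (pan.std() + 1e-12) * intensity.std() + intensity.mean()

def ihs_pansharpen(ms_rgb, pan):
    """ms_rgb: (H, W, 3) upsampled multispectral image; pan: (H, W) panchromatic."""
    intensity = ms_rgb.mean(axis=2)                    # I component of IHS
    matched = histogram_match_moments(pan, intensity)  # step 2
    # Inverse IHS with the new intensity: adding the intensity difference to
    # every spectral band preserves hue and saturation.
    return ms_rgb + (matched - intensity)[:, :, None]
```

When the PAN image already equals the multispectral intensity, the substitution is a no-op, which is a convenient correctness check.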
2.4 PHLST Based Pan-sharpening Technique
In this paper the proposed fusion technique, the Polyharmonic Local Sine Transform (PHLST), is briefly explained. A more detailed description of the polyharmonic local sine transform may be found in [20]. Assume I(x,y) is a spatial-domain image. The main idea of PHLST is that an image I(x,y) can be divided into two parts: p, which we call the polyharmonic component of I(x,y), and r, which we call the residual of I(x,y). p is a polynomial and r is expressed as a sine series. p represents the base, trend or predictable part of the original image, whereas r stands for the texture, fluctuation or unpredictable part of the original image. This method coincides with the
characteristic of the human visual system. Human beings first focus on the noticeable parts of an image; the noticeable parts are the fluctuations of an image, so we extract the texture, which is favorable for subsequent manipulation. Let I(x,y) be a rectangular image, Iinter be the interior of I(x,y), and Ibou be the boundary of I(x,y).
For simplicity, assume 0 ≤ x ≤ 1, 0 ≤ y ≤ 1. By solving the polyharmonic equation (5) with the given boundary conditions (6), we can obtain the polyharmonic component:

Δ^n p = 0 in Iinter, n = 1, 2, ...    (5)

∂^(kl) p / ∂ν^(kl) = ∂^(kl) I / ∂ν^(kl) on Ibou, l = 0, ..., n-1    (6)

where kl = 2l, i.e., the even-order normal derivatives. We need not consider the odd-order normal derivatives because they are matched automatically [7]. Since k0 = 0, p = I(x, y) on the boundary. These boundary values and normal derivatives ensure that the function values and the normal derivatives of orders k1, ..., kn-1 of p along the boundary match those of the original image I(x,y) there.
For n = 1, we obtain the Laplace equation with the Dirichlet boundary condition:

Δp = 0 in Iinter, p = I(x,y) on Ibou    (7)

For n = 2, Eq. (5) becomes the biharmonic equation with the mixed boundary condition:

Δ²p = 0 in Iinter, p = I(x,y) and ∂²p/∂ν² = ∂²I/∂ν² on Ibou    (8)
We use the Laplace/Poisson equation solver proposed by Averbuch et al. [2,4] to solve Eqs. (7) and (8). The ABIV method provides more accurate solutions than those based on finite differences (FD) [5,15]. There are several versions of the ABIV method; we choose the simplest and most practical one to solve (7), which does not need to estimate any derivative. It follows the recipe

p(x,y) = p1(x,y) + Σ_k [ p_k^(1) g_k(x, 1-y) + p_k^(2) g_k(y, 1-x) + p_k^(3) g_k(x, y) + p_k^(4) g_k(y, x) ]    (9)

where p1(x,y) is a harmonic polynomial that matches I(x,y) at the four corner points of the image. Its simplest form is:

p1(x,y) = a3·xy + a2·x + a1·y + a0    (10)
Let p1(0,0) = I(0,0), p1(0,1) = I(0,1), p1(1,0) = I(1,0), p1(1,1) = I(1,1); then

I(0,0) = a0
I(0,1) = a1 + a0
I(1,0) = a2 + a0
I(1,1) = a3 + a2 + a1 + a0    (11)

By solving (11), we can easily obtain the parameters ai. The function g_k(x,y) is defined as follows:

g_k(x, y) = sin(kπx) · sinh(kπy) / sinh(kπ)    (12)

and p_k^(i), i = 1, 2, 3, 4, are the k-th 1D Fourier sine coefficients of the boundary functions I(x, 0) - p1(x, 0), I(0, y) - p1(0, y), I(x, 1) - p1(x, 1), and I(1, y) - p1(1, y), respectively, where 0 ≤ x ≤ 1, 0 ≤ y ≤ 1. Subtracting p(x,y) from I(x,y), we obtain r(x,y). It can be written as:

r(x, y) = Σ_i Σ_j s_ij · sin(iπx) · sin(jπy)    (13)

where s_ij are the 2D Fourier sine coefficients of r(x,y).
For a more precise approximation of I(x,y), we can segment an image I(x,y) into a set of rectangular blocks (possibly of different sizes) using characteristic functions. There is no overlap between adjacent patches, but adjacent patches may share boundaries. Then, we decompose each patch into two components, the polyharmonic component p and the residual r, according to the foregoing method.
2.4.1 The Image Pan-sharpening Scheme
Figure 4 shows a schematic diagram of the basic structure of the proposed image fusion scheme. For simplicity, we assume that there are just two source images, I1 and I2, and that the fused image is F.
Fig. 4. Block diagram of PHLST, the proposed fusion method
2.4.2 Pan-sharpening or Fusion Rules
The objective of image pan-sharpening or fusion is to combine multiple source images of the same scene and obtain a better quality image. The straightforward approach to image fusion is to compute the pixel-by-pixel average of the input images. Although image averaging is a simple method, a major drawback is that it can decrease image contrast. To avoid a loss of detail, the basic strategy here is to fuse p and r separately to construct a fused PHLST representation from the PHLST representations of the original data. p represents the base of the original image; we use the simplest method, averaging, to compute the fused p. r represents the detail or texture of the source image. The larger values in r correspond to the sharper brightness changes and thus to the salient features in the image, such as edges, lines, and region boundaries. Therefore, a good integration rule is to conserve the r of the two source images at each point, so we compute the composite r by the following equation:

rF = rI1 + rI2    (14)

where rI1 and rI2 represent the residuals from I1 and I2, respectively, and rF is the composite residual.
Subsequently, a composite image is constructed by performing an inverse PHLST. Since the PHLST provides spatial localization, the effect of the direct summing fusion rule can be illustrated in the following two aspects. If the same object appears more distinctly (in other words, with better contrast) in image I1 than in image I2, after fusion the object will be preserved with the better contrast of image I1; in a different scenario, if an object appears in image I1 while being absent in image I2, after fusion the object from image I1 will be preserved and the contrast of the composite image will be enhanced.
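The fusion rule itself is compact in code: average the base components p and sum the residuals r as in Eq. (14). The sketch below assumes the PHLST pairs (p, r) have already been computed by some decomposition:

```python
# Minimal sketch of the PHLST fusion rule (base averaging + Eq. (14)).
import numpy as np

def phlst_fusion_rule(p1, r1, p2, r2):
    """Fuse two PHLST pairs (p, r): average bases, sum residuals, recombine."""
    p_fused = (p1 + p2) / 2.0   # simple average of the base components
    r_fused = r1 + r2           # Eq. (14): conserve detail from both inputs
    return p_fused + r_fused    # the inverse PHLST here is just p + r
```

With identical bases and a detail-free second image, the fused result keeps all the detail of the first image, illustrating the "conserve r" behaviour described above.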
3. Evaluation Parameters
Performance measures are essential to determine the possible benefits of fusion as well as to compare results. Computational objective fusion metrics are an efficient alternative, as they need no display equipment or complex organization of an audience. The recent proliferation of image fusion algorithms has prompted the development of reliable and objective ways of evaluating and comparing their performance for any given application [9], [18], [5]. Five different measures are used to evaluate the performance of the algorithms under investigation: the difference quality index (QD), objective measure (E), Mannons quality index (QM), mutual information (MI), and image quality index (Qp). Detailed equations for these measures can be found in the literature. The objective measure is used here to quantify the average objective edge information between the fused and reference images.
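As an example of such a metric, mutual information between two images can be estimated from a joint gray-level histogram; the bin count below is an arbitrary illustrative choice, not a value taken from the paper:

```python
# Sketch of the mutual-information (MI) metric: higher MI between the fused
# image and a source image indicates more transferred information.
import numpy as np

def mutual_information(img1, img2, bins=32):
    """Histogram-based MI estimate in bits between two equal-size images."""
    joint, _, _ = np.histogram2d(img1.ravel(), img2.ravel(), bins=bins)
    pxy = joint / joint.sum()              # joint gray-level distribution
    px = pxy.sum(axis=1, keepdims=True)    # marginals
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                           # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))
```

An image shares maximal information with itself (MI equals its binned entropy), while a randomly shuffled copy shares almost none, so the metric behaves as the text describes.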
4. Qualitative Analysis and Experimental Results
In this section, we verify the performance of the proposed image fusion method by comparing it with three other image fusion methods using five image fusion metrics. The first algorithm is a Discrete Wavelet Transform fusion algorithm [10], where the source images are decomposed using the DWT, and the coefficients of the integrated image are computed by choosing the corresponding coefficients of the input images with the largest amplitude in the high frequency bands and by averaging the coefficients of the base band. The second algorithm uses Spatial Frequency, which measures the overall activity level in an image. The third fusion algorithm is a Non-Subsampled Contourlet Transform and Hue Intensity Saturation (NSCT+HIS) fusion algorithm [18]. The fourth is the proposed Polyharmonic Local Sine Transform (PHLST) fusion algorithm, which gives more quantitative information [12].
Satellite images contain much information, such as rivers, agricultural land, urban areas, forest areas, sea and roads. Extracting the information of both urban and rural features is an important task. A high-resolution SPOT-PAN image of 1024x1024 pixels and a high-spectral, low-spatial-resolution multispectral image of 256x256 pixels were used in this study for image sharpening. The four fusion techniques were applied to different cases, and the results were compared both qualitatively and quantitatively [5].
The images used in this study are from Kammam District, Hyderabad, and its vicinity, with both urban and rural features. The two images were geometrically corrected using ground control points extracted from maps, and were fused together using conventional and non-conventional methods of image fusion. Before fusion, the images must be properly co-registered and resampled. Several image sets were tested, but in this paper only two data sets are shown. Figures 5 and 6 show the data set 1 and data set 2 images, respectively, where a and b are the SPOT-XS and SPOT-PAN input images, c is the pan-sharpened or fused image obtained by the Spatial Frequency fusion technique, d is the fused image using the Discrete Wavelet Transform method, e is the fused image using the NSCT+HIS method, and f is the fused image using the PHLST method.
Fig. 5. Image Fusion of Data set 1 images (a-SPOT-XS image, b-SPOT-PAN image )
The quantitative assessments of the fused images are listed in Table 1. From this table, we can observe that the performance of the proposed algorithm is the best according to all metrics. The fused images are illustrated in Figs. 5c-5f.
Table 1. Experimental results of the pan-sharpened images of Data Set 1
Fig. 6. Image Fusion of Data set 2 images (a-SPOT-XS image, b-SPOT-PAN image )
Table 2. The experimental results of the pan-sharpened images of Data Set 2
5. Conclusions
The aim of this study is to select the best image sharpening technique by evaluating qualitative and quantitative parameters. Four image sharpening algorithms, based on Spatial Frequency, the DWT, NSCT+HIS and PHLST, have been applied to remote sensing satellite images. Five meaningful performance evaluation quality metrics, based on mutual information, image quality index, Mannons quality index, objective measure and difference quality index, were used to assess the effectiveness of the different image fusion algorithms. Results on the different data set images show that entropy and mutual information are higher in the PHLST-fused image; hence this method preserves a large amount of information from both the SPOT-XS and SPOT-PAN images. It is hoped that the techniques can be extended to different bands of SPOT-XS multispectral images and to fusion of multiple sensor images.
References
[1] da Cunha, A. L., Zhou, J. P., Do, M. N., 2006. The nonsubsampled contourlet transform: theory, design and application. IEEE Trans. on Image Processing, 15(10), pp. 3089-3101.
[2] Averbuch A., Israeli M., Vozovoi L., A fast Poisson solver of arbitrary order accuracy in rectangular regions, SIAM Journal on Scientific Computing 19(3), 1998, pp. 933-952.
[3] Alparone, L., et al., 2004. A global quality measurement of pan-sharpened multispectral imagery. IEEE Geoscience and Remote Sensing Letters, 1(4), pp. 313-317.
[4] Braverman E., Israeli M., Averbuch A., Vozovoi L., A fast 3D Poisson solver of arbitrary order accuracy, Journal of Computational Physics 144(1), 1998, pp. 109-136.
[5] Chen, H. and P. K. Varshney, 2007. A human perception inspired quality metric for image fusion based on regional information. Information Fusion, 8, pp. 193-207.
[6] D. A. Bluemke et al., Detection of Hepatic Lesions in Candidates for Surgery: Comparison of Ferumoxides-Enhanced MR Imaging and Dual Phase Helical CT, AJR 175, pp. 1653-1658, December 2000.
[7] E. Braverman, M. Israeli, A. Averbuch, and L. Vozovoi. A fast 3D Poisson solver of arbitrary order accuracy. J. Comput. Phys., 144, pp. 109-136, 1998.
[8] F. Maes, D. Vandermeulen, and P. Suetens, Medical image registration using mutual information, Proceedings of the IEEE, vol. 91, no. 10, pp. 1699-1721, 2003.
[9] Firooz Sadjadi, Comparative Image Fusion Analysis, Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05) Workshops, Volume 03, 2005.
[10] Gonzalez and Woods, Digital Image Processing, Prentice Hall, 2nd edition, 2001.
[11] J. L. Johnson, M. L. Padgett, PCNN models and applications, IEEE Trans. Neural Networks, Vol. 10, pp. 480-498, 1999.
[12] A. L. da Cunha, J. Zhou, and M. N. Do, "The Nonsubsampled Contourlet Transform: Theory, Design, and Applications," IEEE Transactions on Image Processing, vol. 15, no. 10, pp. 3089-3101, Oct. 2006.
[13] Li, H., B. S. Manjunath, and S. K. Mitra, Multisensor image fusion using the wavelet transform, Graph. Models Image Process., vol. 57, no. 3, pp. 235-245, 1995.
[14] Liu Shangzheng, Bowen Liu, Zhang, An image fusion algorithm based on polyharmonic local sine transform (PHLST), Optica Applicata, Vol. XXXIX, No. 2, 2009.
[15] M. N. Do and M. Vetterli, The contourlet transform: An efficient directional multiresolution image representation, IEEE Trans. Image Proc., 2005.
[16] M. A. Mohamed and B. M. El-Den, Implementation of Image Fusion Techniques Using FPGA, IJCSNS International Journal of Computer Science and Network Security, Vol. 10, No. 5, May 2010.
[17] N. Saito and J.-F. Remy, The polyharmonic local sine transform: A new tool for local image analysis and synthesis without edge effect, Applied and Computational Harmonic Analysis, 20(1), pp. 41-73, 2006.
[18] Nedeljko Cvejic, Artur Loza, David Bull and Nishan Canagarajah, A Similarity Metric for Assessment of Image Fusion Algorithms, International Journal of Information and Communication Engineering, 2:3, 2006.
[19] O. Rockinger, Pixel-level fusion of image sequences using wavelet frames, in Proceedings in Image Fusion and Shape Variability Techniques, Mardia, K. V., Gill, C. A., and Dryden, I. L., Eds., pp. 149-154, Leeds University Press, Leeds, UK, 1996.
[20] Saito N., Remy J.-F., The polyharmonic local sine transform: A new tool for local image analysis and synthesis without edge effect, Applied and Computational Harmonic Analysis 20(1), 2006, pp. 41-73.
[21] Thomas Lehmann, Walter Oberschelp, Erich Pelikan and Rudolf Repges, Medical Image Processing, Springer Verlag, 1997.
[22] V. Barra and J.-Y. Boire, A general framework for the fusion of anatomical and functional medical images, NeuroImage, vol. 13, no. 3, pp. 410-424, 2001.
[23] Y. Jia, M. Xiao, Fusion of Pan and Multispectral Images Based on Contourlet Transform, July 5-7, 2010, IAPRS, Vol. XXXVIII, Part 7B.
[24] Y.-M. Zhu and S. M. Cochoff, An object-oriented framework for medical image registration, fusion, and visualization, Computer Methods and Programs in Biomedicine, vol. 82, no. 3, pp. 258-267, 2006.
Data Hiding for Medical Images: Issues and Challenges
J. Samuel Manoharan1, Dr. Kezi Selva Vijila2, A. Sathesh3, D. Narain Ponraj4
1,3,4 Asst. Professor, ECE Dept., Karunya University, South India
2 Professor, Christian College of Engineering, South India
[email protected], [email protected], [email protected], [email protected]
Abstract
Data hiding is an age-old technique that has been gaining widespread attention and significance with the increasing threat of insecure data transmission and reception, as well as data hacking. Data hiding in medical images is of great significance as it serves multiple purposes, such as copyright protection, reduction of bandwidth and telediagnosis. Medical image data hiding has to be dealt with carefully, as there cannot be any compromise on the accuracy of data hiding: inaccuracy may result in wrong diagnosis and ultimately in severe consequences. An extensive survey has been carried out over a pool of transform-based techniques for hiding patient information in medical images, in an attempt to bring out an ideal choice of transform for each appropriate application.
Keywords: Robustness, Fidelity, Embedding Capacity, Correlation Coefficient, Geometric Attacks
1. Introduction
Data hiding is an ancient technique still widely used for concealing vital information inside another image, audio or video sequence. It may serve the purposes of content authenticity, copyright protection, and detection of fraud and data manipulation. Apart from the above-mentioned data security applications, it also serves as a medium for transmitting secret data or a code inside a host image, audio or video for steganographic applications, and at the same time for bandwidth reduction applications. Due to its wide range of applications, especially in the fields of data security and communication, several techniques are being brought forward to arrive at optimal embedding and recovery procedures and algorithms with respect to many parameters. A basic data hiding system for medical images is shown below in Figure 1, where the medical image, which may be a retinal image, MRI or cranial image, is used as the cover or host image. The data, which is usually the patient information (Electronic Patient Record, EPR) as well as the diagnosis report, is used as the watermark which has to be embedded in the cover image.
Figure 1: A General Data Embedding and Retrieval System
The cover image is transformed into the frequency domain using any choice of transform (T), selected using certain criteria, and a suitable embedding algorithm is used to embed the text inside the cover image. The embedded image is then transmitted, received and subjected to the same transform as used on the transmitter side, and the patient information and the cover image are retrieved independently. Figure 2 illustrates the different medical images that could be used as cover images, where the first is a retinal image and the latter a cranial image, and figure 3 illustrates the different watermarks that could be embedded inside the cover images. The former is a doctor's digital signature, which could be embedded inside the cover image to serve the purpose of copyright protection, while the latter is patient information or a diagnosis report, which could be embedded inside the cover image to aid in telediagnosis.
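The pipeline of Figure 1 can be sketched end to end. For illustration only, the transform T is taken as the identity and the EPR bits are placed in the least significant bits of the cover pixels; the surveyed methods instead apply a transform (DWT, DCT, etc.) and embed in the transform coefficients:

```python
# Hedged sketch of the embed/retrieve pipeline of Figure 1, with T = identity
# and LSB embedding standing in for a transform-domain method.
import numpy as np

def embed_epr(cover, epr_text):
    """Embed an ASCII EPR string into the LSBs of a uint8 cover image.
    Returns the stego image and the number of embedded bits."""
    bits = np.unpackbits(np.frombuffer(epr_text.encode("ascii"),
                                       dtype=np.uint8))
    flat = cover.astype(np.uint8).ravel().copy()
    assert bits.size <= flat.size, "EPR too large for this cover image"
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits   # set LSBs
    return flat.reshape(cover.shape), bits.size

def extract_epr(stego, n_bits):
    """Recover the EPR string from the first n_bits LSBs of the stego image."""
    bits = stego.ravel()[:n_bits] & 1
    return np.packbits(bits).tobytes().decode("ascii")
```

Because only LSBs change, no pixel moves by more than one gray level, which is why such schemes keep high fidelity; their weakness, as discussed below, is poor robustness to channel noise and attacks.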
Figure 2: Medical Cover Images used
Figure 3: Watermarks used
Though the general system shown in Figure 1 may appear to be a simple mechanism,
the optimality of the embedding and retrieval techniques used depends heavily on the choice of various factors,
which are surveyed and dealt with in depth in the following sections.
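The transform, embed, inverse-transform, retrieve flow just described can be sketched in a few lines. This is a minimal illustration with an additive embedding rule and a pluggable transform pair; the function names, the identity stand-in for T, and the strength parameter alpha are all illustrative assumptions, not any specific published scheme:

```python
import numpy as np

def embed(cover, wm_bits, T, T_inv, alpha=8.0):
    """Forward-transform the cover, add the watermark bits to the
    first coefficients (flattened order), then invert the transform."""
    coeffs = T(cover).ravel().copy()
    coeffs[:len(wm_bits)] += alpha * (2 * wm_bits - 1)  # {0,1} -> {-1,+1}
    return T_inv(coeffs.reshape(cover.shape))

def extract(stego, cover, n_bits, T):
    """Non-blind retrieval: the receiver re-applies T to both the
    received image and the original cover and compares coefficients."""
    diff = (T(stego) - T(cover)).ravel()[:n_bits]
    return (diff > 0).astype(int)

# Demo with the identity "transform" standing in for DCT/DWT/etc.
ident = lambda x: np.asarray(x, dtype=float)
rng = np.random.default_rng(0)
cover = rng.integers(0, 256, (8, 8))
bits = np.array([1, 0, 1, 1, 0, 1, 0, 0])
stego = embed(cover, bits, ident, ident)
print((extract(stego, cover, 8, ident) == bits).all())  # True
```

Swapping `ident` for a real orthogonal transform pair turns the same skeleton into a frequency domain scheme.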
2. Literature Review
Basically, any data hiding technique is broadly classified as either spatial domain based or
frequency domain based. Spatial domain data hiding involves manipulation of pixel values, while frequency
domain techniques involve manipulation of frequency coefficients. While each has its own merits and demerits,
almost all data hiding techniques revolve around certain key factors such as robustness, fidelity, embedding
capacity and method of retrieval. Generally, after the data embedding process, the embedded medical image is
sent through a communication channel, which may be a wired or wireless medium. In both cases, there are
always components present that tend to degrade the watermarked image; commonly, these components are
termed noise. Some predominant forms of noise are random noise, Gaussian noise, impulse noise and speckle
noise. When the embedded image, along with the noise added during transmission, is subjected to retrieval at
the receiver side, the extracted information or watermark may not exactly resemble the original watermark
before embedding, which means that the embedding algorithm is not strong enough to withstand the noise.
Here, robustness is the parameter used to measure how well the embedded image withstands attacks, where
attacks may be intentional, such as cropping, rotation, filtering and compression, or unintentional, such as
noise. Fidelity describes the degree of resemblance of the extracted image to the original image: the closer the
resemblance, the better the embedding algorithm. It is usually measured by a parameter known as the
correlation coefficient, which lies in the interval [0, 1]; a value towards 1 indicates a strong embedding
algorithm, while values towards 0 indicate weakness. Another important criterion is the embedding capacity,
which is a measure of how much data can be packed inside a cover image without causing any distortion to
the embedded image.
Another method of classification divides watermarks into robust, fragile and semi-fragile. While
robust watermarks are able to withstand external attacks, fragile watermarks are destroyed when exposed to
attacks. Robust watermarks can serve the purposes of secret message transmission and copyright protection,
while fragile watermarks serve the purpose of tamper detection. A further classification is based on the method
of extraction of the watermark at the receiver side: if the original image is needed at the receiver for extraction,
the process is known as non-blind extraction, and if the original image is not required, the extraction is called
blind watermarking. This survey has been carried out taking into account key factors such as robustness and
fidelity, in terms of PSNR and the cross-correlation coefficient.
2.1 Review of Spatial Domain Techniques
Work on watermarking commenced in the late 1980s, with
Ingemar J. Cox et al.'s [1] technique for secure spread spectrum watermarking for multimedia, which had the
property of tamper resistance, followed by Jiri Fridrich [2], who utilized the complementary robustness
properties of both low-frequency watermarks and spread-spectrum-generated watermarks to obtain a
watermarked image capable of surviving an extremely wide range of severe image distortions. Brian Chen et al.
[3] established a tradeoff between embedding capacity and watermarked image quality through their
Quantization Index Modulation (QIM) methods. With advancements in technology, a fuzzy-based
watermarking method was proposed by Pankaj Lande et al. [4], making it applicable for ownership and
copyright protection. Shaomin Zhu et al. [5] proposed a scheme for tamper identification, but a fragile one
showing poor tolerance towards high-frequency attacks. Srdjan Stankovic et al. [6] introduced a Radon-based
approach to incorporate translation invariance properties into the watermark. Following these developments, the
research in watermarking has taken a turn towards exploiting both spatial and frequency domain properties to
achieve the desired robustness and image quality. Frank et al. [7] introduced a watermarking scheme to increase
the watermarking capacity and also to provide double protection to the watermark through a watermark
splitting approach. Hsien et al. [8] provided a vector quantization based method to reduce storage and
transmission time. Navneet Mandhani et al. [9] introduced a code division multiple access scheme for hiding
data in monochrome images. Phen Lan et al.'s [10] hierarchical digital watermarking used the method of
average intensity comparison and proved to be storage effective. A genetic codebook partition scheme was
proposed by Feng-Hsing et al. [11], which proved to have good encoding time, good imperceptibility and
strong robustness towards attacks. A region-of-interest based data hiding scheme was introduced by Amit
Phadikar et al. [12], where the regions were selected using quad tree decomposition; it was a non-blind
approach during extraction but was translation invariant. Ming-Chiang Hu proposed a blind, lossless, two-phase
data embedding method [13] in the spatial domain which exhibited good tolerance towards various attacks,
especially geometric attacks. A block-based approach was proposed by Ju-Yuan Hsiao et al. [14], where the
image was divided into two areas, one used for data embedding and the other for auxiliary information
embedding based on edge prediction; this method proved to increase the embedding capacity. A further
improvement in embedding capacity was shown by Shih-chieh Shie et al. [15] using compressed VQ indices of
images. Xiang-Yang Wang et al. [16] utilized pseudo-Zernike moments and Krawtchouk moments to develop a
robust image watermarking algorithm that specifically addresses geometric distortion. A recent advancement in
spatial domain methods is the utilization of the luminance values of an image, proposed by Jamal Hussein [17],
which exhibited good tolerance towards JPEG compression and rotation attacks.
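The pixel-value manipulation that underlies this whole family can be illustrated with its simplest member, LSB substitution. The sketch below is generic, not a reimplementation of any surveyed method:

```python
import numpy as np

def lsb_embed(cover, bits):
    """Write the payload into the least-significant bit of the
    first len(bits) pixels, scanned in row-major order."""
    flat = cover.ravel().copy()
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits
    return flat.reshape(cover.shape)

def lsb_extract(stego, n_bits):
    """Blind extraction: only the stego image is needed."""
    return stego.ravel()[:n_bits] & 1

cover = np.arange(16, dtype=np.uint8).reshape(4, 4)
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
stego = lsb_embed(cover, bits)
print((lsb_extract(stego, 8) == bits).all())  # True
# Each carrier pixel changes by at most 1 grey level, so fidelity is
# high, but the hidden bits do not survive recompression or filtering.
```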
2.2 Review of Frequency Domain Techniques
Even though the spatial domain techniques mentioned above provide good fidelity and
increased embedding capacity, image quality tends to degrade under increasingly aggressive image processing
operations such as heavier compression, scaling, filtering and higher levels of noise, since spatial domain
techniques operate on raw pixel values. Hence, in an attempt to overcome these drawbacks, there was a shift
towards frequency domain techniques, where the image pixels are converted into frequency domain coefficients
before embedding. Normally the transformation divides the image into high-frequency and low-frequency
components, with mid-band frequency components in between. This decomposition also gives the user
increased flexibility in choosing an ideal embedding location depending on the application. If the watermarked
image is likely to be compressed along its path, the watermark can be embedded into the low or mid frequency
components. Likewise, if the watermarked image is likely to pass through a channel prone to high levels of
noise, it is desirable to embed in the low frequency components of the image. The heart of any frequency
domain watermarking scheme is the transform used for decomposition and reconstruction. Many transforms
exist, such as the Fast Fourier Transform (FFT), Discrete Cosine Transform (DCT), Discrete Wavelet
Transform (DWT), Contourlet Transform (CT), Ridgelet Transform (RT) and Shearlet Transform (ST). Each
transform is unique: the DWT provides increased levels of decomposition but cannot be used for images with
sharp discontinuities, whereas the CT can be utilized for smooth contoured images, while the RT can be used
for fingerprint watermarking and reconstruction. Hence, the choice of an appropriate transform for a specific
application is truly a challenge in obtaining optimal embedding results. Most watermarking in the frequency
domain utilizes the robustness property and the mid-band coefficient characteristics of the Discrete Cosine
Transform (DCT).
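The mid-band idea can be made concrete with a small sketch of a middle-band coefficient-relationship scheme on one 8x8 block, using an orthonormal DCT built directly with NumPy. The coefficient positions (3,4)/(4,3) and the margin are illustrative choices, not taken from any cited paper:

```python
import numpy as np

N = 8
# Orthonormal DCT-II matrix: C @ block @ C.T is the 2-D DCT,
# and C.T @ coeffs @ C inverts it exactly.
C = np.array([[np.sqrt((1.0 if k == 0 else 2.0) / N) *
               np.cos(np.pi * (2 * n + 1) * k / (2 * N))
               for n in range(N)] for k in range(N)])

def embed_bit(block, bit, u=(3, 4), v=(4, 3), margin=4.0):
    """Encode one bit in the ordering of two mid-band coefficients."""
    D = C @ block @ C.T
    a, b = D[u], D[v]
    s = 1.0 if bit else -1.0        # bit 1 -> D[u] > D[v]
    if s * (a - b) < margin:        # enforce the required ordering
        mid = (a + b) / 2.0         # with a safety margin
        a, b = mid + s * margin / 2.0, mid - s * margin / 2.0
    D[u], D[v] = a, b
    return C.T @ D @ C

def extract_bit(block, u=(3, 4), v=(4, 3)):
    """Blind detection: only the received block is inspected."""
    D = C @ block @ C.T
    return int(D[u] > D[v])

rng = np.random.default_rng(1)
block = rng.integers(0, 256, (N, N)).astype(float)
print([extract_bit(embed_bit(block, b)) for b in (0, 1, 1, 0)])  # [0, 1, 1, 0]
```

The margin is what buys robustness: a larger margin survives heavier quantization at the cost of fidelity, which is exactly the tradeoff discussed above.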
After a DCT is performed on the image to obtain the coefficients, a pseudo-random sequence
corresponding to the watermark may be embedded into the DCT coefficients, as proposed by Mauro Barni et al.
[18]; the resulting watermarked image proved robust towards aggressive image processing operations like
compression and median filtering. A blind and translation-invariant frequency domain watermarking approach
was put forward by Joseph et al. [19] by modulating the magnitude components in Fourier space. Yi Ta et al.
[20] proposed an adjusted-purpose watermarking technique in which the user can vary a parameter known as
the quantity factor so as to make the resulting watermark fragile, semi-fragile or robust. Keeping in view the
security parameter of the watermarking system, an Arnold iteration transform was utilized by Rongrong et al.
[21], and the resulting watermark was found to be robust against some spatial attacks like contrast changes,
scribbling, low-pass and high-pass filtering, and JPEG processing. A blind approach was proposed by Dimitar
et al. [22] using a visual mask generated from the image content, with Jieh Ming et al. [23] and Chin Chen
Chang et al. [24] proposing semi-blind approaches using singular value decomposition (SVD), with the
watermarked image strongly resistant towards attacks and also usable for tamper detection applications. A
middle-band coefficient exchange system was introduced by
that Curvelet transforms [51] - [59] and Contourlet transforms [60] - [63] have a significant edge over other
conventional techniques. The work of Chen et al. has shown that optimal coefficients in fingerprint images [64]
could be extracted using complex ridgelet transforms. A further advancement over the Curvelet transform is the
Shearlet transform, put forward by Wang Q. Lim and Sheng Yi et al., which is used to predict the behavior of
edges [65] - [66] towards multiscale representations. Ibrahim et al., Guiduo et al., Akhaee et al., Haohao et al.
and Minh Do et al. exploited the Contourlet transform [67] for efficient directional multiresolution
representation, capable of bringing out the directional properties of each of the coefficients.
2.3 Attacks and Embedding Capacity
Once a transform suitable and compatible with the application has been chosen, the most
important requirement that follows is that the embedding algorithm should be stable. Stability is best when the
embedded image or content is able to withstand the intentional and unintentional attacks that intervene in the
communication channel. A review of the work of Voloshynovskiy et al., Jonathan et al., Frank Hartung et al.,
Claude Desset et al. and Raphael et al. [68] - [71] shows a wide range of attacks predominant in the
transmission channel. Noise is a common obstacle and is classified as an unintentional attack, while cropping,
filtering, scaling, rotation [72] and compression are classified as intentional attacks, as they are performed on
the embedded image in an attempt to destroy the watermark or retrieve the information in some way or other.
Hence, it is necessary to test the stability of the embedding algorithm by subjecting the watermarked image to
all the above attacks and measuring its robustness. As mentioned in previous sections, the normalized
cross-correlation coefficient is mostly used to evaluate robustness, where a value towards 1 indicates a strong
embedding algorithm while values towards 0 indicate weakness. A set of images subjected to the above attacks
is shown below in Figure 4.
Figure 4: Lena Images subjected to Noise, Rotation and Compression
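This measurement process can be simulated directly. The toy example below adds channel noise of increasing strength to a watermark sequence and watches the normalized correlation fall away from 1; the noise model, sequence length and sigma values are arbitrary illustrations of the idea, not a benchmark:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation used as the robustness score."""
    a = np.asarray(a, float).ravel()
    b = np.asarray(b, float).ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
watermark = rng.integers(0, 2, 1024).astype(float)

# An unintentional attack: additive Gaussian channel noise of
# increasing strength degrades the recovered watermark.
for sigma in (0.0, 0.3, 1.0):
    noisy = watermark + rng.normal(0.0, sigma, watermark.shape)
    print(f"sigma={sigma:.1f}  NCC={ncc(watermark, noisy):.3f}")
```

Intentional attacks (cropping, rotation, recompression) are evaluated the same way: apply the attack to the embedded image, re-run extraction, and score the extracted watermark against the original.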
Another critical criterion is the estimation of embedding capacity, a measure of how much
information can be packed or embedded inside the image without causing visual degradation or affecting
fidelity. Pierre Moulin and M. Kivanc Mihcak [73] used a statistical model comprising auto-regression, wavelet
statistical models and block DCT, while Fan Zhang exploited the relationship between watermark capacity and
watermark average energy to achieve a tradeoff.
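As a worked example of the raw budget involved, consider k-LSB embedding in a grayscale cover; this deliberately simple bound ignores headers, error correction and any statistical distortion constraint:

```python
def lsb_capacity_bytes(height, width, bits_per_pixel=1):
    """Raw payload that fits when each pixel donates
    `bits_per_pixel` least-significant bits."""
    return height * width * bits_per_pixel // 8

print(lsb_capacity_bytes(512, 512))     # 262144 bits -> 32768 bytes
print(lsb_capacity_bytes(512, 512, 2))  # 65536 bytes, at lower fidelity
```

The statistical models cited above exist precisely because this raw count overstates what can be hidden without visible or detectable distortion.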
3. Prospects and Applications
With all the above aspects discussed so far, the area of digital watermarking proves to be
an evergreen field as long as the security of transmitted or received data is an issue. Since multimedia content
is always subject to hacking and attacks, and bandwidth requirements for communication keep increasing, data
embedding along with encryption stands as one of the solutions for protection, for reduction of bandwidth,
time and storage space, and for detection of attacks. A recent extension of data hiding towards medical imaging
[74] has attracted considerable interest from researchers due to its significant benefits ranging from telemetry
to telediagnosis. Rajendra Acharya et al. [75] introduced a technique wherein the electronic information of the
patient, commonly termed the Electronic Patient Information (EPR), which contains the name and personal
details of the patient, is embedded into the medical image, thus saving storage space, providing a high class of
electronic security and preventing any attempt at tampering. Following this, Jason Dowling et al. put forward a
comparative analysis [76] between DCT and DWT techniques for embedding the EPR in medical images,
obtaining critical inferences after exposing them to some common prevailing attacks. These ideas could be
extended to embedding the entire patient diagnosis report, available in the form of text, inside the medical
image, thus reducing storage space. The text could be
compressed, thus requiring less bandwidth for transmission. Once transmitted, the doctor on the receiver side
could extract the report, analyze, modify, re-embed and retransmit the medical image along with the report,
thus aiding in telediagnosis and telemedicine. Since no compromise can be made on the fidelity of the
embedded medical image, as even the smallest change could bring about a wrong diagnosis, all the above
parameters play a very critical role in achieving the precision with which the report is embedded. Hence,
appropriate transforms for medical images could be investigated and incorporated to bring about optimal
embedding in medical images.
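The compress-then-embed step suggested here is straightforward with a lossless codec. The sketch below uses zlib on a hypothetical (entirely invented) report text and checks that the doctor-side round trip is exact, which is the non-negotiable requirement for diagnostic data:

```python
import zlib

# Hypothetical EPR / diagnosis text, repeated to mimic a full report.
report = ("Patient: J. Doe | DOB: 1970-01-01 | Modality: cranial MRI\n"
          "Finding: no acute intracranial abnormality detected.\n") * 20

payload = zlib.compress(report.encode("utf-8"), level=9)
print(len(report.encode("utf-8")), "->", len(payload), "bytes")

# Lossless round trip: the extracted report must match exactly,
# since even a one-character change could alter a diagnosis.
assert zlib.decompress(payload).decode("utf-8") == report
```

The compressed payload size, compared against a capacity estimate for the chosen cover image, tells immediately whether the report fits without exceeding the distortion budget.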
4. References
[1]. Ingemar J. Cox et al, Secure Spread Spectrum Watermarking for Multimedia, IEEE Transactions on
Image Processing, Vol. 6, No. 12, pp. 1673-1687, 1997.
[2]. Jieh-Ming Shieh et al, A Semi-Blind Digital Watermarking Scheme based on Singular Value
Decomposition, Intl Journal of Computer Standards & Interfaces, Vol. 28, pp. 428-440, 2006.
[3]. Brian Chen et al, Preprocessed and Post processed Quantization Index Modulation Methods for Digital
Watermarking, Proc. of SPIE: Security and Watermarking of Multimedia Contents II, Vol. 3971, pp 48-59,
2000.
[4]. Pankaj U. Lande et al, A Fuzzy Logic Approach to Encrypted Watermarking for still Images in Wavelet
Domain on FPGA, International Journal of Signal Processing, Image Processing and Pattern Recognition,
Vol.3, No.2, pp. 1 10, 2010.
[5]. Shaomin Zhu et al, A Novel Fragile Watermarking Scheme for Image Tamper Detection and Recovery,
Chinese Optics Letters, Vol. 8, Issue. 7, pp. 661 665, 2010.
[6]. Srdjan Stankovic et al, Watermarking in the Space/Spatial Domain using Two-Dimensional Radon-
Wigner Distribution, IEEE Transactions on Image Processing, Vol. 10, No. 4, pp. 650-658, April 2001.
[7]. Frank Hartung et al, Spread Spectrum Watermarking: Malicious Attacks and Counterattacks, Intl Journal
of Security and Watermarking of Multimedia Contents, Vol. 3657, pp. 147-158, 1999.
[8]. Hsien Chu Wu and Chin Chen Chang, A Novel Digital Watermarking Scheme based on the Vector
Quantization Technique, International Journal of Computers and Security, Vol. 24, Issue. 6, pp. 460 - 471,
2005.
[9]. Navneet Mandhani and Subhash Kak, Watermarking using Decimal Sequences, International Journal of
Cryptologia, Vol. 29, pp. 50-58, 2005.
[10]. Phen Lan Lin, Chung Kai Hsieh and Po Whei Huang, A Hierarchical Digital Watermarking Method for
Image Tamper Detection and Recovery, International Journal of Pattern Recognition, Vol. 38, pp. 2519 -
2529, 2005.
[11]. Feng Hsing Wang, Lakshmi C. Jain and Jeng Shyang Pan, VQ based Watermarking scheme with Genetic
codebook partition, International Journal of Network and Computer Applications, Vol. 30, Issue. 1, pp. 4
23, 2007.
[12]. Amit Phadikar and Santi P. Maity, ROI Based Error Concealment of Compressed Object Based Image
using QIM Data Hiding and Wavelet Transform, IEEE Transaction on Consumer Electronics, Vol. 56, No.
2, pp. 971-979, 2010.
[13]. Ming Chiang Hu, Der Chyuan Lou and Ming Chang Chang, Dual Wrapped Digital Watermarking
scheme for copyright protection, International Journal of Computers and Security, Vol. 26, Issue. 4,
pp. 319 330, 2007.
[14]. Ju-Yuan Hsiao, Block-based Reversible Data Embedding, International Journal of Signal Processing,
Vol. 89, Issue. 4, pp. 556-569, 2009.
[15]. Shih-chieh Shie and Shinfeng D. Lin, Data Hiding Based on Compressed VQ Indices of Images,
International Journal of Computer Standards and Interfaces, Vol. 31, Issue. 6, pp. 1143 1149, 2009
[16]. Xiang-Yang Wang, Zi Han Xu and Hong Ying Yang, A Robust Image Watermarking Algorithm using
SVR Detection, International Journal of Expert Systems with Applications, Vol. 36, Issue. 5, pp. 9056
9064, 2009.
[17]. Jamal A. Hussein et al, Spatial Domain Watermarking Scheme for Coloured Images based on Log -
Average Luminance, International Journal of Computing, Vol. 2, Issue. 1, pp. 100 103, 2010.
[18]. Mauro Barni et al, A DCT-Domain System for Robust Image Watermarking, Intl Journal of Signal
Processing, Vol. 66, Issue.3, pp. 357-372, 1998.
[19]. Joseph O Ruanaidh, Holger Peterson, Alexander Herrigel, Shelby Pereira and Thierry Pun, Cryptographic
Copyright Protection for Digital Images based on Watermarking Techniques, International Journal ofTheoretical Computer Science, Vol. 226, Issues. 1- 2, pp. 117-142, 1999.
[20]. Yi-Ta Wu and Frank Y. Shih, An Adjusted-Purpose Digital Watermarking Technique, International
Journal of Pattern Recognition, Vol. 37, Issue. 12, pp. 2349-2359, 2004.
[21]. Rongrong Ni, Quiqi Ruan and H.D. Cheng, Secure Semi-Blind Watermarking based on Iteration
Mapping and Image Features, International Journal of Pattern Recognition, Vol. 38, Issue. 3, pp. 357-368,
2005.
[22]. Dimitar Taskovski, Sofia Bogdavona and Momcilo Bogdanov, Blind Low Frequency Watermarking
Method, International Journal of Signal Processing, Vol. 2, Issue. 3, pp. 146 150, 2006.[23]. Jieh-Ming Shieh, Der Chyuan Lou and Ming Chang Chang, A Semi-Blind Digital Watermarking Scheme
based on Singular Value Decomposition, International Journal of Computer Standards and Interfaces, Vol.
28, Issue. 4, pp. 428-440, 2006.
[24]. Chin-Chen Chang, Piyu Tsai and Chia Chen Lin, SVD Based Digital Image Watermarking Scheme,
Pattern Recognition Letters, Vol. 26, Issue. 10, pp 1577-1586, 2005.
[25]. Vikas Saxena, Paridhi Khemka, Adti Harsulkar and J.P.Gupta, Performance Analysis of Color Channel
for DCT based Image Watermarking Scheme, International Journal of Security and its Applications, Vol.
1, No. 2, pp. 41 47, 2007.
[26]. Neminath Hubballi and Kanyakumari D.P., Novel DCT based Watermarking Scheme for Digital
Images, International Journal of Recent Trends in Engineering, Vol. 1, No. 1, pp. 430-433, 2009.
[27]. David Asatryan and Naira Asatryan, Combined Spatial and Frequency Domain Watermarking,
International Conference on Data Mining, pp. 323- 326, 2003[28]. Bum Soo Kim et al, Robust Digital Watermarking method against Geometrical Attacks, International
Journal of Real Time Imaging, Vol. 9, pp. 139 149, 2003.
[29]. Don Zhang, Jian Feng and Kwok Tung Lo, Image Watermarking using tree based spatial frequency feature
of wavelet transform, International Journal of Visual Communication and Image representation, Vol. 14,
Issue. 4, pp. 474 491, 2003.
[30]. Fan Zhang, Wenyu Liu and Chunxiao Liu, High Capacity Watermarking in non edge texture under
statistical distortion constraint,
[31]. Dan Yu and Farook Sattar, A new blind Watermarking based on Independent Component Analysis,
Proceedings of the 1st International Conference on Digital Watermarking, pp. 51 - 63, 2003.
[32]. Chao-Hung Lai et al, Robust Image Watermarking against Local Geometric Attacks using Multiscale
Block Matching Method, International Journal of Visual Communication and Image Representation,
Vol. 20, pp. 377-388, 2009
[33] Chiang S. Jao, Brint S.U. and Hier D.B., Applying Wavelet Transform on Internet Based Radiological
Images, International Journal of Computer Methods and Programs in Biomedicine, Vol.58, Issue. 3, pp.
239 244, 1999.
[34]. Deepa Kundur and Hatzinakos D., Digital Watermarking using Multiresolution Wavelet Decomposition,
Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, Vol. 5, Issue.
12 15, pp. 2969 2972, 1998.
[35]. Jong Ryul Kim and Young Shik Moon, A Robust Wavelet-Based Digital Watermarking using Level
Adaptive Thresholding, Proceedings of the International Conference on Image Processing, Vol. 2, pp. 226
230, 1999.
[36]. Liu Gang and Yin Ke Xin, A Novel Chaos and HVS based Image Watermarking Algorithm,
Proceedings of International Conference on Computer Mechatronics, Control and Electronic Engineering,
Vol. 1, Issue. 24 26, pp. 31 34, 2010
[37]. Jian-Guo Cao, Fowler J.E, and Younan N.H., An Image-Adaptive Watermark based on a Redundant
Wavelet Transform, Proceedings of the IEEE International Conference on Image Processing, Vol. 2, pp.
277-280, 2001.
[38]. Nam-Yong and Lucier B.J., Wavelet Methods for Inverting Radon transform with Noisy Data, IEEE
Transactions on Image Processing, Vol. 10, Issue. 1, pp. 79 94, 2001
[39]. Cayre F, Fontaine C and Furon T., Watermarking Security: Theory and Practice, IEEE Transactions on
Signal Processing, Vol. 53, Issue. 10, pp. 3976 3987, 2005.
[40]. Zhang Diming and Yao Li, A Non Blind Watermarking on 3D model in Spatial Domain, Proceedings of
International Conference on Computer Application and System Modelling, Vol.10, Issue. 22 24, pp. 267
269, 2010.
[41]. G.S. El-Taweel, Onsi H.M, Samy M and Darwish M.G., Secure and Non-Blind Watermarking Scheme
for color Images based on DWT, International Journal on Graphics, Vision and Image Processing, Vol. 5,
Issue 4, pp. 1-5, 2005
[42]. El Iskandarani, Darwish and Abubahia A.M., Capacity and Quality Improvement in blind second
generation Watermarking, Proceedings of the International Conference on Security Technology, Vol. 1,
Issue. 5 8, pp. 139 143, 2009.
[43]. Yong Wu, Yuanjun He and Hongming Cai, Optimal Threshold Selection Algorithm in Edge Detection
based on Wavelet Transform, International Journal of Image and Vision Computing, Vol. 23, Issue. 13, pp.
1159-1169, 2005
[44]. Santi P. Maity, Malay K. Kundu and Tirtha S. Das, Robust Spread Spectrum Watermarking with
Improved Capacity, International Journal of Pattern Recognition, Vol. 28, Issue. 3, 2007.
[45]. Jiang Lung Liu, Der Chyuan Lou, Ming Chang Chang and Hao Kuan Tso, A Robust Watermarking
Scheme using self reference image, International Journal of Computer Standards and Interfaces, Vol. 28,
Issue. 3, pp. 356 367, 2006.
[46]. Mahmood Al Khassaweneh and Seling Aviyente, Spatially Adaptive Wavelet Thresholding for Image
Watermarking, Proceedings of International Conference on Multimedia and Expo, pp. 1597 1600, 2006.
[47]. Gaurav Bhatnagar and Balasubramanian Raman, A New Robust Reference Watermarking Scheme based
on DWT-SVD. International Journal of Computer Standards and Interfaces, Vol.35, Issue.5, pp. 1002-
1013, 2009.
[48]. Huijuan Li, Image Encryption based on Gyrator Transform and Two-Step Phase-Shifting Interferometry,
International Journal of Optics and Lasers in Engineering, Vol.47, Issue. 1, pp 45-50, 2009.
[49]. Mohammed Ouhsain and Ben Hamza A., Image Watermarking scheme using Non Negative Matrix
Factorization and Wavelet Transform, International Journal of Expert Systems with Applications, Vol. 36,
Issue. 2, pp. 2123- 2129, 2009.
[50]. Emmanuel J. Candes et al, New Tight Frames of Curvelets and Optimal Representations of Objects with
C2 Singularities, Intl Journal of Communications on Pure and Applied Mathematics, Vol. 57, Issue. 2,
pp. 219 - 266, 2002.
[51]. Jean-Luc Starck, Mai K. Nguyen and Fionn Murtagh, Wavelets and Curvelets for Image Deconvolution:
a Combined Approach, International Journal of Signal Processing, Vol. 83, Issue. 10, pp. 2279-2283, 2003.
[52]. Jean-Luc Starck, Moudden Y, Abrial L and M.Nguyen, Wavelets, Ridgelets and Curvelets on the
Sphere, International Journal of Astronomy and Astrophysics, Vol. 446, Issue. 3, pp.1191-1204, 2008.
[53]. Birgir Bjorn Saevarsson, Sveinsson J.R, and Benediktsson J.A, Time Invariant Curvelet Denoising,
Proceedings of the 6th Nordic Signal Processing Symposium, pp. 117 120, 2004.
[54]. Demin Wang and Speranza F, Curved Wavelet Transform for Image Coding, IEEE Transactions on
Image Processing, Vol.15, Issue.8, pp.2413-2421, 2006
[55]. Yi Xiao, Cheng L.M, and Cheng L.L., A Robust Image Watermarking Scheme based on novel HVS
model in Curvelet Domain, Proceedings of International Conference on Intelligent Information Hiding and
Multimedia Signal Processing, Issue. 15 17, pp. 343 347, 2008
[56]. Chune Zhang, Cheng L. L, Zhengding Qiu and Cheng L.M., Multipurpose Watermarking Based on
Multiscale Curvelet Transform, IEEE Transactions on Information Forensics and Security, Vol. 3, No. 4,
pp 611-619, December 2008.
[57]. G. Jagadeeswar Reddy, Jaya Chandra Prasad and Giri Prasadl, Finger Print Image Denoising using
Curvelet Transform, ARPN Journal of Engineering and Applied Sciences, Vol. 3, No. 3, pp. 31-35, 2008.
[58]. Zheng Wei Shen and Fu Cheng Liao, Adaptive Watermark Algorithm based fast Curvelet Transform,
Proceedings of International Conference on Wavelet Analysis and Pattern Recognition, Vol. 2, Issue. 30
31, pp. 518 - 523, 2008.
[59]. Myungjin Choi, Rae Young Kim and Moon Gyu Kim, The Curvelet Transform for Image Fusion,
International Journal of Physics, Vol. 48, pp. 324 - 328, 2006.
[60]. G.Y.Chen and B.Kegl, Complex Ridgelets for Image Denoising, International Journal of Pattern
Recognition, Vol. 40, Issue.2, pp. 578 585, 2005
[61]. Wang Q. Lim, The Discrete Shearlet Transform: A new directional transform and compactly supported
Shearlet frames, IEEE Transactions on Image Processing, Vol. 19, Issue. 5, pp. 1166 1180, 2010.
[62]. Sheng Yi, Labate, Easley and Krim H., Edge Detection and Processing using Shearlets, International
Journal of Applied and Computational Harmonic Analysis, Vol. 27, Issue.1, pp. 24 46, 2009
[63]. Ibrahim A. El Rube, Mohamad Abou El Nasr, Mostafa Naim and Mahmoud Farou, Contourlet versus
Wavelet Transform for a Robust Digital Watermarking Technique, Proceedings of World Congress of
Science, Engineering and Technology, Vol. 60, pp. 288 - 292, 2000.
[64]. Guiduo Duan, Anthony T.S. Ho and Xi Zhao, A Novel Non Redundant Contourlet Transform for Robust
Image Watermarking against non geometrical and geometrical attacks, Proceedings of the 5th International
Conference on Visual Information Engineering, Vol. 1, pp. 124 - 129, 2009.
[65]. Akhaee M.A., Sahraenian, and Marvasti, Contourlet Based Image Watermarking using Optimum
Detector in a Noisy Environment, IEEE Transactions on Image Processing, Vol. 19, Issue. 4, pp. 967 980,
2010.
[66]. Haohao Song, Songyu Yu, Xiaokang Yang, Li Song and Chen Wang, Contourlet based Image Adaptive
Watermarking, International Journal of Signal Processing: Image Communication, Vol. 23, Issue. 3, pp.
162 - 178, 2008.
[67]. Minh N. Do and Vetterli M, The Contourlet Transform: An Efficient Directional Multiresolution Image
Representation, IEEE Transactions on Image Processing, Vol. 14, Issue. 12, pp. 2091 - 2106, 2005.
[68]. S. Voloshynovskiy, Pereira, Pun T, Eggers J.J and Su J.K., Attacks on Digital Watermarks:
Classification, Estimation-based Attacks and Benchmarks, IEEE Communications Magazine, Vol. 39,
Issue. 8, pp. 118 126, 2001.
[69]. Jonathan K. Su, Joachim J. Eggers and Bernd Girod, Analysis of Digital Watermarks subjected to
Optimum Linear Filtering and Additive Noise, International Journal of Signal Processing, Vol. 81, Issue.
6, pp. 1141-1175, 2001.
[70]. Frank Hartung, Su J.K and Girod B., Spread Spectrum Watermarking: Malicious Attacks andCounterattacks, International Journal of Security and Watermarking of Multimedia Contents, Vol. 3657,
pp. 147 - 158, 1999.
[71]. Claude Desset, Benoit Macq and Luc Vandendorpe, Block Error-Correcting Codes for Systems with a
very high BER: Theoretical Analysis and Application to the Protection of Watermarks, International
Journal of Signal Processing: Image Communication, Vol. 17, Issue. 5, pp.409-421, 2002.
[72]. Kourosh Jafari-Khouzani and Soltanian Zadeh, Rotation-Invariant Multiresolution Texture Analysis using
Radon and Wavelet Transforms, IEEE Transactions on Image Processing, Vol. 14, No. 6, pp 783-795,
June 2005.
[73]. Pierre Moulin and Mihcak, A Framework for Evaluating the Data Hiding Capacity of Image Sources,
IEEE International Conference on Image Processing, Vol. 11, Issue. 9, pp 1-34, 2002.
[74]. Gouenou Coatrieux, Le Guillo, Cauvin and Roux, Reversible Watermarking for Knowledge Digest
Embedding and Reliability Control in Medical Images, IEEE Transactions on Information Technology in
Biomedicine, Vol. 13, Issue. 2, pp. 158-165, 2009.
[75]. Rajendra Acharya U, Niranjan, Iyengar, Kannathal and Lim Choo Min, Simultaneous Storage of Patient
Information with medical images in the Frequency Domain, Computer Methods and Programs in
Biomedicine, Vol. 76, pp. 13-19, 2004.
[76]. Jason Dowling et al, A Comparison of DCT and DWT Block Based Watermarking on Medical Image
Quality, Proceedings of the 6th International Conference on Digital Watermarking, Vol. 5041, pp. 454-
466, 2008.
A Survey on Ontology-Based Approach for Context Modelling and
Reasoning
R.Shyamala, R.Sunitha, G.Aghila
Department of Computer Science
School of Engineering and Technology
Pondicherry University, India.
Abstract
Computing is becoming increasingly mobile and pervasive in today's scenario, which implies that
applications should adapt to dynamic environments. A context-aware infrastructure requires an efficient
context model. There are several approaches for modelling context: object-oriented models, key-value, markup
scheme, graphical, logic-based, spatial and ontology-based models. The most efficient approach is the ontology-based model, which is used to represent concepts and their relationships. In this paper we present a
comparative study of different ontology-based models for context modelling and reasoning.
Keywords: Context modeling, Ontology, Pervasive computing.
ambiguity. There are many types of ontology in the literature, including: Domain ontology, Generic ontology,
Metadata ontology, Representation ontology, Task ontology and Method ontology. Domain ontology is designed to
represent knowledge relevant to a certain domain type, e.g. medical, mechanical, etc. Generic ontology is one
which has general concepts that can be applied to various technical domains. Representation ontology
formulates general representation entities without defining what should be represented, e.g. the Frame Ontology.
Task ontology provides specific terms for a particular task. Method ontology provides specific terms for a
particular problem-solving method. The ontology-based model uses OWL-DL (Web Ontology Language - Description Logic) to represent context information. OWL-DL is used to model a particular domain by defining
classes, individuals, characteristics of individuals (datatype properties), and relations between individuals
(object properties), and it is supported by a number of reasoning services. OWL-DL ontological models are used in
several architectures like Context Broker Architecture (CoBrA), Service Oriented Context Aware Middleware
(SOCAM) etc.
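As a rough, language-neutral illustration of this modelling style, the sketch below represents classes, individuals, a datatype property and an object property as plain subject-predicate-object triples in Python. The names used (Person, Room, hasAge, locatedIn) are our own illustrative examples, not drawn from any particular surveyed ontology.

```python
# Minimal triple-store sketch of OWL-DL style context modelling.
# All class, individual and property names here are illustrative only.
triples = set()

def add(s, p, o):
    """Assert one subject-predicate-object triple."""
    triples.add((s, p, o))

# Classes, and individuals typed via rdf:type
add("Person", "rdf:type", "owl:Class")
add("Room", "rdf:type", "owl:Class")
add("alice", "rdf:type", "Person")
add("room101", "rdf:type", "Room")

# Datatype property: links an individual to a literal value
add("alice", "hasAge", 30)

# Object property: links two individuals
add("alice", "locatedIn", "room101")

def values(s, p):
    """Return every object o such that (s, p, o) is asserted."""
    return {o for (s2, p2, o) in triples if s2 == s and p2 == p}
```

Here `values("alice", "locatedIn")` yields `{"room101"}`; a real OWL-DL reasoner would additionally derive entailments from class and property axioms.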
3.1 Advantages and Disadvantages
The most promising approach for context modelling is found in ontology-based models [1,4], because they
meet the six requirements dominant in pervasive environments: (1) distributed composition, (2) partial
validation, (3) richness and quality of information, (4) incompleteness and ambiguity, (5) level of formality, and
(6) applicability to existing environments. They clearly outperform the key-value, markup scheme, graphical,
logic-based, and object-oriented models in terms of expressiveness and interoperability. The big challenge
remains the right usage of ontology tools and languages. If an ontology contains a large number of individuals,
then online execution of ontology reasoning poses scalability issues.
4. Context Modeling and Reasoning
The literature on ontology-based context modelling can be classified into works related to ontologies for
context-aware applications, architectures using ontologies to model context, and domain-specific ontologies, as
shown in Figure 1.
4.1 Ontologies
CONON (CONtext ONtology) [5] is a Web Ontology Language (OWL) encoded context ontology for modelling context in pervasive computing environments, and for supporting logic-based context reasoning.
The CONON context model is divided into an upper ontology and specific ontologies. The upper context ontology captures
general concepts about basic context, and also provides extensibility for adding domain-specific ontology in a
hierarchical manner. Upper ontology consists of abstract classes describing a physical object including Person,
Activity, Computational Entity and Location, as well as a set of abstract sub-classes. Each entity is associated
with its attributes (owl:DatatypeProperty) and relations with other entities (owl:ObjectProperty).
Figure 1: Classification of Ontology based approaches
Specific ontology is a collection of ontologies which define the details of general concepts
and their features in each sub-domain. A number of concrete sub-classes are defined to model specific context in
a given environment (e.g., the abstract class IndoorSpace of the home domain is classified into four sub-classes:
Building, Room, Corridor and Entry). Logic reasoning is used in order to perform consistency checks and to
deduce high-level context knowledge from explicitly given low-level context information. There are two distinct
ways to perform reasoning with CONON: ontology reasoning by description-logic rules which are integrated in
the OWL semantics, e.g. for transitive and inverse relations, and user-defined reasoning by creating user
rules in first-order logic. For example, to find whether the user is sleeping or not, the rule is (?u locatedIn
Bedroom) ^ (Bedroom lightLevel LOW) ^ (Bedroom drapeStatus CLOSED) => (?u situation SLEEPING).
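As a rough illustration of how such a user-defined rule could be evaluated over context facts, the sketch below hard-codes the sleeping rule against a set of triples. The fact and rule encodings are our own simplification, not CONON's actual implementation, and the user name is invented.

```python
# Context facts as (subject, predicate, object) triples; "john" is illustrative.
facts = {
    ("john", "locatedIn", "Bedroom"),
    ("Bedroom", "lightLevel", "LOW"),
    ("Bedroom", "drapeStatus", "CLOSED"),
}

def infer_situation(user, facts):
    """Apply the CONON-style sleeping rule:
    (?u locatedIn Bedroom) ^ (Bedroom lightLevel LOW) ^
    (Bedroom drapeStatus CLOSED) => (?u situation SLEEPING)."""
    if ((user, "locatedIn", "Bedroom") in facts
            and ("Bedroom", "lightLevel", "LOW") in facts
            and ("Bedroom", "drapeStatus", "CLOSED") in facts):
        return (user, "situation", "SLEEPING")
    return None
```

A rule engine such as the ones used with OWL would match the rule body against all facts rather than hard-coding one rule, but the conjunction-then-conclusion shape is the same.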
Similar to CONON is the SOUPA [6] ontology (Standard Ontology for Ubiquitous and Pervasive
Applications). It is designed using the Web Ontology Language (OWL) to model and support pervasive
computing applications, and includes modular component vocabularies to represent intelligent agents with
associated beliefs, desires, and intentions, as well as time, space, events, user profiles, actions, and policies for security and
privacy. SOUPA consists of two distinct but related sets of ontologies: SOUPA Core and SOUPA Extension. The
set of the SOUPA Core ontologies attempts to define generic vocabularies for expressing concepts that are
associated with person, agent, belief-desire-intention (BDI), action, policy, time, space and event that are
universal for different pervasive computing applications. The set of SOUPA Extension ontologies, extended
from the core ontologies, define additional vocabularies for supporting specific types of applications and
provide examples for the future ontology extensions. The SOUPA Extension ontologies are defined with two
purposes: (i) define an extended set of vocabularies for supporting specific types of pervasive application
domains, and (ii) demonstrate how to define new ontologies by extending the SOUPA Core ontologies.
An ontology created by merging publicly available ontological content into a single, comprehensive,
and cohesive structure is called the SUMO [17, 18] (Suggested Upper Merged Ontology). SUMO is a large,
free, upper ontology in first order logic. SUMO provides definitions for general-purpose terms and acts as a
foundation for more specific domain ontologies. It is increasingly being used as a resource in natural language
understanding research. SUMO hasbeen used as thebasis for an interchange language, to resolve the meaning
of terms in web search, to express the deep semantics of restricted natural language sentences, and as a
repository of pragmatics and world knowledge to support question answering. The language used in SUMO to
[Figure 1 groups the surveyed approaches into three categories: Ontology (e.g. CONON, SOUPA, SUMO, CoBrA-ONT, CroCoON, CALA-ONT), Architecture (e.g. CoBrA, GCoM, CroCo, CASP, CALA) and Domain Specific (e.g. Home Healthcare, OWL & SWRL).]
represent knowledge is a version of KIF (Knowledge Interchange Format).
CoBrA-ONT [7] is an ontology model developed with the help of OWL and other building tools for
CoBrA. CoBrA-ONT is a collection of OWL ontologies for context-aware systems. CoBrA-ONT models the
basic concepts like people, places, agents, etc. in the environment. CoBrA-ONT consists of four sub-ontologies:
Place, Agent, Agent's Location and Agent's Activity. In the Place ontology the central concept is Place, with
attributes such as latitude and longitude to describe its location, and related concepts like Atomic place and
Compound place. In the Agent ontology the central concept is Agent, with specializations like person and software
agent, and attributes like name, email address and assigned roles like speaker or audience. The Agent's Location
ontology adds the locatedIn relation to the agent concept to capture the agent's location, i.e. in an atomic place or
a compound place. From the locatedIn property, two sub-properties are derived, locatedInAtomicPlace and
locatedInCompoundPlace, which have sub-properties like locatedInRoom, locatedInRestroom, locatedInBuilding,
locatedInCampus, etc. The Agent's Activity ontology describes the events happening at places and the events
attended by agents. The current event is represented using the class EventHappeningNow. Every event has a
schedule; PresentationSchedule is a class for presentation event with properties like startTime, endTime,
location.
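The sub-property hierarchy described above can be exploited for inference: asserting a specific property such as locatedInRoom implies the more general locatedIn. The property names below follow the text; the inference code itself is our own minimal sketch, not CoBrA's implementation.

```python
# Map each sub-property to its direct super-property (per the CoBrA-ONT text).
sub_property_of = {
    "locatedInRoom": "locatedInAtomicPlace",
    "locatedInRestroom": "locatedInAtomicPlace",
    "locatedInBuilding": "locatedInCompoundPlace",
    "locatedInCampus": "locatedInCompoundPlace",
    "locatedInAtomicPlace": "locatedIn",
    "locatedInCompoundPlace": "locatedIn",
}

def implied_properties(prop):
    """Return prop plus every super-property it entails, most specific first."""
    props = [prop]
    while prop in sub_property_of:
        prop = sub_property_of[prop]
        props.append(prop)
    return props
```

So an assertion using locatedInRoom also entails locatedInAtomicPlace and locatedIn, which is exactly the rdfs:subPropertyOf entailment an OWL reasoner would perform.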
The next ontology is the CroCoON [9] (Cross-application Context Ontology) which is a generic
ontology based context model developed for the architecture CroCo. CroCo is an ontology-based context
management service that allows for cross-application context gathering, modelling, and provision. CroCoON
allows for the integration of domain-specific knowledge to facilitate the usage of CroCo in diverse applications. CroCoON consists of an upper ontology and several sub-ontologies. The upper ontology is used to extend the model
and to integrate domain specific knowledge for diverse applications. These extensions are called Ontology
Profiles. The sub-ontologies model several aspects of context, like place, person, activity, time, device, software,
space, documents etc. These concepts are reused from ontologies like SOUPA, PROTON, and W3C Time
Ontology. CroCoON uses OWL and RDF for representing the context and it uses Jena Semantic Web
Framework which provides Jena rules and rule reasoner for reasoning purpose.
An ontology context model developed for learning environments is CALA-ONT [10] (Context
Aware Learning Architecture ONTology), which is designed for use within CALA (Context Aware Learning
Architecture). CALA is developed to support a context-aware learning service that employs knowledge and
reasoning of context and shares this information with intelligent learning services in ubiquitous learning
environments. In CALA-ONT, context information is represented in first-order predicate logic and the context model
is defined in OWL-DL. CALA-ONT consists of four top-level classes with sub-classes, and twelve main
properties which describe the relations between individuals in the top-level classes, along with their sub-properties. XML, RDF
Schema and OWL are part of the CALA-ONT model. For intelligent school spaces, the four top-level classes
are Person, Place, Computational Entity and Activity. Each top-level class has its sub-classes. For example, the class
Person may have sub-classes like Student, Teacher, Office staff etc. The twelve main object properties related to
top-level class are presentIn, hasUsage, hasComEntity, isUsedBy etc. Each property represents the binary
relationship linking an individual in the domain to an individual in the range. There are two ways to perform
reasoning with CALA-ONT. The first is ontology reasoning using first-order predicate logic over class relationships,
property characteristics, and restrictions; for example, the transitive subClassOf relation is expressed as
(?A rdfs:subClassOf ?B), (?B rdfs:subClassOf ?C) -> (?A rdfs:subClassOf ?C). The second is rule-based reasoning,
where a new context is inferred from information about various other contexts using Boolean algebra: the AND
operator is used to connect the information of two contexts and a new context is inferred.
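The transitive subClassOf inference mentioned above can be sketched as a simple fixed-point computation over subclass assertions; the class names in the example are our own illustration.

```python
def transitive_closure(sub_class_of):
    """Compute the closure of rdfs:subClassOf pairs:
    (A subClassOf B), (B subClassOf C) -> (A subClassOf C)."""
    closure = set(sub_class_of)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (b2, c) in list(closure):
                # Chain (a, b) with (b, c) to derive (a, c).
                if b == b2 and (a, c) not in closure:
                    closure.add((a, c))
                    changed = True
    return closure

# e.g. Student subClassOf Person, Person subClassOf Agent
pairs = {("Student", "Person"), ("Person", "Agent")}
```

With these two assertions the closure additionally contains ("Student", "Agent"), which is the derived fact a description-logic reasoner would report.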
4.2 Architectures using ontology for context modelling
In the literature there are a number of distributed systems developed to support pervasive computing, like
Intelligent Rooms [14], Cooltown [15] and Context Toolkit [16]. These architectures do not support
knowledge sharing and context reasoning because they do not have common ontologies. CoBrA [7] (Context
Broker Architecture) addresses these drawbacks by supporting knowledge sharing and context reasoning through a
common ontology defined using Semantic Web languages. CoBrA is an agent-based architecture for context-aware
computing in intelligent spaces. Physical spaces (e.g. living rooms, meeting rooms) embedded with intelligent
systems that provide computing services to users are called intelligent spaces. CoBrA-ONT, an ontology model,
is developed for use within the CoBrA architecture (discussed in 4.1). Context information is acquired from agents and
sensors and then integrated into a coherent model and shared among the devices and agents. The CoBrA
Architecture consists of three components: a context broker, context aware agents, and context aware devices.
Agents and devices can contact the context broker and exchange information by the FIPA Agent Communication
Language. Context Broker is an important component in CoBrA which maintains and manages the shared model
of context. The Context Broker acquires context from two sources: (1) external sources like information servers,
semantic web services, database, (2) intelligent spaces (data from sensors). Context Broker has the responsibility
of (i) acquiring contexts from heterogeneous information sources and maintaining the consistency of the overall
context knowledge through reasoning, (ii) helping distributed agents to share context knowledge through the use
of ontologies, agent communication languages and protocols, and (iii) protecting the privacy of users by
establishing and enforcing user-defined policies while sharing sensitive personal information with agents in the community. Context reasoning is done using a logic inference engine. The problem of the broker agent being a
bottle-neck in distributed systems is solved by a so-called broker-federation, which is a network of context
broker agents.
GCoM [8] is a generic context management model that supports collaborative reasoning by providing
structure for contexts, rules and their semantics in a multi-domain pervasive context-aware application. In
GCoM, context is represented using upper-level and lower-level ontologies, and rules represented in an
ontology-compatible rule language are used for reasoning. Context is divided into semi-
independent components, i.e. static and dynamic context instances, which makes GCoM dynamic and reusable in
pervasive computing environments. The GCoM model consists of three components: Context Ontology, Context Data
and Context related Rules. Ontology represents semantics, concepts and relationships in the context data.
The ontology component is formed by integrating the generic ontology and the domain-specific ontology. This
ontology is then stored in a Context-Onto repository. Context data represents instances of context that exist in
the form of profiled data or in the form of context instances obtained from the sensors. Sensed context is to be
communicated to GCoM using RDF/XML triple representation format. Sensed context is stored in a repository
and then converted into ontologies. Rules represent certain axioms that are used by context-aware systems to
reason out and derive decisions. These rules have two sources: rules that are explicitly given by the users
through the user interface and rules that are implicitly learnt by the system itself. Semantic mapping and
delivery module is responsible for mapping and conversion between the rules and the context-onto repository so as to
deliver data that is ready for reasoning using the Jena generic rule language.
Context management services for heterogeneous environments should support generic and flexible
mechanisms for cross application context handling, reasoning, security and privacy. One such service is the
CroCo [9], an ontology-based, cross-application context service which allows cross-application context
gathering and modelling for heterogeneous and networked environments. A generic, ontology-based context
model, called CroCoON (Cross-application Context Ontology), is developed for use within CroCo (discussed in 4.1). CroCo allows arbitrary context providers to submit, and context consumers to request, context
data via specific service interfaces. It follows the Blackboard model, which promotes a data-centric approach
enabling easy addition of new context providers and consumers. CroCo consists of three modules: Context
management module, Consistency checking and reasoning module, and Context data update and provision
module. Context management module consists of three layers: Context History (CH) consists of history of
updates to the context model, Consistent Context (CC) represents the currently valid, consistent contextual data,
and Inferred Knowledge (IK) layer consists of all derived information, i.e. reasoned from the current context
information. Consistency checking and reasoning module consists of a Consistency Manager (CM) and
Reasoning Manager (RM). The Consistency Manager is triggered whenever new context is added and is
responsible for enforcing consistency, performing consistency checks and detecting conflicts. The Reasoning Manager
is similar: it invokes the reasoners to start the reasoning process when relevant data
changes. The context data update and provision module provides two services: an Update service and a Query service.
The Update service enables data updates and changes in the model. The Query service enables retrieval of context
information from CroCo. Privacy Enforcers ensure the security of the data. There are three additional mechanisms in
CroCo which enables efficient consistency check and reasoning: Confidence value, Variability and Reputation.
Each context provider is given a confidence value indicating the accuracy and reliability of its data. A user's name may be
static while his location may be dynamic; this is called Variability, which is stored in the Aging Knowledge Base.
Each context provider is also given a reputation depending on the quality of its data; if a provider continuously sends
inconsistent data, its reputation decreases, resulting in a lower confidence value.
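One possible reading of this mechanism is sketched below: each provider carries a reputation that drops when it delivers inconsistent data, and its confidence value follows that reputation. The scoring scheme, the penalty and reward increments, and the provider name are our own assumptions, not CroCo's actual formula.

```python
class ContextProvider:
    """Toy model of CroCo-style reputation tracking (illustrative only)."""

    def __init__(self, name, reputation=1.0):
        self.name = name
        self.reputation = reputation  # 0.0 (untrusted) .. 1.0 (trusted)

    def report(self, consistent):
        """Update reputation after one context update from this provider."""
        if consistent:
            # Reward consistent data, capped at full trust.
            self.reputation = min(1.0, self.reputation + 0.05)
        else:
            # Penalise inconsistent data more strongly than we reward.
            self.reputation = max(0.0, self.reputation - 0.2)

    def confidence(self):
        """Confidence in this provider's data follows its reputation."""
        return self.reputation
```

Under these assumed increments, three inconsistent reports in a row drop a fully trusted provider's confidence to about 0.4, matching the behaviour the text describes.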
Ubiquitous computing leads to ubiquitous learning environments, where various embedded
computational devices will be pervasive and interoperate to support learning; this introduces a context-aware
learning service that employs knowledge and reasoning to understand the local context and share this
information in support of intelligent learning services. CALA [10] (Context Aware Learning Architecture) is a
context-aware-manager-based architecture developed to support a context-aware learning service for ubiquitous
learning environments like intelligent school spaces. CALA-ONT (Context Aware Learning Architecture ONTology) is an ontology context model designed for use within the CALA architecture (discussed in 4.1).
CALA architecture consists of five components: Personal agent, Computing entity, Physical sensor, Activ