The New Segmentation and Fuzzy Logic based Multi-Sensor Image Fusion


Page 1: The new segmentation and fuzzy logic based multi sensor image fusion

The New Segmentation and Fuzzy Logic based Multi-Sensor Image Fusion

Jamal Saeedi, and Karim Faez

Electrical Engineering Department, Amirkabir University of

Technology, Tehran, Iran

Presented By:

Jamal Saeedi

Page 2: The new segmentation and fuzzy logic based multi sensor image fusion

Content

1. Introduction

2. Multi-Resolution based Fusion Methods

3. The New Segmentation and Fuzzy based Method

4. Evaluation Methods and Results

5. Conclusion


Page 3: The new segmentation and fuzzy logic based multi sensor image fusion

1.1. Definition

Image fusion is commonly described as the task of enhancing perception by integrating multiple images into a composite image that is more suitable for human visual perception and for computer processing tasks such as segmentation, feature extraction, and target recognition.

Here, we concentrate on the fusion of visible and infrared (IR) images. The main reason for combining visible and IR sensors is that a fused image, constructed by combining their features, enables improved detection and instantly recognizable localization of a target (from the IR image) with respect to its background (from the visible image).

Images source: http://imagefusion.org

(Figure: visible image, IR image, and fused image.)


Page 4: The new segmentation and fuzzy logic based multi sensor image fusion

1.2. Important applications of image fusion

Medical imaging

Surveillance and targeting

Remote sensing

Merging out-of-focus images


Page 5: The new segmentation and fuzzy logic based multi sensor image fusion

1.3. Fusion Methods

Many algorithms developed so far can be classified into three primary categories:

Spatial domain: averaging, weighted averaging, PCA

Transform domain: multi-resolution transform

Optimization approach: Bayesian methods


Page 6: The new segmentation and fuzzy logic based multi sensor image fusion

2. Multi-Resolution based Fusion Methods

MR image fusion is a biologically inspired approach that fuses images at different spatial resolutions. Similar to the human visual system, it operates by decomposing the input images into a resolution pyramid with several levels.

(Block diagram: multi-resolution transform --> fusion rule --> inverse transform.)


Page 7: The new segmentation and fuzzy logic based multi sensor image fusion

3. The New Multi-Sensor Image Fusion Method

The block diagram of the proposed segmentation and fuzzy logic based multi-sensor image fusion is shown below.

(Block diagram of the proposed method, ending in the fused image.)

Page 8: The new segmentation and fuzzy logic based multi sensor image fusion

3.1. Notations

Here, we use the Dual-Tree Discrete Wavelet Transform (DT-DWT), which introduces limited redundancy and gives the transform approximate shift invariance and directionally selective filters, while preserving the usual properties of perfect reconstruction and computational efficiency.

The DT-DWT of an image $x_0$ is denoted by $y$ and, across the different scales, has the form:

$$y = [\,y_1, y_2, \ldots, y_L, x_L\,]$$

Here, $x_L$ represents the approximation sub-band at the last level $L$ of the DT-DWT, while $y_l$ represents the detail sub-bands at level $l$:

$$y_l = [\,y_l(\cdot|1), y_l(\cdot|2), \ldots, y_l(\cdot|6)\,]$$

(Figure: DT-DWT of the “UN Camp” image at level one.)
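As a concrete illustration of this notation (not part of the presented method), the sketch below decomposes a grayscale image with the third-party Python `dtcwt` package and collects the sub-bands in the form y = [y_1, ..., y_L, x_L]; the package choice and the number of levels are assumptions for illustration.

```python
# Minimal sketch of the DT-DWT notation above, assuming the third-party
# `dtcwt` Python package; the number of levels is a free choice.
import numpy as np
import dtcwt

def dtdwt_decompose(image, levels=3):
    """Return (x_L, [y_1, ..., y_L]) for a grayscale image.

    x_L : real approximation sub-band at the coarsest level L.
    y_l : complex detail sub-bands at level l, stored as one array of shape
          (rows_l, cols_l, 6) holding the six DT-DWT orientations.
    """
    transform = dtcwt.Transform2d()
    pyramid = transform.forward(image.astype(float), nlevels=levels)
    x_L = pyramid.lowpass                 # approximation sub-band x_L
    y = list(pyramid.highpasses)          # y[l-1][:, :, d-1] == y_l(.|d)
    return x_L, y

# Example: shape of the approximation sub-band and of y_2(.|3).
x_L, y = dtdwt_decompose(np.random.rand(128, 128), levels=3)
print(x_L.shape, y[1][:, :, 2].shape)
```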

Page 9: The new segmentation and fuzzy logic based multi sensor image fusion

3.2. Segmentation Algorithm--> Gradient image

For multivalued images, a single-valued approach can be adopted by segmenting each band separately, but a jointly obtained segmentation works better for image fusion.

This is because the segmentation map will contain a smaller number of regions to represent all features in both IR and visible images.

First, a Gaussian derivative filter is used to generate a gradient magnitude image from each source image S:

$$G_S(x,y) = \sqrt{\bigl(S(x,y) * G_x\bigr)^{2} + \bigl(S(x,y) * G_y\bigr)^{2}}$$

where $G_x$ and $G_y$ are Gaussian partial-derivative filters and $*$ denotes convolution.
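A minimal sketch of this gradient step, assuming SciPy's Gaussian derivative filters play the role of G_x and G_y (the value of sigma is an arbitrary choice, not taken from the slides):

```python
# Gradient magnitude via Gaussian partial-derivative filters (SciPy).
import numpy as np
from scipy.ndimage import gaussian_filter

def gradient_magnitude(S, sigma=1.0):
    """G_S(x, y) = sqrt((S * G_x)^2 + (S * G_y)^2)."""
    dx = gaussian_filter(S.astype(float), sigma, order=[0, 1])  # derivative along columns
    dy = gaussian_filter(S.astype(float), sigma, order=[1, 0])  # derivative along rows
    return np.sqrt(dx ** 2 + dy ** 2)
```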

Page 10: The new segmentation and fuzzy logic based multi sensor image fusion

3.2. Segmentation Algorithm--> Watershed

It is followed by a morphological opening. Opening is an erosion (a local min(.)) followed by a dilation (a local max(.)) over a neighborhood around each pixel. Then a joint gradient image is obtained using:

$$JG(x,y) = \max\!\left(\frac{\tilde{G}_A(x,y)}{\max_{(x,y)} \tilde{G}_A(x,y)},\; \frac{\tilde{G}_B(x,y)}{\max_{(x,y)} \tilde{G}_B(x,y)}\right)$$

where $\tilde{G}_A$ and $\tilde{G}_B$ are the (opened) gradient images of the IR and visible images.

Finally, we use the joint gradient image to obtain the segmentation map via a marker-based watershed algorithm.

(Figure: IR image, visible image, joint gradient image, and the resulting segmentation map.)
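A sketch of the opening / joint-gradient / watershed chain using scikit-image; the marker rule (local minima of the joint gradient) and the structuring-element size are assumptions for illustration, not the authors' exact settings:

```python
# Joint gradient and marker-based watershed segmentation (scikit-image sketch).
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import opening, disk, local_minima
from skimage.segmentation import watershed

def joint_segmentation(G_ir, G_vis, radius=3):
    # Morphological opening suppresses small, noisy gradient peaks.
    G_a = opening(G_ir, disk(radius))
    G_b = opening(G_vis, disk(radius))
    # Joint gradient: pixel-wise maximum of the normalized gradient images.
    JG = np.maximum(G_a / (G_a.max() + 1e-12), G_b / (G_b.max() + 1e-12))
    # Markers from the regional minima of JG, then marker-based watershed.
    markers, _ = ndi.label(local_minima(JG))
    return watershed(JG, markers)
```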

Page 11: The new segmentation and fuzzy logic based multi sensor image fusion

3.3. Pixel and Region-based Decision Map-->1

The activity level measurement is used to judge the quality of a given part of each source image in the transform domain. First, we use two texture features as window-based activities for generating pixel-based decision map (dm) in each level of decomposition.

Standard deviation:

$$a_l^{1}(n|d) = \sum_{\Delta n \in W} w_l(\Delta n|d)\,\bigl(y_l(n+\Delta n|d) - \bar{y}_l(n|d)\bigr)^{2}$$

Energy:

$$a_l^{2}(n|d) = \sum_{\Delta n \in W} w_l(\Delta n|d)\, y_l(n+\Delta n|d)^{2}$$

where $\bar{y}_l(n|d)$ is the local weighted mean and $w_l(\cdot|d)$ are the window weights, obtained from a Gaussian function.
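A sketch of the two window-based activity features for a single detail sub-band; coefficient magnitudes are used because the DT-DWT coefficients are complex, and the Gaussian window width (sigma) is an assumed parameter:

```python
# Gaussian-windowed activity features for one detail sub-band y_l(.|d).
import numpy as np
from scipy.ndimage import gaussian_filter

def activity_features(y_ld, sigma=1.0):
    m = np.abs(y_ld)                      # coefficient magnitudes
    mean = gaussian_filter(m, sigma)      # Gaussian-weighted local mean
    a2 = gaussian_filter(m ** 2, sigma)   # energy feature a_l^2
    a1 = a2 - mean ** 2                   # variance feature a_l^1 (E[m^2] - E[m]^2)
    return a1, a2
```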

Page 12: The new segmentation and fuzzy logic based multi sensor image fusion

3.3. Pixel and Region-based Decision Map-->2

Having the two features, the pixel-based decision map (Pdm) in each level of decomposition is obtained using:

$$dm_l(\cdot|d) = \begin{cases} 1 & \text{if } a_l^{1,A}(\cdot|d) > a_l^{1,B}(\cdot|d) \ \text{and}\ a_l^{2,A}(\cdot|d) > a_l^{2,B}(\cdot|d) \\ 0 & \text{otherwise} \end{cases}$$

Then

$$Pdm_l(\cdot) = \begin{cases} 1 & \text{if } \dfrac{1}{6}\displaystyle\sum_{d=1}^{6} dm_l(\cdot|d) > \dfrac{1}{2} \\ 0 & \text{otherwise} \end{cases}$$

In fact, we use the inter-scale dependency of the wavelet coefficients in the different orientations (the DT-DWT orientations at ±15, ±45 and ±75 degrees) to obtain a confident Pdm for each level.
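The per-orientation decisions can then be combined across the six orientations by a majority vote, following the reconstruction above; a sketch:

```python
# Pixel-based decision map Pdm_l from per-orientation feature comparisons.
import numpy as np

def pixel_decision_map(a1_A, a2_A, a1_B, a2_B):
    """Each argument is a list of six arrays, one per orientation d = 1..6."""
    dm = [((a1_A[d] > a1_B[d]) & (a2_A[d] > a2_B[d])).astype(float)
          for d in range(6)]
    # Pdm_l = 1 where more than half of the orientations favour source A.
    return (np.mean(dm, axis=0) > 0.5).astype(np.uint8)
```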

Page 13: The new segmentation and fuzzy logic based multi sensor image fusion

3.3. Pixel and Region-based Decision Map-->3

Region-based activity measurement is also used to obtain the decision map for selecting wavelet coefficients between different source images, but it is very sensitive to the large coefficients in the region and is not very accurate.

Therefore, we use the segmentation map to obtain the region-based decision map (Rdm) based on Pdm:

$$Rdm_l(R) = \begin{cases} 1 & \text{if } \dfrac{1}{N_R}\displaystyle\sum_{n \in R} Pdm_l(n) > \dfrac{1}{2} \\ 0 & \text{otherwise} \end{cases}$$

where $N_R$ is the number of pixels in region $R$.

(Figure: IR images, visible images, pixel-based decision maps, and region-based decision maps.)
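A sketch of the region-based decision map: the mean of Pdm over each watershed region decides which source wins that region (it assumes the segmentation map has already been resized to the sub-band resolution and that regions are labelled 1..N):

```python
# Region-based decision map Rdm_l from Pdm_l and a label image.
import numpy as np
from scipy import ndimage as ndi

def region_decision_map(Pdm, labels):
    """Pdm: binary pixel-based map; labels: segmentation map (regions 1..N),
    resized to the same resolution as Pdm."""
    region_ids = np.arange(1, labels.max() + 1)
    # Mean of Pdm inside each region; > 1/2 means source A wins the region.
    region_mean = ndi.mean(Pdm.astype(float), labels=labels, index=region_ids)
    wins = np.asarray(region_mean) > 0.5
    return wins[labels - 1].astype(np.uint8)
```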

Page 14: The new segmentation and fuzzy logic based multi sensor image fusion

3.4. Fusion Rule for High Frequency Coeff-->1

Having the pixel- and region-based decision maps, we have two strategies for the fusion rule:

$$\text{First Rule:}\quad y_l^{1}(\cdot|d) = Pdm_l(\cdot)\, y_l^{A}(\cdot|d) + \bigl(1 - Pdm_l(\cdot)\bigr)\, y_l^{B}(\cdot|d)$$

$$\text{Second Rule:}\quad y_l^{2}(\cdot|d) = Rdm_l(\cdot)\, y_l^{A}(\cdot|d) + \bigl(1 - Rdm_l(\cdot)\bigr)\, y_l^{B}(\cdot|d)$$

Also, in some regions of the source images we cannot make a good decision for selecting wavelet coefficients, because there is not enough difference between the extracted features. For this reason, we define a third fusion rule as:

$$\text{Third Rule:}\quad y_l^{3}(\cdot|d) = \begin{cases} \dfrac{y_l^{A}(\cdot|d) + y_l^{B}(\cdot|d)}{2} & \text{if } \operatorname{sign}\bigl(y_l^{A}(\cdot|d)\bigr) = \operatorname{sign}\bigl(y_l^{B}(\cdot|d)\bigr) \\ \max\bigl(y_l^{A}(\cdot|d),\, y_l^{B}(\cdot|d)\bigr) & \text{otherwise} \end{cases}$$
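A sketch of the three rules for a single sub-band, written for real-valued coefficients (for the complex DT-DWT coefficients the same rules can be applied to the real and imaginary parts separately, which is an assumption of this sketch, not a statement from the slides):

```python
# The three high-frequency fusion rules for one sub-band y_l(.|d).
import numpy as np

def rule1(yA, yB, Pdm):
    # Select per pixel according to the pixel-based decision map.
    return Pdm * yA + (1 - Pdm) * yB

def rule2(yA, yB, Rdm):
    # Select per pixel according to the region-based decision map.
    return Rdm * yA + (1 - Rdm) * yB

def rule3(yA, yB):
    # Average when the coefficients agree in sign, otherwise take the maximum.
    same_sign = np.sign(yA) == np.sign(yB)
    return np.where(same_sign, (yA + yB) / 2.0, np.maximum(yA, yB))
```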

Page 15: The new segmentation and fuzzy logic based multi sensor image fusion

3.4. Fusion Rule for High Frequency Coeff-->2

We define a dissimilarity measure to combine these three fusion rules to integrate as much information as possible into the fused image. The dissimilarity measure (DIS) is intended to quantify the degree of ‘dissimilarity’ between the source images:

By analyzing the DIS measure, we can determine where the source images differ and to what extent, and use this information to combine the fusion rules.

$$f_l^{1}(i,j) = \frac{1}{D}\sum_{d=1}^{D}\bigl|a_l^{1,A}(i,j|d) - a_l^{1,B}(i,j|d)\bigr|, \qquad f_l^{2}(i,j) = \frac{1}{D}\sum_{d=1}^{D}\bigl|a_l^{2,A}(i,j|d) - a_l^{2,B}(i,j|d)\bigr|$$

$$DIS_l(i,j) = \sin\!\left(\frac{\pi}{2}\cdot\frac{\bigl(f_l^{1}(i,j)\,f_l^{2}(i,j)\bigr)^{1/2}}{\max_{(i,j)}\bigl(f_l^{1}(i,j)\,f_l^{2}(i,j)\bigr)^{1/2}}\right)$$

where $D$ is the number of orientations in each level.

(Figure: IR images, visible images, and the resulting DIS measure.)

Page 16: The new segmentation and fuzzy logic based multi sensor image fusion

3.4. Fusion Rule for High Frequency Coeff-->3

First, we define the following linguistic rules for fusion operation:

1) IF the DIS measure at a given position is high (i.e. the sources are distinctly different at that position) THEN we use the first fusion rule

2) IF the DIS measure at a given position is medium (i.e. the sources are different at that position) THEN we use the second fusion rule

3) IF the DIS measure at a given position is low (i.e. the sources are similar at that position) THEN we use the third fusion rule

Then, to construct standard rules from the linguistic ones, we define two thresholds, and the fused high-frequency coefficients are obtained as:

$$y_l^{F}(\cdot|d) = \begin{cases} y_l^{1}(\cdot|d) & \text{if } DIS_l(\cdot) > T_2 \\ y_l^{2}(\cdot|d) & \text{if } T_1 < DIS_l(\cdot) \le T_2 \\ y_l^{3}(\cdot|d) & \text{if } DIS_l(\cdot) \le T_1 \end{cases}$$

with $0.05 \le T_1 \le 0.1$ and $0.25 \le T_2 \le 0.3$.
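A sketch of this threshold-based selection between the three rule outputs; T1 and T2 default to values inside the ranges given above:

```python
# Combine the outputs of the three fusion rules using the DIS map.
import numpy as np

def fuse_highpass(y1, y2, y3, DIS, T1=0.1, T2=0.3):
    """y1, y2, y3: outputs of the first, second and third fusion rules."""
    return np.where(DIS > T2, y1,               # distinctly different -> rule 1
           np.where(DIS > T1, y2,               # moderately different -> rule 2
                    y3))                        # similar              -> rule 3
```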

Page 17: The new segmentation and fuzzy logic based multi sensor image fusion

3.5. Fusion Rule for Low Frequency Coeff-->1

Commonly, averaging is a popular method to fuse approximation or low frequency sub-bands of the source images.

However, equally weighting the input approximation sub-bands leads to contrast reduction, because the IR and visible images have different contrast conditions.

In this study, a weighted averaging method is used for low frequency fusion rule.


Page 18: The new segmentation and fuzzy logic based multi sensor image fusion

3.5. Fusion Rule for Low Frequency Coeff-->2

We use a region-based activity measurement (the normalized Shannon entropy) of the high-frequency coefficients at the first level of decomposition as the priority (pr) of each region, for the weighted combination:

$$pr(R) = \frac{1}{6\,N_R}\sum_{d=1}^{6}\sum_{n \in R} \bigl|y_1(n|d)\bigr|\,\log\bigl|y_1(n|d)\bigr|$$

where $N_R$ is the number of pixels in region $R$. Then

$$x_L^{F}(\cdot) = \frac{pr_A(\cdot)\,x_L^{A}(\cdot) + pr_B(\cdot)\,x_L^{B}(\cdot)}{pr_A(\cdot) + pr_B(\cdot)}$$

(Figure: IR image, visible image, and the priority maps for the IR and visible images.)
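A sketch of the weighted-averaging rule; for simplicity the per-region priority here is the mean first-level detail magnitude rather than the slide's normalized Shannon entropy, and the two resized label maps are assumed inputs rather than part of the original description:

```python
# Region-priority weighted averaging of the approximation sub-bands.
import numpy as np
from scipy import ndimage as ndi

def lowpass_fusion(xA, xB, y1_A, y1_B, labels_lvl1, labels_low, eps=1e-12):
    """xA, xB        : approximation sub-bands of the two sources.
    y1_A, y1_B       : first-level detail coefficients, shape (rows, cols, 6).
    labels_lvl1/_low : segmentation map resized to the first-level detail grid
                       and to the approximation grid (regions labelled 1..N)."""
    ids = np.arange(1, labels_lvl1.max() + 1)

    def priority(y1):
        act = np.abs(y1).mean(axis=2)                        # mean detail activity
        return np.asarray(ndi.mean(act, labels=labels_lvl1, index=ids))

    prA, prB = priority(y1_A), priority(y1_B)
    wA, wB = prA[labels_low - 1], prB[labels_low - 1]        # per-pixel weights
    return (wA * xA + wB * xB) / (wA + wB + eps)
```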

Page 19: The new segmentation and fuzzy logic based multi sensor image fusion

4. Evaluation Methods and Results

The proposed image fusion method was tested against several state-of-the-art image fusion methods:

Simple averaging.

The Dual-tree discrete wavelet transform with maximum selection rule.

Lewis’s region-based algorithm.

The images used in experiments are surveillance images from TNO Human Factors:

Image sequence “UN Camp”, containing 32 image pairs.

Image sequence “Trees”, containing 19 image pairs.

Image sequence “Dune”, containing 19 image pairs.

(Figure: sample frames from the “UN Camp”, “Trees” and “Dune” image sequences.)


Images source: http://imagefusion.org

Page 20: The new segmentation and fuzzy logic based multi sensor image fusion

4.1. Objective Results

Two metrics are considered in this paper, neither of which requires a ground-truth image for evaluation:

The Xydeas and Petrovic metric ($Q^{AB/F}$) measures the amount of edge information transferred from the input images to the fused image, using a Sobel edge detector to calculate the edge strength and orientation at each pixel in both the source and the fused images.

The entropy metric ($En$) measures the information content of an image; an image with high information content has high entropy.
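For reference, the entropy metric can be computed as the Shannon entropy of the grey-level histogram of the fused image; a short sketch, assuming an 8-bit image:

```python
# Entropy (En) of an 8-bit grayscale image, in bits per pixel.
import numpy as np

def image_entropy(img):
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins to avoid log(0)
    return float(-(p * np.log2(p)).sum())
```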

Metric     Method         UN Camp   Dune    Trees
Petrovic   Averaging      0.332     0.338   0.351
Petrovic   DT-CWT (MS)    0.439     0.409   0.470
Petrovic   Lewis          0.503     0.502   0.553
Petrovic   Proposed       0.540     0.525   0.602
Entropy    Averaging      6.29      6.75    6.47
Entropy    DT-CWT (MS)    6.75      6.90    6.58
Entropy    Lewis          6.66      6.88    6.54
Entropy    Proposed       6.92      7.08    6.79

Performance of the image fusion methods.


Page 21: The new segmentation and fuzzy logic based multi sensor image fusion

4.2. Subjective Results --> “UN Camp” images

(Figure: fusion results for the “UN Camp” images using simple averaging, DT-CWT with the maximum selection rule, Lewis’s region-based algorithm, and the proposed algorithm.)

Page 22: The new segmentation and fuzzy logic based multi sensor image fusion

4.2. Subjective Results --> “Dune” images

(Figure: fusion results for the “Dune” images using simple averaging, DT-CWT with the maximum selection rule, Lewis’s region-based algorithm, and the proposed algorithm.)

Page 23: The new segmentation and fuzzy logic based multi sensor image fusion

4.2. Subjective Results --> “Trees” images

(Figure: fusion results for the “Trees” images using simple averaging, DT-CWT with the maximum selection rule, Lewis’s region-based algorithm, and the proposed algorithm.)

Page 24: The new segmentation and fuzzy logic based multi sensor image fusion

5. Conclusion

Proposing a new fusion rule for merging wavelet coefficients, which is the second step in wavelet-based image fusion, is the main novelty of this paper.

The new method also uses the DT-DWT, which provides a finer frequency decomposition and approximate shift invariance compared with the standard discrete wavelet transform.

The experimental results demonstrated that the proposed method outperforms the standard fusion methods in the fusion of IR and visible images.

Thanks for your attention.