
2010 2nd International Asia Conference on Informatics in Control, Automation and Robotics


The Study on the Image Fusion for Multisource Image

Baosen Song, Yongqing Fu

College of Information and Communication Engineering, Harbin Engineering University

Harbin, China, 150001

Abstract—This paper discusses various methods for the subjective qualitative and objective quantitative assessment of multisource image fusion performance. Based on the relationships among these measures and their individual characteristics, a synthetic evaluation system and criteria for multisource image fusion are formed. Using these evaluation criteria, the fusion performance of multisource image fusion is analyzed on the basis of the features and interrelations of the fusion methods. The experimental and calculation results show that the fusion evaluation criteria are effective, and the conclusions drawn may provide guidance for choosing appropriate fusion methods in practice.

Index Terms—Image Fusion, Multisource Image, Fusion Performance

I. INTRODUCTION

Information fusion is a multi-level and multi-aspect process. During this process, multi-source information is detected, integrated, correlated, evaluated, and combined to obtain a precise status and identity evaluation as well as a complete and timely situation assessment and threat evaluation.

Multi-source image fusion is a branch of multi-sensor information fusion: the fusion of visual information [1]. Because a sensor is restricted by its physical features, imaging mechanism, and viewing angle, a single image sensor usually cannot extract enough information from a scene and therefore has difficulty giving, or cannot give, an all-round description of a scene on its own. Multi-source image fusion provides an effective solution to this problem and has become an important branch of information fusion.

II. IMAGE FUSION TECHNOLOGY

Image fusion technology takes images as its research object. Its aim is to extract the information in the corresponding channels and, through certain image processing of the multiband information captured by a single sensor, or of many images of the same scene acquired by image sensors of different types, to form a composite image at the system output, so that different image information can be observed or integrated for further processing. The choice of image fusion method is closely related to the type of processing object and the processing level, and is mainly determined by the actual situation according to the degree of analysis and the representation of each image.

In the last two decades, a great deal of research on models and algorithms for image fusion at different levels has been carried out, and various fusion systems have been proposed around the world [2]. However, there is as yet no generally accepted, complete theory in the field of multi-source image fusion.

 A. Basic Theory of the Image Fusion Technology

As a category of information fusion, image fusion is a technology that synthesizes multisource images and image sequences in a multi-measurement space. By making the best use of the multiple images, it reasonably manages and utilizes the information acquired while pre-processing images of the same scene captured by many sensors, or by a single sensor at different moments, and fuses them into one image according to a certain standard by exploiting their complementary information in space or time. The quality of the fused image is better than that of any individual source image, so that a consistent explanation or description of the scene is obtained and the scene is reflected accurately.

Figure 1. Information structure of multi-source images

Image fusion integrates the information of many multisource images for the following main purposes:

(1) Denoising. The images captured by a sensor generally contain noise, and subsequent processing requires the noise to be limited; image fusion can therefore be used to reduce the noise and improve the SNR (a brief numerical sketch of this effect follows this list).

(2) Resolution improvement, another important purpose of image fusion. Because an infrared image transmitted from a satellite sometimes has low resolution, image fusion can be adopted to improve the resolution by fusing images captured by other sensors (including optical images and synthetic aperture images).


(3) Information content improvement. The information content should be improved in the course of image transmission and image feature extraction; image fusion is an important means of improving information content.

(4) Definition improvement. Image processing often requires improving image quality, highlighting details and texture features, and preserving the edge details and energy of the image while retaining the original information. Image fusion can achieve these effects even where general image enhancement has difficulty doing so.

(5) Compensation for lost or failed information in one sensor image with images from other sensors. Clearly, image fusion technology is not image enhancement in the general sense, but a new technology in the field of computer vision and image understanding.
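As a brief illustration of purpose (1): when several registered images of the same scene are corrupted by independent zero-mean noise, simply averaging them reduces the noise power by roughly the number of images, raising the SNR by about 10·log10(N) dB. The sketch below demonstrates this on synthetic data; the scene, noise level, and frame count are arbitrary assumptions for illustration, not values from this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
clean = rng.random((128, 128))   # stand-in "scene" (assumed synthetic data)
sigma = 0.2                      # assumed noise standard deviation
N = 8                            # assumed number of source images

# N noisy observations of the same registered scene
noisy = [clean + rng.normal(0.0, sigma, clean.shape) for _ in range(N)]

def snr_db(reference, estimate):
    """Signal-to-noise ratio of an estimate against the clean reference, in dB."""
    noise_power = np.mean((estimate - reference) ** 2)
    signal_power = np.mean(reference ** 2)
    return 10.0 * np.log10(signal_power / noise_power)

fused = np.mean(noisy, axis=0)   # pixel-wise averaging fusion
print("single image SNR:", snr_db(clean, noisy[0]))
print("fused image SNR: ", snr_db(clean, fused))  # roughly 10*log10(N) dB higher
```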

 B. Image Fusion at Different Levels

Based on the stage at which the images are fused, there are three levels of image fusion [3]: pixel-level image fusion, feature-level image fusion, and decision-level image fusion.

1) Pixel-level image fusion:

Pixel-level fusion, also called data-level fusion, requires that the data to be fused be captured by sensors of the same type; it cannot fuse sensor data with data acquired in the course of ground investigation. The pixel-level method fuses the data from all sensors directly, extracts feature vectors from the fused result according to the requirements, and then performs further estimation and recognition. The advantage of data-level fusion is that it preserves the most original data of the scene and its results are more accurate; however, little information can be discarded during pre-processing, which leads to excessive calculation, a high communication bandwidth requirement, and poor real-time performance [4].
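To make the idea concrete, the following minimal sketch fuses two co-registered grayscale images pixel by pixel using two common elementary rules (weighted averaging and per-pixel maximum selection). It is an illustrative example only, not the specific algorithm studied in this paper; the function name, parameters, and rules are our assumptions.

```python
import numpy as np

def pixel_level_fusion(img_a, img_b, rule="average", weight=0.5):
    """Fuse two co-registered, same-size grayscale images pixel by pixel.

    A minimal sketch of pixel-level (data-level) fusion: 'average' takes a
    weighted mean of the two sources, 'max' keeps the brighter pixel.
    """
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)
    if rule == "average":
        return weight * a + (1.0 - weight) * b
    if rule == "max":
        return np.maximum(a, b)
    raise ValueError(f"unknown fusion rule: {rule}")
```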

2) Feature-level image fusion

Ranked as fusion at the intermediate level, feature-level fusion extracts features from the original data captured by each sensor and analyzes and processes this information comprehensively, in order to achieve the classification, collection, and fusion of the multi-sensor data. In the course of fusion and processing, the actual form and content of the main feature information are closely related to the application purpose and situation of the multi-sensor image fusion [5]. If the data captured by the sensors are image data, features such as line type, edges, texture, spectrum, regions of similar brightness, and depth-of-field regions are abstracted from the image pixel information, and the feature fusion and classification of the multi-sensor images are then carried out on these features. Image fusion based on such features can not only increase the possibility of extracting features from the images, but also yield useful complex features, reveal relevant feature information, improve the reliability of the feature information, eliminate false features, and establish new complex features. Feature-level fusion achieves considerable information compression, which is convenient for real-time processing and beneficial to data processing and transmission efficiency; moreover, because the acquired features are directly pertinent to decision analysis, the fusion result can present most of the feature information required for decision analysis.
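As an illustrative sketch of the feature-level idea (extract a feature from each source, then fuse based on the features), the code below uses per-pixel gradient energy as a simple activity feature and keeps, at every pixel, the source whose feature response is stronger. This is only one of many possible feature-and-rule choices and is not the paper's own method; the function names are ours.

```python
import numpy as np

def gradient_energy(img):
    """A simple 'activity' feature: local gradient magnitude of the image."""
    gy, gx = np.gradient(img.astype(np.float64))
    return np.hypot(gx, gy)

def feature_level_fusion(img_a, img_b):
    """Illustrative feature-level fusion: extract a per-pixel feature
    (gradient energy) from each source and keep, for every pixel, the source
    whose feature response is stronger."""
    feat_a = gradient_energy(img_a)
    feat_b = gradient_energy(img_b)
    return np.where(feat_a >= feat_b, img_a, img_b)
```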

3) Decision-level image fusion

Decision-level image fusion, also called symbol-level image fusion, is a type of high-level information fusion; that is, fusion is performed at a higher level of information representation. It is a process of logical or statistical reasoning over the information of many images and provides a basis for various related control or decision tasks [6].

In this method, the original data are first processed separately to obtain the corresponding estimation and recognition results; the fusion center then coordinates the reliability of each data source's decision according to certain rules to reach the final result, which constitutes the final decision of the whole fusion process.

This method requires a high degree of abstraction and a low degree of homogeneity among the data sources, so the data retain strong openness and the method can be applied widely. Furthermore, because each source's judgment result is extracted in the pre-processing stage, most of the information processing is done before fusion; the fusion center itself needs little calculation and has a low communication bandwidth requirement. The main advantages of decision-level fusion can be summarized as follows:

(1) Low communication and transmission requirements and good real-time performance. The input to decision fusion consists of various feature information, and the output is a decision description, so the data volume is small and the anti-interference ability is strong.

(2) High fault tolerance. Interference in the data of one or more sensors can be eliminated by appropriate methods.

(3) Low data requirements. Because of the high level of abstraction, the sensors can be either homogeneous or heterogeneous, and the reliance on and requirements for any individual sensor are reduced.

(4) Strong analysis ability. It can reflect overall information about the object and its environment to satisfy different application requirements.
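The decision-level scheme described above, in which a fusion center weighs the reliability of each source's decision, can be sketched as reliability-weighted voting. The labels, weights, and voting rule below are hypothetical illustrations; the paper does not prescribe a particular rule.

```python
from collections import defaultdict

def decision_level_fusion(decisions):
    """Reliability-weighted voting over per-sensor decisions.

    `decisions` is a list of (label, reliability) pairs, one per sensor.
    The fused result is the label with the largest total reliability.
    """
    scores = defaultdict(float)
    for label, reliability in decisions:
        scores[label] += reliability
    return max(scores, key=scores.get)

# Example: two sensors agree on "vehicle"; one dissents with lower total weight.
print(decision_level_fusion([("vehicle", 0.9), ("vehicle", 0.6), ("clutter", 0.7)]))
```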

The above three levels of image fusion roughly correspond to the three levels of multi-sensor information fusion. In actual applications, the features of image fusion at different levels can be chosen and combined to achieve the optimal fusion effect.

For a concrete image fusion system, the information it accepts can be information at a single level or at several levels. The basic strategy of fusion is to first fuse the information at the same level to obtain fused information at a higher level, and then perform fusion at that level. Image fusion is therefore essentially an information processing process that integrates multi-source information and abstracts it level by level from a low level to a high level. Figure 2 shows the complete process of image fusion.


Figure 2. Basic flow of multi-source image fusion

III. SUBJECTIVE EVALUATION OF THE IMAGE FUSION PERFORMANCE

In some cases, image fusion is performed for human observers. The purposes of image fusion can then be: (1) to improve the performance of the images so as to improve their visual effect for people; (2) to increase the information content of the fused images, or the precision and reliability of that information, so as to provide richer, more precise, and more reliable image information for human decision-making. In such cases, the performance of the fused images is usually observed and evaluated by people.

People's understanding or comprehension of images depends not only on the content of the images but also on the mental state of the observer. Because the human visual system is very complicated and is affected by environmental conditions, visual capability, mood, interest, and knowledge level, people's evaluation of image fusion performance is somewhat subjective and very complicated. Generally speaking, the observers making subjective evaluations fall into two categories: observers without any training and observers experienced in image technologies. As the information people use to understand images derives not only from what is received by the eyes but also from image understanding built on past experience and knowledge, the evaluations made by these two categories of observers can be quite different. It is worth pointing out that the evaluation made by an observer who has experience and is familiar with the image content is usually strict and easily affected by such subjective factors as experience, knowledge, environment, and psychology, while a high evaluation made by an observer who has received no training and is unfamiliar with the image content does not always truly reflect the performance of the image.

How to appraise the quality of the fused image is a key step in image fusion. Determining the appraisal indexes for the image fusion effect mainly means selecting objective appraisal indexes according to the fusion purpose, on the one hand to compare the quality of the fused images, and on the other hand to compare the strengths and weaknesses of the fusion methods through such comparison. Different appraisal indexes should be adopted for images with different fusion purposes.

IV. OBJECTIVE EVALUATION CRITERION OF THE IMAGE FUSION PERFORMANCE

Based on the previous analysis, suitable objective performance indicators can be selected to evaluate the image fusion effect and the performance of the fusion algorithm according to the requirements and fusion purpose of the actual application. The following are general principles for selecting objective performance indicators based on the fusion purpose.

1) Compare fusion methods: by using different fusion methods to fuse the same group of source images, different fusion results are obtained. Evaluation indicators such as root mean square error, cross entropy, mutual information, and combination entropy can be used (a sketch computing several of these indicators follows this list).

2) Increase the spatial resolution: increasing the spatial resolution of the image is an important purpose of remote sensing image fusion. For such image fusion, indicators such as the image mean value, standard deviation, and spatial resolution can be used to evaluate the effect.

3) Improve the definition: image fusion usually requires improving image performance, enhancing the detail information and texture features of the image, and retaining the edge details and energy without losing the original main information. Evaluation indicators such as standard deviation, average gradient, spatial frequency, and contrast variation can be selected.

4) Evaluate the information content: to evaluate whether the information content of a fused image has increased, indicators such as entropy, cross entropy, mutual information, combination entropy, and standard deviation can be used.

5) Reduce the image noise: the multi-source images obtained from sensors usually contain noise, and subsequent image processing generally requires keeping the noise within a certain range. Fusion is therefore used to reduce the noise and improve the signal-to-noise ratio.


Figure 3. Synthetic evaluation system of multi-source image fusion performance

6) Fuse the spectral characteristics of the image: to evaluate whether the spectral characteristics of the fused image have changed compared to the source images, indicators such as the deviation index, correlation coefficient, and spectral distortion can be used.
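As referenced after indicator 1) above, the following sketch computes several of the objective indicators named in this section (entropy, root mean square error, mutual information, and average gradient) for 8-bit grayscale images. The histogram binning, logarithm base, and average-gradient convention are common choices assumed here, not definitions taken from this paper.

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy (bits) of an image's gray-level histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256), density=True)
    p = hist[hist > 0]
    p = p / p.sum()
    return -np.sum(p * np.log2(p))

def rmse(reference, fused):
    """Root mean square error between a reference image and the fused image."""
    diff = reference.astype(np.float64) - fused.astype(np.float64)
    return np.sqrt(np.mean(diff ** 2))

def mutual_information(img_a, img_b, bins=256):
    """Mutual information (bits) between the gray levels of two images."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(),
                                 bins=bins, range=[[0, 256], [0, 256]])
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of img_b
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))

def average_gradient(img):
    """Average gradient, often used as a sharpness/definition indicator."""
    gy, gx = np.gradient(img.astype(np.float64))
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))
```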

Of course, objective evaluation methods should not be used alone without any subjective evaluation; rather, subjective evaluation should be combined with objective quantitative evaluation criteria for a synthetic evaluation, that is, objective quantitative evaluation of the image fusion performance should be performed on the basis of subjective qualitative visual evaluation. Figure 3 shows the synthetic evaluation system of multi-source image fusion performance.

V. CONCLUSIONS

At present, a great deal of research on multi-source image fusion technology is under way, while little has been done on the evaluation of multi-source image fusion performance, which is nevertheless very important and meaningful for the objective and quantitative evaluation of that performance. This paper, focusing on the evaluation problems in multi-source image fusion, gives an evaluation system for multi-source image fusion as well as evaluation criteria, which will promote research on multi-source image fusion technology.

REFERENCES

[1] Te-Ming Tu, Shun-Chi Su, Hsuen-Chyun. A new look at HSI-like image fusion methods. Information Fusion, 2008, 2(5): 177-186.
[2] Zhang E., J. S. Zhang, V. W. Song. Pixel-by-pixel VIS NIR and LIR sensor fusion system. Proceedings of SPIE, 2008, 4820: 535-549.
[3] Varshney P. K. Multisensor data fusion. Electronic & Communication Engineering, 1997, 9(6): 245-253.
[4] Xue Z., Blum R. S., Li Y. Fusion of visual and IR images for concealed weapon detection. Proc. of Int. Conf. on Information Fusion, 2002, vol. 2: 1198-1205.
[5] Zhang Z., Blum R. S. A categorization of multiscale-decomposition-based image fusion schemes with a performance study for a digital camera application. Proceedings of the IEEE, 1999, 87(8): 1315-1326.
[6] Chavez P. S., Sides S. C., Anderson J. A. Comparison of three different methods to merge multiresolution and multispectral data: Landsat TM and SPOT panchromatic. Photogrammetric Engineering and Remote Sensing, 1991, 57(3): 295-303.
