
A Prototype System for 3D Color Fusion and Mining of Multisensor / Spectral Imagery*

* This work was sponsored in part by the Air Force Office of Scientific Research, under Air Force Contract F19628-00-C-0002. Opinions, interpretations, conclusions, and recommendations are those of the authors and not necessarily endorsed by the U.S. Government.

Allen Waxman, Jacques Verly, David Fay, Fang Liu, Michael Braun, Benjamin Pugliese, William Ross, and William Streilein

Sensor Exploitation Group, MIT Lincoln Laboratory
Lexington, MA, USA
[email protected]

Abstract - We have developed a prototype system in which a user can fuse up to 4 modalities (or 4 spectral bands) of imagery previously registered to one another with respect to a 3D terrain model. The color fused imagery can be draped onto the terrain to support interactive 3D fly-through. The fused imagery, and its opponent-sensor contrasts, can be further processed to yield extended boundary contours and texture measures. Together, these layers of registered imagery and image features can be interactively mined for objects of interest. Data mining for infrastructure and compact targets is achieved using a point-and-click user interface in conjunction with a Fuzzy ARTMAP neural network for on-line pattern learning and recognition. Graphical user interfaces enable the user to control each stage of processing: image enhancement, image fusion, contour and texture extraction, 3D terrain characterization, 3D graphics model building, preparation for exploitation, and interactive data mining. The system is configured as a client-server architecture, enabling remote collaborative exploitation of multisensor imagery. Throughout, the processing of imagery and patterns relies on neural network models of spatial and color opponency, and the adaptive resonance theory of pattern processing. This system has been used to process imagery of a variety of geographic sites, in order to extract roads, rivers, forests and orchards, and performance has been assessed against manually determined “ground truth.” The data mining approach has been extended for the case of hyperspectral imagery of hundreds of bands. This prototype system has now been installed at multiple US government sites for evaluation by image analysts. We plan to extend this approach to include various non-imaging sensor modalities that can be localized to geographic coordinates (e.g., GMTI and SIGINT). We also plan to embed these image fusion and mining capabilities in commercial open software environments for image processing and GIS.

Keywords: Multisensor fusion, image fusion, data fusion, data mining, neural networks, pattern recognition, adaptive resonance theory

1 Introduction

In light of the rapid increase in the number and type of imaging satellites becoming available to support both government and commercial interests throughout the world, we are motivated to develop paradigms and systems that can take advantage of these complementary and voluminous data sets. Space-based imaging of the earth’s surface is being conducted on a routine basis across the spectrum, employing panchromatic visible (EO), multispectral (MSI) visible through infrared (IR), hyperspectral (HSI) and polarimetric synthetic aperture radar (SAR) imaging, all of varying spatial resolution. It is clear that assembling and fusing multispectral and multi-modality data from multiple platforms will greatly improve our ability to detect, classify and map objects, land cover, surface features, and changes over time. There is an increasing need for systems that can aid image analysts in exploiting this rich multi-modality data, in order to:

• Fuse it to enable high-fidelity 2D and 3D visualization (particularly of non-literal modalities like thermal IR and SAR imagery);

• Combine spectral imagery with spatial image features (e.g., color boundary contours, textures and 3D features) into unified spatio-spectral signatures of objects and terrain;

• Support interactive image mining to discover information-bearing layers of data particular to objects/features of interest;

• Learn the spatio-spectral signatures of objects (i.e., create Search Agents) and automate the search for them over vast geographic data sets;

• Use trained Search Agents to access fused imagery from databases based on object/feature content, and to assess changes in objects and terrain over time;

• Convert this exploited multisensor imagery into vector maps and hybrid raster-vector 2D/3D products;

• Enable collaboration among remote analysts and rapid dissemination of customized products to users via web-based communications.

At the Fusion 2000 conference, we reported progress towards developing methods that support many of the above technical goals [1,2]. These methods for image fusion and mining, which we typically apply off-line to archived imagery, are also applicable to real-time imaging in 2D using up to four passive sensors [3], as well as to active 3D LADAR imaging [4]. Their early development has been reported in the context of color night vision [5,6], EO/IR/SAR fusion [7], and multispectral infrared fusion [8].

Our image fusion methods are motivated by biological examples of visible/IR fusion in snakes [9,10], and color processing in the human visual system [11-14]. We model these processes via spatial- and spectral-opponent neural networks embodied in center-surround receptive fields [15], oriented multiscale processing of textures and boundaries [16], and adaptive resonance models of learning and recognition [17,18].

2 Paradigms and a Prototype System

As introduced in [1,2], we adopt an image fusion paradigm based on a layered data structure comprised of multispectral and multi-modality imagery registered with respect to 3D site models. We employ digital terrain data to orthorectify and register imagery of terrain. An extension of these ideas applies separately to the rooftops and sides of buildings, when modeled. The 3D site model can be constructed directly from imagery using a variety of means (e.g., photogrammetry, radar mapping, ladar measurements).

Figure 1: Multisensor 3D fusion paradigm

In Fig. 1 we illustrate our paradigm of multisensor fusion and data mining. Multiple imaging (and possibly non-imaging) modalities are registered by means of a 3D site model, forming a layered georeferenced database. (This data structure will be expanded to include processed imagery below.) The imagery is then fused and hosted in a client-server web-based environment for interactive 3D visualization and data mining by multiple collaborating analysts.
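Registration itself is handled by commercial tools against the 3D site model; purely to illustrate what "georeferenced" means for this layered database, a minimal sketch using a common affine geotransform convention (GDAL-style, an assumption of this example rather than the system's actual registration machinery) maps pixel indices to map coordinates:

```python
def pixel_to_geo(row, col, gt):
    """Map a pixel (row, col) to map coordinates via a six-term affine
    geotransform gt = (x0, dx, rx, y0, ry, dy) (GDAL-style convention)."""
    x = gt[0] + col * gt[1] + row * gt[2]
    y = gt[3] + col * gt[4] + row * gt[5]
    return x, y

# Example: 30 m pixels, north-up grid anchored at (500000, 4500000)
print(pixel_to_geo(10, 20, (500000.0, 30.0, 0.0, 4500000.0, 0.0, -30.0)))
```

Every layer in the stack shares one such mapping, which is what allows a single (row, col) index to address co-registered data across all modalities.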

We have developed a prototype system that supports each of these stages of multisensor fusion and exploitation. It can be organized as a processing chain, as shown in Fig. 2. The processing chain consists of a set of software modules, some of which employ commercial tools (shown in blue: for data ingest, orthorectification, cross-modality image registration, vector product generation, and 3D viewing), and others that we have developed (shown in red) specifically for image conditioning, opponent-band image fusion, spatio-spectral context feature extraction, interactive data mining, and mined feature display. The approach is extensible, as additional imaging sensors can provide source data for fusion, and additional image features can be added to provide further context for subsequent data mining. A variety of change detection methods (indicated in green) can be incorporated. Each additional data layer (source, feature, or change) must be registered using the 3D site model.

Figure 2: Processing chain for image fusion & mining

Results of image fusion and data mining are viewed as 3D fly-throughs, 2D images, and feature detection image maps (which can be converted to vector maps and combined raster-vector 2D/3D imagery using commercial software).

In our prototype system, the modules we developed operate in a client-server environment, where the user (at a client computer) controls the selection and processing of imagery (on a remote host computer), views results on the client, and saves them out to the host where others can access them. The first three modules in the chain employ commercial (COTS) tools like ERDAS Imagine and RSI ENVI to ingest, orthorectify, and register imagery. From this point on, the system operates within a Netscape browser using the Session Manager control panel shown in Fig. 3.

A session consists of a set of multisensor/spectral imagery from a chosen site (selected within our Site Manager GUI), already registered via 3D terrain. Each button on the control panel launches a Java GUI (graphical user interface) in which the user selects imagery, adjusts processing parameters, previews results on a selected small area, and initiates processing over the entire site. Figs. 4 & 5 illustrate the GUIs for the Conditioning and Fusion modules. The user loads individual modalities or bands into the Conditioning module and adjusts parameters that correspond to brightness, local contrast, image contrast, edge sharpness and edge scale, reflecting the center-surround shunting neural networks [15] employed for contrast enhancement and adaptive normalization of dynamic range. The user can then load up to four conditioned image modalities or bands into the Fusion module, which is based on a double-opponent color processing stream [12-14] built from center-surround shunting networks, as shown in Fig. 6. The resulting opponent-sensor contrasts are assigned to the human opponent-color space (YIQ) [14], converted to HSI color space so as to enable global hue rotation, and then to RGB for color display. This is illustrated in Fig. 6 using 30m resolution EO, IR and SAR imagery of Tuzla, Bosnia, obtained from the Landsat (bands 1-3 and 6) and Radarsat satellites.
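The conditioning equations are not given here; as a minimal sketch of the idea, the steady state of a center-surround shunting network [15] can be written as a ratio of Gaussian-weighted center and surround responses. The kernel widths and the constants A, B, D below are illustrative assumptions, not the module's actual parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def shunting_center_surround(img, sigma_c=1.0, sigma_s=4.0, A=1.0, B=1.0, D=1.0):
    """Steady-state center-surround shunting response:
    y = (B*C - D*S) / (A + C + S), with C and S Gaussian-weighted center
    and surround averages. The ratio form enhances local contrast while
    adaptively normalizing local dynamic range (illustrative parameters)."""
    x = img.astype(float)
    C = gaussian_filter(x, sigma_c)   # excitatory center
    S = gaussian_filter(x, sigma_s)   # inhibitory surround
    return (B * C - D * S) / (A + C + S)
```

In this form the output is bounded and ratio-normalized; widening the surround (sigma_s) relative to the center loosely mirrors the edge-scale control, while A governs how strongly local activity compresses the response.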


Figure 3: Session Manager control panel for image fusion, operating as an applet within a Netscape browser

Figure 4: GUI for the image Conditioning module

Figure 5: GUI for the opponent Fusion module

Figure 6: Opponent-sensor network for image fusion, constructed from center-surround shunting fields
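To make the color-mapping stage of Fig. 6 concrete, here is a hedged sketch: fused luminance and two opponent-sensor contrasts are placed on the Y, I, and Q axes and converted to RGB using the standard NTSC transform. Which contrast feeds which channel is an assumption made for illustration, and the intermediate HSI hue-rotation step is omitted.

```python
import numpy as np

# Standard NTSC YIQ -> RGB matrix.
YIQ2RGB = np.array([[1.0,  0.956,  0.621],
                    [1.0, -0.272, -0.647],
                    [1.0, -1.106,  1.703]])

def fuse_to_rgb(luminance, opp1, opp2):
    """Assign a fused luminance image and two opponent-sensor contrast
    images to the YIQ opponent-color axes, then convert to RGB for display.
    The channel assignment here is illustrative, not the system's."""
    yiq = np.stack([luminance, opp1, opp2], axis=-1)
    rgb = yiq @ YIQ2RGB.T
    return np.clip(rgb, 0.0, 1.0)
```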

The resulting fused imagery and source modalities can be combined with the 3D terrain in the Exploitation Preparation module of the control panel in Fig. 3, for interactive stereo visualization using a viewer built on top of SGI Inventor or the TGS Open Inventor viewer (in an Irix or Windows NT operating system, respectively). An example is shown in Fig. 7 for Tuzla, Bosnia, based on 30m resolution DTED, and textured with the fused imagery. The user can switch among the modalities to change textures.

Figure 7: Fused EO, IR, SAR imagery (from Landsat and Radarsat) draped over DTED, within a 3D viewer

Following the image fusion process, the user can extract context features (see Fig. 2) by launching the Contours & Textures module from the control panel in Fig. 3. The GUI shown in Fig. 8 allows the user to select any conditioned band or opponent-band contrast imagery created, and to select among a variety of image texture processing routines based on oriented multi-scale Gabor filtering, local variance and auto-correlation, and boundary contour extraction based on BCS theory [16]. This creates additional layers of spatio-spectral data that are registered to the processed sensor imagery. This module has been augmented to enable terrain geometry processing for ground slopes and curvatures.
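As a stand-in for one of the texture routines named above, a minimal oriented multi-scale Gabor energy stack might look as follows; the frequencies, orientation count, and smoothing scale are illustrative choices, not the module's settings.

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

def gabor_kernel(freq, theta, sigma, size=21):
    """Real (even) Gabor kernel at spatial frequency `freq` (cycles/pixel)
    and orientation `theta` (radians), under a circular Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2.0 * sigma**2)) * np.cos(2.0 * np.pi * freq * xr)

def gabor_energy(img, freqs=(0.1, 0.2), n_orient=4, sigma=3.0):
    """One texture layer per (frequency, orientation) pair: squared Gabor
    response, smoothed to yield a local oriented-energy measure."""
    layers = []
    for f in freqs:
        for k in range(n_orient):
            resp = convolve(img.astype(float), gabor_kernel(f, k * np.pi / n_orient, sigma))
            layers.append(gaussian_filter(resp**2, 2.0 * sigma))
    return np.stack(layers, axis=0)
```

Each resulting energy map becomes one more registered layer in the data stack described below.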

Figure 8: GUI for the Contours & Textures module, selected here for texture processing

The entire set of conditioned source imagery, opponent-band imagery, spatio-spectral textures and contours, and terrain features is then assembled into a layered data stack (or deepfile) in the Exploitation Preparation module launched from the Session Manager. This layered data stack can be augmented with other kinds of registered data (not necessarily imagery), and is ideal for fused image mining.

3 Fused Image Mining

The layered data stack is a useful data structure, and provides a mechanism to augment the data mining approach with other source imagery, non-imaging geospatial data like GMTI and associated tracks, SIGINT localizations and tracks, contextual features extracted by other algorithms, image change layers, spatial and topological analyses, etc. Such a layered stack is illustrated in Fig. 9. As indicated there, a vertical line through the stack selects a set of registered data and features corresponding to a pixel or geospatial location (the features capture information about the neighborhood of the location on the ground). This data forms a feature vector suitable for classification and mining.
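In code, a deepfile reduces to a stack of co-registered rasters, and the "vertical line" of Fig. 9 to a single indexing operation. A minimal sketch follows; DeepFile is a hypothetical helper for illustration, not the system's API, and the layer names are examples drawn from the text.

```python
import numpy as np

class DeepFile:
    """Layered stack of co-registered rasters over one site. Indexing a
    (row, col) location returns the multisensor feature vector for mining."""

    def __init__(self):
        self.names, self.layers = [], []

    def add_layer(self, name, raster):
        if self.layers and raster.shape != self.layers[0].shape:
            raise ValueError("all layers must be co-registered to one grid")
        self.names.append(name)
        self.layers.append(np.asarray(raster, dtype=float))

    def feature_vector(self, row, col):
        return np.array([layer[row, col] for layer in self.layers])

# Example: source bands plus derived layers on one 512x512 grid.
df = DeepFile()
for name in ["EO", "IR", "SAR", "opp_EO_IR", "texture_0", "contours"]:
    df.add_layer(name, np.random.rand(512, 512))   # placeholder rasters
v = df.feature_vector(100, 200)                    # one pixel -> one feature vector
```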

Figure 9: Layered deepfile suitable for data mining (layers shown: color-fused, EO, IR, SAR, opponents, textures, contours, GMTI, SIGINT; indexed by pixel/geolocation)

Fused image mining is accomplished through the use of an interactive data selection GUI (shown in Fig. 10), and a Fuzzy ARTMAP neural network for pattern learning (to create search agents) and search [17,18] (shown in Fig. 11).

Figure 10: GUI for fused Image Mining; the user selects examples (green) and counter-examples (red) of target pixels (i.e., feature vectors) in the lower left zoom box



The user selects (with the mouse), in the lower left zoom box of the GUI, examples and counter-examples of target pixels, as shown in Fig. 10. Any layer in the data stack can be viewed in the upper panel, yet selection of a pixel actually selects the entire corresponding feature vector from the data stack. All feature vectors are labeled as either target or non-target class, and are fed to the Fuzzy ARTMAP net.


Figure 11: Fuzzy ARTMAP neural network for learning search agents that discriminate targets from non-targets (an input multisensor feature vector is complement coded, clustered via bottom-up and top-down adaptive filters, and mapped from learned category to recognized class)

The ARTMAP architecture shown in Fig. 11 is simplified and consists only of a lower ART module, which performs unsupervised clustering of feature vectors into categories, and an upper layer in which the learned categories form associations with one or the other class for a target of interest. This approach enables the network to learn a compressed representation of the target class in the context of non-targets in the scene. That is, it learns to discriminate the target from the context, using the multisensor data and spatio-spectral features. A target of interest is typically represented by a few learned categories, as are the non-targets. For a target of interest, the user goes through the process of example/counter-example selection, local search in a selected area, display of search results, and possible refinement by selecting additional pixels as examples of missed detections (targets) or false alarms (non-targets) and then relearning the representation. This interactive cycle typically takes seconds. Again, the user interacts with an applet running on a (possibly) light client computer, while the computations run on a remote server computer. This kind of interactive training of search agents is enabled by the on-line learning capabilities of ART architectures, and the simplicity of the Fuzzy ART computations. In addition, we have developed a fast procedure for isolating the subset of features (or layers) necessary to actually discriminate the target from the non-target [2] in the selected training data, which then accelerates the search over the image session or entire site. For each target class we actually train five Fuzzy ARTMAP networks with randomized order of selected training data, and derive a confidence value for target detections from the voting consensus across all five nets.
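A compact sketch of the simplified Fuzzy ARTMAP just described: a Fuzzy ART layer clusters complement-coded feature vectors into categories, each category is associated with the target or non-target class, and vigilance is raised (match tracking) when a training sample resonates with a wrongly labeled category. The parameter defaults are illustrative rather than the system's; see [17,18] for the full dynamics.

```python
import numpy as np

class SimpleFuzzyARTMAP:
    """Simplified Fuzzy ARTMAP: a Fuzzy ART layer clusters complement-coded
    inputs into categories; each category maps to one class (target/non-target)."""

    def __init__(self, rho=0.7, alpha=0.001, beta=1.0):
        self.rho, self.alpha, self.beta = rho, alpha, beta
        self.w, self.cls = [], []              # category weights, class labels

    @staticmethod
    def _code(a):
        """Complement coding: preserves amplitude information."""
        a = np.clip(np.asarray(a, dtype=float), 0.0, 1.0)
        return np.concatenate([a, 1.0 - a])

    def train_one(self, a, label, eps=1e-6):
        I = self._code(a)
        rho = self.rho                         # baseline vigilance
        # Visit categories in order of the Fuzzy ART choice function.
        choice = lambda j: np.minimum(I, self.w[j]).sum() / (self.alpha + self.w[j].sum())
        for j in sorted(range(len(self.w)), key=choice, reverse=True):
            match = np.minimum(I, self.w[j]).sum() / I.sum()
            if match < rho:
                continue                       # vigilance test fails
            if self.cls[j] == label:           # resonance: update the prototype
                self.w[j] = (self.beta * np.minimum(I, self.w[j])
                             + (1.0 - self.beta) * self.w[j])
                return
            rho = match + eps                  # match tracking: raise vigilance
        self.w.append(I.copy())                # recruit a new category
        self.cls.append(label)

    def predict(self, a):
        I = self._code(a)
        scores = [np.minimum(I, w).sum() / (self.alpha + w.sum()) for w in self.w]
        return self.cls[int(np.argmax(scores))]
```

Training five such nets on reshuffled copies of the selected example and counter-example pixels, and taking their voting consensus, then yields the detection confidence described above.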

Figs. 12 & 13 illustrate search results for roads and buildings in the fused example shown in Fig. 7.

Figure 12: Search for roads in fused EO, IR, SAR

Figure 13: Search for buildings in fused EO, IR, SAR

This same approach to fused image mining can be applied to hyperspectral imagery (HSI) of hundreds of bands, though our current prototype system is not designed to accommodate so many bands. Still, the same methods can be applied directly, to derive opponent-band contrasts, extract contours and textures from select (opponent) bands, and create a layered data stack for interactive mining. For purposes of display and target pixel selection, we can aggregate bands into four broad spectral bands and create a fused visualization. This was demonstrated on AVIRIS imagery in [2], and here we illustrate results for HYDICE imagery of an area of Ft. Hood, Texas, consisting of 210 bands in the spectral range 0.4-2.5 microns (visible through short-wave infrared). Figs. 14-16 show search results for rivers, roads/trails, and forested areas, respectively.
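The display-time aggregation can be as simple as averaging contiguous spectral ranges into a few broad bands. A minimal sketch follows; the even split into four groups is an assumption, not the actual band grouping used for the HYDICE data.

```python
import numpy as np

def aggregate_bands(cube, n_broad=4):
    """Average a (bands, rows, cols) hyperspectral cube into `n_broad`
    contiguous broad bands for fused visualization and pixel selection."""
    groups = np.array_split(np.arange(cube.shape[0]), n_broad)
    return np.stack([cube[g].mean(axis=0) for g in groups])

# e.g., a 210-band HYDICE-sized cube -> four broad bands for the fusion GUI
broad = aggregate_bands(np.random.rand(210, 128, 128))   # placeholder cube
```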

Figure 14: Search for rivers in HYDICE fused HSI

Figure 15: Search for trails in HYDICE fused HSI

Figure 16: Search for forested areas in HYDICE HSI (detected areas are delineated by their boundaries)

In these examples of mining fused HSI, though the deepfile of feature vectors exceeds two hundred layers, the same Fuzzy ARTMAP architecture can easily process it, and discover that only four layers are necessary to find all three classes of targets searched for. The salient layers involve opponent-band contrasts and contours. Searches which discriminate forests from orchards discover that texture is (not surprisingly) a key feature, since the spectral content is so similar. Reducing a hyperspectral image search down to only a few layers accelerates the process by two orders of magnitude. This approach is quite different from the more conventional principal components decomposition.
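The fast feature-isolation procedure itself is described in [2]; purely as a generic stand-in that conveys the idea of pruning uninformative layers (and emphatically not the method of [2]), one could greedily drop layers whose removal does not hurt training accuracy:

```python
import numpy as np

def greedy_feature_subset(train_fn, X, y):
    """Generic stand-in for feature-subset isolation: greedily drop each
    layer whose removal does not reduce training accuracy. `train_fn`
    trains a classifier on (X[:, keep], y) and returns its accuracy."""
    keep = list(range(X.shape[1]))
    base = train_fn(X[:, keep], y)
    for f in sorted(keep, reverse=True):
        trial = [k for k in keep if k != f]
        if trial and train_fn(X[:, trial], y) >= base:
            keep = trial
    return keep
```

Searching only the surviving layers over the full site is what produces the two-orders-of-magnitude speedup noted above.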

Figure 17: Performance summary (percent detection vs. false alarms) for fused image mining on six sites (different symbols) for four classes of target features


In Fig. 17 we summarize the performance of fused image mining for four classes of targets (roads, rivers, forests, orchards) extracted from six sites at four different geographic locations using different mixtures of sensor imagery. We determine the percent detection and percent false alarm by comparison with manually delineated target features. We see here the potential to achieve in excess of 90% detection with less than 5% false alarms in single-pass search for such cultural features. This suggests that multisensor fused image mining may provide a tool to aid geospatial image analysts. Converting these feature detections from raster to vector format enables vector editing and map generation using existing COTS software (as indicated in Fig. 2).
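Scoring against manually delineated ground truth reduces to comparing binary masks. A minimal sketch follows; defining percent false alarm relative to the non-target pixels is our assumption about the metric of Fig. 17.

```python
import numpy as np

def detection_stats(detected, truth):
    """Percent detection and percent false alarm for binary masks,
    computed pixel-wise against manually delineated ground truth."""
    detected, truth = detected.astype(bool), truth.astype(bool)
    pd = 100.0 * (detected & truth).sum() / max(int(truth.sum()), 1)
    pfa = 100.0 * (detected & ~truth).sum() / max(int((~truth).sum()), 1)
    return pd, pfa
```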

4 Collaborative Map-Image Interface

We’ve noted that our prototype fusion and mining system is structured as a client-server architecture that supports web-based communications. This provides a simple means to support collaborative exploitation by remotely located analysts, and a mechanism for rapid dissemination of results to remote users. In fact, users can easily select among the search results relevant to them, and possibly conduct further analysis such as spatial relations among detected targets. To enable such collaboration, we’ve prototyped a map-image interface which is displayed on the client and supports selective display of search results, annotation, and read-out of latitude & longitude at the cursor location. In fact, it is this same GUI which is used to select the area of interest for image mining, and to launch the mining GUI of Fig. 10.

Figure 18: Client-server architecture supports four-band MSI fusion & mining for Monterey, CA

An example of client-server based exploitation is shown in Fig. 18, for four-band multispectral imagery (MSI) collected with the Positive Systems, Inc. airborne ADAR sensor over the Naval Postgraduate School, Monterey, CA, to support the Urban Warrior Experiment of March 1999.

Results of mining for two classes of targets (red cars and buildings) within a selected area by two analysts are each saved to the server, then downloaded into a single map-image interface, and displayed as an applet within Netscape by a remote user. This is illustrated (with an enlarged area) in Figs. 19 & 20, where the user can toggle between an underlying fused image and an (old) map.

Figure 19: Map-Image Interface for collaborative exploitation (detected red cars & buildings)

Figure 20: Map-Image Interface selected to show a map beneath the detected targets (red cars & buildings); the map is found to be out of date, with buildings removed.


5 Summary and Future Directions

We have developed a prototype integrated system for multisensor image fusion and mining, and shown its performance on a variety of data sets. It builds on the underlying science of neural models of visual computation, learning and recognition, in the form of opponent-color center-surround shunting networks, Gabor multi-scale oriented filtering, Boundary Contour System theory, and Fuzzy ARTMAP pattern processing. These neural methods, in conjunction with graphical user interfaces, are combined with client-server web-based communication technologies to enable remote, collaborative exploitation of fused sensor imagery. Application to multi-modality medical imagery has already begun and looks very promising [19].

We intend to extend this fusion approach by incorporating non-imaging geospatial data, like GMTI and SIGINT, and enhancing search with additional features and multi-pass methods that treat prior search results as context. But the real challenge for the future is higher levels of information fusion for image analysis, and that too will follow from neural models of information fusion in the brain.

References

[1] W. Ross, A. Waxman, W. Streilein, J. Verly, F. Liu, M. Braun, P. Harmon, and S. Rak, “Multisensor image fusion for 3D site visualization and search,” in Proceedings of the Third International Conference on Information Fusion, Paris, July 2000.

[2] W. Streilein, A. Waxman, W. Ross, F. Liu, M. Braun, J. Verly, and C.H. Read, “Fused multisensor image mining for feature foundation data,” in Proceedings of the Third International Conference on Information Fusion, Paris, July 2000.

[3] D.A. Fay, A.M. Waxman, M. Aguilar, D.B. Ireland, J.P. Racamato, W.D. Ross, W.W. Streilein, and M.I. Braun, “Fusion of multisensor imagery for night vision: Color visualization, target learning and search,” in Proceedings of the Third International Conference on Information Fusion, Paris, July 2000.

[4] D.A. Fay, A.M. Waxman, J.G. Verly, M.I. Braun, J.P. Racamato, and C. Frost, “Fusion of visible, infrared and 3D LADAR imagery,” in Proceedings of the Fourth International Conference on Information Fusion, Montreal, August 2001.

[5] A.M. Waxman, D.A. Fay, A.N. Gove, M.C. Seibert, J.P. Racamato, J.E. Carrick, and E.D. Savoye, “Color night vision: Fusion of intensified visible and thermal IR imagery,” Synthetic Vision for Vehicle Guidance and Control, SPIE-2463, pp. 58-68, 1995.

[6] A.M. Waxman, A.N. Gove, D.A. Fay, J.P. Racamato, J.E. Carrick, M.C. Seibert, and E.D. Savoye, “Color night vision: Opponent processing in the fusion of visible and IR imagery,” Neural Networks, 10, pp. 1-6, 1997.

[7] A.M. Waxman, M. Aguilar, R.A. Baxter, D.A. Fay, D.B. Ireland, J.P. Racamato, and W.D. Ross, “Opponent-color fusion of multisensor imagery: Visible, IR and SAR,” Proceedings of the Meeting of the IRIS Specialty Group on Passive Sensors 1998, I, pp. 43-61, 1998.

[8] A.N. Gove, R.K. Cunningham, and A.M. Waxman, “Opponent-color visual processing applied to multispectral infrared imagery,” Proceedings of the Meeting of the IRIS Specialty Group on Passive Sensors 1996, II, pp. 247-262, 1996.

[9] E.A. Newman and P.H. Hartline, “Integration of visual and infrared information in bimodal neurons of the rattlesnake optic tectum,” Science, 213, pp. 781-791, 1981.

[10] E.A. Newman and P.H. Hartline, “The infrared vision of snakes,” Scientific American, 246, pp. 116-127, March 1982.

[11] P. Schiller, “The ON and OFF channels of the visual system,” Trends in Neuroscience, TINS-15, pp. 86-92, 1992.

[12] P. Schiller and N.K. Logothetis, “The color-opponent and broad-band channels of the primate visual system,” Trends in Neuroscience, TINS-13, pp. 392-398, 1990.

[13] P. Gouras, “Color Vision,” Chapter 31 in Principles of Neural Science (E.R. Kandel, J.H. Schwartz, and T.M. Jessell, editors), pp. 467-480, New York: Elsevier Science Publishers, 1991.

[14] P.K. Kaiser and R.M. Boynton, Human Color Vision, Optical Society of America, Washington, DC, 1996.

[15] S. Grossberg, “Nonlinear neural networks: Principles, mechanisms and architectures,” Neural Networks, 1, pp. 17-61, 1988.

[16] S. Grossberg and E. Mingolla, “Neural dynamics of perceptual grouping: Textures, boundaries and emergent segmentations,” Perception & Psychophysics, 38, pp. 141-171, 1985.

[17] G.A. Carpenter, S. Grossberg, and D.B. Rosen, “Fuzzy ART: Fast stable learning and categorization of analog patterns by an adaptive resonance system,” Neural Networks, 4, pp. 759-771, 1991.

[18] G.A. Carpenter, S. Grossberg, N. Markuzon, J.H. Reynolds, and D.B. Rosen, “Fuzzy ARTMAP: A neural network architecture for incremental supervised learning of analog multidimensional maps,” IEEE Transactions on Neural Networks, 3, pp. 698-713, 1992.

[19] M. Aguilar and A.L. Garrett, “Neurophysiologically-motivated sensor fusion for visualization and characterization of medical imagery,” in Proceedings of the Fourth International Conference on Information Fusion, Montreal, August 2001.