Software Solutions for NVG ENVG Integration


Transcript of Software Solutions for NVG ENVG Integration

Page 1: Software Solutions for NVG ENVG Integration

Software Solutions for NVG ENVG Integration

Keywords: SWIR Imaging, Night Vision, Sensor Fusion, Sub Pixel Image Analysis

September 22nd, 2010

Robotics and Computer Vision System Integration

Guy Martin, BEng MScA, [email protected]

Marie-Josée Perreault, BBA, [email protected]

Page 2: Software Solutions for NVG ENVG Integration

No automation system is more accurate than its instrument…

All advanced imagery techniques require a pinhole ‘errorless’ camera image, where geometric and chromatic distortion are removed, and without bias in the image center. In NVG and ENVG applications, it enables sub pixel edge extraction, sensor fusion, and added lossless video compression.
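As a minimal sketch of what such a software correction step can look like (using OpenCV purely for illustration; the intrinsic matrix and distortion coefficients below are placeholder values, not results from this work), a captured frame can be remapped to a pinhole-equivalent image once calibration is known:

```python
# Minimal sketch: remap a captured frame to a pinhole-equivalent image.
# OpenCV is used here purely for illustration, and every number below is a
# placeholder, not a calibration result from this work.
import cv2
import numpy as np

# Placeholder intrinsics for a 640x480 sensor: focal length in pixels,
# principal point at the nominal image center.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Placeholder distortion coefficients in OpenCV order: k1, k2, p1, p2, k3.
dist = np.array([-0.25, 0.08, 0.001, 0.0005, -0.01])

frame = cv2.imread("swir_frame.png")            # hypothetical captured frame
undistorted = cv2.undistort(frame, K, dist)     # pinhole-equivalent output
cv2.imwrite("swir_frame_pinhole.png", undistorted)
```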

We discovered systematic biases in the modeling and calibration procedure for digital cameras

Our testing uses low accuracy 1024x768 and 640x480 cameras, a typical resolution for SWIR imaging. We retrieve the camera focal distance to an unmatched 10e-10 mm…

We demonstrated in June 2009 that a software image correction approach could gain

8:1 higher image accuracy

4:1 faster computation time

30% added lossless video compression

This provides us with multiple integration trade-offs between computation speed, cost, accuracy, video compression, lens selection, …

Our software platform is source code compatible and open...

Page 3: Software Solutions for NVG ENVG Integration

Impact: Image Fusion and Night Vision

The problem:

Use wide angle lenses to increase the camera angle of view

Compensate SWIR (900 – 1700 nm) cameras’ low 640x480 resolution

Modify the ROIC analog circuit for uncooled InGaAs arrays to allow more amplification in low light conditions

Allow sensor fusion between SWIR, Color, and synthetic images

Constraint: The image may not lag by more than 250 msec

Our answer: High accuracy camera calibration and software image correction combined with sub pixel edge analysis

ROIC analog filter design

Page 4: Software Solutions for NVG ENVG Integration

As soon as you increase the lens angle of view…

Geometric distortion curves straight lines and shears objects’ squareness

Chromatic distortion splits light with respect to wavelength, whatever the spectrum

Both prevent sub pixel edge analysis and have to be removed from the image.

Some compensation comes from lens design; the remainder has to be corrected in software

Making sub pixel information significant

As the wavelength increases, f increases, and the image grows bigger. Measuring an object across the spectrum creates a size bias…

Lens distortion creates the biggest error in software imaging, and it grows larger away from the image center

Page 5: Software Solutions for NVG ENVG Integration

Overall Performance Is an Integration Trade-Off Between:

Lens-filter selection and design

Read-out analogue circuit behaviour with regard to pixel gain variation and noise

Camera embedded software

Computer software for advanced image treatment

Hardware design impacts future software image enhancements

Software image correction is dependent on accurate camera calibration

Page 6: Software Solutions for NVG ENVG Integration

Camera Calibration

The Camera Model has three parts: External Model, Lens Model, and Internal Model

Camera calibration is knowing how an image prints through the lens on the camera surface

Shawn Becker’s Lens Distortion Model (MIT & NASA)

x' = x + x*(K1*r^2 + K2*r^4 + K3*r^6) + P1*(r^2 + 2*x^2) + 2*P2*x*y
y' = y + y*(K1*r^2 + K2*r^4 + K3*r^6) + P2*(r^2 + 2*y^2) + 2*P1*x*y
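A direct transcription of this model into code might look like the following sketch (the function name is illustrative; x and y are coordinates relative to the distortion center and r^2 = x^2 + y^2):

```python
def becker_distort(x, y, K1, K2, K3, P1, P2):
    """Apply the radial + decentering distortion model quoted above.
    x, y are coordinates relative to the distortion center; returns x', y'."""
    r2 = x * x + y * y                       # r^2
    radial = K1 * r2 + K2 * r2**2 + K3 * r2**3
    x_p = x + x * radial + P1 * (r2 + 2 * x * x) + 2 * P2 * x * y
    y_p = y + y * radial + P2 * (r2 + 2 * y * y) + 2 * P1 * x * y
    return x_p, y_p
```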

We removed ¾ of the terms to gain accuracy!...

5 Internal Parameters

The camera pixel being square, a should equal b, with skew parameter s close to zero

For a fixed f lens
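For reference, the five internal parameters correspond to the usual pinhole intrinsic matrix; the sketch below uses the slide's a, b and s names and assumes the remaining two parameters are the image center coordinates (u0, v0):

```python
import numpy as np

def internal_model(a, b, s, u0, v0):
    """Pinhole internal model with 5 parameters:
    a, b   - horizontal and vertical scales (a = b = f for square pixels),
    s      - skew, expected to be close to zero,
    u0, v0 - image center (assumed here to be the remaining two parameters)."""
    return np.array([[a, s, u0],
                     [0.0, b, v0],
                     [0.0, 0.0, 1.0]])
```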

Page 7: Software Solutions for NVG ENVG Integration

Calibration Performance Criteria

We have to compensate for wavelength-colour variations in order to find the true edge at sub pixel level

Page 8: Software Solutions for NVG ENVG Integration

Calibration Results

6 External Parameters

5 internal parameters

2 geometric distortion parameters

Leftmost data set gives results for a model equivalent to the ones generally used, and rightmost, our most accurate result using the same experimental data on our own model.

The camera pixel being square, a should equal b = f, with skew parameter s close to zero

Left model shows an error on f of 10e-03 mm

Right model shows an error on f of 10e-10 mm, and corrects a systematic error on the image center of as much as 2 pixels, as well as an underestimation of the distortion parameters

Page 9: Software Solutions for NVG ENVG Integration

Lens Model: Chromatic Distortion

Amplified 50 times, our chromatic distortion model is purely radial and has a single image center for all three color channels RGB. We remove a ±½ pixel error on edge location.

Top left - red distortion, Bottom left - blue distortion

Note that they don’t peak at the same distance from the image center. Testing on a 1024x768 camera.
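A minimal sketch of such a purely radial, shared-center chromatic model is given below; the polynomial order and coefficient names are assumptions, since the presentation only states that the displacement is radial and that all three channels share one image center:

```python
def chromatic_shift(x, y, c1, c2):
    """Purely radial chromatic displacement of one colour channel (R or B)
    relative to the reference channel, sharing a single image center at (0, 0).
    c1, c2 are hypothetical per-channel coefficients fitted at calibration time;
    the quadratic/quartic form is an assumption for illustration."""
    r2 = x * x + y * y
    scale = c1 * r2 + c2 * r2**2
    return x * scale, y * scale   # displacement to subtract from that channel
```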

Page 10: Software Solutions for NVG ENVG Integration

Lens Distortion Correction

This image was taken by a 640x480 Bayer pattern color camera using an f=4mm lens, calibrated in the lab using our algorithms and setup.

Page 11: Software Solutions for NVG ENVG Integration

Sub pixel edge analysis

Working from a 3x3 footprint on low definition cameras, edges look blockish.

Devernay’s non maxima suppression technique (INRIA 1995) works for horizontal or vertical straight lines only.

It had to be adapted for corner detection and corrected for curvature and edge orientation bias
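As a hedged sketch of the core sub-pixel step in a Devernay-style detector (before the corner, curvature, and orientation corrections discussed here), the gradient magnitudes on either side of a local maximum can be fitted with a parabola and the offset of its peak taken as the sub-pixel edge position:

```python
def subpixel_edge_offset(g_minus, g_center, g_plus):
    """Fit a parabola to three gradient magnitudes sampled across an edge
    (g_center is a local maximum) and return the offset of the parabola's
    peak, in the range [-0.5, +0.5] pixels, along the sampling direction."""
    denom = g_minus - 2.0 * g_center + g_plus
    if abs(denom) < 1e-12:        # flat profile, no reliable sub-pixel shift
        return 0.0
    return 0.5 * (g_minus - g_plus) / denom
```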

Page 12: Software Solutions for NVG ENVG Integration

Once corrected, it becomes a good all-purpose edge detection technique for highly pixelated and blurred images

Sub pixel edge analysis

Page 13: Software Solutions for NVG ENVG Integration

Sensor Fusion

A computer generated image has exact f and perspective

Fusion of SWIR and Color images requires exactly the same f and exact removal of lens distortion (a resampling sketch follows at the end of this page)

Fusion to synthetic image is basic to augmented reality

Synthetic vision with Vision Amplification should appear in civilian airline transportation around the year 2018

Color camera should have a 1024x768 resolution

Fusion with a 640x480 SWIR

Eventually use a zooming lens on the color camera…

Computation speed then becomes an issue
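A minimal resampling sketch, assuming both cameras are boresighted (related by a pure rotation) and already corrected to pinhole geometry; OpenCV and all intrinsic values are used for illustration only:

```python
# Hedged sketch: resample an undistorted 640x480 SWIR frame onto the pixel grid
# of an undistorted 1024x768 color camera. Assumes boresighted cameras related
# by a pure rotation R; intrinsics and file names are placeholders.
import cv2
import numpy as np

K_swir  = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
K_color = np.array([[960.0, 0.0, 512.0], [0.0, 960.0, 384.0], [0.0, 0.0, 1.0]])
R = np.eye(3)                                   # placeholder boresight rotation

H = K_color @ R @ np.linalg.inv(K_swir)         # infinite homography
swir = cv2.imread("swir_undistorted.png", cv2.IMREAD_GRAYSCALE)
swir_in_color_frame = cv2.warpPerspective(swir, H, (1024, 768))
# swir_in_color_frame can now be blended pixel-for-pixel with the color image.
```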

Page 14: Software Solutions for NVG ENVG Integration

Annex A: Spectrum and Lens Distortion

SWIR wavelengths will focus further right along the lens axis…

A CCD will see lower SWIR wavelengths

Split up remaining SWIR spectrum to give spectral resolution

Page 15: Software Solutions for NVG ENVG Integration

Annex B: Bayer Pattern Recovery

The most accurate Bayer pattern interpolation schemes use edge sensing to recover missing RGB information. Missing values are interpolated using neighbouring pixel information.

In a two step process, we first compute the missing G pixel values on B and R pixels

Ex.: On red pixel R13, the missing G13 value is computed as:
(G12+G14)/2 if the edge is horizontal (R13 > (R3+R23)/2)
(G8+G18)/2 if the edge is vertical (R13 > (R11+R15)/2)
(G12+G8+G14+G18)/4 otherwise
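A literal transcription of this rule might look like the sketch below; the 5x5 window and its 1-based row-major pixel numbering are assumptions chosen to match the indices quoted above, and only step one (G at a red pixel) is shown:

```python
import numpy as np

def green_at_red(win):
    """Edge-sensing interpolation of the missing G value at a red pixel,
    implementing the rule quoted above. `win` is a 5x5 window of raw Bayer
    data centered on the red pixel, numbered 1..25 in row-major order, so
    the center is pixel 13 (index 12 after flattening)."""
    p = win.astype(float).flatten()
    R13, R3, R23, R11, R15 = p[12], p[2], p[22], p[10], p[14]
    G8, G12, G14, G18 = p[7], p[11], p[13], p[17]
    if R13 > (R3 + R23) / 2.0:        # horizontal edge
        return (G12 + G14) / 2.0
    if R13 > (R11 + R15) / 2.0:       # vertical edge
        return (G8 + G18) / 2.0
    return (G12 + G8 + G14 + G18) / 4.0
```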

In step two, we compute missing B and R values using known G

But the lens introduces errors in the image, geometric and chromatic distortion, curving edges, and ‘color shifting’ edge location as we scan from B to G to R pixels.

The Bayer pattern recovery requires adapting for geometric and chromatic distortion, while in monochrome imaging, accuracy is dependent on optical spectrum spread.

A BAYER COLOR CAMERA IS A SPECTRUM ANALYZER. USE THE SAME SCHEME ON THE SWIR SPECTRUM?!…