
Image Processing Tools Used for PSP and Model Deformation Measurements

Y. Le Sant*, A. Durand†, M-C. Merienne‡

Onera, 29 Avenue de la Division Leclerc, 92322 Chatillon, France

Recent advances in using image processing tools for wind tunnel applications such as PSP (Pressure Sensitive Paint) and model deformation measurements are presented. The input for these tools is markers located on the model as well as on the wind tunnel walls. Assessing their image coordinates is the first step of the image processing pipeline. A dedicated tool has been developed and its accuracy is discussed in detail. Then the step of matching with real markers known by their world coordinates is presented. The algorithm is quite new and uses the RANSAC paradigm. It is 3D and models the lens with the usual pinhole lens model. As a result, it solves the camera pose problem together with the matching problem. Finally, model deformation can be assessed with a grid tracking method, which makes it possible to measure 3D deformation even with only one camera. It has been learned from many PSP tests that the residual error comes from uncertainties about marker world coordinates as well as from optical distortions. A new approach has been developed to calibrate the camera parameters and to measure the marker coordinates by using several views of the model in the test section. This method is simple and provides a high accuracy. Determining model deformation requires an accurate in situ camera calibration, which is still a problem in wind tunnel testing. A solution is proposed which uses the epipolar geometry and solves the matching problem even for unknown markers.

I. INTRODUCTION

New optical methods are being widely used in wind tunnel testing and thus image processing tools are now included in the wind tunnel software package. Surface measurement methods such as PSP1,2 (Pressure Sensitive Paint) and IRT (Infrared Thermography)3 are of particular interest because they require identifying the model shape in the image. For 3D models, a 3D method must be used since conventional 2D image processing tools cannot capture the 3D nature of the scene. For example, even an image of a flat plate cannot be mapped when the camera orientation is far from an orthogonal view! This is especially true for wide-angle lenses, which create an important perspective effect.

Model deformation under aerodynamic loading must be considered for industrial tests and is the motivation to develop reliable methods based either on the Moiré principle4,5,6,7, speckle interferometry or stereo vision8,9,10. The best solution would be the simplest one, which uses no special devices. This is the reason why Onera is developing a stereo vision method that requires only digital cameras or even PSP cameras. As a consequence, the knowledge gained from PSP applications can be directly applied to stereo vision. However, model deformation requires a higher accuracy level and this is why all the existing tools developed for PSP have been revisited and improved. Section §II addresses marker recognition and discusses the recognition uncertainty. Then Section §III presents the registration step, which includes 3D marker matching and solving the camera pose problem. The RANSAC paradigm, widely used in machine vision, is described.

Section §IV deals with the calibration of the interior camera parameters, which includes optical distortion that cannot be ignored when considering model deformation.

* Senior Engineer, Fundamental and Experimental Aerodynamics Department, Meudon, [email protected].
† Student at Ecole Nationale Supérieure de Mécanique et d'Aérotechnique, [email protected].
‡ Senior Engineer, Fundamental and Experimental Aerodynamics Department, Meudon, [email protected].


35th AIAA Fluid Dynamics Conference and Exhibit, 6-9 June 2005, Toronto, Ontario, Canada

AIAA 2005-5007

Copyright © 2005 by the American Institute of Aeronautics and Astronautics, Inc. All rights reserved.


The other error source concerns the real marker coordinates: markers may be misplaced, or they may be painted manually so that their coordinates are not known accurately. It could also happen that errors exist in the model drawing or in the model manufacturing. It is even possible that the model deforms under its own weight. As a result, the exact marker world coordinates must be assessed accurately. A unified method is presented in Section §IV which makes it possible to calibrate the camera and to measure the marker coordinates. Its main feature is that it uses a lot of pictures (about 20 or more) taken by hand. As a consequence, it is simple to carry out compared to conventional calibration methods and 3D measurement devices.

Section §V briefly presents the main features of the model deformation assessment method11. The accuracy of this method comes from the camera calibration and marker measurement accuracy. Since a wind tunnel is a rather difficult environment (for example, it cannot be assumed that the cameras do not move slightly), a dedicated procedure is being developed. The principle is to put a lot of markers on reference surfaces in the test section and to use them to calibrate the cameras. The stereo vision method then enables measuring the model deformation. However, the reference markers must be known accurately and this is difficult to achieve. An alternative solution is suggested which consists in using a few primary reference markers to recognize the other markers, called secondary reference markers. This is done by using a lot of pictures, and then arises the matching problem, which is to match the secondary markers between the pictures. The method presented in Section §VI uses the epipolar geometry, which is new in wind tunnel applications, while it is widely used in machine vision where it has been proven to be extremely efficient. Section §VII provides some concluding remarks concerning the obtained results and planned developments.

II. MARKERS

A. Introduction

This section describes the marker recognition tool developed at Onera for wind tunnel applications, and especially for PSP applications. Markers are used instead of contours of the model because a high accuracy is required. There are constraints in placing the markers:
+ They should be as small as possible, according to the required accuracy, in order to minimize the loss of information where they are located.
+ They should cover a large part of the model because the registration process (also called resection) becomes less accurate outside the area covered by the markers.

As a result, the markers are usually quite small and may look like ellipses because they are not normal to the lens axis of the camera. They often look blurred for two reasons: the first is model vibrations and the second is that they may be out of focus. Figure 1 on the left is an infrared image obtained in a hypersonic wind tunnel. Markers were made with small plugs of plaster, so their surface temperature increases faster than that of the rest of the model. Marker a can be seen clearly, but marker b is barely visible because the heat flux level is very low on this part of the model. As a consequence, the temperature increases slowly, even on the marker. Besides, this marker is located on the top of the model and it looks stretched, as do the two markers in c.


Figure 1. Infrared marker and PSP marker examples.

Figure 1 on the right was obtained during a PSP test in the largest wind tunnel of Onera at transonic conditions. Marker a was nearly normal to the camera, so it looks like a disk. Marker b is on the wing and looks like an ellipse since the wing is not normal to the camera. Marker c looks like an ellipse too, but it is slightly blurred because it is located at the border of the in-focus area. The marker detection algorithm has been tested and improved constantly with these types of examples. The next sub-sections describe the algorithm and its uncertainty, which has been shown to be 0.1 pixel.

B. Recognition method

Simple marker detection methods are based on a thresholding criterion. The center of the marker is found by using a weight function based on the gray level. This type of solution does not work accurately for real images such as those in Fig. 1. As a consequence, a method that uses a kind of cross-correlation function should be used. Such a method, called SPOT, has been developed and implemented in the Onera resection software AFIX_2. Markers are black disks and can be ideally modeled by an inverse top-hat shape, as in Fig. 2.

The top-hat shape is modeled by its depth d and its radius r0. The model is fitted over a disk of radius 2r0. The implemented model is more complex because it takes into account the local slope. The quadratic error is used to check the relevancy of the model and is divided by the depth of the hole d. This defines a merit parameter called md. It is a normalized measure of the volume between the real shape and the inverse top-hat shape. A value of 1 is obtained when the error volume is equal to the volume of the hole. The lower md, the better the agreement. From our experience, a value of md = 0.7 rejects nearly all false markers. A merit parameter map is then obtained and a threshold is applied which selects areas. Each area corresponds to a unique marker and looks like a "bowl". The shape of this "bowl" is modeled with a quadratic surface and its minimum is the marker location. This location is determined with an iterative algorithm.
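Purely as an illustration of the merit-parameter idea, the sketch below fits an ideal inverse top-hat to a gray-level patch and normalizes the error volume by the hole volume; the local-slope term and the iterative sub-pixel search of the real SPOT tool are omitted, and the exact normalization used by SPOT is an assumption based on the description above.

```python
import numpy as np

def merit_md(patch, r0, d):
    """Sketch of the SPOT merit parameter md for a square gray-level patch
    centred on a candidate marker of radius r0 and fitted hole depth d."""
    n = patch.shape[0]
    c = (n - 1) / 2.0
    y, x = np.mgrid[0:n, 0:n]
    rr = np.hypot(x - c, y - c)
    fit_zone = rr <= 2 * r0                  # the model is fitted over a disk of radius 2*r0
    bg = patch[fit_zone & (rr > r0)].mean()  # background level around the hole
    model = np.where(rr <= r0, bg - d, bg)   # ideal inverse top-hat shape
    err_volume = np.abs(patch - model)[fit_zone].sum()
    hole_volume = d * np.pi * r0 ** 2        # volume of the ideal hole
    return err_volume / hole_volume          # md = 1 when both volumes are equal
```

A threshold such as md < 0.7 then rejects nearly all false markers, as stated above.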

C. Uncertainty

The uncertainty assessment of the SPOT tool is of primary interest for high accuracy applications. However, it is rather difficult to address with real images since there are too many parameters, such as lens distortion, which may override the SPOT uncertainty. As a result, the only practical way is to create virtual images to test SPOT.

A small piece of software has been written to create synthetic images of markers. It creates virtual markers with a given radius and a given depth. The depth parameter has nearly no effect and was kept constant for all the images. The virtual markers are created with various parameters: the radius (which is not an integer value), the marker center (markers are not usually centered on a pixel), the viewing angles (which create an ellipse instead of a circle), the blurring level and the noise level. Markers are created by averaging the values on an 11x11 sub-pixel matrix. The coordinates of each sub-pixel in this matrix are computed according to the marker orientation. Then the distance to the center of the marker is computed, determining whether the sub-pixel is inside the marker. Afterwards, the value of the pixel is obtained by counting the number of sub-pixels inside the marker. The blurring effect is obtained by making an iterative average on non-blurred images.
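A minimal sketch of such a generator is given below, assuming the 11x11 sub-pixel averaging described above; the viewing angle is mimicked here by a simple aspect ratio and rotation, which is an assumption since the exact marker model of the Onera tool is not published.

```python
import numpy as np

def render_marker(size, cx, cy, radius, aspect=1.0, theta=0.0, sub=11):
    """Render one synthetic marker: each pixel value is the fraction of its
    sub x sub sub-pixels lying outside the (tilted) marker ellipse, giving a
    black marker on a white background."""
    img = np.ones((size, size))
    offs = (np.arange(sub) + 0.5) / sub - 0.5        # sub-pixel offsets in [-0.5, 0.5]
    dy, dx = np.meshgrid(offs, offs, indexing="ij")
    ct, st = np.cos(theta), np.sin(theta)
    for j in range(size):
        for i in range(size):
            xs, ys = i + dx - cx, j + dy - cy
            u = ct * xs + st * ys                    # rotate into the marker frame
            v = (-st * xs + ct * ys) / aspect        # viewing angle squashes one axis
            inside = (u ** 2 + v ** 2) <= radius ** 2
            img[j, i] = 1.0 - inside.mean()          # count sub-pixels inside the marker
    return img
```

Blurring is then simulated by iterated local averaging of the clean image, and noise is added on top.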

The marker parameters are randomized and 144 markers are computed for each set of radius, blurring level and noise level. Figure 3 is an example obtained with a radius of 3 pixels. Fig. 3a is obtained without blurring and noise while Fig. 3b is obtained with blurring and noise.

From our experience at Onera, the choice of the marker radius is quite easy: there is usually no doubt since only one integer value fits the size of the markers in the images. But the real size of the marker may be different, so the uncertainty assessment must take into account a small change in the size of the real marker. It has been chosen to vary the size of the virtual markers by ±0.5 pixel around the reference radius. For each reference radius r0, 11 images were created with radii in the range [r0-0.5, r0+0.5] and the RMS value is calculated with the 144 random markers.

Figure 2. The inverse top-hat shape.

Figure 3. Virtual markers, r=3: a) no blurring, no noise; b) blurring and noise.

Figure 4. RMS error with blurring and noise.


Figure 4 shows the results with the reference radius in the range [2,5] and with a blurring effect and a noise level as in Fig. 3. It appears that the radius should be greater than 3 pixels and that the RMS error decreases when the radius increases. It is also shown that the error increases when the real radius is greater than the reference radius. This has been confirmed with other assessments and suggests using a reference radius always greater than the radius observed in the images. The important conclusion is that the uncertainty (twice the RMS value) is 0.1 pixel when the radius is greater than 3 pixels. An uncertainty of 0.1 pixel is quite good, but it could be useful to improve it because all the following results are based on marker detection. A good solution, as suggested by Ruyten12, would be to use marker templates based on the marker shape. The principle is to assess the marker observation parameters in order to compute a likely pattern and then to fit this theoretical pattern to the real one.

III. REGISTRATION

A. Introduction

Registration methods that map the 2D image onto the 3D model are usually called resection methods. All of them use a grid and they include model recognition tools. These tools require markers used to identify the position of the camera. Aligning images by using image markers without using their world coordinates is a common image processing application. The advantage of this solution, called 2D alignment, is that it can compensate for model deformation since no model grid is required. However, it does not provide any information on the deformation. Therefore advanced resection methods use 3D alignment since they compensate for 3D effects. 3D alignment is performed by making a guess about the camera location and then checking whether the real markers match their image coordinates. This implies solving the pose problem, which is described in the next sub-section. Then the RANSAC paradigm makes it possible to match the markers and to find the real camera location.

Onera has developed a resection software called AFIX_213 which is used for PSP, IRT and video applications. It maps the image onto the 3D model and compensates accurately for the model motion. It is used for image alignment, which is an important issue for PSP, and to extract information from the resulting images.

B. The pose problem

The camera location is defined by its Point Of View (hereafter PoV) relative to the model. It is defined by three angles, three distances and the field of view of the lens (see Fig. 5). Since there are six parameters, a minimum of three fiducial markers should be enough to determine them. However, it has been demonstrated14,15 that there are always two solutions and sometimes four. As a consequence, a fourth marker is required to find the right solution among the four possibilities.

An algorithm has been implemented in AFIX_2 to assess the PoV candidates. It works in two steps. The first one provides the coordinates of the three fiducial markers in the camera base. At this point, up to four solutions can be found. Then the second step provides the PoV parameters using the camera base coordinates. The difficulty lies in the rotation parameters, which have to be determined from the rotation matrix. A careful analysis has shown that combining several terms of this matrix enables retrieving the sine and cosine of the three rotation angles. A quite straightforward solution has been designed16, with special attention paid to singular cases (arising when some angle comes close to 0°, 90°, 180° or 270°).

Figure 5. Point of View (PoV) of the camera.
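The idea of combining matrix terms can be illustrated as follows, for the common convention R = Rz·Ry·Rx (the paper does not state its convention, so this is an assumption); atan2 on pairs of terms resolves the sine/cosine signs, and the near-singular case gets its own branch.

```python
import numpy as np

def angles_from_rotation(R):
    """Recover (rx, ry, rz) from a rotation matrix R = Rz @ Ry @ Rx (sketch)."""
    if abs(R[2, 0]) < 1.0 - 1e-9:
        ry = np.arctan2(-R[2, 0], np.hypot(R[0, 0], R[1, 0]))
        rz = np.arctan2(R[1, 0], R[0, 0])   # combines cos(ry)sin(rz) and cos(ry)cos(rz)
        rx = np.arctan2(R[2, 1], R[2, 2])
    else:
        # Gimbal lock (ry near +/-90 deg): only rx -/+ rz is observable, so set rz = 0.
        ry = np.pi / 2 if R[2, 0] < 0 else -np.pi / 2
        rz = 0.0
        rx = np.arctan2(R[0, 1], R[1, 1]) if R[2, 0] < 0 else np.arctan2(-R[0, 1], R[1, 1])
    return rx, ry, rz
```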

C. 3D matching using the RANSAC paradigm

The RANSAC paradigm is widely used in machine vision. RANSAC means RANdom SAmple Consensus. It was proposed by Fischler and Bolles14 in order to fit complex models to huge amounts of data, as in image processing applications. A preliminary version was implemented in the earliest versions of AFIX_217, but without knowing of RANSAC! The reason is that RANSAC is nearly unknown outside the machine vision community. RANSAC applied to marker matching and pose determination is implemented as follows (a sketch is given after the list):
1) Make a random choice of three real markers.


2) Make a choice of three image markers and link them to the real marker set. This defines a fiducial marker set.
3) Use the fiducial marker set to determine the PoV candidates.
4) Apply the PoV to each real marker to compute its image coordinates and compare them to the image markers found with SPOT. A matching criterion is used, which is a neighborhood qualifying distance.
5) Use the matched marker set to improve the PoV parameters by using a conventional RMS method.
6) Count the number of matched markers.
7) If all the real markers are matched, the exact solution is found.
8) If one or more real markers are not matched, make another guess of the three fiducial image markers at step 2.
9) If one or more real markers are not matched when all the image marker sets have been checked, then make another choice of the three real markers at step 1.
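The loop can be sketched as below; `solve_pose` and `project` stand for the pose solver and pinhole projection of the previous sub-sections and are hypothetical names, and the RMS refinement of step 5 is omitted for brevity.

```python
import itertools
import numpy as np

def ransac_match(world_pts, image_pts, solve_pose, project, tol=3.0):
    """Sketch of RANSAC marker matching: hypothesize 3-point correspondences,
    solve the pose, and keep the pose that matches the most markers."""
    best_pose, best_matches = None, []
    for wi in itertools.combinations(range(len(world_pts)), 3):        # step 1
        for ii in itertools.permutations(range(len(image_pts)), 3):    # steps 2, 8
            for pose in solve_pose(world_pts[list(wi)], image_pts[list(ii)]):  # step 3
                proj = project(pose, world_pts)                        # step 4
                d = np.linalg.norm(proj[:, None] - image_pts[None], axis=2)
                matches = [(k, int(d[k].argmin()))
                           for k in range(len(world_pts)) if d[k].min() < tol]
                if len(matches) > len(best_matches):                   # step 6
                    best_pose, best_matches = pose, matches
                if len(matches) == len(world_pts):                     # step 7
                    return pose, matches
    return best_pose, best_matches                                     # step 9 exhausted
```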

When all the possible solutions have been checked, the best solution is the one that provides the highest number of matched markers. The algorithm is very fast when all the real markers are visible in the image. It may take some time when this is not the case. Some refinements have been added to make it faster: PoVs too far from the current one are rejected and one or more non-matched real markers can be accepted. PoV rejection is based on scaling change and on angle change. A preliminary rejection of the three initial real markers is also used by checking the interior angles of the triangle: they must be in a range of 90°±80°.

The effectiveness of the RANSAC method lies in the randomized choice of the first three markers. If all of them are visible, the right solution is found nearly immediately because the second loop over the image markers is very fast. For example, if 90% of the true markers are visible, the probability of making a right choice for a solution is 0.9³ ≈ 0.73. Then the probability of making a bad choice is 0.27, which means that the probability of making a continuous series of n bad choices is 0.27ⁿ. For n=10 the probability is 2×10⁻⁶, which is very low!

RANSAC is now used in place of an old method based on polynomial matching functions. It is very fast (about 0.1 s) and the main advantage for the user is that no manual action is needed. A second important advantage is the ability of RANSAC to handle even severe motion, because it uses the pose solution, which takes into account the 3D nature of the real world. This is not the case for the old solution with polynomial functions, which are not adequate to model large 3D motion.

D. Resection: virtual images

When the PoV has been identified, each point on the model is related to a pixel in the image. What is more useful is to relate each pixel to a point on the model. This is done by projecting the vertices of the grid triangles onto the image. Then a bilinear interpolation is done on the projected triangle to relate each pixel to a point on the model. By doing this, each pixel is known by its image coordinates and spatial coordinates. Virtual images can be computed according to a virtual PoV. The spatial coordinates of the pixel are used to compute the image coordinates in the virtual image. This makes it possible to compensate for the model motion: all images are recomputed according to a unique PoV, which provides 3D image alignment. Virtual images can be used to combine several views or to help with flow understanding. They are often used to create views according to a perfect PoV such as a top view. 3D mapping enables extracting data from images. If the results are required on a given point set or on a line, the image coordinates of the points are calculated and used to get the wanted information.
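The projection at the heart of this resection step is the pinhole model; a minimal sketch, ignoring lens distortion (which is handled in the next section), is:

```python
import numpy as np

def project_points(P_world, R, T, f, c):
    """Pinhole projection of model points (sketch). R and T map world
    coordinates into the camera base, f is the effective focal length in
    pixels and c the principal point."""
    Pc = (R @ P_world.T).T + T             # world -> camera coordinates
    return f * Pc[:, :2] / Pc[:, 2:3] + c  # perspective division onto the sensor
```

Each grid triangle is projected this way with the measured PoV, its pixels are filled by bilinear interpolation, and the stored spatial coordinates are re-projected with the virtual PoV to build the virtual image.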

IV. CAMERA CALIBRATION

A. Presentation

PoV parameters are called exterior camera parameters, in contrast with interior camera parameters, which are the effective focal length, the pixel aspect ratio and the lens distortion parameters. Lens distortion includes six parameters18: two radial distortion parameters, two decentering distortion parameters and the two coordinates of the lens axis on the image sensor. Additional parameters may be added, such as image sensor tilt.

For usual applications, the main interior parameter is the effective focal length, which can be determined quite easily. It should be kept in mind that the focal length depends slightly on the focus distance, as illustrated in Fig. 6.

Figure 6. Out-of-focus effect.


The two images were obtained with the same camera at the same location, but with different focusing distances. The two horizontal lines show that the apparent object size is modified.

For applications such as model motion and deformation measurement, a full calibration must be carried out. Since the focus distance is always modified, and because the interior parameters depend on it, a laboratory calibration is not practicable for wind tunnel applications. As a consequence, an in situ calibration method would be a significant progress. Besides, markers used for camera calibration must be accurately known, and this is also a difficult task for wind tunnel applications. As a consequence, a new calibration method has been designed and is presented hereafter.

B. The method

The principle of the method is to use an embedded loop system to identify the camera parameters as well as the marker locations. The input of the method is a set of images with already matched markers. The images must be obtained with the same camera and without modifying the focus distance. A rough estimate of the marker locations is also needed, which could be replaced by more advanced tools, such as epipolar geometry (see below). Then the method uses an RMS engine to identify all the parameters, which are: the exterior parameters for each image, the interior parameters for the whole image set and the marker world coordinates. Figure 7 shows the calibration flow chart. At each iteration, the exterior camera parameters are identified for all the images. The RMS method is the same for the "global" parameters (interior parameters and marker world coordinates) as for the exterior parameters. It minimizes the RMS value for each parameter but only for a small number of iterations, no more than three. The parameter step is increased if the RMS keeps decreasing and it is decreased when the RMS does not decrease. Thus it works nearly like a ball rolling toward the lowest position.

Figure 7. Camera calibration flow chart.
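The "rolling ball" update can be sketched as follows; the step-scaling factors are illustrative, not the values used at Onera.

```python
def adaptive_descent(params, steps, rms, n_iter=3, grow=1.5, shrink=0.5):
    """One pass of per-parameter descent with adaptive steps (sketch):
    a step grows while it keeps lowering the RMS and shrinks otherwise."""
    best = rms(params)
    for _ in range(n_iter):                  # no more than three iterations per pass
        for i in range(len(params)):
            for sign in (+1.0, -1.0):
                trial = list(params)
                trial[i] += sign * steps[i]
                r = rms(trial)
                if r < best:
                    params, best = trial, r
                    steps[i] *= grow         # keep moving in the good direction
                    break
            else:
                steps[i] *= shrink           # no improvement: reduce the step
    return params, steps, best
```

In the full loop this update is applied alternately to the exterior parameters of each image and to the global parameters (interior parameters and marker world coordinates).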

Figure 8. Image calibration set.

Figure 8 is a selection of images used to calibrate a camera and to measure the marker coordinates. The model was a flat plate covered with a PSP paint and tested in a low speed wind tunnel. The chord was 690 mm. The camera was a NIKON D100 (3004x2000 pixels) and the focal length of the lens was 24 mm. The residual error was 0.4 pixel, which is quite large compared to the marker detection uncertainty. The explanation was bad illumination conditions; results as good as 0.2 pixel have been obtained under better illumination conditions.

Figure 9. Lens distortion.

The effect of lens distortion is highlighted in Fig. 9. The straight trailing edge of the model appears curved and the maximum deviation is about 10 pixels, which is very large. This defect is suppressed in the corrected image, which was obtained by inverting the lens distortion effect. The image on the right-hand side shows the effect magnified by a factor of 3.


Several image calibration sets were made and the results were in very good agreement (0.2 mm on the marker locations).

V. MODEL DEFORMATION

Model deformation must be addressed for PSP applications since the data reduction requires a grid. If model deformation is ignored, an error is introduced which can be significant. This is why a model deformation method applied to PSP measurements11 was developed at Onera. However, model deformation is now considered an important issue for industrial tests and this is why the capabilities of Onera's method have been upgraded.

A solution has been proposed by Liu19 to assess sting deformation in order to measure the aerodynamic loads. This solution takes the 3D nature of the model into account and provides quantitative information. The basic idea is to apply a deformation law to the grid. The law is designed knowing the mechanical behavior of the model: this reduces the number of deformation parameters. The deformation law depends on a few parameters, which are identified by using the marker displacements. Here lies the main drawback of the method: since there is only one camera, one displacement coordinate must be known. Usually, the y displacement is assumed to be null.

The solution developed at Onera overcomes this limitation. Its principle is to mix exterior camera calibration (PoV recognition) and deformation parameter identification. For a given deformation parameter set, marker displacements can be calculated and compared to image marker locations to assess the PoV parameters. This concept can be called grid tracking instead of marker tracking because the main parameters are the deformation parameters. As a result the method is able to handle 3D displacement and deformation. Its advantages are:
a. The magnitude of model deformation is known even when only one camera is used.
b. Only one camera is required.
c. The solution can be extended to multi-camera systems, allowing accurate measurement of the model deformation.

The only drawback is that the model deformation law must be determined for each model. However, mechanical knowledge about the model and the aerodynamic loads helps greatly in designing the deformation law. Nevertheless, it should be kept in mind that even marker tracking methods require a model which is fitted by using the marker displacements.

Models are made of several parts which may each have their own deformation law. For instance, the fuselage can be considered as a rigid body while wings behave as beams. This remark led to improving the grid concept by adding an extra feature, the volume ID number. Grids used for the model gather volumes identified by an ID number and having their own deformation law. Deformation laws are also applied to markers, which must be connected to the volume to which they belong. As for the grid, the marker concept has been upgraded by adding a volume ID.

Deformation laws depend on the model and must be rewritten for each application. The coding phase must be as simple as possible and a tradeoff between efficiency and ease of use has been made: deformation laws are gathered in a Dynamically Linked Library (DLL) written in C++. They are written in a simple way, so even users not experienced in C++ can handle them. Deformation parameters are identified with the algorithm used for PoV identification (see the previous section).

To summarize, the main features of the method are (an illustrative deformation law is sketched after the list):
a) The grid is made of volumes having an ID number.
b) Markers are connected to the volume to which they belong.
c) Deformation laws are written by the user in a dedicated DLL.
d) Deformation laws are applied to volumes and to the related markers.
e) Deformation parameters are identified in the same way as PoV parameters.
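The real laws live in the user-written C++ DLL; purely as an illustration of what such a law can look like, here is a hypothetical wing law with a quadratic bending and a linear twist parameter, written in Python for brevity.

```python
import numpy as np

def wing_law(vertices, params):
    """Hypothetical deformation law for a wing volume (sketch).
    vertices : (N, 3) grid points, y spanwise and z vertical (assumed axes)
    params   : (bend, twist) coefficients identified together with the PoV"""
    bend, twist = params
    v = vertices.astype(float).copy()
    y = v[:, 1]
    theta = twist * y                        # linear twist along the span
    x0 = v[:, 0].mean()                      # twist axis at mid-chord (assumed)
    dx, z = v[:, 0] - x0, v[:, 2]
    v[:, 0] = x0 + np.cos(theta) * dx - np.sin(theta) * z
    v[:, 2] = np.sin(theta) * dx + np.cos(theta) * z + bend * y ** 2
    return v
```

Markers carry the volume ID of the wing, so the same law is applied to them before projection and comparison with the SPOT detections.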

The method provides impressive results, as in Fig. 10, where the left image shows PSP results without model deformation correction and the right image with correction. However, deformation measurements are accurate enough only when the camera is calibrated, meaning the exterior parameters have been determined.

Figure 10. Model deformation correction.


Moreover, PSP cameras are usually located quite orthogonally to the model surface, so that they provide a poor accuracy on the z motion.

Accurate measurements can be made only with a stereo-vision system, which can be built by combining a PSP camera with a conventional digital camera. If the cameras are calibrated, the 3D location of each marker can be obtained easily by using the automatic detection and matching tools presented in the previous sections.

However, the keyword is camera calibration. Calibrating the cameras is done with a rigid calibration body or with the model itself when the world coordinates of the markers are accurately known. The problem is that the cameras may move slightly under testing conditions because of test section deformations due to aerodynamic loading. This is why it is recommended to use markers located on the test section walls in order to calibrate the cameras continuously.

VI. USING THE EPIPOLAR GEOMETRY

A. Objectives

As already mentioned, determining the exact location of markers is a tedious task with conventional devices (3D measurement systems) and having an optical method would be a significant progress. The method presented above assesses the marker coordinates accurately, provided a reasonable initial guess is available. This becomes an issue when there are about 100 markers, since they also have to be matched across all the images of the image calibration set.

The epipolar geometry is a kind of magical image processing tool since it virtually enables matching markers and retrieving their world coordinates with only two images taken at close positions. The only requirement is that the camera must be calibrated (mainly the exterior parameters).

B. The epipolar geometry

There is a huge body of work on epipolar geometry in computer vision, where it was introduced in the early 1990s by O. Faugeras20. It is described in Fig. 11. It is suitable when there are two or more images of the same object. The aim of computer vision is to reconstruct the model without knowing anything about its real shape.

Let us consider a point P1 on the model, which is projected in image 1 onto the point m1. Knowing only m1, we know that P1 lies on the line Cm1. This line is projected onto the line l1 in the second image, so we only know that the projection of P1 on image 2 lies on this line l1. This line is the intersection of the plane CP1C' with the image plane of the second camera. One particular point of l1 is the point e', which is the intersection of the line CC' with the image plane. If the same work is done for a second point P2, the line l2 also goes through the point e', since this point lies on the line CC', which does not depend on the point location. This point is known as the epipole of the second image. The epipolar geometry is based on the existence of this particular point, and one important result is the fundamental matrix F, which relates the point mi to the epipolar line li. When this matrix has been identified, the two cameras can be calibrated and the marker world coordinates can be determined.

Figure 11. The epipolar geometry.

The epipolar geometry can be applied when the views are not too far from each other. A solution is then to take several pictures while moving the camera slightly. Another solution would be to match a few markers manually (the primary reference markers). This gives a good guess of the fundamental matrix, which is then used to match the remaining markers.

C. The fundamental matrix

Let A be the matrix which relates a point m1 in image 1 to the point P on the model. Since we only know that P is on the line passing through m1, we can write the relation between P and m1 as:


$$ P = \alpha A m_1 $$

where $\alpha$ is the coordinate of $P$ along the line passing through $m_1$. $P$ is known in the camera 1 base system and we want to determine its value in the camera 2 base system. Its coordinates in the absolute base system are:

$$ P_a = R_1^{-1} \left( \alpha A m_1 - T_1 \right) $$

where $R_1$ is the rotation matrix and $T_1$ the translation vector of camera 1. The coordinates of $P$ in the camera 2 base system are then:

$$ P_2 = T_2 + R_2 R_1^{-1} \left( \alpha A m_1 - T_1 \right) $$

which can be rewritten as:

$$ P_2 = T_{12} + \alpha R_{12} A m_1, \qquad T_{12} = T_2 - R_{12} T_1, \qquad R_{12} = R_2 R_1^{-1} $$

$T_{12}$ is the translation between the two optical centers expressed in the camera 2 base. $R_{12}$ is the rotation between the two cameras expressed in the camera 2 base. The coordinates of $P$ can also be obtained from the point $m_2$ in image 2:

$$ P^* = \beta A m_2 $$

The two vectors $T_{12}$ and $P$ define a plane, which is called the epipolar plane. Its normal vector is:

$$ N = T_{12} \wedge P $$

which can be expressed as:

$$ N = T_{12} \wedge \left( T_{12} + \alpha R_{12} A m_1 \right) = \alpha \, T_{12} \wedge R_{12} A m_1 $$

The vector product can be expressed with a matrix:

$$ T_{12} \wedge X = [t]_\times X, \qquad T_{12} = \begin{pmatrix} t_x \\ t_y \\ t_z \end{pmatrix}, \qquad [t]_\times = \begin{pmatrix} 0 & -t_z & t_y \\ t_z & 0 & -t_x \\ -t_y & t_x & 0 \end{pmatrix} $$

The normal vector can then be expressed as:

$$ N = \alpha \, [t]_\times R_{12} A m_1 $$

Since $P^*$ lies in the epipolar plane, we have:

$$ {P^*}^T N = 0 $$

which gives:

$$ (A m_2)^T [t]_\times R_{12} A m_1 = 0 $$

The final expression is then:

$$ m_2^T F m_1 = 0, \qquad F = A^T E A, \qquad E = [t]_\times R_{12} $$

$F$ is called the fundamental matrix and $E$ the essential matrix. The fundamental matrix gives the parameters of the epipolar line on which the projection of $P$ lies. When $F$ has been normalized, the quantity $m_2^T F m_1$ provides the distance of $m_2$ to the epipolar line related to $m_1$. The matching process is then quite simple: two markers are matched when the distance to the epipolar line is lower than a given threshold (a few pixels). When markers have been matched, the 3D world coordinates of the related real point can then be calculated.
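The derivation maps directly into code; a minimal sketch in the paper's notation follows, with A taken as the matrix that back-projects homogeneous pixel coordinates to rays (an assumption consistent with the derivation above), and without the neighborhood criterion discussed in the next sub-section.

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]x, so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def fundamental(A, R12, T12):
    """F = A^T [t]x R12 A, as derived above."""
    return A.T @ skew(T12) @ R12 @ A

def epipolar_match(F, m1, m2, tol=3.0):
    """Pair markers of image 1 with markers of image 2 by their pixel
    distance to the epipolar line (sketch)."""
    m1h = np.column_stack([m1, np.ones(len(m1))])   # homogeneous pixels
    m2h = np.column_stack([m2, np.ones(len(m2))])
    matches = []
    for i, x1 in enumerate(m1h):
        l = F @ x1                                  # epipolar line of marker i in image 2
        d = np.abs(m2h @ l) / np.hypot(l[0], l[1])  # point-to-line distances in pixels
        j = int(d.argmin())
        if d[j] < tol:
            matches.append((i, j))
    return matches
```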

D. An example

Figure 12 shows the epipolar lines obtained on two images of the image set presented in Fig. 8. It can be observed that the epipolar lines go through the markers in the two images. The epipole of the left image is on its right because it is the intersection of the image plane with the line linking the two projection centers. The inverse remark can be made for the right image. One epipolar line is missing in each image because one marker is missing (bottom right for the left image and bottom left for the right image).

Figure 12. Matching with the epipolar geometry.

Matching markers is then very simple since the markers are nearly exactly on the epipolar lines. It is then quite easy to retrieve their world coordinates. However, several markers could lie on the same line. This is why an additional criterion is used, which is a neighborhood distance. This is also the reason why images must be taken quite close together, in order to avoid any mismatching. The practical way to use the epipolar geometry is then to take about 20 to 50 images and to match them iteratively.

However, world coordinates are obtained in an arbitrary coordinate system. Since absolute coordinates are required, it is necessary to know the location of a few markers, called primary markers, accurately. Then the world coordinates of all the other markers, the secondary markers, can be obtained. The primary markers can be located on the model itself or on a reference surface such as the test section walls.
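Once a marker is matched in two calibrated views, its world coordinates follow from intersecting the two viewing rays; since the rays never meet exactly, a common choice (a sketch, not necessarily the AFIX_2 solver) is the midpoint of the shortest segment between them.

```python
import numpy as np

def triangulate(C1, d1, C2, d2):
    """Midpoint triangulation (sketch): C1, C2 are the optical centers and
    d1, d2 the ray directions of the matched marker in the two views."""
    M = np.column_stack([d1, -d2])                    # solve C1 + a*d1 = C2 + b*d2
    a, b = np.linalg.lstsq(M, C2 - C1, rcond=None)[0]
    return 0.5 * ((C1 + a * d1) + (C2 + b * d2))
```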

VII. CONCLUDING REMARKS

Several image processing tools have been described. They make it possible to determine accurately the location of the markers as well as their world coordinates. The marker detection uncertainty is 0.1 pixel and it could be improved by using a marker template method. However, 0.1 pixel seems to be enough for all wind tunnel applications. There is still progress to be made on combining camera calibration and 3D marker location measurement. This could be done by modifying the algorithm, for example by using the Levenberg-Marquardt method.

The RANSAC paradigm has been applied to 3D matching. It is very fast and does not require any user input, which is a valuable advantage. Model deformation is well compensated and model deformation measurements are now being considered. The only remaining problem concerns camera calibration under flow conditions, to take into account the test section deformation. Markers located on the test section walls should fix this difficulty.

The last presented tool, which uses the epipolar geometry, is a kind of magical tool since it simplifies the matching process. It is the key to developing a fast and efficient model deformation and motion measurement method. It is also very useful to identify marker locations when there are a lot of them and no a priori guess of their real locations.

It should be noted that these tools are very well known in the computer vision community, which developed them. The novelty is to use them in wind tunnel testing in order to provide accurate and fast results. For some applications, such as model deformation measurements, real-time processing can already be planned.

VIII. REFERENCES

1 Bell, J.H., Schairer, E.T., Hand, L.A., Mehta, R.D., March 2001, "Surface Pressure Measurements Using Luminescent Coatings", Annual Review of Fluid Mechanics, Vol. 33, pp. 155-206.

2 Sullivan, J., January 2001, "Temperature and Pressure Sensitive Paint", Lecture Series 2000-2001, Advanced Measurement Techniques, Von Karman Institute for Fluid Dynamics.

3 Le Sant, Y., Marchand, M., Millan, P. and Fontaine, J., 2002, "An overview of infrared thermography techniques used in large wind tunnels", Aerospace Science and Technology, Vol. 6, pp. 355-366.

4 Patorski, K., 1993, "Handbook of the Moiré Fringe Technique", Elsevier Science Publishers, Amsterdam.

5 Pallek, D., Büttefisch, K. and Quest, J., August 2003, "Model Deformation Measurement in ETW using the Moiré Technique", 20th International Congress on Instrumentation in Aerospace Simulation Facilities, Göttingen, Germany.

6 Bülow, Th., Pallek, D., Sommer, G., 2000, "Riesz transforms for the isotropic estimation of the local phase of Moiré interferograms", Proceedings of the 22nd Symposium of the German Pattern Recognition Society (DAGM), Kiel, Germany.

7 Fleming, G.A., Bartram, S.M., Waszak, M.R., Jenkins, L.N., 2001, "Projection Moiré Interferometry Measurements of Micro Air Vehicle Wings", SPIE Conference on Optical Diagnostics for Fluids, Solids and Combustion, part of the SPIE International Symposium on Optical Science and Technology, San Diego, CA, July 29-August 3, SPIE paper No. 4448-16.

8 Burner, A.W., Martinson, D.D., June 17-20, 1996, "Automated wing twist and bending measurements under aerodynamic load", 19th AIAA Advanced Measurement and Ground Testing Technology Conference, paper AIAA 96-2253.

9 Liu, T., Radeztsky, R., Garg, S., Cattafesta, L., January 11-14, 1999, "A videogrammetric model deformation system and its integration with pressure paint", 37th AIAA Aerospace Sciences Meeting and Exhibit, paper AIAA 99-0568, Reno, NV.

10 Liu, T., Radeztsky, R., Cattafesta, L., June 2000, "Photogrammetry applied to wind-tunnel testing", AIAA Journal, Vol. 38, No. 6, pp. 964-971.

11 Le Sant, Y., Merienne, M-C., Lyonnet, M., Deléglise, B., Guilmard, A., 2004, "A Model Deformation Measurement Method and its Application on PSP Measurements", 24th AIAA Aerodynamic Measurement Technology and Ground Testing Conference, paper AIAA 2004-2192, Portland, Oregon, 28 Jun - 1 Jul 2004.

12 Ruyten, W., 2002, "Subpixel localization of synthetic references in digital images by use of an augmented template", Optical Engineering, Vol. 41, No. 3, pp. 601-607.

13 http://www.onera.fr/dafe-en/afix2/index.html

14 Fischler, M.A., Bolles, R.C., 1981, "Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography", Communications of the ACM, Vol. 24, No. 6, pp. 381-395.

15 Wolfe, W.J., Mathis, D., Sklair, C.W., Magee, M., 1991, "The Perspective View of Three Points", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 13, No. 1, pp. 66-73.

16 Le Sant, Y., Mérienne, M-C., 1995, "An Image Resection Method Applied to Mapping Techniques", 16th International Congress on Instrumentation in Aerospace Simulation Facilities, Dayton, OH, July 17-21.

17 Le Sant, Y., Deléglise, B., Mébarki, Y., 1997, "An automatic image alignment method applied to pressure sensitive paint measurements", 17th International Congress on Instrumentation in Aerospace Simulation Facilities, Monterey, CA, September 29-October 2.

18 Tsai, R.Y., 1986, "An Efficient and Accurate Camera Calibration Technique for 3D Machine Vision", Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Miami Beach, FL, pp. 364-374.

19 Liu, T., Barrows, D.A., Burner, A.W. and Rhew, R.D., June 2002, "Determining aerodynamic loads based on optical deformation measurements", AIAA Journal, Vol. 40, No. 6, pp. 1105-1112.

20 Luong, Q.-T., Faugeras, O., 1997, "Camera Calibration, Scene Motion and Structure Recovery from Point Correspondences and Fundamental Matrices", The International Journal of Computer Vision, Vol. 22, No. 3, pp. 261-289.