
Page 1: [IEEE Comput. Soc SIBGRAPI'98. International Symposium on Computer Graphics, Image Processing, and Vision - Rio de Janeiro, Brazil (20-23 Oct. 1998)] Proceedings SIBGRAPI'98. International

Tommaselli, Antonio Maria Garcia

Accuracy assessment of a photogrammetric close-range reconstruction system

Antonio Maria Garcia Tommaselli

São Paulo State University – Unesp, Campus at Presidente Prudente
Rua Roberto Simonsen, 305, 19060-900 Presidente Prudente, S.P., Brasil

e-mail: [email protected]

Abstract. The geometric accuracy of a close-range photogrammetric system is assessed in this paper, considering surface reconstruction with structured light as its main purpose. The system is based on an off-the-shelf digital camera and a pattern projector. The mathematical model for reconstruction is based on the parametric equation of the projected straight line combined with the collinearity equations. A sequential approach for system calibration was developed and is presented. Results obtained from real data are also presented and discussed. Experiments with real data using a prototype have indicated an accuracy of 0.5 mm in height determination and 0.2 mm in the XY plane, in an application where the object was 1630 mm distant from the camera.

Keywords: Reconstruction, Calibration, Close-Range Photogrammetry.

1. Introduction

The geometric accuracy of a photogrammetric system with structured light is assessed in this work. The project was proposed to meet the requirements of users interested in body surface measurement for scoliosis diagnosis. Previous research projects were conducted using non-metric cameras to measure body surface shapes for clinical applications, such as the diagnosis of postural problems associated with scoliosis, mastectomy and other conditions. The application of conventional stereophotogrammetry was unsuitable for several reasons:

• the texture of the human body is homogeneous, and stereoscopic measurement with floating marks is inaccurate and may produce spurious results;

• a high degree of automation is needed, because non-specialists have to operate the system routinely and it is desirable that they can perform the measurements themselves;

• conventional photogrammetric systems are too expensive for these applications;

• real-time responses are often required.

Digital photogrammetry seems to fulfil these needs, and highly precise close-range surface measurements can be achieved. The traditional stereo configuration is avoided in this project due to the problems associated with correspondence determination. An active pattern projector can be introduced and treated as an active camera, so that only one digital imaging camera is needed. This arrangement is similar to triangulation with structured light [Guisser et al. (1992)], [Schalkoff (1989)], [Singh et al. (1997)], [Dunn et al. (1989)], [Shrikhande & Stockman (1989)]. One of the problems in this method is the projector calibration, which has been approached with bundle adjustment [Frobin & Hierholzer (1981)], [Frobin & Hierholzer (1991)].

This paper presents the geometric characterization of a photogrammetric system composed of low-cost hardware (a pattern projector and a digital camera) and software, and the first experimental results obtained with a prototype.

2. Mathematical Model

The proposed approach can be seen as a fusion between a stereo and a structured light system. The projector is treated as a second camera and can be conveniently used as a source of reliable geometric information after a bundle calibration.

The main contribution of this proposal is to avoid inner calibration of the projector by computing only the equation of each straight line of the projected bundle, instead of the inner orientation parameters of the projector. This direct solution eliminates the need for a complex mathematical model, and it is also expected that even non-parameterised errors in the camera model can be absorbed by the straight line equations.

Figure 1 depicts the geometric concept of the system. The mathematical model is based both on the well-known collinearity equations and on the similarity (or Helmert) transformation [Moffit & Mikhail (1980)]. Some further derivations have been introduced regarding the intersection equations and the projector calibration.

Considering an arbitrary global reference system and the camera reference system, an inverse similarity transformation from image to object space can be defined


Anais do X SIBGRAPI (1997) 101-104


by equation (1):

$$
\begin{bmatrix} X_i \\ Y_i \\ Z_i \end{bmatrix}
=
\lambda_i
\begin{bmatrix}
r_{11} & r_{21} & r_{31} \\
r_{12} & r_{22} & r_{32} \\
r_{13} & r_{23} & r_{33}
\end{bmatrix}
\begin{bmatrix} x_i \\ y_i \\ z_i \end{bmatrix}
+
\begin{bmatrix} X_0 \\ Y_0 \\ Z_0 \end{bmatrix}
\qquad (1)
$$
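Read as code, eq. (1) maps a point from the image system into the object system. The sketch below is an illustration written for this transcript, not the author's implementation (which was in C); the function name and argument layout are ours, with the rotation matrix indexed so that M[i][j] = r(i+1)(j+1):

```python
def image_to_object(img_pt, M, origin, lam):
    """Inverse similarity transformation, eq. (1):
    object = lam * M^T * image + origin."""
    x, y, z = img_pt
    # M^T applied to the image vector, scaled by lam, then translated
    return tuple(
        lam * (M[0][k] * x + M[1][k] * y + M[2][k] * z) + origin[k]
        for k in range(3)
    )
```

For instance, with M the identity, lam = 2 and origin (1, 2, 3), the image point (1, 0, 0) maps to (3, 2, 3).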

Considering the projective nature of the photogrammetric process, the collinearity equations can be derived. These derivations are omitted here and can be found in the photogrammetric literature, e.g. [ASP (1980)] and [Moffit & Mikhail (1980)].

$$
x_i = -f\,\frac{r_{11}(X_i - X_0) + r_{12}(Y_i - Y_0) + r_{13}(Z_i - Z_0)}{r_{31}(X_i - X_0) + r_{32}(Y_i - Y_0) + r_{33}(Z_i - Z_0)}
\qquad
y_i = -f\,\frac{r_{21}(X_i - X_0) + r_{22}(Y_i - Y_0) + r_{23}(Z_i - Z_0)}{r_{31}(X_i - X_0) + r_{32}(Y_i - Y_0) + r_{33}(Z_i - Z_0)}
\qquad (2)
$$

In the collinearity equations (2), (Xi, Yi, Zi) are the coordinates of a generic point in the object space, (xi, yi) are the refined image coordinates of the same point (systematic effects are mathematically eliminated), the rij are the elements of the rotation matrix, and f is the camera focal length.
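As a minimal sketch of eq. (2) (ours, written for this transcript; the paper's software was in C), using the z = −f convention adopted later in the paper and M[i][j] = r(i+1)(j+1):

```python
def collinearity(P, origin, M, f):
    """Image coordinates (x_i, y_i) of object point P, eq. (2).
    origin = (X0, Y0, Z0) is the camera perspective centre."""
    dX, dY, dZ = (P[k] - origin[k] for k in range(3))
    # common denominator: third row of M applied to (P - origin)
    den = M[2][0]*dX + M[2][1]*dY + M[2][2]*dZ
    x = -f * (M[0][0]*dX + M[0][1]*dY + M[0][2]*dZ) / den
    y = -f * (M[1][0]*dX + M[1][1]*dY + M[1][2]*dZ) / den
    return x, y
```

With an identity rotation, a camera at (0, 0, 100) with f = 10 and an object point (5, 3, 0), this gives image coordinates (0.5, 0.3).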

The exterior orientation parameters of the camera reference system with respect to the global reference system must be computed, considering an arbitrary sequence of rotations. In this work the adopted rotation matrix, defined by the sequence Mz(κ) My(φ) Mx(ω), is given in (3).

$$
M =
\begin{bmatrix}
\cos\varphi\cos\kappa & \cos\omega\sin\kappa + \sin\omega\sin\varphi\cos\kappa & \sin\omega\sin\kappa - \cos\omega\sin\varphi\cos\kappa \\
-\cos\varphi\sin\kappa & \cos\omega\cos\kappa - \sin\omega\sin\varphi\sin\kappa & \sin\omega\cos\kappa + \cos\omega\sin\varphi\sin\kappa \\
\sin\varphi & -\sin\omega\cos\varphi & \cos\omega\cos\varphi
\end{bmatrix}
\qquad (3)
$$

The exterior orientation parameters are:

$$
[\,\kappa,\ \varphi,\ \omega,\ X_0,\ Y_0,\ Z_0\,]^{T}
\qquad (4)
$$
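The matrix of eq. (3) can be sketched as follows (an illustrative fragment, not the paper's C code); since M is a rotation, its rows are orthonormal and its inverse is its transpose:

```python
import math

def rotation_matrix(omega, phi, kappa):
    """Rotation matrix of eq. (3), sequence Mz(kappa) My(phi) Mx(omega)."""
    so, co = math.sin(omega), math.cos(omega)
    sp, cp = math.sin(phi), math.cos(phi)
    sk, ck = math.sin(kappa), math.cos(kappa)
    return [
        [cp*ck,  co*sk + so*sp*ck,  so*sk - co*sp*ck],
        [-cp*sk, co*ck - so*sp*sk,  so*ck + co*sp*sk],
        [sp,     -so*cp,            co*cp],
    ]
```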

The coordinates of the perspective centre of the projector (Xp, Yp, Zp) and the direction cosines of the projected straight line (li, mi, ni) were previously computed in the calibration step and are related to the global reference system. The parameter λi is an unknown that must be computed using both the information from the camera and from the projector.

Regarding the projector, the parametric equation of a projected straight line is given by:

$$
X_i = X_p + \lambda_i^{p}\, l_i \qquad
Y_i = Y_p + \lambda_i^{p}\, m_i \qquad
Z_i = Z_p + \lambda_i^{p}\, n_i
\qquad (5)
$$

where:

• (Xp, Yp, Zp) are the coordinates of a known point on the projected straight line, for instance the perspective centre of the projector;

• (li, mi, ni) are the direction cosines of each projected ray;

• λi^p is a parameter that can be associated with the distance between the projector perspective centre and the generic point in the object space.

In equation (5) the superscript p has been used to denote information acquired from the projector geometry.

Let us rearrange equation (1) using the geometric information from the camera, considering that all the zi coordinates of the photogrammetric image are equal to −f:

$$
X_i = \lambda_i\, u_i + X_0 \qquad
Y_i = \lambda_i\, v_i + Y_0 \qquad
Z_i = \lambda_i\, w_i + Z_0
\qquad (6)
$$

where:

$$
u_i = r_{11} x_i + r_{21} y_i + r_{31}(-f) \qquad
v_i = r_{12} x_i + r_{22} y_i + r_{32}(-f) \qquad
w_i = r_{13} x_i + r_{23} y_i + r_{33}(-f)
\qquad (7)
$$

Considering that in equations (5) and (6) the

Figure 1 Geometry of the camera-projector system.


coordinates of a generic point are the same in both equations, λi^p and λi can be determined using equations (8) and (9):

$$
\lambda_i = \frac{(X_p - X_0)\, n_i - (Z_p - Z_0)\, l_i}{u_i\, n_i - l_i\, w_i}
\qquad (8)
$$

$$
\lambda_i^{p} = \frac{(X_p - X_0)\, w_i - (Z_p - Z_0)\, u_i}{u_i\, n_i - l_i\, w_i}
\qquad (9)
$$

These equations are quite similar to those developed for space intersection [ASP (1980)].

Object space coordinates can be obtained by introducing the computed λi^p into equation (5). The same can be done by using equations (8) and (6). A more reliable estimate can be obtained by averaging the values from equations (5) and (6).
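The intersection described by eqs. (5)–(9) can be sketched compactly. The fragment below is an illustration written for this transcript (the paper's own implementation was in C); names and conventions are ours, with M[i][j] = r(i+1)(j+1):

```python
def intersect(x, y, f, M, cam0, proj0, ray):
    """Camera-ray / projector-ray intersection, eqs. (5)-(9).
    cam0 = (X0, Y0, Z0), proj0 = (Xp, Yp, Zp), ray = (l, m, n)."""
    X0, Y0, Z0 = cam0
    Xp, Yp, Zp = proj0
    l, m, n = ray
    # eq. (7): camera ray direction in object space, M^T . (x, y, -f)
    u = M[0][0]*x + M[1][0]*y - M[2][0]*f
    v = M[0][1]*x + M[1][1]*y - M[2][1]*f
    w = M[0][2]*x + M[1][2]*y - M[2][2]*f
    d = u*n - l*w
    lam  = ((Xp - X0)*n - (Zp - Z0)*l) / d        # eq. (8)
    lamp = ((Xp - X0)*w - (Zp - Z0)*u) / d        # eq. (9)
    p_cam  = (X0 + lam*u,  Y0 + lam*v,  Z0 + lam*w)     # eq. (6)
    p_proj = (Xp + lamp*l, Yp + lamp*m, Zp + lamp*n)    # eq. (5)
    # average of the two estimates, as suggested in the text
    return tuple((a + b) / 2.0 for a, b in zip(p_cam, p_proj))
```

With an identity rotation, a camera at (0, 0, 100) with f = 10, a projector at (50, 0, 100) and the ray through the object point (5, 3, 0), the image point (0.5, 0.3) reconstructs that object point.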

3. System Parts

3.1 Digital Camera

Several models of digital cameras have recently become commercially available, and most of them can be successfully applied to build digital reconstruction systems. Our prototype has integrated a Kodak DC40, which has a CCD with a resolution of 768x512 pixels (6.9 mm x 4.6 mm) and progressive scanning. The camera focal length is 46 mm equivalent, which means that a theoretical imaging area of 34.5 mm x 23 mm, with a pixel size of 45 µm, would have to be considered. On the other hand, considering the actual CCD imaging area of 6.9 mm x 4.6 mm and the pixel size of 9 µm, a reduced focal length of 9.2 mm should be used.

Other digital cameras of higher resolution could probably give better results, but this model compares favourably with off-the-shelf video cameras. Currently, we are also integrating a Kodak DC210, with higher resolution, into the prototype. Figure 2 depicts the Kodak DC40 digital camera.

3.2 Pattern projector

The pattern projector is the second element of the system, and it provides both the structured illumination and the geometric information that are used to reconstruct the scene. The structured illumination provides cues to enhance the correspondence process: pre-defined targets, for example circular targets, are easier to locate in the image and can be measured with higher accuracy. Geometric information is provided by the parameters of the projected ray, which are used together with the camera coordinates to compute object space coordinates, as presented in the previous section.

The pattern projector is an ordinary slide projector, which was reassembled in a mount with a mirror and the digital camera. The mirror can be used to redirect the projected bundle. The pattern was printed using a photographic process and inserted between two glass plates. Figure 3 presents the final configuration of the prototype, integrating the digital camera and the pattern projector.

The whole system configuration includes a calibration plate, which has 20 control points, 6 of which are 200 mm apart from the projection plane. In

Figure 3 Final configuration of the prototype: digital camera, pattern projector (lens, mirror, slide) and system bracket.

Figure 2 Digital Camera.


Figure 4 Prototype being used.


figure 4 the control points and the projected points can be seen. The prototype was attached to the carrier of a rectangular plotter, in order to accurately measure its displacements in the calibration step.

4. Feature Extraction

The pattern projected onto the surface must be designed to allow image analysis in real time, e.g. a set of crossed lines, squares or dots. In order to improve the correspondence process it is recommended to design a pattern with distinct spots, both in shape and in dimension. The feature extraction process attempts to detect, recognise and "measure" the lines projected onto the object. Several methods have been implemented, from assisted measurement on the computer screen to automatic extraction, both with least squares matching and with region grouping. Although these methods achieve high precision, in some cases they can fail to identify the targets and can provide incorrect correspondences, because the pattern contains similar targets. This step is still under development and more robust techniques are being studied.

For the case study, the coordinates of the targets were extracted by direct measurement, which ensures correct identification, but the precision is limited to 1 or ½ pixel.

5. System calibration

The system calibration involves the determination of the camera inner orientation parameters and of the parametric equations of all projected straight lines. This concept is depicted in figure 5.

Although the simultaneous estimation of those parameters would be desirable, experiments have demonstrated unreliable results due to singularities in the system of normal equations. Based on this experience, a sequential solution was derived. It involves seven steps, which are summarised in figure 6.

5.1 Camera calibration

The first step in the system calibration is the camera calibration. The camera is calibrated using a self-calibrating bundle adjustment with convergent cameras. The DC40 has a fixed focus, and it is expected that the calibration parameters remain unchanged for a period. A set-up with 48 circular targets was used and convergent images were acquired. The mathematical model for camera calibration is simplified because the CCD has square pixels and the coverage angle is narrow. Only the first order radial lens distortion (k1), the coordinates of the principal point (cx and cy) and the focal length (f) have been considered as camera inner parameters.

5.2 Calibration of the projected bundle

The concept of the projector calibration method is similar to the ∆Z method used to determine the perspective centre coordinates in independent model triangulation with analogue photogrammetric stereoplotters. Several images of the projected bundle are acquired on parallel reference planes which have targets to be used as control points. Different image coordinates of a projected

Figure 5 Reference system and the geometric concept of the calibration method.

Figure 6 Steps in system calibration: camera calibration; control point location in the i-th plane; semi-automatic location of the projected points in the i-th plane; computation of the space resection parameters (from control points); computation of the projected point coordinates in the i-th plane (in the camera reference system); moving the projection plane and grabbing the next image; estimation of the coordinates of the projector perspective centre; estimation of the direction cosines of the projected rays.


point in several planes will correspond to the same straight line in the object space. Control points are used to compute the projection plane position and orientation with respect to the camera reference system. Therefore, the coordinates of the projected points over the plane can be computed both in the global and in the camera reference systems.

Camera calibration is the first step and can be performed beforehand, supposing that the inner orientation elements are stable.

In the second step, control points on the calibration plate are identified and measured.

The third step involves the identification and measurement of the projected point coordinates for each image collected. Control points and projected points are extracted using feature extraction methods or by direct measurement on the computer screen.

In the fourth step, the position and orientation parameters of the camera with respect to the control points are computed using space resection procedures. Accurate results were only obtained after the introduction of six non-coplanar control points, 200 mm apart from the calibration plate.

The fifth step computes the 3D coordinates of the projected points in the global reference system, using the position and orientation parameters previously computed and the image coordinates (measured in step 3). These coordinates are computed using eq. (10).

$$
X_i = X_0 + (Z_i - Z_0)\,\frac{x_i r_{11} + y_i r_{21} - f r_{31}}{x_i r_{13} + y_i r_{23} - f r_{33}}
\qquad
Y_i = Y_0 + (Z_i - Z_0)\,\frac{x_i r_{12} + y_i r_{22} - f r_{32}}{x_i r_{13} + y_i r_{23} - f r_{33}}
\qquad
Z_i = Z_{plane}
\qquad (10)
$$

All the Z coordinates of the projected points are considered to be the same, supposing that the plate is a truly flat surface.
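Eq. (10) is simply the intersection of the camera ray with a known plane. A minimal sketch (ours, not the paper's C code), reusing the M[i][j] = r(i+1)(j+1) convention:

```python
def project_to_plane(x, y, f, M, cam0, z_plane):
    """Intersect the camera ray through image point (x, y) with the
    reference plane Z = z_plane, eq. (10)."""
    X0, Y0, Z0 = cam0
    # camera ray direction in object space, as in eq. (7)
    u = M[0][0]*x + M[1][0]*y - M[2][0]*f
    v = M[0][1]*x + M[1][1]*y - M[2][1]*f
    w = M[0][2]*x + M[1][2]*y - M[2][2]*f
    X = X0 + (z_plane - Z0) * u / w
    Y = Y0 + (z_plane - Z0) * v / w
    return X, Y, z_plane
```

With an identity rotation, a camera at (0, 0, 100) with f = 10 and the image point (0.5, 0.3), the ray meets the plane Z = 0 at (5, 3, 0).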

Once the reference plane has been moved, another image is grabbed and steps 2, 3, 4 and 5 are repeated until information from at least three planes is available.

The coordinates of the projector perspective centre are estimated in the sixth step. The mathematical model and procedures for this computation can be found in [Moffit & Mikhail (1980)]. The observation equations are based on the straight line that fits two projected points and the projector perspective centre, as one can see in fig. 5. The straight line equation in 3D space is given by eq. (11):

$$
\frac{X_p - X_a}{X_b - X_a} = \frac{Z_p - Z_a}{Z_b - Z_a}
\qquad
\frac{Y_p - Y_a}{Y_b - Y_a} = \frac{Z_p - Z_a}{Z_b - Z_a}
\qquad (11)
$$

After some algebraic manipulation, and associating residual errors (vx and vy) with the equations, linear observation equations can be obtained, as seen in eq. (12):

$$
v_x + (Z_b - Z_a)\,X_p - (X_b - X_a)\,Z_p = X_a Z_b - X_b Z_a
$$
$$
v_y + (Z_b - Z_a)\,Y_p - (Y_b - Y_a)\,Z_p = Y_a Z_b - Y_b Z_a
\qquad (12)
$$

Once at least two different points on two planes are available, it is feasible to compute the projector perspective centre coordinates using the Least Squares Method.
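The least-squares step can be sketched as follows. This is an illustrative fragment written for this transcript (the paper's software was in C); it stacks the observation equations of eq. (12) for each point pair (the same projected point seen on two planes) and solves the 3x3 normal equations:

```python
def projector_centre(pairs):
    """Least-squares estimate of (Xp, Yp, Zp), eq. (12).
    pairs: list of ((Xa, Ya, Za), (Xb, Yb, Zb)) for each ray."""
    N = [[0.0] * 3 for _ in range(3)]   # normal matrix A^T A
    c = [0.0] * 3                       # right-hand side A^T b
    for (Xa, Ya, Za), (Xb, Yb, Zb) in pairs:
        dz = Zb - Za
        rows = [([dz, 0.0, -(Xb - Xa)], Xa * Zb - Xb * Za),
                ([0.0, dz, -(Yb - Ya)], Ya * Zb - Yb * Za)]
        for a, b in rows:
            for i in range(3):
                c[i] += a[i] * b
                for j in range(3):
                    N[i][j] += a[i] * a[j]
    # solve N t = c by Gaussian elimination with partial pivoting
    for k in range(3):
        p = max(range(k, 3), key=lambda r: abs(N[r][k]))
        N[k], N[p] = N[p], N[k]
        c[k], c[p] = c[p], c[k]
        for r in range(k + 1, 3):
            m = N[r][k] / N[k][k]
            for j in range(k, 3):
                N[r][j] -= m * N[k][j]
            c[r] -= m * c[k]
    t = [0.0] * 3
    for k in (2, 1, 0):
        t[k] = (c[k] - sum(N[k][j] * t[j] for j in range(k + 1, 3))) / N[k][k]
    return tuple(t)
```

Two rays (four equations) already determine the three unknowns; additional rays improve the estimate.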

The direction cosines of the projected rays are estimated in the seventh step. Using the computed coordinates of the projector perspective centre and the coordinates of the projected points in the global reference system, the direction cosines of each projected ray can be computed using eqs. (13) and (14).

$$
\Delta X_i = X_i^{a} - X_p \qquad
\Delta Y_i = Y_i^{a} - Y_p \qquad
\Delta Z_i = Z_i^{a} - Z_p
\qquad (13)
$$

$$
l_i = \frac{\Delta X_i}{\sqrt{\Delta X_i^2 + \Delta Y_i^2 + \Delta Z_i^2}} \qquad
m_i = \frac{\Delta Y_i}{\sqrt{\Delta X_i^2 + \Delta Y_i^2 + \Delta Z_i^2}} \qquad
n_i = \frac{\Delta Z_i}{\sqrt{\Delta X_i^2 + \Delta Y_i^2 + \Delta Z_i^2}}
\qquad (14)
$$

where (Xi^a, Yi^a, Zi^a) are the coordinates of projected point i on a reference plane; with this sign convention the ni are negative, as in table 4.

For each projection plane a set of direction parameters can be computed, and the average value gives the best estimate.
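The normalisation and averaging of eqs. (13)–(14) can be sketched as follows (an illustrative fragment, ours rather than the paper's C implementation):

```python
import math

def direction_cosines(centre, point):
    """Direction cosines (l, m, n) of the ray from the projector
    perspective centre to a projected point, eqs. (13)-(14)."""
    dx, dy, dz = (point[k] - centre[k] for k in range(3))
    d = math.sqrt(dx*dx + dy*dy + dz*dz)
    return dx / d, dy / d, dz / d

def mean_ray(centre, points_per_plane):
    """Average the per-plane estimates of one projected ray."""
    cs = [direction_cosines(centre, p) for p in points_per_plane]
    return tuple(sum(c[k] for c in cs) / len(cs) for k in range(3))
```

For example, the ray from a centre at (0, 0, 10) to the point (3, 0, 6) has direction cosines (0.6, 0.0, -0.8).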

6. Results

6.1 Experiments with simulated data

Extensive experiments with simulated data were performed in order to assess the theoretical accuracy of the system and to study variations in the configuration for the design of a prototype. From these simulations an accuracy of 1 mm in depth and 0.3 mm in the XY plane was expected. These preliminary results were reported in [Tommaselli et al. (1996)] and are omitted here for lack of space. The implementation of the prototype and the experiments with real data were driven by the parameters


obtained from these results.

6.2 Experiments with real data

6.2.1 Camera Calibration

The inner orientation parameters of the Kodak DC40 digital camera were obtained using a self-calibrating bundle adjustment, with nine images taken of a set of 48 circular targets, with the principal point related to the approximate centre of the frame buffer (378, 252) and considering an approximate scale factor of 0.045 (pixel size). The values of the calibrated inner orientation parameters and their standard deviations are presented in table 1. The a posteriori standard deviation (σ̂0) indicates an observation error of about ½ pixel. The target coordinates in the image were measured automatically using a centre-of-gravity criterion.

Table 1 Inner parameters of the digital camera DC40, using 48 control points and 9 images.

Parameter    Estimated Value    Estimated Standard Deviation
cx (mm)      -0.133             0.052
cy (mm)       0.208             0.068
k1            0.000063          0.000001
f (mm)       46.981             0.066
σ̂0 (mm)      0.025

6.2.2 Projector calibration

The proposed method for system calibration, including camera calibration, has been implemented in the C language. The method was tested with real data collected using the prototype. The global coordinates of the 20 control points located on the calibration plate were measured with an accuracy of 0.1 mm. The projector and the digital camera were both turned on and 3 images were collected. The first position of the prototype was 1640 mm from the calibration plate. After each image was collected, the plotter carrier was moved by 100 mm in order to generate a displacement between the projection planes.

Each grabbed image had 440 projected points. In this experiment only the 16 visible control points and 9 projected points were measured. Visual identification and measurement on the computer screen were used, and it is expected that with an automatic measurement process the accuracy of the image coordinates may increase several times. Also, if more projected points were used, the overall accuracy of the estimates could be improved. However,

even under these conditions the coordinates of the projector perspective centre were accurately estimated, as can be seen from the following tables.

Table 2 presents the camera exterior orientation parameters for each of the three images. The "Dif." lines show the differences between these parameters; it can be seen that the angular discrepancies are negligible, whilst the positional differences are due to the orientation angles of the camera with respect to the global reference system. Residuals in the space resection procedure were within 0.025 mm, which corresponds to ½ pixel.

Estimated values for the coordinates of the projector perspective centre are presented in table 3. The second line of this table presents the coordinates in the camera reference system, which were obtained by applying an inverse similarity transformation to the coordinates in the global reference system, using the camera exterior orientation parameters. The original output of the estimation process by the ∆Z method was the set of coordinates in the global reference system, presented in the third line of table 3.

The direction cosines were estimated using eqs. (13) and (14) for each projection plane. The average values over the three planes for the nine points used in this experiment are presented in table 4.

From this calibrated set (coordinates of the projector perspective centre and direction cosines) and from the image coordinates of a projected point, the 3D coordinates of the surface points can be computed, using the equations presented in section 2.

Table 2 Camera exterior orientation parameters for each projection plane.

            κ (rad)    φ (rad)    ω (rad)    X0 (mm)   Y0 (mm)   Z0 (mm)
1st plane    0.01330    0.01418   -0.02600    375.12    364.54   1842.67
Dif. 2-1    -0.00008   -0.00435   -0.00041     -5.35      1.04   -101.18
2nd plane    0.01321    0.00983   -0.02641    369.76    365.59   1741.49
Dif. 3-2    -0.00047    0.00607   -0.00146     12.49      3.29   -102.11
3rd plane    0.01274    0.01590   -0.02787    382.26    368.88   1639.37

Table 3 Coordinates of the projector perspective centre.

                                  Xp (mm)     Yp (mm)    Zp (mm)
Camera reference system          -490.304      23.837     56.825
Global reference system          -114.601     383.502   1905.981


Table 4 Direction vectors.

Point     li         mi         ni
01      0.09159   -0.17179   -0.98087
02      0.24040   -0.16860   -0.95592
03      0.38490   -0.16341   -0.90838
11      0.09012   -0.02403   -0.99564
12      0.23928   -0.01954   -0.97075
13      0.38453   -0.01461   -0.92300
21      0.08562    0.12279   -0.98873
22      0.23365    0.12957   -0.96365
23      0.37689    0.13301   -0.91666

6.2.3 Accuracy of a reconstructed flat surface

In order to evaluate the real accuracy of the proposed approach and the reliability of the prototype, an experiment with one of the images of the calibration plane was conducted.

The 3D coordinates of the projected points were computed using the calibrated parameters of the projector and the image coordinates of the same points whose direction cosines had been established.

Table 5 Reconstructed coordinates and errors with respect to the projected coordinates.

Point    X (mm)     Y (mm)     Z (mm)     eX (mm)   eY (mm)   eZ (mm)
01       43.161     85.613    216.417     -1.280     0.983    16.417
02      310.371     83.755    216.126     -3.646     1.032    16.126
03      602.154     76.648    214.399     -5.750     0.874    14.399
11       38.361    341.798    216.116     -1.556    -1.272    16.116
12      302.043    348.593    215.657     -3.782    -1.177    15.657
13      589.534    355.892    215.842     -5.957    -1.061    15.842
21       31.696    592.934    216.478     -1.676    -3.338    16.478
22      295.196    610.164    215.883     -3.901    -3.156    15.883
23      581.149    628.259    213.808     -5.737    -2.696    13.808

The "reconstructed" coordinates must fit a plane, and their scatter provides a measure of the system accuracy. The reconstructed coordinates were compared with the coordinates of the projected points obtained in the fifth step of the calibration process (fig. 6); these projected coordinates were computed using eq. (10). Table 5 presents the results, including the errors with respect to the projection plane. It is worth noting that there is a systematic effect due to the error in the location of the projector perspective centre. These errors cause a rotation of the reconstructed surface.

A similarity transformation (7 parameters) was then applied to the reconstructed coordinates to compensate for the orientation discrepancies between the reconstructed and the real flat surface. The obtained transformation parameters are presented in table 6. The inverse transformation was then applied to the reconstructed points, and the errors with respect to the projected points in the global reference system were evaluated.

Table 6 Similarity transformation parameters relating the reconstructed surface and the projected points.

κ (rad)     ϕ (rad)     ω (rad)     Xc (mm)   Yc (mm)   Zc (mm)    λ
-0.00062    -0.00299     0.00058     1.751    -1.250    -18.431    0.992

Table 7 presents the reconstructed coordinates after the similarity transformation, together with the errors with respect to the projected plane. The errors in the reconstructed coordinates show that a high-accuracy reconstruction was achieved, even considering the direct measurement of targets and the low resolution of the camera.

Although the original reconstructed flat surface was rotated after the transformation, the coordinate scatter with respect to the projection plane was within 0.5 mm. The penultimate line of table 7 presents the mean of the errors, whilst the last one presents the Mean Square Error (MSE) of the flat surface reconstruction. The mean approaches zero, whilst the MSE is around 0.2 mm for the XY coordinates and 0.5 mm for the Z coordinate. The relative error in Z is 1/3278, considering that the camera is 1639 mm from the plane surface.
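The mean and MSE statistics of table 7 can be reproduced with a few lines. A minimal sketch (ours; the paper's software was in C), under the assumption that the tabulated "MSE" is the root-mean-square of the residuals, consistent with its millimetre units:

```python
import math

def mean_and_mse(errors):
    """Mean and root-mean-square of a list of residuals (mm)."""
    mean = sum(errors) / len(errors)
    mse = math.sqrt(sum(e * e for e in errors) / len(errors))
    return mean, mse
```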

The orientation of the surface was affected mainly by a rotation ϕ around the Y axis. This inaccuracy in the recovery of the surface orientation is greatly influenced by the configuration of the control points, which are aligned in the Y direction, generating a weak geometry for the recovery of the ϕ rotation. The similarity transformation corrected the displacement caused by this weak geometry, although the shape of the surface could be considered accurate even in the rotated position.

The presented results with accurate calibration data enable the assessment of the accuracy of the system. The accuracy, considering the close-range environment, was within 0.5 mm, which is considered suitable for the target application.


Table 7 Coordinates of the reconstructed points after a similarity transformation, and errors with respect to the projected coordinates.

Point    X (mm)     Y (mm)     Z (mm)     eX (mm)   eY (mm)   eZ (mm)
01       44.644     84.865    199.826     -0.201    -0.233     0.173
02      313.903     82.825    200.338      0.113    -0.101    -0.335
03      607.924     75.480    199.474     -0.018     0.291     0.522
11       39.969    343.018    199.660     -0.050     0.051     0.338
12      305.678    349.700    199.997      0.146     0.070     0.003
13      595.378    356.873    201.055      0.109     0.078    -1.047
21       33.409    596.085    200.152     -0.038     0.185    -0.151
22      298.942    613.281    200.358      0.153     0.037    -0.356
23      587.105    631.337    199.140     -0.215    -0.379     0.854
Mean                                      -0.0001   -0.0001    0.0001
MSE                                        0.141     0.206     0.559

The system performance can be improved by:

• Introducing more effective feature extraction methods; some authors have reported a precision of 0.01 pixel for target measurement [Trinder et al. (1995)];

• Using a higher resolution camera, such as a Kodak DCS 420;

• Establishing a more reliable camera calibration setup, with more control points and more images;

• Establishing a different control point configuration, avoiding alignment of the targets;

• Increasing the number of projected points and projection planes for the projector calibration.

7. Conclusion

A photogrammetric close-range system was tested, and both its accuracy and its geometric characterisation were presented. The main contribution of the proposed approach is the calibration method of the projector, which avoids complex modelling of systematic errors.

All computer software was written in the C language, including the image processing routines. The results obtained with the proposed structured light system using real data seem to be suitable for the proposed applications.

The results indicate that an accuracy of 0.5 mm in height determination and 0.2 mm in the XY plane can be reached in a typical application. Some questions associated with feature extraction must be studied further to improve the accuracy of the system.

Acknowledgements

This work was partially supported by CNPq (NationalResearch Council), FUNDUNESP and FAPESP.

8. References

ASP-American Society of Photogrammetry and RemoteSensing. Manual of Photogrammetry. 4th edition, 1980.

Dunn, S. M.; Keizer, R. L.; Yu, J. Measuring the Area and Volume of the Human Body with Structured Light. IEEE Transactions on Systems, Man and Cybernetics, (1989), 19/6, 1350-1364.

Frobin, W.; Hierholzer, E. Rasterstereography: A Photogrammetric Method for Measurement of Body Surfaces. Photog. Eng. and Remote Sensing, (1981), 47/12, 1717-1724.

Frobin, W.; Hierholzer, E. Video Rasterstereography: A Method for On-Line Measurement of Body Surfaces. Photog. Eng. and Remote Sensing, (1991), 57/10, 1341-1345.

Guisser, L.; Payrissat, R.; Castan, S. A New 3-D Measurement System Using Structured Light. IEEE, (1992), 784-786.

Mitchel, H. L. Applications of digital photogrammetry to medical investigations. ISPRS Journal of Photog. & Remote Sensing, (1995), 50/3, 27-36.

Moffit, F. H.; Mikhail, E. M. Photogrammetry. 3rd ed., Harper & Row, 1980, 648 p.

Tommaselli, A. M. G.; Shimabukuro, M. H.; Scalco, P. A. P.; Nogueira, F. M. A. Implementation of a Photogrammetric Range System. In: International Archives of Photog. and Remote Sensing, Vienna, 1996.

Schalkoff, R. J. Digital Image Processing and Computer Vision. Singapore: John Wiley & Sons, 1989, 489 p.

Shrikhande, N.; Stockman, G. Surface Orientation from a Projected Grid. IEEE Transactions on PAMI, (1989), 11/6, 650-655.

Singh, R.; Chapman, D. P.; Atkinson, K. B. Digital Photogrammetry for automatic measurement of textureless and featureless objects. Photogrammetric Record, (1997), 15/9, 691-702.

Trinder, J. C.; Jansa, J.; Huang, Y. An assessment of the precision and accuracy of methods of digital target location. ISPRS J. of Photog. & Remote Sensing, (1995), 50/2.