
    Cylindrical Rectification to Minimize Epipolar Distortion

Sébastien Roy*† Jean Meunier† Ingemar J. Cox*

* NEC Research Institute, 4 Independence Way, Princeton, NJ 08540, U.S.A.

† Université de Montréal, Département d'informatique et de recherche opérationnelle, C.P. 6128, Succ. Centre-Ville, Montréal, Québec, H3C 3J7

Abstract

We propose a new rectification method for aligning epipolar lines of a pair of stereo images taken under any camera geometry. It effectively remaps both images onto the surface of a cylinder instead of a plane, which is used in common rectification methods. For a large set of camera motions, remapping to a plane has the drawback of creating rectified images that are potentially infinitely large and presents a loss of pixel information along epipolar lines. In contrast, cylindrical rectification guarantees that the rectified images are bounded for all possible camera motions and minimizes the loss of pixel information along epipolar lines. The processes (e.g. stereo matching, etc.) subsequently applied to the rectified images are thus more accurate and general since they can accommodate any camera geometry.

1 Introduction

Rectification is a necessary step of stereoscopic analysis. The process extracts epipolar lines and realigns them horizontally into a new rectified image. This allows subsequent stereoscopic analysis algorithms to easily take advantage of the epipolar constraint and reduce the search space to one dimension, along the horizontal rows of the rectified images.

For different camera motions, the set of matching epipolar lines varies considerably, and extracting those lines for the purpose of depth estimation can be quite difficult. The difficulty does not reside in the equations themselves; for a given point, it is straightforward to locate the epipolar line containing that point. The problem is to find a set of epipolar lines that will cover the whole image and introduce a minimum of distortion, for arbitrary camera motions. Since subsequent stereo matching occurs along epipolar lines, it is important that no pixel information is lost along these lines in order to efficiently and accurately recover depth.

Fig. 1 depicts the rectification process. A scene S is observed by two cameras to create images I1 and I2. In order to align the epipolar lines of this stereo pair, some image transformation must be applied. The most common of such transformations, proposed by Ayache [1] and referred to as planar rectification, is a remapping of the original images onto a single plane that is parallel to the line joining the two cameras' optical centers (see Fig. 1, images P1 and P2). This is accomplished by applying a linear transformation in projective space to each image pixel.

The new rectification method presented in this paper, referred to as cylindrical rectification, proposes a transformation that remaps the images onto the surface of a cylinder whose principal axis goes through both cameras' optical centers (see Fig. 1, images C1 and C2). The actual images related to Fig. 1 are shown in Fig. 2.

1063-6919/97 $10.00 © 1997 IEEE

Figure 1: Rectification. Stereo images I1, I2 of scene S shown with planar rectification (P1, P2) and cylindrical rectification (C1, C2).

The line joining the optical centers of the cameras (see Fig. 1) defines the focus of expansion (foe). All epipolar lines intersect the focus of expansion. The rectification process applied to an epipolar line always makes that line parallel to the foe. This allows the creation of a rectified image where the epipolar lines do not intersect and can be placed as separate rows. Obviously, both plane and cylinder remapping satisfy the alignment requirement with the foe.

Planar rectification, while being simple and efficient, suffers from a major drawback: it fails for some camera motions, as demonstrated in Sec. 2. As the forward motion component becomes more significant, the image distortion induced by the transformation becomes progressively worse until the image is effectively unbounded. The image distortion induces a loss of pixel information that can only be partly compensated for by making the rectified image size larger¹. Consequently, this method is useful only for motions with a small forward component, thus lowering the risk of unbounded rectified images. One benefit of planar rectification is that it preserves straight lines, which is an important consideration if stereo matching is to be performed on edges or lines.

¹See Sec. 3.6 for a detailed discussion.

On the other hand, cylindrical rectification is guaranteed to provide a bounded rectified image and significantly reduces pixel distortion, for all possible camera motions. This transformation also preserves epipolar line length. For example, an epipolar line 100 pixels long will always be rectified to a line 100 pixels long. This ensures a minimal loss of pixel information when resampling the epipolar lines from the original images. However, arbitrary straight lines are no longer preserved, though this may only be a concern for edge-based stereo.

Planar rectification uses a single linear transformation matrix applied to the image, making it quite efficient. Cylindrical rectification uses one such linear transformation matrix for each epipolar line. In many cases, these matrices can be precomputed so that a similar level of performance can be achieved.

Although it is assumed throughout this paper that internal camera parameters are known, cylindrical rectification works as well with unknown internal parameters, as is the case when only the fundamental matrix (described in [2]) is available (see Sec. 3.5).

Many variants of the planar rectification scheme have been proposed [1, 3, 4]. A detailed description based on the essential matrix is given in [5]. In [6], a hardware implementation is proposed. In [7], the camera motion is restricted to a vergent stereo geometry to simplify computations. It also presents a faster way to compute the transformation by approximating it with a non-projective linear transformation. This eliminates the risk of unbounded images at the expense of potentially severe distortion. In [8], a measure of image distortion is introduced to evaluate the performance of the rectification method. This strictly geometric measure, based on edge orientations, does not address the problem of pixel information loss induced by interpolation (see Sec. 3.6).

Sec. 2 describes planar rectification in more detail. The cylindrical rectification method is then presented in Sec. 3. It describes the transformation matrix whose three components are explicitly detailed in Sec. 3.3, 3.2 and 3.1. Sec. 3.4 discusses the practical aspect of finding the set of corresponding epipolar lines to rectify in both images. It is demonstrated in Sec. 3.5 that it is possible to use uncalibrated as well as calibrated cameras. A measure of image distortion is introduced in Sec. 3.6 and used to show how both rectification methods behave for different camera geometries. Examples of rectification for different camera geometries are presented in Sec. 4.

2 Linear transformation in projective space

In this section we show how rectification methods based on a single linear transformation in projective space [1, 3, 4] fail for some camera geometries.

As stated earlier, the goal of rectification is to apply a transformation to an image in order to make the epipolar lines parallel to the focus of expansion. The result is a set of images where each row represents one epipolar line and can be used directly for the purpose of stereo matching (see Fig. 2).

Figure 2: Images from Fig. 1. Original images I1, I2 are shown with cylindrical rectification (C1, C2) and planar rectification (P1, P2).

In projective space, an image point is expressed as p = (p_x, p_y, h)^T where h is a scale factor. Thus we can assume these points are projected to p = (p_x, p_y, 1)^T.

The linear projective transformation F is used to transform an image point u into a new point v with the relation

v = F u    (1)

where

u = (u_x, u_y, u_h)^T,  u_h ≠ 0

and the reprojected image point is (v_x/h, v_y/h, 1)^T, with h the third component of v. The fact that u_h ≠ 0 simply implies that the original image has a finite size. Enforcing that the reprojected point is not at infinity implies that h must be non-zero, that is

h = u_x F_6 + u_y F_7 + u_h F_8 ≠ 0    (2)

Since u_x, u_y are arbitrary, Eq. 2 has only one possible solution, (F_6, F_7, F_8) = (0, 0, 1), since only u_h can guarantee h to be non-zero and F to be homogeneous. Therefore, the transformation F must have the form

F = \begin{bmatrix} F_0 & F_1 & F_2 \\ F_3 & F_4 & F_5 \\ 0 & 0 & 1 \end{bmatrix}

which corresponds to a camera displacement with no forward (or backward) component.
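The failure mode above is easy to check numerically. The sketch below is our own illustration (not code from the paper): it applies a hypothetical projective transformation whose third row is not (0, 0, 1) and shows finite image points blowing up as they approach the line where the denominator h of Eq. 2 vanishes.

```python
def apply_homography(F, u):
    """Apply a 3x3 projective transformation to u = (ux, uy, uh)."""
    x = sum(F[0][i] * u[i] for i in range(3))
    y = sum(F[1][i] * u[i] for i in range(3))
    h = sum(F[2][i] * u[i] for i in range(3))  # h = ux*F6 + uy*F7 + uh*F8
    return (x / h, y / h)

# Hypothetical F whose third row is (0.01, 0, 1): the image line
# ux = -100 maps to infinity, so points approaching it explode.
F = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.01, 0.0, 1.0]]

for ux in (-90.0, -99.0, -99.9):
    x, y = apply_homography(F, (ux, 50.0, 1.0))
    print(f"u = ({ux}, 50) -> ({x:.0f}, {y:.0f})")  # coordinates grow without bound
```

With the third row forced to (0, 0, 1), h equals u_h for every point and the remapped image stays bounded, which is exactly the restriction derived above.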

In practice, the rectified image is unbounded only when the foe is inside the image. Therefore, any camera motion with a large forward component (making the foe visible) cannot be rectified with this method. Moreover, as soon as the forward component is large enough, the image points are mapped so far apart that the rectification becomes unusable due to severe distortion.

Figure 3: The basic steps of the cylindrical rectification method. First (R_foe), an epipolar line is rotated in the epipolar plane until it is parallel to the foe. Second (T_foe), a change of coordinate system is applied. Third (S_foe), a projection onto the surface of the unit cylinder is applied.

In the next section, we describe how cylindrical rectification can alleviate these problems by making a different use of linear transformations in projective space.

3 Cylindrical rectification

The goal of cylindrical rectification is to apply a transformation to an original image to remap it onto the surface of a carefully selected cylinder instead of a plane. By using the line joining the cameras' optical centers as the cylinder axis (Fig. 1), all straight lines on the cylinder surface are necessarily parallel to the cylinder axis and focus of expansion, making them suitable to be used as epipolar lines.

The transformation from image to cylinder, illustrated in Fig. 3, is performed in three stages. First, a rotation is applied to a selected epipolar line (step R_foe). This rotation is in the epipolar plane and makes the epipolar line parallel to the foe. Then, a change of coordinate system is applied (step T_foe) to the rotated epipolar line, from the image system to the cylinder system (with the foe as principal axis). Finally (step S_foe), this line is normalized, or reprojected, onto the surface of a cylinder of unit radius. Since the line is already parallel to the cylinder, it is simply scaled along the direction perpendicular to the axis until it lies at unit distance from the axis. A particular epipolar line is referenced by its angle around the cylinder axis, while a particular pixel on the epipolar line is referenced by its angle and position along the cylinder axis (see Fig. 3).

Even if the surface of the cylinder is infinite, it can be shown that the image on that surface is always bounded. Since the transformation aligns an epipolar line with the axis of the cylinder, it is possible to remap a pixel to infinity only if its epipolar line is originally infinite. Since the original image is finite, all the visible parts of the epipolar lines are also of finite length and therefore the rectified image cannot extend to infinity.

The rectification process transforms an image point p_xyz into a new point q_foe, which is expressed in the coordinate system foe of the cylinder. The transformation matrix L_foe is defined so that the epipolar line containing p_xyz will become parallel to the cylinder axis, the foe. Since all possible epipolar lines will be parallel to the foe, they will also be parallel to one another and thus form the desired parallel aligned epipolar geometry.

We have the linear rectification relations between q_foe and p_xyz stated as

q_foe = L_foe p_xyz = (S_foe T_foe R_foe) p_xyz    (3)

and inversely

p_xyz = L_foe^{-1} q_foe = (R_foe^{-1} T_foe^{-1} S_foe^{-1}) q_foe    (4)

with the rotated point expressed back in the camera system (x, y, z) given by

q_xyz = (T_foe^{-1} S_foe^{-1}) q_foe    (5)

where

S_foe = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1/k & 0 \\ 0 & 0 & 1/k \end{bmatrix}

T_foe is the coordinate transformation given in Eq. 6, and the scalar k is determined in Sec. 3.3.

These relations are completely invertible (except for the special case p_xyz = foe, which is quite easily handled). The matrix R_foe represents the rotation of the image point in projective space. The matrix T_foe represents the change from the camera coordinate system to the cylinder system. The matrix S_foe represents the projective scaling used to project the rectified point onto the surface of the unit cylinder.

The next three subsections describe how to compute the coordinate transformation T_foe, the rotation R_foe and the scaling S_foe.

3.1 Determining the transformation T

The matrix T_foe is the coordinate transformation matrix from the system (x, y, z) to the system (foe, u, v), and is uniquely determined by the position and motion of the cameras (see Fig. 3).

Any camera has a position pos and a rotation of φ degrees around an axis axis, relative to the world coordinate system. A homogeneous world point p_w is expressed in the system of camera a (with pos_a, axis_a, and φ_a) as

p_a = R_aw p_w

where R_aw is the 4 × 4 homogeneous coordinate transformation matrix obtained as

R_aw = \begin{bmatrix} r_aw & -r_aw pos_a \\ 0 & 1 \end{bmatrix}

where

r_aw = rot(axis_a, -φ_a)

and rot(A, θ) is a 3 × 3 rotation matrix of angle θ around axis A. The corresponding matrix R_bw for camera b (with pos_b, axis_b, and φ_b) is defined in a similar way.

The direct coordinate transformation matrices for cameras a and b, such that

p_a = R_ab p_b,  p_b = R_ba p_a

are defined as

R_ab = \begin{bmatrix} r_ab & foe_a \\ 0 & 1 \end{bmatrix},  R_ba = \begin{bmatrix} r_ba & foe_b \\ 0 & 1 \end{bmatrix}

where

r_ab = r_aw r_bw^T,  r_ba = r_bw r_aw^T
foe_a = r_aw (pos_b - pos_a),  foe_b = r_bw (pos_a - pos_b)

from which we can derive the matrix T_foe;a for rectifying the image of camera a as

T_foe;a = \begin{bmatrix} n(foe_a)^T \\ n(z × foe_a)^T \\ n(foe_a × (z × foe_a))^T \end{bmatrix}    (6)

where n(v) = v/‖v‖ is a normalizing function. The corresponding matrix T_foe;b for rectifying the image of camera b can be derived similarly, or more simply by the relation

T_foe;b = -T_foe;a r_ab    (7)

For the case where foe_a = z, the last two rows of T_foe;a can be any two orthonormal vectors perpendicular to z.
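Eq. 6 can be coded directly as a sanity check. The sketch below is our own (not the authors' code): it builds T_foe;a from a hypothetical foe and verifies that its rows form an orthonormal basis whose first axis is the foe direction. The degenerate case foe ∥ z, where the last two rows must be chosen freely, is not handled.

```python
import math

def cross(a, b):
    """Cross product of 3-vectors."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def n(v):
    """Normalizing function n(v) = v / ||v||."""
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

def t_foe(foe, z=(0.0, 0.0, 1.0)):
    """Rows of T_foe;a as in Eq. 6 (undefined when foe is parallel to z)."""
    zxf = cross(z, foe)
    return (n(foe), n(zxf), n(cross(foe, zxf)))

# hypothetical foe for camera a
T = t_foe((1.0, 0.0, 0.5))
dot = lambda a, b: sum(x * y for x, y in zip(a, b))
# the rows are mutually orthogonal unit vectors
assert all(abs(dot(T[i], T[j]) - (1.0 if i == j else 0.0)) < 1e-12
           for i in range(3) for j in range(3))
```

Because the rows are orthonormal, T_foe^{-1} = T_foe^T, which is what makes the inverse relations of Eq. 4 cheap to evaluate.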

3.2 Determining the rotation R

The epipolar line containing a point p_xyz will be rotated around the origin (the camera's optical center) and along the epipolar plane until it becomes parallel to the foe. The epipolar plane containing p_xyz also contains the foe (by definition) and the origin. The normal to that plane is

axis = foe × p_xyz    (8)

and will be the axis of rotation (see Fig. 3), thus ensuring that p_xyz remains in the epipolar plane. In the case p_xyz = foe, the axis can be any vector normal to the foe vector.

The angle of rotation needed can be computed by using the fact that the normal z = (0, 0, 1)^T to the image plane has to be rotated until it is perpendicular to the foe. This is because the new epipolar line has to be parallel to the foe. The rotation angle is the angle between the normal z projected on the epipolar plane (perpendicular to the rotation axis) and the plane normal to the foe also containing the origin. By projecting the point p_xyz onto that plane, we can directly compute the angle. We have z', the normal z projected on the epipolar plane, defined as

z' = axis × (z × axis) = \begin{bmatrix} -axis_x axis_z \\ -axis_y axis_z \\ axis_x^2 + axis_y^2 \end{bmatrix}

and p', the projection of p_xyz onto the plane normal to the foe, defined as

p' = T_foe^{-1} B T_foe p_xyz    (9)

where T_foe was previously defined in Eq. 6 and B = diag(0, 1, 1) is defined in Sec. 3.3.

The rotation matrix R_foe rotates the vector z' onto the vector p' around the axis of Eq. 8 and is defined as

R_foe = rot_{p'z'}    (10)

where rot_ab rotates vector b onto vector a such that

rot_ab = \begin{bmatrix} n(a)^T \\ n(a × b)^T \\ n((a × b) × a)^T \end{bmatrix}^T \begin{bmatrix} n(b)^T \\ n(a × b)^T \\ n((a × b) × b)^T \end{bmatrix}

If the point q_foe is available instead of the point p_xyz (as would be the case for the inverse transformation of Eq. 4), we can still compute R_foe from Eq. 10 by substituting q_xyz for p_xyz in Eqs. 8 and 9, where q_xyz is derived from q_foe using Eq. 5. Notice that because p_xyz and q_xyz are in the same epipolar plane, the rotation axis will be the same. The angle of rotation will also be the same, since their projections onto the plane normal to the foe are the same (modulo a scale factor).
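The construction of rot_ab can likewise be checked numerically. The sketch below is our own reading of the construction in Eq. 10 (not the authors' code): the rotation is assembled from the two orthonormal triads sharing the axis a × b, and we verify that it maps the direction of b onto the direction of a.

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def n(v):
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

def rot_ab(a, b):
    """Rotation taking vector b onto vector a (undefined when a || b)."""
    c = cross(a, b)                      # shared rotation axis
    Ma = (n(a), n(c), n(cross(c, a)))    # rows: orthonormal triad around a
    Mb = (n(b), n(c), n(cross(c, b)))    # rows: orthonormal triad around b
    # rot_ab = Ma^T Mb
    return tuple(tuple(sum(Ma[k][i] * Mb[k][j] for k in range(3))
                       for j in range(3)) for i in range(3))

def matvec(M, v):
    return tuple(sum(M[i][j] * v[j] for j in range(3)) for i in range(3))

# rotating z = (0, 0, 1) onto the x axis
R = rot_ab((1.0, 0.0, 0.0), (0.0, 0.0, 1.0))
assert all(abs(x - y) < 1e-12
           for x, y in zip(matvec(R, (0.0, 0.0, 1.0)), (1.0, 0.0, 0.0)))
```

Since Ma maps a to (‖a‖, 0, 0) and Mb maps b to (‖b‖, 0, 0), the product Ma^T Mb carries b into the direction of a while leaving the shared axis a × b fixed, which is the behavior Sec. 3.2 requires.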

3.3 Determining the scaling S

The matrix S_foe is used to project the epipolar line from the unit image plane (i.e. located at z = 1) onto the cylinder of unit radius. To simplify notation in the following equations, we define

A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix},  B = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}

As shown in Eqs. 3 and 4, S_foe has one scalar parameter k. This parameter can be computed for a known point p_xyz (Eq. 3) by enforcing unit radius and solving the resulting equation

‖ B (S_foe T_foe R_foe p_xyz) ‖ = 1

which yields the solution

k = ‖ B T_foe R_foe p_xyz ‖    (11)
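The effect of the scaling can be verified in isolation. In the sketch below (our own check, with a hypothetical vector standing in for T_foe R_foe p_xyz), k is computed as in Eq. 11 and the scaled point is confirmed to lie at unit distance from the cylinder axis; the first coordinate, along the foe, is left untouched.

```python
import math

def norm(v):
    return math.sqrt(sum(c * c for c in v))

# hypothetical point in cylinder coordinates, i.e. T_foe R_foe p_xyz;
# the first coordinate is along the foe (cylinder) axis
v = (2.0, 0.9, 1.2)

k = norm((0.0, v[1], v[2]))        # k = ||B v|| with B = diag(0, 1, 1)
s = (v[0], v[1] / k, v[2] / k)     # apply S_foe = diag(1, 1/k, 1/k)

radius = norm((0.0, s[1], s[2]))   # distance from the cylinder axis
assert abs(radius - 1.0) < 1e-12   # the point lies on the unit cylinder
```

Because only the two transverse coordinates are divided by k, position along the epipolar line is preserved, which is the length-conservation property emphasized in Sec. 1.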

For the case of a known point q_foe (Eq. 4), enforcing that the epipolar lines all have their z coordinate equal to 1 gives the equation

(0, 0, 1) (R_foe^{-1} T_foe^{-1} S_foe^{-1} q_foe) = 1

which can be simplified to

(T_foe c_3) · (A q_foe) + k (T_foe c_3) · (B q_foe) = 1

where c_3 is the third column of the rotation matrix R_foe. The solution is then

k = (1 - (T_foe c_3) · (A q_foe)) / ((T_foe c_3) · (B q_foe))

It should be noted that the denominator can never be zero, because of Eq. 11 and the fact that T_foe c_3 can never be zero or orthogonal to B q_foe.

3.4 Common angle interval

In general, a rectified image does not span the whole cylinder. The common angle interval is the interval that yields all common epipolar lines between the two views. In order to control the number of epipolar lines extracted, it is important to determine this interval for each image.

Notice that the rectification process implicitly guarantees that a pair of corresponding epipolar lines have the same angle on their respective cylinders, and therefore the same row in the rectified images. The concern here is to determine the angle interval of epipolar lines effectively present in both images.

It can be shown that if a rectified image does not span the whole cylinder, then the extremum angles are given by two corners of the image. Based on this fact, it is sufficient to compute the angle of the four corners and of one point between each pair of adjacent corners. By observing the ordering of these angles, and taking into account the periodicity of angle measurements, it is possible to determine the angle interval for one image.

Given the angle intervals computed for each image separately, their intersection is the common angle interval sought. The subsequent stereo matching process has only to consider epipolar lines in that interval.

3.5 The case of uncalibrated cameras

Until now, it was always assumed that the cameras were calibrated, i.e. their internal parameters are known. The parameters are the principal point (optical axis), focal lengths and aspect ratio. More generally, we can represent all these parameters by a 3 × 3 upper triangular matrix. In this section, we assume that only the fundamental matrix is available. This matrix effectively hides the internal parameters with the camera motion (external parameters) in a single matrix.

The fundamental matrix F defines the epipolar relation between points p_a and p_b of the images as

p_b^T · F · p_a = 0    (12)

It is straightforward to extract the focus of expansion for each image by noticing that all points of one image must satisfy Eq. 12 when the point selected in the other image is its foe. More precisely, the relations for foe_a and foe_b are

p_b^T · F · foe_a = 0  ∀ p_b
foe_b^T · F · p_a = 0  ∀ p_a

which yield the homogeneous linear equation systems

F · foe_a = 0    (13)
F^T · foe_b = 0    (14)

which are easily solved. At this point, it remains to show how to derive the constituents of the matrix L_foe of Eq. 3 from the fundamental matrix F. These are the matrices S_foe, R_foe, and T_foe.

The transformation T_foe;a can be directly obtained from Eq. 6, using foe_a obtained in Eq. 13. Symmetrically (using Eq. 14) we obtain

T_foe;b = \begin{bmatrix} n(foe_b)^T \\ n(z × foe_b)^T \\ n(foe_b × (z × foe_b))^T \end{bmatrix}

The rotation matrix R_foe is computed from the foe (which is readily available from the fundamental matrix F) and the transform matrix T_foe, exactly as described in Sec. 3.2.

Since the scaling matrix S_foe is directly computed from the values of the rotation matrix R_foe and the transform T_foe, it is computed exactly as described in Sec. 3.3.

The rectification method is applicable regardless of the availability of the internal camera parameters. However, without these parameters, it is impossible to determine the minimum and maximum disparity interval, which is of great utility in any subsequent stereo matching. In this paper, all the results were obtained with known internal parameters.
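For Eqs. 13 and 14, the null space of a rank-2 fundamental matrix can be found without a general solver: the cross product of two independent rows of F is orthogonal to both, and, the third row being a combination of the two, it is orthogonal to all three. The sketch below is our own shortcut (with a toy rank-2 matrix standing in for a real F), not the authors' solution method.

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def foe_from_F(F):
    """A vector spanning the null space of a rank-2 3x3 matrix F."""
    for i, j in ((0, 1), (0, 2), (1, 2)):
        v = cross(F[i], F[j])
        if any(abs(c) > 1e-12 for c in v):
            return v
    raise ValueError("F has rank < 2")

# toy rank-2 matrix: the third row is the sum of the first two
F = [(1.0, 0.0, -2.0),
     (0.0, 1.0, -3.0),
     (1.0, 1.0, -5.0)]
foe_a = foe_from_F(F)
# F . foe_a = 0, as required by Eq. 13
assert all(abs(sum(F[r][c] * foe_a[c] for c in range(3))) < 1e-12
           for r in range(3))
```

Applying the same function to F^T gives foe_b, the solution of Eq. 14; with noisy estimates of F, a least-squares null vector (e.g. from a singular value decomposition) would be the more robust choice.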

3.6 Epipolar distortion and image size

The distortion induced by the rectification process, in conjunction with the resampling of the original image, can create a loss of pixel information, i.e. pixels in the original image are not accounted for and the information they carry is simply discarded during resampling. We measure this loss along epipolar lines, since it is along these lines that a subsequent stereo process will be carried out. To establish a measure of pixel information loss, we consider rectified epipolar line segments of a length of one pixel and compute the length L of the original line segment that is remapped to each. The loss for a given length L is the fraction of the original segment's pixel information that is discarded in the remapping.

Figure 4: Pixel loss as a function of camera translation T = (1, 0, z). Rectified image width is 365, 730 and 1095 pixels for an original width of 256 pixels.

A shrinking of original pixels (i.e. L > 1) creates pixel information loss, while a stretching (i.e. L < 1) simply reduces the density of the rectified image. For a whole image, the measure is the expected loss over all rectified epipolar lines, broken down into individual one-pixel segments.

The fundamental property of cylindrical rectification is the conservation of the length of epipolar lines. Since pixels do not stretch or shrink on these lines, no pixel information is lost during resampling, except for the unavoidable loss introduced by the interpolation process itself. For planar rectification, the length of epipolar lines is not preserved. This implies that some pixel loss will occur if the rectified image size is not large enough. In Fig. 4, three different rectified image widths (365, 730, 1095 pixels) were used with both methods, for a range of camera translations T = (1, 0, z) with a z component in the range z ∈ [0, 1]. Cylindrical rectification shows no loss for any camera motion and any rectified image width². However, planar rectification induces a pixel loss that depends on the camera geometry. To compensate for such a loss, the rectified images have to be enlarged, sometimes to the point where they become useless for subsequent stereo processing. For a z component equal to 1 (i.e. T = (1, 0, 1)), all pixels are lost, regardless of image size.
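The bound in footnote 2 can be reproduced directly: since cylindrical rectification preserves epipolar line length, a rectified row never needs more pixels than the longest epipolar segment the original image can contain, which is bounded by the image diagonal. A minimal arithmetic check (our own, not the authors' code):

```python
import math

def min_rectified_width(w, h):
    """Diagonal sqrt(w^2 + h^2): a rectified width sufficient for no pixel loss."""
    return math.sqrt(w * w + h * h)

# a 3-4-5 triangle as an exact case, then a 256 x 256 image
assert min_rectified_width(3.0, 4.0) == 5.0
print(min_rectified_width(256.0, 256.0))  # about 362, i.e. sqrt(2) * 256
```

This bound depends only on the original image size, never on the camera motion, in contrast with the motion-dependent enlargement planar rectification requires.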

4 Experiments and results

Some examples of rectification applied to different camera geometries are illustrated in this section. Fig. 5 presents an image plane and the rectification cylinder with the reprojected image, for a horizontal camera motion. In this case, the epipolar lines are already aligned. The rows represent different angles around the cylinder, from 0° to 360°. The image always appears twice, since every cylinder point is projected across the cylinder axis. The number of rows determines the number of epipolar lines that are extracted from the image.

²The minimum image width that guarantees no pixel loss is equal to √(w² + h²) for an original image of size (w, h).

Figure 5: Image cube rectified. Horizontal camera motion (foe = (1, 0, 0)). A row represents an individual epipolar line.

Fig. 6 depicts a camera geometry with forward motion. The original and rectified images are shown in Fig. 7 (planar rectification cannot be used in this case). Notice how the rectified displacement of the sphere and cone is purely horizontal, as expected.

Fig. 8 depicts a typical camera geometry, suitable for planar rectification, with rectified images shown in Fig. 9. While cylindrical rectification (images C1, C2 in Fig. 9) introduces little distortion, planar rectification (images P1, P2) significantly distorts the images, which are also larger to compensate for pixel information loss.

Examples where the foe is inside the image are obtained when the forward component of the motion is large enough with respect to the focal length (as in Fig. 7). It is important to note that planar rectification always yields an unbounded image (i.e. of infinite size) in these cases and thus cannot be applied.

The execution time for both methods is very similar. For many camera geometries, the slight advantage of planar rectification relating to the number of matrix computations is overcome by the extra burden of resampling larger rectified images to reduce pixel loss.

5 Conclusion

We presented a new method, called cylindrical rectification, for rectifying stereoscopic images under arbitrary camera geometry. It effectively remaps the images onto the surface of a unit cylinder whose axis goes through both cameras' optical centers. It applies a transformation in projective space to each image point. A single linear transformation is required per epipolar line to rectify. While it does not preserve arbitrary straight lines, it preserves epipolar line lengths, thus ensuring minimal loss of pixel information. As a consequence of allowing arbitrary camera motions, the rectified images are always bounded, with a size independent of camera motion.

The approach has been implemented and used successfully in the context of stereo matching [9], ego-motion estimation [10] and three-dimensional reconstruction, and has proved to provide added flexibility and accuracy at no significant cost in performance.

References

[1] N. Ayache and C. Hansen. Rectification of images for binocular and trinocular stereovision. In Proc. of Int. Conf. on Pattern Recognition, pages 11-16, Washington, D.C., 1988.

[2] Q.-T. Luong and O. D. Faugeras. The fundamental matrix: Theory, algorithms, and stability analysis. Int. J. Computer Vision, 17:43-75, 1996.

Figure 6: Forward motion. A sphere and cone are observed from two cameras displaced along their optical axes. The original images I1, I2 are remapped onto the cylinder as C1, C2.

Figure 7: Rectification of forward camera motion. The images I1, I2 are shown with their cylindrical rectification C1, C2. The rectified image displacements are all horizontal.

[3] S. B. Kang, J. A. Webb, C. L. Zitnick, and T. Kanade. An active multibaseline stereo system with real-time image acquisition. Technical Report CMU-CS-94-167, School of Computer Science, Carnegie Mellon University, 1994.

[4] O. Faugeras. Three-dimensional computer vision. MIT Press, Cambridge, 1993.

[5] R. Hartley and R. Gupta. Computing matched-epipolar projections. In Proc. of IEEE Conference on Computer Vision and Pattern Recognition, pages 549-555, New York, 1993.

[6] P. Courtney, N. A. Thacker, and C. R. Brown. A hardware architecture for image rectification and ground plane obstacle detection. In Proc. of Int. Conf. on Pattern Recognition, pages 23-26, The Hague, Netherlands, 1992.

[7] D. V. Papadimitriou and T. J. Dennis. Epipolar line estimation and rectification for stereo image pairs. IEEE Trans. Image Processing, 5(4):672-676, 1996.

[8] L. Robert, M. Buffa, and M. Hébert. Weakly-calibrated stereo perception for rover navigation. In Proc. 5th Int. Conference on Computer Vision, pages 46-51, Cambridge, 1995.

Figure 8: Camera geometry suitable for planar rectification. I1, I2 are the original images.

Figure 9: Rectified images. Cylindrical rectification (C1, C2) and planar rectification (P1, P2).