ViSP 2.6.1: Visual Servoing Platform

Computer vision algorithms

Lagadic project
http://www.irisa.fr/lagadic

October 27, 2011

François Chaumette
Éric Marchand
Fabien Spindler
Romain Tallonneau

Contents

1 Camera calibration
   1.1 Principle
   1.2 Visual servoing and calibration
   1.3 Multi-images calibration
   1.4 Calibration from points
       1.4.1 Camera model
       1.4.2 Deriving the interaction matrix
   1.5 Camera calibration in ViSP
2 Pose estimation
   2.1 3D model based tracker
       2.1.1 General
       2.1.2 3D Model
       2.1.3 Example
3 2D motion estimation
   3.1 Example of use: the vpPlanarObjectDetector class

Chapter 1

Camera calibration

1.1 Principle

The basic idea of our approach is to define the pose computation and the calibration problem as the dual problem of 2D visual servoing [2, 3]. In visual servoing, the goal is to move the camera in order to observe an object at a given position in the image. This is achieved by minimizing the error between a desired state of the image features p_d and the current state p.
If the vector of visual features is well chosen, there is only one final position of the camera that allows to achieve this minimization. We now explain why the calibration problem is very similar.

Let us define a virtual camera with intrinsic parameters ξ, located at a position such that the object frame is related to the camera frame by the homogeneous 4×4 matrix cMo. cMo defines the pose, whose parameters are called extrinsic parameters. The position of the object point oP in the camera frame is defined by cP = cMo oP, and its projection in the digitized image by:

    p = pr_ξ(cP) = pr_ξ(cMo oP)    (1.1)

where pr_ξ(.) is the projection model according to the intrinsic parameters ξ. The goal of the calibration is to minimize the error between the observed data, denoted p_d (usually the position of a set of features on a calibration grid), and the position p of the same features computed by back-projection according to the current extrinsic and intrinsic parameters (as defined in Equation 1.1). In order to ensure this minimization, we move the virtual camera (initially in c_i M_o) and modify the intrinsic camera parameters (initially ξ_i) using a visual servoing control law. When the minimization is achieved, the parameters of the virtual camera will be c_f M_o, that is the real pose, and ξ_f.

We will show in the next paragraphs how to perform this minimization using visual servoing, and the interest of considering this approach.

1.2 Visual servoing and calibration

The goal is to minimize the error p − p_d. We therefore define the error in the image, e, by the simple relation:

    e = p - p_d    (1.2)

The motion of the features in the image is related to the camera velocity v_c and to the time variation of the intrinsic parameters by:

    \dot{p} = \frac{\partial p}{\partial r} \frac{dr}{dt} + \frac{\partial p}{\partial \xi} \frac{d\xi}{dt}    (1.3)

that can be rewritten as:

    \dot{p} = H_p V    with    V = \begin{pmatrix} v_c \\ \dot{\xi} \end{pmatrix}    (1.4)

Matrix H_p is classically called the interaction matrix, or image Jacobian, in the visual servoing community. It is given by:

    H_p = \begin{pmatrix} \frac{\partial p}{\partial r} & \frac{\partial p}{\partial \xi} \end{pmatrix}    (1.5)

If we specify an exponential decoupled decrease of the error e, that is:

    \dot{e} = -\lambda e    (1.6)

where λ is a proportional coefficient that tunes the decay rate, we finally get:

    V = -\lambda H_p^{+} e    (1.7)

where H_p^{+} is the pseudo-inverse of matrix H_p (H^{+} = (H^T H)^{-1} H^T if H is a full-rank matrix).

Comments about the choice of p. Any kind of feature can be considered within this control law as soon as we are able to compute the image Jacobian H_p. In [2], a general framework to compute ∂p/∂r is proposed. On the other side, ∂p/∂ξ is seldom difficult to compute, as will be shown in the next section. This is one of the advantages of this approach with respect to other non-linear calibration approaches. Indeed, we are able to perform calibration from a large variety of primitives (points, lines, circles, etc.) within the same framework. Furthermore, considering various kinds of primitives within the same calibration step is also possible.

1.3 Multi-images calibration

The intrinsic parameters obtained using one image may be, in practice, very different from the parameters obtained with another image taken from another viewpoint, even if the same lens, the same camera, the same frame grabber and the same calibration grid are used. It is therefore important to consider a multi-image calibration process that integrates, within a unique minimization process, data from various images.

The underlying idea is to compute a unique set of intrinsic parameters that is correct for all the images (i.e., for all the camera positions). Puget and Skordas considered this idea in [4]. They computed the final intrinsic parameters as the mean value of the parameters computed for each image; the camera positions are then recomputed w.r.t. these new intrinsic parameters. Our approach is different. We consider a visual servoing scheme that computes the motion of n virtual cameras and the variation of the l intrinsic parameters, which have to be the same for all the images. For n images, we therefore have 6n + l unknown variables and
\sum_{i=1}^{n} m_i equations (where m_i is the number of features observed in the i-th image).

If p_i is the set of features extracted from the i-th image, the interaction matrix used in the calibration process is then given by the relation:

    \begin{pmatrix} \dot{p}_1 \\ \dot{p}_2 \\ \vdots \\ \dot{p}_n \end{pmatrix} = H \begin{pmatrix} v_{c_1} \\ v_{c_2} \\ \vdots \\ v_{c_n} \\ \dot{\xi} \end{pmatrix}    (1.8)

with

    H = \begin{pmatrix}
      \frac{\partial p_1}{\partial r} & 0 & \cdots & 0 & \frac{\partial p_1}{\partial \xi} \\
      0 & \frac{\partial p_2}{\partial r} & \cdots & 0 & \frac{\partial p_2}{\partial \xi} \\
      \vdots & & \ddots & & \vdots \\
      0 & \cdots & 0 & \frac{\partial p_n}{\partial r} & \frac{\partial p_n}{\partial \xi}
    \end{pmatrix}    (1.9)

Minimization is handled using the same methodology:

    \begin{pmatrix} v_{c_1} \\ v_{c_2} \\ \vdots \\ v_{c_n} \\ \dot{\xi} \end{pmatrix} = -\lambda H^{+} \begin{pmatrix} p_1 - p_{1d} \\ p_2 - p_{2d} \\ \vdots \\ p_n - p_{nd} \end{pmatrix}    (1.10)

1.4 Calibration from points

1.4.1 Camera model

In this section, we use the perspective camera model introduced in the "3D and vision related transformations" tutorial. Let us define by M = (X, Y, Z)^T the coordinates of a point in the camera frame. The coordinates of the perspective projection of this point in the image plane are given by m = (x, y)^T with:

    x = X/Z,    y = Y/Z    (1.11)

If we denote (u, v) the position of the corresponding pixel in the digitized image, this position is related to the coordinates (x, y) by:

    u = u_0 + p_x x + \delta_u
    v = v_0 + p_y y + \delta_v    (1.12)

where δ_u and δ_v are geometrical distortions introduced in the camera model. These distortions are due to imperfections in the lens design and assembly. δ_u and δ_v can be modeled as follows:

    \delta_u(x, y) = p_x x \, k_{ud} (x^2 + y^2)
    \delta_v(x, y) = p_y y \, k_{ud} (x^2 + y^2)    (1.13)

or:

    \delta_u(u, v) = -(u - u_0) \, k_{du} \left[ \left(\frac{u - u_0}{p_x}\right)^2 + \left(\frac{v - v_0}{p_y}\right)^2 \right]
    \delta_v(u, v) = -(v - v_0) \, k_{du} \left[ \left(\frac{u - u_0}{p_x}\right)^2 + \left(\frac{v - v_0}{p_y}\right)^2 \right]    (1.14)

The six parameters to be estimated are thus {p_x, p_y, u_0, v_0, k_ud, k_du}. To begin with, we will only consider the five parameters ξ_ud = {p_x, p_y, u_0, v_0, k_ud}.

1.4.2 Deriving the interaction matrix

We have to compute the interaction matrix H_p that links the motion ṗ = (u̇, v̇) of a point p = (u, v) in the image to (v_c, ξ̇_ud). For one point, this Jacobian is given by:

    H_p = \begin{pmatrix} \frac{\partial p}{\partial r} & \frac{\partial p}{\partial \xi_{ud}} \end{pmatrix}    (1.15)

where ∂p/∂r is a 2×6 matrix and ∂p/∂ξ_ud is a 2×5 matrix. Considering a calibration with n points, the full image Jacobian is given by the 2n×11 matrix:

    H = \begin{pmatrix} H_{p_1} \\ H_{p_2} \\ \vdots \\ H_{p_n} \end{pmatrix}    (1.16)

One of the interests of this approach is that it builds directly on the classical background of visual servoing. The image Jacobian ∂p/∂r that relates the motion of a point in the image to the camera motion is quite classical [2, 3] and is given by:

    \frac{\partial p}{\partial r} = \begin{pmatrix} p_x & 0 \\ 0 & p_y \end{pmatrix} \begin{pmatrix} 1 + k_{ud}(3x^2 + y^2) & 2 k_{ud} x y \\ 2 k_{ud} x y & 1 + k_{ud}(x^2 + 3y^2) \end{pmatrix} L_p    (1.17)

with

    L_p = \begin{pmatrix} -\frac{1}{Z} & 0 & \frac{x}{Z} & xy & -(1 + x^2) & y \\ 0 & -\frac{1}{Z} & \frac{y}{Z} & 1 + y^2 & -xy & -x \end{pmatrix}    (1.18)

Furthermore, from (1.12), differentiating u and v with respect to ξ_ud leads very easily to:

    \frac{\partial p}{\partial \xi_{ud}} = \begin{pmatrix} x \left(1 + k_{ud}(x^2 + y^2)\right) & 0 & 1 & 0 & p_x x (x^2 + y^2) \\ 0 & y \left(1 + k_{ud}(x^2 + y^2)\right) & 0 & 1 & p_y y (x^2 + y^2) \end{pmatrix}    (1.19)

We are now able to write the relation that links the motion of a point in the image to the camera motion and to the variation of the intrinsic camera parameters.

Let us finally note that, if we do not want to consider distortion within the camera model, this equation can be simplified: we replace ∂p/∂ξ_ud by:

    \frac{\partial p}{\partial \xi} = \begin{pmatrix} x & 0 & 1 & 0 \\ 0 & y & 0 & 1 \end{pmatrix}    (1.20)

with ξ = (p_x, p_y, u_0, v_0).

Considering the model with the five parameters ξ_du = (p_x, p_y, u_0, v_0, k_du), the expression of the distortion is given by Equation (1.14). According to these equations, the new interaction matrix is easily obtained from:

    \frac{\partial p}{\partial r} = \begin{pmatrix} p_x & 0 \\ 0 & p_y \end{pmatrix} L_p    (1.21)

with

    L_p = \begin{pmatrix} -\frac{1}{Z} & 0 & \frac{x}{Z} & xy & -(1 + x^2) & y \\ 0 & -\frac{1}{Z} & \frac{y}{Z} & 1 + y^2 & -xy & -x \end{pmatrix}    (1.22)

and:

    \frac{\partial u}{\partial u_0} = 1 + k_{du} \left[ 3\left(\frac{u - u_0}{p_x}\right)^2 + \left(\frac{v - v_0}{p_y}\right)^2 \right]
    \frac{\partial u}{\partial v_0} = 2 k_{du} \frac{(u - u_0)(v - v_0)}{p_y^2}
    \frac{\partial u}{\partial p_x} = x + 2 k_{du} \left(\frac{u - u_0}{p_x}\right)^3
    \frac{\partial u}{\partial p_y} = 2 k_{du} \frac{(u - u_0)(v - v_0)^2}{p_y^3}
    \frac{\partial u}{\partial k_{du}} = -(u - u_0) \left[ \left(\frac{u - u_0}{p_x}\right)^2 + \left(\frac{v - v_0}{p_y}\right)^2 \right]
    \frac{\partial v}{\partial u_0} = 2 k_{du} \frac{(u - u_0)(v - v_0)}{p_x^2}
    \frac{\partial v}{\partial v_0} = 1 + k_{du} \left[ \left(\frac{u - u_0}{p_x}\right)^2 + 3\left(\frac{v - v_0}{p_y}\right)^2 \right]
    \frac{\partial v}{\partial p_x} = 2 k_{du} \frac{(u - u_0)^2 (v - v_0)}{p_x^3}
    \frac{\partial v}{\partial p_y} = y + 2 k_{du} \left(\frac{v - v_0}{p_y}\right)^3
    \frac{\partial v}{\partial k_{du}} = -(v - v_0) \left[ \left(\frac{u - u_0}{p_x}\right)^2 + \left(\frac{v - v_0}{p_y}\right)^2 \right]    (1.23)

1.5 Camera calibration in ViSP

Camera calibration tools are available in the vpCalibration class.
Each instance of this class contains all the information required to calibrate one image:

- a list of 3D points and their corresponding 2D coordinates in the image;
- the current camera parameters for the different models that are estimated;
- member functions to compute the camera calibration;
- static member functions to compute multi-view camera calibration.

Here is an example of use of this class for a single-image calibration:

    vpCalibration calibration;

    // Add a set of 3D points (in meters) and their corresponding 2D coordinates
    // in the image (in pixels) to the calibration structure (X, Y, Z, u, v)
    calibration.addPoint(0,   0,   0,   vpImagePoint(55.2, 64.3));
    calibration.addPoint(0.1, 0,   0,   vpImagePoint(245.1, 72.5));
    calibration.addPoint(0.1, 0.1, 0,   vpImagePoint(237.2, 301.6));
    calibration.addPoint(0,   0.1, 0.1, vpImagePoint(34.4, 321.8));
    // ... (add more points here)

    vpCameraParameters cam;  // Camera parameters to estimate
    vpHomogeneousMatrix cMo; // Resulting pose of the object in the camera frame

    // Compute the calibration with the desired method: here an initialisation
    // with the Lagrange method is done, and then the virtual visual servoing
    // method is used to finalize the camera calibration for the perspective
    // projection model with distortion.
    calibration.computeCalibration(vpCalibration::CALIB_LAGRANGE_VIRTUAL_VS_DIST, cMo, cam);

    // Print the estimated camera parameters
    std::cout << cam << std::endl;

Chapter 3

2D motion estimation

3.1 Example of use: the vpPlanarObjectDetector class

    class vpPlanarObjectDetector
    {
      // ...
      std::vector<vpImagePoint> refImagePoints;
      //! Minimal number of inliers to consider the homography correct.
      unsigned int minNbMatching;

      vpPlanarObjectDetector();
      vpPlanarObjectDetector(const std::string &dataFile, const std::string &objectName);

      unsigned int buildReference(const vpImage<unsigned char> &I);
      unsigned int buildReference(const vpImage<unsigned char> &I, vpImagePoint &iP,
                                  unsigned int height, unsigned int width);
      unsigned int buildReference(const vpImage<unsigned char> &I, const vpRect rectangle);

      bool matchPoint(const vpImage<unsigned char> &I);
      bool matchPoint(const vpImage<unsigned char> &I, vpImagePoint &iP,
                      const unsigned int height, const unsigned int width);
      bool matchPoint(const vpImage<unsigned char> &I, const vpRect rectangle);

      void recordDetector(const std::string &objectName, const std::string &dataFile);
      void load(const std::string &dataFilename, const std::string &objName);

      void display(vpImage<unsigned char> &I, bool displayKpts = false);
      void display(vpImage<unsigned char> &Iref, vpImage<unsigned char> &Icurrent,
                   bool displayKpts = false);

      std::vector<vpImagePoint> getDetectedCorners() const;
      void getHomography(vpHomography &_H) const;

      void getReferencePoint(const unsigned int _i, vpImagePoint &_imPoint);
      void getMatchedPoints(const unsigned int _index, vpImagePoint &_referencePoint,
                            vpImagePoint &_currentPoint);

      void setMinNbPointValidation(const unsigned int _min);
      unsigned int getMinNbPointValidation() const;
    };

To consider the homography correct, this class requires, by default, a minimum of 10 pairs of inliers (pairs of points coherent with the final homography). It is possible to modify this value using the method setMinNbPointValidation(...). The homography, as well as the pairs of points used to compute it, can be obtained using the methods getHomography(...), getReferencePoint(...)
and getMatchedPoints(...). A complete example of this class is available in ViSP (example/key-point/planarObjectDetector.cpp).

Bibliography

[1] A.I. Comport, E. Marchand, and F. Chaumette. Robust model-based tracking for robot vision. In IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, IROS'04, volume 1, pages 692-697, Sendai, Japan, September 2004.

[2] B. Espiau, F. Chaumette, and P. Rives. A new approach to visual servoing in robotics. IEEE Trans. on Robotics and Automation, 8(3):313-326, June 1992.

[3] S. Hutchinson, G. Hager, and P. Corke. A tutorial on visual servo control. IEEE Trans. on Robotics and Automation, 12(5):651-670, October 1996.

[4] P. Puget and T. Skordas. An optimal solution for mobile camera calibration. In European Conf. on Computer Vision, ECCV'90, volume 427 of Lecture Notes in Computer Science, pages 187-198, Antibes, France, April 1990.