
Relative Measurement for Kinematic Calibration Using Digital Image Processing

A.A. Fratpietro, M.J.D. Hayes
Department of Mechanical & Aerospace Engineering, Carleton University,
1125 Colonel By Drive, Ottawa, Ontario, K1S 5B6, Canada
[email protected], [email protected]

An algorithm is developed for the purpose of extracting metric information from a Thermo CRS A465 manipulator using a camera-based data acquisition system and a measurement artifact. The algorithm uses common image processing techniques to extract the location of a point in the X-Y plane of the robot workspace. This measurement point will be used in the future to calibrate the Thermo CRS manipulator. The process used to calibrate the algorithm and a validation of its resolution are also presented. A characterization of the manipulator repeatability using this system is performed and the results are presented in this paper.

1 Introduction

The kinematic calibration [1] of a serial manipulator with six revolute axes generally requires a system for measuring the error in the position and orientation (pose) of the robot end-effector (EE). This error refers to the difference between the pose of the robot EE as determined by some external measurement system and the pose of the robot EE as determined by the robot joint encoders. By implementing a kinematic model based on the Denavit-Hartenberg parameters, robot kinematic calibration can be achieved requiring only the external measurement of relative positional error in the robot EE. The measurement of this positional error in three directions is obtained through the implementation of a CCD camera and a laser distance sensor mounted to the robot EE. The distance sensor is oriented parallel to the camera optical axis.

The CCD camera captures images of a precision ruled surface that must be digitally processed in order to extract accurate metric information. The acquisition of these measurements requires that the precise location, orientation, and representative equations of all objects in the image be determined in the image pixel coordinate system. The scale of an image is determined by estimating the pixel distance between adjacent lines in any single image. A comparison of images taken before and after a linear robot motion parallel to the ruled surface produces the relative positional error resulting from that motion. It is these positional errors that are used to calibrate the kinematic model of the robot.

In this paper we develop the algorithm for extracting accurate metric information from images of a measurement artifact. This algorithm will be applicable to the calibration system outlined above. There is a multitude of techniques [2] in the realm of image processing that might be used for this purpose. The core image processing algorithm that we employ consists of noise compensation, thresholding, edge detection, edge scanning, and linear regression. A comparison is presented between the measurement accuracies achieved using several different methods of noise compensation and several different parameter values within the edge-scanning algorithm. The results of this comparison are used to determine the best algorithm for this calibration system. A characterization of the repeatability of the Thermo CRS A465 manipulator is also presented.

2 Equipment

The Thermo CRS A465 robotic manipulator is a robot arm consisting of six serially connected revolute joints with a total arm span of 710 mm. This robot is rated as having a repeatability of ±0.05 mm. The data presented in this paper are produced using the A465.

The measurement head consists of a camera, lens, mounting structure, and the appropriate power and data cabling.


Figure 1: A465 robot and data acquisition system

The combined mass of these devices does not exceed the 2.0 kg payload of the robot.

The camera is a Pulnix TM-200, with horizontal and vertical resolutions of 768 and 494 pixels, respectively, and a minimum signal-to-noise ratio of 50 dB. Mounted to the camera through a C-mount interface is a Rodenstock Macro 1X lens, which has a field of view of 6.4 × 4.8 mm at a working distance of 68 mm.

The camera and lens are mounted to the robot using a stainless steel measurement head designed and manufactured at Carleton University. The complete camera/lens/head system integrated with the robot end-effector can be viewed in Figure 1. Images produced by the camera are captured using a National Instruments PCI-1409 image acquisition card integrated into a personal computer. The program IMAQ is used for the simple capture and saving of images.

All measurements are performed through the extraction, processing, and comparison of images depicting a measurement artifact. The artifact in question is a thin block of aluminum with a pattern precision-machined onto its surface. The pattern consists of a set of intersecting perpendicular grooves with a depth of 2 mm and a line width of 0.794 mm (1/16"). The accuracy of these dimensions has an upper limit of +0.000" and a lower limit of −0.002". The lengths of the grooves are approximately 2 cm each, much larger than the field of view of the camera and lens. The grooves are partially filled with black enamel paint to provide sharp contrast between the grooves and the surface of the artifact. All processing of images and data is performed in the MATLAB environment using custom-developed high-level image processing software and pre-packaged MATLAB library functions.

The calibration and validation of results are performed using the components described above, excluding the A465 robotic manipulator and including the Mitutoyo Vernier X-Y table, which has a 0–2" range and a resolution of 0.0002".

3 Image Processing

The A465 robotic manipulator is programmed to manipulate the measurement head into a pose such that an image of the measurement artifact can be captured. A typical pose can be viewed in Figure 1. This image indicates the initial position that the robot will attempt to return to throughout the repeatability test. After the image is captured, the robot moves through a single motion or series of motions and then attempts to return to the initial position. Another image of the artifact is captured. This new image indicates the projection, in the plane of the camera, of the positional error that the manipulator has produced while attempting to return to the original pose. This process is repeated multiple times to produce sets of images. These raw images captured by the data acquisition system contain the information that is used to characterize the repeatability of the robot. Digital image processing is required to extract the metric information from these sets of images.

3.1 Image Characteristics

The selection of an image processing algorithm is based on the type of application in which it will be used. In this paper, the algorithm is required to extract precise metric information from an image captured by the data acquisition system. Figure 2 shows a typical image captured by the data acquisition system in question. This image clearly displays the measurement artifact, including surface irregularities on the aluminum and reflected light from the black enamel paint located in the groove of the artifact. The position of the robot during image acquisition is indicated by the crossing of the exact centrelines of the horizontal and vertical lines located on the artifact. The image processing algorithm is required to extract the location of this point from each image.

The images in question are 8-bit monochrome images. Each pixel contains a value in the range 0 to 255, where 0 represents black and 255 represents white. The values between 0 and 255 are shades of gray.

The size of an image is 640(H) × 480(V) pixels. Since the field of view of the lens is 6.4(H) × 4.8(V) mm, each pixel represents a field of approximately 10(H) × 10(V) µm. Note that the pixel length is five times smaller than the rated ±0.05 mm repeatability of the robot.


Figure 2: Raw image captured for processing

In order to determine the location of the artifact centrepoint, both lines can be described in terms of their two binding edges. The vertical line, when described from the left to the right side of the image, is bound by a light-to-dark edge and, subsequently, a dark-to-light edge. Since these two edges were machined to be completely linear, the first-order equations of each of these edges can be extracted and then combined to produce the equation of the line directly between them. The same procedure can then be performed on the horizontal line, resulting in the first-order equation of its centreline. Solving two intersecting first-order equations yields exactly one solution; in the case of the crossing lines in question, this solution is the point of intersection.
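As a brief illustration of this step, the sketch below (in MATLAB, the environment used for all processing in this paper) intersects a hypothetical pair of centrelines; the slope and intercept values are placeholders, not data from the system.

% Hypothetical worked example: the horizontal centreline in
% slope/y-intercept form and the vertical centreline in
% slope/x-intercept form, as produced later by the segmentation.
m = 0.010;  b = 240;     % horizontal centreline: y = m*x + b
a = -0.020; c = 320;     % vertical centreline:   x = a*y + c
% Substitute x = a*y + c into y = m*x + b and solve for y:
y = (m*c + b) / (1 - m*a);
x = a*y + c;
fprintf('Centrepoint at (%.2f, %.2f) pixels\n', x, y);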

Figure 3: Image with four edges labelled

The processing of the images consists of extracting the first-order equations of the four edges located in each image. These four edges are labelled in Figure 3. The algorithm used to extract this information is described in the following section.

3.2 Algorithm Selection

The first step in the processing of an image is obtaining the image data. For the 8-bit monochrome images used in this paper, the data are represented by a 640 × 480 element matrix, where each element represents the intensity observed by the corresponding pixel in the camera.

3.2.1 Noise Filtering

The image data contain at least two types of noise, of varying effects, that are inherent in digital CCD cameras. Salt-and-pepper noise is the random inclusion of peaks and valleys across the image data. These peaks and valleys appear as a distribution of lighter and darker pixels that can skew calculations involving pixel intensities. Gaussian noise affects the intensities of pixels in a manner proportional to the Gaussian distribution. There are several different means of dealing with noisy data. This paper explores the results of applying two different convolution masks to the data.

A convolution mask is an n × m window that is centered on each element in the image data matrix. The elements in the mask can be weighted, as in the Gaussian filter, or unweighted, as in the mean filter. The mask elements are combined with the corresponding elements in the image data matrix to produce a weighted or unweighted sum of the image data, which becomes the value of the filtered pixel at that location in the matrix.

The mean filter implemented in this paper is a 3 × 3 matrix with all elements equal to one,

\begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{bmatrix} \qquad (1)

This mask sets the value of each element in the data matrix equal to the mean value of its surrounding pixels. The benefit of implementing this filter is that local peaks and valleys in pixel intensity caused by noise are reduced. The possible drawback is the smoothing of edges, which reduces the contrast used to position them.

The Gaussian filter implemented in this paper is a 7 × 7 Gaussian mask with elements chosen from the two-dimensional zero-mean Gaussian distribution,

g[i, j] = c \, e^{-\frac{i^2 + j^2}{2\sigma^2}} \qquad (2)

A typical convolution mask described by this distri-bution is used in the development of this algorithm.


\begin{bmatrix}
1 & 1 & 2 & 2 & 2 & 1 & 1 \\
1 & 2 & 2 & 4 & 2 & 2 & 1 \\
2 & 2 & 4 & 8 & 4 & 2 & 2 \\
2 & 4 & 8 & 16 & 8 & 4 & 2 \\
2 & 2 & 4 & 8 & 4 & 2 & 2 \\
1 & 2 & 2 & 4 & 2 & 2 & 1 \\
1 & 1 & 2 & 2 & 2 & 1 & 1
\end{bmatrix} \qquad (3)

The benefits of this filter are similar to those of the mean filter, but the Gaussian filter is suited to filtering out noise with a Gaussian distribution. The partial processing of one image using both mean and Gaussian filtering can be viewed in Figure 4.
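Both filters reduce to a single two-dimensional convolution. The following MATLAB sketch assumes the raw image has been loaded into a matrix img; the filename and variable names are our own illustration, not the paper's code.

% Minimal sketch of the two noise filters.
img = double(imread('artifact.tif'));     % hypothetical filename

% Mean filter: 3x3 mask of ones (Equation 1), normalized so each
% output pixel is the average of its 3x3 neighbourhood.
meanMask = ones(3) / 9;
imgMean  = conv2(img, meanMask, 'same');

% Gaussian filter: 7x7 integer-weighted mask (Equation 3),
% normalized by the sum of its elements.
gaussMask = [1 1 2 2 2 1 1; 1 2 2 4 2 2 1; 2 2 4 8 4 2 2; ...
             2 4 8 16 8 4 2; 2 2 4 8 4 2 2; 1 2 2 4 2 2 1; ...
             1 1 2 2 2 1 1];
imgGauss = conv2(img, gaussMask / sum(gaussMask(:)), 'same');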

Figure 4: Comparison of raw image, mean-filtered image, and Gaussian-filtered image

3.2.2 Edge Detection

After filtering, most of the noise-related spikes in pixel intensity have been removed from the image data. The next step of the processing is the accentuation of possible edge pixels through the use of an edge detection operator. An edge detection operator is another type of convolution mask, with its elements weighted such that the weighted sum of the elements in the data matrix produces larger intensities in the proximity of sharp transitions from one pixel intensity to another. The elements of these convolution masks can be weighted so as to accentuate pixels containing horizontal or vertical edges with transitions from high to low or low to high intensity. The edge detection mask implemented in this algorithm is known as the Prewitt operator,

\begin{bmatrix} -1 & -1 & -1 \\ 0 & 0 & 0 \\ 1 & 1 & 1 \end{bmatrix} \qquad (4)

Figure 5: Basic result of edge-detection

This operator is a 3 × 3 element convolution mask that uses pixels in the two columns adjacent to the pixel under investigation, along the length of the edge being detected, to determine whether or not an edge is present. Figure 5 shows a graphic description of edge detection by such an operator. As the Prewitt operator passes over pixels of equal intensity, the resulting pixels are nearly unchanged. When the operator passes over a sharp transition from high to low or low to high intensity, the resulting pixel is either attenuated or accentuated, depending on the bias of the operator being used. Four distinctly weighted Prewitt operators are required to distinguish between the two possible horizontal transitions and the two possible vertical transitions. The mask shown in Equation 4 is used to reveal horizontal edges that transition from low-intensity pixels located above the edge to high-intensity pixels below the edge. The three-dimensional inverted view of the filtered image is shown in Figure 6. This figure illustrates how the transition in the X-direction from pixels of low intensity to pixels of high intensity is accentuated using the Prewitt operator in Equation 4. The three other Prewitt operators have a similar effect on the other three edges. In this algorithm, each of the four edges is processed separately.

One possible benefit of using the Prewitt edge detector is that this operator also acts as a partial noise filter by including a 3 × 3 pixel area in all calculations. Using the pixels located in adjacent columns reduces the effect of noise on the calculation, which in turn reduces the need for a separate noise filter.
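The other three operators follow from the mask in Equation 4 by negation and transposition; a sketch, with variable names that are our own:

% The four Prewitt operators for the four edge transitions.
prewittDown  = [-1 -1 -1; 0 0 0; 1 1 1];  % dark above, light below
prewittUp    = -prewittDown;              % light above, dark below
prewittRight = prewittDown';              % dark left,  light right
prewittLeft  = -prewittRight;             % light left, dark right

% Each of the four edges is processed separately, e.g.:
edgeMap = conv2(imgGauss, prewittDown, 'same');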

3.2.3 Applying a Threshold

After the edge-detection operator is applied, it is possible to eliminate many of the pixels in the image as possible edge locations by applying a threshold limit. The elements in the data matrix with larger values are more likely to belong to the edge being characterized; the elements with lower values are not located on the visible edge. The application of a threshold limit ensures that only the most likely pixels are considered as possible edge locations.


Figure 6: Results of Prewitt edge detection

After experimentation, it was determined that a threshold of approximately 30 percent of the maximum pixel intensity retains the information required for the line-scanning algorithm described in the following section.
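A sketch of this step, continuing from the Prewitt output above (the 30 percent figure is the paper's; the variable names are ours):

% Discard pixels below 30 percent of the maximum response.
thresh = 0.30 * max(edgeMap(:));
edgeMap(edgeMap < thresh) = 0;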

3.2.4 Segmentation

The line-segmentation algorithm makes use of both the processed and unprocessed image data in determining the most likely location of the artifact edge currently being processed. The result of this segmentation is a slope/y-intercept form of the equation that best describes the edge. Several considerations must be taken into account in order to achieve this result.

The processed data at this point in the algorithm is a matrix whose elements have larger intensity values at coordinates where a certain transition in a certain direction (an edge) is located. Following the example in Figure 6, the transition in question occurs from low to high intensity (dark to light) in the vertical direction (from the top to the bottom of the image); the edge under segmentation is primarily horizontal. The pixels along this edge must be extracted so that they can be used in determining the location of the edge. The algorithm extracts a maximum of one pixel for each column along the edge's length. This results in a maximum of 640 pixels used in the computation of any horizontal edge and 480 pixels used in the computation of any vertical edge. These values represent the maximum possible elements used for each edge computation, but in practice there is a certain class of pixel that is excluded from any line computation. These pixels are located in the neighborhood of the intersection between the edge currently being segmented and any perpendicular edge. At these locations, the measurement artifact displays a gradual round between the two perpendicular edges, as illustrated in Figure 7. Any pixels located on this transition are not located along the linear edge being segmented, and special consideration is given to their removal.

Figure 7: Round between horizontal and vertical edges

The segmentation of the edge is performed as a series of scans across the width of the artifact edge with a window referred to as the TestBox. This window starts from the upper-left corner of the processed image and travels downwards in search of pixels belonging to the edge currently being segmented. When the window detects a pixel belonging to the edge, the location of this pixel is recorded, the window shifts one pixel to the right and several pixels upwards, and the scanning continues downwards in the image until another pixel is recorded. The scanning stops when, column by column, the entire length of the edge has been scanned.

Figure 8: TestBox used in segmenting edges

The window, illustrated in Figure 8, is used to extract pixels that meet a predefined set of requirements.


As the window scans the processed image, point C is centered over the pixel under investigation. There are several requirements that must be met in order for this pixel to be considered part of the edge. First, the pixel must have an intensity greater than zero; all pixels with intensity levels below the threshold limit explained in the previous section are not considered to belong to the edge, and this requirement ensures that they are excluded. Second, the intensity at pixel C must be the maximum intensity of all the pixels in the 1 × m column referred to in Figure 8 as column A. This requirement ensures that pixel C is the most likely pixel in column A to belong to the edge. The length of column A is a variable defined as,

m = (2 × TESTHEIGHT) + 1    (5)

where TESTHEIGHT is the number of pixels above or below the pixel under consideration that are used in determining its edge-worthiness. An image containing lines located closer together might use a smaller TESTHEIGHT value to distinguish between adjacent lines. The third requirement is that the two 1 × m matrices referred to in Figure 8 as columns E and F must both contain at least one non-zero element. These two columns, spaced a distance defined by,

n = (2 × TESTWIDTH) + 1    (6)

pixels apart, are used to exclude the pixel at C if it is located close to any edges perpendicular to the edge being segmented. Figure 9 illustrates that, after the Prewitt line detection and threshold are applied, the transitions that do not occur in the direction being detected by the line detector are no longer visible in the image data. The rounded transitions between perpendicular edges are present, but by scanning a distance TESTWIDTH ahead of and behind the pixel under consideration and searching for the region where all vertical edges have disappeared, pixels located on the rounded transitions can be excluded. With these three requirements met, the pixel located at C is added to a data matrix containing all pixels considered part of the edge.

A parameter referred to as SKIP is used to select the spacing of the pixels along the length of an edge that are used in determining its location in the image. For example, this parameter can be set to a value of 10, allowing only every tenth pixel along the length of the edge to be used in subsequent calculations. A SKIP value of 10 provides a faster result, since fewer calculations are performed by the computer, but the use of more pixels in calculating the edge location is expected to provide a more accurate result.
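The sketch below is our reading of the scan and its three requirements, not the authors' code; the parameter values shown are placeholders.

% TestBox scan for one horizontal edge (illustrative only).
TESTHEIGHT = 5; TESTWIDTH = 5; SKIP = 1;
[rows, cols] = size(edgeMap);
edgePix = zeros(0, 2);                 % accumulates [row, col] hits
for c = 1:SKIP:cols                    % scan column by column
    for r = (1 + TESTHEIGHT):(rows - TESTHEIGHT)
        colA = edgeMap(r-TESTHEIGHT:r+TESTHEIGHT, c);
        % Requirements 1 and 2: C is non-zero (above threshold)
        % and is the maximum of column A.
        if edgeMap(r, c) > 0 && edgeMap(r, c) == max(colA)
            % Requirement 3: columns E and F, TESTWIDTH pixels
            % behind and ahead, must each contain a non-zero
            % element; this rejects pixels near rounded corners.
            cE = max(c - TESTWIDTH, 1); cF = min(c + TESTWIDTH, cols);
            colE = edgeMap(r-TESTHEIGHT:r+TESTHEIGHT, cE);
            colF = edgeMap(r-TESTHEIGHT:r+TESTHEIGHT, cF);
            if any(colE > 0) && any(colF > 0)
                edgePix(end+1, :) = [r, c];
            end
            break                      % at most one pixel per column
        end
    end
end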

Figure 9: Off-edge pixels near vertices

After the segmentation of one edge is complete, an array of data is created containing the i and j coordinates of all pixels in the image that are considered to belong to the edge. Since each pixel represents an area of 10 × 10 µm, using these pixels in the linear regression computation of the edge results in a possible 10 µm error in both the i and j coordinates. This error can be reduced through the use of a moment calculation in two directions [3]. Figure 10 illustrates a pixelated representation of an edge in an image. By applying the moment calculation, the coordinates are effectively shifted based on the intensities of the surrounding pixels, and sub-pixel accuracy is achieved. The moment calculations are implemented using a convolution mask and the following equations,

\bar{x} = \frac{\sum \sum x \, I_P(x, y)}{\sum \sum I_P(x, y)} \qquad (7)

\bar{y} = \frac{\sum \sum y \, I_P(x, y)}{\sum \sum I_P(x, y)} \qquad (8)

where I_P(x, y) is the intensity of the pixel at the location (x, y). These values are summed over a 3 × 3 area. The pixel intensity I_P is taken from the raw image data so that any bias introduced by the processing is removed. A calculation is performed for each of the two coordinates at the location specified by the coordinates in the segmented data array.
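A sketch of the moment calculation at a single candidate pixel (border pixels are ignored here for brevity; names are ours):

% Sub-pixel shift of one edge pixel via Equations 7 and 8,
% using raw intensities from img over a 3x3 neighbourhood.
r = edgePix(1, 1); c = edgePix(1, 2);
nbhd = img(r-1:r+1, c-1:c+1);                 % 3x3 raw patch
[X, Y] = meshgrid(c-1:c+1, r-1:r+1);          % pixel coordinates
xSub = sum(sum(X .* nbhd)) / sum(nbhd(:));    % Equation 7
ySub = sum(sum(Y .* nbhd)) / sum(nbhd(:));    % Equation 8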

After the moment calculation, the segmented data array contains a list of the pixel coordinates, at sub-pixel accuracy, determined by processing to belong to the edge under segmentation. An attempt is then made to optimize the linearity of this set of data points using a recursive function. The data matrix is sent to the recursive function, and linear regression is used to fit a line through the data. Using this 'best fit' line,


Figure 10: Shift caused by the moment calculation

the perpendicular distance between the line and each coordinate pair is calculated. A second data set is then created containing all coordinates from the original set except the pair with the largest perpendicular distance. The linear correlation coefficient is calculated for both sets, yielding the current coefficient and the potential change in the coefficient if the specified coordinate is removed. If either the coefficient or the difference is below a pre-determined tolerance, then the smaller data set is used in the subsequent recursion of the function. The recursion continues until both requirements are met or a maximum of 50 coordinates have been removed. The final result of this function is the slope/y-intercept form of the 'best fit' line through the optimized data.
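A sketch of this recursive fit follows. The paper's exact stopping rule combines the coefficient and its potential improvement; the simplified version here recurses while the correlation is below a tolerance, and all names and tolerances are our assumptions.

% Recursive outlier removal around a least-squares line fit.
function p = fitEdge(pts, tol, maxRemove)
    % pts is N-by-2, [x y] sub-pixel edge coordinates.
    p = polyfit(pts(:,1), pts(:,2), 1);       % [slope, y-intercept]
    R = corrcoef(pts(:,1), pts(:,2));
    if abs(R(1,2)) >= tol || maxRemove == 0
        return                                % fit is linear enough
    end
    % Perpendicular distance from each point to the fitted line.
    d = abs(p(1)*pts(:,1) - pts(:,2) + p(2)) / sqrt(p(1)^2 + 1);
    [~, worst] = max(d);
    pts(worst, :) = [];                       % drop the worst point
    p = fitEdge(pts, tol, maxRemove - 1);
end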

This same segmentation continues for all four edges located in each image. The algorithm for segmenting each edge is very similar, with only a few exceptions: when segmenting the lower horizontal edge in an image, the TestBox scans the edge from the lower-left corner travelling upwards; when scanning the vertical edges, the entire image is transposed and the same algorithm is used as for the horizontal edges (this results in the slope/x-intercept form of the lines).

3.2.5 Projective Transformation

The image data at this point in the algorithm consist of four first-order slope/intercept equations describing the four edges in the image. By solving the appropriate equations together, the coordinates of the four points of intersection can be calculated. These four coordinates are described in terms of pixels rather than an applicable unit such as millimeters, and they contain distortion caused by the inclusion of perspective in the images. The removal of distortion and the proper scaling of the data are performed through the use of a projective transformation.

Figure 11: Four points used to calculate the T matrix

The measurement artifact was machined to a width of 1/16" along both the horizontal and vertical grooves. Using each of the four vertices located at the intersections of the edges, four points can be characterized with the coordinates shown in Figure 11. The pixel-based locations of these four points are extracted from the image of the measurement artifact, although these locations are distorted by perspective and given in pixel units. Using these eight coordinates, a solution can be found for the following set of equations,

ρW = Tw    (9)

ρX = Tx    (10)

ρY = Ty    (11)

ρZ = Tz    (12)

where W, X, Y, and Z are each one of the four scaled coordinate sets and w, x, y, and z are the corresponding image coordinates. ρ is a scaling factor set equal to one. The solution to these equations, a transformation matrix T, can be constructed to transform points from the image space to a scaled world coordinate system. The origin of the image (pixel (1, 1)) is then transformed to the world coordinate system and shifted to its origin. This transformation shifts the location of the measurement artifact vertices to a location in the positive x and y directions.
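With ρ = 1, Equations 9 through 12 can be stacked in homogeneous coordinates and solved in the least-squares sense. The sketch below uses hypothetical pixel coordinates and an assumed vertex layout; only the 1/16" groove width comes from the paper.

% Solve T*Pimg = Pworld for the 3x3 transformation T (rho = 1).
w = 0.794;                                % groove width, mm (1/16")
Pworld = [0 w w 0;  0 0 w w;  1 1 1 1];   % assumed vertex layout, mm
Pimg   = [212 291 290 213;                % placeholder pixel data
          185 186 264 263;
            1   1   1   1];
T = Pworld / Pimg;          % least-squares solve of T*Pimg = Pworld
pw = T * [320; 240; 1];     % map any image point to world coordinates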

3.2.6 Centre Point

At this point in the algorithm, the data consist of the four coordinates of the measurement artifact, scaled to the proper dimensions and comprising the vertices of a perfect square.


A simple mean-value calculation in both the x and y directions results in the centre coordinate of the artifact, in millimeters, with respect to the image origin.
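In code this final step is a one-line mean over the transformed vertices (continuing the sketch above; names remain ours):

% Centre of the artifact: mean of the four transformed vertices.
verts  = T * Pimg;                 % world coordinates of the vertices
centre = mean(verts(1:2, :), 2);   % [x; y] centrepoint, mm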

4 Validation and Calibration

The algorithm is able to extract measurements characterizing the position, in the X-Y plane, of the robot end-effector with respect to the location of the measurement artifact. The resolution of these measurements is not known a priori. In order to determine and improve this resolution, a calibration and validation procedure is performed.

The Vernier X-Y table is a displacement device that controls the position of an object in the X-Y plane. The device, displayed in Figure 12, has two perpendicular axes. The displacement along each axis is manipulated by one manually controlled actuator with a resolution of 5.08 µm (0.0002"). The validation procedure uses this device to determine the resolution of the image processing output.

Figure 12: The Mitutoyo Vernier X-Y table

The measurement artifact is fixed onto the Vernier X-Y table and placed into the robot workspace. The robot and measurement head are positioned such that the camera can clearly capture images of the measurement artifact for processing. Both the robot and measurement head remain stationary for the calibration procedure. The Vernier X-Y table is used to displace the measurement artifact within the field of view of the camera, into 40 positions. At each position, an image of the artifact is captured and processed. These 40 positions construct a square pattern of dimensions 50.8 × 50.8 µm. This pattern can be viewed in Figure 13. The displacement of the measurement artifact is also extracted from the images taken at each location. By comparing the displacement from the images with that of the Vernier X-Y table, a reasonable estimate of the resolution of the image processing algorithm can be obtained.

Figure 13: The displacement pattern plotted by the X-Y table

After this test was performed several times, it was determined that the two axes of the Vernier X-Y table might not be positioned exactly perpendicular to each other. As a result, the comparison of image-extracted displacements to the displacements measured using the Vernier X-Y table uses a 'well-fit' parallelogram. The use of this parallelogram eliminates most of the error that would otherwise be incurred from the offset axes of the Vernier X-Y table.

A diagram of a typical comparison of image data to actual data can be viewed in Figure 14. In this figure, data points extracted from an image are marked with a plus (+) sign. The points as measured using the Vernier X-Y table are marked with small circles (o). The lines between the markers represent the error between each image point and the corresponding actual point. For the data in Figure 14, the calculated mean displacement error is 0.0170 mm and the maximum displacement error is 0.0372 mm. A calibration exercise is used to improve this result.

The discrepancy in values between the image data points and the Vernier table points is partially a result of the resolution of the Vernier table, but mostly a result of poor optimization of the image processing algorithm. Several parameters within the body of the algorithm must be properly selected in order to increase the measurement accuracy of the camera.

The parameter names are FILTER, TESTHEIGHT, TESTWIDTH, and SKIP. The parameter FILTER refers to the use of one of the two filters described in Section 3.2.1.


Figure 14: Comparison between actual and measured data (Trial 1)

Table 1: Parameter optimization table (errors in mm)

Trial  Filter  SK  TW  TH  Max E   Mean E
  1    None    20   5  10  0.0372  0.0170
  2    None    10   5  10  0.0197  0.0106
  3    None     1   5  10  0.0170  0.0081
  4    Mean     1   5  10  0.0150  0.0081
  5    Gauss    1   5  10  0.0160  0.0083
  6    None     1  25  10  0.0177  0.0085
  7    None     1   5   5  0.0168  0.0080
  8    None     1   5  15  0.0171  0.0081
  9    None     1   5  20  0.0171  0.0081

The parameters TESTHEIGHT, TESTWIDTH, and SKIP are all explained in Section 3.2.4. The proper selection of these parameters results in a higher resolution of the measurements produced by the algorithm.

The calibration procedure consists of setting each of the four parameters and then performing the validation procedure to establish the mean and maximum error in the readings. The algorithm is optimized by selecting parameters that produce lower mean and maximum errors. Table 1 documents several results of this optimization, where the values SK, TW, and TH refer to the parameters SKIP, TESTWIDTH, and TESTHEIGHT, respectively.

The parameters that produce the best combinationof mean and maximum error correspond to Trial 7 inTable 1. Figure 15 is a plot of the error associated witheach measurement during Trial 7.

5 Repeatability Test

The algorithm developed in this paper will be used in the camera-based robot calibration system.

Figure 15: Comparison between actual and measured data (Trial 7)

The algorithm is able to extract measurements with a maximum error of 0.0168 mm and a mean error of 0.0080 mm. Since both of these values are well below the 0.05 mm repeatability of the Thermo CRS A465 manipulator, this algorithm, along with the measurement head and data acquisition equipment, can be used to characterize the repeatability of this device.

The repeatability of a robot is established as its ability to reproduce certain measurements under the repeated application of a stated value and under certain conditions [4]. In terms of calculations, the repeatability is given by [5],

\text{Repeatability} = \bar{l} + 3\sigma \qquad (13)

where σ is the standard deviation and l̄ is the mean attained position. The following equations are used to calculate σ and l̄:

Attained position i: (X_{ai}, Y_{ai})

Mean attained position:

\bar{X} = \frac{1}{N} \sum X_{ai} \qquad (14)

\bar{Y} = \frac{1}{N} \sum Y_{ai} \qquad (15)

l_i = \sqrt{(X_{ai} - \bar{X})^2 + (Y_{ai} - \bar{Y})^2} \qquad (16)

\bar{l} = \frac{1}{N} \sum l_i \qquad (17)

\sigma = \sqrt{\frac{\sum (l_i - \bar{l})^2}{N - 1}} \qquad (18)

where l_i is the distance of attained position i from the mean attained position. The experiment tests the manipulator at approximately 50 percent of its 2 kg rated load and approximately 80 percent of its maximum velocity.
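A sketch of this computation, assuming Xa and Ya are N-by-1 vectors of attained positions in millimeters (the variable names are ours):

% Repeatability from Equations 13 through 18.
N    = numel(Xa);
li   = hypot(Xa - mean(Xa), Ya - mean(Ya));   % Equation 16
lbar = mean(li);                              % Equation 17
sig  = sqrt(sum((li - lbar).^2) / (N - 1));   % Equation 18
rep  = lbar + 3*sig;                          % Equation 13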


Table 2: Results of joint repeatability test

Joint  Repeatability (mm)
  1    0.01664
  2    0.01813
  3    0.02321
  4    0.01671
  5    0.05299
  6    0.01601

The repeatability tests consist of separate actuation of each of the robot's six joints, followed by two sequences of arbitrary robot motion using all six joints. The results of the repeatability tests can be viewed in Table 2 and in Figures 16 and 17.

Figure 16: Repeatability of A465 using an arbitrary sequence of motion (R = 0.06346)

Figure 17: Repeatability of A465 using a second arbitrary sequence of motion (R = 0.06939)

6 Conclusions and Future Work

The algorithm for the extraction of metric information has been designed and validated. According to the validation procedure, the algorithm produces a maximum error of 0.0168 mm and a mean error of 0.0080 mm. This error is well below the repeatability of the A465 manipulator (having a range of 0.1 mm), so the algorithm can be used in its calibration. Future work on this project will involve the integration of a laser displacement sensor for the measurement of displacement in the Z-direction. This reading can also be included in the calculation of the repeatability of the manipulator, as is standard practice. The algorithm must also be tailored to extract information from a second measurement artifact that resembles a ruler; this second artifact will be used in the manipulator calibration. A thorough optimization of the four parameters will be performed in an attempt to further reduce the measurement error.

References

[1] Andrew Fratpietro, Nick Simpson, and M. John D. Hayes. Advances in Robot Kinematic Calibration. Technical report, Carleton University, 2003.

[2] Ramesh Jain, Rangachar Kasturi, and Brian G. Schunck. Machine Vision. MIT Press and McGraw-Hill, Inc., 1995.

[3] Ronald Ofner, Paul O'Leary, and Markus Leitner. A Collection of Algorithms for the Determination of Construction Points in the Measurement of 3D Geometries via Light-Sectioning. Institute for Automation, University of Leoben, 1998.


[4] A. T. J. Hayward. Repeatability and Accuracy. Mechanical Engineering Publications Limited, 1977.

[5] Nicholas G. Dagalakis. Industrial Robot Standards. Chapter 27 in the Handbook of Industrial Robots, 1998.
