ECE661 Computer Vision : HW6 - Purdue University
ECE661 Computer Vision : HW6
Kihyun Hong
1 Problem Description
In this homework assignment, our task is to reconstruct 3D structure from 2 captured stereo images. First we compute the fundamental matrix (F) of the 2 images. From the fundamental matrix we calculate the epipoles (e and e′) and camera matrices (P and P′). Then we find correspondences between the 2 images by searching along epipolar lines (interest points are obtained from Canny edge detection). To do this, we apply a rectification transform that makes the epipolar lines parallel (which makes the correspondences easy to find). Finally, we compute the projected world points of the correspondences of the captured 3D structure.
1.1 Fundamental Matrix
We applied the 8-point algorithm to estimate the fundamental matrix of the 2 images. From the fundamental matrix constraint, x′ᵀFx = 0, we can set up a linear equation in the elements of F. Let f be the 9-vector of the entries of F and, for each correspondence i,

Aᵢ = [x′ᵢxᵢ  x′ᵢyᵢ  x′ᵢ  y′ᵢxᵢ  y′ᵢyᵢ  y′ᵢ  xᵢ  yᵢ  1].

Then, for all i points we get the following minimization problem. Using SVD, we can find the best estimate of F.

min ‖Af‖, s.t. ‖f‖ = 1  (1)

where A = [A₁; … ; Aₙ] stacks the rows Aᵢ if n corresponding points are used.
Since the fundamental matrix F should be rank 2, we need to adjust the solution: if we set the smallest singular value of F to 0, F is guaranteed to be rank 2, and this refined F is optimal in the sense of minimizing the Frobenius norm. The procedure is as follows:
1. Data normalization.
2. Set Af from x′Fx = 0.
3. Solve : min ‖Af‖, s.t. ‖f‖ = 1.
4. Refine F with rank 2 constraint.
5. Denormalization.
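As a cross-check of the five steps above, here is a minimal sketch of the normalized 8-point algorithm in Python/NumPy. This is not the C++/OpenCV implementation listed later in the report; the function names and the √2 mean-distance normalization convention are illustrative assumptions.

```python
import numpy as np

def normalize(pts):
    # Step 1: translate the centroid to the origin and scale the
    # mean distance to sqrt(2) (a common normalization convention).
    c = pts.mean(axis=0)
    d = np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()
    s = np.sqrt(2) / d
    return np.array([[s, 0, -s * c[0]],
                     [0, s, -s * c[1]],
                     [0, 0, 1]])

def eight_point(x1, x2):
    # x1, x2 : (n, 2) arrays of corresponding points, n >= 8.
    T1, T2 = normalize(x1), normalize(x2)
    h1 = (T1 @ np.c_[x1, np.ones(len(x1))].T).T
    h2 = (T2 @ np.c_[x2, np.ones(len(x2))].T).T
    # Step 2: one row A_i per correspondence, from x2'^T F x1' = 0.
    A = np.array([[u2*u1, u2*v1, u2, v2*u1, v2*v1, v2, u1, v1, 1]
                  for (u1, v1, _), (u2, v2, _) in zip(h1, h2)])
    # Step 3: min ||Af|| s.t. ||f|| = 1 -> last right singular vector.
    f = np.linalg.svd(A)[2][-1]
    F = f.reshape(3, 3)
    # Step 4: enforce rank 2 by zeroing the smallest singular value.
    U, D, Vt = np.linalg.svd(F)
    F = U @ np.diag([D[0], D[1], 0]) @ Vt
    # Step 5: denormalize.
    return T2.T @ F @ T1
```

With noise-free correspondences the recovered F satisfies the epipolar constraint to machine precision.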
After we get F, we can easily compute the epipoles (e and e′) and camera matrices (P and P′): e is the right null vector of F (Fe = 0) and e′ is the left null vector of F (e′ᵀF = 0). For the camera matrices, we can set

P = [I | 0]  (2)
P′ = [[e′]×F + e′vᵀ | λe′]  (3)

where v is any 3-vector and λ is a non-zero constant.
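The epipole and camera-matrix recovery above can be sketched as follows in Python/NumPy (illustrative, not the report's OpenCV code; the default choices v = 0 and λ = 1 stand in for the arbitrary constants):

```python
import numpy as np

def epipoles(F):
    # e  : right null vector, F e  = 0
    # e2 : left  null vector, e2^T F = 0
    e = np.linalg.svd(F)[2][-1]
    e2 = np.linalg.svd(F.T)[2][-1]
    return e, e2

def cameras_from_F(F, v=(0.0, 0.0, 0.0), lam=1.0):
    _, e2 = epipoles(F)
    # skew-symmetric cross-product matrix [e2]_x
    ex = np.array([[0, -e2[2], e2[1]],
                   [e2[2], 0, -e2[0]],
                   [-e2[1], e2[0], 0]])
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    # P2 = [[e2]_x F + e2 v^T | lam * e2], eq.(3)
    P2 = np.hstack([ex @ F + np.outer(e2, np.asarray(v)),
                    lam * e2.reshape(3, 1)])
    return P1, P2
```

For any world point X projected by this (P1, P2) pair, the epipolar constraint x2ᵀ F x1 = 0 holds exactly, which is the defining property of a camera pair consistent with F.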
1.2 Rectification Transform and Correspondence Matching
To find correspondences of the interest points, we transform the stereo images so that the epipoles go to infinity. This makes it easier to search for correspondences, since corresponding epipolar lines end up on the same horizontal line. To compute the rectification transforms for each image, we first set the rectification transform H′ for the right image, then calculate the matched transform H for the left image. The procedure for calculating H and H′ is as follows:
1. Set T that sends the interest point (the image center) to the origin.
2. Compute R that rotates the translated epipole onto the x-axis (y-component 0).
3. Set G that sends the pre-rectified epipole to infinity.
4. H′ = GRT.
5. Solve: min Σᵢ d(Hxᵢ, H′x′ᵢ)², s.t. H = (I + H′e′aᵀ)H′P′P⁺.
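Steps 1-4 (H′ = GRT) can be sketched as follows in Python/NumPy (illustrative, not the report's OpenCV code; e is the epipole in homogeneous coordinates and (x0, y0) stands in for the chosen interest point):

```python
import numpy as np

def rectify_H(e, x0, y0):
    # T : send the interest point (x0, y0) to the origin.
    T = np.array([[1.0, 0.0, -x0], [0.0, 1.0, -y0], [0.0, 0.0, 1.0]])
    # translated epipole
    ex, ey = e[0] / e[2] - x0, e[1] / e[2] - y0
    # R : rotate the translated epipole onto the x-axis.
    th = np.arctan2(-ey, ex)
    R = np.array([[np.cos(th), -np.sin(th), 0.0],
                  [np.sin(th),  np.cos(th), 0.0],
                  [0.0, 0.0, 1.0]])
    # rotated epipole is (f, 0, 1)
    f = ex * np.cos(th) - ey * np.sin(th)
    # G : send (f, 0, 1) to the point at infinity (f, 0, 0).
    G = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [-1.0 / f, 0.0, 1.0]])
    return G @ R @ T
```

Applying the resulting H to the epipole yields a point at infinity in the x-direction, which is exactly the condition that makes the epipolar lines horizontal.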
After the rectification transform, we can find the correspondences with a simple method. If we assume that the disparity between the two images is small, we can additionally use a distance constraint, searching for correspondences only within some range.
1. Find interest points on image 1.
2. Find interest points on image 2.
3. Find correspondences for these interest points using NCC.
4. Check ordering constraint and distance constraint.
We have to mention that this method is not robust: we do not verify that the correspondences are actually correct, we only check the constraints.
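A minimal sketch of the row-wise NCC search with the distance constraint (Python/NumPy, illustrative; the 9×9 window mirrors the implementation's WINDOWSIZE_NCC, but the function names and disparity range are assumptions, and the ordering check is omitted):

```python
import numpy as np

def ncc(a, b):
    # Normalized cross-correlation of two equal-sized patches.
    a = a - a.mean()
    b = b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / d if d > 0 else 0.0

def match_row(cols1, cols2, img1, img2, row, win=4, max_disp=50):
    # For each interest column in `row` of img1, pick the candidate in
    # img2 on the same row (rectified images) within max_disp pixels
    # that maximizes NCC over a (2*win+1)^2 window.
    pairs = []
    for j1 in cols1:
        best, best_j2 = -1.0, None
        p1 = img1[row - win:row + win + 1, j1 - win:j1 + win + 1]
        for j2 in cols2:
            if abs(j2 - j1) > max_disp:   # distance constraint
                continue
            p2 = img2[row - win:row + win + 1, j2 - win:j2 + win + 1]
            v = ncc(p1, p2)
            if v > best:
                best, best_j2 = v, j2
        if best_j2 is not None:
            pairs.append((j1, best_j2))
    return pairs
```

On a synthetic pair where the right image is the left image shifted by a constant disparity, each interest point is matched to its shifted copy.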
1.3 Reconstruction of 3D Structure
From the correspondences, we can reconstruct the 3D structure up to a projective transformation. Using the fact that x × (PX) = 0 and x′ × (P′X) = 0, we can formulate a linear equation for the world position X.
AX = [ xP³ᵀ − P¹ᵀ
       yP³ᵀ − P²ᵀ
       x′P′³ᵀ − P′¹ᵀ
       y′P′³ᵀ − P′²ᵀ ] X = 0,  (4)

where Pⁱᵀ denotes the i-th row of P. Then, the world position X can be obtained as the least-squares solution of eq.(4).
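The linear triangulation of eq.(4) can be sketched as follows in Python/NumPy (illustrative, not the report's OpenCV code):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    # Stack the four rows of eq.(4): rows of A are
    # x*P^3T - P^1T and y*P^3T - P^2T for each camera.
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    # Least-squares solution: right singular vector of A with the
    # smallest singular value (min ||AX|| s.t. ||X|| = 1).
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]   # dehomogenize
```

With exact projections of a known point, the triangulated position matches the original world point.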
2 Results
We used 2 stereo indoor object images (800 by 600) (fig.1). To calculate the fundamental matrix, we manually provided 9 corresponding points.

The epipolar lines of the 9 points are shown in fig.2. After the rectification transform, we got parallel epipolar lines (fig.3). For the rectified images, we estimated the correspondences of the interest points (fig.4). The final 3D reconstruction, the projection of the correspondences, is shown in fig.5.
Figure 1: Captured stereo images.
Figure 2: Computed epipolar lines of the 9 correspondences.
Figure 3: Rectified images.
3 Codes
The code is composed of 3 parts: HW6.cpp, stereo.cpp, and utility.cpp. HW6.cpp is the main function file. In stereo.cpp, the fundamental matrix estimation, rectification transformation, correspondence search, and 3D reconstruction are implemented. Image handling tools and basic utility functions such as NCC, normalization, and transforms are contained in utility.cpp. The OpenCV library is used to implement most of the functions.
3.1 Main Function File : HW6.cpp
//
// file : HW6.cpp
// ---------------------------
//
Figure 4: Estimated correspondences of the 2 images’ interest points.
Figure 5: Reconstructed 3D structure.
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "cv.h"
#include "cxcore.h"
#include "highgui.h"
#include "utility.h"
#include "stereo.h"
// Canny edge detection parameters
#define APERTURE_SIZE 3
#define LOW_THRESHOLD 150
#define HIGH_THRESHOLD 230
void DrawEpipolarLine(IplImage *image1, IplImage *image2, PairPoints *ptPair, CvMat *F);
int main(int argc, char **argv)
{
    // declaration
    char imageName1[80], imageName2[80];
    int numOfCorspPts;
    IplImage *inImage1 = 0, *inImage2 = 0;
    PairPoints ptPair, recPtPair; // corresponding points for computing F

    IplImage *edgeImage1, *edgeImage2, *edgeRec1, *edgeRec2;
    IplImage *grayImage1, *grayImage2, *inRec1, *inRec2;

    CvMat *F = cvCreateMat(3, 3, CV_64FC1);  // fundamental matrix
    CvMat *e1 = cvCreateMat(3, 1, CV_64FC1); // epipoles
    CvMat *e2 = cvCreateMat(3, 1, CV_64FC1);
    CvMat *P1 = cvCreateMat(3, 4, CV_64FC1); // camera matrices
    CvMat *P2 = cvCreateMat(3, 4, CV_64FC1);
    CvMat *H1 = cvCreateMat(3, 3, CV_64FC1); // rectification transforms
    CvMat *H2 = cvCreateMat(3, 3, CV_64FC1);
    CvMat *invH1 = cvCreateMat(3, 3, CV_64FC1); // inverse rectification transforms
    CvMat *invH2 = cvCreateMat(3, 3, CV_64FC1);
    // load images and corresponding points
    if(argc == 4){
        strcpy(imageName1, argv[1]);
        strcpy(imageName2, argv[2]);
        numOfCorspPts = atoi(argv[3]);
    }else{
        printf("\n");
        printf(" Usage: hw6 leftImageName rightImageName"
               " [number of corresponding points for computing F]\n");
        printf("\n");
        exit(0);
    }

    ptPair.len = numOfCorspPts;

    printf("Type %d corresponding points of image1 and 2.\n", numOfCorspPts);
    for(int i = 0; i < numOfCorspPts; i++){
        printf("Type points (%d)\n", i+1);
        printf("image1 x = "); scanf("%f", &ptPair.pt1J[i]);
        printf("image1 y = "); scanf("%f", &ptPair.pt1I[i]);
        printf("image2 x = "); scanf("%f", &ptPair.pt2J[i]);
        printf("image2 y = "); scanf("%f", &ptPair.pt2I[i]);
    }
    // load images
    inImage1 = cvLoadImage(imageName1, -1);
    if(!inImage1){
        printf("error : could not load image file: %s\n", imageName1);
        exit(0);
    }
    inImage2 = cvLoadImage(imageName2, -1);
    if(!inImage2){
        printf("error : could not load image file: %s\n", imageName2);
        exit(0);
    }

    DrawImagePair(inImage1, inImage2, "inputImages.jpg", true);
    //////////////////////////////////////////////////////////
    // 1. Fundamental Matrix                                //
    //    compute fundamental matrix, epipoles and          //
    //    camera matrices                                   //
    //////////////////////////////////////////////////////////
    EstimateFundamentalMatrix(&ptPair, F);
    ComputeEpipoles(F, e1, e2);
    ComputeCameraMatrices(F, e2, P1, P2);
    DrawEpipolarLine(inImage1, inImage2, &ptPair, F);
    //////////////////////////////////////////////////////////
    // 2. Rectification                                     //
    //    transform images to have parallel epipolar lines  //
    //////////////////////////////////////////////////////////
    recPtPair.len = ptPair.len;
    int width = inImage1->width;
    int height = inImage1->height;
    CvPoint center;
    center.x = width/2;
    center.y = height/2;

    // calculate rectification transforms
    // 1. find rectification transform for right image, H2
    // 2. match transform --> find H1 for left image
    ComputeRectificationTransform(e2, H2, &center);
    ComputeMatchedTransform(H1, &ptPair, H2, P1, P2);
    // image transformation
    grayImage1 = cvCreateImage(cvSize(width, height), IPL_DEPTH_8U, 1);
    grayImage2 = cvCreateImage(cvSize(width, height), IPL_DEPTH_8U, 1);
    edgeImage1 = cvCreateImage(cvSize(width, height), IPL_DEPTH_8U, 1);
    edgeImage2 = cvCreateImage(cvSize(width, height), IPL_DEPTH_8U, 1);
    inRec1 = cvCreateImage(cvSize(width, height), IPL_DEPTH_8U, 3);
    inRec2 = cvCreateImage(cvSize(width, height), IPL_DEPTH_8U, 3);
    edgeRec1 = cvCreateImage(cvSize(width, height), IPL_DEPTH_8U, 1);
    edgeRec2 = cvCreateImage(cvSize(width, height), IPL_DEPTH_8U, 1);

    cvCvtColor(inImage1, grayImage1, CV_BGR2GRAY);
    cvCvtColor(inImage2, grayImage2, CV_BGR2GRAY);
    // Canny edge detection
    cvCanny(grayImage1, edgeImage1, LOW_THRESHOLD, HIGH_THRESHOLD, APERTURE_SIZE);
    cvCanny(grayImage2, edgeImage2, LOW_THRESHOLD, HIGH_THRESHOLD, APERTURE_SIZE);
    // transform edge (interest point) images
    TransformImage(edgeImage1, edgeRec1, H1);
    TransformImage(edgeImage2, edgeRec2, H2);

    // transform images
    TransformImage(inImage1, inRec1, H1);
    TransformImage(inImage2, inRec2, H2);
    DrawImagePair(edgeRec1, edgeRec2, "RectifiedImages.jpg", true);
    //////////////////////////////////////////////////////////
    // 3. Correspondences                                   //
    //    find correspondences of edge points               //
    //////////////////////////////////////////////////////////
    // after rectification, points on the same row i
    // (epipolar line on image 1) have correspondences
    // on the same epipolar line on image 2 (assumption).
    // 1. find interest points on image 1
    // 2. find interest points on image 2
    // 3. find correspondences using NCC (not robust)
    // 4. check ordering constraints
    //
    // n.b. : we don't check whether the correspondences are right or not.
    // for each interest point of one image, find the corresponding point
    // of the other image, then check the ordering constraint.
    // if the ordering is ascending, keep the correspondences.

    int step = edgeRec1->widthStep;
    uchar *data1 = (uchar *)edgeRec1->imageData;
    uchar *data2 = (uchar *)edgeRec2->imageData;
    PairPoints corspPt;
    Points interestPt1, interestPt2;
    int i, j, n;

    corspPt.len = 0;
    for(i = 0; i < height; i++){
        // find interest points on image 1
        n = 0;
        for(j = 0; j < width; j++){
            if(data1[i*step + j] == 255){ // edge pixel --> interest point
                interestPt1.ptI[n] = i;
                interestPt1.ptJ[n] = j;
                n++;
            }
        }
        interestPt1.len = n;

        // find interest points on image 2
        n = 0;
        for(j = 0; j < width; j++){
            if(data2[i*step + j] == 255){ // edge pixel --> interest point
                interestPt2.ptI[n] = i;
                interestPt2.ptJ[n] = j;
                n++;
            }
        }
        interestPt2.len = n;

        FindCorrespondences(&corspPt, inRec1, inRec2, &interestPt1, &interestPt2);
    }
    //////////////////////////////////////////////////////////
    // 4. Reconstruction                                    //
    //    reconstruct up to projective transform            //
    //////////////////////////////////////////////////////////
    Points3D pt3D;
    Points x1, x2, xNew1, xNew2;
    x1.len = corspPt.len;
    x2.len = corspPt.len;
    for(i = 0; i < corspPt.len; i++){
        x1.ptI[i] = corspPt.pt1I[i];
        x1.ptJ[i] = corspPt.pt1J[i];
        x2.ptI[i] = corspPt.pt2I[i];
        x2.ptJ[i] = corspPt.pt2J[i];
    }
    // map the rectified correspondences back to the original images
    cvInvert(H1, invH1);
    cvInvert(H2, invH2);
    TransformPoints(&x1, &xNew1, invH1);
    TransformPoints(&x2, &xNew2, invH2);
    for(i = 0; i < corspPt.len; i++){
        corspPt.pt1I[i] = xNew1.ptI[i];
        corspPt.pt1J[i] = xNew1.ptJ[i];
        corspPt.pt2I[i] = xNew2.ptI[i];
        corspPt.pt2J[i] = xNew2.ptJ[i];
    }

    CalculateWorldPosition(&corspPt, &pt3D, P1, P2);

    FILE *file;
    file = fopen("3D position.txt", "w");
    fprintf(file, "3D world position \n");
    fprintf(file, "-----------------------------------------\n");
    for(i = 0; i < pt3D.len; i++){
        fprintf(file, "%f %f %f; \n", pt3D.x[i], pt3D.y[i], pt3D.z[i]);
    }
    fclose(file);

    // release the images and matrices
    cvReleaseImage(&inImage1); cvReleaseImage(&inImage2);
    cvReleaseImage(&grayImage1); cvReleaseImage(&grayImage2);
    cvReleaseImage(&inRec1); cvReleaseImage(&inRec2);
    cvReleaseImage(&edgeImage1); cvReleaseImage(&edgeImage2);
    cvReleaseImage(&edgeRec1); cvReleaseImage(&edgeRec2);
    cvReleaseMat(&F); cvReleaseMat(&e1); cvReleaseMat(&e2);
    cvReleaseMat(&P1); cvReleaseMat(&P2);
    cvReleaseMat(&invH1); cvReleaseMat(&invH2);
    cvReleaseMat(&H1); cvReleaseMat(&H2);

    return 0;
}
void DrawEpipolarLine(IplImage *inImage1, IplImage *inImage2,
                      PairPoints *ptPair, CvMat *F)
{
    CvMat *x1, *x2, *l1, *l2;
    CvPoint pos1, pos2;
    int n;

    IplImage *image1, *image2;

    int height = inImage1->height;
    int width = inImage1->width;
    int channel = inImage1->nChannels;

    image1 = cvCreateImage(cvSize(width, height), IPL_DEPTH_8U, channel);
    image2 = cvCreateImage(cvSize(width, height), IPL_DEPTH_8U, channel);
    cvCopyImage(inImage1, image1);
    cvCopyImage(inImage2, image2);

    x1 = cvCreateMat(3, 1, CV_64FC1);
    x2 = cvCreateMat(3, 1, CV_64FC1);
    l1 = cvCreateMat(3, 1, CV_64FC1);
    l2 = cvCreateMat(3, 1, CV_64FC1);

    for(n = 0; n < ptPair->len; n++){
        // draw epipolar lines on image1
        pos1.x = cvRound(ptPair->pt1J[n]);
        pos1.y = cvRound(ptPair->pt1I[n]);
        cvCircle(image1, pos1, 2, cvScalar(255, 255, 100), 2);
        cvmSet(x2, 0, 0, ptPair->pt2J[n]);
        cvmSet(x2, 1, 0, ptPair->pt2I[n]);
        cvmSet(x2, 2, 0, 1);
        // compute epipolar line l1 = F^T x2
        GetEpipolarLine(F, x2, l1, false);
        DrawLine(image1, l1);

        // draw epipolar lines on image2
        pos2.x = cvRound(ptPair->pt2J[n]);
        pos2.y = cvRound(ptPair->pt2I[n]);
        cvCircle(image2, pos2, 2, cvScalar(255, 255, 100), 2);
        cvmSet(x1, 0, 0, ptPair->pt1J[n]);
        cvmSet(x1, 1, 0, ptPair->pt1I[n]);
        cvmSet(x1, 2, 0, 1);
        // compute epipolar line l2 = F x1
        GetEpipolarLine(F, x1, l2, true);
        DrawLine(image2, l2);
    }

    DrawImagePair(image1, image2, "EpipolarLine.jpg", true);

    cvReleaseImage(&image1);
    cvReleaseImage(&image2);
    cvReleaseMat(&x1); cvReleaseMat(&x2);
    cvReleaseMat(&l1); cvReleaseMat(&l2);
}
3.2 Program Header File : stereo.h
//
// file : stereo.h
//----------------------------------------------
// this file contains functions for stereo image
// processing
//

#define NO_COR 10000
#define DIST_THRESHOLD 50

void EstimateFundamentalMatrix(PairPoints *ptPair, CvMat *F);

void ComputeEpipoles(CvMat *F, CvMat *e1, CvMat *e2);

void ComputeRectificationTransform(CvMat *e, CvMat *H, CvPoint *interestPt);

void GetEpipolarLine(CvMat *F, CvMat *x, CvMat *l, bool pointFlag);

void ComputeMatchedTransform(CvMat *H1, PairPoints *ptPair,
                             CvMat *H2, CvMat *P1, CvMat *P2);

void ComputeCameraMatrices(CvMat *F, CvMat *e2, CvMat *P1, CvMat *P2);

void FindCorrespondences(PairPoints *corspPt,
                         IplImage *image1, IplImage *image2,
                         Points *interestPt1, Points *interestPt2);

void CorrespondencePair(int *corspOrder, IplImage *sImage, IplImage *lImage,
                        Points *sNumPt, Points *lNumPt);

bool IsRightOrder(int *corspOrder, int len);

void CalculateWorldPosition(PairPoints *corspPt, Points3D *pt3D,
                            CvMat *P1, CvMat *P2);
3.3 Program Function File : stereo.cpp
//
// file : stereo.cpp
//----------------------------------------------
// this file contains functions for stereo image
// processing
//
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "cv.h"
#include "cxcore.h"
#include "highgui.h"
#include "utility.h"
#include "stereo.h"
//
// function : EstimateFundamentalMatrix
// usage : EstimateFundamentalMatrix(ptPair, F);
// -------------------------------------------------------------
// This function returns the estimated fundamental matrix using
// the 8-point algorithm.
//
// constraint : x2^T F x1 = 0
// 1. data normalization
// 2. set Af from x2^T F x1 = 0
// 3. solve : min ||Af||, s.t. ||f|| = 1
// 4. rank 2 constraint : last singular value should be 0
// 5. denormalization
//
void EstimateFundamentalMatrix(PairPoints *ptPair, CvMat *F)
{
    int numOfCorresp = ptPair->len;
    int NumOfCol = 9; // num of columns of A
    int n, m;
    float x1, y1, x2, y2;
    CvMat *A, *Ftmp;
    CvMat *D, *V, *U;
    CvMat *Df, *Vf, *Uf;
    CvMat *UfTrans, *temp;
    CvMat *T1, *T2, *T2Trans;
    Points pt1, pt2;

    A = cvCreateMat(numOfCorresp, NumOfCol, CV_64FC1); // n x 9
    Ftmp = cvCreateMat(3, 3, CV_64FC1);
    D = cvCreateMat(numOfCorresp, NumOfCol, CV_64FC1);
    U = cvCreateMat(numOfCorresp, numOfCorresp, CV_64FC1);
    V = cvCreateMat(NumOfCol, NumOfCol, CV_64FC1);
    Df = cvCreateMat(3, 3, CV_64FC1);
    Uf = cvCreateMat(3, 3, CV_64FC1);
    Vf = cvCreateMat(3, 3, CV_64FC1);
    UfTrans = cvCreateMat(3, 3, CV_64FC1);
    temp = cvCreateMat(3, 3, CV_64FC1);
    T1 = cvCreateMat(3, 3, CV_64FC1);
    T2 = cvCreateMat(3, 3, CV_64FC1);
    T2Trans = cvCreateMat(3, 3, CV_64FC1);

    // set data set for normalization
    pt1.len = numOfCorresp;
    pt2.len = numOfCorresp;
    for(n = 0; n < numOfCorresp; n++){
        pt1.ptJ[n] = ptPair->pt1J[n];
        pt1.ptI[n] = ptPair->pt1I[n];
        pt2.ptJ[n] = ptPair->pt2J[n];
        pt2.ptI[n] = ptPair->pt2I[n];
    }

    DataNormalization(&pt1, T1);
    DataNormalization(&pt2, T2);

    // set matrix A to calculate f (x2^T F x1 = 0 --> Af = 0)
    for(n = 0; n < numOfCorresp; n++){
        x1 = pt1.ptJ[n];
        y1 = pt1.ptI[n];
        x2 = pt2.ptJ[n];
        y2 = pt2.ptI[n];
        float Ai[9] = {x2*x1, x2*y1, x2, y2*x1, y2*y1, y2, x1, y1, 1};

        // set matrix A
        for(m = 0; m < NumOfCol; m++){
            cvmSet(A, n, m, Ai[m]);
        }
    }

    ////////////////////////////////////////////////
    // solve : min ||Af||, s.t. ||f|| = 1         //
    ////////////////////////////////////////////////
    cvSVD(A, D, U, V, CV_SVD_U_T|CV_SVD_V_T);
    // A = U^T D V in OpenCV : A = U' D' V'^T in the text

    // take last column of V' : last row of V
    float f[9];
    for(n = 0; n < NumOfCol; n++){
        f[n] = cvmGet(V, NumOfCol - 1, n);
    }

    Array2CvMat(f, Ftmp, 3, 3);

    ////////////////////////////////////////////////
    // Refine F (F should be rank 2)              //
    ////////////////////////////////////////////////
    cvSVD(Ftmp, Df, Uf, Vf, CV_SVD_U_T|CV_SVD_V_T);
    // F = Uf^T Df Vf in OpenCV : F = Uf' Df' Vf'^T in the text

    // F = Uf' diag(sigma1, sigma2, 0) Vf'^T
    cvmSet(Df, 2, 2, 0.0f); // make rank 2 matrix
    cvTranspose(Uf, UfTrans);
    cvMatMul(UfTrans, Df, temp);
    cvMatMul(temp, Vf, Ftmp);

    // denormalization
    // F = T2Trans * Ftmp * T1 <- Ftmp = invT2Trans * F * invT1
    cvTranspose(T2, T2Trans);
    cvMatMul(T2Trans, Ftmp, temp);
    cvMatMul(temp, T1, F);

    printf("F = %f, %f, %f\n", cvmGet(F, 0, 0), cvmGet(F, 0, 1), cvmGet(F, 0, 2));
    printf("    %f, %f, %f\n", cvmGet(F, 1, 0), cvmGet(F, 1, 1), cvmGet(F, 1, 2));
    printf("    %f, %f, %f\n", cvmGet(F, 2, 0), cvmGet(F, 2, 1), cvmGet(F, 2, 2));

    // release matrices
    cvReleaseMat(&A); cvReleaseMat(&Ftmp);
    cvReleaseMat(&D); cvReleaseMat(&U); cvReleaseMat(&V);
    cvReleaseMat(&Df); cvReleaseMat(&Uf); cvReleaseMat(&Vf);
    cvReleaseMat(&UfTrans); cvReleaseMat(&temp);
    cvReleaseMat(&T1); cvReleaseMat(&T2); cvReleaseMat(&T2Trans);
}
//
// function : ComputeEpipoles
// usage : ComputeEpipoles(F, e1, e2);
// -------------------------------------------------------------
// This function computes the 2 epipoles of the left and right images.
// e1 is the null vector of F (F e1 = 0).
// e2 is the left null vector of F (e2^T F = 0).
// Using SVD, we calculate the null spaces.
//
void ComputeEpipoles(CvMat *F, CvMat *e1, CvMat *e2)
{
    int i;
    CvMat *D, *V, *U;

    D = cvCreateMat(3, 3, CV_64FC1);
    U = cvCreateMat(3, 3, CV_64FC1);
    V = cvCreateMat(3, 3, CV_64FC1);

    cvSVD(F, D, U, V, CV_SVD_U_T|CV_SVD_V_T);
    // F = U^T D V in OpenCV : F = U' D' V'^T in the text

    // take last column of V' : last row of V
    // take last column of U' : last row of U
    for(i = 0; i < 3; i++){
        if(cvmGet(V, 2, 2) != 0){
            cvmSet(e1, i, 0, cvmGet(V, 2, i) / cvmGet(V, 2, 2));
        }else{
            cvmSet(e1, i, 0, cvmGet(V, 2, i));
        }

        if(cvmGet(U, 2, 2) != 0){
            cvmSet(e2, i, 0, cvmGet(U, 2, i) / cvmGet(U, 2, 2));
        }else{
            cvmSet(e2, i, 0, cvmGet(U, 2, i));
        }
    }

    // release matrices
    cvReleaseMat(&D); cvReleaseMat(&U); cvReleaseMat(&V);
}
//
// function : ComputeRectificationTransform
// usage : ComputeRectificationTransform(e, H, interestPt);
//--------------------------------------------------------
// This function computes the rectification transform H for
// the given epipole e. It also preserves the image shape
// around the interest point.
// H e = point at infinity
//
void ComputeRectificationTransform(CvMat *e, CvMat *H, CvPoint *interestPt)
{
    float x, y, f, theta;
    CvMat *T, *R, *G, *temp;
    float x0, y0;

    T = cvCreateMat(3, 3, CV_64FC1);
    R = cvCreateMat(3, 3, CV_64FC1);
    G = cvCreateMat(3, 3, CV_64FC1);
    temp = cvCreateMat(3, 3, CV_64FC1);

    x0 = interestPt->x;
    y0 = interestPt->y;

    if(cvmGet(e, 2, 0) != 0){
        // pre-rectification transform
        // compute translation matrix T
        // (x0, y0, 1) --> (0, 0, 1) (interest point to origin)
        // (ex, ey, 1) --> (ex-x0, ey-y0, 1)
        float t[9] = {1, 0, -x0,
                      0, 1, -y0,
                      0, 0, 1};
        Array2CvMat(t, T, 3, 3);

        // compute rotation matrix R
        // (ex-x0, ey-y0, 1) --> (f, 0, 1)
        // x = ex-x0, y = ey-y0
        // [cos(theta) -sin(theta)][x] = [f]
        // [sin(theta)  cos(theta)][y] = [0]
        x = cvmGet(e, 0, 0) / cvmGet(e, 2, 0) - x0;
        y = cvmGet(e, 1, 0) / cvmGet(e, 2, 0) - y0;
        theta = atan(-y / x);
        float r[9] = {cos(theta), -sin(theta), 0,
                      sin(theta),  cos(theta), 0,
                      0, 0, 1};
        Array2CvMat(r, R, 3, 3);

        // compute transform G that sends (f, 0, 1) to infinity
        f = x * cos(theta) - y * sin(theta);
        float g[9] = { 1,   0, 0,
                       0,   1, 0,
                      -1/f, 0, 1};
        Array2CvMat(g, G, 3, 3);

        // rectification transform
        // H = GRT
        cvMatMul(R, T, temp);
        cvMatMul(G, temp, H);

        // (0,0) --> (x0, y0) : origin back to interest point
        CenteringHomography(H, interestPt);
    }else{
        // in this case, we don't check whether the epipole is
        // in the x direction or the y direction,
        // and we don't assign H = I.
        printf("error : the epipole is already a point at infinity\n");
        exit(0);
    }

    // release matrices
    cvReleaseMat(&T); cvReleaseMat(&R);
    cvReleaseMat(&G); cvReleaseMat(&temp);
}
//
// function : ComputeMatchedTransform
// usage : ComputeMatchedTransform(H1, ptPair, H2, P1, P2);
//------------------------------------------------------
// this function matches H1 to H2.
// the epipolar lines transformed by H1 and H2
// will be parallel.
// H1 = Ha H2 (P2 P1^+), Ha = [a b c; 0 1 0; 0 0 1]
//
void ComputeMatchedTransform(CvMat *H1, PairPoints *ptPair,
                             CvMat *H2, CvMat *P1, CvMat *P2)
{
    float a, b, c;
    int len = ptPair->len;
    int i;
    Points x1, x2, x1New, x2New;
    CvMat *Ha = cvCreateMat(3, 3, CV_64FC1);
    CvMat *invP1 = cvCreateMat(4, 3, CV_64FC1);
    CvMat *M = cvCreateMat(3, 3, CV_64FC1);
    CvMat *H0 = cvCreateMat(3, 3, CV_64FC1);
    CvMat *A = cvCreateMat(len, 3, CV_64FC1);
    CvMat *B = cvCreateMat(len, 1, CV_64FC1);
    CvMat *aVec = cvCreateMat(3, 1, CV_64FC1);

    x2.len = len;
    x1.len = len;

    // compute right inverse of P1
    // invP1 = P^T (P P^T)^-1
    cvPseudoInv(P1, invP1);

    // M = P2 invP1
    cvMatMul(P2, invP1, M);

    // H0 = H2 M
    cvMatMul(H2, M, H0);

    ///////////////////////////////////////////////
    // find least square solution of a, b, and c //
    ///////////////////////////////////////////////
    // p.307
    for(i = 0; i < len; i++){
        x1.ptI[i] = ptPair->pt1I[i];
        x1.ptJ[i] = ptPair->pt1J[i];
        x2.ptI[i] = ptPair->pt2I[i];
        x2.ptJ[i] = ptPair->pt2J[i];
    }
    TransformPoints(&x2, &x2New, H2);
    TransformPoints(&x1, &x1New, H0);

    // A [a b c]^T = B
    for(i = 0; i < len; i++){
        cvmSet(A, i, 0, x1New.ptJ[i]);
        cvmSet(A, i, 1, x1New.ptI[i]);
        cvmSet(A, i, 2, 1);
        cvmSet(B, i, 0, x2New.ptJ[i]);
    }

    // least-squares solution via SVD
    cvSolve(A, B, aVec, CV_SVD);

    a = cvmGet(aVec, 0, 0);
    b = cvmGet(aVec, 1, 0);
    c = cvmGet(aVec, 2, 0);

    // Ha = [a b c; 0 1 0; 0 0 1]
    float ha[9] = {a, b, c,
                   0, 1, 0,
                   0, 0, 1};
    Array2CvMat(ha, Ha, 3, 3);

    // H1 = Ha H0
    cvMatMul(Ha, H0, H1);

    cvReleaseMat(&Ha); cvReleaseMat(&invP1);
    cvReleaseMat(&M); cvReleaseMat(&H0);
    cvReleaseMat(&A); cvReleaseMat(&B);
    cvReleaseMat(&aVec);
}
//
// function : ComputeCameraMatrices
// usage : ComputeCameraMatrices(F, e2, P1, P2);
//--------------------------------------------------------
// this function calculates the camera matrices P1 & P2
// P1 = [I|0], P2 = [[e2]x F + e2 v^T | lambda e2]
//
void ComputeCameraMatrices(CvMat *F, CvMat *e2, CvMat *P1, CvMat *P2)
{
    float v1 = 1, v2 = 0, v3 = 0, lambda = 1; // arbitrary
    float sub;
    int n, m;
    CvMat *E2x = cvCreateMat(3, 3, CV_64FC1);
    CvMat *SF = cvCreateMat(3, 3, CV_64FC1);

    // P1 = [I|0]
    float p1[12] = {1, 0, 0, 0,
                    0, 1, 0, 0,
                    0, 0, 1, 0};
    Array2CvMat(p1, P1, 3, 4);

    // SF = [e2]x F
    MakeSkewMatrix(e2, E2x);
    cvMatMul(E2x, F, SF);

    // P2 = [SF + e2 v^T | lambda * e2]
    float v[3] = {v1, v2, v3};
    for(n = 0; n < 3; n++){
        for(m = 0; m < 3; m++){
            sub = cvmGet(SF, n, m) + v[m] * cvmGet(e2, n, 0);
            cvmSet(P2, n, m, sub);
        }
        cvmSet(P2, n, 3, lambda * cvmGet(e2, n, 0));
    }

    cvReleaseMat(&E2x); cvReleaseMat(&SF);
}
//
// function : GetEpipolarLine
// usage : GetEpipolarLine(F, x, l, flag);
//------------------------------------------------------
// this function returns the epipolar line l.
// when flag = true,  l2 = F x1
// when flag = false, l1 = F^T x2
//
void GetEpipolarLine(CvMat *F, CvMat *x, CvMat *l, bool pointFlag)
{
    CvMat *FTrans = cvCreateMat(3, 3, CV_64FC1);

    if(pointFlag == true){
        // compute epipolar line on image 2
        cvMatMul(F, x, l);
    }else{
        // compute epipolar line on image 1
        cvTranspose(F, FTrans);
        cvMatMul(FTrans, x, l);
    }

    cvReleaseMat(&FTrans);
}
//
// function : FindCorrespondences
// usage : FindCorrespondences(corspPt, image1, image2, interestPt1, interestPt2);
//---------------------------------------------------------------------------------
// this function returns correspondences of 2 sets of interest points.
// for the image that has the smaller number of interest points,
// find correspondences using NCC,
// then check the ordering constraint.
// if the correspondences satisfy the ordering constraint,
// then update the correspondence map (corspPt).
//
void FindCorrespondences(PairPoints *corspPt,
                         IplImage *image1, IplImage *image2,
                         Points *interestPt1, Points *interestPt2)
{
    int corspOrder[500];
    int n1 = interestPt1->len;
    int n2 = interestPt2->len;
    int corspIndex;
    int numOfCorsp;
    int k;

    corspIndex = corspPt->len;
    if(corspIndex >= MAX_POINT_SIZE){
        printf("point buffer is full. overwrite the data\n");
        if(n1 < n2) corspIndex -= n1;
        else corspIndex -= n2;
    }

    if(n1 < n2){
        numOfCorsp = n1;
        CorrespondencePair(corspOrder, image1, image2, interestPt1, interestPt2);
        if(IsRightOrder(corspOrder, n1)){ // check ordering constraint
            for(k = 0; k < n1; k++){ // update correspondences
                if(corspOrder[k] != NO_COR){
                    corspPt->pt1I[corspIndex] = interestPt1->ptI[k];
                    corspPt->pt1J[corspIndex] = interestPt1->ptJ[k];
                    corspPt->pt2I[corspIndex] = interestPt2->ptI[corspOrder[k]];
                    corspPt->pt2J[corspIndex] = interestPt2->ptJ[corspOrder[k]];
                    corspIndex++;
                }
            }
        }
    }else{
        numOfCorsp = n2;
        CorrespondencePair(corspOrder, image2, image1, interestPt2, interestPt1);
        if(IsRightOrder(corspOrder, n2)){ // check ordering constraint
            for(k = 0; k < n2; k++){ // update correspondences
                if(corspOrder[k] != NO_COR){
                    corspPt->pt1I[corspIndex] = interestPt1->ptI[corspOrder[k]];
                    corspPt->pt1J[corspIndex] = interestPt1->ptJ[corspOrder[k]];
                    corspPt->pt2I[corspIndex] = interestPt2->ptI[k];
                    corspPt->pt2J[corspIndex] = interestPt2->ptJ[k];
                    corspIndex++;
                }
            }
        }
    }
    corspPt->len = corspIndex;
}
//
// function : CorrespondencePair
// usage : CorrespondencePair(corspOrder, sImage, lImage, sNumPt, lNumPt);
//----------------------------------------------------------------------
// this function returns correspondences of 2 sets of interest points.
// for the image that has the smaller number of interest points,
// find correspondences using NCC.
//
void CorrespondencePair(int *corspOrder, IplImage *sImage, IplImage *lImage,
                        Points *sNumPt, Points *lNumPt)
{
    int s, l, i, j, ii, jj;
    int sN, lN;
    float threshold = DIST_THRESHOLD;
    float maxNcc, value, d;
    int blockHeight = WINDOWSIZE_NCC;
    int blockWidth = WINDOWSIZE_NCC;
    int channels = sImage->nChannels;
    IplImage *sBlock = 0, *lBlock = 0;

    // create image blocks
    sBlock = cvCreateImage(cvSize(blockWidth, blockHeight), IPL_DEPTH_8U, channels);
    lBlock = cvCreateImage(cvSize(blockWidth, blockHeight), IPL_DEPTH_8U, channels);

    sN = sNumPt->len;
    lN = lNumPt->len;

    for(s = 0; s < sN; s++){
        // make sImage block
        i = cvRound(sNumPt->ptI[s]);
        j = cvRound(sNumPt->ptJ[s]);
        MakeImageBlock(sBlock, sImage, i, j);
        maxNcc = -10;
        corspOrder[s] = NO_COR; // no correspondence yet
        for(l = 0; l < lN; l++){ // search interest points in lImage
            // make lImage block
            ii = cvRound(lNumPt->ptI[l]);
            jj = cvRound(lNumPt->ptJ[l]);

            // distance constraint
            d = sqrt(pow(ii - i, 2) + pow(jj - j, 2));
            if(d < threshold){
                MakeImageBlock(lBlock, lImage, ii, jj);
                // calculate ncc
                value = NCC(sBlock, lBlock);
                // take the index of the corresponding point that gives maximum NCC
                if(value > maxNcc){
                    maxNcc = value;
                    // store the corresponding point's index
                    // (its position in the buffer lNumPt)
                    corspOrder[s] = l;
                }
            }
        }
    }

    cvReleaseImage(&sBlock);
    cvReleaseImage(&lBlock);
}
bool IsRightOrder(int *corspOrder, int len)
{
    int n, index, preIndex;

    preIndex = -1;
    for(n = 0; n < len; n++){
        index = corspOrder[n];
        if(index == NO_COR){
            continue; // skip unmatched points
        }
        if(index <= preIndex){
            return(false);
        }
        preIndex = index;
    }

    return(true);
}
void CalculateWorldPosition(PairPoints *corspPt, Points3D *pt3D,
                            CvMat *P1, CvMat *P2)
{
    CvMat *A = cvCreateMat(4, 4, CV_64FC1);
    CvMat *D = cvCreateMat(4, 4, CV_64FC1);
    CvMat *U = cvCreateMat(4, 4, CV_64FC1);
    CvMat *V = cvCreateMat(4, 4, CV_64FC1);
    float X[4];
    int k, n, len;
    float x1, y1, x2, y2;

    len = corspPt->len;
    pt3D->len = len;

    for(k = 0; k < len; k++){
        x1 = corspPt->pt1J[k];
        y1 = corspPt->pt1I[k];
        x2 = corspPt->pt2J[k];
        y2 = corspPt->pt2I[k];

        // set A from eq.(4) : A X = 0
        cvmSet(A, 0, 0, x1 * cvmGet(P1, 2, 0) - cvmGet(P1, 0, 0));
        cvmSet(A, 0, 1, x1 * cvmGet(P1, 2, 1) - cvmGet(P1, 0, 1));
        cvmSet(A, 0, 2, x1 * cvmGet(P1, 2, 2) - cvmGet(P1, 0, 2));
        cvmSet(A, 0, 3, x1 * cvmGet(P1, 2, 3) - cvmGet(P1, 0, 3));

        cvmSet(A, 1, 0, y1 * cvmGet(P1, 2, 0) - cvmGet(P1, 1, 0));
        cvmSet(A, 1, 1, y1 * cvmGet(P1, 2, 1) - cvmGet(P1, 1, 1));
        cvmSet(A, 1, 2, y1 * cvmGet(P1, 2, 2) - cvmGet(P1, 1, 2));
        cvmSet(A, 1, 3, y1 * cvmGet(P1, 2, 3) - cvmGet(P1, 1, 3));

        cvmSet(A, 2, 0, x2 * cvmGet(P2, 2, 0) - cvmGet(P2, 0, 0));
        cvmSet(A, 2, 1, x2 * cvmGet(P2, 2, 1) - cvmGet(P2, 0, 1));
        cvmSet(A, 2, 2, x2 * cvmGet(P2, 2, 2) - cvmGet(P2, 0, 2));
        cvmSet(A, 2, 3, x2 * cvmGet(P2, 2, 3) - cvmGet(P2, 0, 3));

        cvmSet(A, 3, 0, y2 * cvmGet(P2, 2, 0) - cvmGet(P2, 1, 0));
        cvmSet(A, 3, 1, y2 * cvmGet(P2, 2, 1) - cvmGet(P2, 1, 1));
        cvmSet(A, 3, 2, y2 * cvmGet(P2, 2, 2) - cvmGet(P2, 1, 2));
        cvmSet(A, 3, 3, y2 * cvmGet(P2, 2, 3) - cvmGet(P2, 1, 3));

        ////////////////////////////////////////////////
        // solve : min ||AX||, s.t. ||X|| = 1         //
        ////////////////////////////////////////////////
        cvSVD(A, D, U, V, CV_SVD_U_T|CV_SVD_V_T);
        // A = U^T D V in OpenCV : A = U' D' V'^T in the text

        // take last column of V' : last row of V
        for(n = 0; n < 4; n++){
            X[n] = cvmGet(V, 3, n);
        }

        if(X[3] != 0){
            pt3D->x[k] = X[0] / X[3];
            pt3D->y[k] = X[1] / X[3];
            pt3D->z[k] = X[2] / X[3];
        }else{
            printf("error : X is a point at infinity\n");
            exit(0);
        }
    }

    // release matrices
    cvReleaseMat(&A);
    cvReleaseMat(&D); cvReleaseMat(&U); cvReleaseMat(&V);
}
3.4 Program Header File : utility.h
//
// file : utility.h
//-----------------------
// this file contains utility functions to
// deal with general processing.
//

#define OFF 0
#define ON 1
#define EPS 0.5
#define IMPLEMENTATION 2
#define min(a, b) ((a <= b) ? a : b)
#define max(a, b) ((a >= b) ? a : b)
#define MAX_POINT_SIZE 5000
#define WINDOWSIZE_NCC 9
typedef struct{
    int len;
    float pt1I[MAX_POINT_SIZE];
    float pt1J[MAX_POINT_SIZE];
    float pt2I[MAX_POINT_SIZE];
    float pt2J[MAX_POINT_SIZE];
}PairPoints;

typedef struct{
    int len;
    float ptI[MAX_POINT_SIZE];
    float ptJ[MAX_POINT_SIZE];
}Points;

typedef struct{
    int len;
    float x[MAX_POINT_SIZE];
    float y[MAX_POINT_SIZE];
    float z[MAX_POINT_SIZE];
}Points3D;
void Array2CvMat(float *arr, CvMat *cvArr, int row, int column);
void CvMat2Array(CvMat *cvArr, float *arr, int row, int column);
void InitializeImage(IplImage *image);
void CombineTwoImages(IplImage *image1, IplImage *image2, IplImage *outImage);
void WriteImage(IplImage *image, char *imageName);
void DrawImagePair(IplImage *inImage1, IplImage *inImage2, char *outName, bool flag);
void MakeImageBlock(IplImage *block, IplImage *image, int centPosI, int centPosJ);
void TransformImage(IplImage *inImage, IplImage *outImage, CvMat *H);
void TransformIntPoint(CvPoint *in, CvPoint *out, CvMat *H);
void TransformPoints(Points *pt, Points *transPt, CvMat *H);
void CenteringHomography(CvMat *H, CvPoint *imageCenter);
void DataNormalization(Points *x, CvMat *T);
void DrawLine(IplImage *image, CvMat *l);
void MakeSkewMatrix(CvMat *a, CvMat *Ax);
float NCC(IplImage *block1, IplImage *block2);
3.5 Program Function File : utility.cpp
//
// file : utility.cpp
//-----------------------
// this file contains utility functions to
// deal with general processing.
//
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "cv.h"
#include "cxcore.h"
#include "highgui.h"
#include "utility.h"
void Array2CvMat(float *arr, CvMat *cvArr, int row, int column)
{
    int i, j;

    for(i = 0; i < row; i++){
        for(j = 0; j < column; j++){
            cvmSet(cvArr, i, j, arr[i*column + j]);
        }
    }
}

void CvMat2Array(CvMat *cvArr, float *arr, int row, int column)
{
    int i, j;

    for(i = 0; i < row; i++){
        for(j = 0; j < column; j++){
            arr[i*column + j] = cvmGet(cvArr, i, j);
        }
    }
}
void CombineTwoImages(IplImage *image1, IplImage *image2, IplImage *outImage)
{
    int i, j, k;
    uchar *outImageData = 0, *image1Data = 0, *image2Data = 0;

    int height = image1->height;
    int width = image1->width;
    int step = image1->widthStep;
    int channels = image1->nChannels;
    int outWidth = outImage->width;
    int outHeight = outImage->height;
    int outStep = outImage->widthStep;

    // the output image must be either a left-right or an up-down pair
    if(!((outWidth == width * 2 && outHeight == height)
         || (outWidth == width && outHeight == height * 2))){
        printf("image combining error\n");
        exit(0);
    }

    outImageData = (uchar *)outImage->imageData;
    image1Data = (uchar *)image1->imageData;
    image2Data = (uchar *)image2->imageData;

    for(i = 0; i < outHeight; i++){
        for(j = 0; j < outWidth; j++){
            for(k = 0; k < channels; k++){
                if(i < height && j < width){
                    // top-left region : copy image1
                    outImageData[i*outStep + j*channels + k]
                        = image1Data[i*step + j*channels + k];
                }else if(i >= height && j < width){
                    // bottom region (up-down pair) : copy image2
                    outImageData[i*outStep + j*channels + k]
                        = image2Data[(i-height)*step + j*channels + k];
                }else if(i < height && j >= width){
                    // right region (left-right pair) : copy image2
                    outImageData[i*outStep + j*channels + k]
                        = image2Data[i*step + (j-width)*channels + k];
                }else{
                    printf("there is no i > height & j > width \n");
                    exit(0);
                }
            }
        }
    }
}
void WriteImage(IplImage *image, char *imageName)
{
    if(!cvSaveImage(imageName, image)){
        printf("Could not save: %s\n", imageName);
    }
}

void DrawImagePair(IplImage *inImage1, IplImage *inImage2, char *outName, bool flag)
{
    IplImage *outImage;

    int height = inImage1->height;
    int width = inImage1->width;
    int channel = inImage1->nChannels;

    if(flag == true){ // left-right pair
        outImage = cvCreateImage(cvSize(width * 2, height), IPL_DEPTH_8U, channel);
    }else{            // up-down pair
        outImage = cvCreateImage(cvSize(width, height * 2), IPL_DEPTH_8U, channel);
    }

    CombineTwoImages(inImage1, inImage2, outImage);
    cvNamedWindow("output image", CV_WINDOW_AUTOSIZE);
    cvShowImage("output image", outImage);
    cvWaitKey(0);
    cvDestroyWindow("output image");

    // write output image
    WriteImage(outImage, outName);

    cvReleaseImage(&outImage);
}
//
// function : MakeImageBlock
// usage : MakeImageBlock(block, image, centPosI, centPosJ);
// ------------------------------------------------------------
// This function copies a block region of the image into a block.
// For example, if the block size is 3 by 3 and the block is centered
// at (i, j) on the image, the block is filled with
// image(i-1, j-1) ... image(i+1, j+1). Positions outside the image
// are clamped to the nearest border pixel.
//
void MakeImageBlock(IplImage *block, IplImage *image, int centPosI, int centPosJ)
{
    uchar *blockData = 0, *imageData = 0;
    int blockHeight, blockWidth, imageHeight, imageWidth;
    int blockStep, channels, imageStep;
    int i, j, k, posI, posJ;

    blockHeight = block->height;
    blockWidth = block->width;
    imageHeight = image->height;
    imageWidth = image->width;
    channels = block->nChannels;
    blockStep = block->widthStep;
    imageStep = image->widthStep;
    blockData = (uchar *)block->imageData;
    imageData = (uchar *)image->imageData;

    for(i = 0; i < blockHeight; i++){
        for(j = 0; j < blockWidth; j++){
            for(k = 0; k < channels; k++){
                posI = centPosI + i - blockHeight / 2;
                posJ = centPosJ + j - blockWidth / 2;
                // clamp to the image boundary
                posI = min(max(posI, 0), imageHeight - 1);
                posJ = min(max(posJ, 0), imageWidth - 1);
                blockData[i*blockStep + j*channels + k]
                    = imageData[posI*imageStep + posJ*channels + k];
            }
        }
    }
}
//
// function : TransformImage
// usage : TransformImage(inImage, outImage, H);
// ---------------------------------------------
// This function transforms the input image using H.
//
void TransformImage(IplImage *inImage, IplImage *outImage, CvMat *H)
{
    uchar *inData;
    uchar *outData;
    int height, width, step, channels;
    int i, j, k;

    // get the input image data
    height = inImage->height;
    width = inImage->width;
    step = inImage->widthStep;
    channels = inImage->nChannels;
    inData = (uchar *)inImage->imageData;
    outData = (uchar *)outImage->imageData;

    // apply the transform to get the target image
    // -----------------------------------------------------
    // out(x') = in(x) : has 2 implementation forms
    // case 1 : out(Hx) = in(x)        (forward mapping; may leave holes)
    // case 2 : out(x') = in(inv(H)x') (backward mapping)
    CvMat *invH = cvCreateMat(3, 3, CV_64FC1);
    cvInvert(H, invH, CV_SVD);

    float h[9];
    if(IMPLEMENTATION == 1){ // case 1 : out(Hx) = in(x)
        CvMat2Array(H, h, 3, 3);
    }else{                   // case 2 : x = inv(H)x'
        CvMat2Array(invH, h, 3, 3);
    }

    int ii, jj;
    float x1, x2, x3;
    for(i = 0; i < height; i++){    // case 1 : i, j : x,  ii, jj : x', x' = Hx
        for(j = 0; j < width; j++){ // case 2 : i, j : x', ii, jj : x,  x = inv(H)x'
            for(k = 0; k < channels; k++){ // x : domain, x' : range
                x1 = h[0] * j + h[1] * i + h[2];
                x2 = h[3] * j + h[4] * i + h[5];
                x3 = h[6] * j + h[7] * i + h[8];
                ii = min(height - 1, max(0, (int)(x2 / x3)));
                jj = min(width - 1, max(0, (int)(x1 / x3)));
                if(IMPLEMENTATION == 1){ // case 1 : out(Hx) = in(x)
                    if(ii == 0 || ii == height - 1 || jj == 0 || jj == width - 1){
                        outData[ii*step + jj*channels + k] = 0;
                    }else{
                        outData[ii*step + jj*channels + k]
                            = inData[i*step + j*channels + k];
                    }
                }else{ // case 2 : out(x') = in(inv(H)x')
                    if(ii == 0 || ii == height - 1 || jj == 0 || jj == width - 1){
                        outData[i*step + j*channels + k] = 0;
                    }else{
                        outData[i*step + j*channels + k]
                            = inData[ii*step + jj*channels + k];
                    }
                }
            }
        }
    }
    cvReleaseMat(&invH); // release the inverse (was leaked)
}
//
// function : TransformIntPoint
// usage : TransformIntPoint(inPoint, outPoint, H);
// ---------------------------------------------
// This function transforms the input point using H.
//
void TransformIntPoint(CvPoint *in, CvPoint *out, CvMat *H)
{
    float x = in->x;
    float y = in->y;
    float h[9];

    CvMat2Array(H, h, 3, 3);

    float x1 = h[0] * x + h[1] * y + h[2];
    float x2 = h[3] * x + h[4] * y + h[5];
    float x3 = h[6] * x + h[7] * y + h[8];
    if(x3 == 0){ // point at infinity : skip the homogeneous division
        out->x = cvRound(x1);
        out->y = cvRound(x2);
    }else{
        out->x = cvRound(x1 / x3);
        out->y = cvRound(x2 / x3);
    }
}
//
// function : TransformPoints
// usage : TransformPoints(pt, transPt, H);
//-----------------------------------------------
// this function returns the transformed points.
//
void TransformPoints(Points *pt, Points *transPt, CvMat *H)
{
    int n;
    float x, y;
    float h[9];

    CvMat2Array(H, h, 3, 3); // read H once, outside the loop

    transPt->len = pt->len;
    // transform points
    for(n = 0; n < pt->len; n++){
        x = pt->ptJ[n];
        y = pt->ptI[n];

        float x1 = h[0] * x + h[1] * y + h[2];
        float x2 = h[3] * x + h[4] * y + h[5];
        float x3 = h[6] * x + h[7] * y + h[8];
        if(x3 == 0){ // point at infinity : skip the homogeneous division
            transPt->ptJ[n] = x1;
            transPt->ptI[n] = x2;
        }else{
            transPt->ptJ[n] = x1 / x3;
            transPt->ptI[n] = x2 / x3;
        }
    }
}
//
// function : CenteringHomography
// usage : CenteringHomography(H, center);
//---------------------------------------------------------
// this function returns the translated homography.
// the added translation makes sure that the transformed
// image center is preserved.
//
void CenteringHomography(CvMat *H, CvPoint *imageCenter)
{
    CvPoint out;
    int i, j;
    CvMat *T = cvCreateMat(3, 3, CV_64FC1);
    CvMat *inH = cvCreateMat(3, 3, CV_64FC1);

    for(i = 0; i < 3; i++){
        for(j = 0; j < 3; j++){
            cvmSet(inH, i, j, cvmGet(H, i, j));
        }
    }

    TransformIntPoint(imageCenter, &out, inH);
    float t[9] = {1, 0, (imageCenter->x - out.x),
                  0, 1, (imageCenter->y - out.y),
                  0, 0, 1};
    Array2CvMat(t, T, 3, 3);
    cvMatMul(T, inH, H);

    cvReleaseMat(&T);
    cvReleaseMat(&inH);
}
//
// function : DataNormalization
// usage : DataNormalization(pt, T);
// ------------------------------------------------------
// This function normalizes pt and returns the similarity
// transform, T.
// The centroid of pt is mapped to (0, 0), and the average
// distance of the normalized pt becomes sqrt(2).
//
void DataNormalization(Points *pt, CvMat *T)
{
    int i;
    int numOfPoints = pt->len;
    float sumI = 0, sumJ = 0, meanI = 0, meanJ = 0;
    float squareDist = 0, sumDist = 0, meanDist = 0;
    float scale = 0;
    float x, y, xx, yy, ww;

    // calculate the centroid
    for(i = 0; i < numOfPoints; i++){
        sumI += pt->ptI[i];
        sumJ += pt->ptJ[i];
    }
    meanI = sumI / numOfPoints;
    meanJ = sumJ / numOfPoints;

    // calculate the mean distance from the centroid
    for(i = 0; i < numOfPoints; i++){
        squareDist = pow(pt->ptI[i] - meanI, 2)
                   + pow(pt->ptJ[i] - meanJ, 2);
        sumDist += sqrt(squareDist);
    }
    meanDist = sumDist / numOfPoints;

    // set the similarity transform.
    // note : x corresponds to ptJ (column) and y to ptI (row), so the
    // translation must be -scale*meanJ for x and -scale*meanI for y
    // (the two were swapped in the original listing)
    scale = sqrt(2) / meanDist;
    float t[9] = {scale, 0, -scale * meanJ,
                  0, scale, -scale * meanI,
                  0, 0, 1};
    Array2CvMat(t, T, 3, 3);

    // data normalization
    for(i = 0; i < numOfPoints; i++){
        x = pt->ptJ[i];
        y = pt->ptI[i];

        xx = t[0] * x + t[1] * y + t[2];
        yy = t[3] * x + t[4] * y + t[5];
        ww = t[6] * x + t[7] * y + t[8];

        xx = xx / ww;
        yy = yy / ww;

        pt->ptJ[i] = xx;
        pt->ptI[i] = yy;
    }
}
void DrawLine(IplImage *image, CvMat *line)
{
    int width = image->width;
    int height = image->height;
    bool pos1Flag = false, pos2Flag = false;
    CvPoint pos1, pos2;
    int x, y;
    int i;
    float pt[4][3];
    float l[3];

    CvMat2Array(line, l, 3, 1);

    // make the 4 image boundary lines
    float l1[3] = {0, 1, 0};           // top    : y = 0
    float l2[3] = {0, 1, -(height-1)}; // bottom : y = height-1
    float l3[3] = {1, 0, 0};           // left   : x = 0
    float l4[3] = {1, 0, -(width-1)};  // right  : x = width-1

    CrossPointOf2Lines(l, l1, pt[0]);
    CrossPointOf2Lines(l, l2, pt[1]);
    CrossPointOf2Lines(l, l3, pt[2]);
    CrossPointOf2Lines(l, l4, pt[3]);

    // find the 2 intersection points that lie inside the image
    for(i = 0; i < 4; i++){
        if(pt[i][2] == 0){ // intersection at infinity : push it out of range
            x = 100000000;
            y = 100000000;
        }else{
            x = cvRound(pt[i][0] / pt[i][2]);
            y = cvRound(pt[i][1] / pt[i][2]);
        }

        if((x >= 0 && x < width) && (y >= 0 && y < height)){
            if(pos1Flag == false){
                pos1 = cvPoint(x, y);
                pos1Flag = true;
            }else if(pos2Flag == false && (pos1.x != x || pos1.y != y)){
                pos2 = cvPoint(x, y);
                pos2Flag = true;
            }
        }
    }

    if(pos1Flag == true && pos2Flag == true){
        cvLine(image, pos1, pos2, cvScalar(0, 0, 255), 2);
    }else{
        printf("error : line cannot be drawn\n");
        exit(0);
    }
}
//
// function : MakeSkewMatrix
// usage : MakeSkewMatrix(a, Ax);
//-------------------------------------------
// this function returns the skew-symmetric matrix [a]x of
// the given vector a
//
void MakeSkewMatrix(CvMat *a, CvMat *Ax)
{
    float ax = cvmGet(a, 0, 0);
    float ay = cvmGet(a, 1, 0);
    float az = cvmGet(a, 2, 0);

    float aSkew[9] = {0, -az, ay,
                      az, 0, -ax,
                      -ay, ax, 0};

    Array2CvMat(aSkew, Ax, 3, 3);
}
//
// function : NCC
// usage : nccValue = NCC(block1, block2);
//---------------------------------------------
// This function returns the normalized cross-correlation
// (NCC) value between block1 and block2.
//
float NCC(IplImage *block1, IplImage *block2)
{
    int i, j, k;
    uchar *block1Data = 0, *block2Data = 0;

    int height = block1->height;
    int width = block1->width;
    int channels = block1->nChannels;
    int step = block1->widthStep;

    // variable-length arrays (a GCC extension) : one entry per channel
    float meanB1[channels];
    float meanB2[channels];
    float varB1[channels];
    float varB2[channels];
    float numerTerm[channels];
    float denomTerm[channels];
    float ncc = 0;

    block1Data = (uchar *)block1->imageData;
    block2Data = (uchar *)block2->imageData;

    // initialize
    for(k = 0; k < channels; k++){
        meanB1[k] = 0;
        meanB2[k] = 0;
        varB1[k] = 0;
        varB2[k] = 0;
        numerTerm[k] = 0;
    }

    // calculate mean values
    for(i = 0; i < height; i++){
        for(j = 0; j < width; j++){
            for(k = 0; k < channels; k++){
                meanB1[k] += (float)block1Data[i*step + j*channels + k];
                meanB2[k] += (float)block2Data[i*step + j*channels + k];
            }
        }
    }
    for(k = 0; k < channels; k++){
        meanB1[k] = meanB1[k] / (height * width);
        meanB2[k] = meanB2[k] / (height * width);
    }

    // accumulate the cross term and the variances
    for(i = 0; i < height; i++){
        for(j = 0; j < width; j++){
            for(k = 0; k < channels; k++){
                numerTerm[k] += ((float)block1Data[i*step + j*channels + k] - meanB1[k])
                              * ((float)block2Data[i*step + j*channels + k] - meanB2[k]);
                varB1[k] += pow(((float)block1Data[i*step + j*channels + k] - meanB1[k]), 2);
                varB2[k] += pow(((float)block2Data[i*step + j*channels + k] - meanB2[k]), 2);
            }
        }
    }

    for(k = 0; k < channels; k++){
        denomTerm[k] = pow(varB1[k]*varB2[k], 0.5);
        if(denomTerm[k] != 0){ // a zero-variance block contributes 0
            ncc += numerTerm[k] / denomTerm[k];
        }
    }

    // the NCC of a color image block is taken as
    // the average of the per-channel NCCs
    ncc = ncc / channels;

    return(ncc);
}