Calibrates camera with single precision
void cvCalibrateCamera( int numImages, int* numPoints, CvSize imageSize, CvPoint2D32f* imagePoints32f, CvPoint3D32f* objectPoints32f, CvVect32f distortion32f, CvMatr32f cameraMatrix32f, CvVect32f transVects32f, CvMatr32f rotMatrs32f, int useIntrinsicGuess );
The function cvCalibrateCamera calculates the camera intrinsic and extrinsic parameters using the known coordinates of points on the pattern object and their projections in the pattern object images.
Calibrates camera with double precision
void cvCalibrateCamera_64d( int numImages, int* numPoints, CvSize imageSize, CvPoint2D64d* imagePoints, CvPoint3D64d* objectPoints, CvVect64d distortion, CvMatr64d cameraMatrix, CvVect64d transVects, CvMatr64d rotMatrs, int useIntrinsicGuess );
The function cvCalibrateCamera_64d is essentially the same as the function cvCalibrateCamera, but uses double-precision arithmetic.
Converts rotation matrix to rotation vector and vice versa with single precision
void cvRodrigues( CvMat* rotMatrix, CvMat* rotVector, CvMat* jacobian, int convType);
The function cvRodrigues converts a rotation matrix to a rotation vector or vice versa; the direction of the conversion is specified by convType, and the Jacobian of the transformation may optionally be computed as well.
Corrects camera lens distortion
void cvUnDistortOnce( const CvArr* srcImage, CvArr* dstImage, const float* intrMatrix, const float* distCoeffs, int interpolate=1 );
The function cvUnDistortOnce corrects camera lens distortion in the case of a single image. The matrix of camera intrinsic parameters and the distortion coefficients k1, k2, p1, and p2 must be calculated beforehand by the function cvCalibrateCamera.
Calculates arrays of distorted point indices and interpolation coefficients
void cvUnDistortInit( const CvArr* srcImage, CvArr* undistMap, const float* intrMatrix, const float* distCoeffs, int interpolate=1 );
The function cvUnDistortInit calculates arrays of distorted point indices and interpolation coefficients using the known matrix of camera intrinsic parameters and the distortion coefficients. The result is an undistortion map for cvUnDistort.
The matrix of camera intrinsic parameters and the distortion coefficients may be calculated by cvCalibrateCamera.
Corrects camera lens distortion
void cvUnDistort( const void* srcImage, void* dstImage, const void* undistMap, int interpolate=1 );
The function cvUnDistort corrects camera lens distortion using a previously calculated undistortion map, which makes it faster than cvUnDistortOnce when several images taken with the same camera parameters are processed.
Finds approximate positions of internal corners of the chessboard
int cvFindChessBoardCornerGuesses( IplImage* img, IplImage* thresh, CvSize etalonSize, CvPoint2D32f* corners, int* cornerCount );
The function cvFindChessBoardCornerGuesses attempts to determine whether the input image is a view of the chessboard pattern and to locate the internal chessboard corners. The function returns a non-zero value if all the corners have been found and placed in a certain order (row by row, left to right in every row); if the function fails to find all the corners or to reorder them, it returns 0. For example, a simple chessboard has 8 x 8 squares and 7 x 7 internal corners, that is, points where the squares touch. The word "approximate" in the above description means that the corner coordinates found may differ from the actual coordinates by a couple of pixels. To get more precise coordinates, the user may use the function cvFindCornerSubPix.
Finds extrinsic camera parameters for pattern
void cvFindExtrinsicCameraParams( int numPoints, CvSize imageSize, CvPoint2D32f* imagePoints32f, CvPoint3D32f* objectPoints32f, CvVect32f focalLength32f, CvPoint2D32f principalPoint32f, CvVect32f distortion32f, CvVect32f rotVect32f, CvVect32f transVect32f );
The function cvFindExtrinsicCameraParams finds the extrinsic camera parameters, that is, the rotation and translation vectors relating the pattern to the camera, from the known intrinsic parameters (focal length, principal point, and distortion coefficients).
Finds extrinsic camera parameters for pattern with double precision
void cvFindExtrinsicCameraParams_64d( int numPoints, CvSize imageSize, CvPoint2D64d* imagePoints, CvPoint3D64d* objectPoints, CvVect64d focalLength, CvPoint2D64d principalPoint, CvVect64d distortion, CvVect64d rotVect, CvVect64d transVect );
The function cvFindExtrinsicCameraParams_64d finds the extrinsic parameters for the pattern with double precision.
Initializes structure containing object information
CvPOSITObject* cvCreatePOSITObject( CvPoint3D32f* points, int numPoints );
The function cvCreatePOSITObject allocates memory for the object structure and computes the object inverse matrix.
The preprocessed object data is stored in the structure CvPOSITObject, internal for OpenCV, which means that the user cannot directly access the structure data. The user may only create this structure and pass its pointer to the function.
The object is defined as a set of points given in an object-related coordinate system. The function cvPOSIT computes a vector that begins at the camera-related coordinate system center and ends at points[0] of the object.
Once the work with a given object is finished, the function cvReleasePOSITObject must be called to free memory.
Implements POSIT algorithm
void cvPOSIT( CvPoint2D32f* imagePoints, CvPOSITObject* pObject, double focalLength, CvTermCriteria criteria, CvMatrix3* rotation, CvPoint3D32f* translation );
The function cvPOSIT implements the POSIT algorithm. Image coordinates are given in a camera-related coordinate system. The focal length may be retrieved using the camera calibration functions. At every iteration of the algorithm a new perspective projection of the estimated pose is computed.
The difference norm between two projections is the maximal distance between corresponding points. The parameter criteria.epsilon serves to stop the algorithm once the difference becomes small.
Deallocates 3D object structure
void cvReleasePOSITObject( CvPOSITObject** ppObject );
The function cvReleasePOSITObject releases memory previously allocated by the function cvCreatePOSITObject.
Calculates homography matrix for oblong planar object (e.g. arm)
void cvCalcImageHomography( float* line, CvPoint3D32f* center, float* intrinsic, float homography[3][3]);
The function cvCalcImageHomography calculates the homography matrix for the initial image transformation from the image plane to the plane defined by the 3D oblong object line (see Figure 6-10 in the 3D Reconstruction chapter of the OpenCV Guide).
Calculates scanlines coordinates for two cameras by fundamental matrix
void cvMakeScanlines( CvMatrix3* matrix, CvSize imgSize, int* scanlines1, int* scanlines2, int* lens1, int* lens2, int* numlines );
The function cvMakeScanlines finds coordinates of the scanlines for two images and stores the number of scanlines in numlines. If the pointers scanlines1 or scanlines2 are NULL, the function does nothing except calculating the number of scanlines.
Rectifies image
void cvPreWarpImage( int numLines, IplImage* img, uchar* dst, int* dstNums, int* scanlines );
The function cvPreWarpImage rectifies the image so that the scanlines in the rectified image are horizontal. The output buffer of size max(width,height)*numscanlines*3 must be allocated before calling the function.
Retrieves scanlines from rectified image and breaks them down into runs
void cvFindRuns( int numLines, uchar* prewarp_1, uchar* prewarp_2, int* lineLens_1, int* lineLens_2, int* runs_1, int* runs_2, int* numRuns_1, int* numRuns_2 );
The function cvFindRuns retrieves scanlines from the rectified image and breaks each scanline down into several runs, that is, series of pixels of almost the same brightness.
Finds correspondence between two sets of runs of two warped images
void cvDynamicCorrespondMulti( int lines, int* first, int* firstRuns, int* second, int* secondRuns, int* firstCorr, int* secondCorr );
The function cvDynamicCorrespondMulti finds correspondence between two sets of runs of two images. Memory must be allocated before calling this function. Memory size for one array of correspondence information is max(width,height)*numscanlines*3*sizeof(int).
Calculates coordinates of scanlines of image from virtual camera
void cvMakeAlphaScanlines( int* scanlines_1, int* scanlines_2, int* scanlinesA, int* lens, int numlines, float alpha );
The function cvMakeAlphaScanlines finds coordinates of scanlines for the virtual camera with the given camera position.
Memory must be allocated before calling this function. Memory size for the array of correspondence runs is numscanlines*2*4*sizeof(int). Memory size for the array of the scanline lengths is numscanlines*2*4*sizeof(int).
Morphs two pre-warped images using information about stereo correspondence
void cvMorphEpilinesMulti( int lines, uchar* firstPix, int* firstNum, uchar* secondPix, int* secondNum, uchar* dstPix, int* dstNum, float alpha, int* first, int* firstRuns, int* second, int* secondRuns, int* firstCorr, int* secondCorr );
The function cvMorphEpilinesMulti morphs two pre-warped images using information about correspondence between the scanlines of two images.
Warps rectified morphed image back
void cvPostWarpImage( int numLines, uchar* src, int* srcNums, IplImage* img, int* scanlines );
The function cvPostWarpImage warps the resultant image from the virtual camera by storing its rows across the scanlines whose coordinates are calculated by cvMakeAlphaScanlines.
Deletes moire in given image
void cvDeleteMoire( IplImage* img );
The function cvDeleteMoire deletes moire from the given image. The post-warped image may have black (uncovered) points because of possible holes between neighboring scanlines. The function deletes the moire (black pixels) from the image by substituting values of neighboring pixels for the black pixels. If all the scanlines are horizontal, the function may be omitted.
Calculates fundamental matrix from corresponding points in two images
int cvFindFundamentalMat( CvMat* points1, CvMat* points2, CvMat* fundMatr, int method, double param1, double param2, CvMat* status=0);
The epipolar geometry is described by the following equation:
p2^T * F * p1 = 0,
where F is the fundamental matrix and p1, p2 are corresponding points on the two images.
The function cvFindFundamentalMat calculates the fundamental matrix using one of the four methods listed above and returns the number of fundamental matrices found: 0 if no matrix could be found, or 1 or 3 if the matrix or matrices have been found successfully.
The calculated fundamental matrix may be passed further to the function cvComputeCorrespondEpilines, which computes coordinates of corresponding epilines on the two images.
The 7-point method uses exactly 7 points and can find 1 or 3 fundamental matrices. It returns the number of matrices found and, if there is room in the destination array to keep all the detected matrices, stores all of them there; otherwise it stores only one of the matrices.
All other methods use 8 or more points and return a single fundamental matrix.
/*====== Example of code for one matrix ======*/
int numPoints = 100;
CvMat* points1;
CvMat* points2;
CvMat* status;
CvMat* fundMatr;

points1 = cvCreateMat(2,numPoints,CV_32F);
points2 = cvCreateMat(2,numPoints,CV_32F);
status  = cvCreateMat(1,numPoints,CV_32F);

/* Fill the points here ... */

fundMatr = cvCreateMat(3,3,CV_32F);
int num = cvFindFundamentalMat(points1,points2,fundMatr,CV_FM_RANSAC,1.0,0.99,status);
if( num == 1 )
{
    printf("Fundamental matrix was found\n");
}
else
{
    printf("Fundamental matrix was not found\n");
}

/*====== Example of code for three matrices ======*/
CvMat* points1;
CvMat* points2;
CvMat* fundMatr;

points1 = cvCreateMat(2,7,CV_32F);
points2 = cvCreateMat(2,7,CV_32F);

/* Fill the points here... */

fundMatr = cvCreateMat(9,3,CV_32F);
int num = cvFindFundamentalMat(points1,points2,fundMatr,CV_FM_7POINT,0,0,0);
printf("Found %d matrices\n",num);
For every input point on one of the images, computes the corresponding epiline on the other image
void cvComputeCorrespondEpilines( const CvMat* points, int pointImageID, CvMat* fundMatr, CvMat* corrLines);
The function cvComputeCorrespondEpilines computes the corresponding epiline for every input point using the basic equation of epipolar geometry:
If the points are located on the first image (pointImageID=1), the corresponding epipolar lines are computed as:
l2 = F*p1,
where F is the fundamental matrix, p1 is a point on the first image, and l2 is the corresponding epipolar line on the second image. If the points are located on the second image (pointImageID=2):
l1 = F^T*p2,
where p2 is a point on the second image and l1 is the corresponding epipolar line on the first image.
Each epipolar line is represented by the coefficients a, b, c of the line equation a*x + b*y + c = 0. The computed lines are normalized so that a^2 + b^2 = 1, which is useful when the distance from a point to the line must be computed later.