Camera Calibration and 3D Reconstruction Reference



Camera Calibration Functions


CalibrateCamera

Calibrates camera with single precision

void cvCalibrateCamera( int numImages, int* numPoints, CvSize imageSize,
                        CvPoint2D32f* imagePoints32f, CvPoint3D32f* objectPoints32f,
                        CvVect32f distortion32f, CvMatr32f cameraMatrix32f,
                        CvVect32f transVects32f, CvMatr32f rotMatrs32f,
                        int useIntrinsicGuess );

numImages
Number of the images.
numPoints
Array of the number of points in each image.
imageSize
Size of the image.
imagePoints32f
Pointer to the array of image points detected in all the views.
objectPoints32f
Pointer to the array of corresponding points on the calibration pattern.
distortion32f
Array of four distortion coefficients found.
cameraMatrix32f
Camera matrix found.
transVects32f
Array of translation vectors for each pattern position in the image.
rotMatrs32f
Array of rotation matrices for each pattern position in the image.
useIntrinsicGuess
Intrinsic guess flag. If equal to 1, the camera matrix passed in is used as an initial guess of the intrinsic parameters.

The function cvCalibrateCamera calculates the camera parameters using the coordinates of points on the pattern object and their projections in the pattern object images.
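
Example. Calling cvCalibrateCamera (a minimal sketch; the 640x480 image size is a placeholder and CvVect32f/CvMatr32f are assumed to be plain float buffers)

void calibrate_example( CvPoint2D32f* imagePoints,   /* detected points, all views concatenated */
                        CvPoint3D32f* objectPoints,  /* corresponding pattern points */
                        int numImages, int pointsPerView )
{
    int i;
    int*   numPoints    = (int*)malloc( numImages * sizeof(int) );
    float  distortion[4];                  /* k1, k2, p1, p2 */
    float  cameraMatrix[9];                /* 3x3, row-major */
    float* transVects   = (float*)malloc( numImages * 3 * sizeof(float) );
    float* rotMatrs     = (float*)malloc( numImages * 9 * sizeof(float) );

    for( i = 0; i < numImages; i++ )
        numPoints[i] = pointsPerView;      /* same number of points in every view */

    cvCalibrateCamera( numImages, numPoints, cvSize(640,480),
                       imagePoints, objectPoints,
                       distortion, cameraMatrix,
                       transVects, rotMatrs,
                       0 /* no intrinsic guess */ );

    free( numPoints );
    free( transVects );
    free( rotMatrs );
}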


CalibrateCamera_64d

Calibrates camera with double precision

void cvCalibrateCamera_64d( int numImages, int* numPoints, CvSize imageSize,
                            CvPoint2D64d* imagePoints, CvPoint3D64d* objectPoints,
                            CvVect64d distortion, CvMatr64d cameraMatrix,
                            CvVect64d transVects, CvMatr64d rotMatrs,
                            int useIntrinsicGuess );

numImages
Number of the images.
numPoints
Array of the number of points in each image.
imageSize
Size of the image.
imagePoints
Pointer to the array of image points detected in all the views.
objectPoints
Pointer to the array of corresponding points on the calibration pattern.
distortion
Distortion coefficients found.
cameraMatrix
Camera matrix found.
transVects
Array of translation vectors for each pattern position on the image.
rotMatrs
Array of rotation matrices for each pattern position on the image.
useIntrinsicGuess
Intrinsic guess flag. If equal to 1, the camera matrix passed in is used as an initial guess of the intrinsic parameters.

The function cvCalibrateCamera_64d is basically the same as the function cvCalibrateCamera, but uses double precision.


Rodrigues

Converts rotation matrix to rotation vector or vice versa

void  cvRodrigues( CvMat* rotMatrix, CvMat* rotVector,
                   CvMat* jacobian, int convType);

rotMatrix
Rotation matrix (3x3), 32-bit or 64-bit floating point.
rotVector
Rotation vector (3x1 or 1x3) of the same type as rotMatrix.
jacobian
Jacobian matrix 3 × 9.
convType
Type of conversion; must be CV_RODRIGUES_M2V for converting the matrix to the vector or CV_RODRIGUES_V2M for converting the vector to the matrix.

The function cvRodrigues converts the rotation matrix to the rotation vector or vice versa.
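
Example. Converting a rotation matrix to a rotation vector (a minimal sketch; the matrix values describe a 90-degree rotation about the Z axis)

CvMat* rotMatrix = cvCreateMat( 3, 3, CV_32FC1 );
CvMat* rotVector = cvCreateMat( 3, 1, CV_32FC1 );
CvMat* jacobian  = cvCreateMat( 3, 9, CV_32FC1 );

/* rotation by 90 degrees about the Z axis */
cvmSet( rotMatrix, 0, 0, 0 ); cvmSet( rotMatrix, 0, 1, -1 ); cvmSet( rotMatrix, 0, 2, 0 );
cvmSet( rotMatrix, 1, 0, 1 ); cvmSet( rotMatrix, 1, 1,  0 ); cvmSet( rotMatrix, 1, 2, 0 );
cvmSet( rotMatrix, 2, 0, 0 ); cvmSet( rotMatrix, 2, 1,  0 ); cvmSet( rotMatrix, 2, 2, 1 );

cvRodrigues( rotMatrix, rotVector, jacobian, CV_RODRIGUES_M2V );
/* rotVector is now approximately (0, 0, pi/2); the reverse conversion
   uses CV_RODRIGUES_V2M with the same arguments */

cvReleaseMat( &rotMatrix );
cvReleaseMat( &rotVector );
cvReleaseMat( &jacobian );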


UnDistortOnce

Corrects camera lens distortion

void cvUnDistortOnce( const CvArr* srcImage, CvArr* dstImage,
                      const float* intrMatrix,
                      const float* distCoeffs,
                      int interpolate=1 );

srcImage
Source (distorted) image.
dstImage
Destination (corrected) image.
intrMatrix
Matrix of the camera intrinsic parameters (3x3).
distCoeffs
Vector of the four distortion coefficients k1, k2, p1 and p2.
interpolate
Bilinear interpolation flag.

The function cvUnDistortOnce corrects camera lens distortion in case of a single image. The matrix of the camera intrinsic parameters and the distortion coefficients k1, k2, p1, and p2 must be calculated beforehand by the function cvCalibrateCamera.
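
Example. Correcting a single image (a minimal sketch; src is assumed to be an already loaded distorted IplImage, and the intrinsic and distortion values are placeholders standing in for the output of cvCalibrateCamera)

float intrMatrix[9] = { 600.f,   0.f, 320.f,
                          0.f, 600.f, 240.f,
                          0.f,   0.f,   1.f };
float distCoeffs[4] = { -0.2f, 0.05f, 0.f, 0.f };   /* k1, k2, p1, p2 */

IplImage* dst = cvCloneImage( src );                /* same size and format as src */
cvUnDistortOnce( src, dst, intrMatrix, distCoeffs, 1 );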


UnDistortInit

Calculates arrays of distorted point indices and interpolation coefficients

void cvUnDistortInit( const CvArr* srcImage, CvArr* undistMap,
                      const float* intrMatrix,
                      const float* distCoeffs,
                      int interpolate=1 );

srcImage
Arbitrary source (distorted) image; only its size and number of channels matter.
undistMap
32-bit integer image of the same size as the source image (if interpolate=0) or 3 times wider than the source image (if interpolate=1).
intrMatrix
Matrix of the camera intrinsic parameters.
distCoeffs
Vector of the 4 distortion coefficients k1, k2, p1 and p2.
interpolate
Bilinear interpolation flag.

The function cvUnDistortInit calculates arrays of distorted point indices and interpolation coefficients using the known matrix of the camera intrinsic parameters and the distortion coefficients. It calculates the undistortion map for cvUnDistort.

Matrix of the camera intrinsic parameters and the distortion coefficients may be calculated by cvCalibrateCamera.


UnDistort

Corrects camera lens distortion

void cvUnDistort( const void* srcImage, void* dstImage,
                  const void* undistMap, int interpolate=1 );

srcImage
Source (distorted) image.
dstImage
Destination (corrected) image.
undistMap
Undistortion map, pre-calculated by cvUnDistortInit.
interpolate
Bilinear interpolation flag, the same as in cvUnDistortInit.

The function cvUnDistort corrects camera lens distortion using the previously calculated undistortion map. It is faster than cvUnDistortOnce.
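
Example. Building the map once and reusing it (a minimal sketch; src and dst are assumed to be pre-allocated images of the same size, and the intrinsic and distortion values are placeholders)

int interpolate = 1;
float intrMatrix[9] = { 600.f,   0.f, 320.f,
                          0.f, 600.f, 240.f,
                          0.f,   0.f,   1.f };
float distCoeffs[4] = { -0.2f, 0.05f, 0.f, 0.f };

/* with interpolate=1 the map is a 32-bit integer image 3 times wider than the source */
IplImage* undistMap = cvCreateImage( cvSize( src->width*3, src->height ),
                                     IPL_DEPTH_32S, 1 );
cvUnDistortInit( src, undistMap, intrMatrix, distCoeffs, interpolate );

/* every subsequent frame of the same size is corrected with the same map */
cvUnDistort( src, dst, undistMap, interpolate );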


FindChessBoardCornerGuesses

Finds approximate positions of internal corners of the chessboard

int cvFindChessBoardCornerGuesses( IplImage* img, IplImage* thresh, CvSize etalonSize,
                                   CvPoint2D32f* corners, int* cornerCount );

img
Source chessboard view; must have the depth of IPL_DEPTH_8U.
thresh
Temporary image of the same size and format as the source image.
etalonSize
Number of inner corners per chessboard row and column. The width (the number of columns) must be less than or equal to the height (the number of rows).
corners
Pointer to the corner array found.
cornerCount
Signed value whose absolute value is the number of corners found. A positive number means that a whole chessboard has been found and a negative number means that not all the corners have been found.

The function cvFindChessBoardCornerGuesses attempts to determine whether the input image is a view of the chessboard pattern and to locate the internal chessboard corners. The function returns a non-zero value if all the corners have been found and placed in a certain order (row by row, left to right in every row); otherwise, if the function fails to find all the corners or to reorder them, it returns 0. For example, a simple chessboard has 8 x 8 squares and 7 x 7 internal corners, that is, points where the squares are tangent. The word "approximate" in the above description means that the corner coordinates found may differ from the actual coordinates by a couple of pixels. To get more precise coordinates, the user may use the function cvFindCornerSubPix.
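
Example. Coarse-then-fine chessboard corner detection (a minimal sketch; img is assumed to be a single-channel 8-bit view of a board with 7x7 inner corners, and the sub-pixel window and termination criteria are example values)

CvSize etalonSize = cvSize( 7, 7 );
CvPoint2D32f corners[49];
int cornerCount = 0;

IplImage* thresh = cvCreateImage( cvGetSize(img), IPL_DEPTH_8U, 1 );

int found = cvFindChessBoardCornerGuesses( img, thresh, etalonSize,
                                           corners, &cornerCount );
if( found )
{
    /* refine the approximate corner positions to sub-pixel accuracy */
    cvFindCornerSubPix( img, corners, cornerCount, cvSize(5,5), cvSize(-1,-1),
                        cvTermCriteria( CV_TERMCRIT_ITER|CV_TERMCRIT_EPS, 30, 0.1 ) );
}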


Pose Estimation


FindExtrinsicCameraParams

Finds extrinsic camera parameters for pattern

void cvFindExtrinsicCameraParams( int numPoints, CvSize imageSize,
                                  CvPoint2D32f* imagePoints32f, CvPoint3D32f* objectPoints32f,
                                  CvVect32f focalLength32f, CvPoint2D32f principalPoint32f,
                                  CvVect32f distortion32f, CvVect32f rotVect32f,
                                  CvVect32f transVect32f );

numPoints
Number of the points.
imageSize
Size of the image.
imagePoints32f
Pointer to the array of image points.
objectPoints32f
Pointer to the array of corresponding pattern (object) points.
focalLength32f
Focal length.
principalPoint32f
Principal point.
distortion32f
Distortion coefficients.
rotVect32f
Rotation vector.
transVect32f
Translation vector.

The function cvFindExtrinsicCameraParams finds the extrinsic parameters for the pattern.
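
Example. Calling cvFindExtrinsicCameraParams (a minimal sketch; imagePoints, objectPoints and numPoints are assumed to be prepared elsewhere, the focal length is assumed to be a two-element (fx, fy) buffer, and all numeric values are placeholders)

float focalLength[2]   = { 600.f, 600.f };              /* assumed (fx, fy) layout */
CvPoint2D32f principal = cvPoint2D32f( 320.f, 240.f );
float distortion[4]    = { 0.f, 0.f, 0.f, 0.f };
float rotVect[3], transVect[3];

/* imagePoints/objectPoints hold numPoints corresponding 2D/3D points */
cvFindExtrinsicCameraParams( numPoints, cvSize(640,480),
                             imagePoints, objectPoints,
                             focalLength, principal,
                             distortion, rotVect, transVect );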


FindExtrinsicCameraParams_64d

Finds extrinsic camera parameters for pattern with double precision

void cvFindExtrinsicCameraParams_64d( int numPoints, CvSize imageSize,
                                      CvPoint2D64d* imagePoints, CvPoint3D64d* objectPoints,
                                      CvVect64d focalLength, CvPoint2D64d principalPoint,
                                      CvVect64d distortion, CvVect64d rotVect,
                                      CvVect64d transVect );

numPoints
Number of the points.
imageSize
Size of the image.
imagePoints
Pointer to the array of image points.
objectPoints
Pointer to the array of corresponding pattern (object) points.
focalLength
Focal length.
principalPoint
Principal point.
distortion
Distortion coefficients.
rotVect
Rotation vector.
transVect
Translation vector.

The function cvFindExtrinsicCameraParams_64d finds the extrinsic parameters for the pattern with double precision.


CreatePOSITObject

Initializes structure containing object information

CvPOSITObject* cvCreatePOSITObject( CvPoint3D32f* points, int numPoints );

points
Pointer to the points of the 3D object model.
numPoints
Number of object points.

The function cvCreatePOSITObject allocates memory for the object structure and computes the object inverse matrix.

The preprocessed object data is stored in the structure CvPOSITObject, internal for OpenCV, which means that the user cannot directly access the structure data. The user may only create this structure and pass its pointer to the function.

An object is defined as a set of points given in a coordinate system. The function cvPOSIT computes a vector that begins at the camera-related coordinate system center and ends at points[0] of the object.

Once the work with a given object is finished, the function cvReleasePOSITObject must be called to free memory.


POSIT

Implements POSIT algorithm

void cvPOSIT( CvPoint2D32f* imagePoints, CvPOSITObject* pObject,
              double focalLength, CvTermCriteria criteria,
              CvMatrix3* rotation, CvPoint3D32f* translation );

imagePoints
Pointer to the object points projections on the 2D image plane.
pObject
Pointer to the object structure.
focalLength
Focal length of the camera used.
criteria
Termination criteria of the iterative POSIT algorithm.
rotation
Rotation matrix.
translation
Translation vector.

The function cvPOSIT implements the POSIT algorithm. Image coordinates are given in a camera-related coordinate system. The focal length may be retrieved using camera calibration functions. At every iteration of the algorithm a new perspective projection of the estimated pose is computed.

The difference norm between two projections is the maximal distance between corresponding points. The parameter criteria.epsilon serves to stop the algorithm when the difference becomes small enough.
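
Example. A complete POSIT workflow (a minimal sketch; the model is a non-coplanar set of four points, the focal length is a placeholder, and imagePoints must be filled with the actually detected projections)

CvPoint3D32f modelPoints[4] = { {0,0,0}, {1,0,0}, {0,1,0}, {0,0,1} };
CvPoint2D32f imagePoints[4];          /* fill with the detected projections */
CvMatrix3 rotation;
CvPoint3D32f translation;

CvPOSITObject* positObject = cvCreatePOSITObject( modelPoints, 4 );

cvPOSIT( imagePoints, positObject, 760.0 /* focal length in pixels */,
         cvTermCriteria( CV_TERMCRIT_EPS|CV_TERMCRIT_ITER, 100, 1e-5 ),
         &rotation, &translation );

cvReleasePOSITObject( &positObject );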


ReleasePOSITObject

Deallocates 3D object structure

void cvReleasePOSITObject( CvPOSITObject** ppObject );

ppObject
Address of the pointer to the object structure.

The function cvReleasePOSITObject releases memory previously allocated by the function cvCreatePOSITObject.


CalcImageHomography

Calculates homography matrix for oblong planar object (e.g. arm)

void cvCalcImageHomography( float* line, CvPoint3D32f* center,
                            float* intrinsic, float homography[3][3]);

line
The main object axis direction (vector (dx,dy,dz)).
center
Object center (cx,cy,cz).
intrinsic
Intrinsic camera parameters (3x3 matrix).
homography
Output homography matrix (3x3).

The function cvCalcImageHomography calculates the homography matrix for the transformation of the initial image plane to the plane defined by the 3D oblong object line (see Figure 6-10 in the 3D Reconstruction chapter of the OpenCV Guide).
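
Example. Calling cvCalcImageHomography (a minimal sketch; the axis direction, object center and intrinsic matrix are placeholder values)

float line[3]       = { 0.f, 1.f, 0.f };              /* main axis direction (dx,dy,dz) */
CvPoint3D32f center = cvPoint3D32f( 0.f, 0.f, 50.f ); /* object center (cx,cy,cz) */
float intrinsic[9]  = { 600.f,   0.f, 320.f,
                          0.f, 600.f, 240.f,
                          0.f,   0.f,   1.f };
float homography[3][3];

cvCalcImageHomography( line, &center, intrinsic, homography );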


View Morphing Functions


MakeScanlines

Calculates scanline coordinates for two cameras from the fundamental matrix

void cvMakeScanlines( CvMatrix3* matrix, CvSize imgSize, int* scanlines1,
                      int* scanlines2, int* lens1, int* lens2, int* numlines );

matrix
Fundamental matrix.
imgSize
Size of the image.
scanlines1
Pointer to the array of calculated scanlines of the first image.
scanlines2
Pointer to the array of calculated scanlines of the second image.
lens1
Pointer to the array of calculated lengths (in pixels) of the first image scanlines.
lens2
Pointer to the array of calculated lengths (in pixels) of the second image scanlines.
numlines
Pointer to the variable that stores the number of scanlines.

The function cvMakeScanlines finds coordinates of scanlines for two images.

The number of scanlines is returned through the numlines pointer. If the pointers scanlines1 or scanlines2 are equal to zero, the function does nothing except calculate the number of scanlines.
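
Example. Two-pass use of cvMakeScanlines (a sketch; the buffer size of numlines*2*4 ints per array is an assumption borrowed from the view-morphing sample pattern, not a documented formula)

CvMatrix3 fundMatr;                 /* fundamental matrix, computed elsewhere */
CvSize imgSize = cvSize( 640, 480 );
int numlines = 0;
int *scanlines1, *scanlines2, *lens1, *lens2;

/* first pass: only the number of scanlines is computed */
cvMakeScanlines( &fundMatr, imgSize, 0, 0, 0, 0, &numlines );

scanlines1 = (int*)malloc( numlines * 2 * 4 * sizeof(int) );
scanlines2 = (int*)malloc( numlines * 2 * 4 * sizeof(int) );
lens1      = (int*)malloc( numlines * 2 * 4 * sizeof(int) );
lens2      = (int*)malloc( numlines * 2 * 4 * sizeof(int) );

/* second pass: the scanline coordinates and lengths are filled in */
cvMakeScanlines( &fundMatr, imgSize, scanlines1, scanlines2,
                 lens1, lens2, &numlines );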


PreWarpImage

Rectifies image

void cvPreWarpImage( int numLines, IplImage* img, uchar* dst,
                     int* dstNums, int* scanlines );

numLines
Number of scanlines for the image.
img
Image to prewarp.
dst
Buffer to store the prewarped image data.
dstNums
Pointer to the array of lengths of scanlines.
scanlines
Pointer to the array of coordinates of scanlines.

The function cvPreWarpImage rectifies the image so that the scanlines in the rectified image are horizontal. The output buffer of size max(width,height)*numscanlines*3 must be allocated before calling the function.
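
Example. Allocating the prewarp buffer (a minimal sketch; img, numlines, lens1 and scanlines1 are assumed to come from the previous steps)

int side = img->width > img->height ? img->width : img->height;
uchar* prewarp1 = (uchar*)malloc( side * numlines * 3 );

cvPreWarpImage( numlines, img, prewarp1, lens1, scanlines1 );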


FindRuns

Retrieves scanlines from rectified image and breaks them down into runs

void cvFindRuns( int numLines, uchar* prewarp_1, uchar* prewarp_2,
                 int* lineLens_1, int* lineLens_2,
                 int* runs_1, int* runs_2,
                 int* numRuns_1, int* numRuns_2 );

numLines
Number of the scanlines.
prewarp_1
Prewarp data of the first image.
prewarp_2
Prewarp data of the second image.
lineLens_1
Array of lengths of scanlines in the first image.
lineLens_2
Array of lengths of scanlines in the second image.
runs_1
Array of runs in each scanline in the first image.
runs_2
Array of runs in each scanline in the second image.
numRuns_1
Array of numbers of runs in each scanline in the first image.
numRuns_2
Array of numbers of runs in each scanline in the second image.

The function cvFindRuns retrieves scanlines from the rectified image and breaks each scanline down into several runs, that is, series of pixels of almost the same brightness.


DynamicCorrespondMulti

Finds correspondence between two sets of runs of two warped images

void cvDynamicCorrespondMulti( int lines, int* first, int* firstRuns,
                               int* second, int* secondRuns,
                               int* firstCorr, int* secondCorr );

lines
Number of scanlines.
first
Array of runs of the first image.
firstRuns
Array of numbers of runs in each scanline of the first image.
second
Array of runs of the second image.
secondRuns
Array of numbers of runs in each scanline of the second image.
firstCorr
Pointer to the array of correspondence information found for the first runs.
secondCorr
Pointer to the array of correspondence information found for the second runs.

The function cvDynamicCorrespondMulti finds correspondence between two sets of runs of two images. Memory must be allocated before calling this function. Memory size for one array of correspondence information is

max(width, height) * numscanlines * 3 * sizeof(int).
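
Example. Allocating the correspondence arrays (a minimal sketch using the size quoted above; runs_1, runs_2, numRuns_1 and numRuns_2 are assumed to be the outputs of cvFindRuns, and imgSize/numlines come from the earlier steps)

int side = imgSize.width > imgSize.height ? imgSize.width : imgSize.height;
int* firstCorr  = (int*)malloc( side * numlines * 3 * sizeof(int) );
int* secondCorr = (int*)malloc( side * numlines * 3 * sizeof(int) );

cvDynamicCorrespondMulti( numlines, runs_1, numRuns_1,
                          runs_2, numRuns_2,
                          firstCorr, secondCorr );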


MakeAlphaScanlines

Calculates coordinates of scanlines of image from virtual camera

void cvMakeAlphaScanlines( int* scanlines_1, int* scanlines_2,
                           int* scanlinesA, int* lens,
                           int numlines, float alpha );

scanlines_1
Pointer to the array of the first scanlines.
scanlines_2
Pointer to the array of the second scanlines.
scanlinesA
Pointer to the array of the scanlines found in the virtual image.
lens
Pointer to the array of lengths of the scanlines found in the virtual image.
numlines
Number of scanlines.
alpha
Position of the virtual camera (0.0 - 1.0).

The function cvMakeAlphaScanlines finds coordinates of scanlines for the virtual camera with the given camera position.

Memory must be allocated before calling this function. Memory size for the array of correspondence runs is numscanlines*2*4*sizeof(int). Memory size for the array of the scanline lengths is numscanlines*2*4*sizeof(int).


MorphEpilinesMulti

Morphs two pre-warped images using information about stereo correspondence

void cvMorphEpilinesMulti( int lines, uchar* firstPix, int* firstNum,
                           uchar* secondPix, int* secondNum,
                           uchar* dstPix, int* dstNum,
                           float alpha, int* first, int* firstRuns,
                           int* second, int* secondRuns,
                           int* firstCorr, int* secondCorr );

lines
Number of scanlines in the prewarp image.
firstPix
Pointer to the first prewarp image.
firstNum
Pointer to the array of numbers of points in each scanline in the first image.
secondPix
Pointer to the second prewarp image.
secondNum
Pointer to the array of numbers of points in each scanline in the second image.
dstPix
Pointer to the resulting morphed warped image.
dstNum
Pointer to the array of numbers of points in each line.
alpha
Virtual camera position (0.0 - 1.0).
first
First sequence of runs.
firstRuns
Pointer to the number of runs in each scanline in the first image.
second
Second sequence of runs.
secondRuns
Pointer to the number of runs in each scanline in the second image.
firstCorr
Pointer to the array of correspondence information found for the first runs.
secondCorr
Pointer to the array of correspondence information found for the second runs.

The function cvMorphEpilinesMulti morphs two pre-warped images using information about correspondence between the scanlines of two images.


PostWarpImage

Warps rectified morphed image back

void cvPostWarpImage( int numLines, uchar* src, int* srcNums,
                      IplImage* img, int* scanlines );

numLines
Number of the scanlines.
src
Pointer to the prewarped (virtual) image data.
srcNums
Pointer to the array of the numbers of points in each scanline of the image.
img
Resulting unwarped image.
scanlines
Pointer to the array of scanlines data.

The function cvPostWarpImage warps the resultant image from the virtual camera by storing its rows across the scanlines whose coordinates are calculated by cvMakeAlphaScanlines.


DeleteMoire

Deletes moire in given image

void cvDeleteMoire( IplImage* img );

img
Image.

The function cvDeleteMoire deletes moire from the given image. The post-warped image may have black (un-covered) points because of possible holes between neighboring scanlines. The function deletes moire (black pixels) from the image by substituting neighboring pixels for black pixels. If all the scanlines are horizontal, the function may be omitted.


Stereo Correspondence and Epipolar Geometry Functions


FindFundamentalMat

Calculates fundamental matrix from corresponding points in two images

int cvFindFundamentalMat( CvMat* points1,
                          CvMat* points2,
                          CvMat* fundMatr,
                          int    method,
                          double param1,
                          double param2,
                          CvMat* status=0);

points1
Array of the first image points of 2xN/Nx2 or 3xN/Nx3 size (N is the number of points). The point coordinates should be floating-point (single or double precision).
points2
Array of the second image points of the same size and format as points1.
fundMatr
The output fundamental matrix or matrices. Size 3x3 or 9x3 (the 7-point method may return up to 3 matrices).
method
Method for computing fundamental matrix
CV_FM_7POINT - for 7-point algorithm. Number of points == 7
CV_FM_8POINT - for 8-point algorithm. Number of points >= 8
CV_FM_RANSAC - for RANSAC algorithm. Number of points >= 8
CV_FM_LMEDS - for LMedS algorithm. Number of points >= 8
param1
The parameter is used for the RANSAC and LMedS methods only. It is the maximum distance from a point to the epipolar line, beyond which the point is considered an outlier and is not used in further calculations. Usually it is set to 0.5 or 1.0.
param2
The parameter is used for the RANSAC and LMedS methods only. It denotes the desirable level of confidence that the estimated matrix is correct (up to some precision). It can be set to 0.99, for example.
status
Optional array of N elements; every element is set to 1 if the point was not rejected during the computation and to 0 otherwise. The array is computed only by the RANSAC and LMedS methods; for other methods it is set to all 1's.

The epipolar geometry is described by the following equation:

p2^T * F * p1 = 0,

where F is the fundamental matrix, and p1 and p2 are corresponding points in the two images.

The function cvFindFundamentalMat calculates the fundamental matrix using one of the four methods listed above and returns the number of fundamental matrices found: 0 if the matrix could not be found, 1 or 3 if the matrix or matrices have been found successfully.

The calculated fundamental matrix may be passed further to the cvComputeCorrespondEpilines function, which computes coordinates of the corresponding epilines on the two images.

The 7-point method uses exactly 7 points. It can find 1 or 3 fundamental matrices. It returns the number of matrices found and, if there is room in the destination array to keep all the detected matrices, stores all of them there; otherwise it stores only one of the matrices.

All other methods use 8 or more points and return a single fundamental matrix.

Example. Fundamental matrix calculation

int numPoints = 100;
CvMat* points1;
CvMat* points2;
CvMat* status;
CvMat* fundMatr;

points1  = cvCreateMat(2,numPoints,CV_32F);
points2  = cvCreateMat(2,numPoints,CV_32F);
status   = cvCreateMat(1,numPoints,CV_32F);

/* Fill the points here ... */

fundMatr = cvCreateMat(3,3,CV_32F);
int num = cvFindFundamentalMat(points1,points2,fundMatr,CV_FM_RANSAC,1.0,0.99,status);
if( num == 1 )
{
    printf("Fundamental matrix was found\n");
}
else
{
    printf("Fundamental matrix was not found\n");
}


/*====== Example of code for three matrices ======*/
CvMat* points1;
CvMat* points2;
CvMat* fundMatr;

points1  = cvCreateMat(2,7,CV_32F);
points2  = cvCreateMat(2,7,CV_32F);

/* Fill the points here... */

fundMatr = cvCreateMat(9,3,CV_32F);
int num = cvFindFundamentalMat(points1,points2,fundMatr,CV_FM_7POINT,0,0,0);
printf("Found %d matrixes\n",num);

ComputeCorrespondEpilines

For every input point on one of the images, computes the corresponding epiline on the other image

void cvComputeCorrespondEpilines( const CvMat* points,
                                  int pointImageID,
                                  CvMat* fundMatr,
                                  CvMat* corrLines);

points
The input points: 2xN or 3xN array (N is the number of points)
pointImageID
Index of the image (1 or 2) on which the points are located
fundMatr
Fundamental matrix
corrLines
Computed epilines, 3xN array

The function cvComputeCorrespondEpilines computes the corresponding epiline for every input point using the basic equation of epipolar geometry:

If the points are located on the first image (pointImageID=1), the corresponding epipolar lines on the second image are computed as:

l2 = F * p1

where F is the fundamental matrix, p1 is a point on the first image, and l2 is the corresponding epipolar line on the second image.

If the points are located on the second image (pointImageID=2):

l1 = F^T * p2

where F is the fundamental matrix, p2 is a point on the second image, and l1 is the corresponding epipolar line on the first image.

Each epipolar line is represented by the coefficients a, b, c of the line equation:

a*x + b*y + c = 0

The computed lines are also normalized so that a^2 + b^2 = 1, which is useful if the distance from a point to the line has to be computed later.
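
Example. Computing epilines for points of the first image (a minimal sketch; fundMatr is assumed to have been estimated with cvFindFundamentalMat)

int numPoints = 100;
CvMat* points1   = cvCreateMat( 2, numPoints, CV_32F );
CvMat* corrLines = cvCreateMat( 3, numPoints, CV_32F );

/* Fill points1 here ... */

cvComputeCorrespondEpilines( points1, 1, fundMatr, corrLines );
/* each column of corrLines holds (a, b, c) of the line a*x + b*y + c = 0 */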