// @techstark/opencv-js: OpenCV JavaScript version for Node.js or the browser
import type { bool, double, float, InputArray, InputArrayOfArrays, InputOutputArray, int, Mat, OutputArray, OutputArrayOfArrays, Point2d, Rect, Size, size_t, TermCriteria, Vec3d } from "./_types";
/**
 * Returns the overall RMS re-projection error.
* The function estimates the intrinsic camera parameters and extrinsic parameters for each of the
* views. The algorithm is based on Zhang2000 and BouguetMCT . The coordinates of 3D object points and
* their corresponding 2D projections in each view must be specified. That may be achieved by using an
* object with a known geometry and easily detectable feature points. Such an object is called a
* calibration rig or calibration pattern, and OpenCV has built-in support for a chessboard as a
* calibration rig (see findChessboardCorners ). Currently, initialization of intrinsic parameters
* (when CALIB_USE_INTRINSIC_GUESS is not set) is only implemented for planar calibration patterns
* (where Z-coordinates of the object points must be all zeros). 3D calibration rigs can also be used
* as long as initial cameraMatrix is provided.
*
* The algorithm performs the following steps:
*
 * - Compute the initial intrinsic parameters (the option is only available for planar calibration
 *   patterns) or read them from the input parameters. The distortion coefficients are all set to
 *   zeros initially unless some of CALIB_FIX_K? are specified.
 * - Estimate the initial camera pose as if the intrinsic parameters have been already known. This is
 *   done using solvePnP .
 * - Run the global Levenberg-Marquardt optimization algorithm to minimize the reprojection error,
 *   that is, the total sum of squared distances between the observed feature points imagePoints and
 *   the projected (using the current estimates for camera parameters and the poses) object points
 *   objectPoints. See projectPoints for details.
*
* If you use a non-square (=non-NxN) grid and findChessboardCorners for calibration, and
* calibrateCamera returns bad values (zero distortion coefficients, an image center very far from
* (w/2-0.5,h/2-0.5), and/or large differences between `$f_x$` and `$f_y$` (ratios of 10:1 or more)),
* then you have probably used patternSize=cvSize(rows,cols) instead of using
* patternSize=cvSize(cols,rows) in findChessboardCorners .
*
* [calibrateCameraRO], [findChessboardCorners], [solvePnP], [initCameraMatrix2D], [stereoCalibrate],
* [undistort]
*
* @param objectPoints In the new interface it is a vector of vectors of calibration pattern points in
* the calibration pattern coordinate space (e.g. std::vector<std::vector<cv::Vec3f>>). The outer
* vector contains as many elements as the number of the pattern views. If the same calibration pattern
 * is shown in each view and it is fully visible, all the vectors will be the same. It is, however,
 * possible to use partially occluded patterns, or even different patterns in different views; the
 * vectors will then differ. The points are 3D, but since they are in a pattern coordinate system,
 * it may make sense, if the rig is planar, to place the model on the XY coordinate plane so that the
 * Z-coordinate of each input object point is 0. In the old interface all the vectors of object points
* from different views are concatenated together.
*
* @param imagePoints In the new interface it is a vector of vectors of the projections of calibration
 * pattern points (e.g. std::vector<std::vector<cv::Vec2f>>). imagePoints.size() must be equal to
 * objectPoints.size(), and imagePoints[i].size() must be equal to objectPoints[i].size() for each i.
 * In the old interface all the vectors of image points from different views are concatenated together.
*
* @param imageSize Size of the image used only to initialize the intrinsic camera matrix.
*
* @param cameraMatrix Output 3x3 floating-point camera matrix $A =
 * \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}$ . If CALIB_USE_INTRINSIC_GUESS and/or
* CALIB_FIX_ASPECT_RATIO are specified, some or all of fx, fy, cx, cy must be initialized before
* calling the function.
*
* @param distCoeffs Output vector of distortion coefficients $(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5,
* k_6 [, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])$ of 4, 5, 8, 12 or 14 elements.
*
* @param rvecs Output vector of rotation vectors (see Rodrigues ) estimated for each pattern view
 * (e.g. std::vector<cv::Mat>). That is, each k-th rotation vector together with the corresponding
* k-th translation vector (see the next output parameter description) brings the calibration pattern
* from the model coordinate space (in which object points are specified) to the world coordinate
* space, that is, a real position of the calibration pattern in the k-th pattern view (k=0.. M -1).
*
* @param tvecs Output vector of translation vectors estimated for each pattern view.
*
* @param stdDeviationsIntrinsics Output vector of standard deviations estimated for intrinsic
* parameters. Order of deviations values: $(f_x, f_y, c_x, c_y, k_1, k_2, p_1, p_2, k_3, k_4, k_5, k_6
 * , s_1, s_2, s_3, s_4, \tau_x, \tau_y)$ If one of the parameters is not estimated, its deviation is
 * equal to zero.
*
* @param stdDeviationsExtrinsics Output vector of standard deviations estimated for extrinsic
* parameters. Order of deviations values: $(R_1, T_1, \dotsc , R_M, T_M)$ where M is number of pattern
* views, $R_i, T_i$ are concatenated 1x3 vectors.
*
* @param perViewErrors Output vector of the RMS re-projection error estimated for each pattern view.
*
 * @param flags Different flags that may be zero or a combination of the following values:
 * - CALIB_USE_INTRINSIC_GUESS: cameraMatrix contains valid initial values of fx, fy, cx, cy that are
 *   optimized further. Otherwise, (cx, cy) is initially set to the image center (imageSize is used),
 *   and focal distances are computed in a least-squares fashion. Note that if intrinsic parameters
 *   are known, there is no need to use this function just to estimate extrinsic parameters; use
 *   solvePnP instead.
 * - CALIB_FIX_PRINCIPAL_POINT: The principal point is not changed during the global optimization. It
 *   stays at the center or at a different location specified when CALIB_USE_INTRINSIC_GUESS is set
 *   too.
 * - CALIB_FIX_ASPECT_RATIO: The function considers only fy as a free parameter. The ratio fx/fy
 *   stays the same as in the input cameraMatrix. When CALIB_USE_INTRINSIC_GUESS is not set, the
 *   actual input values of fx and fy are ignored; only their ratio is computed and used further.
 * - CALIB_ZERO_TANGENT_DIST: Tangential distortion coefficients $(p_1, p_2)$ are set to zeros and
 *   stay zero.
 * - CALIB_FIX_K1, ..., CALIB_FIX_K6: The corresponding radial distortion coefficient is not changed
 *   during the optimization. If CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the supplied
 *   distCoeffs matrix is used. Otherwise, it is set to 0.
 * - CALIB_RATIONAL_MODEL: Coefficients k4, k5, and k6 are enabled. For backward compatibility, this
 *   extra flag should be explicitly specified to make the calibration function use the rational
 *   model and return 8 coefficients. If the flag is not set, the function computes and returns only
 *   5 distortion coefficients.
 * - CALIB_THIN_PRISM_MODEL: Coefficients s1, s2, s3 and s4 are enabled. For backward compatibility,
 *   this extra flag should be explicitly specified to make the calibration function use the thin
 *   prism model and return 12 coefficients. If the flag is not set, the function computes and
 *   returns only 5 distortion coefficients.
 * - CALIB_FIX_S1_S2_S3_S4: The thin prism distortion coefficients are not changed during the
 *   optimization. If CALIB_USE_INTRINSIC_GUESS is set, the coefficients from the supplied distCoeffs
 *   matrix are used. Otherwise, they are set to 0.
 * - CALIB_TILTED_MODEL: Coefficients tauX and tauY are enabled. For backward compatibility, this
 *   extra flag should be explicitly specified to make the calibration function use the tilted
 *   sensor model and return 14 coefficients. If the flag is not set, the function computes and
 *   returns only 5 distortion coefficients.
 * - CALIB_FIX_TAUX_TAUY: The coefficients of the tilted sensor model are not changed during the
 *   optimization. If CALIB_USE_INTRINSIC_GUESS is set, the coefficients from the supplied distCoeffs
 *   matrix are used. Otherwise, they are set to 0.
*
* @param criteria Termination criteria for the iterative optimization algorithm.
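 *
 * A minimal TypeScript sketch of the overall flow (a sketch only; it assumes a build that exposes
 * the calib3d module, per-view corner Mats already obtained from findChessboardCorners, and
 * illustrative names such as detectedCornersPerView, imgW and imgH):
 *
 * ```ts
 * import cv from "@techstark/opencv-js";
 *
 * // 3D model points of a 9x6 inner-corner chessboard with 25 mm squares, on the Z = 0 plane.
 * const cols = 9, rows = 6, square = 25;
 * const model: number[] = [];
 * for (let y = 0; y < rows; y++)
 *   for (let x = 0; x < cols; x++) model.push(x * square, y * square, 0);
 *
 * const objectPoints = new cv.MatVector();
 * const imagePoints = new cv.MatVector();
 * for (const corners of detectedCornersPerView) { // one corners Mat per view (assumed input)
 *   objectPoints.push_back(cv.matFromArray(cols * rows, 1, cv.CV_32FC3, model));
 *   imagePoints.push_back(corners);
 * }
 *
 * const cameraMatrix = new cv.Mat();
 * const distCoeffs = new cv.Mat();
 * const rvecs = new cv.MatVector();
 * const tvecs = new cv.MatVector();
 * const rms = cv.calibrateCamera(objectPoints, imagePoints, new cv.Size(imgW, imgH),
 *                                cameraMatrix, distCoeffs, rvecs, tvecs);
 * console.log("RMS re-projection error:", rms);
 * ```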
*/
export declare function calibrateCamera(objectPoints: InputArrayOfArrays, imagePoints: InputArrayOfArrays, imageSize: Size, cameraMatrix: InputOutputArray, distCoeffs: InputOutputArray, rvecs: OutputArrayOfArrays, tvecs: OutputArrayOfArrays, stdDeviationsIntrinsics: OutputArray, stdDeviationsExtrinsics: OutputArray, perViewErrors: OutputArray, flags?: int, criteria?: TermCriteria): double;
/**
* This is an overloaded member function, provided for convenience. It differs from the above function
* only in what argument(s) it accepts.
*/
export declare function calibrateCamera(objectPoints: InputArrayOfArrays, imagePoints: InputArrayOfArrays, imageSize: Size, cameraMatrix: InputOutputArray, distCoeffs: InputOutputArray, rvecs: OutputArrayOfArrays, tvecs: OutputArrayOfArrays, flags?: int, criteria?: TermCriteria): double;
/**
* This function is an extension of [calibrateCamera()] with the method of releasing object which was
* proposed in strobl2011iccv. In many common cases with inaccurate, unmeasured, roughly planar targets
* (calibration plates), this method can dramatically improve the precision of the estimated camera
* parameters. Both the object-releasing method and standard method are supported by this function. Use
* the parameter **iFixedPoint** for method selection. In the internal implementation,
* [calibrateCamera()] is a wrapper for this function.
*
 * Returns the overall RMS re-projection error.
* The function estimates the intrinsic camera parameters and extrinsic parameters for each of the
* views. The algorithm is based on Zhang2000, BouguetMCT and strobl2011iccv. See [calibrateCamera()]
* for other detailed explanations.
*
* [calibrateCamera], [findChessboardCorners], [solvePnP], [initCameraMatrix2D], [stereoCalibrate],
* [undistort]
*
* @param objectPoints Vector of vectors of calibration pattern points in the calibration pattern
 * coordinate space. See calibrateCamera() for details. If the object-releasing method is to be used,
* the identical calibration board must be used in each view and it must be fully visible, and all
* objectPoints[i] must be the same and all points should be roughly close to a plane. The calibration
* target has to be rigid, or at least static if the camera (rather than the calibration target) is
* shifted for grabbing images.
*
* @param imagePoints Vector of vectors of the projections of calibration pattern points. See
* calibrateCamera() for details.
*
* @param imageSize Size of the image used only to initialize the intrinsic camera matrix.
*
* @param iFixedPoint The index of the 3D object point in objectPoints[0] to be fixed. It also acts as
 * a switch for calibration method selection. If the object-releasing method is to be used, pass a
 * value in the range [1, objectPoints[0].size()-2]; any value outside this range selects the
 * standard calibration method. Fixing the top-right corner point of the calibration board grid is
 * usually recommended when the object-releasing method is utilized. According to strobl2011iccv, two
 * other points are also fixed; in this implementation, objectPoints[0].front and
 * objectPoints[0].back.z are used. With the object-releasing method, accurate rvecs, tvecs and
* newObjPoints are only possible if coordinates of these three fixed points are accurate enough.
*
* @param cameraMatrix Output 3x3 floating-point camera matrix. See calibrateCamera() for details.
*
* @param distCoeffs Output vector of distortion coefficients. See calibrateCamera() for details.
*
* @param rvecs Output vector of rotation vectors estimated for each pattern view. See
* calibrateCamera() for details.
*
* @param tvecs Output vector of translation vectors estimated for each pattern view.
*
* @param newObjPoints The updated output vector of calibration pattern points. The coordinates might
* be scaled based on three fixed points. The returned coordinates are accurate only if the above
* mentioned three fixed points are accurate. If not needed, noArray() can be passed in. This parameter
* is ignored with standard calibration method.
*
* @param stdDeviationsIntrinsics Output vector of standard deviations estimated for intrinsic
* parameters. See calibrateCamera() for details.
*
* @param stdDeviationsExtrinsics Output vector of standard deviations estimated for extrinsic
* parameters. See calibrateCamera() for details.
*
* @param stdDeviationsObjPoints Output vector of standard deviations estimated for refined coordinates
* of calibration pattern points. It has the same size and order as objectPoints[0] vector. This
* parameter is ignored with standard calibration method.
*
* @param perViewErrors Output vector of the RMS re-projection error estimated for each pattern view.
*
* @param flags Different flags that may be zero or a combination of some predefined values. See
 * calibrateCamera() for details. If the object-releasing method is used, the calibration time may
 * be much longer. CALIB_USE_QR or CALIB_USE_LU can be used for faster calibration, at the cost of
 * potentially lower precision and stability in some rare cases.
*
* @param criteria Termination criteria for the iterative optimization algorithm.
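 *
 * A sketch of the object-releasing variant, reusing the inputs from the calibrateCamera example
 * above (the fixed-point choice follows the iFixedPoint recommendation; `cols` is the assumed
 * number of inner corners per row):
 *
 * ```ts
 * // Fix the top-right corner of the board grid, i.e. the last point of the first row.
 * const iFixedPoint = cols - 1;
 * const newObjPoints = new cv.Mat();
 * const rms = cv.calibrateCameraRO(objectPoints, imagePoints, new cv.Size(imgW, imgH),
 *                                  iFixedPoint, cameraMatrix, distCoeffs, rvecs, tvecs,
 *                                  newObjPoints);
 * // Any iFixedPoint outside [1, objectPoints.get(0).rows - 2] selects the standard method,
 * // in which case newObjPoints is ignored.
 * ```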
*/
export declare function calibrateCameraRO(objectPoints: InputArrayOfArrays, imagePoints: InputArrayOfArrays, imageSize: Size, iFixedPoint: int, cameraMatrix: InputOutputArray, distCoeffs: InputOutputArray, rvecs: OutputArrayOfArrays, tvecs: OutputArrayOfArrays, newObjPoints: OutputArray, stdDeviationsIntrinsics: OutputArray, stdDeviationsExtrinsics: OutputArray, stdDeviationsObjPoints: OutputArray, perViewErrors: OutputArray, flags?: int, criteria?: TermCriteria): double;
/**
* This is an overloaded member function, provided for convenience. It differs from the above function
* only in what argument(s) it accepts.
*/
export declare function calibrateCameraRO(objectPoints: InputArrayOfArrays, imagePoints: InputArrayOfArrays, imageSize: Size, iFixedPoint: int, cameraMatrix: InputOutputArray, distCoeffs: InputOutputArray, rvecs: OutputArrayOfArrays, tvecs: OutputArrayOfArrays, newObjPoints: OutputArray, flags?: int, criteria?: TermCriteria): double;
/**
* The function performs the Hand-Eye calibration using various methods. One approach consists in
* estimating the rotation then the translation (separable solutions) and the following methods are
* implemented:
*
 * - R. Tsai, R. Lenz: A New Technique for Fully Autonomous and Efficient 3D Robotics Hand/Eye
 *   Calibration (Tsai89)
 * - F. Park, B. Martin: Robot Sensor Calibration: Solving AX = XB on the Euclidean Group (Park94)
 * - R. Horaud, F. Dornaika: Hand-Eye Calibration (Horaud95)
 *
 * Another approach consists in estimating the rotation and the translation simultaneously
 * (simultaneous solutions), with the following implemented methods:
 *
 * - N. Andreff, R. Horaud, B. Espiau: On-line Hand-Eye Calibration (Andreff99)
 * - K. Daniilidis: Hand-Eye Calibration Using Dual Quaternions (Daniilidis98)
*
 * The hand-eye calibration problem consists in estimating the transformation between a camera
 * ("eye") mounted on a robot gripper ("hand") and the gripper itself.
*
* The calibration procedure is the following:
*
 * - a static calibration pattern is used to estimate the transformation between the target frame
 *   and the camera frame
 * - the robot gripper is moved in order to acquire several poses
 * - for each pose, the homogeneous transformation between the gripper frame and the robot base
 *   frame is recorded using, for instance, the robot kinematics: `\\[ \\begin{bmatrix} X_b\\\\ Y_b\\\\ Z_b\\\\ 1
 *   \\end{bmatrix} = \\begin{bmatrix} _{}^{b}\\textrm{R}_g & _{}^{b}\\textrm{t}_g \\\\ 0_{1 \\times 3} &
 *   1 \\end{bmatrix} \\begin{bmatrix} X_g\\\\ Y_g\\\\ Z_g\\\\ 1 \\end{bmatrix} \\]`
 * - for each pose, the homogeneous transformation between the calibration target frame and the
 *   camera frame is recorded using, for instance, a pose estimation method (PnP) from 2D-3D point
 *   correspondences: `\\[ \\begin{bmatrix} X_c\\\\ Y_c\\\\ Z_c\\\\ 1 \\end{bmatrix} = \\begin{bmatrix}
 *   _{}^{c}\\textrm{R}_t & _{}^{c}\\textrm{t}_t \\\\ 0_{1 \\times 3} & 1 \\end{bmatrix} \\begin{bmatrix}
 *   X_t\\\\ Y_t\\\\ Z_t\\\\ 1 \\end{bmatrix} \\]`
*
* The Hand-Eye calibration procedure returns the following homogeneous transformation `\\[
* \\begin{bmatrix} X_g\\\\ Y_g\\\\ Z_g\\\\ 1 \\end{bmatrix} = \\begin{bmatrix} _{}^{g}\\textrm{R}_c &
* _{}^{g}\\textrm{t}_c \\\\ 0_{1 \\times 3} & 1 \\end{bmatrix} \\begin{bmatrix} X_c\\\\ Y_c\\\\
* Z_c\\\\ 1 \\end{bmatrix} \\]`
*
* This problem is also known as solving the `$\\mathbf{A}\\mathbf{X}=\\mathbf{X}\\mathbf{B}$`
* equation: `\\[ \\begin{align*} ^{b}{\\textrm{T}_g}^{(1)} \\hspace{0.2em} ^{g}\\textrm{T}_c
* \\hspace{0.2em} ^{c}{\\textrm{T}_t}^{(1)} &= \\hspace{0.1em} ^{b}{\\textrm{T}_g}^{(2)}
* \\hspace{0.2em} ^{g}\\textrm{T}_c \\hspace{0.2em} ^{c}{\\textrm{T}_t}^{(2)} \\\\
* (^{b}{\\textrm{T}_g}^{(2)})^{-1} \\hspace{0.2em} ^{b}{\\textrm{T}_g}^{(1)} \\hspace{0.2em}
* ^{g}\\textrm{T}_c &= \\hspace{0.1em} ^{g}\\textrm{T}_c \\hspace{0.2em} ^{c}{\\textrm{T}_t}^{(2)}
* (^{c}{\\textrm{T}_t}^{(1)})^{-1} \\\\ \\textrm{A}_i \\textrm{X} &= \\textrm{X} \\textrm{B}_i \\\\
* \\end{align*} \\]`
*
*
 * A minimum of 2 motions with non-parallel rotation axes is necessary to determine the hand-eye
* transformation. So at least 3 different poses are required, but it is strongly recommended to use
* many more poses.
*
* @param R_gripper2base Rotation part extracted from the homogeneous matrix that transforms a point
* expressed in the gripper frame to the robot base frame ( $_{}^{b}\textrm{T}_g$). This is a vector
* (vector<Mat>) that contains the rotation matrices for all the transformations from gripper frame to
* robot base frame.
*
* @param t_gripper2base Translation part extracted from the homogeneous matrix that transforms a point
* expressed in the gripper frame to the robot base frame ( $_{}^{b}\textrm{T}_g$). This is a vector
* (vector<Mat>) that contains the translation vectors for all the transformations from gripper frame
* to robot base frame.
*
* @param R_target2cam Rotation part extracted from the homogeneous matrix that transforms a point
* expressed in the target frame to the camera frame ( $_{}^{c}\textrm{T}_t$). This is a vector
* (vector<Mat>) that contains the rotation matrices for all the transformations from calibration
* target frame to camera frame.
*
 * @param t_target2cam Translation part extracted from the homogeneous matrix that transforms a point
* expressed in the target frame to the camera frame ( $_{}^{c}\textrm{T}_t$). This is a vector
* (vector<Mat>) that contains the translation vectors for all the transformations from calibration
* target frame to camera frame.
*
* @param R_cam2gripper Estimated rotation part extracted from the homogeneous matrix that transforms a
* point expressed in the camera frame to the gripper frame ( $_{}^{g}\textrm{T}_c$).
*
* @param t_cam2gripper Estimated translation part extracted from the homogeneous matrix that
* transforms a point expressed in the camera frame to the gripper frame ( $_{}^{g}\textrm{T}_c$).
*
 * @param method One of the implemented Hand-Eye calibration methods, see cv::HandEyeCalibrationMethod
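 *
 * A sketch of the expected data layout (assumptions: per-pose 3x3 rotation and 3x1 translation Mats
 * have already been computed, e.g. from robot kinematics and solvePnP; the `poses` record is
 * hypothetical):
 *
 * ```ts
 * const R_gripper2base = new cv.MatVector();
 * const t_gripper2base = new cv.MatVector();
 * const R_target2cam = new cv.MatVector();
 * const t_target2cam = new cv.MatVector();
 * for (const p of poses) {
 *   R_gripper2base.push_back(p.Rg2b); // 3x3, gripper -> base rotation
 *   t_gripper2base.push_back(p.tg2b); // 3x1, gripper -> base translation
 *   R_target2cam.push_back(p.Rt2c);   // 3x3, target -> camera rotation
 *   t_target2cam.push_back(p.tt2c);   // 3x1, target -> camera translation
 * }
 * const R_cam2gripper = new cv.Mat();
 * const t_cam2gripper = new cv.Mat();
 * cv.calibrateHandEye(R_gripper2base, t_gripper2base, R_target2cam, t_target2cam,
 *                     R_cam2gripper, t_cam2gripper);
 * ```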
*/
export declare function calibrateHandEye(R_gripper2base: InputArrayOfArrays, t_gripper2base: InputArrayOfArrays, R_target2cam: InputArrayOfArrays, t_target2cam: InputArrayOfArrays, R_cam2gripper: OutputArray, t_cam2gripper: OutputArray, method?: HandEyeCalibrationMethod): void;
/**
* The function computes various useful camera characteristics from the previously estimated camera
* matrix.
*
 * Do keep in mind that the unit 'mm' stands for whatever unit of measure one chooses for the
 * chessboard pitch (it can thus be any value).
*
* @param cameraMatrix Input camera matrix that can be estimated by calibrateCamera or stereoCalibrate
* .
*
* @param imageSize Input image size in pixels.
*
* @param apertureWidth Physical width in mm of the sensor.
*
* @param apertureHeight Physical height in mm of the sensor.
*
* @param fovx Output field of view in degrees along the horizontal sensor axis.
*
* @param fovy Output field of view in degrees along the vertical sensor axis.
*
* @param focalLength Focal length of the lens in mm.
*
* @param principalPoint Principal point in mm.
*
* @param aspectRatio $f_y/f_x$
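 *
 * For reference, the relations the function evaluates can be written out directly from the pinhole
 * model (a worked sketch, not a call into the library; fx, fy, cx, cy are the camera matrix
 * entries, w and h the image size in pixels, apertureWidthMm/apertureHeightMm the sensor size):
 *
 * ```ts
 * const fovxDeg = (Math.atan(cx / fx) + Math.atan((w - cx) / fx)) * 180 / Math.PI;
 * const fovyDeg = (Math.atan(cy / fy) + Math.atan((h - cy) / fy)) * 180 / Math.PI;
 * const focalLengthMm = fx * apertureWidthMm / w;
 * const principalPointMm = { x: cx * apertureWidthMm / w, y: cy * apertureHeightMm / h };
 * const aspectRatio = fy / fx;
 * ```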
*/
export declare function calibrationMatrixValues(cameraMatrix: InputArray, imageSize: Size, apertureWidth: double, apertureHeight: double, fovx: any, fovy: any, focalLength: any, principalPoint: any, aspectRatio: any): void;
export declare function checkChessboard(img: InputArray, size: Size): bool;
/**
* The functions compute:
*
* `\\[\\begin{array}{l} \\texttt{rvec3} = \\mathrm{rodrigues} ^{-1} \\left ( \\mathrm{rodrigues} (
* \\texttt{rvec2} ) \\cdot \\mathrm{rodrigues} ( \\texttt{rvec1} ) \\right ) \\\\ \\texttt{tvec3} =
* \\mathrm{rodrigues} ( \\texttt{rvec2} ) \\cdot \\texttt{tvec1} + \\texttt{tvec2} \\end{array} ,\\]`
*
* where `$\\mathrm{rodrigues}$` denotes a rotation vector to a rotation matrix transformation, and
* `$\\mathrm{rodrigues}^{-1}$` denotes the inverse transformation. See Rodrigues for details.
*
* Also, the functions can compute the derivatives of the output vectors with regards to the input
* vectors (see matMulDeriv ). The functions are used inside stereoCalibrate but can also be used in
* your own code where Levenberg-Marquardt or another gradient-based solver is used to optimize a
* function that contains a matrix multiplication.
*
* @param rvec1 First rotation vector.
*
* @param tvec1 First translation vector.
*
* @param rvec2 Second rotation vector.
*
* @param tvec2 Second translation vector.
*
* @param rvec3 Output rotation vector of the superposition.
*
* @param tvec3 Output translation vector of the superposition.
*
* @param dr3dr1 Optional output derivative of rvec3 with regard to rvec1
*
* @param dr3dt1 Optional output derivative of rvec3 with regard to tvec1
*
* @param dr3dr2 Optional output derivative of rvec3 with regard to rvec2
*
* @param dr3dt2 Optional output derivative of rvec3 with regard to tvec2
*
* @param dt3dr1 Optional output derivative of tvec3 with regard to rvec1
*
* @param dt3dt1 Optional output derivative of tvec3 with regard to tvec1
*
* @param dt3dr2 Optional output derivative of tvec3 with regard to rvec2
*
* @param dt3dt2 Optional output derivative of tvec3 with regard to tvec2
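 *
 * A small TypeScript sketch with concrete numbers (`cv` is the module import shown in the
 * calibrateCamera example; availability of composeRT in a given build is assumed):
 *
 * ```ts
 * const rvec1 = cv.matFromArray(3, 1, cv.CV_64F, [0, 0, Math.PI / 2]); // 90 deg about Z
 * const tvec1 = cv.matFromArray(3, 1, cv.CV_64F, [1, 0, 0]);
 * const rvec2 = cv.matFromArray(3, 1, cv.CV_64F, [0, 0, 0]);           // identity rotation
 * const tvec2 = cv.matFromArray(3, 1, cv.CV_64F, [0, 2, 0]);
 * const rvec3 = new cv.Mat();
 * const tvec3 = new cv.Mat();
 * cv.composeRT(rvec1, tvec1, rvec2, tvec2, rvec3, tvec3);
 * // Per the formula above: rvec3 = rvec1 and tvec3 = R(rvec2) * tvec1 + tvec2 = (1, 2, 0).
 * ```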
*/
export declare function composeRT(rvec1: InputArray, tvec1: InputArray, rvec2: InputArray, tvec2: InputArray, rvec3: OutputArray, tvec3: OutputArray, dr3dr1?: OutputArray, dr3dt1?: OutputArray, dr3dr2?: OutputArray, dr3dt2?: OutputArray, dt3dr1?: OutputArray, dt3dt1?: OutputArray, dt3dr2?: OutputArray, dt3dt2?: OutputArray): void;
/**
* For every point in one of the two images of a stereo pair, the function finds the equation of the
* corresponding epipolar line in the other image.
*
* From the fundamental matrix definition (see findFundamentalMat ), line `$l^{(2)}_i$` in the second
* image for the point `$p^{(1)}_i$` in the first image (when whichImage=1 ) is computed as:
*
* `\\[l^{(2)}_i = F p^{(1)}_i\\]`
*
* And vice versa, when whichImage=2, `$l^{(1)}_i$` is computed from `$p^{(2)}_i$` as:
*
* `\\[l^{(1)}_i = F^T p^{(2)}_i\\]`
*
* Line coefficients are defined up to a scale. They are normalized so that `$a_i^2+b_i^2=1$` .
*
* @param points Input points. $N \times 1$ or $1 \times N$ matrix of type CV_32FC2 or vector<Point2f>
* .
*
* @param whichImage Index of the image (1 or 2) that contains the points .
*
* @param F Fundamental matrix that can be estimated using findFundamentalMat or stereoRectify .
*
* @param lines Output vector of the epipolar lines corresponding to the points in the other image.
* Each line $ax + by + c=0$ is encoded by 3 numbers $(a, b, c)$ .
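 *
 * A sketch of querying epilines in the second image for points from the first (F is assumed to
 * come from findFundamentalMat):
 *
 * ```ts
 * const pts = cv.matFromArray(3, 1, cv.CV_32FC2, [100, 120, 250, 80, 400, 300]); // 3 (x, y) points
 * const lines = new cv.Mat();
 * cv.computeCorrespondEpilines(pts, 1, F, lines); // whichImage = 1 -> lines live in image 2
 * // lines is 3x1 CV_32FC3; each entry (a, b, c) satisfies ax + by + c = 0 with a^2 + b^2 = 1.
 * ```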
*/
export declare function computeCorrespondEpilines(points: InputArray, whichImage: int, F: InputArray, lines: OutputArray): void;
/**
 * The function converts points from homogeneous to Euclidean space using perspective projection. That is,
* each point (x1, x2, ... x(n-1), xn) is converted to (x1/xn, x2/xn, ..., x(n-1)/xn). When xn=0, the
* output point coordinates will be (0,0,0,...).
*
* @param src Input vector of N-dimensional points.
*
* @param dst Output vector of N-1-dimensional points.
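 *
 * A tiny sketch of the perspective division, including the w = 0 convention:
 *
 * ```ts
 * const src = cv.matFromArray(2, 1, cv.CV_32FC3, [
 *   4, 8, 2,  // (4, 8, 2) -> (2, 4)
 *   1, 5, 0,  // w = 0     -> (0, 0)
 * ]);
 * const dst = new cv.Mat();
 * cv.convertPointsFromHomogeneous(src, dst); // dst: 2x1 CV_32FC2
 * ```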
*/
export declare function convertPointsFromHomogeneous(src: InputArray, dst: OutputArray): void;
/**
* The function converts 2D or 3D points from/to homogeneous coordinates by calling either
* convertPointsToHomogeneous or convertPointsFromHomogeneous.
*
* The function is obsolete. Use one of the previous two functions instead.
*
* @param src Input array or vector of 2D, 3D, or 4D points.
*
* @param dst Output vector of 2D, 3D, or 4D points.
*/
export declare function convertPointsHomogeneous(src: InputArray, dst: OutputArray): void;
/**
* The function converts points from Euclidean to homogeneous space by appending 1's to the tuple of
* point coordinates. That is, each point (x1, x2, ..., xn) is converted to (x1, x2, ..., xn, 1).
*
* @param src Input vector of N-dimensional points.
*
* @param dst Output vector of N+1-dimensional points.
*/
export declare function convertPointsToHomogeneous(src: InputArray, dst: OutputArray): void;
/**
* The function implements the Optimal Triangulation Method (see Multiple View Geometry for details).
* For each given point correspondence points1[i] <-> points2[i], and a fundamental matrix F, it
* computes the corrected correspondences newPoints1[i] <-> newPoints2[i] that minimize the geometric
* error `$d(points1[i], newPoints1[i])^2 + d(points2[i],newPoints2[i])^2$` (where `$d(a,b)$` is the
* geometric distance between points `$a$` and `$b$` ) subject to the epipolar constraint
* `$newPoints2^T * F * newPoints1 = 0$` .
*
* @param F 3x3 fundamental matrix.
*
* @param points1 1xN array containing the first set of points.
*
* @param points2 1xN array containing the second set of points.
*
* @param newPoints1 The optimized points1.
*
* @param newPoints2 The optimized points2.
*/
export declare function correctMatches(F: InputArray, points1: InputArray, points2: InputArray, newPoints1: OutputArray, newPoints2: OutputArray): void;
/**
 * This function decomposes an essential matrix E using SVD decomposition HartleyZ00 . Generally, 4
 * possible poses exist for a given E. They are `$[R_1, t]$`, `$[R_1, -t]$`, `$[R_2, t]$`, `$[R_2,
* -t]$`. By decomposing E, you can only get the direction of the translation, so the function returns
* unit t.
*
* @param E The input essential matrix.
*
* @param R1 One possible rotation matrix.
*
* @param R2 Another possible rotation matrix.
*
* @param t One possible translation.
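 *
 * A sketch of enumerating the four candidate poses (E is assumed to come from findEssentialMat):
 *
 * ```ts
 * const R1 = new cv.Mat(), R2 = new cv.Mat(), t = new cv.Mat();
 * cv.decomposeEssentialMat(E, R1, R2, t);
 * // Candidates: [R1, t], [R1, -t], [R2, t], [R2, -t]; t has unit length.
 * // A cheirality check (triangulated points in front of both cameras) picks the valid one.
 * ```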
*/
export declare function decomposeEssentialMat(E: InputArray, R1: OutputArray, R2: OutputArray, t: OutputArray): void;
/**
* This function extracts relative camera motion between two views observing a planar object from the
* homography H induced by the plane. The intrinsic camera matrix K must also be provided. The function
* may return up to four mathematical solution sets. At least two of the solutions may further be
 * invalidated, if point correspondences are available, by applying a positive depth constraint (all points
* must be in front of the camera). The decomposition method is described in detail in Malis .
*
* @param H The input homography matrix between two images.
*
* @param K The input intrinsic camera calibration matrix.
*
* @param rotations Array of rotation matrices.
*
* @param translations Array of translation matrices.
*
* @param normals Array of plane normal matrices.
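 *
 * A sketch (H is assumed to come from findHomography, K is the intrinsic camera matrix):
 *
 * ```ts
 * const rotations = new cv.MatVector();
 * const translations = new cv.MatVector();
 * const normals = new cv.MatVector();
 * const n = cv.decomposeHomographyMat(H, K, rotations, translations, normals);
 * // n <= 4 candidate (R, t, normal) solutions; prune them with a positive-depth check or with
 * // filterHomographyDecompByVisibleRefpoints when point correspondences are available.
 * ```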
*/
export declare function decomposeHomographyMat(H: InputArray, K: InputArray, rotations: OutputArrayOfArrays, translations: OutputArrayOfArrays, normals: OutputArrayOfArrays): int;
/**
* The function computes a decomposition of a projection matrix into a calibration and a rotation
* matrix and the position of a camera.
*
* It optionally returns three rotation matrices, one for each axis, and three Euler angles that could
* be used in OpenGL. Note, there is always more than one sequence of rotations about the three
 * principal axes that results in the same orientation of an object, e.g. see Slabaugh . The returned
 * three rotation matrices and corresponding three Euler angles are only one of the possible solutions.
*
* The function is based on RQDecomp3x3 .
*
* @param projMatrix 3x4 input projection matrix P.
*
* @param cameraMatrix Output 3x3 camera matrix K.
*
* @param rotMatrix Output 3x3 external rotation matrix R.
*
* @param transVect Output 4x1 translation vector T.
*
* @param rotMatrixX Optional 3x3 rotation matrix around x-axis.
*
* @param rotMatrixY Optional 3x3 rotation matrix around y-axis.
*
* @param rotMatrixZ Optional 3x3 rotation matrix around z-axis.
*
* @param eulerAngles Optional three-element vector containing three Euler angles of rotation in
* degrees.
*/
export declare function decomposeProjectionMatrix(projMatrix: InputArray, cameraMatrix: OutputArray, rotMatrix: OutputArray, transVect: OutputArray, rotMatrixX?: OutputArray, rotMatrixY?: OutputArray, rotMatrixZ?: OutputArray, eulerAngles?: OutputArray): void;
/**
* The function draws individual chessboard corners detected either as red circles if the board was not
* found, or as colored corners connected with lines if the board was found.
*
* @param image Destination image. It must be an 8-bit color image.
*
* @param patternSize Number of inner corners per a chessboard row and column (patternSize =
* cv::Size(points_per_row,points_per_column)).
*
* @param corners Array of detected corners, the output of findChessboardCorners.
*
* @param patternWasFound Parameter indicating whether the complete board was found or not. The return
* value of findChessboardCorners should be passed here.
*/
export declare function drawChessboardCorners(image: InputOutputArray, patternSize: Size, corners: InputArray, patternWasFound: bool): void;
/**
* [solvePnP]
*
 * This function draws the axes of the world/object coordinate system w.r.t. the camera frame. OX is
* drawn in red, OY in green and OZ in blue.
*
* @param image Input/output image. It must have 1 or 3 channels. The number of channels is not
* altered.
*
* @param cameraMatrix Input 3x3 floating-point matrix of camera intrinsic parameters. $A =
* \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}$
*
* @param distCoeffs Input vector of distortion coefficients $(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6
* [, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])$ of 4, 5, 8, 12 or 14 elements. If the vector is empty,
* the zero distortion coefficients are assumed.
*
* @param rvec Rotation vector (see Rodrigues ) that, together with tvec, brings points from the model
* coordinate system to the camera coordinate system.
*
* @param tvec Translation vector.
*
 * @param length Length of the painted axes in the same unit as tvec (usually in meters).
*
* @param thickness Line thickness of the painted axes.
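 *
 * A sketch of the typical solvePnP + drawFrameAxes pairing (solvePnP and browser-canvas rendering
 * are assumed to be available in the build; objPts/imgPts are pre-matched 3D-2D correspondences):
 *
 * ```ts
 * const rvec = new cv.Mat();
 * const tvec = new cv.Mat();
 * cv.solvePnP(objPts, imgPts, cameraMatrix, distCoeffs, rvec, tvec);
 * cv.drawFrameAxes(frame, cameraMatrix, distCoeffs, rvec, tvec, 0.1); // 0.1 in tvec units
 * cv.imshow("outputCanvas", frame); // draw to a <canvas id="outputCanvas">
 * ```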
*/
export declare function drawFrameAxes(image: InputOutputArray, cameraMatrix: InputArray, distCoeffs: InputArray, rvec: InputArray, tvec: InputArray, length: float, thickness?: int): void;
/**
* It computes `\\[ \\begin{bmatrix} x\\\\ y\\\\ \\end{bmatrix} = \\begin{bmatrix} a_{11} & a_{12}\\\\
* a_{21} & a_{22}\\\\ \\end{bmatrix} \\begin{bmatrix} X\\\\ Y\\\\ \\end{bmatrix} + \\begin{bmatrix}
* b_1\\\\ b_2\\\\ \\end{bmatrix} \\]`
*
 * Returns the 2D affine transformation matrix (`$2 \\times 3$`), or an empty matrix if the
 * transformation could not be estimated. The returned matrix has the following form: `\\[ \\begin{bmatrix} a_{11} & a_{12} &
* b_1\\\\ a_{21} & a_{22} & b_2\\\\ \\end{bmatrix} \\]`
* The function estimates an optimal 2D affine transformation between two 2D point sets using the
* selected robust algorithm.
*
* The computed transformation is then refined further (using only inliers) with the
* Levenberg-Marquardt method to reduce the re-projection error even more.
*
* The RANSAC method can handle practically any ratio of outliers but needs a threshold to distinguish
* inliers from outliers. The method LMeDS does not need any threshold but it works correctly only when
* there are more than 50% of inliers.
*
* [estimateAffinePartial2D], [getAffineTransform]
*
* @param from First input 2D point set containing $(X,Y)$.
*
* @param to Second input 2D point set containing $(x,y)$.
*
* @param inliers Output vector indicating which points are inliers (1-inlier, 0-outlier).
*
 * @param method Robust method used to compute the transformation. The following methods are possible:
 * cv::RANSAC (RANSAC-based robust method) and cv::LMEDS (Least-Median robust method). RANSAC is the
 * default method.
*
* @param ransacReprojThreshold Maximum reprojection error in the RANSAC algorithm to consider a point
* as an inlier. Applies only to RANSAC.
*
* @param maxIters The maximum number of robust method iterations.
*
* @param confidence Confidence level, between 0 and 1, for the estimated transformation. Anything
* between 0.95 and 0.99 is usually good enough. Values too close to 1 can slow down the estimation
* significantly. Values lower than 0.8-0.9 can result in an incorrectly estimated transformation.
*
* @param refineIters Maximum number of iterations of refining algorithm (Levenberg-Marquardt). Passing
 * 0 will disable refining, so the output matrix will be the output of the robust method.
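 *
 * A sketch with a known transform (uniform scale 2 plus a shift), to show the data layout:
 *
 * ```ts
 * const from = cv.matFromArray(4, 1, cv.CV_32FC2, [0, 0, 1, 0, 0, 1, 1, 1]);
 * const to = cv.matFromArray(4, 1, cv.CV_32FC2, [10, 20, 12, 20, 10, 22, 12, 22]);
 * const inliers = new cv.Mat();
 * const M = cv.estimateAffine2D(from, to, inliers); // 2x3 CV_64F; empty if estimation failed
 * // Here M should approximate [[2, 0, 10], [0, 2, 20]].
 * ```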
*/
export declare function estimateAffine2D(from: InputArray, to: InputArray, inliers?: OutputArray, method?: int, ransacReprojThreshold?: double, maxIters?: size_t, confidence?: double, refineIters?: size_t): any;
/**
* It computes `\\[ \\begin{bmatrix} x\\\\ y\\\\ z\\\\ \\end{bmatrix} = \\begin{bmatrix} a_{11} &
* a_{12} & a_{13}\\\\ a_{21} & a_{22} & a_{23}\\\\ a_{31} & a_{32} & a_{33}\\\\ \\end{bmatrix}
* \\begin{bmatrix} X\\\\ Y\\\\ Z\\\\ \\end{bmatrix} + \\begin{bmatrix} b_1\\\\ b_2\\\\ b_3\\\\
* \\end{bmatrix} \\]`
*
* The function estimates an optimal 3D affine transformation between two 3D point sets using the
* RANSAC algorithm.
*
* @param src First input 3D point set containing $(X,Y,Z)$.
*
* @param dst Second input 3D point set containing $(x,y,z)$.
*
* @param out Output 3D affine transformation matrix $3 \times 4$ of the form \[ \begin{bmatrix} a_{11}
* & a_{12} & a_{13} & b_1\\ a_{21} & a_{22} & a_{23} & b_2\\ a_{31} & a_{32} & a_{33} & b_3\\
* \end{bmatrix} \]
*
* @param inliers Output vector indicating which points are inliers (1-inlier, 0-outlier).
*
* @param ransacThreshold Maximum reprojection error in the RANSAC algorithm to consider a point as an
* inlier.
*
* @param confidence Confidence level, between 0 and 1, for the estimated transformation. Anything
* between 0.95 and 0.99 is usually good enough. Values too close to 1 can slow down the estimation
* significantly. Values lower than 0.8-0.9 can result in an incorrectly estimated transformation.
*/
export declare function estimateAffine3D(src: InputArray, dst: InputArray, out: OutputArray, inliers: OutputArray, ransacThreshold?: double, confidence?: double): int;
/**
 * Returns the 2D affine transformation (4 degrees of freedom) matrix (`$2 \\times 3$`), or an empty
 * matrix if the transformation could not be estimated.
* The function estimates an optimal 2D affine transformation with 4 degrees of freedom limited to
* combinations of translation, rotation, and uniform scaling. Uses the selected algorithm for robust
* estimation.
*
* The computed transformation is then refined further (using only inliers) with the
* Levenberg-Marquardt method to reduce the re-projection error even more.
*
* Estimated transformation matrix is: `\\[ \\begin{bmatrix} \\cos(\\theta) \\cdot s & -\\sin(\\theta)
* \\cdot s & t_x \\\\ \\sin(\\theta) \\cdot s & \\cos(\\theta) \\cdot s & t_y \\end{bmatrix} \\]`
* Where `$ \\theta $` is the rotation angle, `$ s $` the scaling factor and `$ t_x, t_y $` are
* translations in `$ x, y $` axes respectively.
*
 * The RANSAC method can handle practically any ratio of outliers but needs a threshold to distinguish
* inliers from outliers. The method LMeDS does not need any threshold but it works correctly only when
* there are more than 50% of inliers.
*
* [estimateAffine2D], [getAffineTransform]
*
* @param from First input 2D point set.
*
* @param to Second input 2D point set.
*
* @param inliers Output vector indicating which points are inliers.
*
 * @param method Robust method used to compute the transformation. The following methods are possible:
 * cv::RANSAC (RANSAC-based robust method) and cv::LMEDS (Least-Median robust method). RANSAC is the
 * default method.
*
* @param ransacReprojThreshold Maximum reprojection error in the RANSAC algorithm to consider a point
* as an inlier. Applies only to RANSAC.
*
* @param maxIters The maximum number of robust method iterations.
*
* @param confidence Confidence level, between 0 and 1, for the estimated transformation. Anything
* between 0.95 and 0.99 is usually good enough. Values too close to 1 can slow down the estimation
* significantly. Values lower than 0.8-0.9 can result in an incorrectly estimated transformation.
*
* @param refineIters Maximum number of iterations of refining algorithm (Levenberg-Marquardt). Passing
 * 0 will disable refining, so the output matrix will be the output of the robust method.
*/
export declare function estimateAffinePartial2D(from: InputArray, to: InputArray, inliers?: OutputArray, method?: int, ransacReprojThreshold?: double, maxIters?: size_t, confidence?: double, refineIters?: size_t): any;
/**
* This function is intended to filter the output of the decomposeHomographyMat based on additional
* information as described in Malis . The summary of the method: the decomposeHomographyMat function
* returns 2 unique solutions and their "opposites" for a total of 4 solutions. If we have access to
* the sets of points visible in the camera frame before and after the homography transformation is
* applied, we can determine which are the true potential solutions and which are the opposites by
* verifying which homographies are consistent with all visible reference points being in front of the
* camera. The inputs are left unchanged; the filtered solution set is returned as indices into the
* existing one.
*
* @param rotations Vector of rotation matrices.
*
* @param normals Vector of plane normal matrices.
*
* @param beforePoints Vector of (rectified) visible reference points before the homography is applied
*
* @param afterPoints Vector of (rectified) visible reference points after the homography is applied
*
* @param possibleSolutions Vector of int indices representing the viable solution set after filtering
*
* @param pointsMask optional Mat/Vector of 8u type representing the mask for the inliers as given by
* the findHomography function
*/
export declare function filterHomographyDecompByVisibleRefpoints(rotations: InputArrayOfArrays, normals: InputArrayOfArrays, beforePoints: InputArray, afterPoints: InputArray, possibleSolutions: OutputArray, pointsMask?: InputArray): void;
/**
* @param img The input 16-bit signed disparity image
*
* @param newVal The disparity value used to paint-off the speckles
*
* @param maxSpeckleSize The maximum speckle size to consider it a speckle. Larger blobs are not
* affected by the algorithm
*
* @param maxDiff Maximum difference between neighbor disparity pixels to put them into the same blob.
 * Note that since StereoBM, StereoSGBM, and possibly other algorithms return a fixed-point disparity
 * map, where disparity values are multiplied by 16, this scale factor should be taken into account when
* specifying this parameter value.
*
* @param buf The optional temporary buffer to avoid memory allocation within the function.
*/
export declare function filterSpeckles(img: InputOutputArray, newVal: double, maxSpeckleSize: int, maxDiff: double, buf?: InputOutputArray): void;
export declare function find4QuadCornerSubpix(img: InputArray, corners: InputOutputArray, region_size: Size): bool;
/**
* The function attempts to determine whether the input image is a view of the chessboard pattern and
 * locate the internal chessboard corners. The function returns true if all of the corners are found
 * and they are placed in a certain order (row by row, left to right in every row). Otherwise, if the
 * function fails to find all the corners or reorder them, it returns false. For example,
* a regular chessboard has 8 x 8 squares and 7 x 7 internal corners, that is, points where the black
* squares touch each other. The detected coordinates are approximate, and to determine their positions
 * more accurately, the function calls cornerSubPix. You may also use the function cornerSubPix with
* different parameters if returned coordinates are not accurate enough.
*
* Sample usage of detecting and drawing chessboard corners: :
*
* ```cpp
* Size patternsize(8,6); //interior number of corners
* Mat gray = ....; //source image
* vector<Point2f> corners; //this will be filled by the detected corners
*
* //CALIB_CB_FAST_CHECK saves a lot of time on images
* //that do not contain any chessboard corners
* bool patternfound = findChessboardCorners(gray, patternsize, corners,
* CALIB_CB_ADAPTIVE_THRESH + CALIB_CB_NORMALIZE_IMAGE
* + CALIB_CB_FAST_CHECK);
*
* if(patternfound)
* cornerSubPix(gray, corners, Size(11, 11), Size(-1, -1),
* TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));
*
* drawChessboardCorners(img, patternsize, Mat(corners), patternfound);
* ```
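 *
 * A rough TypeScript equivalent for opencv-js users (a sketch; it assumes a build exposing calib3d,
 * source and output <canvas> elements, and the usual manual Mat cleanup):
 *
 * ```ts
 * const img = cv.imread("inputCanvas");            // RGBA source
 * const gray = new cv.Mat();
 * cv.cvtColor(img, gray, cv.COLOR_RGBA2GRAY);
 *
 * const patternSize = new cv.Size(8, 6);           // interior number of corners
 * const corners = new cv.Mat();
 * const patternfound = cv.findChessboardCorners(gray, patternSize, corners,
 *   cv.CALIB_CB_ADAPTIVE_THRESH + cv.CALIB_CB_NORMALIZE_IMAGE + cv.CALIB_CB_FAST_CHECK);
 *
 * if (patternfound) {
 *   cv.cornerSubPix(gray, corners, new cv.Size(11, 11), new cv.Size(-1, -1),
 *     new cv.TermCriteria(cv.TermCriteria_EPS + cv.TermCriteria_MAX_ITER, 30, 0.1));
 * }
 * cv.drawChessboardCorners(img, patternSize, corners, patternfound);
 * cv.imshow("outputCanvas", img);
 * img.delete(); gray.delete(); corners.delete();  // opencv-js Mats must be freed manually
 * ```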
*
* The function requires white space (like a square-thick border, the wider the better) around the
* board to make the detection more robust in various environments. Otherwise, if there is no border
* and the background is dark, the outer black squares cannot be segmented properly and so the square
* grouping and ordering algorithm fails.
*
* @param image Source chessboard view. It must be an 8-bit grayscale or color image.
*
* @param patternSize Number of inner corners per a chessboard row and column ( patternSize =
 * cv::Size(points_per_row,points_per_column) = cv::Size(columns,rows) ).
*
* @param corners Output array of detected corners.
*
 * @param flags Various operation flags that can be zero or a combination of the following values:
 * - CALIB_CB_ADAPTIVE_THRESH: Use adaptive thresholding to convert the image to black and white,
 *   rather than a fixed threshold level (computed from the average image brightness).
 * - CALIB_CB_NORMALIZE_IMAGE: Normalize the image gamma with equalizeHist before applying fixed or
 *   adaptive thresholding.
 * - CALIB_CB_FILTER_QUADS: Use additional criteria (like contour area, perimeter, square-like shape)
 *   to filter out false quads extracted at the contour retrieval stage.
 * - CALIB_CB_FAST_CHECK: Run a fast check on the image that looks for chessboard corners, and
 *   shortcut the call if none is found. This can drastically speed up the call in the degenerate
 *   condition when no chessboard is observed.
*/
export declare function findChessboardCorners(image: InputArray, patternSize: Size, corners: OutputArray, flags?: int): bool;
/**
 * The function is analogous to findChessboardCorners but uses a localized radon transformation
 * approximated by box filters, which is more robust to all sorts of noise and faster on larger
 * images, and is able to directly return the sub-pixel position of the internal chessboard corners.
 * The method is based on the paper duda2018 "Accurate Detection and Localization of Checkerboard
 * Corners for Calibration", demonstrating that the returned sub-pixel positions are more accurate
 * than the ones returned by cornerSubPix, allowing a precise camera calibration for demanding
 * applications.
 *
 * The function requires a white border with roughly the same width as one of the checkerboard fields
* around the whole board to improve the detection in various environments. In addition, because of the
* localized radon transformation it is beneficial to use round corners for the field corners which are
 * located on the outside of the board. The OpenCV documentation illustrates a sample checkerboard
 * optimized for the detection; however, any other checkerboard can be used as well.
*
* @param image Source chessboard view. It must be an 8-bit grayscale or color image.
*
* @param patternSize Number of inner corners per a chessboard row and column ( patternSize =
 * cv::Size(points_per_row,points_per_column) = cv::Size(columns,rows) ).
*
* @param corners Output array of detected corners.
*
 * @param flags Various operation flags that can be zero or a combination of the following values:
 * - CALIB_CB_NORMALIZE_IMAGE: Normalize the image gamma with equalizeHist before detection.
 * - CALIB_CB_EXHAUSTIVE: Run an exhaustive search to improve detection rate.
 * - CALIB_CB_ACCURACY: Up-sample the input image to improve sub-pixel accuracy due to aliasing
 *   effects. This should be used if an accurate camera calibration is required.
*/
export declare function findChessboardCornersSB(image: InputArray, patternSize: Size, corners: OutputArray, flags?: int): bool;
/**
 * The function attempts to determine whether the input image contains a grid of circles. If it does,
 * the function locates the centers of the circles. The function returns true if all of the centers
 * have been found and they have been placed in a certain order (row by row, left to right in every
 * row). Otherwise, if the function fails to find all the centers or reorder them, it returns false.
*
* Sample usage of detecting and drawing the centers of circles: :
*
* ```cpp
* Size patternsize(7,7); //number of centers
* Mat gray = ....; //source image
* vector<Point2f> centers; //this will be filled by the detected centers
*
* bool patternfound = findCirclesGrid(gray, patternsize, centers);
*
* drawChessboardCorners(img, patternsize, Mat(centers), patternfound);
* ```
*
* The function requires white space (like a square-thick border, the wider the better) around the
* board to make the detection more robust in various environments.
*
* @param image grid view of input circles; it must be an 8-bit grayscale or color image.
*
* @param patternSize number of circles per row and column ( patternSize = Size(points_per_row,
 * points_per_column) ).
*
* @param centers output array of detected centers.
*
 * @param flags various operation flags that can be one of the following values:
 * - CALIB_CB_SYMMETRIC_GRID: uses a symmetric pattern of circles.
 * - CALIB_CB_ASYMMETRIC_GRID: uses an asymmetric pattern of circles.
 * - CALIB_CB_CLUSTERING: uses a special algorithm for grid detection. It is more robust to
 *   perspective distortions but much more sensitive to background clutter.
*
* @param blobDetector feature detector that finds blobs like dark circles on light background.
*
* @param parameters struct for finding circles in a grid pattern.
*/
export declare function findCirclesGrid(image: InputArray, patternSize: Size, centers: OutputArray, flags: int, blobDetector: any, parameters: any): bool;
/**
* This is an overloaded member function, provided for convenience. It differs from the above function
* only in what argument(s) it accepts.
*/
export declare function findCirclesGrid(image: InputArray, patternSize: Size, centers: OutputArray, flags?: int, blobDetector?: any): bool;
/**
 * This function estimates the essential matrix based on the five-point algorithm solver in Nister03 .
 * SteweniusCFS is also related. The epipolar geometry is described by the following equation:
*
* `\\[[p_2; 1]^T K^{-T} E K^{-1} [p_1; 1] = 0\\]`
*
* where `$E$` is an essential matrix, `$p_1$` and `$p_2$` are corresponding points in the first and
* the second images, respectively. The result of this function may be passed further to
* decomposeEssentialMat or recoverPose to recover the relative pose between cameras.
*
* @param points1 Array of N (N >= 5) 2D points from the first image. The point coordinates should be
* floating-point (single or double precision).
*
* @param points2 Array of the second image points of the same size and format as points1 .
*
* @param cameraMatrix Camera matrix $K = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}$ . Note
* that this function assumes that points1 and points2 are feature points from cameras with the same
* camera matrix.
*
 * @param method Method for computing an essential matrix:
 * RANSAC for the RANSAC algorithm, LMEDS for the LMedS algorithm.
*
* @param prob Parameter used for the RANSAC or LMedS methods only. It specifies a desirable level of
* confidence (probability) that the estimated matrix is correct.
*
* @param threshold Parameter used for RANSAC. It is the maximum distance from a point to an epipolar
* line in pixels, beyond which the point is considered an outlier and is not used for computing the
* final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the
* point localization, image resolution, and the image noise.
*
* @param mask Output array of N elements, every element of which is set to 0 for outliers and to 1 for
* the other points. The array is computed only in the RANSAC and LMedS methods.
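 *
 * A sketch of a RANSAC estimation (points1/points2 are assumed to be Nx1 CV_64FC2 correspondence
 * Mats, cameraMatrix the shared 3x3 intrinsic matrix):
 *
 * ```ts
 * const mask = new cv.Mat();
 * const E = cv.findEssentialMat(points1, points2, cameraMatrix, cv.RANSAC, 0.999, 1.0, mask);
 * // mask flags the RANSAC inliers; E can be passed on to decomposeEssentialMat (above) to
 * // recover the candidate relative poses.
 * ```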
*/
export declare function findEssentialMat(points1: InputArray, points2: InputArray, cameraMatrix: InputArray, method?: int, prob?: double, threshold?: double, mask?: OutputArray): Mat;