--- /dev/null
+ Object detection with Generalized Ballard and Guil Hough Transform {#tutorial_generalized_hough_ballard_guil}
+ ==================================================================
+
+ @tableofcontents
+
+ @prev_tutorial{tutorial_hough_circle}
+ @next_tutorial{tutorial_remap}
+
+| | |
+| -: | :- |
+| Original author | Markus Heck |
+| Compatibility | OpenCV >= 3.4 |
+
+ Goal
+ ----
+
+ In this tutorial you will learn how to:
+
+ - Use @ref cv::GeneralizedHoughBallard and @ref cv::GeneralizedHoughGuil to detect an object
+
+ Example
+ -------
+
+ ### What does this program do?
+
+ 1. Load the image and template
+
+ ![image](images/generalized_hough_mini_image.jpg)
+ ![template](images/generalized_hough_mini_template.jpg)
+
+ 2. Instantiate @ref cv::GeneralizedHoughBallard with the help of `createGeneralizedHoughBallard()`
+ 3. Instantiate @ref cv::GeneralizedHoughGuil with the help of `createGeneralizedHoughGuil()`
+ 4. Set the required parameters for both GeneralizedHough variants
+ 5. Detect and show found results
+
+ @note
+ - Neither variant can be instantiated directly; using the create methods is required.
+ - Guil Hough is very slow. Calculating the results for the "mini" files used in this tutorial
+ takes only a few seconds. With the image and template at a higher resolution, as shown below,
+ my notebook requires about 5 minutes to calculate a result.
+
+ ![image](images/generalized_hough_image.jpg)
+ ![template](images/generalized_hough_template.jpg)
+
+ ### Code
+
+ The complete code for this tutorial is shown below.
+ @include samples/cpp/tutorial_code/ImgTrans/generalizedHoughTransform.cpp
+
+ Explanation
+ -----------
+
+ ### Load image, template and setup variables
+
+ @snippet samples/cpp/tutorial_code/ImgTrans/generalizedHoughTransform.cpp generalized-hough-transform-load-and-setup
+
+ The position vectors will contain the matches the detectors find.
+ Every entry is a position vector containing four floating-point values:
+
+ - *[0]*: x coordinate of the center point
+ - *[1]*: y coordinate of the center point
+ - *[2]*: scale of the detected object relative to the template
+ - *[3]*: rotation of the detected object in degrees relative to the template
+
+ An example could look as follows: `[200, 100, 0.9, 120]`
+
+ ### Setup parameters
+
+ @snippet samples/cpp/tutorial_code/ImgTrans/generalizedHoughTransform.cpp generalized-hough-transform-setup-parameters
+
+ Finding the optimal values often comes down to trial and error and depends on many factors, such as the image resolution.
+
+ ### Run detection
+
+ @snippet samples/cpp/tutorial_code/ImgTrans/generalizedHoughTransform.cpp generalized-hough-transform-run
+
+ As mentioned above, this step will take some time, especially with larger images and when using Guil.
+
+ ### Draw results and show image
+
+ @snippet samples/cpp/tutorial_code/ImgTrans/generalizedHoughTransform.cpp generalized-hough-transform-draw-results
+
+ Result
+ ------
+
+ ![result image](images/generalized_hough_result_img.jpg)
+
+ The blue rectangle shows the result of @ref cv::GeneralizedHoughBallard and the green rectangles the results of @ref
+ cv::GeneralizedHoughGuil.
+
+ Getting perfect results like those in this example is unlikely if the parameters are not perfectly adapted to the sample.
+ An example with less optimal parameters is shown below.
+ For the Ballard variant, only the center of the result is marked as a black dot on this image. The rectangle would be
+ the same as on the previous image.
+
+ ![less perfect result](images/generalized_hough_less_perfect_result_img.jpg)
Hough Circle Transform {#tutorial_hough_circle}
======================
+@tableofcontents
+
@prev_tutorial{tutorial_hough_lines}
- @next_tutorial{tutorial_remap}
+ @next_tutorial{tutorial_generalized_hough_ballard_guil}
+| | |
+| -: | :- |
+| Original author | Ana Huamán |
+| Compatibility | OpenCV >= 3.0 |
+
Goal
----
Remapping {#tutorial_remap}
=========
- @prev_tutorial{tutorial_hough_circle}
+@tableofcontents
+
+ @prev_tutorial{tutorial_generalized_hough_ballard_guil}
@next_tutorial{tutorial_warp_affine}
+| | |
+| -: | :- |
+| Original author | Ana Huamán |
+| Compatibility | OpenCV >= 3.0 |
+
Goal
----
Image Processing (imgproc module) {#tutorial_table_of_content_imgproc}
=================================
-In this section you will learn about the image processing (manipulation) functions inside OpenCV.
-
+Basic
+-----
- @subpage tutorial_basic_geometric_drawing
-
- *Languages:* C++, Java, Python
-
- *Compatibility:* \> OpenCV 2.0
-
- *Author:* Ana Huamán
-
- We will learn how to draw simple geometry with OpenCV!
-
- @subpage tutorial_random_generator_and_text
-
- *Languages:* C++
-
- *Compatibility:* \> OpenCV 2.0
-
- *Author:* Ana Huamán
-
- We will draw some *fancy-looking* stuff using OpenCV!
-
- @subpage tutorial_gausian_median_blur_bilateral_filter
-
- *Languages:* C++, Java, Python
-
- *Compatibility:* \> OpenCV 2.0
-
- *Author:* Ana Huamán
-
- Let's take a look at some basic linear filters!
-
- @subpage tutorial_erosion_dilatation
-
- *Languages:* C++, Java, Python
-
- *Compatibility:* \> OpenCV 2.0
-
- Author: Ana Huamán
-
- Let's *change* the shape of objects!
-
- @subpage tutorial_opening_closing_hats
-
- *Languages:* C++, Java, Python
-
- *Compatibility:* \> OpenCV 2.0
-
- *Author:* Ana Huamán
-
- Here we investigate different morphology operators
-
- @subpage tutorial_hitOrMiss
-
- *Languages:* C++, Java, Python
-
- *Compatibility:* \> OpenCV 2.4
-
- *Author:* Lorena García
-
- Learn how to find patterns in binary images using the Hit-or-Miss operation
-
- @subpage tutorial_morph_lines_detection
-
- *Languages:* C++, Java, Python
-
- *Compatibility:* \> OpenCV 2.0
-
- *Author:* Theodore Tsesmelis
-
- Here we will show how we can use different morphological operators to extract horizontal and vertical lines
-
- @subpage tutorial_pyramids
-
- *Languages:* C++, Java, Python
-
- *Compatibility:* \> OpenCV 2.0
-
- *Author:* Ana Huamán
-
- What if I need a bigger/smaller image?
-
- @subpage tutorial_threshold
-
- *Languages:* C++, Java, Python
-
- *Compatibility:* \> OpenCV 2.0
-
- *Author:* Ana Huamán
-
- After so much processing, it is time to decide which pixels stay
-
- @subpage tutorial_threshold_inRange
- *Languages:* C++, Java, Python
-
- *Compatibility:* \> OpenCV 2.0
-
- *Author:* Rishiraj Surti
-
- Thresholding operations using inRange function.
-
+Transformations
+---------------
- @subpage tutorial_filter_2d
-
- *Languages:* C++, Java, Python
-
- *Compatibility:* \> OpenCV 2.0
-
- *Author:* Ana Huamán
-
- Where we learn to design our own filters by using OpenCV functions
-
- @subpage tutorial_copyMakeBorder
-
- *Languages:* C++, Java, Python
-
- *Compatibility:* \> OpenCV 2.0
-
- *Author:* Ana Huamán
-
- Where we learn how to pad our images
-
- @subpage tutorial_sobel_derivatives
-
- *Languages:* C++, Java, Python
-
- *Compatibility:* \> OpenCV 2.0
-
- *Author:* Ana Huamán
-
- Where we learn how to calculate gradients and use them to detect edges
-
- @subpage tutorial_laplace_operator
-
- *Languages:* C++, Java, Python
-
- *Compatibility:* \> OpenCV 2.0
-
- *Author:* Ana Huamán
-
- Where we learn about the *Laplace* operator and how to detect edges with it
-
- @subpage tutorial_canny_detector
-
- *Languages:* C++, Java, Python
-
- *Compatibility:* \> OpenCV 2.0
-
- *Author:* Ana Huamán
-
- Where we learn a sophisticated alternative to detect edges
-
- @subpage tutorial_hough_lines
-
- *Languages:* C++, Java, Python
-
- *Compatibility:* \> OpenCV 2.0
-
- *Author:* Ana Huamán
-
- Where we learn how to detect lines
-
- @subpage tutorial_hough_circle
-
- *Languages:* C++, Java, Python
-
- *Compatibility:* \> OpenCV 2.0
-
- *Author:* Ana Huamán
-
- Where we learn how to detect circles
-
+ - @subpage tutorial_generalized_hough_ballard_guil
-
- *Languages:* C++
-
- *Compatibility:* \>= OpenCV 3.4
-
- *Author:* Markus Heck
-
- Detect an object in a picture with the help of GeneralizedHoughBallard and GeneralizedHoughGuil.
-
- @subpage tutorial_remap
-
- *Languages:* C++, Java, Python
-
- *Compatibility:* \> OpenCV 2.0
-
- *Author:* Ana Huamán
-
- Where we learn how to manipulate pixels locations
-
- @subpage tutorial_warp_affine
- *Languages:* C++, Java, Python
-
- *Compatibility:* \> OpenCV 2.0
-
- *Author:* Ana Huamán
-
- Where we learn how to rotate, translate and scale our images
-
+Histograms
+----------
- @subpage tutorial_histogram_equalization
-
- *Languages:* C++, Java, Python
-
- *Compatibility:* \> OpenCV 2.0
-
- *Author:* Ana Huamán
-
- Where we learn how to improve the contrast in our images
-
- @subpage tutorial_histogram_calculation
-
- *Languages:* C++, Java, Python
-
- *Compatibility:* \> OpenCV 2.0
-
- *Author:* Ana Huamán
-
- Where we learn how to create and generate histograms
-
- @subpage tutorial_histogram_comparison
-
- *Languages:* C++, Java, Python
-
- *Compatibility:* \> OpenCV 2.0
-
- *Author:* Ana Huamán
-
- Where we learn to calculate metrics between histograms
-
- @subpage tutorial_back_projection
-
- *Languages:* C++, Java, Python
-
- *Compatibility:* \> OpenCV 2.0
-
- *Author:* Ana Huamán
-
- Where we learn how to use histograms to find similar objects in images
-
- @subpage tutorial_template_matching
- *Languages:* C++, Java, Python
-
- *Compatibility:* \> OpenCV 2.0
-
- *Author:* Ana Huamán
-
- Where we learn how to match templates in an image
-
-- @subpage tutorial_table_of_contents_contours
-
- Learn how to find contours in images and investigate their properties and features.
-
+Contours
+--------
+- @subpage tutorial_find_contours
+- @subpage tutorial_hull
+- @subpage tutorial_bounding_rects_circles
+- @subpage tutorial_bounding_rotated_ellipses
+- @subpage tutorial_moments
+- @subpage tutorial_point_polygon_test
+
+Others
+------
- @subpage tutorial_distance_transform
-
- *Languages:* C++, Java, Python
-
- *Compatibility:* \> OpenCV 2.0
-
- *Author:* Theodore Tsesmelis
-
- Where we learn to segment objects using Laplacian filtering, the Distance Transformation and the Watershed algorithm.
-
- @subpage tutorial_out_of_focus_deblur_filter
-
- *Languages:* C++
-
- *Compatibility:* \> OpenCV 2.0
-
- *Author:* Karpushin Vladislav
-
- You will learn how to recover an out-of-focus image by Wiener filter.
-
- @subpage tutorial_motion_deblur_filter
-
- *Languages:* C++
-
- *Compatibility:* \> OpenCV 2.0
-
- *Author:* Karpushin Vladislav
-
- You will learn how to recover an image with motion blur distortion using a Wiener filter.
-
- @subpage tutorial_anisotropic_image_segmentation_by_a_gst
-
- *Languages:* C++, Python
-
- *Compatibility:* \> OpenCV 2.0
-
- *Author:* Karpushin Vladislav
-
- You will learn how to segment an anisotropic image with a single local orientation by a gradient structure tensor.
-
- @subpage tutorial_periodic_noise_removing_filter
-
- *Languages:* C++
-
- *Compatibility:* \> OpenCV 2.0
-
- *Author:* Karpushin Vladislav
-
- You will learn how to remove periodic noise in the Fourier domain.
where \f$E\f$ is an essential matrix, \f$p_1\f$ and \f$p_2\f$ are corresponding points in the first and the
second images, respectively. The result of this function may be passed further to
- #decomposeEssentialMat or #recoverPose to recover the relative pose between cameras.
+ #decomposeEssentialMat or #recoverPose to recover the relative pose between cameras.
*/
-CV_EXPORTS_W Mat findEssentialMat( InputArray points1, InputArray points2,
- InputArray cameraMatrix, int method,
- double prob, double threshold,
- int maxIters, OutputArray mask = noArray() );
+CV_EXPORTS_W
+Mat findEssentialMat(
+ InputArray points1, InputArray points2,
+ InputArray cameraMatrix, int method = RANSAC,
+ double prob = 0.999, double threshold = 1.0,
+ int maxIters = 1000, OutputArray mask = noArray()
+);
/** @overload */
-CV_EXPORTS_W Mat findEssentialMat( InputArray points1, InputArray points2,
- InputArray cameraMatrix, int method = RANSAC,
- double prob = 0.999, double threshold = 1.0,
- OutputArray mask = noArray() );
+CV_EXPORTS
+Mat findEssentialMat(
+ InputArray points1, InputArray points2,
+ InputArray cameraMatrix, int method,
+ double prob, double threshold,
+ OutputArray mask
+); // TODO remove from OpenCV 5.0
/** @overload
@param points1 Array of N (N \>= 5) 2D points from the first image. The point coordinates should
int mode = StereoSGBM::MODE_SGBM);
};
- is assumed. In cvInitUndistortMap R assumed to be an identity matrix.
+
+//! cv::undistort mode
+enum UndistortTypes
+{
+ PROJ_SPHERICAL_ORTHO = 0,
+ PROJ_SPHERICAL_EQRECT = 1
+};
+
+/** @brief Transforms an image to compensate for lens distortion.
+
+The function transforms an image to compensate for radial and tangential lens distortion.
+
+The function is simply a combination of #initUndistortRectifyMap (with unity R ) and #remap
+(with bilinear interpolation). See the former function for details of the transformation being
+performed.
+
+Those pixels in the destination image, for which there are no corresponding pixels in the source
+image, are filled with zeros (black color).
+
+A particular subset of the source image that will be visible in the corrected image can be regulated
+by newCameraMatrix. You can use #getOptimalNewCameraMatrix to compute the appropriate
+newCameraMatrix depending on your requirements.
+
+The camera matrix and the distortion parameters can be determined using #calibrateCamera. If
+the resolution of images is different from the resolution used at the calibration stage, \f$f_x,
+f_y, c_x\f$ and \f$c_y\f$ need to be scaled accordingly, while the distortion coefficients remain
+the same.
+
+@param src Input (distorted) image.
+@param dst Output (corrected) image that has the same size and type as src .
+@param cameraMatrix Input camera matrix \f$A = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\f$ .
+@param distCoeffs Input vector of distortion coefficients
+\f$(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\f$
+of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, the zero distortion coefficients are assumed.
+@param newCameraMatrix Camera matrix of the distorted image. By default, it is the same as
+cameraMatrix but you may additionally scale and shift the result by using a different matrix.
+ */
+CV_EXPORTS_W void undistort( InputArray src, OutputArray dst,
+ InputArray cameraMatrix,
+ InputArray distCoeffs,
+ InputArray newCameraMatrix = noArray() );
+
+/** @brief Computes the undistortion and rectification transformation map.
+
+The function computes the joint undistortion and rectification transformation and represents the
+result in the form of maps for #remap. The undistorted image looks like the original, as if it is
+captured with a camera using the camera matrix =newCameraMatrix and zero distortion. In case of a
+monocular camera, newCameraMatrix is usually equal to cameraMatrix, or it can be computed by
+#getOptimalNewCameraMatrix for a better control over scaling. In case of a stereo camera,
+newCameraMatrix is normally set to P1 or P2 computed by #stereoRectify .
+
+Also, this new camera is oriented differently in the coordinate space, according to R. That, for
+example, helps to align two heads of a stereo camera so that the epipolar lines on both images
+become horizontal and have the same y-coordinate (in case of a horizontally aligned stereo camera).
+
+The function actually builds the maps for the inverse mapping algorithm that is used by #remap. That
+is, for each pixel \f$(u, v)\f$ in the destination (corrected and rectified) image, the function
+computes the corresponding coordinates in the source image (that is, in the original image from
+camera). The following process is applied:
+\f[
+\begin{array}{l}
+x \leftarrow (u - {c'}_x)/{f'}_x \\
+y \leftarrow (v - {c'}_y)/{f'}_y \\
+{[X\,Y\,W]} ^T \leftarrow R^{-1}*[x \, y \, 1]^T \\
+x' \leftarrow X/W \\
+y' \leftarrow Y/W \\
+r^2 \leftarrow x'^2 + y'^2 \\
+x'' \leftarrow x' \frac{1 + k_1 r^2 + k_2 r^4 + k_3 r^6}{1 + k_4 r^2 + k_5 r^4 + k_6 r^6}
++ 2p_1 x' y' + p_2(r^2 + 2 x'^2) + s_1 r^2 + s_2 r^4\\
+y'' \leftarrow y' \frac{1 + k_1 r^2 + k_2 r^4 + k_3 r^6}{1 + k_4 r^2 + k_5 r^4 + k_6 r^6}
++ p_1 (r^2 + 2 y'^2) + 2 p_2 x' y' + s_3 r^2 + s_4 r^4 \\
+s\vecthree{x'''}{y'''}{1} =
+\vecthreethree{R_{33}(\tau_x, \tau_y)}{0}{-R_{13}(\tau_x, \tau_y)}
+{0}{R_{33}(\tau_x, \tau_y)}{-R_{23}(\tau_x, \tau_y)}
+{0}{0}{1} R(\tau_x, \tau_y) \vecthree{x''}{y''}{1}\\
+map_x(u,v) \leftarrow x''' f_x + c_x \\
+map_y(u,v) \leftarrow y''' f_y + c_y
+\end{array}
+\f]
+where \f$(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\f$
+are the distortion coefficients.
+
+In case of a stereo camera, this function is called twice: once for each camera head, after
+#stereoRectify, which in its turn is called after #stereoCalibrate. But if the stereo camera
+was not calibrated, it is still possible to compute the rectification transformations directly from
+the fundamental matrix using #stereoRectifyUncalibrated. For each camera, the function computes
+homography H as the rectification transformation in a pixel domain, not a rotation matrix R in 3D
+space. R can be computed from H as
+\f[\texttt{R} = \texttt{cameraMatrix} ^{-1} \cdot \texttt{H} \cdot \texttt{cameraMatrix}\f]
+where cameraMatrix can be chosen arbitrarily.
+
+@param cameraMatrix Input camera matrix \f$A=\vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\f$ .
+@param distCoeffs Input vector of distortion coefficients
+\f$(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\f$
+of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, the zero distortion coefficients are assumed.
+@param R Optional rectification transformation in the object space (3x3 matrix). R1 or R2 ,
+computed by #stereoRectify can be passed here. If the matrix is empty, the identity transformation
+is assumed. In #initUndistortRectifyMap R is assumed to be an identity matrix.
+@param newCameraMatrix New camera matrix \f$A'=\vecthreethree{f_x'}{0}{c_x'}{0}{f_y'}{c_y'}{0}{0}{1}\f$.
+@param size Undistorted image size.
+@param m1type Type of the first output map that can be CV_32FC1, CV_32FC2 or CV_16SC2, see #convertMaps
+@param map1 The first output map.
+@param map2 The second output map.
+ */
+CV_EXPORTS_W
+void initUndistortRectifyMap(InputArray cameraMatrix, InputArray distCoeffs,
+ InputArray R, InputArray newCameraMatrix,
+ Size size, int m1type, OutputArray map1, OutputArray map2);
+
+/** @brief Computes the projection and inverse-rectification transformation map. In essence, this is the inverse of
+#initUndistortRectifyMap to accommodate stereo-rectification of projectors ('inverse-cameras') in projector-camera pairs.
+
+The function computes the joint projection and inverse rectification transformation and represents the
+result in the form of maps for #remap. The projected image looks like a distorted version of the original which,
+once projected by a projector, should visually match the original. In case of a monocular camera, newCameraMatrix
+is usually equal to cameraMatrix, or it can be computed by
+#getOptimalNewCameraMatrix for a better control over scaling. In case of a projector-camera pair,
+newCameraMatrix is normally set to P1 or P2 computed by #stereoRectify .
+
+The projector is oriented differently in the coordinate space, according to R. In case of projector-camera pairs,
+this helps align the projector (in the same manner as #initUndistortRectifyMap for the camera) to create a stereo-rectified pair. This
+allows epipolar lines on both images to become horizontal and have the same y-coordinate (in case of a horizontally aligned projector-camera pair).
+
+The function builds the maps for the inverse mapping algorithm that is used by #remap. That
+is, for each pixel \f$(u, v)\f$ in the destination (projected and inverse-rectified) image, the function
+computes the corresponding coordinates in the source image (that is, in the original digital image). The following process is applied:
+
+\f[
+\begin{array}{l}
+\text{newCameraMatrix}\\
+x \leftarrow (u - {c'}_x)/{f'}_x \\
+y \leftarrow (v - {c'}_y)/{f'}_y \\
+
+\\\text{Undistortion}
+\\\scriptsize{\textit{though the equation shown is for radial undistortion, the function implements cv::undistortPoints()}}\\
+r^2 \leftarrow x^2 + y^2 \\
+\theta \leftarrow \frac{1 + k_1 r^2 + k_2 r^4 + k_3 r^6}{1 + k_4 r^2 + k_5 r^4 + k_6 r^6}\\
+x' \leftarrow \frac{x}{\theta} \\
+y' \leftarrow \frac{y}{\theta} \\
+
+\\\text{Rectification}\\
+{[X\,Y\,W]} ^T \leftarrow R*[x' \, y' \, 1]^T \\
+x'' \leftarrow X/W \\
+y'' \leftarrow Y/W \\
+
+\\\text{cameraMatrix}\\
+map_x(u,v) \leftarrow x'' f_x + c_x \\
+map_y(u,v) \leftarrow y'' f_y + c_y
+\end{array}
+\f]
+where \f$(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\f$
+are the distortion coefficients vector distCoeffs.
+
+In case of a stereo-rectified projector-camera pair, this function is called for the projector while #initUndistortRectifyMap is called for the camera head.
+This is done after #stereoRectify, which in turn is called after #stereoCalibrate. If the projector-camera pair
+is not calibrated, it is still possible to compute the rectification transformations directly from
+the fundamental matrix using #stereoRectifyUncalibrated. For the projector and camera, the function computes
+homography H as the rectification transformation in a pixel domain, not a rotation matrix R in 3D
+space. R can be computed from H as
+\f[\texttt{R} = \texttt{cameraMatrix} ^{-1} \cdot \texttt{H} \cdot \texttt{cameraMatrix}\f]
+where cameraMatrix can be chosen arbitrarily.
+
+@param cameraMatrix Input camera matrix \f$A=\vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\f$ .
+@param distCoeffs Input vector of distortion coefficients
+\f$(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\f$
+of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, the zero distortion coefficients are assumed.
+@param R Optional rectification transformation in the object space (3x3 matrix). R1 or R2,
+computed by #stereoRectify can be passed here. If the matrix is empty, the identity transformation
+is assumed.
+@param newCameraMatrix New camera matrix \f$A'=\vecthreethree{f_x'}{0}{c_x'}{0}{f_y'}{c_y'}{0}{0}{1}\f$.
+@param size Distorted image size.
+@param m1type Type of the first output map. Can be CV_32FC1, CV_32FC2 or CV_16SC2, see #convertMaps
+@param map1 The first output map for #remap.
+@param map2 The second output map for #remap.
+ */
+CV_EXPORTS_W
+void initInverseRectificationMap( InputArray cameraMatrix, InputArray distCoeffs,
+ InputArray R, InputArray newCameraMatrix,
+ const Size& size, int m1type, OutputArray map1, OutputArray map2 );
+
+//! initializes maps for #remap for wide-angle
+CV_EXPORTS
+float initWideAngleProjMap(InputArray cameraMatrix, InputArray distCoeffs,
+ Size imageSize, int destImageWidth,
+ int m1type, OutputArray map1, OutputArray map2,
+ enum UndistortTypes projType = PROJ_SPHERICAL_EQRECT, double alpha = 0);
+static inline
+float initWideAngleProjMap(InputArray cameraMatrix, InputArray distCoeffs,
+ Size imageSize, int destImageWidth,
+ int m1type, OutputArray map1, OutputArray map2,
+ int projType, double alpha = 0)
+{
+ return initWideAngleProjMap(cameraMatrix, distCoeffs, imageSize, destImageWidth,
+ m1type, map1, map2, (UndistortTypes)projType, alpha);
+}
+
+/** @brief Returns the default new camera matrix.
+
+The function returns the camera matrix that is either an exact copy of the input cameraMatrix (when
+centerPrincipalPoint=false ), or the modified one (when centerPrincipalPoint=true).
+
+In the latter case, the new camera matrix will be:
+
+\f[\begin{bmatrix} f_x && 0 && ( \texttt{imgSize.width} -1)*0.5 \\ 0 && f_y && ( \texttt{imgSize.height} -1)*0.5 \\ 0 && 0 && 1 \end{bmatrix} ,\f]
+
+where \f$f_x\f$ and \f$f_y\f$ are \f$(0,0)\f$ and \f$(1,1)\f$ elements of cameraMatrix, respectively.
+
+By default, the undistortion functions in OpenCV (see #initUndistortRectifyMap, #undistort) do not
+move the principal point. However, when you work with stereo, it is important to move the principal
+points in both views to the same y-coordinate (which is required by most stereo correspondence
+algorithms), and maybe to the same x-coordinate too. So, you can form the new camera matrix for
+each view where the principal points are located at the center.
+
+@param cameraMatrix Input camera matrix.
+@param imgsize Camera view image size in pixels.
+@param centerPrincipalPoint Location of the principal point in the new camera matrix. The
+parameter indicates whether this location should be at the image center or not.
+ */
+CV_EXPORTS_W
+Mat getDefaultNewCameraMatrix(InputArray cameraMatrix, Size imgsize = Size(),
+ bool centerPrincipalPoint = false);
+
+/** @brief Computes the ideal point coordinates from the observed point coordinates.
+
+The function is similar to #undistort and #initUndistortRectifyMap but it operates on a
+sparse set of points instead of a raster image. Also the function performs a reverse transformation
+to #projectPoints. In case of a 3D object, it does not reconstruct its 3D coordinates, but for a
+planar object, it does, up to a translation vector, if the proper R is specified.
+
+For each observed point coordinate \f$(u, v)\f$ the function computes:
+\f[
+\begin{array}{l}
+x^{"} \leftarrow (u - c_x)/f_x \\
+y^{"} \leftarrow (v - c_y)/f_y \\
+(x',y') = undistort(x^{"},y^{"}, \texttt{distCoeffs}) \\
+{[X\,Y\,W]} ^T \leftarrow R*[x' \, y' \, 1]^T \\
+x \leftarrow X/W \\
+y \leftarrow Y/W \\
+\text{only performed if P is specified:} \\
+u' \leftarrow x {f'}_x + {c'}_x \\
+v' \leftarrow y {f'}_y + {c'}_y
+\end{array}
+\f]
+
+where *undistort* is an approximate iterative algorithm that estimates the normalized original
+point coordinates out of the normalized distorted point coordinates ("normalized" means that the
+coordinates do not depend on the camera matrix).
+
+The function can be used for both a stereo camera head and a monocular camera (when R is empty).
+@param src Observed point coordinates, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel (CV_32FC2 or CV_64FC2) (or
+vector\<Point2f\> ).
+@param dst Output ideal point coordinates (1xN/Nx1 2-channel or vector\<Point2f\> ) after undistortion and reverse perspective
+transformation. If matrix P is identity or omitted, dst will contain normalized point coordinates.
+@param cameraMatrix Camera matrix \f$\vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\f$ .
+@param distCoeffs Input vector of distortion coefficients
+\f$(k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\f$
+of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, the zero distortion coefficients are assumed.
+@param R Rectification transformation in the object space (3x3 matrix). R1 or R2 computed by
+#stereoRectify can be passed here. If the matrix is empty, the identity transformation is used.
+@param P New camera matrix (3x3) or new projection matrix (3x4) \f$\begin{bmatrix} {f'}_x & 0 & {c'}_x & t_x \\ 0 & {f'}_y & {c'}_y & t_y \\ 0 & 0 & 1 & t_z \end{bmatrix}\f$. P1 or P2 computed by
+#stereoRectify can be passed here. If the matrix is empty, the identity new camera matrix is used.
+ */
+CV_EXPORTS_W
+void undistortPoints(InputArray src, OutputArray dst,
+ InputArray cameraMatrix, InputArray distCoeffs,
+ InputArray R = noArray(), InputArray P = noArray());
+/** @overload
+ @note Default version of #undistortPoints does 5 iterations to compute undistorted points.
+ */
+CV_EXPORTS_AS(undistortPointsIter)
+void undistortPoints(InputArray src, OutputArray dst,
+ InputArray cameraMatrix, InputArray distCoeffs,
+ InputArray R, InputArray P, TermCriteria criteria);
+
+/**
+ * @brief Compute undistorted image point positions
+ *
+ * @param src Observed point positions, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel (CV_32FC2 or
+CV_64FC2) (or vector\<Point2f\> ).
+ * @param dst Output undistorted point positions (1xN/Nx1 2-channel or vector\<Point2f\> ).
+ * @param cameraMatrix Camera matrix \f$\vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\f$ .
+ * @param distCoeffs Distortion coefficients
+ */
+CV_EXPORTS_W
+void undistortImagePoints(InputArray src, OutputArray dst, InputArray cameraMatrix,
+ InputArray distCoeffs,
+ TermCriteria = TermCriteria(TermCriteria::MAX_ITER + TermCriteria::EPS, 5,
+ 0.01));
+
//! @} calib3d
/** @brief The methods in this namespace use a so-called fisheye camera model.
# define CV_NOEXCEPT
#endif
-
+#ifndef CV_CONSTEXPR
+# if __cplusplus >= 201103L || (defined(_MSC_VER) && _MSC_VER >= 1900/*MSVS 2015*/)
+# define CV_CONSTEXPR constexpr
+# endif
+#endif
+#ifndef CV_CONSTEXPR
+# define CV_CONSTEXPR
+#endif
- // Integer types portatibility
+ // Integer types portability
#ifdef OPENCV_STDINT_HEADER
#include OPENCV_STDINT_HEADER
#elif defined(__cplusplus)
CV_Assert(ncn == img.channels());
CV_TIFF_CHECK_CALL(TIFFSetField(tif, TIFFTAG_SAMPLEFORMAT, SAMPLEFORMAT_IEEEFP));
}
- const size_t buffer_size = (bpp / bitsPerByte) * ncn * tile_height0 * tile_width0;
- CV_CheckLT( buffer_size, MAX_TILE_SIZE, "buffer_size is too large: >= 1Gb");
+
+ if ( doReadScanline )
+ {
+ // Read each scanline.
+ tile_height0 = 1;
+ }
+
- CV_CheckGE( static_cast<int>(buffer_size),
- static_cast<int>(TIFFScanlineSize(tif)),
- "buffer_size is smaller than TIFFScanlineSize(). ");
+ const size_t src_buffer_bytes_per_row = divUp(static_cast<size_t>(ncn * tile_width0 * bpp), static_cast<size_t>(bitsPerByte));
+ const size_t src_buffer_size = tile_height0 * src_buffer_bytes_per_row;
+ CV_CheckLT(src_buffer_size, MAX_TILE_SIZE, "buffer_size is too large: >= 1Gb");
+ const size_t src_buffer_unpacked_bytes_per_row = divUp(static_cast<size_t>(ncn * tile_width0 * dst_bpp), static_cast<size_t>(bitsPerByte));
+ const size_t src_buffer_unpacked_size = tile_height0 * src_buffer_unpacked_bytes_per_row;
+ const bool needsUnpacking = (bpp < dst_bpp);
+ AutoBuffer<uchar> _src_buffer(src_buffer_size);
+ uchar* src_buffer = _src_buffer.data();
+ AutoBuffer<uchar> _src_buffer_unpacked(needsUnpacking ? src_buffer_unpacked_size : 0);
+ uchar* src_buffer_unpacked = needsUnpacking ? _src_buffer_unpacked.data() : nullptr;
+
+ if ( doReadScanline )
+ {
- AutoBuffer<uchar> _buffer(buffer_size);
- uchar* buffer = _buffer.data();
- ushort* buffer16 = (ushort*)buffer;
+ CV_CheckGE(src_buffer_size,
+ static_cast<size_t>(TIFFScanlineSize(tif)),
+ "src_buffer_size is smaller than TIFFScanlineSize().");
+ }
+
int tileidx = 0;
+ #define MAKE_FLAG(a,b) ( (a << 8) | b )
+ const int convert_flag = MAKE_FLAG( ncn, wanted_channels );
+ const bool isNeedConvert16to8 = ( doReadScanline ) && ( bpp == 16 ) && ( dst_bpp == 8);
+
for (int y = 0; y < m_height; y += (int)tile_height0)
{
int tile_height = std::min((int)tile_height0, m_height - y);
{
case 8:
{
- uchar* bstart = buffer;
+ uchar* bstart = src_buffer;
- if (!is_tiled)
+ if (doReadScanline)
+ {
- CV_TIFF_CHECK_CALL((int)TIFFReadScanline(tif, (uint32*)buffer, y) >= 0);
+ CV_TIFF_CHECK_CALL((int)TIFFReadScanline(tif, (uint32*)src_buffer, y) >= 0);
+
+ if ( isNeedConvert16to8 )
+ {
+ // Convert buffer image from 16bit to 8bit.
+ int ix;
+ for ( ix = 0 ; ix < tile_width * ncn - 4; ix += 4 )
+ {
- buffer[ ix ] = buffer[ ix * 2 + 1 ];
- buffer[ ix + 1 ] = buffer[ ix * 2 + 3 ];
- buffer[ ix + 2 ] = buffer[ ix * 2 + 5 ];
- buffer[ ix + 3 ] = buffer[ ix * 2 + 7 ];
+ src_buffer[ ix ] = src_buffer[ ix * 2 + 1 ];
+ src_buffer[ ix + 1 ] = src_buffer[ ix * 2 + 3 ];
+ src_buffer[ ix + 2 ] = src_buffer[ ix * 2 + 5 ];
+ src_buffer[ ix + 3 ] = src_buffer[ ix * 2 + 7 ];
+ }
+
+ for ( ; ix < tile_width * ncn ; ix ++ )
+ {
- buffer[ ix ] = buffer[ ix * 2 + 1];
+ src_buffer[ ix ] = src_buffer[ ix * 2 + 1];
+ }
+ }
+ }
+ else if (!is_tiled)
{
- CV_TIFF_CHECK_CALL(TIFFReadRGBAStrip(tif, y, (uint32*)buffer));
+ CV_TIFF_CHECK_CALL(TIFFReadRGBAStrip(tif, y, (uint32*)src_buffer));
}
else
{
case 16:
{
- if (!is_tiled)
+ if (doReadScanline)
+ {
- CV_TIFF_CHECK_CALL((int)TIFFReadScanline(tif, (uint32*)buffer, y) >= 0);
++ CV_TIFF_CHECK_CALL((int)TIFFReadScanline(tif, (uint32*)src_buffer, y) >= 0);
+ }
+ else if (!is_tiled)
{
- CV_TIFF_CHECK_CALL((int)TIFFReadEncodedStrip(tif, tileidx, (uint32*)buffer, buffer_size) >= 0);
+ CV_TIFF_CHECK_CALL((int)TIFFReadEncodedStrip(tif, tileidx, (uint32*)src_buffer, src_buffer_size) >= 0);
}
else
{
} // for x
} // for y
}
- fixOrientation(img, img_orientation, dst_bpp);
+ if (bpp < dst_bpp)
+ img *= (1<<(dst_bpp-bpp));
+
+ // If a TIFFReadRGBA* function is used -> fixOrientationPartial();
+ // otherwise -> fixOrientationFull().
+ fixOrientation(img, img_orientation,
+ ( ( dst_bpp != 8 ) && ( !doReadScanline ) ) );
}
if (m_hdr && depth >= CV_32F)
EXPECT_EQ(0, remove(file4.c_str()));
}
- const string req_filename = cv::format("readwrite/huge-tiff/%s_%llu.tif", mat_type_string.c_str(), buffer_size);
+ //==================================================================================================
+ // See https://github.com/opencv/opencv/issues/22388
+
+ /**
+ * Dummy enum to show combinations of IMREAD_* flags in readable test parameter names.
+ */
+ enum ImreadMixModes
+ {
+ IMREAD_MIX_UNCHANGED = IMREAD_UNCHANGED ,
+ IMREAD_MIX_GRAYSCALE = IMREAD_GRAYSCALE ,
+ IMREAD_MIX_COLOR = IMREAD_COLOR ,
+ IMREAD_MIX_GRAYSCALE_ANYDEPTH = IMREAD_GRAYSCALE | IMREAD_ANYDEPTH ,
+ IMREAD_MIX_GRAYSCALE_ANYCOLOR = IMREAD_GRAYSCALE | IMREAD_ANYCOLOR,
+ IMREAD_MIX_GRAYSCALE_ANYDEPTH_ANYCOLOR = IMREAD_GRAYSCALE | IMREAD_ANYDEPTH | IMREAD_ANYCOLOR,
+ IMREAD_MIX_COLOR_ANYDEPTH = IMREAD_COLOR | IMREAD_ANYDEPTH ,
+ IMREAD_MIX_COLOR_ANYCOLOR = IMREAD_COLOR | IMREAD_ANYCOLOR,
+ IMREAD_MIX_COLOR_ANYDEPTH_ANYCOLOR = IMREAD_COLOR | IMREAD_ANYDEPTH | IMREAD_ANYCOLOR
+ };
+
+ typedef tuple< uint64_t, tuple<string, int>, ImreadMixModes > Bufsize_and_Type;
+ typedef testing::TestWithParam<Bufsize_and_Type> Imgcodecs_Tiff_decode_Huge;
+
+ static inline
+ void PrintTo(const ImreadMixModes& val, std::ostream* os)
+ {
+ PrintTo( static_cast<ImreadModes>(val), os );
+ }
+
+ TEST_P(Imgcodecs_Tiff_decode_Huge, regression)
+ {
+ // Get test parameters
+ const uint64_t buffer_size = get<0>(GetParam());
+ const string mat_type_string = get<0>(get<1>(GetParam()));
+ const int mat_type = get<1>(get<1>(GetParam()));
+ const int imread_mode = get<2>(GetParam());
+
+ // Locate the data file
- throw SkipTestException( cv::format("Test is skipped( pixels(%lu) > CV_IO_MAX_IMAGE_PIXELS(%lu) )",
- pixels, CV_IO_MAX_IMAGE_PIXELS ) );
++ const string req_filename = cv::format("readwrite/huge-tiff/%s_%zu.tif", mat_type_string.c_str(), (size_t)buffer_size);
+ const string filename = findDataFile( req_filename );
+
+ // Preparation process for test
+ {
+ // Convert from mat_type and buffer_size to tiff file information.
+ const uint64_t width = 32768;
+ int ncn = CV_MAT_CN(mat_type);
+ int depth = ( CV_MAT_DEPTH(mat_type) == CV_16U) ? 2 : 1; // bytes per sample: 2 for 16bit, 1 for 8bit
+ const uint64_t height = (uint64_t) buffer_size / width / ncn / depth;
+ const uint64_t base_scanline_size = (uint64_t) width * ncn * depth;
+ const uint64_t base_strip_size = (uint64_t) base_scanline_size * height;
+
+ // Check the pixel count up front to avoid the CV_IO_MAX_IMAGE_PIXELS exception.
+ static const size_t CV_IO_MAX_IMAGE_PIXELS = utils::getConfigurationParameterSizeT("OPENCV_IO_MAX_IMAGE_PIXELS", 1 << 30);
+ uint64_t pixels = (uint64_t) width * height;
+ if ( pixels > CV_IO_MAX_IMAGE_PIXELS )
+ {
- CV_LOG_DEBUG(NULL, cv::format("OpenCV TIFF-test(line %d):memory usage info : mat(%llu), libtiff(%llu), work(%llu) -> total(%llu)",
- __LINE__, memory_usage_cvmat, memory_usage_tiff, memory_usage_work, memory_usage_total) );
++ throw SkipTestException( cv::format("Test is skipped( pixels(%zu) > CV_IO_MAX_IMAGE_PIXELS(%zu) )",
++ (size_t)pixels, CV_IO_MAX_IMAGE_PIXELS) );
+ }
+
+ // If base_strip_size >= 1GB * 95%, TIFFReadScanline() is used.
+ const uint64_t BUFFER_SIZE_LIMIT_FOR_READ_SCANLINE = (uint64_t) 1024*1024*1024*95/100;
+ const bool doReadScanline = ( base_strip_size >= BUFFER_SIZE_LIMIT_FOR_READ_SCANLINE );
+
+ // Update ncn and depth for destination Mat.
+ switch ( imread_mode )
+ {
+ case IMREAD_UNCHANGED:
+ break;
+ case IMREAD_GRAYSCALE:
+ ncn = 1;
+ depth = 1;
+ break;
+ case IMREAD_GRAYSCALE | IMREAD_ANYDEPTH:
+ ncn = 1;
+ break;
+ case IMREAD_GRAYSCALE | IMREAD_ANYCOLOR:
+ ncn = (ncn == 1)?1:3;
+ depth = 1;
+ break;
+ case IMREAD_GRAYSCALE | IMREAD_ANYCOLOR | IMREAD_ANYDEPTH:
+ ncn = (ncn == 1)?1:3;
+ break;
+ case IMREAD_COLOR:
+ ncn = 3;
+ depth = 1;
+ break;
+ case IMREAD_COLOR | IMREAD_ANYDEPTH:
+ ncn = 3;
+ break;
+ case IMREAD_COLOR | IMREAD_ANYCOLOR:
+ ncn = 3;
+ depth = 1;
+ break;
+ case IMREAD_COLOR | IMREAD_ANYDEPTH | IMREAD_ANYCOLOR:
+ ncn = 3;
+ break;
+ default:
+ break;
+ }
+
+ // Memory usage for Destination Mat
+ const uint64_t memory_usage_cvmat = (uint64_t) width * ncn * depth * height;
+
+ // Memory usage for Work memory in libtiff.
+ uint64_t memory_usage_tiff = 0;
+ if ( ( depth == 1 ) && ( !doReadScanline ) )
+ {
+ // TIFFReadRGBA*() requires an RGBA (32bit per pixel) buffer.
+ memory_usage_tiff = (uint64_t)
+ width *
+ 4 * // ncn = RGBA
+ 1 * // dst_bpp = 8 bpp
+ height;
+ }
+ else
+ {
+ // TIFFReadEncodedStrip() or TIFFReadScanline() requires a strip-sized buffer.
+ memory_usage_tiff = base_strip_size;
+ }
+
+ // Memory usage for Work memory in imgcodec/grfmt_tiff.cpp
+ const uint64_t memory_usage_work =
+ ( doReadScanline ) ? base_scanline_size // for TIFFReadScanline()
+ : base_strip_size; // for TIFFReadRGBA*() or TIFFReadEncodedStrip()
+
+ // Total memory usage.
+ const uint64_t memory_usage_total =
+ memory_usage_cvmat + // Destination Mat
+ memory_usage_tiff + // Work memory in libtiff
+ memory_usage_work; // Work memory in imgcodecs
+
+ // Output memory usage log.
++ CV_LOG_DEBUG(NULL, cv::format("OpenCV TIFF-test: memory usage info : mat(%zu), libtiff(%zu), work(%zu) -> total(%zu)",
++ (size_t)memory_usage_cvmat, (size_t)memory_usage_tiff, (size_t)memory_usage_work, (size_t)memory_usage_total) );
+
+ // Add test tags.
+ if ( memory_usage_total >= (uint64_t) 6144 * 1024 * 1024 )
+ {
+ applyTestTag( CV_TEST_TAG_MEMORY_14GB, CV_TEST_TAG_VERYLONG );
+ }
+ else if ( memory_usage_total >= (uint64_t) 2048 * 1024 * 1024 )
+ {
+ applyTestTag( CV_TEST_TAG_MEMORY_6GB, CV_TEST_TAG_VERYLONG );
+ }
+ else if ( memory_usage_total >= (uint64_t) 1024 * 1024 * 1024 )
+ {
+ applyTestTag( CV_TEST_TAG_MEMORY_2GB, CV_TEST_TAG_LONG );
+ }
+ else if ( memory_usage_total >= (uint64_t) 512 * 1024 * 1024 )
+ {
+ applyTestTag( CV_TEST_TAG_MEMORY_1GB );
+ }
+ else if ( memory_usage_total >= (uint64_t) 200 * 1024 * 1024 )
+ {
+ applyTestTag( CV_TEST_TAG_MEMORY_512MB );
+ }
+ else
+ {
+ // do nothing.
+ }
+ }
+
+ // TEST Main
+
+ cv::Mat img;
+ ASSERT_NO_THROW( img = cv::imread(filename, imread_mode) );
+ ASSERT_FALSE(img.empty());
+
+ /**
+ * Test marker pixels at each corner.
+ *
+ * 0xAn,0x00 ... 0x00, 0xBn
+ * 0x00,0x00 ... 0x00, 0x00
+ * : : : :
+ * 0x00,0x00 ... 0x00, 0x00
+ * 0xCn,0x00 .., 0x00, 0xDn
+ *
+ */
+
+ #define MAKE_FLAG(from_type, to_type) ( ((uint64_t)(from_type) << 32) | (uint64_t)(to_type) )
+
+ switch ( MAKE_FLAG(mat_type, img.type() ) )
+ {
+ // GRAY TO GRAY
+ case MAKE_FLAG(CV_8UC1, CV_8UC1):
+ case MAKE_FLAG(CV_16UC1, CV_8UC1):
+ EXPECT_EQ( 0xA0, img.at<uchar>(0, 0) );
+ EXPECT_EQ( 0xB0, img.at<uchar>(0, img.cols-1) );
+ EXPECT_EQ( 0xC0, img.at<uchar>(img.rows-1, 0) );
+ EXPECT_EQ( 0xD0, img.at<uchar>(img.rows-1, img.cols-1) );
+ break;
+
+ // RGB/RGBA TO BGR
+ case MAKE_FLAG(CV_8UC3, CV_8UC3):
+ case MAKE_FLAG(CV_8UC4, CV_8UC3):
+ case MAKE_FLAG(CV_16UC3, CV_8UC3):
+ case MAKE_FLAG(CV_16UC4, CV_8UC3):
+ EXPECT_EQ( 0xA2, img.at<Vec3b>(0, 0) [0] );
+ EXPECT_EQ( 0xA1, img.at<Vec3b>(0, 0) [1] );
+ EXPECT_EQ( 0xA0, img.at<Vec3b>(0, 0) [2] );
+ EXPECT_EQ( 0xB2, img.at<Vec3b>(0, img.cols-1)[0] );
+ EXPECT_EQ( 0xB1, img.at<Vec3b>(0, img.cols-1)[1] );
+ EXPECT_EQ( 0xB0, img.at<Vec3b>(0, img.cols-1)[2] );
+ EXPECT_EQ( 0xC2, img.at<Vec3b>(img.rows-1, 0) [0] );
+ EXPECT_EQ( 0xC1, img.at<Vec3b>(img.rows-1, 0) [1] );
+ EXPECT_EQ( 0xC0, img.at<Vec3b>(img.rows-1, 0) [2] );
+ EXPECT_EQ( 0xD2, img.at<Vec3b>(img.rows-1, img.cols-1)[0] );
+ EXPECT_EQ( 0xD1, img.at<Vec3b>(img.rows-1, img.cols-1)[1] );
+ EXPECT_EQ( 0xD0, img.at<Vec3b>(img.rows-1, img.cols-1)[2] );
+ break;
+
+ // RGBA TO BGRA
+ case MAKE_FLAG(CV_8UC4, CV_8UC4):
+ case MAKE_FLAG(CV_16UC4, CV_8UC4):
+ EXPECT_EQ( 0xA2, img.at<Vec4b>(0, 0) [0] );
+ EXPECT_EQ( 0xA1, img.at<Vec4b>(0, 0) [1] );
+ EXPECT_EQ( 0xA0, img.at<Vec4b>(0, 0) [2] );
+ EXPECT_EQ( 0xA3, img.at<Vec4b>(0, 0) [3] );
+ EXPECT_EQ( 0xB2, img.at<Vec4b>(0, img.cols-1)[0] );
+ EXPECT_EQ( 0xB1, img.at<Vec4b>(0, img.cols-1)[1] );
+ EXPECT_EQ( 0xB0, img.at<Vec4b>(0, img.cols-1)[2] );
+ EXPECT_EQ( 0xB3, img.at<Vec4b>(0, img.cols-1)[3] );
+ EXPECT_EQ( 0xC2, img.at<Vec4b>(img.rows-1, 0) [0] );
+ EXPECT_EQ( 0xC1, img.at<Vec4b>(img.rows-1, 0) [1] );
+ EXPECT_EQ( 0xC0, img.at<Vec4b>(img.rows-1, 0) [2] );
+ EXPECT_EQ( 0xC3, img.at<Vec4b>(img.rows-1, 0) [3] );
+ EXPECT_EQ( 0xD2, img.at<Vec4b>(img.rows-1, img.cols-1)[0] );
+ EXPECT_EQ( 0xD1, img.at<Vec4b>(img.rows-1, img.cols-1)[1] );
+ EXPECT_EQ( 0xD0, img.at<Vec4b>(img.rows-1, img.cols-1)[2] );
+ EXPECT_EQ( 0xD3, img.at<Vec4b>(img.rows-1, img.cols-1)[3] );
+ break;
+
+ // RGB/RGBA to GRAY
+ case MAKE_FLAG(CV_8UC3, CV_8UC1):
+ case MAKE_FLAG(CV_8UC4, CV_8UC1):
+ case MAKE_FLAG(CV_16UC3, CV_8UC1):
+ case MAKE_FLAG(CV_16UC4, CV_8UC1):
+ EXPECT_LE( 0xA0, img.at<uchar>(0, 0) );
+ EXPECT_GE( 0xA2, img.at<uchar>(0, 0) );
+ EXPECT_LE( 0xB0, img.at<uchar>(0, img.cols-1) );
+ EXPECT_GE( 0xB2, img.at<uchar>(0, img.cols-1) );
+ EXPECT_LE( 0xC0, img.at<uchar>(img.rows-1, 0) );
+ EXPECT_GE( 0xC2, img.at<uchar>(img.rows-1, 0) );
+ EXPECT_LE( 0xD0, img.at<uchar>(img.rows-1, img.cols-1) );
+ EXPECT_GE( 0xD2, img.at<uchar>(img.rows-1, img.cols-1) );
+ break;
+
+ // GRAY to BGR
+ case MAKE_FLAG(CV_8UC1, CV_8UC3):
+ case MAKE_FLAG(CV_16UC1, CV_8UC3):
+ EXPECT_EQ( 0xA0, img.at<Vec3b>(0, 0) [0] );
+ EXPECT_EQ( 0xB0, img.at<Vec3b>(0, img.cols-1)[0] );
+ EXPECT_EQ( 0xC0, img.at<Vec3b>(img.rows-1, 0) [0] );
+ EXPECT_EQ( 0xD0, img.at<Vec3b>(img.rows-1, img.cols-1)[0] );
+ // R==G==B
+ EXPECT_EQ( img.at<Vec3b>(0, 0) [0], img.at<Vec3b>(0, 0) [1] );
+ EXPECT_EQ( img.at<Vec3b>(0, 0) [0], img.at<Vec3b>(0, 0) [2] );
+ EXPECT_EQ( img.at<Vec3b>(0, img.cols-1) [0], img.at<Vec3b>(0, img.cols-1)[1] );
+ EXPECT_EQ( img.at<Vec3b>(0, img.cols-1) [0], img.at<Vec3b>(0, img.cols-1)[2] );
+ EXPECT_EQ( img.at<Vec3b>(img.rows-1, 0) [0], img.at<Vec3b>(img.rows-1, 0) [1] );
+ EXPECT_EQ( img.at<Vec3b>(img.rows-1, 0) [0], img.at<Vec3b>(img.rows-1, 0) [2] );
+ EXPECT_EQ( img.at<Vec3b>(img.rows-1, img.cols-1) [0], img.at<Vec3b>(img.rows-1, img.cols-1)[1] );
+ EXPECT_EQ( img.at<Vec3b>(img.rows-1, img.cols-1) [0], img.at<Vec3b>(img.rows-1, img.cols-1)[2] );
+ break;
+
+ // GRAY TO GRAY
+ case MAKE_FLAG(CV_16UC1, CV_16UC1):
+ EXPECT_EQ( 0xA090, img.at<ushort>(0, 0) );
+ EXPECT_EQ( 0xB080, img.at<ushort>(0, img.cols-1) );
+ EXPECT_EQ( 0xC070, img.at<ushort>(img.rows-1, 0) );
+ EXPECT_EQ( 0xD060, img.at<ushort>(img.rows-1, img.cols-1) );
+ break;
+
+ // RGB/RGBA TO BGR
+ case MAKE_FLAG(CV_16UC3, CV_16UC3):
+ case MAKE_FLAG(CV_16UC4, CV_16UC3):
+ EXPECT_EQ( 0xA292, img.at<Vec3w>(0, 0) [0] );
+ EXPECT_EQ( 0xA191, img.at<Vec3w>(0, 0) [1] );
+ EXPECT_EQ( 0xA090, img.at<Vec3w>(0, 0) [2] );
+ EXPECT_EQ( 0xB282, img.at<Vec3w>(0, img.cols-1)[0] );
+ EXPECT_EQ( 0xB181, img.at<Vec3w>(0, img.cols-1)[1] );
+ EXPECT_EQ( 0xB080, img.at<Vec3w>(0, img.cols-1)[2] );
+ EXPECT_EQ( 0xC272, img.at<Vec3w>(img.rows-1, 0) [0] );
+ EXPECT_EQ( 0xC171, img.at<Vec3w>(img.rows-1, 0) [1] );
+ EXPECT_EQ( 0xC070, img.at<Vec3w>(img.rows-1, 0) [2] );
+ EXPECT_EQ( 0xD262, img.at<Vec3w>(img.rows-1, img.cols-1)[0] );
+ EXPECT_EQ( 0xD161, img.at<Vec3w>(img.rows-1, img.cols-1)[1] );
+ EXPECT_EQ( 0xD060, img.at<Vec3w>(img.rows-1, img.cols-1)[2] );
+ break;
+
+ // RGBA TO BGRA
+ case MAKE_FLAG(CV_16UC4, CV_16UC4):
+ EXPECT_EQ( 0xA292, img.at<Vec4w>(0, 0) [0] );
+ EXPECT_EQ( 0xA191, img.at<Vec4w>(0, 0) [1] );
+ EXPECT_EQ( 0xA090, img.at<Vec4w>(0, 0) [2] );
+ EXPECT_EQ( 0xA393, img.at<Vec4w>(0, 0) [3] );
+ EXPECT_EQ( 0xB282, img.at<Vec4w>(0, img.cols-1)[0] );
+ EXPECT_EQ( 0xB181, img.at<Vec4w>(0, img.cols-1)[1] );
+ EXPECT_EQ( 0xB080, img.at<Vec4w>(0, img.cols-1)[2] );
+ EXPECT_EQ( 0xB383, img.at<Vec4w>(0, img.cols-1)[3] );
+ EXPECT_EQ( 0xC272, img.at<Vec4w>(img.rows-1, 0) [0] );
+ EXPECT_EQ( 0xC171, img.at<Vec4w>(img.rows-1, 0) [1] );
+ EXPECT_EQ( 0xC070, img.at<Vec4w>(img.rows-1, 0) [2] );
+ EXPECT_EQ( 0xC373, img.at<Vec4w>(img.rows-1, 0) [3] );
+ EXPECT_EQ( 0xD262, img.at<Vec4w>(img.rows-1,img.cols-1) [0] );
+ EXPECT_EQ( 0xD161, img.at<Vec4w>(img.rows-1,img.cols-1) [1] );
+ EXPECT_EQ( 0xD060, img.at<Vec4w>(img.rows-1,img.cols-1) [2] );
+ EXPECT_EQ( 0xD363, img.at<Vec4w>(img.rows-1,img.cols-1) [3] );
+ break;
+
+ // RGB/RGBA to GRAY
+ case MAKE_FLAG(CV_16UC3, CV_16UC1):
+ case MAKE_FLAG(CV_16UC4, CV_16UC1):
+ EXPECT_LE( 0xA090, img.at<ushort>(0, 0) );
+ EXPECT_GE( 0xA292, img.at<ushort>(0, 0) );
+ EXPECT_LE( 0xB080, img.at<ushort>(0, img.cols-1) );
+ EXPECT_GE( 0xB282, img.at<ushort>(0, img.cols-1) );
+ EXPECT_LE( 0xC070, img.at<ushort>(img.rows-1, 0) );
+ EXPECT_GE( 0xC272, img.at<ushort>(img.rows-1, 0) );
+ EXPECT_LE( 0xD060, img.at<ushort>(img.rows-1, img.cols-1) );
+ EXPECT_GE( 0xD262, img.at<ushort>(img.rows-1, img.cols-1) );
+ break;
+
+ // GRAY to BGR
+ case MAKE_FLAG(CV_16UC1, CV_16UC3):
+ EXPECT_EQ( 0xA090, img.at<Vec3w>(0, 0) [0] );
+ EXPECT_EQ( 0xB080, img.at<Vec3w>(0, img.cols-1)[0] );
+ EXPECT_EQ( 0xC070, img.at<Vec3w>(img.rows-1, 0) [0] );
+ EXPECT_EQ( 0xD060, img.at<Vec3w>(img.rows-1, img.cols-1)[0] );
+ // R==G==B
+ EXPECT_EQ( img.at<Vec3w>(0, 0) [0], img.at<Vec3w>(0, 0) [1] );
+ EXPECT_EQ( img.at<Vec3w>(0, 0) [0], img.at<Vec3w>(0, 0) [2] );
+ EXPECT_EQ( img.at<Vec3w>(0, img.cols-1) [0], img.at<Vec3w>(0, img.cols-1)[1] );
+ EXPECT_EQ( img.at<Vec3w>(0, img.cols-1) [0], img.at<Vec3w>(0, img.cols-1)[2] );
+ EXPECT_EQ( img.at<Vec3w>(img.rows-1, 0) [0], img.at<Vec3w>(img.rows-1, 0) [1] );
+ EXPECT_EQ( img.at<Vec3w>(img.rows-1, 0) [0], img.at<Vec3w>(img.rows-1, 0) [2] );
+ EXPECT_EQ( img.at<Vec3w>(img.rows-1, img.cols-1) [0], img.at<Vec3w>(img.rows-1, img.cols-1)[1] );
+ EXPECT_EQ( img.at<Vec3w>(img.rows-1, img.cols-1) [0], img.at<Vec3w>(img.rows-1, img.cols-1)[2] );
+ break;
+
+ // Not supported.
+ // (1) 8bit to 16bit
+ case MAKE_FLAG(CV_8UC1, CV_16UC1):
+ case MAKE_FLAG(CV_8UC1, CV_16UC3):
+ case MAKE_FLAG(CV_8UC1, CV_16UC4):
+ case MAKE_FLAG(CV_8UC3, CV_16UC1):
+ case MAKE_FLAG(CV_8UC3, CV_16UC3):
+ case MAKE_FLAG(CV_8UC3, CV_16UC4):
+ case MAKE_FLAG(CV_8UC4, CV_16UC1):
+ case MAKE_FLAG(CV_8UC4, CV_16UC3):
+ case MAKE_FLAG(CV_8UC4, CV_16UC4):
+ // (2) GRAY/RGB TO RGBA
+ case MAKE_FLAG(CV_8UC1, CV_8UC4):
+ case MAKE_FLAG(CV_8UC3, CV_8UC4):
+ case MAKE_FLAG(CV_16UC1, CV_8UC4):
+ case MAKE_FLAG(CV_16UC3, CV_8UC4):
+ case MAKE_FLAG(CV_16UC1, CV_16UC4):
+ case MAKE_FLAG(CV_16UC3, CV_16UC4):
+ default:
+ FAIL() << cv::format("Unknown test pattern: from = %d ( %d, %d) to = %d ( %d, %d )",
+ mat_type, (int)CV_MAT_CN(mat_type ), ( CV_MAT_DEPTH(mat_type )==CV_16U)?16:8,
+ img.type(), (int)CV_MAT_CN(img.type() ), ( CV_MAT_DEPTH(img.type() )==CV_16U)?16:8);
+ break;
+ }
+
+ #undef MAKE_FLAG
+ }
+
+ // Basic Test
+ const Bufsize_and_Type Imgcodecs_Tiff_decode_Huge_list_basic[] =
+ {
+ make_tuple<uint64_t, tuple<string,int>,ImreadMixModes>( 1073479680ull, make_tuple<string,int>("CV_8UC1", CV_8UC1), IMREAD_MIX_COLOR ),
+ make_tuple<uint64_t, tuple<string,int>,ImreadMixModes>( 2147483648ull, make_tuple<string,int>("CV_16UC4", CV_16UC4), IMREAD_MIX_COLOR ),
+ };
+
+ INSTANTIATE_TEST_CASE_P(Imgcodecs_Tiff, Imgcodecs_Tiff_decode_Huge,
+ testing::ValuesIn( Imgcodecs_Tiff_decode_Huge_list_basic )
+ );
+
+ // Full Test
+
+ /**
+ * Test lists for combination of IMREAD_*.
+ */
+ const ImreadMixModes all_modes_Huge_Full[] =
+ {
+ IMREAD_MIX_UNCHANGED,
+ IMREAD_MIX_GRAYSCALE,
+ IMREAD_MIX_GRAYSCALE_ANYDEPTH,
+ IMREAD_MIX_GRAYSCALE_ANYCOLOR,
+ IMREAD_MIX_GRAYSCALE_ANYDEPTH_ANYCOLOR,
+ IMREAD_MIX_COLOR,
+ IMREAD_MIX_COLOR_ANYDEPTH,
+ IMREAD_MIX_COLOR_ANYCOLOR,
+ IMREAD_MIX_COLOR_ANYDEPTH_ANYCOLOR,
+ };
+
+ const uint64_t huge_buffer_sizes_decode_Full[] =
+ {
+ 1048576ull, // 1 * 1024 * 1024
+ 1073479680ull, // 1024 * 1024 * 1024 - 32768 * 4 * 2
+ 1073741824ull, // 1024 * 1024 * 1024
+ 2147483648ull, // 2048 * 1024 * 1024
+ };
+
+ const tuple<string, int> mat_types_Full[] =
+ {
+ make_tuple<string, int>("CV_8UC1", CV_8UC1), // 8bit GRAY
+ make_tuple<string, int>("CV_8UC3", CV_8UC3), // 24bit RGB
+ make_tuple<string, int>("CV_8UC4", CV_8UC4), // 32bit RGBA
+ make_tuple<string, int>("CV_16UC1", CV_16UC1), // 16bit GRAY
+ make_tuple<string, int>("CV_16UC3", CV_16UC3), // 48bit RGB
+ make_tuple<string, int>("CV_16UC4", CV_16UC4), // 64bit RGBA
+ };
+
+ INSTANTIATE_TEST_CASE_P(DISABLED_Imgcodecs_Tiff_Full, Imgcodecs_Tiff_decode_Huge,
+ testing::Combine(
+ testing::ValuesIn(huge_buffer_sizes_decode_Full),
+ testing::ValuesIn(mat_types_Full),
+ testing::ValuesIn(all_modes_Huge_Full)
+ )
+ );
+
+
+ //==================================================================================================
+
TEST(Imgcodecs_Tiff, write_read_16bit_big_little_endian)
{
// see issue #2601 "16-bit Grayscale TIFF Load Failures Due to Buffer Underflow and Endianness"
for (size_t i = 0ull; i < decoded_info.size(); i++) {
result.push_back(make_pair(decoded_info[i], straight_barcode[i]));
}
- sort(result.begin(), result.end(), [](const pair<string, Mat>& v1, const pair<string, Mat>& v2)
- {return v1.first < v2.first; });
++
+ sort(result.begin(), result.end(), compareQR);
vector<vector<uint8_t> > decoded_info_sort;
vector<Mat> straight_barcode_sort;
for (size_t i = 0ull; i < result.size(); i++) {