From 6ed141de0e0bd68df305510fc6973a99b705ecd6 Mon Sep 17 00:00:00 2001 From: Ilya Lysenkov Date: Tue, 19 Oct 2010 11:51:56 +0000 Subject: [PATCH] Reorganized documentation to be consistent with modular structure --- doc/{cv_calibration_3d.tex => calib3d.tex} | 2 +- ...ay_operations.tex => core_array_operations.tex} | 4 +- ...ic_structures.tex => core_basic_structures.tex} | 0 ...ering_search.tex => core_clustering_search.tex} | 250 +------ ...ng_functions.tex => core_drawing_functions.tex} | 0 ..._structures.tex => core_dynamic_structures.tex} | 0 ...core_introduction.tex => core_introduction.tex} | 0 ...cxcore_persistence.tex => core_persistence.tex} | 0 ...ons.tex => core_utilities_system_functions.tex} | 0 doc/cvaux_3d.tex | 0 doc/cvaux_bgfg.tex | 0 ...ection.tex => features2d_feature_detection.tex} | 765 +------------------- ...tection.tex => features2d_object_detection.tex} | 2 +- ...ition.tex => features2d_object_recognition.tex} | 0 doc/flann.tex | 253 +++++++ doc/{HighGui.tex => highgui.tex} | 0 doc/{HighGui_Qt.tex => highgui_qt.tex} | 0 doc/imgproc_feature_detection.tex | 773 +++++++++++++++++++++ doc/{cv_histograms.tex => imgproc_histograms.tex} | 0 ...e_filtering.tex => imgproc_image_filtering.tex} | 0 ...e_transform.tex => imgproc_image_transform.tex} | 0 ...image_warping.tex => imgproc_image_warping.tex} | 0 doc/imgproc_motion_tracking.tex | 169 +++++ doc/imgproc_object_detection.tex | 138 ++++ ...visions.tex => imgproc_planar_subdivisions.tex} | 0 ...lysis.tex => imgproc_struct_shape_analysis.tex} | 0 doc/{MachineLearning.tex => ml.tex} | 0 doc/{cv_object_detection.tex => objdetect.tex} | 132 +--- doc/online-opencv.tex | 74 +- doc/opencvref_body.tex | 75 +- ...tion_tracking.tex => video_motion_tracking.tex} | 159 ----- 31 files changed, 1433 insertions(+), 1363 deletions(-) rename doc/{cv_calibration_3d.tex => calib3d.tex} (99%) rename doc/{cxcore_array_operations.tex => core_array_operations.tex} (99%) rename doc/{cxcore_basic_structures.tex => core_basic_structures.tex} (100%) rename doc/{cxcore_clustering_search.tex => core_clustering_search.tex} (50%) rename doc/{cxcore_drawing_functions.tex => core_drawing_functions.tex} (100%) rename doc/{cxcore_dynamic_structures.tex => core_dynamic_structures.tex} (100%) rename doc/{cxcore_introduction.tex => core_introduction.tex} (100%) rename doc/{cxcore_persistence.tex => core_persistence.tex} (100%) rename doc/{cxcore_utilities_system_functions.tex => core_utilities_system_functions.tex} (100%) delete mode 100644 doc/cvaux_3d.tex delete mode 100644 doc/cvaux_bgfg.tex rename doc/{cv_feature_detection.tex => features2d_feature_detection.tex} (53%) rename doc/{cvaux_object_detection.tex => features2d_object_detection.tex} (99%) rename doc/{cv_object_recognition.tex => features2d_object_recognition.tex} (100%) create mode 100644 doc/flann.tex rename doc/{HighGui.tex => highgui.tex} (100%) rename doc/{HighGui_Qt.tex => highgui_qt.tex} (100%) create mode 100644 doc/imgproc_feature_detection.tex rename doc/{cv_histograms.tex => imgproc_histograms.tex} (100%) rename doc/{cv_image_filtering.tex => imgproc_image_filtering.tex} (100%) rename doc/{cv_image_transform.tex => imgproc_image_transform.tex} (100%) rename doc/{cv_image_warping.tex => imgproc_image_warping.tex} (100%) create mode 100644 doc/imgproc_motion_tracking.tex create mode 100644 doc/imgproc_object_detection.tex rename doc/{cv_planar_subdivisions.tex => imgproc_planar_subdivisions.tex} (100%) rename doc/{cv_struct_shape_analysis.tex => imgproc_struct_shape_analysis.tex} 
(100%) rename doc/{MachineLearning.tex => ml.tex} (100%) rename doc/{cv_object_detection.tex => objdetect.tex} (83%) rename doc/{cv_motion_tracking.tex => video_motion_tracking.tex} (85%) diff --git a/doc/cv_calibration_3d.tex b/doc/calib3d.tex similarity index 99% rename from doc/cv_calibration_3d.tex rename to doc/calib3d.tex index e96a82b..16903d0 100644 --- a/doc/cv_calibration_3d.tex +++ b/doc/calib3d.tex @@ -1,4 +1,4 @@ -\section{Camera Calibration and 3D Reconstruction} +\section{Camera Calibration and 3D Reconstruction} The functions in this section use the so-called pinhole camera model. That is, a scene view is formed by projecting 3D points into the image plane diff --git a/doc/cxcore_array_operations.tex b/doc/core_array_operations.tex similarity index 99% rename from doc/cxcore_array_operations.tex rename to doc/core_array_operations.tex index 5622f41..a5be71e 100644 --- a/doc/cxcore_array_operations.tex +++ b/doc/core_array_operations.tex @@ -2032,8 +2032,8 @@ The function calculates the natural logarithm of the absolute value of every ele Where \texttt{C} is a large negative number (about -700 in the current implementation). -\cvCPyFunc{Mahalonobis} -Calculates the Mahalonobis distance between two vectors. +\cvCPyFunc{Mahalanobis} +Calculates the Mahalanobis distance between two vectors. \cvdefC{double cvMahalanobis(\par const CvArr* vec1,\par const CvArr* vec2,\par CvArr* mat);} \cvdefPy{Mahalonobis(vec1,vec2,mat)-> None} diff --git a/doc/cxcore_basic_structures.tex b/doc/core_basic_structures.tex similarity index 100% rename from doc/cxcore_basic_structures.tex rename to doc/core_basic_structures.tex diff --git a/doc/cxcore_clustering_search.tex b/doc/core_clustering_search.tex similarity index 50% rename from doc/cxcore_clustering_search.tex rename to doc/core_clustering_search.tex index b15d13d..74ab655 100644 --- a/doc/cxcore_clustering_search.tex +++ b/doc/core_clustering_search.tex @@ -1,4 +1,4 @@ -\section{Clustering and Search in Multi-Dimensional Spaces} +\section{Clustering} \ifCPy @@ -287,253 +287,5 @@ The generic function \texttt{partition} implements an $O(N^2)$ algorithm for splitting a set of $N$ elements into one or more equivalency classes, as described in \url{http://en.wikipedia.org/wiki/Disjoint-set_data_structure}. The function returns the number of equivalency classes. -\subsection{Fast Approximate Nearest Neighbor Search} - -This section documents OpenCV's interface to the FLANN\footnote{http://people.cs.ubc.ca/\~mariusm/flann} library. FLANN (Fast Library for Approximate Nearest Neighbors) is a library that -contains a collection of algorithms optimized for fast nearest neighbor search in large datasets and for high dimensional features. More -information about FLANN can be found in \cite{muja_flann_2009}. - -\ifplastex -\cvclass{cv::flann::Index_} -\else -\subsubsection{cv::flann::Index\_}\label{cvflann.Index} -\fi -The FLANN nearest neighbor index class. This class is templated with the type of elements for which the index is built.
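For orientation, here is a minimal usage sketch (a sketch only, not part of the original reference: the feature matrix, its size, and the parameter values below are illustrative assumptions; the full interface is listed next):

\begin{lstlisting}
// Sketch only: build a randomized kd-tree index over float features
// and run a 5-nearest-neighbor query. All names and sizes are
// illustrative; "features" stores one CV_32F point per row.
cv::Mat features(1000, 64, CV_32F);
cv::randu(features, 0.f, 1.f);          // fill with random data

cv::flann::Index_<float> index(features, cv::flann::KDTreeIndexParams(4));

std::vector<float> query(features.cols, 0.5f);  // a single query point
std::vector<int> indices(5);                    // room for 5 neighbors
std::vector<float> dists(5);
index.knnSearch(query, indices, dists, 5, cv::flann::SearchParams(32));
\end{lstlisting}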
- -\begin{lstlisting} -namespace cv -{ -namespace flann -{ - template <typename T> - class Index_ - { - public: - Index_(const Mat& features, const IndexParams& params); - - ~Index_(); - - void knnSearch(const vector<T>& query, - vector<int>& indices, - vector<float>& dists, - int knn, - const SearchParams& params); - void knnSearch(const Mat& queries, - Mat& indices, - Mat& dists, - int knn, - const SearchParams& params); - - int radiusSearch(const vector<T>& query, - vector<int>& indices, - vector<float>& dists, - float radius, - const SearchParams& params); - int radiusSearch(const Mat& query, - Mat& indices, - Mat& dists, - float radius, - const SearchParams& params); - - void save(std::string filename); - - int veclen() const; - - int size() const; - - const IndexParams* getIndexParameters(); - }; - - typedef Index_<float> Index; - -} } // namespace cv::flann -\end{lstlisting} - -\ifplastex -\cvCppFunc{cv::flann::Index_::Index_} -\else -\subsubsection{cvflann::Index\_$$::Index\_}\label{cvflann.Index.Index} -\fi -Constructs a nearest neighbor search index for a given dataset. - -\cvdefCpp{Index\_::Index\_(const Mat\& features, const IndexParams\& params);} -\begin{description} -\cvarg{features}{ Matrix containing the features (points) to index. The size of the matrix is num\_features x feature\_dimensionality and -the data type of the elements in the matrix must coincide with the type of the index.} -\cvarg{params}{Structure containing the index parameters. The type of index that will be constructed depends on the type of this parameter. -The possible parameter types are:} - -\begin{description} -\cvarg{LinearIndexParams}{When passing an object of this type, the index will perform a linear, brute-force search.} -\begin{lstlisting} -struct LinearIndexParams : public IndexParams -{ -}; -\end{lstlisting} - -\cvarg{KDTreeIndexParams}{When passing an object of this type the index constructed will consist of a set of randomized kd-trees which will be searched in parallel.} -\begin{lstlisting} -struct KDTreeIndexParams : public IndexParams -{ - KDTreeIndexParams( int trees = 4 ); -}; -\end{lstlisting} -\begin{description} -\cvarg{trees}{The number of parallel kd-trees to use. Good values are in the range [1..16]} -\end{description} - -\cvarg{KMeansIndexParams}{When passing an object of this type the index constructed will be a hierarchical k-means tree.} -\begin{lstlisting} -struct KMeansIndexParams : public IndexParams -{ - KMeansIndexParams( - int branching = 32, - int iterations = 11, - flann_centers_init_t centers_init = CENTERS_RANDOM, - float cb_index = 0.2 ); -}; -\end{lstlisting} -\begin{description} -\cvarg{branching}{ The branching factor to use for the hierarchical k-means tree } -\cvarg{iterations}{ The maximum number of iterations to use in the k-means clustering stage when building the k-means tree. A value of -1 used here means that the k-means clustering should be iterated until convergence} -\cvarg{centers\_init}{The algorithm to use for selecting the initial centers when performing a k-means clustering step. The possible values are \texttt{CENTERS\_RANDOM} (picks the initial cluster centers randomly), \texttt{CENTERS\_GONZALES} (picks the initial centers using Gonzales' algorithm) and \texttt{CENTERS\_KMEANSPP} (picks the initial centers using the algorithm suggested in \cite{arthur_kmeanspp_2007})} -\cvarg{cb\_index}{This parameter (cluster boundary index) influences the way exploration is performed in the hierarchical kmeans tree.
When \texttt{cb\_index} is zero the next kmeans domain to be explored is choosen to be the one with the closest center. A value greater then zero also takes into account the size of the domain.} -\end{description} - -\cvarg{CompositeIndexParams}{When using a parameters object of this type the index created combines the randomized kd-trees and the hierarchical k-means tree.} -\begin{lstlisting} -struct CompositeIndexParams : public IndexParams -{ - CompositeIndexParams( - int trees = 4, - int branching = 32, - int iterations = 11, - flann_centers_init_t centers_init = CENTERS_RANDOM, - float cb_index = 0.2 ); -}; -\end{lstlisting} - -\cvarg{AutotunedIndexParams}{When passing an object of this type the index created is automatically tuned to offer the best performance, by choosing the optimal index type (randomized kd-trees, hierarchical kmeans, linear) and parameters for the dataset provided.} -\begin{lstlisting} -struct AutotunedIndexParams : public IndexParams -{ - AutotunedIndexParams( - float target_precision = 0.9, - float build_weight = 0.01, - float memory_weight = 0, - float sample_fraction = 0.1 ); -}; -\end{lstlisting} -\begin{description} -\cvarg{target\_precision}{ Is a number between 0 and 1 specifying the percentage of the approximate nearest-neighbor searches that return the exact nearest-neighbor. Using a higher value for this parameter gives more accurate results, but the search takes longer. The optimum value usually depends on the application. } - -\cvarg{build\_weight}{ Specifies the importance of the index build time raported to the nearest-neighbor search time. In some applications it's acceptable for the index build step to take a long time if the subsequent searches in the index can be performed very fast. In other applications it's required that the index be build as fast as possible even if that leads to slightly longer search times.} - -\cvarg{memory\_weight}{Is used to specify the tradeoff between time (index build time and search time) and memory used by the index. A value less than 1 gives more importance to the time spent and a value greater than 1 gives more importance to the memory usage.} - -\cvarg{sample\_fraction}{Is a number between 0 and 1 indicating what fraction of the dataset to use in the automatic parameter configuration algorithm. Running the algorithm on the full dataset gives the most accurate results, but for very large datasets can take longer than desired. In such case using just a fraction of the data helps speeding up this algorithm while still giving good approximations of the optimum parameters.} -\end{description} - -\cvarg{SavedIndexParams}{This object type is used for loading a previously saved index from the disk.} -\begin{lstlisting} -struct SavedIndexParams : public IndexParams -{ - SavedIndexParams( std::string filename ); -}; -\end{lstlisting} -\begin{description} -\cvarg{filename}{ The filename in which the index was saved. } -\end{description} -\end{description} -\end{description} - -\ifplastex -\cvCppFunc{cv::flann::Index_::knnSearch} -\else -\subsubsection{cv::flann::Index\_$$::knnSearch}\label{cvflann.Index.knnSearch} -\fi -Performs a K-nearest neighbor search for a given query point using the index. 
-\cvdefCpp{ -void Index\_::knnSearch(const vector<T>\& query, \par - vector<int>\& indices, \par - vector<float>\& dists, \par - int knn, \par - const SearchParams\& params);\newline -void Index\_::knnSearch(const Mat\& queries,\par - Mat\& indices, Mat\& dists,\par - int knn, const SearchParams\& params);} -\begin{description} -\cvarg{query}{The query point} -\cvarg{indices}{Vector that will contain the indices of the K-nearest neighbors found. It must have size of at least \texttt{knn}.} -\cvarg{dists}{Vector that will contain the distances to the K-nearest neighbors found. It must have size of at least \texttt{knn}.} -\cvarg{knn}{Number of nearest neighbors to search for.} -\cvarg{params}{Search parameters} -\begin{lstlisting} - struct SearchParams { - SearchParams(int checks = 32); - }; -\end{lstlisting} -\begin{description} -\cvarg{checks}{ The number of times the tree(s) in the index should be recursively traversed. A higher value for this parameter would give better search precision, but also take more time. If automatic configuration was used when the index was created, the number of checks required to achieve the specified precision was also computed, in which case this parameter is ignored.} -\end{description} -\end{description} - -\ifplastex -\cvCppFunc{cv::flann::Index_::radiusSearch} -\else -\subsubsection{cv::flann::Index\_$$::radiusSearch}\label{cvflann.Index.radiusSearch} -\fi -Performs a radius nearest neighbor search for a given query point. -\cvdefCpp{ -int Index\_::radiusSearch(const vector<T>\& query, \par - vector<int>\& indices, \par - vector<float>\& dists, \par - float radius, \par - const SearchParams\& params);\newline -int Index\_::radiusSearch(const Mat\& query, \par - Mat\& indices, \par - Mat\& dists, \par - float radius, \par - const SearchParams\& params);} -\begin{description} -\cvarg{query}{The query point} -\cvarg{indices}{Vector that will contain the indices of the points found within the search radius in decreasing order of the distance to the query point. If the number of neighbors in the search radius is bigger than the size of this vector, the ones that don't fit in the vector are ignored. } -\cvarg{dists}{Vector that will contain the distances to the points found within the search radius} -\cvarg{radius}{The search radius} -\cvarg{params}{Search parameters} -\end{description} - -\ifplastex -\cvCppFunc{cv::flann::Index_::save} -\else -\subsubsection{cv::flann::Index\_$$::save}\label{cvflann.Index.save} -\fi - -Saves the index to a file. -\cvdefCpp{void Index\_::save(std::string filename);} -\begin{description} -\cvarg{filename}{The file to save the index to} -\end{description} - -\ifplastex -\cvCppFunc{cv::flann::Index_::getIndexParameters} -\else -\subsubsection{cv::flann::Index\_$$::getIndexParameters}\label{cvflann.Index.getIndexParameters} -\fi - -Returns the index parameters. This is useful in the case of autotuned indices, when the parameters computed can be retrieved using this method. -\cvdefCpp{const IndexParams* Index\_::getIndexParameters();} - - -\cvCppFunc{cv::flann::hierarchicalClustering} -Clusters the given points by constructing a hierarchical k-means tree and choosing a cut in the tree that minimizes the clusters' variance. -\cvdefCpp{int hierarchicalClustering(const Mat\& features, Mat\& centers,\par - const KMeansIndexParams\& params);} -\begin{description} -\cvarg{features}{The points to be clustered. The matrix must have elements of type ET.} -\cvarg{centers}{The centers of the clusters obtained. The matrix must have type DT.
The number of rows in this matrix represents the number of clusters desired, however, because of the way the cut in the hierarchical tree is chosen, the number of clusters computed will be the highest number of the form \texttt{(branching-1)*k+1} that's lower than the number of clusters desired, where \texttt{branching} is the tree's branching factor (see description of the KMeansIndexParams).} -\cvarg{params}{Parameters used in the construction of the hierarchical k-means tree} -\end{description} -The function returns the number of clusters computed. \fi diff --git a/doc/cxcore_drawing_functions.tex b/doc/core_drawing_functions.tex similarity index 100% rename from doc/cxcore_drawing_functions.tex rename to doc/core_drawing_functions.tex diff --git a/doc/cxcore_dynamic_structures.tex b/doc/core_dynamic_structures.tex similarity index 100% rename from doc/cxcore_dynamic_structures.tex rename to doc/core_dynamic_structures.tex diff --git a/doc/cxcore_introduction.tex b/doc/core_introduction.tex similarity index 100% rename from doc/cxcore_introduction.tex rename to doc/core_introduction.tex diff --git a/doc/cxcore_persistence.tex b/doc/core_persistence.tex similarity index 100% rename from doc/cxcore_persistence.tex rename to doc/core_persistence.tex diff --git a/doc/cxcore_utilities_system_functions.tex b/doc/core_utilities_system_functions.tex similarity index 100% rename from doc/cxcore_utilities_system_functions.tex rename to doc/core_utilities_system_functions.tex diff --git a/doc/cvaux_3d.tex b/doc/cvaux_3d.tex deleted file mode 100644 index e69de29..0000000 diff --git a/doc/cvaux_bgfg.tex b/doc/cvaux_bgfg.tex deleted file mode 100644 index e69de29..0000000 diff --git a/doc/cv_feature_detection.tex b/doc/features2d_feature_detection.tex similarity index 53% rename from doc/cv_feature_detection.tex rename to doc/features2d_feature_detection.tex index 57261ae..6307ac7 100644 --- a/doc/cv_feature_detection.tex +++ b/doc/features2d_feature_detection.tex @@ -2,108 +2,6 @@ \ifCPy -\cvCPyFunc{Canny} -Implements the Canny algorithm for edge detection. - -\cvdefC{ -void cvCanny(\par const CvArr* image, -\par CvArr* edges, -\par double threshold1, -\par double threshold2, -\par int aperture\_size=3 ); -}\cvdefPy{Canny(image,edges,threshold1,threshold2,aperture\_size=3)-> None} -\begin{description} -\cvarg{image}{Single-channel input image} -\cvarg{edges}{Single-channel image to store the edges found by the function} -\cvarg{threshold1}{The first threshold} -\cvarg{threshold2}{The second threshold} -\cvarg{aperture\_size}{Aperture parameter for the Sobel operator (see \cvCPyCross{Sobel})} -\end{description} - -The function finds the edges on the input image \texttt{image} and marks them in the output image \texttt{edges} using the Canny algorithm. The smallest value between \texttt{threshold1} and \texttt{threshold2} is used for edge linking, the largest value is used to find the initial segments of strong edges. - -\cvCPyFunc{CornerEigenValsAndVecs} -Calculates eigenvalues and eigenvectors of image blocks for corner detection. - -\cvdefC{ -void cvCornerEigenValsAndVecs( \par const CvArr* image,\par CvArr* eigenvv,\par int blockSize,\par int aperture\_size=3 ); - -}\cvdefPy{CornerEigenValsAndVecs(image,eigenvv,blockSize,aperture\_size=3)-> None} - -\begin{description} -\cvarg{image}{Input image} -\cvarg{eigenvv}{Image to store the results. 
It must be 6 times wider than the input image} -\cvarg{blockSize}{Neighborhood size (see discussion)} -\cvarg{aperture\_size}{Aperture parameter for the Sobel operator (see \cvCPyCross{Sobel})} -\end{description} - -For every pixel, the function \texttt{cvCornerEigenValsAndVecs} considers a $\texttt{blockSize} \times \texttt{blockSize}$ neigborhood S(p). It calcualtes the covariation matrix of derivatives over the neigborhood as: - -\[ -M = \begin{bmatrix} -\sum_{S(p)}(dI/dx)^2 & \sum_{S(p)}(dI/dx \cdot dI/dy)^2 \\ -\sum_{S(p)}(dI/dx \cdot dI/dy)^2 & \sum_{S(p)}(dI/dy)^2 -\end{bmatrix} -\] - -After that it finds eigenvectors and eigenvalues of the matrix and stores them into destination image in form -$(\lambda_1, \lambda_2, x_1, y_1, x_2, y_2)$ where -\begin{description} -\item[$\lambda_1, \lambda_2$]are the eigenvalues of $M$; not sorted -\item[$x_1, y_1$]are the eigenvectors corresponding to $\lambda_1$ -\item[$x_2, y_2$]are the eigenvectors corresponding to $\lambda_2$ -\end{description} - -\cvCPyFunc{CornerHarris} -Harris edge detector. - -\cvdefC{ -void cvCornerHarris( -\par const CvArr* image, -\par CvArr* harris\_dst, -\par int blockSize, -\par int aperture\_size=3, -\par double k=0.04 ); -} -\cvdefPy{CornerHarris(image,harris\_dst,blockSize,aperture\_size=3,k=0.04)-> None} - -\begin{description} -\cvarg{image}{Input image} -\cvarg{harris\_dst}{Image to store the Harris detector responses. Should have the same size as \texttt{image}} -\cvarg{blockSize}{Neighborhood size (see the discussion of \cvCPyCross{CornerEigenValsAndVecs})} -\cvarg{aperture\_size}{Aperture parameter for the Sobel operator (see \cvCPyCross{Sobel}).} -% format. In the case of floating-point input format this parameter is the number of the fixed float filter used for differencing -\cvarg{k}{Harris detector free parameter. See the formula below} -\end{description} - -The function runs the Harris edge detector on the image. Similarly to \cvCPyCross{CornerMinEigenVal} and \cvCPyCross{CornerEigenValsAndVecs}, for each pixel it calculates a $2\times2$ gradient covariation matrix $M$ over a $\texttt{blockSize} \times \texttt{blockSize}$ neighborhood. Then, it stores - -\[ -det(M) - k \, trace(M)^2 -\] - -to the destination image. Corners in the image can be found as the local maxima of the destination image. - -\cvCPyFunc{CornerMinEigenVal} -Calculates the minimal eigenvalue of gradient matrices for corner detection. - -\cvdefC{ -void cvCornerMinEigenVal( -\par const CvArr* image, -\par CvArr* eigenval, -\par int blockSize, -\par int aperture\_size=3 ); -}\cvdefPy{CornerMinEigenVal(image,eigenval,blockSize,aperture\_size=3)-> None} -\begin{description} -\cvarg{image}{Input image} -\cvarg{eigenval}{Image to store the minimal eigenvalues. Should have the same size as \texttt{image}} -\cvarg{blockSize}{Neighborhood size (see the discussion of \cvCPyCross{CornerEigenValsAndVecs})} -\cvarg{aperture\_size}{Aperture parameter for the Sobel operator (see \cvCPyCross{Sobel}).} -% format. In the case of floating-point input format this parameter is the number of the fixed float filter used for differencing -\end{description} - -The function is similar to \cvCPyCross{CornerEigenValsAndVecs} but it calculates and stores only the minimal eigen value of derivative covariation matrix for every pixel, i.e. $min(\lambda_1, \lambda_2)$ in terms of the previous function. - \ifPy \cvclass{CvSURFPoint} A SURF keypoint, represented as a tuple \texttt{((x, y), laplacian, size, dir, hessian)}. 
@@ -217,60 +115,6 @@ x=777 y=281 laplacian=1 size=70 dir=167.940964 hessian=35538.363281 \fi -\cvCPyFunc{FindCornerSubPix} -Refines the corner locations. - -\cvdefC{ -void cvFindCornerSubPix( -\par const CvArr* image, -\par CvPoint2D32f* corners, -\par int count, -\par CvSize win, -\par CvSize zero\_zone, -\par CvTermCriteria criteria ); -}\cvdefPy{FindCornerSubPix(image,corners,win,zero\_zone,criteria)-> corners} - -\begin{description} -\cvarg{image}{Input image} -\ifC -\cvarg{corners}{Initial coordinates of the input corners; refined coordinates on output} -\cvarg{count}{Number of corners} -\fi -\ifPy -\cvarg{corners}{Initial coordinates of the input corners as a list of (x, y) pairs} -\fi -\cvarg{win}{Half of the side length of the search window. For example, if \texttt{win}=(5,5), then a $5*2+1 \times 5*2+1 = 11 \times 11$ search window would be used} -\cvarg{zero\_zone}{Half of the size of the dead region in the middle of the search zone over which the summation in the formula below is not done. It is used sometimes to avoid possible singularities of the autocorrelation matrix. The value of (-1,-1) indicates that there is no such size} -\cvarg{criteria}{Criteria for termination of the iterative process of corner refinement. That is, the process of corner position refinement stops either after a certain number of iterations or when a required accuracy is achieved. The \texttt{criteria} may specify either of or both the maximum number of iteration and the required accuracy} -\end{description} - -The function iterates to find the sub-pixel accurate location of corners, or radial saddle points, as shown in on the picture below. -\ifPy -It returns the refined coordinates as a list of (x, y) pairs. -\fi - -\includegraphics[width=1.0\textwidth]{pics/cornersubpix.png} - -Sub-pixel accurate corner locator is based on the observation that every vector from the center $q$ to a point $p$ located within a neighborhood of $q$ is orthogonal to the image gradient at $p$ subject to image and measurement noise. Consider the expression: - -\[ -\epsilon_i = {DI_{p_i}}^T \cdot (q - p_i) -\] - -where ${DI_{p_i}}$ is the image gradient at the one of the points $p_i$ in a neighborhood of $q$. The value of $q$ is to be found such that $\epsilon_i$ is minimized. A system of equations may be set up with $\epsilon_i$ set to zero: - -\[ -\sum_i(DI_{p_i} \cdot {DI_{p_i}}^T) q = \sum_i(DI_{p_i} \cdot {DI_{p_i}}^T \cdot p_i) -\] - -where the gradients are summed within a neighborhood ("search window") of $q$. Calling the first gradient term $G$ and the second gradient term $b$ gives: - -\[ -q = G^{-1} \cdot b -\] - -The algorithm sets the center of the neighborhood window at this new center $q$ and then iterates until the center keeps within a set threshold. - \cvCPyFunc{GetStarKeypoints} Retrieves keypoints using the StarDetector algorithm. @@ -391,615 +235,9 @@ int main(int argc, char** argv) \end{lstlisting} \fi -\cvCPyFunc{GoodFeaturesToTrack} -Determines strong corners on an image. 
- -\cvdefC{ -void cvGoodFeaturesToTrack( -\par const CvArr* image -\par CvArr* eigImage, CvArr* tempImage -\par CvPoint2D32f* corners -\par int* cornerCount -\par double qualityLevel -\par double minDistance -\par const CvArr* mask=NULL -\par int blockSize=3 -\par int useHarris=0 -\par double k=0.04 ); -} -\cvdefPy{GoodFeaturesToTrack(image,eigImage,tempImage,cornerCount,qualityLevel,minDistance,mask=NULL,blockSize=3,useHarris=0,k=0.04)-> corners} - -\begin{description} -\cvarg{image}{The source 8-bit or floating-point 32-bit, single-channel image} -\cvarg{eigImage}{Temporary floating-point 32-bit image, the same size as \texttt{image}} -\cvarg{tempImage}{Another temporary image, the same size and format as \texttt{eigImage}} -\ifC -\cvarg{corners}{Output parameter; detected corners} -\cvarg{cornerCount}{Output parameter; number of detected corners} -\else -\cvarg{cornerCount}{number of corners to detect} -\fi -\cvarg{qualityLevel}{Multiplier for the max/min eigenvalue; specifies the minimal accepted quality of image corners} -\cvarg{minDistance}{Limit, specifying the minimum possible distance between the returned corners; Euclidian distance is used} -\cvarg{mask}{Region of interest. The function selects points either in the specified region or in the whole image if the mask is NULL} -\cvarg{blockSize}{Size of the averaging block, passed to the underlying \cvCPyCross{CornerMinEigenVal} or \cvCPyCross{CornerHarris} used by the function} -\cvarg{useHarris}{If nonzero, Harris operator (\cvCPyCross{CornerHarris}) is used instead of default \cvCPyCross{CornerMinEigenVal}} -\cvarg{k}{Free parameter of Harris detector; used only if ($\texttt{useHarris} != 0$)} -\end{description} - -The function finds the corners with big eigenvalues in the image. The function first calculates the minimal -eigenvalue for every source image pixel using the \cvCPyCross{CornerMinEigenVal} -function and stores them in \texttt{eigImage}. Then it performs -non-maxima suppression (only the local maxima in $3\times 3$ neighborhood -are retained). The next step rejects the corners with the minimal -eigenvalue less than $\texttt{qualityLevel} \cdot max(\texttt{eigImage}(x,y))$. -Finally, the function ensures that the distance between any two corners is not smaller than \texttt{minDistance}. The weaker corners (with a smaller min eigenvalue) that are too close to the stronger corners are rejected. - -Note that the if the function is called with different values \texttt{A} and \texttt{B} of the parameter \texttt{qualityLevel}, and \texttt{A} > {B}, the array of returned corners with \texttt{qualityLevel=A} will be the prefix of the output corners array with \texttt{qualityLevel=B}. - -\cvCPyFunc{HoughLines2} -Finds lines in a binary image using a Hough transform. - -\cvdefC{ -CvSeq* cvHoughLines2( \par CvArr* image,\par void* storage,\par int method,\par double rho,\par double theta,\par int threshold,\par double param1=0,\par double param2=0 ); -} -\cvdefPy{HoughLines2(image,storage,method,rho,theta,threshold,param1=0,param2=0)-> lines} - -\begin{description} -\cvarg{image}{The 8-bit, single-channel, binary source image. In the case of a probabilistic method, the image is modified by the function} -\cvarg{storage}{The storage for the lines that are detected. It can -be a memory storage (in this case a sequence of lines is created in -the storage and returned by the function) or single row/single column -matrix (CvMat*) of a particular type (see below) to which the lines' -parameters are written. 
The matrix header is modified by the function -so its \texttt{cols} or \texttt{rows} will contain the number of lines -detected. If \texttt{storage} is a matrix and the actual number -of lines exceeds the matrix size, the maximum possible number of lines -is returned (in the case of standard hough transform the lines are sorted -by the accumulator value)} -\cvarg{method}{The Hough transform variant, one of the following: -\begin{description} - \cvarg{CV\_HOUGH\_STANDARD}{classical or standard Hough transform. Every line is represented by two floating-point numbers $(\rho, \theta)$, where $\rho$ is a distance between (0,0) point and the line, and $\theta$ is the angle between x-axis and the normal to the line. Thus, the matrix must be (the created sequence will be) of \texttt{CV\_32FC2} type} - \cvarg{CV\_HOUGH\_PROBABILISTIC}{probabilistic Hough transform (more efficient in case if picture contains a few long linear segments). It returns line segments rather than the whole line. Each segment is represented by starting and ending points, and the matrix must be (the created sequence will be) of \texttt{CV\_32SC4} type} - \cvarg{CV\_HOUGH\_MULTI\_SCALE}{multi-scale variant of the classical Hough transform. The lines are encoded the same way as \texttt{CV\_HOUGH\_STANDARD}} -\end{description}} -\cvarg{rho}{Distance resolution in pixel-related units} -\cvarg{theta}{Angle resolution measured in radians} -\cvarg{threshold}{Threshold parameter. A line is returned by the function if the corresponding accumulator value is greater than \texttt{threshold}} -\cvarg{param1}{The first method-dependent parameter: -\begin{itemize} - \item For the classical Hough transform it is not used (0). - \item For the probabilistic Hough transform it is the minimum line length. - \item For the multi-scale Hough transform it is the divisor for the distance resolution $\rho$. (The coarse distance resolution will be $\rho$ and the accurate resolution will be $(\rho / \texttt{param1})$). -\end{itemize}} -\cvarg{param2}{The second method-dependent parameter: -\begin{itemize} - \item For the classical Hough transform it is not used (0). - \item For the probabilistic Hough transform it is the maximum gap between line segments lying on the same line to treat them as a single line segment (i.e. to join them). - \item For the multi-scale Hough transform it is the divisor for the angle resolution $\theta$. (The coarse angle resolution will be $\theta$ and the accurate resolution will be $(\theta / \texttt{param2})$). -\end{itemize}} -\end{description} - -The function implements a few variants of the Hough transform for line detection. - -\ifC -\textbf{Example. Detecting lines with Hough transform.} -\begin{lstlisting} -/* This is a standalone program. Pass an image name as a first parameter -of the program. 
Switch between standard and probabilistic Hough transform -by changing "#if 1" to "#if 0" and back */ -#include -#include -#include - -int main(int argc, char** argv) -{ - IplImage* src; - if( argc == 2 && (src=cvLoadImage(argv[1], 0))!= 0) - { - IplImage* dst = cvCreateImage( cvGetSize(src), 8, 1 ); - IplImage* color_dst = cvCreateImage( cvGetSize(src), 8, 3 ); - CvMemStorage* storage = cvCreateMemStorage(0); - CvSeq* lines = 0; - int i; - cvCanny( src, dst, 50, 200, 3 ); - cvCvtColor( dst, color_dst, CV_GRAY2BGR ); -#if 1 - lines = cvHoughLines2( dst, - storage, - CV_HOUGH_STANDARD, - 1, - CV_PI/180, - 100, - 0, - 0 ); - - for( i = 0; i < MIN(lines->total,100); i++ ) - { - float* line = (float*)cvGetSeqElem(lines,i); - float rho = line[0]; - float theta = line[1]; - CvPoint pt1, pt2; - double a = cos(theta), b = sin(theta); - double x0 = a*rho, y0 = b*rho; - pt1.x = cvRound(x0 + 1000*(-b)); - pt1.y = cvRound(y0 + 1000*(a)); - pt2.x = cvRound(x0 - 1000*(-b)); - pt2.y = cvRound(y0 - 1000*(a)); - cvLine( color_dst, pt1, pt2, CV_RGB(255,0,0), 3, 8 ); - } -#else - lines = cvHoughLines2( dst, - storage, - CV_HOUGH_PROBABILISTIC, - 1, - CV_PI/180, - 80, - 30, - 10 ); - for( i = 0; i < lines->total; i++ ) - { - CvPoint* line = (CvPoint*)cvGetSeqElem(lines,i); - cvLine( color_dst, line[0], line[1], CV_RGB(255,0,0), 3, 8 ); - } -#endif - cvNamedWindow( "Source", 1 ); - cvShowImage( "Source", src ); - - cvNamedWindow( "Hough", 1 ); - cvShowImage( "Hough", color_dst ); - - cvWaitKey(0); - } -} -\end{lstlisting} - -This is the sample picture the function parameters have been tuned for: - -\includegraphics[width=0.5\textwidth]{pics/building.jpg} - -And this is the output of the above program in the case of probabilistic Hough transform (\texttt{\#if 0} case): - -\includegraphics[width=0.5\textwidth]{pics/houghp.png} -\fi - -\cvCPyFunc{PreCornerDetect} -Calculates the feature map for corner detection. - -\cvdefC{ -void cvPreCornerDetect( -\par const CvArr* image, -\par CvArr* corners, -\par int apertureSize=3 ); -} -\cvdefPy{PreCornerDetect(image,corners,apertureSize=3)-> None} -\begin{description} -\cvarg{image}{Input image} -\cvarg{corners}{Image to store the corner candidates} -\cvarg{apertureSize}{Aperture parameter for the Sobel operator (see \cvCPyCross{Sobel})} -\end{description} - -The function calculates the function - -\[ -D_x^2 D_{yy} + D_y^2 D_{xx} - 2 D_x D_y D_{xy} -\] - -where $D_?$ denotes one of the first image derivatives and $D_{??}$ denotes a second image derivative. - -The corners can be found as local maximums of the function below: - -\ifC -\begin{lstlisting} -// assume that the image is floating-point -IplImage* corners = cvCloneImage(image); -IplImage* dilated_corners = cvCloneImage(image); -IplImage* corner_mask = cvCreateImage( cvGetSize(image), 8, 1 ); -cvPreCornerDetect( image, corners, 3 ); -cvDilate( corners, dilated_corners, 0, 1 ); -cvSubS( corners, dilated_corners, corners ); -cvCmpS( corners, 0, corner_mask, CV_CMP_GE ); -cvReleaseImage( &corners ); -cvReleaseImage( &dilated_corners ); -\end{lstlisting} - -\else -\lstinputlisting{python_fragments/precornerdetect.py} \fi - -\ifC -\cvCPyFunc{SampleLine} -Reads the raster line to the buffer. 
- -\cvdefC{ -int cvSampleLine( -\par const CvArr* image -\par CvPoint pt1 -\par CvPoint pt2 -\par void* buffer -\par int connectivity=8 ); -} - -\begin{description} -\cvarg{image}{Image to sample the line from} -\cvarg{pt1}{Starting line point} -\cvarg{pt2}{Ending line point} -\cvarg{buffer}{Buffer to store the line points; must have enough size to store -$max( |\texttt{pt2.x} - \texttt{pt1.x}|+1, |\texttt{pt2.y} - \texttt{pt1.y}|+1 )$ -points in the case of an 8-connected line and -$ (|\texttt{pt2.x}-\texttt{pt1.x}|+|\texttt{pt2.y}-\texttt{pt1.y}|+1) $ -in the case of a 4-connected line} -\cvarg{connectivity}{The line connectivity, 4 or 8} -\end{description} - -The function implements a particular application of line iterators. The function reads all of the image points lying on the line between \texttt{pt1} and \texttt{pt2}, including the end points, and stores them into the buffer. - -\fi - -\fi - - \ifCpp -\cvCppFunc{Canny} -Finds edges in an image using Canny algorithm. - -\cvdefCpp{void Canny( const Mat\& image, Mat\& edges,\par - double threshold1, double threshold2,\par - int apertureSize=3, bool L2gradient=false );} -\begin{description} -\cvarg{image}{Single-channel 8-bit input image} -\cvarg{edges}{The output edge map. It will have the same size and the same type as \texttt{image}} -\cvarg{threshold1}{The first threshold for the hysteresis procedure} -\cvarg{threshold2}{The second threshold for the hysteresis procedure} -\cvarg{apertureSize}{Aperture size for the \cvCppCross{Sobel} operator} -\cvarg{L2gradient}{Indicates, whether the more accurate $L_2$ norm $=\sqrt{(dI/dx)^2 + (dI/dy)^2}$ should be used to compute the image gradient magnitude (\texttt{L2gradient=true}), or a faster default $L_1$ norm $=|dI/dx|+|dI/dy|$ is enough (\texttt{L2gradient=false})} -\end{description} - -The function finds edges in the input image \texttt{image} and marks them in the output map \texttt{edges} using the Canny algorithm. The smallest value between \texttt{threshold1} and \texttt{threshold2} is used for edge linking, the largest value is used to find the initial segments of strong edges, see -\url{http://en.wikipedia.org/wiki/Canny_edge_detector} - -\cvCppFunc{cornerEigenValsAndVecs} -Calculates eigenvalues and eigenvectors of image blocks for corner detection. - -\cvdefCpp{void cornerEigenValsAndVecs( const Mat\& src, Mat\& dst,\par - int blockSize, int apertureSize,\par - int borderType=BORDER\_DEFAULT );} -\begin{description} -\cvarg{src}{Input single-channel 8-bit or floating-point image} -\cvarg{dst}{Image to store the results. It will have the same size as \texttt{src} and the type \texttt{CV\_32FC(6)}} -\cvarg{blockSize}{Neighborhood size (see discussion)} -\cvarg{apertureSize}{Aperture parameter for the \cvCppCross{Sobel} operator} -\cvarg{boderType}{Pixel extrapolation method; see \cvCppCross{borderInterpolate}} -\end{description} - -For every pixel $p$, the function \texttt{cornerEigenValsAndVecs} considers a \texttt{blockSize} $\times$ \texttt{blockSize} neigborhood $S(p)$. It calculates the covariation matrix of derivatives over the neighborhood as: - -\[ -M = \begin{bmatrix} -\sum_{S(p)}(dI/dx)^2 & \sum_{S(p)}(dI/dx dI/dy)^2 \\ -\sum_{S(p)}(dI/dx dI/dy)^2 & \sum_{S(p)}(dI/dy)^2 -\end{bmatrix} -\] - -Where the derivatives are computed using \cvCppCross{Sobel} operator. 
- -After that it finds eigenvectors and eigenvalues of $M$ and stores them into the destination image in the form -$(\lambda_1, \lambda_2, x_1, y_1, x_2, y_2)$ where -\begin{description} -\item[$\lambda_1, \lambda_2$]are the eigenvalues of $M$; not sorted -\item[$x_1, y_1$]are the eigenvectors corresponding to $\lambda_1$ -\item[$x_2, y_2$]are the eigenvectors corresponding to $\lambda_2$ -\end{description} - -The output of the function can be used for robust edge or corner detection. - -See also: \cvCppCross{cornerMinEigenVal}, \cvCppCross{cornerHarris}, \cvCppCross{preCornerDetect} - -\cvCppFunc{cornerHarris} -Harris edge detector. - -\cvdefCpp{void cornerHarris( const Mat\& src, Mat\& dst, int blockSize,\par - int apertureSize, double k,\par - int borderType=BORDER\_DEFAULT );} -\begin{description} -\cvarg{src}{Input single-channel 8-bit or floating-point image} -\cvarg{dst}{Image to store the Harris detector responses; will have type \texttt{CV\_32FC1} and the same size as \texttt{src}} -\cvarg{blockSize}{Neighborhood size (see the discussion of \cvCppCross{cornerEigenValsAndVecs})} -\cvarg{apertureSize}{Aperture parameter for the \cvCppCross{Sobel} operator} -\cvarg{k}{Harris detector free parameter. See the formula below} -\cvarg{borderType}{Pixel extrapolation method; see \cvCppCross{borderInterpolate}} -\end{description} - -The function runs the Harris edge detector on the image. Similarly to \cvCppCross{cornerMinEigenVal} and \cvCppCross{cornerEigenValsAndVecs}, for each pixel $(x, y)$ it calculates a $2\times2$ gradient covariation matrix $M^{(x,y)}$ over a $\texttt{blockSize} \times \texttt{blockSize}$ neighborhood. Then, it computes the following characteristic: - -\[ -\texttt{dst}(x,y) = \mathrm{det} M^{(x,y)} - k \cdot \left(\mathrm{tr} M^{(x,y)}\right)^2 -\] - -Corners in the image can be found as the local maxima of this response map. - -\cvCppFunc{cornerMinEigenVal} -Calculates the minimal eigenvalue of gradient matrices for corner detection. - -\cvdefCpp{void cornerMinEigenVal( const Mat\& src, Mat\& dst,\par - int blockSize, int apertureSize=3,\par - int borderType=BORDER\_DEFAULT );} -\begin{description} -\cvarg{src}{Input single-channel 8-bit or floating-point image} -\cvarg{dst}{Image to store the minimal eigenvalues; will have type \texttt{CV\_32FC1} and the same size as \texttt{src}} -\cvarg{blockSize}{Neighborhood size (see the discussion of \cvCppCross{cornerEigenValsAndVecs})} -\cvarg{apertureSize}{Aperture parameter for the \cvCppCross{Sobel} operator} -\cvarg{borderType}{Pixel extrapolation method; see \cvCppCross{borderInterpolate}} -\end{description} - -The function is similar to \cvCppCross{cornerEigenValsAndVecs} but it calculates and stores only the minimal eigenvalue of the covariation matrix of derivatives, i.e. $\min(\lambda_1, \lambda_2)$ in terms of the formulae in the \cvCppCross{cornerEigenValsAndVecs} description. - -\cvCppFunc{cornerSubPix} -Refines the corner locations. - -\cvdefCpp{void cornerSubPix( const Mat\& image, vector<Point2f>\& corners,\par - Size winSize, Size zeroZone,\par - TermCriteria criteria );} -\begin{description} -\cvarg{image}{Input image} -\cvarg{corners}{Initial coordinates of the input corners; refined coordinates on output} -\cvarg{winSize}{Half of the side length of the search window.
For example, if \texttt{winSize=Size(5,5)}, then a $5*2+1 \times 5*2+1 = 11 \times 11$ search window would be used} -\cvarg{zeroZone}{Half of the size of the dead region in the middle of the search zone over which the summation in the formula below is not done. It is used sometimes to avoid possible singularities of the autocorrelation matrix. The value of (-1,-1) indicates that there is no such size} -\cvarg{criteria}{Criteria for termination of the iterative process of corner refinement. That is, the process of corner position refinement stops either after a certain number of iterations or when a required accuracy is achieved. The \texttt{criteria} may specify either or both of the maximum number of iterations and the required accuracy} -\end{description} - -The function iterates to find the sub-pixel accurate location of corners, or radial saddle points, as shown in the picture below. - -\includegraphics[width=1.0\textwidth]{pics/cornersubpix.png} - -The sub-pixel accurate corner locator is based on the observation that every vector from the center $q$ to a point $p$ located within a neighborhood of $q$ is orthogonal to the image gradient at $p$ subject to image and measurement noise. Consider the expression: - -\[ -\epsilon_i = {DI_{p_i}}^T \cdot (q - p_i) -\] - -where ${DI_{p_i}}$ is the image gradient at one of the points $p_i$ in a neighborhood of $q$. The value of $q$ is to be found such that $\epsilon_i$ is minimized. A system of equations may be set up with $\epsilon_i$ set to zero: - -\[ -\sum_i(DI_{p_i} \cdot {DI_{p_i}}^T) q = \sum_i(DI_{p_i} \cdot {DI_{p_i}}^T \cdot p_i) -\] - -where the gradients are summed within a neighborhood ("search window") of $q$. Calling the first gradient term $G$ and the second gradient term $b$ gives: - -\[ -q = G^{-1} \cdot b -\] - -The algorithm sets the center of the neighborhood window at this new center $q$ and then iterates until the center stays within a set threshold. - - -\cvCppFunc{goodFeaturesToTrack} -Determines strong corners on an image. - -\cvdefCpp{void goodFeaturesToTrack( const Mat\& image, vector<Point2f>\& corners,\par - int maxCorners, double qualityLevel, double minDistance,\par - const Mat\& mask=Mat(), int blockSize=3,\par - bool useHarrisDetector=false, double k=0.04 );} -\begin{description} -\cvarg{image}{The input 8-bit or floating-point 32-bit, single-channel image} -\cvarg{corners}{The output vector of detected corners} -\cvarg{maxCorners}{The maximum number of corners to return. If more corners are found, the strongest of them will be returned} -\cvarg{qualityLevel}{Characterizes the minimal accepted quality of image corners; the value of the parameter is multiplied by the best corner quality measure (which is the min eigenvalue, see \cvCppCross{cornerMinEigenVal}, or the Harris function response, see \cvCppCross{cornerHarris}). The corners whose quality measure is less than the product will be rejected. For example, if the best corner has the quality measure = 1500, and \texttt{qualityLevel=0.01}, then all the corners whose quality measure is less than 15 will be rejected.} -\cvarg{minDistance}{The minimum possible Euclidean distance between the returned corners} -\cvarg{mask}{The optional region of interest.
If the mask is not empty (it must have the type \texttt{CV\_8UC1} and the same size as \texttt{image}), it specifies the region in which the corners are detected} -\cvarg{blockSize}{Size of the averaging block for computing the derivative covariation matrix over each pixel neighborhood, see \cvCppCross{cornerEigenValsAndVecs}} -\cvarg{useHarrisDetector}{Indicates whether to use the \hyperref[cornerHarris]{Harris} operator or \cvCppCross{cornerMinEigenVal}} -\cvarg{k}{Free parameter of Harris detector} -\end{description} - -The function finds the most prominent corners in the image or in the specified image region, as described -in \cite{Shi94}: -\begin{enumerate} -\item the function first calculates the corner quality measure at every source image pixel using the \cvCppCross{cornerMinEigenVal} or \cvCppCross{cornerHarris} -\item then it performs non-maxima suppression (the local maxima in $3\times 3$ neighborhood -are retained). -\item the next step rejects the corners with the minimal eigenvalue less than $\texttt{qualityLevel} \cdot \max_{x,y} qualityMeasureMap(x,y)$. -\item the remaining corners are then sorted by the quality measure in the descending order. -\item finally, the function throws away each corner $pt_j$ if there is a stronger corner $pt_i$ ($i < j$) such that the distance between them is less than \texttt{minDistance} -\end{enumerate} - -The function can be used to initialize a point-based tracker of an object. - -Note that if the function is called with different values \texttt{A} and \texttt{B} of the parameter \texttt{qualityLevel}, and \texttt{A} > \texttt{B}, the vector of returned corners with \texttt{qualityLevel=A} will be the prefix of the output vector with \texttt{qualityLevel=B}. - -See also: \cvCppCross{cornerMinEigenVal}, \cvCppCross{cornerHarris}, \cvCppCross{calcOpticalFlowPyrLK}, \cvCppCross{estimateRigidMotion}, \cvCppCross{PlanarObjectDetector}, \cvCppCross{OneWayDescriptor} - -\cvCppFunc{HoughCircles} -Finds circles in a grayscale image using a Hough transform. - -\cvdefCpp{void HoughCircles( Mat\& image, vector<Vec3f>\& circles,\par - int method, double dp, double minDist,\par - double param1=100, double param2=100,\par - int minRadius=0, int maxRadius=0 );} -\begin{description} -\cvarg{image}{The 8-bit, single-channel, grayscale input image} -\cvarg{circles}{The output vector of found circles. Each circle is encoded as a 3-element floating-point vector $(x, y, radius)$} -\cvarg{method}{Currently, the only implemented method is \texttt{CV\_HOUGH\_GRADIENT}, which is basically \emph{21HT}, described in \cite{Yuen90}.} -\cvarg{dp}{The inverse ratio of the accumulator resolution to the image resolution. For example, if \texttt{dp=1}, the accumulator will have the same resolution as the input image; if \texttt{dp=2}, the accumulator will have half the width and height, and so on} -\cvarg{minDist}{Minimum distance between the centers of the detected circles. If the parameter is too small, multiple neighbor circles may be falsely detected in addition to a true one. If it is too large, some circles may be missed} -\cvarg{param1}{The first method-specific parameter. In the case of \texttt{CV\_HOUGH\_GRADIENT} it is the higher of the two thresholds passed to the \cvCppCross{Canny} edge detector (the lower one is twice smaller)} -\cvarg{param2}{The second method-specific parameter. In the case of \texttt{CV\_HOUGH\_GRADIENT} it is the accumulator threshold at the center detection stage. The smaller it is, the more false circles may be detected.
Circles, corresponding to the larger accumulator values, will be returned first} -\cvarg{minRadius}{Minimum circle radius} -\cvarg{maxRadius}{Maximum circle radius} -\end{description} - -The function finds circles in a grayscale image using a modification of the Hough transform. Here is a short usage example: - -\begin{lstlisting} -#include <cv.h> -#include <highgui.h> -#include <math.h> - -using namespace cv; - -int main(int argc, char** argv) -{ - Mat img, gray; - if( argc != 2 || !(img=imread(argv[1], 1)).data) - return -1; - cvtColor(img, gray, CV_BGR2GRAY); - // smooth it, otherwise a lot of false circles may be detected - GaussianBlur( gray, gray, Size(9, 9), 2, 2 ); - vector<Vec3f> circles; - HoughCircles(gray, circles, CV_HOUGH_GRADIENT, - 2, gray.rows/4, 200, 100 ); - for( size_t i = 0; i < circles.size(); i++ ) - { - Point center(cvRound(circles[i][0]), cvRound(circles[i][1])); - int radius = cvRound(circles[i][2]); - // draw the circle center - circle( img, center, 3, Scalar(0,255,0), -1, 8, 0 ); - // draw the circle outline - circle( img, center, radius, Scalar(0,0,255), 3, 8, 0 ); - } - namedWindow( "circles", 1 ); - imshow( "circles", img ); - return 0; -} -\end{lstlisting} - -Note that usually the function detects the circles' centers well; however, it may fail to find the correct radii. You can assist the function by specifying the radius range (\texttt{minRadius} and \texttt{maxRadius}) if you know it, or you may ignore the returned radius, use only the center and find the correct radius using some additional procedure. - -See also: \cvCppCross{fitEllipse}, \cvCppCross{minEnclosingCircle} - -\cvCppFunc{HoughLines} -Finds lines in a binary image using the standard Hough transform. - -\cvdefCpp{void HoughLines( Mat\& image, vector<Vec2f>\& lines,\par - double rho, double theta, int threshold,\par - double srn=0, double stn=0 );} -\begin{description} -\cvarg{image}{The 8-bit, single-channel, binary source image. The image may be modified by the function} -\cvarg{lines}{The output vector of lines. Each line is represented by a two-element vector $(\rho, \theta)$. $\rho$ is the distance from the coordinate origin $(0,0)$ (top-left corner of the image) and $\theta$ is the line rotation angle in radians ($0 \sim \textrm{vertical line}, \pi/2 \sim \textrm{horizontal line}$)} -\cvarg{rho}{Distance resolution of the accumulator in pixels} -\cvarg{theta}{Angle resolution of the accumulator in radians} -\cvarg{threshold}{The accumulator threshold parameter. Only those lines are returned that get enough votes ($>\texttt{threshold}$)} -\cvarg{srn}{For the multi-scale Hough transform it is the divisor for the distance resolution \texttt{rho}. The coarse accumulator distance resolution will be \texttt{rho} and the accurate accumulator resolution will be \texttt{rho/srn}. If both \texttt{srn=0} and \texttt{stn=0} then the classical Hough transform is used, otherwise both these parameters should be positive.} -\cvarg{stn}{For the multi-scale Hough transform it is the divisor for the angle resolution \texttt{theta}} -\end{description} - -The function implements the standard or standard multi-scale Hough transform algorithm for line detection. See \cvCppCross{HoughLinesP} for the code example. - - -\cvCppFunc{HoughLinesP} -Finds line segments in a binary image using the probabilistic Hough transform. - -\cvdefCpp{void HoughLinesP( Mat\& image, vector<Vec4i>\& lines,\par - double rho, double theta, int threshold,\par - double minLineLength=0, double maxLineGap=0 );} -\begin{description} -\cvarg{image}{The 8-bit, single-channel, binary source image.
The image may be modified by the function} -\cvarg{lines}{The output vector of lines. Each line is represented by a 4-element vector $(x_1, y_1, x_2, y_2)$, where $(x_1,y_1)$ and $(x_2, y_2)$ are the ending points of each line segment detected.} -\cvarg{rho}{Distance resolution of the accumulator in pixels} -\cvarg{theta}{Angle resolution of the accumulator in radians} -\cvarg{threshold}{The accumulator threshold parameter. Only those lines are returned that get enough votes ($>\texttt{threshold}$)} -\cvarg{minLineLength}{The minimum line length. Line segments shorter than that will be rejected} -\cvarg{maxLineGap}{The maximum allowed gap between points on the same line to link them.} -\end{description} - -The function implements the probabilistic Hough transform algorithm for line detection, described in \cite{Matas00}. Below is a line detection example: - -\begin{lstlisting} -/* This is a standalone program. Pass an image name as a first parameter -of the program. Switch between standard and probabilistic Hough transform -by changing "#if 1" to "#if 0" and back */ -#include <cv.h> -#include <highgui.h> -#include <math.h> - -using namespace cv; - -int main(int argc, char** argv) -{ - Mat src, dst, color_dst; - if( argc != 2 || !(src=imread(argv[1], 0)).data) - return -1; - - Canny( src, dst, 50, 200, 3 ); - cvtColor( dst, color_dst, CV_GRAY2BGR ); - -#if 0 - vector<Vec2f> lines; - HoughLines( dst, lines, 1, CV_PI/180, 100 ); - - for( size_t i = 0; i < lines.size(); i++ ) - { - float rho = lines[i][0]; - float theta = lines[i][1]; - double a = cos(theta), b = sin(theta); - double x0 = a*rho, y0 = b*rho; - Point pt1(cvRound(x0 + 1000*(-b)), - cvRound(y0 + 1000*(a))); - Point pt2(cvRound(x0 - 1000*(-b)), - cvRound(y0 - 1000*(a))); - line( color_dst, pt1, pt2, Scalar(0,0,255), 3, 8 ); - } -#else - vector<Vec4i> lines; - HoughLinesP( dst, lines, 1, CV_PI/180, 80, 30, 10 ); - for( size_t i = 0; i < lines.size(); i++ ) - { - line( color_dst, Point(lines[i][0], lines[i][1]), - Point(lines[i][2], lines[i][3]), Scalar(0,0,255), 3, 8 ); - } -#endif - namedWindow( "Source", 1 ); - imshow( "Source", src ); - - namedWindow( "Detected Lines", 1 ); - imshow( "Detected Lines", color_dst ); - - waitKey(0); - return 0; -} -\end{lstlisting} - - -This is the sample picture the function parameters have been tuned for: - -\includegraphics[width=0.5\textwidth]{pics/building.jpg} - -And this is the output of the above program in the case of probabilistic Hough transform - -\includegraphics[width=0.5\textwidth]{pics/houghp.png} - -\cvCppFunc{preCornerDetect} -Calculates the feature map for corner detection. - -\cvdefCpp{void preCornerDetect( const Mat\& src, Mat\& dst, int apertureSize,\par - int borderType=BORDER\_DEFAULT );} -\begin{description} -\cvarg{src}{The source single-channel 8-bit or floating-point image} -\cvarg{dst}{The output image; will have type \texttt{CV\_32F} and the same size as \texttt{src}} -\cvarg{apertureSize}{Aperture size of \cvCppCross{Sobel}} -\cvarg{borderType}{The pixel extrapolation method; see \cvCppCross{borderInterpolate}} -\end{description} - -The function calculates the complex spatial derivative-based function of the source image - -\[ -\texttt{dst} = (D_x \texttt{src})^2 \cdot D_{yy} \texttt{src} + (D_y \texttt{src})^2 \cdot D_{xx} \texttt{src} - 2 D_x \texttt{src} \cdot D_y \texttt{src} \cdot D_{xy} \texttt{src} -\] - -where $D_x$, $D_y$ are the first image derivatives, $D_{xx}$, $D_{yy}$ are the second image derivatives and $D_{xy}$ is the mixed derivative.
- -The corners can be found as local maximums of the function, as shown below: - -\begin{lstlisting} -Mat corners, dilated_corners; -preCornerDetect(image, corners, 3); -// dilation with 3x3 rectangular structuring element -dilate(corners, dilated_corners, Mat(), 1); -Mat corner_mask = corners == dilated_corners; -\end{lstlisting} - - \cvclass{KeyPoint} Data structure for salient point detectors @@ -1240,6 +478,9 @@ protected: }; \end{lstlisting} + + + \cvCppFunc{FeatureDetector::detect} Detect keypoints in an image. diff --git a/doc/cvaux_object_detection.tex b/doc/features2d_object_detection.tex similarity index 99% rename from doc/cvaux_object_detection.tex rename to doc/features2d_object_detection.tex index f673629..f6d511c 100644 --- a/doc/cvaux_object_detection.tex +++ b/doc/features2d_object_detection.tex @@ -401,4 +401,4 @@ for(i=0; i < imageKeypoints->total; i++) \end{lstlisting} -\fi \ No newline at end of file +\fi diff --git a/doc/cv_object_recognition.tex b/doc/features2d_object_recognition.tex similarity index 100% rename from doc/cv_object_recognition.tex rename to doc/features2d_object_recognition.tex diff --git a/doc/flann.tex b/doc/flann.tex new file mode 100644 index 0000000..57015a1 --- /dev/null +++ b/doc/flann.tex @@ -0,0 +1,253 @@ +\section{Fast Approximate Nearest Neighbor Search} + +\ifCpp + +This section documents OpenCV's interface to the FLANN\footnote{http://people.cs.ubc.ca/\~mariusm/flann} library. FLANN (Fast Library for Approximate Nearest Neighbors) is a library that +contains a collection of algorithms optimized for fast nearest neighbor search in large datasets and for high dimensional features. More +information about FLANN can be found in \cite{muja_flann_2009}. + +\ifplastex +\cvclass{cv::flann::Index_} +\else +\subsection{cv::flann::Index\_}\label{cvflann.Index} +\fi +The FLANN nearest neighbor index class. This class is templated with the type of elements for which the index is built. + +\begin{lstlisting} +namespace cv +{ +namespace flann +{ + template <typename T> + class Index_ + { + public: + Index_(const Mat& features, const IndexParams& params); + + ~Index_(); + + void knnSearch(const vector<T>& query, + vector<int>& indices, + vector<float>& dists, + int knn, + const SearchParams& params); + void knnSearch(const Mat& queries, + Mat& indices, + Mat& dists, + int knn, + const SearchParams& params); + + int radiusSearch(const vector<T>& query, + vector<int>& indices, + vector<float>& dists, + float radius, + const SearchParams& params); + int radiusSearch(const Mat& query, + Mat& indices, + Mat& dists, + float radius, + const SearchParams& params); + + void save(std::string filename); + + int veclen() const; + + int size() const; + + const IndexParams* getIndexParameters(); + }; + + typedef Index_<float> Index; + +} } // namespace cv::flann +\end{lstlisting} + +\ifplastex +\cvCppFunc{cv::flann::Index_::Index_} +\else +\subsection{cvflann::Index\_$$::Index\_}\label{cvflann.Index.Index} +\fi +Constructs a nearest neighbor search index for a given dataset. + +\cvdefCpp{Index\_::Index\_(const Mat\& features, const IndexParams\& params);} +\begin{description} +\cvarg{features}{ Matrix containing the features (points) to index. The size of the matrix is num\_features x feature\_dimensionality and +the data type of the elements in the matrix must coincide with the type of the index.} +\cvarg{params}{Structure containing the index parameters. The type of index that will be constructed depends on the type of this parameter.
+The possible parameter types are:}
+
+\begin{description}
+\cvarg{LinearIndexParams}{When passing an object of this type, the index will perform a linear, brute-force search.}
+\begin{lstlisting}
+struct LinearIndexParams : public IndexParams
+{
+};
+\end{lstlisting}
+
+\cvarg{KDTreeIndexParams}{When passing an object of this type, the index constructed will consist of a set of randomized kd-trees which will be searched in parallel.}
+\begin{lstlisting}
+struct KDTreeIndexParams : public IndexParams
+{
+    KDTreeIndexParams( int trees = 4 );
+};
+\end{lstlisting}
+\begin{description}
+\cvarg{trees}{The number of parallel kd-trees to use. Good values are in the range [1..16]}
+\end{description}
+
+\cvarg{KMeansIndexParams}{When passing an object of this type, the index constructed will be a hierarchical k-means tree.}
+\begin{lstlisting}
+struct KMeansIndexParams : public IndexParams
+{
+    KMeansIndexParams(
+        int branching = 32,
+        int iterations = 11,
+        flann_centers_init_t centers_init = CENTERS_RANDOM,
+        float cb_index = 0.2 );
+};
+\end{lstlisting}
+\begin{description}
+\cvarg{branching}{ The branching factor to use for the hierarchical k-means tree }
+\cvarg{iterations}{ The maximum number of iterations to use in the k-means clustering stage when building the k-means tree. A value of -1 used here means that the k-means clustering should be iterated until convergence}
+\cvarg{centers\_init}{The algorithm to use for selecting the initial centers when performing a k-means clustering step. The possible values are \texttt{CENTERS\_RANDOM} (picks the initial cluster centers randomly), \texttt{CENTERS\_GONZALES} (picks the initial centers using Gonzales' algorithm) and \texttt{CENTERS\_KMEANSPP} (picks the initial centers using the algorithm suggested in \cite{arthur_kmeanspp_2007})}
+\cvarg{cb\_index}{This parameter (cluster boundary index) influences the way exploration is performed in the hierarchical k-means tree. When \texttt{cb\_index} is zero, the next k-means domain to be explored is chosen to be the one with the closest center. A value greater than zero also takes into account the size of the domain.}
+\end{description}
+
+\cvarg{CompositeIndexParams}{When using a parameter object of this type, the index created combines the randomized kd-trees and the hierarchical k-means tree.}
+\begin{lstlisting}
+struct CompositeIndexParams : public IndexParams
+{
+    CompositeIndexParams(
+        int trees = 4,
+        int branching = 32,
+        int iterations = 11,
+        flann_centers_init_t centers_init = CENTERS_RANDOM,
+        float cb_index = 0.2 );
+};
+\end{lstlisting}
+
+\cvarg{AutotunedIndexParams}{When passing an object of this type, the index created is automatically tuned to offer the best performance, by choosing the optimal index type (randomized kd-trees, hierarchical k-means, linear) and parameters for the dataset provided.}
+\begin{lstlisting}
+struct AutotunedIndexParams : public IndexParams
+{
+    AutotunedIndexParams(
+        float target_precision = 0.9,
+        float build_weight = 0.01,
+        float memory_weight = 0,
+        float sample_fraction = 0.1 );
+};
+\end{lstlisting}
+\begin{description}
+\cvarg{target\_precision}{A number between 0 and 1 specifying the fraction of the approximate nearest-neighbor searches that must return the exact nearest neighbor. Using a higher value for this parameter gives more accurate results, but the search takes longer. The optimum value usually depends on the application.}
+
+\cvarg{build\_weight}{Specifies the importance of the index build time relative to the nearest-neighbor search time.
In some applications it's acceptable for the index build step to take a long time if the subsequent searches in the index can be performed very fast. In other applications it's required that the index be built as fast as possible, even if that leads to slightly longer search times.}
+
+\cvarg{memory\_weight}{Specifies the trade-off between time (index build time and search time) and memory used by the index. A value less than 1 gives more importance to the time spent, and a value greater than 1 gives more importance to the memory usage.}
+
+\cvarg{sample\_fraction}{A number between 0 and 1 indicating what fraction of the dataset to use in the automatic parameter configuration algorithm. Running the algorithm on the full dataset gives the most accurate results, but for very large datasets it can take longer than desired. In such cases, using just a fraction of the data helps speed up this algorithm while still giving good approximations of the optimum parameters.}
+\end{description}
+
+\cvarg{SavedIndexParams}{This object type is used for loading a previously saved index from the disk.}
+\begin{lstlisting}
+struct SavedIndexParams : public IndexParams
+{
+    SavedIndexParams( std::string filename );
+};
+\end{lstlisting}
+\begin{description}
+\cvarg{filename}{ The filename in which the index was saved. }
+\end{description}
+\end{description}
+\end{description}
+
+\ifplastex
+\cvCppFunc{cv::flann::Index_::knnSearch}
+\else
+\subsection{cv::flann::Index\_$$::knnSearch}\label{cvflann.Index.knnSearch}
+\fi
+Performs a K-nearest neighbor search for a given query point using the index.
+\cvdefCpp{
+void Index\_::knnSearch(const vector<ET>\& query, \par
+		vector<int>\& indices, \par
+		vector<DT>\& dists, \par
+		int knn, \par
+		const SearchParams\& params);\newline
+void Index\_::knnSearch(const Mat\& queries,\par
+        Mat\& indices, Mat\& dists,\par
+        int knn, const SearchParams\& params);}
+\begin{description}
+\cvarg{query}{The query point}
+\cvarg{indices}{Vector that will contain the indices of the K-nearest neighbors found. It must have at least \texttt{knn} elements.}
+\cvarg{dists}{Vector that will contain the distances to the K-nearest neighbors found. It must have at least \texttt{knn} elements.}
+\cvarg{knn}{Number of nearest neighbors to search for.}
+\cvarg{params}{Search parameters}
+\begin{lstlisting}
+  struct SearchParams {
+	  SearchParams(int checks = 32);
+  };
+\end{lstlisting}
+\begin{description}
+\cvarg{checks}{ The number of times the tree(s) in the index should be recursively traversed. A higher value for this parameter would give better search precision, but also take more time. If automatic configuration was used when the index was created, the number of checks required to achieve the specified precision was also computed, in which case this parameter is ignored.}
+\end{description}
+\end{description}
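+
+For illustration, below is a minimal usage sketch (not part of the original reference; the header name, the random test data and the parameter values are assumptions):
+
+\begin{lstlisting}
+// Build a kd-tree index over random 2D points and run a
+// 3-nearest-neighbor query for one point.
+#include "opencv2/core/core.hpp"
+#include "opencv2/flann/flann.hpp"  // header name may differ per version
+
+using namespace cv;
+
+int main()
+{
+    Mat features(1000, 2, CV_32F);   // one feature (point) per row
+    randu(features, Scalar(0), Scalar(100));
+
+    flann::Index index(features, flann::KDTreeIndexParams(4));
+
+    vector<float> query(2);
+    query[0] = 50.f; query[1] = 50.f;
+
+    int knn = 3;
+    vector<int> indices(knn);        // at least knn elements
+    vector<float> dists(knn);
+    index.knnSearch(query, indices, dists, knn, flann::SearchParams(32));
+    return 0;
+}
+\end{lstlisting}
+
+\ifplastex
+\cvCppFunc{cv::flann::Index_::radiusSearch}
+\else
+\subsection{cv::flann::Index\_$$::radiusSearch}\label{cvflann.Index.radiusSearch}
+\fi
+Performs a radius nearest neighbor search for a given query point.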
+\cvdefCpp{
+int Index\_::radiusSearch(const vector<ET>\& query, \par
+		vector<int>\& indices, \par
+		vector<DT>\& dists, \par
+		float radius, \par
+		const SearchParams\& params);\newline
+int Index\_::radiusSearch(const Mat\& query, \par
+		Mat\& indices, \par
+		Mat\& dists, \par
+		float radius, \par
+		const SearchParams\& params);}
+\begin{description}
+\cvarg{query}{The query point}
+\cvarg{indices}{Vector that will contain the indices of the points found within the search radius in decreasing order of the distance to the query point. If the number of neighbors in the search radius is bigger than the size of this vector, the ones that don't fit in the vector are ignored. }
+\cvarg{dists}{Vector that will contain the distances to the points found within the search radius}
+\cvarg{radius}{The search radius}
+\cvarg{params}{Search parameters}
+\end{description}
+
+\ifplastex
+\cvCppFunc{cv::flann::Index_::save}
+\else
+\subsection{cv::flann::Index\_$$::save}\label{cvflann.Index.save}
+\fi
+
+Saves the index to a file.
+\cvdefCpp{void Index\_::save(std::string filename);}
+\begin{description}
+\cvarg{filename}{The file to save the index to}
+\end{description}
+
+\ifplastex
+\cvCppFunc{cv::flann::Index_::getIndexParameters}
+\else
+\subsection{cv::flann::Index\_$$::getIndexParameters}\label{cvflann.Index.getIndexParameters}
+\fi
+
+Returns the index parameters. This is useful in the case of auto-tuned indices, when the parameters computed can be retrieved using this method.
+\cvdefCpp{const IndexParams* Index\_::getIndexParameters();}
+
+\section{Clustering}
+
+\cvCppFunc{cv::flann::hierarchicalClustering}
+Clusters the given points by constructing a hierarchical k-means tree and choosing a cut in the tree that minimizes the clusters' variance.
+\cvdefCpp{int hierarchicalClustering(const Mat\& features, Mat\& centers,\par
+                                    const KMeansIndexParams\& params);}
+\begin{description}
+\cvarg{features}{The points to be clustered. The matrix must have elements of type ET.}
+\cvarg{centers}{The centers of the clusters obtained. The matrix must have type DT. The number of rows in this matrix represents the number of clusters desired. However, because of the way the cut in the hierarchical tree is chosen, the number of clusters computed will be the highest number of the form \texttt{(branching-1)*k+1} that's lower than the number of clusters desired, where \texttt{branching} is the tree's branching factor (see the description of the KMeansIndexParams).}
+\cvarg{params}{Parameters used in the construction of the hierarchical k-means tree}
+\end{description}
+The function returns the number of clusters computed.
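+
+For illustration, below is a minimal clustering sketch (not part of the original reference; the header name, the random data and the requested cluster count are assumptions):
+
+\begin{lstlisting}
+// Cluster 1000 random 2D points with a branching factor of 32; the
+// number of clusters actually computed is returned by the function.
+#include "opencv2/core/core.hpp"
+#include "opencv2/flann/flann.hpp"  // header name may differ per version
+
+using namespace cv;
+
+int main()
+{
+    Mat features(1000, 2, CV_32F);
+    randu(features, Scalar(0), Scalar(100));
+
+    // Request 100 clusters; the count actually computed is the largest
+    // number of the form (branching-1)*k+1 below the request.
+    Mat centers(100, 2, CV_32F);
+    int found = flann::hierarchicalClustering(features, centers,
+        flann::KMeansIndexParams(32, 11));
+    centers = centers.rowRange(0, found);
+    return 0;
+}
+\end{lstlisting}
+
+\fi
diff --git a/doc/HighGui.tex b/doc/highgui.tex
similarity index 100%
rename from doc/HighGui.tex
rename to doc/highgui.tex
diff --git a/doc/HighGui_Qt.tex b/doc/highgui_qt.tex
similarity index 100%
rename from doc/HighGui_Qt.tex
rename to doc/highgui_qt.tex
diff --git a/doc/imgproc_feature_detection.tex b/doc/imgproc_feature_detection.tex
new file mode 100644
index 0000000..1221ab0
--- /dev/null
+++ b/doc/imgproc_feature_detection.tex
@@ -0,0 +1,773 @@
+%[TODO: from Feature Detection]
+\section{Feature Detection}
+
+\ifCPy
+
+\cvCPyFunc{Canny}
+Implements the Canny algorithm for edge detection.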
+
+\cvdefC{
+void cvCanny(\par const CvArr* image,
+\par CvArr* edges,
+\par double threshold1,
+\par double threshold2,
+\par int aperture\_size=3 );
+}\cvdefPy{Canny(image,edges,threshold1,threshold2,aperture\_size=3)-> None}
+\begin{description}
+\cvarg{image}{Single-channel input image}
+\cvarg{edges}{Single-channel image to store the edges found by the function}
+\cvarg{threshold1}{The first threshold}
+\cvarg{threshold2}{The second threshold}
+\cvarg{aperture\_size}{Aperture parameter for the Sobel operator (see \cvCPyCross{Sobel})}
+\end{description}
+
+The function finds the edges in the input image \texttt{image} and marks them in the output image \texttt{edges} using the Canny algorithm. The smallest value between \texttt{threshold1} and \texttt{threshold2} is used for edge linking; the largest value is used to find the initial segments of strong edges.
+
+\cvCPyFunc{CornerEigenValsAndVecs}
+Calculates eigenvalues and eigenvectors of image blocks for corner detection.
+
+\cvdefC{
+void cvCornerEigenValsAndVecs( \par const CvArr* image,\par CvArr* eigenvv,\par int blockSize,\par int aperture\_size=3 );
+
+}\cvdefPy{CornerEigenValsAndVecs(image,eigenvv,blockSize,aperture\_size=3)-> None}
+
+\begin{description}
+\cvarg{image}{Input image}
+\cvarg{eigenvv}{Image to store the results. It must be 6 times wider than the input image}
+\cvarg{blockSize}{Neighborhood size (see discussion)}
+\cvarg{aperture\_size}{Aperture parameter for the Sobel operator (see \cvCPyCross{Sobel})}
+\end{description}
+
+For every pixel, the function \texttt{cvCornerEigenValsAndVecs} considers a $\texttt{blockSize} \times \texttt{blockSize}$ neighborhood $S(p)$. It calculates the covariation matrix of derivatives over the neighborhood as:
+
+\[
+M = \begin{bmatrix}
+\sum_{S(p)}(dI/dx)^2 & \sum_{S(p)}(dI/dx \cdot dI/dy) \\
+\sum_{S(p)}(dI/dx \cdot dI/dy) & \sum_{S(p)}(dI/dy)^2
+\end{bmatrix}
+\]
+
+After that it finds eigenvectors and eigenvalues of the matrix and stores them into the destination image in the form
+$(\lambda_1, \lambda_2, x_1, y_1, x_2, y_2)$ where
+\begin{description}
+\item[$\lambda_1, \lambda_2$]are the eigenvalues of $M$; not sorted
+\item[$x_1, y_1$]are the eigenvectors corresponding to $\lambda_1$
+\item[$x_2, y_2$]are the eigenvectors corresponding to $\lambda_2$
+\end{description}
+
+\cvCPyFunc{CornerHarris}
+Harris edge detector.
+
+\cvdefC{
+void cvCornerHarris(
+\par const CvArr* image,
+\par CvArr* harris\_dst,
+\par int blockSize,
+\par int aperture\_size=3,
+\par double k=0.04 );
+}
+\cvdefPy{CornerHarris(image,harris\_dst,blockSize,aperture\_size=3,k=0.04)-> None}
+
+\begin{description}
+\cvarg{image}{Input image}
+\cvarg{harris\_dst}{Image to store the Harris detector responses. Should have the same size as \texttt{image}}
+\cvarg{blockSize}{Neighborhood size (see the discussion of \cvCPyCross{CornerEigenValsAndVecs})}
+\cvarg{aperture\_size}{Aperture parameter for the Sobel operator (see \cvCPyCross{Sobel}).}
+% format. In the case of floating-point input format this parameter is the number of the fixed float filter used for differencing
+\cvarg{k}{Harris detector free parameter. See the formula below}
+\end{description}
+
+The function runs the Harris edge detector on the image. Similarly to \cvCPyCross{CornerMinEigenVal} and \cvCPyCross{CornerEigenValsAndVecs}, for each pixel it calculates a $2\times2$ gradient covariation matrix $M$ over a $\texttt{blockSize} \times \texttt{blockSize}$ neighborhood. Then, it stores
+
+\[
+\det(M) - k \, \mathrm{trace}(M)^2
+\]
+
+to the destination image.
Corners in the image can be found as the local maxima of the destination image.
+
+\cvCPyFunc{CornerMinEigenVal}
+Calculates the minimal eigenvalue of gradient matrices for corner detection.
+
+\cvdefC{
+void cvCornerMinEigenVal(
+\par const CvArr* image,
+\par CvArr* eigenval,
+\par int blockSize,
+\par int aperture\_size=3 );
+}\cvdefPy{CornerMinEigenVal(image,eigenval,blockSize,aperture\_size=3)-> None}
+\begin{description}
+\cvarg{image}{Input image}
+\cvarg{eigenval}{Image to store the minimal eigenvalues. Should have the same size as \texttt{image}}
+\cvarg{blockSize}{Neighborhood size (see the discussion of \cvCPyCross{CornerEigenValsAndVecs})}
+\cvarg{aperture\_size}{Aperture parameter for the Sobel operator (see \cvCPyCross{Sobel}).}
+% format. In the case of floating-point input format this parameter is the number of the fixed float filter used for differencing
+\end{description}
+
+The function is similar to \cvCPyCross{CornerEigenValsAndVecs} but it calculates and stores only the minimal eigenvalue of the derivative covariation matrix for every pixel, i.e. $\min(\lambda_1, \lambda_2)$ in terms of the previous function.
+
+\cvCPyFunc{FindCornerSubPix}
+Refines the corner locations.
+
+\cvdefC{
+void cvFindCornerSubPix(
+\par const CvArr* image,
+\par CvPoint2D32f* corners,
+\par int count,
+\par CvSize win,
+\par CvSize zero\_zone,
+\par CvTermCriteria criteria );
+}\cvdefPy{FindCornerSubPix(image,corners,win,zero\_zone,criteria)-> corners}
+
+\begin{description}
+\cvarg{image}{Input image}
+\ifC
+\cvarg{corners}{Initial coordinates of the input corners; refined coordinates on output}
+\cvarg{count}{Number of corners}
+\fi
+\ifPy
+\cvarg{corners}{Initial coordinates of the input corners as a list of (x, y) pairs}
+\fi
+\cvarg{win}{Half of the side length of the search window. For example, if \texttt{win}=(5,5), then a $5*2+1 \times 5*2+1 = 11 \times 11$ search window would be used}
+\cvarg{zero\_zone}{Half of the size of the dead region in the middle of the search zone over which the summation in the formula below is not done. It is used sometimes to avoid possible singularities of the autocorrelation matrix. The value of (-1,-1) indicates that there is no such size}
+\cvarg{criteria}{Criteria for termination of the iterative process of corner refinement. That is, the process of corner position refinement stops either after a certain number of iterations or when a required accuracy is achieved. The \texttt{criteria} may specify either or both of the maximum number of iterations and the required accuracy}
+\end{description}
+
+The function iterates to find the sub-pixel accurate location of corners, or radial saddle points, as shown in the picture below.
+\ifPy
+It returns the refined coordinates as a list of (x, y) pairs.
+\fi
+
+\includegraphics[width=1.0\textwidth]{pics/cornersubpix.png}
+
+The sub-pixel accurate corner locator is based on the observation that every vector from the center $q$ to a point $p$ located within a neighborhood of $q$ is orthogonal to the image gradient at $p$ subject to image and measurement noise. Consider the expression:
+
+\[
+\epsilon_i = {DI_{p_i}}^T \cdot (q - p_i)
+\]
+
+where ${DI_{p_i}}$ is the image gradient at one of the points $p_i$ in a neighborhood of $q$. The value of $q$ is to be found such that $\epsilon_i$ is minimized.
A system of equations may be set up with $\epsilon_i$ set to zero:

+\[
+\sum_i(DI_{p_i} \cdot {DI_{p_i}}^T) q = \sum_i(DI_{p_i} \cdot {DI_{p_i}}^T \cdot p_i)
+\]
+
+where the gradients are summed within a neighborhood ("search window") of $q$. Calling the first gradient term $G$ and the second gradient term $b$ gives:
+
+\[
+q = G^{-1} \cdot b
+\]
+
+The algorithm sets the center of the neighborhood window at this new center $q$ and then iterates until the center moves by less than a set threshold.
+
+\cvCPyFunc{GoodFeaturesToTrack}
+Determines strong corners on an image.
+
+\cvdefC{
+void cvGoodFeaturesToTrack(
+\par const CvArr* image,
+\par CvArr* eigImage, CvArr* tempImage,
+\par CvPoint2D32f* corners,
+\par int* cornerCount,
+\par double qualityLevel,
+\par double minDistance,
+\par const CvArr* mask=NULL,
+\par int blockSize=3,
+\par int useHarris=0,
+\par double k=0.04 );
+}
+\cvdefPy{GoodFeaturesToTrack(image,eigImage,tempImage,cornerCount,qualityLevel,minDistance,mask=NULL,blockSize=3,useHarris=0,k=0.04)-> corners}
+
+\begin{description}
+\cvarg{image}{The source 8-bit or floating-point 32-bit, single-channel image}
+\cvarg{eigImage}{Temporary floating-point 32-bit image, the same size as \texttt{image}}
+\cvarg{tempImage}{Another temporary image, the same size and format as \texttt{eigImage}}
+\ifC
+\cvarg{corners}{Output parameter; detected corners}
+\cvarg{cornerCount}{Output parameter; number of detected corners}
+\else
+\cvarg{cornerCount}{number of corners to detect}
+\fi
+\cvarg{qualityLevel}{Multiplier for the max/min eigenvalue; specifies the minimal accepted quality of image corners}
+\cvarg{minDistance}{Limit, specifying the minimum possible distance between the returned corners; Euclidean distance is used}
+\cvarg{mask}{Region of interest. The function selects points either in the specified region or in the whole image if the mask is NULL}
+\cvarg{blockSize}{Size of the averaging block, passed to the underlying \cvCPyCross{CornerMinEigenVal} or \cvCPyCross{CornerHarris} used by the function}
+\cvarg{useHarris}{If nonzero, Harris operator (\cvCPyCross{CornerHarris}) is used instead of default \cvCPyCross{CornerMinEigenVal}}
+\cvarg{k}{Free parameter of Harris detector; used only if ($\texttt{useHarris} != 0$)}
+\end{description}
+
+The function finds corners with large eigenvalues in the image. The function first calculates the minimal
+eigenvalue for every source image pixel using the \cvCPyCross{CornerMinEigenVal}
+function and stores them in \texttt{eigImage}. Then it performs
+non-maxima suppression (only the local maxima in a $3\times 3$ neighborhood
+are retained). The next step rejects the corners with the minimal
+eigenvalue less than $\texttt{qualityLevel} \cdot \max(\texttt{eigImage}(x,y))$.
+Finally, the function ensures that the distance between any two corners is not smaller than \texttt{minDistance}. The weaker corners (with a smaller min eigenvalue) that are too close to the stronger corners are rejected.
+
+Note that if the function is called with different values \texttt{A} and \texttt{B} of the parameter \texttt{qualityLevel}, and \texttt{A} > \texttt{B}, the array of returned corners with \texttt{qualityLevel=A} will be the prefix of the output corners array with \texttt{qualityLevel=B}.
+
+\cvCPyFunc{HoughLines2}
+Finds lines in a binary image using a Hough transform.
+
+\cvdefC{
+CvSeq* cvHoughLines2( \par CvArr* image,\par void* storage,\par int method,\par double rho,\par double theta,\par int threshold,\par double param1=0,\par double param2=0 );
+}
+\cvdefPy{HoughLines2(image,storage,method,rho,theta,threshold,param1=0,param2=0)-> lines}
+
+\begin{description}
+\cvarg{image}{The 8-bit, single-channel, binary source image. In the case of a probabilistic method, the image is modified by the function}
+\cvarg{storage}{The storage for the lines that are detected. It can
+be a memory storage (in this case a sequence of lines is created in
+the storage and returned by the function) or a single row/single column
+matrix (CvMat*) of a particular type (see below) to which the lines'
+parameters are written. The matrix header is modified by the function
+so its \texttt{cols} or \texttt{rows} will contain the number of lines
+detected. If \texttt{storage} is a matrix and the actual number
+of lines exceeds the matrix size, the maximum possible number of lines
+is returned (in the case of the standard Hough transform the lines are sorted
+by the accumulator value)}
+\cvarg{method}{The Hough transform variant, one of the following:
+\begin{description}
+  \cvarg{CV\_HOUGH\_STANDARD}{classical or standard Hough transform. Every line is represented by two floating-point numbers $(\rho, \theta)$, where $\rho$ is the distance between the point $(0,0)$ and the line, and $\theta$ is the angle between the x-axis and the normal to the line. Thus, the matrix must be (the created sequence will be) of \texttt{CV\_32FC2} type}
+  \cvarg{CV\_HOUGH\_PROBABILISTIC}{probabilistic Hough transform (more efficient when the picture contains a few long linear segments). It returns line segments rather than the whole line. Each segment is represented by starting and ending points, and the matrix must be (the created sequence will be) of \texttt{CV\_32SC4} type}
+  \cvarg{CV\_HOUGH\_MULTI\_SCALE}{multi-scale variant of the classical Hough transform. The lines are encoded the same way as \texttt{CV\_HOUGH\_STANDARD}}
+\end{description}}
+\cvarg{rho}{Distance resolution in pixel-related units}
+\cvarg{theta}{Angle resolution measured in radians}
+\cvarg{threshold}{Threshold parameter. A line is returned by the function if the corresponding accumulator value is greater than \texttt{threshold}}
+\cvarg{param1}{The first method-dependent parameter:
+\begin{itemize}
+  \item For the classical Hough transform it is not used (0).
+  \item For the probabilistic Hough transform it is the minimum line length.
+  \item For the multi-scale Hough transform it is the divisor for the distance resolution $\rho$. (The coarse distance resolution will be $\rho$ and the accurate resolution will be $(\rho / \texttt{param1})$).
+\end{itemize}}
+\cvarg{param2}{The second method-dependent parameter:
+\begin{itemize}
+  \item For the classical Hough transform it is not used (0).
+  \item For the probabilistic Hough transform it is the maximum gap between line segments lying on the same line to treat them as a single line segment (i.e. to join them).
+  \item For the multi-scale Hough transform it is the divisor for the angle resolution $\theta$. (The coarse angle resolution will be $\theta$ and the accurate resolution will be $(\theta / \texttt{param2})$).
+\end{itemize}}
+\end{description}
+
+The function implements a few variants of the Hough transform for line detection.
+
+\ifC
+\textbf{Example. Detecting lines with the Hough transform.}
+\begin{lstlisting}
+/* This is a standalone program.
Pass an image name as a first parameter
+of the program. Switch between standard and probabilistic Hough transform
+by changing "#if 1" to "#if 0" and back */
+#include <cv.h>
+#include <highgui.h>
+#include <math.h>
+
+int main(int argc, char** argv)
+{
+    IplImage* src;
+    if( argc == 2 && (src=cvLoadImage(argv[1], 0))!= 0)
+    {
+        IplImage* dst = cvCreateImage( cvGetSize(src), 8, 1 );
+        IplImage* color_dst = cvCreateImage( cvGetSize(src), 8, 3 );
+        CvMemStorage* storage = cvCreateMemStorage(0);
+        CvSeq* lines = 0;
+        int i;
+        cvCanny( src, dst, 50, 200, 3 );
+        cvCvtColor( dst, color_dst, CV_GRAY2BGR );
+#if 1
+        lines = cvHoughLines2( dst,
+                               storage,
+                               CV_HOUGH_STANDARD,
+                               1,
+                               CV_PI/180,
+                               100,
+                               0,
+                               0 );
+
+        for( i = 0; i < MIN(lines->total,100); i++ )
+        {
+            float* line = (float*)cvGetSeqElem(lines,i);
+            float rho = line[0];
+            float theta = line[1];
+            CvPoint pt1, pt2;
+            double a = cos(theta), b = sin(theta);
+            double x0 = a*rho, y0 = b*rho;
+            pt1.x = cvRound(x0 + 1000*(-b));
+            pt1.y = cvRound(y0 + 1000*(a));
+            pt2.x = cvRound(x0 - 1000*(-b));
+            pt2.y = cvRound(y0 - 1000*(a));
+            cvLine( color_dst, pt1, pt2, CV_RGB(255,0,0), 3, 8 );
+        }
+#else
+        lines = cvHoughLines2( dst,
+                               storage,
+                               CV_HOUGH_PROBABILISTIC,
+                               1,
+                               CV_PI/180,
+                               80,
+                               30,
+                               10 );
+        for( i = 0; i < lines->total; i++ )
+        {
+            CvPoint* line = (CvPoint*)cvGetSeqElem(lines,i);
+            cvLine( color_dst, line[0], line[1], CV_RGB(255,0,0), 3, 8 );
+        }
+#endif
+        cvNamedWindow( "Source", 1 );
+        cvShowImage( "Source", src );
+
+        cvNamedWindow( "Hough", 1 );
+        cvShowImage( "Hough", color_dst );
+
+        cvWaitKey(0);
+    }
+}
+\end{lstlisting}
+
+This is the sample picture the function parameters have been tuned for:
+
+\includegraphics[width=0.5\textwidth]{pics/building.jpg}
+
+And this is the output of the above program in the case of the probabilistic Hough transform (\texttt{\#if 0} case):
+
+\includegraphics[width=0.5\textwidth]{pics/houghp.png}
+\fi
+
+
+\cvCPyFunc{PreCornerDetect}
+Calculates the feature map for corner detection.
+
+\cvdefC{
+void cvPreCornerDetect(
+\par const CvArr* image,
+\par CvArr* corners,
+\par int apertureSize=3 );
+}
+\cvdefPy{PreCornerDetect(image,corners,apertureSize=3)-> None}
+\begin{description}
+\cvarg{image}{Input image}
+\cvarg{corners}{Image to store the corner candidates}
+\cvarg{apertureSize}{Aperture parameter for the Sobel operator (see \cvCPyCross{Sobel})}
+\end{description}
+
+The function calculates the function
+
+\[
+D_x^2 D_{yy} + D_y^2 D_{xx} - 2 D_x D_y D_{xy}
+\]
+
+where $D_?$ denotes one of the first image derivatives and $D_{??}$ denotes a second image derivative.
+
+The corners can be found as local maxima of the function, as shown below:
+
+\ifC
+\begin{lstlisting}
+// assume that the image is floating-point
+IplImage* corners = cvCloneImage(image);
+IplImage* dilated_corners = cvCloneImage(image);
+IplImage* corner_mask = cvCreateImage( cvGetSize(image), 8, 1 );
+cvPreCornerDetect( image, corners, 3 );
+cvDilate( corners, dilated_corners, 0, 1 );
+cvSub( corners, dilated_corners, corners );
+cvCmpS( corners, 0, corner_mask, CV_CMP_GE );
+cvReleaseImage( &corners );
+cvReleaseImage( &dilated_corners );
+\end{lstlisting}
+
+\else
+\lstinputlisting{python_fragments/precornerdetect.py}
+\fi
+
+\ifC
+\cvCPyFunc{SampleLine}
+Reads a raster line into a buffer.
+
+\cvdefC{
+int cvSampleLine(
+\par const CvArr* image,
+\par CvPoint pt1,
+\par CvPoint pt2,
+\par void* buffer,
+\par int connectivity=8 );
+}
+
+\begin{description}
+\cvarg{image}{Image to sample the line from}
+\cvarg{pt1}{Starting line point}
+\cvarg{pt2}{Ending line point}
+\cvarg{buffer}{Buffer to store the line points; must have enough size to store
+$\max( |\texttt{pt2.x} - \texttt{pt1.x}|+1, |\texttt{pt2.y} - \texttt{pt1.y}|+1 )$
+points in the case of an 8-connected line and
+$ (|\texttt{pt2.x}-\texttt{pt1.x}|+|\texttt{pt2.y}-\texttt{pt1.y}|+1) $
+in the case of a 4-connected line}
+\cvarg{connectivity}{The line connectivity, 4 or 8}
+\end{description}
+
+The function implements a particular application of line iterators. It reads all of the image points lying on the line between \texttt{pt1} and \texttt{pt2}, including the end points, and stores them into the buffer.
+
+\fi
+
+\fi
+
+
+\ifCpp
+
+\cvCppFunc{Canny}
+Finds edges in an image using the Canny algorithm.
+
+\cvdefCpp{void Canny( const Mat\& image, Mat\& edges,\par
+            double threshold1, double threshold2,\par
+            int apertureSize=3, bool L2gradient=false );}
+\begin{description}
+\cvarg{image}{Single-channel 8-bit input image}
+\cvarg{edges}{The output edge map. It will have the same size and the same type as \texttt{image}}
+\cvarg{threshold1}{The first threshold for the hysteresis procedure}
+\cvarg{threshold2}{The second threshold for the hysteresis procedure}
+\cvarg{apertureSize}{Aperture size for the \cvCppCross{Sobel} operator}
+\cvarg{L2gradient}{Indicates whether the more accurate $L_2$ norm $=\sqrt{(dI/dx)^2 + (dI/dy)^2}$ should be used to compute the image gradient magnitude (\texttt{L2gradient=true}), or whether a faster default $L_1$ norm $=|dI/dx|+|dI/dy|$ is enough (\texttt{L2gradient=false})}
+\end{description}
+
+The function finds edges in the input image \texttt{image} and marks them in the output map \texttt{edges} using the Canny algorithm. The smallest value between \texttt{threshold1} and \texttt{threshold2} is used for edge linking; the largest value is used to find the initial segments of strong edges. See
+\url{http://en.wikipedia.org/wiki/Canny_edge_detector}.
+
+\cvCppFunc{cornerEigenValsAndVecs}
+Calculates eigenvalues and eigenvectors of image blocks for corner detection.
+
+\cvdefCpp{void cornerEigenValsAndVecs( const Mat\& src, Mat\& dst,\par
+                            int blockSize, int apertureSize,\par
+                            int borderType=BORDER\_DEFAULT );}
+\begin{description}
+\cvarg{src}{Input single-channel 8-bit or floating-point image}
+\cvarg{dst}{Image to store the results. It will have the same size as \texttt{src} and the type \texttt{CV\_32FC(6)}}
+\cvarg{blockSize}{Neighborhood size (see discussion)}
+\cvarg{apertureSize}{Aperture parameter for the \cvCppCross{Sobel} operator}
+\cvarg{borderType}{Pixel extrapolation method; see \cvCppCross{borderInterpolate}}
+\end{description}
+
+For every pixel $p$, the function \texttt{cornerEigenValsAndVecs} considers a \texttt{blockSize} $\times$ \texttt{blockSize} neighborhood $S(p)$. It calculates the covariation matrix of derivatives over the neighborhood as:
+
+\[
+M = \begin{bmatrix}
+\sum_{S(p)}(dI/dx)^2 & \sum_{S(p)}(dI/dx \cdot dI/dy) \\
+\sum_{S(p)}(dI/dx \cdot dI/dy) & \sum_{S(p)}(dI/dy)^2
+\end{bmatrix}
+\]
+
+where the derivatives are computed using the \cvCppCross{Sobel} operator.
+
+After that it finds eigenvectors and eigenvalues of $M$ and stores them into the destination image in the form
+$(\lambda_1, \lambda_2, x_1, y_1, x_2, y_2)$ where
+\begin{description}
+\item[$\lambda_1, \lambda_2$]are the eigenvalues of $M$; not sorted
+\item[$x_1, y_1$]are the eigenvectors corresponding to $\lambda_1$
+\item[$x_2, y_2$]are the eigenvectors corresponding to $\lambda_2$
+\end{description}
+
+The output of the function can be used for robust edge or corner detection.
+
+See also: \cvCppCross{cornerMinEigenVal}, \cvCppCross{cornerHarris}, \cvCppCross{preCornerDetect}
+
+\cvCppFunc{cornerHarris}
+Harris edge detector.
+
+\cvdefCpp{void cornerHarris( const Mat\& src, Mat\& dst, int blockSize,\par
+                  int apertureSize, double k,\par
+                  int borderType=BORDER\_DEFAULT );}
+\begin{description}
+\cvarg{src}{Input single-channel 8-bit or floating-point image}
+\cvarg{dst}{Image to store the Harris detector responses; will have type \texttt{CV\_32FC1} and the same size as \texttt{src}}
+\cvarg{blockSize}{Neighborhood size (see the discussion of \cvCppCross{cornerEigenValsAndVecs})}
+\cvarg{apertureSize}{Aperture parameter for the \cvCppCross{Sobel} operator}
+\cvarg{k}{Harris detector free parameter. See the formula below}
+\cvarg{borderType}{Pixel extrapolation method; see \cvCppCross{borderInterpolate}}
+\end{description}
+
+The function runs the Harris edge detector on the image. Similarly to \cvCppCross{cornerMinEigenVal} and \cvCppCross{cornerEigenValsAndVecs}, for each pixel $(x, y)$ it calculates a $2\times2$ gradient covariation matrix $M^{(x,y)}$ over a $\texttt{blockSize} \times \texttt{blockSize}$ neighborhood. Then, it computes the following characteristic:
+
+\[
+\texttt{dst}(x,y) = \mathrm{det} M^{(x,y)} - k \cdot \left(\mathrm{tr} M^{(x,y)}\right)^2
+\]
+
+Corners in the image can be found as the local maxima of this response map.
+
+\cvCppFunc{cornerMinEigenVal}
+Calculates the minimal eigenvalue of gradient matrices for corner detection.
+
+\cvdefCpp{void cornerMinEigenVal( const Mat\& src, Mat\& dst,\par
+                      int blockSize, int apertureSize=3,\par
+                      int borderType=BORDER\_DEFAULT );}
+\begin{description}
+\cvarg{src}{Input single-channel 8-bit or floating-point image}
+\cvarg{dst}{Image to store the minimal eigenvalues; will have type \texttt{CV\_32FC1} and the same size as \texttt{src}}
+\cvarg{blockSize}{Neighborhood size (see the discussion of \cvCppCross{cornerEigenValsAndVecs})}
+\cvarg{apertureSize}{Aperture parameter for the \cvCppCross{Sobel} operator}
+\cvarg{borderType}{Pixel extrapolation method; see \cvCppCross{borderInterpolate}}
+\end{description}
+
+The function is similar to \cvCppCross{cornerEigenValsAndVecs} but it calculates and stores only the minimal eigenvalue of the covariation matrix of derivatives, i.e. $\min(\lambda_1, \lambda_2)$ in terms of the formulae in the \cvCppCross{cornerEigenValsAndVecs} description.
+
+\cvCppFunc{cornerSubPix}
+Refines the corner locations.
+
+\cvdefCpp{void cornerSubPix( const Mat\& image, vector<Point2f>\& corners,\par
+                 Size winSize, Size zeroZone,\par
+                 TermCriteria criteria );}
+\begin{description}
+\cvarg{image}{Input image}
+\cvarg{corners}{Initial coordinates of the input corners; refined coordinates on output}
+\cvarg{winSize}{Half of the side length of the search window.
For example, if \texttt{winSize=Size(5,5)}, then a $5*2+1 \times 5*2+1 = 11 \times 11$ search window would be used}
+\cvarg{zeroZone}{Half of the size of the dead region in the middle of the search zone over which the summation in the formula below is not done. It is used sometimes to avoid possible singularities of the autocorrelation matrix. The value of (-1,-1) indicates that there is no such size}
+\cvarg{criteria}{Criteria for termination of the iterative process of corner refinement. That is, the process of corner position refinement stops either after a certain number of iterations or when a required accuracy is achieved. The \texttt{criteria} may specify either or both of the maximum number of iterations and the required accuracy}
+\end{description}
+
+The function iterates to find the sub-pixel accurate location of corners, or radial saddle points, as shown in the picture below.
+
+\includegraphics[width=1.0\textwidth]{pics/cornersubpix.png}
+
+The sub-pixel accurate corner locator is based on the observation that every vector from the center $q$ to a point $p$ located within a neighborhood of $q$ is orthogonal to the image gradient at $p$ subject to image and measurement noise. Consider the expression:
+
+\[
+\epsilon_i = {DI_{p_i}}^T \cdot (q - p_i)
+\]
+
+where ${DI_{p_i}}$ is the image gradient at one of the points $p_i$ in a neighborhood of $q$. The value of $q$ is to be found such that $\epsilon_i$ is minimized. A system of equations may be set up with $\epsilon_i$ set to zero:
+
+\[
+\sum_i(DI_{p_i} \cdot {DI_{p_i}}^T) q = \sum_i(DI_{p_i} \cdot {DI_{p_i}}^T \cdot p_i)
+\]
+
+where the gradients are summed within a neighborhood ("search window") of $q$. Calling the first gradient term $G$ and the second gradient term $b$ gives:
+
+\[
+q = G^{-1} \cdot b
+\]
+
+The algorithm sets the center of the neighborhood window at this new center $q$ and then iterates until the center moves by less than a set threshold.
+
+
+\cvCppFunc{goodFeaturesToTrack}
+Determines strong corners on an image.
+
+\cvdefCpp{void goodFeaturesToTrack( const Mat\& image, vector<Point2f>\& corners,\par
+                        int maxCorners, double qualityLevel, double minDistance,\par
+                        const Mat\& mask=Mat(), int blockSize=3,\par
+                        bool useHarrisDetector=false, double k=0.04 );}
+\begin{description}
+\cvarg{image}{The input 8-bit or floating-point 32-bit, single-channel image}
+\cvarg{corners}{The output vector of detected corners}
+\cvarg{maxCorners}{The maximum number of corners to return. If more corners are found than \texttt{maxCorners}, the strongest of them will be returned}
+\cvarg{qualityLevel}{Characterizes the minimal accepted quality of image corners; the value of the parameter is multiplied by the best corner quality measure (which is the min eigenvalue, see \cvCppCross{cornerMinEigenVal}, or the Harris function response, see \cvCppCross{cornerHarris}). The corners whose quality measure is less than the product are rejected. For example, if the best corner has the quality measure = 1500, and \texttt{qualityLevel=0.01}, then all the corners whose quality measure is less than 15 are rejected.}
+\cvarg{minDistance}{The minimum possible Euclidean distance between the returned corners}
+\cvarg{mask}{The optional region of interest.
If the mask is not empty (it must have the type \texttt{CV\_8UC1} and the same size as \texttt{image}), it specifies the region in which the corners are detected}
+\cvarg{blockSize}{Size of the averaging block for computing the derivative covariation matrix over each pixel neighborhood, see \cvCppCross{cornerEigenValsAndVecs}}
+\cvarg{useHarrisDetector}{Indicates whether to use the \hyperref[cornerHarris]{Harris} operator or \cvCppCross{cornerMinEigenVal}}
+\cvarg{k}{Free parameter of Harris detector}
+\end{description}
+
+The function finds the most prominent corners in the image or in the specified image region, as described
+in \cite{Shi94}:
+\begin{enumerate}
+\item the function first calculates the corner quality measure at every source image pixel using \cvCppCross{cornerMinEigenVal} or \cvCppCross{cornerHarris}
+\item then it performs non-maxima suppression (the local maxima in a $3\times 3$ neighborhood
+are retained).
+\item the next step rejects the corners whose quality measure is less than $\texttt{qualityLevel} \cdot \max_{x,y} qualityMeasureMap(x,y)$.
+\item the remaining corners are then sorted by the quality measure in descending order.
+\item finally, the function throws away each corner $pt_j$ if there is a stronger corner $pt_i$ ($i < j$) such that the distance between them is less than \texttt{minDistance}
+\end{enumerate}
+
+The function can be used to initialize a point-based tracker of an object; a usage sketch is shown below.
+
+Note that if the function is called with different values \texttt{A} and \texttt{B} of the parameter \texttt{qualityLevel}, and \texttt{A} > \texttt{B}, the vector of returned corners with \texttt{qualityLevel=A} will be the prefix of the output vector with \texttt{qualityLevel=B}.
+
+See also: \cvCppCross{cornerMinEigenVal}, \cvCppCross{cornerHarris}, \cvCppCross{calcOpticalFlowPyrLK}, \cvCppCross{estimateRigidMotion}, \cvCppCross{PlanarObjectDetector}, \cvCppCross{OneWayDescriptor}
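+
+For illustration, below is a minimal usage sketch (not part of the original reference; the image file and the parameter values are assumptions):
+
+\begin{lstlisting}
+// Detect up to 100 strong corners in a grayscale image and mark them.
+#include <cv.h>
+#include <highgui.h>
+
+using namespace cv;
+
+int main(int argc, char** argv)
+{
+    if( argc != 2 )
+        return -1;
+    Mat img = imread(argv[1], 0);   // load as grayscale
+    if( !img.data )
+        return -1;
+
+    vector<Point2f> corners;
+    goodFeaturesToTrack( img, corners,
+                         100,       // maxCorners
+                         0.01,      // qualityLevel
+                         10 );      // minDistance, in pixels
+    for( size_t i = 0; i < corners.size(); i++ )
+        circle( img, corners[i], 3, Scalar(255), -1, 8 );
+
+    namedWindow( "corners", 1 );
+    imshow( "corners", img );
+    waitKey(0);
+    return 0;
+}
+\end{lstlisting}
+
+\cvCppFunc{HoughCircles}
+Finds circles in a grayscale image using a Hough transform.
+
+\cvdefCpp{void HoughCircles( Mat\& image, vector<Vec3f>\& circles,\par
+                 int method, double dp, double minDist,\par
+                 double param1=100, double param2=100,\par
+                 int minRadius=0, int maxRadius=0 );}
+\begin{description}
+\cvarg{image}{The 8-bit, single-channel, grayscale input image}
+\cvarg{circles}{The output vector of found circles. Each vector is encoded as a 3-element floating-point vector $(x, y, radius)$}
+\cvarg{method}{Currently, the only implemented method is \texttt{CV\_HOUGH\_GRADIENT}, which is basically \emph{21HT}, described in \cite{Yuen90}.}
+\cvarg{dp}{The inverse ratio of the accumulator resolution to the image resolution. For example, if \texttt{dp=1}, the accumulator will have the same resolution as the input image; if \texttt{dp=2}, the accumulator will have half the width and height, etc.}
+\cvarg{minDist}{Minimum distance between the centers of the detected circles. If the parameter is too small, multiple neighbor circles may be falsely detected in addition to a true one. If it is too large, some circles may be missed}
+\cvarg{param1}{The first method-specific parameter. In the case of \texttt{CV\_HOUGH\_GRADIENT} it is the higher threshold of the two passed to the \cvCppCross{Canny} edge detector (the lower one will be twice smaller)}
+\cvarg{param2}{The second method-specific parameter. In the case of \texttt{CV\_HOUGH\_GRADIENT} it is the accumulator threshold at the center detection stage. The smaller it is, the more false circles may be detected.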
Circles, corresponding to the larger accumulator values, will be returned first}
+\cvarg{minRadius}{Minimum circle radius}
+\cvarg{maxRadius}{Maximum circle radius}
+\end{description}
+
+The function finds circles in a grayscale image using a modification of the Hough transform. Here is a short usage example:
+
+\begin{lstlisting}
+#include <cv.h>
+#include <highgui.h>
+#include <math.h>
+
+using namespace cv;
+
+int main(int argc, char** argv)
+{
+    Mat img, gray;
+    if( argc != 2 || !(img=imread(argv[1], 1)).data)
+        return -1;
+    cvtColor(img, gray, CV_BGR2GRAY);
+    // smooth it, otherwise a lot of false circles may be detected
+    GaussianBlur( gray, gray, Size(9, 9), 2, 2 );
+    vector<Vec3f> circles;
+    HoughCircles(gray, circles, CV_HOUGH_GRADIENT,
+                 2, gray.rows/4, 200, 100 );
+    for( size_t i = 0; i < circles.size(); i++ )
+    {
+         Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
+         int radius = cvRound(circles[i][2]);
+         // draw the circle center
+         circle( img, center, 3, Scalar(0,255,0), -1, 8, 0 );
+         // draw the circle outline
+         circle( img, center, radius, Scalar(0,0,255), 3, 8, 0 );
+    }
+    namedWindow( "circles", 1 );
+    imshow( "circles", img );
+    waitKey(0);
+    return 0;
+}
+\end{lstlisting}
+
+Note that the function usually detects the circles' centers well, but it may fail to find the correct radii. You can assist the function by specifying the radius range (\texttt{minRadius} and \texttt{maxRadius}) if you know it, or you may ignore the returned radius, use only the center, and find the correct radius using some additional procedure.
+
+See also: \cvCppCross{fitEllipse}, \cvCppCross{minEnclosingCircle}
+
+\cvCppFunc{HoughLines}
+Finds lines in a binary image using the standard Hough transform.
+
+\cvdefCpp{void HoughLines( Mat\& image, vector<Vec2f>\& lines,\par
+                double rho, double theta, int threshold,\par
+                double srn=0, double stn=0 );}
+\begin{description}
+\cvarg{image}{The 8-bit, single-channel, binary source image. The image may be modified by the function}
+\cvarg{lines}{The output vector of lines. Each line is represented by a two-element vector $(\rho, \theta)$. $\rho$ is the distance from the coordinate origin $(0,0)$ (top-left corner of the image) and $\theta$ is the line rotation angle in radians ($0 \sim \textrm{vertical line}, \pi/2 \sim \textrm{horizontal line}$)}
+\cvarg{rho}{Distance resolution of the accumulator in pixels}
+\cvarg{theta}{Angle resolution of the accumulator in radians}
+\cvarg{threshold}{The accumulator threshold parameter. Only those lines are returned that get enough votes ($>\texttt{threshold}$)}
+\cvarg{srn}{For the multi-scale Hough transform it is the divisor for the distance resolution \texttt{rho}. The coarse accumulator distance resolution will be \texttt{rho} and the accurate accumulator resolution will be \texttt{rho/srn}. If both \texttt{srn=0} and \texttt{stn=0} then the classical Hough transform is used; otherwise both these parameters should be positive.}
+\cvarg{stn}{For the multi-scale Hough transform it is the divisor for the angle resolution \texttt{theta}}
+\end{description}
+
+The function implements the standard or multi-scale Hough transform algorithm for line detection. See \cvCppCross{HoughLinesP} for a code example.
+
+
+\cvCppFunc{HoughLinesP}
+Finds line segments in a binary image using the probabilistic Hough transform.
+
+\cvdefCpp{void HoughLinesP( Mat\& image, vector<Vec4i>\& lines,\par
+                 double rho, double theta, int threshold,\par
+                 double minLineLength=0, double maxLineGap=0 );}
+\begin{description}
+\cvarg{image}{The 8-bit, single-channel, binary source image.
The image may be modified by the function}
+\cvarg{lines}{The output vector of lines. Each line is represented by a 4-element vector $(x_1, y_1, x_2, y_2)$, where $(x_1,y_1)$ and $(x_2, y_2)$ are the ending points of each line segment detected.}
+\cvarg{rho}{Distance resolution of the accumulator in pixels}
+\cvarg{theta}{Angle resolution of the accumulator in radians}
+\cvarg{threshold}{The accumulator threshold parameter. Only those lines are returned that get enough votes ($>\texttt{threshold}$)}
+\cvarg{minLineLength}{The minimum line length. Line segments shorter than that will be rejected}
+\cvarg{maxLineGap}{The maximum allowed gap between points on the same line to link them.}
+\end{description}
+
+The function implements the probabilistic Hough transform algorithm for line detection, described in \cite{Matas00}. Below is a line detection example:
+
+\begin{lstlisting}
+/* This is a standalone program. Pass an image name as a first parameter
+of the program. Switch between standard and probabilistic Hough transform
+by changing "#if 1" to "#if 0" and back */
+#include <cv.h>
+#include <highgui.h>
+#include <math.h>
+
+using namespace cv;
+
+int main(int argc, char** argv)
+{
+    Mat src, dst, color_dst;
+    if( argc != 2 || !(src=imread(argv[1], 0)).data)
+        return -1;
+
+    Canny( src, dst, 50, 200, 3 );
+    cvtColor( dst, color_dst, CV_GRAY2BGR );
+
+#if 0
+    vector<Vec2f> lines;
+    HoughLines( dst, lines, 1, CV_PI/180, 100 );
+
+    for( size_t i = 0; i < lines.size(); i++ )
+    {
+        float rho = lines[i][0];
+        float theta = lines[i][1];
+        double a = cos(theta), b = sin(theta);
+        double x0 = a*rho, y0 = b*rho;
+        Point pt1(cvRound(x0 + 1000*(-b)),
+                  cvRound(y0 + 1000*(a)));
+        Point pt2(cvRound(x0 - 1000*(-b)),
+                  cvRound(y0 - 1000*(a)));
+        line( color_dst, pt1, pt2, Scalar(0,0,255), 3, 8 );
+    }
+#else
+    vector<Vec4i> lines;
+    HoughLinesP( dst, lines, 1, CV_PI/180, 80, 30, 10 );
+    for( size_t i = 0; i < lines.size(); i++ )
+    {
+        line( color_dst, Point(lines[i][0], lines[i][1]),
+            Point(lines[i][2], lines[i][3]), Scalar(0,0,255), 3, 8 );
+    }
+#endif
+    namedWindow( "Source", 1 );
+    imshow( "Source", src );
+
+    namedWindow( "Detected Lines", 1 );
+    imshow( "Detected Lines", color_dst );
+
+    waitKey(0);
+    return 0;
+}
+\end{lstlisting}
+
+
+This is the sample picture the function parameters have been tuned for:
+
+\includegraphics[width=0.5\textwidth]{pics/building.jpg}
+
+And this is the output of the above program in the case of the probabilistic Hough transform:
+
+\includegraphics[width=0.5\textwidth]{pics/houghp.png}
+
+\cvCppFunc{preCornerDetect}
+Calculates the feature map for corner detection.
+
+\cvdefCpp{void preCornerDetect( const Mat\& src, Mat\& dst, int apertureSize,\par
+    int borderType=BORDER\_DEFAULT );}
+\begin{description}
+\cvarg{src}{The source single-channel 8-bit or floating-point image}
+\cvarg{dst}{The output image; will have type \texttt{CV\_32F} and the same size as \texttt{src}}
+\cvarg{apertureSize}{Aperture size of \cvCppCross{Sobel}}
+\cvarg{borderType}{The pixel extrapolation method; see \cvCppCross{borderInterpolate}}
+\end{description}
+
+The function calculates the complex spatial derivative-based function of the source image
+
+\[
+\texttt{dst} = (D_x \texttt{src})^2 \cdot D_{yy} \texttt{src} + (D_y \texttt{src})^2 \cdot D_{xx} \texttt{src} - 2 D_x \texttt{src} \cdot D_y \texttt{src} \cdot D_{xy} \texttt{src}
+\]
+
+where $D_x$, $D_y$ are the first image derivatives, $D_{xx}$, $D_{yy}$ are the second image derivatives and $D_{xy}$ is the mixed derivative.
+
+The corners can be found as the local maxima of the function, as shown below:
+
+\begin{lstlisting}
+Mat corners, dilated_corners;
+preCornerDetect(image, corners, 3);
+// dilation with 3x3 rectangular structuring element
+dilate(corners, dilated_corners, Mat(), 1);
+Mat corner_mask = corners == dilated_corners;
+\end{lstlisting}
+
+
+
+\fi
diff --git a/doc/cv_histograms.tex b/doc/imgproc_histograms.tex
similarity index 100%
rename from doc/cv_histograms.tex
rename to doc/imgproc_histograms.tex
diff --git a/doc/cv_image_filtering.tex b/doc/imgproc_image_filtering.tex
similarity index 100%
rename from doc/cv_image_filtering.tex
rename to doc/imgproc_image_filtering.tex
diff --git a/doc/cv_image_transform.tex b/doc/imgproc_image_transform.tex
similarity index 100%
rename from doc/cv_image_transform.tex
rename to doc/imgproc_image_transform.tex
diff --git a/doc/cv_image_warping.tex b/doc/imgproc_image_warping.tex
similarity index 100%
rename from doc/cv_image_warping.tex
rename to doc/imgproc_image_warping.tex
diff --git a/doc/imgproc_motion_tracking.tex b/doc/imgproc_motion_tracking.tex
new file mode 100644
index 0000000..17e3839
--- /dev/null
+++ b/doc/imgproc_motion_tracking.tex
@@ -0,0 +1,169 @@
+%[TODO: FROM VIDEO]
+\section{Motion Analysis and Object Tracking}
+
+\ifCPy
+
+\cvCPyFunc{Acc}
+Adds a frame to an accumulator.
+
+\cvdefC{
+void cvAcc( \par const CvArr* image,\par CvArr* sum,\par const CvArr* mask=NULL );
+}
+\cvdefPy{Acc(image,sum,mask=NULL)-> None}
+
+\begin{description}
+\cvarg{image}{Input image, 1- or 3-channel, 8-bit or 32-bit floating point (each channel of a multi-channel image is processed independently)}
+\cvarg{sum}{Accumulator with the same number of channels as input image, 32-bit or 64-bit floating-point}
+\cvarg{mask}{Optional operation mask}
+\end{description}
+
+The function adds the whole image \texttt{image} or its selected region to the accumulator \texttt{sum}:
+
+\[ \texttt{sum}(x,y) \leftarrow \texttt{sum}(x,y) + \texttt{image}(x,y) \quad \text{if} \quad \texttt{mask}(x,y) \ne 0 \]
+
+\cvCPyFunc{MultiplyAcc}
+Adds the product of two input images to the accumulator.
+
+\cvdefC{
+void cvMultiplyAcc( \par const CvArr* image1,\par const CvArr* image2,\par CvArr* acc,\par const CvArr* mask=NULL );
+}
+\cvdefPy{MultiplyAcc(image1,image2,acc,mask=NULL)-> None}
+
+\begin{description}
+\cvarg{image1}{First input image, 1- or 3-channel, 8-bit or 32-bit floating point (each channel of a multi-channel image is processed independently)}
+\cvarg{image2}{Second input image, the same format as the first one}
+\cvarg{acc}{Accumulator with the same number of channels as input images, 32-bit or 64-bit floating-point}
+\cvarg{mask}{Optional operation mask}
+\end{description}
+
+The function adds the product of two images or their selected regions to the accumulator \texttt{acc}:
+
+\[ \texttt{acc}(x,y) \leftarrow \texttt{acc}(x,y) + \texttt{image1}(x,y) \cdot \texttt{image2}(x,y) \quad \text{if} \quad \texttt{mask}(x,y) \ne 0 \]
+
+
+\cvCPyFunc{RunningAvg}
+Updates the running average.
+
+\cvdefC{
+void cvRunningAvg( \par const CvArr* image,\par CvArr* acc,\par double alpha,\par const CvArr* mask=NULL );
+}
+\cvdefPy{RunningAvg(image,acc,alpha,mask=NULL)-> None}
+
+\begin{description}
+\cvarg{image}{Input image, 1- or 3-channel, 8-bit or 32-bit floating point (each channel of a multi-channel image is processed independently)}
+\cvarg{acc}{Accumulator with the same number of channels as input image, 32-bit or 64-bit floating-point}
+\cvarg{alpha}{Weight of input image}
+\cvarg{mask}{Optional operation mask}
+\end{description}
+
+The function calculates the weighted sum of the input image
+\texttt{image} and the accumulator \texttt{acc} so that \texttt{acc}
+becomes a running average of the frame sequence:
+
+\[ \texttt{acc}(x,y) \leftarrow (1-\alpha) \cdot \texttt{acc}(x,y) + \alpha \cdot \texttt{image}(x,y) \quad \text{if} \quad \texttt{mask}(x,y) \ne 0 \]
+
+where $\alpha$ regulates the update speed (how fast the accumulator forgets about previous frames).
+
+\cvCPyFunc{SquareAcc}
+Adds the square of the source image to the accumulator.
+
+\cvdefC{
+void cvSquareAcc( \par const CvArr* image,\par CvArr* sqsum,\par const CvArr* mask=NULL );
+}\cvdefPy{SquareAcc(image,sqsum,mask=NULL)-> None}
+
+\begin{description}
+\cvarg{image}{Input image, 1- or 3-channel, 8-bit or 32-bit floating point (each channel of a multi-channel image is processed independently)}
+\cvarg{sqsum}{Accumulator with the same number of channels as input image, 32-bit or 64-bit floating-point}
+\cvarg{mask}{Optional operation mask}
+\end{description}
+
+The function adds the input image \texttt{image} or its selected region, raised to the power of 2, to the accumulator \texttt{sqsum}:
+
+\[ \texttt{sqsum}(x,y) \leftarrow \texttt{sqsum}(x,y) + \texttt{image}(x,y)^2 \quad \text{if} \quad \texttt{mask}(x,y) \ne 0 \]
+
+\fi
+
+\ifCpp
+
+\cvCppFunc{accumulate}
+Adds an image to the accumulator.
+
+\cvdefCpp{void accumulate( const Mat\& src, Mat\& dst, const Mat\& mask=Mat() );}
+\begin{description}
+\cvarg{src}{The input image, 1- or 3-channel, 8-bit or 32-bit floating point}
+\cvarg{dst}{The accumulator image with the same number of channels as input image, 32-bit or 64-bit floating-point}
+\cvarg{mask}{Optional operation mask}
+\end{description}
+
+The function adds \texttt{src}, or some of its elements, to \texttt{dst}:
+
+\[ \texttt{dst}(x,y) \leftarrow \texttt{dst}(x,y) + \texttt{src}(x,y) \quad \text{if} \quad \texttt{mask}(x,y) \ne 0 \]
+
+The function supports multi-channel images; each channel is processed independently.
+
+The functions \texttt{accumulate*} can be used, for example, to collect statistics of the background of a scene viewed by a still camera, for further foreground-background segmentation, as the sketch below illustrates.
+
+See also: \cvCppCross{accumulateSquare}, \cvCppCross{accumulateProduct}, \cvCppCross{accumulateWeighted}
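+
+For illustration, below is a minimal background-model sketch (not part of the original reference; the camera index and the update rate are assumptions):
+
+\begin{lstlisting}
+// Maintain a running-average background model with accumulateWeighted
+// and show the difference between each new frame and the model.
+#include <cv.h>
+#include <highgui.h>
+
+using namespace cv;
+
+int main()
+{
+    VideoCapture cap(0);            // default camera
+    if( !cap.isOpened() )
+        return -1;
+
+    Mat frame, gray, bg8u, diff;
+    Mat background;                 // accumulated in floating point
+    for(;;)
+    {
+        cap >> frame;
+        if( frame.empty() )
+            break;
+        cvtColor( frame, gray, CV_BGR2GRAY );
+        if( background.empty() )
+            gray.convertTo( background, CV_32F );
+        accumulateWeighted( gray, background, 0.05 );  // slow update
+        background.convertTo( bg8u, CV_8U );
+        absdiff( gray, bg8u, diff );    // rough foreground estimate
+        imshow( "foreground", diff );
+        if( waitKey(30) >= 0 )
+            break;
+    }
+    return 0;
+}
+\end{lstlisting}
+
+\cvCppFunc{accumulateSquare}
+Adds the square of the source image to the accumulator.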
+
+\cvdefCpp{void accumulateSquare( const Mat\& src, Mat\& dst, \par const Mat\& mask=Mat() );}
+\begin{description}
+\cvarg{src}{The input image, 1- or 3-channel, 8-bit or 32-bit floating point}
+\cvarg{dst}{The accumulator image with the same number of channels as input image, 32-bit or 64-bit floating-point}
+\cvarg{mask}{Optional operation mask}
+\end{description}
+
+The function adds the input image \texttt{src} or its selected region, raised to the power of 2, to the accumulator \texttt{dst}:
+
+\[ \texttt{dst}(x,y) \leftarrow \texttt{dst}(x,y) + \texttt{src}(x,y)^2 \quad \text{if} \quad \texttt{mask}(x,y) \ne 0 \]
+
+The function supports multi-channel images; each channel is processed independently.
+
+See also: \cvCppCross{accumulate}, \cvCppCross{accumulateProduct}, \cvCppCross{accumulateWeighted}
+
+\cvCppFunc{accumulateProduct}
+Adds the per-element product of two input images to the accumulator.
+
+\cvdefCpp{void accumulateProduct( const Mat\& src1, const Mat\& src2,\par
+                       Mat\& dst, const Mat\& mask=Mat() );}
+\begin{description}
+\cvarg{src1}{The first input image, 1- or 3-channel, 8-bit or 32-bit floating point}
+\cvarg{src2}{The second input image of the same type and the same size as \texttt{src1}}
+\cvarg{dst}{Accumulator with the same number of channels as input images, 32-bit or 64-bit floating-point}
+\cvarg{mask}{Optional operation mask}
+\end{description}
+
+The function adds the product of two images or their selected regions to the accumulator \texttt{dst}:
+
+\[ \texttt{dst}(x,y) \leftarrow \texttt{dst}(x,y) + \texttt{src1}(x,y) \cdot \texttt{src2}(x,y) \quad \text{if} \quad \texttt{mask}(x,y) \ne 0 \]
+
+The function supports multi-channel images; each channel is processed independently.
+
+See also: \cvCppCross{accumulate}, \cvCppCross{accumulateSquare}, \cvCppCross{accumulateWeighted}
+
+\cvCppFunc{accumulateWeighted}
+Updates the running average.
+
+\cvdefCpp{void accumulateWeighted( const Mat\& src, Mat\& dst,\par
+                        double alpha, const Mat\& mask=Mat() );}
+\begin{description}
+\cvarg{src}{The input image, 1- or 3-channel, 8-bit or 32-bit floating point}
+\cvarg{dst}{The accumulator image with the same number of channels as input image, 32-bit or 64-bit floating-point}
+\cvarg{alpha}{Weight of the input image}
+\cvarg{mask}{Optional operation mask}
+\end{description}
+
+The function calculates the weighted sum of the input image
+\texttt{src} and the accumulator \texttt{dst} so that \texttt{dst}
+becomes a running average of the frame sequence:
+
+\[ \texttt{dst}(x,y) \leftarrow (1-\texttt{alpha}) \cdot \texttt{dst}(x,y) + \texttt{alpha} \cdot \texttt{src}(x,y) \quad \text{if} \quad \texttt{mask}(x,y) \ne 0 \]
+
+that is, \texttt{alpha} regulates the update speed (how fast the accumulator "forgets" about earlier images).
+The function supports multi-channel images; each channel is processed independently.
+
+See also: \cvCppCross{accumulate}, \cvCppCross{accumulateSquare}, \cvCppCross{accumulateProduct}
+
+\fi
diff --git a/doc/imgproc_object_detection.tex b/doc/imgproc_object_detection.tex
new file mode 100644
index 0000000..71a6e9a
--- /dev/null
+++ b/doc/imgproc_object_detection.tex
@@ -0,0 +1,138 @@
+%[TODO: from objdetect]
+\section{Object Detection}
+
+\ifCPy
+\cvCPyFunc{MatchTemplate}
+Compares a template against overlapped image regions.
+ +\cvdefC{ +void cvMatchTemplate( \par const CvArr* image,\par const CvArr* templ,\par CvArr* result,\par int method ); +}\cvdefPy{MatchTemplate(image,templ,result,method)-> None} + +\begin{description} +\cvarg{image}{Image where the search is running; should be 8-bit or 32-bit floating-point} +\cvarg{templ}{Searched template; must be not greater than the source image and the same data type as the image} +\cvarg{result}{A map of comparison results; single-channel 32-bit floating-point. +If \texttt{image} is $W \times H$ and +\texttt{templ} is $w \times h$ then \texttt{result} must be $(W-w+1) \times (H-h+1)$} +\cvarg{method}{Specifies the way the template must be compared with the image regions (see below)} +\end{description} + +The function is similar to +\cvCPyCross{CalcBackProjectPatch}. It slides through \texttt{image}, compares the +overlapped patches of size $w \times h$ against \texttt{templ} +using the specified method and stores the comparison results to +\texttt{result}. Here are the formulas for the different comparison +methods one may use ($I$ denotes \texttt{image}, $T$ \texttt{template}, +$R$ \texttt{result}). The summation is done over template and/or the +image patch: $x' = 0...w-1, y' = 0...h-1$ + +% \texttt{x'=0..w-1, y'=0..h-1}): + +\begin{description} +\item[method=CV\_TM\_SQDIFF] +\[ R(x,y)=\sum_{x',y'} (T(x',y')-I(x+x',y+y'))^2 \] + +\item[method=CV\_TM\_SQDIFF\_NORMED] +\[ R(x,y)=\frac +{\sum_{x',y'} (T(x',y')-I(x+x',y+y'))^2} +{\sqrt{\sum_{x',y'}T(x',y')^2 \cdot \sum_{x',y'} I(x+x',y+y')^2}} +\] + +\item[method=CV\_TM\_CCORR] +\[ R(x,y)=\sum_{x',y'} (T(x',y') \cdot I(x+x',y+y')) \] + +\item[method=CV\_TM\_CCORR\_NORMED] +\[ R(x,y)=\frac +{\sum_{x',y'} (T(x',y') \cdot I'(x+x',y+y'))} +{\sqrt{\sum_{x',y'}T(x',y')^2 \cdot \sum_{x',y'} I(x+x',y+y')^2}} +\] + +\item[method=CV\_TM\_CCOEFF] +\[ R(x,y)=\sum_{x',y'} (T'(x',y') \cdot I(x+x',y+y')) \] + +where +\[ +\begin{array}{l} +T'(x',y')=T(x',y') - 1/(w \cdot h) \cdot \sum_{x'',y''} T(x'',y'')\\ +I'(x+x',y+y')=I(x+x',y+y') - 1/(w \cdot h) \cdot \sum_{x'',y''} I(x+x'',y+y'') +\end{array} +\] + +\item[method=CV\_TM\_CCOEFF\_NORMED] +\[ R(x,y)=\frac +{ \sum_{x',y'} (T'(x',y') \cdot I'(x+x',y+y')) } +{ \sqrt{\sum_{x',y'}T'(x',y')^2 \cdot \sum_{x',y'} I'(x+x',y+y')^2} } +\] +\end{description} + +After the function finishes the comparison, the best matches can be found as global minimums (\texttt{CV\_TM\_SQDIFF}) or maximums (\texttt{CV\_TM\_CCORR} and \texttt{CV\_TM\_CCOEFF}) using the \cvCPyCross{MinMaxLoc} function. In the case of a color image, template summation in the numerator and each sum in the denominator is done over all of the channels (and separate mean values are used for each channel). + +\fi + +\ifCpp + +\cvCppFunc{matchTemplate} +Compares a template against overlapped image regions. + +\cvdefCpp{void matchTemplate( const Mat\& image, const Mat\& templ,\par + Mat\& result, int method );} +\begin{description} +\cvarg{image}{Image where the search is running; should be 8-bit or 32-bit floating-point} +\cvarg{templ}{Searched template; must be not greater than the source image and have the same data type} +\cvarg{result}{A map of comparison results; will be single-channel 32-bit floating-point. 
+If \texttt{image} is $W \times H$ and
+\texttt{templ} is $w \times h$ then \texttt{result} will be $(W-w+1) \times (H-h+1)$}
+\cvarg{method}{Specifies the comparison method (see below)}
+\end{description}
+
+The function slides through \texttt{image}, compares the
+overlapped patches of size $w \times h$ against \texttt{templ}
+using the specified method and stores the comparison results in
+\texttt{result}. Here are the formulas for the available comparison
+methods ($I$ denotes \texttt{image}, $T$ \texttt{template},
+$R$ \texttt{result}). The summation is done over the template and/or the
+image patch: $x' = 0 \ldots w-1,\; y' = 0 \ldots h-1$
+
+% \texttt{x'=0..w-1, y'=0..h-1}):
+
+\begin{description}
+\item[method=CV\_TM\_SQDIFF]
+\[ R(x,y)=\sum_{x',y'} (T(x',y')-I(x+x',y+y'))^2 \]
+
+\item[method=CV\_TM\_SQDIFF\_NORMED]
+\[ R(x,y)=\frac
+{\sum_{x',y'} (T(x',y')-I(x+x',y+y'))^2}
+{\sqrt{\sum_{x',y'}T(x',y')^2 \cdot \sum_{x',y'} I(x+x',y+y')^2}}
+\]
+
+\item[method=CV\_TM\_CCORR]
+\[ R(x,y)=\sum_{x',y'} (T(x',y') \cdot I(x+x',y+y')) \]
+
+\item[method=CV\_TM\_CCORR\_NORMED]
+\[ R(x,y)=\frac
+{\sum_{x',y'} (T(x',y') \cdot I(x+x',y+y'))}
+{\sqrt{\sum_{x',y'}T(x',y')^2 \cdot \sum_{x',y'} I(x+x',y+y')^2}}
+\]
+
+\item[method=CV\_TM\_CCOEFF]
+\[ R(x,y)=\sum_{x',y'} (T'(x',y') \cdot I(x+x',y+y')) \]
+
+where
+\[
+\begin{array}{l}
+T'(x',y')=T(x',y') - 1/(w \cdot h) \cdot \sum_{x'',y''} T(x'',y'')\\
+I'(x+x',y+y')=I(x+x',y+y') - 1/(w \cdot h) \cdot \sum_{x'',y''} I(x+x'',y+y'')
+\end{array}
+\]
+
+\item[method=CV\_TM\_CCOEFF\_NORMED]
+\[ R(x,y)=\frac
+{ \sum_{x',y'} (T'(x',y') \cdot I'(x+x',y+y')) }
+{ \sqrt{\sum_{x',y'}T'(x',y')^2 \cdot \sum_{x',y'} I'(x+x',y+y')^2} }
+\]
+\end{description}
+
+After the function finishes the comparison, the best matches can be found as global minima (when \texttt{CV\_TM\_SQDIFF} was used) or maxima (when \texttt{CV\_TM\_CCORR} or \texttt{CV\_TM\_CCOEFF} was used) using the \cvCppCross{minMaxLoc} function. In the case of a color image, the template summation in the numerator and each sum in the denominator are done over all of the channels (and separate mean values are used for each channel). That is, the function can take a color template and a color image; the result will still be a single-channel image, which is easier to analyze.
+
+\fi
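
A companion sketch for the C++ \texttt{matchTemplate} entry above. It is illustrative rather than part of the patch; \texttt{scene.png} and \texttt{patch.png} are placeholder file names, and the OpenCV 2.x modular headers are assumed.

    #include <opencv2/core/core.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <opencv2/highgui/highgui.hpp>

    int main()
    {
        cv::Mat image = cv::imread("scene.png");   // placeholder input files
        cv::Mat templ = cv::imread("patch.png");
        if( image.empty() || templ.empty() ) return -1;

        // result is (W-w+1) x (H-h+1), single-channel 32-bit floating-point
        cv::Mat result;
        cv::matchTemplate(image, templ, result, CV_TM_CCOEFF_NORMED);

        // for CV_TM_CCOEFF_NORMED the best match is the global maximum
        double minVal, maxVal;
        cv::Point minLoc, maxLoc;
        cv::minMaxLoc(result, &minVal, &maxVal, &minLoc, &maxLoc);

        // outline the best match in the source image
        cv::rectangle(image, maxLoc,
                      cv::Point(maxLoc.x + templ.cols, maxLoc.y + templ.rows),
                      cv::Scalar(0, 255, 0));
        cv::imshow("match", image);
        cv::waitKey(0);
        return 0;
    }
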
diff --git a/doc/cv_planar_subdivisions.tex b/doc/imgproc_planar_subdivisions.tex
similarity index 100%
rename from doc/cv_planar_subdivisions.tex
rename to doc/imgproc_planar_subdivisions.tex
diff --git a/doc/cv_struct_shape_analysis.tex b/doc/imgproc_struct_shape_analysis.tex
similarity index 100%
rename from doc/cv_struct_shape_analysis.tex
rename to doc/imgproc_struct_shape_analysis.tex
diff --git a/doc/MachineLearning.tex b/doc/ml.tex
similarity index 100%
rename from doc/MachineLearning.tex
rename to doc/ml.tex
diff --git a/doc/cv_object_detection.tex b/doc/objdetect.tex
similarity index 83%
rename from doc/cv_object_detection.tex
rename to doc/objdetect.tex
index 8de710f..38e7ef8 100644
--- a/doc/cv_object_detection.tex
+++ b/doc/objdetect.tex
@@ -1,73 +1,7 @@
-\section{Object Detection}
+\section{Cascade Classification}

 \ifCPy

-\cvCPyFunc{MatchTemplate}
-Compares a template against overlapped image regions.
-
-\cvdefC{
-void cvMatchTemplate( \par const CvArr* image,\par const CvArr* templ,\par CvArr* result,\par int method );
-}\cvdefPy{MatchTemplate(image,templ,result,method)-> None}
-
-\begin{description}
-\cvarg{image}{Image where the search is running; should be 8-bit or 32-bit floating-point}
-\cvarg{templ}{Searched template; must be not greater than the source image and the same data type as the image}
-\cvarg{result}{A map of comparison results; single-channel 32-bit floating-point.
-If \texttt{image} is $W \times H$ and
-\texttt{templ} is $w \times h$ then \texttt{result} must be $(W-w+1) \times (H-h+1)$}
-\cvarg{method}{Specifies the way the template must be compared with the image regions (see below)}
-\end{description}
-
-The function is similar to
-\cvCPyCross{CalcBackProjectPatch}. It slides through \texttt{image}, compares the
-overlapped patches of size $w \times h$ against \texttt{templ}
-using the specified method and stores the comparison results to
-\texttt{result}. Here are the formulas for the different comparison
-methods one may use ($I$ denotes \texttt{image}, $T$ \texttt{template},
-$R$ \texttt{result}). The summation is done over template and/or the
-image patch: $x' = 0...w-1, y' = 0...h-1$
-
-% \texttt{x'=0..w-1, y'=0..h-1}):
-
-\begin{description}
-\item[method=CV\_TM\_SQDIFF]
-\[ R(x,y)=\sum_{x',y'} (T(x',y')-I(x+x',y+y'))^2 \]
-
-\item[method=CV\_TM\_SQDIFF\_NORMED]
-\[ R(x,y)=\frac
-{\sum_{x',y'} (T(x',y')-I(x+x',y+y'))^2}
-{\sqrt{\sum_{x',y'}T(x',y')^2 \cdot \sum_{x',y'} I(x+x',y+y')^2}}
-\]
-
-\item[method=CV\_TM\_CCORR]
-\[ R(x,y)=\sum_{x',y'} (T(x',y') \cdot I(x+x',y+y')) \]
-
-\item[method=CV\_TM\_CCORR\_NORMED]
-\[ R(x,y)=\frac
-{\sum_{x',y'} (T(x',y') \cdot I'(x+x',y+y'))}
-{\sqrt{\sum_{x',y'}T(x',y')^2 \cdot \sum_{x',y'} I(x+x',y+y')^2}}
-\]
-
-\item[method=CV\_TM\_CCOEFF]
-\[ R(x,y)=\sum_{x',y'} (T'(x',y') \cdot I(x+x',y+y')) \]
-
-where
-\[
-\begin{array}{l}
-T'(x',y')=T(x',y') - 1/(w \cdot h) \cdot \sum_{x'',y''} T(x'',y'')\\
-I'(x+x',y+y')=I(x+x',y+y') - 1/(w \cdot h) \cdot \sum_{x'',y''} I(x+x'',y+y'')
-\end{array}
-\]
-
-\item[method=CV\_TM\_CCOEFF\_NORMED]
-\[ R(x,y)=\frac
-{ \sum_{x',y'} (T'(x',y') \cdot I'(x+x',y+y')) }
-{ \sqrt{\sum_{x',y'}T'(x',y')^2 \cdot \sum_{x',y'} I'(x+x',y+y')^2} }
-\]
-\end{description}
-
-After the function finishes the comparison, the best matches can be found as global minimums (\texttt{CV\_TM\_SQDIFF}) or maximums (\texttt{CV\_TM\_CCORR} and \texttt{CV\_TM\_CCOEFF}) using the \cvCPyCross{MinMaxLoc} function. In the case of a color image, template summation in the numerator and each sum in the denominator is done over all of the channels (and separate mean values are used for each channel).
-
 \subsection{Haar Feature-based Cascade Classifier for Object Detection}

 The object detector described below has been initially proposed by Paul Viola
@@ -701,68 +635,4 @@ Groups the object candidate rectangles
 \end{description}

 The function is a wrapper for a generic function \cvCppCross{partition}. It clusters all the input rectangles using the rectangle equivalence criteria that combine rectangles with similar sizes and similar locations (the similarity is defined by \texttt{eps}). When \texttt{eps=0}, no clustering is done at all. If $\texttt{eps}\rightarrow +\infty$, all the rectangles will be put in one cluster. Then, the small clusters, containing \texttt{groupThreshold} or fewer rectangles, will be rejected. In each remaining cluster, the average rectangle will be computed and put into the output rectangle list, as illustrated in the sketch below.
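
Since the rejection and averaging behaviour of \texttt{groupRectangles} can be hard to picture from the description alone, here is a small illustrative sketch (not from the patch; the rectangle values are invented, and a real list would come from a sliding-window detector):

    #include <opencv2/core/core.hpp>
    #include <opencv2/objdetect/objdetect.hpp>
    #include <cstdio>
    #include <vector>

    int main()
    {
        std::vector<cv::Rect> hits;
        // three near-coincident hits on one object ...
        hits.push_back(cv::Rect(100, 100, 50, 50));
        hits.push_back(cv::Rect(102,  98, 52, 50));
        hits.push_back(cv::Rect( 99, 103, 49, 51));
        // ... and one isolated, likely spurious, hit
        hits.push_back(cv::Rect(300, 40, 50, 50));

        // Clusters with groupThreshold (=1) or fewer rectangles are rejected,
        // so the lone hit disappears while the trio collapses to one average box.
        cv::groupRectangles(hits, 1, 0.2);

        for( size_t i = 0; i < hits.size(); i++ )
            std::printf("%d %d %d %d\n", hits[i].x, hits[i].y,
                        hits[i].width, hits[i].height);
        return 0;
    }
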
-
-\cvCppFunc{matchTemplate}
-Compares a template against overlapped image regions.
-
-\cvdefCpp{void matchTemplate( const Mat\& image, const Mat\& templ,\par
-    Mat\& result, int method );}
-\begin{description}
-\cvarg{image}{Image where the search is running; should be 8-bit or 32-bit floating-point}
-\cvarg{templ}{Searched template; must be not greater than the source image and have the same data type}
-\cvarg{result}{A map of comparison results; will be single-channel 32-bit floating-point.
-If \texttt{image} is $W \times H$ and
-\texttt{templ} is $w \times h$ then \texttt{result} will be $(W-w+1) \times (H-h+1)$}
-\cvarg{method}{Specifies the comparison method (see below)}
-\end{description}
-
-The function slides through \texttt{image}, compares the
-overlapped patches of size $w \times h$ against \texttt{templ}
-using the specified method and stores the comparison results to
-\texttt{result}. Here are the formulas for the available comparison
-methods ($I$ denotes \texttt{image}, $T$ \texttt{template},
-$R$ \texttt{result}). The summation is done over template and/or the
-image patch: $x' = 0...w-1, y' = 0...h-1$
-
-% \texttt{x'=0..w-1, y'=0..h-1}):
-
-\begin{description}
-\item[method=CV\_TM\_SQDIFF]
-\[ R(x,y)=\sum_{x',y'} (T(x',y')-I(x+x',y+y'))^2 \]
-
-\item[method=CV\_TM\_SQDIFF\_NORMED]
-\[ R(x,y)=\frac
-{\sum_{x',y'} (T(x',y')-I(x+x',y+y'))^2}
-{\sqrt{\sum_{x',y'}T(x',y')^2 \cdot \sum_{x',y'} I(x+x',y+y')^2}}
-\]
-
-\item[method=CV\_TM\_CCORR]
-\[ R(x,y)=\sum_{x',y'} (T(x',y') \cdot I(x+x',y+y')) \]
-
-\item[method=CV\_TM\_CCORR\_NORMED]
-\[ R(x,y)=\frac
-{\sum_{x',y'} (T(x',y') \cdot I'(x+x',y+y'))}
-{\sqrt{\sum_{x',y'}T(x',y')^2 \cdot \sum_{x',y'} I(x+x',y+y')^2}}
-\]
-
-\item[method=CV\_TM\_CCOEFF]
-\[ R(x,y)=\sum_{x',y'} (T'(x',y') \cdot I(x+x',y+y')) \]
-
-where
-\[
-\begin{array}{l}
-T'(x',y')=T(x',y') - 1/(w \cdot h) \cdot \sum_{x'',y''} T(x'',y'')\\
-I'(x+x',y+y')=I(x+x',y+y') - 1/(w \cdot h) \cdot \sum_{x'',y''} I(x+x'',y+y'')
-\end{array}
-\]
-
-\item[method=CV\_TM\_CCOEFF\_NORMED]
-\[ R(x,y)=\frac
-{ \sum_{x',y'} (T'(x',y') \cdot I'(x+x',y+y')) }
-{ \sqrt{\sum_{x',y'}T'(x',y')^2 \cdot \sum_{x',y'} I'(x+x',y+y')^2} }
-\]
-\end{description}
-
-After the function finishes the comparison, the best matches can be found as global minimums (when \texttt{CV\_TM\_SQDIFF} was used) or maximums (when \texttt{CV\_TM\_CCORR} or \texttt{CV\_TM\_CCOEFF} was used) using the \cvCppCross{minMaxLoc} function. In the case of a color image, template summation in the numerator and each sum in the denominator is done over all of the channels (and separate mean values are used for each channel). That is, the function can take a color template and a color image; the result will still be a single-channel image, which is easier to analyze.
-
 \fi
diff --git a/doc/online-opencv.tex b/doc/online-opencv.tex
index bc38be8..2524552 100644
--- a/doc/online-opencv.tex
+++ b/doc/online-opencv.tex
@@ -20,35 +20,55 @@
 \tableofcontents

 %%% Chapters %%%
-\input{cxcore_introduction}
-
-\chapter{cxcore. The Core Functionality}
-\input{cxcore_basic_structures}
-\input{cxcore_array_operations}
-\input{cxcore_dynamic_structures}
-\input{cxcore_drawing_functions}
-\input{cxcore_persistence}
-\input{cxcore_clustering_search}
-\input{cxcore_utilities_system_functions}
-
-\chapter{cv. 
Image Processing and Computer Vision} -\input{cv_image_filtering} -\input{cv_image_warping} -\input{cv_image_transform} -\input{cv_histograms} -\input{cv_feature_detection} -\input{cv_motion_tracking} -\input{cv_struct_shape_analysis} -\input{cv_planar_subdivisions} -\input{cv_object_detection} -\input{cv_calibration_3d} -\input{cv_object_recognition} - -\chapter{highgui. High-level GUI and Media IO} -\input{HighGui} +\input{core_introduction} + +\chapter{core. The Core Functionality} +\input{core_basic_structures} +\input{core_array_operations} +\input{core_dynamic_structures} +\input{core_drawing_functions} +\input{core_persistence} +\input{core_clustering_search} +\input{core_utilities_system_functions} + + +\chapter{imgproc. Image Processing} +\input{imgproc_histograms} +\input{imgproc_image_filtering} +\input{imgproc_image_warping} +\input{imgproc_image_transform} +\input{imgproc_struct_shape_analysis} +\input{imgproc_planar_subdivisions} +\input{imgproc_motion_tracking} +\input{imgproc_feature_detection} + +\chapter{features2d. Feature Detection and Descriptor Extraction} +\input{features2d_feature_detection} +%\input{features2d_object_recognition} +%\input{features2d_object_detection} + +\chapter{flann. Clustering and Search in Multi-Dimensional Spaces} +\input{flann} + +\chapter{objdetect. Object Detection} +\input{objdetect} + +\chapter{video. Video Analysis} +\input{video_motion_tracking} + +\chapter{highgui. High-level GUI and Media I/O} +\input{highgui} +\ifPy %Qt is for C and Cpp, so do nothing +\else +\input{highgui_qt} +\fi + +\chapter{calib3d. Camera Calibration, Pose Estimation and Stereo} +\input{calib3d} + \chapter{ml. Machine Learning} -\input{MachineLearning} +\input{ml} %%%%%%%%%%%%%%%% \end{document} % End of document. diff --git a/doc/opencvref_body.tex b/doc/opencvref_body.tex index e3cb7e7..778a14c 100644 --- a/doc/opencvref_body.tex +++ b/doc/opencvref_body.tex @@ -1,38 +1,51 @@ -\input{cxcore_introduction} - -\chapter{cxcore. The Core Functionality} -\input{cxcore_basic_structures} -\input{cxcore_array_operations} -\input{cxcore_dynamic_structures} -\input{cxcore_drawing_functions} -\input{cxcore_persistence} -\input{cxcore_clustering_search} -\input{cxcore_utilities_system_functions} - -\chapter{cv. Image Processing and Computer Vision} -\input{cv_image_filtering} -\input{cv_image_warping} -\input{cv_image_transform} -\input{cv_histograms} -\input{cv_feature_detection} -\input{cv_motion_tracking} -\input{cv_struct_shape_analysis} -\input{cv_planar_subdivisions} -\input{cv_object_detection} -\input{cv_calibration_3d} -\input{cv_object_recognition} - -\chapter{cvaux. Extra Computer Vision Functionality} -\input{cvaux_bgfg} -\input{cvaux_object_detection} -\input{cvaux_3d} +\input{core_introduction} + +\chapter{core. The Core Functionality} +\input{core_basic_structures} +\input{core_array_operations} +\input{core_dynamic_structures} +\input{core_drawing_functions} +\input{core_persistence} +\input{core_clustering_search} +\input{core_utilities_system_functions} + + +\chapter{imgproc. Image Processing} +\input{imgproc_histograms} +\input{imgproc_image_filtering} +\input{imgproc_image_warping} +\input{imgproc_image_transform} +\input{imgproc_struct_shape_analysis} +\input{imgproc_planar_subdivisions} +\input{imgproc_motion_tracking} +\input{imgproc_feature_detection} + +\chapter{features2d. Feature Detection and Descriptor Extraction} +\input{features2d_feature_detection} +\input{features2d_object_recognition} +\input{features2d_object_detection} + +\chapter{flann. 
Clustering and Search in Multi-Dimensional Spaces} +\input{flann} + +\chapter{objdetect. Object Detection} +\input{objdetect} + +\chapter{video. Video Analysis} +\input{video_motion_tracking} \chapter{highgui. High-level GUI and Media I/O} -\input{HighGui} +\input{highgui} \ifPy %Qt is for C and Cpp, so do nothing \else -\input{HighGui_Qt} +\input{highgui_qt} \fi +\chapter{calib3d. Camera Calibration, Pose Estimation and Stereo} +\input{calib3d} + + \chapter{ml. Machine Learning} -\input{MachineLearning} +\input{ml} + + diff --git a/doc/cv_motion_tracking.tex b/doc/video_motion_tracking.tex similarity index 85% rename from doc/cv_motion_tracking.tex rename to doc/video_motion_tracking.tex index 89b3896..bff0df8 100644 --- a/doc/cv_motion_tracking.tex +++ b/doc/video_motion_tracking.tex @@ -2,24 +2,6 @@ \ifCPy -\cvCPyFunc{Acc} -Adds a frame to an accumulator. - -\cvdefC{ -void cvAcc( \par const CvArr* image,\par CvArr* sum,\par const CvArr* mask=NULL ); -} -\cvdefPy{Acc(image,sum,mask=NULL)-> None} - -\begin{description} -\cvarg{image}{Input image, 1- or 3-channel, 8-bit or 32-bit floating point. (each channel of multi-channel image is processed independently)} -\cvarg{sum}{Accumulator with the same number of channels as input image, 32-bit or 64-bit floating-point} -\cvarg{mask}{Optional operation mask} -\end{description} - -The function adds the whole image \texttt{image} or its selected region to the accumulator \texttt{sum}: - -\[ \texttt{sum}(x,y) \leftarrow \texttt{sum}(x,y) + \texttt{image}(x,y) \quad \text{if} \quad \texttt{mask}(x,y) \ne 0 \] - \cvCPyFunc{CalcGlobalOrientation} Calculates the global motion orientation of some selected region. @@ -620,25 +602,6 @@ iterations are made until the search window center moves by less than the given value and/or until the function has done the maximum number of iterations. The function returns the number of iterations made. -\cvCPyFunc{MultiplyAcc} -Adds the product of two input images to the accumulator. - -\cvdefC{ -void cvMultiplyAcc( \par const CvArr* image1,\par const CvArr* image2,\par CvArr* acc,\par const CvArr* mask=NULL ); -} -\cvdefPy{MultiplyAcc(image1,image2,acc,mask=NULL)-> None} - -\begin{description} -\cvarg{image1}{First input image, 1- or 3-channel, 8-bit or 32-bit floating point (each channel of multi-channel image is processed independently)} -\cvarg{image2}{Second input image, the same format as the first one} -\cvarg{acc}{Accumulator with the same number of channels as input images, 32-bit or 64-bit floating-point} -\cvarg{mask}{Optional operation mask} -\end{description} - -The function adds the product of 2 images or their selected regions to the accumulator \texttt{acc}: - -\[ \texttt{acc}(x,y) \leftarrow \texttt{acc}(x,y) + \texttt{image1}(x,y) \cdot \texttt{image2}(x,y) \quad \text{if} \quad \texttt{mask}(x,y) \ne 0 \] - \ifC % { \cvCPyFunc{ReleaseConDensation} Deallocates the ConDensation filter structure. @@ -672,30 +635,6 @@ The function releases the structure \cvCPyCross{CvKalman} and all of the underly \fi % } -\cvCPyFunc{RunningAvg} -Updates the running average. 
- -\cvdefC{ -void cvRunningAvg( \par const CvArr* image,\par CvArr* acc,\par double alpha,\par const CvArr* mask=NULL ); -} -\cvdefPy{RunningAvg(image,acc,alpha,mask=NULL)-> None} - -\begin{description} -\cvarg{image}{Input image, 1- or 3-channel, 8-bit or 32-bit floating point (each channel of multi-channel image is processed independently)} -\cvarg{acc}{Accumulator with the same number of channels as input image, 32-bit or 64-bit floating-point} -\cvarg{alpha}{Weight of input image} -\cvarg{mask}{Optional operation mask} -\end{description} - -The function calculates the weighted sum of the input image -\texttt{image} and the accumulator \texttt{acc} so that \texttt{acc} -becomes a running average of frame sequence: - -\[ \texttt{acc}(x,y) \leftarrow (1-\alpha) \cdot \texttt{acc}(x,y) + \alpha \cdot \texttt{image}(x,y) \quad \text{if} \quad \texttt{mask}(x,y) \ne 0 \] - -where $\alpha$ regulates the update speed (how fast the accumulator forgets about previous frames). - - \cvCPyFunc{SegmentMotion} Segments a whole motion into separate moving parts. @@ -772,23 +711,6 @@ than \texttt{criteria.epsilon} or the function performed The function returns the updated list of points. \fi -\cvCPyFunc{SquareAcc} -Adds the square of the source image to the accumulator. - -\cvdefC{ -void cvSquareAcc( \par const CvArr* image,\par CvArr* sqsum,\par const CvArr* mask=NULL ); -}\cvdefPy{SquareAcc(image,sqsum,mask=NULL)-> None} - -\begin{description} -\cvarg{image}{Input image, 1- or 3-channel, 8-bit or 32-bit floating point (each channel of multi-channel image is processed independently)} -\cvarg{sqsum}{Accumulator with the same number of channels as input image, 32-bit or 64-bit floating-point} -\cvarg{mask}{Optional operation mask} -\end{description} - -The function adds the input image \texttt{image} or its selected region, raised to power 2, to the accumulator \texttt{sqsum}: - -\[ \texttt{sqsum}(x,y) \leftarrow \texttt{sqsum}(x,y) + \texttt{image}(x,y)^2 \quad \text{if} \quad \texttt{mask}(x,y) \ne 0 \] - \cvCPyFunc{UpdateMotionHistory} Updates the motion history image by a moving silhouette. @@ -817,87 +739,6 @@ That is, MHI pixels where motion occurs are set to the current timestamp, while \ifCpp -\cvCppFunc{accumulate} -Adds image to the accumulator. - -\cvdefCpp{void accumulate( const Mat\& src, Mat\& dst, const Mat\& mask=Mat() );} -\begin{description} -\cvarg{src}{The input image, 1- or 3-channel, 8-bit or 32-bit floating point} -\cvarg{dst}{The accumulator image with the same number of channels as input image, 32-bit or 64-bit floating-point} -\cvarg{mask}{Optional operation mask} -\end{description} - -The function adds \texttt{src}, or some of its elements, to \texttt{dst}: - -\[ \texttt{dst}(x,y) \leftarrow \texttt{dst}(x,y) + \texttt{src}(x,y) \quad \text{if} \quad \texttt{mask}(x,y) \ne 0 \] - -The function supports multi-channel images; each channel is processed independently. - -The functions \texttt{accumulate*} can be used, for example, to collect statistic of background of a scene, viewed by a still camera, for the further foreground-background segmentation. - -See also: \cvCppCross{accumulateSquare}, \cvCppCross{accumulateProduct}, \cvCppCross{accumulateWeighted} - -\cvCppFunc{accumulateSquare} -Adds the square of the source image to the accumulator. 
- -\cvdefCpp{void accumulateSquare( const Mat\& src, Mat\& dst, \par const Mat\& mask=Mat() );} -\begin{description} -\cvarg{src}{The input image, 1- or 3-channel, 8-bit or 32-bit floating point} -\cvarg{dst}{The accumulator image with the same number of channels as input image, 32-bit or 64-bit floating-point} -\cvarg{mask}{Optional operation mask} -\end{description} - -The function adds the input image \texttt{src} or its selected region, raised to power 2, to the accumulator \texttt{dst}: - -\[ \texttt{dst}(x,y) \leftarrow \texttt{dst}(x,y) + \texttt{src}(x,y)^2 \quad \text{if} \quad \texttt{mask}(x,y) \ne 0 \] - -The function supports multi-channel images; each channel is processed independently. - -See also: \cvCppCross{accumulateSquare}, \cvCppCross{accumulateProduct}, \cvCppCross{accumulateWeighted} - -\cvCppFunc{accumulateProduct} -Adds the per-element product of two input images to the accumulator. - -\cvdefCpp{void accumulateProduct( const Mat\& src1, const Mat\& src2,\par - Mat\& dst, const Mat\& mask=Mat() );} -\begin{description} -\cvarg{src1}{The first input image, 1- or 3-channel, 8-bit or 32-bit floating point} -\cvarg{src2}{The second input image of the same type and the same size as \texttt{src1}} -\cvarg{dst}{Accumulator with the same number of channels as input images, 32-bit or 64-bit floating-point} -\cvarg{mask}{Optional operation mask} -\end{description} - -The function adds the product of 2 images or their selected regions to the accumulator \texttt{dst}: - -\[ \texttt{dst}(x,y) \leftarrow \texttt{dst}(x,y) + \texttt{src1}(x,y) \cdot \texttt{src2}(x,y) \quad \text{if} \quad \texttt{mask}(x,y) \ne 0 \] - -The function supports multi-channel images; each channel is processed independently. - -See also: \cvCppCross{accumulate}, \cvCppCross{accumulateSquare}, \cvCppCross{accumulateWeighted} - -\cvCppFunc{accumulateWeighted} -Updates the running average. - -\cvdefCpp{void accumulateWeighted( const Mat\& src, Mat\& dst,\par - double alpha, const Mat\& mask=Mat() );} -\begin{description} -\cvarg{src}{The input image, 1- or 3-channel, 8-bit or 32-bit floating point} -\cvarg{dst}{The accumulator image with the same number of channels as input image, 32-bit or 64-bit floating-point} -\cvarg{alpha}{Weight of the input image} -\cvarg{mask}{Optional operation mask} -\end{description} - -The function calculates the weighted sum of the input image -\texttt{src} and the accumulator \texttt{dst} so that \texttt{dst} -becomes a running average of frame sequence: - -\[ \texttt{dst}(x,y) \leftarrow (1-\texttt{alpha}) \cdot \texttt{dst}(x,y) + \texttt{alpha} \cdot \texttt{src}(x,y) \quad \text{if} \quad \texttt{mask}(x,y) \ne 0 \] - -that is, \texttt{alpha} regulates the update speed (how fast the accumulator "forgets" about earlier images). -The function supports multi-channel images; each channel is processed independently. - -See also: \cvCppCross{accumulate}, \cvCppCross{accumulateSquare}, \cvCppCross{accumulateProduct} - \cvCppFunc{calcOpticalFlowPyrLK} Calculates the optical flow for a sparse feature set using the iterative Lucas-Kanade method with pyramids -- 2.7.4