or in reverse. The ImageData interface can represent or set the underlying pixel data of an area of a
canvas element.
-@sa Please refer to canvas docs for more details.
+@note Please refer to canvas docs for more details.
First, create an ImageData object from the canvas:
@code{.js}
corner points and retval, which will be True if the pattern is obtained. These corners will be placed in
order (from left-to-right, top-to-bottom).
-@sa This function may not be able to find the required pattern in all the images. So, one good option
+@note This function may not be able to find the required pattern in all the images. So, one good option
is to write the code such that it starts the camera and checks each frame for the required pattern. Once
the pattern is obtained, find the corners and store them in a list. Also, provide some interval before
reading the next frame so that we can adjust our chess board in a different direction. Continue this
are not sure how many images out of the 14 given are good. Thus, we must read all the images and take only the good
ones.
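To make the collect-only-good-images idea concrete, here is a minimal sketch of that loop, assuming a 7x6 inner-corner board and a set of `*.jpg` calibration images (both are placeholders):
@code{.py}
import glob
import numpy as np
import cv2 as cv

# 3D points of the assumed 7x6 pattern, with z = 0
objp = np.zeros((6*7, 3), np.float32)
objp[:, :2] = np.mgrid[0:7, 0:6].T.reshape(-1, 2)

objpoints = []  # 3D points in real-world space
imgpoints = []  # 2D points in the image plane

for fname in glob.glob('*.jpg'):  # placeholder image set
    gray = cv.imread(fname, cv.IMREAD_GRAYSCALE)
    ret, corners = cv.findChessboardCorners(gray, (7, 6), None)
    if ret:  # keep only the images where the pattern was found
        objpoints.append(objp)
        imgpoints.append(corners)
@endcode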
-@sa Instead of chess board, we can alternatively use a circular grid. In this case, we must use the function
+@note Instead of a chess board, we can alternatively use a circular grid. In this case, we must use the function
**cv.findCirclesGrid()** to find the pattern. Fewer images are sufficient to perform camera calibration using a circular grid.
Once we find the corners, we can increase their accuracy using **cv.cornerSubPix()**. We can also

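A minimal sketch of both options, assuming a 4x11 asymmetric circle grid and the usual termination criteria (placeholders, not fixed requirements):
@code{.py}
import cv2 as cv

gray = cv.imread('pattern.jpg', cv.IMREAD_GRAYSCALE)  # placeholder image

# Circular grid: assumed 4x11 asymmetric layout
ret, centers = cv.findCirclesGrid(gray, (4, 11), flags=cv.CALIB_CB_ASYMMETRIC_GRID)

# Chessboard corners refined to sub-pixel accuracy
ret, corners = cv.findChessboardCorners(gray, (7, 6), None)
if ret:
    criteria = (cv.TERM_CRITERIA_EPS + cv.TERM_CRITERIA_MAX_ITER, 30, 0.001)
    corners = cv.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
@endcode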
-@sa Plenty of plotting options are available in Matplotlib. Please refer to Matplotlib docs for more
+@note Plenty of plotting options are available in Matplotlib. Please refer to Matplotlib docs for more
details. Some we will see along the way.
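For example, a histogram computed with **cv.calcHist()** can be drawn with a single plt.plot() call (a sketch; 'home.jpg' is a placeholder file name):
@code{.py}
import cv2 as cv
from matplotlib import pyplot as plt

img = cv.imread('home.jpg', cv.IMREAD_GRAYSCALE)  # placeholder image
hist = cv.calcHist([img], [0], None, [256], [0, 256])
plt.plot(hist)
plt.xlim([0, 256])
plt.show()
@endcode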
__warning__
See, even image rotation doesn't affect this comparison much.
-@sa [Hu-Moments](http://en.wikipedia.org/wiki/Image_moment#Rotation_invariant_moments) are seven
+@note [Hu-Moments](http://en.wikipedia.org/wiki/Image_moment#Rotation_invariant_moments) are seven
moments invariant to translation, rotation and scale. The seventh one is skew-invariant. These values
can be found using the **cv.HuMoments()** function.
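A minimal sketch of computing them, assuming a binarized input image ('star.jpg' is a placeholder):
@code{.py}
import numpy as np
import cv2 as cv

img = cv.imread('star.jpg', cv.IMREAD_GRAYSCALE)  # placeholder image
_, thresh = cv.threshold(img, 127, 255, cv.THRESH_BINARY)
hu = cv.HuMoments(cv.moments(thresh))  # seven invariant moments, shape (7, 1)
# Log-scale them for comparison, since the raw values span many orders of magnitude
hu_log = -np.sign(hu) * np.log10(np.abs(hu))
print(hu_log)
@endcode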
as 0-0.99, 1-1.99, 2-2.99 etc. So the final range would be 255-255.99. To represent that, they also add
256 at the end of bins. But we don't need that 256; up to 255 is sufficient.
-@sa Numpy has another function, **np.bincount()** which is much faster than (around 10X)
+@note Numpy has another function, **np.bincount()**, which is much faster (around 10X) than
np.histogram(). So for one-dimensional histograms, it is better to try that. Don't forget to set
minlength = 256 in np.bincount. For example, hist = np.bincount(img.ravel(), minlength=256)
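A quick sketch contrasting the two, assuming an 8-bit grayscale image ('home.jpg' is a placeholder):
@code{.py}
import numpy as np
import cv2 as cv

img = cv.imread('home.jpg', cv.IMREAD_GRAYSCALE)  # placeholder image
# np.histogram needs the extra 256 edge, i.e. 257 bin edges for 256 bins
hist, bins = np.histogram(img.ravel(), 256, [0, 256])
# np.bincount gives the same counts, roughly 10x faster for this 1-D case
hist2 = np.bincount(img.ravel(), minlength=256)
assert (hist == hist2).all()
@endcode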
where \f$\beta\f$ is known as the *weight vector* and \f$\beta_{0}\f$ as the *bias*.
-@sa A more in depth description of this and hyperplanes you can find in the section 4.5 (*Separating
+@note A more in-depth description of this and of hyperplanes can be found in section 4.5 (*Separating
Hyperplanes*) of the book: *Elements of Statistical Learning* by T. Hastie, R. Tibshirani and J. H.
Friedman (@cite HTF01).
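For reference, with this notation the hyperplane is the set of points \f$x\f$ satisfying \f$f(x) = \beta_{0} + \beta^{T} x = 0\f$, and the standard distance from a point \f$x\f$ to it is

\f[\mathrm{distance} = \frac{|\beta_{0} + \beta^{T} x|}{\|\beta\|}.\f]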
the article introducing it. Nevertheless, you can get a good picture of it by looking at the OpenCV
implementation below.
-@sa
+@note
SSIM is described in more depth in the article: Z. Wang, A. C. Bovik, H. R. Sheikh and E. P.
Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE
Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, Apr. 2004.
/** @brief Implementation of the Zach, Pock and Bischof Dual TV-L1 Optical Flow method.
*
- * @sa C. Zach, T. Pock and H. Bischof, "A Duality Based Approach for Realtime TV-L1 Optical Flow".
- * @sa Javier Sanchez, Enric Meinhardt-Llopis and Gabriele Facciolo. "TV-L1 Optical Flow Estimation".
+ * @note C. Zach, T. Pock and H. Bischof, "A Duality Based Approach for Realtime TV-L1 Optical Flow".
+ * @note Javier Sanchez, Enric Meinhardt-Llopis and Gabriele Facciolo. "TV-L1 Optical Flow Estimation".
*/
class CV_EXPORTS OpticalFlowDual_TVL1 : public DenseOpticalFlow
{
documentation of source stream to know the right URL.
@param apiPreference preferred Capture API backends to use. Can be used to enforce a specific reader
implementation if multiple are available: e.g. cv::CAP_FFMPEG or cv::CAP_IMAGES or cv::CAP_DSHOW.
- @sa The list of supported API backends cv::VideoCaptureAPIs
+
+ @sa cv::VideoCaptureAPIs
*/
CV_WRAP VideoCapture(const String& filename, int apiPreference);
Use a `domain_offset` to enforce a specific reader implementation if multiple are available, like cv::CAP_FFMPEG or cv::CAP_IMAGES or cv::CAP_DSHOW.
e.g. to open Camera 1 using the MS Media Foundation API use `index = 1 + cv::CAP_MSMF`
- @sa The list of supported API backends cv::VideoCaptureAPIs
+ @sa cv::VideoCaptureAPIs
*/
CV_WRAP VideoCapture(int index);
@param apiPreference preferred Capture API backends to use. Can be used to enforce a specific reader
implementation if multiple are available: e.g. cv::CAP_DSHOW or cv::CAP_MSMF or cv::CAP_V4L2.
- @sa The list of supported API backends cv::VideoCaptureAPIs
+ @sa cv::VideoCaptureAPIs
*/
CV_WRAP VideoCapture(int index, int apiPreference);
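For illustration, the three constructor forms side by side, sketched in Python, where the same overloads are exposed ('video.mp4' and camera index 1 are placeholders):
@code{.py}
import cv2 as cv

# Open a file, forcing the FFmpeg backend
cap_file = cv.VideoCapture('video.mp4', cv.CAP_FFMPEG)

# Open camera 1 via MS Media Foundation using the legacy domain offset
cap_offset = cv.VideoCapture(1 + cv.CAP_MSMF)

# Same camera, with the backend passed explicitly as apiPreference
cap_pref = cv.VideoCapture(1, cv.CAP_MSMF)
@endcode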