From: Pranit Bauva
Date: Sun, 1 Oct 2017 13:16:58 +0000 (+0530)
Subject: doc: fix typo in py_tutorials
X-Git-Tag: accepted/tizen/6.0/unified/20201030.111113~553^2
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=d3e3d0996c0bbd0fbdd7fe99be1a3eebb9a2c345;p=platform%2Fupstream%2Fopencv.git

doc: fix typo in py_tutorials
---

diff --git a/doc/py_tutorials/py_core/py_basic_ops/py_basic_ops.markdown b/doc/py_tutorials/py_core/py_basic_ops/py_basic_ops.markdown
index 1305d8a..f2b1aa0 100644
--- a/doc/py_tutorials/py_core/py_basic_ops/py_basic_ops.markdown
+++ b/doc/py_tutorials/py_core/py_basic_ops/py_basic_ops.markdown
@@ -130,7 +130,7 @@ Or
 >>> b = img[:,:,0]
 @endcode
 Suppose, you want to make all the red pixels to zero, you need not split like this and put it equal
-to zero. You can simply use Numpy indexing, and that is more faster.
+to zero. You can simply use Numpy indexing, and that is faster.
 @code{.py}
 >>> img[:,:,2] = 0
 @endcode
diff --git a/doc/py_tutorials/py_feature2d/py_matcher/py_matcher.markdown b/doc/py_tutorials/py_feature2d/py_matcher/py_matcher.markdown
index a047154..1c6ef13 100644
--- a/doc/py_tutorials/py_feature2d/py_matcher/py_matcher.markdown
+++ b/doc/py_tutorials/py_feature2d/py_matcher/py_matcher.markdown
@@ -140,7 +140,7 @@ FLANN based Matcher
 
 FLANN stands for Fast Library for Approximate Nearest Neighbors. It contains a collection of
 algorithms optimized for fast nearest neighbor search in large datasets and for high dimensional
-features. It works more faster than BFMatcher for large datasets. We will see the second example
+features. It works faster than BFMatcher for large datasets. We will see the second example
 with FLANN based matcher.
 
 For FLANN based matcher, we need to pass two dictionaries which specifies the algorithm to be used,
diff --git a/doc/py_tutorials/py_feature2d/py_surf_intro/py_surf_intro.markdown b/doc/py_tutorials/py_feature2d/py_surf_intro/py_surf_intro.markdown
index dab2add..7c9456f 100644
--- a/doc/py_tutorials/py_feature2d/py_surf_intro/py_surf_intro.markdown
+++ b/doc/py_tutorials/py_feature2d/py_surf_intro/py_surf_intro.markdown
@@ -34,7 +34,7 @@ applications, rotation invariance is not required, so no need of finding this or
 speeds up the process. SURF provides such a functionality called Upright-SURF or U-SURF. It
 improves speed and is robust upto \f$\pm 15^{\circ}\f$. OpenCV supports both, depending upon the
 flag, **upright**. If it is 0, orientation is calculated. If it is 1, orientation is not calculated and it
-is more faster.
+is faster.
 
 ![image](images/surf_orientation.jpg)
 
@@ -130,7 +130,7 @@ False
 
 >>> plt.imshow(img2),plt.show()
 @endcode
-See the results below. All the orientations are shown in same direction. It is more faster than
+See the results below. All the orientations are shown in same direction. It is faster than
 previous. If you are working on cases where orientation is not a problem (like panorama stitching)
 etc, this is better.
diff --git a/doc/py_tutorials/py_imgproc/py_histograms/py_histogram_begins/py_histogram_begins.markdown b/doc/py_tutorials/py_imgproc/py_histograms/py_histogram_begins/py_histogram_begins.markdown
index 7d7eaac..27c0734 100644
--- a/doc/py_tutorials/py_imgproc/py_histograms/py_histogram_begins/py_histogram_begins.markdown
+++ b/doc/py_tutorials/py_imgproc/py_histograms/py_histogram_begins/py_histogram_begins.markdown
@@ -99,7 +99,7 @@ as 0-0.99, 1-1.99, 2-2.99 etc. So final range would be 255-255.99. To represent
 np.histogram(). So for one-dimensional histograms, you can better try that. Don't forget to set
 minlength = 256 in np.bincount. For example, hist = np.bincount(img.ravel(),minlength=256)
 
-@note OpenCV function is more faster than (around 40X) than np.histogram(). So stick with OpenCV
+@note OpenCV function is faster than (around 40X) than np.histogram(). So stick with OpenCV
 function.
 
 Now we should plot histograms, but how?
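For context on the final hunk: the tutorial text being corrected compares three ways of computing a 256-bin grayscale histogram: cv2.calcHist, np.histogram, and np.bincount. The following is a minimal sketch of all three, added here for illustration only; it is not part of the commit, and the input filename is a placeholder.
@code{.py}
import numpy as np
import cv2

# Placeholder input: any 8-bit grayscale image behaves the same way.
img = cv2.imread('example.jpg', cv2.IMREAD_GRAYSCALE)

# OpenCV histogram: arguments are images, channels, mask, histSize, ranges.
# This is the function the tutorial's note says is ~40x faster than np.histogram().
hist_cv = cv2.calcHist([img], [0], None, [256], [0, 256])

# Numpy's general-purpose histogram over the flattened pixel array.
hist_np, bin_edges = np.histogram(img.ravel(), 256, [0, 256])

# np.bincount, the faster Numpy option for one-dimensional data;
# minlength=256 pads the result so all 256 bins exist even when
# some intensity values never occur in the image.
hist_bc = np.bincount(img.ravel(), minlength=256)
@endcode
All three yield 256 counts (calcHist as a 256x1 float32 array, the Numpy calls as flat integer arrays), so any of them can feed the plotting step the tutorial turns to next.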