---------------
Now that we know how to convert a BGR image to HSV, we can use this to extract a colored object. In HSV, it
-is more easier to represent a color than RGB color-space. In our application, we will try to extract
+is easier to represent a color than in BGR color-space. In our application, we will try to extract
a blue colored object. So here is the method:
- Take each frame of the video
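The remaining steps read most naturally as code. Below is a minimal sketch of the idea using the C++ API (the HSV bounds used for "blue" are illustrative only, the camera index 0 is an assumption, and the usual `using namespace cv;` of the tutorials is assumed):
@code{.cpp}
VideoCapture cap(0);                                  // open the default camera
Mat frame, hsv, mask;
for(;;)
{
    cap >> frame;                                     // take each frame of the video
    if( frame.empty() ) break;
    cvtColor( frame, hsv, COLOR_BGR2HSV );            // convert from BGR to HSV
    // threshold the HSV image for a range of blue (bounds are only an example)
    inRange( hsv, Scalar( 110, 50, 50 ), Scalar( 130, 255, 255 ), mask );
    Mat res = Mat::zeros( frame.size(), frame.type() );
    frame.copyTo( res, mask );                        // extract the blue object alone
    imshow( "res", res );
    if( ( waitKey( 30 ) & 0xff ) == 27 ) break;       // Esc to quit
}
@endcode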
mag, ang = cv2.cartToPolar(flow[...,0], flow[...,1])
hsv[...,0] = ang*180/np.pi/2
hsv[...,2] = cv2.normalize(mag,None,0,255,cv2.NORM_MINMAX)
- rgb = cv2.cvtColor(hsv,cv2.COLOR_HSV2BGR)
+ bgr = cv2.cvtColor(hsv,cv2.COLOR_HSV2BGR)
- cv2.imshow('frame2',rgb)
+ cv2.imshow('frame2',bgr)
k = cv2.waitKey(30) & 0xff
if k == 27:
break
elif k == ord('s'):
cv2.imwrite('opticalfb.png',frame2)
- cv2.imwrite('opticalhsv.png',rgb)
+ cv2.imwrite('opticalhsv.png',bgr)
prvs = next
cap.release()
- Represents a 4-element vector. The type Scalar is widely used in OpenCV for passing pixel
values.
-- In this tutorial, we will use it extensively to represent RGB color values (3 parameters). It is
+- In this tutorial, we will use it extensively to represent BGR color values (3 parameters). It is
not necessary to define the last argument if it is not going to be used.
- Let's see an example, if we are asked for a color argument and we give:
@code{.cpp}
Scalar( a, b, c )
@endcode
- We would be defining a RGB color such as: *Red = c*, *Green = b* and *Blue = a*
+ We would be defining a BGR color such as: *Blue = a*, *Green = b* and *Red = c*
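  For instance, the following small snippet (purely illustrative) fills an image with pure blue and then draws a red circle on it:
@code{.cpp}
Mat img( 100, 100, CV_8UC3, Scalar( 255, 0, 0 ) );                 // Blue = 255, Green = 0, Red = 0
circle( img, Point( 50, 50 ), 20, Scalar( 0, 0, 255 ), FILLED );   // pure red, filled
@endcode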
Code
----
*image.size()* and *image.type()*
-# Now, to perform the operation \f$g(i,j) = \alpha \cdot f(i,j) + \beta\f$ we will access each
- pixel in image. Since we are operating with RGB images, we will have three values per pixel (R,
- G and B), so we will also access them separately. Here is the piece of code:
+ pixel in the image. Since we are operating with BGR images, we will have three values per pixel (B,
+ G and R), so we will also access them separately. Here is the piece of code:
@code{.cpp}
for( int y = 0; y < image.rows; y++ ) {
for( int x = 0; x < image.cols; x++ ) {
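            // a sketch of the loop body: scale each of the three BGR channels and
            // clamp the result with saturate_cast (new_image, alpha and beta are
            // assumed to be declared earlier in the tutorial)
            for( int c = 0; c < image.channels(); c++ ) {
                new_image.at<Vec3b>(y,x)[c] =
                    saturate_cast<uchar>( alpha*image.at<Vec3b>(y,x)[c] + beta );
            }
        }
    }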
how_to_scan_images imageName.jpg intValueToReduce [G]
@endcode
The final argument is optional. If given, the image will be loaded in grayscale format; otherwise
-the RGB color way is used. The first thing is to calculate the lookup table.
+the BGR color space is used. The first thing is to calculate the lookup table.
@snippet how_to_scan_images.cpp dividewith
![](tutorial_how_matrix_stored_1.png)
For multichannel images the columns contain as many sub columns as the number of channels. For
-example in case of an RGB color system:
+example in case of a BGR color system:
![](tutorial_how_matrix_stored_2.png)
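In other words, for an 8-bit BGR image each element is a cv::Vec3b and its sub columns are the blue, green and red values. A small illustrative access (image, y and x are assumed to be defined):
@code{.cpp}
Vec3b intensity = image.at<Vec3b>( y, x );
uchar blue  = intensity[0];   // first sub column
uchar green = intensity[1];   // second sub column
uchar red   = intensity[2];   // third sub column
@endcode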
@snippet interoperability_with_OpenCV_1.cpp new
-Because, we want to mess around with the images luma component we first convert from the default RGB
+Because we want to mess around with the image's luma component, we first convert from the default BGR
to the YUV color space and then split the result up into separate planes. Here the program splits:
in the first example it processes each plane using one of the three major image scanning algorithms
in OpenCV (C [] operator, iterator, individual element access). In a second variant we add to the
There are, however, many other color systems each with their own advantages:
-- RGB is the most common as our eyes use something similar, our display systems also compose
- colors using these.
+- RGB is the most common as our eyes use something similar; however, keep in mind that the standard OpenCV display
+  system composes colors using the BGR color space, i.e. the red and blue channels are swapped (see the sketch after this list).
- The HSV and HLS decompose colors into their hue, saturation and value/luminance components,
which is a more natural way for us to describe colors. You might, for example, dismiss the last
component, making your algorithm less sensitive to the light conditions of the input image.
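As a rough illustration of both points (the file name is only a placeholder), switching between the BGR and RGB orderings and moving to HSV are each a single cvtColor call:
@code{.cpp}
Mat bgr = imread( "input.png" );          // OpenCV stores loaded images in BGR order
Mat rgb, hsv;
cvtColor( bgr, rgb, COLOR_BGR2RGB );      // swap the red and blue channels
cvtColor( bgr, hsv, COLOR_BGR2HSV );      // hue / saturation / value representation
@endcode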
- What type of video files you can create with OpenCV
- How to extract a given color channel from a video
-As a simple demonstration I'll just extract one of the RGB color channels of an input video file
+As a simple demonstration I'll just extract one of the BGR color channels of an input video file
into a new video. You can control the flow of the application from its console line arguments:
- The first argument points to the video file to work on
outputVideo.write(res); //or
outputVideo << res;
@endcode
-Extracting a color channel from an RGB image means to set to zero the RGB values of the other
+Extracting a color channel from a BGR image means setting to zero the values of the other
channels. You can either do this with image scanning operations or by using the split and merge
operations. You first split the channels up into separate images, replace the other channels with zero-filled
images of the same size and type, and finally merge them back:
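In code this could look roughly as follows (a sketch that keeps only the blue channel; src is the input frame and the usual using namespace cv/std of the tutorials is assumed):
@code{.cpp}
vector<Mat> channels;
split( src, channels );                               // one single-channel image per channel
channels[1] = Mat::zeros( src.size(), CV_8UC1 );      // zero out green
channels[2] = Mat::zeros( src.size(), CV_8UC1 );      // zero out red
Mat res;
merge( channels, res );                               // merge back into a 3-channel image
@endcode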
-# Most of the stuff shown is known by you (if you have any doubt, please refer to the tutorials in
previous sections). Let's check the general structure of the program:
- - Load an image (can be RGB or grayscale)
+ - Load an image (can be BGR or grayscale)
- Create two windows (one for dilation output, the other for erosion)
    - Create a set of two Trackbars for each operation (a short sketch follows this list):
- The first trackbar "Element" returns either **erosion_elem** or **dilation_elem**
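A rough sketch of one such call (erosion_elem, max_elem and the Erosion callback stand in for the tutorial's globals and are only assumed here):
@code{.cpp}
createTrackbar( "Element:\n 0: Rect \n 1: Cross \n 2: Ellipse", "Erosion Demo",
                &erosion_elem, max_elem,
                Erosion );   // the callback runs every time the slider moves
@endcode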
-----------
-# Declare variables such as the matrices to store the base image and the two other images to
- compare ( RGB and HSV )
+ compare ( BGR and HSV )
@code{.cpp}
Mat src_base, hsv_base;
Mat src_test1, hsv_test1;
Two of the most basic morphological operations are dilation and erosion. Dilation adds pixels to the boundaries of the object in an image, while erosion does exactly the opposite. The number of pixels added or removed, respectively, depends on the size and shape of the structuring element used to process the image. In general, the rules followed by these two operations are as follows:
-- __Dilation__: The value of the output pixel is the <b><em>maximum</em></b> value of all the pixels that fall within the structuring element's size and shape. For example in a binary image, if any of the pixels of the input image falling within the range of the kernel is set to the value 1, the corresponding pixel of the output image will be set to 1 as well. The latter applies to any type of image (e.g. grayscale, rgb, etc).
+- __Dilation__: The value of the output pixel is the <b><em>maximum</em></b> value of all the pixels that fall within the structuring element's size and shape. For example, in a binary image, if any of the pixels of the input image falling within the range of the kernel is set to the value 1, the corresponding pixel of the output image will be set to 1 as well. The latter applies to any type of image (e.g. grayscale, BGR, etc.).
![Dilation on a Binary Image](images/morph21.gif)
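In OpenCV both operations are single calls; a minimal sketch (the 3x3 rectangular structuring element is just an example, and src is the input image):
@code{.cpp}
Mat element = getStructuringElement( MORPH_RECT, Size( 3, 3 ) );
dilate( src, dilation_dst, element );   // output pixel = maximum under the element
erode ( src, erosion_dst,  element );   // output pixel = minimum under the element
@endcode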
-----------
-# Let's check the general structure of the program:
- - Load an image. If it is RGB we convert it to Grayscale. For this, remember that we can use
+    - Load an image. If it is BGR, we convert it to grayscale. For this, remember that we can use
the function @ref cv::cvtColor :
@code{.cpp}
src = imread( argv[1], 1 );
/// Convert the image to Gray
- cvtColor( src, src_gray, COLOR_RGB2GRAY );
+ cvtColor( src, src_gray, COLOR_BGR2GRAY );
@endcode
- Create a window to display the result
@code{.cpp}
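// a sketch of the window creation step (window_name is assumed to be defined earlier)
namedWindow( window_name, WINDOW_AUTOSIZE );
@endcode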
-----------
-# We begin by loading an image using @ref cv::imread , located in the path given by *imageName*.
- For this example, assume you are loading a RGB image.
+ For this example, assume you are loading a BGR image.
-# Now we are going to convert our image from BGR to Grayscale format. OpenCV has a really nice
function to do this kind of transformation:
@code{.cpp}
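// a sketch of the conversion call (image and gray_image are assumed to be declared above)
cvtColor( image, gray_image, COLOR_BGR2GRAY );
@endcode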
======================================================
Depth sensors compatible with OpenNI (Kinect, XtionPRO, ...) are supported through VideoCapture
-class. Depth map, RGB image and some other formats of output can be retrieved by using familiar
+class. Depth map, BGR image and some other output formats can be retrieved by using the familiar
interface of VideoCapture.
In order to use a depth sensor with OpenCV you should do the following preliminary steps:
- CAP_OPENNI_VALID_DEPTH_MASK - mask of valid pixels (not occluded, not shaded, etc.)
(CV_8UC1)
--# data given from RGB image generator:
+-# data given from BGR image generator:
- CAP_OPENNI_BGR_IMAGE - color image (CV_8UC3)
- CAP_OPENNI_GRAY_IMAGE - gray image (CV_8UC1)
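Putting the pieces together, a rough sketch of grabbing a depth map and a BGR image from such a sensor (error handling omitted):
@code{.cpp}
VideoCapture capture( CAP_OPENNI );                        // or CAP_OPENNI2 for newer devices
Mat depthMap, bgrImage;
for(;;)
{
    capture.grab();                                        // grab all streams at once
    capture.retrieve( depthMap, CAP_OPENNI_DEPTH_MAP );    // depth in mm (CV_16UC1)
    capture.retrieve( bgrImage, CAP_OPENNI_BGR_IMAGE );    // color image (CV_8UC3)
    imshow( "depth", depthMap );
    imshow( "image", bgrImage );
    if( waitKey( 30 ) >= 0 ) break;
}
@endcode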