Code
----
+@add_toggle_cpp
+- **Downloadable code**: Click
+ [here](https://github.com/opencv/opencv/tree/3.4/samples/cpp/tutorial_code/ImgProc/BasicLinearTransforms.cpp)
+
- The following code performs the operation \f$g(i,j) = \alpha \cdot f(i,j) + \beta\f$ :
-@include BasicLinearTransforms.cpp
+ @include samples/cpp/tutorial_code/ImgProc/BasicLinearTransforms.cpp
+@end_toggle
+
+@add_toggle_java
+- **Downloadable code**: Click
+ [here](https://github.com/opencv/opencv/tree/3.4/samples/java/tutorial_code/ImgProc/changing_contrast_brightness_image/BasicLinearTransformsDemo.java)
+
+- The following code performs the operation \f$g(i,j) = \alpha \cdot f(i,j) + \beta\f$ :
+ @include samples/java/tutorial_code/ImgProc/changing_contrast_brightness_image/BasicLinearTransformsDemo.java
+@end_toggle
+
+@add_toggle_python
+- **Downloadable code**: Click
+ [here](https://github.com/opencv/opencv/tree/3.4/samples/python/tutorial_code/imgProc/changing_contrast_brightness_image/BasicLinearTransforms.py)
+
+- The following code performs the operation \f$g(i,j) = \alpha \cdot f(i,j) + \beta\f$ :
+ @include samples/python/tutorial_code/imgProc/changing_contrast_brightness_image/BasicLinearTransforms.py
+@end_toggle
Explanation
-----------
--# We begin by creating parameters to save \f$\alpha\f$ and \f$\beta\f$ to be entered by the user:
- @snippet BasicLinearTransforms.cpp basic-linear-transform-parameters
+- We load an image using @ref cv::imread and save it in a Mat object:
+
+@add_toggle_cpp
+@snippet samples/cpp/tutorial_code/ImgProc/BasicLinearTransforms.cpp basic-linear-transform-load
+@end_toggle
+
+@add_toggle_java
+@snippet samples/java/tutorial_code/ImgProc/changing_contrast_brightness_image/BasicLinearTransformsDemo.java basic-linear-transform-load
+@end_toggle
--# We load an image using @ref cv::imread and save it in a Mat object:
- @snippet BasicLinearTransforms.cpp basic-linear-transform-load
--# Now, since we will make some transformations to this image, we need a new Mat object to store
+@add_toggle_python
+@snippet samples/python/tutorial_code/imgProc/changing_contrast_brightness_image/BasicLinearTransforms.py basic-linear-transform-load
+@end_toggle
+
+- Now, since we will make some transformations to this image, we need a new Mat object to store
it. Also, we want this to have the following features:
- Initial pixel values equal to zero
- Same size and type as the original image
- @snippet BasicLinearTransforms.cpp basic-linear-transform-output
- We observe that @ref cv::Mat::zeros returns a Matlab-style zero initializer based on
- *image.size()* and *image.type()*
--# Now, to perform the operation \f$g(i,j) = \alpha \cdot f(i,j) + \beta\f$ we will access to each
+@add_toggle_cpp
+@snippet samples/cpp/tutorial_code/ImgProc/BasicLinearTransforms.cpp basic-linear-transform-output
+@end_toggle
+
+@add_toggle_java
+@snippet samples/java/tutorial_code/ImgProc/changing_contrast_brightness_image/BasicLinearTransformsDemo.java basic-linear-transform-output
+@end_toggle
+
+@add_toggle_python
+@snippet samples/python/tutorial_code/imgProc/changing_contrast_brightness_image/BasicLinearTransforms.py basic-linear-transform-output
+@end_toggle
+
+We observe that @ref cv::Mat::zeros returns a Matlab-style zero initializer based on
+*image.size()* and *image.type()*.
+
+- Now we ask the user to enter the values of \f$\alpha\f$ and \f$\beta\f$:
+
+@add_toggle_cpp
+@snippet samples/cpp/tutorial_code/ImgProc/BasicLinearTransforms.cpp basic-linear-transform-parameters
+@end_toggle
+
+@add_toggle_java
+@snippet samples/java/tutorial_code/ImgProc/changing_contrast_brightness_image/BasicLinearTransformsDemo.java basic-linear-transform-parameters
+@end_toggle
+
+@add_toggle_python
+@snippet samples/python/tutorial_code/imgProc/changing_contrast_brightness_image/BasicLinearTransforms.py basic-linear-transform-parameters
+@end_toggle
+
+- Now, to perform the operation \f$g(i,j) = \alpha \cdot f(i,j) + \beta\f$ we will access each
pixel in the image. Since we are operating with BGR images, we will have three values per pixel (B,
G and R), so we will also access them separately. Here is the piece of code:
- @snippet BasicLinearTransforms.cpp basic-linear-transform-operation
- Notice the following:
- - To access each pixel in the images we are using this syntax: *image.at\<Vec3b\>(y,x)[c]*
- where *y* is the row, *x* is the column and *c* is R, G or B (0, 1 or 2).
- - Since the operation \f$\alpha \cdot p(i,j) + \beta\f$ can give values out of range or not
- integers (if \f$\alpha\f$ is float), we use cv::saturate_cast to make sure the
- values are valid.
--# Finally, we create windows and show the images, the usual way.
- @snippet BasicLinearTransforms.cpp basic-linear-transform-display
+@add_toggle_cpp
+@snippet samples/cpp/tutorial_code/ImgProc/BasicLinearTransforms.cpp basic-linear-transform-operation
+@end_toggle
+
+@add_toggle_java
+@snippet samples/java/tutorial_code/ImgProc/changing_contrast_brightness_image/BasicLinearTransformsDemo.java basic-linear-transform-operation
+@end_toggle
+
+@add_toggle_python
+@snippet samples/python/tutorial_code/imgProc/changing_contrast_brightness_image/BasicLinearTransforms.py basic-linear-transform-operation
+@end_toggle
+
+Notice the following (**C++ code only**):
+- To access each pixel in the images we are using this syntax: *image.at\<Vec3b\>(y,x)[c]*
+ where *y* is the row, *x* is the column and *c* is R, G or B (0, 1 or 2).
+- Since the operation \f$\alpha \cdot p(i,j) + \beta\f$ can produce values that are out of
+  range or non-integer (if \f$\alpha\f$ is a float), we use cv::saturate_cast to make sure the
+ values are valid.
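The saturation logic above can be sketched in plain Python (a minimal, OpenCV-free illustration; the hypothetical `saturate` helper mirrors what cv::saturate_cast\<uchar\> does for a single channel value):

```python
def saturate(val):
    """Clamp a (possibly non-integer) value to the valid uchar range [0, 255]."""
    return max(0, min(255, int(round(val))))

def linear_transform(pixels, alpha, beta):
    """Apply g(i,j) = alpha * f(i,j) + beta to a flat list of channel values."""
    return [saturate(alpha * p + beta) for p in pixels]

# Values that overflow the 8-bit range are clipped to 255 rather than wrapping.
print(linear_transform([0, 100, 200], 1.5, 10))
```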
+
+- Finally, we create windows and show the images, the usual way.
+
+@add_toggle_cpp
+@snippet samples/cpp/tutorial_code/ImgProc/BasicLinearTransforms.cpp basic-linear-transform-display
+@end_toggle
+
+@add_toggle_java
+@snippet samples/java/tutorial_code/ImgProc/changing_contrast_brightness_image/BasicLinearTransformsDemo.java basic-linear-transform-display
+@end_toggle
+
+@add_toggle_python
+@snippet samples/python/tutorial_code/imgProc/changing_contrast_brightness_image/BasicLinearTransforms.py basic-linear-transform-display
+@end_toggle
@note
Instead of using the **for** loops to access each pixel, we could have simply used this command:
- @code{.cpp}
- image.convertTo(new_image, -1, alpha, beta);
- @endcode
- where @ref cv::Mat::convertTo would effectively perform *new_image = a*image + beta\*. However, we
- wanted to show you how to access each pixel. In any case, both methods give the same result but
- convertTo is more optimized and works a lot faster.
+
+@add_toggle_cpp
+@code{.cpp}
+image.convertTo(new_image, -1, alpha, beta);
+@endcode
+@end_toggle
+
+@add_toggle_java
+@code{.java}
+image.convertTo(newImage, -1, alpha, beta);
+@endcode
+@end_toggle
+
+@add_toggle_python
+@code{.py}
+new_image = cv.convertScaleAbs(image, alpha=alpha, beta=beta)
+@endcode
+@end_toggle
+
+where @ref cv::Mat::convertTo would effectively perform *new_image = alpha\*image + beta*. However, we
+wanted to show you how to access each pixel. In any case, both methods give the same result but
+convertTo is more optimized and works a lot faster.
Result
------
### Code
+@add_toggle_cpp
Code for the tutorial is [here](https://github.com/opencv/opencv/blob/3.4/samples/cpp/tutorial_code/ImgProc/changing_contrast_brightness_image/changing_contrast_brightness_image.cpp).
+@end_toggle
+
+@add_toggle_java
+Code for the tutorial is [here](https://github.com/opencv/opencv/blob/3.4/samples/java/tutorial_code/ImgProc/changing_contrast_brightness_image/ChangingContrastBrightnessImageDemo.java).
+@end_toggle
+
+@add_toggle_python
+Code for the tutorial is [here](https://github.com/opencv/opencv/blob/3.4/samples/python/tutorial_code/imgProc/changing_contrast_brightness_image/changing_contrast_brightness_image.py).
+@end_toggle
+
Code for the gamma correction:
-@snippet changing_contrast_brightness_image.cpp changing-contrast-brightness-gamma-correction
+@add_toggle_cpp
+@snippet samples/cpp/tutorial_code/ImgProc/changing_contrast_brightness_image/changing_contrast_brightness_image.cpp changing-contrast-brightness-gamma-correction
+@end_toggle
+
+@add_toggle_java
+@snippet samples/java/tutorial_code/ImgProc/changing_contrast_brightness_image/ChangingContrastBrightnessImageDemo.java changing-contrast-brightness-gamma-correction
+@end_toggle
+
+@add_toggle_python
+@snippet samples/python/tutorial_code/imgProc/changing_contrast_brightness_image/changing_contrast_brightness_image.py changing-contrast-brightness-gamma-correction
+@end_toggle
A look-up table is used to improve the performance of the computation, as only 256 values need to be calculated once.
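The look-up-table idea can be sketched in plain Python (an OpenCV-free illustration of the same precompute-then-index pattern; the real code uses cv::LUT):

```python
def build_gamma_table(gamma):
    """Precompute O = (I / 255)^gamma * 255 for every possible 8-bit input value."""
    return [min(255, int(round((i / 255.0) ** gamma * 255.0))) for i in range(256)]

def apply_gamma(pixels, table):
    """Correcting a pixel is now a single table lookup instead of a pow() call."""
    return [table[p] for p in pixels]

table = build_gamma_table(0.5)  # gamma < 1 brightens mid-tones
print(apply_gamma([0, 64, 255], table))
```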
### Images
Load an image from a file:
-@code{.cpp}
- Mat img = imread(filename)
-@endcode
+
+@add_toggle_cpp
+@snippet samples/cpp/tutorial_code/core/mat_operations/mat_operations.cpp Load an image from a file
+@end_toggle
+
+@add_toggle_java
+@snippet samples/java/tutorial_code/core/mat_operations/MatOperations.java Load an image from a file
+@end_toggle
+
+@add_toggle_python
+@snippet samples/python/tutorial_code/core/mat_operations/mat_operations.py Load an image from a file
+@end_toggle
If you read a jpg file, a 3 channel image is created by default. If you need a grayscale image, use:
-@code{.cpp}
- Mat img = imread(filename, IMREAD_GRAYSCALE);
-@endcode
+@add_toggle_cpp
+@snippet samples/cpp/tutorial_code/core/mat_operations/mat_operations.cpp Load an image from a file in grayscale
+@end_toggle
+
+@add_toggle_java
+@snippet samples/java/tutorial_code/core/mat_operations/MatOperations.java Load an image from a file in grayscale
+@end_toggle
+
+@add_toggle_python
+@snippet samples/python/tutorial_code/core/mat_operations/mat_operations.py Load an image from a file in grayscale
+@end_toggle
+
+@note Format of the file is determined by its content (first few bytes). To save an image to a file:
+
+@add_toggle_cpp
+@snippet samples/cpp/tutorial_code/core/mat_operations/mat_operations.cpp Save image
+@end_toggle
-@note format of the file is determined by its content (first few bytes) Save an image to a file:
+@add_toggle_java
+@snippet samples/java/tutorial_code/core/mat_operations/MatOperations.java Save image
+@end_toggle
-@code{.cpp}
- imwrite(filename, img);
-@endcode
+@add_toggle_python
+@snippet samples/python/tutorial_code/core/mat_operations/mat_operations.py Save image
+@end_toggle
-@note format of the file is determined by its extension.
+@note Format of the file is determined by its extension.
-@note use imdecode and imencode to read and write image from/to memory rather than a file.
+@note Use cv::imdecode and cv::imencode to read and write an image from/to memory rather than a file.
Basic operations with images
----------------------------
In order to get pixel intensity value, you have to know the type of an image and the number of
channels. Here is an example for a single channel grey scale image (type 8UC1) and pixel coordinates
x and y:
-@code{.cpp}
- Scalar intensity = img.at<uchar>(y, x);
-@endcode
+
+@add_toggle_cpp
+@snippet samples/cpp/tutorial_code/core/mat_operations/mat_operations.cpp Pixel access 1
+@end_toggle
+
+@add_toggle_java
+@snippet samples/java/tutorial_code/core/mat_operations/MatOperations.java Pixel access 1
+@end_toggle
+
+@add_toggle_python
+@snippet samples/python/tutorial_code/core/mat_operations/mat_operations.py Pixel access 1
+@end_toggle
+
+**C++ version only:**
intensity.val[0] contains a value from 0 to 255. Note the ordering of x and y. Since in OpenCV
images are represented by the same structure as matrices, we use the same convention for both
cases - the 0-based row index (or y-coordinate) goes first and the 0-based column index (or
-x-coordinate) follows it. Alternatively, you can use the following notation:
-@code{.cpp}
- Scalar intensity = img.at<uchar>(Point(x, y));
-@endcode
+x-coordinate) follows it. Alternatively, you can use the following notation (**C++ only**):
+
+@snippet samples/cpp/tutorial_code/core/mat_operations/mat_operations.cpp Pixel access 2
+
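The row-first convention can be illustrated with a plain Python nested list standing in for a Mat (a hypothetical 2x3 "image", not OpenCV code):

```python
# A 2-row, 3-column "image": the outer index is the row (y), the inner one the column (x).
img = [[10, 20, 30],
       [40, 50, 60]]

y, x = 1, 2       # row 1, column 2
print(img[y][x])  # same ordering as img.at<uchar>(y, x)
```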
Now let us consider a 3 channel image with BGR color ordering (the default format returned by
imread):
-@code{.cpp}
- Vec3b intensity = img.at<Vec3b>(y, x);
- uchar blue = intensity.val[0];
- uchar green = intensity.val[1];
- uchar red = intensity.val[2];
-@endcode
+
+**C++ code**
+@snippet samples/cpp/tutorial_code/core/mat_operations/mat_operations.cpp Pixel access 3
+
+**Python code**
+@snippet samples/python/tutorial_code/core/mat_operations/mat_operations.py Pixel access 3
+
You can use the same method for floating-point images (for example, you can get such an image by
-running Sobel on a 3 channel image):
-@code{.cpp}
- Vec3f intensity = img.at<Vec3f>(y, x);
- float blue = intensity.val[0];
- float green = intensity.val[1];
- float red = intensity.val[2];
-@endcode
+running Sobel on a 3 channel image) (**C++ only**):
+
+@snippet samples/cpp/tutorial_code/core/mat_operations/mat_operations.cpp Pixel access 4
+
The same method can be used to change pixel intensities:
-@code{.cpp}
- img.at<uchar>(y, x) = 128;
-@endcode
-There are functions in OpenCV, especially from calib3d module, such as projectPoints, that take an
+
+@add_toggle_cpp
+@snippet samples/cpp/tutorial_code/core/mat_operations/mat_operations.cpp Pixel access 5
+@end_toggle
+
+@add_toggle_java
+@snippet samples/java/tutorial_code/core/mat_operations/MatOperations.java Pixel access 5
+@end_toggle
+
+@add_toggle_python
+@snippet samples/python/tutorial_code/core/mat_operations/mat_operations.py Pixel access 5
+@end_toggle
+
+There are functions in OpenCV, especially from the calib3d module, such as cv::projectPoints, that take an
array of 2D or 3D points in the form of Mat. Matrix should contain exactly one column, each row
corresponds to a point, matrix type should be 32FC2 or 32FC3 correspondingly. Such a matrix can be
-easily constructed from `std::vector`:
-@code{.cpp}
- vector<Point2f> points;
- //... fill the array
- Mat pointsMat = Mat(points);
-@endcode
-One can access a point in this matrix using the same method Mat::at :
-@code{.cpp}
-Point2f point = pointsMat.at<Point2f>(i, 0);
-@endcode
+easily constructed from `std::vector` (**C++ only**):
+
+@snippet samples/cpp/tutorial_code/core/mat_operations/mat_operations.cpp Mat from points vector
+
+One can access a point in this matrix using the same method `Mat::at` (**C++ only**):
+
+@snippet samples/cpp/tutorial_code/core/mat_operations/mat_operations.cpp Point access
### Memory management and reference counting
and a pointer to data. So nothing prevents us from having several instances of Mat corresponding to
the same data. A Mat keeps a reference count that tells if data has to be deallocated when a
particular instance of Mat is destroyed. Here is an example of creating two matrices without copying
-data:
-@code{.cpp}
- std::vector<Point3f> points;
- // .. fill the array
- Mat pointsMat = Mat(points).reshape(1);
-@endcode
-As a result we get a 32FC1 matrix with 3 columns instead of 32FC3 matrix with 1 column. pointsMat
+data (**C++ only**):
+
+@snippet samples/cpp/tutorial_code/core/mat_operations/mat_operations.cpp Reference counting 1
+
+As a result, we get a 32FC1 matrix with 3 columns instead of a 32FC3 matrix with 1 column. `pointsMat`
uses data from points and will not deallocate the memory when destroyed. In this particular
-instance, however, developer has to make sure that lifetime of points is longer than of pointsMat.
+instance, however, the developer has to make sure that the lifetime of `points` is longer than that of `pointsMat`.
If we need to copy the data, this is done using, for example, cv::Mat::copyTo or cv::Mat::clone:
-@code{.cpp}
- Mat img = imread("image.jpg");
- Mat img1 = img.clone();
-@endcode
-To the contrary with C API where an output image had to be created by developer, an empty output Mat
+
+@add_toggle_cpp
+@snippet samples/cpp/tutorial_code/core/mat_operations/mat_operations.cpp Reference counting 2
+@end_toggle
+
+@add_toggle_java
+@snippet samples/java/tutorial_code/core/mat_operations/MatOperations.java Reference counting 2
+@end_toggle
+
+@add_toggle_python
+@snippet samples/python/tutorial_code/core/mat_operations/mat_operations.py Reference counting 2
+@end_toggle
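The sharing-versus-copying behaviour is similar to Python's object references (a rough analogy, not OpenCV code): plain assignment shares the underlying data, while an explicit clone duplicates it:

```python
img = [0, 0, 0]     # stands in for pixel data
view = img          # like Mat b = a; -- shares the same data, no copy is made
clone = list(img)   # like a.clone() -- an independent deep copy

img[0] = 255
print(view[0])   # the view sees the change
print(clone[0])  # the clone does not
```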
+
+Contrary to the C API, where an output image had to be created by the developer, an empty output Mat
can be supplied to each function. Each implementation calls Mat::create for a destination matrix.
This method allocates data for a matrix if it is empty. If it is not empty and has the correct size
-and type, the method does nothing. If, however, size or type are different from input arguments, the
+and type, the method does nothing. If, however, size or type are different from the input arguments, the
data is deallocated (and lost) and a new data is allocated. For example:
-@code{.cpp}
- Mat img = imread("image.jpg");
- Mat sobelx;
- Sobel(img, sobelx, CV_32F, 1, 0);
-@endcode
+
+@add_toggle_cpp
+@snippet samples/cpp/tutorial_code/core/mat_operations/mat_operations.cpp Reference counting 3
+@end_toggle
+
+@add_toggle_java
+@snippet samples/java/tutorial_code/core/mat_operations/MatOperations.java Reference counting 3
+@end_toggle
+
+@add_toggle_python
+@snippet samples/python/tutorial_code/core/mat_operations/mat_operations.py Reference counting 3
+@end_toggle
### Primitive operations
There are a number of convenient operators defined on a matrix. For example, here is how we can make
-a black image from an existing greyscale image \`img\`:
-@code{.cpp}
- img = Scalar(0);
-@endcode
+a black image from an existing greyscale image `img`:
+
+@add_toggle_cpp
+@snippet samples/cpp/tutorial_code/core/mat_operations/mat_operations.cpp Set image to black
+@end_toggle
+
+@add_toggle_java
+@snippet samples/java/tutorial_code/core/mat_operations/MatOperations.java Set image to black
+@end_toggle
+
+@add_toggle_python
+@snippet samples/python/tutorial_code/core/mat_operations/mat_operations.py Set image to black
+@end_toggle
+
Selecting a region of interest:
-@code{.cpp}
- Rect r(10, 10, 100, 100);
- Mat smallImg = img(r);
-@endcode
-A conversion from Mat to C API data structures:
-@code{.cpp}
- Mat img = imread("image.jpg");
- IplImage img1 = img;
- CvMat m = img;
-@endcode
+
+@add_toggle_cpp
+@snippet samples/cpp/tutorial_code/core/mat_operations/mat_operations.cpp Select ROI
+@end_toggle
+
+@add_toggle_java
+@snippet samples/java/tutorial_code/core/mat_operations/MatOperations.java Select ROI
+@end_toggle
+
+@add_toggle_python
+@snippet samples/python/tutorial_code/core/mat_operations/mat_operations.py Select ROI
+@end_toggle
+
+A conversion from Mat to C API data structures (**C++ only**):
+
+@snippet samples/cpp/tutorial_code/core/mat_operations/mat_operations.cpp C-API conversion
Note that there is no data copying here.
-Conversion from color to grey scale:
-@code{.cpp}
- Mat img = imread("image.jpg"); // loading a 8UC3 image
- Mat grey;
- cvtColor(img, grey, COLOR_BGR2GRAY);
-@endcode
+Conversion from color to greyscale:
+
+@add_toggle_cpp
+@snippet samples/cpp/tutorial_code/core/mat_operations/mat_operations.cpp BGR to Gray
+@end_toggle
+
+@add_toggle_java
+@snippet samples/java/tutorial_code/core/mat_operations/MatOperations.java BGR to Gray
+@end_toggle
+
+@add_toggle_python
+@snippet samples/python/tutorial_code/core/mat_operations/mat_operations.py BGR to Gray
+@end_toggle
+
Change image type from 8UC1 to 32FC1:
-@code{.cpp}
- src.convertTo(dst, CV_32F);
-@endcode
+
+@add_toggle_cpp
+@snippet samples/cpp/tutorial_code/core/mat_operations/mat_operations.cpp Convert to CV_32F
+@end_toggle
+
+@add_toggle_java
+@snippet samples/java/tutorial_code/core/mat_operations/MatOperations.java Convert to CV_32F
+@end_toggle
+
+@add_toggle_python
+@snippet samples/python/tutorial_code/core/mat_operations/mat_operations.py Convert to CV_32F
+@end_toggle
### Visualizing images
It is very useful to see intermediate results of your algorithm during development process. OpenCV
provides a convenient way of visualizing images. An 8U image can be shown using:
-@code{.cpp}
- Mat img = imread("image.jpg");
- namedWindow("image", WINDOW_AUTOSIZE);
- imshow("image", img);
- waitKey();
-@endcode
+@add_toggle_cpp
+@snippet samples/cpp/tutorial_code/core/mat_operations/mat_operations.cpp imshow 1
+@end_toggle
+
+@add_toggle_java
+@snippet samples/java/tutorial_code/core/mat_operations/MatOperations.java imshow 1
+@end_toggle
+
+@add_toggle_python
+@snippet samples/python/tutorial_code/core/mat_operations/mat_operations.py imshow 1
+@end_toggle
A call to waitKey() starts a message passing cycle that waits for a key stroke in the "image"
window. A 32F image needs to be converted to 8U type. For example:
-@code{.cpp}
- Mat img = imread("image.jpg");
- Mat grey;
- cvtColor(img, grey, COLOR_BGR2GRAY);
-
- Mat sobelx;
- Sobel(grey, sobelx, CV_32F, 1, 0);
-
- double minVal, maxVal;
- minMaxLoc(sobelx, &minVal, &maxVal); //find minimum and maximum intensities
- Mat draw;
- sobelx.convertTo(draw, CV_8U, 255.0/(maxVal - minVal), -minVal * 255.0/(maxVal - minVal));
-
- namedWindow("image", WINDOW_AUTOSIZE);
- imshow("image", draw);
- waitKey();
-@endcode
+
+@add_toggle_cpp
+@snippet samples/cpp/tutorial_code/core/mat_operations/mat_operations.cpp imshow 2
+@end_toggle
+
+@add_toggle_java
+@snippet samples/java/tutorial_code/core/mat_operations/MatOperations.java imshow 2
+@end_toggle
+
+@add_toggle_python
+@snippet samples/python/tutorial_code/core/mat_operations/mat_operations.py imshow 2
+@end_toggle
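The convertTo call in the snippet maps [minVal, maxVal] linearly onto [0, 255]; the scale and shift it uses can be sketched in plain Python (a hypothetical helper, not OpenCV code):

```python
def to_8u(values):
    """Linearly map min(values)..max(values) onto 0..255, as the convertTo call does."""
    min_val, max_val = min(values), max(values)
    scale = 255.0 / (max_val - min_val)   # assumes max_val != min_val
    shift = -min_val * scale
    return [int(round(v * scale + shift)) for v in values]

print(to_8u([-2.0, 0.0, 2.0]))  # the minimum becomes 0 and the maximum 255
```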
+
+@note Here cv::namedWindow is not necessary since it is immediately followed by cv::imshow.
+Nevertheless, it can be used to change the window properties or when using cv::createTrackbar.
- @subpage tutorial_mat_operations
+ *Languages:* C++, Java, Python
+
+ *Compatibility:* \> OpenCV 2.0
+
Reading/writing images from file, accessing pixels, primitive operations, visualizing images.
- @subpage tutorial_adding_images
- @subpage tutorial_basic_linear_transform
+ *Languages:* C++, Java, Python
+
*Compatibility:* \> OpenCV 2.0
*Author:* Ana Huamán
*/
int main( int argc, char** argv )
{
- //! [basic-linear-transform-parameters]
- double alpha = 1.0; /*< Simple contrast control */
- int beta = 0; /*< Simple brightness control */
- //! [basic-linear-transform-parameters]
-
/// Read image given by user
//! [basic-linear-transform-load]
- String imageName("../data/lena.jpg"); // by default
- if (argc > 1)
+ CommandLineParser parser( argc, argv, "{@input | ../data/lena.jpg | input image}" );
+ Mat image = imread( parser.get<String>( "@input" ) );
+ if( image.empty() )
{
- imageName = argv[1];
+ cout << "Could not open or find the image!\n" << endl;
+ cout << "Usage: " << argv[0] << " <Input image>" << endl;
+ return -1;
}
- Mat image = imread( imageName );
//! [basic-linear-transform-load]
+
//! [basic-linear-transform-output]
Mat new_image = Mat::zeros( image.size(), image.type() );
//! [basic-linear-transform-output]
+ //! [basic-linear-transform-parameters]
+ double alpha = 1.0; /*< Simple contrast control */
+ int beta = 0; /*< Simple brightness control */
+
/// Initialize values
cout << " Basic Linear Transforms " << endl;
cout << "-------------------------" << endl;
cout << "* Enter the alpha value [1.0-3.0]: "; cin >> alpha;
cout << "* Enter the beta value [0-100]: "; cin >> beta;
+ //! [basic-linear-transform-parameters]
/// Do the operation new_image(i,j) = alpha*image(i,j) + beta
/// Instead of these 'for' loops we could have used simply:
//! [basic-linear-transform-operation]
for( int y = 0; y < image.rows; y++ ) {
for( int x = 0; x < image.cols; x++ ) {
- for( int c = 0; c < 3; c++ ) {
+ for( int c = 0; c < image.channels(); c++ ) {
new_image.at<Vec3b>(y,x)[c] =
- saturate_cast<uchar>( alpha*( image.at<Vec3b>(y,x)[c] ) + beta );
+ saturate_cast<uchar>( alpha*image.at<Vec3b>(y,x)[c] + beta );
}
}
}
//! [basic-linear-transform-operation]
//! [basic-linear-transform-display]
- /// Create Windows
- namedWindow("Original Image", WINDOW_AUTOSIZE);
- namedWindow("New Image", WINDOW_AUTOSIZE);
-
/// Show stuff
imshow("Original Image", image);
imshow("New Image", new_image);
#include "opencv2/highgui.hpp"
// we're NOT "using namespace std;" here, to avoid collisions between the beta variable and std::beta in c++17
+using std::cout;
+using std::endl;
using namespace cv;
namespace
img.convertTo(res, -1, alpha_, beta_);
hconcat(img, res, img_corrected);
+ imshow("Brightness and contrast adjustments", img_corrected);
}
void gammaCorrection(const Mat &img, const double gamma_)
{
CV_Assert(gamma_ >= 0);
- //![changing-contrast-brightness-gamma-correction]
+ //! [changing-contrast-brightness-gamma-correction]
Mat lookUpTable(1, 256, CV_8U);
uchar* p = lookUpTable.ptr();
for( int i = 0; i < 256; ++i)
Mat res = img.clone();
LUT(img, lookUpTable, res);
- //![changing-contrast-brightness-gamma-correction]
+ //! [changing-contrast-brightness-gamma-correction]
hconcat(img, res, img_gamma_corrected);
+ imshow("Gamma correction", img_gamma_corrected);
}
void on_linear_transform_alpha_trackbar(int, void *)
int main( int argc, char** argv )
{
-
- String imageName("../data/lena.jpg"); // by default
- if (argc > 1)
+ CommandLineParser parser( argc, argv, "{@input | ../data/lena.jpg | input image}" );
+ img_original = imread( parser.get<String>( "@input" ) );
+ if( img_original.empty() )
{
- imageName = argv[1];
+ cout << "Could not open or find the image!\n" << endl;
+ cout << "Usage: " << argv[0] << " <Input image>" << endl;
+ return -1;
}
- img_original = imread( imageName );
img_corrected = Mat(img_original.rows, img_original.cols*2, img_original.type());
img_gamma_corrected = Mat(img_original.rows, img_original.cols*2, img_original.type());
hconcat(img_original, img_original, img_corrected);
hconcat(img_original, img_original, img_gamma_corrected);
- namedWindow("Brightness and contrast adjustments", WINDOW_AUTOSIZE);
- namedWindow("Gamma correction", WINDOW_AUTOSIZE);
+ namedWindow("Brightness and contrast adjustments");
+ namedWindow("Gamma correction");
createTrackbar("Alpha gain (contrast)", "Brightness and contrast adjustments", &alpha, 500, on_linear_transform_alpha_trackbar);
createTrackbar("Beta bias (brightness)", "Brightness and contrast adjustments", &beta, 200, on_linear_transform_beta_trackbar);
createTrackbar("Gamma correction", "Gamma correction", &gamma_cor, 200, on_gamma_correction_trackbar);
- while (true)
- {
- imshow("Brightness and contrast adjustments", img_corrected);
- imshow("Gamma correction", img_gamma_corrected);
+ on_linear_transform_alpha_trackbar(0, 0);
+ on_gamma_correction_trackbar(0, 0);
- int c = waitKey(30);
- if (c == 27)
- break;
- }
+ waitKey();
imwrite("linear_transform_correction.png", img_corrected);
imwrite("gamma_correction.png", img_gamma_corrected);
--- /dev/null
+/* Snippet code for Operations with images tutorial (not intended to be run, but should build successfully) */
+
+#include "opencv2/core.hpp"
+#include "opencv2/core/core_c.h"
+#include "opencv2/imgcodecs.hpp"
+#include "opencv2/imgproc.hpp"
+#include "opencv2/highgui.hpp"
+#include <iostream>
+
+using namespace cv;
+using namespace std;
+
+int main(int,char**)
+{
+ std::string filename = "";
+ // Input/Output
+ {
+ //! [Load an image from a file]
+ Mat img = imread(filename);
+ //! [Load an image from a file]
+ CV_UNUSED(img);
+ }
+ {
+ //! [Load an image from a file in grayscale]
+ Mat img = imread(filename, IMREAD_GRAYSCALE);
+ //! [Load an image from a file in grayscale]
+ CV_UNUSED(img);
+ }
+ {
+ Mat img(4,4,CV_8U);
+ //! [Save image]
+ imwrite(filename, img);
+ //! [Save image]
+ }
+ // Accessing pixel intensity values
+ {
+ Mat img(4,4,CV_8U);
+ int y = 0, x = 0;
+ {
+ //! [Pixel access 1]
+ Scalar intensity = img.at<uchar>(y, x);
+ //! [Pixel access 1]
+ CV_UNUSED(intensity);
+ }
+ {
+ //! [Pixel access 2]
+ Scalar intensity = img.at<uchar>(Point(x, y));
+ //! [Pixel access 2]
+ CV_UNUSED(intensity);
+ }
+ {
+ //! [Pixel access 3]
+ Vec3b intensity = img.at<Vec3b>(y, x);
+ uchar blue = intensity.val[0];
+ uchar green = intensity.val[1];
+ uchar red = intensity.val[2];
+ //! [Pixel access 3]
+ CV_UNUSED(blue);
+ CV_UNUSED(green);
+ CV_UNUSED(red);
+ }
+ {
+ //! [Pixel access 4]
+ Vec3f intensity = img.at<Vec3f>(y, x);
+ float blue = intensity.val[0];
+ float green = intensity.val[1];
+ float red = intensity.val[2];
+ //! [Pixel access 4]
+ CV_UNUSED(blue);
+ CV_UNUSED(green);
+ CV_UNUSED(red);
+ }
+ {
+ //! [Pixel access 5]
+ img.at<uchar>(y, x) = 128;
+ //! [Pixel access 5]
+ }
+ {
+ int i = 0;
+ //! [Mat from points vector]
+ vector<Point2f> points;
+ //... fill the array
+ Mat pointsMat = Mat(points);
+ //! [Mat from points vector]
+
+ //! [Point access]
+ Point2f point = pointsMat.at<Point2f>(i, 0);
+ //! [Point access]
+ CV_UNUSED(point);
+ }
+ }
+ // Memory management and reference counting
+ {
+ //! [Reference counting 1]
+ std::vector<Point3f> points;
+ // .. fill the array
+ Mat pointsMat = Mat(points).reshape(1);
+ //! [Reference counting 1]
+ CV_UNUSED(pointsMat);
+ }
+ {
+ //! [Reference counting 2]
+ Mat img = imread("image.jpg");
+ Mat img1 = img.clone();
+ //! [Reference counting 2]
+ CV_UNUSED(img1);
+ }
+ {
+ //! [Reference counting 3]
+ Mat img = imread("image.jpg");
+ Mat sobelx;
+ Sobel(img, sobelx, CV_32F, 1, 0);
+ //! [Reference counting 3]
+ }
+ // Primitive operations
+ {
+ Mat img;
+ {
+ //! [Set image to black]
+ img = Scalar(0);
+ //! [Set image to black]
+ }
+ {
+ //! [Select ROI]
+ Rect r(10, 10, 100, 100);
+ Mat smallImg = img(r);
+ //! [Select ROI]
+ CV_UNUSED(smallImg);
+ }
+ }
+ {
+ //! [C-API conversion]
+ Mat img = imread("image.jpg");
+ IplImage img1 = img;
+ CvMat m = img;
+ //! [C-API conversion]
+ CV_UNUSED(img1);
+ CV_UNUSED(m);
+ }
+ {
+ //! [BGR to Gray]
+        Mat img = imread("image.jpg"); // loading an 8UC3 image
+ Mat grey;
+ cvtColor(img, grey, COLOR_BGR2GRAY);
+ //! [BGR to Gray]
+ }
+ {
+ Mat dst, src;
+ //! [Convert to CV_32F]
+ src.convertTo(dst, CV_32F);
+ //! [Convert to CV_32F]
+ }
+ // Visualizing images
+ {
+ //! [imshow 1]
+ Mat img = imread("image.jpg");
+ namedWindow("image", WINDOW_AUTOSIZE);
+ imshow("image", img);
+ waitKey();
+ //! [imshow 1]
+ }
+ {
+ //! [imshow 2]
+ Mat img = imread("image.jpg");
+ Mat grey;
+ cvtColor(img, grey, COLOR_BGR2GRAY);
+ Mat sobelx;
+ Sobel(grey, sobelx, CV_32F, 1, 0);
+ double minVal, maxVal;
+ minMaxLoc(sobelx, &minVal, &maxVal); //find minimum and maximum intensities
+ Mat draw;
+ sobelx.convertTo(draw, CV_8U, 255.0/(maxVal - minVal), -minVal * 255.0/(maxVal - minVal));
+ namedWindow("image", WINDOW_AUTOSIZE);
+ imshow("image", draw);
+ waitKey();
+ //! [imshow 2]
+ }
+
+ return 0;
+}
--- /dev/null
+import java.util.Scanner;
+
+import org.opencv.core.Core;
+import org.opencv.core.Mat;
+import org.opencv.highgui.HighGui;
+import org.opencv.imgcodecs.Imgcodecs;
+
+class BasicLinearTransforms {
+ private byte saturate(double val) {
+ int iVal = (int) Math.round(val);
+ iVal = iVal > 255 ? 255 : (iVal < 0 ? 0 : iVal);
+ return (byte) iVal;
+ }
+
+ public void run(String[] args) {
+ /// Read image given by user
+ //! [basic-linear-transform-load]
+ String imagePath = args.length > 0 ? args[0] : "../data/lena.jpg";
+ Mat image = Imgcodecs.imread(imagePath);
+ if (image.empty()) {
+ System.out.println("Empty image: " + imagePath);
+ System.exit(0);
+ }
+ //! [basic-linear-transform-load]
+
+ //! [basic-linear-transform-output]
+ Mat newImage = Mat.zeros(image.size(), image.type());
+ //! [basic-linear-transform-output]
+
+ //! [basic-linear-transform-parameters]
+ double alpha = 1.0; /*< Simple contrast control */
+ int beta = 0; /*< Simple brightness control */
+
+ /// Initialize values
+ System.out.println(" Basic Linear Transforms ");
+ System.out.println("-------------------------");
+ try (Scanner scanner = new Scanner(System.in)) {
+ System.out.print("* Enter the alpha value [1.0-3.0]: ");
+ alpha = scanner.nextDouble();
+ System.out.print("* Enter the beta value [0-100]: ");
+ beta = scanner.nextInt();
+ }
+ //! [basic-linear-transform-parameters]
+
+ /// Do the operation newImage(i,j) = alpha*image(i,j) + beta
+ /// Instead of these 'for' loops we could have used simply:
+ /// image.convertTo(newImage, -1, alpha, beta);
+ /// but we wanted to show you how to access the pixels :)
+ //! [basic-linear-transform-operation]
+ byte[] imageData = new byte[(int) (image.total()*image.channels())];
+ image.get(0, 0, imageData);
+ byte[] newImageData = new byte[(int) (newImage.total()*newImage.channels())];
+ for (int y = 0; y < image.rows(); y++) {
+ for (int x = 0; x < image.cols(); x++) {
+ for (int c = 0; c < image.channels(); c++) {
+ double pixelValue = imageData[(y * image.cols() + x) * image.channels() + c];
+ /// Java byte range is [-128, 127]
+ pixelValue = pixelValue < 0 ? pixelValue + 256 : pixelValue;
+ newImageData[(y * image.cols() + x) * image.channels() + c]
+ = saturate(alpha * pixelValue + beta);
+ }
+ }
+ }
+ newImage.put(0, 0, newImageData);
+ //! [basic-linear-transform-operation]
+
+ //! [basic-linear-transform-display]
+ /// Show stuff
+ HighGui.imshow("Original Image", image);
+ HighGui.imshow("New Image", newImage);
+
+ /// Wait until user press some key
+ HighGui.waitKey();
+ //! [basic-linear-transform-display]
+ System.exit(0);
+ }
+}
+
+public class BasicLinearTransformsDemo {
+ public static void main(String[] args) {
+ // Load the native OpenCV library
+ System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
+
+ new BasicLinearTransforms().run(args);
+ }
+}
--- /dev/null
+import java.awt.BorderLayout;
+import java.awt.Container;
+import java.awt.Image;
+import java.awt.event.ActionEvent;
+import java.awt.event.ActionListener;
+
+import javax.swing.BoxLayout;
+import javax.swing.ImageIcon;
+import javax.swing.JCheckBox;
+import javax.swing.JFrame;
+import javax.swing.JLabel;
+import javax.swing.JPanel;
+import javax.swing.JSlider;
+import javax.swing.event.ChangeEvent;
+import javax.swing.event.ChangeListener;
+
+import org.opencv.core.Core;
+import org.opencv.core.CvType;
+import org.opencv.core.Mat;
+import org.opencv.highgui.HighGui;
+import org.opencv.imgcodecs.Imgcodecs;
+
+class ChangingContrastBrightnessImage {
+ private static final int MAX_VALUE_ALPHA = 500;
+ private static final int MAX_VALUE_BETA_GAMMA = 200;
+ private static final String WINDOW_NAME = "Changing the contrast and brightness of an image demo";
+ private static final String ALPHA_NAME = "Alpha gain (contrast)";
+ private static final String BETA_NAME = "Beta bias (brightness)";
+ private static final String GAMMA_NAME = "Gamma correction";
+ private JFrame frame;
+ private Mat matImgSrc = new Mat();
+ private JLabel imgSrcLabel;
+ private JLabel imgModifLabel;
+ private JPanel controlPanel;
+ private JPanel alphaBetaPanel;
+ private JPanel gammaPanel;
+ private double alphaValue = 1.0;
+ private double betaValue = 0.0;
+ private double gammaValue = 1.0;
+ private JCheckBox methodCheckBox;
+ private JSlider sliderAlpha;
+ private JSlider sliderBeta;
+ private JSlider sliderGamma;
+
+ public ChangingContrastBrightnessImage(String[] args) {
+ String imagePath = args.length > 0 ? args[0] : "../data/lena.jpg";
+ matImgSrc = Imgcodecs.imread(imagePath);
+ if (matImgSrc.empty()) {
+ System.out.println("Empty image: " + imagePath);
+ System.exit(0);
+ }
+
+ // Create and set up the window.
+ frame = new JFrame(WINDOW_NAME);
+ frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
+ // Set up the content pane.
+ Image img = HighGui.toBufferedImage(matImgSrc);
+ addComponentsToPane(frame.getContentPane(), img);
+ // Use the content pane's default BorderLayout. No need for
+ // setLayout(new BorderLayout());
+ // Display the window.
+ frame.pack();
+ frame.setVisible(true);
+ }
+
+ private void addComponentsToPane(Container pane, Image img) {
+ if (!(pane.getLayout() instanceof BorderLayout)) {
+ pane.add(new JLabel("Container doesn't use BorderLayout!"));
+ return;
+ }
+
+ controlPanel = new JPanel();
+ controlPanel.setLayout(new BoxLayout(controlPanel, BoxLayout.PAGE_AXIS));
+
+ methodCheckBox = new JCheckBox("Do gamma correction");
+ methodCheckBox.addActionListener(new ActionListener() {
+ @Override
+ public void actionPerformed(ActionEvent e) {
+ JCheckBox cb = (JCheckBox) e.getSource();
+ if (cb.isSelected()) {
+ controlPanel.remove(alphaBetaPanel);
+ controlPanel.add(gammaPanel);
+ performGammaCorrection();
+ frame.revalidate();
+ frame.repaint();
+ frame.pack();
+ } else {
+ controlPanel.remove(gammaPanel);
+ controlPanel.add(alphaBetaPanel);
+ performLinearTransformation();
+ frame.revalidate();
+ frame.repaint();
+ frame.pack();
+ }
+ }
+ });
+ controlPanel.add(methodCheckBox);
+
+ alphaBetaPanel = new JPanel();
+ alphaBetaPanel.setLayout(new BoxLayout(alphaBetaPanel, BoxLayout.PAGE_AXIS));
+ alphaBetaPanel.add(new JLabel(ALPHA_NAME));
+ sliderAlpha = new JSlider(0, MAX_VALUE_ALPHA, 100);
+ sliderAlpha.setMajorTickSpacing(50);
+ sliderAlpha.setMinorTickSpacing(10);
+ sliderAlpha.setPaintTicks(true);
+ sliderAlpha.setPaintLabels(true);
+ sliderAlpha.addChangeListener(new ChangeListener() {
+ @Override
+ public void stateChanged(ChangeEvent e) {
+ alphaValue = sliderAlpha.getValue() / 100.0;
+ performLinearTransformation();
+ }
+ });
+ alphaBetaPanel.add(sliderAlpha);
+
+ alphaBetaPanel.add(new JLabel(BETA_NAME));
+ sliderBeta = new JSlider(0, MAX_VALUE_BETA_GAMMA, 100);
+ sliderBeta.setMajorTickSpacing(20);
+ sliderBeta.setMinorTickSpacing(5);
+ sliderBeta.setPaintTicks(true);
+ sliderBeta.setPaintLabels(true);
+ sliderBeta.addChangeListener(new ChangeListener() {
+ @Override
+ public void stateChanged(ChangeEvent e) {
+ betaValue = sliderBeta.getValue() - 100;
+ performLinearTransformation();
+ }
+ });
+ alphaBetaPanel.add(sliderBeta);
+ controlPanel.add(alphaBetaPanel);
+
+ gammaPanel = new JPanel();
+ gammaPanel.setLayout(new BoxLayout(gammaPanel, BoxLayout.PAGE_AXIS));
+ gammaPanel.add(new JLabel(GAMMA_NAME));
+ sliderGamma = new JSlider(0, MAX_VALUE_BETA_GAMMA, 100);
+ sliderGamma.setMajorTickSpacing(20);
+ sliderGamma.setMinorTickSpacing(5);
+ sliderGamma.setPaintTicks(true);
+ sliderGamma.setPaintLabels(true);
+ sliderGamma.addChangeListener(new ChangeListener() {
+ @Override
+ public void stateChanged(ChangeEvent e) {
+ gammaValue = sliderGamma.getValue() / 100.0;
+ performGammaCorrection();
+ }
+ });
+ gammaPanel.add(sliderGamma);
+
+ pane.add(controlPanel, BorderLayout.PAGE_START);
+ JPanel framePanel = new JPanel();
+ imgSrcLabel = new JLabel(new ImageIcon(img));
+ framePanel.add(imgSrcLabel);
+ imgModifLabel = new JLabel(new ImageIcon(img));
+ framePanel.add(imgModifLabel);
+ pane.add(framePanel, BorderLayout.CENTER);
+ }
+
+ private void performLinearTransformation() {
+ Mat img = new Mat();
+ matImgSrc.convertTo(img, -1, alphaValue, betaValue);
+ imgModifLabel.setIcon(new ImageIcon(HighGui.toBufferedImage(img)));
+ frame.repaint();
+ }
+
+ private byte saturate(double val) {
+ int iVal = (int) Math.round(val);
+ iVal = iVal > 255 ? 255 : (iVal < 0 ? 0 : iVal);
+ return (byte) iVal;
+ }
+
+ private void performGammaCorrection() {
+ //! [changing-contrast-brightness-gamma-correction]
+ Mat lookUpTable = new Mat(1, 256, CvType.CV_8U);
+ byte[] lookUpTableData = new byte[(int) (lookUpTable.total()*lookUpTable.channels())];
+ for (int i = 0; i < lookUpTable.cols(); i++) {
+ lookUpTableData[i] = saturate(Math.pow(i / 255.0, gammaValue) * 255.0);
+ }
+ lookUpTable.put(0, 0, lookUpTableData);
+ Mat img = new Mat();
+ Core.LUT(matImgSrc, lookUpTable, img);
+ //! [changing-contrast-brightness-gamma-correction]
+
+ imgModifLabel.setIcon(new ImageIcon(HighGui.toBufferedImage(img)));
+ frame.repaint();
+ }
+}
+
+public class ChangingContrastBrightnessImageDemo {
+ public static void main(String[] args) {
+ // Load the native OpenCV library
+ System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
+
+ // Schedule a job for the event dispatch thread:
+ // creating and showing this application's GUI.
+ javax.swing.SwingUtilities.invokeLater(new Runnable() {
+ @Override
+ public void run() {
+ new ChangingContrastBrightnessImage(args);
+ }
+ });
+ }
+}
--- /dev/null
+import java.util.Arrays;
+
+import org.opencv.core.Core;
+import org.opencv.core.Core.MinMaxLocResult;
+import org.opencv.core.CvType;
+import org.opencv.core.Mat;
+import org.opencv.core.Rect;
+import org.opencv.highgui.HighGui;
+import org.opencv.imgcodecs.Imgcodecs;
+import org.opencv.imgproc.Imgproc;
+
+public class MatOperations {
+ @SuppressWarnings("unused")
+ public static void main(String[] args) {
+ /* Snippet code for Operations with images tutorial (not intended to be run) */
+
+ // Load the native OpenCV library
+ System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
+
+ String filename = "";
+ // Input/Output
+ {
+ //! [Load an image from a file]
+ Mat img = Imgcodecs.imread(filename);
+ //! [Load an image from a file]
+ }
+ {
+ //! [Load an image from a file in grayscale]
+ Mat img = Imgcodecs.imread(filename, Imgcodecs.IMREAD_GRAYSCALE);
+ //! [Load an image from a file in grayscale]
+ }
+ {
+ Mat img = new Mat(4, 4, CvType.CV_8U);
+ //! [Save image]
+ Imgcodecs.imwrite(filename, img);
+ //! [Save image]
+ }
+ // Accessing pixel intensity values
+ {
+ Mat img = new Mat(4, 4, CvType.CV_8U);
+ int y = 0, x = 0;
+ {
+ //! [Pixel access 1]
+ byte[] imgData = new byte[(int) (img.total() * img.channels())];
+ img.get(0, 0, imgData);
+ byte intensity = imgData[y * img.cols() + x];
+ //! [Pixel access 1]
+ }
+ {
+ //! [Pixel access 5]
+ byte[] imgData = new byte[(int) (img.total() * img.channels())];
+ imgData[y * img.cols() + x] = (byte) 128;
+ img.put(0, 0, imgData);
+ //! [Pixel access 5]
+ }
+
+ }
+ // Memory management and reference counting
+ {
+ //! [Reference counting 2]
+ Mat img = Imgcodecs.imread("image.jpg");
+ Mat img1 = img.clone();
+ //! [Reference counting 2]
+ }
+ {
+ //! [Reference counting 3]
+ Mat img = Imgcodecs.imread("image.jpg");
+ Mat sobelx = new Mat();
+ Imgproc.Sobel(img, sobelx, CvType.CV_32F, 1, 0);
+ //! [Reference counting 3]
+ }
+ // Primitive operations
+ {
+ Mat img = new Mat(400, 400, CvType.CV_8UC3);
+ {
+ //! [Set image to black]
+ byte[] imgData = new byte[(int) (img.total() * img.channels())];
+ Arrays.fill(imgData, (byte) 0);
+ img.put(0, 0, imgData);
+ //! [Set image to black]
+ }
+ {
+ //! [Select ROI]
+ Rect r = new Rect(10, 10, 100, 100);
+ Mat smallImg = img.submat(r);
+ //! [Select ROI]
+ }
+ }
+ {
+ //! [BGR to Gray]
+ Mat img = Imgcodecs.imread("image.jpg"); // loading a 8UC3 image
+ Mat grey = new Mat();
+ Imgproc.cvtColor(img, grey, Imgproc.COLOR_BGR2GRAY);
+ //! [BGR to Gray]
+ }
+ {
+ Mat dst = new Mat(), src = new Mat();
+ //! [Convert to CV_32F]
+ src.convertTo(dst, CvType.CV_32F);
+ //! [Convert to CV_32F]
+ }
+ // Visualizing images
+ {
+ //! [imshow 1]
+ Mat img = Imgcodecs.imread("image.jpg");
+ HighGui.namedWindow("image", HighGui.WINDOW_AUTOSIZE);
+ HighGui.imshow("image", img);
+ HighGui.waitKey();
+ //! [imshow 1]
+ }
+ {
+ //! [imshow 2]
+ Mat img = Imgcodecs.imread("image.jpg");
+ Mat grey = new Mat();
+ Imgproc.cvtColor(img, grey, Imgproc.COLOR_BGR2GRAY);
+ Mat sobelx = new Mat();
+ Imgproc.Sobel(grey, sobelx, CvType.CV_32F, 1, 0);
+ MinMaxLocResult res = Core.minMaxLoc(sobelx); // find minimum and maximum intensities
+ Mat draw = new Mat();
+ double maxVal = res.maxVal, minVal = res.minVal;
+ sobelx.convertTo(draw, CvType.CV_8U, 255.0 / (maxVal - minVal), -minVal * 255.0 / (maxVal - minVal));
+ HighGui.namedWindow("image", HighGui.WINDOW_AUTOSIZE);
+ HighGui.imshow("image", draw);
+ HighGui.waitKey();
+ //! [imshow 2]
+ }
+ System.exit(0);
+ }
+
+}
--- /dev/null
+from __future__ import division
+import cv2 as cv
+import numpy as np
+
+# Snippet code for Operations with images tutorial (not intended to be run)
+
+def load():
+ # Input/Output
+ filename = 'img.jpg'
+ ## [Load an image from a file]
+ img = cv.imread(filename)
+ ## [Load an image from a file]
+
+ ## [Load an image from a file in grayscale]
+ img = cv.imread(filename, cv.IMREAD_GRAYSCALE)
+ ## [Load an image from a file in grayscale]
+
+ ## [Save image]
+ cv.imwrite(filename, img)
+ ## [Save image]
+
+def access_pixel():
+ # Accessing pixel intensity values
+ img = np.empty((4,4,3), np.uint8)
+ y = 0
+ x = 0
+ ## [Pixel access 1]
+ intensity = img[y,x]
+ ## [Pixel access 1]
+
+ ## [Pixel access 3]
+ blue = img[y,x,0]
+ green = img[y,x,1]
+ red = img[y,x,2]
+ ## [Pixel access 3]
+
+ ## [Pixel access 5]
+ img[y,x] = 128
+ ## [Pixel access 5]
+
+def reference_counting():
+ # Memory management and reference counting
+ ## [Reference counting 2]
+ img = cv.imread('image.jpg')
+ img1 = np.copy(img)
+ ## [Reference counting 2]
+
+ ## [Reference counting 3]
+ img = cv.imread('image.jpg')
+ sobelx = cv.Sobel(img, cv.CV_32F, 1, 0)
+ ## [Reference counting 3]
+
+def primitive_operations():
+ img = np.empty((400,400,3), np.uint8)
+ ## [Set image to black]
+ img[:] = 0
+ ## [Set image to black]
+
+ ## [Select ROI]
+ smallImg = img[10:110,10:110]
+ ## [Select ROI]
+
+ ## [BGR to Gray]
+ img = cv.imread('image.jpg')
+ grey = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
+ ## [BGR to Gray]
+
+ src = np.ones((4,4), np.uint8)
+ ## [Convert to CV_32F]
+ dst = src.astype(np.float32)
+ ## [Convert to CV_32F]
+
+def visualize_images():
+ ## [imshow 1]
+ img = cv.imread('image.jpg')
+ cv.namedWindow('image', cv.WINDOW_AUTOSIZE)
+ cv.imshow('image', img)
+ cv.waitKey()
+ ## [imshow 1]
+
+ ## [imshow 2]
+ img = cv.imread('image.jpg')
+ grey = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
+ sobelx = cv.Sobel(grey, cv.CV_32F, 1, 0)
+ # find minimum and maximum intensities
+ minVal = np.amin(sobelx)
+ maxVal = np.amax(sobelx)
+ draw = cv.convertScaleAbs(sobelx, alpha=255.0/(maxVal - minVal), beta=-minVal * 255.0/(maxVal - minVal))
+ cv.namedWindow('image', cv.WINDOW_AUTOSIZE)
+ cv.imshow('image', draw)
+ cv.waitKey()
+ ## [imshow 2]
--- /dev/null
+from __future__ import print_function
+from builtins import input
+import cv2 as cv
+import numpy as np
+import argparse
+
+# Read image given by user
+## [basic-linear-transform-load]
+parser = argparse.ArgumentParser(description='Code for Changing the contrast and brightness of an image! tutorial.')
+parser.add_argument('--input', help='Path to input image.', default='../data/lena.jpg')
+args = parser.parse_args()
+
+image = cv.imread(args.input)
+if image is None:
+ print('Could not open or find the image:', args.input)
+ exit(0)
+## [basic-linear-transform-load]
+
+## [basic-linear-transform-output]
+new_image = np.zeros(image.shape, image.dtype)
+## [basic-linear-transform-output]
+
+## [basic-linear-transform-parameters]
+alpha = 1.0 # Simple contrast control
+beta = 0 # Simple brightness control
+
+# Initialize values
+print(' Basic Linear Transforms ')
+print('-------------------------')
+try:
+ alpha = float(input('* Enter the alpha value [1.0-3.0]: '))
+ beta = int(input('* Enter the beta value [0-100]: '))
+except ValueError:
+ print('Error, not a number')
+## [basic-linear-transform-parameters]
+
+# Do the operation new_image(i,j) = alpha*image(i,j) + beta
+# Instead of these 'for' loops we could have used simply:
+# new_image = cv.convertScaleAbs(image, alpha=alpha, beta=beta)
+# but we wanted to show you how to access the pixels :)
+## [basic-linear-transform-operation]
+for y in range(image.shape[0]):
+ for x in range(image.shape[1]):
+ for c in range(image.shape[2]):
+ new_image[y,x,c] = np.clip(alpha*image[y,x,c] + beta, 0, 255)
+## [basic-linear-transform-operation]
+
+## [basic-linear-transform-display]
+# Show stuff
+cv.imshow('Original Image', image)
+cv.imshow('New Image', new_image)
+
+# Wait until user press some key
+cv.waitKey()
+## [basic-linear-transform-display]
--- /dev/null
+from __future__ import print_function
+from __future__ import division
+import cv2 as cv
+import numpy as np
+import argparse
+
+alpha = 1.0
+alpha_max = 500
+beta = 0
+beta_max = 200
+gamma = 1.0
+gamma_max = 200
+
+def basicLinearTransform():
+ res = cv.convertScaleAbs(img_original, alpha=alpha, beta=beta)
+ img_corrected = cv.hconcat([img_original, res])
+ cv.imshow("Brightness and contrast adjustments", img_corrected)
+
+def gammaCorrection():
+ ## [changing-contrast-brightness-gamma-correction]
+ lookUpTable = np.empty((1,256), np.uint8)
+ for i in range(256):
+ lookUpTable[0,i] = np.clip(pow(i / 255.0, gamma) * 255.0, 0, 255)
+
+ res = cv.LUT(img_original, lookUpTable)
+ ## [changing-contrast-brightness-gamma-correction]
+
+ img_gamma_corrected = cv.hconcat([img_original, res])
+ cv.imshow("Gamma correction", img_gamma_corrected)
+
+def on_linear_transform_alpha_trackbar(val):
+ global alpha
+ alpha = val / 100
+ basicLinearTransform()
+
+def on_linear_transform_beta_trackbar(val):
+ global beta
+ beta = val - 100
+ basicLinearTransform()
+
+def on_gamma_correction_trackbar(val):
+ global gamma
+ gamma = val / 100
+ gammaCorrection()
+
+parser = argparse.ArgumentParser(description='Code for Changing the contrast and brightness of an image! tutorial.')
+parser.add_argument('--input', help='Path to input image.', default='../data/lena.jpg')
+args = parser.parse_args()
+
+img_original = cv.imread(args.input)
+if img_original is None:
+ print('Could not open or find the image:', args.input)
+ exit(0)
+
+
+img_corrected = cv.hconcat([img_original, img_original])
+img_gamma_corrected = cv.hconcat([img_original, img_original])
+
+cv.namedWindow('Brightness and contrast adjustments')
+cv.namedWindow('Gamma correction')
+
+alpha_init = int(alpha * 100)
+cv.createTrackbar('Alpha gain (contrast)', 'Brightness and contrast adjustments', alpha_init, alpha_max, on_linear_transform_alpha_trackbar)
+beta_init = beta + 100
+cv.createTrackbar('Beta bias (brightness)', 'Brightness and contrast adjustments', beta_init, beta_max, on_linear_transform_beta_trackbar)
+gamma_init = int(gamma * 100)
+cv.createTrackbar('Gamma correction', 'Gamma correction', gamma_init, gamma_max, on_gamma_correction_trackbar)
+
+on_linear_transform_alpha_trackbar(alpha_init)
+on_gamma_correction_trackbar(gamma_init)
+
+cv.waitKey()