1 Use OpenCL in Android camera preview based CV application {#tutorial_android_ocl_intro}
2 =====================================
4 @prev_tutorial{tutorial_dev_with_OCV_on_Android}
5 @next_tutorial{tutorial_macos_install}
This guide is designed to help you use [OpenCL ™](https://www.khronos.org/opencl/) in an Android camera-preview-based CV application.
It was written for [Eclipse-based ADT tools](http://developer.android.com/tools/help/adt.html)
(now deprecated by Google), but it can easily be reproduced with [Android Studio](http://developer.android.com/tools/studio/index.html).
12 This tutorial assumes you have the following installed and configured:
16 - Eclipse IDE with ADT and CDT plugins
It also assumes that you are familiar with Android Java and JNI programming basics.
If you need help with any of the above, you may refer to our @ref tutorial_android_dev_intro guide.

This tutorial also assumes you have an Android-operated device with OpenCL enabled.
23 The related source code is located within OpenCV samples at
24 [opencv/samples/android/tutorial-4-opencl](https://github.com/opencv/opencv/tree/3.4/samples/android/tutorial-4-opencl/) directory.
Using [GPGPU](https://en.wikipedia.org/wiki/General-purpose_computing_on_graphics_processing_units)
via OpenCL to enhance application performance is quite a modern trend now.
Some CV algorithms (e.g. image filtering) run much faster on a GPU than on a CPU,
and this has recently become possible on Android OS.
The most popular CV application scenario for an Android-operated device is starting the camera in preview mode, applying some CV algorithm to every frame
and displaying the preview frames modified by that CV algorithm.
Let's consider how we can use OpenCL in this scenario. In particular, let's try two ways: direct calls to the OpenCL API, and the recently introduced OpenCV T-API
(aka [Transparent API](https://docs.google.com/presentation/d/1qoa29N_B-s297-fp0-b3rBirvpzJQp8dCtllLQ4DVCY/present)) - implicit OpenCL acceleration of some OpenCV algorithms.
Starting with Android API level 11 (Android 3.0), the [Camera API](http://developer.android.com/reference/android/hardware/Camera.html)
allows the use of an OpenGL texture as a target for preview frames.
Android API level 21 brings the new [Camera2 API](http://developer.android.com/reference/android/hardware/camera2/package-summary.html)
that provides much more control over camera settings and usage modes;
it allows several targets for preview frames, an OpenGL texture in particular.
Having a preview frame in an OpenGL texture is a good deal for using OpenCL because there is an
[OpenGL-OpenCL Interoperability API (cl_khr_gl_sharing)](https://www.khronos.org/registry/cl/sdk/1.2/docs/man/xhtml/cl_khr_gl_sharing.html)
that allows sharing OpenGL texture data with OpenCL functions without copying (with some restrictions, of course).
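The availability of `cl_khr_gl_sharing` is reported in the platform and device extensions strings (a space-separated token list). A small standalone helper (ours, not from the sample) that checks for a whole-token match, which is safer than a plain substring search, could look like this:

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Check whether an OpenCL extension name appears in a space-separated
// extensions string as a whole token (a plain substring search could
// accidentally match inside a longer extension name).
bool hasExtension(const std::string& extensions, const std::string& name) {
    std::istringstream iss(extensions);
    std::string token;
    while (iss >> token)
        if (token == name)
            return true;
    return false;
}
```

The string passed in would come from `CL_PLATFORM_EXTENSIONS` or `CL_DEVICE_EXTENSIONS` queries, as shown in the context-creation code later in this tutorial.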
Let's create a base for our application that just configures the Android camera to send preview frames to an OpenGL texture and displays these frames
on screen without any processing.

A minimal `Activity` class for that purpose looks like the following:
```java
public class Tutorial4Activity extends Activity {

    private MyGLSurfaceView mView;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        requestWindowFeature(Window.FEATURE_NO_TITLE);
        getWindow().setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN,
                WindowManager.LayoutParams.FLAG_FULLSCREEN);
        getWindow().setFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON,
                WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON);
        setRequestedOrientation(ActivityInfo.SCREEN_ORIENTATION_LANDSCAPE);
        mView = new MyGLSurfaceView(this);
        setContentView(mView);
    }

    @Override
    protected void onPause() {
        mView.onPause();
        super.onPause();
    }

    @Override
    protected void onResume() {
        super.onResume();
        mView.onResume();
    }
}
```
91 And a minimal `View` class respectively:
```java
public class MyGLSurfaceView extends GLSurfaceView {

    MyGLRendererBase mRenderer;

    public MyGLSurfaceView(Context context) {
        super(context);
        if(android.os.Build.VERSION.SDK_INT >= 21)
            mRenderer = new Camera2Renderer(this);
        else
            mRenderer = new CameraRenderer(this);
        setEGLContextClientVersion(2);
        setRenderer(mRenderer);
        setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);
    }

    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        super.surfaceCreated(holder);
    }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        super.surfaceDestroyed(holder);
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
        super.surfaceChanged(holder, format, w, h);
    }

    @Override
    public void onResume() {
        super.onResume();
        mRenderer.onResume();
    }

    @Override
    public void onPause() {
        mRenderer.onPause();
        super.onPause();
    }
}
```
140 __Note__: we use two renderer classes: one for legacy [Camera](http://developer.android.com/reference/android/hardware/Camera.html) API
141 and another for modern [Camera2](http://developer.android.com/reference/android/hardware/camera2/package-summary.html).
A minimal `Renderer` class can be implemented in Java (OpenGL ES 2.0 is [available](http://developer.android.com/reference/android/opengl/GLES20.html) in Java),
but since we are going to modify the preview texture with OpenCL, let's move the OpenGL stuff to JNI.
Here is a simple Java wrapper for our JNI stuff:
```java
public class NativeGLRenderer {
    static {
        System.loadLibrary("opencv_java3"); // comment this when using OpenCV Manager
        System.loadLibrary("JNIrender");
    }

    public static native int initGL();
    public static native void closeGL();
    public static native void drawFrame();
    public static native void changeSize(int width, int height);
}
```
162 Since `Camera` and `Camera2` APIs differ significantly in camera setup and control, let's create a base class for the two corresponding renderers:
```java
public abstract class MyGLRendererBase implements GLSurfaceView.Renderer, SurfaceTexture.OnFrameAvailableListener {
    protected final String LOGTAG = "MyGLRendererBase";

    protected SurfaceTexture mSTex;
    protected MyGLSurfaceView mView;

    protected boolean mGLInit = false;
    protected boolean mTexUpdate = false;

    MyGLRendererBase(MyGLSurfaceView view) {
        mView = view;
    }

    protected abstract void openCamera();
    protected abstract void closeCamera();
    protected abstract void setCameraPreviewSize(int width, int height);

    public void onResume() {
        Log.i(LOGTAG, "onResume");
    }

    public void onPause() {
        Log.i(LOGTAG, "onPause");
        mGLInit = false;
        mTexUpdate = false;
        closeCamera();
        if(mSTex != null) {
            mSTex.release();
            mSTex = null;
            NativeGLRenderer.closeGL();
        }
    }

    @Override
    public synchronized void onFrameAvailable(SurfaceTexture surfaceTexture) {
        //Log.i(LOGTAG, "onFrameAvailable");
        mTexUpdate = true;
        mView.requestRender();
    }

    @Override
    public void onDrawFrame(GL10 gl) {
        //Log.i(LOGTAG, "onDrawFrame");
        if (!mGLInit)
            return;
        synchronized (this) {
            if (mTexUpdate) {
                mSTex.updateTexImage();
                mTexUpdate = false;
            }
        }
        NativeGLRenderer.drawFrame();
    }

    @Override
    public void onSurfaceChanged(GL10 gl, int surfaceWidth, int surfaceHeight) {
        Log.i(LOGTAG, "onSurfaceChanged("+surfaceWidth+"x"+surfaceHeight+")");
        NativeGLRenderer.changeSize(surfaceWidth, surfaceHeight);
        setCameraPreviewSize(surfaceWidth, surfaceHeight);
    }

    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        Log.i(LOGTAG, "onSurfaceCreated");
        String strGLVersion = GLES20.glGetString(GLES20.GL_VERSION);
        if (strGLVersion != null)
            Log.i(LOGTAG, "OpenGL ES version: " + strGLVersion);

        int hTex = NativeGLRenderer.initGL();
        mSTex = new SurfaceTexture(hTex);
        mSTex.setOnFrameAvailableListener(this);
        mGLInit = true;
        openCamera();
    }
}
```
243 As you can see, inheritors for `Camera` and `Camera2` APIs should implement the following abstract methods:
```java
protected abstract void openCamera();
protected abstract void closeCamera();
protected abstract void setCameraPreviewSize(int width, int height);
```
Let's leave the details of their implementation beyond this tutorial; please refer to the
[source code](https://github.com/opencv/opencv/tree/3.4/samples/android/tutorial-4-opencl/) to see them.
253 Preview Frames modification
254 ---------------------------
The details of OpenGL ES 2.0 initialization are also quite straightforward and too noisy to be quoted here,
but the important point is that the OpenGL texture serving as the target for camera preview must be of type `GL_TEXTURE_EXTERNAL_OES`
(not `GL_TEXTURE_2D`); internally it keeps picture data in _YUV_ format.
That makes it impossible to share it via CL-GL interop (`cl_khr_gl_sharing`) or to access its pixel data via C/C++ code.
To overcome this restriction we have to perform OpenGL rendering from this texture to another, regular `GL_TEXTURE_2D` texture
using a _FrameBuffer Object_ (aka FBO).

After that we can read (_copy_) the pixel data from C/C++ via `glReadPixels()` and write it back to the texture after modification via `glTexSubImage2D()`.
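To illustrate why the _YUV_ layout matters, here is a small standalone sketch (ours, not from the sample) of a full-range BT.601-style YUV-to-RGB conversion, the kind of per-pixel math the render-to-FBO pass effectively performs when turning the YUV external texture into a regular RGBA `GL_TEXTURE_2D`:

```cpp
#include <algorithm>
#include <cstdint>

// Illustrative full-range BT.601-style YUV -> RGB conversion: the kind
// of per-pixel math performed when rendering the YUV external texture
// into an RGBA GL_TEXTURE_2D. Struct and function names are ours.
struct RGB { uint8_t r, g, b; };

static uint8_t clamp8(float v) {
    return (uint8_t)std::min(255.0f, std::max(0.0f, v));
}

RGB yuvToRgb(uint8_t y, uint8_t u, uint8_t v) {
    float fy = (float)y;
    float fu = (float)u - 128.0f;
    float fv = (float)v - 128.0f;
    RGB p;
    p.r = clamp8(fy + 1.402f * fv);
    p.g = clamp8(fy - 0.344f * fu - 0.714f * fv);
    p.b = clamp8(fy + 1.772f * fu);
    return p;
}
```

A neutral pixel (Y=128, U=V=128) maps to mid-gray, and U/V deviations from 128 add the chroma; doing this on the GPU during the FBO pass leaves plain RGBA data for the subsequent C/C++ or OpenCL processing.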
267 ### Direct OpenCL calls
That `GL_TEXTURE_2D` texture can also be shared with OpenCL without copying, but we have to create the OpenCL context in a special way for that:
```cpp
void initCL()
{
    EGLDisplay mEglDisplay = eglGetCurrentDisplay();
    if (mEglDisplay == EGL_NO_DISPLAY)
        LOGE("initCL: eglGetCurrentDisplay() returned 'EGL_NO_DISPLAY', error = %x", eglGetError());

    EGLContext mEglContext = eglGetCurrentContext();
    if (mEglContext == EGL_NO_CONTEXT)
        LOGE("initCL: eglGetCurrentContext() returned 'EGL_NO_CONTEXT', error = %x", eglGetError());

    cl_context_properties props[] =
    {   CL_GL_CONTEXT_KHR,   (cl_context_properties) mEglContext,
        CL_EGL_DISPLAY_KHR,  (cl_context_properties) mEglDisplay,
        CL_CONTEXT_PLATFORM, 0,
        0 };

    try
    {
        cl::Platform p = cl::Platform::getDefault();
        std::string ext = p.getInfo<CL_PLATFORM_EXTENSIONS>();
        if(ext.find("cl_khr_gl_sharing") == std::string::npos)
            LOGE("Warning: CL-GL sharing isn't supported by PLATFORM");
        props[5] = (cl_context_properties) p();

        theContext = cl::Context(CL_DEVICE_TYPE_GPU, props);
        std::vector<cl::Device> devs = theContext.getInfo<CL_CONTEXT_DEVICES>();
        LOGD("Context returned %d devices, taking the 1st one", devs.size());
        ext = devs[0].getInfo<CL_DEVICE_EXTENSIONS>();
        if(ext.find("cl_khr_gl_sharing") == std::string::npos)
            LOGE("Warning: CL-GL sharing isn't supported by DEVICE");

        theQueue = cl::CommandQueue(theContext, devs[0]);
    }
    catch(cl::Error& e)
    {
        LOGE("cl::Error: %s (%d)", e.what(), e.err());
    }
    catch(std::exception& e)
    {
        LOGE("std::exception: %s", e.what());
    }
    catch(...)
    {
        LOGE( "OpenCL info: unknown error while initializing OpenCL stuff" );
    }
    LOGD("initCL completed");
}
```
@note To build this JNI code you need __OpenCL 1.2__ headers from the [Khronos web site](https://www.khronos.org/registry/cl/api/1.2/) and
the __libOpenCL.so__ downloaded from the device you'll run the application on.
326 Then the texture can be wrapped by a `cl::ImageGL` object and processed via OpenCL calls:
```cpp
cl::ImageGL imgIn (theContext, CL_MEM_READ_ONLY,  GL_TEXTURE_2D, 0, texIn);
cl::ImageGL imgOut(theContext, CL_MEM_WRITE_ONLY, GL_TEXTURE_2D, 0, texOut);

std::vector < cl::Memory > images;
images.push_back(imgIn);
images.push_back(imgOut);
theQueue.enqueueAcquireGLObjects(&images);

cl::Kernel Laplacian = ...
Laplacian.setArg(0, imgIn);
Laplacian.setArg(1, imgOut);

theQueue.enqueueNDRangeKernel(Laplacian, cl::NullRange, cl::NDRange(w, h), cl::NullRange);

theQueue.enqueueReleaseGLObjects(&images);
```
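The sample's actual `.cl` kernel source is not quoted here, but a 3x3 discrete Laplacian of the kind such a kernel computes per pixel can be sketched on the CPU as follows (function name and data layout are ours):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// CPU reference of a 3x3 discrete Laplacian (stencil [0 1 0; 1 -4 1; 0 1 0]):
// the per-pixel operation an OpenCL Laplacian kernel of this kind performs
// over the whole image. 'img' is a grayscale image stored row-major, w*h
// bytes; border pixels are left at zero for simplicity.
std::vector<int> laplacianRef(const std::vector<uint8_t>& img, int w, int h) {
    std::vector<int> out(w * h, 0);
    for (int y = 1; y < h - 1; y++)
        for (int x = 1; x < w - 1; x++)
            out[y*w + x] = img[(y-1)*w + x] + img[(y+1)*w + x]
                         + img[y*w + x - 1] + img[y*w + x + 1]
                         - 4 * img[y*w + x];
    return out;
}
```

On the GPU, each work item of the `cl::NDRange(w, h)` launch computes one such output pixel, which is why the filter parallelizes so well.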
But instead of writing OpenCL code yourself, you may want to use __OpenCV T-API__, which calls OpenCL implicitly.
All you need is to pass the created OpenCL context to OpenCV (via `cv::ocl::attachContext()`) and somehow wrap the OpenGL texture with a `cv::UMat`.
Unfortunately `UMat` keeps an OpenCL _buffer_ internally, which can't be wrapped over either an OpenGL _texture_ or an OpenCL _image_ - so we have to copy image data here:
```cpp
cl::ImageGL imgIn (theContext, CL_MEM_READ_ONLY, GL_TEXTURE_2D, 0, tex);
std::vector < cl::Memory > images(1, imgIn);
theQueue.enqueueAcquireGLObjects(&images);

cv::UMat uIn, uOut, uTmp;
cv::ocl::convertFromImage(imgIn(), uIn);
theQueue.enqueueReleaseGLObjects(&images);

cv::Laplacian(uIn, uTmp, CV_8U);
cv::multiply(uTmp, 10, uOut);

cl::ImageGL imgOut(theContext, CL_MEM_WRITE_ONLY, GL_TEXTURE_2D, 0, tex);
images.clear();
images.push_back(imgOut);
theQueue.enqueueAcquireGLObjects(&images);
cl_mem clBuffer = (cl_mem)uOut.handle(cv::ACCESS_READ);
cl_command_queue q = (cl_command_queue)cv::ocl::Queue::getDefault().ptr();
size_t offset = 0;
size_t origin[3] = { 0, 0, 0 };
size_t region[3] = { w, h, 1 };
CV_Assert(clEnqueueCopyBufferToImage (q, clBuffer, imgOut(), offset, origin, region, 0, NULL, NULL) == CL_SUCCESS);
theQueue.enqueueReleaseGLObjects(&images);
```
- @note We have to make one more image data copy when placing the modified image back into the original OpenGL texture via the OpenCL image wrapper.
- @note By default, OpenCL support (T-API) is disabled in OpenCV builds for Android OS (so it's absent in official packages as of version 3.0),
  but it's possible to rebuild OpenCV for Android locally with OpenCL/T-API enabled: use the `-DWITH_OPENCL=YES` option for CMake.
```sh
cd opencv-build-android
path/to/cmake.exe -GNinja -DCMAKE_MAKE_PROGRAM="path/to/ninja.exe" -DCMAKE_TOOLCHAIN_FILE=path/to/opencv/platforms/android/android.toolchain.cmake -DANDROID_ABI="armeabi-v7a with NEON" -DCMAKE_BUILD_WITH_INSTALL_RPATH=ON path/to/opencv
path/to/ninja.exe install/strip
```
To use your own modified `libopencv_java3.so` you have to keep it inside your APK, not use the OpenCV Manager, and load it manually via `System.loadLibrary("opencv_java3")`.
To compare the performance we measured the FPS of the same preview frame modification (_Laplacian_) done by C/C++ code (a call to `cv::Laplacian` with `cv::Mat`),
by direct OpenCL calls (using OpenCL _images_ for input and output), and by OpenCV _T-API_ (a call to `cv::Laplacian` with `cv::UMat`) on a _Sony Xperia Z3_ at 720p camera resolution:
397 * __C/C++ version__ shows __3-4 fps__
398 * __direct OpenCL calls__ shows __25-27 fps__
399 * __OpenCV T-API__ shows __11-13 fps__ (due to extra copying from `cl_image` to `cl_buffer` and back)
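To put those rates in perspective as per-frame time budgets, a trivial conversion (helper name is ours) shows that 3-4 fps corresponds to roughly 250-333 ms per frame, while 25-27 fps leaves only about 37-40 ms:

```cpp
#include <cassert>

// Convert a frames-per-second figure into the per-frame time budget
// in milliseconds, to make the measured rates directly comparable.
double msPerFrame(double fps) {
    return 1000.0 / fps;
}
```

The extra `cl_image` <-> `cl_buffer` copies of the T-API path thus cost roughly 45-60 ms per 720p frame compared to the direct OpenCL version.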