/** @mainpage Introduction
The Computer Vision and Machine Learning library is a set of functions optimised for both ARM CPUs and GPUs using SIMD technologies.

Several builds of the library are available using various configurations:
- OS: Linux, Android or bare metal.
- Architecture: armv7a (32bit) or arm64-v8a (64bit).
- Technology: NEON / OpenCL / NEON and OpenCL.
- Debug / Asserts / Release: Use a build with asserts enabled to debug your application and enable extra validation. Once you are sure your application works as expected you can switch to a release build of the library for maximum performance.
@section S0_1_contact Contact / Support

Please email developer@arm.com

To help the support team, please include the build information of the library you are using. To get it, simply run:

    $ strings android-armv7a-cl-asserts/libarm_compute.so | grep arm_compute_version
    arm_compute_version=v16.12 Build options: {'embed_kernels': '1', 'opencl': '1', 'arch': 'armv7a', 'neon': '0', 'asserts': '1', 'debug': '0', 'os': 'android', 'Werror': '1'} Git hash=f51a545d4ea12a9059fe4e598a092f1fd06dc858
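If you only need the bare version number, for example in a support-request script, the printed line can be trimmed with plain shell parameter expansion (illustrative only; the variable names are ours, not part of the library):

```shell
# Illustrative only: extract just the version number from the line above.
line="arm_compute_version=v16.12 Build options: {'embed_kernels': '1'} Git hash=f51a545d"
version="${line#arm_compute_version=}"   # drop the key
version="${version%% *}"                 # keep everything up to the first space
echo "${version}"
```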
@section S1_file_organisation File organisation

This archive contains:
- The arm_compute header and source files
- The latest Khronos OpenCL 1.2 C headers from the <a href="https://www.khronos.org/registry/cl/">Khronos OpenCL registry</a>
- The latest Khronos cl2.hpp from the <a href="https://www.khronos.org/registry/cl/">Khronos OpenCL registry</a> (API version 2.1 at the time of writing)
- The sources for a stub version of libOpenCL.so to help you build your application.
- An examples folder containing a few examples to compile and link against the library.
- A @ref utils folder containing headers with some boilerplate code used by the examples.

You should have the following file organisation:
├── arm_compute --> All the arm_compute headers
│   │   │   ├── CLKernelLibrary.h --> Manages all the OpenCL kernels compilation and caching, provides accessors for the OpenCL Context.
│   │   │   ├── CLKernels.h --> Includes all the OpenCL kernels at once
│   │   │   ├── CL specialisation of all the generic objects interfaces (ICLTensor, ICLImage, etc.)
│   │   │   ├── kernels --> Folder containing all the OpenCL kernels
│   │   │   │   └── CL*Kernel.h
│   │   │   └── OpenCL.h --> Wrapper to configure the Khronos OpenCL C++ header
│   │   │   ├── CPPKernels.h --> Includes all the CPP kernels at once
│   │   │   └── kernels --> Folder containing all the CPP kernels
│   │   │       └── CPP*Kernel.h
│   │   │   ├── kernels --> Folder containing all the NEON kernels
│   │   │   │   ├── arm64 --> Folder containing the interfaces for the assembly arm64 NEON kernels
│   │   │   │   ├── arm32 --> Folder containing the interfaces for the assembly arm32 NEON kernels
│   │   │   │   ├── assembly --> Folder containing the NEON assembly routines.
│   │   │   │   └── NE*Kernel.h
│   │   │   └── NEKernels.h --> Includes all the NEON kernels at once
│   │   ├── All common basic types (Types.h, Window, Coordinates, Iterator, etc.)
│   │   ├── All generic objects interfaces (ITensor, IImage, etc.)
│   │   └── Objects metadata classes (ImageInfo, TensorInfo, MultiImageInfo)
│   │   ├── CL --> OpenCL specific operations
│   │   │   └── CLMap.h / CLUnmap.h
│   │   │   └── The various nodes supported by the graph API
│   │   ├── Nodes.h --> Includes all the Graph nodes at once.
│   │   └── Graph objects (INode, ITensorAccessor, Graph, etc.)
│   │   ├── CL objects & allocators (CLArray, CLImage, CLTensor, etc.)
│   │   ├── functions --> Folder containing all the OpenCL functions
│   │   ├── CLScheduler.h --> Interface to enqueue OpenCL kernels and get/set the OpenCL CommandQueue and ICLTuner.
│   │   └── CLFunctions.h --> Includes all the OpenCL functions at once
│   │   ├── CPPKernels.h --> Includes all the CPP functions at once.
│   │   └── CPPScheduler.h --> Basic pool of threads to execute CPP/NEON code on several cores in parallel
│   │   ├── functions --> Folder containing all the NEON functions
│   │   └── NEFunctions.h --> Includes all the NEON functions at once
│   │   └── OMPScheduler.h --> OpenMP scheduler (Alternative to the CPPScheduler)
│   ├── Memory manager files (LifetimeManager, PoolManager, etc.)
│   └── Basic implementations of the generic object interfaces (Array, Image, Tensor, etc.)
├── documentation.xhtml -> documentation/index.xhtml
│   ├── cl_convolution.cpp
│   ├── neoncl_scale_median_gaussian.cpp
│   ├── neon_copy_objects.cpp
│   ├── neon_convolution.cpp
│   │   └── Khronos OpenCL C headers and C++ wrapper
│   ├── half --> FP16 library available from http://half.sourceforge.net
│   └── libnpy --> Library to load / write npy buffers, available from https://github.com/llohse/libnpy
│   ├── caffe_data_extractor.py --> Basic script to export weights from Caffe to npy files
│   └── tensorflow_data_extractor.py --> Basic script to export weights from TensorFlow to npy files
│   │   └── ... (Same structure as headers)
│   │   └── cl_kernels --> All the OpenCL kernels
│   │   └── ... (Same structure as headers)
│   └── ... (Same structure as headers)
│   └── Various headers to work around toolchains / platform issues.
│   ├── All test related files shared between validation and benchmark
│   ├── CL --> OpenCL accessors
│   ├── NEON --> NEON accessors
│   ├── benchmark --> Sources for benchmarking
│   │   ├── Benchmark specific files
│   │   ├── CL --> OpenCL benchmarking tests
│   │   └── NEON --> NEON benchmarking tests
│   │   └── Datasets for all the validation / benchmark tests, layer configurations for various networks, etc.
│   │   └── Boilerplate code for both validation and benchmark test suites (Command line parsers, instruments, output loggers, etc.)
│   │   └── Examples of how to instantiate networks.
│   ├── validation --> Sources for validation
│   │   ├── Validation specific files
│   │   ├── CL --> OpenCL validation tests
│   │   ├── CPP --> C++ reference implementations
│   │   │   └── Fixtures to initialise and run the runtime Functions.
│   │   └── NEON --> NEON validation tests
│   └── dataset --> Datasets defining common sets of input parameters
└── utils --> Boilerplate code used by examples
@section S2_versions_changelog Release versions and changelog

@subsection S2_1_versions Release versions

All releases are numbered vYY.MM, where YY are the last two digits of the year and MM is the month number.
If there is more than one release in a month then an extra sequential number is appended at the end:

    v17.03 (First release of March 2017)
    v17.03.1 (Second release of March 2017)
    v17.04 (First release of April 2017)

@note We aim to publish one major public release with new features per quarter. All releases in between will only contain bug fixes.
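The scheme above can be sketched as a small shell parser (illustrative only; the variable names are ours, not part of the library):

```shell
# Illustrative only: split a release tag of the form vYY.MM or vYY.MM.N
# into its parts. "extra" defaults to 0 when the tag has no third component.
tag="v17.03.1"
IFS=. read -r year month extra <<< "${tag#v}"
echo "${year} ${month} ${extra:-0}"
```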
@subsection S2_2_changelog Changelog

v17.09 Public major release
- Experimental Graph support: initial implementation of a simple stream API to easily chain machine learning layers.
- Memory Manager (@ref arm_compute::BlobLifetimeManager, @ref arm_compute::BlobMemoryPool, @ref arm_compute::ILifetimeManager, @ref arm_compute::IMemoryGroup, @ref arm_compute::IMemoryManager, @ref arm_compute::IMemoryPool, @ref arm_compute::IPoolManager, @ref arm_compute::MemoryManagerOnDemand, @ref arm_compute::PoolManager)
- New validation and benchmark frameworks (Boost and Google frameworks replaced by an in-house framework).
- Most machine learning functions support both fixed point 8 and 16 bit (QS8, QS16) for both NEON and OpenCL.
- New NEON kernels / functions:
    - @ref arm_compute::NEGEMMAssemblyBaseKernel @ref arm_compute::NEGEMMAArch64Kernel
    - @ref arm_compute::NEDequantizationLayerKernel / @ref arm_compute::NEDequantizationLayer
    - @ref arm_compute::NEFloorKernel / @ref arm_compute::NEFloor
    - @ref arm_compute::NEL2NormalizeKernel / @ref arm_compute::NEL2Normalize
    - @ref arm_compute::NEQuantizationLayerKernel @ref arm_compute::NEMinMaxLayerKernel / @ref arm_compute::NEQuantizationLayer
    - @ref arm_compute::NEROIPoolingLayerKernel / @ref arm_compute::NEROIPoolingLayer
    - @ref arm_compute::NEReductionOperationKernel / @ref arm_compute::NEReductionOperation
    - @ref arm_compute::NEReshapeLayerKernel / @ref arm_compute::NEReshapeLayer
- New OpenCL kernels / functions:
    - @ref arm_compute::CLDepthwiseConvolution3x3Kernel @ref arm_compute::CLDepthwiseIm2ColKernel @ref arm_compute::CLDepthwiseVectorToTensorKernel @ref arm_compute::CLDepthwiseWeightsReshapeKernel / @ref arm_compute::CLDepthwiseConvolution3x3 @ref arm_compute::CLDepthwiseConvolution @ref arm_compute::CLDepthwiseSeparableConvolutionLayer
    - @ref arm_compute::CLDequantizationLayerKernel / @ref arm_compute::CLDequantizationLayer
    - @ref arm_compute::CLDirectConvolutionLayerKernel / @ref arm_compute::CLDirectConvolutionLayer
    - @ref arm_compute::CLFlattenLayer
    - @ref arm_compute::CLFloorKernel / @ref arm_compute::CLFloor
    - @ref arm_compute::CLGEMMTranspose1xW
    - @ref arm_compute::CLGEMMMatrixVectorMultiplyKernel
    - @ref arm_compute::CLL2NormalizeKernel / @ref arm_compute::CLL2Normalize
    - @ref arm_compute::CLQuantizationLayerKernel @ref arm_compute::CLMinMaxLayerKernel / @ref arm_compute::CLQuantizationLayer
    - @ref arm_compute::CLROIPoolingLayerKernel / @ref arm_compute::CLROIPoolingLayer
    - @ref arm_compute::CLReductionOperationKernel / @ref arm_compute::CLReductionOperation
    - @ref arm_compute::CLReshapeLayerKernel / @ref arm_compute::CLReshapeLayer
v17.06 Public major release
- Added support for fixed point 8 bit (QS8) to the various NEON machine learning kernels.
- Added unit tests and benchmarks (AlexNet, LeNet)
- Added support for sub tensors.
- Added infrastructure to provide GPU specific optimisation for some OpenCL kernels.
- Added @ref arm_compute::OMPScheduler (OpenMP) scheduler for NEON
- Added @ref arm_compute::SingleThreadScheduler scheduler for NEON (For bare metal)
- Users can specify their own scheduler by implementing the @ref arm_compute::IScheduler interface.
- New OpenCL kernels / functions:
    - @ref arm_compute::CLBatchNormalizationLayerKernel / @ref arm_compute::CLBatchNormalizationLayer
    - @ref arm_compute::CLDepthConcatenateKernel / @ref arm_compute::CLDepthConcatenate
    - @ref arm_compute::CLHOGOrientationBinningKernel @ref arm_compute::CLHOGBlockNormalizationKernel, @ref arm_compute::CLHOGDetectorKernel / @ref arm_compute::CLHOGDescriptor @ref arm_compute::CLHOGDetector @ref arm_compute::CLHOGGradient @ref arm_compute::CLHOGMultiDetection
    - @ref arm_compute::CLLocallyConnectedMatrixMultiplyKernel / @ref arm_compute::CLLocallyConnectedLayer
    - @ref arm_compute::CLWeightsReshapeKernel / @ref arm_compute::CLConvolutionLayerReshapeWeights
- @ref arm_compute::CPPDetectionWindowNonMaximaSuppressionKernel
- New NEON kernels / functions:
    - @ref arm_compute::NEBatchNormalizationLayerKernel / @ref arm_compute::NEBatchNormalizationLayer
    - @ref arm_compute::NEDepthConcatenateKernel / @ref arm_compute::NEDepthConcatenate
    - @ref arm_compute::NEDirectConvolutionLayerKernel / @ref arm_compute::NEDirectConvolutionLayer
    - @ref arm_compute::NELocallyConnectedMatrixMultiplyKernel / @ref arm_compute::NELocallyConnectedLayer
    - @ref arm_compute::NEWeightsReshapeKernel / @ref arm_compute::NEConvolutionLayerReshapeWeights
v17.05 Public bug fixes release
- Ported the remaining functions to use accurate padding.
- The library no longer links against OpenCL (it uses dlopen / dlsym at runtime instead to determine whether OpenCL is available).
- Added "free" method to allocator.
- Minimum version of g++ required for armv7 Linux changed from 4.8 to 4.9
v17.04 Public bug fixes release

The following functions have been ported to use the new accurate padding:
- @ref arm_compute::CLColorConvertKernel
- @ref arm_compute::CLEdgeNonMaxSuppressionKernel
- @ref arm_compute::CLEdgeTraceKernel
- @ref arm_compute::CLGaussianPyramidHorKernel
- @ref arm_compute::CLGaussianPyramidVertKernel
- @ref arm_compute::CLGradientKernel
- @ref arm_compute::NEChannelCombineKernel
- @ref arm_compute::NEFillArrayKernel
- @ref arm_compute::NEGaussianPyramidHorKernel
- @ref arm_compute::NEGaussianPyramidVertKernel
- @ref arm_compute::NEHarrisScoreFP16Kernel
- @ref arm_compute::NEHarrisScoreKernel
- @ref arm_compute::NEHOGDetectorKernel
- @ref arm_compute::NELogits1DMaxKernel
- @ref arm_compute::NELogits1DShiftExpSumKernel
- @ref arm_compute::NELogits1DNormKernel
- @ref arm_compute::NENonMaximaSuppression3x3FP16Kernel
- @ref arm_compute::NENonMaximaSuppression3x3Kernel
v17.03.1 First major public release of the sources
- Renamed the library to arm_compute
- New CPP target introduced for C++ kernels shared between NEON and CL functions.
- New padding calculation interface introduced and ported most kernels / functions to use it.
- New OpenCL kernels / functions:
    - @ref arm_compute::CLGEMMLowpMatrixMultiplyKernel / @ref arm_compute::CLGEMMLowp
- New NEON kernels / functions:
    - @ref arm_compute::NENormalizationLayerKernel / @ref arm_compute::NENormalizationLayer
    - @ref arm_compute::NETransposeKernel / @ref arm_compute::NETranspose
    - @ref arm_compute::NELogits1DMaxKernel, @ref arm_compute::NELogits1DShiftExpSumKernel, @ref arm_compute::NELogits1DNormKernel / @ref arm_compute::NESoftmaxLayer
    - @ref arm_compute::NEIm2ColKernel, @ref arm_compute::NECol2ImKernel, arm_compute::NEConvolutionLayerWeightsReshapeKernel / @ref arm_compute::NEConvolutionLayer
    - @ref arm_compute::NEGEMMMatrixAccumulateBiasesKernel / @ref arm_compute::NEFullyConnectedLayer
    - @ref arm_compute::NEGEMMLowpMatrixMultiplyKernel / @ref arm_compute::NEGEMMLowp
v17.03 Sources preview
- New OpenCL kernels / functions:
    - @ref arm_compute::CLGradientKernel, @ref arm_compute::CLEdgeNonMaxSuppressionKernel, @ref arm_compute::CLEdgeTraceKernel / @ref arm_compute::CLCannyEdge
    - GEMM refactoring + FP16 support: @ref arm_compute::CLGEMMInterleave4x4Kernel, @ref arm_compute::CLGEMMTranspose1xWKernel, @ref arm_compute::CLGEMMMatrixMultiplyKernel, @ref arm_compute::CLGEMMMatrixAdditionKernel / @ref arm_compute::CLGEMM
    - @ref arm_compute::CLGEMMMatrixAccumulateBiasesKernel / @ref arm_compute::CLFullyConnectedLayer
    - @ref arm_compute::CLTransposeKernel / @ref arm_compute::CLTranspose
    - @ref arm_compute::CLLKTrackerInitKernel, @ref arm_compute::CLLKTrackerStage0Kernel, @ref arm_compute::CLLKTrackerStage1Kernel, @ref arm_compute::CLLKTrackerFinalizeKernel / @ref arm_compute::CLOpticalFlow
    - @ref arm_compute::CLNormalizationLayerKernel / @ref arm_compute::CLNormalizationLayer
    - @ref arm_compute::CLLaplacianPyramid, @ref arm_compute::CLLaplacianReconstruct
- New NEON kernels / functions:
    - @ref arm_compute::NEActivationLayerKernel / @ref arm_compute::NEActivationLayer
    - GEMM refactoring + FP16 support (Requires armv8.2 CPU): @ref arm_compute::NEGEMMInterleave4x4Kernel, @ref arm_compute::NEGEMMTranspose1xWKernel, @ref arm_compute::NEGEMMMatrixMultiplyKernel, @ref arm_compute::NEGEMMMatrixAdditionKernel / @ref arm_compute::NEGEMM
    - @ref arm_compute::NEPoolingLayerKernel / @ref arm_compute::NEPoolingLayer
v17.02.1 Sources preview
- New OpenCL kernels / functions:
    - @ref arm_compute::CLLogits1DMaxKernel, @ref arm_compute::CLLogits1DShiftExpSumKernel, @ref arm_compute::CLLogits1DNormKernel / @ref arm_compute::CLSoftmaxLayer
    - @ref arm_compute::CLPoolingLayerKernel / @ref arm_compute::CLPoolingLayer
    - @ref arm_compute::CLIm2ColKernel, @ref arm_compute::CLCol2ImKernel, arm_compute::CLConvolutionLayerWeightsReshapeKernel / @ref arm_compute::CLConvolutionLayer
    - @ref arm_compute::CLRemapKernel / @ref arm_compute::CLRemap
    - @ref arm_compute::CLGaussianPyramidHorKernel, @ref arm_compute::CLGaussianPyramidVertKernel / @ref arm_compute::CLGaussianPyramid, @ref arm_compute::CLGaussianPyramidHalf, @ref arm_compute::CLGaussianPyramidOrb
    - @ref arm_compute::CLMinMaxKernel, @ref arm_compute::CLMinMaxLocationKernel / @ref arm_compute::CLMinMaxLocation
    - @ref arm_compute::CLNonLinearFilterKernel / @ref arm_compute::CLNonLinearFilter
- New NEON FP16 kernels (Requires armv8.2 CPU):
    - @ref arm_compute::NEAccumulateWeightedFP16Kernel
    - @ref arm_compute::NEBox3x3FP16Kernel
    - @ref arm_compute::NENonMaximaSuppression3x3FP16Kernel
v17.02 Sources preview
- New OpenCL kernels / functions:
    - @ref arm_compute::CLActivationLayerKernel / @ref arm_compute::CLActivationLayer
    - @ref arm_compute::CLChannelCombineKernel / @ref arm_compute::CLChannelCombine
    - @ref arm_compute::CLDerivativeKernel / @ref arm_compute::CLChannelExtract
    - @ref arm_compute::CLFastCornersKernel / @ref arm_compute::CLFastCorners
    - @ref arm_compute::CLMeanStdDevKernel / @ref arm_compute::CLMeanStdDev
- New NEON kernels / functions:
    - HOG / SVM: @ref arm_compute::NEHOGOrientationBinningKernel, @ref arm_compute::NEHOGBlockNormalizationKernel, @ref arm_compute::NEHOGDetectorKernel, arm_compute::NEHOGNonMaximaSuppressionKernel / @ref arm_compute::NEHOGDescriptor, @ref arm_compute::NEHOGDetector, @ref arm_compute::NEHOGGradient, @ref arm_compute::NEHOGMultiDetection
    - @ref arm_compute::NENonLinearFilterKernel / @ref arm_compute::NENonLinearFilter
- Introduced a CLScheduler to manage the default context and command queue used by the runtime library and create synchronisation events.
- Switched all the kernels / functions to use tensors instead of images.
- Updated documentation to include instructions to build the library from sources.

v16.12 Binary preview release
@section S3_how_to_build How to build the library and the examples

@subsection S3_1_build_options Build options

SCons 2.3 or above is required to build the library.
To see the available build options simply run `scons -h`:

    debug: Debug (yes|no)
    asserts: Enable asserts (this flag is forced to 1 for debug=1) (yes|no)
    arch: Target Architecture (armv7a|arm64-v8a|arm64-v8.2-a|x86_32|x86_64)
    os: Target OS (linux|android|bare_metal)
    build: Build type (native|cross_compile)
        default: cross_compile
        actual: cross_compile
    examples: Build example programs (yes|no)
    Werror: Enable/disable the -Werror compilation flag (yes|no)
    opencl: Enable OpenCL support (yes|no)
    neon: Enable Neon support (yes|no)
    embed_kernels: Embed OpenCL kernels in library binary (yes|no)
    set_soname: Set the library's soname and shlibversion (requires SCons 2.4 or above) (yes|no)
    openmp: Enable OpenMP backend (yes|no)
    cppthreads: Enable C++11 threads backend (yes|no)
    build_dir: Specify sub-folder for the build ( /path/to/build_dir )
    extra_cxx_flags: Extra CXX flags to be appended to the build command
    pmu: Enable PMU counters (yes|no)
    mali: Enable Mali hardware counters (yes|no)
    validation_tests: Build validation test programs (yes|no)
    benchmark_tests: Build benchmark test programs (yes|no)
@b debug / @b asserts:
- With debug=1 asserts are enabled, and the library is built with symbols and no optimisations enabled.
- With debug=0 and asserts=1: optimisations are enabled and symbols are removed, however all the asserts are still present (this is about 20% slower than the release build).
- With debug=0 and asserts=0: all optimisations are enabled and no validation is performed; if the application misuses the library it is likely to result in a crash (only use this mode once you are sure your application works as expected).

@b arch: The x86_32 and x86_64 targets can only be used with neon=0 and opencl=1.

@b os: Choose the operating system you are targeting: Linux, Android or bare metal.
@note bare metal can only be used for NEON (not OpenCL), only static libraries get built and NEON's multi-threading support is disabled.

@b build: you can either build directly on your device (native) or cross compile from your desktop machine (cross_compile). In both cases make sure the compiler is available in your path.

@note If you want to natively compile for 32bit on a 64bit ARM device running a 64bit OS then you will have to use cross-compile too.

@b Werror: If you are compiling using the same toolchains as the ones used in this guide then there shouldn't be any warnings and therefore you should be able to keep Werror=1. If the library fails to build with a different compiler version because of warnings interpreted as errors then, if you are sure the warnings are not important, you might want to try building with Werror=0 (but please do report the issue, either on GitHub or by email to developer@arm.com, so that it can be addressed).

@b opencl / @b neon: Choose which SIMD technology you want to target (NEON for ARM Cortex-A CPUs or OpenCL for ARM Mali GPUs).

@b embed_kernels: For OpenCL only: set embed_kernels=1 if you want the OpenCL kernels to be built into the library's binaries instead of being read from separate ".cl" files. If embed_kernels is set to 0 then the application can set the path to the folder containing the OpenCL kernel files by calling CLKernelLibrary::init(). By default the path is set to "./cl_kernels".

@b set_soname: Do you want to build the versioned version of the library?

If enabled, the library will contain a SONAME and SHLIBVERSION and some symlinks will automatically be created between the objects:

    libarm_compute_core.so -> libarm_compute_core.so.1.0.0
    libarm_compute_core.so.1 -> libarm_compute_core.so.1.0.0
    libarm_compute_core.so.1.0.0

@note This option is disabled by default as it requires SCons version 2.4 or above.
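If you ever need to reproduce that layout by hand, for example when repackaging the binaries yourself, plain symlinks are enough (illustrative only; scons creates these automatically when set_soname=1):

```shell
# Illustrative only: recreate the symlink layout produced by set_soname=1.
touch libarm_compute_core.so.1.0.0
ln -sf libarm_compute_core.so.1.0.0 libarm_compute_core.so.1
ln -sf libarm_compute_core.so.1.0.0 libarm_compute_core.so
```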
@b extra_cxx_flags: Custom CXX flags which will be appended to the end of the build command.

@b build_dir: Build the library in a subfolder of the "build" folder (allows building several configurations in parallel).

@b examples: Whether to build the examples.

@b validation_tests: Enable the build of the validation suite.

@b benchmark_tests: Enable the build of the benchmark tests.

@b pmu: Enable the PMU cycle counter to measure execution time in benchmark tests (your device needs to support it).

@b mali: Enable the collection of Mali hardware counters to measure execution time in benchmark tests (your device needs to have a Mali driver that supports it).

@b openmp: Build in the OpenMP scheduler for NEON.

@note Only works when building with g++, not clang++.

@b cppthreads: Build in the C++11 scheduler for NEON.

@sa arm_compute::Scheduler::set
@subsection S3_2_linux Building for Linux

@subsubsection S3_2_1_library How to build the library?

For Linux, the library was successfully built and tested using the following Linaro GCC toolchains:

- gcc-linaro-arm-linux-gnueabihf-4.9-2014.07_linux
- gcc-linaro-4.9-2016.02-x86_64_aarch64-linux-gnu
- gcc-linaro-6.3.1-2017.02-i686_aarch64-linux-gnu

@note If you are building with opencl=1 then scons will expect to find libOpenCL.so either in the current directory or in "build" (see the section below if you need a stub OpenCL library to link against).

To cross-compile the library in debug mode, with NEON only support, for Linux 32bit:

    scons Werror=1 -j8 debug=1 neon=1 opencl=0 os=linux arch=armv7a

To cross-compile the library in asserts mode, with OpenCL only support, for Linux 64bit:

    scons Werror=1 -j8 debug=0 asserts=1 neon=0 opencl=1 embed_kernels=1 os=linux arch=arm64-v8a

You can also compile the library natively on an ARM device by using <b>build=native</b>:

    scons Werror=1 -j8 debug=0 neon=1 opencl=0 os=linux arch=arm64-v8a build=native
    scons Werror=1 -j8 debug=0 neon=1 opencl=0 os=linux arch=armv7a build=native

@note g++ for ARM is mono-arch, therefore if you want to compile for Linux 32bit on a Linux 64bit platform you will have to use a cross compiler.

For example, on a 64bit Debian based system you would have to install <b>g++-arm-linux-gnueabihf</b>

    apt-get install g++-arm-linux-gnueabihf

then run

    scons Werror=1 -j8 debug=0 neon=1 opencl=0 os=linux arch=armv7a build=cross_compile

or simply remove the build parameter as build=cross_compile is the default value:

    scons Werror=1 -j8 debug=0 neon=1 opencl=0 os=linux arch=armv7a

@attention To cross compile with opencl=1 you need to make sure to have a version of libOpenCL matching your target architecture.
@subsubsection S3_2_2_examples How to manually build the examples?

The examples are built automatically by scons as part of the library build described above. This section describes how to build and link your own application against the library.

@note The following command lines assume the arm_compute and libOpenCL binaries are present in the current directory or in the system library path. If this is not the case you can specify the location of the pre-built library with the compiler option -L. When building the OpenCL example the commands below assume that the CL headers are located in the include folder where the command is executed.

To cross compile a NEON example for Linux 32bit:

    arm-linux-gnueabihf-g++ examples/neon_convolution.cpp utils/Utils.cpp -I. -std=c++11 -mfpu=neon -L. -larm_compute -o neon_convolution

To cross compile a NEON example for Linux 64bit:

    aarch64-linux-gnu-g++ examples/neon_convolution.cpp utils/Utils.cpp -I. -std=c++11 -L. -larm_compute -o neon_convolution

(The only differences from the 32bit command are that the -mfpu option is not needed and the compiler name is different.)

To cross compile an OpenCL example for Linux 32bit:

    arm-linux-gnueabihf-g++ examples/cl_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -mfpu=neon -L. -larm_compute -lOpenCL -o cl_convolution -DARM_COMPUTE_CL

To cross compile an OpenCL example for Linux 64bit:

    aarch64-linux-gnu-g++ examples/cl_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -L. -larm_compute -lOpenCL -o cl_convolution -DARM_COMPUTE_CL

(The only differences from the 32bit command are that the -mfpu option is not needed and the compiler name is different.)

To compile natively (i.e. directly on an ARM device) for NEON for Linux 32bit:

    g++ examples/neon_convolution.cpp utils/Utils.cpp -I. -std=c++11 -mfpu=neon -larm_compute -o neon_convolution

To compile natively (i.e. directly on an ARM device) for NEON for Linux 64bit:

    g++ examples/neon_convolution.cpp utils/Utils.cpp -I. -std=c++11 -larm_compute -o neon_convolution

(The only difference from the 32bit command is that the -mfpu option is not needed.)

To compile natively (i.e. directly on an ARM device) for OpenCL for Linux 32bit or Linux 64bit:

    g++ examples/cl_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -larm_compute -lOpenCL -o cl_convolution -DARM_COMPUTE_CL

@note These commands assume libarm_compute.so is available in your library path; if not, add the path to it using -L.

To run the built executable simply run:

    LD_LIBRARY_PATH=build ./neon_convolution

or

    LD_LIBRARY_PATH=build ./cl_convolution

@note If you built the library with support for both OpenCL and NEON you will need to link against OpenCL even if your application only uses NEON.
@subsection S3_3_android Building for Android

For Android, the library was successfully built and tested using Google's standalone toolchains:
- arm-linux-androideabi-4.9 for armv7a (clang++)
- aarch64-linux-android-4.9 for arm64-v8a (g++)

Here is a guide to <a href="https://developer.android.com/ndk/guides/standalone_toolchain.html">create your Android standalone toolchains from the NDK</a>:

- Download the NDK r14 from here: https://developer.android.com/ndk/downloads/index.html
- Make sure you have Python 2 installed on your machine.
- Generate the 32bit and/or 64bit toolchains by running the following commands:

    $NDK/build/tools/make_standalone_toolchain.py --arch arm64 --install-dir $MY_TOOLCHAINS/aarch64-linux-android-4.9 --stl gnustl
    $NDK/build/tools/make_standalone_toolchain.py --arch arm --install-dir $MY_TOOLCHAINS/arm-linux-androideabi-4.9 --stl gnustl

@attention Due to some NDK issues make sure you use g++ & gnustl for aarch64 and clang++ & gnustl for armv7.

@note Make sure to add the toolchains to your PATH: export PATH=$PATH:$MY_TOOLCHAINS/aarch64-linux-android-4.9/bin:$MY_TOOLCHAINS/arm-linux-androideabi-4.9/bin

@subsubsection S3_3_1_library How to build the library?

@note If you are building with opencl=1 then scons will expect to find libOpenCL.so either in the current directory or in "build" (see the section below if you need a stub OpenCL library to link against).

To cross-compile the library in debug mode, with NEON only support, for Android 32bit:

    CXX=clang++ CC=clang scons Werror=1 -j8 debug=1 neon=1 opencl=0 os=android arch=armv7a

To cross-compile the library in asserts mode, with OpenCL only support, for Android 64bit:

    scons Werror=1 -j8 debug=0 asserts=1 neon=0 opencl=1 embed_kernels=1 os=android arch=arm64-v8a

@subsubsection S3_3_2_examples How to manually build the examples?

The examples are built automatically by scons as part of the library build described above. This section describes how to build and link your own application against the library.

@note The following command lines assume the arm_compute and libOpenCL binaries are present in the current directory or in the system library path. If this is not the case you can specify the location of the pre-built library with the compiler option -L. When building the OpenCL example the commands below assume that the CL headers are located in the include folder where the command is executed.

Once you've got your Android standalone toolchain built and added to your path you can do the following:

To cross compile a NEON example:

    arm-linux-androideabi-clang++ examples/neon_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -larm_compute-static -L. -o neon_convolution_arm -static-libstdc++ -pie

    aarch64-linux-android-g++ examples/neon_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -larm_compute-static -L. -o neon_convolution_aarch64 -static-libstdc++ -pie

To cross compile an OpenCL example:

    arm-linux-androideabi-clang++ examples/cl_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -larm_compute-static -L. -o cl_convolution_arm -static-libstdc++ -pie -lOpenCL -DARM_COMPUTE_CL

    aarch64-linux-android-g++ examples/cl_convolution.cpp utils/Utils.cpp -I. -Iinclude -std=c++11 -larm_compute-static -L. -o cl_convolution_aarch64 -static-libstdc++ -pie -lOpenCL -DARM_COMPUTE_CL

@note Due to some issues in older versions of the Mali OpenCL DDK (<= r13p0), we recommend linking arm_compute statically on Android.

All you need to do then is upload the executable and the shared library to the device using ADB:

    adb push neon_convolution_arm /data/local/tmp/
    adb push cl_convolution_arm /data/local/tmp/
    adb shell chmod 777 -R /data/local/tmp/

And finally to run the example:

    adb shell /data/local/tmp/neon_convolution_arm
    adb shell /data/local/tmp/cl_convolution_arm

For 64bit:

    adb push neon_convolution_aarch64 /data/local/tmp/
    adb push cl_convolution_aarch64 /data/local/tmp/
    adb shell chmod 777 -R /data/local/tmp/

And finally to run the example:

    adb shell /data/local/tmp/neon_convolution_aarch64
    adb shell /data/local/tmp/cl_convolution_aarch64
@subsection S3_4_windows_host Building on a Windows host system

Using `scons` directly from the Windows command line is known to cause
problems. The reason seems to be that if `scons` is set up for cross-compilation
it gets confused about Windows style paths (using backslashes). Thus it is
recommended to follow one of the options outlined below.

@subsubsection S3_4_1_ubuntu_on_windows Bash on Ubuntu on Windows

The best and easiest option is to use
<a href="https://msdn.microsoft.com/en-gb/commandline/wsl/about">Ubuntu on Windows</a>.
This feature is still marked as *beta* and thus might not be available.
However, if it is, building the library is as simple as opening a *Bash on
Ubuntu on Windows* shell and following the general guidelines given above.

@subsubsection S3_4_2_cygwin Cygwin

If the Windows subsystem for Linux is not available <a href="https://www.cygwin.com/">Cygwin</a>
can be used to install and run `scons`. In addition to the default packages
installed by Cygwin, `scons` has to be selected in the installer. (`git` might
also be useful but is not strictly required if you already have got the source
code of the library.) Linaro provides pre-built versions of
<a href="http://releases.linaro.org/components/toolchain/binaries/">GCC cross-compilers</a>
that can be used from the Cygwin terminal. When building for Android the
compiler is included in the Android standalone toolchain. After everything has
been set up in the Cygwin terminal the general guide on building the library
can be followed.
@subsection S3_5_cl_stub_library The OpenCL stub library

In the opencl-1.2-stubs folder you will find the sources to build a stub OpenCL library which you can then link your application or arm_compute against.

If you prefer, you can retrieve the OpenCL library from your device and link against that one instead, but often this library will have dependencies on a range of system libraries, forcing you to link your application against those too even though it does not use them.

@warning This OpenCL library provided is a stub and *not* a real implementation. You can use it to resolve OpenCL's symbols in arm_compute while building the example but you must make sure the real libOpenCL.so is in your library path when running the example or it will not work.

To cross-compile the stub OpenCL library simply run:

    <target-prefix>-gcc -o libOpenCL.so -Iinclude opencl-1.2-stubs/opencl_stubs.c -fPIC -shared

For example:

    arm-linux-gnueabihf-gcc -o libOpenCL.so -Iinclude opencl-1.2-stubs/opencl_stubs.c -fPIC -shared
    aarch64-linux-gnu-gcc -o libOpenCL.so -Iinclude opencl-1.2-stubs/opencl_stubs.c -fPIC -shared
    arm-linux-androideabi-clang -o libOpenCL.so -Iinclude opencl-1.2-stubs/opencl_stubs.c -fPIC -shared
    aarch64-linux-android-gcc -o libOpenCL.so -Iinclude opencl-1.2-stubs/opencl_stubs.c -fPIC -shared