Compute Library 18.05

NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel Class Reference

NEON kernel used to quantize down the int32 accumulator values of GEMMLowp to QASYMM8.
#include <NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel.h>
Public Member Functions

  const char *name() const override
      Name of the kernel.

  NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel()
      Constructor.

  NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel(const NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel &) = delete
      Prevent instances of this class from being copied (as this class contains pointers).

  NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel &operator=(const NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel &) = delete
      Prevent instances of this class from being copied (as this class contains pointers).

  NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel(NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel &&) = default
      Allow instances of this class to be moved.

  NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel &operator=(NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel &&) = default
      Allow instances of this class to be moved.

  void configure(const ITensor *input, const ITensor *bias, ITensor *output, int result_fixedpoint_multiplier, int result_shift, int result_offset_after_shift, int min = 0, int max = 0)
      Initialise the kernel's input and output.

  void run(const Window &window, const ThreadInfo &info) override
      Execute the kernel on the passed window.

Public Member Functions inherited from ICPPKernel

  virtual ~ICPPKernel() = default
      Default destructor.

Public Member Functions inherited from IKernel

  IKernel()
      Constructor.

  virtual ~IKernel() = default
      Destructor.

  virtual bool is_parallelisable() const
      Indicates whether or not the kernel is parallelisable.

  virtual BorderSize border_size() const
      The size of the border for that kernel.

  const Window &window() const
      The maximum window the kernel can be executed on.

Static Public Member Functions

  static Status validate(const ITensorInfo *input, const ITensorInfo *bias, const ITensorInfo *output, int min = 0, int max = 0)
      Static function to check if the given info will lead to a valid configuration of NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel.
Detailed Description

NEON kernel used to quantize down the int32 accumulator values of GEMMLowp to QASYMM8.

This kernel takes a final int32 accumulator value (the output of NEGEMMLowpMatrixMultiplyKernel) and processes it to obtain the final QASYMM8 value. The following computations will be performed by the kernel:

  1. Compute a fixed-point multiplication between each entry of the input and result_fixedpoint_multiplier.
  2. Add the bias to the final result, if the bias tensor is not a nullptr.
  3. Round to the nearest division by a power of two using result_shift.
  4. Add result_offset_after_shift to each result.
  5. Clamp the value between the specified min and max bounds.
  6. Clamp the resulting int32 values to the [0..255] range and cast them to QASYMM8.
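The per-element arithmetic can be sketched in scalar C++. The sketch below is a hypothetical model following the usual gemmlowp fixed-point conventions (saturating rounding doubling high multiply, then a rounding right shift); the real kernel performs the same steps with NEON vector instructions, and the helper names here are illustrative, not ACL API:

```cpp
#include <algorithm>
#include <cstdint>
#include <limits>

// Round-to-nearest division by 2^exponent, ties rounded away from zero.
int32_t rounding_divide_by_pow2(int32_t x, int exponent)
{
    const int32_t mask      = (int32_t(1) << exponent) - 1;
    const int32_t remainder = x & mask;
    const int32_t threshold = (mask >> 1) + (x < 0 ? 1 : 0);
    return (x >> exponent) + (remainder > threshold ? 1 : 0);
}

// (a * b * 2) >> 31 with rounding; saturates the one overflow case
// (INT32_MIN * INT32_MIN).
int32_t saturating_rounding_doubling_high_mul(int32_t a, int32_t b)
{
    if(a == std::numeric_limits<int32_t>::min() && b == std::numeric_limits<int32_t>::min())
    {
        return std::numeric_limits<int32_t>::max();
    }
    const int64_t ab    = int64_t(a) * int64_t(b);
    const int64_t nudge = ab >= 0 ? (int64_t(1) << 30) : (1 - (int64_t(1) << 30));
    return int32_t((ab + nudge) / (int64_t(1) << 31));
}

// Scalar model of the per-element quantize-down pipeline.
uint8_t quantize_down(int32_t acc, int32_t bias, int32_t multiplier, int shift,
                      int32_t offset_after_shift, int32_t min, int32_t max)
{
    acc += bias;                                                  // optional bias addition
    acc = saturating_rounding_doubling_high_mul(acc, multiplier); // fixed-point multiply
    acc = rounding_divide_by_pow2(acc, shift);                    // rounding shift by result_shift
    acc += offset_after_shift;                                    // re-apply quantization offset
    acc = std::max(min, std::min(max, acc));                      // optional clamp (e.g. for ReLU)
    acc = std::max(0, std::min(255, acc));                        // saturate to the uint8 range
    return uint8_t(acc);
}
```

For example, with multiplier = 1 << 30 (0.5 in Q0.31), shift = 1 and offset 128, an accumulator of 100 maps to 100 * 0.25 + 128 = 153, and an accumulator of -100 maps to 103.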
Definition at line 46 of file NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel.h.
NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel()

Constructor.
NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel(const NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel &) = delete

Prevent instances of this class from being copied (as this class contains pointers).
NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel(NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel &&) = default

Allow instances of this class to be moved.
void configure(const ITensor *input, const ITensor *bias, ITensor *output, int result_fixedpoint_multiplier, int result_shift, int result_offset_after_shift, int min = 0, int max = 0)
Initialise the kernel's input and output.

Parameters:
  [in]  input                        Input tensor. Data type supported: S32
  [in]  bias                         Biases tensor. Only shared biases are supported, and it can be a nullptr if the bias addition is not required. The bias is a 1D tensor with dimensions [OFM]. Data type supported: same as input.
  [out] output                       Output tensor. Data type supported: QASYMM8
  [in]  result_fixedpoint_multiplier Fixed-point value multiplied with each element of the input matrix once the result offset has been added
  [in]  result_shift                 Integer value used to round to nearest, with a division by a power of two, the result of the fixed-point multiplication
  [in]  result_offset_after_shift    Offset applied to the result before converting it back to QASYMM8
  [in]  min                          (Optional) Minimum value used to saturate down the output result before converting back to QASYMM8
  [in]  max                          (Optional) Maximum value used to saturate up the output result before converting back to QASYMM8. Along with min, this value can be used to implement "rectified linear unit" activation functions
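A typical call sequence might look like the sketch below. It assumes the Arm Compute Library 18.05 headers and runtime are available; the tensor arguments and quantization parameters are illustrative and would normally be derived from the input, weight, and output quantization infos:

```cpp
// Illustrative only: requires the arm_compute headers and a configured runtime.
#include "arm_compute/core/NEON/kernels/NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel.h"
#include "arm_compute/runtime/NEON/NEScheduler.h"

using namespace arm_compute;

void quantize_down_example(ITensor *gemm_acc_s32, ITensor *bias_s32, ITensor *dst_qasymm8)
{
    NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel kernel;

    // Check the configuration first: the same conditions configure() asserts on.
    Status status = NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel::validate(
        gemm_acc_s32->info(), bias_s32 ? bias_s32->info() : nullptr, dst_qasymm8->info());
    ARM_COMPUTE_ERROR_THROW_ON(status);

    // Hypothetical quantization parameters: effective scale 0.5 * 2^-1 = 0.25,
    // output zero-point 128.
    const int result_fixedpoint_multiplier = 1 << 30; // 0.5 in Q0.31
    const int result_shift                 = 1;
    const int result_offset_after_shift    = 128;

    kernel.configure(gemm_acc_s32, bias_s32, dst_qasymm8,
                     result_fixedpoint_multiplier, result_shift, result_offset_after_shift);

    // Dispatch over the kernel's window rather than calling run() directly.
    NEScheduler::get().schedule(&kernel, Window::DimY);
}
```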
const char *name() const override

Name of the kernel.
Implements ICPPKernel.
Definition at line 49 of file NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel.h.
NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel &operator=(const NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel &) = delete

Prevent instances of this class from being copied (as this class contains pointers).
NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel &operator=(NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel &&) = default

Allow instances of this class to be moved.
void run(const Window &window, const ThreadInfo &info) override

Execute the kernel on the passed window.

Parameters:
  [in] window Region on which to execute the kernel. (Must be a region of the window returned by window())
  [in] info   Info about the executing thread and CPU.
Implements ICPPKernel.
static Status validate(const ITensorInfo *input, const ITensorInfo *bias, const ITensorInfo *output, int min = 0, int max = 0)

Static function to check if the given info will lead to a valid configuration of NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPointKernel.

Parameters:
  [in] input  Input tensor. Data type supported: S32
  [in] bias   Biases tensor. Only shared biases are supported, and it can be a nullptr if the bias addition is not required. The bias is a 1D tensor with dimensions [OFM]. Data type supported: same as input.
  [in] output Output tensor. Data type supported: QASYMM8
  [in] min    (Optional) Minimum value used to saturate down the output result before converting back to QASYMM8
  [in] max    (Optional) Maximum value used to saturate up the output result before converting back to QASYMM8. Along with min, this value can be used to implement "rectified linear unit" activation functions