Compute Library 18.05

NEGEMMLowpQuantizeDownInt32ToUint8Scale Class Reference

Basic function to execute NEGEMMLowpQuantizeDownInt32ToUint8Scale on NEON.

#include <NEGEMMLowpOutputStage.h>
Public Member Functions

void configure(const ITensor *input, const ITensor *bias, ITensor *output, int result_offset, int result_mult_int, int result_shift, int min = 0, int max = 0)
    Initialise the kernel's inputs and output.
Public Member Functions inherited from INESimpleFunction

INESimpleFunction()
    Constructor.

void run() override final
    Run the kernels contained in the function.
Public Member Functions inherited from IFunction

virtual ~IFunction() = default
    Destructor.

virtual void prepare()
    Prepare the function for execution.
Static Public Member Functions

static Status validate(const ITensorInfo *input, const ITensorInfo *bias, const ITensorInfo *output, int min = 0, int max = 0)
    Static function to check if given info will lead to a valid configuration of NEGEMMLowpQuantizeDownInt32ToUint8Scale.
Basic function to execute NEGEMMLowpQuantizeDownInt32ToUint8Scale on NEON.
NEGEMMLowpQuantizeDownInt32ToUint8Scale depends on three parameters: result_offset, result_mult_int and result_shift. The final result is:

((input[i][k] + result_offset) * result_mult_int) >> result_shift

If a bias tensor is provided, the final result is:

((input[i][k] + bias[k] + result_offset) * result_mult_int) >> result_shift

This function calls the following NEON kernels:

NEGEMMLowpQuantizeDownInt32ToUint8ScaleKernel
Definition at line 59 of file NEGEMMLowpOutputStage.h.
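
The quantize-down arithmetic can be checked by hand. Below is a minimal sketch of the formula above, using assumed parameter values (result_offset = -50, result_mult_int = 2, result_shift = 8) that are chosen for illustration, not taken from this page:

    #include <algorithm>
    #include <cstdint>
    #include <cstdio>

    int main()
    {
        // Illustrative quantization parameters (assumptions for this sketch).
        const int32_t result_offset   = -50;
        const int32_t result_mult_int = 2;
        const int32_t result_shift    = 8;

        const int32_t acc    = 12800;                                   // S32 accumulator from the GEMM
        const int32_t scaled = (acc + result_offset) * result_mult_int; // (12800 - 50) * 2 = 25500
        const int32_t down   = scaled >> result_shift;                  // 25500 >> 8 = 99

        // Clamp to the QASYMM8 range [0, 255] before the narrowing cast,
        // mirroring the saturation that the min/max arguments control.
        const uint8_t q = static_cast<uint8_t>(std::min(255, std::max(0, down)));
        std::printf("%u\n", q); // prints 99
        return 0;
    }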
void configure(const ITensor *input, const ITensor *bias, ITensor *output, int result_offset, int result_mult_int, int result_shift, int min = 0, int max = 0)
Initialise the kernel's inputs and output.
Parameters
    [in]  input            Input tensor. It is the output of the NEGEMMLowpMatrixMultiplyCore function. Data type supported: S32.
    [in]  bias             Biases tensor. Only shared biases are supported; it can be a nullptr if the addition of biases is not required. Biases are a 1D tensor with dimensions [OFM]. Data type supported: same as input.
    [out] output           Output tensor. Data type supported: QASYMM8.
    [in]  result_offset    Offset to be added to each element of the input matrix.
    [in]  result_mult_int  Value by which each element of the input matrix is multiplied once result_offset has been added.
    [in]  result_shift     Number of bits by which the result is shifted right before converting back to QASYMM8.
    [in]  min              (Optional) Minimum value used to saturate down the output result before converting back to QASYMM8.
    [in]  max              (Optional) Maximum value used to saturate up the output result before converting back to QASYMM8. Along with min, this value can be used to implement a "rectified linear unit" activation function.
Referenced by arm_compute::test::validation::DATA_TEST_CASE().
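
A minimal usage sketch for configure() and run(). The tensor shapes and the quantization parameters (result_offset = -100, result_mult_int = 2, result_shift = 8) are illustrative assumptions, not values from this page:

    #include "arm_compute/runtime/NEON/functions/NEGEMMLowpOutputStage.h"
    #include "arm_compute/runtime/Tensor.h"

    using namespace arm_compute;

    int main()
    {
        // S32 accumulators produced by NEGEMMLowpMatrixMultiplyCore
        // (assumed shape: 16 columns/OFM x 4 rows).
        Tensor input, bias, output;
        input.allocator()->init(TensorInfo(TensorShape(16U, 4U), 1, DataType::S32));
        bias.allocator()->init(TensorInfo(TensorShape(16U), 1, DataType::S32));
        output.allocator()->init(TensorInfo(TensorShape(16U, 4U), 1, DataType::QASYMM8));

        NEGEMMLowpQuantizeDownInt32ToUint8Scale output_stage;
        output_stage.configure(&input, &bias, &output,
                               -100 /* result_offset */,
                               2    /* result_mult_int */,
                               8    /* result_shift */);

        input.allocator()->allocate();
        bias.allocator()->allocate();
        output.allocator()->allocate();

        // ... fill input and bias with the GEMM accumulators and biases ...

        output_stage.run(); // writes the QASYMM8 result into output
        return 0;
    }

Note that configure() is called after the tensor info has been initialised but before the buffers are allocated, which is the usual pattern in the library's examples.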
static Status validate(const ITensorInfo *input, const ITensorInfo *bias, const ITensorInfo *output, int min = 0, int max = 0)
Static function to check if given info will lead to a valid configuration of NEGEMMLowpQuantizeDownInt32ToUint8Scale.
Parameters
    [in]  input   Input tensor. It is the output of the NEGEMMLowpMatrixMultiplyCore function. Data type supported: S32.
    [in]  bias    Biases tensor. Only shared biases are supported; it can be a nullptr if the addition of biases is not required. Biases are a 1D tensor with dimensions [OFM]. Data type supported: same as input.
    [out] output  Output tensor. Data type supported: QASYMM8.
    [in]  min     (Optional) Minimum value used to saturate down the output result before converting back to QASYMM8.
    [in]  max     (Optional) Maximum value used to saturate up the output result before converting back to QASYMM8. Along with min, this value can be used to implement a "rectified linear unit" activation function.
Referenced by arm_compute::test::validation::DATA_TEST_CASE().
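
A sketch of checking a configuration with validate() before creating any tensors; the shapes are the same illustrative assumptions as in the configure() example above:

    #include "arm_compute/core/TensorInfo.h"
    #include "arm_compute/runtime/NEON/functions/NEGEMMLowpOutputStage.h"

    #include <iostream>

    using namespace arm_compute;

    int main()
    {
        // Only ITensorInfo descriptors are needed; no memory is allocated.
        const TensorInfo input_info(TensorShape(16U, 4U), 1, DataType::S32);
        const TensorInfo bias_info(TensorShape(16U), 1, DataType::S32);
        const TensorInfo output_info(TensorShape(16U, 4U), 1, DataType::QASYMM8);

        const Status status = NEGEMMLowpQuantizeDownInt32ToUint8Scale::validate(&input_info, &bias_info, &output_info);
        if(!status)
        {
            std::cerr << "Invalid configuration: " << status.error_description() << std::endl;
            return 1;
        }
        return 0;
    }

Because validate() is static, it can gate the corresponding configure() call without constructing the function object first.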