From c5187bb81db7d22570aa691c0a46c5a5116fa62c Mon Sep 17 00:00:00 2001
From: 이상규/동작제어Lab(SR)/Principal Engineer/삼성전자
@@ -825,37 +952,43 @@ typedef enum {
*
- * * 21:The clipping threshold (\f$t_{cell}\f$) for the cell state, such that values are bound
- * within [-cell_clip, cell_clip]. If set to 0.0 then clipping is
- * disabled.
- * * 22:The clipping threshold (\f$t_{proj}\f$) for the output from the projection layer, such
- * that values are bound within [-proj_clip, proj_clip]. If set to 0.0
+ * * 21:The clipping threshold (\f$t_{cell}\f$) for the cell state, such
+ * that values are bound within [-cell_clip, cell_clip]. If set to 0.0
* then clipping is disabled.
+ * * 22:The clipping threshold (\f$t_{proj}\f$) for the output from the
+ * projection layer, such that values are bound within
+ * [-proj_clip, proj_clip]. If set to 0.0 then clipping is disabled.
*
* Outputs:
* * 0: The scratch buffer.
- * A 2-D tensor of type T, of shape [batch_size, num_units * 4] with
- * CIFG, or [batch_size, num_units * 3] without CIFG.
+ * A 2-D tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT32}, of shape
+ * [batch_size, num_units * 4] with CIFG, or
+ * [batch_size, num_units * 3] without CIFG.
* * 1: The output state (out) (\f$h_t\f$).
- * A 2-D tensor of type T, of shape [batch_size, output_size].
+ * A 2-D tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT32}, of shape
+ * [batch_size, output_size].
* * 2: The cell state (out) (\f$C_t\f$).
- * A 2-D tensor of type T, of shape [batch_size, num_units].
+ * A 2-D tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT32}, of shape
+ * [batch_size, num_units].
* * 3: The output (\f$o_t\f$).
- * A 2-D tensor of type T, of shape [batch_size, output_size]. This is
- * effectively the same as the current “output state (out)” value.
+ * A 2-D tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT32}, of shape
+ * [batch_size, output_size]. This is effectively the same as the
+ * current “output state (out)” value.
*/
ANEURALNETWORKS_LSTM = 16,
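The cell-state and projection clipping selected by inputs 21 and 22 above amounts to a
symmetric clamp. A minimal reference sketch (the helper is illustrative, not part of the API):

    #include <math.h>

    /* Symmetric clamp used for cell-state (t_cell) and projection (t_proj)
     * clipping: values are bound to [-threshold, threshold]; a threshold of
     * 0.0 means clipping is disabled. */
    static float clip_value(float x, float threshold) {
        if (threshold == 0.0f) {
            return x; /* clipping disabled */
        }
        return fmaxf(-threshold, fminf(threshold, x));
    }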
- /** Performs an 2-D max pooling operation.
+ /**
+ * Performs a 2-D max pooling operation.
*
- * The output dimensions are functions of the filter dimensions, stride, and padding.
+ * The output dimensions are functions of the filter dimensions, stride, and
+ * padding.
*
* The values in the output tensor are computed as:
*
* output[batch, row, col, channel] =
* max_{i, j} (input[batch, row + i, col + j, channel])
*
- * Supported tensor types:
+ * Supported tensor {@link OperandCode}:
* * {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
*
@@ -864,51 +997,68 @@ typedef enum {
* Both explicit padding and implicit padding are supported.
*
* Inputs (explicit padding):
- * * 0: A 4-D tensor, of shape [batches, height, width, depth], specifying the input.
- * * 1: An INT32 value, specifying the padding on the left, in the “width” dimension.
- * * 2: An INT32 value, specifying the padding on the right,in the “width” dimension.
- * * 3: An INT32 value, specifying the padding on the top, in the “height” dimension.
- * * 4: An INT32 value, specifying the padding on the bottom, in the “height” dimension.
- * * 5: An INT32 value, specifying the stride when walking through input
- * in the “width” dimension.
- * * 6: An INT32 value, specifying the stride when walking through input
- * in the “height” dimension.
- * * 7: An INT32 value, specifying the filter width.
- * * 8: An INT32 value, specifying the filter height.
- * * 9: An INT32 value, and has to be one of the {@link FuseCode} values.
- * Specifies the activation to invoke on the result of each addition.
+ * * 0: A 4-D tensor, of shape [batches, height, width, depth], specifying
+ * the input.
+ * * 1: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on
+ * the left, in the “width” dimension.
+ * * 2: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on
+ * the right, in the “width” dimension.
+ * * 3: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on
+ * the top, in the “height” dimension.
+ * * 4: An {@link ANEURALNETWORKS_INT32} scalar, specifying the padding on
+ * the bottom, in the “height” dimension.
+ * * 5: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when
+ * walking through input in the “width” dimension.
+ * * 6: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when
+ * walking through input in the “height” dimension.
+ * * 7: An {@link ANEURALNETWORKS_INT32} scalar, specifying the filter
+ * width.
+ * * 8: An {@link ANEURALNETWORKS_INT32} scalar, specifying the filter
+ * height.
+ * * 9: An {@link ANEURALNETWORKS_INT32} scalar, and has to be one of the
+ * {@link FuseCode} values. Specifies the activation to
+ * invoke on the result.
*
* Inputs (implicit padding):
- * * 0: A 4-D tensor, of shape [batches, height, width, depth], specifying the input.
- * * 1: An INT32 value, specifying the implicit padding scheme, has to be one of the
+ * * 0: A 4-D tensor, of shape [batches, height, width, depth], specifying
+ * the input.
+ * * 1: An {@link ANEURALNETWORKS_INT32} scalar, specifying the implicit
+ * padding scheme, has to be one of the
* {@link PaddingCode} values.
- * * 2: An INT32 value, specifying the stride when walking through input
- * in the “width” dimension.
- * * 3: An INT32 value, specifying the stride when walking through input
- * in the “height” dimension.
- * * 4: An INT32 value, specifying the filter width.
- * * 5: An INT32 value, specifying the filter height.
- * * 6: An INT32 value, and has to be one of the {@link FuseCode} values.
- * Specifies the activation to invoke on the result of each addition.
+ * * 2: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when
+ * walking through input in the “width” dimension.
+ * * 3: An {@link ANEURALNETWORKS_INT32} scalar, specifying the stride when
+ * walking through input in the “height” dimension.
+ * * 4: An {@link ANEURALNETWORKS_INT32} scalar, specifying the filter
+ * width.
+ * * 5: An {@link ANEURALNETWORKS_INT32} scalar, specifying the filter
+ * height.
+ * * 6: An {@link ANEURALNETWORKS_INT32} scalar, and has to be one of the
+ * {@link FuseCode} values. Specifies the activation to
+ * invoke on the result.
*
* Outputs:
- * * 0: The output 4-D tensor, of shape [batches, out_height, out_width, depth].
+ * * 0: The output 4-D tensor, of shape
+ * [batches, out_height, out_width, depth].
*/
ANEURALNETWORKS_MAX_POOL_2D = 17,
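As a reference for the formula and the explicit-padding inputs above, a straightforward NHWC
max-pooling loop is sketched below. It is illustrative only (not a driver implementation); padded
positions are simply skipped, and the output extents are assumed to have been computed from the
filter, stride and padding values.

    #include <float.h>
    #include <stddef.h>

    /* output[b, oy, ox, ch] = max over the filter window of the input,
     * with explicit left/top padding and the given strides. */
    static void max_pool_2d_nhwc(const float* in, float* out,
                                 int batches, int in_h, int in_w, int depth,
                                 int pad_left, int pad_top,
                                 int stride_w, int stride_h,
                                 int filter_w, int filter_h,
                                 int out_h, int out_w) {
        for (int b = 0; b < batches; ++b)
        for (int oy = 0; oy < out_h; ++oy)
        for (int ox = 0; ox < out_w; ++ox)
        for (int ch = 0; ch < depth; ++ch) {
            float best = -FLT_MAX;
            for (int i = 0; i < filter_h; ++i)
            for (int j = 0; j < filter_w; ++j) {
                int iy = oy * stride_h + i - pad_top;
                int ix = ox * stride_w + j - pad_left;
                if (iy < 0 || iy >= in_h || ix < 0 || ix >= in_w) continue;
                size_t idx = ((size_t)(b * in_h + iy) * in_w + ix) * depth + ch;
                if (in[idx] > best) best = in[idx];
            }
            out[((size_t)(b * out_h + oy) * out_w + ox) * depth + ch] = best;
        }
    }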
- /** Multiplies two tensors, element-wise.
+ /**
+ * Multiplies two tensors, element-wise.
*
- * Takes two input tensors of identical type and compatible dimensions. The output
- * is the product of both input tensors, optionally modified by an activation function.
+ * Takes two input tensors of identical {@link OperandCode} and compatible
+ * dimensions. The output is the product of both input tensors, optionally
+ * modified by an activation function.
*
* Two dimensions are compatible when:
* 1. they are equal, or
* 2. one of them is 1
*
- * The size of the resulting output is the maximum size along each dimension of the
- * input operands. It starts with the trailing dimensions, and works its way forward.
+ * The size of the resulting output is the maximum size along each dimension
+ * of the input operands. It starts with the trailing dimensions, and works
+ * its way forward.
*
- * Supported tensor types:
+ * Supported tensor {@link OperandCode}:
* * {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
*
@@ -916,24 +1066,28 @@ typedef enum {
*
* Inputs:
* * 0: A tensor.
- * * 1: A tensor of the same type, and compatible dimensions as input0.
- * * 2: An INT32 value, and has to be one of the {@link FuseCode} values.
- * Specifies the activation to invoke on the result of each addition.
+ * * 1: A tensor of the same {@link OperandCode}, and compatible dimensions
+ * as input0.
+ * * 2: An {@link ANEURALNETWORKS_INT32} scalar, and has to be one of the
+ * {@link FuseCode} values. Specifies the activation to
+ * invoke on the result.
*
* Outputs:
- * * 0: The product, a tensor of the same type as input0.
- * For output tensor of {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} type, the following
- * condition must be satisfied: output_scale > input1_scale * input2_scale.
+ * * 0: The product, a tensor of the same {@link OperandCode} as input0.
+ * For output tensor of {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM},
+ * the following condition must be satisfied:
+ * output_scale > input1_scale * input2_scale.
*/
ANEURALNETWORKS_MUL = 18,
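The broadcasting rule above (compare shapes from the trailing dimension forward; each output
dimension is the larger of a compatible pair) can be sketched as follows. The helper is
illustrative only; the example shapes reused here are the ones quoted for ANEURALNETWORKS_DIV and
ANEURALNETWORKS_SUB later in this header.

    #include <stdio.h>

    /* Computes the broadcast output shape of two operand shapes, walking from
     * the trailing dimensions forward. Returns 0 on success, -1 if a pair of
     * dimensions is incompatible. */
    static int broadcast_shape(const unsigned* a, int rank_a,
                               const unsigned* b, int rank_b,
                               unsigned* out, int* out_rank) {
        int rank = rank_a > rank_b ? rank_a : rank_b;
        *out_rank = rank;
        for (int i = 0; i < rank; ++i) {
            unsigned da = i < rank_a ? a[rank_a - 1 - i] : 1; /* trailing first */
            unsigned db = i < rank_b ? b[rank_b - 1 - i] : 1;
            if (da != db && da != 1 && db != 1) return -1;    /* incompatible */
            out[rank - 1 - i] = da > db ? da : db;
        }
        return 0;
    }

    int main(void) {
        /* {4, 1, 2} broadcast with {5, 4, 3, 1} gives {5, 4, 3, 2}. */
        unsigned a[] = {4, 1, 2}, b[] = {5, 4, 3, 1}, out[4];
        int rank;
        if (broadcast_shape(a, 3, b, 4, out, &rank) == 0) {
            for (int i = 0; i < rank; ++i) printf("%u ", out[i]);
            printf("\n");
        }
        return 0;
    }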
- /** Computes rectified linear activation on the input tensor element-wise.
+ /**
+ * Computes rectified linear activation on the input tensor element-wise.
*
* The output is calculated using this formula:
*
* output = max(0, input)
*
- * Supported tensor types:
+ * Supported tensor {@link OperandCode}:
* * {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
*
@@ -947,13 +1101,14 @@ typedef enum {
*/
ANEURALNETWORKS_RELU = 19,
- /** Computes rectified linear 1 activation on the input tensor element-wise.
+ /**
+ * Computes rectified linear 1 activation on the input tensor element-wise.
*
* The output is calculated using this formula:
*
* output = min(1.f, max(-1.f, input))
*
- * Supported tensor types:
+ * Supported tensor {@link OperandCode}:
* * {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
*
@@ -967,13 +1122,14 @@ typedef enum {
*/
ANEURALNETWORKS_RELU1 = 20,
- /** Computes rectified linear 6 activation on the input tensor element-wise.
+ /**
+ * Computes rectified linear 6 activation on the input tensor element-wise.
*
* The output is calculated using this formula:
*
* output = min(6, max(0, input))
*
- * Supported tensor types:
+ * Supported tensor {@link OperandCode}:
* * {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
*
@@ -987,12 +1143,13 @@ typedef enum {
*/
ANEURALNETWORKS_RELU6 = 21,
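The three rectified-linear variants above differ only in their clamp bounds. As element-wise
reference formulas (illustrative only):

    #include <math.h>

    static float relu(float x)  { return fmaxf(0.0f, x); }               /* ANEURALNETWORKS_RELU  */
    static float relu1(float x) { return fminf(1.0f, fmaxf(-1.0f, x)); } /* ANEURALNETWORKS_RELU1 */
    static float relu6(float x) { return fminf(6.0f, fmaxf(0.0f, x)); }  /* ANEURALNETWORKS_RELU6 */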
- /** Reshapes a tensor.
+ /**
+ * Reshapes a tensor.
*
- * Given tensor, this operation returns a tensor that has the same values as tensor,
- * but with a newly specified shape.
+ * Given tensor, this operation returns a tensor that has the same values as
+ * tensor, but with a newly specified shape.
*
- * Supported tensor types:
+ * Supported tensor {@link OperandCode}:
* * {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
*
@@ -1000,32 +1157,38 @@ typedef enum {
*
* Inputs:
* * 0: A tensor, specifying the tensor to be reshaped.
- * * 1: A 1-D tensor of type {@link ANEURALNETWORKS_TENSOR_INT32}, defining the shape
- * of the output tensor. The number of elements implied by shape must be the same
- * as the number of elements in the input tensor.
+ * * 1: A 1-D tensor of {@link ANEURALNETWORKS_TENSOR_INT32}, defining the
+ * shape of the output tensor. The number of elements implied by shape
+ * must be the same as the number of elements in the input tensor.
*
* Outputs:
* * 0: The output tensor, of shape specified by the input shape.
*/
ANEURALNETWORKS_RESHAPE = 22,
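Wiring the operation up through the C API looks roughly like the sketch below: the shape operand
is a constant {@link ANEURALNETWORKS_TENSOR_INT32} whose element count matches the input. The
function name, shapes and operand indices are illustrative and error handling is omitted.

    #include <stdint.h>
    #include <android/NeuralNetworks.h>

    /* Adds a RESHAPE of a [2, 6] float tensor into [2, 3, 2] to an
     * already-created model. Operand indices are assigned in the order the
     * operands are added: 0 = input, 1 = shape, 2 = output. */
    static int add_reshape(ANeuralNetworksModel* model) {
        uint32_t in_dims[2] = {2, 6};
        ANeuralNetworksOperandType input = {
            .type = ANEURALNETWORKS_TENSOR_FLOAT32,
            .dimensionCount = 2, .dimensions = in_dims,
            .scale = 0.0f, .zeroPoint = 0};

        uint32_t shape_dims[1] = {3};
        ANeuralNetworksOperandType shape = {
            .type = ANEURALNETWORKS_TENSOR_INT32,
            .dimensionCount = 1, .dimensions = shape_dims,
            .scale = 0.0f, .zeroPoint = 0};

        uint32_t out_dims[3] = {2, 3, 2};
        ANeuralNetworksOperandType output = {
            .type = ANEURALNETWORKS_TENSOR_FLOAT32,
            .dimensionCount = 3, .dimensions = out_dims,
            .scale = 0.0f, .zeroPoint = 0};

        ANeuralNetworksModel_addOperand(model, &input);   /* index 0 */
        ANeuralNetworksModel_addOperand(model, &shape);   /* index 1 */
        ANeuralNetworksModel_addOperand(model, &output);  /* index 2 */

        /* The new shape {2, 3, 2} implies 12 elements, matching the input. */
        int32_t new_shape[3] = {2, 3, 2};
        ANeuralNetworksModel_setOperandValue(model, 1, new_shape, sizeof(new_shape));

        uint32_t inputs[2] = {0, 1};
        uint32_t outputs[1] = {2};
        return ANeuralNetworksModel_addOperation(model, ANEURALNETWORKS_RESHAPE,
                                                 2, inputs, 1, outputs);
    }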
- /** Resizes images to given size using the bilinear interpretation.
+ /**
+ * Resizes images to a given size using bilinear interpolation.
*
- * Resized images will be distorted if their output aspect ratio is not the same as
- * input aspect ratio.
+ * Resized images must be distorted if their output aspect ratio is not the
+ * same as input aspect ratio. The corner pixels of output may not be the
+ * same as corner pixels of input.
*
- * Supported tensor types:
+ * Supported tensor {@link OperandCode}:
* * {@link ANEURALNETWORKS_TENSOR_FLOAT32}
*
* Supported tensor rank: 4, with "NHWC" data layout.
*
* Inputs:
- * * 0: A 4-D tensor, of shape [batches, height, width, depth], specifying the input.
- * * 1: An INT32 value, specifying the output height of the output tensor.
- * * 2: An INT32 value, specifying the output width of the output tensor.
+ * * 0: A 4-D tensor, of shape [batches, height, width, depth], specifying
+ * the input.
+ * * 1: An {@link ANEURALNETWORKS_INT32} scalar, specifying the output
+ * height of the output tensor.
+ * * 2: An {@link ANEURALNETWORKS_INT32} scalar, specifying the output
+ * width of the output tensor.
*
* Outputs:
- * * 0: The output 4-D tensor, of shape [batches, new_height, new_width, depth].
+ * * 0: The output 4-D tensor, of shape
+ * [batches, new_height, new_width, depth].
*/
ANEURALNETWORKS_RESIZE_BILINEAR = 23,
@@ -1033,7 +1196,8 @@ typedef enum {
* A basic recurrent neural network layer.
*
* This layer implements the operation:
- * outputs = state = activation(inputs * input_weights + state * recurrent_weights + bias)
+ * outputs = state = activation(inputs * input_weights +
+ * state * recurrent_weights + bias)
*
* Where:
 * * “input_weights” is a weight matrix that multiplies the inputs;
@@ -1044,41 +1208,49 @@ typedef enum {
 * * “activation” is the function passed as the “fused_activation_function”
 * argument (if not “NONE”).
*
- * Supported tensor types (Type T):
+ * Supported tensor {@link OperandCode}:
* * {@link ANEURALNETWORKS_TENSOR_FLOAT32}
*
* Inputs:
* * 0: input.
- * A 2-D tensor of type T, of shape [batch_size, input_size], where
- * “batch_size” corresponds to the batching dimension, and “input_size” is
- * the size of the input.
+ * A 2-D tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT32} of shape
+ * [batch_size, input_size], where “batch_size” corresponds to the
+ * batching dimension, and “input_size” is the size of the input.
* * 1: weights.
- * A 2-D tensor of type T, of shape [num_units, input_size], where
- * “num_units” corresponds to the number of units.
+ * A 2-D tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT32}, of shape
+ * [num_units, input_size], where “num_units” corresponds to the
+ * number of units.
* * 2: recurrent_weights.
- * A 2-D tensor of type T, of shape [num_units, num_units], with columns
- * corresponding to the weights from each unit.
+ * A 2-D tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT32}, of shape
+ * [num_units, num_units], with columns corresponding to the weights
+ * from each unit.
* * 3: bias.
- * A 1-D tensor of type T, of shape [num_units].
+ * A 1-D tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT32}, of shape
+ * [num_units].
* * 4: hidden state (in).
- * A 2-D tensor of type T, of shape [batch_size, num_units].
+ * A 2-D tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT32}, of shape
+ * [batch_size, num_units].
* * 5: fused_activation_function.
- * An optional {@link FuseCode} value indicating the activation
- * function. If “NONE” is specified then it results in a linear
- * activation.
+ * An optional {@link FuseCode} value indicating the
+ * activation function. If “NONE” is specified then it results in a
+ * linear activation.
*
* Outputs:
* * 0: hidden state (out).
- * A 2-D tensor of type T, of shape [batch_size, num_units].
+ * A 2-D tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT32}, of shape
+ * [batch_size, num_units].
*
* * 1: output.
- * A 2-D tensor of type T, of shape [batch_size, num_units]. This is
- * effectively the same as the current state value.
+ * A 2-D tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT32}, of shape
+ * [batch_size, num_units]. This is effectively the same as the
+ * current state value.
*/
ANEURALNETWORKS_RNN = 24,
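For a single batch element, the layer above reduces to two matrix-vector products plus a bias and
an activation. A reference sketch, assuming row-major weight storage matching the operand shapes
and using a fused RELU as the example activation (illustrative only):

    #include <math.h>

    /* output = state = activation(input * W^T + state * R^T + bias) */
    static void rnn_step(const float* input, int input_size,
                         const float* weights,           /* [num_units, input_size] */
                         const float* recurrent_weights, /* [num_units, num_units]  */
                         const float* bias,              /* [num_units]             */
                         float* state,                   /* in/out: [num_units]     */
                         float* output,                  /* out:    [num_units]     */
                         int num_units) {
        for (int u = 0; u < num_units; ++u) {
            float acc = bias[u];
            for (int i = 0; i < input_size; ++i)
                acc += weights[u * input_size + i] * input[i];
            for (int j = 0; j < num_units; ++j)
                acc += recurrent_weights[u * num_units + j] * state[j];
            output[u] = fmaxf(0.0f, acc); /* RELU chosen for the example */
        }
        for (int u = 0; u < num_units; ++u) state[u] = output[u];
    }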
- /** Computes the softmax activation on the input tensor element-wise, per batch, by
- * normalizing the input vector so the maximum coefficient is zero.
+ /**
+ * Computes the softmax activation on the input tensor element-wise, per
+ * batch, by normalizing the input vector so the maximum coefficient is
+ * zero.
*
* The output is calculated using this formula:
*
@@ -1086,7 +1258,7 @@ typedef enum {
* exp((input[batch, i] - max(input[batch, :])) * beta) /
* sum_{k}{exp((input[batch, k] - max(input[batch, :])) * beta)}
*
- * Supported tensor types:
+ * Supported tensor {@link OperandCode}:
* * {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
*
@@ -1094,41 +1266,46 @@ typedef enum {
*
* Inputs:
* * 0: A 2-D or 4-D tensor, specifying the tensor to be reshaped.
- * * 1: A FLOAT32 value, specifying the positive scaling factor for the exponent, beta.
+ * * 1: An {@link ANEURALNETWORKS_FLOAT32} scalar, specifying the positive
+ * scaling factor for the exponent, beta.
*
* Outputs:
* * 0: The output tensor of same shape as input0.
- * For {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM} type,
+ * For {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM},
* the scale must be 1.f / 256 and the zeroPoint must be 0.
*/
ANEURALNETWORKS_SOFTMAX = 25,
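A reference per-batch implementation of the formula above, for a [batches, depth] float input
(illustrative only):

    #include <math.h>
    #include <stddef.h>

    static void softmax(const float* in, float* out, int batches, int depth,
                        float beta) {
        for (int b = 0; b < batches; ++b) {
            const float* x = in + (size_t)b * depth;
            float* y = out + (size_t)b * depth;
            float max_val = x[0];
            for (int i = 1; i < depth; ++i)
                if (x[i] > max_val) max_val = x[i];
            float sum = 0.0f;
            for (int i = 0; i < depth; ++i) {
                y[i] = expf((x[i] - max_val) * beta);
                sum += y[i];
            }
            for (int i = 0; i < depth; ++i) y[i] /= sum;
        }
    }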
- /** Rearranges blocks of spatial data, into depth.
+ /**
+ * Rearranges blocks of spatial data, into depth.
*
- * More specifically, this op outputs a copy of the input tensor where values from
- * the height and width dimensions are moved to the depth dimension.
- * The value block_size indicates the input block size and how the data is moved.
+ * More specifically, this op outputs a copy of the input tensor where
+ * values from the height and width dimensions are moved to the depth
+ * dimension. The value block_size indicates the input block size and how
+ * the data is moved.
*
- * Chunks of data of size block_size * block_size from depth are rearranged into
- * non-overlapping blocks of size block_size x block_size.
+ * Chunks of data of size block_size * block_size from depth are rearranged
+ * into non-overlapping blocks of size block_size x block_size.
*
* The depth of the output tensor is input_depth * block_size * block_size.
* The input tensor's height and width must be divisible by block_size.
*
- * Supported tensor types:
+ * Supported tensor {@link OperandCode}:
* * {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
*
* Supported tensor rank: 4, with "NHWC" data layout.
*
* Inputs:
- * * 0: A 4-D tensor, of shape [batches, height, width, depth_in], specifying the input.
- * * 1: An INT32 value, specifying the block_size. block_size must be >=1 and
- * block_size must be a divisor of both the input height and width.
+ * * 0: A 4-D tensor, of shape [batches, height, width, depth_in],
+ * specifying the input.
+ * * 1: An {@link ANEURALNETWORKS_INT32} scalar, specifying the block_size.
+ * block_size must be >=1 and block_size must be a divisor of both the
+ * input height and width.
*
* Outputs:
- * * 0: The output 4-D tensor, of shape [batch, height/block_size, width/block_size,
- * depth*block_size*block_size].
+ * * 0: The output 4-D tensor, of shape [batches, height/block_size,
+ * width/block_size, depth_in*block_size*block_size].
*/
ANEURALNETWORKS_SPACE_TO_DEPTH = 26,
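A reference index mapping for the NHWC layout described above. The ordering of the flattened
block_size x block_size block within the output depth follows the usual TensorFlow convention; that
ordering is an assumption made for illustration only.

    #include <stddef.h>

    /* Output shape: [batches, height/block_size, width/block_size,
     *                depth*block_size*block_size]. height and width must be
     * divisible by block_size. */
    static void space_to_depth_nhwc(const float* in, float* out,
                                    int batches, int height, int width,
                                    int depth, int block_size) {
        int out_h = height / block_size;
        int out_w = width / block_size;
        int out_d = depth * block_size * block_size;
        for (int b = 0; b < batches; ++b)
        for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
        for (int d = 0; d < depth; ++d) {
            int oy = y / block_size, ox = x / block_size;
            int od = ((y % block_size) * block_size + (x % block_size)) * depth + d;
            size_t src = ((size_t)(b * height + y) * width + x) * depth + d;
            size_t dst = ((size_t)(b * out_h + oy) * out_w + ox) * out_d + od;
            out[dst] = in[src];
        }
    }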
@@ -1145,21 +1322,22 @@ typedef enum {
* INTERSPEECH, 2015.
*
* It processes the incoming input using a 2-stage filtering mechanism:
- * * stage 1 performs filtering on the "features" dimension, whose outputs get
- * pushed into a memory of fixed-size memory_size.
+ * * stage 1 performs filtering on the "features" dimension, whose outputs
+ * get pushed into a memory of fixed-size memory_size.
* * stage 2 performs filtering on the "time" dimension of the memory_size
* memoized outputs of stage 1.
*
* Specifically, for rank 1, this layer implements the operation:
*
- * memory = push(conv1d(inputs, weights_feature, feature_dim, "ANEURALNETWORKS_PADDING_VALID"));
+ * memory = push(conv1d(inputs, weights_feature, feature_dim,
+ * "ANEURALNETWORKS_PADDING_VALID"));
* outputs = activation(memory * weights_time + bias);
*
* Where:
 * * “weights_feature” is a weights matrix that processes the inputs (by
- * convolving the input with every “feature filter”), and whose outputs get
- * pushed, stacked in order, into the fixed-size “memory” (the oldest entry
- * gets dropped);
+ * convolving the input with every “feature filter”), and whose outputs
+ * get pushed, stacked in order, into the fixed-size “memory” (the oldest
+ * entry gets dropped);
 * * “weights_time” is a weights matrix that processes the “memory” (by a
* batched matrix multiplication on the num_units);
 * * “bias” is an optional bias vector (added to each output vector in the
@@ -1170,45 +1348,53 @@ typedef enum {
* Each rank adds a dimension to the weights matrices by means of stacking
* the filters.
*
- * Supported tensor types (type T):
+ * Supported tensor {@link OperandCode}:
* * {@link ANEURALNETWORKS_TENSOR_FLOAT32}
*
* Inputs:
* * 0: input.
- * A 2-D tensor of type T, of shape [batch_size, input_size], where
- * “batch_size” corresponds to the batching dimension, and “input_size” is
- * the size of the input.
+ * A 2-D tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT32}, of shape
+ * [batch_size, input_size], where “batch_size” corresponds to the
+ * batching dimension, and “input_size” is the size of the input.
* * 1: weights_feature.
- * A 2-D tensor of type T, of shape [num_units, input_size], where
- * “num_units” corresponds to the number of units.
+ * A 2-D tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT32}, of shape
+ * [num_units, input_size], where “num_units” corresponds to the
+ * number of units.
* * 2: weights_time.
- * A 2-D tensor of type T, of shape [num_units, memory_size], where
- * “memory_size” corresponds to the fixed-size of the memory.
+ * A 2-D tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT32}, of shape
+ * [num_units, memory_size], where “memory_size” corresponds to the
+ * fixed-size of the memory.
* * 3: bias.
- * An optional 1-D tensor of type T, of shape [num_units].
+ * An optional 1-D tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT32},
+ * of shape [num_units].
* * 4: state (in).
- * A 2-D tensor of type T, of shape [batch_size, (memory_size - 1) * num_units * rank].
+ * A 2-D tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT32}, of shape
+ * [batch_size, (memory_size - 1) * num_units * rank].
* * 5: rank.
* The rank of the SVD approximation.
* * 6: fused_activation_function.
- * An optional {@link FuseCode} value indicating the activation function.
- * If “NONE” is specified then it results in a linear activation.
+ * An optional {@link FuseCode} value indicating the
+ * activation function. If “NONE” is specified then it results in a
+ * linear activation.
*
* Outputs:
* * 0: state (out).
- * A 2-D tensor of type T, of shape [batch_size, (memory_size - 1) * num_units * rank].
+ * A 2-D tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT32}, of shape
+ * [batch_size, (memory_size - 1) * num_units * rank].
* * 1: output.
- * A 2-D tensor of type T, of shape [batch_size, num_units].
+ * A 2-D tensor of {@link ANEURALNETWORKS_TENSOR_FLOAT32}, of shape
+ * [batch_size, num_units].
*/
ANEURALNETWORKS_SVDF = 27,
- /** Computes hyperbolic tangent of input tensor element-wise.
+ /**
+ * Computes hyperbolic tangent of input tensor element-wise.
*
* The output is calculated using this formula:
*
* output = tanh(input)
*
- * Supported tensor types:
+ * Supported tensor {@link OperandCode}:
* * {@link ANEURALNETWORKS_TENSOR_FLOAT32}
*
* Supported tensor rank: up to 4.
@@ -1225,87 +1411,95 @@ typedef enum {
/**
* BatchToSpace for N-dimensional tensors.
*
- * This operation reshapes the batch dimension (dimension 0) into M + 1 dimensions of shape
- * block_shape + [batch], interleaves these blocks back into the grid defined by the
- * spatial dimensions [1, ..., M], to obtain a result with the same rank as the input.
+ * This operation reshapes the batch dimension (dimension 0) into M + 1
+ * dimensions of shape block_shape + [batch], interleaves these blocks back
+ * into the grid defined by the spatial dimensions [1, ..., M], to obtain a
+ * result with the same rank as the input.
*
* This is the reverse of SpaceToBatch.
*
- * Supported tensor types:
+ * Supported tensor {@link OperandCode}:
* * {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
*
* Supported tensor rank: 4
*
* Inputs:
- * 0: An n-D tensor, specifying the tensor to be reshaped
- * 1: A 1-D Tensor of type TENSOR_INT32, the block sizes for each spatial dimension of the
- * input tensor. All values must be >= 1.
+ * * 0: An n-D tensor, specifying the tensor to be reshaped
+ * * 1: A 1-D Tensor of {@link ANEURALNETWORKS_TENSOR_INT32}, the block
+ * sizes for each spatial dimension of the input tensor. All values
+ * must be >= 1.
*
* Outputs:
- * 0: A tensor of the same type as input0.
+ * * 0: A tensor of the same {@link OperandCode} as input0.
*/
ANEURALNETWORKS_BATCH_TO_SPACE_ND = 29,
/**
* Element-wise division of two tensors.
*
- * Takes two input tensors of identical type and compatible dimensions. The output
- * is the result of dividing the first input tensor by the second, optionally
- * modified by an activation function.
+ * Takes two input tensors of identical {@link OperandCode} and compatible
+ * dimensions. The output is the result of dividing the first input tensor
+ * by the second, optionally modified by an activation function.
*
* Two dimensions are compatible when:
* 1. they are equal, or
* 2. one of them is 1
*
- * The size of the output is the maximum size along each dimension of the input operands.
- * It starts with the trailing dimensions, and works its way forward.
+ * The size of the output is the maximum size along each dimension of the
+ * input operands. It starts with the trailing dimensions, and works its way
+ * forward.
*
* Example:
* input1.dimension = {4, 1, 2}
* input2.dimension = {5, 4, 3, 1}
* output.dimension = {5, 4, 3, 2}
*
- * Supported tensor types:
+ * Supported tensor {@link OperandCode}:
* * {@link ANEURALNETWORKS_TENSOR_FLOAT32}
*
* Supported tensor rank: up to 4
*
* Inputs:
- * 0: An n-D tensor, specifying the first input.
- * 1: A tensor of the same type, and compatible dimensions as input0.
- * 2: An INT32 value, and has to be one of the {@link FusedActivationFunc} values.
- * Specifies the activation to invoke on the result of each addition.
+ * * 0: An n-D tensor, specifying the first input.
+ * * 1: A tensor of the same {@link OperandCode}, and compatible dimensions
+ * as input0.
+ * * 2: An {@link ANEURALNETWORKS_INT32} scalar, and has to be one of the
+ * {@link FuseCode} values. Specifies the activation to
+ * invoke on the result.
*
* Outputs:
- * 0: A tensor of the same type as input0.
+ * * 0: A tensor of the same {@link OperandCode} as input0.
*/
ANEURALNETWORKS_DIV = 30,
/**
* Computes the mean of elements across dimensions of a tensor.
*
- * Reduces the input tensor along the given dimensions to reduce. Unless keep_dims
- * is true, the rank of the tensor is reduced by 1 for each entry in axis.
- * If keep_dims is true, the reduced dimensions are retained with length 1.
+ * Reduces the input tensor along the given dimensions to reduce. Unless
+ * keep_dims is true, the rank of the tensor is reduced by 1 for each entry
+ * in axis. If keep_dims is true, the reduced dimensions are retained with
+ * length 1.
*
- * If dimensions to reduce have no entries, all dimensions are reduced, and a tensor with
- * a single element is returned.
+ * If dimensions to reduce have no entries, all dimensions are reduced, and
+ * a tensor with a single element is returned.
*
- * Supported tensor types:
+ * Supported tensor {@link OperandCode}:
* * {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
*
* Supported tensor rank: up to 4
*
* Inputs:
- * 0: A tensor, specifying the input.
- * 1: A 1-D Tensor of type TENSOR_INT32. The dimensions to reduce. If None (the default),
- * reduces all dimensions. Must be in the range [-rank(input_tensor), rank(input_tensor)).
- * 2: An INT32 value, keep_dims. If positive, retains reduced dimensions with length 1.
+ * * 0: A tensor, specifying the input.
+ * * 1: A 1-D Tensor of {@link ANEURALNETWORKS_TENSOR_INT32}. The dimensions
+ * to reduce. If None (the default), reduces all dimensions. Must be in
+ * the range [-rank(input_tensor), rank(input_tensor)).
+ * * 2: An {@link ANEURALNETWORKS_INT32} scalar, keep_dims. If positive,
+ * retains reduced dimensions with length 1.
*
* Outputs:
- * 0: A tensor of the same type as input0.
+ * * 0: A tensor of the same {@link OperandCode} as input0.
*/
ANEURALNETWORKS_MEAN = 31,
@@ -1314,21 +1508,30 @@ typedef enum {
*
* This operation pads a tensor according to the specified paddings.
*
- * Supported tensor types:
+ * Supported tensor {@link OperandCode}:
* * {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
*
* Supported tensor rank: up to 4
*
* Inputs:
- * 0: An n-D tensor, specifying the tensor to be padded.
- * 1: A 2-D Tensor of type TENSOR_INT32, the paddings for each spatial dimension of the
- * input tensor. The shape of the tensor must be {rank(input0), 2}.
- * padding[i, 0] specifies the number of element to be padded in the front of dimension i.
- * padding[i, 1] specifies the number of element to be padded after the end of dimension i.
+ * * 0: An n-D tensor, specifying the tensor to be padded.
+ * * 1: A 2-D Tensor of {@link ANEURALNETWORKS_TENSOR_INT32}, the paddings
+ * for each spatial dimension of the input tensor. The shape of the
+ * tensor must be {rank(input0), 2}.
+ * padding[i, 0] specifies the number of elements to be padded in the
+ * front of dimension i.
+ * padding[i, 1] specifies the number of elements to be padded after the
+ * end of dimension i.
*
* Outputs:
- * 0: A tensor of the same type as input0.
+ * * 0: A tensor of the same {@link OperandCode} as input0. The
+ * output tensor has the same rank as input0, and each
+ * dimension of the output tensor has the same size as the
+ * corresponding dimension of the input tensor plus the size
+ * of the padding:
+ * output0.dimension[i] =
+ * padding[i, 0] + input0.dimension[i] + padding[i, 1]
*/
ANEURALNETWORKS_PAD = 32,
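The output shape rule above is purely additive per dimension; a sketch (illustrative helper):

    /* output.dimension[i] = padding[i][0] + input.dimension[i] + padding[i][1] */
    static void pad_output_shape(const unsigned* in_dims, int rank,
                                 const int padding[][2], unsigned* out_dims) {
        for (int i = 0; i < rank; ++i)
            out_dims[i] = (unsigned)padding[i][0] + in_dims[i]
                        + (unsigned)padding[i][1];
    }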
@@ -1336,149 +1539,169 @@ typedef enum {
/**
* SpaceToBatch for N-Dimensional tensors.
*
- * This operation divides "spatial" dimensions [1, ..., M] of the input into a grid of blocks
- * of shape block_shape, and interleaves these blocks with the "batch" dimension (0) such that
- * in the output, the spatial dimensions [1, ..., M] correspond to the position within the grid,
- * and the batch dimension combines both the position within a spatial block and the original
- * batch position. Prior to division into blocks, the spatial dimensions of the input are
- * optionally zero padded according to paddings.
+ * This operation divides "spatial" dimensions [1, ..., M] of the input into
+ * a grid of blocks of shape block_shape, and interleaves these blocks with
+ * the "batch" dimension (0) such that in the output, the spatial dimensions
+ * [1, ..., M] correspond to the position within the grid, and the batch
+ * dimension combines both the position within a spatial block and the
+ * original batch position. Prior to division into blocks, the spatial
+ * dimensions of the input are optionally zero padded according to paddings.
*
- * Supported tensor types:
+ * Supported tensor {@link OperandCode}:
* * {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
*
* Supported tensor rank: 4
*
* Inputs:
- * 0: An n-D tensor, specifying the input.
- * 1: A 1-D Tensor of type TENSOR_INT32, the block sizes for each spatial dimension of the
- * input tensor. All values must be >= 1.
- * 2: A 2-D Tensor of type TENSOR_INT32, the paddings for each spatial diemension of the
- * input tensor. All values must be >= 0. The shape of the tensor must be {rank(input0), 2}.
- * padding[i, 0] specifies the number of element to be padded in the front of dimension i.
- * padding[i, 1] specifies the number of element to be padded after the end of dimension i.
+ * * 0: An n-D tensor, specifying the input.
+ * * 1: A 1-D Tensor of {@link ANEURALNETWORKS_TENSOR_INT32}, the block
+ * sizes for each spatial dimension of the input tensor. All values
+ * must be >= 1.
+ * * 2: A 2-D Tensor of {@link ANEURALNETWORKS_TENSOR_INT32}, the paddings
+ * for each spatial dimension of the input tensor. All values must be
+ * >= 0. The shape of the tensor must be {rank(input0), 2}.
+ * padding[i, 0] specifies the number of elements to be padded in the
+ * front of dimension i.
+ * padding[i, 1] specifies the number of elements to be padded after the
+ * end of dimension i.
*
* Outputs:
- * 0: A tensor of the same type as input0.
+ * * 0: A tensor of the same {@link OperandCode} as input0.
*/
ANEURALNETWORKS_SPACE_TO_BATCH_ND = 33,
/**
* Removes dimensions of size 1 from the shape of a tensor.
*
- * Given a tensor input, this operation returns a tensor of the same type with all
- * dimensions of size 1 removed. If you don't want to remove all size 1 dimensions,
- * you can remove specific size 1 dimensions by specifying the axes (input1).
+ * Given a tensor input, this operation returns a tensor of the same
+ * {@link OperandCode} with all dimensions of size 1 removed. If you don't
+ * want to remove all size 1 dimensions, you can remove specific size 1
+ * dimensions by specifying the axes (input1).
*
- * Supported tensor types:
+ * Supported tensor {@link OperandCode}:
* * {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
*
* Supported tensor rank: up to 4
*
* Inputs:
- * 0: An n-D tensor, the tensor to be squeezed.
- * 1: An optional 1-D tensor of type TENSOR_INT32. The dimensions to squeeze. If specified
- * only squeezes the dimensions listed. Otherwise, squeezes all dimensions.
- * The dimension index starts at 0. An error will be reported if squeezing a dimension that
- * is not 1.
+ * * 0: An n-D tensor, the tensor to be squeezed.
+ * * 1: An optional 1-D tensor of {@link ANEURALNETWORKS_TENSOR_INT32}. The
+ * dimensions to squeeze. If specified only squeezes the dimensions
+ * listed. Otherwise, squeezes all dimensions. The dimension index
+ * starts at 0. An error must be reported if squeezing a dimension that
+ * is not 1.
*
* Outputs:
- * 0: A tensor of the same type as input0. Contains the same data as input, but has one or more
- * dimensions of size 1 removed.
+ * * 0: A tensor of the same {@link OperandCode} as input0. Contains the
+ * same data as input, but has one or more dimensions of size 1
+ * removed.
*/
ANEURALNETWORKS_SQUEEZE = 34,
/**
* Extracts a strided slice of a tensor.
*
- * Roughly speaking, this op extracts a slice of size (end - begin) / stride from the given
- * input tensor. Starting at the location specified by begin the slice continues by adding
- * stride to the index until all dimensions are not less than end. Note that a stride can
- * be negative, which causes a reverse slice.
+ * Roughly speaking, this op extracts a slice of size (end - begin) / stride
+ * from the given input tensor. Starting at the location specified by begin
+ * the slice continues by adding stride to the index until all dimensions
+ * are not less than end. Note that a stride can be negative, which causes a
+ * reverse slice.
*
- * Supported tensor types:
+ * Supported tensor {@link OperandCode}:
* * {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
*
* Supported tensor rank: up to 4
*
* Inputs:
- * 0: An n-D tensor, specifying the tensor to be sliced.
- * 1: A 1-D Tensor of type TENSOR_INT32, the starts of the dimensions of the input
- * tensor to be sliced. The length must be of rank(input0).
- * 2: A 1-D Tensor of type TENSOR_INT32, the ends of the dimensions of the input
- * tensor to be sliced. The length must be of rank(input0).
- * 3: A 1-D Tensor of type TENSOR_INT32, the strides of the dimensions of the input
- * tensor to be sliced. The length must be of rank(input0).
- * 4: An INT32 value, begin_mask. If the ith bit of begin_mask is set, begin[i] is ignored
- * and the fullest possible range in that dimension is used instead.
- * 5: An INT32 value, end_mask. If the ith bit of end_mask is set, end[i] is ignored and
- * the fullest possible range in that dimension is used instead.
- * 6: An INT32 value, shrink_axis_mask. An int32 mask. If the ith bit of shrink_axis_mask is
- * set, it implies that the ith specification shrinks the dimensionality by 1. A slice of
- * size 1 starting from begin[i] in the dimension will be preserved.
+ * * 0: An n-D tensor, specifying the tensor to be sliced.
+ * * 1: A 1-D Tensor of {@link ANEURALNETWORKS_TENSOR_INT32}, the starts of
+ * the dimensions of the input tensor to be sliced. The length must be
+ * of rank(input0).
+ * * 2: A 1-D Tensor of {@link ANEURALNETWORKS_TENSOR_INT32}, the ends of
+ * the dimensions of the input tensor to be sliced. The length must be
+ * of rank(input0).
+ * * 3: A 1-D Tensor of {@link ANEURALNETWORKS_TENSOR_INT32}, the strides of
+ * the dimensions of the input tensor to be sliced. The length must be
+ * of rank(input0).
+ * * 4: An {@link ANEURALNETWORKS_INT32} scalar, begin_mask. If the ith bit
+ * of begin_mask is set, begin[i] is ignored and the fullest possible
+ * range in that dimension is used instead.
+ * * 5: An {@link ANEURALNETWORKS_INT32} scalar, end_mask. If the ith bit of
+ * end_mask is set, end[i] is ignored and the fullest possible range in
+ * that dimension is used instead.
+ * * 6: An {@link ANEURALNETWORKS_INT32} scalar, shrink_axis_mask. An int32
+ * mask. If the ith bit of shrink_axis_mask is set, it implies that the
+ * ith specification shrinks the dimensionality by 1. A slice of size 1
+ * starting from begin[i] in the dimension must be preserved.
*
* Outputs:
- * 0: A tensor of the same type as input0.
+ * * 0: A tensor of the same {@link OperandCode} as input0.
*/
ANEURALNETWORKS_STRIDED_SLICE = 35,
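Ignoring the begin/end/shrink-axis masks and assuming a positive stride, the extent of the slice
along one dimension can be counted as below (illustrative only; a negative stride walks the
dimension in reverse and is not shown):

    /* Number of indices begin, begin + stride, ... that are < end. */
    static int strided_slice_dim(int begin, int end, int stride) {
        int count = 0;
        for (int i = begin; i < end; i += stride) ++count;
        return count;
    }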
/**
* Element-wise subtraction of two tensors.
*
- * Takes two input tensors of identical type and compatible dimensions. The output
- * is the result of subtracting the second input tensor from the first one, optionally
- * modified by an activation function.
+ * Takes two input tensors of identical {@link OperandCode} and compatible
+ * dimensions. The output is the result of subtracting the second input
+ * tensor from the first one, optionally modified by an activation function.
*
* Two dimensions are compatible when:
* 1. they are equal, or
* 2. one of them is 1
*
- * The size of the output is the maximum size along each dimension of the input operands.
- * It starts with the trailing dimensions, and works its way forward.
+ * The size of the output is the maximum size along each dimension of the
+ * input operands. It starts with the trailing dimensions, and works its way
+ * forward.
*
* Example:
* input1.dimension = {4, 1, 2}
* input2.dimension = {5, 4, 3, 1}
* output.dimension = {5, 4, 3, 2}
*
- * Supported tensor types:
+ * Supported tensor {@link OperandCode}:
* * {@link ANEURALNETWORKS_TENSOR_FLOAT32}
*
* Supported tensor rank: up to 4
*
* Inputs:
- * 0: An n-D tensor, specifying the first input.
- * 1: A tensor of the same type, and compatible dimensions as input0.
- * 2: An INT32 value, and has to be one of the {@link FusedActivationFunc} values.
- * Specifies the activation to invoke on the result of each addition.
+ * * 0: An n-D tensor, specifying the first input.
+ * * 1: A tensor of the same {@link OperandCode}, and compatible dimensions
+ * as input0.
+ * * 2: An {@link ANEURALNETWORKS_INT32} scalar, and has to be one of the
+ * {@link FuseCode} values. Specifies the activation to
+ * invoke on the result.
*
* Outputs:
- * 0: A tensor of the same type as input0.
+ * * 0: A tensor of the same {@link OperandCode} as input0.
*/
ANEURALNETWORKS_SUB = 36,
/**
- * Transposes the input tensor, permuting the dimensions according to the perm tensor.
+ * Transposes the input tensor, permuting the dimensions according to the
+ * perm tensor.
*
- * The returned tensor's dimension i corresponds to the input dimension perm[i].
- * If perm is not given, it is set to (n-1...0), where n is the rank of the input tensor.
- * Hence by default, this operation performs a regular matrix transpose on 2-D input Tensors.
+ * The returned tensor's dimension i corresponds to the input dimension
+ * perm[i]. If perm is not given, it is set to (n-1...0), where n is the
+ * rank of the input tensor. Hence by default, this operation performs a
+ * regular matrix transpose on 2-D input Tensors.
*
- * Supported tensor types:
+ * Supported tensor {@link OperandCode}:
* * {@link ANEURALNETWORKS_TENSOR_FLOAT32}
* * {@link ANEURALNETWORKS_TENSOR_QUANT8_ASYMM}
*
* Supported tensor rank: up to 4
*
* Inputs:
- * 0: An n-D tensor, specifying the tensor to be transposed.
- * 1: An optional 1-D Tensor of type TENSOR_INT32, the permutation of the dimensions of the
- * input tensor.
+ * * 0: An n-D tensor, specifying the tensor to be transposed.
+ * * 1: An optional 1-D Tensor of {@link ANEURALNETWORKS_TENSOR_INT32},
+ * the permutation of the dimensions of the input tensor.
*
* Outputs:
- * 0: A tensor of the same type as input0.
+ * * 0: A tensor of the same {@link OperandCode} as input0.
*/
ANEURALNETWORKS_TRANSPOSE = 37,
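The output shape follows directly from perm: dimension i of the result is dimension perm[i] of the
input, and an omitted perm defaults to reversing the dimensions. A sketch (illustrative helper):

    #include <stddef.h>

    static void transpose_output_shape(const unsigned* in_dims, int rank,
                                       const int* perm /* may be NULL */,
                                       unsigned* out_dims) {
        for (int i = 0; i < rank; ++i) {
            int p = perm ? perm[i] : (rank - 1 - i); /* default reverses dims */
            out_dims[i] = in_dims[p];
        }
    }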
} OperationCode;
@@ -1561,8 +1784,8 @@ typedef enum {
ANEURALNETWORKS_UNEXPECTED_NULL = 3,
ANEURALNETWORKS_BAD_DATA = 4,
ANEURALNETWORKS_OP_FAILED = 5,
- ANEURALNETWORKS_UNMAPPABLE = 5,
ANEURALNETWORKS_BAD_STATE = 6,
+ ANEURALNETWORKS_UNMAPPABLE = 7,
} ResultCode;
/**
@@ -1603,6 +1826,12 @@ typedef struct ANeuralNetworksMemory ANeuralNetworksMemory;
*
*
+ * An output buffer or memory region must not overlap with any
+ * other output buffer or memory region, with an input buffer or
+ * memory region, or with an operand value in a memory object
+ * ({@link ANeuralNetworksModel_setOperandValueFromMemory}).
 * An execution cannot be modified once {@link ANeuralNetworksExecution_startCompute}
 * has been called on it.
 *
@@ -1682,18 +1916,55 @@ typedef struct ANeuralNetworksCompilation ANeuralNetworksCompilation;
 * thread to use {@link ANeuralNetworksEvent_wait} at the same time.
 *
 * It is also the application's responsibility to ensure that there are no other
- * uses of the request after calling {@link ANeuralNetworksExecution_free}.
+ * uses of the execution after calling {@link ANeuralNetworksExecution_free}.
 */
typedef struct ANeuralNetworksExecution ANeuralNetworksExecution;
/**
 * ANeuralNetworksOperandType describes the type of an operand.
 * This structure is used to describe both scalars and tensors.
+ *
+ * A tensor operand type must have a specified rank (number of
+ * dimensions) but may have any of its dimensions unspecified.
+ *
+ * A tensor operand type with all dimensions specified is "fully
+ * specified". Whenever possible (i.e., whenever the dimensions are
+ * known at model construction time), a tensor operand type should be
+ * (but is not required to be) fully specified, in order to enable the
+ * best possible performance.
+ *
+ * If a tensor operand's type is not fully specified, the dimensions
+ * of the operand are deduced from the operand types and values of the
+ * operation for which that operand is an output.
+ *
+ * In the following situations, a tensor operand type must be fully
+ * specified:
+ * Every operand must be referenced in exactly one of the following
+ * ways:
+ * An operand that is identified as a model input or as a constant
+ * must not also be identified as a model output with
+ * {@link ANeuralNetworksModel_identifyInputsAndOutputs}.
+ *
+ * To build a model that can accommodate inputs of various sizes, as
+ * you may want to do for a CNN, leave unspecified the dimensions that
+ * will vary at run time. If you do so, fully specify dimensions
+ * when calling {@link ANeuralNetworksExecution_setInput} or
+ * {@link ANeuralNetworksExecution_setInputFromMemory}.
 *
 * Attempting to modify a model once {@link ANeuralNetworksModel_finish} has been
 * called will return an error.
@@ -1827,7 +2116,9 @@ int ANeuralNetworksModel_finish(ANeuralNetworksModel* model);
 *
 * @param model The model to be modified.
 * @param type The {@link ANeuralNetworksOperandType} that describes the shape
- * of the operand.
+ * of the operand. Neither the {@link ANeuralNetworksOperandType}
+ * nor the dimensions it points to need to outlive the call to
+ * {@link ANeuralNetworksModel_addOperand}.
 *
 * @return ANEURALNETWORKS_NO_ERROR if successful.
 */
@@ -1903,7 +2194,7 @@ int ANeuralNetworksModel_setOperandValueFromMemory(ANeuralNetworksModel* model,
 * Add an operation to a model.
 *
 * @param model The model to be modified.
- * @param type The type of the operation.
+ * @param type The {@link ANeuralNetworksOperationType} of the operation.
 * @param inputCount The number of entries in the inputs array.
 * @param inputs An array of indexes identifying each operand.
 * @param outputCount The number of entries in the outputs array.
@@ -1925,7 +2216,8 @@ int ANeuralNetworksModel_addOperation(ANeuralNetworksModel* model,
 const uint32_t* outputs);
/**
- * Specifies which operands will be the model's inputs and outputs.
+ * Specifies which operands will be the model's inputs and
+ * outputs. Every model must have at least one input and one output.
 *
 * An operand cannot be used for both input and output. Doing so will
 * return an error.
@@ -2096,12 +2388,18 @@ void ANeuralNetworksExecution_free(ANeuralNetworksExecution* execution);
 * @param index The index of the input argument we are setting. It is
 * an index into the lists passed to
 * {@link ANeuralNetworksModel_identifyInputsAndOutputs}. It is not
- * the index associated with {@link ANeuralNetworksModel_addOperand}.
- * @param type The type of the operand. This should be used to specify the
- * dimensions that were set to 0 when the operand was added to the
- * model. All other properties of the type must be the same as
- * specified in the model. If the type is the same as specified
- * when the model was built, NULL can be passed.
+ * the index associated with
+ * {@link ANeuralNetworksModel_addOperand}.
+ * @param type The {@link ANeuralNetworksOperandType} of the
+ * operand. Unless the input is omitted, this should be
+ * used to specify the dimensions that were left
+ * unspecified when the operand was added to the
+ * model. All other properties of the type must be the
+ * same as specified in the model. If the type is the same
+ * as specified when the model was built, NULL can be
+ * passed. Neither the {@link ANeuralNetworksOperandType}
+ * nor the dimensions it points to need to outlive the call
+ * to {@link ANeuralNetworksExecution_setInput}.
 * @param buffer The buffer containing the data.
 * @param length The length in bytes of the buffer.
 *
@@ -2129,11 +2427,15 @@ int ANeuralNetworksExecution_setInput(ANeuralNetworksExecution* execution, int32
 * an index into the lists passed to
 * {@link ANeuralNetworksModel_identifyInputsAndOutputs}. It is not
 * the index associated with {@link ANeuralNetworksModel_addOperand}.
- * @param type The type of the operand. This can be used to specify the
- * dimensions that were set to 0 when the operand was added to the
- * model. All other values must be the same as specified in the
- * model. If the type is the same as specified when the model
- * was built, NULL can be passed.
+ * @param type The {@link ANeuralNetworksOperandType} of the
+ * operand. This should be used to specify the dimensions
+ * that were left unspecified when the operand was added
+ * to the model. All other properties of the type must be
+ * the same as specified in the model. If the type is the
+ * same as specified when the model was built, NULL can be
+ * passed. Neither the {@link ANeuralNetworksOperandType}
+ * nor the dimensions it points to need to outlive the call
+ * to {@link ANeuralNetworksExecution_setInputFromMemory}.
 * @param memory The memory containing the data.
 * @param offset This specifies the location of the data within the memory.
 * The offset is in bytes from the start of memory.
@@ -2163,11 +2465,16 @@ int ANeuralNetworksExecution_setInputFromMemory(ANeuralNetworksExecut
 * an index into the lists passed to
 * {@link ANeuralNetworksModel_identifyInputsAndOutputs}. It is not
 * the index associated with {@link ANeuralNetworksModel_addOperand}.
- * @param type The type of the operand. This can be used to specify the
- * dimensions that were set to 0 when the operand was added to the
- * model. All other values must be the same as specified in the
- * model. If the type is the same as specified when the model
- * was built, NULL can be passed.
+ * @param type The {@link ANeuralNetworksOperandType} of the
+ * operand. Unless the output is omitted, this should be
+ * used to specify the dimensions that were left
+ * unspecified when the operand was added to the
+ * model. All other properties of the type must be the
+ * same as specified in the model. If the type is the same
+ * as specified when the model was built, NULL can be
+ * passed. Neither the {@link ANeuralNetworksOperandType}
+ * nor the dimensions it points to need to outlive the call
+ * to {@link ANeuralNetworksExecution_setOutput}.
 * @param buffer The buffer where the data is to be written.
 * @param length The length in bytes of the buffer.
 *
@@ -2195,11 +2502,15 @@ int ANeuralNetworksExecution_setOutput(ANeuralNetworksExecution* execution, int3
 * an index into the lists passed to
 * {@link ANeuralNetworksModel_identifyInputsAndOutputs}. It is not
 * the index associated with {@link ANeuralNetworksModel_addOperand}.
- * @param type The type of the operand. This can be used to specify the
- * dimensions that were set to 0 when the operand was added to the
- * model. All other values must be the same as specified in the
- * model. If the type is the same as specified when the model
- * was built, NULL can be passed.
+ * @param type The {@link ANeuralNetworksOperandType} of the operand. This should be
+ * used to specify the dimensions that were left
+ * unspecified when the operand was added to the
+ * model. All other properties of the type must be the
+ * same as specified in the model. If the type is the same
+ * as specified when the model was built, NULL can be
+ * passed. Neither the {@link ANeuralNetworksOperandType}
+ * nor the dimensions it points to need to outlive the call
+ * to {@link ANeuralNetworksExecution_setOutputFromMemory}.
 * @param memory The memory where the data is to be stored.
 * @param offset This specifies the location of the data within the memory.
 * The offset is in bytes from the start of memory.
--
2.7.4
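Tying the execution-time shape rules above together: when an input operand was added with some
dimensions left unspecified, a fully specified {@link ANeuralNetworksOperandType} can be passed at
{@link ANeuralNetworksExecution_setInput} time. The sketch below assumes input index 0 and an NHWC
float input whose batch size is chosen at run time; the shapes are illustrative and error handling
is omitted.

    #include <stddef.h>
    #include <stdint.h>
    #include <android/NeuralNetworks.h>

    static int bind_input(ANeuralNetworksExecution* execution,
                          const float* buffer, uint32_t batch) {
        uint32_t dims[4] = {batch, 224, 224, 3}; /* now fully specified */
        ANeuralNetworksOperandType type = {
            .type = ANEURALNETWORKS_TENSOR_FLOAT32,
            .dimensionCount = 4,
            .dimensions = dims,
            .scale = 0.0f,
            .zeroPoint = 0};
        /* Neither the type nor the dims array needs to outlive this call. */
        return ANeuralNetworksExecution_setInput(
            execution, 0, &type, buffer,
            (size_t)batch * 224 * 224 * 3 * sizeof(float));
    }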