**Versioned name**: *ReLU-1*
-**Category**: *Activation*
+**Category**: *Activation function*
-**Short description**: [Reference](http://caffe.berkeleyvision.org/tutorial/layers/relu.html)
+**Short description**: Element-wise *ReLU* (rectified linear unit) activation function. ([Reference](http://caffe.berkeleyvision.org/tutorial/layers/relu.html))
**Detailed description**: [Reference](https://github.com/Kulbear/deep-learning-nano-foundation/wiki/ReLU-and-Softmax-Activation-Functions#rectified-linear-units)
**Mathematical Formulation**
-\f[
-Y_{i}^{( l )} = max(0, Y_{i}^{( l - 1 )})
-\f]
+For each element from the input tensor, the operation calculates the
+ corresponding element in the output tensor with the following formula:
+ \f[
+ Y_{i}^{( l )} = max(0, Y_{i}^{( l - 1 )})
+ \f]
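+ For example, a negative element such as -2.5 is clamped to zero, while a
+ non-negative element such as 3.1 passes through unchanged:
+ \f[
+ max(0, -2.5) = 0, \quad max(0, 3.1) = 3.1
+ \f]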
**Inputs**:
-* **1**: Multidimensional input tensor. Required.
+* **1**: Multidimensional input tensor *x* of any supported numeric type. Required.
+
+**Outputs**:
+
+* **1**: Result of the ReLU function applied to the input tensor *x*. A tensor with shape and type matching the input tensor.
**Example**
out[i] = arg[i] > zero ? arg[i] : zero;
}
}
- template <typename T>
- void relu_backprop(const T* arg, const T* delta_arg, T* out, size_t count)
- {
- T zero = 0;
- for (size_t i = 0; i < count; i++)
- {
- out[i] = arg[i] > zero ? delta_arg[i] : zero;
- }
- }
}
}
}
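
A small, self-contained usage sketch of the element-wise kernel style shown above. The surrounding function signature is not visible in this excerpt, so the sketch defines its own `relu` helper with an assumed signature (input pointer, output pointer, element count) that mirrors the visible loop body, and instantiates it for `float` and `int32_t` to illustrate the "any supported numeric type" wording of the input description. Illustration only, not part of the specification.

```cpp
#include <cstddef>
#include <cstdint>
#include <iostream>

// Assumed shape of the reference kernel: element-wise max(0, x) over a
// flat buffer of `count` elements, templated on the element type.
template <typename T>
void relu(const T* arg, T* out, std::size_t count)
{
    T zero = 0;
    for (std::size_t i = 0; i < count; i++)
    {
        out[i] = arg[i] > zero ? arg[i] : zero;
    }
}

int main()
{
    // Hypothetical input values chosen for the example.
    const float f_in[] = {-3.5f, 1.25f, 0.0f};
    float f_out[3];
    relu(f_in, f_out, 3); // f_out -> {0.0, 1.25, 0.0}

    const std::int32_t i_in[] = {-7, 4, -1};
    std::int32_t i_out[3];
    relu(i_in, i_out, 3); // i_out -> {0, 4, 0}

    for (float v : f_out) std::cout << v << ' ';
    for (std::int32_t v : i_out) std::cout << v << ' ';
    std::cout << '\n';
    return 0;
}
```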