[layer] Forwarding implementation for mol attention layer
author    Parichay Kapoor <pk.kapoor@samsung.com>
          Fri, 12 Nov 2021 01:49:49 +0000 (10:49 +0900)
committer Jijoong Moon <jijoong.moon@samsung.com>
          Wed, 1 Dec 2021 09:53:03 +0000 (18:53 +0900)
commit 96e97538989d10e95154bd4d8af18215c704466b
tree   a1e9ebcde73b7406b85ec0a1b331d793f5cd3525
parent 1efe39fb6ff58f72cd2edb9676f5f622fed9a944
[layer] Forwarding implementation for mol attention layer

This patch provides the forwarding implementation for the MoL
(mixture of logistics) attention layer. The forwarding adds new
underlying requirements on the tensor operations: strided variants
of operations such as softmax and divide, which will be supported
soon.
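
To illustrate the kind of strided operation this forwarding needs,
below is a minimal standalone C++ sketch of a softmax applied along a
strided axis of a flat buffer. The function name and the flat-buffer
layout are hypothetical and not part of nntrainer's Tensor API; the
sketch only shows the access pattern such an operation requires.

    #include <algorithm>
    #include <cassert>
    #include <cmath>
    #include <cstddef>
    #include <iostream>
    #include <vector>

    // Hypothetical helper: softmax over `len` elements starting at
    // `offset`, stepping `stride` through a flat buffer. This mimics
    // applying softmax along a non-contiguous axis without making a
    // contiguous copy first.
    void strided_softmax(std::vector<float> &buf, std::size_t offset,
                         std::size_t stride, std::size_t len) {
      assert(offset + (len - 1) * stride < buf.size());
      // subtract the running max for numerical stability
      float max_val = buf[offset];
      for (std::size_t i = 1; i < len; ++i)
        max_val = std::max(max_val, buf[offset + i * stride]);
      float sum = 0.0f;
      for (std::size_t i = 0; i < len; ++i) {
        float e = std::exp(buf[offset + i * stride] - max_val);
        buf[offset + i * stride] = e;
        sum += e;
      }
      for (std::size_t i = 0; i < len; ++i)
        buf[offset + i * stride] /= sum; // strided divide by the sum
    }

    int main() {
      // 2x3 row-major matrix; softmax down each column (stride = 3)
      std::vector<float> m = {1.f, 2.f, 3.f, 4.f, 5.f, 6.f};
      for (std::size_t col = 0; col < 3; ++col)
        strided_softmax(m, col, 3, 2);
      for (float v : m)
        std::cout << v << ' ';
      std::cout << '\n'; // each column now sums to 1
      return 0;
    }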

Signed-off-by: Parichay Kapoor <pk.kapoor@samsung.com>
nntrainer/layers/common_properties.h
nntrainer/layers/mol_attention_layer.cpp
nntrainer/layers/mol_attention_layer.h