From b2deee4fa0e6490a7b4d1d116f3526667f9a2784 Mon Sep 17 00:00:00 2001
From: "jijoong.moon"
Date: Fri, 16 Sep 2022 15:22:21 +0900
Subject: [PATCH] [ README ] add new features in README

Add new features:
. positional encoding layer
. multi-head attention layer
. layer normalization layer
. kld loss
. learning rate schedule

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon
---
 README.md | 17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/README.md b/README.md
index b5f69f3..50b2d70 100644
--- a/README.md
+++ b/README.md
@@ -79,6 +79,7 @@ This component defines layers which consist of a neural network model. Layers ha
 | pooling2D | Pooling2DLayer | Pooling 2D layer |
 | input | InputLayer | Input Layer. This is not always required. |
 | batch_normalization | BatchNormalizationLayer | Batch normalization layer |
+| layer_normalization | LayerNormalizationLayer | Layer normalization layer |
 | activation | ActivaitonLayer | Set by layer property |
 | addition | AdditionLayer | Add input input layers |
 | attention | AttentionLayer | Attenstion layer |
@@ -95,6 +96,7 @@ This component defines layers which consist of a neural network model. Layers ha
 | split | SplitLayer | Split layer |
 | dropout | DropOutLayer | Dropout Layer |
 | embedding | EmbeddingLayer | Embedding Layer |
+| positional_encoding | PositionalEncodingLayer | Positional Encoding Layer |
 | rnn | RNNLayer | Recurrent Layer |
 | rnncell | RNNCellLayer | Recurrent Cell Layer |
 | gru | GRULayer | Gated Recurrent Unit Layer |
@@ -103,6 +105,8 @@ This component defines layers which consist of a neural network model. Layers ha
 | lstmcell | LSTMCellLayer | Long Short-Term Memory Cell Layer |
 | zoneoutlstmcell | ZoneoutLSTMCellLayer | Zoneout Long Short-Term Memory Cell Layer |
 | time_dist | TimeDistLayer | Time distributed Layer |
+| multi_head_attention | MultiHeadAttentionLayer | Multi-Head Attention Layer |
+

 ### Supported Optimizers

@@ -113,6 +117,12 @@ NNTrainer Provides
 | sgd | Stochastic Gradient Decent | - |
 | adam | Adaptive Moment Estimation | - |

+| Keyword | Learning Rate Schedule | Description |
+|:-------:|:---:|:---:|
+| exponential | Exponential learning rate decay | - |
+| constant | Constant learning rate | - |
+| step | Piecewise-constant (step) learning rate | - |
+
 ### Supported Loss Functions

 NNTrainer provides
@@ -123,6 +133,7 @@
 | cross_softmax | CrossEntropySoftmaxLossLayer | Cross entropy softmax loss layer |
 | constant_derivative | ConstantDerivativeLossLayer | Constant derivative loss layer |
 | mse | MSELossLayer | Mean square error loss layer |
+| kld | KLDLossLayer | Kullback-Leibler Divergence loss layer |

 ### Supported Activation Functions

@@ -134,9 +145,6 @@ NNTrainer provides
 | sigmoid | sigmoid function | set as layer property |
 | relu | relu function | set as layer propery |
 | softmax | softmax function | set as layer propery |
-| weight_initializer | Weight Initialization | Xavier(Normal/Uniform), LeCun(Normal/Uniform), HE(Normal/Unifor) |
-| weight_regularizer | weight decay ( L2Norm only ) | needs set weight_regularizer_param & type |
-| learnig_rate_decay | learning rate decay | need to set step |

 ### Tensor

@@ -159,8 +167,7 @@ NNTrainer provides
 | Keyword | Loss Name | Description |
 |:-------:|:---:|:---|
 | weight_initializer | Weight Initialization | Xavier(Normal/Uniform), LeCun(Normal/Uniform), HE(Normal/Unifor) |
-| weight_regularizer | weight decay ( L2Norm only ) | needs set weight_regularizer_constant & type |
-| learnig_rate_decay | learning rate decay | need to set step |
+| weight_regularizer | weight decay ( L2Norm only ) | needs weight_regularizer_param & type to be set |

 ### APIs
 Currently, we provide [C APIs](https://github.com/nnstreamer/nntrainer/blob/master/api/capi/include/nntrainer.h) for Tizen. [C++ APIs](https://github.com/nnstreamer/nntrainer/blob/master/api/ccapi/include) are also provided for other platform. Java & C# APIs will be provided soon.
-- 
2.7.4
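
For reviewers comparing the new `layer_normalization` entry with the existing `batch_normalization` one: layer normalization normalizes across each sample's feature dimension rather than across the batch. Below is a minimal C++ sketch of that standard computation; the function name, signature, and `eps` default are illustrative assumptions, not NNTrainer's `LayerNormalizationLayer` implementation.

```cpp
#include <cmath>
#include <vector>

// Minimal sketch of layer normalization over one sample's feature vector:
//   y_i = gamma_i * (x_i - mean) / sqrt(var + eps) + beta_i
// Names and the eps default are illustrative, not NNTrainer's code.
std::vector<float> layer_norm(const std::vector<float> &x,
                              const std::vector<float> &gamma,
                              const std::vector<float> &beta,
                              float eps = 1e-5f) {
  const size_t n = x.size();
  float mean = 0.0f, var = 0.0f;
  for (float v : x)
    mean += v;
  mean /= n;
  for (float v : x)
    var += (v - mean) * (v - mean);
  var /= n;

  std::vector<float> y(n);
  for (size_t i = 0; i < n; ++i)
    y[i] = gamma[i] * (x[i] - mean) / std::sqrt(var + eps) + beta[i];
  return y;
}
```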
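The `positional_encoding` keyword suggests the sinusoidal encoding from "Attention Is All You Need" (Vaswani et al., 2017). The sketch below shows that standard formulation, assuming NNTrainer follows it; the function itself is illustrative, not the layer's actual code.

```cpp
#include <cmath>
#include <vector>

// Standard sinusoidal positional encoding (Vaswani et al., 2017):
//   PE(pos, 2i)   = sin(pos / 10000^(2i / d_model))
//   PE(pos, 2i+1) = cos(pos / 10000^(2i / d_model))
// Shown only to illustrate what a positional_encoding layer computes.
std::vector<std::vector<double>> positional_encoding(int max_len,
                                                     int d_model) {
  std::vector<std::vector<double>> pe(max_len,
                                      std::vector<double>(d_model, 0.0));
  for (int pos = 0; pos < max_len; ++pos) {
    for (int i = 0; i < d_model; i += 2) {
      double angle =
        pos / std::pow(10000.0, static_cast<double>(i) / d_model);
      pe[pos][i] = std::sin(angle);
      if (i + 1 < d_model)
        pe[pos][i + 1] = std::cos(angle);
    }
  }
  return pe;
}
```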
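The three learning-rate keywords in the new table typically compute the values sketched below. The function names and the `decay_rate` / `decay_steps` / boundary parameters are illustrative assumptions; NNTrainer's actual property names may differ.

```cpp
#include <cmath>
#include <vector>

// constant: the learning rate never changes.
double constant_lr(double base_lr, int /*iter*/) { return base_lr; }

// exponential: the learning rate decays by a fixed factor every
// decay_steps iterations (decay_rate and decay_steps are assumed names).
double exponential_lr(double base_lr, int iter, double decay_rate,
                      int decay_steps) {
  return base_lr *
         std::pow(decay_rate, static_cast<double>(iter) / decay_steps);
}

// step: the learning rate is piecewise constant, moving to the next
// value each time the iteration passes a boundary. Expects boundaries
// sorted ascending and lrs.size() == boundaries.size() + 1.
double step_lr(const std::vector<double> &lrs,
               const std::vector<int> &boundaries, int iter) {
  size_t i = 0;
  while (i < boundaries.size() && iter >= boundaries[i])
    ++i;
  return lrs[i];
}
```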
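For the new `kld` entry: the Kullback-Leibler divergence between a target distribution P and a predicted distribution Q is KL(P‖Q) = Σ p_i · log(p_i / q_i). A minimal sketch of that formula follows; it is illustrative only, not NNTrainer's `KLDLossLayer`.

```cpp
#include <cmath>
#include <vector>

// KL divergence KL(P || Q) = sum_i p_i * log(p_i / q_i) for two
// discrete distributions; illustrative, not NNTrainer's KLDLossLayer.
double kl_divergence(const std::vector<double> &p,
                     const std::vector<double> &q) {
  double kl = 0.0;
  for (size_t i = 0; i < p.size(); ++i) {
    if (p[i] > 0.0) // terms with p_i == 0 contribute 0 by convention
      kl += p[i] * std::log(p[i] / q[i]);
  }
  return kl;
}
```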