# NNtrainer

[![Code Coverage](http://ci.nnstreamer.ai/nntrainer/ci/badge/codecoverage.svg)](http://ci.nnstreamer.ai/nntrainer/ci/gcov_html/index.html)
![GitHub repo size](https://img.shields.io/github/repo-size/nnstreamer/nntrainer)
![GitHub issues](https://img.shields.io/github/issues/nnstreamer/nntrainer)
![GitHub pull requests](https://img.shields.io/github/issues-pr/nnstreamer/nntrainer)
<a href="https://scan.coverity.com/projects/nnstreamer-nntrainer">
  <img alt="Coverity Scan Build Status"
       src="https://scan.coverity.com/projects/22512/badge.svg"/>
</a>
[![DailyBuild](http://ci.nnstreamer.ai/nntrainer/ci/daily-build/badge/daily_build_test_result_badge.svg)](http://ci.nnstreamer.ai/nntrainer/ci/daily-build/build_result/)

NNtrainer is a software framework for training neural network models on devices.

## Overview

NNtrainer is an open source project. The aim of NNtrainer is to develop a software framework to train neural network models on embedded devices, which have relatively limited resources. Rather than training all layers of a network from scratch, NNtrainer fine-tunes a neural network model on device with user data for personalization.

Even though NNtrainer runs on device, it provides full functionality for training models and utilizes the limited device resources efficiently. NNTrainer can train various machine learning algorithms such as k-Nearest Neighbor (k-NN), neural networks, logistic regression, reinforcement learning algorithms, recurrent networks, and more. We also provide examples for various tasks such as few-shot learning, ResNet, VGG, and product rating, and more will be added. All of these were tested on Samsung Galaxy smartphones with Android and on PCs (Ubuntu 18.04/20.04).

[ NNTrainer: Light-Weight On-Device Training Framework ](https://arxiv.org/pdf/2206.04688.pdf), arXiv, 2022 <br />
[ NNTrainer: Towards the on-device learning for personalization ](https://www.youtube.com/watch?v=HWiV7WbIM3E), Samsung Software Developer Conference 2021 (Korean) <br />
[ NNTrainer: Personalize neural networks on devices! ](https://www.youtube.com/watch?v=HKKowY78P1A), Samsung Developer Conference 2021 <br />
[ NNTrainer: "On-device learning" ](https://www.youtube.com/embed/Jy_auavraKg?start=4035&end=4080), Samsung AI Forum 2021

## Official Releases

|     | [Tizen](http://download.tizen.org/snapshots/tizen/unified/latest/repos/standard/packages/) | [Ubuntu](https://launchpad.net/~nnstreamer/+archive/ubuntu/ppa) | Android/NDK Build |
| :-- | :--: | :--: | :--: |
|     | 6.0M2 and later | 18.04 | 9/P |
| arm | [![armv7l badge](http://ci.nnstreamer.ai/nntrainer/ci/daily-build/badge/tizen.armv7l_result_badge.svg)](http://ci.nnstreamer.ai/nntrainer/ci/daily-build/build_result/) | Available | Ready |
| arm64 | [![aarch64 badge](http://ci.nnstreamer.ai/nntrainer/ci/daily-build/badge/tizen.aarch64_result_badge.svg)](http://ci.nnstreamer.ai/nntrainer/ci/daily-build/build_result/) | Available | [![android badge](http://ci.nnstreamer.ai/nntrainer/ci/daily-build/badge/arm64_v8a_android_result_badge.svg)](http://ci.nnstreamer.ai/nntrainer/ci/daily-build/build_result/) |
| x64 | [![x64 badge](http://ci.nnstreamer.ai/nntrainer/ci/daily-build/badge/tizen.x86_64_result_badge.svg)](http://ci.nnstreamer.ai/nntrainer/ci/daily-build/build_result/) | [![ubuntu badge](http://ci.nnstreamer.ai/nntrainer/ci/daily-build/badge/ubuntu_result_badge.svg)](http://ci.nnstreamer.ai/nntrainer/ci/daily-build/build_result/) | Ready |
| x86 | [![x86 badge](http://ci.nnstreamer.ai/nntrainer/ci/daily-build/badge/tizen.i586_result_badge.svg)](http://ci.nnstreamer.ai/nntrainer/ci/daily-build/build_result/) | N/A | N/A |
| Publish | [Tizen Repo](http://download.tizen.org/snapshots/tizen/unified/latest/repos/standard/packages/) | [PPA](https://launchpad.net/~nnstreamer/+archive/ubuntu/ppa) |   |
| API | C (Official) | C/C++ | C/C++ |

- Ready: The CI system ensures build-ability and unit testing. Users may easily build and execute; however, we do not provide an automated release & deployment system for these targets.
- Available: Binary packages are released and deployed automatically and periodically, along with CI tests.
- [Daily Release](http://ci.nnstreamer.ai/nntrainer/ci/daily-build/build_result/)
- SDK Support: Tizen Studio (6.0 M2+)

## Maintainer
* [Jijoong Moon](https://github.com/jijoongmoon)
* [MyungJoo Ham](https://github.com/myungjoo)
* [Geunsik Lim](https://github.com/leemgs)

## Reviewers
* [Sangjung Woo](https://github.com/again4you)
* [Wook Song](https://github.com/wooksong)
* [Jaeyun Jung](https://github.com/jaeyun-jung)
* [Hyoungjoo Ahn](https://github.com/helloahn)
* [Parichay Kapoor](https://github.com/kparichay)
* [Dongju Chae](https://github.com/dongju-chae)
* [Gichan Jang](https://github.com/gichan-jang)
* [Yongjoo Ahn](https://github.com/anyj0527)
* [Jihoon Lee](https://github.com/zhoonit)
* [Hyeonseok Lee](https://github.com/lhs8928)
* [Mete Ozay](https://github.com/meteozay)
* [Hyunil Park](https://github.com/songgot)
* [Jiho Chu](https://github.com/jihochu)
* [Yelin Jeong](https://github.com/niley7464)
* [Donghak Park](https://github.com/DonghakPark)

## Components

### Supported Layers

This component defines the layers that make up a neural network model. Each layer has its own properties to be set.

 | Keyword | Layer Class Name | Description |
 |:-------:|:---:|:---|
 | conv1d | Conv1DLayer | Convolution 1-Dimensional Layer |
 | conv2d | Conv2DLayer | Convolution 2-Dimensional Layer |
 | pooling2d | Pooling2DLayer | Pooling 2-Dimensional Layer. Supports average / max / global average / global max pooling |
 | flatten | FlattenLayer | Flatten layer |
 | fully_connected | FullyConnectedLayer | Fully connected layer |
 | input | InputLayer | Input Layer. This is not always required. |
 | batch_normalization | BatchNormalizationLayer | Batch normalization layer |
 | activation | ActivationLayer | Set by layer property |
 | addition | AdditionLayer | Add input layers |
 | attention | AttentionLayer | Attention layer |
 | centroid_knn | CentroidKNN | Centroid K-nearest neighbor layer |
 | concat | ConcatLayer | Concatenate input layers |
 | multiout | MultiOutLayer | Multi-Output Layer |
 | backbone_nnstreamer | NNStreamerLayer | Encapsulate NNStreamer as a layer |
 | backbone_tflite | TfLiteLayer | Encapsulate TFLite as a layer |
 | permute | PermuteLayer | Permute layer for transpose |
 | preprocess_flip | PreprocessFlipLayer | Preprocess random flip layer |
 | preprocess_l2norm | PreprocessL2NormLayer | Preprocess simple L2-norm layer to normalize |
 | preprocess_translate | PreprocessTranslateLayer | Preprocess translate layer |
 | reshape | ReshapeLayer | Reshape tensor dimension layer |
 | split | SplitLayer | Split layer |
 | dropout | DropOutLayer | Dropout Layer |
 | embedding | EmbeddingLayer | Embedding Layer |
 | rnn | RNNLayer | Recurrent Layer |
 | rnncell | RNNCellLayer | Recurrent Cell Layer |
 | gru | GRULayer | Gated Recurrent Unit Layer |
 | grucell | GRUCellLayer | Gated Recurrent Unit Cell Layer |
 | lstm | LSTMLayer | Long Short-Term Memory Layer |
 | lstmcell | LSTMCellLayer | Long Short-Term Memory Cell Layer |
 | zoneoutlstmcell | ZoneoutLSTMCellLayer | Zoneout Long Short-Term Memory Cell Layer |
 | time_dist | TimeDistLayer | Time distributed Layer |

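Below is a minimal sketch of how layers from the table above are created with the [C++ API](https://github.com/nnstreamer/nntrainer/blob/master/api/ccapi/include); the property strings used here (`input_shape`, `unit`, `activation`) are illustrative examples, and the available properties differ per layer.

```cpp
// Sketch only: headers live under api/ccapi/include; installed paths may differ.
#include <layer.h>

void build_layers() {
  // Layers are created by the keyword from the table, with "key=value" properties.
  auto in = ml::train::createLayer("input", {"input_shape=1:1:784"});
  auto fc = ml::train::createLayer("fully_connected",
                                   {"unit=10", "activation=softmax"});
}
```
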
### Supported Optimizers

NNTrainer provides the following optimizers:

 | Keyword | Optimizer Name | Description |
 |:-------:|:---:|:---:|
 | sgd | Stochastic Gradient Descent | - |
 | adam | Adaptive Moment Estimation | - |

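A sketch of creating an optimizer by keyword, assuming the `createOptimizer` factory of the C++ API; the property values shown are illustrative, not required defaults.

```cpp
#include <optimizer.h>

void build_optimizer() {
  // "sgd" or "adam", as listed above; properties are passed as "key=value" strings.
  auto opt = ml::train::createOptimizer("adam", {"learning_rate=0.001",
                                                 "beta1=0.9", "beta2=0.999"});
}
```
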
### Supported Loss Functions

NNTrainer provides the following loss functions:

 | Keyword | Class Name | Description |
 |:-------:|:---:|:---:|
 | cross_sigmoid | CrossEntropySigmoidLossLayer | Cross entropy sigmoid loss layer |
 | cross_softmax | CrossEntropySoftmaxLossLayer | Cross entropy softmax loss layer |
 | constant_derivative | ConstantDerivativeLossLayer | Constant derivative loss layer |
 | mse | MSELossLayer | Mean squared error loss layer |

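As the class names suggest, losses are realized as layers. A sketch of attaching one by keyword, under the assumption that loss keywords go through the same `createLayer` factory:

```cpp
#include <layer.h>

void build_loss() {
  // Mean squared error loss created by keyword; typically added as the last layer.
  auto loss = ml::train::createLayer("mse");
}
```
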
### Supported Activation Functions

NNTrainer provides the following activation functions:

 | Keyword | Activation Name | Description |
 |:-------:|:---:|:---|
 | tanh | tanh function | set as layer property |
 | sigmoid | sigmoid function | set as layer property |
 | relu | relu function | set as layer property |
 | softmax | softmax function | set as layer property |

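Activations are set as a property of another layer; the standalone `activation` keyword from the layer table can also be used. A hedged sketch with the C++ API:

```cpp
#include <layer.h>

void build_activations() {
  // Usually set as a property of an existing layer ...
  auto fc = ml::train::createLayer("fully_connected", {"unit=100", "activation=relu"});
  // ... or created as a standalone activation layer configured by property.
  auto act = ml::train::createLayer("activation", {"activation=softmax"});
}
```
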
### Tensor

Tensor is responsible for the calculations of a layer. It executes several operations such as addition, division, multiplication, dot product, data averaging, and so on. To accelerate calculation, CBLAS (C Basic Linear Algebra Subprograms, for CPU) and cuBLAS (CUDA Basic Linear Algebra Subprograms, for PCs with NVIDIA GPUs) are used for some of the operations. Later, these calculations will be optimized further.
Currently, we support a lazy calculation mode to reduce the complexity of copying tensors during calculations.

 | Keyword | Description |
 |:-------:|:---:|
 | 4D Tensor | B, C, H, W |
 | Add/sub/mul/div | - |
 | sum, average, argmax | - |
 | Dot, Transpose | - |
 | normalization, standardization | - |
 | save, read | - |

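The `Tensor` class is internal rather than part of the public API, so the snippet below is only an illustrative sketch of the lazy-calculation idea (record operations first, apply them in one pass to avoid a copy per operation); it is not NNTrainer code and all names are made up for illustration.

```cpp
#include <functional>
#include <vector>

// Toy "lazy vector": operations are queued and applied in a single pass,
// so only one working copy of the data is made regardless of chain length.
// Usage: LazyVec{{1.f, 2.f, 3.f}}.add(1.f).multiply(2.f).run();
struct LazyVec {
  std::vector<float> data;
  std::vector<std::function<void(std::vector<float> &)>> ops;

  LazyVec &add(float v) {
    ops.push_back([v](std::vector<float> &d) { for (auto &x : d) x += v; });
    return *this;
  }
  LazyVec &multiply(float v) {
    ops.push_back([v](std::vector<float> &d) { for (auto &x : d) x *= v; });
    return *this;
  }
  std::vector<float> run() const {
    std::vector<float> out = data;      // the only copy
    for (const auto &op : ops) op(out); // apply queued operations in order
    return out;
  }
};
```
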
### Others

NNTrainer also provides the following properties:

 | Keyword | Name | Description |
 |:-------:|:---:|:---|
 | weight_initializer | Weight Initialization | Xavier(Normal/Uniform), LeCun(Normal/Uniform), He(Normal/Uniform) |
 | weight_regularizer | Weight decay (L2Norm only) | requires weight_regularizer_constant & type to be set |
 | learning_rate_decay | Learning rate decay | requires the decay step to be set |

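These are also plain `key=value` properties. A sketch on a fully connected layer, assuming the C++ `createLayer` factory; the exact value strings (`xavier_uniform`, `l2norm`) are illustrative:

```cpp
#include <layer.h>

void build_regularized_layer() {
  auto fc = ml::train::createLayer(
    "fully_connected",
    {"unit=10",
     "weight_initializer=xavier_uniform",   // Xavier / LeCun / He variants
     "weight_regularizer=l2norm",           // weight decay (L2Norm only)
     "weight_regularizer_constant=0.001"});
}
```
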
### APIs
Currently, we provide [C APIs](https://github.com/nnstreamer/nntrainer/blob/master/api/capi/include/nntrainer.h) for Tizen. [C++ APIs](https://github.com/nnstreamer/nntrainer/blob/master/api/ccapi/include) are also provided for other platforms. Java & C# APIs will be provided soon.

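A rough end-to-end sketch with the C++ API (the Tizen C API in `nntrainer.h` follows the same flow with its `ml_train_*` functions); header names and properties are illustrative, and dataset setup plus error handling are omitted, so this is a sketch rather than a complete program.

```cpp
#include <layer.h>
#include <model.h>
#include <optimizer.h>

int main() {
  auto model = ml::train::createModel(ml::train::ModelType::NEURAL_NET);

  // Layers, loss, and optimizer are created by the keywords listed above.
  model->addLayer(ml::train::createLayer("input", {"input_shape=1:1:784"}));
  model->addLayer(ml::train::createLayer("fully_connected",
                                         {"unit=10", "activation=softmax"}));
  model->addLayer(ml::train::createLayer("mse"));
  model->setOptimizer(ml::train::createOptimizer("adam", {"learning_rate=0.001"}));

  model->compile();     // connect the layers into a graph
  model->initialize();  // allocate weights and tensors
  // model->setDataset(...) and model->train() would follow with real data.
  return 0;
}
```
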
### Examples for NNTrainer

#### [Custom Shortcut Application](https://github.com/nnstreamer/nntrainer/tree/main/Applications/Tizen_native/CustomShortcut)

A demo application which enables user-defined custom shortcuts on a Galaxy Watch.

#### [MNIST Example](https://github.com/nnstreamer/nntrainer/tree/main/Applications/MNIST)

An example to train on the MNIST dataset. The model consists of two convolution 2D layers, two pooling 2D layers, a flatten layer, and a fully connected layer.

#### [Reinforcement Learning Example](https://github.com/nnstreamer/nntrainer/tree/main/Applications/ReinforcementLearning/DeepQ)

A reinforcement learning example with the CartPole game. It uses the DeepQ algorithm.

#### [Transfer Learning Examples](https://github.com/nnstreamer/nntrainer/tree/main/Applications/TransferLearning)

Transfer learning examples for image classification using the CIFAR-10 dataset and for OCR. TFLite is used as the feature extractor, and the last layer (fully connected layer) of the network is modified and trained.

#### [ResNet Example](https://github.com/nnstreamer/nntrainer/tree/main/Applications/Resnet)

An example to train the ResNet-18 network.

#### [VGG Example](https://github.com/nnstreamer/nntrainer/tree/main/Applications/VGG)

An example to train the VGG-16 network.

#### [ProductRating Example](https://github.com/nnstreamer/nntrainer/tree/main/Applications/ProductRatings)

This application contains a simple embedding-based model that predicts ratings given a user and a product.

#### [SimpleShot Example](https://github.com/nnstreamer/nntrainer/tree/main/Applications/SimpleShot)

An example to demonstrate few-shot learning: SimpleShot.

#### [Custom Example](https://github.com/nnstreamer/nntrainer/tree/main/Applications/Custom)

An example to demonstrate how to create custom layers, optimizers, or other supported objects.

<!-- #### Tizen CAPI Example -->

<!-- An example to demonstrate the C API for Tizen. It is the same transfer learning but written with the Tizen C API. -->
<!-- Deleted instead moved to a [test](https://github.com/nnstreamer/nntrainer/blob/master/test/tizen_capi/unittest_tizen_capi.cpp) -->

#### [KNN Example](https://github.com/nnstreamer/nntrainer/tree/main/Applications/KNN)

A transfer learning example for image classification using the CIFAR-10 dataset. TFLite is used as the feature extractor, and the extracted features are compared using k-NN.

#### [Logistic Regression Example](https://github.com/nnstreamer/nntrainer/tree/main/Applications/LogisticRegression)

A logistic regression example using NNTrainer.

## [Getting Started](https://github.com/nnstreamer/nntrainer/blob/main/docs/getting-started.md)

Instructions for installing NNTrainer.

### [Running Examples](https://github.com/nnstreamer/nntrainer/blob/main/docs/how-to-run-examples.md)

Instructions for preparing NNTrainer for execution.

## Open Source License

NNTrainer is an open source project released under the terms of the Apache License, version 2.0.

## Contributing

Contributions are welcome! Please see our [Contributing](https://github.com/nnstreamer/nntrainer/blob/main/docs/contributing.md) Guide for more details.

[![](https://sourcerer.io/fame/dongju-chae/nnstreamer/nntrainer/images/0)](https://sourcerer.io/fame/dongju-chae/nnstreamer/nntrainer/links/0)[![](https://sourcerer.io/fame/dongju-chae/nnstreamer/nntrainer/images/1)](https://sourcerer.io/fame/dongju-chae/nnstreamer/nntrainer/links/1)[![](https://sourcerer.io/fame/dongju-chae/nnstreamer/nntrainer/images/2)](https://sourcerer.io/fame/dongju-chae/nnstreamer/nntrainer/links/2)[![](https://sourcerer.io/fame/dongju-chae/nnstreamer/nntrainer/images/3)](https://sourcerer.io/fame/dongju-chae/nnstreamer/nntrainer/links/3)[![](https://sourcerer.io/fame/dongju-chae/nnstreamer/nntrainer/images/4)](https://sourcerer.io/fame/dongju-chae/nnstreamer/nntrainer/links/4)[![](https://sourcerer.io/fame/dongju-chae/nnstreamer/nntrainer/images/5)](https://sourcerer.io/fame/dongju-chae/nnstreamer/nntrainer/links/5)[![](https://sourcerer.io/fame/dongju-chae/nnstreamer/nntrainer/images/6)](https://sourcerer.io/fame/dongju-chae/nnstreamer/nntrainer/links/6)[![](https://sourcerer.io/fame/dongju-chae/nnstreamer/nntrainer/images/7)](https://sourcerer.io/fame/dongju-chae/nnstreamer/nntrainer/links/7)