[![Code Coverage](http://nnsuite.mooo.com/nntrainer/ci/badge/codecoverage.svg)](http://nnsuite.mooo.com/nntrainer/ci/gcov_html/index.html)
![GitHub repo size](https://img.shields.io/github/repo-size/nnstreamer/nntrainer)
![GitHub issues](https://img.shields.io/github/issues/nnstreamer/nntrainer)
![GitHub pull requests](https://img.shields.io/github/issues-pr/nnstreamer/nntrainer)
NNtrainer is a Software Framework for Training Neural Network Models on Devices.

NNtrainer is an open source project. Its aim is to develop a software framework for training neural network models on embedded devices, which have relatively limited resources. Rather than training all the layers of a model, NNtrainer trains only one or a few layers added after the feature extractor.

Even though it trains only part of a neural network model, NNtrainer requires many of the functionalities found in common neural network frameworks. With these implemented, it can already run several examples that help in understanding how it works. The Applications directory contains k-NN, Neural Network, Logistic Regression, and Reinforcement Learning with CartPole examples, some of which use MobileNet V2 with TensorFlow Lite as a feature extractor. All of them have been tested on a Galaxy S8 with Android and on a PC (Ubuntu 16.04).
* [Jijoong Moon](https://github.com/jijoongmoon)
* [MyungJoo Ham](https://github.com/myungjoo)
* [Geunsik Lim](https://github.com/leemgs)
* [Sangjung Woo](https://github.com/again4you)
* [Wook Song](https://github.com/wooksong)
* [Jaeyun Jung](https://github.com/jaeyun-jung)
* [Hyoungjoo Ahn](https://github.com/helloahn)
* [Parichay Kapoor](https://github.com/kparichay)
* [Dongju Chae](https://github.com/dongju-chae)
* [Gichan Jang](https://github.com/gichan-jang)
* [Yongjoo Ahn](https://github.com/anyj0527)
* [Jihoon Lee](https://github.com/zhoonit)
* [Hyeonseok Lee](https://github.com/lhs8928)
This component defines the layers which compose a neural network model. Each layer has its own properties to be set.
| Keyword | Layer Name | Description |
|:-------:|:---:|:---|
| conv2d | Convolution 2D | Convolution 2-Dimensional Layer |
| pooling2d | Pooling 2D | Pooling 2-Dimensional Layer. Supports average / max / global average / global max pooling |
| flatten | Flatten | Flatten Layer |
| fully_connected | Fully Connected | Fully Connected Layer |
| input | Input | Input Layer. This is not always required. |
| batch_normalization | Batch Normalization | Batch Normalization Layer |
| loss | Loss | Loss Layer. Set internally; hidden from users |
| activation | Activation | Activation Layer. Set via a layer property |
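
For illustration, layers and their properties are typically described in a model INI file, one section per layer, with `Type` taking a keyword from the table above. The section labels and property values below are hypothetical, and the exact schema may differ between NNTrainer versions; treat this as a sketch, not the definitive format:

```ini
; Hypothetical sketch of describing layers in a model file.
; Section names (conv2d_l1, pooling2d_l1) are arbitrary labels;
; Type uses the keywords from the layer table above.
[conv2d_l1]
Type = conv2d
kernel_size = 3,3
filters = 32

[pooling2d_l1]
Type = pooling2d
pooling = max        ; average / max / global_average / global_max
pool_size = 2,2
```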
### Supported Optimizers
| Keyword | Optimizer Name | Description |
|:-------:|:---:|:---:|
| sgd | Stochastic Gradient Descent | - |
| adam | Adaptive Moment Estimation | - |
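
As a hedged sketch, the optimizer is usually chosen by its keyword, together with its hyperparameters, in the network-level section of the model file (the property names below, e.g. `beta1`, are assumptions and may differ by version):

```ini
; Hypothetical sketch: optimizer selection in the network section.
[Network]
Type = NeuralNetwork
Optimizer = adam        ; or: sgd
Learning_rate = 1e-4
beta1 = 0.9             ; Adam-specific parameters (assumed names)
beta2 = 0.999
epsilon = 1e-7
```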
| Keyword | Loss Name | Description |
|:-------:|:---:|:---:|
| mse | Mean Squared Error | - |
| cross | Cross Entropy - sigmoid | if the activation of the last layer is sigmoid |
| cross | Cross Entropy - softmax | if the activation of the last layer is softmax |
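
The table implies that the same `cross` keyword resolves to sigmoid or softmax cross entropy depending on the activation of the last layer. A hypothetical model-file fragment (section and property names are assumptions):

```ini
; Hypothetical sketch: `cross` + a softmax activation on the last
; layer resolves to softmax cross entropy.
[Network]
Type = NeuralNetwork
Loss = cross            ; or: mse

[fc_out]
Type = fully_connected
Unit = 10
Activation = softmax
```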
### Supported Activations
| Keyword | Activation Name | Description |
|:-------:|:---:|:---|
| tanh | tanh function | set as layer property |
| sigmoid | sigmoid function | set as layer property |
| relu | relu function | set as layer property |
| softmax | softmax function | set as layer property |
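
Since activations are set as a layer property rather than as standalone layers, a layer definition might look like the following (a hypothetical sketch; names may differ by version):

```ini
; Hypothetical sketch: activation set as a layer property.
[fc_hidden]
Type = fully_connected
Unit = 128
Activation = relu       ; tanh / sigmoid / relu / softmax
```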
Tensor is responsible for the calculations of a layer. It executes addition, division, multiplication, dot product, averaging of data, and so on. In order to accelerate the calculation speed, CBLAS (C-Basic Linear Algebra: CPU) and CUBLAS (CUDA Basic Linear Algebra) are used for some of the operations on a PC (especially with an NVIDIA GPU). Later, these calculations will be optimized further.
Currently we support a lazy calculation mode to reduce copies of tensors during calculation.
| Keyword | Description |
|:-------:|:---|
| 4D Tensor | B, C, H, W |
| Add/sub/mul/div | - |
| sum, average, argmax | - |
| Dot, Transpose | - |
| normalization, standardization | - |
| Keyword | Property Name | Description |
|:-------:|:---:|:---|
| weight_initializer | Weight Initialization | Xavier (Normal/Uniform), LeCun (Normal/Uniform), He (Normal/Uniform) |
| weight_regularizer | weight decay (L2Norm only) | weight_regularizer_constant & type must also be set |
| learning_rate_decay | learning rate decay | the decay step must also be set |
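
These properties would typically be set alongside the layer or network definitions. As a hedged sketch (value syntax and the exact decay key names, e.g. `decay_rate`/`decay_steps`, are assumptions):

```ini
; Hypothetical sketch of the properties above.
[fc_hidden]
Type = fully_connected
Unit = 128
weight_initializer = xavier_uniform
weight_regularizer = l2norm
weight_regularizer_constant = 0.0001

[Network]
Type = NeuralNetwork
Learning_rate = 1e-4
decay_rate = 0.96       ; learning-rate decay (assumed key names)
decay_steps = 1000
```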
Currently we provide [C APIs](https://github.com/nnstreamer/nntrainer/blob/master/api/capi/include/nntrainer.h) for Tizen. A C++ API will be provided soon.
### Examples for NNTrainer

#### [Custom Shortcut Application](https://github.com/nnstreamer/nntrainer/tree/master/Applications/Tizen_native/CustomShortcut)

This is a demo application which enables a user-defined custom shortcut on the Galaxy Watch.
#### [MNIST Example](https://github.com/nnstreamer/nntrainer/tree/master/Applications/MNIST)

This is an example of training on the MNIST dataset. It consists of two convolution 2D layers, two pooling 2D layers, a flatten layer, and a fully connected layer.
#### [Reinforcement Learning Example](https://github.com/nnstreamer/nntrainer/tree/master/Applications/ReinforcementLearning/DeepQ)

This is a reinforcement learning example with the CartPole game. It uses the DeepQ algorithm.
#### [Classification for CIFAR-10](https://github.com/nnstreamer/nntrainer/tree/master/Applications/TransferLearning/CIFAR_Classification)

This is a transfer learning example with the CIFAR-10 dataset. TensorFlow Lite is used as the feature extractor, and the last layer (fully connected layer) of the network is modified and trained.
#### ~Tizen CAPI Example~

~This demonstrates the C API for Tizen. It is the same transfer learning example, but written with the Tizen C API.~
Deleted; instead moved to a [test](https://github.com/nnstreamer/nntrainer/blob/master/test/tizen_capi/unittest_tizen_capi.cpp)
#### [KNN Example](https://github.com/nnstreamer/nntrainer/tree/master/Applications/KNN)

This is a transfer learning example with the CIFAR-10 dataset. TensorFlow Lite is used as the feature extractor, and the results are compared with k-NN.
#### [Logistic Regression Example](https://github.com/nnstreamer/nntrainer/tree/master/Applications/LogisticRegression)

This is a simple logistic regression example using NNTrainer.
The following dependencies are needed to compile / build / run.

* gcc/g++ (>= 4.9, std=c++14 is used)
* blas library (CBLAS) (for CPU acceleration; libopenblas is used for now)
* cuda, cudart, cublas (the versions should match) (for GPU acceleration on PC)
* tensorflow-lite (>= 1.4.0)
* libjsoncpp (>= 0.6.0) (for the OpenAI environment on PC)
* libcurl3 (>= 7.47) (for the OpenAI environment on PC)
* libgtest (for testing)
Download the source code by cloning the GitHub repository.

```
$ git clone https://github.com/nnstreamer/nntrainer
```

After downloading the sources, you will find several directories and files as below.
```
f1a3a05 (HEAD -> master, origin/master, origin/HEAD) Add more badges
37032a1 Add Unit Test Cases for Neural Network Initialization
181a003 lower case for layer type.
1eb399b Update clang-format
87f1de7 Add Unit Test for Neural Network
cd5c36e Add Coverage Test badge for nntrainer
```
You can find the source code of the core library in nntrainer/src. In order to build it, use [meson](https://mesonbuild.com/):
```
$ meson build
The Meson build system
Source dir: /home/wook/Work/NNS/nntrainer
Build dir: /home/wook/Work/NNS/nntrainer/build
Build type: native build
Project name: nntrainer
Project version: 0.0.1
Native C compiler: cc (gcc 7.5.0 "cc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0")
Native C++ compiler: c++ (gcc 7.5.0 "c++ (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0")
Build machine cpu family: x86_64
Build machine cpu: x86_64
Build targets in project: 11
Found ninja-1.8.2 at /usr/bin/ninja
```
```
$ ninja -C build
ninja: Entering directory `build'
[41/41] Linking target test/unittest/unittest_nntrainer_internal.
```
After completion of the build, the shared library 'libnntrainer.so' and the static library 'libnntrainer.a' will be placed in build/nntrainer.
```
$ ls build/nntrainer -1
d48ed23@@nntrainer@sha
d48ed23@@nntrainer@sta
```
In order to install them with the related header files to your system, use the 'install' sub-command.

```
$ ninja -C build install
```
Then, you will find libnntrainer.so and the related .h files in the /usr/local/lib and /usr/local/include directories.
By default, the command `ninja -C build` generates the five example application binaries (Classification, k-NN, LogisticRegression, ReinforcementLearning, and Training) that you can try in build/Applications. For 'Training', as an example case:
```
$ ls build/Applications/Training/jni/ -1
e189c96@@nntrainer_training@exe
```
In order to run such example binaries, Tensorflow-lite is a prerequisite. If you are trying to run on Android, it will automatically download tensorflow (1.9.0) and compile it as a static library. Otherwise, you need to install it yourself.
Run the meson build with `enable-test` set to true:
```
$ meson build -Denable-test=true
The Meson build system
Configuring capi-nntrainer.pc using configuration
Run-time dependency GTest found: YES (building self)
Build targets in project: 17
Found ninja-1.10.0.git.kitware.jobserver-1 at /home/jlee/.local/bin/ninja
```
```
$ ninja -C build test
[79/79] Running all tests.
1/12 unittest_tizen_capi OK 8.86s
2/12 unittest_tizen_capi_layer OK 0.05s
3/12 unittest_tizen_capi_optimizer OK 0.01s
4/12 unittest_tizen_capi_dataset OK 0.03s
5/12 unittest_nntrainer_activations OK 0.03s
6/12 unittest_nntrainer_internal OK 0.23s
7/12 unittest_nntrainer_layers OK 0.22s
8/12 unittest_nntrainer_lazy_tensor OK 0.04s
9/12 unittest_nntrainer_tensor OK 0.04s
10/12 unittest_util_func OK 0.05s
11/12 unittest_databuffer_file OK 0.12s
12/12 unittest_nntrainer_modelfile OK 2.22s
```
If you want to run a particular test:

```
$ meson test -C build <test name>
```
NNTrainer provides extensive tests that run the sample applications.

Run the meson build with `enable-app` set to true:
```
$ meson build -Denable-app=true
The Meson build system
Configuring capi-nntrainer.pc using configuration
Run-time dependency GTest found: YES (building self)
Build targets in project: 17
Found ninja-1.10.0.git.kitware.jobserver-1 at /home/jlee/.local/bin/ninja
```
```
$ ninja -C build test
1/21 app_classification OK 3.59s
2/21 app_classification_func OK 42.77s
3/21 app_knn OK 4.81s
4/21 app_logistic OK 14.11s
5/21 app_DeepQ OK 30.30s
6/21 app_training OK 38.36s
7/21 app_classification_capi_ini OK 32.65s
8/21 app_classification_capi_file OK 32.04s
9/21 app_classification_capi_func OK 29.13s
```
If you want to run a particular example only:

```
$ meson test -C build <test name>   # e.g., app_classification_capi_func
```
1. [Training](https://github.com/nnstreamer/nntrainer/blob/master/Applications/Training/README.md)
After building, run with the following arguments.
Make sure to put a trailing '/' on the resources directory path.

```
$ ./path/to/example ./path/to/settings.ini ./path/to/resource/directory/
```
To run the 'Training' example, do as follows.
```
$ LD_LIBRARY_PATH=./build/nntrainer ./build/Applications/Training/jni/nntrainer_training ./Applications/Training/res/Training.ini ./Applications/Training/res/
../../res/happy/happy1.bmp
../../res/happy/happy2.bmp
../../res/happy/happy3.bmp
../../res/happy/happy4.bmp
../../res/happy/happy5.bmp
../../res/sad/sad1.bmp
../../res/sad/sad2.bmp
```
## Open Source License
NNTrainer is an open source project released under the terms of the Apache License, version 2.0.
Contributions are welcome! Please see our [Contributing](https://github.com/nnstreamer/nntrainer/blob/main/docs/contributing.md) Guide for more details.
[![](https://sourcerer.io/fame/dongju-chae/nnstreamer/nntrainer/images/0)](https://sourcerer.io/fame/dongju-chae/nnstreamer/nntrainer/links/0)[![](https://sourcerer.io/fame/dongju-chae/nnstreamer/nntrainer/images/1)](https://sourcerer.io/fame/dongju-chae/nnstreamer/nntrainer/links/1)[![](https://sourcerer.io/fame/dongju-chae/nnstreamer/nntrainer/images/2)](https://sourcerer.io/fame/dongju-chae/nnstreamer/nntrainer/links/2)[![](https://sourcerer.io/fame/dongju-chae/nnstreamer/nntrainer/images/3)](https://sourcerer.io/fame/dongju-chae/nnstreamer/nntrainer/links/3)[![](https://sourcerer.io/fame/dongju-chae/nnstreamer/nntrainer/images/4)](https://sourcerer.io/fame/dongju-chae/nnstreamer/nntrainer/links/4)[![](https://sourcerer.io/fame/dongju-chae/nnstreamer/nntrainer/images/5)](https://sourcerer.io/fame/dongju-chae/nnstreamer/nntrainer/links/5)[![](https://sourcerer.io/fame/dongju-chae/nnstreamer/nntrainer/images/6)](https://sourcerer.io/fame/dongju-chae/nnstreamer/nntrainer/links/6)[![](https://sourcerer.io/fame/dongju-chae/nnstreamer/nntrainer/images/7)](https://sourcerer.io/fame/dongju-chae/nnstreamer/nntrainer/links/7)