[![Code Coverage](http://nnsuite.mooo.com/nntrainer/ci/badge/codecoverage.svg)](http://nnsuite.mooo.com/nntrainer/ci/gcov_html/index.html)
![GitHub repo size](https://img.shields.io/github/repo-size/nnstreamer/nntrainer)
![GitHub issues](https://img.shields.io/github/issues/nnstreamer/nntrainer)
![GitHub pull requests](https://img.shields.io/github/issues-pr/nnstreamer/nntrainer)
NNTrainer is a software framework for training neural network models on devices.

NNTrainer is an open source project. Its aim is to develop a software framework for training neural network models on embedded devices, which have relatively limited resources. Rather than training the whole set of layers, NNTrainer trains only one or a few layers added after the feature extractor.

Even though it trains only part of a neural network model, NNTrainer still requires many of the functionalities found in common neural network frameworks. With those implemented, it can run several examples that help illustrate how it works. The Applications directory contains k-NN, Neural Network, Logistic Regression, and Reinforcement Learning with CartPole examples, and some of them use MobileNet V2 with TensorFlow Lite as a feature extractor. All of them have been tested on a Galaxy S8 with Android and on a PC (Ubuntu 16.04).
* [Jijoong Moon](https://github.com/jijoongmoon)
* [MyungJoo Ham](https://github.com/myungjoo)
* [Geunsik Lim](https://github.com/leemgs)
* [Sangjung Woo](https://github.com/again4you)
* [Wook Song](https://github.com/wooksong)
* [Jaeyun Jung](https://github.com/jaeyun-jung)
* [Hyoungjoo Ahn](https://github.com/helloahn)
* [Parichay Kapoor](https://github.com/kparichay)
* [Dongju Chae](https://github.com/dongju-chae)
* [Gichan Jang](https://github.com/gichan-jang)
* [Yongjoo Ahn](https://github.com/anyj0527)
* [Jihoon Lee](https://github.com/zhoonit)
This component defines the layers that make up a neural network model. Each layer has its own properties to be set.

| Keyword | Layer Name | Description |
|:-------:|:---:|:---|
| conv2d | Convolution 2D | Convolution 2-dimensional layer |
| pooling2d | Pooling 2D | Pooling 2-dimensional layer. Supports average / max / global average / global max pooling |
| flatten | Flatten | Flatten layer |
| fully_connected | Fully Connected | Fully connected layer |
| input | Input | Input layer. This is not always required. |
| batch_normalization | Batch Normalization | Batch normalization layer |
| loss | Loss | Loss layer. Created internally; hidden from users |
| activation | Activation | Activation layer. Set by layer property |
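As an illustration of how these layers and their properties are wired together, here is a hypothetical sketch of a model description (ini) file. The section names and keys below are assumptions modeled after the ini files shipped under the Applications directory (e.g. `Training.ini`); consult those files for the authoritative syntax:

```ini
; Hypothetical sketch only; see Applications/*/res/*.ini for the real keys.
[inputlayer]
Type = input            ; the input layer is not always required
Input_Shape = 1:1:62720

[outputlayer]
Type = fully_connected
Unit = 10
Activation = softmax    ; activation is set as a layer property
```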
### Supported Optimizers

| Keyword | Optimizer Name | Description |
|:-------:|:---:|:---:|
| sgd | Stochastic Gradient Descent | - |
| adam | Adaptive Moment Estimation | - |
### Supported Loss Functions

| Keyword | Loss Name | Description |
|:-------:|:---:|:---:|
| mse | Mean Squared Error | - |
| cross | Cross Entropy (sigmoid) | used when the last layer's activation is sigmoid |
| cross | Cross Entropy (softmax) | used when the last layer's activation is softmax |
### Supported Activations

| Keyword | Activation Name | Description |
|:-------:|:---:|:---|
| tanh | tanh function | set as layer property |
| sigmoid | sigmoid function | set as layer property |
| relu | relu function | set as layer property |
| softmax | softmax function | set as layer property |
### Tensor

Tensor is responsible for the calculations of a layer. It executes addition, division, multiplication, dot product, averaging of data, and so on. To accelerate calculation, CBLAS (C-Basic Linear Algebra Subprograms; CPU) and CUBLAS (CUDA Basic Linear Algebra Subroutines; for NVIDIA GPUs on PC) are used for some of the operations. These calculations will be optimized further later.
Currently we support a lazy calculation mode to reduce copies of tensors during calculation.

| Keyword | Description |
|:-------:|:---|
| 4D Tensor | B, C, H, W |
| Add/sub/mul/div | - |
| sum, average, argmax | - |
| Dot, Transpose | - |
| normalization, standardization | - |
| Keyword | Property Name | Description |
|:-------:|:---:|:---|
| weight_initializer | Weight Initialization | Xavier (Normal/Uniform), LeCun (Normal/Uniform), He (Normal/Uniform) |
| weight_regularizer | Weight Decay (L2Norm only) | requires setting weight_regularizer_constant & type |
| learning_rate_decay | Learning Rate Decay | requires setting the decay step |
Currently we provide [C APIs](https://github.com/nnstreamer/nntrainer/blob/master/api/capi/include/nntrainer.h) for Tizen. C++ APIs will be provided soon.
### Examples for NNTrainer

#### [Custom Shortcut Application](https://github.com/nnstreamer/nntrainer/tree/master/Applications/Tizen_native/CustomShortcut)

This is a demo application which enables a user-defined custom shortcut on the Galaxy Watch.

#### [MNIST Example](https://github.com/nnstreamer/nntrainer/tree/master/Applications/mnist)

This is an example of training on the MNIST dataset. It consists of two convolution 2D layers, two pooling 2D layers, a flatten layer, and a fully connected layer.

#### [Reinforcement Learning Example](https://github.com/nnstreamer/nntrainer/tree/master/Applications/ReinforcementLearning/DeepQ)

This is a reinforcement learning example with the CartPole game. It uses the DeepQ algorithm.

#### [Classification for CIFAR-10](https://github.com/nnstreamer/nntrainer/tree/master/Applications/Classification)

This is a transfer learning example with the CIFAR-10 dataset. TensorFlow Lite is used as the feature extractor, and the last layer (fully connected layer) of the network is modified and trained.

#### [Tizen CAPI Example](https://github.com/nnstreamer/nntrainer/tree/master/Applications/Tizen_CAPI)

This demonstrates the C API for Tizen. It performs the same transfer learning, but is written with the Tizen C API.

#### [KNN Example](https://github.com/nnstreamer/nntrainer/tree/master/Applications/KNN)

This is a transfer learning example with the CIFAR-10 dataset. TensorFlow Lite is used as the feature extractor, and the results are compared with k-NN.

#### [Logistic Regression Example](https://github.com/nnstreamer/nntrainer/tree/master/Applications/LogisticRegression)

This is a simple logistic regression example using NNTrainer.
The following dependencies are needed to compile / build / run.

* gcc/g++ (>= 4.9, std=c++14 is used)
* blas library (CBLAS) (for CPU acceleration; libopenblas is used for now)
* cuda, cudart, cublas (versions should match) (for GPU acceleration on PC)
* tensorflow-lite (>= 1.4.0)
* libjsoncpp (>= 0.6.0) (for the OpenAI environment on PC)
* libcurl3 (>= 7.47) (for the OpenAI environment on PC)
* libgtest (for testing)
### Give It a Go: Build with Docker

You can use the [docker image](https://hub.docker.com/r/lunapocket/nntrainer-build-env) to easily set up and try building.
$ docker pull lunapocket/nntrainer-build-env:ubuntu-18.04
$ docker run --rm -it lunapocket/nntrainer-build-env:ubuntu-18.04
$ git pull # If you want to build with the latest sources.
You can now try building without worrying about the prerequisites.

Download the source by cloning the GitHub repository.

$ git clone https://github.com/nnstreamer/nntrainer

After the download completes, you will find several directories and files as below.
f1a3a05 (HEAD -> master, origin/master, origin/HEAD) Add more badges
37032a1 Add Unit Test Cases for Neural Network Initialization
181a003 lower case for layer type.
1eb399b Update clang-format
87f1de7 Add Unit Test for Neural Network
cd5c36e Add Coverage Test badge for nntrainer
You can find the source code of the core library in nntrainer/src. In order to build it, use [meson](https://mesonbuild.com/).
The Meson build system
Source dir: /home/wook/Work/NNS/nntrainer
Build dir: /home/wook/Work/NNS/nntrainer/build
Build type: native build
Project name: nntrainer
Project version: 0.0.1
Native C compiler: cc (gcc 7.5.0 "cc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0")
Native C++ compiler: c++ (gcc 7.5.0 "c++ (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0")
Build machine cpu family: x86_64
Build machine cpu: x86_64
Build targets in project: 11
Found ninja-1.8.2 at /usr/bin/ninja
ninja: Entering directory `build'
[41/41] Linking target test/unittest/unittest_nntrainer_internal.
After the build completes, the shared library 'libnntrainer.so' and the static library 'libnntrainer.a' will be placed in build/nntrainer.

$ ls build/nntrainer -1
d48ed23@@nntrainer@sha
d48ed23@@nntrainer@sta
To install them along with the related header files to your system, use the 'install' sub-command.

$ ninja -C build install

Then you will find libnntrainer.so and the related .h files in the /usr/local/lib and /usr/local/include directories.

By default, the command `ninja -C build` generates the five example application binaries (Classification, k-NN, LogisticRegression, ReinforcementLearning, and Training), which you can try in build/Applications. For 'Training' as an example case:
$ ls build/Applications/Training/jni/ -1
e189c96@@nntrainer_training@exe

In order to run these example binaries, TensorFlow Lite is a prerequisite. If you are building for Android, it will automatically download TensorFlow (1.9.0) and compile it as a static library. Otherwise, you need to install it yourself.
Build with the meson option `enable-test` set to true:

$ meson build -Denable-test=true
The Meson build system
Configuring capi-nntrainer.pc using configuration
Run-time dependency GTest found: YES (building self)
Build targets in project: 17
Found ninja-1.10.0.git.kitware.jobserver-1 at /home/jlee/.local/bin/ninja
$ ninja -C build test
[79/79] Running all tests.
1/12 unittest_tizen_capi OK 8.86s
2/12 unittest_tizen_capi_layer OK 0.05s
3/12 unittest_tizen_capi_optimizer OK 0.01s
4/12 unittest_tizen_capi_dataset OK 0.03s
5/12 unittest_nntrainer_activations OK 0.03s
6/12 unittest_nntrainer_internal OK 0.23s
7/12 unittest_nntrainer_layers OK 0.22s
8/12 unittest_nntrainer_lazy_tensor OK 0.04s
9/12 unittest_nntrainer_tensor OK 0.04s
10/12 unittest_util_func OK 0.05s
11/12 unittest_databuffer_file OK 0.12s
12/12 unittest_nntrainer_modelfile OK 2.22s
If you want to run a particular test:

$ meson test -C build <test name>
NNTrainer provides extensive sample-application tests.

Build with the meson option `enable-app` set to true:

$ meson build -Denable-app=true
The Meson build system
Configuring capi-nntrainer.pc using configuration
Run-time dependency GTest found: YES (building self)
Build targets in project: 17
Found ninja-1.10.0.git.kitware.jobserver-1 at /home/jlee/.local/bin/ninja
$ ninja -C build test
1/21 app_classification OK 3.59s
2/21 app_classification_func OK 42.77s
3/21 app_knn OK 4.81s
4/21 app_logistic OK 14.11s
5/21 app_DeepQ OK 30.30s
6/21 app_training OK 38.36s
7/21 app_classification_capi_ini OK 32.65s
8/21 app_classification_capi_file OK 32.04s
9/21 app_classification_capi_func OK 29.13s
If you want to run a particular example only:

$ meson test -C build <test name> # e.g. app_classification_capi_func
1. [Training](https://github.com/nnstreamer/nntrainer/blob/master/Applications/Training/README.md)

After the build, run with the following arguments.
Make sure to put a trailing '/' on the resources directory path.

$ ./path/to/example ./path/to/settings.ini ./path/to/resource/directory/

To run 'Training', for example, do as follows.
$ LD_LIBRARY_PATH=./build/nntrainer ./build/Applications/Training/jni/nntrainer_training ./Applications/Training/res/Training.ini ./Applications/Training/res/
../../res/happy/happy1.bmp
../../res/happy/happy2.bmp
../../res/happy/happy3.bmp
../../res/happy/happy4.bmp
../../res/happy/happy5.bmp
../../res/sad/sad1.bmp
../../res/sad/sad2.bmp
## Open Source License

NNTrainer is an open source project released under the terms of the Apache License, version 2.0.