Add README for DeepQ
author jijoong.moon <jijoong.moon@samsung.com>
Thu, 21 Nov 2019 09:54:24 +0000 (18:54 +0900)
committer Jijoong Moon/On-Device Lab(SR)/Principal Engineer/Samsung Electronics <jijoong.moon@samsung.com>
Thu, 21 Nov 2019 10:06:32 +0000 (19:06 +0900)
**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon <jijoong.moon@samsung.com>
DeepQ/README.md [new file with mode: 0644]

diff --git a/DeepQ/README.md b/DeepQ/README.md
new file mode 100644 (file)
index 0000000..482d49d
--- /dev/null
@@ -0,0 +1,26 @@
+# Reinforcement Learning with DeepQ
+
+In this toy example, reinforcement learning with DeepQ learning is implemented without any neural network framework such as TensorFlow or TensorFlow Lite. To do that, the required algorithms are implemented from scratch and tested on a Galaxy 9 and an Ubuntu 16.04 PC. All code is written in C++.
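+
+At each training step, the main network is regressed toward a Bellman target computed from the frozen target network, which is periodically synchronized with the main one. In standard DeepQ notation (a generic formulation, not taken verbatim from this code):
+
+```latex
+y = r + \gamma \max_{a'} Q_{\text{target}}(s', a'), \qquad
+L(\theta) = \bigl(y - Q_{\text{main}}(s, a;\, \theta)\bigr)^2
+```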
+
+- Implements the DeepQ Learning Algorithm
+  - Experience Replay (a hedged sketch follows this list)
+  - Two Neural Networks (main & target) for stabilization
+- Fully Connected Layer Support
+- Multi-Layer Support (two FC layers are used)
+- Gradient Descent Optimizer Support
+- Adam Optimizer Support
+- Minibatch Support
+- Softmax Support
+- Sigmoid/tanh Support
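+
+Experience replay stores transitions in a fixed-capacity buffer and trains on random minibatches drawn from it, which decorrelates consecutive samples. Below is a minimal C++ sketch of such a buffer; the type and function names (`Experience`, `ReplayBuffer`, `push`, `sample`) are illustrative and not necessarily those used in this code:
+
+```cpp
+// Hypothetical sketch of an experience-replay buffer; names and layout
+// are illustrative, not this repository's actual implementation.
+#include <cstddef>
+#include <deque>
+#include <random>
+#include <vector>
+
+struct Experience {
+  std::vector<float> state;
+  int action;
+  float reward;
+  std::vector<float> next_state;
+  bool done;
+};
+
+class ReplayBuffer {
+ public:
+  explicit ReplayBuffer(std::size_t capacity) : capacity_(capacity) {}
+
+  // Append a transition, evicting the oldest one when full.
+  void push(Experience e) {
+    if (buffer_.size() >= capacity_) buffer_.pop_front();
+    buffer_.push_back(std::move(e));
+  }
+
+  // Draw a random minibatch (with replacement); callers should wait
+  // until enough transitions have been collected before sampling.
+  std::vector<Experience> sample(std::size_t batch_size, std::mt19937 &rng) const {
+    std::uniform_int_distribution<std::size_t> dist(0, buffer_.size() - 1);
+    std::vector<Experience> batch;
+    batch.reserve(batch_size);
+    for (std::size_t i = 0; i < batch_size; ++i)
+      batch.push_back(buffer_[dist(rng)]);
+    return batch;
+  }
+
+  std::size_t size() const { return buffer_.size(); }
+
+ private:
+  std::size_t capacity_;
+  std::deque<Experience> buffer_;
+};
+```
+
+The target network can then be stabilized by copying the main network's weights into it every fixed number of steps, instead of updating it continuously.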
+
+For the environment,
+- OpenAI Gym CartPole-v0 is supported
+- A native CartPole environment is implemented (see the sketch after this list)
+  - The maximum number of iterations is 200.
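+
+The native environment follows the classic cart-pole dynamics also used by Gym's CartPole-v0. A minimal sketch of one simulation step is below; the constants are Gym's defaults and the function signature is hypothetical, so the actual implementation may differ:
+
+```cpp
+// Hypothetical sketch of a native CartPole step, mirroring the classic
+// Gym CartPole-v0 dynamics; constants follow Gym's defaults.
+#include <cmath>
+
+struct CartPoleState {
+  double x, x_dot, theta, theta_dot;
+};
+
+// Advance one Euler step; action is 0 (push left) or 1 (push right).
+// Returns true while the episode is still running.
+bool step(CartPoleState &s, int action, int &steps) {
+  const double kPi = 3.14159265358979323846;
+  const double gravity = 9.8, mass_cart = 1.0, mass_pole = 0.1;
+  const double total_mass = mass_cart + mass_pole;
+  const double length = 0.5;  // half the pole length
+  const double force_mag = 10.0, tau = 0.02;  // tau: seconds per step
+
+  double force = (action == 1) ? force_mag : -force_mag;
+  double cos_t = std::cos(s.theta), sin_t = std::sin(s.theta);
+  double temp =
+      (force + mass_pole * length * s.theta_dot * s.theta_dot * sin_t) /
+      total_mass;
+  double theta_acc =
+      (gravity * sin_t - cos_t * temp) /
+      (length * (4.0 / 3.0 - mass_pole * cos_t * cos_t / total_mass));
+  double x_acc = temp - mass_pole * length * theta_acc * cos_t / total_mass;
+
+  s.x += tau * s.x_dot;
+  s.x_dot += tau * x_acc;
+  s.theta += tau * s.theta_dot;
+  s.theta_dot += tau * theta_acc;
+  ++steps;
+
+  // Episode ends when the cart leaves the track, the pole tips past
+  // 12 degrees, or the 200-step cap (noted above) is reached.
+  bool failed = std::fabs(s.x) > 2.4 ||
+                std::fabs(s.theta) > 12.0 * kPi / 180.0;
+  return !failed && steps < 200;
+}
+```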
+
+The results are shown below.
+
+<p align="center">
+<img src="https://github.sec.samsung.net/storage/user/19415/files/de916e80-0b9f-11ea-9950-5c40d2bef8e4" width="300">
+<img src="https://github.sec.samsung.net/storage/user/19415/files/d2f17800-0b9e-11ea-8060-edfeacd6c71e" width="300">
+</p>