From: jijoong.moon
Date: Thu, 21 Nov 2019 09:54:24 +0000 (+0900)
Subject: Add README for DeepQ
X-Git-Tag: accepted/tizen/unified/20200706.064221~249
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=a999df003644eed8d20efa5a217386d973cf769d;p=platform%2Fcore%2Fml%2Fnntrainer.git

Add README for DeepQ

**Self evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test: [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: jijoong.moon
---

diff --git a/DeepQ/README.md b/DeepQ/README.md
new file mode 100644
index 0000000..482d49d
--- /dev/null
+++ b/DeepQ/README.md
@@ -0,0 +1,26 @@
+# Reinforcement Learning with DeepQ
+
+This toy example implements Reinforcement Learning with DeepQ Learning without any neural network framework such as TensorFlow or TensorFlow Lite. To make that possible, the required algorithms are implemented from scratch and tested on a Galaxy 9 and an Ubuntu 16.04 PC. All code is written in C++.
+
+- Implements the DeepQ Learning algorithm
+  - Experience replay (sketched after this diff)
+  - Two neural networks (main & target) for stabilization (sketched after this diff)
+- Fully connected layer support
+- Multi-layer support (two FC layers are used)
+- Gradient descent optimizer support
+- ADAM optimizer support (sketched after this diff)
+- Minibatch support
+- Softmax support
+- Sigmoid/tanh support (sketched after this diff, together with softmax)
+
+For the environment:
+- OpenAI Gym CartPole-v0 is supported
+- A native cartpole environment is implemented
+  - The maximum iteration count is 200
+
+The results are shown below.

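The experience-replay and minibatch items in the README are sketched below in minimal C++. All names here (`Transition`, `ReplayBuffer`, and the later `QNetwork` and `Adam`) are illustrative assumptions for these notes, not identifiers taken from the DeepQ sources. First, a fixed-capacity replay buffer with uniform minibatch sampling:

```cpp
#include <cstddef>
#include <deque>
#include <random>
#include <utility>
#include <vector>

struct Transition {
  std::vector<float> state;      // observation before the action
  int action;                    // index of the action taken
  float reward;                  // reward received
  std::vector<float> next_state; // observation after the action
  bool done;                     // whether the episode terminated
};

class ReplayBuffer {
public:
  explicit ReplayBuffer(std::size_t capacity) : capacity_(capacity) {}

  // Append a transition, discarding the oldest one once full.
  void push(Transition t) {
    if (buffer_.size() == capacity_)
      buffer_.pop_front();
    buffer_.push_back(std::move(t));
  }

  // Draw a uniform random minibatch (assumes the buffer is non-empty).
  std::vector<Transition> sample(std::size_t batch_size,
                                 std::mt19937 &rng) const {
    std::uniform_int_distribution<std::size_t> pick(0, buffer_.size() - 1);
    std::vector<Transition> batch;
    batch.reserve(batch_size);
    for (std::size_t i = 0; i < batch_size; ++i)
      batch.push_back(buffer_[pick(rng)]);
    return batch;
  }

  std::size_t size() const { return buffer_.size(); }

private:
  std::size_t capacity_;
  std::deque<Transition> buffer_;
};
```

Sampling uniformly from stored transitions breaks the correlation between consecutive cartpole steps, which is what makes minibatch gradient updates on the Q-network behave well.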
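The "two neural networks (main & target)" point can be sketched the same way: the temporal-difference target is computed from a frozen copy of the network, which is re-synced from the main network only occasionally. `QNetwork` below is an assumed stand-in, with the fully connected forward pass elided:

```cpp
#include <algorithm>
#include <vector>

// Stand-in for the example's two-FC-layer network; only what the sketch
// needs is shown, and the forward pass itself is elided.
struct QNetwork {
  std::vector<float> weights; // flattened parameters
  std::vector<float> forward(const std::vector<float> & /*state*/) const {
    return std::vector<float>(2, 0.0f); // Q-values for cartpole's 2 actions
  }
};

// TD target for one transition:
//   r                                  if the episode ended, else
//   r + gamma * max_a Q_target(s', a)
inline float td_target(const QNetwork &target_net, float reward, bool done,
                       const std::vector<float> &next_state, float gamma) {
  if (done)
    return reward;
  std::vector<float> q = target_net.forward(next_state);
  return reward + gamma * *std::max_element(q.begin(), q.end());
}

// The stabilization trick: the target network only changes when the main
// network's weights are explicitly copied into it (e.g. every N steps).
inline void sync_target(const QNetwork &main_net, QNetwork &target_net) {
  target_net.weights = main_net.weights;
}
```

Because the target copy moves only at sync points, the regression target does not chase every update of the main network; that lag is the stabilization the README refers to.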
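For the optimizer items, a sketch of the Adam update over a flat parameter vector; the defaults follow the Adam paper (lr = 0.001, beta1 = 0.9, beta2 = 0.999) and may not match the example's actual settings. Plain gradient descent is the simpler rule `param[i] -= lr * grad[i]`.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

class Adam {
public:
  explicit Adam(std::size_t n_params, float lr = 0.001f, float beta1 = 0.9f,
                float beta2 = 0.999f, float eps = 1e-8f)
      : lr_(lr), beta1_(beta1), beta2_(beta2), eps_(eps), t_(0),
        m_(n_params, 0.0f), v_(n_params, 0.0f) {}

  // One update: exponential moving averages of the gradient and its
  // square, bias-corrected, then a per-parameter scaled step.
  void step(std::vector<float> &param, const std::vector<float> &grad) {
    ++t_;
    for (std::size_t i = 0; i < param.size(); ++i) {
      m_[i] = beta1_ * m_[i] + (1.0f - beta1_) * grad[i];
      v_[i] = beta2_ * v_[i] + (1.0f - beta2_) * grad[i] * grad[i];
      float m_hat = m_[i] / (1.0f - std::pow(beta1_, t_));
      float v_hat = v_[i] / (1.0f - std::pow(beta2_, t_));
      param[i] -= lr_ * m_hat / (std::sqrt(v_hat) + eps_);
    }
  }

private:
  float lr_, beta1_, beta2_, eps_;
  int t_;
  std::vector<float> m_, v_; // first/second moment estimates
};
```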
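Finally, the softmax and sigmoid/tanh items, in their generic numerical form rather than the example's exact code. Softmax subtracts the maximum logit before exponentiating so `std::exp` cannot overflow:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

inline float sigmoid(float x) { return 1.0f / (1.0f + std::exp(-x)); }

// tanh comes straight from <cmath>; wrapped here only for symmetry.
inline float tanh_act(float x) { return std::tanh(x); }

// Numerically stable softmax (assumes a non-empty logits vector).
inline std::vector<float> softmax(const std::vector<float> &logits) {
  const float max_logit = *std::max_element(logits.begin(), logits.end());
  std::vector<float> out(logits.size());
  float sum = 0.0f;
  for (std::size_t i = 0; i < logits.size(); ++i) {
    out[i] = std::exp(logits[i] - max_logit); // shifted to avoid overflow
    sum += out[i];
  }
  for (float &o : out)
    o /= sum; // normalize to a probability distribution
  return out;
}
```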