From: A. Unique TensorFlower
Date: Thu, 4 Jan 2018 00:06:24 +0000 (-0800)
Subject: Update RNN/LSTM performance docs
X-Git-Tag: v1.6.0-rc0~304^2~5^2~43
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=9f816c90b621b59286a3b39faf213384d0563401;p=platform%2Fupstream%2Ftensorflow.git

Update RNN/LSTM performance docs

PiperOrigin-RevId: 180730491
---

diff --git a/tensorflow/docs_src/performance/performance_guide.md b/tensorflow/docs_src/performance/performance_guide.md
index 0fa1fc38c2..4850e62c12 100644
--- a/tensorflow/docs_src/performance/performance_guide.md
+++ b/tensorflow/docs_src/performance/performance_guide.md
@@ -224,11 +224,7 @@ On NVIDIA GPUs, the use of @{tf.contrib.cudnn_rnn} should always be preferred
 unless you want layer normalization, which it doesn't support. It is often at
 least an order of magnitude faster than @{tf.contrib.rnn.BasicLSTMCell} and
 @{tf.contrib.rnn.LSTMBlockCell} and uses 3-4x less memory than
-@{tf.contrib.rnn.BasicLSTMCell}. Unfortunately, @{tf.contrib.cudnn_rnn} is not
-compatible with @{tf.train.SyncReplicasOptimizer} so you should either use a
-different synchronization mechanism (consider an all-reduce based strategy) or
-use the @{tf.contrib.rnn.LSTMBlockFusedCell} (at a significant performance
-penalty).
+@{tf.contrib.rnn.BasicLSTMCell}.
 
 If you need to run one step of the RNN at a time, as might be the case in
 reinforcement learning with a recurrent policy, then you should use the