Fix issue with Luong attention when scale=True and dtype of tf.float16/tf.float64...
author    Yong Tang <yong.tang.github@outlook.com>
Fri, 30 Mar 2018 17:41:48 +0000 (10:41 -0700)
committer Rasmus Munk Larsen <rmlarsen@google.com>
Fri, 30 Mar 2018 17:41:48 +0000 (10:41 -0700)
commit    65669c0e12ba7aaa43a34db3798b8ac8fd97ba3e
tree      d6c9e1a2993ee96b15a32a223149022ef4abb74b
parent    8328d84fb54c59b93161950e50709b401576bcc3
Fix issue with Luong attention when scale=True and dtype of tf.float16/tf.float64 (#18106)

* Fix issue with Luong attention when scale=True and dtype=tf.float16/tf.float64

This fix addresses the issue raised in #18099, where LuongAttention
throws a ValueError when scale=True and the dtype is tf.float16 or
tf.float64 rather than tf.float32 (see the sketch below).

Additional test cases covering these dtypes are added.

This fix closes #18099.

Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
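
For context, a minimal sketch of the underlying problem and the shape of
the fix; the exact diff lives in attention_wrapper.py, and the function
and variable names below (scaled_luong_score, attention_g) are assumptions
for illustration, not the actual code:

    # Sketch only: the scale variable must be created in the same dtype as
    # the attention scores. Pinning it to a float32 Python constant raises a
    # ValueError when the graph runs in float16/float64.
    import tensorflow as tf

    def scaled_luong_score(query, keys):
      # query: [batch, depth], keys: [batch, max_time, depth]
      score = tf.matmul(tf.expand_dims(query, 1), keys, transpose_b=True)
      score = tf.squeeze(score, [1])  # [batch, max_time]
      g = tf.get_variable(
          "attention_g", shape=(), dtype=score.dtype,
          initializer=tf.ones_initializer())  # dtype follows the scores
      return g * score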
* Fix pylint issue

Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
* Add test case for Luong attention with scale=True and dtype=float16/float64

Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
* Add assertEqual to confirm the dtypes of the outputs (see the test sketch below)

Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
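
Roughly, the added test exercises LuongAttention with scale=True across
dtypes and checks that the attention output keeps the requested dtype. A
simplified, illustrative version follows (not the actual test in
attention_wrapper_test.py; shapes and names are assumptions):

    # Illustrative only: build a scaled LuongAttention in each dtype and
    # verify the resulting alignments come back in that same dtype.
    import tensorflow as tf

    for dtype in [tf.float16, tf.float32, tf.float64]:
      with tf.Graph().as_default():
        memory = tf.placeholder(dtype, shape=[None, None, 8])  # encoder outputs
        query = tf.placeholder(dtype, shape=[None, 8])          # decoder query
        state = tf.placeholder(dtype, shape=[None, None])       # previous alignments
        mechanism = tf.contrib.seq2seq.LuongAttention(
            num_units=8, memory=memory, scale=True, dtype=dtype)
        outputs = mechanism(query, state)
        # Depending on the TF version the mechanism returns either alignments
        # or (alignments, next_state); take the alignments tensor.
        alignments = outputs[0] if isinstance(outputs, tuple) else outputs
        # Equivalent to the assertEqual in the real test.
        assert alignments.dtype.base_dtype == dtype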
tensorflow/contrib/seq2seq/python/kernel_tests/attention_wrapper_test.py
tensorflow/contrib/seq2seq/python/ops/attention_wrapper.py