[TF:XLA] do not emit bfloat16 sum reductions from tf2xla
author	Nick Desaulniers <ndesaulniers@google.com>
	Wed, 21 Mar 2018 22:23:07 +0000 (15:23 -0700)
committer	TensorFlower Gardener <gardener@tensorflow.org>
	Wed, 21 Mar 2018 22:25:29 +0000 (15:25 -0700)
commit	9cd65e9a9081640934b2b78cf84b6e51ddd69796
tree	610081bb81702da81c558e2b2a4fc14f0c3d1868
parent	a07bd80e27dd41a1b6a3f4c2e1954ae573453cda
[TF:XLA] do not emit bfloat16 sum reductions from tf2xla

bfloat16 is a storage format, not a computation format. Performing sum reductions
in this reduced precision quickly accumulates error: with only 8 significand bits,
a running bfloat16 sum starts absorbing its addends once the accumulator grows
large enough. Instead, emit the computation in float32 and wrap the reduction's
parameters and result in conversions to and from float32.
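The effect this change guards against can be illustrated outside XLA with a small numpy sketch. The `to_bf16` helper below is invented for this sketch (it emulates bfloat16 round-to-nearest-even by hand and is not part of tf2xla); the second reduction mirrors the pattern the change emits: accumulate in float32, convert only the final result.

```python
import numpy as np

def to_bf16(x):
    """Round float32 values to bfloat16 precision (round-to-nearest-even),
    keeping the result stored in a float32 container.  Helper invented for
    this sketch; it is not part of tf2xla."""
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    # Add half a ULP of the discarded 16-bit payload, plus a tie-to-even
    # adjustment, then truncate to the high 16 bits that bfloat16 keeps.
    rounded = bits + np.uint32(0x7FFF) + ((bits >> np.uint32(16)) & np.uint32(1))
    return (rounded & np.uint32(0xFFFF0000)).view(np.float32)

ones = np.ones(1000, dtype=np.float32)

# Naive bfloat16 reduction: the accumulator is rounded after every add.
# Once it reaches 256 (8 significand bits), adding 1.0 no longer changes it.
acc = np.float32(0.0)
for v in ones:
    acc = to_bf16(acc + v)
print(float(acc))     # 256.0

# The pattern this change emits instead: convert the inputs up to float32,
# reduce in float32, and convert only the final result back to bfloat16.
result = to_bf16(ones.astype(np.float32).sum())
print(float(result))  # 1000.0
```

The stalled naive sum is exactly the accumulation error described above; reducing in float32 and converting once at the end yields the correct total, which here happens to be exactly representable in bfloat16.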

PiperOrigin-RevId: 189977590
17 files changed:
tensorflow/compiler/tf2xla/kernels/batch_norm_op.cc
tensorflow/compiler/tf2xla/kernels/bias_ops.cc
tensorflow/compiler/tf2xla/kernels/conv_ops.cc
tensorflow/compiler/tf2xla/kernels/fake_quantize_ops.cc
tensorflow/compiler/tf2xla/kernels/image_ops.cc
tensorflow/compiler/tf2xla/kernels/l2loss_op.cc
tensorflow/compiler/tf2xla/kernels/lrn_ops.cc
tensorflow/compiler/tf2xla/kernels/pooling_ops.cc
tensorflow/compiler/tf2xla/kernels/reduction_ops.cc
tensorflow/compiler/tf2xla/kernels/reduction_ops.h
tensorflow/compiler/tf2xla/kernels/reduction_ops_common.cc
tensorflow/compiler/tf2xla/kernels/scan_ops.cc
tensorflow/compiler/tf2xla/kernels/softmax_op.cc
tensorflow/compiler/tf2xla/xla_helpers.cc
tensorflow/compiler/tf2xla/xla_helpers.h
tensorflow/compiler/xla/literal_util.cc
tensorflow/compiler/xla/tests/convert_test.cc