[XLA] An HLO pass that folds BF16 F32 conversions: if an HLO already supports BF16...
author Yuanzhong Xu <yuanzx@google.com>
Mon, 12 Feb 2018 19:26:22 +0000 (11:26 -0800)
committer TensorFlower Gardener <gardener@tensorflow.org>
Mon, 12 Feb 2018 19:30:18 +0000 (11:30 -0800)
commit fabf6ddede109bbf18115718224449c314bcf92a
tree b36dd4f1ee9b42e81fb12a448f2dfd867d7180ec
parent 075931641e9147f0faf16e0ce2b76525620e1be0
[XLA] An HLO pass that folds BF16 F32 conversions: if an HLO already supports BF16 input/output, conversions before/after it will be removed and the HLO's input/output types will be converted to BF16.

Also updates HloVerifier to allow mixed precision if requested. If an HLO has both F32 and BF16 inputs, ShapeInference will use F32 as the output type.

PiperOrigin-RevId: 185407143
16 files changed:
tensorflow/compiler/xla/service/BUILD
tensorflow/compiler/xla/service/bfloat16_conversion_folding.cc [new file with mode: 0644]
tensorflow/compiler/xla/service/bfloat16_conversion_folding.h [new file with mode: 0644]
tensorflow/compiler/xla/service/bfloat16_conversion_folding_test.cc [new file with mode: 0644]
tensorflow/compiler/xla/service/bfloat16_normalization.cc [new file with mode: 0644]
tensorflow/compiler/xla/service/bfloat16_normalization.h [new file with mode: 0644]
tensorflow/compiler/xla/service/bfloat16_normalization_test.cc [new file with mode: 0644]
tensorflow/compiler/xla/service/bfloat16_support.cc [new file with mode: 0644]
tensorflow/compiler/xla/service/bfloat16_support.h [new file with mode: 0644]
tensorflow/compiler/xla/service/hlo_instruction.cc
tensorflow/compiler/xla/service/hlo_verifier.cc
tensorflow/compiler/xla/service/hlo_verifier.h
tensorflow/compiler/xla/service/shape_inference.cc
tensorflow/compiler/xla/shape_util.cc
tensorflow/compiler/xla/shape_util.h
tensorflow/compiler/xla/shape_util_test.cc