add BFloat16 operators on CPU: range, sinh, cosh, frexp, nan_to_num (#61826)
author    jiayisun <jiayi.sun@intel.com>
Fri, 20 Aug 2021 21:54:51 +0000 (14:54 -0700)
committer Facebook GitHub Bot <facebook-github-bot@users.noreply.github.com>
Fri, 20 Aug 2021 21:56:52 +0000 (14:56 -0700)
commit    da0820e553a1ff89dbfd37c591154e8326748fab
tree      4ba563aaa82cfa4e7ab573c9dd0c2079a207536a
parent    a8de0d83fed2d68512c0b0e20716bd63e6769469
add BFloat16 operators on CPU: range, sinh, cosh, frexp, nan_to_num (#61826)

Summary:
Added BFloat16 support for range, sinh, cosh, frexp, and nan_to_num on CPU. Benchmark data for these ops was collected for the BFloat16 and Float32 data types using PyTorch's operator_benchmark tool on an Intel(R) Xeon(R) Platinum 8180 CPU @ 2.50GHz (a sketch of the benchmark setup follows the attached result files below).
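
As an illustrative sanity check (not part of the patch itself), the newly covered ops can be exercised on BFloat16 CPU tensors along these lines:

```python
import torch

x = torch.randn(8, dtype=torch.bfloat16)  # BFloat16 tensor on CPU

torch.sinh(x)                             # now dispatches a BFloat16 CPU kernel
torch.cosh(x)
mantissa, exponent = torch.frexp(x)       # decomposes x into mantissa * 2**exponent
torch.nan_to_num(x, nan=0.0)              # replaces NaN/inf with finite values
torch.range(0, 4, dtype=torch.bfloat16)   # deprecated in favor of torch.arange, but covered here
```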

Number of cores: 1 core and 28 cores (1 socket)
[cosh_sinh_benchmark.txt](https://github.com/pytorch/pytorch/files/6974313/cosh_sinh_benchmark.txt)
[frexp_benchmark.txt](https://github.com/pytorch/pytorch/files/6974315/frexp_benchmark.txt)
[nan_to_num_benchmark.txt](https://github.com/pytorch/pytorch/files/6974317/nan_to_num_benchmark.txt)
[range_benchmark.txt](https://github.com/pytorch/pytorch/files/6974318/range_benchmark.txt)
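
For reference, numbers like those above can be produced with a benchmark along the following lines. This is a minimal sketch in the style of PyTorch's benchmarks/operator_benchmark package as of this commit, not the exact script used; the shape and tag values are illustrative assumptions:

```python
import operator_benchmark as op_bench
import torch

# Sweep both dtypes on one shape; the attached runs also varied input sizes.
configs = op_bench.cross_product_configs(
    shape=[(512, 512)],                       # illustrative shape, not from the patch
    dtype=[torch.float32, torch.bfloat16],
    tags=["short"],
)

class SinhBenchmark(op_bench.TorchBenchmarkBase):
    def init(self, shape, dtype):
        # Inputs are passed to forward() by name via this dict.
        self.inputs = {"x": torch.rand(shape).to(dtype)}

    def forward(self, x):
        return torch.sinh(x)

op_bench.generate_pt_test(configs, SinhBenchmark)

if __name__ == "__main__":
    op_bench.benchmark_runner.main()
```

The 1-core vs. 28-core measurements were presumably taken by restricting thread counts (e.g. via OMP_NUM_THREADS) when launching the runner.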

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61826

Reviewed By: saketh-are

Differential Revision: D30257259

Pulled By: VitalyFedyunin

fbshipit-source-id: 394cd713e6394050a8c90b2160633beb675d71dd
aten/src/ATen/native/RangeFactories.cpp
aten/src/ATen/native/cpu/UnaryOpsKernel.cpp
c10/util/BFloat16-math.h
torch/testing/_internal/common_methods_invocations.py