Fix log1p lowering bug (#64724)
author    Yinghai Lu <yinghai@fb.com>
          Thu, 9 Sep 2021 07:58:39 +0000 (00:58 -0700)
committer Facebook GitHub Bot <facebook-github-bot@users.noreply.github.com>
          Thu, 9 Sep 2021 07:59:44 +0000 (00:59 -0700)
commit    233e3e5bb499b97e0a68ba93b6928c2e96096777
tree      fde438bf6c7ae9adeeb728b8a643edcaf7963ed5
parent    d0b207e68bc4e390cbd3dd64c8f116ba0a162d3e
Fix log1p lowering bug (#64724)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64724

The constant `1` introduces an int tensor instead of a float tensor, which doesn't work well with downstream elementwise operators. The error looks like:
```
[TensorRT] WARNING: IElementWiseLayer with inputs (Unnamed Layer* 1) [Unary]_output and (Unnamed Layer* 2) [Constant]_output: first input has type Float but second input has type Int32.
```
Changing the constant to a float fixes this.
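
For context, a minimal sketch of the kind of lowering involved (the pass name `lower_log1p` and its structure are illustrative, not the actual `acc_ops.py` code): `log1p(x)` is decomposed into `log(x + 1)`, and the added constant must be the float `1.0` rather than the int `1` so the lowered graph carries a float constant.
```
import torch
import torch.fx

def lower_log1p(gm: torch.fx.GraphModule) -> torch.fx.GraphModule:
    """Rewrite log1p(x) as log(x + 1.0). Illustrative sketch; assumes
    positional call args. Using the int constant 1 instead of 1.0 can
    materialize an Int32 constant in the lowered graph, which TensorRT's
    IElementWiseLayer rejects when the other input is Float."""
    for node in list(gm.graph.nodes):
        if node.op == "call_function" and node.target == torch.log1p:
            with gm.graph.inserting_before(node):
                # 1.0 (float), not 1 (int): keeps the intermediate Float.
                add = gm.graph.call_function(torch.add, (node.args[0], 1.0))
                log = gm.graph.call_function(torch.log, (add,))
            node.replace_all_uses_with(log)
            gm.graph.erase_node(node)
    gm.graph.lint()
    gm.recompile()
    return gm

def f(x):
    return torch.log1p(x)

gm = torch.fx.symbolic_trace(f)
lower_log1p(gm)
```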

Reviewed By: 842974287

Differential Revision: D30796959

fbshipit-source-id: 0538e4dd960df9ce87a2d4cafe8f1a0c061b6bad
torch/fx/experimental/fx_acc/acc_ops.py