[quant] Add support for linear_relu fusion for FP16 dynamic quant (#63826)
author    Supriya Rao <supriyar@fb.com>
          Fri, 27 Aug 2021 04:05:56 +0000 (21:05 -0700)
committer Facebook GitHub Bot <facebook-github-bot@users.noreply.github.com>
          Fri, 27 Aug 2021 04:12:06 +0000 (21:12 -0700)
commit    294db0603fef315c8f6ac95e30f8ce6b5cce2b5a
tree      364469c7f1c3a00de3768ef9e7c17c9697a2425f
parent    cec44aa574e06e8aa1096b62a7c6d7c4dda8a3f5
[quant] Add support for linear_relu fusion for FP16 dynamic quant (#63826)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63826

Supports conversion of the intrinsic LinearReLU module to the quantized dynamic LinearReLU module.
Verifies that the support works for both module-based (nn.Linear + nn.ReLU) and functional (F.linear + F.relu) linear fusion.
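
As a minimal sketch (not part of this commit) of how the module-based path is exercised, FX graph mode quantization of this era fuses a Linear -> ReLU pair during prepare and, given a float16 dynamic qconfig, converts it to the fused dynamic LinearReLU module. The model `M` below is hypothetical, and later PyTorch releases additionally require an `example_inputs` argument to `prepare_fx`:

    import torch
    import torch.nn as nn
    from torch.quantization import float16_dynamic_qconfig
    from torch.quantization.quantize_fx import prepare_fx, convert_fx

    # Hypothetical toy model containing the Linear -> ReLU pattern
    # this commit teaches FP16 dynamic quant to fuse.
    class M(nn.Module):
        def __init__(self):
            super().__init__()
            self.linear = nn.Linear(5, 5)
            self.relu = nn.ReLU()

        def forward(self, x):
            return self.relu(self.linear(x))

    m = M().eval()
    # Apply the fp16 dynamic qconfig to the whole model.
    qconfig_dict = {"": float16_dynamic_qconfig}
    prepared = prepare_fx(m, qconfig_dict)   # fuses Linear + ReLU during prepare
    quantized = convert_fx(prepared)         # lowers to the fused dynamic LinearReLU
    quantized(torch.randn(4, 5))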

Test Plan:
python test/test_quantization.py test_dynamic_with_fusion
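
The test also covers the functional form of the pattern. A hedged sketch of what that variant looks like, assuming the same prepare_fx/convert_fx flow and qconfig as above (`FunctionalM` is a hypothetical name):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Functional variant of the same pattern: F.linear followed by F.relu.
    # With the fp16 dynamic qconfig, convert_fx is expected to lower this
    # to the fused dynamic quantized linear_relu op rather than separate calls.
    class FunctionalM(nn.Module):
        def __init__(self):
            super().__init__()
            self.w = nn.Parameter(torch.randn(5, 5))
            self.b = nn.Parameter(torch.zeros(5))

        def forward(self, x):
            return F.relu(F.linear(x, self.w, self.b))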

Imported from OSS

Reviewed By: iramazanli

Differential Revision: D30503513

fbshipit-source-id: 70446797e9670dfef7341cba2047183d6f88b70f
test/quantization/fx/test_quantize_fx.py
torch/nn/intrinsic/quantized/dynamic/modules/linear_relu.py
torch/quantization/fx/quantization_patterns.py