From a421cba325bd816e8b6f40b8823533819124ab71 Mon Sep 17 00:00:00 2001
From: Supriya Rao
Date: Sun, 15 Aug 2021 22:44:44 -0700
Subject: [PATCH] [docs][ao] Add overload information for fake_quantize_per_tensor_affine (#63258)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63258

This function supports scalar and tensor qparams

Test Plan: CI

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D30316432

fbshipit-source-id: 8b2f5582e7e095fdda22c17d178abcbc89a2d1fc
---
 torch/_torch_docs.py | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/torch/_torch_docs.py b/torch/_torch_docs.py
index 6ab55aa..78950e4 100644
--- a/torch/_torch_docs.py
+++ b/torch/_torch_docs.py
@@ -10043,8 +10043,8 @@ Returns a new tensor with the data in :attr:`input` fake quantized using :attr:`
 
 Args:
     input (Tensor): the input value(s), in ``torch.float32``.
-    scale (double): quantization scale
-    zero_point (int64): quantization zero_point
+    scale (double or Tensor): quantization scale
+    zero_point (int64 or Tensor): quantization zero_point
     quant_min (int64): lower bound of the quantized domain
     quant_max (int64): upper bound of the quantized domain
 
@@ -10058,6 +10058,8 @@ Example::
     tensor([ 0.0552, 0.9730, 0.3973, -1.0780])
    >>> torch.fake_quantize_per_tensor_affine(x, 0.1, 0, 0, 255)
     tensor([0.1000, 1.0000, 0.4000, 0.0000])
+    >>> torch.fake_quantize_per_tensor_affine(x, torch.tensor(0.1), torch.tensor(0), 0, 255)
+    tensor([0.6000, 0.4000, 0.0000, 0.0000])
 """)
 
 add_docstr(torch.fake_quantize_per_channel_affine,
-- 
2.7.4
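
A minimal sketch (not part of the patch) of how the two overloads documented above could be exercised: scalar qparams versus 0-dim tensor qparams, mirroring the docstring example added in the diff. The printed values depend on the random input, and the accepted dtypes for the tensor `zero_point` may vary by PyTorch version.

```python
import torch

x = torch.randn(4)

# Scalar qparams: scale as a Python float, zero_point as an int.
y_scalar = torch.fake_quantize_per_tensor_affine(x, 0.1, 0, 0, 255)

# Tensor qparams: scale and zero_point as 0-dim tensors,
# as in the example added by this patch.
scale = torch.tensor(0.1)
zero_point = torch.tensor(0)
y_tensor = torch.fake_quantize_per_tensor_affine(x, scale, zero_point, 0, 255)

# Both overloads apply the same affine fake-quantization, so with
# identical qparams the outputs should match.
print(torch.allclose(y_scalar, y_tensor))
```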