[docs][ao] Add overload information for fake_quantize_per_tensor_affine (#63258)
author Supriya Rao <supriyar@fb.com>
Mon, 16 Aug 2021 05:44:44 +0000 (22:44 -0700)
committer Facebook GitHub Bot <facebook-github-bot@users.noreply.github.com>
Mon, 16 Aug 2021 05:47:05 +0000 (22:47 -0700)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63258

This function supports both scalar and tensor qparams (quantization scale and zero_point).
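
A minimal usage sketch of the two overloads, for illustration only (the tensor values are arbitrary and not taken from this commit):

    import torch

    x = torch.randn(4)
    # Scalar qparams: scale as a Python float, zero_point as a Python int.
    y_scalar = torch.fake_quantize_per_tensor_affine(x, 0.1, 0, 0, 255)
    # Tensor qparams: scale and zero_point passed as zero-dim tensors,
    # mirroring the example added to the docstring below.
    y_tensor = torch.fake_quantize_per_tensor_affine(
        x, torch.tensor(0.1), torch.tensor(0), 0, 255)
    # Both calls perform the same fake quantization, so the outputs match.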

Test Plan:
CI

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D30316432

fbshipit-source-id: 8b2f5582e7e095fdda22c17d178abcbc89a2d1fc

torch/_torch_docs.py

index 6ab55aa..78950e4 100644
@@ -10043,8 +10043,8 @@ Returns a new tensor with the data in :attr:`input` fake quantized using :attr:`
 
 Args:
     input (Tensor): the input value(s), in ``torch.float32``.
-    scale (double): quantization scale
-    zero_point (int64): quantization zero_point
+    scale (double or Tensor): quantization scale
+    zero_point (int64 or Tensor): quantization zero_point
     quant_min (int64): lower bound of the quantized domain
     quant_max (int64): upper bound of the quantized domain
 
@@ -10058,6 +10058,8 @@ Example::
     tensor([ 0.0552,  0.9730,  0.3973, -1.0780])
     >>> torch.fake_quantize_per_tensor_affine(x, 0.1, 0, 0, 255)
     tensor([0.1000, 1.0000, 0.4000, 0.0000])
+    >>> torch.fake_quantize_per_tensor_affine(x, torch.tensor(0.1), torch.tensor(0), 0, 255)
+    tensor([0.1000, 1.0000, 0.4000, 0.0000])
 """)
 
 add_docstr(torch.fake_quantize_per_channel_affine,