[quant] AO migration of the `_correct_bias.py`, `_equalize.py`, and `_learnable_fake_quantize.py` files
author    Zafar Takhirov <zaf@fb.com>
          Thu, 16 Sep 2021 01:13:53 +0000 (18:13 -0700)
committer Facebook GitHub Bot <facebook-github-bot@users.noreply.github.com>
          Thu, 16 Sep 2021 01:15:39 +0000 (18:15 -0700)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64917

The AO team is migrating the existing torch.quantization namespace into torch.ao.quantization. We are doing it one file at a time to make sure that the internal call sites are updated properly.
This diff migrates the following files from torch.quantization to torch.ao.quantization:
- `_correct_bias.py`
- `_equalize.py`
- `_learnable_fake_quantize.py`

**Note:** These files are migrated completely, without any deprecation warning at the old location. The old location is thus silently deprecated.
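For downstream code the update is mechanical: each import of one of these private modules moves from the torch.quantization namespace to torch.ao.quantization, with no other changes. A minimal before/after sketch, using only the symbols that appear in the diffs below:

```python
# Old import paths: this diff moves the files outright (100% similarity,
# no shim left behind), so these imports no longer resolve.
# from torch.quantization._correct_bias import bias_correction
# from torch.quantization._learnable_fake_quantize import _LearnableFakeQuantize
# import torch.quantization._equalize as _equalize

# New import paths under the torch.ao.quantization namespace:
from torch.ao.quantization._correct_bias import bias_correction
from torch.ao.quantization._learnable_fake_quantize import _LearnableFakeQuantize
import torch.ao.quantization._equalize as _equalize
```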

Test Plan: `buck test mode/dev //caffe2/test:quantization -- TestBiasCorrection`

Reviewed By: vkuzo

Differential Revision: D30898565

fbshipit-source-id: 1d39be2539dd1adfcb42e16bdcc0daf5c8316bbd

test/quantization/core/test_workflow_ops.py
test/quantization/eager/test_bias_correction_eager.py
test/quantization/eager/test_equalize_eager.py
torch/ao/quantization/_correct_bias.py [moved from torch/quantization/_correct_bias.py with 100% similarity]
torch/ao/quantization/_equalize.py [moved from torch/quantization/_equalize.py with 100% similarity]
torch/ao/quantization/_learnable_fake_quantize.py [moved from torch/quantization/_learnable_fake_quantize.py with 100% similarity]

diff --git a/test/quantization/core/test_workflow_ops.py b/test/quantization/core/test_workflow_ops.py
index b713522..c197d79 100644
@@ -8,7 +8,7 @@ from torch.quantization import (
     default_affine_fixed_qparams_fake_quant,
 )
 
-from torch.quantization._learnable_fake_quantize import _LearnableFakeQuantize
+from torch.ao.quantization._learnable_fake_quantize import _LearnableFakeQuantize
 from torch.testing._internal.common_quantized import (
     _fake_quantize_per_channel_affine_reference,
     _fake_quantize_per_channel_affine_grad_reference,
diff --git a/test/quantization/eager/test_bias_correction_eager.py b/test/quantization/eager/test_bias_correction_eager.py
index a8c7289..06b91be 100644
@@ -7,7 +7,7 @@ from torch.quantization import default_qconfig
 from torch.quantization import QuantWrapper
 import torch.ao.ns._numeric_suite as ns
 
-from torch.quantization._correct_bias import (
+from torch.ao.quantization._correct_bias import (
     _supported_modules,
     _supported_modules_quantized,
     bias_correction,
diff --git a/test/quantization/eager/test_equalize_eager.py b/test/quantization/eager/test_equalize_eager.py
index 7e0bfb5..9b3aa0f 100644
@@ -4,7 +4,7 @@ import torch.nn as nn
 from torch.testing._internal.common_quantization import QuantizationTestCase
 from torch.ao.quantization.fuse_modules import fuse_modules
 
-import torch.quantization._equalize as _equalize
+import torch.ao.quantization._equalize as _equalize
 
 import copy