From: Zafar Takhirov
Date: Thu, 16 Sep 2021 01:13:53 +0000 (-0700)
Subject: [quant] AO migration of the `_correct_bias.py`, `_equalize.py`, and `_learnable_fake_...
X-Git-Tag: accepted/tizen/8.0/unified/20231005.095509~162
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=e0ecd0901113b5561d3d9e309881680ab41b38c6;p=platform%2Fupstream%2Fpytorch.git

[quant] AO migration of the `_correct_bias.py`, `_equalize.py`, and `_learnable_fake_quantize.py` (#64917)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64917

The AO team is migrating the existing `torch.quantization` into `torch.ao.quantization`. We are doing it one file at a time to make sure that the internal callsites are updated properly.

This change migrates the following files from `torch.quantization` to `torch.ao.quantization`:

- `_correct_bias.py`
- `_equalize.py`
- `_learnable_fake_quantize.py`

**Note:** These files are migrated completely without any warning. The old location is thus silently deprecated.

Test Plan: `buck test mode/dev //caffe2/test:quantization -- TestBiasCorrection`

Reviewed By: vkuzo

Differential Revision: D30898565

fbshipit-source-id: 1d39be2539dd1adfcb42e16bdcc0daf5c8316bbd
---

diff --git a/test/quantization/core/test_workflow_ops.py b/test/quantization/core/test_workflow_ops.py
index b713522..c197d79 100644
--- a/test/quantization/core/test_workflow_ops.py
+++ b/test/quantization/core/test_workflow_ops.py
@@ -8,7 +8,7 @@ from torch.quantization import (
     default_affine_fixed_qparams_fake_quant,
 )
-from torch.quantization._learnable_fake_quantize import _LearnableFakeQuantize
+from torch.ao.quantization._learnable_fake_quantize import _LearnableFakeQuantize
 from torch.testing._internal.common_quantized import (
     _fake_quantize_per_channel_affine_reference,
     _fake_quantize_per_channel_affine_grad_reference,
diff --git a/test/quantization/eager/test_bias_correction_eager.py b/test/quantization/eager/test_bias_correction_eager.py
index a8c7289..06b91be 100644
--- a/test/quantization/eager/test_bias_correction_eager.py
+++ b/test/quantization/eager/test_bias_correction_eager.py
@@ -7,7 +7,7 @@ from torch.quantization import default_qconfig
 from torch.quantization import QuantWrapper
 import torch.ao.ns._numeric_suite as ns
-from torch.quantization._correct_bias import (
+from torch.ao.quantization._correct_bias import (
     _supported_modules,
     _supported_modules_quantized,
     bias_correction,
diff --git a/test/quantization/eager/test_equalize_eager.py b/test/quantization/eager/test_equalize_eager.py
index 7e0bfb5..9b3aa0f 100644
--- a/test/quantization/eager/test_equalize_eager.py
+++ b/test/quantization/eager/test_equalize_eager.py
@@ -4,7 +4,7 @@ import torch.nn as nn
 from torch.testing._internal.common_quantization import QuantizationTestCase
 from torch.ao.quantization.fuse_modules import fuse_modules
-import torch.quantization._equalize as _equalize
+import torch.ao.quantization._equalize as _equalize
 
 import copy
diff --git a/torch/quantization/_correct_bias.py b/torch/ao/quantization/_correct_bias.py
similarity index 100%
rename from torch/quantization/_correct_bias.py
rename to torch/ao/quantization/_correct_bias.py
diff --git a/torch/quantization/_equalize.py b/torch/ao/quantization/_equalize.py
similarity index 100%
rename from torch/quantization/_equalize.py
rename to torch/ao/quantization/_equalize.py
diff --git a/torch/quantization/_learnable_fake_quantize.py b/torch/ao/quantization/_learnable_fake_quantize.py
similarity index 100%
rename from torch/quantization/_learnable_fake_quantize.py
rename to torch/ao/quantization/_learnable_fake_quantize.py
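
Because the old location is silently deprecated, downstream code that imports these private modules directly can keep working across the rename with an import fallback. The sketch below is an illustration based only on the paths and symbols that appear in this diff; it is not part of the commit itself.

```python
# Hypothetical compatibility shim for downstream callers (not part of this
# commit): prefer the new torch.ao.quantization location introduced here, and
# fall back to the old torch.quantization path on releases that predate the
# migration.
try:
    from torch.ao.quantization._learnable_fake_quantize import _LearnableFakeQuantize
    from torch.ao.quantization._correct_bias import bias_correction
    import torch.ao.quantization._equalize as _equalize
except ImportError:
    from torch.quantization._learnable_fake_quantize import _LearnableFakeQuantize
    from torch.quantization._correct_bias import bias_correction
    import torch.quantization._equalize as _equalize
```

Since these modules are underscore-prefixed (private), any such shim is best treated as a stopgap rather than a stable API.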