Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64917
The AO team is migrating the existing `torch.quantization` namespace into `torch.ao.quantization`. We are doing it one file at a time to make sure that the internal call sites are updated properly.
This diff migrates the following files from `torch.quantization` to `torch.ao.quantization`:
- `_correct_bias.py`
- `_equalize.py`
- `_learnable_fake_quantize.py`
**Note:** These files are migrated completely without any deprecation warning. The old location is thus silently deprecated.
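Because the old location is silently deprecated, downstream code that must work across the migration window can use a fallback-import pattern: try the new `torch.ao.quantization` path first and fall back to the old one. A minimal sketch of such a helper is below; the `import_first` name is hypothetical (not part of this PR or of PyTorch), and the demo at the end uses stdlib modules so it runs without `torch` installed.

```python
import importlib


def import_first(*paths):
    """Return the first importable module from a list of dotted paths.

    Hypothetical helper, not part of this PR: useful while a module is
    being moved and may live at either its new or old location.
    """
    for path in paths:
        try:
            return importlib.import_module(path)
        except ImportError:
            continue
    raise ImportError(f"none of {paths} could be imported")


# During the migration window, callers could prefer the new path and
# fall back to the old one, e.g. (commented out so this runs anywhere):
# _equalize = import_first("torch.ao.quantization._equalize",
#                          "torch.quantization._equalize")

# Self-contained demo with stdlib modules: the first path fails to
# import, so the helper falls through to "json".
mod = import_first("definitely_not_a_real_module_xyz", "json")
print(mod.__name__)  # -> json
```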
Test Plan: `buck test mode/dev //caffe2/test:quantization -- TestBiasCorrection`
Reviewed By: vkuzo
Differential Revision: D30898565
fbshipit-source-id: 1d39be2539dd1adfcb42e16bdcc0daf5c8316bbd
default_affine_fixed_qparams_fake_quant,
)
-from torch.quantization._learnable_fake_quantize import _LearnableFakeQuantize
+from torch.ao.quantization._learnable_fake_quantize import _LearnableFakeQuantize
from torch.testing._internal.common_quantized import (
_fake_quantize_per_channel_affine_reference,
_fake_quantize_per_channel_affine_grad_reference,
from torch.quantization import QuantWrapper
import torch.ao.ns._numeric_suite as ns
-from torch.quantization._correct_bias import (
+from torch.ao.quantization._correct_bias import (
_supported_modules,
_supported_modules_quantized,
bias_correction,
from torch.testing._internal.common_quantization import QuantizationTestCase
from torch.ao.quantization.fuse_modules import fuse_modules
-import torch.quantization._equalize as _equalize
+import torch.ao.quantization._equalize as _equalize
import copy