Summary:
Fixes https://github.com/pytorch/pytorch/issues/62793
This is mostly a quick fix. A more correct fix might be renaming `unique_dim` to `_unique_dim`, but that could be BC-breaking for C++ users (maybe). There may also be something else I am missing.
~~Not sure how to add a test for it.~~ Have tested it locally.
We can add a test like the following. Tested locally: it fails without this fix and passes with it.
```python
def test_wildcard_import(self):
    exec('from torch import *')
```
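For context, the failure mode is that `torch/__init__.py` puts every public `_C._VariableFunctions` name into `__all__`, while `torch/functional.py` later ran `del torch.unique_dim`, leaving `__all__` advertising an attribute that no longer exists on the module. A minimal sketch of that situation, using a hypothetical `fake_torch` module rather than PyTorch itself:

```python
import sys
import types

# Hypothetical module whose __all__ advertises a name that is not
# actually an attribute -- the state `torch` was left in after
# `del torch.unique_dim`.
mod = types.ModuleType('fake_torch')
mod.__all__ = ['unique_dim']
sys.modules['fake_torch'] = mod

try:
    exec('from fake_torch import *')
except AttributeError as e:
    print(e)  # module 'fake_torch' has no attribute 'unique_dim'
```

A star import iterates `__all__` and does a `getattr` for each name, which is why the missing attribute only surfaces on `from torch import *` and not on a plain `import torch`.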
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63080
Reviewed By: gchanan
Differential Revision: D30738711
Pulled By: zou3519
fbshipit-source-id: b86d0190e45ba0b49fd2cffdcfd2e3a75cc2a35e
In `test/test_torch.py`:
```diff
     def test_dir(self):
         dir(torch)
 
+    def test_wildcard_import(self):
+        exec('from torch import *')
+
     @wrapDeterministicFlagAPITest
     def test_deterministic_flag(self):
         for deterministic in [True, False]:
```
In `torch/__init__.py`:
```diff
     # PR #43339 for details.
     from torch._C._VariableFunctions import *  # type: ignore[misc] # noqa: F403
 
+# Ops not to be exposed in `torch` namespace,
+# mostly helper ops.
+PRIVATE_OPS = (
+    'unique_dim',
+)
+
 for name in dir(_C._VariableFunctions):
-    if name.startswith('__'):
+    if name.startswith('__') or name in PRIVATE_OPS:
         continue
     globals()[name] = getattr(_C._VariableFunctions, name)
     __all__.append(name)
```
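This loop is what populates the `torch` namespace from the C++ bindings; `PRIVATE_OPS` simply keeps helper ops out of both `globals()` and `__all__`, so they are never advertised in the first place. A self-contained sketch of the same pattern, using the standard `math` module as a stand-in for `_C._VariableFunctions` (the `PRIVATE_OPS` entry here is just an arbitrary example):

```python
import math

# Pretend `isqrt` is a private helper we don't want to re-export.
PRIVATE_OPS = (
    'isqrt',
)

namespace = {}
__all__ = []
for name in dir(math):
    # Skip dunders and anything explicitly marked private.
    if name.startswith('__') or name in PRIVATE_OPS:
        continue
    namespace[name] = getattr(math, name)
    __all__.append(name)

assert 'sqrt' in __all__
assert 'isqrt' not in __all__ and 'isqrt' not in namespace
```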
In `torch/functional.py`:
```diff
                      normalized, onesided, length, return_complex)
 
-del torch.unique_dim
-
-
 if TYPE_CHECKING:
     # These _impl functions return a variable number of tensors as output with
     # __torch_function__; tuple unpacking is done already rather than being
```
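For completeness, a quick manual check one could run against a build that includes this patch (a sketch, not part of the PR's test suite):

```python
import torch

# Wildcard import should no longer raise AttributeError.
exec('from torch import *')

# `unique_dim` is neither advertised nor exposed on the `torch` module;
# the op itself is still reachable via torch._C._VariableFunctions for
# the internals of torch.unique.
assert 'unique_dim' not in torch.__all__
assert not hasattr(torch, 'unique_dim')
```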