Summary:
Otherwise, these tests will fail, even though they are never meant to run on single-GPU machines.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14860
Differential Revision: D13369060
Pulled By: teng-li
fbshipit-source-id: 8a637a6d57335491ba8602cd09927700b2bbf8a0
@skip_if_not_nccl
+ @skip_if_not_multigpu
def test_queue_reduction(self):
# Set up process group.
store = c10d.FileStore(self.file.name, self.world_size)
torch.ones(10) * (self.world_size + 1) * len(devices) / 2.0)
@skip_if_not_nccl
+ @skip_if_not_multigpu
def test_sync_reduction(self):
# Set up process group.
store = c10d.FileStore(self.file.name, self.world_size)