Skip two c10d tests unless multiple GPUs are available (#14860)
authorTeng Li <tengli@fb.com>
Fri, 7 Dec 2018 01:22:04 +0000 (17:22 -0800)
committerFacebook Github Bot <facebook-github-bot@users.noreply.github.com>
Fri, 7 Dec 2018 01:28:07 +0000 (17:28 -0800)
Summary:
Otherwise, these tests fail on single-GPU machines, even though they were never meant to run there.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14860

Differential Revision: D13369060

Pulled By: teng-li

fbshipit-source-id: 8a637a6d57335491ba8602cd09927700b2bbf8a0

test/test_c10d.py

index 5e715d8..9f477c9 100644 (file)
@@ -1565,6 +1565,7 @@ class DistributedDataParallelTest(MultiProcessTestCase):
         )
 
     @skip_if_not_nccl
+    @skip_if_not_multigpu
     def test_queue_reduction(self):
         # Set up process group.
         store = c10d.FileStore(self.file.name, self.world_size)
@@ -1592,6 +1593,7 @@ class DistributedDataParallelTest(MultiProcessTestCase):
                          torch.ones(10) * (self.world_size + 1) * len(devices) / 2.0)
 
     @skip_if_not_nccl
+    @skip_if_not_multigpu
     def test_sync_reduction(self):
         # Set up process group.
         store = c10d.FileStore(self.file.name, self.world_size)
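
For reference, a decorator like skip_if_not_multigpu can be implemented by checking the visible CUDA device count before running the test. The sketch below is an illustration under that assumption, not the exact helper used by test_c10d.py, which may instead signal the skip through a process exit code since these tests are multi-process.

    import functools
    import unittest

    import torch

    def skip_if_not_multigpu(func):
        """Skip the wrapped test unless at least two CUDA devices are visible.

        Illustrative sketch only; the real helper in the PyTorch test suite
        may differ.
        """
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # Require CUDA and at least 2 devices; otherwise report a skip.
            if not (torch.cuda.is_available() and torch.cuda.device_count() >= 2):
                raise unittest.SkipTest("multi-GPU test requires at least 2 GPUs")
            return func(*args, **kwargs)
        return wrapper

Raising unittest.SkipTest makes the test show up as skipped in the report rather than as a failure, which is the behavior this change wants on single-GPU machines.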