Summary:
This fixes the following test failure:
```
======================================================================
ERROR: test_event_handle_multi_gpu (__main__.TestMultiprocessing)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_multiprocessing.py", line 445, in test_event_handle_multi_gpu
    with torch.cuda.device(d1):
  File "/home/stefan/rel/lib/python3.7/site-packages/torch/cuda/__init__.py", line 229, in __enter__
    torch._C._cuda_setDevice(self.idx)
RuntimeError: cuda runtime error (10) : invalid device ordinal at /home/stefan/pytorch/torch/csrc/cuda/Module.cpp:33
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17402
Differential Revision: D14195190
Pulled By: soumith
fbshipit-source-id: e911f3782875856de3cfbbd770b6d0411d750279
```diff
 @unittest.skipIf(NO_MULTIPROCESSING_SPAWN, "Disabled for environments that \
                  don't support multiprocessing with spawn start method")
 @unittest.skipIf(not TEST_CUDA_IPC, 'CUDA IPC not available')
+@unittest.skipIf(not TEST_MULTIGPU, 'found only 1 GPU')
 def test_event_handle_multi_gpu(self):
     d0 = torch.device('cuda:0')
     d1 = torch.device('cuda:1')
```
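The effect of the new decorator can be sketched without CUDA at all: when the multi-GPU condition is false, `unittest` records the test as skipped instead of letting it error out on `cuda:1`. In this sketch, `device_count` is a hypothetical stand-in for `torch.cuda.device_count()`, and the `TEST_MULTIGPU` derivation (`device_count() >= 2`) is an assumption about how PyTorch's test helpers define it:

```python
import unittest

def device_count():
    # Stand-in for torch.cuda.device_count(); pretend one GPU is visible.
    return 1

# Assumed definition: multi-GPU tests require at least two devices.
TEST_MULTIGPU = device_count() >= 2

class TestMultiprocessing(unittest.TestCase):
    @unittest.skipIf(not TEST_MULTIGPU, 'found only 1 GPU')
    def test_event_handle_multi_gpu(self):
        # Would address cuda:1; only runs when two GPUs exist.
        pass

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestMultiprocessing)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(len(result.skipped))  # the test is skipped rather than erroring
```

On a single-GPU (or CPU-only) machine the test is reported as a skip with the reason `'found only 1 GPU'`, which is exactly the behavior this change restores for `test_event_handle_multi_gpu`.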