Disable test_leaf_variable_sharing on ASAN runs
author    Edward Yang <ezyang@fb.com>
          Mon, 10 Dec 2018 18:40:25 +0000 (10:40 -0800)
committer Facebook Github Bot <facebook-github-bot@users.noreply.github.com>
          Mon, 10 Dec 2018 18:43:05 +0000 (10:43 -0800)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15001

Reviewed By: orionr

Differential Revision: D13399119

fbshipit-source-id: 6b1d098e55a67b1f5bc6d08a8ee3c1be8234a654

test/test_multiprocessing.py

index c509d37..f56cd2c 100644
@@ -480,6 +480,9 @@ class TestMultiprocessing(TestCase):
             var = torch.arange(1., 26).view(5, 5).requires_grad_(requires_grad)
             self._test_autograd_sharing(var)
 
+    # See https://github.com/pytorch/pytorch/issues/14997
+    @unittest.skipIf(TEST_WITH_ASAN,
+                     "non-deterministically hangs with ASAN")
     def test_leaf_variable_sharing(self):
         devices = ['cpu']
         if torch.cuda.is_available() and not NO_MULTIPROCESSING_SPAWN and TEST_CUDA_IPC:
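
The gating pattern used in the hunk above can be sketched as a self-contained snippet. Here `TEST_WITH_ASAN` is a stand-in flag derived from a hypothetical environment variable (PyTorch's test utilities compute a similar flag); the test body is a placeholder, not the real `test_leaf_variable_sharing`:

```python
import os
import unittest

# Stand-in for PyTorch's TEST_WITH_ASAN flag; assumed to be driven by an
# environment variable for this sketch.
TEST_WITH_ASAN = os.getenv("PYTORCH_TEST_WITH_ASAN", "0") == "1"


class TestExample(unittest.TestCase):
    # Skipped entirely on ASAN builds rather than risking a hang;
    # see https://github.com/pytorch/pytorch/issues/14997
    @unittest.skipIf(TEST_WITH_ASAN,
                     "non-deterministically hangs with ASAN")
    def test_leaf_variable_sharing(self):
        self.assertTrue(True)  # placeholder body


if __name__ == "__main__":
    unittest.main()
```

With the environment variable unset the test runs normally; on an ASAN build that sets it, `unittest` reports the test as skipped with the given reason instead of executing it.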