Fix flaky test for dp saved tensor hooks (#63324)
author Victor Quach <quach@fb.com>
Tue, 17 Aug 2021 15:55:25 +0000 (08:55 -0700)
committer Facebook GitHub Bot <facebook-github-bot@users.noreply.github.com>
Tue, 17 Aug 2021 15:56:58 +0000 (08:56 -0700)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63324

Fix for https://www.internalfb.com/tasks/?t=98258963
`catch_warnings` seems to record the warning only once in certain cases
where it should be recorded twice.
This test is only meant to check whether the hooks are triggered or not,
so changing the assertion to `self.assertGreater` is fine.
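
For context, here is a minimal standalone sketch of the pattern being
relaxed (hypothetical, not the actual test; it assumes the
`torch.autograd.graph.saved_tensors_hooks` context manager and a pack hook
that emits a warning):

```python
import warnings

import torch


def pack(x):
    # Warn each time a tensor is saved for backward.
    warnings.warn("pack hook called")
    return x


def unpack(x):
    return x


with warnings.catch_warnings(record=True) as w:
    with torch.autograd.graph.saved_tensors_hooks(pack, unpack):
        a = torch.ones(5, requires_grad=True)
        y = a * a  # mul saves both operands, so the pack hook runs twice

# Warning deduplication may record only one entry even though the hook ran
# twice, so assert that it fired at all instead of an exact count.
assert len(w) > 0
```

The diff below applies the same idea: `assertGreater(len(w), 0)` still fails
if the hooks never run, but is robust to the warning being recorded only once.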

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D30340833

Pulled By: Varal7

fbshipit-source-id: 1bfb9437befe9e8ab8f95efe5f513337fa9bdc5c

test/test_autograd.py

index 2b7db29..7200bd5 100644
@@ -9373,7 +9373,7 @@ class TestMultithreadAutograd(TestCase):
                     else:
                         # DataParallel only uses one thread
                         # so hooks should be called here
-                        _self.assertEqual(len(w), 2)
+                        _self.assertGreater(len(w), 0)
 
         x = torch.ones(5, 5, requires_grad=True)
         model = torch.nn.DataParallel(Model())
@@ -9383,7 +9383,7 @@ class TestMultithreadAutograd(TestCase):
             with warnings.catch_warnings(record=True) as w:
                 y = x * x
                 # hooks should be called here
-                _self.assertEqual(len(w), 2)
+                _self.assertGreater(len(w), 0)
 
     def test_python_thread_in_middle(self):
         # User might write a network that starts on one CPU thread, then runs its second half