Storage.clone maintains original device (#14751)
author Francisco Massa <fvsmassa@gmail.com>
Wed, 5 Dec 2018 16:27:00 +0000 (08:27 -0800)
committer Facebook Github Bot <facebook-github-bot@users.noreply.github.com>
Wed, 5 Dec 2018 16:33:56 +0000 (08:33 -0800)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/14673

As pointed out by vishwakftw, the root cause of the `deepcopy` issue was that `storage.clone()` would create the new storage on the default device rather than on the storage's original device.
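
For illustration, a minimal sketch of the failure mode (assumes a machine with at least two visible GPUs; the device indices are only for demonstration):

```python
import copy
import torch

x = torch.randn(4, 4, device='cuda:1')

# Before this fix, storage.clone() allocated the new storage on the
# current (default) device, so the deep-copied data could end up on
# cuda:0 instead of cuda:1.
y = copy.deepcopy(x)
print(x.storage().get_device())  # 1
print(y.storage().get_device())  # expected 1; was the default device before the fix
```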
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14751

Reviewed By: soumith

Differential Revision: D13323061

Pulled By: fmassa

fbshipit-source-id: bfe46ebd78f0b6cd9518c11d09de7849282ed2a2

test/test_cuda.py
index 7aef6dd..5249f36 100644
@@ -1376,6 +1376,14 @@ class TestCuda(TestCase):
             self.assertEqual(copy.get_device(), 0)
 
     @unittest.skipIf(not TEST_MULTIGPU, "detected only one GPU")
+    def test_multigpu_storage_clone(self):
+        x = torch.randn(4, 4, device='cuda:1').storage()
+        y = x.clone()
+        self.assertEqual(x.get_device(), y.get_device())
+        for t in ['byte', 'char', 'short', 'int', 'long', 'half', 'double']:
+            self.assertEqual(getattr(x, t)().get_device(), x.get_device())
+
+    @unittest.skipIf(not TEST_MULTIGPU, "detected only one GPU")
     def test_cuda_set_device(self):
         x = torch.randn(5, 5)
         with torch.cuda.device(1):
torch/storage.py
index a8ea4da..22f64a8 100644
@@ -39,7 +39,9 @@ class _StorageBase(object):
 
     def clone(self):
         """Returns a copy of this storage"""
-        return type(self)(self.size()).copy_(self)
+        device = self.get_device() if self.is_cuda else -1
+        with torch.cuda.device(device):
+            return type(self)(self.size()).copy_(self)
 
     def tolist(self):
         """Returns a list containing the elements of this storage"""