Use CUDAGuard when serializing CUDA Tensors (#15807)
author Richard Zou <zou3519@gmail.com>
Tue, 8 Jan 2019 15:26:15 +0000 (07:26 -0800)
committer Facebook Github Bot <facebook-github-bot@users.noreply.github.com>
Tue, 8 Jan 2019 15:31:50 +0000 (07:31 -0800)
commit 8f11147d43386acec2642a23cf08710ee0ab1af5
tree 175ab59fa9f9bfd9d8c75cf8956e5ec6436b074e
parent 29a9d6af45769afd304005be8af03f36f74a14b2
Use CUDAGuard when serializing CUDA Tensors (#15807)

Summary:
Fixes #15308. Before this change, `torch.save` and `torch.load` would
initialize a CUDA context on GPU 0 if one had not been initialized
already, even when the tensors being serialized live only on GPU 1.
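
A minimal repro sketch of the reported behavior (hypothetical path; assumes
a machine with at least two GPUs): before this change, the save below also
created a CUDA context on GPU 0, visible as this process holding memory on
device 0 in `nvidia-smi`.

```python
import torch

t = torch.randn(1000, device="cuda:1")  # only GPU 1 is touched explicitly
torch.save(t, "/tmp/t.pt")              # pre-fix: also initialized GPU 0
t2 = torch.load("/tmp/t.pt")            # loads back onto cuda:1
```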

This PR fixes that bug by using CUDAGuard in the storage serialization
path.
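
CUDAGuard is an RAII device guard on the C++ side; its effect here is
roughly the Python-level pattern sketched below (an illustration only, not
the actual C++ change; `write_storage_bytes` is a hypothetical helper):
do the device-side work while the storage's own device is current, so
nothing implicitly creates a context on device 0.

```python
import io
import torch

def write_storage_bytes(t: torch.Tensor, f) -> None:
    # Hypothetical helper mirroring the guarded write path: scope the work
    # to the tensor's own device so device 0 is never touched.
    with torch.cuda.device(t.device):
        f.write(t.cpu().numpy().tobytes())

buf = io.BytesIO()
write_storage_bytes(torch.randn(4, device="cuda:1"), buf)
```
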
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15807

Differential Revision: D13593201

Pulled By: zou3519

fbshipit-source-id: 4addc91ea5a5278d56a03f3d422577ee39e99897
torch/csrc/generic/serialization.cpp