From 2b7345bcd5dc79e734580d3e362e20f8ea64fc06 Mon Sep 17 00:00:00 2001
From: Teng Li
Date: Thu, 29 Nov 2018 17:48:04 -0800
Subject: [PATCH] PT1 distributed doc update (#14530)

Summary:
Removed an incorrect section. We don't support this. I wrote this from my memory :(
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14530

Differential Revision: D13253471

Pulled By: teng-li

fbshipit-source-id: c3f1ffc6c98ef8789157e885776e0b775ec47b15
---
 docs/source/distributed.rst | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/docs/source/distributed.rst b/docs/source/distributed.rst
index 4591a71..068226f 100644
--- a/docs/source/distributed.rst
+++ b/docs/source/distributed.rst
@@ -79,14 +79,6 @@ In the past, we were often being asked from many users on "which backend should
 
 - Use Gloo, unless you have specific reasons to use MPI.
 
-- What if I have both CPU and GPU tensors that need to get communicated?
-
-  - This can be achieved by creating two groups. One group with the NCCL backend for GPU tensors, and one
-    group with the Gloo backend for CPU tensors. For instance: use :func:`torch.distributed.init_process_group`
-    with the NCCL backend (which will create the default group for collective calls on GPU tensors), and
-    use :func:`torch.distributed.new_group` with the Gloo backend to create a new group for collective calls
-    on CPU tensors.
-
 Common environmental variables
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
@@ -268,6 +260,10 @@ used to create new groups, with arbitrary subsets of all processes. It returns
 an opaque group handle that can be given as a ``group`` argument to all collectives
 (collectives are distributed functions to exchange information in certain well-known
 programming patterns).
+Currently `torch.distributed` does not support creating groups with different backends.
+In other words, each group being created will use the same backend as you specified in
+:func:`~torch.distributed.init_process_group`.
+
 .. autofunction:: new_group
 
 Point-to-point communication
-- 
2.7.4
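
A minimal sketch of the behavior the added note documents: every group created with
torch.distributed.new_group() uses the backend chosen in init_process_group(); there is
no way in this patch's API to mix backends across groups. The sketch assumes a
single-process run with the Gloo backend so it can be executed standalone; the master
address and port values are arbitrary placeholders, not taken from the patch.

    import os
    import torch
    import torch.distributed as dist

    # Rendezvous settings for the default "env://" init method (placeholder values).
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")

    # The backend selected here applies to the default group ...
    dist.init_process_group(backend="gloo", rank=0, world_size=1)

    # ... and, per the added doc note, to any subgroup created afterwards as well.
    subgroup = dist.new_group(ranks=[0])

    # Collectives on the subgroup therefore run over the same (Gloo) backend.
    t = torch.ones(4)
    dist.all_reduce(t, group=subgroup)
    print(t)

    dist.destroy_process_group()

In a real multi-process job the same script would be launched once per rank with the
appropriate rank/world_size values; the point illustrated is only that the subgroup
inherits the backend of the default group.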