Fixed a formatting issue in doc comments (#17505)
author     Brian Johnson <brianjo@fb.com>
           Tue, 12 Mar 2019 16:52:05 +0000 (09:52 -0700)
committer  Facebook Github Bot <facebook-github-bot@users.noreply.github.com>
           Tue, 12 Mar 2019 16:55:29 +0000 (09:55 -0700)
Summary:
Fixes a formatting issue in the doc comments for torch.distributed.broadcast_multigpu, per issue #17243.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17505

Reviewed By: janewangfb

Differential Revision: D14373865

Pulled By: pietern

fbshipit-source-id: 6d7e91a3da50a7c9ba417ad852f7746eb5200043

torch/distributed/distributed_c10d.py

index ace6f82..1461469 100644
@@ -672,10 +672,10 @@ def broadcast_multigpu(tensor_list,
 
     Arguments:
         tensor_list (List[Tensor]): Tensors that participate in the collective
-            operation. if ``src`` is the rank, then ``src_tensor``th element of
-            ``tensor_list`` (``tensor_list[src_tensor]``) will be broadcasted
-            to all other tensors (on different GPUs) in the src process and
-            all tensors in ``tensor_list`` of other non-src processes.
+            operation. If ``src`` is the rank, then the specified ``src_tensor``
+            element of ``tensor_list`` (``tensor_list[src_tensor]``) will be
+            broadcast to all other tensors (on different GPUs) in the src process
+            and all tensors in ``tensor_list`` of other non-src processes.
             You also need to make sure that ``len(tensor_list)`` is the same
             for all the distributed processes calling this function.
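
As a hedged illustration of the parameters this docstring describes, the sketch below shows one way broadcast_multigpu might be called. It assumes a NCCL process group has already been initialized via dist.init_process_group and that each process drives two local GPUs; the rank value, device indices, and tensor shape are invented for the example and are not part of this commit.

    import torch
    import torch.distributed as dist

    def run(rank):
        # One tensor per local GPU; len(tensor_list) must be identical on
        # every process that calls broadcast_multigpu.
        tensor_list = [torch.zeros(4, device="cuda:0"),
                       torch.zeros(4, device="cuda:1")]
        if rank == 0:
            # On the src rank, tensor_list[src_tensor] holds the data that is
            # broadcast to every other tensor, both local and on other ranks.
            tensor_list[0].fill_(42.0)
        dist.broadcast_multigpu(tensor_list, src=0, src_tensor=0)
        # After the call, every tensor in tensor_list on every rank holds 42.0.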