[DDP Comm Hook] Add bf16 gradient compression to ddp_comm_hooks.rst (#64346)
authorYi Wang <wayi@fb.com>
Wed, 1 Sep 2021 23:09:46 +0000 (16:09 -0700)
committerFacebook GitHub Bot <facebook-github-bot@users.noreply.github.com>
Wed, 1 Sep 2021 23:34:00 +0000 (16:34 -0700)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64346

As the title says: document the new BF16 gradient compression hook and wrapper in ``ddp_comm_hooks.rst``.
ghstack-source-id: 137170288

Test Plan: N/A

Reviewed By: rohan-varma

Differential Revision: D30693513

fbshipit-source-id: 8c64b8404ff3b0322e1bbbd93f6ef051ea91307d

docs/source/ddp_comm_hooks.rst

index aed70c0..5bd0378 100644
@@ -44,11 +44,13 @@ The input ``bucket`` is a :class:`torch.distributed.GradBucket` object.
 .. currentmodule:: torch.distributed.algorithms.ddp_comm_hooks.default_hooks
 .. autofunction:: allreduce_hook
 .. autofunction:: fp16_compress_hook
+.. autofunction:: bf16_compress_hook
 
-Additionally, a communication hook wraper is provided to support :meth:`~fp16_compress_hook` as a wrapper,
+Additionally, a communication hook wrapper is provided to apply :meth:`~fp16_compress_hook` or :meth:`~bf16_compress_hook` as a wrapper,
 which can be combined with other communication hooks.
 
 .. autofunction:: fp16_compress_wrapper
+.. autofunction:: bf16_compress_wrapper
 
 PowerSGD Communication Hook
 ---------------------------
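
For context (not part of this diff), a minimal usage sketch of the documented hooks, assuming a process group has already been initialized with ``torch.distributed.init_process_group`` and the current CUDA device has been set for this process:

    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP
    from torch.distributed.algorithms.ddp_comm_hooks import default_hooks as default
    from torch.distributed.algorithms.ddp_comm_hooks import powerSGD_hook as powerSGD

    # Wrap the local model; assumes dist.init_process_group() and
    # torch.cuda.set_device() have already been called in this process.
    model = DDP(torch.nn.Linear(10, 10).cuda(),
                device_ids=[torch.cuda.current_device()])

    # Option 1: cast gradients to bfloat16 before allreduce and decompress afterwards.
    # Passing state=None uses the default process group; BF16 allreduce needs a
    # sufficiently recent NCCL version.
    model.register_comm_hook(state=None, hook=default.bf16_compress_hook)

    # Option 2 (an alternative to Option 1, since DDP accepts only one comm hook):
    # run another hook's communication, e.g. PowerSGD, in bfloat16 via the wrapper.
    # state = powerSGD.PowerSGDState(process_group=None, matrix_approximation_rank=1)
    # model.register_comm_hook(state, default.bf16_compress_wrapper(powerSGD.powerSGD_hook))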