From: serhii-havrylov
Date: Wed, 13 Mar 2019 10:16:40 +0000 (-0700)
Subject: Update docs for `mark_non_differentiable` method (#17891)
X-Git-Tag: accepted/tizen/6.5/unified/20211028.231830~835
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=f6de833cac63016de7f0f04b260c734f19e46505;p=platform%2Fupstream%2Fpytorch.git

Update docs for `mark_non_differentiable` method (#17891)

Summary:
The current documentation doesn't reflect the real values of tensors during the backward pass.
This issue is mentioned in https://github.com/pytorch/pytorch/issues/12631
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17891

Differential Revision: D14419949

Pulled By: soumith

fbshipit-source-id: 8b495628c3f017bc880f8096682cd176a53974e5
---

diff --git a/torch/autograd/function.py b/torch/autograd/function.py
index ea48a4f..89e930a 100644
--- a/torch/autograd/function.py
+++ b/torch/autograd/function.py
@@ -51,7 +51,8 @@ class _ContextMethodMixin(object):
         This will mark outputs as not requiring gradients, increasing the
         efficiency of backward computation. You still need to accept a gradient
         for each output in :meth:`~Function.backward`, but it's always going to
-        be ``None``.
+        be a zero tensor with the same shape as the shape of a corresponding
+        output.
 
         This is used e.g. for indices returned from a max :class:`Function`.
         """
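
Editor's note: the following sketch is not part of this commit. It illustrates the behaviour the updated docstring describes, assuming the typical static-method style for custom autograd Functions; the class name MaxWithIndices and the surrounding usage are hypothetical. The indices output is marked non-differentiable in forward, yet backward still receives a gradient argument for it, which, per the new wording, is a zero tensor shaped like the indices rather than ``None``.

    import torch
    from torch.autograd import Function


    class MaxWithIndices(Function):
        @staticmethod
        def forward(ctx, x):
            # Reduce along dim 0; values are differentiable, indices are not.
            values, indices = x.max(dim=0)
            ctx.mark_non_differentiable(indices)
            ctx.save_for_backward(x, indices)
            return values, indices

        @staticmethod
        def backward(ctx, grad_values, grad_indices):
            # grad_indices is still passed in, but (per the updated docs) it is
            # a zero tensor with the same shape as `indices`, not ``None``.
            x, indices = ctx.saved_tensors
            grad_x = torch.zeros_like(x)
            # Route the incoming gradient back to the argmax positions.
            grad_x.scatter_(0, indices.unsqueeze(0), grad_values.unsqueeze(0))
            return grad_x


    x = torch.randn(4, 3, requires_grad=True)
    values, indices = MaxWithIndices.apply(x)
    values.sum().backward()
    print(x.grad)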