From 085278f8b141579c5d5481a8fb96c7dfa830b262 Mon Sep 17 00:00:00 2001
From: MengeTM <34686199+MengeTM@users.noreply.github.com>
Date: Thu, 26 Aug 2021 15:32:06 -0700
Subject: [PATCH] Derivatives of relu (#63027) (#63089)

Summary:
Optimization of the relu and leaky_relu derivatives to reduce the VRAM needed for the backward pass

Fixes https://github.com/pytorch/pytorch/issues/63027

Pull Request resolved: https://github.com/pytorch/pytorch/pull/63089

Reviewed By: iramazanli

Differential Revision: D30582049

Pulled By: albanD

fbshipit-source-id: a9481fe8c10cbfe2db485e28ce80cabfef501eb8
---
 tools/autograd/derivatives.yaml | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/tools/autograd/derivatives.yaml b/tools/autograd/derivatives.yaml
index 49e574a..641471e 100644
--- a/tools/autograd/derivatives.yaml
+++ b/tools/autograd/derivatives.yaml
@@ -1604,10 +1604,6 @@
   self: soft_margin_loss_backward(grad, self, target, reduction)
 
 - name: relu(Tensor self) -> Tensor
-  self: threshold_backward(grad, self, 0)
-
-# NB: `output` instead of `self` saves memory. It avoids saving a copy of self.
-- name: relu_(Tensor(a!) self) -> Tensor(a!)
   self: threshold_backward(grad, result, 0)
 
 - name: silu(Tensor self) -> Tensor
-- 
2.7.4
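
For context (not part of the patch itself): a minimal sketch of why relu's gradient can be computed from the saved output (`result`) instead of the input (`self`), which is what makes the switch to `threshold_backward(grad, result, 0)` safe and lets autograd avoid keeping the input alive. The tensor names below are illustrative only.

```python
import torch

# For relu, (input > 0) and (output > 0) select exactly the same elements:
# if x > 0 then relu(x) = x > 0, and if x <= 0 then relu(x) = 0.
# So the backward pass only needs the output, not the input.
x = torch.randn(8)
y = torch.relu(x)
grad_out = torch.ones_like(x)

grad_from_input = grad_out * (x > 0).to(grad_out.dtype)   # old entry: uses `self`
grad_from_output = grad_out * (y > 0).to(grad_out.dtype)  # new entry: uses `result`
assert torch.equal(grad_from_input, grad_from_output)
```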