From: Gregory Chanan
Date: Fri, 19 Apr 2019 17:56:00 +0000 (-0700)
Subject: Have embedding_dense_backward match JIT signature. (#19427)
X-Git-Tag: accepted/tizen/6.5/unified/20211028.231830~123
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=b0812d3d4c6e19404592c74753560702a00caa81;p=platform%2Fupstream%2Fpytorch.git

Have embedding_dense_backward match JIT signature. (#19427)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19427

ghimport-source-id: 93438cd495129a1e41118c62e6339909783035fd

Differential Revision: D15003385

Pulled By: gchanan

fbshipit-source-id: 53cbe18aa4541a2501f496abfee526e40093c0ff
---

diff --git a/aten/src/ATen/native/native_functions.yaml b/aten/src/ATen/native/native_functions.yaml
index cc2b0fd..e6eb881 100644
--- a/aten/src/ATen/native/native_functions.yaml
+++ b/aten/src/ATen/native/native_functions.yaml
@@ -636,8 +636,7 @@
 
 - func: embedding_backward(Tensor grad, Tensor indices, int num_weights, int padding_idx, bool scale_grad_by_freq, bool sparse) -> Tensor
 
-- func: embedding_dense_backward(Tensor grad_output, IndexTensor indices, int num_weights, int padding_idx, bool scale_grad_by_freq) -> Tensor
-  matches_jit_signature: False
+- func: embedding_dense_backward(Tensor grad_output, Tensor indices, int num_weights, int padding_idx, bool scale_grad_by_freq) -> Tensor
   dispatch:
     CPU: embedding_dense_backward_cpu
     CUDA: embedding_dense_backward_cuda
diff --git a/tools/autograd/derivatives.yaml b/tools/autograd/derivatives.yaml
index 4be3090..50e3226 100644
--- a/tools/autograd/derivatives.yaml
+++ b/tools/autograd/derivatives.yaml
@@ -950,6 +950,7 @@
 
 - name: embedding_dense_backward(Tensor grad_output, Tensor indices, int64_t num_weights, int64_t padding_idx, bool scale_grad_by_freq)
   grad_output: embedding_dense_double_backward(grad, indices)
+  indices: non_differentiable
 
 - name: _embedding_bag(Tensor weight, Tensor indices, Tensor offsets, bool scale_grad_by_freq, int64_t mode, bool sparse, Tensor per_sample_weights)
   indices: non_differentiable
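
For context on what the function in this diff computes: `embedding_dense_backward` scatter-adds each row of `grad_output` into the weight-gradient row selected by the matching entry of `indices`, skipping `padding_idx` and optionally scaling by inverse index frequency. The sketch below is an illustrative pure-Python reference of those semantics, not the actual ATen CPU/CUDA kernel (`embedding_dense_backward_ref` is a hypothetical name). It also makes concrete why the `derivatives.yaml` hunk marks `indices: non_differentiable`: the indices only select rows and never enter the arithmetic differentiably.

```python
def embedding_dense_backward_ref(grad_output, indices, num_weights,
                                 padding_idx, scale_grad_by_freq):
    """Reference sketch: grad_output is a list of rows (lists of floats),
    indices is a flat list of ints. Returns the [num_weights x dim]
    gradient w.r.t. the embedding weight as nested lists."""
    dim = len(grad_output[0])
    grad_weight = [[0.0] * dim for _ in range(num_weights)]

    # With scale_grad_by_freq, each contribution is divided by how many
    # times its index appears in the batch.
    counts = {}
    if scale_grad_by_freq:
        for ix in indices:
            counts[ix] = counts.get(ix, 0) + 1

    for row, ix in zip(grad_output, indices):
        if ix == padding_idx:
            continue  # the padding row receives no gradient
        scale = 1.0 / counts[ix] if scale_grad_by_freq else 1.0
        for d in range(dim):
            grad_weight[ix][d] += scale * row[d]
    return grad_weight


# Index 0 appears twice, so its grad rows accumulate (and average under
# scale_grad_by_freq); index 1 is never used and stays zero.
g = embedding_dense_backward_ref(
    [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]],
    indices=[0, 2, 0], num_weights=3,
    padding_idx=-1, scale_grad_by_freq=False)
```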