Have embedding_dense_backward match JIT signature. (#19427)
author    Gregory Chanan <gchanan@fb.com>
          Fri, 19 Apr 2019 17:56:00 +0000 (10:56 -0700)
committer Facebook Github Bot <facebook-github-bot@users.noreply.github.com>
          Fri, 19 Apr 2019 18:03:09 +0000 (11:03 -0700)
Summary:
Changes the `indices` argument of `embedding_dense_backward` in native_functions.yaml from `IndexTensor` to `Tensor` so the schema matches the JIT signature, which lets the `matches_jit_signature: False` override be dropped; `indices` is also marked `non_differentiable` in derivatives.yaml.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/19427
ghimport-source-id: 93438cd495129a1e41118c62e6339909783035fd

Differential Revision: D15003385

Pulled By: gchanan

fbshipit-source-id: 53cbe18aa4541a2501f496abfee526e40093c0ff

aten/src/ATen/native/native_functions.yaml
tools/autograd/derivatives.yaml

diff --git a/aten/src/ATen/native/native_functions.yaml b/aten/src/ATen/native/native_functions.yaml
index cc2b0fd..e6eb881 100644
 
 - func: embedding_backward(Tensor grad, Tensor indices, int num_weights, int padding_idx, bool scale_grad_by_freq, bool sparse) -> Tensor
 
-- func: embedding_dense_backward(Tensor grad_output, IndexTensor indices, int num_weights, int padding_idx, bool scale_grad_by_freq) -> Tensor
-  matches_jit_signature: False
+- func: embedding_dense_backward(Tensor grad_output, Tensor indices, int num_weights, int padding_idx, bool scale_grad_by_freq) -> Tensor
   dispatch:
     CPU: embedding_dense_backward_cpu
     CUDA: embedding_dense_backward_cuda
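
For context, the schema above describes a dense backward pass that scatter-adds gradient rows into a `[num_weights, dim]` gradient, skipping `padding_idx` and optionally scaling by index frequency. Below is a minimal pure-Python sketch of those semantics, using plain lists instead of tensors; the function name and list-based layout are illustrative assumptions, not the ATen implementation.

```python
def embedding_dense_backward(grad_output, indices, num_weights,
                             padding_idx, scale_grad_by_freq):
    # grad_output: list of rows, one per lookup; indices: list of ints.
    dim = len(grad_output[0])
    # Count how often each index occurs, for optional frequency scaling.
    counts = {}
    if scale_grad_by_freq:
        for idx in indices:
            counts[idx] = counts.get(idx, 0) + 1
    # Accumulate gradient rows into the weight-shaped gradient.
    grad_weight = [[0.0] * dim for _ in range(num_weights)]
    for row, idx in zip(grad_output, indices):
        if idx == padding_idx:
            continue  # padding entries receive no gradient
        scale = 1.0 / counts[idx] if scale_grad_by_freq else 1.0
        for j, g in enumerate(row):
            grad_weight[idx][j] += scale * g
    return grad_weight
```

With indices `[0, 0, 2]` and `padding_idx=2`, row 0 of the result accumulates both matching gradient rows (divided by 2 when `scale_grad_by_freq` is set) and row 2 stays zero.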
diff --git a/tools/autograd/derivatives.yaml b/tools/autograd/derivatives.yaml
index 4be3090..50e3226 100644
 
 - name: embedding_dense_backward(Tensor grad_output, Tensor indices, int64_t num_weights, int64_t padding_idx, bool scale_grad_by_freq)
   grad_output: embedding_dense_double_backward(grad, indices)
+  indices: non_differentiable
 
 - name: _embedding_bag(Tensor weight, Tensor indices, Tensor offsets, bool scale_grad_by_freq, int64_t mode, bool sparse, Tensor per_sample_weights)
   indices: non_differentiable