Allow Tensor lists to show up in symbolic differentiable graphs. (#16784)
author Zachary DeVito <zdevito@fb.com>
Thu, 11 Apr 2019 01:12:38 +0000 (18:12 -0700)
committer Facebook Github Bot <facebook-github-bot@users.noreply.github.com>
Thu, 11 Apr 2019 01:16:20 +0000 (18:16 -0700)
commit 1abbee0f8e088d2c99481a05672f07947916fd75
tree 51b0804fd8eb85e8f7c7d0c91c8438f050719c08
parent 612998f2eeb1140102a9f6d16ad3738748a4698c
Allow Tensor lists to show up in symbolic differentiable graphs. (#16784)

Summary:
This is done by flattening all Tensor lists that appear among the
graph's inputs/outputs into the flat inputs/outputs list of the autograd graph.

This is less desirable than simply allowing IValues to exist in the
inputs/outputs of autograd::Function, but it is substantially less
intrusive.
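
A rough Python sketch of the flattening idea (the helper and its names are illustrative, not the actual graph_executor implementation):
```
def flatten(values):
    # Splice every Tensor list among `values` into one flat Tensor
    # sequence, recording each list's length so the original
    # structure can be rebuilt after autograd runs.
    flat, layout = [], []
    for v in values:
        if isinstance(v, (list, tuple)):  # a Tensor list input/output
            flat.extend(v)
            layout.append(len(v))
        else:                             # a plain Tensor
            flat.append(v)
            layout.append(None)
    return flat, layout
```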

CaptureList describes, in a single class, the variables captured for backward.
UnpackInstructs describes how the flattened inputs to backward are re-packed into lists.
ailzhang
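
A corresponding re-packing sketch (again illustrative; the real UnpackInstructs logic lives in torch/csrc/jit/graph_executor.cpp):
```
def unpack(flat, layout):
    # Rebuild the original Tensor / Tensor-list structure from the
    # flat sequence, following the recorded per-value instructions.
    it = iter(flat)
    return [next(it) if n is None else [next(it) for _ in range(n)]
            for n in layout]
```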

This PR is also part 2 of covering the Mask R-CNN & BERT AD formulas, following #16689.

Ops added in this PR:
```
cat
index
meshgrid
reshape
split
split_with_sizes
stack
unbind
```
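
A small usage sketch (not taken from this PR's tests) exercising two of these formulas through the scripted executor:
```
import torch

@torch.jit.script
def fn(x, y):
    # cat and split are among the ops given symbolic AD formulas in
    # this PR; the Tensor list fed to cat can now appear in the
    # differentiable graph.
    z = torch.cat([x, y], dim=0)
    parts = torch.split(z, x.size(0), dim=0)
    return (parts[0] * parts[1]).sum()

x = torch.randn(2, 3, requires_grad=True)
y = torch.randn(2, 3, requires_grad=True)
fn(x, y).backward()
print(x.grad.shape)  # torch.Size([2, 3])
```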
I will also add a few perf numbers here.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16784

Differential Revision: D14104063

Pulled By: ailzhang

fbshipit-source-id: 5ceadadfd67ccaac60c5fd6740786c5354e252b9
test/common_methods_invocations.py
test/test_jit.py
torch/csrc/jit/graph_executor.cpp
torch/csrc/jit/passes/specialize_autogradzero.cpp
torch/csrc/jit/symbolic_script.cpp