Edward Yang [Tue, 18 Dec 2018 15:35:43 +0000 (07:35 -0800)]
Revert
D13383102: [pytorch][PR] Upgrade MKL-DNN to version 0.17
Differential Revision:
D13383102
Original commit changeset:
c434f0e0ddff
fbshipit-source-id:
690f46ca0710954fa591a5ea77535e9759db4de5
svcscm [Tue, 18 Dec 2018 05:23:30 +0000 (21:23 -0800)]
Updating submodules
Reviewed By: cdelahousse
fbshipit-source-id:
4bf66581d07d839f459869bc9c6428011063cc5b
Zachary DeVito [Tue, 18 Dec 2018 05:11:30 +0000 (21:11 -0800)]
improve script/no script save error (#15321)
Summary:
Improves the error message for #15116
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15321
Differential Revision:
D13499379
Pulled By: zdevito
fbshipit-source-id:
b8dc0a83efabff74199f4aab2ee98aa41c42608b
James Sun [Tue, 18 Dec 2018 04:28:00 +0000 (20:28 -0800)]
Allow tracing with fork/wait (#15184)
Summary:
There is still a limitation: if a script module is somewhere
in the trace, the inputs/outputs can only be tensors or tuples of
tensors.
resolves #15052
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15184
Differential Revision:
D13457691
Pulled By: highker
fbshipit-source-id:
8fe46afc41357a0eb8eadd83f687b31d074deb0e
Jie [Tue, 18 Dec 2018 04:08:15 +0000 (20:08 -0800)]
[TensorIterator fixing mean to output correct result for half precision](#12115) (#14878)
Summary:
mean is calculated in two steps: sum()/numel(). For half precision, data gets
cast back to half after sum().
We fused the division into the reduction kernel by adding pre_op/post_op.
This allows torch.ones(65536).cuda().half().mean() to return the correct
result.
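A quick check of the case mentioned above (assumes a CUDA device is available; before the fix the fp16 sum presumably overflowed, since 65536 exceeds the fp16 maximum):
```python
import torch

x = torch.ones(65536).cuda().half()
print(x.mean())  # should print 1 in float16 rather than inf
```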
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14878
Differential Revision:
D13491159
Pulled By: soumith
fbshipit-source-id:
e83802e1628b6d2615c45e18d7acf991d143a09e
Edward Yang [Tue, 18 Dec 2018 03:50:10 +0000 (19:50 -0800)]
Reenable OpenMP by reverting the following two commits. (#15315)
Summary:
Revert "Put back linker flag for OpenMP to prevent build break on ppc64le (#14569)"
This reverts commit
a84e873bb156080ea76ab182171b1f3b4d5395f6.
Revert "Update OpenMP cmake setting for xcode 9 compiler(AppleClang 9.0) (#14473)"
This reverts commit
8901935ad42fe9bf093d1106ea43606008a4024d.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15315
Differential Revision:
D13495852
Pulled By: ezyang
fbshipit-source-id:
bcd3f60088b14831c53d3c171f10cd1ab6b35dee
Peter Goldsborough [Tue, 18 Dec 2018 00:08:05 +0000 (16:08 -0800)]
Fix _apply in nn.Module (#15305)
Summary:
Fixes an issue that arose from https://github.com/pytorch/pytorch/pull/13481 where `.shared_memory()` couldn't be called. Effectively undoes all changes to `nn.Module` from that PR and solve the relevant problem in a different way (the goal was to be able to call `._apply()` on the Python wrapper for a C++ module).
soumith
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15305
Differential Revision:
D13493937
Pulled By: goldsborough
fbshipit-source-id:
4cb8687f90fc8709a536c5e7eacd0dc8edf6f750
Peter Goldsborough [Tue, 18 Dec 2018 00:07:14 +0000 (16:07 -0800)]
Add a correctness check for C++ types to custom operators (#15247)
Summary:
The JIT uses `int64_t` for its integer type and `double` for its floating point type, but users quite often want to write `int` or `float` and that currently fails in not-so-nice ways for custom ops. This PR adds a simple `static_assert` to catch these common failure cases.
zdevito
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15247
Differential Revision:
D13493941
Pulled By: goldsborough
fbshipit-source-id:
c1cd0d10ab5838c75f167c0bdb57e45a0bc1344e
Tristan Rice [Mon, 17 Dec 2018 23:59:45 +0000 (15:59 -0800)]
caffe2/python/task: added __repr__ methods to all task definitions (#15250)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15250
This adds `__repr__` methods to all of the classes under task.py. This makes the objects much easier to interact with when using them in an interactive manner, such as in a Jupyter notebook.
The default `__repr__` method just returns the object ID which is very unhelpful.
Reviewed By: hanli0612
Differential Revision:
D13475758
fbshipit-source-id:
6e1b166ec35163b9776c797b6a2e0d002560cd29
Roy Li [Mon, 17 Dec 2018 23:44:23 +0000 (15:44 -0800)]
Port nn fold and unfold to c++
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14597
Reviewed By: ezyang
Differential Revision:
D13272227
fbshipit-source-id:
6eccab5ff5830a977398a96393b778095120edc6
James Sun [Mon, 17 Dec 2018 23:36:28 +0000 (15:36 -0800)]
Allow future type parsing
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14887
Differential Revision:
D13490984
Pulled By: highker
fbshipit-source-id:
165fe995867be273793f983154aa6cbce13e4396
Jesse Hellemn [Mon, 17 Dec 2018 23:27:53 +0000 (15:27 -0800)]
Removing BUILD_C10_EXPERIMENTAL_OPS option and unglobbing experimental/c10d ops
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15064
Reviewed By: orionr
Differential Revision:
D13474801
Pulled By: pjh5
fbshipit-source-id:
9d3664c3a3a1b6c2d9f083f8476fe3b037296b98
David Riazati [Mon, 17 Dec 2018 23:22:07 +0000 (15:22 -0800)]
Bicubic interpolation for nn.functional.interpolate (#9849)
Summary:
Addresses #918; interpolation results should be similar to TensorFlow's.
* Adds bicubic interpolation operator to `nn.functional.interpolate`
* Corresponding test in `test_nn.py`
The operator is added in legacy `TH` to be aligned with the other upsampling operators; they can be refactored/moved to ATen all at once when #10482 is resolved.
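A minimal usage sketch of the new mode (shapes and flags here are illustrative):
```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)
# 'bicubic' is the newly added mode; align_corners works as for bilinear
y = F.interpolate(x, scale_factor=2, mode='bicubic', align_corners=False)
print(y.shape)  # torch.Size([1, 3, 16, 16])
```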
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9849
Differential Revision:
D9007525
Pulled By: driazati
fbshipit-source-id:
93ef49a34ce4e5ffd4bda94cd9a6ddc939f0a4cc
Wanchao Liang [Mon, 17 Dec 2018 23:18:51 +0000 (15:18 -0800)]
add isinstance static type checking for jit (#15076)
Summary:
This PR adds isinstance to do static type checking in the JIT.
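A minimal sketch of the kind of check this enables (assuming plain types such as int are among the supported forms):
```python
import torch

@torch.jit.script
def is_int(x):
    # type: (int) -> bool
    # isinstance here is resolved statically by the compiler rather than at runtime
    return isinstance(x, int)

print(is_int(3))  # True
```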
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15076
Differential Revision:
D13471067
Pulled By: wanchaol
fbshipit-source-id:
d39b7ed5db9fcca4b503659d02cf7795950ea8ea
peter [Mon, 17 Dec 2018 23:18:15 +0000 (15:18 -0800)]
Fix the missing caffe2 proto files for Windows (#15157)
Summary:
Fixes #15156
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15157
Differential Revision:
D13490420
Pulled By: orionr
fbshipit-source-id:
4387d707f634a5975238af915b1befb2277f8ec7
Edward Yang [Mon, 17 Dec 2018 23:09:40 +0000 (15:09 -0800)]
Replace SwitchToDevice(0) with SwitchToDevice() (#15126)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15126
I want to make people stop manufacturing StreamId from thin air,
and a first step is to make people use the default stream.
Reviewed By: dzhulgakov
Differential Revision:
D13432922
fbshipit-source-id:
9f0d8d70646c50d979bde5ba3c3addeebac48a3d
David Riazati [Mon, 17 Dec 2018 22:38:46 +0000 (14:38 -0800)]
Don't enforce docstrings on bool dispatch (#15306)
Summary:
Allows 2 functions that are boolean dispatched to have no docstrings (the only case that will fail now is if both functions have docstrings)
Fixes #15281
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15306
Differential Revision:
D13494884
Pulled By: driazati
fbshipit-source-id:
65fec39ae03a7d6a68ad617c9b270faeb1617930
Soumyaroop Roy [Mon, 17 Dec 2018 22:23:54 +0000 (14:23 -0800)]
Fix for issue 14829 (#14908)
Summary:
* Modify the testcase as outlined in the issue
* Issue url: https://github.com/pytorch/pytorch/issues/14829
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14908
Differential Revision:
D13490360
Pulled By: ezyang
fbshipit-source-id:
ff11a72e19b49223652182e82c2b4e65fe444ca7
Junjie Bai [Mon, 17 Dec 2018 21:45:26 +0000 (13:45 -0800)]
Minor fixes in .jenkins/caffe2/bench.sh
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15304
Differential Revision:
D13493876
Pulled By: bddppq
fbshipit-source-id:
7146eb2587e526af65b4b0290c25bd55653a3088
Spandan Tiwari [Mon, 17 Dec 2018 21:45:21 +0000 (13:45 -0800)]
Adding ONNX export for torch.expand and torch.ne (#15050)
Summary:
`torch.expand` and `torch.ne` are used often in models and this PR adds ONNX export support for them. ArmenAg has created issue https://github.com/pytorch/pytorch/issues/10882 for this.
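A small sketch exercising the two newly exportable ops (the module and file name are illustrative):
```python
import torch

class ExpandNe(torch.nn.Module):
    def forward(self, x, y):
        # exercises aten::expand and aten::ne, both now exportable to ONNX
        return x.expand(4, 3).ne(y)

x, y = torch.randn(1, 3), torch.randn(4, 3)
torch.onnx.export(ExpandNe(), (x, y), "expand_ne.onnx")
```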
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15050
Differential Revision:
D13453036
Pulled By: houseroad
fbshipit-source-id:
4724b4ffcebda6cd6b2acac51d6733cb27318daf
Edward Yang [Mon, 17 Dec 2018 21:25:31 +0000 (13:25 -0800)]
Tighten up invariants regarding StreamId. (#15125)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15125
I realized that it is really bad juju if you fake a StreamId
out of thin air, because in general this isn't going to work.
So, make the constructor a lot scarier.
Most "faking StreamId out of thin air" happens because someone
just wants to put something on the default stream.
Reviewed By: dzhulgakov
Differential Revision:
D13432800
fbshipit-source-id:
a86991d6fc1d8aa4e54e8175e5f06f90856238e6
David Riazati [Mon, 17 Dec 2018 21:08:03 +0000 (13:08 -0800)]
Fix tensor printing bug in Python 2 (#12732)
Summary:
`rsplit` doesn't have kwargs in Python 2 so this line raises an error
Fixes #15135
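For reference, a minimal illustration of the Python 2 incompatibility (the string here is illustrative):
```python
# In Python 2, str.rsplit accepts only positional arguments.
'torch.float32'.rsplit('.', 1)           # works in Python 2 and 3
'torch.float32'.rsplit('.', maxsplit=1)  # TypeError under Python 2
```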
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12732
Differential Revision:
D10458630
Pulled By: driazati
fbshipit-source-id:
a63e42fbc0e39e4291480775b516c98122ec05a1
peter [Mon, 17 Dec 2018 05:50:43 +0000 (21:50 -0800)]
Refactor hotpatch_vars and apply it to libtorch (#14976)
Summary:
Fixes #14801.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14976
Differential Revision:
D13485381
Pulled By: soumith
fbshipit-source-id:
0af3c2e1b90988d56f6f85632328d1e4b788ffd2
Derek Kim [Sat, 15 Dec 2018 18:56:49 +0000 (10:56 -0800)]
Trivial comment correction in dataloader (#15276)
Summary:
Trivial comment correction in dataloader
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15276
Differential Revision:
D13477324
Pulled By: soumith
fbshipit-source-id:
2a74a014999655d129311d611f2a09411339cb13
Krishna Kalyan [Sat, 15 Dec 2018 17:46:55 +0000 (09:46 -0800)]
Delete ffi documentation (#15220)
Summary: Deleting FFI documentation since it's deprecated.
Differential Revision:
D13477329
Pulled By: soumith
fbshipit-source-id:
0b3d485eb7cef1f05b6b397dff50f21a49d6409e
Fei Sun [Sat, 15 Dec 2018 17:07:02 +0000 (09:07 -0800)]
Fix a typo in the assert
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15265
Reviewed By: llyfacebook
Differential Revision:
D13477029
Pulled By: sf-wind
fbshipit-source-id:
9c5571a583c01f9701625541ebec0c836cb923f2
y0ast [Sat, 15 Dec 2018 12:41:02 +0000 (04:41 -0800)]
fix cholesky call in potrs example (#15215)
Summary:
Cholesky by default returns the lower triangular matrix, see [docs](https://pytorch.org/docs/stable/torch.html#torch.cholesky).
However, `torch.potrs` by default requires the upper triangular matrix. The naming of the variable `u` suggests that the example expects the upper triangular factor to be returned, so I've added the flag to make that happen in the example.
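A small sketch of the corrected call pattern described above (the matrices are illustrative):
```python
import torch

a = torch.randn(3, 3)
a = a.mm(a.t()) + 1e-3 * torch.eye(3)  # make it positive definite
b = torch.randn(3, 2)
u = torch.cholesky(a, upper=True)      # upper factor, as potrs expects by default
x = torch.potrs(b, u)
print(torch.dist(b, a.mm(x)))          # should be close to zero
```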
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15215
Differential Revision:
D13476468
Pulled By: soumith
fbshipit-source-id:
7b68035f435a2b1be4d363b3f63e407394af949d
Michael Suo [Sat, 15 Dec 2018 09:14:45 +0000 (01:14 -0800)]
value-based mark and sweep DCE (#14910)
Summary:
This makes DCE more granular by tracking live values/aliases through the graph (rather than just nodes). So we can be more aggressive in DCE around control flow blocks. For example, in:
```
%a0 = aten::foo()
%b = aten::foo()
%a2, %b2 = prim::If(%cond) {
  block0() {
    %a1 = aten::foo(%a0)
    %b1 = aten::foo(%b)
  } -> (%a1, %b1)
}
return (%a2)
```
we will now dce all the `%b` stuff.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14910
Differential Revision:
D13476445
Pulled By: suo
fbshipit-source-id:
2bf5db19711c07dde946697a4f4b270bd8baf791
Xiang Gao [Sat, 15 Dec 2018 08:07:37 +0000 (00:07 -0800)]
Mention Jacobian-vector product in the doc of torch.autograd (#15197)
Summary:
A friend of mine is learning deep learning and PyTorch, and he was confused by the following piece of code from the tutorial https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html#gradients :
```python
x = torch.randn(3, requires_grad=True)
y = x * 2
while y.data.norm() < 1000:
    y = y * 2
print(y)
gradients = torch.tensor([0.1, 1.0, 0.0001], dtype=torch.float)
y.backward(gradients)
print(x.grad)
```
He didn't know where the following line comes from:
```python
gradients = torch.tensor([0.1, 1.0, 0.0001], dtype=torch.float)
```
What are we computing? Why don't we compute "the gradient of `y` w.r.t `x`"?
In the tutorial, it only says
> You can do many crazy things with autograd!
Which does not explain anything. It seems to be hard for some beginners of deep learning to understand why we ever do backward with an external gradient fed in, and what the meaning of doing so is. So I modified the tutorial in https://github.com/pytorch/tutorials/pull/385
and the docstring correspondingly in this PR, explaining the Jacobian vector product. Please review this PR and https://github.com/pytorch/tutorials/pull/385 together.
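A worked sketch of the Jacobian-vector product being described, reusing the numbers above: for the elementwise function y = 2x, the Jacobian is 2I, so backward(v) accumulates J^T v = 2v into x.grad.
```python
import torch

x = torch.randn(3, requires_grad=True)
y = x * 2                             # J = 2 * I for this elementwise function
v = torch.tensor([0.1, 1.0, 0.0001])
y.backward(v)                         # accumulates J^T v into x.grad
print(torch.allclose(x.grad, 2 * v))  # True
```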
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15197
Differential Revision:
D13476513
Pulled By: soumith
fbshipit-source-id:
bee62282e9ab72403247384e4063bcdf59d40c3c
Jerry Zhang [Sat, 15 Dec 2018 05:08:20 +0000 (21:08 -0800)]
Tensor method rename dims()->sizes() (#15246)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15246
Codemod generated with clangr shard mode, 25 files per diff,
Reviewed By: igorsugak
Differential Revision:
D13470369
fbshipit-source-id:
ce995beab7c64bebe8b234fb5e6d015940ec2952
Zachary DeVito [Sat, 15 Dec 2018 03:29:19 +0000 (19:29 -0800)]
Create parser.cpp (#15238)
Summary:
Moves implementation into .cpp file. Parser was getting included in several compilation units.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15238
Differential Revision:
D13474635
Pulled By: zdevito
fbshipit-source-id:
7dc824eea8f506d6c8ae1aa67aeec0c34d5285fc
Fei Sun [Sat, 15 Dec 2018 01:35:12 +0000 (17:35 -0800)]
Add several features to converting images to blobs (#15204)
Summary:
Several enhancements are implemented:
* Resize the images to be within a boundary between min-size and max-size (which can be height or width). It tries to resize the minimum side to match min-size while keeping the aspect ratio. However, if in that case the maximum side would exceed max-size, the maximum side is resized to max-size instead (and the minimum side ends up less than min-size). The min/max sizes are specified in the scale argument, in comma-separated form. If one of the sizes is -1, that size is not a restriction.
* Change the OpenCV resize function arguments from using cv::Size() to the x, y scales. Theoretically they should be the same, but in practice the two ways of specifying them may result in different resized outputs.
* Once the image is read in, convert the data to floats. That means that after resize and other preprocessing steps, the float values are preserved (not truncated to int).
* It is possible to convert data in text format to the blob format.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15204
Reviewed By: llyfacebook
Differential Revision:
D13467225
Pulled By: sf-wind
fbshipit-source-id:
7da34a72d43a9603cd7ab953f5821c1222d0178f
Yinghai Lu [Sat, 15 Dec 2018 00:34:11 +0000 (16:34 -0800)]
Supply static shape info to Reshape when doing onnxGetCompatibility (#15242)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15242
Newer versions of ONNX Reshape get shape info from a tensor. Hence, for the static backend, we need to provide this info when doing `onnxGetCompatibility` too.
Reviewed By: jackm321
Differential Revision:
D13471959
fbshipit-source-id:
8a58e28edd900b6ad54a1dbd63ff2579fbe0e820
rohithkrn [Sat, 15 Dec 2018 00:31:34 +0000 (16:31 -0800)]
FP16MomentumSGDUpdate Op fix and enable for ROCm (#15150)
Summary:
1. Fix a bug in FP16MomentumSGDUpdate operator
2. Enable operator for ROCm
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15150
Differential Revision:
D13473145
Pulled By: bddppq
fbshipit-source-id:
4c5c5f30cb9bba658e3639dbe193fa08a304d306
Alexander Sidorov [Sat, 15 Dec 2018 00:20:37 +0000 (16:20 -0800)]
Start unittesting our main observer (#15191)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15191
OSS:
Just splitting out basic flags from a unit test so I can extend them in another test where I need to add additional flags.
Reviewed By: yinghai
Differential Revision:
D13159184
fbshipit-source-id:
9823e792cf0ed8d0379235c44564862b7d784845
bddppq [Fri, 14 Dec 2018 23:34:38 +0000 (15:34 -0800)]
Build c10 HIP test
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15233
Reviewed By: ezyang
Differential Revision:
D13471002
Pulled By: bddppq
fbshipit-source-id:
b42c3bc2b9db672ce50a52eb700cc6ed13d3535f
Krishna Kalyan [Fri, 14 Dec 2018 23:24:45 +0000 (15:24 -0800)]
record unit time in torch.cuda.event (#15221)
Summary: Record unit of time for torch.cuda.Event's elapsed_time
Differential Revision:
D13467646
Pulled By: zou3519
fbshipit-source-id:
4f1f4ef5fa4bc5a1b4775dfcec6ab155e5bf8d6e
James Reed [Fri, 14 Dec 2018 23:05:24 +0000 (15:05 -0800)]
Preserve module hierarchy on traced modules (#15101)
Summary:
We need this, for example, to properly call `_unpack` when we have a traced module in the hierarchy
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15101
Differential Revision:
D13468467
Pulled By: jamesr66a
fbshipit-source-id:
c2b6740b12cde6e23395d12e42d4fc2c4c7ca3f2
Zachary DeVito [Fri, 14 Dec 2018 22:50:24 +0000 (14:50 -0800)]
fix an issue where two rules build the same .py files
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15230
Differential Revision:
D13471625
Pulled By: zdevito
fbshipit-source-id:
a982413a308c7a9bb5b6a82fe96fd3de44f555aa
Johannes M Dieterich [Fri, 14 Dec 2018 22:45:11 +0000 (14:45 -0800)]
Do not ifdef __launch_bounds__ out for ROCm. (#15228)
Summary:
The compiler understands it and profits from knowing it by not using too
many VGPRs, as it otherwise assumes the default workgroup size of 256.
Fixes a problem in the bringup of ROCm 2.0 on gfx906.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15228
Differential Revision:
D13470950
Pulled By: bddppq
fbshipit-source-id:
f9aa44c7c95299a099c0ea9317b9044cc056acc5
Edward Yang [Fri, 14 Dec 2018 22:23:13 +0000 (14:23 -0800)]
Revert
D13440858: [pytorch][PR] Use a pool of per-thread cudnn handles for each device, updated
Differential Revision:
D13440858
Original commit changeset:
1c6af5c53538
fbshipit-source-id:
fda42ea75000d4a4e9c4a8eeaaa5518f7ad9c298
Chaitanya Sri Krishna Lolla [Fri, 14 Dec 2018 22:18:00 +0000 (14:18 -0800)]
enabled tests in test_nn, test_cuda and test_sparse (#15232)
Summary:
tests work on ROCm 1.9.2 as present on CI (fp16 bringup, hipMemset and sparse improvements)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15232
Differential Revision:
D13470991
Pulled By: bddppq
fbshipit-source-id:
45acc4f9ea5baaaf7672b86eb022948055779925
David Riazati [Fri, 14 Dec 2018 22:14:13 +0000 (14:14 -0800)]
Fix jit doc codeblocks and tables (#15227)
Summary:
Some of the code blocks were showing up as normal text, and the "unsupported modules" table was formatted incorrectly.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15227
Differential Revision:
D13468847
Pulled By: driazati
fbshipit-source-id:
eb7375710d4f6eca1d0f44dfc43c7c506300cb1e
Johannes M Dieterich [Fri, 14 Dec 2018 22:14:09 +0000 (14:14 -0800)]
Remove __forceinline__ hipification step. (#15229)
Summary:
The HIP definition now correctly contains the inline attribute.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15229
Differential Revision:
D13470962
Pulled By: bddppq
fbshipit-source-id:
34f8361bda5f3dce20a2eeb530c3a25d1b1bdd06
Peter Goldsborough [Fri, 14 Dec 2018 21:30:35 +0000 (13:30 -0800)]
Enable all clang-tidy performance checks (#15198)
Summary:
This PR adds the final set of clang-tidy checks we should add for our codebase: a last set of performance-related checks. Most fixes here are around changing `auto` to `const auto&` in a few places where unnecessary copies were made, and adding `reserve()` calls before loops doing repeated `push_back()`. Also a few cases of calling `std::string::find` with a single-character string literal instead of a single char, which uses a less efficient string search algorithm meant for searching larger substrings.
![image](https://user-images.githubusercontent.com/6429851/49978940-adc1a780-ff01-11e8-99da-a4e431361f07.png)
ezyang apaszke
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15198
Differential Revision:
D13468797
Pulled By: goldsborough
fbshipit-source-id:
2bed1ea1c7c162b7f3e0e1026f17125e88c4d5b2
Junjie Bai [Fri, 14 Dec 2018 21:17:13 +0000 (13:17 -0800)]
Refactor caffe2 CI scripts and add benchmark scripts
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14575
Differential Revision:
D13468049
Pulled By: bddppq
fbshipit-source-id:
e73bc8742c8a03f498816eee8a72b06a3e19fe48
Peter Goldsborough [Fri, 14 Dec 2018 16:29:15 +0000 (08:29 -0800)]
Better tests/support for Python/C++ inter-op (#15193)
Summary:
Methods like `module.named_modules()` return a container of `shared_ptr<nn::Module>`. Currently the `nn::Module` base class does not have Python bindings. This PR fixes this, and adds more unit tests.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15193
Differential Revision:
D13458713
Pulled By: goldsborough
fbshipit-source-id:
4091fe1b96a1be8db14c6a4307fbacc2b41ff6fe
Jerry Zhang [Fri, 14 Dec 2018 10:05:15 +0000 (02:05 -0800)]
Tensor construction codemod(ResizeLike) - 3/7 (#15122)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15122
Codemod generated with clangr shard mode, 25 files per diff,
motivation: https://github.com/pytorch/pytorch/pull/12407
Reviewed By: dzhulgakov
Differential Revision:
D13419643
fbshipit-source-id:
65b5a037b94d458b944d51f790ba2829db1fb530
Michael Suo [Fri, 14 Dec 2018 06:10:56 +0000 (22:10 -0800)]
Revert
D13407930: [pytorch][PR] Support torch.tensor in script
Differential Revision:
D13407930
Original commit changeset:
d17f1195a221
fbshipit-source-id:
f4458872c48ec4a2c9983b21ed90bcdc0ae665b7
Duc Ngo [Fri, 14 Dec 2018 04:43:00 +0000 (20:43 -0800)]
caffe2 - make DataRandomFiller usable in unit tests (#15027)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15027
- Make DataRandomFiller able to accept input_dims and input_types for only non-intermediate inputs. Add a helper to fill inputs directly into a workspace.
Reviewed By: highker
Differential Revision:
D13408345
fbshipit-source-id:
5fc54d33da12e3f0a200e79380d4c695b0339b17
Duc Ngo [Fri, 14 Dec 2018 04:43:00 +0000 (20:43 -0800)]
caffe2 - easy - utils to set argument of operator (#15022)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15022
Add setArgument testing utils to make it easy to set argument for an operator
Reviewed By: yinghai
Differential Revision:
D13405225
fbshipit-source-id:
b5c1859c6819d53c1a44718e2868e3137067df36
Duc Ngo [Fri, 14 Dec 2018 04:43:00 +0000 (20:43 -0800)]
caffe2 - easy - test utils for tensor assertion (#15020)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15020
Add test utils for assertion of a tensor (sizes and values)
Reviewed By: salexspb
Differential Revision:
D13401146
fbshipit-source-id:
bc385df074043e03ea884940b5631b96de4a607e
Duc Ngo [Fri, 14 Dec 2018 04:42:59 +0000 (20:42 -0800)]
caffe2 - easy - test utils to compare tensors in two workspaces (#15181)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15181
Add test utils to compare tensors in two workspaces
Reviewed By: ZolotukhinM
Differential Revision:
D13387212
fbshipit-source-id:
e19d932a1ecc696bd0a08ea14d9a7485cce67bb2
Duc Ngo [Fri, 14 Dec 2018 04:42:59 +0000 (20:42 -0800)]
caffe2 - easy - test utils to fill tensors (#15019)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15019
Put some utils to fill tensors to test_utils
Reviewed By: salexspb
Differential Revision:
D13386691
fbshipit-source-id:
51d891aad1ca12dc5133c0352df65b8db4f96edb
Duc Ngo [Fri, 14 Dec 2018 04:42:59 +0000 (20:42 -0800)]
caffe2 - easy - test utils to create operator (#15180)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15180
Test utils to create an operator
On top of
D13370461
Reviewed By: ZolotukhinM
Differential Revision:
D13382773
fbshipit-source-id:
a88040ed5a60f31d3e73f1f958219cd7338dc52e
Duc Ngo [Fri, 14 Dec 2018 04:42:58 +0000 (20:42 -0800)]
caffe2 - easy - Create test_util to make it easier to write C++ unit tests (#15014)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15014
Currently many of the simple operations, such as comparing tensors, creating tensors, and fetching tensors, are too verbose and take effort to write correctly in unit tests.
Easy-to-use utilities are often important for increasing productivity when writing unit tests. While caffe2 Python unit tests are relatively easy to write at the moment, the C++ side seems lacking.
In this change I create a test_util, starting with assertsTensorEquals, getTensor, and createTensor, and we can start putting more easy-to-use utilities there.
Reviewed By: salexspb
Differential Revision:
D13370461
fbshipit-source-id:
bee467a127e1d032ef19482f98aa5c776cf508c0
vishwakftw [Fri, 14 Dec 2018 04:30:40 +0000 (20:30 -0800)]
Fix derivative for mvlgamma (#15049)
Summary:
Fixes #15015.
Added tests to validate derivative.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15049
Reviewed By: soumith
Differential Revision:
D13434117
Pulled By: zou3519
fbshipit-source-id:
4a292600af9eb08b67c0f8b5482e9512aac95e72
Roy Li [Fri, 14 Dec 2018 03:33:37 +0000 (19:33 -0800)]
Fix numpy conversion for int8 tensor
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15194
Differential Revision:
D13459270
Pulled By: li-roy
fbshipit-source-id:
605534add263860a3ad9a7fa70888301ee0bf8e4
Natalia Gimelshein [Fri, 14 Dec 2018 03:15:25 +0000 (19:15 -0800)]
add erf and erfc to fuser/autodiff
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15139
Differential Revision:
D13455690
Pulled By: soumith
fbshipit-source-id:
b06e5f5d362869c2e5fa11a52f9450d77c30d4cb
Sebastian Messmer [Fri, 14 Dec 2018 02:38:55 +0000 (18:38 -0800)]
Move TensorImpl::CopyFrom to caffe2::Tensor (2/2) (#14858)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14858
This diff doesn't change logic but just takes the existing code and moves it to caffe2::Tensor
Reviewed By: ezyang
Differential Revision:
D13365817
fbshipit-source-id:
bc73b27a793602cb14200dcdf357aa63233da43c
Sebastian Messmer [Fri, 14 Dec 2018 02:38:54 +0000 (18:38 -0800)]
Move TensorImpl::CopyFrom to caffe2::Tensor (1/2) (#14656)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14656
This diff doesn't move it yet, but prepares it to be moved, i.e. removes all access to class internals.
dzhulgakov: Please comment on if you think it still makes sense to land this even though it's not blocking anymore since we're going to move at::CopyBytes anyhow.
ezyang: There's some changes in the implementation, especially handling undefined dest tensors. Please review carefully.
Reviewed By: ezyang
Differential Revision:
D13287688
fbshipit-source-id:
17800ca8a79ab1633f23be58d96f99a160d8ed24
Jing Huang [Fri, 14 Dec 2018 02:10:55 +0000 (18:10 -0800)]
For rotated proposals, replace cv::rotatedRectangleIntersection with a correct version that doesn't have underflow problem (#15113)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15113
cv::rotatedRectangleIntersection has a known float underflow bug that would cause failure in ```CV_Assert(intersection.size() <= 8)```
For rotated proposals, replace cv::rotatedRectangleIntersection with a correct version that doesn't have underflow problem.
Otherwise, when ```USE_CPP_GENERATE_PROPOSALS = true```, the training would fail.
Reviewed By: viswanathgs
Differential Revision:
D13429770
fbshipit-source-id:
5e95d059f3c668f14059a0a83e8e53d8554cdb99
Elias Ellison [Fri, 14 Dec 2018 01:36:21 +0000 (17:36 -0800)]
Support torch.tensor in script (#14913)
Summary:
Adding support for torch.tensor in script.
The input list is typed as t[] because it can be arbitrarily nested. I added a compile-time check that the inner type of the list is a bool, float, or int.
Also adds specialization for Boolean lists, which already existed at the IValue level but had not been added to the compiler yet.
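A minimal sketch of what this enables (note that a revert of this diff, D13407930, also appears earlier in this log, so availability depends on the build):
```python
import torch

@torch.jit.script
def make_weights():
    # a (possibly nested) list of floats is accepted; the inner type must be bool, float, or int
    return torch.tensor([[1.0, 2.0], [3.0, 4.0]])

print(make_weights())
```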
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14913
Differential Revision:
D13407930
Pulled By: eellison
fbshipit-source-id:
d17f1195a22149d5b0d08d76c89a7fab8444f7c5
Sebastian Messmer [Fri, 14 Dec 2018 01:07:57 +0000 (17:07 -0800)]
Remove TensorImpl -> Type dependency
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15086
Reviewed By: dzhulgakov
Differential Revision:
D13425628
fbshipit-source-id:
08a8a774d17b071367454e027012a02f96d177d4
Peter Goldsborough [Fri, 14 Dec 2018 00:09:08 +0000 (16:09 -0800)]
Enable performance-unnecessary-value-param in .clang-tidy (#15026)
Summary:
This PR fixes around 250 places in the codebase where we were making unnecessary copies of objects (some large, some small).
ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15026
Differential Revision:
D13458784
Pulled By: goldsborough
fbshipit-source-id:
be5148b2ce09493588d70952e6f6d6ff5ec5199b
Junjie Bai [Thu, 13 Dec 2018 23:57:20 +0000 (15:57 -0800)]
Add missing caffe2_hip extension in setup.py
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15189
Reviewed By: orionr
Differential Revision:
D13457644
Pulled By: bddppq
fbshipit-source-id:
c2363e9b8fd21709b62777e5b2199f01ec1c65f8
bddppq [Thu, 13 Dec 2018 23:41:55 +0000 (15:41 -0800)]
Remove disabled_features in hipify
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15098
Reviewed By: ezyang
Differential Revision:
D13453762
Pulled By: bddppq
fbshipit-source-id:
e177042c78f5bf393163d660c25b80285353853d
bddppq [Thu, 13 Dec 2018 23:07:10 +0000 (15:07 -0800)]
Run ONNX cuda backend test cases via ROCm
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15069
Differential Revision:
D13427757
Pulled By: bddppq
fbshipit-source-id:
ba0273d75986cd5b146f7041a83c63ddf9c6c0cf
vishwakftw [Thu, 13 Dec 2018 22:28:09 +0000 (14:28 -0800)]
Remove _finfo; replace _finfo usage with torch.finfo (#15165)
Summary:
This PR removes the usage of _finfo defined in torch.distributions.utils and changes the call sites
to use torch.finfo instead
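A minimal sketch of the torch.finfo usage that replaces the internal helper:
```python
import torch

eps = torch.finfo(torch.float32).eps    # smallest representable step at 1.0
tiny = torch.finfo(torch.float32).tiny  # smallest positive normal number
print(eps, tiny)
```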
Differential Revision:
D13451936
Pulled By: soumith
fbshipit-source-id:
6dbda3a6179d9407bc3396bf1a2baf3e85bc4cf2
Jerry Zhang [Thu, 13 Dec 2018 21:33:13 +0000 (13:33 -0800)]
Tensor construction codemod(ResizeLike) - 4/7 (#15088)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15088
Codemod generated with clangr shard mode, 25 files per diff,
motivation: https://github.com/pytorch/pytorch/pull/12407
Reviewed By: ezyang
Differential Revision:
D13419682
fbshipit-source-id:
3e59403bc1c0e71e5cb66df932ed0c6a0a72e643
David Reiss [Thu, 13 Dec 2018 21:14:11 +0000 (13:14 -0800)]
Replace non-printable-ascii characters in ProtoDebugString (#14918)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14918
When ProtoBuf-Lite is in use, ProtoDebugString just calls SerializeAsString.
This produces binary output, which is not a very suitable "debug" string.
Specifically, we've observed it causing problems when calling code tries to
add the debug string to a Java exception message (which requires valid UTF-8).
Now, we replace all non-ASCII bytes with "?".
This is not a very fast implementation, but generating debug strings shouldn't
be a performance-sensitive operation in any application.
Reviewed By: dzhulgakov
Differential Revision:
D13385540
fbshipit-source-id:
8868172baf20efaf53fecf7d666a6980f59b64f5
Jerry Zhang [Thu, 13 Dec 2018 20:42:58 +0000 (12:42 -0800)]
Tensor construction codemod(ResizeLike) - 6/7 (#15137)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15137
Codemod generated with clangr shard mode, 25 files per diff,
motivation: https://github.com/pytorch/pytorch/pull/12407
Reviewed By: ezyang
Differential Revision:
D13419736
fbshipit-source-id:
f4ad7b9582c2f809258169b7fef9adbca7063d99
Jerry Zhang [Thu, 13 Dec 2018 20:40:33 +0000 (12:40 -0800)]
Tensor construction codemod(ResizeLike) - 5/7 (#15084)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15084
Codemod generated with clangr shard mode, 25 files per diff,
motivation: https://github.com/pytorch/pytorch/pull/12407
Reviewed By: ezyang
Differential Revision:
D13419711
fbshipit-source-id:
dd2b740c3f13d8087085bafc5571aaf908d1af42
Junjie Bai [Thu, 13 Dec 2018 20:31:38 +0000 (12:31 -0800)]
Use std::vector instead of alloca to work around hcc crash
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15175
Differential Revision:
D13453708
Pulled By: bddppq
fbshipit-source-id:
f8c147ae9f679e395fee9d4c73ebcca052c9a752
Junjie Bai [Thu, 13 Dec 2018 19:46:03 +0000 (11:46 -0800)]
Fix old tensor OutputTensorCopyFrom usage in ImageInput operator (#15094)
Summary:
cc jerryzh168
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15094
Differential Revision:
D13451898
Pulled By: bddppq
fbshipit-source-id:
27906be62fb88aaa13c257441a2e35a285b445ee
Vitaly Fedyunin [Thu, 13 Dec 2018 19:32:06 +0000 (11:32 -0800)]
Kill non-forward, non-backward functions generated from nn.yaml (#15127)
Summary:
Updating bindings to legacy functions.
Removing unused declarations.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15127
Differential Revision:
D13433405
Pulled By: VitalyFedyunin
fbshipit-source-id:
58544d38affd20818742338c9eb789d9d14ccbaa
Edward Yang [Thu, 13 Dec 2018 19:18:20 +0000 (11:18 -0800)]
Delete defunct USE_SIMPLE_BASE_CTOR_DTOR (#15144)
Summary:
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15144
Differential Revision:
D13440872
Pulled By: ezyang
fbshipit-source-id:
2b1d73fac0c63729ba01d8f129642334ae9d9cf3
Lu Fang [Thu, 13 Dec 2018 19:03:00 +0000 (11:03 -0800)]
Fix typo (#15045)
Summary:
Simple typo fix
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15045
Reviewed By: dzhulgakov
Differential Revision:
D13413509
Pulled By: houseroad
fbshipit-source-id:
be66700c30d038368b1433232a4e3fd9299c83d6
Michael Carilli [Thu, 13 Dec 2018 18:08:01 +0000 (10:08 -0800)]
Use a pool of per-thread cudnn handles for each device, updated (#15080)
Summary:
Rebased version of https://github.com/pytorch/pytorch/pull/14861, hopefully addressing ezyang's comments.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15080
Differential Revision:
D13440858
Pulled By: ezyang
fbshipit-source-id:
1c6af5c53538b81c6b92cf1dda231ed333f28035
vishwakftw [Thu, 13 Dec 2018 17:38:40 +0000 (09:38 -0800)]
Fix bincount for non-contiguous inputs on CPU (#15109)
Summary:
Fixes #15058.
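A minimal sketch of the kind of input the fix covers (the specific tensor is illustrative):
```python
import torch

x = torch.arange(10, dtype=torch.int64)[::2]  # a strided (non-contiguous) CPU tensor
print(x.is_contiguous())                      # False
print(torch.bincount(x))                      # counts for values 0..8
```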
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15109
Differential Revision:
D13447448
Pulled By: soumith
fbshipit-source-id:
56e8d42934538fb00465105a2c5ccfeb7c18a651
Vitaly Fedyunin [Thu, 13 Dec 2018 16:53:16 +0000 (08:53 -0800)]
Unify SparseTensorImpl::size_ and TensorImpl::sizes_
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15130
Differential Revision:
D13434981
Pulled By: VitalyFedyunin
fbshipit-source-id:
98bd4d66834a3c3d2ea577adb0c8413852da095d
Peter Goldsborough [Thu, 13 Dec 2018 16:01:10 +0000 (08:01 -0800)]
Python <-> C++ Frontend inter-op (#13481)
Summary:
This PR enables C++ frontend modules to be bound into Python and added as submodules of Python modules. For this, I added lots of pybind11 bindings for the `torch::nn::Module` class, and modified the `torch.nn.Module` class in Python to have a new Metaclass that makes `isinstance(m, torch.nn.Module)` return true when `m` is a C++ frontend module. The methods and fields of C++ modules are bound in such a way that they work seamlessly as submodules of Python modules for most operations (one exception I know of: calling `.to()` ends up calling `.apply()` on each submodule with a Python lambda, which cannot be used in C++ -- this may require small changes on Python side).
I've added quite a bunch of tests to verify the bindings and equality with Python. I think I should also try out adding a C++ module as part of some large PyTorch module, like a WLM or something, and see if everything works smoothly.
The next step for inter-op across our system is ScriptModule <-> C++ Frontend Module inter-op. I think this will then also allow using C++ frontend modules from TorchScript.
apaszke zdevito
CC dzhulgakov
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13481
Differential Revision:
D12981996
Pulled By: goldsborough
fbshipit-source-id:
147370d3596ebb0e94c82cec92993a148fee50a7
Richard Zou [Thu, 13 Dec 2018 15:51:08 +0000 (07:51 -0800)]
Reuse KernelSpec for FusionGroups with equivalent graphs (#14541)
Summary:
Before this PR, loop unrolling + the graph fuser was creating multiple
FusionGroups with the same bodies (with different variable names) for
JIT LSTMs. Each FusionGroup got registered to a separate fusion key;
each key resulted in a different compilation for the same
specializations.
This PR makes it so that when registering FusionGroups with the fusion
compiler, the compiler first checks the KernelSpec cache to see if the
FusionGroup's graph exists already. If it does, then return the
corresponding KernelSpec's key to share compiled kernels.
In addition, graphs in the KernelSpec cache are canonicalized before
being cached. I added a flag to the canonicalize pass to remove unique
names of values.
This shortens the compile time for a JIT LSTM (seq_len of 100, loop
unroll factor of 8) from 5.3s to 2.3s. Most of this compile time is
running the graph fuser and/or fusion compiler; while this PR
makes it so that there is only one unique kernel in the forward pass,
there are a lot of different kernels (6) in the backward pass
(after loop unrolling) that should be investigated.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14541
Differential Revision:
D13324487
Pulled By: zou3519
fbshipit-source-id:
b841d82ed35a959b5cfc72db033bf5a7b42cc4fb
Syed Tousif Ahmed [Thu, 13 Dec 2018 08:19:13 +0000 (00:19 -0800)]
Removes THCNumerics usages in RNN.cu (#15085)
Summary:
We don't need THCNumerics here since at::Half can be implicitly converted to float and the cuda math dispatches are handled by `/usr/local/cuda/include/crt/math_functions.hpp` and `cmath`. ATen should be free of THCNumerics after this, and when porting kernels from THC, one should not use THCNumerics.
Should close: https://github.com/pytorch/pytorch/issues/11878
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15085
Differential Revision:
D13447558
Pulled By: soumith
fbshipit-source-id:
4ff5cbf838edcd01e2d1397e4d7f4f920e9e9fc3
Jongsoo Park [Thu, 13 Dec 2018 08:15:51 +0000 (00:15 -0800)]
minimize header file includes from _avx2.cc (#14950)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14950
Minimize the number of headers included from _avx2.cc files to avoid accidental compilation of functions defined in header files reused by other translation units, which can lead to illegal instruction errors.
Reviewed By: dskhudia
Differential Revision:
D13394483
fbshipit-source-id:
67149a6fb51f7f047e745bfe395cb6dd4ae7c1ae
Gu, Jinghui [Thu, 13 Dec 2018 06:39:29 +0000 (22:39 -0800)]
Disable strict-overflow flag to avoid compilation error (#14977)
Summary:
Disable strict-overflow flag to avoid compilation error
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14977
Differential Revision:
D13447577
Pulled By: soumith
fbshipit-source-id:
1957bd5aa3c7b79219da3dd53560464977c89526
Russell Kaplan [Thu, 13 Dec 2018 05:56:54 +0000 (21:56 -0800)]
Remove "early-release beta" disclaimer from README (#15136)
Summary:
Now that PyTorch 1.0 is out, this should be updated :)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15136
Differential Revision:
D13447377
Pulled By: soumith
fbshipit-source-id:
bd4e662c53d0699f25d4d90c1b4c1e182b4427c2
Xianjie Chen [Thu, 13 Dec 2018 05:31:14 +0000 (21:31 -0800)]
support casting to string (#15110)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15110
support casting to string on CPU
Reviewed By: intermilan
Differential Revision:
D13429381
fbshipit-source-id:
b737a1ba1237b10f692d5c42b42a544b94ba9fd1
Cheng,Penghui [Thu, 13 Dec 2018 04:19:31 +0000 (20:19 -0800)]
Implementation of ChannelShuffle Op for MKLDNN (#15106)
Summary:
The speed-up of a single operation is up to 3X.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15106
Differential Revision:
D13429596
Pulled By: bddppq
fbshipit-source-id:
f8d987cafeac9bef9c3daf7e43ede8c6a4ee2ce5
Tyler Moncur [Thu, 13 Dec 2018 03:51:34 +0000 (19:51 -0800)]
Fix resize for edge case tensors (#14874)
Summary:
Certain tensor shapes failed when being resized. This pull request addresses the bug found in #13404.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14874
Differential Revision:
D13429788
Pulled By: soumith
fbshipit-source-id:
8aa6451dbadce46d6d1c47a01cb26e6559bcfc8c
Peter Goldsborough [Thu, 13 Dec 2018 03:15:22 +0000 (19:15 -0800)]
Autoformat build_variables.py (#15152)
Summary:
autoformat `tools/build_variables.py`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15152
Differential Revision:
D13445343
Pulled By: goldsborough
fbshipit-source-id:
fd63588de114cb92deda03fa1a0b36f5f9082b2f
Jongsoo Park [Thu, 13 Dec 2018 02:42:41 +0000 (18:42 -0800)]
don't compile dnnlowp.cc in avx2 option (#15147)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15147
Forgot to take out dnnlowp.cc from avx2 list in a previous diff.
Reviewed By: dskhudia
Differential Revision:
D13440686
fbshipit-source-id:
9ada98b6e885c7d5f22c91a735ff60304480b4cb
Brett Koonce [Thu, 13 Dec 2018 02:11:03 +0000 (18:11 -0800)]
docs: minor spelling tweaks
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15148
Differential Revision:
D13443708
Pulled By: suo
fbshipit-source-id:
5e3ec0afd3416ab8ce207f2d04105c49e1c04611
Zachary DeVito [Thu, 13 Dec 2018 01:27:49 +0000 (17:27 -0800)]
Export defs.bzl to open source for pytorch (#15132)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15132
Pull Request resolved: https://github.com/facebook/fbshipit/pull/64
Reviewed By: dzhulgakov
Differential Revision:
D13424093
fbshipit-source-id:
bbebef964b9f3aef8f59cd394eca068680c36b5a
Junjie Bai [Thu, 13 Dec 2018 00:34:22 +0000 (16:34 -0800)]
Add back c2 string_utils include header to benchmark_helper
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15143
Differential Revision:
D13439694
fbshipit-source-id:
78698b66d52a0178118cbf3e79a7a5ad1763d47b
Johannes M Dieterich [Thu, 13 Dec 2018 00:06:02 +0000 (16:06 -0800)]
use ROCm 1.9.2 fp16 capabilities in rocBLAS and MIOpen interfaces (#14994)
Summary:
* relax the MIOpen if statement to allow fp16/fp32 mixed-precision training, now supported by ROCm 1.9.2
* use the gemm_ex API of rocBLAS in ROCm 1.9.2 instead of the previous hgemm API
* with this: enable all but one half test in test_nn
While there, also fix:
* a group convolution issue with MIOpen, pertaining to properly initializing MIOpen on multi-GPU systems, which we detected while working on this
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14994
Differential Revision:
D13439869
Pulled By: bddppq
fbshipit-source-id:
75e4eb51a59488882e64b5eabdc30555b25be25e
Viswanath Sivakumar [Wed, 12 Dec 2018 23:48:03 +0000 (15:48 -0800)]
Optimize CPU GenerateProposals op by lazily generating anchors (3-5x faster) (#15103)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15103
There are two main optimizations in this diff:
1. We generate all anchors for every single spatial grid first, and then apply
NMS to pick 2000 anchors according to RPN_PRE_NMS_TOP_N. First sorting the
scores, picking the top 2000, and then lazily generating only the
corresponding anchors is much faster.
2. Transposing bbox_deltas from (num_anchors * 4, H, W) to
(H, W, num_anchors * 4) was also quite slow - taking about 20ms in the RRPN
case when there are lots of anchors, while it's negligible for the RPN case (about
0.1 ms). Instead of transposing, performing all operations in the
(num_anchors, H, W) format speeds things up.
For the regular RPN scenario, this gives a 5x speedup, from 5.84ms to 1.18ms, for a case
with 35 anchors over a 600x600 image.
For rotated boxes with 245 anchors, the runtime is down from 80ms to 27ms per
iter.
Reviewed By: newstzpz
Differential Revision:
D13428688
fbshipit-source-id:
6006b332925e01a7c9433ded2ff5dc9e6d96f7d3
Shen Li [Wed, 12 Dec 2018 23:18:57 +0000 (15:18 -0800)]
Implement torch.tril_indices and torch.triu_indices (#12653) (#14904)
Summary:
This is an optimized implementation that does the following:
1. Create an empty Tensor of the correct size.
2. Fill the Tensor with the correct values.
The following three designs for filling in the Tensor result in roughly the same performance. Hence, the 2nd option is taken for simpler code, and to return contiguous tensors.
1. Sequential: fill row coordinates first, then columns. This results in two for-loops and more arithmetic operations.
2. Interleaved: fill in index coordinates one by one, which jumps between the two output Tensor rows in every iteration.
3. Transpose: create an n x 2 Tensor, fill the Tensor sequentially, and then transpose it.
<img width="352" alt="screen shot 2018-12-10 at 3 54 39 pm" src="https://user-images.githubusercontent.com/
16999635/
49769172-
07bd3580-fc94-11e8-8164-
41839185e9f9.png">
NOTE:
This implementation returns a 2D tensor, instead of a tuple of two tensors. It means that users will not be able to do the following:
```python
x = torch.ones(3, 3)
i = torch.tril_indices(3, 3)
x[i] # need to first convert the 2D tensor into a tuple of two 1D tensors.
```
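For completeness, a short sketch of the intended indexing pattern given the 2D return value (shapes here are illustrative):
```python
import torch

x = torch.ones(3, 3)
i = torch.tril_indices(3, 3)  # shape (2, 6): row indices and column indices
print(x[i[0], i[1]])          # the six lower-triangular entries
```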
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14904
Reviewed By: zou3519
Differential Revision:
D13433027
Pulled By: mrshenli
fbshipit-source-id:
41c876aafcf584832d7069f7c5929ffb59e0ae6a
Imran [Wed, 12 Dec 2018 23:15:45 +0000 (15:15 -0800)]
Minor documentation mistake (#15068)
Summary:
keepdim is an optional parameter for torch.max().
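A minimal sketch of the two call forms (shapes are illustrative):
```python
import torch

x = torch.randn(2, 3)
values, indices = torch.max(x, dim=1)                    # values.shape == (2,)
values_k, indices_k = torch.max(x, dim=1, keepdim=True)  # values_k.shape == (2, 1)
```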
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15068
Differential Revision:
D13437745
Pulled By: zou3519
fbshipit-source-id:
b5198c7d4ae17758cd136f6e5aecc6cb5838f174
David Riazati [Wed, 12 Dec 2018 20:25:40 +0000 (12:25 -0800)]
Add script standard library documentation + cleanup (#14912)
Summary:
Documents what is supported in the script standard library.
* Adds `my_script_module._get_method('forward').schema()` method to get function schema from a `ScriptModule`
* Removes `torch.nn.functional` from the list of builtins. The only functions not supported are `nn.functional.fold` and `nn.functional.unfold`, but those currently just dispatch to their corresponding aten ops, so from a user's perspective it looks like they work.
* Allow printing of `IValue::Device` by getting its string representation
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14912
Differential Revision:
D13385928
Pulled By: driazati
fbshipit-source-id:
e391691b2f87dba6e13be05d4aa3ed2f004e31da