Teng Li [Fri, 7 Dec 2018 01:22:04 +0000 (17:22 -0800)]
Skipping two c10d tests only if there are multi-GPUs (#14860)
Summary:
Otherwise, these tests will fail, even though they were never meant to run on single-GPU machines.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14860
Differential Revision:
D13369060
Pulled By: teng-li
fbshipit-source-id:
8a637a6d57335491ba8602cd09927700b2bbf8a0
Sebastian Messmer [Thu, 6 Dec 2018 23:52:15 +0000 (15:52 -0800)]
Move TensorOptions, DefaultTensorOptions to c10
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14746
Reviewed By: ezyang
Differential Revision:
D13318644
fbshipit-source-id:
b703d7dc67e75d9e9571c80d62a100c5fc4e84df
Marat Dukhan [Thu, 6 Dec 2018 23:12:35 +0000 (15:12 -0800)]
Switch Int8MaxPool operator to QNNPACK (#14832)
Summary:
1.6-2.4X speedup on ARM when compiled with gcc
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14832
Differential Revision:
D13358160
Pulled By: Maratyszcza
fbshipit-source-id:
39e9791886fac62650bb53a9df341889f0bb5d49
Richard Zou [Thu, 6 Dec 2018 22:55:55 +0000 (14:55 -0800)]
collect_env.py: get conda magma and mkl information (#14854)
Summary:
Fixes #12371
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14854
Differential Revision:
D13363635
Pulled By: zou3519
fbshipit-source-id:
f8b5d05038bf5ce451399dfeed558ae298178128
zrphercule [Thu, 6 Dec 2018 22:04:44 +0000 (14:04 -0800)]
Add LogSigmoid support in ONNX symbolic (#14830)
Summary:
Add LogSigmoid:
torch.LogSigmoid(x) = onnx.Log(onnx.Sigmoid(x))
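A quick sanity check of that decomposition (a minimal sketch, not part of the PR):
```
import torch

x = torch.randn(4)
# the exported ONNX graph computes Log(Sigmoid(x)), matching the ATen op
assert torch.allclose(torch.nn.functional.logsigmoid(x),
                      torch.log(torch.sigmoid(x)))
```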
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14830
Differential Revision:
D13353891
Pulled By: zrphercule
fbshipit-source-id:
bf456170b9e6c4edad07b3333cd5797f8e0fa97f
Ashwin Bharambe [Thu, 6 Dec 2018 21:44:33 +0000 (13:44 -0800)]
Kill GPU memory logs in normal runs (#14838)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14838
The GPU memory tracking logs are incredibly annoying and merely serve
to pollute output. I `VLOG(1)`ed them. Hopefully, this is non-controversial.
Reviewed By: kuttas
Differential Revision:
D13343290
fbshipit-source-id:
b3cae99346c97b66e97ea660061e15dc5c99b9fc
Junjie Bai [Thu, 6 Dec 2018 21:17:28 +0000 (13:17 -0800)]
Stop inserting static casts in Hipify (#14853)
Summary:
The latest hcc can now properly cast to the correct type internally, so there is no need to insert static_cast in the hipify scripts anymore.
However, the hcc included in the latest ROCm release (1.9.2) doesn't have this fix, so a flag is left to continue doing static_cast for those using the official ROCm releases.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14853
Differential Revision:
D13363171
Pulled By: bddppq
fbshipit-source-id:
a36476a8511222ff3c933d31788e8a0ffb04f5ca
Jerry Zhang [Thu, 6 Dec 2018 19:16:07 +0000 (11:16 -0800)]
Tensor construction codemod - 3/3 (#14835)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14835
Codemod generated with clangr shard mode, 25 files per diff,
motivation: https://github.com/pytorch/pytorch/pull/12407
Reviewed By: bddppq
Differential Revision:
D13335184
fbshipit-source-id:
26d8247e16b30bdff045530034af9b72c76d066f
Jerry Zhang [Thu, 6 Dec 2018 19:14:48 +0000 (11:14 -0800)]
Tensor construction codemod - 1/3 (#14828)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14828
Codemod generated with clangr shard mode, 25 files per diff,
motivation: https://github.com/pytorch/pytorch/pull/12407
Reviewed By: bddppq
Differential Revision:
D13335160
fbshipit-source-id:
a3ae4c5a86bfbdaf2d5aa14e0eef57255e829fd4
Jerry Zhang [Thu, 6 Dec 2018 18:56:14 +0000 (10:56 -0800)]
Move numa.{h, cc} to c10/util (#14393)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14393
As titled.
Reviewed By: ezyang
Differential Revision:
D13205604
fbshipit-source-id:
54166492d31827b0343ed070cc36a825dd86e2ed
Johannes M Dieterich [Thu, 6 Dec 2018 18:04:37 +0000 (10:04 -0800)]
Upgrade CI to ROCm 1.9.2 (#14216)
Summary:
Drop custom hcc/hip, as the 1.9.2 release should contain the relevant patches.
Most notable feature in 1.9.2 is mixed precision support in rocBLAS and MIOpen. These features will be enabled by subsequent PRs.
bddppq ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14216
Differential Revision:
D13354294
Pulled By: bddppq
fbshipit-source-id:
2541d4a196af21c9432c1aff7f6e65b572628028
Jan Schlüter [Thu, 6 Dec 2018 17:29:01 +0000 (09:29 -0800)]
Allow linspace and logspace with steps=1 and start != end like numpy (#14748)
Summary:
`torch.linspace(0, 1, 1)` fails with `RuntimeError: invalid argument 3: invalid number of points at ../aten/src/TH/generic/THTensorMoreMath.cpp:2119`, while `np.linspace(0, 1, 1)` works fine.
Looking at the code, there is even a comment by gchanan asking: "NumPy allows you to pass different points even if n <= 1 -- should we?"
I would say "yes". Currently, I would need to handle the case of `steps == 1` or `steps == 0` separately, making sure to change the `end` when calling `torch.linspace`. This is impractical. If we support `start != end`, there are two possibilities for the result: Either we ensure the first value in the resulting sequence always equals `start`, or we ensure the last value in the resulting sequence always equals `end`. Numpy chose the former, which also allows it to support a boolean `endpoint` flag. I'd say we should follow numpy.
This PR adapts `linspace` and `logspace` to mimic the behavior of numpy, adapts the tests accordingly, and extends the docstrings to make clear what happens when passing `steps=1`.
If you decide against this PR, the error message should become explicit about what I did wrong, and the documentation should be extended to mention this restriction.
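For reference, a minimal sketch of the NumPy behavior this PR mimics:
```
import numpy as np
import torch

np.linspace(0, 1, 1)      # array([0.]) -- the single value equals `start`
torch.linspace(0, 1, 1)   # tensor([0.]) after this PR; RuntimeError before
torch.logspace(0, 1, 1)   # tensor([1.]) after this PR, i.e. 10 ** 0
```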
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14748
Differential Revision:
D13356136
Pulled By: ezyang
fbshipit-source-id:
db85b8f0a98a5e24b3acd766132ab71c91794a82
Jie [Thu, 6 Dec 2018 16:57:39 +0000 (08:57 -0800)]
(#14580)
Summary:
Removes the cast of half to float in torch.sum when the input tensor is
float16 and the output tensor is float32; instead, we cast the data when loading the input in the kernel.
This should save a kernel launch as well as a full global-memory load
of the promoted data type (float).
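For context, this affects reductions of the following form (a sketch, assuming a CUDA device):
```
import torch

x = torch.randn(1024, dtype=torch.float16, device='cuda')
# accumulate in float32 without first materializing a float32 copy of x
s = torch.sum(x, dtype=torch.float32)
```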
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14580
Differential Revision:
D13356203
Pulled By: ezyang
fbshipit-source-id:
85e91225b880a65fe3ceb493371b9b36407fdf48
Ricardo Cuenca [Thu, 6 Dec 2018 16:57:31 +0000 (08:57 -0800)]
Consistent formatting in losses' docs
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14739
Differential Revision:
D13356143
Pulled By: ezyang
fbshipit-source-id:
9ae8316dd8ba6e910247b64cec22db63df10e11c
Alex Şuhan [Thu, 6 Dec 2018 16:56:25 +0000 (08:56 -0800)]
Add (partial) autodiff support for nll_loss (#14305)
Summary:
Not ready yet, need some comments / help with this. It's good enough for https://github.com/pytorch/xla immediate goals (forward + backward trace fusion), but there are at least two issues with it:
1. If we don't allow the weight to be set, `test/test_jit.py` fails to cover the change.
2. If we allow the weight to be set, running `test/test_jit.py TestJitGenerated.test_nn_nll_loss` fails with:
```
======================================================================
ERROR: test_nn_nll_loss (__main__.TestJitGenerated)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test/test_jit.py", line 10001, in do_test
fn, f_args_variable, kwargs_variable, no_grad=no_grad)
File "test/test_jit.py", line 9360, in check_against_reference
outputs_test = self.runAndSaveRNG(func, recording_inputs, kwargs)
File "test/test_jit.py", line 425, in runAndSaveRNG
results = func(*inputs, **kwargs)
File "test/test_jit.py", line 9298, in script_fn
self.assertExportImport(CU.the_method.graph, tensors)
File "test/test_jit.py", line 415, in assertExportImport
self.assertExportImportModule(m, inputs)
File "test/test_jit.py", line 419, in assertExportImportModule
self.assertEqual(self.runAndSaveRNG(m.forward, inputs),
File "test/test_jit.py", line 425, in runAndSaveRNG
results = func(*inputs, **kwargs)
RuntimeError:
arguments for call are not valid:
for operator aten::nll_loss_backward(Tensor grad_output, Tensor self, Tensor target, Tensor? weight, int reduction, int ignore_index, Tensor total_weight, *, Tensor out) -> Tensor:
expected a value of type Tensor for argument 'total_weight' but found bool
<internally-created-node>
~ <--- HERE
for operator aten::nll_loss_backward(Tensor grad_output, Tensor self, Tensor target, Tensor? weight, int reduction, int ignore_index, Tensor total_weight) -> Tensor:
expected a value of type Tensor for argument 'total_weight' but found bool
<internally-created-node>
~ <--- HERE
for call at:
<internally-created-node>
~ <--- HERE
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14305
Differential Revision:
D13356265
Pulled By: ezyang
fbshipit-source-id:
504d783b2d87f923e698a6a4efc0fd9935a94a41
svcscm [Thu, 6 Dec 2018 11:18:17 +0000 (03:18 -0800)]
Updating submodules
Reviewed By: yns88
fbshipit-source-id:
2adbb6f97d4b8f067a2538fec855063510b0ca3f
svcscm [Thu, 6 Dec 2018 10:53:28 +0000 (02:53 -0800)]
Updating submodules
Reviewed By: yns88
fbshipit-source-id:
e0509413215f3b7578b825c52365fec4da625bd5
lcskrishna [Thu, 6 Dec 2018 07:52:42 +0000 (23:52 -0800)]
Fixed MIOpen RNN Segfault issue and enabled RNN test (#14810)
Summary:
This pull request contains changes for:
1. Added MIOpen RNN API miopenGetRNNLayerBiasSize and miopenGetRNNLayerParamSize.
2. Fixed usage of API miopenGetRNNLayerParam.
3. Modifying the RNN test to run using MIOpen engine.
Differential Revision:
D13355699
Pulled By: bddppq
fbshipit-source-id:
6f750657f8049c5446eca893880b397804120b69
Yinghai Lu [Thu, 6 Dec 2018 07:50:12 +0000 (23:50 -0800)]
Export complete subgraph io info when calling onnxGetBackendCompatibility (#14827)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14827
We need to send complete IO info when doing `onnxGetBackendCompatibility` to a backend like Glow. Previously we were missing some info because sometimes we generate more than one node from one C2 op. This fixes the issue.
Reviewed By: jackm321
Differential Revision:
D13352049
fbshipit-source-id:
8d8ac70656a0ac42f3a0ccecad61456a4f3b2435
Huan Gui [Thu, 6 Dec 2018 06:51:23 +0000 (22:51 -0800)]
Fix clip gradient with empty input (#14709)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14709
As titled
Reviewed By: Wakeupbuddy
Differential Revision:
D13305554
fbshipit-source-id:
380062d4b0e4f9dc0207a27766cac7b8d05384d5
JerryShih [Thu, 6 Dec 2018 06:47:54 +0000 (22:47 -0800)]
Remove protobuf dependency in pytorch cmake file. (#14182)
Summary:
Currently, pytorch doesn't depend on protobuf, so we don't need to include the protobuf dir in the pytorch cmake file.
And if we build caffe2 without custom-protobuf[1], we will have a protobuf mismatch problem.
[1]
https://github.com/pytorch/pytorch/blob/
92dbd0219f6fbdb1db105386386ccf92c0758e86/CMakeLists.txt#L65
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14182
Differential Revision:
D13356273
Pulled By: ezyang
fbshipit-source-id:
8120c3452d158dc51d70156433d7b9076c6aed47
Xiang Gao [Thu, 6 Dec 2018 06:44:27 +0000 (22:44 -0800)]
Optimize images (#14084)
Summary:
This is a PR that [ImgBot](https://imgbot.net/) opened on my fork https://github.com/zasdfgbnm/pytorch/pull/1; I am forwarding it here. ImgBot does lossless compression on images to reduce file size.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14084
Differential Revision:
D13356293
Pulled By: ezyang
fbshipit-source-id:
731236d95ad870db8ccb99b03ed306704365242c
Aldian Fazrihady [Thu, 6 Dec 2018 06:31:39 +0000 (22:31 -0800)]
Prevent `profile_observer_test` from being run by CPU test (#14168)
Summary:
Fix CMakeLists.txt so the CPU tests won't run profile_observer_test.cc, as it currently only supports GPU.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14168
Differential Revision:
D13356274
Pulled By: ezyang
fbshipit-source-id:
7d105f2e18675e5fab129864958148b0f18d582c
Achal Shah [Thu, 6 Dec 2018 06:30:07 +0000 (22:30 -0800)]
CAFFE2_INCLUDE_DIRS points to invalid path (#14306)
Summary:
I know that including CAFFE2_INCLUDE_DIRS in the include headers is not necessary for newer cmakes. But I had this in one of my old projects and **cmake gave me an error that "/usr/lib/include" is an invalid path**.
It seems like "${_INSTALL_PREFIX}/lib/include" should be changed to "${_INSTALL_PREFIX}/include", as all caffe2 headers are in /include rather than /lib/include/.
Please correct me if I am wrong.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14306
Differential Revision:
D13356246
Pulled By: ezyang
fbshipit-source-id:
e2d5d3c42352e59b245714ad90fd7a9ef48170d7
HB_alon [Thu, 6 Dec 2018 06:16:44 +0000 (22:16 -0800)]
use "Extension" instead of the unimported "setuptools.Extension" (#14475)
Summary:
use "Extension" instead of the unimported "setuptools.Extension"
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14475
Differential Revision:
D13356219
Pulled By: ezyang
fbshipit-source-id:
5a3e7eb73a32d6bf09676efd9eddded5586435cd
Shuichi KITAGUCHI [Thu, 6 Dec 2018 06:07:45 +0000 (22:07 -0800)]
generate ATen core files with LF. (#14667)
Summary:
On a Windows environment, some ATen core files (Type.h, Tensor.h, TensorMethods.h) are created with CRLF newlines (maybe environment-dependent).
Therefore, the file comparison in generate_outputs() of the generator script fails and compilation stops.
This patch forcibly generates these files with LF newlines.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14667
Differential Revision:
D13356170
Pulled By: ezyang
fbshipit-source-id:
ef8cc3a6cc8bf3c45b78e9eb3df98cf47c0d33bb
Brendan Soffientini [Thu, 6 Dec 2018 05:53:36 +0000 (21:53 -0800)]
Remove outdated css file and refs in cpp conf.py (#14779)
Summary:
pytorch_theme.css is no longer necessary for the cpp or html docs site build. The new theme styles are located at https://github.com/pytorch/pytorch_sphinx_theme. The Lato font is also no longer used in the new theme.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14779
Differential Revision:
D13356125
Pulled By: ezyang
fbshipit-source-id:
c7635eb7512c7dcaddb9cad596ab3dbc96480144
vaeksare [Thu, 6 Dec 2018 05:24:58 +0000 (21:24 -0800)]
Fixes for some Windows compiler warnings (#14490)
Summary:
Implement some simple fixes to clean up the Windows build by fixing compiler warnings. Three main types of warnings were fixed:
1. GCC-specific pragmas are no longer used on Windows.
2. cmake flags that don't exist on Windows were removed from the Windows build.
3. A macro that was defined multiple times on Windows was fixed.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14490
Differential Revision:
D13241988
Pulled By: ezyang
fbshipit-source-id:
38da8354f0e3a3b9c97e33309cdda9fd23c08247
Edward Yang [Thu, 6 Dec 2018 05:14:03 +0000 (21:14 -0800)]
Shut up "address will always evaluate to 'true'" warnings (#14774)
Summary:
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14774
Differential Revision:
D13327969
Pulled By: ezyang
fbshipit-source-id:
43380c89eedaaa89467952401b8fd3f5a9ad754a
Edward Yang [Thu, 6 Dec 2018 04:50:41 +0000 (20:50 -0800)]
HIPify less files in PyTorch (#14804)
Summary:
Stacked on #14803
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14804
Differential Revision:
D13347986
Pulled By: ezyang
fbshipit-source-id:
c93177b4ad51855660d0de36d042bfc542bd4be0
Junjie Bai [Thu, 6 Dec 2018 02:35:21 +0000 (18:35 -0800)]
Unify device argument parsing between torch and c10
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14786
Differential Revision:
D13334501
Pulled By: bddppq
fbshipit-source-id:
ae3536be1fe0dcd6a1552ec93629ecc9554c0d7c
Pieter Noordhuis [Thu, 6 Dec 2018 01:18:06 +0000 (17:18 -0800)]
Improve assertion failure message (#14813)
Summary:
See #14554.
I can't figure out how the reported issue can happen. The next best
thing is to have more information when this happens again.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14813
Differential Revision:
D13351908
Pulled By: pietern
fbshipit-source-id:
61b30fcae2e34da54329d0893ca4921b6ad60f0d
Bram Wasti [Thu, 6 Dec 2018 01:16:24 +0000 (17:16 -0800)]
Add FunctionSchema based Operator Registry (#13789)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13789
This enables creation of operators with FunctionSchema and IValue
Reviewed By: smessmer
Differential Revision:
D13008791
fbshipit-source-id:
151efc88ac315f4a0ab0171a99774caaf767ef1e
Pieter Noordhuis [Thu, 6 Dec 2018 01:15:51 +0000 (17:15 -0800)]
Increase test timeout (#14814)
Summary:
It is possible that some sort of contention causes process scheduling
delays which in turn cause the timeout to *not* be hit.
The increased sleep here will decrease the probability of this happening.
Fixes #14555.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14814
Differential Revision:
D13351924
Pulled By: pietern
fbshipit-source-id:
1222cf0855408dfcb79f30f94694c790ee998cf9
Pieter Noordhuis [Thu, 6 Dec 2018 01:07:26 +0000 (17:07 -0800)]
Retry test on address already in use error (#14815)
Summary:
Thanks nairbv for the suggestion.
Also see #14589.
Fixes #14703.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14815
Differential Revision:
D13351913
Pulled By: pietern
fbshipit-source-id:
d11a4152505d0ce15592b13e417bb80551476a61
Lu Fang [Thu, 6 Dec 2018 01:04:39 +0000 (17:04 -0800)]
improve ONNX tests on torch.Linear
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14821
Reviewed By: zrphercule
Differential Revision:
D13348773
Pulled By: houseroad
fbshipit-source-id:
611ca6e28f715e5518649c8c16f702ac3433308c
Lin Huang [Wed, 5 Dec 2018 21:12:37 +0000 (13:12 -0800)]
Define THPStorage struct only once (rather than N times) (#14802)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14802
The definition of THPStorage does not depend on any Real; its macro
definition is unnecessary. Refactor the code so that THPStorage is not
macro-defined.
Reviewed By: ezyang
Differential Revision:
D13340445
fbshipit-source-id:
343393d0a36c868b9a06eea2ad9b80f5e395e947
Daya S Khudia [Wed, 5 Dec 2018 21:09:55 +0000 (13:09 -0800)]
File name change for FbgemmI8Depthwise.h and FbgemmI8Depthwise.cc (#14725)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14725
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/33
Renaming FbgemmI8Depthwise.h to FbgemmI8DepthwiseAvx2.h and FbgemmI8Depthwise.cc to FbgemmI8DepthwiseAvx2.cc since FbgemmI8DepthwiseAvx2.cc will be compiled with avx2 flags
Reviewed By: jianyuh
Differential Revision:
D13313898
fbshipit-source-id:
a8111eacf3d79a466ce0565bfe5f2f0b200a5c33
zrphercule [Wed, 5 Dec 2018 20:59:44 +0000 (12:59 -0800)]
Add torch.nn.RReLU support in symbolic (#14781)
Summary:
Now we support exporting torch.nn.RReLU to ONNX.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14781
Reviewed By: houseroad
Differential Revision:
D13343872
Pulled By: zrphercule
fbshipit-source-id:
1e96b957de4fc2f5ba3959d42329807975419ae3
Daya S Khudia [Wed, 5 Dec 2018 19:50:57 +0000 (11:50 -0800)]
Move avx2 specific code in different source files (#28)
Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/28
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14516
This is the first diff in a series of diffs that will separate out avx2 specific code in separate files. The goal is to compile as little as possible code with avx2 and avx512 compiler flags.
Reviewed By: jianyuh
Differential Revision:
D13248376
fbshipit-source-id:
401c2e9d3cd96c420fd08c3efa011febce96ffbb
Marat Dukhan [Wed, 5 Dec 2018 19:39:46 +0000 (11:39 -0800)]
Validate matching input shapes in Int8Add operator (#14520)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14520
The default engine doesn't support broadcast semantics in the Int8Add operator. This patch adds a check that the shapes are equivalent.
Reviewed By: bertmaher
Differential Revision:
D13250922
fbshipit-source-id:
8526d07723bd9a34d54dee04d121c57f8b33c481
Tongzhou Wang [Wed, 5 Dec 2018 19:21:19 +0000 (11:21 -0800)]
fix stft arg types
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14800
Reviewed By: zou3519
Differential Revision:
D13340574
Pulled By: SsnL
fbshipit-source-id:
8b0dbbe299d1a362da0ecc0b1c0dadb2543ded5d
Edward Yang [Wed, 5 Dec 2018 18:57:00 +0000 (10:57 -0800)]
Improve HIPify performance (#14803)
Summary:
```
Improve performance of pyHIPIFY
Changes:
- Pre-compile regexes, don't use regexes when it's not necessary
(this saves us ~15%)
- Compile all substitutions for mappings into a single, non-backtracking
regex using a Trie. This gives big savings.
Before, running pyHIPIFY on all files took 15.8s. Now it takes 3.9s.
```
Stacked on #14769
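For flavor, a simplified sketch of the single-regex idea (a plain alternation built from the mapping keys; the PR builds a proper Trie, which additionally avoids backtracking):
```
import re

mappings = {'cudaMalloc': 'hipMalloc', 'cudaFree': 'hipFree',
            'cudaMemcpyAsync': 'hipMemcpyAsync', 'cudaMemcpy': 'hipMemcpy'}
# one pre-compiled pattern instead of one regex per key; longer keys first
# so 'cudaMemcpyAsync' is not clobbered by the shorter 'cudaMemcpy'
pattern = re.compile('|'.join(
    sorted(map(re.escape, mappings), key=len, reverse=True)))

def hipify(source):
    return pattern.sub(lambda m: mappings[m.group(0)], source)

print(hipify('cudaMalloc(&p, n); cudaMemcpyAsync(d, s, n, k);'))
```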
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14803
Differential Revision:
D13342620
Pulled By: ezyang
fbshipit-source-id:
1cfa36b3236bbe24d07080a31cc788a52d740f40
Ailing Zhang [Wed, 5 Dec 2018 18:52:39 +0000 (10:52 -0800)]
Fix cuda multiprocessing cached memory (#14736)
Summary:
This PR fixes #11422
In the old world of CUDA IPC, when we wanted to share a tensor T from A to B, we had to share the whole CUDA memory allocation that T's storage sits in, and we cast it to the same storage type as T's.
This caused a problem when two different types of storage were allocated in the same CUDA memory block: when we tried to reconstruct the second tensor, it would complain about the wrong storage type.
In this PR we reconstruct only the storage (not the entire memory block). However, since CUDA only allows a memHandle to be opened once per process, we save the device pointer in a global cache so that we can reconstruct tensors as they come.
Thanks a ton to ezyang who helped design the solution and debugged the issue!
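For context, this is the kind of cross-process sharing that exercises the CUDA IPC path (a minimal sketch, assuming a CUDA device is available):
```
import torch
import torch.multiprocessing as mp

def consumer(q):
    t = q.get()                      # storage is reconstructed in this process
    print(t.device, t.sum().item())

if __name__ == '__main__':
    mp.set_start_method('spawn')
    q = mp.Queue()
    p = mp.Process(target=consumer, args=(q,))
    p.start()
    q.put(torch.ones(4, device='cuda'))
    p.join()
```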
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14736
Differential Revision:
D13335899
Pulled By: ailzhang
fbshipit-source-id:
cad69db392ed6f8fdc2b93a9dc2899f6d378c371
Peter Goldsborough [Wed, 5 Dec 2018 18:18:20 +0000 (10:18 -0800)]
Set and get default dtype (#13748)
Summary:
Replaces the `DefaultTensorOptions` with just a global default dtype that you can set and get like in Python.
Also, calls `set_default_dtype` in the implementation of `torch.set_default_dtype`. Right now these two default values are separate but will always be the same. Should we just bind `set_default_dtype` into Python? I think that might be good to do in a separate PR though.
ezyang gchanan
Also CC colesbury who wanted to do this for ATen for a while? What do you think about it?
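From Python, the mirrored behavior looks like this (a minimal sketch):
```
import torch

torch.set_default_dtype(torch.float64)
torch.tensor([1.0]).dtype                # torch.float64
torch.get_default_dtype()                # torch.float64
torch.set_default_dtype(torch.float32)   # restore the usual default
```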
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13748
Differential Revision:
D13340207
Pulled By: goldsborough
fbshipit-source-id:
2689b09eb137fabb3a92d1ad1635782bee9398e8
Marat Dukhan [Wed, 5 Dec 2018 18:10:32 +0000 (10:10 -0800)]
Switch Int8AveragePool operator to QNNPACK (#14783)
Summary:
2.2-2.9X better performance on ARM when compiled with gcc (same bad perf when compiled with Clang)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14783
Differential Revision:
D13332680
Pulled By: Maratyszcza
fbshipit-source-id:
4c1138500c6b3026335e9bfe5f6be43b1ae2cefb
peterjc123 [Wed, 5 Dec 2018 17:50:41 +0000 (09:50 -0800)]
Update magma to 2.4.0 for Windows
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14738
Differential Revision:
D13341611
Pulled By: soumith
fbshipit-source-id:
39a49fc60e710cc32a463858c9cee57c182330e2
Edward Yang [Wed, 5 Dec 2018 17:21:13 +0000 (09:21 -0800)]
Unify build_caffe2_amd.py and build_pytorch_amd.py (#14769)
Summary:
I need to preserve the ability to HIPify out-of-place files
only, so build_amd.py grows a --out-of-place-only flag.
Stacked on #14757
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14769
Differential Revision:
D13340154
Pulled By: ezyang
fbshipit-source-id:
1b855bc79e824ea94517a893236fd2c8ba4cb79d
Ilia Cherniavskii [Wed, 5 Dec 2018 16:40:54 +0000 (08:40 -0800)]
Default pool() option (#14636)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14636
Add a default CPU option for the pool()
Reviewed By: andrewwdye
Differential Revision:
D13281367
fbshipit-source-id:
92dbfce89c900a41731b6d1ff62bb97886c40f77
Francisco Massa [Wed, 5 Dec 2018 16:27:00 +0000 (08:27 -0800)]
Storage.clone maintains original device (#14751)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/14673
As pointed out by vishwakftw, the root cause of the `deepcopy` issue was that `storage.clone()` would create a new storage on the default device.
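A minimal repro of the fixed behavior (a sketch, assuming a CUDA device):
```
import copy
import torch

t = torch.randn(2, device='cuda')
# after this fix the copy stays on cuda:0 instead of landing on the default device
copy.deepcopy(t).device
```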
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14751
Reviewed By: soumith
Differential Revision:
D13323061
Pulled By: fmassa
fbshipit-source-id:
bfe46ebd78f0b6cd9518c11d09de7849282ed2a2
svcscm [Wed, 5 Dec 2018 14:24:49 +0000 (06:24 -0800)]
Updating submodules
Reviewed By: yns88
fbshipit-source-id:
080e0034bd6353420383ac7b476af5a35eaba7c3
svcscm [Wed, 5 Dec 2018 10:53:49 +0000 (02:53 -0800)]
Updating submodules
Reviewed By: yns88
fbshipit-source-id:
e397238c7c477c4268e2dc89e530776fc89f18f8
Jongsoo Park [Wed, 5 Dec 2018 08:49:01 +0000 (00:49 -0800)]
include avx512vl to avx512 code path (#14733)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14733
We often also want to use the AVX512VL instruction set.
We already included AVX512F and AVX512DQ.
Skylake also has AVX512BW and AVX512CD, which we may want to include later.
Reviewed By: duc0
Differential Revision:
D13317282
fbshipit-source-id:
82c8e401d82d5c3a5452fb4ccb6e5cb88d242bda
Adam Paszke [Wed, 5 Dec 2018 08:07:51 +0000 (00:07 -0800)]
Use AT_WARN for warnings in the JIT (#14770)
Summary:
Previously their implementation dispatched to prim::Print, which kept
printing the warnings.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14770
Differential Revision:
D13327629
Pulled By: suo
fbshipit-source-id:
b9913f533d4530eb7c29146c39981ba7f72b6b68
Yinghai Lu [Wed, 5 Dec 2018 05:50:41 +0000 (21:50 -0800)]
Add output info when doing onnxGetBackendCompatibility (#14784)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14784
TSIA. To give more complete info to `onnxGetBackendCompatibility`.
Reviewed By: bertmaher, rdzhabarov
Differential Revision:
D13331989
fbshipit-source-id:
1064b93f7f474788f736e6f0c893dae915c6fb99
Adam Paszke [Wed, 5 Dec 2018 05:35:48 +0000 (21:35 -0800)]
Don't DCE PythonOp
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14773
Reviewed By: eellison
Differential Revision:
D13327673
Pulled By: suo
fbshipit-source-id:
236db3407c7eacac470530836e3d4d0dc323110c
Adam Paszke [Wed, 5 Dec 2018 04:35:51 +0000 (20:35 -0800)]
Improvements for symbolic AD (#14758)
Summary:
**Review only the last commit.**
This commit adds a few optimizations to AD that let us dramatically
reduce the number of sizes we capture from the forward pass.
We now:
- collapse chains of SumToSize
- avoid capturing sizes of tensors that are captured anyway
- more aggressively DCE the reverse code
- run CSE on the primal code to deduplicate `aten::size` calls
cc zou3519 zdevito
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14758
Differential Revision:
D13324440
Pulled By: zou3519
fbshipit-source-id:
45ccbc13605adcef2b461840c6089d3200000c72
Ailing Zhang [Wed, 5 Dec 2018 04:23:25 +0000 (20:23 -0800)]
Revert
D13289919: [pytorch][PR] [DataLoader] Refactor dataloader.py
Differential Revision:
D13289919
Original commit changeset:
d701bc7bb48f
fbshipit-source-id:
c350c491fefa98a0a7c0cf22cb832e78aeb15c3d
Edward Yang [Wed, 5 Dec 2018 04:07:49 +0000 (20:07 -0800)]
Delete defunct files from torch/csrc/distributed (#14785)
Summary:
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14785
Differential Revision:
D13333066
Pulled By: ezyang
fbshipit-source-id:
e7937b4e8e12409b0fa964c34f995f7861ca95ff
Elias Ellison [Wed, 5 Dec 2018 03:52:07 +0000 (19:52 -0800)]
support conv transpose in script
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14775
Differential Revision:
D13330491
Pulled By: eellison
fbshipit-source-id:
432b327d6a33517ff53ea33c9f64700e81432332
Teng Li [Wed, 5 Dec 2018 03:20:08 +0000 (19:20 -0800)]
Making dist.get_default_group private for PT1 release (#14767)
Summary:
When I wrote the frontend API, it was designed around not letting users use the default_group directly in any function. It should really be private.
All collectives are supposed to use either group.WORLD or anything that comes out of new_group. That was the initial design.
We need to add a TODO on removing group.WORLD one day. It exists for backward-compatibility reasons and adds lots of complexity.
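The intended public surface, sketched (assuming an initialized process group):
```
import torch
import torch.distributed as dist

# dist.init_process_group(...) has already been called elsewhere
t = torch.ones(4)
dist.all_reduce(t, group=dist.group.WORLD)   # the public default group
sub = dist.new_group(ranks=[0, 1])           # or an explicit subgroup
dist.all_reduce(t, group=sub)
```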
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14767
Reviewed By: pietern
Differential Revision:
D13330655
Pulled By: teng-li
fbshipit-source-id:
ace107e1c3a9b3910a300b22815a9e8096fafb1c
Andy Chen [Wed, 5 Dec 2018 02:45:45 +0000 (18:45 -0800)]
Make checkpoint_sequential work with multiple arguments (#14278)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14278
In this commit, we make checkpoint_sequential work for models with multiple tensor inputs. Previously, it only processed the first tensor and ignored the rest.
We introduce a new test in test/test_utils.py that replicates the issue referenced in this [GitHub issue](https://github.com/pytorch/pytorch/issues/11093), and we make sure that the test passes by changing the behavior of checkpoint_sequential to process all input tensors.
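Basic usage, for reference (a minimal single-input sketch; this change extends the same call to models whose forward takes several tensors):
```
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint_sequential

model = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 8))
x = torch.randn(4, 8, requires_grad=True)
out = checkpoint_sequential(model, 2, x)   # run as 2 checkpointed segments
out.sum().backward()
```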
Reviewed By: ezyang
Differential Revision:
D13144672
fbshipit-source-id:
24f58233a65a0f5b80b89c8d8cbced6f814004f7
Lu Fang [Wed, 5 Dec 2018 02:35:46 +0000 (18:35 -0800)]
update of fbcode/onnx to
42804705bdbf179d1a98394008417e1392013547 (#14777)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14777
Previous import was
6b34743d2e361bbc0acb29dd73536478cb92562e
Included changes:
- **[4280470](https://github.com/onnx/onnx/commit/4280470)**: Changes done internally at Facebook (#1668) <Lu Fang>
- **[f85221f](https://github.com/onnx/onnx/commit/f85221f)**: Fuse MatMul and Add into Gemm (#1542) <vloncar>
- **[022230e](https://github.com/onnx/onnx/commit/022230e)**: Replace np.long by np.int64 (#1664) <G. Ramalingam>
- **[0ab3c95](https://github.com/onnx/onnx/commit/0ab3c95)**: Infer shape from data in Constant nodes (#1667) <Shinichiro Hamaji>
Reviewed By: bddppq
Differential Revision:
D13330082
fbshipit-source-id:
13cf328626cf872d0983bbd2154d95c45da70f1c
David Riazati [Wed, 5 Dec 2018 02:32:05 +0000 (18:32 -0800)]
Enable testing on Loss modules (#14778)
Summary:
This PR adds `None` buffers as parameters (similarly to #14715). It also cleans up a bunch of the `test_jit.py` tests that should be covered by `common_nn.py` and brings in `criterion_tests` to test loss functions.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14778
Differential Revision:
D13330849
Pulled By: driazati
fbshipit-source-id:
924cc4cf94e0dcd11e811a55222fd2ebc42a9e76
Wanchao Liang [Wed, 5 Dec 2018 02:15:14 +0000 (18:15 -0800)]
Add tests for dropout/batchnorm train/eval, remove training constants (#14780)
Summary:
This PR:
1. add tests for batchnorm/dropout train/eval parameter mutation
2. remove training constants from all our standard library
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14780
Differential Revision:
D13331578
Pulled By: wanchaol
fbshipit-source-id:
d92ca3ce38cc2888688d50fe015e3e22539a20a5
Gregory Chanan [Wed, 5 Dec 2018 01:48:25 +0000 (17:48 -0800)]
Split LegacyDeviceTypeInit from LegacyTypeDispatch. (#14723)
Summary:
The goal here is to have LegacyTHDispatch call into this as well, so LegacyTypeDispatch and LegacyTHDispatch don't have cross dependencies.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14723
Reviewed By: ezyang
Differential Revision:
D13314017
Pulled By: gchanan
fbshipit-source-id:
8761cb4af2b2269d2e755203e073bfdba535b8c0
Michael Suo [Tue, 4 Dec 2018 23:42:22 +0000 (15:42 -0800)]
don't allow cse to clean up nondeterministic nodes
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14776
Differential Revision:
D13330229
Pulled By: suo
fbshipit-source-id:
6bc88811e1889949f0f079cffccd8cd4270584cc
Adam Paszke [Tue, 4 Dec 2018 23:40:41 +0000 (15:40 -0800)]
Reenable all forward-pass fusions that worked before the AD fix (#14558)
Summary:
Dealing with so many `aten::size` calls (in particular calls on elements computed inside fusion groups) requires us to do some extra graph processing in the fuser (to compute the sizes by explicit broadcasts, instead of writing the intermediate tensors only to check their size). This restores the forward expects of LSTM and MiLSTM to a single big kernel. Unfortunately the backward is much harder, because as long as we can't prove that the reductions are unnecessary (or if we can't distribute them over the op), we will not be able to fuse them.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14558
Differential Revision:
D13321748
Pulled By: zou3519
fbshipit-source-id:
c04fc2f70d106d2bfb56206b5aec517a93b79d1f
David Riazati [Tue, 4 Dec 2018 23:09:30 +0000 (15:09 -0800)]
BatchNorm support not tracking stats
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14764
Differential Revision:
D13325800
Pulled By: driazati
fbshipit-source-id:
a3e4773dc31b83565e7a4de33614d6efd4a12de9
Lu Fang [Tue, 4 Dec 2018 22:48:56 +0000 (14:48 -0800)]
Minor doc change in c10/Device.h (#14762)
Summary:
Make sure it's a valid regex.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14762
Reviewed By: zrphercule
Differential Revision:
D13326108
Pulled By: houseroad
fbshipit-source-id:
fdcae2d5d42774c4071651b7477f08047d385dfa
Gregory Chanan [Tue, 4 Dec 2018 22:41:03 +0000 (14:41 -0800)]
Introduce LegacyTHDispatcher for dispatching to TH functions. (#14754)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14754
This isn't hooked up to anything yet, this is just putting the skeleton in place.
The idea here is that the functions generated via Declarations.cwrap and nn.yaml are not actually operators, they are implementation details of operators, and thus don't need to participate in VariableType, JIT dispatch generation.
So, we will split these functions out from the usual Type/operator hierarchy; for now the dispatch will be done by a Type-like class called LegacyTHDispatcher. Once this is done this probably means we can collapse Type to be backend-specific, not Type/ScalarType specific, because all the ScalarType specific code will live in the LegacyTHDispatcher.
Reviewed By: ezyang
Differential Revision:
D13321605
fbshipit-source-id:
25d1bbc9827a42d6ab5d69aabbad3eac72bf364c
Michael Suo [Tue, 4 Dec 2018 22:28:10 +0000 (14:28 -0800)]
disable batch mm if we have mutable ops (#14771)
Summary:
Just to be safe, disable batch mm for mutable ops. We don't lose much by doing this, and we can go back at a calmer time to re-enable it.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14771
Reviewed By: eellison
Differential Revision:
D13327641
Pulled By: suo
fbshipit-source-id:
96611e21ed3cb8492a2cd040f7d33fb58c52bd5e
Chandler Zuo [Tue, 4 Dec 2018 22:23:22 +0000 (14:23 -0800)]
Replace at::Half non-vectorized conversions with implementations from FP16 (#14411)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14411
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14579
Folded the fp16 codes into c10.
Reviewed By: ezyang
Differential Revision:
D13206450
fbshipit-source-id:
472208dd230dc49d33935622ff3286b17eeb0894
Thomas Viehmann [Tue, 4 Dec 2018 21:58:31 +0000 (13:58 -0800)]
Use .to to convert new tensors in new_tensor (#14097)
Summary:
This would solve the tracing problems of #13969.
Fixes: #14732
I would appreciate if this got good scrutiny before applied.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14097
Differential Revision:
D13323181
Pulled By: ezyang
fbshipit-source-id:
dcd104b497c0bfddb751923c6166a3824b7a3702
Zeming Lin [Tue, 4 Dec 2018 21:43:28 +0000 (13:43 -0800)]
Export generator constructor (#14041)
Summary:
Missed a spot :)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14041
Reviewed By: ezyang
Differential Revision:
D13283803
Pulled By: ebetica
fbshipit-source-id:
482e245f57b0cea6ca3886355ea3ae487d024d4b
Zeming Lin [Tue, 4 Dec 2018 21:42:11 +0000 (13:42 -0800)]
c10d doesn't work with torch namespace (#14042)
Summary:
If both `Utils.hpp` and the `torch` namespace are included in the same file, the compiler won't know which fmap to use. I believe this is because of ADL. This change fixes that issue for me.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14042
Reviewed By: ezyang
Differential Revision:
D13283810
Pulled By: ebetica
fbshipit-source-id:
b68233336518230ba730e83ddac1226a66896533
Wanchao Liang [Tue, 4 Dec 2018 21:40:11 +0000 (13:40 -0800)]
Add resnet test, convert more modules (#14437)
Summary:
This PR adds resnet to test_jit and converts more nn modules; stacked on #14533 and #14715.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14437
Differential Revision:
D13325871
Pulled By: wanchaol
fbshipit-source-id:
6c94a988b36794a373af6541c0c262a07291f7b1
David Riazati [Tue, 4 Dec 2018 21:33:41 +0000 (13:33 -0800)]
Add missing test skip
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14763
Differential Revision:
D13325350
Pulled By: driazati
fbshipit-source-id:
4d64a7616b227983c2fc2748c5fbecd1bcbff832
Peter Goldsborough [Tue, 4 Dec 2018 21:17:17 +0000 (13:17 -0800)]
Rename _local_scalar to item() (#13676)
Summary:
Make `at::_local_scalar` more "official" by renaming it to `item()`.
gchanan
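Usage, for reference (a minimal sketch):
```
import torch

torch.tensor(3.5).item()   # 3.5, a plain Python float
torch.tensor([7]).item()   # 7, a plain Python int
```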
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13676
Differential Revision:
D13003020
Pulled By: goldsborough
fbshipit-source-id:
0ac25f5237fb81a1576304a0a02f840ff44168a4
Edward Yang [Tue, 4 Dec 2018 20:46:42 +0000 (12:46 -0800)]
Remove use of hipify_caffe2, in favor of file path test. (#14757)
Summary:
This is towards unifying build_pytorch_amd.py and build_caffe2_amd.py
scripts. There is only one use of hipify_caffe2 left, which is just
to control which files actually get HIPified.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14757
Differential Revision:
D13323486
Pulled By: ezyang
fbshipit-source-id:
958cd91be32dfc3c0a9ba9eda507adb5937aebcd
Jerry Zhang [Tue, 4 Dec 2018 20:42:32 +0000 (12:42 -0800)]
Add inplace FeedTensor for python frontend (#14512)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14512
As titled.
Reviewed By: dzhulgakov
Differential Revision:
D13243278
fbshipit-source-id:
78af417d0fcd9b9791ee839d62095903e49205cb
Elias Ellison [Tue, 4 Dec 2018 20:27:22 +0000 (12:27 -0800)]
Loss (#14720)
Summary:
Adding Loss modules to script. Some of the modules have an optional tensor parameter. I will wait until wanchao's diff supporting optional tensors has landed before landing this.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14720
Differential Revision:
D13317990
Pulled By: eellison
fbshipit-source-id:
535925bdf126d28d9e7d64077b83ebd836a5beba
Ailing Zhang [Tue, 4 Dec 2018 20:21:17 +0000 (12:21 -0800)]
Add new reduction mode in kl_div (#14457)
Summary:
Fixes #6622.
We used to average over all elements for KL divergence, which is not aligned with its math definition.
This PR corrects the default reduction behavior of KL divergence so that it now averages over the batch dimension.
- In KL, the default behavior `reduction=mean` averages over the batch dimension, while for most other loss functions `reduction=mean` averages over all elements.
- We used to support scalar tensors as well. For BC purposes we still support them; no reduction is performed on a scalar tensor.
- Added a new reduction mode called `batchmean` which has the correct behavior for KL (see the sketch below). A warning was added to make `batchmean` the default for KL instead of `mean` in the next major release.
- [deprecated] I chose not to add a new reduction option, since "mean over batch dimension" is kind of special and only makes sense in a few cases like KL. We don't want to explain why there's an option "batchmean" that isn't applicable to all other functions. I'm open to discussion on this one, as I cannot think of a perfect solution.
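A minimal sketch of the new mode (illustrative, using `torch.nn.functional`):
```
import torch
import torch.nn.functional as F

log_probs = F.log_softmax(torch.randn(3, 5), dim=1)
target = F.softmax(torch.randn(3, 5), dim=1)
# 'batchmean' sums the pointwise KL terms, then divides by the batch size (3)
loss = F.kl_div(log_probs, target, reduction='batchmean')
```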
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14457
Differential Revision:
D13236016
Pulled By: ailzhang
fbshipit-source-id:
905cc7b3bfc35a11d7cf098b1ebc382170a087a7
Michael Antonov [Tue, 4 Dec 2018 19:42:43 +0000 (11:42 -0800)]
Implements Gather operator for arbitrary axis, sharing the code with BatchGather. (#13756)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13756
This implements a general Gather operator for arbitrary axis, sharing the code with BatchGather (see the NumPy sketch below for the axis semantics).
- CPU gather & batch gather logic is now shared through caffe2::gather_helper, for any axis.
- Shared CUDA kernel moved to gather_op.cuh, for any axis.
- Gradients of axis > 0 delegate to BatchGatherGradientOp which now has axis argument.
- BatchGatherOp doc strings updated to have correct rank (q + (r -1)) and output.
- Added tests for axis == 2.
GatherOp supports index wrapping for axis == 0 by default, which was earlier done for ONNX.
This diff also extends it to work in the CUDA kernel. Added a "wrap_indices" argument which specifies
whether this wrapping should be done; set it to true if you'd like wrapping for any axis.
TBD: Update gradients to support negative indices (separate diff).
TBD: Once we have operator versioning, we'd like to update GatherOp to NOT support axis 0 wrapping
by default, but rather do it only if wrap_indices is set.
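For illustration, the axis semantics mirror NumPy's take (a sketch of the math, not the operator's Python API):
```
import numpy as np

data = np.arange(24).reshape(2, 3, 4)   # rank r = 3
idx = np.array([2, 0])                  # rank q = 1
# gathering along axis 1 picks slices of the second dimension;
# the output rank is q + (r - 1) = 3, here with shape (2, 2, 4)
out = np.take(data, idx, axis=1)
```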
Reviewed By: dzhulgakov
Differential Revision:
D12983815
fbshipit-source-id:
8add9d67b47fe8c5ba7a335f581ca0530b205cd7
SsnL [Tue, 4 Dec 2018 17:51:25 +0000 (09:51 -0800)]
Refactor dataloader.py (#14668)
Summary:
As I am working on tasks in https://github.com/pytorch/pytorch/issues/13023, I realized how unreadable the code is because all functions to be run in multiprocessing must be at top global level. Adding more functionalities to `dataloader.py` will only make things worse.
So in this PR, I refactor `dataloader.py` and move much of it into `data._utils`. E.g., the `_worker_loop` and related methods are now in `data._utils.worker`, signal handling code in `data._utils.signal_handling`, collating code in `data._utils.collate`, etc. This split, IMHO, makes code much clearer. I will base my future changes to DataLoader on top of this.
No functionality is changed, except that I added `torch._six.queue`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14668
Reviewed By: soumith
Differential Revision:
D13289919
Pulled By: ailzhang
fbshipit-source-id:
d701bc7bb48f5dd7b163b5be941a9d27eb277a4c
Sebastian Messmer [Tue, 4 Dec 2018 16:55:15 +0000 (08:55 -0800)]
Back out "Move TensorOptions, DefaultTensorOptions and OptionsGuard to c10" (#14745)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14745
Original commit changeset:
c62e7f9b0255
Reviewed By: suo
Differential Revision:
D13318594
fbshipit-source-id:
4d7dc35ca01b627accc3ee512bfcd6f2e805a533
Sebastian Messmer [Tue, 4 Dec 2018 16:55:15 +0000 (08:55 -0800)]
Back out "Fix include paths for TensorOptions, DefaultTensorOptions, OptionsGuard" (#14744)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14744
Original commit changeset:
d236d5351ecf
Reviewed By: suo
Differential Revision:
D13318596
fbshipit-source-id:
55f1e9472d05fb5a9c47dc82c32e9a66b5e4308c
Adam Paszke [Tue, 4 Dec 2018 16:53:38 +0000 (08:53 -0800)]
Disable randn_like fusion in the JIT (#14752)
Summary:
Fixes #14674. We won't have time for a proper fix before the release, so at least disable fusion of nodes that trigger incorrect behavior.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14752
Differential Revision:
D13320407
Pulled By: zou3519
fbshipit-source-id:
2400f7c2cd332b957c248e755fdb0dadee68da5d
Ailing Zhang [Tue, 4 Dec 2018 16:34:04 +0000 (08:34 -0800)]
fix import failure in hub test (#14742)
Summary:
Fixes #14610.
I can repro the test failure following the steps provided, and this fixes the issue for me. It seems the insertion has to happen after the download finishes.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14742
Differential Revision:
D13318533
Pulled By: ailzhang
fbshipit-source-id:
b9207b4572d5a9443e516d9a84632e3d7b68e477
Edward Yang [Tue, 4 Dec 2018 15:56:42 +0000 (07:56 -0800)]
Revert
D13304654: [pytorch][PR] Introduce LegacyTHDispatcher for dispatching to TH functions.
Differential Revision:
D13304654
Original commit changeset:
cfe3e1a28adc
fbshipit-source-id:
06669d3c88f83e1d959e2c266fd608316539d42a
Gregory Chanan [Tue, 4 Dec 2018 15:39:09 +0000 (07:39 -0800)]
Introduce LegacyTHDispatcher for dispatching to TH functions. (#14708)
Summary:
This isn't hooked up to anything yet, this is just putting the skeleton in place.
The idea here is that the functions generated via Declarations.cwrap and nn.yaml are not actually operators, they are implementation details of operators, and thus don't need to participate in VariableType, JIT dispatch generation.
So, we will split these functions out from the usual Type/operator hierarchy; for now the dispatch will be done by a Type-like class called LegacyTHDispatcher. Once this is done this probably means we can collapse Type to be backend-specific, not Type/ScalarType specific, because all the ScalarType specific code will live in the LegacyTHDispatcher.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14708
Reviewed By: ezyang
Differential Revision:
D13304654
Pulled By: gchanan
fbshipit-source-id:
cfe3e1a28adcc355f67fe143495ee7e5c5118606
Zachary DeVito [Tue, 4 Dec 2018 15:30:13 +0000 (07:30 -0800)]
add .code property to ScriptModule (#14735)
Summary:
simple change to allow `print(foo.code)` to give a pretty-printed description of all the methods on a module.
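For example (a sketch using the ScriptModule API of this release; the exact printed text may differ):
```
import torch

class Foo(torch.jit.ScriptModule):
    @torch.jit.script_method
    def forward(self, x):
        return x + 1

print(Foo().code)   # pretty-printed TorchScript source of forward
```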
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14735
Differential Revision:
D13317619
Pulled By: zdevito
fbshipit-source-id:
dc7f7ba12ba070f2dfccf362995c2a9e0e573cb7
Richard Zou [Tue, 4 Dec 2018 15:02:01 +0000 (07:02 -0800)]
Fix clamp when min/max are both None (#14716)
Summary:
Before this PR, tensor.clamp() would return an empty tensor if min and
max were not specified. This is a regression from 0.4.1, which would
throw an error. This PR restores that error message.
Fixes #14470
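The restored behavior, sketched:
```
import torch

x = torch.randn(3)
x.clamp(min=0.0)   # ok: clip from below
x.clamp(max=1.0)   # ok: clip from above
# x.clamp()        # RuntimeError again: at least one of min/max must be given
```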
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14716
Differential Revision:
D13311031
Pulled By: zou3519
fbshipit-source-id:
87894db582d5749eaccfc22ba06aac4e10983880
Lu Fang [Tue, 4 Dec 2018 08:44:43 +0000 (00:44 -0800)]
Restore device in cpp API (#14711)
Summary:
This is a stack PR based on https://github.com/pytorch/pytorch/pull/14454.
It enables restoring the storage to the appropriate device.
~~[TODO]: add/modify appropriate tests~~ Done
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14711
Reviewed By: dzhulgakov
Differential Revision:
D13315746
Pulled By: houseroad
fbshipit-source-id:
fe6f24a45c35e88fd1a2eebc09950d4430fac185
Katherin Yu [Tue, 4 Dec 2018 08:40:53 +0000 (00:40 -0800)]
move structs to header file (#14728)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14728
Move IndexBlob and Index to a header file so they can be reused.
Differential Revision:
D13315898
fbshipit-source-id:
34432c9b8fa08af3d3387f32a940d35b02a59760
Lu Fang [Tue, 4 Dec 2018 08:30:46 +0000 (00:30 -0800)]
improve the restore device test, and relax the assertion (#14734)
Summary:
Only compare the device index if the device has one.
Test the tensor restore with some computation.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14734
Reviewed By: dzhulgakov
Differential Revision:
D13317949
Pulled By: houseroad
fbshipit-source-id:
26b2f2912a9bbc3b660a62283fb403ddab437e49
Adam Paszke [Tue, 4 Dec 2018 08:13:24 +0000 (00:13 -0800)]
Reduce broadcasted inputs in derivative code (#14485)
Summary:
Previously symbolic AD formulas assumed that no broadcasting happened,
and would return gradients of incorrect shapes (possibly leading to
silent errors later).
Fixes a few bugs (known and unknown):
- #11736
- ArgumentSpec didn't compute the input types correctly [(it didn't advance the offset for non-tensor args)](https://github.com/pytorch/pytorch/pull/14485/files#diff-4fd3157a056596aefb8cdf41022a208bR153)
- Symbolic AD could suffer from use after free (dangling pointers in grad map), because [`EliminateDeadCode` could have removed nodes](https://github.com/pytorch/pytorch/pull/14485/files#diff-25d33ad1ed6855684dec79d927ca6142L781) that referenced gradients of certain values.
- Undefined behavior in `aten::size`
During my tests I've also found a few new problems, and I have opened issues for them:
- FusionGroup seems to think that cat nodes broadcast their inputs (#14483)
- `prim::ConstantChunk` derivative formula doesn't handle undefined inputs (#14484)
This patch unfortunately deoptimizes some of our code (Fusion doesn't happen past chunk nodes, and outputs more tensors only because we have to get their size). I know how to fix those issues, but wanted to fix this terrible bug quickly.
cc zou3519 zdevito ngimel
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14485
Reviewed By: eellison
Differential Revision:
D13312888
Pulled By: suo
fbshipit-source-id:
ad46bfb4d0a306ad9451002f8270f7a790f72d58
Elias Ellison [Tue, 4 Dec 2018 07:59:36 +0000 (23:59 -0800)]
interpolate (#14123)
Summary:
Add support for interpolate and upsampling in weak_script mode.
Because the function parameters are overloaded, I had to add it as a builtin op. For interpolate:
size can be ?int | int[]?, and scale_factor can be ?float | float[]?. Every combination of the two parameters needs to be supported.
The same logic applies for upsample_nearest, upsample_bilinear, and upsample.
There are a few fixes that I made along the way.
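The supported call patterns, sketched from Python:
```
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)
F.interpolate(x, size=16)                  # int size
F.interpolate(x, size=(16, 16))            # int[] size
F.interpolate(x, scale_factor=2.0)         # float scale factor
F.interpolate(x, scale_factor=(2.0, 1.5))  # float[] scale factor
```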
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14123
Differential Revision:
D13278923
Pulled By: eellison
fbshipit-source-id:
e59729034369be4ce4b747291a3d1c74e135b869
David Riazati [Tue, 4 Dec 2018 07:49:39 +0000 (23:49 -0800)]
Add Pooling modules to Script (#14527)
Summary:
Depends on #14584
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14527
Differential Revision:
D13270773
Pulled By: driazati
fbshipit-source-id:
e4acd43ccbce0f4b62d41c30ce8d5c721171e19a
David Riazati [Tue, 4 Dec 2018 07:46:07 +0000 (23:46 -0800)]
Add fractional_max_pool2d to standard lib
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14591
Differential Revision:
D13270755
Pulled By: driazati
fbshipit-source-id:
138a60256795f5ef8d236c75be2cfd929059b98f