Mingzhe Li [Tue, 16 Apr 2019 15:47:25 +0000 (08:47 -0700)]
calculate execution time based on final iterations (#19299)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19299
I saw larger-than-5% performance variation with small operators; this diff aims to reduce the variation by avoiding Python overhead. Previously, the benchmark ran the main loop for 100 iterations and then looked at the time. If the result was not significant, we doubled the number of iterations, reran, and looked at the result again, continuing this process until it became significant. We calculated the time as total_time / number_of_iterations. The issue is that this includes the Python trigger overhead of multiple runs.
Now the logic calculates execution time based on the last run instead of all runs: time_in_last_run / number_of_iterations.
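A minimal sketch of that strategy (hypothetical names such as `min_time_sec`; not the benchmark's actual code):
```python
import time

def benchmark(op, min_time_sec=0.1):
    # Keep doubling the iteration count until a run is long enough to be
    # significant, then report only the final run's per-iteration time,
    # excluding the Python dispatch overhead of the earlier runs.
    iters = 100
    while True:
        start = time.time()
        for _ in range(iters):
            op()
        elapsed = time.time() - start
        if elapsed >= min_time_sec:
            return elapsed / iters  # time_in_last_run / number_of_iterations
        iters *= 2

print(benchmark(lambda: sum(range(1000))))
```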
Reviewed By: hl475
Differential Revision:
D14925287
fbshipit-source-id:
cb646298c08a651e27b99a5547350da367ffff47
Ilia Cherniavskii [Tue, 16 Apr 2019 07:13:50 +0000 (00:13 -0700)]
Move OMP/MKL thread initialization into ATen/Parallel (#19011)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19011
ghimport-source-id:
432e31eccfd0e59fa21a790f861e6b2ff4fdbac6
Differential Revision:
D14846034
Pulled By: ilia-cher
fbshipit-source-id:
d9d03c761d34bac80e09ce776e41c20fd3b04389
Mark Santaniello [Tue, 16 Apr 2019 06:40:21 +0000 (23:40 -0700)]
Avoid undefined symbol error when building AdIndexer LTO (#19009)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19009
Move the definition of `MulFunctor<>::Backward()` into a header file.
Reviewed By: BIT-silence
Differential Revision:
D14823230
fbshipit-source-id:
1efaec01863fcc02dcbe7e788d376e72f8564501
Nikolay Korovaiko [Tue, 16 Apr 2019 05:05:20 +0000 (22:05 -0700)]
Ellipsis in subscript
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17763
Differential Revision:
D14893533
Pulled By: Krovatkin
fbshipit-source-id:
c46b4e386d3aa30e6dc03e3052d2e5ff097fa74b
Ilia Cherniavskii [Tue, 16 Apr 2019 03:24:10 +0000 (20:24 -0700)]
Add input information in RecordFunction calls (#18717)
Summary:
Add input information into generated RecordFunction calls in
VariableType wrappers, JIT operators and a few more locations
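For context, a sketch of how input information recorded this way surfaces to users, assuming the profiler's `record_shapes` option that consumes it:
```python
import torch

x = torch.randn(32, 32)
with torch.autograd.profiler.profile(record_shapes=True) as prof:
    torch.mm(x, x)
# Input shapes captured via RecordFunction appear in the report.
print(prof.key_averages(group_by_input_shape=True).table())
```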
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18717
Differential Revision:
D14729156
Pulled By: ilia-cher
fbshipit-source-id:
811ac4cbfd85af5c389ef030a7e82ef454afadec
Summer Deng [Mon, 15 Apr 2019 23:43:58 +0000 (16:43 -0700)]
Add NHWC order support in the cost inference function of 3d conv (#19170)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19170
As title
The quantized resnext3d model in production got the following failures without the fix:
```
Caffe2 operator Int8ConvRelu logging error: [enforce fail at conv_pool_op_base.h:463] order == StorageOrder::NCHW. 1 vs 2. Conv3D only supports NCHW on the production quantized model
```
Reviewed By: jspark1105
Differential Revision:
D14894276
fbshipit-source-id:
ef97772277f322ed45215e382c3b4a3702e47e59
Jongsoo Park [Mon, 15 Apr 2019 21:35:25 +0000 (14:35 -0700)]
unit test with multiple op invocations (#19118)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19118
A bug introduced by D14700576 and reported by Yufei (fixed by D14778810 and D14785256) was not detected by our unit tests.
This diff improves the unit tests to catch such errors (with this diff and without D14778810, we can reproduce the bug Yufei reported).
This improvement also revealed a bug that affects accuracy when we pre-pack weight and bias together and the pre-packed weight/bias are used by multiple nets: we were modifying the pre-packed bias in place, even though it is supposed to be constant.
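A toy illustration of that bug class (names here are hypothetical; the real code pre-packs fbgemm weights):
```python
import torch

prepacked_bias = torch.ones(4)  # shared across nets, meant to be constant

def buggy_net(bias, scale):
    bias.mul_(scale)  # mutates the shared pre-packed "constant" in place
    return bias

buggy_net(prepacked_bias, 0.5)
print(prepacked_bias)  # tensor([0.5, 0.5, 0.5, 0.5]) -- the next net that
                       # reuses this buffer now sees corrupted values

def fixed_net(bias, scale):
    return bias * scale  # leave the shared buffer untouched
```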
Reviewed By: csummersea
Differential Revision:
D14806077
fbshipit-source-id:
aa9049c74b6ea98d21fbd097de306447a662a46d
Karl Ostmo [Mon, 15 Apr 2019 19:26:50 +0000 (12:26 -0700)]
Run shellcheck on Jenkins scripts (#18874)
Summary:
closes #18873
Doesn't fail the build on warnings yet.
Also fixes the most severe shellcheck warnings.
Limited to `.jenkins/pytorch/` at this time
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18874
Differential Revision:
D14936165
Pulled By: kostmo
fbshipit-source-id:
1ee335695e54fe6c387ef0f6606ea7011dad0fd4
Pieter Noordhuis [Mon, 15 Apr 2019 19:24:43 +0000 (12:24 -0700)]
Make DistributedDataParallel use new reducer (#18953)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18953
This removes Python side bucketing code from DistributedDataParallel
and replaces it with calls to the new C++ based bucketing and reducing
code. To confirm this is working well, we ran a test with both the
previous implementation and the new implementation, and confirmed they
are numerically equivalent.
Performance is improved by a couple of percent or more, including in the
single-machine, multi-GPU runs.
Closes #13273.
Reviewed By: mrshenli
Differential Revision:
D14580911
fbshipit-source-id:
44e76f8b0b7e58dd6c91644e3df4660ca2ee4ae2
Gemfield [Mon, 15 Apr 2019 19:23:36 +0000 (12:23 -0700)]
Fix the return value of ParseFromString (#19262)
Summary:
Fix the return value of ParseFromString.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19262
Differential Revision:
D14937605
Pulled By: ezyang
fbshipit-source-id:
3f441086517186a075efb3d74f09160463b696b3
vishwakftw [Mon, 15 Apr 2019 18:53:44 +0000 (11:53 -0700)]
Modify Cholesky derivative (#19116)
Summary:
The derivative of the Cholesky decomposition was previously a triangular matrix.
Changelog:
- Modify the derivative of Cholesky from a triangular matrix to a symmetric matrix (see the sketch below)
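A quick sanity check of the new behavior (a sketch using the era's `torch.cholesky` entry point):
```python
import torch

a = torch.randn(3, 3)
a = a @ a.t() + 3 * torch.eye(3)  # symmetric positive definite
a.requires_grad_(True)

torch.cholesky(a).sum().backward()
# The gradient wrt the symmetric input is now itself symmetric,
# not merely lower triangular:
print(torch.allclose(a.grad, a.grad.t()))
```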
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19116
Differential Revision:
D14935470
Pulled By: ezyang
fbshipit-source-id:
1c1c76b478c6b99e4e16624682842cb632e8e8b9
Karl Ostmo [Mon, 15 Apr 2019 18:38:44 +0000 (11:38 -0700)]
produce diagram for caffe2 build matrix (#18517)
Summary:
This PR splits the configuration tree data from the logic used to construct the tree, for both `pytorch` and `caffe2` build configs.
Caffe2 configs are also now illustrated in a diagram.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18517
Differential Revision:
D14936170
Pulled By: kostmo
fbshipit-source-id:
7b40a88512627377c5ea0f24765dabfef76ca279
Sam Gross [Mon, 15 Apr 2019 18:13:33 +0000 (11:13 -0700)]
Free all blocks with outstanding events on OOM-retry (#19222)
Summary:
The caching allocator tries to free all blocks on an out-of-memory
error. Previously, it did not free blocks that still had outstanding
stream uses. This change synchronizes on the outstanding events and
frees those blocks.
See #19219
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19222
Differential Revision:
D14925071
Pulled By: colesbury
fbshipit-source-id:
a2e9fe957ec11b00ea8e6c0468436c519667c558
Vitaly Fedyunin [Mon, 15 Apr 2019 16:13:49 +0000 (09:13 -0700)]
Make sure that any of the future versions can load and execute older models. (#19174)
Summary:
Helps to test #18952
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19174
Differential Revision:
D14899474
Pulled By: VitalyFedyunin
fbshipit-source-id:
a4854ad44da28bd0f5115ca316e6078cbfe29d0d
Sebastian Messmer [Sun, 14 Apr 2019 04:42:28 +0000 (21:42 -0700)]
Sync fbcode/caffe2 and xplat/caffe2 (1) (#19218)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19218
Sync some contents between fbcode/caffe2 and xplat/caffe2 to move closer towards a world where they are identical.
Reviewed By: dzhulgakov
Differential Revision:
D14919916
fbshipit-source-id:
29c6b6d89ac556d58ae3cd02619aca88c79591c1
Ailing Zhang [Sun, 14 Apr 2019 03:13:52 +0000 (20:13 -0700)]
upgrade bazel version in CI [xla ci] (#19246)
Summary:
The latest TF requires upgrading the bazel version.
This PR should fix xla tests in CI.
[xla ci]
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19246
Differential Revision:
D14929533
Pulled By: ailzhang
fbshipit-source-id:
f6deb31428ed39f267d96bb9814d06f76641e73b
Junjie Bai [Sat, 13 Apr 2019 20:05:12 +0000 (13:05 -0700)]
Update docker images to use ROCm 2.3 (#19231)
Summary:
xw285cornell petrex iotamudelta
https://ci.pytorch.org/jenkins/job/caffe2-builds/job/py2-clang7-rocmdeb-ubuntu16.04-trigger-test/24676/
https://ci.pytorch.org/jenkins/job/caffe2-builds/job/py2-devtoolset7-rocmrpm-centos7.5-trigger-test/17679/
https://ci.pytorch.org/jenkins/job/pytorch-builds/job/py2-clang7-rocmdeb-ubuntu16.04-trigger/24652/
https://ci.pytorch.org/jenkins/job/pytorch-builds/job/py2-devtoolset7-rocmrpm-centos7.5-trigger/9943/
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19231
Differential Revision:
D14928580
Pulled By: bddppq
fbshipit-source-id:
025b0affa6bcda6ee9f823dfc6c2cf8b92e71027
Zachary DeVito [Sat, 13 Apr 2019 17:01:34 +0000 (10:01 -0700)]
fix flake8 (#19243)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19243
ghimport-source-id:
ae80aed3a5742df21afb6e55979686220a27cce7
Differential Revision:
D14928670
Pulled By: zdevito
fbshipit-source-id:
20ec0d5c8d6f1c515beb55e2e63eddf3b2fc12dd
Zachary DeVito [Sat, 13 Apr 2019 15:28:13 +0000 (08:28 -0700)]
Remove GraphExecutor's python bindings (#19141)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19141
ghimport-source-id:
796a41f5514d29959af052fcf5391a2834850a80
Reviewed By: jamesr66a
Differential Revision:
D14888702
Pulled By: zdevito
fbshipit-source-id:
c280145f08e7bc210434d1c99396a3257b626cf9
Zachary DeVito [Sat, 13 Apr 2019 15:28:11 +0000 (08:28 -0700)]
Cleanup ScriptModule bindings (#19138)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19138
ghimport-source-id:
10f810f5e7551c1cb65fc4799744083bd7ffd1ee
Reviewed By: jamesr66a
Differential Revision:
D14886945
Pulled By: zdevito
fbshipit-source-id:
a5e5bb08694d03166a7516ec038656c2a02e7896
Zachary DeVito [Sat, 13 Apr 2019 15:28:11 +0000 (08:28 -0700)]
get propagate_shape logic out of module.h (#19137)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19137
ghimport-source-id:
2394765f2d401e68ffdfa4c985bfab4cca2517f8
Reviewed By: jamesr66a
Differential Revision:
D14885946
Pulled By: zdevito
fbshipit-source-id:
daa2894ed9761107e9d273bb172840dc23ace072
Zachary DeVito [Sat, 13 Apr 2019 15:28:11 +0000 (08:28 -0700)]
Make debug subgraph inlining thread local (#19136)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19136
ghimport-source-id:
3a24ab36aa753ce5cce7bba3467bdbe88e5c7f60
Reviewed By: jamesr66a
Differential Revision:
D14885051
Pulled By: zdevito
fbshipit-source-id:
b39c6ceef73ad9caefcbf8f40dd1b9132bba03c2
Zachary DeVito [Sat, 13 Apr 2019 15:28:10 +0000 (08:28 -0700)]
Support Kwargs in C++ Function/Method calls (#19086)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19086
ghimport-source-id:
7790a5cc6e32f6f72e92add0b9f76dfa49ad9859
Reviewed By: jamesr66a
Differential Revision:
D14875729
Pulled By: zdevito
fbshipit-source-id:
ad1e4542381d9c33722155459e794f1ba4660dbb
Johannes M Dieterich [Sat, 13 Apr 2019 04:42:10 +0000 (21:42 -0700)]
Enable working ROCm tests (#19169)
Summary:
Enable multi-GPU tests that work with ROCm 2.2. They have been run three times on CI to ensure stability.
While there, remove skipIfRocm annotations for tests that depend on MAGMA. They still skip, but now for the correct reason (no MAGMA), which improves our diagnostics.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19169
Differential Revision:
D14924812
Pulled By: bddppq
fbshipit-source-id:
8b88f58bba58a08ddcd439e899a0abc6198fef64
Ailing Zhang [Sat, 13 Apr 2019 04:26:27 +0000 (21:26 -0700)]
import warnings in torch.hub & fix master CI travis (#19181)
Summary:
fix missing import in #18758
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19181
Differential Revision:
D14908198
Pulled By: ailzhang
fbshipit-source-id:
31e0dc4a27521103a1b93f72511ae1b64a36117f
Jerry Zhang [Sat, 13 Apr 2019 01:10:37 +0000 (18:10 -0700)]
fix lint errors in gen.py (#19221)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19221
att
Reviewed By: colesbury
Differential Revision:
D14923858
fbshipit-source-id:
4793d7794172d401455c5ce72dfc27dddad515d4
Bram Wasti [Fri, 12 Apr 2019 21:53:17 +0000 (14:53 -0700)]
Add pass registration mechanism (#18587)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18587
ghimport-source-id:
80d753f7046a2a719e0c076684f44fa2059a0921
Differential Revision:
D14901227
Pulled By: bwasti
fbshipit-source-id:
56511d0313419b63945a36b80e9ea51abdef2bd4
Wanchao Liang [Fri, 12 Apr 2019 21:24:37 +0000 (14:24 -0700)]
JIT Layernorm fusion (#18266)
Summary:
Partially fuse layer_norm by decomposing it into the batchnorm kernel that computes the stats, then fusing the affine operations that follow the reduce operations. This is similar to the batchnorm fusion that apaszke did; it also only works in inference mode for now.
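A sketch of the decomposition being described (my reading, not the fuser's actual code): the stats come from a reduce step, and the trailing affine part is what the fuser can merge with neighboring elementwise ops.
```python
import torch

def decomposed_layer_norm(x, weight, bias, eps=1e-5):
    mean = x.mean(dim=-1, keepdim=True)                # reduce step
    var = x.var(dim=-1, unbiased=False, keepdim=True)  # batchnorm-style stats
    x_hat = (x - mean) / torch.sqrt(var + eps)
    return x_hat * weight + bias                       # fusible affine part

x = torch.randn(4, 16)
w, b = torch.ones(16), torch.zeros(16)
ref = torch.nn.functional.layer_norm(x, (16,), w, b)
print(torch.allclose(decomposed_layer_norm(x, w, b), ref, atol=1e-6))
```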
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18266
Differential Revision:
D14879877
Pulled By: wanchaol
fbshipit-source-id:
0197d8f2a17ec438d3e53f4c411d759c1ae81efe
Yinghai Lu [Fri, 12 Apr 2019 21:23:06 +0000 (14:23 -0700)]
Add more debugging helper to net transformer (#19176)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19176
Add some amenities for debugging.
Reviewed By: llyfacebook
Differential Revision:
D14901740
fbshipit-source-id:
2c4018fdbf7e3aba2a754b6b4103a72893c229c2
Jerry Zhang [Fri, 12 Apr 2019 19:47:39 +0000 (12:47 -0700)]
Add Quantized Backend (#18546)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18546
We'll expose all combinations of the various ways of quantization in the top-level dispatch key; that is, we have AffineCPUTensor, PerChannelAffineCUDATensor, etc.
QTensor methods added:
- is_quantized()
- item()
Differential Revision:
D14637671
fbshipit-source-id:
346bc6ef404a570f0efd34e8793056ad3c7855f5
Xiang Gao [Fri, 12 Apr 2019 19:34:29 +0000 (12:34 -0700)]
Step 2: Rename _unique_dim2_temporary_will_remove_soon to unique_dim (#18649)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18649
ghimport-source-id:
3411d240a6af5fe299a889667964730184e30645
Differential Revision:
D14888292
Pulled By: VitalyFedyunin
fbshipit-source-id:
80da83c264598f74ab8decb165da4a1ce2b352bb
Lu Fang [Fri, 12 Apr 2019 18:58:06 +0000 (11:58 -0700)]
Fix onnx ints (#19102)
Summary:
If JIT constant propagation doesn't work, we have to handle the ListConstructor in symbolic.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19102
Reviewed By: zrphercule
Differential Revision:
D14875588
Pulled By: houseroad
fbshipit-source-id:
d25c847d224d2d32db50aae1751100080e115022
Huamin Li [Fri, 12 Apr 2019 18:38:02 +0000 (11:38 -0700)]
use C10_REGISTER for GELU op
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19090
Reviewed By: BIT-silence
Differential Revision:
D14864737
fbshipit-source-id:
8debd53171f7068726f0ab777a13ca46becbfbdf
Edward Yang [Fri, 12 Apr 2019 18:13:39 +0000 (11:13 -0700)]
Fix tabs lint. (#19196)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19196
ghimport-source-id:
c10b1b19b087d7650e1614f008a9c2db21dfec2f
Differential Revision:
D14913428
Pulled By: ezyang
fbshipit-source-id:
815b919d8e4516d0e5d89ebbdc4dff6d1d08da47
Will Feng [Fri, 12 Apr 2019 16:57:51 +0000 (09:57 -0700)]
Pin nvidia-container-runtime version (#19195)
Summary:
This PR is to fix the CI error:
```
nvidia-docker2 : Depends: nvidia-container-runtime (= 2.0.0+docker18.09.4-1) but 2.0.0+docker18.09.5-1 is to be installed
E: Unable to correct problems, you have held broken packages.
Exited with code 100
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19195
Differential Revision:
D14913104
Pulled By: yf225
fbshipit-source-id:
d151205f5ffe9cac7320ded3c25baa7e051c3623
peter [Fri, 12 Apr 2019 16:25:55 +0000 (09:25 -0700)]
One more fix for #18790
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19187
Differential Revision:
D14913100
Pulled By: ezyang
fbshipit-source-id:
bf147747f933a2c9a35f3ff00bf6b83a4f29286c
Jerry Zhang [Fri, 12 Apr 2019 02:38:21 +0000 (19:38 -0700)]
Fix promoteTypes for QInt types (#19182)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19182
This is a bug discovered by zafartahirov: right now, if one of the tensors is a QInt
type we'll return undefined, but we actually want to allow ops that accept
Tensors of the same QInt type to work.
Reviewed By: zafartahirov
Differential Revision:
D14909172
fbshipit-source-id:
492fd6403da8c56e180efe9d632a3b7fc879aecf
Roy Li [Thu, 11 Apr 2019 23:55:39 +0000 (16:55 -0700)]
Replace more usages of Type with DeprecatedTypeProperties (#19093)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19093
ghimport-source-id:
a82e3dce912a173b42a6a7e35eb1302d9f334e03
Differential Revision:
D14865520
Pulled By: li-roy
fbshipit-source-id:
b1a8bf32f87920ce8d82f990d670477bc79d0ca7
David Riazati [Thu, 11 Apr 2019 22:33:51 +0000 (15:33 -0700)]
Support attributes when copying modules (#19040)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19040
ghimport-source-id:
37933efd717795751283cae8141e2e2caaae2e95
Reviewed By: eellison
Differential Revision:
D14895573
Pulled By: driazati
fbshipit-source-id:
bc2723212384ffa673d2a8df2bb57f38c62cc104
Will Feng [Thu, 11 Apr 2019 22:09:35 +0000 (15:09 -0700)]
Move version_counter_ to TensorImpl (#18223)
Summary:
According to https://github.com/pytorch/pytorch/issues/13638#issuecomment-468055428, after the Variable/Tensor merge, we may capture variables without autograd metadata inside an autograd function, and we need a working version counter in these cases. This PR makes it possible by moving `version_counter_` out of autograd metadata and into TensorImpl, so that variables without autograd metadata still have version counters.
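The effect is observable through the internal `_version` accessor; a small sketch:
```python
import torch

x = torch.ones(3)  # a plain tensor, no autograd metadata required
v = x._version
x.add_(1)          # in-place ops bump the version counter on TensorImpl
assert x._version == v + 1
```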
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18223
Differential Revision:
D14735123
Pulled By: yf225
fbshipit-source-id:
15f690311393ffd5a53522a226da82f5abb6c65b
Iurii Zdebskyi [Thu, 11 Apr 2019 21:25:21 +0000 (14:25 -0700)]
Enable comp ops for bool tensor (#19109)
Summary:
Enabled comparison ops for bool tensors
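A minimal sketch of what this enables (as the behavior looks once bool tensors are fully supported):
```python
import torch

a = torch.tensor([True, False, True])
b = torch.tensor([True, True, False])
print(a == b)  # tensor([ True, False, False])
print(a > b)   # tensor([False, False,  True])
```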
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19109
Differential Revision:
D14871187
Pulled By: izdeby
fbshipit-source-id:
cf9951847d69124a93e5e21dd0a39c9568b1037d
Will Feng [Thu, 11 Apr 2019 20:32:45 +0000 (13:32 -0700)]
Change is_variable() to check existence of AutogradMeta, and remove is_variable_ (#19139)
Summary:
Currently, a TensorImpl's `is_variable_` is true if and only if the TensorImpl has AutogradMeta. This PR unifies these two concepts by removing `is_variable_` and changing `is_variable()` to check for the existence of AutogradMeta instead.
Removing `is_variable_` is part of the work in Variable/Tensor merge.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19139
Differential Revision:
D14893339
Pulled By: yf225
fbshipit-source-id:
ceb5e22c3c01f79b5d21d5bdbf4a7d1bc397796a
Zachary DeVito [Thu, 11 Apr 2019 20:30:42 +0000 (13:30 -0700)]
First class modules in the compiler, round 2 (#19167)
Summary:
This PR propagates our use of first-class module objects into the compiler. This creates a transitionary state where:
* compiler.cpp creates Graphs where `self` is a Module class and attributes/parameters/buffers/submodules are looked up with `prim::GetAttr`
* GraphExecutor still runs "lowered graphs" where the self object has been removed by a compiler pass `lower_first_class_method`.
* Tracing still creates "lowered graphs", and a pass "lift_lowered_method" creates a first-class method graph for things.
* This PR separates out Method and Function. A script::Function is a pure Graph with no `self` bound. Similar to Python, a script::Method is just a bound `self` and its underlying `script::Function`.
* This PR also separates CompilationUnit from Module. A CompilationUnit is just a list of named script::Functions. Classes have a CompilationUnit holding the class methods, and Modules also have a CompilationUnit holding their Methods. This avoids the weird circular case Module --has a-> Class -> has a -> Module ...
Details:
* In this transitionary state, we maintain two copies of a Graph, first-class module and lowered. The first-class one has a self argument that is the module's class type. The lowered one is the lowered graph that uses the initial_ivalues inputs.
* When defining lowered methods using `_defined_lowered` we immediately create the first-class equivalent. The reverse is done lazily, creating lowered_methods on demand from the class.
* The two-way conversions will be deleted in a future PR when the executor itself runs first-class objects. However, this requires more changes to (1) the traces, (2) the python bindings, and (3) the onnx export pass, and would make this PR way too large.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19167
Differential Revision:
D14891966
Pulled By: zdevito
fbshipit-source-id:
0b5f03118aa65448a15c7a7818e64089ec93d7ea
Gregory Chanan [Thu, 11 Apr 2019 20:22:49 +0000 (13:22 -0700)]
Materialize a non-default device for C2 legacy storage. (#18605)
Summary:
It's not intended that Storages have 'default' CUDA devices, but this is allowable via the Storage::create_legacy codepath.
This also messes with device caching, because the initial cache is obtained from the Storage, which may have a 'default' device.
Instead, we materialize a device by allocating 0 bytes via the allocator.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18605
Differential Revision:
D14680620
Pulled By: gchanan
fbshipit-source-id:
6d43383d836e90beaf12bfe37c3f0506843f5432
Yinghai Lu [Thu, 11 Apr 2019 19:28:32 +0000 (12:28 -0700)]
Allow empty net type (#19154)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19154
I recently saw some weird workflow errors due to an empty but set net_type. Maybe we should just fall back to the simple net in this case.
Reviewed By: dzhulgakov
Differential Revision:
D14890072
fbshipit-source-id:
4e9edf8232298000713bebb0bfdec61e9c5df17d
Lu Fang [Thu, 11 Apr 2019 19:23:30 +0000 (12:23 -0700)]
Skip Slice if it's no op (#19155)
Summary:
If it's an identity op, just skip the slice and return the input.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19155
Reviewed By: zrphercule
Differential Revision:
D14890238
Pulled By: houseroad
fbshipit-source-id:
f87b93df2cca0cb0e8ae2a1d95ba148044eafd4a
Lu Fang [Thu, 11 Apr 2019 18:15:47 +0000 (11:15 -0700)]
Rename ONNX util test names (#19153)
Summary:
Rename test cases.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19153
Reviewed By: zrphercule
Differential Revision:
D14890095
Pulled By: houseroad
fbshipit-source-id:
37a787398c88d9cc92b411c2355b43200cf1c4b0
Pieter Noordhuis [Thu, 11 Apr 2019 16:14:31 +0000 (09:14 -0700)]
Remove ProcessGroup::getGroupRank (#19147)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19147
After #14809 was merged there is no longer a need for getGroupRank.
Every ProcessGroup object has its own rank and size fields which are
accurate for the global group as well as subgroups.
Strictly speaking removing a function in a minor version bump is a big
no-no, but I highly doubt this was ever used outside of
`torch.distributed` itself. This will result in a compile error for
folks who have subclassed the ProcessGroup class though.
If this is a concern we can delay merging until a later point in time,
but eventually this will need to be cleaned up.
Differential Revision:
D14889736
fbshipit-source-id:
3846fe118b3265b50a10ab8b1c75425dad06932d
Zafar Takhirov [Thu, 11 Apr 2019 15:29:52 +0000 (08:29 -0700)]
Basic implementation of QRelu in C10 (#19091)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19091
Implements a basic quantized ReLU (uint8). This is a temporary solution before using the `QTensor` type instead of the tuple.
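A numpy sketch of the uint8 ReLU semantics (an illustration, not the C10 kernel): in the affine scheme x = scale * (q - zero_point), x > 0 exactly when q > zero_point, so ReLU reduces to a clamp at the zero point.
```python
import numpy as np

def quantized_relu_uint8(q_x, zero_point):
    # Clamp quantized values at the zero point, which represents 0.0.
    return np.maximum(q_x, np.uint8(zero_point))

q = np.array([0, 100, 128, 200], dtype=np.uint8)
print(quantized_relu_uint8(q, zero_point=128))  # [128 128 128 200]
```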
Reviewed By: dzhulgakov
Differential Revision:
D14565413
fbshipit-source-id:
7d53cf5628cf9ec135603d6a1fb7c79cd9383019
Guanheng Zhang [Thu, 11 Apr 2019 15:04:32 +0000 (08:04 -0700)]
Import MultiheadAttention to PyTorch (#18334)
Summary:
Import MultiheadAttention into the core pytorch framework.
Users can now import MultiheadAttention directly from torch.nn.
See "Attention Is All You Need" for more details on the MultiheadAttention function.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18334
Differential Revision:
D14577966
Pulled By: zhangguanheng66
fbshipit-source-id:
756c0deff623f3780651d9f9a70ce84516c806d3
Xing Wang [Thu, 11 Apr 2019 14:27:46 +0000 (07:27 -0700)]
try to enable uncertainty for lr loss (#17236)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17236
Following the paper at https://papers.nips.cc/paper/7141-what-uncertainties-do-we-need-in-bayesian-deep-learning-for-computer-vision.pdf, approximate the classification case with the regression formulation. For the LRLoss, add a penalty based on the variance and a regularization on the variance with a tunable parameter lambda.
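A sketch of that regression-style formulation (my reading of the paper, with assumed names; `lam` stands in for the tunable lambda):
```python
import torch

def uncertainty_loss(err, log_var, lam=1.0):
    # err: per-example loss (e.g. the LR loss); log_var: predicted log sigma^2.
    # exp(-log_var) attenuates the loss by the predicted variance, and
    # lam * log_var penalizes predicting unbounded uncertainty.
    return (0.5 * torch.exp(-log_var) * err + lam * 0.5 * log_var).mean()

err = torch.rand(8)
log_var = torch.zeros(8, requires_grad=True)
uncertainty_loss(err, log_var).backward()
```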
Reviewed By: chocjy
Differential Revision:
D14077106
fbshipit-source-id:
4405d8995cebdc7275a0dd07857d32a8915d78ef
sakaia@jp.fujitsu.com [Thu, 11 Apr 2019 13:58:46 +0000 (06:58 -0700)]
Remove comment (#19148)
Summary:
Remove a pointer to a nonexistent Note.
It was already removed in "Remove support for CUDNN 6 (#15851)".
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19148
Differential Revision:
D14891514
Pulled By: soumith
fbshipit-source-id:
dd33cfefa3a21e18afae5b3992dea085adaabda8
Zachary DeVito [Thu, 11 Apr 2019 13:14:21 +0000 (06:14 -0700)]
Revert D14842057: Compiler uses first-class modules**
Differential Revision:
D14842057
Original commit changeset:
ca6e7b5a4380
fbshipit-source-id:
e8f1862a59bf20d5f78648b2fdc53a8b3750ead3
Zachary DeVito [Thu, 11 Apr 2019 06:57:36 +0000 (23:57 -0700)]
Compiler uses first-class modules** (#19043)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19043
ghimport-source-id:
0c9e80d5f35654af6d472abd5643bff3e9eb9ddf
Differential Revision:
D14842057
Pulled By: zdevito
fbshipit-source-id:
ca6e7b5a43805240f40b84d30e54495061067dc0
Christian Puhrsch [Thu, 11 Apr 2019 06:32:51 +0000 (23:32 -0700)]
Require matches_jit_signature within native_functions.yaml (#18956)
Summary:
"""
This will verify that the func syntax follows the JIT signature schema. If you are a developer outside the core team, set this to False first to help us track unification. After your tests pass try setting this to True once and leave it set to True if it doesn't trigger any asserts. This means that your signature happens to be compliant. In general, it serves as a means of tracking an ongoing schema unification with the goal of aligning func syntax with other components of PyTorch in order to reduce overall complexity and assert coverage of all functions by each component.
"""
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18956
Differential Revision:
D14807952
Pulled By: cpuhrsch
fbshipit-source-id:
42dac49269fb3cd96dc62e0b10820d0c32c7fb0e
Ailing Zhang [Thu, 11 Apr 2019 06:05:10 +0000 (23:05 -0700)]
add/move a few apis in torch.hub (#18758)
Summary:
* `torch.hub.list('pytorch/vision')` - show all available hub models in `pytorch/vision`
* `torch.hub.show('pytorch/vision', 'resnet18')` - show docstring & example for `resnet18` in `pytorch/vision`
* Moved `torch.utils.model_zoo.load_url` to `torch.hub.load_state_dict_from_url` and deprecate `torch.utils.model_zoo`
* We have too many env vars to control where the cache dir is; it's not really necessary. I actually want to unify `TORCH_HUB_DIR`, `TORCH_HOME` and `TORCH_MODEL_ZOO`, but haven't done it. (more suggestions are welcome!)
* Simplify the `pytorch/vision` example in the doc; it was used to show how a hub entrypoint can be written, so it had some confusing, unnecessary args.
An example of hub usage is shown below
```
In [1]: import torch
In [2]: torch.hub.list('pytorch/vision', force_reload=True)
Downloading: "https://github.com/pytorch/vision/archive/master.zip" to /private/home/ailzhang/.torch/hub/master.zip
Out[2]: ['resnet18', 'resnet50']
In [3]: torch.hub.show('pytorch/vision', 'resnet18')
Using cache found in /private/home/ailzhang/.torch/hub/vision_master
Resnet18 model
pretrained (bool): a recommended kwargs for all entrypoints
args & kwargs are arguments for the function
In [4]: model = torch.hub.load('pytorch/vision', 'resnet18', pretrained=True)
Using cache found in /private/home/ailzhang/.torch/hub/vision_master
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18758
Differential Revision:
D14883651
Pulled By: ailzhang
fbshipit-source-id:
6db6ab708a74121782a9154c44b0e190b23e8309
Pieter Noordhuis [Thu, 11 Apr 2019 05:21:45 +0000 (22:21 -0700)]
Revert D14878128: [jit] Support attributes when copying modules
Differential Revision:
D14878128
Original commit changeset:
7ef5f7b1b16b
fbshipit-source-id:
3818222a897f8c01bc67f550ed0fd3ddecf61015
Pieter Noordhuis [Thu, 11 Apr 2019 04:27:51 +0000 (21:27 -0700)]
ProcessGroupMPI exists only if it is valid (#14809)
Summary:
Previously, MPI process groups were created for all processes, even if
they were not part of the created group. Their MPI_Comm member field
would be MPI_COMM_NULL and they would ignore any calls. Their rank and
size were identical to those of the global process group, and they had
special groupRank and groupSize fields to capture the _real_ rank.
This also meant asymmetry with other process group types, where creating
a new group would either return the process group OR
GroupMember.NON_GROUP_MEMBER. For the MPI process group, it would always
return a process group, and an additional check was needed to verify
whether or not a process was indeed part of a process group.
This commit changes this such that every MPI process group is a valid
process group, and by extension that we no longer have to special case
MPI to determine whether or not a process is part of a group. Now, if
the value returned by `new_group` is GroupMember.NON_GROUP_MEMBER, the
process is not a member, otherwise it is.
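A single-process sketch of the now-uniform membership check (env setup assumed; a real run spans multiple processes):
```python
import os
import torch
import torch.distributed as dist

os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29501")
dist.init_process_group("gloo", rank=0, world_size=1)

group = dist.new_group(ranks=[0])
if group is dist.GroupMember.NON_GROUP_MEMBER:
    print("this process is not part of the group")  # no MPI special case
else:
    t = torch.ones(1)
    dist.all_reduce(t, group=group)
    print(t)
```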
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14809
Differential Revision:
D14887937
Pulled By: pietern
fbshipit-source-id:
c5bf86d3b33e524cc5004ee68e30103178fa491d
Shen Li [Thu, 11 Apr 2019 03:30:46 +0000 (20:30 -0700)]
Fix flaky store timeout test (#19114)
Summary:
~Sometimes, `init_process_group()`, `store.get()`, and `destroy_process_group()` can take more than a few seconds. Hence, removing the thread join timeout.~
The error was due to `Address already in use` when starting the TPC backend. The solution is to catch the error and report it to the `retry_on_address_already_in_use_error` decorator.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19114
Reviewed By: ezyang
Differential Revision:
D14872680
Pulled By: mrshenli
fbshipit-source-id:
fc504d02853ca73f76288c0ade564ab20bc01f7e
Xiaomeng Yang [Thu, 11 Apr 2019 01:45:57 +0000 (18:45 -0700)]
Optimize SoftmaxOp on CPU (#18635)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18635
Optimize SoftmaxOp on CPU
Reviewed By: houseroad
Differential Revision:
D14689516
fbshipit-source-id:
d2dcee2476d1a3a21f428e99bce9835f1d229d64
Zachary DeVito [Thu, 11 Apr 2019 01:12:38 +0000 (18:12 -0700)]
Allow Tensor lists to show up in symbolic differentiable graphs. (#16784)
Summary:
It is done by flattening all tensor lists that are inputs/outputs to the
graph into the inputs/outputs list in the autograd graph.
This is less desirable than simply allowing IValues to exist in the
inputs/outputs of autograd::Function but it is substantially less
intrusive.
CaptureList describes the variables captured for backward in a single class.
UnpackInstructs describes how the flattened inputs to backwards are re-packed into lists.
ailzhang
This PR is also part 2 of covering maskrcnn & bert AD formulas, following #16689.
Ops added in this PR:
```
cat
index
meshgrid
reshape
split
split_with_sizes
stack
unbind
```
I will also add a few perf numbers here.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16784
Differential Revision:
D14104063
Pulled By: ailzhang
fbshipit-source-id:
5ceadadfd67ccaac60c5fd6740786c5354e252b9
David Riazati [Wed, 10 Apr 2019 22:56:42 +0000 (15:56 -0700)]
Support attributes when copying modules (#19040)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19040
ghimport-source-id:
37933efd717795751283cae8141e2e2caaae2e95
Differential Revision:
D14878128
Pulled By: driazati
fbshipit-source-id:
7ef5f7b1b16b9bf9254e8503564fa3a750d841ab
Hao Lu [Wed, 10 Apr 2019 22:20:55 +0000 (15:20 -0700)]
Move ConcatBatchMatMulBatchGatherOp to OSS
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19059
Reviewed By: bwasti
Differential Revision:
D14849735
fbshipit-source-id:
fefd1887d38e51151c07a8b187e9c7c50ef02c6e
Edward Yang [Wed, 10 Apr 2019 21:09:35 +0000 (14:09 -0700)]
Print CuDNN version correctly. (#19110)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19110
ghimport-source-id:
efbaf9b23cb61e7ea65460684778c6eeb38ae28e
Differential Revision:
D14874497
Pulled By: ezyang
fbshipit-source-id:
ced03576f7598189dd8cce79b3303a5529551f46
Roy Li [Wed, 10 Apr 2019 19:47:51 +0000 (12:47 -0700)]
Infer device from pointer in from_blob (#19094)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19094
ghimport-source-id:
8207cf614ba36333af610309b24fdc13441b2837
Differential Revision:
D14865925
Pulled By: li-roy
fbshipit-source-id:
16613801f7fe0e829ccab8af081517ea4257db06
Gu, Jinghui [Wed, 10 Apr 2019 18:58:38 +0000 (11:58 -0700)]
implement operators for DNNLOWP (#18656)
Summary:
Implement operators for DNNLOWP, including int8_conv, int8_FC, int8_pooling, int8_relu, int8_sum, quantize/dequantize, and order_switch operators.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18656
Differential Revision:
D14767092
Pulled By: yinghai
fbshipit-source-id:
1f3e24929a358a42214da333bd304c593ea4468f
Gregory Chanan [Wed, 10 Apr 2019 18:46:35 +0000 (11:46 -0700)]
Improve mismatched storage error message. (#19068)
Summary:
Previously the error message would look like:
```
Attempted to set the storage of a tensor on device cuda:0 to a storage on different device cuda. This is no longer allowed; the devices must match.
```
Now it looks like:
```
Attempted to set the storage of a tensor on device "cuda:0" to a storage on different device "cuda". This is no longer allowed; the devices must match.
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19068
Reviewed By: dzhulgakov
Differential Revision:
D14854257
Pulled By: gchanan
fbshipit-source-id:
deb1ef73c2fcbf9338e7d67f2856282db2befac8
David Riazati [Wed, 10 Apr 2019 18:20:44 +0000 (11:20 -0700)]
Refactor pickler (#19035)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19035
ghimport-source-id:
553977b9963d4877e5066a61702f887e81706598
Differential Revision:
D14839341
Pulled By: driazati
fbshipit-source-id:
d6e4f21b2df28e2a0a21b26bf08d9905599119ad
iurii zdebskyi [Wed, 10 Apr 2019 18:05:54 +0000 (11:05 -0700)]
Fixed bool Tensor value change bug (#19096)
Summary:
Fixes #19077
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19096
Differential Revision:
D14871044
Pulled By: izdeby
fbshipit-source-id:
61b12559c8c5b9613e00ba5933f478321ea80469
Dmytro Dzhulgakov [Wed, 10 Apr 2019 17:13:59 +0000 (10:13 -0700)]
Split python_ir.h in a more sensible way (#19081)
Summary:
Files included in libtorch do depend on torch/csrc/utils/object_ptr.h, e.g. ir.cpp: https://github.com/pytorch/pytorch/blob/master/torch/csrc/jit/ir.h#L10 (including usage in std::vector that requires destructor for THPPointer)
However, object_ptr.h depends on python stub: https://github.com/pytorch/pytorch/blob/master/torch/csrc/utils/object_ptr.h#L3
Whereas object_ptr.cpp depends fully on Python: https://github.com/pytorch/pytorch/blob/master/torch/csrc/utils/object_ptr.cpp#L8
`torch/csrc/utils/object_ptr.cpp` is included only in Python extension target: https://github.com/pytorch/pytorch/blob/master/torch/CMakeLists.txt#L541
The only reason it was working on master is that the compiler was aggressive enough in pruning unused inline functions. With a bit of changes in flags, it started breaking (like in kostmo's PR).
This PR splits out python-dependent bits more explicitly by forward declaring THPPointer for real.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19081
Reviewed By: ezyang
Differential Revision:
D14860091
Pulled By: dzhulgakov
fbshipit-source-id:
4e86cb8e2ac57aedb3cd00c15270d65bb376206c
Yinghai Lu [Wed, 10 Apr 2019 17:07:43 +0000 (10:07 -0700)]
Clear input/ouput shape cache for each inference (#19085)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19085
This is a bug where input_shapes_ and output_shapes_ grow indefinitely. Fix it here.
Reviewed By: bertmaher, rdzhabarov
Differential Revision:
D14861695
fbshipit-source-id:
d59116f27c3b54f5cc5a33533de4b9222dbb7afc
Xiang Gao [Wed, 10 Apr 2019 14:33:15 +0000 (07:33 -0700)]
Add torch.unique_consecutive (#19060)
Summary:
Fixes: https://github.com/pytorch/pytorch/issues/19045
Please review: VitalyFedyunin ngimel
This is independent of the #18649 series. It will cause merge conflicts in the #18649 series, but please merge this first, and I will resolve the merge conflicts there.
The new feature is exposed in `_unique2_temporary_will_remove_soon` and `_unique_dim2_temporary_will_remove_soon`, but not in `torch.unique` yet. I will take care of the API after the #18649 series gets merged completely.
Benchmark on a tensor of shape `torch.Size([15320, 2])`:
```python
print(torch.__version__)
print()
a = tensor.sort().values.to('cpu')
print('cpu, sorted_input=False:')
%timeit torch._unique2_temporary_will_remove_soon(a)
%timeit torch._unique2_temporary_will_remove_soon(a, return_inverse=True)
%timeit torch._unique2_temporary_will_remove_soon(a, return_counts=True)
%timeit torch._unique2_temporary_will_remove_soon(a, return_inverse=True, return_counts=True)
print()
print('cpu, sorted_input=True:')
%timeit torch._unique2_temporary_will_remove_soon(a, sorted_input=True)
%timeit torch._unique2_temporary_will_remove_soon(a, sorted_input=True, return_inverse=True)
%timeit torch._unique2_temporary_will_remove_soon(a, sorted_input=True, return_counts=True)
%timeit torch._unique2_temporary_will_remove_soon(a, sorted_input=True, return_inverse=True, return_counts=True)
print()
a = a.to('cuda')
print('cuda, sorted_input=False:')
%timeit torch._unique2_temporary_will_remove_soon(a); torch.cuda.synchronize()
%timeit torch._unique2_temporary_will_remove_soon(a, return_inverse=True); torch.cuda.synchronize()
%timeit torch._unique2_temporary_will_remove_soon(a, return_counts=True); torch.cuda.synchronize()
%timeit torch._unique2_temporary_will_remove_soon(a, return_inverse=True, return_counts=True); torch.cuda.synchronize()
print()
print('cuda, sorted_input=True:')
%timeit torch._unique2_temporary_will_remove_soon(a, sorted_input=True); torch.cuda.synchronize()
%timeit torch._unique2_temporary_will_remove_soon(a, sorted_input=True, return_inverse=True); torch.cuda.synchronize()
%timeit torch._unique2_temporary_will_remove_soon(a, sorted_input=True, return_counts=True); torch.cuda.synchronize()
%timeit torch._unique2_temporary_will_remove_soon(a, sorted_input=True, return_inverse=True, return_counts=True); torch.cuda.synchronize()
```
```
1.1.0a0+2addccc
cpu, sorted_input=False:
340 µs ± 5.88 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
717 µs ± 14.9 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
52.3 ms ± 2.75 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
52.3 ms ± 1.79 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
cpu, sorted_input=True:
32.8 µs ± 285 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
49.9 µs ± 557 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
51.6 µs ± 1.08 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
78 µs ± 782 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
cuda, sorted_input=False:
213 µs ± 1.52 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
291 µs ± 3.81 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
250 µs ± 1.05 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
321 µs ± 1.59 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
cuda, sorted_input=True:
45.6 µs ± 2.13 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
110 µs ± 2.47 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
82 µs ± 857 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
143 µs ± 409 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
```
```python
print(torch.__version__)
print()
a1, a2 = tensor.unbind(1)
indices = (a1 * tensor.max() + a2).sort().indices
a = tensor.index_select(0, indices).to('cpu')
print('cpu, sorted_input=False:')
%timeit torch._unique_dim2_temporary_will_remove_soon(a, dim=0)
%timeit torch._unique_dim2_temporary_will_remove_soon(a, dim=0, return_inverse=True)
%timeit torch._unique_dim2_temporary_will_remove_soon(a, dim=0, return_counts=True)
%timeit torch._unique_dim2_temporary_will_remove_soon(a, dim=0, return_inverse=True, return_counts=True)
print()
print('cpu, sorted_input=True:')
%timeit torch._unique_dim2_temporary_will_remove_soon(a, dim=0, sorted_input=True)
%timeit torch._unique_dim2_temporary_will_remove_soon(a, dim=0, sorted_input=True, return_inverse=True)
%timeit torch._unique_dim2_temporary_will_remove_soon(a, dim=0, sorted_input=True, return_counts=True)
%timeit torch._unique_dim2_temporary_will_remove_soon(a, dim=0, sorted_input=True, return_inverse=True, return_counts=True)
print()
a = a.to('cuda')
print('cuda, sorted_input=False:')
%timeit torch._unique_dim2_temporary_will_remove_soon(a, dim=0); torch.cuda.synchronize()
%timeit torch._unique_dim2_temporary_will_remove_soon(a, dim=0, return_inverse=True); torch.cuda.synchronize()
%timeit torch._unique_dim2_temporary_will_remove_soon(a, dim=0, return_counts=True); torch.cuda.synchronize()
%timeit torch._unique_dim2_temporary_will_remove_soon(a, dim=0, return_inverse=True, return_counts=True); torch.cuda.synchronize()
print()
print('cuda, sorted_input=True:')
%timeit torch._unique_dim2_temporary_will_remove_soon(a, dim=0, sorted_input=True); torch.cuda.synchronize()
%timeit torch._unique_dim2_temporary_will_remove_soon(a, dim=0, sorted_input=True, return_inverse=True); torch.cuda.synchronize()
%timeit torch._unique_dim2_temporary_will_remove_soon(a, dim=0, sorted_input=True, return_counts=True); torch.cuda.synchronize()
%timeit torch._unique_dim2_temporary_will_remove_soon(a, dim=0, sorted_input=True, return_inverse=True, return_counts=True); torch.cuda.synchronize()
```
```
cpu, sorted_input=False:
55.4 ms ± 1.12 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
55.8 ms ± 616 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
55.2 ms ± 402 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
55.1 ms ± 725 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
cpu, sorted_input=True:
54.7 ms ± 585 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
55.2 ms ± 1.23 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
54.5 ms ± 865 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
54.9 ms ± 577 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
cuda, sorted_input=False:
171 µs ± 783 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
220 µs ± 1.65 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
203 µs ± 2.95 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
251 µs ± 2.83 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
cuda, sorted_input=True:
59.6 µs ± 757 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
113 µs ± 431 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
93.2 µs ± 2.13 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
147 µs ± 2.81 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
```
The CPU implementation of `unique_dim` is super slow (see https://github.com/pytorch/pytorch/issues/18987), but this PR will not worry about that issue.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19060
Differential Revision:
D14866909
Pulled By: ezyang
fbshipit-source-id:
d20012cec68c37b05cf770a6f4d6524f910b950f
Lu Fang [Wed, 10 Apr 2019 07:32:02 +0000 (00:32 -0700)]
Replace tabs with space (#19100)
Summary:
fix the linter
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19100
Differential Revision:
D14869256
Pulled By: houseroad
fbshipit-source-id:
27ca93cd1dce01ac705b9c9ed93ca8eb6c36351c
Roy Ju [Wed, 10 Apr 2019 05:29:33 +0000 (22:29 -0700)]
Fixes error when too many parameters are passed to fused cuda kernel (#18063)
Summary:
Bug fix for https://github.com/pytorch/pytorch/issues/15043, where a large JIT fusion has a number of kernel arguments that exceeds the limit allowed by nvrtc on a CUDA device.
The fix is to check the number of arguments before a CUDA kernel is generated. If the number exceeds the limit, take the runFallBack() path.
Add a reduced test from the original issue to keep the test time low. The test would fail without this fix.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18063
Differential Revision:
D14691401
Pulled By: soumith
fbshipit-source-id:
b98829bc89ed7724e91eda82ae3a5a1151af721a
Summer Deng [Wed, 10 Apr 2019 04:59:33 +0000 (21:59 -0700)]
amend D14778810 (#18902)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18902
The fix in D14778810 had an issue: when we fall back to acc32 because the density of outliers is too high, W_quantized_ has already been modified. In this diff we first just count the number of outliers (without modifying W_quantized_), and only when the density is low enough and no fallback is needed do we modify W_quantized_ and construct an outlier matrix.
Reviewed By: jspark1105
Differential Revision:
D14785256
fbshipit-source-id:
03933110a4ca7409686a06b18a9bb921f8657950
James Reed [Wed, 10 Apr 2019 04:48:49 +0000 (21:48 -0700)]
Move abs, frac, reciprocal, and neg to TensorIterator (#19041)
Summary:
I've been messing around with vectorizing the fusion compiler in JIT, and noticed that these ops were pathologically slow. I moved them to use TensorIterator + Vec256<> and got some speed wins.
Benchmark script:
```
import torch, time
ops = ['abs', 'neg', 'reciprocal', 'frac']
x = torch.rand(1024, 1024)
NITER = 10000
print('op', 'time per iter (ms)', 'gops/s', 'GB/s', sep='\t')
for op in ops:
    s = time.time()
    for i in range(NITER):
        getattr(x, op)()
    elapsed_sec = ((time.time() - s) / NITER)
    print(op, elapsed_sec * 1000, (1024*1024/elapsed_sec)/1e9, (1024*1024*4*2) / elapsed_sec / 1e9, sep='\t')
```
Before this change (on my mac with a skylake):
```
op          time per iter (ms)   gops/s               GB/s
abs         0.9730974197387695   1.0775652866097343   8.620522292877874
neg         1.0723679780960083   0.9778136063534356   7.822508850827485
reciprocal  1.2610594034194946   0.8315040490215421   6.6520323921723366
frac        1.1681334018707275   0.8976509004200546   7.181207203360437
```
After this change:
```
op          time per iter (ms)    gops/s                GB/s
abs         0.5031076192855835    2.084198210889721     16.673585687117768
neg         0.4433974027633667    2.3648672578256087    18.91893806260487
reciprocal  0.47145988941192624   2.2241043693195985    17.79283495455679
frac        0.5036592721939087    2.0819154096627024    16.65532327730162
```
So, after this change it looks like we are hitting machine peak for bandwidth and are bandwidth bound.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19041
Differential Revision:
D14862037
Pulled By: jamesr66a
fbshipit-source-id:
e2032ac0ca962dbf4120bb36812277c260e22912
Wanchao Liang [Wed, 10 Apr 2019 04:33:54 +0000 (21:33 -0700)]
Fix aten op output assignment (#18581)
Summary:
Fixes the problem of #18391.
The issue is that when we codegen the ATenOp, we always generated a static number of outputs for each operator. E.g., if there's an operator from an old model that only requires two outputs, its createOperator will only allocate two output blobs, while the newer version of the operator (`unique` in this case) requires more output blobs to be allocated.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18581
Differential Revision:
D14865647
Pulled By: wanchaol
fbshipit-source-id:
85f63fe16d6fe408a09eca84798c7e8cab3070e9
Richard Zou [Wed, 10 Apr 2019 01:09:01 +0000 (18:09 -0700)]
EmbeddingBag w/ differentiable per_sample_weights (#18957)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18957
ghimport-source-id:
7396ca08b137ea40f04285764a9d9a6d4f19227e
Reviewed By: cpuhrsch
Differential Revision:
D14856526
Pulled By: zou3519
fbshipit-source-id:
949faea219c7c02ad981b1db610a477194d3f5c9
Richard Zou [Wed, 10 Apr 2019 01:08:59 +0000 (18:08 -0700)]
EmbeddingBag w/ per_sample_weights CUDA fwd + bwd (#18800)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18800
ghimport-source-id:
17f638dea0e1ac9a86ec06b223c60362ed78449c
Reviewed By: cpuhrsch
Differential Revision:
D14851422
Pulled By: zou3519
fbshipit-source-id:
27b114e51e66112e4bc9cfc63d1d1ddfa650d347
Richard Zou [Wed, 10 Apr 2019 01:08:59 +0000 (18:08 -0700)]
EmbeddingBag w/ per_sample_weights CPU backward (#18799)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18799
ghimport-source-id:
58a6f629e890449013f24a9b6282664ca2a1e3ba
Reviewed By: cpuhrsch
Differential Revision:
D14851417
Pulled By: zou3519
fbshipit-source-id:
c36b9d469989354bf6cef1c2c3dc4f13e7cb1a25
Richard Zou [Wed, 10 Apr 2019 01:08:59 +0000 (18:08 -0700)]
EmbeddingBag CPU forward with per_sample_weights. (#18735)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18735
ghimport-source-id:
d81bef54dafd7167d2451250d7be478d3c013920
Reviewed By: cpuhrsch
Differential Revision:
D14851415
Pulled By: zou3519
fbshipit-source-id:
cea6039e760ad571b90f0a536e420498f34be325
Richard Zou [Wed, 10 Apr 2019 01:08:59 +0000 (18:08 -0700)]
Refactor CPU embedding_bag implementation (#18734)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18734
ghimport-source-id:
e0e50d4b47f2fb8c86e464aacb950521d601f8d3
Reviewed By: cpuhrsch
Differential Revision:
D14851413
Pulled By: zou3519
fbshipit-source-id:
8ac4e4de590a363e9807dc552fe4ca52b92652ed
Alexander Sidorov [Tue, 9 Apr 2019 23:32:52 +0000 (16:32 -0700)]
Make BlackBoxPredictor handle networks throwing exceptions (#19080)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19080
OSS: add a tiny unit-test utility function to create tensors given shape and data outside of any workspace. I use it in an internal test.
Reviewed By: dzhulgakov
Differential Revision:
D14814194
fbshipit-source-id:
6d53b235d99a97da812215f5c7f11fecad363c8c
Shen Li [Tue, 9 Apr 2019 23:11:05 +0000 (16:11 -0700)]
Remind users to set map_location properly when using DDP
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19084
Differential Revision:
D14861702
Pulled By: mrshenli
fbshipit-source-id:
10ca4a9b41e707050a6bce228ccca4177c9fa4a6
Vishwak Srinivasan [Tue, 9 Apr 2019 22:15:06 +0000 (15:15 -0700)]
Rename btrisolve to lu_solve (#18726)
Summary:
Changelog:
- Rename `btrisolve` to `lu_solve` to remain consistent with names of solve methods (`cholesky_solve`, `triangular_solve`, `solve`)
- Fix all callsites
- Rename all tests
- Create a tentative alias for `lu_solve` under the name `btrisolve` and add a deprecation warning to discourage its usage
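A sketch of the renamed API (era-appropriate names; `torch.lu` is the factorization counterpart):
```python
import torch

a = torch.randn(3, 3) + 3 * torch.eye(3)
b = torch.randn(3, 2)
lu_data, pivots = torch.lu(a)           # LU factorization
x = torch.lu_solve(b, lu_data, pivots)  # formerly torch.btrisolve
print(torch.allclose(a @ x, b, atol=1e-5))
```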
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18726
Differential Revision:
D14726237
Pulled By: zou3519
fbshipit-source-id:
bf25f6c79062183a4153015e0ec7ebab2c8b986b
Shen Li [Tue, 9 Apr 2019 21:10:04 +0000 (14:10 -0700)]
Avoid calling tensor.data.set_() in DDP
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18961
Differential Revision:
D14811208
Pulled By: mrshenli
fbshipit-source-id:
c1c46dfa13e0a6ec83aefd35696ee31a7ea3d810
Dmytro Dzhulgakov [Tue, 9 Apr 2019 19:13:41 +0000 (12:13 -0700)]
Reapply Wrap workaround for cpp custom types a bit prettier and add an example" (#19062)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19062
As a temporary demonstration of how to extend this hack further until custom C types are ready.
Reviewed By: ezyang
Differential Revision:
D14817809
fbshipit-source-id:
6eaf731e9135313eb858e178abcd9f25380ab8fe
Shen Li [Tue, 9 Apr 2019 19:06:04 +0000 (12:06 -0700)]
Propagate ProcessGroup timeout to Store (#16571)
Summary:
closes #16520
Hi pietern, I am not sure if this is the expected way to pass timeout to `Store`, could you please help take a look? Thanks!
Questions:
1. How do I write tests for this? I wanted to do something like `test_barrier_timeout_global`, but it seems I need to set the pg's timeout larger than the `Store`'s default timeout (3 min) to see a difference, which is too long for a unit test. And I do not want to change the `Store`'s default timeout either. Any suggestion?
2. Should I also propagate timeout configuration down to `PrefixStore` in `_new_process_group_helper`?
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16571
Differential Revision:
D13954527
Pulled By: mrshenli
fbshipit-source-id:
77f2653903f24255207233eb298f7c0321119a87
Wanchao Liang [Tue, 9 Apr 2019 18:53:23 +0000 (11:53 -0700)]
make test_jit_fuser runnable
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19036
Differential Revision:
D14839800
Pulled By: wanchaol
fbshipit-source-id:
b52c131b58e1b42a8c3da5d1117217c3dc2e5f5b
Edward Yang [Tue, 9 Apr 2019 18:48:56 +0000 (11:48 -0700)]
Fix documentation for unfold(dimension=..., ...), fixes #18793 (#19020)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19020
ghimport-source-id:
8f31e51b79daba11939aa7992450984054713b9c
Differential Revision:
D14851890
Pulled By: ezyang
fbshipit-source-id:
8498e86a63633fdfd9ecae9b7f85b773b75fe27a
Edward Yang [Tue, 9 Apr 2019 18:34:37 +0000 (11:34 -0700)]
Debugging: Increase process reporting for apt/dpkg. (#18880)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18880
ghimport-source-id:
b43a33c12df379ec75c1fd4c713c1fc723a763e1
Differential Revision:
D14856296
Pulled By: ezyang
fbshipit-source-id:
30691eb14dddfe998b2605b416aaa1b14d1b6ad5
Edward Yang [Tue, 9 Apr 2019 18:09:31 +0000 (11:09 -0700)]
Add torch.__config__.show(), reporting detailed version of all libraries. (#18579)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18579
ghimport-source-id:
65124c95e49423de4ad1008c65e75057fea09b94
Differential Revision:
D14778507
Pulled By: ezyang
fbshipit-source-id:
1e4bb79f4800a116ce8fb7af2fefbd34da8d102c
Omegastick [Tue, 9 Apr 2019 17:36:13 +0000 (10:36 -0700)]
Fix torch::nn::init::orthogonal_ with CNNs (#18915)
Summary:
Fixes #18518
I changed the C++ API torch::nn::init::orthogonal_ implementation to match the Python implementation.
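For reference, a sketch of the Python behavior being matched (trailing dims are flattened before orthogonalization):
```python
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 8, kernel_size=3)
nn.init.orthogonal_(conv.weight)  # treats the weight as an 8 x 27 matrix
w = conv.weight.view(8, -1)
print(torch.allclose(w @ w.t(), torch.eye(8), atol=1e-5))
```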
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18915
Differential Revision:
D14851833
Pulled By: ezyang
fbshipit-source-id:
45b5e9741582777c203e9ebed564ab3ac1f94baf
Soumith Chintala [Tue, 9 Apr 2019 17:09:58 +0000 (10:09 -0700)]
move nightlies to 1.1.0xxx
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19069
Differential Revision:
D14854600
Pulled By: soumith
fbshipit-source-id:
85c703bddbd47c1b3914d58ab9521ed22ddeb62a
Lu Fang [Tue, 9 Apr 2019 17:01:48 +0000 (10:01 -0700)]
add an utility function to check whether it's in the middle of onnx export or not
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19050
Reviewed By: yinghai
Differential Revision:
D14849878
Pulled By: houseroad
fbshipit-source-id:
a0a4a57f5f9f315ba1334edfccc9284a8099d17f
Lu Fang [Tue, 9 Apr 2019 16:56:34 +0000 (09:56 -0700)]
remove interned_string.h dep (#19061)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19061
remove the deps on interned_string.h
Reviewed By: BIT-silence
Differential Revision:
D14850078
fbshipit-source-id:
07e6ad72a7de369049ea56f32b72276fb4c59b32
Liang Xiong [Tue, 9 Apr 2019 16:25:03 +0000 (09:25 -0700)]
add logging to make the saving action visible (#19042)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19042
show the model saving step in the log.
Reviewed By: kennyhorror
Differential Revision:
D14809385
fbshipit-source-id:
c7a1e50ff92bb45b16b1c501d9325b304b07fbd3
Xiang Gao [Tue, 9 Apr 2019 16:10:42 +0000 (09:10 -0700)]
Namedtuple return for gels, triangular_solve, and test refactor (#17195)
Summary:
Partial fix of: https://github.com/pytorch/pytorch/issues/394
- `gels` and `triangular_solve` now return namedtuples
- refactor test for namedtuple API for better coverage and maintainability
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17195
Differential Revision:
D14851875
Pulled By: ezyang
fbshipit-source-id:
9b2cba95564269d2c3a15324ba48751d68ed623c
Edward Yang [Tue, 9 Apr 2019 15:02:30 +0000 (08:02 -0700)]
Convert all tabs to spaces, add CI. (#18959)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18959
ghimport-source-id:
a934163fa34cb2019732d5f49dc7290c376bf156
Differential Revision:
D14831246
Pulled By: ezyang
fbshipit-source-id:
beb92dc4ee8c82f4c8259c081dd72e477fe7a9d0
Shen Li [Tue, 9 Apr 2019 15:01:18 +0000 (08:01 -0700)]
Fix BN tests for >= 8 GPU test environments (#19049)
Summary:
DDP does not support replicating BN layers within a process. Existing BN tests fail if the test environment has more than 8 GPUs. This is fixed by explicitly setting each process to use a single replica.
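A sketch of the per-process setup this implies (single process with gloo here so it stays self-contained; a real run launches one process per GPU):
```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29502")
dist.init_process_group("gloo", rank=0, world_size=1)

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8))
if torch.cuda.is_available():
    torch.cuda.set_device(0)
    # One replica per process: a single device id, so DDP does not
    # replicate the BN layers within the process.
    ddp = DDP(model.cuda(0), device_ids=[0])
else:
    ddp = DDP(model)
```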
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19049
Differential Revision:
D14845286
Pulled By: mrshenli
fbshipit-source-id:
937dda5081d415ece48b21f2781b6b4e008dd42f