Xiaomeng Yang [Tue, 26 Mar 2019 19:13:51 +0000 (12:13 -0700)]
Move math::Axpy function to elementwise lib (#18316)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18316
Move math::Axpy function to elementwise lib
i-am-not-moving-c2-to-c10
Reviewed By: houseroad
Differential Revision:
D14574697
fbshipit-source-id:
7cfbb2da295c8966c5328bd6b577cce2638eea62
Gu, Jinghui [Tue, 26 Mar 2019 17:52:52 +0000 (10:52 -0700)]
Upgrade mkldnn to version 0.18.1 (#18463)
Summary:
Upgrade mkldnn to version 0.18.1
Fix the MKLDNN build issue if linking with MKL 2019.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18463
Differential Revision:
D14620228
Pulled By: ezyang
fbshipit-source-id:
136074ad0e4631e1dde4ca1b0af4ee6a41e50913
Pat Mellon [Tue, 26 Mar 2019 17:25:01 +0000 (10:25 -0700)]
Add Google tag (#17690)
Summary:
This PR adds a Global Site Tag to the site.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17690
Differential Revision:
D14620816
Pulled By: zou3519
fbshipit-source-id:
c02407881ce08340289123f5508f92381744e8e3
Gemfield [Tue, 26 Mar 2019 17:14:11 +0000 (10:14 -0700)]
remove redundant --install_dir parameter in GEN_COMMAND (#18473)
Summary:
Remove the redundant --install_dir parameter in GEN_COMMAND, since --install_dir is already contained in ${GEN_COMMAND}.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18473
Differential Revision:
D14620193
Pulled By: ezyang
fbshipit-source-id:
ee9953b5d055f4b8beb3557f95f6539051b0028a
Iurii Zdebskyi [Tue, 26 Mar 2019 16:55:50 +0000 (09:55 -0700)]
Resolving comments from Bool Tensor for CPU PR (#18165)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18165
ghimport-source-id:
55cb3fb63a25c2faab1725b4ec14c688bf45bd38
Stack from [ghstack](https://github.com/ezyang/ghstack):
* #18166 Bool Tensor for CUDA
* **#18165 Resolved comments from Bool Tensor for CPU PR**
-------
This is a follow up PR that resolves some additional feedback on one of the previous Bool Tensor PRs.
gchanan, here is a list of almost all the comments from the original PR with respective fixes and replies:
**[utils/python_scalars.h]** why is this converting from uint8_t and not bool? (comment?)
When I was adding this, I was testing by creating a tensor and then calling its .tolist(). It worked equally well for bool and uint8_t, so I left uint8_t as I thought it made more sense since we are calling PyBool_FromLong. Changing it to bool.
**[ATen/Dispatch.h]** better name?
Fixed.
**[test/test_torch.py]** what about other factories, such as full? (and more).
There is a test that goes through the factory methods - test_tensor_factories_empty. I added some bool cases above it and added a comment that once CUDA is done, I will unite them so it iterates not just between CUDA and CPU but also all types. Adding all bool cases now. Will unite in the CUDA PR.
**[generic/THTensorMath.h]** any changes in this file actually needed?
Bad merge. Fixed.
**[TH/THTensor.h]** this generates code for random, clampedRandom, and cappedRandom -- do we have tests for all of these with bool?
Added
**[c10/core/ScalarType.h]** I'm not very confident about the lack of Bool here -- can you look at the call sites and see what makes sense to do here?
Added bool to the macro and created a similar one without it for a single case, which otherwise fails the build with errors:
_./torch/csrc/jit/symbolic_variable.h:79:20: error: ambiguous overload for ‘operator*’ (operand types are ‘const torch::jit::SymbolicVariable’ and ‘torch::jit::Value*’)
return (*this) * insertConstant(rhs);_
Differential Revision:
D14605105
fbshipit-source-id:
abf82d50e8f8c50b386545ac068268651b28496d
Edward Yang [Tue, 26 Mar 2019 16:42:41 +0000 (09:42 -0700)]
Unify cudaGetDeviceCount implementations. (#18445)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18445
ghimport-source-id:
30d018737bf6989bc68b7e3676f44e0ca6141fde
Stack from [ghstack](https://github.com/ezyang/ghstack):
* #18242 Test running a CUDA build on CPU machine.
* **#18445 Unify cudaGetDeviceCount implementations.**
I went about doing this by searching for calls to cudaGetDeviceCount,
and then methodically replacing them with references to c10::cuda::device_count()
or at::cuda::device_count().
There is a point to doing this: the various implementations wildly differed
in their handling of what to do when cudaGetDeviceCount returns an error.
The final standardized behavior is that **all errors are swallowed** and
we return device count of zero. This indirectly fixes running CUDA builds
on CPU, which was broken in #17847.
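As a hedged illustration of the standardized behavior from the Python side (the observable effect, not the C++ code this PR touches):
```python
import torch

# On a CPU-only machine, a CUDA build now reports zero devices instead
# of erroring out, so fallback logic like this keeps working:
device = torch.device("cuda" if torch.cuda.device_count() > 0 else "cpu")
x = torch.ones(2, 2, device=device)
```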
I added 'noexcept' to the 'deviceCount' virtual method on DeviceGuardImpl.
This is a BC-breaking change for anyone inheriting from DeviceGuardImpl
but all you need to do is put 'noexcept' on your method and it is backwards
compatible with older libtorch.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision:
D14612189
fbshipit-source-id:
3c8d186e3dd623c0e27625212c7ce30f75d943cb
Christian Puhrsch [Tue, 26 Mar 2019 16:19:51 +0000 (09:19 -0700)]
Use TensorIterator for unary operations
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18309
Differential Revision:
D14591533
Pulled By: cpuhrsch
fbshipit-source-id:
a3b0788a481bddf1803c9f2d3289263d7364f8d7
vishwakftw [Tue, 26 Mar 2019 14:49:58 +0000 (07:49 -0700)]
Introduce SobolEngine (#10505)
Summary:
`SobolEngine` is a quasi-random sampler used to sample points evenly in the interval [0, 1]. Here we use direction numbers to generate these samples. The maximum supported dimension for the sampler is 1111.
Documentation has been added, and tests have been added based on Balandat's references. The implementation is an optimized / tensorized version of Balandat's Cython implementation as provided in #9332.
This closes #9332.
cc: soumith Balandat
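A minimal usage sketch (the engine lives under `torch.quasirandom`):
```python
import torch

# Draw 5 quasi-random points from the 3-dimensional Sobol sequence;
# each coordinate lies in [0, 1).
engine = torch.quasirandom.SobolEngine(dimension=3)
samples = engine.draw(5)  # tensor of shape (5, 3)
```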
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10505
Reviewed By: zou3519
Differential Revision:
D9330179
Pulled By: ezyang
fbshipit-source-id:
01d5588e765b33b06febe99348f14d1e7fe8e55d
Wanchao Liang [Tue, 26 Mar 2019 06:44:15 +0000 (23:44 -0700)]
fix str of autogradzero
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18442
Differential Revision:
D14602880
Pulled By: wanchaol
fbshipit-source-id:
ebd00f9bb5f1f7e33964c10d8c9f165b7bb4985f
eellison [Tue, 26 Mar 2019 04:48:11 +0000 (21:48 -0700)]
Optimize boolean expressions & unwraps (#18259)
Summary:
Simplify or eliminate boolean and/or expressions, optimize unwrapping a value that cannot be None, and optimize using `is` with a None and a non-None value.
Since the peephole optimizer now introduces constants, I added another constant propagation pass after running it.
Previously I had a PR that did this & optimized shape ops - I will add the shape optimizations in a separate PR.
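A hedged sketch of the kind of code these passes target (hypothetical example, not taken from the PR):
```python
import torch
from typing import Optional

@torch.jit.script
def f(x: torch.Tensor, y: Optional[torch.Tensor]):
    if y is None:
        y = x
    # Past this point y cannot be None, so a later `y is None` test can be
    # constant-folded to False and the dead branch pruned away.
    if y is None:
        return x
    return x + y
```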
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18259
Differential Revision:
D14602749
Pulled By: eellison
fbshipit-source-id:
1c3f5a67067d8dfdf55d7b78dcb616472ea8a267
Junjie Bai [Tue, 26 Mar 2019 03:50:49 +0000 (20:50 -0700)]
Fix python resolution in caffe2 CI scripts
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18417
Differential Revision:
D14612704
Pulled By: bddppq
fbshipit-source-id:
0942048a9c3990afc50ce73c1fa1005c4d4097aa
Xiang Gao [Tue, 26 Mar 2019 03:36:44 +0000 (20:36 -0700)]
Support dim=None for argmax and argmin (#18264)
Summary:
Fixes: https://github.com/pytorch/pytorch/issues/18263
cc: houseroad
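A short usage sketch of the two call forms:
```python
import torch

x = torch.tensor([[1, 5], [3, 2]])
torch.argmax(x)         # dim=None: argmax over the flattened tensor -> tensor(1)
torch.argmax(x, dim=0)  # per-column argmax -> tensor([1, 0])
```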
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18264
Reviewed By: ezyang
Differential Revision:
D14559234
Pulled By: houseroad
fbshipit-source-id:
c5b8623752d6c6af41c6d715fd9585a65294868d
Xiang Gao [Tue, 26 Mar 2019 03:30:33 +0000 (20:30 -0700)]
Add return_counts to torch.unique (#18391)
Summary:
Fixes: https://github.com/pytorch/pytorch/issues/12598
This PR was originally authored by ptrblck at https://github.com/pytorch/pytorch/pull/15495, but since there was no update for months after the requested changes, I cloned that branch and resolved the code reviews here. Hope everything is good now. In particular, the implementation of counts was changed from ptrblck's original algorithm to the one ngimel suggested, i.e. using `unique_by_key` and `adjacent_difference`.
The current implementation of `_unique_dim` is VERY slow for computing the inverse index and counts, see https://github.com/pytorch/pytorch/issues/18405. I will refactor `_unique_dim` in a later PR. For this PR, please allow me to keep the implementation as is.
cc: ptrblck ezyang ngimel colesbury
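A minimal usage sketch of the new flag:
```python
import torch

x = torch.tensor([1, 2, 2, 3, 3, 3])
values, counts = torch.unique(x, return_counts=True)
# values -> tensor([1, 2, 3]); counts -> tensor([1, 2, 3])
```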
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18391
Reviewed By: soumith
Differential Revision:
D14605905
Pulled By: VitalyFedyunin
fbshipit-source-id:
555f5a12a8e28c38b10dfccf1b6bb16c030bfdce
Natalia Gimelshein [Tue, 26 Mar 2019 02:57:06 +0000 (19:57 -0700)]
change dropout lowering in symbolic_script (#18375)
Summary:
Dropout is now eligible for fusion, and generated fused kernels are just as fast as dropout in ATen. Change its lowering in symbolic script so that it can actually be fused. It is still special-cased for CUDA, because without fusion this lowering is less efficient than the current one (bernoulli_ * input). Testing is covered by the test case that ailzhang added (test_dropout_cuda).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18375
Differential Revision:
D14611938
Pulled By: soumith
fbshipit-source-id:
11b18f4784e6c9265e382a8f8deca7add8df3b37
Gao, Xiang [Tue, 26 Mar 2019 02:54:27 +0000 (19:54 -0700)]
Add torch.version.git_version (#18299)
Summary:
Fixes: https://github.com/pytorch/pytorch/issues/18293
cc: colesbury
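Usage is simply:
```python
import torch

# The full git commit hash the binary was built from.
print(torch.version.git_version)
```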
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18299
Differential Revision:
D14611972
Pulled By: soumith
fbshipit-source-id:
cdb48ef37c8869713a9a43ea0da08e1bed9279a2
Xiang Gao [Tue, 26 Mar 2019 02:42:01 +0000 (19:42 -0700)]
Change deprecated IntList to IntArrayRef
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18262
Differential Revision:
D14612244
Pulled By: ezyang
fbshipit-source-id:
5d21c7b94d64104fececcb15c6d38d9bd2a1fc70
Tongzhou Wang [Tue, 26 Mar 2019 02:17:00 +0000 (19:17 -0700)]
Enable printing to stderr for test_proper_exit for better debugging (#18458)
Summary:
related to https://github.com/pytorch/pytorch/issues/16608
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18458
Differential Revision:
D14611718
Pulled By: soumith
fbshipit-source-id:
6dc903ff2d32b9c3b76470869d1f4e9a67f706df
Karl Ostmo [Tue, 26 Mar 2019 01:01:39 +0000 (18:01 -0700)]
Don't require pygraphviz for regenerate.sh (#17485)
Summary:
closes #17336
Do not overwrite config.yml if script throws an error
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17485
Differential Revision:
D14604388
Pulled By: kostmo
fbshipit-source-id:
5024545e3a8711abdbc0800911c766929dbca196
Mikhail Zolotukhin [Tue, 26 Mar 2019 00:39:01 +0000 (17:39 -0700)]
Add quant-passes stubs. (#18151)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18151
ghimport-source-id:
7d12462971bdf3e5e26a3f150f1fcad05bba1a15
Stack from [ghstack](https://github.com/ezyang/ghstack):
* #18152 Initial implementation of InsertObserverNodes pass.
* **#18151 Add quant-passes stubs.**
gh-metadata: pytorch pytorch 18149 gh/zolotukhinm@gmail.com/1/head
Differential Revision:
D14584224
fbshipit-source-id:
b3d0b5ff797160d5ad23f91f732e627b0129086c
Duc Ngo [Mon, 25 Mar 2019 23:55:30 +0000 (16:55 -0700)]
caffe2 - support flaky operator tests for caffe2 build (#18155)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18155
- Make a Python decorator caffe2_flaky for caffe2 operator unit tests.
- The environment variable CAFFE2_RUN_FLAKY_TESTS is now used to enable flaky-test mode.
During a test run:
- If flaky-test mode is on, only flaky tests are run.
- If flaky-test mode is off, only non-flaky tests are run.
Mark ctc_beam_search_decoder_op_test as flaky.
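A hedged sketch of how such a gating decorator can be built (the real `caffe2_flaky` lives in caffe2's test utilities; the names here are illustrative):
```python
import os
import unittest

RUN_FLAKY = os.environ.get("CAFFE2_RUN_FLAKY_TESTS") == "1"

def caffe2_flaky(test_item):
    # When flaky-test mode is off, decorated (flaky) tests are skipped;
    # skipping the non-flaky tests in flaky mode is the runner's job.
    return unittest.skipUnless(
        RUN_FLAKY, "flaky test; set CAFFE2_RUN_FLAKY_TESTS=1 to run"
    )(test_item)
```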
Reviewed By: ezyang, salexspb
Differential Revision:
D14468816
fbshipit-source-id:
dceb4a48daeb5437ad9cc714bef3343e9761f3a4
iurii zdebskyi [Mon, 25 Mar 2019 22:48:11 +0000 (15:48 -0700)]
Remove unused th_scalar_type (#18390)
Summary:
th_scalar_type seems to be unused anywhere, so it can be removed.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18390
Reviewed By: ezyang
Differential Revision:
D14591374
Pulled By: izdeby
fbshipit-source-id:
2113aa81229cdfdfb8dc5c951ea6dea3725b8582
Ivan Ogasawara [Mon, 25 Mar 2019 21:31:43 +0000 (14:31 -0700)]
Porting CPU UpSample functions to ATen (#18020)
Summary:
This PR partially resolves #10482.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18020
Differential Revision:
D14598029
Pulled By: ezyang
fbshipit-source-id:
513e7c6438ab6d5dc3f43241e7cb724744e9a287
nihui [Mon, 25 Mar 2019 18:55:52 +0000 (11:55 -0700)]
Fix caffe2 build with BLAS=OpenBLAS (#18422)
Summary:
g++ complains about failing to find the declarations of the cblas_sscal and cblas_dscal BLAS functions.
Let's fix it :)
Fedora 29, gcc 8.3.1, OpenBLAS 0.3.5.
Built with cmake -DBLAS=OpenBLAS ..
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18422
Differential Revision:
D14598977
Pulled By: soumith
fbshipit-source-id:
bde77bfb359d2ff38226401caeed78c114ef7468
Wanchao Liang [Mon, 25 Mar 2019 18:02:17 +0000 (11:02 -0700)]
Add addcmul, lerp to fuser, enable scalar->float specialization in symbolic script (#18081)
Summary:
This PR did two things:
1. Enable scalar->float specialization in symbolic script, so AD formulas that contain a scalar in the schema should write `float` instead.
2. add addcmul, lerp to AD and fuser.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18081
Differential Revision:
D14490493
Pulled By: wanchaol
fbshipit-source-id:
b3b86d960d5f051b30733bc908b19786111cdaa4
Edward Yang [Mon, 25 Mar 2019 17:22:54 +0000 (10:22 -0700)]
Add ability to query if built with CUDA and MKL-DNN. (#18362)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18362
ghimport-source-id:
374b7ab97e2d6a894368007133201f510539296f
Stack from [ghstack](https://github.com/ezyang/ghstack):
* #18242 Test running a CUDA build on CPU machine.
* **#18362 Add ability to query if built with CUDA and MKL-DNN.**
Fixes #18108.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision:
D14584430
fbshipit-source-id:
7605a1ac4e8f2a7c70d52e5a43ad7f03f0457473
svcscm [Mon, 25 Mar 2019 17:22:22 +0000 (10:22 -0700)]
Updating submodules
Reviewed By: yns88
fbshipit-source-id:
b2c5eb7dfa9048e399461c00d1103e945a30a5bc
Vitaly Fedyunin [Mon, 25 Mar 2019 17:18:29 +0000 (10:18 -0700)]
Implement reference counting for shared IPC CUDA tensors (#16854)
Summary:
This is to fix #16141 and similar issues.
The idea is to track a reference to every shared CUDA Storage and deallocate memory only after a consumer process deallocates received Storage.
ezyang Done with cleanup. Same (insignificantly better) performance as the file-per-share solution, but handles millions of shared tensors easily. Note: documentation in progress.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16854
Differential Revision:
D13994490
Pulled By: VitalyFedyunin
fbshipit-source-id:
565148ec3ac4fafb32d37fde0486b325bed6fbd1
Gregory Chanan [Mon, 25 Mar 2019 15:53:42 +0000 (08:53 -0700)]
Don't segfault on trying to get data_ptr of sparse tensor. (#18347)
Summary:
Also asserts in storage_initialized that there is a storage.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18347
Differential Revision:
D14582028
Pulled By: gchanan
fbshipit-source-id:
df3f5d181188f39e361839169fd054539c3b2839
Gregory Chanan [Mon, 25 Mar 2019 15:38:11 +0000 (08:38 -0700)]
Assert tensor isn't sparse in enforce_invariants. (#18338)
Summary:
There's no reason we can't check this, but I'm punting on implementing it for now. It currently segfaults, so this is an improvement.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18338
Differential Revision:
D14580308
Pulled By: gchanan
fbshipit-source-id:
44d4cafeab12e1beeb3453a2d4068d221c2e9c4f
Sacha [Mon, 25 Mar 2019 14:21:37 +0000 (07:21 -0700)]
Only look for Caffe2 package when shared (#18421)
Summary:
Previously it would look for the Config even if it was not written.
Fixes #18419
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18421
Differential Revision:
D14597139
Pulled By: ezyang
fbshipit-source-id:
c212cbf5dc91564c12d9d07e507c8285e11c6bdf
Summer Deng [Mon, 25 Mar 2019 11:18:09 +0000 (04:18 -0700)]
Add more options to the quantization model exporter (#18383)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18383
Add command line options for different quantization schemes.
Reviewed By: amylittleyang
Differential Revision:
D14476862
fbshipit-source-id:
37fbf5b4c1c550121eae313f5a71d703a0a87f0f
Thomas Viehmann [Mon, 25 Mar 2019 04:26:45 +0000 (21:26 -0700)]
Revert "Specialize optional tensor inputs to graphs in the JIT (#18360)" (#18411)
Summary:
This reverts commit
7cc7ed1322405ba3c627b9c5661a330f92c4183d.
I think it's better to sort out the issues raised in #18407 first. I'm sorry for not stopping it earlier.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18411
Differential Revision:
D14594937
Pulled By: soumith
fbshipit-source-id:
3c90b7fa7694e2f59e55607acecde4a47af801ea
Gao, Xiang [Mon, 25 Mar 2019 02:40:08 +0000 (19:40 -0700)]
Fix deprecated: type() -> scalar_type() (#18406)
Summary:
Sorry for not sending these fixes in a single PR. I found this compiler warning when I was working on something else, and I just went to GitHub and modified the file directly for convenience...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18406
Differential Revision:
D14594180
Pulled By: soumith
fbshipit-source-id:
92f48513bc62fbe2c67c759d68830a973296e43b
Gao, Xiang [Mon, 25 Mar 2019 02:24:08 +0000 (19:24 -0700)]
Fix deprecated: type() -> scalar_type()
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18394
Differential Revision:
D14593890
Pulled By: soumith
fbshipit-source-id:
92b9a8c22008341c0cc3b7a721bef1973c528daf
mc-robinson [Mon, 25 Mar 2019 02:17:00 +0000 (19:17 -0700)]
Added tensor size warning to F.mse_loss() (#18349)
Summary:
To address the issue of broadcasting giving the wrong result in `nn.MSELoss()`, as mentioned in https://github.com/pytorch/pytorch/issues/16045. In particular, the issue often arises when computing the loss between tensors with shapes (n, 1) and (n,).
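A small sketch of the pitfall the warning targets:
```python
import torch
import torch.nn.functional as F

pred = torch.randn(4, 1)  # shape (n, 1)
target = torch.randn(4)   # shape (n,)

# Broadcasting expands both operands to (4, 4), so the loss is averaged
# over n*n element pairs instead of n; the new warning flags this.
loss_broadcast = F.mse_loss(pred, target)

# Matching the shapes avoids the broadcast entirely.
loss = F.mse_loss(pred.squeeze(1), target)
```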
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18349
Differential Revision:
D14594176
Pulled By: soumith
fbshipit-source-id:
f23ae68a4bf42f3554ad7678a314ba2c7532a6db
Elias Ellison [Sun, 24 Mar 2019 21:28:22 +0000 (14:28 -0700)]
Fix For Requires Grad Infinite Loop (#18361)
Summary:
Previously, we would keep re-running the requires-grad analysis on a loop body when the outputs and inputs disagreed. This adds a check so that we don't continue running if the results haven't changed since the last run.
Fix for https://github.com/pytorch/pytorch/issues/18320
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18361
Differential Revision:
D14584332
Pulled By: eellison
fbshipit-source-id:
696b225f80a2036318540946428b525985a9e735
Soumith Chintala [Sun, 24 Mar 2019 20:11:20 +0000 (13:11 -0700)]
update magma instructions (#18410)
Summary:
fixes https://github.com/pytorch/pytorch/issues/18389
cc: stas00
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18410
Differential Revision:
D14594198
Pulled By: soumith
fbshipit-source-id:
fb46ef77a36c90ad95e47f7066f5d32aa1f1370f
Iurii Zdebskyi [Sun, 24 Mar 2019 15:17:34 +0000 (08:17 -0700)]
Removed some dead code (#18201)
Summary:
Removed some dead code.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18201
Differential Revision:
D14555251
Pulled By: izdeby
fbshipit-source-id:
f49640133ef4ae1b0306f7cec6655f23869cc6e7
Thomas Viehmann [Sun, 24 Mar 2019 05:54:36 +0000 (22:54 -0700)]
Specialize optional tensor inputs to graphs in the JIT (#18360)
Summary:
This specializes optional tensor inputs to either a DimensionedTensorType or, when None is passed,
UndefinedTensor (aka AutogradZeroTensorType).
This works because we already have different specs and thus separate plans for the two cases.
It enhances the shape analysis - because now unwrapped optional tensors will have a DimensionedTensorType with the appropriate shape, requires_grad, etc.
Also, when combined with "if-pruning" (which I understand #18259 works towards), we actually get much nicer concrete graphs, too.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18360
Differential Revision:
D14590577
Pulled By: soumith
fbshipit-source-id:
cac204a506d1d38b15703cbcc67a6b75fd4979f4
Will Feng [Sat, 23 Mar 2019 19:47:15 +0000 (12:47 -0700)]
Move pyobj_ to TensorImpl (#18225)
Summary:
Currently, `THPVariable_Wrap(…)` and `THPVariable_NewWithVar(…)` depend on the existence of `pyobj_` in the autograd metadata of a Variable to convert the Variable to a Python tensor. However, after the Variable/Tensor merge, there will be Variables that don't contain autograd metadata, and to allow the conversion from non-autograd-meta Variable to a Python tensor we need to store the `pyobj_` outside of autograd metadata and in a place where it will always be available.
This PR makes it possible by moving `pyobj_` into TensorImpl, so that `THPVariable_Wrap(…)` and `THPVariable_NewWithVar(…)` can always access a Variable's `pyobj_` and convert the Variable to a Python tensor.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18225
Differential Revision:
D14562616
Pulled By: yf225
fbshipit-source-id:
18d4aaace70eee6120abaf9276036d1f8f51b18d
Xiang Gao [Sat, 23 Mar 2019 17:01:28 +0000 (10:01 -0700)]
Fix deprecated scalar type in ATen/native/Distributions.cpp
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18265
Differential Revision:
D14577543
Pulled By: ezyang
fbshipit-source-id:
36674530b32366c51835e4073d7ba23d455d2fda
Edward Yang [Sat, 23 Mar 2019 16:33:40 +0000 (09:33 -0700)]
Revert
D14446895: [C2] Implement rotated generate_proposals_op without opencv dependency (~2x faster)
Differential Revision:
D14446895
Original commit changeset:
847f2443e645
fbshipit-source-id:
fc6ab5ee59e027f125f5ab0f7ee51ad7db37d4a4
Michael Suo [Sat, 23 Mar 2019 09:47:57 +0000 (02:47 -0700)]
Revert
D14584266: [pytorch][PR] Better error message for tensor with grad as constant in tracing
Differential Revision:
D14584266
Original commit changeset:
4e7850dadc78
fbshipit-source-id:
3bb3b5006e469edff984c16e0ff8d5dac2862d88
Elias Ellison [Sat, 23 Mar 2019 03:13:02 +0000 (20:13 -0700)]
Better error when module attr is used (#18164)
Summary:
Adds a suggestion to add to __constants__ when a torch.nn.Module attr is accessed
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18164
Differential Revision:
D14580060
Pulled By: eellison
fbshipit-source-id:
0c5adc21d7341a5691d4b45930947cb1ba84c8e8
Will Feng [Sat, 23 Mar 2019 02:25:58 +0000 (19:25 -0700)]
Fix incorrect sparse add behavior when the sparse tensor has non-contiguous values (#18179)
Summary:
Currently, this code gives an incorrect result:
```python
import torch
indices = torch.tensor([[7, 1, 3]])
values = torch.tensor([[1., 1., 1.],
                       [1., 1., 1.],
                       [1., 1., 1.]])
x = torch.sparse_coo_tensor(indices, values, size=(10, 3))
values = torch.tensor(1.).expand(3, 3)
y = torch.sparse_coo_tensor(indices, values, size=(10, 3))
z = x + y
print(z)
# tensor(indices=tensor([[7, 1, 3]]),
#        values=tensor([[2., 1., 1.],
#                       [1., 1., 1.],
#                       [1., 1., 1.]]),
#        size=(10, 3), nnz=3, layout=torch.sparse_coo)
# (incorrect: every value should be 2.)
```
This PR fixes the bug by adding special handling for sparse tensors with non-contiguous values in the addition function (specifically, by cat'ing the indices and values together).
This PR closes https://github.com/pytorch/pytorch/issues/17950 and https://github.com/pytorch/pytorch/issues/17919.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18179
Reviewed By: ezyang
Differential Revision:
D14569591
Pulled By: yf225
fbshipit-source-id:
f5a14c4a31337fc95eab64596212066b4fb18b1a
Jing Huang [Sat, 23 Mar 2019 01:12:27 +0000 (18:12 -0700)]
Implement rotated generate_proposals_op without opencv dependency (1.8x faster) (#18010)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18010
[C2] Implement rotated generate_proposals_op without opencv dependency.
Reviewed By: newstzpz
Differential Revision:
D14446895
fbshipit-source-id:
847f2443e645f8cae1327dfbaa111c48875ca9be
Mikhail Zolotukhin [Sat, 23 Mar 2019 00:00:10 +0000 (17:00 -0700)]
Remove empty file (actual file_check.cpp resides in torch/csrc/jit/testing) (#18303)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18303
ghimport-source-id:
66f4402075b123e36c6ffdf806b7c93187a1a58a
Stack from [ghstack](https://github.com/ezyang/ghstack):
* #18307 Convert test_recursive_cse to use Filecheck inline annotations.
* #18306 [Filecheck] Add a feature to parse check annotations from string.
* #18305 Add python bindings for parseIR.
* **#18303 Remove empty file (actual file_check.cpp resides in torch/csrc/jit/testing)**
Differential Revision:
D14586003
fbshipit-source-id:
a13e57bd4302e4d3f06198068d525de25e2aa8b3
Michael Suo [Fri, 22 Mar 2019 23:24:36 +0000 (16:24 -0700)]
Turn script_type_parser into a class (#18211)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18211
ghimport-source-id:
73b81e9ec631937b14db1da10991831788a6894b
Stack from [ghstack](https://github.com/ezyang/ghstack):
* #18296 [jit] Add namespacing for ScriptClasses
* #18284 [jit] make test module hook use save/load
* **#18211 [jit] Turn script_type_parser into a class**
* #18148 [jit] python interop for script classes
If we are namespacing classes, the type parser will need to carry around
some state about which namespaces to look in. This PR just wraps it in a
class in preparation.
Also, subscriptToType can no longer be static, since parseTypeFromExpr
may give different results depending on the namespaces available, so
it's been made a regular function instead of a static map lookup.
Reviewed By: eellison
Differential Revision:
D14581128
fbshipit-source-id:
711315472ccde1920abf9fdb5a871ac27fb86787
Michael Suo [Fri, 22 Mar 2019 23:24:36 +0000 (16:24 -0700)]
python interop for script classes (#18148)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18148
ghimport-source-id:
40a9d745dc9aeba53d098743323fcbd50ca65137
Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18148 py interop**
Support for converting classes across the Python–TorchScript boundary. Like other TorchScript values, ScriptClasses are native Python values when used in Python and IValues when used in TorchScript.
Notably, there is a copy across this boundary, which will be surprising to users who expect standard Python reference semantics. I have some ideas for fixing that, but it's a more involved process.
Reviewed By: jamesr66a
Differential Revision:
D14526259
fbshipit-source-id:
5916e3032488a42dc7da756c1826d7c040a21ebd
Elias Ellison [Fri, 22 Mar 2019 22:25:40 +0000 (15:25 -0700)]
Better error message for tensor with grad as constant in tracing (#18298)
Summary:
Fix for https://github.com/pytorch/pytorch/issues/17583
There's an unrelated issue right now causing a segfault when printing tensors, so that might have to be fixed first for this to land.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18298
Differential Revision:
D14584266
Pulled By: eellison
fbshipit-source-id:
4e7850dadc78ef1e98ad40b9d8adc0fef42acf48
Nikolay Korovaiko [Fri, 22 Mar 2019 22:22:23 +0000 (15:22 -0700)]
Support for basic list comprehensions (#17267)
Summary:
Supports the following syntax:
```
@torch.jit.script
def comp(l):
    # type: (List[float]) -> List[float]
    n = [x * 3 for x in l]
    return n
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17267
Differential Revision:
D14581119
Pulled By: Krovatkin
fbshipit-source-id:
6fd091a8a9ab607386ac58fda6ad88bf8aea380e
Edward Yang [Fri, 22 Mar 2019 21:58:35 +0000 (14:58 -0700)]
Make it possible to trigger XLA/slow tests via commit message. (#18345)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18345
ghimport-source-id:
9649d76bb194866859d62e6ba2a3a265c96ebba5
Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18345 Make it possible to trigger XLA/slow tests via commit message.**
Four variants are supported: `[xla ci] [ci xla] [xla test] [test xla]`; substitute
xla with slow for slow tests.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision:
D14584557
fbshipit-source-id:
fcbfdfb28246823135bb3d3910baae073d16e81d
Sebastian Messmer [Fri, 22 Mar 2019 21:05:50 +0000 (14:05 -0700)]
Avoid refcount when looking up dispatch key
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18294
Reviewed By: ezyang
Differential Revision:
D14512979
fbshipit-source-id:
45e548974f06184c375c2bb8339e3049a4ebd880
Jiakai Liu [Fri, 22 Mar 2019 21:01:41 +0000 (14:01 -0700)]
Fix DCHECK to handle dangling else (#18295)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18295
Replace "if (false)" with "while (false)" which fixes potential dangling else issue as shown in added test case.
Reviewed By: ezyang
Differential Revision:
D14569608
fbshipit-source-id:
407052db9182ce27b7a59841e90fa50d3eca262e
Natalia Gimelshein [Fri, 22 Mar 2019 20:48:59 +0000 (13:48 -0700)]
Allow fusion of float function arguments (#18087)
Summary:
so that functions like `def fn(x, p: float)` can be fused. Fixes #9940 and #11186. Only float (not integer) arguments are fused, to simplify assembling arguments for the fusion launch.
CPU fusion is disabled in CI, so this won't be tested there, but I tested it locally.
cc t-vi, apaszke
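For illustration, a function of the shape that becomes fusable (a sketch, not a test from the PR):
```python
import torch

@torch.jit.script
def fn(x, p: float):
    # The float scalar p can now be handed to the fused kernel's argument
    # list instead of blocking fusion of this pointwise expression.
    return x * p + 1.0
```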
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18087
Differential Revision:
D14581206
Pulled By: wanchaol
fbshipit-source-id:
ccb0cf79b1751706f9b2cdf1715115eae5a39fb6
Thomas Viehmann [Fri, 22 Mar 2019 20:31:37 +0000 (13:31 -0700)]
Fix error reporting in NVRTC use of the fuser (#18327)
Summary:
Two functions were not directed at NVRTC.
It's a bit hard to test this, as the fuser usually produces correct code - unless I try to hack on it. :)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18327
Differential Revision:
D14579285
Pulled By: soumith
fbshipit-source-id:
1be7ba461cc473d514ba619507742a47d4d7c97e
Ailing Zhang [Fri, 22 Mar 2019 20:22:52 +0000 (13:22 -0700)]
Using sqrt for better precision in cosine_similarity (#18250)
Summary:
Addresses a comment in #18168.
Testing in CI...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18250
Differential Revision:
D14568601
Pulled By: ailzhang
fbshipit-source-id:
39fbbdb08743b53fa665c7e88e4750cbe0976ec7
Jianyu Huang [Fri, 22 Mar 2019 19:28:04 +0000 (12:28 -0700)]
Fix alignment issues for Fake BFP16 fp32 -> bfp16 rounding routines (#18321)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18321
As title.
Reviewed By: jspark1105
Differential Revision:
D14575512
fbshipit-source-id:
0e33cdab54b1aef8b67f0b4c366692c5dbdf631d
Dmytro Dzhulgakov [Fri, 22 Mar 2019 19:10:19 +0000 (12:10 -0700)]
Untangle internal build python and cpp dependencies
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18326
Reviewed By: ezyang
Differential Revision:
D14576294
fbshipit-source-id:
186ce1e3d026d962b7386f861eddf093f583a878
Alexander Sidorov [Fri, 22 Mar 2019 18:49:04 +0000 (11:49 -0700)]
Caffe2: crash op (#18207)
Summary:
This is handy when testing various core-dump-related things. If in the
future we want to unit test our gdb debugger extensions, we can use
this op to generate a core dump for us within a unit test.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18207
Differential Revision:
D14482186
Pulled By: salexspb
fbshipit-source-id:
39a9fffbdd4bd083597f544d1c783a82cf023a89
Duc Ngo [Fri, 22 Mar 2019 18:14:40 +0000 (11:14 -0700)]
caffe2 - Util to cleanup external inputs and outputs from a NetDef (#18194)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18194
Add a util method to clean up external inputs and outputs from a NetDef.
The following conditions will be met after the modification:
- No duplicate external inputs
- No duplicate external outputs
- Going through list of ops in order, all op inputs must be outputs
from other ops, or registered as external inputs.
- All external outputs must be outputs of some operators.
Reviewed By: ZolotukhinM
Differential Revision:
D14528589
fbshipit-source-id:
c8d82fda1946aa3696abcbec869a4a8bb22f09b6
Dmytro Dzhulgakov [Fri, 22 Mar 2019 18:11:16 +0000 (11:11 -0700)]
End to end hack to call server side Caffe2 ops (#18267)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18267
Motivation: we don't actually want to use it for real under any circumstances. This is an idea to unblock our internal progress and parallelize workstreams. We can easily define schemas for all ops in question and implement forwarding to C2 ops, which is NOT going to be performant. Then several things can be happening in parallel:
* move code of ops outside of C2 ops that depend on protobuf into c10
* development of optimization/fusion passes
* building python-level wrappers with clean API
* improving perf
This demonstrates Relu, quant, and dequant. It seems to cover all the use cases necessary (maybe except weight prepacking). Ideally I'd demonstrate Conv, but I will get to it later in a separate PR (contributions welcome).
Reviewed By: ezyang
Differential Revision:
D14531232
fbshipit-source-id:
4cd4a71ae0cb373c6c0e81f965c442b82a1b4069
Bilge Acun [Fri, 22 Mar 2019 16:51:27 +0000 (09:51 -0700)]
Optimize MomentumSGDUpdate maximum block size and make it templated
Summary: Removing the maximum number of blocks limit from the operator and making the nesterov parameter templated to remove branching.
Reviewed By: BIT-silence
Differential Revision:
D14567003
fbshipit-source-id:
394c2039ee214adc6ccd2e562e4e9563d307131f
Edward Yang [Fri, 22 Mar 2019 14:46:50 +0000 (07:46 -0700)]
Add test for #17271 (torch.exp incorrect for 2**31 size tensor) (#18292)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18292
ghimport-source-id:
a3e96584db0eef7b6202a1211808f9f6e59dd529
Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18292 Add test for #17271 (torch.exp incorrect for 2**31 size tensor)**
* #18291 Correctly call superclass setUp in TestCase subclasses.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision:
D14567642
fbshipit-source-id:
c60ee7597a86f5d2c5c0b72cb106f17815950427
Edward Yang [Fri, 22 Mar 2019 14:43:40 +0000 (07:43 -0700)]
Correctly call superclass setUp in TestCase subclasses. (#18291)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18291
ghimport-source-id:
d6e95e899bd320407967df41435801e54864ba62
Stack from [ghstack](https://github.com/ezyang/ghstack):
* #18292 Add test for #17271 (torch.exp incorrect for 2**31 size tensor)
* **#18291 Correctly call superclass setUp in TestCase subclasses.**
This makes PYTORCH_TEST_SKIP_FAST work correctly for more
tests, reducing the wasted testing effort on our slow_test job.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision:
D14567643
fbshipit-source-id:
40cf1d6556e0dd0a0550ff3d9ffed8b6000f8191
Gerard Goossen [Fri, 22 Mar 2019 13:33:24 +0000 (06:33 -0700)]
Verify def before inferring tensor (#18129)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18129
A lot of tensor inference functions assume the operator passes the schema,
so call Verify to make sure this is actually the case.
I created a diff before to add checking in Concat (https://github.com/pytorch/pytorch/pull/17110), but I encountered a lot more places where this is assumed (for example, ElementwiseOpShapeInference).
Reviewed By: mdschatz
Differential Revision:
D14503933
fbshipit-source-id:
cf0097b8c3e4beb1cded6b61e092a6adee4b8fcb
Jongsoo Park [Fri, 22 Mar 2019 07:49:11 +0000 (00:49 -0700)]
add more Python interface functions to make quantization simpler (#18246)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18246
Simplifies the histogram collection and quantization process.
Histogram collection before this diff was something like this:
```
from caffe2.quantization.server import dnnlowp_pybind11
...
dnnlowp_pybind11.ObserveHistogramOfOutput(hist_file)
for ...:
    workspace.RunNet(predict_net)
dnnlowp_pybind11.ClearNetObservers()  # This triggers the Stop function in the observer to dump out the histogram file, but it can have the unintended consequence of also clearing all the other useful observers we attached
```
After this diff, we can:
```
workspace.CreateNet(predict_net)  # Note: we need to create the net so there is a net to attach the observer to
histogram_observer = dnnlowp_pybind11.AddHistogramObserver(predict_net, hist_file)
for ...:
    workspace.RunNet(predict_net)
predict_net.RemoveObserver(histogram_observer)
```
Choosing quantization parameters of weights before this diff was something like this:
```
dnnlowp_pybind11.ObserveHistogramOfOutput(weight_hist_file)
workspace.RunNetOnce(init_net)
dnnlowp_pybind11.ClearNetObservers() # Has same issue as the histogram collection example above
dnnlowp_pybind11.RegisterQuantizationParamsWithHistogram(
    weight_hist_file, is_weight=True, qparams_output_file_name=qparams_file
)
workspace.CreateNet(init_net, overwrite=True)
dnnlowp_pybind11.ClearNetObservers()
logger.info("Loading quantization params from {}".format(qparams_file))
blobs_to_qparams = {}
with open(qparams_file) as f:
    lines = f.readlines()
for line in lines:
    op_id, op_type, output_id, tensor_name, mini, maxi, scale, zero_point, precision = (
        line.split()
    )
    op_id = int(op_id)
    output_id = int(output_id)
    op = net.Proto().op[op_id]
    if op_type != op.type or op.output[output_id] != tensor_name:
        print(
            "Corrupt qparams file {} {} {} {} {}".format(
                qparams_file, op_type, op.type, op.output[output_id], tensor_name
            )
        )
    blobs_to_qparams[tensor_name] = QuantizationParam(float(scale), int(zero_point))
```
After this diff, this can be simplified to:
```
blobs_to_qparams = {}
for op in init_net.Proto().op:
    for output in op.output:
        scale, zero_point = dnnlowp_pybind11.ChooseQuantizationParams(output)
        blobs_to_qparams[output] = QuantizationParam(scale, zero_point)
```
Reviewed By: dskhudia
Differential Revision:
D14544694
fbshipit-source-id:
4fd06cd63256201e2e9d15c39f503138d1be53c2
Weiyi Zheng [Fri, 22 Mar 2019 07:08:50 +0000 (00:08 -0700)]
add fbgemm fp16 (fbfcpacked) support, add global_init_net in predictor_export_meta (#18257)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18257
Support adding ops in global_init_net, because pred_init_net is per-thread and just doesn't cut it.
Reviewed By: jspark1105
Differential Revision:
D14552695
fbshipit-source-id:
53dd44c84ad019019ab9f35fc04d076b7f941ddc
Lu Fang [Fri, 22 Mar 2019 07:07:57 +0000 (00:07 -0700)]
update of fbcode/onnx to c05f2ae412daf8fd64136ca354b97ccf73e0ea6c (#18285)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18285
Previous import was 96c58ceeacf0f2b73d752e413e4fd78787a12da3
Included changes:
- **[c05f2ae4](https://github.com/onnx/onnx/commit/c05f2ae4)**: update both core and ml docs (#1879) <Lu Fang>
- **[f895279b](https://github.com/onnx/onnx/commit/f895279b)**: fix the problems introduced in previous PRs in operator registration (#1878) <Lu Fang>
- **[f6f80657](https://github.com/onnx/onnx/commit/f6f80657)**: Skip the schema check on ops in non-standard domain (#1876) <Lu Fang>
- **[8c8be722](https://github.com/onnx/onnx/commit/8c8be722)**: Introduce Function Body Helper (#1868) <Sherlock>
- **[b605eafb](https://github.com/onnx/onnx/commit/b605eafb)**: Support down sampling for Upsample with scales < 1. (#1773) <Ke Zhang>
- **[47f7aa71](https://github.com/onnx/onnx/commit/47f7aa71)**: Remove scaledtanh (#1866) <Ashwini Khade>
- **[4dfc56de](https://github.com/onnx/onnx/commit/4dfc56de)**: Add Ceil support for Max and Average Pooling (#1860) <Lara Haidar>
- **[552a8efc](https://github.com/onnx/onnx/commit/552a8efc)**: Add testcase generator for functions (#1862) <Raymond Yang>
- **[fdb978a5](https://github.com/onnx/onnx/commit/fdb978a5)**: Promote Thresholded Relu Op (#1856) <Ashwini Khade>
- **[ce332628](https://github.com/onnx/onnx/commit/ce332628)**: Update Slice with dynamic input & optional input steps (#1836) <Bowen Bao>
- **[3a9a8787](https://github.com/onnx/onnx/commit/3a9a8787)**: Merge function into opschema (#1834) <Raymond Yang>
- **[3dbf8fe9](https://github.com/onnx/onnx/commit/3dbf8fe9)**: Handle string comparision represented as np.objects (#1851) <Dmitri Smirnov>
- **[3b0d3bb2](https://github.com/onnx/onnx/commit/3b0d3bb2)**: remove global variable in header file (#1850) <Lu Fang>
- **[1cca8733](https://github.com/onnx/onnx/commit/1cca8733)**: bump the version for drop out - fix the issue that the version was not bumped when changing its type constraint declaration. (#1848) <Ke Zhang>
- **[1ec81bc6](https://github.com/onnx/onnx/commit/1ec81bc6)**: Change TopK operator to allow dynamic 'k' (#1829) <Hariharan Seshadri>
- **[a89a4a16](https://github.com/onnx/onnx/commit/a89a4a16)**: Remove exp op: Affine, ImageScaler,ParametricSoftplus, Crop. (#1832) <Ke Zhang>
Reviewed By: yinghai
Differential Revision:
D14566202
fbshipit-source-id:
b1e5912ae6887e2865fc628363071e2b9938dfa4
David Riazati [Fri, 22 Mar 2019 03:15:38 +0000 (20:15 -0700)]
Cleanup TorchScript rst docs (#18234)
Summary:
* Adds more headers for easier scanning
* Adds some line breaks so things are displayed correctly
* Minor copy/spelling stuff
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18234
Reviewed By: ezyang
Differential Revision:
D14567737
Pulled By: driazati
fbshipit-source-id:
046d991f7aab8e00e9887edb745968cb79a29441
Junjie Bai [Thu, 21 Mar 2019 23:24:45 +0000 (16:24 -0700)]
Replace the remaining usages of IntList in caffe2 to IntArrayRef
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18282
Differential Revision:
D14569269
Pulled By: bddppq
fbshipit-source-id:
5fc33701b83f9efdec4b456d2691764831d10e7f
Yinghai Lu [Thu, 21 Mar 2019 22:28:20 +0000 (15:28 -0700)]
Blacklist certain op types when doing bound shape inference (#18290)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18290
Some ops, such as `Tile`, will mess up our tracking of the batch size, and for now it makes sense to stop shape inference on these ops so that we don't lower them and downstream ops without proper batch info.
Reviewed By: zrphercule
Differential Revision:
D14463550
fbshipit-source-id:
2792481efa540f2a7dd310e677c213860c3053ca
Sebastian Messmer [Thu, 21 Mar 2019 21:51:38 +0000 (14:51 -0700)]
Fix use of c10::guts::apply (#18159)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18159
In some instances, the call to forward could clash with std::forward. Fully qualify it to make sure it gets the right one.
Reviewed By: ezyang
Differential Revision:
D14512189
fbshipit-source-id:
6242607dbe54fcdb93229c1a4aaee8b84a88caa1
Sebastian Messmer [Thu, 21 Mar 2019 21:51:38 +0000 (14:51 -0700)]
Allow using C10_DECLARE_TENSOR_TYPE and C10_DEFINE_TENSOR_TYPE from any namespace (#18158)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18158
They didn't work when called from other namespaces before because they didn't fully specify the c10 namespace.
Reviewed By: ezyang
Differential Revision:
D14512187
fbshipit-source-id:
a496b89a1bbe2b56137cfae03ab94a60f38d7068
Sebastian Messmer [Thu, 21 Mar 2019 21:51:38 +0000 (14:51 -0700)]
Move schema inference to c10 (#18090)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18090
This schema inference is needed by the c10 operator registration mechanism. Move it to c10.
It is going to be used by diffs stacked on top.
Reviewed By: ezyang
Differential Revision:
D14491454
fbshipit-source-id:
0f8ddcdbd91467c8347d315dd443a1ca8b216481
Sebastian Messmer [Thu, 21 Mar 2019 21:51:38 +0000 (14:51 -0700)]
Allow registering same operator schema multiple times (#18038)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18038
Now that we have named overloads, we can allow registering the same function schema multiple times and just check it's identical.
This is going to be used in custom op registration, since custom ops register the schema every time a kernel is registered.
Reviewed By: dzhulgakov
Differential Revision:
D14467494
fbshipit-source-id:
2c26cf72a64b65f120afe05e989302ec42597515
vishwakftw [Thu, 21 Mar 2019 21:18:38 +0000 (14:18 -0700)]
Rename trtrs to triangular_solve (#18213)
Summary:
Changelog:
- Renames `trtrs` to `triangular_solve` to remain consistent with `cholesky_solve` and `solve` (see the usage sketch after this list).
- Rename all tests, fix callsites
- Create a tentative alias for `triangular_solve` under the name `trtrs`, and add a deprecation warning to not promote usage.
- Move `isnan` to _torch_docs.py
- Remove unnecessary imports
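A minimal before/after usage sketch:
```python
import torch

A = torch.triu(torch.randn(3, 3))  # upper-triangular coefficient matrix
b = torch.randn(3, 2)

# New name (the solution comes first in the returned pair):
x, _ = torch.triangular_solve(b, A, upper=True)

# The old name still works for now but is deprecated:
x_old, _ = torch.trtrs(b, A, upper=True)
```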
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18213
Differential Revision:
D14566902
Pulled By: ezyang
fbshipit-source-id:
544f57c29477df391bacd5de700bed1add456d3f
kshitij12345 [Thu, 21 Mar 2019 20:10:34 +0000 (13:10 -0700)]
Fix contribution_guide docs (#18237)
Summary:
Fixes Typo and a Link in the `docs/source/community/contribution_guide.rst`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18237
Differential Revision:
D14566907
Pulled By: ezyang
fbshipit-source-id:
3a75797ab6b27d28dd5566d9b189d80395024eaf
svcscm [Thu, 21 Mar 2019 20:08:10 +0000 (13:08 -0700)]
Updating submodules
Reviewed By: yns88
fbshipit-source-id:
80b00c33e6f6c7cfa08f645cd33419f6545f45d2
Xiaomeng Yang [Thu, 21 Mar 2019 19:56:20 +0000 (12:56 -0700)]
Optimize group_norm_op (#17945)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17945
Optimize group_norm_op
Reviewed By: houseroad
Differential Revision:
D14419908
fbshipit-source-id:
4024b5c5dbeff97f4f026d61fc44af1f0e98ed68
Edward Yang [Thu, 21 Mar 2019 19:37:00 +0000 (12:37 -0700)]
Enable running of slow tests in CI. (#18236)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18236
ghimport-source-id:
2bb80d017c2ea833669a2d55b340a922b2d44685
Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18236 Enable running of slow tests in CI.**
* #18231 Add a decorator for marking slow tests.
These tests only run on master, as they are slow.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision:
D14563115
fbshipit-source-id:
f54ddef4abedc7e872e58657fc9ac537952773d0
Pieter Noordhuis [Thu, 21 Mar 2019 18:49:21 +0000 (11:49 -0700)]
Run clang-format on torch/csrc/distributed/c10d
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18255
Differential Revision:
D14563072
Pulled By: pietern
fbshipit-source-id:
bd83f90ae949b14bc95f4009ba12319c9b7936d0
Edward Yang [Thu, 21 Mar 2019 18:36:12 +0000 (11:36 -0700)]
Shut up compiler about unused the_type. (#18278)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18278
ghimport-source-id:
3c35f6e7229c3c2b3a27d96370d7c05fad58365e
Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18278 Shut up compiler about unused the_type.**
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision:
D14563050
fbshipit-source-id:
4b516f6c9ef3784d1430f793f304066c351b1a93
Edward Yang [Thu, 21 Mar 2019 18:08:11 +0000 (11:08 -0700)]
Add a decorator for marking slow tests. (#18231)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18231
ghimport-source-id:
78c230f60c41877fe91b89c8c979b160f36f856b
Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18231 Add a decorator for marking slow tests.**
The general strategy:
- It's a normal skip decorator, which triggers a skip if PYTORCH_TEST_WITH_SLOW is not set.
- It also annotates the method in question to say it's slow. We use this to implement a catch-all skipper in setUp that skips all non-slow tests when PYTORCH_TEST_SKIP_FAST is set.
I added a little smoketest to test_torch and showed that I get:
```
Ran 432 tests in 0.017s
OK (skipped=431)
```
when running with PYTORCH_TEST_WITH_SLOW=1 and PYTORCH_TEST_SKIP_FAST=1
CI integration coming in later patch, as well as nontrivial uses of
this decorator.
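A hedged sketch of the strategy (illustrative names; the real decorator lives in PyTorch's test utilities):
```python
import os
import unittest

def slowTest(fn):
    # Skip unless slow tests were explicitly requested ...
    wrapped = unittest.skipUnless(
        os.environ.get("PYTORCH_TEST_WITH_SLOW") == "1",
        "slow test; set PYTORCH_TEST_WITH_SLOW=1 to run",
    )(fn)
    # ... and tag the method so a catch-all skipper in setUp can skip
    # everything *not* tagged when PYTORCH_TEST_SKIP_FAST is set.
    wrapped._slow_test = True
    return wrapped
```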
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision:
D14544441
fbshipit-source-id:
54435ce4ec827193e019887178c09ebeae3ae2c9
Igor Fedan [Thu, 21 Mar 2019 18:04:15 +0000 (11:04 -0700)]
lint changes
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18276
Differential Revision:
D14563385
Pulled By: ifedan
fbshipit-source-id:
12a51dbdb7b9e96be9fefa21fe298796b1ae6b58
Thomas Viehmann [Thu, 21 Mar 2019 16:59:20 +0000 (09:59 -0700)]
move median to ATen (#17637)
Summary:
This moves median to ATen.
- median with a dimension reduces to kthvalue
- median without a dimension (aka medianall) is implemented in parallel to kthvalue because we would not want to reshape (copying for non-contiguous) and then copy again in kthvalue. We can use the helper functions we moved from kthvalue.
- `median_cuda` was accidentally already put into ATen in #17544.
- The quickselect algorithm without indices for CPU in TH is now obsolete and removed.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17637
Differential Revision:
D14346510
Pulled By: ezyang
fbshipit-source-id:
c07ad144efbd6b4194179bb1c02635862521d8cb
Edward Yang [Thu, 21 Mar 2019 16:06:30 +0000 (09:06 -0700)]
Fix B903 lint: save memory for data classes with slots/namedtuple (#18184)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18184
ghimport-source-id:
2ce860b07c58d06dc10cd7e5b97d4ef7c709a50d
Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18184 Fix B903 lint: save memory for data classes with slots/namedtuple**
* #18181 Fix B902 lint error: invalid first argument.
* #18178 Fix B006 lint errors: using mutable structure in default argument.
* #18177 Fix lstrip bug revealed by B005 lint
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision:
D14530872
fbshipit-source-id:
e26cecab3a8545e7638454c28e654e7b82a3c08a
Edward Yang [Thu, 21 Mar 2019 16:06:30 +0000 (09:06 -0700)]
Fix B902 lint error: invalid first argument. (#18181)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18181
ghimport-source-id:
9c23551584a1a1b0b7ac246367f3a7ae1c50b315
Stack from [ghstack](https://github.com/ezyang/ghstack):
* #18184 Fix B903 lint: save memory for data classes with slots/namedtuple
* **#18181 Fix B902 lint error: invalid first argument.**
* #18178 Fix B006 lint errors: using mutable structure in default argument.
* #18177 Fix lstrip bug revealed by B005 lint
A variety of sins were committed:
- Some code was dead
- Some code was actually a staticmethod
- Some code just named it the wrong way
- Some code was purposely testing the omitted case
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision:
D14530876
fbshipit-source-id:
292a371d9a76ddc7bfcfd38b6f0da9165290a58e
Edward Yang [Thu, 21 Mar 2019 16:06:30 +0000 (09:06 -0700)]
Fix B006 lint errors: using mutable structure in default argument. (#18178)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18178
ghimport-source-id:
667ee76b418f505fa64b863e52a603c508dcd1bf
Stack from [ghstack](https://github.com/ezyang/ghstack):
* #18184 Fix B903 lint: save memory for data classes with slots/namedtuple
* #18181 Fix B902 lint error: invalid first argument.
* **#18178 Fix B006 lint errors: using mutable structure in default argument.**
* #18177 Fix lstrip bug revealed by B005 lint
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision:
D14530874
fbshipit-source-id:
38f4456a085bfe55f2a96fff53028ebd0d621604
Thomas Viehmann [Thu, 21 Mar 2019 15:02:30 +0000 (08:02 -0700)]
Two amendments for the shape analysis (#18271)
Summary:
Two small refinements to the shape analysis:
- `detach` can set requires_grad to false for dimensioned tensors (not sure if I would also need to deal with Complete?).
- add `batch_norm_stats`.
I noticed these while looking at what's going on when trying to code batch norm manually. (Hi wanchaol)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18271
Differential Revision:
D14561303
Pulled By: ezyang
fbshipit-source-id:
64a6879392e77403c44f2ed82f84b6397754d0ea
Edward Yang [Thu, 21 Mar 2019 14:50:45 +0000 (07:50 -0700)]
Fix lstrip bug revealed by B005 lint (#18177)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18177
ghimport-source-id:
fbbf915b66762fc88bc5b541464e71ba27500958
Stack from [ghstack](https://github.com/ezyang/ghstack):
* #18184 Fix B903 lint: save memory for data classes with slots/namedtuple
* #18181 Fix B902 lint error: invalid first argument.
* #18178 Fix B006 lint errors: using mutable structure in default argument.
* **#18177 Fix lstrip bug revealed by B005 lint**
lstrip() doesn't strip a prefix; it strips all of the characters
in the passed-in string. The B005 lint revealed this. Replaced with a
substring operation.
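A quick illustration of the difference (hypothetical strings):
```python
# lstrip() removes any leading characters drawn from the given set,
# not a literal prefix:
"fc_layer".lstrip("fc_")     # -> 'layer'   (looks right, by accident)
"fc_features".lstrip("fc_")  # -> 'eatures' (the bug: it also ate the second 'f')

# Stripping an actual prefix requires a substring operation:
s = "fc_features"
prefix = "fc_"
if s.startswith(prefix):
    s = s[len(prefix):]
```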
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision:
D14530873
fbshipit-source-id:
13b3438fcc3cce13b5110730dc3d0b528a52930f
Igor Fedan [Thu, 21 Mar 2019 07:36:26 +0000 (00:36 -0700)]
Backward function for torch.cdist
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17173
Differential Revision:
D14111482
Pulled By: ifedan
fbshipit-source-id:
d72cfd53c29d0f8cf5f8ad1148d14f3d5abd938e
Lu Fang [Thu, 21 Mar 2019 05:45:57 +0000 (22:45 -0700)]
Fix ONNX symbolic for argmin and argmax (#18261)
Summary:
Fix the problem introduced in https://github.com/pytorch/pytorch/pull/17103
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18261
Reviewed By: bddppq
Differential Revision:
D14558781
Pulled By: houseroad
fbshipit-source-id:
7bb50072e77d1d7b2a93f4011fa1362f26e9df1c
Xiaomeng Yang [Thu, 21 Mar 2019 01:19:09 +0000 (18:19 -0700)]
Update math::Transpose to support tensor with size > 2G (#17670)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17670
Update math::Transpose to support tensor with size > 2G
i-am-not-moving-c2-to-c10
Differential Revision:
D14313624
fbshipit-source-id:
0b4a85b913972e5a8981f0d40d0c539407b98f30
Jongsoo Park [Thu, 21 Mar 2019 00:02:38 +0000 (17:02 -0700)]
handle dst_bin_width==0 case properly (#18240)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18240
For rare cases when dst_bin_width == 0, we should just put all numbers into an arbitrary bin.
Reviewed By: csummersea
Differential Revision:
D14544685
fbshipit-source-id:
02d04ff8bd1555d6cf7e7eeb1196a4ab3325a9e5
Lu Fang [Wed, 20 Mar 2019 23:27:56 +0000 (16:27 -0700)]
Revert
D14114134: [asr] add fbgemm fp16 (fbfcpacked) support, add global_init_net in predictor_export_meta
Differential Revision:
D14114134
Original commit changeset:
112bb2ceb9d3
fbshipit-source-id:
763262c1b78eed88a653caad5adc27d97feb43aa
Gao, Xiang [Wed, 20 Mar 2019 23:18:49 +0000 (16:18 -0700)]
Cleanup arg{min, max} (#17103)
Summary:
Why do we need this workaround? `PythonArgParser` handles these two cases well.
The discussion started at https://github.com/pytorch/pytorch/pull/6201#issuecomment-378724406. The conclusion at that time by goldsborough was:
> Because we wanted to allow `dim=None` in Python and route to a different function. Essentially the problem was wanting to wrap the C++ function in Python. AFAIK there is no way of translating `dim=None` behavior into C++? So Richard and I came up with this strategy
Maybe at that time `PythonArgParser` was not powerful enough to handle the routing of two functions with the same name but different C++ signatures.
Will keep an eye on the CI.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17103
Differential Revision:
D14523503
Pulled By: VitalyFedyunin
fbshipit-source-id:
cae3e2678062da2eccd93b51d4050578c7a9ab80
Bharat123Rox [Wed, 20 Mar 2019 23:00:11 +0000 (16:00 -0700)]
Added the exception of ignore_index (#18117)
Summary:
Fix #17801 to add an exception regarding `ignore_index` in the documentation for `torch.nn.CrossEntropyLoss` and `torch.nn.NLLLoss`
If any other files/functions are hit, I'd be glad to incorporate the changes there too! 😊
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18117
Differential Revision:
D14542079
Pulled By: ezyang
fbshipit-source-id:
7b918ac61f441dde7d3d6782d080c500cf2097f1
David Riazati [Wed, 20 Mar 2019 21:48:52 +0000 (14:48 -0700)]
Add .get() for dicts (#18238)
Summary:
Fixes #18232
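A minimal sketch of the new method in TorchScript (assuming it mirrors Python's one-argument `dict.get`):
```python
import torch
from typing import Dict, Optional

@torch.jit.script
def lookup(d: Dict[str, int], key: str) -> Optional[int]:
    # Returns None when the key is absent instead of raising.
    return d.get(key)
```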
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18238
Differential Revision:
D14546689
Pulled By: driazati
fbshipit-source-id:
ed021e6f54c891d6c734c8f2345f4e83a3c6c905
Pieter Noordhuis [Wed, 20 Mar 2019 21:30:55 +0000 (14:30 -0700)]
Update nccl submodule to 2.4.2 (#17883)
Summary:
Didn't test this. Let's see what happens.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17883
Differential Revision:
D14547470
Pulled By: pietern
fbshipit-source-id:
c35d232f6bcc5a2dce55da636a0acbea5c2725d8