Michael Suo [Fri, 14 Dec 2018 06:10:56 +0000 (22:10 -0800)]
Revert D13407930: [pytorch][PR] Support torch.tensor in script
Differential Revision: D13407930
Original commit changeset: d17f1195a221
fbshipit-source-id: f4458872c48ec4a2c9983b21ed90bcdc0ae665b7
Duc Ngo [Fri, 14 Dec 2018 04:43:00 +0000 (20:43 -0800)]
caffe2 - make DataRandomFiller usable in unit tests (#15027)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15027
- Make DataRandomFiller able to accept input_dims and input_types for only non-intermediate inputs. Add a helper to fill inputs directly into a workspace
Reviewed By: highker
Differential Revision: D13408345
fbshipit-source-id: 5fc54d33da12e3f0a200e79380d4c695b0339b17
Duc Ngo [Fri, 14 Dec 2018 04:43:00 +0000 (20:43 -0800)]
caffe2 - easy - utils to set argument of operator (#15022)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15022
Add setArgument testing utils to make it easy to set an argument for an operator
Reviewed By: yinghai
Differential Revision: D13405225
fbshipit-source-id: b5c1859c6819d53c1a44718e2868e3137067df36
Duc Ngo [Fri, 14 Dec 2018 04:43:00 +0000 (20:43 -0800)]
caffe2 - easy - test utils for tensor assertion (#15020)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15020
Add test utils for assertion of a tensor (sizes and values)
Reviewed By: salexspb
Differential Revision: D13401146
fbshipit-source-id: bc385df074043e03ea884940b5631b96de4a607e
Duc Ngo [Fri, 14 Dec 2018 04:42:59 +0000 (20:42 -0800)]
caffe2 - easy - test utils to compare tensors in two workspaces (#15181)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15181
Add test utils to compare tensors in two workspaces
Reviewed By: ZolotukhinM
Differential Revision: D13387212
fbshipit-source-id: e19d932a1ecc696bd0a08ea14d9a7485cce67bb2
Duc Ngo [Fri, 14 Dec 2018 04:42:59 +0000 (20:42 -0800)]
caffe2 - easy - test utils to fill tensors (#15019)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15019
Put some utils for filling tensors into test_utils
Reviewed By: salexspb
Differential Revision: D13386691
fbshipit-source-id: 51d891aad1ca12dc5133c0352df65b8db4f96edb
Duc Ngo [Fri, 14 Dec 2018 04:42:59 +0000 (20:42 -0800)]
caffe2 - easy - test utils to create operator (#15180)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15180
Test utils to create an operator
On top of D13370461
Reviewed By: ZolotukhinM
Differential Revision: D13382773
fbshipit-source-id: a88040ed5a60f31d3e73f1f958219cd7338dc52e
Duc Ngo [Fri, 14 Dec 2018 04:42:58 +0000 (20:42 -0800)]
caffe2 - easy - Create test_util to make it easier to write C++ unit tests (#15014)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15014
Currently it looks like many of the simple operations such as comparing tensors, creating tensors, fetching tensors... are too verbose and take effort to write correctly in unit tests.
Easy-to-use utilities are often important for productivity when writing unit tests. While caffe2 Python unit tests are relatively easy to write at the moment, the C++ side seems lacking.
In this change I create a test_util, starting with assertsTensorEquals, getTensor, and createTensor, and we can start putting more easy-to-use utilities there.
Reviewed By: salexspb
Differential Revision: D13370461
fbshipit-source-id: bee467a127e1d032ef19482f98aa5c776cf508c0
vishwakftw [Fri, 14 Dec 2018 04:30:40 +0000 (20:30 -0800)]
Fix derivative for mvlgamma (#15049)
Summary:
Fixes #15015.
Added tests to validate derivative.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15049
Reviewed By: soumith
Differential Revision: D13434117
Pulled By: zou3519
fbshipit-source-id: 4a292600af9eb08b67c0f8b5482e9512aac95e72
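For context, a minimal gradcheck of the kind this PR validates (a hedged sketch; the shift into mvlgamma's domain, input > (p - 1) / 2, is an assumption about the test setup, not copied from the PR):
```python
import torch
from torch.autograd import gradcheck

# Shift inputs into mvlgamma's domain (x > (p - 1) / 2 for p = 2).
x = (torch.rand(4, dtype=torch.float64) + 2.0).requires_grad_()
assert gradcheck(lambda t: torch.mvlgamma(t, 2), (x,))
```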
Roy Li [Fri, 14 Dec 2018 03:33:37 +0000 (19:33 -0800)]
Fix numpy conversion for int8 tensor
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15194
Differential Revision: D13459270
Pulled By: li-roy
fbshipit-source-id: 605534add263860a3ad9a7fa70888301ee0bf8e4
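A quick illustration of the conversion this fixes (hedged; the exact failing path is in the PR):
```python
import torch

t = torch.tensor([-1, 0, 1], dtype=torch.int8)
a = t.numpy()                           # int8 tensors should now round-trip to numpy
assert a.dtype.name == 'int8'
assert (torch.from_numpy(a) == t).all()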
Natalia Gimelshein [Fri, 14 Dec 2018 03:15:25 +0000 (19:15 -0800)]
add erf and erfc to fuser/autodiff
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15139
Differential Revision: D13455690
Pulled By: soumith
fbshipit-source-id: b06e5f5d362869c2e5fa11a52f9450d77c30d4cb
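As a hedged example of the kind of pointwise scripted graph erf/erfc can now participate in (the function is illustrative, not from the PR):
```python
import torch

@torch.jit.script
def erf_mix(x):
    # erf/erfc are pointwise, so they are now fuser/autodiff candidates
    return 0.5 * x * (1 + torch.erf(x)) - torch.erfc(x)

x = torch.randn(8, requires_grad=True)
erf_mix(x).sum().backward()
```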
Sebastian Messmer [Fri, 14 Dec 2018 02:38:55 +0000 (18:38 -0800)]
Move TensorImpl::CopyFrom to caffe2::Tensor (2/2) (#14858)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14858
This diff doesn't change logic but just takes the existing code and moves it to caffe2::Tensor
Reviewed By: ezyang
Differential Revision: D13365817
fbshipit-source-id: bc73b27a793602cb14200dcdf357aa63233da43c
Sebastian Messmer [Fri, 14 Dec 2018 02:38:54 +0000 (18:38 -0800)]
Move TensorImpl::CopyFrom to caffe2::Tensor (1/2) (#14656)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14656
This diff doesn't move it yet, but prepares it to be moved, i.e. removes all access to class internals.
dzhulgakov: Please comment on if you think it still makes sense to land this even though it's not blocking anymore since we're going to move at::CopyBytes anyhow.
ezyang: There's some changes in the implementation, especially handling undefined dest tensors. Please review carefully.
Reviewed By: ezyang
Differential Revision: D13287688
fbshipit-source-id: 17800ca8a79ab1633f23be58d96f99a160d8ed24
Jing Huang [Fri, 14 Dec 2018 02:10:55 +0000 (18:10 -0800)]
For rotated proposals, replace cv::rotatedRectangleIntersection with a correct version that doesn't have underflow problem (#15113)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15113
cv::rotatedRectangleIntersection has a known float underflow bug that would cause failure in ```CV_Assert(intersection.size() <= 8)```
For rotated proposals, replace cv::rotatedRectangleIntersection with a correct version that doesn't have underflow problem.
Otherwise, when ```USE_CPP_GENERATE_PROPOSALS = true```, the training would fail.
Reviewed By: viswanathgs
Differential Revision: D13429770
fbshipit-source-id: 5e95d059f3c668f14059a0a83e8e53d8554cdb99
Elias Ellison [Fri, 14 Dec 2018 01:36:21 +0000 (17:36 -0800)]
Support torch.tensor in script (#14913)
Summary:
Adding support for torch.tensor in script.
The input list is typed as t[] because it can be arbitrarily nested. I added a compile-time check that the inner type of the list is a bool, float, or int.
Also adds specialization for boolean lists, which already existed at the IValue level but had not been added to the compiler yet.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14913
Differential Revision: D13407930
Pulled By: eellison
fbshipit-source-id: d17f1195a22149d5b0d08d76c89a7fab8444f7c5
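A minimal sketch of what this enables (note the revert of this same diff at the top of this log):
```python
import torch

@torch.jit.script
def build():
    # the inner type of the nested list is checked at compile time
    return torch.tensor([[1, 2], [3, 4]])

print(build())
```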
Sebastian Messmer [Fri, 14 Dec 2018 01:07:57 +0000 (17:07 -0800)]
Remove TensorImpl -> Type dependency
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15086
Reviewed By: dzhulgakov
Differential Revision: D13425628
fbshipit-source-id: 08a8a774d17b071367454e027012a02f96d177d4
Peter Goldsborough [Fri, 14 Dec 2018 00:09:08 +0000 (16:09 -0800)]
Enable performance-unnecessary-value-param in .clang-tidy (#15026)
Summary:
This PR fixes around 250 places in the codebase where we were making unnecessary copies of objects (some large, some small).
ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15026
Differential Revision: D13458784
Pulled By: goldsborough
fbshipit-source-id: be5148b2ce09493588d70952e6f6d6ff5ec5199b
Junjie Bai [Thu, 13 Dec 2018 23:57:20 +0000 (15:57 -0800)]
Add missing caffe2_hip extension in setup.py
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15189
Reviewed By: orionr
Differential Revision: D13457644
Pulled By: bddppq
fbshipit-source-id: c2363e9b8fd21709b62777e5b2199f01ec1c65f8
bddppq [Thu, 13 Dec 2018 23:41:55 +0000 (15:41 -0800)]
Remove disabled_features in hipify
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15098
Reviewed By: ezyang
Differential Revision: D13453762
Pulled By: bddppq
fbshipit-source-id: e177042c78f5bf393163d660c25b80285353853d
bddppq [Thu, 13 Dec 2018 23:07:10 +0000 (15:07 -0800)]
Run ONNX cuda backend test cases via ROCm
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15069
Differential Revision: D13427757
Pulled By: bddppq
fbshipit-source-id: ba0273d75986cd5b146f7041a83c63ddf9c6c0cf
vishwakftw [Thu, 13 Dec 2018 22:28:09 +0000 (14:28 -0800)]
Remove _finfo; replace _finfo usage with torch.finfo (#15165)
Summary:
This PR removes the usage of _finfo defined in torch.distributions.utils and changes the call sites to use torch.finfo instead.
Differential Revision: D13451936
Pulled By: soumith
fbshipit-source-id: 6dbda3a6179d9407bc3396bf1a2baf3e85bc4cf2
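The public replacement in a nutshell (torch.finfo mirrors numpy.finfo):
```python
import torch

info = torch.finfo(torch.float32)
print(info.eps, info.tiny, info.max)  # machine epsilon, smallest positive normal, largest value
```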
Jerry Zhang [Thu, 13 Dec 2018 21:33:13 +0000 (13:33 -0800)]
Tensor construction codemod(ResizeLike) - 4/7 (#15088)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15088
Codemod generated with clangr shard mode, 25 files per diff,
motivation: https://github.com/pytorch/pytorch/pull/12407
Reviewed By: ezyang
Differential Revision: D13419682
fbshipit-source-id: 3e59403bc1c0e71e5cb66df932ed0c6a0a72e643
David Reiss [Thu, 13 Dec 2018 21:14:11 +0000 (13:14 -0800)]
Replace non-printable-ascii characters in ProtoDebugString (#14918)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14918
When ProtoBuf-Lite is in use, ProtoDebugString just calls SerializeAsString.
This produces binary output, which is not a very suitable "debug" string.
Specifically, we've observed it causing problems when calling code tries to
add the debug string to a Java exception message (which requires valid UTF-8).
Now, we replace all non-ASCII bytes with "?".
This is not a very fast implementation, but generating debug strings shouldn't
be a performance-sensitive operation in any application.
Reviewed By: dzhulgakov
Differential Revision: D13385540
fbshipit-source-id: 8868172baf20efaf53fecf7d666a6980f59b64f5
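A hedged Python sketch of the sanitization described above (the real change is in C++; keeping exactly the printable-ASCII range is an assumption):
```python
def sanitize_debug_string(raw: bytes) -> str:
    # Replace every byte outside printable ASCII with "?" so the result is
    # always valid UTF-8 (e.g. safe to embed in a Java exception message).
    return ''.join(chr(b) if 0x20 <= b < 0x7f else '?' for b in raw)

assert sanitize_debug_string(b'ok\xff\x00') == 'ok??'
```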
Jerry Zhang [Thu, 13 Dec 2018 20:42:58 +0000 (12:42 -0800)]
Tensor construction codemod(ResizeLike) - 6/7 (#15137)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15137
Codemod generated with clangr shard mode, 25 files per diff,
motivation: https://github.com/pytorch/pytorch/pull/12407
Reviewed By: ezyang
Differential Revision: D13419736
fbshipit-source-id: f4ad7b9582c2f809258169b7fef9adbca7063d99
Jerry Zhang [Thu, 13 Dec 2018 20:40:33 +0000 (12:40 -0800)]
Tensor construction codemod(ResizeLike) - 5/7 (#15084)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15084
Codemod generated with clangr shard mode, 25 files per diff,
motivation: https://github.com/pytorch/pytorch/pull/12407
Reviewed By: ezyang
Differential Revision: D13419711
fbshipit-source-id: dd2b740c3f13d8087085bafc5571aaf908d1af42
Junjie Bai [Thu, 13 Dec 2018 20:31:38 +0000 (12:31 -0800)]
Use std::vector instead of alloca to work around hcc crash
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15175
Differential Revision: D13453708
Pulled By: bddppq
fbshipit-source-id: f8c147ae9f679e395fee9d4c73ebcca052c9a752
Junjie Bai [Thu, 13 Dec 2018 19:46:03 +0000 (11:46 -0800)]
Fix old tensor OutputTensorCopyFrom usage in ImageInput operator (#15094)
Summary:
cc jerryzh168
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15094
Differential Revision: D13451898
Pulled By: bddppq
fbshipit-source-id: 27906be62fb88aaa13c257441a2e35a285b445ee
Vitaly Fedyunin [Thu, 13 Dec 2018 19:32:06 +0000 (11:32 -0800)]
Kill non-forward, non-backward functions generated from nn.yaml (#15127)
Summary:
Updating bindings to legacy functions.
Removing unused declarations.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15127
Differential Revision: D13433405
Pulled By: VitalyFedyunin
fbshipit-source-id: 58544d38affd20818742338c9eb789d9d14ccbaa
Edward Yang [Thu, 13 Dec 2018 19:18:20 +0000 (11:18 -0800)]
Delete defunct USE_SIMPLE_BASE_CTOR_DTOR (#15144)
Summary:
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15144
Differential Revision: D13440872
Pulled By: ezyang
fbshipit-source-id: 2b1d73fac0c63729ba01d8f129642334ae9d9cf3
Lu Fang [Thu, 13 Dec 2018 19:03:00 +0000 (11:03 -0800)]
Fix typo (#15045)
Summary:
Simple typo fix
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15045
Reviewed By: dzhulgakov
Differential Revision: D13413509
Pulled By: houseroad
fbshipit-source-id: be66700c30d038368b1433232a4e3fd9299c83d6
Michael Carilli [Thu, 13 Dec 2018 18:08:01 +0000 (10:08 -0800)]
Use a pool of per-thread cudnn handles for each device, updated (#15080)
Summary:
Rebased version of https://github.com/pytorch/pytorch/pull/14861, hopefully addressing ezyang's comments.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15080
Differential Revision: D13440858
Pulled By: ezyang
fbshipit-source-id: 1c6af5c53538b81c6b92cf1dda231ed333f28035
vishwakftw [Thu, 13 Dec 2018 17:38:40 +0000 (09:38 -0800)]
Fix bincount for non-contiguous inputs on CPU (#15109)
Summary:
Fixes #15058.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15109
Differential Revision: D13447448
Pulled By: soumith
fbshipit-source-id: 56e8d42934538fb00465105a2c5ccfeb7c18a651
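A quick repro-style example of the fixed path (hedged; the original failing input is in #15058):
```python
import torch

x = torch.tensor([[0, 1], [1, 2]])[:, 0]  # a non-contiguous CPU input
assert not x.is_contiguous()
print(torch.bincount(x))                   # tensor([1, 1]) after this fix
```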
Vitaly Fedyunin [Thu, 13 Dec 2018 16:53:16 +0000 (08:53 -0800)]
Unify SparseTensorImpl::size_ and TensorImpl::sizes_
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15130
Differential Revision: D13434981
Pulled By: VitalyFedyunin
fbshipit-source-id: 98bd4d66834a3c3d2ea577adb0c8413852da095d
Peter Goldsborough [Thu, 13 Dec 2018 16:01:10 +0000 (08:01 -0800)]
Python <-> C++ Frontend inter-op (#13481)
Summary:
This PR enables C++ frontend modules to be bound into Python and added as submodules of Python modules. For this, I added lots of pybind11 bindings for the `torch::nn::Module` class, and modified the `torch.nn.Module` class in Python to have a new Metaclass that makes `isinstance(m, torch.nn.Module)` return true when `m` is a C++ frontend module. The methods and fields of C++ modules are bound in such a way that they work seamlessly as submodules of Python modules for most operations (one exception I know of: calling `.to()` ends up calling `.apply()` on each submodule with a Python lambda, which cannot be used in C++ -- this may require small changes on Python side).
I've added quite a bunch of tests to verify the bindings and equality with Python. I think I should also try out adding a C++ module as part of some large PyTorch module, like a WLM or something, and see if everything works smoothly.
The next step for inter-op across our system is ScriptModule <-> C++ Frontend Module inter-op. I think this will then also allow using C++ frontend modules from TorchScript.
apaszke zdevito
CC dzhulgakov
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13481
Differential Revision: D12981996
Pulled By: goldsborough
fbshipit-source-id: 147370d3596ebb0e94c82cec92993a148fee50a7
Richard Zou [Thu, 13 Dec 2018 15:51:08 +0000 (07:51 -0800)]
Reuse KernelSpec for FusionGroups with equivalent graphs (#14541)
Summary:
Before this PR, loop unrolling + the graph fuser was creating multiple
FusionGroups with the same bodies (with different variable names) for
JIT LSTMs. Each FusionGroup got registered to a separate fusion key;
each key resulted in a different compilation for the same
specializations.
This PR makes it so that when registering FusionGroups with the fusion
compiler, the compiler first checks the KernelSpec cache to see if the
FusionGroup's graph exists already. If it does, then return the
corresponding KernelSpec's key to share compiled kernels.
In addition, graphs in the KernelSpec cache are canonicalized before
being cached. I added a flag to the canonicalize pass to remove unique
names of values.
This shortens the compile time for a JIT LSTM (seq_len of 100, loop
unroll factor of 8) from 5.3s to 2.3s. Most of this compile time is
running the graph fuser and/or fusion compiler; while this PR
makes it so that there is only one unique kernel in the forward pass,
there are a lot of different kernels (6) in the backward pass
(after loop unrolling) that should be investigated.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14541
Differential Revision: D13324487
Pulled By: zou3519
fbshipit-source-id: b841d82ed35a959b5cfc72db033bf5a7b42cc4fb
Syed Tousif Ahmed [Thu, 13 Dec 2018 08:19:13 +0000 (00:19 -0800)]
Removes THCNumerics usages in RNN.cu (#15085)
Summary:
We don't need THCNumerics here since at::Half can be implicitly converted to float and the cuda math dispatches are handled by `/usr/local/cuda/include/crt/math_functions.hpp` and `cmath`. ATen should be free of THCNumerics after this and when porting kernels from THC, one should not use THCNumerics.
Should close: https://github.com/pytorch/pytorch/issues/11878
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15085
Differential Revision: D13447558
Pulled By: soumith
fbshipit-source-id: 4ff5cbf838edcd01e2d1397e4d7f4f920e9e9fc3
Jongsoo Park [Thu, 13 Dec 2018 08:15:51 +0000 (00:15 -0800)]
minimize header file includes from _avx2.cc (#14950)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14950
Minimize the number of headers included from _avx2.cc files to avoid accidental compilation of functions defined in header files reused by other translation units, which can lead to illegal instruction errors.
Reviewed By: dskhudia
Differential Revision: D13394483
fbshipit-source-id: 67149a6fb51f7f047e745bfe395cb6dd4ae7c1ae
Gu, Jinghui [Thu, 13 Dec 2018 06:39:29 +0000 (22:39 -0800)]
Disable strict-overflow flag to avoid compilation error (#14977)
Summary:
Disable strict-overflow flag to avoid compilation error
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14977
Differential Revision: D13447577
Pulled By: soumith
fbshipit-source-id: 1957bd5aa3c7b79219da3dd53560464977c89526
Russell Kaplan [Thu, 13 Dec 2018 05:56:54 +0000 (21:56 -0800)]
Remove "early-release beta" disclaimer from README (#15136)
Summary:
Now that PyTorch 1.0 is out, this should be updated :)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15136
Differential Revision: D13447377
Pulled By: soumith
fbshipit-source-id: bd4e662c53d0699f25d4d90c1b4c1e182b4427c2
Xianjie Chen [Thu, 13 Dec 2018 05:31:14 +0000 (21:31 -0800)]
support casting to string (#15110)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15110
support casting to string on CPU
Reviewed By: intermilan
Differential Revision: D13429381
fbshipit-source-id: b737a1ba1237b10f692d5c42b42a544b94ba9fd1
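A hedged sketch of how this might be exercised from Python, assuming the standard Cast operator accepts to=core.DataType.STRING after this change:
```python
import numpy as np
from caffe2.python import core, workspace

workspace.FeedBlob('x', np.array([1.5, 2.5], dtype=np.float32))
op = core.CreateOperator('Cast', ['x'], ['y'], to=core.DataType.STRING)
workspace.RunOperatorOnce(op)
print(workspace.FetchBlob('y'))  # string representations of the floats
```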
Cheng,Penghui [Thu, 13 Dec 2018 04:19:31 +0000 (20:19 -0800)]
Implementation of ChannelShuffle Op for MKLDNN (#15106)
Summary:
The speed-up of a single operation is up to 3X.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15106
Differential Revision: D13429596
Pulled By: bddppq
fbshipit-source-id: f8d987cafeac9bef9c3daf7e43ede8c6a4ee2ce5
Tyler Moncur [Thu, 13 Dec 2018 03:51:34 +0000 (19:51 -0800)]
Fix resize for edge case tensors (#14874)
Summary:
Certain tensor shapes failed when being resized. This pull request addresses the bug found in #13404.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14874
Differential Revision: D13429788
Pulled By: soumith
fbshipit-source-id: 8aa6451dbadce46d6d1c47a01cb26e6559bcfc8c
Peter Goldsborough [Thu, 13 Dec 2018 03:15:22 +0000 (19:15 -0800)]
Autoformat build_variables.py (#15152)
Summary:
autoformat `tools/build_variables.py`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15152
Differential Revision: D13445343
Pulled By: goldsborough
fbshipit-source-id: fd63588de114cb92deda03fa1a0b36f5f9082b2f
Jongsoo Park [Thu, 13 Dec 2018 02:42:41 +0000 (18:42 -0800)]
don't compile dnnlowp.cc in avx2 option (#15147)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15147
Forgot to take out dnnlowp.cc from avx2 list in a previous diff.
Reviewed By: dskhudia
Differential Revision: D13440686
fbshipit-source-id: 9ada98b6e885c7d5f22c91a735ff60304480b4cb
Brett Koonce [Thu, 13 Dec 2018 02:11:03 +0000 (18:11 -0800)]
docs: minor spelling tweaks
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15148
Differential Revision: D13443708
Pulled By: suo
fbshipit-source-id: 5e3ec0afd3416ab8ce207f2d04105c49e1c04611
Zachary DeVito [Thu, 13 Dec 2018 01:27:49 +0000 (17:27 -0800)]
Export defs.bzl to open source for pytorch (#15132)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15132
Pull Request resolved: https://github.com/facebook/fbshipit/pull/64
Reviewed By: dzhulgakov
Differential Revision: D13424093
fbshipit-source-id: bbebef964b9f3aef8f59cd394eca068680c36b5a
Junjie Bai [Thu, 13 Dec 2018 00:34:22 +0000 (16:34 -0800)]
Add back c2 string_utils include header to benchmark_helper
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15143
Differential Revision: D13439694
fbshipit-source-id: 78698b66d52a0178118cbf3e79a7a5ad1763d47b
Johannes M Dieterich [Thu, 13 Dec 2018 00:06:02 +0000 (16:06 -0800)]
use ROCm 1.9.2 fp16 capabilities in rocBLAS and MIOpen interfaces (#14994)
Summary:
* relax MIOpen if statement to allow fp16/fp32 mixed precision training now supported by ROCm 1.9.2
* use gemm_ex API of rocBLAS in ROCm 1.9.2 instead of the previous hgemm API
* with this: enable all but one half test in test_nn
While there, also fix:
* a group convolution issue with MIOpen pertaining to properly initializing MIOpen on multi-GPU systems, which we detected while working on this
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14994
Differential Revision: D13439869
Pulled By: bddppq
fbshipit-source-id: 75e4eb51a59488882e64b5eabdc30555b25be25e
Viswanath Sivakumar [Wed, 12 Dec 2018 23:48:03 +0000 (15:48 -0800)]
Optimize CPU GenerateProposals op by lazily generating anchors (3-5x faster) (#15103)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15103
There are two main optimizations in this diff:
1. Previously we generated all anchors for every single spatial grid first, and then applied NMS to pick 2000 anchors according to RPN_PRE_NMS_TOP_N. First sorting the scores, picking the top 2000, and then lazily generating only the corresponding anchors is much faster.
2. Transposing bbox_deltas from (num_anchors * 4, H, W) to (H, W, num_anchors * 4) was also quite slow, taking about 20ms in the RRPN case when there are lots of anchors, while it's negligible for the RPN case (about 0.1 ms). Instead of transposing, performing all operations in the (num_anchors, H, W) format speeds things up.
For the regular RPN scenario, this gives a 5x speedup from 5.84ms to 1.18ms in a case with 35 anchors over a 600x600 image.
For rotated boxes with 245 anchors, the runtime goes down from 80ms to 27ms per iter.
Reviewed By: newstzpz
Differential Revision: D13428688
fbshipit-source-id: 6006b332925e01a7c9433ded2ff5dc9e6d96f7d3
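A minimal NumPy sketch of optimization 1 (the real op is in C++; make_anchor stands in for the per-cell anchor math and is hypothetical):
```python
import numpy as np

def top_k_anchors(scores, k, make_anchor):
    # Sort by score first, then materialize anchors only for the top-k
    # winners instead of generating one for every spatial grid cell.
    order = np.argsort(-scores)[:k]
    return [make_anchor(i) for i in order]

# Toy usage: anchor "generation" is just an index here.
print(top_k_anchors(np.random.rand(10), 3, lambda i: ('anchor', i)))
```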
Shen Li [Wed, 12 Dec 2018 23:18:57 +0000 (15:18 -0800)]
Implement torch.tril_indices and torch.triu_indices (#12653) (#14904)
Summary:
This is an optimized implementation that does the following:
1. created an empty Tensor of correct size.
2. fill the Tensor with correct values.
The following three designs to fill in the Tensor result in roughly the same performance. Hence, the 2nd option is taken for simpler code, and to return contiguous tensors.
1. Sequential: fill row coordinates first, then columns. This results in two for-loops and more arithmetic operations.
2. Interleaved: fill in index coordinates one by one, which jumps between the two output Tensor rows in every iteration.
3. Transpose: create a n X 2 Tensor, fill the Tensor sequentially, and then transpose it.
<img width="352" alt="screen shot 2018-12-10 at 3 54 39 pm" src="https://user-images.githubusercontent.com/16999635/49769172-07bd3580-fc94-11e8-8164-41839185e9f9.png">
NOTE:
This implementation returns a 2D tensor, instead of a tuple of two tensors. It means that users will not be able to do the following:
```python
x = torch.ones(3, 3)
i = torch.tril_indices(3, 3)
x[i] # need to first convert the 2D tensor into a tuple of two 1D tensors.
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14904
Reviewed By: zou3519
Differential Revision: D13433027
Pulled By: mrshenli
fbshipit-source-id: 41c876aafcf584832d7069f7c5929ffb59e0ae6a
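Given the 2D return value, indexing works by splitting the two coordinate rows:
```python
import torch

x = torch.ones(3, 3)
i = torch.tril_indices(3, 3)  # shape (2, n): row 0 holds row indices, row 1 holds column indices
x[i[0], i[1]] = 0             # advanced indexing with the two 1D coordinate rows
print(x)
```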
Imran [Wed, 12 Dec 2018 23:15:45 +0000 (15:15 -0800)]
Minor documentation mistake (#15068)
Summary:
keepdim is an optional parameter for torch.max()
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15068
Differential Revision: D13437745
Pulled By: zou3519
fbshipit-source-id: b5198c7d4ae17758cd136f6e5aecc6cb5838f174
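For reference, the parameter in question:
```python
import torch

x = torch.randn(2, 3)
vals, idxs = torch.max(x, dim=1)                    # keepdim defaults to False
vals_k, idxs_k = torch.max(x, dim=1, keepdim=True)  # keeps the reduced dim as size 1
print(vals.shape, vals_k.shape)                     # torch.Size([2]) torch.Size([2, 1])
```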
David Riazati [Wed, 12 Dec 2018 20:25:40 +0000 (12:25 -0800)]
Add script standard library documentation + cleanup (#14912)
Summary:
Documents what is supported in the script standard library.
* Adds `my_script_module._get_method('forward').schema()` method to get function schema from a `ScriptModule`
* Removes `torch.nn.functional` from the list of builtins. The only functions not supported are `nn.functional.fold` and `nn.functional.unfold`, but those currently just dispatch to their corresponding aten ops, so from a user's perspective it looks like they work.
* Allow printing of `IValue::Device` by getting its string representation
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14912
Differential Revision: D13385928
Pulled By: driazati
fbshipit-source-id: e391691b2f87dba6e13be05d4aa3ed2f004e31da
Immanuel Alexander [Wed, 12 Dec 2018 20:09:47 +0000 (12:09 -0800)]
Move adaptive avg pooling 2d to ATen native (#14714)
Summary:
adaptive_avg_pool1d, adaptive_avg_pool2d, and adaptive_avgpool3d are neural network functions that are currently implemented in our legacy THNN (CPU) / THCUNN (CUDA) libraries. It is generally better if these live in our new library ATen, since it is more feature complete and reduces cognitive overhead.
This change moves adaptive_avg_pool1d and adaptive_avg_pool2d to ATen.
timed relevant cpu tests with this change:
```
[ialex@devgpu064.ash5 ~/pytorch] time python test/test_nn.py
test_AdaptiveAvgPool1d (__main__.TestNN)
test_AdaptiveAvgPool1d_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_single (__main__.TestNN)
test_AdaptiveAvgPool2d_single_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_none (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_none_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_single (__main__.TestNN)
test_AdaptiveAvgPool3d_single_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_none (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_none_cuda (__main__.TestNN)
test_adaptive_log_softmax (__main__.TestNN)
test_adaptive_pooling_input_size (__main__.TestNN)
test_adaptive_pooling_size_none (__main__.TestNN)
.s.s.s.s.s.s.s...
----------------------------------------------------------------------
Ran 17 tests in 6.273s
OK (skipped=7)
real 0m7.164s
user 3m1.289s
sys 0m0.905s
```
compared to master:
```
[ialex@devgpu064.ash5 ~/pytorch] time python test/test_nn.py
test_AdaptiveAvgPool1d (__main__.TestNN)
test_AdaptiveAvgPool1d_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_single (__main__.TestNN)
test_AdaptiveAvgPool2d_single_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_none (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_none_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_single (__main__.TestNN)
test_AdaptiveAvgPool3d_single_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_none (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_none_cuda (__main__.TestNN)
test_adaptive_log_softmax (__main__.TestNN)
test_adaptive_pooling_input_size (__main__.TestNN)
test_adaptive_pooling_size_none (__main__.TestNN)
.s.s.s.s.s.s.s...
----------------------------------------------------------------------
Ran 17 tests in 7.232s
OK (skipped=7)
real 0m8.065s
user 3m34.714s
sys 0m2.440s
```
also timed relevant cuda tests with this change:
```
[ialex@devgpu064.ash5 ~/pytorch] time python test/test_nn.py
test_AdaptiveAvgPool1d (__main__.TestNN)
test_AdaptiveAvgPool1d_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_single (__main__.TestNN)
test_AdaptiveAvgPool2d_single_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_none (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_none_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_single (__main__.TestNN)
test_AdaptiveAvgPool3d_single_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_none (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_none_cuda (__main__.TestNN)
test_adaptive_log_softmax (__main__.TestNN)
test_adaptive_pooling_input_size (__main__.TestNN)
test_adaptive_pooling_size_none (__main__.TestNN)
.................
----------------------------------------------------------------------
Ran 17 tests in 21.049s
OK
real 0m24.106s
user 0m20.890s
sys 0m4.026s
```
compared to master
```
[ialex@devgpu064.ash5 ~/pytorch] time python test/test_nn.py
test_AdaptiveAvgPool1d (__main__.TestNN)
test_AdaptiveAvgPool1d_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_single (__main__.TestNN)
test_AdaptiveAvgPool2d_single_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_none (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_none_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_single (__main__.TestNN)
test_AdaptiveAvgPool3d_single_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_none (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_none_cuda (__main__.TestNN)
test_adaptive_log_softmax (__main__.TestNN)
test_adaptive_pooling_input_size (__main__.TestNN)
test_adaptive_pooling_size_none (__main__.TestNN)
.................
----------------------------------------------------------------------
Ran 17 tests in 23.021s
OK
real 0m27.095s
user 0m20.121s
sys 0m3.668s
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14714
Differential Revision: D13384084
Pulled By: xnder
fbshipit-source-id: 344442103ccbbda72d3c010d2feea00e9985d226
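The user-facing call is unchanged; a quick smoke test of the moved ops:
```python
import torch
import torch.nn.functional as F

y1 = F.adaptive_avg_pool1d(torch.randn(1, 8, 7), output_size=3)
y2 = F.adaptive_avg_pool2d(torch.randn(1, 8, 7, 7), output_size=(3, 3))
print(y1.shape, y2.shape)  # torch.Size([1, 8, 3]) torch.Size([1, 8, 3, 3])
```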
Jerry Zhang [Wed, 12 Dec 2018 20:06:09 +0000 (12:06 -0800)]
Move numa.{h, cc} to c10/util (#15024)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15024
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14393
att
Reviewed By: dzhulgakov
Differential Revision: D13380559
fbshipit-source-id: abc3fc7321cf37323f756dfd614c7b41978734e4
Richard Zou [Wed, 12 Dec 2018 19:32:05 +0000 (11:32 -0800)]
Stop erroneously running aten::warn (#15124)
Summary:
Fixes #15119. Before this PR, we were propagating constants through
aten::warn AND running it as a part of shape analysis.
This caused aten::warn to be run regardless of whether it is
supposed to be run dynamically. This PR adds an exclusion for aten::warn
in constant propagation and shape analysis, similar to that of prim::RaiseException.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15124
Differential Revision: D13432815
Pulled By: zou3519
fbshipit-source-id: 15ab533ce2accb2da3fd4e569070c7979ce61708
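A hedged sketch of the dynamic behavior this restores, assuming warnings.warn compiles to aten::warn in script as in this release:
```python
import warnings
import torch

@torch.jit.script
def f(x):
    if bool(x.sum() < 0):
        warnings.warn("negative sum")  # should fire only when the branch actually runs
    return x

f(torch.ones(2))  # after the fix, no warning is emitted for this input
```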
Edward Yang [Wed, 12 Dec 2018 19:19:03 +0000 (11:19 -0800)]
Move CUDAGuard, CUDAStream and CUDAGuardImpl to c10/cuda (#14248)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14248
This diff also introduces a horrifying hack to override CUDA's DeviceGuardImpl
with a HIPGuardImplMasqueradingAsCUDA, to accommodate PyTorch's current
behavior of pretending CUDA is HIP when you build with ROCm enabled.
Reviewed By: bddppq
Differential Revision: D13145293
fbshipit-source-id: ee0e207b6fd132f0d435512957424a002d588f02
Gregory Chanan [Wed, 12 Dec 2018 18:55:22 +0000 (10:55 -0800)]
Kill Type.storage. (#15075)
Summary:
It's not used.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15075
Reviewed By: ezyang
Differential Revision: D13422487
Pulled By: gchanan
fbshipit-source-id: 272aa0a10e96f3ffb97d571490b517f972b9dcf7
Brennan Vincent [Wed, 12 Dec 2018 17:58:54 +0000 (09:58 -0800)]
fix infinite loop when get_max_threads is nonzero but num_threads is 1
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15114
Differential Revision: D13431891
Pulled By: umanwizard
fbshipit-source-id: f968b8e50cf776c346d4a28d72b12e7856c95839
Gregory Chanan [Wed, 12 Dec 2018 17:55:42 +0000 (09:55 -0800)]
Ensure there aren't variables in checked_tensor_unwrap, checked_tenso… (#15105)
Summary:
…r_list_unwrap.
These functions use unsafeGetTensorImpl(), which doesn't work with Variables (in a silent way that may blow up later).
So let's do early checking.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15105
Reviewed By: ezyang
Differential Revision: D13429149
Pulled By: gchanan
fbshipit-source-id: b85f6f5b7cdb9a6dd0c40205b924c840a3920ba0
Richard Zou [Wed, 12 Dec 2018 17:37:10 +0000 (09:37 -0800)]
Add better support for bools in the graph fuser (#15057)
Summary:
Fixes #15038.
aten::_cast_Float(tensor, non_blocking) support was added in #14336.
Its second argument is a bool, but because we don't support generating values
of type bool in the fuser codegen, the codegen errored out.
aten::_cast_Float in the fuser never actually uses its non_blocking
argument, so another way to fix this would be to have a special op for a
fused cast but I thought that we might have fusible ops that do take
bool arguments in the future so this would be good to have.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15057
Differential Revision: D13432091
Pulled By: zou3519
fbshipit-source-id: 455fe574f5f080aca9a112e346b841a2534a8dc3
Brennan Vincent [Wed, 12 Dec 2018 16:49:04 +0000 (08:49 -0800)]
fix some tests that I accidentally disabled (#15077)
Summary:
While moving these scenarios into `_test_dim_ops` I accidentally left an empty loop in the actual tests, causing them to do nothing.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15077
Differential Revision: D13428759
Pulled By: umanwizard
fbshipit-source-id: 08f53068981d9192c1408878b168e9053f4dc92e
Edward Yang [Wed, 12 Dec 2018 15:57:54 +0000 (07:57 -0800)]
Don't setup x86_64-linux-gnu-gcc as an sccache wrapper. (#15078)
Summary:
When I do this setup in a local Docker development environment,
I get the following error:
x86_64-linux-gnu-gcc: error trying to exec 'cc1plus': execvp: No such file or directory
Somehow, gcc seems to get confused when it gets run from the wrong
directory. Best not to do it.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15078
Differential Revision: D13432143
Pulled By: ezyang
fbshipit-source-id: b18e15f493503a4c8205c85f92a214e49762a7bc
Junjie Bai [Wed, 12 Dec 2018 10:56:37 +0000 (02:56 -0800)]
Use c10::to_string that works cross platform (#15117)
Summary:
Fix master breakage introduced in #15108
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15117
Differential Revision: D13430568
Pulled By: bddppq
fbshipit-source-id: ce10bc552f085d1bf0afbc13119991bee014ac95
Zhiping Xiu [Wed, 12 Dec 2018 09:32:28 +0000 (01:32 -0800)]
Add EmptyNameScope to allow you jump out from current scope. (#14631)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14631
Adds an empty name scope to allow people to jump out of the current name scope.
This could be useful when you want to access a blob from a parent or sibling scope.
Facebook:
e.g., we encountered a potential use case in D13124249 (it's a large diff, please search by EmptyNameScope in that diff); we need to access a blob declared in the root name scope from a device name scope (device name scope has been used by the parallel_GPU API). `EmptyNameScope` can help us do that with ease.
I referenced `EmptyDeviceScope` D6103412 while implementing this one.
Reviewed By: yinghai
Differential Revision: D13272240
fbshipit-source-id: d4cde5abcc2336e456b6c6ef086266ef94d86da8
bddppq [Wed, 12 Dec 2018 07:20:31 +0000 (23:20 -0800)]
Remove linker and dlopen flags that allowed undefined symbols in rocm build (#15091)
Summary:
Previously the undefined symbols were caused by disabled_modules in tools/amd_build/disabled_features.json (now it's cleared).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15091
Differential Revision: D13429595
Pulled By: bddppq
fbshipit-source-id: b341e83f9e5a8d16440a364e837b045a8a4fd6e1
Peter Goldsborough [Wed, 12 Dec 2018 06:38:14 +0000 (22:38 -0800)]
Fix serialization (#15033)
Summary:
Fixes a bug where (de-)/serializing a hierarchy of submodules where one submodule doesn't have any parameters, but its submodules do, doesn't get properly loaded. This had to do with the fact that the old protobuf format couldn't store empty parameters.
Fixes https://github.com/pytorch/pytorch/issues/14891
soumith ezyang ebetica
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15033
Differential Revision: D13411322
Pulled By: goldsborough
fbshipit-source-id: 2ef73b2aa93fa9e46b1cbe1fd47d9f134d6016d5
Fei Sun [Wed, 12 Dec 2018 06:22:42 +0000 (22:22 -0800)]
Update the output format for benchmark_helper. It outputs the dimensi… (#15108)
Summary:
…on first and all the values in the next line. This way, it can output an arbitrary blob
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15108
Reviewed By: llyfacebook
Differential Revision: D13429346
Pulled By: sf-wind
fbshipit-source-id: 5e0bba2a46fbe8d997dfc3d55a698484552e3af8
Zachary DeVito [Wed, 12 Dec 2018 06:15:20 +0000 (22:15 -0800)]
Pre-commit flake8/clang-tidy (#15102)
Summary:
Provide a pre-commit hook that does flake8 and clang tidy checks. Enables the clang-tidy script to run in parallel to make it fast enough to be used in a pre-commit hook.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15102
Reviewed By: soumith
Differential Revision: D13429629
Pulled By: zdevito
fbshipit-source-id: bd52fe5652f29b033de8d9926d78350b2da4c2fc
Jane Wang [Wed, 12 Dec 2018 05:03:13 +0000 (21:03 -0800)]
add gloo support for gather on GPU (#14916)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14916
as titled
Reviewed By: pietern
Differential Revision: D13267832
fbshipit-source-id: 3b89d08af93f74941f17ff892c33fc2a4a023c19
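A hedged usage sketch (requires a launched process group; the env:// rendezvous variables are assumed to be set by a launcher):
```python
import torch
import torch.distributed as dist

dist.init_process_group(backend='gloo')  # gloo should now accept CUDA tensors for gather
t = torch.ones(4, device='cuda') * dist.get_rank()
if dist.get_rank() == 0:
    out = [torch.zeros(4, device='cuda') for _ in range(dist.get_world_size())]
    dist.gather(t, gather_list=out, dst=0)  # gather_list only exists on the destination
    print(out)
else:
    dist.gather(t, dst=0)
```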
Sebastian Messmer [Wed, 12 Dec 2018 04:40:33 +0000 (20:40 -0800)]
Fix include paths for UndefinedTensorImpl.h
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14818
Reviewed By: ezyang
Differential Revision: D13348042
fbshipit-source-id: 11bdfc755767ce9d0a6fa95b2cf49d50adde8d60
Sebastian Messmer [Wed, 12 Dec 2018 04:40:33 +0000 (20:40 -0800)]
Move UndefinedTensorImpl to c10 (meh) (#14817)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14817
unfortunately, we still need this.
Reviewed By: ezyang
Differential Revision: D13348041
fbshipit-source-id: e8dcc89f5c71bd1ea2c9813990dac6e58e63b1fd
Sebastian Messmer [Wed, 12 Dec 2018 04:40:32 +0000 (20:40 -0800)]
Fix include paths for TensorImpl.h
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14816
Reviewed By: ezyang
Differential Revision: D13348040
fbshipit-source-id: a7204d89c2dd277d13093b0ed862f40b53dee82f
Sebastian Messmer [Wed, 12 Dec 2018 04:40:32 +0000 (20:40 -0800)]
Move TensorImpl to c10 (yay!)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14795
Reviewed By: ezyang
Differential Revision: D13336856
fbshipit-source-id: 5375d0e42312ff7564f4df06210a5e49542d59e3
Gregory Chanan [Wed, 12 Dec 2018 04:35:37 +0000 (20:35 -0800)]
Add at::scalar_tensor factory function, use it instead of Type.scalar… (#15074)
Summary:
…_tensor.
This is part of a long series of paring down the Type interface.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15074
Differential Revision: D13421482
Pulled By: gchanan
fbshipit-source-id: 84010ee71fef2cb74d32d5de7858d8ed9f36b885
Edward Yang [Wed, 12 Dec 2018 03:11:02 +0000 (19:11 -0800)]
Make ATen HIPify out-of-place, but still reuse CUDA names. (#14866)
Summary:
```
This diff changes the HIPification of ATen to be out-of-place.
We now have the following mappings:
- ATen/cuda => ATen/hip
- ATen/native/cuda => ATen/native/hip
- ATen/native/sparse/cuda => ATen/native/sparse/hip
- THC => THH
- THCUNN => THHUNN
The build system is adjusted to know about these new build paths,
and HIPify is taught how to adjust include paths and
THC_GENERIC_FILE appropriately. ATen_hip is now built as
the ATen_hip library, rather than reusing ATen_cuda.
However, despite these new filepaths, none of the identifiers in ATen
have actually changed. So, e.g., THHGeneral.h still defines functions
named THC_blahblah, and HIP still shows up as CUDA in PyTorch itself.
We'll tackle this in a subsequent PR; this diff is just to get the files
out-of-place.
Minor extra improvements:
- Don't edit tmp_install when hipifying
- HIP no longer builds native_cudnn_cpp; it was unnecessary
- Caffe2_HIP_INCLUDES is now Caffe2_HIP_INCLUDE, for consistency
with all the other variables.
- HIP build now properly respects ATEN_CUDA_FILES_GEN_LIB (it
did not previously.)
- You can now override file extension matching in pyHIPIFY
by explicitly specifying its full name in the matching list.
This is used so we can HIPify CMakeLists.txt in some situations.
A little bit of string and ceiling wax:
- gen.py grows a --rocm flag so that it knows to generate CUDA
files which actually refer to the HIP headers (e.g., THH.h)
We'll get rid of this eventually and generate real HIP files,
but not for this PR.
- Management of HIP dependencies is now completely deleted
from the ATen CMakeLists.txt. The old code was dead (because
it was shoveled in ATen_CUDA_DEPENDENCY_LIBS and promptly
ignored by the Caffe2 build system) and didn't actually work.
```
Stacked on https://github.com/pytorch/pytorch/pull/14849 review last commit only
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14866
Differential Revision: D13419475
Pulled By: ezyang
fbshipit-source-id: cb4c843df69a1d8369314c9fab1b7719520fa3db
Daniel Ingram [Wed, 12 Dec 2018 01:38:58 +0000 (17:38 -0800)]
Add error type to raise statement
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15039
Differential Revision: D13419566
Pulled By: zou3519
fbshipit-source-id: f67a3aebce937e3e640e91e81eb3e184cfdf269c
Peter Goldsborough [Wed, 12 Dec 2018 00:36:25 +0000 (16:36 -0800)]
Remove deprecated variable_tensor_functions (#15003)
Summary:
Removing the deprecated functions in `torch/csrc/variable_tensor_functions.h` (like `torch::CPU`) and corresponding implementations from `torch/csrc/torch.cpp` from master after the release.
ezyang gchanan soumith
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15003
Differential Revision: D13418086
Pulled By: goldsborough
fbshipit-source-id: a0accdf6f7b0efa1ec07ac7b74b86ff2da37543f
Jane Wang [Wed, 12 Dec 2018 00:13:31 +0000 (16:13 -0800)]
add gloo scatter support on GPU (#14917)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14917
as titled
Reviewed By: pietern
Differential Revision: D13271560
fbshipit-source-id: 0187a3390f8ebd72a2c074e7a651432159d427c0
Zachary DeVito [Wed, 12 Dec 2018 00:11:09 +0000 (16:11 -0800)]
re-enable copy of python files, but be careful that the copy is only … (#14982)
Summary:
…done once
This allows no-op builds to work correctly even when BUILD_CAFFE2_OPS is on.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14982
Differential Revision: D13413960
Pulled By: zdevito
fbshipit-source-id: 6e5412a8c375af8a47c76f548cdd31cff15f3853
Richard Zou [Tue, 11 Dec 2018 22:50:33 +0000 (14:50 -0800)]
Split off fuser tests in test_jit.py to their own test case (#15072)
Summary:
This PR creates TestFuser inside test_jit.py to be a home for graph fuser
specific tests.
This was a useful exercise because now that all the fuser tests are in
one place, I can spot redundant and bitrotting tests for cleanup in a
future PR.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15072
Differential Revision: D13421458
Pulled By: zou3519
fbshipit-source-id: 80b1a7712feff75a0c186d1664601c4edbbca694
David Riazati [Tue, 11 Dec 2018 21:49:59 +0000 (13:49 -0800)]
Suppress warnings on generated tests
Summary: Removes all warnings spew for the TestJitGenerated tests
Differential Revision: D13420919
fbshipit-source-id: f251c12f923088ccc5daa2984c15003a67cbd1c1
Josef Lindman Hörnlund [Tue, 11 Dec 2018 21:36:00 +0000 (13:36 -0800)]
Issue 14984: Remove divide by zero error in index_put_ (#14986)
Summary:
No check for zero index tensor was done in the accumulate=True (serial) case in the new TensorIterator code since https://github.com/pytorch/pytorch/pull/13420.
https://github.com/pytorch/pytorch/issues/14984
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14986
Differential Revision: D13417861
Pulled By: colesbury
fbshipit-source-id: e6ed1af8f708b53a35803fc157ed1f043169ec89
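A minimal repro of the fixed edge case (hedged):
```python
import torch

x = torch.zeros(5)
idx = torch.empty(0, dtype=torch.long)                 # a zero-element index tensor
x.index_put_((idx,), torch.empty(0), accumulate=True)  # previously hit a divide by zero
print(x)                                               # x is unchanged
```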
zrphercule [Tue, 11 Dec 2018 21:12:23 +0000 (13:12 -0800)]
Update onnx coverage script for more accurate result (#15029)
Summary:
The coverage of scalar-input test cases was not accurate. This patch fixes that.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15029
Differential Revision: D13419764
Pulled By: zrphercule
fbshipit-source-id: a14a5cbef432bea8c9126156f5deb1125e1aeb47
Michael Suo [Tue, 11 Dec 2018 21:12:20 +0000 (13:12 -0800)]
tox.ini -> .flake8 (#15065)
Summary:
We were only using this file to configure flake8, and fbcode linters do not recognize tox.ini which causes spurious linter warnings.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15065
Differential Revision: D13420774
Pulled By: suo
fbshipit-source-id: e43a46befa36862c8b3c0a90074aec6a66531492
Roy Li [Tue, 11 Dec 2018 21:06:11 +0000 (13:06 -0800)]
silence unreachable code warnings (#15036)
Summary:
Stack:
:black_circle: **#15036 silence unreachable code warnings** [:yellow_heart:](https://our.intern.facebook.com/intern/diff/D13411100/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15036
Differential Revision: D13414712
Pulled By: li-roy
fbshipit-source-id: d4aa84571fa94c66f3c5bfa9575a10c6ee398f9e
Michael Suo [Tue, 11 Dec 2018 19:44:27 +0000 (11:44 -0800)]
improve deep equality check in alias annotation test (#15031)
Summary:
Previously we were returning true if either IValue wasn't a tensor, which…is bad
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15031
Differential Revision: D13409759
Pulled By: suo
fbshipit-source-id: f8bdcd05d334c1276ce46f55812065d358c1ff5d
James Sun [Tue, 11 Dec 2018 19:14:50 +0000 (11:14 -0800)]
Fix race condition in ThreadPool::workOnTasksUntilCompleted (#14833)
Summary:
Resolves #14704
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14833
Differential Revision: D13405211
Pulled By: highker
fbshipit-source-id: 8552d51eeb5d3af0ed66c461e5ddfeb9ae2926bd
TerryTsao [Tue, 11 Dec 2018 18:41:37 +0000 (10:41 -0800)]
Fix CMakeLists.txt for Int8 python bindings (#15047)
Summary:
Currently in caffe2, one cannot properly fetch the content of Int8 blobs.
Upon digging into the source code, it turns out that the relevant source code is not being compiled. Adding the source to CMakeLists.txt fixes this issue.
First time ever doing a pull request. Please let me know if there's any rule I should follow. Thanks.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15047
Differential Revision: D13417583
Pulled By: bddppq
fbshipit-source-id: dd39575971a3012635edbf97a045d80e4b62a8eb
Orion Reblitz-Richardson [Tue, 11 Dec 2018 17:59:28 +0000 (09:59 -0800)]
Install cpp tests when built (#15000)
Summary:
This is broken out of https://github.com/pytorch/pytorch/pull/13733/
We want to install cpp tests so they can ultimately be runnable from that location for Caffe2 tests run from PyTorch builds.
cc pjh5 yf225 anderspapitto
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15000
Reviewed By: pjh5
Differential Revision: D13416253
Pulled By: orionr
fbshipit-source-id: 51280be0a22557a742f90c9f303c58c35cbd4a38
Michael Carilli [Tue, 11 Dec 2018 17:46:25 +0000 (09:46 -0800)]
Stashing checkpointing RNG states based on devices of arg tensors (#14518)
Summary:
This PR intends to address apaszke's concerns in https://github.com/pytorch/pytorch/pull/14253#issuecomment-441740016. Preserving the rng state is now controlled by a kwarg rather than a global state, hopefully in a python 2.7-compatible way.
Additionally, the checkpointing function stashes and restores the RNG states of
1. devices associated with all input tensor args to run_fn as well as
2. the current device.
I could easily change this to only save and restore the RNG states associated 1. alone. This would simplify the logic to create a [deduplicated, ordered](https://github.com/pytorch/pytorch/compare/master...mcarilli:checkpointing_rng_touchup?expand=1#diff-58da227fc9b1d56752b7dfad90428fe0R37) list of devices considered active.
I'm wondering if the [get_device_states](https://github.com/pytorch/pytorch/compare/master...mcarilli:checkpointing_rng_touchup?expand=1#diff-58da227fc9b1d56752b7dfad90428fe0R32) and [set_device_states](https://github.com/pytorch/pytorch/compare/master...mcarilli:checkpointing_rng_touchup?expand=1#diff-58da227fc9b1d56752b7dfad90428fe0R47) functions are general enough to reside elsewhere (presumably torch/random.py). I'm also wondering if the check on [torch.cuda._initialized](https://github.com/pytorch/pytorch/compare/master...mcarilli:checkpointing_rng_touchup?expand=1#diff-58da227fc9b1d56752b7dfad90428fe0R47) would be better placed within `get_device_states`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14518
Differential Revision: D13356210
Pulled By: ezyang
fbshipit-source-id: afa4cc21ce7862142d5cb1dec3750018df222039
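A hedged usage sketch, assuming the kwarg is named preserve_rng_state:
```python
import torch
import torch.nn.functional as F
from torch.utils.checkpoint import checkpoint

def run_fn(x):
    return F.dropout(x, p=0.5, training=True)  # an RNG-dependent op inside the checkpoint

x = torch.randn(4, 4, requires_grad=True)
y = checkpoint(run_fn, x, preserve_rng_state=True)  # stash/restore per-device RNG states
y.sum().backward()
```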
svcscm [Tue, 11 Dec 2018 15:38:23 +0000 (07:38 -0800)]
Updating submodules
Reviewed By: cdelahousse
fbshipit-source-id: d39b31f12ab2ab570548f3e8a65949332a64a0ff
Marat Dukhan [Tue, 11 Dec 2018 08:46:55 +0000 (00:46 -0800)]
Switch Int8Softmax, Int8Relu, and Int8LeakyRelu to QNNPACK (#14933)
Summary:
Int8Softmax: 4x-5x speedup compared to previous implementation
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14933
Differential Revision: D13406820
Pulled By: Maratyszcza
fbshipit-source-id: ea8cbe1b861ddb7ff1b851d06d52c6fd6d04ed01
Lingyi Liu [Tue, 11 Dec 2018 06:49:47 +0000 (22:49 -0800)]
Adjust the API call to deserialize the tensorproto (#14132)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14132
as title
Reviewed By: jerryzh168
Differential Revision: D13110697
fbshipit-source-id: 822c9079de11951f90aec3d26f0e4108847e7dac
Natalia Gimelshein [Tue, 11 Dec 2018 06:48:16 +0000 (22:48 -0800)]
use datatype dependent tolerance in data parallel tests
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14856
Differential Revision: D13413560
Pulled By: soumith
fbshipit-source-id: b3a0cfe93477ed332e6eaa2e39ef5f4cc8b36481
paland3 [Tue, 11 Dec 2018 06:34:17 +0000 (22:34 -0800)]
Update pooling.py (#14998)
Summary:
Strange line in the documentation.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14998
Differential Revision: D13413235
Pulled By: soumith
fbshipit-source-id: 80d05ec1185719b785f0aac914bc2369c1174f2f
Zachary DeVito [Tue, 11 Dec 2018 06:10:11 +0000 (22:10 -0800)]
Clean up casting ops (#14947)
Summary:
This removes FloatToInt style names, replacing them with just the destination name (e.g. FloatToInt -> Float). This makes it more consistent with the syntax and makes it easier to add type conversions (just add a new prim::Int op, for instance).
None of these ops get serialized, so this should not affect loading of old models.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14947
Differential Revision: D13408409
Pulled By: zdevito
fbshipit-source-id: d773fe863f14d9de893f686832769f8cc8903a8e
Jongsoo Park [Tue, 11 Dec 2018 06:08:04 +0000 (22:08 -0800)]
share code between adagrad and rowwise adagrad tests (#14692)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14692
Remove some code duplication
Reviewed By: chocjy
Differential Revision: D13296731
fbshipit-source-id: 5924e037ca64fc4b89234be922bc5ca47fb8bd32
Ilia Cherniavskii [Tue, 11 Dec 2018 05:30:53 +0000 (21:30 -0800)]
TBB task graph (#15041)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15041
Adding an alternative implementation of a task graph based on TBB
Reviewed By: dmudiger
Differential Revision: D13412517
fbshipit-source-id: f5efedd680bbe0072bf38d504e5682ab51dd630f
bddppq [Tue, 11 Dec 2018 05:25:45 +0000 (21:25 -0800)]
Enable more caffe2 fp16 rocm tests (#15040)
Summary:
cc rohithkrn petrex
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15040
Reviewed By: houseroad
Differential Revision: D13413068
Pulled By: bddppq
fbshipit-source-id: b2967f16f8da0b9e80083138fb8632c14e9e9b63
Lu Fang [Tue, 11 Dec 2018 05:22:44 +0000 (21:22 -0800)]
Enable the build of tests in ATen/core (#15032)
Summary:
Otherwise they won't build
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15032
Reviewed By: yinghai
Differential Revision: D13409801
Pulled By: houseroad
fbshipit-source-id: 95464aa8f3604835997ba1bb7f3c3e51485d1686