Shen Li [Wed, 12 Dec 2018 23:18:57 +0000 (15:18 -0800)]
Implement torch.tril_indices and torch.triu_indices (#12653) (#14904)
Summary:
This is an optimized implementation that does the following:
1. Creates an empty Tensor of the correct size.
2. Fills the Tensor with the correct values.
The following three designs for filling in the Tensor result in roughly the same performance. Hence, the 2nd option is taken for its simpler code, and because it returns contiguous tensors.
1. Sequential: fill row coordinates first, then columns. This results in two for-loops and more arithmetic operations.
2. Interleaved: fill in index coordinates one by one, which jumps between the two output Tensor rows in every iteration.
3. Transpose: create an n x 2 Tensor, fill it sequentially, and then transpose it.
<img width="352" alt="screen shot 2018-12-10 at 3 54 39 pm" src="https://user-images.githubusercontent.com/
16999635/
49769172-
07bd3580-fc94-11e8-8164-
41839185e9f9.png">
NOTE:
This implementation returns a 2D tensor, instead of a tuple of two tensors. It means that users will not be able to do the following:
```python
x = torch.ones(3, 3)
i = torch.tril_indices(3, 3)
x[i] # need to first convert the 2D tensor into a tuple of two 1D tensors.
```
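For reference, a minimal sketch of that conversion (indexing with the two index rows, or unbinding the 2D tensor into a tuple):
```python
import torch

x = torch.ones(3, 3)
i = torch.tril_indices(3, 3)
lower = x[i[0], i[1]]    # index with the row and column coordinates separately
lower_too = x[tuple(i)]  # or convert the 2D tensor into a tuple of two 1D tensors
```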
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14904
Reviewed By: zou3519
Differential Revision:
D13433027
Pulled By: mrshenli
fbshipit-source-id:
41c876aafcf584832d7069f7c5929ffb59e0ae6a
Imran [Wed, 12 Dec 2018 23:15:45 +0000 (15:15 -0800)]
Minor documentation mistake (#15068)
Summary:
keepdim is an optional parameter for torch.max()
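For reference, a small sketch of the parameter in question:
```python
import torch

x = torch.randn(2, 3)
values, indices = torch.max(x, dim=1)                    # keepdim defaults to False; shapes (2,)
values_k, indices_k = torch.max(x, dim=1, keepdim=True)  # shapes (2, 1)
```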
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15068
Differential Revision:
D13437745
Pulled By: zou3519
fbshipit-source-id:
b5198c7d4ae17758cd136f6e5aecc6cb5838f174
David Riazati [Wed, 12 Dec 2018 20:25:40 +0000 (12:25 -0800)]
Add script standard library documentation + cleanup (#14912)
Summary:
Documents what is supported in the script standard library.
* Adds a `my_script_module._get_method('forward').schema()` method to get the function schema from a `ScriptModule` (see the sketch after this list)
* Removes `torch.nn.functional` from the list of builtins. The only functions not supported are `nn.functional.fold` and `nn.functional.unfold`, but those currently just dispatch to their corresponding aten ops, so from a user's perspective they look like they work.
* Allows printing of `IValue::Device` by getting its string representation
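A minimal sketch of the new schema accessor (the module here is illustrative):
```python
import torch

class MyModule(torch.jit.ScriptModule):
    @torch.jit.script_method
    def forward(self, x):
        return x + 1

m = MyModule()
# prints the function schema (argument and return types) for forward
print(m._get_method('forward').schema())
```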
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14912
Differential Revision:
D13385928
Pulled By: driazati
fbshipit-source-id:
e391691b2f87dba6e13be05d4aa3ed2f004e31da
Immanuel Alexander [Wed, 12 Dec 2018 20:09:47 +0000 (12:09 -0800)]
Move adaptive avg pooling 2d to ATen native (#14714)
Summary:
adaptive_avg_pool1d, adaptive_avg_pool2d, and adaptive_avg_pool3d are neural network functions that are currently implemented in our legacy THNN (CPU) / THCUNN (CUDA) libraries. It is generally better if these live in our new library ATen, since it is more feature complete and reduces cognitive overhead.
This change moves adaptive_avg_pool1d and adaptive_avg_pool2d to ATen.
Timed the relevant CPU tests with this change:
```
[ialex@devgpu064.ash5 ~/pytorch] time python test/test_nn.py
test_AdaptiveAvgPool1d (__main__.TestNN)
test_AdaptiveAvgPool1d_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_single (__main__.TestNN)
test_AdaptiveAvgPool2d_single_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_none (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_none_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_single (__main__.TestNN)
test_AdaptiveAvgPool3d_single_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_none (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_none_cuda (__main__.TestNN)
test_adaptive_log_softmax (__main__.TestNN)
test_adaptive_pooling_input_size (__main__.TestNN)
test_adaptive_pooling_size_none (__main__.TestNN)
.s.s.s.s.s.s.s...
----------------------------------------------------------------------
Ran 17 tests in 6.273s
OK (skipped=7)
real 0m7.164s
user 3m1.289s
sys 0m0.905s
```
compared to master:
```
[ialex@devgpu064.ash5 ~/pytorch] time python test/test_nn.py
test_AdaptiveAvgPool1d (__main__.TestNN)
test_AdaptiveAvgPool1d_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_single (__main__.TestNN)
test_AdaptiveAvgPool2d_single_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_none (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_none_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_single (__main__.TestNN)
test_AdaptiveAvgPool3d_single_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_none (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_none_cuda (__main__.TestNN)
test_adaptive_log_softmax (__main__.TestNN)
test_adaptive_pooling_input_size (__main__.TestNN)
test_adaptive_pooling_size_none (__main__.TestNN)
.s.s.s.s.s.s.s...
----------------------------------------------------------------------
Ran 17 tests in 7.232s
OK (skipped=7)
real 0m8.065s
user 3m34.714s
sys 0m2.440s
```
Also timed the relevant CUDA tests with this change:
```
[ialex@devgpu064.ash5 ~/pytorch] time python test/test_nn.py
test_AdaptiveAvgPool1d (__main__.TestNN)
test_AdaptiveAvgPool1d_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_single (__main__.TestNN)
test_AdaptiveAvgPool2d_single_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_none (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_none_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_single (__main__.TestNN)
test_AdaptiveAvgPool3d_single_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_none (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_none_cuda (__main__.TestNN)
test_adaptive_log_softmax (__main__.TestNN)
test_adaptive_pooling_input_size (__main__.TestNN)
test_adaptive_pooling_size_none (__main__.TestNN)
.................
----------------------------------------------------------------------
Ran 17 tests in 21.049s
OK
real 0m24.106s
user 0m20.890s
sys 0m4.026s
```
compared to master:
```
[ialex@devgpu064.ash5 ~/pytorch] time python test/test_nn.py
test_AdaptiveAvgPool1d (__main__.TestNN)
test_AdaptiveAvgPool1d_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_single (__main__.TestNN)
test_AdaptiveAvgPool2d_single_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_cuda (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_none (__main__.TestNN)
test_AdaptiveAvgPool2d_tuple_none_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_single (__main__.TestNN)
test_AdaptiveAvgPool3d_single_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_cuda (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_none (__main__.TestNN)
test_AdaptiveAvgPool3d_tuple_none_cuda (__main__.TestNN)
test_adaptive_log_softmax (__main__.TestNN)
test_adaptive_pooling_input_size (__main__.TestNN)
test_adaptive_pooling_size_none (__main__.TestNN)
.................
----------------------------------------------------------------------
Ran 17 tests in 23.021s
OK
real 0m27.095s
user 0m20.121s
sys 0m3.668s
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14714
Differential Revision:
D13384084
Pulled By: xnder
fbshipit-source-id:
344442103ccbbda72d3c010d2feea00e9985d226
Jerry Zhang [Wed, 12 Dec 2018 20:06:09 +0000 (12:06 -0800)]
Move numa.{h, cc} to c10/util (#15024)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15024
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14393
att
Reviewed By: dzhulgakov
Differential Revision:
D13380559
fbshipit-source-id:
abc3fc7321cf37323f756dfd614c7b41978734e4
Richard Zou [Wed, 12 Dec 2018 19:32:05 +0000 (11:32 -0800)]
Stop erroneously running aten::warn (#15124)
Summary:
Fixes #15119. Before this PR, we were propagating constants through
aten::warn AND running it as a part of shape analysis.
This caused aten::warn to be run regardless of whether it was
supposed to be run dynamically. This PR adds an exclusion for aten::warn
in constant propagation and shape analysis, similar to that of prim::RaiseException.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15124
Differential Revision:
D13432815
Pulled By: zou3519
fbshipit-source-id:
15ab533ce2accb2da3fd4e569070c7979ce61708
Edward Yang [Wed, 12 Dec 2018 19:19:03 +0000 (11:19 -0800)]
Move CUDAGuard, CUDAStream and CUDAGuardImpl to c10/cuda (#14248)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14248
This diff also introduces a horrifying hack to override CUDA's DeviceGuardImpl
with a HIPGuardImplMasqueradingAsCUDA, to accommodate PyTorch's current
behavior of pretending CUDA is HIP when you build with ROCm enabled.
Reviewed By: bddppq
Differential Revision:
D13145293
fbshipit-source-id:
ee0e207b6fd132f0d435512957424a002d588f02
Gregory Chanan [Wed, 12 Dec 2018 18:55:22 +0000 (10:55 -0800)]
Kill Type.storage. (#15075)
Summary:
It's not used.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15075
Reviewed By: ezyang
Differential Revision:
D13422487
Pulled By: gchanan
fbshipit-source-id:
272aa0a10e96f3ffb97d571490b517f972b9dcf7
Brennan Vincent [Wed, 12 Dec 2018 17:58:54 +0000 (09:58 -0800)]
fix infinite loop when get_max_threads is nonzero but num_threads is 1
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15114
Differential Revision:
D13431891
Pulled By: umanwizard
fbshipit-source-id:
f968b8e50cf776c346d4a28d72b12e7856c95839
Gregory Chanan [Wed, 12 Dec 2018 17:55:42 +0000 (09:55 -0800)]
Ensure there aren't variables in checked_tensor_unwrap, checked_tenso… (#15105)
Summary:
…r_list_unwrap.
These functions use unsafeGetTensorImpl(), which doesn't work with Variables (in a silent way that may blow up later).
So let's do early checking.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15105
Reviewed By: ezyang
Differential Revision:
D13429149
Pulled By: gchanan
fbshipit-source-id:
b85f6f5b7cdb9a6dd0c40205b924c840a3920ba0
Richard Zou [Wed, 12 Dec 2018 17:37:10 +0000 (09:37 -0800)]
Add better support for bools in the graph fuser (#15057)
Summary:
Fixes #15038.
aten::_cast_Float(tensor, non_blocking) support was added in #14336.
Its second argument is a bool, but because we don't support generating values
of type bool in the fuser codegen, the codegen errored out.
aten::_cast_Float in the fuser never actually uses its non_blocking
argument, so another way to fix this would have been to add a special op
for a fused cast. But we might have fusible ops that do take bool
arguments in the future, so this support is good to have.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15057
Differential Revision:
D13432091
Pulled By: zou3519
fbshipit-source-id:
455fe574f5f080aca9a112e346b841a2534a8dc3
Brennan Vincent [Wed, 12 Dec 2018 16:49:04 +0000 (08:49 -0800)]
fix some tests that I accidentally disabled (#15077)
Summary:
While moving these scenarios into `_test_dim_ops` I accidentally left an empty loop in the actual tests, causing them to do nothing.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15077
Differential Revision:
D13428759
Pulled By: umanwizard
fbshipit-source-id:
08f53068981d9192c1408878b168e9053f4dc92e
Edward Yang [Wed, 12 Dec 2018 15:57:54 +0000 (07:57 -0800)]
Don't setup x86_64-linux-gnu-gcc as an sccache wrapper. (#15078)
Summary:
When I do this setup in a local Docker development environment,
I get the following error:
x86_64-linux-gnu-gcc: error trying to exec 'cc1plus': execvp: No such file or directory
Somehow, gcc seems to get confused when it gets run from the wrong
directory. Best not to do it.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15078
Differential Revision:
D13432143
Pulled By: ezyang
fbshipit-source-id:
b18e15f493503a4c8205c85f92a214e49762a7bc
Junjie Bai [Wed, 12 Dec 2018 10:56:37 +0000 (02:56 -0800)]
Use c10::to_string that works cross platform (#15117)
Summary:
Fix master breakage introduced in #15108
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15117
Differential Revision:
D13430568
Pulled By: bddppq
fbshipit-source-id:
ce10bc552f085d1bf0afbc13119991bee014ac95
Zhiping Xiu [Wed, 12 Dec 2018 09:32:28 +0000 (01:32 -0800)]
Add EmptyNameScope to allow you to jump out from the current scope. (#14631)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14631
Adds an empty name scope to allow people to jump out from the current namescope.
This could be useful when you want to access a blob from a parent or sibling scope.
Facebook:
e.g.: we encountered a potential use case in D13124249 (it's a large diff, please search for EmptyNameScope in that diff); we need to access a blob declared in the root namescope from a device namescope (the device namescope is used by the parallel_GPU API). `EmptyNameScope` can help us do that with ease.
I referenced `EmptyDeviceScope` (D6103412) while implementing this one.
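A sketch of the intended usage, assuming `EmptyNameScope` lands next to the existing helpers in `caffe2.python.scope`:
```python
from caffe2.python import scope

with scope.NameScope("gpu_0"):
    print(scope.CurrentNameScope())      # "gpu_0/": blobs created here get the prefix
    with scope.EmptyNameScope():
        print(scope.CurrentNameScope())  # "": back at the root scope, no prefix applied
```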
Reviewed By: yinghai
Differential Revision:
D13272240
fbshipit-source-id:
d4cde5abcc2336e456b6c6ef086266ef94d86da8
bddppq [Wed, 12 Dec 2018 07:20:31 +0000 (23:20 -0800)]
Remove linker and dlopen flags that allowed undefined symbols in rocm build (#15091)
Summary:
Previously the undefined symbols were caused by disabled_modules in tools/amd_build/disabled_features.json (now it's cleared).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15091
Differential Revision:
D13429595
Pulled By: bddppq
fbshipit-source-id:
b341e83f9e5a8d16440a364e837b045a8a4fd6e1
Peter Goldsborough [Wed, 12 Dec 2018 06:38:14 +0000 (22:38 -0800)]
Fix serialization (#15033)
Summary:
Fixes a bug where a hierarchy of submodules, in which one submodule doesn't have any parameters but its submodules do, doesn't get properly (de)serialized. This had to do with the fact that the old protobuf format couldn't store empty parameters.
Fixes https://github.com/pytorch/pytorch/issues/14891
soumith ezyang ebetica
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15033
Differential Revision:
D13411322
Pulled By: goldsborough
fbshipit-source-id:
2ef73b2aa93fa9e46b1cbe1fd47d9f134d6016d5
Fei Sun [Wed, 12 Dec 2018 06:22:42 +0000 (22:22 -0800)]
Update the output format for benchmark_helper. It outputs the dimensi… (#15108)
Summary:
…on first and all the values in the next line. This way, it can output arbitrary blobs.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15108
Reviewed By: llyfacebook
Differential Revision:
D13429346
Pulled By: sf-wind
fbshipit-source-id:
5e0bba2a46fbe8d997dfc3d55a698484552e3af8
Zachary DeVito [Wed, 12 Dec 2018 06:15:20 +0000 (22:15 -0800)]
Pre-commit flake8/clang-tidy (#15102)
Summary:
Provides a pre-commit hook that runs flake8 and clang-tidy checks. Enables the clang-tidy script to run in parallel to make it fast enough to be used in a pre-commit hook.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15102
Reviewed By: soumith
Differential Revision:
D13429629
Pulled By: zdevito
fbshipit-source-id:
bd52fe5652f29b033de8d9926d78350b2da4c2fc
Jane Wang [Wed, 12 Dec 2018 05:03:13 +0000 (21:03 -0800)]
add gloo support for gather on GPU (#14916)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14916
as titled
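A minimal single-process sketch of what this enables, assuming a CUDA device and the torch.distributed gloo backend:
```python
import torch
import torch.distributed as dist

dist.init_process_group('gloo', init_method='tcp://127.0.0.1:29500',
                        rank=0, world_size=1)
t = torch.ones(4, device='cuda')  # CUDA tensors were previously rejected here
out = [torch.empty_like(t)]       # one slot per rank on the destination
dist.gather(t, gather_list=out, dst=0)
```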
Reviewed By: pietern
Differential Revision:
D13267832
fbshipit-source-id:
3b89d08af93f74941f17ff892c33fc2a4a023c19
Sebastian Messmer [Wed, 12 Dec 2018 04:40:33 +0000 (20:40 -0800)]
Fix include paths for UndefinedTensorImpl.h
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14818
Reviewed By: ezyang
Differential Revision:
D13348042
fbshipit-source-id:
11bdfc755767ce9d0a6fa95b2cf49d50adde8d60
Sebastian Messmer [Wed, 12 Dec 2018 04:40:33 +0000 (20:40 -0800)]
Move UndefinedTensorImpl to c10 (meh) (#14817)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14817
unfortunately, we still need this.
Reviewed By: ezyang
Differential Revision:
D13348041
fbshipit-source-id:
e8dcc89f5c71bd1ea2c9813990dac6e58e63b1fd
Sebastian Messmer [Wed, 12 Dec 2018 04:40:32 +0000 (20:40 -0800)]
Fix include paths for TensorImpl.h
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14816
Reviewed By: ezyang
Differential Revision:
D13348040
fbshipit-source-id:
a7204d89c2dd277d13093b0ed862f40b53dee82f
Sebastian Messmer [Wed, 12 Dec 2018 04:40:32 +0000 (20:40 -0800)]
Move TensorImpl to c10 (yay!)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14795
Reviewed By: ezyang
Differential Revision:
D13336856
fbshipit-source-id:
5375d0e42312ff7564f4df06210a5e49542d59e3
Gregory Chanan [Wed, 12 Dec 2018 04:35:37 +0000 (20:35 -0800)]
Add at::scalar_tensor factory function, use it instead of Type.scalar… (#15074)
Summary:
…_tensor.
This is part of a long series of paring down the Type interface.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15074
Differential Revision:
D13421482
Pulled By: gchanan
fbshipit-source-id:
84010ee71fef2cb74d32d5de7858d8ed9f36b885
Edward Yang [Wed, 12 Dec 2018 03:11:02 +0000 (19:11 -0800)]
Make ATen HIPify out-of-place, but still reuse CUDA names. (#14866)
Summary:
```
This diff changes the HIPification of ATen to be out-of-place.
We now have the following mappings:
- ATen/cuda => ATen/hip
- ATen/native/cuda => ATen/native/hip
- ATen/native/sparse/cuda => ATen/native/sparse/hip
- THC => THH
- THCUNN => THHUNN
The build system is adjusted to know about these new build paths,
and HIPify is taught how to adjust include paths and
THC_GENERIC_FILE appropriately. ATen_hip is now built as
the ATen_hip library, rather than reusing ATen_cuda.
However, despite these new filepaths, none of the identifiers in ATen
have actually changed. So, e.g., THHGeneral.h still defines functions
named THC_blahblah, and HIP still shows up as CUDA in PyTorch itself.
We'll tackle this in a subsequent PR; this diff is just to get the files
out-of-place.
Minor extra improvements:
- Don't edit tmp_install when hipifying
- HIP no longer builds native_cudnn_cpp; it was unnecessary
- Caffe2_HIP_INCLUDES is now Caffe2_HIP_INCLUDE, for consistency
with all the other variables.
- HIP build now properly respects ATEN_CUDA_FILES_GEN_LIB (it
did not previously.)
- You can now override file extension matching in pyHIPIFY
by explicitly specifying its full name in the matching list.
This is used so we can HIPify CMakeLists.txt in some situations.
A little bit of string and ceiling wax:
- gen.py grows a --rocm flag so that it knows to generate CUDA
files which actually refer to the HIP headers (e.g., THH.h)
We'll get rid of this eventually and generate real HIP files,
but not for this PR.
- Management of HIP dependencies is now completely deleted
from the ATen CMakeLists.txt. The old code was dead (because
it was shoveled in ATen_CUDA_DEPENDENCY_LIBS and promptly
ignored by the Caffe2 build system) and didn't actually work.
```
Stacked on https://github.com/pytorch/pytorch/pull/14849; review the last commit only.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14866
Differential Revision:
D13419475
Pulled By: ezyang
fbshipit-source-id:
cb4c843df69a1d8369314c9fab1b7719520fa3db
Daniel Ingram [Wed, 12 Dec 2018 01:38:58 +0000 (17:38 -0800)]
Add error type to raise statement
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15039
Differential Revision:
D13419566
Pulled By: zou3519
fbshipit-source-id:
f67a3aebce937e3e640e91e81eb3e184cfdf269c
Peter Goldsborough [Wed, 12 Dec 2018 00:36:25 +0000 (16:36 -0800)]
Remove deprecated variable_tensor_functions (#15003)
Summary:
Removing the deprecated functions in `torch/csrc/variable_tensor_functions.h` (like `torch::CPU`) and corresponding implementations from `torch/csrc/torch.cpp` from master after the release.
ezyang gchanan soumith
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15003
Differential Revision:
D13418086
Pulled By: goldsborough
fbshipit-source-id:
a0accdf6f7b0efa1ec07ac7b74b86ff2da37543f
Jane Wang [Wed, 12 Dec 2018 00:13:31 +0000 (16:13 -0800)]
add gloo scatter support on GPU (#14917)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14917
as titled
Reviewed By: pietern
Differential Revision:
D13271560
fbshipit-source-id:
0187a3390f8ebd72a2c074e7a651432159d427c0
Zachary DeVito [Wed, 12 Dec 2018 00:11:09 +0000 (16:11 -0800)]
re-enable copy of python files, but be careful that the copy is only … (#14982)
Summary:
…done once
This allows the no-op build to work correctly even when BUILD_CAFFE2_OPS is on.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14982
Differential Revision:
D13413960
Pulled By: zdevito
fbshipit-source-id:
6e5412a8c375af8a47c76f548cdd31cff15f3853
Richard Zou [Tue, 11 Dec 2018 22:50:33 +0000 (14:50 -0800)]
Split off fuser tests in test_jit.py to their own test case (#15072)
Summary:
This PR creates TestFuser inside test_jit.py to be a home for graph fuser
specific tests.
This was a useful exercise because now that all the fuser tests are in
one place, I can spot redundant and bitrotting tests for cleanup in a
future PR.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15072
Differential Revision:
D13421458
Pulled By: zou3519
fbshipit-source-id:
80b1a7712feff75a0c186d1664601c4edbbca694
David Riazati [Tue, 11 Dec 2018 21:49:59 +0000 (13:49 -0800)]
Suppress warnings on generated tests
Summary: Removes all warning spew for the TestJitGenerated tests
Differential Revision:
D13420919
fbshipit-source-id:
f251c12f923088ccc5daa2984c15003a67cbd1c1
Josef Lindman Hörnlund [Tue, 11 Dec 2018 21:36:00 +0000 (13:36 -0800)]
Issue 14984: Remove divide by zero error in index_put_ (#14986)
Summary:
No check for a zero-element index tensor was done in the accumulate=True (serial) case in the new TensorIterator code since https://github.com/pytorch/pytorch/pull/13420.
https://github.com/pytorch/pytorch/issues/14984
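A minimal repro sketch of the case the fix covers:
```python
import torch

x = torch.zeros(5)
idx = torch.tensor([], dtype=torch.long)  # zero-element index tensor
vals = torch.tensor([])
# with accumulate=True this used to hit a divide-by-zero in the serial path
x.index_put_((idx,), vals, accumulate=True)
```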
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14986
Differential Revision:
D13417861
Pulled By: colesbury
fbshipit-source-id:
e6ed1af8f708b53a35803fc157ed1f043169ec89
zrphercule [Tue, 11 Dec 2018 21:12:23 +0000 (13:12 -0800)]
Update onnx coverage script for more accurate result (#15029)
Summary:
The coverage of scalar-input test cases was not accurate. This patch fixes that.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15029
Differential Revision:
D13419764
Pulled By: zrphercule
fbshipit-source-id:
a14a5cbef432bea8c9126156f5deb1125e1aeb47
Michael Suo [Tue, 11 Dec 2018 21:12:20 +0000 (13:12 -0800)]
tox.ini -> .flake8 (#15065)
Summary:
We were only using this file to configure flake8, and fbcode linters do not recognize tox.ini, which causes spurious linter warnings.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15065
Differential Revision:
D13420774
Pulled By: suo
fbshipit-source-id:
e43a46befa36862c8b3c0a90074aec6a66531492
Roy Li [Tue, 11 Dec 2018 21:06:11 +0000 (13:06 -0800)]
silence unreachable code warnings (#15036)
Summary:
Stack:
:black_circle: **#15036 silence unreachable code warnings** [:yellow_heart:](https://our.intern.facebook.com/intern/diff/D13411100/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15036
Differential Revision:
D13414712
Pulled By: li-roy
fbshipit-source-id:
d4aa84571fa94c66f3c5bfa9575a10c6ee398f9e
Michael Suo [Tue, 11 Dec 2018 19:44:27 +0000 (11:44 -0800)]
improve deep equality check in alias annotation test (#15031)
Summary:
Previously we were returning true if either IValue wasn't a tensor, which…is bad
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15031
Differential Revision:
D13409759
Pulled By: suo
fbshipit-source-id:
f8bdcd05d334c1276ce46f55812065d358c1ff5d
James Sun [Tue, 11 Dec 2018 19:14:50 +0000 (11:14 -0800)]
Fix race condition in ThreadPool::workOnTasksUntilCompleted (#14833)
Summary:
Resolves #14704
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14833
Differential Revision:
D13405211
Pulled By: highker
fbshipit-source-id:
8552d51eeb5d3af0ed66c461e5ddfeb9ae2926bd
TerryTsao [Tue, 11 Dec 2018 18:41:37 +0000 (10:41 -0800)]
Fix CMakeLists.txt for Int8 python bindings (#15047)
Summary:
Currently in caffe2, one cannot properly fetch the content of Int8 blobs.
Upon digging into the source code, it turns out that the relevant source files are not being compiled. Adding them to CMakeLists.txt fixes this issue.
First time ever doing a pull request. Please let me know if there's any rule I should follow. Thanks.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15047
Differential Revision:
D13417583
Pulled By: bddppq
fbshipit-source-id:
dd39575971a3012635edbf97a045d80e4b62a8eb
Orion Reblitz-Richardson [Tue, 11 Dec 2018 17:59:28 +0000 (09:59 -0800)]
Install cpp tests when built (#15000)
Summary:
This is broken out of https://github.com/pytorch/pytorch/pull/13733/
We want to install cpp tests so they can ultimately be runnable from that location for Caffe2 tests run from PyTorch builds.
cc pjh5 yf225 anderspapitto
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15000
Reviewed By: pjh5
Differential Revision:
D13416253
Pulled By: orionr
fbshipit-source-id:
51280be0a22557a742f90c9f303c58c35cbd4a38
Michael Carilli [Tue, 11 Dec 2018 17:46:25 +0000 (09:46 -0800)]
Stashing checkpointing RNG states based on devices of arg tensors (#14518)
Summary:
This PR intends to address apaszke's concerns in https://github.com/pytorch/pytorch/pull/14253#issuecomment-441740016. Preserving the RNG state is now controlled by a kwarg rather than a global state, hopefully in a Python 2.7-compatible way.
Additionally, the checkpointing function stashes and restores the RNG states of
1. devices associated with all input tensor args to run_fn, as well as
2. the current device.
I could easily change this to only save and restore the RNG states associated with 1. alone. This would simplify the logic to create a [deduplicated, ordered](https://github.com/pytorch/pytorch/compare/master...mcarilli:checkpointing_rng_touchup?expand=1#diff-58da227fc9b1d56752b7dfad90428fe0R37) list of devices considered active.
I'm wondering if the [get_device_states](https://github.com/pytorch/pytorch/compare/master...mcarilli:checkpointing_rng_touchup?expand=1#diff-58da227fc9b1d56752b7dfad90428fe0R32) and [set_device_states](https://github.com/pytorch/pytorch/compare/master...mcarilli:checkpointing_rng_touchup?expand=1#diff-58da227fc9b1d56752b7dfad90428fe0R47) functions are general enough to reside elsewhere (presumably torch/random.py). I'm also wondering if the check on [torch.cuda._initialized](https://github.com/pytorch/pytorch/compare/master...mcarilli:checkpointing_rng_touchup?expand=1#diff-58da227fc9b1d56752b7dfad90428fe0R47) would be better placed within `get_device_states`.
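A small sketch of the resulting API (assuming the kwarg is named `preserve_rng_state`, which the summary above doesn't spell out):
```python
import torch
from torch.utils.checkpoint import checkpoint

def run_fn(x):
    # uses the RNG, so recomputation must see the same RNG state
    return torch.nn.functional.dropout(x, p=0.5, training=True)

x = torch.randn(4, 4, requires_grad=True)
# stash/restore the RNG states of the current device and of all tensor args' devices
y = checkpoint(run_fn, x, preserve_rng_state=True)
y.sum().backward()
```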
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14518
Differential Revision:
D13356210
Pulled By: ezyang
fbshipit-source-id:
afa4cc21ce7862142d5cb1dec3750018df222039
svcscm [Tue, 11 Dec 2018 15:38:23 +0000 (07:38 -0800)]
Updating submodules
Reviewed By: cdelahousse
fbshipit-source-id:
d39b31f12ab2ab570548f3e8a65949332a64a0ff
Marat Dukhan [Tue, 11 Dec 2018 08:46:55 +0000 (00:46 -0800)]
Switch Int8Softmax, Int8Relu, and Int8LeakyRelu to QNNPACK (#14933)
Summary:
Int8Softmax: 4x-5x speedup compared to previous implementation
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14933
Differential Revision:
D13406820
Pulled By: Maratyszcza
fbshipit-source-id:
ea8cbe1b861ddb7ff1b851d06d52c6fd6d04ed01
Lingyi Liu [Tue, 11 Dec 2018 06:49:47 +0000 (22:49 -0800)]
Adjust the API call to deserilize the tensorproto (#14132)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14132
as title
Reviewed By: jerryzh168
Differential Revision:
D13110697
fbshipit-source-id:
822c9079de11951f90aec3d26f0e4108847e7dac
Natalia Gimelshein [Tue, 11 Dec 2018 06:48:16 +0000 (22:48 -0800)]
use datatype dependent tolerance in data parallel tests
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14856
Differential Revision:
D13413560
Pulled By: soumith
fbshipit-source-id:
b3a0cfe93477ed332e6eaa2e39ef5f4cc8b36481
paland3 [Tue, 11 Dec 2018 06:34:17 +0000 (22:34 -0800)]
Update pooling.py (#14998)
Summary:
Strange line in the documentation.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14998
Differential Revision:
D13413235
Pulled By: soumith
fbshipit-source-id:
80d05ec1185719b785f0aac914bc2369c1174f2f
Zachary DeVito [Tue, 11 Dec 2018 06:10:11 +0000 (22:10 -0800)]
Clean up casting ops (#14947)
Summary:
This removes FloatToInt-style names, replacing them with just the destination
name (e.g. FloatToInt -> Float). This makes it more consistent with the
syntax and makes it easier to add type conversions (just add a new
prim::Int op, for instance).
None of these ops get serialized, so this should not affect loading of
old models.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14947
Differential Revision:
D13408409
Pulled By: zdevito
fbshipit-source-id:
d773fe863f14d9de893f686832769f8cc8903a8e
Jongsoo Park [Tue, 11 Dec 2018 06:08:04 +0000 (22:08 -0800)]
share code between adagrad and rowwise adagrad tests (#14692)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14692
Remove some code duplication
Reviewed By: chocjy
Differential Revision:
D13296731
fbshipit-source-id:
5924e037ca64fc4b89234be922bc5ca47fb8bd32
Ilia Cherniavskii [Tue, 11 Dec 2018 05:30:53 +0000 (21:30 -0800)]
TBB task graph (#15041)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15041
Adding an alternative implementation of a task graph based on TBB
Reviewed By: dmudiger
Differential Revision:
D13412517
fbshipit-source-id:
f5efedd680bbe0072bf38d504e5682ab51dd630f
bddppq [Tue, 11 Dec 2018 05:25:45 +0000 (21:25 -0800)]
Enable more caffe2 fp16 rocm tests (#15040)
Summary:
cc rohithkrn petrex
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15040
Reviewed By: houseroad
Differential Revision:
D13413068
Pulled By: bddppq
fbshipit-source-id:
b2967f16f8da0b9e80083138fb8632c14e9e9b63
Lu Fang [Tue, 11 Dec 2018 05:22:44 +0000 (21:22 -0800)]
Enable the build of tests in ATen/core (#15032)
Summary:
Otherwise they won't build
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15032
Reviewed By: yinghai
Differential Revision:
D13409801
Pulled By: houseroad
fbshipit-source-id:
95464aa8f3604835997ba1bb7f3c3e51485d1686
Gregory Chanan [Tue, 11 Dec 2018 03:51:27 +0000 (19:51 -0800)]
More scaffolding for LegacyTHDispatch. (#14852)
Summary:
1) at::functions are now also exposed in the at::legacy::th namespace and we move relevant calls over to use them (to avoid merge conflicts)
2) LegacyTHDispatch now handles device-type initialization
3) We generate derived LegacyTHDispatchers, e.g. THLegacyCPULongDispatcher, although they are currently empty.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14852
Reviewed By: ezyang
Differential Revision:
D13360852
Pulled By: gchanan
fbshipit-source-id:
af6705aeba3593ea5dba9bfc62890e5257bc81f8
Ilia Cherniavskii [Tue, 11 Dec 2018 03:18:06 +0000 (19:18 -0800)]
Back out "Revert
D13043261: [caffe2] Task graph and task future abstractions in executor"
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15030
Reviewed By: bddppq
Differential Revision:
D13408998
fbshipit-source-id:
9eb675e09fbc4829eab34df7aa660a0590816feb
Jerry Zhang [Tue, 11 Dec 2018 03:16:18 +0000 (19:16 -0800)]
Tensor construction codemod - 2/3 (#14836)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14836
Codemod generated with clangr shard mode, 25 files per diff,
motivation: https://github.com/pytorch/pytorch/pull/12407
Reviewed By: bddppq
Differential Revision:
D13335176
fbshipit-source-id:
8d89510670e2cf70559d2f75e68f7181feb0b6d9
Jesse Hellemn [Tue, 11 Dec 2018 02:16:42 +0000 (18:16 -0800)]
Fixing reading of FBGEMM from env variables
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15023
Reviewed By: orionr
Differential Revision:
D13406778
Pulled By: pjh5
fbshipit-source-id:
2265f01170fb7969cbdf4e44ca6ef183f5d8017d
Syed Tousif Ahmed [Tue, 11 Dec 2018 01:53:33 +0000 (17:53 -0800)]
Alignas Array struct (#14920)
Summary:
This PR aligns the Array struct so that CUDA vector performance improvements can be utilized.
I tested this by using it in our Philox header. Note how the vector store instruction gets used for CUDA vector types and when using alignas on Array, versus when not using alignas on Array.
With cuda vector type (uint4, uint2, float4): https://godbolt.org/z/UaWOmR
With alignas: https://godbolt.org/z/Eeh0t5
Without alignas: https://godbolt.org/z/QT63gq
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14920
Differential Revision:
D13406751
Pulled By: soumith
fbshipit-source-id:
685b1010ef1f576dde30c278b1e9b642f87c843d
rohithkrn [Tue, 11 Dec 2018 01:25:46 +0000 (17:25 -0800)]
Integrate rocBLAS fp16 api into Caffe2 (#14882)
Summary:
This PR integrates rocBLAS half and mixed-precision APIs into Caffe2.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14882
Differential Revision:
D13407840
Pulled By: bddppq
fbshipit-source-id:
75cb0d74da066776fa66575f1d255e879d36121e
Junjie Bai [Tue, 11 Dec 2018 01:19:15 +0000 (17:19 -0800)]
Fix old tensor CopyFrom usage in boolean mask operator
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15025
Differential Revision:
D13407323
Pulled By: bddppq
fbshipit-source-id:
1bc1d28ad0c6c71d25d788549be18917e393ee50
Jongsoo Park [Tue, 11 Dec 2018 01:16:32 +0000 (17:16 -0800)]
unit test with multiple omp threads (#14958)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14958
Test with multiple threads
Reviewed By: jianyuh
Differential Revision:
D13394791
fbshipit-source-id:
931a6c3bda15ebc816807e537dd0841c383e7a6f
Jerry Zhang [Tue, 11 Dec 2018 01:13:51 +0000 (17:13 -0800)]
Remove partially initialized Tensor in Deserialization (#14197)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14197
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13642
Previously we passed a partially initialized Tensor into Deserialize and it filled
it with the result of deserializing a tensor proto. Now we want it to return
a Tensor directly, since a Tensor is just a shared pointer to TensorImpl.
Reviewed By: dzhulgakov
Differential Revision:
D12874357
fbshipit-source-id:
12b80a763375da23cfa64a74d6bc186d8d03b94f
Junjie Bai [Mon, 10 Dec 2018 23:56:37 +0000 (15:56 -0800)]
Revert D13043261: [caffe2] Task graph and task future abstractions in executor
Differential Revision:
D13043261
Original commit changeset:
d89424354aea
fbshipit-source-id:
b307e3281c4d83b60ba2bfadcbcf69afb7a41412
James Reed [Mon, 10 Dec 2018 23:35:11 +0000 (15:35 -0800)]
apply() for ScriptModules (#14655)
Summary:
This can be used to initialize state that is not necessarily eligible for serialization or is implementation-specific. Concretely, I'm going to use this to pack the weight matrices for quantized Linear modules according to the FBGEMM APIs.
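A minimal sketch of the hook (the state set up here is purely illustrative):
```python
import torch

class Leaf(torch.jit.ScriptModule):
    pass

class Root(torch.jit.ScriptModule):
    def __init__(self):
        super(Root, self).__init__()
        self.leaf = Leaf()

def init_state(module):
    # called once per (sub)module, children first; a place to set up
    # implementation-specific state that is not serialized
    print('visiting', type(module).__name__)

Root().apply(init_state)
```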
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14655
Differential Revision:
D13404438
Pulled By: jamesr66a
fbshipit-source-id:
2d327cef5520fdd716b5b1b29effd60a049e8a4a
Edward Yang [Mon, 10 Dec 2018 23:16:27 +0000 (15:16 -0800)]
Simplify THPPointer implementation for Storage. (#14897)
Summary:
We've virtualized the destructor for storage, so we
no longer have to forward to a particular backend.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14897
Differential Revision:
D13399216
Pulled By: ezyang
fbshipit-source-id:
531d29c3f278477cfa8759f30ab4f304d695b659
Edward Yang [Mon, 10 Dec 2018 23:10:23 +0000 (15:10 -0800)]
Disable getNumGPUs rewrite (#14993)
Summary:
cc iotamudelta
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14993
Differential Revision:
D13405804
Pulled By: ezyang
fbshipit-source-id:
c4aa9ed29ee2a4f3abf76c1e0fa8babfd738db35
Sebastian Messmer [Mon, 10 Dec 2018 23:06:30 +0000 (15:06 -0800)]
Fix include path for WrapDimMinimal.h
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14794
Reviewed By: dzhulgakov
Differential Revision:
D13336842
fbshipit-source-id:
ca49a9fd1d409d8a75e43eeb9b9b02c305ebb79a
Sebastian Messmer [Mon, 10 Dec 2018 23:06:30 +0000 (15:06 -0800)]
Move WrapDimMinimal to c10
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14793
Reviewed By: ezyang
Differential Revision:
D13336841
fbshipit-source-id:
4365a799e1856cc68dd94a273e97663fee5f51db
Edward Yang [Mon, 10 Dec 2018 22:52:16 +0000 (14:52 -0800)]
Stop disabling maybeOverlappingIndices (#14999)
Summary:
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
cc iotamudelta
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14999
Differential Revision:
D13405754
Pulled By: ezyang
fbshipit-source-id:
98459496494390ad1115b4f1f6738d53c14f0745
Jane Wang [Mon, 10 Dec 2018 22:29:41 +0000 (14:29 -0800)]
add gloo allgather support on GPU (#14576)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14576
as titled
Reviewed By: pietern
Differential Revision:
D13266063
fbshipit-source-id:
e262f77d63724a7504a7112907bbfba49612fe75
Ilia Cherniavskii [Mon, 10 Dec 2018 22:21:24 +0000 (14:21 -0800)]
Task graph and task future abstractions in executor
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14116
Reviewed By: dmudiger
Differential Revision:
D13043261
fbshipit-source-id:
d89424354aea14d1d14eb8320fb3aa34908a4e81
Jerry Zhang [Mon, 10 Dec 2018 22:17:43 +0000 (14:17 -0800)]
caffe2/caffe2/contrib/script (#15007)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15007
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14979
att
Reviewed By: dzhulgakov
Differential Revision:
D13286191
fbshipit-source-id:
b8a6bc7aea44487aea4dcf7f44c858fd30c6293c
Michael Suo [Mon, 10 Dec 2018 21:43:11 +0000 (13:43 -0800)]
s/Torch Script/TorchScript/g (#15011)
Summary:
pls
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15011
Differential Revision:
D13404158
Pulled By: suo
fbshipit-source-id:
e906281463d65c86e4e9073eb0c0a26f4f29e307
Yuxin Wu [Mon, 10 Dec 2018 20:48:11 +0000 (12:48 -0800)]
Improve the docs of interpolate(align_corners=) (#14806)
Summary:
ailzhang
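For context, a small sketch of the option being documented:
```python
import torch
import torch.nn.functional as F

x = torch.arange(4, dtype=torch.float).view(1, 1, 4)
# align_corners=True maps corner input pixels exactly onto corner output
# pixels, so the interpolation depends on the input size; False treats
# pixels as unit areas and is the default
up_t = F.interpolate(x, size=8, mode='linear', align_corners=True)
up_f = F.interpolate(x, size=8, mode='linear', align_corners=False)
```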
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14806
Reviewed By: ailzhang
Differential Revision:
D13366332
Pulled By: ppwwyyxx
fbshipit-source-id:
08fcea95d5c86b11cdfe464fdd9daa50050871f1
Giuseppe Ottaviano [Mon, 10 Dec 2018 19:54:45 +0000 (11:54 -0800)]
Improve build time of register_symbols.cpp without compiler hacks (#14911)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14911
In optimized modes the compiler tries to inline all the
`unordered_map::operator[]` calls, creating a massive amount of code
which takes several minutes to optimize. Instead, create a table of
PODs and populate the maps using a simple loop.
Reviewed By: soumith, luciang
Differential Revision:
D13382948
fbshipit-source-id:
b6752921e0f7213595d26b39e4397f6a3897960b
Edward Yang [Mon, 10 Dec 2018 18:44:13 +0000 (10:44 -0800)]
Delete defunct THP_API.h header. (#14899)
Summary:
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14899
Differential Revision:
D13383687
Pulled By: ezyang
fbshipit-source-id:
f2a08a769cc3775ba55f9c58d622a83df622d816
Edward Yang [Mon, 10 Dec 2018 18:40:25 +0000 (10:40 -0800)]
Disable test_leaf_variable_sharing on ASAN runs
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15001
Reviewed By: orionr
Differential Revision:
D13399119
fbshipit-source-id:
6b1d098e55a67b1f5bc6d08a8ee3c1be8234a654
Edward Yang [Mon, 10 Dec 2018 18:29:43 +0000 (10:29 -0800)]
Revert D13306052: [pytorch][PR] Allow converting CharTensor to np arrays
Differential Revision:
D13306052
Original commit changeset:
202d038f139c
fbshipit-source-id:
11f6bdd687f8ea5ce2e5f28f48d19449a5c403eb
Edward Yang [Mon, 10 Dec 2018 17:32:55 +0000 (09:32 -0800)]
Non-INTERFACE AT_LINK_STYLE is dead code (#14822)
Summary:
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14822
Differential Revision:
D13355574
Pulled By: ezyang
fbshipit-source-id:
a7173084f8735424619b2e393df2715a05918b44
SsnL [Mon, 10 Dec 2018 16:05:06 +0000 (08:05 -0800)]
Support torch.load with encoding (#14743)
Summary:
Addresses a common compatibility issue regarding bytes when loading Py2 checkpoints in Py3.
E.g.,
[1] https://github.com/pytorch/pytorch/issues/5994,
[2] https://github.com/CSAILVision/places365/issues/25,
[3] https://discuss.pytorch.org/t/how-to-load-a-saved-model-trained-on-pytorch-0-3-1-python-2-7-on-pyorch-1-0-python-3-7/31212
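A minimal sketch of the resulting usage (the file name is illustrative):
```python
import torch

# checkpoint saved with Python 2; under Python 3, tell the unpickler how
# to decode Py2 str objects that are really bytes
state = torch.load('py2_checkpoint.pth', encoding='latin1')
# or keep them as bytes objects:
state_b = torch.load('py2_checkpoint.pth', encoding='bytes')
```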
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14743
Reviewed By: weiyangfb
Differential Revision:
D13350888
Pulled By: soumith
fbshipit-source-id:
2df4e828a8b70509118a355307ca3ebe51e108f6
SsnL [Mon, 10 Dec 2018 15:36:06 +0000 (07:36 -0800)]
Convert int8 numpy array to CharTensor (#14700)
Summary:
When rewriting `default_collate`, I noticed that `from_numpy`, `as_tensor`, and `tensor` all fail on `np.int8` arrays.
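For reference, a small sketch of the conversion in question:
```python
import numpy as np
import torch

a = np.array([1, -2, 3], dtype=np.int8)
t = torch.from_numpy(a)  # now yields a CharTensor (torch.int8)
b = t.numpy()            # the reverse direction is #14710 below
```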
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14700
Reviewed By: weiyangfb
Differential Revision:
D13305297
Pulled By: soumith
fbshipit-source-id:
2937110f65ed714ee830d50098db292238e9b2a9
SsnL [Mon, 10 Dec 2018 15:33:26 +0000 (07:33 -0800)]
Allow converting CharTensor to np arrays (#14710)
Summary:
The other direction of #14700
cc soumith
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14710
Reviewed By: weiyangfb
Differential Revision:
D13306052
Pulled By: soumith
fbshipit-source-id:
202d038f139cf05e01069ff8d05268c66354c983
Jongsoo Park [Mon, 10 Dec 2018 09:06:17 +0000 (01:06 -0800)]
pre-pack operation of dnnlowp conv with 16-bit accumulation (#14881)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14881
This diff allows us to pre-quantize and pre-pack the weight matrix used in DNNLOWP_ACC16.
The intended use pattern is to run Int8ConvPackWeight in init_net to generate a packed weight, and have Int8Conv with the DNNLOWP_ACC16 engine use the packed weight.
Reviewed By: csummersea
Differential Revision:
D13374662
fbshipit-source-id:
dd02b9a4eb7af1fe208aa857fcd0b445e6e395af
Zachary DeVito [Mon, 10 Dec 2018 06:45:18 +0000 (22:45 -0800)]
Respect -q of setup.py (#14972)
Summary:
1. Changes the prints along the 'rebuild' pathway to respect the '-q' flag of setup.py
A clean rebuild now only prints:
```
[zdevito@devgpu172.prn2 /data/users/zdevito/pytorch] python setup.py -q rebuild develop
[0/1] Install the project...
-- Install configuration: "RelWithDebInfo"
ninja: no work to do.
ninja: no work to do.
ninja: no work to do.
ninja: no work to do.
ninja: no work to do.
ninja: no work to do.
```
2. Deletes apparently dead calls to `generate_code`. Now that CMake builds these files,
it appears that it is getting called twice and the second version is never used.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14972
Reviewed By: soumith
Differential Revision:
D13396330
Pulled By: zdevito
fbshipit-source-id:
83c45143bbc6a6d2c1cfee929291ec059f2b5dc3
SsnL [Mon, 10 Dec 2018 05:10:39 +0000 (21:10 -0800)]
_get_device_index supports parsing device strings
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14929
Reviewed By: weiyangfb
Differential Revision:
D13394498
Pulled By: soumith
fbshipit-source-id:
948c6118abdf6c1e1a8a17709333954cafb2345e
Soumith Chintala [Mon, 10 Dec 2018 04:41:44 +0000 (20:41 -0800)]
remove mingfeima mkldnn reference from README, as no longer necessary (#14975)
Summary: we now get mkldnn automatically from third_party/ideep
Differential Revision:
D13396480
Pulled By: soumith
fbshipit-source-id:
20f819ba4b78cbe9c7d0baeab1c575669cbf6c20
Zachary DeVito [Mon, 10 Dec 2018 00:29:38 +0000 (16:29 -0800)]
fixing some rebuild issues (#14969)
Summary:
This fixes rebuild issues with the ninja part of the build. With this patch, all ninja files will now report `nothing to do` if nothing has changed, assuming `BUILD_CAFFE2_OPS=0`.
1. This only does the Python file processing for caffe2 when BUILD_CAFFE2_OPS=1. This part of the build is written in such a way that it always reruns and can take substantial time moving files around in the no-op build. In the future this part should be rewritten to use a faster method of copying the files, or should treat copying the files as part of the build rules and only run when the files are out of date.
2. This points `sleef` to a patched version that fixes a dead build output that was causing everything to relink all the time. See https://github.com/shibatch/sleef/pull/231#partial-pull-merging for the upstream change.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14969
Reviewed By: soumith
Differential Revision:
D13395998
Pulled By: zdevito
fbshipit-source-id:
ca85b7be9e99c5c578103c144ef0f2c3b927e724
vishwakftw [Sun, 9 Dec 2018 23:53:34 +0000 (15:53 -0800)]
Remove deprecated info argument in btrifact (#14935)
Summary:
As specified in title.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14935
Differential Revision:
D13394449
Pulled By: soumith
fbshipit-source-id:
569d59414f3a1a43ea641bded4b5433eb53e3490
Soumith Chintala [Sun, 9 Dec 2018 23:52:25 +0000 (15:52 -0800)]
add fix for CUDA 10 (#14971)
Summary:
Linux binaries-only fix for CUDA10
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14971
Differential Revision:
D13395932
Pulled By: soumith
fbshipit-source-id:
a72d6ab6b98c6c936e6391d55d2e4e45b9f1e6dd
Your Name [Sun, 9 Dec 2018 16:55:26 +0000 (08:55 -0800)]
Fix mismatched test_{full,ones,zeros}_like onnx expect files (#14956)
Summary:
master broken by #14903
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14956
Differential Revision:
D13395363
Pulled By: bddppq
fbshipit-source-id:
31f0913843292e557807fd5a976f8907fa6cae4b
Yiming Wu [Sun, 9 Dec 2018 16:23:36 +0000 (08:23 -0800)]
fix auto grad summing for IfOp where intermediate output needs renaming (#14772)
Summary:
Fixes auto grad summing for IfOp where an intermediate output needs renaming.
Bug before this diff:
- we only renamed the output of IfOp without changing the subnet ops' outputs
- this resulted in a blob-not-found error
The unit test provides an example; this diff fixes that for IfOp.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14772
Differential Revision:
D13327090
Pulled By: harouwu
fbshipit-source-id:
ec40ee88526ace3619c54551e223dd71158a02f8
Spandan Tiwari [Sun, 9 Dec 2018 06:46:03 +0000 (22:46 -0800)]
Export ones_like, zeros_like and full_like using ONNX ConstantLike op. (#14903)
Summary:
This PR does the following:
1) Updates the ONNX export for `torch.zeros_like` and `torch.full_like` ops to use the ONNX op `ConstantLike`. This reduces the use of the experimental op `ConstantFill`, which may be removed in the future (see https://github.com/onnx/onnx/pull/1434).
2) It also adds export support for `torch.ones_like`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14903
Differential Revision:
D13383700
Pulled By: houseroad
fbshipit-source-id:
566d00a943e9497172fcd5a034b638a650ab13a2
Edward Yang [Sun, 9 Dec 2018 03:32:01 +0000 (19:32 -0800)]
Canonicalize all includes in PyTorch. (#14849)
Summary:
Anywhere we used #include "foo.h", we now say #include <foo.h>
Paths are adjusted to be rooted out of aten/src, torch/lib, or
the root level directory.
I modified CMakeLists.txt by hand to remove TH and THC from
the include paths.
I used the following script to do the canonicalization:
```
import subprocess
import re
import os.path
files = subprocess.check_output(['git', 'ls-files']).decode('utf-8').rstrip().split('\n')
for fn in files:
    if not any(fn.endswith(suff) for suff in ['.cu', '.cpp', '.in', '.h', '.hpp', '.cu', '.cuh', '.cc']):
        continue
    if not any(fn.startswith(pref) for pref in ["aten/", "torch/"]):
        continue
    with open(fn, 'r') as f:
        c = f.read()
    def fmt(p):
        return "#include <{}>".format(p)
    def repl(m):
        p = m.group(1)
        if p in ["dlfcn.h", "unistd.h", "nvrtc.h", "cuda.h", "cuda_runtime.h", "cstdint", "cudnn.h", "Python.h", "cusparse.h", "cuda_runtime_api.h", "cuda_fp16.h", "cublas_v2.h", "stdint.h", "curand_kernel.h"]:
            return fmt(p)
        if any(p.startswith(pref) for pref in ["torch/csrc", "c10/", "ATen/", "caffe2/", "TH/", "THC/", "Eigen/", "gtest/", "zdl/", "gloo/", "onnx/", "miopen/"]):
            return fmt(p)
        for root in ["aten/src", "torch/lib", ""]:
            for bad_root in [os.path.dirname(fn), "aten/src/TH", "aten/src/THC", "torch/csrc"]:
                new_p = os.path.relpath(os.path.join(bad_root, p), root)
                if not new_p.startswith("../") and (os.path.exists(os.path.join(root, new_p)) or os.path.exists(os.path.join(root, new_p + ".in"))):
                    return fmt(new_p)
        print("ERROR: ", fn, p)
        return m.group(0)
    new_c = re.sub(r'#include "([^"]+)"', repl, c)
    if new_c != c:
        print(fn)
        with open(fn, 'w') as f:
            f.write(new_c)
```
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14849
Reviewed By: dzhulgakov
Differential Revision:
D13363445
Pulled By: ezyang
fbshipit-source-id:
52361f878a672785f9306c9e9ab2513128092b68
Jongsoo Park [Sun, 9 Dec 2018 02:15:00 +0000 (18:15 -0800)]
race condition fix of calling mutable_data inside a openmp region (#14921)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14921
Fix race condition introduced in D13188595.
Let's remind ourselves: "never call mutable_data from an OpenMP region!!!"
Reviewed By: jianyuh
Differential Revision:
D13387692
fbshipit-source-id:
6a3aeedeeda55a9ede660de8f1f44d4eee76ae2b
Fei Sun [Sat, 8 Dec 2018 19:12:40 +0000 (11:12 -0800)]
Add crop argument, can crop rec as well, first resize and then crop
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14894
Reviewed By: llyfacebook
Differential Revision:
D13377604
Pulled By: sf-wind
fbshipit-source-id:
333d0d864e6c2dc85f405baa25ed58029d62750f
Marat Dukhan [Sat, 8 Dec 2018 10:45:41 +0000 (02:45 -0800)]
Switch Int8Sigmoid to QNNPACK (#14883)
Summary:
50x-100x speedup compared to the current version.
Also fixes a bug in the current version when the batch size exceeds 1 (the current version processes only the first image in this case).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14883
Differential Revision:
D13390655
Pulled By: Maratyszcza
fbshipit-source-id:
1b33a97bf2d0866d38faa2b42e64fd2859017898
Your Name [Sat, 8 Dec 2018 09:04:02 +0000 (01:04 -0800)]
ONNX changes to use int32_t (instead of enum) to store data type
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14926
Reviewed By: houseroad
Differential Revision:
D13390642
Pulled By: bddppq
fbshipit-source-id:
c2314b24d9384f188fda2b9a5cc16465ad39581e
Sebastian Messmer [Sat, 8 Dec 2018 08:26:14 +0000 (00:26 -0800)]
Remove at references from c10
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14432
Reviewed By: dzhulgakov
Differential Revision:
D13223904
fbshipit-source-id:
43b06e33e088e7789ccea6d92267936fe30d8571
Brennan Vincent [Sat, 8 Dec 2018 04:13:31 +0000 (20:13 -0800)]
Implement `std` for multiple dimensions on CPU devices. (#14535)
Summary:
Tested on a tensor with 1 billion elements and 3 dimensions on a powerful, highly
multi-core Linux machine.
parallelized: All operations (e.g., `t.std(1)`) that could be done in the old code are now several times faster. All
new operations (e.g., `t.std((0,2))`) are significantly faster than the NumPy equivalents.
`t.std((0, 1, 2))`, a new operation, is logically equivalent to the
old `t.std()`, but faster.
serial: The above comment about old operations now being faster still
holds, but `t.std((t1, ..., tn))` is now a few
times slower than `t.std()`. If this turns out to be important, we can
special-case that to use the old algorithm.
The approach is to create a new method, `TensorIterator::foreach_reduced_elt`,
valid for `TensorIterator`s that represent a dimension reduction. This
method calls a supplied function for each element in the output,
supplying it with the input elements that correspond to that output.
Given that primitive, we can implement reductions like the following pseudocode:
If there is more than one output element:
```
PARALLEL FOR EACH element IN output:
accumulator = identity
SERIAL FOR EACH data_point IN element.corresponding_input:
accumulator.update(data_point)
element = accumulator.to_output()
```
If there is only one output element, we still want to parallelize, so we
do so along the *input* instead:
```
accumulators[n_threads]
PARALLEL FOR EACH input_chunk IN input.chunks():
accumulators[thread_num()] = identity
SERIAL FOR EACH data_point IN input_chunk:
accumulators[thread_num()].update_with_data(data_point)
accumulator = identity
SERIAL FOR EACH acc in accumulators:
accumulator.update_with_other_accumulator(acc)
output_element = accumulator.to_output()
```
Note that accumulators and data points do not have to be the same type
in general, since it might be necessary to track arbitrary amounts of
data at intermediate stages.
For example, for `std`, we use a parallel version of Welford's
algorithm, which requires us to track the mean, second moment, and number
of elements, so the accumulator type for `std` contains three pieces of
data.
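For concreteness, a minimal Python sketch of a Welford accumulator with the parallel combine step (tracking the three pieces of data described above; not the actual C++ implementation):
```python
class Welford:
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):            # serial inner loop over one thread's chunk
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)

    def combine(self, other):       # merge per-thread accumulators
        if other.n == 0:
            return
        d = other.mean - self.mean
        n = self.n + other.n
        self.mean += d * other.n / n
        self.m2 += other.m2 + d * d * self.n * other.n / n
        self.n = n

    def std(self, unbiased=True):   # assumes n > 1 for the unbiased estimate
        return (self.m2 / (self.n - 1 if unbiased else self.n)) ** 0.5
```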
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14535
Differential Revision:
D13283887
Pulled By: umanwizard
fbshipit-source-id:
8586b7bf00bf9f663c55d6f8323301e257f5ec3f
Orion Reblitz-Richardson [Sat, 8 Dec 2018 03:48:38 +0000 (19:48 -0800)]
Add CAFFE2_API to video processing functions (#14900)
Summary:
Extracted from https://github.com/pytorch/pytorch/pull/13733
Some tests were failing because these methods didn't have an export.
cc pjh5 yf225
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14900
Reviewed By: pjh5
Differential Revision:
D13381130
Pulled By: orionr
fbshipit-source-id:
030536f8fb09765c09a7b0bd45400161053f2e18
Johannes M Dieterich [Sat, 8 Dec 2018 02:55:21 +0000 (18:55 -0800)]
Enable unit tests known to work on ROCm (#14011)
Summary:
* Enable unit tests known to work on ROCm.
* Disable a few that are known to be flaky for the time being.
* Use std::abs for Half
* No more special casing for ROCm in TensorMathReduce
* Document an important detail for a hardcoded block size w.r.t. ROCm in TensorMathReduce
ezyang bddppq for awareness
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14011
Differential Revision:
D13387679
Pulled By: bddppq
fbshipit-source-id:
4177f2a57b09d866ccbb82a24318f273e3292f71
Lu Fang [Sat, 8 Dec 2018 01:24:01 +0000 (17:24 -0800)]
update of fbcode/onnx to aca8473a40cf43f01958c81b648efcee7f3a755a (#14865)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14865
Previous import was 42804705bdbf179d1a98394008417e1392013547
Included changes:
- **[aca8473](https://github.com/onnx/onnx/commit/aca8473)**: Add Erf operator for computing error function (#1675) <bddppq>
- **[3fc82ca](https://github.com/onnx/onnx/commit/3fc82ca)**: Add IsNaN operator. (#1656) <Pranav Sharma>
- **[0685f01](https://github.com/onnx/onnx/commit/0685f01)**: Add Sign Op (#1658) <Rui Zhu>
- **[2a8fae8](https://github.com/onnx/onnx/commit/2a8fae8)**: Fix unused var warning (#1669) <Yinghai Lu>
- **[e212833](https://github.com/onnx/onnx/commit/e212833)**: Update scan (#1653) <G. Ramalingam>
Reviewed By: zrphercule
Differential Revision:
D13370727
fbshipit-source-id:
13a93d5acc8d4758f682278ea162ec9124ced22d