marka17 [Wed, 9 Jan 2019 05:14:54 +0000 (21:14 -0800)]
Add element-wise multiplication in formulas (#15834)
Summary:
The absence of an explicit element-wise multiplication sign in the formulas can confuse some beginners.
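For illustration (not part of the patch), the distinction the notation makes explicit:
```python
import torch

a = torch.tensor([[1., 2.], [3., 4.]])
b = torch.tensor([[10., 20.], [30., 40.]])

elementwise = a * b  # Hadamard product: multiplies matching entries
matmul = a @ b       # matrix multiplication: sums over the inner dimension

print(elementwise)  # tensor([[ 10.,  40.], [ 90., 160.]])
print(matmul)       # tensor([[ 70., 100.], [150., 220.]])
```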
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15834
Differential Revision: D13603369
Pulled By: soumith
fbshipit-source-id: 1d5c17c57778ddbb4b201122d826d1d6437204d1
Derek Kim [Wed, 9 Jan 2019 04:53:11 +0000 (20:53 -0800)]
Typos fixed in CWrapPlugin.get_type_check (#15859)
Summary:
Typos fixed in CWrapPlugin.get_type_check
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15859
Differential Revision: D13605908
Pulled By: soumith
fbshipit-source-id: a8c970f0ac6d54dfd69b9775fc1a2b4f198b4ed6
Sebastian Messmer [Wed, 9 Jan 2019 04:22:42 +0000 (20:22 -0800)]
Move LayerNorm op schema to c10 (#15199)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15199
In order to call it from PyTorch, this op schema can't live in caffe2 but must be included from PyTorch.
Moving it to c10. This is not where it should be in the end (that's why there is a large TODO here),
but an intermediate hack to enable this use case and proof-of-concept.
Reviewed By: ezyang
Differential Revision: D13462124
fbshipit-source-id: 1e187b9def8ef049c91e6de947ea4a85758d711b
Sebastian Messmer [Wed, 9 Jan 2019 04:22:42 +0000 (20:22 -0800)]
Update flat_hash_map (#15367)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15367
This updates flat_hash_map and fixes an issue with singletons across library boundaries
(see the PRs linked at the top of the file)
Reviewed By: ezyang
Differential Revision: D13510912
fbshipit-source-id: e90a297a7a2d69ae3fe48e4fcd8a44ad4b81292a
Sebastian Messmer [Wed, 9 Jan 2019 04:22:41 +0000 (20:22 -0800)]
Fix C10_API/C10_EXPORT for op schema registration (#15324)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15324
This was missing but needs to be here, otherwise we can't register schemas without linker errors.
Reviewed By: ezyang
Differential Revision: D13500679
fbshipit-source-id: ba06351cb8ae09ec456cb93e527d388ace578fbb
Sebastian Messmer [Wed, 9 Jan 2019 04:22:41 +0000 (20:22 -0800)]
Use C10Tensor in the dispatcher (#15195)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15195
This removes the use of caffe2::Tensor or at::Tensor in the c10 dispatcher and only uses c10::Tensor.
It also changes output tensors to be passed as `const Tensor&` instead of `Tensor*` because we otherwise can't forward them in operator_c10wrapper.h.
Reviewed By: ezyang
Differential Revision: D13461640
fbshipit-source-id: 7f79925a7d60f01660a24bbfda47391af0c70ed3
Sebastian Messmer [Wed, 9 Jan 2019 04:22:41 +0000 (20:22 -0800)]
Convert caffe2/aten Tensors to/from c10
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14820
Reviewed By: dzhulgakov
Differential Revision: D13348044
fbshipit-source-id: 95008e6ead3cfc478696b1c203769241d4cf6ca8
Sebastian Messmer [Wed, 9 Jan 2019 04:22:40 +0000 (20:22 -0800)]
Implement c10::Tensor (#14819)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14819
This is a minimal wrapper for a c10::TensorImpl,
maybe destined for greatness later when we move caffe2::Tensor or at::Tensor into c10.
Reviewed By: dzhulgakov
Differential Revision: D13348039
fbshipit-source-id: 874f515358e94f35dc7a4c3e55b35fde59c51ff1
albanD [Wed, 9 Jan 2019 03:57:16 +0000 (19:57 -0800)]
Allow ReadyQueue to handle empty tasks (#15791)
Summary:
Allow the comparison function used in ReadyQueue to handle the empty FunctionTasks created by the reentrant autograd.
Fix #11732
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15791
Differential Revision: D13598006
Pulled By: soumith
fbshipit-source-id: 0bfdf28a735fbfe44f0fdbaf8b74a6198e6a1984
Brennan Vincent [Wed, 9 Jan 2019 03:51:41 +0000 (19:51 -0800)]
In loop_wrapper, do not copy the passed-in functor (capture it by reference instead). (#15845)
Summary:
The overhead of the copy actually makes an appreciable difference when doing a lot of small reductions (i.e., when the reduced dimension is significantly smaller than the non-reduced dimensions).
```
import torch
x = torch.randn((1024, 10, 1024), dtype=torch.float64)
torch.set_num_threads(1)
%timeit x.std(1)
```
Before: 813.0 ms
After: 708.25 ms
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15845
Differential Revision: D13603246
Pulled By: umanwizard
fbshipit-source-id: 020d224d76fcb8a0b55b75b0f2937e9508891beb
David Carrillo Cisneros [Wed, 9 Jan 2019 00:04:41 +0000 (16:04 -0800)]
Add NHWC support to Resize Operator (#15553)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15553
Add unit test and implementation of NHWC layout for Resize operator.
Also, add a pragma parallel loop to the old NCHW layout.
Reviewed By: jspark1105
Differential Revision: D13540762
fbshipit-source-id: eebf252bf0d1efdff180a171d804181045f100a5
andersj [Tue, 8 Jan 2019 23:54:20 +0000 (15:54 -0800)]
Revert "remove use of tmp_install" (#15847)
Summary:
This reverts commit 04bf5285896e52ac118d2f9e9b7f582f695f13e2.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15847
Differential Revision: D13603174
Pulled By: anderspapitto
fbshipit-source-id: ae321434d3345ad94fad67bf71fd027cddeb4588
Jesse Hellemn [Tue, 8 Jan 2019 23:04:13 +0000 (15:04 -0800)]
Correcting source pybind11 library to install into Python
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15836
Reviewed By: anderspapitto
Differential Revision: D13601331
Pulled By: pjh5
fbshipit-source-id: 36785c501774c01f47acb49cdac265b2c95a5040
Zachary DeVito [Tue, 8 Jan 2019 21:09:11 +0000 (13:09 -0800)]
implement floordiv with correct integer and division by 0 semantics (#15813)
Summary:
fixes #15768
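For context, a quick illustration of Python's floor-division semantics, which the script implementation needs to match (plain Python here, not the patched code):
```python
assert 7 // 2 == 3
assert -7 // 2 == -4   # rounds toward -inf, unlike C-style truncation
assert -7 % 2 == 1     # remainder takes the sign of the divisor
try:
    1 // 0
except ZeroDivisionError:
    pass               # division by zero raises instead of returning garbage
```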
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15813
Differential Revision: D13594872
Pulled By: zdevito
fbshipit-source-id: c6c78c9e17fb16ec2bdc42402d203592cf35b7db
Derek Kim [Tue, 8 Jan 2019 21:03:16 +0000 (13:03 -0800)]
A trivial error message updates on `at::Tensor _convolution` (#15830)
Summary:
I fixed a grammatical error in this function's error message previously, but I realized that its content was also wrong: the weight tensor of a convolutional layer should be at least 3-dimensional, not 2.
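A minimal illustration of why at least 3 dimensions are required: even a 1d convolution's weight carries (out_channels, in_channels, kernel_size):
```python
import torch

conv = torch.nn.Conv1d(in_channels=4, out_channels=8, kernel_size=3)
print(conv.weight.shape)  # torch.Size([8, 4, 3]) -- 3 dimensions, the minimum
```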
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15830
Differential Revision: D13597968
Pulled By: soumith
fbshipit-source-id: 72a75106e88945c68d6462828b149441cfb5acde
peter [Tue, 8 Jan 2019 21:03:10 +0000 (13:03 -0800)]
Enable torch static build on Windows
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15769
Reviewed By: yf225, pjh5
Differential Revision: D13597845
Pulled By: orionr
fbshipit-source-id: 99640e22974990ae570a4795ce07274c4447cb01
Richard Zou [Tue, 8 Jan 2019 20:34:43 +0000 (12:34 -0800)]
Fix sum_to behavior with zero dimensions (#15796)
Summary:
Fixes #15223.
This fixes an autograd bug where backprop either fails or produces
gradients of incorrect sizes when tensors with zero-sized dimensions are
involved.
Previously, we were reducing along dimensions that had size greater than 1
when summing to a size in autograd. This is incorrect because we should also reduce
along dimensions with size 0 to produce a tensor of size 1 in that
dimension that then gets viewed to the correct shape.
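A rough Python sketch of the reduction logic described above (a hypothetical helper, not the actual autograd code); the key change is that size-0 dimensions are reduced too, not just dimensions of size greater than 1:
```python
import torch

def sum_to(grad, shape):
    # Reduce leading broadcast dimensions away entirely.
    while grad.dim() > len(shape):
        grad = grad.sum(0)
    # Reduce any dimension that was expanded from size 1, including size-0
    # dimensions: summing a size-0 dim yields size 1, which views correctly.
    for i, (got, want) in enumerate(zip(grad.shape, shape)):
        if want == 1 and got != 1:
            grad = grad.sum(i, keepdim=True)
    return grad

g = torch.randn(3, 0)
print(sum_to(g, (1, 0)).shape)  # torch.Size([1, 0])
```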
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15796
Differential Revision: D13593199
Pulled By: zou3519
fbshipit-source-id: 2e2acac34943a9b7fabadc10c9efd4f66db298fd
mwootton [Tue, 8 Jan 2019 20:23:30 +0000 (12:23 -0800)]
Cache workspace size in the BenchmarkCache. (#15742)
Summary:
Cache the workspace size information for MIOpen for a given configuration instead of querying it every time. This reduces overhead significantly, as querying the workspace size forces a full read of the MIOpen performance database, and this database has grown significantly in recent releases. This caching gets us back to ideal performance.
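The general pattern, sketched in Python for illustration (the actual cache is the C++ BenchmarkCache keyed on the convolution configuration; the names here are hypothetical):
```python
def expensive_backend_query(config):
    # Stand-in for the MIOpen performance-database read (hypothetical).
    return 1 << 20

workspace_cache = {}

def workspace_size(config):
    # Query the expensive backend only on a cache miss; repeated calls with
    # the same configuration hit the dict instead of re-reading the database.
    if config not in workspace_cache:
        workspace_cache[config] = expensive_backend_query(config)
    return workspace_cache[config]
```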
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15742
Differential Revision: D13598932
Pulled By: bddppq
fbshipit-source-id: 4e65d247b71dec828293cf0562aac3fbd4fad83a
mruberry [Tue, 8 Jan 2019 20:17:26 +0000 (12:17 -0800)]
Refactors shape logic out of code generation, fixes possible segfault (#15750)
Summary:
This PR:
- Removes shape logic from the code generator, which was previously relied on to return chunk and concat information
- Copies the logic to detect if a kernel has a rand_like node to the executor, making its pass independent of the code generator
- Fixes a possible segfault where references to a vector still being modified were relied upon
The actual shape logic is unchanged.
The possible segfault is in the handling of the former "flat_inputs" in codegen.cpp. This vector holds pairs, and the second element of these pairs is a reference. In some cases these would be references to items in the vector chunk_desc, which could be added to later, possibly invalidating any references to items in it. I hit a similar segfault in testing when naively making parallel code for "flat_outputs."
I'm submitting this small PR because it's separable, self-contained, has a fix, and I am trying to actively get away from large PRs to encourage more stability and incremental change in the fuser.
ngimel zou3519
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15750
Differential Revision: D13597451
Pulled By: zou3519
fbshipit-source-id: 0d48b365779b42849b044ba0286258aacc7b0332
Johannes M Dieterich [Tue, 8 Jan 2019 20:05:14 +0000 (12:05 -0800)]
Use parallel thrust execution policy on ROCm (#15481)
Summary:
The Thrust shipped with ROCm is recent enough to support this API. Minimize divergence between CUDA/ROCm by changing the ifdef guards.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15481
Differential Revision: D13598739
Pulled By: bddppq
fbshipit-source-id: 20d0a7e3887a4050eea65033161561af47411de1
ashishfarmer [Tue, 8 Jan 2019 19:23:01 +0000 (11:23 -0800)]
Use correct workspace alloc call in MIOpen conv operator (#15712)
Summary:
This PR contains changes for:
1. Using memory alloc from HIPContext while allocating workspace for MIOpen conv and transpose_conv operators rather than direct HIP mem alloc
2. Minor cleanup and removing an unnecessary sync call from MIOpen conv op
Differential Revision: D13598894
Pulled By: bddppq
fbshipit-source-id: 44886161abdf91cd29c7c93b3e23620e1b09c7c9
Jerry Zhang [Tue, 8 Jan 2019 19:22:39 +0000 (11:22 -0800)]
Tensor method rename dims()->sizes() - 2/2
Summary: Codemod generated with clangr shard mode, 25 files per diff,
Reviewed By: smessmer
Differential Revision: D13581787
fbshipit-source-id: b04c6aa87fea3a10b522a71fccc1fcfb76a2c212
Jerry Zhang [Tue, 8 Jan 2019 18:55:26 +0000 (10:55 -0800)]
Remove caffe2::ShareData (#15418)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15418
Previously we were using Resize + ShareData.
Instead, we'll create a function on Tensor that clones itself with the same storage.
Suppose we want `t` to `ShareData` with `t0`. Previously:
```
Tensor t(dims, CPU);
t.Resize(t0.sizes());
t.ShareData(t0);
```
Now:
```
Tensor t = t0.Alias();
```
Reviewed By: dzhulgakov
Differential Revision: D13507609
fbshipit-source-id: 6e4275d02f4c3356cbce91127f1b01111dc86b9f
Peter Goldsborough [Tue, 8 Jan 2019 18:26:32 +0000 (10:26 -0800)]
Move isnan to C++ (#15722)
Summary:
Wanted to use `Tensor.isnan` in C++, figured it'd be nice to have, so I made it into a tiny native function.
gchanan ezyang apaszke
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15722
Differential Revision: D13591315
Pulled By: goldsborough
fbshipit-source-id: a78bd22101fde87a0257f759b9bfcf3b4208f5fa
Natalia Gimelshein [Tue, 8 Jan 2019 17:45:52 +0000 (09:45 -0800)]
use all_weights instead of _parameters in _flat_weights in rnn (#15766)
Summary:
Fixes #15749
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15766
Differential Revision: D13592320
Pulled By: soumith
fbshipit-source-id: 6c3805f576c3df5a2da8bef1e4305eda379718df
Richard Zou [Tue, 8 Jan 2019 15:26:15 +0000 (07:26 -0800)]
Use CUDAGuard when serializing CUDA Tensors (#15807)
Summary:
Fixes #15308. Before this change, `torch.save` and `torch.load` would
initialize the CUDA context on GPU 0 if it hadn't been initialized
already, even if the serialized tensors are only on GPU 1.
This PR fixes that bug by using CUDAGuard in the storage serialization
path.
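A minimal reproduction of the reported behavior (assumes a machine with at least two GPUs):
```python
import torch

t = torch.randn(3, device='cuda:1')
torch.save(t, '/tmp/t.pt')
# Before this fix, saving a tensor that lives only on cuda:1 would also
# initialize a CUDA context on cuda:0; with CUDAGuard in the serialization
# path, only cuda:1 is touched.
```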
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15807
Differential Revision: D13593201
Pulled By: zou3519
fbshipit-source-id: 4addc91ea5a5278d56a03f3d422577ee39e99897
Adam Paszke [Tue, 8 Jan 2019 15:20:22 +0000 (07:20 -0800)]
Stop leaving garbage files after running test_jit.py
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15404
Differential Revision: D13548316
Pulled By: zou3519
fbshipit-source-id: fe8731d8add59777781d34d9c3f3314f11467b23
Adam Paszke [Tue, 8 Jan 2019 14:57:45 +0000 (06:57 -0800)]
Add support for batch_norm fusion to the JIT (#15146)
Summary:
We don't support reductions yet, but simply decomposing batch_norm
into a kernel that computes the stats, and then fusing everything else
(ReLU and the following pointwise ops), provides nice speedups.
Note that this is only limited to inference mode for now, because we
don't support convolutions and batch norm in AD, so the fuser isn't
applied to those parts.
This commit gives us a 7% end-to-end speedup for ResNet50 with batch size 32. Note that this only applies to inference mode at the moment due to lack of AD support for CNN operations (I'll be adding that soon), and not to the standard `torchvision` models, because they use in-place ops which aren't supported by the fuser (we need a way of proving that de-inplacing them is safe).
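For illustration, the kind of inference-mode pattern this targets (a sketch; whether a given graph actually fuses depends on the fuser's heuristics):
```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.BatchNorm1d(16), nn.ReLU()).eval()
x = torch.randn(8, 16)
traced = torch.jit.trace(model, x)
# In eval mode, batch_norm decomposes into a stats computation plus pointwise
# ops; the pointwise tail can then fuse with the following ReLU.
```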
cc zou3519 zdevito mruberry ngimel
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15146
Differential Revision: D13548303
Pulled By: zou3519
fbshipit-source-id: a2e2e5abc383f637fae19bd1b423f20c2cbc056a
Yinghai Lu [Tue, 8 Jan 2019 06:09:38 +0000 (22:09 -0800)]
Support communicating with C2 protobuf in Onnxifi flow (#15472)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15472
Create a path to pass serialized C2 protobuf instead of ONNX during ONNXIFI flow
Reviewed By: houseroad
Differential Revision: D13536603
fbshipit-source-id: 7d016474f4beedbda480ed2e2c0004af7868aafe
Xiaomeng Yang [Tue, 8 Jan 2019 05:33:44 +0000 (21:33 -0800)]
Add count_include_pad arg for AveragePoolOp on GPU (#15787)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15787
Add count_include_pad arg for AveragePoolOp on GPU
Reviewed By: houseroad
Differential Revision: D13589185
fbshipit-source-id: 235a84cfcd2033ee796c13e338fc3d03e832b5b1
Shen Li [Tue, 8 Jan 2019 04:55:37 +0000 (20:55 -0800)]
Move Stream.query() implementation down to C++ (#15737)
Summary:
See #15682
Pushing up this small PR to check if I am doing the right thing. If correct, more will follow for other Stream APIs. Questions will be added inline.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15737
Differential Revision: D13581400
Pulled By: mrshenli
fbshipit-source-id: 24afed7847b89b62f0692c79a101ec7ff9d9ee4d
Derek Kim [Tue, 8 Jan 2019 03:59:20 +0000 (19:59 -0800)]
A trivial error in the error message of `at::Tensor _convolution` fixed (#15772)
Summary:
A trivial grammatical error fix.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15772
Differential Revision: D13592279
Pulled By: zou3519
fbshipit-source-id: 14f60c61747a3893cd0e4c860f7b4c4c4ba28c28
Jongsoo Park [Tue, 8 Jan 2019 02:45:32 +0000 (18:45 -0800)]
clean up D13579188 (#15759)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15759
Some flags have names that are too long. Also a few other minor clean-ups.
Reviewed By: jianyuh
Differential Revision: D13587353
fbshipit-source-id: f8aee7f167505644f5d8f80fe2eed70201ef1e54
BowenBao [Tue, 8 Jan 2019 00:06:34 +0000 (16:06 -0800)]
Add support for exporting onnx split (#15092)
Summary:
* With the update of split's output to a dynamic list, the export to ONNX breaks.
Split in the IR now becomes two ops: 1. Dynamic[] <= Split(), and 2. out1, out2, out3
<= Prim::ListUnpack. In this fix, these two consecutive ops get fused when being
exported to ONNX.
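A small module exercising the pattern (illustrative; the exporter now recognizes the Split + ListUnpack pair and emits a single ONNX Split node):
```python
import torch

class SplitModule(torch.nn.Module):
    def forward(self, x):
        # In the IR this is Dynamic[] <= Split() followed by ListUnpack.
        a, b = torch.split(x, 2, dim=0)
        return a + b

torch.onnx.export(SplitModule(), torch.randn(4, 3), 'split.onnx')
```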
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15092
Reviewed By: dzhulgakov
Differential Revision: D13583832
Pulled By: houseroad
fbshipit-source-id: 3eb18c871e750921ad6d5cc179254bee9bcf4c99
Jongsoo Park [Mon, 7 Jan 2019 23:12:25 +0000 (15:12 -0800)]
simplify conv dnnlowp ops by not allowing fp32 in/out (#15758)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15758
DNNLOWP Conv operators became very complex due to many options. This diff simplifies them by not allowing fp32 in/out. This is OK for Conv operators because Conv operators are usually used in deep networks where quantizing and dequantizing using separate operators is not much overhead.
Reviewed By: csummersea
Differential Revision: D13587341
fbshipit-source-id: e88c919dae79d1c5b7d787ea539edf5bcb064afc
Gu, Jinghui [Mon, 7 Jan 2019 22:10:27 +0000 (14:10 -0800)]
Enable conv+add fusion, same as conv+sum (#15268)
Summary:
Enable conv+add fusion, same as conv+sum
Caution: only element-wise add is supported on IDEEP without scalar
broadcast. Otherwise, the fusion is illegal.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15268
Differential Revision: D13577375
Pulled By: yinghai
fbshipit-source-id: 92c9c4b667c5ca5f7a262a5bffaa8aa68eeff3bd
David Riazati [Mon, 7 Jan 2019 21:49:20 +0000 (13:49 -0800)]
Allow List arguments to Python Ops (#15721)
Summary:
Adds `List` to eval environment for type lines and allows `List` to be used on PythonOps (follows the same style as the `Tuple` code), fixes #15661
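A sketch of the style this enables, mirroring the existing `Tuple` support (hedged: the PythonOp integration point itself isn't shown here):
```python
from typing import List

def total(xs):
    # type: (List[int]) -> int
    # Type comments like this one can now name `List`, just as `Tuple`
    # was already allowed.
    return sum(xs)

print(total([1, 2, 3]))  # 6
```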
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15721
Differential Revision: D13578540
Pulled By: driazati
fbshipit-source-id: fce54dc3c0931d8b017b2e3483f0ac53826dda94
SsnL [Mon, 7 Jan 2019 20:29:17 +0000 (12:29 -0800)]
Bump CircleCI docker version to 278 (#15795)
Summary:
Just changing the version number doesn't seem to work; I also needed to fix a macOS brew parallel conflict.
Should this be merged together with https://github.com/pytorch/ossci-job-dsl/pull/36?
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15795
Differential Revision: D13591839
Pulled By: yf225
fbshipit-source-id: 6b2a90943e63c8dcc4b6d9159eb54f1b5974c9ac
Peter Goldsborough [Mon, 7 Jan 2019 19:34:16 +0000 (11:34 -0800)]
Fix C++ Frontend example in frontend.html (#15717)
Summary:
The small end-to-end example in https://pytorch.org/cppdocs/frontend.html is a little outdated and needs fixes.
ezyang soumith
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15717
Differential Revision: D13591306
Pulled By: goldsborough
fbshipit-source-id: 3334d68c7f77cf094b66ec2b2f396c4c65bb0d72
Peter Goldsborough [Mon, 7 Jan 2019 19:31:45 +0000 (11:31 -0800)]
Fix restructured text issue in tensor_basics.rst (#15701)
Summary:
Fix submitted by huntzhan in https://github.com/pytorch/cppdocs/pull/4. The source is in this repo so the patch has to be applied here.
soumith ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15701
Differential Revision: D13591302
Pulled By: goldsborough
fbshipit-source-id: 796957696fd560a9c5fb42265d7b2d018abaebe3
Gu, Jinghui [Mon, 7 Jan 2019 19:07:51 +0000 (11:07 -0800)]
Fallback to CPU concat op to handle TensorCPU inputs (#15263)
Summary:
Fallback to CPU concat op to handle TensorCPU inputs
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15263
Differential Revision: D13587030
Pulled By: yinghai
fbshipit-source-id: 010a8579d61c3beb8556eb92493a552b2ab0030c
Jongsoo Park [Mon, 7 Jan 2019 19:04:22 +0000 (11:04 -0800)]
fix conv unit test for groupwise quantization and pre-packing (#15761)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15761
As title says.
Reviewed By: csummersea
Differential Revision: D13587727
fbshipit-source-id: f0631b8cbb89d65a1d952bc25b463de23de93bec
vishwakftw [Mon, 7 Jan 2019 18:38:16 +0000 (10:38 -0800)]
Add is_floating_point to docs (#15704)
Summary:
Fixes #15700 .
Changelog:
- Expose torch.*.is_floating_point to docs
Differential Revision: D13580734
Pulled By: zou3519
fbshipit-source-id: 76edb4af666c08237091a2cebf53d9ba5e6c8909
Elias Ellison [Mon, 7 Jan 2019 17:58:08 +0000 (09:58 -0800)]
Pool prim::None nodes (#15745)
Summary:
Make the constant pooling pass pool prim::None nodes
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15745
Differential Revision: D13583518
Pulled By: eellison
fbshipit-source-id: 7f8aa70522515805ab0991c6db3d96b5a96cdede
Owen Anderson [Mon, 7 Jan 2019 02:54:25 +0000 (18:54 -0800)]
Replace some malloc+memset pairs with calloc.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15765
Differential Revision:
D13588723
Pulled By: resistor
fbshipit-source-id:
47d35dc608847a5b173cfcf2aaa2a77359e56722
mruberry [Sat, 5 Jan 2019 17:04:54 +0000 (09:04 -0800)]
Removes print statements from test_torch.py (#15747)
Summary:
These print statements do not affect the test, and tests (generally) shouldn't print.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15747
Differential Revision: D13587289
Pulled By: soumith
fbshipit-source-id: c758793c9e35faf02bacba6c7c6d072f7c40453f
Mickaël Schoentgen [Sat, 5 Jan 2019 16:51:14 +0000 (08:51 -0800)]
Fix several DeprecationWarning: invalid escape sequence (#15733)
Summary:
Hello,
This is a little patch to fix `DeprecationWarning: invalid escape sequence`.
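For context, the class of warning being fixed (a generic example, not a line from the patch):
```python
import re

bad = re.compile("\d+")    # DeprecationWarning: invalid escape sequence '\d'
good = re.compile(r"\d+")  # raw string: same pattern, no warning
```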
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15733
Differential Revision: D13587291
Pulled By: soumith
fbshipit-source-id: ce68db2de92ca7eaa42f78ca5ae6fbc1d4d90e05
ArutyunovG [Sat, 5 Jan 2019 16:23:02 +0000 (08:23 -0800)]
caffe2_benchmark msvc build fix (#15619)
Summary:
Fixing error in caffe2_benchmark binary
```
2018-12-29T14:09:59.7867995Z d:\a\1\s\caffe2_builders\v141\pytorch\binaries\benchmark_helper.h(90): error C2678: binary '|=': no operator found which takes a left-hand operand of type 'std::_Iosb<int>::_Openmode' (or there is no acceptable conversion) (compiling source file D:\a\1\s\caffe2_builders\v141\pytorch\binaries\benchmark_helper.cc) [D:\a\1\s\caffe2_builders\v141\pytorch\build\Release\binaries\caffe2_benchmark.vcxproj]
2018-12-29T14:09:59.7868252Z d:\a\1\s\caffe2_builders\v141\pytorch\binaries\benchmark_helper.h(92): error C2678: binary '|=': no operator found which takes a left-hand operand of type 'std::_Iosb<int>::_Openmode' (or there is no acceptable conversion) (compiling source file D:\a\1\s\caffe2_builders\v141\pytorch\binaries\benchmark_helper.cc) [D:\a\1\s\caffe2_builders\v141\pytorch\build\Release\binaries\caffe2_benchmark.vcxproj]
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15619
Differential Revision: D13580195
Pulled By: soumith
fbshipit-source-id: b0a4479cd5f7555801b1977aeee96b6433293da7
Lu Fang [Sat, 5 Jan 2019 06:47:35 +0000 (22:47 -0800)]
Adding a hook (wrapper) for non-std stream reader in PyTorchStreamReader (#15551)
Summary:
Implementing a custom stream is very annoying, since it is closely coupled with the underlying storage stream buffer.
So in this PR we add ReadAdapterInterface, which PyTorchStreamReader will use. We implement IStreamAdapter as a wrapper of std::istream and keep the user interface unchanged.
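A rough Python analogue of the adapter idea (the real interface is C++; the names here only loosely mirror it):
```python
import os

class ReadAdapterInterface:
    """Minimal random-access read interface the reader depends on."""
    def size(self):
        raise NotImplementedError
    def read(self, pos, n):
        raise NotImplementedError

class IStreamAdapter(ReadAdapterInterface):
    """Wraps an ordinary seekable file object behind the interface."""
    def __init__(self, f):
        self._f = f
    def size(self):
        return os.fstat(self._f.fileno()).st_size
    def read(self, pos, n):
        self._f.seek(pos)
        return self._f.read(n)
```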
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15551
Reviewed By: zrphercule
Differential Revision: D13568907
Pulled By: houseroad
fbshipit-source-id: 93708cb801248a6c101f35cb14d1631029365c3c
Cheng,Penghui [Sat, 5 Jan 2019 06:30:48 +0000 (22:30 -0800)]
support 0 size in any of the tensor dimensions in mkldnn (#15295)
Summary:
support 0 size in any of the tensor dimensions in mkldnn
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15295
Differential Revision: D13573747
Pulled By: yinghai
fbshipit-source-id: 5bf7a0b9e2567e80f44981a7823be5407fc94e53
Lin Huang [Sat, 5 Jan 2019 00:59:18 +0000 (16:59 -0800)]
Port replication_pad2d and replication_pad3d to ATen (#15538)
Summary:
port replication padding 2D and 3D from legacy TH API implementation
to ATen implementation.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15538
Differential Revision: D13547567
Pulled By: lhuang04
fbshipit-source-id: decfe100d9edfdcfb62f39ee23f37b6cae0d461f
zrphercule [Sat, 5 Jan 2019 00:11:23 +0000 (16:11 -0800)]
Fix different types in rsub caused bug (#15707)
Summary:
Before this PR, rsub did not convert its two operands to the same dtype, so "1 - x" could export to an ONNX model in which the two operands of rsub have different dtypes.
Adding this symbolic patch fixes the bug.
Related test cases are also added.
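The user-visible pattern in question (illustrative):
```python
import torch

x = torch.randn(3, dtype=torch.float32)
y = 1 - x  # rsub: the scalar 1 must be cast to x's dtype so the exported
           # ONNX graph doesn't mix element types
```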
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15707
Differential Revision: D13583042
Pulled By: zrphercule
fbshipit-source-id: 3a2de47a1a8d1ded1a0adfb911adbe6ac729cdef
Jerry Zhang [Fri, 4 Jan 2019 23:48:21 +0000 (15:48 -0800)]
Tensor method rename dims()->sizes() - 1/2
Summary: Codemod generated with clangr shard mode, 25 files per diff,
Reviewed By: BIT-silence
Differential Revision: D13581782
fbshipit-source-id: b16b4198e100617769d84aa599bf141117cfbe5b
Lu Fang [Fri, 4 Jan 2019 23:38:07 +0000 (15:38 -0800)]
update of fbcode/onnx to 8384c788939bc65463f9754b6a7a00b212b18ba1 (#15739)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15739
Previous import was 765f5ee823a67a866f4bd28a9860e81f3c811ce8
Included changes:
- **[8384c78](https://github.com/onnx/onnx/commit/8384c78)**: add constantofshape (#1582) <Rui Zhu>
- **[9afc06c](https://github.com/onnx/onnx/commit/9afc06c)**: Set symbol visibility to hidden for non-Windows (#1707) <Paul Jesse Hellemn>
- **[6f8a9f0](https://github.com/onnx/onnx/commit/6f8a9f0)**: Revert "Add NonMaxSupression operator (#1695)" (#1702) <Lu Fang>
- **[8b89544](https://github.com/onnx/onnx/commit/8b89544)**: Add NonMaxSupression operator (#1695) <Hector Li>
- **[0a7cc48](https://github.com/onnx/onnx/commit/0a7cc48)**: Add bfloat16 support. (#1699) <Dmitri Smirnov>
- **[da7c50c](https://github.com/onnx/onnx/commit/da7c50c)**: ONNX does not maintain versions for experimental ops (#1696) <Ke Zhang>
- **[0c8d857](https://github.com/onnx/onnx/commit/0c8d857)**: Correct type of value_info in Graph (#1694) <Maik Riechert>
- **[f612532](https://github.com/onnx/onnx/commit/f612532)**: Fix typos (#1686) <Eundoo Song>
Reviewed By: zrphercule
Differential Revision: D13581674
fbshipit-source-id: 8f8ee86a05a86fe99bf94509148c559ea3df1464
andersj [Fri, 4 Jan 2019 21:45:12 +0000 (13:45 -0800)]
remove use of tmp_install
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14553
Differential Revision: D13583335
Pulled By: anderspapitto
fbshipit-source-id: 8711fead9eda877c1037a0bc59f91a3d2e01f3e0
Will Feng [Fri, 4 Jan 2019 21:30:28 +0000 (13:30 -0800)]
Update CI credentials
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15736
Differential Revision: D13583174
Pulled By: yf225
fbshipit-source-id: 742470db10ef9df8f95e27626453b68ca90723e8
zrphercule [Fri, 4 Jan 2019 21:26:32 +0000 (13:26 -0800)]
Temporarily disable all XXXlike operator tests in pytorch-onnx test (#15740)
Summary:
We are going to have some breaking changes in ConstantLike and related operators in onnx, therefore it is better to disable all related tests for these operators for now.
These operators are not currently supported by caffe2 and are not included in our most recently released ONNX, so we do not need to worry about breaking internal/external production.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15740
Differential Revision: D13582528
Pulled By: zrphercule
fbshipit-source-id: 92a890c1dc2a833969af69edfea85331bb4d562f
Jerry Zhang [Fri, 4 Jan 2019 21:23:21 +0000 (13:23 -0800)]
Tensor construction codemod - 2/2 (#15600)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15600
Codemod generated with clangr shard mode, 25 files per diff,
motivation: https://github.com/pytorch/pytorch/pull/12407
Reviewed By: dzhulgakov
Differential Revision: D13542455
fbshipit-source-id: 8a3b15b0a1f81565f34e309114e1c3e1f7f65a3c
Elias Ellison [Fri, 4 Jan 2019 21:01:49 +0000 (13:01 -0800)]
Print out operator suggestions for unknown builtin op (#15183)
Summary:
This improves the error message for "unknown builtin op" to suggest similarly named ops.
Currently it prints out all operators with a name within two edits.
Related issue: https://github.com/pytorch/pytorch/issues/13409
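A self-contained sketch of the suggestion filter (hypothetical helper names; the real implementation lives in C++):
```python
def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def suggest(name, registered_ops):
    # Keep every registered op whose name is within two edits of the typo.
    return [op for op in registered_ops if edit_distance(name, op) <= 2]

print(suggest("ad", ["add", "sub", "addmm"]))  # ['add']
```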
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15183
Differential Revision: D13578509
Pulled By: eellison
fbshipit-source-id: 5c73408eda1f7aa456f5bd28790c34df0c76aeca
svcscm [Fri, 4 Jan 2019 20:15:25 +0000 (12:15 -0800)]
Updating submodules
Reviewed By: yns88
fbshipit-source-id: b8be56b57d109dfef5980ea7255e2ab021da099e
Jerry Zhang [Fri, 4 Jan 2019 19:50:17 +0000 (11:50 -0800)]
Tensor construction codemod - 1/2 (#15598)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15598
Codemod generated with clangr shard mode, 25 files per diff,
motivation: https://github.com/pytorch/pytorch/pull/12407
Reviewed By: dzhulgakov
Differential Revision: D13542429
fbshipit-source-id: db1059c78e85724d9b4fdab70466cf329db68359
Jongsoo Park [Fri, 4 Jan 2019 15:53:26 +0000 (07:53 -0800)]
remove dependency to fp32 batch permutation op (#15723)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15723
As title says.
Reviewed By: jianyuh
Differential Revision: D13578604
fbshipit-source-id: 0da0ac31ae83c1e0daa9077e878feb4deffed6a3
Michael Carilli [Fri, 4 Jan 2019 14:18:43 +0000 (06:18 -0800)]
Cudnn Handle Pool 3: At Wit's End (#15668)
Summary:
ezyang Here's a freshly rebased version of https://github.com/pytorch/pytorch/pull/15080 with the if statement that relieved the hangs that occasionally, nondeterministically, occurred on cudnnCreate on a particular windows build ([example w/debug statements](https://ci.pytorch.org/jenkins/job/pytorch-builds/job/pytorch-win-ws2016-cuda9-cudnn7-py3-test2/19238/console)) in https://github.com/pytorch/pytorch/pull/15280.
I'd like to run the CI over this several times before it's considered mergeable. Sometimes the windows hang doesn't manifest for 2 or 3 consecutive trials.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15668
Differential Revision: D13579291
Pulled By: soumith
fbshipit-source-id: 3972eb98bad6ece933ca5e67a10fc4bc2ed06068
vishwakftw [Fri, 4 Jan 2019 14:18:35 +0000 (06:18 -0800)]
Remove TH/THC link for cholesky_solve (#15691)
Summary:
Changelog:
- Remove TH/THC binding
- Port single matrix case to ATen
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15691
Differential Revision: D13579317
Pulled By: soumith
fbshipit-source-id: 63a55606c656396e777e8e6828acd2ef88ed1543
Youngseok [Fri, 4 Jan 2019 05:37:28 +0000 (21:37 -0800)]
Modify torch.gesv error message (#15654)
Summary:
The [doc](https://pytorch.org/docs/stable/torch.html#torch.gesv) uses an uppercase `B`, so the error message should follow suit to avoid confusion.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15654
Differential Revision: D13571297
Pulled By: soumith
fbshipit-source-id: 0b4e7797eceff92618f808bbfa65d13c1dcc2da0
Jongsoo Park [Fri, 4 Jan 2019 05:37:03 +0000 (21:37 -0800)]
make conv_depthwise_dnnlowp_op_test faster (#15725)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15725
As title says.
Reviewed By: jianyuh
Differential Revision: D13579188
fbshipit-source-id: 382072c95929ccf9e189e2338e35b046c4a0650f
Elad Zippory [Fri, 4 Jan 2019 05:36:49 +0000 (21:36 -0800)]
clarified language of doc for torch.mul (#15664)
Summary:
see issue #15636
Please note: I built the documents, but the HTML is not updated with the edited content.
I also did not build the fork.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15664
Differential Revision: D13571310
Pulled By: soumith
fbshipit-source-id: d43be0f61705693d778cc12c13e86d6b06130ac7
Jongsoo Park [Fri, 4 Jan 2019 04:28:09 +0000 (20:28 -0800)]
disallow nbits_in_non_outlier == 0 in acc16 conv; option to fallback to acc32 (#15708)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15708
nbits_in_non_outlier == 0 doesn't make sense because it means everything is an outlier, in which case we can just use 32-bit accumulation.
Depending on the architecture, the break-even point between acc16 and acc32 can differ. Adding thresholds for falling back to acc32.
Reviewed By: jianyuh
Differential Revision: D13574832
fbshipit-source-id: b7a37aacbfdc7867e31838dafcdd5f7c2ac282af
Elias Ellison [Fri, 4 Jan 2019 01:31:56 +0000 (17:31 -0800)]
Torch tensor (#15224)
Summary:
Support torch.tensor in script. This has already been accepted; trying to reland.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15224
Differential Revision: D13466616
Pulled By: eellison
fbshipit-source-id: f7850da07b0eb11af98f255fc15bd3cf861f2a40
Shen Li [Thu, 3 Jan 2019 23:12:13 +0000 (15:12 -0800)]
A quick fix for Stream operation errors on non-current device (#15689)
Summary:
see #15682
This is a quick fix implementing the simpler solution suggested by colesbury. As the benchmark result shows, it slows down `Stream.query()` by ~20%. I would be happy to further pursue a more complex solution by implementing this in C++/ATen, but I would still vote for merging this quick fix first, just to get rid of the bug sooner.
~Test TBA~ Added
FYI jeffreyksmithjr
now
```python
In [1]: def f():
...: d0 = torch.device('cuda:0')
...: d1 = torch.device('cuda:1')
...: with torch.cuda.device(d0):
...: s0 = torch.cuda.current_stream()
...: with torch.cuda.device(d1):
...: s1 = torch.cuda.current_stream()
...: s0.query()
...: s1.query()
In [4]: %timeit f()
38.1 µs ± 4.2 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [5]: %timeit f()
37.6 µs ± 2.7 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
```
before
```python
In [4]: %timeit f()
28.5 µs ± 1.74 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [5]: %timeit f()
35.3 µs ± 2.91 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15689
Differential Revision: D13571697
Pulled By: mrshenli
fbshipit-source-id: 4fe697f91248c6419136d37bb5b7147e612e2f4c
David Riazati [Thu, 3 Jan 2019 22:31:09 +0000 (14:31 -0800)]
Break up generated tests (#13992)
Summary:
This PR breaks up `TestJitGenerated` into 3 classes. This makes for
easier testing of specific groups (e.g. run all generated functional
tests without having to wait for the autograd tests)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13992
Differential Revision: D13076371
Pulled By: driazati
fbshipit-source-id: 1267af59be7d69feb690f5805fcd43fea58a7159
Michael Suo [Thu, 3 Jan 2019 21:50:42 +0000 (13:50 -0800)]
flake8 hook fix (#15693)
Summary:
This PR bypasses checking the user's configuration entirely and always uses strict, since the CI considers it a hard failure if you can't pass flake8.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15693
Differential Revision: D13574889
Pulled By: suo
fbshipit-source-id: f5e1c5731cc49b6223b415317033c275bc7d4fec
Stuart Golodetz [Thu, 3 Jan 2019 21:37:50 +0000 (13:37 -0800)]
Prevent VS2017 from emitting ambiguous symbol errors (#15697)
Summary:
These `std::forward` calls cause VS2017 to emit:
error C2872: 'std': ambiguous symbol
This fix prevents the ambiguity by specifying that `::std` is intended.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15697
Differential Revision: D13573483
Pulled By: goldsborough
fbshipit-source-id: 0439de3523a37a18df7af0cff4a1284a53833ddd
Zachary DeVito [Thu, 3 Jan 2019 20:14:17 +0000 (12:14 -0800)]
trace s_copy_ (#15690)
Summary:
s_copy_ was previously special-cased for out of place tracing.
This adds support for inplace tracing, which fixes tracing of
inception_v3
Fixes #15216
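A minimal example of what now traces (illustrative):
```python
import torch

def f(x, y):
    x.copy_(y)  # s_copy_: an in-place copy previously defeated the tracer
    return x

traced = torch.jit.trace(f, (torch.zeros(3), torch.ones(3)))
```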
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15690
Differential Revision: D13572011
Pulled By: zdevito
fbshipit-source-id: 1d565dec039a4b8c59179254285e61d2517ef9a9
Ailing Zhang [Thu, 3 Jan 2019 18:42:35 +0000 (10:42 -0800)]
Add mkldnn conv double backward (#15686)
Summary:
Fixes #15353 .
Like cudnn conv implementation, mkldnn also falls back to the default `_convolution_double_backward` as double backward.
This bug wasn't caught by CI before because mkldnn is only used when input scalar type is float, but our tests are all using double as default.
Adding test for float inputs, but mkldnn seems to have imprecision issues similar to cudnn implementation, so here I only check if double backward exists instead of calling `gradgradcheck`. Please correct me if the precision should actually be checked.
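A sketch of the failing case (float inputs route to mkldnn on CPU when PyTorch is built with it; following the test's precision caveat, this only checks that double backward runs):
```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 5, 5, requires_grad=True)  # float32 -> mkldnn path
w = torch.randn(1, 1, 3, 3, requires_grad=True)
y = F.conv2d(x, w)
grad_x, = torch.autograd.grad(y.sum(), x, create_graph=True)
grad_x.sum().backward()  # double backward; previously failed on this path
```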
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15686
Differential Revision: D13571682
Pulled By: ailzhang
fbshipit-source-id: f1762439762370f276cfd59e8b8b8a4dee960a4b
Spandan Tiwari [Thu, 3 Jan 2019 18:29:03 +0000 (10:29 -0800)]
Fix ONNX export of logical ops, including torch.ne, to have correct output datatype (#15677)
Summary:
This is an updated version of the earlier PR https://github.com/pytorch/pytorch/pull/15185, since that one was closed.
Currently the PyTorch ONNX exporter exports the logical ops (lt, gt, le, ge, eq, ne) with the output type of the corresponding ONNX ops as tensor(uint8). But the ONNX spec allows only tensor(bool), which is why models that have these ops fail to load properly.
This issue is captured in #11339. Part of this issue, relating to the allowed input types, has been fixed in the ONNX spec by houseroad. This PR fixes the other part, pertaining to the output type.
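The mismatch, concretely (the dtype shown is PyTorch's at the time of this change):
```python
import torch

x = torch.randn(3)
y = torch.randn(3)
print((x > y).dtype)  # torch.uint8 here, but ONNX requires tensor(bool)
```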
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15677
Reviewed By: dzhulgakov
Differential Revision: D13568450
Pulled By: houseroad
fbshipit-source-id: a6afbea1afdb4edad8f8b1bc492f50b14e5f2fce
Shen Li [Thu, 3 Jan 2019 18:23:07 +0000 (10:23 -0800)]
Port legacy reflection_pad1d to ATen (#15480)
Summary:
1. Avoided using `THCDeviceTensor` by re-calculating the mapping from cuda (blockIdx, threadIdx) to input/output tensor index.
2. Changed Camelcase naming to underscore naming.
Profiling:
Legacy:
```bash
$py.test test/test_nn.py -k ReflectionPad1d -v -s
....
=========== 2 passed, 1258 deselected, 800 warnings in 4.35 seconds ============
```
Now:
```bash
$py.test test/test_nn.py -k ReflectionPad1d -v -s
...
=========== 2 passed, 1258 deselected, 800 warnings in 4.03 seconds ============
```
I have two questions about the code. Any insights are appreciated. gchanan zou3519
1. I can verify that [this magic](https://github.com/pytorch/pytorch/blob/master/aten/src/THCUNN/TemporalReflectionPadding.cu#L32-L36) correctly maps an output index to an input index in the different cases. But I have no idea how you came up with this algorithm, which merges three cases (in left padding, in original input, in right padding) into a single statement (see the sketch below).
2. Why do we need [contiguous](https://github.com/pytorch/pytorch/blob/master/aten/src/THNN/generic/TemporalReflectionPadding.c#L80) tensors when calculating forward and backward propagation?
Reflection_pad2d porting will come in the next PR.
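For question 1 above, a piecewise Python reconstruction of the mapping (a readable sketch, not the fused single-statement form the kernel uses):
```python
def reflect_index(out_x, pad_left, input_w):
    i = out_x - pad_left
    if i < 0:
        i = -i                     # reflect off the left edge
    if i >= input_w:
        i = 2 * (input_w - 1) - i  # reflect off the right edge
    return i

# input of width 4 with pad 2 on each side -> output indices 0..7
print([reflect_index(o, 2, 4) for o in range(8)])  # [2, 1, 0, 1, 2, 3, 2, 1]
```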
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15480
Differential Revision: D13544924
Pulled By: mrshenli
fbshipit-source-id: 182045434f210032a82cab721a190da0cd781fbf
Jongsoo Park [Thu, 3 Jan 2019 17:43:46 +0000 (09:43 -0800)]
bug fix in 3d group conv (#15625)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15625
3D group conv (both NCHW and NHWC layout) was not correct.
Added group=2 in test_1d_convolution and test_3d_convolution in conv_test
Reviewed By: protonu
Differential Revision: D13562099
fbshipit-source-id: 586e8a7574a2764f2a3b559db6c2415b3ab90453
Gregory Chanan [Thu, 3 Jan 2019 17:16:16 +0000 (09:16 -0800)]
Port torch.arange to aten and parallelize on CPU.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15667
Differential Revision: D13566631
Pulled By: gchanan
fbshipit-source-id: e3243a4e81ecb58373681df8bf6a00428352fb14
Gerard Goossen [Thu, 3 Jan 2019 12:59:41 +0000 (04:59 -0800)]
Ignore flake8 warning about whitespace before ':' (#15663)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15663
Ignore sometimes incorrect flake8 warning about whitespace before ':'
See https://github.com/ambv/black/issues/315
Reviewed By: soumith
Differential Revision: D13565818
fbshipit-source-id: 9d5ec2335899527ee71f4b505c00865a354e3bf0
Xiaomeng Yang [Thu, 3 Jan 2019 08:16:03 +0000 (00:16 -0800)]
Add count_include_pad arg for PoolOpGradient on CPU and fix ARM performance issue. (#15651)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15651
Add count_include_pad arg for PoolOpGradient on CPU and fix ARM performance issue.
Reviewed By: houseroad
Differential Revision: D13564257
fbshipit-source-id: 3a143f1122bc507ccb7827e9b46908d5c7203735
Jianyu Huang [Thu, 3 Jan 2019 05:05:55 +0000 (21:05 -0800)]
Unify the usage of Dequantize (#15685)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15685
The declaration of "Dequantize" is in "fbsource/fbcode/deeplearning/fbgemm2/QuantUtils.h", so it requires the "fbgemm" namespace.
<T> is actually optional, since the type can be deduced from the first argument.
In some places we have "Dequantize<T>(...)", while in other places we have "Dequantize(...)". We'd better unify them. As a reference, all occurrences of "Quantize" use "fbgemm::Quantize<T>(...)".
Reviewed By: jspark1105
Differential Revision: D13570847
fbshipit-source-id: 7fca9f7f9e4e0d9e5eb27ac44b8707adc3c80717
Shen Li [Thu, 3 Jan 2019 05:01:13 +0000 (21:01 -0800)]
Fix vec256 inversion (#15659)
Summary:
soumith zou3519
I was browsing the code and think `vec256_int.h` might need a minor revision, but I'm not 100% sure.
1. It currently inverts the result by `XOR` with 0. Should it `XOR` with 1 instead?
~2. AVX2 logical operations set all bits in a byte/word/... to `1` if the condition holds, so functions such as `_mm256_cmpeq_epi64` return `0/-1` instead of `0/1`. Should the result be masked with `1` to make sure it returns 0/1?~
~Would I be correct to assume that the code revised below is not yet activated, but will be after we port legacy code to ATen?~
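For reference on question 1, the behavior on 0/1 values (a plain-Python illustration):
```python
for m in (0, 1):
    assert m ^ 0 == m       # XOR with 0 is the identity, not an inversion
    assert m ^ 1 == 1 - m   # XOR with 1 is the logical NOT
```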
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15659
Differential Revision: D13565929
Pulled By: mrshenli
fbshipit-source-id: 8ae3daf256c3d915dd855a2215c95275e899ea8c
Zachary DeVito [Thu, 3 Jan 2019 04:07:55 +0000 (20:07 -0800)]
Add min/max on numbers to JIT
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15680
Differential Revision: D13568806
Pulled By: zdevito
fbshipit-source-id: ef0f33cc12a057184293bc31d28cc7b24f73eb94
Natalia Gimelshein [Thu, 3 Jan 2019 03:50:19 +0000 (19:50 -0800)]
initialize with ident value in global reduction (#15653)
Summary:
Fixes #15647. cc colesbury.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15653
Differential Revision: D13571132
Pulled By: soumith
fbshipit-source-id: 8f25943c974b3b931f4528e0e0a370bc095dab51
svcscm [Thu, 3 Jan 2019 02:53:15 +0000 (18:53 -0800)]
Updating submodules
Reviewed By: yns88
fbshipit-source-id: f7b540159cf1fe72825d09d55d56117d14ff90eb
rtarquini [Thu, 3 Jan 2019 02:48:31 +0000 (18:48 -0800)]
Support for Jetson Xavier (#15660)
Summary:
The requested changes are to support building PyTorch 1.0 on the Jetson Xavier with OpenBLAS. The Jetson Xavier with JetPack 3.3 has generic LAPACK installed. To pick up the CUDA-accelerated BLAS/LAPACK, I had to build OpenBLAS and build/link PyTorch from source. Otherwise, I got a runtime error indicating the LAPACK routines were not CUDA-enabled.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15660
Differential Revision: D13571324
Pulled By: soumith
fbshipit-source-id: 9b148d081d6e7fa7e1824dfdd93283c67f69e683
Jesse Hellemn [Thu, 3 Jan 2019 01:10:35 +0000 (17:10 -0800)]
Fixing cuda100 smoke tests
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15673
Reviewed By: yf225
Differential Revision: D13568746
Pulled By: pjh5
fbshipit-source-id: e636de417d61b48074399da75bfb2576c9f62743
Jerry Zhang [Thu, 3 Jan 2019 00:32:02 +0000 (16:32 -0800)]
Remove PythonOp non-CPU path and PytorchOp (#15417)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15417
Right now the way we test whether a Blob contains a CPU tensor in ```PythonOpBase``` is broken, which means the non-CPU path might never be taken.
Searching through the codebase, the non-GPU path is used in PythonDLPack and in PytorchOp, which is unused. So we'll remove the non-GPU path in this diff.
Reviewed By: dzhulgakov
Differential Revision: D13495011
fbshipit-source-id: 9fe9537f05026d2a2cf7051efa81d184de722710
svcscm [Wed, 2 Jan 2019 22:55:43 +0000 (14:55 -0800)]
Updating submodules
Reviewed By: yns88
fbshipit-source-id: bb142e8f91046cc2b7ea32dac46ec0753b4bc218
Michael Suo [Wed, 2 Jan 2019 22:32:00 +0000 (14:32 -0800)]
fix select after chunk op (#15672)
Summary:
Fixes #15669.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15672
Differential Revision: D13567274
Pulled By: suo
fbshipit-source-id: a63e6cfc9dacedd4cb99dc51eee452038418001e
Michael Suo [Wed, 2 Jan 2019 20:50:13 +0000 (12:50 -0800)]
make flake8 failure blocking (#15675)
Summary:
Right now it just prints whatever flake8 errors there are and moves forward with the commit. This is too easy to miss.
It should block the commit so that the user can fix the issue.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15675
Differential Revision: D13567821
Pulled By: suo
fbshipit-source-id: 5f0de40ddd771bad8d6848417408cffbceb03183
Zachary DeVito [Wed, 2 Jan 2019 20:45:38 +0000 (12:45 -0800)]
redo sleef build fix (#15549)
Summary:
This was accidentally reverted by #14866
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15549
Differential Revision: D13549674
Pulled By: zdevito
fbshipit-source-id: e209aac53dccb082b91cfa2d292310eabeb459e3
Jongsoo Park [Wed, 2 Jan 2019 19:25:41 +0000 (11:25 -0800)]
format conv_test.py to prepare D13562099 (#15632)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15632
Just formatting and a few lints.
Reviewed By: yinghai
Differential Revision: D13562403
fbshipit-source-id: c56f8ee61f68cdaccc0828a764ff729454f68259
kiendang [Wed, 2 Jan 2019 08:18:07 +0000 (00:18 -0800)]
Fix torch.gesv args in doc
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15649
Differential Revision: D13564312
Pulled By: soumith
fbshipit-source-id: b3bba2ece600880077eb09b092ce17e331995bd6
surgan12 [Wed, 2 Jan 2019 07:09:45 +0000 (23:09 -0800)]
clamp fixes (#15479)
Summary: Fix for #15338.
Differential Revision: D13564343
Pulled By: soumith
fbshipit-source-id: be64b572945533e10ae6f627d335b47f093720a3
svcscm [Wed, 2 Jan 2019 03:41:31 +0000 (19:41 -0800)]
Updating submodules
Reviewed By: cdelahousse
fbshipit-source-id: acb68439e62ea270af22364183a6ecba883fab66
svcscm [Wed, 2 Jan 2019 01:20:19 +0000 (17:20 -0800)]
Updating submodules
Reviewed By: cdelahousse
fbshipit-source-id: 5c5ad6a5cc9220ee1dd9565d64c7459f866ff74d
Alexander Rodin [Mon, 31 Dec 2018 02:05:29 +0000 (18:05 -0800)]
Fix typo in documentation
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15628
Differential Revision: D13562685
Pulled By: soumith
fbshipit-source-id: 1621fcff465b029142313f717035e935e9159513
vishwakftw [Sun, 30 Dec 2018 20:39:10 +0000 (12:39 -0800)]
Make btriunpack work for high dimensional batches and faster than before (#15286)
Summary:
Changelog:
- Optimize btriunpack by using `torch.where` instead of indexing, using in-place instead of out-of-place operations, and avoiding costly permutations by computing the final permutation over a list.
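For illustration, typical usage (a minimal example with a simple batch; the change extends support to higher-dimensional batches and speeds up the unpacking):
```python
import torch

A = torch.randn(4, 3, 3)                  # batch of square matrices
A_LU, pivots = torch.btrifact(A)
P, L, U = torch.btriunpack(A_LU, pivots)  # P @ L @ U reconstructs A batch-wise
```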
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15286
Differential Revision: D13562038
Pulled By: soumith
fbshipit-source-id: e2c94cfab5322bf1d24bf56d7b056619f553acc6