Jerry Zhang [Thu, 31 Jan 2019 23:42:37 +0000 (15:42 -0800)]
fix scope related naming issue in build_quant_conv_bn_relu, and also format function signature
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14885
Reviewed By: harouwu
Differential Revision:
D13374077
fbshipit-source-id:
5082c4ea0d2fdc197243b022b9b489f38b04c8e9
Dmytro Dzhulgakov [Thu, 31 Jan 2019 23:39:22 +0000 (15:39 -0800)]
Disable layernorm_c10 test for now (#16630)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16630
Two PRs landed concurrently: enforcing tensor constraints and refactoring c10. Since it's not prod code, disable the test and let Sebastian fix it properly.
Reviewed By: ezyang
Differential Revision:
D13908117
fbshipit-source-id:
381c5626078b794afa1fc7a95cb1ea529650424c
Elias Ellison [Thu, 31 Jan 2019 23:37:52 +0000 (15:37 -0800)]
Remove constant propagation expect files (#16348)
Summary:
Remove constant prop expect files, and express graph conditions via python bindings.
First diff in larger effort to remove expect files
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16348
Differential Revision:
D13906929
Pulled By: eellison
fbshipit-source-id:
7963caa3ccbc7bfc0006a160c952aa173d1ce633
James Reed [Thu, 31 Jan 2019 22:13:45 +0000 (14:13 -0800)]
Fix a lot of C++ build warnings (#16411)
Summary:
I went through my build log and did what I thought were reasonable fixes to all the C++ compilation warnings that came up
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16411
Differential Revision:
D13901006
Pulled By: jamesr66a
fbshipit-source-id:
02df4e3e5a5c8dd9e69ac9f065cd3f2a80645033
David Riazati [Thu, 31 Jan 2019 22:06:44 +0000 (14:06 -0800)]
Add immutable dict support (#16208)
Summary:
This PR adds basic support (creation and indexing) for immutable dictionaries in Script. This includes Python/string frontend support and an `IValue::GenericDict` type backed by a `std::unordered_map`. Only `str`, `int`, and `float` are supported as keys; any type can be a value. The structure is pretty similar to list.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16208
Differential Revision:
D13881686
Pulled By: driazati
fbshipit-source-id:
29ce9835b953c3456f57bcc2bbdf7fe0cbf941c0
Jithun Nair [Thu, 31 Jan 2019 22:00:00 +0000 (14:00 -0800)]
Make the miopen handle part of ConvolutionParams (#16613)
Summary:
so that it's included in the hashed key that decides whether to call Find or not. This is required to ensure that Find is run for all devices.
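The idea — folding the device handle into the hashed cache key so the expensive Find step runs once per device rather than once globally — can be sketched as follows (hypothetical names; not the actual MIOpen integration):

```python
# Sketch: keying an algorithm cache on the device handle so that
# Find runs once per device, not once globally.
_algo_cache = {}

def find_conv_algo(handle, conv_params, find_fn):
    key = (handle, conv_params)     # the handle is part of the hashed key
    if key not in _algo_cache:
        _algo_cache[key] = find_fn(handle, conv_params)
    return _algo_cache[key]
```

With the handle left out of the key, the second device would silently reuse the first device's result instead of running Find itself.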
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16613
Differential Revision:
D13901769
Pulled By: bddppq
fbshipit-source-id:
7d29ea9e40231cd4eef80847afa1307efeb0945c
Dmytro Dzhulgakov [Thu, 31 Jan 2019 21:30:58 +0000 (13:30 -0800)]
Back out "Revert D13596031: Improve c2-aten tensor interop and add proper testing" (#16514)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16514
Original commit changeset:
dc371697f14b
Relanding https://github.com/pytorch/pytorch/pull/15860 - the problem was that layer_norm was using at::empty which is not yet on mobile
Reviewed By: ezyang
Differential Revision:
D13861480
fbshipit-source-id:
e2116da32bc117175c96b9151b1beba9b31eff36
Zachary DeVito [Thu, 31 Jan 2019 21:11:35 +0000 (13:11 -0800)]
use distutils to discover msvc compiler paths (#16540)
Summary:
This simplifies the process for building on windows, since users no longer have to find and run the vcvarsall.bat file.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16540
Differential Revision:
D13893596
Pulled By: zdevito
fbshipit-source-id:
79b7ad55c3251b3f573fd8464931138f8a52dd1d
Bram Wasti [Thu, 31 Jan 2019 20:41:55 +0000 (12:41 -0800)]
Fix SIOF in torch using caffe2 registry (#16473)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16473
This resolves the issues associated with caffe2 initialization (specifically the REGISTER_FUNCTION_SCHEMA_OPERATOR calls) being run after Torch's static op registration calls.
The fix employs a Meyers singleton wrapped by the constructor of a type. Everything is placed inside a macro to make it easier for users to use.
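The shape of the fix — lazy, first-use construction instead of relying on static initialization order — can be illustrated with a Python analogue of the C++ Meyers singleton (an illustration of the pattern, not the actual macro):

```python
def op_registry():
    # Construct the registry the first time it is needed, so its
    # lifetime no longer depends on static initialization order
    # (the Python analogue of a C++ function-local static).
    if not hasattr(op_registry, "_instance"):
        op_registry._instance = {}
    return op_registry._instance
```

Registrations from any translation unit then go through the accessor, which guarantees the registry exists by the time it is first used.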
Reviewed By: smessmer
Differential Revision:
D13854306
fbshipit-source-id:
ecf60861f229532826fae254974e9af4389055df
Bram Wasti [Thu, 31 Jan 2019 20:41:55 +0000 (12:41 -0800)]
Swap Caffe2 operator constructor to pass arguments by value (#16576)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16576
Allows instantiation of an operator with arguments passed by move rather than explicit copies, per Sebastian's suggestion.
Reviewed By: smessmer
Differential Revision:
D13882416
fbshipit-source-id:
bc8d50e73f5a1ae87155b0cf96799b8573a7a8fa
David Riazati [Thu, 31 Jan 2019 19:58:56 +0000 (11:58 -0800)]
Allow ScriptModule(optimize=False) when jit disabled (#16297)
Summary:
Fixes #16285
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16297
Differential Revision:
D13797276
Pulled By: driazati
fbshipit-source-id:
3a93500d4233cfbb8f5af7feba43f6ff4c3d22c7
Thomas Viehmann [Thu, 31 Jan 2019 19:57:56 +0000 (11:57 -0800)]
Get more fusion after autodiff uses SumToSize (#14957)
Summary:
Here is a fresh attempt at getting some fusion back in autodiff-generated graphs in the presence of SumToSize.
- The sum to size operator is now `aten::_grad_sum_to_size` to allow symbolic script differentiation (and that in turn would need to use this in place of sum_to_size to signal that it strictly operates on gradients). This is also used in the autodiff code, replacing `prim::SumToSize`.
- `_grad_sum_to_size` is now fusable, `cat`s - which are fused afterwards thanks to Adam's simplification of the code - are only fused if there is no `_grad_sum_to_size` in the fusion group.
- I push the `_grad_sum_to_size` out of the fusion group when compiling and record the desired summations in the KernelSpec. The reasoning is the following:
- As the autodiff is a repeated application of the chain rule, we always have the pattern `grad_in = mm(A, grad_out)`, with A often diagonal for cases interesting to the fuser, whence it is `grad_in = a * grad_out` (a pointwise multiplication). We know that only `grad_out` may have AutodiffGradSumToSize applied, so we can commute AutodiffGradSumToSize with the `mul` (and `div` and `neg` are of similar origin).
- For `type_as` the gradient might be giving the type, so just skip SumToSize.
- `add` (which was inserted as `prim::AutogradAdd`) adds gradients when the forward used the same value in several places. This is non-broadcasting, so we know that the two arguments would have the same sizes as inputs - which is good, so we don't have to do bookkeeping of the two parts.
Details:
- During fusion, the Tensor arguments are always kept as the first parameters of the fusion group to accommodate indexing assumptions in the fuser.
- The rewriting of the fusion group to record the necessary output transformation and eliminate `_grad_sum_to_size` from the fusion group is now in the fuser compile step.
- In the execution step, the arguments are split into Tensor / Non-Tensor and the non-tensor args are mostly forgotten about except for doing `sum_to_size` at the end. This would want to be improved if/when we fuse nonconstant scalar arguments.
- In a number of places in the fuser, the non-Tensor arguments to the fusion group needed to be ignored.
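The behavior of `_grad_sum_to_size` — summing a broadcast gradient back down to the original input's size — can be sketched in NumPy, along with the commuting property relied on above (a sketch of the semantics, not the fuser's implementation):

```python
import numpy as np

def grad_sum_to_size(grad, size):
    # Undo broadcasting: sum away leading extra dims, then sum
    # (keeping the dim) wherever the original size was 1.
    while grad.ndim > len(size):
        grad = grad.sum(axis=0)
    for i, s in enumerate(size):
        if s == 1 and grad.shape[i] != 1:
            grad = grad.sum(axis=i, keepdims=True)
    return grad
```

Because a pointwise factor `a` of the target size is constant across the broadcast dimensions, `grad_sum_to_size(a * grad_out, size)` equals `a * grad_sum_to_size(grad_out, size)` — which is what lets the summation commute with the `mul` and be pushed out of the fusion group.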
Thank you, apaszke for the insightful discussion. All bad ideas and errors are my own.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14957
Differential Revision:
D13888173
Pulled By: zou3519
fbshipit-source-id:
071992c876e8b845f2b3e6329ae03a835d39a0ea
peter [Thu, 31 Jan 2019 19:19:31 +0000 (11:19 -0800)]
Enable USE_NINJA in build_pytorch_libs.py if it is in PATH (#16545)
Summary:
It is required to fix the nightly conda builds.
cc zdevito ezyang soumith
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16545
Differential Revision:
D13900610
Pulled By: soumith
fbshipit-source-id:
676f903a082f6f083e07245a1df38175bb82b2f7
sebftw [Thu, 31 Jan 2019 19:14:26 +0000 (11:14 -0800)]
Replaced "from_numpy" with "as_tensor" in docs. (#16587)
Summary:
In the warning box on https://pytorch.org/docs/stable/tensors.html#torch.Tensor.new_tensor it says:
> new_tensor() always copies data. [...] If you have a numpy array and want to avoid a copy, use **torch.from_numpy()**.
But then further up the page we have another warning box with the message:
> torch.tensor() always copies data. [...] If you have a numpy array and want to avoid a copy, use **torch.as_tensor()**.
Now I believe this is just a small oversight, since from_numpy is to be deprecated in favour of as_tensor. See for example https://github.com/pytorch/pytorch/issues/6885 and https://github.com/pytorch/pytorch/issues/8611. I suggest to just use **torch.as_tensor()** in both of the warning boxes.
cc gchanan
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16587
Differential Revision:
D13897038
Pulled By: gchanan
fbshipit-source-id:
2eb3cd47d2c0b5bf4350f980de3be9fe59b4a846
bhushan [Thu, 31 Jan 2019 19:12:21 +0000 (11:12 -0800)]
printing correct dimension while indexing (#16495)
Summary:
applySelect modifies the tensor and removes the topmost dimension, which makes the dimension complicated to track using `dim` alone; another parameter, `real_dim`, is needed to signify the original dimension.
fixes #16192
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16495
Differential Revision:
D13897182
Pulled By: gchanan
fbshipit-source-id:
105581dbbff6b431cc8e2539a07e0058161e53a1
Brennan Vincent [Thu, 31 Jan 2019 18:41:17 +0000 (10:41 -0800)]
remove unused capture (#16526)
Summary:
We don't use this in the lambda body anymore. Remove it to fix a warning.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16526
Differential Revision:
D13867043
Pulled By: umanwizard
fbshipit-source-id:
4c9a9d194fdfcb63fde16823517d2c6c8e2ae93d
Michael Suo [Thu, 31 Jan 2019 18:25:40 +0000 (10:25 -0800)]
split up AliasTracker into a separate file (#16588)
Summary:
This just moves thing around to make AliasTracker independently testable and keep things a little more separate. Follow-on PRs will change the interfaces of AliasDb and AliasTracker to be more clearly distinct.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16588
Differential Revision:
D13891894
Pulled By: suo
fbshipit-source-id:
c5b590b5fdd462afefe743e499034068bf35784a
Zachary DeVito [Thu, 31 Jan 2019 18:06:26 +0000 (10:06 -0800)]
Access profiler from cpp (#16580)
Summary:
jamesr66a
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16580
Differential Revision:
D13891299
Pulled By: zdevito
fbshipit-source-id:
83b335bf3231a9ab30e9318f2bce6d741ba5ffae
SsnL [Thu, 31 Jan 2019 14:53:57 +0000 (06:53 -0800)]
Fix cuFFT plan cache size on CUDA 10 cannot be set to > 4096 (#16384)
Summary:
Doc doesn't need to be changed. Also clarifies two inaccurate comments.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16384
Differential Revision:
D13886637
Pulled By: soumith
fbshipit-source-id:
227385008211a6f3ad9135c54fd2d3754cc9daaf
Jesse Hellemn [Thu, 31 Jan 2019 07:36:32 +0000 (23:36 -0800)]
Clean up binary jobs in CircleCI (#16511)
Summary:
- Add libtorch upload jobs
- Unify checkout and env code for binary jobs (san binary test jobs)
- Compress variables passed into binary jobs
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16511
Differential Revision:
D13893714
Pulled By: pjh5
fbshipit-source-id:
b8bd72e1397dec569a8ec3e859e319178c7c6f8b
svcscm [Thu, 31 Jan 2019 07:29:16 +0000 (23:29 -0800)]
Updating submodules
Reviewed By: zpao
fbshipit-source-id:
36c332beab1aaccb281d5ee07952d399056b7f8c
Jongsoo Park [Thu, 31 Jan 2019 06:46:07 +0000 (22:46 -0800)]
more careful use of inline/template function in perfkernels (#15388)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15388
This is another pass to make the perfkernels code safer from illegal-instruction errors.
Removed the dependency on c10/util/Logging.h.
We err on the side of safety at the expense of some verbosity.
Reviewed By: dskhudia
Differential Revision:
D13502902
fbshipit-source-id:
4f833115df885c5b4f8c1ca83b9badea1553f944
svcscm [Thu, 31 Jan 2019 05:08:38 +0000 (21:08 -0800)]
Updating submodules
Reviewed By: zpao
fbshipit-source-id:
a0a2a635f86ef3bebfb4ca1a36f7ec9c2b09d7bb
Jerry Zhang [Thu, 31 Jan 2019 02:26:48 +0000 (18:26 -0800)]
DeviceScope support for CUDA and testing (#15357)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15357
Supporting device option in FQ bn folding for ITER related ops
Reviewed By: wat3rBro
Differential Revision:
D13370259
fbshipit-source-id:
4324c2716dfa69ddedc661ae2b1ad34c2f6fc4b6
Antoine Busque [Thu, 31 Jan 2019 02:11:04 +0000 (18:11 -0800)]
Fix: avoid race condition on model zoo directory creation (#16578)
Summary:
The current implementation of the `torch.utils.model_zoo.load_url`
function is prone to a race condition when creating the directory in
which it saves the loaded models, since it checks whether the
directory exists and then creates it in two separate steps. The
directory can be created after the check was made but before we
attempt to create the directory, resulting in an unhandled exception.
Instead, try to create the directory directly, and do nothing if it
already exists.
Note: for Python versions ≥ 3.2, we could simply use the
`exist_ok=True` flag on `os.makedirs`, but this is unavailable in
Python 2.7.
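The Python-2-compatible pattern described here — attempt the creation directly and tolerate `EEXIST` — looks roughly like this (a sketch of the approach, not the exact code in `load_url`):

```python
import errno
import os

def ensure_dir(path):
    # Try to create the directory directly; if another process
    # created it first, makedirs raises EEXIST and we move on.
    try:
        os.makedirs(path)
    except OSError as e:
        if e.errno != errno.EEXIST:
            raise
```

Unlike the check-then-create sequence, this has no window between the existence check and the creation in which another process can slip in.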
Signed-off-by: Antoine Busque <antoine.busque@elementai.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16578
Differential Revision:
D13886470
Pulled By: soumith
fbshipit-source-id:
88815c8a65eec96caea32d6e9a7f83802502fdb9
Iurii Zdebskyi [Thu, 31 Jan 2019 02:09:56 +0000 (18:09 -0800)]
Remove redundant declarations (#16463)
Summary:
As there are no checks that all the functions are actually being used, we can end up with stale entries. This diff removes unused entries from Declarations.cwrap.
Testing:
Successful build via "python setup.py develop"
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16463
Differential Revision:
D13885815
Pulled By: izdeby
fbshipit-source-id:
4e35c2ac9196167af74dff3d4f971210721285f8
Michael Suo [Thu, 31 Jan 2019 01:48:59 +0000 (17:48 -0800)]
begin splitting up cpp tests (#16536)
Summary:
Start splitting up these tests so we don't have a massive test file. Doesn't change how you run them, since `gtest.cpp` and `no-gtest.cpp` will still collect everything.
Renamed `tests.h` to `test_misc.h` to vaguely discourage people from adding yet more stuff to it.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16536
Reviewed By: zdevito, eellison
Differential Revision:
D13882215
Pulled By: suo
fbshipit-source-id:
61cf97f3c2c50703dcf6a3a34da01415ecb7e7d6
Christian Puhrsch [Thu, 31 Jan 2019 01:19:20 +0000 (17:19 -0800)]
Use dispatch tensor for device_guard instead of first Tensor argument
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16579
Differential Revision:
D13886593
Pulled By: cpuhrsch
fbshipit-source-id:
0722ec61da13c2541f7de51bf5c1ecfb9a12fad2
Owen Anderson [Thu, 31 Jan 2019 01:04:02 +0000 (17:04 -0800)]
Eliminate PYCMD in favor of PYTHON_EXECUTABLE in CMake.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16522
Differential Revision:
D13867376
Pulled By: resistor
fbshipit-source-id:
6bce68facea83c5161a31fcdfafe08827999eb2b
ParticularlyPythonicBS [Thu, 31 Jan 2019 00:43:51 +0000 (16:43 -0800)]
added example to clear ambiguity in torch.Tensor.view (#16563)
Summary:
Added example to the documentation of [torch.Tensor.view](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view) to avoid the misunderstanding referenced in issue [#16560](https://github.com/pytorch/pytorch/issues/16560)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16563
Differential Revision:
D13885008
Pulled By: soumith
fbshipit-source-id:
b7e7fbea1f16124bc4e679ae9c50ab619e1f043d
Gregory Chanan [Thu, 31 Jan 2019 00:01:51 +0000 (16:01 -0800)]
Fix uninitialized data and broken broadcasting with sparse.mm and sparse.addmm (#16572)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/16543.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16572
Differential Revision:
D13884235
Pulled By: gchanan
fbshipit-source-id:
308916051364d72f72ec56f0495c6c7c09845131
SsnL [Wed, 30 Jan 2019 23:05:38 +0000 (15:05 -0800)]
add new build files to gitignore; test that build does not leave git repo checkout dirty (#16565)
Summary:
These appear when I run
```
MACOSX_DEPLOYMENT_TARGET=10.13 CC=clang CXX=clang++ NO_CUDA=1 NO_DISTRIBUTED=1 BUILD_CAFFE2_OPS=0 DEBUG=1 python3 setup.py develop --cmake
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16565
Differential Revision:
D13885790
Pulled By: ezyang
fbshipit-source-id:
af0e028d7fa7832a945aaee4e241ceb5418f4ec8
Edward Yang [Wed, 30 Jan 2019 21:51:14 +0000 (13:51 -0800)]
Move Deprecated.h to c10
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16504
Reviewed By: smessmer
Differential Revision:
D13860570
fbshipit-source-id:
4742dc30c78d49b0f655b4e9536f51b36a595636
Elias Ellison [Wed, 30 Jan 2019 21:48:36 +0000 (13:48 -0800)]
Allow generic containers as module inputs (#16482)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/16326
Previously we didn't handle module inputs which included Generic Lists. When checking whether a generic list is a subvalue of the input arg type, I currently recurse on every element of the list. This shouldn't be too slow since the innermost list will be specialized and we won't have to check its elements.
E.g. Tensor[][] -> GenericList [TensorList ].
The error message could be improved, but extracting the complete type of nested lists would have to deal with unifying types across lists / empty lists & typevars so I'm going to save that for a follow up PR.
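The recursion described above — checking each element of a generic list against the expected element type — can be sketched like this (hypothetical helper and type encoding; the real check lives in the C++ type system):

```python
def is_subvalue(value, typ):
    # typ is either "Tensor" or ("List", inner_type). A generic list
    # matches a list type iff every element matches the inner type,
    # recursing through nested lists such as Tensor[][].
    if typ == "Tensor":
        return not isinstance(value, list)
    _kind, inner = typ
    return isinstance(value, list) and all(is_subvalue(v, inner) for v in value)
```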
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16482
Differential Revision:
D13882582
Pulled By: eellison
fbshipit-source-id:
3609bc572f0ee9ebf20a77ea5ebc8fa3b165e24b
Erik Brinkman [Wed, 30 Jan 2019 21:30:35 +0000 (13:30 -0800)]
Explicit pdist captures (#16286)
Summary:
Per discussion with cpuhrsch
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16286
Differential Revision:
D13883001
Pulled By: erikbrinkman
fbshipit-source-id:
86f35d35fde5db67e3fbb09abc418da0116c9aac
Mikhail Zolotukhin [Wed, 30 Jan 2019 21:30:30 +0000 (13:30 -0800)]
Include ATen/core/functional.h directly instead of torch/csrc/utils/functional.h. (#16377)
Summary:
One more shim removed.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16377
Differential Revision:
D13821816
Pulled By: ZolotukhinM
fbshipit-source-id:
007f014d404de51841437db7eef28367a2f6e46b
Jesse Hellemn [Wed, 30 Jan 2019 21:29:33 +0000 (13:29 -0800)]
Remove --no-update-dependencies (#16575)
Summary:
Absolutely no idea why this is needed. This should be a valid argument.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16575
Differential Revision:
D13884796
Pulled By: pjh5
fbshipit-source-id:
6011e721e2870499f6b5e627d5ad00ece08b530b
Edward Yang [Wed, 30 Jan 2019 21:21:02 +0000 (13:21 -0800)]
Update PyTorch DockerVersion to 285. (#16507)
Summary:
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16507
Differential Revision:
D13884588
Pulled By: ezyang
fbshipit-source-id:
b7e22daa15874f9a226195d4749b4f9f827d7c1e
Tim Khatkevich [Wed, 30 Jan 2019 21:15:59 +0000 (13:15 -0800)]
Support fallback for more operators (#16566)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16566
it's a follow-up to https://github.com/pytorch/pytorch/pull/16456
Reviewed By: yinghai
Differential Revision:
D13881462
fbshipit-source-id:
eff063580ac8f622477417ed4b25320299451811
Lu Fang [Wed, 30 Jan 2019 21:12:42 +0000 (13:12 -0800)]
fix the linter
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16567
Differential Revision:
D13882166
Pulled By: houseroad
fbshipit-source-id:
daf760f51e4fce376ca09421900405970d00c4d2
Sebastian Messmer [Wed, 30 Jan 2019 21:12:33 +0000 (13:12 -0800)]
Add a test case calling caffe2 layer_norm from caffe2 executor but through the c10 dispatcher
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16283
Reviewed By: ezyang
Differential Revision:
D13792591
fbshipit-source-id:
9c190649e38e8706549102b2e136ceaf508eb37f
Jerry Zhang [Wed, 30 Jan 2019 20:37:55 +0000 (12:37 -0800)]
Back out "[pt1][tensor] Change ConvPoolOp<Context>::SetOutputSize to ConvPoolOp<Context>::GetOutputSize" (#16516)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16516
Original commit changeset:
64abce3dbaed
Reviewed By: dzhulgakov
Differential Revision:
D13863715
fbshipit-source-id:
f1923fdca4a1a82768d9c280a8493ff15a7eb2ba
zrphercule [Wed, 30 Jan 2019 19:23:48 +0000 (11:23 -0800)]
Remove the debugging info of pytorch=>onnx coverage script (#16538)
Summary:
Remove the debug info.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16538
Reviewed By: houseroad
Differential Revision:
D13872068
Pulled By: zrphercule
fbshipit-source-id:
7572668d0048c37f6b6029a48e5ae4b8b21823f7
Jacie Fan [Wed, 30 Jan 2019 19:20:44 +0000 (11:20 -0800)]
CUDA histogram implementation
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15842
Reviewed By: zou3519
Differential Revision:
D13868982
Pulled By: jaciefan
fbshipit-source-id:
bce81dc121c4538d204047506f8f14d0b4d8f905
Michael Suo [Wed, 30 Jan 2019 19:06:32 +0000 (11:06 -0800)]
Use a points-to graph for alias analysis (#16386)
Summary:
This PR changes the way we store aliasing information from a "set" approach to a "points-to" analysis. Set-based approaches lose information in ways that make it difficult to do "live" updates to the alias DB as one is mutating the graph.
The tradeoff is that simple queries get more expensive, since they require traversing the points-to graph to answer most questions. In practice, this is unlikely to be that costly since we don't have massive aliasing chains, but we could create an approximation/caching layer if this becomes a problem.
My rough plan is:
1. This PR, switching to a points-to graph
2. Make it "live": analyzing a node should record all the edges the node added, so that we can rollback when the node is destroyed.
3. Reduce wildcard scope: we can make the wildcard a special vertex that points to anything that we're not "sure" about; namely, things that have been put inside lists, or graph inputs.
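A minimal sketch of the points-to idea — answering may-alias queries by checking whether two values can reach a common vertex — might look like this (an illustration of the data structure, not the AliasDb implementation):

```python
from collections import defaultdict

class PointsToGraph:
    def __init__(self):
        # value -> set of values/locations it may point to
        self.edges = defaultdict(set)

    def add_points_to(self, src, dst):
        self.edges[src].add(dst)

    def _reachable(self, v):
        # everything v may (transitively) point to, including itself
        seen, stack = set(), [v]
        while stack:
            x = stack.pop()
            if x not in seen:
                seen.add(x)
                stack.extend(self.edges[x])
        return seen

    def may_alias(self, a, b):
        # a and b may alias if their points-to closures intersect
        return bool(self._reachable(a) & self._reachable(b))
```

This is why simple queries get more expensive than with sets: each `may_alias` call traverses the graph instead of comparing precomputed set memberships.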
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16386
Differential Revision:
D13855117
Pulled By: suo
fbshipit-source-id:
f009f58143173c275501624eb105d07ab60fe5e1
Lara Haidar-Ahmad [Wed, 30 Jan 2019 18:57:08 +0000 (10:57 -0800)]
ONNX Export Flatten operator
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16240
Reviewed By: bddppq
Differential Revision:
D13800025
Pulled By: houseroad
fbshipit-source-id:
ae4c5e42026477b28ffd44eda2438d93936ea510
Edward Yang [Wed, 30 Jan 2019 18:49:22 +0000 (10:49 -0800)]
Revert D13880053: [pytorch][PR] add new build files to gitignore; test that build doesn't leave repo dirty
Differential Revision:
D13880053
Original commit changeset:
0171f42438ef
fbshipit-source-id:
a734f8704c1cbe16434c672289c505b19b2b490a
vishwakftw [Wed, 30 Jan 2019 18:44:59 +0000 (10:44 -0800)]
Allow list and tuples to be passed as output_size to max_unpool1d (#16489)
Summary:
Changelog:
- Modify concatenation of [1] to a tuple by using cases for list and non-list types.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16489
Differential Revision:
D13875838
Pulled By: soumith
fbshipit-source-id:
fade65cc47385986b773b9bde9b4601ab93fe1cf
Lu Fang [Wed, 30 Jan 2019 17:30:22 +0000 (09:30 -0800)]
Fix the flake8 linter
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16549
Reviewed By: bddppq
Differential Revision:
D13877435
Pulled By: houseroad
fbshipit-source-id:
dbe575ba3f6dd30d27ac6aa5eec2eea025063540
Ailing Zhang [Wed, 30 Jan 2019 17:27:06 +0000 (09:27 -0800)]
add example multiprocess code (#16345)
Summary: fixes #16141
Differential Revision:
D13868539
Pulled By: ailzhang
fbshipit-source-id:
03e858d0aff7804c5e9e03a8666f42fd12836ef2
Yinghai Lu [Wed, 30 Jan 2019 16:55:37 +0000 (08:55 -0800)]
Support int64_t shape data for ideep reshape op
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16533
Reviewed By: jerryzh168
Differential Revision:
D13867402
fbshipit-source-id:
ff53a851f142ef915ad69da3868bb3aab4d48987
SsnL [Wed, 30 Jan 2019 16:38:49 +0000 (08:38 -0800)]
add new build files to gitignore; test that build doesn't leave repo dirty
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16441
Differential Revision:
D13880053
Pulled By: ezyang
fbshipit-source-id:
0171f42438efdd651b6af22e521b80e85b12681c
Tim Khatkevich [Wed, 30 Jan 2019 11:43:48 +0000 (03:43 -0800)]
Fallback support for more operators (#16456)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16456
Adding fallbacks for more operators and fixing ifndef for expand_op.h
Reviewed By: yinghai
Differential Revision:
D13845382
fbshipit-source-id:
b7c5b7f7f176707b9ddffade139562a8085967ed
Lu Fang [Wed, 30 Jan 2019 09:13:16 +0000 (01:13 -0800)]
Fix the dropout onnx symbolic, and ensure all exported models in test_operators.py are eval mode (#16547)
Summary:
In eval mode, skip dropout operator in onnx exporter.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16547
Reviewed By: houseroad
Differential Revision:
D13877136
Pulled By: dzhulgakov
fbshipit-source-id:
c366da156f83677bcf4989b79166aae5b6c36125
Xiaomeng Yang [Wed, 30 Jan 2019 08:01:15 +0000 (00:01 -0800)]
Separate level1 elementwise functions from math (#16397)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16397
Separate level1 elementwise functions from math
i-am-not-moving-c2-to-c10
Reviewed By: houseroad
Differential Revision:
D13830626
fbshipit-source-id:
e6e672647076dab8b3b24be181f580a1486250c9
Sebastian Messmer [Wed, 30 Jan 2019 07:29:54 +0000 (23:29 -0800)]
Fix includes for ATen/core/stack.h (#16462)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16462
This file was moved, now we change the includes to the new location and remove the proxy header.
Reviewed By: ezyang
Differential Revision:
D13847279
fbshipit-source-id:
4617d52fdcfe785cb7b2154460a6686c437abd8b
Sebastian Messmer [Wed, 30 Jan 2019 02:02:21 +0000 (18:02 -0800)]
Add test case for calling c10 ops from pytorch
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16062
Reviewed By: ezyang
Differential Revision:
D13628955
fbshipit-source-id:
f6ed3f07db2675bd9ae9251da990ca7b8c963717
Sebastian Messmer [Wed, 30 Jan 2019 02:02:21 +0000 (18:02 -0800)]
Kernel gets Stack* instead of ArrayRef<IValue> (#16282)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16282
This changes the core kernel abstraction to be a function taking a stack, popping its arguments from the stack and pushing results to the stack,
instead of getting arguments as ArrayRef<IValue> and returning an output IValue.
Caffe2 operators need to have a way to pass in preallocated output tensors.
The convention for them is to get all inputs *and* outputs on the stack and also return all of them, i.e. a caffe2 op will always have inputs == outputs.
This will probably change in later diffs towards making the outputs in-arguments optional in the JIT schema.
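The stack convention can be sketched in a few lines of Python — a kernel pops its arguments off the shared stack and pushes its results back, rather than receiving an ArrayRef and returning a value (a sketch of the calling convention, not the dispatcher code):

```python
def add_kernel(stack):
    # Kernel ABI: pop arguments off the stack, push results back.
    b = stack.pop()
    a = stack.pop()
    stack.append(a + b)

def call_op(kernel, *args):
    # The caller pushes inputs; after the kernel runs, whatever
    # remains on the stack is the op's outputs.
    stack = list(args)
    kernel(stack)
    return stack
```

Under this convention a Caffe2-style op can simply leave its (preallocated) output tensors on the stack, giving the inputs == outputs behavior described above.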
Reviewed By: ezyang
Differential Revision:
D13792335
fbshipit-source-id:
e9cc2b5e438cc4653e1f701633a154b92b604932
xuzhu [Wed, 30 Jan 2019 01:32:09 +0000 (17:32 -0800)]
Chunk dataset implementation (#15932)
Summary:
This PR contains the implementation of chunk dataset, with the API proposed in PR https://github.com/pytorch/pytorch/pull/15562
A chunk dataset is derived from StatefulDataset. It utilizes worker threads to prefetch chunk data, split it into batches, and cache them in a queue. When get_batch is called from the dataloader, batch data is retrieved from the queue, and data from new chunks is pushed for later batches.
Chunk dataset uses two samplers (chunk_sampler and example_sampler) to perform sampling. The chunk_sampler decides which chunk to load, and example_sampler shuffles the examples inside a specific chunk. More detail of this sampling approach can be found here: http://martin.zinkevich.org/publications/nips2010.pdf
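The two-level sampling scheme can be sketched as follows — a chunk sampler picks the load order of chunks, and an example sampler shuffles within each loaded chunk (hypothetical names; the real implementation is threaded C++ with a prefetch queue):

```python
import random

def chunk_batches(chunks, batch_size, seed=0):
    rng = random.Random(seed)
    order = list(range(len(chunks)))
    rng.shuffle(order)                  # chunk_sampler: which chunk to load next
    for ci in order:
        examples = list(chunks[ci])
        rng.shuffle(examples)           # example_sampler: shuffle within the chunk
        for i in range(0, len(examples), batch_size):
            yield examples[i:i + batch_size]
```

Shuffling happens at both levels, but only one chunk's worth of examples ever needs to be resident at a time, which is the point of the approach in the linked paper.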
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15932
Differential Revision:
D13868688
Pulled By: soumith
fbshipit-source-id:
a43000c478ca2a3c64cc84b3626d6b8b1ad9a07e
Zachary DeVito [Wed, 30 Jan 2019 00:36:08 +0000 (16:36 -0800)]
try to get rid of tmp_install (#16414)
Summary:
Rehash of previous attempts. This tries a different approach where we accept the install as specified in cmake (leaving bin/ include/ and lib/ alone), and then try to adjust the rest of the files to this more standard layout.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16414
Differential Revision:
D13863635
Pulled By: zdevito
fbshipit-source-id:
23725f5c64d7509bf3ca8f472dcdcad074de9828
Gregory Chanan [Tue, 29 Jan 2019 23:32:56 +0000 (15:32 -0800)]
Fix torch.sparse.sum parsing of dim. (#16517)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/16501.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16517
Differential Revision:
D13865322
Pulled By: gchanan
fbshipit-source-id:
fa0ac37a9e7b8f19a5bdf75e5771128e48c41612
Pieter Noordhuis [Tue, 29 Jan 2019 23:31:09 +0000 (15:31 -0800)]
Make Store::setTimeout take milliseconds (#16278)
Summary:
This is particularly useful when using a c10d::Store from tests.
cc jgehring
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16278
Reviewed By: janewangfb
Differential Revision:
D13866271
Pulled By: pietern
fbshipit-source-id:
c8670b5f4ebd5cd009f2cabbe46cc17a9237d775
Edward Yang [Tue, 29 Jan 2019 22:16:44 +0000 (14:16 -0800)]
Back out "Delete duplicate copy of THCCachingAllocator." (#16510)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16510
This diff was supposed to be memory usage neutral, but based on
some internal flows involving cuDNN, it was not. Reverting pending
further investigation.
Original commit changeset:
03f1ebf7f11c
Reviewed By: xw285cornell
Differential Revision:
D13863610
fbshipit-source-id:
15517e255fd6b0c064b65fb99f0ef19742236cfd
Matthew Brandyberry [Tue, 29 Jan 2019 21:25:35 +0000 (13:25 -0800)]
Fix compare_exchange_weak usage in weak_intrusive_ptr (#16302)
Summary:
In the case of spurious failure, refcount is not incremented -- which leads to underflow once all references are released.
This was discovered when exercising multiprocessing on ppc64le.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16302
Differential Revision:
D13845435
Pulled By: ezyang
fbshipit-source-id:
8e264fff9dca8152cb12617e3216d5e48acd9557
Lu Fang [Tue, 29 Jan 2019 21:17:59 +0000 (13:17 -0800)]
update of fbcode/onnx to 15c33c945851907411619f599900c3852108e7e3 (#16493)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16493
Previous import was
dc75285d4a1cff9618400164dfdb26c5a1bab70a
Included changes:
- **[15c33c9](https://github.com/onnx/onnx/commit/15c33c9)**: Add ppc64le build (#1768) <Chin Huang>
- **[198f840](https://github.com/onnx/onnx/commit/198f840)**: Update Broadcasting.md (#1769) <Verma-Rajat>
- **[60ac95f](https://github.com/onnx/onnx/commit/60ac95f)**: Merge back from release 1.4.1 (#1767) <Raymond Yang>
- **[a683372](https://github.com/onnx/onnx/commit/a683372)**: Bump up version number for v1.4.0 (#1761) (#1763) <Raymond Yang>
- **[dbf3581](https://github.com/onnx/onnx/commit/dbf3581)**: Add TfIdfVectorizer operator to ONNX (#1721) <Dmitri Smirnov>
Reviewed By: zrphercule
Differential Revision:
D13858840
fbshipit-source-id:
1d00f63f265cc6deed965b92ed00c44f547ff03e
Edward Yang [Tue, 29 Jan 2019 21:13:30 +0000 (13:13 -0800)]
Make the pytorch's cmake minimum required version equal to caffe2's. (#16506)
Summary:
Stack:
:black_circle: **#16506 Make the pytorch's cmake minimum required version equal to caffe2's.** [:yellow_heart:](https://our.intern.facebook.com/intern/diff/D13861564/)
Originally authored by JerryShih <bignose1007@gmail.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16506
Differential Revision:
D13863979
Pulled By: ezyang
fbshipit-source-id:
9275739a820ae03ec6eaa41959f9340c9bba8de3
peter [Tue, 29 Jan 2019 20:37:37 +0000 (12:37 -0800)]
More windows fixes towards the code refactor (#16451)
Summary:
Fixes #16446.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16451
Differential Revision:
D13864388
Pulled By: soumith
fbshipit-source-id:
6cb173eafbd3da33c479c56c85aff75e8be4bf35
SsnL [Tue, 29 Jan 2019 20:23:06 +0000 (12:23 -0800)]
Add stack & cat support for CPU Half (#16389)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/6968
Needed for #14705
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16389
Differential Revision:
D13861446
Pulled By: gchanan
fbshipit-source-id:
7b8700b95aaf252d9669693dbddccb2302e58409
peter [Tue, 29 Jan 2019 20:04:56 +0000 (12:04 -0800)]
Add some smoke tests for Windows
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16496
Differential Revision:
D13863489
Pulled By: soumith
fbshipit-source-id:
518003c27a6b788b5a78b58cdb8698f0bb6ce4d8
Thomas Viehmann [Tue, 29 Jan 2019 19:19:51 +0000 (11:19 -0800)]
create type hint stub files for module torch (#12500)
Summary:
We have:
- This is an initial stab at creating a type stub `torch/__init__.pyi`.
- This is only tested on Python 3, since that's the only Python version mypy works on.
- So far, we only aim at doing this for torch functions and torch.Tensor.
- Quite a few methods and functions have to be typed manually. These are done in `torch/__init__.pyi.in`.
For me, PyCharm (the free edition) didn't flag errors when opening the .pyi and was able to resolve type hints for the few functions I tried, but I don't use PyCharm for my usual PyTorch activities, so I haven't tested this extensively.
An example of a generated PYI is at [this gist](https://gist.github.com/ezyang/bf9b6a5fa8827c52152858169bcb61b1).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12500
Differential Revision:
D13695553
Pulled By: ezyang
fbshipit-source-id:
4566c71913ede4e4c23ebc4a72c17151f94e8e21
Edward Yang [Tue, 29 Jan 2019 15:11:47 +0000 (07:11 -0800)]
Revert
D13596031: Improve c2-aten tensor interop and add proper testing
Differential Revision:
D13596031
Original commit changeset:
d20b601e06ba
fbshipit-source-id:
dc371697f14b3893a9164380a39e7a49d8d68ecf
Soumith Chintala [Tue, 29 Jan 2019 09:26:22 +0000 (01:26 -0800)]
url download bugfix for URLs served without Content-Length header (#16153)
Summary:
Some HTTP servers don't return a Content-Length header; account for that.
Fixes: https://github.com/pytorch/pytorch/issues/16152
Differential Revision:
D13858882
Pulled By: soumith
fbshipit-source-id:
e4293e9368ed4c87548d22adec1ce0c25ea4bd8f
Mikhail Zolotukhin [Tue, 29 Jan 2019 08:17:30 +0000 (00:17 -0800)]
Properly screen string literals when dumping JIT IR
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16056
Differential Revision:
D13719444
Pulled By: ZolotukhinM
fbshipit-source-id:
7113ee9328eff6263513476cdf9254a2e1116f4c
Mikhail Zolotukhin [Tue, 29 Jan 2019 08:15:17 +0000 (00:15 -0800)]
Remove dependency on ResourceGuard from IR.h. (#16351)
Summary:
It looks like `WithInsertionPoint` and `WithCurrentScope` can be easily implemented without
`ResourceGuard` - that helps readability and removes one more dependency. Is there anything I'm missing?
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16351
Differential Revision:
D13821826
Pulled By: ZolotukhinM
fbshipit-source-id:
b203200b345fb5508a97dc8656e6f51cde4cc21f
Mikhail Zolotukhin [Tue, 29 Jan 2019 07:44:31 +0000 (23:44 -0800)]
Remove redundant includes from scope.h and attributes.h
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16472
Differential Revision:
D13852553
Pulled By: ZolotukhinM
fbshipit-source-id:
d5634982c2c42e704d9902774a77660e05fd71eb
Dmytro Dzhulgakov [Tue, 29 Jan 2019 07:39:17 +0000 (23:39 -0800)]
Improve c2-aten tensor interop and add proper testing (#15860)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15860
A few changes (which are harder to split into separate diffs, so they're together):
- make conversion explicit (as they can throw to avoid surprises)
- fix tensor legacy dispatch not initialized when tensor is created on C2 side
- add a bunch of invariants to enforce
Reviewed By: ezyang
Differential Revision:
D13596031
fbshipit-source-id:
d20b601e06ba47aeff2f6e8e15769840e2d46108
Your Name [Tue, 29 Jan 2019 06:56:55 +0000 (22:56 -0800)]
Remove redundant "build" setup.py command from onnx scripts
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16487
Differential Revision:
D13858628
Pulled By: bddppq
fbshipit-source-id:
e1ff3fc5f9be5d3dbbf96ee73c3a8c901b440b82
James Reed [Tue, 29 Jan 2019 05:44:33 +0000 (21:44 -0800)]
Fix identifier shadowing in tracer (#16480)
Summary:
This was causing build failures under `-Werror` targets under optimized build modes
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16480
Differential Revision:
D13857621
Pulled By: jamesr66a
fbshipit-source-id:
2990b987dbca943298ad478c9ee2792236f5fa5b
Owen Anderson [Tue, 29 Jan 2019 04:51:52 +0000 (20:51 -0800)]
Pass WERROR to CMake as an explicit parameter rather than an env var.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16465
Differential Revision:
D13853949
Pulled By: resistor
fbshipit-source-id:
71ccf90a2824ad21c9f26dd753b186f30435d82a
Edward Yang [Tue, 29 Jan 2019 04:43:59 +0000 (20:43 -0800)]
Remove redundant build from build develop instructions
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16467
Differential Revision:
D13849661
Pulled By: ezyang
fbshipit-source-id:
d3d58bd31ac65ad9cbf0057b9a4c499c0f59d95a
Jerry Zhang [Tue, 29 Jan 2019 02:24:42 +0000 (18:24 -0800)]
Change SetOutputSize in ConvTransposeUnpoolBaseOp (#16179)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16179
to avoid passing a partially initialized Tensor around.
Reviewed By: ezyang
Differential Revision:
D13744009
fbshipit-source-id:
4c545765e1cd164b3e87ce08ec4c1cb1e37e2b8f
Sebastian Messmer [Tue, 29 Jan 2019 01:38:38 +0000 (17:38 -0800)]
Move stack.h to ATen/core (#16247)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16247
Stack is going to be used by the c10 dispatcher.
This just moves the file; changing the namespace turned out to be more complicated than I thought, so I'll leave the namespace for now.
Reviewed By: ezyang
Differential Revision:
D13774189
fbshipit-source-id:
66aeee36425e0ea2b3a4f8159604f38572306d57
Sebastian Messmer [Tue, 29 Jan 2019 01:38:38 +0000 (17:38 -0800)]
Remove state from schema and calling API (#16180)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16180
Only the kernel knows about its state, the caller doesn't see it anymore.
Reviewed By: ezyang
Differential Revision:
D13744071
fbshipit-source-id:
cb00ff1a881508c1b36ac4123bee1f68ca02ca9c
Mikhail Zolotukhin [Tue, 29 Jan 2019 00:56:44 +0000 (16:56 -0800)]
Remove generic_if.h. (#16354)
Summary:
The current uses of `IR_IF` are mostly trivial, so there is not much value in having special macros for it.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16354
Differential Revision:
D13821823
Pulled By: ZolotukhinM
fbshipit-source-id:
1ca73111f5b4868fa38a1f29c9230540773e5de6
Jesse Hellemn [Tue, 29 Jan 2019 00:49:25 +0000 (16:49 -0800)]
Remove CUDA_VERSION to flag and remove JOB_BASE_NAME from binary jobs
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16470
Differential Revision:
D13853387
Pulled By: pjh5
fbshipit-source-id:
a2baccde65ab82b69380ee57b16e43cc80ed3e04
Gregory Chanan [Mon, 28 Jan 2019 23:54:04 +0000 (15:54 -0800)]
Fix cmake byte version issue in build_pytorch_libs.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16457
Differential Revision:
D13846408
Pulled By: gchanan
fbshipit-source-id:
26962bc12d7d9fdad71f9dd7526f6d32e6008295
Jerry Zhang [Mon, 28 Jan 2019 23:51:25 +0000 (15:51 -0800)]
Change ConvPoolOp<Context>::SetOutputSize to ConvPoolOp<Context>::GetOutputSize (#16273)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16273
Previously we had SetOutputSize, which accepted a partially initialized output Tensor and set it to the correct size; this diff changes it to GetOutputSize, which returns the correct size instead.
e.g.
```
auto* Y = Output(0);
ConvPoolOp<Context>::SetOutputSize(X, Y, channels);
...
Y->mutable_data<T>...
```
-->
```
auto sizes = ConvPoolOp<Context>::GetOutputSize(X, channels);
auto* Y = Output(0, sizes, at::dtype<T>());
```
Reviewed By: dzhulgakov
Differential Revision:
D13736281
fbshipit-source-id:
64abce3dbaed0b375098463333dfd0ea5a3b1945
James Reed [Mon, 28 Jan 2019 23:22:08 +0000 (15:22 -0800)]
Move tracer impls into cpp file (#16410)
Summary:
Working on the tracer was really annoying because a lot of the implementations were in `tracer.h`, and editing that file caused us to rebuild almost the whole world. So this moves all the implementations into `tracer.cpp`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16410
Differential Revision:
D13847776
Pulled By: jamesr66a
fbshipit-source-id:
ec8500da32b2d4cd990f293a0a96101d3e82f158
Michael Suo [Mon, 28 Jan 2019 23:04:53 +0000 (15:04 -0800)]
fix alias annotations on to, cpu, cuda (#16460)
Summary:
Fix alias annotations for ops that may return a fresh tensor. The previous version was overly conservative.
Currently there is no actual behavior change in the alias analysis, but we may use the information in the future.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16460
Differential Revision:
D13849086
Pulled By: suo
fbshipit-source-id:
cd23b314a800e5e077d866e74456d37a321439d5
Your Name [Mon, 28 Jan 2019 22:01:30 +0000 (14:01 -0800)]
Remove usage of deprecated "min_satisfying_examples" hypothesis setting (#16401)
Summary:
This setting was deprecated in [hypothesis 3.56.0](https://github.com/HypothesisWorks/hypothesis/blob/d1b0df5b91051de7d3f9cea6550ce31e9f0ee2c8/hypothesis-python/docs/changes.rst#3560---2018-04-17) and recently removed in [hypothesis 4.x](https://github.com/HypothesisWorks/hypothesis/blob/d1b0df5b91051de7d3f9cea6550ce31e9f0ee2c8/hypothesis-python/docs/changes.rst#400---2019-01-14).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16401
Reviewed By: ezyang
Differential Revision:
D13832528
Pulled By: bddppq
fbshipit-source-id:
04b9f1dfdf2dcfe0ef121dd02f7fbfdf6bf4aead
Christian Puhrsch [Mon, 28 Jan 2019 21:54:14 +0000 (13:54 -0800)]
Support Tensor alias annotations for native_functions.yaml (#16239)
Summary:
Adds Tensor alias annotations.
This isn't a full implementation of alias annotations, but that isn't required to increase compliance with the JIT signature schema. There are some sanity checks within native_parse.py for their usage, which can also help overall correctness. Otherwise, this exists solely for further alignment between the JIT signature schema and the native_functions.yaml func schema.
This gets us to ~85% matches.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16239
Differential Revision:
D13804133
Pulled By: cpuhrsch
fbshipit-source-id:
aa5750f2c7e0f08b8c35d6d8f38cb148e9629855
Johannes M Dieterich [Mon, 28 Jan 2019 21:35:48 +0000 (13:35 -0800)]
Annotate the bicubic interpolation kernels (#16449)
Summary:
with the correct `__launch_bounds__` for ROCm.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16449
Differential Revision:
D13844111
Pulled By: bddppq
fbshipit-source-id:
07ed8552a630f3a6426d9e5648c415f066991e3d
SsnL [Mon, 28 Jan 2019 21:35:36 +0000 (13:35 -0800)]
Clear cmake cache when --cmake (#16426)
Summary:
Also, sometimes `CMakeCache.txt` exists but cmake had errored out, so I'm adding the existence of `build.ninja` as another criterion for rerunning cmake.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16426
Differential Revision:
D13843801
Pulled By: ezyang
fbshipit-source-id:
ea1efb201062f23b7608f8d061997d8a8e293445
Jerry Zhang [Mon, 28 Jan 2019 20:18:19 +0000 (12:18 -0800)]
Remove dims() in caffe2::Tensor (#16356)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16356
att
Reviewed By: dzhulgakov
Differential Revision:
D13813197
fbshipit-source-id:
68c0fb43404536f622422c51949c819d8a037aa5
Sebastian Messmer [Mon, 28 Jan 2019 19:36:31 +0000 (11:36 -0800)]
Op-calling API can handle state (#16177)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16177
Change the API for calling operators so that it can store state in an OpKernel object.
This diff doesn't store the state there yet, that comes in a follow up diff.
Reviewed By: ezyang
Differential Revision:
D13742889
fbshipit-source-id:
20511a9a1b9f850074e50634d4b4acf87f8c6ecd
Sebastian Messmer [Mon, 28 Jan 2019 19:36:30 +0000 (11:36 -0800)]
Handle stack correctly (#16246)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16246
The op schema says it returns multiple values, so let's actually return multiple values instead of one tuple.
For some reason, this did work when called from python (probably some auto-unpacking),
but once called from JIT, it segfaulted. This diff fixes that.
Reviewed By: dzhulgakov
Differential Revision:
D13780147
fbshipit-source-id:
fe94f82f4c53b7454f77c4484fca4ac9dc444475
Helmut [Mon, 28 Jan 2019 19:25:33 +0000 (11:25 -0800)]
Fix compiler error in swapBytes64 for rare architectures (#16418)
Summary:
swapBytes64 used to use SwapByteOrder_32 and value, both of which don't exist. This commit rewrites that part from scratch.
This happened in debug builds with the Microsoft compiler. For that case, `&& !defined(_DEBUG)` is also removed, because _byteswap_uint64 works fine in debug mode (if the guard is necessary, it should be commented why).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16418
Differential Revision:
D13843306
Pulled By: ezyang
fbshipit-source-id:
dde1c7baeccec3aaa750d4b7200b3f4ccb4a00cb
Junjie Bai [Mon, 28 Jan 2019 19:10:18 +0000 (11:10 -0800)]
Fix lint errors introduced in pytorch/pytorch@ceece5d (#16454)
Summary:
ifedan
```
./test/common_utils.py:748:1: E302 expected 2 blank lines, found 1
./test/test_torch.py:1235:5: E303 too many blank lines (2)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16454
Differential Revision:
D13844905
Pulled By: bddppq
fbshipit-source-id:
3dc7c740d86310a8efc9864d7c7798fda8257a21
Syed Tousif Ahmed [Mon, 28 Jan 2019 18:20:47 +0000 (10:20 -0800)]
Report the slowest 10 tests when using pytest (#16423)
Summary:
When running the test suite with pytest, this flag is useful for identifying tests that take way too long, like the ones in the following snippet: https://github.com/pytorch/pytorch/blob/9757ad35b0b56cf955f294e751de9b437f9bb4ff/test/common_utils.py#L814-L835
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16423
Differential Revision:
D13843507
Pulled By: ezyang
fbshipit-source-id:
643e1766a85905b3b112ea5ca562135a17896a72
Xiaomeng Yang [Mon, 28 Jan 2019 17:26:41 +0000 (09:26 -0800)]
Optimize SpatialBNOp on GPU (#16395)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16395
Optimize SpatialBNOp on GPU
i-am-not-moving-c2-to-c10
Reviewed By: houseroad
Differential Revision:
D13829833
fbshipit-source-id:
04d2a63e8e9830c4c39a91cf87fcd7aa765dc55f