Richard Zou [Fri, 21 Dec 2018 01:34:41 +0000 (17:34 -0800)]
Add option to automatically handle unsorted variable-length sequences in RNNs (#15225)
Summary:
Fixes #3584.
Motivation: manually sorting sequences, packing them, and then unsorting them
is something a lot of users have complained about doing, especially when we can
offer library support for them.
Overview: we internally sort sequences before packing them and store
`sorted_indices` and `unsorted_indices` inside PackedSequence to record how
the sequences were reordered. The packing helper functions return a
PackedSequence carrying these fields and the unpacking helper functions use
them to unsort.
To implement this, the following changes were made:
- PackedSequence now keeps `sorted_indices` and `unsorted_indices`.
These two can be thought of as permutations and are inverses of each other.
`sorted_indices` is how the sequences were sorted; `unsorted_indices` is how
to unsort the sequences.
- Added an `enforce_sorted` argument to pack_sequence and pack_padded_sequence
that maintains the legacy behavior of erroring out on unsorted sequences.
When `enforce_sorted=True`, these functions maintain their ONNX exportability.
- pack_sequence(sequences, enforce_sorted) takes in unsorted sequences.
- pack_padded_sequence can take in a padded tensor that represents padded,
unsorted sequences.
- pad_packed_sequence unsorts the PackedSequence such that it is still the
inverse operation of packed_padded_sequence.
- RNNs apply `sorted_indices` to their input hidden state and apply
`unsorted_indices` to their output hidden state. This is to ensure that the
hidden state batches correspond to the user's ordering of input sequences.
NOT BC-Breaking
- The default for pack_sequence and pack_padded_sequence is
`enforce_sorted=True` to avoid breaking ONNX export. To use the new
functionality, pass in `enforce_sorted=False` (see the sketch below).
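A minimal sketch of the new flow (sizes and values here are illustrative, not from the PR):
```python
import torch
from torch.nn.utils.rnn import pack_sequence, pad_packed_sequence

# Variable-length sequences in an arbitrary (unsorted) order.
seqs = [torch.randn(2, 4), torch.randn(5, 4), torch.randn(3, 4)]
packed = pack_sequence(seqs, enforce_sorted=False)  # sorts internally

rnn = torch.nn.LSTM(input_size=4, hidden_size=6)
out, _ = rnn(packed)

# Unpacking restores the caller's original batch order.
padded, lengths = pad_packed_sequence(out)
print(lengths)  # tensor([2, 5, 3]) -- the original, unsorted order
```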
Testing Plan
- Modified TestNN.test_pack_sequence, TestNN.test_packed_padded_sequence,
and TestNN.test_variable_sequence (RNN test) to check the behavior
of unsorted sequences, sorted sequences, and sorted sequences with
enforce_sorted=True
- test/test_jit.py has a test to see if RNNs are exportable with
enforce_sorted=True
cc colesbury
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15225
Reviewed By: soumith
Differential Revision: D13507138
Pulled By: zou3519
fbshipit-source-id: b871dccd6abefffca81bc4e3efef1873faa242ef
WeihuangXu [Fri, 21 Dec 2018 01:04:14 +0000 (17:04 -0800)]
Change default value of unique to 'sorted=True'
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15379
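For illustration, a minimal sketch of the new default (assuming the standard `torch.unique` binding; values are made up):
```python
import torch

x = torch.tensor([3, 1, 2, 1, 3])
print(torch.unique(x))                # tensor([1, 2, 3]) -- sorted by default now
print(torch.unique(x, sorted=False))  # element order is implementation-defined
```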
Differential Revision: D13531287
Pulled By: ezyang
fbshipit-source-id: 1512da7d660dc413688d99264e6434897c3ac78c
Jongsoo Park [Fri, 21 Dec 2018 01:01:53 +0000 (17:01 -0800)]
add denormal options (ftz and daz)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15423
Reviewed By: yinghai
Differential Revision: D13526340
fbshipit-source-id: de2ecc717b4f778f33a8bf940ed144dbb230c7a8
surgan12 [Fri, 21 Dec 2018 00:53:49 +0000 (16:53 -0800)]
collect_env fix (#15447)
Summary:
fixes #15214
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15447
Differential Revision: D13531523
Pulled By: ezyang
fbshipit-source-id: 8f24f5ae9f3e78f6c5c9ee702ba14faca7aa297a
Lu Fang [Fri, 21 Dec 2018 00:14:16 +0000 (16:14 -0800)]
Remove unused field in jit script module deserializer (#15439)
Summary:
A little bit of cleanup.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15439
Reviewed By: zrphercule
Differential Revision: D13532015
Pulled By: houseroad
fbshipit-source-id: 2fb1e01fc28549c7e78af6c65ee68339950bc7da
Edward Yang [Thu, 20 Dec 2018 23:44:09 +0000 (15:44 -0800)]
Revert D13494873: [pytorch][PR] Fixing ONNX export of logical ops to have correct output datatype
Differential Revision: D13494873
Original commit changeset: 069d2f956a5a
fbshipit-source-id: 80ef10b2eb623a63da51dc2e4874f2ee446f426d
Viswanath Sivakumar [Thu, 20 Dec 2018 23:33:44 +0000 (15:33 -0800)]
Fix ASAN div by zero error in rotated GenerateProposals op (#15415)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15415
Was introduced in D13429770
Reviewed By: SuperIRabbit
Differential Revision: D13524114
fbshipit-source-id: a890eb3b97c24952c361155d1432a801499f4ddd
Jerry Zhang [Thu, 20 Dec 2018 23:28:12 +0000 (15:28 -0800)]
Tensor construction codemod(ResizeLike) - 7/7 (#15087)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15087
Codemod generated with clangr shard mode, 25 files per diff,
motivation: https://github.com/pytorch/pytorch/pull/12407
Reviewed By: ezyang
Differential Revision: D13419765
fbshipit-source-id: 34d695309a66723281429610a12544598c507d74
rory [Thu, 20 Dec 2018 23:18:39 +0000 (15:18 -0800)]
allow numpy-like boolean-list indexing in pytorch (#14932)
Summary:
Suggested fix for issue #6773; the fix allows numpy-like boolean-list indexing in PyTorch.
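For illustration, a minimal example of the new indexing behavior (values here are made up):
```python
import torch

x = torch.arange(4)                  # tensor([0, 1, 2, 3])
mask = [True, False, True, False]    # a plain Python list of booleans
print(x[mask])                       # tensor([0, 2]), as in numpy
```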
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14932
Differential Revision: D13398795
Pulled By: ezyang
fbshipit-source-id: 67f8daf9829db2550ff76d2bde673be6dd2708cd
Teng Li [Thu, 20 Dec 2018 22:46:01 +0000 (14:46 -0800)]
Doc improvement on DDP (#15440)
Summary:
I noticed that some users don't even know we have this support. Adding it to the doc.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15440
Differential Revision: D13531045
Pulled By: teng-li
fbshipit-source-id: 9757c400c0010608758c754df04e603b36035a10
Edward Yang [Thu, 20 Dec 2018 22:26:23 +0000 (14:26 -0800)]
Fix type annotation error. (#15448)
Summary:
According to mypy, the trailing -> None is mandatory.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15448
Differential Revision: D13532179
Pulled By: ezyang
fbshipit-source-id: e8972f8c9ada4657c518cd7bcd46e489ab8ddf5f
Johannes M Dieterich [Thu, 20 Dec 2018 22:26:14 +0000 (14:26 -0800)]
Add launch bounds needed for ROCm 2.0 (#15400)
Summary:
ROCm 2.0's compiler requires launch_bounds annotations if flat work group sizes are larger than the default of 256.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15400
Differential Revision: D13531239
Pulled By: ezyang
fbshipit-source-id: c0b40600a8c332823da6c7113c644d8dba424a9c
Zachary DeVito [Thu, 20 Dec 2018 22:26:06 +0000 (14:26 -0800)]
Support enough of closures to write autograd functions (#15411)
Summary:
This PR adds enough of the infra for supporting closures (inner script functions) to allow us to express symbolic gradients using them. We do not actually ever run graphs that contain these closures. The symbolic_script infrastructure just extracts them out of the original forward graph and turns them into discrete forward/backward pairs. This cuts down on the type annotations necessary to write forward/backward pairs and aligns closely with the "differentiator" function approach to expressing reverse-mode AD.
Example:
This code:
```
import torch
r = torch.jit.CompilationUnit(
'''
def mul_forward(self, other):
def backward(grad_output):
grad_self = (grad_output * other).sum_to_size(self.size())
grad_other = (grad_output * self).sum_to_size(other.size())
return grad_self, grad_other
return self * other, backward
''')
print(r.module.code)
```
Will produce this graph (pretty printed for clarity):
```
def mul_forward(self,
self: Tensor,
other: Tensor) -> Tuple[Tensor, Tuple[None, Tuple[Tensor, Tensor]]]:
backward = (self.__lambda, (other, self))
return (torch.mul(self, other), backward)
def __lambda(self,
context: Tuple[Tensor, Tensor],
grad_output: Tensor) -> Tuple[Tensor, Tensor]:
other, self, = context
grad_self = torch.sum_to_size(torch.mul(grad_output, other), torch.size(self))
grad_other = torch.sum_to_size(torch.mul(grad_output, self), torch.size(other))
return (grad_self, grad_other)
```
symbolic_script will then do some modifications to remove the unsupported prim::Function node, yielding:
```
def mul_forward(self,
self: Tensor,
other: Tensor) -> Tuple[Tensor, Tuple[None, Tuple[Tensor, Tensor]]]:
return (torch.mul(self, other), (other, self))
def backward(self,
context: Tuple[Tensor, Tensor],
grad_output: Tensor) -> Tuple[Tensor, Tensor]:
other, self, = context
grad_self = torch.sum_to_size(torch.mul(grad_output, other), torch.size(self))
grad_other = torch.sum_to_size(torch.mul(grad_output, self), torch.size(other))
return (grad_self, grad_other)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15411
Differential Revision: D13523340
Pulled By: zdevito
fbshipit-source-id: 4d4a269460e595b16802c00ec55ae00e3e682d49
hbraun@nvidia.com [Thu, 20 Dec 2018 22:24:27 +0000 (14:24 -0800)]
Adding CUDA version for C2 operators generate proposals and nms (#13694)
Summary:
Related to issue #13684
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13694
Reviewed By: wat3rBro
Differential Revision: D13017791
Pulled By: newstzpz
fbshipit-source-id: 4bdc58e474d8e1f6cd73a02bf51f91542a2b9d0b
Gao, Xiang [Thu, 20 Dec 2018 22:09:09 +0000 (14:09 -0800)]
Add at::one_hot (#15208)
Summary: Closes: https://github.com/pytorch/pytorch/issues/15060
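For illustration, a minimal sketch using the Python binding (assuming it is exposed as `torch.nn.functional.one_hot`):
```python
import torch
import torch.nn.functional as F

idx = torch.tensor([0, 2, 1])
print(F.one_hot(idx, num_classes=3))
# tensor([[1, 0, 0],
#         [0, 0, 1],
#         [0, 1, 0]])
```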
Differential Revision: D13528014
Pulled By: ezyang
fbshipit-source-id: 5a18689a4c5638d92f9390c91517f741e5396293
Fei Sun [Thu, 20 Dec 2018 21:24:01 +0000 (13:24 -0800)]
Extract arguments to its own file and pass arguments to ios apps (#15413)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15413
In order to pass arguments to the iOS app, we need to extract the arguments
into their own file. Also, in the iOS app, we no longer use benchmark.json, which
parses the arguments.
This is an incompatible change; a hot fix needs to be added to the tests.
Reviewed By: llyfacebook
Differential Revision: D13523240
fbshipit-source-id: b559cc7f52d8f50ee206a7ff8d7b59292d855197
Spandan Tiwari [Thu, 20 Dec 2018 20:24:42 +0000 (12:24 -0800)]
Fixing ONNX export of logical ops to have correct output datatype (#15185)
Summary:
Currently PyTorch ONNX exporter exports the logical ops (`lt`, `gt`, `le`, `ge`, `eq`) with output type in corresponding ONNX ops as type `tensor(uint8)`. But ONNX spec allows for only `tensor(bool)`, which is why models that have these ops fail to load properly.
This issue is captured in https://github.com/pytorch/pytorch/issues/11339. Part of this issue, relating to the allowed input types, has been fixed in ONNX spec by houseroad. This PR fixes the other part pertaining to output type.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15185
Differential Revision: D13494873
Pulled By: houseroad
fbshipit-source-id: 069d2f956a5ae9bf0ac2540a32594a31b01adef8
David Riazati [Thu, 20 Dec 2018 20:20:42 +0000 (12:20 -0800)]
Miscellaneous small doc fixes (#15373)
Summary:
This PR makes some small changes for better consistency in our README and
CONTRIBUTING docs
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15373
Differential Revision: D13512753
Pulled By: driazati
fbshipit-source-id: 44398ad1894eef521d5f5acb1d06acaad67728cf
Edward Yang [Thu, 20 Dec 2018 19:14:21 +0000 (11:14 -0800)]
Extend README for ATen/native/cpu (#15437)
Summary:
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15437
Differential Revision: D13529436
Pulled By: ezyang
fbshipit-source-id: 2e2193d54ea7f7626fe7392e4d0c130c2f87a76f
Shen Li [Thu, 20 Dec 2018 18:21:02 +0000 (10:21 -0800)]
Implementing cuda kernel for tril_indices and triu_indices (#15203)
Summary:
Followup PR of #14904, and the stretch goal of #12653.
Directly calculate coordinates in the original tensor using column index in the result tensor. Every GPU thread takes care of a column (two numbers) in the output tensor.
The implementation detects and handles precision loss during calculating the square root of a `int64_t` variable, and supports tensors with up to `row * column = 2 ^ 59` numbers.
Algorithm details are described in [comments of TensorFactories.cu](https://github.com/pytorch/pytorch/blob/23ddb6f58a1c8a7a660a793f174cf014230176c6/aten/src/ATen/native/cuda/TensorFactories.cu#L109-L255).
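For reference, a small usage sketch of the resulting API (output values follow the documented behavior of `torch.tril_indices`):
```python
import torch

idx = torch.tril_indices(3, 3)       # 2 x N tensor of (row, col) coordinates
print(idx)
# tensor([[0, 1, 1, 2, 2, 2],
#         [0, 0, 1, 0, 1, 2]])

x = torch.arange(9.).reshape(3, 3)
print(x[idx[0], idx[1]])             # the lower-triangular entries of x
```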
zou3519
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15203
Reviewed By: zou3519
Differential Revision: D13517695
Pulled By: mrshenli
fbshipit-source-id: 86b305d22cac08c8962a3b0cf8e9e620b7ec33ea
Edward Yang [Thu, 20 Dec 2018 18:00:09 +0000 (10:00 -0800)]
Revert D13498974: [pytorch][PR] [jit] Add self to Python printer reserved words
Differential Revision: D13498974
Original commit changeset: 488efb661476
fbshipit-source-id: 3b991bccf4cf2ffdafe70f145aff0ae2837e31f8
Erik Brinkman [Thu, 20 Dec 2018 17:35:08 +0000 (09:35 -0800)]
Add support for batched pdist (#12302)
Summary:
This updates pdist to work for batched inputs, and updates the
documentation to reflect issues raised.
closes #9406
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12302
Reviewed By: ezyang
Differential Revision: D13528485
Pulled By: erikbrinkman
fbshipit-source-id: 63d93a6e1cc95b483fb58e9ff021758b341cd4de
Brennan Vincent [Thu, 20 Dec 2018 16:53:44 +0000 (08:53 -0800)]
multi-dim standard deviation for CUDA. (#14990)
Summary:
This is the CUDA version of #14535 .
It refactors Reduce.cuh to allow more general classes of reductions to be performed -- we no longer assume that the temporary data returned during reduction is just one scalar, and instead allow an arbitrary accumulate type.
We also allow 64-bit indexing when necessary, since in general we will no longer be able to accumulate directly in the output. (In the cases when we can, we continue to split the tensors until they can be addressed with 32-bits, as before).
As an initial use-case, we implement `std` in multiple dimensions.
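A small sketch of the multi-dim reduction this enables (shapes are illustrative; the new kernel needs a GPU):
```python
import torch

x = torch.randn(4, 5, 6)
if torch.cuda.is_available():
    x = x.cuda()                 # exercise the new CUDA path
print(x.std(dim=[0, 2]).shape)   # torch.Size([5]) -- reduces over dims 0 and 2
```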
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14990
Differential Revision: D13405097
Pulled By: umanwizard
fbshipit-source-id: a56c24dc2fd5326d417632089bd3f5c4f9f0d2cb
David Riazati [Thu, 20 Dec 2018 10:25:20 +0000 (02:25 -0800)]
Add self to Python printer reserved words (#15318)
Summary:
This adds `self` to the list of reserved words, sorts the lines, and prevents the tracer from naming values 'self' (which happens in torch/tensor.py)
Fixes #15240
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15318
Differential Revision: D13498974
Pulled By: driazati
fbshipit-source-id: 488efb661476cdcdb8ecb9cb48942f02e3c1e611
Peter Goldsborough [Thu, 20 Dec 2018 05:38:00 +0000 (21:38 -0800)]
Pretty printing of C++ modules (#15326)
Summary:
A long outstanding nicety: pretty printing of C++ modules. E.g.
```
Sequential sequential(
Linear(10, 3),
Conv2d(1, 2, 3),
Dropout(0.5),
BatchNorm(5),
Embedding(4, 10),
LSTM(4, 5));
std::cout << sequential;
```
prints
```
torch::nn::Sequential(
(0): torch::nn::Linear(in=10, out=3, with_bias=true)
(1): torch::nn::Conv2d(input_channels=1, output_channels=2, kernel_size=[3, 3], stride=[1, 1])
(2): torch::nn::Dropout(rate=0.5)
(3): torch::nn::BatchNorm(features=5, eps=1e-05, momentum=0.1, affine=true, stateful=true)
(4): torch::nn::Embedding(count=4, dimension=10)
(5): torch::nn::LSTM(input_size=4, hidden_size=5, layers=1, dropout=0)
)
```
apaszke ebetica ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15326
Differential Revision: D13518986
Pulled By: goldsborough
fbshipit-source-id: 63bf753672f0e348951de3645208f263581de5fb
Hassan Eslami [Thu, 20 Dec 2018 05:35:08 +0000 (21:35 -0800)]
Restructuring prof dag counters (#13321)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13321
This diff simply refactors the `ProfDAGCounters` into two:
* `ProfDAGCounters` that gathers stats at runtime.
* `ProfDAGReport` which holds the report from the gathered stats once stats collection is done.
This refactoring allows us to implement `+=` for `ProfDAGReport`, which can be used for aggregating same-net reports on each host.
Reviewed By: donglimm
Differential Revision: D12837988
fbshipit-source-id: 0470c5fd6437f12711cab25a15a12965d79b2a91
Wanchao Liang [Thu, 20 Dec 2018 05:35:01 +0000 (21:35 -0800)]
Remove python_default_init from ATen and use Optional (#15234)
Summary:
Optional cleanup. This PR removes python_default_init from the yaml files and the code-gen, and utilizes optional type to do the work.
This also fixes a bug in #13149 to correctly adopt as_strided backward.
Fixes #9941
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15234
Differential Revision: D13502044
Pulled By: wanchaol
fbshipit-source-id: 774b61fc4414482cf11d56e22bd0275aefb352a4
Jerry Zhang [Thu, 20 Dec 2018 05:34:36 +0000 (21:34 -0800)]
Tensor construction codemod(ResizeLike) - 1/7 (#15073)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15073
Codemod generated with clangr shard mode, 25 files per diff,
motivation: https://github.com/pytorch/pytorch/pull/12407
Reviewed By: dzhulgakov
Differential Revision: D13419563
fbshipit-source-id: 8c284405fa3a867303216df876ee6b20d8a46551
bddppq [Thu, 20 Dec 2018 05:29:41 +0000 (21:29 -0800)]
Do not use fork to invoke test scripts in pytorch rocm CI
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14600
Differential Revision: D13523937
Pulled By: bddppq
fbshipit-source-id: 1493fdd051283650081d7944bb2bd7f0c4c44990
Edward Yang [Thu, 20 Dec 2018 04:31:09 +0000 (20:31 -0800)]
Replace Vec256<T>::size with constexpr method (#15406)
Summary:
Stack:
:black_circle: **#15406 Replace Vec256<T>::size with constexpr method** [:yellow_heart:](https://our.intern.facebook.com/intern/diff/D13519902/)
See Note [constexpr static function to avoid odr-usage compiler bug]
for detailed justification.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15406
Differential Revision: D13523774
Pulled By: ezyang
fbshipit-source-id: c0ab44298bb2ef3d68a66d026fc6bc156a909a6b
Marat Dukhan [Thu, 20 Dec 2018 04:20:47 +0000 (20:20 -0800)]
Make cpuinfo logging less verbose (#15405)
Summary:
Log only errors in cpuinfo.
Fixes #15401 and #15398.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15405
Differential Revision: D13526251
Pulled By: Maratyszcza
fbshipit-source-id: 4d9eba0912f7b45093bed2e343cd77a151ffa8c4
James Sun [Thu, 20 Dec 2018 02:51:41 +0000 (18:51 -0800)]
Support error handling in forked threads (#14523)
Summary:
Save error info in the future for the parent thread to pick up. Throw the error
when the thread is the root thread.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14523
Differential Revision: D13251756
Pulled By: highker
fbshipit-source-id: b40f9a45665e1a934743f131ec5e8bad5622ce67
Jerry Zhang [Thu, 20 Dec 2018 02:10:36 +0000 (18:10 -0800)]
default options for OutputTensorCopyFrom (#15248)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15248
OutputTensorCopyFrom takes four arguments: index, a source Tensor, TensorOptions and whether we want to perform an async call.
We want to provide default options for TensorOptions: (1) default device is context_.device(); (2) default dtype is input.dtype(). Users can also explicitly provide these options to override the default values.
The next diff will change the order of the TensorOptions parameter so that users don't need to write down tensor options unless they want to override them.
Reviewed By: dzhulgakov
Differential Revision: D13453824
fbshipit-source-id: 87401f81c7c3f9fd3d8936c710e6c2e04a59b689
James Sun [Thu, 20 Dec 2018 01:06:54 +0000 (17:06 -0800)]
Fix Module::copy_into
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15393
Differential Revision: D13519477
Pulled By: highker
fbshipit-source-id: d62928597ec0700b550e7cf481c8febae57b200d
Zachary DeVito [Wed, 19 Dec 2018 23:02:13 +0000 (15:02 -0800)]
add unpack_outputs to inlineCallTo
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15382
Differential Revision: D13518844
Pulled By: zdevito
fbshipit-source-id: 981936988080af80629b70bf5f6dfa52ceb09c2f
Benoit Rostykus [Wed, 19 Dec 2018 22:55:37 +0000 (14:55 -0800)]
Fix documentation (#15372)
Summary:
The current documentation example doesn't compile. This fixes the doc so the example works.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15372
Differential Revision: D13522167
Pulled By: goldsborough
fbshipit-source-id: 5171a5f8e165eafabd9d1a28d23020bf2655f38b
Bram Wasti [Wed, 19 Dec 2018 22:31:06 +0000 (14:31 -0800)]
computeChains with nomnigraph (#15366)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15366
swap the old implementation with one that is slightly easier to understand
I ran the tests and compared the number of chains against the old algorithm. The new one outperforms on every test, but we have yet to see if that impacts performance at all.
old chain 34 nomnigraph chain 25
old chain 46 nomnigraph chain 34
old chain 228 nomnigraph chain 188
old chain 397 nomnigraph chain 338
Reviewed By: ilia-cher
Differential Revision: D13057451
fbshipit-source-id: ccd050bfead6eb94ab9c7b0a70b09a22c2b9e499
SsnL [Wed, 19 Dec 2018 20:26:44 +0000 (12:26 -0800)]
Refactor dataloader.py (#15331)
Summary:
Same as #14668, and was approved there.
ailzhang , please apply this patch to Horizon's `data_streamer.py`: https://gist.github.com/SsnL/020fdb3d6b7016d81b6ba1d04cc41459 Thank you!
Below is the original description at #14668:
As I am working on tasks in https://github.com/pytorch/pytorch/issues/13023, I realized how unreadable the code is because all functions to be run in multiprocessing must be at top global level. Adding more functionalities to `dataloader.py` will only make things worse.
So in this PR, I refactor `dataloader.py` and move much of it into `data._utils`. E.g., the `_worker_loop` and related methods are now in `data._utils.worker`, signal handling code in `data._utils.signal_handling`, collating code in `data._utils.collate`, etc. This split, IMHO, makes code much clearer. I will base my future changes to DataLoader on top of this.
No functionality is changed, except that I added `torch._six.queue`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15331
Reviewed By: yf225
Differential Revision: D13503120
Pulled By: ailzhang
fbshipit-source-id: 94df16b4d80ad1102c437cde0d5a2e62cffe1f8e
vishwakftw [Wed, 19 Dec 2018 20:11:49 +0000 (12:11 -0800)]
Rename potrs to cholesky_solve (#15334)
Summary:
Changelog:
- Renames `potrs` to `cholesky_solve` to remain consistent with Tensorflow and Scipy (not really, they call their function chol_solve)
- Default argument for upper in cholesky_solve is False. This allows a seamless interface between `cholesky` and `cholesky_solve`, since the `upper` argument in both functions is the same (see the sketch below).
- Rename all tests
- Create a tentative alias for `cholesky_solve` under the name `potrs`, and add a deprecation warning to discourage its usage.
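A minimal usage sketch of the renamed API (the matrix below is made up to be positive definite):
```python
import torch

a = torch.randn(3, 3)
A = a @ a.t() + 3 * torch.eye(3)    # a positive-definite matrix
b = torch.randn(3, 2)

u = torch.cholesky(A)               # lower-triangular factor by default
x = torch.cholesky_solve(b, u)      # upper=False matches cholesky's default
print(torch.allclose(A @ x, b, atol=1e-4))  # True
```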
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15334
Differential Revision: D13507724
Pulled By: soumith
fbshipit-source-id: b826996541e49d2e2bcd061b72a38c39450c76d0
Elias Ellison [Wed, 19 Dec 2018 18:45:32 +0000 (10:45 -0800)]
centralize side effects ops as node method (#15188)
Summary:
A number of different passes rely on whether a node has side effects. This centralizes the list of side effectful ops in one place.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15188
Differential Revision: D13508438
Pulled By: eellison
fbshipit-source-id: 2143e782b787731ce007b6dcd50cbde30e1b8dd0
Tugrul Ates [Wed, 19 Dec 2018 18:40:48 +0000 (10:40 -0800)]
Optional ScalarType support for native functions & JIT (#15154)
Summary:
For #6593 and #9515
This completes the support for optional<ScalarType> in native, JIT and autograd.
Note: Mostly following the existing implementation for optional<Scalar> that was added in https://github.com/pytorch/pytorch/pull/12582.
This PR introduces a way to make functions accept an optional dtype and it will unblock #9515 by allowing the `dtype` param for type promotion interface:
```
func: name(inputs, *, ScalarType? dtype=None, Casting casting=same_kind)
```
An alternative approach could have been using `ScalarType::Undefined` for the same purpose but without optional, though it would have been a bit hacky.
```
func: name(inputs, *, ScalarType dtype=Undefined, Casting casting=same_kind)
```
Here's an example use of this in action: https://github.com/pytorch/pytorch/pull/15133/commits/971f69eac69101955ed90078b44dab975d37a4f7
There are already a bunch of native functions that were getting optional `dtype` through function overloading. https://github.com/pytorch/pytorch/pull/15133 is the attempt to migrate all of those. I will send those changes separately after this since some functions (e.g. sum) need quite a bit of change in the codebase. See the commits over there.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15154
Differential Revision: D13457760
Pulled By: tugrulates
fbshipit-source-id: 706134f0bd578683edd416b96329b49a1ba8ab48
vfdev-5 [Wed, 19 Dec 2018 18:34:37 +0000 (10:34 -0800)]
Implement 'to' on ScriptModules (#15340)
Summary:
Following #6008
Fixes "Implement 'to' on ScriptModules #7354"
cc zdevito
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15340
Differential Revision: D13506646
Pulled By: zdevito
fbshipit-source-id: 318fea2e8e51a37ce9844efa4c8db67d45a66317
Marat Dukhan [Wed, 19 Dec 2018 15:24:27 +0000 (07:24 -0800)]
Update cpuinfo submodule (#15385)
Summary:
Pull cpuinfo changes that should make it work on AWS Lambda servers (which don't have `/sys/devices/system/cpu/{possible,present}` files, and probably don't mount sysfs at all).
I'm not 100% sure it will fix the issue, but getting this update in would make it easier for users to test using a nightly build.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15385
Reviewed By: soumith
Differential Revision: D13517467
Pulled By: Maratyszcza
fbshipit-source-id: e8e544cd1f9dad304172ebb7b6ba7a8ad7d34e66
svcscm [Wed, 19 Dec 2018 07:33:54 +0000 (23:33 -0800)]
Updating submodules
Reviewed By: cdelahousse
fbshipit-source-id: dfbdae40e505c46cd64751c6ec107c84f9434131
Jianyu Huang [Wed, 19 Dec 2018 07:17:11 +0000 (23:17 -0800)]
race condition fix of using mutable_data inside OPENMP region for batched matmul (#15371)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15371
Similar to D13387692: Never call mutable_data from an OpenMP region!!!
Reviewed By: jspark1105
Differential Revision: D13511259
fbshipit-source-id: 100812d2a547c0a1d5018749d5fdc88162375673
Michael Suo [Wed, 19 Dec 2018 06:31:51 +0000 (22:31 -0800)]
add whitelisted clang-format checks (#15254)
Summary:
This PR adds clang-format automation:
- It only checks on whitelisted files, so we can enable incrementally without noise
- There is a pre-commit hook provided that will do the same check, plus prompt users to apply the clang-format changes (no change is made without the user agreeing).
My plan is to migrate over whole files at a time, clang-formatting them and then adding them to the whitelist. Doing it this way should avoid too many merge pains (the most you'll have to do is run clang-format on the affected file before rebasing).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15254
Differential Revision: D13515888
Pulled By: suo
fbshipit-source-id: d098eabcc97aa228c4dfce8fc096c3b5a45b591f
Zachary DeVito [Wed, 19 Dec 2018 06:08:28 +0000 (22:08 -0800)]
build fix
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15384
Differential Revision: D13515708
Pulled By: zdevito
fbshipit-source-id: ea077cfec30edf41b85dc83c0a969d1146434145
Zachary DeVito [Wed, 19 Dec 2018 03:41:00 +0000 (19:41 -0800)]
Split up compiler.cpp (#15355)
Summary:
This separates the different parts of compiler.cpp to make their relationship more clear. In particular it adds:
* sugared_value.{h,cpp} - all the public SugaredValues that the compiler defines and a few that were inside compiler.cpp
* type_parser.{h, cpp} - Turns TreeRef's defining types into TypePtr
* schema_matching.{h, cpp} - infrastructure for matching arguments against overloaded schema and emitting builtin operators with a particular schema.
Retains:
* compiler.{h, cpp} - now responsible simply for the `defineMethodsInModule` infrastructure.
Some utility functions like inlineCallTo have moved to ir.h.
The only thing that is not a move is some changes in module.h/cpp that remove multiple returns from `Method::emit_call_to`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15355
Reviewed By: suo, wanchaol
Differential Revision: D13507524
Pulled By: zdevito
fbshipit-source-id: 69ec936a9ff1a383c12a883616346b219c72e393
Ailing Zhang [Wed, 19 Dec 2018 02:56:06 +0000 (18:56 -0800)]
Autograd using torchscript (#14604)
Summary:
This PR enables autodiff to use the forward/backward graph compiled from python code, instead of using symbolic gradients(modifying the original graph directly).
We put the map in a separate .h file for now to wait for the native_functions.yaml and derivatives.yaml merge. This should ideally go into native_functions.yaml eventually.
This PR should be enough to unblock us for now; we can start writing gradients for aten functions in python.
Differential Revision: D13494635
Pulled By: ailzhang
fbshipit-source-id: f8d51a15243ac46afd09d930c573ccdfcd9fdaaf
Wanchao Liang [Wed, 19 Dec 2018 02:23:55 +0000 (18:23 -0800)]
Minor clean up for test_jit (#15368)
Summary:
* remove None args in functional tests
* remove some expect files that are not necessary
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15368
Differential Revision: D13512349
Pulled By: wanchaol
fbshipit-source-id: 304cffff966487d15c373057ae8ad114ef8aa7f9
David Riazati [Wed, 19 Dec 2018 01:25:51 +0000 (17:25 -0800)]
Add RNNCell modules to Script standard library (#14695)
Summary:
Adds RNNCell modules to script standard lib
cc apaszke for argument_spec changes
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14695
Differential Revision: D13467680
Pulled By: driazati
fbshipit-source-id: 13a14da87714325cc4c3d49e5fde8a850d5d757b
David Riazati [Wed, 19 Dec 2018 00:44:04 +0000 (16:44 -0800)]
Remove fully qualified weak script names (#15364)
Summary:
Cleanup to make references to `weak_script` consistent across codebase
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15364
Differential Revision: D13509676
Pulled By: driazati
fbshipit-source-id: 93dbbbe57e9b9b6587895f3cc6fac678babd21de
Chandler Zuo [Wed, 19 Dec 2018 00:40:23 +0000 (16:40 -0800)]
Redefine scheduler to set learning rate using recursive formula (#14010)
Summary:
Modified step_lr for StepLR, MultiStepLR, ExponentialLR and CosineAnnealingLR. In this way, multiple schedulers can be used simultaneously to modify the learning rates.
Related issue: https://github.com/pytorch/pytorch/issues/13022
Added unit tests combining multiple schedulers.
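A sketch of how two schedulers can now be chained on one optimizer (hyperparameters below are arbitrary):
```python
import torch

model = torch.nn.Linear(2, 2)
opt = torch.optim.SGD(model.parameters(), lr=1.0)
step = torch.optim.lr_scheduler.StepLR(opt, step_size=2, gamma=0.1)
exp = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.9)

for epoch in range(4):
    opt.step()
    # With the recursive formula, each scheduler multiplies the current
    # learning rate, so the two compose instead of clobbering each other.
    step.step()
    exp.step()
    print(epoch, opt.param_groups[0]['lr'])
```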
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14010
Reviewed By: ezyang
Differential Revision: D13494941
Pulled By: chandlerzuo
fbshipit-source-id: 7561270245639ba1f2c00748f8e4a5f7dec7160c
Ruiyang Liu [Wed, 19 Dec 2018 00:28:14 +0000 (16:28 -0800)]
Replace resize_dim() with set_sizes_and_strides() in (#15348)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15348
We have a function resize_dim() on TensorImpl in c10/core/TensorImpl.h which lets you change the dimensionality of a tensor, resizing both sizes and strides. Unfortunately, this API is fairly easy to misuse, because it fills in the new entries with garbage when you size it larger. We want to refactor the call sites to use set_sizes_and_strides() instead, so that there is never an intermediate tensor state where the sizes/strides don't make sense. In this diff, resize_dim() is replaced with set_sizes_and_strides() in aten/src/TH/THTensor.hpp.
Reviewed By: ezyang
Differential Revision: D13505512
fbshipit-source-id: 193bab89f0018c13ca07488be336d8e967746b76
Richard Zou [Wed, 19 Dec 2018 00:13:39 +0000 (16:13 -0800)]
Minor cleanup for TestFuser tests (#15134)
Summary:
Changelog:
- change some expect tests that didn't have to be expect tests; use self.assertAllFused instead
- Some of the fuser tests weren't using self.assertAllFused.
- Minor test renames
cc apaszke
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15134
Differential Revision: D13507481
Pulled By: zou3519
fbshipit-source-id: dd0788530a60bb5ed2f42b961fae3db2b4404b64
Bill Li [Wed, 19 Dec 2018 00:07:55 +0000 (16:07 -0800)]
add dense vector to id_list operator (#15090)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15090
as title
step 2 of the linked task
Reviewed By: ellie-wen
Differential Revision: D13425977
fbshipit-source-id: f3538ed68f42470ba39c5b779af764d4a5591a9d
Michael Suo [Tue, 18 Dec 2018 23:01:10 +0000 (15:01 -0800)]
fix clang-tidy script for python 3
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15360
Differential Revision: D13509668
Pulled By: suo
fbshipit-source-id: a3448a115eaac8dd4c3f179901a23bdbc5098408
Gregory Chanan [Tue, 18 Dec 2018 22:56:43 +0000 (14:56 -0800)]
Port torch.linspace to ATen and parallelize it on CPU.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15320
Reviewed By: ezyang
Differential Revision: D13498995
Pulled By: gchanan
fbshipit-source-id: fba655d51d978fffaa53a5e4cae4a99ebfb0eddc
David Riazati [Tue, 18 Dec 2018 19:43:45 +0000 (11:43 -0800)]
Add (Un)Fold modules to standard library (#14759)
Summary:
Depends on #14597 for the corresponding aten ops.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14759
Differential Revision: D13325356
Pulled By: driazati
fbshipit-source-id: 99e39449c1ccfa293de05672c31a11e580bdd11f
Lu Fang [Tue, 18 Dec 2018 19:28:04 +0000 (11:28 -0800)]
Fix the (reduce)min and (reduce)max ONNX exporting (#15241)
Summary:
max and reducemax are smashed together; we need to support the one-input case.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15241
Reviewed By: yinghai
Differential Revision: D13473312
Pulled By: houseroad
fbshipit-source-id: 9b8c847286a2631b006ca900271bc0d26574101a
Zachary DeVito [Tue, 18 Dec 2018 18:27:26 +0000 (10:27 -0800)]
Method returns a single argument (#15289)
Summary:
This PR changes Method (just Method not all graphs) to always have a single
return argument.
This is part 1 in a set of changes that will enable us to have better handling of early return statements.
The simplification that this change provides greatly reduces the work for the next step.
This change makes it so that Method and Python handle multiple returns in the same way:
* 0 - None
* 1 - <single value>
* many - Tuple[...]
The result is that a lot of special-case handling in compiler.cpp and its
bindings can be removed. It also fixes several bugs in return handling,
including one where return values were not always checked against their
attributed values.
Notes:
* inferTypeFrom is renamed to be more accurate and discourage use.
* This has uncovered some bugs in other components, which are noted in
the diff.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15289
Differential Revision: D13481649
Pulled By: zdevito
fbshipit-source-id: 0e2242a40bb28cca2d0e8be48bede96195e4858c
Jerry Zhang [Tue, 18 Dec 2018 16:17:56 +0000 (08:17 -0800)]
caffe2 mobile opengl (#15322)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15322
caffe2 mobile opengl code is not used; deleting it reduces complications when we perform other changes
Reviewed By: Maratyszcza
Differential Revision: D13499943
fbshipit-source-id: 6479f6b9f50f08b5ae28f8f0bc4a1c4fc3f3c3c2
Edward Yang [Tue, 18 Dec 2018 15:35:43 +0000 (07:35 -0800)]
Revert D13383102: [pytorch][PR] Upgrade MKL-DNN to version 0.17
Differential Revision: D13383102
Original commit changeset: c434f0e0ddff
fbshipit-source-id: 690f46ca0710954fa591a5ea77535e9759db4de5
svcscm [Tue, 18 Dec 2018 05:23:30 +0000 (21:23 -0800)]
Updating submodules
Reviewed By: cdelahousse
fbshipit-source-id: 4bf66581d07d839f459869bc9c6428011063cc5b
Zachary DeVito [Tue, 18 Dec 2018 05:11:30 +0000 (21:11 -0800)]
improve script/no script save error (#15321)
Summary:
Improves the error message for #15116
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15321
Differential Revision: D13499379
Pulled By: zdevito
fbshipit-source-id: b8dc0a83efabff74199f4aab2ee98aa41c42608b
James Sun [Tue, 18 Dec 2018 04:28:00 +0000 (20:28 -0800)]
Allow tracing with fork/wait (#15184)
Summary:
There is still a limitation on this: if a script module is somewhere in the
trace, the inputs/outputs can only be tensors or tuples of tensors.
resolves #15052
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15184
Differential Revision: D13457691
Pulled By: highker
fbshipit-source-id: 8fe46afc41357a0eb8eadd83f687b31d074deb0e
Jie [Tue, 18 Dec 2018 04:08:15 +0000 (20:08 -0800)]
[TensorIterator fixing mean to output correct result for half precision](#12115) (#14878)
Summary:
mean is calculated in two steps, sum()/numel(). For half precision, data gets
cast back to half after sum().
We fused the division into the reduction kernel by adding pre_op/post_op.
This allows torch.ones(65536).cuda().half().mean() to return the correct
result.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14878
Differential Revision: D13491159
Pulled By: soumith
fbshipit-source-id: e83802e1628b6d2615c45e18d7acf991d143a09e
Edward Yang [Tue, 18 Dec 2018 03:50:10 +0000 (19:50 -0800)]
Reenable OpenMP by reverting the following two commits. (#15315)
Summary:
Revert "Put back linker flag for OpenMP to prevent build break on ppc64le (#14569)"
This reverts commit a84e873bb156080ea76ab182171b1f3b4d5395f6.
Revert "Update OpenMP cmake setting for xcode 9 compiler(AppleClang 9.0) (#14473)"
This reverts commit 8901935ad42fe9bf093d1106ea43606008a4024d.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15315
Differential Revision: D13495852
Pulled By: ezyang
fbshipit-source-id: bcd3f60088b14831c53d3c171f10cd1ab6b35dee
Peter Goldsborough [Tue, 18 Dec 2018 00:08:05 +0000 (16:08 -0800)]
Fix _apply in nn.Module (#15305)
Summary:
Fixes an issue that arose from https://github.com/pytorch/pytorch/pull/13481 where `.shared_memory()` couldn't be called. Effectively undoes all changes to `nn.Module` from that PR and solves the relevant problem in a different way (the goal was to be able to call `._apply()` on the Python wrapper for a C++ module).
soumith
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15305
Differential Revision: D13493937
Pulled By: goldsborough
fbshipit-source-id: 4cb8687f90fc8709a536c5e7eacd0dc8edf6f750
Peter Goldsborough [Tue, 18 Dec 2018 00:07:14 +0000 (16:07 -0800)]
Add a correctness check for C++ types to custom operators (#15247)
Summary:
The JIT uses `int64_t` for its integer type and `double` for its floating point type, but users quite often want to write `int` or `float` and that currently fails in not-so-nice ways for custom ops. This PR adds a simple `static_assert` to catch these common failure cases.
zdevito
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15247
Differential Revision: D13493941
Pulled By: goldsborough
fbshipit-source-id: c1cd0d10ab5838c75f167c0bdb57e45a0bc1344e
Tristan Rice [Mon, 17 Dec 2018 23:59:45 +0000 (15:59 -0800)]
caffe2/python/task: added __repr__ methods to all task definitions (#15250)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15250
This adds `__repr__` methods to all of the classes under task.py. This makes the objects much easier to interact with when using them in an interactive manner, such as in a Jupyter notebook.
The default `__repr__` method just returns the object ID which is very unhelpful.
Reviewed By: hanli0612
Differential Revision: D13475758
fbshipit-source-id: 6e1b166ec35163b9776c797b6a2e0d002560cd29
Roy Li [Mon, 17 Dec 2018 23:44:23 +0000 (15:44 -0800)]
Port nn fold and unfold to c++
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14597
Reviewed By: ezyang
Differential Revision: D13272227
fbshipit-source-id: 6eccab5ff5830a977398a96393b778095120edc6
James Sun [Mon, 17 Dec 2018 23:36:28 +0000 (15:36 -0800)]
Allow future type parsing
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14887
Differential Revision: D13490984
Pulled By: highker
fbshipit-source-id: 165fe995867be273793f983154aa6cbce13e4396
Jesse Hellemn [Mon, 17 Dec 2018 23:27:53 +0000 (15:27 -0800)]
Removing BUILD_C10_EXPERIMENTAL_OPS option and unglobbing experimental/c10d ops
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15064
Reviewed By: orionr
Differential Revision: D13474801
Pulled By: pjh5
fbshipit-source-id: 9d3664c3a3a1b6c2d9f083f8476fe3b037296b98
David Riazati [Mon, 17 Dec 2018 23:22:07 +0000 (15:22 -0800)]
Bicubic interpolation for nn.functional.interpolate (#9849)
Summary:
Addresses #918; interpolation results should be similar to tf
* Adds bicubic interpolation operator to `nn.functional.interpolate`
* Corresponding test in `test_nn.py`
The operator is added in legacy `TH` to be aligned with the other upsampling operators; they can be refactored/moved to ATen all at once when #10482 is resolved
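A quick sketch of the new mode (shapes chosen for illustration):
```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)   # bicubic operates on 4-D NCHW inputs
y = F.interpolate(x, scale_factor=2, mode='bicubic', align_corners=False)
print(y.shape)                # torch.Size([1, 3, 16, 16])
```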
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9849
Differential Revision: D9007525
Pulled By: driazati
fbshipit-source-id: 93ef49a34ce4e5ffd4bda94cd9a6ddc939f0a4cc
Wanchao Liang [Mon, 17 Dec 2018 23:18:51 +0000 (15:18 -0800)]
add isinstance static type checking for jit (#15076)
Summary:
This PR adds isinstance to do static type checking in the JIT.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15076
Differential Revision: D13471067
Pulled By: wanchaol
fbshipit-source-id: d39b7ed5db9fcca4b503659d02cf7795950ea8ea
peter [Mon, 17 Dec 2018 23:18:15 +0000 (15:18 -0800)]
Fix the missing caffe2 proto files for Windows (#15157)
Summary:
Fixes #15156
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15157
Differential Revision: D13490420
Pulled By: orionr
fbshipit-source-id: 4387d707f634a5975238af915b1befb2277f8ec7
Edward Yang [Mon, 17 Dec 2018 23:09:40 +0000 (15:09 -0800)]
Replace SwitchToDevice(0) with SwitchToDevice() (#15126)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15126
I want to make people stop manufacturing StreamId from thin air,
and a first step is to make people use the default stream.
Reviewed By: dzhulgakov
Differential Revision: D13432922
fbshipit-source-id: 9f0d8d70646c50d979bde5ba3c3addeebac48a3d
David Riazati [Mon, 17 Dec 2018 22:38:46 +0000 (14:38 -0800)]
Don't enforce docstrings on bool dispatch (#15306)
Summary:
Allows 2 functions that are boolean dispatched to have no docstrings (the only case that will fail now is if both functions have docstrings)
Fixes #15281
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15306
Differential Revision: D13494884
Pulled By: driazati
fbshipit-source-id: 65fec39ae03a7d6a68ad617c9b270faeb1617930
Soumyaroop Roy [Mon, 17 Dec 2018 22:23:54 +0000 (14:23 -0800)]
Fix for issue 14829 (#14908)
Summary:
* Modify the testcase as outlined in the issue
* Issue url: https://github.com/pytorch/pytorch/issues/14829
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14908
Differential Revision: D13490360
Pulled By: ezyang
fbshipit-source-id: ff11a72e19b49223652182e82c2b4e65fe444ca7
Junjie Bai [Mon, 17 Dec 2018 21:45:26 +0000 (13:45 -0800)]
Minor fixes in .jenkins/caffe2/bench.sh
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15304
Differential Revision: D13493876
Pulled By: bddppq
fbshipit-source-id: 7146eb2587e526af65b4b0290c25bd55653a3088
Spandan Tiwari [Mon, 17 Dec 2018 21:45:21 +0000 (13:45 -0800)]
Adding ONNX export for torch.expand and torch.ne (#15050)
Summary:
`torch.expand` and `torch.ne` are used often in models and this PR adds ONNX export support for them. ArmenAg has created issue https://github.com/pytorch/pytorch/issues/10882 for this.
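A minimal sketch of a module that now exports (the file name below is hypothetical):
```python
import torch

class M(torch.nn.Module):
    def forward(self, x):
        return x.expand(4, 3).ne(0)   # both ops now have ONNX symbolics

torch.onnx.export(M(), torch.randn(1, 3), "expand_ne.onnx")
```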
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15050
Differential Revision: D13453036
Pulled By: houseroad
fbshipit-source-id: 4724b4ffcebda6cd6b2acac51d6733cb27318daf
Edward Yang [Mon, 17 Dec 2018 21:25:31 +0000 (13:25 -0800)]
Tighten up invariants regarding StreamId. (#15125)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15125
I realized that it is really bad juju if you fake a StreamId
out of thin air, because in general this isn't going to work.
So, make the constructor a lot scarier.
Most "faking StreamId out of thin air" happens because someone
just wants to put something on the default stream.
Reviewed By: dzhulgakov
Differential Revision: D13432800
fbshipit-source-id: a86991d6fc1d8aa4e54e8175e5f06f90856238e6
David Riazati [Mon, 17 Dec 2018 21:08:03 +0000 (13:08 -0800)]
Fix tensor printing bug in Python 2 (#12732)
Summary:
`rsplit` doesn't have kwargs in Python 2 so this line raises an error
Fixes #15135
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12732
Differential Revision: D10458630
Pulled By: driazati
fbshipit-source-id: a63e42fbc0e39e4291480775b516c98122ec05a1
peter [Mon, 17 Dec 2018 05:50:43 +0000 (21:50 -0800)]
Refactor hotpatch_vars and apply it to libtorch (#14976)
Summary:
Fixes #14801.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14976
Differential Revision: D13485381
Pulled By: soumith
fbshipit-source-id: 0af3c2e1b90988d56f6f85632328d1e4b788ffd2
Derek Kim [Sat, 15 Dec 2018 18:56:49 +0000 (10:56 -0800)]
Trivial comment correction in dataloader (#15276)
Summary:
Trivial comment correction in dataloader
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15276
Differential Revision: D13477324
Pulled By: soumith
fbshipit-source-id: 2a74a014999655d129311d611f2a09411339cb13
Krishna Kalyan [Sat, 15 Dec 2018 17:46:55 +0000 (09:46 -0800)]
Delete ffi documentation (#15220)
Summary: Deleting FFI documentation since it's deprecated.
Differential Revision: D13477329
Pulled By: soumith
fbshipit-source-id: 0b3d485eb7cef1f05b6b397dff50f21a49d6409e
Fei Sun [Sat, 15 Dec 2018 17:07:02 +0000 (09:07 -0800)]
Fix a typo in the assert
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15265
Reviewed By: llyfacebook
Differential Revision: D13477029
Pulled By: sf-wind
fbshipit-source-id: 9c5571a583c01f9701625541ebec0c836cb923f2
y0ast [Sat, 15 Dec 2018 12:41:02 +0000 (04:41 -0800)]
fix cholesky call in potrs example (#15215)
Summary:
Cholesky by default returns the lower triangular matrix, see [docs](https://pytorch.org/docs/stable/torch.html#torch.cholesky).
However `torch.potrs` by default requires the upper triangular matrix. The naming of the variable `u` suggests that the example expects the upper to be returned, so I've added the flag to make that happen in the example.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15215
Differential Revision: D13476468
Pulled By: soumith
fbshipit-source-id: 7b68035f435a2b1be4d363b3f63e407394af949d
Michael Suo [Sat, 15 Dec 2018 09:14:45 +0000 (01:14 -0800)]
value-based mark and sweep DCE (#14910)
Summary:
This makes DCE more granular by tracking live values/aliases through the graph (rather than just nodes). So we can be more aggressive in DCE around control flow blocks. For example, in:
```
%a0 = aten::foo()
%b = aten::foo()
%a2, %b2 = prim::If(%cond) {
block0() {
%a1 = aten::foo(%a0)
%b1 = aten::foo(%b)
} -> (%a1, %b1)
}
return (%a2)
```
we will now dce all the `%b` stuff.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14910
Differential Revision: D13476445
Pulled By: suo
fbshipit-source-id: 2bf5db19711c07dde946697a4f4b270bd8baf791
Xiang Gao [Sat, 15 Dec 2018 08:07:37 +0000 (00:07 -0800)]
Mention Jacobian-vector product in the doc of torch.autograd (#15197)
Summary:
A friend of mine is learning deep learning and pytorch, and he is confused by the following piece of code from the tutorial https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html#gradients :
```python
x = torch.randn(3, requires_grad=True)
y = x * 2
while y.data.norm() < 1000:
y = y * 2
print(y)
gradients = torch.tensor([0.1, 1.0, 0.0001], dtype=torch.float)
y.backward(gradients)
print(x.grad)
```
He doesn't know where the following line comes from:
```python
gradients = torch.tensor([0.1, 1.0, 0.0001], dtype=torch.float)
```
What are we computing? Why don't we compute "the gradient of `y` w.r.t `x`"?
In the tutorial, it only says
> You can do many crazy things with autograd!
Which does not explain anything. It seems to be hard for some beginners of deep learning to understand why do we ever do backwards with external gradient fed in and what is the meaning of doing so. So I modified the tutorial in https://github.com/pytorch/tutorials/pull/385
and the docstring correspondingly in this PR, explaining the Jacobian vector product. Please review this PR and https://github.com/pytorch/tutorials/pull/385 together.
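For the record, a small worked example of what `backward(gradients)` computes (the numbers mirror the tutorial snippet above):
```python
import torch

x = torch.randn(3, requires_grad=True)
y = x * 2                              # the Jacobian of this map is 2 * I
v = torch.tensor([0.1, 1.0, 0.0001])
y.backward(v)                          # computes J^T v, not "the gradient of y"
print(torch.allclose(x.grad, 2 * v))   # True
```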
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15197
Differential Revision: D13476513
Pulled By: soumith
fbshipit-source-id: bee62282e9ab72403247384e4063bcdf59d40c3c
Jerry Zhang [Sat, 15 Dec 2018 05:08:20 +0000 (21:08 -0800)]
Tensor method rename dims()->sizes() (#15246)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15246
Codemod generated with clangr shard mode, 25 files per diff.
Reviewed By: igorsugak
Differential Revision: D13470369
fbshipit-source-id: ce995beab7c64bebe8b234fb5e6d015940ec2952
Zachary DeVito [Sat, 15 Dec 2018 03:29:19 +0000 (19:29 -0800)]
Create parser.cpp (#15238)
Summary:
Moves implementation into .cpp file. Parser was getting included in several compilation units.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15238
Differential Revision: D13474635
Pulled By: zdevito
fbshipit-source-id: 7dc824eea8f506d6c8ae1aa67aeec0c34d5285fc
Fei Sun [Sat, 15 Dec 2018 01:35:12 +0000 (17:35 -0800)]
Add several features to converting images to blobs (#15204)
Summary:
Several enhancements are implemented:
* Resize the images to be within a boundary between min-size and max-size (these can be height and width). It tries to resize the minimum size to match the min-size and keep the aspect ratio. However, if in that case the maximum size is more than the max-size, then it resizes the maximum size to be equal to the max-size (and the minimum size is less than min-size). The min/max sizes are specified in the scale argument, in comma-separated form. If one of the sizes is -1, then that size is not a restriction.
* Change the OpenCV resize function arguments from using cv::Size() to the x, y scale. Theoretically they should be the same, but in reality the two ways of specifying them may produce different resized outputs.
* Once the image is read in, change the data to floats. That means, after resize and other preprocessing steps, the float values are preserved (not truncated to int).
* It is possible to convert data in text format to the blob format.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15204
Reviewed By: llyfacebook
Differential Revision: D13467225
Pulled By: sf-wind
fbshipit-source-id: 7da34a72d43a9603cd7ab953f5821c1222d0178f
Yinghai Lu [Sat, 15 Dec 2018 00:34:11 +0000 (16:34 -0800)]
Supply static shape info to Reshape when doing onnxGetCompatibility (#15242)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15242
Newer versions of ONNX Reshape get shape info from a tensor. Hence for a static backend, we need to provide this info to it when doing `onnxGetCompatibility` too.
Reviewed By: jackm321
Differential Revision: D13471959
fbshipit-source-id: 8a58e28edd900b6ad54a1dbd63ff2579fbe0e820
rohithkrn [Sat, 15 Dec 2018 00:31:34 +0000 (16:31 -0800)]
FP16MomentumSGDUpdate Op fix and enable for ROCm (#15150)
Summary:
1. Fix a bug in FP16MomentumSGDUpdate operator
2. Enable operator for ROCm
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15150
Differential Revision: D13473145
Pulled By: bddppq
fbshipit-source-id: 4c5c5f30cb9bba658e3639dbe193fa08a304d306
Alexander Sidorov [Sat, 15 Dec 2018 00:20:37 +0000 (16:20 -0800)]
Start unittesting our main observer (#15191)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15191
OSS:
just splitting out basic flags from a unit test, so I can extend them in another test where I need to add additional flags.
Reviewed By: yinghai
Differential Revision: D13159184
fbshipit-source-id: 9823e792cf0ed8d0379235c44564862b7d784845
bddppq [Fri, 14 Dec 2018 23:34:38 +0000 (15:34 -0800)]
Build c10 HIP test
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15233
Reviewed By: ezyang
Differential Revision: D13471002
Pulled By: bddppq
fbshipit-source-id: b42c3bc2b9db672ce50a52eb700cc6ed13d3535f
Krishna Kalyan [Fri, 14 Dec 2018 23:24:45 +0000 (15:24 -0800)]
record unit time in torch.cuda.event (#15221)
Summary: Record unit of time for torch.cuda.Event's elapsed_time
Differential Revision: D13467646
Pulled By: zou3519
fbshipit-source-id: 4f1f4ef5fa4bc5a1b4775dfcec6ab155e5bf8d6e
James Reed [Fri, 14 Dec 2018 23:05:24 +0000 (15:05 -0800)]
Preserve module hierarchy on traced modules (#15101)
Summary:
We need this, for example, to properly call `_unpack` when we have a traced module in the hierarchy
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15101
Differential Revision: D13468467
Pulled By: jamesr66a
fbshipit-source-id: c2b6740b12cde6e23395d12e42d4fc2c4c7ca3f2