Michael Antonov [Thu, 7 Feb 2019 20:42:38 +0000 (12:42 -0800)]
Update ATen internals to use int64_t for dimension indexing (#16739)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16739
Some ATen code locations incorrectly used int, etc., where either
int64_t or size_t was required. Update them to use int64_t for dimension indexing where necessary.
Reviewed By: ezyang
Differential Revision:
D13950124
fbshipit-source-id:
aaf1cef783bf3c657aa03490f2616c35c816679f
Will Feng [Thu, 7 Feb 2019 19:58:50 +0000 (11:58 -0800)]
Make JIT attributes t_ and ts_ store Variable instead of Tensor (#16596)
Summary:
Discussed with zdevito and we want to use Variable (with `set_requires_grad(false)`) instead of Tensor in all parts of JIT, to eliminate the distinction and the conceptual overhead when trying to figure out which one to use.
This also helps with the Variable/Tensor merge work tracked at https://github.com/pytorch/pytorch/issues/13638, which will make common functions (such as `numel()` / `sizes()` / `dim()`) on Variable much faster when finished.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16596
Differential Revision:
D13979971
Pulled By: yf225
fbshipit-source-id:
c69119deec5bce0c22809081115f1012fdbb7d5a
David Riazati [Thu, 7 Feb 2019 19:50:27 +0000 (11:50 -0800)]
Better error when using a constant tensor (#16724)
Summary:
Fixes #16284
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16724
Differential Revision:
D13990531
Pulled By: driazati
fbshipit-source-id:
adbf47a07eddb3813fbe1322944abfe5fcff89fa
Richard Zou [Thu, 7 Feb 2019 19:09:10 +0000 (11:09 -0800)]
Backport the stable doc build on v1.0.1 to master (#16503)
Summary:
List of changes:
- Always push the final state of the doc build docker for debugging purposes.
- Adds code for the stable doc build. This code is never actually run on master, only the v1.0.1 branch. There is a big note for this behavior.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16503
Differential Revision:
D13972469
Pulled By: zou3519
fbshipit-source-id:
68f459650ef0de200a34edd43fc1372143923972
Wanchao Liang [Thu, 7 Feb 2019 18:32:02 +0000 (10:32 -0800)]
Remove undefined tensor in jit script (#16379)
Summary:
This PR is a follow-up to #15460; it does the following:
* remove the undefined tensor semantic in jit script/tracing mode
* change the ATen/JIT schema for at::index and other index-related ops to take `Tensor?[]`, aligning with what at::index really does and adopting `Optional[Tensor]` in JIT
* change python_print to correctly print the exported script
* register both TensorList and ListOfOptionalTensor in JIT ATen ops to support both
* Backward compatibility for `torch.jit.annotate(Tensor, None)`
List of follow ups:
* remove the undefined tensor semantic in jit autograd, autodiff and grad_of
* remove prim::Undefined fully
For easy reviews, please turn on `hide white space changes` in diff settings.
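As a hedged illustration of the `Optional[Tensor]` style this enables (a minimal sketch assuming current `torch.jit.script` syntax; the function name is hypothetical):
```py
import torch
from typing import Optional

@torch.jit.script
def fill_masked(x: torch.Tensor, mask: Optional[torch.Tensor] = None) -> torch.Tensor:
    # An absent tensor is an explicit Optional[Tensor] rather than an
    # "undefined tensor"; torch.jit.annotate(Tensor, None) keeps working
    # for backward compatibility.
    if mask is not None:
        return x.masked_fill(mask, 0.0)
    return x
```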
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16379
Differential Revision:
D13855677
Pulled By: wanchaol
fbshipit-source-id:
0e21c14d7de250c62731227c81bfbfb7b7da20ab
Fritz Obermeyer [Thu, 7 Feb 2019 09:33:41 +0000 (01:33 -0800)]
Support multiple inheritance in torch.distributions (#16772)
Summary:
This adds calls to `super().__init__()` in three classes in torch.distributions.
This is needed when `Distribution` and `Transform` objects are used with multiple inheritance, e.g. when combined with `torch.nn.Module`. For example:
```py
class MyModule(torch.distributions.Transform, torch.nn.Module):
...
```
cc martinjankowiak esling who have wanted to use this pattern, e.g. in #16756
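A minimal sketch of the pattern this unblocks (the class and parameter names are hypothetical; it assumes cooperative `super().__init__()` calls along the MRO):
```py
import torch
import torch.nn as nn
from torch.distributions.transforms import Transform

class ShiftTransform(Transform, nn.Module):
    """Hypothetical transform whose shift is a learnable parameter."""
    def __init__(self):
        # A single cooperative call now initializes both bases:
        # Transform.__init__ -> nn.Module.__init__ along the MRO.
        super().__init__()
        self.shift = nn.Parameter(torch.zeros(1))

    def _call(self, x):
        return x + self.shift

t = ShiftTransform()
print(list(t.parameters()))  # the shift is registered as an nn.Parameter
```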
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16772
Differential Revision:
D13978633
Pulled By: soumith
fbshipit-source-id:
8bc6cca1747cd74d32135ee2fe588bba2ea796f1
vishwakftw [Thu, 7 Feb 2019 09:10:54 +0000 (01:10 -0800)]
Remove redundant wrappers in torch.distributions (#16807)
Summary:
Changelog:
- Remove torch.distributions.multivariate_normal._batch_diag: the same functionality is provided by torch.diagonal
- Remove torch.distributions.lowrank_multivariate_normal._batch_vector_diag: the same functionality is provided by torch.diag_embed
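For reference, a hedged sketch of the replacements (the batch shape `(3, 4, 4)` is chosen arbitrarily):
```py
import torch

L = torch.randn(3, 4, 4)
# What _batch_diag(L) computed:
d = torch.diagonal(L, dim1=-2, dim2=-1)   # shape (3, 4)
# What _batch_vector_diag(d) computed:
D = torch.diag_embed(d)                   # shape (3, 4, 4)
```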
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16807
Differential Revision:
D13985550
Pulled By: soumith
fbshipit-source-id:
25c7d00c52ff7f85e431134e9ce0d5dda453667b
Ying Zhang [Thu, 7 Feb 2019 08:33:29 +0000 (00:33 -0800)]
Insert AdjustBatchSizeOp into the predict_net. (#16811)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16811
As the title. The AdjustBatch ops will be inserted before and after the Onnxifi op to:
1) adjust batch/seq sizes to the ideal batch/seq size before these tensors are processed by the Onnxifi op;
2) adjust the batch size back to the original batch size for tensors generated by the Onnxifi op.
Reviewed By: yinghai
Differential Revision:
D13967711
fbshipit-source-id:
471b25ae6a60bf5b7ebee1de6449e0389b6cafff
rohithkrn [Thu, 7 Feb 2019 08:21:21 +0000 (00:21 -0800)]
Unify gpu_support variable in python tests (#16748)
Summary:
Assign `has_gpu_support = has_cuda_support or has_hip_support` and make corresponding changes in python tests.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16748
Differential Revision:
D13983132
Pulled By: bddppq
fbshipit-source-id:
ca496fd8c6ae3549b736bebd3ace7fa20a6dad7f
Mohana Rao [Thu, 7 Feb 2019 07:33:40 +0000 (23:33 -0800)]
Update Docker file section in README.md (#16812)
Summary:
Emphasize that the docker build should be triggered from the pytorch repo directory.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16812
Differential Revision:
D13985531
Pulled By: soumith
fbshipit-source-id:
c6511d1e81476eb795b37fb0ad23e8951dbca617
Jie [Thu, 7 Feb 2019 07:05:49 +0000 (23:05 -0800)]
TensorIterator cuda launch configs update (#16224)
Summary:
Update launch configs for TensorIterator gpu_reduce_kernel. Enable flexible
block dimension to improve efficiency for reduction cases with small fast
dimension.
Previously TensorIterator launches blocks with fixed 32x16 threads.
For cases like:
import torch
torch.randn(2**20, 4, device='cuda').sum(0)
The fixed launch config does not handle coalesced memory access efficiently.
The updated launch config enables flexible block dimensions. Combined with the
improved reduction scheme (using flexible vertical / horizontal reduction
instead of the limited warp / block reduction in the old code), it ensures an
optimal memory access pattern even when reducing over a dimension with small stride.
Possible future improvements:
1. Precise dynamic shared memory allocation.
2. Using warp shuffle for vertical (block_y) reduction.
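As a hedged aside, one way to time the reduction shown above (a sketch assuming a CUDA device is available; the loop count is arbitrary):
```py
import torch

x = torch.randn(2**20, 4, device='cuda')
torch.cuda.synchronize()
start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
start.record()
for _ in range(100):
    x.sum(0)
end.record()
torch.cuda.synchronize()
print(f"{start.elapsed_time(end) / 100:.3f} ms per reduction")  # elapsed_time is in ms
```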
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16224
Differential Revision:
D13806753
Pulled By: soumith
fbshipit-source-id:
37e45c7767b5748cf9ecf894fad306e040e2f79f
Sebastian Messmer [Thu, 7 Feb 2019 05:14:21 +0000 (21:14 -0800)]
Define layer_norm schema in caffe2 (#16535)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16535
There is no longer any need to define the layer norm schema in a central location.
It can just be defined in caffe2 next to the kernel implementation.
Reviewed By: ezyang
Differential Revision:
D13869503
fbshipit-source-id:
c478153f8fd712ff6d507c794500286eb3583149
Sebastian Messmer [Thu, 7 Feb 2019 05:14:20 +0000 (21:14 -0800)]
Automatically register c10 ops with JIT (#16534)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16534
All c10 ops from the c10 dispatcher are now automatically registered with JIT
Reviewed By: dzhulgakov
Differential Revision:
D13869275
fbshipit-source-id:
5ab5dec5b983fe661f977f9d29d8036768cdcab6
Yinghai Lu [Thu, 7 Feb 2019 03:12:32 +0000 (19:12 -0800)]
Add AdjustBatch Op (#16676)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16676
This op is used for changing the batch size (the first dimension) of a tensor.
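A hedged sketch of the intended semantics (illustrative Python only; `adjust_batch` is a hypothetical helper, not the operator's real implementation or schema):
```py
import torch

def adjust_batch(t: torch.Tensor, new_batch: int) -> torch.Tensor:
    """Truncate or zero-pad dim 0 of t so that t.shape[0] == new_batch."""
    if t.shape[0] >= new_batch:
        return t[:new_batch]
    pad = t.new_zeros((new_batch - t.shape[0],) + tuple(t.shape[1:]))
    return torch.cat([t, pad], dim=0)
```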
Reviewed By: bertmaher, ipiszy
Differential Revision:
D13929200
fbshipit-source-id:
4f2c3faec072d468be8301bf00c80d33adb3b5b3
bddppq [Thu, 7 Feb 2019 01:52:12 +0000 (17:52 -0800)]
Bring back running pytorch tests in rocm CI
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16829
Differential Revision:
D13982323
Pulled By: bddppq
fbshipit-source-id:
6ffadb96b9e2ebd64a29e38674a51401dfb211db
Zachary DeVito [Thu, 7 Feb 2019 01:22:47 +0000 (17:22 -0800)]
Rename DynamicType -> TensorType (#16787)
Summary:
```
import json
from subprocess import check_call
from pprint import pprint
renames = {
    'c10::TensorType': 'DimensionedTensorType',
    'c10::DynamicType': 'TensorType',
    'c10::TensorTypePtr': 'DimensionedTensorTypePtr',
    'c10::DynamicTypePtr': 'TensorTypePtr',
    'c10::TypeKind::DynamicType': 'TensorType',
    'c10::TypeKind::TensorType': 'DimensionedTensorType',
}

entries = json.loads(open('compile_commands.json', 'r').read())
build = None
sources = []
for e in entries:
    name = e['file']
    if not ('jit' in name or 'ATen/core' in name):
        continue
    build = e['directory']
    sources.append(name)

args = ['clang-rename', '-i', '-force', '-pl']
for name in sorted(renames.keys()):
    args += ['-qualified-name={}'.format(name), '-new-name={}'.format(renames[name])]

for source in sources:
    cmd = args + [source]
    pprint(args)
    check_call(cmd, cwd=build)

check_call(['git', 'stash', 'push', '-m', 'rename'])
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16787
Differential Revision:
D13974132
Pulled By: zdevito
fbshipit-source-id:
8368fd53e17cff83707bbe77f2d7aad74f8ce60e
Yinghai Lu [Thu, 7 Feb 2019 00:13:24 +0000 (16:13 -0800)]
Use bound shape inference in onnxifi transform (#16598)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16598
ATT.
Reviewed By: bertmaher, rdzhabarov
Differential Revision:
D13893698
fbshipit-source-id:
8d2ad9814fe76924a46b450eb7ebd3601fbdbbc7
Soumith Chintala [Wed, 6 Feb 2019 23:46:39 +0000 (15:46 -0800)]
improve error message (#16719)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/16712
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16719
Differential Revision:
D13978688
Pulled By: ezyang
fbshipit-source-id:
61f8fa4c54c6969a58550e32e18be2eb9254ced7
Jongsoo Park [Wed, 6 Feb 2019 23:14:17 +0000 (15:14 -0800)]
int8 SpatialBN (#16796)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16796
SpatialBN int8 version
Reviewed By: dskhudia
Differential Revision:
D13971224
fbshipit-source-id:
e55fd608c161069daaa4e62c618bc14b01f32cb7
Jongsoo Park [Wed, 6 Feb 2019 23:10:07 +0000 (15:10 -0800)]
call istringstream clear after str (#16820)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16820
Sometimes histogram parsing was not working correctly due to changes in
D13633256
We need to call istringstream::clear() after str().
Reviewed By: csummersea
Differential Revision:
D13977509
fbshipit-source-id:
ce3e8cb390641d8f0b5c9a7d6d6daadffeddbe11
Narine Kokhlikyan [Wed, 6 Feb 2019 22:29:35 +0000 (14:29 -0800)]
Replace resize_dim() with set_sizes_and_strides() (#16732)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16732
Use set_sizes_and_strides() instead of resize_dim().
Reviewed By: ezyang
Differential Revision:
D13947867
fbshipit-source-id:
067b096b1fde14b039690992a5fe3ace386b2789
Jongsoo Park [Wed, 6 Feb 2019 19:52:23 +0000 (11:52 -0800)]
no EIGEN engine for DeformConv (#16785)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16785
There's no EIGEN engine implemented for DeformConv, but the unit test was checking it.
Reviewed By: BIT-silence
Differential Revision:
D13967306
fbshipit-source-id:
e29c19f59f5700fc0501c59f45d60443b87ffedc
Jongsoo Park [Wed, 6 Feb 2019 19:52:23 +0000 (11:52 -0800)]
format deform_conv_test.py (#16786)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16786
Format to prepare for
D13967306
Reviewed By: BIT-silence
Differential Revision:
D13967317
fbshipit-source-id:
2de895f8474b04c55ba067fbf788c553dc010c60
Yinghai Lu [Wed, 6 Feb 2019 18:23:01 +0000 (10:23 -0800)]
Fix/Improve bound shape inference with real net tests (#16597)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16597
This diff fixes some bugs in shape inference for `SparseLengthsSumFused8BitRowwise`. And added input shape inference for `Concat` when `add_axis=1`.
Reviewed By: bertmaher
Differential Revision:
D13892452
fbshipit-source-id:
6cd95697a6fabe6d78a5ce3cb749a3a1e51c68e7
Edward Yang [Wed, 6 Feb 2019 18:19:37 +0000 (10:19 -0800)]
Typofix (#16800)
Summary:
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16800
Differential Revision:
D13972592
Pulled By: ezyang
fbshipit-source-id:
45c352ac6090c8060bf75f44dec7205556986d88
Oleg Bogdanov [Wed, 6 Feb 2019 17:40:00 +0000 (09:40 -0800)]
caffe2 | MSVS compatibility fixes (#16765)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16765
Code changes required to build caffe2 for windows with toolchain used by FB.
Reviewed By: orionr
Differential Revision:
D13953258
fbshipit-source-id:
651823ec9d81ac70e32d4cce5bc2472434104733
Gu, Jinghui [Wed, 6 Feb 2019 17:25:42 +0000 (09:25 -0800)]
Fallback sum/add to CPU if needed (#15267)
Summary:
Fallback sum/add to CPU if needed
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15267
Differential Revision:
D13935064
Pulled By: yinghai
fbshipit-source-id:
eb228683d00a0462a1970f849d35365bc98340d6
Lu Fang [Wed, 6 Feb 2019 17:17:37 +0000 (09:17 -0800)]
update of fbcode/onnx to
822d8df0a2a32233c6022f50a158817a0f19bdc7 (#16791)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16791
Previous import was
bfa8b335ab6d1ed7b688dc2ec96421a3fe9e644c
Included changes:
- **[822d8df](https://github.com/onnx/onnx/commit/822d8df)**: allow removed experimental ops in the checker for now (#1792) <Lu Fang>
Reviewed By: MisterTea
Differential Revision:
D13970103
fbshipit-source-id:
5feaaa6c65ba10901eeba0b63724d7e451e9b642
Freddie Mendoza [Wed, 6 Feb 2019 17:02:18 +0000 (09:02 -0800)]
Adding torch/lib64 in .gitignore for ppc64le CI build to pass (#16782)
Summary:
Adding torch/lib64 to .gitignore so that the git status --porcelain
check during CI build and test passes for ppc64le. During the build,
torch/lib64 is created and contains third-party libraries. This
should be ignored by the porcelain check.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16782
Differential Revision:
D13972794
Pulled By: ezyang
fbshipit-source-id:
5459c524eca42d396ac46e756a327980b4b1fa53
Edward Yang [Wed, 6 Feb 2019 16:17:55 +0000 (08:17 -0800)]
Revert
D13854304: [redo][c10] LayerNorm Registration Example
Differential Revision:
D13854304
Original commit changeset:
ec463ce22721
fbshipit-source-id:
4262b9a2ef486e1c7c0283ea021331ac97cc5f56
Edward Yang [Wed, 6 Feb 2019 16:17:54 +0000 (08:17 -0800)]
Revert
D13855525: [c10] Expose RoIAlign to torch
Differential Revision:
D13855525
Original commit changeset:
cfee7bb1544d
fbshipit-source-id:
0b4124b78c4082b52e592a1275069c879a9aed39
Edward Yang [Wed, 6 Feb 2019 16:17:53 +0000 (08:17 -0800)]
Revert
D13856086: [c10] Expose GenerateProposals to torch
Differential Revision:
D13856086
Original commit changeset:
a4873646a71a
fbshipit-source-id:
79b634426404236ddbc407d3796a350ad3dae5ca
Edward Yang [Wed, 6 Feb 2019 16:17:53 +0000 (08:17 -0800)]
Revert
D13864292: [c10] Expose BBoxTransform to pytorch
Differential Revision:
D13864292
Original commit changeset:
1f57664e7834
fbshipit-source-id:
37663b7e8213185ecaa5c219076fc7de64704549
Edward Yang [Wed, 6 Feb 2019 16:17:52 +0000 (08:17 -0800)]
Revert
D13865221: [c10] Expose BoxWithNMSLimit
Differential Revision:
D13865221
Original commit changeset:
8a3f1d420183
fbshipit-source-id:
0057be9619b660dcad8c01bae67b54400127577e
Edward Yang [Wed, 6 Feb 2019 16:17:52 +0000 (08:17 -0800)]
Revert
D13866214: [c10] Expose HeatmapMaxKeypoints to torch
Differential Revision:
D13866214
Original commit changeset:
2ca79037fc07
fbshipit-source-id:
d2c653f4f32cf0ea76875888f3523c0dc7db9960
Rodrigo Berriel [Wed, 6 Feb 2019 15:41:42 +0000 (07:41 -0800)]
Fix pip list format in collect_env (#16798)
Summary:
Since pip 18.0 (2018-07-22), `legacy` is no longer a valid choice for `pip list --format` as can be seen in the [Release Notes](https://pip.pypa.io/en/stable/news/#id62). Therefore, the options now are: `columns`, `freeze` and `json`. With `legacy`, this is how it looked:
```
[...]
Versions of relevant libraries:
[pip3] numpy (1.16.1)
[pip3] torch (1.0.1)
[pip3] torchvision (0.2.1)
[...]
```
Changing to `freeze`, this is how it looks:
```
[...]
Versions of relevant libraries:
[pip3] numpy==1.16.1
[pip3] torch==1.0.1
[pip3] torchvision==0.2.1
[...]
```
Currently, this is what happens:
```
[...]
Versions of relevant libraries:
[pip] Could not collect
[...]
```
The `freeze` option is also available in old pip, so this change is backwards compatible. Also, if we would like to keep the old style, which I don't think is necessary, I could easily change that.
---
In case anyone wants to know what `columns` looks like (I prefer `freeze`):
```
[...]
Versions of relevant libraries:
[pip3] numpy 1.16.1
[pip3] torch 1.0.1
[pip3] torchvision 0.2.1
[...]
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16798
Differential Revision:
D13971793
Pulled By: soumith
fbshipit-source-id:
3721d9079a2afa245e1185f725598901185ea4cd
Soumith Chintala [Wed, 6 Feb 2019 13:09:09 +0000 (05:09 -0800)]
disable default system-wide detection of gflags, glog, opencv, lmdb, leveldb (#16789)
Summary:
They can instead be enabled by env flags USE_* (as always).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16789
Differential Revision:
D13971789
Pulled By: soumith
fbshipit-source-id:
d5eac9be677114be3fb15b43080faa0efdfff8ee
Zachary DeVito [Wed, 6 Feb 2019 06:36:48 +0000 (22:36 -0800)]
fix BUILD_CAFFE2_OPS
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16783
Differential Revision:
D13965061
Pulled By: zdevito
fbshipit-source-id:
6fe710ca51e2f338873b56f23256668ca3fe2032
Edward Yang [Wed, 6 Feb 2019 05:09:06 +0000 (21:09 -0800)]
Remove unnecessary typing import. (#16777)
Summary:
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16777
Differential Revision:
D13969679
Pulled By: ezyang
fbshipit-source-id:
d4728797a5927ae32628621c654eadb93c0e7682
Michael Suo [Wed, 6 Feb 2019 04:37:30 +0000 (20:37 -0800)]
Fix alias analysis for fork/wait (#16671)
Summary:
(review top commit only).
As expected, fork/wait introduces some corner cases into the alias analysis. The comments inline should describe the changes.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16671
Differential Revision:
D13963219
Pulled By: suo
fbshipit-source-id:
2bec6fc03a4989cf309fbb9473f3f2ffe2c31431
Ailing Zhang [Wed, 6 Feb 2019 02:59:45 +0000 (18:59 -0800)]
changes to apply xla patch (#16781)
Summary:
This PR will let xla tests pass after https://github.com/pytorch/xla/pull/183 is in.
Will add back the branch filters once it's ready.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16781
Differential Revision:
D13968976
Pulled By: ailzhang
fbshipit-source-id:
df3b173336b3247aa56ef723569a1f68cdfa56e0
Jerry Zhang [Wed, 6 Feb 2019 02:46:38 +0000 (18:46 -0800)]
Tensor construction codemod (#16568)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16568
In caffe2/caffe2/operators and caffe2/caffe2/fb/operators
(Resize + mutable_data) and (ResizeLike + mutable_data)
motivation: https://github.com/pytorch/pytorch/pull/12407
Reviewed By: dzhulgakov
Differential Revision:
D13863416
fbshipit-source-id:
90ad3971850b89bf4b2ac81e9fa59d3bc43dc1c9
David Riazati [Wed, 6 Feb 2019 02:22:17 +0000 (18:22 -0800)]
Warn when tracing legacy constructors
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16770
Differential Revision:
D13963581
Pulled By: driazati
fbshipit-source-id:
8f8cdfc455ba65be370fd952fc5e5c233525d002
David Riazati [Wed, 6 Feb 2019 01:52:07 +0000 (17:52 -0800)]
Use torch.zeros for nn.LSTM
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16779
Differential Revision:
D13963577
Pulled By: driazati
fbshipit-source-id:
dc9edc3d2096760737ecbe4b3dd441ed2d53f4ad
Roy Li [Tue, 5 Feb 2019 23:17:22 +0000 (15:17 -0800)]
Set SCCACHE_IDLE_TIMEOUT=1200 (#16728)
Summary:
Doubling the sccache timeout from the default of 600.
The asan build of #16645 will fail without this change.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16728
Differential Revision:
D13963727
Pulled By: li-roy
fbshipit-source-id:
3614d75c1b46d663fa05b84f99d8a099283a8e64
Johannes M Dieterich [Tue, 5 Feb 2019 22:53:16 +0000 (14:53 -0800)]
Document hip-clang and its __HIP__ macro (#16771)
Summary:
In #16085, we introduced initial hip-clang bring-up code. Document the use of the __HIP__ macro now.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16771
Differential Revision:
D13961538
Pulled By: ezyang
fbshipit-source-id:
67f6226abcbe62e2f4efc291c84652199c464ca6
Edward Yang [Tue, 5 Feb 2019 22:39:43 +0000 (14:39 -0800)]
Rename IntList to IntArrayRef. (#16751)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16751
This was made more complicated by the fact that ivalue::IntList
is a thing. So I had to fix all of the sites where we were referring
to IValue post facto.
The following codemods were run, in this order:
```
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in IntList IntArrayRef
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in IntArrayRef::create IntList::create
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in ivalue::IntArrayRef ivalue::IntList
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in Tag::IntArrayRef Tag::IntList
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in isIntArrayRef isIntList
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in toIntArrayRef toIntList
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in 'Shared<IntArrayRef>' 'Shared<IntList>'
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in 'intrusive_ptr<IntArrayRef>' 'intrusive_ptr<IntList>'
```
Some manual fixups were done afterwards; they can be reviewed separately
at https://github.com/pytorch/pytorch/pull/16752
Reviewed By: dzhulgakov
Differential Revision:
D13954363
fbshipit-source-id:
b5c40aacba042402155a2f5a229fa6db7992ac64
David Riazati [Tue, 5 Feb 2019 21:48:52 +0000 (13:48 -0800)]
dict values(), keys(), and len() (#16629)
Summary:
Adds some operations for dicts to match Python, plus tests.
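A minimal sketch of the newly scriptable dict operations (the function is hypothetical; it assumes current `torch.jit.script` syntax):
```py
import torch
from typing import Dict

@torch.jit.script
def summarize(d: Dict[str, int]) -> int:
    total = len(d)            # len() on a dict
    for k in d.keys():        # keys()
        total += len(k)
    for v in d.values():      # values()
        total += v
    return total

print(summarize({'a': 1, 'bb': 2}))
```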
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16629
Differential Revision:
D13961144
Pulled By: driazati
fbshipit-source-id:
b31f27a4320ff62cd118b508fb0a13056535dc7c
Lu Fang [Tue, 5 Feb 2019 21:13:16 +0000 (13:13 -0800)]
update of fbcode/onnx to
bfa8b335ab6d1ed7b688dc2ec96421a3fe9e644c (#16767)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16767
Previous import was
875f7bbe537b9d6931d065977c192eaaf61e1179
Included changes:
- **[bfa8b33](https://github.com/onnx/onnx/commit/bfa8b33)**: [ONNXIFI]Add extension of onnxSetIOAndRunGraph (#1781) <Rui Zhu>
Reviewed By: zrphercule
Differential Revision:
D13959349
fbshipit-source-id:
4876d00a3f7033cf9d89554f8b4789acd6881f72
Jesse Hellemn [Tue, 5 Feb 2019 20:56:56 +0000 (12:56 -0800)]
Fix commit races on binary CI on master PR-merges (#16773)
Summary:
There is no way to test this until it is merged.
On master jobs that run after a PR is merged, there is no CIRCLE_PR_NUMBER so the binary builds clone pytorch/pytorch/master, which races.
Based off of https://circleci.com/docs/2.0/env-vars/ and the circleci checkout code
```
git config --global url."ssh://git@github.com".insteadOf "https://github.com" || true
git config --global gc.auto 0 || true
if [ -e /home/circleci/project/.git ]
then
cd /home/circleci/project
git remote set-url origin "$CIRCLE_REPOSITORY_URL" || true
else
mkdir -p /home/circleci/project
cd /home/circleci/project
git clone "$CIRCLE_REPOSITORY_URL" .
fi
if [ -n "$CIRCLE_TAG" ]
then
git fetch --force origin "refs/tags/${CIRCLE_TAG}"
else
git fetch --force origin "master:remotes/origin/master"
fi
if [ -n "$CIRCLE_TAG" ]
then
git reset --hard "$CIRCLE_SHA1"
git checkout -q "$CIRCLE_TAG"
elif [ -n "$CIRCLE_BRANCH" ]
then
git reset --hard "$CIRCLE_SHA1"
git checkout -q -B "$CIRCLE_BRANCH"
fi
git reset --hard "$CIRCLE_SHA1"
```
I believe we do not use git tags.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16773
Differential Revision:
D13962132
Pulled By: pjh5
fbshipit-source-id:
c62d2139f38ff39ecda1509b0bcd8bd102828e40
Bram Wasti [Tue, 5 Feb 2019 20:30:31 +0000 (12:30 -0800)]
Expose HeatmapMaxKeypoints to torch (#16528)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16528
..
Reviewed By: smessmer
Differential Revision:
D13866214
fbshipit-source-id:
2ca79037fc070bade5542345af5ce09f88beda44
Bram Wasti [Tue, 5 Feb 2019 20:30:31 +0000 (12:30 -0800)]
Expose BoxWithNMSLimit (#16529)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16529
..
Reviewed By: smessmer
Differential Revision:
D13865221
fbshipit-source-id:
8a3f1d420183ed5ae51b3c9e4eb6e033078c7ae4
Bram Wasti [Tue, 5 Feb 2019 20:30:31 +0000 (12:30 -0800)]
Expose BBoxTransform to pytorch (#16530)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16530
..
Reviewed By: smessmer
Differential Revision:
D13864292
fbshipit-source-id:
1f57664e78347e72c0087aa3d825a6a9517c1945
Bram Wasti [Tue, 5 Feb 2019 20:30:29 +0000 (12:30 -0800)]
Expose GenerateProposals to torch (#16477)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16477
expose generateproposals to torch
Reviewed By: smessmer
Differential Revision:
D13856086
fbshipit-source-id:
a4873646a71a6b6c01740d21729e827f4b36588f
Bram Wasti [Tue, 5 Feb 2019 20:30:29 +0000 (12:30 -0800)]
Expose RoIAlign to torch (#16476)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16476
enable calling roialign (caffe2) from torch frontend
Reviewed By: smessmer
Differential Revision:
D13855525
fbshipit-source-id:
cfee7bb1544dc58df4231604ba01d61ca905ae3f
Bram Wasti [Tue, 5 Feb 2019 20:30:29 +0000 (12:30 -0800)]
LayerNorm Registration Example (#16478)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16478
This diff includes an example registration of a caffe2 op in torch. A previous attempt ran into a static initialization order bug.
Reviewed By: smessmer
Differential Revision:
D13854304
fbshipit-source-id:
ec463ce2272126d08a5163d1599361ee5b718bbc
Bram Wasti [Tue, 5 Feb 2019 20:30:28 +0000 (12:30 -0800)]
Enable undefined at::Tensor to be passed as Output (#16730)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16730
With Jerry's new updates, Tensor must be defined; as a result, I've needed to update the shim for caffe2 ops being used in PyTorch.
Reviewed By: smessmer
Differential Revision:
D13946950
fbshipit-source-id:
6f77877c61a743f82bdfc2ad04d6ab583000cc18
Alex Şuhan [Tue, 5 Feb 2019 20:20:21 +0000 (12:20 -0800)]
Add XLA / TPU device type, backend type and type id (#16763)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16763
Replicate the easy bits in https://github.com/pytorch/pytorch/pull/15153 with TPU / XLA instead of MSNPU. Also don't initialize the storage for XLA tensors for now.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16585
Reviewed By: ezyang
Differential Revision:
D13912118
Pulled By: gchanan
fbshipit-source-id:
4889177e2478768fb281ed075b71146d1d850bd9
Zachary DeVito [Tue, 5 Feb 2019 20:16:56 +0000 (12:16 -0800)]
Preserve method parameter names (#16750)
Summary:
Fixes #16591
This uses uniqueBaseName so that parameters do not end up with suffixes. It changes next_id to be per-base-name rather than global to fix jittering issues when re-importing a re-numbered graph.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16750
Differential Revision:
D13960282
Pulled By: zdevito
fbshipit-source-id:
2156f581d9b95d77bf1f1252074e800b19116555
Ailing Zhang [Tue, 5 Feb 2019 20:05:56 +0000 (12:05 -0800)]
add xla tests to enabled-configs (#16761)
Summary:
This should enable xla tests, thus letting master xla tests pass.
As usual, I will add the branch filters back before landing.
Thanks ezyang !
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16761
Differential Revision:
D13959746
Pulled By: ailzhang
fbshipit-source-id:
7384da281d093d16edccb4283c74e47ac659eeff
Jesse Hellemn [Tue, 5 Feb 2019 19:25:30 +0000 (11:25 -0800)]
Fix logging top commit of pytorch + builder in binaries for long summaries (#16766)
Summary:
I'll test with this really long summary.
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Fusce risus sem, mattis vitae commodo vitae, mattis vel ex. Integer nec consectetur ligula, sit amet ultricies risus. Suspendisse potenti. Donec aliquet quam ante. Donec porttitor justo ligula, ut vestibulum erat facilisis a. Nullam eget lobortis nisi. Aenean quis sem id ante eleifend condimentum nec a lacus. Sed sed dolor augue. Proin feugiat, tellus in eleifend cursus, libero nulla lacinia erat, et efficitur dui odio ut ex. In et sem purus. Proin dictum scelerisque magna, nec feugiat dolor lobortis id. Proin ante urna, ultrices in semper et, pulvinar et dui. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia Curae; Mauris ullamcorper neque a pharetra rhoncus.
Aliquam vel semper felis. Integer id massa erat. Morbi leo eros, varius sed viverra eu, dictum nec purus. Fusce vitae mollis sem, non fringilla nulla. Donec tincidunt luctus dolor. Morbi lobortis, magna quis viverra bibendum, lacus tortor pulvinar risus, eu porta tellus nulla vitae dolor. Sed tincidunt, turpis quis facilisis malesuada, nulla eros lobortis lorem, a fermentum mi nisl non quam. Pellentesque vehicula, nisl non eleifend viverra, tellus neque accumsan tellus, id ultricies lacus mi sed sapien. Proin rutrum ultrices quam sit amet euismod. Maecenas vel faucibus libero, nec efficitur mi. Proin felis augue, elementum eget vestibulum non, euismod sed urna. Curabitur purus nisi, interdum nec rutrum id, faucibus nec sapien. Integer consectetur interdum elit, volutpat vulputate velit. Integer et ultricies magna. Fusce blandit lorem urna, quis sodales sapien porttitor in. Nulla nec sodales sem.
Morbi consequat massa sit amet fringilla pretium. Nunc maximus vitae neque auctor pharetra. Morbi gravida feugiat urna, eu sagittis est pulvinar eget. Maecenas ut fermentum ante, eget malesuada neque. In ut maximus magna. Donec nec finibus sapien. Quisque viverra erat lobortis, rhoncus augue sed, hendrerit dui. Donec in feugiat augue, a ultrices justo. Pellentesque rutrum augue sed nulla auctor, a venenatis risus aliquam. Nullam ipsum justo, dictum sit amet elementum eu, eleifend a turpis. Proin ut tellus ut urna volutpat fermentum ac aliquam tellus.
Quisque ultricies est id eros dictum ultrices. Cras eu urna interdum, eleifend felis vitae, vulputate nulla. Cras tincidunt, mi sodales imperdiet tristique, diam odio convallis ligula, ac vulputate enim sapien eu tellus. Phasellus eleifend finibus sapien id ullamcorper. Donec aliquet eleifend consectetur. Proin in nulla venenatis, egestas neque quis, blandit sem. Suspendisse pellentesque arcu vel ligula fermentum maximus. Aliquam non ipsum ut ante pharetra finibus.
Nunc rhoncus purus sit amet risus congue venenatis. Integer id vestibulum neque, et fermentum elit. Nunc sit amet tortor quis mi aliquam vestibulum et in mauris. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Maecenas mollis hendrerit nulla, non tempus neque pharetra ac. Proin commodo bibendum velit, consectetur pretium metus sollicitudin eget. Aliquam malesuada semper tempor. Ut vel vulputate dolor, eu faucibus mauris. Nam commodo quis dolor sit amet eleifend. Phasellus eget massa odio. Donec tempor est at ante finibus lobortis. Suspendisse porttitor imperdiet ultrices. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus.
Nullam id dignissim magna, non suscipit odio. Vestibulum vel maximus erat, suscipit ullamcorper tellus. Fusce egestas augue lorem, in ultricies est vehicula ac. Integer pretium, ex in elementum varius, nisi turpis posuere lectus, nec posuere ligula mi ac ligula. Donec vehicula dolor ut ex elementum, quis scelerisque tellus molestie. Mauris euismod magna ac ornare cursus. Vivamus dapibus quam nec tellus aliquam elementum.
Phasellus ultricies quis augue ut fringilla. Suspendisse eu molestie eros. Suspendisse potenti. Curabitur varius sodales maximus. Etiam nec rutrum est. Sed vulputate suscipit elit, eu condimentum mauris pretium eget. Curabitur convallis commodo dui. Aenean lectus orci, pretium non mi sit amet, commodo imperdiet dui. In hac habitasse platea dictumst. In et ex nisl. Duis justo tortor, finibus at augue vitae, fermentum hendrerit tellus. Donec malesuada justo a molestie posuere. Morbi nisl leo, feugiat ut faucibus ut, mattis id purus.
Vestibulum hendrerit lorem ligula, et ullamcorper nisl lacinia sed. Integer vitae lacinia nunc, sed interdum enim. Aliquam aliquet ipsum vitae eros ornare accumsan. Phasellus venenatis laoreet est, sed feugiat neque lobortis id. Proin pulvinar placerat leo lacinia vehicula. Duis accumsan semper lobortis. Donec elementum nunc non quam aliquam, rutrum fringilla justo interdum. Morbi pulvinar pellentesque massa vitae maximus. Cras condimentum aliquam massa, et pellentesque lorem dictum a. Vivamus at dignissim justo. Donec ligula dui, tempus vestibulum est vel, rutrum blandit arcu. Vivamus iaculis molestie neque in elementum. Sed convallis tempus quam non elementum. Nulla euismod lobortis ligula. Etiam ac mauris eget magna posuere ornare id vitae felis. Nunc efficitur lorem et euismod porttitor.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16766
Differential Revision:
D13959962
Pulled By: pjh5
fbshipit-source-id:
9b71bdf981d4fda9d8951e2d183db81f349b7f81
Richard J. Knight [Tue, 5 Feb 2019 18:24:48 +0000 (10:24 -0800)]
Fix type-o in unsupported data type error message (#16537)
Summary:
In the case where an operator does not support a given data type,
an error message is emitted to alert the user; this message is
incorrectly structured. This commit adds to and rearranges the
error message to make it a little clearer.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16537
Differential Revision:
D13958859
Pulled By: zou3519
fbshipit-source-id:
935fc3adcef2f969042b1db902c9ec004488ea9c
Adam Paszke [Tue, 5 Feb 2019 17:31:21 +0000 (09:31 -0800)]
Make tuple checks faster (#16657)
Summary:
As the comment indicates, the issue is only present in some versions of
Python 2, so we should be able to use the heavily optimized PyTuple_Check in
most cases, skipping string allocations and unnecessary lookups on the
object's type.
cc ezyang zasdfgbnm
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16657
Differential Revision:
D13957854
Pulled By: ezyang
fbshipit-source-id:
be32eb473ad77a0805e8247d8d583d673d4bdf25
Syed Tousif Ahmed [Tue, 5 Feb 2019 17:25:18 +0000 (09:25 -0800)]
Fixes selection of cuDNN algorithm (#15881)
Summary:
This PR updates the logic for using cudnnGet* and cudnnFind*. The current version of cudnn find and get (v7) returns a pair of the best algorithm and the convDesc mathType. While we were using the returned algorithm, we didn't update the mathType. As a result, we ended up with a slow choice of algorithm and math type. Without this patch, we are seeing a 10x regression in group convolutions.
Changelist:
- Changed the template arguments to be `perf_t` instead of `algo_t` to unify cudnnFind and cudnnGet. Both cudnnFind and cudnnGet have the same purpose and hence, it made sense to unify them and get rid of `getAlgorithm`.
- Used cudnnGet*_v7 everywhere cudnnGet* was being used.
- Removed all cudnn6 paths (This PR depends on https://github.com/pytorch/pytorch/pull/15851)
Differential Revision:
D13957944
Pulled By: ezyang
fbshipit-source-id:
a88c39d80ae37f2d686665622302b62b50fab404
Adam Paszke [Tue, 5 Feb 2019 16:52:55 +0000 (08:52 -0800)]
Don't throw in operator== for TypeMeta and ScalarType (#16736)
Differential Revision:
D13957847
Pulled By: ezyang
fbshipit-source-id:
3cc01538aab1bbb396c29ce61e0e95118f8d011f
Brennan Vincent [Tue, 5 Feb 2019 16:27:04 +0000 (08:27 -0800)]
logsumexp for multiple dimensions (#16475)
Summary:
Move `logsumexp` and `max_values` to `TensorIterator` and use it to make `logsumexp` work for multiple dimensions.
Timings on a tensor of shape `(10,1000000,10)`, for each combination of (cpu, single-threaded cpu, gpu) and dimension:
**before**
208 ms ± 2.72 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
279 ms ± 5.07 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
199 ms ± 2.64 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
1.11 s ± 33.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
1.25 s ± 25.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
1.11 s ± 6.83 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
15.4 ms ± 1.02 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
132 ms ± 30.1 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
39.6 ms ± 19.1 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
**after**
199 ms ± 8.23 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
307 ms ± 8.73 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
207 ms ± 7.62 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
1.16 s ± 8.92 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
1.26 s ± 47.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
1.13 s ± 13.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
15.4 ms ± 868 ns per loop (mean ± std. dev. of 7 runs, 100 loops each)
132 ms ± 27.6 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
39.6 ms ± 21.8 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
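For illustration, a hedged usage sketch of the multi-dimension reduction this enables (a much smaller tensor than the benchmark above):
```py
import torch

x = torch.randn(10, 1000, 10)
out = torch.logsumexp(x, dim=(0, 2))       # reduce dims 0 and 2 together
assert out.shape == (1000,)
# Equivalent, but less numerically stable:
ref = x.exp().sum(dim=(0, 2)).log()
assert torch.allclose(out, ref, atol=1e-5)
```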
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16475
Differential Revision:
D13855746
Pulled By: umanwizard
fbshipit-source-id:
aaacc0b967c3f89073487e1952ae6f76b7bd7ad3
Edward Yang [Tue, 5 Feb 2019 15:39:32 +0000 (07:39 -0800)]
Revert
D13952085: [pytorch][PR] Fix static linkage cases and NO_DISTRIBUTED=1 + CUDA
Differential Revision:
D13952085
Original commit changeset:
410c4e117a44
fbshipit-source-id:
fca59c37e71f8e61ae52867d5401b28fbacefe5a
James Reed [Tue, 5 Feb 2019 09:51:52 +0000 (01:51 -0800)]
Integrate PyTorch quantization APIs into ensemble export modules (#309)
Summary:
Pull Request resolved: https://github.com/pytorch/translate/pull/309
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16481
This gives us a boolean flag `quantize` on the `BeamSearch` module that allows us to apply FBGEMM quantization to a pretrained PyTorch model and export this to PyTorch native runtime.
Reviewed By: jmp84
Differential Revision:
D13514776
fbshipit-source-id:
3f7cbff0782aae54c9623ad1ea7e66d7f49e2b32
James Reed [Tue, 5 Feb 2019 09:51:52 +0000 (01:51 -0800)]
Fork/join parallelism for ensemble export modules (#310)
Summary:
Pull Request resolved: https://github.com/pytorch/translate/pull/310
This adds fork/join parallelism to the EncoderEnsemble and DecoderBatchedStepEnsemble models. Note that when run in Python, these calls are no-ops, and similarly we remove these calls before exporting to ONNX. But when we run in the PyTorch native runtime, we will now have the opportunity to run these sections in parallel.
Benchmark validation is pending me slogging through FBLearner Flow issues, as usual
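A minimal sketch of the fork/wait pattern (the `member` function and shapes are hypothetical, not the ensemble models themselves):
```py
import torch

def member(x: torch.Tensor) -> torch.Tensor:
    return torch.relu(x) + 1.0

x = torch.randn(8, 16)
# fork returns a Future and wait blocks on it; in the native runtime the
# forked calls may execute in parallel, while in eager Python they are
# effectively sequential.
futures = [torch.jit.fork(member, x) for _ in range(4)]
results = [torch.jit.wait(f) for f in futures]
```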
Reviewed By: jmp84
Differential Revision:
D13827861
fbshipit-source-id:
0cb9df6e10c0ba64a6b81fa374e077bce90f1d5b
James Reed [Tue, 5 Feb 2019 08:03:53 +0000 (00:03 -0800)]
Add an API to set the number of threads in C10 thread pool (#16669)
Summary:
Tested locally on machine translation service
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16669
Differential Revision:
D13927858
Pulled By: jamesr66a
fbshipit-source-id:
efcb8c21e0c2f76ac37967e6f52967da515595c3
Dmytro Dzhulgakov [Tue, 5 Feb 2019 07:56:00 +0000 (23:56 -0800)]
Try to turn off zero-out of tensors fully
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16601
Reviewed By: ezyang
Differential Revision:
D13893776
fbshipit-source-id:
3190258f2591540dc54ad8504ac6ded998bef384
Jerry Zhang [Tue, 5 Feb 2019 07:51:49 +0000 (23:51 -0800)]
Tensor method rename size()->numel() - 2/3 (#16745)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16745
Codemod generated with clangr shard mode, 25 files per diff,
Reviewed By: dzhulgakov
Differential Revision:
D13944353
fbshipit-source-id:
25c2ca22204706544ee67e59c663bf495f2b4f6b
Jerry Zhang [Tue, 5 Feb 2019 07:51:34 +0000 (23:51 -0800)]
Tensor method rename size()->numel() - 3/3 (#16747)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16747
Codemod generated with clangr shard mode, 25 files per diff,
Reviewed By: dzhulgakov
Differential Revision:
D13944380
fbshipit-source-id:
2167e2092ab27d31a4d5ef6cfa4b65d192f597a8
Jerry Zhang [Tue, 5 Feb 2019 07:26:07 +0000 (23:26 -0800)]
Tensor method rename size()->numel() - 1/3
Summary: Codemod generated with clangr shard mode, 25 files per diff,
Reviewed By: dzhulgakov
Differential Revision:
D13944296
fbshipit-source-id:
67e97c2cf45889d25f2cb3e2203cecba03c8a3aa
Summer Deng [Tue, 5 Feb 2019 06:28:49 +0000 (22:28 -0800)]
Bug fix in l2 quantization (#16749)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16749
Use global quantization options in l2 quantization
Reviewed By: jspark1105
Differential Revision:
D13951378
fbshipit-source-id:
d4e356149587e5d2d09a6937c7fa1aa131957fd6
Michael Suo [Tue, 5 Feb 2019 06:01:12 +0000 (22:01 -0800)]
points-to graph simplification (#16605)
Summary:
This PR reworks the mutability API to be simpler (updates passes to use "mayAlias" calls) and improves the caching logic.
The difference is that we now directly express the idea of a "memory location." Leaves in the alias tracker's points-to graph are considered unique memory locations, and mayAlias questions can be boiled down to whether two values share a leaf.
To speed up queries, some basic path compression has been added.
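A toy sketch of the idea (illustrative Python only; single-target chains with path compression, not the JIT's actual data structure):
```py
class PointsTo:
    def __init__(self):
        self.edges = {}              # value -> the value it points to

    def leaf(self, v):
        root = v
        while root in self.edges:    # walk to the unique memory location
            root = self.edges[root]
        while v in self.edges:       # path compression speeds up later queries
            self.edges[v], v = root, self.edges[v]
        return root

    def may_alias(self, a, b):
        return self.leaf(a) == self.leaf(b)
```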
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16605
Differential Revision:
D13952738
Pulled By: suo
fbshipit-source-id:
cfc7fb2b23369f1dc425d1d8ca2c753c193d95dd
Edward Yang [Tue, 5 Feb 2019 03:23:57 +0000 (19:23 -0800)]
Revert "Move outplace ops to ATen (#12413)" (#16731)
Summary:
This reverts commit
f660d3ae19decc64390e894fbaf8de80d87585e0.
cc zasdfgbnm
Reasoning at https://github.com/pytorch/pytorch/pull/12413#issuecomment-460424129
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16731
Differential Revision:
D13948022
Pulled By: ezyang
fbshipit-source-id:
b10669cf03679e306850314b7b5b08bed0839e19
Lu Fang [Tue, 5 Feb 2019 01:34:33 +0000 (17:34 -0800)]
update of fbcode/onnx to
875f7bbe537b9d6931d065977c192eaaf61e1179 (#16734)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16734
Previous import was
15c33c945851907411619f599900c3852108e7e3
Included changes:
- **[875f7bb](https://github.com/onnx/onnx/commit/875f7bb)**: Bump docker image version from 230 to 238 (#1786) <bddppq>
- **[f94e430](https://github.com/onnx/onnx/commit/f94e430)**: Fix: setup.py is using wrong cmake build type (#1784) <Changming Sun>
- **[2896c77](https://github.com/onnx/onnx/commit/2896c77)**: Fix Cast testcase data (#1776) <Raymond Yang>
Reviewed By: bddppq
Differential Revision:
D13948288
fbshipit-source-id:
5f733005d4bf483d58b630d511cadb0fa4ac7910
Soumith Chintala [Tue, 5 Feb 2019 00:47:55 +0000 (16:47 -0800)]
Fix static linkage cases and NO_DISTRIBUTED=1 + CUDA (#16705)
Differential Revision:
D13952085
Pulled By: soumith
fbshipit-source-id:
410c4e117a44c08eadc6f3ded91fafc320a7c696
Jerry Zhang [Mon, 4 Feb 2019 23:46:00 +0000 (15:46 -0800)]
Tensor method rename ndim()->dim() - 1/3 (#16678)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16678
Codemod generated with clangr shard mode, 25 files per diff,
Reviewed By: houseroad
Differential Revision:
D13929413
fbshipit-source-id:
677ce760bdbf9f5560630fdc40dd60af227fb696
Mas-ud Hussain [Mon, 4 Feb 2019 23:30:01 +0000 (15:30 -0800)]
Merge job-spec env variables of Pytorch/Caffe2 CI jobs (#16649)
Summary:
The idea is to unify the environment variables `JOB_BASE_NAME` and `BUILD_ENVIRONMENT`, which controlled the Pytorch and Caffe2 jobs respectively. In this commit, we have converted all the `JOB_BASE_NAME` references in _.jenkins/pytorch/*_ files to `BUILD_ENVIRONMENT`. Then we did the same thing in _.circleci/config.yml_. One thing that we needed to be careful about was when both `BUILD_ENVIRONMENT` and `JOB_BASE_NAME` were present under the same declaration in the _config.yml_ file (e.g., for "caffe2-" stuffs). To ensure that all "==" checks work as expected, we also had to add "*" in some if conditions in the _.jenkins/caffe2/build.sh_ file. Finally, removed the "-build", "-test", etc. suffixes from the `COMPACT_JOB_NAME` variable assignment in the bash script files in the _.jenkins/pytorch_ folder, e.g., modify `COMPACT_JOB_NAME="${BUILD_ENVIRONMENT}-build"` to `COMPACT_JOB_NAME="${BUILD_ENVIRONMENT}"`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16649
Differential Revision:
D13946392
Pulled By: mmh683
fbshipit-source-id:
790de6abf96de184758e395c9098a50998e05bc5
Jesse Hellemn [Mon, 4 Feb 2019 22:18:20 +0000 (14:18 -0800)]
Log top commit of pytorch + builder in binaries
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16729
Differential Revision:
D13947737
Pulled By: pjh5
fbshipit-source-id:
9ba8ea56baff7147f73458ab26d0553fff31a46f
Junjie Bai [Mon, 4 Feb 2019 22:07:27 +0000 (14:07 -0800)]
Run resnext101 training in rocm benchmark (#16017)
Summary:
cc xw285cornell
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16017
Differential Revision:
D13946680
Pulled By: bddppq
fbshipit-source-id:
ea125b0389188a59db3d537671a3214a557aecdb
Joshua Meier [Mon, 4 Feb 2019 20:14:36 +0000 (12:14 -0800)]
Replace resize_dim() with set_sizes_and_strides() in THTensor_(unsqueeze1d) in aten/src/TH/generic/THTensor.cpp (#16673)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16673
Replace resize_dim() with set_sizes_and_strides() in THTensor_(unsqueeze1d) in aten/src/TH/generic/THTensor.cpp, as described in T38058642.
Reviewed By: ezyang
Differential Revision:
D13928879
fbshipit-source-id:
d593cebcc82589cd362ac78884d4e367d0da0ce6
Jerry Zhang [Mon, 4 Feb 2019 19:09:19 +0000 (11:09 -0800)]
Tensor method rename ndim()->dim() - 2/3 (#16679)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16679
Codemod generated with clangr shard mode, 25 files per diff,
Reviewed By: houseroad
Differential Revision:
D13929450
fbshipit-source-id:
fcc222744c28b41f2cedffc0c2ef5d04aceaa5af
JerryShih [Mon, 4 Feb 2019 16:50:35 +0000 (08:50 -0800)]
Update the cmake build configuration for AppleClang compiler (#15820)
Summary:
This PR tries to merge https://github.com/pytorch/pytorch/pull/11563 again and fixes the linking error from https://github.com/pytorch/pytorch/pull/14837.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15820
Differential Revision:
D13942024
Pulled By: ezyang
fbshipit-source-id:
dc6d1e9c4b0f177914f3745665244272a03ce33c
Dmytro Dzhulgakov [Mon, 4 Feb 2019 06:11:43 +0000 (22:11 -0800)]
Fix build with cuda but no cudnn in caffe2 (#16701)
Summary:
Just noticed while building on a machine without cudnn present: it was building, but the runtime failed since some methods weren't bound.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16701
Differential Revision:
D13937247
Pulled By: dzhulgakov
fbshipit-source-id:
c81f05be7a9e64a1a8591036dcf8692c0ed4064e
Dmytro Dzhulgakov [Mon, 4 Feb 2019 05:28:48 +0000 (21:28 -0800)]
Fix ReservoirSampling zero-initialization reliance (#16702)
Summary:
The op was implicitly relying on pos_to_output being zero-initialized after extending. We're removing this functionality from the allocator, so we fix it here. For some reason it wasn't spotted by junk-initialization, but it was reliably reproducible with standard malloc() if both junk_fill and zero_fill flags are turned off.
cc kittipatv jerryzh168
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16702
Reviewed By: kittipatv
Differential Revision:
D13937257
Pulled By: dzhulgakov
fbshipit-source-id:
3ee520b05467108e6c3e64eb3e6c60589bdf3d87
Pieter Noordhuis [Sun, 3 Feb 2019 21:36:18 +0000 (13:36 -0800)]
Remove --without-parallel (#16704)
Summary:
See homebrew/homebrew-core@60c72ba9 and homebrew/homebrew-core#31510.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16704
Differential Revision:
D13938093
Pulled By: pietern
fbshipit-source-id:
8a70d462180257f96202a0373a86a273b524045c
Pieter Noordhuis [Sun, 3 Feb 2019 19:49:25 +0000 (11:49 -0800)]
Bump gloo (#16638)
Summary:
This bump includes:
* Memory leak fix where the Gloo transport would hold on to auxiliary
structures for send/recv pairs after they finished.
* Fix write-after-free from Gloo thread during stack unwinding on error.
* Removal of the PATENTS file.
Fixes #16144.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16638
Differential Revision:
D13937950
Pulled By: pietern
fbshipit-source-id:
3cfecaf13ee0f214c06681386557a4b1c3e1d6b9
vishwakftw [Sun, 3 Feb 2019 02:52:55 +0000 (18:52 -0800)]
Fix issue with scalars and __rpow__ (#16687)
Summary:
Changelog:
- Modify the __rpow__ function in tensor.py to handle scalars
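A quick hedged illustration of the fixed path (scalar base, tensor exponent):
```py
import torch

t = torch.tensor([1.0, 2.0, 3.0])
# A Python scalar on the left dispatches to Tensor.__rpow__:
print(2 ** t)                    # tensor([2., 4., 8.])
# which should match the functional form:
print(torch.pow(2.0, t))
```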
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16687
Differential Revision:
D13936720
Pulled By: soumith
fbshipit-source-id:
b0c8727968b04efbc6e7461807c812d962f03370
Sebastian Messmer [Sun, 3 Feb 2019 00:23:55 +0000 (16:23 -0800)]
Improve LeftRight (#16524)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16524
- Make it exception safe. When an exception happens during write, the old state is recovered.
- Use RAII instead of try/catch to increment counters in readers. This is more readable, and it also makes it work with reader closures that return void, which previously didn't work because the reader return value was stored on the stack.
- Assert there are no reads or writes happening when it's destructed, to avoid destruction race conditions
- Explain the algorithm in detail in comments
- Add test cases
Reviewed By: ezyang
Differential Revision:
D13866609
fbshipit-source-id:
01306a282a3f555569caa13d8041486f960d00e2
svcscm [Sat, 2 Feb 2019 18:53:43 +0000 (10:53 -0800)]
Updating submodules
Reviewed By: zpao
fbshipit-source-id:
e66e01e164d1784740fcb8bebc4817d2a8cd7903
svcscm [Sat, 2 Feb 2019 13:01:51 +0000 (05:01 -0800)]
Updating submodules
Reviewed By: zpao
fbshipit-source-id:
31a8d843ffba2d7405b4742ea553937a00dff216
James Reed [Sat, 2 Feb 2019 08:52:38 +0000 (00:52 -0800)]
fix conditional in mean workaround (#16686)
Summary:
When trying to get a test to pass, I was missing an exclamation mark. Now I just use a different function in the conditional instead.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16686
Differential Revision:
D13935182
Pulled By: jamesr66a
fbshipit-source-id:
7525a1a829276641dbafe06734f03f6202df6b22
Xiaomeng Yang [Sat, 2 Feb 2019 07:45:38 +0000 (23:45 -0800)]
Use macro for reduce on 2d blocks (#16344)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16344
Use macro for reduce on 2d blocks
i-am-not-moving-c2-to-c10
Reviewed By: houseroad
Differential Revision:
D13808988
fbshipit-source-id:
b68c0fb6079c1b6e203a072083aba7a95c202bc2
Sebastian Messmer [Sat, 2 Feb 2019 05:31:13 +0000 (21:31 -0800)]
Simplify layer_norm_op_test
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16570
Reviewed By: ezyang
Differential Revision:
D13883913
fbshipit-source-id:
7437d3cbc00c0de92bb01562c620cb658aa9f0d3
Hao Lu [Sat, 2 Feb 2019 04:55:50 +0000 (20:55 -0800)]
Make predictor base class
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16541
Reviewed By: ajtulloch
Differential Revision:
D13858261
fbshipit-source-id:
acbfdbea59bd20ab1cc7956ee0d8856d6faa8361
Yinghai Lu [Sat, 2 Feb 2019 02:45:44 +0000 (18:45 -0800)]
Tag model_id and onnxifi index in OnnxifiOp (#16648)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16648
We added onnxGraph sharing keyed on model id and net seq number, but we forgot to supply this info to the Onnxifi. Therefore, we would only ever create ONE onnxGraph... This diff adds the necessary info to the OnnxifiOp to prevent this from happening.
Reviewed By: bertmaher, rdzhabarov
Differential Revision:
D13912356
fbshipit-source-id:
fe8982327287a35f32fe3b125d94b617d18c0ab5
svcscm [Sat, 2 Feb 2019 02:41:00 +0000 (18:41 -0800)]
Updating submodules
Reviewed By: zpao
fbshipit-source-id:
ed389204bc423d2d5f7a36e2d61c0f55fe0522e1