platform/upstream/pytorch.git
5 years ago  Enable fp16 for MIOPEN operators in Caffe2 (#14905)
rohithkrn [Sat, 8 Dec 2018 01:23:49 +0000 (17:23 -0800)]
Enable fp16 for MIOPEN operators in Caffe2 (#14905)

Summary:
This PR enables fp16 MIOPEN operators in Caffe2.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14905

Differential Revision: D13383439

Pulled By: bddppq

fbshipit-source-id: 840afa8d08bef2952ca0039dee2423f1542bb330

5 years ago  Upgrade MKL-DNN to version 0.17 (#14308)
Gu, Jinghui [Sat, 8 Dec 2018 00:42:39 +0000 (16:42 -0800)]
Upgrade MKL-DNN to version 0.17 (#14308)

Summary:
upgrade MKL-DNN to version 0.17
update mkldnn bridge to latest.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14308

Differential Revision: D13383102

Pulled By: yinghai

fbshipit-source-id: c434f0e0ddff2ee2c86db2d6c44a37298fd005a3

5 years ago  Fix build with OpenCV 4.0 (#14356)
Daniel Bermond [Sat, 8 Dec 2018 00:37:30 +0000 (16:37 -0800)]
Fix build with OpenCV 4.0 (#14356)

Summary:
Fixes #14355
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14356

Differential Revision: D13356237

Pulled By: bddppq

fbshipit-source-id: 2bf6ee21995c2c7b617c4e78ea7341f975f1b937

5 years ago  Remove unused TensorImpl dependencies
Sebastian Messmer [Sat, 8 Dec 2018 00:18:20 +0000 (16:18 -0800)]
Remove unused TensorImpl dependencies

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14792

Reviewed By: ezyang

Differential Revision: D13336843

fbshipit-source-id: 12f84799a70c2e90a8b934dd8dc031c09a6782f0

5 years ago  Remove TensorImpl -> context_base dependency (#14658)
Sebastian Messmer [Sat, 8 Dec 2018 00:18:20 +0000 (16:18 -0800)]
Remove TensorImpl -> context_base dependency (#14658)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14658

Remove this dependency by moving at::CopyBytes to c10.
The implementations for at::CopyBytes will have to live in aten/caffe2 for now because they're not unified for CUDA yet.
They'll be moved into c10/backend/xxx later.

Reviewed By: dzhulgakov

Differential Revision: D13288655

fbshipit-source-id: 1c92379345308b3cd39a402779d7b7999613fc0d

5 years ago  Fix include paths for TensorOptions
Sebastian Messmer [Sat, 8 Dec 2018 00:18:19 +0000 (16:18 -0800)]
Fix include paths for TensorOptions

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14747

Reviewed By: ezyang

Differential Revision: D13318645

fbshipit-source-id: f5ba77a93f6019fbf5faffb47a2837c95fad474d

5 years ago  Update graph printouts in JIT docs (#14914)
James Reed [Fri, 7 Dec 2018 23:06:48 +0000 (15:06 -0800)]
Update graph printouts in JIT docs (#14914)

Summary:
Tracing records variable names and there are new types and other additions in the IR, so this updates the graph printouts in the docs.
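For example (illustrative; the exact textual IR format varies across versions), a traced function's graph can be printed directly, and now shows the recorded variable names:

```
import torch

def f(x, y):
    return x + y

# tracing records the Python variable names, which now appear in the IR dump
traced = torch.jit.trace(f, (torch.randn(2), torch.randn(2)))
print(traced.graph)
```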
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14914

Differential Revision: D13385101

Pulled By: jamesr66a

fbshipit-source-id: 6477e4861f1ac916329853763c83ea157be77f23

5 years ago  Improve hub documentation (#14862)
Ailing Zhang [Fri, 7 Dec 2018 22:56:56 +0000 (14:56 -0800)]
Improve hub documentation (#14862)

Summary:
Added a few examples and explanations of how to publish/load models.
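A hedged sketch of the documented flow (the repo and entrypoint names below are illustrative, not taken from this commit):

```
import torch

# publishers add a hubconf.py with entrypoint functions to their repo;
# consumers then load a model by repo name and entrypoint name
model = torch.hub.load('pytorch/vision', 'resnet18', pretrained=True)
```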
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14862

Differential Revision: D13384790

Pulled By: ailzhang

fbshipit-source-id: 008166e84e59dcb62c0be38a87982579524fb20e

5 years ago  USE_FBGEMM=True by default
James Reed [Fri, 7 Dec 2018 22:14:25 +0000 (14:14 -0800)]
USE_FBGEMM=True by default

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14868

Differential Revision: D13383390

Pulled By: jamesr66a

fbshipit-source-id: 1880c07dfd239e19153bd4fde2ab2c8d0604f956

5 years ago  USE_TENSORRT support and TensorRT 5 compatibility
Sergei Nikolaev [Fri, 7 Dec 2018 21:52:56 +0000 (13:52 -0800)]
USE_TENSORRT support and TensorRT 5 compatibility

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/13945

Differential Revision: D13317525

Pulled By: yinghai

fbshipit-source-id: 8630dfec1bbc5aac19539e344e7c38a7fd8b051d

5 years ago  Add __init__.py so files get picked up on install (#14898)
Orion Reblitz-Richardson [Fri, 7 Dec 2018 21:35:58 +0000 (13:35 -0800)]
Add __init__.py so files get picked up on install (#14898)

Summary:
This will let us install tests and other Caffe2 python code as a part of running Caffe2 tests in PyTorch.

Broken out of https://github.com/pytorch/pytorch/pull/13733/

cc pjh5 yf225
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14898

Reviewed By: pjh5

Differential Revision: D13381123

Pulled By: orionr

fbshipit-source-id: 0ec96629b0570f6cc2abb1d1d6fce084e7464dbe

5 years ago  Replace calls of Type::_th_tensor. (#14877)
Gregory Chanan [Fri, 7 Dec 2018 20:37:03 +0000 (12:37 -0800)]
Replace calls of Type::_th_tensor. (#14877)

Summary:
_th_tensor is moving off Type, so these calls need to be replaced.

Unfortunately, replacing these with a full-fledged solution [e.g. from_storage(..., TensorOptions)] is a bit complicated because the storage itself fully defines the Type (modulo variable).  It's simpler to just wait for the Variable/Tensor merge rather than to solve this now, so instead I changed the call sites to: at::empty({0}, type.options()).set_(storage...).

This isn't great because we are also trying to get rid of Type::options, but this seems to be the lesser of two evils.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14877

Differential Revision: D13374310

Pulled By: gchanan

fbshipit-source-id: eb953ed041507e6190d6f32e383912e5a08311cd

5 years ago  Large scale fix of python-related files in torch/csrc/
Peter Goldsborough [Fri, 7 Dec 2018 20:22:49 +0000 (12:22 -0800)]
Large scale fix of python-related files in torch/csrc/

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14515

Differential Revision: D13247966

Pulled By: goldsborough

fbshipit-source-id: 7a127c508fc576a7a92626dd6b729f660162d628

5 years ago  Implementation of WeightedSum op for mkl-dnn and fix FC op output shape issue.
PenghuiCheng [Fri, 7 Dec 2018 20:01:44 +0000 (12:01 -0800)]
Implementation of WeightedSum op for mkl-dnn and fix FC op output shape issue.

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14407

Reviewed By: yinghai

Differential Revision: D13364364

Pulled By: wesolwsk

fbshipit-source-id: e69bcd1bc52e35b2f0e45e5dc40184f1bd66605d

5 years ago  Revert D13205604: Move numa.{h, cc} to c10/util
Yudong Guang [Fri, 7 Dec 2018 17:58:54 +0000 (09:58 -0800)]
Revert D13205604: Move numa.{h, cc} to c10/util

Differential Revision: D13205604

Original commit changeset: 54166492d318

fbshipit-source-id: 89b6833518c0b554668c88ae38d97fbc47e2de17

5 years ago  Expose torch.roll function and method (#14880)
vishwakftw [Fri, 7 Dec 2018 15:25:55 +0000 (07:25 -0800)]
Expose torch.roll function and method (#14880)

Summary: Fixes #14859.
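A quick sketch of the exposed function and method (shifts on the flattened tensor when no dimension is given):

```
import torch

x = torch.tensor([1, 2, 3, 4])
torch.roll(x, 1)   # tensor([4, 1, 2, 3])
x.roll(-1)         # tensor([2, 3, 4, 1]), method form
```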

Differential Revision: D13376915

Pulled By: zou3519

fbshipit-source-id: f1fc0e8492a159431a3fc0a19a41aa10429ecc80

5 years ago  Make autograd engine compatible with hip
Junjie Bai [Fri, 7 Dec 2018 08:07:05 +0000 (00:07 -0800)]
Make autograd engine compatible with hip

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14873

Differential Revision: D13375053

Pulled By: bddppq

fbshipit-source-id: f3051640386667bbf0566856ed433eb83276c39e

5 years ago  Fixed ConvT docstring (#14876)
Jon Crall [Fri, 7 Dec 2018 07:55:34 +0000 (23:55 -0800)]
Fixed ConvT docstring (#14876)

Summary:
Fixes #14099

I attempted to be as consistent as possible with the formatting, which is why my equation reads d*(k - 1) instead of (k - 1)*d.

Also there is an unused variable on line 46: `n = self.in_channels`. I could fix that here too if that's not too out of scope.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14876

Differential Revision: D13374317

Pulled By: soumith

fbshipit-source-id: a9f110acafa58cdb4206956dbe3ab4738d48292d

5 years ago  Updating submodules
svcscm [Fri, 7 Dec 2018 06:50:09 +0000 (22:50 -0800)]
Updating submodules

Reviewed By: yns88

fbshipit-source-id: 7da015701f18f8a0b5a8092aae02a42ede7bfd44

5 years ago  Remove weak module test expect files (#14871)
David Riazati [Fri, 7 Dec 2018 05:50:35 +0000 (21:50 -0800)]
Remove weak module test expect files (#14871)

Summary:
This PR removes some expect files that aren't really testing anything
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14871

Differential Revision: D13373762

Pulled By: driazati

fbshipit-source-id: e3537ee83df23b3b3b854f9b1253fd0cc8e9dd33

5 years ago  gradcheck (#14596)
Wei Yang [Fri, 7 Dec 2018 01:58:16 +0000 (17:58 -0800)]
gradcheck (#14596)

Summary:
- allow gradcheck to take a sparse tensor as input
- sparse output is not yet allowed in gradcheck
- add backward for `to_dense()` to get around sparse output
- call gradcheck from test_sparse, so that we can use `_gen_sparse()` and also easily cover coalesced / uncoalesced test cases (see the sketch below)
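A minimal sketch of the new usage, assuming double-precision values for gradcheck's numerical checks and densifying via `to_dense()` since sparse output is still disallowed:

```
import torch
from torch.autograd import gradcheck

i = torch.tensor([[0, 2]])
v = torch.tensor([2.0, 3.0], dtype=torch.float64)
x = torch.sparse_coo_tensor(i, v, (4,), requires_grad=True)

# sparse input is now accepted; the function under test returns a dense
# tensor via to_dense(), whose backward this PR adds
gradcheck(lambda t: t.to_dense(), (x,))
```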
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14596

Differential Revision: D13271904

Pulled By: weiyangfb

fbshipit-source-id: 5317484104404fd38058884c86e987546011dd86

5 years ago  Skipping two c10d tests only if there are multi-GPUs (#14860)
Teng Li [Fri, 7 Dec 2018 01:22:04 +0000 (17:22 -0800)]
Skipping two c10d tests only if there are multi-GPUs (#14860)

Summary:
Otherwise, these tests will fail, even though they are never meant to run on single-GPU machines.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14860

Differential Revision: D13369060

Pulled By: teng-li

fbshipit-source-id: 8a637a6d57335491ba8602cd09927700b2bbf8a0

5 years ago  Move TensorOptions, DefaultTensorOptions to c10
Sebastian Messmer [Thu, 6 Dec 2018 23:52:15 +0000 (15:52 -0800)]
Move TensorOptions, DefaultTensorOptions to c10

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14746

Reviewed By: ezyang

Differential Revision: D13318644

fbshipit-source-id: b703d7dc67e75d9e9571c80d62a100c5fc4e84df

5 years ago  Switch Int8MaxPool operator to QNNPACK (#14832)
Marat Dukhan [Thu, 6 Dec 2018 23:12:35 +0000 (15:12 -0800)]
Switch Int8MaxPool operator to QNNPACK (#14832)

Summary:
1.6-2.4X speedup on ARM when compiled with gcc
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14832

Differential Revision: D13358160

Pulled By: Maratyszcza

fbshipit-source-id: 39e9791886fac62650bb53a9df341889f0bb5d49

5 years ago  collect_env.py: get conda magma and mkl information (#14854)
Richard Zou [Thu, 6 Dec 2018 22:55:55 +0000 (14:55 -0800)]
collect_env.py: get conda magma and mkl information (#14854)

Summary:
Fixes #12371
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14854

Differential Revision: D13363635

Pulled By: zou3519

fbshipit-source-id: f8b5d05038bf5ce451399dfeed558ae298178128

5 years ago  Add LogSigmoid support in ONNX symbolic (#14830)
zrphercule [Thu, 6 Dec 2018 22:04:44 +0000 (14:04 -0800)]
Add LogSigmoid support in ONNX symbolic (#14830)

Summary:
Add LogSigmoid:

torch.LogSigmoid(x) = onnx.Log(onnx.Sigmoid(x))
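A sketch of what such a symbolic looks like, following the usual torch.onnx convention of composing `g.op` calls (not necessarily the exact code in this PR):

```
# g is the ONNX graph-building context passed to symbolic functions
def log_sigmoid(g, input):
    return g.op("Log", g.op("Sigmoid", input))
```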
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14830

Differential Revision: D13353891

Pulled By: zrphercule

fbshipit-source-id: bf456170b9e6c4edad07b3333cd5797f8e0fa97f

5 years ago  Kill GPU memory logs in normal runs (#14838)
Ashwin Bharambe [Thu, 6 Dec 2018 21:44:33 +0000 (13:44 -0800)]
Kill GPU memory logs in normal runs (#14838)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14838

The GPU memory tracking logs are incredibly annoying and merely serve
to pollute output. I `VLOG(1)`ed them. Hopefully, this is non-controversial.

Reviewed By: kuttas

Differential Revision: D13343290

fbshipit-source-id: b3cae99346c97b66e97ea660061e15dc5c99b9fc

5 years ago  Stop inserting static casts in Hipify (#14853)
Junjie Bai [Thu, 6 Dec 2018 21:17:28 +0000 (13:17 -0800)]
Stop inserting static casts in Hipify (#14853)

Summary:
The latest hcc can now properly cast to the correct type internally, so there is no need to insert static_cast in the hipify scripts anymore.
However, the hcc included in the latest ROCm release (1.9.2) doesn't have this fix, so we leave a flag to continue doing static_cast for those using the official ROCm releases.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14853

Differential Revision: D13363171

Pulled By: bddppq

fbshipit-source-id: a36476a8511222ff3c933d31788e8a0ffb04f5ca

5 years ago  Tensor construction codemod - 3/3 (#14835)
Jerry Zhang [Thu, 6 Dec 2018 19:16:07 +0000 (11:16 -0800)]
Tensor construction codemod - 3/3 (#14835)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14835

Codemod generated with clangr shard mode, 25 files per diff,
motivation: https://github.com/pytorch/pytorch/pull/12407

Reviewed By: bddppq

Differential Revision: D13335184

fbshipit-source-id: 26d8247e16b30bdff045530034af9b72c76d066f

5 years ago  Tensor construction codemod - 1/3 (#14828)
Jerry Zhang [Thu, 6 Dec 2018 19:14:48 +0000 (11:14 -0800)]
Tensor construction codemod - 1/3 (#14828)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14828

Codemod generated with clangr shard mode, 25 files per diff,
motivation: https://github.com/pytorch/pytorch/pull/12407

Reviewed By: bddppq

Differential Revision: D13335160

fbshipit-source-id: a3ae4c5a86bfbdaf2d5aa14e0eef57255e829fd4

5 years ago  Move numa.{h, cc} to c10/util (#14393)
Jerry Zhang [Thu, 6 Dec 2018 18:56:14 +0000 (10:56 -0800)]
Move numa.{h, cc} to c10/util (#14393)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14393

att

Reviewed By: ezyang

Differential Revision: D13205604

fbshipit-source-id: 54166492d31827b0343ed070cc36a825dd86e2ed

5 years ago  Upgrade CI to ROCm 1.9.2 (#14216)
Johannes M Dieterich [Thu, 6 Dec 2018 18:04:37 +0000 (10:04 -0800)]
Upgrade CI to ROCm 1.9.2 (#14216)

Summary:
Drop the custom hcc/hip, as the 1.9.2 release should contain the relevant patches.

Most notable feature in 1.9.2 is mixed precision support in rocBLAS and MIOpen. These features will be enabled by subsequent PRs.

bddppq ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14216

Differential Revision: D13354294

Pulled By: bddppq

fbshipit-source-id: 2541d4a196af21c9432c1aff7f6e65b572628028

5 years ago  Allow linspace and logspace with steps=1 and start != end like numpy (#14748)
Jan Schlüter [Thu, 6 Dec 2018 17:29:01 +0000 (09:29 -0800)]
Allow linspace and logspace with steps=1 and start != end like numpy (#14748)

Summary:
`torch.linspace(0, 1, 1)` fails with `RuntimeError: invalid argument 3: invalid number of points at ../aten/src/TH/generic/THTensorMoreMath.cpp:2119`, while `np.linspace(0, 1, 1)` works fine.
Looking at the code, there is even a comment by gchanan asking: "NumPy allows you to pass different points even if n <= 1 -- should we?"
I would say "yes". Currently, I would need to handle the case of `steps == 1` or `steps == 0` separately, making sure to change the `end` when calling `torch.linspace`. This is impractical. If we support `start != end`, there are two possibilities for the result: Either we ensure the first value in the resulting sequence always equals `start`, or we ensure the last value in the resulting sequence always equals `end`. Numpy chose the former, which also allows it to support a boolean `endpoint` flag. I'd say we should follow numpy.

This PR adapts `linspace` and `logspace` to mimic the behavior of numpy, adapts the tests accordingly, and extends the docstrings to make clear what happens when passing `steps=1`.

If you decide against this PR, the error message should become explicit about what I did wrong, and the documentation should be extended to mention this restriction.
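For reference, the numpy behavior being mimicked, and what this PR makes torch do:

```
import numpy as np
import torch

np.linspace(0, 1, 1)     # array([0.])  -- the single value equals `start`
torch.linspace(0, 1, 1)  # tensor([0.]) after this PR (previously a RuntimeError)
torch.logspace(0, 1, 1)  # tensor([1.]) == 10 ** start
```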
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14748

Differential Revision: D13356136

Pulled By: ezyang

fbshipit-source-id: db85b8f0a98a5e24b3acd766132ab71c91794a82

5 years ago  (#14580)
Jie [Thu, 6 Dec 2018 16:57:39 +0000 (08:57 -0800)]
(#14580)

Summary:
Removes the cast from half to float in torch.sum when the input tensor is float16 and the output tensor is float32; instead, we cast the data when loading the input in the kernel.

This should save a kernel launch as well as a full global-memory load of the promoted data type (float).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14580

Differential Revision: D13356203

Pulled By: ezyang

fbshipit-source-id: 85e91225b880a65fe3ceb493371b9b36407fdf48

5 years ago  Consistent formatting in losses' docs
Ricardo Cuenca [Thu, 6 Dec 2018 16:57:31 +0000 (08:57 -0800)]
Consistent formatting in losses' docs

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14739

Differential Revision: D13356143

Pulled By: ezyang

fbshipit-source-id: 9ae8316dd8ba6e910247b64cec22db63df10e11c

5 years ago  Add (partial) autodiff support for nll_loss (#14305)
Alex Şuhan [Thu, 6 Dec 2018 16:56:25 +0000 (08:56 -0800)]
Add (partial) autodiff support for nll_loss (#14305)

Summary:
Not ready yet; I need some comments / help with this. It's good enough for the immediate goals of https://github.com/pytorch/xla (forward + backward trace fusion), but there are at least two issues with it:

1. If we don't allow it, `test/test_jit.py` fails to cover the change.
2. If we allow the weight to be set, running `test/test_jit.py TestJitGenerated.test_nn_nll_loss` fails with:

```
======================================================================
ERROR: test_nn_nll_loss (__main__.TestJitGenerated)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test/test_jit.py", line 10001, in do_test
    fn, f_args_variable, kwargs_variable, no_grad=no_grad)
  File "test/test_jit.py", line 9360, in check_against_reference
    outputs_test = self.runAndSaveRNG(func, recording_inputs, kwargs)
  File "test/test_jit.py", line 425, in runAndSaveRNG
    results = func(*inputs, **kwargs)
  File "test/test_jit.py", line 9298, in script_fn
    self.assertExportImport(CU.the_method.graph, tensors)
  File "test/test_jit.py", line 415, in assertExportImport
    self.assertExportImportModule(m, inputs)
  File "test/test_jit.py", line 419, in assertExportImportModule
    self.assertEqual(self.runAndSaveRNG(m.forward, inputs),
  File "test/test_jit.py", line 425, in runAndSaveRNG
    results = func(*inputs, **kwargs)
RuntimeError:
arguments for call are not valid:

  for operator aten::nll_loss_backward(Tensor grad_output, Tensor self, Tensor target, Tensor? weight, int reduction, int ignore_index, Tensor total_weight, *, Tensor out) -> Tensor:
  expected a value of type Tensor for argument 'total_weight' but found bool
  <internally-created-node>
  ~ <--- HERE

  for operator aten::nll_loss_backward(Tensor grad_output, Tensor self, Tensor target, Tensor? weight, int reduction, int ignore_index, Tensor total_weight) -> Tensor:
  expected a value of type Tensor for argument 'total_weight' but found bool
  <internally-created-node>
  ~ <--- HERE
for call at:
<internally-created-node>
~ <--- HERE
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14305

Differential Revision: D13356265

Pulled By: ezyang

fbshipit-source-id: 504d783b2d87f923e698a6a4efc0fd9935a94a41

5 years ago  Updating submodules
svcscm [Thu, 6 Dec 2018 11:18:17 +0000 (03:18 -0800)]
Updating submodules

Reviewed By: yns88

fbshipit-source-id: 2adbb6f97d4b8f067a2538fec855063510b0ca3f

5 years ago  Updating submodules
svcscm [Thu, 6 Dec 2018 10:53:28 +0000 (02:53 -0800)]
Updating submodules

Reviewed By: yns88

fbshipit-source-id: e0509413215f3b7578b825c52365fec4da625bd5

5 years ago  Fixed MIOpen RNN Segfault issue and enabled RNN test (#14810)
lcskrishna [Thu, 6 Dec 2018 07:52:42 +0000 (23:52 -0800)]
Fixed MIOpen RNN Segfault issue and enabled RNN test (#14810)

Summary:
This pull request contains the following changes:
1. Added the MIOpen RNN APIs miopenGetRNNLayerBiasSize and miopenGetRNNLayerParamSize.
2. Fixed the usage of the API miopenGetRNNLayerParam.
3. Modified the RNN test to run using the MIOpen engine.

Differential Revision: D13355699

Pulled By: bddppq

fbshipit-source-id: 6f750657f8049c5446eca893880b397804120b69

5 years ago  Export complete subgraph io info when calling onnxGetBackendCompatibility (#14827)
Yinghai Lu [Thu, 6 Dec 2018 07:50:12 +0000 (23:50 -0800)]
Export complete subgraph io info when calling onnxGetBackendCompatibility (#14827)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14827

We need to send complete IO info when doing `onnxGetBackendCompatibility` to backends like Glow. Previously we were missing some info because we sometimes generate more than one node from a single C2 op. This fixes the issue.

Reviewed By: jackm321

Differential Revision: D13352049

fbshipit-source-id: 8d8ac70656a0ac42f3a0ccecad61456a4f3b2435

5 years ago  Fix clip gradient with empty input (#14709)
Huan Gui [Thu, 6 Dec 2018 06:51:23 +0000 (22:51 -0800)]
Fix clip gradient with empty input (#14709)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14709

As titled

Reviewed By: Wakeupbuddy

Differential Revision: D13305554

fbshipit-source-id: 380062d4b0e4f9dc0207a27766cac7b8d05384d5

5 years ago  Remove protobuf dependency in pytorch cmake file. (#14182)
JerryShih [Thu, 6 Dec 2018 06:47:54 +0000 (22:47 -0800)]
Remove protobuf dependency in pytorch cmake file. (#14182)

Summary:
Currently, pytorch doesn't depend on protobuf, so we don't need to include the protobuf dir in the pytorch cmake file.
Also, if we build caffe2 without custom protobuf[1], we will hit a protobuf mismatch problem.

[1]
https://github.com/pytorch/pytorch/blob/92dbd0219f6fbdb1db105386386ccf92c0758e86/CMakeLists.txt#L65
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14182

Differential Revision: D13356273

Pulled By: ezyang

fbshipit-source-id: 8120c3452d158dc51d70156433d7b9076c6aed47

5 years ago  Optimize images (#14084)
Xiang Gao [Thu, 6 Dec 2018 06:44:27 +0000 (22:44 -0800)]
Optimize images (#14084)

Summary:
This is a PR that [ImgBot](https://imgbot.net/) opened on my fork https://github.com/zasdfgbnm/pytorch/pull/1, I forward it here.  ImgBot does lossless compression on images to reduce file size.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14084

Differential Revision: D13356293

Pulled By: ezyang

fbshipit-source-id: 731236d95ad870db8ccb99b03ed306704365242c

5 years ago  Prevent `profile_observer_test` from being run by CPU test (#14168)
Aldian Fazrihady [Thu, 6 Dec 2018 06:31:39 +0000 (22:31 -0800)]
Prevent `profile_observer_test` from being run by CPU test (#14168)

Summary:
Fix CMakeLists.txt so the CPU test won't run profile_observer_test.cc, as it currently only supports GPU.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14168

Differential Revision: D13356274

Pulled By: ezyang

fbshipit-source-id: 7d105f2e18675e5fab129864958148b0f18d582c

5 years ago  CAFFE2_INCLUDE_DIRS points to invalid path (#14306)
Achal Shah [Thu, 6 Dec 2018 06:30:07 +0000 (22:30 -0800)]
CAFFE2_INCLUDE_DIRS points to invalid path (#14306)

Summary:
I know that including CAFFE2_INCLUDE_DIRS in the include headers is not necessary for newer cmakes, but I had this in one of my old projects and **cmake gave me an error that "/usr/lib/include" is an invalid path**.

It seems like "${_INSTALL_PREFIX}/lib/include" should be changed to "${_INSTALL_PREFIX}/include", as all caffe2 headers are in /include rather than /lib/include/.

Please correct me if I am wrong?
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14306

Differential Revision: D13356246

Pulled By: ezyang

fbshipit-source-id: e2d5d3c42352e59b245714ad90fd7a9ef48170d7

5 years agouse "Extension" instead of the unimported "setuptools.Extension" (#14475)
HB_alon [Thu, 6 Dec 2018 06:16:44 +0000 (22:16 -0800)]
use "Extension" instead of the unimported "setuptools.Extension" (#14475)

Summary:
use "Extension" instead of the unimported "setuptools.Extension"
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14475

Differential Revision: D13356219

Pulled By: ezyang

fbshipit-source-id: 5a3e7eb73a32d6bf09676efd9eddded5586435cd

5 years ago  generate ATen core files with LF. (#14667)
Shuichi KITAGUCHI [Thu, 6 Dec 2018 06:07:45 +0000 (22:07 -0800)]
generate ATen core files with LF. (#14667)

Summary:
On a Windows environment, some ATen core files (Type.h, Tensor.h, TensorMethods.h) are generated with CRLF line endings (this may be environment dependent).
Therefore, the file comparison in generate_outputs() fails and compilation stops.
This patch forcibly generates these files with LF line endings.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14667

Differential Revision: D13356170

Pulled By: ezyang

fbshipit-source-id: ef8cc3a6cc8bf3c45b78e9eb3df98cf47c0d33bb

5 years ago  Remove outdated css file and refs in cpp conf.py (#14779)
Brendan Soffientini [Thu, 6 Dec 2018 05:53:36 +0000 (21:53 -0800)]
Remove outdated css file and refs in cpp conf.py (#14779)

Summary:
pytorch_theme.css is no longer necessary for the cpp or html docs site build. The new theme styles are located at https://github.com/pytorch/pytorch_sphinx_theme. The Lato font is also no longer used in the new theme.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14779

Differential Revision: D13356125

Pulled By: ezyang

fbshipit-source-id: c7635eb7512c7dcaddb9cad596ab3dbc96480144

5 years ago  Fixes for some Windows compiler warnings (#14490)
vaeksare [Thu, 6 Dec 2018 05:24:58 +0000 (21:24 -0800)]
Fixes for some Windows compiler warnings (#14490)

Summary:
Implement some simple fixes to clean up the Windows build by fixing compiler warnings. Three main types of warnings were fixed:

1. GCC-specific pragmas were changed to not be used on Windows.
2. cmake flags that don't exist on Windows were removed from the Windows build.
3. A macro that was defined multiple times on Windows was fixed.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14490

Differential Revision: D13241988

Pulled By: ezyang

fbshipit-source-id: 38da8354f0e3a3b9c97e33309cdda9fd23c08247

5 years ago  Shut up "address will always evaluate to 'true'" warnings (#14774)
Edward Yang [Thu, 6 Dec 2018 05:14:03 +0000 (21:14 -0800)]
Shut up "address will always evaluate to 'true'" warnings (#14774)

Summary:
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14774

Differential Revision: D13327969

Pulled By: ezyang

fbshipit-source-id: 43380c89eedaaa89467952401b8fd3f5a9ad754a

5 years ago  HIPify less files in PyTorch (#14804)
Edward Yang [Thu, 6 Dec 2018 04:50:41 +0000 (20:50 -0800)]
HIPify less files in PyTorch (#14804)

Summary:
Stacked on #14803
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14804

Differential Revision: D13347986

Pulled By: ezyang

fbshipit-source-id: c93177b4ad51855660d0de36d042bfc542bd4be0

5 years ago  Unify device argument parsing between torch and c10
Junjie Bai [Thu, 6 Dec 2018 02:35:21 +0000 (18:35 -0800)]
Unify device argument parsing between torch and c10

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14786

Differential Revision: D13334501

Pulled By: bddppq

fbshipit-source-id: ae3536be1fe0dcd6a1552ec93629ecc9554c0d7c

5 years ago  Improve assertion failure message (#14813)
Pieter Noordhuis [Thu, 6 Dec 2018 01:18:06 +0000 (17:18 -0800)]
Improve assertion failure message (#14813)

Summary:
See #14554.

I can't figure out how the reported issue can happen. The next best thing is to have more information when this happens again.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14813

Differential Revision: D13351908

Pulled By: pietern

fbshipit-source-id: 61b30fcae2e34da54329d0893ca4921b6ad60f0d

5 years ago  Add FunctionSchema based Operator Registry (#13789)
Bram Wasti [Thu, 6 Dec 2018 01:16:24 +0000 (17:16 -0800)]
Add FunctionSchema based Operator Registry (#13789)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13789

This enables creation of operators with FunctionSchema and IValue

Reviewed By: smessmer

Differential Revision: D13008791

fbshipit-source-id: 151efc88ac315f4a0ab0171a99774caaf767ef1e

5 years ago  Increase test timeout (#14814)
Pieter Noordhuis [Thu, 6 Dec 2018 01:15:51 +0000 (17:15 -0800)]
Increase test timeout (#14814)

Summary:
It is possible that some sort of contention causes process scheduling
delays which in turn cause the timeout to *not* be hit.

Increased sleep here will decrease the probability of this happening.

Fixes #14555.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14814

Differential Revision: D13351924

Pulled By: pietern

fbshipit-source-id: 1222cf0855408dfcb79f30f94694c790ee998cf9

5 years ago  Retry test on address already in use error (#14815)
Pieter Noordhuis [Thu, 6 Dec 2018 01:07:26 +0000 (17:07 -0800)]
Retry test on address already in use error (#14815)

Summary:
Thanks nairbv for the suggestion.

Also see #14589.

Fixes #14703.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14815

Differential Revision: D13351913

Pulled By: pietern

fbshipit-source-id: d11a4152505d0ce15592b13e417bb80551476a61

5 years ago  improve ONNX tests on torch.Linear
Lu Fang [Thu, 6 Dec 2018 01:04:39 +0000 (17:04 -0800)]
improve ONNX tests on torch.Linear

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14821

Reviewed By: zrphercule

Differential Revision: D13348773

Pulled By: houseroad

fbshipit-source-id: 611ca6e28f715e5518649c8c16f702ac3433308c

5 years ago  Define THPStorage struct only once (rather than N times) (#14802)
Lin Huang [Wed, 5 Dec 2018 21:12:37 +0000 (13:12 -0800)]
Define THPStorage struct only once (rather than N times) (#14802)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14802

The definition of THPStorage does not depend on any Real, so its macro definition is unnecessary; refactor the code so that THPStorage is not macro-defined.

Reviewed By: ezyang

Differential Revision: D13340445

fbshipit-source-id: 343393d0a36c868b9a06eea2ad9b80f5e395e947

5 years ago  File name change for FbgemmI8Depthwise.h and FbgemmI8Depthwise.cc (#14725)
Daya S Khudia [Wed, 5 Dec 2018 21:09:55 +0000 (13:09 -0800)]
File name change for FbgemmI8Depthwise.h and FbgemmI8Depthwise.cc (#14725)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14725

Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/33

Renaming FbgemmI8Depthwise.h to FbgemmI8DepthwiseAvx2.h and FbgemmI8Depthwise.cc to FbgemmI8DepthwiseAvx2.cc since FbgemmI8DepthwiseAvx2.cc will be compiled with avx2 flags

Reviewed By: jianyuh

Differential Revision: D13313898

fbshipit-source-id: a8111eacf3d79a466ce0565bfe5f2f0b200a5c33

5 years ago  Add torch.nn.RReLU support in symbolic (#14781)
zrphercule [Wed, 5 Dec 2018 20:59:44 +0000 (12:59 -0800)]
Add torch.nn.RReLU support in symbolic (#14781)

Summary:
Now we support exporting torch.nn.RReLU to ONNX.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14781

Reviewed By: houseroad

Differential Revision: D13343872

Pulled By: zrphercule

fbshipit-source-id: 1e96b957de4fc2f5ba3959d42329807975419ae3

5 years ago  Move avx2 specific code in different source files (#28)
Daya S Khudia [Wed, 5 Dec 2018 19:50:57 +0000 (11:50 -0800)]
Move avx2 specific code in different source files (#28)

Summary:
Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/28

Pull Request resolved: https://github.com/pytorch/pytorch/pull/14516

This is the first diff in a series that will move avx2-specific code into separate files. The goal is to compile as little code as possible with the avx2 and avx512 compiler flags.

Reviewed By: jianyuh

Differential Revision: D13248376

fbshipit-source-id: 401c2e9d3cd96c420fd08c3efa011febce96ffbb

5 years ago  Validate matching input shapes in Int8Add operator (#14520)
Marat Dukhan [Wed, 5 Dec 2018 19:39:46 +0000 (11:39 -0800)]
Validate matching input shapes in Int8Add operator (#14520)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14520

The default engine doesn't support broadcast semantics in the Int8Add operator. This patch adds a check that the shapes are equivalent.

Reviewed By: bertmaher

Differential Revision: D13250922

fbshipit-source-id: 8526d07723bd9a34d54dee04d121c57f8b33c481

5 years ago  fix stft arg types
Tongzhou Wang [Wed, 5 Dec 2018 19:21:19 +0000 (11:21 -0800)]
fix stft arg types

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14800

Reviewed By: zou3519

Differential Revision: D13340574

Pulled By: SsnL

fbshipit-source-id: 8b0dbbe299d1a362da0ecc0b1c0dadb2543ded5d

5 years ago  Improve HIPify performance (#14803)
Edward Yang [Wed, 5 Dec 2018 18:57:00 +0000 (10:57 -0800)]
Improve HIPify performance (#14803)

Summary:
```
    Improve performance of pyHIPIFY

    Changes:
    - Pre-compile regexes, don't use regexes when it's not necessary
      (this saves us ~15%)
    - Compile all substitutions for mappings into a single, non-backtracking
      regex using a Trie.  This gives big savings.

    Before, running pyHIPIFY on all files took 15.8s.  Now it takes 3.9s.
```
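A minimal sketch of the Trie-to-regex idea (illustrative, not the actual pyHIPIFY code): common prefixes are shared so the engine rarely retries alternatives, and a single `sub` with a dict lookup replaces many per-mapping passes.

```
import re

def build_trie(words):
    trie = {}
    for w in words:
        node = trie
        for ch in w:
            node = node.setdefault(ch, {})
        node['**end**'] = {}  # end-of-word marker
    return trie

def trie_to_pattern(node):
    end = '**end**' in node
    alts = [re.escape(ch) + trie_to_pattern(child)
            for ch, child in sorted(node.items()) if ch != '**end**']
    if not alts:
        return ''
    body = alts[0] if len(alts) == 1 else '(?:%s)' % '|'.join(alts)
    return '(?:%s)?' % body if end else body

mapping = {'cudaMalloc': 'hipMalloc', 'cudaMemcpy': 'hipMemcpy'}
pattern = re.compile(trie_to_pattern(build_trie(mapping)))
print(pattern.sub(lambda m: mapping[m.group(0)], 'cudaMalloc(&p, n);'))
# -> hipMalloc(&p, n);
```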

Stacked on #14769
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14803

Differential Revision: D13342620

Pulled By: ezyang

fbshipit-source-id: 1cfa36b3236bbe24d07080a31cc788a52d740f40

5 years ago  Fix cuda multiprocessing cached memory (#14736)
Ailing Zhang [Wed, 5 Dec 2018 18:52:39 +0000 (10:52 -0800)]
Fix cuda multiprocessing cached memory (#14736)

Summary:
This PR fixes #11422

In the old world of CUDA IPC, when we want to share a tensor T from process A to process B, we have to share the whole CUDA memory allocation that T's storage sits in, and we cast it to the same storage type as T's.

This causes a problem when two different types of storage get allocated in the same CUDA memory block: when we try to reconstruct the second tensor, it complains about a wrong storage type.

In this PR we reconstruct only the storage (not the entire memory block). However, since CUDA only allows a memHandle to be opened once per process, we have to save the device pointer in a global cache so that we can reconstruct tensors as they come.
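A hedged repro sketch of the failure mode (illustrative; whether the two storages actually land in the same cached block depends on allocator behavior):

```
import torch
import torch.multiprocessing as mp

def consumer(q):
    a = q.get()              # float storage
    b = q.get()              # long storage; before this fix, reconstructing it
    print(a.sum(), b.sum())  # could fail if both mapped the same mem block

if __name__ == '__main__':
    mp.set_start_method('spawn')
    q = mp.Queue()
    p = mp.Process(target=consumer, args=(q,))
    p.start()
    q.put(torch.randn(8, device='cuda'))
    q.put(torch.arange(8, device='cuda'))
    p.join()
```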

Thanks a ton to ezyang who helped design the solution and debugged the issue!
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14736

Differential Revision: D13335899

Pulled By: ailzhang

fbshipit-source-id: cad69db392ed6f8fdc2b93a9dc2899f6d378c371

5 years ago  Set and get default dtype (#13748)
Peter Goldsborough [Wed, 5 Dec 2018 18:18:20 +0000 (10:18 -0800)]
Set and get default dtype (#13748)

Summary:
Replaces the `DefaultTensorOptions` with just a global default dtype that you can set and get like in Python.

Also, calls `set_default_dtype` in the implementation of `torch.set_default_dtype`. Right now these two default values are separate but will always be the same. Should we just bind `set_default_dtype`  into Python? I think that might be good to do in a separate PR though.
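The resulting API, sketched:

```
import torch

torch.set_default_dtype(torch.float64)
torch.tensor([1.0]).dtype    # torch.float64
torch.get_default_dtype()    # torch.float64
```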

ezyang gchanan

Also CC colesbury who wanted to do this for ATen for a while? What do you think about it?
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13748

Differential Revision: D13340207

Pulled By: goldsborough

fbshipit-source-id: 2689b09eb137fabb3a92d1ad1635782bee9398e8

5 years ago  Switch Int8AveragePool operator to QNNPACK (#14783)
Marat Dukhan [Wed, 5 Dec 2018 18:10:32 +0000 (10:10 -0800)]
Switch Int8AveragePool operator to QNNPACK (#14783)

Summary:
2.2-2.9X better performance on ARM when compiled with gcc (same bad perf when compiled with Clang)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14783

Differential Revision: D13332680

Pulled By: Maratyszcza

fbshipit-source-id: 4c1138500c6b3026335e9bfe5f6be43b1ae2cefb

5 years ago  Update magma to 2.4.0 for Windows
peterjc123 [Wed, 5 Dec 2018 17:50:41 +0000 (09:50 -0800)]
Update magma to 2.4.0 for Windows

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14738

Differential Revision: D13341611

Pulled By: soumith

fbshipit-source-id: 39a49fc60e710cc32a463858c9cee57c182330e2

5 years ago  Unify build_caffe2_amd.py and build_pytorch_amd.py (#14769)
Edward Yang [Wed, 5 Dec 2018 17:21:13 +0000 (09:21 -0800)]
Unify build_caffe2_amd.py and build_pytorch_amd.py (#14769)

Summary:
I need to preserve the ability to HIPify out-of-place files only, so build_amd.py grows a --out-of-place-only flag.

Stacked on #14757
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14769

Differential Revision: D13340154

Pulled By: ezyang

fbshipit-source-id: 1b855bc79e824ea94517a893236fd2c8ba4cb79d

5 years ago  Default pool() option (#14636)
Ilia Cherniavskii [Wed, 5 Dec 2018 16:40:54 +0000 (08:40 -0800)]
Default pool() option (#14636)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14636

Add a default CPU option for the pool()

Reviewed By: andrewwdye

Differential Revision: D13281367

fbshipit-source-id: 92dbfce89c900a41731b6d1ff62bb97886c40f77

5 years ago  Storage.clone maintains original device (#14751)
Francisco Massa [Wed, 5 Dec 2018 16:27:00 +0000 (08:27 -0800)]
Storage.clone maintains original device (#14751)

Summary:
Fixes https://github.com/pytorch/pytorch/issues/14673

As pointed out by vishwakftw, the root cause of the `deepcopy` issue was that `storage.clone()` would create the new storage on the default device.
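A minimal sketch of the fixed behavior (assumes a machine with at least two GPUs):

```
import copy
import torch

t = torch.randn(2, device='cuda:1')
# the clone used to land on the default device (cuda:0); it now stays on cuda:1
copy.deepcopy(t).device   # device(type='cuda', index=1)
```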
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14751

Reviewed By: soumith

Differential Revision: D13323061

Pulled By: fmassa

fbshipit-source-id: bfe46ebd78f0b6cd9518c11d09de7849282ed2a2

5 years ago  Updating submodules
svcscm [Wed, 5 Dec 2018 14:24:49 +0000 (06:24 -0800)]
Updating submodules

Reviewed By: yns88

fbshipit-source-id: 080e0034bd6353420383ac7b476af5a35eaba7c3

5 years ago  Updating submodules
svcscm [Wed, 5 Dec 2018 10:53:49 +0000 (02:53 -0800)]
Updating submodules

Reviewed By: yns88

fbshipit-source-id: e397238c7c477c4268e2dc89e530776fc89f18f8

5 years ago  include avx512vl to avx512 code path (#14733)
Jongsoo Park [Wed, 5 Dec 2018 08:49:01 +0000 (00:49 -0800)]
include avx512vl to avx512 code path (#14733)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14733

We often also want to use the AVX512VL instruction set.
We already include AVX512F and AVX512DQ.
Skylake also has AVX512BW and AVX512CD, which we may want to add later.

Reviewed By: duc0

Differential Revision: D13317282

fbshipit-source-id: 82c8e401d82d5c3a5452fb4ccb6e5cb88d242bda

5 years ago  Use AT_WARN for warnings in the JIT (#14770)
Adam Paszke [Wed, 5 Dec 2018 08:07:51 +0000 (00:07 -0800)]
Use AT_WARN for warnings in the JIT (#14770)

Summary:
Previously their implementation dispatched to prim::Print, which kept
printing the warnings.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14770

Differential Revision: D13327629

Pulled By: suo

fbshipit-source-id: b9913f533d4530eb7c29146c39981ba7f72b6b68

5 years ago  Add output info when doing onnxGetBackendCompatibility (#14784)
Yinghai Lu [Wed, 5 Dec 2018 05:50:41 +0000 (21:50 -0800)]
Add output info when doing onnxGetBackendCompatibility (#14784)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14784

TSIA. To give more complete info to `onnxGetBackendCompatibility`.

Reviewed By: bertmaher, rdzhabarov

Differential Revision: D13331989

fbshipit-source-id: 1064b93f7f474788f736e6f0c893dae915c6fb99

5 years ago  Don't DCE PythonOp
Adam Paszke [Wed, 5 Dec 2018 05:35:48 +0000 (21:35 -0800)]
Don't DCE PythonOp

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14773

Reviewed By: eellison

Differential Revision: D13327673

Pulled By: suo

fbshipit-source-id: 236db3407c7eacac470530836e3d4d0dc323110c

5 years ago  Improvements for symbolic AD (#14758)
Adam Paszke [Wed, 5 Dec 2018 04:35:51 +0000 (20:35 -0800)]
Improvements for symbolic AD (#14758)

Summary:
**Review only the last commit.**

This commit adds a few optimizations to AD that let us dramatically reduce the number of sizes we capture from the forward pass.

We now:
- collapse chains of SumToSize
- avoid capturing sizes of tensors that are captured anyway
- more aggressively DCE the reverse code
- run CSE on the primal code to deduplicate `aten::size` calls

cc zou3519 zdevito
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14758

Differential Revision: D13324440

Pulled By: zou3519

fbshipit-source-id: 45ccbc13605adcef2b461840c6089d3200000c72

5 years ago  Revert D13289919: [pytorch][PR] [DataLoader] Refactor dataloader.py
Ailing Zhang [Wed, 5 Dec 2018 04:23:25 +0000 (20:23 -0800)]
Revert D13289919: [pytorch][PR] [DataLoader] Refactor dataloader.py

Differential Revision: D13289919

Original commit changeset: d701bc7bb48f

fbshipit-source-id: c350c491fefa98a0a7c0cf22cb832e78aeb15c3d

5 years ago  Delete defunct files from torch/csrc/distributed (#14785)
Edward Yang [Wed, 5 Dec 2018 04:07:49 +0000 (20:07 -0800)]
Delete defunct files from torch/csrc/distributed (#14785)

Summary:
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14785

Differential Revision: D13333066

Pulled By: ezyang

fbshipit-source-id: e7937b4e8e12409b0fa964c34f995f7861ca95ff

5 years ago  support conv transpose in script
Elias Ellison [Wed, 5 Dec 2018 03:52:07 +0000 (19:52 -0800)]
support conv transpose in script

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14775

Differential Revision: D13330491

Pulled By: eellison

fbshipit-source-id: 432b327d6a33517ff53ea33c9f64700e81432332

5 years ago  Making dist.get_default_group private for PT1 release (#14767)
Teng Li [Wed, 5 Dec 2018 03:20:08 +0000 (19:20 -0800)]
Making dist.get_default_group private for PT1 release (#14767)

Summary:
When I wrote the frontend API, it was designed around not letting users use the default_group directly in any functions. It should really be private.

All collectives are supposed to use either group.WORLD or anything that comes out of new_group. That was the initial design.

We need to make a TODO about removing group.WORLD one day. It exists for backward compatibility reasons and adds lots of complexity.
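The intended usage, sketched (initialization arguments are illustrative):

```
import torch
import torch.distributed as dist

dist.init_process_group(backend='gloo', init_method='env://')

t = torch.ones(1)
dist.all_reduce(t)                # implicit default group, i.e. group.WORLD
g = dist.new_group(ranks=[0, 1])  # explicit subgroup
dist.all_reduce(t, group=g)       # never fetch the default group object itself
```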
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14767

Reviewed By: pietern

Differential Revision: D13330655

Pulled By: teng-li

fbshipit-source-id: ace107e1c3a9b3910a300b22815a9e8096fafb1c

5 years ago  Make checkpoint_sequential work with multiple arguments (#14278)
Andy Chen [Wed, 5 Dec 2018 02:45:45 +0000 (18:45 -0800)]
Make checkpoint_sequential work with multiple arguments (#14278)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14278

In this commit, we make checkpoint_sequential work for models with multiple tensor inputs. Previously, it only processed the first tensor and ignored the rest.

We introduce a new test in test/test_utils.py that replicates the issue referenced in this [GitHub issue](https://github.com/pytorch/pytorch/issues/11093), and we make sure that the test passes by changing the behavior of checkpoint_sequential to process all input tensors.
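A minimal sketch of the call shape (the multiple-inputs case applies when the first module's forward takes several tensors):

```
import torch
from torch.utils.checkpoint import checkpoint_sequential

model = torch.nn.Sequential(
    torch.nn.Linear(10, 10),
    torch.nn.ReLU(),
    torch.nn.Linear(10, 10),
)
x = torch.randn(4, 10, requires_grad=True)

# inputs after `segments` are now all forwarded to the first segment
# instead of everything past the first tensor being silently dropped
out = checkpoint_sequential(model, 2, x)
```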

Reviewed By: ezyang

Differential Revision: D13144672

fbshipit-source-id: 24f58233a65a0f5b80b89c8d8cbced6f814004f7

5 years ago  Automatic update of fbcode/onnx to 42804705bdbf179d1a98394008417e1392013547 (#14777)
Lu Fang [Wed, 5 Dec 2018 02:35:46 +0000 (18:35 -0800)]
Automatic update of fbcode/onnx to 42804705bdbf179d1a98394008417e1392013547 (#14777)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14777

Previous import was 6b34743d2e361bbc0acb29dd73536478cb92562e

Included changes:
- **[4280470](https://github.com/onnx/onnx/commit/4280470)**: Changes done internally at Facebook (#1668) <Lu Fang>
- **[f85221f](https://github.com/onnx/onnx/commit/f85221f)**: Fuse MatMul and Add into Gemm (#1542) <vloncar>
- **[022230e](https://github.com/onnx/onnx/commit/022230e)**: Replace np.long by np.int64 (#1664) <G. Ramalingam>
- **[0ab3c95](https://github.com/onnx/onnx/commit/0ab3c95)**: Infer shape from data in Constant nodes (#1667) <Shinichiro Hamaji>

Reviewed By: bddppq

Differential Revision: D13330082

fbshipit-source-id: 13cf328626cf872d0983bbd2154d95c45da70f1c

5 years ago  Enable testing on Loss modules (#14778)
David Riazati [Wed, 5 Dec 2018 02:32:05 +0000 (18:32 -0800)]
Enable testing on Loss modules (#14778)

Summary:
This PR adds `None` buffers as parameters (similarly to #14715). It also cleans up a bunch of the `test_jit.py` tests that should be covered by `common_nn.py` and brings in `criterion_tests` to test loss functions.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14778

Differential Revision: D13330849

Pulled By: driazati

fbshipit-source-id: 924cc4cf94e0dcd11e811a55222fd2ebc42a9e76

5 years ago  Add tests for dropout/batchnorm train/eval, remove training constants (#14780)
Wanchao Liang [Wed, 5 Dec 2018 02:15:14 +0000 (18:15 -0800)]
Add tests for dropout/batchnorm train/eval, remove training constants (#14780)

Summary:
This PR:

1. adds tests for batchnorm/dropout train/eval parameter mutation
2. removes training constants from all of our standard library
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14780

Differential Revision: D13331578

Pulled By: wanchaol

fbshipit-source-id: d92ca3ce38cc2888688d50fe015e3e22539a20a5

5 years ago  Split LegacyDeviceTypeInit from LegacyTypeDispatch. (#14723)
Gregory Chanan [Wed, 5 Dec 2018 01:48:25 +0000 (17:48 -0800)]
Split LegacyDeviceTypeInit from LegacyTypeDispatch. (#14723)

Summary:
The goal here is to have LegacyTHDispatch call into this as well, so LegacyTypeDispatch and LegacyTHDispatch don't have cross dependencies.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14723

Reviewed By: ezyang

Differential Revision: D13314017

Pulled By: gchanan

fbshipit-source-id: 8761cb4af2b2269d2e755203e073bfdba535b8c0

5 years ago  don't allow cse to clean up nondeterministic nodes
Michael Suo [Tue, 4 Dec 2018 23:42:22 +0000 (15:42 -0800)]
don't allow cse to clean up nondeterministic nodes

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14776

Differential Revision: D13330229

Pulled By: suo

fbshipit-source-id: 6bc88811e1889949f0f079cffccd8cd4270584cc

5 years ago  Reenable all forward-pass fusions that worked before the AD fix (#14558)
Adam Paszke [Tue, 4 Dec 2018 23:40:41 +0000 (15:40 -0800)]
Reenable all forward-pass fusions that worked before the AD fix (#14558)

Summary:
Dealing with so many `aten::size` calls (in particular calls on elements computed inside fusion groups) requires us to do some extra graph processing in the fuser (to compute the sizes by explicit broadcasts, instead of writing the intermediate tensors only to check their size). This restores the forward expects of LSTM and MiLSTM to a single big kernel. Unfortunately the backward is much harder, because as long as we can't prove that the reductions are unnecessary (or if we can't distribute them over the op), we will not be able to fuse them.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14558

Differential Revision: D13321748

Pulled By: zou3519

fbshipit-source-id: c04fc2f70d106d2bfb56206b5aec517a93b79d1f

5 years ago  BatchNorm support not tracking stats
David Riazati [Tue, 4 Dec 2018 23:09:30 +0000 (15:09 -0800)]
BatchNorm support not tracking stats

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14764

Differential Revision: D13325800

Pulled By: driazati

fbshipit-source-id: a3e4773dc31b83565e7a4de33614d6efd4a12de9

5 years ago  Minor doc change in c10/Device.h (#14762)
Lu Fang [Tue, 4 Dec 2018 22:48:56 +0000 (14:48 -0800)]
Minor doc change in c10/Device.h (#14762)

Summary:
Make sure it's a valid regex.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14762

Reviewed By: zrphercule

Differential Revision: D13326108

Pulled By: houseroad

fbshipit-source-id: fdcae2d5d42774c4071651b7477f08047d385dfa

5 years ago  Introduce LegacyTHDispatcher for dispatching to TH functions. (#14754)
Gregory Chanan [Tue, 4 Dec 2018 22:41:03 +0000 (14:41 -0800)]
Introduce LegacyTHDispatcher for dispatching to TH functions. (#14754)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14754

This isn't hooked up to anything yet; this is just putting the skeleton in place.
The idea here is that the functions generated via Declarations.cwrap and nn.yaml are not actually operators; they are implementation details of operators, and thus don't need to participate in VariableType or JIT dispatch generation.

So, we will split these functions out from the usual Type/operator hierarchy; for now the dispatch will be done by a Type-like class called LegacyTHDispatcher. Once this is done, we can probably collapse Type to be backend-specific, not Type/ScalarType-specific, because all the ScalarType-specific code will live in the LegacyTHDispatcher.

Reviewed By: ezyang

Differential Revision: D13321605

fbshipit-source-id: 25d1bbc9827a42d6ab5d69aabbad3eac72bf364c

5 years ago  disable batch mm if we have mutable ops (#14771)
Michael Suo [Tue, 4 Dec 2018 22:28:10 +0000 (14:28 -0800)]
disable batch mm if we have mutable ops (#14771)

Summary:
Just to be safe, disable batch mm for mutable ops. We don't lose much by doing this, and we can go back and re-enable it at a calmer time.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14771

Reviewed By: eellison

Differential Revision: D13327641

Pulled By: suo

fbshipit-source-id: 96611e21ed3cb8492a2cd040f7d33fb58c52bd5e

5 years ago  Replace at::Half non-vectorized conversions with implementations from FP16 (#14411)
Chandler Zuo [Tue, 4 Dec 2018 22:23:22 +0000 (14:23 -0800)]
Replace at::Half non-vectorized conversions with implementations from FP16 (#14411)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14411
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14579

Folded the fp16 code into c10.

Reviewed By: ezyang

Differential Revision: D13206450

fbshipit-source-id: 472208dd230dc49d33935622ff3286b17eeb0894

5 years ago  Use .to to convert new tensors in new_tensor (#14097)
Thomas Viehmann [Tue, 4 Dec 2018 21:58:31 +0000 (13:58 -0800)]
Use .to to convert new tensors in new_tensor (#14097)

Summary:
This would solve the tracing problems of #13969.
Fixes: #14732

I would appreciate it if this got good scrutiny before being applied.
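For context, a sketch of the `new_tensor` semantics now routed through `.to`:

```
import torch

base = torch.zeros(2, dtype=torch.float64)
t = base.new_tensor([1, 2, 3])  # copies data, inheriting base's dtype/device
t.dtype                         # torch.float64
```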
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14097

Differential Revision: D13323181

Pulled By: ezyang

fbshipit-source-id: dcd104b497c0bfddb751923c6166a3824b7a3702

5 years ago  Export generator constructor (#14041)
Zeming Lin [Tue, 4 Dec 2018 21:43:28 +0000 (13:43 -0800)]
Export generator constructor (#14041)

Summary:
Missed a spot :)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14041

Reviewed By: ezyang

Differential Revision: D13283803

Pulled By: ebetica

fbshipit-source-id: 482e245f57b0cea6ca3886355ea3ae487d024d4b

5 years ago  c10d doesn't work with torch namespace (#14042)
Zeming Lin [Tue, 4 Dec 2018 21:42:11 +0000 (13:42 -0800)]
c10d doesn't work with torch namespace (#14042)

Summary:
If both `Utils.hpp` and the `torch` namespace are included in the same file, the compiler won't know which fmap to use. I believe this is because of ADL. This change fixes that issue for me.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14042

Reviewed By: ezyang

Differential Revision: D13283810

Pulled By: ebetica

fbshipit-source-id: b68233336518230ba730e83ddac1226a66896533

5 years ago  Add resnet test, convert more modules (#14437)
Wanchao Liang [Tue, 4 Dec 2018 21:40:11 +0000 (13:40 -0800)]
Add resnet test, convert more modules (#14437)

Summary:
This PR adds resnet to test_jit and converts more nn modules; stacked on #14533 and #14715.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14437

Differential Revision: D13325871

Pulled By: wanchaol

fbshipit-source-id: 6c94a988b36794a373af6541c0c262a07291f7b1

5 years ago  Add missing test skip
David Riazati [Tue, 4 Dec 2018 21:33:41 +0000 (13:33 -0800)]
Add missing test skip

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14763

Differential Revision: D13325350

Pulled By: driazati

fbshipit-source-id: 4d64a7616b227983c2fc2748c5fbecd1bcbff832

5 years ago  Rename _local_scalar to item() (#13676)
Peter Goldsborough [Tue, 4 Dec 2018 21:17:17 +0000 (13:17 -0800)]
Rename _local_scalar to item() (#13676)

Summary:
Make `at::_local_scalar` more "official" by renaming it to `item()`.
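The renamed API in use:

```
import torch

x = torch.tensor([[3.5]])
x.item()   # 3.5 -- works on any single-element tensor
```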

gchanan
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13676

Differential Revision: D13003020

Pulled By: goldsborough

fbshipit-source-id: 0ac25f5237fb81a1576304a0a02f840ff44168a4