platform/upstream/pytorch.git
5 years ago  multinomial: fix detection of zero probability (#16075)
Thomas Viehmann [Wed, 16 Jan 2019 20:15:12 +0000 (12:15 -0800)]
multinomial: fix detection of zero probability (#16075)

Summary:
Due to floating-point rounding, the cumsum over the probabilities is not
guaranteed to be monotonically non-decreasing. Thus it is hard to detect
zero-probability classes using just the cumsum.
This changes the binary search postprocessing to use the
(non-cumulated) distribution instead.
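
For intuition, a minimal Python sketch of the fixed post-processing (a hypothetical helper, not the actual C++/CUDA binary-search kernel):

```
import torch

def sample_once(probs):
    cdf = probs.cumsum(0)
    u = torch.rand(()).item() * cdf[-1].item()
    # leftmost bin whose cumulative mass reaches u
    idx = int(torch.searchsorted(cdf, u))
    # Rounding can make the cumsum non-monotone and land the search on a
    # zero-probability bin; consult the raw distribution rather than the
    # cumsum, and back off to the previous bin with nonzero mass.
    while idx > 0 and probs[idx].item() == 0:
        idx -= 1
    return idx

print(sample_once(torch.tensor([0.2, 0.0, 0.8])))  # never returns index 1
```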

Thank you, jcjohnson, for the bug report with
reproducing case.

Fixes: #13867
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16075

Differential Revision: D13695565

Pulled By: soumith

fbshipit-source-id: 02c4d6f868f0050c1ae7d333f4317c5610e49cd9

5 years ago  Enable single graph sharing between multiple threads for onnxifiop (#16047)
Kimish Patel [Wed, 16 Jan 2019 19:46:04 +0000 (11:46 -0800)]
Enable single graph sharing between multiple threads for onnxifiop (#16047)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16047

Implements a single thread-safe map, enabling sharing of the generated graph
between different ops.
Added model_id to every onnxified op to help create a unique id in the map.
Some formatting fixes.

Reviewed By: yinghai

Differential Revision: D13663927

fbshipit-source-id: 27417e8fe752fdd48abb6a87966cd76d592e1206

5 years ago  Fix error message formatting in AT_CHECK/AT_ERROR (#16067)
vishwakftw [Wed, 16 Jan 2019 19:12:47 +0000 (11:12 -0800)]
Fix error message formatting in AT_CHECK/AT_ERROR (#16067)

Summary:
Changelog:

- Fix formatting for error messages in prelu, EmbeddingBag, RNN

Fixes https://github.com/pytorch/pytorch/issues/16043
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16067

Differential Revision: D13693286

Pulled By: soumith

fbshipit-source-id: b0760d13c9a45e82dababfc44dabe648e5345ca3

5 years ago  Correct sphinx-note in symeig (wrong indentation)
Rasmus Diederichsen [Wed, 16 Jan 2019 18:22:08 +0000 (10:22 -0800)]
Correct sphinx-note in symeig (wrong indentation)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16073

Differential Revision: D13692874

Pulled By: soumith

fbshipit-source-id: ea2a98e88679d382f9a2edab199e9ba7c8ce2213

5 years ago  Fix the caffe2_gpu linkage with torch on Windows (#16071)
peter [Wed, 16 Jan 2019 17:06:22 +0000 (09:06 -0800)]
Fix the caffe2_gpu linkage with torch on Windows (#16071)

Summary:
Fixes https://github.com/pytorch/pytorch/issues/15992.
Inspired by https://docs.microsoft.com/en-us/cpp/build/reference/optimization-best-practices?view=vs-2017. But this PR needs to be tested.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16071

Differential Revision: D13693006

Pulled By: soumith

fbshipit-source-id: e83e9ae2591fa4da01d2b1b593558dba3bdc3cf7

5 years ago  Port legacy all(*) to ATen (#15540)
Shen Li [Wed, 16 Jan 2019 17:02:44 +0000 (09:02 -0800)]
Port legacy all(*) to ATen (#15540)

Summary:
Questions:

1. ~This PR disables `common_dtype` computation [in `TensorIterator.cpp`](https://github.com/mrshenli/pytorch/blob/all/aten/src/ATen/native/TensorIterator.cpp#L489-L491) for `all*` operators. The reason is that [this code](https://github.com/mrshenli/pytorch/blob/all/aten/src/ATen/native/TensorIterator.cpp#L120) otherwise complains about a type mismatch, where `op.tensor` is of type `Variable[CPUByteType]` while `op` is `CPUByteType`. I am not sure if this is the right solution for this problem.~

2. Should I clean up all occurrences of `_th_all` and `_th_all_out` (and `logicalAnd`, `logicalAndAll`)?

3. Do I need to implement derivatives for `all`?

gchanan

Benchmark:

(Four benchmark screenshots are attached on the pull request.)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15540

Differential Revision: D13548938

Pulled By: mrshenli

fbshipit-source-id: 5a2e5eef1047decb4c79906cb9f3332034908c9c

5 years ago  Rename away uses of THAllocator and THCDeviceAllocator (#16061)
Edward Yang [Wed, 16 Jan 2019 13:33:14 +0000 (05:33 -0800)]
Rename away uses of THAllocator and THCDeviceAllocator (#16061)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16061

I discovered I needed to delete these names in preparation for moving
THCCachingAllocator to c10_cuda; might as well fix all the other
sites too.

Reviewed By: dzhulgakov

Differential Revision: D13686869

fbshipit-source-id: e8cc55d39ac4bfd3e3a22c761f89a7a111ce5f5e

5 years ago  Stop pretending that TH headers are both C++ and C compatible. (#16059)
Edward Yang [Wed, 16 Jan 2019 13:33:14 +0000 (05:33 -0800)]
Stop pretending that TH headers are both C++ and C compatible. (#16059)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16059

Just deleted all __cplusplus ifdef guards; we only ever use
these headers in C++ contexts.

Reviewed By: dzhulgakov

Differential Revision: D13686580

fbshipit-source-id: ce28c4a32f3596bfb17aeeb34904a02899991453

5 years ago  Fix logic errors when accumulating reductions in output (CUDA) (#16023)
Brennan Vincent [Wed, 16 Jan 2019 03:55:13 +0000 (19:55 -0800)]
Fix logic errors when accumulating reductions in output (CUDA) (#16023)

Summary:
The correct logic is as follows:

* If there is an earlier split, we need to combine with its result.
* If there is *not* a later split, we need to project before saving into the output.

This should partially fix #15837. For example:
```
In [7]: a=torch.ones([1838860800], dtype=torch.float, device="cuda:1")

In [8]: a.mean()
Out[8]: tensor(1., device='cuda:1')
```
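
For intuition, here is a minimal Python sketch of the combine-then-project pattern (a hypothetical helper, not the actual CUDA kernel):

```
import torch

def mean_in_splits(x, num_splits):
    acc = torch.zeros(())
    for chunk in x.chunk(num_splits):
        acc = acc + chunk.sum()   # combine with the results of earlier splits
    return acc / x.numel()        # project only once, when writing the output

x = torch.ones(1000)
assert torch.isclose(mean_in_splits(x, 7), x.mean())
```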
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16023

Differential Revision: D13678449

Pulled By: umanwizard

fbshipit-source-id: ab5078484c88e96bb30121b5cf24a0e8b0a8c2f8

5 years ago  Remove deprecated caffe2::Tensor APIs (#15814)
Jerry Zhang [Wed, 16 Jan 2019 02:39:29 +0000 (18:39 -0800)]
Remove deprecated caffe2::Tensor APIs (#15814)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15814

The plan is to remove the APIs we want to deprecate one by one and make sure it still builds in sandcastle and ossci.

Reviewed By: ezyang

Differential Revision: D12812029

fbshipit-source-id: ea0c3dd882bec95fcd4507160ebc61f598b6d040

5 years ago  Remaining Tensor API fixes - dims() -> sizes() (#15743)
Jerry Zhang [Wed, 16 Jan 2019 02:39:28 +0000 (18:39 -0800)]
Remaining Tensor API fixes - dims() -> sizes() (#15743)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15743

Remaining fixes so that D12812029 will compile

Reviewed By: dzhulgakov

Differential Revision: D13535559

fbshipit-source-id: 2c8b3403570c8c35ac8efe2d827233abc0e6e0d1

5 years ago  Comment about CuDNNWrapper (#15496)
Edward Yang [Wed, 16 Jan 2019 01:57:27 +0000 (17:57 -0800)]
Comment about CuDNNWrapper (#15496)

Summary:
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15496

Differential Revision: D13544130

Pulled By: ezyang

fbshipit-source-id: 51bdd8312b482925b30a478774cdfa629c57ee4e

5 years ago  Port FractionalMaxPool2d from TH to ATen (#15531)
Chandler Zuo [Wed, 16 Jan 2019 01:54:20 +0000 (17:54 -0800)]
Port FractionalMaxPool2d from TH to ATen (#15531)

Summary:
Tested:

pytest test/test_nn.py -k Fractional
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15531

Differential Revision: D13612833

Pulled By: chandlerzuo

fbshipit-source-id: b919d698d068b97ba7a4f8021367e7f6c8aae39c

5 years ago  Support tracing GenericList (#15969)
James Reed [Wed, 16 Jan 2019 01:29:48 +0000 (17:29 -0800)]
Support tracing GenericList (#15969)

Summary:
Treat GenericList similarly to tuples and TensorList: recursively unpack them and call assignValueTrace accordingly. Also add interpreter support for ListUnpack on GenericList.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15969

Differential Revision: D13665139

Pulled By: jamesr66a

fbshipit-source-id: cd8cb3dd7475f424e48a69d217f2eac529df9f6a

5 years ago  s/fwdproxy.any/fwdproxy/g in fbsource (#16024)
Kyle Lexmond [Wed, 16 Jan 2019 01:20:24 +0000 (17:20 -0800)]
s/fwdproxy.any/fwdproxy/g in fbsource (#16024)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16024

codemod with 'Yes to all': s/fwdproxy.any/fwdproxy/g in fbsource

Reviewed By: maxgeorg

Differential Revision: D13666336

fbshipit-source-id: a5a694d66efec5304a1c8c231d638441f88efe1d

5 years ago  Automatic update of fbcode/onnx to 84a0441ae28795a928005863dc142bee81827566 (#16046)
Lu Fang [Wed, 16 Jan 2019 01:10:56 +0000 (17:10 -0800)]
update of fbcode/onnx to 84a0441ae28795a928005863dc142bee81827566 (#16046)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16046

Previous import was 7abd834091f1024c11749dcfd25126802db9fdd5

Included changes:
- **[84a0441](https://github.com/onnx/onnx/commit/84a0441)**: Clarify namescopes in the presence of nested subgraphs (#1665) <G. Ramalingam>
- **[118fec5](https://github.com/onnx/onnx/commit/118fec5)**: Add Where op. (#1569) <Sergii Dymchenko>
- **[beefa15](https://github.com/onnx/onnx/commit/beefa15)**: Use strings directly for casing as np.object w/o redundant StringHolder. (#1736) <Dmitri Smirnov>
- **[4023bae](https://github.com/onnx/onnx/commit/4023bae)**: Add a capability to input/output unicode strings (#1734) <Dmitri Smirnov>
- **[1a8a7fc](https://github.com/onnx/onnx/commit/1a8a7fc)**: typos fixed: iutput -> input (#1726) <Beomsoo Kim>
- **[0128478](https://github.com/onnx/onnx/commit/0128478)**: Scan test update (#1732) <G. Ramalingam>
- **[c6a24fd](https://github.com/onnx/onnx/commit/c6a24fd)**: turn rtol to 0.002 on densenet121, since AMD and Nvidia GPU's precion difference (#1733) <Lu Fang>
- **[5b7ac72](https://github.com/onnx/onnx/commit/5b7ac72)**: Add Shrink operator (#1622) <Rui Zhu>

Reviewed By: yinghai

Differential Revision: D13676711

fbshipit-source-id: 513cc137223469b47af48919432aaecf58006012

5 years ago  Add count_include_pad to average_pool_gradient_op (#15997)
Xiaomeng Yang [Wed, 16 Jan 2019 00:44:33 +0000 (16:44 -0800)]
Add count_include_pad to average_pool_gradient_op (#15997)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15997

Add count_include_pad to average_pool_gradient_op

Reviewed By: houseroad

Differential Revision: D13648339

fbshipit-source-id: 205cb2acb32dc24a85256b628298b1a11f0ffa2c

5 years ago  Remove cuda from autograd profiler (#15898)
Zachary DeVito [Wed, 16 Jan 2019 00:25:28 +0000 (16:25 -0800)]
Remove cuda from autograd profiler (#15898)

Summary:
This puts stubs in the autograd profiler for the CUDA APIs it uses, allowing the CUDA parts of libtorch to be linked separately from the CPU parts.

This also edits the buck build.

Previous:

For GPU builds:
_C -> csrc -> caffe2
For CPU builds:
_C -> csrc-cpu -> caffe2

Now:
GPU:
_C -> libtorch_cuda -> (libtorch -> caffe2, for CPU)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/15898

Reviewed By: ailzhang

Differential Revision: D13617991

Pulled By: zdevito

fbshipit-source-id: 6d84a50bb356a54b4217f93219902755601b00e1

5 years ago  Fix namespace typo. (#16021)
Yavuz Yetim [Wed, 16 Jan 2019 00:17:01 +0000 (16:17 -0800)]
Fix namespace typo. (#16021)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16021

Adds nom:: so that TRIVIAL_CONVERTER works more generally.

Reviewed By: janewangfb

Differential Revision: D13664748

fbshipit-source-id: 100f47a8326e41bd0ac2ae281669f5a0363fe060

5 years ago  Fixing missing cpp tests for Caffe2 setup.py builds (#16037)
Jesse Hellemn [Tue, 15 Jan 2019 20:10:23 +0000 (12:10 -0800)]
Fixing missing cpp tests for Caffe2 setup.py builds (#16037)

Summary:
These were broken (always skipped in setup.py builds) by https://github.com/pytorch/pytorch/pull/15917
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16037

Differential Revision: D13675549

Pulled By: pjh5

fbshipit-source-id: fed50855dd0b5d0c80fface3d8b2156f18aae4e7

5 years ago  Test cases for calling caffe2 LayerNorm from PyTorch and JIT
Sebastian Messmer [Tue, 15 Jan 2019 19:24:00 +0000 (11:24 -0800)]
Test cases for calling caffe2 LayerNorm from PyTorch and JIT

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15895

Reviewed By: dzhulgakov

Differential Revision: D13615336

fbshipit-source-id: de28fef8ce025d6d37a4c80c029ec97b7195cfd9

5 years ago  Enhance cpu support on gloo based multi-nodes mode. (#11330)
Shane Li [Tue, 15 Jan 2019 19:07:55 +0000 (11:07 -0800)]
Enhance cpu support on gloo based multi-nodes mode. (#11330)

Summary:
1. Add some gloo communication operators to the related fallback list;
2. Work around compile errors when using a fallback operator whose CPU operator inherits directly from 'OperatorBase', like PrefetchOperator;
3. Add new CPU context support for some Python module files and the resnet50 training example file.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11330

Reviewed By: yinghai

Differential Revision: D13624519

Pulled By: wesolwsk

fbshipit-source-id: ce39d57ddb8cd7786db2e873bfe954069d972f4f

5 years ago  Constant prop prim::None (#15979)
Elias Ellison [Tue, 15 Jan 2019 18:56:17 +0000 (10:56 -0800)]
Constant prop prim::None (#15979)

Summary:
Previously we were only constant propping prim::Constants, but we should be constant propping prim::None as well.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15979

Differential Revision: D13664692

Pulled By: eellison

fbshipit-source-id: 01839403576c21fc030c427e49275b8e1210fa8f

5 years ago  Add a note about THNN height/width/etc argument reordering. (#15819)
Edward Yang [Tue, 15 Jan 2019 18:19:22 +0000 (10:19 -0800)]
Add a note about THNN height/width/etc argument reordering. (#15819)

Summary:
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15819

Differential Revision: D13665297

Pulled By: ezyang

fbshipit-source-id: 4570275bc9e65269788f836f2447d09474cefeff

5 years ago  Fix Python path finding for benchmark tests
Jesse Hellemn [Tue, 15 Jan 2019 18:12:18 +0000 (10:12 -0800)]
Fix Python path finding for benchmark tests

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16022

Differential Revision: D13673792

Pulled By: pjh5

fbshipit-source-id: 177a823ef343b7f60e26ad9ef51415332045438d

5 years ago  Quantized RNNCell modules (#15469)
James Reed [Tue, 15 Jan 2019 18:07:18 +0000 (10:07 -0800)]
Quantized RNNCell modules (#15469)

Summary:
Similarly to https://github.com/pytorch/pytorch/pull/13777, we apply post-processing quantization to RNN cell modules (`RNNCell`, `LSTMCell`, and `GRUCell`).

A further follow-up PR will involve quantizing the full `RNN`, `GRU`, and `LSTM` modules. This depends on those modules being scriptable as part of the standard library scripting effort, though. Note that infrastructure in this PR, such as `gather_quantized_params`, is currently unused but should be used in the future when we can port over the full RNN modules.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15469

Differential Revision: D13545802

Pulled By: jamesr66a

fbshipit-source-id: ad3b694517842893ea619438e9f5e88fd7b96510

5 years ago  Miscellaneous broken RSTs fixed (#16033)
Derek Kim [Tue, 15 Jan 2019 17:44:50 +0000 (09:44 -0800)]
Miscellaneous broken RSTs fixed (#16033)

Summary:
https://pytorch.org/docs/master/tensors.html#torch.Tensor.bernoulli_
https://pytorch.org/docs/master/torch.html#torch.addmm
https://pytorch.org/docs/master/distributed_deprecated.html#torch.distributed.deprecated.reduce_multigpu
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16033

Differential Revision: D13671202

Pulled By: soumith

fbshipit-source-id: 276e10e610affe205376573e7f0f9894695d218d

5 years ago  Add PyTorchPredictorContainer (#15899)
Lu Fang [Tue, 15 Jan 2019 17:13:16 +0000 (09:13 -0800)]
Add PyTorchPredictorContainer (#15899)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15899

Add PyTorchPredictorContainer to support multiple jit script modules

Reviewed By: pritamdamania87

Differential Revision: D13596139

fbshipit-source-id: 3ce0bdf2f4dbba7aa1d20e824d03e5ac98f5d887

5 years ago  Add `itertools.{prod, combinations, combinations_with_replacement}` like op to pytorch (#9393)
Xiang Gao [Tue, 15 Jan 2019 16:24:27 +0000 (08:24 -0800)]
Add `itertools.{prod, combinations, combinations_with_replacement}` like op to pytorch (#9393)

Summary:
closes https://github.com/pytorch/pytorch/issues/7580
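
The new operators mirror their `itertools` counterparts; a quick example (assuming the `torch.cartesian_prod` and `torch.combinations` entry points this PR introduces):

```
import torch

a = torch.tensor([1, 2, 3])
b = torch.tensor([4, 5])
print(torch.cartesian_prod(a, b))  # like itertools.product
print(torch.combinations(a, r=2))  # like itertools.combinations
print(torch.combinations(a, r=2, with_replacement=True))
```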
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9393

Differential Revision: D13659628

Pulled By: zou3519

fbshipit-source-id: 3a233befa785709395a793ba8833413be394a6fd

5 years ago  use fbgemm gconv in dnnlowp (#16020)
Jongsoo Park [Tue, 15 Jan 2019 07:59:33 +0000 (23:59 -0800)]
use fbgemm gconv in dnnlowp (#16020)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16020

This needs to go over more iterations. For conv, I think we need a high-level interface that abstracts out the low-level details of which code path will be taken (acc16, outlier-aware, depth-wise, group conv, ...); otherwise the client code will be complex, as can be seen from the DNNLOWP Conv ops. This will also help us make the interface more stable.

Reviewed By: dskhudia, jianyuh

Differential Revision: D13588996

fbshipit-source-id: 9afce9e441bcaf20437fcc2874fb9d4165a46bcb

5 years ago  `var` for multiple dimensions (#15892)
Brennan Vincent [Tue, 15 Jan 2019 04:14:04 +0000 (20:14 -0800)]
`var` for multiple dimensions (#15892)

Summary:
Timings are the same as for `std`.
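
A quick sketch of the new multi-dimension reduction:

```
import torch

x = torch.randn(4, 5, 6)
print(x.var(dim=(0, 2)).shape)  # torch.Size([5]); several dims reduced at once
```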
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15892

Differential Revision: D13651173

Pulled By: umanwizard

fbshipit-source-id: a26bf1021dd972aa9e3e60fb901cd4983bfa190f

5 years ago  Updating submodules
svcscm [Tue, 15 Jan 2019 02:42:13 +0000 (18:42 -0800)]
Updating submodules

Reviewed By: yns88

fbshipit-source-id: 19841cff4a7fd69318d7828db75c16cd75757edd

5 years ago  Updating submodules
svcscm [Tue, 15 Jan 2019 02:35:14 +0000 (18:35 -0800)]
Updating submodules

Reviewed By: yns88

fbshipit-source-id: 68b7c41366618ffd636c2b9c45c7ffbbcbc44f85

5 years ago  nomnigraph - easy - use new test utils in converter_nomnigraph_test (#15751)
Duc Ngo [Tue, 15 Jan 2019 02:33:46 +0000 (18:33 -0800)]
nomnigraph - easy - use new test utils in converter_nomnigraph_test (#15751)

Summary:
Use the new test utils in converter_nomnigraph_test, and add utils to set the device option name, external inputs, and outputs.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15751

Differential Revision: D13586228

Pulled By: duc0

fbshipit-source-id: ff809dd7bf9f30641ce2a6fef7e2810f005521c2

5 years ago  Remove code duplication (#15880)
Sebastian Messmer [Tue, 15 Jan 2019 01:55:13 +0000 (17:55 -0800)]
Remove code duplication (#15880)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15880

The layer_norm reference was implemented twice. Removing one of them.

Reviewed By: dzhulgakov

Differential Revision: D13611232

fbshipit-source-id: cee96c78d3255c3a4e34300693bf9260cf096615

5 years ago  Fix ormqr docs, fixes #15565 (#15694)
Edward Yang [Tue, 15 Jan 2019 01:03:52 +0000 (17:03 -0800)]
Fix ormqr docs, fixes #15565 (#15694)

Summary:
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
cc meganset
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15694

Differential Revision: D13573064

Pulled By: zou3519

fbshipit-source-id: 1d0b693d7c26db91826b81e6c98b45a69b5e9bc4

5 years ago  Fix c10d checking errno unconditionally (#15986)
SsnL [Mon, 14 Jan 2019 23:59:29 +0000 (15:59 -0800)]
Fix c10d checking errno unconditionally (#15986)

Summary:
In #15964, I learned that `errno` is only meaningful if the function call fails. E.g., on some macOS versions, a successful `fork()` sets `errno` to `EINVAL` in the child process. This commit changes the `SYSCALL` macro so error checking is only done when an error happens. This means checking whether `rv == -1` for most calls, but checking whether `rv == nullptr` for `inet_ntop`.

Now `SYSCALL` accepts a second argument `success_cond`, which should be an expression returning whether the call succeeded. `SYSCHECK_ERR_RETURN_NEG1` is the shorthand for checking whether `rv` is `-1`.

Any suggestion on better macro names is welcome.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15986

Reviewed By: janewangfb

Differential Revision: D13661790

Pulled By: pietern

fbshipit-source-id: 9551b14b9f88805454a7bfb8e4d39e0f3aed8131

5 years ago  add tensor.to to script (#15976)
Elias Ellison [Mon, 14 Jan 2019 23:44:50 +0000 (15:44 -0800)]
add tensor.to to script (#15976)

Summary:
Previously it only worked with keyword arguments. Now it is fully compatible.

Fix for: https://github.com/pytorch/pytorch/issues/15478
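
For example, the positional form now compiles (a minimal sketch):

```
import torch

@torch.jit.script
def f(x: torch.Tensor) -> torch.Tensor:
    # previously only keyword forms like x.to(dtype=torch.float) compiled
    return x.to(torch.float)

print(f(torch.ones(2, dtype=torch.int64)).dtype)  # torch.float32
```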
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15976

Differential Revision: D13643979

Pulled By: eellison

fbshipit-source-id: 6a47bce7db362da80452adffebd2732f8e62a240

5 years ago  Split Caffe2 CI into cmake-only and python builds (#15917)
Jesse Hellemn [Mon, 14 Jan 2019 23:10:49 +0000 (15:10 -0800)]
Split Caffe2 CI into cmake-only and python builds (#15917)

Summary:
bypass-lint

- Change all Caffe2 builds to use setup.py instead of cmake
- Add a -cmake- Caffe2 build configuration that uses cmake and only builds cpp
- Move skipIfCI logic from onnx test scripts to the rest of CI logic
- Removal of old PYTHONPATH/LD_LIBRARY_PATH/etc. env management
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15917

Reviewed By: orionr

Differential Revision: D13637583

Pulled By: pjh5

fbshipit-source-id: c5c5639db0251ba12b6e4b51b2ac3b26a8953153

5 years ago  Make call operator on module holder call forward (#15831)
Peter Goldsborough [Mon, 14 Jan 2019 22:32:32 +0000 (14:32 -0800)]
Make call operator on module holder call forward (#15831)

Summary:
In Python, you can use the call operator to invoke the `forward()` method of a module. In C++ this was previously not possible, because I couldn't figure out how to deduce the return type of a module's `forward()` method under the constraint that `forward()` may not exist at all (since the base module class in C++ does not mandate a `forward()` method). I now figured it out, so the call operator can be used.

ezyang ebetica
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15831

Differential Revision: D13652676

Pulled By: goldsborough

fbshipit-source-id: ccab45a15215dda56460e560f0038781b539135f

5 years ago  Updating submodules
svcscm [Mon, 14 Jan 2019 20:40:28 +0000 (12:40 -0800)]
Updating submodules

Reviewed By: yns88

fbshipit-source-id: 0e31357e8a34614226e8948ae76d67e0786a9196

5 years ago  Fix broken rst of torch.nn.utils.spectral_norm and others (#15995)
Derek Kim [Mon, 14 Jan 2019 15:28:58 +0000 (07:28 -0800)]
Fix broken rst of torch.nn.utils.spectral_norm and others (#15995)

Summary:
- Currently, the [rst](https://pytorch.org/docs/stable/nn.html#torch.nn.utils.spectral_norm) looks broken, at least in my browser, so I fixed it.
- I thought a subscript might be needed on the left-hand W in the definition.
- A few typos fixed.

crcrpar
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15995

Differential Revision: D13649888

Pulled By: soumith

fbshipit-source-id: 00a2c3b043c7c8ebdd9fc2bf77ba27ae695fee3f

5 years ago  Add cuda.reset_max_memory_* (#15985)
SsnL [Mon, 14 Jan 2019 15:28:50 +0000 (07:28 -0800)]
Add cuda.reset_max_memory_* (#15985)

Summary:
Addresses #15968
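
A usage sketch of the new reset APIs:

```
import torch

if torch.cuda.is_available():
    x = torch.empty(1024, 1024, device="cuda")
    del x
    # reset the peak-usage counters without affecting current usage
    torch.cuda.reset_max_memory_allocated()
    torch.cuda.reset_max_memory_cached()
    print(torch.cuda.max_memory_allocated())
```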
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15985

Differential Revision: D13649916

Pulled By: soumith

fbshipit-source-id: a207aea5709a79dba7a6fc541d0a70103f49efff

5 years ago  libshm retry on EINTR (#15964)
SsnL [Mon, 14 Jan 2019 12:24:50 +0000 (04:24 -0800)]
libshm retry on EINTR (#15964)

Summary:
fixes https://github.com/pytorch/pytorch/issues/14314
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15964

Differential Revision: D13639034

Pulled By: soumith

fbshipit-source-id: 44592762aa46982e5d3616d55b5666a2c2ce9105

5 years ago  Improved the documentation for torch.nn.functional.pad (#15984)
Derek Kim [Mon, 14 Jan 2019 12:06:38 +0000 (04:06 -0800)]
Improved the documentation for torch.nn.functional.pad (#15984)

Summary:
- Fixed a few typos and grammar errors.
- Changed the sentences a bit.
- Changed the format of the tuples to be consistent with padding notations in the other places. For example, `ReflectionPad2d`'s docstring contains :math:`H_{out} = H_{in} + \text{padding\_top} + \text{padding\_bottom}`.

I also made sure that the generated html doesn't break.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15984

Differential Revision: D13649939

Pulled By: soumith

fbshipit-source-id: 0abfa22a7bf1cbc6546ac4859652ce8741d41232

5 years ago  Improve the docstring of nn.random.fork_rng (#15960)
Derek Kim [Mon, 14 Jan 2019 10:38:36 +0000 (02:38 -0800)]
Improve the docstring of nn.random.fork_rng (#15960)

Summary:
Improved the docstring of nn.random.fork_rng
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15960

Differential Revision: D13649929

Pulled By: soumith

fbshipit-source-id: d3843179a2f1f838792c2f07f34deda2c06af56e

5 years ago  doc fixes (#15990)
surgan12 [Mon, 14 Jan 2019 07:35:07 +0000 (23:35 -0800)]
doc fixes (#15990)

Summary: fixes #15597, #15283 and #10258

Differential Revision: D13649905

Pulled By: soumith

fbshipit-source-id: 753f46c2c96c61fba460019d9ed3e0d047d42ee7

5 years ago  simplify lambda function use in conv dnnlowp ops to fix #15911 (#15996)
Jongsoo Park [Mon, 14 Jan 2019 07:30:09 +0000 (23:30 -0800)]
simplify lambda function use in conv dnnlowp ops to fix #15911 (#15996)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15996

As reported in issue #15911, gcc 4.9 was getting an internal compiler error due to a complex use of lambda functions in conv_dnnlowp_op.cc and conv_acc16_op.cc. This diff simplifies them.

Reviewed By: viswanathgs

Differential Revision: D13648264

fbshipit-source-id: 1551ae8a0a7653749185dca51ccceb2471b96b82

5 years ago  fix RandomSampler length (#15991)
kyryl [Mon, 14 Jan 2019 07:07:16 +0000 (23:07 -0800)]
fix RandomSampler length (#15991)

Summary:
Hi!

This PR addresses issue #15537. Please review.

Thanks!
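
A quick check of the intended behavior (a sketch, assuming the semantics discussed in #15537):

```
from torch.utils.data import RandomSampler

sampler = RandomSampler(range(10), replacement=True, num_samples=5)
print(len(sampler))  # 5: the length follows num_samples, not the dataset size
```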

Differential Revision: D13649890

Pulled By: soumith

fbshipit-source-id: 166212ae383331345423236dfc4fa2ea907d265d

5 years ago  Fix static build on Windows (#15989)
peter [Mon, 14 Jan 2019 06:50:07 +0000 (22:50 -0800)]
Fix static build on Windows (#15989)

Summary:
Tested locally. It can now be started by running `set EXTRA_CAFFE2_CMAKE_FLAGS= -DTORCH_STATIC=1` before the build. If we want to make sure it works, then maybe we should add it to CI.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15989

Differential Revision: D13649935

Pulled By: soumith

fbshipit-source-id: 956945ed572819d8cf0bc9bd48df3ea9bc6f4a8a

5 years ago  Caffe 2: Reshape Op upgrade (#15380)
Sergei Nikolaev [Mon, 14 Jan 2019 06:46:39 +0000 (22:46 -0800)]
Caffe 2: Reshape Op upgrade (#15380)

Summary:
This is a follow-up on #13945, where we had to turn off some TRT tests because some ops were not ready to accept ONNX opset 9+ models. This PR fixes Reshape.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15380

Differential Revision: D13649825

Pulled By: houseroad

fbshipit-source-id: b72e62803de5b63cc001c3fe4b3bf64dfa996e94

5 years ago  fix compile error reported in issue #15911 (#15953)
Jongsoo Park [Sun, 13 Jan 2019 05:00:25 +0000 (21:00 -0800)]
fix compile error reported in issue #15911 (#15953)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15953

Fix issue reported in https://github.com/pytorch/pytorch/issues/15911

Reviewed By: csummersea

Differential Revision: D13633256

fbshipit-source-id: 3808f100ff7dedfe5e20708e72e6081ff07eb32c

5 years ago  Back out "[pt1][tensor] Remove caffe2::ShareData" (#15983)
Jerry Zhang [Sat, 12 Jan 2019 15:04:49 +0000 (07:04 -0800)]
Back out "[pt1][tensor] Remove caffe2::ShareData" (#15983)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15983

Original commit changeset: 6e4275d02f4c

Reviewed By: supertopher, Yangqing

Differential Revision: D13644123

fbshipit-source-id: 4b15a4c62995c0e68aad58465600409e302e6504

5 years ago  Remove StopGradient op when it is inplace in inference (#12152)
wuhuikx [Sat, 12 Jan 2019 07:52:28 +0000 (23:52 -0800)]
Remove StopGradient op when it is inplace in inference (#12152)

Summary:
For inference, if the StopGradient op is in-place, we just remove it.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12152

Differential Revision: D13633946

Pulled By: yinghai

fbshipit-source-id: 57762bcc37b38a1d39cb4af316ca50bfe961b105

5 years ago  Add global pooling specialization and also update MaxPooling on GPU (#15824)
Xiaomeng Yang [Sat, 12 Jan 2019 06:35:12 +0000 (22:35 -0800)]
Add global pooling specialization and also update MaxPooling on GPU (#15824)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15824

Add global pooling specialization and also update MaxPooling on GPU

Reviewed By: houseroad

Differential Revision: D13596340

fbshipit-source-id: c8a42aa69ee92c383c9f19d3ed57b77cb3e5bd28

5 years ago  AliasDB interface cleanup (#15656)
Michael Suo [Sat, 12 Jan 2019 04:04:14 +0000 (20:04 -0800)]
AliasDB interface cleanup (#15656)

Summary:
This is the first of several PRs to simplify AliasDb usage.
- Hide the concept of wildcards from users. They are too hard to think about and too easy to forget about.
- Start moving "mutability-safe" graph mutation methods into AliasDb (right now, the various methods that deal with topological move).

Eventually I want to create a "mutability-aware" handle to the graph. If you only use that handle to transform the graph, you can be sure that all transformations are safe with respect to mutability.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15656

Differential Revision: D13615492

Pulled By: suo

fbshipit-source-id: 5c39a157b4ea76f1f976315d06a314a89cc4f22f

5 years ago  Updating submodules
svcscm [Sat, 12 Jan 2019 03:51:33 +0000 (19:51 -0800)]
Updating submodules

Reviewed By: zpao

fbshipit-source-id: 2671ea6bb594280a9d3352fbfa3628f28c6847aa

5 years ago  Add the normalize transform to the core library (#15891)
Peter Goldsborough [Sat, 12 Jan 2019 03:45:40 +0000 (19:45 -0800)]
Add the normalize transform to the core library (#15891)

Summary:
Adds the `Normalize` transform to the core C++ frontend library.

ebetica ezyang soumith
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15891

Differential Revision: D13642167

Pulled By: goldsborough

fbshipit-source-id: 573428e626d6106cf2aadf3dc2e2aecb9a85efc3

5 years ago  3x3x3 depthwise convolution with per channel quantization (#15775)
Jongsoo Park [Sat, 12 Jan 2019 03:33:40 +0000 (19:33 -0800)]
3x3x3 depthwise convolution with per channel quantization (#15775)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15775

Pull Request resolved: https://github.com/pytorch/FBGEMM/pull/55

fbgemm didn't have per-channel quantization for 3x3x3 depth-wise convolution

Reviewed By: jianyuh

Differential Revision: D13587438

fbshipit-source-id: 91c36fae7a0e8386e3bc49808e18918b01681dd1

5 years ago  Make it consistent for OperatorBase usage (#15908)
Jianyu Huang [Sat, 12 Jan 2019 03:21:47 +0000 (19:21 -0800)]
Make it consistent for OperatorBase usage (#15908)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15908

"OperatorBase::" is changed to "this->template ".

For example,

  // This no longer works:
  OperatorBase::GetSingleArgument<>()
  // Should change to:
  this->template GetSingleArgument<>()

https://fb.workplace.com/groups/101100140348621/permalink/576804082778222/

Follow up of D13574832.

Sample Diff:
D9319742, D10045844.

Reviewed By: jspark1105

Differential Revision: D13613574

fbshipit-source-id: 2cb4094557b4af78d41e289816cad3e1194fb82c

5 years ago  rocm build (#15981)
Jerry Zhang [Sat, 12 Jan 2019 02:37:03 +0000 (18:37 -0800)]
rocm build (#15981)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15981

caffe2/operators/unique_ops.cu translated to caffe2/operators/hip/unique_ops.hip breaks the rocm build.

Reviewed By: BIT-silence

Differential Revision: D13646129

fbshipit-source-id: 900a14e14216686ec4560b30df2eabbd7ec2ff91

5 years ago  Updating submodules
svcscm [Sat, 12 Jan 2019 01:57:02 +0000 (17:57 -0800)]
Updating submodules

Reviewed By: zpao

fbshipit-source-id: 3bbf550cb0bfe71c05b73b8bc4ce97285b50608b

5 years ago  Tensor construction codemod(ResizeLike) - 2/3 (#15940)
Jerry Zhang [Sat, 12 Jan 2019 01:39:11 +0000 (17:39 -0800)]
Tensor construction codemod(ResizeLike) - 2/3 (#15940)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15940

Codemod generated with clangr shard mode, 25 files per diff,
motivation: https://github.com/pytorch/pytorch/pull/12407

Reviewed By: smessmer

Differential Revision: D13629047

fbshipit-source-id: 5f0641a9aaab9045fa63c32c6a07a4cab3340cc3

5 years ago  Fixed typo in batchnorm docstrings
James Webber [Sat, 12 Jan 2019 01:26:12 +0000 (17:26 -0800)]
Fixed typo in batchnorm docstrings

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15975

Differential Revision: D13642271

Pulled By: soumith

fbshipit-source-id: 60ffa392bf1f916f2b93c943bb44a642a9815c42

5 years ago  Tensor reinitialization codemod - 4/5 (#15967)
Jerry Zhang [Sat, 12 Jan 2019 00:38:15 +0000 (16:38 -0800)]
Tensor reinitialization codemod - 4/5 (#15967)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15967

Codemod generated with clangr shard mode, 25 files per diff,
To eliminate partially initialized Tensors, we split the initialization of local Tensor variables into two steps: first declare an uninitialized Tensor, and
call `ReinitializeTensor` to initialize it.
motivation: https://github.com/pytorch/pytorch/pull/12407

Reviewed By: smessmer

Differential Revision: D13586735

fbshipit-source-id: eae2d79e1107a2e813ce3809e690af4706aaa9ca

5 years ago  Fix the lint (#15973)
Lu Fang [Fri, 11 Jan 2019 23:57:12 +0000 (15:57 -0800)]
Fix the lint (#15973)

Summary:
Fix the lint error introduced in https://github.com/pytorch/pytorch/pull/15965
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15973

Differential Revision: D13640856

Pulled By: houseroad

fbshipit-source-id: 3f14d9898dcfb0fc469468f63fa1461c88b66b2e

5 years ago  Tensor reinitialization codemod - 2/5 (#15947)
Jerry Zhang [Fri, 11 Jan 2019 22:55:56 +0000 (14:55 -0800)]
Tensor reinitialization codemod - 2/5 (#15947)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15947

Codemod generated with clangr shard mode, 25 files per diff,
To eliminate partially initialized Tensors, we split the initialization of local Tensor variables into two steps: first declare an uninitialized Tensor, and
call `ReinitializeTensor` to initialize it.
motivation: https://github.com/pytorch/pytorch/pull/12407

Reviewed By: smessmer

Differential Revision: D13586732

fbshipit-source-id: 5295ab27ca0155f96a4fccf9c0ba8a609101ba24

5 years ago  Expose dim() on type and use it in ONNX symbolics (#15933)
James Reed [Fri, 11 Jan 2019 22:51:17 +0000 (14:51 -0800)]
Expose dim() on type and use it in ONNX symbolics (#15933)

Summary:
While integrating fork/join into production translation, we found that trying to export `transpose()` where the input is of `TensorType` (rather than `CompleteTensorType`) failed. This is not ideal, since `TensorType` still contains the number of dimensions of the tensor, and that's all the `transpose` symbolic needs.

This PR introduces a pybind binding for `dim()` on `TensorType` (and `CompleteTensorType` by inheritance). We now use this in places where it logically makes sense in the symbolics: those symbolics which only require knowledge of the number of dimensions rather than concrete sizes.
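
A quick way to see the new binding from Python (a sketch; the traced graph input carries shape information, so the rank is available):

```
import torch

def f(x):
    return x.transpose(0, 1)

traced = torch.jit.trace(f, torch.randn(2, 3, 4))
inp = next(traced.graph.inputs())
print(inp.type().dim())  # 3: the rank alone suffices for the transpose symbolic
```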
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15933

Differential Revision: D13639657

Pulled By: jamesr66a

fbshipit-source-id: 6e50e407e93060085fd00a686a928764d0ec888d

5 years ago  Tensor construction codemod(ResizeLike) - 3/3 (#15943)
Jerry Zhang [Fri, 11 Jan 2019 22:10:14 +0000 (14:10 -0800)]
Tensor construction codemod(ResizeLike) - 3/3 (#15943)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15943

Codemod generated with clangr shard mode, 25 files per diff,
motivation: https://github.com/pytorch/pytorch/pull/12407

Reviewed By: smessmer

Differential Revision: D13629082

fbshipit-source-id: d3863615fd612f73bb73ac67159fd0f0d237fe5c

5 years ago  FC shape inference should use int64_t (#15961)
Lin Yang [Fri, 11 Jan 2019 22:09:50 +0000 (14:09 -0800)]
FC shape inference should use int64_t (#15961)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15961

as title

Reviewed By: yinghai

Differential Revision: D13634427

fbshipit-source-id: ec7d168b6272f0dac8a693401cfd0bea368f929a

5 years ago  Undo norm optimizations and add more documentation for parallel.h (#15885)
Christian Puhrsch [Fri, 11 Jan 2019 21:28:52 +0000 (13:28 -0800)]
Undo norm optimizations and add more documentation for parallel.h (#15885)

Summary:
See https://github.com/pytorch/pytorch/issues/15602
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15885

Differential Revision: D13614841

Pulled By: cpuhrsch

fbshipit-source-id: 5d3e45f499d36ac287dbbc2e45798aa51eb5bfdf

5 years ago  Add/fallback some operators for mkl-dnn (#11696)
Cheng,Penghui [Fri, 11 Jan 2019 20:48:57 +0000 (12:48 -0800)]
Add/fallback some operators for mkl-dnn (#11696)

Summary:
Implemented the LeakyRelu operator for mkl-dnn; the speed-up of a single operation is up to 10X on BDW.
Implemented the reshape operator for mkl-dnn; this resolves an occasional crash seen when using the fallback reshape operator.
Implemented the CreateBlobQueue and SafeEnqueueBlobs operators; this resolves a crash seen when using the fallback operators.
Added CreateBlobsQueueDBOp, TensorProtosDBInput, and CloseBlobsQueue as fallback operators.
Implemented the Adam operator for mkl-dnn; the speed-up of a single operator is up to 6X on BDW.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11696

Reviewed By: yinghai

Differential Revision: D10100438

Pulled By: wesolwsk

fbshipit-source-id: 0b6e06897cc11e0a8e349d80a870b1e72e47f10d

5 years ago  Don't call cudaStreamDestroy at destruction time (#15692)
Dmytro Dzhulgakov [Fri, 11 Jan 2019 20:32:50 +0000 (12:32 -0800)]
Don't call cudaStreamDestroy at destruction time (#15692)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15692

It was leading to occasional crashes with dynamically linked CUDA because the runtime was already destroyed.

Also, unique_ptr<T[]> is more suitable than deque<T> for the purpose.

Reviewed By: Yangqing

Differential Revision: D13571988

fbshipit-source-id: 37eb26dfbe361c49160367b53f87bd037c6c0e46

5 years ago  Tensor construction codemod(ResizeLike) - 1/3 (#15944)
Jerry Zhang [Fri, 11 Jan 2019 20:14:58 +0000 (12:14 -0800)]
Tensor construction codemod(ResizeLike) - 1/3 (#15944)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15944

Codemod generated with clangr shard mode, 25 files per diff,
motivation: https://github.com/pytorch/pytorch/pull/12407

Reviewed By: dzhulgakov

Differential Revision: D13628999

fbshipit-source-id: e17c44cec6746674dfd5c2a89c28c4ac0a3da450

5 years ago  Move nightly binary builds to 05:05 UTC (#15966)
Jesse Hellemn [Fri, 11 Jan 2019 19:41:22 +0000 (11:41 -0800)]
Move nightly binary builds to 05:05 UTC (#15966)

Summary:
This corresponds to 00:05 EST
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15966

Differential Revision: D13639027

Pulled By: pjh5

fbshipit-source-id: 6685a7af74329b2730e519afd10e350ef2258f32

5 years ago  Add backend checks for batch norm (#15955)
vishwakftw [Fri, 11 Jan 2019 19:19:56 +0000 (11:19 -0800)]
Add backend checks for batch norm (#15955)

Summary:
Fixes #15826

Changelog:
- Add backend checks in `batch_norm_cpu` and `batch_norm_cuda`
- Modify check in `checkBackend` to pass on undefined tensors.

Differential Revision: D13636410

Pulled By: soumith

fbshipit-source-id: 3b1cfe5ca8b7c0346569077163503065e75c2659

5 years ago  Add scalar_type_to_pytorch_type dict in ONNX symbolic
zrphercule [Fri, 11 Jan 2019 18:45:47 +0000 (10:45 -0800)]
Add scalar_type_to_pytorch_type dict in ONNX symbolic

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15965

Differential Revision: D13637521

Pulled By: zrphercule

fbshipit-source-id: 922cadc56f6380f67c14444cff4aa354a87150af

5 years ago  Register CPU/CUDA fuser dynamically (#15887)
Zachary DeVito [Fri, 11 Jan 2019 18:45:40 +0000 (10:45 -0800)]
Register CPU/CUDA fuser dynamically (#15887)

Summary:
This avoids a bunch of conditional compilation logic
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15887

Reviewed By: eellison

Differential Revision: D13613239

Pulled By: zdevito

fbshipit-source-id: a18fc69676b3ef19b4469ab58d8714d1f6efccbb

5 years ago  Simplify cat fusion (#15633)
Adam Paszke [Fri, 11 Jan 2019 18:17:54 +0000 (10:17 -0800)]
Simplify cat fusion (#15633)

Summary:
This makes the definition of a "fusable node" much simpler,
as we don't need to keep considering whether something has to be an
"exit node" at every step. The fuser now tries to maximize the
pointwise fusions first, and proceeds to prepending chunks and appending
concats only once a fixed point is reached.

This patch not only makes the fuser much simpler to reason about,
it also makes it significantly easier to implement features like SumToSize
fusion, improving the performance of derivative graphs.

cc zou3519 mruberry
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15633

Differential Revision: D13575306

Pulled By: zou3519

fbshipit-source-id: 0c55ea61d65d1f1ed3d75a8e1e83bc85a83f3aff

5 years ago  Add bindings for .cpu() & .cuda() to script (#15904)
Elias Ellison [Fri, 11 Jan 2019 18:00:37 +0000 (10:00 -0800)]
Add bindings for .cpu() & .cuda() to script (#15904)

Summary:
Adding bindings for .cpu() and .cuda() to script.

It's worth noting that if the device remains unchanged, then the returned tensor aliases the input, but if it does change, then they do not alias each other.
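
A small sketch of both the binding and the aliasing behavior:

```
import torch

@torch.jit.script
def to_cpu(x: torch.Tensor) -> torch.Tensor:
    return x.cpu()  # .cpu() and .cuda() are now visible to the script compiler

x = torch.ones(3)
y = to_cpu(x)
print(y.data_ptr() == x.data_ptr())  # True: device unchanged, so y aliases x
```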
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15904

Differential Revision: D13632879

Pulled By: eellison

fbshipit-source-id: 024a04f267909674aa1e510562efd9cb081f407c

5 years ago  comment out large test cases for tril(u)_indices (#15959)
Shen Li [Fri, 11 Jan 2019 17:22:26 +0000 (09:22 -0800)]
comment out large test cases for tril(u)_indices (#15959)

Summary:
4GB is still too large and leads to CUDA OOM failures.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15959

Differential Revision: D13635146

Pulled By: mrshenli

fbshipit-source-id: 3dc34a03d6ed65c458839d8fa37cd05bf3bc8106

5 years ago  Automatic update of fbcode/onnx to 7abd834091f1024c11749dcfd25126802db9fdd5 (#15942)
Lu Fang [Fri, 11 Jan 2019 16:25:08 +0000 (08:25 -0800)]
update of fbcode/onnx to 7abd834091f1024c11749dcfd25126802db9fdd5 (#15942)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15942

Previous import was 8384c788939bc65463f9754b6a7a00b212b18ba1

Included changes:
- **[7abd834](https://github.com/onnx/onnx/commit/7abd834)**: Clarify some aspects of the Loop spec. (#1587) <Scott McKay>
- **[5a5b15f](https://github.com/onnx/onnx/commit/5a5b15f)**: Support rtol and atol at the model granularity (#1723) <Lu Fang>
- **[ba76e45](https://github.com/onnx/onnx/commit/ba76e45)**: print some information (#1724) <Lu Fang>
- **[797390d](https://github.com/onnx/onnx/commit/797390d)**: Update README.md (#1722) <Prasanth Pulavarthi>
- **[40cdb5f](https://github.com/onnx/onnx/commit/40cdb5f)**: repaire convtranspose shape inference (#1660) <peter yang>
- **[68fdb3f](https://github.com/onnx/onnx/commit/68fdb3f)**: [Minor] Fix Windows line ending in test coverage generating script (#1717) <Raymond Yang>
- **[00101bf](https://github.com/onnx/onnx/commit/00101bf)**: Remove ConstantLike op. Updates to ConstantOfShape op. (#1716) <Spandan Tiwari>
- **[c59e90a](https://github.com/onnx/onnx/commit/c59e90a)**: add a shape inference test for group conv (#1719) <Lu Fang>

Reviewed By: zrphercule

Differential Revision: D13629499

fbshipit-source-id: 4b3e4cb29bdb84c3777a8fb26263548efb20f317

5 years ago  Match NumPy by considering NaNs to be larger than any number when sorting (#15886)
Brennan Vincent [Fri, 11 Jan 2019 16:09:06 +0000 (08:09 -0800)]
Match NumPy by considering NaNs to be larger than any number when sorting (#15886)

Summary:
Fixes #15764
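
For example:

```
import torch

t = torch.tensor([3.0, float("nan"), 1.0])
print(t.sort().values)  # tensor([1., 3., nan]): NaNs sort last, as in NumPy
```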
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15886

Differential Revision: D13612971

Pulled By: umanwizard

fbshipit-source-id: 91f552a25d1fd108f2f0b10e09a0ce0364f8c21e

5 years ago  Port empty_strided to ATen. (#15948)
Gregory Chanan [Fri, 11 Jan 2019 15:55:17 +0000 (07:55 -0800)]
Port empty_strided to ATen. (#15948)

Summary:
Turns out this has basically been implemented already in Resize.h / Resize.cuh.
Also added some testing, basically just to check that empty_strided behaves equivalently to as_strided.
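
A quick sketch of the equivalence the new tests check:

```
import torch

a = torch.empty_strided((2, 3), (1, 2))
b = torch.empty(6).as_strided((2, 3), (1, 2))
assert a.size() == b.size() and a.stride() == b.stride()
```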
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15948

Differential Revision: D13631098

Pulled By: gchanan

fbshipit-source-id: eb0e04eead45e4cff393ebde340f9d265779e185

5 years ago  Move cudaDeviceProp to ATen (#14834)
Syed Tousif Ahmed [Fri, 11 Jan 2019 15:06:34 +0000 (07:06 -0800)]
Move cudaDeviceProp to ATen (#14834)

Summary:
This PR moves `deviceProperties` from the `THCState` struct to `CUDAContext` in ATen and hence takes one more step towards removing `THCState`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14834

Differential Revision: D13633956

Pulled By: soumith

fbshipit-source-id: 51820ac224fc566f17aa92570fd378cff4248596

5 years ago  Trivial typo fixings in nn.functional dropout* docstrings (#15951)
Derek Kim [Fri, 11 Jan 2019 06:40:25 +0000 (22:40 -0800)]
Trivial typo fixings in nn.functional dropout* docstrings (#15951)

Summary:
Defualt -> Default
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15951

Differential Revision: D13633875

Pulled By: soumith

fbshipit-source-id: 0da823ef235418396e9322089f6610b592e6990f

5 years ago  Resolves ptxas warnings when compiling for CUDA_ARCH 750 and a memoryType deprecation warning (#15461)
Syed Tousif Ahmed [Fri, 11 Jan 2019 05:41:48 +0000 (21:41 -0800)]
Resolves ptxas warnings when compiling for CUDA_ARCH 750 and a memoryType deprecation warning (#15461)

Summary:
When compiling for `TORCH_CUDA_ARCH_LIST=7.5` we were getting ptxas warnings (https://github.com/pytorch/pytorch/issues/14310). This was because we had some hardcoded values when using launch_bounds in kernels. The maximum number of threads per multiprocessor is 1024 for the Turing architecture (7.5) but 2048 for previous architectures. The hardcoded launch_bounds in the kernels were requesting 2048 threads when compiling for Turing and hence were generating the warning.

This PR adds a macro that checks the bounds on the launch_bounds value supplied. The maximum number of threads per block across all architectures is 1024. If a user supplies more than 1024, I just clamp it down to 512. Depending on this value, I set the minimum number of blocks per SM. This PR should resolve https://github.com/pytorch/pytorch/issues/14310. The incorrect gradient computation reported in that issue is probably due to the faulty card.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15461

Differential Revision: D13633952

Pulled By: soumith

fbshipit-source-id: 795aa151109f343ab5433bf3cb070cb6ec896fff

5 years ago  Fix fallback issues to handle inplace case (#15726)
Gu, Jinghui [Fri, 11 Jan 2019 03:44:29 +0000 (19:44 -0800)]
Fix fallback issues to handle inplace case (#15726)

Summary:
Fix fallback issues to handle inplace case
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15726

Differential Revision: D13591243

Pulled By: yinghai

fbshipit-source-id: 6897f1daacb36beabcdfc22c39242bbdfdd0e534

5 years ago  Optimize CPU version performance of the nonzero function. (#15925)
Vitaly Fedyunin [Fri, 11 Jan 2019 01:47:49 +0000 (17:47 -0800)]
Optimize CPU version performance of the nonzero function. (#15925)

Summary:
Same as #15190 but compatible with the MSVS compiler.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15925

Differential Revision: D13623473

Pulled By: VitalyFedyunin

fbshipit-source-id: d0db9dbc1a0d8fc9bda08348cb1d3763ae9f8679

5 years ago  Tensor reinitialization codemod - 5/5 (#15884)
Jerry Zhang [Fri, 11 Jan 2019 00:24:34 +0000 (16:24 -0800)]
Tensor reinitialization codemod - 5/5 (#15884)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15884

Codemod generated with clangr shard mode, 25 files per diff,
To eliminate partially initialized Tensors, we split the initialization of local Tensor variables into two steps: first declare an uninitialized Tensor, and
call `ReinitializeTensor` to initialize it.
motivation: https://github.com/pytorch/pytorch/pull/12407

Reviewed By: hyuen

Differential Revision: D13586737

fbshipit-source-id: dc8e49e9f29505b8898bb19f84c1a983f2d811ab

5 years ago  Add backward pass notes for eig() and symeig()
Evgeniy Zheltonozhskiy [Fri, 11 Jan 2019 00:20:19 +0000 (16:20 -0800)]
Add backward pass notes for eig() and symeig()

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15929

Differential Revision: D13626158

Pulled By: soumith

fbshipit-source-id: ab869560926036053c39d20b217ccef8767e7d3f

5 years ago  caffe2::Tensor::is_same() (#15407)
Sebastian Messmer [Fri, 11 Jan 2019 00:06:29 +0000 (16:06 -0800)]
caffe2::Tensor::is_same() (#15407)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15407

Don't ask the tensor for its intrusive pointer if we just want to check if two tensors are the same.
This mirrors ATen APIs.

Reviewed By: dzhulgakov

Differential Revision: D13520389

fbshipit-source-id: 681317f36f480ab60e532bb08a073f98f39770fd

5 years ago  Clean up Half (#15317)
Sebastian Messmer [Fri, 11 Jan 2019 00:06:27 +0000 (16:06 -0800)]
Clean up Half (#15317)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15317

- Merge bitcasts.h and Half.h
- Remove 'static' keyword

Reviewed By: dzhulgakov

Differential Revision: D13498492

fbshipit-source-id: 46d47143e7d3a9d3f4aa7d92379dbba015c97435

5 years ago  Move files to/from c10/core and c10/util (#15316)
Sebastian Messmer [Fri, 11 Jan 2019 00:06:27 +0000 (16:06 -0800)]
Move files to/from c10/core and c10/util (#15316)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15316

This starts cleaning up the files in c10 according to the module structure we decided on.

Move to c10/util:
- Half.h, Half-inl.h, Half.cpp, bitcasts.h

Move to c10/core:
- Device.h, Device.cpp
- DeviceType.h, DeviceType.cpp

i-am-not-moving-c2-to-c10

Reviewed By: dzhulgakov

Differential Revision: D13498493

fbshipit-source-id: dfcf1c490474a12ab950c72ca686b8ad86428f63

5 years ago  Remove Context from c10 operator schemas (#15312)
Sebastian Messmer [Fri, 11 Jan 2019 00:06:26 +0000 (16:06 -0800)]
Remove Context from c10 operator schemas (#15312)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15312

Context will soon be entirely obsolete. Remove it from the operator schema interface.

Reviewed By: dzhulgakov

Differential Revision: D13495323

fbshipit-source-id: caa0f8f092cd6284e510c3e1e3374fe2f8338364

5 years ago  Enable calling caffe2 LayerNorm from PyTorch and JIT (#15243)
Sebastian Messmer [Fri, 11 Jan 2019 00:06:26 +0000 (16:06 -0800)]
Enable calling caffe2 LayerNorm from PyTorch and JIT (#15243)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15243

Register it as a custom JIT op.

Reviewed By: dzhulgakov

Differential Revision: D13473791

fbshipit-source-id: 0f7e72e3efc85a75060a7597fadaf0a8bd289651

5 years ago  fix rocm build
Zachary DeVito [Fri, 11 Jan 2019 00:04:12 +0000 (16:04 -0800)]
fix rocm build

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15945

Differential Revision: D13630505

Pulled By: zdevito

fbshipit-source-id: a4d2ae1370ab475fc1711027c0c9d2a9192be195

5 years ago  Remove USE_CUDA and USE_ROCM in engine.cpp
bddppq [Thu, 10 Jan 2019 22:12:54 +0000 (14:12 -0800)]
Remove USE_CUDA and USE_ROCM in engine.cpp

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15893

Differential Revision: D13627319

Pulled By: zdevito

fbshipit-source-id: 7c72c1c6cc242143fb66383423c668c9b9810884

5 years ago  Extend note about contributing to the C++ frontend (#15902)
Peter Goldsborough [Thu, 10 Jan 2019 22:05:21 +0000 (14:05 -0800)]
Extend note about contributing to the C++ frontend (#15902)

Summary:
soumith ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15902

Differential Revision: D13628525

Pulled By: goldsborough

fbshipit-source-id: 70cf36d1bacd9d689d4fa4f2290886fd3765e89b

5 years ago  Fix different env variables in schedules runs pt 2 (#15934)
Jesse Hellemn [Thu, 10 Jan 2019 21:51:59 +0000 (13:51 -0800)]
Fix different env variables in schedules runs pt 2 (#15934)

Summary:
Unfortunately I do not know how to test this without merging it first
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15934

Reviewed By: orionr

Differential Revision: D13627472

Pulled By: pjh5

fbshipit-source-id: 35eced1483bbf3c0c3f6f62fb7bbbf2f200e50e6