platform/upstream/pytorch.git
5 years ago: Move schema inference to c10 (#18090)
Sebastian Messmer [Thu, 21 Mar 2019 21:51:38 +0000 (14:51 -0700)]
Move schema inference to c10 (#18090)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18090

This schema inference is needed by the c10 operator registration mechanism. Move it to c10.
It is going to be used by diffs stacked on top.

Reviewed By: ezyang

Differential Revision: D14491454

fbshipit-source-id: 0f8ddcdbd91467c8347d315dd443a1ca8b216481

5 years ago: Allow registering same operator schema multiple times (#18038)
Sebastian Messmer [Thu, 21 Mar 2019 21:51:38 +0000 (14:51 -0700)]
Allow registering same operator schema multiple times (#18038)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18038

Now that we have named overloads, we can allow registering the same function schema multiple times and just check it's identical.

This is going to be used in custom op registration since they register the schema every time a kernel is registered.

Reviewed By: dzhulgakov

Differential Revision: D14467494

fbshipit-source-id: 2c26cf72a64b65f120afe05e989302ec42597515

5 years ago: Rename trtrs to triangular_solve (#18213)
vishwakftw [Thu, 21 Mar 2019 21:18:38 +0000 (14:18 -0700)]
Rename trtrs to triangular_solve (#18213)

Summary:
Changelog:
- Renames `trtrs` to `triangular_solve` to remain consistent with `cholesky_solve` and `solve`.
- Rename all tests, fix callsites
- Create a tentative alias for `triangular_solve` under the name `trtrs`, and add a deprecation warning to not promote usage.
- Move `isnan` to _torch_docs.py
- Remove unnecessary imports
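
For illustration (not from the commit), a call using the new name; note that later PyTorch releases superseded this API with `torch.linalg.solve_triangular`:

```python
import torch

# Solve A x = b for a lower-triangular A (values are made up).
A = torch.tensor([[2., 0.],
                  [1., 3.]])
b = torch.tensor([[2.],
                  [5.]])
x, _ = torch.triangular_solve(b, A, upper=False)  # returns (solution, A)
# x == [[1.0], [1.3333]]
```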
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18213

Differential Revision: D14566902

Pulled By: ezyang

fbshipit-source-id: 544f57c29477df391bacd5de700bed1add456d3f

5 years ago: Fix contribution_guide docs (#18237)
kshitij12345 [Thu, 21 Mar 2019 20:10:34 +0000 (13:10 -0700)]
Fix contribution_guide docs (#18237)

Summary:
Fixes a typo and a link in `docs/source/community/contribution_guide.rst`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18237

Differential Revision: D14566907

Pulled By: ezyang

fbshipit-source-id: 3a75797ab6b27d28dd5566d9b189d80395024eaf

5 years ago: Updating submodules
svcscm [Thu, 21 Mar 2019 20:08:10 +0000 (13:08 -0700)]
Updating submodules

Reviewed By: yns88

fbshipit-source-id: 80b00c33e6f6c7cfa08f645cd33419f6545f45d2

5 years ago: Optimize group_norm_op (#17945)
Xiaomeng Yang [Thu, 21 Mar 2019 19:56:20 +0000 (12:56 -0700)]
Optimize group_norm_op (#17945)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17945

Optimize group_norm_op

Reviewed By: houseroad

Differential Revision: D14419908

fbshipit-source-id: 4024b5c5dbeff97f4f026d61fc44af1f0e98ed68

5 years ago: Enable running of slow tests in CI. (#18236)
Edward Yang [Thu, 21 Mar 2019 19:37:00 +0000 (12:37 -0700)]
Enable running of slow tests in CI. (#18236)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18236
ghimport-source-id: 2bb80d017c2ea833669a2d55b340a922b2d44685

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18236 Enable running of slow tests in CI.**
* #18231 Add a decorator for marking slow tests.

These tests only run on master, as they are slow.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision: D14563115

fbshipit-source-id: f54ddef4abedc7e872e58657fc9ac537952773d0

5 years ago: Run clang-format on torch/csrc/distributed/c10d
Pieter Noordhuis [Thu, 21 Mar 2019 18:49:21 +0000 (11:49 -0700)]
Run clang-format on torch/csrc/distributed/c10d

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18255

Differential Revision: D14563072

Pulled By: pietern

fbshipit-source-id: bd83f90ae949b14bc95f4009ba12319c9b7936d0

5 years ago: Shut up compiler about unused the_type. (#18278)
Edward Yang [Thu, 21 Mar 2019 18:36:12 +0000 (11:36 -0700)]
Shut up compiler about unused the_type. (#18278)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18278
ghimport-source-id: 3c35f6e7229c3c2b3a27d96370d7c05fad58365e

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18278 Shut up compiler about unused the_type.**

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision: D14563050

fbshipit-source-id: 4b516f6c9ef3784d1430f793f304066c351b1a93

5 years ago: Add a decorator for marking slow tests. (#18231)
Edward Yang [Thu, 21 Mar 2019 18:08:11 +0000 (11:08 -0700)]
Add a decorator for marking slow tests. (#18231)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18231
ghimport-source-id: 78c230f60c41877fe91b89c8c979b160f36f856b

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18231 Add a decorator for marking slow tests.**

The general strategy:
- It's a normal skip decorator, which triggers a skip if
  PYTORCH_TEST_WITH_SLOW is not set.
- It also annotates the method in question that says it's
  slow.  We use this to implement a catch-all skipper in
  setUp that skips all non-slow tests when
  PYTORCH_TEST_SKIP_FAST is set.
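
A minimal sketch of this strategy (assuming unittest-based tests; the helper names here are illustrative, not the actual PyTorch ones):

```python
import os
import unittest

def slowTest(fn):
    # A normal skip decorator: skip unless PYTORCH_TEST_WITH_SLOW is set.
    fn = unittest.skipIf(
        os.environ.get("PYTORCH_TEST_WITH_SLOW") != "1",
        "test is slow; set PYTORCH_TEST_WITH_SLOW=1 to run it",
    )(fn)
    # Also annotate the method so setUp can recognize slow tests.
    fn.__dict__["slow_test"] = True
    return fn

class TestCase(unittest.TestCase):
    def setUp(self):
        # Catch-all skipper: under PYTORCH_TEST_SKIP_FAST, skip non-slow tests.
        test_method = getattr(self, self._testMethodName)
        if (os.environ.get("PYTORCH_TEST_SKIP_FAST") == "1"
                and not getattr(test_method, "slow_test", False)):
            self.skipTest("skipping fast test as PYTORCH_TEST_SKIP_FAST is set")
```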

I added a little smoketest to test_torch and showed that I get:

```
Ran 432 tests in 0.017s
OK (skipped=431)
```

when running with PYTORCH_TEST_WITH_SLOW=1 and PYTORCH_TEST_SKIP_FAST=1

CI integration coming in later patch, as well as nontrivial uses of
this decorator.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision: D14544441

fbshipit-source-id: 54435ce4ec827193e019887178c09ebeae3ae2c9

5 years ago: lint changes
Igor Fedan [Thu, 21 Mar 2019 18:04:15 +0000 (11:04 -0700)]
lint changes

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18276

Differential Revision: D14563385

Pulled By: ifedan

fbshipit-source-id: 12a51dbdb7b9e96be9fefa21fe298796b1ae6b58

5 years ago: move median to ATen (#17637)
Thomas Viehmann [Thu, 21 Mar 2019 16:59:20 +0000 (09:59 -0700)]
move median to ATen (#17637)

Summary:
This moves median to ATen.

- median with dimension reduces to kthvalue
- median without dimension (aka medianall) is implemented in parallel to kthvalue because we would not want to reshape (copying for non-contiguous) and then copy again in kthvalue. We can use the helper functions we moved from kthvalue.
- `median_cuda` was accidentally already put into ATen in #17544.
- The quickselect algorithm without indices for CPU in TH is now obsolete and removed.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17637

Differential Revision: D14346510

Pulled By: ezyang

fbshipit-source-id: c07ad144efbd6b4194179bb1c02635862521d8cb

5 years ago: Fix B903 lint: save memory for data classes with slots/namedtuple (#18184)
Edward Yang [Thu, 21 Mar 2019 16:06:30 +0000 (09:06 -0700)]
Fix B903 lint: save memory for data classes with slots/namedtuple (#18184)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18184
ghimport-source-id: 2ce860b07c58d06dc10cd7e5b97d4ef7c709a50d

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18184 Fix B903 lint: save memory for data classes with slots/namedtuple**
* #18181 Fix B902 lint error: invalid first argument.
* #18178 Fix B006 lint errors: using mutable structure in default argument.
* #18177 Fix lstrip bug revealed by B005 lint

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision: D14530872

fbshipit-source-id: e26cecab3a8545e7638454c28e654e7b82a3c08a

5 years ago: Fix B902 lint error: invalid first argument. (#18181)
Edward Yang [Thu, 21 Mar 2019 16:06:30 +0000 (09:06 -0700)]
Fix B902 lint error: invalid first argument. (#18181)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18181
ghimport-source-id: 9c23551584a1a1b0b7ac246367f3a7ae1c50b315

Stack from [ghstack](https://github.com/ezyang/ghstack):
* #18184 Fix B903 lint: save memory for data classes with slots/namedtuple
* **#18181 Fix B902 lint error: invalid first argument.**
* #18178 Fix B006 lint errors: using mutable structure in default argument.
* #18177 Fix lstrip bug revealed by B005 lint

A variety of sins were committed:
- Some code was dead
- Some code was actually a staticmethod
- Some code just named it the wrong way
- Some code was purposely testing the omitted case

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision: D14530876

fbshipit-source-id: 292a371d9a76ddc7bfcfd38b6f0da9165290a58e

5 years ago: Fix B006 lint errors: using mutable structure in default argument. (#18178)
Edward Yang [Thu, 21 Mar 2019 16:06:30 +0000 (09:06 -0700)]
Fix B006 lint errors: using mutable structure in default argument. (#18178)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18178
ghimport-source-id: 667ee76b418f505fa64b863e52a603c508dcd1bf

Stack from [ghstack](https://github.com/ezyang/ghstack):
* #18184 Fix B903 lint: save memory for data classes with slots/namedtuple
* #18181 Fix B902 lint error: invalid first argument.
* **#18178 Fix B006 lint errors: using mutable structure in default argument.**
* #18177 Fix lstrip bug revealed by B005 lint

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision: D14530874

fbshipit-source-id: 38f4456a085bfe55f2a96fff53028ebd0d621604

5 years ago: Two amendments for the shape analysis (#18271)
Thomas Viehmann [Thu, 21 Mar 2019 15:02:30 +0000 (08:02 -0700)]
Two amendments for the shape analysis (#18271)

Summary:
Two small refinements to the shape analysis:
- `detach` can set `requires_grad` to false for dimensioned tensors (not sure if I would also need to deal with Complete?).
- add `batch_norm_stats`.

I noticed these while looking at what's going on when trying to code batch norm manually. (Hi wanchaol)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18271

Differential Revision: D14561303

Pulled By: ezyang

fbshipit-source-id: 64a6879392e77403c44f2ed82f84b6397754d0ea

5 years ago: Fix lstrip bug revealed by B005 lint (#18177)
Edward Yang [Thu, 21 Mar 2019 14:50:45 +0000 (07:50 -0700)]
Fix lstrip bug revealed by B005 lint (#18177)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18177
ghimport-source-id: fbbf915b66762fc88bc5b541464e71ba27500958

Stack from [ghstack](https://github.com/ezyang/ghstack):
* #18184 Fix B903 lint: save memory for data classes with slots/namedtuple
* #18181 Fix B902 lint error: invalid first argument.
* #18178 Fix B006 lint errors: using mutable structure in default argument.
* **#18177 Fix lstrip bug revealed by B005 lint**

lstrip() doesn't strip a prefix; it strips all of the characters
in the passed-in string. B005 lint revealed this. Replaced with
a substring operation.
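
For illustration (the strings here are made up, not from the patch):

```python
# lstrip() treats its argument as a set of characters, not a prefix:
"caffe2.caffe2".lstrip("caffe2.")   # -> '' (every leading char is in the set)

# Removing a prefix needs an explicit substring operation:
s = "caffe2.python.utils"
prefix = "caffe2."
if s.startswith(prefix):
    s = s[len(prefix):]             # -> 'python.utils'
```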

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision: D14530873

fbshipit-source-id: 13b3438fcc3cce13b5110730dc3d0b528a52930f

5 years ago: Backward function for torch.cdist
Igor Fedan [Thu, 21 Mar 2019 07:36:26 +0000 (00:36 -0700)]
Backward function for torch.cdist

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17173

Differential Revision: D14111482

Pulled By: ifedan

fbshipit-source-id: d72cfd53c29d0f8cf5f8ad1148d14f3d5abd938e

5 years ago: Fix ONNX symbolic for argmin and argmax (#18261)
Lu Fang [Thu, 21 Mar 2019 05:45:57 +0000 (22:45 -0700)]
Fix ONNX symbolic for argmin and argmax (#18261)

Summary:
Fix the problem introduced in https://github.com/pytorch/pytorch/pull/17103
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18261

Reviewed By: bddppq

Differential Revision: D14558781

Pulled By: houseroad

fbshipit-source-id: 7bb50072e77d1d7b2a93f4011fa1362f26e9df1c

5 years ago: Update math::Transpose to support tensor with size > 2G (#17670)
Xiaomeng Yang [Thu, 21 Mar 2019 01:19:09 +0000 (18:19 -0700)]
Update math::Transpose to support tensor with size > 2G (#17670)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17670

Update math::Transpose to support tensor with size > 2G

i-am-not-moving-c2-to-c10

Differential Revision: D14313624

fbshipit-source-id: 0b4a85b913972e5a8981f0d40d0c539407b98f30

5 years ago: handle dst_bin_width==0 case properly (#18240)
Jongsoo Park [Thu, 21 Mar 2019 00:02:38 +0000 (17:02 -0700)]
handle dst_bin_width==0 case properly (#18240)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18240

For rare cases when dst_bin_width == 0 we should just put all numbers into an arbitrary bin.

Reviewed By: csummersea

Differential Revision: D14544685

fbshipit-source-id: 02d04ff8bd1555d6cf7e7eeb1196a4ab3325a9e5

5 years ago: Revert D14114134: [asr] add fbgemm fp16 (fbfcpacked) support, add global_init_net in predictor_export_meta
Lu Fang [Wed, 20 Mar 2019 23:27:56 +0000 (16:27 -0700)]
Revert D14114134: [asr] add fbgemm fp16 (fbfcpacked) support, add global_init_net in predictor_export_meta

Differential Revision:
D14114134

Original commit changeset: 112bb2ceb9d3

fbshipit-source-id: 763262c1b78eed88a653caad5adc27d97feb43aa

5 years ago: Cleanup arg{min, max} (#17103)
Gao, Xiang [Wed, 20 Mar 2019 23:18:49 +0000 (16:18 -0700)]
Cleanup arg{min, max} (#17103)

Summary:
Why do we need this workaround? `PythonArgParser` handles these two cases well.

The discussion started at https://github.com/pytorch/pytorch/pull/6201#issuecomment-378724406. The conclusion at that time by goldsborough was:

> Because we wanted to allow `dim=None` in Python and route to a different function. Essentially the problem was wanting to wrap the C++ function in Python. AFAIK there is no way of translating `dim=None` behavior into C++? So Richard and I came up with this strategy

Maybe at that time `PythonArgParser` was not powerful enough to handle the routing of two functions with the same name but different C++ signatures.
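
For illustration, the two behaviors being routed (standard `torch.argmax` usage):

```python
import torch

t = torch.tensor([[1., 5.],
                  [3., 2.]])
torch.argmax(t)         # dim=None: index into the flattened tensor -> tensor(1)
torch.argmax(t, dim=0)  # dim given: reduce along dim 0 -> tensor([1, 0])
```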

Will keep an eye on the CI.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17103

Differential Revision: D14523503

Pulled By: VitalyFedyunin

fbshipit-source-id: cae3e2678062da2eccd93b51d4050578c7a9ab80

5 years ago: Added the exception of ignore_index (#18117)
Bharat123Rox [Wed, 20 Mar 2019 23:00:11 +0000 (16:00 -0700)]
Added the exception of ignore_index (#18117)

Summary:
Fix #17801 to add an exception regarding `ignore_index` in the documentation for `torch.nn.CrossEntropyLoss` and `torch.nn.NLLLoss`

If any other files/functions are hit, I'd be glad to incorporate the changes there too! 😊
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18117

Differential Revision: D14542079

Pulled By: ezyang

fbshipit-source-id: 7b918ac61f441dde7d3d6782d080c500cf2097f1

5 years ago: Add .get() for dicts (#18238)
David Riazati [Wed, 20 Mar 2019 21:48:52 +0000 (14:48 -0700)]
Add .get() for dicts (#18238)

Summary:
Fixes #18232
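
A small usage sketch of the new builtin (illustrative; scripting details may vary across versions):

```python
from typing import Dict, Optional

import torch

@torch.jit.script
def lookup(d: Dict[str, int], key: str) -> Optional[int]:
    # dict.get returns None when the key is absent, as in Python
    return d.get(key)

print(lookup({"a": 1}, "a"))  # 1
print(lookup({"a": 1}, "b"))  # None
```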
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18238

Differential Revision: D14546689

Pulled By: driazati

fbshipit-source-id: ed021e6f54c891d6c734c8f2345f4e83a3c6c905

5 years ago: Update nccl submodule to 2.4.2 (#17883)
Pieter Noordhuis [Wed, 20 Mar 2019 21:30:55 +0000 (14:30 -0700)]
Update nccl submodule to 2.4.2 (#17883)

Summary:
Didn't test this. Let's see what happens.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17883

Differential Revision: D14547470

Pulled By: pietern

fbshipit-source-id: c35d232f6bcc5a2dce55da636a0acbea5c2725d8

5 years ago: Reinstate ncclCommDestroy (#17943)
Pieter Noordhuis [Wed, 20 Mar 2019 21:12:31 +0000 (14:12 -0700)]
Reinstate ncclCommDestroy (#17943)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17943

Together with xw285cornell, we came up with a solution for the static destruction
order fiasco that caused the NCCL context to be destroyed **after**
the CUDA context was already destroyed. In this commit we destroy all
cached NCCL contexts as soon as the last NCCL related Caffe2 operator
instance is destructed, thereby avoiding a dependency on static
variable destruction.

Reviewed By: xw285cornell

Differential Revision: D14429724

fbshipit-source-id: fe5ce4b02b1002af8d9f57f6fa089b7a80e316ce

5 years ago: Enable autograd to recognize the XLA backend as one providing multiple devices (#17847)
Davide Libenzi [Wed, 20 Mar 2019 20:47:41 +0000 (13:47 -0700)]
Enable autograd to recognize the XLA backend as one providing multiple devices (#17847)

Summary:
Enable autograd to recognize the XLA backend as one providing multiple devices, while not being CUDA/HIP.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17847

Differential Revision: D14545634

Pulled By: ezyang

fbshipit-source-id: 417181bf2ff4f8978544afe2fb6b042e787854ed

5 years ago: add fbgemm fp16 (fbfcpacked) support, add global_init_net in predictor_export_meta (#17905)
Weiyi Zheng [Wed, 20 Mar 2019 20:45:07 +0000 (13:45 -0700)]
add fbgemm fp16 (fbfcpacked) support, add global_init_net in predictor_export_meta (#17905)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17905

Support adding ops in global_init_net, because pred_init_net is per-thread and just doesn't cut it.

Reviewed By: jspark1105

Differential Revision: D14114134

fbshipit-source-id: 112bb2ceb9d3d5e663dd430585567f4eaa2db35f

5 years ago: fixed typo in shape_analysis.cpp (#18227)
Zhang Dong [Wed, 20 Mar 2019 19:41:17 +0000 (12:41 -0700)]
fixed typo in shape_analysis.cpp (#18227)

Summary:
cc: VitalyFedyunin
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18227

Differential Revision: D14541764

Pulled By: VitalyFedyunin

fbshipit-source-id: 9477deb1a99e6581f15a4de4d7631d747f56f3a6

5 years ago: Retain the parameter names in ONNX exporter (#17551)
Lu Fang [Wed, 20 Mar 2019 19:03:13 +0000 (12:03 -0700)]
Retain the parameter names in ONNX exporter (#17551)

Summary:
So, we will keep the names of ONNX initializers the same as the names in the PyTorch state dict.

Later, we will make this the default behavior.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17551

Reviewed By: dzhulgakov

Differential Revision: D14491920

Pulled By: houseroad

fbshipit-source-id: f355c02e1b90d7ebbebf4be7c0fb6ae208ec795f

5 years ago: Fix typo in docstring
Alexandr Morev [Wed, 20 Mar 2019 18:12:55 +0000 (11:12 -0700)]
Fix typo in docstring

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18216

Differential Revision: D14539824

Pulled By: ezyang

fbshipit-source-id: 490b72951a75f3f8b949a2d692d660a3693ee98a

5 years ago: Add batched version of trtrs (#18025)
Vishwak Srinivasan [Wed, 20 Mar 2019 18:06:56 +0000 (11:06 -0700)]
Add batched version of trtrs (#18025)

Summary:
- Remove single batch TH/THC implementations
- Remove `_batch_trtrs_lower` from `multivariate_normal`
- Add tests for batched behavior
- Modify trtrs_backward to accommodate the batched case
- Modify docs

In a future PR, this will be renamed to `triangular_solve`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18025

Differential Revision: D14523004

Pulled By: ifedan

fbshipit-source-id: 11c6a967d107f969b60e5a5c73ce6bb8099ebbe1

5 years ago: Remove GLOO usage when USE_GLOO is OFF
Sacha Refshauge [Wed, 20 Mar 2019 16:16:32 +0000 (09:16 -0700)]
Remove GLOO usage when USE_GLOO is OFF

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18203

Differential Revision: D14540520

Pulled By: soumith

fbshipit-source-id: f1c96cc563ed1e913040e3e16b109d3e3030128c

5 years ago: Enable 32 bit CPU build on Windows
peterjc123 [Wed, 20 Mar 2019 16:16:28 +0000 (09:16 -0700)]
Enable 32 bit CPU build on Windows

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18176

Differential Revision: D14539884

Pulled By: ezyang

fbshipit-source-id: 0e4bd9c1ef1830cd9bcc40df36b87534f61def08

5 years ago: Correct cmake flags passing (#18217)
peter [Wed, 20 Mar 2019 16:12:40 +0000 (09:12 -0700)]
Correct cmake flags passing (#18217)

Summary:
Fixes #18214.

According to the CMake manual, we should pass the arguments first, and put the directory as the last element. Otherwise, these flags may not be passed correctly.

Reference:
1. https://cmake.org/cmake/help/latest/manual/cmake.1.html#synopsis
2. https://stackoverflow.com/a/27169347
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18217

Differential Revision: D14540588

Pulled By: ezyang

fbshipit-source-id: a027f585dde66c5da7bbbe584fa42c3e56027d59

5 years ago: Add python_variable._is_view for debugging. (#18197)
Gregory Chanan [Wed, 20 Mar 2019 15:39:52 +0000 (08:39 -0700)]
Add python_variable._is_view for debugging. (#18197)

Summary:
I don't know if we actually want to expose this or not, but it's useful for debugging.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18197

Reviewed By: ezyang

Differential Revision: D14530712

Pulled By: gchanan

fbshipit-source-id: 98fdba9cf113738f0db3a198c49365de536b9919

5 years ago: Do not apply these explicit unroll pragmas for ROCm. (#18204)
Johannes M Dieterich [Wed, 20 Mar 2019 14:58:11 +0000 (07:58 -0700)]
Do not apply these explicit unroll pragmas for ROCm. (#18204)

Summary:
Loop analysis indicates that there is a runtime trip count and hence
unrolling cannot take place.

This will silence compile-time warnings we have been observing with recent ROCm releases.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18204

Differential Revision: D14539875

Pulled By: ezyang

fbshipit-source-id: a7ea7f2a95603754296b76a6b62a154f56f4ad4d

5 years ago: Copy-edit CONTRIBUTING and update. (#18131)
Edward Yang [Wed, 20 Mar 2019 14:33:51 +0000 (07:33 -0700)]
Copy-edit CONTRIBUTING and update. (#18131)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18131
ghimport-source-id: 473dae70f6c236d317bec77d894310c0aa0376ec

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18131 Copy-edit CONTRIBUTING and update.**

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision: D14505049

fbshipit-source-id: 02aeae33c0049889243c56dd0d761487dac2351e

5 years ago: fix cosine_similarity (#18168)
Ailing Zhang [Wed, 20 Mar 2019 03:02:29 +0000 (20:02 -0700)]
fix cosine_similarity (#18168)

Summary:
fixes #18057 according to colesbury's suggestion. Thanks!
cc: ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18168

Differential Revision: D14520953

Pulled By: ailzhang

fbshipit-source-id: 970e6cfb482d857a81721ec1d0ee4a4df84a0450

5 years ago: Breakup test misc pt2 (#18191)
Elias Ellison [Wed, 20 Mar 2019 02:38:09 +0000 (19:38 -0700)]
Breakup test misc pt2 (#18191)

Summary:
Further break up test_misc.h. The remaining tests don't directly map to a jit file, so I left them in test_misc.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18191

Differential Revision: D14533442

Pulled By: eellison

fbshipit-source-id: 7f538ce0aea208b6b55a4716dfcf039548305041

5 years ago: Add serialization docs to jit/README (#17951)
David Riazati [Tue, 19 Mar 2019 23:42:54 +0000 (16:42 -0700)]
Add serialization docs to jit/README (#17951)

Summary:
Documents the serialization format for `torch.jit.save`. Some of the info is copied from houseroad's internal doc.

[Formatted Markdown](https://github.com/driazati/pytorch/blob/serial_docs/torch/csrc/jit/README.md)

Also refactors the readme to have a heading hierarchy + table of contents
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17951

Differential Revision: D14531644

Pulled By: driazati

fbshipit-source-id: cbcd9462054cc9f8a2f8cea2c98d8aba4e7d227c

5 years ago: Turn on Travis builds for ghstack PRs. (#18193)
Edward Yang [Tue, 19 Mar 2019 21:47:24 +0000 (14:47 -0700)]
Turn on Travis builds for ghstack PRs. (#18193)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18193
ghimport-source-id: 540859cf0b238a9832f45b3f4c2351e3343fc1a2

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18193 Turn on Travis builds for ghstack PRs.**

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision: D14529945

fbshipit-source-id: 4476e996e311a04f2a997ca9b7c4cf2157dd6286

5 years ago: do not throw when unicode is seen in pull request info (#18195)
Michael Suo [Tue, 19 Mar 2019 21:32:52 +0000 (14:32 -0700)]
do not throw when unicode is seen in pull request info (#18195)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18195
ghimport-source-id: 05102cb115c6bd6d141f51905e20155bcd79a908

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18195 [build] do not throw when unicode is seen in pull request info**

Differential Revision: D14529707

fbshipit-source-id: 2f6a31b01b3a9b044fd24be466cc5325b70929ad

5 years ago: Delete bugbear from Python 2 lint. (#18192)
Edward Yang [Tue, 19 Mar 2019 21:15:55 +0000 (14:15 -0700)]
Delete bugbear from Python 2 lint. (#18192)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18192
ghimport-source-id: 9523a09d7ec202ef08cf0ecdf48c42739ea6b0ce

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18192 Delete bugbear from Python 2 lint.**

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision: D14529240

fbshipit-source-id: 1a433b53dd38d1c455e8c0750d97c594ac51ef09

5 years ago: Support attributes when emitting function calls (#18156)
David Riazati [Tue, 19 Mar 2019 20:51:25 +0000 (13:51 -0700)]
Support attributes when emitting function calls (#18156)

Summary:
The type of each `initial_ivalue` is completely known at some point, but that information is discarded by the time a call to it is emitted. This PR is kind of a hack; as a better (longer-term) solution, the method should know about the type of each initial value.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18156

Differential Revision: D14525768

Pulled By: driazati

fbshipit-source-id: 52d53e9711a07a4551c988bd95fe997e654aa465

5 years ago: Customized pin_memory for PackedSequence (#18079)
Tongzhou Wang [Tue, 19 Mar 2019 20:35:55 +0000 (13:35 -0700)]
Customized pin_memory for PackedSequence (#18079)

Summary:
fixes https://github.com/pytorch/pytorch/issues/18078
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18079

Reviewed By: ezyang

Differential Revision: D14521192

Pulled By: zou3519

fbshipit-source-id: cec773a3a6f2c405a0d9701e213b7caf81649181

5 years ago: Enable flake8-bugbear line length checking. (#18138)
Edward Yang [Tue, 19 Mar 2019 20:25:04 +0000 (13:25 -0700)]
Enable flake8-bugbear line length checking. (#18138)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18138
ghimport-source-id: be62a71ef98714e6f168a00f84120f612363528e

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18138 Enable flake8-bugbear line length checking.**

This enables flake8-bugbear's line length checker (B950), which permits violations
of up to 10% but reports the "true" limit when you go over.

I had to ignore a bunch of flake8-bugbear's other checks when I
turned this on.  They're good checks though (they're turned on
in fbcode) and we should fix them eventually.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Reviewed By: salexspb

Differential Revision: D14508678

fbshipit-source-id: 2610ecc0dd43cc0788d77f4d024ebd85b26b8d41

5 years ago: fix bug in alias analysis (#18146)
Michael Suo [Tue, 19 Mar 2019 18:01:05 +0000 (11:01 -0700)]
fix bug in alias analysis (#18146)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18146
ghimport-source-id: 4b061c27c5c44ef0d06066490ed16cab3d0c7a64

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18146 [jit] fix bug in alias analysis**

We handled hasWriters() incorrectly in the case of wildcards. There's
even a comment describing the correct behavior. Sad!

Much thanks to t-vi for tracking this down and suggesting the fix!

Differential Revision: D14524208

fbshipit-source-id: 8010b54257241bd64013a0d0a8b6e7d22d8c70af

5 years ago: Add backend checks to solve methods (gesv, cholesky_solve) (#18116)
vishwakftw [Tue, 19 Mar 2019 17:36:23 +0000 (10:36 -0700)]
Add backend checks to solve methods (gesv, cholesky_solve) (#18116)

Summary:
Changelog:
- Incorporate a simple backend check in the linearSolveCheckInputs function in LinearAlgebraUtils.h
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18116

Differential Revision: D14504469

Pulled By: soumith

fbshipit-source-id: 7402b6dbaa8d73048946613b806d54f68bcbd8f4

5 years ago: fix -Wsign-compare warnings for some files inside c2 (#18123)
Hector Yuen [Tue, 19 Mar 2019 17:30:29 +0000 (10:30 -0700)]
fix -Wsign-compare warnings for some files inside c2 (#18123)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18123

The motivation of this fix is to resolve things like:
`for (auto i = 0; i < N; i++)` where N is bigger than int32.

These instances of comparison were found by enabling -Wsign-compare

There are way too many things to fix, so issuing this as a series of fixes

The plan is to fix all these issues and then enable this flag in Caffe2 to catch future instances

Reviewed By: ZolotukhinM

Differential Revision: D14497094

fbshipit-source-id: bca3927a2188bd33a508fa503ba221c220cdaefe

5 years ago: SGD: remove unneeded multiply-add initialization operations (#18114)
Neta Zmora [Tue, 19 Mar 2019 17:29:07 +0000 (10:29 -0700)]
SGD: remove unneeded multiply-add initialization operations (#18114)

Summary:
The momentum buffer is initialized to the value of
d_p, but the current code takes the long way to do this:
1. Create a buffer of zeros
2. Multiply the buffer by the momentum coefficient
3. Add d_p to the buffer

All of these can be collapsed into a single step:
1. Create a clone of d_p
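
An illustrative snippet of the simplification (not the actual optimizer code):

```python
import torch

d_p = torch.randn(10)  # a parameter's gradient (made up for illustration)
momentum = 0.9

# Before: zeros, multiply by momentum, then add d_p ...
buf_old = torch.zeros_like(d_p).mul_(momentum).add_(d_p)

# After: a single clone yields the same starting buffer.
buf_new = d_p.clone()

assert torch.equal(buf_old, buf_new)
```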
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18114

Differential Revision: D14509122

Pulled By: ezyang

fbshipit-source-id: 4a79b896201d5ff20770b7ae790c244ba744edb8

5 years ago: specialized CUDA impl for dropout in AD (#17756)
Ailing Zhang [Tue, 19 Mar 2019 17:20:06 +0000 (10:20 -0700)]
specialized CUDA impl for dropout in AD (#17756)

Summary:
In ATen we have a `_fused_dropout` implementation for the CUDA case. As ngimel suggested, if we discard it in JIT AD, it hurts performance.

It doesn't seem ideal to include backend specific implementation in AD, but this is helpful to prevent performance regression atm.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17756

Differential Revision: D14368999

Pulled By: ailzhang

fbshipit-source-id: 9a371c5020f630e8f6e496849ec9772b6f196169

5 years ago: Fix underflow issue with dirichlet sample (#17488)
Neeraj Pradhan [Tue, 19 Mar 2019 17:18:12 +0000 (10:18 -0700)]
Fix underflow issue with dirichlet sample (#17488)

Summary:
Addresses #15738, using fritzo's suggestion. This adds a `torch._sample_dirichlet` method in `Distributions.cpp` and `Distributions.cu`.
 - For CPU, this leads to no perf hit since all we do is promote the `alpha` to double when getting the gamma samples (the gamma sampler anyways uses `accscalar_t` (double for CPU)) and cast it back to float32 on return.
 - I have added an analogous method for CUDA as well, but the default sampler for CUDA uses scalar_t for efficiency, so I have kept it as that. With this, I do not see the bias towards 1 as reported in #15738 with `float32`, but there is a spurious mode at 0.5, as would be expected. Users would need to explicitly use `float64` for GPU to not see the spurious mode at 0.5. (EDIT: see note below, it appears that the bias issue is still there for certain builds).

Added some tests and checked that there is no perf regression. My experience with C++ is very limited, so apologies in advance if I missed something basic. cc. ailzhang, fritzo, fmassa
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17488

Differential Revision: D14410301

Pulled By: ezyang

fbshipit-source-id: 62b2f694b4642685eab06db96d74ce28e05c3992

5 years ago: Kill Backend constructor of TensorOptions. (#18137)
Gregory Chanan [Tue, 19 Mar 2019 14:57:21 +0000 (07:57 -0700)]
Kill Backend constructor of TensorOptions. (#18137)

Summary:
It's wrong and unused.  Use one of the many other constructors instead :).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18137

Differential Revision: D14508364

Pulled By: gchanan

fbshipit-source-id: 19c6ff78ad9d9221d0874425edd02b78627c4ca7

5 years ago: Remove deviceTypeToBackend, which is underspecified. (#18135)
Gregory Chanan [Tue, 19 Mar 2019 14:50:31 +0000 (07:50 -0700)]
Remove deviceTypeToBackend, which is underspecified. (#18135)

Summary:
There are multiple backends for a device type, so we just kill this function.
Also, kill a getNonVariableType instance, which was also underspecified.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18135

Differential Revision: D14507474

Pulled By: gchanan

fbshipit-source-id: fc791a76d4b851b23d09a070725f3838621eb13d

5 years ago: Stop generating unimplemented type methods. (#18144)
Gregory Chanan [Tue, 19 Mar 2019 14:36:58 +0000 (07:36 -0700)]
Stop generating unimplemented type methods. (#18144)

Summary:
This gets rid of 'aten_sparse' which was used at one time with legacy THS code, but is now only overloaded in native_parse.py.
The way that 'aten_sparse' worked was wonky -- it extended all backends (default [CPU, CUDA]) to include sparse.
But this is totally unnecessary; we already have the backends we need to generate for from type_method_definition_dispatch.

codegen changes: https://github.com/gchanan/pytorch/blob/fc37c8e171b7ebd1b1755469cf6a146a2abedc13/diff.txt
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18144

Reviewed By: ezyang

Differential Revision: D14511324

Pulled By: gchanan

fbshipit-source-id: 8bb4ac4cf0985f8756790779a22bc229e18e8e7f

5 years ago: Corrected type of 'swap' in torch.nn.TripletMarginLoss (#18115)
Bharat Raghunathan [Tue, 19 Mar 2019 14:05:29 +0000 (07:05 -0700)]
Corrected type of 'swap' in torch.nn.TripletMarginLoss (#18115)

Summary:
Fix #16428 by correcting type of 'swap' from `float` to `bool`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18115

Differential Revision: D14516615

Pulled By: ezyang

fbshipit-source-id: c61a45d533f3a443edf3c31c1ef3d9742bf46d2b

5 years ago: handle scenario when GPU support is not available and p2p_access_pattern is empty (#17974)
Deepali Chourasia [Tue, 19 Mar 2019 06:06:03 +0000 (23:06 -0700)]
handle scenario when GPU support is not available and p2p_access_pattern is empty (#17974)

Summary:
Observed that when there is no GPU support available, `workspace` sets `GetGpuPeerAccessPattern` to `[]` in
https://github.com/pytorch/pytorch/blob/master/caffe2/python/workspace.py#L79
and this case is not handled in https://github.com/pytorch/pytorch/blob/master/caffe2/python/data_parallel_model.py#L1065.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17974

Differential Revision: D14517066

Pulled By: ezyang

fbshipit-source-id: 186911d95c07e9a55ab82a41d0c7c919e4281bb4

5 years ago: Fix Caffe2 operator schemas (#15462) (#13229) (#18109)
Lutz Roeder [Tue, 19 Mar 2019 03:51:12 +0000 (20:51 -0700)]
Fix Caffe2 operator schemas (#15462) (#13229) (#18109)

Summary:
Maratyszcza harouwu yinghai

This has been broken since #13065. `c_str()` returns a pointer that isn't permanent.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18109

Differential Revision: D14516622

Pulled By: ezyang

fbshipit-source-id: 7113d92eac4f61479c4c7b323cf78cc8aa00b17e

5 years ago: Increase line-width of Declarations.yaml (#18050)
Junji Hashimoto [Tue, 19 Mar 2019 03:44:05 +0000 (20:44 -0700)]
Increase line-width of Declarations.yaml (#18050)

Summary:
There are some line breaks in schema_string of Declarations.yaml.
Is this valid YAML? I am reading the YAML spec.
It seems that the “|” indicator or single/double quotes are required to insert a line break.
https://yaml.org/spec/1.2/spec.html
![image](https://user-images.githubusercontent.com/2469618/54405834-1e53ac80-471b-11e9-9925-be13a109eb46.png)
Could you increase the line width of the YAML output to avoid the line breaks?
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18050

Differential Revision: D14516694

Pulled By: ezyang

fbshipit-source-id: 1db9f3bf131b54a783d668de973915892603189e

5 years ago: Updating submodules
svcscm [Tue, 19 Mar 2019 03:33:38 +0000 (20:33 -0700)]
Updating submodules

Reviewed By: yns88

fbshipit-source-id: eeeec4229e05916f2c17e525aee5ac4465ef52db

5 years ago: delete unnecessary file .gitkeep (#18136)
Zhang Dong [Tue, 19 Mar 2019 03:22:20 +0000 (20:22 -0700)]
delete unnecessary file .gitkeep (#18136)

Summary:
delete unnecessary file .gitkeep in /pytorch/tree/master/torch/csrc/nn
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18136

Differential Revision: D14516584

Pulled By: ezyang

fbshipit-source-id: a7555693cb3df1c5e37fcd3ca9bb379a2258f2d1

5 years ago: Attribute serialization (#17423)
David Riazati [Tue, 19 Mar 2019 01:15:17 +0000 (18:15 -0700)]
Attribute serialization (#17423)

Summary:
Allows serialization/loading of attributes (`IValue`s of any type).
* metadata (attribute name, type) is stored in the `model.json`
* The binary format is a subset of the `pickle` module that supports the operations necessary for `IValue`s
    * Attributes are serialized in the order they are defined on a module to a list in a single `attributes` file, with submodule attributes coming first. This order directly matches the order attributes are listed in `model.json`
    * This can be inspected in Python with `pickle.load()` or with `pickletools` (PyTorch need not be installed for this to work)
        * A class is used to store a tensor's index into the tensor table of the model, so to unpickle the file you have to use a custom Unpickler:
        ```python
        import pickle

        class TensorID(object):
            def __setstate__(self, id):
                self.id = id

        class JitUnpickler(pickle.Unpickler):
            def find_class(self, module, name):
                if module == '__main__' and name == 'TensorID':
                    return TensorID

        JitUnpickler(open("my_model/attributes.pkl", "rb")).load()
        ```
    * pickle format: https://svn.python.org/projects/python/trunk/Lib/pickletools.py
* It currently does not support/guarantee that anything saved out with `pickle` directly (i.e. if you edit `attributes` with `pickle` instead of our tools) will be imported correctly

Also will fix #17683 and fix #16367

Followup Work:
* document format / choice of pickle: #17951
* create an example
* list specializations
* int size specializations, large binputs
* do a first pass over attributes to output only necessary `BINPUT` ops
* attribute reassignment (e.g `self.my_attribute = new_value`)
* `tensor.save("some_checkpoint.pkl")` support with tensors embedded in Pickle file
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17423

Differential Revision: D14470965

Pulled By: driazati

fbshipit-source-id: 6a21a9939efdbe59b4bc57fd31d6d630bab5297e

5 years ago: fix bug in pool_dnnlowp_op_avx2.cc (#18141)
Jongsoo Park [Mon, 18 Mar 2019 23:28:47 +0000 (16:28 -0700)]
fix bug in pool_dnnlowp_op_avx2.cc (#18141)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18141

VLEN should've been 32

Reviewed By: jianyuh

Differential Revision: D14510780

fbshipit-source-id: ddf12746e1c69677a268432432ddb088cc210084

5 years ago: Updating submodules
svcscm [Mon, 18 Mar 2019 23:16:51 +0000 (16:16 -0700)]
Updating submodules

Reviewed By: yns88

fbshipit-source-id: ed297c07c681f5f45d3f99edf48680015ca5b138

5 years ago: Rename gesv to solve (#18060)
Vishwak Srinivasan [Mon, 18 Mar 2019 23:01:02 +0000 (16:01 -0700)]
Rename gesv to solve (#18060)

Summary:
Changelog:

- Renames `gesv` to `solve` to remain consistent with `cholesky_solve`.
- Rename all tests, fix callsites
- Create a tentative alias for `solve` under the name `gesv`, and add a deprecation warning to not promote usage.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18060

Differential Revision: D14503117

Pulled By: zou3519

fbshipit-source-id: 99c16d94e5970a19d7584b5915f051c030d49ff5

5 years ago: Modify BeamSearch to support CharSourceEncoder (#18107)
James Reed [Mon, 18 Mar 2019 21:06:31 +0000 (14:06 -0700)]
Modify BeamSearch to support CharSourceEncoder (#18107)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18107

Pull Request resolved: https://github.com/pytorch/translate/pull/396

also:

1. fix issues with OptionalType not having a createWithContainedType (PyTorch diff)
2. Delete tests for ONNX full beam search export (nobody is using it and it just makes things harder. Currently ONNX doesn't support `_unwrap_optional`)

Reviewed By: jmp84

Differential Revision: D14483771

fbshipit-source-id: 0e37ef1cb5a16d03a535eef808b0488b98802128

5 years ago: Circular Convolution Function via circular padding (#17240)
Narine Kokhlikyan [Mon, 18 Mar 2019 19:21:52 +0000 (12:21 -0700)]
Circular Convolution Function via circular padding (#17240)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17240

Added circular padding in addition to zero padding to Conv1D, Conv2D and Conv3D based on the solution suggested in: https://github.com/pytorch/pytorch/issues/3858
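
A hedged usage sketch (in released PyTorch this surfaces as the `padding_mode` argument on the conv modules):

```python
import torch
import torch.nn as nn

# With circular padding, border pixels wrap around instead of seeing zeros.
conv = nn.Conv2d(3, 8, kernel_size=3, padding=1, padding_mode='circular')
out = conv(torch.randn(1, 3, 16, 16))
print(out.shape)  # torch.Size([1, 8, 16, 16]) - spatial size preserved
```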

Reviewed By: ezyang

Differential Revision: D14126416

fbshipit-source-id: a2f1587503ee0cfff98d5cb0d5b0a600ef8aaeb4

5 years ago: don't include /usr/include when nvcc is in /usr/bin (#18127)
Thomas Viehmann [Mon, 18 Mar 2019 19:07:38 +0000 (12:07 -0700)]
don't include /usr/include when nvcc is in /usr/bin (#18127)

Summary:
...because gcc will have failures with very strange error messages
if you do.

This affects people with Debian/Ubuntu-provided NVCC; the PR should
not change anything for anyone else.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18127

Differential Revision: D14504386

Pulled By: soumith

fbshipit-source-id: 1aea168723cdc71cdcfffb3193ee116108ae755e

5 years ago: fix double free in test_jit (#18121)
Michael Suo [Mon, 18 Mar 2019 16:56:25 +0000 (09:56 -0700)]
fix double free in test_jit (#18121)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18121
ghimport-source-id: 70c273bfbcb68f7b25cf87f5614c662960864758

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18121 [jit] fix double free in test_jit**

These definitions used to be in an anonymous namespace, so they weren't exported from the translation unit. #18071 put those in a `test` namespace, so I guess they were getting their destructors called twice on exit somehow. Making them static again fixes the problem.

Reviewed By: ezyang

Differential Revision: D14498349

fbshipit-source-id: f969781695dcbebdfcfce667fce5b986222a373e

5 years ago: Replace resize_dim with set_sizes_and_strides in THTensor_(squeeze) (#18059)
Huitong Qiu [Mon, 18 Mar 2019 15:49:44 +0000 (08:49 -0700)]
Replace resize_dim with set_sizes_and_strides in THTensor_(squeeze) (#18059)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18059

Replace resize_dim() with set_sizes_and_strides() in `THTensor_(squeeze)` in aten/src/TH/generic/THTensor.cpp and `THCTensor_(squeeze)` in aten/src/THC/generic/THCTensor.cpp

Reviewed By: ezyang

Differential Revision: D14471066

fbshipit-source-id: 1c8c412ff09246c4df6843736e3bf0279bfadea8

5 years ago: update exp. family doc (#18118)
Tongzhou Wang [Mon, 18 Mar 2019 04:36:02 +0000 (21:36 -0700)]
update exp. family doc (#18118)

Summary:
Sphinx doesn't understand hyphenation: it does not merge the two halves of a hyphenated word back together in the HTML output.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18118

Differential Revision: D14498012

Pulled By: mrshenli

fbshipit-source-id: d6f4cfddc0a8e3a8f91578da43c26ca9c6fff3ce

5 years ago: Change one_hot from IndexTensor to Tensor. (#18073)
Gregory Chanan [Sun, 17 Mar 2019 22:37:42 +0000 (15:37 -0700)]
Change one_hot from IndexTensor to Tensor. (#18073)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18073
ghimport-source-id: f4dadebafa0423c4c5a0e46c15b38129402d830a

Stack:
* #18072 properly device_guard IndexTensor and BoolTensor.
* **#18073 Change one_hot from IndexTensor to Tensor.**

There is no codegen change.

Reviewed By: ezyang

Differential Revision: D14485248

fbshipit-source-id: ee2ba8e5dcbbbaf0214a026c8e7ed4e6712becb0

5 years ago: properly device_guard IndexTensor and BoolTensor. (#18072)
Gregory Chanan [Sun, 17 Mar 2019 22:37:41 +0000 (15:37 -0700)]
properly device_guard IndexTensor and BoolTensor. (#18072)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18072
ghimport-source-id: 9653731602c72f299e095dd50e3afe6bcc8b01d6

Stack:
* **#18072 properly device_guard IndexTensor and BoolTensor.**
* #18073 Change one_hot from IndexTensor to Tensor.

Currently IndexTensor and BoolTensors do not have device_guards applied to them.
This is bad in the case where the only tensor(s) are IndexTensors or BoolTensors, because no device guard is present.

The only case where this currently happens is with one_hot, which ends up not mattering because of the way the implementation is written. But I wanted to make sure we are covered here.

Reviewed By: ezyang

Differential Revision: D14485249

fbshipit-source-id: e57b28086fa1ad2fdd248bb1220e8a2e42da03e1

5 years ago: fix corner case for optional aliasing (#18093)
Michael Suo [Sun, 17 Mar 2019 21:53:41 +0000 (14:53 -0700)]
fix corner case for optional aliasing (#18093)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18093
ghimport-source-id: 021adc52aa7bfe5fff74531c76a8cd28cab30b2a

Stack:
* **#18093 [jit] fix corner case for optional aliasing**

Occasionally the compiler can insert constant Nones to make types line
up. In that case, don't try to make a pointer from the optional type to
None, since we know statically that None won't be mutated or whatever.

Reviewed By: shannonzhu

Differential Revision: D14493004

fbshipit-source-id: 6564065f39d99ee5af664f3a0fe235892973d9be

5 years ago: Typo fix (#18089)
Jianyu Huang [Sat, 16 Mar 2019 22:04:02 +0000 (15:04 -0700)]
Typo fix (#18089)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18089

Typo fix for the fully connected layer documentation.

Reviewed By: jspark1105

Differential Revision: D14488632

fbshipit-source-id: ca0271ca0250c1d653ed7f250e8588f7b2ce1056

5 years ago: Caffe2 - Add flag to fail if floating point exceptions are detected in operator runs (#18040)
Duc Ngo [Sat, 16 Mar 2019 19:21:55 +0000 (12:21 -0700)]
Caffe2 - Add flag to fail if floating point exceptions are detected in operator runs (#18040)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18040

Add flag to fail if floating point exceptions are detected in operator runs

Sample exception

Exception [enforce fail at operator.h:837] !std::fetestexcept(FE_DIVBYZERO). Division by zero floating point exception (FE_DIVBYZERO) reported.
Error from operator:
input: "1" input: "0" output: "out" name: "" type: "Div"

Reviewed By: jspark1105

Differential Revision: D14467731

fbshipit-source-id: fad030b1d619a5a661ff2114edb947e4562cecdd

5 years ago: Remove ComputeLibrary submodule
Junjie Bai [Sat, 16 Mar 2019 16:03:17 +0000 (09:03 -0700)]
Remove ComputeLibrary submodule

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18052

Reviewed By: ezyang

Differential Revision: D14477355

fbshipit-source-id: c56b802f6d69701596c327cf9af6782f30e335fa

5 years ago: remove unused parameters in optimizer tests (#18084)
Jongsoo Park [Sat, 16 Mar 2019 01:02:53 +0000 (18:02 -0700)]
remove unused parameters in optimizer tests (#18084)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18084

The data_strategy parameter was not used in some of the unit tests for optimizers

Reviewed By: hyuen

Differential Revision: D14487830

fbshipit-source-id: d757cd06aa2965f4c0570a4a18ba090b98820ef4

5 years ago: Specify overload name in function schema (#18037)
Sebastian Messmer [Fri, 15 Mar 2019 23:54:11 +0000 (16:54 -0700)]
Specify overload name in function schema (#18037)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18037

The FunctionSchema can now store an overload name and the parser knows how to parse it. Specify like this:

    my_func.overload1(arg1: Tensor) -> Tensor
    my_func.overload2(arg1: Tensor, arg2: Tensor) -> Tensor

Reviewed By: zdevito

Differential Revision: D14467497

fbshipit-source-id: 8832b32f07351bb61090357b17b77a6a2fed3650

5 years ago: Expose c10 cuda ops to caffe2 (#18036)
Sebastian Messmer [Fri, 15 Mar 2019 23:54:11 +0000 (16:54 -0700)]
Expose c10 cuda ops to caffe2 (#18036)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18036

- Add macros to export c10 cuda operators to caffe2 frontend
- Instead of having a separate caffe2 registry for the c10 operator wrappers, use the existing caffe2 registries

Reviewed By: ezyang

Differential Revision: D14467495

fbshipit-source-id: 7715ed2e38d2bbe16f1446ae82c17193a3fabcb9

5 years ago: Automatic update of fbcode/foxi to 2bcc4064c90e87b9638615c733485f07c47b7558 (#18070)
Jack Montgomery [Fri, 15 Mar 2019 23:43:18 +0000 (16:43 -0700)]
Automatic update of fbcode/foxi to 2bcc4064c90e87b9638615c733485f07c47b7558 (#18070)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18070

Previous import was d1f45b1a2b1585d0e9bc65e15e463db344fc3ff6

Included changes:
- **[2bcc406](https://github.com/houseroad/foxi/commit/2bcc406)**: Merge pull request #7 from jackm321/tracing_fixes <Jack Montgomery>
- **[c39033c](https://github.com/houseroad/foxi/commit/c39033c)**: Fixes for tracing events <Jack Montgomery>
- **[50912cf](https://github.com/houseroad/foxi/commit/50912cf)**: Merge pull request #5 from jackm321/add_trace_events <Jack Montgomery>
- **[ba2fdcb](https://github.com/houseroad/foxi/commit/ba2fdcb)**: Merge pull request #5 from jackm321/add_trace_events <Jack Montgomery>
- **[7d42b12](https://github.com/houseroad/foxi/commit/7d42b12)**: address comments <Jack Montgomery>
- **[dcabd8d](https://github.com/houseroad/foxi/commit/dcabd8d)**: Add trace events interface <Jack Montgomery>

Reviewed By: houseroad

Differential Revision: D14483201

fbshipit-source-id: f51ed869c9a89521079df89903abc0ac0a45ac7b

5 years ago: Add backwards compatibility and other fixes to Dispatch macros. (#17996)
Gregory Chanan [Fri, 15 Mar 2019 21:16:22 +0000 (14:16 -0700)]
Add backwards compatibility and other fixes to Dispatch macros. (#17996)

Summary:
Changes:
1) https://github.com/pytorch/pytorch/pull/17527 changed dispatch macros to be ScalarType based instead of at::Type based.  This broke cpp extensions that relied on dispatch macros.  Since IMO these should be ScalarType based (and some extensions have already updated), we allow either at::Type or at::ScalarType to be passed, but passing at::Type will result in a deprecation warning.

2) Reintroduce macros that were deleted (AT_DISPATCH_ALL_TYPES_AND_HALF, AT_DISPATCH_COMPLEX_TYPES, AT_DISPATCH_ALL_TYPES_AND_HALF_AND_COMPLEX, AT_DISPATCH_ALL_TYPES_AND_COMPLEX); the AND_HALF ones now give a deprecation warning because there are more extensible macros that were introduced in their place.

3) Makes AT_DISPATCH_ALL_TYPES_AND_COMPLEX_AND into a ScalarType based macro (and updates usages).  This was the result of a logical merge conflict.

4) Adds a new macro, C10_DEPRECATED_MESSAGE, for passing a deprecation message to the compiler.  I didn't spend much time seeing if this can be enabled for versions before C++14.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17996

Reviewed By: ezyang

Differential Revision: D14446203

Pulled By: gchanan

fbshipit-source-id: 1da56e2e9c15aa8f913ebbf6bf1110c5b6dc375e

5 years ago: Breakup Test Misc (batch 1/2) (#18071)
Elias Ellison [Fri, 15 Mar 2019 20:53:23 +0000 (13:53 -0700)]
Breakup Test Misc (batch 1/2) (#18071)

Summary:
Breakup test_misc so that a test for a file is in test_filename. I think we might want to wait on moving test files into the source directory, since that would involve moving some tests over to the C10 folder, and this goes 99% of the way for test discoverability IMO anyway.

I added a file test_utils for common functions invoked in the tests.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18071

Differential Revision: D14485787

Pulled By: eellison

fbshipit-source-id: dcb20d1978d490999d435ea20c1d0503413a5c80

5 years ago: Remove the identical if branch (#18019)
yuanhaoxie [Fri, 15 Mar 2019 20:09:18 +0000 (13:09 -0700)]
Remove the identical if branch (#18019)

Summary:
The elif branch and the else branch have the same content.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18019

Differential Revision: D14475107

Pulled By: ezyang

fbshipit-source-id: 5075cc938f57649af7537de1a7c9d76ea976cafc

5 years ago: Remove Type::elementSizeInBytes
Roy Li [Fri, 15 Mar 2019 19:52:57 +0000 (12:52 -0700)]
Remove Type::elementSizeInBytes

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17785

Reviewed By: ezyang

Differential Revision: D14379074

fbshipit-source-id: 60727f187d61eb571b144bd6eed4dd4908da0b51

5 years ago: add index and count to list (#17446)
Michael Kösel [Fri, 15 Mar 2019 19:39:04 +0000 (12:39 -0700)]
add index and count to list (#17446)

Summary:
see https://github.com/pytorch/pytorch/issues/16662
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17446

Differential Revision: D14461293

Pulled By: Krovatkin

fbshipit-source-id: 03572467cdf85efc909c1864c0558a93085c8ff3

5 years ago: ONNX Export IsNan op
Lara Haidar-Ahmad [Fri, 15 Mar 2019 19:10:32 +0000 (12:10 -0700)]
ONNX Export IsNan op

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17698

Reviewed By: zrphercule

Differential Revision: D14470646

Pulled By: houseroad

fbshipit-source-id: d3e6adc83c4f9fa288c5fe0ae4c6af71fdd47905

5 years ago: support serialization of classes (#17856)
Michael Suo [Fri, 15 Mar 2019 19:00:50 +0000 (12:00 -0700)]
support serialization of classes (#17856)

Summary:
Stack:
* **#17856 [jit] support serialization of classes** ([D14402599](https://our.intern.facebook.com/intern/diff/D14402599/))

Add support for saving/loading TorchScript modules that depend on user-defined classes.

We track class dependencies the same way we track tensor constants, then write them
all out such that we can just compile them in order before compiling the module
hierarchy.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17856

Reviewed By: shannonzhu

Differential Revision: D14461599

Pulled By: suo

fbshipit-source-id: 7115f87e069fd00dc8381d7de9997864fef7ea9f

5 years ago: add reverse to list (#17001)
Michael Kösel [Fri, 15 Mar 2019 18:43:33 +0000 (11:43 -0700)]
add reverse to list (#17001)

Summary:
Add reverse functionality to list. See https://github.com/pytorch/pytorch/issues/16662

```python
import torch

@torch.jit.script
def foo():
    a = [1, 2, 3, 4]
    a.reverse()
    return a
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17001

Reviewed By: eellison

Differential Revision: D14092019

Pulled By: driazati

fbshipit-source-id: b353c763677c22312b64dde0db268e2988610ba1

5 years ago: 1/2 Add Tracing support for C2 Ops (#17899)
Lu Fang [Fri, 15 Mar 2019 18:41:31 +0000 (11:41 -0700)]
1/2 Add Tracing support for C2 Ops (#17899)

Summary:
The C10 ops are not registered as custom ops in PyTorch, so we have to add explicit support for them, too.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17899

Reviewed By: dzhulgakov

Differential Revision: D14436999

Pulled By: houseroad

fbshipit-source-id: a31fdf13a5c84f9b156a7288e0ffa57deb23b83f

5 years ago: Delete dead code in THTensorMoreMath.cpp (#17993)
Richard Zou [Fri, 15 Mar 2019 14:41:08 +0000 (07:41 -0700)]
Delete dead code in THTensorMoreMath.cpp (#17993)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17993
ghimport-source-id: 5427773f6306bdeddffd9a3ae032acc3f253f458

Stack:
* #17926 Implement at::has_internal_overlap helper function
* #17927 Error out on in-place (unary) ops on tensors that have internal overlap
* **#17993 [easy] Delete dead code in THTensorMoreMath.cpp**

We seem to have new implementations already for these in ATen.

Reviewed By: ezyang

Differential Revision: D14457838

fbshipit-source-id: 8481aad74b2127bd28c0f3e09740889fc0488a31

5 years ago: Error out on in-place (unary) ops on tensors that have internal overlap (#17927)
Richard Zou [Fri, 15 Mar 2019 14:41:08 +0000 (07:41 -0700)]
Error out on in-place (unary) ops on tensors that have internal overlap (#17927)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17927
ghimport-source-id: 626d321e430b6b5c0ea3aa1eb9df8c1e2d058bf8

Stack:
* #17926 Implement at::has_internal_overlap helper function
* **#17927 Error out on in-place (unary) ops on tensors that have internal overlap**

On the way to #17935.

Works for CPU and CUDA on the following ops:
- abs_, acos_, asin_, atan_, ceil_, cos_, erf_, erfc_, exp_, expm1_
- floor_, log_, log10_, log1p_, log2_, round_, rsqrt_,
- sin_, sqrt_, tan_, tanh_, trunc_

This PR adds a check to see if the out/result tensor has internal
overlap. If it does, then we error out because the result **may** be
incorrect.

This is overly conservative; there are some cases where if the result is
the same as the input, the inplace operation is OK (such as floor_,
round_, and trunc_). However, the current code isn't organized in such a
way that this is easy to check, so enabling those will come in the future.

Reviewed By: ezyang

Differential Revision: D14438871

fbshipit-source-id: 15e12bf1fdb2ab7f74bb806e22bc74840bd6abd1

5 years ago: Implement at::has_internal_overlap helper function (#17926)
Richard Zou [Fri, 15 Mar 2019 14:41:08 +0000 (07:41 -0700)]
Implement at::has_internal_overlap helper function (#17926)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17926
ghimport-source-id: 9f7572b5d43e474492363fa17dcb86a6c27ca13c

Stack:
* **#17926 Implement at::has_internal_overlap helper function**
* #17927 Error out on in-place (unary) ops on tensors that have internal overlap

On the way to #17935.

Checks if a tensor's sizes/strides indicate that multiple elements share
the same memory location. This problem in general is hard, so
at::has_internal_overlap implements two heuristics (sketched below) and
avoids solving the general problem:

- if a tensor is contiguous, it cannot have internal overlap
- if a tensor has any zero strides, it does have internal overlap
- otherwise, return MemOverlap::kTooHard to indicate that there might be
  overlap, but we don't know.
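
A Python sketch of these heuristics (the real implementation is C++; this version is illustrative only):

```python
from enum import Enum

import torch

class MemOverlap(Enum):
    NO = 0
    YES = 1
    TOO_HARD = 2  # mirrors MemOverlap::kTooHard

def has_internal_overlap(t: torch.Tensor) -> MemOverlap:
    # A contiguous tensor maps every index to a distinct element.
    if t.is_contiguous():
        return MemOverlap.NO
    # A zero stride on a dimension of size > 1 makes indices share memory.
    for size, stride in zip(t.shape, t.stride()):
        if size > 1 and stride == 0:
            return MemOverlap.YES
    # The general problem is hard; don't try to decide it.
    return MemOverlap.TOO_HARD

print(has_internal_overlap(torch.zeros(3).expand(4, 3)))  # MemOverlap.YES
```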

Reviewed By: ezyang

Differential Revision: D14438858

fbshipit-source-id: 607ab31771315921ab6165b2a1f072ac3e75925a

5 years ago: Fix truncation of default float values in JIT signatures. (#18044)
Gregory Chanan [Fri, 15 Mar 2019 14:36:13 +0000 (07:36 -0700)]
Fix truncation of default float values in JIT signatures. (#18044)

Summary:
In Python 2, converting float values to strings truncates them. We are storing default float values as floats (not 100% sure why?), which results in the defaults being truncated in the JIT and not matching the (specified) native function signatures.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18044

Reviewed By: ezyang

Differential Revision: D14469868

Pulled By: gchanan

fbshipit-source-id: a456de599e8dab106966bcac7a6033f02ce3cdd2

5 years ago: Allow None for checkpoint (#17969)
Choongwoo Han [Fri, 15 Mar 2019 14:33:31 +0000 (07:33 -0700)]
Allow None for checkpoint (#17969)

Summary:
Currently, we cannot run a checkpointed function with a None argument.

```python
out = torch.utils.checkpoint.checkpoint(run_fn, input_var, None)
```

```
  File "/home/tunz/anaconda3/envs/torchdev/lib/python3.7/site-packages/torch/utils/checkpoint.py", line 14, in detach_variable
    x = inp.detach()
AttributeError: 'NoneType' object has no attribute 'detach'
```

This PR makes the checkpoint function safely handle None arguments.
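
A sketch of the kind of guard the fix adds (simplified, not the exact code in torch/utils/checkpoint.py):

```python
import torch

def detach_variable(inputs):
    out = []
    for inp in inputs:
        # Pass non-tensor arguments (e.g. None) through untouched
        # instead of calling .detach() on them.
        if not isinstance(inp, torch.Tensor):
            out.append(inp)
            continue
        x = inp.detach()
        x.requires_grad = inp.requires_grad
        out.append(x)
    return tuple(out)
```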
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17969

Differential Revision: D14475148

Pulled By: ezyang

fbshipit-source-id: 9afe9e9aac511a6df1e1620e9ac341536890d451

5 years ago: Fix unclosed files in download.py, test_onnxifi.py, test_trt.py (#18017)
ttup7777 [Fri, 15 Mar 2019 14:26:38 +0000 (07:26 -0700)]
Fix unclosed files in download.py, test_onnxifi.py, test_trt.py (#18017)

Summary:
According to https://docs.python.org/3/tutorial/inputoutput.html, it is good practice to use the "with" keyword when dealing with file objects. If not, you should call f.close() to close the file and immediately free up any system resources used by it. Thus, I adjusted the file-opening calls to use "with open() as f".
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18017

Differential Revision: D14475112

Pulled By: ezyang

fbshipit-source-id: d1c0821e39cb8a09f86d6d08b437b4a99746416c

5 years ago: Run multi-gpu (single host) resnet50 and resnext101 training in bench (#18043)
Junjie Bai [Fri, 15 Mar 2019 09:49:02 +0000 (02:49 -0700)]
Run multi-gpu (single host) resnet50 and resnext101 training in bench (#18043)

Summary:
This is now working in rocm 2.2

cc xw285cornell
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18043

Differential Revision: D14477493

Pulled By: bddppq

fbshipit-source-id: 4d2dab1d5dbdbd4d6189162c074b19c4e9882c7d

5 years ago: Update nonzero onnx export (#18047)
BowenBao [Fri, 15 Mar 2019 05:13:11 +0000 (22:13 -0700)]
Update nonzero onnx export (#18047)

Summary:
The output format of NonZero in ONNX (which follows numpy, https://docs.scipy.org/doc/numpy/reference/generated/numpy.nonzero.html) differs from that in PyTorch:
in ONNX it is `[rank_of_input, num_of_nonzeros]`, whereas in PyTorch it is `[num_of_nonzeros, rank_of_input]`.
To resolve the difference, a Transpose op is added after the nonzero output in the exporter.
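
For illustration (standard `torch.nonzero` behavior; the transpose produces the ONNX layout):

```python
import torch

x = torch.tensor([[1, 0],
                  [0, 1]])
torch.nonzero(x)      # PyTorch: [num_nonzeros, rank] -> tensor([[0, 0], [1, 1]])
torch.nonzero(x).t()  # ONNX NonZero: [rank, num_nonzeros] -> tensor([[0, 1], [0, 1]])
```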
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18047

Differential Revision: D14475081

Pulled By: ezyang

fbshipit-source-id: 7a3e4899f3419766b6145d3e9261e92859e81dc4