platform/upstream/pytorch.git
5 years ago  caffe2::Tensor::is_same() (#15407)
Sebastian Messmer [Fri, 11 Jan 2019 00:06:29 +0000 (16:06 -0800)]
caffe2::Tensor::is_same() (#15407)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15407

Don't ask the tensor for its intrusive pointer if we just want to check if two tensors are the same.
This mirrors ATen APIs.

Reviewed By: dzhulgakov

Differential Revision: D13520389

fbshipit-source-id: 681317f36f480ab60e532bb08a073f98f39770fd

5 years ago  Clean up Half (#15317)
Sebastian Messmer [Fri, 11 Jan 2019 00:06:27 +0000 (16:06 -0800)]
Clean up Half (#15317)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15317

- Merge bitcasts.h and Half.h
- Remove 'static' keyword

Reviewed By: dzhulgakov

Differential Revision: D13498492

fbshipit-source-id: 46d47143e7d3a9d3f4aa7d92379dbba015c97435

5 years ago  Move files to/from c10/core and c10/util (#15316)
Sebastian Messmer [Fri, 11 Jan 2019 00:06:27 +0000 (16:06 -0800)]
Move files to/from c10/core and c10/util (#15316)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15316

This starts cleaning up the files in c10 according to the module structure we decided on.

Move to c10/util:
- Half.h, Half-inl.h, Half.cpp, bitcasts.h

Move to c10/core:
- Device.h, Device.cpp
- DeviceType.h, DeviceType.cpp

i-am-not-moving-c2-to-c10

Reviewed By: dzhulgakov

Differential Revision: D13498493

fbshipit-source-id: dfcf1c490474a12ab950c72ca686b8ad86428f63

5 years ago  Remove Context from c10 operator schemas (#15312)
Sebastian Messmer [Fri, 11 Jan 2019 00:06:26 +0000 (16:06 -0800)]
Remove Context from c10 operator schemas (#15312)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15312

Context will soon be entirely obsolete. Remove it from the operator schema interface.

Reviewed By: dzhulgakov

Differential Revision: D13495323

fbshipit-source-id: caa0f8f092cd6284e510c3e1e3374fe2f8338364

5 years ago  Enable calling caffe2 LayerNorm from PyTorch and JIT (#15243)
Sebastian Messmer [Fri, 11 Jan 2019 00:06:26 +0000 (16:06 -0800)]
Enable calling caffe2 LayerNorm from PyTorch and JIT (#15243)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15243

Register it as a custom JIT op.

Reviewed By: dzhulgakov

Differential Revision: D13473791

fbshipit-source-id: 0f7e72e3efc85a75060a7597fadaf0a8bd289651

5 years ago  fix rocm build
Zachary DeVito [Fri, 11 Jan 2019 00:04:12 +0000 (16:04 -0800)]
fix rocm build

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15945

Differential Revision: D13630505

Pulled By: zdevito

fbshipit-source-id: a4d2ae1370ab475fc1711027c0c9d2a9192be195

5 years ago  Remove USE_CUDA and USE_ROCM in engine.cpp
bddppq [Thu, 10 Jan 2019 22:12:54 +0000 (14:12 -0800)]
Remove USE_CUDA and USE_ROCM in engine.cpp

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15893

Differential Revision: D13627319

Pulled By: zdevito

fbshipit-source-id: 7c72c1c6cc242143fb66383423c668c9b9810884

5 years ago  Extend note about contributing to the C++ frontend (#15902)
Peter Goldsborough [Thu, 10 Jan 2019 22:05:21 +0000 (14:05 -0800)]
Extend note about contributing to the C++ frontend (#15902)

Summary:
soumith ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15902

Differential Revision: D13628525

Pulled By: goldsborough

fbshipit-source-id: 70cf36d1bacd9d689d4fa4f2290886fd3765e89b

5 years ago  Fix different env variables in schedules runs pt 2 (#15934)
Jesse Hellemn [Thu, 10 Jan 2019 21:51:59 +0000 (13:51 -0800)]
Fix different env variables in schedules runs pt 2 (#15934)

Summary:
Unfortunately I do not know how to test this without merging it first
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15934

Reviewed By: orionr

Differential Revision: D13627472

Pulled By: pjh5

fbshipit-source-id: 35eced1483bbf3c0c3f6f62fb7bbbf2f200e50e6

5 years ago  Change PoolOp Functors design to support CuDNN CUDA fallback (#15903)
Xiaomeng Yang [Thu, 10 Jan 2019 21:50:51 +0000 (13:50 -0800)]
Change PoolOp Functors design to support CuDNN CUDA fallback (#15903)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15903

Change PoolOp Functors design to support CuDNN CUDA fallback

Reviewed By: houseroad

Differential Revision: D13617085

fbshipit-source-id: 8a539d77f35bc47afe5dc8e32aaad52e45cb691c

5 years ago  Fix bug in torch::load and unpack torch::optim::detail namespace (#15926)
Peter Goldsborough [Thu, 10 Jan 2019 21:50:41 +0000 (13:50 -0800)]
Fix bug in torch::load and unpack torch::optim::detail namespace (#15926)

Summary:
`torch::load` wasn't clearing the optimizer buffers before adding new entries to them during deserialization, so successive calls to `torch::load` with the same optimizer would just append to the buffer container. Also moved the `serialize()` function from `torch::optim::detail` into `torch::optim` so users can use it for custom optimizers.

Fixes #15792

ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15926

Differential Revision: D13623615

Pulled By: goldsborough

fbshipit-source-id: e193091f25f56a95f2a9648af312cb7caa45f300

5 years ago  fix aliasing on unwrap optional (#15748)
Elias Ellison [Thu, 10 Jan 2019 20:49:54 +0000 (12:49 -0800)]
fix aliasing on unwrap optional (#15748)

Summary:
Fix for https://github.com/pytorch/pytorch/issues/15604
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15748

Differential Revision: D13583632

Pulled By: eellison

fbshipit-source-id: 9655ee010494179e17e34f3047363477dad15fb1

5 years ago  JIT Batch Norm fusion (#15897)
Adam Paszke [Thu, 10 Jan 2019 20:25:22 +0000 (12:25 -0800)]
JIT Batch Norm fusion (#15897)

Summary:
Resubmit of #15146, which has been accidentally reverted.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15897

Differential Revision: D13616093

Pulled By: zou3519

fbshipit-source-id: 0c3a3bec8f9fed57274da9f6c7cf40cbc05cf91a

5 years ago  Fix different env variables in schedules runs
Jesse Hellemn [Thu, 10 Jan 2019 20:06:47 +0000 (12:06 -0800)]
Fix different env variables in schedules runs

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15927

Reviewed By: orionr

Differential Revision: D13624127

Pulled By: pjh5

fbshipit-source-id: e8b14f0401b0c278a5d17af6d7979800917e3ae6

5 years ago  Allow for registration after GlobalInit (#15876)
Orion Reblitz-Richardson [Thu, 10 Jan 2019 17:32:41 +0000 (09:32 -0800)]
Allow for registration after GlobalInit (#15876)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15876

Build changes made it so some .so libraries are now registered after GlobalInit is called. Although this shouldn't be common, it also shouldn't be explicitly excluded. These changes allow for late Caffe2 registration, but also warn in that case.

Reviewed By: kuttas

Differential Revision: D13608186

fbshipit-source-id: 0ca7bcd32516d374077db0c2548cf8c28ccdd5f6

5 years ago  Fix TestDataLoader.test_proper_exit (#15665)
SsnL [Thu, 10 Jan 2019 16:44:32 +0000 (08:44 -0800)]
Fix TestDataLoader.test_proper_exit (#15665)

Summary:
Currently, in `test_proper_exit`,
1. we do not kill the correct input `pid` in the `kill_pid` function
https://github.com/pytorch/pytorch/blob/fe15d6a2c231a7bc1b32781217ed336ccf9adff7/test/test_dataloader.py#L325-L329
2. the Windows command that detects process status doesn't actually work
https://github.com/pytorch/pytorch/blob/fe15d6a2c231a7bc1b32781217ed336ccf9adff7/test/test_dataloader.py#L641-L646
3. `worker_error` and `worker_kill` cases are sometimes not tested because the workers may exit naturally, due to the pre-fetching mechanism and a `dataset size / batch size` ratio that is too small.

In this PR, in separate commits, I:
1. Install `psutil` (a Python package specifically built for process monitoring) on some CI builds. (Installation for the Linux builds is done in https://github.com/pietern/pytorch-dockerfiles/pull/29 https://github.com/pietern/pytorch-dockerfiles/pull/30  https://github.com/pytorch/ossci-job-dsl/pull/36 and https://github.com/pytorch/pytorch/pull/15795).
2. Rewrite `test_proper_exit` with `psutil` (a minimal sketch of such a liveness check follows this list) so we

    1. do not rely on the hacky `is_process_alive` https://github.com/pytorch/pytorch/blob/fe15d6a2c231a7bc1b32781217ed336ccf9adff7/test/test_dataloader.py#L640-L653
   2. increase the number of tasks per worker so `worker_error` and `worker_kill` properly trigger
   3. test the error message content to ensure that the loader exits with the correct message for each exit scenario.

3. Fix Windows data loader not having any mechanism to detect worker failures.
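
A minimal sketch of the kind of `psutil`-based liveness check item 2 refers to (the helper name and the zombie handling are illustrative, not the exact code added to test_dataloader.py):

```python
import psutil

def worker_is_alive(pid):
    # psutil keeps zombie processes in the process table, so count them as dead.
    try:
        return psutil.Process(pid).status() != psutil.STATUS_ZOMBIE
    except psutil.NoSuchProcess:
        return False
```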
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15665

Differential Revision: D13615527

Pulled By: soumith

fbshipit-source-id: cfb2f67837d2d87928a53f00b4d20f09754b7949

5 years ago  Unify flags and environmental variable when building LibTorch/PyTorch (#15868)
peter [Thu, 10 Jan 2019 14:44:37 +0000 (06:44 -0800)]
Unify flags and environmental variable when building LibTorch/PyTorch (#15868)

Summary:
Fixes #15858.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15868

Differential Revision: D13622354

Pulled By: soumith

fbshipit-source-id: bb8c49520ebf926c6194d42db75accba867018c7

5 years ago  Adding binary builds to circleci
Jesse Hellemn [Thu, 10 Jan 2019 08:03:31 +0000 (00:03 -0800)]
Adding binary builds to circleci

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15577

Reviewed By: orionr

Differential Revision: D13617359

Pulled By: pjh5

fbshipit-source-id: 2b2a1b8735f2af6973a2352bee78912794402ae1

5 years ago  Fix lint
SsnL [Thu, 10 Jan 2019 07:18:11 +0000 (23:18 -0800)]
Fix lint

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15910

Differential Revision: D13620684

Pulled By: houseroad

fbshipit-source-id: af3b1e2fed55ecd3417f66e549fa921bf4fd758e

5 years ago  Make SGD match python (#15840)
an-kumar [Thu, 10 Jan 2019 06:17:45 +0000 (22:17 -0800)]
Make SGD match python (#15840)

Summary:
Fixes https://github.com/pytorch/pytorch/issues/15530
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15840

Differential Revision: D13608503

Pulled By: goldsborough

fbshipit-source-id: aad17c110d64cbe2c126bccd36d228e4108ffa9a

5 years ago  test_jit.py: Speedup EndToEnd tests by reducing workload size. (#15906)
Mikhail Zolotukhin [Thu, 10 Jan 2019 05:11:58 +0000 (21:11 -0800)]
test_jit.py: Speedup EndToEnd tests by reducing workload size. (#15906)

Summary:
Currently these tests take most of the time in a test_jit.py run; with the
proposed changes the testing time is reduced by ~75%:

```
TestEndToEndHybridFrontendModels.test_neural_style: 203.360s -> 10.650s
TestEndToEndHybridFrontendModels.test_snli: 422.315s -> 9.152s
TestEndToEndHybridFrontendModels.test_super_resolution: 73.362s -> 19.185s

time python test/test_jit.py (real): 13m50.828s -> 3m11.768s
time python test/test_jit.py (user): 85m59.745s -> 13m18.135s
time python test/test_jit.py (sys): 144m9.028s -> 25m58.019s
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15906

Differential Revision: D13619659

Pulled By: ZolotukhinM

fbshipit-source-id: 6c22d8740f8ddb865c3a0667af32653723383816

5 years ago  Porting legacy reflection_pad2d to ATen
Shen Li [Thu, 10 Jan 2019 04:53:03 +0000 (20:53 -0800)]
Porting legacy reflection_pad2d to ATen

Summary:
Other changes:
1. Avoided using `THCDeviceTensor` by re-calculating the mapping from CUDA (blockIdx, threadIdx) to the input/output tensor index.
2. Changed CamelCase naming to underscore naming.

Differential Revision: D13546803

fbshipit-source-id: 1df54f13e64934da3d803d9b6586bd5208d42d6d

5 years ago  Fix log_prob for Gumbel distribution (#15878)
vishwakftw [Thu, 10 Jan 2019 04:04:18 +0000 (20:04 -0800)]
Fix log_prob for Gumbel distribution (#15878)

Summary:
Fixes https://github.com/pytorch/pytorch/issues/15681

Changelog:
- Add hard-coded implementation of log_prob
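
A small sanity check of the closed form being hard-coded here (values are arbitrary; shown only to make the formula concrete):

```python
import torch
from torch.distributions import Gumbel

loc, scale = torch.tensor(0.5), torch.tensor(2.0)
x = torch.tensor(3.0)

# Gumbel density: log p(x) = -(z + exp(-z)) - log(scale), with z = (x - loc) / scale
z = (x - loc) / scale
expected = -(z + torch.exp(-z)) - torch.log(scale)
assert torch.allclose(Gumbel(loc, scale).log_prob(x), expected)
```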
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15878

Differential Revision: D13613716

Pulled By: soumith

fbshipit-source-id: 2ba74e52748b6213098b167940dcc068f0c056f4

5 years ago  Tensor method rename sizes().size() -> dim()
Jerry Zhang [Thu, 10 Jan 2019 03:51:25 +0000 (19:51 -0800)]
Tensor method rename sizes().size() -> dim()

Summary: Codemod generated with clangr shard mode, 25 files per diff.

Reviewed By: smessmer

Differential Revision: D13568637

fbshipit-source-id: 4e1b6658355d4073097eb666ba73596e0261bef1

5 years ago  Batched upper triangular, lower triangular (#15257)
vishwakftw [Thu, 10 Jan 2019 03:36:20 +0000 (19:36 -0800)]
Batched upper triangular, lower triangular (#15257)

Summary:
Changelog:

- Implements `triu` and `tril` for batches of 2D tensors (a small usage sketch follows this changelog).
- Remove TH/THC binding for `tril`
- Fix CUDA implementation
- Update docstrings for tril and triu.
- Remove mask-based `triu` and `tril` in cholesky forward and backward.
- Remove batched tril in torch.distributions.utils
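
A small usage sketch of the batched behavior (shapes are arbitrary):

```python
import torch

x = torch.randn(4, 3, 3)            # a batch of four 3x3 matrices
upper = torch.triu(x)               # applied to each matrix in the batch
lower = torch.tril(x, diagonal=-1)  # strictly lower triangle, per matrix

assert torch.equal(upper[0], torch.triu(x[0]))
assert torch.equal(lower[2], torch.tril(x[2], diagonal=-1))
```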
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15257

Differential Revision: D13613888

Pulled By: mrshenli

fbshipit-source-id: 0949a05b9b8e974c1acfaf02a6284848ec5cc1c4

5 years ago  Minor bug fix in dnnlowp (#15841)
Summer Deng [Thu, 10 Jan 2019 01:11:53 +0000 (17:11 -0800)]
Minor bug fix in dnnlowp (#15841)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15841

Fix the bugs in dnnlowp to support int8/int16 quantization for sparsenn.

Reviewed By: jspark1105

Differential Revision: D13600878

fbshipit-source-id: 27f06d7c54a663208320c8f211714220a9b49540

5 years ago  test_jit.py: Replace direct `exec` invocation with a wrapper. (#15882)
Mikhail Zolotukhin [Thu, 10 Jan 2019 00:57:22 +0000 (16:57 -0800)]
test_jit.py: Replace direct `exec` invocation with a wrapper. (#15882)

Summary:
Python 2 doesn't allow invoking `exec` from a nested function:

  File "test/test_jit.py", line 4653
     exec(code, globals(), scope)
  SyntaxError: unqualified exec is not allowed in function 'test' it is a nested function

This patch wraps `exec` with a separate function, making it work for both Python 2
and Python 3.
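
A minimal sketch of the workaround (the helper name is illustrative, not necessarily the one used in test_jit.py):

```python
def exec_wrapper(code, glob, loc):
    # A plain module-level function may call exec freely on both Python 2 and 3.
    exec(code, glob, loc)

def test():
    def nested(code):
        scope = {}
        exec_wrapper(code, globals(), scope)  # no "unqualified exec" error on Python 2
        return scope
    return nested("y = 1 + 1")
```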
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15882

Differential Revision: D13614235

Pulled By: ZolotukhinM

fbshipit-source-id: 9a074308c2379f089402e0bf5a996cc649d6dbca

5 years ago  Revert D13468570: [pytorch][PR] Optimize CPU version performance of the nonzero function.
Gregory Chanan [Wed, 9 Jan 2019 23:29:20 +0000 (15:29 -0800)]
Revert D13468570: [pytorch][PR] Optimize CPU version performance of the nonzero function.

Differential Revision:
D13468570

Original commit changeset: e55ce54d6062

fbshipit-source-id: 4c043564b0a69b5af11559e5dc94790e7064841f

5 years ago  Fix several ResourceWarning: unclosed file (#15746)
Mickaël Schoentgen [Wed, 9 Jan 2019 23:25:58 +0000 (15:25 -0800)]
Fix several ResourceWarning: unclosed file (#15746)

Summary:
Hello,

This is a patch to fix `ResourceWarning: unclosed file`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15746

Differential Revision: D13587286

Pulled By: soumith

fbshipit-source-id: 08ac34c5b51d9334867f65a2927bff11511553f3

5 years ago  Fix BuildIndexOp (#15580)
Yan Shang [Wed, 9 Jan 2019 23:07:11 +0000 (15:07 -0800)]
Fix BuildIndexOp (#15580)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15580

adding the UNDEFINED datatype.

Reviewed By: itomatik

Differential Revision: D13556099

fbshipit-source-id: b730f7fca8faefb8a013c265296eee26bcedaff0

5 years ago  Wrap C10 CUDAStream instead of cudaStream_t in THCPStream
Shen Li [Wed, 9 Jan 2019 23:07:07 +0000 (15:07 -0800)]
Wrap C10 CUDAStream instead of cudaStream_t in THCPStream

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15833

Differential Revision: D13608337

Pulled By: mrshenli

fbshipit-source-id: 4c66ef89fad0dc14a11ddb69da92907797cd2828

5 years ago  use C10_MOBILE/ANDROID/IOS (#15363)
Jerry Zhang [Wed, 9 Jan 2019 23:05:29 +0000 (15:05 -0800)]
use C10_MOBILE/ANDROID/IOS (#15363)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15363

C10_MOBILE wasn't defined in the numa file move diff (D13380559).
This moves CAFFE2_MOBILE/ANDROID/IOS to c10:

```
codemod -m -d caffe2 --extensions h,hpp,cc,cpp,mm "CAFFE2_MOBILE" "C10_MOBILE"
codemod -m -d caffe2 --extensions h,hpp,cc,cpp,mm "CAFFE2_ANDROID" "C10_ANDROID"
codemod -m -d caffe2 --extensions h,hpp,cc,cpp,mm "CAFFE2_IOS" "C10_IOS"

```

i-am-not-moving-c2-to-c10

Reviewed By: marcinkwiatkowski

Differential Revision: D13490020

fbshipit-source-id: c4f01cacbefc0f16d5de94155c26c92fd5d780e4

5 years ago  Optimize CPU version performance of the nonzero function. (#15190)
Vitaly Fedyunin [Wed, 9 Jan 2019 21:33:34 +0000 (13:33 -0800)]
Optimize CPU version performance of the nonzero function. (#15190)

Summary:
Optimized the CPU version of nonzero. It is now 2x faster on average than numpy.

Can be further optimized for 1D tensors and boolean tensors.
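
For reference, a sketch of the comparison against numpy (note the different return conventions; the timing numbers above are the author's, not reproduced here):

```python
import numpy as np
import torch

x = torch.randn(1000, 1000) > 0
idx = torch.nonzero(x)              # one (count, ndim) matrix of indices
rows, cols = np.nonzero(x.numpy())  # numpy returns one index array per dimension
assert idx.shape[0] == rows.shape[0]
```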
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15190

Differential Revision: D13468570

Pulled By: VitalyFedyunin

fbshipit-source-id: e55ce54d60626a42d9a10a02e407856458b8055e

5 years ago  Remove TH binding of newWithStorage as it is not used.
Gregory Chanan [Wed, 9 Jan 2019 21:07:30 +0000 (13:07 -0800)]
Remove TH binding of newWithStorage as it is not used.

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15838

Differential Revision: D13601517

Pulled By: gchanan

fbshipit-source-id: 71ec107de2c880e7e0fd2ad6b4ea3d112dbb9d86

5 years ago  Revert D13598894: [pytorch][PR] [Caffe2] [ROCm] Use correct workspace alloc call...
Junjie Bai [Wed, 9 Jan 2019 18:01:03 +0000 (10:01 -0800)]
Revert D13598894: [pytorch][PR] [Caffe2] [ROCm] Use correct workspace alloc call in MIOpen conv operator

Differential Revision:
D13598894

Original commit changeset: 44886161abdf

fbshipit-source-id: 6c6057136f1ea741fcd1734695356709aeb4bf12

5 years ago  Revert D13548303: [pytorch][PR] Add support for batch_norm fusion to the JIT
Topher Lubaway [Wed, 9 Jan 2019 16:51:23 +0000 (08:51 -0800)]
Revert D13548303: [pytorch][PR] Add support for batch_norm fusion to the JIT

Differential Revision:
D13548303

Original commit changeset: a2e2e5abc383

fbshipit-source-id: 5b70cdbcbd1cac06eeefb2a939773358c061183c

5 years ago  Fix macos build (#15873)
SsnL [Wed, 9 Jan 2019 15:48:12 +0000 (07:48 -0800)]
Fix macos build (#15873)

Summary:
macos builds are broken now with the following error:
```
/usr/local/Homebrew/Library/Homebrew/config.rb:39:in `initialize': no implicit conversion of nil into String (TypeError)
from /usr/local/Homebrew/Library/Homebrew/config.rb:39:in `new'
from /usr/local/Homebrew/Library/Homebrew/config.rb:39:in `<top (required)>'
from /usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.3.7/lib/ruby/2.3.0/rubygems/core_ext/kernel_require.rb:55:in `require'
from /usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.3.7/lib/ruby/2.3.0/rubygems/core_ext/kernel_require.rb:55:in `require'
from /usr/local/Homebrew/Library/Homebrew/global.rb:25:in `<top (required)>'
from /usr/local/Homebrew/Library/Homebrew/brew.rb:13:in `require_relative'
from /usr/local/Homebrew/Library/Homebrew/brew.rb:13:in `<main>'
Exited with code 1
```

No recent commits look suspicious, and I can even reproduce locally on my macbook, so it might be related to some new `brew` updates. Empirically, calling `brew update` first seems to fix this.

Example error build: https://circleci.com/gh/pytorch/pytorch/534392?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15873

Differential Revision: D13608019

Pulled By: soumith

fbshipit-source-id: 1499cb5246929e275a11ca6fccef6ef32918e45e

5 years ago  Add torch.bincount() test case on sliced tensor (#15835)
zou3519 [Wed, 9 Jan 2019 15:28:40 +0000 (07:28 -0800)]
Add torch.bincount() test case on sliced tensor (#15835)

Summary:
This was causing a problem in #15735 but appears to have been fixed.
Adding this test to prevent regressions.
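
A regression check in the spirit of the added test (shapes and values are illustrative):

```python
import torch

x = torch.randint(0, 10, (100, 2))[:, 0]   # a sliced, non-contiguous 1-D tensor
assert torch.equal(torch.bincount(x), torch.bincount(x.contiguous()))
```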
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15835

Differential Revision: D13600282

Pulled By: zou3519

fbshipit-source-id: d9939e74d372be71c50122a5f6a615fbd7fa4df6

5 years ago  deduplicated code in elementwise_op_broadcast_test.py (#15865)
Andre Georg Holzner [Wed, 9 Jan 2019 11:04:29 +0000 (03:04 -0800)]
deduplicated code in elementwise_op_broadcast_test.py (#15865)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15865

Factored out code used in the tests for the Add, Mul and Sub operators
into two new methods: one to generate the test vectors, and a second
one to run the actual tests given a Caffe2 and a Python operator.

Reviewed By: houseroad

Differential Revision: D13526955

fbshipit-source-id: 8970ba5a1305ca19a54a14b51816d4a19f19d678

5 years ago  Fixed syntax error in doctest (#15646)
Jon Crall [Wed, 9 Jan 2019 09:21:23 +0000 (01:21 -0800)]
Fixed syntax error in doctest (#15646)

Summary:
I removed a small extraneous parenthesis in a doctest.

I'm also going to use this issue as a place to propose the eventual inclusion of xdoctest (a pip-installable library I wrote) in pytorch's test suite. I think there are a lot of problems with Python's built-in doctest module, and I've built xdoctest to fix them. I would love for my project to get some exposure and its addition to PyTorch may benefit both projects. Please see the readme for more details on what xdoctest brings to the table over the built-in doctest module: https://github.com/Erotemic/xdoctest

I came across this small syntax error when working on ensuring xdoctest was compatible with pytorch. It isn't 100% there yet, but I'm working on it. My goal is to ensure that xdoctest is 100% compatible with all of torch's doctests out of the box before writing up the PR. I'm also airing the idea out loud before I commit too much time into this (or get my hopes up), so I'm attaching this little blurb to a no-brainer-merge PR to (1) demonstrate a little bit of value (because xdoctest flagged this syntax error) and (2) see how it's received.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15646

Differential Revision: D13606111

Pulled By: soumith

fbshipit-source-id: d4492801a38ee0ae64ea0326a83239cee4d811a4

5 years ago  crelu mentioned (#15825)
surgan12 [Wed, 9 Jan 2019 06:53:20 +0000 (22:53 -0800)]
crelu mentioned (#15825)

Summary:
Mentioning crelu near relu in the docs.
Fixes #15730.
cc: ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15825

Differential Revision: D13605782

Pulled By: soumith

fbshipit-source-id: d34932cf82e5407c48548dbdfc1c61b596669a0b

5 years ago  Initialize tensor with fp32 in Caffe2Backend.prepare() (#15832)
Yinghai Lu [Wed, 9 Jan 2019 06:24:37 +0000 (22:24 -0800)]
Initialize tensor with fp32 in Caffe2Backend.prepare() (#15832)

Summary:
Fix https://github.com/pytorch/pytorch/issues/14104
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15832

Reviewed By: bddppq

Differential Revision: D13598332

Pulled By: yinghai

fbshipit-source-id: 3302ac47928974f49353c5da8af440e5c1716c22

5 years ago  Fix cuda native loss_ctc for varying input length (#15798)
Thomas Viehmann [Wed, 9 Jan 2019 06:23:10 +0000 (22:23 -0800)]
Fix cuda native loss_ctc for varying input length (#15798)

Summary:
Thank you, freesouls, for the reproducing example!

This is strictly fixing the bug in gradients for varying-length inputs discussed in the middle-to-bottom of the bug report. I'll have a separate feature patch regarding inf losses -> NaN grads.

Fixes: #14401
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15798

Differential Revision: D13605739

Pulled By: soumith

fbshipit-source-id: 167ff42399c7e4cdfbd88d59bac5d25b57c0363f

5 years ago  Add element-wise multiplication in formulas (#15834)
marka17 [Wed, 9 Jan 2019 05:14:54 +0000 (21:14 -0800)]
Add element-wise multiplication in formulas (#15834)

Summary:
The absence of explicit element-wise multiplication in the formulas can confuse some beginners.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15834

Differential Revision: D13603369

Pulled By: soumith

fbshipit-source-id: 1d5c17c57778ddbb4b201122d826d1d6437204d1

5 years ago  Typos fixed in CWrapPlugin.get_type_check (#15859)
Derek Kim [Wed, 9 Jan 2019 04:53:11 +0000 (20:53 -0800)]
Typos fixed in CWrapPlugin.get_type_check (#15859)

Summary:
Typos fixed in CWrapPlugin.get_type_check
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15859

Differential Revision: D13605908

Pulled By: soumith

fbshipit-source-id: a8c970f0ac6d54dfd69b9775fc1a2b4f198b4ed6

5 years ago  Move LayerNorm op schema to c10 (#15199)
Sebastian Messmer [Wed, 9 Jan 2019 04:22:42 +0000 (20:22 -0800)]
Move LayerNorm op schema to c10 (#15199)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15199

In order to call it from PyTorch, this op schema can't live in caffe2 but must be included from PyTorch.
Moving it to c10. This is not where it should be in the end (that's why there is a large TODO here),
but it is an intermediate hack to enable this use case as a proof of concept.

Reviewed By: ezyang

Differential Revision: D13462124

fbshipit-source-id: 1e187b9def8ef049c91e6de947ea4a85758d711b

5 years ago  Update flat_hash_map (#15367)
Sebastian Messmer [Wed, 9 Jan 2019 04:22:42 +0000 (20:22 -0800)]
Update flat_hash_map (#15367)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15367

This updates flat_hash_map and fixes an issue with singletons across library boundaries
(see the PRs linked at the top of the file)

Reviewed By: ezyang

Differential Revision: D13510912

fbshipit-source-id: e90a297a7a2d69ae3fe48e4fcd8a44ad4b81292a

5 years ago  Fix C10_API/C10_EXPORT for op schema registration (#15324)
Sebastian Messmer [Wed, 9 Jan 2019 04:22:41 +0000 (20:22 -0800)]
Fix C10_API/C10_EXPORT for op schema registration (#15324)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15324

This was missing but needs to be here; otherwise, registering schemas produces linker errors.

Reviewed By: ezyang

Differential Revision: D13500679

fbshipit-source-id: ba06351cb8ae09ec456cb93e527d388ace578fbb

5 years ago  Use C10Tensor in the dispatcher (#15195)
Sebastian Messmer [Wed, 9 Jan 2019 04:22:41 +0000 (20:22 -0800)]
Use C10Tensor in the dispatcher (#15195)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15195

This removes the use of caffe2::Tensor or at::Tensor in the c10 dispatcher and only uses C10::Tensor.
It also changes output tensors to be passed as `const Tensor&` instead of `Tensor*` because we otherwise can't forward them in operator_c10wrapper.h.

Reviewed By: ezyang

Differential Revision: D13461640

fbshipit-source-id: 7f79925a7d60f01660a24bbfda47391af0c70ed3

5 years ago  Convert caffe2/aten Tensors to/from c10
Sebastian Messmer [Wed, 9 Jan 2019 04:22:41 +0000 (20:22 -0800)]
Convert caffe2/aten Tensors to/from c10

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14820

Reviewed By: dzhulgakov

Differential Revision: D13348044

fbshipit-source-id: 95008e6ead3cfc478696b1c203769241d4cf6ca8

5 years ago  Implement c10::Tensor (#14819)
Sebastian Messmer [Wed, 9 Jan 2019 04:22:40 +0000 (20:22 -0800)]
Implement c10::Tensor (#14819)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14819

This is a minimal wrapper for a c10::TensorImpl,
maybe destined for greatness later when we move caffe2::Tensor or at::Tensor into c10.

Reviewed By: dzhulgakov

Differential Revision: D13348039

fbshipit-source-id: 874f515358e94f35dc7a4c3e55b35fde59c51ff1

5 years ago  Allow ReadyQueue to handle empty tasks (#15791)
albanD [Wed, 9 Jan 2019 03:57:16 +0000 (19:57 -0800)]
Allow ReadyQueue to handle empty tasks (#15791)

Summary:
Allow the comparison function used in ReadyQueue to handle the empty FunctionTasks created by the reentrant autograd.
Fix #11732
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15791

Differential Revision: D13598006

Pulled By: soumith

fbshipit-source-id: 0bfdf28a735fbfe44f0fdbaf8b74a6198e6a1984

5 years ago  In loop_wrapper, do not copy the passed-in functor (capture it by reference instead...
Brennan Vincent [Wed, 9 Jan 2019 03:51:41 +0000 (19:51 -0800)]
In loop_wrapper, do not copy the passed-in functor (capture it by reference instead). (#15845)

Summary:
The overhead of the copy actually makes an appreciable difference when doing a lot of small reductions (i.e., when the reduced dimension is significantly smaller than the non-reduced dimensions).

```
x=torch.randn((1024,10,1024),dtype=torch.float64)
torch.set_num_threads(1)
%timeit x.std(1)
```

Before: 813.0 ms

After: 708.25 ms
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15845

Differential Revision: D13603246

Pulled By: umanwizard

fbshipit-source-id: 020d224d76fcb8a0b55b75b0f2937e9508891beb

5 years ago  Add NHWC support to Resize Operator (#15553)
David Carrillo Cisneros [Wed, 9 Jan 2019 00:04:41 +0000 (16:04 -0800)]
Add NHWC support to Resize Operator (#15553)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15553

Add unit test and implementation of NHWC layout for Resize operator.

Also, add pragma parallel loop to old NCHWC layout.

Reviewed By: jspark1105

Differential Revision: D13540762

fbshipit-source-id: eebf252bf0d1efdff180a171d804181045f100a5

5 years ago  Revert "remove use of tmp_install" (#15847)
andersj [Tue, 8 Jan 2019 23:54:20 +0000 (15:54 -0800)]
Revert "remove use of tmp_install" (#15847)

Summary:
This reverts commit 04bf5285896e52ac118d2f9e9b7f582f695f13e2.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15847

Differential Revision: D13603174

Pulled By: anderspapitto

fbshipit-source-id: ae321434d3345ad94fad67bf71fd027cddeb4588

5 years ago  Correcting source pybind11 library to install into Python
Jesse Hellemn [Tue, 8 Jan 2019 23:04:13 +0000 (15:04 -0800)]
Correcting source pybind11 library to install into Python

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15836

Reviewed By: anderspapitto

Differential Revision: D13601331

Pulled By: pjh5

fbshipit-source-id: 36785c501774c01f47acb49cdac265b2c95a5040

5 years ago  implement floordiv with correct integer and division by 0 semantics (#15813)
Zachary DeVito [Tue, 8 Jan 2019 21:09:11 +0000 (13:09 -0800)]
implement floordiv with correct integer and division by 0 semantics (#15813)

Summary:
fixes #15768
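
For reference, the plain-Python semantics the scripted `//` is expected to match (a minimal illustration, not the TorchScript test itself):

```python
assert 7 // 2 == 3
assert -7 // 2 == -4    # floor division rounds toward negative infinity
assert -7 // -2 == 3
try:
    1 // 0
except ZeroDivisionError:
    pass                # integer division by zero raises instead of returning a value
```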
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15813

Differential Revision: D13594872

Pulled By: zdevito

fbshipit-source-id: c6c78c9e17fb16ec2bdc42402d203592cf35b7db

5 years ago  A trivial error message updates on `at::Tensor _convolution` (#15830)
Derek Kim [Tue, 8 Jan 2019 21:03:16 +0000 (13:03 -0800)]
A trivial error message updates on `at::Tensor _convolution` (#15830)

Summary:
I fixed a grammatical error in this message previously, but I also realized that its content was wrong. The weight tensor of a convolutional layer should be at least 3-dimensional, not 2-dimensional.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15830

Differential Revision: D13597968

Pulled By: soumith

fbshipit-source-id: 72a75106e88945c68d6462828b149441cfb5acde

5 years ago  Enable torch static build on Windows
peter [Tue, 8 Jan 2019 21:03:10 +0000 (13:03 -0800)]
Enable torch static build on Windows

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15769

Reviewed By: yf225, pjh5

Differential Revision: D13597845

Pulled By: orionr

fbshipit-source-id: 99640e22974990ae570a4795ce07274c4447cb01

5 years ago  Fix sum_to behavior with zero dimensions (#15796)
Richard Zou [Tue, 8 Jan 2019 20:34:43 +0000 (12:34 -0800)]
Fix sum_to behavior with zero dimensions (#15796)

Summary:
Fixes #15223.

This fixes an autograd bug where backprop either fails or produces
gradients of incorrect sizes when tensors with zero-sized dimensions are
involved.

Previously, we were only reducing along dimensions that had size greater than 1
when summing to a size in autograd. This is incorrect because we should also reduce
along dimensions with size 0 to produce a tensor of size 1 in that
dimension, which then gets viewed to the correct shape.
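
A minimal illustration of the kind of case this affects (a zero-sized dimension broadcast against a size-1 gradient; the exact repro in #15223 may differ):

```python
import torch

a = torch.randn(1, requires_grad=True)
b = torch.randn(0)
(a + b).sum().backward()          # the broadcast result has shape (0,)
assert a.grad.shape == a.shape    # the grad must be summed back to shape (1,)
```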
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15796

Differential Revision: D13593199

Pulled By: zou3519

fbshipit-source-id: 2e2acac34943a9b7fabadc10c9efd4f66db298fd

5 years ago  Cache workspace size in the BenchmarkCache. (#15742)
mwootton [Tue, 8 Jan 2019 20:23:30 +0000 (12:23 -0800)]
Cache workspace size in the BenchmarkCache. (#15742)

Summary:
Cache the workspace size information for MIOpen for a given configuration instead of querying it every time. This reduces overhead significantly, as querying the workspace size forces a full read of the performance database in MIOpen, and this database has grown significantly in recent releases. This caching gets us back to ideal performance.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15742

Differential Revision: D13598932

Pulled By: bddppq

fbshipit-source-id: 4e65d247b71dec828293cf0562aac3fbd4fad83a

5 years ago  Refactors shape logic out of code generation, fixes possible segfault (#15750)
mruberry [Tue, 8 Jan 2019 20:17:26 +0000 (12:17 -0800)]
Refactors shape logic out of code generation, fixes possible segfault (#15750)

Summary:
This PR:

- Removes shape logic from the code generator, which was previously relied on to return chunk and concat information
- Copies the logic to detect if a kernel has a rand_like node to the executor, making its pass independent of the code generator
- Fixes a possible segfault where references to a vector still being modified were relied upon

The actual shape logic is unchanged.

The possible segfault is in the handling of the former "flat_inputs" in codegen.cpp. This vector holds pairs, and the second element of these pairs is a reference. In some cases these would be references to items in the vector chunk_desc, which could be added to later, possibly invalidating any references to items in it. I hit a similar segfault in testing when naively making parallel code for "flat_outputs."

I'm submitting this small PR because it's separable, self-contained, has a fix, and I am trying to actively get away from large PRs to encourage more stability and incremental change in the fuser.

ngimel zou3519
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15750

Differential Revision: D13597451

Pulled By: zou3519

fbshipit-source-id: 0d48b365779b42849b044ba0286258aacc7b0332

5 years ago  Use parallel thrust execution policy on ROCm (#15481)
Johannes M Dieterich [Tue, 8 Jan 2019 20:05:14 +0000 (12:05 -0800)]
Use parallel thrust execution policy on ROCm (#15481)

Summary:
The Thrust shipped with ROCm is recent enough to support this API. Minimize divergence between CUDA/ROCm by changing the ifdef guards.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15481

Differential Revision: D13598739

Pulled By: bddppq

fbshipit-source-id: 20d0a7e3887a4050eea65033161561af47411de1

5 years ago  Use correct workspace alloc call in MIOpen conv operator (#15712)
ashishfarmer [Tue, 8 Jan 2019 19:23:01 +0000 (11:23 -0800)]
Use correct workspace alloc call in MIOpen conv operator (#15712)

Summary:
This PR contains changes for:
1. Using the memory allocator from HIPContext when allocating workspace for the MIOpen conv and transpose_conv operators, rather than a direct HIP memory alloc
2. Minor cleanup and removing an unnecessary sync call from MIOpen conv op

Differential Revision: D13598894

Pulled By: bddppq

fbshipit-source-id: 44886161abdf91cd29c7c93b3e23620e1b09c7c9

5 years ago  Tensor method rename dims()->sizes() - 2/2
Jerry Zhang [Tue, 8 Jan 2019 19:22:39 +0000 (11:22 -0800)]
Tensor method rename dims()->sizes() - 2/2

Summary: Codemod generated with clangr shard mode, 25 files per diff.

Reviewed By: smessmer

Differential Revision: D13581787

fbshipit-source-id: b04c6aa87fea3a10b522a71fccc1fcfb76a2c212

5 years ago  Remove caffe2::ShareData (#15418)
Jerry Zhang [Tue, 8 Jan 2019 18:55:26 +0000 (10:55 -0800)]
Remove caffe2::ShareData (#15418)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15418

Previously we were using Resize + ShareData.
Instead, we'll create a function on Tensor that clones itself with the same storage.

Suppose we want `t` to `ShareData` with `t0`. Previously:
```
Tensor t(dims, CPU);
t.Resize(t0.sizes());
t.ShareData(t0);
```
Now:
```
Tensor t = t0.Alias();
```

Reviewed By: dzhulgakov

Differential Revision: D13507609

fbshipit-source-id: 6e4275d02f4c3356cbce91127f1b01111dc86b9f

5 years ago  Move isnan to C++ (#15722)
Peter Goldsborough [Tue, 8 Jan 2019 18:26:32 +0000 (10:26 -0800)]
Move isnan to C++ (#15722)

Summary:
Wanted to use `Tensor.isnan` in C++, figured it'd be nice to have, so I made it into a tiny native function.
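
The native function mirrors the existing Python-level semantics; a quick reminder of that behavior (values are illustrative):

```python
import torch

x = torch.tensor([1.0, float('nan'), float('inf')])
mask = torch.isnan(x)
assert bool(mask[1]) and not bool(mask[0]) and not bool(mask[2])
```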

gchanan ezyang apaszke
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15722

Differential Revision: D13591315

Pulled By: goldsborough

fbshipit-source-id: a78bd22101fde87a0257f759b9bfcf3b4208f5fa

5 years ago  use all_weights instead of _parameters in _flat_weights in rnn (#15766)
Natalia Gimelshein [Tue, 8 Jan 2019 17:45:52 +0000 (09:45 -0800)]
use all_weights instead of _parameters in _flat_weights in rnn (#15766)

Summary:
Fixes #15749
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15766

Differential Revision: D13592320

Pulled By: soumith

fbshipit-source-id: 6c3805f576c3df5a2da8bef1e4305eda379718df

5 years ago  Use CUDAGuard when serializing CUDA Tensors (#15807)
Richard Zou [Tue, 8 Jan 2019 15:26:15 +0000 (07:26 -0800)]
Use CUDAGuard when serializing CUDA Tensors (#15807)

Summary:
Fixes #15308. Before this change, `torch.save` and `torch.load` would
initialize the CUDA context on GPU 0 if it hadn't been initialized
already, even if the serialized tensors are only on GPU 1.

This PR fixes that bug by using CUDAGuard in the storage serialization
path.
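
The reported scenario, as a sketch (assumes a machine with at least two GPUs; the file name is illustrative):

```python
import torch

t = torch.randn(3, device='cuda:1')
torch.save(t, 'gpu1_tensor.pt')       # should only touch cuda:1, not cuda:0
loaded = torch.load('gpu1_tensor.pt')
assert loaded.device == torch.device('cuda:1')
```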
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15807

Differential Revision: D13593201

Pulled By: zou3519

fbshipit-source-id: 4addc91ea5a5278d56a03f3d422577ee39e99897

5 years ago  Stop leaving garbage files after running test_jit.py
Adam Paszke [Tue, 8 Jan 2019 15:20:22 +0000 (07:20 -0800)]
Stop leaving garbage files after running test_jit.py

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15404

Differential Revision: D13548316

Pulled By: zou3519

fbshipit-source-id: fe8731d8add59777781d34d9c3f3314f11467b23

5 years ago  Add support for batch_norm fusion to the JIT (#15146)
Adam Paszke [Tue, 8 Jan 2019 14:57:45 +0000 (06:57 -0800)]
Add support for batch_norm fusion to the JIT (#15146)

Summary:
We don't support reductions yet, but simply decomposing batch_norm
into a kernel that computes the stats, and then fusing everything else
with ReLU and the following pointwise ops, provides nice speedups.

Note that this is only limited to inference mode for now, because we
don't support convolutions and batch norm in AD, so the fuser isn't
applied to those parts.

This commit gives us a 7% end-to-end speedup for ResNet50 with batch size 32. Note that this only applies to inference mode at the moment due to lack of AD support for CNN operations (I'll be adding that soon), and not to the standard `torchvision` models, because they use in-place ops which aren't supported by the fuser (we need a way of proving that de-inplacing them is safe).

cc zou3519 zdevito mruberry ngimel
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15146

Differential Revision: D13548303

Pulled By: zou3519

fbshipit-source-id: a2e2e5abc383f637fae19bd1b423f20c2cbc056a

5 years ago  Support communicating with C2 protobuf in Onnxifi flow (#15472)
Yinghai Lu [Tue, 8 Jan 2019 06:09:38 +0000 (22:09 -0800)]
Support communicating with C2 protobuf in Onnxifi flow (#15472)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15472

Create a path to pass serialized C2 protobuf instead of ONNX during ONNXIFI flow

Reviewed By: houseroad

Differential Revision: D13536603

fbshipit-source-id: 7d016474f4beedbda480ed2e2c0004af7868aafe

5 years ago  Add count_include_pad arg for AveragePoolOp on GPU (#15787)
Xiaomeng Yang [Tue, 8 Jan 2019 05:33:44 +0000 (21:33 -0800)]
Add count_include_pad arg for AveragePoolOp on GPU (#15787)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15787

Add count_include_pad arg for AveragePoolOp on GPU

Reviewed By: houseroad

Differential Revision: D13589185

fbshipit-source-id: 235a84cfcd2033ee796c13e338fc3d03e832b5b1

5 years ago  Move Stream.query() implementation down to C++ (#15737)
Shen Li [Tue, 8 Jan 2019 04:55:37 +0000 (20:55 -0800)]
Move Stream.query() implementation down to C++ (#15737)

Summary:
See #15682

Pushing up this small PR to check if I am doing the right thing. If correct, more will follow for other Stream APIs. Questions will be added inline.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15737

Differential Revision: D13581400

Pulled By: mrshenli

fbshipit-source-id: 24afed7847b89b62f0692c79a101ec7ff9d9ee4d

5 years ago  A trivial error in the error message of `at::Tensor _convolution` fixed (#15772)
Derek Kim [Tue, 8 Jan 2019 03:59:20 +0000 (19:59 -0800)]
A trivial error in the error message of `at::Tensor _convolution` fixed (#15772)

Summary:
A trivial grammatical error fix.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15772

Differential Revision: D13592279

Pulled By: zou3519

fbshipit-source-id: 14f60c61747a3893cd0e4c860f7b4c4c4ba28c28

5 years ago  clean up D13579188 (#15759)
Jongsoo Park [Tue, 8 Jan 2019 02:45:32 +0000 (18:45 -0800)]
clean up D13579188 (#15759)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15759

Some flags have names that are too long. Also a few other minor clean-ups.

Reviewed By: jianyuh

Differential Revision: D13587353

fbshipit-source-id: f8aee7f167505644f5d8f80fe2eed70201ef1e54

5 years ago  Add support for exporting onnx split (#15092)
BowenBao [Tue, 8 Jan 2019 00:06:34 +0000 (16:06 -0800)]
Add support for exporting onnx split (#15092)

Summary:
* With the update of split's output to a dynamic list, the export to ONNX breaks.
 The split IR now becomes two ops: 1. Dynamic[] <= Split(), and 2. out1, out2, out3
 <= prim::ListUnpack. In this fix, these two consecutive ops get fused when being
 exported to ONNX.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15092

Reviewed By: dzhulgakov

Differential Revision: D13583832

Pulled By: houseroad

fbshipit-source-id: 3eb18c871e750921ad6d5cc179254bee9bcf4c99

5 years ago  simplify conv dnnlowp ops by not allowing fp32 in/out (#15758)
Jongsoo Park [Mon, 7 Jan 2019 23:12:25 +0000 (15:12 -0800)]
simplify conv dnnlowp ops by not allowing fp32 in/out (#15758)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15758

DNNLOWP Conv operators became very complex due to many options. This diff simplifies them by not allowing fp32 in/out. This is OK for Conv operators because Conv operators are usually used in deep networks where quantizing and dequantizing using separate operators is not much overhead.

Reviewed By: csummersea

Differential Revision: D13587341

fbshipit-source-id: e88c919dae79d1c5b7d787ea539edf5bcb064afc

5 years ago  Enable conv+add fusion, same as conv+sum (#15268)
Gu, Jinghui [Mon, 7 Jan 2019 22:10:27 +0000 (14:10 -0800)]
Enable conv+add fusion, same as conv+sum (#15268)

Summary:
Enable conv+add fusion, same as conv+sum

Caution: only element-wise add is supported on IDEEP without scalar
broadcast. Otherwise, the fusion is illegal.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15268

Differential Revision: D13577375

Pulled By: yinghai

fbshipit-source-id: 92c9c4b667c5ca5f7a262a5bffaa8aa68eeff3bd

5 years ago  Allow List arguments to Python Ops (#15721)
David Riazati [Mon, 7 Jan 2019 21:49:20 +0000 (13:49 -0800)]
Allow List arguments to Python Ops (#15721)

Summary:
Adds `List` to the eval environment for type lines and allows `List` to be used on PythonOps (follows the same style as the `Tuple` code). Fixes #15661
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15721

Differential Revision: D13578540

Pulled By: driazati

fbshipit-source-id: fce54dc3c0931d8b017b2e3483f0ac53826dda94

5 years ago  Bump CircleCI docker version to 278 (#15795)
SsnL [Mon, 7 Jan 2019 20:29:17 +0000 (12:29 -0800)]
Bump CircleCI docker version to 278 (#15795)

Summary:
Just changing the version number doesn't seem to work. I needed to also fix the macOS brew parallel conflict.

Should this be merged together with https://github.com/pytorch/ossci-job-dsl/pull/36?
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15795

Differential Revision: D13591839

Pulled By: yf225

fbshipit-source-id: 6b2a90943e63c8dcc4b6d9159eb54f1b5974c9ac

5 years ago  Fix C++ Frontend example in frontend.html (#15717)
Peter Goldsborough [Mon, 7 Jan 2019 19:34:16 +0000 (11:34 -0800)]
Fix C++ Frontend example in frontend.html (#15717)

Summary:
The small end-to-end example in https://pytorch.org/cppdocs/frontend.html is a little outdated and needs fixes.

ezyang soumith
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15717

Differential Revision: D13591306

Pulled By: goldsborough

fbshipit-source-id: 3334d68c7f77cf094b66ec2b2f396c4c65bb0d72

5 years ago  Fix restructured text issue in tensor_basics.rst (#15701)
Peter Goldsborough [Mon, 7 Jan 2019 19:31:45 +0000 (11:31 -0800)]
Fix restructured text issue in tensor_basics.rst (#15701)

Summary:
Fix submitted by huntzhan in https://github.com/pytorch/cppdocs/pull/4. The source is in this repo so the patch has to be applied here.

soumith ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15701

Differential Revision: D13591302

Pulled By: goldsborough

fbshipit-source-id: 796957696fd560a9c5fb42265d7b2d018abaebe3

5 years ago  Fallback to CPU concat op to handle TensorCPU inputs (#15263)
Gu, Jinghui [Mon, 7 Jan 2019 19:07:51 +0000 (11:07 -0800)]
Fallback to CPU concat op to handle TensorCPU inputs (#15263)

Summary:
Fallback to CPU concat op to handle TensorCPU inputs
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15263

Differential Revision: D13587030

Pulled By: yinghai

fbshipit-source-id: 010a8579d61c3beb8556eb92493a552b2ab0030c

5 years ago  fix conv unit test for groupwise quantization and pre-packing (#15761)
Jongsoo Park [Mon, 7 Jan 2019 19:04:22 +0000 (11:04 -0800)]
fix conv unit test for groupwise quantization and pre-packing (#15761)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15761

As title says.

Reviewed By: csummersea

Differential Revision: D13587727

fbshipit-source-id: f0631b8cbb89d65a1d952bc25b463de23de93bec

5 years ago  Add is_floating_point to docs (#15704)
vishwakftw [Mon, 7 Jan 2019 18:38:16 +0000 (10:38 -0800)]
Add is_floating_point to docs (#15704)

Summary:
Fixes #15700.

Changelog:

- Expose torch.*.is_floating_point to docs
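
A quick illustration of the documented behavior:

```python
import torch

assert torch.tensor(1.0).is_floating_point()
assert not torch.tensor(1).is_floating_point()
```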

Differential Revision: D13580734

Pulled By: zou3519

fbshipit-source-id: 76edb4af666c08237091a2cebf53d9ba5e6c8909

5 years ago  Pool prim::None nodes (#15745)
Elias Ellison [Mon, 7 Jan 2019 17:58:08 +0000 (09:58 -0800)]
Pool prim::None nodes (#15745)

Summary:
Make the constant pooling pass pool prim::None nodes
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15745

Differential Revision: D13583518

Pulled By: eellison

fbshipit-source-id: 7f8aa70522515805ab0991c6db3d96b5a96cdede

5 years ago  Replace some malloc+memset pairs with calloc.
Owen Anderson [Mon, 7 Jan 2019 02:54:25 +0000 (18:54 -0800)]
Replace some malloc+memset pairs with calloc.

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15765

Differential Revision: D13588723

Pulled By: resistor

fbshipit-source-id: 47d35dc608847a5b173cfcf2aaa2a77359e56722

5 years ago  Removes print statements from test_torch.py (#15747)
mruberry [Sat, 5 Jan 2019 17:04:54 +0000 (09:04 -0800)]
Removes print statements from test_torch.py (#15747)

Summary:
These print statements do not affect the test, and tests (generally) shouldn't print.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15747

Differential Revision: D13587289

Pulled By: soumith

fbshipit-source-id: c758793c9e35faf02bacba6c7c6d072f7c40453f

5 years ago  Fix several DeprecationWarning: invalid escape sequence (#15733)
Mickaël Schoentgen [Sat, 5 Jan 2019 16:51:14 +0000 (08:51 -0800)]
Fix several DeprecationWarning: invalid escape sequence (#15733)

Summary:
Hello,

This is a little patch to fix `DeprecationWarning: invalid escape sequence`.
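
A typical instance of the warning and its fix (illustrative, not one of the specific occurrences patched here):

```python
import re

bad = re.compile("\d+")     # DeprecationWarning: invalid escape sequence '\d'
good = re.compile(r"\d+")   # raw string: no warning
also = re.compile("\\d+")   # doubling the backslash also works
```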
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15733

Differential Revision: D13587291

Pulled By: soumith

fbshipit-source-id: ce68db2de92ca7eaa42f78ca5ae6fbc1d4d90e05

5 years ago  caffe2_benchmark msvc build fix (#15619)
ArutyunovG [Sat, 5 Jan 2019 16:23:02 +0000 (08:23 -0800)]
caffe2_benchmark msvc build fix (#15619)

Summary:
Fixing error in caffe2_benchmark binary

```
2018-12-29T14:09:59.7867995Z   d:\a\1\s\caffe2_builders\v141\pytorch\binaries\benchmark_helper.h(90): error C2678: binary '|=': no operator found which takes a left-hand operand of type 'std::_Iosb<int>::_Openmode' (or there is no acceptable conversion) (compiling source file D:\a\1\s\caffe2_builders\v141\pytorch\binaries\benchmark_helper.cc) [D:\a\1\s\caffe2_builders\v141\pytorch\build\Release\binaries\caffe2_benchmark.vcxproj]
2018-12-29T14:09:59.7868252Z   d:\a\1\s\caffe2_builders\v141\pytorch\binaries\benchmark_helper.h(92): error C2678: binary '|=': no operator found which takes a left-hand operand of type 'std::_Iosb<int>::_Openmode' (or there is no acceptable conversion) (compiling source file D:\a\1\s\caffe2_builders\v141\pytorch\binaries\benchmark_helper.cc) [D:\a\1\s\caffe2_builders\v141\pytorch\build\Release\binaries\caffe2_benchmark.vcxproj]
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15619

Differential Revision: D13580195

Pulled By: soumith

fbshipit-source-id: b0a4479cd5f7555801b1977aeee96b6433293da7

5 years ago  Adding a hook (wrapper) for non-std stream reader in PyTorchStreamReader (#15551)
Lu Fang [Sat, 5 Jan 2019 06:47:35 +0000 (22:47 -0800)]
Adding a hook (wrapper) for non-std stream reader in PyTorchStreamReader (#15551)

Summary:
Implementing a stream is very annoying, since it is closely tied to the underlying storage stream buffer.

So in this PR, we add ReadAdapterInterface, and PyTorchStreamReader will use it. We implement IStreamAdapter as a wrapper of std::istream and keep the user interface unchanged.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15551

Reviewed By: zrphercule

Differential Revision: D13568907

Pulled By: houseroad

fbshipit-source-id: 93708cb801248a6c101f35cb14d1631029365c3c

5 years ago  support 0 size in any of the tensor dimensions in mkldnn (#15295)
Cheng,Penghui [Sat, 5 Jan 2019 06:30:48 +0000 (22:30 -0800)]
support 0 size in any of the tensor dimensions in mkldnn (#15295)

Summary:
support 0 size in any of the tensor dimensions in mkldnn
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15295

Differential Revision: D13573747

Pulled By: yinghai

fbshipit-source-id: 5bf7a0b9e2567e80f44981a7823be5407fc94e53

5 years ago  Port replication_pad2d and replication_pad3d to ATen (#15538)
Lin Huang [Sat, 5 Jan 2019 00:59:18 +0000 (16:59 -0800)]
Port replication_pad2d and replication_pad3d to ATen (#15538)

Summary:
Port the 2D and 3D replication padding from the legacy TH API implementation
to an ATen implementation.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15538

Differential Revision: D13547567

Pulled By: lhuang04

fbshipit-source-id: decfe100d9edfdcfb62f39ee23f37b6cae0d461f

5 years ago  Fix different types in rsub caused bug (#15707)
zrphercule [Sat, 5 Jan 2019 00:11:23 +0000 (16:11 -0800)]
Fix different types in rsub caused bug (#15707)

Summary:
Before this PR, rsub did not convert its two operands to the same dtype; therefore "1 - x" could export to an ONNX model in which the two operands of rsub have different dtypes.
This symbolic patch should fix the bug.
Related test cases are also added.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15707

Differential Revision: D13583042

Pulled By: zrphercule

fbshipit-source-id: 3a2de47a1a8d1ded1a0adfb911adbe6ac729cdef

5 years ago  Tensor method rename dims()->sizes() - 1/2
Jerry Zhang [Fri, 4 Jan 2019 23:48:21 +0000 (15:48 -0800)]
Tensor method rename dims()->sizes() - 1/2

Summary: Codemod generated with clangr shard mode, 25 files per diff.

Reviewed By: BIT-silence

Differential Revision: D13581782

fbshipit-source-id: b16b4198e100617769d84aa599bf141117cfbe5b

5 years ago  Automatic update of fbcode/onnx to 8384c788939bc65463f9754b6a7a00b212b18ba1 (#15739)
Lu Fang [Fri, 4 Jan 2019 23:38:07 +0000 (15:38 -0800)]
Automatic update of fbcode/onnx to 8384c788939bc65463f9754b6a7a00b212b18ba1 (#15739)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15739

Previous import was 765f5ee823a67a866f4bd28a9860e81f3c811ce8

Included changes:
- **[8384c78](https://github.com/onnx/onnx/commit/8384c78)**: add constantofshape (#1582) <Rui Zhu>
- **[9afc06c](https://github.com/onnx/onnx/commit/9afc06c)**: Set symbol visibility to hidden for non-Windows (#1707) <Paul Jesse Hellemn>
- **[6f8a9f0](https://github.com/onnx/onnx/commit/6f8a9f0)**: Revert "Add NonMaxSupression operator (#1695)" (#1702) <Lu Fang>
- **[8b89544](https://github.com/onnx/onnx/commit/8b89544)**: Add NonMaxSupression operator (#1695) <Hector Li>
- **[0a7cc48](https://github.com/onnx/onnx/commit/0a7cc48)**: Add bfloat16 support. (#1699) <Dmitri Smirnov>
- **[da7c50c](https://github.com/onnx/onnx/commit/da7c50c)**: ONNX does not maintain versions for experimental ops (#1696) <Ke Zhang>
- **[0c8d857](https://github.com/onnx/onnx/commit/0c8d857)**: Correct type of value_info in Graph (#1694) <Maik Riechert>
- **[f612532](https://github.com/onnx/onnx/commit/f612532)**: Fix typos (#1686) <Eundoo Song>

Reviewed By: zrphercule

Differential Revision: D13581674

fbshipit-source-id: 8f8ee86a05a86fe99bf94509148c559ea3df1464

5 years ago  remove use of tmp_install
andersj [Fri, 4 Jan 2019 21:45:12 +0000 (13:45 -0800)]
remove use of tmp_install

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14553

Differential Revision: D13583335

Pulled By: anderspapitto

fbshipit-source-id: 8711fead9eda877c1037a0bc59f91a3d2e01f3e0

5 years ago  Update CI credentials
Will Feng [Fri, 4 Jan 2019 21:30:28 +0000 (13:30 -0800)]
Update CI credentials

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15736

Differential Revision: D13583174

Pulled By: yf225

fbshipit-source-id: 742470db10ef9df8f95e27626453b68ca90723e8

5 years ago  Temporarily disable all XXXlike operator tests in pytorch-onnx test (#15740)
zrphercule [Fri, 4 Jan 2019 21:26:32 +0000 (13:26 -0800)]
Temporarily disable all XXXlike operator tests in pytorch-onnx test (#15740)

Summary:
We are going to have some breaking changes to ConstantLike and related operators in ONNX, so it is better to disable all related tests for these operators for now.
These operators are not currently supported by caffe2 and are not included in our most recently released ONNX, so we do not need to worry about breaking internal/external production.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15740

Differential Revision: D13582528

Pulled By: zrphercule

fbshipit-source-id: 92a890c1dc2a833969af69edfea85331bb4d562f