surgan12 [Wed, 2 Jan 2019 07:09:45 +0000 (23:09 -0800)]
clamp fixes (#15479)
Summary: fix to #15338.
Differential Revision:
D13564343
Pulled By: soumith
fbshipit-source-id:
be64b572945533e10ae6f627d335b47f093720a3
svcscm [Wed, 2 Jan 2019 03:41:31 +0000 (19:41 -0800)]
Updating submodules
Reviewed By: cdelahousse
fbshipit-source-id:
acb68439e62ea270af22364183a6ecba883fab66
svcscm [Wed, 2 Jan 2019 01:20:19 +0000 (17:20 -0800)]
Updating submodules
Reviewed By: cdelahousse
fbshipit-source-id:
5c5ad6a5cc9220ee1dd9565d64c7459f866ff74d
Alexander Rodin [Mon, 31 Dec 2018 02:05:29 +0000 (18:05 -0800)]
Fix typo in documentation
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15628
Differential Revision:
D13562685
Pulled By: soumith
fbshipit-source-id:
1621fcff465b029142313f717035e935e9159513
vishwakftw [Sun, 30 Dec 2018 20:39:10 +0000 (12:39 -0800)]
Make btriunpack work for high dimensional batches and faster than before (#15286)
Summary:
Changelog:
- Optimize btriunpack by using `torch.where` instead of indexing, in-place operations instead of out-of-place operations, and avoiding costly permutations by computing the final permutation over a list.
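A minimal usage sketch of the op being optimized (illustrative, not from the original PR), assuming a 3-D batch:
```python
import torch

A = torch.randn(4, 3, 3)                        # batch of 4 square matrices
LU_data, LU_pivots = torch.btrifact(A)          # batched LU factorization
P, L, U = torch.btriunpack(LU_data, LU_pivots)  # unpack into explicit factors
# P @ L @ U reconstructs each matrix in the batch
print(torch.allclose(torch.matmul(P, torch.matmul(L, U)), A, atol=1e-5))
```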
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15286
Differential Revision:
D13562038
Pulled By: soumith
fbshipit-source-id:
e2c94cfab5322bf1d24bf56d7b056619f553acc6
Xiaomeng Yang [Sun, 30 Dec 2018 12:13:54 +0000 (04:13 -0800)]
Add count_include_pad arg for average_pool_op on CPU (#15593)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15593
Add count_include_pad arg for average_pool_op on CPU
Reviewed By: houseroad
Differential Revision:
D13558123
fbshipit-source-id:
188879ec3af313105ff66ac0b5a81ea44fca2855
vishwakftw [Sun, 30 Dec 2018 01:50:32 +0000 (17:50 -0800)]
Remove TH/THC link for cholesky (#15595)
Summary:
Changelog:
- Remove TH/THC binding
- Port single matrix case to ATen
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15595
Differential Revision:
D13561657
Pulled By: soumith
fbshipit-source-id:
65f8c4b455cf19a0c7b6aeac2e3b985c7a7208f8
Christoph [Sun, 30 Dec 2018 01:48:36 +0000 (17:48 -0800)]
Concatenate directly into shared memory when constructing batches for numpy (#14534)
Summary:
Since #1323, tensors are shared via shared memory, but this feature was not active for numpy arrays.
This PR fixes this.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14534
Differential Revision:
D13561649
Pulled By: soumith
fbshipit-source-id:
b6bc9e99fb91e8b675c2ef131fba9fa11c1647c0
Mark Harfouche [Sun, 30 Dec 2018 00:09:12 +0000 (16:09 -0800)]
Add a patch for OSX with SDK<10.12 (#15615)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/15614
Build passing on SDK 10.9
https://dev.azure.com/ramonaoptics/feedstock-builds/_build/results?buildId=13
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15615
Differential Revision:
D13561737
Pulled By: soumith
fbshipit-source-id:
2ab0f78338d4949fa3f2735915fd96dce4bcd621
Gao, Xiang [Sat, 29 Dec 2018 06:38:24 +0000 (22:38 -0800)]
Fix typo: szie -> size
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15466
Differential Revision:
D13536343
Pulled By: soumith
fbshipit-source-id:
cb3df30bf346ef6bc0bc1b6430107b3e0e086f8d
peter [Sat, 29 Dec 2018 06:10:08 +0000 (22:10 -0800)]
Make the warning suppression safer (#15560)
Summary:
Address the problem introduced in https://github.com/pytorch/pytorch/pull/15499#issuecomment-450038494.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15560
Differential Revision:
D13561346
Pulled By: soumith
fbshipit-source-id:
6abf622672bdcb77ae1a7188e8a3817fa97aecbc
Jongsoo Park [Sat, 29 Dec 2018 01:32:11 +0000 (17:32 -0800)]
add NCHW2NHWC and NHWC2NCHW in utils.py (#15588)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15588
Use the NHWC2NCHW or NCHW2NHWC functions, which are easier to understand than code using transpose and generalize to non-2D convolutions.
Reviewed By: csummersea
Differential Revision:
D13557674
fbshipit-source-id:
c4fdb8850503ea58f6b17b188513ae2b29691ec0
Vishwak Srinivasan [Sat, 29 Dec 2018 00:51:45 +0000 (16:51 -0800)]
Remove TH/THC link for gesv (#15510)
Summary:
This PR removes the TH/THC binding for gesv.
Changelog:
- Remove TH/THC binding
- Port single matrix case to ATen
- Enable test_gesv for CUDA as well
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15510
Differential Revision:
D13559990
Pulled By: soumith
fbshipit-source-id:
9da2825e94d3103627e719709e6b1f8b521a07fb
Dong Li [Fri, 28 Dec 2018 23:00:41 +0000 (15:00 -0800)]
keep extra_info of each op in ProfDagStats (#15244)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15244
This diff keeps track of the extra_info information attached to each operator. When getPerOpStats() is called, it attaches the extra_info to the resulting ProfDagStats protobuf.
Facebook
Net transform attaches a global_op_id, defined as a tuple of (orig_net_name, original_op_index), to each operator.
The global_op_id is encoded as extra_info in each operator.
Reviewed By: aazzolini
Differential Revision:
D13016289
fbshipit-source-id:
3e2719ec7ed0ebe47740b77581c565ff7e79b102
David Riazati [Fri, 28 Dec 2018 21:52:01 +0000 (13:52 -0800)]
Error when torch.load-ing a JIT model (#15578)
Summary:
Throw a warning when calling `torch.load` on a zip file
Fixes #15570
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15578
Differential Revision:
D13555954
Pulled By: driazati
fbshipit-source-id:
a37ecdb3dd0c23eff809f86e2f8b74cd48ff7277
SsnL [Fri, 28 Dec 2018 19:51:26 +0000 (11:51 -0800)]
default_collate should collate bool list to byte tensors (#14669)
Summary:
Based on #15331. Review only the last commit.
Fixes https://github.com/pytorch/pytorch/issues/14507.
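A minimal sketch of the new behavior, assuming `default_collate` is importable from `torch.utils.data.dataloader` as in this era:
```python
import torch
from torch.utils.data.dataloader import default_collate

# A batch of boolean samples now collates into a byte (uint8) tensor
print(default_collate([True, False, True]))
# tensor([1, 0, 1], dtype=torch.uint8)
```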
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14669
Reviewed By: ezyang
Differential Revision:
D13528725
Pulled By: soumith
fbshipit-source-id:
f12f1ac1c4ff2a3ddd6877c0c096a5da3a1ffa3c
Jongsoo Park [Fri, 28 Dec 2018 19:49:22 +0000 (11:49 -0800)]
append caffe2 prefix to dnnlowp cmd line options (#15582)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15582
Following the convention of having a caffe2_ prefix in command line options
Reviewed By: viswanathgs
Differential Revision:
D13252055
fbshipit-source-id:
142a6395b832f211f34d0a87ec2d62c1e5fcdc69
Jesse Hellemn [Fri, 28 Dec 2018 18:44:47 +0000 (10:44 -0800)]
adding nightly build smoke tests to circleci
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15441
Reviewed By: yf225
Differential Revision:
D13552399
Pulled By: pjh5
fbshipit-source-id:
4a52ee2d08324b9ab6b8c266ad6a1cd3bdad1c71
Lingyi Liu [Fri, 28 Dec 2018 01:13:50 +0000 (17:13 -0800)]
add the int support (#15581)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15581
as title
Reviewed By: protonu
Differential Revision:
D13556274
fbshipit-source-id:
ba21f0970257d526e2fe7574eea4f89465b9c618
Will Feng [Fri, 28 Dec 2018 01:12:27 +0000 (17:12 -0800)]
Move VariableImpl functions to AutogradMeta and Variable (#15487)
Summary:
In this PR, we are moving all functions away from `Variable::Impl`, in order to get rid of `Variable::Impl` (and the `data_` Tensor in it) in the next PR. Some of the functions (such as `set_requires_grad` / `requires_grad` / `grad`) will live in the `AutogradMeta` class, while others (such as `backward()` / `rebase_history()` / `grad_accumulator()` / `grad_fn()`) will live in the `Variable` class.
This is the 2nd PR mentioned in https://github.com/pytorch/pytorch/issues/13638.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15487
Differential Revision:
D13553173
Pulled By: yf225
fbshipit-source-id:
691f9432d0cd0640af380c757f3e3a2f64f8851c
Roy Li [Fri, 28 Dec 2018 01:01:19 +0000 (17:01 -0800)]
test basic tensor interop
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/12249
Differential Revision:
D13469356
Pulled By: li-roy
fbshipit-source-id:
b49748462aa44ac34b8ce79783f2c895a537a232
David Riazati [Thu, 27 Dec 2018 23:58:32 +0000 (15:58 -0800)]
Allow int/float cast to bool (#13391)
Summary:
This PR adds explicit `bool()` casts to match Python semantics
`bool(1) = True`
`bool(0) = False`
`bool(0.0) = False`
`bool(0.1) = True`
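A minimal TorchScript sketch of the new cast (illustrative):
```python
import torch

@torch.jit.script
def to_bool(x: int) -> bool:
    # Explicit bool() now follows Python truthiness: 0 -> False, nonzero -> True
    return bool(x)

print(to_bool(0))  # False
print(to_bool(1))  # True
```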
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13391
Differential Revision:
D12871213
Pulled By: driazati
fbshipit-source-id:
773a48b2647973138efe854abe725d647f1d727d
Elias Ellison [Thu, 27 Dec 2018 23:35:24 +0000 (15:35 -0800)]
remove print ops before exporting onnx graph (#15550)
Summary:
Remove print ops before exporting the ONNX graph; fixes https://github.com/pytorch/pytorch/issues/15505
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15550
Differential Revision:
D13551195
Pulled By: eellison
fbshipit-source-id:
1ea1e34cb5b8433eacc2b86fb10b241198af96be
Igor Fedan [Thu, 27 Dec 2018 23:24:22 +0000 (15:24 -0800)]
Added deviceCount() virtual method to DeviceGuardImplInterface (#15574)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15574
Added a deviceCount() virtual method to DeviceGuardImplInterface, and added corresponding implementations for CPUGuardImpl, CUDAGuardImpl, FakeGuardImpl, VirtualGuardImpl, and HIPGuardImplMasqueradingAsCUDA.
Reviewed By: soumith
Differential Revision:
D13554609
fbshipit-source-id:
913bf2aad44a0a356efe54505ee4abaf6c4622db
Gregory Chanan [Thu, 27 Dec 2018 23:20:42 +0000 (15:20 -0800)]
Port torch.range to aten and parallelize on CPU.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15484
Differential Revision:
D13538955
Pulled By: gchanan
fbshipit-source-id:
ee3889ad116988d963e603621310b3bbdce0aec9
Lu Fang [Thu, 27 Dec 2018 22:42:01 +0000 (14:42 -0800)]
Export group norm as ATen and add test (#15569)
Summary:
Short-term solution: export group norm as an ATen op to unblock users.
Long term, we will add GroupNorm to ONNX.
Adds an end-to-end test for this one.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15569
Differential Revision:
D13554293
Pulled By: houseroad
fbshipit-source-id:
b4974c9ea2a1b81338ca1e5c6747efe2715d7932
SsnL [Thu, 27 Dec 2018 22:06:23 +0000 (14:06 -0800)]
Update cuda.get/set_rng_state doc (#14324)
Summary:
Now that `cuda.get/set_rng_state` accept `device` objects, the default value should be a device object, and the docs should mention so.
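A minimal sketch of the updated calls, assuming a CUDA-enabled build:
```python
import torch

if torch.cuda.is_available():
    # device may now be given as a torch.device (and defaults to one)
    state = torch.cuda.get_rng_state(torch.device('cuda:0'))
    torch.cuda.set_rng_state(state, torch.device('cuda:0'))
```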
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14324
Reviewed By: ezyang
Differential Revision:
D13528707
Pulled By: soumith
fbshipit-source-id:
32fdac467dfea6d5b96b7e2a42dc8cfd42ba11ee
Marat Dukhan [Thu, 27 Dec 2018 19:55:02 +0000 (11:55 -0800)]
Update QNNPACK (#15561)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15561
- Update QNNPACK submodule to master (API-incompatible)
- Do matching changes in Caffe2 Int8 operators
Reviewed By: dreiss
Differential Revision:
D13551322
fbshipit-source-id:
066f9087061167f7d7cfbc1c8f8628dfa93d056e
Michael Suo [Thu, 27 Dec 2018 18:53:58 +0000 (10:53 -0800)]
Revert D13552080: [pytorch][PR] add clang-format check to CI
Differential Revision:
D13552080
Original commit changeset:
462a73894c16
fbshipit-source-id:
ebfc5aa3343cebabbc24ff39e4e9841a372443e2
daquexian [Thu, 27 Dec 2018 09:59:56 +0000 (01:59 -0800)]
Fix wrong class name in jit _make_fail (#15559)
Summary:
It should be ScriptModule rather than TracedModule :)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15559
Differential Revision:
D13552058
Pulled By: soumith
fbshipit-source-id:
0aa17639c225818b00d59daec4bc2336f039f658
Michael Suo [Thu, 27 Dec 2018 06:17:59 +0000 (22:17 -0800)]
add clang-format check to CI (#15543)
Summary:
Simple check that runs against your PR's changes and complains if running clang-format would have created a change. Does nothing when run against master, so it's "safe" to accept changes that fail this check and it won't break the build.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15543
Reviewed By: soumith
Differential Revision:
D13552080
Pulled By: suo
fbshipit-source-id:
462a73894c16e7108806af7fa88440c377d4d0d2
Ailing Zhang [Thu, 27 Dec 2018 03:43:10 +0000 (19:43 -0800)]
Fix github branch prefix v (#15552)
Summary:
Fixes #15519.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15552
Differential Revision:
D13550780
Pulled By: ailzhang
fbshipit-source-id:
b117e5ced42de207b91045bffcee8907dd73201e
Viswanath Sivakumar [Thu, 27 Dec 2018 02:01:20 +0000 (18:01 -0800)]
Rotated boxes support for GPU GenerateProposals op (#15470)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15470
On top of D13509114 and D13017791. Pretty straightforward.
Reviewed By: newstzpz
Differential Revision:
D13536671
fbshipit-source-id:
ff65981b70c63773ccc9aef3ff28e3c9508f6716
Viswanath Sivakumar [Thu, 27 Dec 2018 02:01:20 +0000 (18:01 -0800)]
CUDA kernel for rotated NMS support, over 200x speedup than CPU (#15365)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15365
On top of D13017791, adding rotated NMS support with the same kernel building blocks. Results in a 218x speedup on average.
Reviewed By: SuperIRabbit
Differential Revision:
D13509114
fbshipit-source-id:
c1d33c8dc4bc50b5906b4f01bb0caf1115e2a357
Will Feng [Thu, 27 Dec 2018 00:31:47 +0000 (16:31 -0800)]
Move autograd metadata from VariableImpl to TensorImpl (#13827)
Summary:
Changes originally in this PR:
1. Move Variable::Impl data members into TensorImpl as `AutogradMeta` struct
2. Change Variable::Impl functions to use data members in `AutogradMeta` struct
3. Add `shallow_copy_and_detach()` function to each subclass of TensorImpl
4. Do shallow copy when the user calls `make_variable(tensor)` / `make_variable_view(tensor)` / `variable.set_data(tensor)` / `variable.detach()`
Changes moved from https://github.com/pytorch/pytorch/pull/13645:
1. Add a flag to Variable to disallow size/stride/storage_ptr changes from in-place operations such as `resize_` / `resize_as_` / `set_` / `transpose_`, and set this flag to true when people call `tensor.data` in Python.
2. Write text in the docs to actively discourage changing the shape or storage of `tensor_detached` and expecting `tensor` to also be updated.
This is the 1st+2nd PR mentioned in https://github.com/pytorch/pytorch/issues/13638.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13827
Differential Revision:
D13507173
Pulled By: yf225
fbshipit-source-id:
b177b08438d534a8197e34e1ad4a837e2db0ed6a
Soumith Chintala [Wed, 26 Dec 2018 23:41:46 +0000 (15:41 -0800)]
version bump to 1.1 (#15554)
Summary:
version bump to 1.1
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15554
Differential Revision:
D13550818
Pulled By: soumith
fbshipit-source-id:
8a28582c98b42c081e103581551a01fd96c9f42d
derek [Wed, 26 Dec 2018 20:54:17 +0000 (12:54 -0800)]
In README.md CMAKE_PREFIX_PATH should be CONDA_PREFIX when using a conda virtual environment (#15548)
Summary:
In the current README.md, `CMAKE_PREFIX_PATH` is set to the conda root even when you have activated a virtual environment. When a conda virtualenv is activated, packages are installed in `CONDA_PREFIX`, not the conda root. I think `CMAKE_PREFIX_PATH` should also be set to `CONDA_PREFIX` in this case. I think some build issues can be solved with the new instruction, maybe something like #14954.
soumith,
When I made PR #15335 I was confused and made a wrong point. I think this PR could be the real solution.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15548
Differential Revision:
D13549681
Pulled By: soumith
fbshipit-source-id:
42d855b6e49ee58d735d2f4715d3e5752a748693
David Pollack [Wed, 26 Dec 2018 16:31:00 +0000 (08:31 -0800)]
add from_pretrained method to EmbeddingBag (#15273)
Summary:
The `EmbeddingBag` module does not include a `from_pretrained` method like the `Embedding` module does. I added it for consistency between the two modules.
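A minimal sketch mirroring `Embedding.from_pretrained` (illustrative):
```python
import torch
import torch.nn as nn

weight = torch.tensor([[1.0, 2.0, 3.0],
                       [4.0, 5.0, 6.0]])
bag = nn.EmbeddingBag.from_pretrained(weight)  # frozen by default, like Embedding
out = bag(torch.tensor([[0, 1]]))              # one bag containing rows 0 and 1
print(out)  # tensor([[2.5000, 3.5000, 4.5000]]) with the default 'mean' mode
```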
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15273
Differential Revision:
D13547842
Pulled By: soumith
fbshipit-source-id:
8ffde51ff0c1e8fc8310263b6f375da88089ff7d
vishwakftw [Wed, 26 Dec 2018 16:29:57 +0000 (08:29 -0800)]
Make argument size checking consistent across CPU and CUDA for torch.gesv (#15430)
Summary:
There is an inconsistency in the argument size checking for gesv, which this PR fixes.
Changelog:
- Replicate check in CPU as done for CUDA
- Fix argument ordering (minor) in CUDA checking
Fixes #15328
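For reference, a minimal call to the op whose checks are being unified (same code on CPU and CUDA; illustrative):
```python
import torch

A = torch.randn(3, 3)
B = torch.randn(3, 2)
X, LU = torch.gesv(B, A)  # solves A @ X = B
print(torch.allclose(torch.mm(A, X), B, atol=1e-5))
```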
Differential Revision:
D13531167
Pulled By: soumith
fbshipit-source-id:
c4b4e4fc12880208d08e88d1e47e730ac98c2ad3
Michael Suo [Wed, 26 Dec 2018 14:52:25 +0000 (06:52 -0800)]
clang format world (#15524)
Summary:
The PR clang-formats everything in `torch/csrc/jit/` and adds it to the pre-commit hook.
Here is a list of non-mechanical changes:
- I went over each file and fixed up whenever I could tell that clang-format was clobbering comment formatting.
- Made the macros in register_prim_ops a little more clang-format friendly by omitting trailing commas
- Refactored autodiff.cpp to use a helper class with explicit state rather than a bunch of capturing lambdas
- Small improvements to the precommit hook clang-format
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15524
Differential Revision:
D13547989
Pulled By: suo
fbshipit-source-id:
3ff1541bb06433ccfe6de6e33f29227a2b5bb493
Frank Zhang [Wed, 26 Dec 2018 14:32:44 +0000 (06:32 -0800)]
Added correct isinf handling for Integral tensors (#15489)
Summary:
Currently, torch.isinf on an integral tensor raises `RuntimeError: value cannot be converted to type int16_t without overflow: inf`.
This PR suppresses the error and returns false (0) for all integral tensors, making the behavior consistent with np.isinf.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15489
Reviewed By: zou3519
Differential Revision:
D13540786
Pulled By: flashhack
fbshipit-source-id:
e730dea849da6a59f3752d347bcfbadfd12c6483
Derek Kim [Wed, 26 Dec 2018 10:11:17 +0000 (02:11 -0800)]
Trivial comment update in autograd/function.h (#15529)
Summary:
I removed the explanation of the `num_inputs` parameter. This parameter was removed in #8168
colesbury
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15529
Differential Revision:
D13547854
Pulled By: soumith
fbshipit-source-id:
8a9ac58f2c93a2533b82ec63089477166ed0bcb9
peter [Wed, 26 Dec 2018 08:46:13 +0000 (00:46 -0800)]
Fix failed type cast in Windows Debug Build (#15333)
Summary:
Fixes #15330
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15333
Differential Revision:
D13531317
Pulled By: soumith
fbshipit-source-id:
b956f27bd7fa33cbdf405338fcbcbc7df2fd629f
Gu, Jinghui [Wed, 26 Dec 2018 06:54:16 +0000 (22:54 -0800)]
Upgrade MKL-DNN to version 0.17 and static build MKL-DNN (#15504)
Summary:
Upgrade MKL-DNN to 0.17 and statically build MKL-DNN to fix potential build errors caused by an old mkldnn version on the host system.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15504
Differential Revision:
D13547885
Pulled By: soumith
fbshipit-source-id:
46f790a3d9289c1e153e51c62be17c5206ea8f9a
Soumith Chintala [Wed, 26 Dec 2018 05:55:26 +0000 (21:55 -0800)]
remove legacy from docs (#15112)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/15062
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15112
Differential Revision:
D13547845
Pulled By: soumith
fbshipit-source-id:
61e3e6c6b0f6b6b3d571bee02db2938ea9698c99
Alexander Rodin [Wed, 26 Dec 2018 05:43:38 +0000 (21:43 -0800)]
Use at::zeros instead of torch::zeros in non-differentiable example (#15527)
Summary:
There was a typo in the C++ docs.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15527
Differential Revision:
D13547858
Pulled By: soumith
fbshipit-source-id:
1f5250206ca6e13b1b1443869b1e1c837a756cb5
peter [Wed, 26 Dec 2018 05:43:22 +0000 (21:43 -0800)]
Fix the compare logic in function `overflows` for MSVC (#15499)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/15497.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15499
Differential Revision:
D13547835
Pulled By: soumith
fbshipit-source-id:
a674da93bf905a0b81f0cc60449ccb97c2746926
SsnL [Mon, 24 Dec 2018 17:08:50 +0000 (09:08 -0800)]
Allow converting char tensor to numpy; add [fi]info.min (#15046)
Summary:
https://github.com/pytorch/pytorch/pull/14710 with the test fixed.
Also added `finfo.min` and `iinfo.min` to get castable tensors.
cc soumith
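A minimal sketch of the additions (illustrative):
```python
import torch

print(torch.finfo(torch.float32).min)  # most negative finite float32
print(torch.iinfo(torch.int8).min)     # -128
# Char (int8) tensors can now be converted to numpy
arr = torch.zeros(3, dtype=torch.int8).numpy()
```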
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15046
Reviewed By: soumith
Differential Revision:
D13429388
Pulled By: SsnL
fbshipit-source-id:
9a08004419c83bc5ef51d03b6df3961a9f5dbf47
Lin Huang [Mon, 24 Dec 2018 14:29:34 +0000 (06:29 -0800)]
Port replication_pad1d to ATen (#15507)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15507
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15485
port replication_pad1d
Reviewed By: ezyang
Differential Revision:
D13531920
fbshipit-source-id:
dcd64ebd2c24b7431996231b8d5addfb600b1072
Peter Goldsborough [Mon, 24 Dec 2018 14:23:32 +0000 (06:23 -0800)]
Support stateful dataset (#15096)
Summary:
Currently re-implements the dataloader for stateful datasets. Outstanding work:
- Refactor DataLoader and DataLoader2 to have common base classes and differ only in specific pieces of logic,
- Figure out how to avoid duplicating the `MapDataset` logic for stateful vs. non-stateful datasets
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15096
Differential Revision:
D13522043
Pulled By: goldsborough
fbshipit-source-id:
08e461ca51783047f11facc4d27dfa2e4f1e4c2a
Michael Suo [Mon, 24 Dec 2018 13:34:17 +0000 (05:34 -0800)]
put interactive prompt in bash (#15521)
Summary:
This makes compatibility with different versions of Python a little bit simpler, and fixes a problem where stdin wasn't being read from the terminal properly in the prompt.
zdevito: This should fix your EOF exception.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15521
Differential Revision:
D13546358
Pulled By: suo
fbshipit-source-id:
fb7551a86c888196831c046d9d9848e7ff05b925
peter [Mon, 24 Dec 2018 03:47:03 +0000 (19:47 -0800)]
Fix the iterator category for torch::data::Iterator (#15500)
Summary:
Try to fix https://github.com/pytorch/pytorch/issues/14410.
Additional info: from this [page](https://stackoverflow.com/questions/14062297/canonical-way-to-define-forward-output-iterator), if we change it into `input_iterator_tag`, it doesn't mean the `output_iterator_tag` is lost.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15500
Differential Revision:
D13545773
Pulled By: soumith
fbshipit-source-id:
327bfb7be83d53e42925e0e391b2a4277e3a1b36
Michael Suo [Sun, 23 Dec 2018 22:35:41 +0000 (14:35 -0800)]
Precommit hook: just warn if no clang-tidy (#15514)
Summary:
The precommit hook shouldn't hard-fail if there's no `clang-tidy`; it should just warn and skip the check.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15514
Differential Revision:
D13545776
Pulled By: suo
fbshipit-source-id:
9bf3f8ee18703c6d1a39eb7776092fb5e120d2a1
Gao, Xiang [Sun, 23 Dec 2018 22:28:31 +0000 (14:28 -0800)]
Add torch.rot90 to torch.rst
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15512
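For reference, minimal usage of the op being documented:
```python
import torch

x = torch.arange(4).reshape(2, 2)  # tensor([[0, 1], [2, 3]])
print(torch.rot90(x, 1, [0, 1]))   # rotate 90 degrees counter-clockwise
# tensor([[1, 3],
#         [0, 2]])
```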
Differential Revision:
D13545775
Pulled By: soumith
fbshipit-source-id:
2a8896571745630cff4aaf3d5469ef646bdcddb4
Brennan Vincent [Sun, 23 Dec 2018 20:49:08 +0000 (12:49 -0800)]
fix parallelization detection for CPU foreach_reduced_elt (#15483)
Summary:
This does two things:
(1): revert #15114, which is incorrect and actually just completely disables parallelization in this function (because `at::get_num_threads` returns `-1` unless it has been set explicitly)
(2): fix our (FB-internal) failing tests that #15114 was intended to fix, by still working correctly in a setup where `#ifdef _OPENMP` is set and `omp_get_max_threads() > 1`, but `#pragma omp parallel` only launches one thread. I believe such an unusual situation only exists in certain unit tests within FB infra, but we still need it to work.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15483
Differential Revision:
D13538940
Pulled By: umanwizard
fbshipit-source-id:
a3362c7ac7327ced350d127bb426f82c59e42732
Jongsoo Park [Sat, 22 Dec 2018 18:22:56 +0000 (10:22 -0800)]
add rowwise adagrad lp test (#15082)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15082
We didn't have a unit test for low-precision rowwise adagrad
Reviewed By: chocjy
Differential Revision:
D13300732
fbshipit-source-id:
46e7bdfc82c5a6855eeb6f653c0a96b0b3a20546
Jongsoo Park [Sat, 22 Dec 2018 06:17:35 +0000 (22:17 -0800)]
handle empty inputs to SparseLengthsMean correctly (#15389)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15389
SparseLengthsMean was generating uninitialized data for empty inputs (lengths == 0). We should return zeros.
The unit tests were also not covering this special case which is fixed by this diff.
Reviewed By: salexspb
Differential Revision:
D13515970
fbshipit-source-id:
3c35265638f64f13f0262cee930c94f8628005da
Hao Lu [Sat, 22 Dec 2018 04:23:14 +0000 (20:23 -0800)]
Add pthreadpool_create and pthreadpool_destroy (#15492)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15492
Add pthreadpool_create and pthreadpool_destroy, which are used by NNPACK tests.
Reviewed By: Maratyszcza
Differential Revision:
D13540997
fbshipit-source-id:
628c599df87b552ca1a3703854ec170243f04d2e
Pritam Damania [Sat, 22 Dec 2018 01:34:51 +0000 (17:34 -0800)]
Metadata for input/output formats in model file proto. (#15252)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15252
We would like to extend the model file format to include strongly typed, semantic information about the model inputs and outputs.
The goal is for a user to be able to consider a model file like a function with
a well defined API describing what the inputs and outputs would be.
Reviewed By: dzhulgakov
Differential Revision:
D13009915
fbshipit-source-id:
5df124a876ad03c05fbdaacae0eab659637734c1
Zachary DeVito [Sat, 22 Dec 2018 00:44:19 +0000 (16:44 -0800)]
add len to nativeResolver (#15488)
Summary:
(otherwise len is not resolvable using torch::jit::compile)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15488
Differential Revision:
D13539991
Pulled By: zdevito
fbshipit-source-id:
3ba85fa7b1adb163f9229c568f7997d22321903d
David Riazati [Sat, 22 Dec 2018 00:30:35 +0000 (16:30 -0800)]
Remove NoneGenerator
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15335
Differential Revision:
D13540357
Pulled By: driazati
fbshipit-source-id:
a289e5944b65872103f68faac74e18f10e7c6fff
David Riazati [Fri, 21 Dec 2018 23:59:29 +0000 (15:59 -0800)]
Add self to Python printer reserved words (#15318)
Summary:
This adds `self` to the list of reserved words, sorts the list, and prevents the tracer from naming values 'self' (which happens in torch/tensor.py)
Fixes #15240
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15318
Differential Revision:
D13540192
Pulled By: driazati
fbshipit-source-id:
46ae02e51b1b31d5c62110fa83ba258ea6bada27
Ailing Zhang [Fri, 21 Dec 2018 23:32:44 +0000 (15:32 -0800)]
AD support for adaptive_avg_pool2d (#15459)
Summary:
This adds AD support for adaptive_avg_pool2d, which is necessary for resnet50 in pytorch/vision:master. cc: soumith asuhan dlibenzi
apaszke: I saw the autodiff bug you fixed in #15403; as it doesn't prevent this PR from passing, I'll leave it for your PR to fix. :)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15459
Differential Revision:
D13534732
Pulled By: ailzhang
fbshipit-source-id:
4e48b93e35d5ecfe7bd64b6a132a55b07843f206
Hao Lu [Fri, 21 Dec 2018 23:05:12 +0000 (15:05 -0800)]
Handling nullptr case
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15467
Reviewed By: Maratyszcza
Differential Revision:
D13536504
fbshipit-source-id:
ab46ff6bb4b6ce881c3e29d7e6a095ea62289db4
Bram Wasti [Fri, 21 Dec 2018 22:11:26 +0000 (14:11 -0800)]
Relax check on outputs (#15458)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15458
Many nets in the wild seem to have outputs that are never produced by the net.
Reviewed By: ZolotukhinM
Differential Revision:
D13534185
fbshipit-source-id:
2b23b39c28404c53f68868f3bf6df53c5fea9eab
Zachary DeVito [Fri, 21 Dec 2018 21:46:12 +0000 (13:46 -0800)]
allow non-final returns (#15463)
Summary:
This PR allows a subclass of programs that have return statements that are not final in the graph.
`final_returns.h` contains a comment describing how this is accomplished.
To minimize complexity in `compiler.cpp`, this pass is done as an AST-to-AST rewrite before the compiler runs.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15463
Differential Revision:
D13538962
Pulled By: zdevito
fbshipit-source-id:
67105ca873351825b4a364092ab1873779f3e462
derek [Fri, 21 Dec 2018 19:54:57 +0000 (11:54 -0800)]
Fixed trivial typos in Dropout2D and Dropout3D classes (#15200)
Summary:
Fixed trivial typos in Dropout2D and Dropout3D classes
weiyangfb
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15200
Differential Revision:
D13537888
Pulled By: ezyang
fbshipit-source-id:
8fb06027ca663a2e4bfa016af400698ae3c88ad1
svcscm [Fri, 21 Dec 2018 19:44:29 +0000 (11:44 -0800)]
Updating submodules
Reviewed By: cdelahousse
fbshipit-source-id:
59d7a5b82fb78bc2d2285d0896e35c262512ffb9
surgan12 [Fri, 21 Dec 2018 19:32:02 +0000 (11:32 -0800)]
eq_fixes (#15475)
Summary:
Fixes #15464.
cc: ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15475
Differential Revision:
D13537812
Pulled By: ezyang
fbshipit-source-id:
127adf612ac8b3d3a64baa3d12a53daba7d3e4b8
vishwakftw [Fri, 21 Dec 2018 19:29:36 +0000 (11:29 -0800)]
Enable running collect_env.py without building PyTorch (#15468)
Summary: Closes #15346
Differential Revision:
D13537873
Pulled By: ezyang
fbshipit-source-id:
7765ce4108dae9479d8900c0815cc2f174596a83
Bram Wasti [Fri, 21 Dec 2018 19:06:49 +0000 (11:06 -0800)]
Back out "[nomnigraph][executor] computeChains with nomnigraph" (#15451)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15451
Original commit changeset:
ccd050bfead6
Reviewed By: ilia-cher
Differential Revision:
D13533161
fbshipit-source-id:
1d0dcd54c2e3875aab015f3e996693e67a449b87
James Reed [Fri, 21 Dec 2018 18:32:57 +0000 (10:32 -0800)]
Direct FBGEMM integraton into ATen (#13777)
Summary:
This PR implements infrastructure for post-processing a model to apply int8 quantization to its `nn.Linear` modules. Highlights of the implementation:
1) Inputs and outputs are `float` (quantized and packed internally), but the weight is quantized and packed ahead of time for efficiency. This implementation performs well in small-batch size GEMM calls. It should not be considered a general-purpose quantized GEMM kernel.
2) Weight packing is dependent on machine architecture (e.g. vector register width), so it is done just-in-time. Concretely, it is done on model load for the weights and it is done during operator execution for the input value.
3) Biases are unquantized
4) We fail loudly if we are attempting to run this on a machine that does not support FBGEMM. This is because we do not want a model's numerics to differ based on which machine it is run on. A model containing these FBGEMM ops *must* be run with FBGEMM
The API can be seen in the added test case. Highlights are:
1) `torch.jit.quantized.quantize_linear_modules` walks the module hierarchy of the passed-in Module and replaces all `nn.Linear` modules with a new `QuantizedLinear` module, which encapsulates the behavior described above.
2) `_pack()` and `_unpack()` script methods are present on `QuantizedLinear` modules. These methods should be called before serialization and after deserialization, respectively. This ensures that the weight matrix is properly packed for the running machine's architecture. Note that in the long term, we would like to move toward a more Pickle-style serialization technique, rather than having these explicit methods that mutate member values. This is blocked on being able to assign attributes in a ScriptMethod, among other things.
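A minimal sketch of the API described above (illustrative; assumes an FBGEMM-capable build):
```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
# Walks the module hierarchy and swaps each nn.Linear for a QuantizedLinear
# (int8 packed weights, float inputs/outputs)
qmodel = torch.jit.quantized.quantize_linear_modules(model)
```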
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13777
Differential Revision:
D13383276
Pulled By: jamesr66a
fbshipit-source-id:
00f29c9f34544add2b90107e3cf55a287802c344
Ashwin Ramaswami [Fri, 21 Dec 2018 17:37:25 +0000 (09:37 -0800)]
Replace getargspec with getfullargspec (#15396)
Summary:
Replace `getargspec` with `getfullargspec` to resolve test warnings. Fixes #15344.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15396
Differential Revision:
D13529548
Pulled By: zou3519
fbshipit-source-id:
50d3be92423a9ce89bc4895b67569663e1abbaa6
Fei Sun [Fri, 21 Dec 2018 16:39:05 +0000 (08:39 -0800)]
The benchmark binary support multiple batches in one run (#15443)
Summary:
It is sometimes beneficial to run multiple batches in one benchmark and check the aggregated results.
This PR enables this functionality.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15443
Reviewed By: llyfacebook
Differential Revision:
D13531129
Pulled By: sf-wind
fbshipit-source-id:
553a762a5cbadf5a3d9fd6af767ae34899bc1aa2
Gregory Chanan [Fri, 21 Dec 2018 16:18:37 +0000 (08:18 -0800)]
Move torch.logspace to ATen and parallelize on CPU.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15438
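For reference, minimal usage of the ported op:
```python
import torch

print(torch.logspace(0, 2, steps=3))  # tensor([  1.,  10., 100.])
```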
Reviewed By: ezyang
Differential Revision:
D13529626
Pulled By: gchanan
fbshipit-source-id:
896e8afee3d6b5a706c4f5815b91ba6bd8af6672
Dmytro Dzhulgakov [Fri, 21 Dec 2018 16:13:15 +0000 (08:13 -0800)]
Fix cudnn dropout (#15473)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15473
Revert accidental changes introduced in D13335176.
IntList is a range and copying it just copies pointers. Thus the pointers would point either to deallocated memory or to the same memory, causing the equality check to always pass.
Reviewed By: ezyang
Differential Revision:
D13537131
fbshipit-source-id:
c97b3533be689bb4cdadd9e612f1284ac50e4bda
Jongsoo Park [Fri, 21 Dec 2018 07:26:23 +0000 (23:26 -0800)]
format specialized_segment_ops_test.py to prepare D13515970 (#15408)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15408
Applied formatting to specialized_segment_ops_test.py to prepare D13515970
Reviewed By: salexspb
Differential Revision:
D13520300
fbshipit-source-id:
c3250b6abe8087c607f65ae60d1da61bd46c342b
Yinghai Lu [Fri, 21 Dec 2018 06:04:09 +0000 (22:04 -0800)]
Clean up onnxifi transformation code (#15453)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15453
Just move things around to facilitate further development. No logic change.
Reviewed By: rdzhabarov
Differential Revision:
D13533959
fbshipit-source-id:
eebab1306939e802aacffb24a711d372fd67916c
Edward Yang [Fri, 21 Dec 2018 05:51:25 +0000 (21:51 -0800)]
Record Caffe2's current stream ID in c10_cuda. (#15174)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15174
Previously, Caffe2 maintained a separate per-thread per-device
current logical CUDA stream ID. In this PR, we switch Caffe2 over
to using c10::Stream to manage the current stream, and also
manage the allocation of cudaStream_t objects.
This results in a slight behavior change: previously, Caffe2
would have been willing to allocate an arbitrary number of
CUDA streams, depending on how high the logical stream IDs
went. The c10::Stream pool has a fixed number of streams, once
you exceed it, it wraps around.
Reviewed By: dzhulgakov
Differential Revision:
D13451550
fbshipit-source-id:
da6cf33ee026932a2d873835f6e090f7b8a7d8dc
Richard Zou [Fri, 21 Dec 2018 01:34:41 +0000 (17:34 -0800)]
Add option to automatically handle unsorted variable-length sequences in RNNs (#15225)
Summary:
Fixes #3584.
Motivation: manually sorting sequences, packing them, and then unsorting them
is something a lot of users have complained about doing, especially when we can
offer library support for them.
Overview: we internally sort sequences before packing them and store a list of
`unsorted_indices` that represent how to unsort the sequences inside
PackedSequence. The packing helper functions return PackedSequence with the
`permutation` field and the unpacking helper functions use it to unsort.
To implement this, the following changes were made:
- PackedSequence now keeps `sorted_indices` and `unsorted_indices`.
These two can be thought of as permutations and are inverses of each other.
`sorted_indices` is how the sequences were sorted; `unsorted_indices` is how
to unsort the sequences.
- Added an `enforce_sorted` argument to pack_sequence and pack_padded_sequence
that maintains the legacy behavior of error-ing out on unsorted-sequences.
When `enforce_sorted=True`, these functions maintain their ONNX exportability.
- pack_sequence(sequences, enforce_sorted) takes in unsorted sequences.
- pack_padded_sequence can take in a padded tensor that represents padded,
unsorted sequences.
- pad_packed_sequence unsorts the PackedSequence such that it is still the
inverse operation of packed_padded_sequence.
- RNNs apply `sorted_indices` to their input hidden state and apply
`unsorted_indices` to their output hidden state. This is to ensure that the
hidden state batches correspond to the user's ordering of input sequences.
NOT BC-Breaking
- The default for pack_sequence and pack_padded_sequence is
`enforce_sorted=True` to avoid breaking ONNX export. To use the new
functionality, pass in `enforce_sorted=False`
Testing Plan
- Modified TestNN.test_pack_sequence, TestNN.test_packed_padded_sequence,
and TestNN.test_variable_sequence (RNN test) to check the behavior
of unsorted sequences, sorted sequences, and sorted sequences with
enforce_sorted=True
- test/test_jit.py has a test to see if RNNs are exportable with
enforce_sorted=True
cc colesbury
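A minimal sketch of the new `enforce_sorted=False` path (illustrative):
```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

padded = torch.randn(3, 5, 8)  # batch of 3, max length 5, feature dim 8
lengths = [2, 5, 3]            # NOT sorted in decreasing order
packed = pack_padded_sequence(padded, lengths, batch_first=True,
                              enforce_sorted=False)  # sorts internally
out, h = nn.GRU(8, 16, batch_first=True)(packed)
unpacked, out_lengths = pad_packed_sequence(out, batch_first=True)
# rows of `unpacked` come back in the caller's original ordering
```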
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15225
Reviewed By: soumith
Differential Revision:
D13507138
Pulled By: zou3519
fbshipit-source-id:
b871dccd6abefffca81bc4e3efef1873faa242ef
WeihuangXu [Fri, 21 Dec 2018 01:04:14 +0000 (17:04 -0800)]
Change default value of unique to 'sorted=True'
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15379
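A minimal illustration of the new default:
```python
import torch

x = torch.tensor([3, 1, 2, 3, 1])
print(torch.unique(x))  # sorted=True is now the default -> tensor([1, 2, 3])
```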
Differential Revision:
D13531287
Pulled By: ezyang
fbshipit-source-id:
1512da7d660dc413688d99264e6434897c3ac78c
Jongsoo Park [Fri, 21 Dec 2018 01:01:53 +0000 (17:01 -0800)]
add denormal options (ftz and daz)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15423
Reviewed By: yinghai
Differential Revision:
D13526340
fbshipit-source-id:
de2ecc717b4f778f33a8bf940ed144dbb230c7a8
surgan12 [Fri, 21 Dec 2018 00:53:49 +0000 (16:53 -0800)]
collect_env fix (#15447)
Summary:
fixes #15214
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15447
Differential Revision:
D13531523
Pulled By: ezyang
fbshipit-source-id:
8f24f5ae9f3e78f6c5c9ee702ba14faca7aa297a
Lu Fang [Fri, 21 Dec 2018 00:14:16 +0000 (16:14 -0800)]
Remove unused field in jit script module deserializer (#15439)
Summary:
A bit of cleanup.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15439
Reviewed By: zrphercule
Differential Revision:
D13532015
Pulled By: houseroad
fbshipit-source-id:
2fb1e01fc28549c7e78af6c65ee68339950bc7da
Edward Yang [Thu, 20 Dec 2018 23:44:09 +0000 (15:44 -0800)]
Revert D13494873: [pytorch][PR] Fixing ONNX export of logical ops to have correct output datatype
Differential Revision:
D13494873
Original commit changeset:
069d2f956a5a
fbshipit-source-id:
80ef10b2eb623a63da51dc2e4874f2ee446f426d
Viswanath Sivakumar [Thu, 20 Dec 2018 23:33:44 +0000 (15:33 -0800)]
Fix ASAN div by zero error in rotated GenerateProposals op (#15415)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15415
Was introduced in D13429770
Reviewed By: SuperIRabbit
Differential Revision:
D13524114
fbshipit-source-id:
a890eb3b97c24952c361155d1432a801499f4ddd
Jerry Zhang [Thu, 20 Dec 2018 23:28:12 +0000 (15:28 -0800)]
Tensor construction codemod(ResizeLike) - 7/7 (#15087)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15087
Codemod generated with clangr shard mode, 25 files per diff,
motivation: https://github.com/pytorch/pytorch/pull/12407
Reviewed By: ezyang
Differential Revision:
D13419765
fbshipit-source-id:
34d695309a66723281429610a12544598c507d74
rory [Thu, 20 Dec 2018 23:18:39 +0000 (15:18 -0800)]
allow numpy-like boolean-list indexing in pytorch (#14932)
Summary:
Suggested fix to issue #6773; the fix allows numpy-like boolean-list indexing in PyTorch.
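A minimal sketch of the new indexing (illustrative):
```python
import torch

x = torch.tensor([10, 20, 30])
# numpy-like boolean-list indexing -> tensor([10, 30])
print(x[[True, False, True]])
```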
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14932
Differential Revision:
D13398795
Pulled By: ezyang
fbshipit-source-id:
67f8daf9829db2550ff76d2bde673be6dd2708cd
Teng Li [Thu, 20 Dec 2018 22:46:01 +0000 (14:46 -0800)]
Doc improvement on DDP (#15440)
Summary:
I noticed that some users don't even know we have this support. Adding it to the docs.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15440
Differential Revision:
D13531045
Pulled By: teng-li
fbshipit-source-id:
9757c400c0010608758c754df04e603b36035a10
Edward Yang [Thu, 20 Dec 2018 22:26:23 +0000 (14:26 -0800)]
Fix type annotation error. (#15448)
Summary:
According to mypy, the trailing -> None is mandatory.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15448
Differential Revision:
D13532179
Pulled By: ezyang
fbshipit-source-id:
e8972f8c9ada4657c518cd7bcd46e489ab8ddf5f
Johannes M Dieterich [Thu, 20 Dec 2018 22:26:14 +0000 (14:26 -0800)]
Add launch bounds needed for ROCm 2.0 (#15400)
Summary:
ROCm 2.0's compiler requires launch_bounds annotations if flat work group sizes are larger than the default of 256.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15400
Differential Revision:
D13531239
Pulled By: ezyang
fbshipit-source-id:
c0b40600a8c332823da6c7113c644d8dba424a9c
Zachary DeVito [Thu, 20 Dec 2018 22:26:06 +0000 (14:26 -0800)]
Support enough of closures to write autograd functions (#15411)
Summary:
This PR adds enough of the infra for supporting closures (inner script functions) in order to allow us to express symbolic gradients using them. We do not actually ever run graphs that contain these closures. The symbolic_script infrastructure just extracts them out of the original forward graph and turns them into discrete forward/backward pairs. This cuts down on the type annotations necessary to write forward/backward pairs and aligns closely with the "differentiator" function approach to expressing reverse-mode AD.
Example:
This code:
```
import torch
r = torch.jit.CompilationUnit(
'''
def mul_forward(self, other):
def backward(grad_output):
grad_self = (grad_output * other).sum_to_size(self.size())
grad_other = (grad_output * self).sum_to_size(other.size())
return grad_self, grad_other
return self * other, backward
''')
print(r.module.code)
```
Will produce this graph (pretty printed for clarity):
```
def mul_forward(self,
self: Tensor,
other: Tensor) -> Tuple[Tensor, Tuple[None, Tuple[Tensor, Tensor]]]:
backward = (self.__lambda, (other, self))
return (torch.mul(self, other), backward)
def __lambda(self,
context: Tuple[Tensor, Tensor],
grad_output: Tensor) -> Tuple[Tensor, Tensor]:
other, self, = context
grad_self = torch.sum_to_size(torch.mul(grad_output, other), torch.size(self))
grad_other = torch.sum_to_size(torch.mul(grad_output, self), torch.size(other))
return (grad_self, grad_other)
```
symbolic_script will then do some modifications to remove the unsupported prim::Function node, yielding:
```
def mul_forward(self,
self: Tensor,
other: Tensor) -> Tuple[Tensor, Tuple[None, Tuple[Tensor, Tensor]]]:
return (torch.mul(self, other), (other, self))
def backward(self,
context: Tuple[Tensor, Tensor],
grad_output: Tensor) -> Tuple[Tensor, Tensor]:
other, self, = context
grad_self = torch.sum_to_size(torch.mul(grad_output, other), torch.size(self))
grad_other = torch.sum_to_size(torch.mul(grad_output, self), torch.size(other))
return (grad_self, grad_other)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15411
Differential Revision:
D13523340
Pulled By: zdevito
fbshipit-source-id:
4d4a269460e595b16802c00ec55ae00e3e682d49
hbraun@nvidia.com [Thu, 20 Dec 2018 22:24:27 +0000 (14:24 -0800)]
Adding CUDA version for C2 operators generate proposals and nms (#13694)
Summary:
Related to issue #13684
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13694
Reviewed By: wat3rBro
Differential Revision:
D13017791
Pulled By: newstzpz
fbshipit-source-id:
4bdc58e474d8e1f6cd73a02bf51f91542a2b9d0b
Gao, Xiang [Thu, 20 Dec 2018 22:09:09 +0000 (14:09 -0800)]
Add at::one_hot (#15208)
Summary: Closes: https://github.com/pytorch/pytorch/issues/15060
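A minimal sketch, assuming the op is exposed in Python as `torch.nn.functional.one_hot`:
```python
import torch
import torch.nn.functional as F

print(F.one_hot(torch.tensor([0, 2, 1]), num_classes=3))
# tensor([[1, 0, 0],
#         [0, 0, 1],
#         [0, 1, 0]])
```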
Differential Revision:
D13528014
Pulled By: ezyang
fbshipit-source-id:
5a18689a4c5638d92f9390c91517f741e5396293
Fei Sun [Thu, 20 Dec 2018 21:24:01 +0000 (13:24 -0800)]
Extract arguments to its own file and pass arguments to ios apps (#15413)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15413
In order to pass arguments to the iOS app, we need to extract the arguments into their own file. Also, in the iOS app, do not use benchmark.json, which parses the arguments.
This is an incompatible change and needs a hot fix for the tests.
Reviewed By: llyfacebook
Differential Revision:
D13523240
fbshipit-source-id:
b559cc7f52d8f50ee206a7ff8d7b59292d855197
Spandan Tiwari [Thu, 20 Dec 2018 20:24:42 +0000 (12:24 -0800)]
Fixing ONNX export of logical ops to have correct output datatype (#15185)
Summary:
Currently the PyTorch ONNX exporter exports the logical ops (`lt`, `gt`, `le`, `ge`, `eq`) with the output type of the corresponding ONNX ops as `tensor(uint8)`. But the ONNX spec allows only `tensor(bool)`, which is why models that have these ops fail to load properly.
This issue is captured in https://github.com/pytorch/pytorch/issues/11339. Part of this issue, relating to the allowed input types, has been fixed in the ONNX spec by houseroad. This PR fixes the other part, pertaining to the output type.
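A minimal export sketch exercising one of the affected ops (illustrative):
```python
import torch

class Greater(torch.nn.Module):
    def forward(self, a, b):
        # With this fix, the exported ONNX op's output type is tensor(bool)
        return a > b

torch.onnx.export(Greater(), (torch.randn(4), torch.randn(4)), "greater.onnx")
```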
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15185
Differential Revision:
D13494873
Pulled By: houseroad
fbshipit-source-id:
069d2f956a5ae9bf0ac2540a32594a31b01adef8
David Riazati [Thu, 20 Dec 2018 20:20:42 +0000 (12:20 -0800)]
Miscellaneous small doc fixes (#15373)
Summary:
This PR makes some small changes for better consistency in our README and
CONTRIBUTING docs
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15373
Differential Revision:
D13512753
Pulled By: driazati
fbshipit-source-id:
44398ad1894eef521d5f5acb1d06acaad67728cf
Edward Yang [Thu, 20 Dec 2018 19:14:21 +0000 (11:14 -0800)]
Extend README for ATen/native/cpu (#15437)
Summary:
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15437
Differential Revision:
D13529436
Pulled By: ezyang
fbshipit-source-id:
2e2193d54ea7f7626fe7392e4d0c130c2f87a76f
Shen Li [Thu, 20 Dec 2018 18:21:02 +0000 (10:21 -0800)]
Implementing cuda kernel for tril_indices and triu_indices (#15203)
Summary:
Followup PR of #14904, and the stretch goal of #12653.
Directly calculate coordinates in the original tensor using column index in the result tensor. Every GPU thread takes care of a column (two numbers) in the output tensor.
The implementation detects and handles precision loss during calculating the square root of a `int64_t` variable, and supports tensors with up to `row * column = 2 ^ 59` numbers.
Algorithm details are described in [comments of TensorFactories.cu](https://github.com/pytorch/pytorch/blob/23ddb6f58a1c8a7a660a793f174cf014230176c6/aten/src/ATen/native/cuda/TensorFactories.cu#L109-L255).
zou3519
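For reference, minimal usage (pass `device='cuda'` to exercise the new kernel):
```python
import torch

# One (row, col) coordinate per column of the result, lower triangle only
print(torch.tril_indices(3, 3))
# tensor([[0, 1, 1, 2, 2, 2],
#         [0, 0, 1, 0, 1, 2]])
```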
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15203
Reviewed By: zou3519
Differential Revision:
D13517695
Pulled By: mrshenli
fbshipit-source-id:
86b305d22cac08c8962a3b0cf8e9e620b7ec33ea
Edward Yang [Thu, 20 Dec 2018 18:00:09 +0000 (10:00 -0800)]
Revert D13498974: [pytorch][PR] [jit] Add self to Python printer reserved words
Differential Revision:
D13498974
Original commit changeset:
488efb661476
fbshipit-source-id:
3b991bccf4cf2ffdafe70f145aff0ae2837e31f8