Jerry Zhang [Mon, 4 Feb 2019 19:09:19 +0000 (11:09 -0800)]
Tensor method rename ndim()->dim() - 2/3 (#16679)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16679
Codemod generated with clangr shard mode, 25 files per diff.
Reviewed By: houseroad
Differential Revision:
D13929450
fbshipit-source-id:
fcc222744c28b41f2cedffc0c2ef5d04aceaa5af
JerryShih [Mon, 4 Feb 2019 16:50:35 +0000 (08:50 -0800)]
Update the cmake build configuration for AppleClang compiler (#15820)
Summary:
This PR tries to merge https://github.com/pytorch/pytorch/pull/11563 again and fixes the linking error in https://github.com/pytorch/pytorch/pull/14837.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15820
Differential Revision:
D13942024
Pulled By: ezyang
fbshipit-source-id:
dc6d1e9c4b0f177914f3745665244272a03ce33c
Dmytro Dzhulgakov [Mon, 4 Feb 2019 06:11:43 +0000 (22:11 -0800)]
Fix build with cuda but no cudnn in caffe2 (#16701)
Summary:
Noticed while building on a machine without cuDNN present: the build succeeded, but it failed at runtime because some methods weren't bound.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16701
Differential Revision:
D13937247
Pulled By: dzhulgakov
fbshipit-source-id:
c81f05be7a9e64a1a8591036dcf8692c0ed4064e
Dmytro Dzhulgakov [Mon, 4 Feb 2019 05:28:48 +0000 (21:28 -0800)]
Fix ReservoirSampling zero-initialization reliance (#16702)
Summary:
The op was implicitly relying on pos_to_output being zero-initialized after extending. We're removing this behavior from the allocator, so it is fixed here. For some reason it wasn't spotted by junk-initialization, but it was reliably reproducible with standard malloc() when both the junk_fill and zero_fill flags are turned off.
cc kittipatv jerryzh168
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16702
Reviewed By: kittipatv
Differential Revision:
D13937257
Pulled By: dzhulgakov
fbshipit-source-id:
3ee520b05467108e6c3e64eb3e6c60589bdf3d87
Pieter Noordhuis [Sun, 3 Feb 2019 21:36:18 +0000 (13:36 -0800)]
Remove --without-parallel (#16704)
Summary:
See homebrew/homebrew-core@60c72ba9 and homebrew/homebrew-core#31510.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16704
Differential Revision:
D13938093
Pulled By: pietern
fbshipit-source-id:
8a70d462180257f96202a0373a86a273b524045c
Pieter Noordhuis [Sun, 3 Feb 2019 19:49:25 +0000 (11:49 -0800)]
Bump gloo (#16638)
Summary:
This bump includes:
* Memory leak fix where the Gloo transport would hold on to auxiliary
structures for send/recv pairs after they finished.
* Fix write-after-free from Gloo thread during stack unwinding on error.
* Removal of the PATENTS file.
Fixes #16144.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16638
Differential Revision:
D13937950
Pulled By: pietern
fbshipit-source-id:
3cfecaf13ee0f214c06681386557a4b1c3e1d6b9
vishwakftw [Sun, 3 Feb 2019 02:52:55 +0000 (18:52 -0800)]
Fix issue with scalars and __rpow__ (#16687)
Summary:
Changelog:
- Modify the `__rpow__` function in tensor.py to handle scalars (see the snippet below)
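For illustration, a minimal example of the `__rpow__` path (a Python scalar base with a tensor exponent); this sketch is not taken from the PR itself:
```python
import torch

t = torch.tensor([1., 2., 3.])
# 2 ** t dispatches to t.__rpow__(2), since the left operand is a plain scalar
print(2 ** t)                  # tensor([2., 4., 8.])
print(2 ** torch.tensor(3.))   # also works with a 0-dim (scalar) tensor exponent
```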
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16687
Differential Revision:
D13936720
Pulled By: soumith
fbshipit-source-id:
b0c8727968b04efbc6e7461807c812d962f03370
Sebastian Messmer [Sun, 3 Feb 2019 00:23:55 +0000 (16:23 -0800)]
Improve LeftRight (#16524)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16524
- Make it exception safe. When an exception happens during write, the old state is recovered.
- Use RAII instead of try/catch to increment counters in readers. This is more readable, and it also makes it work with reader closures that return void, which previously didn't work because the reader return value was stored on the stack.
- Assert that no reads or writes are happening when it's destructed, to avoid destruction race conditions
- Explain the algorithm in detail in comments
- Add test cases
Reviewed By: ezyang
Differential Revision:
D13866609
fbshipit-source-id:
01306a282a3f555569caa13d8041486f960d00e2
svcscm [Sat, 2 Feb 2019 18:53:43 +0000 (10:53 -0800)]
Updating submodules
Reviewed By: zpao
fbshipit-source-id:
e66e01e164d1784740fcb8bebc4817d2a8cd7903
svcscm [Sat, 2 Feb 2019 13:01:51 +0000 (05:01 -0800)]
Updating submodules
Reviewed By: zpao
fbshipit-source-id:
31a8d843ffba2d7405b4742ea553937a00dff216
James Reed [Sat, 2 Feb 2019 08:52:38 +0000 (00:52 -0800)]
fix conditional in mean workaround (#16686)
Summary:
When trying to get a test to pass, I was missing an exclamation mark. Now I just use a different function in the conditional instead.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16686
Differential Revision:
D13935182
Pulled By: jamesr66a
fbshipit-source-id:
7525a1a829276641dbafe06734f03f6202df6b22
Xiaomeng Yang [Sat, 2 Feb 2019 07:45:38 +0000 (23:45 -0800)]
Use macro for reduce on 2d blocks (#16344)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16344
Use macro for reduce on 2d blocks
i-am-not-moving-c2-to-c10
Reviewed By: houseroad
Differential Revision:
D13808988
fbshipit-source-id:
b68c0fb6079c1b6e203a072083aba7a95c202bc2
Sebastian Messmer [Sat, 2 Feb 2019 05:31:13 +0000 (21:31 -0800)]
Simplify layer_norm_op_test
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16570
Reviewed By: ezyang
Differential Revision:
D13883913
fbshipit-source-id:
7437d3cbc00c0de92bb01562c620cb658aa9f0d3
Hao Lu [Sat, 2 Feb 2019 04:55:50 +0000 (20:55 -0800)]
Make predictor base class
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16541
Reviewed By: ajtulloch
Differential Revision:
D13858261
fbshipit-source-id:
acbfdbea59bd20ab1cc7956ee0d8856d6faa8361
Yinghai Lu [Sat, 2 Feb 2019 02:45:44 +0000 (18:45 -0800)]
Tag model_id and onnxifi index in OnnxifiOp (#16648)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16648
We added onnxGraph sharing keyed on model id and net sequence number, but we forgot to supply this info to Onnxifi. As a result, we would only ever create ONE onnxGraph. This diff adds the necessary info to the OnnxifiOp to prevent this from happening.
Reviewed By: bertmaher, rdzhabarov
Differential Revision:
D13912356
fbshipit-source-id:
fe8982327287a35f32fe3b125d94b617d18c0ab5
svcscm [Sat, 2 Feb 2019 02:41:00 +0000 (18:41 -0800)]
Updating submodules
Reviewed By: zpao
fbshipit-source-id:
ed389204bc423d2d5f7a36e2d61c0f55fe0522e1
David Riazati [Sat, 2 Feb 2019 00:24:36 +0000 (16:24 -0800)]
Add @ignore annotation (#16055)
Summary:
Adds a decorator `torch.jit.ignore` for Python functions that tells the compiler to skip over these Python values, putting a `prim::Error` in their place which always throws an exception when run.
This lets you have Python-only code in your model in an explicit way, which is useful for debugging, and still be able to save/load the model.
Fixes #15815
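A hedged sketch of how the decorator is meant to be used (names are illustrative; per the description above, the ignored call is replaced by a node that errors when executed, while compilation and save/load still work):
```python
import torch

@torch.jit.ignore
def python_only_debug(x: torch.Tensor) -> None:
    # arbitrary Python-only code the compiler skips over
    print("shape:", x.shape)

@torch.jit.script
def forward(x):
    python_only_debug(x)
    return x + 1
```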
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16055
Differential Revision:
D13797286
Pulled By: driazati
fbshipit-source-id:
29d36776608ec101649a702952fc6ff3c27655b1
Hui Wu [Sat, 2 Feb 2019 00:19:39 +0000 (16:19 -0800)]
Add Winograd Conv method for CPU (#15196)
Summary:
Add the Winograd conv method. Users can select either direct conv or Winograd conv in the model file.
We closed the original PR https://github.com/pytorch/pytorch/pull/12154 and created this new one for easier rebasing.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15196
Differential Revision:
D13463721
Pulled By: yinghai
fbshipit-source-id:
c5cd5c8aa7622ae7e52aeabd3dbb8ffb99b9b4ee
Jesse Hellemn [Sat, 2 Feb 2019 00:15:14 +0000 (16:15 -0800)]
Increase timeout on anaconda logins
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16682
Differential Revision:
D13931438
Pulled By: pjh5
fbshipit-source-id:
9961e91a80d8c59ab6347e830b1da38533524dd2
Jerry Zhang [Sat, 2 Feb 2019 00:11:24 +0000 (16:11 -0800)]
Tensor method rename ndim()->dim() - 3/3 (#16680)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16680
Codemod generated with clangr shard mode, 25 files per diff.
Reviewed By: houseroad
Differential Revision:
D13929471
fbshipit-source-id:
b284ead11031f96fd8b6d96d2f29ffeb14207faa
Lu Fang [Fri, 1 Feb 2019 23:55:01 +0000 (15:55 -0800)]
fix the ONNX ci (#16674)
Summary:
~~Let's see whether this triggers and fixes the problem~~
Remove the expect files from test_verify.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16674
Reviewed By: zrphercule
Differential Revision:
D13930668
Pulled By: houseroad
fbshipit-source-id:
092157af07f475cf3809c95a4fe586e050c53b7e
Jesse Hellemn [Fri, 1 Feb 2019 23:17:12 +0000 (15:17 -0800)]
Allow USE_NINJA to be toggled by an env variable
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16665
Differential Revision:
D13930021
Pulled By: pjh5
fbshipit-source-id:
4b490f952a56e8561329ab8898be2bf779b46b9d
Michael Suo [Fri, 1 Feb 2019 22:36:02 +0000 (14:36 -0800)]
fix tracing using a dictionary as input (#16616)
Summary:
Previously this would fail with the error message:
```
ValueError: Auto nesting doesn't know how to process an input object of type dict. Accepted types: Tensors, or lists/tuples of them
```
Turns out we're not using the line that causes this error (or a side effect of that line), so removing it fixes the issue. Also cleaned up some related dead code (cc apaszke to make sure the code isn't useful in some way)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16616
Differential Revision:
D13908352
Pulled By: suo
fbshipit-source-id:
27094f1f4ea0af215b901f7ed3520e94fbc587b3
Sebastian Messmer [Fri, 1 Feb 2019 20:44:55 +0000 (12:44 -0800)]
Implement new c10 dispatcher (#16625)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16625
This is a squash of multiple PRs that refactored the old c10 dispatcher into a new one that follows the c10 dispatcher design doc.
It is now unboxed and follows the Stack semantics from JIT. It also uses the runtime JIT schema instead of its own compile time schema definitions.
Reviewed By: ezyang
Differential Revision:
D13907069
fbshipit-source-id:
edcc4806ccd21474fdfb5a98516219b1956db13d
Will Feng [Fri, 1 Feb 2019 20:42:28 +0000 (12:42 -0800)]
Add train() / eval() / is_training() to C++ ScriptModule API (#16044)
Summary:
This PR aims to fix https://discuss.pytorch.org/t/how-to-change-a-loaded-model-to-evaluation-mode-in-c/32330, by adding `train()` / `eval()` / `is_training()` to C++ ScriptModule API.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16044
Differential Revision:
D13857724
Pulled By: yf225
fbshipit-source-id:
16d3969fb5840ff7e66c7f72e800e6c75db8d2ff
Syed Tousif Ahmed [Fri, 1 Feb 2019 20:38:15 +0000 (12:38 -0800)]
Revert "Fixes selection of cuDNN algorithm (#15881)" (#16484)
Summary:
There is a regression in cudnnGet*_v7 that causes a slowdown in ResNet-50 training. I am opening a bug with the cuDNN team about this. This reverts commit 38374468832e307ca741901870914857a836dd5d.
ezyang :crying_cat_face:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16484
Differential Revision:
D13924755
Pulled By: soumith
fbshipit-source-id:
8c719345fc443f1289539bfae630eea9224ba4a5
Soumith Chintala [Fri, 1 Feb 2019 19:08:36 +0000 (11:08 -0800)]
Revert "Upgrade mkl-dnn to v0.17.3 to fix core dump issue (github#161… (#16660)
Summary:
…83) (#16653)"
This reverts commit
87ae1558a6c8c7c0693bfa995458d16239c484d7.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16660
Differential Revision:
D13924272
Pulled By: soumith
fbshipit-source-id:
79747d728adff1a9c32d8529846f0305052e57e8
Roy Li [Fri, 1 Feb 2019 18:55:00 +0000 (10:55 -0800)]
Expose backend extensions to python
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16582
Reviewed By: gchanan
Differential Revision:
D13887539
fbshipit-source-id:
8755babf2e3e849af974655f2f3a91740efe977e
Roy Li [Fri, 1 Feb 2019 18:55:00 +0000 (10:55 -0800)]
Introduce backend extensions (overriding operators on custom backends)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15153
Reviewed By: gchanan
Differential Revision:
D13445571
fbshipit-source-id:
62e2ebe0a6e81c4983b47cddb57ee5eb78e96708
Roy Li [Fri, 1 Feb 2019 18:54:59 +0000 (10:54 -0800)]
Dispatch factory functions on Type (#15093)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15093
Needed for backend extensions.
Reviewed By: ezyang
Differential Revision:
D13427897
fbshipit-source-id:
d0b34b0072e597ae599bd3bc25356831d7a18d6a
Edward Yang [Fri, 1 Feb 2019 17:28:33 +0000 (09:28 -0800)]
Only run Travis on master branch, not on export-DXXXXX branches. (#16628)
Summary:
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16628
Differential Revision:
D13922097
Pulled By: ezyang
fbshipit-source-id:
eb16d90cc61167af5edc0c4e361d7a807a3099e5
Ailing Zhang [Fri, 1 Feb 2019 16:53:45 +0000 (08:53 -0800)]
Ignore assert_git_not_dirty for xla tests (#16611)
Summary:
Testing, will restore the branch filter before landing.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16611
Differential Revision:
D13902234
Pulled By: ailzhang
fbshipit-source-id:
7fa4048b891645f5253c48b905fb9630e3079524
Asher Mancinelli [Fri, 1 Feb 2019 15:59:56 +0000 (07:59 -0800)]
Better bounds checks in ctcloss (#16269)
Summary:
Adds better bounds checks for target lengths in CTC loss, checks for integral types for target and prediction lengths, and adds tests for each, per #15946.
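For context, a typical CTC loss invocation where the integer length arguments are what the new checks validate (illustrative shapes, not taken from the added tests):
```python
import torch

ctc = torch.nn.CTCLoss()
T, N, C, S = 50, 16, 20, 30                     # time steps, batch, classes, max target length
log_probs = torch.randn(T, N, C).log_softmax(2)
targets = torch.randint(1, C, (N, S), dtype=torch.long)
input_lengths = torch.full((N,), T, dtype=torch.long)          # must be integral
target_lengths = torch.randint(10, S, (N,), dtype=torch.long)  # must be integral and within bounds
loss = ctc(log_probs, targets, input_lengths, target_lengths)
```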
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16269
Differential Revision:
D13847567
Pulled By: ezyang
fbshipit-source-id:
5d7a975565e02baf78fe388813a1d1ef56dfb212
Gu, Jinghui [Fri, 1 Feb 2019 15:13:38 +0000 (07:13 -0800)]
Upgrade mkl-dnn to v0.17.3 to fix core dump issue (github#16183) (#16653)
Summary:
Upgrade mkl-dnn to 0.17.3 to fix core dump issue in #16183
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16653
Differential Revision:
D13918278
Pulled By: soumith
fbshipit-source-id:
b9c09c50ef188b4099966216e155c9f3f2542276
peter.yeh@amd.com [Fri, 1 Feb 2019 08:50:24 +0000 (00:50 -0800)]
Skip dag_net_forking test on Rocm (#16639)
Summary:
- Skip the test due to flaky behavior on AMD/ROCm.
- The fix is expected in ROCm 2.2 (HSA runtime).
bddppq
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16639
Differential Revision:
D13915231
Pulled By: bddppq
fbshipit-source-id:
66e1d275836337170b15ceb9d60cfdd3242d4df8
Amy Yang [Fri, 1 Feb 2019 07:44:01 +0000 (23:44 -0800)]
add SingleLoadedNetSupplier (#16620)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16620
LogfiledbNetLoader loads all external input blobs into a workspace instance; we pack a shared pointer to this loaded workspace into the SingleLoadedNetSupplier. SingleLoadedNetSupplier will pass this workspace to BlackBoxPredictor to be executed. (D13891759 is a WIP of how it all comes together.)
Reviewed By: pjh5
Differential Revision:
D13901467
fbshipit-source-id:
20589f898922f5f1aec50be131dad17a8c38e9b2
Xiaomeng Yang [Fri, 1 Feb 2019 07:42:49 +0000 (23:42 -0800)]
Update conv_base to support empty batch (#16603)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16603
Update conv_base to support empty batch
Reviewed By: houseroad
Differential Revision:
D13894111
fbshipit-source-id:
fc4370ff16ba6046f374e77bd845d28e6af05ea3
James Malcolm [Fri, 1 Feb 2019 06:02:35 +0000 (22:02 -0800)]
Improving docs for MultiLabelSoftMarginLoss (#16644)
Summary:
Resolves #15863
Changed the documentation for MultiLabelSoftMarginLoss and MultiLabelMarginLoss to be more explicit about the `target` format.
More than happy to change the messaging based on discussion.
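For reference, a small example of the multi-hot `target` format the updated docs describe (illustrative only, not the wording added in the PR):
```python
import torch

loss_fn = torch.nn.MultiLabelSoftMarginLoss()
scores = torch.randn(3, 5)                      # (N, C) raw scores
# target has the same (N, C) shape, with 1 for each class present in a sample
target = torch.tensor([[1., 0., 1., 0., 0.],
                       [0., 1., 0., 0., 1.],
                       [0., 0., 0., 1., 0.]])
print(loss_fn(scores, target))
```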
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16644
Differential Revision:
D13912395
Pulled By: soumith
fbshipit-source-id:
24a3c214c5f6f9d043e25b13ac758c1c1211b641
Zachary DeVito [Fri, 1 Feb 2019 04:52:30 +0000 (20:52 -0800)]
respect MAX_JOBS (#16641)
Summary:
We inadvertently switched the OSX build over to ninja on CI. It then fails to respect MAX_JOBS and hits the same sccache deadlock bug; this makes the ninja build respect MAX_JOBS.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16641
Differential Revision:
D13910751
Pulled By: zdevito
fbshipit-source-id:
61bec500539519b019b74421a13cd87fc1d86090
James Reed [Fri, 1 Feb 2019 04:46:57 +0000 (20:46 -0800)]
Workaround unvectorized mean implementation (#16618)
Summary:
Workaround for https://github.com/pytorch/pytorch/issues/16617
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16618
Differential Revision:
D13904276
Pulled By: jamesr66a
fbshipit-source-id:
f8b5ea4c5f12dbc405123c9080c55b342c95bcd1
svcscm [Fri, 1 Feb 2019 03:34:05 +0000 (19:34 -0800)]
Updating submodules
Reviewed By: zpao
fbshipit-source-id:
4d94eb18d4da58541a96c9f9c2ecc9746f779933
Edward Yang [Fri, 1 Feb 2019 01:34:13 +0000 (17:34 -0800)]
Add compare_exchange_deleter to DataPtr/UniqueVoidPtr (#16513)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16513
compare_exchange_deleter makes it easier to replace a deleter on a DataPtr with a new one, without requiring allocating another closure to hold the old deleter. See the comment for details.
This diff was originally landed as part of D13762540 (#16226), but that diff is being reverted in D13863610 (#16510).
Reviewed By: smessmer
Differential Revision:
D13864245
fbshipit-source-id:
56eda4748238dd3a5130ba6434fda463fe7c690e
Bram Wasti [Fri, 1 Feb 2019 01:25:16 +0000 (17:25 -0800)]
Shim caffe2 GetRepeatedArgument helper for use with Ivalue (#16519)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16519
GetRepeatedArguments is needed for some ops
Reviewed By: dzhulgakov
Differential Revision:
D13864293
fbshipit-source-id:
a39255cd391c28acd75a6f0e81d558542417e032
SsnL [Fri, 1 Feb 2019 00:09:41 +0000 (16:09 -0800)]
Add torch.backends.openmp.is_available(); fix some cmake messages (#16425)
Summary:
1. Add `torch.backends.openmp.is_available()` (see the usage sketch after this list)
2. Improve various `cmake` outputs
3. Fix LDFLAGS not respected by `caffe2_pybind11_state_*` targets
4. Fix `MKL` warning message, and QUIET flag.
5. Fix various typos
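A minimal usage sketch of the query added in item 1 (illustrative only):
```python
import torch

# True if this PyTorch build was compiled with OpenMP support
print(torch.backends.openmp.is_available())
```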
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16425
Differential Revision:
D13903395
Pulled By: soumith
fbshipit-source-id:
d15c5d46f53e1ff1c27fca2887b9d23d0bd85b4d
Xiang Gao [Fri, 1 Feb 2019 00:00:02 +0000 (16:00 -0800)]
Move outplace ops to ATen (#12413)
Summary:
So that things like the snippet below can be compiled by the JIT and made available in the C++ API:
```python
import torch

@torch.jit.script
def f(x, y, z):
    x.index_add(0, y, z)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12413
Differential Revision:
D13899948
Pulled By: suo
fbshipit-source-id:
b0006b4bee2d1085c813733e1037e2dcde4ce626
Jesse Hellemn [Thu, 31 Jan 2019 23:55:02 +0000 (15:55 -0800)]
Grant credentials to s3 html update job
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16631
Differential Revision:
D13908331
Pulled By: pjh5
fbshipit-source-id:
846a4f933d947f7217b856bd79ff85b7f97288a8
Jerry Zhang [Thu, 31 Jan 2019 23:42:37 +0000 (15:42 -0800)]
fix scope related naming issue in build_quant_conv_bn_relu, and also format function signature
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14885
Reviewed By: harouwu
Differential Revision:
D13374077
fbshipit-source-id:
5082c4ea0d2fdc197243b022b9b489f38b04c8e9
Dmytro Dzhulgakov [Thu, 31 Jan 2019 23:39:22 +0000 (15:39 -0800)]
Disable layernorm_c10 test for now (#16630)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16630
Two PRs landed concurrently - enforcing tensor constraints and refactoring c10. Since it's not prod code, disable the test and I'll let Sebastian fix it properly.
Reviewed By: ezyang
Differential Revision:
D13908117
fbshipit-source-id:
381c5626078b794afa1fc7a95cb1ea529650424c
Elias Ellison [Thu, 31 Jan 2019 23:37:52 +0000 (15:37 -0800)]
Remove constant propagation expect files (#16348)
Summary:
Remove constant prop expect files, and express graph conditions via Python bindings.
First diff in a larger effort to remove expect files.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16348
Differential Revision:
D13906929
Pulled By: eellison
fbshipit-source-id:
7963caa3ccbc7bfc0006a160c952aa173d1ce633
James Reed [Thu, 31 Jan 2019 22:13:45 +0000 (14:13 -0800)]
Fix a lot of C++ build warnings (#16411)
Summary:
I went through my build log and did what I thought were reasonable fixes to all the C++ compilation warnings that came up
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16411
Differential Revision:
D13901006
Pulled By: jamesr66a
fbshipit-source-id:
02df4e3e5a5c8dd9e69ac9f065cd3f2a80645033
David Riazati [Thu, 31 Jan 2019 22:06:44 +0000 (14:06 -0800)]
Add immutable dict support (#16208)
Summary:
This PR adds basic support (creation and indexing) for immutable dictionaries in Script. This includes Python/string frontend support and an `IValue::GenericDict` type backed by a `std::unordered_map`. Only `str`, `int`, and `float` are supported as keys; any type can be a value. The structure is pretty similar to list.
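A small illustration of the creation and indexing support described above (a hedged sketch; string keys are among the supported key types):
```python
import torch

@torch.jit.script
def pick(x):
    # dict creation and indexing inside TorchScript
    d = {"a": x, "b": x + 1}
    return d["b"]

print(pick(torch.zeros(2)))  # tensor([1., 1.])
```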
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16208
Differential Revision:
D13881686
Pulled By: driazati
fbshipit-source-id:
29ce9835b953c3456f57bcc2bbdf7fe0cbf941c0
Jithun Nair [Thu, 31 Jan 2019 22:00:00 +0000 (14:00 -0800)]
Make the miopen handle part of ConvolutionParams (#16613)
Summary:
so that it's included in the hashed key that decides whether to call Find or not. This is required to ensure that Find is run for all devices
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16613
Differential Revision:
D13901769
Pulled By: bddppq
fbshipit-source-id:
7d29ea9e40231cd4eef80847afa1307efeb0945c
Dmytro Dzhulgakov [Thu, 31 Jan 2019 21:30:58 +0000 (13:30 -0800)]
Back out "Revert
D13596031: Improve c2-aten tensor interop and add proper testing" (#16514)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16514
Original commit changeset:
dc371697f14b
Relanding https://github.com/pytorch/pytorch/pull/15860 - the problem was that layer_norm was using at::empty which is not yet on mobile
Reviewed By: ezyang
Differential Revision:
D13861480
fbshipit-source-id:
e2116da32bc117175c96b9151b1beba9b31eff36
Zachary DeVito [Thu, 31 Jan 2019 21:11:35 +0000 (13:11 -0800)]
use distutils to discover msvc compiler paths (#16540)
Summary:
This simplifies the process of building on Windows, since users no longer have to find and run the vcvarsall.bat file.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16540
Differential Revision:
D13893596
Pulled By: zdevito
fbshipit-source-id:
79b7ad55c3251b3f573fd8464931138f8a52dd1d
Bram Wasti [Thu, 31 Jan 2019 20:41:55 +0000 (12:41 -0800)]
Fix SIOF in torch using caffe2 registry (#16473)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16473
This resolves the issues associated with caffe2 initialization (specifically the REGISTER_FUNCTION_SCHEMA_OPERATOR calls) being run after Torch's static op registration calls.
The fix employs a Meyers singleton wrapped by the constructor of a type. Everything is placed inside a macro to make it easier for users to use.
Reviewed By: smessmer
Differential Revision:
D13854306
fbshipit-source-id:
ecf60861f229532826fae254974e9af4389055df
Bram Wasti [Thu, 31 Jan 2019 20:41:55 +0000 (12:41 -0800)]
Swap Caffe2 operator constructor to pass arguments by value (#16576)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16576
Allows instantiation of an operator with arguments passed by move rather than explicit copies, per Sebastian's suggestion.
Reviewed By: smessmer
Differential Revision:
D13882416
fbshipit-source-id:
bc8d50e73f5a1ae87155b0cf96799b8573a7a8fa
David Riazati [Thu, 31 Jan 2019 19:58:56 +0000 (11:58 -0800)]
Allow ScriptModule(optimize=False) when jit disabled (#16297)
Summary:
Fixes #16285
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16297
Differential Revision:
D13797276
Pulled By: driazati
fbshipit-source-id:
3a93500d4233cfbb8f5af7feba43f6ff4c3d22c7
Thomas Viehmann [Thu, 31 Jan 2019 19:57:56 +0000 (11:57 -0800)]
Get more fusion after autodiff uses SumToSize (#14957)
Summary:
Here is a fresh attempt at getting some fusion back in autodiff-generated graphs in the presence of SumToSize.
- The sum to size operator is now `aten::_grad_sum_to_size` to allow symbolic script differentiation (and that in turn would need to use this in place of sum_to_size to signal that it strictly operates on gradients). This is also used in the autodiff code, replacing `prim::SumToSize`.
- `_grad_sum_to_size` is now fusable, `cat`s - which are fused afterwards thanks to Adam's simplification of the code - are only fused if there is no `_grad_sum_to_size` in the fusion group.
- I push the `_grad_sum_to_size` out of the fusion group when compiling and record the desired summations in the KernelSpec. The reasoning is the following:
- As autodiff is a repeated application of the chain rule, we always have the pattern `grad_in = mm(A, grad_out)`, with A often diagonal for cases interesting to the fuser, whence it becomes `grad_in = a * grad_out` (a pointwise multiplication). We know that only `grad_out` may have AutodiffGradSumToSize applied, so we can commute AutodiffGradSumToSize with the `mul` (`div` and `neg` are of similar origin).
- For `type_as` the gradient might just be giving the type, so skip SumToSize.
- `add` (which was inserted as `prim::AutogradAdd`) adds gradients when the forward used the same value in several places. This is non-broadcasting, so we know that the two arguments have the same sizes as inputs - which is good, since we don't have to do bookkeeping for the two parts.
Details:
- During fusion, the Tensor arguments are always kept as the first parameters of the fusion group to accommodate indexing assumptions in the fuser.
- The rewriting of the fusion group to record the necessary output transformation and eliminate `_grad_sum_to_size` from the fusion group is now in the fuser compile step.
- In the execution step, the arguments are split into Tensor / Non-Tensor and the non-tensor args are mostly forgotten about except for doing `sum_to_size` at the end. This would want to be improved if/when we fuse nonconstant scalar arguments.
- In a number of places in the fuser, the non-Tensor arguments to the fusion group needed to be ignored.
Thank you, apaszke for the insightful discussion. All bad ideas and errors are my own.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14957
Differential Revision:
D13888173
Pulled By: zou3519
fbshipit-source-id:
071992c876e8b845f2b3e6329ae03a835d39a0ea
peter [Thu, 31 Jan 2019 19:19:31 +0000 (11:19 -0800)]
Enable USE_NINJA in build_pytorch_libs.py if it is in PATH (#16545)
Summary:
It is required to fix the nightly conda builds.
cc zdevito ezyang soumith
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16545
Differential Revision:
D13900610
Pulled By: soumith
fbshipit-source-id:
676f903a082f6f083e07245a1df38175bb82b2f7
sebftw [Thu, 31 Jan 2019 19:14:26 +0000 (11:14 -0800)]
Replaced "from_numpy" with "as_tensor" in docs. (#16587)
Summary:
In the warning box on https://pytorch.org/docs/stable/tensors.html#torch.Tensor.new_tensor it says:
> new_tensor() always copies data. [...] If you have a numpy array and want to avoid a copy, use **torch.from_numpy()**.
But then further up the page we have another warning box with the message:
> torch.tensor() always copies data. [...] If you have a numpy array and want to avoid a copy, use **torch.as_tensor()**.
Now I believe this is just a small oversight, since from_numpy is to be deprecated in favour of as_tensor. See for example https://github.com/pytorch/pytorch/issues/6885 and https://github.com/pytorch/pytorch/issues/8611. I suggest to just use **torch.as_tensor()** in both of the warning boxes.
cc gchanan
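For illustration, the copy/no-copy distinction the warning boxes describe (a generic sketch, not taken from the docs):
```python
import numpy as np
import torch

a = np.ones(3)
shared = torch.as_tensor(a)   # shares memory with the NumPy array (no copy)
copied = torch.tensor(a)      # always copies
a[0] = 5
print(shared[0].item())  # 5.0 -- reflects the in-place change to the array
print(copied[0].item())  # 1.0 -- unaffected, it owns its own storage
```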
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16587
Differential Revision:
D13897038
Pulled By: gchanan
fbshipit-source-id:
2eb3cd47d2c0b5bf4350f980de3be9fe59b4a846
bhushan [Thu, 31 Jan 2019 19:12:21 +0000 (11:12 -0800)]
printing correct dimension while indexing (#16495)
Summary:
applySelect modifies the tensor and removes the topmost dimension, which makes it complicated to track the position using dim alone, so another parameter, real_dim, is needed to signify the original dimension.
Fixes #16192
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16495
Differential Revision:
D13897182
Pulled By: gchanan
fbshipit-source-id:
105581dbbff6b431cc8e2539a07e0058161e53a1
Brennan Vincent [Thu, 31 Jan 2019 18:41:17 +0000 (10:41 -0800)]
remove unused capture (#16526)
Summary:
We don't use this in the lambda body anymore. Remove it to fix a warning.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16526
Differential Revision:
D13867043
Pulled By: umanwizard
fbshipit-source-id:
4c9a9d194fdfcb63fde16823517d2c6c8e2ae93d
Michael Suo [Thu, 31 Jan 2019 18:25:40 +0000 (10:25 -0800)]
split up AliasTracker into a separate file (#16588)
Summary:
This just moves things around to make AliasTracker independently testable and keep things a little more separate. Follow-on PRs will change the interfaces of AliasDb and AliasTracker to be more clearly distinct.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16588
Differential Revision:
D13891894
Pulled By: suo
fbshipit-source-id:
c5b590b5fdd462afefe743e499034068bf35784a
Zachary DeVito [Thu, 31 Jan 2019 18:06:26 +0000 (10:06 -0800)]
Access profiler from cpp (#16580)
Summary:
jamesr66a
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16580
Differential Revision:
D13891299
Pulled By: zdevito
fbshipit-source-id:
83b335bf3231a9ab30e9318f2bce6d741ba5ffae
SsnL [Thu, 31 Jan 2019 14:53:57 +0000 (06:53 -0800)]
Fix cuFFT plan cache size on CUDA 10 cannot be set to > 4096 (#16384)
Summary:
Doc doesn't need to be changed. Also clarifies two inaccurate comments.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16384
Differential Revision:
D13886637
Pulled By: soumith
fbshipit-source-id:
227385008211a6f3ad9135c54fd2d3754cc9daaf
Jesse Hellemn [Thu, 31 Jan 2019 07:36:32 +0000 (23:36 -0800)]
Clean up binary jobs in CircleCI (#16511)
Summary:
- Add libtorch upload jobs
- Unify checkout and env code for binary jobs (san binary test jobs)
- Compress variables passed into binary jobs
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16511
Differential Revision:
D13893714
Pulled By: pjh5
fbshipit-source-id:
b8bd72e1397dec569a8ec3e859e319178c7c6f8b
svcscm [Thu, 31 Jan 2019 07:29:16 +0000 (23:29 -0800)]
Updating submodules
Reviewed By: zpao
fbshipit-source-id:
36c332beab1aaccb281d5ee07952d399056b7f8c
Jongsoo Park [Thu, 31 Jan 2019 06:46:07 +0000 (22:46 -0800)]
more careful use of inline/template function in perfkernels (#15388)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15388
This is another pass to make the perfkernels code safer from illegal instruction errors.
Removed the dependency on c10/util/Logging.h.
We err on the safer side at the expense of some verbosity.
Reviewed By: dskhudia
Differential Revision:
D13502902
fbshipit-source-id:
4f833115df885c5b4f8c1ca83b9badea1553f944
svcscm [Thu, 31 Jan 2019 05:08:38 +0000 (21:08 -0800)]
Updating submodules
Reviewed By: zpao
fbshipit-source-id:
a0a2a635f86ef3bebfb4ca1a36f7ec9c2b09d7bb
Jerry Zhang [Thu, 31 Jan 2019 02:26:48 +0000 (18:26 -0800)]
DeviceScope support for CUDA and testing (#15357)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15357
Supporting device option in FQ bn folding for ITER related ops
Reviewed By: wat3rBro
Differential Revision:
D13370259
fbshipit-source-id:
4324c2716dfa69ddedc661ae2b1ad34c2f6fc4b6
Antoine Busque [Thu, 31 Jan 2019 02:11:04 +0000 (18:11 -0800)]
Fix: avoid race condition on model zoo directory creation (#16578)
Summary:
The current implementation of the `torch.utils.model_zoo.load_url`
function is prone to a race condition when creating the directory in
which it saves the loaded models, since it checks whether the
directory exists and then creates it in two separate steps. The
directory can be created after the check was made but before we
attempt to create the directory, resulting in an unhandled exception.
Instead, try to create the directory directly, and do nothing if it
already exists.
Note: for Python versions ≥ 3.2, we could simply use the
`exist_ok=True` flag on `os.makedirs`, but this is unavailable in
Python 2.7.
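A minimal sketch of the create-then-ignore-EEXIST pattern described above (the helper name is hypothetical, not the actual code in torch.utils.model_zoo):
```python
import errno
import os

def ensure_dir(path):
    # Create the directory directly instead of check-then-create, so a
    # concurrent creator cannot race us; Python 2.7-compatible (on
    # Python >= 3.2, os.makedirs(path, exist_ok=True) would suffice).
    try:
        os.makedirs(path)
    except OSError as e:
        if e.errno != errno.EEXIST:
            raise
```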
Signed-off-by: Antoine Busque <antoine.busque@elementai.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16578
Differential Revision:
D13886470
Pulled By: soumith
fbshipit-source-id:
88815c8a65eec96caea32d6e9a7f83802502fdb9
Iurii Zdebskyi [Thu, 31 Jan 2019 02:09:56 +0000 (18:09 -0800)]
Remove redundant declarations (#16463)
Summary:
As there are no checks that all the functions are actually being used, we can end up with stale entries. This diff removes unused entries from Declarations.cwrap
Testing:
Successful build via "python setup.py develop"
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16463
Differential Revision:
D13885815
Pulled By: izdeby
fbshipit-source-id:
4e35c2ac9196167af74dff3d4f971210721285f8
Michael Suo [Thu, 31 Jan 2019 01:48:59 +0000 (17:48 -0800)]
begin splitting up cpp tests (#16536)
Summary:
Start splitting up these tests so we don't have a massive test file. Doesn't change how you run them, since `gtest.cpp` and `no-gtest.cpp` will still collect everything.
Renamed `tests.h` to `test_misc.h` to vaguely discourage people from adding yet more stuff to it.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16536
Reviewed By: zdevito, eellison
Differential Revision:
D13882215
Pulled By: suo
fbshipit-source-id:
61cf97f3c2c50703dcf6a3a34da01415ecb7e7d6
Christian Puhrsch [Thu, 31 Jan 2019 01:19:20 +0000 (17:19 -0800)]
Use dispatch tensor for device_guard instead of first Tensor argument
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16579
Differential Revision:
D13886593
Pulled By: cpuhrsch
fbshipit-source-id:
0722ec61da13c2541f7de51bf5c1ecfb9a12fad2
Owen Anderson [Thu, 31 Jan 2019 01:04:02 +0000 (17:04 -0800)]
Eliminate PYCMD in favor of PYTHON_EXECUTABLE in CMake.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16522
Differential Revision:
D13867376
Pulled By: resistor
fbshipit-source-id:
6bce68facea83c5161a31fcdfafe08827999eb2b
ParticularlyPythonicBS [Thu, 31 Jan 2019 00:43:51 +0000 (16:43 -0800)]
added example to clear ambiguity in torch.Tensor.view (#16563)
Summary:
Added example to the documentation of [torch.Tensor.view](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view) to avoid the misunderstanding referenced in issue [#16560](https://github.com/pytorch/pytorch/issues/16560)
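For illustration, the aliasing behavior such an example needs to make clear (a generic sketch, not necessarily the example added to the docs):
```python
import torch

x = torch.arange(6)
y = x.view(2, 3)    # same underlying data, new shape
y[0, 0] = 100
print(x[0].item())  # 100 -- the view and the original share the same storage
```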
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16563
Differential Revision:
D13885008
Pulled By: soumith
fbshipit-source-id:
b7e7fbea1f16124bc4e679ae9c50ab619e1f043d
Gregory Chanan [Thu, 31 Jan 2019 00:01:51 +0000 (16:01 -0800)]
Fix uninitialized data and broken broadcasting with sparse.mm and sparse.addmm (#16572)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/16543.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16572
Differential Revision:
D13884235
Pulled By: gchanan
fbshipit-source-id:
308916051364d72f72ec56f0495c6c7c09845131
SsnL [Wed, 30 Jan 2019 23:05:38 +0000 (15:05 -0800)]
add new build files to gitignore; test that build does not leave git repo checkout dirty (#16565)
Summary:
These appear when I run
```
MACOSX_DEPLOYMENT_TARGET=10.13 CC=clang CXX=clang++ NO_CUDA=1 NO_DISTRIBUTED=1 BUILD_CAFFE2_OPS=0 DEBUG=1 python3 setup.py develop --cmake
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16565
Differential Revision:
D13885790
Pulled By: ezyang
fbshipit-source-id:
af0e028d7fa7832a945aaee4e241ceb5418f4ec8
Edward Yang [Wed, 30 Jan 2019 21:51:14 +0000 (13:51 -0800)]
Move Deprecated.h to c10
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16504
Reviewed By: smessmer
Differential Revision:
D13860570
fbshipit-source-id:
4742dc30c78d49b0f655b4e9536f51b36a595636
Elias Ellison [Wed, 30 Jan 2019 21:48:36 +0000 (13:48 -0800)]
Allow generic containers as module inputs (#16482)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/16326
Previously we didn't handle module inputs which included Generic Lists. When checking whether a generic list is a subvalue of the input arg type, I currently recurse on every element of the list. This shouldn't be too slow since the innermost list will be specialized and we won't have to check its elements.
E.g. Tensor[][] -> GenericList[TensorList].
The error message could be improved, but extracting the complete type of nested lists would have to deal with unifying types across lists / empty lists & typevars, so I'm going to save that for a follow-up PR.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16482
Differential Revision:
D13882582
Pulled By: eellison
fbshipit-source-id:
3609bc572f0ee9ebf20a77ea5ebc8fa3b165e24b
Erik Brinkman [Wed, 30 Jan 2019 21:30:35 +0000 (13:30 -0800)]
Explicit pdist captures (#16286)
Summary:
Per discussion with cpuhrsch
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16286
Differential Revision:
D13883001
Pulled By: erikbrinkman
fbshipit-source-id:
86f35d35fde5db67e3fbb09abc418da0116c9aac
Mikhail Zolotukhin [Wed, 30 Jan 2019 21:30:30 +0000 (13:30 -0800)]
Include ATen/core/functional.h directly instead of torch/csrc/utils/functional.h. (#16377)
Summary:
One more shim removed.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16377
Differential Revision:
D13821816
Pulled By: ZolotukhinM
fbshipit-source-id:
007f014d404de51841437db7eef28367a2f6e46b
Jesse Hellemn [Wed, 30 Jan 2019 21:29:33 +0000 (13:29 -0800)]
Remove --no-update-dependencies (#16575)
Summary:
Absolutely no idea why this is needed. This should be a valid argument.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16575
Differential Revision:
D13884796
Pulled By: pjh5
fbshipit-source-id:
6011e721e2870499f6b5e627d5ad00ece08b530b
Edward Yang [Wed, 30 Jan 2019 21:21:02 +0000 (13:21 -0800)]
Update PyTorch DockerVersion to 285. (#16507)
Summary:
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16507
Differential Revision:
D13884588
Pulled By: ezyang
fbshipit-source-id:
b7e22daa15874f9a226195d4749b4f9f827d7c1e
Tim Khatkevich [Wed, 30 Jan 2019 21:15:59 +0000 (13:15 -0800)]
Support fallback for more operators (#16566)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16566
it's a follow-up to https://github.com/pytorch/pytorch/pull/16456
Reviewed By: yinghai
Differential Revision:
D13881462
fbshipit-source-id:
eff063580ac8f622477417ed4b25320299451811
Lu Fang [Wed, 30 Jan 2019 21:12:42 +0000 (13:12 -0800)]
fix the linter
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16567
Differential Revision:
D13882166
Pulled By: houseroad
fbshipit-source-id:
daf760f51e4fce376ca09421900405970d00c4d2
Sebastian Messmer [Wed, 30 Jan 2019 21:12:33 +0000 (13:12 -0800)]
Add a test case calling caffe2 layer_norm from caffe2 executor but through the c10 dispatcher
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16283
Reviewed By: ezyang
Differential Revision:
D13792591
fbshipit-source-id:
9c190649e38e8706549102b2e136ceaf508eb37f
Jerry Zhang [Wed, 30 Jan 2019 20:37:55 +0000 (12:37 -0800)]
Back out "[pt1][tensor] Change ConvPoolOp<Context>::SetOutputSize to ConvPoolOp<Context>::GetOutputSize" (#16516)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16516
Original commit changeset:
64abce3dbaed
Reviewed By: dzhulgakov
Differential Revision:
D13863715
fbshipit-source-id:
f1923fdca4a1a82768d9c280a8493ff15a7eb2ba
zrphercule [Wed, 30 Jan 2019 19:23:48 +0000 (11:23 -0800)]
Remove the debugging info of pytorch=>onnx coverage script (#16538)
Summary:
Remove the debug info.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16538
Reviewed By: houseroad
Differential Revision:
D13872068
Pulled By: zrphercule
fbshipit-source-id:
7572668d0048c37f6b6029a48e5ae4b8b21823f7
Jacie Fan [Wed, 30 Jan 2019 19:20:44 +0000 (11:20 -0800)]
CUDA histogram implementation
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15842
Reviewed By: zou3519
Differential Revision:
D13868982
Pulled By: jaciefan
fbshipit-source-id:
bce81dc121c4538d204047506f8f14d0b4d8f905
Michael Suo [Wed, 30 Jan 2019 19:06:32 +0000 (11:06 -0800)]
Use a points-to graph for alias analysis (#16386)
Summary:
This PR changes the way we store aliasing information from a "set" approach to a "points-to" analysis. Set-based approaches lose information in ways that make it difficult to do "live" updates to the alias DB as one is mutating the graph.
The tradeoff is that simple queries get more expensive, since they require traversing the points-to graph to answer most questions. In practice, this is unlikely to be that costly since we don't have massive aliasing chains, but we could create an approximation/caching layer if this becomes a problem.
My rough plan is:
1. This PR, switching to a points-to graph
2. Make it "live": analyzing a node should record all the edges the node added, so that we can rollback when the node is destroyed.
3. Reduce wildcard scope: we can make the wildcard a special vertex that points to anything that we're not "sure" about; namely, things that have been put inside lists, or graph inputs.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16386
Differential Revision:
D13855117
Pulled By: suo
fbshipit-source-id:
f009f58143173c275501624eb105d07ab60fe5e1
Lara Haidar-Ahmad [Wed, 30 Jan 2019 18:57:08 +0000 (10:57 -0800)]
ONNX Export Flatten operator
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16240
Reviewed By: bddppq
Differential Revision:
D13800025
Pulled By: houseroad
fbshipit-source-id:
ae4c5e42026477b28ffd44eda2438d93936ea510
Edward Yang [Wed, 30 Jan 2019 18:49:22 +0000 (10:49 -0800)]
Revert
D13880053: [pytorch][PR] add new build files to gitignore; test that build doesn't leave repo dirty
Differential Revision:
D13880053
Original commit changeset:
0171f42438ef
fbshipit-source-id:
a734f8704c1cbe16434c672289c505b19b2b490a
vishwakftw [Wed, 30 Jan 2019 18:44:59 +0000 (10:44 -0800)]
Allow list and tuples to be passed as output_size to max_unpool1d (#16489)
Summary:
Changelog:
- Modify concatenation of [1] to a tuple by using cases for list and non-list types (see the sketch below).
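A small sketch of passing output_size as a plain list (sizes here are illustrative, not from the PR's tests):
```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 4)
pooled, indices = F.max_pool1d(x, kernel_size=2, return_indices=True)
# output_size may be given as a list or tuple instead of a torch.Size
unpooled = F.max_unpool1d(pooled, indices, kernel_size=2, output_size=[1, 1, 4])
print(unpooled.shape)  # torch.Size([1, 1, 4])
```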
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16489
Differential Revision:
D13875838
Pulled By: soumith
fbshipit-source-id:
fade65cc47385986b773b9bde9b4601ab93fe1cf
Lu Fang [Wed, 30 Jan 2019 17:30:22 +0000 (09:30 -0800)]
Fix the flake8 linter
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16549
Reviewed By: bddppq
Differential Revision:
D13877435
Pulled By: houseroad
fbshipit-source-id:
dbe575ba3f6dd30d27ac6aa5eec2eea025063540
Ailing Zhang [Wed, 30 Jan 2019 17:27:06 +0000 (09:27 -0800)]
add example multiprocess code (#16345)
Summary: fixes #16141
Differential Revision:
D13868539
Pulled By: ailzhang
fbshipit-source-id:
03e858d0aff7804c5e9e03a8666f42fd12836ef2
Yinghai Lu [Wed, 30 Jan 2019 16:55:37 +0000 (08:55 -0800)]
Support int64_t shape data for ideep reshape op
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16533
Reviewed By: jerryzh168
Differential Revision:
D13867402
fbshipit-source-id:
ff53a851f142ef915ad69da3868bb3aab4d48987
SsnL [Wed, 30 Jan 2019 16:38:49 +0000 (08:38 -0800)]
add new build files to gitignore; test that build doesn't leave repo dirty
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16441
Differential Revision:
D13880053
Pulled By: ezyang
fbshipit-source-id:
0171f42438efdd651b6af22e521b80e85b12681c
Tim Khatkevich [Wed, 30 Jan 2019 11:43:48 +0000 (03:43 -0800)]
Fallback support for more operators (#16456)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16456
Adding fallbacks for more operators and fixing ifndef for expand_op.h
Reviewed By: yinghai
Differential Revision:
D13845382
fbshipit-source-id:
b7c5b7f7f176707b9ddffade139562a8085967ed
Lu Fang [Wed, 30 Jan 2019 09:13:16 +0000 (01:13 -0800)]
Fix the dropout onnx symbolic, and ensure all exported models in test_operators.py are eval mode (#16547)
Summary:
In eval mode, skip the dropout operator in the ONNX exporter.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16547
Reviewed By: houseroad
Differential Revision:
D13877136
Pulled By: dzhulgakov
fbshipit-source-id:
c366da156f83677bcf4989b79166aae5b6c36125