platform/upstream/pytorch.git
5 years ago: Fix lint errors in test_autograd
Edward Yang [Tue, 12 Mar 2019 15:46:52 +0000 (08:46 -0700)]
Fix lint errors in test_autograd

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17812

Reviewed By: eellison

Differential Revision: D14388897

fbshipit-source-id: 6e2671805dc8d57af68eb0a0cd6ccb24d9db45e2

5 years ago: Added a few extra python bindings to help with walking the IR graph from Python ...
Andras Tantos [Tue, 12 Mar 2019 15:46:16 +0000 (08:46 -0700)]
Added a few extra python bindings to help with walking the IR graph from Python (#17822)

Summary:
These changes add the following new Python bindings:

- Values have a 'type' property now that allows getting to the 'type' object
- Blocks have now inputs and outputs as well as returnNode and paramNode properties
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17822

Differential Revision: D14410123

Pulled By: ezyang

fbshipit-source-id: 64ef79f85a7a43b83e4b127b1d39efcaa64b74dc

5 years ago: kthvalue consistency with sort in the presence of NaN (#17824)
Thomas Viehmann [Tue, 12 Mar 2019 15:45:17 +0000 (08:45 -0700)]
kthvalue consistency with sort in the presence of NaN (#17824)

Summary:
This PR causes kthvalue to be consistent with sort
(i.e. treat NaN as larger than any number), so that
`a.kthvalue(n) == a.sort()[n - 1]`.

One drawback is that median with a NaN argument does not return NaN,
which is a deviation from NumPy.

Thank you, ngimel, for raising this.
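The sort-consistent semantics can be sketched in plain Python (a hypothetical model of the ordering only, not PyTorch's implementation): NaN compares as larger than any number, so the k-th value matches the k-th element of the sorted sequence.

```python
import math

def nan_last_key(x):
    # Treat NaN as larger than any number, matching torch.sort's ordering.
    return (math.isnan(x), x)

def kthvalue(values, k):
    # k is 1-indexed, like torch.kthvalue.
    return sorted(values, key=nan_last_key)[k - 1]

a = [3.0, float("nan"), 1.0, 2.0]
s = sorted(a, key=nan_last_key)
assert kthvalue(a, 2) == s[1] == 2.0
assert math.isnan(kthvalue(a, 4))  # NaN sorts last, so it is the "largest"
```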
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17824

Differential Revision: D14410092

Pulled By: ezyang

fbshipit-source-id: bdec2d8272dc4c65bcf2f9b8995e237774c44c02

5 years ago: Fix minor grammatical mistakes in torch/nn/modules/loss.py (#17892)
joy [Tue, 12 Mar 2019 15:39:37 +0000 (08:39 -0700)]
Fix minor grammatical mistakes in torch/nn/modules/loss.py (#17892)

Summary:
Fixes some minor grammatical mistakes in the doc of `loss.py`.

I think in the doc:
>  Note that for some losses, there multiple elements per sample.

the word "are" is missing between "there" and "multiple".

This mistake appears in all 17 descriptions of the `size_average` parameter.
It's minor, but fixing it polishes the doc, I think. 😁
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17892

Differential Revision: D14418177

Pulled By: ezyang

fbshipit-source-id: 412759f2f9b215819463bf8452ab0e0513218cd6

5 years ago: Remove (almost all) TensorOptions from native_functions.yaml (#17385)
Christian Puhrsch [Tue, 12 Mar 2019 14:54:17 +0000 (07:54 -0700)]
Remove (almost all) TensorOptions from native_functions.yaml (#17385)

Summary:
Stacked on top of https://github.com/pytorch/pytorch/pull/17386

Brings us to 1014/1106 of writing.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17385

Differential Revision: D14248008

Pulled By: cpuhrsch

fbshipit-source-id: 033e00de91e3edf7ae01ca03ebe436c0446b3b5c

5 years ago: Restore full Windows tests (#17102)
Karl Ostmo [Tue, 12 Mar 2019 13:28:54 +0000 (06:28 -0700)]
Restore full Windows tests (#17102)

Summary:
closes #17101
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17102

Differential Revision: D14420716

Pulled By: ezyang

fbshipit-source-id: 0134736e2d919afa683afa84cb2140f659042643

5 years ago: Prevent VS2017 from emitting ambiguous symbol errors (second time)
peter [Tue, 12 Mar 2019 08:53:42 +0000 (01:53 -0700)]
Prevent VS2017 from emitting ambiguous symbol errors (second time)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17863

Differential Revision: D14404818

Pulled By: soumith

fbshipit-source-id: 9dac6b926e270e2a29ec2e4dba2e93984da0e5f5

5 years ago: Fix windows test hang (#17778)
xuzhu [Tue, 12 Mar 2019 08:43:45 +0000 (01:43 -0700)]
Fix windows test hang (#17778)

Summary:
This PR resolves two concurrency issues discovered when running the test on Windows. Details about the Windows test can be found here: https://github.com/pytorch/pytorch/issues/17609

The change covers two fixes:
1. Update running_preloaders_ upfront, before creating worker threads, to prevent underflow.
2. Add a lock when updating stop_ to prevent deadlock on the condition variable cv_write_.

The fix has been tested on both Windows and Linux. With --gtest_repeat=1000, the tests run smoothly without issues.
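The second fix follows the standard condition-variable discipline: the flag a waiter checks must be updated while holding the waiter's lock before notifying, or the wakeup can be lost. A minimal Python sketch of that pattern (hypothetical names, not the C++ DataLoader code):

```python
import threading

class Prefetcher:
    def __init__(self):
        self.cv = threading.Condition()
        self.stop = False
        self.worker = threading.Thread(target=self.run)

    def run(self):
        with self.cv:
            # If stop were set without the lock, the worker could check the
            # flag, then go to sleep just as the notify fires: a lost wakeup,
            # and the join below would hang forever.
            while not self.stop:
                self.cv.wait()

    def shutdown(self):
        with self.cv:              # the fix in spirit: update under the lock...
            self.stop = True
            self.cv.notify_all()   # ...then notify
        self.worker.join()

p = Prefetcher()
p.worker.start()
p.shutdown()
assert not p.worker.is_alive()
```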
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17778

Differential Revision: D14404910

Pulled By: soumith

fbshipit-source-id: 2fbb8007e4b0bce4613e9a9fd31b8aace1bbfa8d

5 years ago: torch.btrifact for tensors with greater than 3 dimensions (#14964)
vishwakftw [Tue, 12 Mar 2019 08:42:28 +0000 (01:42 -0700)]
torch.btrifact for tensors with greater than 3 dimensions (#14964)

Summary:
Motivation:
- Earlier, `torch.btrifact` could not handle tensors with greater than 3 dimensions. This is because of the check:
>   AT_CHECK(THTensor_(nDimension)(a) == 3, "expected 3D tensor, got size: ", a->sizes());

What is in this PR?:
- Move `btrifact` to ATen
- Remove relation to TH/THC.
- Handle tensors with more than three dimensions
- Tests
- Docs modifications: added a note about the non-pivoting variant.

[blocked due to old magma-cuda binaries]
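Handling more than three dimensions generally means applying the existing 2-D routine over every leading batch index. A plain-Python sketch of that batching idea (hypothetical helper, unrelated to the ATen/MAGMA code; nested lists stand in for tensors):

```python
def batched_apply(op, batch, matrix_ndim=2):
    # Recursively applies `op` (defined on 2-D "matrices", here nested
    # lists) over arbitrarily many leading batch dimensions.
    def ndim(x):
        return 0 if not isinstance(x, list) else 1 + ndim(x[0])
    if ndim(batch) == matrix_ndim:
        return op(batch)
    return [batched_apply(op, b, matrix_ndim) for b in batch]

transpose = lambda m: [list(r) for r in zip(*m)]
batch = [[[[1, 2], [3, 4]]], [[[5, 6], [7, 8]]]]   # shape (2, 1, 2, 2)
out = batched_apply(transpose, batch)
assert out[0][0] == [[1, 3], [2, 4]]
```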
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14964

Differential Revision: D14405106

Pulled By: soumith

fbshipit-source-id: f051f5d6aaa45f85836a2867176c065733563184

5 years ago: Small clean up of aten_op
Roy Li [Tue, 12 Mar 2019 04:01:21 +0000 (21:01 -0700)]
Small clean up of aten_op

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17530

Reviewed By: ezyang

Differential Revision: D14237931

fbshipit-source-id: fb73d63d89fab0622097a49be6ed0b75ddb02a7c

5 years ago: add support for parsing class defs to the string frontend (#17628)
Michael Suo [Tue, 12 Mar 2019 02:07:58 +0000 (19:07 -0700)]
add support for parsing class defs to the string frontend (#17628)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17628

This is not hooked up anywhere yet, just adding support.
This shares the same restrictions as the python frontend—namely, that the only exprs allowed right now are method defs.

Reviewed By: shannonzhu

Differential Revision: D14291654

fbshipit-source-id: 7798e5ff412a52ef8803c7bae8f439e50968a73a

5 years ago: add test for out of order methods (#17624)
Michael Suo [Tue, 12 Mar 2019 02:07:57 +0000 (19:07 -0700)]
add test for out of order methods (#17624)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17624

Just to make sure this path works

Reviewed By: shannonzhu

Differential Revision: D14288056

fbshipit-source-id: b719c0e90252b6821b1f9b22d3d98982985a6cb3

5 years ago: initializing class value (#17585)
Michael Suo [Tue, 12 Mar 2019 02:07:57 +0000 (19:07 -0700)]
initializing class value (#17585)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17585

Create a sugared value that represents a class during initialization. This is
so that assignments to attributes correctly define attributes in __init__ but
raise an error elsewhere.
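The intended semantics can be modeled in plain Python (a hypothetical analogue, not the TorchScript implementation): new attributes may be defined only while `__init__` runs, and definitions elsewhere raise.

```python
class Strict:
    _initializing = False  # class-level default before __init__ runs

    def __init__(self, x):
        object.__setattr__(self, "_initializing", True)
        self.x = x                      # allowed: we are inside __init__
        object.__setattr__(self, "_initializing", False)

    def __setattr__(self, name, value):
        # Outside __init__, only already-defined attributes may be assigned.
        if not self._initializing and not hasattr(self, name):
            raise AttributeError(
                f"cannot define new attribute {name!r} outside __init__")
        object.__setattr__(self, name, value)

s = Strict(1)
s.x = 2                                 # reassigning an existing attribute is fine
try:
    s.y = 3                             # defining a new one is not
except AttributeError:
    pass
```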

Reviewed By: shannonzhu

Differential Revision: D14263403

fbshipit-source-id: 09b2feeb272302f00a79c2a0302fbdf5483aed6a

5 years ago: Remove unused parameter in ProcessGroupGloo (#17718)
Pieter Noordhuis [Tue, 12 Mar 2019 00:57:56 +0000 (17:57 -0700)]
Remove unused parameter in ProcessGroupGloo (#17718)

Summary:
This is not used anywhere and wasn't cleaned up prior to 1.0.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17718

Reviewed By: janewangfb

Differential Revision: D14355154

Pulled By: pietern

fbshipit-source-id: f8ff3c8f50cd6365b369a5c5b85d72d8940df048

5 years ago: Revert D14414435: [pytorch][PR] Remove remaining IR Expect files
Elias Ellison [Tue, 12 Mar 2019 00:27:50 +0000 (17:27 -0700)]
Revert D14414435: [pytorch][PR] Remove remaining IR Expect files

Differential Revision:
D14414435

Original commit changeset: 0bfd7ce66ac2

fbshipit-source-id: 02de1814f3c4e581d3798059cee752517b176ed9

5 years ago: Remove remaining IR Expect files (#17886)
Elias Ellison [Tue, 12 Mar 2019 00:23:27 +0000 (17:23 -0700)]
Remove remaining IR Expect files (#17886)

Summary:
Last batch of IR expect files removed. Includes some removal of expect files that are no longer used.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17886

Differential Revision: D14414435

Pulled By: eellison

fbshipit-source-id: 0bfd7ce66ac2f72a57f15f45ebd60b95e80b6c16

5 years ago: Bool tensor creation (cpu) (#17376)
Iurii Zdebskyi [Mon, 11 Mar 2019 23:58:09 +0000 (16:58 -0700)]
Bool tensor creation (cpu) (#17376)

Summary:
This PR enables bool tensor creation and some basic operations for the CPU backend. This is a part of Bool Tensor feature implementation work. The whole plan looks like this:
    1. Storage Implementation [Done]
    2. Tensor Creation.
        a) CPU (this PR)
        b) CUDA
    3. Tensor Conversions.
    4. Tensor Indexing.
    5. Tensor Operations.
    6. Back compatibility related changes.

**Change**:
Enable CPU tensors and these operations:
- torch.zeros
- torch.tensor
- torch.ones
- torch.randint
- torch.full
- torch.full_like
- torch.empty
- torch.empty_like

**Tested via**:
1) unit tests

2)
torch.zeros(2,2, dtype=torch.bool)
torch.tensor([True, False], dtype=torch.bool)
torch.tensor([-1, -1.1, 0, 1, 1.1, 2], dtype=torch.bool)
torch.ones([1,2], dtype=torch.bool)
torch.randint(10, (2, 2), dtype=torch.bool)
torch.full((2, 3), True, dtype=torch.bool)
torch.empty(4, dtype=torch.bool)

a = torch.tensor([0,0,1])
b = torch.full_like(a, True)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17376

Reviewed By: ezyang

Differential Revision: D14375995

Pulled By: izdeby

fbshipit-source-id: a65490b5360ee0e6e3accc54ce7e32e49ad2d2a8

5 years ago: Remove device guard from TypeDefault::copy()
Roy Li [Mon, 11 Mar 2019 22:50:45 +0000 (15:50 -0700)]
Remove device guard from TypeDefault::copy()

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17833

Reviewed By: ezyang

Differential Revision: D14400901

Pulled By: li-roy

fbshipit-source-id: ababc95dadfc94a996a80c5332f45f76a300963d

5 years ago: re-enable torch.split tests (#17859)
Michael Suo [Mon, 11 Mar 2019 22:09:00 +0000 (15:09 -0700)]
re-enable torch.split tests (#17859)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17859

This has been fixed due to improvements in shape analysis.

Reviewed By: driazati

Differential Revision: D14402781

fbshipit-source-id: 4ef2722ffedd9c8ac1eff55c244b421d7d3715ed

5 years ago: Fix lint in test_dataloader.py
Edward Yang [Mon, 11 Mar 2019 21:42:49 +0000 (14:42 -0700)]
Fix lint in test_dataloader.py

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17878

Reviewed By: eellison

Differential Revision: D14409933

fbshipit-source-id: 20ee8953a21e29b4557aff62b5e48dddd630eef6

5 years ago: Optimize fused_dropout_kernel launch bounds for AMD hardware
Johannes M Dieterich [Mon, 11 Mar 2019 21:39:07 +0000 (14:39 -0700)]
Optimize fused_dropout_kernel launch bounds for AMD hardware

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17870

Differential Revision: D14409990

Pulled By: ezyang

fbshipit-source-id: 0452282f459770823641b2527f47b1186ab14666

5 years ago: Deprecate torch.pstrf (#17866)
Vishwak Srinivasan [Mon, 11 Mar 2019 19:15:41 +0000 (12:15 -0700)]
Deprecate torch.pstrf (#17866)

Summary:
Changelog:
- Add deprecation warning to torch.pstrf
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17866

Differential Revision: D14405527

Pulled By: soumith

fbshipit-source-id: 73f3b7d61c60eb57e4bffd08112e552ae3e6dfdc

5 years ago: Allow structseq to be input of operators where tuple is expected (#17208)
Gao, Xiang [Mon, 11 Mar 2019 18:30:08 +0000 (11:30 -0700)]
Allow structseq to be input of operators where tuple is expected (#17208)

Summary:
Currently the following code gives an error on Python 2 because `ret` is a structseq, which is not a tuple:
```python
ret = a.max(dim=0)
ret1 = torch.max(a, dim=0, out=ret)
```

This PR modifies the tuple check in the Python arg parser to allow a structseq as input to operators where a tuple is expected, which makes the above code work.

Depend on: https://github.com/pytorch/pytorch/pull/17136
Partially fixes: https://github.com/pytorch/pytorch/issues/16813
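The spirit of the fix can be illustrated in plain Python, with a `namedtuple` standing in for a structseq (hypothetical names; CPython's actual structseq machinery differs): a check for the exact `tuple` type rejects the named return value, while an `isinstance` check accepts any tuple subclass.

```python
from collections import namedtuple

# Stand-in for a structseq-style named return value (hypothetical name).
MaxResult = namedtuple("MaxResult", ["values", "indices"])
ret = MaxResult(values=[3], indices=[0])

def strict_check(obj):
    # The old-style check: only an exact tuple passes.
    return type(obj) is tuple

def relaxed_check(obj):
    # The fix in spirit: any tuple subclass also passes.
    return isinstance(obj, tuple)

assert not strict_check(ret)
assert relaxed_check(ret)
```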
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17208

Differential Revision: D14280198

Pulled By: VitalyFedyunin

fbshipit-source-id: beffebfd3951c4f5c7c8fe99a5847616a89491f3

5 years ago: Add PyTorch Governance, Contributor Guide, and List of Persons of Interest
Eric Nakagawa [Mon, 11 Mar 2019 17:29:54 +0000 (10:29 -0700)]
Add PyTorch Governance, Contributor Guide, and List of Persons of Interest

Summary: Adds new documents to the PyTorch website that describe how PyTorch is governed, how to contribute to the project, and who the persons of interest are.

Reviewed By: orionr

Differential Revision: D14394573

fbshipit-source-id: ad98b807850c51de0b741e3acbbc3c699e97b27f

5 years ago: Fix compilation error (#17860)
Yinghai Lu [Mon, 11 Mar 2019 17:20:11 +0000 (10:20 -0700)]
Fix compilation error (#17860)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17860

att

Reviewed By: bddppq

Differential Revision: D14402751

fbshipit-source-id: 2d53b230dfd775372addeab1d3eaf0b9552fae9f

5 years ago: Revert D14392864: Fix lint in test_dataloader.py
Edward Yang [Mon, 11 Mar 2019 17:13:40 +0000 (10:13 -0700)]
Revert D14392864: Fix lint in test_dataloader.py

Differential Revision:
D14392864

Original commit changeset: 12477b9cfe29

fbshipit-source-id: 1864a80d5cfaceeae55d0145340a578f978ab4a7

5 years ago: Removed dead code from THTensorMath.h (#17769)
Iurii Zdebskyi [Mon, 11 Mar 2019 17:11:09 +0000 (10:11 -0700)]
Removed dead code from THTensorMath.h (#17769)

Summary:
This PR removes dead code from THTensorMath.h.
I found these unused methods while working on a PR in which I plan to move **fill** and **zero** from TH/THC to ATen.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17769

Differential Revision: D14372732

Pulled By: izdeby

fbshipit-source-id: 94fd3b52c691ebc89d2bdc8905452e7498038bf5

5 years ago: Introducing array-like sequence methods __contains__ (#17733)
bhushan [Mon, 11 Mar 2019 15:55:01 +0000 (08:55 -0700)]
Introducing array-like sequence methods __contains__ (#17733)

Summary:
for tensors

Fixes: #17000
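Supporting the `in` operator on an array-like type comes down to defining `__contains__`. A plain-Python stand-in (not the tensor implementation) showing the protocol:

```python
class ArrayLike:
    def __init__(self, data):
        self.data = list(data)

    def __contains__(self, item):
        # `x in t` checks elementwise equality, in the spirit of
        # tensor.__contains__.
        return any(v == item for v in self.data)

t = ArrayLike([1, 2, 3])
assert 2 in t
assert 5 not in t
```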
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17733

Differential Revision: D14401952

Pulled By: soumith

fbshipit-source-id: c841b128c5a1fceda1094323ed4ef1d0cf494909

5 years ago: Revert "Add check for x64 Python before setup (#17707)" (#17864)
peter [Mon, 11 Mar 2019 15:49:30 +0000 (08:49 -0700)]
Revert "Add check for x64 Python before setup (#17707)" (#17864)

Summary:
This reverts commit 08fb9021da32e73bd7dec73104eea6a76dd44439.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17864

Differential Revision: D14404920

Pulled By: soumith

fbshipit-source-id: d41fc06e249f3437d4f80d1d6a5fdbd44c90462b

5 years ago: Registering of kl-divergence for independent distribution (#17681)
Nicki Skafte [Mon, 11 Mar 2019 15:07:22 +0000 (08:07 -0700)]
Registering of kl-divergence for independent distribution (#17681)

Summary:
This addresses issue https://github.com/pytorch/pytorch/issues/13545 and implements the proposed fix, together with a single test.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17681

Differential Revision: D14360161

Pulled By: ezyang

fbshipit-source-id: 427afc88e9054b5b0dc39ebbab1087b990695ea5

5 years ago: Fix lint in test_dataloader.py
Edward Yang [Mon, 11 Mar 2019 14:58:12 +0000 (07:58 -0700)]
Fix lint in test_dataloader.py

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17820

Reviewed By: eellison

Differential Revision: D14392864

fbshipit-source-id: 12477b9cfe290428d51cc28e024c8cbe8bb7bf51

5 years ago: Further improvements of nn.container docs
Tongzhou Wang [Mon, 11 Mar 2019 01:26:20 +0000 (18:26 -0700)]
Further improvements of nn.container docs

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17731

Differential Revision: D14401894

Pulled By: soumith

fbshipit-source-id: cebb25859f78589cc4f4f8afb1e84c97f82b6962

5 years ago: fix faq typo
Tongzhou Wang [Sun, 10 Mar 2019 22:26:07 +0000 (15:26 -0700)]
fix faq typo

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17851

Differential Revision: D14401791

Pulled By: soumith

fbshipit-source-id: ed6d64d6f5985e7ce76dca1e9e376782736b90f9

5 years ago: Fix log_softmax and softmax if any dimension is 0-d (#17651)
bhushan [Sun, 10 Mar 2019 22:18:48 +0000 (15:18 -0700)]
Fix log_softmax and softmax if any dimension is 0-d (#17651)

Summary:
- Test added
- test_dim_function_empty: softmax and log_softmax on last dimension

fixes: #17262
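The desired edge-case behavior can be sketched with a toy softmax in plain Python (not the ATen kernel): a zero-sized input should yield an empty result rather than an error, since a naive implementation would fail on `max()` of an empty sequence.

```python
import math

def softmax(xs):
    # Return [] for an empty input instead of raising.
    if not xs:
        return []
    m = max(xs)                      # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

assert softmax([]) == []
assert abs(sum(softmax([1.0, 2.0, 3.0])) - 1.0) < 1e-12
```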
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17651

Differential Revision: D14349009

Pulled By: gchanan

fbshipit-source-id: b6f728f5c6be8ae7615749e3f0c201886632923e

5 years ago: Correct loss docstrings (#17300)
ZhuBaohe [Sun, 10 Mar 2019 18:50:56 +0000 (11:50 -0700)]
Correct loss docstrings (#17300)

Summary:
In the loss doc descriptions, replace the deprecated 'reduce' and 'size_average' parameters with the 'reduction' parameter.
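The mapping from the two deprecated flags onto the single `reduction` string works roughly as follows (a sketch of the legacy-argument handling; the `None` defaults are assumptions mirroring the old behavior):

```python
def legacy_get_string(size_average, reduce):
    # Map the deprecated (size_average, reduce) pair onto the single
    # `reduction` argument.
    if size_average is None:
        size_average = True
    if reduce is None:
        reduce = True
    if not reduce:
        return "none"
    return "mean" if size_average else "sum"

assert legacy_get_string(None, None) == "mean"
assert legacy_get_string(False, True) == "sum"
assert legacy_get_string(True, False) == "none"
```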
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17300

Differential Revision: D14195789

Pulled By: soumith

fbshipit-source-id: 625e650ec20f13b2d22153a4a535656cf9c8f0eb

5 years ago: When openblas exists, "OpenBLAS_FOUND" is defined, rather than "OPENBLAS_FOUND"....
HE, Tao [Sun, 10 Mar 2019 16:21:13 +0000 (09:21 -0700)]
When openblas exists, "OpenBLAS_FOUND" is defined, rather than "OPENBLAS_FOUND". (#17841)

Summary:
See https://github.com/pytorch/pytorch/blob/master/cmake/Modules/FindOpenBLAS.cmake#L36

This typo led to CMake failing to detect OpenBLAS on Ubuntu.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17841

Differential Revision: D14400261

Pulled By: soumith

fbshipit-source-id: 287e019e122230cf6b70ab1ea94e5c514f429c88

5 years ago: Passing indices as a list to Subset instead of Tensor (#17649)
bhushan [Sun, 10 Mar 2019 16:20:30 +0000 (09:20 -0700)]
Passing indices as a list to Subset instead of Tensor (#17649)

Summary:
Indices in Subset were previously stored as tensors;
they are now passed as a list in random_split to ensure integer indexing.

fixes: #17466
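The change can be sketched with a simplified stand-in for `torch.utils.data.Subset`: when `indices` is a plain list of ints, any dataset that supports integer indexing works, whereas 0-d tensor elements would not index arbitrary sequence datasets.

```python
class Subset:
    def __init__(self, dataset, indices):
        self.dataset = dataset
        self.indices = indices          # plain list of ints, not a tensor

    def __getitem__(self, idx):
        return self.dataset[self.indices[idx]]

    def __len__(self):
        return len(self.indices)

data = ["a", "b", "c", "d"]
sub = Subset(data, [3, 1])
assert sub[0] == "d" and len(sub) == 2
```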
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17649

Differential Revision: D14400250

Pulled By: soumith

fbshipit-source-id: cd20a959f33773c4babf8e861ea37ec61c2713a0

5 years ago: Clarify JIT docs
James Reed [Sun, 10 Mar 2019 07:10:26 +0000 (23:10 -0800)]
Clarify JIT docs

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17846

Differential Revision: D14400363

Pulled By: jamesr66a

fbshipit-source-id: 862316b5fd95526b6edebeca19d2cc522779df11

5 years ago: Add metadata for torch jit TracedModules. (#17640)
Pritam Damania [Sun, 10 Mar 2019 05:31:42 +0000 (21:31 -0800)]
Add metadata for torch jit TracedModules. (#17640)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17640

Pull Request resolved: https://github.com/pytorch/pytorch/pull/17311

I've extended our model metadata framework in this diff to support
traced modules as well. Re-used a lot of components from the previous
implementation of ScriptModule metadata.

Tracing is a little different from Scripting since you can't just create a
subclass of TopLevelTraceModule (type returned by torch.jit.trace) and attach
metadata the way we did for ScriptModule. As a result, I've introduced a
separate API torch.fb.jit_trace which returns an instance of
TracedModuleWithMetadata which is a subclass of TopLevelTracedModule. As a
result, we can now attach metadata to this instance.

Reviewed By: dzhulgakov

Differential Revision: D14117966

fbshipit-source-id: 3eee5eef733cb8d6a219c02e2f41d08698eca326

5 years ago: Fix PySlice_Unpack not available on PyPy 3.6 yet (#17836)
Konstantin Lopuhin [Sun, 10 Mar 2019 04:06:57 +0000 (20:06 -0800)]
Fix PySlice_Unpack not available on PyPy 3.6 yet (#17836)

Summary:
This is one of the fixes needed to support compilation on PyPy 3.6, see https://github.com/pytorch/pytorch/issues/17835
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17836

Differential Revision: D14399404

Pulled By: soumith

fbshipit-source-id: ca650a6e2066aed86ddd3314a95d0cb3c515c633

5 years ago: PyPy compatibility: let unmodified slots be inherited in the standard way (#17837)
Ronan Lamy [Sat, 9 Mar 2019 19:38:05 +0000 (11:38 -0800)]
PyPy compatibility: let unmodified slots be inherited in the standard way (#17837)

Summary:
This is needed to fix a segfault on PyPy 3.6, see https://bitbucket.org/pypy/pypy/issues/2968/segfault-calling-cpyext_tp_new_tuple and https://github.com/pytorch/pytorch/issues/17835
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17837

Differential Revision: D14399408

Pulled By: soumith

fbshipit-source-id: 75328a30018313d3223dd3e3eef9240a416c049b

5 years ago: Run fp16 resnet50 training in bench script (#17831)
Junjie Bai [Sat, 9 Mar 2019 05:50:20 +0000 (21:50 -0800)]
Run fp16 resnet50 training in bench script (#17831)

Summary:
cc xw285cornell
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17831

Differential Revision: D14398532

Pulled By: bddppq

fbshipit-source-id: 37c03cc2eebe3a6083e05631cb6ff03474e4a8a2

5 years ago: Int8 FC performance debugging (#17700)
Summer Deng [Sat, 9 Mar 2019 03:00:43 +0000 (19:00 -0800)]
Int8 FC performance debugging (#17700)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17700

Add performance debugging utilities to the DNNLOWP FC operator and the Python script.

Reviewed By: amylittleyang

Differential Revision: D14321299

fbshipit-source-id: 50dbd7b352a1da5d2ecb659d8003e71e70750063

5 years ago: Optimize LayerNormOp (#17604)
Xiaomeng Yang [Sat, 9 Mar 2019 01:35:17 +0000 (17:35 -0800)]
Optimize LayerNormOp (#17604)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17604

Optimize LayerNormOp

i-am-not-moving-c2-to-c10

Reviewed By: houseroad

Differential Revision: D14274175

fbshipit-source-id: a7aa263a1b0eb109682d2be99306e7b2cdcc0faf

5 years ago: Remove some simple use cases of Type::ScalarType()
Roy Li [Sat, 9 Mar 2019 00:39:04 +0000 (16:39 -0800)]
Remove some simple use cases of Type::ScalarType()

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17529

Reviewed By: ezyang

Differential Revision: D14237932

fbshipit-source-id: be633a1fc19215d53cfe083fdd7196acf2b7dd2f

5 years ago: Change Dispatch.h to use ScalarType over Type
Roy Li [Sat, 9 Mar 2019 00:39:04 +0000 (16:39 -0800)]
Change Dispatch.h to use ScalarType over Type

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17527

Reviewed By: zou3519

Differential Revision: D14235395

fbshipit-source-id: 3f53e33f6794f1f14c2edf79014b8ef8397822c5

5 years ago: Revert D14361993: [pytorch][PR] [Onnx] - refactoring serialization of ONNX initialize...
Lu Fang [Sat, 9 Mar 2019 00:27:00 +0000 (16:27 -0800)]
Revert D14361993: [pytorch][PR] [Onnx] - refactoring serialization of ONNX initializers to be name-based

Differential Revision:
D14361993

Original commit changeset: da93e945d557

fbshipit-source-id: 15eea001fbcd059ac13903405aeb9ea182c6ee8b

5 years ago: Open registration for c10 thread pool (#17788)
James Reed [Fri, 8 Mar 2019 23:33:34 +0000 (15:33 -0800)]
Open registration for c10 thread pool (#17788)

Summary:
1. Move ATen threadpool & open registration mechanism to C10
2. Move the `global_work_queue` to use this open registration mechanism, to allow users to substitute in their own
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17788

Reviewed By: zdevito

Differential Revision: D14379707

Pulled By: jamesr66a

fbshipit-source-id: 949662d0024875abf09907d97db927f160c54d45

5 years ago: Cast nn.Upsample.scale_factor to a float (#17732)
David Riazati [Fri, 8 Mar 2019 23:26:25 +0000 (15:26 -0800)]
Cast nn.Upsample.scale_factor to a float (#17732)

Summary:
Fixes #17106
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17732

Differential Revision: D14388192

Pulled By: driazati

fbshipit-source-id: d9c9e87a7c6db63c1de3ddebbb8dcf619f0dc34d

5 years ago: Fix lint in run_test.py
Edward Yang [Fri, 8 Mar 2019 22:30:15 +0000 (14:30 -0800)]
Fix lint in run_test.py

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17815

Reviewed By: eellison

Differential Revision: D14390308

fbshipit-source-id: 22efd62a1bbd1fc8155a942d7160d5b7d3158e6b

5 years ago: Fix lint in test/common_utils.py
Edward Yang [Fri, 8 Mar 2019 22:19:23 +0000 (14:19 -0800)]
Fix lint in test/common_utils.py

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17814

Reviewed By: eellison

Differential Revision: D14390194

fbshipit-source-id: b4b3bbe20a15d0b9ed127b255e01c0d6d0832c1b

5 years ago: Replace tensor.type().scalarType() calls with tensor.scalar_type()
Roy Li [Fri, 8 Mar 2019 22:05:01 +0000 (14:05 -0800)]
Replace tensor.type().scalarType() calls with tensor.scalar_type()

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17515

Reviewed By: ezyang

Differential Revision: D14233250

fbshipit-source-id: 6c7af8d2291c0c2b148001b30cf03834f34366c0

5 years ago: Catch exceptions in bound_shape_inference (#17775)
Yinghai Lu [Fri, 8 Mar 2019 21:15:05 +0000 (13:15 -0800)]
Catch exceptions in bound_shape_inference (#17775)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17775

Handles use input shape hint properly.

Reviewed By: zrphercule

Differential Revision: D14368735

fbshipit-source-id: 504cd96589e47aa432617e56362aa6b01a25ba9b

5 years ago: refactor caffe2 operator constructors - 11/9 (#17722)
Sebastian Messmer [Fri, 8 Mar 2019 20:33:31 +0000 (12:33 -0800)]
refactor caffe2 operator constructors - 11/9 (#17722)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17722

clangr codemod

Reviewed By: ezyang

Differential Revision: D14350584

fbshipit-source-id: adef54cedc9409b4fb365f6644e2621a9e47b2ff

5 years ago: Suppress C408 lint (don't use dict constructor) (#17813)
Edward Yang [Fri, 8 Mar 2019 20:15:49 +0000 (12:15 -0800)]
Suppress C408 lint (don't use dict constructor) (#17813)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17813

We have a lot of manually written out dict() constructors,
and (1) I don't think use of curly brace syntax is much
of an improvement and (2) it seems like a waste of time to
fix them all.

Reviewed By: eellison

Differential Revision: D14390136

fbshipit-source-id: 6199bef4dea75b6079bcb9d9e8acf20a2e1a86e1

5 years ago: Add matches_jit_signature to recent native functions
Christian Puhrsch [Fri, 8 Mar 2019 19:37:01 +0000 (11:37 -0800)]
Add matches_jit_signature to recent native functions

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17805

Differential Revision: D14388004

Pulled By: cpuhrsch

fbshipit-source-id: c50580b6fe1e9cfefed91aaa526376325d9f9c0d

5 years ago: Add /MD to prevent linking errors on Windows
peterjc123 [Fri, 8 Mar 2019 18:39:06 +0000 (10:39 -0800)]
Add /MD to prevent linking errors on Windows

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17799

Differential Revision: D14385777

Pulled By: ezyang

fbshipit-source-id: 8c1d9f80c48399087f5fae4474690e6d80d740e6

5 years ago: Change message on unknown db type to be friendly (#17795)
Dmytro Dzhulgakov [Fri, 8 Mar 2019 18:36:29 +0000 (10:36 -0800)]
Change message on unknown db type to be friendly (#17795)

Summary:
CreateDB actually returns nullptr when the db type is unknown, and throws when the file is missing.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17795

Reviewed By: ezyang

Differential Revision: D14383226

Pulled By: dzhulgakov

fbshipit-source-id: 1dcf75a6b4ba8b64a24d4e5daf02db3189d56b7b

5 years ago: Trace rnn max_batch_size (#17727)
David Riazati [Fri, 8 Mar 2019 18:29:51 +0000 (10:29 -0800)]
Trace rnn max_batch_size (#17727)

Summary:
This causes the tracer to record the select / cast to int operation instead of just an int constant

Fixes #15319 but relies on a fix for #17583 first
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17727

Differential Revision: D14377886

Pulled By: driazati

fbshipit-source-id: 59453def54ba72756303f723993844dbeb5d2f8b

5 years ago: Remove legacy way of exposing caffe2 operators to PyTorch (#17742)
Sebastian Messmer [Fri, 8 Mar 2019 18:19:49 +0000 (10:19 -0800)]
Remove legacy way of exposing caffe2 operators to PyTorch (#17742)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17742

This path isn't used anymore, and is incompatible with the changes stacked on top of this diff.
Removing it.
cc bwasti to check and confirm these can really be deleted

Reviewed By: ezyang

Differential Revision: D14362426

fbshipit-source-id: 32cdc19f28c2a981ae1e204901420998367ee588

5 years ago: Remove 'Tensor' key from ATen codegen. (#17782)
Gregory Chanan [Fri, 8 Mar 2019 17:41:33 +0000 (09:41 -0800)]
Remove 'Tensor' key from ATen codegen. (#17782)

Summary:
We used to have different ATen Tensor types, but we don't anymore.  This was just being maintained by a codegen'ed comment.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17782

Reviewed By: ezyang

Differential Revision: D14378004

Pulled By: gchanan

fbshipit-source-id: 1bbf276393a391252d372cc385230c784bd78588

5 years ago: Remove ProcessorSpecificPlugin. (#17789)
Gregory Chanan [Fri, 8 Mar 2019 17:39:03 +0000 (09:39 -0800)]
Remove ProcessorSpecificPlugin. (#17789)

Summary:
It doesn't seem to be used.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17789

Reviewed By: ezyang

Differential Revision: D14382423

Pulled By: gchanan

fbshipit-source-id: 0ac3236c48979a1b2bcd615e307e55f10fd8eb77

5 years ago: Remove THPPlugin. (#17790)
Gregory Chanan [Fri, 8 Mar 2019 17:38:48 +0000 (09:38 -0800)]
Remove THPPlugin. (#17790)

Summary:
It doesn't seem to be used.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17790

Reviewed By: ezyang

Differential Revision: D14380897

Pulled By: gchanan

fbshipit-source-id: 3c3884a08c3b6c1489347d439509b19e079c5861

5 years ago: Replace tens with hundreds.
Edward Yang [Fri, 8 Mar 2019 15:23:16 +0000 (07:23 -0800)]
Replace tens with hundreds.

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17752

Differential Revision: D14366743

fbshipit-source-id: 39f6ac08180d780866e284024918d9abd197d239

5 years ago: Support fallback for more operators in ideep (#17747)
Tim Khatkevich [Fri, 8 Mar 2019 13:43:17 +0000 (05:43 -0800)]
Support fallback for more operators in ideep (#17747)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17747

RMACRegions, Normalize and RoIPooling

Reviewed By: dskhudia

Differential Revision: D14365096

fbshipit-source-id: dafcb7077515e03c2880832a442015b70fc7140d

5 years ago: Cleanup include files in jit/passes/common_subexpression_elimination.h.
Mikhail Zolotukhin [Fri, 8 Mar 2019 09:08:17 +0000 (01:08 -0800)]
Cleanup include files in jit/passes/common_subexpression_elimination.h.

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17784

Differential Revision: D14381529

Pulled By: ZolotukhinM

fbshipit-source-id: e32e17ee644ef888a6d56a8ee3648e7ac21758bf

5 years ago: Use return names in JIT operators
Christian Puhrsch [Fri, 8 Mar 2019 07:31:00 +0000 (23:31 -0800)]
Use return names in JIT operators

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17638

Differential Revision: D14295606

Pulled By: cpuhrsch

fbshipit-source-id: 62040ac65434411357808735f0fe6cd33cc1c30f

5 years ago: Change ConvPoolOp<Context>::SetOutputSize to ConvPoolOp<Context>::GetOutputSize ...
Jerry Zhang [Fri, 8 Mar 2019 02:31:33 +0000 (18:31 -0800)]
Change ConvPoolOp<Context>::SetOutputSize to ConvPoolOp<Context>::GetOutputSize (#17764)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17764

Original commit changeset: f1923fdca4a1

Reverting the int8 ops fixes the original runtime regression.
We'll ignore the memory regression since it is flaky; see D14228484

Reviewed By: dzhulgakov

Differential Revision: D13885233

fbshipit-source-id: ccbe4b94acb44b7b4cb3ae4d73e3f6091e1e1195

5 years agoClean up some old ScalarType stuff
Roy Li [Fri, 8 Mar 2019 00:16:43 +0000 (16:16 -0800)]
Clean up some old ScalarType stuff

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17755

Differential Revision: D14377135

Pulled By: li-roy

fbshipit-source-id: 35305760a1621340ba66c61a193ff61cfedfa7e8

5 years agoadd reference to flake8-mypy in contributing.md
Elias Ellison [Thu, 7 Mar 2019 23:23:16 +0000 (15:23 -0800)]
add reference to flake8-mypy in contributing.md

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17759

Differential Revision: D14376813

Pulled By: eellison

fbshipit-source-id: cca1128e967ef7368633b94a3fa3c8e76a4a16f4

5 years agoMove lerp to ATen, add functionality for tensor weights (#17348)
vishwakftw [Thu, 7 Mar 2019 22:01:47 +0000 (14:01 -0800)]
Move lerp to ATen, add functionality for tensor weights (#17348)

Summary:
Changelog:
- Remove TH/THC bindings
- Add tensor weights for `lerp`
- Modify derivatives appropriately
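As a hedged illustration of the tensor-weight semantics (a sketch, not the ATen kernel itself): elementwise lerp computes `out = start + weight * (end - start)`, with the weight now allowed to vary per element.

```python
def lerp(start, end, weight):
    # elementwise linear interpolation; with tensor weights,
    # each element gets its own interpolation factor
    return [s + w * (e - s) for s, e, w in zip(start, end, weight)]

print(lerp([0.0, 10.0], [1.0, 20.0], [0.25, 0.5]))  # [0.25, 15.0]
```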
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17348

Differential Revision: D14355845

Pulled By: soumith

fbshipit-source-id: eaede4c09ee589d77ba6cf52583510ea8e3a2fcf

5 years agoRefactor dispatcher (#17753)
Iurii Zdebskyi [Thu, 7 Mar 2019 21:38:59 +0000 (13:38 -0800)]
Refactor dispatcher (#17753)

Summary:
This is a side PR for the bool tensor feature. The idea for this change came from feedback received on this [PR](https://github.com/pytorch/pytorch/pull/17376).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17753

Differential Revision: D14367989

Pulled By: izdeby

fbshipit-source-id: 4fa380e56e20f18e480be68920170dbc3a4eb91c

5 years agoadd layernorm to AD
Wanchao Liang [Thu, 7 Mar 2019 21:31:55 +0000 (13:31 -0800)]
add layernorm to AD

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17702

Differential Revision: D14368472

Pulled By: wanchaol

fbshipit-source-id: 8db390e39444078258ad1d34ba74d6ddafa5d02b

5 years agomove half<->float conversions to oss operators (#17548)
Hector Yuen [Thu, 7 Mar 2019 20:52:54 +0000 (12:52 -0800)]
move half<->float conversions to oss operators (#17548)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17548

expose half float operators to OSS

common/math/Float16.h is the original implementation;
it is substituted by caffe2/c10/util/Half.h

From the comments, it seems like both implementations don't handle denormals
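For illustration (a sketch, not the Caffe2 code): Python's `struct` module can round-trip a value through IEEE-754 binary16, which shows the kind of precision loss these conversion operators deal with.

```python
import struct

def roundtrip_fp16(x):
    # pack to IEEE-754 half precision ('e' format) and back to float
    return struct.unpack('<e', struct.pack('<e', x))[0]

print(roundtrip_fp16(1.5))  # 1.5 -- exactly representable in fp16
print(roundtrip_fp16(0.1))  # close to but not equal to 0.1 -- precision is lost
```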

Reviewed By: jspark1105

Differential Revision: D14244200

fbshipit-source-id: f90ba28c5bf6a2b451b429cc4925b8cc376ac651

5 years agoFix the update ONNX expect files (#17767)
Lu Fang [Thu, 7 Mar 2019 20:51:09 +0000 (12:51 -0800)]
Fix the update ONNX expect files (#17767)

Summary:
Fix the CI
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17767

Reviewed By: zrphercule

Differential Revision: D14370483

Pulled By: houseroad

fbshipit-source-id: e7b0bbde0797c41f5a010fa206fab80fe2792eb7

5 years agoCleanup testFusion/testOne: there are unused arguments.
Mikhail Zolotukhin [Thu, 7 Mar 2019 19:13:48 +0000 (11:13 -0800)]
Cleanup testFusion/testOne: there are unused arguments.

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17737

Differential Revision: D14366584

Pulled By: ZolotukhinM

fbshipit-source-id: 3c2dd2aabfecca475909e4eec4a077d900795da9

5 years agoAutomatic update of fbcode/onnx to 96c58ceeacf0f2b73d752e413e4fd78787a12da3 (#17676)
Lu Fang [Thu, 7 Mar 2019 19:03:57 +0000 (11:03 -0800)]
Automatic update of fbcode/onnx to 96c58ceeacf0f2b73d752e413e4fd78787a12da3 (#17676)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17676

Previous import was e18bb41d255a23daf368ffd62a2645db55db4c72

Included changes:
- **[96c58ce](https://github.com/onnx/onnx/commit/96c58ce)**: Fix shape inference when auto_pad is notset again (#1830) <Li-Wen Chang>
- **[873ddbb](https://github.com/onnx/onnx/commit/873ddbb)**: More extendable Runner (#1809) <Michał Karzyński>

Reviewed By: zrphercule

Differential Revision: D14321241

fbshipit-source-id: 12de9021afc61f5435f1b719cccf7b0f4ad73a84

5 years agoSet the default ONNX opset to the latest stable opset (i.e., 9) (#17736)
Lu Fang [Thu, 7 Mar 2019 18:51:29 +0000 (10:51 -0800)]
Set the default ONNX opset to the latest stable opset (i.e., 9) (#17736)

Summary:
1) The changes in the new opset won't affect the internal pipeline.
2) The CI won't be affected by the ONNX changes.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17736

Reviewed By: zrphercule

Differential Revision: D14358710

Pulled By: houseroad

fbshipit-source-id: 4ef15d2246b50f6875ee215ce37ecf92d555ca6a

5 years agoAdd module attributes (#17309)
David Riazati [Thu, 7 Mar 2019 18:41:13 +0000 (10:41 -0800)]
Add module attributes (#17309)

Summary:
Similar to `nn.Parameter`s, this PR lets you store any `IValue` on a module as an attribute on a `ScriptModule` (only from the Python front-end currently). To mark something as an attribute, it should be wrapped in `jit.Attribute(value, type)` (ex. `self.table = torch.jit.Attribute(table, Dict[str, torch.Tensor])`)
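As a rough sketch (assuming `jit.Attribute` behaves like a plain (value, type) pair as described above; the class and field names here are illustrative, not the actual implementation):

```python
from typing import Any, NamedTuple

class Attribute(NamedTuple):
    # illustrative stand-in for torch.jit.Attribute:
    # pairs a value with the TorchScript type it should be stored as
    value: Any
    type: str

table = Attribute({"w": [1.0, 2.0]}, "Dict[str, Tensor]")
print(table.value["w"], table.type)
```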

Followup Work:
* (de)serializing for use in C++
* change `self.training` to be a `bool` attribute instead of a buffer
* mutable attributes
* string frontend support
* documentation
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17309

Differential Revision: D14354316

Pulled By: driazati

fbshipit-source-id: 67e08ab5229366b67fbc837e67b58831a4fb3318

5 years ago- refactoring serialization of ONNX initializers to be name-based (#17420)
Spandan Tiwari [Thu, 7 Mar 2019 18:06:17 +0000 (10:06 -0800)]
- refactoring serialization of ONNX initializers to be name-based (#17420)

Summary:
Currently, serialization of model parameters in ONNX export depends on the order in which they are stored in a container (`list` on Python side and `std::vector` on C++ side). This has worked fine till now, but if we need to do any pass on that graph that mutates the parameter list, then strictly order-based serialization may not work.

This PR is the first in a set to bring in more passes (such as constant folding) related to ONNX export. This PR lays the groundwork by moving the serialization in ONNX export from order-based to name based approach, which is more amenable to some of the passes.

houseroad - As discussed this change uses a map for export, and removes the code from `export.cpp` that relies on the order to compute initializer names.
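A minimal sketch of the difference (names here are hypothetical): with order-based serialization, a pass that reorders or drops parameters silently corrupts the mapping, while name-based lookup survives such mutation.

```python
# order-based: meaning is positional; reordering the list breaks it
order_based = [[1.0, 2.0], [0.5]]  # which is the weight, which is the bias?

# name-based: each initializer is keyed by its name
name_based = {"conv1.weight": [1.0, 2.0], "conv1.bias": [0.5]}

def serialize(initializers):
    # emit deterministic (name, tensor) pairs; position no longer matters
    return sorted(initializers.items())

print(serialize(name_based))
```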
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17420

Differential Revision: D14361993

Pulled By: houseroad

fbshipit-source-id: da93e945d55755c126de06641f35df87d1648cc4

5 years agoONNX Export for Max and Average Pooling in CEIL_MODE
Lara Haidar-Ahmad [Thu, 7 Mar 2019 17:59:28 +0000 (09:59 -0800)]
ONNX Export for Max and Average Pooling in CEIL_MODE

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16769

Differential Revision: D14362175

Pulled By: houseroad

fbshipit-source-id: 65cfb1dfba6a43d39cc85374add368fe8e4e5645

5 years agouse flake8-mypy (#17721)
Elias Ellison [Thu, 7 Mar 2019 17:12:35 +0000 (09:12 -0800)]
use flake8-mypy (#17721)

Summary:
Use flake8 installed with mypy checks so that our linter matches fbcode. Mypy type errors also provide valuable signal.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17721

Differential Revision: D14357778

Pulled By: eellison

fbshipit-source-id: d8c9ea3fe3b5f550c3b70fe259e0eabf95e4c92d

5 years agouse fp16<->fp32 intrinsic (#17496)
Jongsoo Park [Thu, 7 Mar 2019 10:17:42 +0000 (02:17 -0800)]
use fp16<->fp32 intrinsic (#17496)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17496

As title.

Reviewed By: hyuen

Differential Revision: D14222907

fbshipit-source-id: d5d6c032e725ca8b52aca2be7401ec3c59f6a242

5 years agoImplement a Caffe2 standalone LSTM operator (#17726)
Ahmed Aly [Thu, 7 Mar 2019 09:03:51 +0000 (01:03 -0800)]
Implement a Caffe2 standalone LSTM operator (#17726)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17726

Pull Request resolved: https://github.com/pytorch/pytorch/pull/17725

Pull Request resolved: https://github.com/pytorch/pytorch/pull/17461

Implementing a standalone LSTM Operator in Caffe2, adopted from this Aten implementation: diffusion/FBS/browse/master/fbcode/caffe2/aten/src/ATen/native/RNN.cpp. The trickiest part of this exercise was that caffe2::Tensor has no copy constructor, which made it necessary to implement a custom templated copy constructor for the different Tensor containers used in the code. Also, there was no easy way to use off-the-shelf C2 operators in my code, so I had to copy some code that does basic matmul, cat, split, transpose, and linear as utility functions.
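For reference, a pure-Python sketch of the single-step LSTM cell math such an operator implements (a simplification, not the Caffe2 code; real kernels use fused matmuls):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_cell(x, h, c, w_ih, w_hh, b):
    # gates = W_ih @ x + W_hh @ h + b, then split into
    # input / forget / cell / output gates of size len(h) each
    def matvec(W, v):
        return [sum(wi * vi for wi, vi in zip(row, v)) for row in W]
    gates = [a + p + q for a, p, q in zip(matvec(w_ih, x), matvec(w_hh, h), b)]
    H = len(h)
    i = [sigmoid(v) for v in gates[0:H]]
    f = [sigmoid(v) for v in gates[H:2 * H]]
    g = [math.tanh(v) for v in gates[2 * H:3 * H]]
    o = [sigmoid(v) for v in gates[3 * H:4 * H]]
    c_next = [fj * cj + ij * gj for fj, cj, ij, gj in zip(f, c, i, g)]
    h_next = [oj * math.tanh(cj) for oj, cj in zip(o, c_next)]
    return h_next, c_next

# with zero weights, i = f = o = 0.5 and g = 0, so the cell state halves
h, c = lstm_cell([1.0], [0.0], [2.0], [[0.0]] * 4, [[0.0]] * 4, [0.0] * 4)
print(c)  # [1.0]
```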

Two things missing:

- Profiling this implementation against the current ONNXified LSTM op
- Make this operator available to use in PyTorch

Reviewed By: dzhulgakov

Differential Revision: D14351575

fbshipit-source-id: 3b99b53212cf593c7a49e45580b5a07b90809e64

5 years agocaffe2:libtorch_cuda depends on caffe2:caffe2_gpu (#17729)
Sebastian Messmer [Thu, 7 Mar 2019 07:50:14 +0000 (23:50 -0800)]
caffe2:libtorch_cuda depends on caffe2:caffe2_gpu (#17729)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17729

When doing "import torch" in fbcode, previously the caffe2 cuda kernels weren't loaded because libcaffe2_gpu.so wasn't loaded.
Once you also did "from caffe2.python import workspace", then the cuda kernels were loaded because that triggered a runtime mechanism for loading libcaffe2_gpu.so.

We want the cuda kernels to always be available, so this diff adds a dependency from caffe2:libtorch_cuda to caffe2:caffe2_gpu.

Reviewed By: ezyang

Differential Revision: D14353498

fbshipit-source-id: 76a9fe69f231b308ab40eac393bb216c6fad3658

5 years agoadd tensor and cost inference functions (#17684)
Jongsoo Park [Thu, 7 Mar 2019 07:26:27 +0000 (23:26 -0800)]
add tensor and cost inference functions (#17684)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17684

Adding tensor and cost inference functions to more int8 operators.

Reviewed By: yinghai

Differential Revision: D14174746

fbshipit-source-id: dfad975fa75899565c8fb61f1b7747a9206ebd22

5 years agoONNX Export Narrow op
Lara Haidar [Thu, 7 Mar 2019 06:35:12 +0000 (22:35 -0800)]
ONNX Export Narrow op

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17550

Differential Revision: D14350401

Pulled By: houseroad

fbshipit-source-id: 4d88079bb7a8bbd270b0272009826eb3b202cc33

5 years agoKeep the dim_type of hinted shape as BATCH if possible (#17734)
Yinghai Lu [Thu, 7 Mar 2019 03:55:39 +0000 (19:55 -0800)]
Keep the dim_type of hinted shape as BATCH if possible (#17734)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17734

If an input is not BATCH, we skip adjusting its batch size during the onnxifi transformation. So when we take hints, we take them as CONSTANT, but later need to change them to BATCH if possible.

Reviewed By: jackm321

Differential Revision: D14355983

fbshipit-source-id: 63eb54a44afb1565c71486fdd73db07ca0ac4fd4

5 years agofix different round behavior on CPU and GPU #16498 (#17443)
jwu [Thu, 7 Mar 2019 03:37:03 +0000 (19:37 -0800)]
fix different round behavior on CPU and GPU #16498 (#17443)

Summary:
xxtemp, colesbury, bhushan23, zou3519: this converts the GPU round behavior to half-to-even, consistent with the torch CPU version and NumPy. Your feedback is welcome.
See #16498
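Half-to-even ("banker's") rounding resolves ties toward the nearest even integer; Python's built-in `round()` follows the same rule, so the expected behavior can be checked directly:

```python
# ties (.5 cases) go to the nearest even integer,
# matching NumPy and the torch CPU kernel
values = [0.5, 1.5, 2.5, 3.5]
print([round(v) for v in values])  # [0, 2, 2, 4]
```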
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17443

Differential Revision: D14261786

Pulled By: VitalyFedyunin

fbshipit-source-id: 98156436b545d72769831a89e2775d43ad913ebc

5 years agoWarn about memory overlaps on expanded tensors (#17576)
zou3519 [Thu, 7 Mar 2019 01:37:13 +0000 (17:37 -0800)]
Warn about memory overlaps on expanded tensors (#17576)

Summary:
Eventually we should remove these when we're certain that all our ops
handle memory overlaps correctly.
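The overlap arises because `expand()` produces stride-0 views; a small sketch of strided addressing shows why distinct logical indices alias the same storage element:

```python
def storage_offset(index, strides):
    # standard strided addressing: offset = sum(i_k * stride_k);
    # an expanded dimension has stride 0, so every index along it
    # maps to the same storage location
    return sum(i * s for i, s in zip(index, strides))

# a (3, 2) tensor expanded from shape (1, 2): strides are (0, 1)
strides = (0, 1)
print(storage_offset((0, 1), strides), storage_offset((2, 1), strides))  # 1 1
```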
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17576

Differential Revision: D14349990

Pulled By: zou3519

fbshipit-source-id: c3a09f6113b9b1bf93e7f13c0b426c45b2cdf21f

5 years agofix exp fam. formula
Tongzhou Wang [Wed, 6 Mar 2019 23:35:25 +0000 (15:35 -0800)]
fix exp fam. formula

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17719

Differential Revision: D14349029

Pulled By: soumith

fbshipit-source-id: cf016756a9319436f7379e8377f8bd1e1b672b40

5 years agorefactor caffe2 operator constructors - 10/9 (#17659)
Sebastian Messmer [Wed, 6 Mar 2019 23:08:44 +0000 (15:08 -0800)]
refactor caffe2 operator constructors - 10/9 (#17659)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17659

clangr codemod

Reviewed By: ezyang

Differential Revision: D14304675

fbshipit-source-id: 45fbd84c50651a70ae29bf46df3322715e99d225

5 years agoImprove ONNX symbolic for logsoftmax and softmax (#17672)
Lu Fang [Wed, 6 Mar 2019 22:59:16 +0000 (14:59 -0800)]
Improve ONNX symbolic for logsoftmax and softmax (#17672)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17672

support dtype in the onnx symbolic

Reviewed By: zrphercule

Differential Revision: D14313987

fbshipit-source-id: e9364621b3f795191d880599711dfbcb220d0e31

5 years agoEnable using CMD when building cpp extensions on Windows
peter [Wed, 6 Mar 2019 22:40:05 +0000 (14:40 -0800)]
Enable using CMD when building cpp extensions on Windows

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17706

Differential Revision: D14346482

Pulled By: ezyang

fbshipit-source-id: 7c85e51c701f6c0947ad324ef19fafda40ae1cb9

5 years agoDo not rename net boundary inputs/outputs during ssaRewrite. (#17545)
Yinghai Lu [Wed, 6 Mar 2019 22:24:02 +0000 (14:24 -0800)]
Do not rename net boundary inputs/outputs during ssaRewrite. (#17545)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17545

This diff avoids renaming boundary inputs of the net during the onnxifi transform.
It also removes adding mappings for the initializers during onnxifi op creation.
Thus it gets rid of the mapped ws creation during onnxifi op creation.
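A hedged sketch of the idea (simplified; not the Caffe2 implementation): SSA rewriting versions each redefinition of a blob, but names in the boundary set are left untouched so external consumers still find them.

```python
def ssa_rewrite(ops, boundary):
    # ops: list of (input_names, output_names); boundary: names to preserve
    version = {}

    def current(name):
        v = version.get(name, 0)
        return name if (name in boundary or v == 0) else f"{name}_v{v}"

    rewritten = []
    for ins, outs in ops:
        new_ins = [current(n) for n in ins]
        new_outs = []
        for n in outs:
            if n not in boundary:
                version[n] = version.get(n, 0) + 1  # new SSA version
            new_outs.append(current(n))
        rewritten.append((new_ins, new_outs))
    return rewritten

ops = [(["X"], ["t"]), (["t"], ["t"]), (["t"], ["Y"])]
print(ssa_rewrite(ops, {"X", "Y"}))
```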

Reviewed By: zrphercule

Differential Revision: D14243161

fbshipit-source-id: 6eafa920c45f6a6bfacbbb443e8e84cf9778644c

5 years agoReapply D14078519 (#17596)
Sebastian Messmer [Wed, 6 Mar 2019 21:47:27 +0000 (13:47 -0800)]
Reapply D14078519 (#17596)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17596

This was reverted before; this is the fixed version.

Reviewed By: ezyang

Differential Revision: D14270288

fbshipit-source-id: c72490b5d02cc6098cb60145fa9a842b3c9a24c5

5 years agoBatch of expect file removals (#17581)
eellison [Wed, 6 Mar 2019 21:41:13 +0000 (13:41 -0800)]
Batch of expect file removals (#17581)

Summary:
Another batch of removing expect files.

One note - I removed the Batched expect files without adding equivalent tests since they are already being tested in other ways, and we are no longer actively maintaining that project.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17581

Differential Revision: D14343578

Pulled By: eellison

fbshipit-source-id: ce0b1fd2b5b4ec80ad9003bab1b58f41645d3da6

5 years ago(#14267)
jiej [Wed, 6 Mar 2019 21:36:14 +0000 (13:36 -0800)]
(#14267)

Summary:
- Summary:

Added synchronized batch normalization, which allows synchronization of stats across mini-batches between processes within a process group.
Current implementation uses a mixture of extended ATen native functions (cpp cuda extension) + torch.nn.modules (c10d python API)
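The per-process statistic aggregation can be sketched as follows (a simplification of what the extension computes from per-GPU sums; variable names are illustrative):

```python
def combine_stats(groups):
    # groups: (count, mean, biased_var) from each process; merge them
    # into the global mean/variance, as synchronized batch norm must
    total = sum(n for n, _, _ in groups)
    mean = sum(n * m for n, m, _ in groups) / total
    var = sum(n * (v + m * m) for n, m, v in groups) / total - mean * mean
    return total, mean, var

# stats of [1, 2, 3] and [4, 5, 9] combined match stats over all six values
print(combine_stats([(3, 2.0, 2 / 3), (3, 6.0, 14 / 3)]))
```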

- User-facing api:

1. torch.nn.utils.convert_sync_batchnorm(modules, process_group=None)

2. torch.nn.SyncBatchNorm(num_features, eps=1e-5, momentum=0.1, affine=True, track_running_stats=True, ***process_group=None***)

- supported use case:
DistributedDataParallel with ***single-gpu multi-process***

a. User creates model containing `torch.nn.SyncBatchNorm` layers through one of the ways listed below:

  1. use layers directly:

     torch.nn.SyncBatchNorm(...)

     similar API as with torch.nn.BatchNormXd(...)
     with added argument `process_group` which is used to limit the scope of
     synchronization within each process group. Default value is None, which
     implies synchronization across all GPUs

  2. use torch.nn.utils.convert_sync_batchnorm(modules, process_group)

     recursively convert all `torch.nn.BatchNormXd` into `torch.nn.SyncBatchNorm`
     preserving values of parameters/buffers.
     the utility function also allows user to specify process_group value to all
     converted layers.

b. user wraps their model with
   `torch.nn.parallel.DistributedDataParallel`; from this point, the user
   should follow the general guidelines in the DDP usage guide

- Error checking

For use cases not supported, we error out:

1. Application launched without ddp:
   > import torch
   > sbn = torch.nn.SyncBatchNorm(10).cuda()
   > inp = torch.randn(5, 10, 3, 3).cuda()
   > sbn(inp) --> Error!
   > AttributeError: SyncBatchNorm is only supported within torch.nn.parallel.DistributedDataParallel

2. Application launched using DDP with multi-GPU per-process:
   > ddp_module = nn.parallel.DistributedDataParallel(module, device_ids=device_ids, output_device=args.local_rank)
   > ValueError: SyncBatchNorm is only supported for DDP with single GPU per process
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14267

Differential Revision: D14270035

Pulled By: ezyang

fbshipit-source-id: 4956d8fa565c32e9df5408d53719ff9f945f4d6d

5 years agoUpdate ModuleDict doc about order
Tongzhou Wang [Wed, 6 Mar 2019 21:06:41 +0000 (13:06 -0800)]
Update ModuleDict doc about order

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17717

Differential Revision: D14346557

Pulled By: ezyang

fbshipit-source-id: 2484c7d8105f9aa8bce5567d1fa2d4f587cc9cc2

5 years agoUpdate CODEOWNERS (#17720)
Pieter Noordhuis [Wed, 6 Mar 2019 20:30:05 +0000 (12:30 -0800)]
Update CODEOWNERS (#17720)

Summary:
teng-li is passing the baton to mrshenli. Thanks for all your work on distributed teng-li!! :tada:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17720

Differential Revision: D14350120

Pulled By: pietern

fbshipit-source-id: edfe784520c54630203cc8fbb296455d3dbf341b