platform/upstream/pytorch.git
Elias Ellison [Tue, 23 Apr 2019 23:30:49 +0000 (16:30 -0700)]
builtin ivalues sort (#19572)

Summary:
Add sorting to all the lists which we specialize on (Tensor, int, float, bool).

First part of https://github.com/pytorch/pytorch/issues/19372
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19572
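
For illustration, a minimal TorchScript sketch of what this enables (not code from the PR; the function and values are made up):

```python
from typing import List

import torch

@torch.jit.script
def sort_scores(xs: List[float]) -> List[float]:
    # list.sort() now works on the specialized list types inside script
    xs.sort()
    return xs

print(sort_scores([3.0, 1.0, 2.0]))  # [1.0, 2.0, 3.0]
```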

Differential Revision: D15052677

Pulled By: eellison

fbshipit-source-id: 301e8e0e3e29e04aca1311410db0a474fd833cff

James Reed [Tue, 23 Apr 2019 22:24:41 +0000 (15:24 -0700)]
Guard {set,rebase}_history on grad_fn check (#19623)

Summary:
We would previously have statements like

```
set_history(flatten_tensor_args( result ), grad_fn);
```

Internally, {set,rebase}_history would check grad_fn and short circuit if it is nullptr. However, this means that we are executing the expression `flatten_tensor_args( result )` and immediately throwing away the results. This was causing unnecessary allocations + overhead.

My JIT overhead benchmark script (with custom benchmark method):

```
import torch, time

torch.jit.script
def add(x, y):
    return x + y

a = torch.rand([])
b = torch.rand([])

niter = 1000000

with torch.no_grad():
    s = time.time()
    add.__getattr__('forward').benchmark(niter, a, b)
    e = time.time() - s
    print('overhead per call (us)', e / niter * 1e6)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19623

Differential Revision: D15053399

Pulled By: jamesr66a

fbshipit-source-id: 8777e1a2b5c5a5bbd3a035b7247c8154c5fc4aa6

Xiaomeng Yang [Tue, 23 Apr 2019 22:24:03 +0000 (15:24 -0700)]
optimize BatchMatmulOp (#18612)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18612

optimize BatchMatmulOp

Reviewed By: houseroad

Differential Revision: D14681665

fbshipit-source-id: cf5ea4909ace58fd44fe6fa634531102ac84e851

Jerry Zhang [Tue, 23 Apr 2019 22:15:04 +0000 (15:15 -0700)]
fix lint (#19632)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19632

at

Differential Revision: D15052952

fbshipit-source-id: 7c38fad99799e5ac914685c36eadf932afe52b74

Phúc Lê [Tue, 23 Apr 2019 21:49:50 +0000 (14:49 -0700)]
Add base support to torch.logspace, default base=10 (#19542)

Summary:
Add base support for torch.logspace. See #19220 for details.
SsnL, can you give feedback? Thanks a lot.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19542
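
A small usage sketch of the new argument (illustrative, not taken from the PR):

```python
import torch

# Default base stays 10, matching the old behavior.
print(torch.logspace(start=0, end=2, steps=3))          # tensor([  1.,  10., 100.])
# The new `base` argument selects a different logarithmic base.
print(torch.logspace(start=0, end=2, steps=3, base=2))  # tensor([1., 2., 4.])
```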

Differential Revision: D15028484

Pulled By: soumith

fbshipit-source-id: fe5a58a203b279103abbc192c754c25d5031498e

Michael Suo [Tue, 23 Apr 2019 21:48:18 +0000 (14:48 -0700)]
disable flake8 E302 (two blank lines) (#19634)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19634
ghimport-source-id: 68b11ac3c19daf8df3bbf11e6181e9450899e90a

Differential Revision: D15053466

Pulled By: suo

fbshipit-source-id: 09d7859aa2059fc9eb3b47fa62467537bab40e05

Tongzhou Wang [Tue, 23 Apr 2019 21:47:56 +0000 (14:47 -0700)]
fix nn.Sequential doc

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19597

Differential Revision: D15042383

Pulled By: soumith

fbshipit-source-id: f912ed2a726a17fcc25795ff66b73ae4caacd247

Oleg Bogdanov [Tue, 23 Apr 2019 21:23:26 +0000 (14:23 -0700)]
caffe2 | Windows compat fixes

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19531

Reviewed By: hlu1

Differential Revision: D15024541

fbshipit-source-id: cd8249a6d529afb65fa8afd74a05dbfe73eb1fb0

Sebastian Messmer [Tue, 23 Apr 2019 20:41:34 +0000 (13:41 -0700)]
Remove fixed TODO (#19590)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19590

-

Reviewed By: ezyang

Differential Revision: D15039561

fbshipit-source-id: 246cf4fa91a33cb4c96750b534b8c3d0c312f311

Huamin Li [Tue, 23 Apr 2019 20:05:55 +0000 (13:05 -0700)]
correct comments in group_norm_op (#19621)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19621

The comments for group_norm_op (i.e., the math part) are not accurate; this diff fixes them.

Reviewed By: BIT-silence

Differential Revision: D15048695

fbshipit-source-id: 27d41d3ae21054257967815254134849944d56ca

Sebastian Messmer [Tue, 23 Apr 2019 19:43:24 +0000 (12:43 -0700)]
Simplify argument test cases (#19593)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19593

Removes a lot of duplication

Reviewed By: dzhulgakov

Differential Revision: D15039887

fbshipit-source-id: e90fe024b84220dd337fdd314d8f7e3620baec28

Sebastian Messmer [Tue, 23 Apr 2019 19:43:23 +0000 (12:43 -0700)]
Add test cases for optional of list (#19592)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19592

This is already supported but wasn't tested yet

Reviewed By: ezyang

Differential Revision: D15039888

fbshipit-source-id: dc8ea724c76dd1719b1d4810a20c8f958e5beecc

Stefan Krah [Tue, 23 Apr 2019 19:43:14 +0000 (12:43 -0700)]
Port adaptive_max_pool3d() to ATen (#19547)

Summary:
This is the second part of #18064.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19547

Differential Revision: D15046630

Pulled By: ezyang

fbshipit-source-id: 03f80602b94d47bca66bfd0dcab1b7bb99e5b7f1

Elias Ellison [Tue, 23 Apr 2019 19:21:32 +0000 (12:21 -0700)]
add torch.tensor requires grad (#19445)

Summary:
Add support for setting requires_grad = True on torch.tensor within TorchScript.

Within constant propagation, we can't insert any constants that require grad.

Also added shape analysis and requires_grad analysis for torch.tensor.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19445
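
A minimal sketch of what this allows inside TorchScript (illustrative; the function name is made up):

```python
import torch

@torch.jit.script
def make_leaf() -> torch.Tensor:
    # requires_grad can now be passed to torch.tensor within script
    return torch.tensor([1.0, 2.0], requires_grad=True)

print(make_leaf().requires_grad)  # True
```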

Differential Revision: D15046211

Pulled By: eellison

fbshipit-source-id: b4ef7a6b4b6b8dc03e1fa49f87dc415874cd1998

Yinghai Lu [Tue, 23 Apr 2019 19:17:59 +0000 (12:17 -0700)]
Surface the Glow traces to C2 (#19087)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19087

att

Reviewed By: jackm321

Differential Revision: D14863112

fbshipit-source-id: 2680161b9f05391e73bb8dac4fbbeabb87a82c05

Kaiyu Shi [Tue, 23 Apr 2019 19:15:53 +0000 (12:15 -0700)]
Fix lack of state init for adagrad and add share_memory flag (#17679)

Summary:
The current code initializes the `state` in the `__init__` method, but the initialization is not invoked in `add_param_group`.

I followed the same approach as other optimizers to init the `state`.

```python
import torch

emb = torch.nn.Embedding(10,10)
emb2 = torch.nn.Embedding(10,10)

optim = torch.optim.Adagrad(emb.parameters())
print(optim.state[emb.weight])  # already initialized

optim.add_param_group({'params': emb2.parameters()})
print(optim.state[emb2.weight])  # empty dict

loss = emb2.weight.sum() + emb.weight.sum()
loss.backward()
optim.step()  # raised KeyError
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17679

Differential Revision: D14577575

Pulled By: ezyang

fbshipit-source-id: 12440079ac964b9eedad48e393d47f558babe300

Priya Goyal [Tue, 23 Apr 2019 18:41:44 +0000 (11:41 -0700)]
Allow extracting element-wise loss in softmax (#19579)

Summary:
Oftentimes, we want to experiment with the loss per element (image, etc.). This changeset allows getting the per-element loss as well. This output is optional.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19579

Reviewed By: jerryzh168

Differential Revision: D15035797

Pulled By: prigoyal

fbshipit-source-id: 562dea514f49c1f2f1cbbc083a1938dc019a75c4

Wanchao Liang [Tue, 23 Apr 2019 18:16:28 +0000 (11:16 -0700)]
dispatch max_pools with no indices, expose max_pools to torch namespace (#19449)

Summary:
In the functional interfaces we do boolean dispatch, but always to max_pool\*d_with_indices. This changes it to emit the max_pool\*d op instead when indices are not needed, so the with_indices ops don't have to be exposed to different backends (for the JIT).

It also binds max_pool\*d to the torch namespace, which is the same behavior as avg_pool\*d.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19449
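
Roughly, the resulting dispatch looks like this (illustrative sketch, not code from the PR):

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)

# Plain pooling emits max_pool2d (no indices computed); the op is also
# available directly on the torch namespace, like avg_pool2d.
y = torch.max_pool2d(x, kernel_size=2)

# The with-indices variant is still reachable through the functional interface.
y2, idx = F.max_pool2d(x, kernel_size=2, return_indices=True)
```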

Differential Revision: D15016839

Pulled By: wanchaol

fbshipit-source-id: f77cd5f0bcd6d8534c1296d89b061023a8288a2c

Jerry Zhang [Tue, 23 Apr 2019 18:03:38 +0000 (11:03 -0700)]
Adds `fakeQuantizePerTensorAffineOp` to pytorch (#19387)

Summary:
Adding the fakequant op so that we can use it in PyTorch models; the exact implementation might change.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/19387

Differential Revision: D13739657

fbshipit-source-id: d5cb084e843d236bb1da9827ac1ba3900ed99786

James Reed [Tue, 23 Apr 2019 18:03:31 +0000 (11:03 -0700)]
-fno-math-errno -fno-trapping-math (#19552)

Summary:
As suggested in https://github.com/pytorch/pytorch/pull/19152#discussion_r275925767, this may give the compiler more opportunities for auto-vectorization
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19552

Differential Revision: D15048358

Pulled By: jamesr66a

fbshipit-source-id: db2c2c515c3e9f7d22305c039ab0c8a867fc43a2

Bram Wasti [Tue, 23 Apr 2019 17:49:39 +0000 (10:49 -0700)]
Only require python print on certain namespaces (#19383)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19383
ghimport-source-id: b93c7849a52d11ecbf26b614704740d44a2447f9

Differential Revision: D15032727

Pulled By: bwasti

fbshipit-source-id: a19f72abb99e63d87eab13022538f325b2e20526

Zafar Takhirov [Tue, 23 Apr 2019 17:26:23 +0000 (10:26 -0700)]
Use `fbgemm` for quantize/dequantize ops (#19500)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19500

Changes the `quantize_linear` and `dequantize` to `fbgemm`-based implementation.

Reviewed By: jianyuh, jerryzh168

Differential Revision: D15014561

fbshipit-source-id: b651e69d336b5b08b4a75a4a4eddf46c040a4934

Jiyan Yang [Tue, 23 Apr 2019 17:05:57 +0000 (10:05 -0700)]
Specify to use Float16UniformFill if necessary in sparse lookup layer (#18499)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18499

If the init op is not fp16 compatible, it should throw.
However, in the special case where the original init op is UniformFill,
we replace it with Float16UniformFill

Reviewed By: kennyhorror

Differential Revision: D14627209

fbshipit-source-id: eb427772874a732ca8b3a25d06670d119ce8ac14

Chandler Zuo [Tue, 23 Apr 2019 16:47:37 +0000 (09:47 -0700)]
Fix the Division by Zero Bug of CosineAnnealingLR (#19180)

Summary:
Added the formula for the corner case. Updated unit tests.

Fixes #17913
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19180

Differential Revision: D14942023

Pulled By: ezyang

fbshipit-source-id: 167c109b97a7830d5b24541dc91e4788d531feec

Vadim Velicodnii [Tue, 23 Apr 2019 16:46:46 +0000 (09:46 -0700)]
Fix the documentation for BCEWithLogitsLoss (#17218, #16804) (#19212)

Summary:
I fixed a mistake in the explanation of the `pos_weight` argument in `BCEWithLogitsLoss` and added an example.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19212
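
For context, a small sketch of how `pos_weight` is used (illustrative, not the example added in the docs):

```python
import torch

# One weight per class; pos_weight = 3 makes each positive example count
# roughly as if it appeared 3 times, which helps with class imbalance.
criterion = torch.nn.BCEWithLogitsLoss(pos_weight=torch.full((4,), 3.0))

logits = torch.randn(8, 4)
targets = torch.empty(8, 4).random_(2)
loss = criterion(logits, targets)
```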

Differential Revision: D14923431

Pulled By: ezyang

fbshipit-source-id: 15696c67d56789102ac72afbe9bdd7b667eae5a0

crcrpar [Tue, 23 Apr 2019 16:45:42 +0000 (09:45 -0700)]
fix the docstring of `RandomSampler` (#19113)

Summary:
Fixes:
- the order of `Arguments` in the `RandomSampler` doc
- the meaningless check of `replacement`'s type.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19113

Differential Revision: D15013081

Pulled By: ezyang

fbshipit-source-id: 39e367f42841de6814b1214eb9df7b75f14f747e

mruberry [Tue, 23 Apr 2019 16:34:23 +0000 (09:34 -0700)]
Avoid (future) cusparse name collision (#19591)

Summary:
A future version of cusparse will define "cusparseGetErrorString." This PR simply updates PyTorch's name for this function to "getCusparseErrorString" to avoid the collision.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19591

Differential Revision: D15046871

Pulled By: ezyang

fbshipit-source-id: 821304f75fe84c68a26680a93809a18cfdbd540b

jhultman [Tue, 23 Apr 2019 16:23:06 +0000 (09:23 -0700)]
Add docs and test guaranteeing indices from torch.nonzero ordered C-style (#19539)

Summary:
See #17556.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19539
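
A quick illustration of the guaranteed ordering (sketch, not the test added by the PR):

```python
import torch

x = torch.tensor([[0, 1],
                  [1, 1]])

# Indices come back in C-style (row-major) order: all hits in row 0 first,
# then row 1, with the column index varying fastest.
print(torch.nonzero(x))
# tensor([[0, 1],
#         [1, 0],
#         [1, 1]])
```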

Differential Revision: D15030151

Pulled By: ezyang

fbshipit-source-id: d46ee56a66d89b0113f86e3f8693dc1680d0adb9

Tongzhou Wang [Tue, 23 Apr 2019 16:16:05 +0000 (09:16 -0700)]
Remove unnecessary printing from tests

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19606

Differential Revision: D15046583

Pulled By: ezyang

fbshipit-source-id: ea9bb691d23855e7eddbabe68bf112a726641ba4

Bado Lee [Tue, 23 Apr 2019 15:11:24 +0000 (08:11 -0700)]
Fix lr_scheduler's last_epoch value at the time of initialization (BC BREAKING!) (#7889)

Summary:
Hello everyone :) !!

I've found that lr_scheduler was initialized with last_epoch as -1.
Because of this, even after the first step (not the one in init, but an explicit step of the scheduler),
the learning rate of the scheduler's optimizer remains at the previous value.
```python
>>> import torch
>>> cc = torch.nn.Conv2d(10,10,3)
>>> myinitial_lr = 0.1
>>> myoptimizer = torch.optim.Adam(cc.parameters(), lr=myinitial_lr)
>>> mylrdecay = 0.5
>>> myscheduler = torch.optim.lr_scheduler.ExponentialLR(myoptimizer,mylrdecay)

>>> myscheduler.get_lr()
[0.2]    # this is because get_lr calculates lr as 0.1 * 0.5^-1
>>> myscheduler.optimizer.param_groups[0]["lr"]
0.1    # this is not consistent with get_lr value
>>> myscheduler.last_epoch
-1

>>> myscheduler.step()
>>> myscheduler.get_lr()
[0.1]    # this should be the value right after init, not after the first step
>>> myscheduler.optimizer.param_groups[0]["lr"]
0.1    # since this is after the first step, it should have been decayed to 0.05
>>> myscheduler.last_epoch
0

>>> myscheduler.step()
>>> myscheduler.last_epoch
1
>>> myscheduler.get_lr()
[0.05]
>>> myscheduler.optimizer.param_groups[0]["lr"]
0.05
>>> myscheduler.last_epoch
1
```

The first problem is that even after the init of lr_scheduler, you get inconsistent parameter values.

The second problem is that you are stuck with the same learning rate for the first 2 epochs if the step function of lr_scheduler is not called at the beginning of the epoch loop.
Of course, you can avoid this by calling lr_scheduler's step at the beginning,
but I don't think this is proper use since, in the case of the optimizer, step is called at the end of the iteration loop.

I've simply avoided all the above issues by setting last_epoch to 0 after the initialization.

This also makes sense when you init with some value of last_epoch that is not -1.
For example, if you want to init with last_epoch 10,
the lr should not be decayed one step further, which is what the previous code did,
since last_epoch got +1 in `base_lr * self.gamma ** self.last_epoch`.

Instead, it should be set to the exact value for step 10.

I hope this fix finds its way in with all your help :)
I'm really looking forward to and excited about becoming a contributor to PyTorch!
PyTorch rocks!!
Pull Request resolved: https://github.com/pytorch/pytorch/pull/7889

Differential Revision: D15012769

Pulled By: ezyang

fbshipit-source-id: 258fc3009ea7b7390a3cf2e8a3682eafb506b08b

SebFar [Tue, 23 Apr 2019 15:10:00 +0000 (08:10 -0700)]
Removes variable which is assigned but not used (#19194)

Summary:
n was set to self.in_channels but was not used within the scope of the function.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19194

Differential Revision: D14937764

Pulled By: ezyang

fbshipit-source-id: 55cb599109309503fee897f77d798fd454fcc02d

SsnL [Tue, 23 Apr 2019 14:51:31 +0000 (07:51 -0700)]
add torch.cuda.synchronize(device=None) (#19573)

Summary:
fixes https://github.com/pytorch/pytorch/issues/19509
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19573
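
Usage sketch for the new argument (illustrative):

```python
import torch

if torch.cuda.is_available():
    # device=None (the default) synchronizes the current device, as before.
    torch.cuda.synchronize()
    # A specific device can now be passed explicitly.
    torch.cuda.synchronize(torch.device('cuda:0'))
```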

Differential Revision: D15045730

Pulled By: ezyang

fbshipit-source-id: 732721b4b360fc4348ca7c87d4cd1386e7651bdd

Stefan Krah [Tue, 23 Apr 2019 14:34:12 +0000 (07:34 -0700)]
Port adaptive_max_pool2d() to ATen (#19409)

Summary:
This is the first part of  #18064.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19409

Differential Revision: D15037390

Pulled By: ezyang

fbshipit-source-id: 16a3feed2fd9cc66033696da224a7d5fb7208534

zhiqiang [Tue, 23 Apr 2019 14:20:32 +0000 (07:20 -0700)]
Fix math formatting of PairwiseDistance and CosineSimilarity docs and fix math formatting of CTC loss docs.

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19534

Differential Revision: D15034011

Pulled By: ezyang

fbshipit-source-id: 60b81c970c919508a57c86fb23edc9f64973117c

Michael Suo [Tue, 23 Apr 2019 06:10:18 +0000 (23:10 -0700)]
Revert D15039713: [pytorch][PR] add torch.tensor requires grad

Differential Revision:
D15039713

Original commit changeset: 47f1931b6fc4

fbshipit-source-id: fd91ce8ddd6d2f4e0016054dcdc2541dacc0e191

James Reed [Tue, 23 Apr 2019 03:49:21 +0000 (20:49 -0700)]
Bugfix for fusion device check (#19594)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19594

I missed a callsite

Reviewed By: wanchaol

Differential Revision: D15041457

fbshipit-source-id: eef76ad51bee06a56d31b4ab64f19250fe2ad8f0

Elias Ellison [Tue, 23 Apr 2019 00:56:51 +0000 (17:56 -0700)]
add torch.tensor requires grad (#19445)

Summary:
Add support for setting requires_grad = True on torch.tensor within TorchScript.

Within constant propagation, we can't insert any constants that require grad.

Also added shape analysis and requires_grad analysis for torch.tensor.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19445

Differential Revision: D15039713

Pulled By: eellison

fbshipit-source-id: 47f1931b6fc4a1137c13d80110cc404465bfdf06

Vitaly Fedyunin [Tue, 23 Apr 2019 00:48:43 +0000 (17:48 -0700)]
Add onnx support for _unique2 operator

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19582

Reviewed By: ezyang, jamesr66a

Differential Revision: D15037375

fbshipit-source-id: 6060476925bf02fa07f852054e06d2107f046e38

Lu Fang [Tue, 23 Apr 2019 00:24:57 +0000 (17:24 -0700)]
Automatic update of fbcode/onnx to 0e8d2bc5e51455c70ef790b9f65aa632ed9bc8a7 (#19568)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19568

Previous import was 83dd62659fc07d5b7fa93b5d1c1879f93509c7db

Included changes:
- **[0e8d2bc5](https://github.com/onnx/onnx/commit/0e8d2bc5)**: [Minor need to be in 1.5]Fix an issue in NMS test data which introduce wrong shape. (#1953) <Hector Li>
- **[9346dd5d](https://github.com/onnx/onnx/commit/9346dd5d)**: adding modulus operator (#1874) <Jeff Saremi>
- **[414dbc73](https://github.com/onnx/onnx/commit/414dbc73)**: Fix shape inference for slice (#1950) <Hariharan Seshadri>
- **[6fb0775d](https://github.com/onnx/onnx/commit/6fb0775d)**: Fix shape inference for ConstantOfShape op (#1951) <Ashwini Khade>

Reviewed By: bddppq, zrphercule, benoitsteiner

Differential Revision: D15033070

fbshipit-source-id: f7eb90b142cbdc9bf1600cfd33e5a8df709045fb

James Reed [Mon, 22 Apr 2019 23:54:19 +0000 (16:54 -0700)]
Don't create FusionGroups for known-CPU producer values (#19342)

Summary:
I believe the existing check in FuseGraph was only `false` if PyTorch was built with NO_CUDA=1. Otherwise, we would create fusion groups even if we're on a CPU-only machine running CPU code. This is confusing. Instead, I've made it so that the decision to fuse or not depends on whether the producer Value is a known CPU tensor. If it is, we skip fusion.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19342

Differential Revision: D15038351

Pulled By: jamesr66a

fbshipit-source-id: fce9d83929309a7bf14346833f84b996f3e7f6db

Sebastian Messmer [Mon, 22 Apr 2019 23:16:30 +0000 (16:16 -0700)]
Explicitly define supported types (#19516)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19516

Explicitly define types that are supported in kernel inputs and outputs.
Also, this allows us to show much nicer error messages if a user writes kernels with wrong argument types.

Reviewed By: ezyang

Differential Revision: D15020306

fbshipit-source-id: 55ebec81e075e874777acd59aa29a5578fc19ef7

Mikhail Zolotukhin [Mon, 22 Apr 2019 23:02:40 +0000 (16:02 -0700)]
IRParser: optionally create name->value map of the parsed IR. (#19551)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19551
ghimport-source-id: e666e3c00786a3b1c747f2dd6e85a48a63bdd69d

Differential Revision: D15028056

Pulled By: ZolotukhinM

fbshipit-source-id: 37e08d6df1d43513748ecfdd8549738eac7ec24e

Nikolay Korovaiko [Mon, 22 Apr 2019 22:03:48 +0000 (15:03 -0700)]
Profiling : Adding Profile Op to provide storage for profiling lambdas

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19365

Differential Revision: D14998968

Pulled By: Krovatkin

fbshipit-source-id: a7f7d1529cbe4e8b30638c6eb8e2ff68f6e114c3

Xiang Gao [Mon, 22 Apr 2019 19:32:14 +0000 (12:32 -0700)]
Step 5: remove _unique_dim in favor of unique_dim (#18654)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18654
ghimport-source-id: 63c84cedc3335719fca4a085fa19bdc57d2bc88a

Differential Revision: D15000635

Pulled By: VitalyFedyunin

fbshipit-source-id: 9e8594622a867a79d8e2b6be96579816aa22ae2d

Yinghai Lu [Mon, 22 Apr 2019 19:23:05 +0000 (12:23 -0700)]
Add back option to not adjust output batch size (#19442)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19442

For cases like CV, some ops like transpose and tile will mangle the batch size, so we don't know how to adjust the output batch size. In this case, the current solution is to just fix the input batch size statically and not adjust the output batch size.

Reviewed By: zrphercule

Differential Revision: D15007237

fbshipit-source-id: a21b943a52ee5462d9d7804dfae44360f579f8cf

Michael Antonov [Mon, 22 Apr 2019 19:04:07 +0000 (12:04 -0700)]
Add debug logic to c2_ref_test and its helpers (#19359)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19359

Even with file IO exception handling, some of the sandcastle c2_ref_tests are still failing in the length-check assert, as can be seen here:
https://our.intern.facebook.com/intern/test/844424932589974?ref_report_id=0

This is an attempt to add printing logic to debug what's going on.

Reviewed By: dzhulgakov

Differential Revision: D14966274

fbshipit-source-id: adce6d4780d664c5ef59f9341b6133b0d09324cb

Dehua Cheng [Mon, 22 Apr 2019 18:52:12 +0000 (11:52 -0700)]
fix variable shadowing issues (#19567)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19567

fix variable shadowing

Reviewed By: bddppq, wx1988

Differential Revision: D15032114

fbshipit-source-id: 895ea21f22b87db8c7c8684f54fa186d22f24d10

Elias Ellison [Mon, 22 Apr 2019 17:52:28 +0000 (10:52 -0700)]
Add manual_seed in script (#19510)

Summary:
Add manual_seed to torch script.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19510
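
A minimal sketch of what this enables (illustrative; the function is made up):

```python
import torch

@torch.jit.script
def seeded_noise() -> torch.Tensor:
    # torch.manual_seed is now callable from TorchScript
    torch.manual_seed(0)
    return torch.rand(3)

print(seeded_noise())
```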

Reviewed By: suo, driazati

Differential Revision: D15018823

Pulled By: eellison

fbshipit-source-id: d7734a8ad05ba254c0d88abf3fb58c4ce6a4e53b

Lu Fang [Mon, 22 Apr 2019 17:37:15 +0000 (10:37 -0700)]
Automatic update of fbcode/onnx to 83dd62659fc07d5b7fa93b5d1c1879f93509c7db (#19454)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19454

Previous import was ad7313470a9119d7e1afda7edf1d654497ee80ab

Included changes:
- **[83dd6265](https://github.com/onnx/onnx/commit/83dd6265)**: Add NonMaxSuppression operator (#1703) <Hector Li>
- **[31ca5d6f](https://github.com/onnx/onnx/commit/31ca5d6f)**: add node tests for quantized ops (#1944) <Ashwini Khade>
- **[e6076c1d](https://github.com/onnx/onnx/commit/e6076c1d)**: Fix test stat coverage script (#1948) <Raymond Yang>
- **[ad036405](https://github.com/onnx/onnx/commit/ad036405)**: Add IsInf to detect infinity values (#1884) <Wei-Sheng Chin>

Reviewed By: benoitsteiner

Differential Revision: D15010015

fbshipit-source-id: 4b29de21de60f8e6a2db75309809a4e619c92532

Gregory Chanan [Mon, 22 Apr 2019 17:19:17 +0000 (10:19 -0700)]
Get rid of unnecessary matches_jit_signature: True specifications. (#19549)

Summary:
Unstacked version of https://github.com/pytorch/pytorch/pull/19431.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19549

Reviewed By: ezyang

Differential Revision: D15027965

Pulled By: gchanan

fbshipit-source-id: a4456326a999d77d6baeb0edbb1bb5db5208a8f8

vishwakftw [Mon, 22 Apr 2019 15:14:49 +0000 (08:14 -0700)]
Rename potri to cholesky_inverse (#19498)

Summary:
Changelog:
- Rename `potri` to `cholesky_inverse` to remain consistent with names of `cholesky` methods (`cholesky`, `cholesky_solve`)
- Fix all callsites
- Rename all tests
- Create a tentative alias for `cholesky_inverse` under the name `potri` and add a deprecation warning to not promote usage
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19498
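
For illustration, the renamed op in use (a sketch assuming the default lower-triangular factor):

```python
import torch

a = torch.randn(4, 4)
a = a @ a.t() + torch.eye(4)            # symmetric positive definite

u = torch.cholesky(a)                   # lower-triangular factor by default
inv = torch.cholesky_inverse(u)         # new name; torch.potri remains as a deprecated alias
print(torch.allclose(inv, a.inverse(), atol=1e-4))  # True
```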

Differential Revision: D15029901

Pulled By: ezyang

fbshipit-source-id: 2074286dc93d8744cdc9a45d54644fe57df3a57a

Jiyan Yang [Mon, 22 Apr 2019 06:40:24 +0000 (23:40 -0700)]
Add assertion to make sure init op is always fp16 compatible in fp16 training

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18498

Reviewed By: kennyhorror

Differential Revision: D14626755

fbshipit-source-id: d8a0b3c02920ab3835911a21bf05e8956853fcd7

Roy Li [Mon, 22 Apr 2019 04:12:21 +0000 (21:12 -0700)]
Generate only one Type class per backend (#19295)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19295
ghimport-source-id: 9345110f91f044a449804ddd5116cc9179444a00

Differential Revision: D14948581

Pulled By: li-roy

fbshipit-source-id: a317b03d58d621e8df162918038f7543bfb13ba2

Roy Li [Mon, 22 Apr 2019 04:12:21 +0000 (21:12 -0700)]
Make complex its own backend (#19275)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19275
ghimport-source-id: 73fd40b02152aed6f24225a88d7ffde7f700899e

Differential Revision: D14948582

Pulled By: li-roy

fbshipit-source-id: a1be6e57057defc74a007c5351c5edb2b9dcaf30

Roy Li [Mon, 22 Apr 2019 04:12:21 +0000 (21:12 -0700)]
Add ScalarType argument to Type::options() (#19270)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19270
ghimport-source-id: a5ade6131f3260066c5750ea1fa9ed5c998bb791

Differential Revision: D14938707

Pulled By: li-roy

fbshipit-source-id: 018fb3f01706531a06515d6d861e5683a455a705

Roy Li [Mon, 22 Apr 2019 04:12:21 +0000 (21:12 -0700)]
Generate cases for all ScalarTypes in Type functions that call to TH (#19230)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19230
ghimport-source-id: 81f360f2ebd137b8e7d8e885b85246cc219761aa

Differential Revision: D14927991

Pulled By: li-roy

fbshipit-source-id: 1b6a57918ecdc9c87858d3e50578edef0b6e7ad5

Mikhail Zolotukhin [Mon, 22 Apr 2019 03:28:15 +0000 (20:28 -0700)]
Fix clang-format. (#19550)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19550
ghimport-source-id: 980d96762426d3e97c26839edbaf107a3fc18b2f

Differential Revision: D15028055

Pulled By: ZolotukhinM

fbshipit-source-id: a50a0aaa74d0f1b9249ad79ab80e4b7747c3bffc

Shen Li [Mon, 22 Apr 2019 02:39:54 +0000 (19:39 -0700)]
Fix some typos in jit README

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19548

Differential Revision: D15028275

Pulled By: mrshenli

fbshipit-source-id: 84ff635be3b4681962451b4c301271683174d7a8

Gregory Chanan [Sun, 21 Apr 2019 22:52:37 +0000 (15:52 -0700)]
Match JIT signature with triu_indices / tril_indices. (#19484)

Summary:
This just plugs into the existing mechanism to do a direct translation to TensorOptions in the backend, so no codegen changes.

After this lands, all native_functions will match the JIT signature.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19484

Differential Revision: D15013051

Pulled By: gchanan

fbshipit-source-id: 6818f868d2f765ca3e56e7e6f75fe4f68492466c

Gregory Chanan [Sun, 21 Apr 2019 21:11:14 +0000 (14:11 -0700)]
Make one_hot non-differentiable. (#19524)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19524
ghimport-source-id: ceda3ad43471242ebbd272a21de11731c7d8bef6

Differential Revision: D15021417

Pulled By: gchanan

fbshipit-source-id: 65d1f17a32f81f47dba5e58e343d0b7b828e1d51

Gregory Chanan [Sun, 21 Apr 2019 21:11:14 +0000 (14:11 -0700)]
Remove 'BoolTensor', 'IndexTensor' from frontend specifications. (#19523)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19523
ghimport-source-id: 618a15c2d1d9af9f87b46e32f10ff77111c2e3b7

Differential Revision: D15021420

Pulled By: gchanan

fbshipit-source-id: 048af8da3128de10bdee5827b6fbc169c3ad25a8

Gregory Chanan [Sun, 21 Apr 2019 21:11:14 +0000 (14:11 -0700)]
Have _embedding_bag_dense_backward match JIT signature. (#19522)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19522
ghimport-source-id: ad645d87396de645a1aff5fd9d9939cb79cf6558

Differential Revision: D15021419

Pulled By: gchanan

fbshipit-source-id: bd7017edadb4ec9d43cefddf0aee8c52c5cca6a4

Gregory Chanan [Sun, 21 Apr 2019 21:11:14 +0000 (14:11 -0700)]
Have embedding_dense_backward match JIT signature. (#19521)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19521
ghimport-source-id: 817d3defb5f4ee98bae1f0488f99cb0e9a5226a2

Differential Revision: D15021376

Pulled By: gchanan

fbshipit-source-id: 2e29f1d3913f94fab3347dc48676303510d7da46

Gu, Jinghui [Sun, 21 Apr 2019 20:59:26 +0000 (13:59 -0700)]
Update mkldnn-bridge to fix crash issue in DNNLOWP dequantize op (#19159)

Summary:
Remove a useless format checker in mkldnn-bridge to fix the crash issue in the DNNLOWP dequantize op.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19159

Differential Revision: D15027670

Pulled By: yinghai

fbshipit-source-id: ac97d6ff94de013105108b9596b1bd7621c5aa75

Gregory Chanan [Sun, 21 Apr 2019 20:43:02 +0000 (13:43 -0700)]
Hook up non_differentiability in derivatives.yaml when no autograd function is generated. (#19520)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19520
ghimport-source-id: a1272aa0b23692fb189974c4daba7b2e4e0dad50

Differential Revision: D15021380

Pulled By: gchanan

fbshipit-source-id: ec83efd4bb6d17714c060f13a0527a33a10452db

Gregory Chanan [Sun, 21 Apr 2019 18:03:09 +0000 (11:03 -0700)]
Move non_differentiable_arg_names from autograd functions to differentiability_info. (#19519)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19519
ghimport-source-id: 74e603688b2e4ed33f6c46c7da9d009336140e74

Differential Revision: D15021378

Pulled By: gchanan

fbshipit-source-id: e366a914c67a90ba0552b67d0bf5b347edbaf189

Tongzhou Wang [Sun, 21 Apr 2019 04:36:54 +0000 (21:36 -0700)]
Move cuFFT plan cache note outside Best Practices (#19538)

Summary:
I mistakenly put it there.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19538

Differential Revision: D15026500

Pulled By: soumith

fbshipit-source-id: 0c13499571fdfd789c3bd1c4b58abd870725d422

Michael Suo [Sat, 20 Apr 2019 15:45:22 +0000 (08:45 -0700)]
Revert D14689639: [pytorch] Allow passing lists as trace inputs.

Differential Revision:
D14689639

Original commit changeset: 6dcec8a64319

fbshipit-source-id: 03a5e7c80e7f2420e33b056b5844a78d7fd41141

Gu, Jinghui [Sat, 20 Apr 2019 09:09:15 +0000 (02:09 -0700)]
Improve optimizations for DNNLOWP support on MKL-DNN (#18843)

Summary:
In this PR, the fusion algorithms are improved to support DNNLOWP.
1. Enabled conv fusions for DNNLOWP
2. Fused the order-switch op into the following quantize op
3. Improved conv+sum fusion to parse a larger scope/window
4. Reorganized the fusion code to fix a random crash caused by changing the graph
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18843

Differential Revision: D15021030

Pulled By: yinghai

fbshipit-source-id: 88d2199d9fc69f392de9bfbe1f291e0ebf78ab08

Nishant Pandit [Sat, 20 Apr 2019 04:39:00 +0000 (21:39 -0700)]
Make Observer class as template Quant class for QuantConfig (#19418)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19418

This change makes the Observer class a template which always
takes an observer function as an argument. The second test case becomes redundant, hence it is removed.

Reviewed By: jerryzh168

Differential Revision: D15000594

fbshipit-source-id: 9555fe98a5f2054b8fd01e64e9ac2db72c043bfa

Sam Leeman-Munk [Sat, 20 Apr 2019 04:35:35 +0000 (21:35 -0700)]
Support compilation on gcc-7.4.0 (#19470)

Summary:
There are two corrections in this pull request.
The first is specific to gcc-7.4.0.
When compiled with -std=c++14, gcc-7.4.0 has __cplusplus = 201402L.
This does not meet the check set in Deprecated.h, which asks for >201402L.
The compiler falls through to the __GNUC__ check, which passes and sets C10_DEPRECATED_MESSAGE to a value that C++14 does not appear to support or even recognize, leading to a compile-time error.
My recommended solution, which worked for my case, was to change the = into a >=.

The second correction comes in response to this error:
caffe2/operators/crash_op.cc: In member function ‘virtual bool caffe2::CrashOp::RunOnDevice()’:
caffe2/operators/crash_op.cc:14:11: error: ‘SIGABRT’ was not declared in this scope

I am merely committing to the repository the solution suggested here (which worked for me)
https://discuss.pytorch.org/t/building-pytorch-from-source-in-conda-fails-in-pytorch-caffe2-operators-crash-op-cc/42859
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19470

Differential Revision: D15019529

Pulled By: ailzhang

fbshipit-source-id: 9ce9d713c860ee5fd4266e5c2a7f336a97d7a90d

James Reed [Sat, 20 Apr 2019 02:13:10 +0000 (19:13 -0700)]
Improve embedding_bag add kernel (#19329)

Summary:
This was actually getting pretty poor throughput with respect to memory bandwidth. I used this test to measure the memory bandwidth specifically for the AXPY call: https://gist.github.com/jamesr66a/b27ff9ecbe036eed5ec310c0a3cc53c5

And I got ~8 GB/s before this change, but ~14 GB/s after this change.

This seems to speed up the operator overall by around 1.3x (benchmark: https://gist.github.com/jamesr66a/c533817c334d0be432720ef5e54a4166):

== Before ==

time_per_iter 0.0001298875093460083
GB/s 3.082544287868467

== After ==

time_per_iter 0.00010104801654815674
GB/s 3.9623142905451076

The large difference between the local BW increase and the full-op BW increase likely indicates significant time is being spent elsewhere in the op, so I will investigate that.

EDIT: I updated this PR to include a call into caffe2/perfkernels. This is the progression:

before

time_per_iter 8.983819484710693e-05
GB/s 4.456723564864611

After no axpy
time_per_iter 7.19951868057251e-05
GB/s 5.56126065872172

After perfkernels
time_per_iter 5.6699180603027346e-05
GB/s 7.061548257694262

After perfkernels no grad
time_per_iter 4.388842582702637e-05
GB/s 9.122769670026413
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19329

Reviewed By: dzhulgakov

Differential Revision: D14969630

Pulled By: jamesr66a

fbshipit-source-id: 42d1015772c87bedd119e33c0aa2c8105160a738

Pieter Noordhuis [Sat, 20 Apr 2019 00:20:37 +0000 (17:20 -0700)]
Make finding unused model parameters optional (#19515)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19515

This is still done by default, but can now be disabled by specifying
`find_unused_parameters=False`. There are use cases where finding
unused parameters results in erroneous behavior, because a subset of
model parameters is used *outside* the `forward` function. One can
argue that doing this is not a good idea, but we should not break
existing use cases without an escape hatch. This configuration
parameter is that escape hatch.
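
A sketch of the new escape hatch (illustrative; assumes the process group has already been initialized, e.g. via `torch.distributed.init_process_group`):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 10).cuda()

# Every model parameter must now be used inside forward(); the per-iteration
# search for unused parameters is skipped.
ddp = nn.parallel.DistributedDataParallel(
    model,
    device_ids=[0],
    find_unused_parameters=False,
)
```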

Reviewed By: bddppq

Differential Revision: D15016381

fbshipit-source-id: f2f86b60771b3801ab52776e62b5fd6748ddeed0

Sebastian Messmer [Fri, 19 Apr 2019 23:59:50 +0000 (16:59 -0700)]
Disallow std::vector arguments (#19511)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19511

In the c10 operator registration API, disallow std::vector arguments and show a nice error message
pointing users towards using ArrayRef instead.

Reviewed By: ezyang

Differential Revision: D15017423

fbshipit-source-id: 157ecc1298bbc598d2e310a16041edf195aaeff5

Sebastian Messmer [Fri, 19 Apr 2019 23:59:50 +0000 (16:59 -0700)]
Drop instead of pop (#19503)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19503

After reading the arguments from the stack, the c10 kernel wrapper accidentally popped them again, causing a vector to be allocated.
Instead, it should just drop them because they have already been read.

Reviewed By: ezyang

Differential Revision: D15016023

fbshipit-source-id: b694a2929f97fa77cebe247ec2e49820a3c818d5

Mikhail Zolotukhin [Fri, 19 Apr 2019 23:29:02 +0000 (16:29 -0700)]
Add minimalistic implementation of subgraph matcher. (#19322)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19322
ghimport-source-id: 93c713f829d1b2a9aa5d104cb1f30148dd37c967

Differential Revision: D14962182

Pulled By: ZolotukhinM

fbshipit-source-id: 3989fba06502011bed9c24f12648d0baa2a4480c

Mingzhe Li [Fri, 19 Apr 2019 23:22:13 +0000 (16:22 -0700)]
Fix op benchmarks error in OSS environment (#19518)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19518

The previous design required running the op benchmarks from the PyTorch root directory, which could lead to a `module not found` error in the OSS environment. This diff fixes that issue by launching the benchmarks from the `benchmarks` folder.

Reviewed By: ilia-cher

Differential Revision: D15020787

fbshipit-source-id: eb09814a33432a66cc857702bc86538cd17bea3b

Mingzhe Li [Fri, 19 Apr 2019 23:22:12 +0000 (16:22 -0700)]
fix AI-PEP path error (#19514)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19514

as title

Reviewed By: hl475

Differential Revision: D15018499

fbshipit-source-id: 9ce38e3a577432e0575a6743f5dcd2e907d3ab9d

eellison [Fri, 19 Apr 2019 23:04:01 +0000 (16:04 -0700)]
First step at container aliasing (#18710)

Summary:
First step at allowing container types within alias analysis.

Since the current implementation hides the concept of Wildcards within alias analysis and does not expose it to the memory DAG, we cannot represent whether a container type holds a wildcard. As a result, we only handle TupleConstruct, where we can directly inspect whether any input values are wildcards, and we don't handle nested containers.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18710

Differential Revision: D15017068

Pulled By: eellison

fbshipit-source-id: 3ee76a5482cef1cc4a10f034593ca21019161c18

Xiaomeng Yang [Fri, 19 Apr 2019 22:14:50 +0000 (15:14 -0700)]
Fix relu bug for empty tensor (#19451)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19451

Fix relu bug for empty tensor

Reviewed By: xianjiec

Differential Revision: D15009811

fbshipit-source-id: b75e567c3bec08d7d12b950d8f1380c50c138704

Eric Faust [Fri, 19 Apr 2019 20:28:42 +0000 (13:28 -0700)]
Allow passing lists as trace inputs.

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18636

Differential Revision: D14689639

fbshipit-source-id: 6dcec8a64319ae3c4da9a93f574a13ce8ec223a5

Michael Suo [Fri, 19 Apr 2019 19:48:39 +0000 (12:48 -0700)]
Allow for segmented printing in PythonPrint (#19238)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19238
ghimport-source-id: 469d33cd187fa68840b201d625800a0f4fead547

Differential Revision: D14928291

Reviewed By: zdevito

Pulled By: suo

fbshipit-source-id: 257fce3dd1601ba192092d3fc318374e3752907e

Michael Suo [Fri, 19 Apr 2019 19:48:39 +0000 (12:48 -0700)]
add resolveType to Resolver (#19237)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19237
ghimport-source-id: 70777ec37155be37efef1b743d564752e4dff9de

Differential Revision: D14928289

Reviewed By: zdevito

Pulled By: suo

fbshipit-source-id: 46827da9ace16730669fc654bf781d83172d18b1

Michael Suo [Fri, 19 Apr 2019 19:48:39 +0000 (12:48 -0700)]
Turn resolver into a class (#19236)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19236
ghimport-source-id: d36705ea5ecff085d0d84ea57bb96d18d7c260dd

Differential Revision: D14928292

Reviewed By: zdevito

Pulled By: suo

fbshipit-source-id: cd038100ac423fa1c19d0547b9e5487a633a2258

davidriazati [Fri, 19 Apr 2019 19:38:23 +0000 (12:38 -0700)]
Fix bad annotation in docs (#19501)

Summary:
Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#19501 [jit] Fix bad annotation in docs**

Pull Request resolved: https://github.com/pytorch/pytorch/pull/19501

Pulled By: driazati

Differential Revision: D15016062

fbshipit-source-id: 3dcd0481eb48b84e98ffe8c5df2cbc9c2abf99f9

Yinghai Lu [Fri, 19 Apr 2019 19:15:59 +0000 (12:15 -0700)]
Fix out-of-topological-order issue in Nomnigraph (#19458)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19458

The algorithm in https://fburl.com/ggh9iyvc fails to really ensure topological ordering of nodes. The fix is ugly but effective. I think we need a real topological sort to fix this issue more nicely. Mikhail Zolotukhin, Bram Wasti.

Differential Revision: D15011893

fbshipit-source-id: 130c3aa442f5d578adfb14fbe5f16aa722434942

Roy Li [Fri, 19 Apr 2019 18:55:46 +0000 (11:55 -0700)]
Remove uses of TypeID (#19452)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19452
ghimport-source-id: 816ae7fe1a18d76f064d5796dec44dca6a138a21

Differential Revision: D15009920

Pulled By: li-roy

fbshipit-source-id: 722f05a927528148555561da62839f84dba645c6

Jerry Zhang [Fri, 19 Apr 2019 18:53:46 +0000 (11:53 -0700)]
Expose QScheme in frontend (#19381)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19381

Expose QScheme enum in frontend so that people can use it in
quantization configs in modules.

Differential Revision: D14922992

fbshipit-source-id: ab07b8a7ec42c1c1f5fe84a4a0c805adbcad408d

Gregory Chanan [Fri, 19 Apr 2019 18:23:53 +0000 (11:23 -0700)]
Revert D15003385: Have embedding_dense_backward match JIT signature.

Differential Revision:
D15003385

Original commit changeset: 53cbe18aa454

fbshipit-source-id: be904ee2212aa9e402715c436a84d95f6cde326f

Gregory Chanan [Fri, 19 Apr 2019 18:23:53 +0000 (11:23 -0700)]
Revert D15003379: Have _embedding_bag_dense_backward match JIT signature.

Differential Revision:
D15003379

Original commit changeset: f8e82800171f

fbshipit-source-id: 55f83557998d166aeb41d00d7a590acdc76fcf22

Gregory Chanan [Fri, 19 Apr 2019 18:23:52 +0000 (11:23 -0700)]
Revert D15003387: Remove 'BoolTensor', 'IndexTensor' from frontend specifications.

Differential Revision:
D15003387

Original commit changeset: e518e8ce3228

fbshipit-source-id: af5b107239446ea8d6f229a427d5b157fcafd224

Gregory Chanan [Fri, 19 Apr 2019 18:23:52 +0000 (11:23 -0700)]
Revert D15003382: Make one_hot non-differentiable.

Differential Revision:
D15003382

Original commit changeset: e9244c7a5f0a

fbshipit-source-id: 84789cf4c46c77cce655e70c2a8ff425f32f48bd

Jerry Zhang [Fri, 19 Apr 2019 18:06:02 +0000 (11:06 -0700)]
Make empty_affine_quantized private (#19446)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19446

change empty_affine_quantized to _empty_affine_quantized

Reviewed By: dzhulgakov

Differential Revision: D15008757

fbshipit-source-id: c7699ac0c208a8f17d88e95193970c75ba7219d3

Gregory Chanan [Fri, 19 Apr 2019 17:57:46 +0000 (10:57 -0700)]
Make one_hot non-differentiable. (#19430)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19430
ghimport-source-id: 6787473873fdc21400138a4322e17fee8db62607

Differential Revision: D15003382

Pulled By: gchanan

fbshipit-source-id: e9244c7a5f0ad7cd2f79635944a8b37f910231c9

Gregory Chanan [Fri, 19 Apr 2019 17:57:03 +0000 (10:57 -0700)]
Remove 'BoolTensor', 'IndexTensor' from frontend specifications. (#19429)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19429
ghimport-source-id: 6116682b84210a34babb8b87a92e7050433e5d59

Differential Revision: D15003387

Pulled By: gchanan

fbshipit-source-id: e518e8ce322810e06175bb4e6672d4ea1eb18efd

Gregory Chanan [Fri, 19 Apr 2019 17:56:00 +0000 (10:56 -0700)]
Have embedding_dense_backward match JIT signature. (#19427)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19427
ghimport-source-id: 93438cd495129a1e41118c62e6339909783035fd

Differential Revision: D15003385

Pulled By: gchanan

fbshipit-source-id: 53cbe18aa4541a2501f496abfee526e40093c0ff

Gregory Chanan [Fri, 19 Apr 2019 17:53:33 +0000 (10:53 -0700)]
Have _embedding_bag_dense_backward match JIT signature. (#19428)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19428
ghimport-source-id: 037efa3df95efc1fbff631826351d1698a3c49ec

Differential Revision: D15003379

Pulled By: gchanan

fbshipit-source-id: f8e82800171f632e28535e416283d858156068ec

Gregory Chanan [Fri, 19 Apr 2019 17:53:13 +0000 (10:53 -0700)]
Stop generating autograd functions for derivatives.yaml entries that only specify output differentiability. (#19424)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19424
ghimport-source-id: e9d1b86742607f5cbe39fb278fa7f378739cd6ef

Differential Revision: D15003380

Pulled By: gchanan

fbshipit-source-id: 8efb94fbc0b843863021bf25deab57c492086237

David Riazati [Fri, 19 Apr 2019 17:20:43 +0000 (10:20 -0700)]
Fix ord() when dealing with utf8 chars (#19423)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19423
ghimport-source-id: e7449489fbc86ec1116f94027b3c1561942413ee

Reviewed By: eellison

Differential Revision: D15002847

Pulled By: driazati

fbshipit-source-id: 4560cebcfca695447423d48d65ed364e7dbdbedb

barrh [Fri, 19 Apr 2019 17:12:46 +0000 (10:12 -0700)]
Fix copied optimizer (#19308)

Summary:
Add the defaults field to the copied object.
Prior to this patch, optimizer.__getattr__ excluded the defaults
attribute of the source optimizer object, which is required by some LR schedulers (e.g. CyclicLR with momentum).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19308
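
A rough sketch of the failure mode this addresses (illustrative; assumes the copy goes through the optimizer's pickling hooks and that CyclicLR is available):

```python
import copy

import torch

model = torch.nn.Linear(2, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

# Before the fix, the copied optimizer was missing `defaults`, so schedulers
# that consult it (e.g. CyclicLR cycling momentum) failed on the copy.
opt_copy = copy.deepcopy(opt)
sched = torch.optim.lr_scheduler.CyclicLR(opt_copy, base_lr=0.01, max_lr=0.1)
```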

Differential Revision: D15012801

Pulled By: soumith

fbshipit-source-id: 95801b269f6f9d78d531d4fed95c973b280cc96f