platform/upstream/pytorch.git
5 years agoFix deprecated: type() -> scalar_type()
Gao, Xiang [Mon, 25 Mar 2019 02:24:08 +0000 (19:24 -0700)]
Fix deprecated: type() -> scalar_type()

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18394

Differential Revision: D14593890

Pulled By: soumith

fbshipit-source-id: 92b9a8c22008341c0cc3b7a721bef1973c528daf

5 years agoAdded tensor size warning to F.mse_loss() (#18349)
mc-robinson [Mon, 25 Mar 2019 02:17:00 +0000 (19:17 -0700)]
Added tensor size warning to F.mse_loss() (#18349)

Summary:
To address the issue of broadcasting giving the wrong result in `nn.MSELoss()`, as mentioned in https://github.com/pytorch/pytorch/issues/16045. In particular, the issue often arises when computing the loss between tensors with shapes `(n, 1)` and `(n,)`.
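A quick illustration of the pitfall (a minimal sketch; tensor contents are arbitrary):
```python
import torch
import torch.nn.functional as F

pred = torch.randn(4, 1)
target = torch.randn(4)

# (4, 1) and (4,) broadcast to (4, 4): the loss silently averages 16
# pairwise errors instead of the 4 elementwise ones the user intended.
loss_broadcast = F.mse_loss(pred, target)            # now emits a size-mismatch warning
loss_intended = F.mse_loss(pred.squeeze(1), target)  # elementwise loss
```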
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18349

Differential Revision: D14594176

Pulled By: soumith

fbshipit-source-id: f23ae68a4bf42f3554ad7678a314ba2c7532a6db

5 years agoFix For Requires Grad Infinite Loop (#18361)
Elias Ellison [Sun, 24 Mar 2019 21:28:22 +0000 (14:28 -0700)]
Fix For Requires Grad Infinite Loop (#18361)

Summary:
Previously, we would continue to re-run the requires-grad analysis on a loop body whenever the outputs and inputs disagreed. This adds a check so that we don't continue running if the results haven't changed since the last run.

Fix for https://github.com/pytorch/pytorch/issues/18320
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18361

Differential Revision: D14584332

Pulled By: eellison

fbshipit-source-id: 696b225f80a2036318540946428b525985a9e735

5 years agoupdate magma instructions (#18410)
Soumith Chintala [Sun, 24 Mar 2019 20:11:20 +0000 (13:11 -0700)]
update magma instructions (#18410)

Summary:
fixes https://github.com/pytorch/pytorch/issues/18389

cc: stas00
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18410

Differential Revision: D14594198

Pulled By: soumith

fbshipit-source-id: fb46ef77a36c90ad95e47f7066f5d32aa1f1370f

5 years agoRemoved some dead code (#18201)
Iurii Zdebskyi [Sun, 24 Mar 2019 15:17:34 +0000 (08:17 -0700)]
Removed some dead code (#18201)

Summary:
Removed some dead code.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18201

Differential Revision: D14555251

Pulled By: izdeby

fbshipit-source-id: f49640133ef4ae1b0306f7cec6655f23869cc6e7

5 years agoSpecialize optional tensor inputs to graphs in the JIT (#18360)
Thomas Viehmann [Sun, 24 Mar 2019 05:54:36 +0000 (22:54 -0700)]
Specialize optional tensor inputs to graphs in the JIT (#18360)

Summary:
This specializes optional tensor inputs to either a DimensionedTensorType or, when None is passed,
UndefinedTensor (aka AutogradZeroTensorType).
This works because we already have different specs and thus separate plans for the two cases.
It enhances the shape analysis, because unwrapped optional tensors will now have a DimensionedTensorType with the appropriate shape, requires_grad, etc.
Also, when combined with "if-pruning" (which I understand #18259 works towards), we actually get much nicer concrete graphs, too.
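For illustration, a hedged sketch of the kind of function this affects (written with modern annotation syntax; at the time, type comments were the common style):
```python
import torch
from typing import Optional

@torch.jit.script
def scale(x: torch.Tensor, mask: Optional[torch.Tensor]) -> torch.Tensor:
    if mask is not None:
        # mask is unwrapped here; with this change it carries a
        # DimensionedTensorType instead of an opaque Optional[Tensor]
        return x * mask
    return x

scale(torch.randn(3), torch.randn(3))  # specialized plan for a defined tensor
scale(torch.randn(3), None)            # separate plan where mask is undefined
```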
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18360

Differential Revision: D14590577

Pulled By: soumith

fbshipit-source-id: cac204a506d1d38b15703cbcc67a6b75fd4979f4

5 years agoMove pyobj_ to TensorImpl (#18225)
Will Feng [Sat, 23 Mar 2019 19:47:15 +0000 (12:47 -0700)]
Move pyobj_ to TensorImpl (#18225)

Summary:
Currently, `THPVariable_Wrap(…)` and `THPVariable_NewWithVar(…)` depend on the existence of `pyobj_` in the autograd metadata of a Variable to convert the Variable to a Python tensor. However, after the Variable/Tensor merge, there will be Variables that don't contain autograd metadata, and to allow the conversion from non-autograd-meta Variable to a Python tensor we need to store the `pyobj_` outside of autograd metadata and in a place where it will always be available.

This PR makes it possible by moving `pyobj_` into TensorImpl, so that `THPVariable_Wrap(…)` and `THPVariable_NewWithVar(…)` can always access a Variable's `pyobj_` and convert the Variable to a Python tensor.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18225

Differential Revision: D14562616

Pulled By: yf225

fbshipit-source-id: 18d4aaace70eee6120abaf9276036d1f8f51b18d

5 years agoFix deprecated scalar type in ATen/native/Distributions.cpp
Xiang Gao [Sat, 23 Mar 2019 17:01:28 +0000 (10:01 -0700)]
Fix deprecated scalar type in ATen/native/Distributions.cpp

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18265

Differential Revision: D14577543

Pulled By: ezyang

fbshipit-source-id: 36674530b32366c51835e4073d7ba23d455d2fda

5 years agoRevert D14446895: [C2] Implement rotated generate_proposals_op without opencv depende...
Edward Yang [Sat, 23 Mar 2019 16:33:40 +0000 (09:33 -0700)]
Revert D14446895: [C2] Implement rotated generate_proposals_op without opencv dependency (~2x faster)

Differential Revision:
D14446895

Original commit changeset: 847f2443e645

fbshipit-source-id: fc6ab5ee59e027f125f5ab0f7ee51ad7db37d4a4

5 years agoRevert D14584266: [pytorch][PR] Better error message for tensor with grad as constant...
Michael Suo [Sat, 23 Mar 2019 09:47:57 +0000 (02:47 -0700)]
Revert D14584266: [pytorch][PR] Better error message for tensor with grad as constant in tracing

Differential Revision:
D14584266

Original commit changeset: 4e7850dadc78

fbshipit-source-id: 3bb3b5006e469edff984c16e0ff8d5dac2862d88

5 years agoBetter error when module attr is used (#18164)
Elias Ellison [Sat, 23 Mar 2019 03:13:02 +0000 (20:13 -0700)]
Better error when module attr is used (#18164)

Summary:
Adds a suggestion to add to __constants__ when a torch.nn.Module attr is accessed
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18164

Differential Revision: D14580060

Pulled By: eellison

fbshipit-source-id: 0c5adc21d7341a5691d4b45930947cb1ba84c8e8

5 years agoFix incorrect sparse add behavior when the sparse tensor has non-contiguous values...
Will Feng [Sat, 23 Mar 2019 02:25:58 +0000 (19:25 -0700)]
Fix incorrect sparse add behavior when the sparse tensor has non-contiguous values (#18179)

Summary:
Currently, this code gives an incorrect result:
```python
import torch

indices = torch.tensor([[7, 1, 3]])
values = torch.tensor([[1., 1., 1.],
                       [1., 1., 1.],
                       [1., 1., 1.]])
x = torch.sparse_coo_tensor(indices, values, size=(10, 3))
values = torch.tensor(1.).expand(3, 3)  # expand() produces non-contiguous (stride-0) values
y = torch.sparse_coo_tensor(indices, values, size=(10, 3))
z = x + y
print(z)
# Incorrect output -- every entry should be 2.:
# tensor(indices=tensor([[7, 1, 3]]),
#        values=tensor([[2., 1., 1.],
#                       [1., 1., 1.],
#                       [1., 1., 1.]]),
#        size=(10, 3), nnz=3, layout=torch.sparse_coo)
```

This PR fixes the bug by adding special handling for sparse tensors with non-contiguous values in the addition function (specifically, by cat'ing the indices and values together).

This PR closes https://github.com/pytorch/pytorch/issues/17950 and https://github.com/pytorch/pytorch/issues/17919.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18179

Reviewed By: ezyang

Differential Revision: D14569591

Pulled By: yf225

fbshipit-source-id: f5a14c4a31337fc95eab64596212066b4fb18b1a

5 years agoImplement rotated generate_proposals_op without opencv dependency (1.8x faster) ...
Jing Huang [Sat, 23 Mar 2019 01:12:27 +0000 (18:12 -0700)]
Implement rotated generate_proposals_op without opencv dependency (1.8x faster) (#18010)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18010

[C2] Implement rotated generate_proposals_op without opencv dependency.

Reviewed By: newstzpz

Differential Revision: D14446895

fbshipit-source-id: 847f2443e645f8cae1327dfbaa111c48875ca9be

5 years agoRemove empty file (actual file_check.cpp resides in torch/csrc/jit/testing) (#18303)
Mikhail Zolotukhin [Sat, 23 Mar 2019 00:00:10 +0000 (17:00 -0700)]
Remove empty file (actual file_check.cpp resides in torch/csrc/jit/testing) (#18303)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18303
ghimport-source-id: 66f4402075b123e36c6ffdf806b7c93187a1a58a

Stack from [ghstack](https://github.com/ezyang/ghstack):
* #18307 Convert test_recursive_cse to use Filecheck inline annotations.
* #18306 [Filecheck] Add a feature to parse check annotations from string.
* #18305 Add python bindings for parseIR.
* **#18303 Remove empty file (actual file_check.cpp resides in torch/csrc/jit/testing)**

Differential Revision: D14586003

fbshipit-source-id: a13e57bd4302e4d3f06198068d525de25e2aa8b3

5 years agoTurn script_type_parser into a class (#18211)
Michael Suo [Fri, 22 Mar 2019 23:24:36 +0000 (16:24 -0700)]
Turn script_type_parser into a class (#18211)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18211
ghimport-source-id: 73b81e9ec631937b14db1da10991831788a6894b

Stack from [ghstack](https://github.com/ezyang/ghstack):
* #18296 [jit] Add namespacing for ScriptClasses
* #18284 [jit] make test module hook use save/load
* **#18211 [jit] Turn script_type_parser into a class**
* #18148 [jit] python interop for script classes

If we are namespacing classes, the type parser will need to carry around
some state about which namespaces to look in. This PR just wraps it in a
class in preparation.

Also, subscriptToType can no longer be static, since parseTypeFromExpr
may give different results depending on the namespaces available, so
it's been made a regular function instead of a static map lookup.

Reviewed By: eellison

Differential Revision: D14581128

fbshipit-source-id: 711315472ccde1920abf9fdb5a871ac27fb86787

5 years agopython interop for script classes (#18148)
Michael Suo [Fri, 22 Mar 2019 23:24:36 +0000 (16:24 -0700)]
python interop for script classes (#18148)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18148
ghimport-source-id: 40a9d745dc9aeba53d098743323fcbd50ca65137

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18148 py interop**

Support for converting classes across the Python–TorchScript boundary. Like other TorchScript values, ScriptClasses are native Python values when used in Python and IValues when used in TorchScript.

Notably, there is a copy across this boundary, which will be surprising to users who expect standard Python reference semantics. I have some ideas for fixing that, but it's a more involved process.

Reviewed By: jamesr66a

Differential Revision: D14526259

fbshipit-source-id: 5916e3032488a42dc7da756c1826d7c040a21ebd

5 years agoBetter error message for tensor with grad as constant in tracing (#18298)
Elias Ellison [Fri, 22 Mar 2019 22:25:40 +0000 (15:25 -0700)]
Better error message for tensor with grad as constant in tracing (#18298)

Summary:
Fix for https://github.com/pytorch/pytorch/issues/17583

There's an unrelated issue right now causing a segfault when printing tensors, so that might have to be fixed first for this to land.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18298

Differential Revision: D14584266

Pulled By: eellison

fbshipit-source-id: 4e7850dadc78ef1e98ad40b9d8adc0fef42acf48

5 years agoSupport for basic list comprehensions (#17267)
Nikolay Korovaiko [Fri, 22 Mar 2019 22:22:23 +0000 (15:22 -0700)]
Support for basic list comprehensions (#17267)

Summary:
Supports the following syntax:
```
@torch.jit.script
def comp(l):
    # type: (List[float]) -> List[float]
    n = [x * 3 for x in l]
    return n
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17267

Differential Revision: D14581119

Pulled By: Krovatkin

fbshipit-source-id: 6fd091a8a9ab607386ac58fda6ad88bf8aea380e

5 years agoMake it possible to trigger XLA/slow tests via commit message. (#18345)
Edward Yang [Fri, 22 Mar 2019 21:58:35 +0000 (14:58 -0700)]
Make it possible to trigger XLA/slow tests via commit message. (#18345)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18345
ghimport-source-id: 9649d76bb194866859d62e6ba2a3a265c96ebba5

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18345 Make it possible to trigger XLA/slow tests via commit message.**

Four variants are supported: `[xla ci] [ci xla] [xla test] [test xla]`; substitute
xla with slow for slow tests.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision: D14584557

fbshipit-source-id: fcbfdfb28246823135bb3d3910baae073d16e81d

5 years agoAvoid refcount when looking up dispatch key
Sebastian Messmer [Fri, 22 Mar 2019 21:05:50 +0000 (14:05 -0700)]
Avoid refcount when looking up dispatch key

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18294

Reviewed By: ezyang

Differential Revision: D14512979

fbshipit-source-id: 45e548974f06184c375c2bb8339e3049a4ebd880

5 years agoFix DCHECK to handle dangling else (#18295)
Jiakai Liu [Fri, 22 Mar 2019 21:01:41 +0000 (14:01 -0700)]
Fix DCHECK to handle dangling else (#18295)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18295

Replace "if (false)" with "while (false)" which fixes potential dangling else issue as shown in added test case.

Reviewed By: ezyang

Differential Revision: D14569608

fbshipit-source-id: 407052db9182ce27b7a59841e90fa50d3eca262e

5 years agoAllow fusion of float function arguments (#18087)
Natalia Gimelshein [Fri, 22 Mar 2019 20:48:59 +0000 (13:48 -0700)]
Allow fusion of float function arguments (#18087)

Summary:
so that functions like `def fn(x, p: float)` can be fused. Fixes #9940 and #11186. Fuses only float (not integer) arguments, to simplify assembling arguments for the fusion launch.
CPU fusion is disabled in CI, so this won't be tested there, but I tested it locally.
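A minimal sketch of a function that becomes fusible with this change (illustrative only):
```python
import torch

@torch.jit.script
def fn(x: torch.Tensor, p: float) -> torch.Tensor:
    # the float scalar p can now be pulled into the fusion group
    # alongside the tensor operations
    return (x * p + p).relu()
```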
cc t-vi, apaszke
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18087

Differential Revision: D14581206

Pulled By: wanchaol

fbshipit-source-id: ccb0cf79b1751706f9b2cdf1715115eae5a39fb6

5 years agoFix error reporting in NVRTC use of the fuser (#18327)
Thomas Viehmann [Fri, 22 Mar 2019 20:31:37 +0000 (13:31 -0700)]
Fix error reporting in NVRTC use of the fuser (#18327)

Summary:
Two functions were not directed at NVRTC.
It's a bit hard to test this, as the fuser usually produces correct code - unless I try to hack on it. :)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18327

Differential Revision: D14579285

Pulled By: soumith

fbshipit-source-id: 1be7ba461cc473d514ba619507742a47d4d7c97e

5 years agoUsing sqrt for better precision in cosine_similarity (#18250)
Ailing Zhang [Fri, 22 Mar 2019 20:22:52 +0000 (13:22 -0700)]
Using sqrt for better precision in cosine_similarity (#18250)

Summary:
address comment in #18168 .
Testing in CI...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18250

Differential Revision: D14568601

Pulled By: ailzhang

fbshipit-source-id: 39fbbdb08743b53fa665c7e88e4750cbe0976ec7

5 years agoFix alignment issues for Fake BFP16 fp32 -> bfp16 rounding routines (#18321)
Jianyu Huang [Fri, 22 Mar 2019 19:28:04 +0000 (12:28 -0700)]
Fix alignment issues for Fake BFP16 fp32 -> bfp16 rounding routines (#18321)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18321

As title.

Reviewed By: jspark1105

Differential Revision: D14575512

fbshipit-source-id: 0e33cdab54b1aef8b67f0b4c366692c5dbdf631d

5 years agoUntangle internal build python and cpp dependencies
Dmytro Dzhulgakov [Fri, 22 Mar 2019 19:10:19 +0000 (12:10 -0700)]
Untangle internal build python and cpp dependencies

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18326

Reviewed By: ezyang

Differential Revision: D14576294

fbshipit-source-id: 186ce1e3d026d962b7386f861eddf093f583a878

5 years agoCaffe2: crash op (#18207)
Alexander Sidorov [Fri, 22 Mar 2019 18:49:04 +0000 (11:49 -0700)]
Caffe2: crash op (#18207)

Summary:
this is handy when testing various core dump related
things. If in the future we want to unit test our future gdb debugger
extensions, we can use this op to generate a core dump for us within a
unit test.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18207

Differential Revision: D14482186

Pulled By: salexspb

fbshipit-source-id: 39a9fffbdd4bd083597f544d1c783a82cf023a89

5 years agocaffe2 - Util to cleanup external inputs and outputs from a NetDef (#18194)
Duc Ngo [Fri, 22 Mar 2019 18:14:40 +0000 (11:14 -0700)]
caffe2 - Util to cleanup external inputs and outputs from a NetDef (#18194)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18194

Add a util method to clean up external inputs and outputs from a NetDef; a Python sketch of the invariants follows the list below.

The following conditions will be met after the modification:
- No duplicate external inputs
- No duplicate external outputs
- Going through the list of ops in order, all op inputs must be outputs
of earlier ops, or registered as external inputs.
- All external outputs must be outputs of some operator.
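A Python sketch of those invariants, assuming a caffe2 `NetDef` protobuf; the actual util may differ in details:
```python
def cleanup_external_inputs_and_outputs(net_def):
    """Hypothetical sketch: rebuild external_input/external_output so that
    the four conditions listed above hold."""
    produced = set()
    ext_in, seen_in = [], set()
    for op in net_def.op:
        for name in op.input:
            # any input not produced by an earlier op must be an external input
            if name not in produced and name not in seen_in:
                seen_in.add(name)
                ext_in.append(name)
        produced.update(op.output)
    # deduplicate external outputs, keeping only blobs some op actually produces
    ext_out, seen_out = [], set()
    for name in net_def.external_output:
        if name in produced and name not in seen_out:
            seen_out.add(name)
            ext_out.append(name)
    del net_def.external_input[:]
    net_def.external_input.extend(ext_in)
    del net_def.external_output[:]
    net_def.external_output.extend(ext_out)
```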

Reviewed By: ZolotukhinM

Differential Revision: D14528589

fbshipit-source-id: c8d82fda1946aa3696abcbec869a4a8bb22f09b6

5 years agoEnd to end hack to call server side Caffe2 ops (#18267)
Dmytro Dzhulgakov [Fri, 22 Mar 2019 18:11:16 +0000 (11:11 -0700)]
End to end hack to call server side Caffe2 ops (#18267)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18267

Motivation: we don't actually want to use it for real under any circumstances. This is an idea to unblock our internal progress and parallelize workstreams. We can easily define schemas for all ops in question and implement forwarding to C2 ops which is NOT going to be performant. Then several things can be happening in parallel:
* move code of ops outside of C2 ops that depend on protobuf into c10
* development of optimization/fusion passes
* building python-level wrappers with clean API
* improving perf

This demonstrates Relu, quant, and dequant. It seems to cover all the use cases necessary (maybe except weight prepacking). Ideally I'd demonstrate Conv, but I will get to it later in a separate PR (contributions welcome).

Reviewed By: ezyang

Differential Revision: D14531232

fbshipit-source-id: 4cd4a71ae0cb373c6c0e81f965c442b82a1b4069

5 years agoOptimize MomentumSGDUpdate maximum block size and make it templated
Bilge Acun [Fri, 22 Mar 2019 16:51:27 +0000 (09:51 -0700)]
Optimize MomentumSGDUpdate maximum block size and make it templated

Summary: Removing the maximum number of blocks limit from the operator and making the nesterov parameter templated to remove branching.

Reviewed By: BIT-silence

Differential Revision: D14567003

fbshipit-source-id: 394c2039ee214adc6ccd2e562e4e9563d307131f

5 years agoAdd test for #17271 (torch.exp incorrect for 2**31 size tensor) (#18292)
Edward Yang [Fri, 22 Mar 2019 14:46:50 +0000 (07:46 -0700)]
Add test for #17271 (torch.exp incorrect for 2**31 size tensor) (#18292)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18292
ghimport-source-id: a3e96584db0eef7b6202a1211808f9f6e59dd529

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18292 Add test for #17271 (torch.exp incorrect for 2**31 size tensor)**
* #18291 Correctly call superclass setUp in TestCase subclasses.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision: D14567642

fbshipit-source-id: c60ee7597a86f5d2c5c0b72cb106f17815950427

5 years agoCorrectly call superclass setUp in TestCase subclasses. (#18291)
Edward Yang [Fri, 22 Mar 2019 14:43:40 +0000 (07:43 -0700)]
Correctly call superclass setUp in TestCase subclasses. (#18291)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18291
ghimport-source-id: d6e95e899bd320407967df41435801e54864ba62

Stack from [ghstack](https://github.com/ezyang/ghstack):
* #18292 Add test for #17271 (torch.exp incorrect for 2**31 size tensor)
* **#18291 Correctly call superclass setUp in TestCase subclasses.**

This makes PYTORCH_TEST_SKIP_FAST work correctly for more
tests, reducing the wasted testing effort on our slow_test job.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision: D14567643

fbshipit-source-id: 40cf1d6556e0dd0a0550ff3d9ffed8b6000f8191

5 years agoVerify def before infer tensor (#18129)
Gerard Goossen [Fri, 22 Mar 2019 13:33:24 +0000 (06:33 -0700)]
Verify def before infer tensor (#18129)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18129

A lot of tensor inference functions assume the operator passes the schema,
so call Verify to make sure this is actually the case.

I created a diff before to add checking in Concat (https://github.com/pytorch/pytorch/pull/17110), but encountered a lot more places where this is assumed (for example ElementwiseOpShapeInference).

Reviewed By: mdschatz

Differential Revision: D14503933

fbshipit-source-id: cf0097b8c3e4beb1cded6b61e092a6adee4b8fcb

5 years agoadd more Python interface functions to make quantization simpler (#18246)
Jongsoo Park [Fri, 22 Mar 2019 07:49:11 +0000 (00:49 -0700)]
add more Python interface functions to make quantization simpler (#18246)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18246

Simplifies histogram collection and quantization process.

Histogram collection before this diff was something like this
```
from caffe2.quantization.server import dnnlowp_pybind11
...
dnnlowp_pybind11.ObserveHistogramOfOutput(hist_file)
for ...
   workspace.RunNet(predict_net)
# This triggers the observer's Stop function to dump out the histogram file,
# but it can have the unintended consequence of also clearing all the other
# useful observers we attached.
dnnlowp_pybind11.ClearNetObservers()
```

After this diff we can
```
workspace.CreateNet(predict_net)  # Note we need to create net to have a net to attach observer
histogram_observer = dnnlowp_pybind11.AddHistogramObserver(predict_net, hist_file)
for ...
   workspace.RunNet(predict_net)
predict_net.RemoveObserver(histogram_observer)
```

Choosing quantization parameters of weights before this diff was something like this
```
dnnlowp_pybind11.ObserveHistogramOfOutput(weight_hist_file)
workspace.RunNetOnce(init_net)
dnnlowp_pybind11.ClearNetObservers() # Has same issue as the histogram collection example above

dnnlowp_pybind11.RegisterQuantizationParamsWithHistogram(
    weight_hist_file, is_weight=True, qparams_output_file_name=qparams_file
)
workspace.CreateNet(init_net, overwrite=True)
dnnlowp_pybind11.ClearNetObservers()

logger.info("Loading quantization params from {}".format(qparams_file))
blobs_to_qparams = {}
with open(qparams_file) as f:
    lines = f.readlines()
for line in lines:
    op_id, op_type, output_id, tensor_name, mini, maxi, scale, zero_point, precision = (
        line.split()
    )
    op_id = int(op_id)
    output_id = int(output_id)
    op = net.Proto().op[op_id]
    if op_type != op.type or op.output[output_id] != tensor_name:
        print(
            "Corrupt qparams file {} {} {} {} {}".format(
                qparams_file, op_type, op.type, op.output[output_id], tensor_name
            )
        )
    blobs_to_qparams[tensor_name] = QuantizationParam(float(scale), int(zero_point))

```

After this diff this can be simplified to
```
blobs_to_qparams = {}
for op in init_net.Proto().op:
    for output in op.output:
        scale, zero_point = dnnlowp_pybind11.ChooseQuantizationParams(output)
        blobs_to_qparams[output] = QuantizationParam(scale, zero_point)
```

Reviewed By: dskhudia

Differential Revision: D14544694

fbshipit-source-id: 4fd06cd63256201e2e9d15c39f503138d1be53c2

5 years agoadd fbgemm fp16 (fbfcpacked) support, add global_init_net in predictor_export_meta...
Weiyi Zheng [Fri, 22 Mar 2019 07:08:50 +0000 (00:08 -0700)]
add fbgemm fp16 (fbfcpacked) support, add global_init_net in predictor_export_meta (#18257)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18257

support adding op in global_init_net. because pred_init_net is per thread, and just doesn't cut it.

Reviewed By: jspark1105

Differential Revision: D14552695

fbshipit-source-id: 53dd44c84ad019019ab9f35fc04d076b7f941ddc

5 years agoAutomatic update of fbcode/onnx to c05f2ae412daf8fd64136ca354b97ccf73e0ea6c (#18285)
Lu Fang [Fri, 22 Mar 2019 07:07:57 +0000 (00:07 -0700)]
Automatic update of fbcode/onnx to c05f2ae412daf8fd64136ca354b97ccf73e0ea6c (#18285)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18285

Previous import was 96c58ceeacf0f2b73d752e413e4fd78787a12da3

Included changes:
- **[c05f2ae4](https://github.com/onnx/onnx/commit/c05f2ae4)**: update both core and ml docs (#1879) <Lu Fang>
- **[f895279b](https://github.com/onnx/onnx/commit/f895279b)**: fix the problems introduced in previous PRs in operator registration (#1878) <Lu Fang>
- **[f6f80657](https://github.com/onnx/onnx/commit/f6f80657)**: Skip the schema check on ops in non-standard domain (#1876) <Lu Fang>
- **[8c8be722](https://github.com/onnx/onnx/commit/8c8be722)**: Introduce Function Body Helper  (#1868) <Sherlock>
- **[b605eafb](https://github.com/onnx/onnx/commit/b605eafb)**: Support down sampling for Upsample with scales < 1. (#1773) <Ke Zhang>
- **[47f7aa71](https://github.com/onnx/onnx/commit/47f7aa71)**: Remove scaledtanh (#1866) <Ashwini Khade>
- **[4dfc56de](https://github.com/onnx/onnx/commit/4dfc56de)**: Add Ceil support for Max and Average Pooling (#1860) <Lara Haidar>
- **[552a8efc](https://github.com/onnx/onnx/commit/552a8efc)**: Add testcase generator for functions (#1862) <Raymond Yang>
- **[fdb978a5](https://github.com/onnx/onnx/commit/fdb978a5)**: Promote Thresholded Relu Op (#1856) <Ashwini Khade>
- **[ce332628](https://github.com/onnx/onnx/commit/ce332628)**: Update Slice with dynamic input & optional input steps (#1836) <Bowen Bao>
- **[3a9a8787](https://github.com/onnx/onnx/commit/3a9a8787)**: Merge function into opschema (#1834) <Raymond Yang>
- **[3dbf8fe9](https://github.com/onnx/onnx/commit/3dbf8fe9)**: Handle string comparision represented as np.objects (#1851) <Dmitri Smirnov>
- **[3b0d3bb2](https://github.com/onnx/onnx/commit/3b0d3bb2)**: remove global variable in header file (#1850) <Lu Fang>
- **[1cca8733](https://github.com/onnx/onnx/commit/1cca8733)**: bump the version for drop out - fix the issue that the version was not bumped when changing its type constraint declaration. (#1848) <Ke Zhang>
- **[1ec81bc6](https://github.com/onnx/onnx/commit/1ec81bc6)**: Change TopK operator to allow dynamic 'k' (#1829) <Hariharan Seshadri>
- **[a89a4a16](https://github.com/onnx/onnx/commit/a89a4a16)**: Remove exp op: Affine, ImageScaler,ParametricSoftplus, Crop. (#1832) <Ke Zhang>

Reviewed By: yinghai

Differential Revision: D14566202

fbshipit-source-id: b1e5912ae6887e2865fc628363071e2b9938dfa4

5 years agoCleanup TorchScript rst docs (#18234)
David Riazati [Fri, 22 Mar 2019 03:15:38 +0000 (20:15 -0700)]
Cleanup TorchScript rst docs (#18234)

Summary:
* Adds more headers for easier scanning
* Adds some line breaks so things are displayed correctly
* Minor copy/spelling stuff
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18234

Reviewed By: ezyang

Differential Revision: D14567737

Pulled By: driazati

fbshipit-source-id: 046d991f7aab8e00e9887edb745968cb79a29441

5 years agoReplace the remaining usages of IntList in caffe2 to IntArrayRef
Junjie Bai [Thu, 21 Mar 2019 23:24:45 +0000 (16:24 -0700)]
Replace the remaining usages of IntList in caffe2 to IntArrayRef

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18282

Differential Revision: D14569269

Pulled By: bddppq

fbshipit-source-id: 5fc33701b83f9efdec4b456d2691764831d10e7f

5 years agoBlacklist certain op types when doing bound shape inference (#18290)
Yinghai Lu [Thu, 21 Mar 2019 22:28:20 +0000 (15:28 -0700)]
Blacklist certain op types when doing bound shape inference (#18290)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18290

Some ops, such as `Tile`, will mess up our tracking of the batch size, and for now it makes sense to stop shape inference at these ops so that we don't lower them and their downstream ops without proper batch info.

Reviewed By: zrphercule

Differential Revision: D14463550

fbshipit-source-id: 2792481efa540f2a7dd310e677c213860c3053ca

5 years agoFix use of c10::guts::apply (#18159)
Sebastian Messmer [Thu, 21 Mar 2019 21:51:38 +0000 (14:51 -0700)]
Fix use of c10::guts::apply (#18159)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18159

In some instances, the call to `forward` could clash with `std::forward`. Fully qualify it to make sure it resolves to the right one.

Reviewed By: ezyang

Differential Revision: D14512189

fbshipit-source-id: 6242607dbe54fcdb93229c1a4aaee8b84a88caa1

5 years agoAllow using C10_DECLARE_TENSOR_TYPE and C10_DEFINE_TENSOR_TYPE from any namespace...
Sebastian Messmer [Thu, 21 Mar 2019 21:51:38 +0000 (14:51 -0700)]
Allow using C10_DECLARE_TENSOR_TYPE and C10_DEFINE_TENSOR_TYPE from any namespace (#18158)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18158

They didn't work when called from other namespaces before because they didn't fully specify the c10 namespace.

Reviewed By: ezyang

Differential Revision: D14512187

fbshipit-source-id: a496b89a1bbe2b56137cfae03ab94a60f38d7068

5 years agoMove schema inference to c10 (#18090)
Sebastian Messmer [Thu, 21 Mar 2019 21:51:38 +0000 (14:51 -0700)]
Move schema inference to c10 (#18090)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18090

This schema inference is needed by the c10 operator registration mechanism. Move it to c10.
It is going to be used by diffs stacked on top.

Reviewed By: ezyang

Differential Revision: D14491454

fbshipit-source-id: 0f8ddcdbd91467c8347d315dd443a1ca8b216481

5 years agoAllow registering same operator schema multiple times (#18038)
Sebastian Messmer [Thu, 21 Mar 2019 21:51:38 +0000 (14:51 -0700)]
Allow registering same operator schema multiple times (#18038)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18038

Now that we have named overloads, we can allow registering the same function schema multiple times and just check it's identical.

This is going to be used in custom op registration since they register the schema every time a kernel is registered.

Reviewed By: dzhulgakov

Differential Revision: D14467494

fbshipit-source-id: 2c26cf72a64b65f120afe05e989302ec42597515

5 years agoRename trtrs to triangular_solve (#18213)
vishwakftw [Thu, 21 Mar 2019 21:18:38 +0000 (14:18 -0700)]
Rename trtrs to triangular_solve (#18213)

Summary:
Changelog:
- Renames `trtrs` to `triangular_solve` to remain consistent with `cholesky_solve` and `solve`.
- Rename all tests, fix callsites
- Create a tentative alias for `triangular_solve` under the name `trtrs`, and add a deprecation warning to not promote usage.
- Move `isnan` to _torch_docs.py
- Remove unnecessary imports
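For reference, a small usage sketch of the new name (shapes and values are illustrative):
```python
import torch

A = torch.randn(3, 3).tril()   # lower-triangular coefficient matrix
b = torch.randn(3, 2)
X = torch.triangular_solve(b, A, upper=False).solution
torch.allclose(A @ X, b)       # True, up to numerical tolerance
```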
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18213

Differential Revision: D14566902

Pulled By: ezyang

fbshipit-source-id: 544f57c29477df391bacd5de700bed1add456d3f

5 years agoFix contribution_guide docs (#18237)
kshitij12345 [Thu, 21 Mar 2019 20:10:34 +0000 (13:10 -0700)]
Fix contribution_guide docs (#18237)

Summary:
Fixes a typo and a link in `docs/source/community/contribution_guide.rst`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18237

Differential Revision: D14566907

Pulled By: ezyang

fbshipit-source-id: 3a75797ab6b27d28dd5566d9b189d80395024eaf

5 years agoUpdating submodules
svcscm [Thu, 21 Mar 2019 20:08:10 +0000 (13:08 -0700)]
Updating submodules

Reviewed By: yns88

fbshipit-source-id: 80b00c33e6f6c7cfa08f645cd33419f6545f45d2

5 years agoOptimize group_norm_op (#17945)
Xiaomeng Yang [Thu, 21 Mar 2019 19:56:20 +0000 (12:56 -0700)]
Optimize group_norm_op (#17945)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17945

Optimize group_norm_op

Reviewed By: houseroad

Differential Revision: D14419908

fbshipit-source-id: 4024b5c5dbeff97f4f026d61fc44af1f0e98ed68

5 years agoEnable running of slow tests in CI. (#18236)
Edward Yang [Thu, 21 Mar 2019 19:37:00 +0000 (12:37 -0700)]
Enable running of slow tests in CI. (#18236)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18236
ghimport-source-id: 2bb80d017c2ea833669a2d55b340a922b2d44685

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18236 Enable running of slow tests in CI.**
* #18231 Add a decorator for marking slow tests.

These tests only run on master, as they are slow.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision: D14563115

fbshipit-source-id: f54ddef4abedc7e872e58657fc9ac537952773d0

5 years agoRun clang-format on torch/csrc/distributed/c10d
Pieter Noordhuis [Thu, 21 Mar 2019 18:49:21 +0000 (11:49 -0700)]
Run clang-format on torch/csrc/distributed/c10d

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18255

Differential Revision: D14563072

Pulled By: pietern

fbshipit-source-id: bd83f90ae949b14bc95f4009ba12319c9b7936d0

5 years agoShut up compiler about unused the_type. (#18278)
Edward Yang [Thu, 21 Mar 2019 18:36:12 +0000 (11:36 -0700)]
Shut up compiler about unused the_type. (#18278)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18278
ghimport-source-id: 3c35f6e7229c3c2b3a27d96370d7c05fad58365e

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18278 Shut up compiler about unused this_type.**

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision: D14563050

fbshipit-source-id: 4b516f6c9ef3784d1430f793f304066c351b1a93

5 years agoAdd a decorator for marking slow tests. (#18231)
Edward Yang [Thu, 21 Mar 2019 18:08:11 +0000 (11:08 -0700)]
Add a decorator for marking slow tests. (#18231)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18231
ghimport-source-id: 78c230f60c41877fe91b89c8c979b160f36f856b

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18231 Add a decorator for marking slow tests.**

The general strategy (a sketch follows the list):
- It's a normal skip decorator, which triggers a skip if
  PYTORCH_TEST_WITH_SLOW is not set.
- It also annotates the method in question that says it's
  slow.  We use this to implement a catch-all skipper in
  setUp that skips all non-slow tests when
  PYTORCH_TEST_SKIP_FAST is set.
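A minimal sketch of such a decorator (hypothetical; the real implementation may differ):
```python
import os
import unittest
from functools import wraps

def slowTest(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        if os.environ.get("PYTORCH_TEST_WITH_SLOW") != "1":
            raise unittest.SkipTest("test is slow; set PYTORCH_TEST_WITH_SLOW=1 to run")
        fn(*args, **kwargs)
    # annotation consumed by a catch-all skipper in setUp when
    # PYTORCH_TEST_SKIP_FAST is set
    wrapper.__dict__["slow_test"] = True
    return wrapper
```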

I added a little smoketest to test_torch and showed that I get:

```
Ran 432 tests in 0.017s
OK (skipped=431)
```

when running with PYTORCH_TEST_WITH_SLOW=1 and PYTORCH_TEST_SKIP_FAST=1

CI integration coming in later patch, as well as nontrivial uses of
this decorator.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision: D14544441

fbshipit-source-id: 54435ce4ec827193e019887178c09ebeae3ae2c9

5 years agolint changes
Igor Fedan [Thu, 21 Mar 2019 18:04:15 +0000 (11:04 -0700)]
lint changes

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18276

Differential Revision: D14563385

Pulled By: ifedan

fbshipit-source-id: 12a51dbdb7b9e96be9fefa21fe298796b1ae6b58

5 years agomove median to ATen (#17637)
Thomas Viehmann [Thu, 21 Mar 2019 16:59:20 +0000 (09:59 -0700)]
move median to ATen (#17637)

Summary:
This moves median to ATen.

- median with dimension reduces to kthvalue.
- median without dimension (aka medianall) is implemented in parallel to kthvalue because we would not want to reshape (copying for non-contiguous) and then copy again in kthvalue. We can use the helper functions we moved from kthvalue.
- `median_cuda` was accidentally already put into ATen in #17544.
- The quickselect algorithm without indices for CPU in TH is now obsolete and removed.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17637

Differential Revision: D14346510

Pulled By: ezyang

fbshipit-source-id: c07ad144efbd6b4194179bb1c02635862521d8cb

5 years agoFix B903 lint: save memory for data classes with slots/namedtuple (#18184)
Edward Yang [Thu, 21 Mar 2019 16:06:30 +0000 (09:06 -0700)]
Fix B903 lint: save memory for data classes with slots/namedtuple (#18184)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18184
ghimport-source-id: 2ce860b07c58d06dc10cd7e5b97d4ef7c709a50d

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18184 Fix B903 lint: save memory for data classes with slots/namedtuple**
* #18181 Fix B902 lint error: invalid first argument.
* #18178 Fix B006 lint errors: using mutable structure in default argument.
* #18177 Fix lstrip bug revealed by B005 lint

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision: D14530872

fbshipit-source-id: e26cecab3a8545e7638454c28e654e7b82a3c08a

5 years agoFix B902 lint error: invalid first argument. (#18181)
Edward Yang [Thu, 21 Mar 2019 16:06:30 +0000 (09:06 -0700)]
Fix B902 lint error: invalid first argument. (#18181)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18181
ghimport-source-id: 9c23551584a1a1b0b7ac246367f3a7ae1c50b315

Stack from [ghstack](https://github.com/ezyang/ghstack):
* #18184 Fix B903 lint: save memory for data classes with slots/namedtuple
* **#18181 Fix B902 lint error: invalid first argument.**
* #18178 Fix B006 lint errors: using mutable structure in default argument.
* #18177 Fix lstrip bug revealed by B005 lint

A variety of sins were committed:
- Some code was dead
- Some code was actually a staticmethod
- Some code just named it the wrong way
- Some code was purposely testing the omitted case

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision: D14530876

fbshipit-source-id: 292a371d9a76ddc7bfcfd38b6f0da9165290a58e

5 years agoFix B006 lint errors: using mutable structure in default argument. (#18178)
Edward Yang [Thu, 21 Mar 2019 16:06:30 +0000 (09:06 -0700)]
Fix B006 lint errors: using mutable structure in default argument. (#18178)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18178
ghimport-source-id: 667ee76b418f505fa64b863e52a603c508dcd1bf

Stack from [ghstack](https://github.com/ezyang/ghstack):
* #18184 Fix B903 lint: save memory for data classes with slots/namedtuple
* #18181 Fix B902 lint error: invalid first argument.
* **#18178 Fix B006 lint errors: using mutable structure in default argument.**
* #18177 Fix lstrip bug revealed by B005 lint

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision: D14530874

fbshipit-source-id: 38f4456a085bfe55f2a96fff53028ebd0d621604

5 years agoTwo amendments for the shape analysis (#18271)
Thomas Viehmann [Thu, 21 Mar 2019 15:02:30 +0000 (08:02 -0700)]
Two amendments for the shape analysis (#18271)

Summary:
Two small refinements to the shape analysis:
- `detach` can set requires_grad to false for dimensioned tensors (not sure if I would also need to deal with Complete?).
- add `batch_norm_stats`.

I noticed these while looking at what's going on when trying to code batch norm manually. (Hi wanchaol)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18271

Differential Revision: D14561303

Pulled By: ezyang

fbshipit-source-id: 64a6879392e77403c44f2ed82f84b6397754d0ea

5 years agoFix lstrip bug revealed by B005 lint (#18177)
Edward Yang [Thu, 21 Mar 2019 14:50:45 +0000 (07:50 -0700)]
Fix lstrip bug revealed by B005 lint (#18177)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18177
ghimport-source-id: fbbf915b66762fc88bc5b541464e71ba27500958

Stack from [ghstack](https://github.com/ezyang/ghstack):
* #18184 Fix B903 lint: save memory for data classes with slots/namedtuple
* #18181 Fix B902 lint error: invalid first argument.
* #18178 Fix B006 lint errors: using mutable structure in default argument.
* **#18177 Fix lstrip bug revealed by B005 lint**

lstrip() doesn't strip a prefix; it strips all of the characters
in the passed-in string. The B005 lint revealed this. Replaced with
a substring operation.
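A quick illustration of the difference:
```python
>>> "data_array".lstrip("data_")
'rray'    # strips every leading character in {d, a, t, _}, eating the 'a' of 'array'
>>> s = "data_array"
>>> s[len("data_"):] if s.startswith("data_") else s
'array'   # the substring operation removes exactly the prefix
```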

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision: D14530873

fbshipit-source-id: 13b3438fcc3cce13b5110730dc3d0b528a52930f

5 years agoBackward function for torch.cdist
Igor Fedan [Thu, 21 Mar 2019 07:36:26 +0000 (00:36 -0700)]
Backward function for torch.cdist

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17173

Differential Revision: D14111482

Pulled By: ifedan

fbshipit-source-id: d72cfd53c29d0f8cf5f8ad1148d14f3d5abd938e

5 years agoFix ONNX symbolic for argmin and argmax (#18261)
Lu Fang [Thu, 21 Mar 2019 05:45:57 +0000 (22:45 -0700)]
Fix ONNX symbolic for argmin and argmax (#18261)

Summary:
Fix the problem introduced in https://github.com/pytorch/pytorch/pull/17103
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18261

Reviewed By: bddppq

Differential Revision: D14558781

Pulled By: houseroad

fbshipit-source-id: 7bb50072e77d1d7b2a93f4011fa1362f26e9df1c

5 years agoUpdate math::Transpose to support tensor with size > 2G (#17670)
Xiaomeng Yang [Thu, 21 Mar 2019 01:19:09 +0000 (18:19 -0700)]
Update math::Transpose to support tensor with size > 2G (#17670)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17670

Update math::Transpose to support tensor with size > 2G

i-am-not-moving-c2-to-c10

Differential Revision: D14313624

fbshipit-source-id: 0b4a85b913972e5a8981f0d40d0c539407b98f30

5 years agohandle dst_bin_width==0 case properly (#18240)
Jongsoo Park [Thu, 21 Mar 2019 00:02:38 +0000 (17:02 -0700)]
handle dst_bin_width==0 case properly (#18240)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18240

For rare cases when dst_bin_width == 0, we should just put all the numbers into a single arbitrary bin.

Reviewed By: csummersea

Differential Revision: D14544685

fbshipit-source-id: 02d04ff8bd1555d6cf7e7eeb1196a4ab3325a9e5

5 years agoRevert D14114134: [asr] add fbgemm fp16 (fbfcpacked) support, add global_init_net...
Lu Fang [Wed, 20 Mar 2019 23:27:56 +0000 (16:27 -0700)]
Revert D14114134: [asr] add fbgemm fp16 (fbfcpacked) support, add global_init_net in predictor_export_meta

Differential Revision:
D14114134

Original commit changeset: 112bb2ceb9d3

fbshipit-source-id: 763262c1b78eed88a653caad5adc27d97feb43aa

5 years agoCleanup arg{min, max} (#17103)
Gao, Xiang [Wed, 20 Mar 2019 23:18:49 +0000 (16:18 -0700)]
Cleanup arg{min, max} (#17103)

Summary:
Why do we need this workaround? `PythonArgParser` handles these two cases well.

The discussion started at https://github.com/pytorch/pytorch/pull/6201#issuecomment-378724406. The conclusion at that time by goldsborough was:

> Because we wanted to allow `dim=None` in Python and route to a different function. Essentially the problem was wanting to wrap the C++ function in Python. AFAIK there is no way of translating `dim=None` behavior into C++? So Richard and I came up with this strategy

Maybe at that time `PythonArgParser` was not powerful enough to handle the routing of two functions with the same name but different C++ signatures.

Will keep an eye on the CI.
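For illustration, both overloads as they appear from Python:
```python
import torch

x = torch.randn(3, 4)
x.argmax()       # dim omitted (None): index into the flattened tensor
x.argmax(dim=1)  # one index per row
```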
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17103

Differential Revision: D14523503

Pulled By: VitalyFedyunin

fbshipit-source-id: cae3e2678062da2eccd93b51d4050578c7a9ab80

5 years agoAdded the exception of ignore_index (#18117)
Bharat123Rox [Wed, 20 Mar 2019 23:00:11 +0000 (16:00 -0700)]
Added the exception of ignore_index (#18117)

Summary:
Fix #17801 by adding an exception regarding `ignore_index` to the documentation for `torch.nn.CrossEntropyLoss` and `torch.nn.NLLLoss`

If any other files/functions are hit, I'd be glad to incorporate the changes there too! 😊
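A small example of the behavior being documented (values are illustrative):
```python
import torch
import torch.nn.functional as F

logits = torch.randn(3, 5)
target = torch.tensor([1, -100, 4])  # -100 is the default ignore_index
# the second sample contributes to neither the loss nor the gradient
loss = F.cross_entropy(logits, target, ignore_index=-100)
```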
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18117

Differential Revision: D14542079

Pulled By: ezyang

fbshipit-source-id: 7b918ac61f441dde7d3d6782d080c500cf2097f1

5 years agoAdd .get() for dicts (#18238)
David Riazati [Wed, 20 Mar 2019 21:48:52 +0000 (14:48 -0700)]
Add .get() for dicts (#18238)

Summary:
Fixes #18232
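A hedged sketch of what this enables in TorchScript (modern annotation syntax; type comments were the common style at the time):
```python
import torch
from typing import Dict, Optional

@torch.jit.script
def lookup(d: Dict[str, int], key: str) -> int:
    v: Optional[int] = d.get(key)  # returns None when the key is absent
    return 0 if v is None else v
```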
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18238

Differential Revision: D14546689

Pulled By: driazati

fbshipit-source-id: ed021e6f54c891d6c734c8f2345f4e83a3c6c905

5 years agoUpdate nccl submodule to 2.4.2 (#17883)
Pieter Noordhuis [Wed, 20 Mar 2019 21:30:55 +0000 (14:30 -0700)]
Update nccl submodule to 2.4.2 (#17883)

Summary:
Didn't test this. Let's see what happens.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17883

Differential Revision: D14547470

Pulled By: pietern

fbshipit-source-id: c35d232f6bcc5a2dce55da636a0acbea5c2725d8

5 years agoReinstate ncclCommDestroy (#17943)
Pieter Noordhuis [Wed, 20 Mar 2019 21:12:31 +0000 (14:12 -0700)]
Reinstate ncclCommDestroy (#17943)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17943

Together with xw285cornell, we came up with a solution for the static destruction
order fiasco that caused the NCCL context to be destroyed **after**
the CUDA context was already destroyed. In this commit we destroy all
cached NCCL contexts as soon as the last NCCL-related Caffe2 operator
instance is destructed, thereby avoiding a dependency on static
variable destruction.

Reviewed By: xw285cornell

Differential Revision: D14429724

fbshipit-source-id: fe5ce4b02b1002af8d9f57f6fa089b7a80e316ce

5 years agoEnable autograd to recognize the XLA backend as one providing multiple devices (...
Davide Libenzi [Wed, 20 Mar 2019 20:47:41 +0000 (13:47 -0700)]
Enable autograd to recognize the XLA backend as one providing multiple devices (#17847)

Summary:
Enable autograd to recognize the XLA backend as one providing multiple devices, while not being CUDA/HIP.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17847

Differential Revision: D14545634

Pulled By: ezyang

fbshipit-source-id: 417181bf2ff4f8978544afe2fb6b042e787854ed

5 years agoadd fbgemm fp16 (fbfcpacked) support, add global_init_net in predictor_export_meta...
Weiyi Zheng [Wed, 20 Mar 2019 20:45:07 +0000 (13:45 -0700)]
add fbgemm fp16 (fbfcpacked) support, add global_init_net in predictor_export_meta (#17905)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17905

support adding op in global_init_net. because pred_init_net is per thread, and just doesn't cut it.

Reviewed By: jspark1105

Differential Revision: D14114134

fbshipit-source-id: 112bb2ceb9d3d5e663dd430585567f4eaa2db35f

5 years agofixed typo in shape_analysis.cpp (#18227)
Zhang Dong [Wed, 20 Mar 2019 19:41:17 +0000 (12:41 -0700)]
fixed typo in shape_analysis.cpp (#18227)

Summary:
cc: VitalyFedyunin
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18227

Differential Revision: D14541764

Pulled By: VitalyFedyunin

fbshipit-source-id: 9477deb1a99e6581f15a4de4d7631d747f56f3a6

5 years agoRetain the parameter names in ONNX exporter (#17551)
Lu Fang [Wed, 20 Mar 2019 19:03:13 +0000 (12:03 -0700)]
Retain the parameter names in ONNX exporter (#17551)

Summary:
So, we will keep the names of ONNX initializers the same as the names in PyTorch state dict.

Later, we will make this the default behavior.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17551

Reviewed By: dzhulgakov

Differential Revision: D14491920

Pulled By: houseroad

fbshipit-source-id: f355c02e1b90d7ebbebf4be7c0fb6ae208ec795f

5 years agoFix typo in docstring
Alexandr Morev [Wed, 20 Mar 2019 18:12:55 +0000 (11:12 -0700)]
Fix typo in docstring

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18216

Differential Revision: D14539824

Pulled By: ezyang

fbshipit-source-id: 490b72951a75f3f8b949a2d692d660a3693ee98a

5 years agoAdd batched version of trtrs (#18025)
Vishwak Srinivasan [Wed, 20 Mar 2019 18:06:56 +0000 (11:06 -0700)]
Add batched version of trtrs (#18025)

Summary:
- Remove single batch TH/THC implementations
- Remove `_batch_trtrs_lower` from `multivariate_normal`
- Add tests for batched behavior
- Modify trtrs_backward to accommodate for batched case
- Modify docs

In a future PR, this will be renamed to `triangular_solve`.
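A hedged sketch of the batched behavior, under the pre-rename `trtrs` name of that era:
```python
import torch

A = torch.randn(2, 3, 3).tril()        # batch of two lower-triangular systems
b = torch.randn(2, 3, 4)
X, _ = torch.trtrs(b, A, upper=False)  # solves each system in the batch
torch.allclose(torch.matmul(A, X), b)  # True, up to numerical tolerance
```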
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18025

Differential Revision: D14523004

Pulled By: ifedan

fbshipit-source-id: 11c6a967d107f969b60e5a5c73ce6bb8099ebbe1

5 years agoRemove GLOO usage when USE_GLOO is OFF
Sacha Refshauge [Wed, 20 Mar 2019 16:16:32 +0000 (09:16 -0700)]
Remove GLOO usage when USE_GLOO is OFF

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18203

Differential Revision: D14540520

Pulled By: soumith

fbshipit-source-id: f1c96cc563ed1e913040e3e16b109d3e3030128c

5 years agoEnable 32 bit CPU build on Windows
peterjc123 [Wed, 20 Mar 2019 16:16:28 +0000 (09:16 -0700)]
Enable 32 bit CPU build on Windows

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18176

Differential Revision: D14539884

Pulled By: ezyang

fbshipit-source-id: 0e4bd9c1ef1830cd9bcc40df36b87534f61def08

5 years agoCorrect cmake flags passing (#18217)
peter [Wed, 20 Mar 2019 16:12:40 +0000 (09:12 -0700)]
Correct cmake flags passing (#18217)

Summary:
Fixes #18214.

According to the CMake manual, we should pass the arguments first, and put the directory as the last element. Otherwise, these flags may not be passed correctly.

Reference:
1. https://cmake.org/cmake/help/latest/manual/cmake.1.html#synopsis
2. https://stackoverflow.com/a/27169347
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18217

Differential Revision: D14540588

Pulled By: ezyang

fbshipit-source-id: a027f585dde66c5da7bbbe584fa42c3e56027d59

5 years agoAdd python_variable._is_view for debugging. (#18197)
Gregory Chanan [Wed, 20 Mar 2019 15:39:52 +0000 (08:39 -0700)]
Add python_variable._is_view for debugging. (#18197)

Summary:
I don't know if we actually want to expose this or not, but it's useful for debugging.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18197

Reviewed By: ezyang

Differential Revision: D14530712

Pulled By: gchanan

fbshipit-source-id: 98fdba9cf113738f0db3a198c49365de536b9919

5 years agoDo not apply these explicit unroll pragmas for ROCm. (#18204)
Johannes M Dieterich [Wed, 20 Mar 2019 14:58:11 +0000 (07:58 -0700)]
Do not apply these explicit unroll pragmas for ROCm. (#18204)

Summary:
Loop analysis indicates that there is a runtime trip count and hence
unrolling cannot take place.

This will silence compile-time warnings we have been observing with recent ROCm releases.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18204

Differential Revision: D14539875

Pulled By: ezyang

fbshipit-source-id: a7ea7f2a95603754296b76a6b62a154f56f4ad4d

5 years agoCopy-edit CONTRIBUTING and update. (#18131)
Edward Yang [Wed, 20 Mar 2019 14:33:51 +0000 (07:33 -0700)]
Copy-edit CONTRIBUTING and update. (#18131)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18131
ghimport-source-id: 473dae70f6c236d317bec77d894310c0aa0376ec

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18131 Copy-edit CONTRIBUTING and update.**

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision: D14505049

fbshipit-source-id: 02aeae33c0049889243c56dd0d761487dac2351e

5 years agofix cosine_similarity (#18168)
Ailing Zhang [Wed, 20 Mar 2019 03:02:29 +0000 (20:02 -0700)]
fix cosine_similarity (#18168)

Summary:
fixes #18057 according to colesbury's suggestion. Thanks!
cc: ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18168

Differential Revision: D14520953

Pulled By: ailzhang

fbshipit-source-id: 970e6cfb482d857a81721ec1d0ee4a4df84a0450

5 years agoBreakup test misc pt2 (#18191)
Elias Ellison [Wed, 20 Mar 2019 02:38:09 +0000 (19:38 -0700)]
Breakup test misc pt2 (#18191)

Summary:
Further break up test_misc.h. The remaining tests don't directly map to a jit file, so I left them in test_misc.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18191

Differential Revision: D14533442

Pulled By: eellison

fbshipit-source-id: 7f538ce0aea208b6b55a4716dfcf039548305041

5 years agoAdd serialization docs to jit/README (#17951)
David Riazati [Tue, 19 Mar 2019 23:42:54 +0000 (16:42 -0700)]
Add serialization docs to jit/README (#17951)

Summary:
Documents the serialization format for `torch.jit.save`. Some of the info is copied from houseroad's internal doc.

[Formatted Markdown](https://github.com/driazati/pytorch/blob/serial_docs/torch/csrc/jit/README.md)

Also refactors the readme to have a heading hierarchy + table of contents
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17951

Differential Revision: D14531644

Pulled By: driazati

fbshipit-source-id: cbcd9462054cc9f8a2f8cea2c98d8aba4e7d227c

5 years agoTurn on Travis builds for ghstack PRs. (#18193)
Edward Yang [Tue, 19 Mar 2019 21:47:24 +0000 (14:47 -0700)]
Turn on Travis builds for ghstack PRs. (#18193)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18193
ghimport-source-id: 540859cf0b238a9832f45b3f4c2351e3343fc1a2

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18193 Turn on Travis builds for ghstack PRs.**

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision: D14529945

fbshipit-source-id: 4476e996e311a04f2a997ca9b7c4cf2157dd6286

5 years agodo not throw when unicode is seen in pull request info (#18195)
Michael Suo [Tue, 19 Mar 2019 21:32:52 +0000 (14:32 -0700)]
do not throw when unicode is seen in pull request info (#18195)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18195
ghimport-source-id: 05102cb115c6bd6d141f51905e20155bcd79a908

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18195 [build] do not throw when unicode is seen in pull request info**

Differential Revision: D14529707

fbshipit-source-id: 2f6a31b01b3a9b044fd24be466cc5325b70929ad

5 years agoDelete bugbear from Python 2 lint. (#18192)
Edward Yang [Tue, 19 Mar 2019 21:15:55 +0000 (14:15 -0700)]
Delete bugbear from Python 2 lint. (#18192)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18192
ghimport-source-id: 9523a09d7ec202ef08cf0ecdf48c42739ea6b0ce

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18192 Delete bugbear from Python 2 lint.**

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision: D14529240

fbshipit-source-id: 1a433b53dd38d1c455e8c0750d97c594ac51ef09

5 years agoSupport attributes when emitting function calls (#18156)
David Riazati [Tue, 19 Mar 2019 20:51:25 +0000 (13:51 -0700)]
Support attributes when emitting function calls (#18156)

Summary:
The type of each `initial_ivalue` is completely known at some point, but that information is discarded by the time a call to it is emitted. This PR is kind of a hack; as a better (longer) solution, the method should know about the type of each initial value.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18156

Differential Revision: D14525768

Pulled By: driazati

fbshipit-source-id: 52d53e9711a07a4551c988bd95fe997e654aa465

5 years agoCustomized pin_memory for PackedSequence (#18079)
Tongzhou Wang [Tue, 19 Mar 2019 20:35:55 +0000 (13:35 -0700)]
Customized pin_memory for PackedSequence (#18079)

Summary:
fixes https://github.com/pytorch/pytorch/issues/18078
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18079

Reviewed By: ezyang

Differential Revision: D14521192

Pulled By: zou3519

fbshipit-source-id: cec773a3a6f2c405a0d9701e213b7caf81649181

5 years agoEnable flake8-bugbear line length checking. (#18138)
Edward Yang [Tue, 19 Mar 2019 20:25:04 +0000 (13:25 -0700)]
Enable flake8-bugbear line length checking. (#18138)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18138
ghimport-source-id: be62a71ef98714e6f168a00f84120f612363528e

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18138 Enable flake8-bugbear line length checking.**

Enable flake8-bugbear's line-length checker (B950), which permits violations
of up to 10% but reports the "true" limit when you go over.

I had to ignore a bunch of flake8-bugbear's other checks when I
turned this on. They're good checks though (they're turned on
in fbcode), and we should fix them eventually.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Reviewed By: salexspb

Differential Revision: D14508678

fbshipit-source-id: 2610ecc0dd43cc0788d77f4d024ebd85b26b8d41

5 years agofix bug in alias analysis (#18146)
Michael Suo [Tue, 19 Mar 2019 18:01:05 +0000 (11:01 -0700)]
fix bug in alias analysis (#18146)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18146
ghimport-source-id: 4b061c27c5c44ef0d06066490ed16cab3d0c7a64

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18146 [jit] fix bug in alias analysis**

We handled hasWriters() incorrectly in the case of wildcards. There's
even a comment describing the correct behavior. Sad!

Much thanks to t-vi for tracking this down and suggesting the fix!

Differential Revision: D14524208

fbshipit-source-id: 8010b54257241bd64013a0d0a8b6e7d22d8c70af

5 years agoAdd backend checks to solve methods (gesv, cholesky_solve) (#18116)
vishwakftw [Tue, 19 Mar 2019 17:36:23 +0000 (10:36 -0700)]
Add backend checks to solve methods (gesv, cholesky_solve) (#18116)

Summary:
Changelog:
- Incorporate a simple backend check in the linearSolveCheckInputs function in LinearAlgebraUtils.h
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18116

Differential Revision: D14504469

Pulled By: soumith

fbshipit-source-id: 7402b6dbaa8d73048946613b806d54f68bcbd8f4

5 years agofix -Wsign-compare warnings for some files inside c2 (#18123)
Hector Yuen [Tue, 19 Mar 2019 17:30:29 +0000 (10:30 -0700)]
fix -Wsign-compare warnings for some files inside c2 (#18123)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18123

The motivation of this fix is to resolve things like
`for (auto i = 0; i < N; i++)` where N is bigger than int32.

These instances of comparison were found by enabling -Wsign-compare.

There are way too many things to fix, so this is issued as a series of fixes.

The plan is to fix all these issues and then enable this flag in Caffe2 to catch future instances.

Reviewed By: ZolotukhinM

Differential Revision: D14497094

fbshipit-source-id: bca3927a2188bd33a508fa503ba221c220cdaefe

5 years agoSGD: remove unneeded multiply-add initialization operations (#18114)
Neta Zmora [Tue, 19 Mar 2019 17:29:07 +0000 (10:29 -0700)]
SGD: remove unneeded multiply-add initialization operations (#18114)

Summary:
The momentum buffer is initialized to the value of
d_p, but the current code takes the long way to do this:
1. Create a buffer of zeros
2. Multiply the buffer by the momentum coefficient
3. Add d_p to the buffer

All of these can be collapsed into a single step (sketched below):
1. Create a clone of d_p
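A minimal sketch of the change (standard SGD momentum-buffer setup; variable names are illustrative):
```python
import torch

d_p = torch.randn(10)  # stand-in for a parameter's gradient
momentum = 0.9

# before: three operations
buf = torch.zeros_like(d_p)
buf.mul_(momentum).add_(d_p)   # momentum * 0 + d_p == d_p

# after: one operation
buf = torch.clone(d_p).detach()
```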
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18114

Differential Revision: D14509122

Pulled By: ezyang

fbshipit-source-id: 4a79b896201d5ff20770b7ae790c244ba744edb8

5 years agospecialized CUDA impl for dropout in AD (#17756)
Ailing Zhang [Tue, 19 Mar 2019 17:20:06 +0000 (10:20 -0700)]
specialized CUDA impl for dropout in AD (#17756)

Summary:
In ATen we have a `_fused_dropout` implementation for the CUDA case. As ngimel suggested, discarding it in JIT AD hurts performance.

It doesn't seem ideal to include a backend-specific implementation in AD, but this is helpful to prevent a performance regression at the moment.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17756

Differential Revision: D14368999

Pulled By: ailzhang

fbshipit-source-id: 9a371c5020f630e8f6e496849ec9772b6f196169

5 years agoFix underflow issue with dirichlet sample (#17488)
Neeraj Pradhan [Tue, 19 Mar 2019 17:18:12 +0000 (10:18 -0700)]
Fix underflow issue with dirichlet sample (#17488)

Summary:
Addresses #15738, using fritzo's suggestion. This adds a `torch._sample_dirichlet` method in `Distributions.cpp` and `Distributions.cu`.
 - For CPU, this leads to no perf hit since all we do is promote the `alpha` to double when getting the gamma samples (the gamma sampler anyway uses `accscalar_t` (double for CPU)) and cast it back to float32 on return.
 - I have added an analogous method for CUDA as well, but the default sampler for CUDA uses scalar_t for efficiency, so I have kept it as that. With this, I do not see the bias towards 1 as reported in #15738 with `float32`, but there is a spurious mode at 0.5, as would be expected. Users would need to explicitly use `float64` for GPU to not see the spurious mode at 0.5. (EDIT: see note below, it appears that the bias issue is still there for certain builds).

Added some tests and checked that there is no perf regression. My experience with C++ is very limited, so apologies in advance if I missed something basic. cc. ailzhang, fritzo, fmassa
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17488

Differential Revision: D14410301

Pulled By: ezyang

fbshipit-source-id: 62b2f694b4642685eab06db96d74ce28e05c3992

5 years agoKill Backend constructor of TensorOptions. (#18137)
Gregory Chanan [Tue, 19 Mar 2019 14:57:21 +0000 (07:57 -0700)]
Kill Backend constructor of TensorOptions. (#18137)

Summary:
It's wrong and unused.  Use one of the many other constructors instead :).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18137

Differential Revision: D14508364

Pulled By: gchanan

fbshipit-source-id: 19c6ff78ad9d9221d0874425edd02b78627c4ca7

5 years agoRemove deviceTypeToBackend, which is underspecified. (#18135)
Gregory Chanan [Tue, 19 Mar 2019 14:50:31 +0000 (07:50 -0700)]
Remove deviceTypeToBackend, which is underspecified. (#18135)

Summary:
There are multiple backends for a device type, so we just kill this function.
Also, kill a `getNonVariableType` instance which was also underspecified.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18135

Differential Revision: D14507474

Pulled By: gchanan

fbshipit-source-id: fc791a76d4b851b23d09a070725f3838621eb13d

5 years agoStop generating unimplemented type methods. (#18144)
Gregory Chanan [Tue, 19 Mar 2019 14:36:58 +0000 (07:36 -0700)]
Stop generating unimplemented type methods. (#18144)

Summary:
This gets rid of 'aten_sparse' which was used at one time with legacy THS code, but is now only overloaded in native_parse.py.
The way that 'aten_sparse' worked was wonky -- it extended all backends (default [CPU, CUDA]) to include sparse.
But this is totally unnecessary; we already have the backends we need to generate for from type_method_definition_dispatch.

codegen changes: https://github.com/gchanan/pytorch/blob/fc37c8e171b7ebd1b1755469cf6a146a2abedc13/diff.txt
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18144

Reviewed By: ezyang

Differential Revision: D14511324

Pulled By: gchanan

fbshipit-source-id: 8bb4ac4cf0985f8756790779a22bc229e18e8e7f

5 years agoCorrected type of 'swap' in torch.nn.TripletMarginLoss (#18115)
Bharat Raghunathan [Tue, 19 Mar 2019 14:05:29 +0000 (07:05 -0700)]
Corrected type of 'swap' in torch.nn.TripletMarginLoss (#18115)

Summary:
Fix #16428 by correcting type of 'swap' from `float` to `bool`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18115

Differential Revision: D14516615

Pulled By: ezyang

fbshipit-source-id: c61a45d533f3a443edf3c31c1ef3d9742bf46d2b

5 years agohandle scenario when GPU support is not available and p2p_access_pattern is empty...
Deepali Chourasia [Tue, 19 Mar 2019 06:06:03 +0000 (23:06 -0700)]
handle scenario when GPU support is not available and p2p_access_pattern is empty (#17974)

Summary:
Observed that when there is no GPU support available, `workspace` sets `GetGpuPeerAccessPattern` to `[]` in
https://github.com/pytorch/pytorch/blob/master/caffe2/python/workspace.py#L79,
and this case is not handled in https://github.com/pytorch/pytorch/blob/master/caffe2/python/data_parallel_model.py#L1065.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17974

Differential Revision: D14517066

Pulled By: ezyang

fbshipit-source-id: 186911d95c07e9a55ab82a41d0c7c919e4281bb4