platform/upstream/pytorch.git
Sacha [Mon, 1 Apr 2019 14:23:06 +0000 (07:23 -0700)]
Move flags that do not work on MSVC (#18686)

Summary:
MSVC errors out on these flags, as they are not supported
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18686

Differential Revision: D14704254

Pulled By: ezyang

fbshipit-source-id: 936d33ed6b7474d7774a49505cdac50dbe8dd99a

Junjie Bai [Mon, 1 Apr 2019 05:26:27 +0000 (22:26 -0700)]
Fix unused lambda capture warnings (#18662)

Summary:
```
aten/src/ATen/native/cpu/DistanceOpsKernel.cpp.DEFAULT.cpp:109:104: warning: lambda capture 'combs' is not used [-Wunused-lambda-capture]
    parallel_for(0, combs, internal::GRAIN_SIZE / (16 * m), [p, self_start, self_end, n, m, res_start, combs](int64_t k, int64_t end) {
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18662

Differential Revision: D14699379

Pulled By: bddppq

fbshipit-source-id: 5062d4327bb5f7b485c2ffa30c98e10576416f03

Jongsoo Park [Mon, 1 Apr 2019 04:25:17 +0000 (21:25 -0700)]
handle a rare case where histogram min is inf/nan (#18239)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18239

When min is inf or nan, we get UBSAN errors

Reviewed By: csummersea

Differential Revision: D14537668

fbshipit-source-id: e70ffb5ecd2b10793356070c69fdabf8f25b203e

Edward Yang [Mon, 1 Apr 2019 02:08:03 +0000 (19:08 -0700)]
Delete duplicated technical content from contribution_guide.rst (#18628)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18628
ghimport-source-id: d94b81a6f303883d97beaae25344fd591e13ce52

Stack from [ghstack](https://github.com/ezyang/ghstack):
* #18629 Provide flake8 install instructions.
* **#18628 Delete duplicated technical content from contribution_guide.rst**

There's a useful guide in contribution_guide.rst, but the
technical bits were straight-up copy-pasted from CONTRIBUTING.md,
and I don't think it makes sense to break the CONTRIBUTING.md
link.  Instead, I deleted the duplicated bits and added a cross-reference
to the rst document.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision: D14701003

fbshipit-source-id: 3bbb102fae225cbda27628a59138bba769bfa288

Edward Yang [Mon, 1 Apr 2019 01:56:12 +0000 (18:56 -0700)]
Provide flake8 install instructions. (#18629)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18629
ghimport-source-id: 66a8871c56ffcfa7d4bfdf601e180fae99194e28

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18629 Provide flake8 install instructions.**
* #18628 Delete duplicated technical content from contribution_guide.rst

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision: D14701004

fbshipit-source-id: b64292f0ef01b7894cf6b9ff8d5fd9e921c8d162

Rui Zhu [Mon, 1 Apr 2019 00:37:17 +0000 (17:37 -0700)]
Adding quantized tensor shape/type info support for caffe2=>glow in caffe2 side (#18621)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18621

This diff added caffe2 support for onnxifi quantization.

Reviewed By: yinghai

Differential Revision: D14648767

fbshipit-source-id: 4ddb492cacbba6142305866e6dbb875880acaea3

David Riazati [Sun, 31 Mar 2019 23:17:28 +0000 (16:17 -0700)]
Fix test on windows (#18667)

Summary:
Breakage in #18188
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18667

Differential Revision: D14700133

Pulled By: driazati

fbshipit-source-id: 4cc26bd579fc1b074b3bef6046cc1030facee130

Ailing Zhang [Sun, 31 Mar 2019 15:41:46 +0000 (08:41 -0700)]
Enforce check ad in test_jit (#18509)

Summary:
If a test triggers autodiff, it must have a `DifferentiableGraph` in its differentiated forward graph, and this subgraph must have either the original aten node, or the corresponding nodes used in AD formula.

Typically a forward differentiable graph looks like this:
```
graph(%i0 : Float(),
      %i1 : Float()):
  %3 : Float() = prim::DifferentiableGraph_0(%i0, %i1)
  return (%3)
with prim::DifferentiableGraph_0 = graph(%0 : Float(),
      %1 : Float()):
  %2 : Float() = aten::max(%0, %1)
  return (%2)
```
which tells us `aten::max(Tensor self, Tensor other) -> Tensor` is symbolically differentiable.

Update: there are a lot of cases (fusions/ConstantChunk/Python implementations) that break it, so I decided to make the check optionally take node names when they differ from the function name.
~~[OLD]Theoretically I could also check if `aten::max` is in the differentiable block or not to be more precise, but there're also cases like `chunk` where in a differentiable block it's replaced with a prim node (ConstantChunk) and we will have to special case them. Any suggestions here (to be more precise or no) is very welcome!~~

We used to have a list of which nn tests should be run against AD; I moved it to a field set when constructing our test (either torch or nn). I think it's cleaner this way, and it matches the fact that for the same op we may support one schema but not all; this way we can just turn on the corresponding test that triggers the supported schema.

cc: apaszke zdevito wanchaol ngimel for a review

[Done]:
- Went through a manual second pass of all tests to check whether they should enable the AD test.
- Added a README about how to add AD for an op and how to add/enable its test in test_jit.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18509

Differential Revision: D14696811

Pulled By: ailzhang

fbshipit-source-id: c5e693277baac585cd3aed5ab2c0e7faa5e6f29f

Junjie Bai [Sun, 31 Mar 2019 09:05:14 +0000 (02:05 -0700)]
Use proper isnan check

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18663

Differential Revision: D14699385

Pulled By: bddppq

fbshipit-source-id: 596ad3371e7704802591e49f7e1c55dc6cd2896f

Soumith Chintala [Sat, 30 Mar 2019 20:24:11 +0000 (13:24 -0700)]
pad_circular -> _pad_circular (#18608)

Summary:
pad_circular is really private, as circular padding is exposed via `F.pad`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18608

Differential Revision: D14691704

Pulled By: soumith

fbshipit-source-id: 8c2f90596feed670976115041efed3ca071e8306

Will Feng [Sat, 30 Mar 2019 18:28:44 +0000 (11:28 -0700)]
Fix wrap(at::Scalar) (#18632)

Summary:
Problem:
```cpp
// This function expects a `Variable` as input
inline PyObject* wrap(at::Tensor tensor) {
  return THPVariable_Wrap(Variable(std::move(tensor)));
}

inline PyObject* wrap(at::Scalar scalar) {
  // This function calls `wrap(at::Tensor tensor)` (the function above), but since
  // `scalar_to_tensor(...)` returns a `Tensor` and not a `Variable`, the call to
  // `wrap(at::Tensor tensor)` will fail with "Tensor that was converted to Variable
  // was not actually a Variable", which is not what we want.
  return wrap(scalar_to_tensor(scalar));
}
```

The right fix is to call `make_variable(...)` with the tensor returned from `scalar_to_tensor(scalar)`.

This unblocks https://github.com/pytorch/pytorch/pull/18230 as it is the only patch that hits this code path now. All other native functions that return Scalar (such as `item()` or `_local_scalar_dense()`) either have a custom-defined implementation that doesn't go through this path, or are not exposed to Python at all.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18632

Differential Revision: D14689293

Pulled By: yf225

fbshipit-source-id: be7ba5d3de83a69533a2997de97ad92989ff78ee

Gao, Xiang [Sat, 30 Mar 2019 17:50:48 +0000 (10:50 -0700)]
Deprecated type() -> scalar_type()

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18642

Differential Revision: D14696848

Pulled By: ezyang

fbshipit-source-id: 43d1f86ecee5f6c6c5b70fd7d0e2063c3fc473ab

Edward Yang [Sat, 30 Mar 2019 15:58:10 +0000 (08:58 -0700)]
Turn on F401: Unused import warning. (#18598)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18598
ghimport-source-id: c74597e5e7437e94a43c163cee0639b20d0d0c6a

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18598 Turn on F401: Unused import warning.**

This was requested by someone at Facebook; this lint is turned
on for Facebook by default.  "Sure, why not."

I had to noqa a number of imports in __init__.  Hypothetically
we're supposed to use __all__ in this case, but I was too lazy
to fix it.  Left for future work.

Be careful!  flake8-2 and flake8-3 behave differently with
respect to import resolution for # type: comments: flake8-3 will
report an import as unused; flake8-2 will not.  For now, I just
noqa'd all these sites.

All the changes were done by hand.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision: D14687478

fbshipit-source-id: 30d532381e914091aadfa0d2a5a89404819663e3

ryan [Sat, 30 Mar 2019 08:20:55 +0000 (01:20 -0700)]
Update documentation for CTCLoss (#18415)

Summary:
This is meant to resolve #18249, where I pointed out a few things that could improve the CTCLoss docs.

My main goal was to clarify:
- Target sequences are sequences of class indices, excluding the blank index
- Lengths of `target` and `input` are needed for masking unequal-length sequences, and do not necessarily equal S, which is the length of the longest sequence in the batch.
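
A short sketch illustrating both points (shapes are illustrative, and `blank=0` is assumed):

```python
import torch
import torch.nn as nn

T, N, C, S = 50, 2, 20, 30  # input length, batch, classes (incl. blank), max target length
log_probs = torch.randn(T, N, C).log_softmax(2)
targets = torch.randint(1, C, (N, S), dtype=torch.long)  # class indices; blank (0) excluded
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.randint(10, S, (N,), dtype=torch.long)  # per-sample lengths, not necessarily S
loss = nn.CTCLoss(blank=0)(log_probs, targets, input_lengths, target_lengths)
```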

I thought about Thomas's suggestion to link the distill.pub article, but I'm not sure about it. I think that should be up to y'all to decide.

I have no experience with .rst, so it might not render as expected :)

t-vi ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18415

Differential Revision: D14691969

Pulled By: soumith

fbshipit-source-id: 381a2d52307174661c58053ae9dfae6e40cbfd46

Sebastian Messmer [Sat, 30 Mar 2019 07:03:46 +0000 (00:03 -0700)]
Fallback kernels (#18443)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18443

Allow registering a kernel without a dispatch key. In this case, the kernel becomes a fallback kernel that is called whenever no other kernel matches.
This is also useful for the legacy function based API (since that API doesn't know about dispatch keys) or any other custom ops that don't care about dispatch
and just want one kernel to be called no matter the dispatch key.

Reviewed By: dzhulgakov

Differential Revision: D14603258

fbshipit-source-id: 242dc8871dad2989ca25079854d0cc97429e7199

Sebastian Messmer [Sat, 30 Mar 2019 07:03:46 +0000 (00:03 -0700)]
Introduce lambda-based kernel API (#18541)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18541

Allow registering lambdas as c10 kernels.

Reviewed By: dzhulgakov

Differential Revision: D14653005

fbshipit-source-id: f867cc776b1339e83b7a2e1935f5cf924cfba44a

Sebastian Messmer [Sat, 30 Mar 2019 07:03:46 +0000 (00:03 -0700)]
Report better errors when kernel or dispatch key are missing (#18302)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18302

These might be use cases we want to support in the future, but they don't work yet.
Let's at least report an error instead of segfaulting or worse.

Reviewed By: dzhulgakov

Differential Revision: D14572346

fbshipit-source-id: 49262ce131493bc887defe2978d8b22f202cd8cc

Sebastian Messmer [Sat, 30 Mar 2019 07:03:44 +0000 (00:03 -0700)]
Move stuff to cpp files (#18301)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18301

Move code out of headers and templates into source files and non-templates.

Reviewed By: dzhulgakov

Differential Revision: D14572347

fbshipit-source-id: 9fd5d62d54000a95e93076cd73f591ba2c5c2653

Sebastian Messmer [Sat, 30 Mar 2019 07:03:44 +0000 (00:03 -0700)]
Check kernel against function schema in c10 op registration (#18256)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18256

This diff infers the function schema from the kernel function/functor and checks that it matches the specified function schema.

This diff does not allow (yet) to omit specifying the function schema in the registration API. That will come in a future diff.

Reviewed By: dzhulgakov

Differential Revision: D14552738

fbshipit-source-id: 00202b489ede19f26ae686c97416b38c72c11532

Sebastian Messmer [Sat, 30 Mar 2019 07:03:44 +0000 (00:03 -0700)]
Add functor- and function-based kernel registration API (#18162)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18162

- Adds the API to register a functor- and function-based kernel.
- Change the experimental c10 ops to use this new API instead of the old one
- Deletes the old APIs in KernelRegistration.h and OpSchemaRegistration.h

Reviewed By: dzhulgakov

Differential Revision: D14514239

fbshipit-source-id: 35b2f6e8f62964e54886450a6a5fac812ed20f26

Sebastian Messmer [Sat, 30 Mar 2019 07:03:43 +0000 (00:03 -0700)]
New operator registration MVP (#18161)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18161

This introduces version 0 for the new operator registration.

For now, it only works with kernels that are defined as stack-based functions.
This is actually not the intended public API for defining kernels, but it's the basis which is going to be used to define the public APIs (see diffs on top for them),
and it's also the API used for exposing caffe2 operators.

This diff also switches the mechanism for exposing caffe2 operators to the new mechanism.

Reviewed By: dzhulgakov

Differential Revision: D14514231

fbshipit-source-id: 454ab7b5b46a10203aa27b175400d23f818dd1df

Junjie Bai [Sat, 30 Mar 2019 05:42:18 +0000 (22:42 -0700)]
Fix trt installation in CI (#18609)

Summary:
caffe2_py2_cuda9_0_cudnn7_ubuntu16_04_build is failing
```
...
Mar 29 04:44:46 Need to get 174 MB of archives.
Mar 29 04:44:46 After this operation, 576 MB of additional disk space will be used.
Mar 29 04:44:46 Do you want to continue? [Y/n] Abort.
Exited with code 1
...
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18609

Differential Revision: D14694990

Pulled By: bddppq

fbshipit-source-id: 260446a8650f660a2baf123a3f17efdf0a8d6c64

David Riazati [Sat, 30 Mar 2019 02:06:06 +0000 (19:06 -0700)]
Attribute serialization improvements (#18188)

Summary:
* adds attributes to `ScriptModule.__getattr__` so they can be accessed in Python after re-importing
* full support for all the possible values for an `int64_t`
    * this necessitated a bunch more `pushWhatever` functions, so re-introduced a templated version to cut down on duplicate code
* tests to validate references / value sharing works
* adds `torch.jit.Unpickler` which people can use to de-serialize the pickle files into Python / have a quick reference on how to do this without PyTorch
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18188

Differential Revision: D14527490

Pulled By: driazati

fbshipit-source-id: efd15579cc04aa2e28c4b2c9490d82d849dee559

Cheng,Penghui [Sat, 30 Mar 2019 01:51:50 +0000 (18:51 -0700)]
support pre-convert filter format for mkldnn training mode and change 'OptimizeForIdeep' to 'OptimizeForMkldnn' (#15171)

Summary:
For MKL-DNN, the filter data will be reordered to the primitive format, which takes a lot of time.
So this patch provides a method to convert the filter format before training.
Also, "OptimizeForIdeep" is changed to "OptimizeForMkldnn" in this patch.
This patch depends on https://github.com/pytorch/pytorch/pull/12866
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15171

Differential Revision: D14590741

Pulled By: yinghai

fbshipit-source-id: 07971c9977edac3c8eec08ca2c39cda639683492

Jerry Zhang [Sat, 30 Mar 2019 01:26:07 +0000 (18:26 -0700)]
Tensor construction codemod(raw_mutable_data) (#16373)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16373

motivation: https://github.com/pytorch/pytorch/pull/12407
This is a manual diff.
most of the fixes should be:

```
auto* Y = Output(0);
Y->Resize(dims);
Y->raw_mutable_data(dtype);
```
-->
```
auto* Y = Output(0, dims, at::dtype(dtype));
```
But there might be other cases.

Reviewed By: dzhulgakov

Differential Revision: D13725460

fbshipit-source-id: 649a4b0e42f62cda1a60171dd9fa3e440dc9dca1

David Riazati [Sat, 30 Mar 2019 01:23:28 +0000 (18:23 -0700)]
Add hash() global (#18258)

Summary:
This adds `hash()` which supports `int`, `str`, and `float`. It relies on `std::hash` which is implementation defined, so the result of `hash()` in TorchScript is not the same as in Python, but should satisfy the same properties.
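
A minimal sketch of the behavior, assuming standard TorchScript scripting:

```python
import torch

@torch.jit.script
def hash_all(x: int, s: str, f: float):
    # hash() here maps to std::hash, so values need not match CPython's,
    # but equal inputs still hash equal within a process
    return hash(x), hash(s), hash(f)

print(hash_all(42, "foo", 1.5))
```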
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18258

Differential Revision: D14692317

Pulled By: driazati

fbshipit-source-id: 909df5d024bb3feea157d5a203b7de53c72261c9

Elias Ellison [Sat, 30 Mar 2019 01:10:36 +0000 (18:10 -0700)]
Move fuser to test_jit_fuser (#18590)

Summary:
Start of breaking up test_jit.py

New files will have the format test_jit_* so they are easily greppable, but they remain in the same directory so we don't have to go through multiple sources for imports.

I am adding a test that's expected to fail to be sure it's running.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18590

Reviewed By: wanchaol

Differential Revision: D14677094

Pulled By: eellison

fbshipit-source-id: 9782c6aa9525bb6f332fc75cfff004c83a417522

James Reed [Sat, 30 Mar 2019 00:06:08 +0000 (17:06 -0700)]
Experimental logging/counters API (#18235)

Summary:
This defines a generic counters API that users can utilize to provide monitoring functionality in e.g. a production service. We expose both counters for runtime internals as well as a TorchScript API to create user-defined counters. Synopsis of the API:

- `torch/csrc/jit/script/logging.h` specifies the externally-facing API in C++
- `torch/jit/_logging.py` specifies the Python API

We use an interface, `LoggerBase`, to define the interactions between users and a logging backend. Implementing a subclass of `LoggerBase` allows the user to handle these events in a custom way, such as logging into a DB or calling into an infra-specific counters API.

From the frontend perspective, we can create log events in two ways:
1. We provide an `add_stat_value(name, val)` function. This calls into the Logger backend with a key/value pair. For example, we might call `add_stat_value('foo', 1)` to bump an event counter.
2. We provide a `time_point()` function to record a timestamp in nanoseconds. This can be used in conjunction with `add_stat_value` to record runtime wall clock durations.

Examples of frontend usage can be found in `test_jit.py TestLogging`.

We provide a trivial `LockingLogger` implementation as an example and for testing purposes. It is likely not ready for production usage. It demonstrates that a backend implementing the API can do things like specify aggregation types and report these aggregate stats via the `get_counters()` API.
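
A minimal usage sketch based on the synopsis above; the module path follows the summary, but treat the exact signatures as assumptions:

```python
import torch.jit._logging as logging  # Python API per the summary above

start = logging.time_point()              # timestamp in nanoseconds
logging.add_stat_value('my_counter', 1)   # bump a user-defined event counter
elapsed_ns = logging.time_point() - start
logging.add_stat_value('my_duration_ns', elapsed_ns)
```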
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18235

Differential Revision: D14545060

Pulled By: jamesr66a

fbshipit-source-id: 04099543a1898cfdd411511e46e03d5dce9b4881

David Riazati [Sat, 30 Mar 2019 00:03:50 +0000 (17:03 -0700)]
Revert D14668859: [pytorch][PR] Re-land Parsing file check

Differential Revision:
D14668859

Original commit changeset: 3825a35ddc61

fbshipit-source-id: f3343ec6b63fe8f1f04959adfac4331865990047

Pieter Noordhuis [Fri, 29 Mar 2019 23:15:10 +0000 (16:15 -0700)]
Update argument names of torch::autograd::FunctionPostHook (#18140)

Summary:
They are called as (outputs, inputs) and were named (inputs, outputs).

Possible follow up fix is to make the outputs argument an lvalue to allow for calling multiple post hooks without ever copying outputs vector. It looks like the copy is now forced because the hook takes a const reference as input and returns an value. This would change the prototype of the function, so needs further discussion.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18140

Differential Revision: D14684498

Pulled By: pietern

fbshipit-source-id: 1bd3ddbdd1ff7fe0a18241de5a9ec745a4e7ef07

Soumith Chintala [Fri, 29 Mar 2019 23:02:02 +0000 (16:02 -0700)]
note on updating existing source (#18409)

Summary:
Fixes https://github.com/pytorch/pytorch/issues/18388
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18409

Differential Revision: D14597666

Pulled By: soumith

fbshipit-source-id: 156104c0cd19da06f6f96a225228d1e8cf831af1

eellison [Fri, 29 Mar 2019 22:35:37 +0000 (15:35 -0700)]
Re-land Parsing file check (#18570)

Summary:
The last time I tried to land it there was a merge race with the docs coverage test lol. Re-landing with the fix.

Re-land of https://github.com/pytorch/pytorch/pull/18304
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18570

Differential Revision: D14668859

Pulled By: eellison

fbshipit-source-id: 3825a35ddc6179a0d433d70d22b5c1a96c20b21a

Spandan Tiwari [Fri, 29 Mar 2019 22:17:14 +0000 (15:17 -0700)]
Refactoring serialization of ONNX initializers to be name-based (Resubmission) (#17830)

Summary:
houseroad - this is the resubmission of https://github.com/pytorch/pytorch/pull/17420, as suggested.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17830

Reviewed By: zrphercule

Differential Revision: D14398714

Pulled By: houseroad

fbshipit-source-id: bda475f1ae8a5273ebdb0f6883fc66036c29d326

Mikhail Zolotukhin [Fri, 29 Mar 2019 22:01:36 +0000 (15:01 -0700)]
Initial implementation of InsertObserverNodes pass. (#18152)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18152
ghimport-source-id: 1dd5e62c4d93394dcd8d8af2871554575c8d3d1a

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18152 Initial implementation of InsertObserverNodes pass.**
* #18151 Add quant-passes stubs.

gh-metadata: pytorch pytorch 18150 gh/zolotukhinm@gmail.com/2/head

Differential Revision: D14584223

fbshipit-source-id: 30896acc1a8901d22c6a167eb87d2fbaafbbeb6f

Gu, Jinghui [Fri, 29 Mar 2019 21:06:09 +0000 (14:06 -0700)]
Fix bug in tensor feed which caused crash due to wrong tensor type (#18552)

Summary:
In the blob feeder for the ideep device, the wrong device option was given, which led to a crash.
This patch corrects the device option to fix the bug.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18552

Differential Revision: D14679838

Pulled By: yinghai

fbshipit-source-id: bde11e6a6fe44822166881dcb7c9bd0b34b4ecf3

Gu, Jinghui [Fri, 29 Mar 2019 20:50:51 +0000 (13:50 -0700)]
Upgrade mkldnn-bridge to revert tensor capacity patch and prepare for DNNLOWP support (#18471)

Summary:
1. Upgrade mkldnn-bridge to revert tensor capacity patch to avoid ASAN issue.
2. Prepare for DNNLOWP support.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18471

Differential Revision: D14621569

Pulled By: yinghai

fbshipit-source-id: 9df300b77d0f2acd1a4f63c2925b7a7cab7a474e

Yanghan Wang [Fri, 29 Mar 2019 20:31:45 +0000 (13:31 -0700)]
register BoxWithNMSLimit with C10

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17956

Reviewed By: houseroad

Differential Revision: D14417300

fbshipit-source-id: eb5e2ba84513b3b7bfa509dc442424b13fe9148f

Gregory Chanan [Fri, 29 Mar 2019 20:31:42 +0000 (13:31 -0700)]
Fix c10d build without nccl.

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18582

Differential Revision: D14672928

Pulled By: gchanan

fbshipit-source-id: 74e9805cbaf5ebe8e3f579fe08dad72eb410b80a

Will Feng [Fri, 29 Mar 2019 19:59:29 +0000 (12:59 -0700)]
Add named submodule support to nn::Sequential (#17552)

Summary:
Previously, we were not able to assign names to `nn::Sequential`'s submodules. This PR adds this feature to match the Python API. Example use:
```cpp
Sequential sequential(named_submodule({
      {"linear", Linear(10, 3)},
      {"conv2d", Conv2d(1, 2, 3)},
      {"dropout", Dropout(0.5)},
      {"batchnorm", BatchNorm(5)},
      {"embedding", Embedding(4, 10)},
      {"lstm", LSTM(4, 5)}
}));
```

It also enables loading parameters of a Python `nn.Sequential` module with custom submodule names into the C++ frontend, unblocking https://github.com/pytorch/vision/pull/728#issuecomment-466661344.
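
For reference, this is the existing Python-side idiom being matched (a sketch using standard `torch.nn`):

```python
from collections import OrderedDict
import torch.nn as nn

model = nn.Sequential(OrderedDict([
    ('linear', nn.Linear(10, 3)),
    ('conv2d', nn.Conv2d(1, 2, 3)),
    ('dropout', nn.Dropout(0.5)),
]))
print(model.linear)  # submodules are addressable by name
```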
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17552

Differential Revision: D14246834

Pulled By: yf225

fbshipit-source-id: 3030b5c5d68f6dd5d3e37ac4b4f98dc6d6d9ba72

Vishwak Srinivasan [Fri, 29 Mar 2019 19:58:23 +0000 (12:58 -0700)]
Rename `btriunpack` to `lu_unpack` (#18529)

Summary:
Changelog:
- Renames `btriunpack` to `lu_unpack` to remain consistent with the `lu` function interface.
- Rename all relevant tests, fix callsites
- Create a tentative alias for `lu_unpack` under the name `btriunpack` and add a deprecation warning to discourage its usage.
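
A minimal sketch of the renamed API (assuming the `torch.lu` spelling from the companion `btrifact` -> `lu` rename later in this log):

```python
import torch

A = torch.randn(2, 3, 3)
LU, pivots = torch.lu(A)               # LU factorization (formerly btrifact)
P, L, U = torch.lu_unpack(LU, pivots)  # formerly btriunpack
print(torch.allclose(P @ L @ U, A))    # True up to numerical error
```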
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18529

Differential Revision: D14683161

Pulled By: soumith

fbshipit-source-id: 994287eaa15c50fd74c2f1c7646edfc61e8099b1

Elias Ellison [Fri, 29 Mar 2019 18:45:38 +0000 (11:45 -0700)]
fix lint (#18623)

Summary:
Fix lint
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18623

Differential Revision: D14686265

Pulled By: eellison

fbshipit-source-id: 4bbe0f5bc58f508cbf4bc1baef2029ce1eaa42d8

Xiaodong Wang [Fri, 29 Mar 2019 18:04:23 +0000 (11:04 -0700)]
Manual hipify caffe2/distributed and rocm update (no hcc modules support) (#18088)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18088

Manually hipify the distributed folder

Reviewed By: bddppq

Differential Revision: D14482702

fbshipit-source-id: cc0abdf525b423ab1f18db8010d21e27c6668d36

Summer Deng [Fri, 29 Mar 2019 16:24:07 +0000 (09:24 -0700)]
Change dnnlowp log level from warning to v2 (#18576)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18576

As in title

Reviewed By: feiyu1990

Differential Revision: D14670898

fbshipit-source-id: 1983099b2ba57daab393278553f10dcdb1812fdf

Stas Bekman [Fri, 29 Mar 2019 13:48:53 +0000 (06:48 -0700)]
multiline KeyError msg python bug workaround (#18557)

Summary:
make multiline KeyError msg readable by working around a python bug https://bugs.python.org/issue2651

discussion: https://github.com/pytorch/pytorch/issues/16647
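
A sketch of the workaround pattern (a `str` subclass whose `repr` is itself, so the newlines survive `KeyError`'s repr of its argument); the helper name here is illustrative:

```python
class KeyErrorMessage(str):
    """str subclass whose repr is itself, so newlines are not escaped"""
    def __repr__(self):
        return self

try:
    raise KeyError(KeyErrorMessage("first line\nsecond line"))
except KeyError as e:
    print(e)  # prints two readable lines instead of 'first line\nsecond line'
```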
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18557

Differential Revision: D14681086

Pulled By: soumith

fbshipit-source-id: acbd13a823302c854c3d364028ed414fd8ce6bc8

Søren Rasmussen [Fri, 29 Mar 2019 13:42:52 +0000 (06:42 -0700)]
ReduceLrOnPlateau: best=current -> best=copy(current) (#16364) (#16697)

Summary:
Fixes #16364
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16697

Differential Revision: D14680879

Pulled By: soumith

fbshipit-source-id: c50c22f3eacea4474fb3a04fe85fbf11d5a177c9

crcrpar [Fri, 29 Mar 2019 13:41:49 +0000 (06:41 -0700)]
make InstanceNorm1d raise an error if the input is 2D (#11992)

Summary:
Resolves #11991 .

Any comment is welcome.
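
A sketch of the behavior change (shapes are illustrative):

```python
import torch
import torch.nn as nn

m = nn.InstanceNorm1d(4)
ok = m(torch.randn(2, 4, 8))  # 3D (N, C, L) input: works
m(torch.randn(2, 4))          # 2D input: now raises an error instead of silently misbehaving
```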
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11992

Differential Revision: D14680974

Pulled By: soumith

fbshipit-source-id: 8e287a9c32bf43b35edc9d127f16ed6b72c61d91

Arunava [Fri, 29 Mar 2019 13:36:40 +0000 (06:36 -0700)]
Fixed torch.arange docs (#18604)

Summary:
Kindly let me know if it's okay and if there are any places where I need to make a fix. Closes #18534
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18604

Differential Revision: D14680712

Pulled By: soumith

fbshipit-source-id: 030e4a3d8f7839cbe2b8a3ef386323f0d39eb81a

Junjie Bai [Fri, 29 Mar 2019 08:16:52 +0000 (01:16 -0700)]
Minor fixes in fastrnns benchmarks

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18613

Reviewed By: wanchaol

Differential Revision: D14681838

fbshipit-source-id: 60bd5c9b09398c74335f003cd21ea32dd1c45876

Vishwak Srinivasan [Fri, 29 Mar 2019 07:27:48 +0000 (00:27 -0700)]
Rename `btrifact*` to `lu` (#18435)

Summary:
Changelog:

- Renames `btrifact` and `btrifact_with_info` to `lu` to remain consistent with other factorization methods (`qr` and `svd`).
- Now we will have only one function and method, named `lu`, which performs the `lu` decomposition. This function takes a `get_infos` kwarg which, when set to True, includes an infos tensor in the tuple.
- Rename all tests, fix callsites
- Create a tentative alias for `lu` under the names `btrifact` and `btrifact_with_info`, and add a deprecation warning to discourage usage.
- Add the single-batch version of `lu` so that users don't have to unsqueeze and squeeze for a single square matrix (see changes in determinant computation in `LinearAlgebra.cpp`)
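
A short before/after sketch of the renamed API:

```python
import torch

A = torch.randn(3, 3)                            # single matrix, no unsqueeze needed now
LU, pivots = torch.lu(A)                         # formerly btrifact
LU, pivots, infos = torch.lu(A, get_infos=True)  # formerly btrifact_with_info
```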
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18435

Differential Revision: D14680352

Pulled By: soumith

fbshipit-source-id: af58dfc11fa53d9e8e0318c720beaf5502978cd8

Xiaomeng Yang [Fri, 29 Mar 2019 07:20:25 +0000 (00:20 -0700)]
Optimize relu op on GPU (#18506)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18506

Optimize relu op on GPU

Reviewed By: houseroad

Differential Revision: D14633171

fbshipit-source-id: bd3afa9a0bae1325d32ad4153736a0c7ecb0ec64

Lu Fang [Fri, 29 Mar 2019 06:44:20 +0000 (23:44 -0700)]
Automatic update of fbcode/onnx to fb1a80692c1ab0bd27b1072f2e7bffacba336777 (#18585)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18585

Previous import was b29e78a4efb8e5d8995f576bbf19a846807829b6

Included changes:
- **[fb1a8069](https://github.com/onnx/onnx/commit/fb1a8069)**: Fix wrongly handled attribute in MVN and test generating scripts (#1877) <Raymond Yang>
- **[b22041c3](https://github.com/onnx/onnx/commit/b22041c3)**: Add dilation attribute to MaxPool (#1864) <karljang>

Reviewed By: zrphercule, benoitsteiner

Differential Revision: D14668623

fbshipit-source-id: fa7f44b1ecc949d8dd654939d20b1e93db98b1d2

Lu Fang [Fri, 29 Mar 2019 06:17:18 +0000 (23:17 -0700)]
Automatic update of fbcode/foxi to 81e1683d6348eee4b5ed1145222dc2c41be4269c (#18596)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18596

Previous import was 2bcc4064c90e87b9638615c733485f07c47b7558

Included changes:
- **[81e1683](https://github.com/houseroad/foxi/commit/81e1683)**: Merge pull request #9 from zrphercule/add_foxi_quantization <Rui Zhu>
- **[580559c](https://github.com/houseroad/foxi/commit/580559c)**: char=>uint8 <zrphercule>
- **[1a572f7](https://github.com/houseroad/foxi/commit/1a572f7)**: add quantization <zrphercule>

Reviewed By: zrphercule

Differential Revision: D14677404

fbshipit-source-id: 09429b3bf0e7783a25b8145020e505761bad887d

Elias Ellison [Fri, 29 Mar 2019 06:07:45 +0000 (23:07 -0700)]
Delete batch tensor (#18575)

Summary:
Deleting batch tensor, since we are no longer maintaining the project and keeping it functional is blocking other improvements.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18575

Differential Revision: D14671126

Pulled By: eellison

fbshipit-source-id: b42d5b699c4d12171ed95e6d3a977532167f0d2c

Thomas Viehmann [Fri, 29 Mar 2019 05:20:08 +0000 (22:20 -0700)]
Update NNPACK to current master (#18580)

Summary:
This fixes builds on x86 (32-bit).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18580

Differential Revision: D14672462

Pulled By: soumith

fbshipit-source-id: 7629b001c2bfa3e5b6ade7f1b03a8280232a4c16

Gemfield [Fri, 29 Mar 2019 04:35:17 +0000 (21:35 -0700)]
Enhance build_ios.sh to be consistent with build_android.sh (#18564)

Summary:
1. Enhance build_ios.sh to be consistent with build_android.sh.
2. Add docs for build_ios.sh.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18564

Differential Revision: D14680752

Pulled By: soumith

fbshipit-source-id: 6d2667ed8a3c85a057a522838f5d0461dd4788cf

Hyungjoo Andrew Cho [Fri, 29 Mar 2019 03:49:43 +0000 (20:49 -0700)]
Serialization supports pathlib.Path object for the input argument (#18562)

Summary:
This allows passing a pathlib.Path object to torch.load as the input argument.
Fixes #16607
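
A sketch of the new usage:

```python
from pathlib import Path
import torch

path = Path('checkpoint.pt')
torch.save({'w': torch.randn(3)}, str(path))  # saving shown with a str path
state = torch.load(path)                      # torch.load now accepts a Path directly
```
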
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18562

Differential Revision: D14668255

Pulled By: soumith

fbshipit-source-id: 0ae4f7c210918582912f2d1ef2a98f1ab288c540

Aurélien Roy [Fri, 29 Mar 2019 03:46:03 +0000 (20:46 -0700)]
Target and input sizes mismatch warning in L1 Loss / L1 Smooth Loss (#18565)

Summary:
Adding the same warning message already present in the mse_loss function to the L1 losses when input and target sizes are different.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18565

Differential Revision: D14671415

Pulled By: soumith

fbshipit-source-id: 01f5e1fb1ea119dbb2aecf1d94d0cb462f284982

bddppq [Fri, 29 Mar 2019 01:07:10 +0000 (18:07 -0700)]
Resubmit PR-18512: Improved onnx export for 3 onnx ops (#18571)

Summary:
Fix ROCm CI failure
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18571

Differential Revision: D14669323

Pulled By: bddppq

fbshipit-source-id: 022afe5c20e680295c9cfdfe1ec14650305955a8

Jeff Daily [Fri, 29 Mar 2019 00:43:22 +0000 (17:43 -0700)]
in caching allocator, ignore and clear the error if not ready

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18584

Differential Revision: D14675041

Pulled By: bddppq

fbshipit-source-id: c1fab797e0d224e0a481a0395a3f9975c4265ff6

Ilia Cherniavskii [Fri, 29 Mar 2019 00:42:47 +0000 (17:42 -0700)]
Add external callbacks into RecordFunction (#17844)

Summary:
Add a way to insert external callbacks into PT's RecordFunction
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17844

Differential Revision: D14399664

Pulled By: ilia-cher

fbshipit-source-id: 76654799811fefd3ffed4abfb46ed95b492cebab

Jing Huang [Thu, 28 Mar 2019 23:58:54 +0000 (16:58 -0700)]
Implement rotated generate_proposals_op without opencv dependency (CPU version)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18533

Reviewed By: ezyang

Differential Revision: D14648083

fbshipit-source-id: e53e8f537100862f8015c4efa4efe4d387cef551

Ahmed Aly [Thu, 28 Mar 2019 22:58:24 +0000 (15:58 -0700)]
Use SetOutputTensor instead of copying outputs manually (#17770)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17770

As title

Reviewed By: dzhulgakov

Differential Revision: D14370937

fbshipit-source-id: f415490c38556cf03bb13dce3643775331483448

Shen Li [Thu, 28 Mar 2019 22:05:53 +0000 (15:05 -0700)]
Fix NCCL/Gloo process groups and DDP stream sync bug (#18465)

Summary:
DDP with the NCCL backend uses a [worker stream](https://github.com/pytorch/pytorch/blob/d3eb941ed96774efb8d89a0b20c9e49807ea85a7/torch/csrc/distributed/c10d/ddp.cpp#L142) to flatten grad batch
tensors, and passes the flattened tensor to [another stream](https://github.com/pytorch/pytorch/blob/d3eb941ed96774efb8d89a0b20c9e49807ea85a7/torch/lib/c10d/ProcessGroupNCCL.cpp#L379) to
run ncclAllReduce. The flattened tensor has to record the
ncclAllReduce stream; otherwise multiple streams might access the
same memory space.
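
The general pattern, sketched in Python for illustration (the actual fix lives in the C++ DDP/process group code; `all_reduce_on` is a hypothetical stand-in for the collective):

```python
import torch

comm_stream = torch.cuda.Stream()
flat = torch.randn(1 << 20, device='cuda')  # e.g. the flattened gradient bucket
with torch.cuda.stream(comm_stream):
    all_reduce_on(flat)                     # hypothetical: collective runs on comm_stream
flat.record_stream(comm_stream)             # tell the caching allocator flat is in use on comm_stream
```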

cc ppwwyyxx
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18465

Differential Revision: D14613449

Pulled By: mrshenli

fbshipit-source-id: b62773732552d12cc87b7adeb6897e9e11753ea9

Ahmed Aly [Thu, 28 Mar 2019 18:23:22 +0000 (11:23 -0700)]
Inference LSTM integration test (#18559)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18559

Adding integration test for inference LSTM

Reviewed By: houseroad

Differential Revision: D14656698

fbshipit-source-id: 80fb2a72be30fcb695f4471b72bf9d6e3965bf81

Zachary DeVito [Thu, 28 Mar 2019 17:31:45 +0000 (10:31 -0700)]
Add Slot type to abstract the raw pointers being used for slots. (#18226)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18226
ghimport-source-id: b9ec8651212875b30971cc6859d2ddec6559ae3a

If modules become first-class IValues, then the slots will no longer be raw pointers but (IValue, index) pairs. This commit inserts the Slot abstraction so that this change can be made in later patches.

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18226 Add Slot type to abstract the raw pointers being used for slots.**

Differential Revision: D14542022

fbshipit-source-id: b81d7f4334c983d663e7551bda82df43680d7c5f

Junjie Bai [Thu, 28 Mar 2019 17:18:46 +0000 (10:18 -0700)]
Revert D14635130: Improved onnx export for 3 onnx ops.

Differential Revision:
D14635130

Original commit changeset: d54a2b6e2950

fbshipit-source-id: f624e2befdde245cb88435a95508b2a8e6b12e61

Benoit Steiner [Thu, 28 Mar 2019 15:52:01 +0000 (08:52 -0700)]
Improved onnx export for 3 onnx ops. (#18512)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18512

Ceil and Floor have been supported since version 6 of ONNX: export them using the native ONNX ops instead of an ATen op.
Similarly, support for the Where op was added in version 9, so we don't need to wrap that op in an ATen op either.

Reviewed By: houseroad

Differential Revision: D14635130

fbshipit-source-id: d54a2b6e295074a6214b5939b21051a6735c9958

Elias Ellison [Thu, 28 Mar 2019 07:09:36 +0000 (00:09 -0700)]
Revert D14652372: [pytorch][PR] Add parsing to file check

Differential Revision:
D14652372

Original commit changeset: 7430b9d1dc2b

fbshipit-source-id: fa3d0f68515fe53447746469844d2db20c1292e0

Ilia Cherniavskii [Thu, 28 Mar 2019 04:07:36 +0000 (21:07 -0700)]
C++17.h: forward -> c10::guts::forward (#18492)

Summary:
Use c10::guts::forward instead of forward
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18492

Reviewed By: smessmer

Differential Revision: D14625513

Pulled By: ilia-cher

fbshipit-source-id: 8bc4e20f102fe2a107a22f3e172882d60b95ab0e

Thomas Viehmann [Thu, 28 Mar 2019 03:17:01 +0000 (20:17 -0700)]
Use __ldg for CUDA kernels in fuser (#18540)

Summary:
While benchmarking a kernel with broadcasted inputs, I noticed
that it was much slower than a hand-coded kernel for the same task.

The kernel in question computed a * b + c for a of shape
32 x 32 x 10240 and b and c of shape 1 x 32 x 1.

This patch accelerates said kernel from 450us to 250us on my GTX1080Ti.

I didn't change half because there doesn't seem to be __ldg for
half.

An alternative could be to sprinkle const and restrict.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18540

Differential Revision: D14657840

Pulled By: soumith

fbshipit-source-id: 408847346ec12d1d1d9b119ac50bbc70f0d9ed33

Sam Pepose [Thu, 28 Mar 2019 02:47:43 +0000 (19:47 -0700)]
Adds Cyclical Learning Rate and Momentum (#18001)

Summary:
This implements a cyclical learning rate (CLR) schedule with an optional inverse cyclical momentum. More info about CLR: https://github.com/bckenstler/CLR

This is finishing what #2016 started. Resolves #1909.
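
A short usage sketch of the new scheduler (parameter values are illustrative):

```python
import torch
import torch.nn as nn
from torch.optim.lr_scheduler import CyclicLR

model = nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
sched = CyclicLR(opt, base_lr=0.001, max_lr=0.1)  # momentum cycles inversely by default

for step in range(100):
    opt.step()    # normally preceded by forward/backward
    sched.step()  # CLR steps per batch, not per epoch
```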
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18001

Differential Revision: D14451845

Pulled By: sampepose

fbshipit-source-id: 8f682e0c3dee3a73bd2b14cc93fcf5f0e836b8c9

Edward Yang [Thu, 28 Mar 2019 02:46:23 +0000 (19:46 -0700)]
Completely synchronize behavior of Facebook flake8 and public flake8. (#18538)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18538
ghimport-source-id: 665b09f158d1c5dd94686d4212792504b55b7f73

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18538 Completely synchronize behavior of Facebook flake8 and public flake8.**

Previously, developers at Facebook had the very funny experience
wherein /usr/local/bin/flake8 behaved differently than a freshly
installed flake8 from pip.  In this commit, I add enough ignores to
.flake8 and install enough plugins to make the Facebook flake8
and public flake8 line up exactly.  This means you don't have
to care which flake8 you use; they all will report accurate information
on your Python files.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision: D14652336

fbshipit-source-id: ba7776eaa139cf2e3df2e65349da6fd7c99acca4

Elias Ellison [Thu, 28 Mar 2019 02:21:32 +0000 (19:21 -0700)]
add slow tests annotation to some jit tests (#18545)

Summary:
Adds the slow test annotation to the following very slow tests:

70.33s     test/test_jit.py::TestScript::test_script_module_script_resnet
32.33s     test/test_jit.py::TestBatched::test_beam_search
17.70s     test/test_jit.py::TestBatched::test_greedy_search
15.58s     test/test_jit.py::TestScript::test_script_module_trace_resnet18

The list of remaining slow tests is below. Let me know if you think any of the others should be added to slow tests as well. Slow tests will only run on master.

15.28s call     test/test_jit.py::TestJit::test_export_batchnorm
12.96s call     test/test_jit.py::TestEndToEndHybridFrontendModels::test_snli
11.65s call     test/test_jit.py::TestEndToEndHybridFrontendModels::test_neural_style
6.38s call     test/test_jit.py::TestJitGeneratedModule::test_nn_LocalResponseNorm_1d
5.96s call     test/test_jit.py::TestJitGeneratedModule::test_nn_LocalResponseNorm_2d_uneven_pad
5.91s call     test/test_jit.py::TestJitGeneratedModule::test_nn_LocalResponseNorm_3d_custom_params
4.76s call     test/test_jit.py::TestJit::test_alexnet
3.82s call     test/test_jit.py::TestScript::test_number_math
3.81s call     test/test_jit.py::TestJitGeneratedModule::test_nn_Conv2d_no_bias
3.76s call     test/test_jit.py::TestJitGeneratedModule::test_nn_Conv2d_groups_thnn
3.65s call     test/test_jit.py::TestJitGeneratedModule::test_nn_Conv3d_stride_pad1circular
3.49s call     test/test_jit.py::TestBatched::test_lstm
3.33s call     test/test_jit.py::TestJitGeneratedModule::test_nn_Conv2d_pad2circular
3.19s call     test/test_jit.py::TestJitGeneratedModule::test_nn_Conv1d_stride1_pad2circular
3.11s call     test/test_jit.py::TestEndToEndHybridFrontendModels::test_dcgan_models
3.11s call     test/test_jit.py::TestJitGeneratedModule::test_nn_Conv3d_stride_padding
3.11s call     test/test_jit.py::TestJitGeneratedModule::test_nn_Conv3d_stride
3.08s call     test/test_jit.py::TestJitGeneratedModule::test_nn_Conv3d_no_bias
3.08s call     test/test_jit.py::TestJitGeneratedModule::test_nn_Conv1d_stride1_pad1circular
3.07s call     test/test_jit.py::TestJitGeneratedModule::test_nn_Conv2d_groups
3.05s call     test/test_jit.py::TestJitGeneratedModule::test_nn_Conv2d_dilated
3.05s call     test/test_jit.py::TestJitGeneratedModule::test_nn_Conv2d_depthwise_with_multiplier
3.04s call     test/test_jit.py::TestJitGeneratedModule::test_nn_Conv3d_groups
3.03s call     test/test_jit.py::TestJitGeneratedModule::test_nn_Conv3d_dilated
3.02s call     test/test_jit.py::TestJitGeneratedModule::test_nn_Conv2d_depthwise_dilated
3.02s call     test/test_jit.py::TestJitGeneratedModule::test_nn_Conv3d_dilated_strided
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18545

Differential Revision: D14656064

Pulled By: eellison

fbshipit-source-id: d17ee23c3b3679276cee983555d43e83ce099356

Elias Ellison [Thu, 28 Mar 2019 01:11:45 +0000 (18:11 -0700)]
Add parsing to file check (#18304)

Summary:
This allows you to embed checks in IR, making the test more readable.

E.g.
```
graph_str = '''graph(%0 : Double(5, 5)):
  # CHECK: aten::relu
  %1 : Double(5, 5) = aten::relu(%0)
  return (%1)'''
FileCheck().run(graph_str, parseIR(graph_str))
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18304

Differential Revision: D14652372

Pulled By: eellison

fbshipit-source-id: 7430b9d1dc2b7584704375aac02d7392ecec76a0

Elias Ellison [Wed, 27 Mar 2019 23:02:10 +0000 (16:02 -0700)]
bug fix for node with writers in create autodiff subgraph (#18491)

Summary:
Previously we were moving nodes with writers into differentiable subgraphs, without necessarily preserving whether or not they were written to. This can lead to bugs with CSE, which needs that context.

I'm not completely sure if there's anything else we can do to be more aggressive here - inline these nodes and not run CSE and just run constant pooling, or possibly something else - but I think we should land this correctness condition first and then possibly think further.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18491

Differential Revision: D14648562

Pulled By: eellison

fbshipit-source-id: bc1e444774ccdb708e22f0e06a477a221a231f9e

Xianjie Chen [Wed, 27 Mar 2019 21:52:13 +0000 (14:52 -0700)]
add extra info for the auto gen sum ops

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17934

Reviewed By: iroot900

Differential Revision: D14418689

fbshipit-source-id: 9e11e461001467f0000ea7c355d5b0f0d738fa85

Vitaly Fedyunin [Wed, 27 Mar 2019 21:44:00 +0000 (14:44 -0700)]
Clarify error text of the pin_memory function

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18530

Reviewed By: ezyang

Differential Revision: D14647578

Pulled By: VitalyFedyunin

fbshipit-source-id: ddd70240d52d2e9a96e26f5a0dfea8d76fe25078

Wanchao Liang [Wed, 27 Mar 2019 21:39:33 +0000 (14:39 -0700)]
Move fast rnn benchmark to pytorch/pytorch

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18369

Differential Revision: D14652039

Pulled By: wanchaol

fbshipit-source-id: 1177b1f60d96672c3e2c9d527b56ee06ca7c0af1

eellison [Wed, 27 Mar 2019 21:29:45 +0000 (14:29 -0700)]
Rename isTensor api -> isCompleteTensor (#18437)

Summary:
`isTensor` has been brought up as misleading a couple of times; rename it to `isCompleteTensor` for clarity.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18437

Differential Revision: D14605223

Pulled By: eellison

fbshipit-source-id: 189f67f12cbecd76516a04e67d8145c260c79036

Elias Ellison [Wed, 27 Mar 2019 21:28:11 +0000 (14:28 -0700)]
Const trace error v2 (#18535)

Summary:
Trying to reland https://github.com/pytorch/pytorch/pull/18298
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18535

Differential Revision: D14652391

Pulled By: eellison

fbshipit-source-id: 699e30045dd5f14f0a2b98378272045a292e1e2a

jithunnair-amd [Wed, 27 Mar 2019 21:16:01 +0000 (14:16 -0700)]
enable more unit tests (#18537)

Summary:
Enable unit tests working with ROCm 2.3. In particular, these are unit tests that we previously skipped for double data types, plus some tests for multi-GPU setups.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18537

Differential Revision: D14651822

Pulled By: ezyang

fbshipit-source-id: 7dd575504ebe235a91489866c91000e9754b1235

Min Ni [Wed, 27 Mar 2019 18:14:32 +0000 (11:14 -0700)]
Skip tests if C2/ONNX models cannot be read (#18494)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18494

Today we have some C2 end-to-end tests that require reading model data from external filesystems (for example, Gluster and AWS). This can be a source of flaky tests when the external filesystems are not reachable during the run.

In this diff, we add try/catch logic around where we download models and open model files from external systems. In case such attempts fail, we catch the exception and let the unittest skip the current test instead of failing.

I also refactored the code a little bit by removing some duplicated logic for downloading and building the C2 model data. It had been duplicated in two classes and a few functions...
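
A sketch of the skip-on-failure pattern described above (`_download_model` and `_run_model` are hypothetical helpers):

```python
import unittest

class End2EndTest(unittest.TestCase):
    def test_model(self):
        try:
            model_pb = _download_model('https://example.com/model.pb')  # hypothetical
        except (IOError, OSError) as e:
            self.skipTest('could not fetch model from external filesystem: {}'.format(e))
        self._run_model(model_pb)  # hypothetical
```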

Reviewed By: yinghai

Differential Revision: D14442241

fbshipit-source-id: da8bf56c8d096efa34ca2070de5cd10a18aad70c

zrphercule [Wed, 27 Mar 2019 18:11:01 +0000 (11:11 -0700)]
Add qtensors in caffe2 protobuf argument (#18486)

Summary:
We are about to merge onnxifi quantization support soon. Before that, I would like to merge this diff separately to make sure it doesn't break anything.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18486

Reviewed By: bddppq, houseroad

Differential Revision: D14626419

Pulled By: yinghai

fbshipit-source-id: 504c1eae60be1e629203267b59defb8b69d82c0a

Paul O’Shannessy [Wed, 27 Mar 2019 17:55:12 +0000 (10:55 -0700)]
Generate sphinx docs with secure content. (#18508)

Summary:
There are a number of pages in the docs that serve insecure content. AFAICT this is the sole source of that.

I wasn't sure if docs get regenerated for old versions as part of the automation, or if those would need to be manually done.

cf. https://github.com/pytorch/pytorch.github.io/pull/177
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18508

Differential Revision: D14645665

Pulled By: zpao

fbshipit-source-id: 003563b06048485d4f539feb1675fc80bab47c1b

ZhuBaohe [Wed, 27 Mar 2019 17:15:20 +0000 (10:15 -0700)]
Fix loss functions doc (#18420)

Summary:
Correct a docstring display error on the web page caused by my previous PR
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18420

Differential Revision: D14642467

Pulled By: soumith

fbshipit-source-id: 16fdd3301a4c5bad27fbcd8686f7fbfcc1e908ee

Edward Yang [Wed, 27 Mar 2019 15:01:15 +0000 (08:01 -0700)]
Upgrade flake8-bugbear to master, fix the new lints. (#18507)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18507
ghimport-source-id: 1c3642befad2da78a7e5f39d6d58732b85c76267

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18507 Upgrade flake8-bugbear to master, fix the new lints.**

It turns out Facebook is internally using the unreleased master
flake8-bugbear, so upgrading it grabs a few more lints that Phabricator
was complaining about but we didn't get in open source.

A few of the getattr sites that I fixed look very suspicious (they're
written as if Python were a lazy language), but I didn't look more
closely into the matter.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision: D14633682

fbshipit-source-id: fc3f97c87dca40bbda943a1d1061953490dbacf8

peter [Wed, 27 Mar 2019 14:55:48 +0000 (07:55 -0700)]
Add export annotations for functions in c10 (#18464)

Summary:
Fixes #18461.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18464

Differential Revision: D14620963

Pulled By: ezyang

fbshipit-source-id: c11f3967de2ac69c7140767c8fe73a85555e9f40

Li Yu [Wed, 27 Mar 2019 06:41:35 +0000 (23:41 -0700)]
Back out "Revert D14613517: [pytorch][PR] Updating onnxtrt submodule to master branch" (#18514)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18514

Original commit changeset: d6267ddfc339

Reviewed By: bddppq

Differential Revision: D14634476

fbshipit-source-id: 2633b0b4c512d71001e5c20cd79c0c0d7856f942

Lu Fang [Wed, 27 Mar 2019 04:51:10 +0000 (21:51 -0700)]
update of fbcode/onnx to b29e78a4efb8e5d8995f576bbf19a846807829b6 (#18503)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18503

Previous import was c05f2ae412daf8fd64136ca354b97ccf73e0ea6c

Included changes:
- **[b29e78a4](https://github.com/onnx/onnx/commit/b29e78a4)**: update copyright for open governance (#1885) <Prasanth Pulavarthi>
- **[3b0ecd55](https://github.com/onnx/onnx/commit/3b0ecd55)**: open governance (#1881) <Prasanth Pulavarthi>
- **[bbe28349](https://github.com/onnx/onnx/commit/bbe28349)**: Revert "Adding Reverse op (#1804)" (#1882) <Lu Fang>
- **[5be3e223](https://github.com/onnx/onnx/commit/5be3e223)**: Adding Reverse op (#1804) <Peyman Manikashani>

Reviewed By: zrphercule

Differential Revision: D14632717

fbshipit-source-id: 2637a4090e7071a59caff3a910fa4f077906bf3c

Yinghai Lu [Wed, 27 Mar 2019 03:57:18 +0000 (20:57 -0700)]
Move weight offload inside backend construction functor (#18385)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18385

By moving the weight offload into the backend initialization function, we can instantiate the backend once (by creating the OnnxifiOp once) and then clean up the parameter workspace, while keeping hold of the instantiated net (OnnxifiOp) without cleaning it up. Subsequent ctors of OnnxifiOp for the same model will hit the cached backend and will not attempt weight offloading, which is safe as the weights are already gone.

Reviewed By: ipiszy

Differential Revision: D14590379

fbshipit-source-id: f7f34016e09777ad3df0af487885cd14658e1044

Tongzhou Wang [Wed, 27 Mar 2019 03:55:25 +0000 (20:55 -0700)]
fix #16448 (#18479)

Summary:
Fixes #16448

bddppq
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18479

Differential Revision: D14635360

Pulled By: ezyang

fbshipit-source-id: 4010319fbce050dd0bdf4da3cd1171b9737f3c4c

James Reed [Wed, 27 Mar 2019 03:47:23 +0000 (20:47 -0700)]
Add section about .code to docs

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18493

Differential Revision: D14634677

Pulled By: jamesr66a

fbshipit-source-id: 9ee065f6ce4218f725b93deb4c64b4ef55926145

Stas Bekman [Wed, 27 Mar 2019 02:56:39 +0000 (19:56 -0700)]
how to use the `ccache` package on Ubuntu (#18495)

Summary:
Added full instructions for how to use the `ccache` package. Thanks.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18495

Differential Revision: D14635351

Pulled By: ezyang

fbshipit-source-id: 158e1052bae580e95f73644252fdbddcc0213128

peterjc123 [Wed, 27 Mar 2019 02:47:37 +0000 (19:47 -0700)]
Append c10 libs to TorchConfig.cmake (#18418)

Summary:
Fixes #18416.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18418

Differential Revision: D14635322

Pulled By: ezyang

fbshipit-source-id: 81cb658f73583e4cd0358173617f747ebf4f7f8a

Xiang Gao [Wed, 27 Mar 2019 01:00:15 +0000 (18:00 -0700)]
Add some missing docs for tensor methods and attributes, new unittest to enforce that tensors.rst no longer misses anything (#16057)

Summary:
This depends on https://github.com/pytorch/pytorch/pull/16039

This prevents people (reviewers, PR authors) from forgetting to add things to `tensors.rst`.

When something new is added to `_tensor_doc.py` or `tensor.py` but intentionally not to `tensors.rst`, people should manually whitelist it in `test_docs_coverage.py`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16057

Differential Revision: D14619550

Pulled By: ezyang

fbshipit-source-id: e1c6dd6761142e2e48ec499e118df399e3949fcc

Li Yu [Wed, 27 Mar 2019 00:30:17 +0000 (17:30 -0700)]
Revert D14613517: [pytorch][PR] Updating onnxtrt submodule to master branch

Differential Revision:
D14613517

Original commit changeset: dd20d718db55

fbshipit-source-id: d6267ddfc339d04f182e2de1750a601c8d6bf8c6

Junjie Bai [Wed, 27 Mar 2019 00:16:23 +0000 (17:16 -0700)]
Fix direct comparison of OperatorDef proto structs (#18466)

Summary:
argument order is allowed to differ

ajyu
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18466

Differential Revision: D14627258

Pulled By: bddppq

fbshipit-source-id: 430e1fb1bea2c5639a547ae7c1652368788c86b9

Soumith Chintala [Wed, 27 Mar 2019 00:14:26 +0000 (17:14 -0700)]
Revert D14605905: [pytorch][PR] Add return_counts to torch.unique

Differential Revision:
D14605905

Original commit changeset: 555f5a12a8e2

fbshipit-source-id: c7874f5987893e956c022180a37763d88bba38db

Sameer Indarapu [Tue, 26 Mar 2019 22:29:55 +0000 (15:29 -0700)]
Fix typo in Github links in elementwise_ops_schema.cc (#18018)

Summary:
s/elementwise_op_schema.cc/elementwise_ops_schema.cc
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18018

Differential Revision: D14612291

Pulled By: soumith

fbshipit-source-id: 09276283b9ff92c039ce530165c62cc8421fb443

Tongzhou Wang [Tue, 26 Mar 2019 22:25:26 +0000 (15:25 -0700)]
Improve numerical precision of (s)logdet (#18449)

Summary:
Fixes https://github.com/pytorch/pytorch/issues/18448 and https://github.com/pytorch/pytorch/issues/18450
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18449

Differential Revision: D14611638

Pulled By: soumith

fbshipit-source-id: 4f1f27ab5316a92d2783e734169f599afed743cf