platform/upstream/pytorch.git
5 years ago: Add is_floating_point to docs (#15704)
vishwakftw [Mon, 7 Jan 2019 18:38:16 +0000 (10:38 -0800)]
Add is_floating_point to docs (#15704)

Summary:
Fixes #15700.

Changelog:

- Expose torch.*.is_floating_point to docs
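
For reference, a minimal usage sketch of the newly documented function:

```python
import torch

# is_floating_point is exposed both as a function and a tensor method
print(torch.is_floating_point(torch.tensor([1.0])))  # True  (float32)
print(torch.tensor([1]).is_floating_point())         # False (int64)
```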

Differential Revision: D13580734

Pulled By: zou3519

fbshipit-source-id: 76edb4af666c08237091a2cebf53d9ba5e6c8909

5 years ago: Pool prim::None nodes (#15745)
Elias Ellison [Mon, 7 Jan 2019 17:58:08 +0000 (09:58 -0800)]
Pool prim::None nodes (#15745)

Summary:
Make the constant pooling pass pool prim::None nodes
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15745

Differential Revision: D13583518

Pulled By: eellison

fbshipit-source-id: 7f8aa70522515805ab0991c6db3d96b5a96cdede

5 years ago: Replace some malloc+memset pairs with calloc.
Owen Anderson [Mon, 7 Jan 2019 02:54:25 +0000 (18:54 -0800)]
Replace some malloc+memset pairs with calloc.

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15765

Differential Revision: D13588723

Pulled By: resistor

fbshipit-source-id: 47d35dc608847a5b173cfcf2aaa2a77359e56722

5 years ago: Removes print statements from test_torch.py (#15747)
mruberry [Sat, 5 Jan 2019 17:04:54 +0000 (09:04 -0800)]
Removes print statements from test_torch.py (#15747)

Summary:
These print statements do not affect the test, and tests (generally) shouldn't print.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15747

Differential Revision: D13587289

Pulled By: soumith

fbshipit-source-id: c758793c9e35faf02bacba6c7c6d072f7c40453f

5 years ago: Fix several DeprecationWarning: invalid escape sequence (#15733)
Mickaël Schoentgen [Sat, 5 Jan 2019 16:51:14 +0000 (08:51 -0800)]
Fix several DeprecationWarning: invalid escape sequence (#15733)

Summary:
Hello,

This is a little patch to fix `DeprecationWarning: invalid escape sequence`.
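
For context, the warning fires on string literals where a backslash does not form a valid escape; a typical instance of the fix (illustrative, not the exact patch) looks like:

```python
import re

bad = re.compile("\d+")    # DeprecationWarning: invalid escape sequence '\d'
good = re.compile(r"\d+")  # raw string: the usual fix

```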
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15733

Differential Revision: D13587291

Pulled By: soumith

fbshipit-source-id: ce68db2de92ca7eaa42f78ca5ae6fbc1d4d90e05

5 years ago: caffe2_benchmark msvc build fix (#15619)
ArutyunovG [Sat, 5 Jan 2019 16:23:02 +0000 (08:23 -0800)]
caffe2_benchmark msvc build fix (#15619)

Summary:
Fixing an error in the caffe2_benchmark binary:

```
2018-12-29T14:09:59.7867995Z   d:\a\1\s\caffe2_builders\v141\pytorch\binaries\benchmark_helper.h(90): error C2678: binary '|=': no operator found which takes a left-hand operand of type 'std::_Iosb<int>::_Openmode' (or there is no acceptable conversion) (compiling source file D:\a\1\s\caffe2_builders\v141\pytorch\binaries\benchmark_helper.cc) [D:\a\1\s\caffe2_builders\v141\pytorch\build\Release\binaries\caffe2_benchmark.vcxproj]
2018-12-29T14:09:59.7868252Z   d:\a\1\s\caffe2_builders\v141\pytorch\binaries\benchmark_helper.h(92): error C2678: binary '|=': no operator found which takes a left-hand operand of type 'std::_Iosb<int>::_Openmode' (or there is no acceptable conversion) (compiling source file D:\a\1\s\caffe2_builders\v141\pytorch\binaries\benchmark_helper.cc) [D:\a\1\s\caffe2_builders\v141\pytorch\build\Release\binaries\caffe2_benchmark.vcxproj]
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15619

Differential Revision: D13580195

Pulled By: soumith

fbshipit-source-id: b0a4479cd5f7555801b1977aeee96b6433293da7

5 years ago: Adding a hook (wrapper) for non-std stream reader in PyTorchStreamReader (#15551)
Lu Fang [Sat, 5 Jan 2019 06:47:35 +0000 (22:47 -0800)]
Adding a hook (wrapper) for non-std stream reader in PyTorchStreamReader (#15551)

Summary:
Implementing a custom stream is annoying, since std::istream is tightly coupled to the underlying storage stream buffer.

So in this PR, we add ReadAdapterInterface and make PyTorchStreamReader use it. We implement IStreamAdapter as a wrapper around std::istream, and keep the user-facing interface unchanged.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15551

Reviewed By: zrphercule

Differential Revision: D13568907

Pulled By: houseroad

fbshipit-source-id: 93708cb801248a6c101f35cb14d1631029365c3c

5 years ago: support 0 size in any of the tensor dimensions in mkldnn (#15295)
Cheng,Penghui [Sat, 5 Jan 2019 06:30:48 +0000 (22:30 -0800)]
support 0 size in any of the tensor dimensions in mkldnn (#15295)

Summary:
support 0 size in any of the tensor dimensions in mkldnn
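
A minimal sketch of the behavior this enables (assuming an MKL-DNN build, where conv takes the mkldnn path for float inputs):

```python
import torch

conv = torch.nn.Conv2d(3, 8, kernel_size=3)
x = torch.empty(0, 3, 16, 16)  # zero-size batch dimension
print(conv(x).shape)           # torch.Size([0, 8, 14, 14])
```
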
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15295

Differential Revision: D13573747

Pulled By: yinghai

fbshipit-source-id: 5bf7a0b9e2567e80f44981a7823be5407fc94e53

5 years ago: Port replication_pad2d and replication_pad3d to ATen (#15538)
Lin Huang [Sat, 5 Jan 2019 00:59:18 +0000 (16:59 -0800)]
Port replication_pad2d and replication_pad3d to ATen (#15538)

Summary:
Port the 2D and 3D replication padding from the legacy TH API implementation
to an ATen implementation.
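
The user-facing ops are unchanged; for reference:

```python
import torch
import torch.nn.functional as F

x = torch.arange(4.0).view(1, 1, 2, 2)
y = F.pad(x, (1, 1, 1, 1), mode='replicate')  # replication_pad2d
z = torch.nn.ReplicationPad3d(1)(torch.randn(1, 1, 2, 2, 2))  # replication_pad3d
print(y[0, 0])
```
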
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15538

Differential Revision: D13547567

Pulled By: lhuang04

fbshipit-source-id: decfe100d9edfdcfb62f39ee23f37b6cae0d461f

5 years ago: Fix different types in rsub caused bug (#15707)
zrphercule [Sat, 5 Jan 2019 00:11:23 +0000 (16:11 -0800)]
Fix different types in rsub caused bug (#15707)

Summary:
Before this PR, rsub did not convert its two operands to the same dtype, so "1 - x" could export to an ONNX model in which the two operands of rsub have different dtypes.
This symbolic patch should fix the bug.
Related test cases are also added.
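
A minimal sketch of the affected pattern (module and shapes are illustrative):

```python
import io
import torch

class RSub(torch.nn.Module):
    def forward(self, x):
        # `1 - x` lowers to aten::rsub with an int scalar and a float tensor
        return 1 - x

torch.onnx.export(RSub(), torch.randn(2, 3), io.BytesIO())
```
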
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15707

Differential Revision: D13583042

Pulled By: zrphercule

fbshipit-source-id: 3a2de47a1a8d1ded1a0adfb911adbe6ac729cdef

5 years ago: Tensor method rename dims()->sizes() - 1/2
Jerry Zhang [Fri, 4 Jan 2019 23:48:21 +0000 (15:48 -0800)]
Tensor method rename dims()->sizes() - 1/2

Summary: Codemod generated with clangr shard mode, 25 files per diff.

Reviewed By: BIT-silence

Differential Revision: D13581782

fbshipit-source-id: b16b4198e100617769d84aa599bf141117cfbe5b

5 years ago: Automatic update of fbcode/onnx to 8384c788939bc65463f9754b6a7a00b212b18ba1 (#15739)
Lu Fang [Fri, 4 Jan 2019 23:38:07 +0000 (15:38 -0800)]
Automatic update of fbcode/onnx to 8384c788939bc65463f9754b6a7a00b212b18ba1 (#15739)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15739

Previous import was 765f5ee823a67a866f4bd28a9860e81f3c811ce8

Included changes:
- **[8384c78](https://github.com/onnx/onnx/commit/8384c78)**: add constantofshape (#1582) <Rui Zhu>
- **[9afc06c](https://github.com/onnx/onnx/commit/9afc06c)**: Set symbol visibility to hidden for non-Windows (#1707) <Paul Jesse Hellemn>
- **[6f8a9f0](https://github.com/onnx/onnx/commit/6f8a9f0)**: Revert "Add NonMaxSupression operator (#1695)" (#1702) <Lu Fang>
- **[8b89544](https://github.com/onnx/onnx/commit/8b89544)**: Add NonMaxSupression operator (#1695) <Hector Li>
- **[0a7cc48](https://github.com/onnx/onnx/commit/0a7cc48)**: Add bfloat16 support. (#1699) <Dmitri Smirnov>
- **[da7c50c](https://github.com/onnx/onnx/commit/da7c50c)**: ONNX does not maintain versions for experimental ops (#1696) <Ke Zhang>
- **[0c8d857](https://github.com/onnx/onnx/commit/0c8d857)**: Correct type of value_info in Graph (#1694) <Maik Riechert>
- **[f612532](https://github.com/onnx/onnx/commit/f612532)**: Fix typos (#1686) <Eundoo Song>

Reviewed By: zrphercule

Differential Revision: D13581674

fbshipit-source-id: 8f8ee86a05a86fe99bf94509148c559ea3df1464

5 years ago: remove use of tmp_install
andersj [Fri, 4 Jan 2019 21:45:12 +0000 (13:45 -0800)]
remove use of tmp_install

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14553

Differential Revision: D13583335

Pulled By: anderspapitto

fbshipit-source-id: 8711fead9eda877c1037a0bc59f91a3d2e01f3e0

5 years ago: Update CI credentials
Will Feng [Fri, 4 Jan 2019 21:30:28 +0000 (13:30 -0800)]
Update CI credentials

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15736

Differential Revision: D13583174

Pulled By: yf225

fbshipit-source-id: 742470db10ef9df8f95e27626453b68ca90723e8

5 years ago: Temporarily disable all XXXlike operator tests in pytorch-onnx test (#15740)
zrphercule [Fri, 4 Jan 2019 21:26:32 +0000 (13:26 -0800)]
Temporarily disable all XXXlike operator tests in pytorch-onnx test (#15740)

Summary:
We are going to make some breaking changes to ConstantLike and related operators in ONNX, so it is better to disable all related tests for these operators for now.
These operators are not currently supported by Caffe2 and are not included in our most recently released ONNX, so we do not need to worry about breaking internal/external production.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15740

Differential Revision: D13582528

Pulled By: zrphercule

fbshipit-source-id: 92a890c1dc2a833969af69edfea85331bb4d562f

5 years ago: Tensor construction codemod - 2/2 (#15600)
Jerry Zhang [Fri, 4 Jan 2019 21:23:21 +0000 (13:23 -0800)]
Tensor construction codemod - 2/2 (#15600)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15600

Codemod generated with clangr shard mode, 25 files per diff,
motivation: https://github.com/pytorch/pytorch/pull/12407

Reviewed By: dzhulgakov

Differential Revision: D13542455

fbshipit-source-id: 8a3b15b0a1f81565f34e309114e1c3e1f7f65a3c

5 years ago: Print out operator suggestions for unknown builtin op (#15183)
Elias Ellison [Fri, 4 Jan 2019 21:01:49 +0000 (13:01 -0800)]
Print out operator suggestions for unknown builtin op (#15183)

Summary:
This improves the "unknown builtin op" error message to suggest similarly named ops.

Currently it prints out all operators whose name is within an edit distance of two.
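
A hypothetical repro (the misspelled op and the exact wording are illustrative):

```python
import torch

try:
    @torch.jit.script
    def f(x):
        return torch.addd(x, x)  # typo for torch.add
except Exception as e:
    print(e)  # "unknown builtin op" error, now suggesting close matches such as aten::add
```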

Related issue: https://github.com/pytorch/pytorch/issues/13409
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15183

Differential Revision: D13578509

Pulled By: eellison

fbshipit-source-id: 5c73408eda1f7aa456f5bd28790c34df0c76aeca

5 years ago: Updating submodules
svcscm [Fri, 4 Jan 2019 20:15:25 +0000 (12:15 -0800)]
Updating submodules

Reviewed By: yns88

fbshipit-source-id: b8be56b57d109dfef5980ea7255e2ab021da099e

5 years ago: Tensor construction codemod - 1/2 (#15598)
Jerry Zhang [Fri, 4 Jan 2019 19:50:17 +0000 (11:50 -0800)]
Tensor construction codemod - 1/2 (#15598)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15598

Codemod generated with clangr shard mode, 25 files per diff,
motivation: https://github.com/pytorch/pytorch/pull/12407

Reviewed By: dzhulgakov

Differential Revision: D13542429

fbshipit-source-id: db1059c78e85724d9b4fdab70466cf329db68359

5 years ago: remove dependency to fp32 batch permutation op (#15723)
Jongsoo Park [Fri, 4 Jan 2019 15:53:26 +0000 (07:53 -0800)]
remove dependency to fp32 batch permutation op (#15723)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15723

As title says.

Reviewed By: jianyuh

Differential Revision: D13578604

fbshipit-source-id: 0da0ac31ae83c1e0daa9077e878feb4deffed6a3

5 years ago: Cudnn Handle Pool 3: At Wit's End (#15668)
Michael Carilli [Fri, 4 Jan 2019 14:18:43 +0000 (06:18 -0800)]
Cudnn Handle Pool 3: At Wit's End (#15668)

Summary:
ezyang Here's a freshly rebased version of https://github.com/pytorch/pytorch/pull/15080, with the if statement from https://github.com/pytorch/pytorch/pull/15280 that relieved the hangs which occasionally, nondeterministically, occurred in cudnnCreate on a particular Windows build ([example w/debug statements](https://ci.pytorch.org/jenkins/job/pytorch-builds/job/pytorch-win-ws2016-cuda9-cudnn7-py3-test2/19238/console)).

I'd like to run the CI over this several times before it's considered mergeable. Sometimes the Windows hang doesn't manifest for 2 or 3 consecutive trials.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15668

Differential Revision: D13579291

Pulled By: soumith

fbshipit-source-id: 3972eb98bad6ece933ca5e67a10fc4bc2ed06068

5 years ago: Remove TH/THC link for cholesky_solve (#15691)
vishwakftw [Fri, 4 Jan 2019 14:18:35 +0000 (06:18 -0800)]
Remove TH/THC link for cholesky_solve (#15691)

Summary:
Changelog:
- Remove TH/THC binding
- Port single matrix case to ATen
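
Usage of the ported op, with this era's API (`torch.cholesky` returns the lower-triangular factor by default):

```python
import torch

A = torch.randn(3, 3)
A = A @ A.t() + 3 * torch.eye(3)  # make A symmetric positive definite
b = torch.randn(3, 2)
u = torch.cholesky(A)             # lower-triangular Cholesky factor
x = torch.cholesky_solve(b, u)    # solves A x = b
print(torch.allclose(A @ x, b, atol=1e-4))
```
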
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15691

Differential Revision: D13579317

Pulled By: soumith

fbshipit-source-id: 63a55606c656396e777e8e6828acd2ef88ed1543

5 years ago: Modify torch.gesv error message (#15654)
Youngseok [Fri, 4 Jan 2019 05:37:28 +0000 (21:37 -0800)]
Modify torch.gesv error message (#15654)

Summary:
The [doc](https://pytorch.org/docs/stable/torch.html#torch.gesv) uses an uppercase `B`, so the error message should match it to avoid confusion.
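
For reference, this era's signature (`B` comes first, and the routine solves `AX = B`):

```python
import torch

A = torch.randn(3, 3)
B = torch.randn(3, 2)
X, LU = torch.gesv(B, A)  # note the argument order: B, then A
print(torch.allclose(A @ X, B, atol=1e-4))
```
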
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15654

Differential Revision: D13571297

Pulled By: soumith

fbshipit-source-id: 0b4e7797eceff92618f808bbfa65d13c1dcc2da0

5 years ago: make conv_depthwise_dnnlowp_op_test faster (#15725)
Jongsoo Park [Fri, 4 Jan 2019 05:37:03 +0000 (21:37 -0800)]
make conv_depthwise_dnnlowp_op_test faster (#15725)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15725

As title says.

Reviewed By: jianyuh

Differential Revision: D13579188

fbshipit-source-id: 382072c95929ccf9e189e2338e35b046c4a0650f

5 years ago: clarified language of doc for torch.mul (#15664)
Elad Zippory [Fri, 4 Jan 2019 05:36:49 +0000 (21:36 -0800)]
clarified language of doc for torch.mul (#15664)

Summary:
See issue #15636.

Please note: I built the documents, but the HTML was not updated with the edited content.
I also did not build the fork.
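
For reference, the two cases the doc describes:

```python
import torch

x = torch.tensor([1.0, 2.0, 3.0])
print(torch.mul(x, 100))                  # multiply every element by a scalar
print(torch.mul(x, torch.tensor([0.5])))  # broadcastable elementwise multiply
```
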
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15664

Differential Revision: D13571310

Pulled By: soumith

fbshipit-source-id: d43be0f61705693d778cc12c13e86d6b06130ac7

5 years ago: disallow nbits_in_non_outlier == 0 in acc16 conv; option to fallback to acc32 (#15708)
Jongsoo Park [Fri, 4 Jan 2019 04:28:09 +0000 (20:28 -0800)]
disallow nbits_in_non_outlier == 0 in acc16 conv; option to fallback to acc32 (#15708)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15708

nbits_in_non_outlier == 0 doesn't make sense because it means everything is an outlier, in which case we can just use 32-bit accumulation.
Depending on the architecture, the break-even point between acc16 and acc32 can differ, so this adds thresholds for falling back to acc32.

Reviewed By: jianyuh

Differential Revision: D13574832

fbshipit-source-id: b7a37aacbfdc7867e31838dafcdd5f7c2ac282af

5 years ago: Torch tensor (#15224)
Elias Ellison [Fri, 4 Jan 2019 01:31:56 +0000 (17:31 -0800)]
Torch tensor (#15224)

Summary:
Support torch.tensor in script. This was already accepted; trying to reland.
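
A minimal sketch of what this enables:

```python
import torch

@torch.jit.script
def make_tensor(x: float):
    # torch.tensor is now callable inside TorchScript
    return torch.tensor([x, 2.0 * x])

print(make_tensor(1.5))
```
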
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15224

Differential Revision: D13466616

Pulled By: eellison

fbshipit-source-id: f7850da07b0eb11af98f255fc15bd3cf861f2a40

5 years ago: A quick fix for Stream operation errors on non-current device (#15689)
Shen Li [Thu, 3 Jan 2019 23:12:13 +0000 (15:12 -0800)]
A quick fix for Stream operation errors on non-current device (#15689)

Summary:
See #15682.

This is a quick fix that implements the simpler solution suggested by colesbury. As the benchmark results show, it slows down `Stream.query()` by ~20%. I would be happy to pursue a more complex solution by implementing this in C++/ATen, but I would still vote for merging this quick fix first, just to get rid of the bug sooner.

~Test TBA~ Added

FYI jeffreyksmithjr

now

```python
In [1]: def f():
   ...:     d0 = torch.device('cuda:0')
   ...:     d1 = torch.device('cuda:1')
   ...:     with torch.cuda.device(d0):
   ...:         s0 = torch.cuda.current_stream()
   ...:     with torch.cuda.device(d1):
   ...:         s1 = torch.cuda.current_stream()
   ...:     s0.query()
   ...:     s1.query()

In [4]: %timeit f()
38.1 µs ± 4.2 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)

In [5]: %timeit f()
37.6 µs ± 2.7 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
```

before

```python
In [4]: %timeit f()
28.5 µs ± 1.74 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)

In [5]: %timeit f()
35.3 µs ± 2.91 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15689

Differential Revision: D13571697

Pulled By: mrshenli

fbshipit-source-id: 4fe697f91248c6419136d37bb5b7147e612e2f4c

5 years ago: Break up generated tests (#13992)
David Riazati [Thu, 3 Jan 2019 22:31:09 +0000 (14:31 -0800)]
Break up generated tests (#13992)

Summary:
This PR breaks up `TestJitGenerated` into 3 classes. This makes for
easier testing of specific groups (e.g. run all generated functional
tests without having to wait for the autograd tests)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13992

Differential Revision: D13076371

Pulled By: driazati

fbshipit-source-id: 1267af59be7d69feb690f5805fcd43fea58a7159

5 years ago: flake8 hook fix (#15693)
Michael Suo [Thu, 3 Jan 2019 21:50:42 +0000 (13:50 -0800)]
flake8 hook fix (#15693)

Summary:
This PR bypasses checking the user's configuration entirely and always uses strict mode, since the CI considers it a hard failure if you can't pass flake8.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15693

Differential Revision: D13574889

Pulled By: suo

fbshipit-source-id: f5e1c5731cc49b6223b415317033c275bc7d4fec

5 years ago: Prevent VS2017 from emitting ambiguous symbol errors (#15697)
Stuart Golodetz [Thu, 3 Jan 2019 21:37:50 +0000 (13:37 -0800)]
Prevent VS2017 from emitting ambiguous symbol errors (#15697)

Summary:
These `std::forward` calls cause VS2017 to emit:

    error C2872: 'std': ambiguous symbol

This fix prevents the ambiguity by specifying that `::std` is intended.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15697

Differential Revision: D13573483

Pulled By: goldsborough

fbshipit-source-id: 0439de3523a37a18df7af0cff4a1284a53833ddd

5 years ago: trace s_copy_ (#15690)
Zachary DeVito [Thu, 3 Jan 2019 20:14:17 +0000 (12:14 -0800)]
trace s_copy_ (#15690)

Summary:
s_copy_ was previously special-cased for out-of-place tracing.
This adds support for in-place tracing, which fixes tracing of
inception_v3.
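
A minimal sketch of the newly traceable pattern:

```python
import torch

def f(x, y):
    y.copy_(x)  # in-place copy is now recorded in the trace
    return y

traced = torch.jit.trace(f, (torch.randn(3), torch.randn(3)))
print(traced.graph)
```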

Fixes #15216
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15690

Differential Revision: D13572011

Pulled By: zdevito

fbshipit-source-id: 1d565dec039a4b8c59179254285e61d2517ef9a9

5 years ago: Add mkldnn conv double backward (#15686)
Ailing Zhang [Thu, 3 Jan 2019 18:42:35 +0000 (10:42 -0800)]
Add mkldnn conv double backward (#15686)

Summary:
Fixes #15353.

Like the cuDNN conv implementation, mkldnn also falls back to the default `_convolution_double_backward` as its double backward.

This bug wasn't caught by CI before because mkldnn is only used when the input scalar type is float, but our tests all use double by default.

This adds a test for float inputs, but mkldnn seems to have imprecision issues similar to the cuDNN implementation, so here I only check that double backward exists instead of calling `gradgradcheck`. Please correct me if the precision should actually be checked.
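
A minimal sketch of the existence check (float inputs so the mkldnn path is eligible; assumes an MKL-DNN build):

```python
import torch

conv = torch.nn.Conv2d(3, 4, kernel_size=3)
x = torch.randn(1, 3, 8, 8, requires_grad=True)
(g,) = torch.autograd.grad(conv(x).sum(), x, create_graph=True)
g.sum().backward()  # double backward should run rather than error out
```
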
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15686

Differential Revision: D13571682

Pulled By: ailzhang

fbshipit-source-id: f1762439762370f276cfd59e8b8b8a4dee960a4b

5 years ago: Fix ONNX export of logical ops, including torch.ne, to have correct output datatype...
Spandan Tiwari [Thu, 3 Jan 2019 18:29:03 +0000 (10:29 -0800)]
Fix ONNX export of logical ops, including torch.ne, to have correct output datatype (#15677)

Summary:
This is an updated version of the earlier PR https://github.com/pytorch/pytorch/pull/15185, since that one was closed.

Currently the PyTorch ONNX exporter exports the logical ops (lt, gt, le, ge, eq, ne) with the output type in the corresponding ONNX ops as tensor(uint8). But the ONNX spec allows only tensor(bool), which is why models that contain these ops fail to load properly.

This issue is captured in #11339. Part of this issue, relating to the allowed input types, has been fixed in the ONNX spec by houseroad. This PR fixes the other part, pertaining to the output type.
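
A minimal sketch of an affected export (shapes are illustrative):

```python
import io
import torch

class Ne(torch.nn.Module):
    def forward(self, x, y):
        return torch.ne(x, y)  # now exports with a tensor(bool) output

torch.onnx.export(Ne(), (torch.randn(2, 3), torch.randn(2, 3)), io.BytesIO())
```
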
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15677

Reviewed By: dzhulgakov

Differential Revision: D13568450

Pulled By: houseroad

fbshipit-source-id: a6afbea1afdb4edad8f8b1bc492f50b14e5f2fce

5 years ago: Port legacy reflection_pad1d to ATen (#15480)
Shen Li [Thu, 3 Jan 2019 18:23:07 +0000 (10:23 -0800)]
Port legacy reflection_pad1d to ATen (#15480)

Summary:
1. Avoided using `THCDeviceTensor` by re-calculating the mapping from CUDA (blockIdx, threadIdx) to the input/output tensor index.
2. Changed CamelCase naming to underscore naming.

Profiling:

Legacy:

```bash
$py.test test/test_nn.py -k ReflectionPad1d -v -s
....
=========== 2 passed, 1258 deselected, 800 warnings in 4.35 seconds ============
```

Now:

```bash
$py.test test/test_nn.py -k ReflectionPad1d -v -s
...
=========== 2 passed, 1258 deselected, 800 warnings in 4.03 seconds ============
```

I have two questions about the code. Any insights are appreciated. gchanan zou3519

1. I can verify that [this magic](https://github.com/pytorch/pytorch/blob/master/aten/src/THCUNN/TemporalReflectionPadding.cu#L32-L36) correctly maps output indices to input indices in the different cases. But I have no idea how you came up with this algorithm, which merges three categories (in the left padding, in the original input, in the right padding) into a single statement (see the sketch below).

2. Why do we need [contiguous](https://github.com/pytorch/pytorch/blob/master/aten/src/THNN/generic/TemporalReflectionPadding.c#L80) tensors when calculating the forward and backward propagation?
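
Regarding question 1, a worked sketch of one equivalent single-statement form: with input length `n` and padding `p`, output index `o` maps to input index `i = (n - 1) - |(n - 1) - |o - p||`, which folds the left-pad, body, and right-pad cases together.

```python
import torch

n, p = 5, 3
x = torch.arange(float(n)).view(1, 1, n)
y = torch.nn.ReflectionPad1d(p)(x)

# single-statement mapping from output index to input index
manual = [(n - 1) - abs((n - 1) - abs(o - p)) for o in range(n + 2 * p)]
assert y.flatten().tolist() == [float(i) for i in manual]
```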

Reflection_pad2d porting will come in the next PR.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15480

Differential Revision: D13544924

Pulled By: mrshenli

fbshipit-source-id: 182045434f210032a82cab721a190da0cd781fbf

5 years ago: bug fix in 3d group conv (#15625)
Jongsoo Park [Thu, 3 Jan 2019 17:43:46 +0000 (09:43 -0800)]
bug fix in 3d group conv (#15625)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15625

3D group conv (in both NCHW and NHWC layouts) was not correct.
Added group=2 to test_1d_convolution and test_3d_convolution in conv_test.

Reviewed By: protonu

Differential Revision: D13562099

fbshipit-source-id: 586e8a7574a2764f2a3b559db6c2415b3ab90453

5 years ago: Port torch.arange to aten and parallelize on CPU.
Gregory Chanan [Thu, 3 Jan 2019 17:16:16 +0000 (09:16 -0800)]
Port torch.arange to aten and parallelize on CPU.

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15667

Differential Revision: D13566631

Pulled By: gchanan

fbshipit-source-id: e3243a4e81ecb58373681df8bf6a00428352fb14

5 years ago: Ignore flake8 warning about whitespace before ':' (#15663)
Gerard Goossen [Thu, 3 Jan 2019 12:59:41 +0000 (04:59 -0800)]
Ignore flake8 warning about whitespace before ':' (#15663)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15663

Ignore the sometimes-incorrect flake8 warning about whitespace before ':'.

See https://github.com/ambv/black/issues/315

Reviewed By: soumith

Differential Revision: D13565818

fbshipit-source-id: 9d5ec2335899527ee71f4b505c00865a354e3bf0

5 years ago: Add count_include_pad arg for PoolOpGradient on CPU and fix ARM performance issue...
Xiaomeng Yang [Thu, 3 Jan 2019 08:16:03 +0000 (00:16 -0800)]
Add count_include_pad arg for PoolOpGradient on CPU and fix ARM performance issue. (#15651)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15651

Add count_include_pad arg for PoolOpGradient on CPU and fix ARM performance issue.

Reviewed By: houseroad

Differential Revision: D13564257

fbshipit-source-id: 3a143f1122bc507ccb7827e9b46908d5c7203735

5 years ago: Unify the usage of Dequantize (#15685)
Jianyu Huang [Thu, 3 Jan 2019 05:05:55 +0000 (21:05 -0800)]
Unify the usage of Dequantize (#15685)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15685

The declaration of "Dequantize" is in "fbsource/fbcode/deeplearning/fbgemm2/QuantUtils.h", so it requires the `fbgemm` namespace.

`<T>` is actually optional, since the type can be deduced from the first argument.

In some places we have "Dequantize<T>(...)", while in other places we have "Dequantize(...)". We'd better unify them. As a reference, all occurrences of "Quantize" use "fbgemm::Quantize<T>(...)".

Reviewed By: jspark1105

Differential Revision: D13570847

fbshipit-source-id: 7fca9f7f9e4e0d9e5eb27ac44b8707adc3c80717

5 years ago: Fix vec256 inversion (#15659)
Shen Li [Thu, 3 Jan 2019 05:01:13 +0000 (21:01 -0800)]
Fix vec256 inversion (#15659)

Summary:
soumith zou3519

I was browsing the code, and think `vec256_int.h` might need a minor revision, but not 100% sure.

1. It currently inverts the result by `XOR` with 0. Should it `XOR` with 1 instead? (See the sketch below.)
~2. AVX2 logical operations set all bits in a byte/word/... to `1` if the condition holds, so functions such as `_mm256_cmpeq_epi64` return `0/-1` instead of `0/1`. Should the result be masked with `1` to make sure it returns 0/1?~

~Would I be correct if I assume that the code revised below is not yet activated, but will be after we port legacy code to ATen?~
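
On point 1, a tiny numeric sketch of why the base value matters for a 0/1 mask:

```python
# XOR with 0 is the identity, so it cannot invert a 0/1 comparison
# result; XOR with 1 flips it as intended
for m in (0, 1):
    print(m ^ 0, m ^ 1)  # prints "0 1" then "1 0"
```
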
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15659

Differential Revision: D13565929

Pulled By: mrshenli

fbshipit-source-id: 8ae3daf256c3d915dd855a2215c95275e899ea8c

5 years ago: Add min/max on numbers to JIT
Zachary DeVito [Thu, 3 Jan 2019 04:07:55 +0000 (20:07 -0800)]
Add min/max on numbers to JIT

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15680
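
A minimal sketch of what this enables:

```python
import torch

@torch.jit.script
def smaller(a: int, b: int) -> int:
    return min(a, b)  # builtin min/max on numbers now compile in the JIT

print(smaller(2, 3))  # 2
```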

Differential Revision: D13568806

Pulled By: zdevito

fbshipit-source-id: ef0f33cc12a057184293bc31d28cc7b24f73eb94

5 years ago: initialize with ident value in global reduction (#15653)
Natalia Gimelshein [Thu, 3 Jan 2019 03:50:19 +0000 (19:50 -0800)]
initialize with ident value in global reduction (#15653)

Summary:
Fixes #15647. cc colesbury.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15653

Differential Revision: D13571132

Pulled By: soumith

fbshipit-source-id: 8f25943c974b3b931f4528e0e0a370bc095dab51

5 years ago: Updating submodules
svcscm [Thu, 3 Jan 2019 02:53:15 +0000 (18:53 -0800)]
Updating submodules

Reviewed By: yns88

fbshipit-source-id: f7b540159cf1fe72825d09d55d56117d14ff90eb

5 years ago: Support for Jetson Xavier (#15660)
rtarquini [Thu, 3 Jan 2019 02:48:31 +0000 (18:48 -0800)]
Support for Jetson Xavier (#15660)

Summary:
The requested changes support building PyTorch 1.0 on the Jetson Xavier with OpenBLAS. The Jetson Xavier with JetPack 3.3 has a generic LAPACK installed. To pick up the CUDA-accelerated BLAS/LAPACK, I had to build OpenBLAS and build/link PyTorch from source; otherwise, I got a runtime error indicating the LAPACK routines were not CUDA-enabled.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15660

Differential Revision: D13571324

Pulled By: soumith

fbshipit-source-id: 9b148d081d6e7fa7e1824dfdd93283c67f69e683

5 years ago: Fixing cuda100 smoke tests
Jesse Hellemn [Thu, 3 Jan 2019 01:10:35 +0000 (17:10 -0800)]
Fixing cuda100 smoke tests

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15673

Reviewed By: yf225

Differential Revision: D13568746

Pulled By: pjh5

fbshipit-source-id: e636de417d61b48074399da75bfb2576c9f62743

5 years ago: Remove PythonOp non-CPU path and PytorchOp (#15417)
Jerry Zhang [Thu, 3 Jan 2019 00:32:02 +0000 (16:32 -0800)]
Remove PythonOp non-CPU path and PytorchOp (#15417)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15417

Right now, the way we test whether a Blob contains a CPU tensor in `PythonOpBase` is broken, which means the non-CPU path might never be taken.
Searching through the codebase, the non-GPU path is used in PythonDLPack, and in PytorchOp, which is unused. So we'll remove the non-GPU path in this diff.

Reviewed By: dzhulgakov

Differential Revision: D13495011

fbshipit-source-id: 9fe9537f05026d2a2cf7051efa81d184de722710

5 years ago: Updating submodules
svcscm [Wed, 2 Jan 2019 22:55:43 +0000 (14:55 -0800)]
Updating submodules

Reviewed By: yns88

fbshipit-source-id: bb142e8f91046cc2b7ea32dac46ec0753b4bc218

5 years ago: fix select after chunk op (#15672)
Michael Suo [Wed, 2 Jan 2019 22:32:00 +0000 (14:32 -0800)]
fix select after chunk op (#15672)

Summary:
Fixes #15669.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15672

Differential Revision: D13567274

Pulled By: suo

fbshipit-source-id: a63e6cfc9dacedd4cb99dc51eee452038418001e

5 years ago: make flake8 failure blocking (#15675)
Michael Suo [Wed, 2 Jan 2019 20:50:13 +0000 (12:50 -0800)]
make flake8 failure blocking (#15675)

Summary:
Right now it just prints whatever flake8 errors occur and moves forward with the commit. This is too easy to miss.

It should block the commit so that the user can fix the issues.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15675

Differential Revision: D13567821

Pulled By: suo

fbshipit-source-id: 5f0de40ddd771bad8d6848417408cffbceb03183

5 years ago: redo sleef build fix (#15549)
Zachary DeVito [Wed, 2 Jan 2019 20:45:38 +0000 (12:45 -0800)]
redo sleef build fix (#15549)

Summary:
This was accidentally reverted by #14866.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15549

Differential Revision: D13549674

Pulled By: zdevito

fbshipit-source-id: e209aac53dccb082b91cfa2d292310eabeb459e3

5 years ago: format conv_test.py to prepare D13562099 (#15632)
Jongsoo Park [Wed, 2 Jan 2019 19:25:41 +0000 (11:25 -0800)]
format conv_test.py to prepare D13562099 (#15632)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15632

Just formatting and a few lints.

Reviewed By: yinghai

Differential Revision: D13562403

fbshipit-source-id: c56f8ee61f68cdaccc0828a764ff729454f68259

5 years ago: Fix torch.gesv args in doc
kiendang [Wed, 2 Jan 2019 08:18:07 +0000 (00:18 -0800)]
Fix torch.gesv args in doc

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15649

Differential Revision: D13564312

Pulled By: soumith

fbshipit-source-id: b3bba2ece600880077eb09b092ce17e331995bd6

5 years ago: clamp fixes (#15479)
surgan12 [Wed, 2 Jan 2019 07:09:45 +0000 (23:09 -0800)]
clamp fixes (#15479)

Summary: Fix for #15338.

Differential Revision: D13564343

Pulled By: soumith

fbshipit-source-id: be64b572945533e10ae6f627d335b47f093720a3

5 years ago: Updating submodules
svcscm [Wed, 2 Jan 2019 03:41:31 +0000 (19:41 -0800)]
Updating submodules

Reviewed By: cdelahousse

fbshipit-source-id: acb68439e62ea270af22364183a6ecba883fab66

5 years ago: Updating submodules
svcscm [Wed, 2 Jan 2019 01:20:19 +0000 (17:20 -0800)]
Updating submodules

Reviewed By: cdelahousse

fbshipit-source-id: 5c5ad6a5cc9220ee1dd9565d64c7459f866ff74d

5 years ago: Fix typo in documentation
Alexander Rodin [Mon, 31 Dec 2018 02:05:29 +0000 (18:05 -0800)]
Fix typo in documentation

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15628

Differential Revision: D13562685

Pulled By: soumith

fbshipit-source-id: 1621fcff465b029142313f717035e935e9159513

5 years ago: Make btriunpack work for high dimensional batches and faster than before (#15286)
vishwakftw [Sun, 30 Dec 2018 20:39:10 +0000 (12:39 -0800)]
Make btriunpack work for high dimensional batches and faster than before (#15286)

Summary:
Changelog:
- Optimize btriunpack by using `torch.where` instead of indexing, using in-place operations instead of out-of-place operations, and avoiding costly permutations by computing the final permutation over a list.
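
Usage of the optimized function, with this era's batched-LU API:

```python
import torch

A = torch.randn(4, 3, 3)          # a batch of 4 square matrices
A_LU, pivots = torch.btrifact(A)  # batched LU factorization
P, L, U = torch.btriunpack(A_LU, pivots)
print(P.shape, L.shape, U.shape)  # each torch.Size([4, 3, 3])
```
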
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15286

Differential Revision: D13562038

Pulled By: soumith

fbshipit-source-id: e2c94cfab5322bf1d24bf56d7b056619f553acc6

5 years ago: Add count_include_pad arg for average_pool_op on CPU (#15593)
Xiaomeng Yang [Sun, 30 Dec 2018 12:13:54 +0000 (04:13 -0800)]
Add count_include_pad arg for average_pool_op on CPU (#15593)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15593

Add count_include_pad arg for average_pool_op on CPU

Reviewed By: houseroad

Differential Revision: D13558123

fbshipit-source-id: 188879ec3af313105ff66ac0b5a81ea44fca2855

5 years ago: Remove TH/THC link for cholesky (#15595)
vishwakftw [Sun, 30 Dec 2018 01:50:32 +0000 (17:50 -0800)]
Remove TH/THC link for cholesky (#15595)

Summary:
Changelog:
- Remove TH/THC binding
- Port single matrix case to ATen
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15595

Differential Revision: D13561657

Pulled By: soumith

fbshipit-source-id: 65f8c4b455cf19a0c7b6aeac2e3b985c7a7208f8

5 years ago: Concatenate directly into shared memory when constructing batches for numpy (#14534)
Christoph [Sun, 30 Dec 2018 01:48:36 +0000 (17:48 -0800)]
Concatenate directly into shared memory when constructing batches for numpy (#14534)

Summary:
Since #1323, tensors are shared via shared memory, but this feature was not active for numpy arrays.
This PR fixes that.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14534

Differential Revision: D13561649

Pulled By: soumith

fbshipit-source-id: b6bc9e99fb91e8b675c2ef131fba9fa11c1647c0

5 years ago: Add a patch for OSX with SDK<10.12 (#15615)
Mark Harfouche [Sun, 30 Dec 2018 00:09:12 +0000 (16:09 -0800)]
Add a patch for OSX with SDK<10.12 (#15615)

Summary:
Fixes https://github.com/pytorch/pytorch/issues/15614

Build passing on SDK 10.9
https://dev.azure.com/ramonaoptics/feedstock-builds/_build/results?buildId=13
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15615

Differential Revision: D13561737

Pulled By: soumith

fbshipit-source-id: 2ab0f78338d4949fa3f2735915fd96dce4bcd621

5 years ago: Fix typo: szie -> size
Gao, Xiang [Sat, 29 Dec 2018 06:38:24 +0000 (22:38 -0800)]
Fix typo: szie -> size

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15466

Differential Revision: D13536343

Pulled By: soumith

fbshipit-source-id: cb3df30bf346ef6bc0bc1b6430107b3e0e086f8d

5 years ago: Make the warning suppression safer (#15560)
peter [Sat, 29 Dec 2018 06:10:08 +0000 (22:10 -0800)]
Make the warning suppression safer (#15560)

Summary:
Address the problem introduced in https://github.com/pytorch/pytorch/pull/15499#issuecomment-450038494.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15560

Differential Revision: D13561346

Pulled By: soumith

fbshipit-source-id: 6abf622672bdcb77ae1a7188e8a3817fa97aecbc

5 years ago: add NCHW2NHWC and NHWC2NCHW in utils.py (#15588)
Jongsoo Park [Sat, 29 Dec 2018 01:32:11 +0000 (17:32 -0800)]
add NCHW2NHWC and NHWC2NCHW in utils.py (#15588)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15588

Use the NHWC2NCHW and NCHW2NHWC functions, which are easier to understand than code using transpose and which generalize to non-2D convolutions.

Reviewed By: csummersea

Differential Revision: D13557674

fbshipit-source-id: c4fdb8850503ea58f6b17b188513ae2b29691ec0

5 years ago: Remove TH/THC link for gesv (#15510)
Vishwak Srinivasan [Sat, 29 Dec 2018 00:51:45 +0000 (16:51 -0800)]
Remove TH/THC link for gesv (#15510)

Summary:
This PR removes the TH/THC binding for gesv.

Changelog:
- Remove TH/THC binding
- Port single matrix case to ATen
- Enable test_gesv for CUDA as well
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15510

Differential Revision: D13559990

Pulled By: soumith

fbshipit-source-id: 9da2825e94d3103627e719709e6b1f8b521a07fb

5 years ago: keep extra_info of each op in ProfDagStats (#15244)
Dong Li [Fri, 28 Dec 2018 23:00:41 +0000 (15:00 -0800)]
keep extra_info of each op in ProfDagStats (#15244)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15244

This diff keeps track of the extra_info attached to each operator. When getPerOpStas() is called, it attaches the extra_info to the resulting ProfDagStats protobuf.

Facebook
Net transform attaches a global_op_id, defined as a tuple of (orig_net_name, original_op_index), to each operator.
The global_op_id is encoded as extra_info in each operator.

Reviewed By: aazzolini

Differential Revision: D13016289

fbshipit-source-id: 3e2719ec7ed0ebe47740b77581c565ff7e79b102

5 years ago: Error when torch.load-ing a JIT model (#15578)
David Riazati [Fri, 28 Dec 2018 21:52:01 +0000 (13:52 -0800)]
Error when torch.load-ing a JIT model (#15578)

Summary:
Throw a warning when calling `torch.load` on a zip file

Fixes #15570
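
A minimal sketch of the behavior (model and filename are illustrative):

```python
import torch

traced = torch.jit.trace(torch.nn.Linear(2, 2), torch.randn(1, 2))
traced.save("model.pt")             # zip-format TorchScript archive
torch.load("model.pt")              # now warns that this looks like a JIT archive
model = torch.jit.load("model.pt")  # the intended way to load it
```
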
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15578

Differential Revision: D13555954

Pulled By: driazati

fbshipit-source-id: a37ecdb3dd0c23eff809f86e2f8b74cd48ff7277

5 years ago: default_collate should collate bool list to byte tensors (#14669)
SsnL [Fri, 28 Dec 2018 19:51:26 +0000 (11:51 -0800)]
default_collate should collate bool list to byte tensors (#14669)

Summary:
Based on #15331. Review only the last commit.

Fixes https://github.com/pytorch/pytorch/issues/14507.
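
A minimal sketch (import path as of this era):

```python
import torch
from torch.utils.data.dataloader import default_collate

# a list of Python bools now collates to a tensor (a byte tensor in this
# era, before bool tensors existed) instead of raising
print(default_collate([True, False, True]))
```
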
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14669

Reviewed By: ezyang

Differential Revision: D13528725

Pulled By: soumith

fbshipit-source-id: f12f1ac1c4ff2a3ddd6877c0c096a5da3a1ffa3c

5 years ago: append caffe2 prefix to dnnlowp cmd line options (#15582)
Jongsoo Park [Fri, 28 Dec 2018 19:49:22 +0000 (11:49 -0800)]
append caffe2 prefix to dnnlowp cmd line options (#15582)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15582

Following the convention of having a caffe2_ prefix in command line options.

Reviewed By: viswanathgs

Differential Revision: D13252055

fbshipit-source-id: 142a6395b832f211f34d0a87ec2d62c1e5fcdc69

5 years ago: adding nightly build smoke tests to circleci
Jesse Hellemn [Fri, 28 Dec 2018 18:44:47 +0000 (10:44 -0800)]
adding nightly build smoke tests to circleci

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15441

Reviewed By: yf225

Differential Revision: D13552399

Pulled By: pjh5

fbshipit-source-id: 4a52ee2d08324b9ab6b8c266ad6a1cd3bdad1c71

5 years ago: add the int support (#15581)
Lingyi Liu [Fri, 28 Dec 2018 01:13:50 +0000 (17:13 -0800)]
add the int support (#15581)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15581

as title

Reviewed By: protonu

Differential Revision: D13556274

fbshipit-source-id: ba21f0970257d526e2fe7574eea4f89465b9c618

5 years ago: Move VariableImpl functions to AutogradMeta and Variable (#15487)
Will Feng [Fri, 28 Dec 2018 01:12:27 +0000 (17:12 -0800)]
Move VariableImpl functions to AutogradMeta and Variable (#15487)

Summary:
In this PR, we are moving all functions away from `Variable::Impl`, in order to get rid of `Variable::Impl` (and the `data_` Tensor in it) in the next PR. Some of the functions (such as `set_requires_grad` / `requires_grad` / `grad`) will be living in `AutogradMeta` class, while others (such as `backward()` / `rebase_history()` / `grad_accumulator()` / `grad_fn()`) will be living in `Variable` class.

This is the 2nd PR mentioned in https://github.com/pytorch/pytorch/issues/13638.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15487

Differential Revision: D13553173

Pulled By: yf225

fbshipit-source-id: 691f9432d0cd0640af380c757f3e3a2f64f8851c

5 years ago: test basic tensor interop
Roy Li [Fri, 28 Dec 2018 01:01:19 +0000 (17:01 -0800)]
test basic tensor interop

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/12249

Differential Revision: D13469356

Pulled By: li-roy

fbshipit-source-id: b49748462aa44ac34b8ce79783f2c895a537a232

5 years ago: Allow int/float cast to bool (#13391)
David Riazati [Thu, 27 Dec 2018 23:58:32 +0000 (15:58 -0800)]
Allow int/float cast to bool (#13391)

Summary:
This PR adds explicit `bool()` casts to match Python semantics

`bool(1) = True`
`bool(0) = False`
`bool(0.0) = False`
`bool(0.1) = True`
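
A minimal sketch in TorchScript:

```python
import torch

@torch.jit.script
def to_bool(x: float) -> bool:
    return bool(x)  # matches Python semantics: nonzero -> True

print(to_bool(0.0), to_bool(0.1))  # False True
```
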
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13391

Differential Revision: D12871213

Pulled By: driazati

fbshipit-source-id: 773a48b2647973138efe854abe725d647f1d727d

5 years ago: remove print ops before exporting onnx graph (#15550)
Elias Ellison [Thu, 27 Dec 2018 23:35:24 +0000 (15:35 -0800)]
remove print ops before exporting onnx graph (#15550)

Summary:
Removing print ops before exporting the ONNX graph; fixes https://github.com/pytorch/pytorch/issues/15505
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15550

Differential Revision: D13551195

Pulled By: eellison

fbshipit-source-id: 1ea1e34cb5b8433eacc2b86fb10b241198af96be

5 years ago: Added deviceCount() virtual method to DeviceGuardImplInterface (#15574)
Igor Fedan [Thu, 27 Dec 2018 23:24:22 +0000 (15:24 -0800)]
Added deviceCount() virtual method to DeviceGuardImplInterface (#15574)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15574

Added a deviceCount() virtual method to DeviceGuardImplInterface, and added corresponding implementations for CPUGuardImpl, CUDAGuardImpl, FakeGuardImpl, VirtualGuardImpl, and HIPGuardImplMasqueradingAsCUDA.

Reviewed By: soumith

Differential Revision: D13554609

fbshipit-source-id: 913bf2aad44a0a356efe54505ee4abaf6c4622db

5 years ago: Port torch.range to aten and parallelize on CPU.
Gregory Chanan [Thu, 27 Dec 2018 23:20:42 +0000 (15:20 -0800)]
Port torch.range to aten and parallelize on CPU.

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15484

Differential Revision: D13538955

Pulled By: gchanan

fbshipit-source-id: ee3889ad116988d963e603621310b3bbdce0aec9

5 years ago: Export group norm as ATen and add test (#15569)
Lu Fang [Thu, 27 Dec 2018 22:42:01 +0000 (14:42 -0800)]
Export group norm as ATen and add test (#15569)

Summary:
Short-term solution: export group norm as an ATen op to unblock users.
Long term, we will add GroupNorm to ONNX.

This adds an end-to-end test for it.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15569

Differential Revision: D13554293

Pulled By: houseroad

fbshipit-source-id: b4974c9ea2a1b81338ca1e5c6747efe2715d7932

5 years ago: Update cuda.get/set_rng_state doc (#14324)
SsnL [Thu, 27 Dec 2018 22:06:23 +0000 (14:06 -0800)]
Update cuda.get/set_rng_state doc (#14324)

Summary:
Now that `cuda.get/set_rng_state` accept `device` objects, the default value should be a device object, and the doc should mention so.
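
A minimal sketch (assuming a CUDA build):

```python
import torch

# `device` may now be a torch.device object rather than just an index
state = torch.cuda.get_rng_state(device=torch.device('cuda:0'))
torch.cuda.set_rng_state(state, device=torch.device('cuda:0'))
```
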
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14324

Reviewed By: ezyang

Differential Revision: D13528707

Pulled By: soumith

fbshipit-source-id: 32fdac467dfea6d5b96b7e2a42dc8cfd42ba11ee

5 years ago: Update QNNPACK (#15561)
Marat Dukhan [Thu, 27 Dec 2018 19:55:02 +0000 (11:55 -0800)]
Update QNNPACK (#15561)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15561

- Update QNNPACK submodule to master (API-incompatible)
- Do matching changes in Caffe2 Int8 operators

Reviewed By: dreiss

Differential Revision: D13551322

fbshipit-source-id: 066f9087061167f7d7cfbc1c8f8628dfa93d056e

5 years ago: Revert D13552080: [pytorch][PR] add clang-format check to CI
Michael Suo [Thu, 27 Dec 2018 18:53:58 +0000 (10:53 -0800)]
Revert D13552080: [pytorch][PR] add clang-format check to CI

Differential Revision:
D13552080

Original commit changeset: 462a73894c16

fbshipit-source-id: ebfc5aa3343cebabbc24ff39e4e9841a372443e2

5 years ago: Fix wrong class name in jit _make_fail (#15559)
daquexian [Thu, 27 Dec 2018 09:59:56 +0000 (01:59 -0800)]
Fix wrong class name in jit _make_fail (#15559)

Summary:
It should be ScriptModule rather than TracedModule :)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15559

Differential Revision: D13552058

Pulled By: soumith

fbshipit-source-id: 0aa17639c225818b00d59daec4bc2336f039f658

5 years ago: add clang-format check to CI (#15543)
Michael Suo [Thu, 27 Dec 2018 06:17:59 +0000 (22:17 -0800)]
add clang-format check to CI (#15543)

Summary:
Simple check that runs against your PR's changes and complains if running clang-format would have created a change. Does nothing when run against master, so it's "safe" to accept changes that fail this check and it won't break the build.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15543

Reviewed By: soumith

Differential Revision: D13552080

Pulled By: suo

fbshipit-source-id: 462a73894c16e7108806af7fa88440c377d4d0d2

5 years ago: Fix github branch prefix v (#15552)
Ailing Zhang [Thu, 27 Dec 2018 03:43:10 +0000 (19:43 -0800)]
Fix github branch prefix v (#15552)

Summary:
Fixes #15519.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15552

Differential Revision: D13550780

Pulled By: ailzhang

fbshipit-source-id: b117e5ced42de207b91045bffcee8907dd73201e

5 years ago: Rotated boxes support for GPU GenerateProposals op (#15470)
Viswanath Sivakumar [Thu, 27 Dec 2018 02:01:20 +0000 (18:01 -0800)]
Rotated boxes support for GPU GenerateProposals op (#15470)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15470

On top of D13509114 and D13017791. Pretty straightforward.

Reviewed By: newstzpz

Differential Revision: D13536671

fbshipit-source-id: ff65981b70c63773ccc9aef3ff28e3c9508f6716

5 years ago: CUDA kernel for rotated NMS support, over 200x speedup than CPU (#15365)
Viswanath Sivakumar [Thu, 27 Dec 2018 02:01:20 +0000 (18:01 -0800)]
CUDA kernel for rotated NMS support, over 200x speedup than CPU (#15365)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15365

On top of D13017791, this adds rotated NMS support with the same kernel building
blocks, resulting in a 218x speedup on average.

Reviewed By: SuperIRabbit

Differential Revision: D13509114

fbshipit-source-id: c1d33c8dc4bc50b5906b4f01bb0caf1115e2a357

5 years ago: Move autograd metadata from VariableImpl to TensorImpl (#13827)
Will Feng [Thu, 27 Dec 2018 00:31:47 +0000 (16:31 -0800)]
Move autograd metadata from VariableImpl to TensorImpl (#13827)

Summary:
Changes originally in this PR:
1. Move Variable::Impl data members into TensorImpl as `AutogradMeta` struct
2. Change Variable::Impl functions to use data members in `AutogradMeta` struct
3. Add `shallow_copy_and_detach()` function to each subclass of TensorImpl
4. Do shallow copy when the user calls `make_variable(tensor)` / `make_variable_view(tensor)` / `variable.set_data(tensor)` / `variable.detach()`

Changes moved from https://github.com/pytorch/pytorch/pull/13645:
1. Add a flag to Variable to disallow size/stride/storage_ptr changes from in-place operations such as `resize_` / `resize_as_` / `set_` / `transpose_`, and set this flag to true when people call `tensor.data` in Python.
2. Write text in the docs to actively discourage changing the shape or storage of `tensor_detached` and expecting `tensor` to also be updated.

This is the 1st+2nd PR mentioned in https://github.com/pytorch/pytorch/issues/13638.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13827

Differential Revision: D13507173

Pulled By: yf225

fbshipit-source-id: b177b08438d534a8197e34e1ad4a837e2db0ed6a

5 years ago: version bump to 1.1 (#15554)
Soumith Chintala [Wed, 26 Dec 2018 23:41:46 +0000 (15:41 -0800)]
version bump to 1.1 (#15554)

Summary:
version bump to 1.1
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15554

Differential Revision: D13550818

Pulled By: soumith

fbshipit-source-id: 8a28582c98b42c081e103581551a01fd96c9f42d

5 years ago: In README.md CMAKE_PREFIX_PATH should be CONDA_PREFIX when using an conda virtual...
derek [Wed, 26 Dec 2018 20:54:17 +0000 (12:54 -0800)]
In README.md CMAKE_PREFIX_PATH should be CONDA_PREFIX when using an conda virtual environment (#15548)

Summary:
In the current README.md, `CMAKE_PREFIX_PATH` is set to the conda root even when you have activated a virtual environment. When a conda virtualenv is activated, packages are installed in `CONDA_PREFIX`, not the conda root, so I think `CMAKE_PREFIX_PATH` should also be set to `CONDA_PREFIX` in this case. I think some build issues can be solved with the new instruction. Maybe something like #14954.

soumith,
When I made PR #15335, I was confused and made a wrong point. I think this PR could be the real solution.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15548

Differential Revision: D13549681

Pulled By: soumith

fbshipit-source-id: 42d855b6e49ee58d735d2f4715d3e5752a748693

5 years ago: add from_pretrained method to EmbeddingBag (#15273)
David Pollack [Wed, 26 Dec 2018 16:31:00 +0000 (08:31 -0800)]
add from_pretrained method to EmbeddingBag (#15273)

Summary:
The `EmbeddingBag` module does not include a `from_pretrained` method like the `Embedding` module.  I added it for consistency between the two modules.
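
Usage mirrors `Embedding.from_pretrained`:

```python
import torch

weights = torch.randn(10, 4)  # 10 pretrained embeddings of dim 4
bag = torch.nn.EmbeddingBag.from_pretrained(weights)
out = bag(torch.tensor([[1, 2, 4], [4, 3, 2]]))  # default: mean over each bag
print(out.shape)  # torch.Size([2, 4])
```
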
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15273

Differential Revision: D13547842

Pulled By: soumith

fbshipit-source-id: 8ffde51ff0c1e8fc8310263b6f375da88089ff7d

5 years ago: Make argument size checking consistent across CPU and CUDA for torch.gesv (#15430)
vishwakftw [Wed, 26 Dec 2018 16:29:57 +0000 (08:29 -0800)]
Make argument size checking consistent across CPU and CUDA for torch.gesv (#15430)

Summary:
There is an inconsistency in argument size checking for gesv, which is fixed in this PR.

Changelog:
- Replicate check in CPU as done for CUDA
- Fix argument ordering (minor) in CUDA checking

Fixes #15328

Differential Revision: D13531167

Pulled By: soumith

fbshipit-source-id: c4b4e4fc12880208d08e88d1e47e730ac98c2ad3

5 years ago: clang format world (#15524)
Michael Suo [Wed, 26 Dec 2018 14:52:25 +0000 (06:52 -0800)]
clang format world (#15524)

Summary:
The PR clang-formats everything in `torch/csrc/jit/` and adds it to the pre-commit hook.

Here is a list of non-mechanical changes:
- I went over each file and fixed up whenever I could tell that clang-format was clobbering comment formatting.
- Made the macros in register_prim_ops a little more clang-format friendly by omitting trailing commas
- Refactored autodiff.cpp to use a helper class with explicit state rather than a bunch of capturing lambdas
- Small improvements to the precommit hook clang-format
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15524

Differential Revision: D13547989

Pulled By: suo

fbshipit-source-id: 3ff1541bb06433ccfe6de6e33f29227a2b5bb493

5 years ago: Added correct isinf handling for Integral tensors (#15489)
Frank Zhang [Wed, 26 Dec 2018 14:32:44 +0000 (06:32 -0800)]
Added correct isinf handling for Integral tensors (#15489)

Summary:
Currently, torch.isinf on an integral tensor raises `RuntimeError: value cannot be converted to type int16_t without overflow: inf`.
This PR suppresses the error and returns false (0) for all integral tensors, which is also consistent with np.isinf.
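
A minimal sketch of the new behavior:

```python
import torch

x = torch.tensor([1, 2, 3], dtype=torch.int16)
# integral tensors can never hold inf, so this returns all false
# instead of raising, consistent with np.isinf
print(torch.isinf(x))
```
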
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15489

Reviewed By: zou3519

Differential Revision: D13540786

Pulled By: flashhack

fbshipit-source-id: e730dea849da6a59f3752d347bcfbadfd12c6483

5 years ago: Trivial comment update in autograd/function.h (#15529)
Derek Kim [Wed, 26 Dec 2018 10:11:17 +0000 (02:11 -0800)]
Trivial comment update in autograd/function.h (#15529)

Summary:
I removed the explanation of the `num_inputs` parameter, which was removed in #8168.

colesbury
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15529

Differential Revision: D13547854

Pulled By: soumith

fbshipit-source-id: 8a9ac58f2c93a2533b82ec63089477166ed0bcb9

5 years ago: Fix failed type cast in Windows Debug Build (#15333)
peter [Wed, 26 Dec 2018 08:46:13 +0000 (00:46 -0800)]
Fix failed type cast in Windows Debug Build (#15333)

Summary:
Fixes #15330
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15333

Differential Revision: D13531317

Pulled By: soumith

fbshipit-source-id: b956f27bd7fa33cbdf405338fcbcbc7df2fd629f

5 years ago: Upgrade MKL-DNN to version 0.17 and static build MKL-DNN (#15504)
Gu, Jinghui [Wed, 26 Dec 2018 06:54:16 +0000 (22:54 -0800)]
Upgrade MKL-DNN to version 0.17 and static build MKL-DNN (#15504)

Summary:
Upgrade MKL-DNN to 0.17 and statically build MKL-DNN to fix potential build errors due to an old mkldnn version on the host system.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15504

Differential Revision: D13547885

Pulled By: soumith

fbshipit-source-id: 46f790a3d9289c1e153e51c62be17c5206ea8f9a

5 years ago: remove legacy from docs (#15112)
Soumith Chintala [Wed, 26 Dec 2018 05:55:26 +0000 (21:55 -0800)]
remove legacy from docs (#15112)

Summary:
Fixes https://github.com/pytorch/pytorch/issues/15062
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15112

Differential Revision: D13547845

Pulled By: soumith

fbshipit-source-id: 61e3e6c6b0f6b6b3d571bee02db2938ea9698c99

5 years ago: Use at::zeros instead of torch::zeros in non-differentiable example (#15527)
Alexander Rodin [Wed, 26 Dec 2018 05:43:38 +0000 (21:43 -0800)]
Use at::zeros instead of torch::zeros in non-differentiable example (#15527)

Summary:
There was a typo in the C++ docs.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15527

Differential Revision: D13547858

Pulled By: soumith

fbshipit-source-id: 1f5250206ca6e13b1b1443869b1e1c837a756cb5

5 years ago: Fix the compare logic in function `overflows` for MSVC (#15499)
peter [Wed, 26 Dec 2018 05:43:22 +0000 (21:43 -0800)]
Fix the compare logic in function `overflows` for MSVC (#15499)

Summary:
Fixes https://github.com/pytorch/pytorch/issues/15497.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15499

Differential Revision: D13547835

Pulled By: soumith

fbshipit-source-id: a674da93bf905a0b81f0cc60449ccb97c2746926