platform/upstream/pytorch.git
5 years agoUpdate pooling.py (#14998)
paland3 [Tue, 11 Dec 2018 06:34:17 +0000 (22:34 -0800)]
Update pooling.py (#14998)

Summary:
Strange line in the documentation.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14998

Differential Revision: D13413235

Pulled By: soumith

fbshipit-source-id: 80d05ec1185719b785f0aac914bc2369c1174f2f

5 years agoClean up casting ops (#14947)
Zachary DeVito [Tue, 11 Dec 2018 06:10:11 +0000 (22:10 -0800)]
Clean up casting ops (#14947)

Summary:
This removes FloatToInt-style names, replacing them with just the destination
name (e.g. IntToFloat -> Float). This makes it more consistent with the
syntax and makes it easier to add type conversions (just add a new
prim::Int op, for instance).

None of these ops get serialized, so this should not affect loading of
old models.
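
A hedged sketch of how the renamed casts surface in a TorchScript graph (the exact graph text can vary across versions):

```python
import torch

@torch.jit.script
def to_int(x):
    # scripting a Python int() cast emits the new, shorter cast op
    return int(x)

print(to_int.graph)  # look for a node like: ... = prim::Int(%x)
```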
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14947

Differential Revision: D13408409

Pulled By: zdevito

fbshipit-source-id: d773fe863f14d9de893f686832769f8cc8903a8e

5 years agoshare code between adagrad and rowwise adagrad tests (#14692)
Jongsoo Park [Tue, 11 Dec 2018 06:08:04 +0000 (22:08 -0800)]
share code between adagrad and rowwise adagrad tests (#14692)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14692

Remove some code duplication

Reviewed By: chocjy

Differential Revision: D13296731

fbshipit-source-id: 5924e037ca64fc4b89234be922bc5ca47fb8bd32

5 years agoTBB task graph (#15041)
Ilia Cherniavskii [Tue, 11 Dec 2018 05:30:53 +0000 (21:30 -0800)]
TBB task graph (#15041)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15041

Adding an alternative implementation of a task graph based on TBB

Reviewed By: dmudiger

Differential Revision: D13412517

fbshipit-source-id: f5efedd680bbe0072bf38d504e5682ab51dd630f

5 years agoEnable more caffe2 fp16 rocm tests (#15040)
bddppq [Tue, 11 Dec 2018 05:25:45 +0000 (21:25 -0800)]
Enable more caffe2 fp16 rocm tests (#15040)

Summary:
cc rohithkrn petrex
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15040

Reviewed By: houseroad

Differential Revision: D13413068

Pulled By: bddppq

fbshipit-source-id: b2967f16f8da0b9e80083138fb8632c14e9e9b63

5 years agoEnable the build of tests in ATen/core (#15032)
Lu Fang [Tue, 11 Dec 2018 05:22:44 +0000 (21:22 -0800)]
Enable the build of tests in ATen/core (#15032)

Summary:
Otherwise they won't build
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15032

Reviewed By: yinghai

Differential Revision: D13409801

Pulled By: houseroad

fbshipit-source-id: 95464aa8f3604835997ba1bb7f3c3e51485d1686

5 years agoMore scaffolding for LegacyTHDispatch. (#14852)
Gregory Chanan [Tue, 11 Dec 2018 03:51:27 +0000 (19:51 -0800)]
More scaffolding for LegacyTHDispatch. (#14852)

Summary:
1) at::functions are now also exposed in the at::legacy::th namespace and we move relevant calls over to use them (to avoid merge conflicts)
2) LegacyTHDispatch now handles device-type initialization
3) We generate derived LegacyTHDispatchers, e.g. THLegacyCPULongDispatcher, although they are currently empty.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14852

Reviewed By: ezyang

Differential Revision: D13360852

Pulled By: gchanan

fbshipit-source-id: af6705aeba3593ea5dba9bfc62890e5257bc81f8

5 years agoBack out "Revert D13043261: [caffe2] Task graph and task future abstractions in executor"
Ilia Cherniavskii [Tue, 11 Dec 2018 03:18:06 +0000 (19:18 -0800)]
Back out "Revert D13043261: [caffe2] Task graph and task future abstractions in executor"

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15030

Reviewed By: bddppq

Differential Revision: D13408998

fbshipit-source-id: 9eb675e09fbc4829eab34df7aa660a0590816feb

5 years agoTensor construction codemod - 2/3 (#14836)
Jerry Zhang [Tue, 11 Dec 2018 03:16:18 +0000 (19:16 -0800)]
Tensor construction codemod - 2/3 (#14836)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14836

Codemod generated with clangr shard mode, 25 files per diff,
motivation: https://github.com/pytorch/pytorch/pull/12407

Reviewed By: bddppq

Differential Revision: D13335176

fbshipit-source-id: 8d89510670e2cf70559d2f75e68f7181feb0b6d9

5 years agoFixing reading of FBGEMM from env variables
Jesse Hellemn [Tue, 11 Dec 2018 02:16:42 +0000 (18:16 -0800)]
Fixing reading of FBGEMM from env variables

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15023

Reviewed By: orionr

Differential Revision: D13406778

Pulled By: pjh5

fbshipit-source-id: 2265f01170fb7969cbdf4e44ca6ef183f5d8017d

5 years agoAlignas Array struct (#14920)
Syed Tousif Ahmed [Tue, 11 Dec 2018 01:53:33 +0000 (17:53 -0800)]
Alignas Array struct (#14920)

Summary:
This PR aligns the Array struct such that cuda vector performance improvements can be utilized.

I tested this by using it on our Philox header. Note how the vector store instruction gets used for cuda vector types and when using alignas on Array, vs when not using alignas on Array.

With cuda vector type (uint4, uint2, float4): https://godbolt.org/z/UaWOmR
With alignas: https://godbolt.org/z/Eeh0t5
Without alignas: https://godbolt.org/z/QT63gq
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14920

Differential Revision: D13406751

Pulled By: soumith

fbshipit-source-id: 685b1010ef1f576dde30c278b1e9b642f87c843d

5 years agoIntegrate rocBLAS fp16 api into Caffe2 (#14882)
rohithkrn [Tue, 11 Dec 2018 01:25:46 +0000 (17:25 -0800)]
Integrate rocBLAS fp16 api into Caffe2 (#14882)

Summary:
This PR integrates rocBLAS half and mixed-precision APIs into Caffe2.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14882

Differential Revision: D13407840

Pulled By: bddppq

fbshipit-source-id: 75cb0d74da066776fa66575f1d255e879d36121e

5 years agoFix old tensor CopyFrom usage in boolean mask operator
Junjie Bai [Tue, 11 Dec 2018 01:19:15 +0000 (17:19 -0800)]
Fix old tensor CopyFrom usage in boolean mask operator

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15025

Differential Revision: D13407323

Pulled By: bddppq

fbshipit-source-id: 1bc1d28ad0c6c71d25d788549be18917e393ee50

5 years agounit test with multiple omp threads (#14958)
Jongsoo Park [Tue, 11 Dec 2018 01:16:32 +0000 (17:16 -0800)]
unit test with multiple omp threads (#14958)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14958

Test with multiple threads

Reviewed By: jianyuh

Differential Revision: D13394791

fbshipit-source-id: 931a6c3bda15ebc816807e537dd0841c383e7a6f

5 years agoRemove partially initialized Tensor in Deserialization (#14197)
Jerry Zhang [Tue, 11 Dec 2018 01:13:51 +0000 (17:13 -0800)]
Remove partially initialized Tensor in Deserialization (#14197)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14197

Pull Request resolved: https://github.com/pytorch/pytorch/pull/13642

Previously we passed a partially initialized Tensor to Deserialize and it would fill
it with the result of deserializing a tensor proto. Now we want it to return
a Tensor directly, since a Tensor is just a shared pointer to a TensorImpl.

Reviewed By: dzhulgakov

Differential Revision: D12874357

fbshipit-source-id: 12b80a763375da23cfa64a74d6bc186d8d03b94f

5 years agoRevert D13043261: [caffe2] Task graph and task future abstractions in executor
Junjie Bai [Mon, 10 Dec 2018 23:56:37 +0000 (15:56 -0800)]
Revert D13043261: [caffe2] Task graph and task future abstractions in executor

Differential Revision:
D13043261

Original commit changeset: d89424354aea

fbshipit-source-id: b307e3281c4d83b60ba2bfadcbcf69afb7a41412

5 years agoapply() for ScriptModules (#14655)
James Reed [Mon, 10 Dec 2018 23:35:11 +0000 (15:35 -0800)]
apply() for ScriptModules (#14655)

Summary:
This can be used to initialize state that is not necessarily eligible for serialization or is implementation-specific. Concretely, I'm going to use this to pack the weight matrices for quantized Linear modules according to the FBGEMM APIs
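
A minimal sketch of the intended usage (module and function names here are hypothetical; `apply` presumably mirrors `nn.Module.apply` and visits every submodule):

```python
import torch

def prepare(m):
    # called once per (sub)module; a natural place to set up
    # implementation-specific state that should not be serialized
    print('visiting', type(m).__name__)

class MyModule(torch.jit.ScriptModule):
    pass

MyModule().apply(prepare)
```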
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14655

Differential Revision: D13404438

Pulled By: jamesr66a

fbshipit-source-id: 2d327cef5520fdd716b5b1b29effd60a049e8a4a

5 years agoSimplify THPPointer implementation for Storage. (#14897)
Edward Yang [Mon, 10 Dec 2018 23:16:27 +0000 (15:16 -0800)]
Simplify THPPointer implementation for Storage. (#14897)

Summary:
We've virtualized the destructor for storage, so we
no longer have to forward to a particular backend.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14897

Differential Revision: D13399216

Pulled By: ezyang

fbshipit-source-id: 531d29c3f278477cfa8759f30ab4f304d695b659

5 years agoDisable getNumGPUs rewrite (#14993)
Edward Yang [Mon, 10 Dec 2018 23:10:23 +0000 (15:10 -0800)]
Disable getNumGPUs rewrite (#14993)

Summary:
cc iotamudelta

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14993

Differential Revision: D13405804

Pulled By: ezyang

fbshipit-source-id: c4aa9ed29ee2a4f3abf76c1e0fa8babfd738db35

5 years agoFix include path for WrapDimMinimal.h
Sebastian Messmer [Mon, 10 Dec 2018 23:06:30 +0000 (15:06 -0800)]
Fix include path for WrapDimMinimal.h

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14794

Reviewed By: dzhulgakov

Differential Revision: D13336842

fbshipit-source-id: ca49a9fd1d409d8a75e43eeb9b9b02c305ebb79a

5 years agoMove WrapDimMinimal to c10
Sebastian Messmer [Mon, 10 Dec 2018 23:06:30 +0000 (15:06 -0800)]
Move WrapDimMinimal to c10

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14793

Reviewed By: ezyang

Differential Revision: D13336841

fbshipit-source-id: 4365a799e1856cc68dd94a273e97663fee5f51db

5 years agoStop disabling maybeOverlappingIndices (#14999)
Edward Yang [Mon, 10 Dec 2018 22:52:16 +0000 (14:52 -0800)]
Stop disabling maybeOverlappingIndices (#14999)

Summary:
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
cc iotamudelta
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14999

Differential Revision: D13405754

Pulled By: ezyang

fbshipit-source-id: 98459496494390ad1115b4f1f6738d53c14f0745

5 years agoadd gloo allgather support on GPU (#14576)
Jane Wang [Mon, 10 Dec 2018 22:29:41 +0000 (14:29 -0800)]
add gloo allgather support on GPU (#14576)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14576

as titled

Reviewed By: pietern

Differential Revision: D13266063

fbshipit-source-id: e262f77d63724a7504a7112907bbfba49612fe75

5 years agoTask graph and task future abstractions in executor
Ilia Cherniavskii [Mon, 10 Dec 2018 22:21:24 +0000 (14:21 -0800)]
Task graph and task future abstractions in executor

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14116

Reviewed By: dmudiger

Differential Revision: D13043261

fbshipit-source-id: d89424354aea14d1d14eb8320fb3aa34908a4e81

5 years agocaffe2/caffe2/contrib/script (#15007)
Jerry Zhang [Mon, 10 Dec 2018 22:17:43 +0000 (14:17 -0800)]
caffe2/caffe2/contrib/script (#15007)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15007

Pull Request resolved: https://github.com/pytorch/pytorch/pull/14979

att

Reviewed By: dzhulgakov

Differential Revision: D13286191

fbshipit-source-id: b8a6bc7aea44487aea4dcf7f44c858fd30c6293c

5 years agos/Torch Script/TorchScript/g (#15011)
Michael Suo [Mon, 10 Dec 2018 21:43:11 +0000 (13:43 -0800)]
s/Torch Script/TorchScript/g (#15011)

Summary:
pls
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15011

Differential Revision: D13404158

Pulled By: suo

fbshipit-source-id: e906281463d65c86e4e9073eb0c0a26f4f29e307

5 years agoImprove the docs of interpolate(align_corners=) (#14806)
Yuxin Wu [Mon, 10 Dec 2018 20:48:11 +0000 (12:48 -0800)]
Improve the docs of interpolate(align_corners=) (#14806)

Summary:
ailzhang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14806

Reviewed By: ailzhang

Differential Revision: D13366332

Pulled By: ppwwyyxx

fbshipit-source-id: 08fcea95d5c86b11cdfe464fdd9daa50050871f1

5 years agoImprove build time of register_symbols.cpp without compiler hacks (#14911)
Giuseppe Ottaviano [Mon, 10 Dec 2018 19:54:45 +0000 (11:54 -0800)]
Improve build time of register_symbols.cpp without compiler hacks (#14911)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14911

In optimized modes the compiler tries to inline all the
`unordered_map::operator[]` calls, creating a massive amount of code
which takes several minutes to optimize. Instead, create a table of
PODs and populate the maps using a simple loop.

Reviewed By: soumith, luciang

Differential Revision: D13382948

fbshipit-source-id: b6752921e0f7213595d26b39e4397f6a3897960b

5 years agoDelete defunct THP_API.h header. (#14899)
Edward Yang [Mon, 10 Dec 2018 18:44:13 +0000 (10:44 -0800)]
Delete defunct THP_API.h header. (#14899)

Summary:
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14899

Differential Revision: D13383687

Pulled By: ezyang

fbshipit-source-id: f2a08a769cc3775ba55f9c58d622a83df622d816

5 years agoDisable test_leaf_variable_sharing on ASAN runs
Edward Yang [Mon, 10 Dec 2018 18:40:25 +0000 (10:40 -0800)]
Disable test_leaf_variable_sharing on ASAN runs

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15001

Reviewed By: orionr

Differential Revision: D13399119

fbshipit-source-id: 6b1d098e55a67b1f5bc6d08a8ee3c1be8234a654

5 years agoRevert D13306052: [pytorch][PR] Allow converting CharTensor to np arrays
Edward Yang [Mon, 10 Dec 2018 18:29:43 +0000 (10:29 -0800)]
Revert D13306052: [pytorch][PR] Allow converting CharTensor to np arrays

Differential Revision:
D13306052

Original commit changeset: 202d038f139c

fbshipit-source-id: 11f6bdd687f8ea5ce2e5f28f48d19449a5c403eb

5 years agoNon-INTERFACE AT_LINK_STYLE is dead code (#14822)
Edward Yang [Mon, 10 Dec 2018 17:32:55 +0000 (09:32 -0800)]
Non-INTERFACE AT_LINK_STYLE is dead code (#14822)

Summary:
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14822

Differential Revision: D13355574

Pulled By: ezyang

fbshipit-source-id: a7173084f8735424619b2e393df2715a05918b44

5 years agoSupport torch.load with encoding (#14743)
SsnL [Mon, 10 Dec 2018 16:05:06 +0000 (08:05 -0800)]
Support torch.load with encoding (#14743)

Summary:
Addresses a common compatibility issue regarding bytes when loading Py2 checkpoints in Py3.

E.g.,
[1] https://github.com/pytorch/pytorch/issues/5994,
[2] https://github.com/CSAILVision/places365/issues/25,
[3] https://discuss.pytorch.org/t/how-to-load-a-saved-model-trained-on-pytorch-0-3-1-python-2-7-on-pyorch-1-0-python-3-7/31212
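
A minimal usage sketch (the checkpoint path is hypothetical):

```python
import torch

# Under Py3, decode Py2 bytes in the pickle as latin1 instead of
# failing with a UnicodeDecodeError:
state = torch.load('py2_checkpoint.pth', encoding='latin1')

# or keep them as raw bytes:
state = torch.load('py2_checkpoint.pth', encoding='bytes')
```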
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14743

Reviewed By: weiyangfb

Differential Revision: D13350888

Pulled By: soumith

fbshipit-source-id: 2df4e828a8b70509118a355307ca3ebe51e108f6

5 years agoConvert int8 numpy array to CharTensor (#14700)
SsnL [Mon, 10 Dec 2018 15:36:06 +0000 (07:36 -0800)]
Convert int8 numpy array to CharTensor (#14700)

Summary:
When rewriting `default_collate`, I noticed that `from_numpy`, `as_tensor` and `tensor` all fail on `np.int8` arrays.
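
With this change, the usual conversion paths accept `np.int8` (a quick sketch):

```python
import numpy as np
import torch

a = np.array([1, -2, 3], dtype=np.int8)
torch.from_numpy(a)  # tensor([ 1, -2,  3], dtype=torch.int8), shares memory with a
torch.as_tensor(a)   # same dtype, avoids a copy where possible
torch.tensor(a)      # always copies
```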
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14700

Reviewed By: weiyangfb

Differential Revision: D13305297

Pulled By: soumith

fbshipit-source-id: 2937110f65ed714ee830d50098db292238e9b2a9

5 years agoAllow converting CharTensor to np arrays (#14710)
SsnL [Mon, 10 Dec 2018 15:33:26 +0000 (07:33 -0800)]
Allow converting CharTensor to np arrays (#14710)

Summary:
The other direction of #14700

cc soumith
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14710

Reviewed By: weiyangfb

Differential Revision: D13306052

Pulled By: soumith

fbshipit-source-id: 202d038f139cf05e01069ff8d05268c66354c983

5 years agopre-pack operation of dnnlowp conv with 16-bit accumulation (#14881)
Jongsoo Park [Mon, 10 Dec 2018 09:06:17 +0000 (01:06 -0800)]
pre-pack operation of dnnlowp conv with 16-bit accumulation (#14881)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14881

This diff allows us to pre-quantize and pre-pack the weight matrix used in DNNLOWP_ACC16.
The intended use pattern is to run Int8ConvPackWeight in init_net to generate a packed weight, which Int8Conv with the DNNLOWP_ACC16 engine then uses.

Reviewed By: csummersea

Differential Revision: D13374662

fbshipit-source-id: dd02b9a4eb7af1fe208aa857fcd0b445e6e395af

5 years agoRespect -q of setup.py (#14972)
Zachary DeVito [Mon, 10 Dec 2018 06:45:18 +0000 (22:45 -0800)]
Respect -q of setup.py (#14972)

Summary:
1. Changes the prints along the 'rebuild' pathway to respect the '-q' flag of setup.py
A clean rebuild now only prints:

    [zdevito@devgpu172.prn2 /data/users/zdevito/pytorch] python setup.py -q rebuild develop
    [0/1] Install the project...
    -- Install configuration: "RelWithDebInfo"
    ninja: no work to do.
    ninja: no work to do.
    ninja: no work to do.
    ninja: no work to do.
    ninja: no work to do.
    ninja: no work to do.

2. Deletes apparently dead calls to `generate_code`. Now that CMake builds these files,
it appears that it is getting called twice and the second version is never used.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14972

Reviewed By: soumith

Differential Revision: D13396330

Pulled By: zdevito

fbshipit-source-id: 83c45143bbc6a6d2c1cfee929291ec059f2b5dc3

5 years ago_get_device_index supports parsing device strings
SsnL [Mon, 10 Dec 2018 05:10:39 +0000 (21:10 -0800)]
_get_device_index supports parsing device strings

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14929
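
An illustration of the accepted inputs (`_get_device_index` is a private helper in `torch.cuda._utils`; shown here only as a sketch):

```python
import torch
from torch.cuda._utils import _get_device_index

_get_device_index('cuda:1')                 # 1 -- device strings now parse
_get_device_index(torch.device('cuda', 0))  # 0
_get_device_index(1)                        # 1 -- plain ints still work
```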

Reviewed By: weiyangfb

Differential Revision: D13394498

Pulled By: soumith

fbshipit-source-id: 948c6118abdf6c1e1a8a17709333954cafb2345e

5 years agoremove mingfeima mkldnn reference from README, as no longer necessary (#14975)
Soumith Chintala [Mon, 10 Dec 2018 04:41:44 +0000 (20:41 -0800)]
remove mingfeima mkldnn reference from README, as no longer necessary (#14975)

Summary: we now get mkldnn automatically from third_party/ideep

Differential Revision: D13396480

Pulled By: soumith

fbshipit-source-id: 20f819ba4b78cbe9c7d0baeab1c575669cbf6c20

5 years agofixing some rebuild issues (#14969)
Zachary DeVito [Mon, 10 Dec 2018 00:29:38 +0000 (16:29 -0800)]
fixing some rebuild issues (#14969)

Summary:
This fixes rebuild issues with the ninja part of the build. With this patch all ninja files will now report `nothing to do` if nothing has changed assuming `BUILD_CAFFE2_OPS=0`.

1. This only does the python file processing for caffe2 when BUILD_CAFFE2_OPS=1, this part of the build file is written in such a way that it is always required to rerun and can take substantial time to move files around in the no-op build. In the future this part should be rewritten to use a faster method of copying the files or should treat copying the files as part of the build rules and only run when the files are out of date.

2. This points `sleef` to a patched version that fixes a dead build output that is causing everything to relink all the time. See https://github.com/shibatch/sleef/pull/231#partial-pull-merging for the upstream change.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14969

Reviewed By: soumith

Differential Revision: D13395998

Pulled By: zdevito

fbshipit-source-id: ca85b7be9e99c5c578103c144ef0f2c3b927e724

5 years agoRemove deprecated info argument in btrifact (#14935)
vishwakftw [Sun, 9 Dec 2018 23:53:34 +0000 (15:53 -0800)]
Remove deprecated info argument in btrifact (#14935)

Summary:
As specified in title.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14935

Differential Revision: D13394449

Pulled By: soumith

fbshipit-source-id: 569d59414f3a1a43ea641bded4b5433eb53e3490

5 years agoadd fix for CUDA 10 (#14971)
Soumith Chintala [Sun, 9 Dec 2018 23:52:25 +0000 (15:52 -0800)]
add fix for CUDA 10 (#14971)

Summary:
Linux binaries-only fix for CUDA10
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14971

Differential Revision: D13395932

Pulled By: soumith

fbshipit-source-id: a72d6ab6b98c6c936e6391d55d2e4e45b9f1e6dd

5 years agoFix mismatched test_{full,ones,zeros}_like onnx expect files (#14956)
Your Name [Sun, 9 Dec 2018 16:55:26 +0000 (08:55 -0800)]
Fix mismatched test_{full,ones,zeros}_like onnx expect files (#14956)

Summary:
master broken by #14903
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14956

Differential Revision: D13395363

Pulled By: bddppq

fbshipit-source-id: 31f0913843292e557807fd5a976f8907fa6cae4b

5 years agofix auto grad summing for IfOp where intermediate output needs renaming (#14772)
Yiming Wu [Sun, 9 Dec 2018 16:23:36 +0000 (08:23 -0800)]
fix auto grad summing for IfOp where intermediate output needs renaming (#14772)

Summary:
fix auto grad summing for IfOp where an intermediate output needs renaming.

Bug before this diff:
- we only renamed the output of the IfOp without changing the subnet ops' outputs
- this resulted in a blob-not-found error

The unit test provides an example; this diff fixes that for IfOp.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14772

Differential Revision: D13327090

Pulled By: harouwu

fbshipit-source-id: ec40ee88526ace3619c54551e223dd71158a02f8

5 years agoExport ones_like, zeros_like and full_like using ONNX ConstantLike op. (#14903)
Spandan Tiwari [Sun, 9 Dec 2018 06:46:03 +0000 (22:46 -0800)]
Export ones_like, zeros_like and full_like using ONNX ConstantLike op. (#14903)

Summary:
This PR does the following:
1) Updates the ONNX export for `torch.zeros_like` and `torch.full_like` ops to use the ONNX op `ConstantLike`. This reduces use of the experimental op `ConstantFill`, which may be removed in the future (see https://github.com/onnx/onnx/pull/1434).
2) It also adds export support for `torch.ones_like`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14903

Differential Revision: D13383700

Pulled By: houseroad

fbshipit-source-id: 566d00a943e9497172fcd5a034b638a650ab13a2

5 years agoCanonicalize all includes in PyTorch. (#14849)
Edward Yang [Sun, 9 Dec 2018 03:32:01 +0000 (19:32 -0800)]
Canonicalize all includes in PyTorch. (#14849)

Summary:
Anywhere we used #include "foo.h", we now say #include <foo.h>
Paths are adjusted to be rooted out of aten/src, torch/lib, or
the root level directory.

I modified CMakeLists.txt by hand to remove TH and THC from
the include paths.

I used the following script to do the canonicalization:

```
  # Rewrite #include "foo.h" into #include <foo.h> for every tracked
  # C/C++/CUDA source under aten/ and torch/.
  import subprocess
  import re
  import os.path

  files = subprocess.check_output(['git', 'ls-files']).decode('utf-8').rstrip().split('\n')
  for fn in files:
      if not any(fn.endswith(suff) for suff in ['.cu', '.cpp', '.in', '.h', '.hpp', '.cuh', '.cc']):
          continue
      if not any(fn.startswith(pref) for pref in ["aten/", "torch/"]):
          continue
      with open(fn, 'r') as f:
          c = f.read()
      def fmt(p):
          return "#include <{}>".format(p)
      def repl(m):
          p = m.group(1)
          # Well-known system/vendor headers pass through unchanged.
          if p in ["dlfcn.h", "unistd.h", "nvrtc.h", "cuda.h", "cuda_runtime.h", "cstdint", "cudnn.h", "Python.h", "cusparse.h", "cuda_runtime_api.h", "cuda_fp16.h", "cublas_v2.h", "stdint.h", "curand_kernel.h"]:
              return fmt(p)
          # Paths already rooted at a known project prefix only change quote style.
          if any(p.startswith(pref) for pref in ["torch/csrc", "c10/", "ATen/", "caffe2/", "TH/", "THC/", "Eigen/", "gtest/", "zdl/", "gloo/", "onnx/", "miopen/"]):
              return fmt(p)
          # Otherwise, re-root the (possibly file-relative) path at aten/src,
          # torch/lib, or the repo root, keeping the first candidate that exists.
          for root in ["aten/src", "torch/lib", ""]:
              for bad_root in [os.path.dirname(fn), "aten/src/TH", "aten/src/THC", "torch/csrc"]:
                  new_p = os.path.relpath(os.path.join(bad_root, p), root)
                  if not new_p.startswith("../") and (os.path.exists(os.path.join(root, new_p)) or os.path.exists(os.path.join(root, new_p + ".in"))):
                      return fmt(new_p)
          print("ERROR: ", fn, p)
          return m.group(0)
      new_c = re.sub(r'#include "([^"]+)"', repl, c)
      if new_c != c:
          print(fn)
          with open(fn, 'w') as f:
              f.write(new_c)
```

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14849

Reviewed By: dzhulgakov

Differential Revision: D13363445

Pulled By: ezyang

fbshipit-source-id: 52361f878a672785f9306c9e9ab2513128092b68

5 years agorace condition fix of calling mutable_data inside a openmp region (#14921)
Jongsoo Park [Sun, 9 Dec 2018 02:15:00 +0000 (18:15 -0800)]
race condition fix of calling mutable_data inside a openmp region (#14921)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14921

Fix race condition introduced in D13188595.
Let's remind ourselves: "never call mutable_data from an OpenMP region!!!"

Reviewed By: jianyuh

Differential Revision: D13387692

fbshipit-source-id: 6a3aeedeeda55a9ede660de8f1f44d4eee76ae2b

5 years agoAdd crop argument, can crop rec as well, first resize and then crop
Fei Sun [Sat, 8 Dec 2018 19:12:40 +0000 (11:12 -0800)]
Add crop argument, can crop rec as well, first resize and then crop

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14894

Reviewed By: llyfacebook

Differential Revision: D13377604

Pulled By: sf-wind

fbshipit-source-id: 333d0d864e6c2dc85f405baa25ed58029d62750f

5 years agoSwitch Int8Sigmoid to QNNPACK (#14883)
Marat Dukhan [Sat, 8 Dec 2018 10:45:41 +0000 (02:45 -0800)]
Switch Int8Sigmoid to QNNPACK (#14883)

Summary:
50x-100x speedup compared to current version.
Also fixes a bug in the current version when the batch size exceeds 1 (the current version processes only the first image in that case).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14883

Differential Revision: D13390655

Pulled By: Maratyszcza

fbshipit-source-id: 1b33a97bf2d0866d38faa2b42e64fd2859017898

5 years agoONNX changes to use int32_t (instead of enum) to store data type
Your Name [Sat, 8 Dec 2018 09:04:02 +0000 (01:04 -0800)]
ONNX changes to use int32_t (instead of enum) to store data type

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14926

Reviewed By: houseroad

Differential Revision: D13390642

Pulled By: bddppq

fbshipit-source-id: c2314b24d9384f188fda2b9a5cc16465ad39581e

5 years agoRemove at references from c10
Sebastian Messmer [Sat, 8 Dec 2018 08:26:14 +0000 (00:26 -0800)]
Remove at references from c10

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14432

Reviewed By: dzhulgakov

Differential Revision: D13223904

fbshipit-source-id: 43b06e33e088e7789ccea6d92267936fe30d8571

5 years agoImplement `std` for multiple dimensions on CPU devices. (#14535)
Brennan Vincent [Sat, 8 Dec 2018 04:13:31 +0000 (20:13 -0800)]
Implement `std` for multiple dimensions on CPU devices. (#14535)

Summary:
Tested on a tensor with 1 billion elements and 3 dimensions on a powerful, highly
multi-core Linux machine.

parallelized: All operations (e.g., `t.std(1)`) that could be done in the old code are now several times faster. All
new operations (e.g., `t.std((0,2))`) are significantly faster than the NumPy equivalents.
`t.std((0, 1, 2))`, a new operation, is logically equivalent to the
old `t.std()`, but faster.

serial: The above comment about old operations now being faster still
holds, but `t.std((d1, ..., dn))` is now a few
times slower than `t.std()`. If this turns out to be important, we can
special-case that to use the old algorithm.

The approach is to create a new method, `TensorIterator::foreach_reduced_elt`,
valid for `TensorIterator`s that represent a dimension reduction. This
method calls a supplied function for each element in the output,
supplying it with the input elements that correspond to that output.

Given that primitive, we can implement reductions like the following pseudocode:

If there is more than one output element:
```
PARALLEL FOR EACH element IN output:
    accumulator = identity
    SERIAL FOR EACH data_point IN element.corresponding_input:
        accumulator.update(data_point)
    element = accumulator.to_output()
```

If there is only one output element, we still want to parallelize, so we
do so along the *input* instead:

```
accumulators[n_threads]
PARALLEL FOR EACH input_chunk IN input.chunks():
    accumulators[thread_num()] = identity
    SERIAL FOR EACH data_point IN input_chunk:
        accumulators[thread_num()].update_with_data(data_point)
accumulator = identity
SERIAL FOR EACH acc in accumulators:
    accumulator.update_with_other_accumulator(acc)
output_element = accumulator.to_output()
```

Note that accumulators and data points do not have to be the same type
in general, since it might be necessary to track arbitrary amounts of
data at intermediate stages.

For example, for `std`, we use a parallel version of Welford's
algorithm, which requires us to track the mean, second moment, and number
of elements, so the accumulator type for `std` contains three pieces of
data.
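
As a reference for that three-piece accumulator, here is a small Python sketch (not the C++ implementation) of the Welford update plus the merge step the single-output parallel path needs:

```python
class WelfordAcc:
    """Tracks count, mean, and second central moment (M2)."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        # standard single-pass Welford update
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def merge(self, other):
        # combine two partial accumulators (the per-thread partials above)
        n = self.n + other.n
        if n == 0:
            return
        delta = other.mean - self.mean
        self.mean += delta * other.n / n
        self.m2 += other.m2 + delta * delta * self.n * other.n / n
        self.n = n

    def std(self, unbiased=True):
        return (self.m2 / (self.n - 1 if unbiased else self.n)) ** 0.5
```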
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14535

Differential Revision: D13283887

Pulled By: umanwizard

fbshipit-source-id: 8586b7bf00bf9f663c55d6f8323301e257f5ec3f

5 years agoAdd CAFFE2_API to video processing functions (#14900)
Orion Reblitz-Richardson [Sat, 8 Dec 2018 03:48:38 +0000 (19:48 -0800)]
Add CAFFE2_API to video processing functions (#14900)

Summary:
Extracted from https://github.com/pytorch/pytorch/pull/13733

Some tests were failing because these methods didn't have an export.

cc pjh5 yf225
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14900

Reviewed By: pjh5

Differential Revision: D13381130

Pulled By: orionr

fbshipit-source-id: 030536f8fb09765c09a7b0bd45400161053f2e18

5 years agoEnable unit tests known to work on ROCm (#14011)
Johannes M Dieterich [Sat, 8 Dec 2018 02:55:21 +0000 (18:55 -0800)]
Enable unit tests known to work on ROCm (#14011)

Summary:
* Enable unit tests known to work on ROCm.
* Disable a few that are known to be flaky for the time being.
* Use std::abs for Half
* No more special casing for ROCm in TensorMathReduce
* Document an important detail for a hardcoded block size w.r.t. ROCm in TensorMathReduce

ezyang bddppq for awareness
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14011

Differential Revision: D13387679

Pulled By: bddppq

fbshipit-source-id: 4177f2a57b09d866ccbb82a24318f273e3292f71

5 years agoAutomatic update of fbcode/onnx to aca8473a40cf43f01958c81b648efcee7f3a755a (#14865)
Lu Fang [Sat, 8 Dec 2018 01:24:01 +0000 (17:24 -0800)]
Automatic update of fbcode/onnx to aca8473a40cf43f01958c81b648efcee7f3a755a (#14865)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14865

Previous import was 42804705bdbf179d1a98394008417e1392013547

Included changes:
- **[aca8473](https://github.com/onnx/onnx/commit/aca8473)**: Add Erf operator for computing error function (#1675) <bddppq>
- **[3fc82ca](https://github.com/onnx/onnx/commit/3fc82ca)**: Add IsNaN operator. (#1656) <Pranav Sharma>
- **[0685f01](https://github.com/onnx/onnx/commit/0685f01)**: Add Sign Op (#1658) <Rui Zhu>
- **[2a8fae8](https://github.com/onnx/onnx/commit/2a8fae8)**: Fix unused var warning (#1669) <Yinghai Lu>
- **[e212833](https://github.com/onnx/onnx/commit/e212833)**: Update scan (#1653) <G. Ramalingam>

Reviewed By: zrphercule

Differential Revision: D13370727

fbshipit-source-id: 13a93d5acc8d4758f682278ea162ec9124ced22d

5 years agoEnable fp16 for MIOPEN operators in Caffe2 (#14905)
rohithkrn [Sat, 8 Dec 2018 01:23:49 +0000 (17:23 -0800)]
Enable fp16 for MIOPEN operators in Caffe2 (#14905)

Summary:
This PR enables fp16 MIOPEN operators in Caffe2.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14905

Differential Revision: D13383439

Pulled By: bddppq

fbshipit-source-id: 840afa8d08bef2952ca0039dee2423f1542bb330

5 years agoUpgrade MKL-DNN to version 0.17 (#14308)
Gu, Jinghui [Sat, 8 Dec 2018 00:42:39 +0000 (16:42 -0800)]
Upgrade MKL-DNN to version 0.17 (#14308)

Summary:
upgrade MKL-DNN to version 0.17
update mkldnn bridge to latest.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14308

Differential Revision: D13383102

Pulled By: yinghai

fbshipit-source-id: c434f0e0ddff2ee2c86db2d6c44a37298fd005a3

5 years agoFix build with OpenCV 4.0 (#14356)
Daniel Bermond [Sat, 8 Dec 2018 00:37:30 +0000 (16:37 -0800)]
Fix build with OpenCV 4.0 (#14356)

Summary:
Fixes #14355
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14356

Differential Revision: D13356237

Pulled By: bddppq

fbshipit-source-id: 2bf6ee21995c2c7b617c4e78ea7341f975f1b937

5 years agoRemove unused TensorImpl dependencies
Sebastian Messmer [Sat, 8 Dec 2018 00:18:20 +0000 (16:18 -0800)]
Remove unused TensorImpl dependencies

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14792

Reviewed By: ezyang

Differential Revision: D13336843

fbshipit-source-id: 12f84799a70c2e90a8b934dd8dc031c09a6782f0

5 years agoRemove TensorImpl -> context_base dependency (#14658)
Sebastian Messmer [Sat, 8 Dec 2018 00:18:20 +0000 (16:18 -0800)]
Remove TensorImpl -> context_base dependency (#14658)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14658

Remove this dependency by moving at::CopyBytes to c10.
The implementations for at::CopyBytes will have to live in aten/caffe2 for now because they're not unified for CUDA yet.
They'll be moved into c10/backend/xxx later.

Reviewed By: dzhulgakov

Differential Revision: D13288655

fbshipit-source-id: 1c92379345308b3cd39a402779d7b7999613fc0d

5 years agoFix include paths for TensorOptions
Sebastian Messmer [Sat, 8 Dec 2018 00:18:19 +0000 (16:18 -0800)]
Fix include paths for TensorOptions

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14747

Reviewed By: ezyang

Differential Revision: D13318645

fbshipit-source-id: f5ba77a93f6019fbf5faffb47a2837c95fad474d

5 years agoUpdate graph printouts in JIT docs (#14914)
James Reed [Fri, 7 Dec 2018 23:06:48 +0000 (15:06 -0800)]
Update graph printouts in JIT docs (#14914)

Summary:
Tracing records variable names and we have new types and stuff in the IR, so this updates the graph printouts in the docs
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14914

Differential Revision: D13385101

Pulled By: jamesr66a

fbshipit-source-id: 6477e4861f1ac916329853763c83ea157be77f23

5 years agoImprove hub documentation (#14862)
Ailing Zhang [Fri, 7 Dec 2018 22:56:56 +0000 (14:56 -0800)]
Improve hub documentation (#14862)

Summary:
Added a few examples and explanations of how to publish/load models.
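
For example, consumers load a published model roughly like this (repo and entrypoint shown for illustration; the entrypoint must be declared in the repo's hubconf.py):

```python
import torch

# extra kwargs such as pretrained are forwarded to the entrypoint
model = torch.hub.load('pytorch/vision', 'resnet18', pretrained=True)
```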
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14862

Differential Revision: D13384790

Pulled By: ailzhang

fbshipit-source-id: 008166e84e59dcb62c0be38a87982579524fb20e

5 years agoUSE_FBGEMM=True by default
James Reed [Fri, 7 Dec 2018 22:14:25 +0000 (14:14 -0800)]
USE_FBGEMM=True by default

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14868

Differential Revision: D13383390

Pulled By: jamesr66a

fbshipit-source-id: 1880c07dfd239e19153bd4fde2ab2c8d0604f956

5 years agoUSE_TENSORRT support and TensorRT 5 compatibility
Sergei Nikolaev [Fri, 7 Dec 2018 21:52:56 +0000 (13:52 -0800)]
USE_TENSORRT support and TensorRT 5 compatibility

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/13945

Differential Revision: D13317525

Pulled By: yinghai

fbshipit-source-id: 8630dfec1bbc5aac19539e344e7c38a7fd8b051d

5 years agoAdd __init__.py so files get picked up on install (#14898)
Orion Reblitz-Richardson [Fri, 7 Dec 2018 21:35:58 +0000 (13:35 -0800)]
Add __init__.py so files get picked up on install (#14898)

Summary:
This will let us install tests and other Caffe2 python code as a part of running Caffe2 tests in PyTorch.

Broken out of https://github.com/pytorch/pytorch/pull/13733/

cc pjh5 yf225
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14898

Reviewed By: pjh5

Differential Revision: D13381123

Pulled By: orionr

fbshipit-source-id: 0ec96629b0570f6cc2abb1d1d6fce084e7464dbe

5 years agoReplace calls of Type::_th_tensor. (#14877)
Gregory Chanan [Fri, 7 Dec 2018 20:37:03 +0000 (12:37 -0800)]
Replace calls of Type::_th_tensor. (#14877)

Summary:
_th_tensor is moving off Type, so these calls need to be replaced.

Unfortunately, replacing these with a full-fledged solution [e.g. from_storage(..., TensorOptions)] is a bit complicated because the storage itself fully defines the Type (modulo variable).  It's simpler to just wait for the Variable/Tensor merge rather than to solve this now, so instead I changed the call sites to: at::empty({0}, type.options()).set_(storage...).

This isn't great because we are also trying to get rid of Type::options, but this seems to be the lesser of two evils.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14877

Differential Revision: D13374310

Pulled By: gchanan

fbshipit-source-id: eb953ed041507e6190d6f32e383912e5a08311cd

5 years agoLarge scale fix of python-related files in torch/csrc/
Peter Goldsborough [Fri, 7 Dec 2018 20:22:49 +0000 (12:22 -0800)]
Large scale fix of python-related files in torch/csrc/

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14515

Differential Revision: D13247966

Pulled By: goldsborough

fbshipit-source-id: 7a127c508fc576a7a92626dd6b729f660162d628

5 years agoImplementation of WeightedSum op for mkl-dnn and fix FC op output shape issue.
PenghuiCheng [Fri, 7 Dec 2018 20:01:44 +0000 (12:01 -0800)]
Implementation of WeightedSum op for mkl-dnn and fix FC op output shape issue.

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14407

Reviewed By: yinghai

Differential Revision: D13364364

Pulled By: wesolwsk

fbshipit-source-id: e69bcd1bc52e35b2f0e45e5dc40184f1bd66605d

5 years agoRevert D13205604: Move numa.{h, cc} to c10/util
Yudong Guang [Fri, 7 Dec 2018 17:58:54 +0000 (09:58 -0800)]
Revert D13205604: Move numa.{h, cc} to c10/util

Differential Revision:
D13205604

Original commit changeset: 54166492d318

fbshipit-source-id: 89b6833518c0b554668c88ae38d97fbc47e2de17

5 years agoExpose torch.roll function and method (#14880)
vishwakftw [Fri, 7 Dec 2018 15:25:55 +0000 (07:25 -0800)]
Expose torch.roll function and method (#14880)

Summary: Fixes #14859.
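
A quick usage sketch of the exposed function and method:

```python
import torch

x = torch.tensor([1, 2, 3, 4])
torch.roll(x, shifts=1)         # tensor([4, 1, 2, 3])
x.roll(1)                       # same, as a method
torch.roll(x.view(2, 2), 1, 0)  # roll along dim 0: tensor([[3, 4], [1, 2]])
```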

Differential Revision: D13376915

Pulled By: zou3519

fbshipit-source-id: f1fc0e8492a159431a3fc0a19a41aa10429ecc80

5 years agoMake autograd engine compatible with hip
Junjie Bai [Fri, 7 Dec 2018 08:07:05 +0000 (00:07 -0800)]
Make autograd engine compatible with hip

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14873

Differential Revision: D13375053

Pulled By: bddppq

fbshipit-source-id: f3051640386667bbf0566856ed433eb83276c39e

5 years agoFixed ConvT docstring (#14876)
Jon Crall [Fri, 7 Dec 2018 07:55:34 +0000 (23:55 -0800)]
Fixed ConvT docstring (#14876)

Summary:
Fixes #14099

I attempted to be as consistent as possible with the formatting, hence my equation reads d*(k - 1) instead of (k - 1)*d.

Also there is an unused variable on line 46: `n = self.in_channels`. I could fix that here too if that's not too out of scope.
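
For context, the docstring formula in question is the transposed-convolution output size; a quick sanity check (a sketch, not taken from the PR):

```python
import torch

i, k, s, p, d, op = 5, 3, 2, 1, 2, 1
expected = (i - 1) * s - 2 * p + d * (k - 1) + op + 1  # = 12
m = torch.nn.ConvTranspose2d(1, 1, k, stride=s, padding=p,
                             dilation=d, output_padding=op)
out = m(torch.zeros(1, 1, i, i))
assert out.shape[-1] == expected
```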
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14876

Differential Revision: D13374317

Pulled By: soumith

fbshipit-source-id: a9f110acafa58cdb4206956dbe3ab4738d48292d

5 years agoUpdating submodules
svcscm [Fri, 7 Dec 2018 06:50:09 +0000 (22:50 -0800)]
Updating submodules

Reviewed By: yns88

fbshipit-source-id: 7da015701f18f8a0b5a8092aae02a42ede7bfd44

5 years agoRemove weak module test expect files (#14871)
David Riazati [Fri, 7 Dec 2018 05:50:35 +0000 (21:50 -0800)]
Remove weak module test expect files (#14871)

Summary:
This PR removes some expect files that aren't really testing anything
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14871

Differential Revision: D13373762

Pulled By: driazati

fbshipit-source-id: e3537ee83df23b3b3b854f9b1253fd0cc8e9dd33

5 years agogradcheck (#14596)
Wei Yang [Fri, 7 Dec 2018 01:58:16 +0000 (17:58 -0800)]
gradcheck (#14596)

Summary:
- allow gradcheck to take a sparse tensor as input
- sparse output is not allowed yet in gradcheck
- add backward for `to_dense()` to get around sparse output
- call gradcheck in test_sparse, so that we can use `_gen_sparse()` and also easily cover coalesced / uncoalesced test cases
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14596

Differential Revision: D13271904

Pulled By: weiyangfb

fbshipit-source-id: 5317484104404fd38058884c86e987546011dd86

5 years agoSkipping two c10d tests only if there are multi-GPUs (#14860)
Teng Li [Fri, 7 Dec 2018 01:22:04 +0000 (17:22 -0800)]
Skipping two c10d tests only if there are multi-GPUs (#14860)

Summary:
Otherwise, these tests will fail, even though they are never meant to run on single-GPU machines.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14860

Differential Revision: D13369060

Pulled By: teng-li

fbshipit-source-id: 8a637a6d57335491ba8602cd09927700b2bbf8a0

5 years agoMove TensorOptions, DefaultTensorOptions to c10
Sebastian Messmer [Thu, 6 Dec 2018 23:52:15 +0000 (15:52 -0800)]
Move TensorOptions, DefaultTensorOptions to c10

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14746

Reviewed By: ezyang

Differential Revision: D13318644

fbshipit-source-id: b703d7dc67e75d9e9571c80d62a100c5fc4e84df

5 years agoSwitch Int8MaxPool operator to QNNPACK (#14832)
Marat Dukhan [Thu, 6 Dec 2018 23:12:35 +0000 (15:12 -0800)]
Switch Int8MaxPool operator to QNNPACK (#14832)

Summary:
1.6-2.4X speedup on ARM when compiled with gcc
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14832

Differential Revision: D13358160

Pulled By: Maratyszcza

fbshipit-source-id: 39e9791886fac62650bb53a9df341889f0bb5d49

5 years agocollect_env.py: get conda magma and mkl information (#14854)
Richard Zou [Thu, 6 Dec 2018 22:55:55 +0000 (14:55 -0800)]
collect_env.py: get conda magma and mkl information (#14854)

Summary:
Fixes #12371
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14854

Differential Revision: D13363635

Pulled By: zou3519

fbshipit-source-id: f8b5d05038bf5ce451399dfeed558ae298178128

5 years agoAdd LogSigmoid support in ONNX symbolic (#14830)
zrphercule [Thu, 6 Dec 2018 22:04:44 +0000 (14:04 -0800)]
Add LogSigmoid support in ONNX symbolic (#14830)

Summary:
Add LogSigmoid:

torch.LogSigmoid(x) = onnx.Log(onnx.Sigmoid(x))
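
The symbolic implementing this is essentially the following (a sketch of the usual torch/onnx pattern):

```python
def log_sigmoid(g, input):
    # export LogSigmoid(x) as Log(Sigmoid(x))
    return g.op("Log", g.op("Sigmoid", input))
```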
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14830

Differential Revision: D13353891

Pulled By: zrphercule

fbshipit-source-id: bf456170b9e6c4edad07b3333cd5797f8e0fa97f

5 years agoKill GPU memory logs in normal runs (#14838)
Ashwin Bharambe [Thu, 6 Dec 2018 21:44:33 +0000 (13:44 -0800)]
Kill GPU memory logs in normal runs (#14838)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14838

The GPU memory tracking logs are incredibly annoying and merely serve
to pollute output. I `VLOG(1)`ed them. Hopefully, this is non-controversial.

Reviewed By: kuttas

Differential Revision: D13343290

fbshipit-source-id: b3cae99346c97b66e97ea660061e15dc5c99b9fc

5 years agoStop inserting static casts in Hipify (#14853)
Junjie Bai [Thu, 6 Dec 2018 21:17:28 +0000 (13:17 -0800)]
Stop inserting static casts in Hipify (#14853)

Summary:
Latest hcc can now properly cast to correct type internally, so there is no need to insert static_cast in hipify scripts anymore.
However the hcc included in the latest ROCm release (1.9.2) doesn't have this fix, so leaving a flag to continue doing static_cast for those using the official ROCm releases.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14853

Differential Revision: D13363171

Pulled By: bddppq

fbshipit-source-id: a36476a8511222ff3c933d31788e8a0ffb04f5ca

5 years agoTensor construction codemod - 3/3 (#14835)
Jerry Zhang [Thu, 6 Dec 2018 19:16:07 +0000 (11:16 -0800)]
Tensor construction codemod - 3/3 (#14835)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14835

Codemod generated with clangr shard mode, 25 files per diff,
motivation: https://github.com/pytorch/pytorch/pull/12407

Reviewed By: bddppq

Differential Revision: D13335184

fbshipit-source-id: 26d8247e16b30bdff045530034af9b72c76d066f

5 years agoTensor construction codemod - 1/3 (#14828)
Jerry Zhang [Thu, 6 Dec 2018 19:14:48 +0000 (11:14 -0800)]
Tensor construction codemod - 1/3 (#14828)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14828

Codemod generated with clangr shard mode, 25 files per diff,
motivation: https://github.com/pytorch/pytorch/pull/12407

Reviewed By: bddppq

Differential Revision: D13335160

fbshipit-source-id: a3ae4c5a86bfbdaf2d5aa14e0eef57255e829fd4

5 years agoMove numa.{h, cc} to c10/util (#14393)
Jerry Zhang [Thu, 6 Dec 2018 18:56:14 +0000 (10:56 -0800)]
Move numa.{h, cc} to c10/util (#14393)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14393

att

Reviewed By: ezyang

Differential Revision: D13205604

fbshipit-source-id: 54166492d31827b0343ed070cc36a825dd86e2ed

5 years agoUpgrade CI to ROCm 1.9.2 (#14216)
Johannes M Dieterich [Thu, 6 Dec 2018 18:04:37 +0000 (10:04 -0800)]
Upgrade CI to ROCm 1.9.2 (#14216)

Summary:
Drop custom hcc/hip as the 1.9.2 release should contain the relevant patches therein.

Most notable feature in 1.9.2 is mixed precision support in rocBLAS and MIOpen. These features will be enabled by subsequent PRs.

bddppq ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14216

Differential Revision: D13354294

Pulled By: bddppq

fbshipit-source-id: 2541d4a196af21c9432c1aff7f6e65b572628028

5 years agoAllow linspace and logspace with steps=1 and start != end like numpy (#14748)
Jan Schlüter [Thu, 6 Dec 2018 17:29:01 +0000 (09:29 -0800)]
Allow linspace and logspace with steps=1 and start != end like numpy (#14748)

Summary:
`torch.linspace(0, 1, 1)` fails with `RuntimeError: invalid argument 3: invalid number of points at ../aten/src/TH/generic/THTensorMoreMath.cpp:2119`, while `np.linspace(0, 1, 1)` works fine.
Looking at the code, there is even a comment by gchanan asking: "NumPy allows you to pass different points even if n <= 1 -- should we?"
I would say "yes". Currently, I would need to handle the case of `steps == 1` or `steps == 0` separately, making sure to change the `end` when calling `torch.linspace`. This is impractical. If we support `steps == 1` with `start != end`, there are two possibilities for the result: either we ensure the first value in the resulting sequence always equals `start`, or we ensure the last value always equals `end`. NumPy chose the former, which also allows it to support a boolean `endpoint` flag. I'd say we should follow numpy.

This PR adapts `linspace` and `logspace` to mimic the behavior of numpy, adapts the tests accordingly, and extends the docstrings to make clear what happens when passing `steps=1`.

If you decide against this PR, the error message should become explicit about what I did wrong, and the documentation should be extended to mention this restriction.
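
With this change, the behavior matches NumPy (a quick sketch):

```python
import numpy as np
import torch

np.linspace(0, 1, 1)           # array([0.])  -- first value equals start
torch.linspace(0, 1, steps=1)  # tensor([0.]) -- now matches, no RuntimeError
torch.logspace(0, 3, steps=1)  # tensor([1.]) -- i.e. 10**start
```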
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14748

Differential Revision: D13356136

Pulled By: ezyang

fbshipit-source-id: db85b8f0a98a5e24b3acd766132ab71c91794a82

5 years ago(#14580)
Jie [Thu, 6 Dec 2018 16:57:39 +0000 (08:57 -0800)]
(#14580)

Summary:
Removes cast of half to float in torch.sum, with float16 input tensor and
float32 output tensor, instead we cast data when loading input in kernel.

This supposingly would save a kernel launch as well as a full global memory load
on promoted data type (float).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14580

Differential Revision: D13356203

Pulled By: ezyang

fbshipit-source-id: 85e91225b880a65fe3ceb493371b9b36407fdf48

5 years agoConsistent formatting in losses' docs
Ricardo Cuenca [Thu, 6 Dec 2018 16:57:31 +0000 (08:57 -0800)]
Consistent formatting in losses' docs

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14739

Differential Revision: D13356143

Pulled By: ezyang

fbshipit-source-id: 9ae8316dd8ba6e910247b64cec22db63df10e11c

5 years agoAdd (partial) autodiff support for nll_loss (#14305)
Alex Şuhan [Thu, 6 Dec 2018 16:56:25 +0000 (08:56 -0800)]
Add (partial) autodiff support for nll_loss (#14305)

Summary:
Not ready yet, need some comments / help with this. It's good enough for the immediate goals of https://github.com/pytorch/xla (forward + backward trace fusion), but there are at least two issues with it:

1. If we don't allow the weight to be set, `test/test_jit.py` fails to cover the change.
2. If we allow the weight to be set, running `test/test_jit.py TestJitGenerated.test_nn_nll_loss` fails with:

```
======================================================================
ERROR: test_nn_nll_loss (__main__.TestJitGenerated)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test/test_jit.py", line 10001, in do_test
    fn, f_args_variable, kwargs_variable, no_grad=no_grad)
  File "test/test_jit.py", line 9360, in check_against_reference
    outputs_test = self.runAndSaveRNG(func, recording_inputs, kwargs)
  File "test/test_jit.py", line 425, in runAndSaveRNG
    results = func(*inputs, **kwargs)
  File "test/test_jit.py", line 9298, in script_fn
    self.assertExportImport(CU.the_method.graph, tensors)
  File "test/test_jit.py", line 415, in assertExportImport
    self.assertExportImportModule(m, inputs)
  File "test/test_jit.py", line 419, in assertExportImportModule
    self.assertEqual(self.runAndSaveRNG(m.forward, inputs),
  File "test/test_jit.py", line 425, in runAndSaveRNG
    results = func(*inputs, **kwargs)
RuntimeError:
arguments for call are not valid:

  for operator aten::nll_loss_backward(Tensor grad_output, Tensor self, Tensor target, Tensor? weight, int reduction, int ignore_index, Tensor total_weight, *, Tensor out) -> Tensor:
  expected a value of type Tensor for argument 'total_weight' but found bool
  <internally-created-node>
  ~ <--- HERE

  for operator aten::nll_loss_backward(Tensor grad_output, Tensor self, Tensor target, Tensor? weight, int reduction, int ignore_index, Tensor total_weight) -> Tensor:
  expected a value of type Tensor for argument 'total_weight' but found bool
  <internally-created-node>
  ~ <--- HERE
for call at:
<internally-created-node>
~ <--- HERE
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14305

Differential Revision: D13356265

Pulled By: ezyang

fbshipit-source-id: 504d783b2d87f923e698a6a4efc0fd9935a94a41

5 years agoUpdating submodules
svcscm [Thu, 6 Dec 2018 11:18:17 +0000 (03:18 -0800)]
Updating submodules

Reviewed By: yns88

fbshipit-source-id: 2adbb6f97d4b8f067a2538fec855063510b0ca3f

5 years agoUpdating submodules
svcscm [Thu, 6 Dec 2018 10:53:28 +0000 (02:53 -0800)]
Updating submodules

Reviewed By: yns88

fbshipit-source-id: e0509413215f3b7578b825c52365fec4da625bd5

5 years agoFixed MIOpen RNN Segfault issue and enabled RNN test (#14810)
lcskrishna [Thu, 6 Dec 2018 07:52:42 +0000 (23:52 -0800)]
Fixed MIOpen RNN Segfault issue and enabled RNN test (#14810)

Summary:
This pull request contains changes for:
1. Added the MIOpen RNN APIs miopenGetRNNLayerBiasSize and miopenGetRNNLayerParamSize.
2. Fixed usage of the API miopenGetRNNLayerParam.
3. Modified the RNN test to run using the MIOpen engine.

Differential Revision: D13355699

Pulled By: bddppq

fbshipit-source-id: 6f750657f8049c5446eca893880b397804120b69

5 years agoExport complete subgraph io info when calling onnxGetBackendCompatibility (#14827)
Yinghai Lu [Thu, 6 Dec 2018 07:50:12 +0000 (23:50 -0800)]
Export complete subgraph io info when calling onnxGetBackendCompatibility (#14827)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14827

We need to send complete IO info when doing `onnxGetBackendCompatibility` to a backend like Glow. Previously we were missing some info because sometimes we generate more than one node from one C2 op. This fixes the issue.

Reviewed By: jackm321

Differential Revision: D13352049

fbshipit-source-id: 8d8ac70656a0ac42f3a0ccecad61456a4f3b2435

5 years agoFix clip gradient with empty input (#14709)
Huan Gui [Thu, 6 Dec 2018 06:51:23 +0000 (22:51 -0800)]
Fix clip gradient with empty input (#14709)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14709

As titled

Reviewed By: Wakeupbuddy

Differential Revision: D13305554

fbshipit-source-id: 380062d4b0e4f9dc0207a27766cac7b8d05384d5

5 years agoRemove protobuf dependency in pytorch cmake file. (#14182)
JerryShih [Thu, 6 Dec 2018 06:47:54 +0000 (22:47 -0800)]
Remove protobuf dependency in pytorch cmake file. (#14182)

Summary:
Currently, pytorch doesn't depend on protobuf, so we don't need to include the protobuf dir in the pytorch cmake file.
And if we build caffe2 without custom protobuf[1], we will hit a protobuf mismatch problem.

[1]
https://github.com/pytorch/pytorch/blob/92dbd0219f6fbdb1db105386386ccf92c0758e86/CMakeLists.txt#L65
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14182

Differential Revision: D13356273

Pulled By: ezyang

fbshipit-source-id: 8120c3452d158dc51d70156433d7b9076c6aed47

5 years agoOptimize images (#14084)
Xiang Gao [Thu, 6 Dec 2018 06:44:27 +0000 (22:44 -0800)]
Optimize images (#14084)

Summary:
This is a PR that [ImgBot](https://imgbot.net/) opened on my fork https://github.com/zasdfgbnm/pytorch/pull/1, I forward it here.  ImgBot does lossless compression on images to reduce file size.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14084

Differential Revision: D13356293

Pulled By: ezyang

fbshipit-source-id: 731236d95ad870db8ccb99b03ed306704365242c

5 years agoPrevent `profile_observer_test` from being run by CPU test (#14168)
Aldian Fazrihady [Thu, 6 Dec 2018 06:31:39 +0000 (22:31 -0800)]
Prevent `profile_observer_test` from being run by CPU test (#14168)

Summary:
Fix CMakeLists.txt so the CPU test won't run profile_observer_test.cc, as it currently only supports GPU.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14168

Differential Revision: D13356274

Pulled By: ezyang

fbshipit-source-id: 7d105f2e18675e5fab129864958148b0f18d582c

5 years agoCAFFE2_INCLUDE_DIRS points to invalid path (#14306)
Achal Shah [Thu, 6 Dec 2018 06:30:07 +0000 (22:30 -0800)]
CAFFE2_INCLUDE_DIRS points to invalid path (#14306)

Summary:
I know that including CAFFE2_INCLUDE_DIRS in include headers is not necessary for newer cmakes. But I had this in one of my old projects and **cmake gave me an error that "/usr/lib/include" is an invalid path**.

It seems like "${_INSTALL_PREFIX}/lib/include" should be changed to "${_INSTALL_PREFIX}/include", as all caffe2 headers are in /include rather than /lib/include/.

Please correct me if I am wrong?
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14306

Differential Revision: D13356246

Pulled By: ezyang

fbshipit-source-id: e2d5d3c42352e59b245714ad90fd7a9ef48170d7