platform/upstream/pytorch.git
Fix some typos in distributed.py.
Elliot Waite [Wed, 13 Mar 2019 16:18:34 +0000 (09:18 -0700)]
Fix some typos in distributed.py.

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17959

Differential Revision: D14437347

Pulled By: soumith

fbshipit-source-id: 4c33571f56e9da687666516a310f91924cddd4d9

Fix Windows test CI
peter [Wed, 13 Mar 2019 16:07:57 +0000 (09:07 -0700)]
Fix Windows test CI

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17954

Differential Revision: D14437473

Pulled By: soumith

fbshipit-source-id: f0d79ff0c5d735f822be3f42bbca91c1928dacaf

Fix lint in test_utils.py (#17944)
Edward Yang [Wed, 13 Mar 2019 15:35:26 +0000 (08:35 -0700)]
Fix lint in test_utils.py (#17944)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17944
ghimport-source-id: 5b45086428b5a36e737882c78f285141121fd1bc

Stack:
* **#17944 Fix lint in test_utils.py**

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision: D14430132

fbshipit-source-id: b00de7b4c685645ad5a4dc8c5fe6ce7e1893a3eb

Speed up gemm by reordering the for loops (#17730)
Guanheng Zhang [Wed, 13 Mar 2019 15:22:56 +0000 (08:22 -0700)]
Speed up gemm by reordering the for loops (#17730)

Summary:
Optimize the order of the "for" loops.

Note: For "transa = true" cases, the order of the "for" loops has been optimzied in the original code. Therefore, no significant improvement is observed in those case (i.e. "transa && transb" and "transa && !transb")

mode/opt (i.e. static library)
//////////////////////////////////////////////////////////////////////////////
transa && transb
after:
loops:  2229     x:     128      y:     128      z:     128      time:  2243ns      =>  acceleration multiplier:  0.90
loops:  124      x:     128      y:     1024     z:     128      time:  40381ns      =>  acceleration multiplier:  0.97
loops:  121      x:     1024     y:     128      z:     128      time:  41651ns      =>  acceleration multiplier:  0.96
loops:  15       x:     1024     y:     1024     z:     128      time:  333771ns       =>  acceleration multiplier:  0.98
loops:  4610     x:     128      y:     128      z:     64       time:  1084ns       =>  acceleration multiplier:  0.95
loops:  252      x:     128      y:     1024     z:     64       time:  19860ns      =>  acceleration multiplier:  0.98
loops:  248      x:     1024     y:     128      z:     64       time:  20232ns      =>  acceleration multiplier:  0.98
loops:  30       x:     1024     y:     1024     z:     64       time:  167338ns      =>  acceleration multiplier:  0.99

before:
loops:  2468     x:     128      y:     128      z:     128      time:  2026ns
loops:  128      x:     128      y:     1024     z:     128      time:  39338ns
loops:  126      x:     1024     y:     128      z:     128      time:  39930ns
loops:  16       x:     1024     y:     1024     z:     128      time:  327549ns
loops:  4840     x:     128      y:     128      z:     64       time:  1033ns
loops:  258      x:     128      y:     1024     z:     64       time:  19441ns
loops:  252      x:     1024     y:     128      z:     64       time:  19854ns
loops:  31       x:     1024     y:     1024     z:     64       time:  166254ns

//////////////////////////////////////////////////////////////////////////////
transa && !transb
after:
loops:  4880     x:     128      y:     128      z:     128      time:  1024ns      =>  acceleration multiplier:  0.98
loops:  638      x:     128      y:     1024     z:     128      time:  7839ns      =>  acceleration multiplier:  1.04
loops:  605      x:     1024     y:     128      z:     128      time:  8276ns      =>  acceleration multiplier:  1.01
loops:  77       x:     1024     y:     1024     z:     128      time:  65713ns      =>  acceleration multiplier:  1.00
loops:  9935     x:     128      y:     128      z:     64       time:  503ns      =>  acceleration multiplier:  1.00
loops:  1252     x:     128      y:     1024     z:     64       time:  3994ns      =>  acceleration multiplier:  1.00
loops:  1183     x:     1024     y:     128      z:     64       time:  4226ns      =>  acceleration multiplier:  0.98
loops:  153      x:     1024     y:     1024     z:     64       time:  32766ns      =>  acceleration multiplier:  0.99

before:
loops:  4985     x:     128      y:     128      z:     128      time:  1003ns
loops:  615      x:     128      y:     1024     z:     128      time:  8140ns
loops:  599      x:     1024     y:     128      z:     128      time:  8357ns
loops:  76       x:     1024     y:     1024     z:     128      time:  65934ns
loops:  9897     x:     128      y:     128      z:     64       time:  505ns
loops:  1248     x:     128      y:     1024     z:     64       time:  4008ns
loops:  1203     x:     1024     y:     128      z:     64       time:  4159ns
loops:  154      x:     1024     y:     1024     z:     64       time:  32499ns

//////////////////////////////////////////////////////////////////////////////
!transa && transb
after:
loops:  3919     x:     128      y:     128      z:     128      time:  1276ns      =>  acceleration multiplier:  2.97
loops:  497      x:     128      y:     1024     z:     128      time:  10069ns      =>  acceleration multiplier:  7.85
loops:  449      x:     1024     y:     128      z:     128      time:  11145ns      =>  acceleration multiplier:  4.77
loops:  57       x:     1024     y:     1024     z:     128      time:  88595ns      =>  acceleration multiplier:  7.12
loops:  7575     x:     128      y:     128      z:     64       time:  660ns      =>  acceleration multiplier:  3.00
loops:  967      x:     128      y:     1024     z:     64       time:  5173ns      =>  acceleration multiplier:  7.66
loops:  877      x:     1024     y:     128      z:     64       time:  5702ns      =>  acceleration multiplier:  4.76
loops:  111      x:     1024     y:     1024     z:     64       time:  45232ns      =>  acceleration multiplier:  7.03

before:
loops:  1320     x:     128      y:     128      z:     128      time:  3789ns
loops:  64       x:     128      y:     1024     z:     128      time:  79061ns
loops:  95       x:     1024     y:     128      z:     128      time:  53107ns
loops:  8        x:     1024     y:     1024     z:     128      time:  631161ns
loops:  2521     x:     128      y:     128      z:     64       time:  1983ns
loops:  127      x:     128      y:     1024     z:     64       time:  39604ns
loops:  185      x:     1024     y:     128      z:     64       time:  27128ns
loops:  16       x:     1024     y:     1024     z:     64       time:  318155ns

//////////////////////////////////////////////////////////////////////////////
!transa && !transb
after:
loops:  3895     x:     128      y:     128      z:     128      time:  1283ns      =>  acceleration multiplier:  1.73
loops:  393      x:     128      y:     1024     z:     128      time:  12746ns      =>  acceleration multiplier:  3.36
loops:  411      x:     1024     y:     128      z:     128      time:  12170ns      =>  acceleration multiplier:  1.93
loops:  46       x:     1024     y:     1024     z:     128      time:  110116ns      =>  acceleration multiplier:  3.17
loops:  7404     x:     128      y:     128      z:     64       time:  675ns      =>  acceleration multiplier:  1.58
loops:  636      x:     128      y:     1024     z:     64       time:  7872ns      =>  acceleration multiplier:  2.70
loops:  724      x:     1024     y:     128      z:     64       time:  6911ns      =>  acceleration multiplier:  1.32
loops:  73       x:     1024     y:     1024     z:     64       time:  68502ns      =>  acceleration multiplier:  2.49

before:
loops:  2253     x:     128      y:     128      z:     128      time:  2219ns
loops:  117      x:     128      y:     1024     z:     128      time:  42788ns
loops:  214      x:     1024     y:     128      z:     128      time:  23465ns
loops:  15       x:     1024     y:     1024     z:     128      time:  349076ns
loops:  4694     x:     128      y:     128      z:     64       time:  1065ns
loops:  236      x:     128      y:     1024     z:     64       time:  21251ns
loops:  549      x:     1024     y:     128      z:     64       time:  9108ns
loops:  30       x:     1024     y:     1024     z:     64       time:  170799ns
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17730

Differential Revision: D14325149

Pulled By: zhangguanheng66

fbshipit-source-id: a7a5a83890fdf99fee6eb87a3a5060b7b6bd862f

fix punctuation
livc [Wed, 13 Mar 2019 15:06:56 +0000 (08:06 -0700)]
fix punctuation

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17973

Differential Revision: D14438725

Pulled By: zou3519

fbshipit-source-id: 30a5485b508b4ae028057e0b66a8abb2b163d66b

fixes for AVX detection (#17915)
Thomas Viehmann [Wed, 13 Mar 2019 10:44:16 +0000 (03:44 -0700)]
fixes for AVX detection (#17915)

Summary:
Our AVX2 routines use functions such as _mm256_extract_epi64
that do not exist on 32 bit systems even when they have AVX2.
This disables AVX2 when _mm256_extract_epi64 does not exist.

This fixes the "local" part of #17901 (except disabling FBGEMM),
but there also is sleef to be updated and NNPACK to be fixed,
see the bug report for further discussion.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17915

Differential Revision: D14437338

Pulled By: soumith

fbshipit-source-id: d4ef7e0801b5d1222a855a38ec207dd88b4680da

Disable FBGEMM when building under x86 32bit (#17922)
Thomas Viehmann [Wed, 13 Mar 2019 10:43:58 +0000 (03:43 -0700)]
Disable FBGEMM when building under x86 32bit (#17922)

Summary:
FBGEMM doesn't work on x86 32-bit, and prior to this patch it would
generate x86_64 objects in a build that is supposed to be x86 32-bit.
FBGEMM relies on registers not available on x86_32, so
we disable it.

This takes care of one element of #17901. There are more dependencies
and a separate PR (#17915) regarding AVX detection for the code in the
main repository.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17922

Differential Revision: D14437340

Pulled By: soumith

fbshipit-source-id: bd9fc98cf607d9b0bc28127fbbc8b04fa10eecbe

Update docs for `mark_non_differentiable` method (#17891)
serhii-havrylov [Wed, 13 Mar 2019 10:16:40 +0000 (03:16 -0700)]
Update docs for `mark_non_differentiable` method (#17891)

Summary:
The current documentation doesn't reflect the real values of tensors during the backward pass.
This issue is mentioned in https://github.com/pytorch/pytorch/issues/12631
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17891

Differential Revision: D14419949

Pulled By: soumith

fbshipit-source-id: 8b495628c3f017bc880f8096682cd176a53974e5

Simplify OpKernel (#17925)
Sebastian Messmer [Wed, 13 Mar 2019 08:20:57 +0000 (01:20 -0700)]
Simplify OpKernel (#17925)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17925

There's no need for OpKernel to keep the cache creator around if we initialize the cache on construction.

This basically means that kernel caches are now constructed when the kernel is looked up from the dispatcher, rather than being delayed to the first call. This makes calling cheaper because the kernel call no longer has to check whether the cache is already initialized.

Also, this improves thread-safety. Now, OpKernel is thread-safe if the kernel is thread-safe.
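
A rough sketch of the idea in Python (the real OpKernel is C++; the class and argument names below are made up):

```python
# Hypothetical sketch, not the actual C++ API.
class LazyOpKernel:
    def __init__(self, kernel, cache_creator):
        self.kernel = kernel
        self.cache_creator = cache_creator   # must be kept around
        self.cache = None

    def __call__(self, *args):
        if self.cache is None:               # checked on every call; racy
            self.cache = self.cache_creator()
        return self.kernel(self.cache, *args)

class EagerOpKernel:
    def __init__(self, kernel, cache_creator):
        self.kernel = kernel
        self.cache = cache_creator()         # built at dispatcher lookup time

    def __call__(self, *args):               # no init branch: cheaper, thread-safe
        return self.kernel(self.cache, *args)
```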

Reviewed By: ezyang

Differential Revision: D14424907

fbshipit-source-id: a0d09a3a560dfe78aab53d558c9ebb91b57722df

Mark DispatchTable move ctor and move assignment operator as deleted (#17948)
Junjie Bai [Wed, 13 Mar 2019 08:01:13 +0000 (01:01 -0700)]
Mark DispatchTable move ctor and move assignment operator as deleted (#17948)

Summary:
```
21:39:50 /var/lib/jenkins/workspace/aten/src/ATen/core/dispatch/DispatchTable.h:125:3: warning: explicitly defaulted move constructor is implicitly deleted [-Wdefaulted-function-deleted]
21:39:50   DispatchTable(DispatchTable&&) = default;
21:39:50   ^
21:39:50 /var/lib/jenkins/workspace/aten/src/ATen/core/dispatch/DispatchTable.h:212:36: note: move constructor of 'DispatchTable' is implicitly deleted because field 'kernels_' has a deleted move constructor
21:39:50   detail::ThreadsafeOperatorTable_ kernels_;
21:39:50                                    ^
21:39:50 /var/lib/jenkins/workspace/aten/src/ATen/core/dispatch/DispatchTable.h:105:68: note: copy constructor of 'ThreadsafeOperatorTable_' is implicitly deleted because field 'map_' has a deleted copy constructor
21:39:50    LeftRight<ska::flat_hash_map<TensorTypeId, DispatchTableEntry>> map_;
21:39:50                                                                    ^
21:39:50 /var/lib/jenkins/workspace/c10/util/LeftRight.h:152:16: note: copy constructor of 'LeftRight<ska::flat_hash_map<c10::TensorTypeId, c10::DispatchTableEntry, std::hash<c10::TensorTypeId>, std::equal_to<c10::TensorTypeId>, std::allocator<std::pair<c10::TensorTypeId, c10::DispatchTableEntry> > > >' is implicitly deleted because field '_writeMutex' has a deleted copy constructor
21:39:50     std::mutex _writeMutex;
21:39:50                ^
21:39:50 /usr/lib/gcc/x86_64-linux-gnu/5.4.0/../../../../include/c++/5.4.0/mutex:129:5: note: 'mutex' has been explicitly marked deleted here
21:39:50     mutex(const mutex&) = delete;
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17948

Reviewed By: ezyang

Differential Revision: D14431344

Pulled By: bddppq

fbshipit-source-id: b1c6593b73cb467a58b09a3470b8899b82564d5e

Add more hint in the JIT tracer (#17957)
Lu Fang [Wed, 13 Mar 2019 07:49:07 +0000 (00:49 -0700)]
Add more hint in the JIT tracer (#17957)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17957

So developers know what action should be taken when a model contains a nondeterministic node

Reviewed By: dzhulgakov

Differential Revision: D14435923

fbshipit-source-id: 12d930185852f78c54efc8e90c51aa7c7c7faab5

Fix half-float conversion ops to handle tensors larger than 2B of params (#17952)
Andrey Malevich [Wed, 13 Mar 2019 05:57:44 +0000 (22:57 -0700)]
Fix half-float conversion ops to handle tensors larger than 2B of params (#17952)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17952

As desc.

Reviewed By: hyuen

Differential Revision: D14435092

fbshipit-source-id: dc614ba16ad531101d04d01aec8f1fbd534ebec5

Override the resolve_library_path in FBCode (#17497)
Lu Fang [Wed, 13 Mar 2019 05:06:25 +0000 (22:06 -0700)]
Override the resolve_library_path in FBCode (#17497)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17497

The following problems have been addressed: 1) import torch.ops correctly, 2) make realpath call optional

Reviewed By: dzhulgakov

Differential Revision: D14094358

fbshipit-source-id: 2f9a6fca656867287a7c82c465a4554384ff7323

update ccache guide (#17938)
Karl Ostmo [Wed, 13 Mar 2019 04:40:13 +0000 (21:40 -0700)]
update ccache guide (#17938)

Summary:
closes #17937
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17938

Differential Revision: D14435791

Pulled By: kostmo

fbshipit-source-id: b1d0db8902f78bde51150606e2a67fb9ddfe7812

unify cpp tests (#17947)
Michael Suo [Wed, 13 Mar 2019 04:31:59 +0000 (21:31 -0700)]
unify cpp tests (#17947)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17947

Instead of having a gtest and a no-gtest file that you have to remember to register tests in, add a single registration point and use some macro magic to make it work for both gtest and non-gtest builds

Reviewed By: eellison

Differential Revision: D14431302

fbshipit-source-id: e1abac135992577a943eaa7abcc81a6ed31fa6e5

Updating submodules
svcscm [Wed, 13 Mar 2019 03:28:59 +0000 (20:28 -0700)]
Updating submodules

Reviewed By: zpao

fbshipit-source-id: 7d454d0f58898741f293b356dfc10d7fc31fd55c

Remove sinkMaxPool transformation (#17694)
Duc Ngo [Wed, 13 Mar 2019 03:05:36 +0000 (20:05 -0700)]
Remove sinkMaxPool transformation (#17694)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17694

Remove sinkMaxPool transformation as it's unused

Differential Revision: D14328624

fbshipit-source-id: bd245403b756157120faa61a0f9253c15120e7a8

Fix Windows build (#17917)
Alexey Kozhevnikov [Wed, 13 Mar 2019 02:48:11 +0000 (19:48 -0700)]
Fix Windows build (#17917)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17917

D14375995 introduced instantiation of the following templates with `bool` type (more specifically `To` is `int64_t`, `From` is `bool`):
```
template <typename To, typename From>
typename std::enable_if<std::is_integral<From>::value, bool>::type overflows(
    From f) {
  using limit = std::numeric_limits<typename scalar_value_type<To>::type>;
  if (!limit::is_signed && std::numeric_limits<From>::is_signed) {
    // allow for negative numbers to wrap using two's complement arithmetic.
    // For example, with uint8, this allows for `a - b` to be treated as
    // `a + 255 * b`.
    return f > limit::max() ||
        (f < 0 && -static_cast<uint64_t>(f) > limit::max());
  } else {
    return f < limit::lowest() || f > limit::max();
  }
}

template <typename To, typename From>
typename std::enable_if<std::is_floating_point<From>::value, bool>::type
overflows(From f) {
  using limit = std::numeric_limits<typename scalar_value_type<To>::type>;
  if (limit::has_infinity && std::isinf(static_cast<double>(f))) {
    return false;
  }
  if (!limit::has_quiet_NaN && (f != f)) {
    return true;
  }
  return f < limit::lowest() || f > limit::max();
}
```
MSVC gives a C4804 warning, and because "treat warnings as errors" is on, the build fails on Windows. This disables that warning for those two templates.

Reviewed By: mingzhe09088

Differential Revision: D14421157

fbshipit-source-id: e72ba34406628c84da48518b32a46f851819bad1

fix overly restrictive assertion (#17939)
Jongsoo Park [Wed, 13 Mar 2019 01:14:50 +0000 (18:14 -0700)]
fix overly restrictive assertion (#17939)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17939

Instead of just asserting min <= 0 and max >= 0, we adjust the histogram to include 0 in the range.
We need to include 0 in the range during norm-error minimization to correctly represent our quantization method, which includes 0.
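
A minimal sketch of the adjustment, assuming a simple (lo, hi) range representation (hypothetical helper, not the Caffe2 API):

```python
def adjust_range_to_include_zero(lo, hi):
    # Hypothetical helper: widen the range instead of asserting lo <= 0 <= hi,
    # so that 0 is always representable by the quantization grid.
    return min(lo, 0.0), max(hi, 0.0)

print(adjust_range_to_include_zero(0.5, 3.2))    # (0.0, 3.2)
print(adjust_range_to_include_zero(-4.0, -1.0))  # (-4.0, 0.0)
```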

Reviewed By: csummersea

Differential Revision: D14428732

fbshipit-source-id: 6669a9d2c7d409ec3b31aee0afe48071986b9b71

Enable threadpool threads to greedily acquire new tasks if available. (#17808)
Owen Anderson [Wed, 13 Mar 2019 01:00:23 +0000 (18:00 -0700)]
Enable threadpool threads to greedily acquire new tasks if available. (#17808)

Summary:
This improves locality and affinity by keeping work on the same
threads preferentially to starting work on new ones, and reduces
contention on the threadpool lock more generally.
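
The scheduling idea, sketched in Python (the actual change is in the C++ threadpool; this is only an analogy):

```python
import queue

def worker_loop(tasks):
    # Hypothetical worker: after finishing a task, greedily keep running
    # ready tasks on this same thread instead of sleeping and letting
    # another thread wake up for them.
    while True:
        task = tasks.get()          # block only when no work is available
        if task is None:            # sentinel: shut down
            return
        task()
        while True:                 # greedy phase: drain already-queued work
            try:
                task = tasks.get_nowait()
            except queue.Empty:
                break
            if task is None:
                return
            task()
```
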
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17808

Differential Revision: D14391282

Pulled By: resistor

fbshipit-source-id: 3aec81656a50460a725aa4187c61864295d4f46e

JIT IR - Add option to remove prefix string when converting from JIT IR to NetDef (#17931)
Duc Ngo [Tue, 12 Mar 2019 23:54:47 +0000 (16:54 -0700)]
JIT IR - Add option to remove prefix string when converting from JIT IR to NetDef (#17931)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17931

When converting from NetDef to IR and back, the prefix string should be removed so the operator types are preserved in caffe2.

Reviewed By: ZolotukhinM

Differential Revision: D14425954

fbshipit-source-id: 2807e7337b0f804f126970768b1250a4a8c5f35c

Misleading documentation for module._load_from_state_dict (#17618)
Kai Zhang [Tue, 12 Mar 2019 23:52:38 +0000 (16:52 -0700)]
Misleading documentation for module._load_from_state_dict (#17618)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17618

Based on the code, we only add keys to `missing_keys` and `unexpected_keys` if `$strict` is `True`. The documentation is confusing.

This diff also fixes one FLAKE8 warning.

Reviewed By: ailzhang

Differential Revision: D14280593

fbshipit-source-id: d368f5596bdf74ff62ee4d28d79120f5af91e0a3

Enable detectron on AMD GPU
Sandeep Kumar [Tue, 12 Mar 2019 23:22:41 +0000 (16:22 -0700)]
Enable detectron on AMD GPU

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17862

Differential Revision: D14429234

Pulled By: bddppq

fbshipit-source-id: 5cb8750bd9db0ff8a179977d2bfbb180265cce81

Removed dead code from THTensorMath.h (#17873)
Iurii Zdebskyi [Tue, 12 Mar 2019 20:49:40 +0000 (13:49 -0700)]
Removed dead code from THTensorMath.h (#17873)

Summary:
This PR removes dead code from THTensorMath.h.
I found these unused methods while working on a PR where I plan to move the fill and zero methods from TH/THC to ATen.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17873

Differential Revision: D14407013

Pulled By: izdeby

fbshipit-source-id: a3551c5d91e7b380931a8b3bd4b3ae972d16911d

Fix lint in test_torch.py (#17807)
Edward Yang [Tue, 12 Mar 2019 20:34:40 +0000 (13:34 -0700)]
Fix lint in test_torch.py (#17807)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17807

Lint also detected a bug in test_linspace where we weren't
actually testing the CUDA case.

Differential Revision: D14388241

fbshipit-source-id: e219e46400f4952c6b384bca3baa0724ef94acde

Updating submodules
svcscm [Tue, 12 Mar 2019 20:22:58 +0000 (13:22 -0700)]
Updating submodules

Reviewed By: zpao

fbshipit-source-id: 06c0f738c791cccf79025d15f1fc2076bf34fcd1

Eliminate the use of Type. (#17804)
jainkunal3004 [Tue, 12 Mar 2019 19:50:29 +0000 (12:50 -0700)]
Eliminate the use of Type. (#17804)

Summary:
Stack:
* **#17804 Eliminate the use of Type.** ([D14382165](https://our.intern.facebook.com/intern/diff/D14382165/))

at::CPU produces a Type object which is then cast into TensorOptions; instead, use TensorOptions directly.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17804

Differential Revision: D14407851

Pulled By: ezyang

fbshipit-source-id: 6462d698305b7c24382c1bfd440d3227bd28d9e4

Make Variable::set_data non-const; cosmetic fixes.
Dan Povey [Tue, 12 Mar 2019 19:37:58 +0000 (12:37 -0700)]
Make Variable::set_data non-const; cosmetic fixes.

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17761

Differential Revision: D14406603

Pulled By: ezyang

fbshipit-source-id: bc8bba73352eb4b3e21196b36522e9cec70f6676

remove warning for upsample code (#17921)
Ailing Zhang [Tue, 12 Mar 2019 19:01:49 +0000 (12:01 -0700)]
remove warning for upsample code (#17921)

Summary:
IIRC we decided to remove the warning from the code in #11568. This got accidentally reverted in #14123.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17921

Differential Revision: D14422811

Pulled By: ailzhang

fbshipit-source-id: 7067264bd1d3e3b7861d29e18ade2969ed705ca1

Optimize TileOp (#17290)
Xiaomeng Yang [Tue, 12 Mar 2019 18:54:29 +0000 (11:54 -0700)]
Optimize TileOp (#17290)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17290

Optimize TileOp

Reviewed By: wesolwsk

Differential Revision: D14145844

fbshipit-source-id: 1571fa0512218dbc48080592ede4e23903be85dd

Optimize channel_stats_op (#16243)
Xiaomeng Yang [Tue, 12 Mar 2019 18:52:01 +0000 (11:52 -0700)]
Optimize channel_stats_op (#16243)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16243

Optimize channel_stats_op and add NHWC impl

Reviewed By: takatosp1

Differential Revision: D13775515

fbshipit-source-id: decb889e646f5316d4afefdf9f9b6bc6343613cd

enable shape inference for elementwise operators (#17885)
Hector Yuen [Tue, 12 Mar 2019 18:51:37 +0000 (11:51 -0700)]
enable shape inference for elementwise operators (#17885)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17885

enable shape inference for elementwise operators

Reviewed By: yinghai

Differential Revision: D14411014

fbshipit-source-id: b19bcaabb2bba26fb79745ec84af0e4e5ed18ff0

Remove remaining test jit expects redux (#17924)
Elias Ellison [Tue, 12 Mar 2019 18:25:37 +0000 (11:25 -0700)]
Remove remaining test jit expects redux (#17924)

Summary:
Trying to reland https://github.com/pytorch/pytorch/pull/17886 since it broke a build and I reverted it
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17924

Differential Revision: D14423842

Pulled By: eellison

fbshipit-source-id: f219e786bd07f7da3b7f9e866981199f5ccf6318

Handle Scalars Better (#17875)
Elias Ellison [Tue, 12 Mar 2019 17:36:38 +0000 (10:36 -0700)]
Handle Scalars Better (#17875)

Summary:
This PR allows Scalars to be cast with `int()` and `float()`, allows Scalars to match with float arguments, and prints a better error message if `x.item()` is used as an int.

Scalars are a very uncommon case, and I don't think we want to add the maintenance burden of building out op coverage for it. It's more maintainable to better handle converting it to int/float.
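
Assumed usage based on the description above (a sketch; the exact script semantics may differ):

```python
import torch

@torch.jit.script
def fn(x):
    s = x.item()           # `s` is a Scalar (a number) inside TorchScript
    return float(s) * 2.0  # Scalars can now be cast with float() / int()

print(fn(torch.tensor(3)))  # 6.0
```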

Fix https://github.com/pytorch/pytorch/issues/17652

Also note: https://github.com/pytorch/pytorch/issues/16849
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17875

Differential Revision: D14411138

Pulled By: eellison

fbshipit-source-id: a4e957cefb0ffd10ddb234d92f6d1558cfce8751

Fixed a formatting issue in doc comments (#17505)
Brian Johnson [Tue, 12 Mar 2019 16:52:05 +0000 (09:52 -0700)]
Fixed a formatting issue in doc comments (#17505)

Summary:
for torch.distributed.broadcast_multigpu per issue #17243
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17505

Reviewed By: janewangfb

Differential Revision: D14373865

Pulled By: pietern

fbshipit-source-id: 6d7e91a3da50a7c9ba417ad852f7746eb5200043

Add nbytes, itemsize, element_size to at::Tensor. (#17810)
Edward Yang [Tue, 12 Mar 2019 16:45:06 +0000 (09:45 -0700)]
Add nbytes, itemsize, element_size to at::Tensor. (#17810)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17810

Partially addresses #12728. Also, switch the element_size bindings
to use the new function, rather than the method on Type.

We don't add Python bindings yet, as they need to be special
(they will be properties.)

Differential Revision: D14388790

fbshipit-source-id: 294183d0c8a59b0c13f2bf21d6f1cd557333e83b

Fix lint in test_distributions.py
Edward Yang [Tue, 12 Mar 2019 16:28:43 +0000 (09:28 -0700)]
Fix lint in test_distributions.py

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17821

Differential Revision: D14392899

fbshipit-source-id: 99f75b1d3a71bde8050caef8df7e5b9ecfe0c755

Fix lint in test_jit.py
Edward Yang [Tue, 12 Mar 2019 16:17:16 +0000 (09:17 -0700)]
Fix lint in test_jit.py

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17823

Differential Revision: D14392996

fbshipit-source-id: b9aa83898768c929e753c0f17bb09a54d724ae4d

Fix lint errors in test_autograd
Edward Yang [Tue, 12 Mar 2019 15:46:52 +0000 (08:46 -0700)]
Fix lint errors in test_autograd

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17812

Reviewed By: eellison

Differential Revision: D14388897

fbshipit-source-id: 6e2671805dc8d57af68eb0a0cd6ccb24d9db45e2

Added a few extra python bindings to help with walking the IR graph from Python (#17822)
Andras Tantos [Tue, 12 Mar 2019 15:46:16 +0000 (08:46 -0700)]
Added a few extra python bindings to help with walking the IR graph from Python (#17822)

Summary:
These changes add the following new Python bindings (see the sketch after this list):

- Values now have a 'type' property that allows getting to the 'type' object
- Blocks now have inputs and outputs, as well as returnNode and paramNode properties
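
A hypothetical walk of a graph using the bindings named above (a sketch only; property vs. method spellings may differ from the final API):

```python
import torch

def f(x):
    return x * 2 + 1

graph = torch.jit.trace(f, torch.randn(3)).graph
for node in graph.nodes():
    for value in node.outputs():
        print(value, value.type())   # Values expose their Type object
    for block in node.blocks():      # e.g. bodies of prim::If / prim::Loop
        print(list(block.inputs()), list(block.outputs()))
```
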
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17822

Differential Revision: D14410123

Pulled By: ezyang

fbshipit-source-id: 64ef79f85a7a43b83e4b127b1d39efcaa64b74dc

kthvalue consistency with sort in the presence of NaN (#17824)
Thomas Viehmann [Tue, 12 Mar 2019 15:45:17 +0000 (08:45 -0700)]
kthvalue consistency with sort in the presence of NaN (#17824)

Summary:
This PR causes kthvalue to be consistent with sort
(i.e. treat NaN as larger than any number), so that
`a.kthvalue(n) == a.sort()[n - 1]`.

One drawback is that median with a NaN argument does not return NaN,
which is a deviation from NumPy.
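
For example (assuming the behavior described above):

```python
import torch

a = torch.tensor([1.0, float('nan'), 0.5])
values, _ = a.sort()   # [0.5, 1.0, nan]: NaN is treated as the largest value
print(a.kthvalue(2))   # value 1.0, consistent with values[2 - 1]
```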

Thank you, ngimel, for raising this.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17824

Differential Revision: D14410092

Pulled By: ezyang

fbshipit-source-id: bdec2d8272dc4c65bcf2f9b8995e237774c44c02

Fix minor grammatical mistakes in torch/nn/modules/loss.py (#17892)
joy [Tue, 12 Mar 2019 15:39:37 +0000 (08:39 -0700)]
Fix minor grammatical mistakes in torch/nn/modules/loss.py (#17892)

Summary:
Fixes some minor grammatical mistakes in the doc of `loss.py`.

I think in the doc:
>  Note that for some losses, there multiple elements per sample.

the "are" is lost between "there" and "multiple".

This mistake appears in all the descriptions of the parameter `size_average`, and there are 17 of them.
It's minor, but it improves the doc, I think. 😁
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17892

Differential Revision: D14418177

Pulled By: ezyang

fbshipit-source-id: 412759f2f9b215819463bf8452ab0e0513218cd6

Remove (almost all) TensorOptions from native_functions.yaml (#17385)
Christian Puhrsch [Tue, 12 Mar 2019 14:54:17 +0000 (07:54 -0700)]
Remove (almost all) TensorOptions from native_functions.yaml (#17385)

Summary:
Stacked on top of https://github.com/pytorch/pytorch/pull/17386

Brings us to 1014/1106 of writing.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17385

Differential Revision: D14248008

Pulled By: cpuhrsch

fbshipit-source-id: 033e00de91e3edf7ae01ca03ebe436c0446b3b5c

Restore full Windows tests (#17102)
Karl Ostmo [Tue, 12 Mar 2019 13:28:54 +0000 (06:28 -0700)]
Restore full Windows tests (#17102)

Summary:
closes #17101
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17102

Differential Revision: D14420716

Pulled By: ezyang

fbshipit-source-id: 0134736e2d919afa683afa84cb2140f659042643

Prevent VS2017 from emitting ambiguous symbol errors (second time)
peter [Tue, 12 Mar 2019 08:53:42 +0000 (01:53 -0700)]
Prevent VS2017 from emitting ambiguous symbol errors (second time)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17863

Differential Revision: D14404818

Pulled By: soumith

fbshipit-source-id: 9dac6b926e270e2a29ec2e4dba2e93984da0e5f5

Fix windows test hang (#17778)
xuzhu [Tue, 12 Mar 2019 08:43:45 +0000 (01:43 -0700)]
Fix windows test hang (#17778)

Summary:
This PR resolves two concurrency issues discovered when running the tests on Windows. Details about the Windows tests can be found here: https://github.com/pytorch/pytorch/issues/17609

The change covers two fixes:
1. update running_preloaders_ upfront before creating the worker thread, to prevent underflow.
2. add a lock when updating stop_, to prevent a deadlock on the condition variable cv_write_ (see the sketch after this summary).

The fix has been tested on both Windows and Linux. With --gtest_repeat=1000, the tests run smoothly without issues.
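
Fix 2 follows the standard condition-variable pattern; a minimal Python analogy of that pattern (not the actual C++ code):

```python
import threading

lock = threading.Lock()
cv = threading.Condition(lock)
stop = False

def request_stop():
    global stop
    with lock:            # mutate the predicate under the CV's lock...
        stop = True
        cv.notify_all()   # ...so waiters cannot miss the notification

def wait_for_stop():
    with lock:
        while not stop:   # guard against spurious wakeups
            cv.wait()
```
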
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17778

Differential Revision: D14404910

Pulled By: soumith

fbshipit-source-id: 2fbb8007e4b0bce4613e9a9fd31b8aace1bbfa8d

torch.btrifact for tensors with greater than 3 dimensions (#14964)
vishwakftw [Tue, 12 Mar 2019 08:42:28 +0000 (01:42 -0700)]
torch.btrifact for tensors with greater than 3 dimensions (#14964)

Summary:
Motivation:
- Earlier, `torch.btrifact` could not handle tensors with greater than 3 dimensions. This is because of the check:
>   AT_CHECK(THTensor_(nDimension)(a) == 3, "expected 3D tensor, got size: ", a->sizes());

What is in this PR?:
- Move `btrifact` to ATen
- Remove relation to TH/THC.
- Handle tensors with more than three dimensions (see the sketch after this summary)
- Tests
- Docs modifications: added a note about the non-pivoting variant.

[blocked due to old magma-cuda binaries]
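
Assumed usage after this change (a sketch; `btrifact` was the batched LU-factorization API at the time):

```python
import torch

a = torch.randn(2, 3, 4, 4)   # 4-D: a 2x3 batch of 4x4 matrices
a_lu, pivots = a.btrifact()   # previously failed with "expected 3D tensor"
print(a_lu.shape, pivots.shape)
```
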
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14964

Differential Revision: D14405106

Pulled By: soumith

fbshipit-source-id: f051f5d6aaa45f85836a2867176c065733563184

Small clean up of aten_op
Roy Li [Tue, 12 Mar 2019 04:01:21 +0000 (21:01 -0700)]
Small clean up of aten_op

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17530

Reviewed By: ezyang

Differential Revision: D14237931

fbshipit-source-id: fb73d63d89fab0622097a49be6ed0b75ddb02a7c

add support for parsing class defs to the string frontend (#17628)
Michael Suo [Tue, 12 Mar 2019 02:07:58 +0000 (19:07 -0700)]
add support for parsing class defs to the string frontend (#17628)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17628

This is not hooked up anywhere yet, just adding support.
This shares the same restrictions as the python frontend—namely, that the only exprs allowed right now are method defs.

Reviewed By: shannonzhu

Differential Revision: D14291654

fbshipit-source-id: 7798e5ff412a52ef8803c7bae8f439e50968a73a

add test for out of order methods (#17624)
Michael Suo [Tue, 12 Mar 2019 02:07:57 +0000 (19:07 -0700)]
add test for out of order methods (#17624)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17624

Just to make sure this path works

Reviewed By: shannonzhu

Differential Revision: D14288056

fbshipit-source-id: b719c0e90252b6821b1f9b22d3d98982985a6cb3

initializing class value (#17585)
Michael Suo [Tue, 12 Mar 2019 02:07:57 +0000 (19:07 -0700)]
initializing class value (#17585)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17585

Create a sugared value that represents a class during initialization. This is
so that assignments to attributes correctly define attributes in __init__ but
raise an error elsewhere.

Reviewed By: shannonzhu

Differential Revision: D14263403

fbshipit-source-id: 09b2feeb272302f00a79c2a0302fbdf5483aed6a

Remove unused parameter in ProcessGroupGloo (#17718)
Pieter Noordhuis [Tue, 12 Mar 2019 00:57:56 +0000 (17:57 -0700)]
Remove unused parameter in ProcessGroupGloo (#17718)

Summary:
This is not used anywhere and wasn't cleaned up prior to 1.0.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17718

Reviewed By: janewangfb

Differential Revision: D14355154

Pulled By: pietern

fbshipit-source-id: f8ff3c8f50cd6365b369a5c5b85d72d8940df048

Revert D14414435: [pytorch][PR] Remove remaining IR Expect files
Elias Ellison [Tue, 12 Mar 2019 00:27:50 +0000 (17:27 -0700)]
Revert D14414435: [pytorch][PR] Remove remaining IR Expect files

Differential Revision:
D14414435

Original commit changeset: 0bfd7ce66ac2

fbshipit-source-id: 02de1814f3c4e581d3798059cee752517b176ed9

Remove remaining IR Expect files (#17886)
Elias Ellison [Tue, 12 Mar 2019 00:23:27 +0000 (17:23 -0700)]
Remove remaining IR Expect files (#17886)

Summary:
Last batch of IR expect files removed. Includes some removal of expect files that are no longer used.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17886

Differential Revision: D14414435

Pulled By: eellison

fbshipit-source-id: 0bfd7ce66ac2f72a57f15f45ebd60b95e80b6c16

Bool tensor creation (cpu) (#17376)
Iurii Zdebskyi [Mon, 11 Mar 2019 23:58:09 +0000 (16:58 -0700)]
Bool tensor creation (cpu) (#17376)

Summary:
This PR enables bool tensor creation and some basic operations for the CPU backend. This is a part of Bool Tensor feature implementation work. The whole plan looks like this:
    1. Storage Implementation [Done]
    2. Tensor Creation.
        a) CPU (this PR)
        b) CUDA
    3. Tensor Conversions.
    4. Tensor Indexing.
    5. Tensor Operations.
    6. Back compatibility related changes.

**Change**:
Enable CPU tensors and these operations:
- torch.zeros
- torch.tensor
- torch.ones
- torch.randint
- torch.full
- torch.full_like
- torch.empty
- torch.empty_like

**Tested via**:
1) unit tests

2)
torch.zeros(2,2, dtype=torch.bool)
torch.tensor([True, False], dtype=torch.bool)
torch.tensor([-1, -1.1, 0, 1, 1.1, 2], dtype=torch.bool)
torch.ones([1,2], dtype=torch.bool)
torch.randint(10, (2, 2), dtype=torch.bool)
torch.full((2, 3), True, dtype=torch.bool)
torch.empty(4, dtype=torch.bool)

a = torch.tensor([0,0,1])
b = torch.full_like(a, True)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17376

Reviewed By: ezyang

Differential Revision: D14375995

Pulled By: izdeby

fbshipit-source-id: a65490b5360ee0e6e3accc54ce7e32e49ad2d2a8

Remove device guard from TypeDefault::copy()
Roy Li [Mon, 11 Mar 2019 22:50:45 +0000 (15:50 -0700)]
Remove device guard from TypeDefault::copy()

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17833

Reviewed By: ezyang

Differential Revision: D14400901

Pulled By: li-roy

fbshipit-source-id: ababc95dadfc94a996a80c5332f45f76a300963d

re-enable torch.split tests (#17859)
Michael Suo [Mon, 11 Mar 2019 22:09:00 +0000 (15:09 -0700)]
re-enable torch.split tests (#17859)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17859

this has been fixed due to improvements in shape analysis

Reviewed By: driazati

Differential Revision: D14402781

fbshipit-source-id: 4ef2722ffedd9c8ac1eff55c244b421d7d3715ed

Fix lint in test_dataloader.py
Edward Yang [Mon, 11 Mar 2019 21:42:49 +0000 (14:42 -0700)]
Fix lint in test_dataloader.py

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17878

Reviewed By: eellison

Differential Revision: D14409933

fbshipit-source-id: 20ee8953a21e29b4557aff62b5e48dddd630eef6

Optimize fused_dropout_kernel launch bounds for AMD hardware
Johannes M Dieterich [Mon, 11 Mar 2019 21:39:07 +0000 (14:39 -0700)]
Optimize fused_dropout_kernel launch bounds for AMD hardware

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17870

Differential Revision: D14409990

Pulled By: ezyang

fbshipit-source-id: 0452282f459770823641b2527f47b1186ab14666

Deprecate torch.pstrf (#17866)
Vishwak Srinivasan [Mon, 11 Mar 2019 19:15:41 +0000 (12:15 -0700)]
Deprecate torch.pstrf (#17866)

Summary:
Changelog:
- Add deprecation warning to torch.pstrf
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17866

Differential Revision: D14405527

Pulled By: soumith

fbshipit-source-id: 73f3b7d61c60eb57e4bffd08112e552ae3e6dfdc

Allow structseq to be input of operators where tuple is expected (#17208)
Gao, Xiang [Mon, 11 Mar 2019 18:30:08 +0000 (11:30 -0700)]
Allow structseq to be input of operators where tuple is expected (#17208)

Summary:
Currently the following code gives an error on python 2 because `ret` is a structseq which is not a tuple
```python
ret = a.max(dim=0)
ret1 = torch.max(a, dim=0, out=ret)
```

This PR modifies the tuple check in the python arg parser to allow a structseq to be the input of operators where a tuple is expected, which makes the above code work.

Depend on: https://github.com/pytorch/pytorch/pull/17136
Partially fixes: https://github.com/pytorch/pytorch/issues/16813
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17208

Differential Revision: D14280198

Pulled By: VitalyFedyunin

fbshipit-source-id: beffebfd3951c4f5c7c8fe99a5847616a89491f3

Add PyTorch Governance, Contributor Guide, and List of Persons of Interest
Eric Nakagawa [Mon, 11 Mar 2019 17:29:54 +0000 (10:29 -0700)]
Add PyTorch Governance, Contributor Guide, and List of Persons of Interest

Summary: Adding new documents to the PyTorch website that describe how PyTorch is governed and how to contribute to the project, and that list persons of interest.

Reviewed By: orionr

Differential Revision: D14394573

fbshipit-source-id: ad98b807850c51de0b741e3acbbc3c699e97b27f

Fix compilation error (#17860)
Yinghai Lu [Mon, 11 Mar 2019 17:20:11 +0000 (10:20 -0700)]
Fix compilation error (#17860)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17860

att

Reviewed By: bddppq

Differential Revision: D14402751

fbshipit-source-id: 2d53b230dfd775372addeab1d3eaf0b9552fae9f

Revert D14392864: Fix lint in test_dataloader.py
Edward Yang [Mon, 11 Mar 2019 17:13:40 +0000 (10:13 -0700)]
Revert D14392864: Fix lint in test_dataloader.py

Differential Revision:
D14392864

Original commit changeset: 12477b9cfe29

fbshipit-source-id: 1864a80d5cfaceeae55d0145340a578f978ab4a7

Removed dead code from THTensorMath.h (#17769)
Iurii Zdebskyi [Mon, 11 Mar 2019 17:11:09 +0000 (10:11 -0700)]
Removed dead code from THTensorMath.h (#17769)

Summary:
This PR removes dead code from THTensorMath.h.
I found these unused methods while working on a PR where I plan to move the **fill** and **zero** methods from TH/THC to ATen.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17769

Differential Revision: D14372732

Pulled By: izdeby

fbshipit-source-id: 94fd3b52c691ebc89d2bdc8905452e7498038bf5

Introducing array-like sequence methods __contains__ (#17733)
bhushan [Mon, 11 Mar 2019 15:55:01 +0000 (08:55 -0700)]
Introducing array-like sequence methods __contains__ (#17733)

Summary:
for tensor

Fixes: #17000
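
For example, the membership test this enables (per the summary):

```python
import torch

t = torch.tensor([1, 2, 3])
print(2 in t)   # True, via the new Tensor.__contains__
print(5 in t)   # False
```
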
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17733

Differential Revision: D14401952

Pulled By: soumith

fbshipit-source-id: c841b128c5a1fceda1094323ed4ef1d0cf494909

5 years agoRevert "Add check for x64 Python before setup (#17707)" (#17864)
peter [Mon, 11 Mar 2019 15:49:30 +0000 (08:49 -0700)]
Revert "Add check for x64 Python before setup (#17707)" (#17864)

Summary:
This reverts commit 08fb9021da32e73bd7dec73104eea6a76dd44439.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17864

Differential Revision: D14404920

Pulled By: soumith

fbshipit-source-id: d41fc06e249f3437d4f80d1d6a5fdbd44c90462b

Registering of kl-divergence for independent distribution (#17681)
Nicki Skafte [Mon, 11 Mar 2019 15:07:22 +0000 (08:07 -0700)]
Registering of kl-divergence for independent distribution (#17681)

Summary:
This addresses issue https://github.com/pytorch/pytorch/issues/13545 and implements the proposed fix together with a single test.
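
Assumed usage of the registered KL (a sketch):

```python
import torch
from torch.distributions import Independent, Normal
from torch.distributions.kl import kl_divergence

p = Independent(Normal(torch.zeros(3), torch.ones(3)), 1)
q = Independent(Normal(torch.ones(3), torch.ones(3)), 1)
print(kl_divergence(p, q))   # previously raised NotImplementedError
```
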
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17681

Differential Revision: D14360161

Pulled By: ezyang

fbshipit-source-id: 427afc88e9054b5b0dc39ebbab1087b990695ea5

Fix lint in test_dataloader.py
Edward Yang [Mon, 11 Mar 2019 14:58:12 +0000 (07:58 -0700)]
Fix lint in test_dataloader.py

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17820

Reviewed By: eellison

Differential Revision: D14392864

fbshipit-source-id: 12477b9cfe290428d51cc28e024c8cbe8bb7bf51

Further improvements of nn.container docs
Tongzhou Wang [Mon, 11 Mar 2019 01:26:20 +0000 (18:26 -0700)]
Further improvements of nn.container docs

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17731

Differential Revision: D14401894

Pulled By: soumith

fbshipit-source-id: cebb25859f78589cc4f4f8afb1e84c97f82b6962

fix faq typo
Tongzhou Wang [Sun, 10 Mar 2019 22:26:07 +0000 (15:26 -0700)]
fix faq typo

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17851

Differential Revision: D14401791

Pulled By: soumith

fbshipit-source-id: ed6d64d6f5985e7ce76dca1e9e376782736b90f9

Fix log_softmax and softmax if any dimension is 0-d (#17651)
bhushan [Sun, 10 Mar 2019 22:18:48 +0000 (15:18 -0700)]
Fix log_softmax and softmax if any dimension is 0-d (#17651)

Summary:
- Test added
- test_dim_function_empty: softmax and log_softmax on last dimension

fixes: #17262
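
A quick example of the fixed case (assuming shapes as below):

```python
import torch
import torch.nn.functional as F

x = torch.randn(0, 5)                    # one dimension has size zero
print(F.log_softmax(x, dim=-1).shape)    # torch.Size([0, 5]), no error
print(F.softmax(x, dim=-1).shape)        # torch.Size([0, 5])
```
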
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17651

Differential Revision: D14349009

Pulled By: gchanan

fbshipit-source-id: b6f728f5c6be8ae7615749e3f0c201886632923e

Correct loss docstrings (#17300)
ZhuBaohe [Sun, 10 Mar 2019 18:50:56 +0000 (11:50 -0700)]
Correct loss docstrings (#17300)

Summary:
In the loss doc descriptions, replace the deprecated 'reduce' and 'size_average' parameters with the 'reduction' parameter.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17300

Differential Revision: D14195789

Pulled By: soumith

fbshipit-source-id: 625e650ec20f13b2d22153a4a535656cf9c8f0eb

When openblas exists, "OpenBLAS_FOUND" is defined, rather than "OPENBLAS_FOUND". (#17841)
HE, Tao [Sun, 10 Mar 2019 16:21:13 +0000 (09:21 -0700)]
When openblas exists, "OpenBLAS_FOUND" is defined, rather than "OPENBLAS_FOUND". (#17841)

Summary:
See https://github.com/pytorch/pytorch/blob/master/cmake/Modules/FindOpenBLAS.cmake#L36

This typo leads cmake to fail to detect OpenBLAS on Ubuntu.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17841

Differential Revision: D14400261

Pulled By: soumith

fbshipit-source-id: 287e019e122230cf6b70ab1ea94e5c514f429c88

Passing indices as a list to Subset instead of Tensor (#17649)
bhushan [Sun, 10 Mar 2019 16:20:30 +0000 (09:20 -0700)]
Passing indices as a list to Subset instead of Tensor (#17649)

Summary:
Indices in Subset were stored as tensors earlier;
they are now passed as a list in random_split to ensure integer indexing.

fixes: #17466
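
An illustration of the intended behavior (assumed; attribute names as in torch.utils.data):

```python
import torch
from torch.utils.data import TensorDataset, random_split

ds = TensorDataset(torch.arange(10))
train, val = random_split(ds, [8, 2])
print(type(train.indices[0]))   # int rather than a 0-dim tensor
print(train[0])                 # plain integer indexing into the dataset
```
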
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17649

Differential Revision: D14400250

Pulled By: soumith

fbshipit-source-id: cd20a959f33773c4babf8e861ea37ec61c2713a0

Clarify JIT docs
James Reed [Sun, 10 Mar 2019 07:10:26 +0000 (23:10 -0800)]
Clarify JIT docs

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17846

Differential Revision: D14400363

Pulled By: jamesr66a

fbshipit-source-id: 862316b5fd95526b6edebeca19d2cc522779df11

Add metadata for torch jit TracedModules. (#17640)
Pritam Damania [Sun, 10 Mar 2019 05:31:42 +0000 (21:31 -0800)]
Add metadata for torch jit TracedModules. (#17640)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17640

Pull Request resolved: https://github.com/pytorch/pytorch/pull/17311

I've extended our model metadata framework in this diff to support
traced modules as well. Re-used a lot of components from the previous
implementation of ScriptModule metadata.

Tracing is a little different from Scripting since you can't just create a
subclass of TopLevelTracedModule (the type returned by torch.jit.trace) and attach
metadata the way we did for ScriptModule. As a result, I've introduced a
separate API torch.fb.jit_trace which returns an instance of
TracedModuleWithMetadata which is a subclass of TopLevelTracedModule. As a
result, we can now attach metadata to this instance.

Reviewed By: dzhulgakov

Differential Revision: D14117966

fbshipit-source-id: 3eee5eef733cb8d6a219c02e2f41d08698eca326

Fix PySlice_Unpack not available on PyPy 3.6 yet (#17836)
Konstantin Lopuhin [Sun, 10 Mar 2019 04:06:57 +0000 (20:06 -0800)]
Fix PySlice_Unpack not available on PyPy 3.6 yet (#17836)

Summary:
This is one of the fixes needed to support compilation on PyPy 3.6, see https://github.com/pytorch/pytorch/issues/17835
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17836

Differential Revision: D14399404

Pulled By: soumith

fbshipit-source-id: ca650a6e2066aed86ddd3314a95d0cb3c515c633

PyPy compatibility: let unmodified slots be inherited in the standard way (#17837)
Ronan Lamy [Sat, 9 Mar 2019 19:38:05 +0000 (11:38 -0800)]
PyPy compatibility: let unmodified slots be inherited in the standard way (#17837)

Summary:
This is needed to fix a segfault on PyPy 3.6, see https://bitbucket.org/pypy/pypy/issues/2968/segfault-calling-cpyext_tp_new_tuple and https://github.com/pytorch/pytorch/issues/17835
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17837

Differential Revision: D14399408

Pulled By: soumith

fbshipit-source-id: 75328a30018313d3223dd3e3eef9240a416c049b

Run fp16 resnet50 training in bench script (#17831)
Junjie Bai [Sat, 9 Mar 2019 05:50:20 +0000 (21:50 -0800)]
Run fp16 resnet50 training in bench script (#17831)

Summary:
cc xw285cornell
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17831

Differential Revision: D14398532

Pulled By: bddppq

fbshipit-source-id: 37c03cc2eebe3a6083e05631cb6ff03474e4a8a2

Int8 FC performance debugging (#17700)
Summer Deng [Sat, 9 Mar 2019 03:00:43 +0000 (19:00 -0800)]
Int8 FC performance debugging (#17700)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17700

Add performance debugging utilities in DNNLOWP FC operator and the python script

Reviewed By: amylittleyang

Differential Revision: D14321299

fbshipit-source-id: 50dbd7b352a1da5d2ecb659d8003e71e70750063

Optimize LayerNormOp (#17604)
Xiaomeng Yang [Sat, 9 Mar 2019 01:35:17 +0000 (17:35 -0800)]
Optimize LayerNormOp (#17604)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17604

Optimize LayerNormOp

i-am-not-moving-c2-to-c10

Reviewed By: houseroad

Differential Revision: D14274175

fbshipit-source-id: a7aa263a1b0eb109682d2be99306e7b2cdcc0faf

Remove some simple use cases of Type::ScalarType()
Roy Li [Sat, 9 Mar 2019 00:39:04 +0000 (16:39 -0800)]
Remove some simple use cases of Type::ScalarType()

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17529

Reviewed By: ezyang

Differential Revision: D14237932

fbshipit-source-id: be633a1fc19215d53cfe083fdd7196acf2b7dd2f

Change Dispatch.h to use ScalarType over Type
Roy Li [Sat, 9 Mar 2019 00:39:04 +0000 (16:39 -0800)]
Change Dispatch.h to use ScalarType over Type

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17527

Reviewed By: zou3519

Differential Revision: D14235395

fbshipit-source-id: 3f53e33f6794f1f14c2edf79014b8ef8397822c5

Revert D14361993: [pytorch][PR] [Onnx] - refactoring serialization of ONNX initializers to be name-based
Lu Fang [Sat, 9 Mar 2019 00:27:00 +0000 (16:27 -0800)]
Revert D14361993: [pytorch][PR] [Onnx] - refactoring serialization of ONNX initializers to be name-based

Differential Revision:
D14361993

Original commit changeset: da93e945d557

fbshipit-source-id: 15eea001fbcd059ac13903405aeb9ea182c6ee8b

Open registration for c10 thread pool (#17788)
James Reed [Fri, 8 Mar 2019 23:33:34 +0000 (15:33 -0800)]
Open registration for c10 thread pool (#17788)

Summary:
1. Move ATen threadpool & open registration mechanism to C10
2. Move the `global_work_queue` to use this open registration mechanism, to allow users to substitute in their own
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17788

Reviewed By: zdevito

Differential Revision: D14379707

Pulled By: jamesr66a

fbshipit-source-id: 949662d0024875abf09907d97db927f160c54d45

Cast nn.Upsample.scale_factor to a float (#17732)
David Riazati [Fri, 8 Mar 2019 23:26:25 +0000 (15:26 -0800)]
Cast nn.Upsample.scale_factor to a float (#17732)

Summary:
Fixes #17106
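
A sketch of the effect (assumed):

```python
import torch.nn as nn

up = nn.Upsample(scale_factor=2)   # an int is accepted here...
print(up.scale_factor)             # ...and stored as the float 2.0 after this fix
```
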
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17732

Differential Revision: D14388192

Pulled By: driazati

fbshipit-source-id: d9c9e87a7c6db63c1de3ddebbb8dcf619f0dc34d

Fix lint in run_test.py
Edward Yang [Fri, 8 Mar 2019 22:30:15 +0000 (14:30 -0800)]
Fix lint in run_test.py

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17815

Reviewed By: eellison

Differential Revision: D14390308

fbshipit-source-id: 22efd62a1bbd1fc8155a942d7160d5b7d3158e6b

Fix lint in test/common_utils.py
Edward Yang [Fri, 8 Mar 2019 22:19:23 +0000 (14:19 -0800)]
Fix lint in test/common_utils.py

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17814

Reviewed By: eellison

Differential Revision: D14390194

fbshipit-source-id: b4b3bbe20a15d0b9ed127b255e01c0d6d0832c1b

Replace tensor.type().scalarType() calls with tensor.scalar_type()
Roy Li [Fri, 8 Mar 2019 22:05:01 +0000 (14:05 -0800)]
Replace tensor.type().scalarType() calls with tensor.scalar_type()

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17515

Reviewed By: ezyang

Differential Revision: D14233250

fbshipit-source-id: 6c7af8d2291c0c2b148001b30cf03834f34366c0

Catch exceptions in bound_shape_inference (#17775)
Yinghai Lu [Fri, 8 Mar 2019 21:15:05 +0000 (13:15 -0800)]
Catch exceptions in bound_shape_inference (#17775)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17775

Handles the user-provided input shape hint properly.

Reviewed By: zrphercule

Differential Revision: D14368735

fbshipit-source-id: 504cd96589e47aa432617e56362aa6b01a25ba9b

refactor caffe2 operator constructors - 11/9 (#17722)
Sebastian Messmer [Fri, 8 Mar 2019 20:33:31 +0000 (12:33 -0800)]
refactor caffe2 operator constructors - 11/9 (#17722)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17722

clangr codemod

Reviewed By: ezyang

Differential Revision: D14350584

fbshipit-source-id: adef54cedc9409b4fb365f6644e2621a9e47b2ff

Suppress C408 lint (don't use dict constructor) (#17813)
Edward Yang [Fri, 8 Mar 2019 20:15:49 +0000 (12:15 -0800)]
Suppress C408 lint (don't use dict constructor) (#17813)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17813

We have a lot of manually written out dict() constructors,
and (1) I don't think use of curly brace syntax is much
of an improvement and (2) it seems like a waste of time to
fix them all.
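
For reference, the pattern C408 flags (illustrative):

```python
d1 = dict(foo=1, bar=2)    # flake8 C408: unnecessary dict call
d2 = {'foo': 1, 'bar': 2}  # the literal form the lint prefers
```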

Reviewed By: eellison

Differential Revision: D14390136

fbshipit-source-id: 6199bef4dea75b6079bcb9d9e8acf20a2e1a86e1

Add matches_jit_signature to recent native functions
Christian Puhrsch [Fri, 8 Mar 2019 19:37:01 +0000 (11:37 -0800)]
Add matches_jit_signature to recent native functions

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17805

Differential Revision: D14388004

Pulled By: cpuhrsch

fbshipit-source-id: c50580b6fe1e9cfefed91aaa526376325d9f9c0d

Add /MD to prevent linking errors on Windows
peterjc123 [Fri, 8 Mar 2019 18:39:06 +0000 (10:39 -0800)]
Add /MD to prevent linking errors on Windows

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17799

Differential Revision: D14385777

Pulled By: ezyang

fbshipit-source-id: 8c1d9f80c48399087f5fae4474690e6d80d740e6

Change message on unknown db type to be friendly (#17795)
Dmytro Dzhulgakov [Fri, 8 Mar 2019 18:36:29 +0000 (10:36 -0800)]
Change message on unknown db type to be friendly (#17795)

Summary:
CreateDB actually returns nullptr when db type is unknown and throws when the file is missing
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17795

Reviewed By: ezyang

Differential Revision: D14383226

Pulled By: dzhulgakov

fbshipit-source-id: 1dcf75a6b4ba8b64a24d4e5daf02db3189d56b7b

Trace rnn max_batch_size (#17727)
David Riazati [Fri, 8 Mar 2019 18:29:51 +0000 (10:29 -0800)]
Trace rnn max_batch_size (#17727)

Summary:
This causes the tracer to record the select / cast to int operation instead of just an int constant

Fixes #15319 but relies on a fix for #17583 first
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17727

Differential Revision: D14377886

Pulled By: driazati

fbshipit-source-id: 59453def54ba72756303f723993844dbeb5d2f8b

Remove legacy way of exposing caffe2 operators to PyTorch (#17742)
Sebastian Messmer [Fri, 8 Mar 2019 18:19:49 +0000 (10:19 -0800)]
Remove legacy way of exposing caffe2 operators to PyTorch (#17742)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17742

This path isn't used anymore, and is incompatible with the changes stacked on top of this diff.
Removing it.
cc bwasti to check and confirm these can really be deleted

Reviewed By: ezyang

Differential Revision: D14362426

fbshipit-source-id: 32cdc19f28c2a981ae1e204901420998367ee588

Remove 'Tensor' key from ATen codegen. (#17782)
Gregory Chanan [Fri, 8 Mar 2019 17:41:33 +0000 (09:41 -0800)]
Remove 'Tensor' key from ATen codegen. (#17782)

Summary:
We used to have different ATen Tensor types, but we don't anymore.  This was just being maintained by a codegen'ed comment.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17782

Reviewed By: ezyang

Differential Revision: D14378004

Pulled By: gchanan

fbshipit-source-id: 1bbf276393a391252d372cc385230c784bd78588

Remove ProcessorSpecificPlugin. (#17789)
Gregory Chanan [Fri, 8 Mar 2019 17:39:03 +0000 (09:39 -0800)]
Remove ProcessorSpecificPlugin. (#17789)

Summary:
It doesn't seem to be used.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17789

Reviewed By: ezyang

Differential Revision: D14382423

Pulled By: gchanan

fbshipit-source-id: 0ac3236c48979a1b2bcd615e307e55f10fd8eb77