platform/upstream/pytorch.git
5 years agoAdd torch.rot90 to torch.rst
Gao, Xiang [Sun, 23 Dec 2018 22:28:31 +0000 (14:28 -0800)]
Add torch.rot90 to torch.rst

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15512
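For reference, a minimal illustration of the op being documented (output shown as a comment):

```
import torch

x = torch.arange(4).reshape(2, 2)   # [[0, 1], [2, 3]]
# rotate 90 degrees counter-clockwise in the plane given by dims
torch.rot90(x, k=1, dims=[0, 1])
# tensor([[1, 3],
#         [0, 2]])
```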

Differential Revision: D13545775

Pulled By: soumith

fbshipit-source-id: 2a8896571745630cff4aaf3d5469ef646bdcddb4

5 years agofix parallelization detection for CPU foreach_reduced_elt (#15483)
Brennan Vincent [Sun, 23 Dec 2018 20:49:08 +0000 (12:49 -0800)]
fix parallelization detection for CPU foreach_reduced_elt (#15483)

Summary:
This does two things:

(1) Revert #15114, which is incorrect and actually just completely disables parallelization in this function (because `at::get_num_threads` returns `-1` unless it has been set explicitly).

(2) Fix our (FB-internal) failing tests that #15114 was intended to fix, by still working correctly in a setup where `#ifdef _OPENMP` is set and `omp_get_max_threads() > 1`, but `#pragma omp parallel` only launches one thread. I believe such an unusual situation only exists in certain unit tests within FB infra, but we still need it to work.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15483

Differential Revision: D13538940

Pulled By: umanwizard

fbshipit-source-id: a3362c7ac7327ced350d127bb426f82c59e42732

5 years agoadd rowwise adagrad lp test (#15082)
Jongsoo Park [Sat, 22 Dec 2018 18:22:56 +0000 (10:22 -0800)]
add rowwise adagrad lp test (#15082)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15082

We didn't have a unit test for low-precision rowwise adagrad

Reviewed By: chocjy

Differential Revision: D13300732

fbshipit-source-id: 46e7bdfc82c5a6855eeb6f653c0a96b0b3a20546

5 years agohandle empty inputs to SparseLengthsMean correctly (#15389)
Jongsoo Park [Sat, 22 Dec 2018 06:17:35 +0000 (22:17 -0800)]
handle empty inputs to SparseLengthsMean correctly (#15389)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15389

SparseLengthsMean was generating uninitialized data for empty inputs (lengths == 0). We should return zeros.
The unit tests were also not covering this special case, which is fixed by this diff.
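A minimal sketch of the fixed behavior (blob names are illustrative):

```
import numpy as np
from caffe2.python import core, workspace

workspace.FeedBlob("data", np.random.rand(4, 3).astype(np.float32))
workspace.FeedBlob("indices", np.array([], dtype=np.int64))
workspace.FeedBlob("lengths", np.array([0, 0], dtype=np.int32))  # two empty segments
op = core.CreateOperator("SparseLengthsMean", ["data", "indices", "lengths"], ["out"])
workspace.RunOperatorOnce(op)
workspace.FetchBlob("out")  # with this fix: zeros of shape (2, 3), not uninitialized data
```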

Reviewed By: salexspb

Differential Revision: D13515970

fbshipit-source-id: 3c35265638f64f13f0262cee930c94f8628005da

5 years agoAdd pthreadpool_create and pthreadpool_destroy (#15492)
Hao Lu [Sat, 22 Dec 2018 04:23:14 +0000 (20:23 -0800)]
Add pthreadpool_create and pthreadpool_destroy (#15492)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15492

Add pthreadpool_create and pthreadpool_destroy, which are used by NNPACK tests.

Reviewed By: Maratyszcza

Differential Revision: D13540997

fbshipit-source-id: 628c599df87b552ca1a3703854ec170243f04d2e

5 years agoMetadata for input/output formats in model file proto. (#15252)
Pritam Damania [Sat, 22 Dec 2018 01:34:51 +0000 (17:34 -0800)]
Metadata for input/output formats in model file proto. (#15252)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15252

We would like to extend the model file format to include strongly typed, semantic information
about the model inputs and outputs.

The goal is for a user to be able to consider a model file like a function with
a well defined API describing what the inputs and outputs would be.

Reviewed By: dzhulgakov

Differential Revision: D13009915

fbshipit-source-id: 5df124a876ad03c05fbdaacae0eab659637734c1

5 years agoadd len to nativeResolver (#15488)
Zachary DeVito [Sat, 22 Dec 2018 00:44:19 +0000 (16:44 -0800)]
add len to nativeResolver (#15488)

Summary:
(otherwise len is not resolvable using torch::jit::compile)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15488

Differential Revision: D13539991

Pulled By: zdevito

fbshipit-source-id: 3ba85fa7b1adb163f9229c568f7997d22321903d

5 years agoRemove NoneGenerator
David Riazati [Sat, 22 Dec 2018 00:30:35 +0000 (16:30 -0800)]
Remove NoneGenerator

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15335

Differential Revision: D13540357

Pulled By: driazati

fbshipit-source-id: a289e5944b65872103f68faac74e18f10e7c6fff

5 years agoAdd self to Python printer reserved words (#15318)
David Riazati [Fri, 21 Dec 2018 23:59:29 +0000 (15:59 -0800)]
Add self to Python printer reserved words (#15318)

Summary:
This adds `self` to the list of reserved words, sorts the lines, and prevents the tracer from naming values 'self' (which happens in torch/tensor.py)

Fixes #15240
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15318

Differential Revision: D13540192

Pulled By: driazati

fbshipit-source-id: 46ae02e51b1b31d5c62110fa83ba258ea6bada27

5 years agoAD support for adaptive_avg_pool2d (#15459)
Ailing Zhang [Fri, 21 Dec 2018 23:32:44 +0000 (15:32 -0800)]
AD support for adaptive_avg_pool2d (#15459)

Summary:
This adds AD support for adaptive_avg_pool2d, which is necessary for resnet50 in pytorch/vision:master. cc: soumith asuhan dlibenzi

apaszke I saw the autodiff bug you fixed in #15403; it doesn't prevent this PR from passing, so I'll leave it for your PR to fix. :)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15459

Differential Revision: D13534732

Pulled By: ailzhang

fbshipit-source-id: 4e48b93e35d5ecfe7bd64b6a132a55b07843f206

5 years agoHandling nullptr case
Hao Lu [Fri, 21 Dec 2018 23:05:12 +0000 (15:05 -0800)]
Handling nullptr case

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15467

Reviewed By: Maratyszcza

Differential Revision: D13536504

fbshipit-source-id: ab46ff6bb4b6ce881c3e29d7e6a095ea62289db4

5 years agoRelax check on outputs (#15458)
Bram Wasti [Fri, 21 Dec 2018 22:11:26 +0000 (14:11 -0800)]
Relax check on outputs (#15458)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15458

Many nets in the wild seem to have outputs that are never produced by the net.

Reviewed By: ZolotukhinM

Differential Revision: D13534185

fbshipit-source-id: 2b23b39c28404c53f68868f3bf6df53c5fea9eab

5 years agoallow non-final returns (#15463)
Zachary DeVito [Fri, 21 Dec 2018 21:46:12 +0000 (13:46 -0800)]
allow non-final returns (#15463)

Summary:
This PR allows a subclass of programs that have return statements that are not final in the graph.

`final_returns.h` contains a comment describing how this is accomplished.
To minimize complexity in `compiler.cpp`, this pass is done as an AST-to-AST rewrite before the compiler runs.
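A minimal sketch of the kind of program this enables (illustrative function, assuming the common if/return shape):

```
import torch

@torch.jit.script
def relu_like(x):
    # an early, non-final return; previously script required the
    # return to be the last statement in the function body
    if bool(x.sum() > 0):
        return x
    return x * 0
```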
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15463

Differential Revision: D13538962

Pulled By: zdevito

fbshipit-source-id: 67105ca873351825b4a364092ab1873779f3e462

5 years agoFixed trivial typos in Dropout2D and Dropout3D classes (#15200)
derek [Fri, 21 Dec 2018 19:54:57 +0000 (11:54 -0800)]
Fixed trivial typos in Dropout2D and Dropout3D classes (#15200)

Summary:
Fixed trivial typos in Dropout2D and Dropout3D classes

weiyangfb
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15200

Differential Revision: D13537888

Pulled By: ezyang

fbshipit-source-id: 8fb06027ca663a2e4bfa016af400698ae3c88ad1

5 years agoUpdating submodules
svcscm [Fri, 21 Dec 2018 19:44:29 +0000 (11:44 -0800)]
Updating submodules

Reviewed By: cdelahousse

fbshipit-source-id: 59d7a5b82fb78bc2d2285d0896e35c262512ffb9

5 years agoeq_fixes (#15475)
surgan12 [Fri, 21 Dec 2018 19:32:02 +0000 (11:32 -0800)]
eq_fixes (#15475)

Summary:
Fixes #15464.
cc: ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15475

Differential Revision: D13537812

Pulled By: ezyang

fbshipit-source-id: 127adf612ac8b3d3a64baa3d12a53daba7d3e4b8

5 years agoEnable running collect_env.py without building PyTorch (#15468)
vishwakftw [Fri, 21 Dec 2018 19:29:36 +0000 (11:29 -0800)]
Enable running collect_env.py without building PyTorch (#15468)

Summary: Closes #15346

Differential Revision: D13537873

Pulled By: ezyang

fbshipit-source-id: 7765ce4108dae9479d8900c0815cc2f174596a83

5 years agoBack out "[nomnigraph][executor] computeChains with nomnigraph" (#15451)
Bram Wasti [Fri, 21 Dec 2018 19:06:49 +0000 (11:06 -0800)]
Back out "[nomnigraph][executor] computeChains with nomnigraph" (#15451)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15451

Original commit changeset: ccd050bfead6

Reviewed By: ilia-cher

Differential Revision: D13533161

fbshipit-source-id: 1d0dcd54c2e3875aab015f3e996693e67a449b87

5 years agoDirect FBGEMM integration into ATen (#13777)
James Reed [Fri, 21 Dec 2018 18:32:57 +0000 (10:32 -0800)]
Direct FBGEMM integration into ATen (#13777)

Summary:
This PR implements infrastructure for post-processing a model to apply int8 quantization to its `nn.Linear` modules. Highlights of the implementation:

1) Inputs and outputs are `float` (quantized and packed internally), but the weight is quantized and packed ahead of time for efficiency. This implementation performs well in small-batch size GEMM calls. It should not be considered a general-purpose quantized GEMM kernel.
2) Weight packing is dependent on machine architecture (e.g. vector register width), so it is done just-in-time. Concretely, it is done on model load for the weights and it is done during operator execution for the input value.
3) Biases are unquantized
4) We fail loudly if we are attempting to run this on a machine that does not support FBGEMM. This is because we do not want a model's numerics to differ based on which machine it is run on. A model containing these FBGEMM ops *must* be run with FBGEMM

The API can be seen in the added test case. Highlights are:
1) `torch.jit.quantized.quantize_linear_modules` walks the module hierarchy of the passed-in Module and replaces all `nn.Linear` modules with a new `QuantizedLinear` module, which encapsulates the behavior described above.
2) `_pack()` and `_unpack()` script methods are present on `QuantizedLinear` modules. These methods should be called before serialization and after deserialization, respectively. This ensures that the weight matrix is properly packed for the running machine's architecture. Note that in the long term, we would like to move toward a more Pickle-style serialization technique, rather than having these explicit methods that mutate member values. This is blocked on being able to assign attributes in a ScriptMethod, among other things.
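A minimal usage sketch of the API named above (model and shapes are illustrative; requires an FBGEMM-capable machine):

```
import torch
import torch.jit.quantized

model = torch.nn.Sequential(torch.nn.Linear(128, 64), torch.nn.ReLU())
# replaces every nn.Linear in the hierarchy with a QuantizedLinear module
# (weights int8-quantized and packed; inputs/outputs stay float)
qmodel = torch.jit.quantized.quantize_linear_modules(model)
```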
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13777

Differential Revision: D13383276

Pulled By: jamesr66a

fbshipit-source-id: 00f29c9f34544add2b90107e3cf55a287802c344

5 years agoReplace getargspec with getfullargspec (#15396)
Ashwin Ramaswami [Fri, 21 Dec 2018 17:37:25 +0000 (09:37 -0800)]
Replace getargspec with getfullargspec (#15396)

Summary:
Replace `getargspec` with `getfullargspec` to resolve test warnings. Fixes #15344.
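A quick illustration of the replacement (`f` is a hypothetical function):

```
import inspect

def f(a, b=1, *args, **kwargs):
    pass

spec = inspect.getfullargspec(f)  # unlike getargspec, also handles keyword-only args
print(spec.args, spec.defaults, spec.varkw)  # ['a', 'b'] (1,) kwargs
```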
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15396

Differential Revision: D13529548

Pulled By: zou3519

fbshipit-source-id: 50d3be92423a9ce89bc4895b67569663e1abbaa6

5 years agoThe benchmark binary supports multiple batches in one run (#15443)
Fei Sun [Fri, 21 Dec 2018 16:39:05 +0000 (08:39 -0800)]
The benchmark binary supports multiple batches in one run (#15443)

Summary:
It is sometimes beneficial to run multiple batches in one benchmark and check the aggregated results.

This PR enables this functionality.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15443

Reviewed By: llyfacebook

Differential Revision: D13531129

Pulled By: sf-wind

fbshipit-source-id: 553a762a5cbadf5a3d9fd6af767ae34899bc1aa2

5 years agoMove torch.logspace to ATen and parallelize on CPU.
Gregory Chanan [Fri, 21 Dec 2018 16:18:37 +0000 (08:18 -0800)]
Move torch.logspace to ATen and parallelize on CPU.

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15438

Reviewed By: ezyang

Differential Revision: D13529626

Pulled By: gchanan

fbshipit-source-id: 896e8afee3d6b5a706c4f5815b91ba6bd8af6672

5 years agoFix cudnn dropout (#15473)
Dmytro Dzhulgakov [Fri, 21 Dec 2018 16:13:15 +0000 (08:13 -0800)]
Fix cudnn dropout (#15473)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15473

Revert accidental changes introduced in D13335176

IntList is a range, and copying it just copies pointers. Thus the pointers would point either to deallocated memory or to the same memory, causing the equality check to always pass.

Reviewed By: ezyang

Differential Revision: D13537131

fbshipit-source-id: c97b3533be689bb4cdadd9e612f1284ac50e4bda

5 years agoformat specialized_segment_ops_test.py to prepare D13515970 (#15408)
Jongsoo Park [Fri, 21 Dec 2018 07:26:23 +0000 (23:26 -0800)]
format specialized_segment_ops_test.py to prepare D13515970 (#15408)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15408

Applied formatting to specialized_segment_ops_test.py to prepare D13515970

Reviewed By: salexspb

Differential Revision: D13520300

fbshipit-source-id: c3250b6abe8087c607f65ae60d1da61bd46c342b

5 years agoClean up onnxifi transformation code (#15453)
Yinghai Lu [Fri, 21 Dec 2018 06:04:09 +0000 (22:04 -0800)]
Clean up onnxifi transformation code (#15453)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15453

Just move things around to facilitate further development. No logic change.

Reviewed By: rdzhabarov

Differential Revision: D13533959

fbshipit-source-id: eebab1306939e802aacffb24a711d372fd67916c

5 years agoRecord Caffe2's current stream ID in c10_cuda. (#15174)
Edward Yang [Fri, 21 Dec 2018 05:51:25 +0000 (21:51 -0800)]
Record Caffe2's current stream ID in c10_cuda. (#15174)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15174

Previously, Caffe2 maintained a separate per-thread per-device
current logical CUDA stream ID.  In this PR, we switch Caffe2 over
to using c10::Stream to manage the current stream, and also
manage the allocation of cudaStream_t objects.

This results in a slight behavior change: previously, Caffe2
would have been willing to allocate an arbitrary number of
CUDA streams, depending on how high the logical stream IDs
went.  The c10::Stream pool has a fixed number of streams, once
you exceed it, it wraps around.

Reviewed By: dzhulgakov

Differential Revision: D13451550

fbshipit-source-id: da6cf33ee026932a2d873835f6e090f7b8a7d8dc

5 years agoAdd option to automatically handle unsorted variable-length sequences in RNNs (#15225)
Richard Zou [Fri, 21 Dec 2018 01:34:41 +0000 (17:34 -0800)]
Add option to automatically handle unsorted variable-length sequences in RNNs (#15225)

Summary:
Fixes #3584.

Motivation: manually sorting sequences, packing them, and then unsorting them
is something a lot of users have complained about doing, especially when we can
offer library support for them.

Overview: we internally sort sequences before packing them and store a list of
`unsorted_indices` that represent how to unsort the sequences inside
PackedSequence. The packing helper functions return PackedSequence with the
`permutation` field and the unpacking helper functions use it to unsort.

To implement this, the following changes were made:
- PackedSequence now keeps `sorted_indices` and `unsorted_indices`.
  These two can be thought of as permutations and are inverses of each other.
  `sorted_indices` is how the sequences were sorted; `unsorted_indices` is how
  to unsort the sequences.
- Added an `enforce_sorted` argument to pack_sequence and pack_padded_sequence
  that maintains the legacy behavior of error-ing out on unsorted-sequences.
  When `enforce_sorted=True`, these functions maintain their ONNX exportability.
- pack_sequence(sequences, enforce_sorted) takes in unsorted sequences.
- pack_padded_sequence can take in a padded tensor that represents padded,
  unsorted sequences.
- pad_packed_sequence unsorts the PackedSequence such that it is still the
  inverse operation of packed_padded_sequence.
- RNNs apply `sort_indices` to their input hidden state and apply
  `unsort_indices` to their output hidden state. This is to ensure that the
  hidden state batches correspond to the user's ordering of input sequences.

NOT BC-Breaking
- The default for pack_sequence and pack_padded_sequence is
  `enforce_sorted=True` to avoid breaking ONNX export. To use the new
  functionality, pass in `enforce_sorted=False`
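A minimal sketch of the new flow (shapes illustrative):

```
import torch
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

padded = torch.randn(3, 5, 8)      # (batch, max_len, features)
lengths = torch.tensor([3, 5, 2])  # deliberately unsorted

# enforce_sorted=False sorts internally and records unsorted_indices,
# so outputs come back in the caller's original batch order
packed = pack_padded_sequence(padded, lengths, batch_first=True,
                              enforce_sorted=False)
rnn = torch.nn.LSTM(input_size=8, hidden_size=4, batch_first=True)
out, (h, c) = rnn(packed)
unpacked, out_lengths = pad_packed_sequence(out, batch_first=True)
```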

Testing Plan
- Modified TestNN.test_pack_sequence, TestNN.test_packed_padded_sequence,
  and TestNN.test_variable_sequence (RNN test) to check the behavior
  of unsorted sequences, sorted sequences, and sorted sequences with
  enforce_sorted=True
- test/test_jit.py has a test to see if RNNs are exportable with
  enforce_sorted=True

cc colesbury
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15225

Reviewed By: soumith

Differential Revision: D13507138

Pulled By: zou3519

fbshipit-source-id: b871dccd6abefffca81bc4e3efef1873faa242ef

5 years agoChange default value of unique to 'sorted=True'
WeihuangXu [Fri, 21 Dec 2018 01:04:14 +0000 (17:04 -0800)]
Change default value of unique to 'sorted=True'

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15379
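In effect (a minimal illustration of the new default):

```
import torch

x = torch.tensor([3, 1, 2, 1, 3])
torch.unique(x)  # now equivalent to torch.unique(x, sorted=True) -> tensor([1, 2, 3])
```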

Differential Revision: D13531287

Pulled By: ezyang

fbshipit-source-id: 1512da7d660dc413688d99264e6434897c3ac78c

5 years agoadd denormal options (ftz and daz)
Jongsoo Park [Fri, 21 Dec 2018 01:01:53 +0000 (17:01 -0800)]
add denormal options (ftz and daz)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15423

Reviewed By: yinghai

Differential Revision: D13526340

fbshipit-source-id: de2ecc717b4f778f33a8bf940ed144dbb230c7a8

5 years agocollect_env fix (#15447)
surgan12 [Fri, 21 Dec 2018 00:53:49 +0000 (16:53 -0800)]
collect_env fix (#15447)

Summary:
fixes #15214
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15447

Differential Revision: D13531523

Pulled By: ezyang

fbshipit-source-id: 8f24f5ae9f3e78f6c5c9ee702ba14faca7aa297a

5 years agoRemove unused field in jit script module deserializer (#15439)
Lu Fang [Fri, 21 Dec 2018 00:14:16 +0000 (16:14 -0800)]
Remove unused field in jit script module deserializer (#15439)

Summary:
A little bit of cleanup.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15439

Reviewed By: zrphercule

Differential Revision: D13532015

Pulled By: houseroad

fbshipit-source-id: 2fb1e01fc28549c7e78af6c65ee68339950bc7da

5 years agoRevert D13494873: [pytorch][PR] Fixing ONNX export of logical ops to have correct...
Edward Yang [Thu, 20 Dec 2018 23:44:09 +0000 (15:44 -0800)]
Revert D13494873: [pytorch][PR] Fixing ONNX export of logical ops to have correct output datatype

Differential Revision:
D13494873

Original commit changeset: 069d2f956a5a

fbshipit-source-id: 80ef10b2eb623a63da51dc2e4874f2ee446f426d

5 years agoFix ASAN div by zero error in rotated GenerateProposals op (#15415)
Viswanath Sivakumar [Thu, 20 Dec 2018 23:33:44 +0000 (15:33 -0800)]
Fix ASAN div by zero error in rotated GenerateProposals op (#15415)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15415

Was introduced in D13429770

Reviewed By: SuperIRabbit

Differential Revision: D13524114

fbshipit-source-id: a890eb3b97c24952c361155d1432a801499f4ddd

5 years agoTensor construction codemod(ResizeLike) - 7/7 (#15087)
Jerry Zhang [Thu, 20 Dec 2018 23:28:12 +0000 (15:28 -0800)]
Tensor construction codemod(ResizeLike) - 7/7 (#15087)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15087

Codemod generated with clangr shard mode, 25 files per diff,
motivation: https://github.com/pytorch/pytorch/pull/12407

Reviewed By: ezyang

Differential Revision: D13419765

fbshipit-source-id: 34d695309a66723281429610a12544598c507d74

5 years agoallow numpy-like boolean-list indexing in pytorch (#14932)
rory [Thu, 20 Dec 2018 23:18:39 +0000 (15:18 -0800)]
allow numpy-like boolean-list indexing in pytorch (#14932)

Summary:
Suggested fix to issue #6773; the fix allows numpy-like boolean-list indexing in PyTorch.
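A minimal illustration of the numpy-parity behavior:

```
import torch

x = torch.arange(4)                # tensor([0, 1, 2, 3])
mask = [True, False, True, False]  # a plain Python list of bools
x[mask]                            # now matches numpy: tensor([0, 2])
```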
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14932

Differential Revision: D13398795

Pulled By: ezyang

fbshipit-source-id: 67f8daf9829db2550ff76d2bde673be6dd2708cd

5 years agoDoc improvement on DDP (#15440)
Teng Li [Thu, 20 Dec 2018 22:46:01 +0000 (14:46 -0800)]
Doc improvement on DDP (#15440)

Summary:
I noticed that some users don't even know we have this support. Adding it to the docs.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15440

Differential Revision: D13531045

Pulled By: teng-li

fbshipit-source-id: 9757c400c0010608758c754df04e603b36035a10

5 years agoFix type annotation error. (#15448)
Edward Yang [Thu, 20 Dec 2018 22:26:23 +0000 (14:26 -0800)]
Fix type annotation error. (#15448)

Summary:
According to mypy, the trailing -> None is mandatory.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15448

Differential Revision: D13532179

Pulled By: ezyang

fbshipit-source-id: e8972f8c9ada4657c518cd7bcd46e489ab8ddf5f

5 years agoAdd launch bounds needed for ROCm 2.0 (#15400)
Johannes M Dieterich [Thu, 20 Dec 2018 22:26:14 +0000 (14:26 -0800)]
Add launch bounds needed for ROCm 2.0 (#15400)

Summary:
ROCm 2.0's compiler requires launch_bounds annotations if flat work group sizes are larger than the default of 256.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15400

Differential Revision: D13531239

Pulled By: ezyang

fbshipit-source-id: c0b40600a8c332823da6c7113c644d8dba424a9c

5 years agoSupport enough of closures to write autograd functions (#15411)
Zachary DeVito [Thu, 20 Dec 2018 22:26:06 +0000 (14:26 -0800)]
Support enough of closures to write autograd functions (#15411)

Summary:
This PR adds enough of the infra for supporting closures (inner script functions) to allow us to express symbolic gradients using them. We do not actually ever run graphs that contain these closures. The symbolic_script infrastructure just extracts them out of the original forward graph and turns them into discrete forward/backward pairs. This cuts down on the type annotations necessary to write forward/backward pairs and aligns closely with the "differentiator" function approach to expressing reverse-mode AD.

Example:

This code:
```
import torch

r = torch.jit.CompilationUnit(
'''
def mul_forward(self, other):
    def backward(grad_output):
        grad_self = (grad_output * other).sum_to_size(self.size())
        grad_other = (grad_output * self).sum_to_size(other.size())
        return grad_self, grad_other
    return self * other, backward
''')

print(r.module.code)
```

Will produce this graph (pretty printed for clarity):

```
def mul_forward(self,
    self: Tensor,
    other: Tensor) -> Tuple[Tensor, Tuple[None, Tuple[Tensor, Tensor]]]:
  backward = (self.__lambda, (other, self))
  return (torch.mul(self, other), backward)

def __lambda(self,
    context: Tuple[Tensor, Tensor],
    grad_output: Tensor) -> Tuple[Tensor, Tensor]:
  other, self, = context
  grad_self = torch.sum_to_size(torch.mul(grad_output, other), torch.size(self))
  grad_other = torch.sum_to_size(torch.mul(grad_output, self), torch.size(other))
  return (grad_self, grad_other)
```

symbolic_script will then do some modifications to remove the unsupported prim::Function node, yielding:

```
def mul_forward(self,
    self: Tensor,
    other: Tensor) -> Tuple[Tensor, Tuple[None, Tuple[Tensor, Tensor]]]:
  return (torch.mul(self, other), (other, self))

def backward(self,
    context: Tuple[Tensor, Tensor],
    grad_output: Tensor) -> Tuple[Tensor, Tensor]:
  other, self, = context
  grad_self = torch.sum_to_size(torch.mul(grad_output, other), torch.size(self))
  grad_other = torch.sum_to_size(torch.mul(grad_output, self), torch.size(other))
  return (grad_self, grad_other)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15411

Differential Revision: D13523340

Pulled By: zdevito

fbshipit-source-id: 4d4a269460e595b16802c00ec55ae00e3e682d49

5 years agoAdding CUDA version for C2 operators generate proposals and nms (#13694)
hbraun@nvidia.com [Thu, 20 Dec 2018 22:24:27 +0000 (14:24 -0800)]
Adding CUDA version for C2 operators generate proposals and nms (#13694)

Summary:
Related to issue #13684
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13694

Reviewed By: wat3rBro

Differential Revision: D13017791

Pulled By: newstzpz

fbshipit-source-id: 4bdc58e474d8e1f6cd73a02bf51f91542a2b9d0b

5 years agoAdd at::one_hot (#15208)
Gao, Xiang [Thu, 20 Dec 2018 22:09:09 +0000 (14:09 -0800)]
Add at::one_hot (#15208)

Summary: Closes: https://github.com/pytorch/pytorch/issues/15060
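A quick sketch, assuming the Python binding surfaces as `torch.nn.functional.one_hot`:

```
import torch
import torch.nn.functional as F

F.one_hot(torch.tensor([0, 2, 1]), num_classes=3)
# tensor([[1, 0, 0],
#         [0, 0, 1],
#         [0, 1, 0]])
```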

Differential Revision: D13528014

Pulled By: ezyang

fbshipit-source-id: 5a18689a4c5638d92f9390c91517f741e5396293

5 years agoExtract arguments to its own file and pass arguments to ios apps (#15413)
Fei Sun [Thu, 20 Dec 2018 21:24:01 +0000 (13:24 -0800)]
Extract arguments to its own file and pass arguments to ios apps (#15413)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15413

In order to pass arguments to the iOS app, we need to extract the arguments
into their own file. Also, in the iOS app, do not use benchmark.json, which
parses the arguments.

This is an incompatible change; we need to add a hot fix to the tests.

Reviewed By: llyfacebook

Differential Revision: D13523240

fbshipit-source-id: b559cc7f52d8f50ee206a7ff8d7b59292d855197

5 years agoFixing ONNX export of logical ops to have correct output datatype (#15185)
Spandan Tiwari [Thu, 20 Dec 2018 20:24:42 +0000 (12:24 -0800)]
Fixing ONNX export of logical ops to have correct output datatype (#15185)

Summary:
Currently the PyTorch ONNX exporter exports the logical ops (`lt`, `gt`, `le`, `ge`, `eq`) with the output type of the corresponding ONNX ops set to `tensor(uint8)`. But the ONNX spec allows only `tensor(bool)`, which is why models that have these ops fail to load properly.

This issue is captured in https://github.com/pytorch/pytorch/issues/11339. Part of this issue, relating to the allowed input types, has been fixed in the ONNX spec by houseroad. This PR fixes the other part pertaining to the output type.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15185

Differential Revision: D13494873

Pulled By: houseroad

fbshipit-source-id: 069d2f956a5ae9bf0ac2540a32594a31b01adef8

5 years agoMiscellaneous small doc fixes (#15373)
David Riazati [Thu, 20 Dec 2018 20:20:42 +0000 (12:20 -0800)]
Miscellaneous small doc fixes (#15373)

Summary:
This PR makes some small changes for better consistency in our README and
CONTRIBUTING docs
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15373

Differential Revision: D13512753

Pulled By: driazati

fbshipit-source-id: 44398ad1894eef521d5f5acb1d06acaad67728cf

5 years agoExtend README for ATen/native/cpu (#15437)
Edward Yang [Thu, 20 Dec 2018 19:14:21 +0000 (11:14 -0800)]
Extend README for ATen/native/cpu (#15437)

Summary:
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15437

Differential Revision: D13529436

Pulled By: ezyang

fbshipit-source-id: 2e2193d54ea7f7626fe7392e4d0c130c2f87a76f

5 years agoImplementing cuda kernel for tril_indices and triu_indices (#15203)
Shen Li [Thu, 20 Dec 2018 18:21:02 +0000 (10:21 -0800)]
Implementing cuda kernel for tril_indices and triu_indices (#15203)

Summary:
Followup PR of #14904, and the stretch goal of #12653.

Directly calculate coordinates in the original tensor using column index in the result tensor. Every GPU thread takes care of a column (two numbers) in the output tensor.

The implementation detects and handles precision loss when calculating the square root of an `int64_t` variable, and supports tensors with up to `row * column = 2 ^ 59` numbers.

Algorithm details are described in [comments of TensorFactories.cu](https://github.com/pytorch/pytorch/blob/23ddb6f58a1c8a7a660a793f174cf014230176c6/aten/src/ATen/native/cuda/TensorFactories.cu#L109-L255).
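Usage sketch (each output column is one (row, col) coordinate of the lower triangle):

```
import torch

torch.tril_indices(3, 3, device='cuda')  # exercises the new CUDA kernel
# tensor([[0, 1, 1, 2, 2, 2],
#         [0, 0, 1, 0, 1, 2]], device='cuda:0')
```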

zou3519
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15203

Reviewed By: zou3519

Differential Revision: D13517695

Pulled By: mrshenli

fbshipit-source-id: 86b305d22cac08c8962a3b0cf8e9e620b7ec33ea

5 years agoRevert D13498974: [pytorch][PR] [jit] Add self to Python printer reserved words
Edward Yang [Thu, 20 Dec 2018 18:00:09 +0000 (10:00 -0800)]
Revert D13498974: [pytorch][PR] [jit] Add self to Python printer reserved words

Differential Revision:
D13498974

Original commit changeset: 488efb661476

fbshipit-source-id: 3b991bccf4cf2ffdafe70f145aff0ae2837e31f8

5 years agoAdd support for batched pdist (#12302)
Erik Brinkman [Thu, 20 Dec 2018 17:35:08 +0000 (09:35 -0800)]
Add support for batched pdist (#12302)

Summary:
This updates pdist to work for batched inputs, and updates the
documentation to reflect issues raised.

closes #9406
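A sketch of the batched form per this PR's description (shapes illustrative):

```
import torch

x = torch.randn(5, 8, 3)  # a batch of 5 sets of 8 points in R^3
d = torch.pdist(x)        # batched over the leading dim -> shape (5, 28), 28 = 8*7/2
```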
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12302

Reviewed By: ezyang

Differential Revision: D13528485

Pulled By: erikbrinkman

fbshipit-source-id: 63d93a6e1cc95b483fb58e9ff021758b341cd4de

5 years agomulti-dim standard deviation for CUDA. (#14990)
Brennan Vincent [Thu, 20 Dec 2018 16:53:44 +0000 (08:53 -0800)]
multi-dim standard deviation for CUDA. (#14990)

Summary:
This is the CUDA version of #14535 .
It refactors Reduce.cuh to allow more general classes of reductions to be performed -- we no longer assume that the temporary data returned during reduction is just one scalar, and instead allow an arbitrary accumulate type.
We also allow 64-bit indexing when necessary, since in general we will no longer be able to accumulate directly in the output. (In the cases when we can, we continue to split the tensors until they can be addressed with 32-bits, as before).
As an initial use-case, we implement `std` in multiple dimensions.
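A minimal example of the new capability:

```
import torch

x = torch.randn(4, 5, 6, device='cuda')
x.std(dim=(0, 2))  # reduce over multiple dims at once on CUDA -> shape (5,)
```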
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14990

Differential Revision: D13405097

Pulled By: umanwizard

fbshipit-source-id: a56c24dc2fd5326d417632089bd3f5c4f9f0d2cb

5 years agoAdd self to Python printer reserved words (#15318)
David Riazati [Thu, 20 Dec 2018 10:25:20 +0000 (02:25 -0800)]
Add self to Python printer reserved words (#15318)

Summary:
This adds `self` to the list of reserved words, sorts the lines, and prevents the tracer from naming values 'self' (which happens in torch/tensor.py)

Fixes #15240
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15318

Differential Revision: D13498974

Pulled By: driazati

fbshipit-source-id: 488efb661476cdcdb8ecb9cb48942f02e3c1e611

5 years agoPretty printing of C++ modules (#15326)
Peter Goldsborough [Thu, 20 Dec 2018 05:38:00 +0000 (21:38 -0800)]
Pretty printing of C++ modules (#15326)

Summary:
A long-outstanding nicety: pretty printing of C++ modules. E.g.
```
  Sequential sequential(
      Linear(10, 3),
      Conv2d(1, 2, 3),
      Dropout(0.5),
      BatchNorm(5),
      Embedding(4, 10),
      LSTM(4, 5));
std::cout << sequential;
```
prints
```
torch::nn::Sequential(
  (0): torch::nn::Linear(in=10, out=3, with_bias=true)
  (1): torch::nn::Conv2d(input_channels=1, output_channels=2, kernel_size=[3, 3], stride=[1, 1])
  (2): torch::nn::Dropout(rate=0.5)
  (3): torch::nn::BatchNorm(features=5, eps=1e-05, momentum=0.1, affine=true, stateful=true)
  (4): torch::nn::Embedding(count=4, dimension=10)
  (5): torch::nn::LSTM(input_size=4, hidden_size=5, layers=1, dropout=0)
)
```

apaszke ebetica ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15326

Differential Revision: D13518986

Pulled By: goldsborough

fbshipit-source-id: 63bf753672f0e348951de3645208f263581de5fb

5 years agoRestructuring prof dag counters (#13321)
Hassan Eslami [Thu, 20 Dec 2018 05:35:08 +0000 (21:35 -0800)]
Restructuring prof dag counters (#13321)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13321

This diff simply refactors the `ProfDAGCounters` into two:
* `ProfDAGCounters` that gathers stats at runtime.
* `ProfDAGReport` which holds the report from the gathered stats once stats collection is done.

This refactoring allows us to implement `+=` for `ProfDAGReport`, which can be used for aggregating same-net reports on each host.

Reviewed By: donglimm

Differential Revision: D12837988

fbshipit-source-id: 0470c5fd6437f12711cab25a15a12965d79b2a91

5 years agoRemove python_default_init from ATen and use Optional (#15234)
Wanchao Liang [Thu, 20 Dec 2018 05:35:01 +0000 (21:35 -0800)]
Remove python_default_init from ATen and use Optional (#15234)

Summary:
Optional cleanup. This PR removes python_default_init from the yaml files and the code-gen, and utilizes optional types to do the work.

This also fixes the bug in #13149 to correctly adopt as_strided backward.

Fixes #9941
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15234

Differential Revision: D13502044

Pulled By: wanchaol

fbshipit-source-id: 774b61fc4414482cf11d56e22bd0275aefb352a4

5 years agoTensor construction codemod(ResizeLike) - 1/7 (#15073)
Jerry Zhang [Thu, 20 Dec 2018 05:34:36 +0000 (21:34 -0800)]
Tensor construction codemod(ResizeLike) - 1/7 (#15073)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15073

Codemod generated with clangr shard mode, 25 files per diff,
motivation: https://github.com/pytorch/pytorch/pull/12407

Reviewed By: dzhulgakov

Differential Revision: D13419563

fbshipit-source-id: 8c284405fa3a867303216df876ee6b20d8a46551

5 years agoDo not use fork to invoke test scripts in pytorch rocm CI
bddppq [Thu, 20 Dec 2018 05:29:41 +0000 (21:29 -0800)]
Do not use fork to invoke test scripts in pytorch rocm CI

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14600

Differential Revision: D13523937

Pulled By: bddppq

fbshipit-source-id: 1493fdd051283650081d7944bb2bd7f0c4c44990

5 years agoReplace Vec256<T>::size with constexpr method (#15406)
Edward Yang [Thu, 20 Dec 2018 04:31:09 +0000 (20:31 -0800)]
Replace Vec256<T>::size with constexpr method (#15406)

Summary:
Stack:
  #15406 Replace Vec256<T>::size with constexpr method (D13519902)

See Note [constexpr static function to avoid odr-usage compiler bug]
for detailed justification.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15406

Differential Revision: D13523774

Pulled By: ezyang

fbshipit-source-id: c0ab44298bb2ef3d68a66d026fc6bc156a909a6b

5 years agoMake cpuinfo logging less verbose (#15405)
Marat Dukhan [Thu, 20 Dec 2018 04:20:47 +0000 (20:20 -0800)]
Make cpuinfo logging less verbose (#15405)

Summary:
Log only errors in cpuinfo.

Fixes #15401 and #15398
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15405

Differential Revision: D13526251

Pulled By: Maratyszcza

fbshipit-source-id: 4d9eba0912f7b45093bed2e343cd77a151ffa8c4

5 years agoSupport error handling in forked threads (#14523)
James Sun [Thu, 20 Dec 2018 02:51:41 +0000 (18:51 -0800)]
Support error handling in forked threads (#14523)

Summary:
Save error info in the future for the parent thread to pick up. Throw the error
when the thread is the root thread.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14523

Differential Revision: D13251756

Pulled By: highker

fbshipit-source-id: b40f9a45665e1a934743f131ec5e8bad5622ce67

5 years agodefault options for OutputTensorCopyFrom (#15248)
Jerry Zhang [Thu, 20 Dec 2018 02:10:36 +0000 (18:10 -0800)]
default options for OutputTensorCopyFrom (#15248)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15248

OutputTensorCopyFrom takes four arguments: index, a source Tensor, TensorOptions, and whether we want to perform an async call.
We want to provide default options for TensorOptions: (1) default the device to context_.device(); (2) default the dtype to input.dtype(). Users can also explicitly provide these options to override the default values.

The next diff will change the order of the TensorOptions parameter so that users don't need to write down tensor options unless they want to override.

Reviewed By: dzhulgakov

Differential Revision: D13453824

fbshipit-source-id: 87401f81c7c3f9fd3d8936c710e6c2e04a59b689

5 years agoFix Module::copy_into
James Sun [Thu, 20 Dec 2018 01:06:54 +0000 (17:06 -0800)]
Fix Module::copy_into

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15393

Differential Revision: D13519477

Pulled By: highker

fbshipit-source-id: d62928597ec0700b550e7cf481c8febae57b200d

5 years agoadd unpack_outputs to inlineCallTo
Zachary DeVito [Wed, 19 Dec 2018 23:02:13 +0000 (15:02 -0800)]
add unpack_outputs to inlineCallTo

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15382

Differential Revision: D13518844

Pulled By: zdevito

fbshipit-source-id: 981936988080af80629b70bf5f6dfa52ceb09c2f

5 years agoFix documentation (#15372)
Benoit Rostykus [Wed, 19 Dec 2018 22:55:37 +0000 (14:55 -0800)]
Fix documentation (#15372)

Summary:
The current documentation example doesn't compile. This fixes the doc so the example works.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15372

Differential Revision: D13522167

Pulled By: goldsborough

fbshipit-source-id: 5171a5f8e165eafabd9d1a28d23020bf2655f38b

5 years agocomputeChains with nomnigraph (#15366)
Bram Wasti [Wed, 19 Dec 2018 22:31:06 +0000 (14:31 -0800)]
computeChains with nomnigraph (#15366)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15366

Swap the old implementation for one that is slightly easier to understand.

I ran the tests and compared the number of chains against the old algorithm. This one outperforms it on every test, but we have yet to see if that impacts performance at all.

old chain 34 nomnigraph chain 25
old chain 46 nomnigraph chain 34
old chain 228 nomnigraph chain 188
old chain 397 nomnigraph chain 338

Reviewed By: ilia-cher

Differential Revision: D13057451

fbshipit-source-id: ccd050bfead6eb94ab9c7b0a70b09a22c2b9e499

5 years agoRefactor dataloader.py (#15331)
SsnL [Wed, 19 Dec 2018 20:26:44 +0000 (12:26 -0800)]
Refactor dataloader.py (#15331)

Summary:
Same as #14668, and was approved there.

ailzhang, please apply this patch to Horizon's `data_streamer.py`: https://gist.github.com/SsnL/020fdb3d6b7016d81b6ba1d04cc41459 Thank you!

Below is the original description at #14668:

As I am working on tasks in https://github.com/pytorch/pytorch/issues/13023, I realized how unreadable the code is because all functions to be run in multiprocessing must be at top global level. Adding more functionalities to `dataloader.py` will only make things worse.

So in this PR, I refactor `dataloader.py` and move much of it into `data._utils`. E.g., the `_worker_loop` and related methods are now in `data._utils.worker`, signal handling code in `data._utils.signal_handling`, collating code in `data._utils.collate`, etc. This split, IMHO, makes code much clearer. I will base my future changes to DataLoader on top of this.

No functionality is changed, except that I added `torch._six.queue`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15331

Reviewed By: yf225

Differential Revision: D13503120

Pulled By: ailzhang

fbshipit-source-id: 94df16b4d80ad1102c437cde0d5a2e62cffe1f8e

5 years agoRename potrs to cholesky_solve (#15334)
vishwakftw [Wed, 19 Dec 2018 20:11:49 +0000 (12:11 -0800)]
Rename potrs to cholesky_solve (#15334)

Summary:
Changelog:
- Renames `potrs` to `cholesky_solve` to remain consistent with Tensorflow and Scipy (not really, they call their function chol_solve)
- Default argument for upper in cholesky_solve is False. This will allow a seamless interface between `cholesky` and `cholesky_solve`, since the `upper` argument in both functions is the same.
- Rename all tests
- Create a tentative alias for `cholesky_solve` under the name `potrs`, and add deprecated warning to not promote usage.
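The seamless pairing described above, as a sketch:

```
import torch

A = torch.randn(3, 3)
A = A @ A.t() + 3 * torch.eye(3)  # make symmetric positive-definite
b = torch.randn(3, 2)

L = torch.cholesky(A)             # upper=False by default
x = torch.cholesky_solve(b, L)    # same upper=False default, no flag juggling
torch.allclose(A @ x, b)          # True
```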
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15334

Differential Revision: D13507724

Pulled By: soumith

fbshipit-source-id: b826996541e49d2e2bcd061b72a38c39450c76d0

5 years agocentralize side effects ops as node method (#15188)
Elias Ellison [Wed, 19 Dec 2018 18:45:32 +0000 (10:45 -0800)]
centralize side effects ops as node method (#15188)

Summary:
A number of different passes rely on whether a node has side effects. This centralizes the list of side effectful ops in one place.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15188

Differential Revision: D13508438

Pulled By: eellison

fbshipit-source-id: 2143e782b787731ce007b6dcd50cbde30e1b8dd0

5 years agoOptional ScalarType support for native functions & JIT (#15154)
Tugrul Ates [Wed, 19 Dec 2018 18:40:48 +0000 (10:40 -0800)]
Optional ScalarType support for native functions & JIT (#15154)

Summary:
For #6593 and #9515

This completes the support for optional<ScalarType> in native, JIT and autograd.

Note: Mostly following the existing implementation for optional<Scalar> that was added in https://github.com/pytorch/pytorch/pull/12582.

This PR introduces a way to make functions accept an optional dtype and it will unblock #9515 by allowing the `dtype` param for type promotion interface:
```
func: name(inputs, *, ScalarType? dtype=None, Casting casting=same_kind)
```

An alternative approach could have been using `ScalarType::Undefined` for the same purpose but without optional, though it would have been a bit hacky.
```
func: name(inputs, *, ScalarType dtype=Undefined, Casting casting=same_kind)
```

Here's an example use of this in action: https://github.com/pytorch/pytorch/pull/15133/commits/971f69eac69101955ed90078b44dab975d37a4f7

There are already a bunch of native functions that were getting optional `dtype` through function overloading. https://github.com/pytorch/pytorch/pull/15133 is the attempt to migrate all of those. I will send those changes separately after this since some functions (e.g. sum) need quite a bit of change in the codebase. See the commits over there.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15154

Differential Revision: D13457760

Pulled By: tugrulates

fbshipit-source-id: 706134f0bd578683edd416b96329b49a1ba8ab48

5 years agoImplement 'to' on ScriptModules (#15340)
vfdev-5 [Wed, 19 Dec 2018 18:34:37 +0000 (10:34 -0800)]
Implement 'to' on ScriptModules (#15340)

Summary:
Following #6008
Fixes "Implement 'to' on ScriptModules #7354"

cc zdevito
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15340

Differential Revision: D13506646

Pulled By: zdevito

fbshipit-source-id: 318fea2e8e51a37ce9844efa4c8db67d45a66317

5 years agoUpdate cpuinfo submodule (#15385)
Marat Dukhan [Wed, 19 Dec 2018 15:24:27 +0000 (07:24 -0800)]
Update cpuinfo submodule (#15385)

Summary:
Pull cpuinfo changes that should make it work on AWS Lambda servers (which don't have `/sys/devices/system/cpu/{possible,present}` files, and probably don't mount sysfs at all).

I'm not 100% sure it will fix the issue, but getting this update in would make it easier for users to test using a nightly build.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15385

Reviewed By: soumith

Differential Revision: D13517467

Pulled By: Maratyszcza

fbshipit-source-id: e8e544cd1f9dad304172ebb7b6ba7a8ad7d34e66

5 years agoUpdating submodules
svcscm [Wed, 19 Dec 2018 07:33:54 +0000 (23:33 -0800)]
Updating submodules

Reviewed By: cdelahousse

fbshipit-source-id: dfbdae40e505c46cd64751c6ec107c84f9434131

5 years agorace condition fix of using mutable_data inside OPENMP region for batched matmul...
Jianyu Huang [Wed, 19 Dec 2018 07:17:11 +0000 (23:17 -0800)]
race condition fix of using mutable_data inside OPENMP region for batched matmul (#15371)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15371

Similar to D13387692:

Never call mutable_data from an OpenMP region!!!

Reviewed By: jspark1105

Differential Revision: D13511259

fbshipit-source-id: 100812d2a547c0a1d5018749d5fdc88162375673

5 years agoadd whitelisted clang-format checks (#15254)
Michael Suo [Wed, 19 Dec 2018 06:31:51 +0000 (22:31 -0800)]
add whitelisted clang-format checks (#15254)

Summary:
This PR adds clang-format automation:
- It only checks on whitelisted files, so we can enable incrementally without noise
- There is a pre-commit hook provided that will do the same check, plus prompt users to apply the clang-format changes (no change is made without the user agreeing).

My plan is to migrate over whole files at a time, clang-formatting them and then adding them to the whitelist. Doing it this way should avoid too many merge pains (the most you'll have to do is run clang-format on the affected file before rebasing).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15254

Differential Revision: D13515888

Pulled By: suo

fbshipit-source-id: d098eabcc97aa228c4dfce8fc096c3b5a45b591f

5 years agobuild fix
Zachary DeVito [Wed, 19 Dec 2018 06:08:28 +0000 (22:08 -0800)]
build fix

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15384

Differential Revision: D13515708

Pulled By: zdevito

fbshipit-source-id: ea077cfec30edf41b85dc83c0a969d1146434145

5 years agoSplit up compiler.cpp (#15355)
Zachary DeVito [Wed, 19 Dec 2018 03:41:00 +0000 (19:41 -0800)]
Split up compiler.cpp (#15355)

Summary:
This separates the different parts of compiler.cpp to make their relationship more clear. In particular it adds:

* sugared_value.{h,cpp} - all the public SugaredValues that the compiler defines and a few that were inside compiler.cpp
* type_parser.{h, cpp} - Turns TreeRef's defining types into TypePtr
* schema_matching.{h, cpp} - infrastructure for matching arguments against overloaded schema and emitting builtin operators with a particular schema.
Retains:
* compiler.{h, cpp} - now responsible simply for the `defineMethodsInModule` infrastructure.

Some utility functions like inlineCallTo have moved to ir.h.

The only thing that is not a move is some changes in module.h/cpp that remove multiple returns from `Method::emit_call_to`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15355

Reviewed By: suo, wanchaol

Differential Revision: D13507524

Pulled By: zdevito

fbshipit-source-id: 69ec936a9ff1a383c12a883616346b219c72e393

5 years agoAutograd using torchscript (#14604)
Ailing Zhang [Wed, 19 Dec 2018 02:56:06 +0000 (18:56 -0800)]
Autograd using torchscript (#14604)

Summary:
This PR enables autodiff to use the forward/backward graph compiled from python code, instead of using symbolic gradients(modifying the original graph directly).

We put the map in a separate .h file for now to wait for the native_functions.yaml and derivatives.yaml merge. This should ideally go into native_functions.yaml eventually.

This PR should be enough to unblock us for now, we can start writing gradients for aten functions in python.

Differential Revision: D13494635

Pulled By: ailzhang

fbshipit-source-id: f8d51a15243ac46afd09d930c573ccdfcd9fdaaf

5 years agoMinor clean up for test_jit (#15368)
Wanchao Liang [Wed, 19 Dec 2018 02:23:55 +0000 (18:23 -0800)]
Minor clean up for test_jit (#15368)

Summary:
* remove None args in functional tests
* remove some expect files that are not necessary
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15368

Differential Revision: D13512349

Pulled By: wanchaol

fbshipit-source-id: 304cffff966487d15c373057ae8ad114ef8aa7f9

5 years agoAdd RNNCell modules to Script standard library (#14695)
David Riazati [Wed, 19 Dec 2018 01:25:51 +0000 (17:25 -0800)]
Add RNNCell modules to Script standard library (#14695)

Summary:
Adds RNNCell modules to script standard lib

cc apaszke for argument_spec changes
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14695

Differential Revision: D13467680

Pulled By: driazati

fbshipit-source-id: 13a14da87714325cc4c3d49e5fde8a850d5d757b

5 years agoRemove fully qualified weak script names (#15364)
David Riazati [Wed, 19 Dec 2018 00:44:04 +0000 (16:44 -0800)]
Remove fully qualified weak script names (#15364)

Summary:
Cleanup to make references to `weak_script` consistent across codebase
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15364

Differential Revision: D13509676

Pulled By: driazati

fbshipit-source-id: 93dbbbe57e9b9b6587895f3cc6fac678babd21de

5 years agoRedefine scheduler to set learning rate using recursive formula (#14010)
Chandler Zuo [Wed, 19 Dec 2018 00:40:23 +0000 (16:40 -0800)]
Redefine scheduler to set learning rate using recursive formula (#14010)

Summary:
Modified step_lr for StepLR, MultiStepLR, ExponentialLR and CosineAnnealingLR. In this way, multiple schedulers can be used simultaneously to modify the learning rates.

Related issue: https://github.com/pytorch/pytorch/issues/13022

Added unit tests combining multiple schedulers.
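A sketch of combining two schedulers under the recursive formula (hypothetical training loop):

```
import torch

opt = torch.optim.SGD([torch.nn.Parameter(torch.randn(2))], lr=1.0)
step = torch.optim.lr_scheduler.StepLR(opt, step_size=2, gamma=0.5)
exp = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.9)

for epoch in range(4):
    # ... train ...
    # each scheduler now multiplies the *current* lr, so the two
    # compose instead of clobbering each other
    step.step()
    exp.step()
    print(opt.param_groups[0]['lr'])
```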
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14010

Reviewed By: ezyang

Differential Revision: D13494941

Pulled By: chandlerzuo

fbshipit-source-id: 7561270245639ba1f2c00748f8e4a5f7dec7160c

5 years agoReplace resize_dim() with set_sizes_and_strides() in (#15348)
Ruiyang Liu [Wed, 19 Dec 2018 00:28:14 +0000 (16:28 -0800)]
Replace resize_dim() with set_sizes_and_strides() in (#15348)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15348

We have a function resize_dim() on TensorImpl in c10/core/TensorImpl.h which lets you change the dimensionality of a tensor, resizing both sizes and strides. Unfortunately, this API is fairly easy to misuse, because it fills in the new entries with garbage when you size it larger. We want to refactor the call sites to use set_sizes_and_strides() instead, so that there is never an intermediate tensor state where the sizes/strides don't make sense. In this diff, resize_dim() is
replaced with set_sizes_and_strides() in aten/src/TH/THTensor.hpp.

Reviewed By: ezyang

Differential Revision: D13505512

fbshipit-source-id: 193bab89f0018c13ca07488be336d8e967746b76

5 years agoMinor cleanup for TestFuser tests (#15134)
Richard Zou [Wed, 19 Dec 2018 00:13:39 +0000 (16:13 -0800)]
Minor cleanup for TestFuser tests (#15134)

Summary:
Changelog:
- change some expect tests that didn't have to be expect tests,
  instead use self.assertAllFused
- Some of the fuser tests weren't using self.assertAllFused.
- Minor test renames

cc apaszke
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15134

Differential Revision: D13507481

Pulled By: zou3519

fbshipit-source-id: dd0788530a60bb5ed2f42b961fae3db2b4404b64

5 years agoadd dense vector to id_list operator (#15090)
Bill Li [Wed, 19 Dec 2018 00:07:55 +0000 (16:07 -0800)]
add dense vector to id_list operator (#15090)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15090

As title.
Step 2 of the linked task.

Reviewed By: ellie-wen

Differential Revision: D13425977

fbshipit-source-id: f3538ed68f42470ba39c5b779af764d4a5591a9d

5 years agofix clang-tidy script for python 3
Michael Suo [Tue, 18 Dec 2018 23:01:10 +0000 (15:01 -0800)]
fix clang-tidy script for python 3

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15360

Differential Revision: D13509668

Pulled By: suo

fbshipit-source-id: a3448a115eaac8dd4c3f179901a23bdbc5098408

5 years agoPort torch.linspace to ATen and parallelize it on CPU.
Gregory Chanan [Tue, 18 Dec 2018 22:56:43 +0000 (14:56 -0800)]
Port torch.linspace to ATen and parallelize it on CPU.

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15320

Reviewed By: ezyang

Differential Revision: D13498995

Pulled By: gchanan

fbshipit-source-id: fba655d51d978fffaa53a5e4cae4a99ebfb0eddc

5 years agoAdd (Un)Fold modules to standard library (#14759)
David Riazati [Tue, 18 Dec 2018 19:43:45 +0000 (11:43 -0800)]
Add (Un)Fold modules to standard library (#14759)

Summary:
Depends on #14597 for the corresponding aten ops.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14759

Differential Revision: D13325356

Pulled By: driazati

fbshipit-source-id: 99e39449c1ccfa293de05672c31a11e580bdd11f

5 years agoFix the (reduce)min and (reduce)max ONNX exporting (#15241)
Lu Fang [Tue, 18 Dec 2018 19:28:04 +0000 (11:28 -0800)]
Fix the (reduce)min and (reduce)max ONNX exporting (#15241)

Summary:
Max and reducemax are smashed together; we need to support the one-input case.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15241

Reviewed By: yinghai

Differential Revision: D13473312

Pulled By: houseroad

fbshipit-source-id: 9b8c847286a2631b006ca900271bc0d26574101a

5 years agoMethod returns a single argument (#15289)
Zachary DeVito [Tue, 18 Dec 2018 18:27:26 +0000 (10:27 -0800)]
Method returns a single argument (#15289)

Summary:
This PR changes Method (just Method not all graphs) to always have a single
return argument.

This is part 1 in a set of changes that will enable us to have better handling of early return statements.
The simplification that this change provides greatly reduces the work for the next step.

This change makes it so that Method and Python handle multiple returns in the same way:
* 0 - None
* 1 - <single value>
* many - Tuple[...]

The result is that a lot of special-case handling in compiler.cpp and its
bindings can be removed. It also fixes several bugs in return handling,
including one where return values were not always checked against their
attributed values.

Notes:
* inferTypeFrom is renamed to be more accurate and discourage use.
* This has uncovered some bugs in other components, which are noted in
  the diff.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15289

Differential Revision: D13481649

Pulled By: zdevito

fbshipit-source-id: 0e2242a40bb28cca2d0e8be48bede96195e4858c

5 years agocaffe2 mobile opengl (#15322)
Jerry Zhang [Tue, 18 Dec 2018 16:17:56 +0000 (08:17 -0800)]
caffe2 mobile opengl (#15322)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15322

caffe2 mobile OpenGL code is not used; deleting it to reduce complications when we perform other changes

Reviewed By: Maratyszcza

Differential Revision: D13499943

fbshipit-source-id: 6479f6b9f50f08b5ae28f8f0bc4a1c4fc3f3c3c2

5 years agoRevert D13383102: [pytorch][PR] Upgrade MKL-DNN to version 0.17
Edward Yang [Tue, 18 Dec 2018 15:35:43 +0000 (07:35 -0800)]
Revert D13383102: [pytorch][PR] Upgrade MKL-DNN to version 0.17

Differential Revision:
D13383102

Original commit changeset: c434f0e0ddff

fbshipit-source-id: 690f46ca0710954fa591a5ea77535e9759db4de5

5 years agoUpdating submodules
svcscm [Tue, 18 Dec 2018 05:23:30 +0000 (21:23 -0800)]
Updating submodules

Reviewed By: cdelahousse

fbshipit-source-id: 4bf66581d07d839f459869bc9c6428011063cc5b

5 years agoimprove script/no script save error (#15321)
Zachary DeVito [Tue, 18 Dec 2018 05:11:30 +0000 (21:11 -0800)]
improve script/no script save error (#15321)

Summary:
Improves the error message for #15116
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15321

Differential Revision: D13499379

Pulled By: zdevito

fbshipit-source-id: b8dc0a83efabff74199f4aab2ee98aa41c42608b

5 years agoAllow tracing with fork/wait (#15184)
James Sun [Tue, 18 Dec 2018 04:28:00 +0000 (20:28 -0800)]
Allow tracing with fork/wait (#15184)

Summary:
There is still a limitation on this: if a script module is somewhere
in the trace, the inputs/outputs can only be tensors or tuples of
tensors.

resolves #15052
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15184

Differential Revision: D13457691

Pulled By: highker

fbshipit-source-id: 8fe46afc41357a0eb8eadd83f687b31d074deb0e

5 years ago[TensorIterator] fixing mean to output correct result for half precision (#14878)
Jie [Tue, 18 Dec 2018 04:08:15 +0000 (20:08 -0800)]
[TensorIterator] fixing mean to output correct result for half precision (#14878)

Summary:
Fixes #12115.

mean is calculated in two steps: sum()/numel(). For half precision, data gets
cast back to half after sum().
We fused the division into the reduction kernel by adding pre_op/post_op.

This allows torch.ones(65536).cuda().half().mean() to return the correct
result.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14878

Differential Revision: D13491159

Pulled By: soumith

fbshipit-source-id: e83802e1628b6d2615c45e18d7acf991d143a09e

5 years agoReenable OpenMP by reverting the following two commits. (#15315)
Edward Yang [Tue, 18 Dec 2018 03:50:10 +0000 (19:50 -0800)]
Reenable OpenMP by reverting the following two commits. (#15315)

Summary:
Revert "Put back linker flag for OpenMP to prevent build break on ppc64le (#14569)"

This reverts commit a84e873bb156080ea76ab182171b1f3b4d5395f6.

Revert "Update OpenMP cmake setting for xcode 9 compiler(AppleClang 9.0) (#14473)"

This reverts commit 8901935ad42fe9bf093d1106ea43606008a4024d.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15315

Differential Revision: D13495852

Pulled By: ezyang

fbshipit-source-id: bcd3f60088b14831c53d3c171f10cd1ab6b35dee

5 years agoFix _apply in nn.Module (#15305)
Peter Goldsborough [Tue, 18 Dec 2018 00:08:05 +0000 (16:08 -0800)]
Fix _apply in nn.Module (#15305)

Summary:
Fixes an issue that arose from https://github.com/pytorch/pytorch/pull/13481 where `.shared_memory()` couldn't be called. Effectively undoes all changes to `nn.Module` from that PR and solves the relevant problem in a different way (the goal was to be able to call `._apply()` on the Python wrapper for a C++ module).

soumith
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15305

Differential Revision: D13493937

Pulled By: goldsborough

fbshipit-source-id: 4cb8687f90fc8709a536c5e7eacd0dc8edf6f750

5 years agoAdd a correctness check for C++ types to custom operators (#15247)
Peter Goldsborough [Tue, 18 Dec 2018 00:07:14 +0000 (16:07 -0800)]
Add a correctness check for C++ types to custom operators (#15247)

Summary:
The JIT uses `int64_t` for its integer type and `double` for its floating point type, but users quite often want to write `int` or `float` and that currently fails in not-so-nice ways for custom ops. This PR adds a simple `static_assert` to catch these common failure cases.

zdevito
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15247

Differential Revision: D13493941

Pulled By: goldsborough

fbshipit-source-id: c1cd0d10ab5838c75f167c0bdb57e45a0bc1344e

5 years agocaffe2/python/task: added __repr__ methods to all task definitions (#15250)
Tristan Rice [Mon, 17 Dec 2018 23:59:45 +0000 (15:59 -0800)]
caffe2/python/task: added __repr__ methods to all task definitions (#15250)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15250

This adds `__repr__` methods to all of the classes under task.py. This makes the objects much easier to interact with when using them in an interactive manner, such as in a Jupyter notebook.

The default `__repr__` method just returns the object ID which is very unhelpful.

Reviewed By: hanli0612

Differential Revision: D13475758

fbshipit-source-id: 6e1b166ec35163b9776c797b6a2e0d002560cd29

5 years agoPort nn fold and unfold to c++
Roy Li [Mon, 17 Dec 2018 23:44:23 +0000 (15:44 -0800)]
Port nn fold and unfold to c++

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14597

Reviewed By: ezyang

Differential Revision: D13272227

fbshipit-source-id: 6eccab5ff5830a977398a96393b778095120edc6

5 years agoAllow future type parsing
James Sun [Mon, 17 Dec 2018 23:36:28 +0000 (15:36 -0800)]
Allow future type parsing

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14887

Differential Revision: D13490984

Pulled By: highker

fbshipit-source-id: 165fe995867be273793f983154aa6cbce13e4396

5 years agoRemoving BUILD_C10_EXPERIMENTAL_OPS option and unglobbing experimental/c10d ops
Jesse Hellemn [Mon, 17 Dec 2018 23:27:53 +0000 (15:27 -0800)]
Removing BUILD_C10_EXPERIMENTAL_OPS option and unglobbing experimental/c10d ops

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15064

Reviewed By: orionr

Differential Revision: D13474801

Pulled By: pjh5

fbshipit-source-id: 9d3664c3a3a1b6c2d9f083f8476fe3b037296b98