SsnL [Wed, 30 Jan 2019 16:38:49 +0000 (08:38 -0800)]
add new build files to gitignore; test that build doesn't leave repo dirty
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16441
Differential Revision:
D13880053
Pulled By: ezyang
fbshipit-source-id:
0171f42438efdd651b6af22e521b80e85b12681c
Tim Khatkevich [Wed, 30 Jan 2019 11:43:48 +0000 (03:43 -0800)]
Fallback support for more operators (#16456)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16456
Adding fallbacks for more operators and fixing ifndef for expand_op.h
Reviewed By: yinghai
Differential Revision:
D13845382
fbshipit-source-id:
b7c5b7f7f176707b9ddffade139562a8085967ed
Lu Fang [Wed, 30 Jan 2019 09:13:16 +0000 (01:13 -0800)]
Fix the dropout onnx symbolic, and ensure all exported models in test_operators.py are in eval mode (#16547)
Summary:
In eval mode, skip dropout operator in onnx exporter.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16547
Reviewed By: houseroad
Differential Revision:
D13877136
Pulled By: dzhulgakov
fbshipit-source-id:
c366da156f83677bcf4989b79166aae5b6c36125
Xiaomeng Yang [Wed, 30 Jan 2019 08:01:15 +0000 (00:01 -0800)]
Separate level1 elementwise functions from math (#16397)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16397
Separate level1 elementwise functions from math
i-am-not-moving-c2-to-c10
Reviewed By: houseroad
Differential Revision:
D13830626
fbshipit-source-id:
e6e672647076dab8b3b24be181f580a1486250c9
Sebastian Messmer [Wed, 30 Jan 2019 07:29:54 +0000 (23:29 -0800)]
Fix includes for ATen/core/stack.h (#16462)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16462
This file was moved; now we change the includes to point to the new location and remove the proxy header.
Reviewed By: ezyang
Differential Revision:
D13847279
fbshipit-source-id:
4617d52fdcfe785cb7b2154460a6686c437abd8b
Sebastian Messmer [Wed, 30 Jan 2019 02:02:21 +0000 (18:02 -0800)]
Add test case for calling c10 ops from pytorch
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16062
Reviewed By: ezyang
Differential Revision:
D13628955
fbshipit-source-id:
f6ed3f07db2675bd9ae9251da990ca7b8c963717
Sebastian Messmer [Wed, 30 Jan 2019 02:02:21 +0000 (18:02 -0800)]
Kernel gets Stack* instead of ArrayRef<IValue> (#16282)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16282
This changes the core kernel abstraction to be a function taking a stack, popping its arguments from the stack and pushing results to the stack,
instead of getting arguments as ArrayRef<IValue> and returning an output IValue.
Caffe2 operators need to have a way to pass in preallocated output tensors.
The convention for them is to get all inputs *and* outputs on the stack and also return all of them, i.e. a caffe2 op will always have inputs == outputs.
This will probably change in later diffs towards making the outputs in-arguments optional in the JIT schema.
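For illustration, a minimal sketch of what a kernel written against this stack convention looks like (toy types below, not the real `c10::IValue` / `torch::jit::Stack`; the actual helpers and signatures differ):
```
// Illustrative sketch only (not the real c10/JIT types): the kernel pops its
// arguments from a stack and pushes its results back, instead of taking
// ArrayRef<IValue> and returning a single IValue.
#include <cassert>
#include <utility>
#include <vector>

struct IValue {            // toy stand-in for c10::IValue
  double number = 0.0;
};

using Stack = std::vector<IValue>;  // the JIT stack is a vector of IValues

// New-style kernel: inputs are popped off the stack, outputs pushed back on.
void add_kernel(Stack* stack) {
  assert(stack->size() >= 2);
  IValue b = std::move(stack->back()); stack->pop_back();
  IValue a = std::move(stack->back()); stack->pop_back();
  stack->push_back(IValue{a.number + b.number});
}

int main() {
  Stack stack{{1.5}, {2.5}};
  add_kernel(&stack);  // stack now holds the single result 4.0
  assert(stack.size() == 1 && stack.back().number == 4.0);
}
```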
Reviewed By: ezyang
Differential Revision:
D13792335
fbshipit-source-id:
e9cc2b5e438cc4653e1f701633a154b92b604932
xuzhu [Wed, 30 Jan 2019 01:32:09 +0000 (17:32 -0800)]
Chunk dataset implementation (#15932)
Summary:
This PR contains the implementation of chunk dataset, with the API proposed in PR https://github.com/pytorch/pytorch/pull/15562
A chunk dataset is derived from StatefulDataset. It utilizes worker threads to prefetch chunk data, split it into batches, and cache them in a queue. When get_batch is called from the dataloader, batch data is retrieved from the queue, and data from new chunks is pushed in for subsequent batches.
Chunk dataset uses two samplers (chunk_sampler and example_sampler) to perform sampling. The chunk_sampler decides which chunk to load, and the example_sampler shuffles the examples inside a specific chunk. More detail on this sampling approach can be found here: http://martin.zinkevich.org/publications/nips2010.pdf
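A conceptual sketch of how the two samplers cooperate (toy code, not the actual torch::data C++ API; `load_chunk` and the type names are made up for illustration):
```
// Conceptual sketch only: a chunk sampler decides which chunk to load next,
// an example sampler shuffles examples within the loaded chunk, and batches
// are pushed into a queue that get_batch() would pop from.
#include <algorithm>
#include <deque>
#include <numeric>
#include <random>
#include <vector>

using Example = int;
using Batch = std::vector<Example>;

std::vector<Example> load_chunk(size_t chunk_index) {
  // Pretend each chunk holds 8 examples.
  return std::vector<Example>(8, static_cast<Example>(chunk_index));
}

int main() {
  std::mt19937 rng{42};
  const size_t num_chunks = 4, batch_size = 4;

  // chunk_sampler: decides the order in which chunks are loaded.
  std::vector<size_t> chunk_order(num_chunks);
  std::iota(chunk_order.begin(), chunk_order.end(), 0);
  std::shuffle(chunk_order.begin(), chunk_order.end(), rng);

  std::deque<Batch> batch_queue;
  for (size_t chunk_index : chunk_order) {
    auto chunk = load_chunk(chunk_index);
    // example_sampler: shuffles examples inside the loaded chunk.
    std::shuffle(chunk.begin(), chunk.end(), rng);
    for (size_t i = 0; i + batch_size <= chunk.size(); i += batch_size) {
      batch_queue.emplace_back(chunk.begin() + i, chunk.begin() + i + batch_size);
    }
  }
  // A real ChunkDataset does this prefetching on worker threads.
}
```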
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15932
Differential Revision:
D13868688
Pulled By: soumith
fbshipit-source-id:
a43000c478ca2a3c64cc84b3626d6b8b1ad9a07e
Zachary DeVito [Wed, 30 Jan 2019 00:36:08 +0000 (16:36 -0800)]
try to get rid of tmp_install (#16414)
Summary:
Rehash of previous attempts. This tries a different approach where we accept the install as specified in cmake (leaving bin/, include/, and lib/ alone), and then adjust the rest of the files to this more standard layout.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16414
Differential Revision:
D13863635
Pulled By: zdevito
fbshipit-source-id:
23725f5c64d7509bf3ca8f472dcdcad074de9828
Gregory Chanan [Tue, 29 Jan 2019 23:32:56 +0000 (15:32 -0800)]
Fix torch.sparse.sum parsing of dim. (#16517)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/16501.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16517
Differential Revision:
D13865322
Pulled By: gchanan
fbshipit-source-id:
fa0ac37a9e7b8f19a5bdf75e5771128e48c41612
Pieter Noordhuis [Tue, 29 Jan 2019 23:31:09 +0000 (15:31 -0800)]
Make Store::setTimeout take milliseconds (#16278)
Summary:
This is particularly useful when using a c10d::Store from tests.
cc jgehring
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16278
Reviewed By: janewangfb
Differential Revision:
D13866271
Pulled By: pietern
fbshipit-source-id:
c8670b5f4ebd5cd009f2cabbe46cc17a9237d775
Edward Yang [Tue, 29 Jan 2019 22:16:44 +0000 (14:16 -0800)]
Back out "Delete duplicate copy of THCCachingAllocator." (#16510)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16510
This diff was supposed to be memory usage neutral, but based on
some internal flows involving cuDNN, it was not. Reverting pending
further investigation.
Original commit changeset:
03f1ebf7f11c
Reviewed By: xw285cornell
Differential Revision:
D13863610
fbshipit-source-id:
15517e255fd6b0c064b65fb99f0ef19742236cfd
Matthew Brandyberry [Tue, 29 Jan 2019 21:25:35 +0000 (13:25 -0800)]
Fix compare_exchange_weak usage in weak_intrusive_ptr (#16302)
Summary:
In the case of spurious failure, refcount is not incremented -- which leads to underflow once all references are released.
This was discovered when exercising multiprocessing on ppc64le.
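For context, the usual pattern such a fix restores is a retry loop in which the increment only takes effect when the exchange actually succeeds (a generic sketch, not the actual weak_intrusive_ptr code):
```
// Minimal sketch of the pattern: compare_exchange_weak may fail spuriously,
// so the increment must sit in a retry loop and only be counted on a
// successful exchange.
#include <atomic>
#include <cstdint>

// Try to increment `refcount` only if it is still non-zero (i.e. the object
// is still alive); return false if the count already dropped to zero.
bool try_incref(std::atomic<uint32_t>& refcount) {
  uint32_t current = refcount.load();
  do {
    if (current == 0) {
      return false;  // object already dead; do not resurrect it
    }
    // On spurious failure, `current` is reloaded and we simply retry;
    // nothing is counted until the exchange really succeeds.
  } while (!refcount.compare_exchange_weak(current, current + 1));
  return true;
}

int main() {
  std::atomic<uint32_t> rc{1};
  return try_incref(rc) ? 0 : 1;
}
```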
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16302
Differential Revision:
D13845435
Pulled By: ezyang
fbshipit-source-id:
8e264fff9dca8152cb12617e3216d5e48acd9557
Lu Fang [Tue, 29 Jan 2019 21:17:59 +0000 (13:17 -0800)]
update of fbcode/onnx to 15c33c945851907411619f599900c3852108e7e3 (#16493)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16493
Previous import was dc75285d4a1cff9618400164dfdb26c5a1bab70a
Included changes:
- **[15c33c9](https://github.com/onnx/onnx/commit/15c33c9)**: Add ppc64le build (#1768) <Chin Huang>
- **[198f840](https://github.com/onnx/onnx/commit/198f840)**: Update Broadcasting.md (#1769) <Verma-Rajat>
- **[60ac95f](https://github.com/onnx/onnx/commit/60ac95f)**: Merge back from release 1.4.1 (#1767) <Raymond Yang>
- **[a683372](https://github.com/onnx/onnx/commit/a683372)**: Bump up version number for v1.4.0 (#1761) (#1763) <Raymond Yang>
- **[dbf3581](https://github.com/onnx/onnx/commit/dbf3581)**: Add TfIdfVectorizer operator to ONNX (#1721) <Dmitri Smirnov>
Reviewed By: zrphercule
Differential Revision:
D13858840
fbshipit-source-id:
1d00f63f265cc6deed965b92ed00c44f547ff03e
Edward Yang [Tue, 29 Jan 2019 21:13:30 +0000 (13:13 -0800)]
Make PyTorch's cmake minimum required version equal to caffe2's. (#16506)
Summary:
Stack:
:black_circle: **#16506 Make PyTorch's cmake minimum required version equal to caffe2's.** [:yellow_heart:](https://our.intern.facebook.com/intern/diff/D13861564/)
Originally authored by JerryShih <bignose1007@gmail.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16506
Differential Revision:
D13863979
Pulled By: ezyang
fbshipit-source-id:
9275739a820ae03ec6eaa41959f9340c9bba8de3
peter [Tue, 29 Jan 2019 20:37:37 +0000 (12:37 -0800)]
More windows fixes towards the code refactor (#16451)
Summary:
Fixes #16446.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16451
Differential Revision:
D13864388
Pulled By: soumith
fbshipit-source-id:
6cb173eafbd3da33c479c56c85aff75e8be4bf35
SsnL [Tue, 29 Jan 2019 20:23:06 +0000 (12:23 -0800)]
Add stack & cat support for CPU Half (#16389)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/6968
Needed for #14705
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16389
Differential Revision:
D13861446
Pulled By: gchanan
fbshipit-source-id:
7b8700b95aaf252d9669693dbddccb2302e58409
peter [Tue, 29 Jan 2019 20:04:56 +0000 (12:04 -0800)]
Add some smoke tests for Windows
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16496
Differential Revision:
D13863489
Pulled By: soumith
fbshipit-source-id:
518003c27a6b788b5a78b58cdb8698f0bb6ce4d8
Thomas Viehmann [Tue, 29 Jan 2019 19:19:51 +0000 (11:19 -0800)]
create type hint stub files for module torch (#12500)
Summary:
We have:
- This is an initial stab at creating a type stub `torch/__init__.pyi`.
- This is only tested on Python 3, since that's the only Python version mypy works on.
- So far, we only aim at doing this for torch functions and torch.Tensor.
- Quite a few methods and functions have to be typed manually. These are done in `torch/__init__.pyi.in`.
For me, PyCharm (the non-paid one) didn't seem to indicate errors in the .pyi when opening it and seemed to be able to get the type hint for the few functions I tried, but I don't use PyCharm for my usual PyTorch activities, so I didn't try this out extensively.
An example of a generated .pyi is at [this gist](https://gist.github.com/ezyang/bf9b6a5fa8827c52152858169bcb61b1).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12500
Differential Revision:
D13695553
Pulled By: ezyang
fbshipit-source-id:
4566c71913ede4e4c23ebc4a72c17151f94e8e21
Edward Yang [Tue, 29 Jan 2019 15:11:47 +0000 (07:11 -0800)]
Revert D13596031: Improve c2-aten tensor interop and add proper testing
Differential Revision:
D13596031
Original commit changeset:
d20b601e06ba
fbshipit-source-id:
dc371697f14b3893a9164380a39e7a49d8d68ecf
Soumith Chintala [Tue, 29 Jan 2019 09:26:22 +0000 (01:26 -0800)]
url download bugfix for URLs served without Content-Length header (#16153)
Summary:
Some HTTP servers don't return Content-Length; account for that.
Fixes: https://github.com/pytorch/pytorch/issues/16152
Differential Revision:
D13858882
Pulled By: soumith
fbshipit-source-id:
e4293e9368ed4c87548d22adec1ce0c25ea4bd8f
Mikhail Zolotukhin [Tue, 29 Jan 2019 08:17:30 +0000 (00:17 -0800)]
Properly screen string literals when dumping JIT IR
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16056
Differential Revision:
D13719444
Pulled By: ZolotukhinM
fbshipit-source-id:
7113ee9328eff6263513476cdf9254a2e1116f4c
Mikhail Zolotukhin [Tue, 29 Jan 2019 08:15:17 +0000 (00:15 -0800)]
Remove dependency on ResourceGuard from IR.h. (#16351)
Summary:
It looks like `WithInsertionPoint` and `WithCurrentScope` can be easily implemented without
`ResourceGuard`; that helps readability and removes one more dependency. Is there anything I'm missing?
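A rough illustration of the idea (the class and member names below are stand-ins, not the real JIT types): such a guard can be a small RAII object that saves the previous state in its constructor and restores it in its destructor, with no generic `ResourceGuard` involved:
```
// Illustrative sketch only; `Graph`, `Node`, and the members are stand-ins.
// The point: a guard like WithInsertionPoint can save the previous insertion
// point on construction and restore it on destruction by itself.
struct Node {};

struct Graph {
  Node* insertion_point = nullptr;
};

class WithInsertionPoint {
 public:
  WithInsertionPoint(Graph& graph, Node* new_point)
      : graph_(graph), previous_(graph.insertion_point) {
    graph_.insertion_point = new_point;
  }
  ~WithInsertionPoint() {
    graph_.insertion_point = previous_;  // restore on scope exit
  }

 private:
  Graph& graph_;
  Node* previous_;
};

int main() {
  Graph g;
  Node n;
  {
    WithInsertionPoint guard(g, &n);  // insertion point is &n inside this scope
  }
  return g.insertion_point == nullptr ? 0 : 1;  // restored afterwards
}
```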
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16351
Differential Revision:
D13821826
Pulled By: ZolotukhinM
fbshipit-source-id:
b203200b345fb5508a97dc8656e6f51cde4cc21f
Mikhail Zolotukhin [Tue, 29 Jan 2019 07:44:31 +0000 (23:44 -0800)]
Remove redundant includes from scope.h and attributes.h
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16472
Differential Revision:
D13852553
Pulled By: ZolotukhinM
fbshipit-source-id:
d5634982c2c42e704d9902774a77660e05fd71eb
Dmytro Dzhulgakov [Tue, 29 Jan 2019 07:39:17 +0000 (23:39 -0800)]
Improve c2-aten tensor interop and add proper testing (#15860)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15860
Few changes (which are harder to split in separate diffs, so together):
- make conversions explicit (as they can throw, to avoid surprises); see the sketch after this list
- fix tensor legacy dispatch not initialized when tensor is created on C2 side
- add a bunch of invariants to enforce
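To illustrate the first point (hypothetical names, not the actual interop API): a conversion that can throw is safer as an explicit call than as an implicit conversion operator, because the possible throw stays visible at the call site:
```
// Hypothetical sketch, not the real caffe2/ATen interop API. It shows why a
// conversion that can throw is better exposed as an explicit call than as an
// implicit conversion operator.
#include <stdexcept>

struct ATenTensor { bool defined = true; };

struct Caffe2Tensor {
  bool initialized = false;

  // Explicit: the caller has to write ToATen(), so a potential throw is
  // visible at the call site instead of hiding behind an implicit conversion.
  ATenTensor ToATen() const {
    if (!initialized) {
      throw std::runtime_error("cannot convert an uninitialized tensor");
    }
    return ATenTensor{};
  }

  // Deliberately *not* provided:
  // operator ATenTensor() const;  // implicit, would throw in surprising places
};

int main() {
  Caffe2Tensor t;
  t.initialized = true;
  ATenTensor a = t.ToATen();  // conversion is explicit and easy to spot
  return a.defined ? 0 : 1;
}
```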
Reviewed By: ezyang
Differential Revision:
D13596031
fbshipit-source-id:
d20b601e06ba47aeff2f6e8e15769840e2d46108
Your Name [Tue, 29 Jan 2019 06:56:55 +0000 (22:56 -0800)]
Remove redundant "build" setup.py commond from onnx scripts
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16487
Differential Revision:
D13858628
Pulled By: bddppq
fbshipit-source-id:
e1ff3fc5f9be5d3dbbf96ee73c3a8c901b440b82
James Reed [Tue, 29 Jan 2019 05:44:33 +0000 (21:44 -0800)]
Fix identifier shadowing in tracer (#16480)
Summary:
This was causing build failures under `-Werror` targets under optimized build modes
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16480
Differential Revision:
D13857621
Pulled By: jamesr66a
fbshipit-source-id:
2990b987dbca943298ad478c9ee2792236f5fa5b
Owen Anderson [Tue, 29 Jan 2019 04:51:52 +0000 (20:51 -0800)]
Pass WERROR to CMake as an explicit parameter rather than an env var.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16465
Differential Revision:
D13853949
Pulled By: resistor
fbshipit-source-id:
71ccf90a2824ad21c9f26dd753b186f30435d82a
Edward Yang [Tue, 29 Jan 2019 04:43:59 +0000 (20:43 -0800)]
Remove redundant build from build develop instructions
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16467
Differential Revision:
D13849661
Pulled By: ezyang
fbshipit-source-id:
d3d58bd31ac65ad9cbf0057b9a4c499c0f59d95a
Jerry Zhang [Tue, 29 Jan 2019 02:24:42 +0000 (18:24 -0800)]
Change SetOutputSize in ConvTransposeUnpoolBaseOp (#16179)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16179
to avoid passing partially initialized Tensor around.
Reviewed By: ezyang
Differential Revision:
D13744009
fbshipit-source-id:
4c545765e1cd164b3e87ce08ec4c1cb1e37e2b8f
Sebastian Messmer [Tue, 29 Jan 2019 01:38:38 +0000 (17:38 -0800)]
Move stack.h to ATen/core (#16247)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16247
Stack is going to be used by the c10 dispatcher.
This just moves the file; changing the namespace turned out to be more complicated than I thought, so I'll leave the namespace alone for now.
Reviewed By: ezyang
Differential Revision:
D13774189
fbshipit-source-id:
66aeee36425e0ea2b3a4f8159604f38572306d57
Sebastian Messmer [Tue, 29 Jan 2019 01:38:38 +0000 (17:38 -0800)]
Remove state from schema and calling API (#16180)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16180
Only the kernel knows about its state, the caller doesn't see it anymore.
Reviewed By: ezyang
Differential Revision:
D13744071
fbshipit-source-id:
cb00ff1a881508c1b36ac4123bee1f68ca02ca9c
Mikhail Zolotukhin [Tue, 29 Jan 2019 00:56:44 +0000 (16:56 -0800)]
Remove generic_if.h. (#16354)
Summary:
The current uses of `IR_IF` are mostly trivial, so there is not much value in having special macros for it.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16354
Differential Revision:
D13821823
Pulled By: ZolotukhinM
fbshipit-source-id:
1ca73111f5b4868fa38a1f29c9230540773e5de6
Jesse Hellemn [Tue, 29 Jan 2019 00:49:25 +0000 (16:49 -0800)]
Remove CUDA_VERSION to flag and remove JOB_BASE_NAME from binary jobs
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16470
Differential Revision:
D13853387
Pulled By: pjh5
fbshipit-source-id:
a2baccde65ab82b69380ee57b16e43cc80ed3e04
Gregory Chanan [Mon, 28 Jan 2019 23:54:04 +0000 (15:54 -0800)]
Fix cmake byte version issue in build_pytorch_libs.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16457
Differential Revision:
D13846408
Pulled By: gchanan
fbshipit-source-id:
26962bc12d7d9fdad71f9dd7526f6d32e6008295
Jerry Zhang [Mon, 28 Jan 2019 23:51:25 +0000 (15:51 -0800)]
Change ConvPoolOp<Context>::SetOutputSize to ConvPoolOp<Context>::GetOutputSize (#16273)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16273
Previously we had SetOutputSize, which accepted a partially initialized output Tensor and set it to the correct size;
this diff changes it to GetOutputSize, which returns the correct size instead.
e.g.
```
auto* Y = Output(0);
ConvPoolOp<Context>::SetOutputSize(X, Y, channels);
...
Y->mutable_data<T>...
```
-->
```
auto sizes = ConvPoolOp<Context>::GetOutputSize(X, channels);
auto* Y = Output(0, sizes, at::dtype<T>());
```
Reviewed By: dzhulgakov
Differential Revision:
D13736281
fbshipit-source-id:
64abce3dbaed0b375098463333dfd0ea5a3b1945
James Reed [Mon, 28 Jan 2019 23:22:08 +0000 (15:22 -0800)]
Move tracer impls into cpp file (#16410)
Summary:
Working on the tracer was really annoying because a lot of the implementations were in `tracer.h` and editing that file caused us to rebuild almost the whole world. So this moves all the implementations into tracer.cpp
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16410
Differential Revision:
D13847776
Pulled By: jamesr66a
fbshipit-source-id:
ec8500da32b2d4cd990f293a0a96101d3e82f158
Michael Suo [Mon, 28 Jan 2019 23:04:53 +0000 (15:04 -0800)]
fix alias annotations on to, cpu, cuda (#16460)
Summary:
Fix alias annotations for ops that may return a fresh tensor. The previous version was overly conservative.
Currently there is no actual behavior change in the alias analysis, but we may use the information in the future.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16460
Differential Revision:
D13849086
Pulled By: suo
fbshipit-source-id:
cd23b314a800e5e077d866e74456d37a321439d5
Your Name [Mon, 28 Jan 2019 22:01:30 +0000 (14:01 -0800)]
Remove usage of deprecated "min_satisfying_examples" hypothesis setting (#16401)
Summary:
This setting has been deprecated in [hypothesis 3.56.0](https://github.com/HypothesisWorks/hypothesis/blob/d1b0df5b91051de7d3f9cea6550ce31e9f0ee2c8/hypothesis-python/docs/changes.rst#3560---2018-04-17) and has recently been removed in [hypothesis 4.x](https://github.com/HypothesisWorks/hypothesis/blob/d1b0df5b91051de7d3f9cea6550ce31e9f0ee2c8/hypothesis-python/docs/changes.rst#400---2019-01-14).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16401
Reviewed By: ezyang
Differential Revision:
D13832528
Pulled By: bddppq
fbshipit-source-id:
04b9f1dfdf2dcfe0ef121dd02f7fbfdf6bf4aead
Christian Puhrsch [Mon, 28 Jan 2019 21:54:14 +0000 (13:54 -0800)]
Support Tensor alias annotations for native_functions.yaml (#16239)
Summary:
Adds Tensor alias annotations.
This isn't a full implementation of alias annotations, but that isn't required to increase compliance with the JIT signature schema. There are some sanity checks within native_parse.py for their usage, which can also help overall correctness. Otherwise, this exists solely for further alignment between the JIT signature schema and the native_functions.yaml func schema.
This gets us to ~85% matches.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16239
Differential Revision:
D13804133
Pulled By: cpuhrsch
fbshipit-source-id:
aa5750f2c7e0f08b8c35d6d8f38cb148e9629855
Johannes M Dieterich [Mon, 28 Jan 2019 21:35:48 +0000 (13:35 -0800)]
Annotate the bicubic interpolation kernels (#16449)
Summary:
with the correct `__launch_bounds__` for ROCm.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16449
Differential Revision:
D13844111
Pulled By: bddppq
fbshipit-source-id:
07ed8552a630f3a6426d9e5648c415f066991e3d
SsnL [Mon, 28 Jan 2019 21:35:36 +0000 (13:35 -0800)]
Clear cmake cache when --cmake (#16426)
Summary:
Also, sometimes we have `CMakeCache.txt` but cmake errored out, so I'm adding the existence of `build.ninja` as another criterion for rerunning cmake.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16426
Differential Revision:
D13843801
Pulled By: ezyang
fbshipit-source-id:
ea1efb201062f23b7608f8d061997d8a8e293445
Jerry Zhang [Mon, 28 Jan 2019 20:18:19 +0000 (12:18 -0800)]
Remove dims() in caffe2::Tensor (#16356)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16356
att
Reviewed By: dzhulgakov
Differential Revision:
D13813197
fbshipit-source-id:
68c0fb43404536f622422c51949c819d8a037aa5
Sebastian Messmer [Mon, 28 Jan 2019 19:36:31 +0000 (11:36 -0800)]
Op-calling API can handle state (#16177)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16177
Change the API for calling operators so that it can store state in an OpKernel object.
This diff doesn't store the state there yet; that comes in a follow-up diff.
Reviewed By: ezyang
Differential Revision:
D13742889
fbshipit-source-id:
20511a9a1b9f850074e50634d4b4acf87f8c6ecd
Sebastian Messmer [Mon, 28 Jan 2019 19:36:30 +0000 (11:36 -0800)]
Handle stack correctly (#16246)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16246
The op schema says it returns multiple values, so let's actually return multiple values instead of one tuple.
For some reason, this did work when called from python (probably some auto-unpacking),
but once called from JIT, it segfaulted. This diff fixes that.
Reviewed By: dzhulgakov
Differential Revision:
D13780147
fbshipit-source-id:
fe94f82f4c53b7454f77c4484fca4ac9dc444475
Helmut [Mon, 28 Jan 2019 19:25:33 +0000 (11:25 -0800)]
Fix compiler error in swapBytes64 for rare architectures (#16418)
Summary:
swapBytes64 used to use SwapByteOrder_32 and value, both of which don't exist. This commit rewrites that part from scratch.
This happened in a debug build with the Microsoft compiler. For that case " && !defined(_DEBUG)" is also removed, because _byteswap_uint64 works fine in debug mode (if the check is necessary, it should be commented why).
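For context, a from-scratch 64-bit byte swap of the kind such a fallback needs can look like this (a generic sketch, not the actual PyTorch code; on MSVC the _byteswap_uint64 intrinsic mentioned above can be used instead):
```
// Generic sketch of a 64-bit byte swap fallback (not the actual PyTorch code).
// Compilers usually provide intrinsics (_byteswap_uint64 on MSVC,
// __builtin_bswap64 on GCC/Clang); this is the portable version a fallback
// path can use where those are unavailable.
#include <cassert>
#include <cstdint>

uint64_t swapBytes64(uint64_t value) {
  uint64_t result = 0;
  for (int i = 0; i < 8; ++i) {
    result = (result << 8) | ((value >> (8 * i)) & 0xFFu);
  }
  return result;
}

int main() {
  assert(swapBytes64(0x0102030405060708ULL) == 0x0807060504030201ULL);
}
```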
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16418
Differential Revision:
D13843306
Pulled By: ezyang
fbshipit-source-id:
dde1c7baeccec3aaa750d4b7200b3f4ccb4a00cb
Junjie Bai [Mon, 28 Jan 2019 19:10:18 +0000 (11:10 -0800)]
Fix lint errors introduced in pytorch/pytorch@ceece5d (#16454)
Summary:
ifedan
```
./test/common_utils.py:748:1: E302 expected 2 blank lines, found 1
./test/test_torch.py:1235:5: E303 too many blank lines (2)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16454
Differential Revision:
D13844905
Pulled By: bddppq
fbshipit-source-id:
3dc7c740d86310a8efc9864d7c7798fda8257a21
Syed Tousif Ahmed [Mon, 28 Jan 2019 18:20:47 +0000 (10:20 -0800)]
Report the slowest 10 tests when using pytest (#16423)
Summary:
This flag is useful for identifying tests that take way too long, like the ones in the following snippet, when running the test suite with pytest. https://github.com/pytorch/pytorch/blob/9757ad35b0b56cf955f294e751de9b437f9bb4ff/test/common_utils.py#L814-L835
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16423
Differential Revision:
D13843507
Pulled By: ezyang
fbshipit-source-id:
643e1766a85905b3b112ea5ca562135a17896a72
Xiaomeng Yang [Mon, 28 Jan 2019 17:26:41 +0000 (09:26 -0800)]
Optimize SpatialBNOp on GPU (#16395)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16395
Optimize SpatialBNOp on GPU
i-am-not-moving-c2-to-c10
Reviewed By: houseroad
Differential Revision:
D13829833
fbshipit-source-id:
04d2a63e8e9830c4c39a91cf87fcd7aa765dc55f
Igor Fedan [Mon, 28 Jan 2019 17:14:07 +0000 (09:14 -0800)]
CPU implementation of torch.cdist (#16168)
Summary:
cdist is used for calculating distances between collections of observations.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16168
Differential Revision:
D13739147
Pulled By: ifedan
fbshipit-source-id:
9419c2c166891ac7db40672c72f17848f0b446f9
Brennan Vincent [Mon, 28 Jan 2019 16:47:35 +0000 (08:47 -0800)]
Don't initialize a new `std::vector` in a loop. (#15850)
Summary:
Before this diff, we executed `std::vector<optional<acc_t>> buffer((unsigned)max_threads, optional<acc_t> {});` in every iteration of `foreach_reduced_elt`. Change the code to only execute that line when we need it, i.e., when we are actually about to parallelize.
This overhead is quite significant when we are doing a lot of small reductions in single-threaded code.
```
x=torch.randn((1024,10,1024),dtype=torch.float64)
torch.set_num_threads(1)
%timeit x.std(1)
```
Before (with #15845 applied): 708.25 ms
After: 508 ms
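The general pattern of the change, as a toy sketch (not the actual TensorIterator code): defer the per-thread buffer allocation until we know we will actually parallelize:
```
// Toy sketch of the optimization: only construct the per-thread buffer when
// the reduction is big enough to be parallelized; small reductions never pay
// for the allocation.
#include <algorithm>
#include <cstddef>
#include <numeric>
#include <vector>

double reduce(const std::vector<double>& data, std::size_t grain_size) {
  if (data.size() < grain_size) {
    // Small reduction: stay single-threaded, no buffer needed.
    return std::accumulate(data.begin(), data.end(), 0.0);
  }
  // Only now allocate one slot per thread; the real code fills each slot from
  // its own thread, here the chunks are processed serially for brevity.
  const std::size_t num_slots = 8;  // stand-in for the max thread count
  std::vector<double> buffer(num_slots, 0.0);
  const std::size_t chunk = (data.size() + num_slots - 1) / num_slots;
  for (std::size_t s = 0; s < num_slots; ++s) {
    const std::size_t begin = s * chunk;
    const std::size_t end = std::min(data.size(), begin + chunk);
    if (begin < end) {
      buffer[s] = std::accumulate(data.begin() + begin, data.begin() + end, 0.0);
    }
  }
  return std::accumulate(buffer.begin(), buffer.end(), 0.0);
}

int main() {
  std::vector<double> small(10, 1.0);
  return reduce(small, /*grain_size=*/1024) == 10.0 ? 0 : 1;
}
```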
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15850
Differential Revision:
D13612960
Pulled By: umanwizard
fbshipit-source-id:
f5e61abfe0027775c97ed81ac09c997fbee741df
Edward Yang [Mon, 28 Jan 2019 15:37:43 +0000 (07:37 -0800)]
More documentation on caffe2::Operator
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16371
Reviewed By: dzhulgakov
Differential Revision:
D13820472
fbshipit-source-id:
efccea0e92c86d30ec2bdda50eb9aab8a3a1504d
rotuna [Mon, 28 Jan 2019 00:26:47 +0000 (16:26 -0800)]
Better error message when creating a module instance in jit.script (#16416)
Summary:
Made the change requested in #15555.
The PR was failing the build due to a timeout error while getting packages using pip.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16416
Differential Revision:
D13833873
Pulled By: soumith
fbshipit-source-id:
e2200e9e8015558fcd359dfa3d025b25802d62b5
peter [Sun, 27 Jan 2019 22:59:34 +0000 (14:59 -0800)]
Fix issues on Windows brought by #16289 (#16412)
Summary:
This one needs to be merged ASAP because the CUDA build for Windows is skipped at this time.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16412
Differential Revision:
D13833889
Pulled By: soumith
fbshipit-source-id:
95a401a01fb0f9c1045df0bfd72d8206b8a6f3fd
Gemfield [Sun, 27 Jan 2019 22:13:46 +0000 (14:13 -0800)]
Fix a typo in Parallel.h (#16419)
Summary:
Fix a typo in Parallel.h.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16419
Differential Revision:
D13833705
Pulled By: soumith
fbshipit-source-id:
824ebe753e028fc8e2b5d7a51fdba98a365fd29a
peterjc123 [Sun, 27 Jan 2019 20:26:24 +0000 (12:26 -0800)]
Don't install PDB for Windows static build of caffe2_observers (#16420)
Summary:
Fixes #16292.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16420
Differential Revision:
D13833704
Pulled By: soumith
fbshipit-source-id:
482ad6ce103bed7206e924e8c82454fbb1bfac42
SsnL [Sun, 27 Jan 2019 20:08:09 +0000 (12:08 -0800)]
Fix slogdet sign requiring grad when input requires grad (#16337)
Summary:
The real fix for https://github.com/pytorch/pytorch/issues/15605.
This is sort of BC breaking because now
```py
In [1]: import torch
In [2]: a = torch.randn(3, 3, requires_grad=True)
In [3]: a.slogdet()
Out[3]: (tensor(1.), tensor(0.1356, grad_fn=<SlogdetBackward>))
In [4]: a.slogdet()[0].requires_grad
Out[4]: False
```
while before this patch `a.slogdet()[0]` requires grad with `grad_fn=<SlogdetBackward>`. But any attempt to backprop through this value will hit the error in #15605, so I don't think this is a problem.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16337
Differential Revision:
D13832644
Pulled By: soumith
fbshipit-source-id:
f96c477e99edcbdbd966888e5c5ea7fd058429a8
Zachary DeVito [Sun, 27 Jan 2019 09:24:58 +0000 (01:24 -0800)]
CI Fix: restore MAX_JOBS variable (#16415)
Summary:
Restores a CI workaround (https://github.com/pytorch/pytorch/pull/7361) that got dropped with build_pytorch_libs.sh.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16415
Differential Revision:
D13833092
Pulled By: zdevito
fbshipit-source-id:
f78b60cafd8da945790dba28de373b8faf46e9f5
Samuel Fadel [Sun, 27 Jan 2019 01:58:31 +0000 (17:58 -0800)]
Update einsum documentation. (#16323)
Summary:
The documentation stated that operands to einsum should be a list of Tensors, not individual arguments. The function, however, now accepts individual arguments for each Tensor operand *and* a single argument consisting of a list of Tensors. The documentation was updated to reflect this change.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16323
Differential Revision:
D13832647
Pulled By: soumith
fbshipit-source-id:
c01c2b350f47674d3170337f493b0ee2ea381b3f
James Reed [Sun, 27 Jan 2019 01:39:34 +0000 (17:39 -0800)]
Fix flake8 warnings/errors in test_jit.py (#16409)
Summary:
These were really annoying to see in the phabricator UI when trying to land PRs that touched test_jit.py, so this fixes them.
One remaining item is the T484 error. Locally, flake8 still chokes on that line even though I put the noqa comment there (and tried varying whitespaces around it etc). Not sure why it still persists...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16409
Differential Revision:
D13832658
Pulled By: jamesr66a
fbshipit-source-id:
46356ba6444ae5ee1a141c28489bdcc7c99e39c0
James Reed [Sat, 26 Jan 2019 22:38:12 +0000 (14:38 -0800)]
Trace fork and join calls
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16232
Differential Revision:
D13772974
Pulled By: jamesr66a
fbshipit-source-id:
b2db370271809e26d3301f8cc98eec567db5e62b
vishwakftw [Sat, 26 Jan 2019 19:14:19 +0000 (11:14 -0800)]
Switch to CUDA implementation if batch size >= 65536 for affine_grid (#16403)
Summary:
Changelog:
- Append a condition that switches to the native CUDA implementation for affine_grid
Fixes #16365
Differential Revision:
D13832192
Pulled By: soumith
fbshipit-source-id:
3f484e6673d71e3ba7627b170cb8f1611e12b9b2
SsnL [Sat, 26 Jan 2019 17:42:48 +0000 (09:42 -0800)]
gitignore gdb history
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16404
Differential Revision:
D13832191
Pulled By: soumith
fbshipit-source-id:
ab23d1ad72c041ec2d9616c273bbf399e0feb10d
Juan Miguel Pino [Sat, 26 Jan 2019 06:49:00 +0000 (22:49 -0800)]
Revert D13821061: [redo][c10] layernorm example
Differential Revision:
D13821061
Original commit changeset:
82f0dade0145
fbshipit-source-id:
e5b0b1bab0c9e731ae04add35e9a6c91656dd178
Jerry Zhang [Sat, 26 Jan 2019 00:59:07 +0000 (16:59 -0800)]
trying to fix testX (#16370)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16370
Passed locally, but it seems testX has some problem.
Reviewed By: ezyang
Differential Revision:
D13820250
fbshipit-source-id:
e4ad9d1ec99508867d4ead46753a7fb7019c50bd
Bram Wasti [Sat, 26 Jan 2019 00:45:35 +0000 (16:45 -0800)]
layernorm example (#16374)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16374
This fixes the original attempt in OSS (adds to the CMake and Python build files).
Reviewed By: smessmer
Differential Revision:
D13821061
fbshipit-source-id:
82f0dade0145fd04bdf8e3cb3954b5790e918162
Bram Wasti [Sat, 26 Jan 2019 00:45:34 +0000 (16:45 -0800)]
plug caffe2 into jit" (#16388)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16388
The previous diff broke master; this refactors the custom_operator.cpp file out into a separate header + cpp pair (caffe2_operator.{h,cpp}).
Reviewed By: smessmer
Differential Revision:
D13823550
fbshipit-source-id:
00e005e650336132d05aef97c1f0e5242ccad5ba
Junjie Bai [Sat, 26 Jan 2019 00:06:59 +0000 (16:06 -0800)]
Enable centos pytorch rocm CI
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14879
Differential Revision:
D13821534
Pulled By: bddppq
fbshipit-source-id:
45151b880992f1efa83e29c4985a723374575506
Zachary DeVito [Fri, 25 Jan 2019 23:57:09 +0000 (15:57 -0800)]
Remove bash from build (#16289)
Summary:
This commit removes the dependency on `build_pytorch_libs.sh` by moving the remaining functionality that is not expressible in cmake into python. Removing the indirection through bash also removes over 300 lines of environment munging code that is incredibly hard to understand because it passes a lot of secret parameters through `os.env`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16289
Reviewed By: ezyang
Differential Revision:
D13821662
Pulled By: zdevito
fbshipit-source-id:
d658d26925e3b1169ac1e3d44a159cf8a1f0d9b1
Jerry Zhang [Fri, 25 Jan 2019 23:32:45 +0000 (15:32 -0800)]
Remove caffe2::ShareData (#16139)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16139
Original commit changeset:
4b15a4c62995
Reviewed By: dzhulgakov
Differential Revision:
D13677464
fbshipit-source-id:
1a644a88fac02b44feebac48ccc01bc72cc47edb
Jesse Hellemn [Fri, 25 Jan 2019 23:11:33 +0000 (15:11 -0800)]
Trying a fix to anaconda logins on nightlies
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16387
Differential Revision:
D13826227
Pulled By: pjh5
fbshipit-source-id:
769a53e40a4912879faf9716a80c0e0c86acdbf8
Elias Ellison [Fri, 25 Jan 2019 23:02:30 +0000 (15:02 -0800)]
Update Documentation for Optionals (#16380)
Summary:
Now that https://github.com/pytorch/pytorch/pull/15587 has landed, updating docs.
Will close https://github.com/pytorch/pytorch/issues/15278
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16380
Differential Revision:
D13825221
Pulled By: eellison
fbshipit-source-id:
c5a7a7fbb40ba7be46a80760862468f2c9967169
Zachary DeVito [Fri, 25 Jan 2019 20:20:29 +0000 (12:20 -0800)]
Revert D13740752: [c10] plug caffe2 into jit
Differential Revision:
D13740752
Original commit changeset:
2d9383574d42
fbshipit-source-id:
e9ff217a438720423340a10af7fa263b33f2ae24
Gu, Jinghui [Fri, 25 Jan 2019 19:00:32 +0000 (11:00 -0800)]
Impl Shape op for mkldnn (#15266)
Summary:
Impl Shape op for mkldnn
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15266
Differential Revision:
D13804558
Pulled By: yinghai
fbshipit-source-id:
8a35f608c23973d7a15c3d645aee4059eb55f245
Bram Wasti [Fri, 25 Jan 2019 18:10:07 +0000 (10:10 -0800)]
Back out "[c10] layernorm example"
Summary: Original commit changeset:
87240ca7f48d
Reviewed By: bddppq
Differential Revision:
D13816657
fbshipit-source-id:
bafcf0779d811c7e4a134cfb323a89352fa8c180
Ailing Zhang [Fri, 25 Jan 2019 16:35:55 +0000 (08:35 -0800)]
Add xla test in CI (#15978)
Summary:
Adding xla CPU tests in our CI.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15978
Differential Revision:
D13816344
Pulled By: ailzhang
fbshipit-source-id:
f74c52e846976ea4ac439313847908a0e99d05eb
Edward Yang [Fri, 25 Jan 2019 15:48:00 +0000 (07:48 -0800)]
Delete Tensor::swap(), replace with pointer swap (#12730)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12730
i-am-not-moving-c2-to-c10
Reviewed By: smessmer
Differential Revision:
D10415430
fbshipit-source-id:
8a2ce8611c5fa77bbbd73fb6788c1baa3b370f07
SsnL [Fri, 25 Jan 2019 15:24:18 +0000 (07:24 -0800)]
Make test_proper_exit more robust (#16249)
Summary:
1. Improve error message for better debugging info
2. Increase timeout
3. Also apply the windows worker failure detection mechanism on non-Windows platforms, for better robustness
Attempt to fix #14501
cc ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16249
Differential Revision:
D13784702
Pulled By: ezyang
fbshipit-source-id:
09a7cff83ab9edce561ed69f9fb555ab35d1275f
Si Chen [Fri, 25 Jan 2019 15:23:06 +0000 (07:23 -0800)]
fix contbuild (#16362)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16362
https://our.intern.facebook.com/intern/testinfra/diagnostics/281475065177800.844424930381786.1548397180/
Reviewed By: ezyang
Differential Revision:
D13816639
fbshipit-source-id:
024117233f6d3bc6244013ca2ee1aea065560212
Xiaomeng Yang [Fri, 25 Jan 2019 08:54:09 +0000 (00:54 -0800)]
Minor change of group_norm_gradient on GPU (#16307)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16307
Minor change of group_norm_gradient on GPU
Reviewed By: houseroad
Differential Revision:
D13800613
fbshipit-source-id:
9e55f93b1e322efe3fc2d684b9c47c3dbb7a0f48
Junjie Bai [Fri, 25 Jan 2019 07:50:21 +0000 (23:50 -0800)]
Revert D13551909: [fbcode] logdevice for generic feature type
Differential Revision:
D13551909
Original commit changeset:
807830c50bee
fbshipit-source-id:
48cacf4ec1765253a9be9d78f4b28cc48330be59
Qin Huang [Fri, 25 Jan 2019 07:21:25 +0000 (23:21 -0800)]
logdevice for generic feature type (#16191)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16191
Logdevice-related modifications for the generic feature type.
We directly convert the generic feature structures to JSON strings, which correspond to the column input in offline and dper.
Reviewed By: itomatik
Differential Revision:
D13551909
fbshipit-source-id:
807830c50bee569de202530bc3700374757793a2
Bram Wasti [Fri, 25 Jan 2019 05:46:50 +0000 (21:46 -0800)]
layernorm example (#16350)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16350
Example usage of the new caffe2 integration
Reviewed By: smessmer
Differential Revision:
D13408546
fbshipit-source-id:
87240ca7f48d653a70241d243aa0eb25efa67611
Bram Wasti [Fri, 25 Jan 2019 05:46:50 +0000 (21:46 -0800)]
plug caffe2 into jit (#16331)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16331
Temporary measure to enable caffe2 ops in pytorch
Reviewed By: smessmer
Differential Revision:
D13740752
fbshipit-source-id:
2d9383574d42ce84ee471aba32eeb4f5a0cc7a4c
Bram Wasti [Fri, 25 Jan 2019 05:46:50 +0000 (21:46 -0800)]
Add RunOperator for using FunctionSchema registered ops easily in caffe2 (#16173)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16173
Helper to make it easy to run ops in caffe2
Reviewed By: smessmer
Differential Revision:
D13468240
fbshipit-source-id:
2276c7870af6dcdf829957f005fd16ac1ef319b5
Bram Wasti [Fri, 25 Jan 2019 05:46:50 +0000 (21:46 -0800)]
Add correct Input() shim to caffe2 operator impl (#16048)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16048
This enables full shimming of the operator (previously only Output() was shimmed).
Reviewed By: smessmer
Differential Revision:
D13468241
fbshipit-source-id:
c853b775ab5cdcd968f4a6cc4766e91c3c6b1c45
Shen Li [Fri, 25 Jan 2019 01:11:26 +0000 (17:11 -0800)]
Relax lower bound for nogil timing test to avoid false alarm (#16259)
Summary:
fixes #16250, #16271
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16259
Differential Revision:
D13784505
Pulled By: mrshenli
fbshipit-source-id:
0b7ad98cd3c018b9907d70158de3abc3c4cb57ef
Mikhail Zolotukhin [Fri, 25 Jan 2019 00:34:46 +0000 (16:34 -0800)]
Code-style fixes. (#16342)
Summary:
Some cleanups in ir.{h,cpp}. I plan to continue cleaning it up, so this is a first step.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16342
Differential Revision:
D13808897
Pulled By: ZolotukhinM
fbshipit-source-id:
2dedb414576c3efbf8e36434145d7f14a66b1ee7
Jongsoo Park [Fri, 25 Jan 2019 00:32:24 +0000 (16:32 -0800)]
disable testing group conv with EIGEN engine (#16335)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16335
Group conv is not implemented with the EIGEN engine, so this diff disables the related tests.
Reviewed By: jamesr66a
Differential Revision:
D13807204
fbshipit-source-id:
41f6de43da40882f57e64474520e185733caefb7
Elias Ellison [Thu, 24 Jan 2019 23:41:50 +0000 (15:41 -0800)]
Remove unneeded manual unwrap optionals (#16245)
Summary:
Remove calls to torch.jit._unwrap_optional that are no longer needed.
The remaining instances would require control flow logic for exceptions.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16245
Differential Revision:
D13804292
Pulled By: eellison
fbshipit-source-id:
08c5cbe4b956519be2333de5cf4e202488aff626
Yan Shang [Thu, 24 Jan 2019 23:22:01 +0000 (15:22 -0800)]
fix buildindexop (#16341)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16341
as in the title
Reviewed By: intermilan
Differential Revision:
D13808679
fbshipit-source-id:
0d12d3253f380bec66bc9be899be565861b8163a
Hoa Dinh [Thu, 24 Jan 2019 23:20:09 +0000 (15:20 -0800)]
Revert D13747581: Optimize SpatialBN on GPU
Differential Revision:
D13747581
Original commit changeset:
48a885a240ef
fbshipit-source-id:
58cec6023843d7459865eb80c9db8dac463cb96c
Jerry Zhang [Thu, 24 Jan 2019 23:01:47 +0000 (15:01 -0800)]
Add Test for ReinitializeTensor (#16338)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16338
att
Reviewed By: ezyang
Differential Revision:
D13806760
fbshipit-source-id:
322b9b7d314aeb0194f52b803ca35c0cb8efcdec
Will Feng [Thu, 24 Jan 2019 22:29:06 +0000 (14:29 -0800)]
Add thread-local guard: at::AutoNonVariableTypeMode (#15939)
Summary:
This PR adds a thread-local guard (`at::AutoNonVariableTypeMode`) to make sure that in VariableType.cpp the operations on baseType still dispatch to the non-Variable type, even if the parameters will become Variables after the Tensor/Variable merge. We achieve this by making `legacyTensorType()` and `getType()` check the `at::AutoNonVariableTypeMode` guard to decide whether to return the non-Variable type for a variable.
This is part of the VariableImpl/TensorImpl merge work: https://github.com/pytorch/pytorch/issues/13638.
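The mechanism behind such a guard can be sketched as a thread-local flag plus an RAII object that sets it for the current scope (toy code below, not the actual ATen implementation):
```
// Toy sketch of a thread-local RAII guard of this kind (not the real ATen
// code): dispatch code checks the flag and, while the guard is alive on this
// thread, behaves as if it were dealing with non-Variable types.
#include <cassert>

namespace toy {

thread_local bool non_variable_type_mode = false;

struct AutoNonVariableTypeMode {
  explicit AutoNonVariableTypeMode(bool enabled = true)
      : previous_(non_variable_type_mode) {
    non_variable_type_mode = enabled;
  }
  ~AutoNonVariableTypeMode() { non_variable_type_mode = previous_; }
  bool previous_;
};

// A stand-in for getType()/legacyTensorType(): returns true when it would
// pick the non-Variable type for the current thread.
bool dispatches_to_non_variable_type() { return non_variable_type_mode; }

}  // namespace toy

int main() {
  assert(!toy::dispatches_to_non_variable_type());
  {
    toy::AutoNonVariableTypeMode guard(true);
    assert(toy::dispatches_to_non_variable_type());  // only inside the guard's scope
  }
  assert(!toy::dispatches_to_non_variable_type());
}
```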
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15939
Reviewed By: ezyang
Differential Revision:
D13640980
Pulled By: yf225
fbshipit-source-id:
d12c2543822958558d7d70d36c50999a5eb8783f
Jongsoo Park [Thu, 24 Jan 2019 22:09:11 +0000 (14:09 -0800)]
reduce parameter space of test_1x1_conv to avoid timeout (#16223)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16223
As title says
Reviewed By: jamesr66a
Differential Revision:
D13758202
fbshipit-source-id:
3cdffb80a5dad53b29e65e8eb0ae128edba70dbb
Sidney Zhang [Thu, 24 Jan 2019 21:07:35 +0000 (13:07 -0800)]
Update docs to include variable annotation example (#16324)
Summary:
Relates to this issue https://github.com/pytorch/pytorch/issues/16288
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16324
Reviewed By: ezyang
Differential Revision:
D13805412
Pulled By: suo
fbshipit-source-id:
8b80f988262da2c717452a71142327bbc23d1b8f
Edward Yang [Thu, 24 Jan 2019 20:00:34 +0000 (12:00 -0800)]
Delete duplicate copy of THCCachingAllocator. (#16226)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16226
Now that the caching allocator is moved to c10_cuda, we can
delete the duplicate copy from Caffe2.
Reviewed By: dzhulgakov, smessmer
Differential Revision:
D13762540
fbshipit-source-id:
03f1ebf7f11c68c19aa0d66110156fe228da6138
Edward Yang [Thu, 24 Jan 2019 20:00:34 +0000 (12:00 -0800)]
Move THCCachingAllocator to c10_cuda. (#16119)
Summary:
Some renaming and renamespacing also took place. I was originally planning not to do anything, but it turns out that it was easier to make HIPify work by using a namespace CUDACachingAllocator:: rather than THCCachingAllocator_, since :: is a word boundary but _ is not.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16119
Reviewed By: smessmer
Differential Revision:
D13718768
fbshipit-source-id:
884a481d99027fd3e34471c020f826aa12225656
Edward Yang [Thu, 24 Jan 2019 20:00:34 +0000 (12:00 -0800)]
Remove unnecessary includes and headers from THCCachingAllocator, move to at::cuda:: namespace (#16117)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16117
This means I can move it to c10_cuda with minimal fuss.
Reviewed By: smessmer
Differential Revision:
D13717836
fbshipit-source-id:
a94c7dc649af64542480fc1c226b289588886c00
Mikhail Zolotukhin [Thu, 24 Jan 2019 19:05:07 +0000 (11:05 -0800)]
Directly include headers from ATen.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16287
Differential Revision:
D13792949
Pulled By: ZolotukhinM
fbshipit-source-id:
d627d8dc469df048063c70d0b5b8d33fede809a3