platform/upstream/pytorch.git
3 years agoBack out "Temporary fix for remote gpu execution issue" (#63983)
Rong Rong (AI Infra) [Wed, 25 Aug 2021 21:34:40 +0000 (14:34 -0700)]
Back out "Temporary fix for remote gpu execution issue" (#63983)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63983

Test for fixes in D30545351. It should resolve the issue of the remote execution flag being populated incorrectly.

Test Plan: CI

Reviewed By: malfet, seemethere

Differential Revision: D30549443

fbshipit-source-id: b3895909f5cd654ba163b77950872b332fbad3fe

3 years agoShape Propagation Pass: Fix AdaptiveAveragePooling2d (#63629)
Priya Ramani [Wed, 25 Aug 2021 20:08:12 +0000 (13:08 -0700)]
Shape Propagation Pass: Fix AdaptiveAveragePooling2d (#63629)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/63629

Test Plan: Imported from OSS

Reviewed By: ZolotukhinM

Differential Revision: D30461727

Pulled By: priyaramani

fbshipit-source-id: 3873d1d636f79185680b82de06174d8de288c941

3 years agoMove existing target determinator to tools (#63809)
driazati [Wed, 25 Aug 2021 19:58:24 +0000 (12:58 -0700)]
Move existing target determinator to tools (#63809)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63809

This moves the modulefinder determinator out to `tools/testing`, since it is supposed to be CI-only. It also simplifies run_test.py a little bit.

Test Plan: Imported from OSS

Reviewed By: malfet, seemethere, janeyx99

Differential Revision: D30497438

Pulled By: driazati

fbshipit-source-id: 1d203037af5af6a20c1e7812da935e7cbb5cd82f

3 years agoAdd a comment on the potential implicit type up-casting (#63905)
Yi Wang [Wed, 25 Aug 2021 19:46:09 +0000 (12:46 -0700)]
Add a comment on the potential implicit type up-casting (#63905)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63905

as title
ghstack-source-id: 136590703

Test Plan: N/A

Reviewed By: mrshenli

Differential Revision: D30527929

fbshipit-source-id: 69402bbfa87cfd8fc166ce313cde9736ee072589

3 years agoadd BFloat16 support for bernoulli and Dropout on CPU (#56372)
mingfeima [Wed, 25 Aug 2021 18:53:52 +0000 (11:53 -0700)]
add BFloat16 support for bernoulli and Dropout on CPU (#56372)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/56372

Test Plan: Imported from OSS

Reviewed By: heitorschueroff

Differential Revision: D28836792

Pulled By: VitalyFedyunin

fbshipit-source-id: ede951d172a59276e11383fd767778ab959b5a6b

3 years agoUpdate torch.distributed.run OMP_NUM_THREADS message to log.warning (#63953)
Howard Huang [Wed, 25 Aug 2021 18:53:24 +0000 (11:53 -0700)]
Update torch.distributed.run OMP_NUM_THREADS message to log.warning (#63953)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63953

Closes #61138

Test:
`python -m torch.distributed.run --nproc_per_node 2 test.py`
Still outputs message

`LOGLEVEL=ERROR python -m torch.distributed.run --nproc_per_node 2 test.py`
Does not output message anymore
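
A minimal sketch of the change, assuming the notice goes through the module logger (the message text here is illustrative, not the exact string):

```python
import logging

log = logging.getLogger(__name__)

# Before, the notice was always printed; emitting it via the logger means
# LOGLEVEL=ERROR (which raises the log level) suppresses it.
log.warning(
    "Setting OMP_NUM_THREADS environment variable for each process to be 1"
)
```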

cc pietern mrshenli pritamdamania87 zhaojuanmao satgera rohan-varma gqchen aazzolini osalpekar jiayisuse agolynski SciPioneer H-Huang mrzzd cbalioglu gcramer23

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D30542997

Pulled By: H-Huang

fbshipit-source-id: e7da30dcda51516abf4e56f1f510132e44397027

3 years agoFix ciflow/all label generation (#63954)
zhouzhuojie [Wed, 25 Aug 2021 18:30:28 +0000 (11:30 -0700)]
Fix ciflow/all label generation (#63954)

Summary:
The `ciflow/all` label is automatically added, but it needs to be added before we call `gen_root_job_condition`.

- fix the order of adding `ciflow/all`
- refactor all the strings into global constants

Pull Request resolved: https://github.com/pytorch/pytorch/pull/63954

Reviewed By: malfet

Differential Revision: D30545596

Pulled By: zhouzhuojie

fbshipit-source-id: 83ab668f0234488afb855a72e3ebd4503f7f1a78

3 years agoReformat run_test.py (#63808)
driazati [Wed, 25 Aug 2021 18:19:49 +0000 (11:19 -0700)]
Reformat run_test.py (#63808)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63808

`black run_test.py`

Test Plan: Imported from OSS

Reviewed By: seemethere

Differential Revision: D30497437

Pulled By: driazati

fbshipit-source-id: 41b29b73f41fa4bb15fce5eaa69f8efe614e02f7

3 years ago[Static Runtime] Added caching for the NNC code generated for Logit. (#63840)
Raghavan Raman [Wed, 25 Aug 2021 18:12:57 +0000 (11:12 -0700)]
[Static Runtime] Added caching for the NNC code generated for Logit. (#63840)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63840

Added NNC generated code for Logit to the cache.

```
Logit NNC Benchmark        Time (ns)
                       w/o cache   w/ cache
logit_nnc_sleef/64           543        536
logit_nnc_sleef/512         3517       3465
logit_nnc_sleef/8192       88483      85881
logit_nnc_sleef/32768     337016     323090
logit_nnc_fast/64            167        163
logit_nnc_fast/512           866        817
logit_nnc_fast/8192        13069      12801
logit_nnc_fast/32768       53429      52530
logit_nnc_vml/64             164        151
logit_nnc_vml/512            783        769
logit_nnc_vml/8192         11563      11674
logit_nnc_vml/32768        46720      46452
```

Test Plan: Unit tests and inline_cvr model.

Reviewed By: hlu1

Differential Revision: D30405424

fbshipit-source-id: 938b1b74758e2612ae151bac890c5f8ebbc42d50

3 years ago[Static Runtime] Added a variable for clamp in the NNC code for Logit. (#63839)
Raghavan Raman [Wed, 25 Aug 2021 18:12:57 +0000 (11:12 -0700)]
[Static Runtime] Added a variable for clamp in the NNC code for Logit. (#63839)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63839

Replaced the use of a constant for clamp in the NNC code for Logit
with a variable. This makes it easier to enable caching for Logit.

There is no performance difference with this change, as shown in the micro-benchmarks below.

```
Logit NNC Benchmark        Time (ns)
                       const-clamp   var-clamp
logit_nnc_sleef/64             550         543
logit_nnc_sleef/512           3514        3517
logit_nnc_sleef/8192         85537       82900
logit_nnc_sleef/32768       347635      337016
logit_nnc_fast/64              173         167
logit_nnc_fast/512             829         866
logit_nnc_fast/8192          13286       13069
logit_nnc_fast/32768         51116       53429
logit_nnc_vml/64               146         164
logit_nnc_vml/512              773         783
logit_nnc_vml/8192           11556       11563
logit_nnc_vml/32768          44815       46720
```

Test Plan: SR unit tests and the inline_cvr model.

Reviewed By: bertmaher

Differential Revision: D30405466

fbshipit-source-id: adb891fdae5746439931ce5f43165291fec08f52

3 years ago[Static Runtime] Moved NNC operator definitions to separate files. (#63838)
Raghavan Raman [Wed, 25 Aug 2021 18:12:57 +0000 (11:12 -0700)]
[Static Runtime] Moved NNC operator definitions to separate files. (#63838)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63838

Refactored NNC operator definitions code into separate files.

Made `TEWrapper` a class with a fixed set of methods and added separate definitions for them based on `TORCH_ENABLE_LLVM` to keep the same functionality as before.

Test Plan: Build and ran Static Runtime tests.

Reviewed By: hlu1

Differential Revision: D30405467

fbshipit-source-id: 606ef852bb820d5e23a0f8af1bf5dc122e90bceb

3 years ago[Reland] Replacing the p.data access in utils with tensor.set_. Passes both test_po...
Aayush Prakash [Wed, 25 Aug 2021 18:11:08 +0000 (11:11 -0700)]
[Reland] Replacing the p.data access in utils with tensor.set_. Passes both test_post_localSGD_optimizer_parity and test_periodic_model_averager tests (#63895)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63895

When updating a model parameter, writing to `parameter.data` is no longer recommended, because the `data` field will be deprecated in the future.

The replacement is `tensor.set_`.
ghstack-source-id: 136593433
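
A minimal sketch of the replacement (variable names are illustrative):

```python
import torch

param = torch.nn.Parameter(torch.randn(4))
averaged = torch.randn(4)  # stand-in for the averaged value

# Old pattern (discouraged; the `.data` field will be deprecated):
#   param.data = averaged
# New pattern: swap the underlying tensor in-place, outside autograd.
with torch.no_grad():
    param.set_(averaged)
```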

Test Plan:
buck test mode/dev-nosan //caffe2/test/distributed:distributed_nccl_spawn -- test_periodic_model_averager
buck test mode/dev-nosan //caffe2/test/distributed:distributed_nccl_spawn -- test_post_localSGD_optimizer_parity

Reviewed By: SciPioneer

Differential Revision: D30526178

fbshipit-source-id: a1ac0ec3665d8623edd5bf94f01c1132daff5c00

3 years agoclean up engine.cpp thread state (#63115)
albanD [Wed, 25 Aug 2021 18:07:24 +0000 (11:07 -0700)]
clean up engine.cpp thread state (#63115)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63115

This actually changes:
- callbacks now run with proper grad mode even in worker threads
- graphtask's Future callbacks now run with proper TLS when erroring
  out from a worker thread

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D30388100

Pulled By: albanD

fbshipit-source-id: 7ae9c461c2f0040548dd9e1e314f25e8da0c2e67

3 years ago[fx2trt] Check input device in TRTModule (#63893)
Shiyan Deng [Wed, 25 Aug 2021 17:22:17 +0000 (10:22 -0700)]
[fx2trt] Check input device in TRTModule (#63893)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63893

Add a check to ensure all the inputs are on cuda device.

Test Plan: CI

Reviewed By: kflu, houseroad

Differential Revision: D30525265

fbshipit-source-id: 6e50b70fd535defc1f802d51e8bb991b2dd73741

3 years agobf16 Error message cleanup as well as addition of is_bf16_supported (#63798)
riship [Wed, 25 Aug 2021 16:56:41 +0000 (09:56 -0700)]
bf16 Error message cleanup as well as addition of is_bf16_supported (#63798)

Summary:
ngimel

Pull Request resolved: https://github.com/pytorch/pytorch/pull/63798

Reviewed By: heitorschueroff

Differential Revision: D30526187

Pulled By: ngimel

fbshipit-source-id: c484aec14638097c96c720095d3491249b6b2d14

3 years ago[pruner] add getter for pruned outputs in base pruner (#63520)
Karen Zhou [Wed, 25 Aug 2021 16:55:02 +0000 (09:55 -0700)]
[pruner] add getter for pruned outputs in base pruner (#63520)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63520

Rather than having to call `module.parametrizations.weight[0].pruned_outputs` each time we need to access the set of pruned indices, we add a getter `get_module_pruned_outputs` which takes the module as an argument and returns the set.

This is used for testing.
ghstack-source-id: 136561130
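
A sketch of the getter, mirroring the access path quoted above (written as a free function here for brevity):

```python
def get_module_pruned_outputs(module):
    # Convenience accessor for the set of pruned indices on `module`;
    # equivalent to the verbose access path quoted above.
    return module.parametrizations.weight[0].pruned_outputs
```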

Test Plan:
` buck test mode/dev-nosan //caffe2/test:ao -- TestBasePruner`

https://pxl.cl/1N4gK

Reviewed By: z-a-f

Differential Revision: D30374558

fbshipit-source-id: e38dfee0879cadde52b942e899a3d8d7151ee493

3 years ago[pruner] add support for pruning BatchNorm2d (#63519)
Karen Zhou [Wed, 25 Aug 2021 16:55:02 +0000 (09:55 -0700)]
[pruner] add support for pruning BatchNorm2d (#63519)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63519

If the pruner is configured to prune biases along with weights, and the model has BatchNorm2d layers following pruned Conv2d layers, then the corresponding channels of the BatchNorm must also be pruned.

Specifically, they need to be zeroed out rather than fully removed, since in eager mode the dimensions between layers need to be preserved.

To do this, we add a pruning parametrization called `ZeroesParametrization` which zeroes out pruned channels rather than removing them.

The user must provide, in the config, a tuple of the Conv2d and BatchNorm layers that go together. The `prepare` method will add the tuple to the `module_groups`; then it will add a PruningParametrization to the Conv2d layer and a ZeroesParametrization to the BatchNorm, and set their pruned sets to be the same set. That way, during `step`, both masks are updated with the same pruned indices.
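
A minimal sketch of what such a zeroing parametrization could look like (only the class name comes from the summary; the body is illustrative):

```python
import torch
from torch import nn

class ZeroesParametrization(nn.Module):
    """Zero out pruned channels instead of removing them, so that
    dimensions between layers are preserved in eager mode."""

    def __init__(self, pruned_outputs):
        super().__init__()
        # Shared with the Conv2d's PruningParametrization.
        self.pruned_outputs = pruned_outputs

    def forward(self, x):
        x = x.clone()
        x[list(self.pruned_outputs)] = 0
        return x
```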

ghstack-source-id: 136562278

Test Plan:
`buck test mode/dev-nosan //caffe2/test:ao -- TestBasePruner`

https://pxl.cl/1N1P6

Reviewed By: z-a-f

Differential Revision: D30349855

fbshipit-source-id: 3199d3688d5a70963f9b32d7a8fdac3962ae6a65

3 years agoMinor OptionalTensorRef updates (#63611)
Peter Bell [Wed, 25 Aug 2021 16:35:26 +0000 (09:35 -0700)]
Minor OptionalTensorRef updates (#63611)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63611

A few minor updates to `OptionalTensorRef`:
1. use `Tensor`'s `unsafe_borrow_t` constructor which avoids an unnecessary `nullptr` check.
2. copy constructor cannot defer to the `const Tensor&` constructor because it checks the tensor is
defined, and so would fail for disengaged optionals.
3. use copy-swap idiom to avoid issues with self-assignment. `x = x` should be a no-op, but the old
version would clear `x`.
4. Add pointer-like access for consistency with `optional` and `MaybeOwned`

Test Plan: Imported from OSS

Reviewed By: bdhirsh

Differential Revision: D30484704

Pulled By: ezyang

fbshipit-source-id: 738f4bd22359eaecd0a519a04e89a4b44d92da5b

3 years agoUpdate CMake minimum version to 3.10 (#63660)
Nikita Shulga [Wed, 25 Aug 2021 16:24:27 +0000 (09:24 -0700)]
Update CMake minimum version to 3.10 (#63660)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/63660

Test Plan: Imported from OSS

Reviewed By: janeyx99, mruberry

Differential Revision: D30543878

fbshipit-source-id: a7d938807653f39727f2cc7d7ca167200567b6a0

3 years agoTemporary fix for remote gpu execution issue (#63899)
Rong Rong (AI Infra) [Wed, 25 Aug 2021 16:04:28 +0000 (09:04 -0700)]
Temporary fix for remote gpu execution issue (#63899)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63899

See: T99020845

Test Plan: sandcastle

Reviewed By: heitorschueroff

Differential Revision: D30527384

fbshipit-source-id: ce9933e5e181322c02d4ed17f3fdaabe4c5ba29e

3 years agoFix bug in `check_empty_containers` (#63492)
Ansley Ussery [Wed, 25 Aug 2021 16:01:50 +0000 (09:01 -0700)]
Fix bug in `check_empty_containers` (#63492)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/63492

Test Plan: Imported from OSS

Reviewed By: bdhirsh

Differential Revision: D30402749

Pulled By: ansley

fbshipit-source-id: 7de533355fe91ca4f45b2bafc3bfb205a028c1ed

3 years agoSwap CUDA 11.1 and 11.3 in CI to make 11.1 periodic (#63900)
Jane Xu [Wed, 25 Aug 2021 16:00:13 +0000 (09:00 -0700)]
Swap CUDA 11.1 and 11.3 in CI to make 11.1 periodic (#63900)

Summary:
Preparing for supporting 11.3 in the next release.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/63900

Reviewed By: malfet

Differential Revision: D30541437

Pulled By: janeyx99

fbshipit-source-id: a7297da7f7818a4291b1c321d62d76fc2c0f1f90

3 years ago[skip ci] Add generated comment to ruleset json (#63896)
zhouzhuojie [Wed, 25 Aug 2021 15:50:00 +0000 (08:50 -0700)]
[skip ci] Add generated comment to ruleset json (#63896)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/63896

Reviewed By: heitorschueroff

Differential Revision: D30529820

Pulled By: zhouzhuojie

fbshipit-source-id: 7529803af23ea36a7bcb673cd399da80da8e3feb

3 years agoRevert D30526034: [pytorch][PR] compute reduction intermediate buffer size in elements
Alban Desmaison [Wed, 25 Aug 2021 14:15:18 +0000 (07:15 -0700)]
Revert D30526034: [pytorch][PR] compute reduction intermediate buffer size in elements

Test Plan: revert-hammer

Differential Revision:
D30526034 (https://github.com/pytorch/pytorch/commit/e69a1398cbe534874060460faf36af21d24ce6e7)

Original commit changeset: 0aca7f887974

fbshipit-source-id: a22472723818d6fe0c11a6e134080df1ac408038

3 years agoRevert D30384746: [fx2trt] Add a test for quantized resnet18
Linbin Yu [Wed, 25 Aug 2021 07:42:03 +0000 (00:42 -0700)]
Revert D30384746: [fx2trt] Add a test for quantized resnet18

Test Plan: revert-hammer

Differential Revision:
D30384746 (https://github.com/pytorch/pytorch/commit/10dfa58eba055a1bbc1cc89df033cd2815cbb403)

Original commit changeset: 1a8638777116

fbshipit-source-id: b93235323e229b391f5456f6e3543988062dd0d4

3 years ago[fx2trt] Add a test for quantized resnet18 (#63446)
Jerry Zhang [Wed, 25 Aug 2021 04:33:12 +0000 (21:33 -0700)]
[fx2trt] Add a test for quantized resnet18 (#63446)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63446

Add a test for quantized resnet18 running in TensorRT

Test Plan: buck run mode/opt -c python.package_style=inplace caffe2:fx2trt_quantized_resnet_test

Reviewed By: 842974287

Differential Revision: D30384746

fbshipit-source-id: 1a863877711618cd23d887694269ed9e44ee606c

3 years ago[quant][graphmode][fx] Make maxpool and flatten produce the reference pattern (#63501)
Jerry Zhang [Wed, 25 Aug 2021 04:28:40 +0000 (21:28 -0700)]
[quant][graphmode][fx] Make maxpool and flatten produce the reference pattern (#63501)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63501

Currently some of the ops are considered to work with both float and quantized input, so we may produce patterns like "quant - some_op - dequant". This might not work well with the backend; in the future we may consider changing everything to produce "quant - dequant - some_op - quant - dequant" instead. This PR fixes it for maxpool and flatten only, to unblock resnet benchmarking on TensorRT.
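
Shown schematically with eager-mode ops (a sketch of the resulting pattern, not the FX pass itself):

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)
q = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.quint8)

# Reference pattern: explicit dequant/quant boundaries around the op,
# i.e. quant - dequant - max_pool2d - quant - dequant.
y = torch.quantize_per_tensor(
    F.max_pool2d(q.dequantize(), kernel_size=2),
    scale=0.1, zero_point=0, dtype=torch.quint8,
).dequantize()
```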

Test Plan:
python test/test_quantization.py TestQuantizeFxOps

Imported from OSS

Reviewed By: mruberry

Differential Revision: D30402788

fbshipit-source-id: 892c5ff6552775070e2c1453f65846590fb12735

3 years ago[TensorExpr] LLVMCodegen: Use addFnAttr instead of addAttribute which was deleted...
Mikhail Zolotukhin [Wed, 25 Aug 2021 04:21:57 +0000 (21:21 -0700)]
[TensorExpr] LLVMCodegen: Use addFnAttr instead of addAttribute which was deleted. (#63886)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63886

cc gmagogsfm

Test Plan: Imported from OSS

Reviewed By: bertmaher

Differential Revision: D30523135

Pulled By: ZolotukhinM

fbshipit-source-id: 62e125f917b2a0153eb30879d93cf956587a05e0

3 years ago[quant][graphmode][fx] Add a separate lower_to_native_backend function for relu ...
Jerry Zhang [Wed, 25 Aug 2021 04:05:14 +0000 (21:05 -0700)]
[quant][graphmode][fx] Add a separate lower_to_native_backend function for relu (#62861)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62861

This PR adds a lower_to_native_backend function to lower a quantized reference model
to a model that uses fbgemm/qnnpack ops. We'll gradually add support and remove
the fbgemm/qnnpack specific handling in quantization_patterns.py

Test Plan:
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D30165828

fbshipit-source-id: de1149cd7e7c1840c17c251cd4d35004afd015b7

3 years agocompute reduction intermediate buffer size in elements (#63885)
Natalia Gimelshein [Wed, 25 Aug 2021 02:37:54 +0000 (19:37 -0700)]
compute reduction intermediate buffer size in elements (#63885)

Summary:
Fixes https://github.com/pytorch/pytorch/issues/63869
`iter` strides are in bytes, and we were additionally multiplying the size computed from those strides by `sizeof(arg_t)`. Computing `output_memory_size` in elements is enough.
This doesn't fix the real underlying problem of allocating a large intermediate tensor, but it makes that tensor smaller, typically by a factor of 4.
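
A worked illustration of the sizing bug (the numbers are hypothetical):

```python
num_output_elements = 1_000_000
sizeof_arg_t = 4  # e.g. a float accumulator

# Strides from `iter` are in bytes, so a size computed from them is
# already a byte count.
output_memory_size = num_output_elements * sizeof_arg_t  # bytes

# Old: multiplying the byte count by sizeof(arg_t) again over-allocates.
buggy_bytes = output_memory_size * sizeof_arg_t
# Fixed: track the size in elements and convert to bytes exactly once.
correct_bytes = num_output_elements * sizeof_arg_t
assert buggy_bytes == sizeof_arg_t * correct_bytes  # 4x larger here
```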

Pull Request resolved: https://github.com/pytorch/pytorch/pull/63885

Reviewed By: mruberry

Differential Revision: D30526034

Pulled By: ngimel

fbshipit-source-id: 0aca7f887974b7776e380463bbd82d32a5786ee8

3 years agoTST Adds more modules into common module tests (#62999)
Thomas J. Fan [Wed, 25 Aug 2021 02:03:07 +0000 (19:03 -0700)]
TST Adds more modules into common module tests (#62999)

Summary:
This PR moves some modules into `common_modules` to see what it looks like.

While migrating some no-batch-dim modules into `common_modules`, I noticed that `desc` is not used for the name, which means we cannot use `-k` to filter tests. This PR moves the sample generation into `_parametrize_test` and passes the already generated `module_input` into users of `modules(modules_db)`.

I can see this is a little different from OpInfo and would be happy to revert to the original implementation of `modules`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62999

Reviewed By: heitorschueroff

Differential Revision: D30522737

Pulled By: jbschlosser

fbshipit-source-id: 7ed1aeb3753fc97a4ad6f1a3c789727c78e1bc73

3 years agoAllow arbitrary objects in state_dicts (#62976)
Joel Schlosser [Wed, 25 Aug 2021 02:00:33 +0000 (19:00 -0700)]
Allow arbitrary objects in state_dicts (#62976)

Summary:
Fixes https://github.com/pytorch/pytorch/issues/62094

Introduces functionality for adding arbitrary objects to module state_dicts. To take advantage of this, the following functions can be defined on a module:
* `get_extra_state(self) -> dict` - Returns a dict defining any extra state this module wants to save
* `set_extra_state(self, state)` - Subsumes the given state within the module

Under the hood, a sub-dictionary is stored in the state_dict under the key `_extra_state` for each module that requires extra state.
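
A minimal sketch of the hooks described above:

```python
import torch
from torch import nn

class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(2, 2)
        self.version = 1  # arbitrary non-tensor state

    def get_extra_state(self):
        return {"version": self.version}

    def set_extra_state(self, state):
        self.version = state["version"]

m = MyModule()
state = m.state_dict()    # includes the `_extra_state` entry for `m`
m.load_state_dict(state)  # routed back through set_extra_state
```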

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62976

Reviewed By: heitorschueroff

Differential Revision: D30518657

Pulled By: jbschlosser

fbshipit-source-id: 5fb35ab8e3d36f35e3e96dcd4498f8c917d1f386

3 years agoTST Adds pickle testing for ModuleInfo (#63736)
Thomas J. Fan [Wed, 25 Aug 2021 01:55:23 +0000 (18:55 -0700)]
TST Adds pickle testing for ModuleInfo (#63736)

Summary:
Follow up to https://github.com/pytorch/pytorch/pull/61935

This PR adds `test_pickle` to `test_modules`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/63736

Reviewed By: heitorschueroff

Differential Revision: D30522462

Pulled By: jbschlosser

fbshipit-source-id: a03b66ea0d81c6d0845c4fddf0ddc3714bbf0ab1

3 years agoRe-apply: [nnc] Support thread level parallelism in fused kernels (#63776)
Bert Maher [Wed, 25 Aug 2021 01:52:29 +0000 (18:52 -0700)]
Re-apply: [nnc] Support thread level parallelism in fused kernels (#63776)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63776

I reverted this out of an abundance of caution because some test
failures occurred, but they were all due to precision issues fixed lower in
this stack.  Let's try again.

I've rolled the elimination of the allow-parallelism-in-fusions toggle into
this diff since they're pretty tightly coupled.
ghstack-source-id: 136529847

Test Plan: CI

Reviewed By: huiguoo

Differential Revision: D30484555

fbshipit-source-id: 38fd33520f710585d1130c365a8c60c9ce794a59

3 years agoDon't switch executors mid test (#63830)
Bert Maher [Wed, 25 Aug 2021 01:52:29 +0000 (18:52 -0700)]
Don't switch executors mid test (#63830)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63830

It's really not safe to change the executor out from under models that may have
already been partially compiled.
ghstack-source-id: 136526228

Test Plan:
```
DEBUG=1 CFLAGS="-fsanitize=address" CXXFLAGS="-fsanitize=address" USE_LLVM=$(realpath ../llvm-project/install) CMAKE_PREFIX_PATH=$CONDA_PREFIX python setup.py install
LD_PRELOAD=/lib64/libasan.so.5 numactl -C3 pytest -v --cov --cov-report xml:test/coverage.xml --cov-append onnx/test_pytorch_onnx_onnxruntime.py::TestONNXRuntime_opset11 -s
```

Reviewed By: desertfire

Differential Revision: D30504489

fbshipit-source-id: 188581cb53f0cf5bd3442d1e9d46e8c0c7e124f8

3 years ago[nnc] Disable erf and erfc (#63775)
Bert Maher [Wed, 25 Aug 2021 01:52:29 +0000 (18:52 -0700)]
[nnc] Disable erf and erfc (#63775)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63775

These introduce small accuracy differences that cause some internal
tests to fail, and it's not worth fixing the tests right now because they're
slower than the ATen ops anyways.
ghstack-source-id: 136526229

Test Plan:
```
buck test mode/dev //aml/eccv/mcm/training:tests -- --exact 'aml/eccv/mcm/training:tests - test_build_torch_script_model (aml.eccv.mcm.training.tests.publish_helper_tests.TransformerPredictorPublishHelperTests)'
```

Reviewed By: navahgar

Differential Revision: D30484557

fbshipit-source-id: 095a9c810539a499105b76e1d96843dbc61b0079

3 years agoMigrate THCTensor_copyIgnoringOverlaps to ATen (#63505)
Peter Bell [Wed, 25 Aug 2021 01:48:25 +0000 (18:48 -0700)]
Migrate THCTensor_copyIgnoringOverlaps to ATen (#63505)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63505

This isn't a public operator, just a helper function used in CUDA_tensor_apply.

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D30441305

Pulled By: ngimel

fbshipit-source-id: 84fabc701cbd8479e02d80f373a3dd62d70df2ce

3 years ago[quant][graphmode][fx] Add reference option support for binary ops (#62698)
Jerry Zhang [Wed, 25 Aug 2021 01:20:43 +0000 (18:20 -0700)]
[quant][graphmode][fx] Add reference option support for binary ops (#62698)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62698

We also removed the special handling in match_utils for binary ops.

Test Plan:
python test/test_quantize.py TestQuantizeFx
python test/test_quantize.py TestQuantizeFxOps

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D30093781

fbshipit-source-id: 58cc972de8211a80dd4d111e25dc4ad36057933f

3 years ago[StaticRuntime] Fix bug in HasInplaceOp (#63842)
Hao Lu [Wed, 25 Aug 2021 00:06:18 +0000 (17:06 -0700)]
[StaticRuntime] Fix bug in HasInplaceOp (#63842)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/63842

Reviewed By: mikeiovine

Differential Revision: D30506914

fbshipit-source-id: b2e358cfb991dacdb295b61bbc37beb36b73b852

3 years agoMicrobenchmarking matrix mult (einsum, torch.mult, torch.mm) (#63654)
Harut Movsisyan [Tue, 24 Aug 2021 23:20:13 +0000 (16:20 -0700)]
Microbenchmarking matrix mult (einsum, torch.mult, torch.mm) (#63654)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/63654

Test Plan:
```
> buck run mode/opt caffe2/benchmarks/operator_benchmark/pt:matrix_mult_test

# ----------------------------------------
# PyTorch/Caffe2 Operator Micro-benchmarks
# ----------------------------------------
# Tag : short

# Benchmarking PyTorch: einsum_bmm
# Mode: Eager
# Name: einsum_bmm_B4_M5_N3_K2_cpu
# Input: B: 4, M: 5, N: 3, K: 2, device: cpu
Forward Execution Time (us) : 27.970

# Benchmarking PyTorch: einsum_bmm
# Mode: Eager
# Name: einsum_bmm_B32_M25_N20_K30_cpu
# Input: B: 32, M: 25, N: 20, K: 30, device: cpu
Forward Execution Time (us) : 41.830

# Benchmarking PyTorch: einsum_bmm
# Mode: Eager
# Name: einsum_bmm_B128_M100_N120_K110_cpu
# Input: B: 128, M: 100, N: 120, K: 110, device: cpu
Forward Execution Time (us) : 499.114

# Benchmarking PyTorch: bmm
# Mode: Eager
# Name: bmm_B4_M5_N3_K2_cpu
# Input: B: 4, M: 5, N: 3, K: 2, device: cpu
Forward Execution Time (us) : 6.268

# Benchmarking PyTorch: bmm
# Mode: Eager
# Name: bmm_B32_M25_N20_K30_cpu
# Input: B: 32, M: 25, N: 20, K: 30, device: cpu
Forward Execution Time (us) : 12.676

# Benchmarking PyTorch: bmm
# Mode: Eager
# Name: bmm_B128_M100_N120_K110_cpu
# Input: B: 128, M: 100, N: 120, K: 110, device: cpu
Forward Execution Time (us) : 438.219

# Benchmarking PyTorch: einsum_elementwise
# Mode: Eager
# Name: einsum_elementwise_B4_M5_N3_cpu
# Input: B: 4, M: 5, N: 3, device: cpu
Forward Execution Time (us) : 7.657

# Benchmarking PyTorch: einsum_elementwise
# Mode: Eager
# Name: einsum_elementwise_B32_M25_N20_cpu
# Input: B: 32, M: 25, N: 20, device: cpu
Forward Execution Time (us) : 18.523

# Benchmarking PyTorch: einsum_elementwise
# Mode: Eager
# Name: einsum_elementwise_B100_M90_N110_cpu
# Input: B: 100, M: 90, N: 110, device: cpu
Forward Execution Time (us) : 55.103

# Benchmarking PyTorch: mul
# Mode: Eager
# Name: mul_B4_M5_N3_cpu
# Input: B: 4, M: 5, N: 3, device: cpu
Forward Execution Time (us) : 2.501

# Benchmarking PyTorch: mul
# Mode: Eager
# Name: mul_B32_M25_N20_cpu
# Input: B: 32, M: 25, N: 20, device: cpu
Forward Execution Time (us) : 10.589

# Benchmarking PyTorch: mul
# Mode: Eager
# Name: mul_B100_M90_N110_cpu
# Input: B: 100, M: 90, N: 110, device: cpu
Forward Execution Time (us) : 50.102
```

Reviewed By: ajyu

Differential Revision: D30455179

fbshipit-source-id: 9f2d92b2d2b860f41a8e59be2cc086d75b587f7b

3 years agoTurn off layer norm in jit symbolic differentiation (#63816)
Xiaodong Wang [Tue, 24 Aug 2021 22:45:59 +0000 (15:45 -0700)]
Turn off layer norm in jit symbolic differentiation (#63816)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/63816

Test Plan:
Confirmed this can rescue the NE:

https://www.internalfb.com/mast/job/torchx_xdwang-SparseNNApplication_72cf593d

Reviewed By: ngimel

Differential Revision: D30498746

fbshipit-source-id: 4a387f32ee2f70685de6104459c7f21bfbddc187

3 years agoAdd a common autograd TLS state (#63860)
Alban Desmaison [Tue, 24 Aug 2021 22:32:42 +0000 (15:32 -0700)]
Add a common autograd TLS state (#63860)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/63860

Test Plan: Imported from OSS

Reviewed By: heitorschueroff

Differential Revision: D30513253

Pulled By: albanD

fbshipit-source-id: 97d76ed54dfbdf4ba3fc7051ce3b9bb636cefb4b

3 years ago.github: Enable with-ssh for Windows (#63440)
Eli Uriegas [Tue, 24 Aug 2021 21:13:04 +0000 (14:13 -0700)]
.github: Enable with-ssh for Windows (#63440)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63440

Signed-off-by: Eli Uriegas <eliuriegas@fb.com>
Test Plan: Imported from OSS

Reviewed By: janeyx99

Differential Revision: D30521460

Pulled By: seemethere

fbshipit-source-id: e987e170e73fb4f9d9f024bed0e58404ed206848

3 years ago[FX] Fix _replicate_for_data_parallel (#63821)
James Reed [Tue, 24 Aug 2021 20:44:52 +0000 (13:44 -0700)]
[FX] Fix _replicate_for_data_parallel (#63821)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/63821

Test Plan: Imported from OSS

Reviewed By: suo

Differential Revision: D30502115

Pulled By: jamesr66a

fbshipit-source-id: 0f004f95def6e1ba21ccbeab40cb0a739a0ad20c

3 years agoDo not modify saved variables in-place for spectral norm during power iteration ...
soulitzer [Tue, 24 Aug 2021 20:02:27 +0000 (13:02 -0700)]
Do not modify saved variables in-place for spectral norm during power iteration (#62293)

Summary:
Interestingly enough, the original code did have a mechanism that aims to prevent this very issue, but it performs the clone AFTER modifying u and v in-place.
This doesn't work because we can later use the cloned u and v in operations that save them for backward, and the next time we execute forward, we modify those same cloned u and v in-place.
So if the idea is that we want to avoid modifying a saved variable in-place, we should clone it BEFORE the in-place operation.
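
A small Python illustration of why the order matters (a sketch; the spectral-norm internals are more involved):

```python
import torch

x = torch.randn(3, requires_grad=True)
u = torch.randn(3)

y = x * u     # the multiplication saves `u` for backward
u.add_(1.0)   # in-place update of a saved variable...
# y.sum().backward()  # ...would raise: "one of the variables needed for
#                     # gradient computation has been modified by an
#                     # inplace operation"

# Cloning BEFORE the op that saves the tensor avoids the problem:
u2 = torch.randn(3)
y2 = x * u2.clone()  # the saved tensor is the clone
u2.add_(1.0)         # mutating the original is now safe
y2.sum().backward()
```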

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62293

Reviewed By: bdhirsh

Differential Revision: D30489750

Pulled By: soulitzer

fbshipit-source-id: cbe8dea885aef97adda8481f7a822e5bd91f7889

3 years agoMigrate legacy lstsq from THC to ATen (CUDA) (#63504)
Peter Bell [Tue, 24 Aug 2021 19:43:27 +0000 (12:43 -0700)]
Migrate legacy lstsq from THC to ATen (CUDA) (#63504)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63504

Closes gh-24592

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D30441304

Pulled By: ngimel

fbshipit-source-id: ec176596f54bc084af48a73d1dbb0dcb82fec593

3 years agoRevert D30513613: Removing tensor.data usage in utils with tensor set_ method
Edward Yang [Tue, 24 Aug 2021 19:19:16 +0000 (12:19 -0700)]
Revert D30513613: Removing tensor.data usage in utils with tensor set_ method

Test Plan: revert-hammer

Differential Revision:
D30513613 (https://github.com/pytorch/pytorch/commit/d08a36f831cbcb4516fc1b68e3e3deff8ab45aba)

Original commit changeset: 402efb9c30fa

fbshipit-source-id: 911c66a9852de77dc5274b5fb373258c0c97739a

3 years agoMerge common fields from TensorInitParams and ShardedTensorMetadata into TensorProper...
Bo Wang [Tue, 24 Aug 2021 18:45:54 +0000 (11:45 -0700)]
Merge common fields from TensorInitParams and ShardedTensorMetadata into TensorProperties (#63731)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63731
1) Follow-up to [PR/63378's last comment](https://github.com/pytorch/pytorch/pull/63378#discussion_r693143053)
2) Also updated the caller side (usage of ShardedTensorMetadata) in fbcode

Ref: [landing workflow 3](https://www.internalfb.com/intern/wiki/PyTorch/PyTorchDev/Workflow/Landing/#landing-your-prs-from-gi-1)

Test Plan:
Imported from OSS

OSS: (pytorch).. $ python test/distributed/_sharded_tensor/test_sharded_tensor.py --v
FB:  fbcode $ buck test mode/dev //aiplatform/modelstore/checkpointing/pyper/tests:checkpoint_utils_test

Reviewed By: wanchaol, heitorschueroff

Differential Revision: D30472281

fbshipit-source-id: 727fb0e7f10eab4eb7a10476194e9008f2ac1fb5

3 years agoRemoving tensor.data usage in utils with tensor set_ method (#63867)
Aayush Prakash [Tue, 24 Aug 2021 18:19:34 +0000 (11:19 -0700)]
Removing tensor.data usage in utils with tensor set_ method (#63867)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63867

When updating a model parameter, writing to `parameter.data` is no longer recommended, because the `data` field will be deprecated in the future.

The replacement is `tensor.set_`.

ghstack-source-id: 136531233

Test Plan: buck test mode/dev-nosan //caffe2/test/distributed:distributed_nccl_spawn -- test_periodic_model_averager

Reviewed By: SciPioneer

Differential Revision: D30513613

fbshipit-source-id: 402efb9c30fafc3f285bebc631639f656ceae585

3 years agoupdate readme and contributing.md (#63843)
Yi Zhang [Tue, 24 Aug 2021 17:50:57 +0000 (10:50 -0700)]
update readme and contributing.md (#63843)

Summary:
1. In fact, Visual Studio isn't supported as a CMake generator
2. I was asked many times why there's an error like 'Could NOT find OpenMP'
3. Add the newly added Best Practices link to CONTRIBUTING.md

Pull Request resolved: https://github.com/pytorch/pytorch/pull/63843

Reviewed By: seemethere, heitorschueroff

Differential Revision: D30514095

Pulled By: janeyx99

fbshipit-source-id: 76715a1d8c049122546e5a7778cafe54e4dfd5d6

3 years agoSubprocess encoding fixes for cpp extension (#63756)
peterjc123 [Tue, 24 Aug 2021 17:44:45 +0000 (10:44 -0700)]
Subprocess encoding fixes for cpp extension (#63756)

Summary:
Fixes https://github.com/pytorch/pytorch/issues/63584

Pull Request resolved: https://github.com/pytorch/pytorch/pull/63756

Reviewed By: bdhirsh

Differential Revision: D30485046

Pulled By: ezyang

fbshipit-source-id: 4f0ac383da4e8843e2a602dceae85f389d7434ee

3 years agoadd bf16 support for bucketize (#55588)
mingfeima [Tue, 24 Aug 2021 17:30:18 +0000 (10:30 -0700)]
add bf16 support for bucketize (#55588)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/55588

Test Plan: Imported from OSS

Reviewed By: bdhirsh

Differential Revision: D28836796

Pulled By: VitalyFedyunin

fbshipit-source-id: c9ae5b969c30a45473533be5f29bb497f8da5143

3 years ago[pruner] modify base pruner to prune bias by default (#63202)
Karen Zhou [Tue, 24 Aug 2021 17:17:28 +0000 (10:17 -0700)]
[pruner] modify base pruner to prune bias by default (#63202)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63202

By default, the pruner will also prune biases, so that the whole output channel is removed. The user can manually set `also_prune_bias` to False when calling `prepare` if they don't want the bias to be pruned.
ghstack-source-id: 136466671
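
A hedged usage sketch, assuming `pruner`, `model`, and `config` objects as constructed in the AO pruning tests (not shown here):

```python
# Default: biases are pruned along with weights.
pruner.prepare(model, config)

# Opt out of bias pruning explicitly.
pruner.prepare(model, config, also_prune_bias=False)
```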

Test Plan:
`buck test mode/dev-nosan //caffe2/test:ao -- TestBasePruner`

https://pxl.cl/1MV32

modify `fusion_tests` according to API change
`buck test mode/opt //scripts/kazhou:fusion_tests`

https://pxl.cl/1NbKz

Reviewed By: z-a-f

Differential Revision: D30294494

fbshipit-source-id: c84655648bee0035559195ca855b98fb7edaa134

3 years ago[pruner] amend base pruner API to match base sparsifier (#63178)
Karen Zhou [Tue, 24 Aug 2021 17:17:28 +0000 (10:17 -0700)]
[pruner] amend base pruner API to match base sparsifier (#63178)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63178

Update base pruner API to match base sparsifier API as defined in D28970960 / PR58955

Changes include:
- `enable_mask_update = True` in `__init__`
- `prepare` takes model and config instead of constructor
- convert functionality renamed to `squash_mask`, `convert` method call now raises Error
- `activation_handles` and `bias_handles` initialized in `_prepare` instead of the constructor (see the sketch below)
ghstack-source-id: 136467595
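
A usage sketch reflecting the new API (the object names are illustrative):

```python
pruner = BasePruner(defaults)  # model/config no longer passed to the constructor
pruner.prepare(model, config)  # mask updates are enabled from __init__
pruner.step()
pruner.squash_mask()           # formerly `convert`, which now raises an error
```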

Test Plan:
Function names updates according to changes

`buck test mode/dev-nosan //caffe2/test:ao -- TestBasePruner`

https://pxl.cl/1MTgH

TODO will need to modify `fbcode/scripts/kazhou/fusion_tests.py` to use new API

Reviewed By: z-a-f

Differential Revision: D30287179

fbshipit-source-id: d4727bea1873b500f2d4bb784db26d532bf26cce

3 years ago[pruner] refactor `ActivationReconstruction` forward hooks (#63158)
Karen Zhou [Tue, 24 Aug 2021 17:17:28 +0000 (10:17 -0700)]
[pruner] refactor `ActivationReconstruction` forward hooks (#63158)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63158

Combined the `ActivationReconstruction` functionality for both Linear and Conv2d into one class. The only difference between the old classes was the size and indexing of the reconstructed tensor -- that logic can be generalized by iterating over the size of `output`.
ghstack-source-id: 136467465

Test Plan:
`buck test mode/dev-nosan //caffe2/test:ao -- TestBasePruner`

https://pxl.cl/1MSSv

Reviewed By: raghuramank100

Differential Revision: D30282765

fbshipit-source-id: 08a1e4e0650511019fff85cf52b41dd818b0c7f8

3 years ago[Static Runtime] Implement prim::VarStack out variant (#63579)
Mike Iovine [Tue, 24 Aug 2021 16:38:25 +0000 (09:38 -0700)]
[Static Runtime] Implement prim::VarStack out variant (#63579)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63579

Provide a static runtime out variant implementation for the new op introduced in D30426232 (https://github.com/pytorch/pytorch/commit/1385f9fb12e6607c98d2d9d5edaaaab2bc07386f).

Test Plan: `buck test //caffe2/benchmarks/static_runtime:static_runtime_cpptest -- IndividualOps_VarStack`

Reviewed By: navahgar

Differential Revision: D30410525

fbshipit-source-id: bc59a3d8ad23e3d94561ec2dca9cc20687dbadf8

3 years ago[Reland] Embedding thrust->cub migration (#63806)
Xiang Gao [Tue, 24 Aug 2021 16:24:50 +0000 (09:24 -0700)]
[Reland] Embedding thrust->cub migration (#63806)

Summary:
Fixes https://github.com/pytorch/pytorch/issues/63427

Pull Request resolved: https://github.com/pytorch/pytorch/pull/63806

Reviewed By: bdhirsh

Differential Revision: D30498255

Pulled By: ngimel

fbshipit-source-id: 78b7085a92a168cf0163f53dcb712bac922f5235

3 years agooptimize BFloat16 elemwise operators CPU: sigmoid, sigmoid_backward, tanh_backward...
mingfeima [Tue, 24 Aug 2021 15:54:36 +0000 (08:54 -0700)]
optimize BFloat16 elemwise operators CPU: sigmoid, sigmoid_backward, tanh_backward, addcmul, addcdiv (#55221)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/55221

Test Plan: Imported from OSS

Reviewed By: bdhirsh

Differential Revision: D28836797

Pulled By: VitalyFedyunin

fbshipit-source-id: 6b79098c902ffe65d228668118ef36fb49bab800

3 years agoEnable BFloat16 LeakyReLU and RReLU in CPU path (#61514)
yanbing-j [Tue, 24 Aug 2021 15:32:33 +0000 (08:32 -0700)]
Enable BFloat16 LeakyReLU and RReLU in CPU path (#61514)

Summary:
Enable and optimize BFloat16 LeakyReLU and RReLU in CPU path.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61514

Reviewed By: ejguan

Differential Revision: D30257612

Pulled By: VitalyFedyunin

fbshipit-source-id: 8cc0d1faacd02dcc9827af724a86d95b6952748f

3 years agoENH Adds no_batch_dim for NLLLoss (#62651)
Thomas J. Fan [Tue, 24 Aug 2021 15:26:21 +0000 (08:26 -0700)]
ENH Adds no_batch_dim for NLLLoss (#62651)

Summary:
Towards https://github.com/pytorch/pytorch/issues/60585

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62651

Reviewed By: VitalyFedyunin

Differential Revision: D30303340

Pulled By: jbschlosser

fbshipit-source-id: 7ab478cf63bf6cd1f850cad5fd101e74a2cfe3f5

3 years agofix batchnorm2d issue when input is non contiguous (#63392)
mingfeima [Tue, 24 Aug 2021 15:22:47 +0000 (08:22 -0700)]
fix batchnorm2d issue when input is non contiguous (#63392)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/63392

Test Plan: Imported from OSS

Reviewed By: ejguan

Differential Revision: D30476317

Pulled By: VitalyFedyunin

fbshipit-source-id: 03055a0aec21cf2c029b6f32315da2b09cb722d0

3 years ago[JIT] Add variadic stack op (#63578)
Mike Iovine [Tue, 24 Aug 2021 15:19:38 +0000 (08:19 -0700)]
[JIT] Add variadic stack op (#63578)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63578

Added a new op `prim::VarStack` and a pass that transforms instances of `aten::stack(list, dim)` into `prim::VarStack(list[0], ..., list[n], dim)`. Also provided a JIT interpreter implementation.

Most of the implementation/tests are the same as `prim::VarConcat`.
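
For illustration, a scripted function containing the `aten::stack`-over-a-list pattern the new pass targets:

```python
import torch

@torch.jit.script
def stack3(a: torch.Tensor, b: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
    # The pass rewrites aten::stack(%list, %dim) over this ListConstruct
    # into prim::VarStack(%a, %b, %c, %dim).
    return torch.stack([a, b, c], dim=0)
```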

Test Plan: `buck test caffe2/test/cpp/jit:jit -- TestStackOpt`

Reviewed By: navahgar

Differential Revision: D30426232

fbshipit-source-id: 9829a7db6e0a5038c9b7528c43c25b0c221aa2ce

3 years ago[BE] add distributed run_test options (#63147)
Rong Rong (AI Infra) [Tue, 24 Aug 2021 15:01:36 +0000 (08:01 -0700)]
[BE] add distributed run_test options (#63147)

Summary:
Currently, distributed tests are mixed in with test_python.
We would like to run the distributed tests as their own batch, so we need to split them out.

This adds an option to include/exclude distributed tests with CUSTOM_HANDLERS.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/63147

Test Plan:
- locally run with the additional run_test.py options.
- CI

Dependency: found a bug in mpiexec test and need https://github.com/pytorch/pytorch/issues/63580 to fix it first.

Reviewed By: bdhirsh

Differential Revision: D30496178

Pulled By: walterddr

fbshipit-source-id: 7903a57b619f2425028028f944211938823918a6

3 years agoRevert D30388099: Add a common autograd TLS state
Alban Desmaison [Tue, 24 Aug 2021 14:20:56 +0000 (07:20 -0700)]
Revert D30388099: Add a common autograd TLS state

Test Plan: revert-hammer

Differential Revision:
D30388099 (https://github.com/pytorch/pytorch/commit/83d9bad44a1e1e6202103cd22e4dbd2bd3d7dae0)

Original commit changeset: 8e03f940150f

fbshipit-source-id: f6d60fec66e8292f5268335bb8a3e7e1a662f23b

3 years agoENH Adds no_batch_dim tests/docs for LPPool1d and Identity (#62190)
Thomas J. Fan [Tue, 24 Aug 2021 13:58:05 +0000 (06:58 -0700)]
ENH Adds no_batch_dim tests/docs for LPPool1d and Identity (#62190)

Summary:
Fixes https://github.com/pytorch/pytorch/issues/60585

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62190

Reviewed By: ejguan

Differential Revision: D29942385

Pulled By: jbschlosser

fbshipit-source-id: 00df6f6f01ad039631bb8679f8de94863aac7650

3 years agoAdd a common autograd TLS state (#63114)
albanD [Tue, 24 Aug 2021 13:52:38 +0000 (06:52 -0700)]
Add a common autograd TLS state (#63114)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63114

This PR collapses the GradMode and InferenceMode thread-local booleans into a single thread-local uint8.
This helps reduce the number of thread-local variable accesses done when we propagate ThreadLocalStates.

Note that this is even more beneficial as we will add a forward mode AD TLS (similar to GradMode) higher in this stack and this new structure should reduce the perf impact of adding this new TLS.

Here is the full benchmark result between master and the top of this stack: https://gist.github.com/albanD/e421101e9ed344e94999bef3a54bf0f3
tl;dr: it gives a benefit in most cases and is never detrimental.
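
The idea, sketched in Python (the real change packs the flags into a single thread-local `uint8_t` in C++; the bit positions here are hypothetical):

```python
GRAD_MODE_BIT = 1 << 0
INFERENCE_MODE_BIT = 1 << 1

state = 0  # one thread-local byte instead of two thread-local booleans

def with_grad_mode(state: int, enabled: bool) -> int:
    # Set or clear the GradMode bit without touching the other flags.
    return state | GRAD_MODE_BIT if enabled else state & ~GRAD_MODE_BIT

state = with_grad_mode(state, True)
assert bool(state & GRAD_MODE_BIT)
```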

Test Plan: Imported from OSS

Reviewed By: ejguan

Differential Revision: D30388099

Pulled By: albanD

fbshipit-source-id: 8e03f940150ff063c2edd792733663413ae2f486

3 years agoSeparating quantization test from distributed_test (#63058)
Marjan Fariborz [Tue, 24 Aug 2021 08:43:33 +0000 (01:43 -0700)]
Separating quantization test from distributed_test (#63058)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63058

Dedicating separate tests for different quantization methods. Currently supporting FP16 method.
ghstack-source-id: 136499767

Test Plan: buck test mode/dev //caffe2/test/distributed/algorithms/quantization:quantization_gloo_fork -- name_of_the_test

Reviewed By: wanchaol

Differential Revision: D30142580

fbshipit-source-id: 3aacec1a231a662067d2b48c001f0c69fefcdd60

3 years ago[TensorExpr] Nuke KernelArena and KernelScope. (#63587)
Mikhail Zolotukhin [Tue, 24 Aug 2021 07:29:22 +0000 (00:29 -0700)]
[TensorExpr] Nuke KernelArena and KernelScope. (#63587)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63587

Now that there are no classes using KernelArena for memory management, we
can remove it.

Differential Revision: D30429115

Test Plan: Imported from OSS

Reviewed By: navahgar

Pulled By: ZolotukhinM

fbshipit-source-id: 375f6f9294d27790645eeb7cb5a8e87047a57544

3 years ago[TensorExpr] Make 'Tensor' a value type. (#63586)
Mikhail Zolotukhin [Tue, 24 Aug 2021 07:29:22 +0000 (00:29 -0700)]
[TensorExpr] Make 'Tensor' a value type. (#63586)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63586

This is another commit in transition from KernelArena memory management.
Tensor is essentially just a pair of <BufPtr, StmtPtr> and we don't need
to dynamically allocate it at all - it's cheap to pass it by value, and
that's what we're switching to in this commit.

After this change nothing uses KernelScope/KernelArena and they can be
safely removed.

Differential Revision: D30429114

Test Plan: Imported from OSS

Reviewed By: navahgar

Pulled By: ZolotukhinM

fbshipit-source-id: f90b859cfe863692b7beffbe9bd0e4143df1e819

3 years ago[TensorExpr] Switch Exprs and Stmt from kernel-arena to shared_ptr. (#63216)
Mikhail Zolotukhin [Tue, 24 Aug 2021 07:29:22 +0000 (00:29 -0700)]
[TensorExpr] Switch Exprs and Stmt from kernel-arena to shared_ptr. (#63216)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63216

Currently there are three classes managed by KernelArena: Expr, Stmt,
and Tensor (and derived classes). KernelArena has been a long-standing
pain point for NNC devs, and we're moving away from that memory-management
model to a ref-count-based memory model (using shared_ptr). This commit
switches Expr and Stmt to shared_ptr and is the biggest change in this
transition. Later commits will detach Tensor from KernelArena and kill
the arena + scope altogether.

Differential Revision: D30353195

Test Plan: Imported from OSS

Reviewed By: navahgar

Pulled By: ZolotukhinM

fbshipit-source-id: 9575225ada3d0fb65087ae40435f3dfea4792cae

3 years ago[TensorExpr] More NFC changes like Expr* -> ExprPtr. (#63778)
Mikhail Zolotukhin [Tue, 24 Aug 2021 07:29:22 +0000 (00:29 -0700)]
[TensorExpr] More NFC changes like Expr* -> ExprPtr. (#63778)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63778

This is a preparation for a switch from raw pointers to shared pointers
as a memory model for TE expressions and statements.

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D30487425

Pulled By: ZolotukhinM

fbshipit-source-id: 9cbe817b7d4e5fc2f150b29bb9b3bf578868f20c

3 years agoadd channels last for GroupNorm (#49821)
mingfeima [Tue, 24 Aug 2021 05:53:35 +0000 (22:53 -0700)]
add channels last for GroupNorm (#49821)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/49821

Test Plan: Imported from OSS

Reviewed By: ejguan

Differential Revision: D26007053

Pulled By: VitalyFedyunin

fbshipit-source-id: 34a48d5d3b66a159febf3c3d96748fbaba1b9e31

3 years agoAdd ROCm as a platform for which tests can be disabled (#63813)
Jane Xu [Tue, 24 Aug 2021 01:44:46 +0000 (18:44 -0700)]
Add ROCm as a platform for which tests can be disabled (#63813)

Summary:
Realized we were missing ROCm as a platform on which one could disable a flaky test (like how this issue specifies Windows: https://github.com/pytorch/pytorch/issues/61655).

cc jeffdaily sunway513 jithunnair-amd ROCmSupport

Pull Request resolved: https://github.com/pytorch/pytorch/pull/63813

Reviewed By: seemethere

Differential Revision: D30498478

Pulled By: janeyx99

fbshipit-source-id: f1abe8677e1ddd01de3291e1618272ad8e287dc4

3 years ago[Static Runtime] SR clones graph input (#63704)
Mike Iovine [Tue, 24 Aug 2021 01:43:17 +0000 (18:43 -0700)]
[Static Runtime] SR clones graph input (#63704)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63704

Previously SR did not clone the graph. This was leading to subtle bugs in `testStaticRuntime`; static runtime would modify its graph, and the graph used by the JIT interpreter would change as well. The JIT interpreter would then crash if SR-only ops were added!

Cloning the graph is more consistent with the behavior of the `Module` ctor.

Test Plan: `buck test caffe2/benchmarks/static_runtime/...`

Reviewed By: hlu1

Differential Revision: D30463294

fbshipit-source-id: b771551a1f55f95fde79373b23babcf3e5ddf726

3 years ago[fx2trt] Add acc op and converter for torch.pow (#63795)
Shiyan Deng [Tue, 24 Aug 2021 01:17:20 +0000 (18:17 -0700)]
[fx2trt] Add acc op and converter for torch.pow (#63795)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63795

att

Test Plan: buck run mode/opt caffe2/torch/fb/fx2trt:test_binary_ops

Reviewed By: jackm321, wushirong

Differential Revision: D30492488

fbshipit-source-id: 6d615770567b13720316f06fd2f866ea2fdc2995

3 years agoAdding DataLoader2 class as future replacement of DataLoader (#63742)
Vitaly Fedyunin [Tue, 24 Aug 2021 01:07:37 +0000 (18:07 -0700)]
Adding DataLoader2 class as future replacement of DataLoader (#63742)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63742

Supports sharding and batching at the loader level.

Test Plan: Imported from OSS

Reviewed By: ejguan

Differential Revision: D30494506

Pulled By: VitalyFedyunin

fbshipit-source-id: 6648e09d955055ac38e3a4e3973f701acefca762

3 years ago[BE] Enable PostLocalSGD tests on windows (#63463)
Rohan Varma [Tue, 24 Aug 2021 00:45:39 +0000 (17:45 -0700)]
[BE] Enable PostLocalSGD tests on windows (#63463)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63463

Now that `torch.distributed.optim` gates DistributedOptimizer on RPC availability, local sgd optimizer can be used on windows.
ghstack-source-id: 136437632

Test Plan: Ci

Reviewed By: SciPioneer

Differential Revision: D30358922

fbshipit-source-id: 9b56aebf1075f026637296d338805ad8851c9d40

3 years ago[BE] Enable functional optim tests for windows (#63462)
Rohan Varma [Tue, 24 Aug 2021 00:45:39 +0000 (17:45 -0700)]
[BE] Enable functional optim tests for windows (#63462)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63462

Now that `torch.distributed.optim` gates DistributedOptimizer on RPC availability, these tests can be run on windows.
ghstack-source-id: 136437635

Test Plan: CI

Reviewed By: SciPioneer

Differential Revision: D30358923

fbshipit-source-id: 36739bdfe7214789f17de652d30c62c2bc124c73

3 years ago[fx_acc] Add mapper for torch.log1p (#63792)
Shiyan Deng [Tue, 24 Aug 2021 00:41:38 +0000 (17:41 -0700)]
[fx_acc] Add mapper for torch.log1p (#63792)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63792

Map `torch.log1p` to `acc_ops.add` + `acc_ops.log`.
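
The mapping relies on the identity log1p(x) = log(1 + x), e.g.:

```python
import torch

x = torch.rand(4)
assert torch.allclose(torch.log1p(x), torch.log(x + 1))
```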

Test Plan: buck test mode/opt glow/fb/fx/oss_acc_tracer:test_acc_tracer -- test_log1p

Reviewed By: wushirong

Differential Revision: D30491706

fbshipit-source-id: bcbeddf06131113185d2019cfd7cf5e9193a8a78

3 years agoFix pocketfft include path in mobile build (#63714)
Peter Bell [Tue, 24 Aug 2021 00:39:50 +0000 (17:39 -0700)]
Fix pocketfft include path in mobile build (#63714)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63714

PocketFFT was disabled for CMake < 3.9, but CMake 3.11 is the first version to support `INCLUDE_DIRECTORIES` as a target property. So updating to CMake 3.10 causes the mobile builds to fail. Instead of limiting the CMake support, this just adds the include directory to the entire target.

Test Plan: Imported from OSS

Reviewed By: bdhirsh

Differential Revision: D30498369

Pulled By: malfet

fbshipit-source-id: 83372e29c477c97e7015763b7c29d6d7e456bcef

3 years agoSimplify ccache instructions in CONTRIBUTING.md (#62549)
Peter Bell [Tue, 24 Aug 2021 00:39:45 +0000 (17:39 -0700)]
Simplify ccache instructions in CONTRIBUTING.md (#62549)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62549

When building CUDA files with native CMake support, it will respect the
`CMAKE_CUDA_COMPILER_LAUNCHER` setting. So, there's no need for symlinks.

Test Plan: Imported from OSS

Reviewed By: bdhirsh

Differential Revision: D30498488

Pulled By: malfet

fbshipit-source-id: 71c2ae9d4570cfac2a64d777bc95cda3764332a0

3 years agoSkip archiving useless build artifacts (#63785)
driazati [Tue, 24 Aug 2021 00:30:51 +0000 (17:30 -0700)]
Skip archiving useless build artifacts (#63785)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63785

We currently zip up everything in `build/` which includes a lot of cruft (`.o` files, random things copied in from dependencies, etc). This makes the artifact bigger (slower upload/download times, and takes about 1.5 minutes to archive). This change makes archiving instead take ~15 seconds and removes the 50 second upload to GitHub step that isn't as useful now that we have the HUD PR page that lists out all artifacts.

Test Plan: Imported from OSS

Reviewed By: seemethere, janeyx99

Differential Revision: D30494444

Pulled By: driazati

fbshipit-source-id: 93202dba7387daeb4859a938110b02ff2dc2ccc4

3 years agoFix some memory bugs in onnx passes (#63754)
Bert Maher [Tue, 24 Aug 2021 00:28:33 +0000 (17:28 -0700)]
Fix some memory bugs in onnx passes (#63754)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63754

Running onnx tests with ASAN uncovers several memory errors.  These two are caused by: (1) iterating the uses list of a node after mutation, and (2) accessing the `blocks` attribute of a possibly deleted node.
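
The first class of bug, illustrated in Python for brevity (the actual code iterates a node's C++ uses list):

```python
uses = [1, 2, 3, 4]

# Buggy pattern: mutating the container while iterating over it
# skips elements here, and is undefined behavior with C++ iterators.
#   for u in uses:
#       uses.remove(u)

# Fix: iterate over a snapshot, then mutate the original.
for u in list(uses):
    uses.remove(u)
assert uses == []
```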

To reproduce (this is on a CentOS 7 box):
```
DEBUG=1 CFLAGS="-fsanitize=address" CXXFLAGS="-fsanitize=address" USE_LLVM=$(realpath ../llvm-project/install) CMAKE_PREFIX_PATH=$CONDA_PREFIX python setup.py install
LD_PRELOAD=$(realpath /lib64/libasan.so.5) numactl -C3 pytest -v --cov --cov-report xml:test/coverage.xml --cov-append onnx/test_pytorch_onnx_onnxruntime.py::TestONNXRuntime_opset11 -s
```

Test Plan: Imported from OSS

Reviewed By: ZolotukhinM

Differential Revision: D30493939

Pulled By: bertmaher

fbshipit-source-id: e16e19dc9b4c9896e102ca8bf04c8bedfdde87af

3 years ago[JIT] Move UseVariadicCat internals (#63577)
Mike Iovine [Tue, 24 Aug 2021 00:26:27 +0000 (17:26 -0700)]
[JIT] Move UseVariadicCat internals (#63577)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63577

Since other variadic ops will have an almost identical implementation, we can generalize the `UseVariadicCat` implementation and put it in a common folder.

Also moved some test utilities that other variadic op tests will likely need.

Test Plan: `buck test caffe2/test/cpp/jit:jit -- ConcatOptTest`

Reviewed By: navahgar

Differential Revision: D30409937

fbshipit-source-id: 925c11c27b58ce98cb8368d2a205e26ba66d3db9

3 years agoFix typo in NNAPI tests (#63797)
Akshit Khurana [Mon, 23 Aug 2021 23:33:07 +0000 (16:33 -0700)]
Fix typo in NNAPI tests (#63797)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63797

The NNAPI memory format test has a typo.

Test Plan:
pytest test/test_nnapi.py::TestNNAPI

Imported from OSS

Reviewed By: Amyh11325

Differential Revision: D30495473

fbshipit-source-id: 8edad7c01a080847a64a2797e077ec4d6077552a

3 years ago[Static Runtime] Add an out variant op for aten::abs (#63675)
Don Jang [Mon, 23 Aug 2021 23:20:27 +0000 (16:20 -0700)]
[Static Runtime] Add an out variant op for aten::abs (#63675)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63675

This change adds an out variant implementation for `aten::abs`.

Test Plan:
- Observed `V0820 14:14:08.880342 101788 impl.cpp:1394] Switch to out variant for node: %3 : Tensor = aten::abs(%a.1)`

- Perf impact: TBD

Reviewed By: hlu1

Differential Revision: D30461317

fbshipit-source-id: 0c0230bd40afe463ae1ccb222c2a1207ebcf4191

3 years agofix git diff issue (#63408)
Rong Rong (AI Infra) [Mon, 23 Aug 2021 22:36:59 +0000 (15:36 -0700)]
fix git diff issue (#63408)

Summary:
Fixes https://github.com/pytorch/pytorch/issues/60111. Ideally we should merge this before https://github.com/pytorch/pytorch/issues/63360, but we can also easily test it together with https://github.com/pytorch/pytorch/issues/63360.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/63408

Test Plan:
- Confirmed working with a local test.sh run by setting PR_NUMBER.
- Should be validated by GHA CI as well.

Concern:
- Currently GHA CI is consistently hitting a proxy 403 rate-limit-exceeded issue. However, the worst case is generating no git diff files at all, which is exactly the same as the current behavior.
- Depends on https://github.com/pytorch/pytorch/issues/63770.

Reviewed By: driazati, janeyx99

Differential Revision: D30489355

Pulled By: walterddr

fbshipit-source-id: a638b7ae5820f29a7aca6cc40ff390ab253cb174

3 years ago.github: Add ec2 information as a step (#63784)
Eli Uriegas [Mon, 23 Aug 2021 22:02:10 +0000 (15:02 -0700)]
.github: Add ec2 information as a step (#63784)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63784

Also creates the common.yml.j2 file as a place to store code shared
among the templates.

Should look like:
![image](https://user-images.githubusercontent.com/1700823/130495226-f18b8c0f-1ea7-4097-8bbb-e998fabb71f2.png)

Signed-off-by: Eli Uriegas <eliuriegas@fb.com>
Test Plan: Imported from OSS

Reviewed By: malfet, driazati

Differential Revision: D30490682

Pulled By: seemethere

fbshipit-source-id: 18028b4acff938ef54cd6e4877561b2d830a11cf

3 years agoRename DataPipe to Op-er (#63325)
Erjia Guan [Mon, 23 Aug 2021 21:32:56 +0000 (14:32 -0700)]
Rename DataPipe to Op-er (#63325)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63325

Rename each DataPipe to an operation name ending in "-er". The functional API should remain a verb, such as `read_from_tar` or `shuffle` (discussed [here](https://github.com/facebookexternal/torchdata/pull/97#discussion_r688553905)).
- Batch -> Batcher
- Collate -> Collator
- Concat -> Concater
- GroupByKey -> ByKeyGrouper?
- ListDirFiles -> FileLister
- LoadFilesFromDisk -> FileLoader
- Map -> Mapper
- ReadFilesFromTar -> TarArchiveReader
- ReadFilesFromZip -> ZipArchiveReader
- ReadLinesFromFile -> LineReader
- Shuffle -> Shuffler
- ToBytes -> StreamReader
- Transforms -> Transformer
- Zip -> Zipper

Let me know if you have a better name for any of these DataPipes. A toy sketch of the naming convention follows.
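
A minimal, self-contained illustration of the convention (not the torchdata source; the `functional_name` attribute below is made up to stand in for the real registration mechanism):

```python
import random
from typing import Iterable, Iterator, TypeVar

T = TypeVar("T")

class Shuffler:
    """Class names are nouns ending in -er; the functional form stays
    a verb (here, `shuffle`)."""

    functional_name = "shuffle"  # hypothetical registration hook

    def __init__(self, source: Iterable[T]) -> None:
        self.source = source

    def __iter__(self) -> Iterator[T]:
        items = list(self.source)
        random.shuffle(items)
        yield from items

print(list(Shuffler(range(5))))
```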

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D30466950

Pulled By: ejguan

fbshipit-source-id: 72909dca7b3964ab83b965891f96cc1ecf62d049

3 years agoAdd equality constraints for some acc operations for symbolic inference (#63689)
Zeina Migeed [Mon, 23 Aug 2021 21:09:10 +0000 (14:09 -0700)]
Add equality constraints for some acc operations for symbolic inference (#63689)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/63689

Test Plan:
```
buck run mode/opt-clang caffe2/torch/fb/model_transform/experimental:fx_ir_lower_inline_cvr -- \
    --action=lower_and_run \
    --filename=inline_cvr_7x_dec_2020.model \
    --print_glow_glog=True
```

Reviewed By: jamesr66a

Differential Revision: D30462113

fbshipit-source-id: 0b2a1ce9770561248527d47c07b80112491dc949

3 years ago[Static Runtime] Remove unused fusion patterns (#63636)
Hao Lu [Mon, 23 Aug 2021 19:53:42 +0000 (12:53 -0700)]
[Static Runtime] Remove unused fusion patterns (#63636)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/63636

Reviewed By: d1jang

Differential Revision: D30446573

fbshipit-source-id: 3abb7f697380f3b4e865b98c594de359b5e26b96

3 years ago[nnc] Re-enable CPU fusion (#63665)
Bert Maher [Mon, 23 Aug 2021 19:41:32 +0000 (12:41 -0700)]
[nnc] Re-enable CPU fusion (#63665)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63665

This reverts commit 125e2d02e575612eb427104e7c67f1c28f090db8.
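
With CPU fusion back on, eligible elementwise graphs can be compiled into fused NNC kernels. A quick way to observe this from Python (a sketch; `_jit_set_texpr_fuser_enabled` is an internal, undocumented toggle, and whether fusion actually kicks in depends on the executor configuration):

```python
import torch

# Internal toggle for the NNC/TensorExpr fuser.
torch._C._jit_set_texpr_fuser_enabled(True)

@torch.jit.script
def f(a, b):
    return (a * b + b).relu()

a, b = torch.randn(1024), torch.randn(1024)
f(a, b); f(a, b)  # warm up so the profiling executor can specialize
print(f.graph_for(a, b))  # may show a fused TensorExprGroup node
```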

Test Plan: Imported from OSS

Reviewed By: ZolotukhinM

Differential Revision: D30471646

Pulled By: bertmaher

fbshipit-source-id: 4189869566f03b5f9ada78d78830f6a34946eed6

3 years agoKill THCUNN (#63429)
Peter Bell [Mon, 23 Aug 2021 19:05:51 +0000 (12:05 -0700)]
Kill THCUNN (#63429)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/63429

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D30441308

Pulled By: ngimel

fbshipit-source-id: 3ae342a2f8d5c7f8827b637c4055c5d1b0a1be26

3 years agofix mpi ssh runtime error (#63580)
Rong Rong (AI Infra) [Mon, 23 Aug 2021 16:44:09 +0000 (09:44 -0700)]
fix mpi ssh runtime error (#63580)

Summary:
Should fix https://github.com/pytorch/pytorch/issues/60756.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/63580

Test Plan:
- The CI on this PR.
- Validated by running on the bionic_cuda container: https://app.circleci.com/pipelines/github/pytorch/pytorch/366632/workflows/478602fb-698f-4210-ac09-d9c61af5c62b/jobs/15472104

Reviewed By: malfet

Differential Revision: D30486472

Pulled By: walterddr

fbshipit-source-id: d83ab88d163d4a468f03961a13d891b658668a7f

3 years agohotfix clone issue (#63770)
Rong Rong (AI Infra) [Mon, 23 Aug 2021 16:28:21 +0000 (09:28 -0700)]
hotfix clone issue (#63770)

Summary:
This was discovered during https://github.com/pytorch/pytorch/issues/63408. For some reason, this is the only checkout action that does not set fetch-depth correctly.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/63770

Reviewed By: malfet, janeyx99

Differential Revision: D30486110

Pulled By: walterddr

fbshipit-source-id: a67395cca2487407ed0d49c8c89587935ca5f212

3 years ago[ONNX] add test images to repo (#63717)
Gary Miguel [Mon, 23 Aug 2021 14:41:33 +0000 (07:41 -0700)]
[ONNX] add test images to repo (#63717)

Summary:
This is better than the status quo:
* Test doesn't download files from the internet -> faster and more
  reliable.
* Test doesn't leave the git working directory dirty.

Rather than using the original images, I've copied some images from
the pytorch/vision repo. This will keep the tests in the two repos
in sync, while avoiding adding new assets to the vision repo.

See https://github.com/pytorch/vision/pull/4176.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/63717

Reviewed By: janeyx99

Differential Revision: D30466016

Pulled By: malfet

fbshipit-source-id: 2c56d4c11b5c74db1764576bf1c95ce4ae714574

3 years agoAllow implementing either backward or vjp for Function (#63434)
Alban Desmaison [Mon, 23 Aug 2021 14:05:51 +0000 (07:05 -0700)]
Allow implementing either backward or vjp for Function (#63434)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/63434
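
For context, a custom `Function` normally spells its derivative as `backward`; per this change, the same staticmethod can instead be provided as `vjp`. A standard example with `backward` (the `vjp` spelling would be identical apart from the name, assuming this PR's behavior):

```python
import torch

class Square(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x * x

    @staticmethod
    def backward(ctx, grad_output):  # or, after this PR, `vjp`
        (x,) = ctx.saved_tensors
        return 2 * x * grad_output

x = torch.randn(3, requires_grad=True)
Square.apply(x).sum().backward()
print(x.grad)  # matches 2 * x
```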

Test Plan: Imported from OSS

Reviewed By: ejguan

Differential Revision: D30431968

Pulled By: albanD

fbshipit-source-id: 0bb88664283486a9fd3364e6c3d79442a44625c2

3 years agoUpdate ROCm PyTorch persons of interest (#55206)
Jithun Nair [Mon, 23 Aug 2021 05:29:04 +0000 (22:29 -0700)]
Update ROCm PyTorch persons of interest (#55206)

Summary:
cc jeffdaily sunway513

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55206

Reviewed By: VitalyFedyunin

Differential Revision: D30296584

Pulled By: dzhulgakov

fbshipit-source-id: 6e5c610cc6b7c7fd58b80fa3f9de31f269341a88

3 years agoRemove `_fork_processes` from common_distributed.py (#63711)
Pritam Damania [Mon, 23 Aug 2021 01:55:45 +0000 (18:55 -0700)]
Remove `_fork_processes` from common_distributed.py (#63711)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63711

This removes `_fork_process` from common_distributed.py and updates all
other call sites to use `spawn_process` instead.
ghstack-source-id: 136395719
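
Spawn-based startup in PyTorch generally goes through `torch.multiprocessing.spawn`; a minimal sketch (not the test-suite helper itself):

```python
import torch.multiprocessing as mp

def worker(rank: int, world_size: int) -> None:
    # Each process receives its rank as the first argument.
    print(f"rank {rank} of {world_size}")

if __name__ == "__main__":
    world_size = 2
    # spawn() starts fresh interpreter processes rather than forking,
    # avoiding fork-related hazards with threads and CUDA contexts.
    mp.spawn(worker, args=(world_size,), nprocs=world_size)
```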

Test Plan: waitforbuildbot

Reviewed By: xush6528

Differential Revision: D30463834

fbshipit-source-id: 0c09e8a996d0e5b912c8cdd45488a39951bac4db

3 years agoMade FuncTorchBatched decompose CompositeImplicitAutograd (#63616)
Horace He [Sun, 22 Aug 2021 00:13:27 +0000 (17:13 -0700)]
Made FuncTorchBatched decompose CompositeImplicitAutograd (#63616)

Summary:
See https://github.com/facebookresearch/functorch/issues/56

Pull Request resolved: https://github.com/pytorch/pytorch/pull/63616

Reviewed By: zou3519

Differential Revision: D30438316

Pulled By: Chillee

fbshipit-source-id: e84446d9f68b87daa0cfff75b3b8a972f36ec85a