Fix race in AtomicFetchAdd. (#13479)
author Yavuz Yetim <yyetim@fb.com>
Mon, 19 Nov 2018 23:57:28 +0000 (15:57 -0800)
committer Facebook Github Bot <facebook-github-bot@users.noreply.github.com>
Tue, 20 Nov 2018 00:11:58 +0000 (16:11 -0800)
commit a20c7ce8484575ef220fdfe8f6f6f286e5cb0e16
tree 270ca2c911b31f1ad7eb99e9bc58bd1297e423d3
parent 1a299504788ae62cad5ae776130351cf75f3484f
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13479

Widens the lock scope so that it also covers the Output() calls.

These calls may lazily allocate the underlying blob/tensor
objects, so concurrent invocations of the operator race with
each other over the same output blobs/tensors.

Reviewed By: bwasti

Differential Revision: D12891629

fbshipit-source-id: a6015cfdb08e352521a1f062eb9d94a971cfbdb0
caffe2/operators/atomic_ops.cc