[Static Runtime] Fix resize_output_check warning coming from prim::VarConcat (#64765)
author Harut Movsisyan <harutm@fb.com>
Thu, 9 Sep 2021 21:35:00 +0000 (14:35 -0700)
committer Facebook GitHub Bot <facebook-github-bot@users.noreply.github.com>
Thu, 9 Sep 2021 21:38:50 +0000 (14:38 -0700)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/64765

Test Plan: Tested the fix with the BR v1 model predictor-replayer setup.

Reviewed By: ajyu

Differential Revision: D30846506

fbshipit-source-id: 3ef3c93f11285c7cd1e2b188ca298a7ab4fba579

torch/csrc/jit/runtime/static/ops.cpp

index 62f5bb2..b09cede 100644
@@ -1833,6 +1833,7 @@ REGISTER_OPERATOR_FUNCTOR(
           check_cat_no_zero_dim(inputs);
           dim = legacy_cat_wrap_dim(dim, inputs);
           auto& out_t = p_node->Output(0).toTensor();
+          fastResizeToZero(out_t);
           at::native::_cat_out_cpu(inputs, dim, out_t);
         }
       };
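Note on the fix (not part of the original commit message): at::native::_cat_out_cpu resizes its out tensor to the result shape, and the resize_output check warns whenever a non-empty output is resized to a different shape. Static Runtime reuses output tensors across iterations, so the reused out_t can arrive non-empty and trigger that warning whenever prim::VarConcat sees inputs of a new size. Shrinking the output to zero elements first makes the subsequent resize silent. Below is a minimal standalone sketch of the pattern, assuming fastResizeToZero behaves like resize_({0}); the sizes and the main() harness are illustrative, not taken from the commit.

#include <ATen/ATen.h>
#include <iostream>

int main() {
  // Static Runtime reuses the same output tensor across iterations.
  at::Tensor out = at::empty({0});

  // First run: concatenating into an empty tensor resizes it silently.
  at::cat_out(out, {at::randn({2, 3}), at::randn({2, 3})}, /*dim=*/0);

  // A later run with different input sizes: resizing the now non-empty
  // "out" may emit the resize_output warning.
  at::cat_out(out, {at::randn({4, 3}), at::randn({4, 3})}, /*dim=*/0);

  // The pattern applied by the fix: drop the reused output to zero
  // elements first (assumed equivalent to Static Runtime's
  // fastResizeToZero), so the following resize starts from an empty
  // tensor and stays quiet.
  out.resize_({0});
  at::cat_out(out, {at::randn({5, 3}), at::randn({5, 3})}, /*dim=*/0);

  std::cout << out.sizes() << std::endl;
  return 0;
}

Since resizing to zero elements does not shrink the underlying storage, the buffer can still be reused on the next iteration; the zero-resize only sidesteps the warning check, it does not force a reallocation.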