Set output of aten::mm to have the same output type as the original node after op...
author CircleCI <tix@microsoft.com>
Fri, 30 Nov 2018 07:22:15 +0000 (23:22 -0800)
committer Facebook Github Bot <facebook-github-bot@users.noreply.github.com>
Fri, 30 Nov 2018 07:24:27 +0000 (23:24 -0800)
Summary:
In CanonicalizeOps, addmm is separated into mm and add, but the output dimension and type are not preserved for the aten::mm node. Fix this so that the graph dumped after this pass contains accurate information.
sample output:
before:
%6 : Dynamic = aten::mm(%input.2, %5), scope: LinearModel/Sequential[model]/Linear[full0]
after:
%6 : Float(32, 200) = aten::mm(%input.2, %5), scope: LinearModel/Sequential[model]/Linear[full0]
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14602

Differential Revision: D13273754

Pulled By: soumith

fbshipit-source-id: 82e22b5f30e9eb6ba9249c5a2216955421f39cc7

torch/csrc/jit/passes/canonicalize_ops.cpp

index c88bd81..48bbf6b 100644 (file)
@@ -58,6 +58,9 @@ static void CanonicalizeOps(Block* block) {
       SymbolicVariable mat2(it->inputs()[2]);
 
       auto mm_result = mat1.mm(mat2);
+      // Set this intermediate aten::mm node to the same output type as the original aten::addmm;
+      // otherwise the canonicalized graph reports DynamicType for this node, which is incorrect.
+      (static_cast<Value*>(mm_result))->setType(it->output()->type());
       auto result = mat + mm_result;
       (static_cast<Value*>(result))->setType(it->output()->type());