[nnc] Some op fixes, mobilenet support in ONNX. (#2856)
author Андрей Тищенко/AI Tools Lab /SRR/Staff Engineer/삼성전자 <a.tischenko@partner.samsung.com>
Wed, 16 Jan 2019 09:02:55 +0000 (12:02 +0300)
committer Роман Михайлович Русяев/AI Tools Lab /SRR/Staff Engineer/삼성전자 <r.rusyaev@samsung.com>
Wed, 16 Jan 2019 09:02:55 +0000 (12:02 +0300)
The following operators were fixed or implemented:
PadOp, GivenTensorFillOp, ConstantOp, ConvOp.
As a result, two networks now work: resnet50 and mobilenet. The latter has been tested on the interpreter only.
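
Mobilenet support mostly comes down to recognizing depthwise convolutions in the
ONNX Conv importer. A minimal standalone sketch of the detection rule added to
ONNXOpCreator::convertConv2D (simplified; the helper name is illustrative, the
real code works on the transposed MIR kernel tensor and then builds either a
DepthwiseConv2DOp or a Conv2DOp):

  #include <cassert>

  // ONNX Conv kernels arrive as (M, C/group, kH, kW); after the <2, 3, 1, 0>
  // transpose the MIR kernel is (kH, kW, C/group, M), so dim(2) is the per-group
  // input depth and dim(3) is the number of output channels.
  static bool is_depthwise(int num_groups, int in_group_size, int out_channels) {
    return (num_groups != 1) && (in_group_size == 1) && (out_channels == num_groups);
  }

  int main() {
    // MobileNet-style depthwise layer: group == C, one input channel per group.
    assert(is_depthwise(/*num_groups=*/32, /*in_group_size=*/1, /*out_channels=*/32));
    // An ordinary convolution (group == 1) still maps to Conv2DOp.
    assert(!is_depthwise(1, 3, 64));
    return 0;
  }

Grouped but non-depthwise kernels are first rewritten into an ordinary kernel via
fixGroupedKernel before a Conv2DOp is created.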

Signed-off-by: Andrew V. Tischenko <a.tischenko@partner.samsung.com>
contrib/nnc/passes/interpreter/Interpreter.cpp
contrib/nnc/passes/onnx_frontend/ONNXImporterImpl.cpp
contrib/nnc/passes/onnx_frontend/ONNXImporterImpl.h
contrib/nnc/passes/onnx_frontend/ONNXOpCreator.cpp
contrib/nnc/passes/onnx_frontend/ONNXOpCreator.h
contrib/nnc/passes/soft_backend/ModelAnalyzer.cpp

index d4b74bc..b50bc8e 100644 (file)
@@ -95,13 +95,12 @@ static void dumpIndex (Index ndx) {
 void NNInterpreter::dump(Operation& op, bool all) {
   // TODO: in theory there could be several outputs from the given 'op'.
   TensorVariant tensor = var(op.getId())[0];
-  std::cout << "Tensor '" << op.getName() << "' DType = " << (int)tensor.getDataType()  << ", ElementSize = " << tensor.getElementSize()
-           << ", Shape = {";
   auto shape = tensor.getShape();
-  for (int i = 0; i < shape.rank(); i++) {
-    std::cout << shape.dim(i) << (i == shape.rank() - 1 ? "} " : ", ");
-  }
-  std::cout << "ElementsNumber " << shape.numElements() << "\n";
+  std::cout << "Tensor '" <<
+               (op.getNextNodes().size() ? op.getNextNodes()[0]->getName() : "output") <<
+                "' DType = " << (int)tensor.getDataType()  << ", ElementSize = " <<
+                tensor.getElementSize() << ", Shape" << shape;
+  std::cout << " ElementsNumber " << shape.numElements() << "\n";
   static bool do_it = false;
   if (do_it || all) {
     auto last_idx = shape.rank() - 1;
@@ -168,6 +167,7 @@ void NNInterpreter::visit(ops::ConcatOp& op) {
 void NNInterpreter::visit(ops::Conv2DOp& op) {
   auto operand = op.getPrevNodes()[0];
   var(op.getId()) = Conv2D(var(operand.op->getId())[operand.index], op)();
+  DUMP(op, true);
 }
 
 void NNInterpreter::visit(ops::ReshapeOp& op) {
@@ -235,6 +235,7 @@ void NNInterpreter::visit(ops::DepthwiseConv2DOp& op){
   auto operand = op.getPrevNodes()[0];
   TensorVariant input(var(operand.op->getId())[operand.index]);
   var(op.getId()) = DepthwiseConv2D(input, op)();
+  DUMP(op, true);
 }
 
 void NNInterpreter::visit(ops::BiasAddOp& op) {
index d37d970..732a93c 100644 (file)
@@ -22,6 +22,7 @@
 #include "core/modelIR/IrDotDumper.h"
 #include "core/modelIR/operations/ConstantOp.h"
 #include "core/modelIR/operations/Conv2DOp.h"
+#include "core/modelIR/operations/DepthwiseConv2DOp.h"
 #include "core/modelIR/operations/ElementwiseOp.h"
 #include "core/modelIR/operations/TransposeOp.h"
 #include "core/modelIR/operations/VariableOp.h"
@@ -54,9 +55,11 @@ static void collectUnsupportedOps(std::unique_ptr<onnx::ModelProto>& model) {
     switch (ir_op_type->opCode) {
       case ONNXOpCode::opAdd:
       case ONNXOpCode::opAveragePool:
+      case ONNXOpCode::opGivenTensorFill:     // experimental
       case ONNXOpCode::opGlobalAveragePool:
       case ONNXOpCode::opBatchNormalization:
       case ONNXOpCode::opConcat:
+      case ONNXOpCode::opConstant:
       case ONNXOpCode::opConv:
       case ONNXOpCode::opDropout:
       case ONNXOpCode::opGather:
@@ -64,6 +67,7 @@ static void collectUnsupportedOps(std::unique_ptr<onnx::ModelProto>& model) {
       case ONNXOpCode::opMax:
       case ONNXOpCode::opMaxPool:
       case ONNXOpCode::opMul:
+      case ONNXOpCode::opPad:
       case ONNXOpCode::opRelu:
       case ONNXOpCode::opReshape:
       case ONNXOpCode::opUnsqueeze:
@@ -84,7 +88,7 @@ static void collectUnsupportedOps(std::unique_ptr<onnx::ModelProto>& model) {
   }
 }
 
-static mir::TensorVariant createTensor(const onnx::TensorProto* tensor) {
+mir::TensorVariant ONNXImporterImpl::createTensor(const onnx::TensorProto* tensor) {
   mir::DTYPE type = mir::DTYPE::FLOAT32;
   size_t element_size;
   size_t buffer_size;
@@ -126,6 +130,13 @@ static mir::TensorVariant createTensor(const onnx::TensorProto* tensor) {
         buffer_size = tensor->raw_data().size();
         src_data = reinterpret_cast<const char*>(tensor->raw_data().data());
         break;
+      case onnx::TensorProto_DataType_INT64: {
+        element_size = sizeof(int64_t);
+        buffer_size = tensor->raw_data().size();
+        src_data = reinterpret_cast<const char*>(tensor->raw_data().data());
+        type = mir::DTYPE::INT64;
+        break;
+      }
       default:
         throw PassException("Don't support this tensor type yet, investigate");
     }
@@ -158,8 +169,8 @@ void ONNXImporterImpl::createGraphInputs() {
 
     if (onnx_tensors.find(name) != onnx_tensors.end()) {
       const onnx::TensorProto* onnx_tensor = onnx_tensors[name];
-      _inputTensors.insert(std::make_pair(name, createTensor(onnx_tensor)));
-      auto constant = _graph->create<mir::ops::ConstantOp>(name, _inputTensors.at(name));
+      _constantTensors.insert(std::make_pair(name, createTensor(onnx_tensor)));
+      auto constant = _graph->create<mir::ops::ConstantOp>(name, _constantTensors.at(name));
       _tensorNameToIODescriptor[name] = constant->getOutput(0);
     } else {
       // We're dealing with graph input (assuming the picture only)
@@ -184,26 +195,26 @@ void ONNXImporterImpl::dump(const std::vector<mir::IODescriptor>& input_descrs,
     auto op = out_descr.op;
     std::cout << onnx_node.op_type() << " '" << op->getName() << "'";
     if (input_descrs[0].op->getNumInputs() > 0) {
-      std::cout << "Input Shape: ";
-      dumpShape(input_descrs[0].op->getOutputShape(input_descrs[0].index));
+      std::cout << "Input Shape: " << input_descrs[0].op->getOutputShape(input_descrs[0].index);
     }
-    std::cout << " Output Shape: ";
-    dumpShape(op->getOutputShape(0));
+    std::cout << " Output Shape: " << op->getOutputShape(0);
     auto* onnx_op_type = ONNXPerfectHash::getONNXOpType(onnx_node.op_type().c_str(), onnx_node.op_type().size());
     switch (onnx_op_type->opCode) {
       case ONNXOpCode::opConv: {
-        auto *conv = dynamic_cast<mir::ops::Conv2DOp *>(op);
-        if (conv == nullptr) {
-          assert(dynamic_cast<mir::ops::TransposeOp *>(op) != nullptr);
-          conv = dynamic_cast<mir::ops::Conv2DOp *>(op->getPrevNodes()[0].op);
+        assert(dynamic_cast<mir::ops::TransposeOp *>(op) != nullptr);
+        if (auto* conv = dynamic_cast<mir::ops::Conv2DOp *>(op->getPrevNodes()[0].op)) {
+          std::cout << " (Conv2D)Weights" << conv->getKernel().getShape() << " Strides" <<
+                      conv->getStrides() << " Padding(" << conv->getPaddingBefore()[0] <<
+                      " " << conv->getPaddingBefore()[1] << ")" << ":(" <<
+                      conv->getPaddingAfter()[0] << " " << conv->getPaddingAfter()[1] << ")";
+        } else {
+          auto *dept = dynamic_cast<mir::ops::DepthwiseConv2DOp *>(op->getPrevNodes()[0].op);
+          assert(dept);
+          std::cout << " (DepthwiseConv2D)Weights" << dept->getKernel().getShape() << " Strides" <<
+                    dept->getStrides() << " Padding(" << dept->getPaddingBefore()[0] <<
+                    " " << dept->getPaddingBefore()[1] << ")" << ":(" <<
+                    dept->getPaddingAfter()[0] << " " << dept->getPaddingAfter()[1] << ")";
         }
-        assert(conv);
-        std::cout << " Weights tensor shape ";
-        dumpShape(conv->getKernel().getShape());
-        std::cout << " Strides  ";
-        dumpShape(conv->getStrides());
-        std::cout << " Padding before:  (" << conv->getPaddingBefore()[0] << " " << conv->getPaddingBefore()[1] << ")";
-        std::cout << " After:  (" << conv->getPaddingAfter()[0] << " " << conv->getPaddingAfter()[1] << ")";
         break;
       }
       case ONNXOpCode::opGlobalAveragePool:
@@ -215,11 +226,9 @@ void ONNXImporterImpl::dump(const std::vector<mir::IODescriptor>& input_descrs,
           pool = dynamic_cast<mir::ops::PoolOp *>(op->getPrevNodes()[0].op);
         }
         assert(pool);
-        std::cout << " Kernel ";
-        dumpShape(pool->getWindowShape());
-        std::cout << " Strides  ";
-        dumpShape(pool->getStrides());
-        std::cout << " Padding before:  " << pool->getPaddingBefore()[0] << " " << pool->getPaddingBefore()[1];
+        std::cout << " Kernel " << pool->getWindowShape()  << " Strides  " << pool->getStrides();
+        std::cout << " Padding before:  " << pool->getPaddingBefore()[0] << " " <<
+                     pool->getPaddingBefore()[1];
         std::cout << " After:  " << pool->getPaddingAfter()[0] << " " << pool->getPaddingAfter()[1];
         break;
       }
@@ -253,14 +262,22 @@ mir::Graph *ONNXImporterImpl::createIR() {
     std::vector<mir::IODescriptor> inputs(onnx_node.input_size());
     for (int i = 0; i < onnx_node.input_size(); i++) {
       auto& name = onnx_node.input(i);
-      assert(_tensorNameToIODescriptor.find(name) != _tensorNameToIODescriptor.end());
-      inputs[i] = _tensorNameToIODescriptor[name];
+      if (name.size() != 0) {
+        assert(_tensorNameToIODescriptor.find(name) != _tensorNameToIODescriptor.end());
+        inputs[i] = _tensorNameToIODescriptor[name];
+      }
     }
 
     std::vector<mir::IODescriptor> outputs;
     auto* onnx_op_type = ONNXPerfectHash::getONNXOpType(op_type, onnx_node.op_type().size());
 
     switch (onnx_op_type->opCode) {
+      case ONNXOpCode::opConstant:
+        outputs = _opCreator.convertConstant(onnx_node, _constantTensors);
+        break;
+      case ONNXOpCode::opPad:
+        outputs = _opCreator.convertPad(inputs, onnx_node);
+        break;
       case ONNXOpCode::opConv:
         outputs = _opCreator.convertConv2D(inputs, onnx_node);
         break;
@@ -282,6 +299,9 @@ mir::Graph *ONNXImporterImpl::createIR() {
       case ONNXOpCode::opMax:
         outputs = _opCreator.convertElementwise(inputs, mir::ops::ElementwiseOp::OpType::max);
         break;
+      case ONNXOpCode::opGivenTensorFill:
+        outputs = _opCreator.convertGivenTensorFill(onnx_node, _constantTensors);
+        break;
       case ONNXOpCode::opGlobalAveragePool:
       case ONNXOpCode::opAveragePool:
       case ONNXOpCode::opMaxPool:
@@ -309,7 +329,7 @@ mir::Graph *ONNXImporterImpl::createIR() {
         outputs = _opCreator.convertScale(inputs, onnx_node);
         break;
       case ONNXOpCode::opBatchNormalization:
-        outputs = _opCreator.convertBatchNorm(inputs, onnx_node, _inputTensors);
+        outputs = _opCreator.convertBatchNorm(inputs, onnx_node, _constantTensors);
         break;
       case ONNXOpCode::opDropout:
         outputs = _opCreator.convertDropout(inputs, onnx_node);
index be4ff8a..51d0693 100644 (file)
@@ -42,19 +42,15 @@ public:
   void dump(const std::vector<mir::IODescriptor>& input_descrs,
             const std::vector<mir::IODescriptor>& out_descrs,
             const onnx::NodeProto& onnx_node);
+  static mir::TensorVariant createTensor(const onnx::TensorProto* tensor);
 
-  static void dumpShape(mir::Shape shape) {
-    std::cout << "{";
-    for (int i = 0; i < shape.rank(); i++) {
-      std::cout << shape.dim(i) << (i == shape.rank() - 1 ? "} " : ", ");
-    }
-  }
   private:
   void createGraphInputs();
   // This map maps onnx tensor names to MIR operations/nodes
   std::map<std::string, mir::IODescriptor> _tensorNameToIODescriptor;
   // This map keeps named tensors used as graph input initializers.
-  std::map<std::string, mir::TensorVariant> _inputTensors;
+  // In addition, this can also hold tensors from opGivenTensorFill and opConstant
+  std::map<std::string, mir::TensorVariant> _constantTensors;
   std::vector<mir::IODescriptor> _graphOutputs;
   std::string _modelFilename;
   std::unique_ptr<onnx::ModelProto> _model;
index 04031af..d660571 100644 (file)
@@ -19,6 +19,7 @@
 #include <iostream>
 #include "core/modelIR/Index.h"
 #include "core/modelIR/Graph.h"
+#include "core/modelIR/Scalar.h"
 #include "core/modelIR/ShapeRange.h"
 #include "core/modelIR/Tensor.h"
 #include "core/modelIR/TensorUtil.h"
@@ -33,6 +34,7 @@
 #include "core/modelIR/operations/FullyConnectedOp.h"
 #include "core/modelIR/operations/GatherOp.h"
 #include "core/modelIR/operations/GemmOp.h"
+#include "core/modelIR/operations/PadOp.h"
 #include "core/modelIR/operations/PoolOp.h"
 #include "core/modelIR/operations/ReluOp.h"
 #include "core/modelIR/operations/ReshapeOp.h"
@@ -42,6 +44,7 @@
 #include "core/modelIR/operations/TransposeOp.h"
 #include "core/modelIR/operations/VariableOp.h"
 #include "core/modelIR/operations/ElementwiseOp.h"
+#include "passes/common_frontend/op_creator_helper.h"
 #include "passes/common_frontend/shape_helper.h"
 #include "pass/PassException.h"
 #include "ONNXOpCreator.h"
@@ -83,6 +86,7 @@ static std::pair<bool, float> getFloatAttribute(const onnx::NodeProto& onnx_node
 }
 
 // Create vector tensor filled with the given value
+// TODO: this should be a template
 static TensorVariant createTensor(float value, const mir::Shape& shape) {
   mir::DTYPE element_type = mir::DTYPE::FLOAT32;
   size_t element_size = sizeof(value);
@@ -95,6 +99,18 @@ static TensorVariant createTensor(float value, const mir::Shape& shape) {
   return mir::TensorVariant({shape.numElements()}, data, element_type, element_size);
 }
 
+// Create a tensor of the given shape, filled with the given values
+// TODO: this should be a template
+static TensorVariant createTensor(const float* values, const mir::Shape& shape) {
+  mir::DTYPE element_type = mir::DTYPE::FLOAT32;
+  size_t element_size = sizeof(float);
+
+  float* dst_ptr = new float[shape.numElements()];
+  memcpy(dst_ptr, values, element_size * shape.numElements());
+  std::shared_ptr<char> data(reinterpret_cast<char*>(dst_ptr), std::default_delete<char[]>());
+  return mir::TensorVariant(shape, data, element_type, element_size);
+}
+
 struct KernelStridesPadding {
   Shape kernel_shape;
   Shape strides_shape;
@@ -113,7 +129,9 @@ static void getKernelStridesPadding(const onnx::NodeProto &onnx_node, KernelStri
   cdata.strides_shape = ShapeHelper::createShape(strides->ints(), strides->ints_size());
 
   if (pads) {
-    assert(pads->ints_size() >= 2);
+    // FIXME: it's for 2D only
+    assert(pads->ints_size() == 4);
+    // FIXME: how to use padding here?
     cdata.padding_before[0] = pads->ints(0);
     cdata.padding_before[1] = pads->ints(1);
     // TODO: ONNX padding could be for the beginning and ending along each axis that's why we
@@ -136,13 +154,34 @@ ONNXOpCreator::convertConv2D(const std::vector<mir::IODescriptor>& inputs,
   auto* in_weights = dynamic_cast<mir::ops::ConstantOp*>(inputs[1].op);
   assert(in_weights && "Weights could be a constant tensor only");
   const auto& in_weights_tensor = in_weights->getValue();
-  // We should transpose ONNX MCHW to HWOI
-  auto transposed = transposeTensor<2, 3, 1, 0>(in_weights_tensor);
+  // We should transpose ONNX MC(IO)HW to HWOI
+  auto kernel_tensor = transposeTensor<2, 3, 1, 0>(in_weights_tensor);
+  auto in_group_size = kernel_tensor.getShape().dim(2);
+  auto out_channels = kernel_tensor.getShape().dim(3);
+  bool found;
+  int num_groups;
+  std::tie (found, num_groups) = getIntAttribute(onnx_node, "group");
+  if (!found)
+    num_groups = 1;
+  bool is_depthwise = (num_groups != 1) && (in_group_size == 1) && (out_channels == num_groups);
+
+  mir::Operation* result;
+  auto transposed_input = convertONNXToMIR(inputs[0].op->getOutput(0));
+  if (is_depthwise) {
+    // TODO: properly handle a kernel with a layer multiplier
+    auto transposed_tensor = mir::transposeTensor<0, 1, 3, 2>(kernel_tensor);
+    result = createOp<ops::DepthwiseConv2DOp>(transposed_input,
+                                              transposed_tensor, cdata.strides_shape,
+                                              cdata.padding_before, cdata.padding_after);
+  } else {
+    // first we need to convert kernel of grouped convolution to appropriate ordinary kernel
+    if (num_groups != 1)
+      kernel_tensor = fixGroupedKernel(num_groups, kernel_tensor);
+    result = createOp<ops::Conv2DOp>(transposed_input, kernel_tensor,
+                                     cdata.strides_shape, cdata.padding_before,
+                                     cdata.padding_after);
+  }
 
-  // Transpose ONNX NCHW to MIR NHWC
-  auto t_input = convertONNXToMIR(inputs[0]);
-  auto result = createOp<ops::Conv2DOp>(t_input, transposed, cdata.strides_shape,
-                                        cdata.padding_before, cdata.padding_after);
   if (inputs.size() > 2)
     result = createOp<ops::BiasAddOp>(result->getOutput(0), inputs[2]);
 
@@ -173,6 +212,34 @@ ONNXOpCreator::convertGather(const std::vector<mir::IODescriptor>& inputs,
 }
 
 std::vector<IODescriptor>
+ONNXOpCreator::convertPad(const std::vector<mir::IODescriptor>& inputs,
+                          const onnx::NodeProto& onnx_node) {
+  bool found;
+  float value;
+  std::tie(found, value) = getFloatAttribute(onnx_node, "value");
+  assert(found);
+  auto padsAtt = findAttribute(onnx_node, "pads");
+  assert(padsAtt);
+  auto modeAtt = findAttribute(onnx_node, "mode");
+  assert(modeAtt);
+  auto mode = modeAtt->s();
+  const mir::Scalar scalar(reinterpret_cast<const char*>(&value), DTYPE::FLOAT32, sizeof(float));
+  assert(padsAtt->ints_size() > 0);
+  int cnt = padsAtt->ints_size() / 2;
+  assert(cnt % 2 == 0);
+  int last = padsAtt->ints_size() - 1;
+  std::vector<std::pair<int32_t, int32_t >> vec(cnt);
+  auto* data = padsAtt->ints().data();
+  for (int i = 0; i < cnt; i++) {
+    auto pair = std::make_pair(data[i], data[last - i]);
+    vec[i] = pair;
+  }
+  auto result =
+    createOp<ops::PadOp>(inputs[0], inputs[0].op->getOutputShape(0).rank(), vec, scalar);
+  return {result->getOutput(0)};
+}
+
+std::vector<IODescriptor>
 ONNXOpCreator::convertPool(const std::vector<mir::IODescriptor>& inputs,
                            ONNXOpCode op_code,
                            const onnx::NodeProto& onnx_node) {
@@ -189,7 +256,8 @@ ONNXOpCreator::convertPool(const std::vector<mir::IODescriptor>& inputs,
       pool_type = ops::PoolOp::PoolingType::AVG;
       // GlobalAveragePool is equivalent to AveragePool with kernel size equal
       // to the spatial dimension of input tensor
-      cdata.kernel_shape = t_input.op->getOutputShape(0);
+      cdata.kernel_shape = {t_input.op->getOutputShape(0).dim(1),
+                            t_input.op->getOutputShape(0).dim(2)};
       cdata.strides_shape = Shape{1, 1};
       break;
     }
@@ -206,7 +274,6 @@ ONNXOpCreator::convertPool(const std::vector<mir::IODescriptor>& inputs,
     default:
       assert(false);
   }
-
   auto result = createOp<ops::PoolOp>(t_input, pool_type,
                                       cdata.kernel_shape, cdata.strides_shape,
                                       cdata.padding_before, cdata.padding_after,
@@ -242,6 +309,7 @@ ONNXOpCreator::convertReshape(const std::vector<mir::IODescriptor>& inputs) {
   // The vector to build the new shape from
   std::vector<int32_t > shape_vector(cnt);
   ShapeRange out_range(shape_tensor_shape);
+  // FIXME: the real type could be int64_t, but _elementSize is correct, which is why this works
   Tensor<int32_t> tensor_accessor(shape_tensor);
 
   int i = 0;
@@ -313,6 +381,7 @@ ONNXOpCreator::convertBatchNorm(const std::vector<mir::IODescriptor>& inputs,
   std::tie(found, value) = getFloatAttribute(onnx_node, "epsilon");
   float epsilon = found ? value : 1e-05f;
 
+  // TODO: it would be better to do this via the inputs
   const auto& scale_tensor = input_tensors.at(inputs[1].op->getName());
   const auto& bias_tensor = input_tensors.at(inputs[2].op->getName());
   const auto& mean_tensor = input_tensors.at(inputs[3].op->getName());
@@ -368,6 +437,37 @@ ONNXOpCreator::convertScale(const std::vector<mir::IODescriptor>& inputs,
 }
 
 std::vector<IODescriptor>
+ONNXOpCreator::convertGivenTensorFill(const onnx::NodeProto& onnx_node,
+                                      InputTensors& input_tensors) {
+  auto values_att = findAttribute(onnx_node, "values");
+  auto shape_att = findAttribute(onnx_node, "shape");
+  assert(values_att && shape_att);
+  assert(values_att->floats_size() > 0 && shape_att->ints_size() > 0);
+  Shape shape(shape_att->ints_size());
+  for (int i = 0; i < shape_att->ints_size(); i++)
+    shape.dim(i) = shape_att->ints(i);
+  auto tensor = createTensor(values_att->floats().data(), shape);
+  input_tensors.insert(std::make_pair(onnx_node.output(0), tensor));
+  auto result = createOp<ops::ConstantOp>(tensor);
+  return {result->getOutput(0)};
+}
+
+std::vector<IODescriptor>
+ONNXOpCreator::convertConstant(const onnx::NodeProto& onnx_node,
+                               InputTensors& input_tensors) {
+  assert((onnx_node.attribute_size() == 1) &&
+         (onnx_node.attribute(0).type() == onnx::AttributeProto_AttributeType_TENSOR) &&
+         (onnx_node.attribute(0).tensors_size() == 0));
+  assert(!onnx_node.attribute(0).name().compare("value"));
+  auto name = onnx_node.output(0);
+  auto &onnx_tensor = onnx_node.attribute(0).t();
+  auto mir_tensor = ONNXImporterImpl::createTensor(&onnx_tensor);
+  input_tensors.insert(std::make_pair(name, mir_tensor));
+  auto op = _graph->create<mir::ops::ConstantOp>(name, mir_tensor)->getOutput(0);
+  return {op};
+}
+
+std::vector<IODescriptor>
 ONNXOpCreator::convertGemm(const std::vector<mir::IODescriptor>& inputs,
                            const onnx::NodeProto& onnx_node) {
   bool  found;
index 7ae9144..c89f714 100644 (file)
@@ -48,11 +48,23 @@ public:
                 const onnx::NodeProto& onnx_node);
 
   std::vector<mir::IODescriptor>
+  convertGivenTensorFill(const onnx::NodeProto& onnx_node,
+                         InputTensors& input_tensors);
+
+  std::vector<mir::IODescriptor>
+  convertConstant(const onnx::NodeProto& onnx_node,
+                  InputTensors& input_tensors);
+
+  std::vector<mir::IODescriptor>
   convertPool(const std::vector<mir::IODescriptor>& inputs,
               ONNXOpCode op_code,
               const onnx::NodeProto& onnx_node);
 
   std::vector<mir::IODescriptor>
+  convertPad(const std::vector<mir::IODescriptor>& inputs,
+             const onnx::NodeProto& onnx_node);
+
+  std::vector<mir::IODescriptor>
   convertSoftmax(const std::vector<mir::IODescriptor>& inputs,
                  const onnx::NodeProto& onnx_node);
 
index 4d80c4f..77e12de 100644 (file)
@@ -24,7 +24,7 @@
 #include "core/modelIR/operations/BiasAddOp.h"
 #include "core/modelIR/operations/CappedReluOp.h"
 #include "core/modelIR/operations/ConcatOp.h"
-#include <core/modelIR/operations/ConstantOp.h>
+#include "core/modelIR/operations/ConstantOp.h"
 #include "core/modelIR/operations/Conv2DOp.h"
 #include "core/modelIR/operations/Deconv2DOp.h"
 #include "core/modelIR/operations/DepthwiseConv2DOp.h"