`onert-micro` (a.k.a. `luci-micro`) is an MCU-specialized build of luci-interpreter with several benchmark applications.
onert-micro contains cmake infrastructure to build:
- stand-alone interpreter library
- benchmark applications using luci interpreter on arm MCUs
## How to build stand-alone library
The stand-alone library is built by the `luci_interpreter_micro_arm` target.
The resulting library will be placed in `<ONE root>/build/compiler/luci-micro/standalone_arm/luci-interpreter/src/libluci_interpreter.a`.
- Everything you need for ONE project: see [how-to-build-compiler.md](../../docs/howto/how-to-build-compiler.md)
- arm-none-eabi-gcc and arm-none-eabi-g++ compilers
To install the needed arm compilers on Ubuntu:

```
$ sudo apt-get install gcc-arm-none-eabi
```
Then configure and build the library:

```
$ cmake ../infra/onert-micro
$ make -j$(nproc) luci_interpreter_micro_arm
```
The interpreter uses TensorFlow headers that produce warnings.
The `Linux` x86 build uses the `-isystem` flag to suppress warnings from external sources,
but some old arm compilers have issues with it:
[bug](https://bugs.launchpad.net/gcc-arm-embedded/+bug/1698539)

The `-isystem` workaround is therefore disabled for the MCU build; because of this, the MCU build breaks if the `-Werror` flag is set.
### Convert tflite model to circle model
To run inference with a tflite model, you need to convert it to the circle model format (https://github.com/Samsung/ONE/blob/master/res/CircleSchema/0.4/circle_schema.fbs).
Please refer to the `tflite2circle` tool (https://github.com/Samsung/ONE/tree/master/compiler/tflite2circle) for this purpose.
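
For example, assuming the `tflite2circle` binary from your ONE build is available on your `PATH`, a conversion looks roughly like this (source tflite file first, target circle file second):

```
$ tflite2circle model.tflite model.circle
```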
### Convert to c array model
Many MCU platforms lack file system support. The proper way to provide a model to onert-micro is to convert it into a c array so that it can be compiled into the MCU binary.
```
xxd -i model.circle > model.h
```
Then `model.h` looks like this:
```
unsigned char model_circle[] = {
  0x22, 0x01, 0x00, 0x00, 0xf0, 0x00, 0x0e, 0x00,
  ...
};
unsigned int model_circle_len = 1004;
```
Once you have the c array model, you are ready to use onert-micro.
To run a model with onert-micro, follow these instructions:
1. Include onert-micro header
```
#include <luci_interpreter/Interpreter.h>
```
2. Create interpreter instance
The onert-micro interpreter expects the model as a c array, as mentioned in the [previous section](#convert-to-c-array-model).
```
luci_interpreter::Interpreter interpreter(model_circle, true);
```
3. Feed input data

To feed input data into the interpreter, we need two steps: 1) allocate the input tensors and 2) copy the input data into them (here `readDataFromFile` stands for whatever routine fills a buffer with your input data).
```
for (int32_t i = 0; i < num_inputs; i++)
{
  auto input_data = reinterpret_cast<char *>(interpreter.allocateInputTensor(i));
  readDataFromFile(std::string(input_prefix) + std::to_string(i), input_data,
                   interpreter.getInputDataSizeByIndex(i));
}
```
4. Do inference

```
interpreter.interpret();
```
5. Get output data

```
auto data = interpreter.readOutputTensor(i);
```
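
Putting the steps together, a minimal driver can look like the sketch below. It only uses the calls shown above; `fillInputData` and the hard-coded input/output counts are hypothetical placeholders for however your application supplies inputs and consumes outputs.

```
#include <luci_interpreter/Interpreter.h>

#include <cstddef>
#include <cstdint>
#include <cstring>

#include "model.h" // c array model generated with xxd: model_circle, model_circle_len

// Hypothetical stub: fills `buffer` with `size` bytes of input data for input `index`.
// Replace it with your application's real input loading.
static void fillInputData(int32_t index, char *buffer, size_t size)
{
  (void)index;
  std::memset(buffer, 0, size);
}

int main()
{
  // Create the interpreter from the c array model.
  luci_interpreter::Interpreter interpreter(model_circle, true);

  // Feed input data: allocate each input tensor and copy input bytes into it.
  const int32_t num_inputs = 1; // set to the number of inputs of your model
  for (int32_t i = 0; i < num_inputs; i++)
  {
    auto input_data = reinterpret_cast<char *>(interpreter.allocateInputTensor(i));
    fillInputData(i, input_data, interpreter.getInputDataSizeByIndex(i));
  }

  // Run inference.
  interpreter.interpret();

  // Read the first output tensor; use it as needed by your application.
  auto output_data = interpreter.readOutputTensor(0);
  (void)output_data;

  return 0;
}
```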
### Reduce Binary Size
onert-micro provides compile flags to generate a reduced-size binary (an example of passing them follows the list):
- `DIS_QUANT` : Flag for Disabling Quantized Type Operation
- `DIS_FLOAT` : Flag for Disabling Float Operation
- `DIS_DYN_SHAPES` : Flag for Disabling Dynamic Shape Support
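
How these flags are wired in depends on your build setup; as an illustration only (the exact mechanism here is an assumption, not the project's documented interface), they can be passed as preprocessor definitions when configuring the stand-alone build:

```
$ cmake ../infra/onert-micro -DCMAKE_CXX_FLAGS="-DDIS_QUANT -DDIS_DYN_SHAPES"
$ make -j$(nproc) luci_interpreter_micro_arm
```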
Also, you can build the onert-micro library with only the kernels used by your target models.
To do so, remove all kernels from [KernelsToBuild.lst](./luci-interpreter/pal/mcu/KernelsToBuild.lst) except those used in your target model.