Currently, after training a model and exporting it to the TFLite format, the model is exported with the batch size used for training. However, inference does not need the training batch size, and when TensorFlow converts a model to the TensorFlow Lite format it sets the batch size to 1. This patch matches that behavior by setting the batch size to 1 before serializing the graph.
Signed-off-by: Donghak PARK <donghak.park@samsung.com>
/// `dealloc_weights == false`
model_graph.deallocateTensors();
model_graph.allocateTensors(ExecutionMode::INFERENCE);
+ model_graph.setBatchSize(1); // For now, set the batch size to 1 for inference
interpreter.serialize(graph_representation, file_path);
model_graph.deallocateTensors();
#else
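For reference, below is a minimal sketch (not part of this patch) of how the exported file's input batch dimension could be checked with the standard TensorFlow Lite C++ API; the file name "model.tflite" is only an assumed example.

#include <cstdio>
#include <memory>

#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
  // Load the exported flatbuffer model (file name is just an example).
  auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
  if (!model)
    return 1;

  // Build an interpreter with the builtin op resolver.
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);
  if (!interpreter)
    return 1;

  // After this patch, the leading (batch) dimension of the input tensor is
  // expected to be 1.
  const TfLiteTensor *input = interpreter->tensor(interpreter->inputs()[0]);
  std::printf("input batch dimension: %d\n", input->dims->data[0]);
  return 0;
}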