Inki Dae [Fri, 30 Oct 2020 07:17:40 +0000 (16:17 +0900)]
Fix undefined symbol issue
ml_single_invoke_no_alloc() has been changed to ml_single_invoke_fast(),
so fix the reference accordingly.
Change-Id: I2a7d85c0adbc8b28a4139e5038e85ca1f3a4e03e
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Sangjung Woo [Mon, 26 Oct 2020 02:24:57 +0000 (11:24 +0900)]
Fix memory leak issue
* Apply the ml_single_invoke_no_alloc() ML API instead of
ml_single_invoke().
* Remove unnecessary memory copies.
Change-Id: I41c6eaf0afe35a4dd481ac57e942dd45f0fb1e4a
Signed-off-by: Sangjung Woo <sangjung.woo@samsung.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
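For context, a minimal sketch of the change the two commits above describe, assuming the documented ML Single API behaviour: ml_single_invoke() allocates a new output data handle on every call, while the no-alloc variant (later renamed to ml_single_invoke_fast()) fills an output handle the caller allocates once, so there is no per-invoke allocation to leak and no extra copy. The no-alloc call is an internal API; its exact signature and header are assumptions here.

    #include <nnstreamer-single.h>

    /* Sketch only: ml_single_invoke_no_alloc()/ml_single_invoke_fast() is an
     * internal ML API; the (single, input, output) signature is assumed. */
    static int invoke_once(ml_single_h single,
                           ml_tensors_data_h in_data,   /* filled by the caller */
                           ml_tensors_data_h out_data)  /* pre-allocated once */
    {
        /* Before: ml_single_invoke(single, in_data, &out) allocated a fresh
         * output handle that had to be copied out and destroyed each time,
         * which is where the leak and the extra copies came from. */

        /* After: write directly into the caller-owned output handle. */
        return ml_single_invoke_no_alloc(single, in_data, out_data);
    }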
Inki Dae [Wed, 14 Oct 2020 06:38:48 +0000 (15:38 +0900)]
Update support for various tensor filters
Change-Id: I2ea104cae60ba5a9049fcc8eaa1b0ec78a220112
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Inki Dae [Tue, 15 Sep 2020 08:30:27 +0000 (17:30 +0900)]
Use pre-defined tensor data handles
For an invoke request, we don't have to query the input tensor information
again, because it is already obtained in GetInputTensorBuffers
and GetOutputTensorBuffers, so use those pre-defined handles instead.
Change-Id: I6d2ac7fcb8d4ed129deb54eca7739038571b230e
Signed-off-by: Inki Dae <inki.dae@samsung.com>
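A minimal sketch of the idea, not the backend's actual code: query the tensor information and create the data handles once, when the tensor buffers are requested, then reuse those handles for every invoke. The class and member names are hypothetical, and ml_single_invoke_fast() is the internal no-alloc call mentioned above.

    #include <nnstreamer-single.h>

    class InferenceSketch {
    public:
        // Called once: read the input info from the model and keep a handle.
        int GetInputTensorBuffers()
        {
            ml_tensors_info_h in_info = nullptr;
            if (ml_single_get_input_info(mSingle, &in_info) != ML_ERROR_NONE)
                return -1;
            int ret = ml_tensors_data_create(in_info, &mInputData);
            ml_tensors_info_destroy(in_info);
            return ret == ML_ERROR_NONE ? 0 : -1;
        }

        // Called per request: no ml_single_get_input_info() here, just reuse
        // the handle created above (the output handle is set up the same way).
        int Run()
        {
            return ml_single_invoke_fast(mSingle, mInputData, mOutputData);
        }

    private:
        ml_single_h mSingle = nullptr;
        ml_tensors_data_h mInputData = nullptr;
        ml_tensors_data_h mOutputData = nullptr;
    };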
Inki Dae [Fri, 11 Sep 2020 07:20:22 +0000 (16:20 +0900)]
Add ARMNN and TFLITE backend support
Change-Id: I2fde5e660950a3e2d9d2d1722c351cdd448f86a4
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Inki Dae [Wed, 26 Aug 2020 06:35:27 +0000 (06:35 +0000)]
Merge "Support two more tensor type" into tizen
Hyo Jong Kim [Tue, 25 Aug 2020 02:29:42 +0000 (11:29 +0900)]
Support two more tensor types
Support INT64 and UINT64
Change-Id: Iec1445e27bfeb245e1b38bd83679ea8f27e7bcf7
Signed-off-by: Hyo Jong Kim <hue.kim@samsung.com>
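The two new cases in a tensor-type conversion switch might look roughly like this. ML_TENSOR_TYPE_INT64 and ML_TENSOR_TYPE_UINT64 are real ML API enum values; the INFERENCE_TENSOR_DATA_TYPE_* names on the inference-engine side and the header name are assumptions modelled on the existing members.

    #include <nnstreamer.h>              /* ml_tensor_type_e */
    #include <inference_engine_type.h>   /* header name assumed */

    static int ConvertTensorTypeSketch(ml_tensor_type_e type)
    {
        switch (type) {
        /* ... existing cases (FLOAT32, UINT8, ...) elided ... */
        case ML_TENSOR_TYPE_INT64:
            return INFERENCE_TENSOR_DATA_TYPE_INT64;   /* assumed enum name */
        case ML_TENSOR_TYPE_UINT64:
            return INFERENCE_TENSOR_DATA_TYPE_UINT64;  /* assumed enum name */
        default:
            return -1;  /* unsupported tensor type */
        }
    }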
Hyo Jong Kim [Tue, 25 Aug 2020 02:20:46 +0000 (11:20 +0900)]
Support multiple output tensors
Get the information and the number of output tensors,
and set up the output tensors according to that number.
Change-Id: Ie803aa0aee194091006db29bd86a3d24a4f922df
Signed-off-by: Hyo Jong Kim <hue.kim@samsung.com>
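A minimal sketch of the approach using standard ML Single API calls: read how many output tensors the model declares and size a buffer for each one, instead of assuming a single output. The function name is illustrative.

    #include <nnstreamer-single.h>

    static int setup_outputs(ml_single_h single)
    {
        ml_tensors_info_h out_info = nullptr;
        unsigned int count = 0;

        if (ml_single_get_output_info(single, &out_info) != ML_ERROR_NONE)
            return -1;
        ml_tensors_info_get_count(out_info, &count);

        for (unsigned int i = 0; i < count; ++i) {
            size_t size = 0;
            ml_tensors_info_get_tensor_size(out_info, (int)i, &size);
            /* allocate or register an output buffer of 'size' bytes for tensor i */
        }

        ml_tensors_info_destroy(out_info);
        return 0;
    }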
Inki Dae [Tue, 21 Jul 2020 05:42:58 +0000 (14:42 +0900)]
Fix SVACE issue [WGID=443239]
Change-Id: I79e3b1c283f9813efb48caffd7c5c30b8c2afc95
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Inki Dae [Fri, 26 Jun 2020 06:00:59 +0000 (15:00 +0900)]
Fix build error on aarch64 and x86_64
Change-Id: I4aa50fce7ed7b9c822a505be5d21f17bb5f040e9
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Inki Dae [Tue, 16 Jun 2020 08:21:46 +0000 (17:21 +0900)]
Check if model file path is valid or not
Change-Id: Id621bac742d9d2a5109462ffd284b956b0feae21
Signed-off-by: Inki Dae <inki.dae@samsung.com>
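One way such a validity check can be written; a sketch only, not necessarily the backend's own implementation.

    #include <unistd.h>

    /* Reject empty or unreadable model paths before handing them to the ML API. */
    static bool IsValidModelPath(const char *path)
    {
        return path != nullptr && access(path, R_OK) == 0;
    }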
Inki Dae [Tue, 16 Jun 2020 07:47:35 +0000 (16:47 +0900)]
Change in-house NN Runtime backend name
The official name of NNFW is ONE (On-device Neural Engine),
so use it instead of NNFW.
Change-Id: I8bdb279451570074f11a85386c6725afe73ceab9
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Inki Dae [Mon, 15 Jun 2020 09:25:23 +0000 (18:25 +0900)]
Add NNFW runtime support
This patch corrects the use of fixed values to support the NNFW runtime.
For this, it updates the tensor buffer information according to
a given model and adds a ConvertTensorType function, which converts
a tensor data type for NNStreamer to the one for the MediaVision
inference engine.
Change-Id: I28d80698feebe9efbb076bb8d979b0ce0d849fee
Signed-off-by: Inki Dae <inki.dae@samsung.com>
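A rough sketch of the direction of this change, under assumptions: the per-tensor size and type are read from the loaded model through the ML Single API instead of using fixed values, and a ConvertTensorType-style helper (as sketched earlier) maps each reported NNStreamer type to the MediaVision inference-engine one.

    #include <nnstreamer-single.h>

    /* Query per-tensor size and type from the model instead of fixed values. */
    static int update_input_buffer_info(ml_single_h single)
    {
        ml_tensors_info_h in_info = nullptr;
        unsigned int count = 0;

        if (ml_single_get_input_info(single, &in_info) != ML_ERROR_NONE)
            return -1;
        ml_tensors_info_get_count(in_info, &count);

        for (unsigned int i = 0; i < count; ++i) {
            ml_tensor_type_e type;
            size_t size = 0;

            ml_tensors_info_get_tensor_type(in_info, i, &type);
            ml_tensors_info_get_tensor_size(in_info, (int)i, &size);
            /* record 'size' and the converted type in the backend's tensor
             * buffer description for tensor i */
        }

        ml_tensors_info_destroy(in_info);
        return 0;
    }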
Inki Dae [Thu, 11 Jun 2020 07:29:14 +0000 (16:29 +0900)]
Set model path according to MLAPI backend
NNFW - the in-house NN runtime - needs an NNPackage, which is
a directory containing a model file and its metadata file.
For more details, refer to
https://github.com/Samsung/ONE/tree/master/nnpackage/examples/one_op_in_tflite
The ML Single API framework of NNStreamer receives the full path of a given model file
from the user - in our case, the inference-engine-mlapi backend - and finds the metadata
in the directory where that model file is located.
So the inference-engine-mlapi backend should pass the full path of
the given model file to the ML Single API framework.
Change-Id: I6bdd871d5b683dbd6e60fce0f6dbd052985cd514
Signed-off-by: Inki Dae <inki.dae@samsung.com>
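Illustrative only: the full path of the model file inside an nnpackage directory is handed straight to ml_single_open(), which then looks for the package metadata next to that file. The path below is hypothetical.

    #include <nnstreamer-single.h>

    static int open_nnfw_model(ml_single_h *single)
    {
        /* Hypothetical nnpackage layout: the .tflite sits in a directory that
         * also carries the package metadata. */
        const char *model_path = "/opt/usr/models/one_op_in_tflite/add.tflite";

        return ml_single_open(single, model_path, nullptr, nullptr,
                              ML_NNFW_TYPE_NNFW, ML_NNFW_HW_ANY);
    }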
Inki Dae [Thu, 11 Jun 2020 07:16:42 +0000 (16:16 +0900)]
Set supported_device_types according to MLAPI backend type
NNFW supports only a CPU- and GPU-accelerated NN runtime, so
take this into account when using the NNFW tensor filter plugin of NNStreamer.
Change-Id: I3ed4ae5018b984c812f8bad69eebbfdae69dd030
Signed-off-by: Inki Dae <inki.dae@samsung.com>
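A sketch under assumptions: the inference_engine_capacity struct, its supported_accel_devices member, and the INFERENCE_TARGET_* flags follow the Tizen inference-engine-interface conventions; verify against the actual inference_engine_type.h. Only CPU and GPU are advertised because those are the targets the NNFW (ONE) runtime accelerates.

    #include <inference_engine_type.h>   /* header name assumed */

    static int GetBackendCapacitySketch(inference_engine_capacity *capacity)
    {
        /* Advertise exactly the devices the NNFW tensor filter can use. */
        capacity->supported_accel_devices =
                INFERENCE_TARGET_CPU | INFERENCE_TARGET_GPU;
        return 0;
    }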
Inki Dae [Thu, 4 Jun 2020 08:00:14 +0000 (17:00 +0900)]
Fix initializer list coding rule
Change-Id: Id95bf653d7b6274c4803a6b240783905e96300ce
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Inki Dae [Thu, 4 Jun 2020 05:47:00 +0000 (14:47 +0900)]
Fix coding style based on Tizen SE C++ Coding Rule
Tizen SE C++ Coding Rule:
https://code.sec.samsung.net/confluence/pages/viewpage.action?pageId=160925159
Change-Id: I1ae54a3676dc9cc0e06d4322eb612ceb07d7626c
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Inki Dae [Wed, 3 Jun 2020 08:41:48 +0000 (17:41 +0900)]
Change a function name from SetPluginType to SetPrivateData
Change-Id: I4a0d2ea3345ab650f5ffe9072f8edc79f5fdca98
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Inki Dae [Tue, 2 Jun 2020 09:36:54 +0000 (18:36 +0900)]
Change the postfix of file names to "mlapi"
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Inki Dae [Tue, 2 Jun 2020 09:13:04 +0000 (18:13 +0900)]
Change a backend type from VIVANTE to MLAPI
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Inki Dae [Wed, 27 May 2020 08:29:46 +0000 (17:29 +0900)]
Add init code
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Tizen Infrastructure [Wed, 27 May 2020 08:16:04 +0000 (08:16 +0000)]
Initial empty repository