From: Hyeongseok Oh/On-Device Lab(SR)/Staff Engineer/Samsung Electronics
Date: Mon, 25 Nov 2019 01:43:16 +0000 (+0900)
Subject: [neurun/api] Comment for nnfw_session (#9137)
X-Git-Tag: submit/tizen/20191205.083104~156
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=b967bb7986c9027f2712d3340f11a538ee5fca10;p=platform%2Fcore%2Fml%2Fnnfw.git

[neurun/api] Comment for nnfw_session (#9137)

Add more comments for nnfw_session.
They give a brief description of how to use nnfw_session for inference.

Signed-off-by: Hyeongseok Oh
---

diff --git a/runtime/neurun/api/include/nnfw.h b/runtime/neurun/api/include/nnfw.h
index a1c3841..f1b8a45 100644
--- a/runtime/neurun/api/include/nnfw.h
+++ b/runtime/neurun/api/include/nnfw.h
@@ -24,7 +24,37 @@ extern "C" {
 #endif

+/**
+ * nnfw_session is a session for interacting with the runtime
+ *
+ * <p>An nnfw_session is created and returned by calling {@link nnfw_create_session}.
+ * Each session has its own inference environment, such as the model to run inference on,
+ * backend usage, etc.
+ *
+ * <p>Load a model by calling {@link nnfw_load_model_from_file}.
+ *
+ * <p>After loading, prepare for inference by calling {@link nnfw_prepare}.
+ * Optionally, the application can configure the runtime environment before preparation
+ * by calling {@link nnfw_set_available_backends} and {@link nnfw_set_op_backend}.
+ *
+ * <p>The application can run inference by calling {@link nnfw_run}.
+ * Before running, the application is responsible for setting each input tensor's data
+ * by calling {@link nnfw_set_input}, and each output tensor's buffer for receiving results
+ * by calling {@link nnfw_set_output}.
+ *
+ * <p>To support input and output setting, the application can get
+ * input and output tensor information by calling {@link nnfw_input_tensorinfo}
+ * and {@link nnfw_output_tensorinfo}.
+ *
+ * <p>The application can run inference many times using one session,
+ * but the next inference can only start after the prior inference has finished.
+ *
+ * <p>The application cannot use multiple models with one session.
+ */
 typedef struct nnfw_session nnfw_session;
+
 typedef struct nnfw_tensorinfo nnfw_tensorinfo;

 typedef enum {
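
As a usage illustration, here is a minimal sketch of the inference flow the comment above describes, using the nnfw C API declared in this header. The nnpackage path, the single input/output index, and the float32 tensor type are placeholder assumptions, and most error handling is elided:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#include "nnfw.h"

/* Helper: number of elements in a tensor, computed from its nnfw_tensorinfo. */
static size_t num_elems(const nnfw_tensorinfo *ti)
{
  size_t n = 1;
  for (int32_t i = 0; i < ti->rank; ++i)
    n *= ti->dims[i];
  return n;
}

int main(void)
{
  nnfw_session *session = NULL;

  /* 1. Create a session. */
  if (nnfw_create_session(&session) != NNFW_STATUS_NO_ERROR)
    return 1;

  /* 2. Load a model; the nnpackage path is a placeholder. */
  nnfw_load_model_from_file(session, "path/to/nnpackage");

  /* Optional: configure backends before preparation. */
  nnfw_set_available_backends(session, "cpu");

  /* 3. Prepare the inference environment. */
  nnfw_prepare(session);

  /* 4. Query tensor info to size the buffers
        (index 0 and float32 dtype are assumptions for this sketch). */
  nnfw_tensorinfo in_info, out_info;
  nnfw_input_tensorinfo(session, 0, &in_info);
  nnfw_output_tensorinfo(session, 0, &out_info);

  float *input = calloc(num_elems(&in_info), sizeof(float));
  float *output = calloc(num_elems(&out_info), sizeof(float));

  /* 5. Bind the input data and the output buffer, then run. */
  nnfw_set_input(session, 0, NNFW_TYPE_TENSOR_FLOAT32, input,
                 num_elems(&in_info) * sizeof(float));
  nnfw_set_output(session, 0, NNFW_TYPE_TENSOR_FLOAT32, output,
                  num_elems(&out_info) * sizeof(float));
  nnfw_run(session);

  /* The same session can run again (e.g. with new input data),
     but only after the previous nnfw_run has returned. */

  printf("output[0] = %f\n", output[0]);

  free(input);
  free(output);
  nnfw_close_session(session);
  return 0;
}

The final nnfw_close_session call releases the session once inference is done; as the comment notes, running a different model requires creating a separate session.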