Add CLTuner support
author    Inki Dae <inki.dae@samsung.com>
          Thu, 21 Jan 2021 07:36:37 +0000 (16:36 +0900)
committer Inki Dae <inki.dae@samsung.com>
          Thu, 25 Mar 2021 02:11:40 +0000 (11:11 +0900)
commit    9c6930315e7cbf41f2ce73da5a566eab54ff6e79
tree      275dff5559344adc66e2714bc556ee0042b8e716
parent    9a602b4be14a0d49e56a078d0dbd0cd280bb7fe0
Add CLTuner support

Added CLTuner support.

This patch adds a new internal API, the SetCLTuner function, which passes
the user-given CLTuner configuration to the MLAPI and ARMNN backends
before the inference engine loads a given model file.

[How to use]
There are two CLTuner modes:
    READ : the inference engine refers to a given tuned file at inference time.
    GENERATION : the inference engine generates a tuning file for a given model file.

And there are three CLTuner types:
    EXHAUSTIVE : slow tuning speed but aggressive optimization.
    NORMAL : balanced tuning speed and optimization.
    RAPID : fast tuning speed but lenient optimization.

- For CLTuner read mode,
    inference_engine_cltuner cltuner = {
        .active = true,
        .update = false,
        .type = INFERENCE_ENGINE_CLTUNER_READ,
    };

- For CLTuner generation mode,
    inference_engine_cltuner cltuner = {
        .active = true,
        .update = true,
        .type = INFERENCE_ENGINE_CLTUNER_{EXHAUSTIVE |
                                          NORMAL |
                                          RAPID},
    };
    };

    inference_engine_capacity capacity;
    engine->GetBackendCapacity(&capacity);

    if (capacity.cltuner_supported)
        engine->SetCLTuner(&cltuner);

Change-Id: Id1cc9513e444dfad21b933b46535c1b810f4a4d6
Signed-off-by: Inki Dae <inki.dae@samsung.com>
include/inference_engine_common.h
include/inference_engine_common_impl.h
include/inference_engine_type.h
src/inference_engine_common_impl.cpp
test/src/inference_engine_profiler.cpp