COMMAND ${Python3_EXECUTABLE} ${PYX_FILTER} ${PYTHON_API_OUT}
COMMENT "Pre-process Python API")
- # Plugin API
-
- add_custom_target(plugin_api
- COMMAND ${DOXYGEN_EXECUTABLE} ${PLUGIN_CONFIG_BINARY}
- WORKING_DIRECTORY ${DOCS_BINARY_DIR}
- COMMENT "Generating Plugin API Reference"
- VERBATIM)
-
# Preprocess docs
add_custom_target(preprocess_docs
WORKING_DIRECTORY ${DOCS_BINARY_DIR}
VERBATIM)
+ # Plugin API
+
+ add_custom_target(plugin_api
+ DEPENDS ie_docs
+ COMMAND ${DOXYGEN_EXECUTABLE} ${PLUGIN_CONFIG_BINARY}
+ WORKING_DIRECTORY ${DOCS_BINARY_DIR}
+ COMMENT "Generating Plugin API Reference"
+ VERBATIM)
+
# Umbrella OpenVINO target
add_custom_target(openvino_docs
1. Query the instruction set on your system via `lscpu | grep avx512_bf16` or `cat /proc/cpuinfo | grep avx512_bf16`.
2. Use [Query API](InferenceEngine_QueryAPI.md) with `METRIC_KEY(OPTIMIZATION_CAPABILITIES)`, which should return `BF16` in the list of CPU optimization options:
-@snippet openvino/docs/snippets/Bfloat16Inference0.cpp part0
+@snippet snippets/Bfloat16Inference0.cpp part0
Current Inference Engine solution for bfloat16 inference uses Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) and supports inference of the following layers in BF16 computation mode:
* Convolution
For default optimization on CPU, the source model is converted from FP32 or FP16 to BF16 and executed internally on platforms with native BF16 support. In that case, `KEY_ENFORCE_BF16` is set to `YES`.
The code below demonstrates how to check if the key is set:
-@snippet openvino/docs/snippets/Bfloat16Inference1.cpp part1
+@snippet snippets/Bfloat16Inference1.cpp part1
To disable BF16 internal transformations, set `KEY_ENFORCE_BF16` to `NO`. In this case, the model is inferred as is, without modifications, using the precisions that were set on each layer edge.
-@snippet openvino/docs/snippets/Bfloat16Inference2.cpp part2
+@snippet snippets/Bfloat16Inference2.cpp part2
An exception with the message `Platform doesn't support BF16 format` is raised if `KEY_ENFORCE_BF16` is set to `YES` on a CPU without native BF16 support.
Here is a code example:
-@snippet openvino/docs/snippets/DynamicBatching.cpp part0
+@snippet snippets/DynamicBatching.cpp part0
## Limitations
Based on that, the declaration of an operation class can look as follows:
-@snippet op.hpp op:header
+@snippet template_extension/op.hpp op:header
### Class Fields
An nGraph operation contains two constructors: a default constructor, which allows creating an operation without attributes, and a constructor that creates and validates an operation with specified inputs and attributes.
-@snippet op.cpp op:ctor
+@snippet template_extension/op.cpp op:ctor
### `validate_and_infer_types()`
The `ngraph::Node::validate_and_infer_types` method validates operation attributes and calculates output shapes using the attributes of the operation.
-@snippet op.cpp op:validate
+@snippet template_extension/op.cpp op:validate
### `clone_with_new_inputs()`
The `ngraph::Node::clone_with_new_inputs` method creates a copy of the nGraph operation with new inputs.
-@snippet op.cpp op:copy
+@snippet template_extension/op.cpp op:copy
### `visit_attributes()`
The `ngraph::Node::visit_attributes` method enables visiting all operation attributes.
-@snippet op.cpp op:visit_attributes
+@snippet template_extension/op.cpp op:visit_attributes
### `evaluate()`
The `ngraph::Node::evaluate` method enables the application of constant folding to an operation.
-@snippet op.cpp op:evaluate
+@snippet template_extension/op.cpp op:evaluate
## Register Custom Operations in Extension Class
To add custom operations to the [Extension](Extension.md) class, create an operation set with custom operations and implement the `InferenceEngine::IExtension::getOpSets` method:
-@snippet extension.cpp extension:getOpSets
+@snippet template_extension/extension.cpp extension:getOpSets
This method returns a map of opsets that exist in the extension library.
All custom kernels for the CPU plugin should be inherited from the InferenceEngine::ILayerExecImpl interface.
Based on that, the declaration of a kernel implementation class can look as follows:
-@snippet cpu_kernel.hpp cpu_implementation:header
+@snippet template_extension/cpu_kernel.hpp cpu_implementation:header
### Class Fields
An implementation constructor checks the parameters of the nGraph operation, stores the needed attributes, and records an error message in case of an error.
-@snippet cpu_kernel.cpp cpu_implementation:ctor
+@snippet template_extension/cpu_kernel.cpp cpu_implementation:ctor
### `getSupportedConfigurations`
The InferenceEngine::ILayerExecImpl::getSupportedConfigurations method returns all supported configuration formats (input/output tensor layouts) for your implementation. To specify data formats, use InferenceEngine::TensorDesc. Refer to the [Memory Primitives](../Memory_primitives.md) section for instructions on how to do it.
-@snippet cpu_kernel.cpp cpu_implementation:getSupportedConfigurations
+@snippet template_extension/cpu_kernel.cpp cpu_implementation:getSupportedConfigurations
### `init`
The InferenceEngine::ILayerExecImpl::init method gets a runtime-selected configuration from a vector that is populated from the `getSupportedConfigurations` method and checks the parameters:
-@snippet cpu_kernel.cpp cpu_implementation:init
+@snippet template_extension/cpu_kernel.cpp cpu_implementation:init
### `execute`
The InferenceEngine::ILayerExecImpl::execute method accepts and processes the actual tensors as input/output blobs:
-@snippet cpu_kernel.cpp cpu_implementation:execute
+@snippet template_extension/cpu_kernel.cpp cpu_implementation:execute
## Register Implementation in `Extension` Class
InferenceEngine::IExtension::getImplTypes returns a vector of implementation types for an operation.
-@snippet extension.cpp extension:getImplTypes
+@snippet template_extension/extension.cpp extension:getImplTypes
### <a name="getImplementation"><code>getImplementation</code></a>
InferenceEngine::IExtension::getImplementation returns the kernel implementation with a specified type for an operation.
-@snippet extension.cpp extension:getImplementation
+@snippet template_extension/extension.cpp extension:getImplementation
## Load Extension with Executable Kernels to Plugin
Use the `AddExtension` method of the general plugin interface to load your primitives:
-@snippet openvino/docs/snippets/CPU_Kernel.cpp part0
+@snippet snippets/CPU_Kernel.cpp part0
The same principles apply when registering a custom ONNX operator based on custom nGraph operations.
This example shows how to register a custom ONNX operator based on the `Operation` presented in [this tutorial](AddingNGraphOps.md), which is used in [TemplateExtension](Extension.md).
-@snippet extension.cpp extension:ctor
+@snippet template_extension/extension.cpp extension:ctor
Here, the `register_operator` function is called in the Extension constructor, which ensures that it runs before InferenceEngine::Core::ReadNetwork (since InferenceEngine::Core::AddExtension must be called before a model with a custom operator is read).
The example below demonstrates how to unregister the operator in the Extension destructor:
-@snippet extension.cpp extension:dtor
+@snippet template_extension/extension.cpp extension:dtor
Note that it is mandatory to unregister a custom ONNX operator if it is defined in a dynamically loaded shared library.
## Requirements for building with CMake
Based on that, declaration of an extension class can look as follows:
-@snippet extension.hpp extension:header
+@snippet template_extension/extension.hpp extension:header
The extension library should contain and export the method InferenceEngine::CreateExtension, which creates an `Extension` class:
-@snippet extension.cpp extension:CreateExtension
+@snippet template_extension/extension.cpp extension:CreateExtension
Also, an `Extension` object should implement the following methods:
* InferenceEngine::IExtension::GetVersion returns information about the version of the library
-@snippet extension.cpp extension:GetVersion
+@snippet template_extension/extension.cpp extension:GetVersion
Implement the InferenceEngine::IExtension::getOpSets method if the extension contains custom layers.
Read the [guide about custom operations](AddingNGraphOps.md) for more information.
* Include a section with your kernels into the global automatically-loaded `cldnn_global_custom_kernels/cldnn_global_custom_kernels.xml` file, which is hosted in the `<INSTALL_DIR>/deployment_tools/inference_engine/bin/intel64/{Debug/Release}` folder
* Call the `InferenceEngine::Core::SetConfig()` method from your application with the `InferenceEngine::PluginConfigParams::KEY_CONFIG_FILE` key and the configuration file name as a value before loading the network that uses custom layers to the plugin:
-@snippet openvino/docs/snippets/GPU_Kernel.cpp part0
+@snippet snippets/GPU_Kernel.cpp part0
All Inference Engine samples, except the trivial `hello_classification`,
feature a dedicated command-line option `-c` to load custom kernels. For example, to load custom layers for the classification sample, run the command below:
following line to your code that configures the GPU plugin to output the
custom kernels:
-@snippet openvino/docs/snippets/GPU_Kernel.cpp part1
+@snippet snippets/GPU_Kernel.cpp part1
When the Inference Engine compiles the kernels for the specific network,
it also outputs the resulting code for the custom kernels. In the
The example below shows how to set and use the key files:
-@snippet openvino/docs/snippets/GPU_Kernels_Tuning.cpp part0
+@snippet snippets/GPU_Kernels_Tuning.cpp part0
---
| <code>InferenceEngineProfileInfo</code> | Represents basic inference profiling information per layer |
| Inference Engine | A C++ library with a set of classes that you can use in your application to infer input data (images) and get the result |
| Inference Engine API | The basic default API for all supported devices, which allows you to load a model from Intermediate Representation, set input and output formats and execute the model on various devices |
-| Inference Engine <code>Core<code> | Inference Engine Core is a software component that manages inference on certain Intel(R) hardware devices: CPU, GPU, MYRIAD, GNA, etc. |
+| Inference Engine <code>Core</code> | Inference Engine Core is a software component that manages inference on certain Intel(R) hardware devices: CPU, GPU, MYRIAD, GNA, etc. |
| Layer catalog or Operations specification | A list of supported layers or operations and their parameters. Sets of supported layers differ between plugins; check the plugin documentation to verify whether the Inference Engine supports a certain layer on the dedicated hardware |
| <code>Layout</code> | Image data layout refers to the representation of an image batch. The layout shows the sequence of 4D or 5D tensor data in memory. A typical NCHW format represents pixels in the horizontal direction, rows in the vertical dimension, planes by channel, and images in a batch |
| <code>OutputsDataMap</code> | Structure which contains information about output precisions and layouts |
### GetAvailableDevices
-@snippet openvino/docs/snippets/InferenceEngine_QueryAPI0.cpp part0
+@snippet snippets/InferenceEngine_QueryAPI0.cpp part0
The function returns a list of available devices, for example:
```
The code below demonstrates how to understand whether `HETERO` device dumps `.dot` files with split graphs during the split stage:
-@snippet openvino/docs/snippets/InferenceEngine_QueryAPI1.cpp part1
+@snippet snippets/InferenceEngine_QueryAPI1.cpp part1
For documentation about common configuration keys, refer to `ie_plugin_config.hpp`. Device-specific configuration keys can be found in the corresponding plugin folders.
* To extract device properties such as available device, device name, supported configuration keys, and others, use the `InferenceEngine::Core::GetMetric` method:
-@snippet openvino/docs/snippets/InferenceEngine_QueryAPI2.cpp part2
+@snippet snippets/InferenceEngine_QueryAPI2.cpp part2
A returned value looks as follows: `Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz`.
The method is used to get an executable network-specific metric such as `METRIC_KEY(OPTIMAL_NUMBER_OF_INFER_REQUESTS)`:
-@snippet openvino/docs/snippets/InferenceEngine_QueryAPI3.cpp part3
+@snippet snippets/InferenceEngine_QueryAPI3.cpp part3
Or the current temperature of `MYRIAD` device:
-@snippet openvino/docs/snippets/InferenceEngine_QueryAPI4.cpp part4
+@snippet snippets/InferenceEngine_QueryAPI4.cpp part4
### GetConfig()
The method is used to get information about configuration values the executable network has been created with:
-@snippet openvino/docs/snippets/InferenceEngine_QueryAPI5.cpp part5
+@snippet snippets/InferenceEngine_QueryAPI5.cpp part5
### SetConfig()
1) **Create Inference Engine Core** to manage available devices and read network objects:
-@snippet openvino/docs/snippets/Integrate_with_customer_application_new_API.cpp part0
+@snippet snippets/Integrate_with_customer_application_new_API.cpp part0
2) **Read a model IR** created by the Model Optimizer (.xml is the supported format):
-@snippet openvino/docs/snippets/Integrate_with_customer_application_new_API.cpp part1
+@snippet snippets/Integrate_with_customer_application_new_API.cpp part1
**Or read the model from ONNX format** (.onnx and .prototxt are supported formats). You can find more information about ONNX format support in the document [ONNX format support in OpenVINO™](./ONNX_Support.md).
-@snippet openvino/docs/snippets/Integrate_with_customer_application_new_API.cpp part2
+@snippet snippets/Integrate_with_customer_application_new_API.cpp part2
3) **Configure input and output**. Request input and output information using `InferenceEngine::CNNNetwork::getInputsInfo()`, and `InferenceEngine::CNNNetwork::getOutputsInfo()`
methods:
-@snippet openvino/docs/snippets/Integrate_with_customer_application_new_API.cpp part3
+@snippet snippets/Integrate_with_customer_application_new_API.cpp part3
Optionally, set the number format (precision) and memory layout for inputs and outputs. Refer to the
[Supported configurations](supported_plugins/Supported_Devices.md) chapter to choose the relevant configuration.
You can use the following code snippet to configure input and output:
-@snippet openvino/docs/snippets/Integrate_with_customer_application_new_API.cpp part4
+@snippet snippets/Integrate_with_customer_application_new_API.cpp part4
> **NOTE**: NV12 input color format pre-processing differs from other color conversions. In case of NV12,
> Inference Engine expects two separate image planes (Y and UV). You must use a specific
4) **Load the model** to the device using `InferenceEngine::Core::LoadNetwork()`:
-@snippet openvino/docs/snippets/Integrate_with_customer_application_new_API.cpp part5
+@snippet snippets/Integrate_with_customer_application_new_API.cpp part5
It creates an executable network from a network object. The executable network is associated with a single hardware device.
It is possible to create as many networks as needed and to use them simultaneously (up to the limit of the hardware resources).
The third parameter is a configuration for the plugin: a map of (parameter name, parameter value) pairs. Refer to the
[Supported devices](supported_plugins/Supported_Devices.md) page for details about supported configuration parameters for each device.
-@snippet openvino/docs/snippets/Integrate_with_customer_application_new_API.cpp part6
+@snippet snippets/Integrate_with_customer_application_new_API.cpp part6
5) **Create an infer request**:
-@snippet openvino/docs/snippets/Integrate_with_customer_application_new_API.cpp part7
+@snippet snippets/Integrate_with_customer_application_new_API.cpp part7
6) **Prepare input**. You can use one of the following options to prepare input:
* **Optimal way for a single network.** Get blobs allocated by an infer request using `InferenceEngine::InferRequest::GetBlob()`
and feed an image and the input data to the blobs. In this case, input data must be aligned (resized manually) with a
given blob size and have a correct color format.
-@snippet openvino/docs/snippets/Integrate_with_customer_application_new_API.cpp part8
+@snippet snippets/Integrate_with_customer_application_new_API.cpp part8
* **Optimal way for a cascade of networks (output of one network is input for another).** Get output blob from the first
request using `InferenceEngine::InferRequest::GetBlob()` and set it as input for the second request using
`InferenceEngine::InferRequest::SetBlob()`.
-@snippet openvino/docs/snippets/Integrate_with_customer_application_new_API.cpp part9
+@snippet snippets/Integrate_with_customer_application_new_API.cpp part9
* **Optimal way to handle ROI (a ROI object located inside the input of one network is the input for another).** It is
possible to re-use a shared input across several networks. You do not need to allocate a separate input blob for a network
that processes only a ROI of another network's input; instead, crop the ROI without allocating new memory using
`InferenceEngine::make_shared_blob()`, passing `InferenceEngine::Blob::Ptr` and `InferenceEngine::ROI` as parameters.
-@snippet openvino/docs/snippets/Integrate_with_customer_application_new_API.cpp part10
+@snippet snippets/Integrate_with_customer_application_new_API.cpp part10
Make sure that the shared input is kept valid during the execution of each network. Otherwise, the ROI blob may be corrupted if the
original input blob (that the ROI is cropped from) has already been rewritten.
* Allocate input blobs of the appropriate types and sizes, feed an image and the input data to the blobs, and call
`InferenceEngine::InferRequest::SetBlob()` to set these blobs for an infer request:
-@snippet openvino/docs/snippets/Integrate_with_customer_application_new_API.cpp part11
+@snippet snippets/Integrate_with_customer_application_new_API.cpp part11
A blob can be filled before and after `SetBlob()`.
7) **Do inference** by calling the `InferenceEngine::InferRequest::StartAsync` and `InferenceEngine::InferRequest::Wait`
methods for asynchronous request:
-@snippet openvino/docs/snippets/Integrate_with_customer_application_new_API.cpp part12
+@snippet snippets/Integrate_with_customer_application_new_API.cpp part12
or by calling the `InferenceEngine::InferRequest::Infer` method for synchronous request:
-@snippet openvino/docs/snippets/Integrate_with_customer_application_new_API.cpp part13
+@snippet snippets/Integrate_with_customer_application_new_API.cpp part13
`StartAsync` returns immediately and starts inference without blocking the main thread, while `Infer` blocks the
main thread and returns when inference is completed.
Note that casting `Blob` to `TBlob` via `std::dynamic_pointer_cast` is not the recommended way;
it is better to access the data via the `buffer()` and `as()` methods as follows:
-@snippet openvino/docs/snippets/Integrate_with_customer_application_new_API.cpp part14
+@snippet snippets/Integrate_with_customer_application_new_API.cpp part14
## Build Your Application
1. Migrate from the `InferenceEngine::InferencePlugin` initialization:
-@snippet openvino/docs/snippets/Migration_CoreAPI.cpp part0
+@snippet snippets/Migration_CoreAPI.cpp part0
to the `InferenceEngine::Core` class initialization:
-@snippet openvino/docs/snippets/Migration_CoreAPI.cpp part1
+@snippet snippets/Migration_CoreAPI.cpp part1
2. Instead of using `InferenceEngine::CNNNetReader` to read IR:
-@snippet openvino/docs/snippets/Migration_CoreAPI.cpp part2
+@snippet snippets/Migration_CoreAPI.cpp part2
read networks using the Core class:
-@snippet openvino/docs/snippets/Migration_CoreAPI.cpp part3
+@snippet snippets/Migration_CoreAPI.cpp part3
The Core class also allows reading models from the ONNX format (more information is [here](./ONNX_Support.md)):
-@snippet openvino/docs/snippets/Migration_CoreAPI.cpp part4
+@snippet snippets/Migration_CoreAPI.cpp part4
3. Instead of adding CPU device extensions to the plugin:
-@snippet openvino/docs/snippets/Migration_CoreAPI.cpp part5
+@snippet snippets/Migration_CoreAPI.cpp part5
add extensions to CPU device using the Core class:
-@snippet openvino/docs/snippets/Migration_CoreAPI.cpp part6
+@snippet snippets/Migration_CoreAPI.cpp part6
4. Instead of setting configuration keys to a particular plugin, set (key, value) pairs via `InferenceEngine::Core::SetConfig`
-@snippet openvino/docs/snippets/Migration_CoreAPI.cpp part7
+@snippet snippets/Migration_CoreAPI.cpp part7
> **NOTE**: If `deviceName` is omitted as the last argument, configuration is set for all Inference Engine devices.
5. Migrate from loading the network to a particular plugin:
-@snippet openvino/docs/snippets/Migration_CoreAPI.cpp part8
+@snippet snippets/Migration_CoreAPI.cpp part8
to `InferenceEngine::Core::LoadNetwork` to a particular device:
-@snippet openvino/docs/snippets/Migration_CoreAPI.cpp part9
+@snippet snippets/Migration_CoreAPI.cpp part9
After you have an instance of `InferenceEngine::ExecutableNetwork`, all other steps are as usual.
To list all supported ONNX ops in a specific version and domain, use the `get_supported_operators` function
as shown in the example below:
-@snippet openvino/docs/snippets/OnnxImporterTutorial0.cpp part0
+@snippet snippets/OnnxImporterTutorial0.cpp part0
The above code produces a list of all the supported operators for the `version` and `domain` you specified and outputs a list similar to this:
```cpp
To determine whether a specific ONNX operator in a particular version and domain is supported by the importer, use the `is_operator_supported` function as shown in the example below:
-@snippet openvino/docs/snippets/OnnxImporterTutorial1.cpp part1
+@snippet snippets/OnnxImporterTutorial1.cpp part1
## Import ONNX Model
The code below shows how to convert the ONNX ResNet50 model to the nGraph function using `import_onnx_model` with the stream as an input:
-@snippet openvino/docs/snippets/OnnxImporterTutorial2.cpp part2
+@snippet snippets/OnnxImporterTutorial2.cpp part2
### <a name="path">Filepath as Input</a>
The code below shows how to convert the ONNX ResNet50 model to the nGraph function using `import_onnx_model` with the filepath as an input:
-@snippet openvino/docs/snippets/OnnxImporterTutorial3.cpp part3
+@snippet snippets/OnnxImporterTutorial3.cpp part3
[onnx_header]: https://github.com/NervanaSystems/ngraph/blob/master/src/ngraph/frontend/onnx_import/onnx.hpp
[onnx_model_zoo]: https://github.com/onnx/models
Here is a code example:
-@snippet openvino/docs/snippets/ShapeInference.cpp part0
+@snippet snippets/ShapeInference.cpp part0
Shape Inference feature is used in [Smart classroom sample](@ref omz_demos_smart_classroom_demo_README).
For more information, see the `InferenceEngine::Core` Class
Reference Documentation.
-@snippet openvino/docs/snippets/protecting_model_guide.cpp part0
+@snippet snippets/protecting_model_guide.cpp part0
Hardware-based protection, such as Intel® Software Guard Extensions
(Intel® SGX), can be utilized to protect decryption operation secrets and
The `ReadNetwork(const std::string& model, const Blob::CPtr& weights)` function
should be called with `weights` passed as an empty `Blob`.
-@snippet openvino/docs/snippets/protecting_model_guide.cpp part1
+@snippet snippets/protecting_model_guide.cpp part1
[deploy_encrypted_model]: img/deploy_encrypted_model.png
This example uses the OpenCL context obtained from an executable network object.
-@snippet openvino/docs/snippets/GPU_RemoteBlob_API0.cpp part0
+@snippet snippets/GPU_RemoteBlob_API0.cpp part0
### Running GPU Plugin Inference within User-Supplied Shared Context
-@snippet openvino/docs/snippets/GPU_RemoteBlob_API1.cpp part1
+@snippet snippets/GPU_RemoteBlob_API1.cpp part1
### Direct Consuming of the NV12 VAAPI Video Decoder Surface on Linux
-@snippet openvino/docs/snippets/GPU_RemoteBlob_API2.cpp part2
+@snippet snippets/GPU_RemoteBlob_API2.cpp part2
## See Also
Another way to annotate a network is to set affinity manually using <code>ngraph::Node::get_rt_info</code> with key `"affinity"`:
-@snippet openvino/docs/snippets/HETERO0.cpp part0
+@snippet snippets/HETERO0.cpp part0
The fallback policy does not work if even one layer has an initialized affinity. The correct sequence is to call the automatic affinity assignment first and then fix it manually.
> **NOTE**: If you set affinity manually, be careful: currently Inference Engine plugins do not support constant (`Constant`->`Result`) and empty (`Parameter`->`Result`) networks. Avoid such subgraphs when you set affinity manually.
-@snippet openvino/docs/snippets/HETERO1.cpp part1
+@snippet snippets/HETERO1.cpp part1
If you rely on the default affinity distribution, you can avoid calling <code>InferenceEngine::Core::QueryNetwork</code> and just call <code>InferenceEngine::Core::LoadNetwork</code> instead:
-@snippet openvino/docs/snippets/HETERO2.cpp part2
+@snippet snippets/HETERO2.cpp part2
> **NOTE**: `InferenceEngine::Core::QueryNetwork` does not depend on affinities set by a user, but queries for layer support based on device capabilities.
* `hetero_affinity_<network name>.dot` - annotation of affinities per layer. This file is written to disk only if the default fallback policy was executed
* `hetero_subgraphs_<network name>.dot` - annotation of affinities per graph. This file is written to disk during execution of <code>ICNNNetwork::LoadNetwork()</code> for the heterogeneous plugin
-@snippet openvino/docs/snippets/HETERO3.cpp part3
+@snippet snippets/HETERO3.cpp part3
You can use the GraphViz* utility or converters to the `.png` format. On the Ubuntu* operating system, you can use the following utilities:
* `sudo apt-get install xdot`
Basically, there are three ways to specify the devices to be used by the "MULTI":
-@snippet openvino/docs/snippets/MULTI0.cpp part0
+@snippet snippets/MULTI0.cpp part0
Notice that the priorities of the devices can be changed in real time for the executable network:
-@snippet openvino/docs/snippets/MULTI1.cpp part1
+@snippet snippets/MULTI1.cpp part1
Finally, there is a way to specify the number of requests that the multi-device will internally keep for each device.
Suppose your original app was running 4 cameras with 4 inference requests; you would now want to share these 4 requests between the 2 devices used in MULTI. The easiest way is to specify the number of requests for each device using parentheses: "MULTI:CPU(2),GPU(2)" and use the same 4 requests in your app. However, such an explicit configuration is not performance-portable and hence not recommended. Instead, the better way is to configure the individual devices and query the resulting number of requests at the application level (see [Configuring the Individual Devices and Creating the Multi-Device On Top](#configuring-the-individual-devices-and-creating-the-multi-device-on-top)).
```
A simple programmatic way to enumerate the devices and use them with the multi-device is as follows:
-@snippet openvino/docs/snippets/MULTI2.cpp part2
+@snippet snippets/MULTI2.cpp part2
Beyond the trivial "CPU", "GPU", "HDDL", and so on, when multiple instances of a device are available, the names are more qualified.
For example, this is how two Intel® Movidius™ Myriad™ X sticks are listed with the hello_query_sample:
So the explicit configuration to use both would be "MULTI:MYRIAD.1.2-ma2480,MYRIAD.1.4-ma2480".
Accordingly, the code below loops over all available devices of the "MYRIAD" type only:
-@snippet openvino/docs/snippets/MULTI3.cpp part3
+@snippet snippets/MULTI3.cpp part3
## Configuring the Individual Devices and Creating the Multi-Device On Top
As discussed in the first section, configure each individual device as usual and then just create the "MULTI" device on top:
-@snippet openvino/docs/snippets/MULTI4.cpp part4
+@snippet snippets/MULTI4.cpp part4
Alternatively, you can combine all the individual device settings into a single config and load it, allowing the multi-device plugin to parse and apply the settings to the right devices. See the code example in the next section.
## Querying the Optimal Number of Inference Requests
Notice that until R2 you had to calculate the number of requests in your application for any device; for example, you had to know that Intel® Vision Accelerator Design with Intel® Movidius™ VPUs required at least 32 inference requests to perform well. Now you can use the new GetMetric API to query the optimal number of requests. Similarly, when using the multi-device, you do not need to sum over the included devices yourself: you can query the metric directly:
-@snippet openvino/docs/snippets/MULTI5.cpp part5
+@snippet snippets/MULTI5.cpp part5
## Using the Multi-Device with OpenVINO Samples and Benchmarking the Performance
Notice that every OpenVINO sample that supports the "-d" (which stands for "device") command-line option transparently accepts the multi-device.
# Note that the wildcards are matched against the file with absolute path, so to
# exclude all test directories for example use the pattern */test/*
-EXCLUDE_PATTERNS = cnn_network_ngraph_impl.hpp \
- ie_imemory_state_internal.hpp \
- ie_memory_state_internal.hpp \
- ie_memory_state_base.hpp \
- generic_ie.hpp \
+EXCLUDE_PATTERNS = generic_ie.hpp \
function_name.hpp \
macro_overload.hpp
@snippet src/template_executable_network.cpp executable_network:get_metric
-The IE_SET_METRIC helper macro sets metric value and checks that the actual metric type matches a type of the specified value.
+The IE_SET_METRIC_RETURN helper macro sets the metric value and checks that the actual metric type matches the type of the specified value.
### `GetConfig()`
-# Representation of low-precision models
+# Representation of low-precision models {#lp_representation}
The goal of this document is to describe how optimized models are represented in OpenVINO Intermediate Representation (IR) and provide guidance on interpretation rules for such models at runtime.
Currently, there are two groups of optimization methods that can influence the IR after they are applied to the full-precision model:
- **Sparsity**. It is represented by zeros inside the weights, and it is up to the hardware plugin how to interpret these zeros (use the weights as is or apply special compression algorithms and sparse arithmetic). No additional mask is provided with the model.
- **Quantization**. The rest of this document is dedicated to the representation of quantized models.
## Representation of quantized models
-The OpenVINO Toolkit represents all the quantized models using the so-called FakeQuantize operation (see the description in [this document](../MO_DG/prepare_model/convert_model/Legacy_IR_Layers_Catalog_Spec.md)). This operation is very expressive and allows mapping values from arbitrary input and output ranges. The whole idea behind that is quite simple: we project (discretize) the input values to the low-precision data type using affine transformation (with clamp and rounding) and then reproject discrete values back to the original range and data type. It can be considered as an emulation of the quantization process which happens at runtime.
+The OpenVINO Toolkit represents all quantized models using the so-called FakeQuantize operation (see the description in [this document](@ref openvino_docs_ops_quantization_FakeQuantize_1)). This operation is very expressive and allows mapping values from arbitrary input and output ranges. The idea behind it is quite simple: we project (discretize) the input values to a low-precision data type using an affine transformation (with clamp and rounding) and then reproject the discrete values back to the original range and data type. It can be considered an emulation of the quantization process which happens at runtime.
To execute a particular DL operation in low precision, all its inputs should be quantized, i.e., there should be a FakeQuantize between the operation and the data blobs. The figure below shows an example of a quantized Convolution which contains two FakeQuantize nodes: one for weights and one for activations (bias is quantized using the same parameters).
![quantized_convolution]
<div align="center">Figure 1. Example of quantized Convolution operation.</div>
One of the features of the Inference Engine is the support of quantized networks with different precisions: INT8, INT4, etc.
However, it is up to the plugin to define which exact precisions are supported by the particular HW.
All quantized networks which can be expressed in IR have a unified representation by means of *FakeQuantize* operation.
-For more details about low-precision model representation please refer to this [document](LowPrecisionModelRepresentation.md).
+For more details about low-precision model representation please refer to this [document](@ref lp_representation).
### Interpreting FakeQuantize at runtime
During model load, each plugin can interpret quantization rules expressed in *FakeQuantize* operations:
- Independently based on the definition of *FakeQuantize* operation.
- Using a special library of low-precision transformations (LPT) which applies common rules for generic operations,
-such as Convolution, Fully-Connected, Eltwise, etc., and translates "fake-quantized" models into the models with low-precision operations. For more information about low-precision flow please refer to the following [document](../IE_DG/Int8Inference.md).
+such as Convolution, Fully-Connected, Eltwise, etc., and translates "fake-quantized" models into the models with low-precision operations. For more information about low-precision flow please refer to the following [document](@ref openvino_docs_IE_DG_Int8Inference).
Here we provide only a high-level overview of the interpretation rules of FakeQuantize.
At runtime each FakeQuantize can be split into two independent operations: **Quantize** and **Dequantize**.
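That split can be illustrated with a hedged scalar sketch, assuming conventional `scale`/`zero_point` quantization parameters (these names are illustrative, not the LPT library API):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Quantize: map a real value to an 8-bit integer code.
uint8_t quantize(float x, float scale, int zero_point) {
    int q = static_cast<int>(std::round(x / scale)) + zero_point;
    return static_cast<uint8_t>(std::min(std::max(q, 0), 255));
}

// Dequantize: map an integer code back to an approximate real value.
float dequantize(uint8_t q, float scale, int zero_point) {
    return (static_cast<int>(q) - zero_point) * scale;
}
```

A plugin that fuses the Quantize part into the producing operation and the Dequantize part into the consuming one can execute the operation itself entirely in integer arithmetic.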
</tab>
<!-- API References -->
<tab type="usergroup" title="API REFERENCE">
- <!-- IE Developer Package -->
- <tab type="modules" visible="yes" title="Inference Engine Plugin API Reference"/>
+ <!-- IE Plugin API -->
+ <tab type="user" url="group__ie__dev__api.html" visible="yes" title="Inference Engine Plugin API Reference"/>
+ <!-- IE Transformations API -->
+ <tab type="user" url="group__ie__transformation__api.html" visible="yes" title="Inference Engine Transformations API Reference"/>
</tab>
<tab type="usergroup" title="MAIN OPENVINO™ DOCS" url="../index.html"/>
</navindex>
EXCLUDE_SYMBOLS = INFERENCE_ENGINE_C_API_EXTERN \
INFERENCE_ENGINE_C_API \
+ INFERENCE_ENGINE_C_API_CALLBACK \
IE_NODISCARD
PREDEFINED = "__attribute__(x)=" \
"__VA_ARGS__=" \
"INFERENCE_ENGINE_C_API_EXTERN=" \
+ "INFERENCE_ENGINE_C_API_CALLBACK=" \
"INFERENCE_ENGINE_C_API=" \
"IE_NODISCARD=" \
"__cdecl=" \
"__declspec(x)=" \
- "__GNUC__=" \
"_WIN32"
FILE_PATTERNS = *.h
# exclude all test directories use the pattern */test/*
EXCLUDE_SYMBOLS = InferenceEngine::details \
+ InferenceEngine::gpu::details \
PRECISION_NAME \
- TBLOB_TOP_RESULT \
CASE \
CASE2 \
_CONFIG_KEY \
INFERENCE_ENGINE_API_CPP \
INFERENCE_ENGINE_API_CLASS \
INFERENCE_ENGINE_DEPRECATED \
- INFERENCE_ENGINE_NN_BUILDER_API_CLASS \
- INFERENCE_ENGINE_NN_BUILDER_DEPRECATED \
IE_SUPPRESS_DEPRECATED_START \
IE_SUPPRESS_DEPRECATED_END \
IE_SUPPRESS_DEPRECATED_START_WIN \
IE_SUPPRESS_DEPRECATED_END_WIN \
INFERENCE_ENGINE_INTERNAL \
- INFERENCE_ENGINE_INTERNAL_CNNLAYER_CLASS \
IE_DO_PRAGMA \
- REG_VALIDATOR_FOR
+ parallel_* \
+ for_* \
+ splitter \
+ InferenceEngine::parallel_* \
+ NOMINMAX \
+ TBB_PREVIEW_NUMA_SUPPORT \
+ IE_THREAD_*
# The EXAMPLE_PATH tag can be used to specify one or more files or directories
# that contain example code fragments that are included (see the \include
# command).
-EXAMPLE_PATH = template_extension \
- ../inference-engine/samples
+EXAMPLE_PATH = "@CMAKE_CURRENT_SOURCE_DIR@"
# If the value of the EXAMPLE_PATH tag contains directories, you can use the
# EXAMPLE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp and
EXTRACT_LOCAL_CLASSES = NO
INPUT = "@DOCS_BINARY_DIR@/docs/IE_PLUGIN_DG" \
- "@IE_SOURCE_DIR@/src/plugin_api"
+ "@IE_SOURCE_DIR@/src/plugin_api" \
+ "@IE_SOURCE_DIR@/src/transformations/include" \
+ "@OpenVINO_MAIN_SOURCE_DIR@/openvino/itt/include/openvino"
+
+
+RECURSIVE = YES
FILE_PATTERNS = *.c \
*.cpp \
*.hpp \
*.md
-EXCLUDE_PATTERNS = cnn_network_ngraph_impl.hpp \
- ie_imemory_state_internal.hpp \
- ie_memory_state_internal.hpp \
- ie_memory_state_base.hpp \
- convert_function_to_cnn_network.hpp \
- generic_ie.hpp
+EXCLUDE_PATTERNS = generic_ie.hpp
+
+EXCLUDE_SYMBOLS = InferenceEngine::details
-EXCLUDE_SYMBOLS =
+TAGFILES = "@DOCS_BINARY_DIR@/ie_api.tag=.."
EXAMPLE_PATH = "@CMAKE_CURRENT_SOURCE_DIR@/template_plugin/src" \
"@CMAKE_CURRENT_SOURCE_DIR@/template_plugin/include" \
"@CMAKE_CURRENT_SOURCE_DIR@/template_plugin/src/CMakeLists.txt" \
- "@CMAKE_CURRENT_SOURCE_DIR@/template_plugin/tests/functional/"
- CMakeLists.txt \
- "@CMAKE_CURRENT_SOURCE_DIR@/examples"
+ "@CMAKE_CURRENT_SOURCE_DIR@/template_plugin/tests/functional/CMakeLists.txt" \
+ "@CMAKE_CURRENT_SOURCE_DIR@/template_plugin/tests/functional/transformations" \
+ "@CMAKE_CURRENT_SOURCE_DIR@/template_plugin/tests/functional/shared_tests_instances/" \
+ "@CMAKE_CURRENT_SOURCE_DIR@/snippets" \
+ "@IE_SOURCE_DIR@/tests/functional/plugin/shared/include"
EXAMPLE_PATTERNS = *.cpp \
*.hpp
EXPAND_ONLY_PREDEF = YES
-PREDEFINED = INFERENCE_ENGINE_API \
- INFERENCE_ENGINE_API_CPP \
- INFERENCE_ENGINE_API_CLASS \
- INFERENCE_ENGINE_DEPRECATED \
- IE_SUPPRESS_DEPRECATED_START \
- IE_SUPPRESS_DEPRECATED_END \
- IE_SUPPRESS_DEPRECATED_START_WIN \
- IE_SUPPRESS_DEPRECATED_END_WIN \
- IE_THREAD=IE_THREAD_TBB
+PREDEFINED = "INFERENCE_ENGINE_API=" \
+ "INFERENCE_ENGINE_API_CPP=" \
+ "INFERENCE_ENGINE_API_CLASS=" \
+ "INFERENCE_ENGINE_DEPRECATED=" \
+ "inference_engine_transformations_EXPORTS" \
+ "TRANSFORMATIONS_API=" \
+ "NGRAPH_HELPER_DLL_EXPORT=" \
+ "NGRAPH_HELPER_DLL_IMPORT=" \
+ "IE_SUPPRESS_DEPRECATED_START=" \
+ "IE_SUPPRESS_DEPRECATED_END=" \
+ "IE_SUPPRESS_DEPRECATED_START_WIN=" \
+ "IE_SUPPRESS_DEPRECATED_END_WIN=" \
+ "IE_THREAD=IE_THREAD_TBB" \
+ "NGRAPH_RTTI_DECLARATION="
</tab>
<!-- API References -->
<tab type="usergroup" title="API REFERENCE">
- <!-- IE Developer Package -->
- <tab type="modules" visible="yes" title="Inference Engine Plugin API Reference"/>
+ <!-- IE Plugin API Reference -->
+ <tab type="user" url="group__ie__dev__api.html" visible="yes" title="Inference Engine Plugin API Reference"/>
+ <!-- IE Transformations API Reference -->
+ <tab type="user" url="group__ie__transformation__api.html" visible="yes" title="Inference Engine Transformations API Reference"/>
</tab>
<tab type="usergroup" title="MAIN OPENVINO™ DOCS" url="../index.html"/>
</navindex>
<tab type="user" title="Hello Query Device C++ Sample" url="@ref openvino_inference_engine_samples_hello_query_device_README"/>
<tab type="user" title="Hello Query Device Python* Sample" url="@ref openvino_inference_engine_ie_bridges_python_sample_hello_query_device_README"/>
<tab type="user" title="nGraph Function C++ Sample" url="@ref openvino_inference_engine_samples_ngraph_function_creation_sample_README"/>
+ <tab type="user" title="nGraph Function Python Sample" url="@ref openvino_inference_engine_ie_bridges_python_samples_ngraph_function_creation_sample_README"/>
<tab type="user" title="Object Detection C++ Sample SSD" url="@ref openvino_inference_engine_samples_object_detection_sample_ssd_README"/>
<tab type="user" title="Object Detection Python* Sample SSD" url="@ref openvino_inference_engine_ie_bridges_python_sample_object_detection_sample_ssd_README"/>
<tab type="user" title="Object Detection C Sample SSD" url="@ref openvino_inference_engine_ie_bridges_c_samples_object_detection_sample_ssd_README"/>
1. A pointer to an inference request.
2. An ID to keep track of the request.
-@snippet openvino/docs/snippets/movidius-programming-guide.cpp part0
+@snippet snippets/movidius-programming-guide.cpp part0
### Declare a Vector of Requests
-@snippet openvino/docs/snippets/movidius-programming-guide.cpp part1
+@snippet snippets/movidius-programming-guide.cpp part1
Declare and initialize 2 mutex variables:
1. For each request
For inference requests, use the asynchronous IE API calls:
-@snippet openvino/docs/snippets/movidius-programming-guide.cpp part2
+@snippet snippets/movidius-programming-guide.cpp part2
-@snippet openvino/docs/snippets/movidius-programming-guide.cpp part3
+@snippet snippets/movidius-programming-guide.cpp part3
### Create a Lambda Function
Inside the Lambda body use the completion callback function:
-@snippet openvino/docs/snippets/movidius-programming-guide.cpp part4
+@snippet snippets/movidius-programming-guide.cpp part4
## Additional Resources
- Model Optimizer can efficiently bake the mean and normalization (scale) values into the model (for example, weights of the first convolution). See <a href="#mo-knobs-related-to-performance">Model Optimizer Knobs Related to Performance</a>.
- If regular 8-bit per channel images are your native media (for instance, decoded frames), do not convert to the `FP32` on your side, as this is something that plugins can accelerate. Use the `InferenceEngine::Precision::U8` as your input format:<br>
-@snippet openvino/docs/snippets/dldt_optimization_guide1.cpp part1
+@snippet snippets/dldt_optimization_guide1.cpp part1
Note that in many cases, you can directly share the (input) data with the Inference Engine.
For Intel MSS, it is recommended to perform a viable pre-processing, for example, crop/resize, and then convert to RGB again with the [Video Processing Procedures (VPP)](https://software.intel.com/en-us/node/696108). Then lock the result and create an Inference Engine blob on top of that. The resulting pointer can be used for the `SetBlob`:
-@snippet openvino/docs/snippets/dldt_optimization_guide2.cpp part2
+@snippet snippets/dldt_optimization_guide2.cpp part2
**WARNING**: The `InferenceEngine::NHWC` layout is not supported natively by most InferenceEngine plugins so internal conversion might happen.
-@snippet openvino/docs/snippets/dldt_optimization_guide3.cpp part3
+@snippet snippets/dldt_optimization_guide3.cpp part3
Alternatively, you can use RGBP (planar RGB) output from Intel MSS. This allows you to wrap the (locked) result as a regular NCHW blob, which is generally friendly for most plugins (unlike NHWC). Then you can use it with `SetBlob` just like in the previous example:
-@snippet openvino/docs/snippets/dldt_optimization_guide4.cpp part4
+@snippet snippets/dldt_optimization_guide4.cpp part4
The only downside of this approach is that VPP conversion to RGBP is not hardware accelerated (it is performed on the GPU EUs). Also, it is available only on Linux.
**WARNING**: The `InferenceEngine::NHWC` layout is not supported natively by most InferenceEngine plugins so internal conversion might happen.
-@snippet openvino/docs/snippets/dldt_optimization_guide5.cpp part5
+@snippet snippets/dldt_optimization_guide5.cpp part5
Notice that the original `cv::Mat`/blobs cannot be used simultaneously by the application and the Inference Engine. Alternatively, the data that the pointer references can be copied to unlock the original data and return ownership to the original API.
More importantly, an infer request encapsulates the reference to the “executable” network and actual inputs/outputs. Now, when you load the network to the plugin, you get a reference to the executable network (you may consider that as a queue). Actual infer requests are created by the executable network:
-@snippet openvino/docs/snippets/dldt_optimization_guide6.cpp part6
+@snippet snippets/dldt_optimization_guide6.cpp part6
`GetBlob` is a recommended way to communicate with the network, as it internally allocates the data with the right padding/alignment for the device. For example, the GPU inputs/outputs blobs are mapped to the host (which is fast) if `GetBlob` is used. But if you call `SetBlob`, a copy (from/to the blob you have set) into the internal GPU plugin structures will happen.
- For the CPU, the best solution, you can use the <a href="#cpu-streams">CPU "throughput" mode</a>.
- If latency is of more concern, you can try the `EXCLUSIVE_ASYNC_REQUESTS` [configuration option](../IE_DG/supported_plugins/CPU.md) that limits the number of the simultaneously executed requests for all (executable) networks that share the specific device to just one:<br>
-@snippet openvino/docs/snippets/dldt_optimization_guide7.cpp part7
+@snippet snippets/dldt_optimization_guide7.cpp part7
<br>For more information on the executable networks notation, see <a href="#new-request-based-api">Request-Based API and “GetBlob” Idiom</a>.
- In the regular way, the frame is captured with OpenCV and then immediately processed:<br>
-@snippet openvino/docs/snippets/dldt_optimization_guide8.cpp part8
+@snippet snippets/dldt_optimization_guide8.cpp part8
![Intel® VTune™ screenshot](../img/vtune_regular.png)
- In the "true" async mode, the `NEXT` request is populated in the main (application) thread, while the `CURRENT` request is processed:<br>
-@snippet openvino/docs/snippets/dldt_optimization_guide9.cpp part9
+@snippet snippets/dldt_optimization_guide9.cpp part9
![Intel® VTune™ screenshot](../img/vtune_async.png)
#endif
#ifndef INFERENCE_ENGINE_C_API_CALLBACK
-#define INFERENCE_ENGINE_C_API_CALLBACK
+ #define INFERENCE_ENGINE_C_API_CALLBACK
#endif
typedef struct ie_core ie_core_t;
 * @brief Represents API version information that reflects the set of supported features
*/
typedef struct ie_version {
- char *api_version;
-}ie_version_t;
+ char *api_version; //!< A string representing Inference Engine version
+} ie_version_t;
/**
* @struct ie_core_version
* @brief Represents version information that describes devices and the inference engine runtime library
*/
typedef struct ie_core_version {
- size_t major;
- size_t minor;
- const char *device_name;
- const char *build_number;
- const char *description;
-}ie_core_version_t;
+ size_t major; //!< A major version
+ size_t minor; //!< A minor version
+ const char *device_name; //!< A device name
+ const char *build_number; //!< A build number
+ const char *description; //!< A device description
+} ie_core_version_t;
/**
* @struct ie_core_versions
* @brief Represents all versions information that describes all devices and the inference engine runtime library
*/
typedef struct ie_core_versions {
- ie_core_version_t *versions;
- size_t num_vers;
-}ie_core_versions_t;
+ ie_core_version_t *versions; //!< An array of device versions
+ size_t num_vers; //!< A number of versions in the array
+} ie_core_versions_t;
/**
* @struct ie_config
* @brief Represents configuration information that describes devices
*/
typedef struct ie_config {
- const char *name;
- const char *value;
- struct ie_config *next;
-}ie_config_t;
+ const char *name; //!< A configuration key
+ const char *value; //!< A configuration value
+ struct ie_config *next; //!< A pointer to the next configuration value
+} ie_config_t;
/**
* @struct ie_param
*/
typedef struct ie_param {
union {
- char *params;
- unsigned int number;
- unsigned int range_for_async_infer_request[3];
- unsigned int range_for_streams[2];
+ char *params;
+ unsigned int number;
+ unsigned int range_for_async_infer_request[3];
+ unsigned int range_for_streams[2];
};
-}ie_param_t;
+} ie_param_t;
/**
* @struct ie_param_config
typedef struct ie_param_config {
char *name;
ie_param_t *param;
-}ie_param_config_t;
+} ie_param_config_t;
/**
* @struct desc
* @brief Represents detailed information for an error
*/
typedef struct desc {
- char msg[256];
-}desc_t;
+ char msg[256]; //!< A description message
+} desc_t;
/**
* @struct dimensions
* @brief Represents dimensions for input or output data
*/
typedef struct dimensions {
- size_t ranks;
- size_t dims[8];
-}dimensions_t;
+ size_t ranks; //!< A rank representing the number of dimensions
+ size_t dims[8]; //!< An array of dimensions
+} dimensions_t;
/**
* @enum layout_e
* @brief Layouts that the inference engine supports
*/
typedef enum {
- ANY = 0, // "any" layout
+ ANY = 0, //!< "ANY" layout
// I/O data layouts
- NCHW = 1,
- NHWC = 2,
- NCDHW = 3,
- NDHWC = 4,
+ NCHW = 1, //!< "NCHW" layout
+ NHWC = 2, //!< "NHWC" layout
+ NCDHW = 3, //!< "NCDHW" layout
+ NDHWC = 4, //!< "NDHWC" layout
// weight layouts
- OIHW = 64,
+ OIHW = 64, //!< "OIHW" layout
// Scalar
- SCALAR = 95,
+ SCALAR = 95, //!< "SCALAR" layout
// bias layouts
- C = 96,
+ C = 96, //!< "C" layout
// Single image layout (for mean image)
- CHW = 128,
+ CHW = 128, //!< "CHW" layout
// 2D
- HW = 192,
- NC = 193,
- CN = 194,
+ HW = 192, //!< "HW" layout
+ NC = 193, //!< "NC" layout
+ CN = 194, //!< "CN" layout
- BLOCKED = 200,
-}layout_e;
+ BLOCKED = 200, //!< "BLOCKED" layout
+} layout_e;
/**
* @enum precision_e
U32 = 74, /**< 32bit unsigned integer value */
BIN = 71, /**< 1bit integer value */
CUSTOM = 80 /**< custom precision has it's own name and size of elements */
-}precision_e;
+} precision_e;
/**
* @struct tensor_desc
layout_e layout;
dimensions_t dims;
precision_e precision;
-}tensor_desc_t;
+} tensor_desc_t;
/**
* @enum colorformat_e
* @brief Extra information about input color format for preprocessing
*/
typedef enum {
- RAW = 0u, ///< Plain blob (default), no extra color processing required
- RGB, ///< RGB color format
- BGR, ///< BGR color format, default in DLDT
- RGBX, ///< RGBX color format with X ignored during inference
- BGRX, ///< BGRX color format with X ignored during inference
- NV12, ///< NV12 color format represented as compound Y+UV blob
- I420, ///< I420 color format represented as compound Y+U+V blob
-}colorformat_e;
+ RAW = 0u, //!< Plain blob (default), no extra color processing required
+ RGB, //!< RGB color format
+ BGR, //!< BGR color format, default in DLDT
+ RGBX, //!< RGBX color format with X ignored during inference
+ BGRX, //!< BGRX color format with X ignored during inference
+ NV12, //!< NV12 color format represented as compound Y+UV blob
+ I420, //!< I420 color format represented as compound Y+U+V blob
+} colorformat_e;
/**
* @enum resize_alg_e
* @brief Represents the list of supported resize algorithms.
*/
typedef enum {
- NO_RESIZE = 0,
- RESIZE_BILINEAR,
- RESIZE_AREA
-}resize_alg_e;
+ NO_RESIZE = 0, //!< "No resize" mode
+ RESIZE_BILINEAR, //!< "Bilinear resize" mode
+ RESIZE_AREA //!< "Area resize" mode
+} resize_alg_e;
/**
* @enum IEStatusCode
NOT_ALLOCATED = -10,
INFER_NOT_STARTED = -11,
NETWORK_NOT_READ = -12
-}IEStatusCode;
+} IEStatusCode;
/**
* @struct roi_t
* @brief This structure describes roi data.
*/
typedef struct roi {
- size_t id; // ID of a roi
- size_t posX; // W upper left coordinate of roi
- size_t posY; // H upper left coordinate of roi
- size_t sizeX; // W size of roi
- size_t sizeY; // H size of roi
-}roi_t;
+ size_t id; //!< ID of a roi
+ size_t posX; //!< W upper left coordinate of roi
+ size_t posY; //!< H upper left coordinate of roi
+ size_t sizeX; //!< W size of roi
+ size_t sizeY; //!< H size of roi
+} roi_t;
/**
* @struct input_shape
typedef struct input_shape {
char *name;
dimensions_t shape;
-}input_shape_t;
+} input_shape_t;
/**
* @struct input_shapes
typedef struct input_shapes {
input_shape_t *shapes;
size_t shape_num;
-}input_shapes_t;
+} input_shapes_t;
/**
* @struct ie_blob_buffer
*/
typedef struct ie_blob_buffer {
union {
- void *buffer; // buffer can be written
- const void *cbuffer; // cbuffer is read-only
+ void *buffer; //!< buffer can be written
+ const void *cbuffer; //!< cbuffer is read-only
};
-}ie_blob_buffer_t;
+} ie_blob_buffer_t;
/**
* @struct ie_complete_call_back
typedef struct ie_complete_call_back {
void (INFERENCE_ENGINE_C_API_CALLBACK *completeCallBackFunc)(void *args);
void *args;
-}ie_complete_call_back_t;
+} ie_complete_call_back_t;
/**
* @struct ie_available_devices
typedef struct ie_available_devices {
char **devices;
size_t num_devices;
-}ie_available_devices_t;
+} ie_available_devices_t;
/**
* @brief Returns number of version that is exported. Use the ie_version_free() to free memory.
/**
* @brief Release the memory allocated by ie_param_t.
- * @param version A pointer to the ie_param_t to free memory.
+ * @param param A pointer to the ie_param_t to free memory.
*/
INFERENCE_ENGINE_C_API(void) ie_param_free(ie_param_t *param);
/**
* @brief Get name of network.
* @ingroup Network
+ * @param network A pointer to the instance of the ie_network_t to get a name from.
* @param name Name of the network.
* @return Status code of the operation: OK(0) for success.
*/
INFERENCE_ENGINE_C_API(IE_NODISCARD IEStatusCode) ie_network_set_input_layout(ie_network_t *network, const char *input_name, const layout_e l);
/**
- * @Gets dimensions/shape of the input data with reversed order.
+ * @brief Gets dimensions/shape of the input data with reversed order.
* @ingroup Network
* @param network A pointer to ie_network_t instance.
* @param input_name Name of input data.
* @ingroup Network
* @param network A pointer to ie_network_t instance.
* @param input_name Name of input data.
- * @parm resize_alg_result The pointer to the resize algorithm used for input blob creation.
+ * @param resize_alg_result The pointer to the resize algorithm used for input blob creation.
* @return Status code of the operation: OK(0) for success.
*/
-INFERENCE_ENGINE_C_API(IE_NODISCARD IEStatusCode) ie_network_get_input_resize_algorithm(const ie_network_t *network, const char *input_name, \
- resize_alg_e *resize_alg_result);
+INFERENCE_ENGINE_C_API(IE_NODISCARD IEStatusCode) ie_network_get_input_resize_algorithm(const ie_network_t *network, const char *input_name, resize_alg_e *resize_alg_result);
/**
* @brief Sets resize algorithm to be used during pre-processing
INFERENCE_ENGINE_C_API(IE_NODISCARD IEStatusCode) ie_blob_get_precision(const ie_blob_t *blob, precision_e *prec_result);
/**
- * @Releases the memory occupied by the ie_blob_t pointer.
+ * @brief Releases the memory occupied by the ie_blob_t pointer.
* @ingroup Blob
* @param blob A pointer to the blob pointer to release memory.
*/
-# nGraph Function Python* Sample {#openvino_inference_engine_samples_ngraph_function_creation_sample_README}
+# nGraph Function Python* Sample {#openvino_inference_engine_ie_bridges_python_samples_ngraph_function_creation_sample_README}
This sample demonstrates how to execute an inference using ngraph::Function to create a network. The sample uses the LeNet classifications network as an example.
* Wraps ICNNNetwork::setBatchSize
*
* @param size Size of batch to set
- * @return Status code of the operation
*/
virtual void setBatchSize(const size_t size) {
CALL_STATUS_FNC(setBatchSize, size);
/**
* constructs InferRequest from the initialized shared_pointer
* @param request Initialized shared pointer to IInferRequest interface
- * @param plg Plugin to use. This is required to ensure that InferRequest can work properly even if plugin object is destroyed.
+ * @param splg Plugin to use. This is required to ensure that InferRequest can work properly even if plugin object is destroyed.
*/
explicit InferRequest(IInferRequest::Ptr request,
InferenceEngine::details::SharedObjectLoader::Ptr splg = {}):
//
/**
- * @file
+ * @brief A header file that provides wrapper classes for IVariableState
+ *
+ * @file ie_memory_state.hpp
*/
#pragma once
public:
/**
- * constructs VariableState from the initialized shared_pointer
+ * @brief constructs VariableState from the initialized shared_pointer
* @param pState Initialized shared pointer
+ * @param plg Optional: Plugin to use. This is required to ensure that VariableState can work properly even if plugin object is destroyed.
*/
explicit VariableState(IVariableState::Ptr pState, details::SharedObjectLoader::Ptr plg = {}) : actual(pState), plugin(plg) {
if (actual == nullptr) {
* @copybrief IVariableState::GetState
*
* Wraps IVariableState::GetState
- * @return A blob representing a last state
+ * @return A blob representing a state
*/
Blob::CPtr GetState() const {
Blob::CPtr stateBlob;
return stateBlob;
}
- INFERENCE_ENGINE_DEPRECATED("Use GetState function instead")
+ /**
+ * @copybrief IVariableState::GetLastState
+ * @deprecated Use IVariableState::GetState instead
+ *
+ * Wraps IVariableState::GetLastState
+ * @return A blob representing a last state
+ */
+ INFERENCE_ENGINE_DEPRECATED("Use VariableState::GetState function instead")
Blob::CPtr GetLastState() const {
return GetState();
}
}
};
-/*
+/**
* @brief For compatibility reasons.
*/
using MemoryState = VariableState;
+
} // namespace InferenceEngine
namespace gpu {
/**
-* @brief This class represents an abstraction for GPU plugin remote context
-* which is shared with Direct3D 11 device.
-* The plugin object derived from this class can be obtained either with
-* GetContext() method of Executable network or using CreateContext() Core call.
-* @note User can also obtain OpenCL context handle from this class.
-*/
+ * @brief This class represents an abstraction for GPU plugin remote context
+ * which is shared with Direct3D 11 device.
+ * The plugin object derived from this class can be obtained either with
+ * GetContext() method of Executable network or using CreateContext() Core call.
+ * @note User can also obtain OpenCL context handle from this class.
+ */
class D3DContext : public ClContext {
public:
/**
- * @brief A smart pointer to the D3DContext object
- */
+ * @brief A smart pointer to the D3DContext object
+ */
using Ptr = std::shared_ptr<D3DContext>;
/**
};
/**
-* @brief This class represents an abstraction for GPU plugin remote blob
-* which is shared with Direct3D 11 buffer.
-* The plugin object derived from this class can be obtained with CreateBlob() call.
-* @note User can also obtain OpenCL buffer handle from this class.
-*/
+ * @brief This class represents an abstraction for GPU plugin remote blob
+ * which is shared with Direct3D 11 buffer.
+ * The plugin object derived from this class can be obtained with CreateBlob() call.
+ * @note User can also obtain OpenCL buffer handle from this class.
+ */
class D3DBufferBlob : public ClBufferBlob {
public:
/**
- * @brief A smart pointer to the D3DBufferBlob object
- */
+ * @brief A smart pointer to the D3DBufferBlob object
+ */
using Ptr = std::shared_ptr<D3DBufferBlob>;
/**
};
/**
-* @brief This class represents an abstraction for GPU plugin remote blob
-* which is shared with Direct3D 11 2D texture.
-* The plugin object derived from this class can be obtained with CreateBlob() call.
-* @note User can also obtain OpenCL 2D image handle from this class.
-*/
+ * @brief This class represents an abstraction for GPU plugin remote blob
+ * which is shared with Direct3D 11 2D texture.
+ * The plugin object derived from this class can be obtained with CreateBlob() call.
+ * @note User can also obtain OpenCL 2D image handle from this class.
+ */
class D3DSurface2DBlob : public ClImage2DBlob {
public:
/**
- * @brief A smart pointer to the D3DSurface2DBlob object
- */
+ * @brief A smart pointer to the D3DSurface2DBlob object
+ */
using Ptr = std::shared_ptr<D3DSurface2DBlob>;
/**
};
/**
-* @brief This function is used to obtain a NV12 compound blob object from NV12 DXGI video decoder output.
-* The resulting compound contains two remote blobs for Y and UV planes of the surface.
-*/
+ * @brief This function is used to obtain a NV12 compound blob object from NV12 DXGI video decoder output.
+ * The resulting compound contains two remote blobs for Y and UV planes of the surface.
+ * @param height Height of Y plane
+ * @param width Width of Y plane
+ * @param ctx A pointer to remote context
+ * @param nv12_surf An ID3D11Texture2D instance to create NV12 blob from
+ * @return NV12 remote blob
+ */
static inline Blob::Ptr make_shared_blob_nv12(size_t height, size_t width, RemoteContext::Ptr ctx, ID3D11Texture2D* nv12_surf) {
auto casted = std::dynamic_pointer_cast<D3DContext>(ctx);
if (nullptr == casted) {
}
/**
-* @brief This function is used to obtain remote context object from ID3D11Device
-*/
+ * @brief This function is used to obtain remote context object from ID3D11Device
+ * @param core Inference Engine Core object instance
+ * @param deviceName A name of the device to create a remote context for
+ * @param device A pointer to ID3D11Device to be used to create a remote context
+ * @return A shared remote context instance
+ */
static inline D3DContext::Ptr make_shared_context(Core& core, std::string deviceName, ID3D11Device* device) {
ParamMap contextParams = {
{ GPU_PARAM_KEY(CONTEXT_TYPE), GPU_PARAM_VALUE(VA_SHARED) },
}
/**
-* @brief This function is used to obtain remote blob object from ID3D11Buffer
-*/
+ * @brief This function is used to obtain remote blob object from ID3D11Buffer
+ * @param desc A tensor description which describes blob configuration
+ * @param ctx A shared pointer to a remote context
+ * @param buffer A pointer to an ID3D11Buffer instance to create a remote blob from
+ * @return A remote blob instance
+ */
static inline Blob::Ptr make_shared_blob(const TensorDesc& desc, RemoteContext::Ptr ctx, ID3D11Buffer* buffer) {
auto casted = std::dynamic_pointer_cast<D3DContext>(ctx);
if (nullptr == casted) {
}
/**
-* @brief This function is used to obtain remote blob object from ID3D11Texture2D
-* @param desc Tensor description
-* @param ctx the RemoteContext object whuch owns context for the blob to be created
-* @param surface Pointer to ID3D11Texture2D interface of the objects that owns NV12 texture
-* @param plane ID of the plane to be shared (0 or 1)
-* @return Smart pointer to created RemoteBlob object cast to base class
-* @note The underlying ID3D11Texture2D can also be a plane of output surface of DXGI video decoder
-*/
+ * @brief This function is used to obtain remote blob object from ID3D11Texture2D
+ * @param desc Tensor description
+ * @param ctx the RemoteContext object which owns context for the blob to be created
+ * @param surface Pointer to ID3D11Texture2D interface of the object that owns the NV12 texture
+ * @param plane ID of the plane to be shared (0 or 1)
+ * @return Smart pointer to created RemoteBlob object cast to base class
+ * @note The underlying ID3D11Texture2D can also be a plane of output surface of DXGI video decoder
+ */
static inline Blob::Ptr make_shared_blob(const TensorDesc& desc, RemoteContext::Ptr ctx, ID3D11Texture2D* surface, uint32_t plane = 0) {
auto casted = std::dynamic_pointer_cast<D3DContext>(ctx);
if (nullptr == casted) {
namespace gpu {
/**
-* @brief This class represents an abstraction for GPU plugin remote context
-* which is shared with OpenCL context object.
-* The plugin object derived from this class can be obtained either with
-* GetContext() method of Executable network or using CreateContext() Core call.
-*/
+ * @brief This class represents an abstraction for GPU plugin remote context
+ * which is shared with OpenCL context object.
+ * The plugin object derived from this class can be obtained either with
+ * GetContext() method of Executable network or using CreateContext() Core call.
+ */
class ClContext : public RemoteContext, public details::param_map_obj_getter {
public:
/**
- * @brief A smart pointer to the ClContext object
- */
+ * @brief A smart pointer to the ClContext object
+ */
using Ptr = std::shared_ptr<ClContext>;
/**
};
/**
-* @brief The basic class for all GPU plugin remote blob objects.
-* The OpenCL memory object handle (cl_mem) can be obtained from this class object.
-*/
+ * @brief The basic class for all GPU plugin remote blob objects.
+ * The OpenCL memory object handle (cl_mem) can be obtained from this class object.
+ */
class ClBlob : public RemoteBlob {
public:
/**
- * @brief A smart pointer to the ClBlob object
- */
+ * @brief A smart pointer to the ClBlob object
+ */
using Ptr = std::shared_ptr<ClBlob>;
/**
};
/**
-* @brief This class represents an abstraction for GPU plugin remote blob
-* which can be shared with user-supplied OpenCL buffer.
-* The plugin object derived from this class can be obtained with CreateBlob() call.
-* @note User can obtain OpenCL buffer handle from this class.
-*/
+ * @brief This class represents an abstraction for GPU plugin remote blob
+ * which can be shared with user-supplied OpenCL buffer.
+ * The plugin object derived from this class can be obtained with CreateBlob() call.
+ * @note User can obtain OpenCL buffer handle from this class.
+ */
class ClBufferBlob : public ClBlob, public details::param_map_obj_getter {
public:
/**
- * @brief A smart pointer to the ClBufferBlob object
- */
+ * @brief A smart pointer to the ClBufferBlob object
+ */
using Ptr = std::shared_ptr<ClBufferBlob>;
/**
};
/**
-* @brief This class represents an abstraction for GPU plugin remote blob
-* which can be shared with user-supplied OpenCL 2D Image.
-* The plugin object derived from this class can be obtained with CreateBlob() call.
-* @note User can obtain OpenCL image handle from this class.
-*/
+ * @brief This class represents an abstraction for GPU plugin remote blob
+ * which can be shared with user-supplied OpenCL 2D Image.
+ * The plugin object derived from this class can be obtained with CreateBlob() call.
+ * @note User can obtain OpenCL image handle from this class.
+ */
class ClImage2DBlob : public ClBlob, public details::param_map_obj_getter {
public:
/**
- * @brief A smart pointer to the ClImage2DBlob object
- */
+ * @brief A smart pointer to the ClImage2DBlob object
+ */
using Ptr = std::shared_ptr<ClImage2DBlob>;
/**
};
/**
-* @brief This function is used to construct a NV12 compound blob object from two cl::Image2D wrapper objects.
-* The resulting compound contains two remote blobs for Y and UV planes of the surface.
-* @param ctx RemoteContext plugin object derived from ClContext class.
-* @param nv12_image_plane_y cl::Image2D object containing Y plane data.
-* @param nv12_image_plane_uv cl::Image2D object containing UV plane data.
-* @return Pointer to plugin-specific context class object, which is derived from RemoteContext.
-*/
+ * @brief This function is used to construct a NV12 compound blob object from two cl::Image2D wrapper objects.
+ * The resulting compound contains two remote blobs for Y and UV planes of the surface.
+ * @param ctx RemoteContext plugin object derived from ClContext class.
+ * @param nv12_image_plane_y cl::Image2D object containing Y plane data.
+ * @param nv12_image_plane_uv cl::Image2D object containing UV plane data.
+ * @return A shared remote blob instance
+ */
static inline Blob::Ptr make_shared_blob_nv12(RemoteContext::Ptr ctx, cl::Image2D& nv12_image_plane_y, cl::Image2D& nv12_image_plane_uv) {
auto casted = std::dynamic_pointer_cast<ClContext>(ctx);
if (nullptr == casted) {
}
/**
-* @brief This function is used to obtain remote context object from user-supplied OpenCL context handle
-*/
+ * @brief This function is used to obtain remote context object from user-supplied OpenCL context handle
+ * @param core A reference to Inference Engine Core object
+ * @param deviceName A name of device to create a remote context for
+ * @param ctx An OpenCL context used to create the shared remote context
+ * @return A shared remote context instance
+ */
static inline RemoteContext::Ptr make_shared_context(Core& core, std::string deviceName, cl_context ctx) {
ParamMap contextParams = {
{ GPU_PARAM_KEY(CONTEXT_TYPE), GPU_PARAM_VALUE(OCL) },
}
/**
-* @brief This function is used to create remote blob object within default GPU plugin OpenCL context
-*/
+ * @brief This function is used to create remote blob object within default GPU plugin OpenCL context
+ * @param desc A tensor descriptor object representing remote blob configuration
+ * @param ctx A remote context used to create remote blob
+ * @return A remote blob instance
+ */
static inline Blob::Ptr make_shared_blob(const TensorDesc& desc, ClContext::Ptr ctx) {
return std::dynamic_pointer_cast<Blob>(ctx->CreateBlob(desc));
}
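The `make_shared_blob` overloads documented above all repeat one validation idiom: `dynamic_pointer_cast` the generic `RemoteContext::Ptr` down to the GPU-specific `ClContext` and fail loudly when the cast yields null. A minimal self-contained sketch of that idiom, using toy stand-in types (the `Toy*` names are illustrative, not the real Inference Engine classes):

```cpp
#include <memory>
#include <stdexcept>
#include <string>

// Toy stand-ins for RemoteContext / ClContext (illustrative names only).
struct ToyRemoteContext {
    virtual ~ToyRemoteContext() = default;  // polymorphic base for dynamic_pointer_cast
};
struct ToyClContext : ToyRemoteContext {
    std::string CreateBlobKind() const { return "cl_buffer"; }
};

// The validation pattern used by the make_shared_blob overloads: downcast the
// generic context and throw if the caller passed a context of the wrong type.
inline std::shared_ptr<ToyClContext> as_cl_context(const std::shared_ptr<ToyRemoteContext>& ctx) {
    auto casted = std::dynamic_pointer_cast<ToyClContext>(ctx);
    if (nullptr == casted) {
        throw std::runtime_error("Invalid remote context passed");
    }
    return casted;
}
```

A caller holding only the generic pointer gets either a usable derived context or an immediate, descriptive failure rather than a silent null dereference.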
/**
-* @brief This function is used to obtain remote blob object from user-supplied cl::Buffer wrapper object
-*/
+ * @brief This function is used to obtain remote blob object from user-supplied cl::Buffer wrapper object
+ * @param desc A tensor descriptor object representing remote blob configuration
+ * @param ctx A remote context used to create remote blob
+ * @param buffer A cl::Buffer object wrapped by a remote blob
+ * @return A remote blob instance
+ */
static inline Blob::Ptr make_shared_blob(const TensorDesc& desc, RemoteContext::Ptr ctx, cl::Buffer& buffer) {
auto casted = std::dynamic_pointer_cast<ClContext>(ctx);
if (nullptr == casted) {
}
/**
-* @brief This function is used to obtain remote blob object from user-supplied OpenCL buffer handle
-*/
+ * @brief This function is used to obtain remote blob object from user-supplied OpenCL buffer handle
+ * @param desc A tensor descriptor object representing remote blob configuration
+ * @param ctx A remote context used to create remote blob
+ * @param buffer A cl_mem object wrapped by a remote blob
+ * @return A remote blob instance
+ */
static inline Blob::Ptr make_shared_blob(const TensorDesc& desc, RemoteContext::Ptr ctx, cl_mem buffer) {
auto casted = std::dynamic_pointer_cast<ClContext>(ctx);
if (nullptr == casted) {
}
/**
-* @brief This function is used to obtain remote blob object from user-supplied cl::Image2D wrapper object
-*/
+ * @brief This function is used to obtain remote blob object from user-supplied cl::Image2D wrapper object
+ * @param desc A tensor descriptor object representing remote blob configuration
+ * @param ctx A remote context used to create remote blob
+ * @param image A cl::Image2D object wrapped by a remote blob
+ * @return A remote blob instance
+ */
static inline Blob::Ptr make_shared_blob(const TensorDesc& desc, RemoteContext::Ptr ctx, cl::Image2D& image) {
auto casted = std::dynamic_pointer_cast<ClContext>(ctx);
if (nullptr == casted) {
/**
* @brief Returns the tensor description
+ * @return A const reference to a tensor descriptor
*/
virtual const TensorDesc& getTensorDesc() const noexcept {
return tensorDesc;
/**
* @brief Returns the tensor description
+ * @return A reference to a tensor descriptor
*/
virtual TensorDesc& getTensorDesc() noexcept {
return tensorDesc;
* @brief By default, returns the total number of elements (a product of all the dims or 1 for scalar)
*
* Return value and its interpretation heavily depend on the blob type
+ *
+ * @return The total number of elements
*/
virtual size_t size() const noexcept {
if (tensorDesc.getLayout() == Layout::SCALAR) return 1;
/**
* @brief Returns the size of the current Blob in bytes.
+ * @return Blob's size in bytes
*/
virtual size_t byteSize() const noexcept {
return size() * element_size();
* @deprecated Cast to MemoryBlob and use its API instead.
* Blob class can represent compound blob, which do not refer to the only solid memory.
*
- * @brief Returns the number of bytes per element.
+ * @brief Provides the number of bytes per element.
*
* The overall Blob capacity is size() * element_size(). Abstract method.
+ *
+ * @return The number of bytes per element
*/
virtual size_t element_size() const noexcept = 0;
* @brief Releases previously allocated data.
*
* Abstract method.
+ *
+ * @return `true` if deallocation happens successfully, `false` otherwise.
*/
virtual bool deallocate() noexcept = 0;
*/
virtual void* getHandle() const noexcept = 0;
+ /// private
template <typename>
friend class TBlobProxy;
};
/**
* @brief Helper cast function to work with shared Blob objects
- *
+ * @param blob A blob to cast
* @return shared_ptr to the type T. Returned shared_ptr shares ownership of the object with the
* input Blob::Ptr
*/
/**
* @brief Helper cast function to work with shared Blob objects
- *
+ * @param blob A blob to cast
* @return shared_ptr to the type const T. Returned shared_ptr shares ownership of the object with
* the input Blob::Ptr
*/
/**
* @brief Returns the total number of elements, which is a product of all the dimensions
+ * @return The total number of elements
*/
size_t size() const noexcept override {
if (tensorDesc.getLayout() == Layout::SCALAR) return 1;
*/
void* getHandle() const noexcept override = 0;
+ /// private
template <typename>
friend class TBlobProxy;
};
return _handle.get();
}
+ /**
+ * @brief Creates a blob from the existing blob with a given ROI
+ * @param origBlob An original blob
+ * @param roi A ROI object
+ */
TBlob(const TBlob& origBlob, const ROI& roi) :
MemoryBlob(make_roi_desc(origBlob.getTensorDesc(), roi, true)),
_allocator(origBlob._allocator) {
BLOCKED = 200, //!< A blocked layout
};
+
+/**
+ * @brief Prints a string representation of InferenceEngine::Layout to a stream
+ * @param out An output stream to send to
+ * @param p A layout value to print to a stream
+ * @return A reference to the `out` stream
+ */
inline std::ostream& operator<<(std::ostream& out, const Layout& p) {
switch (p) {
#define PRINT_LAYOUT(name) \
NV12, ///< NV12 color format represented as compound Y+UV blob
I420, ///< I420 color format represented as compound Y+U+V blob
};
+
+/**
+ * @brief Prints a string representation of InferenceEngine::ColorFormat to a stream
+ * @param out An output stream to send to
+ * @param fmt A color format value to print to a stream
+ * @return A reference to the `out` stream
+ */
inline std::ostream& operator<<(std::ostream& out, const ColorFormat& fmt) {
switch (fmt) {
#define PRINT_COLOR_FORMAT(name) \
char msg[4096] = {};
};
-
/**
* @brief Response structure encapsulating information about supported layer
*/
class InferNotStarted : public std::logic_error {
using std::logic_error::logic_error;
};
-} // namespace InferenceEngine
/** @brief This class represents StatusCode::NETWORK_NOT_READ exception */
class NetworkNotRead : public std::logic_error {
using std::logic_error::logic_error;
};
+} // namespace InferenceEngine
+
#if defined(_WIN32)
#define __PRETTY_FUNCTION__ __FUNCSIG__
#else
explicit CompoundBlob(std::vector<Blob::Ptr>&& blobs);
/**
- * @brief Always returns 0
+ * @brief Always returns `0`
+ * @return `0`
*/
size_t byteSize() const noexcept override;
/**
- * @brief Always returns 0
+ * @brief Always returns `0`
+ * @return `0`
*/
size_t element_size() const noexcept override;
/**
* @brief No operation is performed. Compound blob does not allocate/deallocate any data
- * @return false
+ * @return `false`
*/
bool deallocate() noexcept override;
* This method need to be called to find output names for using them later
* when calling InferenceEngine::InferRequest::GetBlob or InferenceEngine::InferRequest::SetBlob
*
- * @param out Reference to the ::ConstOutputsDataMap object
+ * @param out Reference to the InferenceEngine::ConstOutputsDataMap object
* @param resp Optional: pointer to an already allocated object to contain information in case of failure
* @return Status code of the operation: InferenceEngine::OK (0) for success
*/
/**
* @brief Gets the executable network input Data node information.
*
- * The received info is stored in the given ::ConstInputsDataMap object.
+ * The received info is stored in the given InferenceEngine::ConstInputsDataMap object.
* This method need to be called to find out input names for using them later
* when calling InferenceEngine::InferRequest::SetBlob
*
- * @param inputs Reference to ::ConstInputsDataMap object.
+ * @param inputs Reference to InferenceEngine::ConstInputsDataMap object.
* @param resp Optional: pointer to an already allocated object to contain information in case of failure
* @return Status code of the operation: InferenceEngine::OK (0) for success
*/
/**
* @interface IVariableState
- * @brief manages data for reset operations
+ * @brief Manages data for reset operations
*/
class IVariableState : public details::no_copy {
public:
using Ptr = std::shared_ptr<IVariableState>;
/**
- * @brief Gets name of current memory state, if length of array is not enough name is truncated by len, null
- * terminator is inserted as well. As memory state name variable_id from according ReadValue used.
+ * @brief Gets the name of the current variable state. If the array length is not enough, the name is truncated
+ * by `len` and a null terminator is inserted. The `variable_id` from the corresponding `ReadValue` is used as
+ * the variable state name.
*
* @param name preallocated buffer for receiving name
* @param len Length of the buffer
virtual StatusCode GetName(char* name, size_t len, ResponseDesc* resp) const noexcept = 0;
/**
- * @brief Reset internal memory state for relevant infer request, to a value specified as default for according ReadValue node
+ * @brief Resets the internal variable state for the relevant infer request to a value specified as default for the corresponding ReadValue node
*
* @param resp Optional: pointer to an already allocated object to contain information in case of failure
* @return Status code of the operation: InferenceEngine::OK (0) for success*
*
* This method can fail if Blob size does not match the internal state size or precision
*
- * @param newState is the data to use as new state
+ * @param newState The data to use as new state
* @param resp Optional: pointer to an already allocated object to contain information in case of failure
* @return Status code of the operation: InferenceEngine::OK (0) for success
*/
virtual StatusCode SetState(Blob::Ptr newState, ResponseDesc* resp) noexcept = 0;
/**
- * @brief Returns the value of the memory state.
+ * @brief Returns the value of the variable state.
*
- * @param lastState
+ * @param state A reference to a blob containing a variable state
* @param resp Optional: pointer to an already allocated object to contain information in case of failure
* @return Status code of the operation: InferenceEngine::OK (0) for success
- * */
+ */
INFERENCE_ENGINE_DEPRECATED("Use GetState function instead")
- virtual StatusCode GetLastState(Blob::CPtr& state, ResponseDesc* resp) const noexcept {return GetState(state, resp);}
+ virtual StatusCode GetLastState(Blob::CPtr& state, ResponseDesc* resp) const noexcept {
+ return GetState(state, resp);
+ }
+
+ /**
+ * @brief Returns the value of the variable state.
+ *
+ * @param state A reference to a blob containing a variable state
+ * @param resp Optional: pointer to an already allocated object to contain information in case of failure
+ * @return Status code of the operation: InferenceEngine::OK (0) for success
+ */
virtual StatusCode GetState(Blob::CPtr& state, ResponseDesc* resp) const noexcept = 0;
};
-/*
+/**
* @brief For compatibility reasons.
*/
using IMemoryState = IVariableState;
+
} // namespace InferenceEngine
\ No newline at end of file
/**
* @brief Returns the tensor descriptor
+ * @return A const reference to a tensor descriptor
*/
const TensorDesc& getTensorDesc() const {
if (!_inputData) {
bool operator!=(const BlockingDesc& rhs) const;
protected:
+ /**
+ * @brief Fills tensor descriptor based on blocking dimensions and specific order
+ * @param blocked_dims A vector representing blocking dimensions
+ * @param order A vector with specific dims order
+ */
void fillDesc(const SizeVector& blocked_dims, const SizeVector& order);
private:
ROI() = default;
+ /**
+ * @brief Creates a ROI object with the given parameters
+ * @param id ID of a ROI (offset over batch dimension)
+ * @param posX W upper left coordinate of ROI
+ * @param posY H upper left coordinate of ROI
+ * @param sizeX W size of ROI
+ * @param sizeY H size of ROI
+ */
ROI(size_t id, size_t posX, size_t posY, size_t sizeX, size_t sizeY) :
id(id), posX(posX), posY(posY), sizeX(sizeX), sizeY(sizeY) {
}
/**
* @brief Compares stored object with the given one
* @param pointer A pointer to compare with.
- * @return true if objects are equal, false otherwise
+ * @return `true` if objects are equal, `false` otherwise
*/
bool operator==(const T* pointer) const {
// special case with nullptr
/**
* @brief Compares the object with the one stored in the memory.
- *
- * @return true if objects are equal, false otherwise
+ * @param pointer A pointer to compare with
+ * @param lm A compared LockedMemory object
+ * @return `true` if objects are equal, `false` otherwise
*/
friend bool operator==(const T* pointer, const LockedMemory<T>& lm) {
return lm.operator==(pointer);
/**
* @brief Compares stored object with the given one
- *
- * @return true if objects are equal, false otherwise
+ * @param pointer A pointer to compare with
+ * @return `true` if objects are equal, `false` otherwise
*/
bool operator==(const void* pointer) const {
// special case with nullptr
/**
* @brief Compares the object with the one stored in the memory
- *
- * @return true if objects are equal, false otherwise
+ * @param pointer A pointer to compare with
+ * @param lm A compared LockedMemory object
+ * @return `true` if objects are equal, `false` otherwise
*/
friend bool operator==(const void* pointer, const LockedMemory<void>& lm) {
return lm.operator==(pointer);
/**
* @brief Compares stored object with the given one
- *
- * @return true if objects are equal, false otherwise
+ * @param pointer A pointer to compare with
+ * @return `true` if objects are equal, `false` otherwise
*/
bool operator==(const T* pointer) const {
// special case with nullptr
/**
* @brief Compares the object with the one stored in the memory
- *
- * @return true if objects are equal, false otherwise
+ * @param pointer A pointer to compare with
+ * @param lm A compared LockedMemory object
+ * @return `true` if objects are equal, `false` otherwise
*/
friend bool operator==(const T* pointer, const LockedMemory<const T>& lm) {
return lm.operator==(pointer);
/** @brief Default constructor */
Precision() = default;
- /** @brief Constructor with specified precision */
+ /**
+ * @brief Constructor with specified precision
+ * @param value A value of ePrecision to create an object from
+ */
Precision(const Precision::ePrecision value) { // NOLINT
precisionInfo = getPrecisionInfo(value);
}
* @brief Custom precision constructor
*
* @param bitsSize size of elements
- * @param name optional name string, used in serialisation
+ * @param name optional: name string, used in serialisation
*/
explicit Precision(size_t bitsSize, const char* name = nullptr) {
if (bitsSize == 0) {
}
}
- /** @brief Equality operator with Precision object */
+ /**
+ * @brief Equality operator with Precision object
+ * @param p A value of Precision to compare with
+ * @return `true` if values represent the same precisions, `false` otherwise
+ */
bool operator==(const Precision& p) const noexcept {
return precisionInfo.value == p && precisionInfo.bitsSize == p.precisionInfo.bitsSize &&
areSameStrings(precisionInfo.name, p.precisionInfo.name);
}
- /** @brief Equality operator with ePrecision enum value */
+ /**
+ * @brief Equality operator with ePrecision enum value
+ * @param p A value of ePrecision to compare with
+ * @return `true` if values represent the same precisions, `false` otherwise
+ */
bool operator==(const ePrecision p) const noexcept {
return precisionInfo.value == p;
}
- /** @brief Inequality operator with ePrecision enum value */
+ /**
+ * @brief Inequality operator with ePrecision enum value
+ * @param p A value of ePrecision to compare with
+ * @return `true` if values represent different precisions, `false` otherwise
+ */
bool operator!=(const ePrecision p) const noexcept {
return precisionInfo.value != p;
}
- /** @brief Assignment operator with ePrecision enum value */
+ /**
+ * @brief Assignment operator with ePrecision enum value
+ * @param p A value of ePrecision enumeration
+ * @return A reference to the modified Precision object
+ */
Precision& operator=(const ePrecision p) noexcept {
precisionInfo = getPrecisionInfo(p);
return *this;
}
- /** @brief Cast operator to a bool */
+ /**
+ * @brief Cast operator to a bool
+ * @return `true` if precision is specified, `false` otherwise
+ */
explicit operator bool() const noexcept {
return precisionInfo.value != UNSPECIFIED;
}
- /** @brief Logical negation operator */
+ /**
+ * @brief Logical negation operator
+ * @return `true` if precision is NOT specified, `false` otherwise
+ */
bool operator!() const noexcept {
return precisionInfo.value == UNSPECIFIED;
}
- /** @brief Cast operator to a ePrecision */
+ /**
+ * @brief Cast operator to an ePrecision
+ * @return A value of the Precision::ePrecision enumeration
+ */
operator Precision::ePrecision() const noexcept {
return precisionInfo.value;
}
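The two conversion operators documented above work as a pair: `explicit operator bool()` answers "is a precision specified at all", while the implicit `operator ePrecision()` lets the object be passed wherever the enum is expected. A toy sketch of the same pattern, with illustrative names (not the real Precision class):

```cpp
// Illustrative enum + wrapper mimicking Precision's conversion operators.
enum class ToyPrecisionKind { UNSPECIFIED, FP32, I8 };

class ToyPrecision {
public:
    ToyPrecision() = default;
    ToyPrecision(ToyPrecisionKind v) : value(v) {}  // implicit by design, as in Precision

    // explicit: `if (p)` reads as "precision is specified", but no accidental
    // integer conversions are allowed.
    explicit operator bool() const noexcept { return value != ToyPrecisionKind::UNSPECIFIED; }
    bool operator!() const noexcept { return value == ToyPrecisionKind::UNSPECIFIED; }

    // implicit conversion back to the enum, mirroring operator ePrecision()
    operator ToyPrecisionKind() const noexcept { return value; }

private:
    ToyPrecisionKind value = ToyPrecisionKind::UNSPECIFIED;
};
```

Making the bool conversion `explicit` while keeping the enum conversion implicit is the usual trade-off here: comparisons against the enum stay ergonomic, yet the object cannot silently decay to `true`/`false` in arithmetic contexts.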
return precisionInfo.value;
}
- /** @brief Getter of precision name */
+ /**
+ * @brief Getter of precision name
+ * @return A string representing precision name
+ */
const char* name() const noexcept {
return precisionInfo.name;
}
- /** @brief Creates from string with precision name */
+ /**
+ * @brief Creates Precision from string with precision name
+ * @param str A string representing precision
+ * @return Precision created from string representation
+ */
static Precision FromStr(const std::string& str) {
static std::unordered_map<std::string, ePrecision> names = {
#define PRECISION_NAME(s) {#s, s}
}
/**
- * @brief Return PrecisionInfo
+ * @brief Creates PrecisionInfo based on ePrecision
+ * @param v A value of ePrecision enumeration
+ * @return Precision info object
*/
static PrecisionInfo getPrecisionInfo(ePrecision v) {
#define CASE(x) \
/**
* @deprecated Use OS-native conversion utilities
* @brief Conversion from possibly-wide character string to a single-byte chain.
+ * @param str A possibly-wide character string
+ * @return A single-byte character string
*/
INFERENCE_ENGINE_DEPRECATED("Use OS-native conversion utilities")
inline std::string fileNameToString(const file_name_t& str) {
/**
* @deprecated Use OS-native conversion utilities
* @brief Conversion from single-byte character string to a possibly-wide one
+ * @param str A single-byte character string
+ * @return A possibly-wide character string
*/
INFERENCE_ENGINE_DEPRECATED("Use OS-native conversion utilities")
inline file_name_t stringToFileName(const std::string& str) {
* @brief An API version reflects the set of supported features
*/
struct {
- int major;
- int minor;
+ int major; //!< A major version
+ int minor; //!< A minor version
} apiVersion;
/**
* @brief A null terminated string with build number
#include <vector>
#include <tuple>
#include <cpp_interfaces/interface/ie_iplugin_internal.hpp>
-#include "cpp_interfaces/impl/ie_memory_state_internal.hpp"
+#include "cpp_interfaces/impl/ie_variable_state_internal.hpp"
#include "descriptions/gna_flags.hpp"
#include "descriptions/gna_input_desc.hpp"
#include "descriptions/gna_output_desc.hpp"
#include <memory>
#include <utility>
-#include <cpp_interfaces/impl/ie_memory_state_internal.hpp>
+#include <cpp_interfaces/impl/ie_variable_state_internal.hpp>
#include "gna_plugin.hpp"
namespace GNAPluginNS {
#pragma once
-#include "cpp_interfaces/impl/ie_memory_state_internal.hpp"
+#include "cpp_interfaces/impl/ie_variable_state_internal.hpp"
#include "mkldnn_memory.h"
#include <string>
//
/**
- * @file A header file with caseless containers
+ * @file caseless.hpp
+ * @brief A header file with caseless containers
*/
#pragma once
#pragma once
#include <cpp/ie_executable_network.hpp>
-#include <cpp_interfaces/base/ie_memory_state_base.hpp>
-#include <cpp_interfaces/interface/ie_imemory_state_internal.hpp>
+#include <cpp_interfaces/base/ie_variable_state_base.hpp>
+#include <cpp_interfaces/interface/ie_ivariable_state_internal.hpp>
#include <map>
#include <memory>
#include <string>
public:
/**
* @brief Constructor with actual underlying implementation.
- * @param impl Underplying implementation of type IExecutableNetworkInternal
+ * @param impl Underlying implementation of type IExecutableNetworkInternal
*/
explicit ExecutableNetworkBase(std::shared_ptr<T> impl) {
if (impl.get() == nullptr) {
~ExecutableNetworkBase() = default;
};
+/**
+ * @brief Creates an executable network public C++ object wrapper based on an internal implementation
+ * @ingroup ie_dev_api_exec_network_api
+ * @param impl An internal implementation for executable network
+ * @tparam T A type of internal implementation
+ * @return C++ wrapper for executable network
+ */
template <class T>
inline typename InferenceEngine::ExecutableNetwork make_executable_network(std::shared_ptr<T> impl) {
typename ExecutableNetworkBase<T>::Ptr net(new ExecutableNetworkBase<T>(impl), [](IExecutableNetwork* p) {
#include "cpp_interfaces/exception2status.hpp"
#include "cpp_interfaces/plugin_itt.hpp"
-#include <cpp_interfaces/base/ie_memory_state_base.hpp>
+#include <cpp_interfaces/base/ie_variable_state_base.hpp>
#include "ie_iinfer_request.hpp"
#include "ie_preprocess.hpp"
#include "ie_profiling.hpp"
public:
/**
* @brief Constructor with actual underlying implementation.
- * @param impl Underplying implementation of type IAsyncInferRequestInternal
+ * @param impl Underlying implementation of type IAsyncInferRequestInternal
*/
explicit InferRequestBase(std::shared_ptr<T> impl): _impl(impl) {}
#include <memory>
#include "cpp_interfaces/exception2status.hpp"
-#include "cpp_interfaces/impl/ie_memory_state_internal.hpp"
+#include "cpp_interfaces/impl/ie_variable_state_internal.hpp"
#include "ie_imemory_state.hpp"
namespace InferenceEngine {
/**
- * @brief default implementation for IVariableState
- * @ingroup ie_dev_api_mem_state_api
+ * @brief Default implementation for IVariableState
+ * @tparam T Minimal CPP implementation of IVariableStateInternal (e.g. VariableStateInternal)
+ * @ingroup ie_dev_api_variable_state_api
*/
template <class T>
class VariableStateBase : public IVariableState {
-protected:
std::shared_ptr<T> impl;
public:
+ /**
+ * @brief Constructor with actual underlying implementation.
+ * @param impl Underlying implementation of type IVariableStateInternal
+ */
explicit VariableStateBase(std::shared_ptr<T> impl): impl(impl) {
if (impl == nullptr) {
- THROW_IE_EXCEPTION << "VariableStateBase implementation not defined";
+ THROW_IE_EXCEPTION << "VariableStateBase implementation is not defined";
}
}
}
protected:
+ /**
+ * @brief Creates an asynchronous inference request from the synchronous request returned by CreateInferRequestImpl
+ * @tparam AsyncInferRequestType A type of asynchronous inference request used to wrap the synchronous request
+ * @return A shared pointer to an asynchronous inference request
+ */
template <typename AsyncInferRequestType = AsyncInferRequestThreadSafeDefault>
IInferRequest::Ptr CreateAsyncInferRequestFromSync() {
IInferRequest::Ptr asyncRequest;
+++ /dev/null
-// Copyright (C) 2018-2020 Intel Corporation
-// SPDX-License-Identifier: Apache-2.0
-//
-
-#pragma once
-
-#include <cpp_interfaces/interface/ie_imemory_state_internal.hpp>
-#include <string>
-
-namespace InferenceEngine {
-
-/**
- * @brief minimal interface for memory state implementation
- * @ingroup ie_dev_api_mem_state_api
- */
-class VariableStateInternal : public IVariableStateInternal {
- std::string name;
- Blob::Ptr state;
-
-public:
- explicit VariableStateInternal(std::string name): name(name) {}
- std::string GetName() const override {
- return name;
- }
- void SetState(Blob::Ptr newState) override {
- state = newState;
- }
- Blob::CPtr GetState() const override {
- return state;
- }
-};
-
-/*
- * @brief For compatibility reasons.
- */
-using MemoryStateInternal = VariableStateInternal;
-} // namespace InferenceEngine
--- /dev/null
+// Copyright (C) 2018-2020 Intel Corporation
+// SPDX-License-Identifier: Apache-2.0
+//
+
+#pragma once
+
+#include <cpp_interfaces/interface/ie_ivariable_state_internal.hpp>
+#include <string>
+
+namespace InferenceEngine {
+
+/**
+ * @brief Minimal interface for variable state implementation
+ * @ingroup ie_dev_api_variable_state_api
+ */
+class VariableStateInternal : public IVariableStateInternal {
+ std::string name;
+ Blob::Ptr state;
+
+public:
+ /**
+ * @brief Constructs a variable state with a given name
+ * @param name A name of variable state
+ */
+ explicit VariableStateInternal(std::string name) : name(name) {}
+
+ /**
+ * @brief Gets a variable state name
+ * @return A string representing variable state name
+ */
+ std::string GetName() const override {
+ return name;
+ }
+
+ /**
+ * @brief Sets the new state for the next inference
+ * @param newState A new state
+ */
+ void SetState(Blob::Ptr newState) override {
+ state = newState;
+ }
+
+ /**
+ * @brief Returns the value of the variable state.
+ * @return The value of the variable state
+ */
+ Blob::CPtr GetState() const override {
+ return state;
+ }
+};
+
+/**
+ * @brief For compatibility reasons.
+ */
+using MemoryStateInternal = VariableStateInternal;
+
+} // namespace InferenceEngine
#pragma once
-#include <cpp_interfaces/interface/ie_imemory_state_internal.hpp>
+#include <cpp_interfaces/interface/ie_ivariable_state_internal.hpp>
#include <ie_iinfer_request.hpp>
#include <ie_parameter.hpp>
#include <map>
#pragma once
-#include <cpp_interfaces/interface/ie_imemory_state_internal.hpp>
+#include <cpp_interfaces/interface/ie_ivariable_state_internal.hpp>
#include <ie_blob.h>
#include <ie_common.h>
#include <ie_preprocess.hpp>
+++ /dev/null
-// Copyright (C) 2018-2020 Intel Corporation
-// SPDX-License-Identifier: Apache-2.0
-//
-
-#pragma once
-
-#include <ie_blob.h>
-
-#include <memory>
-#include <string>
-
-namespace InferenceEngine {
-/**
- * @interface IVariableStateInternal
- * @brief minimal interface for memory state implementation
- * @ingroup ie_dev_api_mem_state_api
- */
-class IVariableStateInternal {
-public:
- using Ptr = std::shared_ptr<IVariableStateInternal>;
-
- virtual ~IVariableStateInternal() = default;
- virtual std::string GetName() const = 0;
- virtual void Reset() = 0;
- virtual void SetState(Blob::Ptr newState) = 0;
- virtual Blob::CPtr GetState() const = 0;
- INFERENCE_ENGINE_DEPRECATED("Use GetState function instead")
- virtual Blob::CPtr GetLastState() const {return GetState();}
-};
-
-/*
- * @brief For compatibility reasons.
- */
-using IMemoryStateInternal = IVariableStateInternal;
-} // namespace InferenceEngine
--- /dev/null
+// Copyright (C) 2018-2020 Intel Corporation
+// SPDX-License-Identifier: Apache-2.0
+//
+
+#pragma once
+
+#include <ie_blob.h>
+
+#include <memory>
+#include <string>
+
+namespace InferenceEngine {
+
+/**
+ * @interface IVariableStateInternal
+ * @brief Minimal interface for variable state implementation
+ * @ingroup ie_dev_api_variable_state_api
+ */
+class IVariableStateInternal {
+public:
+ /**
+ * @brief A shared pointer to a IVariableStateInternal interface
+ */
+ using Ptr = std::shared_ptr<IVariableStateInternal>;
+
+ /**
+ * @brief A default virtual dtor
+ */
+ virtual ~IVariableStateInternal() = default;
+
+ /**
+ * @brief Gets a variable state name
+ * @return A string representing variable state name
+ */
+ virtual std::string GetName() const = 0;
+
+ /**
+ * @brief Resets the internal variable state for the relevant infer request to a value specified
+ * as default for the corresponding `ReadValue` node
+ */
+ virtual void Reset() = 0;
+
+ /**
+ * @brief Sets the new state for the next inference
+ * @param newState A new state
+ */
+ virtual void SetState(Blob::Ptr newState) = 0;
+
+ /**
+ * @brief Returns the value of the variable state.
+ * @return The value of the variable state
+ */
+ virtual Blob::CPtr GetState() const = 0;
+
+ /**
+ * @deprecated Use IVariableStateInternal::GetState method instead
+ * @brief Returns the value of the variable state.
+ * @return The value of the variable state
+ */
+ INFERENCE_ENGINE_DEPRECATED("Use IVariableStateInternal::GetState method instead")
+ virtual Blob::CPtr GetLastState() const {
+ return GetState();
+ }
+};
+
+/**
+ * @brief For compatibility reasons.
+ */
+using IMemoryStateInternal = IVariableStateInternal;
+
+} // namespace InferenceEngine
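The shape of the new `IVariableStateInternal` interface (pure virtuals plus a deprecated `GetLastState` that forwards to `GetState`) can be mimicked in a self-contained toy, with `std::shared_ptr<std::string>` standing in for `Blob::Ptr`; all names here are illustrative, not the real API:

```cpp
#include <memory>
#include <string>
#include <utility>

using ToyState = std::shared_ptr<std::string>;  // stand-in for Blob::Ptr

// Minimal interface mirroring the IVariableStateInternal pattern.
class IToyVariableState {
public:
    virtual ~IToyVariableState() = default;
    virtual std::string GetName() const = 0;
    virtual void SetState(ToyState newState) = 0;
    virtual ToyState GetState() const = 0;
    // Deprecated spelling kept for compatibility; forwards to GetState().
    virtual ToyState GetLastState() const { return GetState(); }
};

// Minimal default implementation, like VariableStateInternal.
class ToyVariableState : public IToyVariableState {
    std::string name;
    ToyState state;
public:
    explicit ToyVariableState(std::string n) : name(std::move(n)) {}
    std::string GetName() const override { return name; }
    void SetState(ToyState newState) override { state = newState; }
    ToyState GetState() const override { return state; }
};
```

Keeping the deprecated virtual non-pure with a forwarding default body is what lets old callers of the legacy name keep working while new implementations only override `GetState`.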
return cloned;
}
+ /**
+ * @brief Visits attributes of the node
+ *
+ * @param[in] visitor An attribute visitor
+ *
+ * @return `true` if the operation has completed successfully
+ */
bool visit_attributes(ngraph::AttributeVisitor& visitor) override {
return true;
}
__VA_ARGS__; \
return _##name##_value
-/**
- * @def IE_SET_METRIC(name, ...)
- * @ingroup ie_dev_api
- * @brief Set metric with specified @p name and arguments `...`. Example:
- * @code
- * Parameter result = IE_SET_METRIC(SUPPORTED_METRICS, {
- METRIC_KEY(OPTIMAL_NUMBER_OF_INFER_REQUESTS),
- METRIC_KEY(SUPPORTED_METRICS),
- METRIC_KEY(NETWORK_NAME),
- METRIC_KEY(SUPPORTED_CONFIG_KEYS)
- });
- * @endcode
- *
- * @param name The metric name
- * @param ... A metric value
- *
- * @return A metric value wrapped with Parameter. Must be used as a left-side argument to assignment operator.
- */
-#define IE_SET_METRIC(name, ...) \
- [&] { \
- IE_SET_METRIC_RETURN(name, __VA_ARGS__); \
- }()
-
#include "ie_plugin_config.hpp"
* @defgroup ie_dev_api_async_infer_request_api Asynchronous Inference Request base classes
* @brief A set of base and helper classes to implement asynchronous inference request class
*
- * @defgroup ie_dev_api_mem_state_api Memory state base classes
- * @brief A set of base and helper classes to implement memory state
+ * @defgroup ie_dev_api_variable_state_api Variable state base classes
+ * @brief A set of base and helper classes to implement variable state
*
* @defgroup ie_dev_api_threading Threading utilities
* @brief Threading API providing task executors for asynchronous operations
* @ingroup ie_dev_api_precision
*
* @param value Value to be converted
+ * @return A saturated value
*/
template <class OutT, class InT, typename std::enable_if<
std::is_integral<OutT>::value && std::is_integral<InT>::value &&
* @ingroup ie_dev_api_precision
*
* @param value Value to be converted
+ * @return A saturated value
*/
template <class OutT, class InT, typename std::enable_if<
std::is_integral<OutT>::value && std::is_integral<InT>::value &&
* @ingroup ie_dev_api_precision
*
* @param value Value to be converted
+ * @return A saturated value
*/
template <class InT>
inline InT saturate_cast(const InT& value) {
#include <ngraph/pass/graph_rewrite.hpp>
/**
- * @defgroup ie_transformation_api Inference Engine Transformation API
- * @brief Defines Inference Engine Transformations API which is used to transform ngraph::Function
- *
- * @{
- * @defgroup ie_runtime_attr_api Runtime information
- * @brief A mechanism of runtime information extension
- *
- * @defgroup ie_transformation_common_api Common optimization passes
- * @brief A set of common optimization passes
- *
- * @defgroup ie_transformation_to_opset2_api Conversion from opset3 to opset2
- * @brief A set of conversion downgrade passes from opset3 to opset2
- *
- * @defgroup ie_transformation_to_opset1_api Conversion from opset2 to opset1
- * @brief A set of conversion downgrade passes from opset2 to opset1
- * @}
- */
-
-/**
* @brief ngraph namespace
*/
namespace ngraph {
#include "ngraph/visibility.hpp"
+/**
+ * @file transformations_visibility.hpp
+ * @brief Defines visibility settings for Inference Engine Transformations library
+ */
+
+/**
+ * @defgroup ie_transformation_api Inference Engine Transformation API
+ * @brief Defines Inference Engine Transformations API which is used to transform ngraph::Function
+ *
+ * @{
+ * @defgroup ie_runtime_attr_api Runtime information
+ * @brief A mechanism of runtime information extension
+ *
+ * @defgroup ie_transformation_common_api Common optimization passes
+ * @brief A set of common optimization passes
+ *
+ * @defgroup ie_transformation_to_opset2_api Conversion from opset3 to opset2
+ * @brief A set of conversion downgrade passes from opset3 to opset2
+ *
+ * @defgroup ie_transformation_to_opset1_api Conversion from opset2 to opset1
+ * @brief A set of conversion downgrade passes from opset2 to opset1
+ * @}
+ */
+
#ifdef inference_engine_transformations_EXPORTS
#define TRANSFORMATIONS_API NGRAPH_HELPER_DLL_EXPORT
#else
#include "unit_test_utils/mocks/mock_allocator.hpp"
#include "unit_test_utils/mocks/mock_icnn_network.hpp"
-#include "unit_test_utils/mocks/mock_ie_imemory_state.hpp"
+#include "unit_test_utils/mocks/mock_ie_ivariable_state.hpp"
#include "unit_test_utils/mocks/mock_iexecutable_network.hpp"
#include "unit_test_utils/mocks/mock_iinfer_request.hpp"
#include "unit_test_utils/mocks/mock_not_empty_icnn_network.hpp"
#include "unit_test_utils/mocks/cpp_interfaces/interface/mock_iasync_infer_request_internal.hpp"
#include "unit_test_utils/mocks/cpp_interfaces/interface/mock_iexecutable_network_internal.hpp"
#include "unit_test_utils/mocks/cpp_interfaces/interface/mock_iinfer_request_internal.hpp"
-#include "unit_test_utils/mocks/cpp_interfaces/interface/mock_imemory_state_internal.hpp"
+#include "unit_test_utils/mocks/cpp_interfaces/interface/mock_ivariable_state_internal.hpp"
#include "unit_test_utils/mocks/cpp_interfaces/interface/mock_iinference_plugin.hpp"
#include <vector>
#include <cpp_interfaces/interface/ie_iinfer_async_request_internal.hpp>
-#include <cpp_interfaces/interface/ie_imemory_state_internal.hpp>
+#include <cpp_interfaces/interface/ie_ivariable_state_internal.hpp>
class MockIAsyncInferRequestInternal : public InferenceEngine::IAsyncInferRequestInternal {
public:
#include <vector>
#include <cpp_interfaces/impl/ie_infer_request_internal.hpp>
-#include <cpp_interfaces/impl/ie_memory_state_internal.hpp>
+#include <cpp_interfaces/impl/ie_variable_state_internal.hpp>
class MockIInferRequestInternal : public InferenceEngine::IInferRequestInternal {
public:
#include <string>
#include <vector>
-#include <cpp_interfaces/interface/ie_imemory_state_internal.hpp>
+#include <cpp_interfaces/interface/ie_ivariable_state_internal.hpp>
class MockIVariableStateInternal : public InferenceEngine::IVariableStateInternal {
public:
#include <cpp_interfaces/base/ie_executable_network_base.hpp>
#include <cpp_interfaces/base/ie_infer_async_request_base.hpp>
-#include "unit_test_utils/mocks/cpp_interfaces/interface/mock_imemory_state_internal.hpp"
+#include "unit_test_utils/mocks/cpp_interfaces/interface/mock_ivariable_state_internal.hpp"
#include "unit_test_utils/mocks/cpp_interfaces/interface/mock_iexecutable_network_internal.hpp"
#include "unit_test_utils/mocks/cpp_interfaces/interface/mock_iasync_infer_request_internal.hpp"
#include "unit_test_utils/mocks/mock_iexecutable_network.hpp"
#include "unit_test_utils/mocks/mock_iinfer_request.hpp"
-#include "unit_test_utils/mocks/mock_ie_imemory_state.hpp"
+#include "unit_test_utils/mocks/mock_ie_ivariable_state.hpp"
#include "unit_test_utils/mocks/cpp_interfaces/impl/mock_inference_plugin_internal.hpp"
#include "unit_test_utils/mocks/cpp_interfaces/interface/mock_iexecutable_network_internal.hpp"