From a428c469ce1d212e3eef66db95be3975828fae21 Mon Sep 17 00:00:00 2001
From: Roman Donchenko
Date: Wed, 11 Nov 2020 15:35:39 +0300
Subject: [PATCH] Fix spelling errors in samples and documentation (#2795)

* Fix spelling errors in samples

* Fix spelling errors in the documentation
---
 docs/HOWTO/add_regression_test_vpu.md | 2 +-
 docs/IE_DG/Extensibility_DG/VPU_Kernel.md | 2 +-
 docs/IE_DG/Int8Inference.md | 2 +-
 docs/IE_DG/supported_plugins/GNA.md | 2 +-
 docs/IE_DG/supported_plugins/HDDL.md | 2 +-
 docs/IE_PLUGIN_DG/Plugin.md | 2 +-
 docs/IE_PLUGIN_DG/PluginTesting.md | 4 ++--
 docs/Inference_Engine_Development_Procedure/IE_Dev_Procedure.md | 2 +-
 .../prepare_model/convert_model/Legacy_IR_Layers_Catalog_Spec.md | 8 ++++----
 .../Extending_Model_Optimizer_with_New_Primitives.md | 2 +-
 .../TensorFlow_Faster_RCNN_ObjectDetection_API.md | 2 +-
 docs/install_guides/VisionAcceleratorFPGA_Configure_2018R5.md | 2 +-
 docs/nGraph_DG/nGraphTransformation.md | 2 +-
 docs/ops/arithmetic/Round_5.md | 4 ++--
 docs/ops/arithmetic/Subtract_1.md | 2 +-
 docs/ops/infrastructure/Parameter_1.md | 2 +-
 docs/ops/movement/VariadicSplit_1.md | 2 +-
 docs/ops/sequence/CTCLoss_4.md | 6 +++---
 docs/ops/sequence/GRUSequence_5.md | 2 +-
 docs/ops/sequence/LSTMSequence_1.md | 2 +-
 docs/ops/sequence/RNNSequence_5.md | 2 +-
 docs/template_plugin/src/template_executable_network.cpp | 2 +-
 inference-engine/ie_bridges/c/docs/api_overview.md | 2 +-
 .../ie_bridges/c/samples/common/opencv_c_wraper/opencv_c_wraper.h | 2 +-
 .../ie_bridges/c/samples/object_detection_sample_ssd/main.c | 2 +-
 inference-engine/ie_bridges/java/samples/README.md | 2 +-
 inference-engine/ie_bridges/java/samples/benchmark_app/Main.java | 2 +-
 .../ie_bridges/python/sample/style_transfer_sample/README.md | 6 +++---
 .../python/sample/style_transfer_sample/style_transfer_sample.py | 6 +++---
 inference-engine/samples/benchmark_app/README.md | 2 +-
 inference-engine/samples/benchmark_app/main.cpp | 2 +-
 inference-engine/samples/benchmark_app/statistics_report.cpp | 2 +-
 inference-engine/samples/common/format_reader/opencv_wraper.h | 2 +-
 inference-engine/samples/common/samples/classification_results.h | 2 +-
 inference-engine/samples/speech_sample/README.md | 4 ++--
 35 files changed, 47 insertions(+), 47 deletions(-)

diff --git a/docs/HOWTO/add_regression_test_vpu.md b/docs/HOWTO/add_regression_test_vpu.md
index e48a34c..5f6fd02 100644
--- a/docs/HOWTO/add_regression_test_vpu.md
+++ b/docs/HOWTO/add_regression_test_vpu.md
@@ -80,4 +80,4 @@ There is no generalized scenario and recommendations are the same as for specifi
 
 ## Compilation tests
 
-The tests are in the `vpu_classification_regression.cpp` file and contains only one scenario ` VpuNoRegressionWithCompilation `. To add a new test just update parameters just as in generalized scenarion of Classification/Detection test groups.
+The tests are in the `vpu_classification_regression.cpp` file and contain only one scenario ` VpuNoRegressionWithCompilation `. To add a new test just update parameters just as in generalized scenario of Classification/Detection test groups.
diff --git a/docs/IE_DG/Extensibility_DG/VPU_Kernel.md b/docs/IE_DG/Extensibility_DG/VPU_Kernel.md
index a3c97d0..fccdd27 100644
--- a/docs/IE_DG/Extensibility_DG/VPU_Kernel.md
+++ b/docs/IE_DG/Extensibility_DG/VPU_Kernel.md
@@ -445,7 +445,7 @@ from/to a `__blobal` pointer since work-group copying could be done in a vector
     }
 }
 ```
-This kernel can be rewritten to introduce special data binding `__dma_preload` and `__dma_postwrite intrinsics`. This means that instead of one kernel, a group of three kernels should be implemented: `kernelName`, `__dma_preload_kernelName` and `__dma_postwrite_kernelName`. `__dma_preload_kernelName` for a particular work group `n` is guaranteed to be executed before `n`-th work group itself, while `__dma_postwrite_kernelName` is guarantied to be executed after a corresponding work group. You can define one of those functions that are intended to be used to copy data from-to `__global` and `__local` memory. The syntactics requires exact functional signature match. The example below illustrates how to prepare your kernel for manual-DMA.
+This kernel can be rewritten to introduce special data binding `__dma_preload` and `__dma_postwrite intrinsics`. This means that instead of one kernel, a group of three kernels should be implemented: `kernelName`, `__dma_preload_kernelName` and `__dma_postwrite_kernelName`. `__dma_preload_kernelName` for a particular work group `n` is guaranteed to be executed before `n`-th work group itself, while `__dma_postwrite_kernelName` is guaranteed to be executed after a corresponding work group. You can define one of those functions that are intended to be used to copy data from-to `__global` and `__local` memory. The syntactics requires exact functional signature match. The example below illustrates how to prepare your kernel for manual-DMA.
 ```cpp
 __kernel void __dma_preload_grn_NCHW(
     __global const half* restrict src,
diff --git a/docs/IE_DG/Int8Inference.md b/docs/IE_DG/Int8Inference.md
index b815f0b..1443800 100644
--- a/docs/IE_DG/Int8Inference.md
+++ b/docs/IE_DG/Int8Inference.md
@@ -86,7 +86,7 @@ This means that 8-bit inference can only be performed with the CPU plugin on the
 For 8-bit integer computations, a model must be quantized. If the model is not quantized then you can use the [Post-Training Optimization Tool](@ref pot_README) to quantize the model. The quantization process adds `FakeQuantize` layers on activations and weights for most layers. Read more about mathematical computations under the hood in the [white paper](https://intel.github.io/mkl-dnn/ex_int8_simplenet.html).
 
 8-bit inference pipeline includes two stages (also refer to the figure below):
-1. *Offline stage*, or *model quantization*. During this stage, `FakeQuantize` layers are added before most layers to have quantized tensors before layers in a way that low-precision accuracy drop for 8-bit integer inference satisfies the specified threshold. The output of this stage is a quantized model. Quantized model precision is not changed, quantized tensors are in original precision range (`fp32`). `FakeQuantize` layer has `Quantization Levels` attribute whic defines quants count. Quants count defines precision which is used during inference. For `int8` range `Quantization Levels` attribute value has to be 255 or 256.
+1. *Offline stage*, or *model quantization*. During this stage, `FakeQuantize` layers are added before most layers to have quantized tensors before layers in a way that low-precision accuracy drop for 8-bit integer inference satisfies the specified threshold. The output of this stage is a quantized model. Quantized model precision is not changed, quantized tensors are in original precision range (`fp32`). `FakeQuantize` layer has `Quantization Levels` attribute which defines quants count. Quants count defines precision which is used during inference. For `int8` range `Quantization Levels` attribute value has to be 255 or 256.
 2. *Run-time stage*. This stage is an internal procedure of the [CPU Plugin](supported_plugins/CPU.md). During this stage, the quantized model is loaded to the plugin. The plugin updates each `FakeQuantize` layer on activations and weights to have `FakeQuantize` output tensor values in low precision range.
 
 ![int8_flow]
diff --git a/docs/IE_DG/supported_plugins/GNA.md b/docs/IE_DG/supported_plugins/GNA.md
index 7a4303c..d40db45 100644
--- a/docs/IE_DG/supported_plugins/GNA.md
+++ b/docs/IE_DG/supported_plugins/GNA.md
@@ -160,7 +160,7 @@ input blob using `InferenceEngine::ICNNNetwork::setBatchSize`. Increasing batch
 Heterogeneous plugin was tested with the Intel® GNA as a primary device and CPU as a secondary device.
 To run inference of networks with layers unsupported by the GNA plugin (for example, Softmax), use the Heterogeneous plugin with the `HETERO:GNA,CPU` configuration. For the list of supported networks, see the [Supported Frameworks](#supported-frameworks).
 
-> **NOTE:** Due to limitation of the Intel® GNA backend library, heterogenous support is limited to cases where in the resulted sliced graph, only one subgraph is scheduled to run on GNA\_HW or GNA\_SW devices.
+> **NOTE:** Due to limitation of the Intel® GNA backend library, heterogeneous support is limited to cases where in the resulted sliced graph, only one subgraph is scheduled to run on GNA\_HW or GNA\_SW devices.
 
 ## Recovery from interruption by high-priority Windows audio processes\*
diff --git a/docs/IE_DG/supported_plugins/HDDL.md b/docs/IE_DG/supported_plugins/HDDL.md
index cc53925..f935c42 100644
--- a/docs/IE_DG/supported_plugins/HDDL.md
+++ b/docs/IE_DG/supported_plugins/HDDL.md
@@ -30,7 +30,7 @@ In addition to common parameters for Myriad plugin and HDDL plugin, HDDL plugin
 | KEY_VPU_HDDL_STREAM_ID | string | empty string | Allows to execute inference on a specified device. |
 | KEY_VPU_HDDL_DEVICE_TAG | string | empty string | Allows to allocate/deallocate networks on specified devices. |
 | KEY_VPU_HDDL_BIND_DEVICE | YES/NO | NO | Whether the network should bind to a device. Refer to vpu_plugin_config.hpp. |
-| KEY_VPU_HDDL_RUNTIME_PRIORITY | singed int | 0 | Specify the runtime priority of a device among all devices that running a same network Refer to vpu_plugin_config.hpp. |
+| KEY_VPU_HDDL_RUNTIME_PRIORITY | signed int | 0 | Specify the runtime priority of a device among all devices that running a same network Refer to vpu_plugin_config.hpp. |
 
 ## See Also
diff --git a/docs/IE_PLUGIN_DG/Plugin.md b/docs/IE_PLUGIN_DG/Plugin.md
index 6008646..bb00fb9 100644
--- a/docs/IE_PLUGIN_DG/Plugin.md
+++ b/docs/IE_PLUGIN_DG/Plugin.md
@@ -86,7 +86,7 @@ The function accepts a const shared pointer to `ngraph::Function` object and per
 
 @snippet src/template_plugin.cpp plugin:transform_network
 
-> **NOTE**: After all these transformations, a `ngraph::Function` object cointains operations which can be perfectly mapped to backend kernels. E.g. if backend has kernel computing `A + B` operations at once, the `TransformNetwork` function should contain a pass which fuses operations `A` and `B` into a single custom operation `A + B` which fits backend kernels set.
+> **NOTE**: After all these transformations, a `ngraph::Function` object contains operations which can be perfectly mapped to backend kernels. E.g. if backend has kernel computing `A + B` operations at once, the `TransformNetwork` function should contain a pass which fuses operations `A` and `B` into a single custom operation `A + B` which fits backend kernels set.
 ### `QueryNetwork()`
 
diff --git a/docs/IE_PLUGIN_DG/PluginTesting.md b/docs/IE_PLUGIN_DG/PluginTesting.md
index 96e753f..0aae628 100644
--- a/docs/IE_PLUGIN_DG/PluginTesting.md
+++ b/docs/IE_PLUGIN_DG/PluginTesting.md
@@ -6,7 +6,7 @@ All the tests are written in the [Google Test C++ framework](https://github.com/
 Inference Engine Plugin tests are included in the `IE::funcSharedTests` CMake target which is built within the OpenVINO repository (see [Build Plugin Using CMake](@ref plugin_build) guide). This library contains tests definitions (the tests bodies) which can be parametrized and instantiated in plugins depending on whether a plugin supports a particular feature, specific sets of parameters for test on supported operation set and so on.
 
-Test definitions are splitted into tests class declaration (see `inference_engine/tests/functional/plugin/shared/include`) and tests class implementation (see `inference_engine/tests/functional/plugin/shared/src`) and include the following scopes of plugin conformance tests:
+Test definitions are split into tests class declaration (see `inference_engine/tests/functional/plugin/shared/include`) and tests class implementation (see `inference_engine/tests/functional/plugin/shared/src`) and include the following scopes of plugin conformance tests:
 
 1. **Behavior tests** (`behavior` sub-folder), which are a separate test group to check that a plugin satisfies basic Inference
 Engine concepts: plugin creation, multiple executable networks support, multiple synchronous and asynchronous inference requests support, and so on. See the next section with details how to instantiate the tests definition class with plugin-specific parameters.
@@ -25,7 +25,7 @@ Engine concepts: plugin creation, multiple executable networks support, multiple
 
 @snippet single_layer_tests/convolution.cpp test_convolution:instantiate
 
-3. **Sub-graph tests** (`subgraph_tests` sub-folder). This group of tests is designed to tests small patterns or combination of layers. E.g. when a particular topology is being enabled in a plugin e.g. TF ResNet-50, there is no need to add the whole topology to test tests. In opposite way, a particular repetative subgraph or pattern can be extracted from `ResNet-50` and added to the tests. The instantiation of the sub-graph tests is done in the same way as for single layer tests.
+3. **Sub-graph tests** (`subgraph_tests` sub-folder). This group of tests is designed to tests small patterns or combination of layers. E.g. when a particular topology is being enabled in a plugin e.g. TF ResNet-50, there is no need to add the whole topology to test tests. In opposite way, a particular repetitive subgraph or pattern can be extracted from `ResNet-50` and added to the tests. The instantiation of the sub-graph tests is done in the same way as for single layer tests.
 
 > **Note**, such sub-graphs or patterns for sub-graph tests should be added to `IE::ngraphFunctions` library first (this library is a pre-defined set of small `ngraph::Function`) and re-used in sub-graph tests after.
 
 4. **HETERO tests** (`subgraph_tests` sub-folder) contains tests for `HETERO` scenario (manual or automatic affinities settings, tests for `QueryNetwork`).
diff --git a/docs/Inference_Engine_Development_Procedure/IE_Dev_Procedure.md b/docs/Inference_Engine_Development_Procedure/IE_Dev_Procedure.md
index f9638ee..0d35404 100644
--- a/docs/Inference_Engine_Development_Procedure/IE_Dev_Procedure.md
+++ b/docs/Inference_Engine_Development_Procedure/IE_Dev_Procedure.md
@@ -49,7 +49,7 @@
 
    b. Add **Milestone** and **Labels** to the MR if it is possible.
 
-   c. If your work is finished, assign the MR to a reviewer. If it is in progress, assing the MR to yourself (`[WIP]` case).
+   c. If your work is finished, assign the MR to a reviewer. If it is in progress, assign the MR to yourself (`[WIP]` case).
 
 Example of an [MR](https://gitlab-icv.inn.intel.com/inference-engine/inference-engine/merge_requests/2512):
diff --git a/docs/MO_DG/prepare_model/convert_model/Legacy_IR_Layers_Catalog_Spec.md b/docs/MO_DG/prepare_model/convert_model/Legacy_IR_Layers_Catalog_Spec.md
index 569e523..dc35ff0 100644
--- a/docs/MO_DG/prepare_model/convert_model/Legacy_IR_Layers_Catalog_Spec.md
+++ b/docs/MO_DG/prepare_model/convert_model/Legacy_IR_Layers_Catalog_Spec.md
@@ -1475,7 +1475,7 @@ where \f$b_{i}^{1}\f$ - first blob \f$i\f$-th element, \f$b_{i}^{2}\f$ - second
 
 * **Parameter name**: *end_axis*
 
-  * **Description**: *end_axis* speficies the last dimension to flatten. The value can be negative meaning counting axes from the end.
+  * **Description**: *end_axis* specifies the last dimension to flatten. The value can be negative meaning counting axes from the end.
   * **Range of values**: an integer
   * **Type**: `int`
   * **Default value**: -1
@@ -2352,7 +2352,7 @@ o_{i} = \sum_{i}^{H*W}\frac{\left ( n*C*H*W \right )` scale}{\sqrt{\sum_{i=0}^{C
 
 * **Parameter name**: *pads_end*
 
-  * **Description**: *pads_end* specfies the number of padding elements at the end of each axis.
+  * **Description**: *pads_end* specifies the number of padding elements at the end of each axis.
   * **Range of values**: a list of non-negative integers. The length of the list must be equal to the number of dimensions in the input blob.
   * **Type**: `int[]`
   * **Default value**: None
@@ -4104,7 +4104,7 @@ Here 224 is the "canonical" size, 2 is the pyramid starting level, and w, h are
 
 **Short description**: *ExperimentalSparseWeightedSum* extracts embedding vectors from the parameters table for each object feature value and sum up these embedding vectors multiplied by weights for each object.
 
-**Detailed description**: [Reference](https://www.tensorflow.org/api_docs/python/tf/nn/embedding_lookup_sparse). This is similar to *embedding_lookup_sparse* but it accepts objects with empty feature values for which it uses a defaut value to extract an embedding from the parameters table. In comparison with *embedding_lookup_sparse* it has a limitation to work only with two-dimensional indices tensor.
+**Detailed description**: [Reference](https://www.tensorflow.org/api_docs/python/tf/nn/embedding_lookup_sparse). This is similar to *embedding_lookup_sparse* but it accepts objects with empty feature values for which it uses a default value to extract an embedding from the parameters table. In comparison with *embedding_lookup_sparse* it has a limitation to work only with two-dimensional indices tensor.
 
 **Inputs**:
 
@@ -4543,7 +4543,7 @@ where \f$C\f$ is a number of classes
 
 * **1**: ND tensor. Data tensor from which rows are selected for the mean operation. Required.
 
 * **2**: 1D tensor. Tensor of rows indices selected from the first input tensor along 0 dimension. Required.
 
-* **3**: 1D tensor. Tensor of segment IDs that rows selected for the operation belong to. Rows beloging to the same segment are summed up and divided by N, where N is a number of selected rows in a segment. This input has the same size as the second input. Values must be sorted in ascending order and can be repeated. Required.
+* **3**: 1D tensor. Tensor of segment IDs that rows selected for the operation belong to. Rows belonging to the same segment are summed up and divided by N, where N is a number of selected rows in a segment. This input has the same size as the second input. Values must be sorted in ascending order and can be repeated. Required.
 **Outputs**:
diff --git a/docs/MO_DG/prepare_model/customize_model_optimizer/Extending_Model_Optimizer_with_New_Primitives.md b/docs/MO_DG/prepare_model/customize_model_optimizer/Extending_Model_Optimizer_with_New_Primitives.md
index 8625f40..b94ddb5 100644
--- a/docs/MO_DG/prepare_model/customize_model_optimizer/Extending_Model_Optimizer_with_New_Primitives.md
+++ b/docs/MO_DG/prepare_model/customize_model_optimizer/Extending_Model_Optimizer_with_New_Primitives.md
@@ -62,7 +62,7 @@ layer {
 ```shell
 python mo.py --input_model VGG16_faster_rcnn_final.caffemodel --input_proto test.prototxt
 ```
-   You get the model successfuly converted to Intermediate Representation, and you can infer it with the Inference Engine.
+   You get the model successfully converted to Intermediate Representation, and you can infer it with the Inference Engine.
 
    However, the aim of this tutorial is to demonstrate the way of supporting custom layers not yet supported by the Model Optimizer.
 If you want to understand better how Model Optimizer works, remove the extension for layer `Proposal` and follow all steps of this tutorial.
diff --git a/docs/MO_DG/prepare_model/customize_model_optimizer/TensorFlow_Faster_RCNN_ObjectDetection_API.md b/docs/MO_DG/prepare_model/customize_model_optimizer/TensorFlow_Faster_RCNN_ObjectDetection_API.md
index 09805f6..482cb15 100644
--- a/docs/MO_DG/prepare_model/customize_model_optimizer/TensorFlow_Faster_RCNN_ObjectDetection_API.md
+++ b/docs/MO_DG/prepare_model/customize_model_optimizer/TensorFlow_Faster_RCNN_ObjectDetection_API.md
@@ -393,7 +393,7 @@ The replacement code is similar to the `SecondStagePostprocessor` replacement fo
 
 * The priors tensor is not constant like in SSDs so the bounding boxes tensor must be scaled with variances [0.1, 0.1, 0.2, 0.2].
 
-The descibed above difference are resolved with the following code:
+The described above difference are resolved with the following code:
 
 ```python
 # TF produces locations tensor without boxes for background.
diff --git a/docs/install_guides/VisionAcceleratorFPGA_Configure_2018R5.md b/docs/install_guides/VisionAcceleratorFPGA_Configure_2018R5.md
index 1eecf13..c0082ef 100644
--- a/docs/install_guides/VisionAcceleratorFPGA_Configure_2018R5.md
+++ b/docs/install_guides/VisionAcceleratorFPGA_Configure_2018R5.md
@@ -254,7 +254,7 @@ jtagconfig
 jtagconfig --setparam 1 JtagClock 6M
 ```
 
-4. Store the Intel® Vision Accelerator Design with Intel® Arria® 10 FPGA bistream on the board:
+4. Store the Intel® Vision Accelerator Design with Intel® Arria® 10 FPGA bitstream on the board:
 ```sh
 aocl flash acl0 /opt/intel/openvino/bitstreams/a10_vision_design_bitstreams/5-0_PL1_FP11_SqueezeNet.aocx
 ```
diff --git a/docs/nGraph_DG/nGraphTransformation.md b/docs/nGraph_DG/nGraphTransformation.md
index 74d9c9b..5a17b35 100644
--- a/docs/nGraph_DG/nGraphTransformation.md
+++ b/docs/nGraph_DG/nGraphTransformation.md
@@ -280,7 +280,7 @@ Not using `get_shape()` method makes your transformation more flexible and appli
 Each `ngraph::Node` has a unique name (used for nGraph internals) and a friendly name. In transformations we care only about friendly name because it represents the name from intermediate representation (IR).
 Also friendly name is used as output tensor name (until we do not have other way to represent output tensor name) and user code that requests intermediate outputs based on these names.
 
-To avoid loosing friendly name when replacing node with other node or subgraph, set the original friendly name to the latest node in replacing subgraph. See the example below.
+To avoid losing friendly name when replacing node with other node or subgraph, set the original friendly name to the latest node in replacing subgraph. See the example below.
 
 ```cpp
 // Replace Div operation with Power and Multiply sub-graph and set original friendly name to Multiply operation
diff --git a/docs/ops/arithmetic/Round_5.md b/docs/ops/arithmetic/Round_5.md
index 6d5eaf3..7944958 100644
--- a/docs/ops/arithmetic/Round_5.md
+++ b/docs/ops/arithmetic/Round_5.md
@@ -6,7 +6,7 @@
 
 **Short description**: *Round* performs element-wise round operation with given tensor.
 
-**Detailed description**: Operation takes one input tensor and rounds the values, element-wise, meaning it finds the nearest integer for each value. In case of halfs, the rule is to round them to the nearest even integer if `mode` attribute is `half_to_even` or rounding in such a way that the result heads away from zero if `mode` attribute is `half_away_from_zero`.
+**Detailed description**: Operation takes one input tensor and rounds the values, element-wise, meaning it finds the nearest integer for each value. In case of halves, the rule is to round them to the nearest even integer if `mode` attribute is `half_to_even` or rounding in such a way that the result heads away from zero if `mode` attribute is `half_away_from_zero`.
 
 Input = [-4.5, -1.9, -1.5, 0.5, 0.9, 1.5, 2.3, 2.5]
 
@@ -18,7 +18,7 @@
 
 * *mode*
 
-  * **Description**: If set to `half_to_even` then the rule is to round halfs to the nearest even integer, if set to `half_away_from_zero` then rounding in such a way that the result heads away from zero.
+  * **Description**: If set to `half_to_even` then the rule is to round halves to the nearest even integer, if set to `half_away_from_zero` then rounding in such a way that the result heads away from zero.
   * **Range of values**: `half_to_even` or `half_away_from_zero`
   * **Type**: string
   * **Default value**: `half_to_even`
diff --git a/docs/ops/arithmetic/Subtract_1.md b/docs/ops/arithmetic/Subtract_1.md
index b82a3f1..d0b6f5d 100644
--- a/docs/ops/arithmetic/Subtract_1.md
+++ b/docs/ops/arithmetic/Subtract_1.md
@@ -45,7 +45,7 @@ o_{i} = a_{i} - b_{i}
 *Example 1*
 
 ```xml
-
+
 256
diff --git a/docs/ops/infrastructure/Parameter_1.md b/docs/ops/infrastructure/Parameter_1.md
index 0467d5a..807a606 100644
--- a/docs/ops/infrastructure/Parameter_1.md
+++ b/docs/ops/infrastructure/Parameter_1.md
@@ -19,7 +19,7 @@
 * *shape*
 
   * **Description**: the shape of the output tensor
-  * **Range of values**: list of non-negative integers, emty list is allowed that means 0D or scalar tensor
+  * **Range of values**: list of non-negative integers, empty list is allowed that means 0D or scalar tensor
   * **Type**: int[]
   * **Default value**: None
   * **Required**: *Yes*
diff --git a/docs/ops/movement/VariadicSplit_1.md b/docs/ops/movement/VariadicSplit_1.md
index 0efdcaf..d2a0602 100644
--- a/docs/ops/movement/VariadicSplit_1.md
+++ b/docs/ops/movement/VariadicSplit_1.md
@@ -17,7 +17,7 @@ No attributes available.
 
 * **2**: `axis` - An axis along `data` to split. A scalar of type T2 with value from range `-rank(data) .. rank(data)-1`. Negative values address dimensions from the end. **Required.**
 
-* **3**: `split_lengths` - A list containing the sizes of each output tensor along the split `axis`. Size of `split_lengths` should be equal to the number of outputs. The sum of sizes must match `data.shape[axis]`. A 1-D Tensor of type T2. `split_lenghts` can contain a single `-1` element, that means all remining items along specified `axis` that are not consumed by other parts. **Required.**
+* **3**: `split_lengths` - A list containing the sizes of each output tensor along the split `axis`. Size of `split_lengths` should be equal to the number of outputs. The sum of sizes must match `data.shape[axis]`. A 1-D Tensor of type T2. `split_lenghts` can contain a single `-1` element, that means all remaining items along specified `axis` that are not consumed by other parts. **Required.**
 
 **Outputs**
 
diff --git a/docs/ops/sequence/CTCLoss_4.md b/docs/ops/sequence/CTCLoss_4.md
index c36edec..8927c39 100644
--- a/docs/ops/sequence/CTCLoss_4.md
+++ b/docs/ops/sequence/CTCLoss_4.md
@@ -10,7 +10,7 @@
 
 *CTCLoss* operation is presented in [Connectionist Temporal Classification - Labeling Unsegmented Sequence Data with Recurrent Neural Networks: Graves et al., 2016](http://www.cs.toronto.edu/~graves/icml_2006.pdf)
 
-*CTCLoss* estimates likelyhood that a target `labels[i,:]` can occur (or is real) for given input sequence of logits `logits[i,:,:]`.
+*CTCLoss* estimates likelihood that a target `labels[i,:]` can occur (or is real) for given input sequence of logits `logits[i,:,:]`.
 
 Briefly, *CTCLoss* operation finds all sequences aligned with a target `labels[i,:]`, computes log-probabilities of the aligned sequences using `logits[i,:,:]` and computes a negative sum of these log-probabilies.
 
@@ -28,7 +28,7 @@ p_{i,t,j} = \frac{\exp(logits[i,t,j])}{\sum^{K}_{k=0}{\exp(logits[i,t,k])}}
 2. For a given `i`-th target from `labels[i,:]` find all aligned paths.
 A path `S = (c1,c2,...,cT)` is aligned with a target `G=(g1,g2,...,gT)` if both chains are equal after decoding.
 The decoding extracts substring of length `label_length[i]` from a target `G`, merges repeated characters in `G` in case *preprocess_collapse_repeated* equal to True and
-finds unique elements in the order of character occurence in case *unique* equal to True.
+finds unique elements in the order of character occurrence in case *unique* equal to True.
 The decoding merges repeated characters in `S` in case *ctc_merge_repeated* equal to True and removes blank characters represented by `blank_index`.
 By default, `blank_index` is equal to `C-1`, where `C` is a number of classes including the blank.
 For example, in case default *ctc_merge_repeated*, *preprocess_collapse_repeated*, *unique* and `blank_index` a target sequence `G=(0,3,2,2,2,2,2,4,3)` of a length `label_length[i]=4` is processed
@@ -72,7 +72,7 @@ Having log-probabilities for aligned paths, log of summed up probabilities for t
 
 * *unique*
 
-  * **Description**: *unique* is a flag to find unique elements for a target `labels[i,:]` before matching with potential alignments. Unique elements in the processed `labels[i,:]` are sorted in the order of their occurence in original `labels[i,:]`. For example, the processed sequence for `labels[i,:]=(0,1,1,0,1,3,3,2,2,3)` of length `label_length[i]=10` will be `(0,1,3,2)` in case *unique* equal to True.
+  * **Description**: *unique* is a flag to find unique elements for a target `labels[i,:]` before matching with potential alignments. Unique elements in the processed `labels[i,:]` are sorted in the order of their occurrence in original `labels[i,:]`. For example, the processed sequence for `labels[i,:]=(0,1,1,0,1,3,3,2,2,3)` of length `label_length[i]=10` will be `(0,1,3,2)` in case *unique* equal to True.
   * **Range of values**: True or False
   * **Type**: `boolean`
   * **Default value**: False
diff --git a/docs/ops/sequence/GRUSequence_5.md b/docs/ops/sequence/GRUSequence_5.md
index 4b6788e..f44a1d7 100644
--- a/docs/ops/sequence/GRUSequence_5.md
+++ b/docs/ops/sequence/GRUSequence_5.md
@@ -31,7 +31,7 @@ A single cell in the sequence is implemented in the same way as in
diff --git a/inference-engine/samples/speech_sample/README.md b/inference-engine/samples/speech_sample/README.md
index 1dee4af..7095d7d 100644
--- a/inference-engine/samples/speech_sample/README.md
+++ b/inference-engine/samples/speech_sample/README.md
@@ -30,7 +30,7 @@ utterance in the input ARK file is scanned for dynamic range. The scale factor
 (floating point scalar multiplier) required to scale the maximum
 input value of the first utterance to 16384 (15 bits) is used for
 all subsequent inputs. The neural network is quantized to
-accomodate the scaled input dynamic range. In user-defined
+accommodate the scaled input dynamic range. In user-defined
 quantization mode, the user may specify a scale factor via the
 `-sf` flag that will be used for static quantization. In dynamic
 quantization mode, the scale factor for each input batch is computed
@@ -42,7 +42,7 @@ target weight resolution for all layers. For example, when `-qb 8` is
 specified, the plugin will use 8-bit weights wherever possible in the
 network. Note that it is not always possible to use 8-bit weights due to
 GNA hardware limitations. For example, convolutional layers always
-use 16-bit weights (GNA harware verison 1 and 2). This limitation
+use 16-bit weights (GNA hardware version 1 and 2). This limitation
 will be removed in GNA hardware version 3 and higher.
 
 #### Execution Modes
-- 
2.7.4