Fix typos and skip typo checking for user-defined words.
Signed-off-by: gichan2-jang <gichan2.jang@samsung.com>
- Join, a new element. It merges multiple input streams (from src pads of different elements with the same GST-Cap) into one.
- Tensor-rate, a new element. It allows throttling by generating QoS messages.
- TensorRT support
- - TF1-lite and TF2-lite coexistance
+ - TF1-lite and TF2-lite coexistence
- TFx-lite NNAPI, GPU Delegation
- Minor features
- Semantics of hardware-acceleration options for tensor-filter re-worked.
- API/Android: nnfw-runtime (neurun) and SNPE support
- API/Android: usability update.
- - API/Android: less invokation latency. (more optimization coming in next versions)
+ - API/Android: less invocation latency. (more optimization coming in next versions)
- API/C: bugfixes, architectural upgrade, latency reduction.
- tensor-filter has latency and throughput performance monitors.
- tensor-sink is by default "sync=false". If appsink or tensor_sink in NNStreamer Pipeline API's pipeline has sync=true, emit warning messages.
- Updated env-var handling logic for non-Tizen devices.
- Unit test: higher visibility & behavior correctness fixes.
- Auto-generated test cases for tensor-filter sub-plugins (extensions).
- - Android/Java support with more convinient methods.
+ - Android/Java support with more convenient methods.
- Support gcc9.
- Support OpenVINO as a tensor-filter, allowing acceleration with Intel NCS/Myriad.
- Support NCSDK as a tensor-filter.
0.1.0 -> 0.1.1:
- Full "Plug & Play" capability of subplugins. (tensor_filter, tensor_filter::custom, tensor_decoder)
- Fully configurable subplugin locations.
- - Capability to build subplungins wihtout the dependencies on nnstreamer sources.
+ - Capability to build subplugins without the dependencies on nnstreamer sources.
- Revert Tensorflow input-memcpy-less-ness for multi-tensor support. (Will support memcpy-less-ness later)
- Support "String" type of tensors.
- API sets updated. (still not "stable")
## TSC Meeting
-A TSC meeting is required to be publically announced (at least 10 days before the meeting) and publically accessible. A chairperson or its deputy, who is designated by a chairperson or the former chairperson, may announce and hold a TSC meeting. A deputy designated by a former resigning chairperson should hold a TSC meeting to elect a new chairperson and is relieved automatically by electing a new chairperson.
+A TSC meeting is required to be publicly announced (at least 10 days before the meeting) and publicly accessible. A chairperson or its deputy, who is designated by a chairperson or the former chairperson, may announce and hold a TSC meeting. A deputy designated by a former resigning chairperson should hold a TSC meeting to elect a new chairperson and is relieved automatically by electing a new chairperson.
A TSC meeting should be announced via the nnstreamer-announce LFAI mailing list (the mailing list). Other media (GitHub issues, gitter.im, social media, LFAI event calendar, and so on) may also be used along with the mailing list.
-A TSC meeting should be held at a publically accessible place. A TSC meeting is, by default, held conventionally (offline meeting) in a place where the chairperson or its deputy has announced. However, alternatively, a TSC meeting may be held virtually (audio or video conference) with publically available media that are declared with the meeting announcement. A conventionally-held TSC meeting may include audio or video conferences to help those who cannot present physically.
+A TSC meeting should be held at a publicly accessible place. A TSC meeting is, by default, held conventionally (offline meeting) in a place where the chairperson or its deputy has announced. However, alternatively, a TSC meeting may be held virtually (audio or video conference) with publicly available media that are declared with the meeting announcement. A conventionally-held TSC meeting may include audio or video conferences to help those who cannot present physically.
A TSC meeting is, by default, recorded or scripted for the general public. The URLs or the contents of the recordings or scripts should be available via the mailing list. Alternatively, a live video stream may be broadcasted via methods declared by the mailing list.
- [tensor\_sink](https://github.com/nnstreamer/nnstreamer/tree/main/gst/nnstreamer/elements/gsttensor_sink.md) (stable)
- ```appsink```-like element, which is specialized for ```other/tensors```. You may use appsink with capsfilter instead.
- [tensor\_merge](https://github.com/nnstreamer/nnstreamer/tree/main/gst/nnstreamer/elements/gsttensor_merge.c) (stable)
- - This combines muiltiple single-tensored (```other/tensors,num_tensors=1```) streams into a single-tensored stream by merging dimensions of incoming tensor streams. For example, it may merge two ```dimensions=640:480``` streams into ```dimensons=1280:480```, ```dimensions=640:960```, or ```dimensions=640:480:2```, according to a given configuration.
+ - This combines multiple single-tensored (```other/tensors,num_tensors=1```) streams into a single-tensored stream by merging dimensions of incoming tensor streams. For example, it may merge two ```dimensions=640:480``` streams into ```dimensions=1280:480```, ```dimensions=640:960```, or ```dimensions=640:480:2```, according to a given configuration.
- Users can adjust sync-mode and sync-option to change its behaviors of when to create output tensors and how to choose input tensors.
- Users can adjust how dimensions are merged (the rank merged, the order of merged streams).
- [tensor\_split](https://github.com/nnstreamer/nnstreamer/tree/main/gst/nnstreamer/elements/gsttensor_split.c) (stable)
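As an illustrative sketch of the merge behavior described above (the pad names and the ```mode```/```option``` values here are assumptions for illustration, not verified against the element's documentation):

```sh
# Hypothetical pipeline: merge two 640x480 tensor streams into one
# stream along an additional dimension (mode/option values illustrative).
gst-launch-1.0 \
  videotestsrc ! video/x-raw,width=640,height=480 ! tensor_converter ! merge.sink_0 \
  videotestsrc ! video/x-raw,width=640,height=480 ! tensor_converter ! merge.sink_1 \
  tensor_merge name=merge mode=linear option=2 ! tensor_sink
```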
typedef struct {
void *memBuf; /**<A void * pointing to memory of size bufSize.*/
size_t filePos; /**<Current position inside the file.*/
- size_t bufPos; /**<Curent position inside the buffer.*/
+ size_t bufPos; /**<Current position inside the buffer.*/
size_t bufSize; /**<The size of the buffer.*/
size_t bufLen; /**<The actual size of the buffer used.*/
enum bigWigFile_type_enum type; /**<The connection type*/
You can change each provided variable on the fly using a script, for example...
```sh
-$ sed s/@EXT_ABBRV@/cutom_plugin/g subplugin_unittest_template.cc.in > subplugin_unittest.cc
+$ sed s/@EXT_ABBRV@/custom_plugin/g subplugin_unittest_template.cc.in > subplugin_unittest.cc
```
but preferably, use meson through `configure_data` and `configure_file`.
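For example, the same substitution could be sketched in a `meson.build` with meson's `configuration_data()` and `configure_file()` (the template and output names below follow the `sed` example; the variable name is an assumption):

```meson
# Substitute @EXT_ABBRV@ in the unit-test template at configure time.
unittest_conf = configuration_data()
unittest_conf.set('EXT_ABBRV', 'custom_plugin')
configure_file(
  input: 'subplugin_unittest_template.cc.in',
  output: 'subplugin_unittest.cc',
  configuration: unittest_conf
)
```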
[default.extend-words]
mosquitto = "mosquitto"
+iy = "iy"
+addd = "addd"
+ND = "ND"
+thr = "thr"
+wth = "wth"
+ot = "ot"
+FORAMT = "FORAMT" # Fix later (this is related to the flatbuf and protobuf schemas)
+Seeked = "Seeked"
+ba = "ba"
+splitted = "splitted"
override_dh_auto_build:
ninja -C ${BUILDDIR}
- # A few modules are not avaiable in Ubuntu 22.04. Don't try to create them if not available.
+ # A few modules are not available in Ubuntu 22.04. Don't try to create them if not available.
if [ -f './build/ext/nnstreamer/tensor_filter/libnnstreamer_filter_nnfw.so' ]; then echo "NNFW exists" ; else rm debian/nnstreamer-nnfw.install; fi
if [ -f './build/ext/nnstreamer/tensor_filter/libnnstreamer_filter_pytorch.so' ]; then echo "pytorch exists" ; else rm debian/nnstreamer-pytorch.install; fi
if [ -f './build/ext/nnstreamer/tensor_filter/libnnstreamer_filter_caffe2.so' ]; then echo "caffe2 exists" ; else rm debian/nnstreamer-caffe2.install; fi
GstStateChange transition);
/**
- * @brief enum for propery
+ * @brief enum for property
*/
typedef enum
{
}
/**
- * @brief Looper desctructor
+ * @brief Looper destructor
*/
Looper::~Looper ()
{
/**
* @brief parse the converting result to feed output tensors
- * @param[result] Python object retunred by convert
+ * @param[result] Python object returned by convert
* @param[info] info Structure for output tensors info
* @return 0 if no error, otherwise negative errno
*/
* which converts to tensors using python.
* @see https://github.com/nnstreamer/nnstreamer
* @author Gichan Jang <gichan2.jang@samsung.com>
- * @bug python converter with Python3.9.10 is stucked during Py_Finalize().
+ * @bug python converter with Python3.9.10 is stuck during Py_Finalize().
*/
#include <nnstreamer_plugin_api.h>
if (core->init () != 0) {
delete core;
- Py_ERRMSG ("failed to initailize the object or the python script is invalid: Python\n");
+ Py_ERRMSG ("failed to initialize the object or the python script is invalid: Python\n");
ret = -EINVAL;
goto done;
}
/**
* @brief C++-Template-like box location calculation for box-priors
- * @bug This is not macro-argument safe. Use paranthesis!
+ * @bug This is not macro-argument safe. Use parentheses!
* @param[in] bb The configuration, "bounding_boxes"
* @param[in] index The index (3rd dimension of BOX_SIZE:1:DETECTION_MAX:1)
* @param[in] total_labels The count of total labels. We can get this from input tensor info. (1st dimension of LABEL_SIZE:DETECTION_MAX:1:1)
return FALSE;
/**
- * The shape of the ouput tensor is [7, N, 1, 1], where N is the maximum
+ * The shape of the output tensor is [7, N, 1, 1], where N is the maximum
* number (i.e., 200) of detected bounding boxes.
*/
dim = config->info.info[0].dimension;
limit, config->info.num_tensors);
}
- /* tensor-type of the tensors shoule be the same */
+ /* tensor-type of the tensors should be the same */
for (i = 1; i < config->info.num_tensors; ++i) {
g_return_val_if_fail (config->info.info[i - 1].type == config->info.info[i].type, FALSE);
}
pos2 += width;
}
x1 += 9;
- pos1 += 9; /* charater width + 1px */
+ pos1 += 9; /* character width + 1px */
}
}
}
*
* @see https://github.com/nnstreamer/nnstreamer
* @author Jijoong Moon <jijoong.moon@samsung.com>
- * @bug If the elment size is 2 or larger, padding won't work.
+ * @bug If the element size is 2 or larger, padding won't work.
* GRAY16 types has size of 2 and if you have padding, it won't work.
* To correct this, dv_decode() should be fixed.
*/
*
* - Used model is deeplabv3_257_mv_gpu.tflite.
* - Resize image into 257:257 at the first videoscale.
- * - Transfrom RGB value into float32 in range [0,1] at tensor_transform.
+ * - Transform RGB value into float32 in range [0,1] at tensor_transform.
*
* gst-launch-1.0 -v \
* filesrc location=cat.png ! decodebin ! videoconvert ! videoscale ! imagefreeze !\
idata->max_labels = (guint) max_labels_64;
}
- GST_WARNING ("mode-option-\"%d\" is not definded.", op_num);
+ GST_WARNING ("mode-option-\"%d\" is not defined.", op_num);
return TRUE;
}
static singleLineSprite_t singleLineSprite;
/**
- * @brief Data structure for boundig box info.
+ * @brief Data structure for pose-estimation info.
*/
typedef struct
{
* @brief Check if a value is within lower and upper bounds
* @param value the value to check
* @param lower_b the lower bound (inclusive)
- * @param upper_b the uppoer bound (exlcusive)
+ * @param upper_b the upper bound (exclusive)
* @return TRUE if the value is within the bounds, otherwise FALSE
*/
static gboolean
/**
* @brief Fill in pixel with PIXEL_VALUE at x,y position. Make thicker (x+1, y+1)
* @param[out] out_info The output buffer (RGBA plain)
- * @param[in] bdata The bouding-box internal data.
+ * @param[in] bdata The pose-estimation internal data.
* @param[in] coordinate of pixel
*/
static void
/**
* @brief Draw line with dot at the end of line
* @param[out] out_info The output buffer (RGBA plain)
- * @param[in] bdata The bouding-box internal data.
+ * @param[in] bdata The pose-estimation internal data.
* @param[in] coordinate of two end point of line
*/
static void
}
/**
- * @brief Draw lable with the given results (pose) to the output buffer
+ * @brief Draw label with the given results (pose) to the output buffer
* @param[out] out_info The output buffer (RGBA plain)
- * @param[in] bdata The bouding-box internal data.
+ * @param[in] bdata The pose-estimation internal data.
* @param[in] results The final results to be drawn.
*/
static void
/**
* @brief Draw with the given results (pose) to the output buffer
* @param[out] out_info The output buffer (RGBA plain)
- * @param[in] bdata The bouding-box internal data.
+ * @param[in] bdata The pose-estimation internal data.
* @param[in] results The final results to be drawn.
*/
static void
Py_LOCK ();
if (!PyObject_HasAttrString (core_obj, (char *) "getOutCaps")) {
ml_loge ("Cannot find 'getOutCaps'");
- ml_loge ("defualt caps is `application/octet-stream`");
+ ml_loge ("default caps is `application/octet-stream`");
caps = gst_caps_from_string ("application/octet-stream");
goto done;
}
if (core->init () != 0) {
delete core;
- ml_loge ("failed to initailize the object: Python3\n");
+ ml_loge ("failed to initialize the object: Python3\n");
goto done;
}
* @brief transfer crop region info with the given results to the output buffer
* @param[out] out_info The output buffer
* @param[in] data The Tensor_region internal data.
- * @param[in] results The final results to be transfered.
+ * @param[in] results The final results to be transferred.
*/
static void
gst_tensor_top_detectedObjects_cropInfo (GstMapInfo *out_info, const tensor_region *data, GArray *results)
/**
* @brief C++-Template-like box location calculation for box-priors
- * @bug This is not macro-argument safe. Use paranthesis!
+ * @bug This is not macro-argument safe. Use parentheses!
* @param[in] bb The configuration, "tensor region"
* @param[in] index The index (3rd dimension of BOX_SIZE:1:MOBILENET_SSD_DETECTION_MAX:1)
* @param[in] total_labels The count of total labels. We can get this from input tensor info. (1st dimension of LABEL_SIZE:MOBILENET_SSD_DETECTION_MAX:1:1)
If the framework or backend/runtime library has C APIs and you want to write the subplugin in C, use ```#include <nnstreamer_plugin_api_filter.h>```.
Your C subplugin is supposed to fill in ```GstTensorFilterFramework``` struct and register the struct with ```nnstreamer_filter_probe (GstTensorFilterFrameworkEventData *)``` function, which is supposed to be called with ```((constructor))``` initializer (```init_filter_nnfw (void)``` function in the reference).
If your subplugin has custom properties to be supplied by users, describe their usages with ```nnstreamer_filter_set_custom_property_desc ()``` function.
-Then, call ```nnstreamer_filter_exit ()``` function with ```((desctructor))``` terminator (```fini_filter_nnfw (void)``` function in the reference).
+Then, call ```nnstreamer_filter_exit ()``` function with ```((destructor))``` terminator (```fini_filter_nnfw (void)``` function in the reference).
-In ```GstTensorFilterFramework```, there are two different ways, ```v0 (version == GST_TENSOR_FILTER_FRAMEWORK_V0)``` and ```v1 (version == GST_TENSOR_FILTER_FRAMEWORK_V1)```. In the struct, there is a ```union``` of ```v0``` and ```v1```, and it is recommended to use ```v1``` and ```set version = GST_TENSOR_FILTER_FRAMEWORK_V1``` (v1). ```v0``` is supposed to be used by old subplugins for backward compatibilty and any new subplugins should use ```v1```, which is simpler and richers in features.
+In ```GstTensorFilterFramework```, there are two different ways, ```v0 (version == GST_TENSOR_FILTER_FRAMEWORK_V0)``` and ```v1 (version == GST_TENSOR_FILTER_FRAMEWORK_V1)```. In the struct, there is a ```union``` of ```v0``` and ```v1```, and it is recommended to use ```v1``` and ```set version = GST_TENSOR_FILTER_FRAMEWORK_V1``` (v1). ```v0``` is supposed to be used by old subplugins for backward compatibility and any new subplugins should use ```v1```, which is simpler and richer in features.
However, note that if you are going to use framework/library with C++ APIs, please do not use ```nnstreamer_plugin_api_filter.h```, but use the base tensor-filter-subplugin C++ class as in the next section.
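The constructor/destructor registration pattern described above can be sketched in a minimal, self-contained form as follows. Note that the framework struct and the probe/exit calls are stubbed out here for illustration; a real subplugin uses the declarations from ```nnstreamer_plugin_api_filter.h``` instead of these stand-ins.

```c
/* Stand-in for the real NNStreamer framework struct; illustrative only. */
typedef struct
{
  const char *name;
} GstTensorFilterFramework;

static GstTensorFilterFramework example_fw = { .name = "example" };
int example_registered = 0; /**< Visible so the pattern can be observed */

/** Stub standing in for nnstreamer_filter_probe () */
static int
stub_filter_probe (GstTensorFilterFramework *fw)
{
  (void) fw;
  example_registered = 1;
  return 0;
}

/** Stub standing in for nnstreamer_filter_exit () */
static void
stub_filter_exit (const char *name)
{
  (void) name;
  example_registered = 0;
}

/** Runs automatically when the library is loaded, before main () */
static void init_filter_example (void) __attribute__ ((constructor));
static void
init_filter_example (void)
{
  stub_filter_probe (&example_fw);
}

/** Runs automatically when the library is unloaded */
static void fini_filter_example (void) __attribute__ ((destructor));
static void
fini_filter_example (void)
{
  stub_filter_exit (example_fw.name);
}
```

Because the ```((constructor))``` initializer runs at load time, no explicit registration call is needed from the pipeline; loading the shared object is enough.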
edgetpu_dep = dependency('edgetpu', required: false)
if not edgetpu_dep.found()
- # Since the developement package for Ubuntu does not have pkgconfig file,
+ # Since the development package for Ubuntu does not have pkgconfig file,
# check that the required header and library files exist in the system
# include and lib directories.
if cxx.has_header('edgetpu.h')
* @brief The optional callback for GstTensorFilterFramework
* @param prop property of tensor_filter instance
* @param private_data : armnn plugin's private data
- * @param[out] info The dimesions and types of input tensors
+ * @param[out] info The dimensions and types of input tensors
*/
static int
armnn_getInputDim (const GstTensorFilterProperties *prop, void **private_data,
* @brief The optional callback for GstTensorFilterFramework
* @param prop property of tensor_filter instance
* @param private_data : armnn plugin's private data
- * @param[out] info The dimesions and types of output tensors
+ * @param[out] info The dimensions and types of output tensors
*/
static int
armnn_getOutputDim (const GstTensorFilterProperties *prop, void **private_data,
* @brief The optional callback for GstTensorFilterFramework
* @param prop property of tensor_filter instance
* @param private_data : caffe2 plugin's private data
- * @param[out] info The dimesions and types of input tensors
+ * @param[out] info The dimensions and types of input tensors
*/
static int
caffe2_getInputDim (const GstTensorFilterProperties *prop, void **private_data,
* @brief The optional callback for GstTensorFilterFramework
* @param prop property of tensor_filter instance
* @param private_data : caffe2 plugin's private data
- * @param[out] info The dimesions and types of output tensors
+ * @param[out] info The dimensions and types of output tensors
*/
static int
caffe2_getOutputDim (const GstTensorFilterProperties *prop, void **private_data,
* "Already registered (-EINVAL)" may occur.
*
*/
-#ifndef __NNS_TENSOR_FITLER_CPP_H__
-#define __NNS_TENSOR_FITLER_CPP_H__
+#ifndef __NNS_TENSOR_FILTER_CPP_H__
+#define __NNS_TENSOR_FILTER_CPP_H__
#ifdef __cplusplus
#endif /* __cplusplus */
-#endif /* __NNS_TENSOR_FITLER_CPP_H__ */
+#endif /* __NNS_TENSOR_FILTER_CPP_H__ */
GstTensorsInfo inputInfo; /**< The tensor info of input tensors */
GstTensorsInfo outputInfo; /**< The tensor info of output tensors */
- GMappedFile *modelMap; /**< Model file mmaped to memory */
+ GMappedFile *modelMap; /**< Model file mapped to memory */
NNContext *context; /**< Context for model load and runtime */
NNEngine *engine; /**< Engine path for context acceleration */
}
/**
- * @brief fetch and setup input/ouput tensors metadata
+ * @brief fetch and setup input/output tensors metadata
* @return 0 if OK. non-zero if error.
*/
int
}
/**
- * @brief fetch and setup ouput tensors metadata
+ * @brief fetch and setup output tensors metadata
* @return 0 if OK. non-zero if error.
*/
int
}
std::cerr << "Failed to invoke tensorflow-lite + edge-tpu." << std::endl;
- throw std::runtime_error ("Invoking tensorflow-lite with edge-tpu delgation failed.");
+ throw std::runtime_error ("Invoking tensorflow-lite with edge-tpu delegation failed.");
}
}
}
/**
- * 5. Get the tensor desciptions for input and output form allocated model
+ * 5. Get the tensor descriptions for input and output from the allocated model
*/
len = (guint32) sizeof (tensor_desc_input);
ret_code =
* Supported props:
* input_rank: (mandatory)
* Rank of each input tensors.
- * Each ranks are separeted by ':'.
+ * Each rank is separated by ':'.
* The number of ranks must be the same as the number of input
* tensors.
* enable_tensorrt: (optional)
{
extern "C" {
void init_filter_mxnet (void)
- __attribute__ ((constructor)); /**< Dynamic library contstructor */
-void fini_filter_mxnet (void) __attribute__ ((destructor)); /**< Dynamic library desctructor */
+ __attribute__ ((constructor)); /**< Dynamic library constructor */
+void fini_filter_mxnet (void) __attribute__ ((destructor)); /**< Dynamic library destructor */
}
/**
class TensorFilterMXNet final : public tensor_filter_subplugin
{
public:
- static void init_filter (); /**< Dynamic library contstructor helper */
- static void fini_filter (); /**< Dynamic library desctructor helper */
+ static void init_filter (); /**< Dynamic library constructor helper */
+ static void fini_filter (); /**< Dynamic library destructor helper */
TensorFilterMXNet ();
~TensorFilterMXNet ();
int output_ranks_[NNS_TENSOR_RANK_LIMIT]; /**< Rank info of output tensor */
std::string model_symbol_path_; /**< The model symbol .json file */
- std::string model_params_path_; /**< The model paremeters .params file */
+ std::string model_params_path_; /**< The model parameters .params file */
Symbol net_; /**< Model symbol */
std::unique_ptr<Executor> executor_; /**< Model executor */
class ncnn_subplugin final : public tensor_filter_subplugin
{
public:
- static void init_filter_ncnn (); /**< Dynamic library contstructor helper */
- static void fini_filter_ncnn (); /**< Dynamic library desctructor helper */
+ static void init_filter_ncnn (); /**< Dynamic library constructor helper */
+ static void fini_filter_ncnn (); /**< Dynamic library destructor helper */
ncnn_subplugin ();
~ncnn_subplugin ();
* @brief Convert from nnfw type to gst tensor type
* @param[in] nnfw_type type given in nnfw format
* @param[out] type container to receive type in gst tensor format
- * @return 0 on sucess, errno on error
+ * @return 0 on success, negative errno on error
*/
static int
nnfw_tensor_type_to_gst (const NNFW_TYPE nnfw_type, tensor_type * type)
* @brief Convert from gst tensor type to NNFW type
* @param[in] type type given in gst format
* @param[out] nnfw_type container to receive type in nnfw tensor format
- * @return 0 on sucess, negative errno on error
+ * @return 0 on success, negative errno on error
*/
static int
nnfw_tensor_type_from_gst (const tensor_type type, NNFW_TYPE * nnfw_type)
* @param[in] mem Tensor memory containing input/output information
* @param[in] info Tensor information in nnfw format
* @param[in] is_input given memory is for input or output
- * @return 0 on sucess, negative errno on error
+ * @return 0 on success, negative errno on error
*/
static int
nnfw_tensor_memory_set (const nnfw_pdata * pdata, const GstTensorMemory * mem,
/* the order of dimension is reversed at CAPS negotiation */
for (i = 0; i < rank; i++) {
- /* free dimensions are treated as 1 if not overriden */
+ /* free dimensions are treated as 1 if not overridden */
shapes[rank - i - 1] = (shapes[rank - i - 1] > 0) ? shapes[rank - i - 1] : 1;
dim[i] = shapes[rank - i - 1];
}
/**
* @brief Convert a tensor container in NNS to a tensor container in IE
* @param tensorDesc the class that defines a Tensor description to be converted from a GstTensorMemory
- * @param gstTensor the container of a tensor in NNS to be coverted to a tensor container in IE
+ * @param gstTensor the container of a tensor in NNS to be converted to a tensor container in IE
* @return a pointer to the Blob which is a container of a tensor in IE if OK, otherwise nullptr
*/
InferenceEngine::Blob::Ptr
/**
* @brief Get the information about the dimensions of input tensors from the given model
- * @param[out] info metadata containing the dimesions and types information of the input tensors
+ * @param[out] info metadata containing the dimensions and types information of the input tensors
* @return 0 (TensorFilterOpenvino::RetSuccess) if OK, negative values if error
*/
int
/**
* @brief Get the information about the dimensions of output tensors from the given model
- * @param[out] info metadata containing the dimesions and types information of the output tensors
+ * @param[out] info metadata containing the dimensions and types information of the output tensors
* @return 0 (TensorFilterOpenvino::RetSuccess) if OK, negative values if error
*/
int
* @brief The optional callback for GstTensorFilterFramework
* @param prop property of tensor_filter instance
* @param private_data TensorFilterOpenvino plugin's private data
- * @param[out] info the dimesions and types of input tensors
+ * @param[out] info the dimensions and types of input tensors
* @return 0 (TensorFilterOpenvino::RetSuccess) if OK, negative values if error
*/
static int
* @brief The optional callback for GstTensorFilterFramework
* @param prop property of tensor_filter instance
* @param private_data TensorFilterOpenvino plugin's private data
- * @param[out] info The dimesions and types of output tensors
+ * @param[out] info The dimensions and types of output tensors
* @return 0 (TensorFilterOpenvino::RetSuccess) if OK, negative values if error
*/
static int
Py_LOCK ();
PyObject *api_module = PyImport_ImportModule ("nnstreamer_python");
if (api_module == NULL) {
- Py_ERRMSG ("Cannt find `nnstreamer_python` module");
+ Py_ERRMSG ("Cannot find `nnstreamer_python` module");
goto exit;
}
core_obj = PyObject_CallObject (cls, NULL);
if (core_obj) {
- /** check whther either setInputDim or getInputDim/getOutputDim are
+ /** check whether either setInputDim or getInputDim/getOutputDim are
* defined */
if (PyObject_HasAttrString (core_obj, (char *) "setInputDim"))
callback_type = cb_type::CB_SETDIM;
/**
* prop->model_files[0] contains the path of a python script
- * prop->custom contains its arguments seperated by ' '
+ * prop->custom contains its arguments separated by ' '
*/
script_path = prop->model_files[0];
if (core->init (prop) != 0) {
delete core;
- g_printerr ("failed to initailize the object: Python\n");
+ g_printerr ("failed to initialize the object: Python\n");
PyGILState_Release (gstate);
throw std::runtime_error ("Python is not initialize");
}
* @brief The optional callback for GstTensorFilterFramework
* @param prop property of tensor_filter instance
* @param private_data : pytorch plugin's private data
- * @param[out] info The dimesions and types of input tensors
+ * @param[out] info The dimensions and types of input tensors
*/
static gint
torch_getInputDim (const GstTensorFilterProperties *prop, void **private_data,
* @brief The optional callback for GstTensorFilterFramework
* @param prop property of tensor_filter instance
* @param private_data : pytorch plugin's private data
- * @param[out] info The dimesions and types of output tensors
+ * @param[out] info The dimensions and types of output tensors
*/
static gint
torch_getOutputDim (const GstTensorFilterProperties *prop, void **private_data,
*private_data = NULL;
delete core;
- g_printerr ("failed to initailize the object: tensorflow\n");
+ g_printerr ("failed to initialize the object: tensorflow\n");
return -2;
}
* @brief The optional callback for GstTensorFilterFramework
* @param prop: property of tensor_filter instance
* @param private_data : tensorflow plugin's private data
- * @param[out] info The dimesions and types of input tensors
+ * @param[out] info The dimensions and types of input tensors
*/
static int
tf_getInputDim (const GstTensorFilterProperties *prop, void **private_data, GstTensorsInfo *info)
* @brief The optional callback for GstTensorFilterFramework
* @param prop: property of tensor_filter instance
* @param private_data : tensorflow plugin's private data
- * @param[out] info The dimesions and types of output tensors
+ * @param[out] info The dimensions and types of output tensors
*/
static int
tf_getOutputDim (const GstTensorFilterProperties *prop, void **private_data,
/** @brief callback method to delete interpreter for shared model */
friend void free_interpreter (void *instance);
/** @brief callback method to replace interpreter for shared model */
- friend void replace_interpreter (void *instance, void *interperter);
+ friend void replace_interpreter (void *instance, void *interpreter);
private:
int num_threads;
}
/**
- * @brief TFLiteInterpreter desctructor
+ * @brief TFLiteInterpreter destructor
*/
TFLiteInterpreter::~TFLiteInterpreter ()
{
* @brief callback method to replace interpreter for shared model
*/
void
-replace_interpreter (void *instance, void *interperter)
+replace_interpreter (void *instance, void *interpreter)
{
TFLiteCore *core = reinterpret_cast<TFLiteCore *> (instance);
- TFLiteInterpreter *interpreter_new = reinterpret_cast<TFLiteInterpreter *> (interperter);
+ TFLiteInterpreter *interpreter_new = reinterpret_cast<TFLiteInterpreter *> (interpreter);
if (core->reloadInterpreter (interpreter_new) != 0)
nns_loge ("Failed to replace interpreter");
}
* @brief The optional callback for GstTensorFilterFramework
* @param prop property of tensor_filter instance
* @param private_data : tensorflow lite plugin's private data
- * @param[out] info The dimesions and types of input tensors
+ * @param[out] info The dimensions and types of input tensors
*/
static int
tflite_getInputDim (const GstTensorFilterProperties *prop, void **private_data,
* @brief The optional callback for GstTensorFilterFramework
* @param prop property of tensor_filter instance
* @param private_data : tensorflow lite plugin's private data
- * @param[out] info The dimesions and types of output tensors
+ * @param[out] info The dimensions and types of output tensors
*/
static int
tflite_getOutputDim (const GstTensorFilterProperties *prop, void **private_data,
* @brief The optional callback for GstTensorFilterFramework
* @param prop property of tensor_filter instance
* @param private_data : tensorflow lite plugin's private data
- * @param in_info The dimesions and types of input tensors
- * @param[out] out_info The dimesions and types of output tensors
+ * @param in_info The dimensions and types of input tensors
+ * @param[out] out_info The dimensions and types of output tensors
* @detail Output Tensor info is recalculated based on the set Input Tensor Info
*/
static int
status = core->getOutputTensorDim (out_info);
if (status != 0) {
tflite_setInputDim_recovery (
- core, &cur_in_info, "while retreiving update output tensor info", 2);
+ core, &cur_in_info, "while retrieving updated output tensor info", 2);
goto exit;
}
void
tensorrt_subplugin::invoke (const GstTensorMemory *input, GstTensorMemory *output)
{
- /* If internal _inputBuffer is nullptr, tne allocate GPU memory */
+ /* If internal _inputBuffer is nullptr, then allocate GPU memory */
if (!_inputBuffer) {
if (allocBuffer (&_inputBuffer, input->size) != 0) {
ml_loge ("Failed to allocate GPU memory for input");
return -1;
}
- /* Create ExecutionContext obejct */
+ /* Create ExecutionContext object */
_Context = makeUnique (_Engine->createExecutionContext ());
if (!_Context) {
ml_loge ("Failed to create the TensorRT ExecutionContext object");
* http://www.vivantecorp.com/
* https://www.khadas.com/product-page/vim3 (Amlogic A311D with 5.0 TOPS NPU)
-## How to buid
+## How to build
First of all, you must generate a library of a model (e.g., libvivantev3.so, libyolov3.so) to use NNStreamer tensor filter.
For more details, please refer to the below repository.
* https://www.github.com/nnstreamer/reference-models (TODO) - Press 'models' folder.
#################### Build ################################################
if [[ $BUILD == 1 ]]; then
- echo -e "Compling source .........."
+ echo -e "Compiling source .........."
rm -rf ./build
meson -Denable-vivante=true build
ninja -C build
start_time=$( date +%s.%N )
$CMD
if [[ $? != 0 ]]; then
- echo -e "Oooops. The exectuion is failed. Pleasse fix a bug."
+ echo -e "Oops. The execution failed. Please fix the bug."
exit 1
fi
elapsed_time=$( date +%s.%N --date="$start_time seconds ago" )
gst_tensors_info_init (&pdata->output_tensor);
/** Note that we must use vsi_nn_GetTensor() to get a meta data
- * (e.g., input tensor and outout tensor).
+ * (e.g., input tensor and output tensor).
* ./linux_sdk/acuity-ovxlib-dev/lib/libovxlib.so
* ./linux_sdk/acuity-ovxlib-dev/include/vsi_nn_graph.h
* ./linux_sdk/acuity-ovxlib-dev/include/vsi_nn_tensor.h
gst_element_class_set_static_metadata (gstelement_class,
"TensorSrcGRPC", "Source/Network",
- "Receive nnstreamer protocal buffers as a gRPC server/client",
+ "Receive nnstreamer protocol buffers as a gRPC server/client",
"Dongju Chae <dongju.chae@samsung.com>");
/* GstBasrSrcClass */
GST_DEBUG_CATEGORY_INIT (gst_tensor_src_grpc_debug,
"tensor_src_grpc", 0,
- "src element to support protocal buffers as a gRPC server/client");
+ "src element to support protocol buffers as a gRPC server/client");
}
/**
#define GST_TYPE_TIZEN_SENSOR_TYPE (tizen_sensor_get_type ())
/**
* @brief Support GEnumValue array for Tizen sensor framework's sensor_type_e (sensor.h)
- * @todo We need an automated maintanence system for sensor.h's sensor_type_e, which makes a build error if it has been changed.
+ * @todo We need an automated maintenance system for sensor.h's sensor_type_e, which makes a build error if it has been changed.
*/
static GType
tizen_sensor_get_type (void)
}
/**
- * @brief Called when a buffer should be presented or ouput.
+ * @brief Called when a buffer should be presented or output.
*/
static GstFlowReturn
gst_data_repo_sink_render (GstBaseSink * bsink, GstBuffer * buffer)
* @brief GstDataRepoSrcClass data structure.
*/
struct _GstDataRepoSrcClass {
- GstPushSrcClass parent_calss;
+ GstPushSrcClass parent_class;
};
GType gst_data_repo_src_get_type (void);
#endif
GST_PLUGIN_DEFINE (GST_VERSION_MAJOR, GST_VERSION_MINOR, edge,
- "A collcetion of GStreamer plugins to support Edge",
+ "A collection of GStreamer plugins to support NNStreamer edge feature",
plugin_init, VERSION, "LGPL", PACKAGE,
"https://github.com/nnstreamer/nnstreamer")
FALSE, G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS));
g_object_class_install_property (gobject_class, PROP_CONNECTION_TIMEOUT,
g_param_spec_uint64 ("connection-timeout",
- "Timeout for wating a connection",
+ "Timeout for waiting for a connection",
- "The timeout (in milliseconds) for waiting a connection to receiver. "
+ "The timeout (in milliseconds) for waiting for a connection to the receiver. "
"0 timeout (default) means infinite wait.", 0, G_MAXUINT64, 0,
G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS));
return TRUE;
}
-/** Function defintions */
+/** Function definitions */
/**
* @brief Initialize GstMqttSrc object
*/
GstClockTime ulatency = GST_CLOCK_TIME_NONE;
GstClock *clock;
- /** This buffer is comming from the past. Drop it */
+ /** This buffer is coming from the past. Drop it. */
if (!_is_gst_buffer_timestamp_valid (*buf)) {
if (self->debug) {
GST_DEBUG_OBJECT (self,
* @see https://github.com/nnstreamer/nnstreamer
* @author Wook Song <wook16.song@samsung.com>
* @bug No known bugs except for NYI items
- * @todo Need to support cacheing and polling timer mechanism
+ * @todo Need to support caching and polling timer mechanism
*/
#include <errno.h>
goto ret_close_sockfd;
}
- /* Recieve */
+ /* Receive */
n = read (sockfd, &packet, sizeof (packet));
if (n < 0) {
ret = -errno;
} else {
/** @todo identify and printout the given input stream caps. */
GST_ERROR_OBJECT (self,
- "Tensor converter has an undefined behavior with type _NNS_MEDIA_ANY. It should've been custom-code or custom-script mode or a corrsponding external converter should've been registered (tensor_converter subplugin). However, nothing is available for the given input stream.");
+ "Tensor converter has an undefined behavior with type _NNS_MEDIA_ANY. It should've been custom-code or custom-script mode or a corresponding external converter should've been registered (tensor_converter subplugin). However, nothing is available for the given input stream.");
goto error;
}
self->do_not_append_header =
g_object_class_install_property (gobject_class, PROP_CONFIG,
g_param_spec_string ("config-file", "Configuration-file",
- "Path to configuraion file which contains plugins properties", "",
+ "Path to configuration file which contains plugins properties", "",
G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS));
gst_element_class_set_details_simple (gstelement_class,
/**
* @brief initialize the new element
* instantiate pads and add them to element
- * set pad calback functions
+ * set pad callback functions
* initialize instance structure
*/
static void
if (decoder == self->decoder) {
/* Already configured??? */
GST_WARNING_OBJECT (self,
- "nnstreamer tensor_decoder %s is already confgured.\n",
+ "nnstreamer tensor_decoder %s is already configured.\n",
mode_string);
} else {
/* Changing decoder. Deallocate the previous */
}
if (0 == self->decoder->init (&self->plugin_data)) {
- ml_loge ("Failed to intialize a decode subplugin, \"%s\".\n",
+ ml_loge ("Failed to initialize a decode subplugin, \"%s\".\n",
mode_string);
break;
}
self->tensor_config.info.num_tensors = num_mems;
}
num_tensors = self->tensor_config.info.num_tensors;
- /** Internal logic error. Negotation process should prevent this! */
+ /** Internal logic error. Negotiation process should prevent this! */
g_assert (num_mems == num_tensors);
for (i = 0; i < num_tensors; i++) {
/**
* @brief GstTensorDecoderClass inherits GstBaseTransformClass.
*
- * Referring another child (sibiling), GstVideoFilter (abstract class) and
+ * Referring to another child (sibling), GstVideoFilter (abstract class) and
* its child (concrete class) GstVideoConverter.
* Note that GstTensorDecoderClass is a concrete class; thus we need to look at both.
*/
| Mode | Main property (input tensor semantics) | Additional & mandatory property | Output |
| -| - | - | - |
| directvideo | other/tensors | N/A | video/x-raw |
-| bounding_boxes | Bounding boxes (other/tensor) | File path to labels, decoding schems, out dim, in dim | video/x-raw |
+| bounding_boxes | Bounding boxes (other/tensor) | File path to labels, decoding schemes, out dim, in dim | video/x-raw |
| image_labeling | Image label (other/tensor) | File path to labels | text/x-raw |
-| image_segment | segmentaion info | expected model | video/x-raw |
+| image_segment | segmentation info | expected model | video/x-raw |
| pose_estimation | pose info | out dim, in dim, File path to labels, mode | video/x-raw |
| flatbuf | other/tensors | N/A | flatbuffers |
| protobuf | other/tensors | N/A | protocol buffers |
# @file custom_decoder_example.py
## @brief User-defined custom decoder
class CustomDecoder(object):
-## @breif Python callback: getOutCaps
+## @brief Python callback: getOutCaps
def getOutCaps (self):
# Write capability of the media type.
return bytes('@CAPS_STRING@', 'UTF-8')
-## @breif Python callback: decode
+## @brief Python callback: decode
def decode (self, raw_data, in_info, rate_n, rate_d):
# return decoded raw data as `bytes` type.
return data
/**
* @brief initialize the new element
* instantiate pads and add them to element
- * set pad calback functions
+ * set pad callback functions
* initialize instance structure
*/
static void
/**
* @brief Parse caps and configure tensors info.
- * @param tensor_demux GstTensorDemux Ojbect
- * @param caps incomming capablity
+ * @param tensor_demux GstTensorDemux object
+ * @param caps incoming capability
* @return TRUE/FALSE (if successfully configured, return TRUE)
*/
static gboolean
* @param config tensor config to be filled
* @param nth source ordering
* @param total number of tensors
- * @return TRUE if succesfully configured
+ * @return TRUE if successfully configured
*/
static gboolean
gst_tensor_demux_get_tensor_config (GstTensorDemux * tensor_demux,
gst_tensors_config_init (config);
if (tensor_demux->tensorpick != NULL) {
- gchar *seleted_tensor;
+ gchar *selected_tensor;
gchar **strv;
guint i, num, idx;
g_assert (g_list_length (tensor_demux->tensorpick) >= nth);
- seleted_tensor = (gchar *) g_list_nth_data (tensor_demux->tensorpick, nth);
- strv = g_strsplit_set (seleted_tensor, ":+", -1);
+ selected_tensor = (gchar *) g_list_nth_data (tensor_demux->tensorpick, nth);
+ strv = g_strsplit_set (selected_tensor, ":+", -1);
num = g_strv_length (strv);
for (i = 0; i < num; i++) {
tensorpad = g_new0 (GstTensorPad, 1);
g_assert (tensorpad != NULL);
- GST_DEBUG_OBJECT (tensor_demux, "createing pad: %d(%dth)",
+ GST_DEBUG_OBJECT (tensor_demux, "creating pad: %d(%dth)",
tensor_demux->num_srcpads, nth);
name = g_strdup_printf ("src_%u", tensor_demux->num_srcpads);
/**
* @brief initialize the new element (GST Standard)
* instantiate pads and add them to element
- * set pad calback functions
+ * set pad callback functions
* initialize instance structure
*/
static void
for (i = 0; i < num; i++) {
val = g_ascii_strtoll (strv[i], NULL, 10);
if (errno == ERANGE) {
- ml_loge ("Overflow occured during converting %s to a gint64 value",
+ ml_loge ("Overflow occurred while converting %s to a gint64 value",
strv[i]);
}
*prop_list = g_list_append (*prop_list, GINT_TO_POINTER (val));
val = g_ascii_strtoll (strv[1], NULL, 10);
if (errno == ERANGE) {
- ml_loge ("Overflow occured during converting %s to a gint64 value",
+ ml_loge ("Overflow occurred while converting %s to a gint64 value",
strv[1]);
}
*prop_list = g_list_append (*prop_list, GINT_TO_POINTER (val));
/**
* @brief Parse caps and configure tensors info.
- * @param tensor_if GstTensorIf Ojbect
- * @param caps incomming capablity
+ * @param tensor_if GstTensorIf object
+ * @param caps incoming capability
* @return TRUE/FALSE (if successfully configured, return TRUE)
*/
static gboolean
tensorpad = g_new0 (GstTensorPad, 1);
g_assert (tensorpad != NULL);
- GST_DEBUG_OBJECT (tensor_if, "createing pad: %d(%dth)",
+ GST_DEBUG_OBJECT (tensor_if, "creating pad: %d(%dth)",
tensor_if->num_srcpads, nth);
name = g_strdup_printf ("src_%d", nth);
/**
* @brief initialize the new element
* instantiate pads and add them to element
- * set pad calback functions
+ * set pad callback functions
* initialize instance structure
*/
static void
}
/**
- * @brief Looping to generete outbut buffer for srcpad
+ * @brief Looping to generate output buffer for srcpad
* @param tensor_merge tensor merger
* @param tensor_buf output buffer for srcpad
* @param is_eos boolean EOS ( End of Stream )
/**
* @brief initialize the new element
* instantiate pads and add them to element
- * set pad calback functions
+ * set pad callback functions
* initialize instance structure
*/
static void
}
/**
- * @brief Looping to generete outbut buffer for srcpad
+ * @brief Looping to generate output buffer for srcpad
* @param tensor_mux tensor muxer
* @param tensors_buf output buffer for srcpad
* @param is_eos boolean EOS ( End of Stream )
g_return_val_if_fail (data != NULL, FALSE);
if (DBG)
- GST_DEBUG ("%dth RepoData : sink_chaned %d, src_changed %d\n", nth,
+ GST_DEBUG ("%dth RepoData : sink_changed %d, src_changed %d\n", nth,
data->sink_changed, data->src_changed);
if (is_sink) {
/**
* SECTION: element-tensor_reposink
*
- * Set elemnt to handle tensor repo
+ * Set element to handle tensor repo
*
* @file gsttensor_reposink.c
* @date 19 Nov 2018
gst_tensor_repo_init ();
- GST_DEBUG_OBJECT (self, "GstTensorRepo is sucessfully initailzed");
+ GST_DEBUG_OBJECT (self, "GstTensorRepo is successfully initialized");
self->silent = DEFAULT_SILENT;
self->signal_rate = DEFAULT_SIGNAL_RATE;
/**
* SECTION: element-tensor_reposrc
*
- * Pop elemnt to handle tensor repo
+ * Pop element to handle tensor repo
*
* @file gsttensor_reposrc.c
* @date 19 Nov 2018
if (new_caps && gst_caps_get_size (new_caps) == 1 && st
&& gst_structure_get_fraction (st, "framerate", &self->fps_n,
&self->fps_d)) {
- GST_INFO_OBJECT (self, "Seting framerate to %d/%d", self->fps_n,
+ GST_INFO_OBJECT (self, "Setting framerate to %d/%d", self->fps_n,
self->fps_d);
} else {
self->fps_n = -1;
((uint64_t *) output)[indices[i]] = ((uint64_t *) input)[i];
break;
default:
- nns_loge ("Error occured during get tensor value");
+ nns_loge ("Error occurred while getting the tensor value");
g_free (output);
goto done;
}
}
break;
default:
- nns_loge ("Error occured during get tensor value");
+ nns_loge ("Error occurred while getting the tensor value");
g_free (values);
g_free (indices);
goto done;
/**
* @brief initialize the new element
* instantiate pads and add them to element
- * set pad calback functions
+ * set pad callback functions
* initialize instance structure
*/
static void
/**
* @brief Set Caps in pad.
- * @param split GstTensorSplit Ojbect
- * @param caps incomming capablity
+ * @param split GstTensorSplit object
+ * @param caps incoming capability
* @return TRUE/FALSE (if successfully generate & set cap, return TRUE)
*/
static gboolean
tensorpad = g_new0 (GstTensorPad, 1);
g_assert (tensorpad != NULL);
- GST_DEBUG_OBJECT (split, "createing pad: %d(%dth)", split->num_srcpads, nth);
+ GST_DEBUG_OBJECT (split, "creating pad: %d(%dth)", split->num_srcpads, nth);
name = g_strdup_printf ("src_%u", split->num_srcpads);
pad = gst_pad_new_from_static_template (&src_templ, name);
}
/**
- * @brief Make Splited Tensor
- * @param split TensorSplit Object
+ * @brief Make splitted tensor
+ * @param split TensorSplit object
* @param buffer gstbuffer form src
- * @param nth orther of tensor
+ * @param nth order of tensor
- * @return return GstMemory for splited tensor
+ * @return return GstMemory for splitted tensor
*/
static GstMemory *
-gst_tensor_split_get_splited (GstTensorSplit * split, GstBuffer * buffer,
+gst_tensor_split_get_splitted (GstTensorSplit * split, GstBuffer * buffer,
gint nth)
{
GstMemory *mem;
srcpad = gst_tensor_split_get_tensor_pad (split, buf, &created, i);
outbuf = gst_buffer_new ();
- mem = gst_tensor_split_get_splited (split, buf, i);
+ mem = gst_tensor_split_get_splitted (split, buf, i);
gst_buffer_append_memory (outbuf, mem);
ts = GST_BUFFER_TIMESTAMP (buf);
#define PROCESS_SCANNED_DATA(DTYPE_UNSIGNED, DTYPE_SIGNED) \
/**
* @brief process scanned data to float based on type info from channel
- * @param[in] prop Proprty of the channel whose data is processed
+ * @param[in] prop Property of the channel whose data is processed
* @param[in] value Raw value scanned from the channel
* @returns processed value in float
*/ \
gboolean merge_channels_data; /**< merge channel data with same type/size */
gboolean is_tensor; /**< False if tensors is used for data */
guint buffer_capacity; /**< size of the buffer */
- guint64 sampling_frequency; /**< sampling frequncy for the device */
+ guint64 sampling_frequency; /**< sampling frequency for the device */
guint64 default_sampling_frequency; /**< default set value of sampling frequency */
guint default_buffer_capacity; /**< size of the buffer */
g_object_class_install_property (gobject_class, PROP_EPOCHS,
g_param_spec_uint ("epochs", "Number of epoch",
- "Epochs are repetitions of training samples and validation smaples, "
+ "Epochs are repetitions of training samples and validation samples, "
"number of samples received for model training is "
"(num-training-samples+num-validation-samples)*epochs", 0, G_MAXINT,
DEFAULT_PROP_EPOCHS,
/** app need to send gst_element_send_event(tensor_trainer, gst_event_new_eos())
after training_complete or set eos to datareposrc */
GST_WARNING_OBJECT (trainer,
- "Training is completed, buffer is dropped, please chagne state of pipeline");
+ "Training is completed, buffer is dropped, please change the state of the pipeline");
return GST_FLOW_OK;
}
/**
* @brief initialize the new element (G_DEFINE_TYPE requires this)
* instantiate pads and add them to element
- * set pad calback functions
+ * set pad callback functions
* initialize instance structure
*/
static void
float16_not_supported (void)
{
ml_loge
- ("Tensor_tranform does not support float16 operators. Apply -Denable-float16=true for meson build option if your architecture support float16. Note that tensor-transform's float16 is adhoc and does NOT perform good (slow!).\n");
+ ("Tensor_transform does not support float16 operators. Apply -Denable-float16=true as a meson build option if your architecture supports float16. Note that tensor-transform's float16 is ad hoc and does NOT perform well (slow!).\n");
g_assert (0);
}
#endif
}
/**
- * @brief subrouting for tensor-tranform, "dimchg" case.
+ * @brief subroutine for tensor-transform, "dimchg" case.
* @param[in/out] filter "this" pointer
* @param[in] in_info input tensor info
* @param[in] out_info output tensor info
}
/**
- * @brief subrouting for tensor-tranform, "typecast" case.
+ * @brief subroutine for tensor-transform, "typecast" case.
* @param[in/out] filter "this" pointer
* @param[in] in_info input tensor info
* @param[in] out_info output tensor info
}
/**
- * @brief subrouting for tensor-tranform, "arithmetic" case.
+ * @brief subroutine for tensor-transform, "arithmetic" case.
* @param[in/out] filter "this" pointer
* @param[in] in_info input tensor info
* @param[in] out_info output tensor info
} while(0);
/**
- * @brief subrouting for tensor-tranform, "transpose" case.
+ * @brief subroutine for tensor-transform, "transpose" case.
* @param[in/out] filter "this" pointer
* @param[in] in_info input tensor info
* @param[in] out_info output tensor info
}
/**
- * @brief subrouting for tensor-tranform, "stand" case.
+ * @brief subroutine for tensor-transform, "stand" case.
* : pixel = abs((pixel - average(tensor))/(std(tensor) + val))
* @param[in/out] filter "this" pointer
* @param[in] in_info input tensor info
}
/**
- * @brief subrouting for tensor-tranform, "clamp" case.
+ * @brief subroutine for tensor-transform, "clamp" case.
* : pixel = if (pixel > max) ? max :
* if (pixel < min) ? min : pixel
* @param[in/out] filter "this" pointer
}
/**
- * @brief subrouting for tensor-tranform, "padding" case.
+ * @brief subroutine for tensor-transform, "padding" case.
* @param[in/out] filter "this" pointer
* @param[in] in_info input tensor info
* @param[in] out_info output tensor info
/**
* @brief GstTensorTransformClass inherits GstBaseTransformClass.
*
- * Referring another child (sibiling), GstVideoFilter (abstract class) and
+ * Referring to another child (sibling), GstVideoFilter (abstract class) and
* its child (concrete class) GstVideoTransform.
* Note that GstTensorTransformClass is a concrete class; thus we need to look at both.
*/
* Unlike C-version, constructing an object will automatically
* register(probe) the subplugin for nnstreamer.
* Optional virtual functions (non pure virtual functions) may
- * be kept un-overriden if you don't support such.
+ * be kept un-overridden if you don't support such features.
* For getInput/Output and setInput, return -EINVAL if you don't
* support it.
*
class tensor_filter_subplugin
{
private: /** Derived classes should NEVER access these */
- const uint64_t sanity; /**< Checks if dlopened obejct is really tensor_filter_subplugin */
+ const uint64_t sanity; /**< Checks if dlopened object is really tensor_filter_subplugin */
static const GstTensorFilterFramework
fwdesc_template; /**< Template for fwdesc. Each subclass or object may
* @detail A derived class MUST register itself with this function in order
* to be available for nnstreamer pipelines, i.e., at its init().
* The derived class type should be the template typename.
- * @retval Returns an "emptyInstnace" of the derived class. It is recommended
+ * @retval Returns an "emptyInstance" of the derived class. It is recommended
* to keep the object and feed to the unregister function.
*/
template <typename T> static T *register_subplugin ()
typedef struct _GstTensorFilterProperties
{
const char *fwname; /**< The name of NN Framework */
- int fw_opened; /**< TRUE IF open() is called or tried. Use int instead of gboolean because this is refered by custom plugins. */
+ int fw_opened; /**< TRUE if open() is called or tried. Use int instead of gboolean because this is referred to by custom plugins. */
const char **model_files; /**< File path to the model file (as an argument for NNFW). char instead of gchar for non-glib custom plugins */
int num_models; /**< number of model files. Some frameworks need multiple model files to initialize the graph (caffe, caffe2) */
- int input_configured; /**< TRUE if input tensor is configured. Use int instead of gboolean because this is refered by custom plugins. */
+ int input_configured; /**< TRUE if input tensor is configured. Use int instead of gboolean because this is referred to by custom plugins. */
GstTensorsInfo input_meta; /**< configured input tensor info */
tensors_layout input_layout; /**< data layout info provided as a property to tensor_filter for the input, defaults to _NNS_LAYOUT_ANY for all the tensors */
unsigned int input_ranks[NNS_TENSOR_SIZE_LIMIT]; /**< the rank list of input tensors, it is calculated based on the dimension string. */
- int output_configured; /**< TRUE if output tensor is configured. Use int instead of gboolean because this is refered by custom plugins. */
+ int output_configured; /**< TRUE if output tensor is configured. Use int instead of gboolean because this is referred to by custom plugins. */
GstTensorsInfo output_meta; /**< configured output tensor info */
tensors_layout output_layout; /**< data layout info provided as a property to tensor_filter for the output, defaults to _NNS_LAYOUT_ANY for all the tensors */
unsigned int output_ranks[NNS_TENSOR_SIZE_LIMIT]; /**< the rank list of output tensors, it is calculated based on the dimension string. */
* Otherwise, NULL
*
* @retval strdup-ed env-var value
- * @param[in] name Environmetal variable name
+ * @param[in] name Environment variable name
*/
static gchar *
_strdup_getenv (const gchar * name)
}
/**
- * @brief initialzation for GstTensorAllocator
+ * @brief initialization for GstTensorAllocator
*/
static void
gst_tensor_allocator_init (GstTensorAllocator * allocator)
![tee-pipeline-img](./filter_tee.png)
#### Object detection using output combination option
-The orignal video frame is passed to output of tensor-filter using the property output-combination.
+The original video frame is passed to output of tensor-filter using the property output-combination.
- launch script
```
gst-launch-1.0 \
## Sub-Components
### Main ```tensor_filter.c```
-This is the main placeholder for all different subcomponents. With the property, ```FRAMEWORK```, this main component loads the proper subcomponent (e.g., tensorflow-lite support, custom support, or other addtional NNFW supports).
+This is the main placeholder for all different subcomponents. With the property, ```FRAMEWORK```, this main component loads the proper subcomponent (e.g., tensorflow-lite support, custom support, or other additional NNFW supports).
-The main component is supposed process the standard properties for subcomponents as well as processing the input/output dimensions.
+The main component is supposed to process the standard properties for subcomponents as well as processing the input/output dimensions.
-The subcomponents as supposed to fill in ```GstTensor_Filter_Framework``` struct and register it with ```supported``` array in ```tensor_filter.h```.
+The subcomponents are supposed to fill in the ```GstTensor_Filter_Framework``` struct and register it with the ```supported``` array in ```tensor_filter.h```.
-Note that the registering sturcture may be updated later. (We may follow what ```Linux.kernel/drivers/devfreq/devfreq.c``` does)
+Note that the registering structure may be updated later. (We may follow what ```Linux.kernel/drivers/devfreq/devfreq.c``` does)
### Tensorflow-lite support, ```tensor_filter_tensorflow_lite.cc```
This should fill in ```GstTensor_Filter_Framework``` supporting tensorflow_lite.
### Custom function support, ```tensor_filter_custom.c```
-Neural network and streameline developers may define their own tensor postprocessing operations with tensor_filter_custom.
+Neural network and pipeline developers may define their own tensor postprocessing operations with tensor_filter_custom.
-With ```nnstreamer-devel``` package installed at build time (e.g., ```BuildRequires: pkgconfig(nnstreamer)``` in .spec file), develerops can implement their own functions and expose their functions via ```NNStreamer_custom_class``` defined in ```tensor_fitler_custom.h```.
+With ```nnstreamer-devel``` package installed at build time (e.g., ```BuildRequires: pkgconfig(nnstreamer)``` in .spec file), developers can implement their own functions and expose them via ```NNStreamer_custom_class``` defined in ```tensor_filter_custom.h```.
-The resulting custom developer plugin should exist as a shared library (.so) with the symbol NNStreamer_custom exposed with all the func defined in NNStreamer_custom_class.
+The resulting custom plugin should exist as a shared library (.so) with the symbol NNStreamer_custom exposed, implementing all the functions defined in NNStreamer_custom_class.
* When 'tensor_filter' receives a throttling QoS event from the 'tensor_rate' element,
* it compares the average processing latency and throttling delay, and takes the
* maximum value as the threshold to drop incoming frames by checking a buffer timestamp.
- * In this way, 'tensor filter' can avoid unncessary calculation and adjust a framerate,
+ * In this way, 'tensor filter' can avoid unnecessary calculation and adjust a framerate,
* effectively reducing resource utilizations.
* Even in the case of receiving QoS events from multiple downstream pipelines (e.g., tee),
* 'tensor_filter' takes the minimum value as the throttling delay for downstream pipeline
/**
* @brief initialize the new element
* instantiate pads and add them to element
- * set pad calback functions
+ * set pad callback functions
* initialize instance structure
*/
static void
/* Internal Logic Error: out of bound */
if (index >= info->num_tensors) {
GST_ELEMENT_ERROR_BTRACE (self, STREAM, FAILED,
- ("tensor_filter's core has inconsistent data. Please report to https://github.com/nnstreamer/nnstreamer/issues . The index argeument (%u) of tensors is greater-than or equal-to the number of tensors (%u)",
+ ("tensor_filter's core has inconsistent data. Please report to https://github.com/nnstreamer/nnstreamer/issues . The index argument (%u) of tensors is greater than or equal to the number of tensors (%u)",
index, info->num_tensors));
return 0;
}
}
/**
- * @brief Check input paramters for gst_tensor_filter_transform ();
+ * @brief Check input parameters for gst_tensor_filter_transform ();
*/
static GstFlowReturn
_gst_tensor_filter_transform_validate (GstBaseTransform * trans,
}
if (gst_buffer_get_size (outbuf) != 0) {
GST_ELEMENT_ERROR_BTRACE (self, STREAM, FAILED,
- ("The output buffer for the isntance of tensor-filter subplugin (%s / %s) already has a content (buffer size = %zu). It should be 0.",
+ ("The output buffer for the instance of tensor-filter subplugin (%s / %s) already has content (buffer size = %zu). It should be 0.",
prop->fwname, TF_MODELNAME (prop), gst_buffer_get_size (outbuf)));
return GST_FLOW_ERROR;
}
/**
* @brief GstTensorFilterClass inherits GstBaseTransformClass.
*
- * Referring another child (sibiling), GstVideoFilter (abstract class) and
+ * Referring to another child (sibling), GstVideoFilter (abstract class) and
* its child (concrete class) GstVideoConverter.
* Note that GstTensorFilterClass is a concrete class; thus we need to look at both.
*/
FALSE, G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS));
g_object_class_install_property (gobject_class, PROP_CONFIG,
g_param_spec_string ("config-file", "Configuration-file",
- "Path to configuraion file which contains plugins properties", "",
+ "Path to configuration file which contains plugins properties", "",
G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS));
}
G_LOCK (shared_model_table);
if (shared_model_table) {
- GstTensorFilterSharedModelRepresenatation *rep;
+ GstTensorFilterSharedModelRepresentation *rep;
GList *value = g_hash_table_get_values (shared_model_table);
while (value) {
- rep = (GstTensorFilterSharedModelRepresenatation *) value->data;
+ rep = (GstTensorFilterSharedModelRepresentation *) value->data;
g_list_free (rep->referred_list);
value = g_list_next (value);
}
status = _gtfc_setprop_IS_UPDATABLE (priv, prop, &val);
g_value_unset (&val);
if (status != 0) {
- ml_logw ("Set propery is-updatable failed with error: %d", status);
+ ml_logw ("Setting property is-updatable failed with error: %d", status);
return status;
}
status = _gtfc_setprop_ACCELERATOR (priv, prop, &val);
g_value_unset (&val);
if (status != 0) {
- ml_logw ("Set propery accelerator failed with error: %d", status);
+ ml_logw ("Setting property accelerator failed with error: %d", status);
return status;
}
}
latency_mode = g_value_get_int (value);
if (latency_mode != 0 && latency_mode != 1) {
- ml_logw ("Invalid argument, nither 0 (OFF) nor 1 (ON).");
+ ml_logw ("Invalid argument; it is neither 0 (OFF) nor 1 (ON).");
return 0;
}
throughput_mode = g_value_get_int (value);
if (throughput_mode != 0 && throughput_mode != 1) {
- ml_logw ("Invalid argument, nither 0 (OFF) nor 1 (ON).");
+ ml_logw ("Invalid argument; it is neither 0 (OFF) nor 1 (ON).");
return 0;
}
{ACCL_AUTO, ACCL_AUTO_STR, ACCL_AUTO_STR},
{ACCL_CPU, ACCL_CPU_STR, ACCL_CPU_STR},
#if defined(__aarch64__) || defined(__arm__)
- /** Retreive NEON_STR when searching for SIMD/NEON on arm architectures */
+ /** Retrieve NEON_STR when searching for SIMD/NEON on arm architectures */
{ACCL_CPU_NEON, ACCL_CPU_NEON_STR, ACCL_CPU_NEON_STR},
#endif
{ACCL_CPU_SIMD, ACCL_CPU_SIMD_STR, ACCL_CPU_SIMD_STR},
const GstTensorFilterFramework *fw;
if (!name) {
- nns_logw ("Cannot check hw availability, given framwork name is NULL.");
+ nns_logw ("Cannot check hw availability, given framework name is NULL.");
return FALSE;
}
if ((fw = nnstreamer_filter_find (name)) == NULL) {
void *
nnstreamer_filter_shared_model_get (void *instance, const char *key)
{
- GstTensorFilterSharedModelRepresenatation *model_rep = NULL;
+ GstTensorFilterSharedModelRepresentation *model_rep = NULL;
G_LOCK (shared_model_table);
if (!shared_model_table) {
nnstreamer_filter_shared_model_insert_and_get (void *instance, char *key,
void *interpreter)
{
- GstTensorFilterSharedModelRepresenatation *model_rep;
+ GstTensorFilterSharedModelRepresentation *model_rep;
/* validate arguments */
if (!instance) {
interpreter = NULL;
goto done;
}
- model_rep = (GstTensorFilterSharedModelRepresenatation *)
- g_malloc0 (sizeof (GstTensorFilterSharedModelRepresenatation));
+ model_rep = (GstTensorFilterSharedModelRepresentation *)
+ g_malloc0 (sizeof (GstTensorFilterSharedModelRepresentation));
model_rep->shared_interpreter = interpreter;
model_rep->referred_list = g_list_append (model_rep->referred_list, instance);
g_hash_table_insert (shared_model_table, g_strdup (key),
nnstreamer_filter_shared_model_remove (void *instance, const char *key,
void (*free_callback) (void *))
{
- GstTensorFilterSharedModelRepresenatation *model_rep;
+ GstTensorFilterSharedModelRepresentation *model_rep;
int ret = FALSE;
/* search the table with key */
void *new_interpreter, void (*replace_callback) (void *, void *),
void (*free_callback) (void *))
{
- GstTensorFilterSharedModelRepresenatation *model_rep;
+ GstTensorFilterSharedModelRepresentation *model_rep;
GList *itr;
UNUSED (instance);
typedef struct {
void *shared_interpreter; /**< the model representation for each sub-plugins */
GList *referred_list; /**< the referred list about the instances sharing the same key */
-} GstTensorFilterSharedModelRepresenatation;
+} GstTensorFilterSharedModelRepresentation;
/**
* @brief Structure definition for common tensor-filter properties.
}
/**
- * @brief Called when the element starts processing, if fw not laoded
+ * @brief Called when the element starts processing, if fw not loaded
* @param self "this" pointer
* @return TRUE if there is no error.
*/
# This configuration file is to compile a test application
# using Gstreamer + NNstreamer library.
#
-# Step1: Build a test appliation based on nnstreamer for Android platform
+# Step1: Build a test application based on nnstreamer for Android platform
# ndk-build NDK_PROJECT_PATH=. APP_BUILD_SCRIPT=./Android-app.mk NDK_APPLICATION_MK=./Application.mk -j$(nproc)
#
# Step2: Install a test application into Android target device
android# cd /data
android# tar xvf *.tar
android# cd /data/nnstreamer
-android# {your_nnstreamer_applicaiton}
+android# {your_nnstreamer_application}
```
fi
if [ ! -d "tensorflow-${VERSION}/tensorflow/contrib/lite/downloads" ]; then
-#Download Dependencys
+#Download Dependencies
pushd "tensorflow-${VERSION}"
echo "[TENSORFLOW-LITE] Download external libraries of tensorflow-${VERSION}\n"
sed -i "s|flatbuffers/archive/master.zip|flatbuffers/archive/v1.8.0.zip|g" tensorflow/contrib/lite/download_dependencies.sh
endif
endforeach
-gst_api_verision = '1.0'
+gst_api_version = '1.0'
# Set install path
nnstreamer_prefix = get_option('prefix')
# join_paths drops first arg if second arg is absolute path.
# nnstreamer plugins path
-plugins_install_dir = join_paths(nnstreamer_libdir, 'gstreamer-' + gst_api_verision)
+plugins_install_dir = join_paths(nnstreamer_libdir, 'gstreamer-' + gst_api_version)
# nnstreamer sub-plugins path
if get_option('subplugindir') == ''
gobject_dep = dependency('gobject-2.0')
gmodule_dep = dependency('gmodule-2.0')
gio_dep = dependency('gio-2.0')
-gst_dep = dependency('gstreamer-' + gst_api_verision)
-gst_base_dep = dependency('gstreamer-base-' + gst_api_verision)
-gst_controller_dep = dependency('gstreamer-controller-' + gst_api_verision)
-gst_video_dep = dependency('gstreamer-video-' + gst_api_verision)
-gst_audio_dep = dependency('gstreamer-audio-' + gst_api_verision)
-gst_app_dep = dependency('gstreamer-app-' + gst_api_verision)
-gst_check_dep = dependency('gstreamer-check-' + gst_api_verision)
+gst_dep = dependency('gstreamer-' + gst_api_version)
+gst_base_dep = dependency('gstreamer-base-' + gst_api_version)
+gst_controller_dep = dependency('gstreamer-controller-' + gst_api_version)
+gst_video_dep = dependency('gstreamer-video-' + gst_api_version)
+gst_audio_dep = dependency('gstreamer-audio-' + gst_api_version)
+gst_app_dep = dependency('gstreamer-app-' + gst_api_version)
+gst_check_dep = dependency('gstreamer-check-' + gst_api_version)
libm_dep = cc.find_library('m') # cmath library
libdl_dep = cc.find_library('dl') # DL library
if (has_avx512fp16)
add_project_arguments(['-mavx512fp16'], language: ['c', 'cpp'])
- message ('Float16 for x86_64 enabled. Modern gcc-x64 genrally supports float16 with _Float16. -mavx512fp16 added for hardware acceleration')
+ message ('Float16 for x86_64 enabled. Modern gcc-x64 generally supports float16 with _Float16. -mavx512fp16 added for hardware acceleration')
else
warning ('Float16 for x86_64 enabled. However, software emulation is applied for fp16, making it slower and inconsistent. Use GCC 12+ for AVX512 FP16 support. This build will probably fail unless you bring a compiler that supports fp16 for x64.')
endif
endif
# See if src-iio can be built or not
-gst18_dep = dependency('gstreamer-' + gst_api_verision, version : '>=1.8', required : false)
+gst18_dep = dependency('gstreamer-' + gst_api_version, version : '>=1.8', required : false)
tensor_src_iio_build = false
if gst18_dep.found() and build_platform != 'macos'
add_project_arguments('-D_ENABLE_SRC_IIO', language: ['c', 'cpp'])
}
/**
- * @brief Test for getting config from strucrure with invalid param.
+ * @brief Test for getting config from structure with invalid param.
*/
-TEST (commonTensorsConfig, fromStructreInvalidParam0_n)
+TEST (commonTensorsConfig, fromStructureInvalidParam0_n)
{
GstStructure structure = { 0 };
}
/**
- * @brief Test for getting config from strucrure with invalid param.
+ * @brief Test for getting config from structure with invalid param.
*/
-TEST (commonTensorsConfig, fromStructreInvalidParam1_n)
+TEST (commonTensorsConfig, fromStructureInvalidParam1_n)
{
GstTensorsConfig conf;
gst_tensors_config_init (&conf);
static char *path_to_lib = NULL;
-/** @brief Positive case for the simpliest execution path */
+/** @brief Positive case for the simplest execution path */
TEST (cppFilterOnDemand, basic01)
{
filter_basic basic ("basic_01");
EXPECT_EQ (basic._unregister (), 0);
}
-/** @brief Negative case for the simpliest execution path */
+/** @brief Negative case for the simplest execution path */
TEST (cppFilterOnDemand, basic02_n)
{
filter_basic basic ("basic_02");
EXPECT_NE (basic._unregister (), 0);
}
-/** @brief Negative case for the simpliest execution path w/ static calls */
+/** @brief Negative case for the simplest execution path w/ static calls */
TEST (cppFilterOnDemand, basic03_n)
{
filter_basic basic ("basic_03");
EXPECT_NE (filter_basic::__unregister ("basic_03"), 0);
}
-/** @brief Negative case for the simpliest execution path w/ static calls */
+/** @brief Negative case for the simplest execution path w/ static calls */
TEST (cppFilterOnDemand, basic04_n)
{
filter_basic basic ("basic_04");
EXPECT_EQ (basic._unregister (), 0);
}
-/** @brief Negative case for the simpliest execution path */
-TEST (cppFilterOnDemand, unregstered01_n)
+/** @brief Negative case for the simplest execution path */
+TEST (cppFilterOnDemand, unregistered01_n)
{
filter_basic basic ("basic_01");
gchar *str_pipeline = g_strdup_printf (
~GstMqttTestHelper (){};
/**
- * @brief Initialize this class instead of explcit constuctors
+ * @brief Initialize this class instead of explicit constructors
*/
void init (void *ctx)
{
}
private:
- /* Variables for instance mangement */
+ /* Variables for instance management */
static std::unique_ptr<GstMqttTestHelper> mInstance;
static std::once_flag mOnceFlag;
/**
* @brief Test for ntp util to get epoch.
*/
-TEST_F (ntpUtilMockTest, getEpochIvalidTimestamp)
+TEST_F (ntpUtilMockTest, getEpochInvalidTimestamp)
{
int64_t ret;
test('unittest_common', unittest_common, env: testenv)
# Run unittest_sink
- gst18_dep = dependency('gstreamer-' + gst_api_verision, version : '>=1.8', required : false)
+ gst18_dep = dependency('gstreamer-' + gst_api_version, version : '>=1.8', required : false)
if gst18_dep.found()
unittest_sink = executable('unittest_sink',
join_paths('nnstreamer_sink', 'unittest_sink.cc'),
/**
* @brief Test for plugin registration
*/
-TEST (tensorConverter, subpluginNoraml)
+TEST (tensorConverter, subpluginNormal)
{
NNStreamerExternalConverter *sub = get_default_external_converter ("mode");
}
/**
- * @brief Test data for tensor_conveter::flexbuf (dimension 24:1:1:1)
+ * @brief Test data for tensor_converter::flexbuf (dimension 24:1:1:1)
*/
const gint _test_frames1[24]
= { 1101, 1102, 1103, 1104, 1105, 1106, 1107, 1108, 1109, 1110, 1111, 1112,
1113, 1114, 1115, 1116, 1117, 1118, 1119, 1120, 1121, 1122, 1123, 1124 };
/**
- * @brief Test data for tensor_conveter::flexbuf (dimension 48:1:1:1)
+ * @brief Test data for tensor_converter::flexbuf (dimension 48:1:1:1)
*/
const gint _test_frames2[48] = { 1101, 1102, 1103, 1104, 1105, 1106, 1107, 1108,
1109, 1110, 1111, 1112, 1113, 1114, 1115, 1116, 1117, 1118, 1119, 1120, 1121,
convertBMP2PNG
PATH_TO_PLUGIN="../../build"
-# Check python libraies are built
+# Check python libraries are built
if [[ -d $PATH_TO_PLUGIN ]]; then
ini_path="${PATH_TO_PLUGIN}/ext/nnstreamer/tensor_converter"
if [[ -d ${ini_path} ]]; then
/* set invalid param */
g_object_set (GST_OBJECT (datareposink), "json", NULL, NULL);
- /* state chagne failure is expected */
+ /* state change failure is expected */
EXPECT_NE (setPipelineStateSync (pipeline, GST_STATE_PLAYING, UNITTEST_STATECHANGE_TIMEOUT), 0);
gst_object_unref (datareposink);
/* set invalid param */
g_object_set (GST_OBJECT (datareposink), "location", NULL, NULL);
- /* state chagne failure is expected */
+ /* state change failure is expected */
EXPECT_NE (setPipelineStateSync (pipeline, GST_STATE_PLAYING, UNITTEST_STATECHANGE_TIMEOUT), 0);
gst_object_unref (datareposink);
/* set invalid param */
g_object_set (GST_OBJECT (datareposrc), "json", NULL, NULL);
- /* state chagne failure is expected */
+ /* state change failure is expected */
EXPECT_NE (setPipelineStateSync (pipeline, GST_STATE_PLAYING, UNITTEST_STATECHANGE_TIMEOUT), 0);
gst_element_set_state (pipeline, GST_STATE_NULL);
/* set invalid param */
g_object_set (GST_OBJECT (datareposrc), "json", "no_search_file", NULL);
- /* state chagne failure is expected */
+ /* state change failure is expected */
EXPECT_NE (setPipelineStateSync (pipeline, GST_STATE_PLAYING, UNITTEST_STATECHANGE_TIMEOUT), 0);
setPipelineStateSync (pipeline, GST_STATE_NULL, UNITTEST_STATECHANGE_TIMEOUT);
/* set invalid param */
g_object_set (GST_OBJECT (datareposrc), "location", NULL, NULL);
- /* state chagne failure is expected */
+ /* state change failure is expected */
EXPECT_NE (setPipelineStateSync (pipeline, GST_STATE_PLAYING, UNITTEST_STATECHANGE_TIMEOUT), 0);
gst_element_set_state (pipeline, GST_STATE_NULL);
/* set invalid param */
g_object_set (GST_OBJECT (datareposrc), "location", "no_search_file", NULL);
- /* state chagne failure is expected */
+ /* state change failure is expected */
EXPECT_NE (setPipelineStateSync (pipeline, GST_STATE_PLAYING, UNITTEST_STATECHANGE_TIMEOUT), 0);
gst_element_set_state (pipeline, GST_STATE_NULL);
/* set invalid param */
g_object_set (GST_OBJECT (datareposrc), "caps", NULL, NULL);
- /* state chagne failure is expected */
+ /* state change failure is expected */
EXPECT_NE (setPipelineStateSync (pipeline, GST_STATE_PLAYING, UNITTEST_STATECHANGE_TIMEOUT), 0);
gst_element_set_state (pipeline, GST_STATE_NULL);
/* set invalid param */
g_object_set (GST_OBJECT (datareposrc), "start-sample-index", idx_out_of_range, NULL);
- /* state chagne failure is expected */
+ /* state change failure is expected */
EXPECT_NE (setPipelineStateSync (pipeline, GST_STATE_PLAYING, UNITTEST_STATECHANGE_TIMEOUT), 0);
gst_element_set_state (pipeline, GST_STATE_NULL);
g_object_set (GST_OBJECT (datareposrc), "stop-sample-index", idx_out_of_range, NULL);
- /* state chagne failure is expected */
+ /* state change failure is expected */
EXPECT_NE (setPipelineStateSync (pipeline, GST_STATE_PLAYING, UNITTEST_STATECHANGE_TIMEOUT), 0);
gst_element_set_state (pipeline, GST_STATE_NULL);
/* set invalid param */
g_object_set (GST_OBJECT (datareposrc), "epochs", invalid_epochs, NULL);
- /* state chagne failure is expected */
+ /* state change failure is expected */
EXPECT_NE (setPipelineStateSync (pipeline, GST_STATE_PLAYING, UNITTEST_STATECHANGE_TIMEOUT), 0);
gst_element_set_state (pipeline, GST_STATE_NULL);
/* set invalid param */
g_object_set (GST_OBJECT (datareposrc), "tensors-sequence", "1,0,2", NULL);
- /* state chagne failure is expected */
+ /* state change failure is expected */
EXPECT_NE (setPipelineStateSync (pipeline, GST_STATE_PLAYING, UNITTEST_STATECHANGE_TIMEOUT), 0);
gst_element_set_state (pipeline, GST_STATE_NULL);
callCompareTest testcase2.con.log testcase2.dec.log 2 "Compare for case 2" 0 0
# Expand the unit test coverage
-## Set Property woth option*
+## Set Property with option*
gstTest "--gst-plugin-path=${PATH_TO_PLUGIN} multifilesrc location=\"testsequence_%1d.png\" index=0 caps=\"image/png,framerate=\(fraction\)30/1\" ! pngdec ! videoconvert ! video/x-raw, format=RGB ! tensor_converter ! tee name=t ! queue ! tensor_decoder mode=direct_video option1=\"nothing\" option2=\"else\" option3=\"matters\" option4=\"whattheheck=is\" option5=\"goingon=idontknow\" option6=\"whydowehave\" option7=\"somany\" option8=\"options\" option9=\"whydontyouguess\" ! filesink location=\"testcase3.dec.log\" sync=true t. ! queue ! filesink location=\"testcase3.con.log\" sync=true" 3 0 0 $PERFORMANCE
callCompareTest testcase3.con.log testcase3.dec.log 3 "Compare for case 3" 0 0
/**
* @brief Test for plugin registration
*/
-TEST (tensorDecoder, subpluginNoraml)
+TEST (tensorDecoder, subpluginNormal)
{
GstTensorDecoderDef *sub = get_default_decoder ("mode");
convertBMP2PNG
PATH_TO_PLUGIN="../../build"
-# Check python libraies are built
+# Check python libraries are built
if [[ -d $PATH_TO_PLUGIN ]]; then
ini_path="${PATH_TO_PLUGIN}/ext/nnstreamer/tensor_decoder"
if [[ -d ${ini_path} ]]; then
kill -9 $pid &> /dev/null
wait $pid
-# Sever src cap: Video, Server sink cap: Viedo test
+# Server src cap: Video, Server sink cap: Video test
PORT=`python3 ../../get_available_port.py`
gstTestBackground "--gst-plugin-path=${PATH_TO_PLUGIN} tensor_query_serversrc port=${PORT} ! video/x-raw,width=300,height=300,format=RGB,framerate=0/1 ! tensor_query_serversink async=false" 6-1 1 0 30
pid=$!
* @see nnstreamer_customfilter_example_scaler_allocator.c
*
* This example scales an input tensor of [N][input_h][input_w][M]
- * to an ouput tensor of [N][output_h][output_w][M].
+ * to an output tensor of [N][output_h][output_w][M].
*
* The custom property is to be given as, "custom=[new-x]x[new-y]", where new-x
* and new-y are unsigned integers. E.g., custom=640x480
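The `custom=[new-x]x[new-y]` format described above can be parsed as in this minimal Python sketch; the real scaler parses it in C, and `parse_scale_option` is a hypothetical helper name:

```python
def parse_scale_option(option: str):
    # Hypothetical helper: parse "custom=640x480" into (640, 480).
    value = option.split("=", 1)[1] if "=" in option else option
    new_x, new_y = value.split("x", 1)
    return int(new_x), int(new_y)

print(parse_scale_option("custom=640x480"))  # (640, 480)
```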
result = RUN_ALL_TESTS ();
} catch (...) {
- g_warning ("Catched exception, GTest failed.");
+ g_warning ("Caught exception, GTest failed.");
}
return result;
testInit $1
PATH_TO_PLUGIN="../../build"
-# Check lua libraies are built
+# Check lua libraries are built
if [[ -d $PATH_TO_PLUGIN ]]; then
ini_path="${PATH_TO_PLUGIN}/ext/nnstreamer/tensor_filter"
if [[ -d ${ini_path} ]]; then
}
/**
- * @brief Default constuctor. Note that explicit invocation of init () is always
+ * @brief Default constructor. Note that explicit invocation of init () is always
* required after getting the instance.
*/
NCSDKTensorFilterTestHelper::NCSDKTensorFilterTestHelper ()
#include <mutex>
-enum _contants {
+enum _constants {
TENSOR_RANK_LIMIT = 4,
SUPPORT_MAX_NUMS_DEVICES = 8,
};
-/* Dimension inforamtion of Google LeNet */
+/* Dimension information of Google LeNet */
enum _google_lenet {
GOOGLE_LENET_IN_DIM_C = 3,
GOOGLE_LENET_IN_DIM_W = 224,
ncStatus_t ncFifoRemoveElem (struct ncFifoHandle_t *fifoHandle); /* not supported yet */
private:
- /* Variables for instance mangement */
+ /* Variables for instance management */
static std::unique_ptr<NCSDKTensorFilterTestHelper> mInstance;
static std::once_flag mOnceFlag;
return fhandle.good ();
}
-/** @brief define the data type for NDArray, aliged with the definition in mshadow/base.h */
+/** @brief define the data type for NDArray, aligned with the definition in mshadow/base.h */
enum TypeFlag {
kFloat32 = 0,
kFloat64 = 1,
}
// Load the model
LoadModel (model_json_file);
- // Initilize the parameters
+ // Initialize the parameters
LoadParameters (model_params_file);
int dtype = GetDataLayerType ();
}
/**
- * @brief split loaded param map into arg parm and aux param with target context
+ * @brief split loaded param map into arg param and aux param with target context
*/
void
Predictor::SplitParamMap (const std::map<std::string, NDArray> ¶mMap,
TEST (tensorFilterOpenvino, convertFromIETypeStr0)
{
const gchar *root_path = g_getenv ("NNSTREAMER_SOURCE_ROOT_PATH");
- const std::vector<std::string> ie_suport_type_strs = {
+ const std::vector<std::string> ie_support_type_strs = {
"I8",
"I16",
"I32",
{
TensorFilterOpenvinoTest tfOvTest (str_test_model.assign (test_model_xml),
str_test_model.assign (test_model_bin));
- for (size_t i = 0; i < ie_suport_type_strs.size (); ++i) {
+ for (size_t i = 0; i < ie_support_type_strs.size (); ++i) {
tensor_type ret_type;
- ret_type = tfOvTest.convertFromIETypeStr (ie_suport_type_strs[i]);
+ ret_type = tfOvTest.convertFromIETypeStr (ie_support_type_strs[i]);
EXPECT_EQ (ret_type, nns_support_types[i]);
}
}
TEST (tensorFilterOpenvino, convertFromIETypeStr0_n)
{
const gchar *root_path = g_getenv ("NNSTREAMER_SOURCE_ROOT_PATH");
- const std::vector<std::string> ie_not_suport_type_strs = {
+ const std::vector<std::string> ie_not_support_type_strs = {
"F64",
};
const std::vector<tensor_type> nns_support_types = {
{
TensorFilterOpenvinoTest tfOvTest (str_test_model.assign (test_model_xml),
str_test_model.assign (test_model_bin));
- for (size_t i = 0; i < ie_not_suport_type_strs.size (); ++i) {
+ for (size_t i = 0; i < ie_not_support_type_strs.size (); ++i) {
tensor_type ret_type;
- ret_type = tfOvTest.convertFromIETypeStr (ie_not_suport_type_strs[i]);
+ ret_type = tfOvTest.convertFromIETypeStr (ie_not_support_type_strs[i]);
EXPECT_NE (ret_type, nns_support_types[i]);
}
}
TEST (tensorFilterOpenvino, convertFromIETypeStr1_n)
{
const gchar *root_path = g_getenv ("NNSTREAMER_SOURCE_ROOT_PATH");
- const std::string ie_suport_type_str ("Q78");
+ const std::string ie_support_type_str ("Q78");
std::string str_test_model;
gchar *test_model_xml;
gchar *test_model_bin;
str_test_model.assign (test_model_bin));
tensor_type ret_type;
- ret_type = tfOvTest.convertFromIETypeStr (ie_suport_type_str);
+ ret_type = tfOvTest.convertFromIETypeStr (ie_support_type_str);
EXPECT_EQ (_NNS_END, ret_type);
}
testInit $1
PATH_TO_PLUGIN="../../build"
-# Check python libraies are built
+# Check python libraries are built
if [[ -d $PATH_TO_PLUGIN ]]; then
ini_path="${PATH_TO_PLUGIN}/ext/nnstreamer/tensor_filter"
if [[ -d ${ini_path} ]]; then
}
/**
- * @brief Main function to evalute tensor_filter's model reload functionality
+ * @brief Main function to evaluate tensor_filter's model reload functionality
* @note feed the same input image to the tensor filter; So, even if a detection model
* is updated (mobilenet v1 <-> v2), the output should be the same for all frames.
*/
output.size = input.size = sizeof (float) * 1;
- /* Test: unsucessful invoke */
+ /* Test: unsuccessful invoke */
ret = sp->invoke (NULL, NULL, NULL, &input, &output);
EXPECT_NE (ret, 0);
callCompareTest testsynch05_8.golden testsynch06_8.log 17-9 "Compare 17-9" 1 0
callCompareTest testsynch05_9.golden testsynch06_9.log 17-10 "Compare 17-10" 1 0
-# Test Case for sync-option=0 without duration. If it does not set, then it use pts(n+1) - pts(n) as base duration. If there are pts(n-1) and pts(n) avaiable within duration condition, it always take pts(n).
+# Test Case for sync-option=0 without duration. If it is not set, then it uses pts(n+1) - pts(n) as the base duration. If both pts(n-1) and pts(n) are available within the duration condition, it always takes pts(n).
# For this test case, outputs are generated every 1000000000 nsec, and they are [0,0],[1000000000,133333332], [2000000000,2333333332], [3000000000,2999999997]. The reason last one is 2999999997 instead of 3333333332 is EOS of basepad.
gstTest "--gst-plugin-path=${PATH_TO_PLUGIN} tensor_merge name=merge mode=linear option=2 silent=true sync-mode=basepad sync-option=0 ! multifilesink location=testsynch07_%1d.log multifilesrc location=\"testsequence03_%1d.png\" index=0 caps=\"image/png, framerate=(fraction)10/1\" ! pngdec ! tensor_converter ! merge.sink_0 multifilesrc location=\"testsequence03_%1d.png\" index=0 caps=\"image/png, framerate=(fraction)30/1\" ! pngdec ! tensor_converter ! merge.sink_1" 18 0 0 $PERFORMANCE
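The selection rule the comments above describe — a default duration of pts(n+1) - pts(n), preferring pts(n) over pts(n-1) when both fall within the duration window — can be sketched roughly as follows. This is a simplified model for illustration, not the actual tensor_merge implementation:

```python
def pick_pts(base_pts, candidate_pts, duration):
    # Simplified model of basepad sync-option=0: among candidate
    # timestamps within `duration` of the base pad's pts, take the
    # latest one, i.e. prefer pts(n) over pts(n-1).
    in_window = [p for p in candidate_pts if abs(p - base_pts) <= duration]
    return max(in_window) if in_window else None

# Both 900000000 and 1000000000 fall inside the window; the later wins.
print(pick_pts(1000000000, [900000000, 1000000000, 2000000000], 100000000))
```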
} while (0)
/**
- * @brief Macro for check errorneous pipeline
+ * @brief Macro for checking an erroneous pipeline
*/
#define TEST_TENSOR_FILTER_AUTO_OPTION_N(gstpipe, fw_name) \
do { \
}
/**
- * @brief Test framework auto detecion option in tensor-filter.
+ * @brief Test framework auto detection option in tensor-filter.
*/
TEST_REQUIRE_TFLITE (testTensorFilter, frameworkAutoExtTFlite01)
{
}
/**
- * @brief Test framework auto detecion option in tensor-filter.
+ * @brief Test framework auto detection option in tensor-filter.
* @details The order of tensor filter options has changed.
*/
TEST_REQUIRE_TFLITE (testTensorFilter, frameworkAutoExtTFlite02)
}
/**
- * @brief Test framework auto detecion option in tensor-filter.
+ * @brief Test framework auto detection option in tensor-filter.
* @details Test if options are insensitive to the case
*/
TEST_REQUIRE_TFLITE (testTensorFilter, frameworkAutoExtTFlite03)
}
/**
- * @brief Test framework auto detecion option in tensor-filter.
+ * @brief Test framework auto detection option in tensor-filter.
* @details Negative case when model file does not exist
*/
TEST_REQUIRE_TFLITE (testTensorFilter, frameworkAutoExtTFliteModelNotFound_n)
}
/**
- * @brief Test framework auto detecion option in tensor-filter.
+ * @brief Test framework auto detection option in tensor-filter.
* @details Negative case with not supported extension
*/
TEST_REQUIRE_TFLITE (testTensorFilter, frameworkAutoExtTFliteNotSupportedExt_n)
}
/**
- * @brief Test framework auto detecion option in tensor-filter.
+ * @brief Test framework auto detection option in tensor-filter.
* @details Negative case when permission of model file is not given.
*/
TEST_REQUIRE_TFLITE (testTensorFilter, frameworkAutoExtTFliteNoPermission_n)
}
/**
- * @brief Test framework auto detecion option in tensor-filter.
+ * @brief Test framework auto detection option in tensor-filter.
* @details Negative case with invalid framework name
*/
TEST_REQUIRE_TFLITE (testTensorFilter, frameworkAutoExtTFliteInvalidFWName_n)
}
/**
- * @brief Test framework auto detecion option in tensor-filter.
+ * @brief Test framework auto detection option in tensor-filter.
* @details Negative case with invalid dimension of tensor filter
*/
TEST_REQUIRE_TFLITE (testTensorFilter, frameworkAutoExtTFliteWrongDimension_n)
}
/**
- * @brief Test framework auto detecion option in tensor-filter.
+ * @brief Test framework auto detection option in tensor-filter.
* @details Negative case with invalid input type of tensor filter
*/
TEST_REQUIRE_TFLITE (testTensorFilter, frameworkAutoExtTFliteWrongInputType_n)
}
/**
- * @brief Test framework auto detecion without specifying the option in tensor-filter.
+ * @brief Test framework auto detection without specifying the option in tensor-filter.
*/
TEST_REQUIRE_TFLITE (testTensorFilter, frameworkAutoNoFw)
{
}
/**
- * @brief Test framework auto detecion option in tensor-filter.
+ * @brief Test framework auto detection option in tensor-filter.
* @details Negative case when model file does not exist
*/
TEST_REQUIRE_TFLITE (testTensorFilter, frameworkAutoNoFwModelNotFound_n)
}
/**
- * @brief Test framework auto detecion without specifying the option in tensor-filter.
+ * @brief Test framework auto detection without specifying the option in tensor-filter.
* @details Negative case with not supported extension
*/
TEST_REQUIRE_TFLITE (testTensorFilter, frameworkAutoNoFwNotSupportedExt_n)
}
/**
- * @brief Test framework auto detecion without specifying the option in tensor-filter.
+ * @brief Test framework auto detection without specifying the option in tensor-filter.
* @details Negative case when permission of model file is not given.
*/
TEST_REQUIRE_TFLITE (testTensorFilter, frameworkAutoNoFwNoPermission_n)
#if !defined(ENABLE_TENSORFLOW_LITE) && !defined(ENABLE_TENSORFLOW2_LITE) \
&& defined(ENABLE_NNFW_RUNTIME)
/**
- * @brief Test framework auto detecion option in tensor-filter.
+ * @brief Test framework auto detection option in tensor-filter.
* @details Check if nnfw (second priority) is detected automatically
*/
TEST (testTensorFilter, frameworkAutoExtTfliteNnfw04)
}
/**
- * @brief Test framework auto detecion without specifying the option in tensor-filter.
+ * @brief Test framework auto detection without specifying the option in tensor-filter.
* @details Check if nnfw (second priority) is detected automatically
*/
TEST (testTensorFilter, frameworkAutoWoOptExtTfliteNnfw)
#ifdef ENABLE_TENSORFLOW
/**
- * @brief Test framework auto detecion option in tensor-filter.
+ * @brief Test framework auto detection option in tensor-filter.
- * @details Check if tensoflow is detected automatically
+ * @details Check if tensorflow is detected automatically
*/
TEST (testTensorFilter, frameworkAutoExtPb01)
}
/**
- * @brief Test framework auto detecion without specifying the option in tensor-filter.
+ * @brief Test framework auto detection without specifying the option in tensor-filter.
- * @details Check if tensoflow is detected automatically
+ * @details Check if tensorflow is detected automatically
*/
TEST (testTensorFilter, frameworkAutoWoOptExtPb)
}
#else
/**
- * @brief Test framework auto detecion option in tensor-filter.
+ * @brief Test framework auto detection option in tensor-filter.
- * @details Negative case whtn tensorflow is not enabled
+ * @details Negative case when tensorflow is not enabled
*/
TEST (testTensorFilter, frameworkAutoExtPbTfDisabled_n)
}
/**
- * @brief Test framework auto detecion without specifying the option in tensor-filter.
+ * @brief Test framework auto detection without specifying the option in tensor-filter.
- * @details Negative case whtn tensorflow is not enabled
+ * @details Negative case when tensorflow is not enabled
*/
TEST (testTensorFilter, frameworkAutoWoOptExtPbTfDisabled_n)
#ifdef ENABLE_CAFFE2
/**
- * @brief Test framework auto detecion option in tensor-filter.
+ * @brief Test framework auto detection option in tensor-filter.
* @details Check if caffe2 is detected automatically
*/
TEST (testTensorFilter, frameworkAutoExtPb03)
#else
/**
- * @brief Test framework auto detecion option in tensor-filter.
+ * @brief Test framework auto detection option in tensor-filter.
* @details Check if caffe2 is not enabled
*/
TEST (testTensorFilter, frameworkAutoExtPbCaffe2Disabled_n)
#ifdef ENABLE_PYTORCH
/**
- * @brief Test framework auto detecion option in tensor-filter.
+ * @brief Test framework auto detection option in tensor-filter.
* @details Check if pytorch is detected automatically
*/
TEST (testTensorFilter, frameworkAutoExtPt01)
}
/**
- * @brief Test framework auto detecion without specifying the option in tensor-filter.
+ * @brief Test framework auto detection without specifying the option in tensor-filter.
* @details Check if pytorch is detected automatically
*/
TEST (testTensorFilter, frameworkAutoWoOptExtPt01)
#else
/**
- * @brief Test framework auto detecion option in tensor-filter.
+ * @brief Test framework auto detection option in tensor-filter.
* @details Check if pytorch is not enabled
*/
TEST (testTensorFilter, frameworkAutoExtPtPytorchDisabled_n)
}
/**
- * @brief Test framework auto detecion without specifying the option in tensor-filter.
+ * @brief Test framework auto detection without specifying the option in tensor-filter.
* @details Check if pytorch is not enabled
*/
TEST (testTensorFilter, frameworkAutoWoOptExtPtPytorchDisabled_n)
/**
* @brief Test tensor_rate set property stats (negative)
*/
-TEST_F (NNSRateTest, setProperyStats_n)
+TEST_F (NNSRateTest, setPropertyStats_n)
{
guint64 in, out, dup, drop;
}
/**
- * @brief Test tensor_rate set invalide framerate (negative)
+ * @brief Test tensor_rate set invalid framerate (negative)
*/
-TEST_F (NNSRateTest, setProperyInvalidFramerate_n)
+TEST_F (NNSRateTest, setPropertyInvalidFramerate_n)
{
gchar *framerate;
/**
- * @brief Main function to evalute tensor_repo dynamicity
+ * @brief Main function to evaluate tensor_repo dynamicity
*/
int
main (int argc, char *argv[])
TEST_TYPE_ISSUE739_MERGE_PARALLEL_4, /**< pipeline to test Merge/Parallel case in #739 */
TEST_TYPE_DECODER_PROPERTY, /**< pipeline to test get/set_property of decoder */
TEST_CUSTOM_EASY_ICF_01, /**< pipeline to test easy-custom in code func */
- TEST_TYPE_UNKNOWN /**< unknonwn */
+ TEST_TYPE_UNKNOWN /**< unknown */
} TestType;
/**
option.num_buffers, option.num_buffers, option.num_buffers);
break;
case TEST_TYPE_TENSOR_CAP_1:
- /** other/tensor out, caps are specifed*/
+ /** other/tensor out, caps are specified */
str_pipeline = g_strdup_printf (
"videotestsrc num-buffers=%d ! videoconvert ! video/x-raw,width=160,height=120,format=RGB,framerate=(fraction)%lu/1 ! "
"tensor_converter ! other/tensor,format=static ! tensor_sink name=test_sink async=false",
option.num_buffers, fps);
break;
case TEST_TYPE_TENSOR_CAP_2:
- /** other/tensor out, caps are not specifed (other/tensor or other/tensors) */
+ /** other/tensor out, caps are not specified (other/tensor or other/tensors) */
str_pipeline = g_strdup_printf (
"videotestsrc num-buffers=%d ! videoconvert ! video/x-raw,width=160,height=120,format=RGB,framerate=(fraction)%lu/1 ! "
"tensor_converter ! tensor_sink name=test_sink async=false",
option.num_buffers, fps);
break;
case TEST_TYPE_TENSORS_CAP_1:
- /** other/tensors, caps are specifed (num_tensors is 1) */
+ /** other/tensors, caps are specified (num_tensors is 1) */
str_pipeline = g_strdup_printf (
"videotestsrc num-buffers=%d ! videoconvert ! video/x-raw,width=160,height=120,format=RGB,framerate=(fraction)%lu/1 ! "
"tensor_converter ! other/tensors,format=static ! tensor_sink name=test_sink async=false",
option.num_buffers, fps);
break;
case TEST_TYPE_TENSORS_CAP_2:
- /** other/tensors, caps are not specifed (num_tensors is 3) */
+ /** other/tensors, caps are not specified (num_tensors is 3) */
str_pipeline = g_strdup_printf (
"tensor_mux name=mux ! tensor_sink name=test_sink "
"videotestsrc num-buffers=%d ! video/x-raw,width=160,height=120,format=RGB,framerate=(fraction)30/1 ! tensor_converter ! mux.sink_0 "
* num-validation-samples: num-validation-samples, A sample can consist of
* multiple inputs and labels in tensors(in case of MNIST, all is 1), set how
* many samples are taken for validation model. epochs : epochs are repetitions
- * of training samples and validation smaples. number of samples received for
+ * of training samples and validation samples. The number of samples received for
* model training is (num-training-samples + num-validation-samples) * epochs
*/
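The sample-count arithmetic stated in the comment above is easy to check; the concrete numbers here are made up for illustration:

```python
num_training_samples = 1000
num_validation_samples = 100
epochs = 5

# Samples received by the trainer across the whole run:
total = (num_training_samples + num_validation_samples) * epochs
print(total)  # 5500
```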
TEST (tensor_trainer, SetParams)
/* set invalid param */
g_object_set (GST_OBJECT (tensor_trainer), "framework", NULL, NULL);
- /* state chagne failure is expected */
+ /* state change failure is expected */
EXPECT_NE (setPipelineStateSync (pipeline, GST_STATE_PLAYING, UNITTEST_STATECHANGE_TIMEOUT), 0);
gst_object_unref (GST_OBJECT (tensor_trainer));
/* set invalid param */
g_object_set (GST_OBJECT (tensor_trainer), "framework", "no_framework", NULL);
- /* state chagne failure is expected */
+ /* state change failure is expected */
EXPECT_NE (setPipelineStateSync (pipeline, GST_STATE_PLAYING, UNITTEST_STATECHANGE_TIMEOUT), 0);
gst_object_unref (GST_OBJECT (tensor_trainer));
/* set invalid param */
g_object_set (GST_OBJECT (tensor_trainer), "model-config", NULL, NULL);
- /* state chagne failure is expected */
+ /* state change failure is expected */
EXPECT_NE (setPipelineStateSync (pipeline, GST_STATE_PLAYING, UNITTEST_STATECHANGE_TIMEOUT), 0);
gst_object_unref (GST_OBJECT (tensor_trainer));
/* set invalid param */
g_object_set (GST_OBJECT (tensor_trainer), "model-config", non_existent_path, NULL);
- /* state chagne failure is expected */
+ /* state change failure is expected */
EXPECT_NE (setPipelineStateSync (pipeline, GST_STATE_PLAYING, UNITTEST_STATECHANGE_TIMEOUT), 0);
g_free (non_existent_path);
/* set invalid param */
g_object_set (GST_OBJECT (tensor_trainer), "model-save-path", NULL, NULL);
- /* state chagne failure is expected */
+ /* state change failure is expected */
EXPECT_NE (setPipelineStateSync (pipeline, GST_STATE_PLAYING, UNITTEST_STATECHANGE_TIMEOUT), 0);
gst_object_unref (GST_OBJECT (tensor_trainer));
/** value "-1" is out of range for property 'num-training-samples' of type
'guint' default value is set */
g_object_get (GST_OBJECT (tensor_trainer), "num-training-samples", &get_value, NULL);
- /* state chagne failure is expected */
+ /* state change failure is expected */
EXPECT_EQ (get_value, 0U);
gst_object_unref (GST_OBJECT (tensor_trainer));
/** value "-1" is out of range for property 'num-validation-samples' of type
'guint' default value is set */
g_object_get (GST_OBJECT (tensor_trainer), "num-validation-samples", &get_value, NULL);
- /* state chagne failure is expected */
+ /* state change failure is expected */
EXPECT_EQ (get_value, 0U);
gst_object_unref (GST_OBJECT (tensor_trainer));
/** value "-1" is out of range for property 'epochs' of type 'guint'
default value is set */
g_object_get (GST_OBJECT (tensor_trainer), "epochs", &get_value, NULL);
- /* state chagne failure is expected */
+ /* state change failure is expected */
EXPECT_EQ (get_value, 1U);
gst_object_unref (GST_OBJECT (tensor_trainer));
/** value "-1" is out of range for property 'num-inputs' of type 'guint'
default value is set */
g_object_get (GST_OBJECT (tensor_trainer), "num-inputs", &get_value, NULL);
- /* state chagne failure is expected */
+ /* state change failure is expected */
EXPECT_EQ (get_value, 1U);
gst_object_unref (GST_OBJECT (tensor_trainer));
/** value "-1" of type 'gint64' is invalid or out of range for property
'num-labels' of type 'guint' default value is set */
g_object_get (GST_OBJECT (tensor_trainer), "num-labels", &get_value, NULL);
- /* state chagne failure is expected */
+ /* state change failure is expected */
EXPECT_EQ (get_value, 1U);
gst_object_unref (GST_OBJECT (tensor_trainer));
/**
* @brief C-wrapper for the MockModel's method get_model.
- * @param[in] name The model's name to retreive
- * @param[in] version The model's version to retreive
+ * @param[in] name The model's name to retrieve
+ * @param[in] version The model's version to retrieve
* @return A pointer to the model matching the given information. If there is no model possible, NULL is returned.
*/
MockModel *ml_agent_mock_get_model (const gchar *name, const guint version);
/**
* @brief Pass the JSON c-string generated by the ML Agent mock class to the caller.
- * @param[in] name The model's name to retreive
- * @param[in] version The model's version to retreive
+ * @param[in] name The model's name to retrieve
+ * @param[in] version The model's version to retrieve
* @param[out] description The JSON c-string, containing the information of the model matching the given name and version
* @param[out] err The return location for a recoverable error. This can be NULL.
* @return 0 if there is a matching model to the given name and version otherwise a negative integer
self.sinks = []
##
- # @brief add a filter instnace
+ # @brief add a filter instance
def addFilter(self, _filter):
self.filters[_filter.name] = _filter
subdir('confchk')
endif
-# Gst/NNS string pipeline desciption <--> pbtxt pipeline description
+# Gst/NNS string pipeline description <--> pbtxt pipeline description
# for pbtxt pipeline WYSIWYG tools.
if get_option('enable-pbtxt-converter')
if (build_platform == 'macos')
* @param[out] output The output tensors.
* @return 0 if success. Non-zero if failed
*
- * @note The intput / output dimensions, required for interpreting input/output
+ * @note The input / output dimensions, required for interpreting input/output
* pointers, are stored in prop.
*/
static int
_Element *pipeline;
context =
- g_option_context_new ("- Prototxt to/from GStreamer Pipeline Converver");
+ g_option_context_new ("- Prototxt to/from GStreamer Pipeline Converter");
g_option_context_add_main_entries (context, entries, NULL);
if (!g_option_context_parse (context, &argc, &argv, &error)) {
g_printerr ("Option parsing failed: %s\n", error->message);
nnstparser_element_from_uri (const _URIType type, const gchar *uri,
const gchar * elementname, void **error);
-/** @brief gst_object_unref for psuedo element */
+/** @brief gst_object_unref for pseudo element */
extern _Element *
nnstparser_element_unref (_Element * element);
-/** @brief gst_object_ref for psuedo element */
+/** @brief gst_object_ref for pseudo element */
extern _Element *
nnstparser_element_ref (_Element * element);
#### Analyzing the data
-* First of all, install google-chrome (or chromium-brwser) in you own PC.
+* First of all, install google-chrome (or chromium-browser) on your own PC.
-* Open **chrome://tracing/ webapge after running the browser.
-* Click load buttong. Then, open file output_file.json
+* Open the **chrome://tracing/** webpage after running the browser.
+* Click load button. Then, open file output_file.json
-* You should see a callstack with timing
+* You should see a call stack with timing information.
-<img src=hawktracer-chrome-tracing-out.png border=0></img>
+<img src="hawktracer-chrome-tracing-out.png" border="0" />
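As a quick sanity check before loading, the trace file can be validated as JSON from the shell. This is a minimal sketch; the two-event trace written below is hypothetical sample data in the chrome://tracing JSON-array format, not real tracer output.

```bash
# Hypothetical sample trace; a real output_file.json comes from the tracer,
# not from this snippet.
cat > output_file.json <<'EOF'
[{"name": "element-latency", "ph": "B", "ts": 0, "pid": 1, "tid": 1},
 {"name": "element-latency", "ph": "E", "ts": 42, "pid": 1, "tid": 1}]
EOF
# chrome://tracing rejects malformed JSON, so validate before loading it.
python3 -m json.tool output_file.json > /dev/null && echo "trace OK"
```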
#### gstshark-plot (Experimental/Unstable)
gstshark-plot is a set of [Octave](https://www.gnu.org/software/octave/) scripts included with GstShark. The gstshark-plot scripts are located in scripts/graphics directory, inside the repository. The main script that processes the data is the gstshark-plot script. Currently, the scripts need to be run on this directory, but on upcoming releases the scripts will be accessible from any path. Make sure the GST_SHARK_CTF_DISABLE environment variable is unset, to enable the generation of the full traces.
-Note that you have to run "unset GST_SHARK_LOCATION" statement in order to archive output date into CTF (Commen Trace Format, ./gstshark_yyyy-mm-dd_hh:mm:ss/) folder.
+Note that you have to run the "unset GST_SHARK_LOCATION" statement in order to archive output data into a CTF (Common Trace Format, ./gstshark_yyyy-mm-dd_hh:mm:ss/) folder.
* CTF (Common Trace Format) file: Directory with date and time with the traces of the latest session.
```bash
$ unset GST_SHARK_LOCATION