Fix typos
author Kwanghoon Son <k.son@samsung.com>
Thu, 6 Jul 2023 07:36:47 +0000 (16:36 +0900)
committer Kwanghoon Son <k.son@samsung.com>
Wed, 12 Jul 2023 09:45:35 +0000 (18:45 +0900)
Change-Id: I42eb061115e5962540a38c7aff1a2a5532b89f1a
Signed-off-by: Kwanghoon Son <k.son@samsung.com>
45 files changed:
include/mv_surveillance.h
meta-template/README.md
mv_3d/3d/include/Mv3d.h
mv_3d/3d/include/mv_3d_open.h
mv_3d/3d/src/mv_3d_open.cpp
mv_common/src/CommonUtils.cpp
mv_face/face/src/FaceDetector.cpp
mv_face/face/src/FaceEyeCondition.cpp
mv_face/face/src/FaceRecognitionModel.cpp
mv_face/face/src/mv_face_open.cpp
mv_image/image/include/ImageMathUtil.h
mv_image/image/src/ImageMathUtil.cpp
mv_image/image/src/Tracking/ImageTrackingModel.cpp
mv_machine_learning/inference/include/InferenceIni.h
mv_machine_learning/inference/include/ObjectDecoder.h
mv_machine_learning/inference/src/Inference.cpp
mv_machine_learning/inference/src/InferenceIni.cpp
mv_machine_learning/inference/src/InputMetadata.cpp
mv_machine_learning/inference/src/ObjectDecoder.cpp
mv_machine_learning/inference/src/OutputMetadata.cpp
mv_machine_learning/inference/src/Posture.cpp
mv_machine_learning/landmark_detection/src/facial_landmark_adapter.cpp
mv_machine_learning/landmark_detection/src/pose_landmark_adapter.cpp
mv_machine_learning/meta/include/MetaParser.h
mv_machine_learning/object_detection/src/face_detection_adapter.cpp
mv_machine_learning/object_detection/src/object_detection_adapter.cpp
mv_machine_learning/training/src/feature_vector_manager.cpp
mv_surveillance/surveillance/include/EventManager.h
mv_surveillance/surveillance/include/EventTrigger.h
mv_surveillance/surveillance/include/EventTriggerMovementDetection.h
mv_surveillance/surveillance/include/EventTriggerPersonAppearance.h
mv_surveillance/surveillance/include/EventTriggerPersonRecognition.h
mv_surveillance/surveillance/include/SurveillanceHelper.h
mv_surveillance/surveillance/include/mv_mask_buffer.h
mv_surveillance/surveillance/src/EventTriggerPersonAppearance.cpp
mv_surveillance/surveillance/src/EventTriggerPersonRecognition.cpp
test/README.md
test/testsuites/common/testsuite_common/mv_testsuite_common.c
test/testsuites/common/testsuite_common/mv_testsuite_common.h
test/testsuites/common/video_helper/mv_video_helper.c
test/testsuites/common/visualizer/src/mv_util_render_2d.cpp
test/testsuites/face/face_test_suite.c
test/testsuites/machine_learning/inference/inference_test_suite.c
test/testsuites/machine_learning/inference/test_face_landmark_detection.cpp
test/testsuites/surveillance/surveillance_test_suite.c

index 3deb6e9..aed1558 100644 (file)
@@ -691,7 +691,7 @@ extern "C" {
 #define MV_SURVEILLANCE_FACE_RECOGNITION_MODEL_FILE_PATH "MV_SURVEILLANCE_FACE_RECOGNITION_MODEL_FILE_PATH"
 
 /**
- * @brief Defines MV_SURVEILLANCE_MOVEMENT_DETECTION_THRESOLD to set movement
+ * @brief Defines MV_SURVEILLANCE_MOVEMENT_DETECTION_THRESHOLD to set movement
  *        detection threshold. It is an attribute of the engine configuration.
  * @details This value might be set in engine configuration before subscription
  *          on #MV_SURVEILLANCE_EVENT_TYPE_MOVEMENT_DETECTED event trigger
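For context, setting this attribute before subscribing might look like the following sketch (hypothetical values; `mv_create_engine_config()` and `mv_engine_config_set_int_attribute()` are public Media Vision calls, but treating this threshold as an integer attribute is an assumption of the sketch):

    mv_engine_config_h cfg = NULL;
    mv_create_engine_config(&cfg);
    /* assumed integer attribute; the value 10 is for illustration only */
    mv_engine_config_set_int_attribute(cfg, MV_SURVEILLANCE_MOVEMENT_DETECTION_THRESHOLD, 10);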
index 4614406..34467b2 100644 (file)
@@ -64,11 +64,11 @@ The `outputmetadata` includes
 - `name`: name to an output tensor for score
 - `index`: index to get score from the output tensor
 - `top_number`: the top number of outputs
-- `threshold` : threshold to cut ouputs under the `threshold` value
+- `threshold` : threshold to cut outputs under the `threshold` value
 - `score_type` : score type; `NORMAL` if score between 0 ~ 1, `SIGMOID` if score requires sigmoid
 
 The classification meta file, thus, illustrates that the model has an input which is named of `input_2`, `NHWC` shape type with `[1, 224, 224, 3]` dimensions, `MV_INFERENCE_DATA_FLOAT32` data type, and `RGB888` color space. It requires normalization with mean `[127.5, 127.5, 127.5]` and standard deviation `[127.5, 127.5, 127.5]`. But it doesn't apply quantization.
-The meta file illustrates that the model has an ouput which is named of `dense_3/Softmax`. The tensor is 2-dimensional and its' 2nd index corresponds to the score. In addition, the score is just between 0 ~ 1. The score under `threshold` 0.3 should be thrown out and the `top_number` of outputs should be given as results.
+The meta file illustrates that the model has an output which is named of `dense_3/Softmax`. The tensor is 2-dimensional and its 2nd index corresponds to the score. In addition, the score is just between 0 ~ 1. The score under `threshold` 0.3 should be thrown out and the `top_number` of outputs should be given as results.
 
 A meta file, however, for classification with quantized model is shown below.
 
@@ -121,7 +121,7 @@ in `score`. You can get real value `value` :
 * `value` = `value8` / `scale`+ `zeropoint`
 
 The classification meta file, thus, illustrates that the model has an input which is named of `input`, `NHWC` shape type with `[1, 224, 224, 3]` dimensions, `MV_INFERENCE_DATA_UINT8` data type, and `RGB888` color space. It requires any preprocess.
-The meta file illustrates that the model has an ouput which is named of `MobilenetV1/Predictions/Reshape_1`. The tensor is 2-dimensional and its' 2nd index corresponds to the score. In addition, the score is just between 0 ~ 1, but the value requires dequantization with scale and zeropoint values.  The score after dequantizing under `threshold`0.3 should be thrown out and the `top_number` of outputs should be given as results.
+The meta file illustrates that the model has an output which is named of `MobilenetV1/Predictions/Reshape_1`. The tensor is 2-dimensional and its 2nd index corresponds to the score. In addition, the score is just between 0 ~ 1, but the value requires dequantization with scale and zeropoint values. The score after dequantizing under `threshold` 0.3 should be thrown out and the `top_number` of outputs should be given as results.
 
 To show how to apply meta files to well-know models, we provide example meta files which support google hosted models for image classification as:
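To make the post-processing described above concrete, here is a minimal self-contained sketch (a hypothetical helper, not the Media Vision decoder) that dequantizes raw scores with the quoted `value8` / `scale` + `zeropoint` formula, throws out scores under `threshold`, and keeps the `top_number` best class indices:

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    std::vector<size_t> topClasses(const std::vector<uint8_t> &value8, float scale,
                                   float zeropoint, float threshold, size_t top_number)
    {
        std::vector<float> value(value8.size());
        std::vector<size_t> kept;
        for (size_t i = 0; i < value8.size(); ++i) {
            value[i] = value8[i] / scale + zeropoint; /* dequantize per the formula above */
            if (value[i] > threshold)                 /* scores under threshold are thrown out */
                kept.push_back(i);
        }
        std::sort(kept.begin(), kept.end(),
                  [&](size_t a, size_t b) { return value[a] > value[b]; });
        if (kept.size() > top_number)
            kept.resize(top_number);
        return kept; /* at most top_number class indices, best first */
    }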
 
index 54298db..8a0e437 100644 (file)
@@ -74,7 +74,7 @@ private:
        static gpointer DfsThreadLoop(gpointer data);
 
 #ifdef MV_3D_POINTCLOUD_IS_AVAILABLE
-       PointCloudPtr GetPointcloudFromSource(DfsInputData &intput, DfsOutputData &depthData);
+       PointCloudPtr GetPointcloudFromSource(DfsInputData &input, DfsOutputData &depthData);
 #endif
 public:
        Mv3d();
index c3adb0e..9d572e5 100644 (file)
@@ -39,19 +39,19 @@ int mv3dDestroy(mv_3d_h mv3d);
 
 /**
  * @brief Configure mv3d handle.
- * @sicne_tizen 7.0
+ * @since_tizen 7.0
  */
 int mv3dConfigure(mv_3d_h mv3d, mv_engine_config_h engine_config);
 
 /**
  * @brief Set depth callback to mv3d handle.
- * @sicne_tizen 7.0
+ * @since_tizen 7.0
  */
 int mv3dSetDepthCallback(mv_3d_h mv3d, mv_3d_depth_cb depth_cb, void *user_data);
 
 /**
  * @brief Set pointcloud callback to mv3d handle.
- * @sicne_tizen 7.0
+ * @since_tizen 7.0
  */
 int mv3dSetPointcloudCallback(mv_3d_h mv3d, mv_3d_pointcloud_cb pointcloud_cb, void *user_data);
 
index 973f5fc..89d0af9 100644 (file)
@@ -201,7 +201,7 @@ int mv3dSetDepthCallback(mv_3d_h mv3d, mv_3d_depth_cb depth_cb, void *user_data)
        }
 
        if (!depth_cb) {
-               LOGE("Callbakc is NULL");
+               LOGE("Callback is NULL");
                return MEDIA_VISION_ERROR_INVALID_PARAMETER;
        }
 
@@ -223,7 +223,7 @@ int mv3dSetPointcloudCallback(mv_3d_h mv3d, mv_3d_pointcloud_cb pointcloud_cb, v
        }
 
        if (!pointcloud_cb) {
-               LOGE("Callbakc is NULL");
+               LOGE("Callback is NULL");
                return MEDIA_VISION_ERROR_INVALID_PARAMETER;
        }
 
index 7035205..4937dfb 100644 (file)
@@ -47,7 +47,7 @@ int convertSourceMV2GrayCV(mv_source_h mvSource, cv::Mat &cvSource)
        case MEDIA_VISION_COLORSPACE_Y800:
                channelsNumber = 1;
                conversionType = -1; /* Type of conversion from given colorspace to gray */
-               /* Without convertion */
+               /* Without conversion */
                break;
        case MEDIA_VISION_COLORSPACE_I420:
                channelsNumber = 1;
index 756d536..5f7998b 100644 (file)
@@ -38,17 +38,17 @@ bool FaceDetector::detectFaces(const cv::Mat &image, const cv::Rect &roi, const
 
        faceLocations.clear();
 
-       cv::Mat intrestingRegion = image;
+       cv::Mat interestingRegion = image;
 
        bool roiIsUsed = false;
        if (roi.x >= 0 && roi.y >= 0 && roi.width > 0 && roi.height > 0 && (roi.x + roi.width) <= image.cols &&
                (roi.y + roi.height) <= image.rows) {
-               intrestingRegion = intrestingRegion(roi);
+               interestingRegion = interestingRegion(roi);
                roiIsUsed = true;
        }
 
        try {
-               m_faceCascade.detectMultiScale(intrestingRegion, faceLocations, 1.1, 3, 0, minSize);
+               m_faceCascade.detectMultiScale(interestingRegion, faceLocations, 1.1, 3, 0, minSize);
        } catch (cv::Exception &e) {
                return false;
        }
index 26276ce..a0cf7b4 100644 (file)
@@ -70,8 +70,8 @@ int FaceEyeCondition::isEyeOpen(const cv::Mat &eye)
        cv::Mat eyeEqualized;
        cv::equalizeHist(eye, eyeEqualized);
 
-       const int thresold = 20;
-       eyeEqualized = eyeEqualized < thresold;
+       const int threshold = 20;
+       eyeEqualized = eyeEqualized < threshold;
 
        std::vector<std::vector<cv::Point> > contours;
        std::vector<cv::Vec4i> hierarchy;
@@ -88,7 +88,7 @@ int FaceEyeCondition::isEyeOpen(const cv::Mat &eye)
        const int width = eyeEqualized.cols / 2.5;
        const int height = eyeEqualized.rows / 2.5;
 
-       const cv::Rect boundThresold(xCenter - width, yCenter - height, 2 * width, 2 * height);
+       const cv::Rect boundThreshold(xCenter - width, yCenter - height, 2 * width, 2 * height);
 
        const int widthHeightRatio = 3;
        const double areaRatio = 0.005;
@@ -99,11 +99,12 @@ int FaceEyeCondition::isEyeOpen(const cv::Mat &eye)
                const cv::Rect currentRect = cv::boundingRect(contours[i]);
                const double currentArea = cv::contourArea(contours[i]);
 
-               if (boundThresold.contains(currentRect.br()) && boundThresold.contains(currentRect.tl()) &&
-                       currentArea > areaRatio * boundThresold.area() && currentRect.width < widthHeightRatio * currentRect.height)
+               if (boundThreshold.contains(currentRect.br()) && boundThreshold.contains(currentRect.tl()) &&
+                       currentArea > areaRatio * boundThreshold.area() &&
+                       currentRect.width < widthHeightRatio * currentRect.height)
                        isOpen = MV_FACE_EYES_OPEN;
-               else if (boundThresold.contains(currentRect.br()) && boundThresold.contains(currentRect.tl()) &&
-                                currentArea > areaSmallRatio * boundThresold.area())
+               else if (boundThreshold.contains(currentRect.br()) && boundThreshold.contains(currentRect.tl()) &&
+                                currentArea > areaSmallRatio * boundThreshold.area())
                        ++rectanglesInsideCount;
        }
 
index e104147..c926dab 100644 (file)
@@ -114,7 +114,7 @@ int CopyOpenCVAlgorithmParameters(const cv::Ptr<cv::face::FaceRecognizer> &srcAl
                        dstAlg->set(paramNames[i], srcAlg->getAlgorithm(paramNames[i]));
                        break;
                default:
-                       LOGE("While copying algorothm parameters unsupported parameter "
+                       LOGE("While copying algorithm parameters unsupported parameter "
                                "%s was found.", paramNames[i].c_str());
 
                        return MEDIA_VISION_ERROR_NOT_SUPPORTED;
index 30dda5f..458480b 100644 (file)
@@ -74,7 +74,7 @@ int mv_face_detect_open(mv_source_h source, mv_engine_config_h engine_cfg, mv_fa
 
        int error = MediaVision::Common::convertSourceMV2GrayCV(source, image);
        if (error != MEDIA_VISION_ERROR_NONE) {
-               LOGE("Convertion mv_source_h to gray failed");
+               LOGE("Conversion mv_source_h to gray failed");
                return error;
        }
 
@@ -101,7 +101,7 @@ int mv_face_detect_open(mv_source_h source, mv_engine_config_h engine_cfg, mv_fa
                                 error);
                }
 
-               /* Ser roi to be detected */
+               /* Set roi to be detected */
                error = mv_engine_config_get_int_attribute_c(engine_cfg, MV_FACE_DETECTION_ROI_X, &roi.x);
                if (error != MEDIA_VISION_ERROR_NONE)
                        LOGE("Error occurred during face detection roi (x) receiving."
@@ -200,7 +200,7 @@ int mv_face_recognize_open(mv_source_h source, mv_face_recognition_model_h recog
        int ret = MediaVision::Common::convertSourceMV2GrayCV(source, grayImage);
 
        if (MEDIA_VISION_ERROR_NONE != ret) {
-               LOGE("Convertion mv_source_h to gray failed");
+               LOGE("Conversion mv_source_h to gray failed");
                return ret;
        }
 
@@ -275,7 +275,7 @@ int mv_face_track_open(mv_source_h source, mv_face_tracking_model_h tracking_mod
        int ret = MediaVision::Common::convertSourceMV2GrayCV(source, grayImage);
 
        if (MEDIA_VISION_ERROR_NONE != ret) {
-               LOGE("Convertion mv_source_h to gray failed");
+               LOGE("Conversion mv_source_h to gray failed");
                return ret;
        }
 
@@ -315,7 +315,7 @@ int mv_face_eye_condition_recognize_open(mv_source_h source, mv_engine_config_h
 
        int error = MediaVision::Common::convertSourceMV2GrayCV(source, image);
        if (error != MEDIA_VISION_ERROR_NONE) {
-               LOGE("Convertion mv_source_h to gray failed");
+               LOGE("Conversion mv_source_h to gray failed");
                return error;
        }
 
@@ -323,7 +323,7 @@ int mv_face_eye_condition_recognize_open(mv_source_h source, mv_engine_config_h
        error = FaceEyeCondition::recognizeEyeCondition(image, face_location, &eye_condition);
 
        if (error != MEDIA_VISION_ERROR_NONE) {
-               LOGE("eye contition recognition failed");
+               LOGE("eye condition recognition failed");
                return error;
        }
 
@@ -341,7 +341,7 @@ int mv_face_facial_expression_recognize_open(mv_source_h source, mv_engine_confi
 
        int error = MediaVision::Common::convertSourceMV2GrayCV(source, image);
        if (error != MEDIA_VISION_ERROR_NONE) {
-               LOGE("Convertion mv_source_h to gray failed");
+               LOGE("Conversion mv_source_h to gray failed");
                return error;
        }
 
@@ -349,7 +349,7 @@ int mv_face_facial_expression_recognize_open(mv_source_h source, mv_engine_confi
        error = FaceExpressionRecognizer::recognizeFaceExpression(image, face_location, &expression);
 
        if (error != MEDIA_VISION_ERROR_NONE) {
-               LOGE("eye contition recognition failed");
+               LOGE("facial expression recognition failed");
                return error;
        }
 
@@ -501,7 +501,7 @@ int mv_face_recognition_model_add_open(const mv_source_h source, mv_face_recogni
        cv::Mat image;
        int ret = MediaVision::Common::convertSourceMV2GrayCV(source, image);
        if (MEDIA_VISION_ERROR_NONE != ret) {
-               LOGE("Convertion mv_source_h to gray failed");
+               LOGE("Conversion mv_source_h to gray failed");
                return ret;
        }
 
@@ -660,7 +660,7 @@ int mv_face_tracking_model_prepare_open(mv_face_tracking_model_h tracking_model,
        cv::Mat image;
        int ret = MediaVision::Common::convertSourceMV2GrayCV(source, image);
        if (MEDIA_VISION_ERROR_NONE != ret) {
-               LOGE("Convertion mv_source_h to gray failed");
+               LOGE("Conversion mv_source_h to gray failed");
                return ret;
        }
 
index ae75205..3dc416f 100644 (file)
@@ -81,10 +81,10 @@ bool checkAccessory(const cv::Point2f &point, const std::vector<cv::Point2f> &re
  *          rectangle from {0,0} to @a maxSize
  *
  * @since_tizen 3.0
- * @param [in] rectange   Rectangle which will be cut
+ * @param [in] rectangle   Rectangle which will be cut
  * @param [in] maxSize    Maximum values of needed rectangle
  */
-void catRect(cv::Rect &rectange, const cv::Size &maxSize);
+void catRect(cv::Rect &rectangle, const cv::Size &maxSize);
 
 /**
  * @brief   Resizes a region.
index b6e08ce..7295d2c 100644 (file)
@@ -67,41 +67,41 @@ bool checkAccessory(const cv::Point2f &point, const std::vector<cv::Point2f> &re
        return insideFlag;
 }
 
-void catRect(cv::Rect &rectange, const cv::Size &maxSize)
+void catRect(cv::Rect &rectangle, const cv::Size &maxSize)
 {
-       if (rectange.width < 0) {
-               rectange.x += rectange.width;
-               rectange.width *= -1;
+       if (rectangle.width < 0) {
+               rectangle.x += rectangle.width;
+               rectangle.width *= -1;
        }
 
-       if (rectange.height < 0) {
-               rectange.y += rectange.height;
-               rectange.height *= -1;
+       if (rectangle.height < 0) {
+               rectangle.y += rectangle.height;
+               rectangle.height *= -1;
        }
 
-       if (rectange.x > maxSize.width || rectange.y > maxSize.height) {
-               rectange.x = 0;
-               rectange.y = 0;
-               rectange.width = 0;
-               rectange.height = 0;
+       if (rectangle.x > maxSize.width || rectangle.y > maxSize.height) {
+               rectangle.x = 0;
+               rectangle.y = 0;
+               rectangle.width = 0;
+               rectangle.height = 0;
                return;
        }
 
-       if (rectange.x < 0) {
-               rectange.width += rectange.x;
-               rectange.x = 0;
+       if (rectangle.x < 0) {
+               rectangle.width += rectangle.x;
+               rectangle.x = 0;
        }
 
-       if (rectange.y < 0) {
-               rectange.height += rectange.y;
-               rectange.y = 0;
+       if (rectangle.y < 0) {
+               rectangle.height += rectangle.y;
+               rectangle.y = 0;
        }
 
-       if (rectange.x + rectange.width > maxSize.width)
-               rectange.width = maxSize.width - rectange.x;
+       if (rectangle.x + rectangle.width > maxSize.width)
+               rectangle.width = maxSize.width - rectangle.x;
 
-       if (rectange.y + rectange.height > maxSize.height)
-               rectange.height = maxSize.height - rectange.y;
+       if (rectangle.y + rectangle.height > maxSize.height)
+               rectangle.height = maxSize.height - rectangle.y;
 }
 
 std::vector<cv::Point2f> contourResize(const std::vector<cv::Point2f> &roi, float scalingCoefficient)
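For reference, a worked example of the clamping that catRect() performs, with illustrative values:

    cv::Rect r(-10, 20, 50, 200);   // starts 10 px left of the image
    catRect(r, cv::Size(100, 100)); // clamp to a 100x100 image
    // r is now {0, 20, 40, 80}: x is clamped to 0, width loses the
    // 10 px that fell outside, and height is cut at the bottom edge.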
index bae97dd..e7d167b 100644 (file)
@@ -97,9 +97,9 @@ int ImageTrackingModel::setTarget(const ImageObject &target)
 
        /* Parameters of cascade tracker */
 
-       const float recognitionBasedTrackerPriotity = 1.0f;
-       const float featureSubstitutionTrackerPriotity = 0.6f;
-       const float medianFlowTrackerPriotity = 0.1f;
+       const float recognitionBasedTrackerPriority = 1.0f;
+       const float featureSubstitutionTrackerPriority = 0.6f;
+       const float medianFlowTrackerPriority = 0.1f;
 
        /* Parameters of stabilization */
 
@@ -131,7 +131,7 @@ int ImageTrackingModel::setTarget(const ImageObject &target)
        if (asyncRecogTracker == NULL)
                LOGE("Failed to create Async Recognition Tracker");
 
-       mainTracker->enableTracker(asyncRecogTracker, recognitionBasedTrackerPriotity);
+       mainTracker->enableTracker(asyncRecogTracker, recognitionBasedTrackerPriority);
 
        /* Adding asynchronous feature substitution based tracker */
 
@@ -144,7 +144,7 @@ int ImageTrackingModel::setTarget(const ImageObject &target)
        if (asyncSubstitutionTracker == NULL)
                LOGE("Failed to create Async Substitution Tracker");
 
-       mainTracker->enableTracker(asyncSubstitutionTracker, featureSubstitutionTrackerPriotity);
+       mainTracker->enableTracker(asyncSubstitutionTracker, featureSubstitutionTrackerPriority);
 
        /* Adding median flow tracker */
 
@@ -152,7 +152,7 @@ int ImageTrackingModel::setTarget(const ImageObject &target)
        if (mfTracker == NULL)
                LOGE("Failed to create MFTracker");
 
-       mainTracker->enableTracker(mfTracker, medianFlowTrackerPriotity);
+       mainTracker->enableTracker(mfTracker, medianFlowTrackerPriority);
 
        __tracker = mainTracker;
        __target = target;
index 3a06a61..6f95a0a 100644 (file)
@@ -63,7 +63,7 @@ private:
        std::vector<int> mSupportedInferenceBackend;
        std::string mIniDefaultPath;
        std::string mDefaultBackend;
-       std::string mDelimeter;
+       std::string mDelimiter;
 };
 
 } /* Inference */
index c9f37c3..bbc33a4 100644 (file)
@@ -43,7 +43,7 @@ private:
        TensorBuffer mTensorBuffer;
        OutputMetadata mMeta;
        int mBoxOffset;
-       int mNumberOfOjects;
+       int mNumberOfObjects;
        float mScaleW;
        float mScaleH;
        Boxes mResultBoxes;
@@ -61,7 +61,7 @@ public:
                        : mTensorBuffer(buffer)
                        , mMeta(metaData)
                        , mBoxOffset(boxOffset)
-                       , mNumberOfOjects(numberOfObjects)
+                       , mNumberOfObjects(numberOfObjects)
                        , mScaleW(scaleW)
                        , mScaleH(scaleH)
                        , mResultBoxes() {};
index 437093b..9169d67 100644 (file)
@@ -1226,7 +1226,7 @@ int Inference::getObjectDetectionResults(ObjectDetectionResults *results)
 
                // In case of object detection,
                // a model may apply post-process but others may not.
-               // Thus, those cases should be hanlded separately.
+               // Thus, those cases should be handled separately.
 
                float *boxes = nullptr;
                float *classes = nullptr;
@@ -1332,7 +1332,7 @@ int Inference::getFaceDetectionResults(FaceDetectionResults *results)
                if (outputMeta.GetBoxDecodingType() != INFERENCE_BOX_DECODING_TYPE_BYPASS) {
                        std::vector<int> scoreIndexes = outputMeta.GetScoreDimInfo().GetValidIndexAll();
                        if (scoreIndexes.size() != 1) {
-                               LOGE("Invaid dim size. It should be 1");
+                               LOGE("Invalid dim size. It should be 1");
                                return MEDIA_VISION_ERROR_INVALID_OPERATION;
                        }
                        numberOfFaces = mOutputLayerProperty.layers[outputMeta.GetScoreName()].shape[scoreIndexes[0]];
@@ -1490,7 +1490,7 @@ int Inference::getFacialLandMarkDetectionResults(FacialLandMarkDetectionResults
                PoseDecoder poseDecoder(mOutputTensorBuffers, outputMeta, heatMapWidth, heatMapHeight, heatMapChannel,
                                                                number_of_landmarks);
 
-               // initialize decorder queue with landmarks to be decoded.
+               // initialize decoder queue with landmarks to be decoded.
                int ret = poseDecoder.init();
                if (ret != MEDIA_VISION_ERROR_NONE) {
                        LOGE("Fail to init poseDecoder");
@@ -1601,7 +1601,7 @@ int Inference::getPoseLandmarkDetectionResults(std::unique_ptr<mv_inference_pose
                LOGI("number of landmarks per pose: %d", poseResult->number_of_landmarks_per_pose);
 
                if (poseResult->number_of_landmarks_per_pose >= MAX_NUMBER_OF_LANDMARKS_PER_POSE) {
-                       LOGE("Exceeded maxinum number of landmarks per pose(%d >= %d).", poseResult->number_of_landmarks_per_pose,
+                       LOGE("Exceeded maximum number of landmarks per pose(%d >= %d).", poseResult->number_of_landmarks_per_pose,
                                 MAX_NUMBER_OF_LANDMARKS_PER_POSE);
                        return MEDIA_VISION_ERROR_INVALID_PARAMETER;
                }
@@ -1610,7 +1610,7 @@ int Inference::getPoseLandmarkDetectionResults(std::unique_ptr<mv_inference_pose
                PoseDecoder poseDecoder(mOutputTensorBuffers, outputMeta, heatMapWidth, heatMapHeight, heatMapChannel,
                                                                poseResult->number_of_landmarks_per_pose);
 
-               // initialize decorder queue with landmarks to be decoded.
+               // initialize decoder queue with landmarks to be decoded.
                int ret = poseDecoder.init();
                if (ret != MEDIA_VISION_ERROR_NONE) {
                        LOGE("Fail to init poseDecoder");
@@ -1674,7 +1674,7 @@ int Inference::getPoseLandmarkDetectionResults(std::unique_ptr<mv_inference_pose
                poseResult->number_of_landmarks_per_pose = outputTensorInfo.dimInfo[0][3];
 
                if (poseResult->number_of_landmarks_per_pose >= MAX_NUMBER_OF_LANDMARKS_PER_POSE) {
-                       LOGE("Exeeded maxinum number of landmarks per pose(%d >= %d).", poseResult->number_of_landmarks_per_pose,
+                       LOGE("Exceeded maximum number of landmarks per pose(%d >= %d).", poseResult->number_of_landmarks_per_pose,
                                 MAX_NUMBER_OF_LANDMARKS_PER_POSE);
                        return MEDIA_VISION_ERROR_INVALID_PARAMETER;
                }
index a0c978f..9498e81 100644 (file)
@@ -44,7 +44,7 @@ static inline std::string &trim(std::string &s, const char *t = " \t\n\r\f\v")
        return ltrim(rtrim(s, t), t);
 }
 
-InferenceInI::InferenceInI() : mIniDefaultPath(SYSCONFDIR), mDefaultBackend("OPENCV"), mDelimeter(",")
+InferenceInI::InferenceInI() : mIniDefaultPath(SYSCONFDIR), mDefaultBackend("OPENCV"), mDelimiter(",")
 {
        mIniDefaultPath += INFERENCE_INI_FILENAME;
 }
@@ -65,11 +65,11 @@ int InferenceInI::LoadInI()
                        iniparser_getstring(dict, "inference backend:supported backend types", (char *) mDefaultBackend.c_str()));
 
        size_t pos = 0;
-       while ((pos = list.find(mDelimeter)) != std::string::npos) {
+       while ((pos = list.find(mDelimiter)) != std::string::npos) {
                std::string tmp = list.substr(0, pos);
                mSupportedInferenceBackend.push_back(atoi(tmp.c_str()));
 
-               list.erase(0, pos + mDelimeter.length());
+               list.erase(0, pos + mDelimiter.length());
        }
        mSupportedInferenceBackend.push_back(atoi(list.c_str()));
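The loop above is a plain find/substr/erase tokenizer with a trailing token; on a sample value it behaves like this (hypothetical input, same logic, assuming <string>, <vector> and <cstdlib>):

    std::string list = "0,1,2"; /* assumed ini value */
    std::vector<int> backends;
    size_t pos = 0;
    while ((pos = list.find(",")) != std::string::npos) {
        backends.push_back(atoi(list.substr(0, pos).c_str()));
        list.erase(0, pos + 1);
    }
    backends.push_back(atoi(list.c_str())); /* trailing token -> {0, 1, 2} */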
 
index 4eaa389..5c02a0f 100644 (file)
@@ -61,7 +61,7 @@ int InputMetadata::GetTensorInfo(JsonObject *root, std::string key_name)
 
        std::map<std::string, LayerInfo>().swap(layer);
        // TODO: handling error
-       // FIXEME: LayerInfo.set()??
+       // FIXME: LayerInfo.set()??
        LayerInfo info;
 
        info.name = static_cast<const char *>(json_object_get_string_member(object, "name"));
index 4405a6f..d010195 100644 (file)
@@ -43,7 +43,7 @@ int ObjectDecoder::init()
 
                // mNumberOfObjects is set again if INFERENCE_BOX_DECODING_TYPE_BYPASS.
                // Otherwise it is set already within ctor.
-               mNumberOfOjects = mTensorBuffer.getValue<int>(mMeta.GetNumberName(), indexes[0]);
+               mNumberOfObjects = mTensorBuffer.getValue<int>(mMeta.GetNumberName(), indexes[0]);
        } else if (mMeta.GetBoxDecodingType() == INFERENCE_BOX_DECODING_TYPE_SSD_ANCHOR) {
                if (mMeta.GetBoxDecodeInfo().IsAnchorBoxEmpty()) {
                        LOGE("Anchor boxes are required but empty.");
@@ -138,7 +138,7 @@ int ObjectDecoder::decode()
        BoxesList boxList;
        Boxes boxes;
        int ret = MEDIA_VISION_ERROR_NONE;
-       int totalIdx = mNumberOfOjects;
+       int totalIdx = mNumberOfObjects;
 
        for (int idx = 0; idx < totalIdx; ++idx) {
                if (mMeta.GetBoxDecodingType() == INFERENCE_BOX_DECODING_TYPE_BYPASS) {
@@ -155,7 +155,7 @@ int ObjectDecoder::decode()
                        for (auto &anchorBox : mMeta.GetBoxDecodeInfo().GetAnchorBoxAll()) {
                                anchorIdx++;
 
-                               float score = decodeScore(anchorIdx * mNumberOfOjects + idx);
+                               float score = decodeScore(anchorIdx * mNumberOfObjects + idx);
 
                                if (score <= 0.0f)
                                        continue;
@@ -256,8 +256,8 @@ void ObjectDecoder::decodeYOLO(BoxesList &boxesList)
        box::AnchorParam &yoloAnchor = decodeInfo.anchorParam;
 
        //offsetAnchors is 3 which is number of BOX
-       mNumberOfOjects = mBoxOffset / yoloAnchor.offsetAnchors - 5;
-       boxesList.resize(mNumberOfOjects);
+       mNumberOfObjects = mBoxOffset / yoloAnchor.offsetAnchors - 5;
+       boxesList.resize(mNumberOfObjects);
 
        for (auto strideIdx = 0; strideIdx < yoloAnchor.offsetAnchors; strideIdx++) {
                auto &stride = yoloAnchor.strides[strideIdx];
@@ -271,17 +271,17 @@ void ObjectDecoder::decodeYOLO(BoxesList &boxesList)
                                //for each BOX
                                //handle order is (H,W,A)
                                float boxScore =
-                                               decodeYOLOScore(anchorIdx * mBoxOffset + (mNumberOfOjects + 5) * offset + 4, strideIdx);
+                                               decodeYOLOScore(anchorIdx * mBoxOffset + (mNumberOfObjects + 5) * offset + 4, strideIdx);
 
                                auto anchorBox = decodeInfo.vAnchorBoxes[strideIdx][anchorIdx * yoloAnchor.offsetAnchors + offset];
 
-                               for (int objIdx = 0; objIdx < mNumberOfOjects; ++objIdx) { //each box to every object
+                               for (int objIdx = 0; objIdx < mNumberOfObjects; ++objIdx) { //each box to every object
                                        float objScore = decodeYOLOScore(
-                                                       anchorIdx * mBoxOffset + (mNumberOfOjects + 5) * offset + 5 + objIdx, strideIdx);
+                                                       anchorIdx * mBoxOffset + (mNumberOfObjects + 5) * offset + 5 + objIdx, strideIdx);
 
                                        if (boxScore * objScore < mMeta.GetScoreThreshold())
                                                continue;
-                                       Box box = decodeYOLOBox(anchorIdx, objScore, objIdx, (mNumberOfOjects + 5) * offset, strideIdx);
+                                       Box box = decodeYOLOBox(anchorIdx, objScore, objIdx, (mNumberOfObjects + 5) * offset, strideIdx);
 
                                        if (!decodeInfo.vAnchorBoxes.empty()) {
                                                box.location.x = (box.location.x * 2 + anchorBox.x) * stride / mScaleW;
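To make the `mBoxOffset / yoloAnchor.offsetAnchors - 5` arithmetic concrete: each anchor row carries 4 box coordinates, 1 objectness score, and one score per class, so for an assumed 80-class COCO-style head (illustrative numbers, not taken from this repository):

    int boxOffset = 255;                                 // 3 * (4 + 1 + 80)
    int offsetAnchors = 3;                               // anchor boxes per cell
    int numberOfObjects = boxOffset / offsetAnchors - 5; // = 80 classes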
index bf7813b..01600bf 100644 (file)
@@ -121,7 +121,7 @@ int OutputMetadata::GetPostProcess(JsonObject *root, LayerInfo &layer)
                        return ret;
                }
 
-               // addtional parsing is required according to decoding type
+               // additional parsing is required according to decoding type
                if (box.GetDecodingType() != INFERENCE_BOX_DECODING_TYPE_BYPASS) {
                        int ret = box.ParseDecodeInfo(object);
                        if (ret != MEDIA_VISION_ERROR_NONE) {
index f9d237d..3601b25 100644 (file)
@@ -257,21 +257,21 @@ cv::Vec2f Posture::getUnitVectors(cv::Point point1, cv::Point point2)
 
 float Posture::cosineSimilarity(std::vector<cv::Vec2f> vec1, std::vector<cv::Vec2f> vec2, int size)
 {
-       float numer = 0.0f;
+       float numerator = 0.0f;
        float denom1 = 0.0f;
        float denom2 = 0.0f;
 
        float value = 0.0f;
 
        for (int k = 0; k < size; ++k) {
-               numer = denom1 = denom2 = 0.0f;
+               numerator = denom1 = denom2 = 0.0f;
                for (int dim = 0; dim < 2; ++dim) {
-                       numer += (vec1[k][dim] * vec2[k][dim]);
+                       numerator += (vec1[k][dim] * vec2[k][dim]);
                        denom1 += (vec1[k][dim] * vec1[k][dim]);
                        denom2 += (vec2[k][dim] * vec2[k][dim]);
                }
-               LOGI("similarity: %f", numer / sqrt(denom1 * denom2));
-               value += numer / sqrt(denom1 * denom2);
+               LOGI("similarity: %f", numerator / sqrt(denom1 * denom2));
+               value += numerator / sqrt(denom1 * denom2);
        }
 
        return value;
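This loop computes a sum of cosine similarities; the accumulators hold the dot product (numerator) and the two squared norms (denominators) of:

\[ \mathrm{value} = \sum_{k=0}^{\mathrm{size}-1} \frac{\sum_{d=0}^{1} \mathrm{vec1}_{k,d}\,\mathrm{vec2}_{k,d}}{\sqrt{\sum_{d=0}^{1} \mathrm{vec1}_{k,d}^{2}}\,\sqrt{\sum_{d=0}^{1} \mathrm{vec2}_{k,d}^{2}}} \]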
index 4e3b083..6d2943a 100644 (file)
@@ -30,7 +30,7 @@ template<typename T, typename V> FacialLandmarkAdapter<T, V>::FacialLandmarkAdap
 {
        // In default, Mobilenet v1 ssd model will be used.
        // If other model is set by user then strategy pattern will be used
-       // to create its corresponding concerte class by calling create().
+       // to create its corresponding concrete class by calling create().
        _landmark_detection = make_unique<FldTweakCnn>(LandmarkDetectionTaskType::FLD_TWEAK_CNN);
 }
 
index 5bbad77..3ba82d1 100644 (file)
@@ -30,7 +30,7 @@ template<typename T, typename V> PoseLandmarkAdapter<T, V>::PoseLandmarkAdapter(
 {
        // In default, Mobilenet v1 ssd model will be used.
        // If other model is set by user then strategy pattern will be used
-       // to create its corresponding concerte class by calling create().
+       // to create its corresponding concrete class by calling create().
        _landmark_detection = make_unique<PldCpm>(LandmarkDetectionTaskType::PLD_CPM);
 }
 
index 430dced..8a10927 100644 (file)
@@ -62,7 +62,7 @@ protected:
 
        /**
         * @brief parse postprocess node from a given meta file.
-        *        This is a pure virtual funcation so each derived class
+        *        This is a pure virtual function so each derived class
         *        should implement this function properly.
         *
         * @param metaInfo A MetaInfo object to output tensor.
index 9ee8d6c..e718305 100644 (file)
@@ -30,7 +30,7 @@ template<typename T, typename V> FaceDetectionAdapter<T, V>::FaceDetectionAdapte
 {
        // In default, FD Mobilenet v1 ssd model will be used.
        // If other model is set by user then strategy pattern will be used
-       // to create its corresponding concerte class by calling create().
+       // to create its corresponding concrete class by calling create().
        _object_detection = make_unique<MobilenetV1Ssd>(ObjectDetectionTaskType::FD_MOBILENET_V1_SSD);
 }
 
index c6fde97..27feb9c 100644 (file)
@@ -31,7 +31,7 @@ template<typename T, typename V> ObjectDetectionAdapter<T, V>::ObjectDetectionAd
 {
        // In default, Mobilenet v1 ssd model will be used.
        // If other model is set by user then strategy pattern will be used
-       // to create its corresponding concerte class by calling create().
+       // to create its corresponding concrete class by calling create().
        _object_detection = make_unique<MobilenetV1Ssd>(ObjectDetectionTaskType::MOBILENET_V1_SSD);
 }
 
index bddeb47..d49964b 100644 (file)
@@ -54,9 +54,9 @@ void FeatureVectorManager::getVecFromImg(const string image_file, vector<float>
 
        resized.convertTo(floatSrc, CV_32FC3);
 
-       cv::Mat meaned = cv::Mat(floatSrc.size(), CV_32FC3, cv::Scalar(127.5f, 127.5f, 127.5f));
+       cv::Mat meant = cv::Mat(floatSrc.size(), CV_32FC3, cv::Scalar(127.5f, 127.5f, 127.5f));
 
-       cv::subtract(floatSrc, meaned, dst);
+       cv::subtract(floatSrc, meant, dst);
        dst /= 127.5f;
 
        vec.assign((float *) dst.data, (float *) dst.data + dst.total() * dst.channels());
@@ -75,10 +75,10 @@ void FeatureVectorManager::getVecFromRGB(unsigned char *in_data, vector<float> &
 
        resized.convertTo(floatSrc, CV_32FC3);
 
-       cv::Mat meaned = cv::Mat(floatSrc.size(), CV_32FC3, cv::Scalar(127.5f, 127.5f, 127.5f));
+       cv::Mat meant = cv::Mat(floatSrc.size(), CV_32FC3, cv::Scalar(127.5f, 127.5f, 127.5f));
        cv::Mat dst;
 
-       cv::subtract(floatSrc, meaned, dst);
+       cv::subtract(floatSrc, meant, dst);
        dst /= 127.5f;
 
        vec.assign((float *) dst.data, (float *) dst.data + dst.total() * dst.channels());
@@ -91,9 +91,9 @@ void FeatureVectorManager::getVecFromXRGB(unsigned char *in_data, vector<float>
 
        cv::Mat split_rgbx[4];
        cv::split(argb, split_rgbx);
-       cv::Mat splitted[] = { split_rgbx[0], split_rgbx[1], split_rgbx[2] };
+       cv::Mat split[] = { split_rgbx[0], split_rgbx[1], split_rgbx[2] };
        cv::Mat rgb;
-       cv::merge(splitted, 3, rgb);
+       cv::merge(split, 3, rgb);
 
        cv::Mat resized;
 
@@ -103,11 +103,11 @@ void FeatureVectorManager::getVecFromXRGB(unsigned char *in_data, vector<float>
 
        resized.convertTo(floatSrc, CV_32FC3);
 
-       cv::Mat meaned = cv::Mat(floatSrc.size(), CV_32FC3, cv::Scalar(127.5f, 127.5f, 127.5f));
+       cv::Mat meant = cv::Mat(floatSrc.size(), CV_32FC3, cv::Scalar(127.5f, 127.5f, 127.5f));
 
        cv::Mat dst;
 
-       cv::subtract(floatSrc, meaned, dst);
+       cv::subtract(floatSrc, meant, dst);
        dst /= 127.5f;
 
        vec.assign((float *) dst.data, (float *) dst.data + dst.total() * dst.channels());
index 8bb4011..77e8b49 100644 (file)
@@ -85,10 +85,10 @@ public:
         * @param [in] eventType        Type of the event
         * @param [in] videoStreamId    Video stream identificator
         * @param [in] engineCfg        The engine configuration for event trigger
-        * @param [in] callback         The callback to be called if event will be occured
+        * @param [in] callback         The callback to be called when the event occurs
         * @param [in] user_data        The user data to be passed to the callback function
         * @param [in] numberOfPoints    The number of ROI points
-        * @param [in] roi               The intput array with ROI points
+        * @param [in] roi               The input array with ROI points
         * @param [in] isInternal        Interpretation event as internal in surveillance
         * @return @c 0 on success, otherwise a negative error value
         */
index 74829f0..3cc6805 100644 (file)
@@ -51,10 +51,10 @@ public:
         * @param [in] eventTrigger      The event trigger to be register (NULL if internal)
         * @param [in] triggerId         Unique event trigger identifier to be register
         * @param [in] videoStreamId     Video stream identifier
-        * @param [in] callback          The callback to be called if event will be occured
+        * @param [in] callback          The callback to be called when the event occurs
         * @param [in] user_data         The user data to be passed to the callback function
         * @param [in] numberOfPoints    The number of ROI points
-        * @param [in] roi               The intput array with ROI points
+        * @param [in] roi               The input array with ROI points
         * @param [in] isInternal        Interpretation event as internal in surveillance
         */
        EventTrigger(mv_surveillance_event_trigger_h eventTrigger, long int triggerId, int videoStreamId,
@@ -108,7 +108,7 @@ public:
         * @brief Checks if callback with the identifier is subscribed.
         *
         * @since_tizen 3.0
-        * @return true if suscribed, false otherwise
+        * @return true if subscribed, false otherwise
         */
        bool isCallbackSubscribed(long int triggerId) const;
 
@@ -118,10 +118,10 @@ public:
         * @since_tizen 3.0
         * @param [in] eventTrigger      The event trigger to be register (NULL if internal)
         * @param [in] triggerId         Unique event trigger identifier to be subscribed
-        * @param [in] callback          The callback to be called if event will be occured
+        * @param [in] callback          The callback to be called when the event occurs
         * @param [in] user_data         The user data to be passed to the callback function
         * @param [in] numberOfPoints    The number of ROI points
-        * @param [in] roi               The intput array with ROI points
+        * @param [in] roi               The input array with ROI points
         * @param [in] isInternal        Interpretation event as internal in surveillance
         * @return @c true on success, false otherwise
         */
@@ -153,7 +153,7 @@ public:
         * @param [in, out] image     The input image where ROI will be applied
         * @param [in] imageWidth     The input image width
         * @param [in] imageHeight    The input image height
-        * @param [in] scalePoints    True if ROI points must be scaled, false oterwise
+        * @param [in] scalePoints    True if ROI points must be scaled, false otherwise
         * @param [in] scaleX         The scale for X ROI point coordinate
         * @param [in] scaleY         The scale for Y ROI point coordinate
         * @return @c true on success, false otherwise
index f593f68..db2b473 100644 (file)
@@ -73,10 +73,10 @@ public:
         * @param [in] eventTrigger      The event trigger to be register (NULL if internal)
         * @param [in] triggerId         Unique event trigger identifier to be register
         * @param [in] videoStreamId     Video stream identifier
-        * @param [in] callback          The callback to be called if event will be occured
+        * @param [in] callback          The callback to be called when the event occurs
         * @param [in] user_data         The user data to be passed to the callback function
         * @param [in] numberOfPoints    The number of ROI points
-        * @param [in] roi               The intput array with ROI points
+        * @param [in] roi               The input array with ROI points
         * @param [in] isInternal        Interpretation event as internal in surveillance
         */
        EventTriggerMovementDetection(mv_surveillance_event_trigger_h eventTrigger, long int triggerId, int videoStreamId,
index 3e9d947..a9236d6 100644 (file)
@@ -19,7 +19,7 @@
 
 /**
  * @file  EventTriggerPersonAppearance.h
- * @brief This file contains interface for person appeared / disapeared events.
+ * @brief This file contains interface for person appeared / disappeared events.
  */
 
 #include "EventTrigger.h"
@@ -39,7 +39,7 @@ namespace surveillance
 {
 /**
  * @class EventResultPersonAppearance
- * @brief This class contains person appeared / disapeared event results.
+ * @brief This class contains person appeared / disappeared event results.
  *
  * @since_tizen 3.0
  */
@@ -70,7 +70,7 @@ public:
 
 /**
  * @class EventTriggerPersonAppearance
- * @brief This class contains person appeared / disapeared events.
+ * @brief This class contains person appeared / disappeared events.
  *
  * @since_tizen 3.0
  */
@@ -84,10 +84,10 @@ public:
         * @param [in] eventTrigger      The event trigger to be register (NULL if internal)
         * @param [in] triggerId         Unique event trigger identifier to be register
         * @param [in] videoStreamId     Video stream identifier
-        * @param [in] callback          The callback to be called if event will be occured
+        * @param [in] callback          The callback to be called when the event occurs
         * @param [in] user_data         The user data to be passed to the callback function
         * @param [in] numberOfPoints    The number of ROI points
-        * @param [in] roi               The intput array with ROI points
+        * @param [in] roi               The input array with ROI points
         * @param [in] isInternal        Interpretation event as internal in surveillance
         */
        EventTriggerPersonAppearance(mv_surveillance_event_trigger_h eventTrigger, long int triggerId, int videoStreamId,
index 57f9aea..bbfcd1a 100644 (file)
@@ -36,7 +36,7 @@ namespace mediavision
 namespace surveillance
 {
 /**
- * @class EventResultPersonRecogniton
+ * @class EventResultPersonRecognition
  * @brief This class contains person recognized event results.
  *
  * @since_tizen 3.0
@@ -58,7 +58,7 @@ public:
 public:
        MVRectangles __locations; /**< Persons locations */
 
-       IntVector __faceLabels; /**< Persons face lables */
+       IntVector __faceLabels; /**< Persons face labels */
 
        DoubleVector __confidences; /**< Persons face recognition confidences */
 };
@@ -79,10 +79,10 @@ public:
         * @param [in] eventTrigger      The event trigger to be register (NULL if internal)
         * @param [in] triggerId         Unique event trigger identifier to be register
         * @param [in] videoStreamId     Video stream identifier
-        * @param [in] callback          The callback to be called if event will be occured
+        * @param [in] callback          The callback to be called when the event occurs
         * @param [in] user_data         The user data to be passed to the callback function
         * @param [in] numberOfPoints    The number of ROI points
-        * @param [in] roi               The intput array with ROI points
+        * @param [in] roi               The input array with ROI points
         * @param [in] isInternal        Interpretation event as internal in surveillance
         */
        EventTriggerPersonRecognition(mv_surveillance_event_trigger_h eventTrigger, long int triggerId, int videoStreamId,
index d884805..966f202 100644 (file)
@@ -47,7 +47,7 @@ public:
         *
         * @since_tizen 3.0
         * @param [in] mvSource     The input media source handle
-        * @param [out] cvSource    The outut matrix with gray scaled image
+        * @param [out] cvSource    The output matrix with gray scaled image
         * @return @c 0 on success, otherwise a negative error value
         */
        static int convertSourceMVRGB2GrayCVNeon(mv_source_h mvSource, cv::Mat &cvSource);
index 9207316..7f276ed 100644 (file)
@@ -25,7 +25,7 @@ extern "C" {
 
 /**
  * @brief Gets mask buffer from buffer with known size.
- * @details Mask buffer values: 0 ouside polygon and 255 inside polygon.
+ * @details Mask buffer values: 0 outside polygon and 255 inside polygon.
  *
  * @since_tizen 3.0
  * @param [in] buffer_width     The buffer width
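One way to produce a mask with exactly these semantics (0 outside the polygon, 255 inside) is OpenCV's fillPoly; a hypothetical sketch with assumed buffer_height and polygon arguments, not this module's implementation:

    cv::Mat mask = cv::Mat::zeros(buffer_height, buffer_width, CV_8UC1); /* 0 outside */
    std::vector<std::vector<cv::Point> > polys = { polygon };
    cv::fillPoly(mask, polys, cv::Scalar(255));                          /* 255 inside */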
index e8781e9..51b997b 100644 (file)
@@ -34,7 +34,7 @@ namespace surveillance
 // LCOV_EXCL_START
 using namespace cv;
 
-static const int MAX_VALUE_NAME_LENGHT = 255;
+static const int MAX_VALUE_NAME_LENGTH = 255;
 
 static const int DEFAULT_SKIP_FRAMES_COUNT = 6;
 
@@ -157,30 +157,30 @@ int EventResultPersonAppearance::getResultValue(const char *valueName, void *val
                return MEDIA_VISION_ERROR_INVALID_PARAMETER;
        }
 
-       if (strncmp(valueName, MV_SURVEILLANCE_PERSONS_APPEARED_NUMBER, MAX_VALUE_NAME_LENGHT) == 0) {
+       if (strncmp(valueName, MV_SURVEILLANCE_PERSONS_APPEARED_NUMBER, MAX_VALUE_NAME_LENGTH) == 0) {
                size_t *const numberOfAppearedPersons = (size_t *) value;
                *numberOfAppearedPersons = __appearedLocations.size();
-       } else if (strncmp(valueName, MV_SURVEILLANCE_PERSONS_APPEARED_LOCATIONS, MAX_VALUE_NAME_LENGHT) == 0) {
+       } else if (strncmp(valueName, MV_SURVEILLANCE_PERSONS_APPEARED_LOCATIONS, MAX_VALUE_NAME_LENGTH) == 0) {
                mv_rectangle_s *const appearedLocations = (mv_rectangle_s *) value;
 
                const size_t numberOfAppearedPersons = __appearedLocations.size();
 
                for (size_t i = 0u; i < numberOfAppearedPersons; ++i)
                        appearedLocations[i] = __appearedLocations[i];
-       } else if (strncmp(valueName, MV_SURVEILLANCE_PERSONS_TRACKED_NUMBER, MAX_VALUE_NAME_LENGHT) == 0) {
+       } else if (strncmp(valueName, MV_SURVEILLANCE_PERSONS_TRACKED_NUMBER, MAX_VALUE_NAME_LENGTH) == 0) {
                size_t *const numberOfTrackedPersons = (size_t *) value;
                *numberOfTrackedPersons = __trackedLocations.size();
-       } else if (strncmp(valueName, MV_SURVEILLANCE_PERSONS_TRACKED_LOCATIONS, MAX_VALUE_NAME_LENGHT) == 0) {
+       } else if (strncmp(valueName, MV_SURVEILLANCE_PERSONS_TRACKED_LOCATIONS, MAX_VALUE_NAME_LENGTH) == 0) {
                mv_rectangle_s *const trackedLocations = (mv_rectangle_s *) value;
 
                const size_t numberOfTrackedPersons = __trackedLocations.size();
 
                for (size_t i = 0u; i < numberOfTrackedPersons; ++i)
                        trackedLocations[i] = __trackedLocations[i];
-       } else if (strncmp(valueName, MV_SURVEILLANCE_PERSONS_DISAPPEARED_NUMBER, MAX_VALUE_NAME_LENGHT) == 0) {
+       } else if (strncmp(valueName, MV_SURVEILLANCE_PERSONS_DISAPPEARED_NUMBER, MAX_VALUE_NAME_LENGTH) == 0) {
                size_t *const numberOfDisappearedPersons = (size_t *) value;
                *numberOfDisappearedPersons = __disappearedLocations.size();
-       } else if (strncmp(valueName, MV_SURVEILLANCE_PERSONS_DISAPPEARED_LOCATIONS, MAX_VALUE_NAME_LENGHT) == 0) {
+       } else if (strncmp(valueName, MV_SURVEILLANCE_PERSONS_DISAPPEARED_LOCATIONS, MAX_VALUE_NAME_LENGTH) == 0) {
                mv_rectangle_s *const disappearedLocations = (mv_rectangle_s *) value;
 
                const size_t numberOfDisappearedPersons = __disappearedLocations.size();
index 750402b..5a3243f 100644 (file)
@@ -27,7 +27,7 @@ namespace mediavision
 {
 namespace surveillance
 {
-static const int MAX_VALUE_NAME_LENGHT = 255;
+static const int MAX_VALUE_NAME_LENGTH = 255;
 
 namespace
 {
@@ -54,20 +54,20 @@ int EventResultPersonRecognition::getResultValue(const char *valueName, void *va
 
        const size_t numberOfPersons = __locations.size();
 
-       if (strncmp(valueName, MV_SURVEILLANCE_PERSONS_RECOGNIZED_NUMBER, MAX_VALUE_NAME_LENGHT) == 0) {
+       if (strncmp(valueName, MV_SURVEILLANCE_PERSONS_RECOGNIZED_NUMBER, MAX_VALUE_NAME_LENGTH) == 0) {
                size_t *outNumberOfPersons = (size_t *) value;
                *outNumberOfPersons = numberOfPersons;
-       } else if (strncmp(valueName, MV_SURVEILLANCE_PERSONS_RECOGNIZED_LOCATIONS, MAX_VALUE_NAME_LENGHT) == 0) {
+       } else if (strncmp(valueName, MV_SURVEILLANCE_PERSONS_RECOGNIZED_LOCATIONS, MAX_VALUE_NAME_LENGTH) == 0) {
                mv_rectangle_s *locations = (mv_rectangle_s *) value;
 
                for (size_t i = 0; i < numberOfPersons; ++i)
                        locations[i] = __locations[i];
-       } else if (strncmp(valueName, MV_SURVEILLANCE_PERSONS_RECOGNIZED_LABELS, MAX_VALUE_NAME_LENGHT) == 0) {
+       } else if (strncmp(valueName, MV_SURVEILLANCE_PERSONS_RECOGNIZED_LABELS, MAX_VALUE_NAME_LENGTH) == 0) {
                int *labels = (int *) value;
 
                for (size_t i = 0; i < numberOfPersons; ++i)
                        labels[i] = __faceLabels[i];
-       } else if (strncmp(valueName, MV_SURVEILLANCE_PERSONS_RECOGNIZED_CONFIDENCES, MAX_VALUE_NAME_LENGHT) == 0) {
+       } else if (strncmp(valueName, MV_SURVEILLANCE_PERSONS_RECOGNIZED_CONFIDENCES, MAX_VALUE_NAME_LENGTH) == 0) {
                double *confidences = (double *) value;
 
                for (size_t i = 0; i < numberOfPersons; ++i)
index 5155327..9e7dd07 100644 (file)
@@ -25,4 +25,4 @@ Another point is Tizen tct has huge amount of dataset of other packages which is
 
 ### Why gtest?
 I tried with libcheck but since Tizen API is C/C++, hard to cover all of C++ modules (interal API is more rely on C++).  
-libcheck also needs runtime dependancy.
\ No newline at end of file
+libcheck also needs runtime dependency.
\ No newline at end of file
index 44d87a0..677f13b 100644 (file)
@@ -42,7 +42,7 @@ void print_success_result(const char *action_name)
 void print_action_result(const char *action_name, int action_return_value, notification_type_e notification_type_e)
 {
        switch (notification_type_e) {
-       case FAIL_OR_SUCCESSS:
+       case FAIL_OR_SUCCESS:
                if (MEDIA_VISION_ERROR_NONE != action_return_value)
                        print_fail_result(action_name, action_return_value);
                else
index c5593d4..7bd316f 100644 (file)
@@ -25,7 +25,7 @@
 extern "C" {
 #endif /* __cplusplus */
 
-typedef enum { FAIL_OR_SUCCESSS, FAIL_OR_DONE } notification_type_e;
+typedef enum { FAIL_OR_SUCCESS, FAIL_OR_DONE } notification_type_e;
 
 /**
  * @brief Prints success result of action.
index a3f92a2..a703786 100644 (file)
@@ -71,12 +71,12 @@ typedef struct _mv_video_writer_s {
        unsigned long buffer_size;
 } mv_video_writer_s;
 
-/* video reader internal funcitons */
+/* video reader internal functions */
 static int _mv_video_reader_create_internals(mv_video_reader_s *reader);
 static int _mv_video_reader_link_internals(mv_video_reader_s *reader);
 static int _mv_video_reader_state_change(mv_video_reader_s *reader, GstState state);
 
-/* video writer internal funciton */
+/* video writer internal functions */
 static int _mv_video_writer_create_internals(mv_video_writer_s *writer);
 static int _mv_video_writer_link_internals(mv_video_writer_s *writer);
 static int _mv_video_writer_state_change(mv_video_writer_s *writer, GstState state);
index 54058d5..6305ec1 100644 (file)
@@ -357,7 +357,7 @@ int egl_init_with_platform_window_surface(int gles_version, int depth_size, int
 
        ret = eglMakeCurrent(s_dpy, s_sfc, s_sfc, s_ctx);
        if (ret != EGL_TRUE) {
-               LOGE("Falied to call eglMakeCurrent()");
+               LOGE("Failed to call eglMakeCurrent()");
                return MEDIA_VISION_ERROR_INTERNAL;
        }
 
index 27d30d2..a85e258 100644 (file)
@@ -478,7 +478,7 @@ int perform_mv_face_recognition_model_add_face_example(mv_face_recognition_model
                        printf(TEXT_RED "Can't read from specified directory (%s)\n" TEXT_RESET, in_file_name);
                }
        } else {
-               *notification_type = FAIL_OR_SUCCESSS;
+               *notification_type = FAIL_OR_SUCCESS;
                mv_rectangle_s roi;
                err = add_single_example(model, in_file_name, &roi, NULL);
        }
@@ -881,7 +881,7 @@ int perform_recognize()
 
        while (!sel_opt) {
                sel_opt = show_menu("Select action:", options, names, 11);
-               notification_type_e notification_type = FAIL_OR_SUCCESSS;
+               notification_type_e notification_type = FAIL_OR_SUCCESS;
 
                switch (sel_opt) {
                case 1:
@@ -1787,7 +1787,7 @@ int perform_track()
 
        while (!sel_opt) {
                sel_opt = show_menu("Select action:", options, names, 6);
-               notification_type_e notification_type = FAIL_OR_SUCCESSS;
+               notification_type_e notification_type = FAIL_OR_SUCCESS;
 
                switch (sel_opt) {
                case 1:
index e898b66..3970212 100644 (file)
        "/opt/usr/home/owner/media/Others/mv_test/open_model_zoo/models/OD/tflite/od_yolo_v5_320x320.tflite"
 #define OD_TFLITE_META_YOLO_V5_320_PATH \
        "/opt/usr/home/owner/media/Others/mv_test/open_model_zoo/models/OD/tflite/od_yolo_v5_320x320.json"
-#define OD_LABLE_YOLO_V5_320_PATH \
+#define OD_LABEL_YOLO_V5_320_PATH \
        "/opt/usr/home/owner/media/Others/mv_test/open_model_zoo/models/OD/tflite/od_yolo_v5_label.txt"
 
 //Face Detection
 /*
  * Hosted models
  */
-#define FLD_TFLITE_WIEGHT_TWEAKCNN_128_PATH \
+#define FLD_TFLITE_WEIGHT_TWEAKCNN_128_PATH \
        "/opt/usr/home/owner/media/Others/mv_test/open_model_zoo/models/FLD/tflite/fld_tweakcnn_128x128.tflite"
 #define FLD_TFLITE_META_TWEAKCNN_128_PATH \
        "/opt/usr/home/owner/media/Others/mv_test/open_model_zoo/models/FLD/tflite/fld_tweakcnn_128x128.json"
-#define FLD_TFLITE_WIEGHT_MEDIAPIPE_192_PATH \
+#define FLD_TFLITE_WEIGHT_MEDIAPIPE_192_PATH \
        "/opt/usr/home/owner/media/Others/mv_test/open_model_zoo/models/FLD/tflite/fld_mediapipe_192x192.tflite"
 #define FLD_TFLITE_META_MEDIAPIPE_192_PATH \
        "/opt/usr/home/owner/media/Others/mv_test/open_model_zoo/models/FLD/tflite/fld_mediapipe_192x192.json"
@@ -1173,7 +1173,7 @@ int perform_object_detection()
        } break;
        case 7: {
                err = engine_config_user_hosted_tflite_cpu(engine_cfg, OD_TFLITE_WEIGHT_YOLO_V5_320_PATH,
-                                                                                                  OD_LABLE_YOLO_V5_320_PATH, OD_TFLITE_META_YOLO_V5_320_PATH);
+                                                                                                  OD_LABEL_YOLO_V5_320_PATH, OD_TFLITE_META_YOLO_V5_320_PATH);
        } break;
        }
        if (err != MEDIA_VISION_ERROR_NONE) {
@@ -1374,11 +1374,11 @@ int perform_facial_landmark_detection()
                err = perform_opencv_cnncascade(engine_cfg);
        } break;
        case 3: {
-               err = engine_config_hosted_tflite_cpu(engine_cfg, FLD_TFLITE_WIEGHT_TWEAKCNN_128_PATH,
+               err = engine_config_hosted_tflite_cpu(engine_cfg, FLD_TFLITE_WEIGHT_TWEAKCNN_128_PATH,
                                                                                          FLD_TFLITE_META_TWEAKCNN_128_PATH);
        } break;
        case 4: {
-               err = engine_config_hosted_tflite_cpu(engine_cfg, FLD_TFLITE_WIEGHT_MEDIAPIPE_192_PATH,
+               err = engine_config_hosted_tflite_cpu(engine_cfg, FLD_TFLITE_WEIGHT_MEDIAPIPE_192_PATH,
                                                                                          FLD_TFLITE_META_MEDIAPIPE_192_PATH);
        } break;
        }
index 22874c8..88262d6 100644 (file)
@@ -28,7 +28,7 @@
        TEST_RES_PATH         \
        "/res/inference/images/faceLandmark.jpg"
 
-#define FLD_TFLITE_WIEGHT_TWEAKCNN_128_PATH \
+#define FLD_TFLITE_WEIGHT_TWEAKCNN_128_PATH \
        TEST_RES_PATH                           \
        "/open_model_zoo/models/FLD/tflite/fld_tweakcnn_128x128.tflite"
 #define FLD_TFLITE_META_TWEAKCNN_128_PATH \
@@ -111,7 +111,7 @@ public:
 
 TEST_P(TestFaceLandmarkDetectionTflite, TweakCNN)
 {
-       engine_config_hosted_tflite_model(engine_cfg, FLD_TFLITE_WIEGHT_TWEAKCNN_128_PATH,
+       engine_config_hosted_tflite_model(engine_cfg, FLD_TFLITE_WEIGHT_TWEAKCNN_128_PATH,
                                                                          FLD_TFLITE_META_TWEAKCNN_128_PATH, _use_json_parser, _target_device_type);
        if (_use_json_parser) {
                inferenceFaceLandmark();
index 89987f7..7090790 100644 (file)
@@ -297,7 +297,7 @@ bool try_destroy_event_trigger(mv_surveillance_event_trigger_h trigger)
 {
        const int error = mv_surveillance_event_trigger_destroy(trigger);
        if (MEDIA_VISION_ERROR_NONE != error) {
-               PRINT_E("Error with code %d was occured when try to destroy "
+               PRINT_E("Error with code %d occurred when trying to destroy the "
                                "event trigger.",
                                error);
                return false;
@@ -465,7 +465,7 @@ void unsubscribe_from_event()
 
        const int error = mv_surveillance_unsubscribe_event_trigger(event_trigger, video_streams_ids[trigger_id]);
        if (error != MEDIA_VISION_ERROR_NONE) {
-               PRINT_E("Error with code %d was occured in unsubscribe event.", error);
+               PRINT_E("Error with code %d occurred while unsubscribing from the event.", error);
                return;
        }
 
@@ -590,7 +590,7 @@ void turn_on_off_saving_to_image()
 void detect_person_appeared_cb(mv_surveillance_event_trigger_h handle, mv_source_h source, int video_stream_id,
                                                           mv_surveillance_result_h event_result, void *user_data)
 {
-       PRINT_G("Person appeared / disappeared event was occured");
+       PRINT_G("Person appeared / disappeared event occurred");
        if (save_results_to_image)
                PRINT_G("Output image will be saved to /tmp/person_app.jpg.\n"
                                "Appeared locations - green;\n"
@@ -622,7 +622,7 @@ void detect_person_appeared_cb(mv_surveillance_event_trigger_h handle, mv_source
                                                                                                 &number_of_appeared_persons);
 
        if (error != MEDIA_VISION_ERROR_NONE) {
-               PRINT_E("Error with code %d was occured in getting number of "
+               PRINT_E("Error with code %d occurred while getting the number of "
                                "appeared persons.",
                                error);
                if (out_buffer_copy != NULL)
@@ -639,7 +639,7 @@ void detect_person_appeared_cb(mv_surveillance_event_trigger_h handle, mv_source
                                                                                         appeared_locations);
 
        if (error != MEDIA_VISION_ERROR_NONE) {
-               PRINT_E("Error with code %d was occured in getting locations of "
+               PRINT_E("Error with code %d occurred while getting the locations of "
                                "appeared persons.",
                                error);
 
@@ -669,7 +669,7 @@ void detect_person_appeared_cb(mv_surveillance_event_trigger_h handle, mv_source
                                                                                         &number_of_tracked_persons);
 
        if (error != MEDIA_VISION_ERROR_NONE) {
-               PRINT_E("Error with code %d was occured in getting number of "
+               PRINT_E("Error with code %d occurred while getting the number of "
                                "tracked persons.",
                                error);
 
@@ -690,7 +690,7 @@ void detect_person_appeared_cb(mv_surveillance_event_trigger_h handle, mv_source
                                                                                         tracked_locations);
 
        if (error != MEDIA_VISION_ERROR_NONE) {
-               PRINT_E("Error with code %d was occured in getting locations of "
+               PRINT_E("Error with code %d occurred while getting the locations of "
                                "tracked persons.",
                                error);
 
@@ -722,7 +722,7 @@ void detect_person_appeared_cb(mv_surveillance_event_trigger_h handle, mv_source
                                                                                         &number_of_disappeared_persons);
 
        if (error != MEDIA_VISION_ERROR_NONE) {
-               PRINT_E("Error with code %d was occured in getting number of "
+               PRINT_E("Error with code %d occurred while getting the number of "
                                "disappeared persons.",
                                error);
 
@@ -746,7 +746,7 @@ void detect_person_appeared_cb(mv_surveillance_event_trigger_h handle, mv_source
                                                                                         disappeared_locations);
 
        if (error != MEDIA_VISION_ERROR_NONE) {
-               PRINT_E("Error with code %d was occured in getting locations of "
+               PRINT_E("Error with code %d occurred while getting the locations of "
                                "disappeared persons.",
                                error);
 
@@ -807,7 +807,7 @@ void person_recognized_cb(mv_surveillance_event_trigger_h handle, mv_source_h so
                                                                                                 &number_of_persons);
 
        if (error != MEDIA_VISION_ERROR_NONE) {
-               PRINT_E("Error with code %d was occured in getting number of persons.", error);
+               PRINT_E("Error with code %d occurred while getting the number of persons.", error);
                return;
        }
 
@@ -818,7 +818,7 @@ void person_recognized_cb(mv_surveillance_event_trigger_h handle, mv_source_h so
        error = mv_surveillance_get_result_value(event_result, MV_SURVEILLANCE_PERSONS_RECOGNIZED_LOCATIONS, locations);
 
        if (error != MEDIA_VISION_ERROR_NONE) {
-               PRINT_E("Error with code %d was occured in getting locations of persons.", error);
+               PRINT_E("Error with code %d occurred while getting the locations of persons.", error);
 
                if (locations != NULL)
                        free(locations);
@@ -831,7 +831,7 @@ void person_recognized_cb(mv_surveillance_event_trigger_h handle, mv_source_h so
        error = mv_surveillance_get_result_value(event_result, MV_SURVEILLANCE_PERSONS_RECOGNIZED_LABELS, labels);
 
        if (error != MEDIA_VISION_ERROR_NONE) {
-               PRINT_E("Error with code %d was occured in getting labels of persons.", error);
+               PRINT_E("Error with code %d occurred while getting the labels of persons.", error);
 
                if (locations != NULL)
                        free(locations);
@@ -847,7 +847,7 @@ void person_recognized_cb(mv_surveillance_event_trigger_h handle, mv_source_h so
        error = mv_surveillance_get_result_value(event_result, MV_SURVEILLANCE_PERSONS_RECOGNIZED_CONFIDENCES, confidences);
 
        if (error != MEDIA_VISION_ERROR_NONE) {
-               PRINT_E("Error with code %d was occured in getting confidences of persons.", error);
+               PRINT_E("Error with code %d occurred while getting the confidences of persons.", error);
 
                if (locations != NULL)
                        free(locations);
@@ -926,7 +926,7 @@ void person_recognized_cb(mv_surveillance_event_trigger_h handle, mv_source_h so
 void movement_detected_cb(mv_surveillance_event_trigger_h event_trigger, mv_source_h source, int video_stream_id,
                                                  mv_surveillance_result_h event_result, void *user_data)
 {
-       PRINT_G("Movement detected event was occured");
+       PRINT_G("Movement detected event occurred");
        if (save_results_to_image)
                PRINT_G("Output image will be saved to /tmp/move_detect.jpg.\n"
                                "Movement detected locations - blue.");
@@ -936,7 +936,7 @@ void movement_detected_cb(mv_surveillance_event_trigger_h event_trigger, mv_sour
                                                                                                 &number_of_movement_regions);
 
        if (error != MEDIA_VISION_ERROR_NONE) {
-               PRINT_E("Error with code %d was occured in getting number of "
+               PRINT_E("Error with code %d occurred while getting the number of "
                                "movement regions.",
                                error);
 
@@ -950,7 +950,7 @@ void movement_detected_cb(mv_surveillance_event_trigger_h event_trigger, mv_sour
        error = mv_surveillance_get_result_value(event_result, MV_SURVEILLANCE_MOVEMENT_REGIONS, movement_regions);
 
        if (error != MEDIA_VISION_ERROR_NONE) {
-               PRINT_E("Error with code %d was occured in getting movement regions.", error);
+               PRINT_E("Error with code %d occurred while getting the movement regions.", error);
 
                if (movement_regions != NULL)
                        free(movement_regions);