An adapter is a function that converts raw network inference output to a metric-specific format.
There are two ways to set an adapter for a topology:
* Define adapter as a string.

```yml
adapter: classification
```
* Define adapter as a dictionary, using `type:` to set the adapter name. This approach gives you the opportunity to set additional parameters for the adapter if required.
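For example, a dictionary-style adapter definition might look like the sketch below (the parameter values are illustrative, not a recommendation):

```yml
adapter:
  type: yolo_v2
  classes: 20
  anchors: tiny_yolo_v2
```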
AccuracyChecker supports the following set of adapters:
* `classification` - converting output of classification model to `ClassificationPrediction` representation.
* `segmentation` - converting output of semantic segmentation model to `SegmentationPrediction` representation.
* `tiny_yolo_v1` - converting output of Tiny YOLO v1 model to `DetectionPrediction` representation.
* `reid` - converting output of reidentification model to `ReIdentificationPrediction` representation.
* `grn_workaround` - processing output with the addition of a Global Region Normalization layer.
* `yolo_v2` - converting output of YOLO v2 family models to `DetectionPrediction` representation.
    * `classes` - number of detection classes (default 20).
    * `anchors` - anchor values provided as a comma-separated list or one of the precomputed sets: `yolo_v2` and `tiny_yolo_v2`.
    * `coords` - number of bbox coordinates (default 4).
    * `num` - `num` parameter from the DarkNet configuration file (default 5).
* `yolo_v3` - converting output of YOLO v3 family models to `DetectionPrediction` representation.
    * `classes` - number of detection classes (default 80).
    * `anchors` - anchor values provided as a comma-separated list or the precomputed set `yolo_v3`.
    * `coords` - number of bbox coordinates (default 4).
    * `num` - `num` parameter from the DarkNet configuration file (default 3).
    * `threshold` - minimal objectness score value for valid detections (default 0.001).
    * `input_width` and `input_height` - network input width and height respectively (default 416).
    * `outputs` - list of output layer names (optional); if specified, exactly 3 output layers must be provided.
* `lpr` - converting output of license plate recognition model to `CharacterRecognitionPrediction` representation.
* `ssd` - converting output of SSD model to `DetectionPrediction` representation.
* `face_person_detection` - converting output of a face person detection model with 2 detection outputs to `ContainerPrediction`, where the values of the `face_out` and `person_out` parameters are used to identify each `DetectionPrediction` in the container.
    * `face_out` - face detection output layer name.
    * `person_out` - person detection output layer name.
* `attributes_recognition` - converting vehicle attributes recognition model output to `ContainerPrediction`, where the values of the `color_out` and `type_out` parameters are used to identify each `ClassificationPrediction` in the container.
    * `color_out` - vehicle color attribute output layer name.
    * `type_out` - vehicle type attribute output layer name.
* `head_pose` - converting head pose estimation model output to `ContainerPrediction`, where the parameter names `angle_pitch`, `angle_yaw` and `angle_roll` are used to identify each `RegressionPrediction` in the container.
    * `angle_pitch` - output layer name for pitch angle.
    * `angle_yaw` - output layer name for yaw angle.
    * `angle_roll` - output layer name for roll angle.
* `age_gender` - converting age gender recognition model output to `ContainerPrediction` with a `ClassificationPrediction` named `gender` for gender recognition, a `ClassificationPrediction` named `age_classification` and a `RegressionPrediction` named `age_error` for age recognition.
    * `age_out` - output layer name for age recognition.
    * `gender_out` - output layer name for gender recognition.
* `action_detection` - converting output of a model for person detection and action recognition tasks to `ContainerPrediction` with a `DetectionPrediction` for class-agnostic metric calculation and a `DetectionPrediction` for action recognition. The representations in the container are named `class_agnostic_prediction` and `action_prediction` respectively.
    * `priorbox_out` - name of the layer containing prior boxes in SSD format.
    * `loc_out` - name of the layer containing box coordinates in SSD format.
    * `main_conf_out` - name of the layer containing detection confidences.
    * `add_conf_out_prefix` - prefix used to generate the names of the layers containing action confidences if the topology has several such layers, or the full layer name otherwise.
    * `add_conf_out_count` - number of layers with action confidences (optional; can be omitted if action confidences are contained in a single layer).
    * `num_action_classes` - number of classes for action recognition.
    * `detection_threshold` - minimal detection confidence level for valid detections.
* `super_resolution` - converting output of single image super resolution network to `SuperResolutionPrediction`.
* `landmarks_regression` - converting output of model for landmarks regression to `FacialLandmarksPrediction`.
* `text_detection` - converting output of model for text detection to `TextDetectionPrediction`.
    * `pixel_class_out` - name of layer containing information related to text/no-text classification for each pixel.
    * `pixel_link_out` - name of layer containing information related to linkage between pixels and their neighbors.
* `human_pose_estimation` - converting output of model for human pose estimation to `PoseEstimationPrediction`.
    * `part_affinity_fields_out` - name of output layer with keypoint pairwise relations (part affinity fields).
    * `keypoints_heatmap_out` - name of output layer with keypoint heatmaps.
* `beam_search_decoder` - implementation of a CTC beam search decoder for symbol sequence recognition, converting model output to `CharacterRecognitionPrediction`.
    * `beam_size` - size of the beam to use during decoding (default 10).
    * `blank_label` - index of the CTC blank label.
    * `softmaxed_probabilities` - indicator that the model applies softmax to the output layer (default False).
* `gaze_estimation` - converting output of gaze estimation model to `GazeVectorPrediction`.
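To illustrate a container-style adapter from the list above, the sketch below configures `age_gender` with its output-layer parameters. The layer names `age_conv3` and `prob` are placeholders for this example; substitute the actual output names of your model:

```yml
adapter:
  type: age_gender
  age_out: age_conv3
  gender_out: prob
```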