AKAZE and ORB planar tracking {#tutorial_akaze_tracking}
=============================

@prev_tutorial{tutorial_akaze_matching}
@next_tutorial{tutorial_homography}
Introduction
------------

In this tutorial we will compare *AKAZE* and *ORB* local features by using them to find matches between
video frames and to track object movements.
The algorithm is as follows (a compressed sketch of the per-frame loop is shown after the list):

-   Detect and describe keypoints on the first frame, manually set the object boundaries
-   For every next frame:
    -#  Detect and describe keypoints
    -#  Match them using a brute-force matcher
    -#  Estimate the homography transformation using RANSAC
    -#  Filter inliers from all the matches
    -#  Apply the homography transformation to the bounding box to find the object
    -#  Draw the bounding box and inliers, compute the inlier ratio as an evaluation metric
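Below is a compressed sketch of this loop, assuming an *AKAZE* detector and a Hamming-distance brute-force matcher; the constants `nn_match_ratio` and `ransac_thresh` mirror the ones used in the full source code further down, and the drawing and statistics steps are left out:
@code{.cpp}
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/calib3d.hpp>
#include <opencv2/videoio.hpp>

#include <vector>

using namespace cv;
using namespace std;

int main(int argc, char** argv)
{
    if(argc < 2) return 1;

    const float  nn_match_ratio = 0.8f; // nearest-neighbour ratio test threshold
    const double ransac_thresh  = 2.5;  // RANSAC reprojection threshold (pixels)

    VideoCapture cap(argv[1]);
    Mat first_frame, frame;
    if(!cap.read(first_frame)) return 1;

    // Detect and describe keypoints on the reference (first) frame
    Ptr<AKAZE> detector = AKAZE::create();
    Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce-Hamming");
    vector<KeyPoint> first_kp, kp;
    Mat first_desc, desc;
    detector->detectAndCompute(first_frame, noArray(), first_kp, first_desc);

    while(cap.read(frame)) {
        // 1. Detect and describe keypoints on the current frame
        detector->detectAndCompute(frame, noArray(), kp, desc);

        // 2. 2-nn matching with a ratio test
        vector< vector<DMatch> > matches;
        vector<Point2f> matched1, matched2;
        matcher->knnMatch(first_desc, desc, matches, 2);
        for(size_t i = 0; i < matches.size(); i++) {
            if(matches[i][0].distance < nn_match_ratio * matches[i][1].distance) {
                matched1.push_back(first_kp[matches[i][0].queryIdx].pt);
                matched2.push_back(kp[matches[i][0].trainIdx].pt);
            }
        }

        // 3. Homography estimation with RANSAC (needs at least 4 correspondences)
        if(matched1.size() < 4) continue;
        Mat inlier_mask;
        Mat homography = findHomography(matched1, matched2,
                                        RANSAC, ransac_thresh, inlier_mask);
        if(homography.empty()) continue;

        // 4.-6. Filtering inliers, projecting the bounding box and drawing
        //       are shown step by step in the Explanation section below.
    }
    return 0;
}
@endcode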
Data
----

To do the tracking we need a video and the object position on the first frame.
You can download our example video and data from
[here](https://docs.google.com/file/d/0B72G7D4snftJandBb0taLVJHMFk).
To run the code you have to specify the input (camera id or video_file). Then, select a bounding box with the mouse, and press any key to start tracking:
@code{.none}
./planar_tracking blais.mp4
@endcode
Source Code
-----------

@include cpp/tutorial_code/features2D/AKAZE_tracking/planar_tracking.cpp
Explanation
-----------

### Tracker class

This class implements the algorithm described above using the given feature detector and descriptor
matcher.
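Before going through the individual methods, here is a minimal sketch of how the class could be laid out. The member and method names follow the snippets below; the `Stats` struct is reduced to the single field used in this section:
@code{.cpp}
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>

#include <string>
#include <vector>

using namespace cv;
using namespace std;

struct Stats
{
    int keypoints; // number of keypoints detected on a frame
    // ... the full sample also tracks matches, inliers and the inlier ratio
};

class Tracker
{
public:
    Tracker(Ptr<Feature2D> _detector, Ptr<DescriptorMatcher> _matcher) :
        detector(_detector),
        matcher(_matcher)
    {}

    void setFirstFrame(const Mat frame, vector<Point2f> bb, string title, Stats& stats);
    Mat process(const Mat frame, Stats& stats);

protected:
    Ptr<Feature2D> detector;        // AKAZE or ORB
    Ptr<DescriptorMatcher> matcher; // brute-force Hamming matcher
    Mat first_frame, first_desc;    // reference frame and its descriptors
    vector<KeyPoint> first_kp;      // reference keypoints
    vector<Point2f> object_bb;      // object bounding box on the first frame
};
@endcode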
-   **Setting up the first frame**
    @code{.cpp}
    void Tracker::setFirstFrame(const Mat frame, vector<Point2f> bb, string title, Stats& stats)
    {
        first_frame = frame.clone();
        detector->detectAndCompute(first_frame, noArray(), first_kp, first_desc);
        stats.keypoints = (int)first_kp.size();
        drawBoundingBox(first_frame, bb);
        putText(first_frame, title, Point(0, 60), FONT_HERSHEY_PLAIN, 5, Scalar::all(0), 4);
        object_bb = bb;
    }
    @endcode
    We compute and store the keypoints and descriptors from the first frame and prepare it for
    output.

    We need to save the number of detected keypoints to make sure both detectors locate roughly the
    same number of them.
-   **Processing frames**

    -#  Locate keypoints and compute descriptors
        @code{.cpp}
        detector->detectAndCompute(frame, noArray(), kp, desc);
        @endcode
        To find matches between frames we have to locate the keypoints first.

        In this tutorial the detectors are set up to find about 1000 keypoints on each frame.
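        The detector setup might look like the sketch below; the threshold value 3e-4 comes from the
        full sample, and `stats.keypoints` holds the keypoint count saved from the first frame, so
        that *ORB* is capped at roughly the same count *AKAZE* produces:
        @code{.cpp}
        Ptr<AKAZE> akaze = AKAZE::create();
        akaze->setThreshold(3e-4);            // tuned to yield roughly 1000 keypoints
        Ptr<ORB> orb = ORB::create();
        orb->setMaxFeatures(stats.keypoints); // cap ORB at the count AKAZE produced
        @endcode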
    -#  Use the 2-nn matcher to find correspondences
        @code{.cpp}
        matcher->knnMatch(first_desc, desc, matches, 2);
        for(unsigned i = 0; i < matches.size(); i++) {
            if(matches[i][0].distance < nn_match_ratio * matches[i][1].distance) {
                matched1.push_back(first_kp[matches[i][0].queryIdx]);
                matched2.push_back(      kp[matches[i][0].trainIdx]);
            }
        }
        @endcode
        If the distance of the closest match is less than *nn_match_ratio* times the distance of the
        second closest one, we treat it as a match.
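        The matcher itself is created once, before processing; since both *AKAZE* and *ORB* produce
        binary descriptors, a Hamming-distance brute-force matcher is the right choice:
        @code{.cpp}
        Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce-Hamming");
        @endcode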
    -#  Use *RANSAC* to estimate the homography transformation
        @code{.cpp}
        homography = findHomography(Points(matched1), Points(matched2),
                                    RANSAC, ransac_thresh, inlier_mask);
        @endcode
        If there are at least 4 matches we can use random sample consensus to estimate the image
        transformation.
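        A sketch of that guard, following the structure of the full sample (`res` stands for the
        output image assembled earlier in *process*; on failure the statistics are zeroed and the
        frame is reported without a bounding box):
        @code{.cpp}
        if(matched1.size() >= 4) {
            homography = findHomography(Points(matched1), Points(matched2),
                                        RANSAC, ransac_thresh, inlier_mask);
        }
        if(matched1.size() < 4 || homography.empty()) {
            stats.inliers = 0; // no reliable transformation for this frame
            stats.ratio = 0;
            return res;
        }
        @endcode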
    -#  Save the inliers
        @code{.cpp}
        for(unsigned i = 0; i < matched1.size(); i++) {
            if(inlier_mask.at<uchar>(i)) {
                int new_i = static_cast<int>(inliers1.size());
                inliers1.push_back(matched1[i]);
                inliers2.push_back(matched2[i]);
                inlier_matches.push_back(DMatch(new_i, new_i, 0));
            }
        }
        @endcode
        Since *findHomography* computes the inliers, we only have to save the chosen points and
        matches.
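        With the inliers saved, the inlier ratio used as the evaluation metric is straightforward to
        compute; the `stats` fields mirror the `Stats` struct of the full sample:
        @code{.cpp}
        stats.matches = (int)matched1.size();
        stats.inliers = (int)inliers1.size();
        stats.ratio = stats.inliers * 1.0 / stats.matches;
        @endcode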
    -#  Project the object bounding box
        @code{.cpp}
        perspectiveTransform(object_bb, new_bb, homography);
        @endcode
        If there is a reasonable number of inliers, we can use the estimated transformation to locate
        the object.
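        Drawing the projected box and the inlier matches can be done with the sample's
        `drawBoundingBox` helper and the standard *drawMatches* function; `bb_min_inliers` is the
        minimum inlier count we require before trusting the box:
        @code{.cpp}
        Mat frame_with_bb = frame.clone();
        if(stats.inliers >= bb_min_inliers) {
            drawBoundingBox(frame_with_bb, new_bb);
        }
        Mat res;
        drawMatches(first_frame, inliers1, frame_with_bb, inliers2,
                    inlier_matches, res,
                    Scalar(255, 0, 0), Scalar(255, 0, 0));
        @endcode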
Results
-------

You can watch the resulting [video on YouTube](http://www.youtube.com/watch?v=LWY-w8AGGhE).