<p>Let's answer your questions one by one:</p>
<ol>
<li><p>The mask is an input image, the same size as the source image, that controls <strong>where</strong> keypoint detection takes place. Sometimes you don't want to detect keypoints over the entire image, but only within a <strong>subsection</strong> of it, typically because some pre-processing step has already located the salient regions. For example, for face recognition you only want to detect keypoints on the face, not the entire image. As such, there may be a first step that gets a general idea of where the faces are, and the mask then restricts keypoint detection to those areas only.</p></li>
<li><p>Detecting and computing are two different things. Detecting determines <strong>which pixel locations</strong> in the image are valid keypoints; computing <strong>describes</strong> the keypoint at each of those locations. Interest point detectors succeed not only because the points they find are repeatable and robust, but because the method used to <strong>describe the keypoints</strong> is what makes them useful in practice.</p>
<p>These two stages correspond to detectors and descriptors respectively. There are frameworks, such as SIFT and SURF, that provide both detection and description. SIFT describes each keypoint with histograms of gradient orientations concatenated into a 128-dimensional vector (SURF's standard descriptor is 64-dimensional), and its detector finds extrema in a Difference-of-Gaussians scale space, which SURF in turn approximates with box filters. If I can suggest a link, take a look at this one: <a href="http://stackoverflow.com/questions/14808429/classification-of-detectors-extractors-and-matchers/14912160#14912160">Classification of detectors, extractors and matchers</a> - it covers the different detectors and descriptors, as well as methods for matching keypoints. The <code>useProvidedKeypoints</code> flag (in OpenCV: <a href="http://docs.opencv.org/2.4.1/modules/nonfree/doc/feature_detection.html#sift-operator" rel="nofollow">http://docs.opencv.org/2.4.1/modules/nonfree/doc/feature_detection.html#sift-operator</a>) means that you have already determined the pixel locations <strong>where in the image</strong> you want descriptors computed. With it set, SIFT bypasses the detection stage of the algorithm and simply computes the descriptors at those pixel locations.</p></li>
<li><p>The difference between Brute Force and FLANN (Fast Library for Approximate Nearest Neighbours - <a href="http://www.cs.ubc.ca/research/flann/" rel="nofollow">http://www.cs.ubc.ca/research/flann/</a>) lies in the mechanism for <strong>matching</strong> keypoints. For a given keypoint descriptor, you want to find its best match among the descriptors detected in the other image. Brute force compares it against <strong>all</strong> of them; FLANN builds an index and only compares it against a promising <strong>subset</strong>. FLANN performs approximate nearest-neighbour search in the high-dimensional descriptor space, which limits where it has to look. This makes it much faster than brute force, at the cost of occasionally missing the true nearest neighbour, so the right choice depends on your application.</p></li>
</ol>
<p>This tip was originally posted on <a href="http://stackoverflow.com/questions/28118773/OpenCV:%20SIFT%20detection%20and%20matching%20methods/28119121">Stack Overflow</a>.</p>