
# Matching images with different orientations and scales in MATLAB

Ray Phan
Mar 24, 2015
<p>Here's something to get you started. What you are asking for is a classic problem known as <a href="http://en.wikipedia.org/wiki/Image_registration" rel="nofollow">image registration</a>. Image registration seeks to find the correct homography that takes one image and aligns it with another. This involves finding interest points, or keypoints, that are common between the two images and determining which keypoints match up between the two images. Once you have these pairs of points, you estimate a homography matrix and use it to warp one of the images so that it is aligned with the other.</p> <p>I'm going to assume that you have the Computer Vision and Image Processing Toolboxes that are part of MATLAB. If you don't, then the answer that Maurits gave is a good alternative, and the VLFeat Toolbox is one that I have also used.</p> <p>First, let's read in the images directly from StackOverflow:</p>
<pre><code>im = imread('http://i.stack.imgur.com/vXqe8.png');
im2 = imread('http://i.stack.imgur.com/Pd7pt.png');
im_gray = rgb2gray(im);
im2_gray = rgb2gray(im2);
</code></pre>
<p>We also need to convert to grayscale, as the keypoint detection algorithms require a grayscale image. Next, we can use any feature detection algorithm that's part of MATLAB's Computer Vision System Toolbox (CVST). I'm going to use <a href="http://en.wikipedia.org/wiki/SURF" rel="nofollow">SURF</a>, as it is essentially the same as SIFT but with some minor yet key differences. You can use the <a href="http://www.mathworks.com/help/vision/ref/detectsurffeatures.html" rel="nofollow"><code>detectSURFFeatures</code></a> function that's part of the CVST toolbox; it accepts grayscale images. The output is a structure that contains a bunch of information about each feature point the algorithm detected for the image.
Let's apply that to both of the images (in grayscale).</p>
<pre><code>points = detectSURFFeatures(im_gray);
points2 = detectSURFFeatures(im2_gray);
</code></pre>
<p>Once we detect the features, it's now time to <strong>extract</strong> the descriptors that <strong>describe</strong> these keypoints. That can be done with <a href="http://www.mathworks.com/help/vision/ref/extractfeatures.html" rel="nofollow"><code>extractFeatures</code></a>. This takes in a grayscale image and the corresponding structure that was output from <code>detectSURFFeatures</code>. The output is a set of features and the valid keypoints after some post-processing.</p>
<pre><code>[features1, validPoints1] = extractFeatures(im_gray, points);
[features2, validPoints2] = extractFeatures(im2_gray, points2);
</code></pre>
<p>Now it's time to match the features between the two images. That can be done with <a href="http://www.mathworks.com/help/vision/ref/matchfeatures.html" rel="nofollow"><code>matchFeatures</code></a>, which takes in the features from the two images:</p>
<pre><code>indexPairs = matchFeatures(features1, features2);
</code></pre>
<p><code>indexPairs</code> is a 2D array where each row is one match: the first column is the index of a feature point from the first image, and the second column is the index of the feature point from the second image that it matched with. We use this to index into our valid points and pull out the points that actually match.</p>
<pre><code>matchedPoints1 = validPoints1(indexPairs(:, 1), :);
matchedPoints2 = validPoints2(indexPairs(:, 2), :);
</code></pre>
<p>We can then show which points matched by using <a href="http://www.mathworks.com/help/vision/ref/showmatchedfeatures.html" rel="nofollow"><code>showMatchedFeatures</code></a> like so.
We can put both images side by side and draw lines between the matching keypoints to see which ones matched.</p>
<pre><code>figure;
showMatchedFeatures(im, im2, matchedPoints1, matchedPoints2, 'montage');
</code></pre>
<p>This is what I get:</p> <p><img src="http://i.stack.imgur.com/310mM.png" alt="enter image description here"></p> <p>It isn't perfect, but it most certainly finds consistent matches between the two images.</p> <p>What we need to do next is find the homography matrix and warp the images. I'm going to use <a href="http://www.mathworks.com/help/vision/ref/estimategeometrictransform.html" rel="nofollow"><code>estimateGeometricTransform</code></a> so that we can find a transformation that warps one set of points to another. As Dima noted in his comments to me below, this <strong>robustly</strong> determines the best homography matrix via RANSAC. We can call <code>estimateGeometricTransform</code> like so:</p>
<pre><code>tform = estimateGeometricTransform(matchedPoints1.Location, ...
    matchedPoints2.Location, 'projective');
</code></pre>
<p>The first input takes in a set of <strong>input</strong> points, which are the points that you want to transform. The second input takes in a set of <strong>base points</strong>, which are the <strong>reference</strong> points. These points are what we want to match up to.</p> <p>In our case, we want to warp the points from the first image (the person standing up) to match the second image (the person leaning on his side), so the first input is the points from the first image and the second input is the points from the second image.</p> <p>For the matched points, we want to reference the <code>Location</code> field because it contains the coordinates of where the points actually matched between the two images. We also use <code>projective</code> to account for scale, shearing and rotation.
The output is an object that contains the estimated transformation.</p> <p>What we will do next is use <a href="http://www.mathworks.com/help/images/ref/imwarp.html" rel="nofollow"><code>imwarp</code></a> to <strong>warp</strong> the first image so that it aligns with the second.</p>
<pre><code>out = imwarp(im, tform);
</code></pre>
<p><code>out</code> will contain our warped image. If we show the second image and this output image side by side:</p>
<pre><code>figure;
subplot(1,2,1);
imshow(im2);
subplot(1,2,2);
imshow(out);
</code></pre>
<p>This is what we get:</p> <p><img src="http://i.stack.imgur.com/ZTT2I.png" alt="enter image description here"></p> <p>I'd say that's pretty good, don't you think?</p> <hr> <p>For your copying and pasting pleasure, here's what the full code looks like:</p>
<pre><code>im = imread('http://i.stack.imgur.com/vXqe8.png');
im2 = imread('http://i.stack.imgur.com/Pd7pt.png');
im_gray = rgb2gray(im);
im2_gray = rgb2gray(im2);

points = detectSURFFeatures(im_gray);
points2 = detectSURFFeatures(im2_gray);

[features1, validPoints1] = extractFeatures(im_gray, points);
[features2, validPoints2] = extractFeatures(im2_gray, points2);

indexPairs = matchFeatures(features1, features2);

matchedPoints1 = validPoints1(indexPairs(:, 1), :);
matchedPoints2 = validPoints2(indexPairs(:, 2), :);

figure;
showMatchedFeatures(im, im2, matchedPoints1, matchedPoints2, 'montage');

tform = estimateGeometricTransform(matchedPoints1.Location, ...
    matchedPoints2.Location, 'projective');

out = imwarp(im, tform);

figure;
subplot(1,2,1);
imshow(im2);
subplot(1,2,2);
imshow(out);
</code></pre>
<h1>Aside</h1> <p>Bear in mind that I used the <strong>default</strong> parameters for everything, so for <code>detectSURFFeatures</code>, <code>matchFeatures</code>, and so on, you may have to play around with the parameters to get consistent results across different pairs of images you try. I'll leave that to you as an exercise.
Take a look at the documentation links above for each of the functions so that you can tune the parameters to suit your needs.</p> <hr> <p>Have fun, and good luck!</p> <p>This tip was originally posted on <a href="http://stackoverflow.com/questions/29127181/Matching%20images%20with%20different%20orientations%20and%20scales%20in%20MATLAB/29128507">Stack Overflow</a>.</p>
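If you're curious what the projective transformation actually does under the hood, it is a 3x3 homography matrix acting on points in homogeneous coordinates, which is what <code>imwarp</code> applies pixel by pixel. Here's a minimal NumPy sketch of that mapping for readers without MATLAB; the matrix <code>H</code> below is made up for illustration, not estimated from these images:

```python
import numpy as np

# An illustrative 3x3 homography (NOT estimated from the tutorial's images).
# The top-left 2x2 block handles rotation/scale/shear, the last column
# handles translation, and the bottom row introduces perspective.
H = np.array([
    [0.9, -0.1,  5.0],
    [0.1,  0.9, -3.0],
    [1e-4, 2e-4, 1.0],
])

def apply_homography(H, pts):
    """Map an (N, 2) array of 2D points through the homography H."""
    n = pts.shape[0]
    homog = np.hstack([pts, np.ones((n, 1))])  # lift to homogeneous coords
    mapped = homog @ H.T                       # apply the projective map
    return mapped[:, :2] / mapped[:, 2:3]      # divide by w to get (x, y)

pts = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
print(apply_homography(H, pts))
```

The divide-by-<code>w</code> step is what makes the map projective rather than affine: points farther along the direction of the bottom row get compressed, which is how the transformation can account for perspective foreshortening as well as scale and rotation.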
