The following two arrays contain the descriptors of features extracted from two images. The descriptors are 2-dimensional (much lower than we would usually use in practice); each column of the matrix is the descriptor for a feature. In all of the following, use matrix-style (i, j) indexing with indices starting at 1. For example, feature 4 in image 1 has descriptor $[3\ 1]^T$.
$$F_1 = \begin{bmatrix} 0 & 1 & 4 & 3 \\ 1 & 0 & 4 & 1 \end{bmatrix}, \qquad F_2 = \begin{bmatrix} 2 & 5 & 1 \\ 1 & 5 & 2 \end{bmatrix}$$
Create a table with 4 rows and 3 columns in which the (i, j)th cell contains the SSD (sum of squared differences) distance between feature i in image 1 and feature j in image 2.
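As a sanity check, here is a minimal NumPy sketch of how such a table could be computed (the names `F1`, `F2`, and `ssd_table` are mine; the problem itself expects the table filled in by hand):

```python
import numpy as np

# Descriptor matrices from the problem: each column is one feature's descriptor.
F1 = np.array([[0, 1, 4, 3],
               [1, 0, 4, 1]])
F2 = np.array([[2, 5, 1],
               [1, 5, 2]])

# ssd_table[i, j] holds the SSD between feature i+1 in image 1 and feature
# j+1 in image 2 (NumPy is 0-indexed; the problem uses 1-based indices).
diff = F1.T[:, None, :] - F2.T[None, :, :]  # shape (4, 3, 2)
ssd_table = (diff ** 2).sum(axis=2)         # shape (4, 3)
print(ssd_table)
```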
For each feature in image 1, give the index of the closest feature match in image 2 according to the SSD metric.
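Continuing the sketch above, the row-wise argmin gives each image-1 feature's nearest image-2 feature under SSD:

```python
# Row-wise argmin over the 4x3 SSD table; +1 restores 1-based feature indices.
nearest_in_2 = ssd_table.argmin(axis=1) + 1
print(nearest_in_2)  # one image-2 index per image-1 feature
```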
For each feature in image 2, give the index of the closest feature match in image 1, along with the ratio distance between each feature and its closest match.
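A continuation of the same sketch, now working column-wise. This assumes the "ratio distance" is the usual best-over-second-best SSD ratio (as in Lowe's ratio test); if the course defines it differently, adjust accordingly:

```python
# Column-wise: for each image-2 feature, its closest image-1 feature and the
# ratio of the best SSD to the second-best SSD (small ratio = distinctive match).
best_in_1 = ssd_table.argmin(axis=0) + 1   # 1-based image-1 indices
sorted_ssd = np.sort(ssd_table, axis=0)    # distances sorted within each column
ratio = sorted_ssd[0] / sorted_ssd[1]      # best / second-best per image-2 feature
print(best_in_1, ratio)
```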
Suppose you’ve aligned two images from feature matches using a translational motion model; that is, you have a vector $t = [t_x, t_y]$ that specifies the offset of corresponding pixels in image 2 from their coordinates in image 1. We’d like to warp image 2 into image 1’s coordinates and combine the two together using some blending scheme (maybe we’ll average them or something).
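Here is a minimal sketch of that warp-and-blend step, assuming grayscale float images of equal size and an integer-valued t (a non-integer offset would require interpolation); the function and variable names are mine:

```python
import numpy as np

def blend_translated(img1, img2, tx, ty):
    """Inverse-warp img2 by t = (tx, ty) into img1's frame and average overlaps."""
    h, w = img1.shape
    out = img1.astype(float).copy()
    weight = np.ones((h, w))
    # Per the motion model, pixel (x, y) in image 1 corresponds to
    # pixel (x + tx, y + ty) in image 2.
    ys, xs = np.mgrid[0:h, 0:w]
    xs2, ys2 = xs + tx, ys + ty
    valid = (xs2 >= 0) & (xs2 < w) & (ys2 >= 0) & (ys2 < h)
    out[valid] += img2[ys2[valid], xs2[valid]]
    weight[valid] += 1
    return out / weight  # simple average wherever both images contribute
```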