Improve matching of feature points with OpenCV
Problem description
I want to match feature points in stereo images. I've already found and extracted the feature points with different algorithms, and now I need a good matching. In this case I'm using the FAST algorithm for detection and extraction and the BruteForceMatcher for matching the feature points.
The matching code:
vector< vector<DMatch> > matches;
//using either FLANN or BruteForce
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create(algorithmName);
matcher->knnMatch( descriptors_1, descriptors_2, matches, 1 );
//just some temporarily code to have the right data structure
vector< DMatch > good_matches2;
good_matches2.reserve(matches.size());
for (size_t i = 0; i < matches.size(); ++i)
{
//guard against queries for which no match was returned
if (!matches[i].empty())
good_matches2.push_back(matches[i][0]);
}
Because there are a lot of false matches, I calculated the min and max distance and removed all matches that are too bad:
//calculation of max and min distances between keypoints
double max_dist = 0; double min_dist = 100;
for( size_t i = 0; i < good_matches2.size(); i++ )
{
double dist = good_matches2[i].distance;
if( dist < min_dist ) min_dist = dist;
if( dist > max_dist ) max_dist = dist;
}
//find the "good" matches
vector< DMatch > good_matches;
for( size_t i = 0; i < good_matches2.size(); i++ )
{
if( good_matches2[i].distance <= 5*min_dist )
{
good_matches.push_back( good_matches2[i]);
}
}
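A common alternative to filtering by a multiple of min_dist is Lowe's ratio test. The following is only a sketch of that idea and not part of my code above; it assumes the same headers and using directives as the rest of the post, requests the 2 nearest neighbours instead of 1, and the 0.8 ratio is an assumed value:
//ratio test sketch: request the two nearest neighbours per descriptor and
//keep a match only if the best distance is clearly smaller than the second best
vector< vector<DMatch> > knnMatches;
matcher->knnMatch( descriptors_1, descriptors_2, knnMatches, 2 );
vector< DMatch > ratio_matches;
for (size_t i = 0; i < knnMatches.size(); ++i)
{
if (knnMatches[i].size() >= 2 &&
knnMatches[i][0].distance < 0.8f * knnMatches[i][1].distance)
{
ratio_matches.push_back(knnMatches[i][0]);
}
}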
The problem is that I either get a lot of false matches or only a few correct ones (see the images below).
[two images of matching results omitted] (source: codemax.de)
I think it's not a programming problem but more of a matching issue. As far as I understand, the BruteForceMatcher only considers the descriptor distance between feature points (computed by the DescriptorExtractor), not their spatial distance (x and y position), which in my case is also important. Does anybody have experience with this problem or a good idea to improve the matching results?
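One direction that is often suggested for bringing geometric consistency into the filtering (just a sketch on my side, I have not tried it on these images; the 3.0 pixel threshold and 0.99 confidence are assumed values) is to estimate the fundamental matrix from the tentative matches with RANSAC and keep only the inliers:
//collect the coordinates of the tentative matches
vector<Point2f> pts1, pts2;
for (size_t i = 0; i < good_matches2.size(); ++i)
{
pts1.push_back( keypoints_1[good_matches2[i].queryIdx].pt );
pts2.push_back( keypoints_2[good_matches2[i].trainIdx].pt );
}
//estimate the fundamental matrix with RANSAC (needs at least 8 point pairs);
//the mask marks which tentative matches are inliers
vector<DMatch> ransac_matches;
if (pts1.size() >= 8)
{
vector<uchar> inlierMask;
findFundamentalMat( pts1, pts2, FM_RANSAC, 3.0, 0.99, inlierMask );
//keep only the matches that are consistent with the epipolar geometry
for (size_t i = 0; i < inlierMask.size(); ++i)
{
if (inlierMask[i])
ransac_matches.push_back( good_matches2[i] );
}
}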
EDIT
I changed the code so that it gives me the 50 best matches per keypoint. After that I go through these matches in order and check whether each one lies inside a specified area. If it doesn't, I take the next match, until I have found a match inside the given area.
vector< vector<DMatch> > matches;
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create(algorithmName);
matcher->knnMatch( descriptors_1, descriptors_2, matches, 50 );
//look if the match is inside a defined area of the image
double tresholdDist = 0.25 * sqrt(double(leftImageGrey.size().height*leftImageGrey.size().height + leftImageGrey.size().width*leftImageGrey.size().width));
vector< DMatch > good_matches2;
good_matches2.reserve(matches.size());
for (size_t i = 0; i < matches.size(); ++i)
{
for (int j = 0; j < matches[i].size(); j++)
{
//calculate local distance for each possible match
Point2f from = keypoints_1[matches[i][j].queryIdx].pt;
Point2f to = keypoints_2[matches[i][j].trainIdx].pt;
double dist = sqrt((from.x - to.x) * (from.x - to.x) + (from.y - to.y) * (from.y - to.y));
//save as best match if local distance is in specified area
if (dist < tresholdDist)
{
good_matches2.push_back(matches[i][j]);
break;
}
}
}
I don't think I get more matches this way, but I'm able to remove more false matches:
[image of matching result omitted] (source: codemax.de)
By comparing all feature detection algorithms I found a good combination, which gives me a lot more matches. Now I am using FAST for feature detection, SIFT for feature extraction and BruteForce for the matching. Combined with the check whether the match is inside a defined region, I get a lot of matches, see the image:
[image of matching result omitted] (source: codemax.de)
The relevant code:
Ptr<FeatureDetector> detector;
detector = new DynamicAdaptedFeatureDetector ( new FastAdjuster(10,true), 5000, 10000, 10);
detector->detect(leftImageGrey, keypoints_1);
detector->detect(rightImageGrey, keypoints_2);
Ptr<DescriptorExtractor> extractor = DescriptorExtractor::create("SIFT");
extractor->compute( leftImageGrey, keypoints_1, descriptors_1 );
extractor->compute( rightImageGrey, keypoints_2, descriptors_2 );
vector< vector<DMatch> > matches;
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce");
matcher->knnMatch( descriptors_1, descriptors_2, matches, 500 );
//look whether the match is inside a defined area of the image
//only 25% of maximum of possible distance
double tresholdDist = 0.25 * sqrt(double(leftImageGrey.size().height*leftImageGrey.size().height + leftImageGrey.size().width*leftImageGrey.size().width));
vector< DMatch > good_matches2;
good_matches2.reserve(matches.size());
for (size_t i = 0; i < matches.size(); ++i)
{
for (int j = 0; j < matches[i].size(); j++)
{
Point2f from = keypoints_1[matches[i][j].queryIdx].pt;
Point2f to = keypoints_2[matches[i][j].trainIdx].pt;
//calculate local distance for each possible match
double dist = sqrt((from.x - to.x) * (from.x - to.x) + (from.y - to.y) * (from.y - to.y));
//save as best match if local distance is in specified area and on same height
if (dist < tresholdDist && abs(from.y-to.y)<5)
{
good_matches2.push_back(matches[i][j]);
break;
}
}
}
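For completeness, the filtered matches can be drawn side by side with cv::drawMatches to produce images like the ones above; a minimal sketch (the window name and drawing options are my own assumptions, not taken from the code above):
//draw the filtered matches on top of the stereo pair for visual inspection
Mat matchImage;
drawMatches( leftImageGrey, keypoints_1, rightImageGrey, keypoints_2,
good_matches2, matchImage,
Scalar::all(-1), Scalar::all(-1), vector<char>(),
DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );
imshow( "matches", matchImage );
waitKey(0);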