
Executing cv::warpPerspective for a fake deskewing on a set of cv::Point


Problem Description


I'm trying to do a perspective transformation of a set of points in order to achieve a deskewing effect:

http://nuigroup.com/?ACT=28&fid=27&aid=1892_H6eNAaign4Mrnn30Au8d


I'm using the image below for tests, and the green rectangle displays the area of interest.


I was wondering if it's possible to achieve the effect I'm hoping for using a simple combination of cv::getPerspectiveTransform and cv::warpPerspective. I'm sharing the source code I've written so far, but it doesn't work. This is the resulting image:


So there is a vector<cv::Point> that defines the region of interest, but the points are not stored in any particular order inside the vector, and that's something I can't change in the detection procedure. Anyway, later, the points in the vector are used to define a RotatedRect, which in turn is used to assemble cv::Point2f src_vertices[4];, one of the variables required by cv::getPerspectiveTransform().


My understanding about vertices and how they are organized might be one of the issues. I also think that using a RotatedRect is not the best idea to store the original points of the ROI, since the coordinates will change a little bit to fit into the rotated rectangle, and that's not very cool.

#include <cv.h>
#include <highgui.h>
#include <iostream>

using namespace std;
using namespace cv;

int main(int argc, char* argv[])
{
    cv::Mat src = cv::imread(argv[1], 1);

    // After some magical procedure, these are the detected points that 
    // represent the corners of the paper in the picture: 
    // [408, 69] [72, 2186] [1584, 2426] [1912, 291]
    vector<Point> not_a_rect_shape;
    not_a_rect_shape.push_back(Point(408, 69));
    not_a_rect_shape.push_back(Point(72, 2186));
    not_a_rect_shape.push_back(Point(1584, 2426));
    not_a_rect_shape.push_back(Point(1912, 291));

    // For debugging purposes, draw green lines connecting those points 
    // and save it on disk
    const Point* point = &not_a_rect_shape[0];
    int n = (int)not_a_rect_shape.size();
    Mat draw = src.clone();
    polylines(draw, &point, &n, 1, true, Scalar(0, 255, 0), 3, CV_AA);
    imwrite("draw.jpg", draw);

    // Assemble a rotated rectangle out of that info
    RotatedRect box = minAreaRect(cv::Mat(not_a_rect_shape));
    std::cout << "Rotated box set to (" << box.boundingRect().x << "," << box.boundingRect().y << ") " << box.size.width << "x" << box.size.height << std::endl;

    // Does the order of the points matter? I assume they do NOT.
    // But if it does, is there an easy way to identify and order 
    // them as topLeft, topRight, bottomRight, bottomLeft?
    cv::Point2f src_vertices[4];
    src_vertices[0] = not_a_rect_shape[0];
    src_vertices[1] = not_a_rect_shape[1];
    src_vertices[2] = not_a_rect_shape[2];
    src_vertices[3] = not_a_rect_shape[3];

    Point2f dst_vertices[4];
    dst_vertices[0] = Point(0, 0);
    dst_vertices[1] = Point(0, box.boundingRect().width-1);
    dst_vertices[2] = Point(0, box.boundingRect().height-1);
    dst_vertices[3] = Point(box.boundingRect().width-1, box.boundingRect().height-1);

    Mat warpMatrix = getPerspectiveTransform(src_vertices, dst_vertices);

    cv::Mat rotated;
    warpPerspective(src, rotated, warpMatrix, rotated.size(), INTER_LINEAR, BORDER_CONSTANT);

    imwrite("rotated.jpg", rotated);

    return 0;
}


Can someone help me fix this problem?

Recommended Answer


So, the first problem is corner order. The points must be in the same order in both vectors. So, if in the first vector your order is (top-left, bottom-left, bottom-right, top-right), they MUST be in the same order in the other vector.


Second, to have the resulting image contain only the object of interest, you must set its width and height to be the same as the resulting rectangle's width and height. Do not worry: the src and dst images in warpPerspective can be different sizes.


Third, a performance concern. While your approach is absolutely accurate, since you are doing only affine transforms (rotate, resize, deskew), mathematically you can use the affine counterparts of those functions. They are much faster:

  • getAffineTransform()

  • warpAffine()


Important note: getAffineTransform() needs and expects ONLY 3 points, and the resulting matrix is 2-by-3 instead of 3-by-3.


How to make the result image have a different size than the input:

Instead of

cv::warpPerspective(src, rotated, warpMatrix, rotated.size(), ... );

use

cv::Mat rotated;
cv::Size size(box.boundingRect().width, box.boundingRect().height);
cv::warpPerspective(src, rotated, warpMatrix, size, ... );


So here you are, and your programming assignment is over.

#include <cv.h>
#include <highgui.h>
#include <iostream>

using namespace std;
using namespace cv;

int main()
{
    cv::Mat src = cv::imread("r8fmh.jpg", 1);


    // After some magical procedure, these are the detected points that 
    // represent the corners of the paper in the picture: 
    // [408, 69] [72, 2186] [1584, 2426] [1912, 291]

    vector<Point> not_a_rect_shape;
    not_a_rect_shape.push_back(Point(408, 69));
    not_a_rect_shape.push_back(Point(72, 2186));
    not_a_rect_shape.push_back(Point(1584, 2426));
    not_a_rect_shape.push_back(Point(1912, 291));

    // For debugging purposes, draw green lines connecting those points 
    // and save it on disk
    const Point* point = &not_a_rect_shape[0];
    int n = (int)not_a_rect_shape.size();
    Mat draw = src.clone();
    polylines(draw, &point, &n, 1, true, Scalar(0, 255, 0), 3, CV_AA);
    imwrite("draw.jpg", draw);

    // Assemble a rotated rectangle out of that info
    RotatedRect box = minAreaRect(cv::Mat(not_a_rect_shape));
    std::cout << "Rotated box set to (" << box.boundingRect().x << "," << box.boundingRect().y << ") " << box.size.width << "x" << box.size.height << std::endl;

    Point2f pts[4];

    box.points(pts);

    // The order of the points DOES matter: each src vertex must map to 
    // the corresponding dst vertex (see the note on corner order above).

    cv::Point2f src_vertices[3];
    src_vertices[0] = pts[0];
    src_vertices[1] = pts[1];
    src_vertices[2] = pts[3];
    //src_vertices[3] = not_a_rect_shape[3];

    Point2f dst_vertices[3];
    dst_vertices[0] = Point(0, 0);
    dst_vertices[1] = Point(box.boundingRect().width-1, 0); 
    dst_vertices[2] = Point(0, box.boundingRect().height-1);

   /* Mat warpMatrix = getPerspectiveTransform(src_vertices, dst_vertices);

    cv::Mat rotated;
    cv::Size size(box.boundingRect().width, box.boundingRect().height);
    warpPerspective(src, rotated, warpMatrix, size, INTER_LINEAR, BORDER_CONSTANT);*/
    Mat warpAffineMatrix = getAffineTransform(src_vertices, dst_vertices);

    cv::Mat rotated;
    cv::Size size(box.boundingRect().width, box.boundingRect().height);
    warpAffine(src, rotated, warpAffineMatrix, size, INTER_LINEAR, BORDER_CONSTANT);

    imwrite("rotated.jpg", rotated);

    return 0;
}
