How to obtain the right alpha value to perfectly blend two images?
Problem description
I've been trying to blend two images. My current approach is: I obtain the coordinates of the overlapping region of the two images, and only for that overlapping region I blend with a hardcoded alpha of 0.5 before adding. So basically I'm just taking half of each pixel value from the overlapping regions of both images and adding them. That doesn't give me a perfect blend because the alpha value is hardcoded to 0.5. Here's the result of blending 3 images:
As you can see, the transition from one image to another is still visible. How do I obtain the perfect alpha value that would eliminate this visible transition? Or is there no such thing, and I'm taking a wrong approach?
Here's how I'm currently doing the blending:
for i in range(3):
    base_img_warp[overlap_coords[0], overlap_coords[1], i] = base_img_warp[overlap_coords[0], overlap_coords[1], i] * 0.5
    next_img_warp[overlap_coords[0], overlap_coords[1], i] = next_img_warp[overlap_coords[0], overlap_coords[1], i] * 0.5

final_img = cv2.add(base_img_warp, next_img_warp)
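As an aside, the same hard-coded 50/50 blend can be written without the per-channel loop. This is only a minimal sketch, assuming overlap_coords is a (rows, cols) pair of index arrays as above and both warped images are uint8 BGR arrays:

import cv2

# same 0.5 / 0.5 blend as above, vectorized over all three channels at once
ys, xs = overlap_coords
base_img_warp[ys, xs] = base_img_warp[ys, xs] * 0.5
next_img_warp[ys, xs] = next_img_warp[ys, xs] * 0.5
final_img = cv2.add(base_img_warp, next_img_warp)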
If anyone would like to give it a shot, here are two warped images, and the mask of their overlapping region: http://imgur.com/a/9pOsQ
Here is the way I would do it in general:
#include <opencv2/opencv.hpp>

int main(int argc, char* argv[])
{
    cv::Mat input1 = cv::imread("C:/StackOverflow/Input/pano1.jpg");
    cv::Mat input2 = cv::imread("C:/StackOverflow/Input/pano2.jpg");

    // Compute the vignetting masks. This is much easier before warping, but I will try...
    // They can be precomputed and aligned if the size and position of your ROI in the image don't change,
    // or if you can determine the ROI for every image.
    // The compression artifacts make it a little bit worse here; I try to extract all the non-black regions in the images.
    cv::Mat mask1;
    cv::inRange(input1, cv::Scalar(10, 10, 10), cv::Scalar(255, 255, 255), mask1);
    cv::Mat mask2;
    cv::inRange(input2, cv::Scalar(10, 10, 10), cv::Scalar(255, 255, 255), mask2);

    // Now compute the distance from the ROI border:
    cv::Mat dt1;
    cv::distanceTransform(mask1, dt1, cv::DIST_L1, 3);
    cv::Mat dt2;
    cv::distanceTransform(mask2, dt2, cv::DIST_L1, 3);

    // Now you can use the distance values for blending directly: a smaller distance means a worse pixel
    // (your vignetting becomes worse towards the image border).
    cv::Mat mosaic = cv::Mat(input1.size(), input1.type(), cv::Scalar(0, 0, 0));
    for (int j = 0; j < mosaic.rows; ++j)
        for (int i = 0; i < mosaic.cols; ++i)
        {
            float a = dt1.at<float>(j, i);
            float b = dt2.at<float>(j, i);
            if (a + b <= 0) continue; // neither image covers this pixel, keep it black

            float alpha = a / (a + b); // the distances aren't between 0 and 1, but this ratio is. The "better" a is compared to b, the higher alpha gets.

            // actual blending: alpha*A + (1-alpha)*B
            mosaic.at<cv::Vec3b>(j, i) = alpha * input1.at<cv::Vec3b>(j, i) + (1 - alpha) * input2.at<cv::Vec3b>(j, i);
        }

    cv::imshow("mosaic", mosaic);
    cv::waitKey(0);

    return 0;
}
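Since the question uses Python, here is a rough NumPy/OpenCV translation of the same distance-transform blending. This is only a sketch; the file names pano1.jpg / pano2.jpg are placeholders for the two warped images:

import cv2
import numpy as np

input1 = cv2.imread("pano1.jpg")   # placeholder: first warped image
input2 = cv2.imread("pano2.jpg")   # placeholder: second warped image

# ROI masks: treat everything that is not (almost) black as valid image content
mask1 = cv2.inRange(input1, (10, 10, 10), (255, 255, 255))
mask2 = cv2.inRange(input2, (10, 10, 10), (255, 255, 255))

# distance from the ROI border; larger values lie further inside the image
dt1 = cv2.distanceTransform(mask1, cv2.DIST_L1, 3)
dt2 = cv2.distanceTransform(mask2, cv2.DIST_L1, 3)

# per-pixel alpha = dt1 / (dt1 + dt2), guarding the pixels covered by neither image
denom = dt1 + dt2
alpha = np.where(denom > 0, dt1 / np.maximum(denom, 1e-6), 0.0)[..., None]

mosaic = (alpha * input1 + (1.0 - alpha) * input2).astype(np.uint8)
cv2.imshow("mosaic", mosaic)
cv2.waitKey(0)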
Basically you compute the distance from your ROI border to the center of your objects and compute the alpha from both blending mask values. So if one image has a high distance from the border and the other a low distance from the border, you prefer the pixel that is closer to the image center. It would be better to normalize those values for cases where the warped images aren't of similar size. But it is even better and more efficient to precompute the blending masks and warp them, as sketched below. Best of all would be to know the vignetting of your optical system and choose an identical blending mask (typically with lower values toward the border).
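The "precompute the blending masks and warp them" idea could look roughly like this in Python. This is a hypothetical sketch: warped_blend_weight, H and out_size are names I introduce here, and the homography H is assumed to be the same one used to warp the corresponding image:

import cv2
import numpy as np

def warped_blend_weight(img, H, out_size):
    # Build a distance-to-border weight once in the source frame,
    # then warp it with the same homography H that warps the image itself.
    h, w = img.shape[:2]
    mask = np.zeros((h, w), np.uint8)
    mask[1:-1, 1:-1] = 255                                 # zero border so the weight falls off at the edges
    weight = cv2.distanceTransform(mask, cv2.DIST_L1, 3)   # grows toward the image center
    weight /= weight.max()                                 # normalize to [0, 1]
    return cv2.warpPerspective(weight, H, out_size)        # out_size = (width, height) of the mosaic

The two warped weights w1 and w2 can then be combined exactly as above, e.g. alpha = w1 / (w1 + w2) wherever w1 + w2 > 0.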
From the C++ code above you'll get these results:
ROI masks:
Blending masks (just as an impression; these must be float matrices instead):
Image mosaic: