Perspective Projection in an Android Augmented Reality Application
Problem Description
Currently I'm writing an augmented reality app and I have some problems getting the objects onto my screen. It's very frustrating for me that I'm not able to transform GPS points to the corresponding screen points on my Android device. I've read many articles and many other posts on Stack Overflow (I've already asked similar questions), but I still need your help.
I did the perspective projection which is explained on Wikipedia.
What do I have to do with the result of the perspective projection to get the resulting screen point?
Solution
The Wikipedia article also confused me when I read it some time ago. Here is my attempt to explain it differently:
The Situation
Let's simplify the situation. We have:
- The point to project, D(x,y,z) - what you call relativePositionX|Y|Z
- An image plane of size w * h
- A half-angle of view α
... and we want:
- The coordinates of B in the image plane (let's call them X and Y)
A schema for the X-screen-coordinates:
E is the position of our "eye" in this configuration, which I chose as origin to simplify.
The focal length f can be estimated knowing that:
tan(α) = (w/2) / f     (1)
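As a quick illustration, here is a minimal Java sketch of that estimate (the variable names and the 1080 px width are hypothetical, not taken from the question):

```java
// Focal length from the half-angle of view α and the image-plane width w:
// tan(α) = (w / 2) / f   =>   f = (w / 2) / tan(α)
double halfAngleRad = Math.toRadians(45.0); // α, using the common 45° default
double screenWidth  = 1080.0;               // w in pixels (hypothetical value)
double focalLength  = (screenWidth / 2.0) / Math.tan(halfAngleRad);
// With α = 45°, tan(α) = 1, so focalLength is simply w / 2 = 540.
```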
A bit of Geometry
You can see on the picture that the triangles ECD and EBM are similar, so using the Side-Splitter Theorem, we get:
MB / CD = EM / EC
<=> X / x = f / z     (2)
With both (1) and (2), we now have:
X = (x / z) * ( (w / 2) / tan(α) )
If we go back to the notation used in the Wikipedia article, our equation is equivalent to:
b_x = (d_x / d_z) * r_z
You can notice we are missing the multiplication by s_x / r_x. This is because in our case, the "display size" and the "recording surface" are the same, so s_x / r_x = 1.
Note: the same reasoning applies to Y.
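To make the formula concrete before moving on, here is a minimal Java sketch of the basic projection (the method name and parameters are mine, not from the question or from Appunta):

```java
/**
 * Projects the camera-space coordinate x of a point at depth z onto an
 * image plane of width w, using X = (x / z) * ((w / 2) / tan(α)).
 * The result lies in [-w/2 ; w/2]; the same formula with y and h gives Y.
 */
static double projectX(double x, double z, double w, double halfAngleRad) {
    double f = (w / 2.0) / Math.tan(halfAngleRad); // focal length, from (1)
    return (x / z) * f;                            // similar triangles, from (2)
}
```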
Practical Use
Some remarks:
- Usually, α = 45° is used, which means tan(α) = 1. That's why this term doesn't appear in many implementations.
- If you want to preserve the proportions of the elements you display, keep f constant for both X and Y, i.e. instead of calculating:
X = (x / z) * ( (w / 2) / tan(α) )
and Y = (y / z) * ( (h / 2) / tan(α) )
... do:
X = (x / z) * ( (min(w,h) / 2) / tan(α) )
and Y = (y / z) * ( (min(w,h) / 2) / tan(α) )
Note: when I said that the "display size" and the "recording surface" are the same, that wasn't quite true, and the min operation is here to compensate for this approximation, adapting the square surface r to the potentially rectangular surface s.
Note 2: Instead of using min(w,h) / 2, Appunta uses screenRatio = (getWidth() + getHeight()) / 2, as you noticed. Both solutions preserve the proportions of the elements; the focal length, and thus the angle of view, will simply be a bit different depending on the screen's own ratio. You can actually use any function you want to define f.
- As you may have noticed on the picture above, the screen coordinates are here defined between [-w/2 ; w/2] for X and [-h/2 ; h/2] for Y, but you probably want [0 ; w] and [0 ; h] instead. Just apply:
X += w/2
and Y += h/2
- Problem solved.
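Putting the remarks above together, here is a minimal Java sketch of the whole pipeline, from a point expressed in camera space (z pointing away from the eye) to a pixel position on a w*h screen. The class and method names are mine, not from the question or from Appunta:

```java
import android.graphics.PointF;

public final class SimpleProjector {

    /**
     * Projects a camera-space point (x, y, z), with z > 0 in front of the
     * eye, onto a w*h screen, returning pixel coordinates in [0 ; w] x [0 ; h].
     */
    public static PointF project(float x, float y, float z,
                                 float w, float h, float halfAngleRad) {
        // Same focal length for X and Y, so element proportions are preserved.
        float f = (Math.min(w, h) / 2f) / (float) Math.tan(halfAngleRad);

        // Perspective divide: X = (x / z) * f, Y = (y / z) * f.
        float X = (x / z) * f;
        float Y = (y / z) * f;

        // Shift from [-w/2 ; w/2] x [-h/2 ; h/2] to [0 ; w] x [0 ; h].
        // Note: Android's screen Y axis grows downward, so depending on your
        // camera-space convention you may want h / 2f - Y instead.
        return new PointF(X + w / 2f, Y + h / 2f);
    }
}
```

A point straight ahead of the eye, (0, 0, z), lands at the center of the screen, (w/2, h/2), as expected.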
Conclusion
I hope this answers your questions. I'll stick around in case it needs edits.
Bye!
< Self-promotion alert > I actually wrote an article some time ago about 3D projection and rendering. The implementation is in JavaScript, but it should be quite easy to translate.