CUDA Device To Device transfer expensive
Problem description
I have written some code to swap the quadrants of a 2D matrix, stored in a flat array, for FFT purposes.
int leftover = W - dcW;
T *temp;
T *topHalf;
cudaMalloc((void **)&temp, dcW * sizeof(T));
// swap every row, left and right
for (int i = 0; i < H; i++)
{
    cudaMemcpy(temp, &data[i*W], dcW*sizeof(T), cudaMemcpyDeviceToDevice);
    cudaMemcpy(&data[i*W], &data[i*W+dcW], leftover*sizeof(T), cudaMemcpyDeviceToDevice);
    cudaMemcpy(&data[i*W+leftover], temp, dcW*sizeof(T), cudaMemcpyDeviceToDevice);
}
// swap the top and bottom halves
cudaMalloc((void **)&topHalf, dcH*W*sizeof(T));
leftover = H - dcH;
cudaMemcpy(topHalf, data, dcH*W*sizeof(T), cudaMemcpyDeviceToDevice);
cudaMemcpy(data, &data[dcH*W], leftover*W*sizeof(T), cudaMemcpyDeviceToDevice);
cudaMemcpy(&data[leftover*W], topHalf, dcH*W*sizeof(T), cudaMemcpyDeviceToDevice);
Notice that this code takes device pointers and performs DeviceToDevice transfers.
Why does this run so slowly? Can it be optimized somehow? I timed it against the same operation on the host using regular memcpy, and it was about 2x slower.
Any ideas?
Recommended answer
I ended up writing a kernel to do the swaps. This was indeed faster than the device-to-device memcpy operations.
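The answer does not include the kernel itself, so the following is only a minimal sketch of one way such a swap kernel could look; the names (circularShift2D, quadrantSwap) and the launch configuration are illustrative and not from the original post, and it assumes dcW and dcH are the column/row shift amounts from the question. It writes into a separate output buffer rather than shifting in place:

#include <cuda_runtime.h>

// Circular shift of an H x W row-major matrix:
// out[y][x] = in[(y + shiftY) % H][(x + shiftX) % W]
// With shiftX = dcW and shiftY = dcH this reproduces the quadrant swap above.
template <typename T>
__global__ void circularShift2D(const T* __restrict__ in, T* __restrict__ out,
                                int W, int H, int shiftX, int shiftY)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < W && y < H)
    {
        int srcX = (x + shiftX) % W;
        int srcY = (y + shiftY) % H;
        out[y * W + x] = in[srcY * W + srcX];
    }
}

// Example launch: one thread per element, writing into a separate device buffer.
template <typename T>
void quadrantSwap(const T* d_in, T* d_out, int W, int H, int dcW, int dcH)
{
    dim3 block(16, 16);
    dim3 grid((W + block.x - 1) / block.x, (H + block.y - 1) / block.y);
    circularShift2D<<<grid, block>>>(d_in, d_out, W, H, dcW, dcH);
}

A single launch like this replaces the many small cudaMemcpy calls of the original code (three per row, plus three for the half swap). Each cudaMemcpy carries per-call API overhead, and tiny per-row copies cannot come close to saturating the device's memory bandwidth, which is a plausible reason the memcpy version ends up slower than a plain host-side memcpy despite running on the GPU.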