Using 100% of all cores with the multiprocessing module
Problem description
I have two pieces of code that I'm using to learn about multiprocessing in Python 3.1. My goal is to use 100% of all the available processors. However, the code snippets here only reach 30%-50% on all processors.
Is there any way to 'force' Python to use all 100%? Is the OS (Windows 7, 64-bit) limiting Python's access to the processors? While the code snippets below are running, I open the Task Manager and watch the processors spike, but they never reach and maintain 100%. In addition, I can see multiple python.exe processes created and destroyed along the way. How do these processes relate to processors? For example, if I spawn 4 processes, each process isn't using its own core. Instead, what are the processes using? Are they sharing all cores? And if so, is it the OS that is forcing the processes to share the cores?
Code snippet 1

import multiprocessing

def worker():
    # worker function
    print('Worker')
    x = 0
    while x < 1000:
        print(x)
        x += 1
    return

if __name__ == '__main__':
    jobs = []
    for i in range(50):
        p = multiprocessing.Process(target=worker)
        jobs.append(p)
        p.start()
Code snippet 2
from multiprocessing import Process, Lock

def f(l, i):
    l.acquire()
    print('worker ', i)
    x = 0
    while x < 1000:
        print(x)
        x += 1
    l.release()

if __name__ == '__main__':
    lock = Lock()
    for num in range(50):
        Process(target=f, args=(lock, num)).start()
Recommended answer
To use 100% of all cores, do not create and destroy new processes.

Create a few processes per core and link them with a pipeline.

At the OS level, all pipelined processes run concurrently.

The less you write (and the more you delegate to the OS), the more likely you are to use as many resources as possible.
python p1.py | python p2.py | python p3.py | python p4.py ...
will make maximal use of your CPU.