Python Multiprocessing.Pool lazy iteration
Problem description
I'm wondering about the way that Python's multiprocessing.Pool class works with map, imap, and map_async. My particular problem is that I want to map over an iterator that creates memory-heavy objects, and I don't want all these objects to be generated in memory at the same time. I wanted to see whether the various map() functions would wring my iterator dry, or intelligently call the next() function only as the child processes slowly advanced, so I hacked up some tests as such:
import time
from multiprocessing import Pool

def g():
    for el in xrange(100):
        print el
        yield el

def f(x):
    time.sleep(1)
    return x*x

if __name__ == '__main__':
    pool = Pool(processes=4)   # start 4 worker processes
    go = g()
    g2 = pool.imap(f, go)
    g2.next()
And so on with map, imap, and map_async. This is the most flagrant example, however: simply calling next() a single time on g2 prints out all the elements from my generator g(), whereas if imap were doing this 'lazily' I would expect it to call go.next() only once, and therefore print out only '1'.
Can someone clear up what is happening, and whether there is some way to have the process pool 'lazily' evaluate the iterator as needed?
Thanks,
Gabe
Recommended answer
Let's look at the end of the program first.
The multiprocessing module uses atexit to call multiprocessing.util._exit_function when your program ends.
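As a rough illustration of that hook (a minimal sketch; the function name _cleanup is made up here and is not the real multiprocessing internals), atexit runs registered callbacks when the interpreter shuts down:

import atexit

def _cleanup():
    # stand-in for multiprocessing.util._exit_function: runs automatically
    # when the interpreter exits, after the rest of the program has finished
    print('cleaning up pool threads and processes')

atexit.register(_cleanup)

print('main program done')
# prints 'main program done', then 'cleaning up pool threads and processes'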
If you remove g2.next(), your program ends quickly.
The _exit_function eventually calls Pool._terminate_pool. The main thread changes the state of pool._task_handler._state from RUN to TERMINATE. Meanwhile the pool._task_handler thread is looping in Pool._handle_tasks and bails out when it reaches the condition:
if thread._state:
    debug('task handler found thread._state != RUN')
    break
(See /usr/lib/python2.6/multiprocessing/pool.py)
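To see why that check matters, here is a toy reconstruction (a hedged sketch with made-up names such as FakeHandler and handle_tasks, not the real pool code) of a feeder thread that drains a generator until a shared state flag is flipped:

import threading
import time

TERMINATE = 'TERMINATE'

class FakeHandler(object):
    def __init__(self):
        self._state = None          # a falsy value plays the role of RUN

def g():
    for el in range(100):
        print(el)
        yield el

def handle_tasks(handler, taskseq):
    # simplified stand-in for Pool._handle_tasks
    for i, task in enumerate(taskseq):
        if handler._state:
            print('task handler found state != RUN, bailing out')
            break
        time.sleep(0.01)            # pretend to put(task) on a queue

handler = FakeHandler()
feeder = threading.Thread(target=handle_tasks, args=(handler, g()))
feeder.start()
time.sleep(0.05)                    # let the feeder pull a few elements
handler._state = TERMINATE          # what Pool._terminate_pool does at exit
feeder.join()                       # the generator is left only partly consumed

Run as-is, this prints only a handful of elements before the bail-out message, which is the same mechanism that keeps the real pool from draining g() when the program exits early.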
This is what stops the task handler from fully consuming your generator, g(). If you look in Pool._handle_tasks you'll see:
for i, task in enumerate(taskseq):
    ...
    try:
        put(task)
    except IOError:
        debug('could not put task on queue')
        break
This is the code which consumes your generator. (taskseq is not exactly your generator, but as taskseq is consumed, so is your generator.)
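If it helps, the relationship between taskseq and your generator is roughly the following (a simplified sketch; the real taskseq items also carry the job id and the function to call):

def g():
    for el in range(5):
        print('g yields %d' % el)
        yield el

go = g()
# taskseq is a generator built on top of go, so every element the task
# handler pulls from taskseq is pulled from go as well
taskseq = ((i, x) for i, x in enumerate(go))
for task in taskseq:
    pass                            # each iteration advances go too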
In contrast, when you call g2.next() the main thread calls IMapIterator.next, and waits when it reaches self._cond.wait(timeout).
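The wait itself is the usual condition-variable pattern; a stripped-down sketch of what IMapIterator.next amounts to (MiniIMapIterator is an invented name, and the real class does considerably more bookkeeping) looks like this:

import threading
import collections

class MiniIMapIterator(object):
    def __init__(self):
        self._cond = threading.Condition(threading.Lock())
        self._items = collections.deque()

    def _set(self, value):
        # the pool's result-handler thread calls something like this
        # when a worker finishes a task
        with self._cond:
            self._items.append(value)
            self._cond.notify()

    def next(self):
        with self._cond:
            while not self._items:
                self._cond.wait()   # the main thread parks here during g2.next()
            return self._items.popleft()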
That the main thread is waiting instead of calling _exit_function is what allows the task handler thread to run normally, which means fully consuming the generator as it puts tasks in the workers' inqueue in the Pool._handle_tasks function.
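You can check this claim directly with a small variation on the original test (a sketch, assuming the same four-worker setup; the pulled list is added here only for instrumentation): record every element the pool pulls from the generator and compare against the arrival of the first result.

import multiprocessing as mp
import time

pulled = []                         # elements drained from the generator so far

def g():
    for el in range(20):
        pulled.append(el)
        yield el

def f(x):
    time.sleep(1)
    return x * x

if __name__ == '__main__':
    pool = mp.Pool(processes=4)
    g2 = pool.imap(f, g())
    first = g2.next()
    # the task handler thread drained all 20 elements long before the
    # first (roughly one-second) result came back
    print('first result: %d, elements pulled: %d' % (first, len(pulled)))
    pool.close()
    pool.join()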
The bottom line is that all Pool map functions consume the entire iterable they are given. If you'd like to consume the generator in chunks, you could do this instead:
import multiprocessing as mp
import itertools
import time

def g():
    for el in xrange(50):
        print el
        yield el

def f(x):
    time.sleep(1)
    return x * x

if __name__ == '__main__':
    pool = mp.Pool(processes=4)   # start 4 worker processes
    go = g()
    result = []
    N = 11
    while True:
        g2 = pool.map(f, itertools.islice(go, N))
        if g2:
            result.extend(g2)
            time.sleep(1)
        else:
            break
    print(result)
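The same idea can be packaged as a small helper if you prefer (an untested sketch along the same lines; lazy_pool_map is just a name made up for this example):

import itertools

def lazy_pool_map(pool, func, iterable, chunksize):
    # slice off `chunksize` items at a time, map them, and yield the results,
    # so at most one chunk of the source iterable is in memory at once
    it = iter(iterable)
    while True:
        chunk = list(itertools.islice(it, chunksize))
        if not chunk:
            break
        for res in pool.map(func, chunk):
            yield res

# usage, reusing f and g from the example above:
#     pool = mp.Pool(processes=4)
#     for res in lazy_pool_map(pool, f, g(), 11):
#         print(res)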