Python multiprocessing utilizes only one core
Problem description
I'm trying out a code snippet from the standard Python documentation to learn how to use the multiprocessing module. The code is pasted at the end of this message. I'm using Python 2.7.1 on Ubuntu 11.04 on a quad-core machine (which, according to the system monitor, gives me eight cores due to hyper-threading).
Problem: All the workload seems to be scheduled onto just one core, which gets close to 100% utilization, despite the fact that several processes are started. Occasionally all the workload migrates to another core, but it is never distributed among them.
Any ideas why this is so?
Best regards,
Paul
#
# Simple example which uses a pool of workers to carry out some tasks.
#
# Notice that the results will probably not come out of the output
# queue in the same order as the corresponding tasks were
# put on the input queue. If it is important to get the results back
# in the original order then consider using `Pool.map()` or
# `Pool.imap()` (which will save on the amount of code needed anyway).
#
# Copyright (c) 2006-2008, R Oudkerk
# All rights reserved.
#
import time
import random
from multiprocessing import Process, Queue, current_process, freeze_support
#
# Function run by worker processes
#
def worker(input, output):
    for func, args in iter(input.get, 'STOP'):
        result = calculate(func, args)
        output.put(result)

#
# Function used to calculate result
#

def calculate(func, args):
    result = func(*args)
    return '%s says that %s%s = %s' % \
        (current_process().name, func.__name__, args, result)

#
# Functions referenced by tasks
#

def mul(a, b):
    time.sleep(0.5*random.random())
    return a * b

def plus(a, b):
    time.sleep(0.5*random.random())
    return a + b

def test():
    NUMBER_OF_PROCESSES = 4
    TASKS1 = [(mul, (i, 7)) for i in range(500)]
    TASKS2 = [(plus, (i, 8)) for i in range(250)]

    # Create queues
    task_queue = Queue()
    done_queue = Queue()

    # Submit tasks
    for task in TASKS1:
        task_queue.put(task)

    # Start worker processes
    for i in range(NUMBER_OF_PROCESSES):
        Process(target=worker, args=(task_queue, done_queue)).start()

    # Get and print results
    print 'Unordered results:'
    for i in range(len(TASKS1)):
        print ' ', done_queue.get()

    # Add more tasks using `put()`
    for task in TASKS2:
        task_queue.put(task)

    # Get and print some more results
    for i in range(len(TASKS2)):
        print ' ', done_queue.get()

    # Tell child processes to stop
    for i in range(NUMBER_OF_PROCESSES):
        task_queue.put('STOP')

if __name__ == '__main__':
    freeze_support()
    test()
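The header comment in the snippet above points to Pool.map() / Pool.imap() as the shorter route when result order matters. As a rough illustration of what that would look like for the mul tasks (a sketch only; the pool size of 4 and the task count of 500 simply mirror the values used in the question), it might be:

# Sketch: same multiply-by-7 workload via Pool.map(), which returns
# results in submission order. Values mirror the question's setup.
from multiprocessing import Pool
import time
import random

def mul7(i):
    time.sleep(0.5*random.random())
    return i * 7

if __name__ == '__main__':
    pool = Pool(processes=4)            # one worker per "core" you want to use
    results = pool.map(mul7, range(500))  # blocks until all tasks finish, ordered
    pool.close()
    pool.join()
    print results[:5]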
Recommended answer
Try replacing the time.sleep with something that actually requires the CPU and you will see that multiprocessing works just fine! A task that spends its time in time.sleep uses almost no CPU, so there is essentially nothing for the scheduler to spread across cores. For example:
def mul(a, b):
    # Busy-work instead of sleeping, so the task actually needs a core
    for i in xrange(100000):
        j = i**2
    return a * b

def plus(a, b):
    # Busy-work instead of sleeping, so the task actually needs a core
    for i in xrange(100000):
        j = i**2
    return a + b
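If you want to convince yourself that the CPU-bound version really is being spread over the cores, a rough timing comparison along these lines (the iteration count, job list, and worker counts here are arbitrary illustration values, not part of the original answer) should show the four-worker run finishing well ahead of the single-worker run on a multi-core machine:

import time
from multiprocessing import Pool

def burn(n):
    # CPU-bound work: no sleeping, so workers actually compete for cores
    total = 0
    for i in xrange(n):
        total += i * i
    return total

if __name__ == '__main__':
    jobs = [2000000] * 16
    for workers in (1, 4):
        pool = Pool(processes=workers)
        start = time.time()
        pool.map(burn, jobs)
        pool.close()
        pool.join()
        print '%d worker(s): %.2f s' % (workers, time.time() - start)

Watching the per-core graphs in the system monitor while this runs should likewise show several cores busy at once.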