
Need Help To Parallelize A Loop In Python

I have a huge data set and I have to compute, for every point of it, a series of properties. My code is really slow and I would like to make it faster by somehow parallelizing the do loop.

Solution 1:

Parallelizing is not trivial; however, you might find numexpr useful.

For numerical work, you really should look into the utilities numpy gives you (vectorize and similar); these usually give you a good speedup as a basis to work on.
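To make the point concrete, here is a minimal sketch comparing a Python-level loop against the vectorized numpy equivalent. The per-point property (`sqrt(x) + sin(x)`) is a made-up stand-in for whatever you actually compute:

```python
import numpy as np

x = np.linspace(0, 20, 10000)

# Loop version: one Python-level call per point (slow).
prop_loop = np.array([np.sqrt(xi) + np.sin(xi) for xi in x])

# Vectorized version: whole-array operations in compiled code (fast).
prop_vec = np.sqrt(x) + np.sin(x)

assert np.allclose(prop_loop, prop_vec)
```

If your property can be written as whole-array expressions like this, vectorizing usually beats any parallelization of the original loop.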

For more complicated, non-numerical cases, you may use multiprocessing (see comments).


On a side note, multithreading is even more non-trivial in Python than in other languages, since CPython has the Global Interpreter Lock (GIL), which prevents two sections of Python code from running in the same interpreter at the same time (i.e. there is no real multithreaded pure-Python code). For I/O and heavy calculations, however, third-party libraries tend to release that lock, so that limited multithreading is possible.

This comes on top of the usual multithreading nuisances, such as having to protect shared data accesses with mutexes.
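A minimal sketch of that mutex nuisance, using the standard library's `threading.Lock` to guard a shared counter (the counter and thread count are illustrative, not from the question):

```python
import threading

counter = 0
lock = threading.Lock()

def add_many(n):
    global counter
    for _ in range(n):
        # The lock makes the read-modify-write of `counter` atomic;
        # without it, concurrent increments can be lost.
        with lock:
            counter += 1

threads = [threading.Thread(target=add_many, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)
```

With the lock the final count is exactly 4 * 10000; drop the lock and the total can silently come up short.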

Solution 2:

I'm not sure this is the way you should do things, as I'd expect numpy to have a much more efficient method of going about it, but do you just mean something like this?

import numpy
import multiprocessing

x = numpy.linspace(0, 20, 10000)

if __name__ == "__main__":
    # The guard is required by multiprocessing on spawn-based platforms.
    p = multiprocessing.Pool(processes=4)
    print(p.map(numpy.sqrt, x))

Here are the results of timeit for both approaches. As @SvenMarcach points out, however, multiprocessing starts to pay off with a more expensive function.

% python -m timeit -s 'import numpy; x=numpy.linspace(0,20,10000)' 'prop=[]
for i in numpy.arange(0, len(x)):
    prop.append(numpy.sqrt(x[i]))'
10 loops, best of 3: 31.3 msec per loop

% python -m timeit -s 'import numpy, multiprocessing; x=numpy.linspace(0,20,10000)
p = multiprocessing.Pool(processes=4)' 'l = p.map(numpy.sqrt, x)' 
10 loops, best of 3: 102 msec per loop

At Sven's request, here is the result for l = numpy.sqrt(x), which is significantly faster than either of the alternatives.

% python -m timeit -s 'import numpy; x=numpy.linspace(0,20,10000)' 'l = numpy.sqrt(x)'
10000 loops, best of 3: 70.3 usec per loop

Solution 3:

I would suggest that you take a look at cython: http://www.cython.org/

It enables you to create C extensions for Python really quickly, and it integrates very well with numpy. Here is a nice tutorial which may help you get started: http://docs.cython.org/src/tutorial/numpy.html
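For a rough idea of what the loop might look like, here is a hedged Cython sketch (hypothetical module and function names; assumes typed memoryviews and a build via `cythonize`, as in the tutorial linked above):

```cython
# properties.pyx -- hypothetical module; compile with cythonize
import numpy as np
from libc.math cimport sqrt

def compute_props(double[:] x):
    # Typed loop indices and a typed memoryview let Cython
    # compile this loop down to C instead of Python bytecode.
    cdef Py_ssize_t i, n = x.shape[0]
    out = np.empty(n)
    cdef double[:] out_view = out
    for i in range(n):
        out_view[i] = sqrt(x[i])
    return out
```

The structure of your Python loop stays the same; the static typing is what removes the interpreter overhead.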
