:mod:`multiprocessing` --- Process-based "threading" interface
==============================================================

.. module:: multiprocessing
   :synopsis: Process-based "threading" interface.


Introduction
------------
:mod:`multiprocessing` is a package that supports spawning processes using an
API similar to the :mod:`threading` module.  The :mod:`multiprocessing` package
offers both local and remote concurrency, effectively side-stepping the
:term:`Global Interpreter Lock` by using subprocesses instead of threads.  Due
to this, the :mod:`multiprocessing` module allows the programmer to fully
leverage multiple processors on a given machine.  It runs on both Unix and
Windows.
.. warning::

   Some of this package's functionality requires a functioning shared semaphore
   implementation on the host operating system.  Without one, the
   :mod:`multiprocessing.synchronize` module will be disabled, and attempts to
   import it will result in an :exc:`ImportError`.  See
   :issue:`3770` for additional information.
.. note::

   Functionality within this package requires that the ``__main__`` module be
   importable by the children.  This is covered in :ref:`multiprocessing-programming`;
   however, it is worth pointing out here.  This means that some examples, such
   as the :class:`multiprocessing.Pool` examples, will not work in the
   interactive interpreter.  For example::

      >>> from multiprocessing import Pool
      >>> p = Pool(5)
      >>> def f(x):
      ...     return x*x
      ...
      >>> p.map(f, [1,2,3])
      Process PoolWorker-1:
      Process PoolWorker-2:
      Process PoolWorker-3:
      Traceback (most recent call last):
      Traceback (most recent call last):
      Traceback (most recent call last):
      AttributeError: 'module' object has no attribute 'f'
      AttributeError: 'module' object has no attribute 'f'
      AttributeError: 'module' object has no attribute 'f'

   (If you try this it will actually output three full tracebacks
   interleaved in a semi-random fashion, and then you may have to
   stop the master process somehow.)
The :class:`Process` class
~~~~~~~~~~~~~~~~~~~~~~~~~~
In :mod:`multiprocessing`, processes are spawned by creating a :class:`Process`
object and then calling its :meth:`~Process.start` method.  :class:`Process`
follows the API of :class:`threading.Thread`.  A trivial example of a
multiprocess program is ::

   from multiprocessing import Process

   def f(name):
       print 'hello', name

   if __name__ == '__main__':
       p = Process(target=f, args=('bob',))
       p.start()
       p.join()
To show the individual process IDs involved, here is an expanded example::

   from multiprocessing import Process
   import os

   def info(title):
       print title
       print 'module name:', __name__
       print 'parent process:', os.getppid()
       print 'process id:', os.getpid()

   def f(name):
       info('function f')
       print 'hello', name

   if __name__ == '__main__':
       info('main line')
       p = Process(target=f, args=('bob',))
       p.start()
       p.join()
For an explanation of why (on Windows) the ``if __name__ == '__main__'`` part is
necessary, see :ref:`multiprocessing-programming`.
Exchanging objects between processes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:mod:`multiprocessing` supports two types of communication channel between
processes:

**Queues**

   The :class:`Queue` class is a near clone of :class:`Queue.Queue`.  For
   example::

      from multiprocessing import Process, Queue

      def f(q):
          q.put([42, None, 'hello'])

      if __name__ == '__main__':
          q = Queue()
          p = Process(target=f, args=(q,))
          p.start()
          print q.get()    # prints "[42, None, 'hello']"
          p.join()

   Queues are thread and process safe.
**Pipes**

   The :func:`Pipe` function returns a pair of connection objects connected by a
   pipe which by default is duplex (two-way).  For example::

      from multiprocessing import Process, Pipe

      def f(conn):
          conn.send([42, None, 'hello'])
          conn.close()

      if __name__ == '__main__':
          parent_conn, child_conn = Pipe()
          p = Process(target=f, args=(child_conn,))
          p.start()
          print parent_conn.recv()   # prints "[42, None, 'hello']"
          p.join()
   The two connection objects returned by :func:`Pipe` represent the two ends of
   the pipe.  Each connection object has :meth:`~Connection.send` and
   :meth:`~Connection.recv` methods (among others).  Note that data in a pipe
   may become corrupted if two processes (or threads) try to read from or write
   to the *same* end of the pipe at the same time.  Of course there is no risk
   of corruption from processes using different ends of the pipe at the same
   time.
Synchronization between processes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:mod:`multiprocessing` contains equivalents of all the synchronization
primitives from :mod:`threading`.  For instance, one can use a lock to ensure
that only one process prints to standard output at a time::
   from multiprocessing import Process, Lock

   def f(l, i):
       l.acquire()
       print 'hello world', i
       l.release()

   if __name__ == '__main__':
       lock = Lock()

       for num in range(10):
           Process(target=f, args=(lock, num)).start()

Without using the lock, output from the different processes is liable to get
all mixed up.
Sharing state between processes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
As mentioned above, when doing concurrent programming it is usually best to
avoid using shared state as far as possible.  This is particularly true when
using multiple processes.

However, if you really do need to use some shared data then
:mod:`multiprocessing` provides a couple of ways of doing so.
**Shared memory**

   Data can be stored in a shared memory map using :class:`Value` or
   :class:`Array`.  For example, the following code ::

      from multiprocessing import Process, Value, Array

      def f(n, a):
          n.value = 3.1415927
          for i in range(len(a)):
              a[i] = -a[i]

      if __name__ == '__main__':
          num = Value('d', 0.0)
          arr = Array('i', range(10))

          p = Process(target=f, args=(num, arr))
          p.start()
          p.join()

          print num.value
          print arr[:]

   will print ::

      3.1415927
      [0, -1, -2, -3, -4, -5, -6, -7, -8, -9]
   The ``'d'`` and ``'i'`` arguments used when creating ``num`` and ``arr`` are
   typecodes of the kind used by the :mod:`array` module: ``'d'`` indicates a
   double precision float and ``'i'`` indicates a signed integer.  These shared
   objects will be process and thread-safe.
   For more flexibility in using shared memory one can use the
   :mod:`multiprocessing.sharedctypes` module which supports the creation of
   arbitrary ctypes objects allocated from shared memory.
**Server process**

   A manager object returned by :func:`Manager` controls a server process which
   holds Python objects and allows other processes to manipulate them using
   proxies.

   A manager returned by :func:`Manager` will support types :class:`list`,
   :class:`dict`, :class:`Namespace`, :class:`Lock`, :class:`RLock`,
   :class:`Semaphore`, :class:`BoundedSemaphore`, :class:`Condition`,
   :class:`Event`, :class:`Queue`, :class:`Value` and :class:`Array`.  For
   example::

      from multiprocessing import Process, Manager

      def f(d, l):
          d[1] = '1'
          d['2'] = 2
          d[0.25] = None
          l.reverse()

      if __name__ == '__main__':
          manager = Manager()

          d = manager.dict()
          l = manager.list(range(10))

          p = Process(target=f, args=(d, l))
          p.start()
          p.join()

          print d
          print l

   will print ::

      {0.25: None, 1: '1', '2': 2}
      [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
   Server process managers are more flexible than using shared memory objects
   because they can be made to support arbitrary object types.  Also, a single
   manager can be shared by processes on different computers over a network.
   They are, however, slower than using shared memory.
Using a pool of workers
~~~~~~~~~~~~~~~~~~~~~~~
The :class:`~multiprocessing.pool.Pool` class represents a pool of worker
processes.  It has methods which allow tasks to be offloaded to the worker
processes in a few different ways.

For example::

   from multiprocessing import Pool

   def f(x):
       return x*x

   if __name__ == '__main__':
       pool = Pool(processes=4)              # start 4 worker processes
       result = pool.apply_async(f, [10])    # evaluate "f(10)" asynchronously
       print result.get(timeout=1)           # prints "100" unless your computer is *very* slow
       print pool.map(f, range(10))          # prints "[0, 1, 4,..., 81]"
Reference
---------

The :mod:`multiprocessing` package mostly replicates the API of the
:mod:`threading` module.
:class:`Process` and exceptions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. class:: Process([group[, target[, name[, args[, kwargs]]]]])

   Process objects represent activity that is run in a separate process.  The
   :class:`Process` class has equivalents of all the methods of
   :class:`threading.Thread`.

   The constructor should always be called with keyword arguments.  *group*
   should always be ``None``; it exists solely for compatibility with
   :class:`threading.Thread`.  *target* is the callable object to be invoked by
   the :meth:`run` method.  It defaults to ``None``, meaning nothing is
   called.  *name* is the process name.  By default, a unique name is
   constructed of the form 'Process-N\ :sub:`1`:N\ :sub:`2`:...:N\ :sub:`k`'
   where N\ :sub:`1`,N\ :sub:`2`,...,N\ :sub:`k` is a sequence of integers whose
   length is determined by the *generation* of the process.  *args* is the
   argument tuple for the target invocation.  *kwargs* is a dictionary of
   keyword arguments for the target invocation.  By default, no arguments are
   passed to *target*.

   If a subclass overrides the constructor, it must make sure it invokes the
   base class constructor (:meth:`Process.__init__`) before doing anything else
   to the process.
   .. method:: run()

      Method representing the process's activity.

      You may override this method in a subclass.  The standard :meth:`run`
      method invokes the callable object passed to the object's constructor as
      the target argument, if any, with sequential and keyword arguments taken
      from the *args* and *kwargs* arguments, respectively.
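   As a minimal sketch of overriding :meth:`run` (the ``Worker`` class name is
   illustrative, not part of the API), using a queue so the child can report
   back:

   ```python
   from multiprocessing import Process, Queue

   class Worker(Process):
       """Illustrative Process subclass; 'Worker' is not a multiprocessing name."""

       def __init__(self, queue):
           # Invoke the base class constructor before doing anything else.
           Process.__init__(self)
           self.queue = queue

       def run(self):
           # This body executes in the child process once start() is called.
           self.queue.put('done')

   if __name__ == '__main__':
       q = Queue()
       w = Worker(q)
       w.start()     # arranges for run() to be invoked in a separate process
       w.join()
   ```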
   .. method:: start()

      Start the process's activity.

      This must be called at most once per process object.  It arranges for the
      object's :meth:`run` method to be invoked in a separate process.
   .. method:: join([timeout])

      Block the calling thread until the process whose :meth:`join` method is
      called terminates or until the optional timeout occurs.

      If *timeout* is ``None`` then there is no timeout.

      A process can be joined many times.

      A process cannot join itself because this would cause a deadlock.  It is
      an error to attempt to join a process before it has been started.
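      A sketch of the timeout behaviour (assuming a platform where process
      start-up is quick): :meth:`join` with a timeout simply returns once the
      timeout elapses, and the same process may be joined again later:

      ```python
      from multiprocessing import Process
      import time

      def slow():
          time.sleep(5)

      if __name__ == '__main__':
          p = Process(target=slow)
          p.start()
          p.join(0.1)                  # returns after ~0.1s; child still running
          still_running = p.is_alive()
          p.terminate()
          p.join()                     # joining a second time is fine
      ```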
   .. attribute:: name

      The name is a string used for identification purposes only.  It has no
      semantics.  Multiple processes may be given the same name.  The initial
      name is set by the constructor.
   .. method:: is_alive

      Return whether the process is alive.

      Roughly, a process object is alive from the moment the :meth:`start`
      method returns until the child process terminates.
   .. attribute:: daemon

      The process's daemon flag, a Boolean value.  This must be set before
      :meth:`start` is called.

      The initial value is inherited from the creating process.

      When a process exits, it attempts to terminate all of its daemonic child
      processes.

      Note that a daemonic process is not allowed to create child processes.
      Otherwise a daemonic process would leave its children orphaned if it gets
      terminated when its parent process exits.  Additionally, these are **not**
      Unix daemons or services, they are normal processes that will be
      terminated (and not joined) if non-daemonic processes have exited.
   In addition to the :class:`threading.Thread` API, :class:`Process` objects
   also support the following attributes and methods:
   .. attribute:: pid

      Return the process ID.  Before the process is spawned, this will be
      ``None``.
   .. attribute:: exitcode

      The child's exit code.  This will be ``None`` if the process has not yet
      terminated.  A negative value *-N* indicates that the child was terminated
      by signal *N*.
   .. attribute:: authkey

      The process's authentication key (a byte string).

      When :mod:`multiprocessing` is initialized the main process is assigned a
      random string using :func:`os.urandom`.

      When a :class:`Process` object is created, it will inherit the
      authentication key of its parent process, although this may be changed by
      setting :attr:`authkey` to another byte string.

      See :ref:`multiprocessing-auth-keys`.
   .. method:: terminate()

      Terminate the process.  On Unix this is done using the ``SIGTERM`` signal;
      on Windows :c:func:`TerminateProcess` is used.  Note that exit handlers
      and finally clauses, etc., will not be executed.

      Note that descendant processes of the process will *not* be terminated --
      they will simply become orphaned.

      .. warning::

         If this method is used when the associated process is using a pipe or
         queue then the pipe or queue is liable to become corrupted and may
         become unusable by other processes.  Similarly, if the process has
         acquired a lock or semaphore etc. then terminating it is liable to
         cause other processes to deadlock.
   Note that the :meth:`start`, :meth:`join` and :meth:`is_alive` methods and
   the :attr:`exitcode` attribute should only be used by the process that
   created the process object.

   Example usage of some of the methods of :class:`Process`:
   .. doctest::

       >>> import multiprocessing, time, signal
       >>> p = multiprocessing.Process(target=time.sleep, args=(1000,))
       >>> print p, p.is_alive()
       <Process(Process-1, initial)> False
       >>> p.start()
       >>> print p, p.is_alive()
       <Process(Process-1, started)> True
       >>> p.terminate()
       >>> time.sleep(0.1)
       >>> print p, p.is_alive()
       <Process(Process-1, stopped[SIGTERM])> False
       >>> p.exitcode == -signal.SIGTERM
       True
.. exception:: BufferTooShort

   Exception raised by :meth:`Connection.recv_bytes_into()` when the supplied
   buffer object is too small for the message read.

   If ``e`` is an instance of :exc:`BufferTooShort` then ``e.args[0]`` will give
   the message as a byte string.
Pipes and Queues
~~~~~~~~~~~~~~~~

When using multiple processes, one generally uses message passing for
communication between processes and avoids having to use any synchronization
primitives like locks.

For passing messages one can use :func:`Pipe` (for a connection between two
processes) or a queue (which allows multiple producers and consumers).
The :class:`Queue`, :class:`multiprocessing.queues.SimpleQueue` and
:class:`JoinableQueue` types are multi-producer, multi-consumer FIFO queues
modelled on the :class:`Queue.Queue` class in the standard library.  They
differ in that :class:`Queue` lacks the :meth:`~Queue.Queue.task_done` and
:meth:`~Queue.Queue.join` methods introduced into Python 2.5's
:class:`Queue.Queue` class.
If you use :class:`JoinableQueue` then you **must** call
:meth:`JoinableQueue.task_done` for each task removed from the queue or else the
semaphore used to count the number of unfinished tasks may eventually overflow,
raising an exception.
Note that one can also create a shared queue by using a manager object -- see
:ref:`multiprocessing-managers`.
.. note::

   :mod:`multiprocessing` uses the usual :exc:`Queue.Empty` and
   :exc:`Queue.Full` exceptions to signal a timeout.  They are not available in
   the :mod:`multiprocessing` namespace so you need to import them from
   :mod:`Queue`.
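   For instance (a small sketch; the compatibility import is only needed if you
   also run the snippet on Python 3, where the module was renamed to
   :mod:`queue`):

   ```python
   try:
       from Queue import Empty, Full       # Python 2 location of the exceptions
   except ImportError:
       from queue import Empty, Full       # renamed in Python 3
   from multiprocessing import Queue

   q = Queue()
   try:
       q.get(timeout=0.01)                 # nothing has been put on the queue
       timed_out = False
   except Empty:
       timed_out = True
   ```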
.. warning::

   If a process is killed using :meth:`Process.terminate` or :func:`os.kill`
   while it is trying to use a :class:`Queue`, then the data in the queue is
   likely to become corrupted.  This may cause any other process to get an
   exception when it tries to use the queue later on.
.. warning::

   As mentioned above, if a child process has put items on a queue (and it has
   not used :meth:`JoinableQueue.cancel_join_thread`), then that process will
   not terminate until all buffered items have been flushed to the pipe.

   This means that if you try joining that process you may get a deadlock unless
   you are sure that all items which have been put on the queue have been
   consumed.  Similarly, if the child process is non-daemonic then the parent
   process may hang on exit when it tries to join all its non-daemonic children.

   Note that a queue created using a manager does not have this issue.  See
   :ref:`multiprocessing-programming`.
For an example of the usage of queues for interprocess communication see
:ref:`multiprocessing-examples`.
.. function:: Pipe([duplex])

   Returns a pair ``(conn1, conn2)`` of :class:`Connection` objects representing
   the ends of a pipe.

   If *duplex* is ``True`` (the default) then the pipe is bidirectional.  If
   *duplex* is ``False`` then the pipe is unidirectional: ``conn1`` can only be
   used for receiving messages and ``conn2`` can only be used for sending
   messages.
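   A one-way pipe can be sketched as follows (both ends are used in a single
   process here purely for illustration):

   ```python
   from multiprocessing import Pipe

   # With duplex=False, the first connection receives, the second sends.
   recv_conn, send_conn = Pipe(duplex=False)
   send_conn.send('one way only')
   msg = recv_conn.recv()
   ```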
.. class:: Queue([maxsize])

   Returns a process shared queue implemented using a pipe and a few
   locks/semaphores.  When a process first puts an item on the queue a feeder
   thread is started which transfers objects from a buffer into the pipe.

   The usual :exc:`Queue.Empty` and :exc:`Queue.Full` exceptions from the
   standard library's :mod:`Queue` module are raised to signal timeouts.

   :class:`Queue` implements all the methods of :class:`Queue.Queue` except for
   :meth:`~Queue.Queue.task_done` and :meth:`~Queue.Queue.join`.
   .. method:: qsize()

      Return the approximate size of the queue.  Because of
      multithreading/multiprocessing semantics, this number is not reliable.

      Note that this may raise :exc:`NotImplementedError` on Unix platforms like
      Mac OS X where ``sem_getvalue()`` is not implemented.
   .. method:: empty()

      Return ``True`` if the queue is empty, ``False`` otherwise.  Because of
      multithreading/multiprocessing semantics, this is not reliable.
   .. method:: full()

      Return ``True`` if the queue is full, ``False`` otherwise.  Because of
      multithreading/multiprocessing semantics, this is not reliable.
   .. method:: put(obj[, block[, timeout]])

      Put *obj* into the queue.  If the optional argument *block* is ``True``
      (the default) and *timeout* is ``None`` (the default), block if necessary
      until a free slot is available.  If *timeout* is a positive number, it
      blocks at most *timeout* seconds and raises the :exc:`Queue.Full`
      exception if no free slot was available within that time.  Otherwise
      (*block* is ``False``), put an item on the queue if a free slot is
      immediately available, else raise the :exc:`Queue.Full` exception
      (*timeout* is ignored in that case).
   .. method:: put_nowait(obj)

      Equivalent to ``put(obj, False)``.
   .. method:: get([block[, timeout]])

      Remove and return an item from the queue.  If the optional argument
      *block* is ``True`` (the default) and *timeout* is ``None`` (the default),
      block if necessary until an item is available.  If *timeout* is a positive
      number, it blocks at most *timeout* seconds and raises the
      :exc:`Queue.Empty` exception if no item was available within that time.
      Otherwise (*block* is ``False``), return an item if one is immediately
      available, else raise the :exc:`Queue.Empty` exception (*timeout* is
      ignored in that case).
   .. method:: get_nowait()

      Equivalent to ``get(False)``.
   :class:`multiprocessing.Queue` has a few additional methods not found in
   :class:`Queue.Queue`.  These methods are usually unnecessary for most code:
   .. method:: close()

      Indicate that no more data will be put on this queue by the current
      process.  The background thread will quit once it has flushed all buffered
      data to the pipe.  This is called automatically when the queue is garbage
      collected.
   .. method:: join_thread()

      Join the background thread.  This can only be used after :meth:`close` has
      been called.  It blocks until the background thread exits, ensuring that
      all data in the buffer has been flushed to the pipe.

      By default if a process is not the creator of the queue then on exit it
      will attempt to join the queue's background thread.  The process can call
      :meth:`cancel_join_thread` to make :meth:`join_thread` do nothing.
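      The close/join_thread sequence can be sketched as follows (a
      single-process illustration):

      ```python
      from multiprocessing import Queue

      q = Queue()
      q.put('last item')
      item = q.get()
      q.close()          # no more data will be put on the queue by this process
      q.join_thread()    # wait until the feeder thread has flushed its buffer
      ```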
   .. method:: cancel_join_thread()

      Prevent :meth:`join_thread` from blocking.  In particular, this prevents
      the background thread from being joined automatically when the process
      exits -- see :meth:`join_thread`.
.. class:: multiprocessing.queues.SimpleQueue()

   A simplified :class:`Queue` type, very close to a locked :class:`Pipe`.

   .. method:: empty()

      Return ``True`` if the queue is empty, ``False`` otherwise.

   .. method:: get()

      Remove and return an item from the queue.

   .. method:: put(item)

      Put *item* into the queue.
.. class:: JoinableQueue([maxsize])

   :class:`JoinableQueue`, a :class:`Queue` subclass, is a queue which
   additionally has :meth:`task_done` and :meth:`join` methods.
   .. method:: task_done()

      Indicate that a formerly enqueued task is complete.  Used by queue
      consumer threads.  For each :meth:`~Queue.get` used to fetch a task, a
      subsequent call to :meth:`task_done` tells the queue that the processing
      on the task is complete.

      If a :meth:`~Queue.join` is currently blocking, it will resume when all
      items have been processed (meaning that a :meth:`task_done` call was
      received for every item that had been :meth:`~Queue.put` into the queue).

      Raises a :exc:`ValueError` if called more times than there were items
      placed in the queue.
   .. method:: join()

      Block until all items in the queue have been gotten and processed.

      The count of unfinished tasks goes up whenever an item is added to the
      queue.  The count goes down whenever a consumer thread calls
      :meth:`task_done` to indicate that the item was retrieved and all work on
      it is complete.  When the count of unfinished tasks drops to zero,
      :meth:`~Queue.join` unblocks.
Miscellaneous
~~~~~~~~~~~~~

.. function:: active_children()

   Return list of all live children of the current process.

   Calling this has the side effect of "joining" any processes which have
   already finished.
.. function:: cpu_count()

   Return the number of CPUs in the system.  May raise
   :exc:`NotImplementedError`.
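   Since the count may be unavailable on some platforms, a defensive sketch
   looks like:

   ```python
   import multiprocessing

   try:
       workers = multiprocessing.cpu_count()
   except NotImplementedError:
       workers = 1     # fall back when the platform cannot report a count
   ```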
.. function:: current_process()

   Return the :class:`Process` object corresponding to the current process.

   An analogue of :func:`threading.current_thread`.
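   For example, called from the main process it returns the process object with
   the default name:

   ```python
   from multiprocessing import current_process

   p = current_process()
   name = p.name          # 'MainProcess' when called from the main process
   ```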
.. function:: freeze_support()

   Add support for when a program which uses :mod:`multiprocessing` has been
   frozen to produce a Windows executable.  (Has been tested with **py2exe**,
   **PyInstaller** and **cx_Freeze**.)

   One needs to call this function straight after the ``if __name__ ==
   '__main__'`` line of the main module.  For example::

      from multiprocessing import Process, freeze_support

      def f():
          print 'hello world!'

      if __name__ == '__main__':
          freeze_support()
          Process(target=f).start()

   If the ``freeze_support()`` line is omitted then trying to run the frozen
   executable will raise :exc:`RuntimeError`.

   If the module is being run normally by the Python interpreter then
   :func:`freeze_support` has no effect.
.. function:: set_executable()

   Sets the path of the Python interpreter to use when starting a child process.
   (By default :data:`sys.executable` is used.)  Embedders will probably need to
   do something like ::

      set_executable(os.path.join(sys.exec_prefix, 'pythonw.exe'))

   before they can create child processes.  (Windows only)
.. note::

   :mod:`multiprocessing` contains no analogues of
   :func:`threading.active_count`, :func:`threading.enumerate`,
   :func:`threading.settrace`, :func:`threading.setprofile`,
   :class:`threading.Timer`, or :class:`threading.local`.
Connection Objects
~~~~~~~~~~~~~~~~~~

Connection objects allow the sending and receiving of picklable objects or
strings.  They can be thought of as message oriented connected sockets.

Connection objects are usually created using :func:`Pipe` -- see also
:ref:`multiprocessing-listeners-clients`.
.. class:: Connection

   .. method:: send(obj)

      Send an object to the other end of the connection which should be read
      using :meth:`recv`.

      The object must be picklable.  Very large pickles (approximately 32 MB+,
      though it depends on the OS) may raise a :exc:`ValueError` exception.
   .. method:: recv()

      Return an object sent from the other end of the connection using
      :meth:`send`.  Blocks until there is something to receive.  Raises
      :exc:`EOFError` if there is nothing left to receive
      and the other end was closed.
   .. method:: fileno()

      Return the file descriptor or handle used by the connection.
   .. method:: close()

      Close the connection.

      This is called automatically when the connection is garbage collected.
   .. method:: poll([timeout])

      Return whether there is any data available to be read.

      If *timeout* is not specified then it will return immediately.  If
      *timeout* is a number then this specifies the maximum time in seconds to
      block.  If *timeout* is ``None`` then an infinite timeout is used.
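      A quick sketch of polling, using both ends of a pipe within one process
      purely for illustration:

      ```python
      from multiprocessing import Pipe

      a, b = Pipe()
      no_data = a.poll()        # returns immediately: nothing to read yet
      b.send('ping')
      has_data = a.poll(1.0)    # blocks for at most one second
      msg = a.recv()
      ```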
   .. method:: send_bytes(buffer[, offset[, size]])

      Send byte data from an object supporting the buffer interface as a
      complete message.

      If *offset* is given then data is read from that position in *buffer*.  If
      *size* is given then that many bytes will be read from buffer.  Very large
      buffers (approximately 32 MB+, though it depends on the OS) may raise a
      :exc:`ValueError` exception.
   .. method:: recv_bytes([maxlength])

      Return a complete message of byte data sent from the other end of the
      connection as a string.  Blocks until there is something to receive.
      Raises :exc:`EOFError` if there is nothing left
      to receive and the other end has closed.

      If *maxlength* is specified and the message is longer than *maxlength*
      then :exc:`IOError` is raised and the connection will no longer be
      readable.
   .. method:: recv_bytes_into(buffer[, offset])

      Read into *buffer* a complete message of byte data sent from the other end
      of the connection and return the number of bytes in the message.  Blocks
      until there is something to receive.  Raises
      :exc:`EOFError` if there is nothing left to receive and the other end was
      closed.

      *buffer* must be an object satisfying the writable buffer interface.  If
      *offset* is given then the message will be written into the buffer from
      that position.  Offset must be a non-negative integer less than the
      length of *buffer* (in bytes).

      If the buffer is too short then a :exc:`BufferTooShort` exception is
      raised and the complete message is available as ``e.args[0]`` where ``e``
      is the exception instance.
For example:

.. doctest::

    >>> from multiprocessing import Pipe
    >>> a, b = Pipe()
    >>> a.send([1, 'hello', None])
    >>> b.recv()
    [1, 'hello', None]
    >>> b.send_bytes('thank you')
    >>> a.recv_bytes()
    'thank you'
    >>> import array
    >>> arr1 = array.array('i', range(5))
    >>> arr2 = array.array('i', [0] * 10)
    >>> a.send_bytes(arr1)
    >>> count = b.recv_bytes_into(arr2)
    >>> assert count == len(arr1) * arr1.itemsize
    >>> arr2
    array('i', [0, 1, 2, 3, 4, 0, 0, 0, 0, 0])
.. warning::

   The :meth:`Connection.recv` method automatically unpickles the data it
   receives, which can be a security risk unless you can trust the process
   which sent the message.

   Therefore, unless the connection object was produced using :func:`Pipe` you
   should only use the :meth:`~Connection.recv` and :meth:`~Connection.send`
   methods after performing some sort of authentication.  See
   :ref:`multiprocessing-auth-keys`.
.. warning::

   If a process is killed while it is trying to read or write to a pipe then
   the data in the pipe is likely to become corrupted, because it may become
   impossible to be sure where the message boundaries lie.
Synchronization primitives
~~~~~~~~~~~~~~~~~~~~~~~~~~
Generally synchronization primitives are not as necessary in a multiprocess
program as they are in a multithreaded program.  See the documentation for the
:mod:`threading` module.

Note that one can also create synchronization primitives by using a manager
object -- see :ref:`multiprocessing-managers`.
.. class:: BoundedSemaphore([value])

   A bounded semaphore object: a clone of :class:`threading.BoundedSemaphore`.

   (On Mac OS X, this is indistinguishable from :class:`Semaphore` because
   ``sem_getvalue()`` is not implemented on that platform.)
.. class:: Condition([lock])

   A condition variable: a clone of :class:`threading.Condition`.

   If *lock* is specified then it should be a :class:`Lock` or :class:`RLock`
   object from :mod:`multiprocessing`.
.. class:: Event()

   A clone of :class:`threading.Event`.
   The :meth:`wait` method returns the state of the internal semaphore on exit,
   so it will always return ``True`` except if a timeout is given and the
   operation times out.

   .. versionchanged:: 2.7
      Previously, the method always returned ``None``.
.. class:: Lock()

   A non-recursive lock object: a clone of :class:`threading.Lock`.
.. class:: RLock()

   A recursive lock object: a clone of :class:`threading.RLock`.
.. class:: Semaphore([value])

   A semaphore object: a clone of :class:`threading.Semaphore`.
.. note::

   The :meth:`acquire` method of :class:`BoundedSemaphore`, :class:`Lock`,
   :class:`RLock` and :class:`Semaphore` has a timeout parameter not supported
   by the equivalents in :mod:`threading`.  The signature is
   ``acquire(block=True, timeout=None)`` with keyword parameters being
   acceptable.  If *block* is ``True`` and *timeout* is not ``None`` then it
   specifies a timeout in seconds.  If *block* is ``False`` then *timeout* is
   ignored.

   On Mac OS X, ``sem_timedwait`` is unsupported, so calling ``acquire()`` with
   a timeout will emulate that function's behavior using a sleeping loop.
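   A sketch of the timeout form (re-acquiring a non-recursive lock from the
   same process would otherwise block forever):

   ```python
   from multiprocessing import Lock

   lock = Lock()
   lock.acquire()
   # multiprocessing locks are not owner-aware, so a second acquire would
   # block; with a timeout it gives up and returns False instead.
   got_it = lock.acquire(block=True, timeout=0.1)
   lock.release()
   ```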
.. note::

   If the SIGINT signal generated by Ctrl-C arrives while the main thread is
   blocked by a call to :meth:`BoundedSemaphore.acquire`, :meth:`Lock.acquire`,
   :meth:`RLock.acquire`, :meth:`Semaphore.acquire`, :meth:`Condition.acquire`
   or :meth:`Condition.wait` then the call will be immediately interrupted and
   :exc:`KeyboardInterrupt` will be raised.

   This differs from the behaviour of :mod:`threading` where SIGINT will be
   ignored while the equivalent blocking calls are in progress.
Shared :mod:`ctypes` Objects
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

It is possible to create shared objects using shared memory which can be
inherited by child processes.
.. function:: Value(typecode_or_type, *args[, lock])

   Return a :mod:`ctypes` object allocated from shared memory.  By default the
   return value is actually a synchronized wrapper for the object.

   *typecode_or_type* determines the type of the returned object: it is either a
   ctypes type or a one character typecode of the kind used by the :mod:`array`
   module.  *\*args* is passed on to the constructor for the type.

   If *lock* is ``True`` (the default) then a new lock object is created to
   synchronize access to the value.  If *lock* is a :class:`Lock` or
   :class:`RLock` object then that will be used to synchronize access to the
   value.  If *lock* is ``False`` then access to the returned object will not be
   automatically protected by a lock, so it will not necessarily be
   "process-safe".

   Note that *lock* is a keyword-only argument.
.. function:: Array(typecode_or_type, size_or_initializer, *, lock=True)

   Return a ctypes array allocated from shared memory.  By default the return
   value is actually a synchronized wrapper for the array.

   *typecode_or_type* determines the type of the elements of the returned array:
   it is either a ctypes type or a one character typecode of the kind used by
   the :mod:`array` module.  If *size_or_initializer* is an integer, then it
   determines the length of the array, and the array will be initially zeroed.
   Otherwise, *size_or_initializer* is a sequence which is used to initialize
   the array and whose length determines the length of the array.

   If *lock* is ``True`` (the default) then a new lock object is created to
   synchronize access to the value.  If *lock* is a :class:`Lock` or
   :class:`RLock` object then that will be used to synchronize access to the
   value.  If *lock* is ``False`` then access to the returned object will not be
   automatically protected by a lock, so it will not necessarily be
   "process-safe".

   Note that *lock* is a keyword-only argument.
   Note that an array of :data:`ctypes.c_char` has *value* and *raw*
   attributes which allow one to use it to store and retrieve strings.
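   For instance (a small sketch; the ``b''`` literal keeps the snippet valid on
   both Python 2 and 3):

   ```python
   from multiprocessing import Array
   import ctypes

   # An array of c_char exposes .value and .raw for storing strings.
   buf = Array(ctypes.c_char, 10)
   buf.value = b'hello'       # .value reads up to the first NUL byte
   stored = buf.value
   ```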
The :mod:`multiprocessing.sharedctypes` module
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
.. module:: multiprocessing.sharedctypes
   :synopsis: Allocate ctypes objects from shared memory.

The :mod:`multiprocessing.sharedctypes` module provides functions for allocating
:mod:`ctypes` objects from shared memory which can be inherited by child
processes.
.. note::

   Although it is possible to store a pointer in shared memory remember that
   this will refer to a location in the address space of a specific process.
   However, the pointer is quite likely to be invalid in the context of a second
   process and trying to dereference the pointer from the second process may
   cause a crash.
.. function:: RawArray(typecode_or_type, size_or_initializer)

   Return a ctypes array allocated from shared memory.

   *typecode_or_type* determines the type of the elements of the returned array:
   it is either a ctypes type or a one character typecode of the kind used by
   the :mod:`array` module.  If *size_or_initializer* is an integer then it
   determines the length of the array, and the array will be initially zeroed.
   Otherwise *size_or_initializer* is a sequence which is used to initialize the
   array and whose length determines the length of the array.

   Note that setting and getting an element is potentially non-atomic -- use
   :func:`Array` instead to make sure that access is automatically synchronized
   using a lock.
.. function:: RawValue(typecode_or_type, *args)

   Return a ctypes object allocated from shared memory.

   *typecode_or_type* determines the type of the returned object: it is either a
   ctypes type or a one character typecode of the kind used by the :mod:`array`
   module.  *\*args* is passed on to the constructor for the type.

   Note that setting and getting the value is potentially non-atomic -- use
   :func:`Value` instead to make sure that access is automatically synchronized
   using a lock.

   Note that an array of :data:`ctypes.c_char` has ``value`` and ``raw``
   attributes which allow one to use it to store and retrieve strings -- see
   documentation for :mod:`ctypes`.
.. function:: Array(typecode_or_type, size_or_initializer, *args[, lock])

   The same as :func:`RawArray` except that depending on the value of *lock* a
   process-safe synchronization wrapper may be returned instead of a raw ctypes
   array.

   If *lock* is ``True`` (the default) then a new lock object is created to
   synchronize access to the value.  If *lock* is a :class:`Lock` or
   :class:`RLock` object then that will be used to synchronize access to the
   value.  If *lock* is ``False`` then access to the returned object will not be
   automatically protected by a lock, so it will not necessarily be
   "process-safe".

   Note that *lock* is a keyword-only argument.
.. function:: Value(typecode_or_type, *args[, lock])

   The same as :func:`RawValue` except that depending on the value of *lock* a
   process-safe synchronization wrapper may be returned instead of a raw ctypes
   object.

   If *lock* is ``True`` (the default) then a new lock object is created to
   synchronize access to the value.  If *lock* is a :class:`Lock` or
   :class:`RLock` object then that will be used to synchronize access to the
   value.  If *lock* is ``False`` then access to the returned object will not be
   automatically protected by a lock, so it will not necessarily be
   "process-safe".

   Note that *lock* is a keyword-only argument.
.. function:: copy(obj)

   Return a ctypes object allocated from shared memory which is a copy of the
   ctypes object *obj*.

.. function:: synchronized(obj[, lock])

   Return a process-safe wrapper object for a ctypes object which uses *lock* to
   synchronize access.  If *lock* is ``None`` (the default) then a
   :class:`multiprocessing.RLock` object is created automatically.

   A synchronized wrapper will have two methods in addition to those of the
   object it wraps: :meth:`get_obj` returns the wrapped object and
   :meth:`get_lock` returns the lock object used for synchronization.

   Note that accessing the ctypes object through the wrapper can be a lot slower
   than accessing the raw ctypes object.
The table below compares the syntax for creating shared ctypes objects from
shared memory with the normal ctypes syntax.  (In the table ``MyStruct`` is some
subclass of :class:`ctypes.Structure`.)

==================== ========================== ===========================
ctypes               sharedctypes using type    sharedctypes using typecode
==================== ========================== ===========================
c_double(2.4)        RawValue(c_double, 2.4)    RawValue('d', 2.4)
MyStruct(4, 6)       RawValue(MyStruct, 4, 6)
(c_short * 7)()      RawArray(c_short, 7)       RawArray('h', 7)
(c_int * 3)(9, 2, 8) RawArray(c_int, (9, 2, 8)) RawArray('i', (9, 2, 8))
==================== ========================== ===========================
Below is an example where a number of ctypes objects are modified by a child
process::

   from multiprocessing import Process, Lock
   from multiprocessing.sharedctypes import Value, Array
   from ctypes import Structure, c_double

   class Point(Structure):
       _fields_ = [('x', c_double), ('y', c_double)]

   def modify(n, x, s, A):
       n.value **= 2
       x.value **= 2
       s.value = s.value.upper()
       for a in A:
           a.x **= 2
           a.y **= 2

   if __name__ == '__main__':
       lock = Lock()

       n = Value('i', 7)
       x = Value(c_double, 1.0/3.0, lock=False)
       s = Array('c', 'hello world', lock=lock)
       A = Array(Point, [(1.875,-6.25), (-5.75,2.0), (2.375,9.5)], lock=lock)

       p = Process(target=modify, args=(n, x, s, A))
       p.start()
       p.join()

       print n.value
       print x.value
       print s.value
       print [(a.x, a.y) for a in A]


.. highlightlang:: none

The results printed are ::

    49
    0.1111111111111111
    HELLO WORLD
    [(3.515625, 39.0625), (33.0625, 4.0), (5.640625, 90.25)]

.. highlightlang:: python
.. _multiprocessing-managers:

Managers
~~~~~~~~

Managers provide a way to create data which can be shared between different
processes.  A manager object controls a server process which manages *shared
objects*.  Other processes can access the shared objects by using proxies.

.. function:: multiprocessing.Manager()

   Returns a started :class:`~multiprocessing.managers.SyncManager` object which
   can be used for sharing objects between processes.  The returned manager
   object corresponds to a spawned child process and has methods which will
   create shared objects and return corresponding proxies.

.. module:: multiprocessing.managers
   :synopsis: Share data between processes with shared objects.

Manager processes will be shut down as soon as they are garbage collected or
their parent process exits.  The manager classes are defined in the
:mod:`multiprocessing.managers` module:
.. class:: BaseManager([address[, authkey]])

   Create a BaseManager object.

   Once created one should call :meth:`start` or ``get_server().serve_forever()``
   to ensure that the manager object refers to a started manager process.

   *address* is the address on which the manager process listens for new
   connections.  If *address* is ``None`` then an arbitrary one is chosen.

   *authkey* is the authentication key which will be used to check the validity
   of incoming connections to the server process.  If *authkey* is ``None`` then
   ``current_process().authkey`` is used.  Otherwise *authkey* is used and it
   must be a string.

   .. method:: start([initializer[, initargs]])

      Start a subprocess to start the manager.  If *initializer* is not ``None``
      then the subprocess will call ``initializer(*initargs)`` when it starts.
   .. method:: get_server()

      Returns a :class:`Server` object which represents the actual server under
      the control of the Manager.  The :class:`Server` object supports the
      :meth:`serve_forever` method::

      >>> from multiprocessing.managers import BaseManager
      >>> manager = BaseManager(address=('', 50000), authkey='abc')
      >>> server = manager.get_server()
      >>> server.serve_forever()

      :class:`Server` additionally has an :attr:`address` attribute.

   .. method:: connect()

      Connect a local manager object to a remote manager process::

      >>> from multiprocessing.managers import BaseManager
      >>> m = BaseManager(address=('127.0.0.1', 5000), authkey='abc')
      >>> m.connect()

   .. method:: shutdown()

      Stop the process used by the manager.  This is only available if
      :meth:`start` has been used to start the server process.

      This can be called multiple times.
   .. method:: register(typeid[, callable[, proxytype[, exposed[, method_to_typeid[, create_method]]]]])

      A classmethod which can be used for registering a type or callable with
      the manager class.

      *typeid* is a "type identifier" which is used to identify a particular
      type of shared object.  This must be a string.

      *callable* is a callable used for creating objects for this type
      identifier.  If a manager instance will be created using the
      :meth:`from_address` classmethod or if the *create_method* argument is
      ``False`` then this can be left as ``None``.

      *proxytype* is a subclass of :class:`BaseProxy` which is used to create
      proxies for shared objects with this *typeid*.  If ``None`` then a proxy
      class is created automatically.

      *exposed* is used to specify a sequence of method names which proxies for
      this typeid should be allowed to access using
      :meth:`BaseProxy._callmethod`.  (If *exposed* is ``None`` then
      :attr:`proxytype._exposed_` is used instead if it exists.)  In the case
      where no exposed list is specified, all "public methods" of the shared
      object will be accessible.  (Here a "public method" means any attribute
      which has a :meth:`__call__` method and whose name does not begin with
      ``'_'``.)

      *method_to_typeid* is a mapping used to specify the return type of those
      exposed methods which should return a proxy.  It maps method names to
      typeid strings.  (If *method_to_typeid* is ``None`` then
      :attr:`proxytype._method_to_typeid_` is used instead if it exists.)  If a
      method's name is not a key of this mapping or if the mapping is ``None``
      then the object returned by the method will be copied by value.

      *create_method* determines whether a method should be created with name
      *typeid* which can be used to tell the server process to create a new
      shared object and return a proxy for it.  By default it is ``True``.

   :class:`BaseManager` instances also have one read-only property:

   .. attribute:: address

      The address used by the manager.
.. class:: SyncManager

   A subclass of :class:`BaseManager` which can be used for the synchronization
   of processes.  Objects of this type are returned by
   :func:`multiprocessing.Manager`.

   It also supports creation of shared lists and dictionaries.

   .. method:: BoundedSemaphore([value])

      Create a shared :class:`threading.BoundedSemaphore` object and return a
      proxy for it.

   .. method:: Condition([lock])

      Create a shared :class:`threading.Condition` object and return a proxy for
      it.

      If *lock* is supplied then it should be a proxy for a
      :class:`threading.Lock` or :class:`threading.RLock` object.

   .. method:: Event()

      Create a shared :class:`threading.Event` object and return a proxy for it.

   .. method:: Lock()

      Create a shared :class:`threading.Lock` object and return a proxy for it.

   .. method:: Namespace()

      Create a shared :class:`Namespace` object and return a proxy for it.

   .. method:: Queue([maxsize])

      Create a shared :class:`Queue.Queue` object and return a proxy for it.

   .. method:: RLock()

      Create a shared :class:`threading.RLock` object and return a proxy for it.

   .. method:: Semaphore([value])

      Create a shared :class:`threading.Semaphore` object and return a proxy
      for it.

   .. method:: Array(typecode, sequence)

      Create an array and return a proxy for it.

   .. method:: Value(typecode, value)

      Create an object with a writable ``value`` attribute and return a proxy
      for it.

   .. method:: dict()
               dict(mapping)
               dict(sequence)

      Create a shared ``dict`` object and return a proxy for it.

   .. method:: list()
               list(sequence)

      Create a shared ``list`` object and return a proxy for it.
   .. note::

      Modifications to mutable values or items in dict and list proxies will not
      be propagated through the manager, because the proxy has no way of knowing
      when its values or items are modified.  To modify such an item, you can
      re-assign the modified object to the container proxy::

         # create a list proxy and append a mutable object (a dictionary)
         lproxy = manager.list()
         lproxy.append({})
         # now mutate the dictionary
         d = lproxy[0]
         d['a'] = 1
         d['b'] = 2
         # at this point, the changes to d are not yet synced, but by
         # reassigning the dictionary, the proxy is notified of the change
         lproxy[0] = d
Namespace objects
>>>>>>>>>>>>>>>>>

A namespace object has no public methods, but does have writable attributes.
Its representation shows the values of its attributes.

However, when using a proxy for a namespace object, an attribute beginning with
``'_'`` will be an attribute of the proxy and not an attribute of the referent:

.. doctest::

   >>> manager = multiprocessing.Manager()
   >>> Global = manager.Namespace()
   >>> Global.x = 10
   >>> Global.y = 'hello'
   >>> Global._z = 12.3    # this is an attribute of the proxy
   >>> print Global
   Namespace(x=10, y='hello')
Customized managers
>>>>>>>>>>>>>>>>>>>

To create one's own manager, one creates a subclass of :class:`BaseManager` and
uses the :meth:`~BaseManager.register` classmethod to register new types or
callables with the manager class.  For example::

   from multiprocessing.managers import BaseManager

   class MathsClass(object):
       def add(self, x, y):
           return x + y
       def mul(self, x, y):
           return x * y

   class MyManager(BaseManager):
       pass

   MyManager.register('Maths', MathsClass)

   if __name__ == '__main__':
       manager = MyManager()
       manager.start()
       maths = manager.Maths()
       print maths.add(4, 3)         # prints 7
       print maths.mul(7, 8)         # prints 56
Using a remote manager
>>>>>>>>>>>>>>>>>>>>>>

It is possible to run a manager server on one machine and have clients use it
from other machines (assuming that the firewalls involved allow it).

Running the following commands creates a server for a single shared queue which
remote clients can access::

   >>> from multiprocessing.managers import BaseManager
   >>> import Queue
   >>> queue = Queue.Queue()
   >>> class QueueManager(BaseManager): pass
   >>> QueueManager.register('get_queue', callable=lambda:queue)
   >>> m = QueueManager(address=('', 50000), authkey='abracadabra')
   >>> s = m.get_server()
   >>> s.serve_forever()
One client can access the server as follows::

   >>> from multiprocessing.managers import BaseManager
   >>> class QueueManager(BaseManager): pass
   >>> QueueManager.register('get_queue')
   >>> m = QueueManager(address=('foo.bar.org', 50000), authkey='abracadabra')
   >>> m.connect()
   >>> queue = m.get_queue()
   >>> queue.put('hello')

Another client can also use it::

   >>> from multiprocessing.managers import BaseManager
   >>> class QueueManager(BaseManager): pass
   >>> QueueManager.register('get_queue')
   >>> m = QueueManager(address=('foo.bar.org', 50000), authkey='abracadabra')
   >>> m.connect()
   >>> queue = m.get_queue()
   >>> queue.get()
   'hello'
Local processes can also access that queue, using the code from above on the
client to access it remotely::

    >>> from multiprocessing import Process, Queue
    >>> from multiprocessing.managers import BaseManager
    >>> class Worker(Process):
    ...     def __init__(self, q):
    ...         self.q = q
    ...         super(Worker, self).__init__()
    ...     def run(self):
    ...         self.q.put('local hello')
    ...
    >>> queue = Queue()
    >>> w = Worker(queue)
    >>> w.start()
    >>> class QueueManager(BaseManager): pass
    ...
    >>> QueueManager.register('get_queue', callable=lambda: queue)
    >>> m = QueueManager(address=('', 50000), authkey='abracadabra')
    >>> s = m.get_server()
    >>> s.serve_forever()
Proxy Objects
~~~~~~~~~~~~~

A proxy is an object which *refers* to a shared object which lives (presumably)
in a different process.  The shared object is said to be the *referent* of the
proxy.  Multiple proxy objects may have the same referent.

A proxy object has methods which invoke corresponding methods of its referent
(although not every method of the referent will necessarily be available through
the proxy).  A proxy can usually be used in most of the same ways that its
referent can:

.. doctest::

   >>> from multiprocessing import Manager
   >>> manager = Manager()
   >>> l = manager.list([i*i for i in range(10)])
   >>> print l
   [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
   >>> print repr(l)
   <ListProxy object, typeid 'list' at 0x...>
   >>> l[4]
   16
   >>> l[2:5]
   [4, 9, 16]

Notice that applying :func:`str` to a proxy will return the representation of
the referent, whereas applying :func:`repr` will return the representation of
the proxy.
An important feature of proxy objects is that they are picklable so they can be
passed between processes.  Note, however, that if a proxy is sent to the
corresponding manager's process then unpickling it will produce the referent
itself.  This means, for example, that one shared object can contain a second:

.. doctest::

   >>> a = manager.list()
   >>> b = manager.list()
   >>> a.append(b)         # referent of a now contains referent of b
   >>> print a, b
   [[]] []
   >>> b.append('hello')
   >>> print a, b
   [['hello']] ['hello']
.. note::

   The proxy types in :mod:`multiprocessing` do nothing to support comparisons
   by value.  So, for instance, we have:

   .. doctest::

       >>> manager.list([1,2,3]) == [1,2,3]
       False

   One should just use a copy of the referent instead when making comparisons.
.. class:: BaseProxy

   Proxy objects are instances of subclasses of :class:`BaseProxy`.

   .. method:: _callmethod(methodname[, args[, kwds]])

      Call and return the result of a method of the proxy's referent.

      If ``proxy`` is a proxy whose referent is ``obj`` then the expression ::

         proxy._callmethod(methodname, args, kwds)

      will evaluate the expression ::

         getattr(obj, methodname)(*args, **kwds)

      in the manager's process.

      The returned value will be a copy of the result of the call or a proxy to
      a new shared object -- see documentation for the *method_to_typeid*
      argument of :meth:`BaseManager.register`.

      If an exception is raised by the call, then it is re-raised by
      :meth:`_callmethod`.  If some other exception is raised in the manager's
      process then this is converted into a :exc:`RemoteError` exception and is
      raised by :meth:`_callmethod`.

      Note in particular that an exception will be raised if *methodname* has
      not been exposed.

      An example of the usage of :meth:`_callmethod`:

      .. doctest::

         >>> l = manager.list(range(10))
         >>> l._callmethod('__len__')
         10
         >>> l._callmethod('__getslice__', (2, 7))   # equiv to `l[2:7]`
         [2, 3, 4, 5, 6]
         >>> l._callmethod('__getitem__', (20,))     # equiv to `l[20]`
         Traceback (most recent call last):
         ...
         IndexError: list index out of range

   .. method:: _getvalue()

      Return a copy of the referent.

      If the referent is unpicklable then this will raise an exception.

   .. method:: __repr__

      Return a representation of the proxy object.

   .. method:: __str__

      Return the representation of the referent.
Cleanup
>>>>>>>

A proxy object uses a weakref callback so that when it gets garbage collected it
deregisters itself from the manager which owns its referent.

A shared object gets deleted from the manager process when there are no longer
any proxies referring to it.
Process Pools
~~~~~~~~~~~~~

.. module:: multiprocessing.pool
   :synopsis: Create pools of processes.

One can create a pool of processes which will carry out tasks submitted to it
with the :class:`Pool` class.

.. class:: multiprocessing.Pool([processes[, initializer[, initargs[, maxtasksperchild]]]])

   A process pool object which controls a pool of worker processes to which jobs
   can be submitted.  It supports asynchronous results with timeouts and
   callbacks and has a parallel map implementation.

   *processes* is the number of worker processes to use.  If *processes* is
   ``None`` then the number returned by :func:`cpu_count` is used.  If
   *initializer* is not ``None`` then each worker process will call
   ``initializer(*initargs)`` when it starts.

   .. versionadded:: 2.7
      *maxtasksperchild* is the number of tasks a worker process can complete
      before it will exit and be replaced with a fresh worker process, to enable
      unused resources to be freed.  The default *maxtasksperchild* is ``None``,
      which means worker processes will live as long as the pool.
   .. note::

      Worker processes within a :class:`Pool` typically live for the complete
      duration of the Pool's work queue.  A frequent pattern found in other
      systems (such as Apache, mod_wsgi, etc) to free resources held by
      workers is to allow a worker within a pool to complete only a set
      amount of work before exiting, being cleaned up and a new
      process spawned to replace the old one.  The *maxtasksperchild*
      argument to the :class:`Pool` exposes this ability to the end user.
   .. method:: apply(func[, args[, kwds]])

      Equivalent of the :func:`apply` built-in function.  It blocks until the
      result is ready, so :meth:`apply_async` is better suited for performing
      work in parallel.  Additionally, *func* is only executed in one of the
      workers of the pool.

   .. method:: apply_async(func[, args[, kwds[, callback]]])

      A variant of the :meth:`apply` method which returns a result object.

      If *callback* is specified then it should be a callable which accepts a
      single argument.  When the result becomes ready *callback* is applied to
      it (unless the call failed).  *callback* should complete immediately since
      otherwise the thread which handles the results will get blocked.
   .. method:: map(func, iterable[, chunksize])

      A parallel equivalent of the :func:`map` built-in function (it supports
      only one *iterable* argument though).  It blocks until the result is
      ready.

      This method chops the iterable into a number of chunks which it submits to
      the process pool as separate tasks.  The (approximate) size of these
      chunks can be specified by setting *chunksize* to a positive integer.

   .. method:: map_async(func, iterable[, chunksize[, callback]])

      A variant of the :meth:`.map` method which returns a result object.

      If *callback* is specified then it should be a callable which accepts a
      single argument.  When the result becomes ready *callback* is applied to
      it (unless the call failed).  *callback* should complete immediately since
      otherwise the thread which handles the results will get blocked.
   .. method:: imap(func, iterable[, chunksize])

      An equivalent of :func:`itertools.imap`.

      The *chunksize* argument is the same as the one used by the :meth:`.map`
      method.  For very long iterables using a large value for *chunksize* can
      make the job complete **much** faster than using the default value of
      ``1``.

      Also if *chunksize* is ``1`` then the :meth:`!next` method of the iterator
      returned by the :meth:`imap` method has an optional *timeout* parameter:
      ``next(timeout)`` will raise :exc:`multiprocessing.TimeoutError` if the
      result cannot be returned within *timeout* seconds.

   .. method:: imap_unordered(func, iterable[, chunksize])

      The same as :meth:`imap` except that the ordering of the results from the
      returned iterator should be considered arbitrary.  (Only when there is
      only one worker process is the order guaranteed to be "correct".)
   .. method:: close()

      Prevents any more tasks from being submitted to the pool.  Once all the
      tasks have been completed the worker processes will exit.

   .. method:: terminate()

      Stops the worker processes immediately without completing outstanding
      work.  When the pool object is garbage collected :meth:`terminate` will be
      called immediately.

   .. method:: join()

      Wait for the worker processes to exit.  One must call :meth:`close` or
      :meth:`terminate` before using :meth:`join`.
.. class:: AsyncResult

   The class of the result returned by :meth:`Pool.apply_async` and
   :meth:`Pool.map_async`.

   .. method:: get([timeout])

      Return the result when it arrives.  If *timeout* is not ``None`` and the
      result does not arrive within *timeout* seconds then
      :exc:`multiprocessing.TimeoutError` is raised.  If the remote call raised
      an exception then that exception will be reraised by :meth:`get`.

   .. method:: wait([timeout])

      Wait until the result is available or until *timeout* seconds pass.

   .. method:: ready()

      Return whether the call has completed.

   .. method:: successful()

      Return whether the call completed without raising an exception.  Will
      raise :exc:`AssertionError` if the result is not ready.
The following example demonstrates the use of a pool::

   from multiprocessing import Pool

   def f(x):
       return x*x

   if __name__ == '__main__':
       pool = Pool(processes=4)              # start 4 worker processes

       result = pool.apply_async(f, (10,))   # evaluate "f(10)" asynchronously
       print result.get(timeout=1)           # prints "100" unless your computer is *very* slow

       print pool.map(f, range(10))          # prints "[0, 1, 4,..., 81]"

       it = pool.imap(f, range(10))
       print it.next()                       # prints "0"
       print it.next()                       # prints "1"
       print it.next(timeout=1)              # prints "4" unless your computer is *very* slow

       import time
       result = pool.apply_async(time.sleep, (10,))
       print result.get(timeout=1)           # raises TimeoutError
.. _multiprocessing-listeners-clients:

Listeners and Clients
~~~~~~~~~~~~~~~~~~~~~

.. module:: multiprocessing.connection
   :synopsis: API for dealing with sockets.

Usually message passing between processes is done using queues or by using
:class:`Connection` objects returned by :func:`Pipe`.

However, the :mod:`multiprocessing.connection` module allows some extra
flexibility.  It basically gives a high level message oriented API for dealing
with sockets or Windows named pipes, and also has support for *digest
authentication* using the :mod:`hmac` module.
.. function:: deliver_challenge(connection, authkey)

   Send a randomly generated message to the other end of the connection and wait
   for a reply.

   If the reply matches the digest of the message using *authkey* as the key
   then a welcome message is sent to the other end of the connection.  Otherwise
   :exc:`AuthenticationError` is raised.

.. function:: answer_challenge(connection, authkey)

   Receive a message, calculate the digest of the message using *authkey* as the
   key, and then send the digest back.

   If a welcome message is not received, then :exc:`AuthenticationError` is
   raised.
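The two halves of the handshake can be sketched over a :func:`Pipe`, with the answering side run in a helper thread (the names here are illustrative):

```python
import threading
from multiprocessing import Pipe
from multiprocessing.connection import answer_challenge, deliver_challenge

server_end, client_end = Pipe()
authkey = b'secret key'
succeeded = []

def respond():
    # Receives the challenge, sends back its hmac digest, and waits for
    # the welcome message.
    answer_challenge(client_end, authkey)
    succeeded.append(True)

t = threading.Thread(target=respond)
t.start()
# Sends a random challenge and checks the reply; raises
# AuthenticationError if the digests do not match.
deliver_challenge(server_end, authkey)
t.join()
```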
.. function:: Client(address[, family[, authenticate[, authkey]]])

   Attempt to set up a connection to the listener which is using address
   *address*, returning a :class:`~multiprocessing.Connection`.

   The type of the connection is determined by the *family* argument, but this
   can generally be omitted since it can usually be inferred from the format of
   *address*.  (See :ref:`multiprocessing-address-formats`.)

   If *authenticate* is ``True`` or *authkey* is a string then digest
   authentication is used.  The key used for authentication will be either
   *authkey* or ``current_process().authkey`` if *authkey* is ``None``.
   If authentication fails then :exc:`AuthenticationError` is raised.  See
   :ref:`multiprocessing-auth-keys`.
.. class:: Listener([address[, family[, backlog[, authenticate[, authkey]]]]])

   A wrapper for a bound socket or Windows named pipe which is 'listening' for
   connections.

   *address* is the address to be used by the bound socket or named pipe of the
   listener object.

   .. note::

      If an address of '0.0.0.0' is used, the address will not be a connectable
      end point on Windows.  If you require a connectable end-point,
      you should use '127.0.0.1'.

   *family* is the type of socket (or named pipe) to use.  This can be one of
   the strings ``'AF_INET'`` (for a TCP socket), ``'AF_UNIX'`` (for a Unix
   domain socket) or ``'AF_PIPE'`` (for a Windows named pipe).  Of these only
   the first is guaranteed to be available.  If *family* is ``None`` then the
   family is inferred from the format of *address*.  If *address* is also
   ``None`` then a default is chosen.  This default is the family which is
   assumed to be the fastest available.  See
   :ref:`multiprocessing-address-formats`.  Note that if *family* is
   ``'AF_UNIX'`` and *address* is ``None`` then the socket will be created in a
   private temporary directory created using :func:`tempfile.mkstemp`.

   If the listener object uses a socket then *backlog* (1 by default) is passed
   to the :meth:`listen` method of the socket once it has been bound.

   If *authenticate* is ``True`` (``False`` by default) or *authkey* is not
   ``None`` then digest authentication is used.

   If *authkey* is a string then it will be used as the authentication key;
   otherwise it must be ``None``.

   If *authkey* is ``None`` and *authenticate* is ``True`` then
   ``current_process().authkey`` is used as the authentication key.  If
   *authkey* is ``None`` and *authenticate* is ``False`` then no
   authentication is done.  If authentication fails then
   :exc:`AuthenticationError` is raised.  See :ref:`multiprocessing-auth-keys`.

   .. method:: accept()

      Accept a connection on the bound socket or named pipe of the listener
      object and return a :class:`Connection` object.  If authentication is
      attempted and fails, then :exc:`AuthenticationError` is raised.

   .. method:: close()

      Close the bound socket or named pipe of the listener object.  This is
      called automatically when the listener is garbage collected.  However it
      is advisable to call it explicitly.

   Listener objects have the following read-only properties:

   .. attribute:: address

      The address which is being used by the Listener object.

   .. attribute:: last_accepted

      The address from which the last accepted connection came.  If this is
      unavailable then it is ``None``.
The module defines the following exception:

.. exception:: AuthenticationError

   Exception raised when there is an authentication error.
**Examples**

The following server code creates a listener which uses ``'secret password'`` as
an authentication key.  It then waits for a connection and sends some data to
the client::

   from multiprocessing.connection import Listener
   from array import array

   address = ('localhost', 6000)     # family is deduced to be 'AF_INET'
   listener = Listener(address, authkey='secret password')

   conn = listener.accept()
   print 'connection accepted from', listener.last_accepted

   conn.send([2.25, None, 'junk', float])

   conn.send_bytes('hello')

   conn.send_bytes(array('i', [42, 1729]))

   conn.close()
   listener.close()

The following code connects to the server and receives some data from the
server::

   from multiprocessing.connection import Client
   from array import array

   address = ('localhost', 6000)
   conn = Client(address, authkey='secret password')

   print conn.recv()                 # => [2.25, None, 'junk', float]

   print conn.recv_bytes()           # => 'hello'

   arr = array('i', [0, 0, 0, 0, 0])
   print conn.recv_bytes_into(arr)   # => 8
   print arr                         # => array('i', [42, 1729, 0, 0, 0])

   conn.close()
.. _multiprocessing-address-formats:

Address Formats
>>>>>>>>>>>>>>>

* An ``'AF_INET'`` address is a tuple of the form ``(hostname, port)`` where
  *hostname* is a string and *port* is an integer.

* An ``'AF_UNIX'`` address is a string representing a filename on the
  filesystem.

* An ``'AF_PIPE'`` address is a string of the form
  :samp:`r'\\\\.\\pipe\\{PipeName}'`.  To use :func:`Client` to connect to a named
  pipe on a remote computer called *ServerName* one should use an address of the
  form :samp:`r'\\\\{ServerName}\\pipe\\{PipeName}'` instead.

Note that any string beginning with two backslashes is assumed by default to be
an ``'AF_PIPE'`` address rather than an ``'AF_UNIX'`` address.
.. _multiprocessing-auth-keys:

Authentication keys
~~~~~~~~~~~~~~~~~~~

When one uses :meth:`Connection.recv`, the data received is automatically
unpickled.  Unfortunately unpickling data from an untrusted source is a security
risk.  Therefore :class:`Listener` and :func:`Client` use the :mod:`hmac` module
to provide digest authentication.

An authentication key is a string which can be thought of as a password: once a
connection is established both ends will demand proof that the other knows the
authentication key.  (Demonstrating that both ends are using the same key does
**not** involve sending the key over the connection.)

If authentication is requested but no authentication key is specified then the
return value of ``current_process().authkey`` is used (see
:class:`~multiprocessing.Process`).  This value will be automatically inherited
by any :class:`~multiprocessing.Process` object that the current process creates.
This means that (by default) all processes of a multi-process program will share
a single authentication key which can be used when setting up connections
between themselves.

Suitable authentication keys can also be generated by using :func:`os.urandom`.
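For instance (a minimal sketch, run here within one process using a helper thread), a key from :func:`os.urandom` can be given to both a :class:`Listener` and a :func:`Client`; port ``0`` asks the OS for a free port:

```python
import os
import threading
from multiprocessing.connection import Client, Listener

authkey = os.urandom(16)            # a suitable random authentication key
received = []

def serve(listener):
    conn = listener.accept()        # performs the authentication handshake
    received.append(conn.recv())
    conn.close()

listener = Listener(('localhost', 0), authkey=authkey)
t = threading.Thread(target=serve, args=(listener,))
t.start()

conn = Client(listener.address, authkey=authkey)
conn.send('authenticated hello')
conn.close()
t.join()
listener.close()
```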
Logging
~~~~~~~

Some support for logging is available.  Note, however, that the :mod:`logging`
package does not use process shared locks so it is possible (depending on the
handler type) for messages from different processes to get mixed up.
.. currentmodule:: multiprocessing
.. function:: get_logger()

   Returns the logger used by :mod:`multiprocessing`.  If necessary, a new one
   will be created.

   When first created the logger has level :data:`logging.NOTSET` and no
   default handler.  Messages sent to this logger will not by default propagate
   to the root logger.

   Note that on Windows child processes will only inherit the level of the
   parent process's logger -- any other customization of the logger will not be
   inherited.
.. currentmodule:: multiprocessing
.. function:: log_to_stderr()

   This function performs a call to :func:`get_logger` but in addition to
   returning the logger created by get_logger, it adds a handler which sends
   output to :data:`sys.stderr` using format
   ``'[%(levelname)s/%(processName)s] %(message)s'``.
Below is an example session with logging turned on::

    >>> import multiprocessing, logging
    >>> logger = multiprocessing.log_to_stderr()
    >>> logger.setLevel(logging.INFO)
    >>> logger.warning('doomed')
    [WARNING/MainProcess] doomed
    >>> m = multiprocessing.Manager()
    [INFO/SyncManager-...] child process calling self.run()
    [INFO/SyncManager-...] created temp directory /.../pymp-...
    [INFO/SyncManager-...] manager serving at '/.../listener-...'
    >>> del m
    [INFO/MainProcess] sending shutdown message to manager
    [INFO/SyncManager-...] manager exiting with exitcode 0
In addition to having these two logging functions, the multiprocessing module
also exposes two additional logging level attributes.  These are
:const:`SUBWARNING` and :const:`SUBDEBUG`.  The table below illustrates where
these fit in the normal level hierarchy.
+----------------+----------------+
| Level          | Numeric value  |
+================+================+
| ``SUBWARNING`` | 25             |
+----------------+----------------+
| ``SUBDEBUG``   | 5              |
+----------------+----------------+

For a full table of logging levels, see the :mod:`logging` module.
These additional logging levels are used primarily for certain debug messages
within the multiprocessing module.  Below is the same example as above, except
with :const:`SUBDEBUG` enabled::

    >>> import multiprocessing, logging
    >>> logger = multiprocessing.log_to_stderr()
    >>> logger.setLevel(multiprocessing.SUBDEBUG)
    >>> logger.warning('doomed')
    [WARNING/MainProcess] doomed
    >>> m = multiprocessing.Manager()
    [INFO/SyncManager-...] child process calling self.run()
    [INFO/SyncManager-...] created temp directory /.../pymp-...
    [INFO/SyncManager-...] manager serving at '/.../pymp-djGBXN/listener-...'
    >>> del m
    [SUBDEBUG/MainProcess] finalizer calling ...
    [INFO/MainProcess] sending shutdown message to manager
    [DEBUG/SyncManager-...] manager received shutdown message
    [SUBDEBUG/SyncManager-...] calling <Finalize object, callback=unlink, ...
    [SUBDEBUG/SyncManager-...] finalizer calling <built-in function unlink> ...
    [SUBDEBUG/SyncManager-...] calling <Finalize object, dead>
    [SUBDEBUG/SyncManager-...] finalizer calling <function rmtree at 0x5aa730> ...
    [INFO/SyncManager-...] manager exiting with exitcode 0
The :mod:`multiprocessing.dummy` module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. module:: multiprocessing.dummy
   :synopsis: Dumb wrapper around threading.

:mod:`multiprocessing.dummy` replicates the API of :mod:`multiprocessing` but is
no more than a wrapper around the :mod:`threading` module.
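This makes it handy for running pool-style code on threads.  A minimal sketch
(the ``square`` helper is purely illustrative):

```python
from multiprocessing.dummy import Pool   # same interface as multiprocessing.Pool

def square(x):
    return x * x

pool = Pool(4)                           # four worker *threads*, not processes
results = pool.map(square, range(10))    # identical call to the process version
pool.close()
pool.join()
```

Because the workers are threads, the mapped function and its arguments do not
need to be picklable, unlike with a real process pool.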
.. _multiprocessing-programming:

Programming guidelines
----------------------
There are certain guidelines and idioms which should be adhered to when using
:mod:`multiprocessing`.


All platforms
~~~~~~~~~~~~~

Avoid shared state

    As far as possible one should try to avoid shifting large amounts of data
    between processes.

    It is probably best to stick to using queues or pipes for communication
    between processes rather than using the lower level synchronization
    primitives from the :mod:`threading` module.
Picklability

    Ensure that the arguments to the methods of proxies are picklable.
Thread safety of proxies

    Do not use a proxy object from more than one thread unless you protect it
    with a lock.

    (There is never a problem with different processes using the *same* proxy.)
Joining zombie processes

    On Unix when a process finishes but has not been joined it becomes a zombie.
    There should never be very many because each time a new process starts (or
    :func:`active_children` is called) all completed processes which have not
    yet been joined will be joined.  Also calling a finished process's
    :meth:`Process.is_alive` will join the process.  Even so it is probably good
    practice to explicitly join all the processes that you start.
Better to inherit than pickle/unpickle

    On Windows many types from :mod:`multiprocessing` need to be picklable so
    that child processes can use them.  However, one should generally avoid
    sending shared objects to other processes using pipes or queues.  Instead
    you should arrange the program so that a process which needs access to a
    shared resource created elsewhere can inherit it from an ancestor process.
Avoid terminating processes

    Using the :meth:`Process.terminate` method to stop a process is liable to
    cause any shared resources (such as locks, semaphores, pipes and queues)
    currently being used by the process to become broken or unavailable to other
    processes.

    Therefore it is probably best to only consider using
    :meth:`Process.terminate` on processes which never use any shared resources.
Joining processes that use queues

    Bear in mind that a process that has put items in a queue will wait before
    terminating until all the buffered items are fed by the "feeder" thread to
    the underlying pipe.  (The child process can call the
    :meth:`Queue.cancel_join_thread` method of the queue to avoid this behaviour.)

    This means that whenever you use a queue you need to make sure that all
    items which have been put on the queue will eventually be removed before the
    process is joined.  Otherwise you cannot be sure that processes which have
    put items on the queue will terminate.  Remember also that non-daemonic
    processes will automatically be joined.
    An example which will deadlock is the following::

        from multiprocessing import Process, Queue

        def f(q):
            q.put('X' * 1000000)

        if __name__ == '__main__':
            queue = Queue()
            p = Process(target=f, args=(queue,))
            p.start()
            p.join()                    # this deadlocks
            obj = queue.get()
    A fix here would be to swap the last two lines round (or simply remove the
    ``p.join()`` line).
Explicitly pass resources to child processes

    On Unix a child process can make use of a shared resource created in a
    parent process using a global resource.  However, it is better to pass the
    object as an argument to the constructor for the child process.

    Apart from making the code (potentially) compatible with Windows this also
    ensures that as long as the child process is still alive the object will not
    be garbage collected in the parent process.  This might be important if some
    resource is freed when the object is garbage collected in the parent
    process.
    So for instance ::

        from multiprocessing import Process, Lock

        def f():
            ... do something using "lock" ...

        if __name__ == '__main__':
            lock = Lock()
            for i in range(10):
                Process(target=f).start()

    should be rewritten as ::

        from multiprocessing import Process, Lock

        def f(l):
            ... do something using "l" ...

        if __name__ == '__main__':
            lock = Lock()
            for i in range(10):
                Process(target=f, args=(lock,)).start()
Beware of replacing :data:`sys.stdin` with a "file like object"

    :mod:`multiprocessing` originally unconditionally called::

        os.close(sys.stdin.fileno())

    in the :meth:`multiprocessing.Process._bootstrap` method --- this resulted
    in issues with processes-in-processes.  This has been changed to::

        sys.stdin.close()
        sys.stdin = open(os.devnull)

    Which solves the fundamental issue of processes colliding with each other
    resulting in a bad file descriptor error, but introduces a potential danger
    to applications which replace :data:`sys.stdin` with a "file-like object"
    with output buffering.  This danger is that if multiple processes call
    :func:`close()` on this file-like object, it could result in the same
    data being flushed to the object multiple times, resulting in corruption.
    If you write a file-like object and implement your own caching, you can
    make it fork-safe by storing the pid whenever you append to the cache,
    and discarding the cache when the pid changes.  For example::

        @property
        def cache(self):
            pid = os.getpid()
            if pid != self._pid:
                self._pid = pid
                self._cache = []
            return self._cache

    For more information, see :issue:`5155`, :issue:`5313` and :issue:`5331`.
Windows
~~~~~~~

Since Windows lacks :func:`os.fork` it has a few extra restrictions:
More picklability

    Ensure that all arguments to :meth:`Process.__init__` are picklable.  This
    means, in particular, that bound or unbound methods cannot be used directly
    as the ``target`` argument on Windows --- just define a function and use
    that instead.

    Also, if you subclass :class:`Process` then make sure that instances will be
    picklable when the :meth:`Process.start` method is called.
Global variables

    Bear in mind that if code run in a child process tries to access a global
    variable, then the value it sees (if any) may not be the same as the value
    in the parent process at the time that :meth:`Process.start` was called.

    However, global variables which are just module level constants cause no
    problems.
Safe importing of main module

    Make sure that the main module can be safely imported by a new Python
    interpreter without causing unintended side effects (such as starting a new
    process).

    For example, under Windows running the following module would fail with a
    :exc:`RuntimeError`::

        from multiprocessing import Process

        def foo():
            print 'hello'

        p = Process(target=foo)
        p.start()
    Instead one should protect the "entry point" of the program by using ``if
    __name__ == '__main__':`` as follows::

        from multiprocessing import Process, freeze_support

        def foo():
            print 'hello'

        if __name__ == '__main__':
            freeze_support()
            p = Process(target=foo)
            p.start()

    (The ``freeze_support()`` line can be omitted if the program will be run
    normally instead of frozen.)

    This allows the newly spawned Python interpreter to safely import the module
    and then run the module's ``foo()`` function.
    Similar restrictions apply if a pool or manager is created in the main
    module.
.. _multiprocessing-examples:

Examples
--------

Demonstration of how to create and use customized managers and proxies:

.. literalinclude:: ../includes/mp_newtype.py


Using :class:`Pool`:

.. literalinclude:: ../includes/mp_pool.py


Synchronization types like locks, conditions and queues:

.. literalinclude:: ../includes/mp_synchronize.py


An example showing how to use queues to feed tasks to a collection of worker
processes and collect the results:

.. literalinclude:: ../includes/mp_workers.py


An example of how a pool of worker processes can each run a
:class:`SimpleHTTPServer.HttpServer` instance while sharing a single listening
socket.

.. literalinclude:: ../includes/mp_webserver.py


Some simple benchmarks comparing :mod:`multiprocessing` with :mod:`threading`:

.. literalinclude:: ../includes/mp_benchmarks.py