<meta http-equiv="Content-Type" content="text/html; charset=US-ASCII">
<title>Python Bindings</title>
<link rel="stylesheet" href="../../../doc/src/boostbook.css" type="text/css">
<meta name="generator" content="DocBook XSL Stylesheets V1.78.1">
<link rel="home" href="../index.html" title="The Boost C++ Libraries BoostBook Documentation Subset">
<link rel="up" href="../mpi.html" title="Chapter 20. Boost.MPI">
<link rel="prev" href="../boost/mpi/timer.html" title="Class timer">
<link rel="next" href="../program_options.html" title="Chapter 21. Boost.Program_options">
</head>
<body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF">
<table cellpadding="2" width="100%"><tr>
<td valign="top"><img alt="Boost C++ Libraries" width="277" height="86" src="../../../boost.png"></td>
<td align="center"><a href="../../../index.html">Home</a></td>
<td align="center"><a href="../../../libs/libraries.htm">Libraries</a></td>
<td align="center"><a href="http://www.boost.org/users/people.html">People</a></td>
<td align="center"><a href="http://www.boost.org/users/faq.html">FAQ</a></td>
<td align="center"><a href="../../../more/index.htm">More</a></td>
</tr></table>
<div class="spirit-nav">
<a accesskey="p" href="../boost/mpi/timer.html"><img src="../../../doc/src/images/prev.png" alt="Prev"></a><a accesskey="u" href="../mpi.html"><img src="../../../doc/src/images/up.png" alt="Up"></a><a accesskey="h" href="../index.html"><img src="../../../doc/src/images/home.png" alt="Home"></a><a accesskey="n" href="../program_options.html"><img src="../../../doc/src/images/next.png" alt="Next"></a>
</div>
<div class="titlepage"><div><div><h2 class="title" style="clear: both">
<a name="mpi.python"></a>Python Bindings</h2></div></div></div>
<div class="toc"><dl class="toc">
<dt><span class="section"><a href="python.html#mpi.python_quickstart">Quickstart</a></span></dt>
<dt><span class="section"><a href="python.html#mpi.python_user_data">Transmitting User-Defined Data</a></span></dt>
<dt><span class="section"><a href="python.html#mpi.python_collectives">Collectives</a></span></dt>
<dt><span class="section"><a href="python.html#mpi.python_skeleton_content">Skeleton/Content Mechanism</a></span></dt>
<dt><span class="section"><a href="python.html#mpi.design">Design Philosophy</a></span></dt>
<dt><span class="section"><a href="python.html#mpi.threading">Threads</a></span></dt>
<dt><span class="section"><a href="python.html#mpi.performance">Performance Evaluation</a></span></dt>
<dt><span class="section"><a href="python.html#mpi.history">Revision History</a></span></dt>
<dt><span class="section"><a href="python.html#mpi.acknowledge">Acknowledgments</a></span></dt>
</dl></div>
Boost.MPI provides an alternative MPI interface from the <a href="http://www.python.org" target="_top">Python</a>
programming language via the <code class="computeroutput"><span class="identifier">boost</span><span class="special">.</span><span class="identifier">mpi</span></code> module.
The Boost.MPI Python bindings, built on top of the C++ Boost.MPI using the
<a href="http://www.boost.org/libs/python/doc" target="_top">Boost.Python</a> library,
provide nearly all of the functionality of Boost.MPI within a dynamic, object-oriented
language.

The Boost.MPI Python module can be built and installed from the <code class="computeroutput"><span class="identifier">libs</span><span class="special">/</span><span class="identifier">mpi</span><span class="special">/</span><span class="identifier">build</span></code> directory.
Just follow the <a class="link" href="getting_started.html#mpi.config" title="Configure and Build">configuration</a> and <a class="link" href="getting_started.html#mpi.installation" title="Installing and Using Boost.MPI">installation</a>
instructions for the C++ Boost.MPI. Once you have installed the Python module,
be sure that the installation location is in your <code class="computeroutput"><span class="identifier">PYTHONPATH</span></code>.
<div class="section">
<div class="titlepage"><div><div><h3 class="title">
<a name="mpi.python_quickstart"></a>Quickstart</h3></div></div></div>
Getting started with the Boost.MPI Python module is as easy as importing
<code class="computeroutput"><span class="identifier">boost</span><span class="special">.</span><span class="identifier">mpi</span></code>. Our first "Hello, World!"
program is just two lines long:

<pre class="programlisting"><span class="keyword">import</span> <span class="identifier">boost</span><span class="special">.</span><span class="identifier">mpi</span> <span class="keyword">as</span> <span class="identifier">mpi</span>
<span class="keyword">print</span> <span class="string">"I am process %d of %d."</span> <span class="special">%</span> <span class="special">(</span><span class="identifier">mpi</span><span class="special">.</span><span class="identifier">rank</span><span class="special">,</span> <span class="identifier">mpi</span><span class="special">.</span><span class="identifier">size</span><span class="special">)</span></pre>
Go ahead and run this program with several processes. Be sure to invoke the
<code class="computeroutput"><span class="identifier">python</span></code> interpreter from
<code class="computeroutput"><span class="identifier">mpirun</span></code>, e.g.,

<pre class="programlisting">mpirun -np 5 python hello_world.py</pre>

This will return output such as:

<pre class="programlisting">I am process 1 of 5.</pre>
Point-to-point operations in Boost.MPI have nearly the same syntax in Python
as in C++. We can write a simple two-process Python program that prints "Hello,
world!" by transmitting Python strings:

<pre class="programlisting"><span class="keyword">import</span> <span class="identifier">boost</span><span class="special">.</span><span class="identifier">mpi</span> <span class="keyword">as</span> <span class="identifier">mpi</span>

<span class="keyword">if</span> <span class="identifier">mpi</span><span class="special">.</span><span class="identifier">world</span><span class="special">.</span><span class="identifier">rank</span> <span class="special">==</span> <span class="number">0</span><span class="special">:</span>
  <span class="identifier">mpi</span><span class="special">.</span><span class="identifier">world</span><span class="special">.</span><span class="identifier">send</span><span class="special">(</span><span class="number">1</span><span class="special">,</span> <span class="number">0</span><span class="special">,</span> <span class="string">'Hello'</span><span class="special">)</span>
  <span class="identifier">msg</span> <span class="special">=</span> <span class="identifier">mpi</span><span class="special">.</span><span class="identifier">world</span><span class="special">.</span><span class="identifier">recv</span><span class="special">(</span><span class="number">1</span><span class="special">,</span> <span class="number">1</span><span class="special">)</span>
  <span class="keyword">print</span> <span class="identifier">msg</span><span class="special">,</span><span class="string">'!'</span>
<span class="keyword">else</span><span class="special">:</span>
  <span class="identifier">msg</span> <span class="special">=</span> <span class="identifier">mpi</span><span class="special">.</span><span class="identifier">world</span><span class="special">.</span><span class="identifier">recv</span><span class="special">(</span><span class="number">0</span><span class="special">,</span> <span class="number">0</span><span class="special">)</span>
  <span class="keyword">print</span> <span class="special">(</span><span class="identifier">msg</span> <span class="special">+</span> <span class="string">', '</span><span class="special">),</span>
  <span class="identifier">mpi</span><span class="special">.</span><span class="identifier">world</span><span class="special">.</span><span class="identifier">send</span><span class="special">(</span><span class="number">0</span><span class="special">,</span> <span class="number">1</span><span class="special">,</span> <span class="string">'world'</span><span class="special">)</span></pre>
There are only a few notable differences between this Python code and the
example <a class="link" href="tutorial.html#mpi.point_to_point" title="Point-to-Point communication">in the C++ tutorial</a>. First
of all, we don't need to write any initialization code in Python: just loading
the <code class="computeroutput"><span class="identifier">boost</span><span class="special">.</span><span class="identifier">mpi</span></code> module makes the appropriate <code class="computeroutput"><span class="identifier">MPI_Init</span></code> and <code class="computeroutput"><span class="identifier">MPI_Finalize</span></code>
calls. Second, we're passing Python objects from one process to another through
MPI. Any Python object that can be pickled can be transmitted; the next section
will describe in more detail how the Boost.MPI Python layer transmits objects.
Finally, when we receive objects with <code class="computeroutput"><span class="identifier">recv</span></code>,
we don't need to specify the type because transmission of Python objects
is polymorphic.

When experimenting with Boost.MPI in Python, don't forget that help is always
available via <code class="computeroutput"><span class="identifier">pydoc</span></code>: just
pass the name of the module or module entity on the command line (e.g.,
<code class="computeroutput"><span class="identifier">pydoc</span> <span class="identifier">boost</span><span class="special">.</span><span class="identifier">mpi</span><span class="special">.</span><span class="identifier">communicator</span></code>) to receive complete reference
documentation. When in doubt, try it!
<div class="section">
<div class="titlepage"><div><div><h3 class="title">
<a name="mpi.python_user_data"></a>Transmitting User-Defined Data</h3></div></div></div>
Boost.MPI can transmit user-defined data in several different ways. Most
importantly, it can transmit arbitrary <a href="http://www.python.org" target="_top">Python</a>
objects by pickling them at the sender and unpickling them at the receiver,
allowing arbitrarily complex Python data structures to interoperate with
MPI.
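Under the hood, this is ordinary Python pickling. The sketch below involves no MPI at all; it only illustrates the serialization round trip that <code class="computeroutput">boost.mpi</code> performs on each transmitted object (the nested dictionary is an invented example):

```python
import pickle

# An arbitrarily nested Python structure of the kind Boost.MPI can transmit.
message = {"edges": [(0, 1), (1, 2)], "weights": {"a": 1.5, "b": 2.5}}

# At the sender, the object is pickled into a byte buffer...
buffer = pickle.dumps(message)

# ...and at the receiver, unpickled back into an equivalent object.
received = pickle.loads(buffer)

assert received == message
print(received["weights"]["b"])  # -> 2.5
```

Anything that cannot be pickled (open file handles, sockets, and so on) cannot be transmitted this way.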
Boost.MPI also supports efficient serialization and transmission of C++ objects
(that have been exposed to Python) through its C++ interface. Any C++ type
that provides (de-)serialization routines that meet the requirements of the
Boost.Serialization library is eligible for this optimization, but the type
must be registered in advance. To register a C++ type, invoke the C++ function
<code class="computeroutput"><a class="link" href="../boost/mpi/python/register_serialized.html" title="Function template register_serialized">register_serialized</a></code>.
If your C++ types come from other Python modules (they probably will!), those
modules will need to link against the <code class="computeroutput"><span class="identifier">boost_mpi</span></code>
and <code class="computeroutput"><span class="identifier">boost_mpi_python</span></code> libraries
as described in the <a class="link" href="getting_started.html#mpi.installation" title="Installing and Using Boost.MPI">installation section</a>.
Note that you do <span class="bold"><strong>not</strong></span> need to link against
the Boost.MPI Python extension module.
Finally, Boost.MPI supports separation of the structure of an object from
the data it stores, allowing the two pieces to be transmitted separately.
This "skeleton/content" mechanism, described in more detail in
a later section, is a communication optimization suitable for problems with
fixed data structures whose internal data changes frequently.
<div class="section">
<div class="titlepage"><div><div><h3 class="title">
<a name="mpi.python_collectives"></a>Collectives</h3></div></div></div>
Boost.MPI supports all of the MPI collectives (<code class="computeroutput"><span class="identifier">scatter</span></code>,
<code class="computeroutput"><span class="identifier">reduce</span></code>, <code class="computeroutput"><span class="identifier">scan</span></code>,
<code class="computeroutput"><span class="identifier">broadcast</span></code>, etc.) for any
type of data that can be transmitted with the point-to-point communication
operations. For the MPI collectives that require a user-specified operation
(e.g., <code class="computeroutput"><span class="identifier">reduce</span></code> and <code class="computeroutput"><span class="identifier">scan</span></code>), the operation can be an arbitrary
Python function. For instance, one could concatenate strings with <code class="computeroutput"><span class="identifier">all_reduce</span></code>:

<pre class="programlisting"><span class="identifier">mpi</span><span class="special">.</span><span class="identifier">all_reduce</span><span class="special">(</span><span class="identifier">my_string</span><span class="special">,</span> <span class="keyword">lambda</span> <span class="identifier">x</span><span class="special">,</span><span class="identifier">y</span><span class="special">:</span> <span class="identifier">x</span> <span class="special">+</span> <span class="identifier">y</span><span class="special">)</span></pre>
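Semantically, <code class="computeroutput">all_reduce</code> folds the value contributed by each rank with the given binary operation and hands every rank the combined result. With hypothetical per-rank strings, the fold it performs is equivalent to this plain Python (no MPI needed to see the semantics):

```python
from functools import reduce

# Hypothetical values held by ranks 0..3 before the collective.
per_rank_values = ["Hello", ", ", "world", "!"]

# all_reduce applies the operation across ranks in rank order;
# afterwards every rank holds the same combined result.
combined = reduce(lambda x, y: x + y, per_rank_values)

print(combined)  # -> Hello, world!
```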
The following module-level functions implement MPI collectives:
<div class="variablelist"><dl class="variablelist">
<dt><span class="term">all_gather</span></dt>
<dd>Gather the values from all processes.</dd>
<dt><span class="term">all_reduce</span></dt>
<dd>Combine the results from all processes.</dd>
<dt><span class="term">all_to_all</span></dt>
<dd>Every process sends data to every other process.</dd>
<dt><span class="term">broadcast</span></dt>
<dd>Broadcast data from one process to all other processes.</dd>
<dt><span class="term">gather</span></dt>
<dd>Gather the values from all processes to the root.</dd>
<dt><span class="term">reduce</span></dt>
<dd>Combine the results from all processes to the root.</dd>
<dt><span class="term">scan</span></dt>
<dd>Prefix reduction of the values from all processes.</dd>
<dt><span class="term">scatter</span></dt>
<dd>Scatter the values stored at the root to all processes.</dd>
</dl></div>
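As a rough mental model of the data movement in a few of these collectives (ranks simulated by list indices; this is illustrative Python, not the <code class="computeroutput">boost.mpi</code> API):

```python
# Simulated per-rank values; index = rank.
values = [10, 20, 30]
root = 0

# broadcast: the root's value ends up on every rank.
after_broadcast = [values[root]] * len(values)

# gather: the root ends up holding every rank's value.
gathered_at_root = list(values)

# scatter: the root's sequence is split so that rank i receives element i.
root_data = ["a", "b", "c"]
after_scatter = [root_data[rank] for rank in range(len(values))]

# scan: rank i receives the reduction of values[0..i] (a prefix reduction).
prefix = []
running = 0
for v in values:
    running += v
    prefix.append(running)

print(after_broadcast, after_scatter, prefix)
```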
<div class="section">
<div class="titlepage"><div><div><h3 class="title">
<a name="mpi.python_skeleton_content"></a>Skeleton/Content Mechanism</h3></div></div></div>
<div class="toc"><dl class="toc">
<dt><span class="section"><a href="python.html#mpi.python_compatbility">C++/Python MPI Compatibility</a></span></dt>
<dt><span class="section"><a href="python.html#mpi.pythonref">Reference</a></span></dt>
</dl></div>
Boost.MPI provides a skeleton/content mechanism that allows the transfer
of large data structures to be split into two separate stages, with the skeleton
(or, "shape") of the data structure sent first and the content
(or, "data") of the data structure sent later, potentially several
times, so long as the structure has not changed since the skeleton was transferred.
The skeleton/content mechanism can improve performance when the data structure
is large and its shape is fixed, because while the skeleton requires serialization
(it has an unknown size), the content transfer is fixed-size and can be done
without extra copies.
To use the skeleton/content mechanism from Python, you must first register
the type of your data structure with the skeleton/content mechanism <span class="bold"><strong>from C++</strong></span>. The registration function is <code class="computeroutput"><a class="link" href="../boost/mpi/python/register_skel_idp159037008.html" title="Function template register_skeleton_and_content">register_skeleton_and_content</a></code>
and resides in the <code class="computeroutput"><a class="link" href="reference.html#header.boost.mpi.python_hpp" title="Header &lt;boost/mpi/python.hpp&gt;">&lt;boost/mpi/python.hpp&gt;</a></code>
header.
Once you have registered your C++ data structures, you can extract the skeleton
for an instance of that data structure with <code class="computeroutput"><span class="identifier">skeleton</span><span class="special">()</span></code>. The resulting <code class="computeroutput"><span class="identifier">skeleton_proxy</span></code>
can be transmitted via the normal send routine, e.g.,

<pre class="programlisting"><span class="identifier">mpi</span><span class="special">.</span><span class="identifier">world</span><span class="special">.</span><span class="identifier">send</span><span class="special">(</span><span class="number">1</span><span class="special">,</span> <span class="number">0</span><span class="special">,</span> <span class="identifier">skeleton</span><span class="special">(</span><span class="identifier">my_data_structure</span><span class="special">))</span></pre>
<code class="computeroutput"><span class="identifier">skeleton_proxy</span></code> objects can
be received on the other end via <code class="computeroutput"><span class="identifier">recv</span><span class="special">()</span></code>, which stores a newly-created instance
of your data structure with the same "shape" as the sender in its
<code class="computeroutput"><span class="string">"object"</span></code> attribute:

<pre class="programlisting"><span class="identifier">shape</span> <span class="special">=</span> <span class="identifier">mpi</span><span class="special">.</span><span class="identifier">world</span><span class="special">.</span><span class="identifier">recv</span><span class="special">(</span><span class="number">0</span><span class="special">,</span> <span class="number">0</span><span class="special">)</span>
<span class="identifier">my_data_structure</span> <span class="special">=</span> <span class="identifier">shape</span><span class="special">.</span><span class="identifier">object</span></pre>
Once the skeleton has been transmitted, the content (accessed via <code class="computeroutput"><span class="identifier">get_content</span></code>) can be transmitted in much
the same way. Note, however, that the receiver also specifies <code class="computeroutput"><span class="identifier">get_content</span><span class="special">(</span><span class="identifier">my_data_structure</span><span class="special">)</span></code>
in its call to receive:

<pre class="programlisting"><span class="keyword">if</span> <span class="identifier">mpi</span><span class="special">.</span><span class="identifier">rank</span> <span class="special">==</span> <span class="number">0</span><span class="special">:</span>
  <span class="identifier">mpi</span><span class="special">.</span><span class="identifier">world</span><span class="special">.</span><span class="identifier">send</span><span class="special">(</span><span class="number">1</span><span class="special">,</span> <span class="number">0</span><span class="special">,</span> <span class="identifier">get_content</span><span class="special">(</span><span class="identifier">my_data_structure</span><span class="special">))</span>
<span class="keyword">else</span><span class="special">:</span>
  <span class="identifier">mpi</span><span class="special">.</span><span class="identifier">world</span><span class="special">.</span><span class="identifier">recv</span><span class="special">(</span><span class="number">0</span><span class="special">,</span> <span class="number">0</span><span class="special">,</span> <span class="identifier">get_content</span><span class="special">(</span><span class="identifier">my_data_structure</span><span class="special">))</span></pre>
Of course, this transmission of content can occur repeatedly, if the values
in the data structure--but not its shape--change.
The skeleton/content mechanism is a structured way to exploit the interaction
between custom-built MPI datatypes and <code class="computeroutput"><span class="identifier">MPI_BOTTOM</span></code>,
to eliminate extra buffer copies.
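The division of labor can be pictured without MPI at all. In this toy analogy (plain Python dicts standing in for registered C++ types; <code class="computeroutput">skeleton</code>, <code class="computeroutput">get_content</code>, and <code class="computeroutput">rebuild</code> here are invented illustrations, not the <code class="computeroutput">boost.mpi</code> functions), the shape is sent once and only the flat values are re-sent afterwards:

```python
def skeleton(d):
    # The "shape": the keys, transmitted once.
    return list(d)

def get_content(d):
    # The "content": a flat sequence of values, cheap to re-send repeatedly.
    return [d[k] for k in d]

def rebuild(shape, content):
    # The receiver pairs the shape it already holds with freshly arrived content.
    return dict(zip(shape, content))

data = {"x": 1, "y": 2, "z": 3}
shape = skeleton(data)   # one-time transfer
data["y"] = 20           # values change; the shape does not
print(rebuild(shape, get_content(data)))  # -> {'x': 1, 'y': 20, 'z': 3}
```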
<div class="section">
<div class="titlepage"><div><div><h4 class="title">
<a name="mpi.python_compatbility"></a>C++/Python MPI Compatibility</h4></div></div></div>
Boost.MPI is a C++ library whose facilities have been exposed to Python
via the Boost.Python library. Since the Boost.MPI Python bindings are built
directly on top of the C++ library, and nearly every feature of the C++ library
is available in Python, hybrid C++/Python programs using Boost.MPI can
interact, e.g., sending a value from Python but receiving that value in
C++ (or vice versa). However, doing so requires some care. Because Python
objects are dynamically typed, Boost.MPI transfers type information along
with the serialized form of the object, so that the object can be received
even when its type is not known. This mechanism differs from its C++ counterpart,
where the static types of transmitted values are always known.
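Plain pickling shows why the Python side needs no declared types: the byte stream itself records what kind of object it carries, so the receiver reconstructs the right type without being told in advance:

```python
import pickle

# The producer pickles a complex number; the consumer never names the type.
buf = pickle.dumps(complex(3, 4))

# The unpickled object comes back with its type intact.
obj = pickle.loads(buf)
print(type(obj).__name__, obj)  # -> complex (3+4j)
```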
The only way to communicate between the C++ and Python views on Boost.MPI
is to traffic entirely in Python objects. For Python, this is the normal
state of affairs, so nothing will change. For C++, this means sending and
receiving values of type <code class="computeroutput"><span class="identifier">boost</span><span class="special">::</span><span class="identifier">python</span><span class="special">::</span><span class="identifier">object</span></code>,
from the <a href="http://www.boost.org/libs/python/doc" target="_top">Boost.Python</a>
library. For instance, say we want to transmit an integer value from Python:

<pre class="programlisting"><span class="identifier">comm</span><span class="special">.</span><span class="identifier">send</span><span class="special">(</span><span class="number">1</span><span class="special">,</span> <span class="number">0</span><span class="special">,</span> <span class="number">17</span><span class="special">)</span></pre>
In C++, we would receive that value into a Python object and then <code class="computeroutput"><span class="identifier">extract</span></code> an integer value:

<pre class="programlisting"><span class="identifier">boost</span><span class="special">::</span><span class="identifier">python</span><span class="special">::</span><span class="identifier">object</span> <span class="identifier">value</span><span class="special">;</span>
<span class="identifier">comm</span><span class="special">.</span><span class="identifier">recv</span><span class="special">(</span><span class="number">0</span><span class="special">,</span> <span class="number">0</span><span class="special">,</span> <span class="identifier">value</span><span class="special">);</span>
<span class="keyword">int</span> <span class="identifier">int_value</span> <span class="special">=</span> <span class="identifier">boost</span><span class="special">::</span><span class="identifier">python</span><span class="special">::</span><span class="identifier">extract</span><span class="special">&lt;</span><span class="keyword">int</span><span class="special">&gt;(</span><span class="identifier">value</span><span class="special">);</span></pre>
In the future, Boost.MPI will be extended to allow improved interoperability
with the C++ Boost.MPI and the C MPI bindings.
<div class="section">
<div class="titlepage"><div><div><h4 class="title">
<a name="mpi.pythonref"></a>Reference</h4></div></div></div>
The Boost.MPI Python module, <code class="computeroutput"><span class="identifier">boost</span><span class="special">.</span><span class="identifier">mpi</span></code>,
has its own <a href="../boost.mpi.html" target="_top">reference documentation</a>,
which is also available using <code class="computeroutput"><span class="identifier">pydoc</span></code>
(from the command line) or <code class="computeroutput"><span class="identifier">help</span><span class="special">(</span><span class="identifier">boost</span><span class="special">.</span><span class="identifier">mpi</span><span class="special">)</span></code> (from the Python interpreter).
<div class="section">
<div class="titlepage"><div><div><h3 class="title">
<a name="mpi.design"></a>Design Philosophy</h3></div></div></div>
The design philosophy of the Parallel MPI library is very simple: be both
convenient and efficient. MPI is a library built for high-performance applications,
but its FORTRAN-centric, performance-minded design makes it rather inflexible
from the C++ point of view: passing a string from one process to another
is inconvenient, requiring several messages and explicit buffering; passing
a container of strings from one process to another requires an extra level
of manual bookkeeping; and passing a map from strings to containers of strings
is positively infuriating. The Parallel MPI library allows all of these data
types to be passed using the same simple <code class="computeroutput"><span class="identifier">send</span><span class="special">()</span></code> and <code class="computeroutput"><span class="identifier">recv</span><span class="special">()</span></code> primitives. Likewise, collective operations
such as <code class="computeroutput"><a class="link" href="../boost/mpi/reduce.html" title="Function reduce">reduce()</a></code> allow arbitrary data types
and function objects, much like the C++ Standard Library would.
The higher-level abstractions provided for convenience must not have an impact
on the performance of the application. For instance, sending an integer via
<code class="computeroutput"><span class="identifier">send</span></code> must be as efficient
as a call to <code class="computeroutput"><span class="identifier">MPI_Send</span></code>, which
means that it must be implemented by a simple call to <code class="computeroutput"><span class="identifier">MPI_Send</span></code>;
likewise, an integer <code class="computeroutput"><a class="link" href="../boost/mpi/reduce.html" title="Function reduce">reduce()</a></code>
using <code class="computeroutput"><span class="identifier">std</span><span class="special">::</span><span class="identifier">plus</span><span class="special">&lt;</span><span class="keyword">int</span><span class="special">&gt;</span></code> must
be implemented with a call to <code class="computeroutput"><span class="identifier">MPI_Reduce</span></code>
on integers using the <code class="computeroutput"><span class="identifier">MPI_SUM</span></code>
operation: anything less will impact performance. In essence, this is the
"don't pay for what you don't use" principle: if the user is not
transmitting strings, s/he should not pay the overhead associated with strings.
Sometimes, achieving maximal performance means foregoing convenient abstractions
and implementing certain functionality using lower-level primitives. For
this reason, it is always possible to extract enough information from the
abstractions in Boost.MPI to minimize the amount of effort required to interface
between Boost.MPI and the C MPI library.
<div class="section">
<div class="titlepage"><div><div><h3 class="title">
<a name="mpi.threading"></a>Threads</h3></div></div></div>
There are an increasing number of hybrid parallel applications that mix
distributed and shared memory parallelism. To know how to support that model,
one needs to know what level of threading support is guaranteed by the MPI
implementation. There are four ordered levels of possible threading support,
described by <code class="computeroutput">mpi::threading::level</code>.
At the lowest level, you should not use threads at all; at the highest level,
any thread can perform MPI calls.

If you want to use multi-threading in your MPI application, you should indicate
your preferred threading support in the environment constructor. Then probe
the level the library actually provided, and decide what you can do with it (it
could be nothing, in which case aborting is a valid option):
<pre class="programlisting"><span class="preprocessor">#include</span> <span class="special">&lt;</span><span class="identifier">boost</span><span class="special">/</span><span class="identifier">mpi</span><span class="special">/</span><span class="identifier">environment</span><span class="special">.</span><span class="identifier">hpp</span><span class="special">&gt;</span>
<span class="preprocessor">#include</span> <span class="special">&lt;</span><span class="identifier">boost</span><span class="special">/</span><span class="identifier">mpi</span><span class="special">/</span><span class="identifier">communicator</span><span class="special">.</span><span class="identifier">hpp</span><span class="special">&gt;</span>
<span class="preprocessor">#include</span> <span class="special">&lt;</span><span class="identifier">iostream</span><span class="special">&gt;</span>
<span class="keyword">namespace</span> <span class="identifier">mpi</span> <span class="special">=</span> <span class="identifier">boost</span><span class="special">::</span><span class="identifier">mpi</span><span class="special">;</span>
<span class="keyword">namespace</span> <span class="identifier">mt</span> <span class="special">=</span> <span class="identifier">mpi</span><span class="special">::</span><span class="identifier">threading</span><span class="special">;</span>

<span class="keyword">int</span> <span class="identifier">main</span><span class="special">()</span>
<span class="special">{</span>
  <span class="identifier">mpi</span><span class="special">::</span><span class="identifier">environment</span> <span class="identifier">env</span><span class="special">(</span><span class="identifier">mt</span><span class="special">::</span><span class="identifier">funneled</span><span class="special">);</span>
  <span class="keyword">if</span> <span class="special">(</span><span class="identifier">env</span><span class="special">.</span><span class="identifier">thread_level</span><span class="special">()</span> <span class="special">&lt;</span> <span class="identifier">mt</span><span class="special">::</span><span class="identifier">funneled</span><span class="special">)</span> <span class="special">{</span>
    <span class="identifier">env</span><span class="special">.</span><span class="identifier">abort</span><span class="special">(-</span><span class="number">1</span><span class="special">);</span>
  <span class="special">}</span>
  <span class="identifier">mpi</span><span class="special">::</span><span class="identifier">communicator</span> <span class="identifier">world</span><span class="special">;</span>
  <span class="identifier">std</span><span class="special">::</span><span class="identifier">cout</span> <span class="special">&lt;&lt;</span> <span class="string">"I am process "</span> <span class="special">&lt;&lt;</span> <span class="identifier">world</span><span class="special">.</span><span class="identifier">rank</span><span class="special">()</span> <span class="special">&lt;&lt;</span> <span class="string">" of "</span> <span class="special">&lt;&lt;</span> <span class="identifier">world</span><span class="special">.</span><span class="identifier">size</span><span class="special">()</span>
            <span class="special">&lt;&lt;</span> <span class="string">"."</span> <span class="special">&lt;&lt;</span> <span class="identifier">std</span><span class="special">::</span><span class="identifier">endl</span><span class="special">;</span>
  <span class="keyword">return</span> <span class="number">0</span><span class="special">;</span>
<span class="special">}</span></pre>
<div class="section">
<div class="titlepage"><div><div><h3 class="title">
<a name="mpi.performance"></a>Performance Evaluation</h3></div></div></div>
Message-passing performance is crucial in high-performance distributed computing.
To evaluate the performance of Boost.MPI, we modified the standard <a href="http://www.scl.ameslab.gov/netpipe/" target="_top">NetPIPE</a> benchmark (version
3.6.2) to use Boost.MPI and compared its performance against raw MPI. We
ran five different variants of the NetPIPE benchmark:
<div class="orderedlist"><ol class="orderedlist" type="1">
<li class="listitem">
MPI: The unmodified NetPIPE benchmark.
<li class="listitem">
Boost.MPI: NetPIPE modified to use Boost.MPI calls for communication.
<li class="listitem">
MPI (Datatypes): NetPIPE modified to use a derived datatype (which itself
contains a single <code class="computeroutput"><span class="identifier">MPI_BYTE</span></code>)
rather than a fundamental datatype.
<li class="listitem">
Boost.MPI (Datatypes): NetPIPE modified to use a user-defined type <code class="computeroutput"><span class="identifier">Char</span></code> in place of the fundamental <code class="computeroutput"><span class="keyword">char</span></code> type. The <code class="computeroutput"><span class="identifier">Char</span></code>
type contains a single <code class="computeroutput"><span class="keyword">char</span></code>,
a <code class="computeroutput"><span class="identifier">serialize</span><span class="special">()</span></code>
method to make it serializable, and specializes <code class="computeroutput"><a class="link" href="../boost/mpi/is_mpi_datatype.html" title="Struct template is_mpi_datatype">is_mpi_datatype</a></code>
to force Boost.MPI to build a derived MPI data type for it.
<li class="listitem">
Boost.MPI (Serialized): NetPIPE modified to use a user-defined type
<code class="computeroutput"><span class="identifier">Char</span></code> in place of the
fundamental <code class="computeroutput"><span class="keyword">char</span></code> type. This
<code class="computeroutput"><span class="identifier">Char</span></code> type contains a
single <code class="computeroutput"><span class="keyword">char</span></code> and is serializable.
Unlike the Datatypes case, <code class="computeroutput"><a class="link" href="../boost/mpi/is_mpi_datatype.html" title="Struct template is_mpi_datatype">is_mpi_datatype</a></code>
is <span class="bold"><strong>not</strong></span> specialized, forcing Boost.MPI
to perform many, many serialization calls.
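The <code class="computeroutput">Char</code> wrapper used in the last two variants can be sketched as follows. This is a minimal illustration of the pattern, not the benchmark source: the stand-in <code class="computeroutput">is_mpi_datatype</code> trait below approximates <code class="computeroutput">boost::mpi::is_mpi_datatype</code> (from <code class="computeroutput">&lt;boost/mpi/datatype.hpp&gt;</code>) so the sketch compiles without Boost installed.

```cpp
#include <type_traits>

// Stand-in for boost::mpi::is_mpi_datatype; real code would specialize the
// Boost trait instead of defining its own.
template <typename T>
struct is_mpi_datatype : std::false_type {};

// User-defined wrapper around a single char, as in the benchmark variants.
struct Char {
    char value;

    // Boost.Serialization-style member: makes Char serializable. The
    // "Serialized" variant stops here, so every Char is serialized.
    template <typename Archive>
    void serialize(Archive& ar, const unsigned int /*version*/) {
        ar & value;
    }
};

// The "Datatypes" variant additionally specializes the trait, telling
// Boost.MPI to build a derived MPI datatype once rather than routing each
// Char through the serialization layer.
template <>
struct is_mpi_datatype<Char> : std::true_type {};
```

With the specialization present, messages of <code class="computeroutput">Char</code> travel via a derived MPI datatype; without it, each element passes through serialization, which is the behavior the Serialized curve measures.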
The actual tests were performed on the Odin cluster in the <a href="http://www.cs.indiana.edu/" target="_top">Department
of Computer Science</a> at <a href="http://www.iub.edu" target="_top">Indiana University</a>,
which contains 128 nodes connected via InfiniBand. Each node contains 4GB
memory and two AMD Opteron processors. The NetPIPE benchmarks were compiled
with Intel's C++ Compiler, version 9.0, Boost 1.35.0 (prerelease), and <a href="http://www.open-mpi.org/" target="_top">Open MPI</a> version 1.1. The NetPIPE
results follow:
<span class="inlinemediaobject"><img src="../../../libs/mpi/doc/netpipe.png" alt="netpipe"></span>
There are some observations we can make about these NetPIPE results. First
of all, the top two plots show that Boost.MPI performs on par with MPI for
fundamental types. The next two plots show that Boost.MPI performs on par
with MPI for derived data types, even though Boost.MPI provides a much more
abstract, completely transparent approach to building derived data types
than raw MPI. Overall performance for derived data types is significantly
worse than for fundamental data types, but the bottleneck is in the underlying
MPI implementation itself. Finally, when forcing Boost.MPI to serialize characters
individually, performance suffers greatly. This particular instance is the
worst possible case for Boost.MPI, because we are serializing millions of
individual characters. Overall, the additional abstraction provided by Boost.MPI
does not impair its performance.
<div class="section">
<div class="titlepage"><div><div><h3 class="title">
<a name="mpi.history"></a>Revision History</h3></div></div></div>
<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; ">
<li class="listitem">
<span class="bold"><strong>Boost 1.36.0</strong></span>:
<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: circle; "><li class="listitem">
Support for non-blocking operations in Python, from Andreas Klöckner
<li class="listitem">
<span class="bold"><strong>Boost 1.35.0</strong></span>: Initial release, containing
the following post-review changes
<div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: circle; ">
<li class="listitem">
Support for arrays in all collective operations
<li class="listitem">
Support default-construction of <code class="computeroutput"><a class="link" href="../boost/mpi/environment.html" title="Class environment">environment</a></code>
<li class="listitem">
<span class="bold"><strong>2006-09-21</strong></span>: Boost.MPI accepted into
Boost.
<div class="section">
<div class="titlepage"><div><div><h3 class="title">
<a name="mpi.acknowledge"></a>Acknowledgments</h3></div></div></div>
Boost.MPI was developed with support from Zürcher Kantonalbank. Daniel Egloff
and Michael Gauckler contributed many ideas to Boost.MPI's design, particularly
in the design of its abstractions for MPI data types and the novel skeleton/content
mechanism for large data structures. Prabhanjan (Anju) Kambadur developed
the predecessor to Boost.MPI that proved the usefulness of the Serialization
library in an MPI setting and the performance benefits of specialization
in a C++ abstraction layer for MPI. Jeremy Siek managed the formal review
of Boost.MPI.
<table xmlns:rev="http://www.cs.rpi.edu/~gregod/boost/tools/doc/revision" width="100%"><tr>
<td align="left"></td>
<td align="right"><div class="copyright-footer">Copyright © 2005-2007 Douglas Gregor,
Matthias Troyer, Trustees of Indiana University<p>
Distributed under the Boost Software License, Version 1.0. (See accompanying
file LICENSE_1_0.txt or copy at <a href="http://www.boost.org/LICENSE_1_0.txt" target="_top">
http://www.boost.org/LICENSE_1_0.txt </a>)
<div class="spirit-nav">
<a accesskey="p" href="../boost/mpi/timer.html"><img src="../../../doc/src/images/prev.png" alt="Prev"></a><a accesskey="u" href="../mpi.html"><img src="../../../doc/src/images/up.png" alt="Up"></a><a accesskey="h" href="../index.html"><img src="../../../doc/src/images/home.png" alt="Home"></a><a accesskey="n" href="../program_options.html"><img src="../../../doc/src/images/next.png" alt="Next"></a>