...one of the most highly regarded and expertly designed C++ library projects in the world.
— Herb Sutter and Andrei Alexandrescu, C++ Coding Standards
Getting started with Boost.MPI requires a working MPI implementation, a recent version of Boost, and some configuration information.
To get started with Boost.MPI, you will first need a working MPI implementation. There are many conforming MPI implementations available. Boost.MPI should work with any conforming implementation, although it has been tested extensively with only a few of them.
You can test your implementation using the following simple program, which passes a message from one processor to another. Each processor prints a message to standard output.
#include <mpi.h>
#include <iostream>

int main(int argc, char* argv[])
{
  MPI_Init(&argc, &argv);

  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  if (rank == 0) {
    int value = 17;
    int result = MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    if (result == MPI_SUCCESS)
      std::cout << "Rank 0 OK!" << std::endl;
  } else if (rank == 1) {
    int value;
    int result = MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                          MPI_STATUS_IGNORE);
    if (result == MPI_SUCCESS && value == 17)
      std::cout << "Rank 1 OK!" << std::endl;
  }
  MPI_Finalize();
  return 0;
}
You should compile and run this program on two processors. To do this, consult the documentation for your MPI implementation. With LAM/MPI, for instance, you compile with the mpiCC or mpic++ compiler wrapper, boot the LAM/MPI daemon, and run your program via mpirun. For instance, if your program is called mpi-test.cpp, use the following commands:
mpiCC -o mpi-test mpi-test.cpp
lamboot
mpirun -np 2 ./mpi-test
lamhalt
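Note that lamboot and lamhalt are specific to the older LAM/MPI; with a modern Open MPI installation there is no daemon to boot, and the equivalent session would look something like this (a sketch, assuming mpic++ and mpirun are in your path):

mpic++ -o mpi-test mpi-test.cpp
mpirun -np 2 ./mpi-test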
When you run this program, you will see both Rank 0 OK! and Rank 1 OK! printed to the screen. However, they may be printed in any order and may even overlap each other. The following output is perfectly legitimate for this MPI program:
Rank Rank 1 OK!
0 OK!
If your output looks something like the above, your MPI implementation appears to be working with a C++ compiler and we're ready to move on.
Like the rest of Boost, Boost.MPI uses version 2 of the Boost.Build system for configuring and building the library binary.
Please refer to the general Boost installation instructions for Unix variants (including Unix, Linux and macOS) or Windows. The simplified build instructions should apply on most platforms, with a few MPI-specific modifications described below.
As explained in the Boost installation instructions, running the bootstrap script (./bootstrap.sh for Unix variants or bootstrap.bat for Windows) from the Boost root directory will produce a project-config.jam file. You need to edit that file and add the following line:
using mpi ;
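The resulting file might then look something like the following sketch; this is illustrative only, as the generated contents vary with your platform and detected toolset:

# project-config.jam (illustrative sketch; your generated file will differ)
import option ;

# Toolset detected by bootstrap:
using gcc ;

# Line added by hand to enable Boost.MPI:
using mpi ;

option.set keep-going : false ;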
Alternatively, you can explicitly provide the list of Boost libraries you
want to build. Please refer to the --help
option of the bootstrap
script.
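For instance, the bootstrap script accepts a --with-libraries option listing the libraries to build; a sketch that restricts the build to MPI and its Serialization dependency (check the --help output of your version for the exact option names):

$ ./bootstrap.sh --with-libraries=mpi,serialization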
First, you need to scan the include/boost/mpi/config.hpp file and check whether some settings need to be modified for your MPI implementation or preferences. In particular, you will need to comment out the BOOST_MPI_HOMOGENEOUS macro if you plan to run on a heterogeneous set of machines. See the optimization notes below.
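The relevant part of the file looks roughly like this (an illustrative excerpt, not verbatim; check the file shipped with your Boost version):

// include/boost/mpi/config.hpp (illustrative excerpt)
// Assumes every node uses the same data representation; comment this
// out if you run on a heterogeneous set of machines:
#define BOOST_MPI_HOMOGENEOUS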
Most MPI implementations require specific compilation and link options. To hide these details from the user, most MPI implementations provide wrappers that silently pass those options to the compiler.
Depending on your MPI implementation, some work might be needed to tell Boost which specific MPI options to use. This is done through the using mpi ; directive in the project-config.jam file, whose general form is (do not forget to leave spaces around : and before ;):
using mpi : [<MPI compiler wrapper>] : [<compilation and link options>] : [<mpi runner>] ;
Depending on your installation and MPI distribution, the build system might be able to find all the required information, in which case you just need to specify:
using mpi ;
Most of the time, especially with production HPC clusters, some work will need to be done. Here is a list of the most common issues and suggestions on how to fix them.
If the MPI compiler wrapper is not in your path or does not have a standard name, you will need to tell the build system how to call it using the first parameter:
using mpi : /opt/mpi/bullxmpi/1.2.8.3/bin/mpicc ;
Warning: Boost.MPI only uses the C interface, so specifying the C wrapper should be enough. But some implementations will insist on importing the C++ bindings.
With some implementations, or with some specific integrations[10], you will need to provide the compilation and link options through the second parameter using 'jam' directives. The following type of configuration used to be required for some specific Intel MPI installations (in such a case, the name of the wrapper can be left blank):
using mpi : mpiicc :
      <library-path>/softs/intel/impi/5.0.1.035/intel64/lib
      <library-path>/softs/intel/impi/5.0.1.035/intel64/lib/release_mt
      <include>/softs/intel/impi/5.0.1.035/intel64/include
      <find-shared-library>mpifort
      <find-shared-library>mpi_mt
      <find-shared-library>mpigi
      <find-shared-library>dl
      <find-shared-library>rt ;
As a convenience, MPI wrappers usually have an option that prints the required information, typically starting with --show. You can use these to work out the required jam directives:
$ mpiicc -show
icc -I/softs/.../include ... -L/softs/.../lib ... -Xlinker -rpath -Xlinker /softs/.../lib .... -lmpi -ldl -lrt -lpthread
$
$ mpicc --showme
icc -I/opt/.../include -pthread -L/opt/.../lib -lmpi -ldl -lm -lnuma -Wl,--export-dynamic -lrt -lnsl -lutil -lm -ldl
$ mpicc --showme:compile
-I/opt/mpi/bullxmpi/1.2.8.3/include -pthread
$ mpicc --showme:link
-pthread -L/opt/.../lib -lmpi -ldl -lm -lnuma -Wl,--export-dynamic -lrt -lnsl -lutil -lm -ldl
$
To see the results of MPI auto-detection, pass --debug-configuration
on the bjam command line.
Note: This is only used when running the tests.
If you need to use a special command to launch an MPI program, you will need to specify it through the third parameter of the using mpi directive.
So, assuming you launch the all_gather_test
program with:
$ mpiexec.hydra -np 4 all_gather_test
The directive will look like:
using mpi : mpiicc : [<compilation and link options>] : mpiexec.hydra -n ;
To build the whole Boost distribution:
$ cd <boost distribution>
$ ./b2
To build the Boost.MPI library and its dependencies:
$ cd <boost distribution>/libs/mpi/build
$ ../../../b2
To build applications based on Boost.MPI, compile and link them as you normally would for MPI programs, but remember to link against the boost_mpi and boost_serialization libraries, e.g.,
mpic++ -I/path/to/boost/mpi my_application.cpp -Llibdir \
  -lboost_mpi -lboost_serialization
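Once the link line works, a quick way to confirm that Boost.MPI itself is usable is a minimal program such as the following sketch, which only relies on the boost_mpi and boost_serialization libraries built above:

#include <boost/mpi/environment.hpp>
#include <boost/mpi/communicator.hpp>
#include <iostream>

int main(int argc, char* argv[])
{
  // Initializes MPI on construction and finalizes it on destruction.
  boost::mpi::environment env(argc, argv);
  // Default communicator corresponds to MPI_COMM_WORLD.
  boost::mpi::communicator world;
  std::cout << "I am process " << world.rank()
            << " of " << world.size() << "." << std::endl;
  return 0;
}

Run it with your implementation's launcher (e.g., mpirun -np 2 ./my_application); each process should report its rank.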
If you plan to use the Python bindings for Boost.MPI in conjunction with the C++ Boost.MPI, you will also need to link against the boost_mpi_python library, e.g., by adding -lboost_mpi_python-gcc to your link command. This step will only be necessary if you intend to register C++ types or use the skeleton/content mechanism from within Python.
[10] Some HPC clusters will insist that users use their own in-house interface to the MPI system.