Installation instructions for Parallel HDF5
-------------------------------------------

1. Overview
-----------
This file contains instructions for the installation of parallel HDF5.
Platforms supported by this release are the SGI Origin 2000, the IBM
SP2, and the Intel TFLOPS.  The steps are somewhat manual and will be
more automated in the next release.

If you have difficulties installing the software on your system, please
send mail to

	hdfhelp@ncsa.uiuc.edu

In your mail, please include the output of "uname -a".  Also attach the
content of "config.log" if you have run the "configure" command.

First, you must obtain and unpack the HDF5 source as described in the
file INSTALL.  You also need to know the include and library paths of
the MPI and MPI-IO software installed on your system, since the
parallel HDF5 library uses them for parallel I/O access.

2. Quick instructions for known systems
---------------------------------------
The following shows the steps for running the parallel HDF5 configure
on a few machines we have tested.  If your platform is not shown, or
the steps do not work on your system, please go to the next section for
a more detailed explanation.

2.1. IBM SP2
------------
Follow the instructions in bin/config_para_ibm_sp.sh.

2.2. TFLOPS
-----------
Follow the instructions in INSTALL_TFLOPS.

2.3. SGI/CRAY (Origin 2000, T3E)
--------------------------------
For SGI/CRAY systems on which MPI-IO is part of the system MPI library,
such as mpt 1.3, use the following steps:

    ./configure --enable-parallel --prefix=$PWD/installdir
    make
    make check
    make install

2.4. MPICH systems
------------------
For systems on which the latest MPICH library with ROMIO is installed,
use the following step:

    CC=mpicc ./configure --prefix=$PWD/installdir

2.5. Other machines
-------------------
For systems on which MPI and/or MPI-IO are not part of the system
library, or if you want to use your own version of the MPI or MPI-IO
libraries, use the following steps:

    mpi1_inc=""                             # mpi-1 include
    mpi1_lib=""                             # mpi-1 library
    mpio_inc=-I$HOME/ROMIO/include          # mpio include
    mpio_lib="-L$HOME/ROMIO/lib/IRIX64"     # mpio library

    MPI_INC="$mpio_inc $mpi1_inc"
    MPI_LIB="$mpio_lib $mpi1_lib"

    # Specify where to find the MPI and/or MPI-IO headers and libraries
    CPPFLAGS=$MPI_INC
    export CPPFLAGS
    LDFLAGS=$MPI_LIB
    export LDFLAGS
    LIBS="-lmpio -lmpi"
    export LIBS

    # Specify how to run MPI parallel jobs
    RUNPARALLEL="mpirun -np 2"
    export RUNPARALLEL

    ./configure --enable-parallel --prefix=$PWD/installdir
    make
    make check
    make install
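The exported variables in the steps above can also be passed to
configure on a single command line.  This is only an illustrative
sketch: the ROMIO paths repeat the examples above and will differ on
your system, and the form assumes a Bourne-style shell (csh users
should set the variables with setenv before running configure):

    # One-shot equivalent of the exports in section 2.5 (example paths)
    CPPFLAGS="-I$HOME/ROMIO/include" \
    LDFLAGS="-L$HOME/ROMIO/lib/IRIX64" \
    LIBS="-lmpio -lmpi" \
    RUNPARALLEL="mpirun -np 2" \
    ./configure --enable-parallel --prefix=$PWD/installdir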
3. Detailed explanation
-----------------------
The HDF5 library can be configured to use MPI and MPI-IO for
parallelism on a distributed multi-processor system.  The easiest way
to do this is to have a properly installed parallel compiler (e.g.,
MPICH's mpicc or IBM's mpcc) and supply that executable as the value of
the CC environment variable:

    $ CC=mpcc ./configure
    $ CC=/usr/local/mpi/bin/mpicc ./configure

If no such wrapper script is available, then you must specify your
normal C compiler along with the MPI/MPI-IO distribution to be used
(values other than `mpich' will be added at a later date):

    $ ./configure --enable-parallel=mpich

If the MPI/MPI-IO include files and/or libraries cannot be found by the
compiler, then their directories must be given as arguments to CPPFLAGS
and/or LDFLAGS:

    $ CPPFLAGS=-I/usr/local/mpi/include \
      LDFLAGS=-L/usr/local/mpi/lib/LINUX/ch_p4 \
      ./configure --enable-parallel=mpich

If a parallel library is being built, then configure attempts to
determine how to run a parallel application on one processor and on
many processors.  If the compiler is mpicc and the user has not
specified values for RUNSERIAL and RUNPARALLEL, then configure chooses
`mpirun' from the same directory as `mpicc':

    RUNSERIAL:    /usr/local/mpi/bin/mpirun -np 1
    RUNPARALLEL:  /usr/local/mpi/bin/mpirun -np $${NPROCS:=2}

The `$${NPROCS:=2}' will be replaced with the value of the NPROCS
environment variable at the time `make check' is run (or with the value
2 if NPROCS is not set).
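For example, to run the parallel tests with four processes instead of
the default two (a minimal sketch, assuming a Bourne-style shell; make
propagates NPROCS through the environment to the shell that evaluates
the substitution above):

    $ NPROCS=4 make check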