Installation instructions for Parallel HDF5
-------------------------------------------
1. Overview
-----------
This file contains instructions for the installation of parallel
HDF5. Platforms supported by this release are the SGI Origin 2000,
IBM SP2, and the Intel TFLOPS. The steps are somewhat manual and
will be further automated in a future release. If you have
difficulties installing the software on your system, please send
mail to

    hdfparallel@ncsa.uiuc.edu

In your mail, please include the output of "uname -a". Also attach
the content of "config.log" if you have run the "configure" command.

First, you must obtain and unpack the HDF5 source as described in
the file INSTALL. You also need to know the include and library
paths of the MPI and MPI-IO software installed on your system,
since the parallel HDF5 library uses them for parallel I/O access.
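
If you are not sure where MPI is installed on your system, a sketch
like the following may help locate the include and library paths.
It assumes an MPICH-style "mpicc" wrapper is on your path, and the
directories shown are examples only; adjust them for your site.

    which mpicc                  # location of the MPI compiler wrapper
    mpicc -show                  # MPICH wrappers print the underlying
                                 # compile/link command with -I and -L paths
    ls /usr/local/mpi/include    # a typical place to look for mpi.h
    ls /usr/local/mpi/lib        # a typical place to look for the MPI library
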
2. Quick Instructions for known systems
---------------------------------------
The following shows the particular steps used to configure parallel
HDF5 on a few machines we have tested. If your platform is not shown,
or the steps do not work on your system, please go to the next
section for a more detailed explanation.
------
TFLOPS
------
Follow the instructions in INSTALL_TFLOPS.
-------
IBM SP2
-------
First of all, make sure your environment variables are set correctly
to compile and execute a single-process MPI application on the SP2
machine. They should be the same as or comparable to the following:
setenv CC mpcc_r
setenv MP_PROCS 1
setenv MP_NODES 1
setenv MP_LABELIO no
setenv MP_RMPOOL 0
setenv RUNPARALLEL "MP_PROCS=2 MP_TASKS_PER_NODE=2 poe"
setenv LLNL_COMPILE_SINGLE_THREADED TRUE
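
As a quick sanity check, something like the following can be used to
confirm that a single-process MPI program compiles and runs with
these settings ("hello.c" here stands for any small MPI test program
of your own; it is not part of HDF5):

    mpcc_r -o hello hello.c      # compile with IBM's MPI compiler wrapper
    poe ./hello                  # run one task (MP_PROCS is set to 1 above)
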
The shared library configuration for this version is broken, so only
the static library is supported.

An error in the install method for powerpc-ibm-aix4.3.2.0 (LLNL Blue)
was discovered after the code freeze. You need to remove the following
line from config/powerpc-ibm-aix4.3.2.0 before configuring:
ac_cv_path_install=${ac_cv_path_install='cp -r'}
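
One way to remove that line (a sketch, shown here as a portable
grep filter plus a backup copy; it assumes no other line in the
file mentions ac_cv_path_install):

    cp config/powerpc-ibm-aix4.3.2.0 config/powerpc-ibm-aix4.3.2.0.orig
    grep -v ac_cv_path_install config/powerpc-ibm-aix4.3.2.0.orig \
        > config/powerpc-ibm-aix4.3.2.0
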
Then do the following steps:
./configure --disable-shared --prefix=<install-directory>
make                # build the library
make check          # verify the correctness
make install        # install the library
---------------
SGI Origin 2000
Cray T3E
(where MPI-IO is part of the system MPI library, such as mpt 1.3)
---------------
#!/bin/sh
# Command used by "make check" to launch the parallel tests
RUNPARALLEL="mpirun -np 3"
export RUNPARALLEL
# Link against the system MPI library (which includes MPI-IO)
LIBS="-lmpi"
export LIBS
./configure --enable-parallel --disable-shared --prefix=$PWD/installdir
make
make check
make install
---------------
SGI Origin 2000
Cray T3E
(where MPI-IO is not part of the system MPI library, or you want to
use your own version of MPI-IO)
---------------
mpi1_inc=""                            # MPI-1 include path
mpi1_lib=""                            # MPI-1 library path
mpio_inc=-I$HOME/ROMIO/include         # MPI-IO (ROMIO) include path
mpio_lib="-L$HOME/ROMIO/lib/IRIX64"    # MPI-IO (ROMIO) library path
MPI_INC="$mpio_inc $mpi1_inc"
MPI_LIB="$mpio_lib $mpi1_lib"
# for HDF5 version 1.1, pass the paths through CPPFLAGS and LDFLAGS
CPPFLAGS=$MPI_INC
export CPPFLAGS
LDFLAGS=$MPI_LIB
export LDFLAGS
RUNPARALLEL="mpirun -np 3"
export RUNPARALLEL
LIBS="-lmpio -lmpi"
export LIBS
./configure --enable-parallel --disable-shared --prefix=$PWD/installdir
make
make check
make install
3. Detailed explanation
-----------------------
The HDF5 library can be configured to use MPI and MPI-IO for
parallelism on a distributed multi-processor system. The easiest
way to do this is to have a properly installed parallel compiler
(e.g., MPICH's mpicc or IBM's mpcc) and supply that executable as
the value of the CC environment variable:
$ CC=mpcc ./configure
$ CC=/usr/local/mpi/bin/mpicc ./configure
If no such wrapper script is available then you must specify your
normal C compiler along with the MPI/MPI-IO distribution to be used
(values other than `mpich' will be added at a later date):
$ ./configure --enable-parallel=mpich
If the MPI/MPI-IO include files and/or libraries cannot be
found by the compiler then their directories must be given as
arguments to CPPFLAGS and/or LDFLAGS:
$ CPPFLAGS=-I/usr/local/mpi/include \
LDFLAGS=-L/usr/local/mpi/lib/LINUX/ch_p4 \
./configure --enable-parallel=mpich
If a parallel library is being built then configure attempts
to determine how to run a parallel application on one
processor and on many processors. If the compiler is mpicc
and the user hasn't specified values for RUNSERIAL and
RUNPARALLEL then configure chooses `mpirun' from the same
directory as `mpicc':
RUNSERIAL: /usr/local/mpi/bin/mpirun -np 1
RUNPARALLEL: /usr/local/mpi/bin/mpirun -np $${NPROCS:=3}
The `$${NPROCS:=3}' will be substituted with the value of the
NPROCS environment variable at the time `make check' is run, or
with the value 3 if NPROCS is not set.
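
For example, to run the parallel tests with 8 processes instead of
the default 3, NPROCS can be set on the "make check" command line
(a sketch; the actual launch command is whatever RUNPARALLEL value
was chosen or supplied at configure time):

    NPROCS=8 make check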