Installation instructions for Parallel HDF5
-------------------------------------------
1. Overview
-----------
This file contains instructions for the installation of parallel
HDF5. Platforms supported by this release are SGI Origin 2000, IBM SP2,
Intel TFLOPS, and Linux version 2.4 and greater. The steps are somewhat
manual and will be more automated in the next release. If you have
difficulties installing the software on your system, please send mail to

    hdfparallel@ncsa.uiuc.edu

In your mail, please include the output of "uname -a". Also attach the
contents of "config.log" if you ran the "configure" command.

First, you must obtain and unpack the HDF5 source as described in the
INSTALL file. You also need to know the include and library paths of the
MPI and MPI-IO software installed on your system, since the parallel HDF5
library uses them for parallel I/O access.
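
If you are not sure where MPI is installed, the compiler wrapper can
often tell you. The following is only a sketch, assuming an MPICH-style
mpicc; other MPI implementations provide similar options:

which mpicc
mpicc -show    # prints the underlying compile/link command, including
               # the -I and -L paths of the MPI installation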
2. Quick Instructions for known systems
---------------------------------------
The following shows the steps to run the parallel HDF5 configure on a
few machines we have tested. If your platform is not shown, or the steps
do not work for it, please go to the next section for a more detailed
explanation.
------
TFLOPS
------
Follow the instructions in INSTALL_TFLOPS.
-------
IBM SP
-------
First of all, make sure your environment variables are set correctly
to compile and execute a single-process MPI application on the SP
machine. Unfortunately, the settings vary from machine to machine.
For example, the following works for the Blue machine at LLNL.
setenv CC mpcc_r
setenv F9X mpxlf_r # if parallel Fortran API is wanted
setenv MP_PROCS 1
setenv MP_NODES 1
setenv MP_LABELIO no
setenv MP_RMPOOL 0
setenv RUNPARALLEL "MP_PROCS=2 MP_TASKS_PER_NODE=2 poe"
setenv LLNL_COMPILE_SINGLE_THREADED TRUE
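
Before configuring HDF5, it is worth verifying that the wrapper can
compile and run a trivial program. This is only a sketch; the file name
hello.c is an arbitrary example:

echo 'int main(void){ return 0; }' > hello.c
mpcc_r hello.c -o hello
poe ./hello    # should run and exit cleanly with MP_PROCS=1 set above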
The shared library configuration for this version is broken, so only
the static library is supported.
Then do the following steps:
$ ./configure --disable-shared --prefix=<install-directory>
$ make # build the library
$ make check # verify the correctness
$ make install
---------------
SGI Origin 2000
Cray T3E
(where MPI-IO is part of the system MPI library, such as mpt 1.3)
---------------
#!/bin/sh
RUNPARALLEL="mpirun -np 3"
export RUNPARALLEL
LIBS="-lmpi"
export LIBS
./configure --enable-parallel --prefix=$PWD/installdir
make
make check
make install
***Known problem***
Some O2K systems may encounter an error during make:

ld32: FATAL 9: I/O error (-lmpi): No such file or directory

This happens because libtool tries too hard to locate the loader 'ld'
but ends up with one that does not know where to find the right
version of libmpi.a for the particular ABI requested.
The fix is to edit the file 'libtool' at the top of the build directory.
Search for a string that looks like the following:
LD="/opt/MIPSpro/MIPSpro_default/opt/MIPSpro/bin/ld -n32"
Replace it with something that knows how to find the right libmpi.a,
for example:
LD="/opt/MIPSpro/MIPSpro_default/opt/MIPSpro/bin/cc -n32"
---------------
SGI Origin 2000
Cray T3E
(where MPI-IO is not part of the system MPI library, or you want to use
your own version of MPI-IO)
---------------
mpi1_inc="" #mpi-1 include
mpi1_lib="" #mpi-1 library
mpio_inc=-I$HOME/ROMIO/include #mpio include
mpio_lib="-L$HOME/ROMIO/lib/IRIX64" #mpio library
MPI_INC="$mpio_inc $mpi1_inc"
MPI_LIB="$mpio_lib $mpi1_lib"
#for version 1.1
CPPFLAGS=$MPI_INC
export CPPFLAGS
LDFLAGS=$MPI_LIB
export LDFLAGS
RUNPARALLEL="mpirun -np 3"
export RUNPARALLEL
LIBS="-lmpio -lmpi"
export LIBS
./configure --enable-parallel --prefix=$PWD/installdir
make
make check
make install
---------------------
Linux 2.4 and greater
---------------------
Be sure that your installation of MPICH was configured with the following
configure command-line option:

-cflags="-D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64"

This allows files larger than 2GB on Linux systems and is only available
with Linux kernels 2.4 and greater.
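
For example, a sketch of such an MPICH build (the source directory and
install prefix here are hypothetical; adjust them for your system):

cd mpich-1.2.1
./configure -cflags="-D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64" \
            -prefix=/usr/local/mpich
make
make install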
------------------
HP V2500 and N4000
------------------
Follow the instructions in section 3.
3. Detailed explanation
-----------------------
The HDF5 library can be configured to use MPI and MPI-IO for parallelism
on a distributed multi-processor system. The easiest way to do this is to
have a properly installed parallel compiler (e.g., MPICH's mpicc or IBM's
mpcc) and supply that executable as the value of the CC environment
variable:
$ CC=mpcc ./configure
$ CC=/usr/local/mpi/bin/mpicc ./configure
If no such wrapper script is available then you must specify your normal
C compiler along with the distribution of MPI/MPI-IO which is to be used
(values other than `mpich' will be added at a later date):
$ ./configure --enable-parallel=mpich
If the MPI/MPI-IO include files and/or libraries cannot be found by the
compiler then their directories must be given as arguments to CPPFLAGS
and/or LDFLAGS:
$ CPPFLAGS=-I/usr/local/mpi/include \
LDFLAGS=-L/usr/local/mpi/lib/LINUX/ch_p4 \
./configure --enable-parallel=mpich
If a parallel library is being built, then configure attempts to
determine how to run a parallel application on one processor and on
many processors. If the compiler is `mpicc' and the user hasn't
specified values for RUNSERIAL and RUNPARALLEL, then configure chooses
`mpirun' from the same directory as `mpicc':
RUNSERIAL: /usr/local/mpi/bin/mpirun -np 1
RUNPARALLEL: /usr/local/mpi/bin/mpirun -np $${NPROCS:=3}
The `$${NPROCS:=3}' will be substituted with the value of the NPROCS
environment variable at the time `make check' is run (or the value 3).
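
For example, to run the parallel tests with six processes instead of
the default three (any process count your MPI installation supports
will do):

$ NPROCS=6 make check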
4. Parallel tests
-----------------
The testpar/ directory contains tests for Parallel HDF5 and MPI-IO.
The t_mpi test checks the basic functionality of some MPI-IO features
used by Parallel HDF5. It usually exits with a non-zero code if a
required MPI-IO feature does not succeed as expected. One exception is
the test of accessing files larger than 2GB. If the underlying
filesystem or the MPI-IO library fails to handle file sizes larger than
2GB, the test will print informational messages stating the failure but
will not exit with a non-zero code. Failure to support file sizes
greater than 2GB is not a fatal error for HDF5 because HDF5 can use
other file drivers, such as families of files, to bypass the file size
limit.
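
The tests are normally run via "make check", but t_mpi can also be run
by hand. A sketch, assuming an MPICH-style mpirun (on an IBM SP you
would launch it with poe instead); the process count is arbitrary:

cd testpar
mpirun -np 4 ./t_mpi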
By default, the parallel tests use /tmp/$LOGIN as the test directory.
This can be overridden by the environment variable $HDF5_PARAPREFIX.
For example, if the tests should use the directory /PFS/user/me, do:
HDF5_PARAPREFIX=/PFS/user/me
export HDF5_PARAPREFIX
make check
(In some batch job systems, you may need to set HDF5_PARAPREFIX in
your shell startup files, such as .profile or .cshrc.)
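
For csh-type shells (e.g., in .cshrc), the equivalent setting is:

setenv HDF5_PARAPREFIX /PFS/user/me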