Installation instructions for Parallel HDF5
-------------------------------------------
1. Overview
-----------
This file contains instructions for the installation of parallel HDF5 (PHDF5).
PHDF5 requires an MPI compiler with MPI-IO support and a parallel file system.
If you are not sure whether your system provides these, first consult your
system support staff for information on how to compile an MPI program, how to
run an MPI application, and how to access the parallel file system. There are
sample MPI-IO C and Fortran programs in the "Sample programs" section. You can
use them to run simple tests of your MPI compilers and the parallel file system.
If you still have difficulties installing PHDF5 on your system, please
send mail to
hdfhelp@ncsa.uiuc.edu
In your mail, please include the output of "uname -a". If you have run
the "configure" command, attach its output and the contents of the file
"config.log".
2. Quick instructions for known systems
---------------------------------------
The following shows the steps needed to run the parallel HDF5 configure
for a few machines we have tested. If your platform is not shown, or the
steps do not work for you, go to the next section for more detailed
explanations.
------
Known parallel compilers
------
HDF5 knows several parallel compilers: mpicc, hcc, mpcc, mpcc_r.
To build parallel HDF5 with one of the above, just set CC to it and run
configure. The "--enable-parallel" option is optional in this case.
$ CC=/usr/local/mpi/bin/mpicc ./configure --prefix=<install-directory>
$ make
$ make check
$ make install
------
TFLOPS
------
Follow the instructions in INSTALL_TFLOPS.
-------
IBM SP
-------
First of all, make sure your environment variables are set correctly
to compile and execute a single-process MPI application on the SP
machine. Unfortunately, the settings vary from machine to machine.
E.g., the following works for the Blue machine at LLNL.
setenv MP_PROCS 1
setenv MP_NODES 1
setenv MP_LABELIO no
setenv MP_RMPOOL 0
setenv LLNL_COMPILE_SINGLE_THREADED TRUE # for LLNL site only
The shared library configuration for this version is broken, so only the
static library is supported.
Then do the following steps:
$ ./configure --disable-shared --prefix=<install-directory>
$ make # build the library
$ make check # verify the correctness
$ make install
We also suggest that you add "-qxlf90=autodealloc" to FFLAGS when building
parallel HDF5 with Fortran enabled. This can be done by invoking:
setenv FFLAGS -qxlf90=autodealloc # 32 bit build
or
setenv FFLAGS "-q64 -qxlf90=autodealloc" # 64 bit build
prior to running configure. Recall that "-q64" is necessary for 64-bit
builds.
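For example, a 64-bit parallel build with Fortran enabled might then look
like the following sketch (the "--enable-fortran" option and the install
directory are illustrative; adjust them for your site):
setenv FFLAGS "-q64 -qxlf90=autodealloc"
$ ./configure --disable-shared --enable-fortran --prefix=<install-directory>
$ make
$ make check
$ make install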
---------------
SGI Origin 2000
Cray T3E
(where MPI-IO is part of system MPI library such as the mpt module)
---------------
#!/bin/sh
RUNPARALLEL="mpirun -np 3"
export RUNPARALLEL
LIBS="-lmpi"
export LIBS
./configure --enable-parallel --prefix=$PWD/installdir
make
make check
make install
***Known problem***
Some O2K systems may encounter an error during make:
ld32: FATAL 9: I/O error (-lmpi): No such file or directory
This is because libtool tries too hard to locate the loader 'ld'
but ends up with one that does not know where to find the right
version of libmpi.a for the particular ABI requested.
The fix is to edit the file 'libtool' at the top of the build directory.
Search for a string that looks like the following:
LD="/opt/MIPSpro/MIPSpro_default/opt/MIPSpro/bin/ld -n32"
Replace it with something that knows how to find the right libmpi.a.
E.g.,
LD="/opt/MIPSpro/MIPSpro_default/opt/MIPSpro/bin/cc -n32"
Or you can pre-empt it by setting LD at configure time:
$ LD="cc" ./configure --enable-parallel ...
---------------
SGI Origin 2000
Cray T3E
(where MPI-IO is not part of the system MPI library, or you want to use
your own version of MPI-IO)
---------------
mpi1_inc="" #mpi-1 include
mpi1_lib="" #mpi-1 library
mpio_inc=-I$HOME/ROMIO/include #mpio include
mpio_lib="-L$HOME/ROMIO/lib/IRIX64" #mpio library
MPI_INC="$mpio_inc $mpi1_inc"
MPI_LIB="$mpio_lib $mpi1_lib"
#for version 1.1
CPPFLAGS=$MPI_INC
export CPPFLAGS
LDFLAGS=$MPI_LIB
export LDFLAGS
RUNPARALLEL="mpirun -np 3"
export RUNPARALLEL
LIBS="-lmpio -lmpi"
export LIBS
./configure --enable-parallel --prefix=$PWD/installdir
make
make check
make install
---------------------
Linux 2.4 and greater
---------------------
Be sure that your installation of MPICH was configured with the following
configuration command-line option:
-cflags="-D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64"
This allows for files larger than 2GB on Linux systems and is only
available with Linux kernels 2.4 and greater.
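For reference, a typical MPICH 1.x build including this option might look
like the following sketch (the device name and install prefix are
illustrative; use your site's values):
$ ./configure -device=ch_p4 -prefix=/usr/local/mpi \
      -cflags="-D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64"
$ make
$ make install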
------------------
HP V2500 and N4000
------------------
Follow the instructions in section 3.
3. Detailed explanation
-----------------------
The HDF5 library can be configured to use MPI and MPI-IO for parallelism
on a distributed multi-processor system. The easiest way to do this is to
have a properly installed parallel compiler (e.g., MPICH's mpicc or IBM's
mpcc_r) and supply that executable as the value of the CC environment
variable. For example:
$ CC=mpcc_r ./configure
$ CC=/usr/local/mpi/bin/mpicc ./configure
If no such wrapper script is available then you must specify your normal
C compiler along with the distribution of MPI/MPI-IO which is to be used
(values other than `mpich' will be added at a later date):
$ ./configure --enable-parallel=mpich
If the MPI/MPI-IO include files and/or libraries cannot be found by the
compiler then their directories must be given as arguments to CPPFLAGS
and/or LDFLAGS:
$ CPPFLAGS=-I/usr/local/mpi/include \
LDFLAGS=-L/usr/local/mpi/lib/LINUX/ch_p4 \
./configure --enable-parallel=mpich
If a parallel library is being built then configure attempts to determine
how to run a parallel application on one processor and on many
processors. If the compiler is `mpicc' and the user hasn't specified
values for RUNSERIAL and RUNPARALLEL then configure chooses `mpirun' from
the same directory as `mpicc':
RUNSERIAL: /usr/local/mpi/bin/mpirun -np 1
RUNPARALLEL: /usr/local/mpi/bin/mpirun -np $${NPROCS:=3}
The `$${NPROCS:=3}' will be substituted with the value of the NPROCS
environment variable at the time `make check' is run (or with the value 3
if NPROCS is not set).
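For example, to run the parallel tests with 6 processes instead of the
default 3, set NPROCS when running the tests, or override RUNPARALLEL at
configure time (the mpirun path below is illustrative):
$ NPROCS=6 make check
or
$ RUNPARALLEL="/usr/local/mpi/bin/mpirun -np 6" ./configure --enable-parallel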
4. Parallel tests
-----------------
The testpar/ directory contains tests for Parallel HDF5 and MPI-IO.
The t_mpi program tests the basic functionality of some MPI-IO features
used by Parallel HDF5. It usually exits with a non-zero code if a required
MPI-IO feature does not work as expected. One exception is the test for
accessing files larger than 2GB. If the underlying file system or the
MPI-IO library fails to handle file sizes larger than 2GB, the test prints
informational messages stating the failure but does not exit with a
non-zero code. Failure to support files larger than 2GB is not a fatal
error for HDF5 because HDF5 can use other file drivers, such as the family
of files driver, to bypass the file size limit.
By default, the parallel tests use the current directory as the test directory.
This can be changed by the environment variable $HDF5_PARAPREFIX.
For example, if the tests should use directory /PFS/user/me, do
HDF5_PARAPREFIX=/PFS/user/me
export HDF5_PARAPREFIX
make check
(On some batch job systems, you may need to hard-set HDF5_PARAPREFIX in
shell initialization files such as .profile or .cshrc.)
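For example, the corresponding lines in the shell initialization files
might be:
HDF5_PARAPREFIX=/PFS/user/me; export HDF5_PARAPREFIX    # sh/ksh .profile
setenv HDF5_PARAPREFIX /PFS/user/me                     # csh .cshrc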
5. Sample programs
------------------
Here are sample MPI-IO C and Fortran programs. You may use them to run simple
tests of your MPI compilers and the parallel file system. The MPI commands
used here are mpicc, mpif90, and mpirun. Replace them with the equivalent
commands on your system.
The programs assume they run in the parallel file system. Thus they create
the test data file in the current directory. If the parallel file system
is somewhere else, you need to run the sample programs there or edit the
programs to use a different file name.
Example compiling and running:
% mpicc Sample_mpio.c -o c.out
% mpirun -np 4 c.out
% mpif90 Sample_mpio.f90 -o f.out
% mpirun -np 4 f.out
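The C sample also accepts a data file name as its first argument, so you
can point it at the parallel file system directly. E.g. (the path is
illustrative):
% mpirun -np 4 c.out /PFS/user/me/mpitest.data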
==> Sample_mpio.c <==
/* Simple MPI-IO program testing if a parallel file can be created.
* Default filename can be specified via first program argument.
* Each process writes something, then reads all data back.
*/
#include <stdio.h>
#include <unistd.h>	/* for gethostname() */
#include <mpi.h>
#ifndef MPI_FILE_NULL	/* MPIO may be defined in mpi.h already */
#   include <mpio.h>
#endif
#define DIMSIZE 10 /* dimension size, avoid powers of 2. */
#define PRINTID printf("Proc %d: ", mpi_rank)
int main(int ac, char **av)
{
char hostname[128];
int mpi_size, mpi_rank;
MPI_File fh;
char *filename = "./mpitest.data";
char mpi_err_str[MPI_MAX_ERROR_STRING];
int mpi_err_strlen;
int mpi_err;
char writedata[DIMSIZE], readdata[DIMSIZE];
char expect_val;
int i, irank;
int nerrors = 0; /* number of errors */
MPI_Offset mpi_off;
MPI_Status mpi_stat;
MPI_Init(&ac, &av);
MPI_Comm_size(MPI_COMM_WORLD, &mpi_size);
MPI_Comm_rank(MPI_COMM_WORLD, &mpi_rank);
/* get file name if provided */
if (ac > 1){
filename = *++av;
}
if (mpi_rank==0){
printf("Testing simple MPIO program with %d processes accessing file %s\n",
mpi_size, filename);
printf(" (Filename can be specified via program argument)\n");
}
/* show the hostname so that we can tell where the processes are running */
if (gethostname(hostname, 128) < 0){
PRINTID;
printf("gethostname failed\n");
return 1;
}
PRINTID;
printf("hostname=%s\n", hostname);
if ((mpi_err = MPI_File_open(MPI_COMM_WORLD, filename,
MPI_MODE_RDWR | MPI_MODE_CREATE | MPI_MODE_DELETE_ON_CLOSE,
MPI_INFO_NULL, &fh))
!= MPI_SUCCESS){
MPI_Error_string(mpi_err, mpi_err_str, &mpi_err_strlen);
PRINTID;
printf("MPI_File_open failed (%s)\n", mpi_err_str);
return 1;
}
/* each process writes some data */
for (i=0; i < DIMSIZE; i++)
writedata[i] = mpi_rank*DIMSIZE + i;
mpi_off = mpi_rank*DIMSIZE;
if ((mpi_err = MPI_File_write_at(fh, mpi_off, writedata, DIMSIZE, MPI_BYTE,
&mpi_stat))
!= MPI_SUCCESS){
MPI_Error_string(mpi_err, mpi_err_str, &mpi_err_strlen);
PRINTID;
printf("MPI_File_write_at offset(%ld), bytes (%d), failed (%s)\n",
(long) mpi_off, (int) DIMSIZE, mpi_err_str);
return 1;
}
/* make sure all processes have finished writing. */
MPI_Barrier(MPI_COMM_WORLD);
/* each process reads all data back and verifies it. */
for (irank=0; irank < mpi_size; irank++){
mpi_off = irank*DIMSIZE;
if ((mpi_err = MPI_File_read_at(fh, mpi_off, readdata, DIMSIZE, MPI_BYTE,
&mpi_stat))
!= MPI_SUCCESS){
MPI_Error_string(mpi_err, mpi_err_str, &mpi_err_strlen);
PRINTID;
printf("MPI_File_read_at offset(%ld), bytes (%d), failed (%s)\n",
(long) mpi_off, (int) DIMSIZE, mpi_err_str);
return 1;
}
for (i=0; i < DIMSIZE; i++){
expect_val = irank*DIMSIZE + i;
if (readdata[i] != expect_val){
PRINTID;
printf("read data[%d:%d] got %d, expect %d\n", irank, i,
readdata[i], expect_val);
nerrors++;
}
}
}
MPI_File_close(&fh);
if (nerrors)
return 1;
PRINTID;
printf("all tests passed\n");
MPI_Finalize();
return 0;
}
==> Sample_mpio.f90 <==
!
! The following example demonstrates how to create and close a parallel
! file using MPI-IO calls.
!
! USE MPI is the proper way to bring in MPI definitions, but many
! MPI Fortran compilers support the pseudo-standard INCLUDE statement.
! So, HDF5 uses the INCLUDE statement instead.
!
PROGRAM MPIOEXAMPLE
! USE MPI
IMPLICIT NONE
INCLUDE 'mpif.h'
CHARACTER(LEN=80), PARAMETER :: filename = "filef.h5" ! File name
INTEGER :: ierror ! Error flag
INTEGER :: fh ! File handle
INTEGER :: amode ! File access mode
call MPI_INIT(ierror)
amode = MPI_MODE_RDWR + MPI_MODE_CREATE + MPI_MODE_DELETE_ON_CLOSE
print *, "Trying to create ", filename
call MPI_FILE_OPEN(MPI_COMM_WORLD, filename, amode, MPI_INFO_NULL, fh, ierror)
if ( ierror .eq. MPI_SUCCESS ) then
print *, "MPI_FILE_OPEN succeeded"
call MPI_FILE_CLOSE(fh, ierror)
else
print *, "MPI_FILE_OPEN failed"
endif
call MPI_FINALIZE(ierror)
END PROGRAM