author    Albert Cheng <acheng@hdfgroup.org>  1999-09-30 17:15:06 (GMT)
committer Albert Cheng <acheng@hdfgroup.org>  1999-09-30 17:15:06 (GMT)
commit    5d44f23ac06c70143aa949e02ff150ec36fb3cef (patch)
tree      75531b3901ecf1a7ca60c161630f5ce34747b74a
parent    bdf0dbf7ed9a671cf915de65006ae5a5379c6941 (diff)
[svn-r1700] Changed names of files to better reflect their purposes.
INSTALL.ascired           becomes INSTALL_TFLOPS
INSTALL_parallel.ascired  becomes bin/config_para_tflops.sh
INSTALL.ibm.sp.parallel   becomes bin/config_para_ibm_sp.sh
-rw-r--r--  INSTALL_TFLOPS               107
-rw-r--r--  bin/config_para_ibm_sp.sh     88
-rw-r--r--  bin/config_para_tflops.sh     58
3 files changed, 253 insertions(+), 0 deletions(-)
diff --git a/INSTALL_TFLOPS b/INSTALL_TFLOPS
new file mode 100644
index 0000000..95e16c2
--- /dev/null
+++ b/INSTALL_TFLOPS
@@ -0,0 +1,107 @@
+
+FOR THE INTEL TFLOPS MACHINE:
+
+Below are the step-by-step procedures for building, testing, and
+installing both the sequential and parallel versions of the HDF5 library.
+
+---------------
+Sequential HDF5:
+---------------
+
+The sequential HDF5 library for the ASCI Red machine is built through a
+coordinated sequence of steps on sasn100 and janus.  Although janus can
+compile, it is better to build from sasn100, which has more complete
+build tools and runs faster; it is also considerate not to tie up janus
+with compilation.  The build does require janus for one step, which runs
+a program to determine the run-time characteristics of the TFLOPS machine.
+
+Assuming you have already unpacked the HDF5 tar-file into the
+directory <hdf5>, follow the steps below:
+
+FROM SASN100,
+
+1) cd <hdf5>
+
+2) ./configure tflop
+
+3) make H5detect
+
+
+FROM JANUS,
+
+4) cd <hdf5>
+
+5) make H5Tinit.c
+
+
+FROM SASN100,
+
+6) make
+
+
+When everything is finished compiling and linking,
+you can run the tests by
+FROM JANUS,
+
+7.1) Due to a bug, you must first remove the following line from
+ the file test/Makefile before the next step.
+ RUNTEST=$(LT_RUN)
+7.2) make check
+
+
+Once satisfied with the test results, you can install
+the software by
+FROM SASN100,
+
+8) make install
+
+
+---------------
+Parallel HDF5:
+---------------
+
+The setup process for building the parallel version of the HDF5 library
+for the ASCI Red machine is very similar to that for the sequential
+version.  Since TFLOPS does not support MPIO, we have prepared a shell
+script that configures HDF5 with an appropriate MPI library.
+
+Assuming you have already unpacked the HDF5 tar-file into the
+directory <hdf5>, follow the steps below:
+FROM SASN100,
+
+1) cd <hdf5>
+
+2) sh bin/config_para_tflops.sh      # this step differs from the sequential version
+
+3) make H5detect
+
+
+FROM JANUS,
+
+4) cd <hdf5>
+
+5) make H5Tinit.c
+
+
+FROM SASN100,
+
+6) make
+
+
+When everything is finished compiling and linking,
+FROM JANUS,
+
+7.1) Due to a bug, you must first remove the following line from
+ the file test/Makefile before the next step.
+ RUNTEST=$(LT_RUN)
+7.2) make check
+
+
+Once satisfied with the parallel test results, and provided you have the
+correct permissions,
+FROM SASN100,
+
+8) make install
+
+
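The alternating sasn100/janus sequence above can be summarized as a short plan. This is an illustration of the order of operations only, not a supported driver script; `<hdf5>` stays a placeholder for your unpacked source directory.

```shell
#! /bin/sh
# Illustration only: each build step and the host it runs on, as described
# in INSTALL_TFLOPS above.  "<hdf5>" is a placeholder, not a real path.
plan="sasn100: cd <hdf5>
sasn100: ./configure tflop      # parallel build: sh bin/config_para_tflops.sh
sasn100: make H5detect
janus:   make H5Tinit.c         # probes the machine's run-time characteristics
sasn100: make
janus:   make check             # first remove RUNTEST=\$(LT_RUN) from test/Makefile
sasn100: make install"
echo "$plan"
```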
diff --git a/bin/config_para_ibm_sp.sh b/bin/config_para_ibm_sp.sh
new file mode 100644
index 0000000..b269527
--- /dev/null
+++ b/bin/config_para_ibm_sp.sh
@@ -0,0 +1,88 @@
+# How to create a parallel version of HDF5 on an IBM SP system
+# that uses MPI and MPI-IO.
+
+# Unfortunately, the configure/make process to create the parallel version of
+# HDF5 has not yet been automated to the same extent that the sequential
+# version has.
+# Read the INSTALL file to understand the configure/make process for the
+# sequential (i.e., uniprocessor) version of HDF5.
+# The process for creating the parallel version of HDF5 using MPI-IO
+# is similar, but first you will have to set up some environment variables
+# with values specific to your local installation.
+# The relevant variables are shown below, with values that work for LLNL's
+# ASCI baby blue pacific SP as of the writing of these instructions (980210).
+
+# In addition to the environment variables, you _might_ also have to
+# create a new file in the config directory.
+# You will need to create this file only if the execution of the ./configure
+# program aborts with an error after printing the message
+# "checking whether byte ordering is bigendian..."
+#
+# If this is the case, create a new file in the config directory
+# whose name is of the form architecture-vendor-OSversion
+# (e.g., for baby blue pacific, this file is named powerpc-ibm-aix4.2.1.0)
+# and which contains the line
+# ac_cv_c_bigendian=${ac_cv_c_bigendian='yes'}
+# if the target architecture is bigendian, or
+# ac_cv_c_bigendian=${ac_cv_c_bigendian='no'}
+# otherwise.
+# Running the program ./bin/config.guess will print out the name
+# of the new file you must create.
+
+# Don't try to make a parallel version of HDF5 from the same hdf5 root
+# directory where you made a sequential version of HDF5 -- start with
+# a fresh copy.
+# Here are the flags you must set before running the ./configure program
+# to create the parallel version of HDF5.
+# (We use csh here, but of course you can adapt to whatever shell you like.)
+
+# compile for MPI jobs
+setenv CC "/usr/local/mpich-1.1.2+romio_lgfiles/bin/mpicc"
+
+#
+# next 4 for IBM mpi
+#
+#setenv CC /usr/lpp/ppe.poe/bin/mpcc_r
+
+#
+# for both
+#
+setenv MP_PROCS 1
+
+
+# These compiler flags work on ASCI baby blue pacific (IBM SP),
+# using IBM's MPI and Argonne's MPI-IO (ROMIO):
+# -DHAVE_FUNCTION compiler accepts __FUNCTION__ notation
+# -I/usr/local/mpio/include/ibm using ROMIO's MPI-IO header files
+#
+# The following flags are only needed when compiling/linking a user program
+# for execution.
+# -bI:/usr/include/piofs/piofs.exp this MPI-IO uses PIOFS file system
+# -L/usr/local/mpio/lib/ibm -lmpio link to this MPI-IO lib
+#
+#setenv CFLAGS "-D_LARGE_FILES $CFLAGS"
+
+# The configure/make process needs to run some programs, so a processor
+# pool must be specified.
+# Also, don't prepend the process id to the output of the programs
+# run by configure/make.
+setenv MP_RMPOOL 0
+setenv MP_LABELIO no
+
+# Once these variables are set to the proper values for your installation,
+# you can run the configure program (i.e., ./configure)
+# to set up the Makefiles, etc.
+# After configuring, run the make as described in the INSTALL file.
+# Once the configuration is complete, you can set any of your
+# environment variables to whatever you like.
+
+# The files in the config directory, such as
+# config/powerpc-ibm-aix4.2.1.0
+# config/powerpc-ibm-aix4.x
+# config/powerpc-ibm-aix4.3.2.0
+# sometimes need adjustments, depending on subtleties of the installation.
+
+
+# When compiling and linking your application, don't forget to compile with
+# mpcc and link to the MPI-IO library and the parallel version of the HDF5
+# library (that was created and installed with the configure/make process).
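The byte-ordering workaround described earlier in this script can be sketched as follows. The architecture string shown is an example only; normally you would take it from `./bin/config.guess`, and `'yes'` assumes a big-endian target.

```shell
#! /bin/sh
# Hypothetical sketch of pre-seeding the bigendian configure result.
# The arch string is an example; normally: arch=`./bin/config.guess`
# 'yes' assumes the target architecture is big-endian.
arch=powerpc-ibm-aix4.2.1.0
mkdir -p config
echo "ac_cv_c_bigendian=\${ac_cv_c_bigendian='yes'}" > "config/$arch"
cat "config/$arch"
```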
diff --git a/bin/config_para_tflops.sh b/bin/config_para_tflops.sh
new file mode 100644
index 0000000..7e94971
--- /dev/null
+++ b/bin/config_para_tflops.sh
@@ -0,0 +1,58 @@
+#! /bin/sh
+# How to configure a parallel version of HDF5 on the Sandia National Laboratory
+# TFLOPS System that uses MPI and MPI-IO.
+
+# Read the INSTALL_TFLOPS file to understand the configure/make process
+# for the sequential (i.e., uniprocessor) version of HDF5.
+# The process for creating the parallel version of HDF5 using MPI-IO
+# is similar, but first you will have to set up some environment variables
+# with values specific to your local installation.
+# The relevant variables are shown below, with values that work for Sandia's
+# ASCI Red TFLOPS machine as of the writing of these instructions (980421).
+
+# Don't try to make a parallel version of HDF5 from the same hdf5 root
+# directory where you made a sequential version of HDF5 -- start with
+# a fresh copy.
+# Here are the flags you must set before running the ./configure program
+# to create the parallel version of HDF5.
+# (We use sh here, but of course you can adapt to whatever shell you like.)
+
+# compile for MPI jobs
+#CC=cicc
+
+# The following flags are only needed when compiling/linking a user program
+# for execution.
+#
+
+# Using the MPICH library by Daniel Sands.
+# It contains both MPI-1 and MPI-IO functions.
+ROMIO="${HOME}/MPIO/mpich"
+if [ ! -d "$ROMIO" ]
+then
+ echo "ROMIO directory ($ROMIO) not found"
+ echo "Aborted"
+ exit 1
+fi
+mpi1_inc=""
+mpi1_lib=""
+mpio_inc="-I$ROMIO/include"
+mpio_lib="-L$ROMIO/lib"
+
+
+MPI_INC="$mpi1_inc $mpio_inc"
+MPI_LIB="$mpi1_lib $mpio_lib"
+
+
+# Once these variables are set to the proper values for your installation,
+# you can run the configure program (i.e., ./configure tflop --enable-parallel=mpio)
+# to set up the Makefiles, etc.
+# After configuring, run the make as described in the INSTALL file.
+
+# When compiling and linking your application, don't forget to compile with
+# cicc and link to the MPI-IO library and the parallel version of the HDF5
+# library (that was created and installed with the configure/make process).
+
+CPPFLAGS=$MPI_INC \
+LDFLAGS=$MPI_LIB \
+LIBS="-lmpich" \
+./configure --enable-parallel tflop "$@"
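A user compile/link line of the kind the closing comments describe might look like the sketch below. The HDF5 install prefix and program name are assumptions (only cicc and the ROMIO layout come from the script above), so the command is assembled and shown rather than executed.

```shell
#! /bin/sh
# Hypothetical compile/link line for a user program, assembled from the
# flags above.  HDF5_PREFIX and myprog.c are assumptions; cicc is the
# TFLOPS MPI compiler mentioned in the text.
ROMIO="${HOME}/MPIO/mpich"
HDF5_PREFIX="/usr/local/hdf5"      # hypothetical install prefix
cmd="cicc -I$ROMIO/include -I$HDF5_PREFIX/include myprog.c \
-L$ROMIO/lib -L$HDF5_PREFIX/lib -lhdf5 -lmpich -o myprog"
echo "$cmd"                        # shown, not executed
```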