author    MuQun Yang <ymuqun@hdfgroup.org>  2006-02-16 16:45:42 (GMT)
committer MuQun Yang <ymuqun@hdfgroup.org>  2006-02-16 16:45:42 (GMT)
commit    bac54105f6ac50036b0075768bc5c9d3f7c65063 (patch)
tree      879ac3ea2f25afc51d08ce6103b95143ca0ecfb6
parent    811131397c1c1c124f6b23bb8b3382f55c05c9b5 (diff)
[svn-r11939] Purpose:
    Support for collective chunk IO inside parallel HDF5.
Description:
    Added a macro, hdf5_mpi_special_collective_io_works, to filter out some
    MPI-IO packages that don't support collective IO when some processes
    make no IO contribution. A sketch of the failing pattern follows below.
Solution:
    Use AC_CACHE_VAL to do the job.
Platforms tested:
    Parallel: IBM AIX 5.2 (copper)
              Linux (heping) mpich-1.2.6
              SDSC Teragrid mpich-1.2.5
              Linux (Tungsten) mpich-1.2.6
              Altix (NCSA cobalt)
    Seq:      Linux (heping)
Misc. update:
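For context, the "special collective IO" named by the new macro is a collective MPI-IO transfer in which one or more processes contribute zero elements. Below is a minimal sketch of that pattern, assuming plain MPI-2 IO calls; it is illustration only, not the commit's test code, and the file name, counts, and offsets are made up. On the affected mpich 1.2.x and SGI Altix systems, a call like this can hang instead of completing.

/* Sketch: collective write where rank 0 contributes no data.
 * Illustrative only; "probe.dat" and the counts are made-up values. */
#include <mpi.h>

int main(int argc, char *argv[])
{
    int        rank, buf[4] = {0, 1, 2, 3};
    MPI_File   fh;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_File_open(MPI_COMM_WORLD, "probe.dat",
                  MPI_MODE_CREATE | MPI_MODE_RDWR, MPI_INFO_NULL, &fh);

    /* All ranks must enter the collective call, but rank 0 passes
     * count 0; a broken MPI-IO package may deadlock here. */
    MPI_File_write_at_all(fh, (MPI_Offset)rank * 4 * (MPI_Offset)sizeof(int),
                          buf, rank == 0 ? 0 : 4, MPI_INT, &status);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}

Because every rank must enter a collective call, the only portable workaround on a broken implementation is to fall back to independent IO, which is what the new macro enables.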
-rwxr-xr-x  configure     23
-rw-r--r--  configure.in  24
2 files changed, 46 insertions, 1 deletion
diff --git a/configure b/configure
index f265f99..86baeee 100755
--- a/configure
+++ b/configure
@@ -50023,6 +50023,29 @@ else
echo "$as_me:$LINENO: result: no" >&5
echo "${ECHO_T}no" >&6
fi
+
+echo "$as_me:$LINENO: checking if MPI-IO can do collective IO when one or more processes don't do IOs" >&5
+echo $ECHO_N "checking if MPI-IO can do collective IO when one or more processes don't do IOs... $ECHO_C" >&6
+
+if test "${hdf5_mpi_special_collective_io_works+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ hdf5_mpi_special_collective_io_works=yes
+fi
+
+
+if test ${hdf5_mpi_special_collective_io_works} = "yes"; then
+
+cat >>confdefs.h <<\_ACEOF
+#define MPI_SPECIAL_COLLECTIVE_IO_WORKS 1
+_ACEOF
+
+ echo "$as_me:$LINENO: result: yes" >&5
+echo "${ECHO_T}yes" >&6
+else
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+fi
fi
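A note on the generated check above: AC_CACHE_VAL with an unconditional "yes" default means no conftest program is compiled or run; the value is simply cached. A site where this pattern hangs can therefore disable the feature by presetting the cache variable before running configure, for example:

    hdf5_mpi_special_collective_io_works=no ./configure ...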
diff --git a/configure.in b/configure.in
index 4a02907..b286a65 100644
--- a/configure.in
+++ b/configure.in
@@ -2383,9 +2383,13 @@ if test -n "$PARALLEL"; then
dnl ----------------------------------------------------------------------
dnl Check to see whether the complicated MPI derived datatype works.
-dnl Up to now(Dec. 20th, 2004), we find that IBM's MPIO implemention doesn't
+dnl On Dec. 20th, 2004, we found that IBM's MPI-IO implementation didn't
dnl handle the displacement of the complicated MPI derived datatype
dnl correctly. So we add the check here.
+dnl IBM fixed this bug in a newer version of their MPI-IO implementation around spring 2005.
+dnl We found that mpich 1.2.5 has a similar bug. The same
+dnl bug also occurs on SGI IRIX 6.5 with C compiler versions 7.3 or lower.
+dnl In case people still use the old compilers, we keep this flag.
AC_MSG_CHECKING([if irregular hyperslab optimization code works inside MPI-IO])
AC_CACHE_VAL([hdf5_mpi_complex_derived_datatype_works],[hdf5_mpi_complex_derived_datatype_works=yes])
@@ -2397,6 +2401,24 @@ if test ${hdf5_mpi_complex_derived_datatype_works} = "yes"; then
else
AC_MSG_RESULT([no])
fi
+
+dnl ----------------------------------------------------------------------
+dnl Check to see whether MPI-IO can do collective IO successfully when one or more processes don't do
+dnl any IO.
+dnl As of now (Feb. 8th, 2006), we find that this causes programs to hang with mpich 1.2.x
+dnl and on SGI Altix. For those systems, we have to turn off this feature and use independent IO instead.
+dnl
+AC_MSG_CHECKING([if MPI-IO can do collective IO when one or more processes don't do any IO])
+
+AC_CACHE_VAL([hdf5_mpi_special_collective_io_works],[hdf5_mpi_special_collective_io_works=yes])
+
+if test ${hdf5_mpi_special_collective_io_works} = "yes"; then
+ AC_DEFINE([MPI_SPECIAL_COLLECTIVE_IO_WORKS], [1],
+ [Define if your system can handle special collective IO properly.])
+ AC_MSG_RESULT([yes])
+else
+ AC_MSG_RESULT([no])
+fi
fi
dnl ----------------------------------------------------------------------
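Downstream, the new MPI_SPECIAL_COLLECTIVE_IO_WORKS define lets IO code choose between collective and independent transfers. The following is a hypothetical sketch of such a consumer, not code from this commit; write_chunk and its parameters are invented names, not HDF5 internals.

#include <mpi.h>

/* Hypothetical consumer of the new define: stay collective only when
 * the MPI-IO layer tolerates zero-contribution ranks. */
static int write_chunk(MPI_File fh, MPI_Offset off, void *buf,
                       int count, MPI_Datatype type)
{
    MPI_Status status;

#ifdef MPI_SPECIAL_COLLECTIVE_IO_WORKS
    /* Every rank enters the collective call, even with count == 0. */
    return MPI_File_write_at_all(fh, off, buf, count, type, &status);
#else
    /* Broken packages hang on zero-contribution collectives, so fall
     * back to independent IO and let empty ranks skip the call. */
    if (count == 0)
        return MPI_SUCCESS;
    return MPI_File_write_at(fh, off, buf, count, type, &status);
#endif
}

The design choice mirrors the comment in configure.in: when the MPI-IO package cannot handle ranks that contribute no IO, the safe route is independent IO, where a rank with nothing to write simply skips the call.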