author    MuQun Yang <ymuqun@hdfgroup.org>  2005-03-11 22:11:05 (GMT)
committer MuQun Yang <ymuqun@hdfgroup.org>  2005-03-11 22:11:05 (GMT)
commit    74efc1e4f5c1d6a704f8f1a9058f1936556e5c0d (patch)
tree      2dec0be236542f282be8f902f421a050e0feb447
parent    941edeab91597b946eb1283640b3f3110d5fa996 (diff)
[svn-r10201] Purpose:
IBM's MPI-IO implementation has a bug in its handling of complex MPI derived datatypes, which HDF5 uses to support collective I/O. To avoid it, we provide a way to turn off collective I/O support on such platforms.
Description: A new macro, H5_MPI_COMPLEX_DERIVED_DATATYPE_WORKS, turns off irregular hyperslab collective I/O support on affected platforms.
Solution: Hard-code the affected platforms under hdf5/config. So far only IBM AIX has been found to have this problem.
Platforms tested: heping (Linux) and copper (AIX). Testing on tungsten is still in progress; building parallel HDF5 there takes almost an hour, so we cannot wait for it, but this feature was tested on tungsten a week ago.
Misc. update:
-rwxr-xr-x  configure     22
-rw-r--r--  configure.in  18
2 files changed, 39 insertions(+), 1 deletion(-)
diff --git a/configure b/configure
index 64e827f..e5111f4 100755
--- a/configure
+++ b/configure
@@ -47090,6 +47090,28 @@ echo "${ECHO_T}yes" >&6
echo "${ECHO_T}no" >&6
fi
+echo "$as_me:$LINENO: checking if irregular hyperslab optimization code works inside MPI-IO" >&5
+echo $ECHO_N "checking if irregular hyperslab optimization code works inside MPI-IO... $ECHO_C" >&6
+
+if test "${hdf5_mpi_complex_derived_datatype_works+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ hdf5_mpi_complex_derived_datatype_works=yes
+fi
+
+
+if test ${hdf5_mpi_complex_derived_datatype_works} = "yes"; then
+
+cat >>confdefs.h <<\_ACEOF
+#define MPI_COMPLEX_DERIVED_DATATYPE_WORKS 1
+_ACEOF
+
+ echo "$as_me:$LINENO: result: yes" >&5
+echo "${ECHO_T}yes" >&6
+else
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+fi
fi
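The cache-variable logic in the generated configure code above can be sketched as a small shell function: if `hdf5_mpi_complex_derived_datatype_works` is already set (e.g. preset by a platform file or exported by the user), that value is kept; otherwise it defaults to `yes`. The function name is made up for this sketch.

```shell
# Sketch (assumed name) of the configure check's caching behavior.
check_complex_derived_datatype() {
  if test "${hdf5_mpi_complex_derived_datatype_works+set}" = set; then
    # Variable was preset: honor the cached/hard-coded value.
    echo "(cached) ${hdf5_mpi_complex_derived_datatype_works}"
  else
    # No preset value: assume the platform's MPI-IO works.
    hdf5_mpi_complex_derived_datatype_works=yes
    echo "${hdf5_mpi_complex_derived_datatype_works}"
  fi
}

check_complex_derived_datatype        # default path: prints "yes"
hdf5_mpi_complex_derived_datatype_works=no
check_complex_derived_datatype        # preset path: prints "(cached) no"
```

Because only the "preset" branch can yield `no`, turning the workaround on requires setting the variable before the check runs, which is what the per-platform files under hdf5/config do.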
diff --git a/configure.in b/configure.in
index 94932fb..0f3b753 100644
--- a/configure.in
+++ b/configure.in
@@ -2292,7 +2292,23 @@ if test -n "$PARALLEL"; then
else
AC_MSG_RESULT([no])
fi
-
+
+dnl ----------------------------------------------------------------------
+dnl Check whether complicated MPI derived datatypes work.
+dnl As of now (Dec. 20th, 2004), we find that IBM's MPI-IO implementation
+dnl doesn't handle the displacement of complicated MPI derived datatypes
+dnl correctly, so we add the check here.
+AC_MSG_CHECKING([if irregular hyperslab optimization code works inside MPI-IO])
+
+AC_CACHE_VAL([hdf5_mpi_complex_derived_datatype_works],[hdf5_mpi_complex_derived_datatype_works=yes])
+
+if test ${hdf5_mpi_complex_derived_datatype_works} = "yes"; then
+ AC_DEFINE([MPI_COMPLEX_DERIVED_DATATYPE_WORKS], [1],
+ [Define if your system can handle complicated MPI derived datatype correctly.])
+ AC_MSG_RESULT([yes])
+else
+ AC_MSG_RESULT([no])
+fi
fi
dnl ----------------------------------------------------------------------
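The "hard code such platforms under hdf5/config" part of this change would take the form of a one-line preset in a per-platform configure fragment. The filename below is assumed for illustration (AIX platform files live under hdf5/config, but this exact name is not shown in this diff); the idiom assigns `no` only when the user has not already set the variable.

```shell
# Hypothetical fragment for an AIX platform file under hdf5/config/
# (filename assumed): preset the cache variable so the configure
# check above reports "no" and the workaround macro stays undefined.
hdf5_mpi_complex_derived_datatype_works=${hdf5_mpi_complex_derived_datatype_works='no'}
```

The `${var='no'}` form is the POSIX assign-default expansion, so a user can still override the platform default by exporting the variable before running configure.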