author	MuQun Yang <ymuqun@hdfgroup.org>	2005-03-11 22:11:05 (GMT)
committer	MuQun Yang <ymuqun@hdfgroup.org>	2005-03-11 22:11:05 (GMT)
commit	74efc1e4f5c1d6a704f8f1a9058f1936556e5c0d (patch)
tree	2dec0be236542f282be8f902f421a050e0feb447 /configure.in
parent	941edeab91597b946eb1283640b3f3110d5fa996 (diff)
[svn-r10201] Purpose:
    IBM's MPI-IO implementation has a bug with complicated MPI derived datatypes, which are used to support collective I/O. To avoid it, we provide a way to turn off collective I/O support on such platforms.
Description:
    A new macro, H5_MPI_COMPLEX_DERIVED_DATATYPE_WORKS, is used to turn off irregular hyperslab collective I/O support on the affected platforms.
Solution:
    Hard-code the affected platforms under hdf5/config. So far only IBM AIX has been found to have this problem.
Platforms tested:
    heping (Linux) and copper (AIX). Testing on tungsten is still in progress; building parallel HDF5 there takes almost an hour, so we cannot wait for it to finish. This feature was tested on tungsten a week ago.
Misc. update:
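The "hard code such platforms under hdf5/config" step described above could look like the following sketch. The file name is an assumption (the commit does not show which config file was touched); the variable name `hdf5_mpi_complex_derived_datatype_works` is the cache variable introduced in the diff below.

```shell
# Hypothetical platform file, e.g. hdf5/config/powerpc-ibm-aix* (name assumed).
# Pre-seeding the configure cache variable means the AC_CACHE_VAL default of
# "yes" in configure.in never fires on this platform, so the workaround macro
# is left undefined and irregular hyperslab collective I/O is disabled.
hdf5_mpi_complex_derived_datatype_works=${hdf5_mpi_complex_derived_datatype_works:-no}
```

On platforms without a hard-coded setting, the variable stays unset and the `AC_CACHE_VAL` default of `yes` applies.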
Diffstat (limited to 'configure.in')
-rw-r--r--	configure.in	18
1 file changed, 17 insertions(+), 1 deletion(-)
diff --git a/configure.in b/configure.in
index 94932fb..0f3b753 100644
--- a/configure.in
+++ b/configure.in
@@ -2292,7 +2292,23 @@ if test -n "$PARALLEL"; then
else
AC_MSG_RESULT([no])
fi
-
+
+dnl ----------------------------------------------------------------------
+dnl Check whether complicated MPI derived datatypes work.
+dnl As of now (Dec. 20th, 2004), we find that IBM's MPI-IO implementation
+dnl does not handle the displacements of complicated MPI derived datatypes
+dnl correctly, so we add this check here.
+AC_MSG_CHECKING([if irregular hyperslab optimization code works inside MPI-IO])
+
+AC_CACHE_VAL([hdf5_mpi_complex_derived_datatype_works],[hdf5_mpi_complex_derived_datatype_works=yes])
+
+if test ${hdf5_mpi_complex_derived_datatype_works} = "yes"; then
+ AC_DEFINE([MPI_COMPLEX_DERIVED_DATATYPE_WORKS], [1],
+ [Define if your system can handle complicated MPI derived datatypes correctly.])
+ AC_MSG_RESULT([yes])
+else
+ AC_MSG_RESULT([no])
+fi
fi
dnl ----------------------------------------------------------------------