author     jhendersonHDF <jhenderson@hdfgroup.org>  2023-10-25 00:36:18 (GMT)
committer  GitHub <noreply@github.com>              2023-10-25 00:36:18 (GMT)
commit     a91be87f072a8020d9a467f1ef81132cb5c40149 (patch)
tree       e5a3ef49a818d07f5649f9aeaa718047efaef658
parent     097fd51481a7d5eaf8f94436cf21e08ac516d6ba (diff)
Sync with develop (#3764)
* Add missing test files to distclean target (#3734)
  Cleans up new files in Autotools `make distclean` in the test directory.
* Add tools/libtest to Autotools builds (#3735)
  This was only added to CMake many years ago and tests the tools library.
* Clean up onion VFD files in tools `make clean` (#3739)
  Cleans up h5dump and h5diff *.onion files in the Autotools when running `make clean`.
* Clean Java test files on Autotools (#3740)
  Removes generated HDF5 and text output files when running `make clean`.
* Clean the flushrefresh test dir on Autotools (#3741)
  The flushrefresh_test directory was not being cleaned up with `make clean` under the Autotools.
* Fix file names in tfile.c (#3743)
  Some tests in tfile.c use h5_fileaccess to get a VFD-dependent file name but use the naming
  scheme from testhdf5, reusing the FILE1 and FILE8 names. This leads to unintended files like
  test1.h5.h5 that are not cleaned up. The filename scheme for a few tests now works with h5test,
  resulting in more informative names and allowing the files to be cleaned up at the end of the
  test. The test files have also been added to the `make clean` target for the Autotools.
* Clean Autotools files in parallel tests (#3744)
  Adds missing files to `make clean` for parallel builds, including Fortran.
* Add native VOL checks to deprecated functions (#3647)
  - Remove unneeded native VOL checks
  - Move native checks to top-level calls
* Fix buffer overflow in cache debugging code (#3691)
* Update stat argument for Apple (#3726)
  - Use H5_HAVE_DARWIN for the Apple ifdef
  - Removed duplicate H5_ih_info_t
  - Added Fortran async test to CMake
* Fix Windows cpack error in the WiX package (#3747)
* Add a simple cache to the ros3 VFD (#3753)
  Adds a small cache of the first N bytes of a file opened with the read-only S3 (ros3) VFD,
  where N is 4 KiB or the size of the file, whichever is smaller. This avoids a lot of small
  I/O operations on file open. Addresses GitHub issue #3381.
* Update Autotools to correctly configure oneAPI (#3751)
  Splits the Intel config files under the Autotools into 'classic' Intel and oneAPI versions,
  fixing 'unsupported option' messages. Also turns off `-check uninit` (new in 2023) in Fortran,
  which kills the H5_buildiface program due to false positives.
  - Enable Fortran in the oneAPI CI workflow
  - Turn on Fortran in CMake; update LD_LIBRARY_PATH
  - Go back to disabling Fortran with Intel: for some reason there is a linking problem
    ("error while loading shared libraries: libifport.so.5: cannot open shared object file:
    No such file or directory")
* Add h5pget_actual_selection_io_mode Fortran wrapper (#3746)
  - Added tests for h5pget_actual_selection_io_mode_f
  - Fixed int_f type conversion
* Update Fortran action step (#3748)
* Added missing DLL entry for H5PGET_ACTUAL_SELECTION_IO_MODE_F (#3760)
* Bump the ros3 VFD cache to 16 MiB (#3759)
* Fix hangs during collective I/O with independent metadata writes (#3693)
* Fix some issues with collective metadata reads for chunked datasets (#3716)
  - Add functions/callbacks for explicit control over chunk index open/close
  - Add functions/callbacks to check whether the chunk index is open, so that it can be opened
    if necessary before temporarily disabling collective metadata reads in the library
  - Add functions/callbacks for requesting loading of additional chunk index metadata beyond
    the chunk index itself
* Fix failure in t_select_io_dset when run with more than 10 ranks (#3758)
* Fix H5Pset_evict_on_close failing regardless of actual parallel use (#3761)
  Allow H5Pset_evict_on_close to be called regardless of whether a parallel build of HDF5 is
  being used. Fail during file opens if H5Pset_evict_on_close has been set to true on the given
  file access property list and the size of the MPI communicator being used is greater than 1.
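
The Fortran wrapper added in #3746 mirrors the existing C routine H5Pget_actual_selection_io_mode(), which reports the I/O path(s) the library actually used for the last raw-data transfer. A minimal C sketch, not part of this commit (the helper name is illustrative; the dataset, dataspaces, and buffer are assumed to exist elsewhere):

    #include <stdio.h>
    #include "hdf5.h"

    /* Sketch: after a raw-data write, ask the library which I/O path it
     * actually used. dset_id/mem_space/file_space/buf are assumed to exist. */
    static void
    report_selection_io_mode(hid_t dset_id, hid_t mem_space, hid_t file_space, const int *buf)
    {
        hid_t    dxpl_id        = H5Pcreate(H5P_DATASET_XFER);
        uint32_t actual_io_mode = 0;

        H5Dwrite(dset_id, H5T_NATIVE_INT, mem_space, file_space, dxpl_id, buf);

        /* Only meaningful after an I/O call that used this transfer property list */
        H5Pget_actual_selection_io_mode(dxpl_id, &actual_io_mode);

        if (actual_io_mode & H5D_SELECTION_IO)
            printf("selection I/O was performed\n");
        if (actual_io_mode & H5D_VECTOR_IO)
            printf("vector I/O was performed\n");
        if (actual_io_mode & H5D_SCALAR_IO)
            printf("scalar (element-by-element) I/O was performed\n");

        H5Pclose(dxpl_id);
    }

The Fortran equivalent, h5pget_actual_selection_io_mode_f, returns the same bit flags through the H5D_SCALAR_IO_F, H5D_VECTOR_IO_F, and H5D_SELECTION_IO_F constants added in this commit.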
-rw-r--r--  config/apple | 42
-rw-r--r--  config/freebsd | 16
-rw-r--r--  config/linux-gnulibc1 | 33
-rw-r--r--  config/netbsd | 13
-rw-r--r--  config/oneapi-cxxflags | 155
-rw-r--r--  config/oneapi-fflags | 145
-rw-r--r--  config/oneapi-flags | 151
-rw-r--r--  fortran/src/H5Fff.F90 | 6
-rw-r--r--  fortran/src/H5Off.F90 | 6
-rw-r--r--  fortran/src/H5Pff.F90 | 38
-rw-r--r--  fortran/src/H5_f.c | 4
-rw-r--r--  fortran/src/H5_ff.F90 | 5
-rw-r--r--  fortran/src/H5config_f.inc.cmake | 8
-rw-r--r--  fortran/src/H5config_f.inc.in | 3
-rw-r--r--  fortran/src/H5f90global.F90 | 16
-rw-r--r--  fortran/src/hdf5_fortrandll.def.in | 1
-rw-r--r--  fortran/test/tH5P.F90 | 5
-rw-r--r--  fortran/testpar/CMakeTests.cmake | 1
-rw-r--r--  fortran/testpar/Makefile.am | 2
-rw-r--r--  fortran/testpar/hyper.F90 | 15
-rw-r--r--  fortran/testpar/subfiling.F90 | 11
-rw-r--r--  hl/test/Makefile.am | 2
-rw-r--r--  java/test/Makefile.am | 3
-rw-r--r--  release_docs/RELEASE.txt | 81
-rw-r--r--  src/H5Cmpio.c | 142
-rw-r--r--  src/H5Dbtree.c | 442
-rw-r--r--  src/H5Dbtree2.c | 467
-rw-r--r--  src/H5Dchunk.c | 63
-rw-r--r--  src/H5Dearray.c | 271
-rw-r--r--  src/H5Dfarray.c | 234
-rw-r--r--  src/H5Dmpio.c | 40
-rw-r--r--  src/H5Dnone.c | 170
-rw-r--r--  src/H5Dpkg.h | 21
-rw-r--r--  src/H5Dsingle.c | 168
-rw-r--r--  src/H5FDros3.c | 73
-rw-r--r--  src/H5Fint.c | 18
-rw-r--r--  src/H5Odeprec.c | 75
-rw-r--r--  src/H5Pfapl.c | 9
-rw-r--r--  src/H5Rdeprec.c | 128
-rw-r--r--  test/Makefile.am | 7
-rw-r--r--  test/evict_on_close.c | 92
-rw-r--r--  test/tfile.c | 21
-rw-r--r--  testpar/Makefile.am | 3
-rw-r--r--  testpar/t_coll_md.c | 103
-rw-r--r--  testpar/t_file.c | 59
-rw-r--r--  testpar/t_select_io_dset.c | 30
-rw-r--r--  testpar/testphdf5.c | 5
-rw-r--r--  testpar/testphdf5.h | 2
-rw-r--r--  tools/Makefile.am | 2
-rw-r--r--  tools/libtest/Makefile.am | 8
-rw-r--r--  tools/test/h5diff/Makefile.am | 3
-rw-r--r--  tools/test/h5dump/Makefile.am | 2
52 files changed, 2528 insertions, 892 deletions
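
The ros3 cache described in the commit message is internal to the VFD, so applications open S3-hosted files exactly as before. A hedged C sketch of such an open (the helper name and bucket URL are placeholders; requires a build with the ros3 VFD enabled, i.e. H5_HAVE_ROS3_VFD):

    #include <string.h>
    #include "hdf5.h"

    #ifdef H5_HAVE_ROS3_VFD
    /* Sketch: open an S3-hosted file read-only through the ros3 VFD. The
     * first bytes of the file are now cached on open (4 KiB in #3753,
     * bumped to 16 MiB in #3759) to avoid many small I/O operations. */
    static hid_t
    open_s3_file(const char *url)
    {
        H5FD_ros3_fapl_t fa;
        hid_t            fapl_id, file_id;

        memset(&fa, 0, sizeof(fa));
        fa.version      = H5FD_CURR_ROS3_FAPL_T_VERSION;
        fa.authenticate = 0; /* anonymous access to a public bucket */

        fapl_id = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_fapl_ros3(fapl_id, &fa);

        file_id = H5Fopen(url, H5F_ACC_RDONLY, fapl_id);
        H5Pclose(fapl_id);

        return file_id;
    }
    #endif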
diff --git a/config/apple b/config/apple
index a8a219b..39ed454 100644
--- a/config/apple
+++ b/config/apple
@@ -55,30 +55,19 @@ fi
# Figure out C compiler flags
. $srcdir/config/gnu-flags
. $srcdir/config/clang-flags
+. $srcdir/config/oneapi-flags
. $srcdir/config/intel-flags
-# temp patch: if GCC 4.2.1 is used in Lion or Mountain Lion systems, do not
-# use -O option as it causes failures in test/dt_arith.
-case "$host_os" in
- darwin1[12].*) # lion & mountain lion
- #echo cc_vendor=$cc_vendor'-'cc_version=$cc_version
- case "$cc_vendor-$cc_version" in
- gcc-4.2.1)
- # Remove any -O flags
- #echo PROD_CFLAGS=$PROD_CFLAGS
- PROD_CFLAGS="`echo $PROD_CFLAGS | sed -e 's/-O[0-3]*//'`"
- #echo new PROD_CFLAGS=$PROD_CFLAGS
- ;;
- esac
- ;;
-esac
-
if test "X-" = "X-$FC"; then
case $CC_BASENAME in
gcc*)
FC=gfortran
FC_BASENAME=gfortran
;;
+ icx*)
+ FC=ifx
+ FC_BASENAME=ifx
+ ;;
icc*)
FC=ifort
FC_BASENAME=ifort
@@ -97,6 +86,7 @@ fi
# Figure out FORTRAN compiler flags
. $srcdir/config/gnu-fflags
+. $srcdir/config/oneapi-fflags
. $srcdir/config/intel-fflags
@@ -107,6 +97,10 @@ if test "X-" = "X-$CXX"; then
CXX=g++
CXX_BASENAME=g++
;;
+ icx)
+ CXX=icpx
+ CXX_BASENAME=icpx
+ ;;
icc)
CXX=icpc
CXX_BASENAME=icpc
@@ -123,6 +117,7 @@ if test "X-" = "X-$CXX"; then
fi
# Figure out C++ compiler flags
+. $srcdir/config/oneapi-cxxflags
. $srcdir/config/intel-cxxflags # Do this ahead of GNU to avoid icpc being detected as g++
. $srcdir/config/gnu-cxxflags
. $srcdir/config/clang-cxxflags
@@ -139,6 +134,11 @@ case $CC in
grep 'GCC' | sed 's/.*\((GCC) [-a-z0-9\. ]*.*\)/\1/'`
;;
+ *icx*)
+ cc_version_info=`$CC $CCFLAGS $H5_CCFLAGS -V 2>&1 | grep 'Version' |\
+ sed 's/\(Intel.* Compiler\).*\( Version [a-z0-9\.]*\).*\( Build [0-9]*\)/\1\2\3/'`
+ ;;
+
*icc*)
cc_version_info=`$CC $CCFLAGS $H5_CCFLAGS -V 2>&1 | grep 'Version' |\
sed 's/\(Intel.* Compiler\).*\( Version [a-z0-9\.]*\).*\( Build [0-9]*\)/\1\2\3/'`
@@ -156,6 +156,11 @@ case $FC in
grep 'GCC' | sed 's/\(.*(GCC) [-a-z0-9\. ]*\).*/\1/'`
;;
+ *ifx*)
+ fc_version_info=`$FC $FCFLAGS $H5_FCFLAGS -V 2>&1 | grep 'Version' |\
+ sed 's/\(Intel.* Compiler\).*\( Version [a-z0-9\.]*\).*\( Build [0-9]*\)/\1\2\3/'`
+ ;;
+
*ifc*|*ifort*)
fc_version_info=`$FC $FCFLAGS $H5_FCFLAGS -V 2>&1 | grep 'Version' |\
sed 's/\(Intel.* Compiler\).*\( Version [a-z0-9\.]*\).*\( Build [0-9]*\)/\1\2\3/'`
@@ -179,6 +184,11 @@ case $CXX in
grep 'GCC' | sed 's/.*\((GCC) [-a-z0-9\. ]*.*\)/\1/'`
;;
+ *icpx*)
+ cxx_version_info=`$CXX $CXXFLAGS $H5_CXXFLAGS -V 2>&1 | grep 'Version' |\
+ sed 's/\(Intel.* Compiler\).*\( Version [a-z0-9\.]*\).*\( Build [0-9]*\)/\1\2\3/'`
+ ;;
+
*icpc*)
cxx_version_info=`$CXX $CXXFLAGS $H5_CXXFLAGS -V 2>&1 | grep 'Version' |\
sed 's/\(Intel.* Compiler\).*\( Version [a-z0-9\.]*\).*\( Build [0-9]*\)/\1\2\3/'`
diff --git a/config/freebsd b/config/freebsd
index 2fb962f..b0e825a 100644
--- a/config/freebsd
+++ b/config/freebsd
@@ -29,7 +29,10 @@ fi
# Figure out GNU C compiler flags
. $srcdir/config/gnu-flags
-# Figure out Intel C compiler flags
+# Figure out Intel oneAPI C compiler flags
+. $srcdir/config/oneapi-flags
+
+# Figure out Intel classic C compiler flags
. $srcdir/config/intel-flags
# The default Fortran 90 compiler
@@ -43,6 +46,10 @@ if test "X-" = "X-$FC"; then
FC=gfortran
FC_BASENAME=gfortran
;;
+ icx*)
+ FC=ifx
+ FC_BASENAME=ifx
+ ;;
icc*)
FC=ifort
FC_BASENAME=ifort
@@ -57,8 +64,11 @@ fi
# Figure out FORTRAN compiler flags
. $srcdir/config/gnu-fflags
-# Figure out Intel F90 compiler flags
-. $srcdir/config/intel-fflags
+# Figure out Intel oneAPI FC compiler flags
+. $srcdir/config/oneapi-fflags
+
+# Figure out Intel classic FC compiler flags
+. $srcdir/config/classic-fflags
# The default C++ compiler
diff --git a/config/linux-gnulibc1 b/config/linux-gnulibc1
index 7614b07..328f8d3 100644
--- a/config/linux-gnulibc1
+++ b/config/linux-gnulibc1
@@ -38,7 +38,10 @@ fi
# Figure out CCE C compiler flags
. $srcdir/config/cce-flags
-# Figure out Intel C compiler flags
+# Figure out Intel oneAPI C compiler flags
+. $srcdir/config/oneapi-flags
+
+# Figure out Intel classic C compiler flags
. $srcdir/config/intel-flags
# Figure out Clang C compiler flags
@@ -55,6 +58,10 @@ if test "X-" = "X-$FC"; then
FC=pgf90
FC_BASENAME=pgf90
;;
+ icx*)
+ FC=ifx
+ FC_BASENAME=ifx
+ ;;
icc*)
FC=ifort
FC_BASENAME=ifort
@@ -119,7 +126,10 @@ fi
# Figure out CCE FC compiler flags
. $srcdir/config/cce-fflags
-# Figure out Intel FC compiler flags
+# Figure out Intel oneAPI FC compiler flags
+. $srcdir/config/oneapi-fflags
+
+# Figure out Intel classic FC compiler flags
. $srcdir/config/intel-fflags
# Figure out Clang FC compiler flags
@@ -200,7 +210,10 @@ if test -z "$CXX"; then
CXX_BASENAME=g++
fi
-# Figure out Intel CXX compiler flags
+# Figure out Intel oneAPI CXX compiler flags
+. $srcdir/config/oneapi-cxxflags
+
+# Figure out Intel classic CXX compiler flags
# Do this ahead of GNU to avoid icpc being detected as g++
. $srcdir/config/intel-cxxflags
@@ -237,6 +250,11 @@ case $CC in
cc_version_info=`echo $cc_version_info`
;;
+ *icx*)
+ cc_version_info=`$CC $CCFLAGS $H5_CCFLAGS -V 2>&1 | grep 'Version' |\
+ sed 's/\(Intel.* Compiler\).*\( Version [a-z0-9\.]*\).*\( Build [0-9]*\)/\1\2\3/'`
+ ;;
+
*icc*)
cc_version_info=`$CC $CCFLAGS $H5_CCFLAGS -V 2>&1 | grep 'Version' |\
sed 's/\(Intel.* Compiler\).*\( Version [a-z0-9\.]*\).*\( Build [0-9]*\)/\1\2\3/'`
@@ -271,6 +289,11 @@ case $FC in
fc_version_info=`echo $fc_version_info`
;;
+ *ifx*)
+ fc_version_info=`$FC $FCFLAGS $H5_FCFLAGS -V 2>&1 | grep 'Version' |\
+ sed 's/\(Intel.* Compiler\).*\( Version [a-z0-9\.]*\).*\( Build [0-9]*\)/\1\2\3/'`
+ ;;
+
*ifc*|*ifort*)
fc_version_info=`$FC $FCFLAGS $H5_FCFLAGS -V 2>&1 | grep 'Version' |\
sed 's/\(Intel.* Compiler\).*\( Version [a-z0-9\.]*\).*\( Build [0-9]*\)/\1\2\3/'`
@@ -333,6 +356,10 @@ case $CXX in
cxx_version_info=`$CXX $CXXFLAGS $H5_CXXFLAGS --version 2>&1 |\
grep 'GCC' | sed 's/\(.*(GCC) [-a-z0-9\. ]*\).*/\1/'`
;;
+ *icpx*)
+ cxx_version_info=`$CXX $CXXFLAGS $H5_CXXFLAGS -V 2>&1 | grep 'Version' |\
+ sed 's/\(Intel.* Compiler\).*\( Version [a-z0-9\.]*\).*\( Build [0-9]*\)/\1\2\3/'`
+ ;;
*icpc*)
cxx_version_info=`$CXX $CXXFLAGS $H5_CXXFLAGS -V 2>&1 | grep 'Version' |\
sed 's/\(Intel.* Compiler\).*\( Version [a-z0-9\.]*\).*\( Build [0-9]*\)/\1\2\3/'`
diff --git a/config/netbsd b/config/netbsd
index 04761f2..0ed84f7 100644
--- a/config/netbsd
+++ b/config/netbsd
@@ -26,7 +26,10 @@ fi
# Figure out C compiler flags
. $srcdir/config/gnu-flags
-# Figure out Intel C compiler flags
+# Figure out Intel oneAPI C compiler flags
+. $srcdir/config/oneapi-flags
+
+# Figure out Intel classic C compiler flags
. $srcdir/config/intel-flags
# The default Fortran 90 compiler
@@ -36,6 +39,10 @@ if test "X-" = "X-$FC"; then
FC=gfortran
FC_BASENAME=gfortran
;;
+ icx*)
+ FC=ifx
+ FC_BASENAME=ifx
+ ;;
icc*)
FC=ifort
FC_BASENAME=ifort
@@ -50,6 +57,8 @@ fi
# Figure out FORTRAN compiler flags
. $srcdir/config/gnu-fflags
-# Figure out Intel F90 compiler flags
+# Figure out Intel oneAPI FC compiler flags
. $srcdir/config/intel-fflags
+# Figure out Intel classic FC compiler flags
+. $srcdir/config/oneapi-fflags
diff --git a/config/oneapi-cxxflags b/config/oneapi-cxxflags
new file mode 100644
index 0000000..d9819b9
--- /dev/null
+++ b/config/oneapi-cxxflags
@@ -0,0 +1,155 @@
+# -*- shell-script -*-
+#
+# Copyright by The HDF Group.
+# All rights reserved.
+#
+# This file is part of HDF5. The full HDF5 copyright notice, including
+# terms governing use, modification, and redistribution, is contained in
+# the COPYING file, which can be found at the root of the source code
+# distribution tree, or in https://www.hdfgroup.org/licenses.
+# If you do not have access to either file, you may request a copy from
+# help@hdfgroup.org.
+
+
+# This file should be sourced into configure if the compiler is the
+# Intel icpx compiler or a derivative. It is careful not to do anything
+# if the compiler is not Intel; otherwise `cxx_flags_set' is set to `yes'
+#
+
+#
+# Prepend `$srcdir/config/intel-warnings/` to the filename suffix(es) given as
+# subroutine argument(s), remove comments starting with # and ending
+# at EOL, replace spans of whitespace (including newlines) with spaces,
+# and re-emit the file(s) thus filtered on the standard output stream.
+#
+load_intel_arguments()
+{
+ set -- $(for arg; do
+ sed 's,#.*$,,' $srcdir/config/intel-warnings/${arg}
+ done)
+ IFS=' ' echo "$*"
+}
+
+# Get the compiler version in a way that works for icpx
+# icpx unless a compiler version is already known
+#
+# cxx_vendor: The compiler name: icpx
+# cxx_version: Version number: 2023.2.0
+#
+if test X = "X$cxx_flags_set"; then
+ cxx_version="`$CXX $CXXFLAGS $H5_CXXFLAGS -V 2>&1 |grep 'oneAPI'`"
+ if test X != "X$cxx_version"; then
+ cxx_vendor=icpx
+ cxx_version=`echo $cxx_version |sed 's/.*Version \([-a-z0-9\.\-]*\).*/\1/'`
+ echo "compiler '$CXX' is Intel oneAPI $cxx_vendor-$cxx_version"
+
+ # Some version numbers
+ # Intel oneAPI version numbers are of the form: "major.minor.patch"
+ cxx_vers_major=`echo $cxx_version | cut -f1 -d.`
+ cxx_vers_minor=`echo $cxx_version | cut -f2 -d.`
+ cxx_vers_patch=`echo $cxx_version | cut -f2 -d.`
+ test -n "$cxx_vers_major" || cxx_vers_major=0
+ test -n "$cxx_vers_minor" || cxx_vers_minor=0
+ test -n "$cxx_vers_patch" || cxx_vers_patch=0
+ cxx_vers_all=`expr $cxx_vers_major '*' 1000000 + $cxx_vers_minor '*' 1000 + $cxx_vers_patch`
+ fi
+fi
+
+# Common Intel flags for various situations
+if test "X-icpx" = "X-$cxx_vendor"; then
+ # Insert section about version specific problems from compiler flags here,
+ # if necessary.
+
+ arch=
+ # Architecture-specific flags
+ # Nothing currently. (Uncomment code below and modify to add any)
+ #case "$host_os-$host_cpu" in
+ # *-i686)
+ # arch="-march=i686"
+ # ;;
+ #esac
+
+ # Host-specific flags
+ # Nothing currently. (Uncomment code below and modify to add any)
+ #case "`hostname`" in
+ # sleipnir.ncsa.uiuc.edu)
+ # arch="$arch -pipe"
+ # ;;
+ #esac
+
+ ###########
+ # General #
+ ###########
+
+ # Default to C++11 standard
+ H5_CXXFLAGS="$H5_CXXFLAGS $arch -std=c++11"
+
+ ##############
+ # Production #
+ ##############
+
+ PROD_CXXFLAGS=
+
+ #########
+ # Debug #
+ #########
+
+ # NDEBUG is handled explicitly in configure
+ # -g is handled by the symbols flags
+ DEBUG_CXXFLAGS=
+
+ ###########
+ # Symbols #
+ ###########
+
+ NO_SYMBOLS_CXXFLAGS="-Wl,-s"
+ SYMBOLS_CXXFLAGS="-g"
+
+ #############
+ # Profiling #
+ #############
+
+ PROFILE_CXXFLAGS="-p"
+
+ ################
+ # Optimization #
+ ################
+
+ HIGH_OPT_CXXFLAGS="-O3"
+ DEBUG_OPT_CXXFLAGS="-O0"
+ NO_OPT_CXXFLAGS="-O0"
+
+ ############
+ # Warnings #
+ ############
+
+ ###########
+ # General #
+ ###########
+
+ # Add various general warning flags in intel-warnings.
+ # Use the C warnings as CXX warnings are the same
+ H5_CXXFLAGS="$H5_CXXFLAGS $(load_intel_arguments oneapi/general)"
+
+ ######################
+ # Developer warnings #
+ ######################
+
+ # Use the C warnings as CXX warnings are the same
+ DEVELOPER_WARNING_CXXFLAGS=$(load_intel_arguments oneapi/developer-general)
+
+ #############################
+ # Version-specific warnings #
+ #############################
+
+ #################
+ # Flags are set #
+ #################
+ cxx_flags_set=yes
+fi
+
+# Clear cxx info if no flags set
+if test "X-$cxx_flags_set" = "X-"; then
+ cxx_vendor=
+ cxx_version=
+fi
diff --git a/config/oneapi-fflags b/config/oneapi-fflags
new file mode 100644
index 0000000..a63108d
--- /dev/null
+++ b/config/oneapi-fflags
@@ -0,0 +1,145 @@
+# -*- shell-script -*-
+#
+# Copyright by The HDF Group.
+# All rights reserved.
+#
+# This file is part of HDF5. The full HDF5 copyright notice, including
+# terms governing use, modification, and redistribution, is contained in
+# the COPYING file, which can be found at the root of the source code
+# distribution tree, or in https://www.hdfgroup.org/licenses.
+# If you do not have access to either file, you may request a copy from
+# help@hdfgroup.org.
+
+
+# This file should be sourced into configure if the compiler is the
+# Intel oneAPI ifx compiler or a derivative. It is careful not to do anything
+# if the compiler is not Intel; otherwise `f9x_flags_set' is set to `yes'
+#
+
+#
+# Prepend `$srcdir/config/intel-warnings/` to the filename suffix(es) given as
+# subroutine argument(s), remove comments starting with # and ending
+# at EOL, replace spans of whitespace (including newlines) with spaces,
+# and re-emit the file(s) thus filtered on the standard output stream.
+#
+load_intel_arguments()
+{
+ set -- $(for arg; do
+ sed 's,#.*$,,' $srcdir/config/intel-warnings/${arg}
+ done)
+ IFS=' ' echo "$*"
+}
+
+# Get the compiler version in a way that works for ifx
+# ifx unless a compiler version is already known
+#
+# f9x_vendor: The compiler name: ifx
+# f9x_version: Version number: 2023.2.0
+#
+if test X = "X$f9x_flags_set"; then
+ f9x_version="`$FC $FCFLAGS $H5_FCFLAGS -V 2>&1 |grep '^Intel'`"
+ if test X != "X$f9x_version"; then
+ f9x_vendor=ifx
+ f9x_version="`echo $f9x_version |sed 's/.*Version \([-a-z0-9\.\-]*\).*/\1/'`"
+ echo "compiler '$FC' is Intel oneAPI $f9x_vendor-$f9x_version"
+
+ # Some version numbers
+ # Intel oneAPI version numbers are of the form: "major.minor.patch"
+ f9x_vers_major=`echo $f9x_version | cut -f1 -d.`
+ f9x_vers_minor=`echo $f9x_version | cut -f2 -d.`
+ f9x_vers_patch=`echo $f9x_version | cut -f2 -d.`
+ test -n "$f9x_vers_major" || f9x_vers_major=0
+ test -n "$f9x_vers_minor" || f9x_vers_minor=0
+ test -n "$f9x_vers_patch" || f9x_vers_patch=0
+ f9x_vers_all=`expr $f9x_vers_major '*' 1000000 + $f9x_vers_minor '*' 1000 + $f9x_vers_patch`
+ fi
+fi
+
+if test "X-ifx" = "X-$f9x_vendor"; then
+
+ FC_BASENAME=ifx
+ F9XSUFFIXFLAG=""
+ FSEARCH_DIRS=""
+
+ ###############################
+ # Architecture-specific flags #
+ ###############################
+
+ arch=
+ # Nothing currently. (Uncomment code below and modify to add any)
+ #case "$host_os-$host_cpu" in
+ # *-i686)
+ # arch="-march=i686"
+ # ;;
+ #esac
+
+ # Host-specific flags
+ # Nothing currently. (Uncomment code below and modify to add any)
+ #case "`hostname`" in
+ # sleipnir.ncsa.uiuc.edu)
+ # arch="$arch -pipe"
+ # ;;
+ #esac
+
+ ##############
+ # Production #
+ ##############
+
+ PROD_FCFLAGS=
+
+ #########
+ # Debug #
+ #########
+
+ # Don't use -check uninit or you'll get false positives from H5_buildiface
+ DEBUG_FCFLAGS="-check all,nouninit"
+
+ ###########
+ # Symbols #
+ ###########
+
+ NO_SYMBOLS_FCFLAGS=
+ SYMBOLS_FCFLAGS="-g"
+
+ #############
+ # Profiling #
+ #############
+
+ PROFILE_FCFLAGS="-p"
+
+ ################
+ # Optimization #
+ ################
+
+ HIGH_OPT_FCFLAGS="-O3"
+ DEBUG_OPT_FCFLAGS="-O0"
+ NO_OPT_FCFLAGS="-O0"
+
+ ############
+ # Warnings #
+ ############
+
+ ###########
+ # General #
+ ###########
+
+ H5_FCFLAGS="$H5_FCFLAGS -free"
+ H5_FCFLAGS="$H5_FCFLAGS $(load_intel_arguments oneapi/ifort-general)"
+
+ #############################
+ # Version-specific warnings #
+ #############################
+
+
+ #################
+ # Flags are set #
+ #################
+ f9x_flags_set=yes
+fi
+
+# Clear f9x info if no flags set
+if test "X-$f9x_flags_set" = "X-"; then
+ f9x_vendor=
+ f9x_version=
+fi
+
diff --git a/config/oneapi-flags b/config/oneapi-flags
new file mode 100644
index 0000000..629e93f
--- /dev/null
+++ b/config/oneapi-flags
@@ -0,0 +1,151 @@
+# -*- shell-script -*-
+#
+# Copyright by The HDF Group.
+# All rights reserved.
+#
+# This file is part of HDF5. The full HDF5 copyright notice, including
+# terms governing use, modification, and redistribution, is contained in
+# the COPYING file, which can be found at the root of the source code
+# distribution tree, or in https://www.hdfgroup.org/licenses.
+# If you do not have access to either file, you may request a copy from
+# help@hdfgroup.org.
+
+
+# This file should be sourced into configure if the compiler is the
+# Intel icx compiler or a derivative. It is careful not to do anything
+# if the compiler is not Intel; otherwise `cc_flags_set' is set to `yes'
+#
+
+#
+# Prepend `$srcdir/config/intel-warnings/` to the filename suffix(es) given as
+# subroutine argument(s), remove comments starting with # and ending
+# at EOL, replace spans of whitespace (including newlines) with spaces,
+# and re-emit the file(s) thus filtered on the standard output stream.
+#
+load_intel_arguments()
+{
+ set -- $(for arg; do
+ sed 's,#.*$,,' $srcdir/config/intel-warnings/${arg}
+ done)
+ IFS=' ' echo "$*"
+}
+
+# Get the compiler version in a way that works for icx
+# icx unless a compiler version is already known
+# cc_vendor: The compiler name: icx
+# cc_version: Version number: 2023.2.0
+#
+if test X = "X$cc_flags_set"; then
+ cc_version="`$CC $CFLAGS $H5_CFLAGS -V 2>&1 |grep 'oneAPI'`"
+ if test X != "X$cc_version"; then
+ cc_vendor=icx
+ cc_version=`echo $cc_version |sed 's/.*Version \([-a-z0-9\.\-]*\).*/\1/'`
+ echo "compiler '$CC' is Intel oneAPI $cc_vendor-$cc_version"
+
+ # Some version numbers
+ # Intel oneAPI version numbers are of the form: "major.minor.patch"
+ cc_vers_major=`echo $cc_version | cut -f1 -d.`
+ cc_vers_minor=`echo $cc_version | cut -f2 -d.`
+ cc_vers_patch=`echo $cc_version | cut -f2 -d.`
+ test -n "$cc_vers_major" || cc_vers_major=0
+ test -n "$cc_vers_minor" || cc_vers_minor=0
+ test -n "$cc_vers_patch" || cc_vers_patch=0
+ cc_vers_all=`expr $cc_vers_major '*' 1000000 + $cc_vers_minor '*' 1000 + $cc_vers_patch`
+ fi
+fi
+
+# Common Intel flags for various situations
+if test "X-icx" = "X-$cc_vendor"; then
+ # Insert section about version specific problems from compiler flags here,
+ # if necessary.
+
+ arch=
+ # Architecture-specific flags
+ # Nothing currently. (Uncomment code below and modify to add any)
+ #case "$host_os-$host_cpu" in
+ # *-i686)
+ # arch="-march=i686"
+ # ;;
+ #esac
+
+ # Host-specific flags
+ # Nothing currently. (Uncomment code below and modify to add any)
+ #case "`hostname`" in
+ # sleipnir.ncsa.uiuc.edu)
+ # arch="$arch -pipe"
+ # ;;
+ #esac
+
+ ###########
+ # General #
+ ###########
+
+ # Default to C99 standard.
+ H5_CFLAGS="$H5_CFLAGS $arch -std=c99"
+
+ ##############
+ # Production #
+ ##############
+
+ PROD_CFLAGS=
+
+ #########
+ # Debug #
+ #########
+
+ # NDEBUG is handled explicitly in configure
+ DEBUG_CFLAGS=
+
+ ###########
+ # Symbols #
+ ###########
+
+ NO_SYMBOLS_CFLAGS="-Wl,-s"
+ SYMBOLS_CFLAGS="-g"
+
+ #############
+ # Profiling #
+ #############
+
+ PROFILE_CFLAGS="-p"
+
+ ################
+ # Optimization #
+ ################
+
+ HIGH_OPT_CFLAGS="-O3"
+ DEBUG_OPT_CFLAGS="-O0"
+ NO_OPT_CFLAGS="-O0"
+
+ ############
+ # Warnings #
+ ############
+
+ ###########
+ # General #
+ ###########
+
+ # Add various general warning flags in intel-warnings.
+ H5_CFLAGS="$H5_CFLAGS $(load_intel_arguments oneapi/general)"
+
+ ######################
+ # Developer warnings #
+ ######################
+
+ DEVELOPER_WARNING_CFLAGS=$(load_intel_arguments oneapi/developer-general)
+
+ #############################
+ # Version-specific warnings #
+ #############################
+
+ #################
+ # Flags are set #
+ #################
+ cc_flags_set=yes
+fi
+
+# Clear cc info if no flags set
+if test "X-$cc_flags_set" = "X-"; then
+ cc_vendor=
+ cc_version=
+fi
diff --git a/fortran/src/H5Fff.F90 b/fortran/src/H5Fff.F90
index fee4d3c..d311177 100644
--- a/fortran/src/H5Fff.F90
+++ b/fortran/src/H5Fff.F90
@@ -72,12 +72,6 @@ MODULE H5F
INTEGER(HSIZE_T) :: tot_space !< Amount of free space in the file
END TYPE H5F_info_free_t
-!> @brief H5_ih_info_t derived type.
- TYPE, BIND(C) :: H5_ih_info_t
- INTEGER(HSIZE_T) :: index_size !< btree and/or list
- INTEGER(HSIZE_T) :: heap_size !< Heap size
- END TYPE H5_ih_info_t
-
!> @brief H5F_info_t_sohm derived type.
TYPE, BIND(C) :: H5F_info_sohm_t
INTEGER(C_INT) :: version !< Version # of shared object header info
diff --git a/fortran/src/H5Off.F90 b/fortran/src/H5Off.F90
index 4a0a163..b705ba3 100644
--- a/fortran/src/H5Off.F90
+++ b/fortran/src/H5Off.F90
@@ -110,12 +110,6 @@ MODULE H5O
TYPE(mesg_t) :: mesg
END TYPE c_hdr_t
-!> @brief Extra metadata storage for obj & attributes
- TYPE, BIND(C) :: H5_ih_info_t
- INTEGER(hsize_t) :: index_size !< btree and/or list
- INTEGER(hsize_t) :: heap_size !< heap
- END TYPE H5_ih_info_t
-
!> @brief meta_size_t derived type
TYPE, BIND(C) :: meta_size_t
TYPE(H5_ih_info_t) :: obj !< v1/v2 B-tree & local/fractal heap for groups, B-tree for chunked datasets
diff --git a/fortran/src/H5Pff.F90 b/fortran/src/H5Pff.F90
index bbc7a9d..5821889 100644
--- a/fortran/src/H5Pff.F90
+++ b/fortran/src/H5Pff.F90
@@ -6405,7 +6405,7 @@ END SUBROUTINE h5pget_virtual_dsetname_f
!! \brief Gets the file space handling strategy and persisting free-space values for a file creation property list.
!!
!! \param plist_id File creation property list identifier
-!! \param strategy The file space handling strategy to be used.
+!! \param strategy The file space handling strategy to be used
!! \param persist Indicate whether free space should be persistent or not
!! \param threshold The free-space section size threshold value
!! \param hdferr \fortran_error
@@ -6507,6 +6507,42 @@ END SUBROUTINE h5pget_virtual_dsetname_f
hdferr = INT(h5pget_file_space_page_size(prp_id, fsp_size))
END SUBROUTINE h5pget_file_space_page_size_f
+!>
+!! \ingroup FH5P
+!!
+!! \brief Retrieves the type(s) of I/O that HDF5 actually performed on raw data
+!! during the last I/O call.
+!!
+!! \param plist_id File creation property list identifier
+!! \param actual_selection_io_mode A bitwise set value indicating the type(s) of I/O performed
+!! \param hdferr \fortran_error
+!!
+!! See C API: @ref H5Pget_actual_selection_io_mode()
+!!
+ SUBROUTINE h5pget_actual_selection_io_mode_f(plist_id, actual_selection_io_mode, hdferr)
+
+ IMPLICIT NONE
+ INTEGER(HID_T), INTENT(IN) :: plist_id
+ INTEGER , INTENT(OUT) :: actual_selection_io_mode
+ INTEGER , INTENT(OUT) :: hdferr
+
+ INTEGER(C_INT32_T) :: c_actual_selection_io_mode
+
+ INTERFACE
+ INTEGER(C_INT) FUNCTION H5Pget_actual_selection_io_mode(plist_id, actual_selection_io_mode) &
+ BIND(C, NAME='H5Pget_actual_selection_io_mode')
+ IMPORT :: HID_T, C_INT32_T, C_INT
+ IMPLICIT NONE
+ INTEGER(HID_T), VALUE :: plist_id
+ INTEGER(C_INT32_T) :: actual_selection_io_mode
+ END FUNCTION H5Pget_actual_selection_io_mode
+ END INTERFACE
+
+ hdferr = INT(H5Pget_actual_selection_io_mode(plist_id, c_actual_selection_io_mode))
+
+ actual_selection_io_mode = INT(c_actual_selection_io_mode)
+
+ END SUBROUTINE h5pget_actual_selection_io_mode_f
END MODULE H5P
diff --git a/fortran/src/H5_f.c b/fortran/src/H5_f.c
index 181047b..0392c2b 100644
--- a/fortran/src/H5_f.c
+++ b/fortran/src/H5_f.c
@@ -477,6 +477,10 @@ h5init_flags_c(int_f *h5d_flags, size_t_f *h5d_size_flags, int_f *h5e_flags, hid
h5d_flags[55] = (int_f)H5D_MPIO_LINK_CHUNK;
h5d_flags[56] = (int_f)H5D_MPIO_MULTI_CHUNK;
+ h5d_flags[57] = (int_f)H5D_SCALAR_IO;
+ h5d_flags[58] = (int_f)H5D_VECTOR_IO;
+ h5d_flags[59] = (int_f)H5D_SELECTION_IO;
+
/*
* H5E flags
*/
diff --git a/fortran/src/H5_ff.F90 b/fortran/src/H5_ff.F90
index 68b3dd8..5315673 100644
--- a/fortran/src/H5_ff.F90
+++ b/fortran/src/H5_ff.F90
@@ -74,7 +74,7 @@ MODULE H5LIB
!
! H5D flags declaration
!
- INTEGER, PARAMETER :: H5D_FLAGS_LEN = 57
+ INTEGER, PARAMETER :: H5D_FLAGS_LEN = 60
INTEGER, DIMENSION(1:H5D_FLAGS_LEN) :: H5D_flags
INTEGER, PARAMETER :: H5D_SIZE_FLAGS_LEN = 2
INTEGER(SIZE_T), DIMENSION(1:H5D_SIZE_FLAGS_LEN) :: H5D_size_flags
@@ -467,6 +467,9 @@ CONTAINS
H5D_MPIO_NO_CHUNK_OPTIMIZATION_F = H5D_flags(55)
H5D_MPIO_LINK_CHUNK_F = H5D_flags(56)
H5D_MPIO_MULTI_CHUNK_F = H5D_flags(57)
+ H5D_SCALAR_IO_F = H5D_flags(58)
+ H5D_VECTOR_IO_F = H5D_flags(59)
+ H5D_SELECTION_IO_F = H5D_flags(60)
H5D_CHUNK_CACHE_NSLOTS_DFLT_F = H5D_size_flags(1)
H5D_CHUNK_CACHE_NBYTES_DFLT_F = H5D_size_flags(2)
diff --git a/fortran/src/H5config_f.inc.cmake b/fortran/src/H5config_f.inc.cmake
index 34fb091..71bce0e 100644
--- a/fortran/src/H5config_f.inc.cmake
+++ b/fortran/src/H5config_f.inc.cmake
@@ -23,6 +23,12 @@
#undef H5_HAVE_SUBFILING_VFD
#endif
+! Define if on APPLE
+#cmakedefine01 H5_HAVE_DARWIN
+#if H5_HAVE_DARWIN == 0
+#undef H5_HAVE_DARWIN
+#endif
+
! Define if the intrinsic function STORAGE_SIZE exists
#define H5_FORTRAN_HAVE_STORAGE_SIZE @H5_FORTRAN_HAVE_STORAGE_SIZE@
@@ -81,4 +87,4 @@
#cmakedefine01 H5_NO_DEPRECATED_SYMBOLS
#if H5_NO_DEPRECATED_SYMBOLS == 0
#undef H5_NO_DEPRECATED_SYMBOLS
-#endif
\ No newline at end of file
+#endif
diff --git a/fortran/src/H5config_f.inc.in b/fortran/src/H5config_f.inc.in
index 7fb76e1..991e4b0 100644
--- a/fortran/src/H5config_f.inc.in
+++ b/fortran/src/H5config_f.inc.in
@@ -20,6 +20,9 @@
! Define if we have subfiling support
#undef HAVE_SUBFILING_VFD
+! Define if on APPLE
+#undef HAVE_DARWIN
+
! Define if the intrinsic function STORAGE_SIZE exists
#undef FORTRAN_HAVE_STORAGE_SIZE
diff --git a/fortran/src/H5f90global.F90 b/fortran/src/H5f90global.F90
index e60f1e8..aa04623 100644
--- a/fortran/src/H5f90global.F90
+++ b/fortran/src/H5f90global.F90
@@ -25,6 +25,12 @@ MODULE H5GLOBAL
IMPLICIT NONE
+!> @brief H5_ih_info_t derived type.
+ TYPE, BIND(C) :: H5_ih_info_t
+ INTEGER(HSIZE_T) :: index_size !< btree and/or list
+ INTEGER(HSIZE_T) :: heap_size !< Heap size
+ END TYPE H5_ih_info_t
+
!> \addtogroup FH5
!> @{
! Parameters used in the function 'h5kind_to_type' located in H5_ff.F90.
@@ -368,6 +374,12 @@ MODULE H5GLOBAL
!DEC$ATTRIBUTES DLLEXPORT :: H5D_MPIO_NO_CHUNK_OPTIMIZATION_F
!DEC$ATTRIBUTES DLLEXPORT :: H5D_MPIO_LINK_CHUNK_F
!DEC$ATTRIBUTES DLLEXPORT :: H5D_MPIO_MULTI_CHUNK_F
+
+ !DEC$ATTRIBUTES DLLEXPORT :: H5D_SCALAR_IO_F
+ !DEC$ATTRIBUTES DLLEXPORT :: H5D_VECTOR_IO_F
+ !DEC$ATTRIBUTES DLLEXPORT :: H5D_SELECTION_IO_F
+
+
!DEC$endif
!> \addtogroup FH5D
!> @{
@@ -444,6 +456,10 @@ MODULE H5GLOBAL
INTEGER :: H5D_MPIO_NO_CHUNK_OPTIMIZATION_F !< H5D_MPIO_NO_CHUNK_OPTIMIZATION
INTEGER :: H5D_MPIO_LINK_CHUNK_F !< H5D_MPIO_LINK_CHUNK
INTEGER :: H5D_MPIO_MULTI_CHUNK_F !< H5D_MPIO_MULTI_CHUNK
+
+ INTEGER :: H5D_SCALAR_IO_F !< Scalar (or legacy MPIO) I/O was performed
+ INTEGER :: H5D_VECTOR_IO_F !< Vector I/O was performed
+ INTEGER :: H5D_SELECTION_IO_F !< Selection I/O was performed
!
! H5E flags declaration
!
diff --git a/fortran/src/hdf5_fortrandll.def.in b/fortran/src/hdf5_fortrandll.def.in
index 3b6600c..2ded002 100644
--- a/fortran/src/hdf5_fortrandll.def.in
+++ b/fortran/src/hdf5_fortrandll.def.in
@@ -417,6 +417,7 @@ H5P_mp_H5PSET_FILE_SPACE_STRATEGY_F
H5P_mp_H5PGET_FILE_SPACE_STRATEGY_F
H5P_mp_H5PSET_FILE_SPACE_PAGE_SIZE_F
H5P_mp_H5PGET_FILE_SPACE_PAGE_SIZE_F
+H5P_mp_H5PGET_ACTUAL_SELECTION_IO_MODE_F
; Parallel
@H5_NOPAREXP@H5P_mp_H5PSET_FAPL_MPIO_F
@H5_NOPAREXP@H5P_mp_H5PGET_FAPL_MPIO_F
diff --git a/fortran/test/tH5P.F90 b/fortran/test/tH5P.F90
index c73016b..78d665f 100644
--- a/fortran/test/tH5P.F90
+++ b/fortran/test/tH5P.F90
@@ -869,6 +869,7 @@ SUBROUTINE test_in_place_conversion(cleanup, total_error)
REAL(KIND=C_DOUBLE), DIMENSION(1:array_len) :: wbuf_d_org
REAL(KIND=C_FLOAT), DIMENSION(1:array_len), TARGET :: rbuf
INTEGER :: i
+ INTEGER :: actual_selection_io_mode
TYPE(C_PTR) :: f_ptr
! create the data
@@ -919,6 +920,10 @@ SUBROUTINE test_in_place_conversion(cleanup, total_error)
! Should not be equal for in-place buffer use
CALL VERIFY("h5dwrite_f -- in-place", wbuf_d(1), wbuf_d_org(1), total_error, .FALSE.)
+ CALL h5pget_actual_selection_io_mode_f(plist_id, actual_selection_io_mode, error)
+ CALL check("h5pget_actual_selection_io_mode_f", error, total_error)
+ CALL VERIFY("h5pget_actual_selection_io_mode_f", actual_selection_io_mode, H5D_SCALAR_IO_F, total_error)
+
f_ptr = C_LOC(rbuf)
CALL h5dread_f(dset_id, h5kind_to_type(KIND(rbuf(1)), H5_REAL_KIND), f_ptr, error)
CALL check("h5dread_f", error, total_error)
diff --git a/fortran/testpar/CMakeTests.cmake b/fortran/testpar/CMakeTests.cmake
index 8c15724..473049f 100644
--- a/fortran/testpar/CMakeTests.cmake
+++ b/fortran/testpar/CMakeTests.cmake
@@ -17,3 +17,4 @@
##############################################################################
add_test (NAME MPI_TEST_FORT_parallel_test COMMAND ${MPIEXEC_EXECUTABLE} ${MPIEXEC_NUMPROC_FLAG} ${MPIEXEC_MAX_NUMPROCS} ${MPIEXEC_PREFLAGS} $<TARGET_FILE:parallel_test> ${MPIEXEC_POSTFLAGS})
add_test (NAME MPI_TEST_FORT_subfiling_test COMMAND ${MPIEXEC_EXECUTABLE} ${MPIEXEC_NUMPROC_FLAG} ${MPIEXEC_MAX_NUMPROCS} ${MPIEXEC_PREFLAGS} $<TARGET_FILE:subfiling_test> ${MPIEXEC_POSTFLAGS})
+add_test (NAME MPI_TEST_FORT_async_test COMMAND ${MPIEXEC_EXECUTABLE} ${MPIEXEC_NUMPROC_FLAG} ${MPIEXEC_MAX_NUMPROCS} ${MPIEXEC_PREFLAGS} $<TARGET_FILE:async_test> ${MPIEXEC_POSTFLAGS})
diff --git a/fortran/testpar/Makefile.am b/fortran/testpar/Makefile.am
index 7f9f284..afdda98 100644
--- a/fortran/testpar/Makefile.am
+++ b/fortran/testpar/Makefile.am
@@ -36,7 +36,7 @@ TEST_PROG_PARA=parallel_test subfiling_test async_test
check_PROGRAMS=$(TEST_PROG_PARA)
# Temporary files
-CHECK_CLEANFILES+=parf[12].h5 subf.h5*
+CHECK_CLEANFILES+=parf[12].h5 h5*_tests.h5 subf.h5* test_async_apis.mod
# Test source files
parallel_test_SOURCES=ptest.F90 hyper.F90 mdset.F90 multidsetrw.F90
diff --git a/fortran/testpar/hyper.F90 b/fortran/testpar/hyper.F90
index edd93cf..ec3a657 100644
--- a/fortran/testpar/hyper.F90
+++ b/fortran/testpar/hyper.F90
@@ -55,6 +55,7 @@ SUBROUTINE hyper(length,do_collective,do_chunk, mpi_size, mpi_rank, nerrors)
INTEGER :: local_no_collective_cause
INTEGER :: global_no_collective_cause
INTEGER :: no_selection_io_cause
+ INTEGER :: actual_selection_io_mode
!
! initialize the array data between the processes (3)
@@ -236,6 +237,20 @@ SUBROUTINE hyper(length,do_collective,do_chunk, mpi_size, mpi_rank, nerrors)
CALL h5dwrite_f(dset_id,H5T_NATIVE_INTEGER,wbuf,dims,hdferror,file_space_id=fspace_id,mem_space_id=mspace_id,xfer_prp=dxpl_id)
CALL check("h5dwrite_f", hdferror, nerrors)
+ CALL h5pget_actual_selection_io_mode_f(dxpl_id, actual_selection_io_mode, hdferror)
+ CALL check("h5pget_actual_selection_io_mode_f", hdferror, nerrors)
+ IF(do_collective)THEN
+ IF(actual_selection_io_mode .NE. H5D_SELECTION_IO_F)THEN
+ PRINT*, "Incorrect actual selection io mode"
+ nerrors = nerrors + 1
+ ENDIF
+ ELSE
+ IF(actual_selection_io_mode .NE. IOR(H5D_SELECTION_IO_F, H5D_SCALAR_IO_F))THEN
+ PRINT*, "Incorrect actual selection io mode"
+ nerrors = nerrors + 1
+ ENDIF
+ ENDIF
+
! Check h5pget_mpio_actual_io_mode_f function
CALL h5pget_mpio_actual_io_mode_f(dxpl_id, actual_io_mode, hdferror)
CALL check("h5pget_mpio_actual_io_mode_f", hdferror, nerrors)
diff --git a/fortran/testpar/subfiling.F90 b/fortran/testpar/subfiling.F90
index 043ac6c..a677bea 100644
--- a/fortran/testpar/subfiling.F90
+++ b/fortran/testpar/subfiling.F90
@@ -54,6 +54,7 @@ PROGRAM subfiling_test
INTEGER(HID_T) :: driver_id
CHARACTER(len=8) :: hex1, hex2
+ CHARACTER(len=1) :: arg
!
! initialize MPI
@@ -336,10 +337,14 @@ PROGRAM subfiling_test
WRITE(*,"(A,A)") "Failed to find the stub subfile ",TRIM(filename)
nerrors = nerrors + 1
ENDIF
-
- CALL EXECUTE_COMMAND_LINE("stat --format='%i' "//filename//" >> tmp_inode", EXITSTAT=i)
+#ifdef H5_HAVE_DARWIN
+ arg(1:1)="f"
+#else
+ arg(1:1)="c"
+#endif
+ CALL EXECUTE_COMMAND_LINE("stat -"//arg(1:1)//" %i "//filename//" >> tmp_inode", EXITSTAT=i)
IF(i.ne.0)THEN
- WRITE(*,"(A,A)") "Failed to stat the stub subfile ",TRIM(filename)
+ WRITE(*,"(A,A)") "Failed to stat the stub subfile ",TRIM(filename)
nerrors = nerrors + 1
ENDIF
diff --git a/hl/test/Makefile.am b/hl/test/Makefile.am
index 1d1cb0f..6f66291 100644
--- a/hl/test/Makefile.am
+++ b/hl/test/Makefile.am
@@ -20,7 +20,7 @@ include $(top_srcdir)/config/commence.am
# Add include directories to C preprocessor flags
AM_CPPFLAGS+=-I. -I$(srcdir) -I$(top_builddir)/src -I$(top_srcdir)/src -I$(top_builddir)/test -I$(top_srcdir)/test -I$(top_srcdir)/hl/src
-# The tests depend on the hdf5, hdf5 test, and hdf5_hl libraries
+# The tests depend on the hdf5, hdf5 test, and hdf5_hl libraries
LDADD=$(LIBH5_HL) $(LIBH5TEST) $(LIBHDF5)
# Test programs. These are our main targets. They should be listed in the
diff --git a/java/test/Makefile.am b/java/test/Makefile.am
index 9f39be9..7f6ab01 100644
--- a/java/test/Makefile.am
+++ b/java/test/Makefile.am
@@ -90,7 +90,8 @@ noinst_DATA = $(jarfile)
check_SCRIPTS = junit.sh
TEST_SCRIPT = $(check_SCRIPTS)
-CLEANFILES = classnoinst.stamp $(jarfile) $(JAVAROOT)/$(pkgpath)/*.class junit.sh
+CLEANFILES = classnoinst.stamp $(jarfile) $(JAVAROOT)/$(pkgpath)/*.class junit.sh \
+ *.h5 testExport*.txt
clean:
rm -rf $(JAVAROOT)/*
diff --git a/release_docs/RELEASE.txt b/release_docs/RELEASE.txt
index 1734c01..f228e39 100644
--- a/release_docs/RELEASE.txt
+++ b/release_docs/RELEASE.txt
@@ -47,6 +47,14 @@ New Features
Configuration:
-------------
+ - Improved support for Intel oneAPI
+
+ * Separates the old 'classic' Intel compiler settings and warnings
+ from the oneAPI settings
+ * Uses `-check nouninit` in debug builds to avoid false positives
+ when building H5_buildiface with `-check all`
+ * Both Autotools and CMake
+
- Added new options for CMake and Autotools to control the Doxygen
warnings as errors setting.
@@ -127,6 +135,16 @@ New Features
Library:
--------
+ - Added a simple cache to the read-only S3 (ros3) VFD
+
+ The read-only S3 VFD now caches the first N bytes of a file stored
+ in S3 to avoid a lot of small I/O operations when opening files.
+ This cache is per-file and created when the file is opened.
+
+ N is currently 16 MiB or the size of the file, whichever is smaller.
+
+ Addresses GitHub issue #3381
+
- Added new API function H5Pget_actual_selection_io_mode()
This function allows the user to determine if the library performed
@@ -159,7 +177,8 @@ New Features
- Fortran async APIs H5A, H5D, H5ES, H5G, H5F, H5L and H5O were added.
- Added Fortran APIs:
- h5pset_selection_io_f, h5pget_selection_io_f
+ h5pset_selection_io_f, h5pget_selection_io_f,
+ h5pget_actual_selection_io_mode_f,
h5pset_modify_write_buf_f, h5pget_modify_write_buf_f
- Added Fortran APIs:
@@ -219,6 +238,66 @@ Bug Fixes since HDF5-1.14.2 release
===================================
Library
-------
+ - Fixed some issues with chunk index metadata not getting read
+ collectively when collective metadata reads are enabled
+
+ When looking up dataset chunks during I/O, the parallel library
+ temporarily disables collective metadata reads since it's generally
+ unlikely that the application will read the same chunks from all
+ MPI ranks. Leaving collective metadata reads enabled during
+ chunk lookups can lead to hangs or other bad behavior depending
+ on the chunk indexing structure used for the dataset in question.
+ However, due to the way that dataset chunk index metadata was
+ previously loaded in a deferred manner, this could mean that
+ the metadata for the main chunk index structure or its
+ accompanying pieces of metadata (e.g., fixed array data blocks)
+ could end up being read independently if these chunk lookup
+ operations are the first chunk index-related operation that
+ occurs on a dataset. This behavior is generally observed when
+ opening a dataset for which the metadata isn't in the metadata
+ cache yet and then immediately performing I/O on that dataset.
+ This behavior is not generally observed when creating a dataset
+ and then performing I/O on it, as the relevant metadata will
+ usually be in the metadata cache as a side effect of creating
+ the chunk index structures during dataset creation.
+
+ This issue has been fixed by adding callbacks to the different
+ chunk indexing structure classes that allow more explicit control
+ over when chunk index metadata gets loaded. When collective
+ metadata reads are enabled, the necessary index metadata will now
+ get loaded collectively by all MPI ranks at the start of dataset
+ I/O to ensure that the ranks don't unintentionally read this
+ metadata independently further on. These changes fix collective
+ loading of the main chunk index structure, as well as v2 B-tree
+ root nodes, extensible array index blocks and fixed array data
+ blocks. There are still pieces of metadata that cannot currently
+ be loaded collectively, however, such as extensible array data
+ blocks, data block pages and super blocks, as well as fixed array
+ data block pages. These pieces of metadata are not necessarily
+ read in by all MPI ranks since this depends on which chunks the
+ ranks have selected in the dataset. Therefore, reading of these
+ pieces of metadata remains an independent operation.
+
+ - Fixed potential hangs in parallel library during collective I/O with
+ independent metadata writes
+
+ When performing collective parallel writes to a dataset where metadata
+ writes are requested as (or left as the default setting of) independent,
+ hangs could potentially occur during metadata cache sync points. This
+ was due to incorrect management of the internal state tracking whether
+ an I/O operation should be collective or not, causing the library to
+ attempt collective writes of metadata when they were meant to be
+ independent writes. During the metadata cache sync points, if the number
+ of cache entries being flushed was a multiple of the number of MPI ranks
+ in the MPI communicator used to access the HDF5 file, an equal amount of
+ collective MPI I/O calls were made and the dataset write call would be
+ successful. However, when the number of cache entries being flushed was
+ NOT a multiple of the number of MPI ranks, the ranks with more entries
+ than others would get stuck in an MPI_File_set_view call, while other
+ ranks would get stuck in a post-write MPI_Barrier call. This issue has
+ been fixed by correctly switching to independent I/O temporarily when
+ writing metadata independently during collective dataset I/O.
+
- Fixed a bug with the way the Subfiling VFD assigns I/O concentrators
During a file open operation, the Subfiling VFD determines the topology
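
The two parallel bug fixes described in the RELEASE.txt entries above are internal to the library, but they matter most for applications that enable collective metadata operations and collective raw-data transfers. A minimal sketch of that setup, not part of this commit (the file and dataset names are placeholders, the dataspaces and buffer are assumed to exist, and a parallel HDF5 build is required):

    #include <mpi.h>
    #include "hdf5.h"

    /* Sketch: enable collective metadata reads/writes on the file access
     * property list, then perform a collective raw-data write. */
    static void
    collective_io_example(MPI_Comm comm, hid_t mem_space, hid_t file_space, const int *buf)
    {
        hid_t fapl_id = H5Pcreate(H5P_FILE_ACCESS);
        hid_t dxpl_id = H5Pcreate(H5P_DATASET_XFER);
        hid_t file_id, dset_id;

        H5Pset_fapl_mpio(fapl_id, comm, MPI_INFO_NULL);
        H5Pset_all_coll_metadata_ops(fapl_id, true); /* collective metadata reads  */
        H5Pset_coll_metadata_write(fapl_id, true);   /* collective metadata writes */

        file_id = H5Fopen("data.h5", H5F_ACC_RDWR, fapl_id);
        dset_id = H5Dopen2(file_id, "dset", H5P_DEFAULT);

        /* Collective raw-data transfer; with the fix for #3716, the chunk
         * index metadata needed here is loaded collectively up front. */
        H5Pset_dxpl_mpio(dxpl_id, H5FD_MPIO_COLLECTIVE);
        H5Dwrite(dset_id, H5T_NATIVE_INT, mem_space, file_space, dxpl_id, buf);

        H5Dclose(dset_id);
        H5Fclose(file_id);
        H5Pclose(dxpl_id);
        H5Pclose(fapl_id);
    }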
diff --git a/src/H5Cmpio.c b/src/H5Cmpio.c
index d7bf5b1..c8db535 100644
--- a/src/H5Cmpio.c
+++ b/src/H5Cmpio.c
@@ -154,8 +154,9 @@ herr_t
H5C_apply_candidate_list(H5F_t *f, H5C_t *cache_ptr, unsigned num_candidates, haddr_t *candidates_list_ptr,
int mpi_rank, int mpi_size)
{
- unsigned first_entry_to_flush;
- unsigned last_entry_to_flush;
+ H5FD_mpio_xfer_t orig_xfer_mode;
+ unsigned first_entry_to_flush;
+ unsigned last_entry_to_flush;
#ifndef NDEBUG
unsigned total_entries_to_clear = 0;
unsigned total_entries_to_flush = 0;
@@ -169,11 +170,12 @@ H5C_apply_candidate_list(H5F_t *f, H5C_t *cache_ptr, unsigned num_candidates, ha
haddr_t last_addr;
#endif /* H5C_DO_SANITY_CHECKS */
#if H5C_APPLY_CANDIDATE_LIST__DEBUG
- char tbl_buf[1024];
+ char *tbl_buf = NULL;
#endif /* H5C_APPLY_CANDIDATE_LIST__DEBUG */
unsigned m, n;
- unsigned u; /* Local index variable */
- herr_t ret_value = SUCCEED; /* Return value */
+ unsigned u; /* Local index variable */
+ bool restore_io_mode = false;
+ herr_t ret_value = SUCCEED; /* Return value */
FUNC_ENTER_NOAPI(FAIL)
@@ -185,21 +187,57 @@ H5C_apply_candidate_list(H5F_t *f, H5C_t *cache_ptr, unsigned num_candidates, ha
assert(0 <= mpi_rank);
assert(mpi_rank < mpi_size);
+ /* Get I/O transfer mode */
+ if (H5CX_get_io_xfer_mode(&orig_xfer_mode) < 0)
+ HGOTO_ERROR(H5E_CACHE, H5E_CANTGET, FAIL, "can't get MPI-I/O transfer mode");
+
/* Initialize the entries_to_flush and entries_to_clear arrays */
memset(entries_to_flush, 0, sizeof(entries_to_flush));
memset(entries_to_clear, 0, sizeof(entries_to_clear));
#if H5C_APPLY_CANDIDATE_LIST__DEBUG
- fprintf(stdout, "%s:%d: setting up candidate assignment table.\n", __func__, mpi_rank);
+ {
+ const char *const table_header = "candidate list = ";
+ size_t tbl_buf_size;
+ size_t tbl_buf_left;
+ size_t entry_nchars;
+ int bytes_printed;
- memset(tbl_buf, 0, sizeof(tbl_buf));
+ fprintf(stdout, "%s:%d: setting up candidate assignment table.\n", __func__, mpi_rank);
+
+ /* Calculate maximum number of characters printed for each
+ * candidate entry, including the leading space and "0x"
+ */
+ entry_nchars = (sizeof(long long) * CHAR_BIT / 4) + 3;
+
+ tbl_buf_size = strlen(table_header) + (num_candidates * entry_nchars) + 1;
+ if (NULL == (tbl_buf = H5MM_malloc(tbl_buf_size)))
+ HGOTO_ERROR(H5E_CACHE, H5E_CANTALLOC, FAIL, "can't allocate debug buffer");
+ tbl_buf_left = tbl_buf_size;
+
+ if ((bytes_printed = snprintf(tbl_buf, tbl_buf_left, table_header)) < 0)
+ HGOTO_ERROR(H5E_CACHE, H5E_SYSERRSTR, FAIL, "can't add to candidate list");
+ assert((size_t)bytes_printed < tbl_buf_left);
+ tbl_buf_left -= (size_t)bytes_printed;
+
+ for (u = 0; u < num_candidates; u++) {
+ if ((bytes_printed = snprintf(&(tbl_buf[tbl_buf_size - tbl_buf_left]), tbl_buf_left, " 0x%llx",
+ (long long)(*(candidates_list_ptr + u)))) < 0)
+ HGOTO_ERROR(H5E_CACHE, H5E_SYSERRSTR, FAIL, "can't add to candidate list");
+ assert((size_t)bytes_printed < tbl_buf_left);
+ tbl_buf_left -= (size_t)bytes_printed;
+ }
- snprintf(tbl_buf, sizeof(tbl_buf), "candidate list = ");
- for (u = 0; u < num_candidates; u++)
- sprintf(&(tbl_buf[strlen(tbl_buf)]), " 0x%llx", (long long)(*(candidates_list_ptr + u)));
- sprintf(&(tbl_buf[strlen(tbl_buf)]), "\n");
+ if ((bytes_printed = snprintf(&(tbl_buf[tbl_buf_size - tbl_buf_left]), tbl_buf_left, "\n")) < 0)
+ HGOTO_ERROR(H5E_CACHE, H5E_SYSERRSTR, FAIL, "can't add to candidate list");
+ assert((size_t)bytes_printed < tbl_buf_left);
+ tbl_buf_left -= (size_t)bytes_printed + 1; /* NUL terminator */
- fprintf(stdout, "%s", tbl_buf);
+ fprintf(stdout, "%s", tbl_buf);
+
+ H5MM_free(tbl_buf);
+ tbl_buf = NULL;
+ }
#endif /* H5C_APPLY_CANDIDATE_LIST__DEBUG */
if (f->shared->coll_md_write) {
@@ -258,18 +296,50 @@ H5C_apply_candidate_list(H5F_t *f, H5C_t *cache_ptr, unsigned num_candidates, ha
last_entry_to_flush = candidate_assignment_table[mpi_rank + 1] - 1;
#if H5C_APPLY_CANDIDATE_LIST__DEBUG
- for (u = 0; u < 1024; u++)
- tbl_buf[u] = '\0';
- snprintf(tbl_buf, sizeof(tbl_buf), "candidate assignment table = ");
- for (u = 0; u <= (unsigned)mpi_size; u++)
- sprintf(&(tbl_buf[strlen(tbl_buf)]), " %u", candidate_assignment_table[u]);
- sprintf(&(tbl_buf[strlen(tbl_buf)]), "\n");
- fprintf(stdout, "%s", tbl_buf);
-
- fprintf(stdout, "%s:%d: flush entries [%u, %u].\n", __func__, mpi_rank, first_entry_to_flush,
- last_entry_to_flush);
-
- fprintf(stdout, "%s:%d: marking entries.\n", __func__, mpi_rank);
+ {
+ const char *const table_header = "candidate assignment table = ";
+ unsigned umax = UINT_MAX;
+ size_t tbl_buf_size;
+ size_t tbl_buf_left;
+ size_t entry_nchars;
+ int bytes_printed;
+
+ /* Calculate the maximum number of characters printed for each entry */
+ entry_nchars = (size_t)(log10(umax) + 1) + 1;
+
+ tbl_buf_size = strlen(table_header) + ((size_t)mpi_size * entry_nchars) + 1;
+ if (NULL == (tbl_buf = H5MM_malloc(tbl_buf_size)))
+ HGOTO_ERROR(H5E_CACHE, H5E_CANTALLOC, FAIL, "can't allocate debug buffer");
+ tbl_buf_left = tbl_buf_size;
+
+ if ((bytes_printed = snprintf(tbl_buf, tbl_buf_left, table_header)) < 0)
+ HGOTO_ERROR(H5E_CACHE, H5E_SYSERRSTR, FAIL, "can't add to candidate list");
+ assert((size_t)bytes_printed < tbl_buf_left);
+ tbl_buf_left -= (size_t)bytes_printed;
+
+ for (u = 0; u <= (unsigned)mpi_size; u++) {
+ if ((bytes_printed = snprintf(&(tbl_buf[tbl_buf_size - tbl_buf_left]), tbl_buf_left, " %u",
+ candidate_assignment_table[u])) < 0)
+ HGOTO_ERROR(H5E_CACHE, H5E_SYSERRSTR, FAIL, "can't add to candidate list");
+ assert((size_t)bytes_printed < tbl_buf_left);
+ tbl_buf_left -= (size_t)bytes_printed;
+ }
+
+ if ((bytes_printed = snprintf(&(tbl_buf[tbl_buf_size - tbl_buf_left]), tbl_buf_left, "\n")) < 0)
+ HGOTO_ERROR(H5E_CACHE, H5E_SYSERRSTR, FAIL, "can't add to candidate list");
+ assert((size_t)bytes_printed < tbl_buf_left);
+ tbl_buf_left -= (size_t)bytes_printed + 1; /* NUL terminator */
+
+ fprintf(stdout, "%s", tbl_buf);
+
+ H5MM_free(tbl_buf);
+ tbl_buf = NULL;
+
+ fprintf(stdout, "%s:%d: flush entries [%u, %u].\n", __func__, mpi_rank, first_entry_to_flush,
+ last_entry_to_flush);
+
+ fprintf(stdout, "%s:%d: marking entries.\n", __func__, mpi_rank);
+ }
#endif /* H5C_APPLY_CANDIDATE_LIST__DEBUG */
for (u = 0; u < num_candidates; u++) {
@@ -354,6 +424,19 @@ H5C_apply_candidate_list(H5F_t *f, H5C_t *cache_ptr, unsigned num_candidates, ha
num_candidates, total_entries_to_clear, total_entries_to_flush);
#endif /* H5C_APPLY_CANDIDATE_LIST__DEBUG */
+ /*
+ * If collective I/O was requested, but collective metadata
+ * writes were not requested, temporarily disable collective
+ * I/O while flushing candidate entries so that we don't cause
+ * a hang in the case where the number of candidate entries
+ * to flush isn't a multiple of mpi_size.
+ */
+ if ((orig_xfer_mode == H5FD_MPIO_COLLECTIVE) && !f->shared->coll_md_write) {
+ if (H5CX_set_io_xfer_mode(H5FD_MPIO_INDEPENDENT) < 0)
+ HGOTO_ERROR(H5E_CACHE, H5E_CANTSET, FAIL, "can't set MPI-I/O transfer mode");
+ restore_io_mode = true;
+ }
+
/* We have now marked all the entries on the candidate list for
* either flush or clear -- now scan the LRU and the pinned list
* for these entries and do the deed. Do this via a call to
@@ -367,6 +450,13 @@ H5C_apply_candidate_list(H5F_t *f, H5C_t *cache_ptr, unsigned num_candidates, ha
if (H5C__flush_candidate_entries(f, entries_to_flush, entries_to_clear) < 0)
HGOTO_ERROR(H5E_CACHE, H5E_CANTFLUSH, FAIL, "flush candidates failed");
+ /* Restore collective I/O if we temporarily disabled it */
+ if (restore_io_mode) {
+ if (H5CX_set_io_xfer_mode(orig_xfer_mode) < 0)
+ HGOTO_ERROR(H5E_CACHE, H5E_CANTSET, FAIL, "can't set MPI-I/O transfer mode");
+ restore_io_mode = false;
+ }
+
/* If we've deferred writing to do it collectively, take care of that now */
if (f->shared->coll_md_write) {
/* Sanity check */
@@ -378,6 +468,10 @@ H5C_apply_candidate_list(H5F_t *f, H5C_t *cache_ptr, unsigned num_candidates, ha
} /* end if */
done:
+ /* Restore collective I/O if we temporarily disabled it */
+ if (restore_io_mode && (H5CX_set_io_xfer_mode(orig_xfer_mode) < 0))
+ HDONE_ERROR(H5E_CACHE, H5E_CANTSET, FAIL, "can't set MPI-I/O transfer mode");
+
if (candidate_assignment_table != NULL)
candidate_assignment_table = (unsigned *)H5MM_xfree((void *)candidate_assignment_table);
if (cache_ptr->coll_write_list) {
diff --git a/src/H5Dbtree.c b/src/H5Dbtree.c
index d79f7d0..4f8a867 100644
--- a/src/H5Dbtree.c
+++ b/src/H5Dbtree.c
@@ -24,30 +24,32 @@
/***********/
/* Headers */
/***********/
-#include "H5private.h" /* Generic Functions */
-#include "H5Bprivate.h" /* B-link trees */
-#include "H5Dpkg.h" /* Datasets */
-#include "H5Eprivate.h" /* Error handling */
-#include "H5Fprivate.h" /* Files */
-#include "H5FDprivate.h" /* File drivers */
+#include "H5private.h" /* Generic Functions */
+#include "H5Bprivate.h" /* B-link trees */
+#include "H5Dpkg.h" /* Datasets */
+#include "H5Eprivate.h" /* Error handling */
+#include "H5Fprivate.h" /* Files */
+#include "H5FDprivate.h" /* File drivers */
#include "H5FLprivate.h" /* Free Lists */
-#include "H5Iprivate.h" /* IDs */
-#include "H5MFprivate.h" /* File space management */
-#include "H5MMprivate.h" /* Memory management */
-#include "H5Oprivate.h" /* Object headers */
+#include "H5Iprivate.h" /* IDs */
+#include "H5MFprivate.h" /* File space management */
+#include "H5MMprivate.h" /* Memory management */
+#include "H5Oprivate.h" /* Object headers */
#include "H5Sprivate.h" /* Dataspaces */
-#include "H5VMprivate.h" /* Vector and array functions */
+#include "H5VMprivate.h" /* Vector and array functions */
/****************/
/* Local Macros */
/****************/
+#define H5D_BTREE_IDX_IS_OPEN(idx_info) (NULL != (idx_info)->storage->u.btree.shared)
+
/******************/
/* Local Typedefs */
/******************/
/*
- * B-tree key. A key contains the minimum logical N-dimensional coordinates and
+ * B-tree key. A key contains the minimum logical N-dimensional coordinates and
* the logical size of the chunk to which this key refers. The
* fastest-varying dimension is assumed to reference individual bytes of the
* array, so a 100-element 1-d array of 4-byte integers would really be a 2-d
@@ -61,9 +63,9 @@
* The chunk's file address is part of the B-tree and not part of the key.
*/
typedef struct H5D_btree_key_t {
- hsize_t scaled[H5O_LAYOUT_NDIMS]; /*logical offset to start*/
- uint32_t nbytes; /*size of stored data */
- unsigned filter_mask; /*excluded filters */
+ hsize_t scaled[H5O_LAYOUT_NDIMS]; /*logical offset to start */
+ uint32_t nbytes; /*size of stored data */
+ unsigned filter_mask; /*excluded filters */
} H5D_btree_key_t;
/* B-tree callback info for iteration over chunks */
@@ -111,10 +113,14 @@ static herr_t H5D__btree_debug_key(FILE *stream, int indent, int fwidth, const v
static herr_t H5D__btree_idx_init(const H5D_chk_idx_info_t *idx_info, const H5S_t *space,
haddr_t dset_ohdr_addr);
static herr_t H5D__btree_idx_create(const H5D_chk_idx_info_t *idx_info);
+static herr_t H5D__btree_idx_open(const H5D_chk_idx_info_t *idx_info);
+static herr_t H5D__btree_idx_close(const H5D_chk_idx_info_t *idx_info);
+static herr_t H5D__btree_idx_is_open(const H5D_chk_idx_info_t *idx_info, bool *is_open);
static bool H5D__btree_idx_is_space_alloc(const H5O_storage_chunk_t *storage);
static herr_t H5D__btree_idx_insert(const H5D_chk_idx_info_t *idx_info, H5D_chunk_ud_t *udata,
const H5D_t *dset);
static herr_t H5D__btree_idx_get_addr(const H5D_chk_idx_info_t *idx_info, H5D_chunk_ud_t *udata);
+static herr_t H5D__btree_idx_load_metadata(const H5D_chk_idx_info_t *idx_info);
static int H5D__btree_idx_iterate(const H5D_chk_idx_info_t *idx_info, H5D_chunk_cb_func_t chunk_cb,
void *chunk_udata);
static herr_t H5D__btree_idx_remove(const H5D_chk_idx_info_t *idx_info, H5D_chunk_common_ud_t *udata);
@@ -137,9 +143,13 @@ const H5D_chunk_ops_t H5D_COPS_BTREE[1] = {{
false, /* v1 B-tree indices does not support SWMR access */
H5D__btree_idx_init, /* insert */
H5D__btree_idx_create, /* create */
+ H5D__btree_idx_open, /* open */
+ H5D__btree_idx_close, /* close */
+ H5D__btree_idx_is_open, /* is_open */
H5D__btree_idx_is_space_alloc, /* is_space_alloc */
H5D__btree_idx_insert, /* insert */
H5D__btree_idx_get_addr, /* get_addr */
+ H5D__btree_idx_load_metadata, /* load_metadata */
NULL, /* resize */
H5D__btree_idx_iterate, /* iterate */
H5D__btree_idx_remove, /* remove */
@@ -158,21 +168,21 @@ const H5D_chunk_ops_t H5D_COPS_BTREE[1] = {{
/* inherits B-tree like properties from H5B */
static H5B_class_t H5B_BTREE[1] = {{
- H5B_CHUNK_ID, /*id */
- sizeof(H5D_btree_key_t), /*sizeof_nkey */
- H5D__btree_get_shared, /*get_shared */
- H5D__btree_new_node, /*new */
- H5D__btree_cmp2, /*cmp2 */
- H5D__btree_cmp3, /*cmp3 */
- H5D__btree_found, /*found */
- H5D__btree_insert, /*insert */
- false, /*follow min branch? */
- false, /*follow max branch? */
- H5B_LEFT, /*critical key */
- H5D__btree_remove, /*remove */
- H5D__btree_decode_key, /*decode */
- H5D__btree_encode_key, /*encode */
- H5D__btree_debug_key /*debug */
+ H5B_CHUNK_ID, /* id */
+ sizeof(H5D_btree_key_t), /* sizeof_nkey */
+ H5D__btree_get_shared, /* get_shared */
+ H5D__btree_new_node, /* new */
+ H5D__btree_cmp2, /* cmp2 */
+ H5D__btree_cmp3, /* cmp3 */
+ H5D__btree_found, /* found */
+ H5D__btree_insert, /* insert */
+ false, /* follow min branch? */
+ false, /* follow max branch? */
+ H5B_LEFT, /* critical key */
+ H5D__btree_remove, /* remove */
+ H5D__btree_decode_key, /* decode */
+ H5D__btree_encode_key, /* encode */
+ H5D__btree_debug_key /* debug */
}};
/*******************/
@@ -183,13 +193,13 @@ static H5B_class_t H5B_BTREE[1] = {{
H5FL_DEFINE_STATIC(H5O_layout_chunk_t);
/*-------------------------------------------------------------------------
- * Function: H5D__btree_get_shared
+ * Function: H5D__btree_get_shared
*
- * Purpose: Returns the shared B-tree info for the specified UDATA.
+ * Purpose: Returns the shared B-tree info for the specified UDATA.
*
- * Return: Success: Pointer to the raw B-tree page for this dataset
+ * Return: Success: Pointer to the raw B-tree page for this dataset
*
- * Failure: Can't fail
+ * Failure: Can't fail
*
*-------------------------------------------------------------------------
*/
@@ -210,17 +220,17 @@ H5D__btree_get_shared(const H5F_t H5_ATTR_UNUSED *f, const void *_udata)
} /* end H5D__btree_get_shared() */
/*-------------------------------------------------------------------------
- * Function: H5D__btree_new_node
+ * Function: H5D__btree_new_node
*
- * Purpose: Adds a new entry to an i-storage B-tree. We can assume that
- * the domain represented by UDATA doesn't intersect the domain
- * already represented by the B-tree.
+ * Purpose: Adds a new entry to an i-storage B-tree. We can assume
+ * that the domain represented by UDATA doesn't intersect the
+ * domain already represented by the B-tree.
*
- * Return: Success: Non-negative. The address of leaf is returned
- * through the ADDR argument. It is also added
- * to the UDATA.
+ * Return: Success: Non-negative. The address of leaf is returned
+ * through the ADDR argument. It is also added
+ * to the UDATA.
*
- * Failure: Negative
+ * Failure: Negative
*
*-------------------------------------------------------------------------
*/
@@ -275,18 +285,18 @@ H5D__btree_new_node(H5F_t H5_ATTR_NDEBUG_UNUSED *f, H5B_ins_t op, void *_lt_key,
} /* end H5D__btree_new_node() */
/*-------------------------------------------------------------------------
- * Function: H5D__btree_cmp2
+ * Function: H5D__btree_cmp2
*
- * Purpose: Compares two keys sort of like strcmp(). The UDATA pointer
- * is only to supply extra information not carried in the keys
- * (in this case, the dimensionality) and is not compared
- * against the keys.
+ * Purpose: Compares two keys sort of like strcmp(). The UDATA pointer
+ * is only to supply extra information not carried in the keys
+ * (in this case, the dimensionality) and is not compared
+ * against the keys.
*
- * Return: Success: -1 if LT_KEY is less than RT_KEY;
- * 1 if LT_KEY is greater than RT_KEY;
- * 0 if LT_KEY and RT_KEY are equal.
+ * Return: Success: -1 if LT_KEY is less than RT_KEY;
+ * 1 if LT_KEY is greater than RT_KEY;
+ * 0 if LT_KEY and RT_KEY are equal.
*
- * Failure: FAIL (same as LT_KEY<RT_KEY)
+ * Failure: FAIL (same as LT_KEY < RT_KEY)
*
*-------------------------------------------------------------------------
*/
@@ -312,26 +322,26 @@ H5D__btree_cmp2(void *_lt_key, void *_udata, void *_rt_key)
} /* end H5D__btree_cmp2() */
/*-------------------------------------------------------------------------
- * Function: H5D__btree_cmp3
+ * Function: H5D__btree_cmp3
*
- * Purpose: Compare the requested datum UDATA with the left and right
- * keys of the B-tree.
+ * Purpose: Compare the requested datum UDATA with the left and right
+ * keys of the B-tree.
*
- * Return: Success: negative if the min_corner of UDATA is less
- * than the min_corner of LT_KEY.
+ * Return: Success: negative if the min_corner of UDATA is less
+ * than the min_corner of LT_KEY.
*
- * positive if the min_corner of UDATA is
- * greater than or equal the min_corner of
- * RT_KEY.
+ * positive if the min_corner of UDATA is
+ * greater than or equal the min_corner of
+ * RT_KEY.
*
- * zero otherwise. The min_corner of UDATA is
- * not necessarily contained within the address
- * space represented by LT_KEY, but a key that
- * would describe the UDATA min_corner address
- * would fall lexicographically between LT_KEY
- * and RT_KEY.
+ * zero otherwise. The min_corner of UDATA is
+ * not necessarily contained within the address
+ * space represented by LT_KEY, but a key that
+ * would describe the UDATA min_corner address
+ * would fall lexicographically between LT_KEY
+ * and RT_KEY.
*
- * Failure: FAIL (same as UDATA < LT_KEY)
+ * Failure: FAIL (same as UDATA < LT_KEY)
*
*-------------------------------------------------------------------------
*/
@@ -375,23 +385,24 @@ H5D__btree_cmp3(void *_lt_key, void *_udata, void *_rt_key)
} /* end H5D__btree_cmp3() */
/*-------------------------------------------------------------------------
- * Function: H5D__btree_found
- *
- * Purpose: This function is called when the B-tree search engine has
- * found the leaf entry that points to a chunk of storage that
- * contains the beginning of the logical address space
- * represented by UDATA. The LT_KEY is the left key (the one
- * that describes the chunk) and RT_KEY is the right key (the
- * one that describes the next or last chunk).
- *
- * Note: It's possible that the chunk isn't really found. For
- * instance, in a sparse dataset the requested chunk might fall
- * between two stored chunks in which case this function is
- * called with the maximum stored chunk indices less than the
- * requested chunk indices.
- *
- * Return: Non-negative on success with information about the
- * chunk returned through the UDATA argument, if *FOUND is true.
+ * Function: H5D__btree_found
+ *
+ * Purpose: This function is called when the B-tree search engine has
+ * found the leaf entry that points to a chunk of storage that
+ * contains the beginning of the logical address space
+ * represented by UDATA. The LT_KEY is the left key (the one
+ * that describes the chunk) and RT_KEY is the right key (the
+ * one that describes the next or last chunk).
+ *
+ * Note: It's possible that the chunk isn't really found. For
+ * instance, in a sparse dataset the requested chunk might fall
+ * between two stored chunks in which case this function is
+ * called with the maximum stored chunk indices less than the
+ * requested chunk indices.
+ *
+ * Return: Non-negative on success with information about the
+ * chunk returned through the UDATA argument, if *FOUND is
+ * true.
* Negative on failure.
*
*-------------------------------------------------------------------------
@@ -432,14 +443,14 @@ done:
} /* end H5D__btree_found() */
/*-------------------------------------------------------------------------
- * Function: H5D__chunk_disjoint
+ * Function: H5D__chunk_disjoint
*
- * Purpose: Determines if two chunks are disjoint.
+ * Purpose: Determines if two chunks are disjoint.
*
- * Return: Success: false if they are not disjoint.
- * true if they are disjoint.
+ * Return: Success: false if they are not disjoint.
+ * true if they are disjoint.
*
- * Note: Assumes that the chunk offsets are scaled coordinates
+ * Note: Assumes that the chunk offsets are scaled coordinates
*
*-------------------------------------------------------------------------
*/
@@ -466,27 +477,27 @@ done:
} /* end H5D__chunk_disjoint() */
/*-------------------------------------------------------------------------
- * Function: H5D__btree_insert
+ * Function: H5D__btree_insert
*
- * Purpose: This function is called when the B-tree insert engine finds
- * the node to use to insert new data. The UDATA argument
- * points to a struct that describes the logical addresses being
- * added to the file. This function allocates space for the
- * data and returns information through UDATA describing a
- * file chunk to receive (part of) the data.
+ * Purpose: This function is called when the B-tree insert engine finds
+ * the node to use to insert new data. The UDATA argument
+ * points to a struct that describes the logical addresses being
+ * added to the file. This function allocates space for the
+ * data and returns information through UDATA describing a
+ * file chunk to receive (part of) the data.
*
- * The LT_KEY is always the key describing the chunk of file
- * memory at address ADDR. On entry, UDATA describes the logical
- * addresses for which storage is being requested (through the
- * `offset' and `size' fields). On return, UDATA describes the
- * logical addresses contained in a chunk on disk.
+ * The LT_KEY is always the key describing the chunk of file
+ * memory at address ADDR. On entry, UDATA describes the logical
+ * addresses for which storage is being requested (through the
+ * `offset' and `size' fields). On return, UDATA describes the
+ * logical addresses contained in a chunk on disk.
*
- * Return: Success: An insertion command for the caller, one of
- * the H5B_INS_* constants. The address of the
- * new chunk is returned through the NEW_NODE
- * argument.
+ * Return: Success: An insertion command for the caller, one of
+ * the H5B_INS_* constants. The address of the
+ * new chunk is returned through the NEW_NODE
+ * argument.
*
- * Failure: H5B_INS_ERROR
+ * Failure: H5B_INS_ERROR
*
*-------------------------------------------------------------------------
*/
@@ -567,11 +578,11 @@ done:
} /* end H5D__btree_insert() */
/*-------------------------------------------------------------------------
- * Function: H5D__btree_remove
+ * Function: H5D__btree_remove
*
- * Purpose: Removes chunks that are no longer necessary in the B-tree.
+ * Purpose: Removes chunks that are no longer necessary in the B-tree.
*
- * Return: Non-negative on success/Negative on failure
+ * Return: Non-negative on success/Negative on failure
*
*-------------------------------------------------------------------------
*/
@@ -645,11 +656,11 @@ done:
} /* end H5D__btree_decode_key() */
/*-------------------------------------------------------------------------
- * Function: H5D__btree_encode_key
+ * Function: H5D__btree_encode_key
*
- * Purpose: Encode a key from native format to raw format.
+ * Purpose: Encode a key from native format to raw format.
*
- * Return: Non-negative on success/Negative on failure
+ * Return: Non-negative on success/Negative on failure
*
*-------------------------------------------------------------------------
*/
@@ -684,11 +695,11 @@ H5D__btree_encode_key(const H5B_shared_t *shared, uint8_t *raw, const void *_key
} /* end H5D__btree_encode_key() */
/*-------------------------------------------------------------------------
- * Function: H5D__btree_debug_key
+ * Function: H5D__btree_debug_key
*
- * Purpose: Prints a key.
+ * Purpose: Prints a key.
*
- * Return: Non-negative on success/Negative on failure
+ * Return: Non-negative on success/Negative on failure
*
*-------------------------------------------------------------------------
*/
@@ -714,11 +725,11 @@ H5D__btree_debug_key(FILE *stream, int indent, int fwidth, const void *_key, con
} /* end H5D__btree_debug_key() */
/*-------------------------------------------------------------------------
- * Function: H5D__btree_shared_free
+ * Function: H5D__btree_shared_free
*
- * Purpose: Free "local" B-tree shared info
+ * Purpose: Free "local" B-tree shared info
*
- * Return: Non-negative on success/Negative on failure
+ * Return: Non-negative on success/Negative on failure
*
*-------------------------------------------------------------------------
*/
@@ -742,11 +753,11 @@ done:
} /* end H5D__btree_shared_free() */
/*-------------------------------------------------------------------------
- * Function: H5D__btree_shared_create
+ * Function: H5D__btree_shared_create
*
- * Purpose: Create & initialize B-tree shared info
+ * Purpose: Create & initialize B-tree shared info
*
- * Return: Non-negative on success/Negative on failure
+ * Return: Non-negative on success/Negative on failure
*
*-------------------------------------------------------------------------
*/
@@ -788,11 +799,11 @@ done:
} /* end H5D__btree_shared_create() */
/*-------------------------------------------------------------------------
- * Function: H5D__btree_idx_init
+ * Function: H5D__btree_idx_init
*
- * Purpose: Initialize the indexing information for a dataset.
+ * Purpose: Initialize the indexing information for a dataset.
*
- * Return: Non-negative on success/Negative on failure
+ * Return: Non-negative on success/Negative on failure
*
*-------------------------------------------------------------------------
*/
@@ -823,17 +834,18 @@ done:
} /* end H5D__btree_idx_init() */
/*-------------------------------------------------------------------------
- * Function: H5D__btree_idx_create
+ * Function: H5D__btree_idx_create
*
- * Purpose: Creates a new indexed-storage B-tree and initializes the
- * layout struct with information about the storage. The
- * struct should be immediately written to the object header.
+ * Purpose: Creates a new indexed-storage B-tree and initializes the
+ * layout struct with information about the storage. The
+ * struct should be immediately written to the object header.
*
- * This function must be called before passing LAYOUT to any of
- * the other indexed storage functions!
+ * This function must be called before passing LAYOUT to any
+ * of the other indexed storage functions!
*
- * Return: Non-negative on success (with the LAYOUT argument initialized
- * and ready to write to an object header). Negative on failure.
+ * Return: Non-negative on success (with the LAYOUT argument
+ * initialized and ready to write to an object header).
+ * Negative on failure.
*
*-------------------------------------------------------------------------
*/
@@ -866,11 +878,73 @@ done:
} /* end H5D__btree_idx_create() */
/*-------------------------------------------------------------------------
- * Function: H5D__btree_idx_is_space_alloc
+ * Function: H5D__btree_idx_open
+ *
+ * Purpose: Opens an existing B-tree. Currently a no-op.
+ *
+ * Return: SUCCEED (can't fail)
+ *
+ *-------------------------------------------------------------------------
+ */
+static herr_t
+H5D__btree_idx_open(const H5D_chk_idx_info_t H5_ATTR_UNUSED *idx_info)
+{
+ FUNC_ENTER_PACKAGE_NOERR
+
+ /* NO OP */
+
+ FUNC_LEAVE_NOAPI(SUCCEED)
+} /* end H5D__btree_idx_open() */
+
+/*-------------------------------------------------------------------------
+ * Function: H5D__btree_idx_close
*
- * Purpose: Query if space is allocated for index method
+ * Purpose: Closes an existing B-tree. Currently a no-op.
*
- * Return: Non-negative on success/Negative on failure
+ * Return: SUCCEED (can't fail)
+ *
+ *-------------------------------------------------------------------------
+ */
+static herr_t
+H5D__btree_idx_close(const H5D_chk_idx_info_t H5_ATTR_UNUSED *idx_info)
+{
+ FUNC_ENTER_PACKAGE_NOERR
+
+ /* NO OP */
+
+ FUNC_LEAVE_NOAPI(SUCCEED)
+} /* end H5D__btree_idx_close() */
+
+/*-------------------------------------------------------------------------
+ * Function: H5D__btree_idx_is_open
+ *
+ * Purpose: Query whether the index is open
+ *
+ * Return: SUCCEED (can't fail)
+ *
+ *-------------------------------------------------------------------------
+ */
+static herr_t
+H5D__btree_idx_is_open(const H5D_chk_idx_info_t *idx_info, bool *is_open)
+{
+ FUNC_ENTER_PACKAGE_NOERR
+
+ assert(idx_info);
+ assert(idx_info->storage);
+ assert(H5D_CHUNK_IDX_BTREE == idx_info->storage->idx_type);
+ assert(is_open);
+
+ *is_open = H5D_BTREE_IDX_IS_OPEN(idx_info);
+
+ FUNC_LEAVE_NOAPI(SUCCEED)
+} /* end H5D__btree_idx_is_open() */
+
+/*-------------------------------------------------------------------------
+ * Function: H5D__btree_idx_is_space_alloc
+ *
+ * Purpose: Query if space is allocated for index method
+ *
+ * Return: Non-negative on success/Negative on failure
*
*-------------------------------------------------------------------------
*/
@@ -886,11 +960,11 @@ H5D__btree_idx_is_space_alloc(const H5O_storage_chunk_t *storage)
} /* end H5D__btree_idx_is_space_alloc() */
/*-------------------------------------------------------------------------
- * Function: H5D__btree_idx_insert
+ * Function: H5D__btree_idx_insert
*
- * Purpose: Insert chunk entry into the indexing structure.
+ * Purpose: Insert chunk entry into the indexing structure.
*
- * Return: Non-negative on success/Negative on failure
+ * Return: Non-negative on success/Negative on failure
*
*-------------------------------------------------------------------------
*/
@@ -922,13 +996,13 @@ done:
} /* H5D__btree_idx_insert() */
/*-------------------------------------------------------------------------
- * Function: H5D__btree_idx_get_addr
+ * Function: H5D__btree_idx_get_addr
*
- * Purpose: Get the file address of a chunk if file space has been
- * assigned. Save the retrieved information in the udata
- * supplied.
+ * Purpose: Get the file address of a chunk if file space has been
+ * assigned. Save the retrieved information in the udata
+ * supplied.
*
- * Return: Non-negative on success/Negative on failure
+ * Return: Non-negative on success/Negative on failure
*
*-------------------------------------------------------------------------
*/
@@ -959,14 +1033,34 @@ done:
} /* H5D__btree_idx_get_addr() */
/*-------------------------------------------------------------------------
- * Function: H5D__btree_idx_iterate_cb
+ * Function: H5D__btree_idx_load_metadata
+ *
+ * Purpose: Load additional chunk index metadata beyond the chunk index
+ * itself. Currently a no-op.
+ *
+ * Return: Non-negative on success/Negative on failure
+ *
+ *-------------------------------------------------------------------------
+ */
+static herr_t
+H5D__btree_idx_load_metadata(const H5D_chk_idx_info_t H5_ATTR_UNUSED *idx_info)
+{
+ FUNC_ENTER_PACKAGE_NOERR
+
+ /* NO OP */
+
+ FUNC_LEAVE_NOAPI(SUCCEED)
+} /* end H5D__btree_idx_load_metadata() */
+
+/*-------------------------------------------------------------------------
+ * Function: H5D__btree_idx_iterate_cb
*
- * Purpose: Translate the B-tree specific chunk record into a generic
+ * Purpose: Translate the B-tree specific chunk record into a generic
* form and make the callback to the generic chunk callback
* routine.
*
- * Return: Success: Non-negative
- * Failure: Negative
+ * Return: Success: Non-negative
+ * Failure: Negative
*
*-------------------------------------------------------------------------
*/
@@ -1001,12 +1095,12 @@ H5D__btree_idx_iterate_cb(H5F_t H5_ATTR_UNUSED *f, const void *_lt_key, haddr_t
} /* H5D__btree_idx_iterate_cb() */
/*-------------------------------------------------------------------------
- * Function: H5D__btree_idx_iterate
+ * Function: H5D__btree_idx_iterate
*
- * Purpose: Iterate over the chunks in an index, making a callback
+ * Purpose: Iterate over the chunks in an index, making a callback
* for each one.
*
- * Return: Non-negative on success/Negative on failure
+ * Return: Non-negative on success/Negative on failure
*
*-------------------------------------------------------------------------
*/
@@ -1043,11 +1137,11 @@ H5D__btree_idx_iterate(const H5D_chk_idx_info_t *idx_info, H5D_chunk_cb_func_t c
} /* end H5D__btree_idx_iterate() */
/*-------------------------------------------------------------------------
- * Function: H5D__btree_idx_remove
+ * Function: H5D__btree_idx_remove
*
- * Purpose: Remove chunk from index.
+ * Purpose: Remove chunk from index.
*
- * Return: Non-negative on success/Negative on failure
+ * Return: Non-negative on success/Negative on failure
*
*-------------------------------------------------------------------------
*/
@@ -1077,13 +1171,13 @@ done:
} /* H5D__btree_idx_remove() */
/*-------------------------------------------------------------------------
- * Function: H5D__btree_idx_delete
+ * Function: H5D__btree_idx_delete
*
- * Purpose: Delete index and raw data storage for entire dataset
+ * Purpose: Delete index and raw data storage for entire dataset
* (i.e. all chunks)
*
- * Return: Success: Non-negative
- * Failure: negative
+ * Return: Success: Non-negative
+ * Failure: negative
*
*-------------------------------------------------------------------------
*/
@@ -1134,11 +1228,11 @@ done:
} /* end H5D__btree_idx_delete() */
/*-------------------------------------------------------------------------
- * Function: H5D__btree_idx_copy_setup
+ * Function: H5D__btree_idx_copy_setup
*
- * Purpose: Set up any necessary information for copying chunks
+ * Purpose: Set up any necessary information for copying chunks
*
- * Return: Non-negative on success/Negative on failure
+ * Return: Non-negative on success/Negative on failure
*
*-------------------------------------------------------------------------
*/
@@ -1178,11 +1272,11 @@ done:
} /* end H5D__btree_idx_copy_setup() */
/*-------------------------------------------------------------------------
- * Function: H5D__btree_idx_copy_shutdown
+ * Function: H5D__btree_idx_copy_shutdown
*
- * Purpose: Shutdown any information from copying chunks
+ * Purpose: Shutdown any information from copying chunks
*
- * Return: Non-negative on success/Negative on failure
+ * Return: Non-negative on success/Negative on failure
*
*-------------------------------------------------------------------------
*/
@@ -1250,11 +1344,11 @@ done:
} /* end H5D__btree_idx_size() */
/*-------------------------------------------------------------------------
- * Function: H5D__btree_idx_reset
+ * Function: H5D__btree_idx_reset
*
- * Purpose: Reset indexing information.
+ * Purpose: Reset indexing information.
*
- * Return: Non-negative on success/Negative on failure
+ * Return: Non-negative on success/Negative on failure
*
*-------------------------------------------------------------------------
*/
@@ -1274,11 +1368,11 @@ H5D__btree_idx_reset(H5O_storage_chunk_t *storage, bool reset_addr)
} /* end H5D__btree_idx_reset() */
/*-------------------------------------------------------------------------
- * Function: H5D__btree_idx_dump
+ * Function: H5D__btree_idx_dump
*
- * Purpose: Dump indexing information to a stream.
+ * Purpose: Dump indexing information to a stream.
*
- * Return: Non-negative on success/Negative on failure
+ * Return: Non-negative on success/Negative on failure
*
*-------------------------------------------------------------------------
*/
@@ -1296,11 +1390,11 @@ H5D__btree_idx_dump(const H5O_storage_chunk_t *storage, FILE *stream)
} /* end H5D__btree_idx_dump() */
/*-------------------------------------------------------------------------
- * Function: H5D__btree_idx_dest
+ * Function: H5D__btree_idx_dest
*
- * Purpose: Release indexing information in memory.
+ * Purpose: Release indexing information in memory.
*
- * Return: Non-negative on success/Negative on failure
+ * Return: Non-negative on success/Negative on failure
*
*-------------------------------------------------------------------------
*/
@@ -1328,11 +1422,11 @@ done:
} /* end H5D__btree_idx_dest() */
/*-------------------------------------------------------------------------
- * Function: H5D_btree_debug
+ * Function: H5D_btree_debug
*
- * Purpose: Debugs a B-tree node for indexed raw data storage.
+ * Purpose: Debugs a B-tree node for indexed raw data storage.
*
- * Return: Non-negative on success/Negative on failure
+ * Return: Non-negative on success/Negative on failure
*
*-------------------------------------------------------------------------
*/
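
For reference, the v1 B-tree changes above establish the pattern that each chunk index in this patch follows for the new 'is_open' callback: an IS_OPEN macro tests whether the index's cached in-memory structure exists, and the callback simply reports that through *is_open. The sketch below shows the shape only; 'my_idx' and the 'u.my_index.handle' field are hypothetical stand-ins for the real per-index members (u.btree.shared, u.btree2.bt2, u.earray.ea).

#define MY_IDX_IS_OPEN(idx_info) (NULL != (idx_info)->storage->u.my_index.handle)

static herr_t
my_idx_is_open(const H5D_chk_idx_info_t *idx_info, bool *is_open)
{
    FUNC_ENTER_PACKAGE_NOERR

    /* The index is "open" once a prior open (or init) call has cached its handle */
    assert(idx_info);
    assert(idx_info->storage);
    assert(is_open);

    *is_open = MY_IDX_IS_OPEN(idx_info);

    FUNC_LEAVE_NOAPI(SUCCEED)
} /* end my_idx_is_open() */
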
diff --git a/src/H5Dbtree2.c b/src/H5Dbtree2.c
index 4da9555..7a26b6d 100644
--- a/src/H5Dbtree2.c
+++ b/src/H5Dbtree2.c
@@ -27,16 +27,18 @@
/* Headers */
/***********/
#include "H5private.h" /* Generic Functions */
-#include "H5Dpkg.h" /* Datasets */
+#include "H5Dpkg.h" /* Datasets */
#include "H5FLprivate.h" /* Free Lists */
#include "H5MFprivate.h" /* File space management */
-#include "H5MMprivate.h" /* Memory management */
-#include "H5VMprivate.h" /* Vector and array functions */
+#include "H5MMprivate.h" /* Memory management */
+#include "H5VMprivate.h" /* Vector and array functions */
/****************/
/* Local Macros */
/****************/
+#define H5D_BT2_IDX_IS_OPEN(idx_info) (NULL != (idx_info)->storage->u.btree2.bt2)
+
/******************/
/* Local Typedefs */
/******************/
@@ -92,7 +94,6 @@ static herr_t H5D__bt2_filt_debug(FILE *stream, int indent, int fwidth, const vo
const void *u_ctx);
/* Helper routine */
-static herr_t H5D__bt2_idx_open(const H5D_chk_idx_info_t *idx_info);
static herr_t H5D__btree2_idx_depend(const H5D_chk_idx_info_t *idx_info);
/* Callback for H5B2_iterate() which is called in H5D__bt2_idx_iterate() */
@@ -114,10 +115,14 @@ static herr_t H5D__bt2_mod_cb(void *_record, void *_op_data, bool *changed);
static herr_t H5D__bt2_idx_init(const H5D_chk_idx_info_t *idx_info, const H5S_t *space,
haddr_t dset_ohdr_addr);
static herr_t H5D__bt2_idx_create(const H5D_chk_idx_info_t *idx_info);
+static herr_t H5D__bt2_idx_open(const H5D_chk_idx_info_t *idx_info);
+static herr_t H5D__bt2_idx_close(const H5D_chk_idx_info_t *idx_info);
+static herr_t H5D__bt2_idx_is_open(const H5D_chk_idx_info_t *idx_info, bool *is_open);
static bool H5D__bt2_idx_is_space_alloc(const H5O_storage_chunk_t *storage);
static herr_t H5D__bt2_idx_insert(const H5D_chk_idx_info_t *idx_info, H5D_chunk_ud_t *udata,
const H5D_t *dset);
static herr_t H5D__bt2_idx_get_addr(const H5D_chk_idx_info_t *idx_info, H5D_chunk_ud_t *udata);
+static herr_t H5D__bt2_idx_load_metadata(const H5D_chk_idx_info_t *idx_info);
static int H5D__bt2_idx_iterate(const H5D_chk_idx_info_t *idx_info, H5D_chunk_cb_func_t chunk_cb,
void *chunk_udata);
static herr_t H5D__bt2_idx_remove(const H5D_chk_idx_info_t *idx_info, H5D_chunk_common_ud_t *udata);
@@ -139,9 +144,13 @@ const H5D_chunk_ops_t H5D_COPS_BT2[1] = {{
true, /* v2 B-tree indices support SWMR access */
H5D__bt2_idx_init, /* init */
H5D__bt2_idx_create, /* create */
+ H5D__bt2_idx_open, /* open */
+ H5D__bt2_idx_close, /* close */
+ H5D__bt2_idx_is_open, /* is_open */
H5D__bt2_idx_is_space_alloc, /* is_space_alloc */
H5D__bt2_idx_insert, /* insert */
H5D__bt2_idx_get_addr, /* get_addr */
+ H5D__bt2_idx_load_metadata, /* load_metadata */
NULL, /* resize */
H5D__bt2_idx_iterate, /* iterate */
H5D__bt2_idx_remove, /* remove */
@@ -203,8 +212,8 @@ H5FL_ARR_DEFINE_STATIC(uint32_t, H5O_LAYOUT_NDIMS);
*
* Purpose: Create client callback context
*
- * Return: Success: non-NULL
- * Failure: NULL
+ * Return: Success: non-NULL
+ * Failure: NULL
*
*-------------------------------------------------------------------------
*/
@@ -258,8 +267,8 @@ done:
*
* Purpose: Destroy client callback context
*
- * Return: Success: non-negative
- * Failure: negative
+ * Return: Success: non-negative
+ * Failure: negative
*
*-------------------------------------------------------------------------
*/
@@ -286,10 +295,10 @@ H5D__bt2_dst_context(void *_ctx)
* Function: H5D__bt2_store
*
* Purpose: Store native information into record for v2 B-tree
- * (non-filtered)
+ * (non-filtered)
*
- * Return: Success: non-negative
- * Failure: negative
+ * Return: Success: non-negative
+ * Failure: negative
*
*-------------------------------------------------------------------------
*/
@@ -308,8 +317,8 @@ H5D__bt2_store(void *record, const void *_udata)
/*-------------------------------------------------------------------------
* Function: H5D__bt2_compare
*
- * Purpose: Compare two native information records, according to some key
- * (non-filtered)
+ * Purpose: Compare two native information records, according to some
+ * key (non-filtered)
*
* Return: <0 if rec1 < rec2
* =0 if rec1 == rec2
@@ -341,10 +350,10 @@ H5D__bt2_compare(const void *_udata, const void *_rec2, int *result)
* Function: H5D__bt2_unfilt_encode
*
* Purpose: Encode native information into raw form for storing on disk
- * (non-filtered)
+ * (non-filtered)
*
- * Return: Success: non-negative
- * Failure: negative
+ * Return: Success: non-negative
+ * Failure: negative
*
*-------------------------------------------------------------------------
*/
@@ -373,10 +382,10 @@ H5D__bt2_unfilt_encode(uint8_t *raw, const void *_record, void *_ctx)
* Function: H5D__bt2_unfilt_decode
*
* Purpose: Decode raw disk form of record into native form
- * (non-filtered)
+ * (non-filtered)
*
- * Return: Success: non-negative
- * Failure: negative
+ * Return: Success: non-negative
+ * Failure: negative
*
*-------------------------------------------------------------------------
*/
@@ -403,12 +412,12 @@ H5D__bt2_unfilt_decode(const uint8_t *raw, void *_record, void *_ctx)
} /* H5D__bt2_unfilt_decode() */
/*-------------------------------------------------------------------------
- * Function: H5D__bt2_unfilt_debug
+ * Function: H5D__bt2_unfilt_debug
*
- * Purpose: Debug native form of record (non-filtered)
+ * Purpose: Debug native form of record (non-filtered)
*
- * Return: Success: non-negative
- * Failure: negative
+ * Return: Success: non-negative
+ * Failure: negative
*
*-------------------------------------------------------------------------
*/
@@ -440,10 +449,10 @@ H5D__bt2_unfilt_debug(FILE *stream, int indent, int fwidth, const void *_record,
* Function: H5D__bt2_filt_encode
*
* Purpose: Encode native information into raw form for storing on disk
- * (filtered)
+ * (filtered)
*
- * Return: Success: non-negative
- * Failure: negative
+ * Return: Success: non-negative
+ * Failure: negative
*
*-------------------------------------------------------------------------
*/
@@ -473,13 +482,13 @@ H5D__bt2_filt_encode(uint8_t *raw, const void *_record, void *_ctx)
} /* H5D__bt2_filt_encode() */
/*-------------------------------------------------------------------------
- * Function: H5D__bt2_filt_decode
+ * Function: H5D__bt2_filt_decode
*
- * Purpose: Decode raw disk form of record into native form
- * (filtered)
+ * Purpose: Decode raw disk form of record into native form
+ * (filtered)
*
- * Return: Success: non-negative
- * Failure: negative
+ * Return: Success: non-negative
+ * Failure: negative
*
*-------------------------------------------------------------------------
*/
@@ -511,12 +520,12 @@ H5D__bt2_filt_decode(const uint8_t *raw, void *_record, void *_ctx)
} /* H5D__bt2_filt_decode() */
/*-------------------------------------------------------------------------
- * Function: H5D__bt2_filt_debug
+ * Function: H5D__bt2_filt_debug
*
- * Purpose: Debug native form of record (filtered)
+ * Purpose: Debug native form of record (filtered)
*
- * Return: Success: non-negative
- * Failure: negative
+ * Return: Success: non-negative
+ * Failure: negative
*
*-------------------------------------------------------------------------
*/
@@ -570,13 +579,13 @@ H5D__bt2_idx_init(const H5D_chk_idx_info_t H5_ATTR_UNUSED *idx_info, const H5S_t
} /* end H5D__bt2_idx_init() */
/*-------------------------------------------------------------------------
- * Function: H5D__btree2_idx_depend
+ * Function: H5D__btree2_idx_depend
*
- * Purpose: Create flush dependency between v2 B-tree and dataset's
+ * Purpose: Create flush dependency between v2 B-tree and dataset's
* object header.
*
- * Return: Success: non-negative
- * Failure: negative
+ * Return: Success: non-negative
+ * Failure: negative
*
*-------------------------------------------------------------------------
*/
@@ -629,63 +638,9 @@ done:
} /* end H5D__btree2_idx_depend() */
/*-------------------------------------------------------------------------
- * Function: H5D__bt2_idx_open()
- *
- * Purpose: Opens an existing v2 B-tree.
- *
- * Note: This information is passively initialized from each index
- * operation callback because those abstract chunk index operations
- * are designed to work with the v2 B-tree chunk indices also,
- * which don't require an 'open' for the data structure.
+ * Function: H5D__bt2_idx_create
*
- * Return: Success: non-negative
- * Failure: negative
- *
- *-------------------------------------------------------------------------
- */
-static herr_t
-H5D__bt2_idx_open(const H5D_chk_idx_info_t *idx_info)
-{
- H5D_bt2_ctx_ud_t u_ctx; /* user data for creating context */
- herr_t ret_value = SUCCEED; /* Return value */
-
- FUNC_ENTER_PACKAGE
-
- /* Check args */
- assert(idx_info);
- assert(idx_info->f);
- assert(idx_info->pline);
- assert(idx_info->layout);
- assert(H5D_CHUNK_IDX_BT2 == idx_info->layout->idx_type);
- assert(idx_info->storage);
- assert(H5_addr_defined(idx_info->storage->idx_addr));
- assert(NULL == idx_info->storage->u.btree2.bt2);
-
- /* Set up the user data */
- u_ctx.f = idx_info->f;
- u_ctx.ndims = idx_info->layout->ndims - 1;
- u_ctx.chunk_size = idx_info->layout->size;
- u_ctx.dim = idx_info->layout->dim;
-
- /* Open v2 B-tree for the chunk index */
- if (NULL ==
- (idx_info->storage->u.btree2.bt2 = H5B2_open(idx_info->f, idx_info->storage->idx_addr, &u_ctx)))
- HGOTO_ERROR(H5E_DATASET, H5E_CANTINIT, FAIL, "can't open v2 B-tree for tracking chunked dataset");
-
- /* Check for SWMR writes to the file */
- if (H5F_INTENT(idx_info->f) & H5F_ACC_SWMR_WRITE)
- if (H5D__btree2_idx_depend(idx_info) < 0)
- HGOTO_ERROR(H5E_DATASET, H5E_CANTDEPEND, FAIL,
- "unable to create flush dependency on object header");
-
-done:
- FUNC_LEAVE_NOAPI(ret_value)
-} /* end H5D__bt2_idx_open() */
-
-/*-------------------------------------------------------------------------
- * Function: H5D__bt2_idx_create
- *
- * Purpose: Create the v2 B-tree for tracking dataset chunks
+ * Purpose: Create the v2 B-tree for tracking dataset chunks
*
* Return: SUCCEED/FAIL
*
@@ -758,11 +713,120 @@ done:
} /* end H5D__bt2_idx_create() */
/*-------------------------------------------------------------------------
- * Function: H5D__bt2_idx_is_space_alloc
+ * Function: H5D__bt2_idx_open()
+ *
+ * Purpose: Opens an existing v2 B-tree.
*
- * Purpose: Query if space is allocated for index method
+ * Note: This information is passively initialized from each index
+ * operation callback because those abstract chunk index
+ * operations are designed to work with the v1 B-tree chunk
+ * indices also, which don't require an 'open' for the data
+ * structure.
*
- * Return: Non-negative on success/Negative on failure
+ * Return: Success: non-negative
+ * Failure: negative
+ *
+ *-------------------------------------------------------------------------
+ */
+static herr_t
+H5D__bt2_idx_open(const H5D_chk_idx_info_t *idx_info)
+{
+ H5D_bt2_ctx_ud_t u_ctx; /* user data for creating context */
+ herr_t ret_value = SUCCEED; /* Return value */
+
+ FUNC_ENTER_PACKAGE
+
+ /* Check args */
+ assert(idx_info);
+ assert(idx_info->f);
+ assert(idx_info->pline);
+ assert(idx_info->layout);
+ assert(H5D_CHUNK_IDX_BT2 == idx_info->layout->idx_type);
+ assert(idx_info->storage);
+ assert(H5_addr_defined(idx_info->storage->idx_addr));
+ assert(NULL == idx_info->storage->u.btree2.bt2);
+
+ /* Set up the user data */
+ u_ctx.f = idx_info->f;
+ u_ctx.ndims = idx_info->layout->ndims - 1;
+ u_ctx.chunk_size = idx_info->layout->size;
+ u_ctx.dim = idx_info->layout->dim;
+
+ /* Open v2 B-tree for the chunk index */
+ if (NULL ==
+ (idx_info->storage->u.btree2.bt2 = H5B2_open(idx_info->f, idx_info->storage->idx_addr, &u_ctx)))
+ HGOTO_ERROR(H5E_DATASET, H5E_CANTINIT, FAIL, "can't open v2 B-tree for tracking chunked dataset");
+
+ /* Check for SWMR writes to the file */
+ if (H5F_INTENT(idx_info->f) & H5F_ACC_SWMR_WRITE)
+ if (H5D__btree2_idx_depend(idx_info) < 0)
+ HGOTO_ERROR(H5E_DATASET, H5E_CANTDEPEND, FAIL,
+ "unable to create flush dependency on object header");
+
+done:
+ FUNC_LEAVE_NOAPI(ret_value)
+} /* end H5D__bt2_idx_open() */
+
+/*-------------------------------------------------------------------------
+ * Function: H5D__bt2_idx_close()
+ *
+ * Purpose: Closes an existing v2 B-tree.
+ *
+ * Return: Success: non-negative
+ * Failure: negative
+ *
+ *-------------------------------------------------------------------------
+ */
+static herr_t
+H5D__bt2_idx_close(const H5D_chk_idx_info_t *idx_info)
+{
+ herr_t ret_value = SUCCEED; /* Return value */
+
+ FUNC_ENTER_PACKAGE
+
+ assert(idx_info);
+ assert(idx_info->storage);
+ assert(H5D_CHUNK_IDX_BT2 == idx_info->storage->idx_type);
+ assert(idx_info->storage->u.btree2.bt2);
+
+ if (H5B2_close(idx_info->storage->u.btree2.bt2) < 0)
+ HGOTO_ERROR(H5E_DATASET, H5E_CANTCLOSEOBJ, FAIL, "unable to close v2 B-tree");
+ idx_info->storage->u.btree2.bt2 = NULL;
+
+done:
+ FUNC_LEAVE_NOAPI(ret_value)
+} /* end H5D__bt2_idx_close() */
+
+/*-------------------------------------------------------------------------
+ * Function: H5D__bt2_idx_is_open
+ *
+ * Purpose: Query whether the index is open
+ *
+ * Return: SUCCEED (can't fail)
+ *
+ *-------------------------------------------------------------------------
+ */
+static herr_t
+H5D__bt2_idx_is_open(const H5D_chk_idx_info_t *idx_info, bool *is_open)
+{
+ FUNC_ENTER_PACKAGE_NOERR
+
+ assert(idx_info);
+ assert(idx_info->storage);
+ assert(H5D_CHUNK_IDX_BT2 == idx_info->storage->idx_type);
+ assert(is_open);
+
+ *is_open = H5D_BT2_IDX_IS_OPEN(idx_info);
+
+ FUNC_LEAVE_NOAPI(SUCCEED)
+} /* end H5D__bt2_idx_is_open() */
+
+/*-------------------------------------------------------------------------
+ * Function: H5D__bt2_idx_is_space_alloc
+ *
+ * Purpose: Query if space is allocated for index method
+ *
+ * Return: true/false
*
*-------------------------------------------------------------------------
*/
@@ -778,14 +842,14 @@ H5D__bt2_idx_is_space_alloc(const H5O_storage_chunk_t *storage)
} /* end H5D__bt2_idx_is_space_alloc() */
/*-------------------------------------------------------------------------
- * Function: H5D__bt2_mod_cb
+ * Function: H5D__bt2_mod_cb
*
- * Purpose: Modify record for dataset chunk when it is found in a v2 B-tree.
- * This is the callback for H5B2_update() which is called in
- * H5D__bt2_idx_insert().
+ * Purpose: Modify record for dataset chunk when it is found in a v2
+ * B-tree. This is the callback for H5B2_update() which is
+ * called in H5D__bt2_idx_insert().
*
- * Return: Success: non-negative
- * Failure: negative
+ * Return: Success: non-negative
+ * Failure: negative
*
*-------------------------------------------------------------------------
*/
@@ -817,18 +881,21 @@ H5D__bt2_mod_cb(void *_record, void *_op_data, bool *changed)
} /* end H5D__bt2_mod_cb() */
/*-------------------------------------------------------------------------
- * Function: H5D__bt2_idx_insert
+ * Function: H5D__bt2_idx_insert
+ *
+ * Purpose: Insert chunk address into the indexing structure.
+ * A non-filtered chunk:
+ * Should not exist
+ * Allocate the chunk and pass chunk address back up
+ * A filtered chunk:
+ * If it was not found, create the chunk and pass chunk
+ * address back up
+ * If it was found but its size changed, reallocate the chunk
+ * and pass chunk address back up
+ * If it was found but its size was the same, pass chunk
+ * address back up
*
- * Purpose: Insert chunk address into the indexing structure.
- * A non-filtered chunk:
- * Should not exist
- * Allocate the chunk and pass chunk address back up
- * A filtered chunk:
- * If it was not found, create the chunk and pass chunk address back up
- * If it was found but its size changed, reallocate the chunk and pass chunk address back up
- * If it was found but its size was the same, pass chunk address back up
- *
- * Return: Non-negative on success/Negative on failure
+ * Return: Non-negative on success/Negative on failure
*
*-------------------------------------------------------------------------
*/
@@ -854,7 +921,7 @@ H5D__bt2_idx_insert(const H5D_chk_idx_info_t *idx_info, H5D_chunk_ud_t *udata,
assert(H5_addr_defined(udata->chunk_block.offset));
/* Check if the v2 B-tree is open yet */
- if (NULL == idx_info->storage->u.btree2.bt2) {
+ if (!H5D_BT2_IDX_IS_OPEN(idx_info)) {
/* Open existing v2 B-tree */
if (H5D__bt2_idx_open(idx_info) < 0)
HGOTO_ERROR(H5E_DATASET, H5E_CANTOPENOBJ, FAIL, "can't open v2 B-tree");
@@ -889,14 +956,14 @@ done:
} /* H5D__bt2_idx_insert() */
/*-------------------------------------------------------------------------
- * Function: H5D__bt2_found_cb
+ * Function: H5D__bt2_found_cb
*
- * Purpose: Retrieve record for dataset chunk when it is found in a v2 B-tree.
- * This is the callback for H5B2_find() which is called in
- * H5D__bt2_idx_get_addr() and H5D__bt2_idx_insert().
+ * Purpose: Retrieve record for dataset chunk when it is found in a v2
+ * B-tree. This is the callback for H5B2_find() which is called
+ * in H5D__bt2_idx_get_addr() and H5D__bt2_idx_insert().
*
- * Return: Success: non-negative
- * Failure: negative
+ * Return: Success: non-negative
+ * Failure: negative
*
*-------------------------------------------------------------------------
*/
@@ -911,13 +978,13 @@ H5D__bt2_found_cb(const void *nrecord, void *op_data)
} /* H5D__bt2_found_cb() */
/*-------------------------------------------------------------------------
- * Function: H5D__bt2_idx_get_addr
+ * Function: H5D__bt2_idx_get_addr
*
- * Purpose: Get the file address of a chunk if file space has been
- * assigned. Save the retrieved information in the udata
- * supplied.
+ * Purpose: Get the file address of a chunk if file space has been
+ * assigned. Save the retrieved information in the udata
+ * supplied.
*
- * Return: Non-negative on success/Negative on failure
+ * Return: Non-negative on success/Negative on failure
*
*-------------------------------------------------------------------------
*/
@@ -944,7 +1011,7 @@ H5D__bt2_idx_get_addr(const H5D_chk_idx_info_t *idx_info, H5D_chunk_ud_t *udata)
assert(udata);
/* Check if the v2 B-tree is open yet */
- if (NULL == idx_info->storage->u.btree2.bt2) {
+ if (!H5D_BT2_IDX_IS_OPEN(idx_info)) {
/* Open existing v2 B-tree */
if (H5D__bt2_idx_open(idx_info) < 0)
HGOTO_ERROR(H5E_DATASET, H5E_CANTOPENOBJ, FAIL, "can't open v2 B-tree");
@@ -1003,16 +1070,59 @@ done:
} /* H5D__bt2_idx_get_addr() */
/*-------------------------------------------------------------------------
- * Function: H5D__bt2_idx_iterate_cb
+ * Function: H5D__bt2_idx_load_metadata
+ *
+ * Purpose: Load additional chunk index metadata beyond the chunk index
+ * itself.
+ *
+ * Return: Non-negative on success/Negative on failure
+ *
+ *-------------------------------------------------------------------------
+ */
+static herr_t
+H5D__bt2_idx_load_metadata(const H5D_chk_idx_info_t *idx_info)
+{
+ H5D_chunk_ud_t chunk_ud;
+ hsize_t scaled[H5O_LAYOUT_NDIMS] = {0};
+ herr_t ret_value = SUCCEED;
+
+ FUNC_ENTER_PACKAGE
+
+ /*
+ * After opening a dataset that uses a v2 B-tree, the root
+ * node will generally not be read in until an element is
+ * looked up for the first time. Since there isn't currently
+ * a good way of controlling that explicitly, perform a fake
+ * lookup of a chunk to cause it to be read in.
+ */
+ chunk_ud.common.layout = idx_info->layout;
+ chunk_ud.common.storage = idx_info->storage;
+ chunk_ud.common.scaled = scaled;
+
+ chunk_ud.chunk_block.offset = HADDR_UNDEF;
+ chunk_ud.chunk_block.length = 0;
+ chunk_ud.filter_mask = 0;
+ chunk_ud.new_unfilt_chunk = false;
+ chunk_ud.idx_hint = UINT_MAX;
+
+ if (H5D__bt2_idx_get_addr(idx_info, &chunk_ud) < 0)
+ HGOTO_ERROR(H5E_DATASET, H5E_CANTGET, FAIL, "can't load v2 B-tree root node");
+
+done:
+ FUNC_LEAVE_NOAPI(ret_value)
+} /* H5D__bt2_idx_load_metadata() */
+
+/*-------------------------------------------------------------------------
+ * Function: H5D__bt2_idx_iterate_cb
*
- * Purpose: Translate the B-tree specific chunk record into a generic
+ * Purpose: Translate the B-tree specific chunk record into a generic
* form and make the callback to the generic chunk callback
* routine.
- * This is the callback for H5B2_iterate() which is called in
- * H5D__bt2_idx_iterate().
+ * This is the callback for H5B2_iterate() which is called in
+ * H5D__bt2_idx_iterate().
*
- * Return: Success: Non-negative
- * Failure: Negative
+ * Return: Success: Non-negative
+ * Failure: Negative
*
*-------------------------------------------------------------------------
*/
@@ -1033,12 +1143,12 @@ H5D__bt2_idx_iterate_cb(const void *_record, void *_udata)
} /* H5D__bt2_idx_iterate_cb() */
/*-------------------------------------------------------------------------
- * Function: H5D__bt2_idx_iterate
+ * Function: H5D__bt2_idx_iterate
*
- * Purpose: Iterate over the chunks in an index, making a callback
+ * Purpose: Iterate over the chunks in an index, making a callback
* for each one.
*
- * Return: Non-negative on success/Negative on failure
+ * Return: Non-negative on success/Negative on failure
*
*-------------------------------------------------------------------------
*/
@@ -1062,7 +1172,7 @@ H5D__bt2_idx_iterate(const H5D_chk_idx_info_t *idx_info, H5D_chunk_cb_func_t chu
assert(chunk_udata);
/* Check if the v2 B-tree is open yet */
- if (NULL == idx_info->storage->u.btree2.bt2) {
+ if (!H5D_BT2_IDX_IS_OPEN(idx_info)) {
/* Open existing v2 B-tree */
if (H5D__bt2_idx_open(idx_info) < 0)
HGOTO_ERROR(H5E_DATASET, H5E_CANTOPENOBJ, FAIL, "can't open v2 B-tree");
@@ -1087,15 +1197,16 @@ done:
} /* end H5D__bt2_idx_iterate() */
/*-------------------------------------------------------------------------
- * Function: H5D__bt2_remove_cb()
+ * Function: H5D__bt2_remove_cb()
*
- * Purpose: Free space for 'dataset chunk' object as v2 B-tree
- * is being deleted or v2 B-tree node is removed.
- * This is the callback for H5B2_remove() and H5B2_delete() which
- * which are called in H5D__bt2_idx_remove() and H5D__bt2_idx_delete().
+ * Purpose: Free space for 'dataset chunk' object as v2 B-tree
+ * is being deleted or v2 B-tree node is removed.
+ * This is the callback for H5B2_remove() and H5B2_delete()
+ * which are called in H5D__bt2_idx_remove() and
+ * H5D__bt2_idx_delete().
*
- * Return: Success: non-negative
- * Failure: negative
+ * Return: Success: non-negative
+ * Failure: negative
*
*-------------------------------------------------------------------------
*/
@@ -1121,11 +1232,11 @@ done:
} /* H5D__bt2_remove_cb() */
/*-------------------------------------------------------------------------
- * Function: H5D__bt2_idx_remove
+ * Function: H5D__bt2_idx_remove
*
- * Purpose: Remove chunk from index.
+ * Purpose: Remove chunk from index.
*
- * Return: Non-negative on success/Negative on failure
+ * Return: Non-negative on success/Negative on failure
*
*-------------------------------------------------------------------------
*/
@@ -1149,7 +1260,7 @@ H5D__bt2_idx_remove(const H5D_chk_idx_info_t *idx_info, H5D_chunk_common_ud_t *u
assert(udata);
/* Check if the v2 B-tree is open yet */
- if (NULL == idx_info->storage->u.btree2.bt2) {
+ if (!H5D_BT2_IDX_IS_OPEN(idx_info)) {
/* Open existing v2 B-tree */
if (H5D__bt2_idx_open(idx_info) < 0)
HGOTO_ERROR(H5E_DATASET, H5E_CANTOPENOBJ, FAIL, "can't open v2 B-tree");
@@ -1180,13 +1291,13 @@ done:
} /* H5D__bt2_idx_remove() */
/*-------------------------------------------------------------------------
- * Function: H5D__bt2_idx_delete
+ * Function: H5D__bt2_idx_delete
*
- * Purpose: Delete index and raw data storage for entire dataset
+ * Purpose: Delete index and raw data storage for entire dataset
* (i.e. all chunks)
*
- * Return: Success: Non-negative
- * Failure: negative
+ * Return: Success: Non-negative
+ * Failure: negative
*
*-------------------------------------------------------------------------
*/
@@ -1233,11 +1344,11 @@ done:
} /* end H5D__bt2_idx_delete() */
/*-------------------------------------------------------------------------
- * Function: H5D__bt2_idx_copy_setup
+ * Function: H5D__bt2_idx_copy_setup
*
- * Purpose: Set up any necessary information for copying chunks
+ * Purpose: Set up any necessary information for copying chunks
*
- * Return: Non-negative on success/Negative on failure
+ * Return: Non-negative on success/Negative on failure
*
*-------------------------------------------------------------------------
*/
@@ -1264,7 +1375,7 @@ H5D__bt2_idx_copy_setup(const H5D_chk_idx_info_t *idx_info_src, const H5D_chk_id
assert(!H5_addr_defined(idx_info_dst->storage->idx_addr));
/* Check if the source v2 B-tree is open yet */
- if (NULL == idx_info_src->storage->u.btree2.bt2)
+ if (!H5D_BT2_IDX_IS_OPEN(idx_info_src))
if (H5D__bt2_idx_open(idx_info_src) < 0)
HGOTO_ERROR(H5E_DATASET, H5E_CANTOPENOBJ, FAIL, "can't open v2 B-tree");
@@ -1284,11 +1395,11 @@ done:
} /* end H5D__bt2_idx_copy_setup() */
/*-------------------------------------------------------------------------
- * Function: H5D__bt2_idx_copy_shutdown
+ * Function: H5D__bt2_idx_copy_shutdown
*
- * Purpose: Shutdown any information from copying chunks
+ * Purpose: Shutdown any information from copying chunks
*
- * Return: Non-negative on success/Negative on failure
+ * Return: Non-negative on success/Negative on failure
*
*-------------------------------------------------------------------------
*/
@@ -1324,8 +1435,8 @@ done:
*
* Purpose: Retrieve the amount of index storage for chunked dataset
*
- * Return: Success: Non-negative
- * Failure: negative
+ * Return: Success: Non-negative
+ * Failure: negative
*
*-------------------------------------------------------------------------
*/
@@ -1355,23 +1466,23 @@ H5D__bt2_idx_size(const H5D_chk_idx_info_t *idx_info, hsize_t *index_size)
/* Get v2 B-tree size for indexing chunked dataset */
if (H5B2_size(bt2_cdset, index_size) < 0)
- HGOTO_ERROR(H5E_SYM, H5E_CANTGET, FAIL, "can't retrieve v2 B-tree storage info for chunked dataset");
+ HGOTO_ERROR(H5E_DATASET, H5E_CANTGET, FAIL,
+ "can't retrieve v2 B-tree storage info for chunked dataset");
done:
/* Close v2 B-tree index */
- if (bt2_cdset && H5B2_close(bt2_cdset) < 0)
- HDONE_ERROR(H5E_SYM, H5E_CLOSEERROR, FAIL, "can't close v2 B-tree for tracking chunked dataset");
- idx_info->storage->u.btree2.bt2 = NULL;
+ if (H5D__bt2_idx_close(idx_info) < 0)
+ HDONE_ERROR(H5E_DATASET, H5E_CLOSEERROR, FAIL, "can't close v2 B-tree for tracking chunked dataset");
FUNC_LEAVE_NOAPI(ret_value)
} /* end H5D__bt2_idx_size() */
/*-------------------------------------------------------------------------
- * Function: H5D__bt2_idx_reset
+ * Function: H5D__bt2_idx_reset
*
- * Purpose: Reset indexing information.
+ * Purpose: Reset indexing information.
*
- * Return: Non-negative on success/Negative on failure
+ * Return: Non-negative on success/Negative on failure
*
*-------------------------------------------------------------------------
*/
@@ -1392,11 +1503,11 @@ H5D__bt2_idx_reset(H5O_storage_chunk_t *storage, bool reset_addr)
} /* end H5D__bt2_idx_reset() */
/*-------------------------------------------------------------------------
- * Function: H5D__bt2_idx_dump
+ * Function: H5D__bt2_idx_dump
*
- * Purpose: Dump indexing information to a stream.
+ * Purpose: Dump indexing information to a stream.
*
- * Return: Non-negative on success/Negative on failure
+ * Return: Non-negative on success/Negative on failure
*
*-------------------------------------------------------------------------
*/
@@ -1415,11 +1526,11 @@ H5D__bt2_idx_dump(const H5O_storage_chunk_t *storage, FILE *stream)
} /* end H5D__bt2_idx_dump() */
/*-------------------------------------------------------------------------
- * Function: H5D__bt2_idx_dest
+ * Function: H5D__bt2_idx_dest
*
- * Purpose: Release indexing information in memory.
+ * Purpose: Release indexing information in memory.
*
- * Return: Non-negative on success/Negative on failure
+ * Return: Non-negative on success/Negative on failure
*
*-------------------------------------------------------------------------
*/
@@ -1436,16 +1547,14 @@ H5D__bt2_idx_dest(const H5D_chk_idx_info_t *idx_info)
assert(idx_info->storage);
/* Check if the v2 B-tree is open */
- if (idx_info->storage->u.btree2.bt2) {
-
+ if (H5D_BT2_IDX_IS_OPEN(idx_info)) {
/* Patch the top level file pointer contained in bt2 if needed */
if (H5B2_patch_file(idx_info->storage->u.btree2.bt2, idx_info->f) < 0)
HGOTO_ERROR(H5E_DATASET, H5E_CANTOPENOBJ, FAIL, "can't patch v2 B-tree file pointer");
/* Close v2 B-tree */
- if (H5B2_close(idx_info->storage->u.btree2.bt2) < 0)
+ if (H5D__bt2_idx_close(idx_info) < 0)
HGOTO_ERROR(H5E_DATASET, H5E_CANTCLOSEOBJ, FAIL, "can't close v2 B-tree");
- idx_info->storage->u.btree2.bt2 = NULL;
} /* end if */
done:
diff --git a/src/H5Dchunk.c b/src/H5Dchunk.c
index 9f4bd90..41d774d 100644
--- a/src/H5Dchunk.c
+++ b/src/H5Dchunk.c
@@ -1124,18 +1124,33 @@ H5D__chunk_io_init(H5D_io_info_t *io_info, H5D_dset_io_info_t *dinfo)
if (H5F_SHARED_HAS_FEATURE(io_info->f_sh, H5FD_FEAT_HAS_MPI) &&
H5F_shared_get_coll_metadata_reads(io_info->f_sh) &&
H5D__chunk_is_space_alloc(&dataset->shared->layout.storage)) {
- H5D_chunk_ud_t udata;
- hsize_t scaled[H5O_LAYOUT_NDIMS] = {0};
+ H5O_storage_chunk_t *sc = &(dataset->shared->layout.storage.u.chunk);
+ H5D_chk_idx_info_t idx_info;
+ bool index_is_open;
+
+ idx_info.f = dataset->oloc.file;
+ idx_info.pline = &dataset->shared->dcpl_cache.pline;
+ idx_info.layout = &dataset->shared->layout.u.chunk;
+ idx_info.storage = sc;
+
+ assert(sc && sc->ops && sc->ops->is_open);
+ if (sc->ops->is_open(&idx_info, &index_is_open) < 0)
+ HGOTO_ERROR(H5E_DATASET, H5E_CANTGET, FAIL, "unable to check if dataset chunk index is open");
+
+ if (!index_is_open) {
+ assert(sc->ops->open);
+ if (sc->ops->open(&idx_info) < 0)
+ HGOTO_ERROR(H5E_DATASET, H5E_CANTINIT, FAIL, "unable to open dataset chunk index");
+ }
/*
- * TODO: Until the dataset chunk index callback structure has
- * callbacks for checking if an index is opened and also for
- * directly opening the index, the following fake chunk lookup
- * serves the purpose of forcing a chunk index open operation
- * on all ranks
+ * Load any other chunk index metadata that we can,
+ * such as fixed array data blocks, while we know all
+ * MPI ranks will do so with collective metadata reads
+ * enabled
*/
- if (H5D__chunk_lookup(dataset, scaled, &udata) < 0)
- HGOTO_ERROR(H5E_DATASET, H5E_CANTINIT, FAIL, "unable to collectively open dataset chunk index");
+ if (sc->ops->load_metadata && sc->ops->load_metadata(&idx_info) < 0)
+ HGOTO_ERROR(H5E_DATASET, H5E_CANTINIT, FAIL, "unable to load additional chunk index metadata");
}
#endif
@@ -3827,15 +3842,29 @@ H5D__chunk_lookup(const H5D_t *dset, const hsize_t *scaled, H5D_chunk_ud_t *udat
idx_info.storage = sc;
#ifdef H5_HAVE_PARALLEL
- /* Disable collective metadata read for chunk indexes as it is
- * highly unlikely that users would read the same chunks from all
- * processes.
- */
if (H5F_HAS_FEATURE(idx_info.f, H5FD_FEAT_HAS_MPI)) {
- md_reads_file_flag = H5P_FORCE_FALSE;
- md_reads_context_flag = false;
- H5F_set_coll_metadata_reads(idx_info.f, &md_reads_file_flag, &md_reads_context_flag);
- restore_md_reads_state = true;
+ /* Disable collective metadata read for chunk indexes as it is
+ * highly unlikely that users would read the same chunks from all
+ * processes.
+ */
+ if (H5F_get_coll_metadata_reads(idx_info.f)) {
+#ifndef NDEBUG
+ bool index_is_open;
+
+ /*
+ * The dataset's chunk index should be open at this point.
+ * Otherwise, we will end up reading it in independently,
+ * which may not be desired.
+ */
+ sc->ops->is_open(&idx_info, &index_is_open);
+ assert(index_is_open);
+#endif
+
+ md_reads_file_flag = H5P_FORCE_FALSE;
+ md_reads_context_flag = false;
+ H5F_set_coll_metadata_reads(idx_info.f, &md_reads_file_flag, &md_reads_context_flag);
+ restore_md_reads_state = true;
+ }
}
#endif /* H5_HAVE_PARALLEL */
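
Taken together, the two H5Dchunk.c hunks above replace the old "fake chunk lookup" workaround: H5D__chunk_io_init now opens the chunk index explicitly on every rank while collective metadata reads are enabled and then loads any extra index metadata, and H5D__chunk_lookup only disables collective metadata reads when they are actually in effect. A condensed sketch of that caller-side sequence follows; the helper name is hypothetical and error handling is collapsed to early returns, but the idx_info setup and callback usage mirror the H5D__chunk_io_init hunk.

/* Hypothetical helper illustrating the open/load sequence from H5D__chunk_io_init */
static herr_t
open_chunk_index_collectively(H5D_t *dataset)
{
    H5O_storage_chunk_t *sc = &dataset->shared->layout.storage.u.chunk;
    H5D_chk_idx_info_t   idx_info;
    bool                 index_is_open;

    idx_info.f       = dataset->oloc.file;
    idx_info.pline   = &dataset->shared->dcpl_cache.pline;
    idx_info.layout  = &dataset->shared->layout.u.chunk;
    idx_info.storage = sc;

    /* Open the index on all ranks while collective metadata reads are still enabled */
    if (sc->ops->is_open(&idx_info, &index_is_open) < 0)
        return FAIL;
    if (!index_is_open && sc->ops->open(&idx_info) < 0)
        return FAIL;

    /* Pull in extra index metadata (e.g. fixed array data blocks) collectively */
    if (sc->ops->load_metadata && sc->ops->load_metadata(&idx_info) < 0)
        return FAIL;

    return SUCCEED;
} /* end open_chunk_index_collectively() */
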
diff --git a/src/H5Dearray.c b/src/H5Dearray.c
index c713b6f..965eaac 100644
--- a/src/H5Dearray.c
+++ b/src/H5Dearray.c
@@ -26,19 +26,21 @@
/***********/
/* Headers */
/***********/
-#include "H5private.h" /* Generic Functions */
-#include "H5Dpkg.h" /* Datasets */
-#include "H5Eprivate.h" /* Error handling */
-#include "H5EAprivate.h" /* Extensible arrays */
+#include "H5private.h" /* Generic Functions */
+#include "H5Dpkg.h" /* Datasets */
+#include "H5Eprivate.h" /* Error handling */
+#include "H5EAprivate.h" /* Extensible arrays */
#include "H5FLprivate.h" /* Free Lists */
-#include "H5MFprivate.h" /* File space management */
-#include "H5MMprivate.h" /* Memory management */
-#include "H5VMprivate.h" /* Vector functions */
+#include "H5MFprivate.h" /* File space management */
+#include "H5MMprivate.h" /* Memory management */
+#include "H5VMprivate.h" /* Vector functions */
/****************/
/* Local Macros */
/****************/
+#define H5D_EARRAY_IDX_IS_OPEN(idx_info) (NULL != (idx_info)->storage->u.earray.ea)
+
/* Value to fill unset array elements with */
#define H5D_EARRAY_FILL HADDR_UNDEF
#define H5D_EARRAY_FILT_FILL \
@@ -106,10 +108,14 @@ static herr_t H5D__earray_filt_debug(FILE *stream, int indent, int fwidth, hsize
static herr_t H5D__earray_idx_init(const H5D_chk_idx_info_t *idx_info, const H5S_t *space,
haddr_t dset_ohdr_addr);
static herr_t H5D__earray_idx_create(const H5D_chk_idx_info_t *idx_info);
+static herr_t H5D__earray_idx_open(const H5D_chk_idx_info_t *idx_info);
+static herr_t H5D__earray_idx_close(const H5D_chk_idx_info_t *idx_info);
+static herr_t H5D__earray_idx_is_open(const H5D_chk_idx_info_t *idx_info, bool *is_open);
static bool H5D__earray_idx_is_space_alloc(const H5O_storage_chunk_t *storage);
static herr_t H5D__earray_idx_insert(const H5D_chk_idx_info_t *idx_info, H5D_chunk_ud_t *udata,
const H5D_t *dset);
static herr_t H5D__earray_idx_get_addr(const H5D_chk_idx_info_t *idx_info, H5D_chunk_ud_t *udata);
+static herr_t H5D__earray_idx_load_metadata(const H5D_chk_idx_info_t *idx_info);
static herr_t H5D__earray_idx_resize(H5O_layout_chunk_t *layout);
static int H5D__earray_idx_iterate(const H5D_chk_idx_info_t *idx_info, H5D_chunk_cb_func_t chunk_cb,
void *chunk_udata);
@@ -125,7 +131,6 @@ static herr_t H5D__earray_idx_dump(const H5O_storage_chunk_t *storage, FILE *str
static herr_t H5D__earray_idx_dest(const H5D_chk_idx_info_t *idx_info);
/* Generic extensible array routines */
-static herr_t H5D__earray_idx_open(const H5D_chk_idx_info_t *idx_info);
static herr_t H5D__earray_idx_depend(const H5D_chk_idx_info_t *idx_info);
/*********************/
@@ -137,9 +142,13 @@ const H5D_chunk_ops_t H5D_COPS_EARRAY[1] = {{
true, /* Extensible array indices support SWMR access */
H5D__earray_idx_init, /* init */
H5D__earray_idx_create, /* create */
+ H5D__earray_idx_open, /* open */
+ H5D__earray_idx_close, /* close */
+ H5D__earray_idx_is_open, /* is_open */
H5D__earray_idx_is_space_alloc, /* is_space_alloc */
H5D__earray_idx_insert, /* insert */
H5D__earray_idx_get_addr, /* get_addr */
+ H5D__earray_idx_load_metadata, /* load_metadata */
H5D__earray_idx_resize, /* resize */
H5D__earray_idx_iterate, /* iterate */
H5D__earray_idx_remove, /* remove */
@@ -270,10 +279,10 @@ H5D__earray_dst_context(void *_ctx)
/*-------------------------------------------------------------------------
* Function: H5D__earray_fill
*
- * Purpose: Fill "missing elements" in block of elements
+ * Purpose: Fill "missing elements" in block of elements
*
- * Return: Success: non-negative
- * Failure: negative
+ * Return: Success: non-negative
+ * Failure: negative
*
*-------------------------------------------------------------------------
*/
@@ -706,59 +715,6 @@ done:
} /* end H5D__earray_idx_depend() */
/*-------------------------------------------------------------------------
- * Function: H5D__earray_idx_open
- *
- * Purpose: Opens an existing extensible array.
- *
- * Note: This information is passively initialized from each index
- * operation callback because those abstract chunk index operations
- * are designed to work with the v1 B-tree chunk indices also,
- * which don't require an 'open' for the data structure.
- *
- * Return: Success: non-negative
- * Failure: negative
- *
- *-------------------------------------------------------------------------
- */
-static herr_t
-H5D__earray_idx_open(const H5D_chk_idx_info_t *idx_info)
-{
- H5D_earray_ctx_ud_t udata; /* User data for extensible array open call */
- herr_t ret_value = SUCCEED; /* Return value */
-
- FUNC_ENTER_PACKAGE
-
- /* Check args */
- assert(idx_info);
- assert(idx_info->f);
- assert(idx_info->pline);
- assert(idx_info->layout);
- assert(H5D_CHUNK_IDX_EARRAY == idx_info->layout->idx_type);
- assert(idx_info->storage);
- assert(H5D_CHUNK_IDX_EARRAY == idx_info->storage->idx_type);
- assert(H5_addr_defined(idx_info->storage->idx_addr));
- assert(NULL == idx_info->storage->u.earray.ea);
-
- /* Set up the user data */
- udata.f = idx_info->f;
- udata.chunk_size = idx_info->layout->size;
-
- /* Open the extensible array for the chunk index */
- if (NULL ==
- (idx_info->storage->u.earray.ea = H5EA_open(idx_info->f, idx_info->storage->idx_addr, &udata)))
- HGOTO_ERROR(H5E_DATASET, H5E_CANTINIT, FAIL, "can't open extensible array");
-
- /* Check for SWMR writes to the file */
- if (H5F_INTENT(idx_info->f) & H5F_ACC_SWMR_WRITE)
- if (H5D__earray_idx_depend(idx_info) < 0)
- HGOTO_ERROR(H5E_DATASET, H5E_CANTDEPEND, FAIL,
- "unable to create flush dependency on object header");
-
-done:
- FUNC_LEAVE_NOAPI(ret_value)
-} /* end H5D__earray_idx_open() */
-
-/*-------------------------------------------------------------------------
* Function: H5D__earray_idx_init
*
* Purpose: Initialize the indexing information for a dataset.
@@ -906,11 +862,119 @@ done:
} /* end H5D__earray_idx_create() */
/*-------------------------------------------------------------------------
+ * Function: H5D__earray_idx_open
+ *
+ * Purpose: Opens an existing extensible array.
+ *
+ * Note: This information is passively initialized from each index
+ * operation callback because those abstract chunk index
+ * operations are designed to work with the v1 B-tree chunk
+ * indices also, which don't require an 'open' for the data
+ * structure.
+ *
+ * Return: Success: non-negative
+ * Failure: negative
+ *
+ *-------------------------------------------------------------------------
+ */
+static herr_t
+H5D__earray_idx_open(const H5D_chk_idx_info_t *idx_info)
+{
+ H5D_earray_ctx_ud_t udata; /* User data for extensible array open call */
+ herr_t ret_value = SUCCEED; /* Return value */
+
+ FUNC_ENTER_PACKAGE
+
+ /* Check args */
+ assert(idx_info);
+ assert(idx_info->f);
+ assert(idx_info->pline);
+ assert(idx_info->layout);
+ assert(H5D_CHUNK_IDX_EARRAY == idx_info->layout->idx_type);
+ assert(idx_info->storage);
+ assert(H5D_CHUNK_IDX_EARRAY == idx_info->storage->idx_type);
+ assert(H5_addr_defined(idx_info->storage->idx_addr));
+ assert(NULL == idx_info->storage->u.earray.ea);
+
+ /* Set up the user data */
+ udata.f = idx_info->f;
+ udata.chunk_size = idx_info->layout->size;
+
+ /* Open the extensible array for the chunk index */
+ if (NULL ==
+ (idx_info->storage->u.earray.ea = H5EA_open(idx_info->f, idx_info->storage->idx_addr, &udata)))
+ HGOTO_ERROR(H5E_DATASET, H5E_CANTINIT, FAIL, "can't open extensible array");
+
+ /* Check for SWMR writes to the file */
+ if (H5F_INTENT(idx_info->f) & H5F_ACC_SWMR_WRITE)
+ if (H5D__earray_idx_depend(idx_info) < 0)
+ HGOTO_ERROR(H5E_DATASET, H5E_CANTDEPEND, FAIL,
+ "unable to create flush dependency on object header");
+
+done:
+ FUNC_LEAVE_NOAPI(ret_value)
+} /* end H5D__earray_idx_open() */
+
+/*-------------------------------------------------------------------------
+ * Function: H5D__earray_idx_close
+ *
+ * Purpose: Closes an existing extensible array.
+ *
+ * Return: Success: non-negative
+ * Failure: negative
+ *
+ *-------------------------------------------------------------------------
+ */
+static herr_t
+H5D__earray_idx_close(const H5D_chk_idx_info_t *idx_info)
+{
+ herr_t ret_value = SUCCEED; /* Return value */
+
+ FUNC_ENTER_PACKAGE
+
+ assert(idx_info);
+ assert(idx_info->storage);
+ assert(H5D_CHUNK_IDX_EARRAY == idx_info->storage->idx_type);
+ assert(idx_info->storage->u.earray.ea);
+
+ if (H5EA_close(idx_info->storage->u.earray.ea) < 0)
+ HGOTO_ERROR(H5E_DATASET, H5E_CANTCLOSEOBJ, FAIL, "unable to close extensible array");
+ idx_info->storage->u.earray.ea = NULL;
+
+done:
+ FUNC_LEAVE_NOAPI(ret_value)
+} /* end H5D__earray_idx_close() */
+
+/*-------------------------------------------------------------------------
+ * Function: H5D__earray_idx_is_open
+ *
+ * Purpose: Query if the index is opened or not
+ *
+ * Return: SUCCEED (can't fail)
+ *
+ *-------------------------------------------------------------------------
+ */
+static herr_t
+H5D__earray_idx_is_open(const H5D_chk_idx_info_t *idx_info, bool *is_open)
+{
+ FUNC_ENTER_PACKAGE_NOERR
+
+ assert(idx_info);
+ assert(idx_info->storage);
+ assert(H5D_CHUNK_IDX_EARRAY == idx_info->storage->idx_type);
+ assert(is_open);
+
+ *is_open = H5D_EARRAY_IDX_IS_OPEN(idx_info);
+
+ FUNC_LEAVE_NOAPI(SUCCEED)
+} /* end H5D__earray_idx_is_open() */
+
+/*-------------------------------------------------------------------------
* Function: H5D__earray_idx_is_space_alloc
*
* Purpose: Query if space is allocated for index method
*
- * Return: Non-negative on success/Negative on failure
+ * Return: true/false
*
*-------------------------------------------------------------------------
*/
@@ -953,7 +1017,7 @@ H5D__earray_idx_insert(const H5D_chk_idx_info_t *idx_info, H5D_chunk_ud_t *udata
assert(udata);
/* Check if the extensible array is open yet */
- if (NULL == idx_info->storage->u.earray.ea) {
+ if (!H5D_EARRAY_IDX_IS_OPEN(idx_info)) {
/* Open the extensible array in file */
if (H5D__earray_idx_open(idx_info) < 0)
HGOTO_ERROR(H5E_DATASET, H5E_CANTOPENOBJ, FAIL, "can't open extensible array");
@@ -1021,7 +1085,7 @@ H5D__earray_idx_get_addr(const H5D_chk_idx_info_t *idx_info, H5D_chunk_ud_t *uda
assert(udata);
/* Check if the extensible array is open yet */
- if (NULL == idx_info->storage->u.earray.ea) {
+ if (!H5D_EARRAY_IDX_IS_OPEN(idx_info)) {
/* Open the extensible array in file */
if (H5D__earray_idx_open(idx_info) < 0)
HGOTO_ERROR(H5E_DATASET, H5E_CANTOPENOBJ, FAIL, "can't open extensible array");
@@ -1087,6 +1151,51 @@ done:
} /* H5D__earray_idx_get_addr() */
/*-------------------------------------------------------------------------
+ * Function: H5D__earray_idx_load_metadata
+ *
+ * Purpose: Load additional chunk index metadata beyond the chunk index
+ * itself.
+ *
+ * Return: Non-negative on success/Negative on failure
+ *
+ *-------------------------------------------------------------------------
+ */
+static herr_t
+H5D__earray_idx_load_metadata(const H5D_chk_idx_info_t *idx_info)
+{
+ H5D_chunk_ud_t chunk_ud;
+ hsize_t scaled[H5O_LAYOUT_NDIMS] = {0};
+ herr_t ret_value = SUCCEED;
+
+ FUNC_ENTER_PACKAGE
+
+ /*
+ * After opening a dataset that uses an extensible array,
+ * the extensible array header index block will generally
+ * not be read in until an element is looked up for the
+ * first time. Since there isn't currently a good way of
+ * controlling that explicitly, perform a fake lookup of
+ * a chunk to cause it to be read in or created if it
+ * doesn't exist yet.
+ */
+ chunk_ud.common.layout = idx_info->layout;
+ chunk_ud.common.storage = idx_info->storage;
+ chunk_ud.common.scaled = scaled;
+
+ chunk_ud.chunk_block.offset = HADDR_UNDEF;
+ chunk_ud.chunk_block.length = 0;
+ chunk_ud.filter_mask = 0;
+ chunk_ud.new_unfilt_chunk = false;
+ chunk_ud.idx_hint = UINT_MAX;
+
+ if (H5D__earray_idx_get_addr(idx_info, &chunk_ud) < 0)
+ HGOTO_ERROR(H5E_DATASET, H5E_CANTGET, FAIL, "can't load extensible array header index block");
+
+done:
+ FUNC_LEAVE_NOAPI(ret_value)
+} /* H5D__earray_idx_load_metadata() */
+
+/*-------------------------------------------------------------------------
* Function: H5D__earray_idx_resize
*
* Purpose: Calculate/setup the swizzled down chunk array, used for chunk
@@ -1195,10 +1304,6 @@ H5D__earray_idx_iterate_cb(hsize_t H5_ATTR_UNUSED idx, const void *_elmt, void *
* Purpose: Iterate over the chunks in an index, making a callback
* for each one.
*
- * Note: This implementation is slow, particularly for sparse
- * extensible arrays, replace it with call to H5EA_iterate()
- * when that's available.
- *
* Return: Non-negative on success/Negative on failure
*
*-------------------------------------------------------------------------
@@ -1223,10 +1328,10 @@ H5D__earray_idx_iterate(const H5D_chk_idx_info_t *idx_info, H5D_chunk_cb_func_t
assert(chunk_udata);
/* Check if the extensible array is open yet */
- if (NULL == idx_info->storage->u.earray.ea) {
+ if (!H5D_EARRAY_IDX_IS_OPEN(idx_info)) {
/* Open the extensible array in file */
if (H5D__earray_idx_open(idx_info) < 0)
- HGOTO_ERROR(H5E_DATASET, H5E_CANTOPENOBJ, FAIL, "can't open extensible array");
+ HGOTO_ERROR(H5E_DATASET, H5E_CANTOPENOBJ, H5_ITER_ERROR, "can't open extensible array");
}
else /* Patch the top level file pointer contained in ea if needed */
H5EA_patch_file(idx_info->storage->u.earray.ea, idx_info->f);
@@ -1236,7 +1341,7 @@ H5D__earray_idx_iterate(const H5D_chk_idx_info_t *idx_info, H5D_chunk_cb_func_t
/* Get the extensible array statistics */
if (H5EA_get_stats(ea, &ea_stat) < 0)
- HGOTO_ERROR(H5E_DATASET, H5E_CANTGET, FAIL, "can't query extensible array statistics");
+ HGOTO_ERROR(H5E_DATASET, H5E_CANTGET, H5_ITER_ERROR, "can't query extensible array statistics");
if (ea_stat.stored.max_idx_set > 0) {
H5D_earray_it_ud_t udata; /* User data for iteration callback */
@@ -1291,7 +1396,7 @@ H5D__earray_idx_remove(const H5D_chk_idx_info_t *idx_info, H5D_chunk_common_ud_t
assert(udata);
/* Check if the extensible array is open yet */
- if (NULL == idx_info->storage->u.earray.ea) {
+ if (!H5D_EARRAY_IDX_IS_OPEN(idx_info)) {
/* Open the extensible array in file */
if (H5D__earray_idx_open(idx_info) < 0)
HGOTO_ERROR(H5E_DATASET, H5E_CANTOPENOBJ, FAIL, "can't open extensible array");
@@ -1444,9 +1549,8 @@ H5D__earray_idx_delete(const H5D_chk_idx_info_t *idx_info)
HGOTO_ERROR(H5E_DATASET, H5E_BADITER, FAIL, "unable to iterate over chunk addresses");
/* Close extensible array */
- if (H5EA_close(idx_info->storage->u.earray.ea) < 0)
+ if (H5D__earray_idx_close(idx_info) < 0)
HGOTO_ERROR(H5E_DATASET, H5E_CANTCLOSEOBJ, FAIL, "unable to close extensible array");
- idx_info->storage->u.earray.ea = NULL;
/* Set up the context user data */
ctx_udata.f = idx_info->f;
@@ -1494,7 +1598,7 @@ H5D__earray_idx_copy_setup(const H5D_chk_idx_info_t *idx_info_src, const H5D_chk
assert(!H5_addr_defined(idx_info_dst->storage->idx_addr));
/* Check if the source extensible array is open yet */
- if (NULL == idx_info_src->storage->u.earray.ea)
+ if (!H5D_EARRAY_IDX_IS_OPEN(idx_info_src))
/* Open the extensible array in file */
if (H5D__earray_idx_open(idx_info_src) < 0)
HGOTO_ERROR(H5E_DATASET, H5E_CANTOPENOBJ, FAIL, "can't open extensible array");
@@ -1593,9 +1697,8 @@ H5D__earray_idx_size(const H5D_chk_idx_info_t *idx_info, hsize_t *index_size)
done:
if (idx_info->storage->u.earray.ea) {
- if (H5EA_close(idx_info->storage->u.earray.ea) < 0)
- HGOTO_ERROR(H5E_DATASET, H5E_CANTCLOSEOBJ, FAIL, "unable to close extensible array");
- idx_info->storage->u.earray.ea = NULL;
+ if (H5D__earray_idx_close(idx_info) < 0)
+ HDONE_ERROR(H5E_DATASET, H5E_CANTCLOSEOBJ, FAIL, "unable to close extensible array");
} /* end if */
FUNC_LEAVE_NOAPI(ret_value)
@@ -1673,16 +1776,14 @@ H5D__earray_idx_dest(const H5D_chk_idx_info_t *idx_info)
assert(idx_info->storage);
/* Check if the extensible array is open */
- if (idx_info->storage->u.earray.ea) {
-
+ if (H5D_EARRAY_IDX_IS_OPEN(idx_info)) {
/* Patch the top level file pointer contained in ea if needed */
if (H5EA_patch_file(idx_info->storage->u.earray.ea, idx_info->f) < 0)
HGOTO_ERROR(H5E_DATASET, H5E_CANTOPENOBJ, FAIL, "can't patch earray file pointer");
/* Close extensible array */
- if (H5EA_close(idx_info->storage->u.earray.ea) < 0)
+ if (H5D__earray_idx_close(idx_info) < 0)
HGOTO_ERROR(H5E_DATASET, H5E_CANTCLOSEOBJ, FAIL, "unable to close extensible array");
- idx_info->storage->u.earray.ea = NULL;
} /* end if */
done:
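One subtle change above: the done: block of H5D__earray_idx_size() now reports a failed close with HDONE_ERROR instead of HGOTO_ERROR. Inside a done: block the goto-style macro would jump back to the label it is already executing under, so the non-jumping variant is the correct one there. A minimal sketch of the convention, reusing the names from this file:

    done:
        if (idx_info->storage->u.earray.ea)
            if (H5D__earray_idx_close(idx_info) < 0)
                /* record the error and set ret_value, but do not 'goto done' again */
                HDONE_ERROR(H5E_DATASET, H5E_CANTCLOSEOBJ, FAIL, "unable to close extensible array");

        FUNC_LEAVE_NOAPI(ret_value)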
diff --git a/src/H5Dfarray.c b/src/H5Dfarray.c
index 450d466..8d06de4 100644
--- a/src/H5Dfarray.c
+++ b/src/H5Dfarray.c
@@ -37,6 +37,8 @@
/* Local Macros */
/****************/
+#define H5D_FARRAY_IDX_IS_OPEN(idx_info) (NULL != (idx_info)->storage->u.farray.fa)
+
/* Value to fill unset array elements with */
#define H5D_FARRAY_FILL HADDR_UNDEF
#define H5D_FARRAY_FILT_FILL \
@@ -105,10 +107,14 @@ static herr_t H5D__farray_filt_debug(FILE *stream, int indent, int fwidth, hsize
static herr_t H5D__farray_idx_init(const H5D_chk_idx_info_t *idx_info, const H5S_t *space,
haddr_t dset_ohdr_addr);
static herr_t H5D__farray_idx_create(const H5D_chk_idx_info_t *idx_info);
+static herr_t H5D__farray_idx_open(const H5D_chk_idx_info_t *idx_info);
+static herr_t H5D__farray_idx_close(const H5D_chk_idx_info_t *idx_info);
+static herr_t H5D__farray_idx_is_open(const H5D_chk_idx_info_t *idx_info, bool *is_open);
static bool H5D__farray_idx_is_space_alloc(const H5O_storage_chunk_t *storage);
static herr_t H5D__farray_idx_insert(const H5D_chk_idx_info_t *idx_info, H5D_chunk_ud_t *udata,
const H5D_t *dset);
static herr_t H5D__farray_idx_get_addr(const H5D_chk_idx_info_t *idx_info, H5D_chunk_ud_t *udata);
+static herr_t H5D__farray_idx_load_metadata(const H5D_chk_idx_info_t *idx_info);
static int H5D__farray_idx_iterate(const H5D_chk_idx_info_t *idx_info, H5D_chunk_cb_func_t chunk_cb,
void *chunk_udata);
static herr_t H5D__farray_idx_remove(const H5D_chk_idx_info_t *idx_info, H5D_chunk_common_ud_t *udata);
@@ -123,7 +129,6 @@ static herr_t H5D__farray_idx_dump(const H5O_storage_chunk_t *storage, FILE *str
static herr_t H5D__farray_idx_dest(const H5D_chk_idx_info_t *idx_info);
/* Generic fixed array routines */
-static herr_t H5D__farray_idx_open(const H5D_chk_idx_info_t *idx_info);
static herr_t H5D__farray_idx_depend(const H5D_chk_idx_info_t *idx_info);
/*********************/
@@ -135,9 +140,13 @@ const H5D_chunk_ops_t H5D_COPS_FARRAY[1] = {{
true, /* Fixed array indices support SWMR access */
H5D__farray_idx_init, /* init */
H5D__farray_idx_create, /* create */
+ H5D__farray_idx_open, /* open */
+ H5D__farray_idx_close, /* close */
+ H5D__farray_idx_is_open, /* is_open */
H5D__farray_idx_is_space_alloc, /* is_space_alloc */
H5D__farray_idx_insert, /* insert */
H5D__farray_idx_get_addr, /* get_addr */
+ H5D__farray_idx_load_metadata, /* load_metadata */
NULL, /* resize */
H5D__farray_idx_iterate, /* iterate */
H5D__farray_idx_remove, /* remove */
@@ -727,55 +736,6 @@ H5D__farray_idx_init(const H5D_chk_idx_info_t *idx_info, const H5S_t H5_ATTR_UNU
} /* end H5D__farray_idx_init() */
/*-------------------------------------------------------------------------
- * Function: H5D__farray_idx_open
- *
- * Purpose: Opens an existing fixed array and initializes
- * the layout struct with information about the storage.
- *
- * Return: Success: non-negative
- * Failure: negative
- *
- *-------------------------------------------------------------------------
- */
-static herr_t
-H5D__farray_idx_open(const H5D_chk_idx_info_t *idx_info)
-{
- H5D_farray_ctx_ud_t udata; /* User data for fixed array open call */
- herr_t ret_value = SUCCEED; /* Return value */
-
- FUNC_ENTER_PACKAGE
-
- /* Check args */
- assert(idx_info);
- assert(idx_info->f);
- assert(idx_info->pline);
- assert(idx_info->layout);
- assert(H5D_CHUNK_IDX_FARRAY == idx_info->layout->idx_type);
- assert(idx_info->storage);
- assert(H5D_CHUNK_IDX_FARRAY == idx_info->storage->idx_type);
- assert(H5_addr_defined(idx_info->storage->idx_addr));
- assert(NULL == idx_info->storage->u.farray.fa);
-
- /* Set up the user data */
- udata.f = idx_info->f;
- udata.chunk_size = idx_info->layout->size;
-
- /* Open the fixed array for the chunk index */
- if (NULL ==
- (idx_info->storage->u.farray.fa = H5FA_open(idx_info->f, idx_info->storage->idx_addr, &udata)))
- HGOTO_ERROR(H5E_DATASET, H5E_CANTINIT, FAIL, "can't open fixed array");
-
- /* Check for SWMR writes to the file */
- if (H5F_INTENT(idx_info->f) & H5F_ACC_SWMR_WRITE)
- if (H5D__farray_idx_depend(idx_info) < 0)
- HGOTO_ERROR(H5E_DATASET, H5E_CANTDEPEND, FAIL,
- "unable to create flush dependency on object header");
-
-done:
- FUNC_LEAVE_NOAPI(ret_value)
-} /* end H5D__farray_idx_open() */
-
-/*-------------------------------------------------------------------------
* Function: H5D__farray_idx_create
*
* Purpose: Creates a new indexed-storage fixed array and initializes
@@ -854,11 +814,114 @@ done:
} /* end H5D__farray_idx_create() */
/*-------------------------------------------------------------------------
+ * Function: H5D__farray_idx_open
+ *
+ * Purpose: Opens an existing fixed array and initializes
+ * the layout struct with information about the storage.
+ *
+ * Return: Success: non-negative
+ * Failure: negative
+ *
+ *-------------------------------------------------------------------------
+ */
+static herr_t
+H5D__farray_idx_open(const H5D_chk_idx_info_t *idx_info)
+{
+ H5D_farray_ctx_ud_t udata; /* User data for fixed array open call */
+ herr_t ret_value = SUCCEED; /* Return value */
+
+ FUNC_ENTER_PACKAGE
+
+ /* Check args */
+ assert(idx_info);
+ assert(idx_info->f);
+ assert(idx_info->pline);
+ assert(idx_info->layout);
+ assert(H5D_CHUNK_IDX_FARRAY == idx_info->layout->idx_type);
+ assert(idx_info->storage);
+ assert(H5D_CHUNK_IDX_FARRAY == idx_info->storage->idx_type);
+ assert(H5_addr_defined(idx_info->storage->idx_addr));
+ assert(NULL == idx_info->storage->u.farray.fa);
+
+ /* Set up the user data */
+ udata.f = idx_info->f;
+ udata.chunk_size = idx_info->layout->size;
+
+ /* Open the fixed array for the chunk index */
+ if (NULL ==
+ (idx_info->storage->u.farray.fa = H5FA_open(idx_info->f, idx_info->storage->idx_addr, &udata)))
+ HGOTO_ERROR(H5E_DATASET, H5E_CANTINIT, FAIL, "can't open fixed array");
+
+ /* Check for SWMR writes to the file */
+ if (H5F_INTENT(idx_info->f) & H5F_ACC_SWMR_WRITE)
+ if (H5D__farray_idx_depend(idx_info) < 0)
+ HGOTO_ERROR(H5E_DATASET, H5E_CANTDEPEND, FAIL,
+ "unable to create flush dependency on object header");
+
+done:
+ FUNC_LEAVE_NOAPI(ret_value)
+} /* end H5D__farray_idx_open() */
+
+/*-------------------------------------------------------------------------
+ * Function: H5D__farray_idx_close
+ *
+ * Purpose: Closes an existing fixed array.
+ *
+ * Return: Success: non-negative
+ * Failure: negative
+ *
+ *-------------------------------------------------------------------------
+ */
+static herr_t
+H5D__farray_idx_close(const H5D_chk_idx_info_t *idx_info)
+{
+ herr_t ret_value = SUCCEED; /* Return value */
+
+ FUNC_ENTER_PACKAGE
+
+ assert(idx_info);
+ assert(idx_info->storage);
+ assert(H5D_CHUNK_IDX_FARRAY == idx_info->storage->idx_type);
+ assert(idx_info->storage->u.farray.fa);
+
+ if (H5FA_close(idx_info->storage->u.farray.fa) < 0)
+ HGOTO_ERROR(H5E_DATASET, H5E_CANTCLOSEOBJ, FAIL, "unable to close fixed array");
+ idx_info->storage->u.farray.fa = NULL;
+
+done:
+ FUNC_LEAVE_NOAPI(ret_value)
+} /* end H5D__farray_idx_close() */
+
+/*-------------------------------------------------------------------------
+ * Function: H5D__farray_idx_is_open
+ *
+ * Purpose: Query if the index is opened or not
+ *
+ * Return: SUCCEED (can't fail)
+ *
+ *-------------------------------------------------------------------------
+ */
+static herr_t
+H5D__farray_idx_is_open(const H5D_chk_idx_info_t *idx_info, bool *is_open)
+{
+ FUNC_ENTER_PACKAGE_NOERR
+
+ assert(idx_info);
+ assert(idx_info->storage);
+ assert(H5D_CHUNK_IDX_FARRAY == idx_info->storage->idx_type);
+ assert(is_open);
+
+ *is_open = H5D_FARRAY_IDX_IS_OPEN(idx_info);
+
+ FUNC_LEAVE_NOAPI(SUCCEED)
+} /* end H5D__farray_idx_is_open() */
+
+/*-------------------------------------------------------------------------
* Function: H5D__farray_idx_is_space_alloc
*
* Purpose: Query if space is allocated for index method
*
- * Return: Non-negative on success/Negative on failure
+ * Return: true/false
*
*-------------------------------------------------------------------------
*/
@@ -901,7 +964,7 @@ H5D__farray_idx_insert(const H5D_chk_idx_info_t *idx_info, H5D_chunk_ud_t *udata
assert(udata);
/* Check if the fixed array is open yet */
- if (NULL == idx_info->storage->u.farray.fa) {
+ if (!H5D_FARRAY_IDX_IS_OPEN(idx_info)) {
/* Open the fixed array in file */
if (H5D__farray_idx_open(idx_info) < 0)
HGOTO_ERROR(H5E_DATASET, H5E_CANTOPENOBJ, FAIL, "can't open fixed array");
@@ -969,7 +1032,7 @@ H5D__farray_idx_get_addr(const H5D_chk_idx_info_t *idx_info, H5D_chunk_ud_t *uda
assert(udata);
/* Check if the fixed array is open yet */
- if (NULL == idx_info->storage->u.farray.fa) {
+ if (!H5D_FARRAY_IDX_IS_OPEN(idx_info)) {
/* Open the fixed array in file */
if (H5D__farray_idx_open(idx_info) < 0)
HGOTO_ERROR(H5E_DATASET, H5E_CANTOPENOBJ, FAIL, "can't open fixed array");
@@ -1017,6 +1080,50 @@ done:
} /* H5D__farray_idx_get_addr() */
/*-------------------------------------------------------------------------
+ * Function: H5D__farray_idx_load_metadata
+ *
+ * Purpose: Load additional chunk index metadata beyond the chunk index
+ * itself.
+ *
+ * Return: Non-negative on success/Negative on failure
+ *
+ *-------------------------------------------------------------------------
+ */
+static herr_t
+H5D__farray_idx_load_metadata(const H5D_chk_idx_info_t *idx_info)
+{
+ H5D_chunk_ud_t chunk_ud;
+ hsize_t scaled[H5O_LAYOUT_NDIMS] = {0};
+ herr_t ret_value = SUCCEED;
+
+ FUNC_ENTER_PACKAGE
+
+ /*
+ * After opening a dataset that uses a fixed array, the
+ * fixed array data block will generally not be read in
+ * until an element is looked up for the first time. Since
+ * there isn't currently a good way of controlling that
+ * explicitly, perform a fake lookup of a chunk to cause
+ * it to be read in.
+ */
+ chunk_ud.common.layout = idx_info->layout;
+ chunk_ud.common.storage = idx_info->storage;
+ chunk_ud.common.scaled = scaled;
+
+ chunk_ud.chunk_block.offset = HADDR_UNDEF;
+ chunk_ud.chunk_block.length = 0;
+ chunk_ud.filter_mask = 0;
+ chunk_ud.new_unfilt_chunk = false;
+ chunk_ud.idx_hint = UINT_MAX;
+
+ if (H5D__farray_idx_get_addr(idx_info, &chunk_ud) < 0)
+ HGOTO_ERROR(H5E_DATASET, H5E_CANTGET, FAIL, "can't load fixed array data block");
+
+done:
+ FUNC_LEAVE_NOAPI(ret_value)
+} /* H5D__farray_idx_load_metadata() */
+
+/*-------------------------------------------------------------------------
* Function: H5D__farray_idx_iterate_cb
*
* Purpose: Callback routine for fixed array element iteration.
@@ -1102,7 +1209,7 @@ H5D__farray_idx_iterate(const H5D_chk_idx_info_t *idx_info, H5D_chunk_cb_func_t
assert(chunk_udata);
/* Check if the fixed array is open yet */
- if (NULL == idx_info->storage->u.farray.fa) {
+ if (!H5D_FARRAY_IDX_IS_OPEN(idx_info)) {
/* Open the fixed array in file */
if (H5D__farray_idx_open(idx_info) < 0)
HGOTO_ERROR(H5E_DATASET, H5E_CANTOPENOBJ, FAIL, "can't open fixed array");
@@ -1171,7 +1278,7 @@ H5D__farray_idx_remove(const H5D_chk_idx_info_t *idx_info, H5D_chunk_common_ud_t
assert(udata);
/* Check if the fixed array is open yet */
- if (NULL == idx_info->storage->u.farray.fa) {
+ if (!H5D_FARRAY_IDX_IS_OPEN(idx_info)) {
/* Open the fixed array in file */
if (H5D__farray_idx_open(idx_info) < 0)
HGOTO_ERROR(H5E_DATASET, H5E_CANTOPENOBJ, FAIL, "can't open fixed array");
@@ -1302,9 +1409,8 @@ H5D__farray_idx_delete(const H5D_chk_idx_info_t *idx_info)
HGOTO_ERROR(H5E_DATASET, H5E_BADITER, FAIL, "unable to iterate over chunk addresses");
/* Close fixed array */
- if (H5FA_close(idx_info->storage->u.farray.fa) < 0)
+ if (H5D__farray_idx_close(idx_info) < 0)
HGOTO_ERROR(H5E_DATASET, H5E_CANTCLOSEOBJ, FAIL, "unable to close fixed array");
- idx_info->storage->u.farray.fa = NULL;
/* Set up the user data */
ctx_udata.f = idx_info->f;
@@ -1352,10 +1458,11 @@ H5D__farray_idx_copy_setup(const H5D_chk_idx_info_t *idx_info_src, const H5D_chk
assert(!H5_addr_defined(idx_info_dst->storage->idx_addr));
/* Check if the source fixed array is open yet */
- if (NULL == idx_info_src->storage->u.farray.fa)
+ if (!H5D_FARRAY_IDX_IS_OPEN(idx_info_src)) {
/* Open the fixed array in file */
if (H5D__farray_idx_open(idx_info_src) < 0)
HGOTO_ERROR(H5E_DATASET, H5E_CANTOPENOBJ, FAIL, "can't open fixed array");
+ }
/* Set copied metadata tag */
H5_BEGIN_TAG(H5AC__COPIED_TAG)
@@ -1450,9 +1557,8 @@ H5D__farray_idx_size(const H5D_chk_idx_info_t *idx_info, hsize_t *index_size)
done:
if (idx_info->storage->u.farray.fa) {
- if (H5FA_close(idx_info->storage->u.farray.fa) < 0)
- HGOTO_ERROR(H5E_DATASET, H5E_CANTCLOSEOBJ, FAIL, "unable to close fixed array");
- idx_info->storage->u.farray.fa = NULL;
+ if (H5D__farray_idx_close(idx_info) < 0)
+ HDONE_ERROR(H5E_DATASET, H5E_CANTCLOSEOBJ, FAIL, "unable to close fixed array");
} /* end if */
FUNC_LEAVE_NOAPI(ret_value)
@@ -1528,16 +1634,14 @@ H5D__farray_idx_dest(const H5D_chk_idx_info_t *idx_info)
assert(idx_info->storage);
/* Check if the fixed array is open */
- if (idx_info->storage->u.farray.fa) {
-
+ if (H5D_FARRAY_IDX_IS_OPEN(idx_info)) {
/* Patch the top level file pointer contained in fa if needed */
if (H5FA_patch_file(idx_info->storage->u.farray.fa, idx_info->f) < 0)
HGOTO_ERROR(H5E_DATASET, H5E_CANTOPENOBJ, FAIL, "can't patch fixed array file pointer");
/* Close fixed array */
- if (H5FA_close(idx_info->storage->u.farray.fa) < 0)
+ if (H5D__farray_idx_close(idx_info) < 0)
HGOTO_ERROR(H5E_DATASET, H5E_CANTCLOSEOBJ, FAIL, "unable to close fixed array");
- idx_info->storage->u.farray.fa = NULL;
} /* end if */
done:
diff --git a/src/H5Dmpio.c b/src/H5Dmpio.c
index 8ba9a14..5c2ee87 100644
--- a/src/H5Dmpio.c
+++ b/src/H5Dmpio.c
@@ -3024,6 +3024,26 @@ H5D__obtain_mpio_mode(H5D_io_info_t *io_info, H5D_dset_io_info_t *di, uint8_t as
* metadata reads are enabled.
*/
if (H5F_get_coll_metadata_reads(di->dset->oloc.file)) {
+#ifndef NDEBUG
+ {
+ H5D_chk_idx_info_t idx_info;
+ bool index_is_open;
+
+ idx_info.f = di->dset->oloc.file;
+ idx_info.pline = &di->dset->shared->dcpl_cache.pline;
+ idx_info.layout = &di->dset->shared->layout.u.chunk;
+ idx_info.storage = &di->dset->shared->layout.storage.u.chunk;
+
+ /*
+ * The dataset's chunk index should be open at this point.
+ * Otherwise, we will end up reading it in independently,
+ * which may not be desired.
+ */
+ idx_info.storage->ops->is_open(&idx_info, &index_is_open);
+ assert(index_is_open);
+ }
+#endif
+
md_reads_file_flag = H5P_FORCE_FALSE;
md_reads_context_flag = false;
H5F_set_coll_metadata_reads(di->dset->oloc.file, &md_reads_file_flag, &md_reads_context_flag);
@@ -3446,26 +3466,6 @@ H5D__mpio_collective_filtered_chunk_io_setup(const H5D_io_info_t *io_info, const
chunk_node = H5SL_next(chunk_node);
}
}
- else if (H5F_get_coll_metadata_reads(di[dset_idx].dset->oloc.file)) {
- hsize_t scaled[H5O_LAYOUT_NDIMS] = {0};
-
- /*
- * If this rank has no selection in the dataset and collective
- * metadata reads are enabled, do a fake lookup of a chunk to
- * ensure that this rank has the chunk index opened. Otherwise,
- * only the ranks that had a selection will have opened the
- * chunk index and they will have done so independently. Therefore,
- * when ranks with no selection participate in later collective
- * metadata reads, they will try to open the chunk index collectively
- * and issues will occur since other ranks won't participate.
- *
- * In the future, we should consider having a chunk index "open"
- * callback that can be used to ensure collectivity between ranks
- * in a more natural way, but this hack should suffice for now.
- */
- if (H5D__chunk_lookup(di[dset_idx].dset, scaled, &udata) < 0)
- HGOTO_ERROR(H5E_DATASET, H5E_CANTGET, FAIL, "error looking up chunk address");
- }
/* Reset metadata tagging */
H5AC_tag(prev_tag, NULL);
diff --git a/src/H5Dnone.c b/src/H5Dnone.c
index 472a221..d4eb918 100644
--- a/src/H5Dnone.c
+++ b/src/H5Dnone.c
@@ -14,9 +14,9 @@
* Purpose: Implicit (Non Index) chunked I/O functions.
*
* This is used when the dataset is:
- * - extendible but with fixed max. dims
- * - with early allocation
- * - without filter
+ * - extendible but with fixed max. dims
+ * - with early allocation
+ * - without filter
*
* The chunk coordinate is mapped into the actual disk addresses
* for the chunk without indexing.
@@ -31,12 +31,12 @@
/***********/
/* Headers */
/***********/
-#include "H5private.h" /* Generic Functions */
-#include "H5Dpkg.h" /* Datasets */
-#include "H5Eprivate.h" /* Error handling */
+#include "H5private.h" /* Generic Functions */
+#include "H5Dpkg.h" /* Datasets */
+#include "H5Eprivate.h" /* Error handling */
#include "H5FLprivate.h" /* Free Lists */
-#include "H5MFprivate.h" /* File space management */
-#include "H5VMprivate.h" /* Vector functions */
+#include "H5MFprivate.h" /* File space management */
+#include "H5VMprivate.h" /* Vector functions */
/****************/
/* Local Macros */
@@ -52,8 +52,12 @@
/* Non Index chunking I/O ops */
static herr_t H5D__none_idx_create(const H5D_chk_idx_info_t *idx_info);
+static herr_t H5D__none_idx_open(const H5D_chk_idx_info_t *idx_info);
+static herr_t H5D__none_idx_close(const H5D_chk_idx_info_t *idx_info);
+static herr_t H5D__none_idx_is_open(const H5D_chk_idx_info_t *idx_info, bool *is_open);
static bool H5D__none_idx_is_space_alloc(const H5O_storage_chunk_t *storage);
static herr_t H5D__none_idx_get_addr(const H5D_chk_idx_info_t *idx_info, H5D_chunk_ud_t *udata);
+static herr_t H5D__none_idx_load_metadata(const H5D_chk_idx_info_t *idx_info);
static int H5D__none_idx_iterate(const H5D_chk_idx_info_t *idx_info, H5D_chunk_cb_func_t chunk_cb,
void *chunk_udata);
static herr_t H5D__none_idx_remove(const H5D_chk_idx_info_t *idx_info, H5D_chunk_common_ud_t *udata);
@@ -73,9 +77,13 @@ const H5D_chunk_ops_t H5D_COPS_NONE[1] = {{
false, /* Non-indexed chunking don't current support SWMR access */
NULL, /* init */
H5D__none_idx_create, /* create */
+ H5D__none_idx_open, /* open */
+ H5D__none_idx_close, /* close */
+ H5D__none_idx_is_open, /* is_open */
H5D__none_idx_is_space_alloc, /* is_space_alloc */
NULL, /* insert */
H5D__none_idx_get_addr, /* get_addr */
+ H5D__none_idx_load_metadata, /* load_metadata */
NULL, /* resize */
H5D__none_idx_iterate, /* iterate */
H5D__none_idx_remove, /* remove */
@@ -97,12 +105,12 @@ const H5D_chunk_ops_t H5D_COPS_NONE[1] = {{
/*******************/
/*-------------------------------------------------------------------------
- * Function: H5D__none_idx_create
+ * Function: H5D__none_idx_create
*
- * Purpose: Allocate memory for the maximum # of chunks in the dataset.
+ * Purpose: Allocate memory for the maximum # of chunks in the dataset.
*
- * Return: Non-negative on success
- * Negative on failure.
+ * Return: Non-negative on success
+ * Negative on failure.
*
*-------------------------------------------------------------------------
*/
@@ -141,11 +149,73 @@ done:
} /* end H5D__none_idx_create() */
/*-------------------------------------------------------------------------
- * Function: H5D__none_idx_is_space_alloc
+ * Function: H5D__none_idx_open
*
- * Purpose: Query if space for the dataset chunks is allocated
+ * Purpose: Opens an existing "none" index. Currently a no-op.
*
- * Return: Non-negative on success/Negative on failure
+ * Return: SUCCEED (can't fail)
+ *
+ *-------------------------------------------------------------------------
+ */
+static herr_t
+H5D__none_idx_open(const H5D_chk_idx_info_t H5_ATTR_UNUSED *idx_info)
+{
+ FUNC_ENTER_PACKAGE_NOERR
+
+ /* NO OP */
+
+ FUNC_LEAVE_NOAPI(SUCCEED)
+} /* end H5D__none_idx_open() */
+
+/*-------------------------------------------------------------------------
+ * Function: H5D__none_idx_close
+ *
+ * Purpose: Closes an existing "none" index. Currently a no-op.
+ *
+ * Return: SUCCEED (can't fail)
+ *
+ *-------------------------------------------------------------------------
+ */
+static herr_t
+H5D__none_idx_close(const H5D_chk_idx_info_t H5_ATTR_UNUSED *idx_info)
+{
+ FUNC_ENTER_PACKAGE_NOERR
+
+ /* NO OP */
+
+ FUNC_LEAVE_NOAPI(SUCCEED)
+} /* end H5D__none_idx_close() */
+
+/*-------------------------------------------------------------------------
+ * Function: H5D__none_idx_is_open
+ *
+ * Purpose: Query if the index is opened or not
+ *
+ * Return: SUCCEED (can't fail)
+ *
+ *-------------------------------------------------------------------------
+ */
+static herr_t
+H5D__none_idx_is_open(const H5D_chk_idx_info_t H5_ATTR_NDEBUG_UNUSED *idx_info, bool *is_open)
+{
+ FUNC_ENTER_PACKAGE_NOERR
+
+ assert(idx_info);
+ assert(idx_info->storage);
+ assert(H5D_CHUNK_IDX_NONE == idx_info->storage->idx_type);
+ assert(is_open);
+
+ *is_open = true;
+
+ FUNC_LEAVE_NOAPI(SUCCEED)
+} /* end H5D__none_idx_is_open() */
+
+/*-------------------------------------------------------------------------
+ * Function: H5D__none_idx_is_space_alloc
+ *
+ * Purpose: Query if space for the dataset chunks is allocated
+ *
+ * Return: true/false
*
*-------------------------------------------------------------------------
*/
@@ -161,12 +231,12 @@ H5D__none_idx_is_space_alloc(const H5O_storage_chunk_t *storage)
} /* end H5D__none_idx_is_space_alloc() */
/*-------------------------------------------------------------------------
- * Function: H5D__none_idx_get_addr
+ * Function: H5D__none_idx_get_addr
*
- * Purpose: Get the file address of a chunk.
- * Save the retrieved information in the udata supplied.
+ * Purpose: Get the file address of a chunk.
+ * Save the retrieved information in the udata supplied.
*
- * Return: Non-negative on success/Negative on failure
+ * Return: Non-negative on success/Negative on failure
*
*-------------------------------------------------------------------------
*/
@@ -200,12 +270,32 @@ H5D__none_idx_get_addr(const H5D_chk_idx_info_t *idx_info, H5D_chunk_ud_t *udata
} /* H5D__none_idx_get_addr() */
/*-------------------------------------------------------------------------
- * Function: H5D__none_idx_iterate
+ * Function: H5D__none_idx_load_metadata
+ *
+ * Purpose: Load additional chunk index metadata beyond the chunk index
+ * itself. Currently a no-op.
+ *
+ * Return: Non-negative on success/Negative on failure
+ *
+ *-------------------------------------------------------------------------
+ */
+static herr_t
+H5D__none_idx_load_metadata(const H5D_chk_idx_info_t H5_ATTR_UNUSED *idx_info)
+{
+ FUNC_ENTER_PACKAGE_NOERR
+
+ /* NO OP */
+
+ FUNC_LEAVE_NOAPI(SUCCEED)
+} /* H5D__none_idx_load_metadata() */
+
+/*-------------------------------------------------------------------------
+ * Function: H5D__none_idx_iterate
*
- * Purpose: Iterate over the chunks in an index, making a callback
+ * Purpose: Iterate over the chunks in an index, making a callback
* for each one.
*
- * Return: Non-negative on success/Negative on failure
+ * Return: Non-negative on success/Negative on failure
*
*-------------------------------------------------------------------------
*/
@@ -275,13 +365,13 @@ done:
} /* end H5D__none_idx_iterate() */
/*-------------------------------------------------------------------------
- * Function: H5D__none_idx_remove
+ * Function: H5D__none_idx_remove
*
- * Purpose: Remove chunk from index.
+ * Purpose: Remove chunk from index.
*
- * Note: Chunks can't be removed (or added) to datasets with this
- * form of index - all the space for all the chunks is always
- * allocated in the file.
+ * Note: Chunks can't be removed (or added) to datasets with this
+ * form of index - all the space for all the chunks is always
+ * allocated in the file.
*
* Return: Non-negative on success/Negative on failure
*
@@ -299,12 +389,12 @@ H5D__none_idx_remove(const H5D_chk_idx_info_t H5_ATTR_UNUSED *idx_info,
} /* H5D__none_idx_remove() */
/*-------------------------------------------------------------------------
- * Function: H5D__none_idx_delete
+ * Function: H5D__none_idx_delete
*
- * Purpose: Delete raw data storage for entire dataset (i.e. all chunks)
+ * Purpose: Delete raw data storage for entire dataset (i.e. all chunks)
*
- * Return: Success: Non-negative
- * Failure: negative
+ * Return: Success: Non-negative
+ * Failure: negative
*
*-------------------------------------------------------------------------
*/
@@ -337,11 +427,11 @@ done:
} /* end H5D__none_idx_delete() */
/*-------------------------------------------------------------------------
- * Function: H5D__none_idx_copy_setup
+ * Function: H5D__none_idx_copy_setup
*
- * Purpose: Set up any necessary information for copying chunks
+ * Purpose: Set up any necessary information for copying chunks
*
- * Return: Non-negative on success/Negative on failure
+ * Return: Non-negative on success/Negative on failure
*
*-------------------------------------------------------------------------
*/
@@ -407,11 +497,11 @@ H5D__none_idx_size(const H5D_chk_idx_info_t H5_ATTR_UNUSED *idx_info, hsize_t *i
} /* end H5D__none_idx_size() */
/*-------------------------------------------------------------------------
- * Function: H5D__none_idx_reset
+ * Function: H5D__none_idx_reset
*
- * Purpose: Reset indexing information.
+ * Purpose: Reset indexing information.
*
- * Return: Non-negative on success/Negative on failure
+ * Return: Non-negative on success/Negative on failure
*
*-------------------------------------------------------------------------
*/
@@ -431,11 +521,11 @@ H5D__none_idx_reset(H5O_storage_chunk_t *storage, bool reset_addr)
} /* end H5D__none_idx_reset() */
/*-------------------------------------------------------------------------
- * Function: H5D__none_idx_dump
+ * Function: H5D__none_idx_dump
*
- * Purpose: Dump
+ * Purpose: Dump
*
- * Return: Non-negative on success/Negative on failure
+ * Return: Non-negative on success/Negative on failure
*
*-------------------------------------------------------------------------
*/
diff --git a/src/H5Dpkg.h b/src/H5Dpkg.h
index 82fec0e..a3695ae 100644
--- a/src/H5Dpkg.h
+++ b/src/H5Dpkg.h
@@ -393,10 +393,14 @@ typedef int (*H5D_chunk_cb_func_t)(const H5D_chunk_rec_t *chunk_rec, void *udata
typedef herr_t (*H5D_chunk_init_func_t)(const H5D_chk_idx_info_t *idx_info, const H5S_t *space,
haddr_t dset_ohdr_addr);
typedef herr_t (*H5D_chunk_create_func_t)(const H5D_chk_idx_info_t *idx_info);
+typedef herr_t (*H5D_chunk_open_func_t)(const H5D_chk_idx_info_t *idx_info);
+typedef herr_t (*H5D_chunk_close_func_t)(const H5D_chk_idx_info_t *idx_info);
+typedef herr_t (*H5D_chunk_is_open_func_t)(const H5D_chk_idx_info_t *idx_info, bool *is_open);
typedef bool (*H5D_chunk_is_space_alloc_func_t)(const H5O_storage_chunk_t *storage);
typedef herr_t (*H5D_chunk_insert_func_t)(const H5D_chk_idx_info_t *idx_info, H5D_chunk_ud_t *udata,
const H5D_t *dset);
typedef herr_t (*H5D_chunk_get_addr_func_t)(const H5D_chk_idx_info_t *idx_info, H5D_chunk_ud_t *udata);
+typedef herr_t (*H5D_chunk_load_metadata_func_t)(const H5D_chk_idx_info_t *idx_info);
typedef herr_t (*H5D_chunk_resize_func_t)(H5O_layout_chunk_t *layout);
typedef int (*H5D_chunk_iterate_func_t)(const H5D_chk_idx_info_t *idx_info, H5D_chunk_cb_func_t chunk_cb,
void *chunk_udata);
@@ -413,13 +417,18 @@ typedef herr_t (*H5D_chunk_dest_func_t)(const H5D_chk_idx_info_t *idx_info);
/* Typedef for grouping chunk I/O routines */
typedef struct H5D_chunk_ops_t {
- bool can_swim; /* Flag to indicate that the index supports SWMR access */
- H5D_chunk_init_func_t init; /* Routine to initialize indexing information in memory */
- H5D_chunk_create_func_t create; /* Routine to create chunk index */
+ bool can_swim; /* Flag to indicate that the index supports SWMR access */
+ H5D_chunk_init_func_t init; /* Routine to initialize indexing information in memory */
+ H5D_chunk_create_func_t create; /* Routine to create chunk index */
+ H5D_chunk_open_func_t open; /* Routine to open chunk index */
+ H5D_chunk_close_func_t close; /* Routine to close chunk index */
+ H5D_chunk_is_open_func_t is_open; /* Query routine to determine if index is open or not */
H5D_chunk_is_space_alloc_func_t
- is_space_alloc; /* Query routine to determine if storage/index is allocated */
- H5D_chunk_insert_func_t insert; /* Routine to insert a chunk into an index */
- H5D_chunk_get_addr_func_t get_addr; /* Routine to retrieve address of chunk in file */
+ is_space_alloc; /* Query routine to determine if storage/index is allocated */
+ H5D_chunk_insert_func_t insert; /* Routine to insert a chunk into an index */
+ H5D_chunk_get_addr_func_t get_addr; /* Routine to retrieve address of chunk in file */
+ H5D_chunk_load_metadata_func_t
+ load_metadata; /* Routine to load additional chunk index metadata, such as fixed array data blocks */
H5D_chunk_resize_func_t resize; /* Routine to update chunk index info after resizing dataset */
H5D_chunk_iterate_func_t iterate; /* Routine to iterate over chunks */
H5D_chunk_remove_func_t remove; /* Routine to remove a chunk from an index */
diff --git a/src/H5Dsingle.c b/src/H5Dsingle.c
index 9cb18d3..dd9f235 100644
--- a/src/H5Dsingle.c
+++ b/src/H5Dsingle.c
@@ -27,12 +27,12 @@
/***********/
/* Headers */
/***********/
-#include "H5private.h" /* Generic Functions */
-#include "H5Dpkg.h" /* Datasets */
-#include "H5Eprivate.h" /* Error handling */
+#include "H5private.h" /* Generic Functions */
+#include "H5Dpkg.h" /* Datasets */
+#include "H5Eprivate.h" /* Error handling */
#include "H5FLprivate.h" /* Free Lists */
-#include "H5MFprivate.h" /* File space management */
-#include "H5VMprivate.h" /* Vector functions */
+#include "H5MFprivate.h" /* File space management */
+#include "H5VMprivate.h" /* Vector functions */
/****************/
/* Local Macros */
@@ -50,10 +50,14 @@
static herr_t H5D__single_idx_init(const H5D_chk_idx_info_t *idx_info, const H5S_t *space,
haddr_t dset_ohdr_addr);
static herr_t H5D__single_idx_create(const H5D_chk_idx_info_t *idx_info);
+static herr_t H5D__single_idx_open(const H5D_chk_idx_info_t *idx_info);
+static herr_t H5D__single_idx_close(const H5D_chk_idx_info_t *idx_info);
+static herr_t H5D__single_idx_is_open(const H5D_chk_idx_info_t *idx_info, bool *is_open);
static bool H5D__single_idx_is_space_alloc(const H5O_storage_chunk_t *storage);
static herr_t H5D__single_idx_insert(const H5D_chk_idx_info_t *idx_info, H5D_chunk_ud_t *udata,
const H5D_t *dset);
static herr_t H5D__single_idx_get_addr(const H5D_chk_idx_info_t *idx_info, H5D_chunk_ud_t *udata);
+static herr_t H5D__single_idx_load_metadata(const H5D_chk_idx_info_t *idx_info);
static int H5D__single_idx_iterate(const H5D_chk_idx_info_t *idx_info, H5D_chunk_cb_func_t chunk_cb,
void *chunk_udata);
static herr_t H5D__single_idx_remove(const H5D_chk_idx_info_t *idx_info, H5D_chunk_common_ud_t *udata);
@@ -73,9 +77,13 @@ const H5D_chunk_ops_t H5D_COPS_SINGLE[1] = {{
false, /* Single Chunk indexing doesn't current support SWMR access */
H5D__single_idx_init, /* init */
H5D__single_idx_create, /* create */
+ H5D__single_idx_open, /* open */
+ H5D__single_idx_close, /* close */
+ H5D__single_idx_is_open, /* is_open */
H5D__single_idx_is_space_alloc, /* is_space_alloc */
H5D__single_idx_insert, /* insert */
H5D__single_idx_get_addr, /* get_addr */
+ H5D__single_idx_load_metadata, /* load_metadata */
NULL, /* resize */
H5D__single_idx_iterate, /* iterate */
H5D__single_idx_remove, /* remove */
@@ -133,12 +141,12 @@ H5D__single_idx_init(const H5D_chk_idx_info_t *idx_info, const H5S_t H5_ATTR_UNU
} /* end H5D__single_idx_init() */
/*-------------------------------------------------------------------------
- * Function: H5D__single_idx_create
+ * Function: H5D__single_idx_create
*
- * Purpose: Set up Single Chunk Index: filtered or non-filtered
+ * Purpose: Set up Single Chunk Index: filtered or non-filtered
*
- * Return: Non-negative on success
- * Negative on failure.
+ * Return: Non-negative on success
+ * Negative on failure.
*
*-------------------------------------------------------------------------
*/
@@ -166,11 +174,73 @@ H5D__single_idx_create(const H5D_chk_idx_info_t *idx_info)
} /* end H5D__single_idx_create() */
/*-------------------------------------------------------------------------
- * Function: H5D__single_idx_is_space_alloc
+ * Function: H5D__single_idx_open
*
- * Purpose: Query if space is allocated for the single chunk
+ * Purpose: Opens an existing "single" index. Currently a no-op.
*
- * Return: Non-negative on success/Negative on failure
+ * Return: SUCCEED (can't fail)
+ *
+ *-------------------------------------------------------------------------
+ */
+static herr_t
+H5D__single_idx_open(const H5D_chk_idx_info_t H5_ATTR_UNUSED *idx_info)
+{
+ FUNC_ENTER_PACKAGE_NOERR
+
+ /* NO OP */
+
+ FUNC_LEAVE_NOAPI(SUCCEED)
+} /* end H5D__single_idx_open() */
+
+/*-------------------------------------------------------------------------
+ * Function: H5D__single_idx_close
+ *
+ * Purpose: Closes an existing "single" index. Currently a no-op.
+ *
+ * Return: SUCCEED (can't fail)
+ *
+ *-------------------------------------------------------------------------
+ */
+static herr_t
+H5D__single_idx_close(const H5D_chk_idx_info_t H5_ATTR_UNUSED *idx_info)
+{
+ FUNC_ENTER_PACKAGE_NOERR
+
+ /* NO OP */
+
+ FUNC_LEAVE_NOAPI(SUCCEED)
+} /* end H5D__single_idx_close() */
+
+/*-------------------------------------------------------------------------
+ * Function: H5D__single_idx_is_open
+ *
+ * Purpose: Query if the index is opened or not
+ *
+ * Return: SUCCEED (can't fail)
+ *
+ *-------------------------------------------------------------------------
+ */
+static herr_t
+H5D__single_idx_is_open(const H5D_chk_idx_info_t H5_ATTR_NDEBUG_UNUSED *idx_info, bool *is_open)
+{
+ FUNC_ENTER_PACKAGE_NOERR
+
+ assert(idx_info);
+ assert(idx_info->storage);
+ assert(H5D_CHUNK_IDX_SINGLE == idx_info->storage->idx_type);
+ assert(is_open);
+
+ *is_open = true;
+
+ FUNC_LEAVE_NOAPI(SUCCEED)
+} /* end H5D__single_idx_is_open() */
+
+/*-------------------------------------------------------------------------
+ * Function: H5D__single_idx_is_space_alloc
+ *
+ * Purpose: Query if space is allocated for the single chunk
+ *
+ * Return: Non-negative on success/Negative on failure
*
*-------------------------------------------------------------------------
*/
@@ -186,11 +256,11 @@ H5D__single_idx_is_space_alloc(const H5O_storage_chunk_t *storage)
} /* end H5D__single_idx_is_space_alloc() */
/*-------------------------------------------------------------------------
- * Function: H5D__single_idx_insert
+ * Function: H5D__single_idx_insert
*
- * Purpose: Allocate space for the single chunk
+ * Purpose: Allocate space for the single chunk
*
- * Return: Non-negative on success/Negative on failure
+ * Return: Non-negative on success/Negative on failure
*
*-------------------------------------------------------------------------
*/
@@ -231,12 +301,12 @@ done:
} /* H5D__single_idx_insert() */
/*-------------------------------------------------------------------------
- * Function: H5D__single_idx_get_addr
+ * Function: H5D__single_idx_get_addr
*
- * Purpose: Get the file address of a chunk.
- * Save the retrieved information in the udata supplied.
+ * Purpose: Get the file address of a chunk.
+ * Save the retrieved information in the udata supplied.
*
- * Return: Non-negative on success/Negative on failure
+ * Return: Non-negative on success/Negative on failure
*
*-------------------------------------------------------------------------
*/
@@ -271,11 +341,31 @@ H5D__single_idx_get_addr(const H5D_chk_idx_info_t *idx_info, H5D_chunk_ud_t *uda
} /* H5D__single_idx_get_addr() */
/*-------------------------------------------------------------------------
- * Function: H5D__single_idx_iterate
+ * Function: H5D__single_idx_load_metadata
+ *
+ * Purpose: Load additional chunk index metadata beyond the chunk index
+ * itself. Currently a no-op.
+ *
+ * Return: Non-negative on success/Negative on failure
+ *
+ *-------------------------------------------------------------------------
+ */
+static herr_t
+H5D__single_idx_load_metadata(const H5D_chk_idx_info_t H5_ATTR_UNUSED *idx_info)
+{
+ FUNC_ENTER_PACKAGE_NOERR
+
+ /* NO OP */
+
+ FUNC_LEAVE_NOAPI(SUCCEED)
+} /* H5D__single_idx_load_metadata() */
+
+/*-------------------------------------------------------------------------
+ * Function: H5D__single_idx_iterate
*
- * Purpose: Make callback for the single chunk
+ * Purpose: Make callback for the single chunk
*
- * Return: Non-negative on success/Negative on failure
+ * Return: Non-negative on success/Negative on failure
*
*-------------------------------------------------------------------------
*/
@@ -318,11 +408,11 @@ H5D__single_idx_iterate(const H5D_chk_idx_info_t *idx_info, H5D_chunk_cb_func_t
} /* end H5D__single_idx_iterate() */
/*-------------------------------------------------------------------------
- * Function: H5D__single_idx_remove
+ * Function: H5D__single_idx_remove
*
- * Purpose: Remove the single chunk
+ * Purpose: Remove the single chunk
*
- * Return: Non-negative on success/Negative on failure
+ * Return: Non-negative on success/Negative on failure
*
*-------------------------------------------------------------------------
*/
@@ -357,12 +447,13 @@ done:
} /* H5D__single_idx_remove() */
/*-------------------------------------------------------------------------
- * Function: H5D__single_idx_delete
+ * Function: H5D__single_idx_delete
*
- * Purpose: Delete raw data storage for entire dataset (i.e. the only chunk)
+ * Purpose: Delete raw data storage for entire dataset (i.e. the only
+ * chunk)
*
- * Return: Success: Non-negative
- * Failure: negative
+ * Return: Success: Non-negative
+ * Failure: negative
*
*-------------------------------------------------------------------------
*/
@@ -389,11 +480,12 @@ H5D__single_idx_delete(const H5D_chk_idx_info_t *idx_info)
} /* end H5D__single_idx_delete() */
/*-------------------------------------------------------------------------
- * Function: H5D__single_idx_copy_setup
+ * Function: H5D__single_idx_copy_setup
*
- * Purpose: Set up any necessary information for copying the single chunk
+ * Purpose: Set up any necessary information for copying the single
+ * chunk
*
- * Return: Non-negative on success/Negative on failure
+ * Return: Non-negative on success/Negative on failure
*
*-------------------------------------------------------------------------
*/
@@ -457,11 +549,11 @@ H5D__single_idx_size(const H5D_chk_idx_info_t H5_ATTR_UNUSED *idx_info, hsize_t
} /* end H5D__single_idx_size() */
/*-------------------------------------------------------------------------
- * Function: H5D__single_idx_reset
+ * Function: H5D__single_idx_reset
*
- * Purpose: Reset indexing information.
+ * Purpose: Reset indexing information.
*
- * Return: Non-negative on success/Negative on failure
+ * Return: Non-negative on success/Negative on failure
*
*-------------------------------------------------------------------------
*/
@@ -481,11 +573,11 @@ H5D__single_idx_reset(H5O_storage_chunk_t *storage, bool reset_addr)
} /* end H5D__single_idx_reset() */
/*-------------------------------------------------------------------------
- * Function: H5D__single_idx_dump
+ * Function: H5D__single_idx_dump
*
- * Purpose: Dump the address of the single chunk
+ * Purpose: Dump the address of the single chunk
*
- * Return: Non-negative on success/Negative on failure
+ * Return: Non-negative on success/Negative on failure
*
*-------------------------------------------------------------------------
*/
diff --git a/src/H5FDros3.c b/src/H5FDros3.c
index 3f3413c..c6aea0e 100644
--- a/src/H5FDros3.c
+++ b/src/H5FDros3.c
@@ -43,6 +43,9 @@
*/
#define ROS3_STATS 0
+/* Max size of the cache, in bytes */
+#define ROS3_MAX_CACHE_SIZE 16777216
+
/* The driver identification number, initialized at runtime
*/
static hid_t H5FD_ROS3_g = 0;
@@ -189,6 +192,8 @@ typedef struct H5FD_ros3_t {
H5FD_ros3_fapl_t fa;
haddr_t eoa;
s3r_t *s3r_handle;
+ uint8_t *cache;
+ size_t cache_size;
#if ROS3_STATS
ros3_statsbin meta[ROS3_STATS_BIN_COUNT + 1];
ros3_statsbin raw[ROS3_STATS_BIN_COUNT + 1];
@@ -1000,6 +1005,18 @@ H5FD__ros3_open(const char *url, unsigned flags, hid_t fapl_id, haddr_t maxaddr)
HGOTO_ERROR(H5E_INTERNAL, H5E_UNINITIALIZED, NULL, "unable to reset file statistics");
#endif /* ROS3_STATS */
+ /* Cache the initial bytes of the file */
+ {
+ size_t filesize = H5FD_s3comms_s3r_get_filesize(file->s3r_handle);
+
+ file->cache_size = (filesize < ROS3_MAX_CACHE_SIZE) ? filesize : ROS3_MAX_CACHE_SIZE;
+
+ if (NULL == (file->cache = (uint8_t *)H5MM_calloc(file->cache_size)))
+ HGOTO_ERROR(H5E_VFL, H5E_NOSPACE, NULL, "unable to allocate cache memory");
+ if (H5FD_s3comms_s3r_read(file->s3r_handle, 0, file->cache_size, file->cache) == FAIL)
+ HGOTO_ERROR(H5E_VFL, H5E_READERROR, NULL, "unable to execute read");
+ }
+
ret_value = (H5FD_t *)file;
done:
@@ -1007,8 +1024,10 @@ done:
if (handle != NULL)
if (FAIL == H5FD_s3comms_s3r_close(handle))
HDONE_ERROR(H5E_VFL, H5E_CANTCLOSEFILE, NULL, "unable to close s3 file handle");
- if (file != NULL)
+ if (file != NULL) {
+ H5MM_xfree(file->cache);
file = H5FL_FREE(H5FD_ros3_t, file);
+ }
curl_global_cleanup(); /* early cleanup because open failed */
} /* end if null return value (error) */
@@ -1335,6 +1354,7 @@ H5FD__ros3_close(H5FD_t H5_ATTR_UNUSED *_file)
#endif /* ROS3_STATS */
/* Release the file info */
+ H5MM_xfree(file->cache);
file = H5FL_FREE(H5FD_ros3_t, file);
done:
@@ -1666,41 +1686,50 @@ H5FD__ros3_read(H5FD_t *_file, H5FD_mem_t H5_ATTR_UNUSED type, hid_t H5_ATTR_UNU
fprintf(stdout, "H5FD__ros3_read() called.\n");
#endif
- assert(file != NULL);
- assert(file->s3r_handle != NULL);
- assert(buf != NULL);
+ assert(file);
+ assert(file->cache);
+ assert(file->s3r_handle);
+ assert(buf);
filesize = H5FD_s3comms_s3r_get_filesize(file->s3r_handle);
if ((addr > filesize) || ((addr + size) > filesize))
HGOTO_ERROR(H5E_ARGS, H5E_OVERFLOW, FAIL, "range exceeds file address");
- if (H5FD_s3comms_s3r_read(file->s3r_handle, addr, size, buf) == FAIL)
- HGOTO_ERROR(H5E_VFL, H5E_READERROR, FAIL, "unable to execute read");
+ /* Copy from the cache when accessing the first N bytes of the file.
+ * Saves network I/O operations when opening files.
+ */
+ if (addr + size < file->cache_size) {
+ memcpy(buf, file->cache + addr, size);
+ }
+ else {
+ if (H5FD_s3comms_s3r_read(file->s3r_handle, addr, size, buf) == FAIL)
+ HGOTO_ERROR(H5E_VFL, H5E_READERROR, FAIL, "unable to execute read");
#if ROS3_STATS
- /* Find which "bin" this read fits in. Can be "overflow" bin. */
- for (bin_i = 0; bin_i < ROS3_STATS_BIN_COUNT; bin_i++)
- if ((unsigned long long)size < ros3_stats_boundaries[bin_i])
- break;
- bin = (type == H5FD_MEM_DRAW) ? &file->raw[bin_i] : &file->meta[bin_i];
+ /* Find which "bin" this read fits in. Can be "overflow" bin. */
+ for (bin_i = 0; bin_i < ROS3_STATS_BIN_COUNT; bin_i++)
+ if ((unsigned long long)size < ros3_stats_boundaries[bin_i])
+ break;
+ bin = (type == H5FD_MEM_DRAW) ? &file->raw[bin_i] : &file->meta[bin_i];
- /* Store collected stats in appropriate bin */
- if (bin->count == 0) {
- bin->min = size;
- bin->max = size;
- }
- else {
- if (size < bin->min)
+ /* Store collected stats in appropriate bin */
+ if (bin->count == 0) {
bin->min = size;
- if (size > bin->max)
bin->max = size;
- }
- bin->count++;
- bin->bytes += (unsigned long long)size;
+ }
+ else {
+ if (size < bin->min)
+ bin->min = size;
+ if (size > bin->max)
+ bin->max = size;
+ }
+ bin->count++;
+ bin->bytes += (unsigned long long)size;
#endif /* ROS3_STATS */
+ }
done:
FUNC_LEAVE_NOAPI(ret_value)
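Editor's note: the read path above reduces to a single decision: serve the request from the cached prefix when the whole range falls inside it, otherwise go to the network. A hedged sketch with a hypothetical remote_read callback standing in for the S3 request:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    typedef int (*remote_read_fn)(size_t addr, size_t size, void *buf);

    static int
    cached_read(const uint8_t *cache, size_t cache_size, size_t addr, size_t size,
                void *buf, remote_read_fn remote_read)
    {
        /* As in the patch, a range that ends exactly at the cache boundary
         * still falls through to the remote read. */
        if (addr + size < cache_size) {
            memcpy(buf, cache + addr, size);
            return 0;
        }
        return remote_read(addr, size, buf);
    }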
diff --git a/src/H5Fint.c b/src/H5Fint.c
index 014f619..4093b4b 100644
--- a/src/H5Fint.c
+++ b/src/H5Fint.c
@@ -1968,6 +1968,22 @@ H5F_open(const char *name, unsigned flags, hid_t fcpl_id, hid_t fapl_id)
HGOTO_ERROR(H5E_FILE, H5E_CANTGET, NULL, "can't get minimum raw data fraction of page buffer");
} /* end if */
+ /* Get the evict on close setting */
+ if (H5P_get(a_plist, H5F_ACS_EVICT_ON_CLOSE_FLAG_NAME, &evict_on_close) < 0)
+ HGOTO_ERROR(H5E_PLIST, H5E_CANTGET, NULL, "can't get evict on close value");
+
+#ifdef H5_HAVE_PARALLEL
+ /* Check for evict on close in parallel (currently unsupported) */
+ assert(file->shared);
+ if (H5F_SHARED_HAS_FEATURE(file->shared, H5FD_FEAT_HAS_MPI)) {
+ int mpi_size = H5F_shared_mpi_get_size(file->shared);
+
+ if ((mpi_size > 1) && evict_on_close)
+ HGOTO_ERROR(H5E_FILE, H5E_UNSUPPORTED, NULL,
+ "evict on close is currently not supported in parallel HDF5");
+ }
+#endif
+
/*
* Read or write the file superblock, depending on whether the file is
* empty or not.
@@ -2046,8 +2062,6 @@ H5F_open(const char *name, unsigned flags, hid_t fcpl_id, hid_t fapl_id)
* or later, verify that the access property list value matches the value
* in shared file structure.
*/
- if (H5P_get(a_plist, H5F_ACS_EVICT_ON_CLOSE_FLAG_NAME, &evict_on_close) < 0)
- HGOTO_ERROR(H5E_PLIST, H5E_CANTGET, NULL, "can't get evict on close value");
if (shared->nrefs == 1)
shared->evict_on_close = evict_on_close;
else if (shared->nrefs > 1) {
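Editor's note: together with the H5Pfapl.c change further below, the effect is that H5Pset_evict_on_close() now succeeds in parallel builds and the restriction is enforced here, at file open, when the MPI communicator has more than one rank. An application-level sketch, assuming a parallel build; the file name is illustrative:

    #include <mpi.h>
    #include "hdf5.h"

    int
    main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_evict_on_close(fapl, 1); /* no longer fails in parallel builds */
        H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);

        /* With more than one MPI rank, the open-time check added above makes
         * this create fail; with a single rank it succeeds. */
        hid_t fid = H5Fcreate("eoc_example.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);
        if (fid >= 0)
            H5Fclose(fid);

        H5Pclose(fapl);
        MPI_Finalize();
        return 0;
    }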
diff --git a/src/H5Odeprec.c b/src/H5Odeprec.c
index 6e8b34e..3de5818 100644
--- a/src/H5Odeprec.c
+++ b/src/H5Odeprec.c
@@ -116,9 +116,10 @@ static herr_t
H5O__iterate1_adapter(hid_t obj_id, const char *name, const H5O_info2_t *oinfo2, void *op_data)
{
H5O_visit1_adapter_t *shim_data = (H5O_visit1_adapter_t *)op_data;
- H5O_info1_t oinfo; /* Deprecated object info struct */
- unsigned dm_fields; /* Fields for data model query */
- unsigned nat_fields; /* Fields for native query */
+ H5O_info1_t oinfo; /* Deprecated object info struct */
+ unsigned dm_fields; /* Fields for data model query */
+ unsigned nat_fields; /* Fields for native query */
+ H5VL_object_t *vol_obj;
herr_t ret_value = H5_ITER_CONT; /* Return value */
FUNC_ENTER_PACKAGE
@@ -158,7 +159,6 @@ H5O__iterate1_adapter(hid_t obj_id, const char *name, const H5O_info2_t *oinfo2,
/* Check for retrieving native information */
nat_fields = shim_data->fields & (H5O_INFO_HDR | H5O_INFO_META_SIZE);
if (nat_fields) {
- H5VL_object_t *vol_obj; /* Object of obj_id */
H5VL_optional_args_t vol_cb_args; /* Arguments to VOL callback */
H5VL_native_object_optional_args_t obj_opt_args; /* Arguments for optional operation */
H5VL_loc_params_t loc_params; /* Location parameters for VOL callback */
@@ -401,7 +401,8 @@ H5Oget_info1(hid_t loc_id, H5O_info1_t *oinfo /*out*/)
{
H5VL_object_t *vol_obj = NULL; /* Object of loc_id */
H5VL_loc_params_t loc_params;
- herr_t ret_value = SUCCEED; /* Return value */
+ bool is_native_vol_obj = false;
+ herr_t ret_value = SUCCEED; /* Return value */
FUNC_ENTER_API(FAIL)
H5TRACE2("e", "ix", loc_id, oinfo);
@@ -418,6 +419,15 @@ H5Oget_info1(hid_t loc_id, H5O_info1_t *oinfo /*out*/)
if (NULL == (vol_obj = H5VL_vol_object(loc_id)))
HGOTO_ERROR(H5E_ARGS, H5E_BADTYPE, FAIL, "invalid location identifier");
+ /* Check if using native VOL connector */
+ if (H5VL_object_is_native(vol_obj, &is_native_vol_obj) < 0)
+ HGOTO_ERROR(H5E_OHDR, H5E_CANTGET, FAIL, "can't determine if VOL object is native connector object");
+
+ /* Must use native VOL connector for this operation */
+ if (!is_native_vol_obj)
+ HGOTO_ERROR(H5E_OHDR, H5E_VOL, FAIL,
+ "Deprecated H5Oget_info1 is only meant to be used with the native VOL connector");
+
/* Retrieve the object's information */
if (H5O__get_info_old(vol_obj, &loc_params, oinfo, H5O_INFO_ALL) < 0)
HGOTO_ERROR(H5E_OHDR, H5E_CANTGET, FAIL, "can't get deprecated info for object");
@@ -441,7 +451,8 @@ H5Oget_info_by_name1(hid_t loc_id, const char *name, H5O_info1_t *oinfo /*out*/,
{
H5VL_object_t *vol_obj = NULL; /* object of loc_id */
H5VL_loc_params_t loc_params;
- herr_t ret_value = SUCCEED; /* Return value */
+ bool is_native_vol_obj = false;
+ herr_t ret_value = SUCCEED; /* Return value */
FUNC_ENTER_API(FAIL)
H5TRACE4("e", "i*sxi", loc_id, name, oinfo, lapl_id);
@@ -468,6 +479,15 @@ H5Oget_info_by_name1(hid_t loc_id, const char *name, H5O_info1_t *oinfo /*out*/,
if (NULL == (vol_obj = H5VL_vol_object(loc_id)))
HGOTO_ERROR(H5E_ARGS, H5E_BADTYPE, FAIL, "invalid location identifier");
+ /* Check if using native VOL connector */
+ if (H5VL_object_is_native(vol_obj, &is_native_vol_obj) < 0)
+ HGOTO_ERROR(H5E_OHDR, H5E_CANTGET, FAIL, "can't determine if VOL object is native connector object");
+
+ /* Must use native VOL connector for this operation */
+ if (!is_native_vol_obj)
+ HGOTO_ERROR(H5E_OHDR, H5E_VOL, FAIL,
+ "Deprecated H5Oget_info_by_name1 is only meant to be used with the native VOL connector");
+
/* Retrieve the object's information */
if (H5O__get_info_old(vol_obj, &loc_params, oinfo, H5O_INFO_ALL) < 0)
HGOTO_ERROR(H5E_OHDR, H5E_CANTGET, FAIL, "can't get deprecated info for object");
@@ -493,7 +513,8 @@ H5Oget_info_by_idx1(hid_t loc_id, const char *group_name, H5_index_t idx_type, H
{
H5VL_object_t *vol_obj = NULL; /* object of loc_id */
H5VL_loc_params_t loc_params;
- herr_t ret_value = SUCCEED; /* Return value */
+ bool is_native_vol_obj = false;
+ herr_t ret_value = SUCCEED; /* Return value */
FUNC_ENTER_API(FAIL)
H5TRACE7("e", "i*sIiIohxi", loc_id, group_name, idx_type, order, n, oinfo, lapl_id);
@@ -524,6 +545,15 @@ H5Oget_info_by_idx1(hid_t loc_id, const char *group_name, H5_index_t idx_type, H
if (NULL == (vol_obj = H5VL_vol_object(loc_id)))
HGOTO_ERROR(H5E_ARGS, H5E_BADTYPE, FAIL, "invalid location identifier");
+ /* Check if using native VOL connector */
+ if (H5VL_object_is_native(vol_obj, &is_native_vol_obj) < 0)
+ HGOTO_ERROR(H5E_OHDR, H5E_CANTGET, FAIL, "can't determine if VOL object is native connector object");
+
+ /* Must use native VOL connector for this operation */
+ if (!is_native_vol_obj)
+ HGOTO_ERROR(H5E_OHDR, H5E_VOL, FAIL,
+ "Deprecated H5Oget_info_by_idx1 is only meant to be used with the native VOL connector");
+
/* Retrieve the object's information */
if (H5O__get_info_old(vol_obj, &loc_params, oinfo, H5O_INFO_ALL) < 0)
HGOTO_ERROR(H5E_OHDR, H5E_CANTGET, FAIL, "can't get deprecated info for object");
@@ -574,7 +604,7 @@ H5Oget_info2(hid_t loc_id, H5O_info1_t *oinfo /*out*/, unsigned fields)
"can't determine if VOL object is native connector object");
if (!is_native_vol_obj)
HGOTO_ERROR(H5E_OHDR, H5E_BADVALUE, H5I_INVALID_HID,
- "H5Oget_info2 is only meant to be used with the native VOL connector");
+ "Deprecated H5Oget_info2 is only meant to be used with the native VOL connector");
/* Retrieve deprecated info struct */
if (H5O__get_info_old(vol_obj, &loc_params, oinfo, fields) < 0)
@@ -637,7 +667,7 @@ H5Oget_info_by_name2(hid_t loc_id, const char *name, H5O_info1_t *oinfo /*out*/,
"can't determine if VOL object is native connector object");
if (!is_native_vol_obj)
HGOTO_ERROR(H5E_OHDR, H5E_BADVALUE, H5I_INVALID_HID,
- "H5Oget_info_by_name2 is only meant to be used with the native VOL connector");
+ "Deprecated H5Oget_info_by_name2 is only meant to be used with the native VOL connector");
/* Retrieve deprecated info struct */
if (H5O__get_info_old(vol_obj, &loc_params, oinfo, fields) < 0)
@@ -706,7 +736,7 @@ H5Oget_info_by_idx2(hid_t loc_id, const char *group_name, H5_index_t idx_type, H
"can't determine if VOL object is native connector object");
if (!is_native_vol_obj)
HGOTO_ERROR(H5E_OHDR, H5E_BADVALUE, H5I_INVALID_HID,
- "H5Oget_info_by_idx2 is only meant to be used with the native VOL connector");
+ "Deprecated H5Oget_info_by_idx2 is only meant to be used with the native VOL connector");
/* Retrieve deprecated info struct */
if (H5O__get_info_old(vol_obj, &loc_params, oinfo, fields) < 0)
@@ -753,6 +783,7 @@ H5Ovisit1(hid_t obj_id, H5_index_t idx_type, H5_iter_order_t order, H5O_iterate1
H5VL_loc_params_t loc_params; /* Location parameters for object access */
H5O_visit1_adapter_t shim_data; /* Adapter for passing app callback & user data */
herr_t ret_value; /* Return value */
+ bool is_native_vol_obj = false;
FUNC_ENTER_API(FAIL)
H5TRACE5("e", "iIiIoOi*x", obj_id, idx_type, order, op, op_data);
@@ -769,6 +800,15 @@ H5Ovisit1(hid_t obj_id, H5_index_t idx_type, H5_iter_order_t order, H5O_iterate1
if (NULL == (vol_obj = H5VL_vol_object(obj_id)))
HGOTO_ERROR(H5E_ARGS, H5E_BADTYPE, FAIL, "invalid location identifier");
+ /* Check if using native VOL connector */
+ if (H5VL_object_is_native(vol_obj, &is_native_vol_obj) < 0)
+ HGOTO_ERROR(H5E_OHDR, H5E_CANTGET, FAIL, "can't determine if VOL object is native connector object");
+
+ /* Must use native VOL connector for this operation */
+ if (!is_native_vol_obj)
+ HGOTO_ERROR(H5E_OHDR, H5E_VOL, FAIL,
+ "Deprecated H5Ovisit1 is only meant to be used with the native VOL connector");
+
/* Set location parameters */
loc_params.type = H5VL_OBJECT_BY_SELF;
loc_params.obj_type = H5I_get_type(obj_id);
@@ -833,6 +873,7 @@ H5Ovisit_by_name1(hid_t loc_id, const char *obj_name, H5_index_t idx_type, H5_it
H5VL_loc_params_t loc_params; /* Location parameters for object access */
H5O_visit1_adapter_t shim_data; /* Adapter for passing app callback & user data */
herr_t ret_value; /* Return value */
+ bool is_native_vol_obj = false;
FUNC_ENTER_API(FAIL)
H5TRACE7("e", "i*sIiIoOi*xi", loc_id, obj_name, idx_type, order, op, op_data, lapl_id);
@@ -857,6 +898,15 @@ H5Ovisit_by_name1(hid_t loc_id, const char *obj_name, H5_index_t idx_type, H5_it
if (NULL == (vol_obj = H5VL_vol_object(loc_id)))
HGOTO_ERROR(H5E_ARGS, H5E_BADTYPE, FAIL, "invalid location identifier");
+ /* Check if using native VOL connector */
+ if (H5VL_object_is_native(vol_obj, &is_native_vol_obj) < 0)
+ HGOTO_ERROR(H5E_OHDR, H5E_CANTGET, FAIL, "can't determine if VOL object is native connector object");
+
+ /* Must use native VOL connector for this operation */
+ if (!is_native_vol_obj)
+ HGOTO_ERROR(H5E_OHDR, H5E_VOL, FAIL,
+ "Deprecated H5Ovisit_by_name1 is only meant to be used with the native VOL connector");
+
/* Set location parameters */
loc_params.type = H5VL_OBJECT_BY_NAME;
loc_params.loc_data.loc_by_name.name = obj_name;
@@ -949,9 +999,10 @@ H5Ovisit2(hid_t obj_id, H5_index_t idx_type, H5_iter_order_t order, H5O_iterate1
if (H5VL_object_is_native(vol_obj, &is_native_vol_obj) < 0)
HGOTO_ERROR(H5E_OHDR, H5E_CANTGET, H5I_INVALID_HID,
"can't determine if VOL object is native connector object");
+
if (!is_native_vol_obj)
HGOTO_ERROR(H5E_OHDR, H5E_BADVALUE, H5I_INVALID_HID,
- "H5Ovisit2 is only meant to be used with the native VOL connector");
+ "Deprecated H5Ovisit2 is only meant to be used with the native VOL connector");
/* Set location parameters */
loc_params.type = H5VL_OBJECT_BY_SELF;
@@ -1053,7 +1104,7 @@ H5Ovisit_by_name2(hid_t loc_id, const char *obj_name, H5_index_t idx_type, H5_it
"can't determine if VOL object is native connector object");
if (!is_native_vol_obj)
HGOTO_ERROR(H5E_OHDR, H5E_BADVALUE, H5I_INVALID_HID,
- "H5Ovisit_by_name2 is only meant to be used with the native VOL connector");
+ "Deprecated H5Ovisit_by_name2 is only meant to be used with the native VOL connector");
/* Set location parameters */
loc_params.type = H5VL_OBJECT_BY_NAME;
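Editor's note: since the deprecated H5Oget_info1/H5Ovisit1 family now errors out for non-native VOL connectors, connector-agnostic application code should use the token-based replacements. A minimal sketch using H5Oget_info3; the field selection is just an example:

    #include "hdf5.h"

    /* Query an object's attribute count through any VOL connector. */
    static herr_t
    get_num_attrs(hid_t loc_id, hsize_t *num_attrs)
    {
        H5O_info2_t oinfo;

        if (H5Oget_info3(loc_id, &oinfo, H5O_INFO_NUM_ATTRS) < 0)
            return -1;
        *num_attrs = oinfo.num_attrs;
        return 0;
    }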
diff --git a/src/H5Pfapl.c b/src/H5Pfapl.c
index 5f5782c..e7c1fb3 100644
--- a/src/H5Pfapl.c
+++ b/src/H5Pfapl.c
@@ -4848,7 +4848,7 @@ H5P__facc_mdc_log_location_close(const char H5_ATTR_UNUSED *name, size_t H5_ATTR
*-------------------------------------------------------------------------
*/
herr_t
-H5Pset_evict_on_close(hid_t fapl_id, hbool_t H5_ATTR_PARALLEL_UNUSED evict_on_close)
+H5Pset_evict_on_close(hid_t fapl_id, hbool_t evict_on_close)
{
H5P_genplist_t *plist; /* property list pointer */
herr_t ret_value = SUCCEED; /* return value */
@@ -4864,14 +4864,9 @@ H5Pset_evict_on_close(hid_t fapl_id, hbool_t H5_ATTR_PARALLEL_UNUSED evict_on_cl
if (NULL == (plist = (H5P_genplist_t *)H5I_object(fapl_id)))
HGOTO_ERROR(H5E_ID, H5E_BADID, FAIL, "can't find object for ID");
-#ifndef H5_HAVE_PARALLEL
/* Set value */
if (H5P_set(plist, H5F_ACS_EVICT_ON_CLOSE_FLAG_NAME, &evict_on_close) < 0)
HGOTO_ERROR(H5E_PLIST, H5E_CANTSET, FAIL, "can't set evict on close property");
-#else
- HGOTO_ERROR(H5E_PLIST, H5E_UNSUPPORTED, FAIL,
- "evict on close is currently not supported in parallel HDF5");
-#endif /* H5_HAVE_PARALLEL */
done:
FUNC_LEAVE_API(ret_value)
@@ -5174,7 +5169,7 @@ done:
* Function: H5Pset_coll_metadata_write
*
* Purpose: Tell the library whether the metadata write operations will
- * be done collectively (1) or not (0). Default is collective.
+ * be done collectively (1) or not (0). Default is independent.
*
* Return: Non-negative on success/Negative on failure
*
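Editor's note: the corrected comment reflects that collective metadata writes are off (independent) by default, so applications that want them must request them on the FAPL. A short sketch, assuming a parallel build; error handling is minimal:

    #include <mpi.h>
    #include "hdf5.h"

    /* Build a parallel FAPL that requests collective metadata writes. */
    static hid_t
    make_coll_md_write_fapl(MPI_Comm comm, MPI_Info info)
    {
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);

        if (fapl < 0)
            return H5I_INVALID_HID;
        if (H5Pset_fapl_mpio(fapl, comm, info) < 0 || H5Pset_coll_metadata_write(fapl, 1) < 0) {
            H5Pclose(fapl);
            return H5I_INVALID_HID;
        }
        return fapl;
    }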
diff --git a/src/H5Rdeprec.c b/src/H5Rdeprec.c
index 773d8b0..1d12eba 100644
--- a/src/H5Rdeprec.c
+++ b/src/H5Rdeprec.c
@@ -101,14 +101,14 @@ H5R__decode_token_compat(H5VL_object_t *vol_obj, H5I_type_t type, H5R_type_t ref
#ifndef NDEBUG
{
- bool is_native = false; /* Whether the src file is using the native VOL connector */
+ bool is_native_vol_obj = false; /* Whether the src file is using the native VOL connector */
/* Check if using native VOL connector */
- if (H5VL_object_is_native(vol_obj, &is_native) < 0)
+ if (H5VL_object_is_native(vol_obj, &is_native_vol_obj) < 0)
HGOTO_ERROR(H5E_REFERENCE, H5E_CANTGET, FAIL, "can't query if file uses native VOL connector");
/* Must use native VOL connector for this operation */
- assert(is_native);
+ assert(is_native_vol_obj);
}
#endif /* NDEBUG */
@@ -251,7 +251,8 @@ H5Rget_obj_type1(hid_t id, H5R_type_t ref_type, const void *ref)
H5O_token_t obj_token = {0}; /* Object token */
const unsigned char *buf = (const unsigned char *)ref; /* Reference buffer */
H5O_type_t obj_type = H5O_TYPE_UNKNOWN; /* Type of the referenced object */
- H5G_obj_t ret_value; /* Return value */
+ bool is_native_vol_obj; /* Whether the native VOL connector is in use */
+ H5G_obj_t ret_value; /* Return value */
FUNC_ENTER_API(H5G_UNKNOWN)
H5TRACE3("Go", "iRt*x", id, ref_type, ref);
@@ -266,6 +267,16 @@ H5Rget_obj_type1(hid_t id, H5R_type_t ref_type, const void *ref)
if (NULL == (vol_obj = H5VL_vol_object(id)))
HGOTO_ERROR(H5E_ARGS, H5E_BADTYPE, H5G_UNKNOWN, "invalid location identifier");
+ /* Check if using native VOL connector */
+ if (H5VL_object_is_native(vol_obj, &is_native_vol_obj) < 0)
+ HGOTO_ERROR(H5E_REFERENCE, H5E_CANTGET, FAIL,
+ "can't determine if VOL object is native connector object");
+
+ /* Must use native VOL connector for this operation */
+ if (!is_native_vol_obj)
+ HGOTO_ERROR(H5E_REFERENCE, H5E_VOL, FAIL,
+ "H5Rget_obj_type1 is only meant to be used with the native VOL connector");
+
/* Get object type */
if ((vol_obj_type = H5I_get_type(id)) < 0)
HGOTO_ERROR(H5E_ARGS, H5E_BADTYPE, H5G_UNKNOWN, "invalid location identifier");
@@ -315,7 +326,8 @@ H5Rdereference1(hid_t obj_id, H5R_type_t ref_type, const void *ref)
H5I_type_t opened_type; /* Opened object type */
void *opened_obj = NULL; /* Opened object */
const unsigned char *buf = (const unsigned char *)ref; /* Reference buffer */
- hid_t ret_value = H5I_INVALID_HID; /* Return value */
+ bool is_native_vol_obj; /* Whether the native VOL connector is in use */
+ hid_t ret_value = H5I_INVALID_HID; /* Return value */
FUNC_ENTER_API(H5I_INVALID_HID)
H5TRACE3("i", "iRt*x", obj_id, ref_type, ref);
@@ -330,6 +342,16 @@ H5Rdereference1(hid_t obj_id, H5R_type_t ref_type, const void *ref)
if (NULL == (vol_obj = H5VL_vol_object(obj_id)))
HGOTO_ERROR(H5E_ARGS, H5E_BADTYPE, H5I_INVALID_HID, "invalid location identifier");
+ /* Check if using native VOL connector */
+ if (H5VL_object_is_native(vol_obj, &is_native_vol_obj) < 0)
+ HGOTO_ERROR(H5E_REFERENCE, H5E_CANTGET, FAIL,
+ "can't determine if VOL object is native connector object");
+
+ /* Must use native VOL connector for this operation */
+ if (!is_native_vol_obj)
+ HGOTO_ERROR(H5E_REFERENCE, H5E_VOL, FAIL,
+ "H5Rdereference1 is only meant to be used with the native VOL connector");
+
/* Get object type */
if ((vol_obj_type = H5I_get_type(obj_id)) < 0)
HGOTO_ERROR(H5E_ARGS, H5E_BADTYPE, H5I_INVALID_HID, "invalid location identifier");
@@ -382,8 +404,9 @@ H5Rcreate(void *ref, hid_t loc_id, const char *name, H5R_type_t ref_type, hid_t
H5VL_file_get_args_t file_get_vol_cb_args; /* Arguments to VOL callback */
hid_t file_id = H5I_INVALID_HID; /* File ID for region reference */
void *vol_obj_file = NULL;
- unsigned char *buf = (unsigned char *)ref; /* Return reference pointer */
- herr_t ret_value = SUCCEED; /* Return value */
+ bool is_native_vol_obj = false; /* Whether the src file is using the native VOL connector */
+ unsigned char *buf = (unsigned char *)ref; /* Return reference pointer */
+ herr_t ret_value = SUCCEED; /* Return value */
FUNC_ENTER_API(FAIL)
H5TRACE5("e", "*xi*sRti", ref, loc_id, name, ref_type, space_id);
@@ -404,18 +427,13 @@ H5Rcreate(void *ref, hid_t loc_id, const char *name, H5R_type_t ref_type, hid_t
if (NULL == (vol_obj = H5VL_vol_object(loc_id)))
HGOTO_ERROR(H5E_ARGS, H5E_BADTYPE, FAIL, "invalid location identifier");
-#ifndef NDEBUG
- {
- bool is_native = false; /* Whether the src file is using the native VOL connector */
+ /* Check if using native VOL connector */
+ if (H5VL_object_is_native(vol_obj, &is_native_vol_obj) < 0)
+ HGOTO_ERROR(H5E_REFERENCE, H5E_CANTGET, FAIL, "can't query if file uses native VOL connector");
- /* Check if using native VOL connector */
- if (H5VL_object_is_native(vol_obj, &is_native) < 0)
- HGOTO_ERROR(H5E_REFERENCE, H5E_CANTGET, FAIL, "can't query if file uses native VOL connector");
-
- /* Must use native VOL connector for this operation */
- assert(is_native);
- }
-#endif /* NDEBUG */
+ /* Must use native VOL connector for this operation */
+ if (!is_native_vol_obj)
+ HGOTO_ERROR(H5E_REFERENCE, H5E_VOL, FAIL, "must use native VOL connector to create reference");
/* Get object type */
if ((vol_obj_type = H5I_get_type(loc_id)) < 0)
@@ -500,13 +518,14 @@ done:
herr_t
H5Rget_obj_type2(hid_t id, H5R_type_t ref_type, const void *ref, H5O_type_t *obj_type /*out*/)
{
- H5VL_object_t *vol_obj = NULL; /* Object of loc_id */
- H5I_type_t vol_obj_type = H5I_BADID; /* Object type of loc_id */
- H5VL_object_get_args_t vol_cb_args; /* Arguments to VOL callback */
- H5VL_loc_params_t loc_params; /* Location parameters */
- H5O_token_t obj_token = {0}; /* Object token */
- const unsigned char *buf = (const unsigned char *)ref; /* Reference pointer */
- herr_t ret_value = SUCCEED; /* Return value */
+ H5VL_object_t *vol_obj = NULL; /* Object of loc_id */
+ H5I_type_t vol_obj_type = H5I_BADID; /* Object type of loc_id */
+ H5VL_object_get_args_t vol_cb_args; /* Arguments to VOL callback */
+ H5VL_loc_params_t loc_params; /* Location parameters */
+ H5O_token_t obj_token = {0}; /* Object token */
+ const unsigned char *buf = (const unsigned char *)ref; /* Reference pointer */
+ bool is_native_vol_obj = false; /* Whether the native VOL connector is in use */
+ herr_t ret_value = SUCCEED; /* Return value */
FUNC_ENTER_API(FAIL)
H5TRACE4("e", "iRt*xx", id, ref_type, ref, obj_type);
@@ -521,6 +540,16 @@ H5Rget_obj_type2(hid_t id, H5R_type_t ref_type, const void *ref, H5O_type_t *obj
if (NULL == (vol_obj = H5VL_vol_object(id)))
HGOTO_ERROR(H5E_ARGS, H5E_BADTYPE, FAIL, "invalid location identifier");
+ /* Check if using native VOL connector */
+ if (H5VL_object_is_native(vol_obj, &is_native_vol_obj) < 0)
+ HGOTO_ERROR(H5E_REFERENCE, H5E_CANTGET, FAIL,
+ "can't determine if VOL object is native connector object");
+
+ /* Must use native VOL connector for this operation */
+ if (!is_native_vol_obj)
+ HGOTO_ERROR(H5E_REFERENCE, H5E_VOL, FAIL,
+ "H5Rget_obj_type2 is only meant to be used with the native VOL connector");
+
/* Get object type */
if ((vol_obj_type = H5I_get_type(id)) < 0)
HGOTO_ERROR(H5E_ARGS, H5E_BADTYPE, FAIL, "invalid location identifier");
@@ -560,14 +589,15 @@ done:
hid_t
H5Rdereference2(hid_t obj_id, hid_t oapl_id, H5R_type_t ref_type, const void *ref)
{
- H5VL_object_t *vol_obj = NULL; /* Object of loc_id */
- H5I_type_t vol_obj_type = H5I_BADID; /* Object type of loc_id */
- H5VL_loc_params_t loc_params; /* Location parameters */
- H5O_token_t obj_token = {0}; /* Object token */
- H5I_type_t opened_type; /* Opened object type */
- void *opened_obj = NULL; /* Opened object */
- const unsigned char *buf = (const unsigned char *)ref; /* Reference pointer */
- hid_t ret_value = H5I_INVALID_HID; /* Return value */
+ H5VL_object_t *vol_obj = NULL; /* Object of loc_id */
+ H5I_type_t vol_obj_type = H5I_BADID; /* Object type of loc_id */
+ H5VL_loc_params_t loc_params; /* Location parameters */
+ H5O_token_t obj_token = {0}; /* Object token */
+ H5I_type_t opened_type; /* Opened object type */
+ void *opened_obj = NULL; /* Opened object */
+ const unsigned char *buf = (const unsigned char *)ref; /* Reference pointer */
+ bool is_native_vol_obj = false; /* Whether the native VOL connector is in use */
+ hid_t ret_value = H5I_INVALID_HID; /* Return value */
FUNC_ENTER_API(H5I_INVALID_HID)
H5TRACE4("i", "iiRt*x", obj_id, oapl_id, ref_type, ref);
@@ -588,6 +618,16 @@ H5Rdereference2(hid_t obj_id, hid_t oapl_id, H5R_type_t ref_type, const void *re
if (NULL == (vol_obj = H5VL_vol_object(obj_id)))
HGOTO_ERROR(H5E_ARGS, H5E_BADTYPE, H5I_INVALID_HID, "invalid file identifier");
+ /* Check if using native VOL connector */
+ if (H5VL_object_is_native(vol_obj, &is_native_vol_obj) < 0)
+ HGOTO_ERROR(H5E_REFERENCE, H5E_CANTGET, FAIL,
+ "can't determine if VOL object is native connector object");
+
+ /* Must use native VOL connector for this operation */
+ if (!is_native_vol_obj)
+ HGOTO_ERROR(H5E_REFERENCE, H5E_VOL, FAIL,
+ "H5Rdereference2 is only meant to be used with the native VOL connector");
+
/* Get object type */
if ((vol_obj_type = H5I_get_type(obj_id)) < 0)
HGOTO_ERROR(H5E_ARGS, H5E_BADTYPE, H5I_INVALID_HID, "invalid location identifier");
@@ -639,7 +679,8 @@ H5Rget_region(hid_t id, H5R_type_t ref_type, const void *ref)
H5S_t *space = NULL; /* Dataspace object */
hid_t file_id = H5I_INVALID_HID; /* File ID for region reference */
const unsigned char *buf = (const unsigned char *)ref; /* Reference pointer */
- hid_t ret_value; /* Return value */
+ bool is_native_vol_obj = false; /* Whether the src file is using the native VOL connector */
+ hid_t ret_value; /* Return value */
FUNC_ENTER_API(H5I_INVALID_HID)
H5TRACE3("i", "iRt*x", id, ref_type, ref);
@@ -654,19 +695,14 @@ H5Rget_region(hid_t id, H5R_type_t ref_type, const void *ref)
if (NULL == (vol_obj = H5VL_vol_object(id)))
HGOTO_ERROR(H5E_ARGS, H5E_BADTYPE, H5I_INVALID_HID, "invalid file identifier");
-#ifndef NDEBUG
- {
- bool is_native = false; /* Whether the src file is using the native VOL connector */
-
- /* Check if using native VOL connector */
- if (H5VL_object_is_native(vol_obj, &is_native) < 0)
- HGOTO_ERROR(H5E_REFERENCE, H5E_CANTGET, H5I_INVALID_HID,
- "can't query if file uses native VOL connector");
+ /* Check if using native VOL connector */
+ if (H5VL_object_is_native(vol_obj, &is_native_vol_obj) < 0)
+ HGOTO_ERROR(H5E_REFERENCE, H5E_CANTGET, H5I_INVALID_HID,
+ "can't query if file uses native VOL connector");
- /* Must use native VOL connector for this operation */
- assert(is_native);
- }
-#endif /* NDEBUG */
+ if (!is_native_vol_obj)
+ HGOTO_ERROR(H5E_REFERENCE, H5E_VOL, FAIL,
+ "H5Rget_region is only meant to be used with the native VOL connector");
/* Get object type */
if ((vol_obj_type = H5I_get_type(id)) < 0)
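Editor's note: the deprecated H5Rcreate/H5Rdereference1/H5Rget_obj_type1 paths above now require the native VOL connector; VOL-agnostic code should use the H5R_ref_t API instead. A minimal sketch; obj_name is whatever object the caller wants to reference:

    #include "hdf5.h"

    /* Create an object reference and reopen the object through it,
     * using the connector-agnostic H5R_ref_t API. */
    static hid_t
    open_by_reference(hid_t loc_id, const char *obj_name)
    {
        H5R_ref_t ref;
        hid_t     obj_id;

        if (H5Rcreate_object(loc_id, obj_name, H5P_DEFAULT, &ref) < 0)
            return H5I_INVALID_HID;
        obj_id = H5Ropen_object(&ref, H5P_DEFAULT, H5P_DEFAULT);
        H5Rdestroy(&ref);
        return obj_id;
    }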
diff --git a/test/Makefile.am b/test/Makefile.am
index 291907c..fdd83e5 100644
--- a/test/Makefile.am
+++ b/test/Makefile.am
@@ -192,7 +192,7 @@ CHECK_CLEANFILES+=accum.h5 cmpd_dset.h5 mdset.h5 compact_dataset.h5 dataset.h5 d
storage_size.h5 dls_01_strings.h5 power2up.h5 version_bounds.h5 \
alloc_0sized.h5 h5s_block.h5 h5s_plist.h5 \
extend.h5 istore.h5 extlinks*.h5 frspace.h5 links*.h5 \
- sys_file1 tfile[1-7].h5 th5s[1-4].h5 lheap.h5 fheap.h5 ohdr.h5 \
+ sys_file1 tfile[1-8].h5 th5s[1-4].h5 lheap.h5 fheap.h5 ohdr.h5 \
stab.h5 extern_[1-5].h5 extern_[1-4][rw].raw gheap[0-4].h5 \
ohdr_min_a.h5 ohdr_min_b.h5 min_dset_ohdr_testfile.h5 \
dt_arith[1-2] links.h5 links[0-6]*.h5 extlinks[0-15].h5 \
@@ -226,7 +226,10 @@ CHECK_CLEANFILES+=accum.h5 cmpd_dset.h5 mdset.h5 compact_dataset.h5 dataset.h5 d
test_swmr*.h5 cache_logging.h5 cache_logging.out vds_swmr.h5 vds_swmr_src_*.h5 \
swmr[0-2].h5 swmr_writer.out swmr_writer.log.* swmr_reader.out.* swmr_reader.log.* \
tbogus.h5.copy cache_image_test.h5 direct_chunk.h5 native_vol_test.h5 \
- splitter*.h5 splitter.log mirror_rw mirror_ro event_set_[0-9].h5
+ splitter*.h5 splitter.log mirror_rw mirror_ro event_set_[0-9].h5 \
+ cmpd_dtransform.h5 single_latest.h5 source_file.h5 stdio_file.h5 \
+ tfile_is_accessible.h5 tfile_is_accessible_non_hdf5.h5 tverbounds_dtype.h5 \
+ virtual_file1.h5 tfile_double_open.h5 tfile_incr_filesize.h5 flushrefresh_test
# Sources for testhdf5 executable
testhdf5_SOURCES=testhdf5.c tarray.c tattr.c tchecksum.c tconfig.c tfile.c \
diff --git a/test/evict_on_close.c b/test/evict_on_close.c
index 9ca7f9f..db2a962 100644
--- a/test/evict_on_close.c
+++ b/test/evict_on_close.c
@@ -32,12 +32,6 @@
#include "H5Ipkg.h"
#include "H5VLprivate.h" /* Virtual Object Layer */
-/* Evict on close is not supported under parallel at this time.
- * In the meantime, we just run a simple check that EoC can't be
- * enabled in parallel HDF5.
- */
-#ifndef H5_HAVE_PARALLEL
-
/* Uncomment to manually inspect cache states */
/* (Requires debug build of the library) */
/* #define EOC_MANUAL_INSPECTION */
@@ -974,89 +968,3 @@ error:
exit(EXIT_FAILURE);
} /* end main() */
-
-#else
-
-/*-------------------------------------------------------------------------
- * Function: check_evict_on_close_parallel_fail()
- *
- * Purpose: Verify that the H5Pset_evict_on_close() call fails in
- * parallel HDF5.
- *
- * Return: SUCCEED/FAIL
- *
- *-------------------------------------------------------------------------
- */
-static herr_t
-check_evict_on_close_parallel_fail(void)
-{
- hid_t fapl_id = H5I_INVALID_HID;
- bool evict_on_close;
- herr_t status;
-
- TESTING("evict on close fails in parallel");
-
- /* Create a fapl */
- if ((fapl_id = H5Pcreate(H5P_FILE_ACCESS)) < 0)
- TEST_ERROR;
-
- /* Set the evict on close property (should fail)*/
- evict_on_close = true;
- H5E_BEGIN_TRY
- {
- status = H5Pset_evict_on_close(fapl_id, evict_on_close);
- }
- H5E_END_TRY
- if (status >= 0)
- FAIL_PUTS_ERROR("H5Pset_evict_on_close() did not fail in parallel HDF5.");
-
- /* close fapl */
- if (H5Pclose(fapl_id) < 0)
- TEST_ERROR;
-
- PASSED();
- return SUCCEED;
-
-error:
- H5_FAILED();
- return FAIL;
-
-} /* check_evict_on_close_parallel_fail() */
-
-/*-------------------------------------------------------------------------
- * Function: main (parallel version)
- *
- * Return: EXIT_FAILURE/EXIT_SUCCESS
- *
- *-------------------------------------------------------------------------
- */
-int
-main(void)
-{
- unsigned nerrors = 0; /* number of test errors */
-
- printf("Testing evict-on-close cache behavior\n");
-
- /* Initialize */
- h5_reset();
-
- /* Test that EoC fails in parallel HDF5 */
- nerrors += check_evict_on_close_parallel_fail() < 0 ? 1 : 0;
-
- if (nerrors)
- goto error;
-
- printf("All evict-on-close tests passed.\n");
- printf("Note that EoC is not supported under parallel so most tests are skipped.\n");
-
- exit(EXIT_SUCCESS);
-
-error:
-
- printf("***** %u evict-on-close test%s FAILED! *****\n", nerrors, nerrors > 1 ? "S" : "");
-
- exit(EXIT_FAILURE);
-
-} /* main() - parallel */
-
-#endif /* H5_HAVE_PARALLEL */
diff --git a/test/tfile.c b/test/tfile.c
index d11de11..4335652d 100644
--- a/test/tfile.c
+++ b/test/tfile.c
@@ -138,9 +138,15 @@
#define NGROUPS 2
#define NDSETS 4
-/* Declaration for test_incr_filesize() */
+/* Declaration for libver bounds tests */
#define FILE8 "tfile8.h5" /* Test file */
+/* Declaration for test_file_double_file_dataset_open() */
+#define FILE_DOUBLE_OPEN "tfile_double_open"
+
+/* Declaration for test_incr_filesize() */
+#define FILE_INCR_FILESIZE "tfile_incr_filesize"
+
/* Files created under 1.6 branch and 1.8 branch--used in test_filespace_compatible() */
static const char *OLD_FILENAME[] = {
"filespace_1_6.h5", /* 1.6 HDF5 file */
@@ -2623,8 +2629,8 @@ test_file_double_file_dataset_open(bool new_format)
if (new_format) {
ret = H5Pset_libver_bounds(fapl, H5F_LIBVER_LATEST, H5F_LIBVER_LATEST);
CHECK(ret, FAIL, "H5Pset_libver_bounds");
- } /* end if */
- h5_fixname(FILE1, fapl, filename, sizeof filename);
+ }
+ h5_fixname(FILE_DOUBLE_OPEN, fapl, filename, sizeof filename);
/* Create the test file */
fid1 = H5Fcreate(filename, H5F_ACC_TRUNC, H5P_DEFAULT, fapl);
@@ -2934,6 +2940,9 @@ test_file_double_file_dataset_open(bool new_format)
ret = H5Tclose(tid1);
CHECK(ret, FAIL, "H5Tclose");
+ /* Delete the test file */
+ h5_delete_test_file(filename, fapl);
+
/* Close FAPL */
ret = H5Pclose(fapl);
CHECK(ret, FAIL, "H5Pclose");
@@ -7649,7 +7658,7 @@ test_incr_filesize(void)
MESSAGE(5, ("Testing H5Fincrement_filesize() and H5Fget_eoa())\n"));
fapl = h5_fileaccess();
- h5_fixname(FILE8, fapl, filename, sizeof filename);
+ h5_fixname(FILE_INCR_FILESIZE, fapl, filename, sizeof filename);
/* Get the VFD feature flags */
driver_id = H5Pget_driver(fapl);
@@ -7734,6 +7743,9 @@ test_incr_filesize(void)
/* Verify the filesize is the previous stored_eoa + 512 */
VERIFY(filesize, stored_eoa + 512, "file size");
+ /* Delete the test file */
+ h5_delete_test_file(FILE_INCR_FILESIZE, fapl);
+
/* Close the file access property list */
ret = H5Pclose(fapl);
CHECK(ret, FAIL, "H5Pclose");
@@ -8224,6 +8236,7 @@ cleanup_file(void)
H5Fdelete(FILE5, H5P_DEFAULT);
H5Fdelete(FILE6, H5P_DEFAULT);
H5Fdelete(FILE7, H5P_DEFAULT);
+ H5Fdelete(FILE8, H5P_DEFAULT);
H5Fdelete(DST_FILE, H5P_DEFAULT);
}
H5E_END_TRY
diff --git a/testpar/Makefile.am b/testpar/Makefile.am
index 59d47e1..4a8cb82 100644
--- a/testpar/Makefile.am
+++ b/testpar/Makefile.am
@@ -58,6 +58,7 @@ LDADD = $(LIBH5TEST) $(LIBHDF5)
# after_mpi_fin.h5 is from t_init_term
# go is used for debugging. See testphdf5.c.
CHECK_CLEANFILES+=MPItest.h5 Para*.h5 bigio_test.h5 CacheTestDummy.h5 \
- ShapeSameTest.h5 shutdown.h5 pmulti_dset.h5 after_mpi_fin.h5 go
+ ShapeSameTest.h5 shutdown.h5 pmulti_dset.h5 after_mpi_fin.h5 go noflush.h5 \
+ mpio_select_test_file.h5 *.btr
include $(top_srcdir)/config/conclude.am
diff --git a/testpar/t_coll_md.c b/testpar/t_coll_md.c
index 1220111..9c6fc71 100644
--- a/testpar/t_coll_md.c
+++ b/testpar/t_coll_md.c
@@ -43,6 +43,11 @@
#define COLL_GHEAP_WRITE_ATTR_NAME "coll_gheap_write_attr"
#define COLL_GHEAP_WRITE_ATTR_DIMS 1
+#define COLL_IO_IND_MD_WRITE_NDIMS 2
+#define COLL_IO_IND_MD_WRITE_CHUNK0 4
+#define COLL_IO_IND_MD_WRITE_CHUNK1 256
+#define COLL_IO_IND_MD_WRITE_NCHUNK1 16384
+
/*
* A test for issue HDFFV-10501. A parallel hang was reported which occurred
* in linked-chunk I/O when collective metadata reads are enabled and some ranks
@@ -569,3 +574,101 @@ test_collective_global_heap_write(void)
VRFY((H5Pclose(fapl_id) >= 0), "H5Pclose succeeded");
VRFY((H5Fclose(file_id) >= 0), "H5Fclose succeeded");
}
+
+/*
+ * A test to ensure that hangs don't occur when collective I/O
+ * is requested at the interface level (by a call to
+ * H5Pset_dxpl_mpio(dxpl_id, H5FD_MPIO_COLLECTIVE)), while
+ * collective metadata writes are NOT requested.
+ */
+void
+test_coll_io_ind_md_write(void)
+{
+ const char *filename;
+ long long *data = NULL;
+ hsize_t dset_dims[COLL_IO_IND_MD_WRITE_NDIMS];
+ hsize_t chunk_dims[COLL_IO_IND_MD_WRITE_NDIMS];
+ hsize_t sel_dims[COLL_IO_IND_MD_WRITE_NDIMS];
+ hsize_t offset[COLL_IO_IND_MD_WRITE_NDIMS];
+ hid_t file_id = H5I_INVALID_HID;
+ hid_t fapl_id = H5I_INVALID_HID;
+ hid_t dset_id = H5I_INVALID_HID;
+ hid_t dset_id2 = H5I_INVALID_HID;
+ hid_t dcpl_id = H5I_INVALID_HID;
+ hid_t dxpl_id = H5I_INVALID_HID;
+ hid_t fspace_id = H5I_INVALID_HID;
+ int mpi_rank, mpi_size;
+
+ MPI_Comm_rank(MPI_COMM_WORLD, &mpi_rank);
+ MPI_Comm_size(MPI_COMM_WORLD, &mpi_size);
+
+ filename = GetTestParameters();
+
+ fapl_id = create_faccess_plist(MPI_COMM_WORLD, MPI_INFO_NULL, facc_type);
+ VRFY((fapl_id >= 0), "create_faccess_plist succeeded");
+
+ VRFY((H5Pset_all_coll_metadata_ops(fapl_id, false) >= 0), "Unset collective metadata reads succeeded");
+ VRFY((H5Pset_coll_metadata_write(fapl_id, false) >= 0), "Unset collective metadata writes succeeded");
+
+ file_id = H5Fcreate(filename, H5F_ACC_TRUNC, H5P_DEFAULT, fapl_id);
+ VRFY((file_id >= 0), "H5Fcreate succeeded");
+
+ dset_dims[0] = (hsize_t)(mpi_size * COLL_IO_IND_MD_WRITE_CHUNK0);
+ dset_dims[1] = (hsize_t)(COLL_IO_IND_MD_WRITE_CHUNK1 * COLL_IO_IND_MD_WRITE_NCHUNK1);
+
+ fspace_id = H5Screate_simple(COLL_IO_IND_MD_WRITE_NDIMS, dset_dims, NULL);
+ VRFY((fspace_id >= 0), "H5Screate_simple succeeded");
+
+ dcpl_id = H5Pcreate(H5P_DATASET_CREATE);
+ VRFY((dcpl_id >= 0), "H5Pcreate succeeded");
+
+ chunk_dims[0] = (hsize_t)(COLL_IO_IND_MD_WRITE_CHUNK0);
+ chunk_dims[1] = (hsize_t)(COLL_IO_IND_MD_WRITE_CHUNK1);
+
+ VRFY((H5Pset_chunk(dcpl_id, COLL_IO_IND_MD_WRITE_NDIMS, chunk_dims) >= 0), "H5Pset_chunk succeeded");
+
+ VRFY((H5Pset_shuffle(dcpl_id) >= 0), "H5Pset_shuffle succeeded");
+
+ dset_id = H5Dcreate2(file_id, "dset1", H5T_NATIVE_LLONG, fspace_id, H5P_DEFAULT, dcpl_id, H5P_DEFAULT);
+ VRFY((dset_id >= 0), "H5Dcreate2 succeeded");
+
+ sel_dims[0] = (hsize_t)(COLL_IO_IND_MD_WRITE_CHUNK0);
+ sel_dims[1] = (hsize_t)(COLL_IO_IND_MD_WRITE_CHUNK1 * COLL_IO_IND_MD_WRITE_NCHUNK1);
+
+ offset[0] = (hsize_t)mpi_rank * sel_dims[0];
+ offset[1] = 0;
+
+ VRFY((H5Sselect_hyperslab(fspace_id, H5S_SELECT_SET, offset, NULL, sel_dims, NULL) >= 0),
+ "H5Sselect_hyperslab succeeded");
+
+ dxpl_id = H5Pcreate(H5P_DATASET_XFER);
+ VRFY((dxpl_id >= 0), "H5Pcreate succeeded");
+
+ VRFY((H5Pset_dxpl_mpio(dxpl_id, H5FD_MPIO_COLLECTIVE) >= 0), "H5Pset_dxpl_mpio succeeded");
+
+ data = malloc(sel_dims[0] * sel_dims[1] * sizeof(long long));
+ for (size_t i = 0; i < sel_dims[0] * sel_dims[1]; i++)
+ data[i] = rand();
+
+ VRFY((H5Dwrite(dset_id, H5T_NATIVE_LLONG, H5S_BLOCK, fspace_id, dxpl_id, data) >= 0),
+ "H5Dwrite succeeded");
+
+ dset_id2 = H5Dcreate2(file_id, "dset2", H5T_NATIVE_LLONG, fspace_id, H5P_DEFAULT, dcpl_id, H5P_DEFAULT);
+ VRFY((dset_id2 >= 0), "H5Dcreate2 succeeded");
+
+ for (size_t i = 0; i < sel_dims[0] * sel_dims[1]; i++)
+ data[i] = rand();
+
+ VRFY((H5Dwrite(dset_id2, H5T_NATIVE_LLONG, H5S_BLOCK, fspace_id, dxpl_id, data) >= 0),
+ "H5Dwrite succeeded");
+
+ free(data);
+
+ VRFY((H5Sclose(fspace_id) >= 0), "H5Sclose succeeded");
+ VRFY((H5Dclose(dset_id) >= 0), "H5Dclose succeeded");
+ VRFY((H5Dclose(dset_id2) >= 0), "H5Dclose succeeded");
+ VRFY((H5Pclose(dcpl_id) >= 0), "H5Pclose succeeded");
+ VRFY((H5Pclose(dxpl_id) >= 0), "H5Pclose succeeded");
+ VRFY((H5Pclose(fapl_id) >= 0), "H5Pclose succeeded");
+ VRFY((H5Fclose(file_id) >= 0), "H5Fclose succeeded");
+}
diff --git a/testpar/t_file.c b/testpar/t_file.c
index a6a541b..700ccc2 100644
--- a/testpar/t_file.c
+++ b/testpar/t_file.c
@@ -1060,3 +1060,62 @@ test_invalid_libver_bounds_file_close_assert(void)
ret = H5Pclose(fcpl_id);
VRFY((SUCCEED == ret), "H5Pclose");
}
+
+/*
+ * Tests that H5Pset_evict_on_close() can be set and used with a single MPI rank,
+ * but that file creation fails when the communicator has more than one rank.
+ */
+void
+test_evict_on_close_parallel_unsupp(void)
+{
+ const char *filename = NULL;
+ MPI_Comm comm = MPI_COMM_WORLD;
+ MPI_Info info = MPI_INFO_NULL;
+ hid_t fid = H5I_INVALID_HID;
+ hid_t fapl_id = H5I_INVALID_HID;
+ herr_t ret;
+
+ filename = (const char *)GetTestParameters();
+
+ /* set up MPI parameters */
+ MPI_Comm_size(MPI_COMM_WORLD, &mpi_size);
+ MPI_Comm_rank(MPI_COMM_WORLD, &mpi_rank);
+
+ /* setup file access plist */
+ fapl_id = H5Pcreate(H5P_FILE_ACCESS);
+ VRFY((fapl_id != H5I_INVALID_HID), "H5Pcreate");
+ ret = H5Pset_libver_bounds(fapl_id, H5F_LIBVER_EARLIEST, H5F_LIBVER_V18);
+ VRFY((SUCCEED == ret), "H5Pset_libver_bounds");
+
+ ret = H5Pset_evict_on_close(fapl_id, true);
+ VRFY((SUCCEED == ret), "H5Pset_evict_on_close");
+
+ /* test on 1 rank */
+ ret = H5Pset_fapl_mpio(fapl_id, MPI_COMM_SELF, info);
+ VRFY((SUCCEED == ret), "H5Pset_fapl_mpio");
+
+ if (mpi_rank == 0) {
+ fid = H5Fcreate(filename, H5F_ACC_TRUNC, H5P_DEFAULT, fapl_id);
+ VRFY((fid >= 0), "H5Fcreate");
+ ret = H5Fclose(fid);
+ VRFY((SUCCEED == ret), "H5Fclose");
+ }
+
+ VRFY((MPI_SUCCESS == MPI_Barrier(MPI_COMM_WORLD)), "MPI_Barrier");
+
+ /* test on multiple ranks if we have them */
+ if (mpi_size > 1) {
+ ret = H5Pset_fapl_mpio(fapl_id, comm, info);
+ VRFY((SUCCEED == ret), "H5Pset_fapl_mpio");
+
+ H5E_BEGIN_TRY
+ {
+ fid = H5Fcreate(filename, H5F_ACC_TRUNC, H5P_DEFAULT, fapl_id);
+ }
+ H5E_END_TRY
+ VRFY((fid == H5I_INVALID_HID), "H5Fcreate");
+ }
+
+ ret = H5Pclose(fapl_id);
+ VRFY((SUCCEED == ret), "H5Pclose");
+}
diff --git a/testpar/t_select_io_dset.c b/testpar/t_select_io_dset.c
index 2be2b40..9d3f120 100644
--- a/testpar/t_select_io_dset.c
+++ b/testpar/t_select_io_dset.c
@@ -222,6 +222,26 @@ check_actual_selection_io_mode(hid_t dxpl, uint32_t sel_io_mode_expected)
}
/*
+ * Helper routine to check that the actual selection I/O mode on a dxpl matches either of two expected modes
+ */
+static void
+check_actual_selection_io_mode_either(hid_t dxpl, uint32_t sel_io_mode_expected1,
+ uint32_t sel_io_mode_expected2)
+{
+ uint32_t actual_sel_io_mode;
+
+ if (H5Pget_actual_selection_io_mode(dxpl, &actual_sel_io_mode) < 0)
+ P_TEST_ERROR;
+ if (actual_sel_io_mode != sel_io_mode_expected1 && actual_sel_io_mode != sel_io_mode_expected2) {
+ if (MAINPROCESS)
+ printf("\n Failed: Incorrect selection I/O mode (expected/actual) %u or %u : %u",
+ (unsigned)sel_io_mode_expected1, (unsigned)sel_io_mode_expected2,
+ (unsigned)actual_sel_io_mode);
+ P_TEST_ERROR;
+ }
+}
+
+/*
* Case 1: single dataset read/write, no type conversion (null case)
*/
static void
@@ -327,8 +347,14 @@ test_no_type_conv(hid_t fid, unsigned chunked, unsigned dtrans, unsigned select,
exp_io_mode = chunked ? H5D_MPIO_CHUNK_COLLECTIVE : H5D_MPIO_CONTIGUOUS_COLLECTIVE;
testing_check_io_mode(dxpl, exp_io_mode);
- if (chunked && !dtrans)
- check_actual_selection_io_mode(dxpl, H5D_VECTOR_IO);
+ if (chunked && !dtrans) {
+ /* If there are more ranks than chunks, then some ranks will not perform vector I/O due to how the
+ * parallel compression code redistributes data */
+ if ((hsize_t)mpi_size > (dims[0] / cdims[0]))
+ check_actual_selection_io_mode_either(dxpl, H5D_VECTOR_IO, 0);
+ else
+ check_actual_selection_io_mode(dxpl, H5D_VECTOR_IO);
+ }
else
check_actual_selection_io_mode(dxpl, select ? H5D_SELECTION_IO : H5D_SCALAR_IO);
diff --git a/testpar/testphdf5.c b/testpar/testphdf5.c
index 584ca1f..2428c71 100644
--- a/testpar/testphdf5.c
+++ b/testpar/testphdf5.c
@@ -366,6 +366,9 @@ main(int argc, char **argv)
AddTest("invlibverassert", test_invalid_libver_bounds_file_close_assert, NULL,
"Invalid libver bounds assertion failure", PARATESTFILE);
+ AddTest("evictparassert", test_evict_on_close_parallel_unsupp, NULL, "Evict on close in parallel failure",
+ PARATESTFILE);
+
AddTest("idsetw", dataset_writeInd, NULL, "dataset independent write", PARATESTFILE);
AddTest("idsetr", dataset_readInd, NULL, "dataset independent read", PARATESTFILE);
@@ -521,6 +524,8 @@ main(int argc, char **argv)
"Collective MD read with link chunk I/O (H5D__sort_chunk)", PARATESTFILE);
AddTest("GH_coll_MD_wr", test_collective_global_heap_write, NULL,
"Collective MD write of global heap data", PARATESTFILE);
+ AddTest("COLLIO_INDMDWR", test_coll_io_ind_md_write, NULL,
+ "Collective I/O with Independent metadata writes", PARATESTFILE);
/* Display testing information */
TestInfo(argv[0]);
diff --git a/testpar/testphdf5.h b/testpar/testphdf5.h
index 6ac8080..6bbdb0d 100644
--- a/testpar/testphdf5.h
+++ b/testpar/testphdf5.h
@@ -233,6 +233,7 @@ void zero_dim_dset(void);
void test_file_properties(void);
void test_delete(void);
void test_invalid_libver_bounds_file_close_assert(void);
+void test_evict_on_close_parallel_unsupp(void);
void multiple_dset_write(void);
void multiple_group_write(void);
void multiple_group_read(void);
@@ -296,6 +297,7 @@ void test_partial_no_selection_coll_md_read(void);
void test_multi_chunk_io_addrmap_issue(void);
void test_link_chunk_io_sort_chunk_issue(void);
void test_collective_global_heap_write(void);
+void test_coll_io_ind_md_write(void);
void test_oflush(void);
/* commonly used prototypes */
diff --git a/tools/Makefile.am b/tools/Makefile.am
index 7db4040..d0a6c5c 100644
--- a/tools/Makefile.am
+++ b/tools/Makefile.am
@@ -19,7 +19,7 @@
include $(top_srcdir)/config/commence.am
if BUILD_TESTS_CONDITIONAL
- TESTSERIAL_DIR =test
+ TESTSERIAL_DIR=libtest test
else
TESTSERIAL_DIR=
endif
diff --git a/tools/libtest/Makefile.am b/tools/libtest/Makefile.am
index 835667c..45b3f47 100644
--- a/tools/libtest/Makefile.am
+++ b/tools/libtest/Makefile.am
@@ -19,11 +19,11 @@
include $(top_srcdir)/config/commence.am
-# Include src and tools/lib directories
-AM_CPPFLAGS+=-I$(top_srcdir)/src -I$(top_srcdir)/tools/lib
+# Include src, test, and tools/lib directories
+AM_CPPFLAGS+=-I$(top_srcdir)/src -I$(top_srcdir)/test -I$(top_srcdir)/tools/lib
-# All programs depend on the hdf5 and h5tools libraries
-LDADD=$(LIBH5TOOLS) $(LIBHDF5)
+# All programs depend on the hdf5, hdf5 test, and h5tools libraries
+LDADD=$(LIBH5TOOLS) $(LIBH5TEST) $(LIBHDF5)
# main target
diff --git a/tools/test/h5diff/Makefile.am b/tools/test/h5diff/Makefile.am
index b561d72..f920afa 100644
--- a/tools/test/h5diff/Makefile.am
+++ b/tools/test/h5diff/Makefile.am
@@ -60,8 +60,7 @@ endif
# Temporary files. *.h5 are generated by h5diff. They should
# be copied to the testfiles/ directory if update is required
-CHECK_CLEANFILES+=*.h5 expect_sorted actual_sorted
-
+CHECK_CLEANFILES+=*.h5 *.onion expect_sorted actual_sorted
DISTCLEANFILES=testh5diff.sh h5diff_plugin.sh
include $(top_srcdir)/config/conclude.am
diff --git a/tools/test/h5dump/Makefile.am b/tools/test/h5dump/Makefile.am
index a79b0fe..619647c 100644
--- a/tools/test/h5dump/Makefile.am
+++ b/tools/test/h5dump/Makefile.am
@@ -45,7 +45,7 @@ endif
# Temporary files. *.h5 are generated by h5dumpgentest. They should
# copied to the testfiles/ directory if update is required.
-CHECK_CLEANFILES+=*.h5 *.bin
+CHECK_CLEANFILES+=*.h5 *.bin *.onion
DISTCLEANFILES=testh5dump.sh testh5dumppbits.sh testh5dumpxml.sh h5dump_plugin.sh
include $(top_srcdir)/config/conclude.am