author    jhendersonHDF <jhenderson@hdfgroup.org>  2023-10-25 00:36:18 (GMT)
committer GitHub <noreply@github.com>  2023-10-25 00:36:18 (GMT)
commit    a91be87f072a8020d9a467f1ef81132cb5c40149 (patch)
tree      e5a3ef49a818d07f5649f9aeaa718047efaef658 /release_docs
parent    097fd51481a7d5eaf8f94436cf21e08ac516d6ba (diff)
Sync with develop (#3764)
* Add missing test files to distclean target (#3734)
  Cleans up new files in the test directory during Autotools `make distclean`.
* Add tools/libtest to Autotools builds (#3735)
  This was only added to CMake many years ago and tests the tools library.
* Clean up onion VFD files in tools `make clean` (#3739)
  Cleans up h5dump and h5diff *.onion files in the Autotools when running `make clean`.
* Clean Java test files on Autotools (#3740)
  Removes generated HDF5 and text output files when running `make clean`.
* Clean the flushrefresh test dir on Autotools (#3741)
  The flushrefresh_test directory was not being cleaned up with `make clean` under the Autotools.
* Fix file names in tfile.c (#3743)
  Some tests in tfile.c use h5_fileaccess to get a VFD-dependent file name but use the naming scheme from testhdf5, reusing the FILE1 and FILE8 names. This leads to unintended files like test1.h5.h5 that are not cleaned up. This changes the filename scheme for a few tests to work with h5test, resulting in more informative names and allowing the files to be cleaned up at the end of the test. The test files have also been added to the `make clean` target for the Autotools.
* Clean Autotools files in parallel tests (#3744)
  Adds missing files to `make clean` for the parallel tests, including Fortran.
* Add native VOL checks to deprecated functions (#3647)
  * Add native VOL checks to deprecated functions
  * Remove unneeded native VOL checks
  * Move native checks to top-level calls
* Fix buffer overflow in cache debugging code (#3691)
* Update stat arg for Apple (#3726)
  * Update stat arg for Apple
  * Use H5_HAVE_DARWIN for the Apple ifdef
  * Fix typo
  * Removed duplicate H5_ih_info_t
  * Added Fortran async test to CMake
* Fix Windows CPack error in the WiX package (#3747)
* Add a simple cache to the ros3 VFD (#3753)
  Adds a small cache of the first N bytes of a file opened with the read-only S3 (ros3) VFD, where N is 4 KiB or the size of the file, whichever is smaller. This avoids many small I/O operations on file open. Addresses GitHub issue #3381.
* Update Autotools to correctly configure oneAPI (#3751)
  Splits the Intel config files under the Autotools into 'classic' Intel and oneAPI versions, fixing 'unsupported option' messages. Also turns off `-check uninit` (new in 2023) in Fortran, which kills the H5_buildiface program due to false positives.
  * Enable Fortran in oneAPI CI workflow
  * Turn on Fortran in CMake, update LD_LIBRARY_PATH
  * Go back to disabling Fortran with Intel: for some reason there is a linking problem with Fortran ("error while loading shared libraries: libifport.so.5: cannot open shared object file: No such file or directory")
* Add h5pget_actual_selection_io_mode Fortran wrapper (#3746)
  * Added h5pget_actual_selection_io_mode_f test
  * Added tests for h5pget_actual_selection_io_mode_f
  * Fixed int_f type conversion
* Update Fortran action step (#3748)
* Added missing DLL entry for H5PGET_ACTUAL_SELECTION_IO_MODE_F (#3760)
* Bump the ros3 VFD cache to 16 MiB (#3759)
* Fix hangs during collective I/O with independent metadata writes (#3693)
* Fix some issues with collective metadata reads for chunked datasets (#3716)
  Adds functions/callbacks for explicit control over chunk index open/close, for checking whether a chunk index is open (so it can be opened if necessary before temporarily disabling collective metadata reads in the library), and for requesting loading of additional chunk index metadata beyond the chunk index itself.
* Fix failure in t_select_io_dset when run with more than 10 ranks (#3758)
* Fix H5Pset_evict_on_close failing regardless of actual parallel use (#3761)
  Allows H5Pset_evict_on_close to be called regardless of whether a parallel build of HDF5 is being used. File opens now fail if H5Pset_evict_on_close has been set to true on the given File Access Property List and the size of the MPI communicator being used is greater than 1.
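The ros3 cache introduced above (#3753, bumped to 16 MiB in #3759) is easy to model outside of HDF5. The sketch below is a hypothetical stand-in for the VFD's behavior, not the actual C implementation: it caches the first N bytes of an object at open time, where N = min(16 MiB, file size), so the many small reads HDF5 issues near the start of a file never hit the remote store.

```python
# Hypothetical model of the ros3 VFD's first-N-bytes cache (not HDF5 code).
CACHE_LIMIT = 16 * 1024 * 1024  # 16 MiB, per the #3759 bump

class CachedReader:
    def __init__(self, backing: bytes):
        self.backing = backing
        self.remote_reads = 0  # counts trips to the "S3" backing store
        # Populate the cache once, at open time, with a single large read.
        n = min(CACHE_LIMIT, len(backing))
        self.cache = self._remote_read(0, n)

    def _remote_read(self, offset: int, size: int) -> bytes:
        self.remote_reads += 1
        return self.backing[offset:offset + size]

    def read(self, offset: int, size: int) -> bytes:
        # Serve fully cached requests locally; fall back to remote I/O.
        if offset + size <= len(self.cache):
            return self.cache[offset:offset + size]
        return self._remote_read(offset, size)

r = CachedReader(bytes(32 * 1024 * 1024))  # a 32 MiB "object"
r.read(0, 8)                    # superblock-sized nibble: from cache
r.read(4096, 512)               # metadata read near the start: from cache
assert r.remote_reads == 1      # only the single open-time read so far
r.read(20 * 1024 * 1024, 512)   # beyond the cached prefix: goes remote
assert r.remote_reads == 2
```

The design point is the same one the commit message makes: trading one larger read at open time for many avoided round trips.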
Diffstat (limited to 'release_docs')
-rw-r--r--  release_docs/RELEASE.txt  |  81
1 file changed, 80 insertions(+), 1 deletion(-)
diff --git a/release_docs/RELEASE.txt b/release_docs/RELEASE.txt
index 1734c01..f228e39 100644
--- a/release_docs/RELEASE.txt
+++ b/release_docs/RELEASE.txt
@@ -47,6 +47,14 @@ New Features
Configuration:
-------------
+ - Improved support for Intel oneAPI
+
+ * Separates the old 'classic' Intel compiler settings and warnings
+ from the oneAPI settings
+ * Uses `-check nouninit` in debug builds to avoid false positives
+ when building H5_buildiface with `-check all`
+ * Both Autotools and CMake
+
- Added new options for CMake and Autotools to control the Doxygen
warnings as errors setting.
@@ -127,6 +135,16 @@ New Features
Library:
--------
+ - Added a simple cache to the read-only S3 (ros3) VFD
+
+ The read-only S3 VFD now caches the first N bytes of a file stored
+ in S3 to avoid a lot of small I/O operations when opening files.
+ This cache is per-file and created when the file is opened.
+
+ N is currently 16 MiB or the size of the file, whichever is smaller.
+
+ Addresses GitHub issue #3381
+
- Added new API function H5Pget_actual_selection_io_mode()
This function allows the user to determine if the library performed
@@ -159,7 +177,8 @@ New Features
- Fortran async APIs H5A, H5D, H5ES, H5G, H5F, H5L and H5O were added.
- Added Fortran APIs:
- h5pset_selection_io_f, h5pget_selection_io_f
+ h5pset_selection_io_f, h5pget_selection_io_f,
+ h5pget_actual_selection_io_mode_f,
h5pset_modify_write_buf_f, h5pget_modify_write_buf_f
- Added Fortran APIs:
@@ -219,6 +238,66 @@ Bug Fixes since HDF5-1.14.2 release
===================================
Library
-------
+ - Fixed some issues with chunk index metadata not getting read
+ collectively when collective metadata reads are enabled
+
+ When looking up dataset chunks during I/O, the parallel library
+ temporarily disables collective metadata reads since it's generally
+ unlikely that the application will read the same chunks from all
+ MPI ranks. Leaving collective metadata reads enabled during
+ chunk lookups can lead to hangs or other bad behavior depending
+ on the chunk indexing structure used for the dataset in question.
+ However, because dataset chunk index metadata was previously
+ loaded in a deferred manner, the metadata for the main chunk
+ index structure or its accompanying pieces of metadata (e.g.,
+ fixed array data blocks) could end up being read independently
+ if a chunk lookup was the first chunk index-related operation
+ performed on a dataset. This behavior is generally observed when
+ opening a dataset for which the metadata isn't in the metadata
+ cache yet and then immediately performing I/O on that dataset.
+ This behavior is not generally observed when creating a dataset
+ and then performing I/O on it, as the relevant metadata will
+ usually be in the metadata cache as a side effect of creating
+ the chunk index structures during dataset creation.
+
+ This issue has been fixed by adding callbacks to the different
+ chunk indexing structure classes that allow more explicit control
+ over when chunk index metadata gets loaded. When collective
+ metadata reads are enabled, the necessary index metadata will now
+ get loaded collectively by all MPI ranks at the start of dataset
+ I/O to ensure that the ranks don't unintentionally read this
+ metadata independently further on. These changes fix collective
+ loading of the main chunk index structure, as well as v2 B-tree
+ root nodes, extensible array index blocks and fixed array data
+ blocks. There are still pieces of metadata that cannot currently
+ be loaded collectively, however, such as extensible array data
+ blocks, data block pages and super blocks, as well as fixed array
+ data block pages. These pieces of metadata are not necessarily
+ read in by all MPI ranks since this depends on which chunks the
+ ranks have selected in the dataset. Therefore, reading of these
+ pieces of metadata remains an independent operation.
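The fix described in this entry can be sketched abstractly. In the model below (hypothetical names, not the library's internal data structures), a lazily opened chunk index is forced open while collective metadata reads are still enabled, so the lazy open can no longer fall inside the independent chunk-lookup phase:

```python
# Hypothetical model of deferred chunk-index loads vs. collective metadata
# reads (illustration only; not HDF5's actual internals).

class ChunkIndex:
    def __init__(self):
        self.is_open = False
        self.opened_collectively = None  # recorded at first open

    def open(self, collective_reads_enabled: bool):
        if not self.is_open:
            self.is_open = True
            self.opened_collectively = collective_reads_enabled

def dataset_io(index: ChunkIndex, fixed: bool) -> bool:
    collective = True  # collective metadata reads are enabled
    if fixed:
        # The fix: explicitly open the index on all ranks *before*
        # collective reads are disabled for the per-rank chunk lookups.
        index.open(collective_reads_enabled=collective)
    collective = False                 # temporarily disabled for lookups
    index.open(collective_reads_enabled=collective)  # lazy open, if needed
    collective = True                  # re-enabled after the lookups
    return index.opened_collectively

assert dataset_io(ChunkIndex(), fixed=False) is False  # read independently
assert dataset_io(ChunkIndex(), fixed=True) is True    # read collectively
```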
+
+ - Fixed potential hangs in parallel library during collective I/O with
+ independent metadata writes
+
+ When performing collective parallel writes to a dataset where metadata
+ writes are requested as (or left as the default setting of) independent,
+ hangs could potentially occur during metadata cache sync points. This
+ was due to incorrect management of the internal state tracking whether
+ an I/O operation should be collective or not, causing the library to
+ attempt collective writes of metadata when they were meant to be
+ independent writes. During the metadata cache sync points, if the number
+ of cache entries being flushed was a multiple of the number of MPI ranks
+ in the MPI communicator used to access the HDF5 file, an equal number of
+ collective MPI I/O calls were made on each rank and the dataset write would be
+ successful. However, when the number of cache entries being flushed was
+ NOT a multiple of the number of MPI ranks, the ranks with more entries
+ than others would get stuck in an MPI_File_set_view call, while other
+ ranks would get stuck in a post-write MPI_Barrier call. This issue has
+ been fixed by correctly switching to independent I/O temporarily when
+ writing metadata independently during collective dataset I/O.
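The mismatch behind this hang comes down to counting: when flushed cache entries are distributed across ranks, some ranks end up making more of the (wrongly collective) write calls than others, and a collective call that not every rank enters never completes. A hypothetical sketch of that arithmetic:

```python
# Hypothetical illustration of the sync-point mismatch (not HDF5 code):
# distribute cache-entry flushes round-robin across ranks and compare how
# many (supposedly collective) write calls each rank would make.

def flushes_per_rank(n_entries: int, n_ranks: int) -> list:
    # Round-robin assignment of cache entries to ranks.
    return [len(range(r, n_entries, n_ranks)) for r in range(n_ranks)]

# Entries a multiple of the rank count: every rank enters the same number
# of collective calls, so the write completes.
assert flushes_per_rank(8, 4) == [2, 2, 2, 2]

# NOT a multiple: ranks 0-1 expect one more collective call than ranks 2-3,
# so ranks 0-1 block in MPI_File_set_view while ranks 2-3 wait in the
# post-write MPI_Barrier -- the hang fixed by temporarily switching these
# metadata writes to independent I/O.
assert flushes_per_rank(10, 4) == [3, 3, 2, 2]
```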
+
- Fixed a bug with the way the Subfiling VFD assigns I/O concentrators
During a file open operation, the Subfiling VFD determines the topology