Diffstat (limited to 'release_docs/RELEASE.txt')
-rw-r--r--  release_docs/RELEASE.txt | 81
1 file changed, 80 insertions(+), 1 deletion(-)
diff --git a/release_docs/RELEASE.txt b/release_docs/RELEASE.txt
index 1734c01..f228e39 100644
--- a/release_docs/RELEASE.txt
+++ b/release_docs/RELEASE.txt
@@ -47,6 +47,14 @@ New Features
Configuration:
-------------
+ - Improved support for Intel oneAPI
+
+ * Separates the old 'classic' Intel compiler settings and warnings
+ from the oneAPI settings
+ * Uses `-check nouninit` in debug builds to avoid false positives
+ when building H5_buildiface with `-check all`
+ * Applies to both the Autotools and CMake builds
+
- Added new options for CMake and Autotools to control the Doxygen
warnings-as-errors setting.
@@ -127,6 +135,16 @@ New Features
Library:
--------
+ - Added a simple cache to the read-only S3 (ros3) VFD
+
+ The read-only S3 VFD now caches the first N bytes of a file stored
+ in S3 to avoid many small I/O operations when opening files.
+ This cache is per-file and created when the file is opened.
+
+ N is currently 16 MiB or the size of the file, whichever is smaller.
+
+ Addresses GitHub issue #3381
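+
+ As an illustration, here is a minimal sketch of opening an S3-hosted
+ file through the ros3 VFD (assumes HDF5 was built with read-only S3
+ support; the URL and region are hypothetical, and the new cache
+ requires no user action):
+
+     #include "hdf5.h"
+
+     int main(void)
+     {
+         /* Anonymous access; the struct's version field is 1 */
+         H5FD_ros3_fapl_t fa = {1, 0, "us-east-1", "", ""};
+
+         hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
+         H5Pset_fapl_ros3(fapl, &fa);
+
+         /* On open, the VFD reads and caches the first min(16 MiB,
+          * file size) bytes, so the many small reads performed while
+          * opening the file are served from memory. */
+         hid_t file = H5Fopen(
+             "https://mybucket.s3.us-east-1.amazonaws.com/data.h5",
+             H5F_ACC_RDONLY, fapl);
+
+         /* ... read datasets as usual ... */
+
+         H5Fclose(file);
+         H5Pclose(fapl);
+         return 0;
+     }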
+
- Added new API function H5Pget_actual_selection_io_mode()
This function allows the user to determine if the library performed
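
For reference, a hedged sketch of checking this mode after a raw data
transfer (the H5D_* flag names are assumed from H5Dpublic.h; the helper
itself is illustrative):

    #include "hdf5.h"
    #include <stdio.h>

    /* Report which I/O path the library took for the last transfer
     * performed with this data transfer property list. */
    static void report_io_mode(hid_t dxpl)
    {
        uint32_t mode = 0;

        if (H5Pget_actual_selection_io_mode(dxpl, &mode) < 0)
            return;

        if (mode & H5D_SELECTION_IO)
            printf("selection I/O\n");
        if (mode & H5D_VECTOR_IO)
            printf("vector I/O\n");
        if (mode & H5D_SCALAR_IO)
            printf("scalar (legacy) I/O\n");
    }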
@@ -159,7 +177,8 @@ New Features
- Fortran async APIs H5A, H5D, H5ES, H5G, H5F, H5L and H5O were added.
- Added Fortran APIs:
- h5pset_selection_io_f, h5pget_selection_io_f
+ h5pset_selection_io_f, h5pget_selection_io_f,
+ h5pget_actual_selection_io_mode_f,
h5pset_modify_write_buf_f, h5pget_modify_write_buf_f
- Added Fortran APIs:
@@ -219,6 +238,66 @@ Bug Fixes since HDF5-1.14.2 release
===================================
Library
-------
+ - Fixed issues where chunk index metadata was not read collectively
+ when collective metadata reads are enabled
+
+ When looking up dataset chunks during I/O, the parallel library
+ temporarily disables collective metadata reads since it's generally
+ unlikely that the application will read the same chunks from all
+ MPI ranks. Leaving collective metadata reads enabled during
+ chunk lookups can lead to hangs or other bad behavior depending
+ on the chunk indexing structure used for the dataset in question.
+ However, because dataset chunk index metadata was previously
+ loaded in a deferred manner, the metadata for the main chunk
+ index structure, or its accompanying pieces of metadata (e.g.,
+ fixed array data blocks), could end up being read independently
+ if a chunk lookup was the first chunk index-related operation
+ performed on a dataset. This typically occurs when opening a
+ dataset whose metadata is not yet in the metadata cache and
+ then immediately performing I/O on it. It does not typically
+ occur when creating a dataset and then performing I/O on it,
+ as the relevant metadata will usually be in the metadata
+ cache as a side effect of creating the chunk index structures
+ during dataset creation.
+
+ This issue has been fixed by adding callbacks to the different
+ chunk indexing structure classes that allow more explicit control
+ over when chunk index metadata gets loaded. When collective
+ metadata reads are enabled, the necessary index metadata will now
+ get loaded collectively by all MPI ranks at the start of dataset
+ I/O to ensure that the ranks don't unintentionally read this
+ metadata independently further on. These changes fix collective
+ loading of the main chunk index structure, as well as v2 B-tree
+ root nodes, extensible array index blocks and fixed array data
+ blocks. There are still pieces of metadata that cannot currently
+ be loaded collectively, however, such as extensible array data
+ blocks, data block pages and super blocks, as well as fixed array
+ data block pages. These pieces of metadata are not necessarily
+ read in by all MPI ranks since this depends on which chunks the
+ ranks have selected in the dataset. Therefore, reading of these
+ pieces of metadata remains an independent operation.
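+
+ As a sketch of the access pattern that was affected (assumes an
+ MPI-parallel build; the file and dataset names are illustrative and
+ the dataset is assumed to be chunked and 16 elements long):
+
+     #include "hdf5.h"
+     #include <mpi.h>
+
+     int main(int argc, char **argv)
+     {
+         MPI_Init(&argc, &argv);
+
+         hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
+         H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
+         /* Request collective metadata reads */
+         H5Pset_all_coll_metadata_ops(fapl, 1);
+
+         /* Opening an existing dataset leaves its chunk index
+          * metadata out of the metadata cache ... */
+         hid_t file = H5Fopen("data.h5", H5F_ACC_RDONLY, fapl);
+         hid_t dset = H5Dopen2(file, "chunked_dset", H5P_DEFAULT);
+
+         /* ... so this first I/O on the dataset is where chunk index
+          * metadata could previously be read independently; it is
+          * now loaded collectively by all ranks at the start of the
+          * read. */
+         int buf[16];
+         H5Dread(dset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, H5P_DEFAULT,
+                 buf);
+
+         H5Dclose(dset);
+         H5Fclose(file);
+         H5Pclose(fapl);
+         MPI_Finalize();
+         return 0;
+     }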
+
+ - Fixed potential hangs in the parallel library during collective I/O
+ with independent metadata writes
+
+ When performing collective parallel writes to a dataset where metadata
+ writes are requested as (or left as the default setting of) independent,
+ hangs could potentially occur during metadata cache sync points. This
+ was due to incorrect management of the internal state tracking whether
+ an I/O operation should be collective or not, causing the library to
+ attempt collective writes of metadata when they were meant to be
+ independent writes. During the metadata cache sync points, if the number
+ of cache entries being flushed was a multiple of the number of MPI ranks
+ in the MPI communicator used to access the HDF5 file, an equal number
+ of collective MPI I/O calls were made and the dataset write call would be
+ successful. However, when the number of cache entries being flushed was
+ NOT a multiple of the number of MPI ranks, the ranks with more entries
+ than others would get stuck in an MPI_File_set_view call, while other
+ ranks would get stuck in a post-write MPI_Barrier call. This issue has
+ been fixed by correctly switching to independent I/O temporarily when
+ writing metadata independently during collective dataset I/O.
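+
+ For illustration, a minimal sketch of the triggering configuration
+ (names are illustrative; metadata writes are deliberately left at
+ their independent default, while raw data I/O is collective):
+
+     #include "hdf5.h"
+     #include <mpi.h>
+
+     int main(int argc, char **argv)
+     {
+         MPI_Init(&argc, &argv);
+         int rank, nranks;
+         MPI_Comm_rank(MPI_COMM_WORLD, &rank);
+         MPI_Comm_size(MPI_COMM_WORLD, &nranks);
+
+         hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
+         H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
+         /* No H5Pset_coll_metadata_write(fapl, 1) call is made, so
+          * metadata writes stay at the default (independent). */
+
+         hid_t file = H5Fcreate("out.h5", H5F_ACC_TRUNC, H5P_DEFAULT,
+                                fapl);
+
+         /* One dataset element per MPI rank */
+         hsize_t dims[1] = {(hsize_t)nranks};
+         hid_t fspace = H5Screate_simple(1, dims, NULL);
+         hid_t dset = H5Dcreate2(file, "d", H5T_NATIVE_INT, fspace,
+                                 H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
+
+         hsize_t start[1] = {(hsize_t)rank}, count[1] = {1};
+         H5Sselect_hyperslab(fspace, H5S_SELECT_SET, start, NULL,
+                             count, NULL);
+         hid_t mspace = H5Screate_simple(1, count, NULL);
+
+         hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
+         H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);
+
+         int val = rank;
+         /* Metadata cache sync points inside this collective write
+          * are where the hang could previously occur. */
+         H5Dwrite(dset, H5T_NATIVE_INT, mspace, fspace, dxpl, &val);
+
+         H5Pclose(dxpl);
+         H5Sclose(mspace);
+         H5Sclose(fspace);
+         H5Dclose(dset);
+         H5Fclose(file);
+         H5Pclose(fapl);
+         MPI_Finalize();
+         return 0;
+     }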
+
- Fixed a bug with the way the Subfiling VFD assigns I/O concentrators
During a file open operation, the Subfiling VFD determines the topology