HDF5 version 1.14.0-1
================================================================================


INTRODUCTION
============

This document describes the differences between this release and the previous HDF5 release. It contains information on the platforms tested and known problems in this release. For more details check the HISTORY*.txt files in the HDF5 source.

Note that documentation in the links below will be updated at the time of each final release.

Links to HDF5 documentation can be found on The HDF5 web page:

     https://portal.hdfgroup.org/display/HDF5/HDF5

The official HDF5 releases can be obtained from:

     https://www.hdfgroup.org/downloads/hdf5/

Changes from Release to Release and New Features in the HDF5-1.13.x release series can be found at:

     https://portal.hdfgroup.org/display/HDF5/Release+Specific+Information

If you have any questions or comments, please send them to the HDF Help Desk:

     help@hdfgroup.org


CONTENTS
========

- New Features
- Support for new platforms and languages
- Bug Fixes since HDF5-1.12.0
- Platforms Tested
- Known Problems
- CMake vs. Autotools installations


New Features
============

Configuration:
-------------
    - Removal of MPE support

      The ability to build with MPE instrumentation has been removed along with the following configure options:

      Autotools:
        --with-mpe=

      CMake has never supported building with MPE support.

      (DER - 2022/11/08)

    - Removal of dmalloc support

      The ability to build with dmalloc support has been removed along with the following configure options:

      Autotools:
        --with-dmalloc=

      CMake:
        HDF5_ENABLE_USING_DMALLOC

      (DER - 2022/11/08)

    - Removal of memory allocation sanity checks configure options

      With the removal of the memory allocation sanity checks feature, the following configure options are no longer necessary and have been removed:

      Autotools:
        --enable-memory-alloc-sanity-check

      CMake:
        HDF5_MEMORY_ALLOC_SANITY_CHECK
        HDF5_ENABLE_MEMORY_STATS

      (DER - 2022/11/03)

    - Add new CMake configuration variable HDF5_USE_GNU_DIRS

      HDF5_USE_GNU_DIRS (default OFF) selects the use of GNU Coding Standard install directory variables by including the CMake module GNUInstallDirs (see CMake documentation for details). The HDF_DIR_PATHS macro in the HDFMacros.cmake file sets various PATH variables for use during the build, test and install processes. By default, the historical settings for these variables will be used.

      (ADB - 2022/10/21, GH-2175, GH-1716)

    - Update CMake minimum version to 3.18

      Updated CMake minimum version from 3.12 to 3.18 and removed version checks which were added for Windows features not yet available in version 3.12. Also removed configure macros and code checks for old style code compile checks.

      (ADB - 2022/08/29, HDFFV-11329)

    - Correct the usage of CMAKE_Fortran_MODULE_DIRECTORY and where to install Fortran mod files.

      The Fortran module files, ending in .mod, describe a Fortran 90 (and above) module API and ABI. They are not like C header files describing an API; they are compiler dependent and arch dependent, and not easily readable by a human being. They are nevertheless searched for in the include directories by gfortran (in directories specified with -I).

      Autotools configure uses the -fmoddir option to specify the folder. CMake will use the "mod" folder by default unless overridden by the CMake variable HDF5_INSTALL_MODULE_DIR.
      (ADB - 2022/07/21)

    - HDF5 memory allocation sanity checking is now off by default for Autotools debug builds

      HDF5 can be configured to perform sanity checking on internal memory allocations by adding heap canaries to these allocations. However, enabling this option can cause issues with external filter plugins when working with (reallocating/freeing/allocating and passing back) buffers.

      Previously, this option was off by default for all CMake build types, but only off by default for non-debug Autotools builds. Since debug is the default build mode for HDF5 when built from source with Autotools, this can result in surprising segfaults that don't occur when an application is built against a release version of HDF5. Therefore, this option is now off by default for all build types across both CMake and Autotools.

      (JTH - 2022/03/01)

    - Reworked corrected path searched by CMake find_package command

      The install path for cmake find_package files had been changed to use "share/cmake" for all platforms. However, setting the HDF5_ROOT variable failed to locate the configuration files. The build variable HDF5_INSTALL_CMAKE_DIR is now set to the /cmake folder. The location of the configuration files can still be specified by the "HDF5_DIR" variable.

      (ADB - 2022/02/02)

    - CPack will now generate RPM/DEB packages.

      Enabled the RPM and DEB CPack generators on Linux. In addition to generating STGZ and TGZ packages, CPack will try to package the library for RPM and DEB packages. This is the initial attempt and may change as issues are resolved.

      (ADB - 2022/01/27)

    - Added new option to the h5cc scripts produced by CMake.

      Added a -showconfig option to the h5cc scripts to cat the libhdf5.settings file to the standard output.

      (ADB - 2022/01/25)

    - CMake will now run the PowerShell script tests in test/ by default on Windows.

      The test directory includes several shell script tests that previously were not run by CMake on Windows. These are now run by default. If TEST_SHELL_SCRIPTS is ON and PWSH is found, the PowerShell scripts will execute, similar to the bash scripts on Unix platforms.

      (ADB - 2021/11/23)

    - Added new configure option to support building parallel tools.

      See Tools below (Autotools - CMake):
        --enable-parallel-tools
        HDF5_BUILD_PARALLEL_TOOLS

      (RAW - 2021/10/25)

    - Added new configure options to enable dimension scales APIs (H5DS*) to use new object references with the native VOL connector (aka native HDF5 library). New references are always used for non-native terminal VOL connectors (e.g., DAOS).

      Autotools:
        --enable-dimension-scales-with-new-ref
      CMake:
        HDF5_DIMENSION_SCALES_NEW_REF=ON

      (EIP - 2021/10/25, HDFFV-11180)

    - Refactored the utils folder.

      Added subfolder test and moved the 'swmr_check_compat_vfd.c' file from test into utils/test. Deleted the duplicate swmr_check_compat_vfd.c file in the hl/tools/h5watch folder. Also fixed vfd check options.

      (ADB - 2021/10/18)

    - Changed autotools and CMake configurations to derive both compilation warnings-as-errors and warnings-only-warn configurations from the same files, 'config/*/*error*'. Removed redundant files 'config/*/*noerror*'.
      (DCY - 2021/09/29)

    - Adds C++ Autotools configuration file for Intel

      * Checks for icpc as the compiler
      * Sets std=c++11
      * Copies most non-warning flags from intel-flags

      (DER - 2021/06/02)

    - Adds C++ Autotools configuration file for PGI

      * Checks for pgc++ as the compiler name (was: pgCC)
      * Sets -std=c++11
      * Other options basically match new C options (below)

      (DER - 2021/06/02)

    - Updates PGI C options

      * -Minform set to warn (was: inform) to suppress spurious messages
      * Sets -gopt -O2 as debug options
      * Sets -O4 as 'high optimization' option
      * Sets -O0 as 'no optimization' option
      * Removes specific settings for PGI 9 and 10

      (DER - 2021/06/02)

    - A C++11-compliant compiler is now required to build the C++ wrappers

      CMAKE_CXX_STANDARD is now set to 11 when building with CMake and -std=c++11 is added when building with clang/gcc via the Autotools.

      (DER - 2021/05/27)

    - CMake will now run the shell script tests in test/ by default

      The test directory includes several shell script tests that previously were not run by CMake. These are now run by default. TEST_SHELL_SCRIPTS has been set to ON and SH_PROGRAM has been set to bash (some test scripts use bash-isms). Platforms without bash (e.g., Windows) will ignore the script tests.

      (DER - 2021/05/23)

    - Removed unused HDF5_ENABLE_HSIZET option from CMake

      This has been unused for some time and has no effect.

      (DER - 2021/05/23)

    - CMake no longer builds the C++ library by default

      HDF5_BUILD_CPP_LIB now defaults to OFF, which is in line with the Autotools build defaults.

      (DER - 2021/04/20)

    - Removal of pre-VS2015 work-arounds

      HDF5 now requires Visual Studio 2015 or greater, so old work-around code and definitions have been removed, including:

      * snprintf and vsnprintf
      * llround, llroundf, lround, lroundf, round, roundf
      * strtoll and strtoull
      * va_copy
      * struct timespec

      (DER - 2021/03/22)

    - Add CMake variable HDF5_LIB_INFIX

      This infix is added to all library names after 'hdf5'. E.g., the infix '_openmpi' results in the library name 'libhdf5_openmpi.so'. This name is used in packages on Debian-based systems. (see https://packages.debian.org/jessie/amd64/libhdf5-openmpi-8/filelist)

      (barcode - 2021/03/22)

    - On macOS, Universal Binaries can now be built, allowing native execution on both Intel and Apple Silicon (ARM) based Macs.

      To do so, set CMAKE_OSX_ARCHITECTURES="x86_64;arm64"

      (SAM - 2021/02/07, github-311)

    - Added a configure-time option to control certain compiler warnings diagnostics

      A new configure-time option was added that allows certain compiler warning diagnostics to keep their default behavior. This is mainly intended for library developers and currently only works for gcc 10 and above. The diagnostics flags apply to C, C++ and Fortran compilers and will appear in "H5 C Flags", "H5 C++ Flags" and "H5 Fortran Flags", respectively. They will NOT be exported to h5cc, etc.

      The default is OFF, which will disable the warnings URL and color attributes for the warnings output. ON will not add the flags and allow default behavior.

      Autotools:
        --enable-diags

      CMake:
        HDF5_ENABLE_BUILD_DIAGS

      (ADB - 2021/02/05, HDFFV-11213)

    - CMake option to build the HDF filter plugins project as an external project

      The HDF filter plugins project is a collection of registered compression filters that can be dynamically loaded when needed to access data stored in an HDF5 file. This CMake-only option allows the plugins to be built and distributed with the HDF5 library and tools. Like the options for szip and zlib, either a tgz file or a git repository can be specified for the source.
      The option was refactored to use the CMake FetchContent process. This allows more control over the filter targets, but required external project command options to be moved to a CMake include file, HDF5PluginCache.cmake. Also enabled the filter examples to be used as tests for operation of the filter plugins.

      (ADB - 2020/12/10, OESS-98)

    - FreeBSD Autotools configuration now defaults to 'cc' and 'c++' compilers

      On FreeBSD, the Autotools defaulted to 'gcc' as the C compiler and did not process C++ options. Since FreeBSD 10, the default compiler has been clang (via 'cc').

      The default compilers have been set to 'cc' for C and 'c++' for C++, which will pick up clang and clang++ respectively on FreeBSD 10+. Additionally, clang options are now set correctly for both C and C++ and g++ options will now be set if that compiler is being used (an omission from the former functionality).

      (DER - 2020/11/28, HDFFV-11193)

    - Fixed POSIX problems when building w/ gcc on Solaris

      When building on Solaris using gcc, the POSIX symbols were not being set correctly, which could lead to issues like clock_gettime() not being found.

      The standard is now set to gnu99 when building with gcc on Solaris, which allows POSIX things to be #defined and linked correctly. This differs slightly from the gcc norm, where we set the standard to c99 and manually set POSIX #define symbols.

      (DER - 2020/11/25, HDFFV-11191)

    - Added a configure-time option to consider certain compiler warnings as errors

      A new configure-time option was added that converts some compiler warnings to errors. This is mainly intended for library developers and currently only works for gcc and clang. The warnings that are considered errors will appear in the generated libhdf5.settings file. These warnings apply to C and C++ code and will appear in "H5 C Flags" and "H5 C++ Flags", respectively. They will NOT be exported to h5cc, etc.

      The default is OFF. Building with this option may fail when compiling on operating systems and with compiler versions not commonly used by the library developers. Compilation may also fail when headers not under the control of the library developers (e.g., mpi.h, hdfs.h) raise warnings.

      Autotools:
        --enable-warnings-as-errors

      CMake:
        HDF5_ENABLE_WARNINGS_AS_ERRORS

      (DER - 2020/11/23, HDFFV-11189)

    - Autotools and CMake target added to produce doxygen generated documentation

      The default is OFF or disabled.

      Autoconf option is '--enable-doxygen'; the autotools make target is 'doxygen' and will build all doxygen targets.

      CMake configure option is 'HDF5_BUILD_DOC'.
        CMake target is 'doxygen' for all available doxygen targets
        CMake target is 'hdf5lib_doc' for the src subdirectory

      (ADB - 2020/11/03)

    - CMake option to use MSVC naming conventions with MinGW

      The HDF5_MSVC_NAMING_CONVENTION option enables the use of MSVC naming conventions when using a MinGW toolchain.

      (xan - 2020/10/30)

    - CMake option to statically link gcc libs with MinGW

      The HDF5_MINGW_STATIC_GCC_LIBS option allows statically linking libgcc/libstdc++ with the MinGW toolchain.

      (xan - 2020/10/30)

    - CMake option to build the HDF filter plugins project as an external project

      The HDF filter plugins project is a collection of registered compression filters that can be dynamically loaded when needed to access data stored in an HDF5 file. This CMake-only option allows the plugins to be built and distributed with the HDF5 library and tools. Like the options for szip and zlib, either a tgz file or a git repository can be specified for the source.
      The necessary options are (see the INSTALL_CMake.txt file):
        HDF5_ENABLE_PLUGIN_SUPPORT
        PLUGIN_TGZ_NAME or PLUGIN_GIT_URL

      There are more options necessary for various filters and the plugin project documents should be referenced.

      (ADB - 2020/09/27, OESS-98)

    - Added CMake option to format source files

      The HDF5_ENABLE_FORMATTERS option will enable creation of targets using the pattern HDF5_*_SRC_FORMAT, where * corresponds to the source folder or tool folder. All sources can be formatted by executing the format target: make format

      (ADB - 2020/08/24)

    - Add file locking configure and CMake options

      HDF5 1.10.0 introduced a file locking scheme, primarily to help enforce SWMR setup. Formerly, the only user-level control of the scheme was via the HDF5_USE_FILE_LOCKING environment variable.

      This change introduces configure-time options that control whether or not file locking will be used and whether or not the library ignores errors when locking has been disabled on the file system (useful on some HPC Lustre installations).

      In both the Autotools and CMake, the settings have the effect of changing the default property list settings (see the H5Pset/get_file_locking() entry, below). The yes/no/best-effort file locking configure setting has also been added to the libhdf5.settings file.

      Autotools:
        An --enable-file-locking=(yes|no|best-effort) option has been added.
        yes: Use file locking.
        no: Do not use file locking.
        best-effort: Use file locking and ignore "disabled" errors.

      CMake:
        Two self-explanatory options have been added:
        HDF5_USE_FILE_LOCKING
        HDF5_IGNORE_DISABLED_FILE_LOCKS
        Setting both of these to ON is equivalent to the Autotools' best-effort setting.

      NOTE: The precedence order of the various file locking control mechanisms is:
        1) HDF5_USE_FILE_LOCKING environment variable (highest)
        2) H5Pset_file_locking()
        3) configure/CMake options (which set the property list defaults)
        4) library defaults (currently best-effort)

      (DER - 2020/07/30, HDFFV-11092)
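      A minimal sketch of the run-time C call referenced in the NOTE above (the file name is illustrative and error checking is omitted):

          #include "hdf5.h"

          int main(void)
          {
              hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);

              /* Use file locking, but ignore errors when locking has been
               * disabled on the file system (the "best-effort" behavior). */
              H5Pset_file_locking(fapl, 1, 1);

              hid_t file = H5Fopen("example.h5", H5F_ACC_RDONLY, fapl);
              /* ... */
              H5Fclose(file);
              H5Pclose(fapl);
              return 0;
          }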
    - CMake option to link the generated Fortran MOD files into the include directory.

      Fortran compilation of MOD files can produce different binary files between SHARED and STATIC compiles with different compilers and/or different platforms. Note that it has been found that different versions of Fortran compilers will produce incompatible MOD files. Currently, CMake will locate these MOD files in subfolders of the include directory and add that path to the Fortran library target in the CMake config file, which can be used by the CMake find library process. For other build systems using the binary from a CMake install, a new CMake configuration can be used to copy the pre-chosen version of the Fortran MOD files into the install include directory.

      The default will depend on the configuration of BUILD_STATIC_LIBS and BUILD_SHARED_LIBS:

        BUILD_STATIC_LIBS  BUILD_SHARED_LIBS  Default
        YES                YES                SHARED
        YES                NO                 STATIC
        NO                 YES                SHARED
        NO                 NO                 SHARED

      The defaults can be overridden by setting the config option HDF5_INSTALL_MOD_FORTRAN to one of NO, SHARED, or STATIC.

      (ADB - 2020/07/09, HDFFV-11116)

    - CMake option to use AEC (open source SZip) library instead of SZip

      The open source AEC library is a replacement library for SZip. In order to use it for HDF5 the libaec CMake source was changed to add "-fPIC" and exclude test files. Autotools does not build the compression libraries within HDF5 builds. The new option USE_LIBAEC is required to compensate for the different files produced by the AEC build.

      (ADB - 2020/04/22, OESS-65)

    - CMake ConfigureChecks.cmake file now uses CHECK_STRUCT_HAS_MEMBER

      Some handcrafted tests in HDFTests.c have been removed and the CMake CHECK_STRUCT_HAS_MEMBER module is now used.

      (ADB - 2020/03/24, TRILAB-24)

    - Both build systems use same set of warnings flags

      GNU C, C++ and gfortran warnings flags were moved to files in a config sub-folder named gnu-warnings. Flags that are only available for a specific version of the compiler are in files named with that version. Clang C warnings flags were moved to files in a config sub-folder named clang-warnings. Intel C, Fortran warnings flags were moved to files in a config sub-folder named intel-warnings.

      There are flags in files named "error-xxx" with warnings that may be promoted to errors. Some source files may still need fixes. There are also pairs of files named "developer-xxx" and "no-developer-xxx" that are chosen by the CMake option HDF5_ENABLE_DEV_WARNINGS or the configure option --enable-developer-warnings.

      In addition, CMake no longer applies these warnings for examples.

      (ADB - 2020/03/24, TRILAB-192)


Library:
--------
    - Overhauled the Virtual Object Layer (VOL)

      The virtual object layer (VOL) was added in HDF5 1.12.0 but the initial implementation required API-breaking changes to better support optional operations and pass-through VOL connectors. The original VOL API is now considered deprecated and VOL users and connector authors should target the 1.14 VOL API.

      The specific changes are too extensive to document in a release note, so VOL users and connector authors should consult the updated VOL connector author's guide and the 1.12-1.14 VOL migration guide.

      (DER - 2022/12/28)

    - H5VLquery_optional() signature change

      The last parameter of this API call has changed from a pointer to hbool_t to a pointer to uint64_t. Due to the changes in how optional operations are handled in the 1.14 VOL API, we cannot make the old API call work with the new scheme, so there is no API compatibility macro for it.

      (DER - 2022/12/28)

    - H5I_free_t callback signature change

      In order to support asynchronous operations and future IDs, the signature of the H5I_free_t callback has been modified to take a second 'request' parameter. Due to the nature of the internal library changes, no API compatibility macro is available for this change.

      (DER - 2022/12/28)

    - Fix for CVE-2019-8396

      Malformed HDF5 files may have truncated content which does not match the expected size. When H5O__pline_decode() attempts to decode these it may read past the end of the allocated space leading to heap overflows as bounds checking is incomplete.

      The fix ensures each element is within bounds before reading.

      (2022/11/09 - HDFFV-10712, CVE-2019-8396, GitHub #2209)

    - Removal of memory allocation sanity checks feature

      This feature added heap canaries and statistics tracking for internal library memory operations. Unfortunately, the heap canaries caused problems when library memory operations were mixed with standard C library memory operations (such as in the filter pipeline, where buffers may have to be reallocated). Since any platform with a C compiler also usually has much more sophisticated memory sanity checking tools than the HDF5 library provided (e.g., valgrind), we have decided to remove the feature entirely.

      In addition to the configure changes described above, this also removes the following from the public API:
        H5get_alloc_stats()
        H5_alloc_stats_t

      (DER - 2022/11/03)

    - Added multi dataset I/O feature

      Added H5Dread_multi, H5Dread_multi_async, H5Dwrite_multi, and H5Dwrite_multi_async API routines to allow I/O on multiple datasets with a single API call. Added H5Dread_multi_f and H5Dwrite_multi_f Fortran wrappers. Updated VOL callbacks for dataset I/O to support multi dataset I/O.

      (NAF - 2022/10/19)
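      A minimal sketch of writing two already-open datasets with one call (the dataset handles, datatypes and buffers are illustrative; error checking omitted):

          #include "hdf5.h"

          /* Write two open datasets with a single call; all arrays are
           * indexed per dataset. */
          void write_both(hid_t dset_a, hid_t dset_b,
                          const int *buf_a, const double *buf_b)
          {
              hid_t       dsets[2]   = {dset_a, dset_b};
              hid_t       mtypes[2]  = {H5T_NATIVE_INT, H5T_NATIVE_DOUBLE};
              hid_t       mspaces[2] = {H5S_ALL, H5S_ALL};
              hid_t       fspaces[2] = {H5S_ALL, H5S_ALL};
              const void *bufs[2]    = {buf_a, buf_b};

              H5Dwrite_multi(2, dsets, mtypes, mspaces, fspaces, H5P_DEFAULT, bufs);
          }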
    - Onion VFD

      The onion VFD allows creating "versioned" HDF5 files. File open/close operations after initial file creation will add changes to an external "onion" file (.onion extension by default) instead of the original file. Each written revision can be opened independently.

      To open a file with the onion VFD, use the H5Pset_fapl_onion() API call (does not need to be used for the initial creation of the file). The options for the H5FD_onion_fapl_info_t struct are described in H5FDonion.h.

      The H5FDonion_get_revision_count() API call can be used to query a file to find out how many revisions have been created.

      (DER - 2022/08/02)

    - Subfiling VFD

      The HDF5 Subfiling VFD is a new MPI-based file driver that allows an HDF5 application to distribute an HDF5 file across a collection of "sub-files" in equal-sized data segment "stripes". I/O to the logical HDF5 file is then directed to the appropriate "sub-file" according to the Subfiling configuration and a system of I/O concentrators, which are MPI ranks operating worker threads.

      By allowing a configurable stripe size, number of I/O concentrators and method for selecting MPI ranks as I/O concentrators, the Subfiling VFD aims to enable an HDF5 application to find a middle ground between the single shared file and file-per-process approaches to parallel file I/O for the particular machine the application is running on. In general, the goal is to avoid some of the complexity of the file-per-process approach while also minimizing the locking issues of the single shared file approach on a parallel file system.

      Also included with the Subfiling VFD is a new h5fuse.sh script which reads a Subfiling configuration file and then combines the various sub-files back into a single HDF5 file. By default, the h5fuse.sh script looks in the current directory for the Subfiling configuration file, but can also be pointed to the configuration file with a command-line option.

      The Subfiling VFD can be used by calling H5Pset_fapl_subfiling() on a File Access Property List and using that FAPL for file operations. Note that the Subfiling VFD currently has the following limitations:

      * Does not currently support HDF5 collective I/O, other than collective metadata writes and reads as set by H5Pset_coll_metadata_write() and H5Pset_all_coll_metadata_ops()

      * The Subfiling VFD should not currently be used with an HDF5 library that has been built with thread-safety enabled. This can cause deadlocks when failures occur due to interactions between the VFD's internal threads and HDF5's global lock.

      (JTH - 2022/07/22)
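      A minimal sketch of creating a file through the Subfiling VFD with its default configuration (assumes an MPI-enabled build with the Subfiling VFD available; the file name is illustrative and error checking is omitted):

          #include <mpi.h>
          #include "hdf5.h"
          #include "H5FDsubfiling.h"

          int main(int argc, char **argv)
          {
              int provided = 0;

              /* The Subfiling VFD requires MPI_THREAD_MULTIPLE support */
              MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

              hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
              H5Pset_mpi_params(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);

              /* Passing NULL selects the default Subfiling configuration; an
               * H5FD_subfiling_config_t struct may be passed instead. */
              H5Pset_fapl_subfiling(fapl, NULL);

              hid_t file = H5Fcreate("subfiled.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);
              /* ... collective dataset creation and I/O ... */
              H5Fclose(file);
              H5Pclose(fapl);
              MPI_Finalize();
              return 0;
          }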
    - Add a new public function, H5ESget_requests()

      This function allows the user to retrieve request pointers from an event set. It is intended for use primarily by VOL plugin developers.

      (NAF - 2022/01/11)

    - Adds new file driver-level memory copy operation for "ctl" callback and updates compact dataset I/O routines to utilize it

      When accessing an HDF5 file with a file driver that uses memory allocated in special ways (e.g., without standard library's `malloc`), a crash could be observed when HDF5 tries to perform `memcpy` operations on such a memory region.

      These changes add a new H5FD_FEAT_MEMMANAGE VFD feature flag, which, if specified as supported by a VFD, will inform HDF5 that the VFD either uses special memory management routines or wishes to perform memory management in a specific way. Therefore, this flag instructs HDF5 to ask the file driver to perform memory management for certain operations.

      These changes also introduce a new "ctl" callback operation identified by the H5FD_CTL__MEM_COPY op code. This operation simply asks a VFD to perform a memory copy. The arguments to this operation are passed to the "ctl" callback's "input" parameter as a pointer to a struct defined as:

          struct H5FD_ctl_memcpy_args_t {
              void *      dstbuf;  /**< Destination buffer */
              hsize_t     dst_off; /**< Offset within destination buffer */
              const void *srcbuf;  /**< Source buffer */
              hsize_t     src_off; /**< Offset within source buffer */
              size_t      len;     /**< Length of data to copy from source buffer */
          } H5FD_ctl_memcpy_args_t;

      Further, HDF5's compact dataset I/O routines were identified as a problematic area that could cause a crash for VFDs that make use of special memory management. Those I/O routines were therefore updated to make use of this new "ctl" callback operation in order to ask the underlying file driver to correctly handle memory copies.

      (JTH - 2021/09/28)

    - Adds new "ctl" callback to VFD H5FD_class_t structure with the following prototype:

          herr_t (*ctl)(H5FD_t *file, uint64_t op_code, uint64_t flags,
                        const void *input, void **output);

      This newly-added "ctl" callback allows Virtual File Drivers to intercept and handle arbitrary operations identified by an operation code. Its parameters are as follows:

        `file`    [in]  - A pointer to the file to be operated on
        `op_code` [in]  - The operation code identifying the operation to be performed
        `flags`   [in]  - Flags governing the behavior of the operation performed (see H5FDpublic.h for a list of valid flags)
        `input`   [in]  - A pointer to arguments passed to the VFD performing the operation
        `output`  [out] - A pointer for the receiving VFD to use for output from the operation

      (JRM - 2021/08/16)
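      A skeleton of a driver-side "ctl" implementation using the prototype above; the op code handling shown is purely illustrative, and real drivers should consult H5FDpublic.h for the defined op codes and flags:

          #include "hdf5.h"

          static herr_t
          my_vfd_ctl(H5FD_t *file, uint64_t op_code, uint64_t flags,
                     const void *input, void **output)
          {
              herr_t ret_value = 0;

              switch (op_code) {
                  /* case MY_VFD_CTL_SOME_OP:  (driver-defined handling of a
                   *     known op code using the input/output pointers)
                   *     break;
                   */
                  default:
                      /* Whether an unknown op code is an error depends on the
                       * flags passed in by the library or application. */
                      ret_value = -1;
                      break;
              }

              return ret_value;
          }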
    - Change how the release part of version, in major.minor.release, is checked for compatibility

      The HDF5 library uses a function, H5check_version, to check that the version defined in the header files, which is used to compile an application, is compatible with the version codified in the library, which the application loads at runtime.

      This previously required an exact match or the library would print a warning, dump the build settings and then abort or continue. An environment variable controlled the logic.

      Now the function first checks that the library release version, in major.minor.release, is not older than the version in the headers. Secondly, if the release version is different, it checks if either the library version or the header version is in the exception list, in which case the release part of version, in major.minor.release, must be exact. An environment variable still controls the logic.

      (ADB - 2021/07/27)

    - gcc warning suppression macros were moved out of H5public.h

      The HDF5 library uses a set of macros to suppress warnings on gcc. These were originally located in H5public.h so that the multi VFD (which only uses public headers) could also make use of them, but internal macros should not be publicly exposed like this.

      These macros have now been moved to H5private.h. Pending future multi VFD refactoring, the macros have been duplicated in H5FDmulti.c to suppress the format string warnings there.

      (DER - 2021/06/03)

    - H5Gcreate1() now rejects size_hint parameters larger than UINT32_MAX

      The size_hint value is ultimately stored in a uint32_t struct field, so specifying a value larger than this on a 64-bit machine can cause undefined behavior including crashing the system.

      The documentation for this API call was also incorrect, stating that passing a negative value would cause the library to use a default value. Instead, passing a "negative" value actually passes a very large value, which is probably not what the user intends and can cause crashes on 64-bit systems.

      The Doxygen documentation has been updated and passing values larger than UINT32_MAX for size_hint will now produce a normal HDF5 error.

      (DER - 2021/04/29, HDFFV-11241)

    - H5Pset_fapl_log() no longer crashes when passed an invalid fapl ID

      When passed an invalid fapl ID, H5Pset_fapl_log() would usually segfault when attempting to free an uninitialized pointer in the error handling code. This behavior is more common in release builds or when the memory sanitization checks were not selected as a build option.

      The pointer is now correctly initialized and the API call now produces a normal HDF5 error when fed an invalid fapl ID.

      (DER - 2021/04/28, HDFFV-11240)

    - Fixes a segfault when H5Pset_mdc_log_options() is called multiple times

      The call incorrectly attempts to free an internal copy of the previous log location string, which causes a segfault. This only happens when the call is invoked multiple times on the same property list. On the first call to a given fapl, the log location is set to NULL so the segfault does not occur.

      The string is now handled properly and the segfault no longer occurs.

      (DER - 2021/04/27, HDFFV-11239)

    - HSYS_GOTO_ERROR now emits the results of GetLastError() on Windows

      HSYS_GOTO_ERROR is an internal macro that is used to produce error messages when system calls fail. These strings include errno and the associated strerror() value, which are not particularly useful when a Win32 API call fails.

      On Windows, this macro has been updated to include the result of GetLastError(). When a system call fails on Windows, usually only one of errno and GetLastError() will be useful, however we emit both for the user to parse. The Windows error message is not emitted as it would be awkward to free the FormatMessage() buffer given the existing HDF5 error framework. Users will have to look up the error codes in MSDN.

      The format string on Windows has been changed from:

        "%s, errno = %d, error message = '%s'"

      to:

        "%s, errno = %d, error message = '%s', Win32 GetLastError() = %"PRIu32""

      for those inclined to parse it for error values.

      (DER - 2021/03/21)

    - File locking now works on Windows

      Since version 1.10.0, the HDF5 library has used a file locking scheme to help enforce one reader at a time accessing an HDF5 file, which can be helpful when setting up readers and writers to use the single-writer/multiple-readers (SWMR) access pattern.

      In the past, this was only functional on POSIX systems where flock() or fcntl() were present. Windows used a no-op stub that always succeeded.
      HDF5 now uses LockFileEx() and UnlockFileEx() to lock the file using the same scheme as POSIX systems. We lock the entire file when we set up the locks (by passing DWORDMAX as both size parameters to LockFileEx()).

      (DER - 2021/03/19, HDFFV-10191)

    - H5Epush_ret() now requires a trailing semicolon

      H5Epush_ret() is a function-like macro that has been changed to contain a `do {} while(0)` loop. Consequently, a trailing semicolon is now required to end the `while` statement. Previously, a trailing semicolon would work, but was not mandatory.

      This change was made to allow clang-format to correctly format the source code.

      (SAM - 2021/03/03)

    - Improved performance of H5Sget_select_elem_pointlist

      Modified library to cache the point after the last block of points retrieved by H5Sget_select_elem_pointlist, so a subsequent call to the same function to retrieve the next block of points from the list can proceed immediately without needing to iterate over the point list.

      (NAF - 2021/01/19)

    - Replaced H5E_ATOM with H5E_ID in H5Epubgen.h

      The term "atom" is archaic and not in line with current HDF5 library terminology, which uses "ID" instead. "Atom" has mostly been purged from the library internals and this change removes H5E_ATOM from the H5Epubgen.h (exposed via H5Epublic.h) and replaces it with H5E_ID.

      (DER - 2020/11/24, HDFFV-11190)

    - Add a new public function H5Ssel_iter_reset

      This function resets a dataspace selection iterator back to an initial state so that it may be used for iteration once more. This can be useful when needing to iterate over a selection multiple times without having to repeatedly create/destroy a selection iterator for that dataspace selection.

      (JTH - 2020/09/18)
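      A sketch of rewinding a selection iterator with the new call (the actual iteration via H5Ssel_iter_get_seq_list() is elided; error checking omitted):

          #include "hdf5.h"

          void iterate_twice(hid_t space_id, size_t elmt_size)
          {
              hid_t iter = H5Ssel_iter_create(space_id, elmt_size, 0);

              /* ... first pass over the selection using the iterator ... */

              /* Rewind the iterator instead of closing and re-creating it */
              H5Ssel_iter_reset(iter, space_id);

              /* ... second pass over the same selection ... */

              H5Ssel_iter_close(iter);
          }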
    - Remove HDFS VFD stubs

      The original implementation of the HDFS VFD included non-functional versions of the following public API calls when the HDFS VFD is not built as a part of the HDF5 library:

      * H5FD_hdfs_init()
      * H5Pget_fapl_hdfs()
      * H5Pset_fapl_hdfs()

      They will remain present in HDF5 1.10 and HDF5 1.12 releases for binary compatibility purposes but have been removed as of 1.14.0.

      Note that this has nothing to do with the real HDFS VFD API calls that are fully functional when the HDFS VFD is configured and built. We simply changed:

        #ifdef LIBHDFS
        #else
        #endif

      to:

        #ifdef LIBHDFS
        #endif

      which is how the other optional VFDs are handled.

      (DER - 2020/08/27)

    - Add Mirror VFD

      Use TCP/IP sockets to perform write-only (W/O) file I/O on a remote machine. Must be used in conjunction with the Splitter VFD.

      (JOS - 2020/03/13, TBD)

    - Add Splitter VFD

      Maintain separate R/W and W/O channels for "concurrent" file writes to two files using a single HDF5 file handle.

      (JOS - 2020/03/13, TBD)


Parallel Library:
-----------------
    - Several improvements to parallel compression feature, including:

      * Improved support for collective I/O (for both writes and reads)

      * Significant reduction of memory usage for the feature as a whole

      * Reduction of copying of application data buffers passed to H5Dwrite

      * Addition of support for incremental file space allocation for filtered datasets created in parallel. Incremental file space allocation is the default for these types of datasets (early file space allocation is also still supported), while early file space allocation is still the default (and only supported at allocation time) for unfiltered datasets created in parallel. Incremental file space allocation should help with parallel HDF5 applications that wish to use fill values on filtered datasets, but would typically avoid doing so since dataset creation in parallel would often take an excessive amount of time. Since these datasets previously used early file space allocation, HDF5 would allocate space for and write fill values to every chunk in the dataset at creation time, leading to noticeable overhead. Instead, with incremental file space allocation, allocation of file space for chunks and writing of fill values to those chunks will be delayed until each individual chunk is initially written to.

      * Addition of support for HDF5's "don't filter partial edge chunks" flag (https://portal.hdfgroup.org/display/HDF5/H5P_SET_CHUNK_OPTS)

      * Addition of proper support for HDF5 fill values with the feature

      * Addition of 'H5_HAVE_PARALLEL_FILTERED_WRITES' macro to H5pubconf.h so HDF5 applications can determine at compile-time whether the feature is available

      * Addition of simple examples (ph5_filtered_writes.c and ph5_filtered_writes_no_sel.c) under the examples directory to demonstrate usage of the feature

      * Improved coverage of regression testing for the feature

      (JTH - 2022/2/23)


Fortran Library:
----------------
    - Added pointer based H5Dfill_f API

      Added Fortran H5Dfill_f, which is fully equivalent to the C API. It accepts pointers, fill value datatype and datatype of dataspace elements.

      (MSB - 2022/10/10, HDFFV-10734)

    - H5Fget_name_f fixed to correctly handle trailing whitespace and newly allocated buffers.

      (MSB - 2021/08/30, github-826,972)

    - Add wrappers for H5Pset/get_file_locking() API calls

      h5pget_file_locking_f()
      h5pset_file_locking_f()

      See the configure option discussion for HDFFV-11092 (above) for more information on the file locking feature and how it's controlled.

      (DER - 2020/07/30, HDFFV-11092)


C++ Library:
------------
    - Added two new constructors to H5::H5File class

      Two new constructors were added to allow opening a file with a non-default access property list.

    - Add wrappers for H5Pset/get_file_locking() API calls

      FileAccPropList::setFileLocking()
      FileAccPropList::getFileLocking()

      See the configure option discussion for HDFFV-11092 (above) for more information on the file locking feature and how it's controlled.

      (DER - 2020/07/30, HDFFV-11092)


Java Library:
-------------
    - Added version of H5Rget_name to return the name as a Java string.

      Other get_name functions process the get_size and then get the name within the JNI implementation. Now H5Rget_name has an H5Rget_name_string companion.

      (ADB - 2022/07/12)

    - Added reference support to H5A and H5D read write vlen JNI functions.

      Added the implementation to handle VL references as an Array of Lists of byte arrays. The JNI wrappers translate the Array of Lists to/from the hvl_t vlen structures. The wrappers use the specified datatype arguments for the List type translation; it is expected that the Java type is correct.

      (ADB - 2022/07/11, HDFFV-11318)

    - H5A and H5D read write vlen JNI functions were incorrect.

      Corrected the vlen function implementations for the basic primitive types. The VLStrings functions now correctly use the implementation that had been the VL functions. (VLStrings functions did not have an implementation.) The new VL functions implementation now expects an Array of Lists between Java and the JNI wrapper. The JNI wrappers translate the Array of Lists to/from the hvl_t vlen structures.
      The wrappers use the specified datatype arguments for the List type translation; it is expected that the Java type is correct.

      (ADB - 2022/07/07, HDFFV-11310)

    - H5A and H5D read write JNI functions had a flawed vlen datatype check.

      Adapted a tools function for the JNI utils file. This reduced multiple calls to a single check and variable. The variable can then be used to call the H5Treclaim function. Adjusted existing tests and added a new test.

      (ADB - 2022/06/22)

    - Replaced HDF5AtomException with HDF5IdException

      Since H5E_ATOM changed to H5E_ID in the C library, the Java exception that wraps the error category was also renamed. Its functionality remains unchanged aside from the name.

      (See also the HDFFV-11190 note in the C library section)

      (DER - 2020/11/24, HDFFV-11190)

    - Added new H5S functions.

      H5Sselect_copy, H5Sselect_shape_same, H5Sselect_adjust, H5Sselect_intersect_block, H5Sselect_project_intersection, H5Scombine_hyperslab, H5Smodify_select, and H5Scombine_select wrapper functions added.

      (ADB - 2020/10/27, HDFFV-10868)

    - Add wrappers for H5Pset/get_file_locking() API calls

      H5Pset_file_locking()
      H5Pget_use_file_locking()
      H5Pget_ignore_disabled_file_locking()

      Unlike the C++ and Fortran wrappers, there are separate getters for the two file locking settings, each of which returns a boolean value.

      See the configure option discussion for HDFFV-11092 (above) for more information on the file locking feature and how it's controlled.

      (DER - 2020/07/30, HDFFV-11092)


Tools:
------
    - Building h5perf/h5perf_serial in "standalone mode" has been removed

      Building h5perf separately from the library was added circa 2008 in HDF5 1.6.8. It's unclear what purpose this serves and the implementation is currently broken. The existing files require H5private.h and the symbols we use to determine how the copied platform-independence scheme should be used come from H5pubconf.h, which may not match the compiler being used to build standalone h5perf.

      Due to the maintenance overhead and lack of a clear use case, support for building h5perf and h5perf_serial separately from the HDF5 library has been removed.

      (DER - 2022/07/15)

    - The perf tool has been removed

      The small `perf` tool didn't really do anything special and the name conflicts with gnu's perf tool.

      (DER - 2022/07/15, GitHub #1787)

    - 1.10 References in containers were not displayed properly by h5dump.

      Ported the 1.10 tools display function to provide the ability to inspect and display 1.10 reference data.

      (ADB - 2022/06/22)

    - h5repack added an optional verbose value for reporting R/W timing.

      In addition to adding timing capture around the read/write calls in h5repack, added help text to indicate how to show timing for read/write:

        -v N, --verbose=N   Verbose mode, print object information.
                            N - is an integer greater than 1, 2 displays read/write timing

      (ADB - 2021/11/08)

    - Added a new (Unix ONLY) parallel meta tool 'h5dwalk', which utilizes the mpifileutils (https://hpc.github.io/mpifileutils) open source utility library to enable parallel execution of other HDF5 tools. This approach can greatly enhance the serial hdf5 tool performance over large collections of files by utilizing MPI parallelism to distribute an application load over many independent MPI ranks and files.

      An introduction to the mpifileutils library and initial 'User Guide' for the new 'h5dwalk' tool can be found at:

        https://github.com/HDFGroup/hdf5doc/tree/master/RFCs/HDF5/tools/parallel_tools

      (RAW - 2021/10/25)

    - Refactored the perform tools and removed the dependency on the test library.
      Moved the perf and h5perf tools from tools/test/perform to tools/src/h5perf so that they can be installed. This required that the test library dependency be removed by copying the needed functions from h5test.c. The standalone scripts and other perform tools remain in the tools/test/perform folder.

      (ADB - 2021/08/10)

    - Removed partial long exceptions

      Some of the tools accepted shortened versions of the long options (e.g., --datas instead of --dataset). These were implemented inconsistently, are difficult to maintain, and occasionally block useful long option names. These partial long options have been removed from all the tools.

      (DER - 2021/08/03)

    - h5repack added help text for user-defined filters.

      Added a help text line that states the valid values of the filter flag for user-defined filters:

        filter_flag: 1 is OPTIONAL or 0 is MANDATORY

      (ADB - 2021/01/14, HDFFV-11099)

    - Added h5delete tool

      Deleting HDF5 storage when using the VOL can be tricky when the VOL does not create files. The h5delete tool is a simple wrapper around the H5Fdelete() API call that uses the VOL specified in the HDF5_VOL_CONNECTOR environment variable to delete a "file". If the call to H5Fdelete() fails, the tool will attempt to use the POSIX remove(3) call to remove the file.

      Note that the HDF5 library does currently have support for H5Fdelete() in the native VOL connector.

      (DER - 2020/12/16)
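      For reference, the underlying C call the tool wraps looks roughly like this (behavior depends on the VOL connector associated with the FAPL):

          #include "hdf5.h"

          /* Ask the VOL connector associated with the FAPL to delete the
           * container; the h5delete tool itself falls back to remove(3). */
          herr_t delete_container(const char *name, hid_t fapl_id)
          {
              return H5Fdelete(name, fapl_id);
          }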
    - h5repack added options to control how external links are handled.

      Currently h5repack preserves external links and cannot copy and merge data from the external files. Two options, merge and prune, were added to control how to merge data from an external link into the resulting file.

        --merge           Follow external soft link recursively and merge data.
        --prune           Do not follow external soft links and remove link.
        --merge --prune   Follow external link, merge data and remove dangling link.

      (ADB - 2020/08/05, HDFFV-9984)

    - h5repack was fixed to repack the reference attributes properly.

      The line of code that checks whether a reference inside a compound datatype needs updating was misplaced outside the loop that carries out the check. As a consequence, the next attribute, which was not a reference type, was repacked again as a reference type, causing the repack to fail. The fix moves the corresponding line of code to the correct code block.

      (KY - 2020/02/07, HDFFV-11014)


High-Level APIs:
----------------
    - Added set/get for unsigned long long attributes

      The attribute writing high-level API has been expanded to include public set/get functions for ULL attributes, analogously to the existing set/get for other types.

      (AF - 2021/09/08)


C Packet Table API:
-------------------
    -


Internal header file:
---------------------
    - All the #defines named H5FD_CTL__* were renamed to H5FD_CTL_*, i.e. the double underscore was reduced to a single underscore.


Documentation:
--------------
    - Doxygen User Guide documentation is available when configured and generated. The resulting documentation files will be in the share/html subdirectory of the HDF5 install directory.

      (ADB - 2022/08/09)


Support for new platforms, languages and compilers
==================================================
    -


Bug Fixes since HDF5-1.12.0 release
===================================
Library
-------
    - Seg fault on file close

      h5debug fails at file close with a core dump on a file that has an illegal file size in its cache image. In H5F_dest(), the library performs all the closing operations for the file and keeps track of the error encountered when reading the file cache image. At the end of the routine, it frees the file's file structure and returns an error. Due to the error return, the file object is not removed from the ID node table. This eventually causes an assertion failure in H5VL__native_file_close() when the library finally exits and tries to access that file object in the table for closing.

      The closing routine, H5F_dest(), will not free the file structure if there is an error, keeping a valid file structure in the ID node table. It will be freed later in H5VL__native_file_close() when the library exits and terminates the file package.

      (VC - 2022/12/14, HDFFV-11052, CVE-2020-10812)

    - Fix CVE-2018-13867 / GHSA-j8jr-chrh-qfrf

      Validate location (offset) of the accumulated metadata when comparing. Initially, the accumulated metadata location is initialized to HADDR_UNDEF - the highest available address. Bogus input files may provide a location or size matching this value. Comparing this address against such bogus values may provide false positives. Thus make sure the value has been initialized or fail the comparison early and let other parts of the code deal with the bogus address/size. Note: To avoid unnecessary checks, it is assumed that if the 'dirty' member in the same structure is true the location is valid.

      (EFE - 2022/10/10 GH-2230)

    - Fix CVE-2018-16438 / GHSA-9xmm-cpf8-rgmx

      Make sure the info block for external links has at least 3 bytes. According to the specification, the information block for external links contains 1 byte of version/flag information and two 0-terminated strings for the object linked to and the full path. Although not very useful, the minimum string length for each (with terminating 0) would be one byte. Checking this helps to avoid SEGVs triggered by bogus files.

      (EFE - 2022/10/09 GH-2233)

    - CVE-2021-46244 / GHSA-vrxh-5gxg-rmhm

      Compound datatypes may not have members of size 0. A member size of 0 may lead to an FPE later on as reported in CVE-2021-46244. To avoid this, check for this as soon as the member is decoded.

      (EFE - 2022/10/05 GH-2242)

    - Fix CVE-2021-45830 / GHSA-5h2h-fjjr-x9m2

      Make H5O__fsinfo_decode() more resilient to out-of-bound reads. When decoding a file space info message in H5O__fsinfo_decode() make sure each element to be decoded is still within the message. Malformed HDF5 files may have truncated content which does not match the expected size. Checking this will prevent attempting to decode unrelated data and heap overflows. So far, only free space manager address data was checked before decoding.

      (EFE - 2022/10/05 GH-2228)

    - Fix CVE-2021-46242 / GHSA-x9pw-hh7v-wjpf

      When evicting the driver info block, NULL the corresponding entry. Since H5C_expunge_entry(), called from H5AC_expunge_entry(), sets the flag H5C__FLUSH_INVALIDATE_FLAG, the driver info block will be freed. NULLing the pointer in f->shared->drvinfo will prevent use-after-free when it is used in other functions (like H5F__dest()), as other places will check whether the pointer is initialized before using its value.

      (EFE - 2022/09/29 GH-2254)

    - Fix CVE-2021-45833 / GHSA-x57p-jwp6-4v79

      Report error if dimensions of chunked storage in data layout < 2. For Data Layout Messages version 1 & 2 the specification states that the value stored in the data field is 1 greater than the number of dimensions in the dataspace. For version 3 this is not explicitly stated but the implementation suggests it to be the case. Thus the set value needs to be at least 2. For dimensionality < 2 an out-of-bounds access occurs.
      (EFE - 2022/09/28 GH-2240)

    - Fix CVE-2018-14031 / GHSA-2xc7-724c-r36j

      Parent of enum datatype message must have the same size as the enum datatype message itself. Functions accessing the enumeration values use the size of the enumeration datatype to determine the size of each element and how much data to copy. Thus the size of the enumeration and its parent need to match. Check in H5O_dtype_decode_helper() to avoid unpleasant surprises later.

      (EFE - 2022/09/28 GH-2236)

    - Fix CVE-2018-17439 / GHSA-vcxv-vp43-rch7

      H5IMget_image_info(): Make sure to not exceed local array size. Malformed HDF5 files may provide more dimensions than the array dim[] in H5IMget_image_info() is able to hold. Check the number of elements first by calling H5Sget_simple_extent_dims() with NULL for both 'dims' and 'maxdims' arguments. This will cause the function to return only the number of dimensions. The fix addresses a stack overflow on write.

      (EFE - 2022/09/27 HDFFV-10589, GH-2226)

    - Fixed an issue with variable length attributes

      Previously, if a variable length attribute was held open while its file was opened through another handle, the same attribute was opened through the second file handle, and the second file and attribute handles were closed, attempting to write to the attribute through the first handle would cause an error.

      (NAF - 2022/10/24)

    - Memory leak

      A memory leak was observed with variable-length fill value in the H5O_fill_convert() function in H5Ofill.c. The leak is manifested by running valgrind on test/set_extent.c.

      Previously, fill->buf was used for datatype conversion if it was large enough and the variable-length information was therefore lost. A buffer is now allocated regardless so that the element in fill->buf can later be reclaimed.

      (VC - 2022/10/10, HDFFV-10840)

    - Fixed an issue with hyperslab selections

      Previously, when combining hyperslab selections, it was possible for the library to produce an incorrect combined selection.

      (NAF - 2022/09/25)

    - Fixed an issue with attribute type conversion with compound datatypes

      Previously, when performing type conversion for attribute I/O with a compound datatype, the library would not fill the background buffer with the contents of the destination, potentially causing data to be lost when only writing to a subset of the compound fields.

      (NAF - 2022/08/22, GitHub #2016)

    - The offset parameter in H5Dchunk_iter() is now scaled properly

      In earlier HDF5 1.13.x versions, the chunk offset was not scaled by the chunk dimensions. This offset parameter in the callback now matches that of H5Dget_chunk_info().

      (@mkitti - 2022/08/06, GitHub #1419)
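      A sketch of a chunk-iteration callback, assuming the 1.14 H5D_chunk_iter_op_t prototype (offset, filter mask, address, size, user data; error checking omitted):

          #include <stdio.h>
          #include "hdf5.h"

          static int
          print_chunk(const hsize_t *offset, unsigned filter_mask, haddr_t addr,
                      hsize_t size, void *op_data)
          {
              /* offset[] is now expressed in dataset element coordinates,
               * matching H5Dget_chunk_info(). */
              (void)op_data;
              printf("chunk at [%llu, ...], %llu bytes, filter mask 0x%x\n",
                     (unsigned long long)offset[0], (unsigned long long)size,
                     filter_mask);
              (void)addr;
              return 0; /* continue iteration */
          }

          /* Usage sketch: H5Dchunk_iter(dset_id, H5P_DEFAULT, print_chunk, NULL); */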
    - Converted an assertion on (possibly corrupt) file contents to a normal error check

      Previously, the library contained an assertion check that a read superblock doesn't contain a superblock extension message when the superblock version < 2. When a corrupt HDF5 file is read, this assertion can be triggered in debug builds of HDF5. In production builds, this situation could cause either a library error or a crash, depending on the platform.

      (JTH - 2022/07/08, HDFFV-11316/HDFFV-11317)

    - Fixed a metadata cache bug when resizing a pinned/protected cache entry

      When resizing a pinned/protected cache entry, the metadata cache code previously would wait until after resizing the entry to attempt to log the newly-dirtied entry. This caused H5C_resize_entry to mark the entry as dirty and made H5AC_resize_entry think that it didn't need to add the newly-dirtied entry to the dirty entries skiplist.

      Thus, a subsequent H5AC__log_moved_entry would think it needed to allocate a new entry for insertion into the dirty entry skip list, since the entry didn't exist on that list. This caused an assertion failure, as the code to allocate a new entry assumes that the entry is not dirty.

      (JRM - 2022/02/28)

    - Issue #1436 identified a problem with the H5_VERS_RELEASE check in the H5check_version function.

      Investigating the original fix, #812, we discovered some inconsistencies with a new block added to check H5_VERS_RELEASE for incompatibilities. This new block was not using the new warning text dealing with the H5_VERS_RELEASE check and would cause the warning to be duplicated.

      By removing the H5_VERS_RELEASE argument in the first check for H5_VERS_MAJOR and H5_VERS_MINOR, the second check would only check the H5_VERS_RELEASE for incompatible release versions. This adheres to the statement that except for the develop branch, all release versions in a major.minor maintenance branch should be compatible. The prerequisite is that an application will not use any APIs not present in all release versions.

      (ADB - 2022/02/24, #1438)

    - Unified handling of collective metadata reads to correctly fix old bugs

      Due to MPI-related issues occurring in HDF5 from mismanagement of the status of collective metadata reads, they were forced to be disabled during chunked dataset raw data I/O in the HDF5 1.10.5 release. This wouldn't generally have affected application performance because HDF5 already disables collective metadata reads during chunk lookup, since it is generally unlikely that the same chunks will be read by all MPI ranks in the I/O operation. However, this was only a partial solution that wasn't granular enough.

      This change now unifies the handling of the file-global flag and the API context-level flag for collective metadata reads in order to simplify querying of the true status of collective metadata reads. Thus, collective metadata reads are once again enabled for chunked dataset raw data I/O, but manually controlled at places where some processing occurs on MPI rank 0 only and would cause issues when collective metadata reads are enabled.

      (JTH - 2021/11/16, HDFFV-10501/HDFFV-10562)

    - Fixed several potential MPI deadlocks in library failure conditions

      In the parallel library, there were several places where MPI rank 0 could end up skipping past collective MPI operations when some failure occurs in rank 0-specific processing. This would lead to deadlocks where rank 0 completes an operation while other ranks wait in the collective operation. These places have been rewritten to have rank 0 push an error and try to cleanup after the failure, then continue to participate in the collective operation to the best of its ability.

      (JTH - 2021/11/09)

    - Fixed an H5Pget_filter_by_id1/2() assert w/ out of range filter IDs

      Both H5Pget_filter_by_id1 and 2 did not range check the filter ID, which could trip an assert in debug versions of the library. The library now returns a normal HDF5 error when the filter ID is out of range.

      (DER - 2021/11/23, HDFFV-11286)

    - Fixed an issue with collective metadata reads being permanently disabled after a dataset chunk lookup operation.

      This would usually cause a mismatched MPI_Bcast and MPI_ERR_TRUNCATE issue in the library for simple cases of H5Dcreate() -> H5Dwrite() -> H5Dcreate().
      (JTH - 2021/11/08, HDFFV-11090)

    - Fixed cross platform incompatibility of references within variable length types

      Reference types within variable length types previously could not be read on a platform with different endianness from where they were written. Fixed so cross platform portability is restored.

      (NAF - 2021/09/30)

    - Detection of simple data transform function "x"

      In the case of the simple data transform function "x" the (parallel) library recognizes this is the same as not applying this data transform function. This improves the I/O performance. In the case of the parallel library, it also avoids breaking to independent I/O, which makes it possible to apply a filter when writing or reading data to or from the HDF5 file.

      (JWSB - 2021/09/13)

    - Fixed an invalid read and memory leak when parsing corrupt file space info messages

      When the corrupt file from CVE-2020-10810 was parsed by the library, the code that imports the version 0 file space info object header message to the version 1 struct could read past the buffer read from the disk, causing an invalid memory read. Not catching this error would cause downstream errors that eventually resulted in a previously allocated buffer to be unfreed when the library shut down. In builds where the free lists are in use, this could result in an infinite loop and SIGABRT when the library shuts down.

      We now track the buffer size and raise an error on attempts to read past the end of it.

      (DER - 2021/08/12, HDFFV-11053)

    - Fixed CVE-2018-14460

      The tool h5repack produced a segfault when the rank in the dataspace message was corrupted, causing an invalid read while decoding the dimension sizes.

      The problem was fixed by ensuring that decoding the dimension sizes and max values will not go beyond the end of the buffer.

      (BMR - 2021/05/12, HDFFV-11223)

    - Fixed CVE-2018-11206

      The tool h5dump produced a segfault when the size of a fill value message was corrupted and caused a buffer overflow.

      The problem was fixed by verifying the fill value's size against the buffer size before attempting to access the buffer.

      (BMR - 2021/03/15, HDFFV-10480)

    - Fixed CVE-2018-14033 (same issue as CVE-2020-10811)

      The tool h5dump produced a segfault when the storage size message was corrupted and caused a buffer overflow.

      The problem was fixed by verifying the storage size against the buffer size before attempting to access the buffer.

      (BMR - 2021/03/15, HDFFV-11159/HDFFV-11049)

    - Remove underscores on header file guards

      Header file guards used a variety of underscores at the beginning of the define. Removed all leading (some trailing) underscores from header file guards.

      (ADB - 2021/03/03, #361)

    - Fixed a segmentation fault

      A segmentation fault occurred with a Mathworks corrupted file. A detection of accessing a null pointer was added to prevent the problem.

      (BMR - 2021/02/19, HDFFV-11150)

    - Fixed issue with MPI communicator and info object not being copied into new FAPL retrieved from H5F_get_access_plist

      Added logic to copy the MPI communicator and info object into the output FAPL. The MPI communicator is retrieved from the VFD, while the MPI info object is retrieved from the file's original FAPL.

      (JTH - 2021/02/15, HDFFV-11109)

    - Fixed problems with vlens and refs inside compound using H5VLget_file_type()

      Modified library to properly ref count H5VL_object_t structs and only consider file vlen and reference types to be equal if their files are the same.
- Fixed problems with vlens and refs inside compounds when using H5VLget_file_type()

  Modified the library to properly reference count H5VL_object_t structs and to only consider file vlen and reference types to be equal if their files are the same.

  (NAF - 2021/01/22)

- Fixed CVE-2018-17432

  The tool h5repack produced a segfault on a corrupted file which had an invalid rank for a scalar or NULL datatype. The problem was fixed by modifying the dataspace encode and decode functions to detect and report an invalid rank. h5repack now fails with an error message for the corrupted file.

  (BMR - 2020/10/26, HDFFV-10590)

- Creation of dataset with optional filter

  When the combination of type, space, etc. does not work for a filter and the filter is optional, the filter was supposed to be skipped, but it was not skipped and the creation failed. The creation of the dataset is now allowed in such a situation.

  (BMR - 2020/08/13, HDFFV-10933)

- Explicitly declared dlopen to use RTLD_LOCAL

  The dlopen documentation states that if neither RTLD_GLOBAL nor RTLD_LOCAL is specified, then the default behavior is unspecified. The default on Linux is usually RTLD_LOCAL, while macOS defaults to RTLD_GLOBAL.

  (ADB - 2020/08/12, HDFFV-11127)

- H5Sset_extent_none() sets the dataspace class to H5S_NO_CLASS, which causes asserts/errors when passed to other dataspace API calls.

  H5S_NO_CLASS is an internal class value that should not have been exposed via a public API call. In debug builds of the library, this can cause the assert() function to trip. In non-debug builds, it will produce normal library errors.

  The new library behavior is for H5Sset_extent_none() to convert the dataspace into one of type H5S_NULL, which is better handled by the library and easier for developers to reason about.

  (DER - 2020/07/27, HDFFV-11027)
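  A minimal sketch of the new H5Sset_extent_none() behavior described above (illustrative only; the dimensions are arbitrary example values):

      #include <assert.h>
      #include "hdf5.h"

      int main(void)
      {
          hsize_t dims[2] = {10, 20};
          hid_t   space   = H5Screate_simple(2, dims, NULL);

          /* Previously this left the dataspace with the internal
           * H5S_NO_CLASS value; it now becomes an H5S_NULL dataspace. */
          H5Sset_extent_none(space);
          assert(H5Sget_simple_extent_type(space) == H5S_NULL);

          H5Sclose(space);
          return 0;
      }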
- Fixed issues CVE-2018-13870 and CVE-2018-13869

  When a buffer overflow occurred because a name length was corrupted and became very large, h5dump crashed on a memory access violation. A check for reading past the end of the buffer was added in multiple locations to prevent the crashes, and h5dump now simply fails with an error message when this error condition occurs.

  (BMR - 2020/07/22, HDFFV-11120 and HDFFV-11121)

- Fixed the segmentation fault when reading attributes with multiple threads

  It was reported that reading attributes with a variable-length string datatype would crash with a segmentation fault, particularly when the number of threads was high (>16 threads). The problem was due to the file pointer that was set in the variable-length string datatype for the attribute; that file pointer had already been closed when the attribute was accessed. The problem was fixed by setting the file pointer to the currently opened file pointer when the attribute is accessed. A similar patch was applied earlier for reading datasets with a variable-length string datatype.

  (VC - 2020/07/13, HDFFV-11080)

- Fixed CVE-2020-10810

  The tool h5clear produced a segfault during error recovery in the superblock decoding. An internal pointer is now reset to prevent further access when it has not been assigned a value.

  (BMR - 2020/06/29, HDFFV-11053)

- Fixed CVE-2018-17435

  The tool h52gif produced a segfault when the size of an attribute message was corrupted and caused a buffer overflow. The problem was fixed by verifying the attribute message's size against the buffer size before accessing the buffer. h52gif was also fixed to report the failure instead of silently exiting after the segfault was eliminated.

  (BMR - 2020/06/19, HDFFV-10591)

Java Library
------------
- Improve variable-length datatype handling in JNI.

  The existing JNI read/write functions could only handle simple variable-length datatypes with an atomic sub-datatype; more complex combinations could not be handled. The JNI read/write functions were reworked to recursively inspect datatypes for variable-length sub-datatypes.

  (ADB - 2022/10/12, HDFFV-8701,10375)

- JNI utility function does not handle new references.

  The JNI utility function for converting reference data to a string did not use the new APIs. In addition to fixing that function, new Java tests were added for the new APIs.

  (ADB - 2021/02/16, HDFFV-11212)

- Optimized the H5FArray.java class, in which virtually the entire execution time is spent in the HDFNativeData method that converts from an array of bytes to an array of the destination Java type. Two changes were made:

  1. Convert the entire byte array into a 1-d array of the desired type, rather than performing one conversion per row;
  2. Use the Java Arrays method copyOfRange to grab the section of the array from (1) that is to be inserted into the destination array.

  (PGT,ADB - 2020/12/13, HDFFV-10865)

Configuration
-------------
- Remove Javadoc generation

  The use of doxygen now supersedes the requirement to build javadocs. We do not have the resources to continue to support two documentation methods and have chosen doxygen as our standard.

  (ADB - 2022/12/19)

- Change the default for building the high-level GIF tools

  The gif2h5 and h52gif high-level tools are deprecated and will be removed in a future release. The default build setting for them has been changed from enabled to disabled. A user can enable the build of these tools if needed:

    autotools: --enable-hlgiftools
    cmake:     HDF5_BUILD_HL_GIF_TOOLS=ON

  Disabling the GIF tools eliminates the following CVEs:

    HDFFV-10592 CVE-2018-17433
    HDFFV-10593 CVE-2018-17436
    HDFFV-11048 CVE-2020-10809

  (ADB - 2022/12/16)

- Change the settings of the *.pc files to use the correct format

  The pkg-config files generated by CMake used incorrect syntax for the 'Requires' settings. Changing the setting to use 'lib-name = version' instead of 'lib-name-version' fixes the issue.

  (ADB - 2022/12/06, HDFFV-11355)

- Move MPI libraries link from PRIVATE to PUBLIC

  The install dependencies did not include the MPI libraries needed when an application or library is built with the C library. Also updated the CMake target link command to use the newer-style MPI::MPI_C link variable.

  (ADB - 2022/10/27)

- Corrected path searched by CMake find_package command

  The install path for CMake find_package files had been changed to use "share/cmake" for all platforms. However, the trailing "hdf5" directory was not removed. This additional "hdf5" directory has been removed.

  (ADB - 2021/09/27)

- Corrected pkg-config compile script

  It was discovered that the position of the "$@" argument for the command in the compile script may fail on some platforms and configurations. The position of the "$@" command argument was moved before the pkg-config sub-command.

  (ADB - 2021/08/30)

- Fixed CMake C++ compiler flags

  A recent refactoring of the C++ configure files accidentally removed the file that executed the enable_language command for C++ needed by the HDFCXXCompilerFlags.cmake file. Also updated the Intel warnings files, including adding support for Windows platforms.

  (ADB - 2021/08/10)

- Better support for libaec (open-source Szip library) in CMake

  Implemented better support for the libaec 1.0.5 (or later) library. This version of libaec contains improvements for better integration with HDF5. Furthermore, the variable USE_LIBAEC_STATIC has been introduced to allow use of the static version of the libaec library. Use libaec_DIR or libaec_ROOT to set the location in which libaec can be found.

  Be aware that the Szip library of libaec 1.0.4 depends on another library within the libaec library. This dependency is not specified in the current CMake configuration, which means that one cannot use the static Szip library of libaec 1.0.4 when building HDF5. This has been resolved in libaec 1.0.5.

  (JWSB - 2021/06/22)
- Refactor CMake configure for Fortran

  The Fortran configure tests for KINDs reused a single output file that was read to form the Integer and Real Kinds defines. However, if configure was run more than once, the CMake completed variable prevented the tests from executing again, and the last value saved in the file was used to create the define. Creating separate files for each KIND solved the issue.

  In addition, the test for H5_PAC_C_MAX_REAL_PRECISION was not pulling in the defines needed for proper operation and did not define H5_PAC_C_MAX_REAL_PRECISION correctly for a zero value. This was fixed by supplying the required defines. The test was also moved from the Fortran-specific HDF5UseFortran.cmake file to the C-centric ConfigureChecks.cmake file.

  (ADB - 2021/06/03)

- Move emscripten flag to compile flags

  The emscripten flag, -O0, was moved from the target_link_libraries command to the correct target_compile_options command.

  (ADB - 2021/04/26, HDFFV-11083)

- Remove arbitrary warning flag groups from CMake builds

  The arbitrary groups were created to reduce the quantity of warnings being reported, which overwhelmed testing report systems. Considerable work has been accomplished to reduce the warning count, and these arbitrary groups are no longer needed. Also, the default for all warnings, HDF5_ENABLE_ALL_WARNINGS, is now ON.

  Visual Studio warnings C4100, C4706, and C4127 have been moved to developer warnings, HDF5_ENABLE_DEV_WARNINGS, and are disabled for normal builds.

  (ADB - 2021/03/22, HDFFV-11228)

- Reclassify CMake messages, to allow new modes and the --log-level option

  CMake message commands have a mode argument. By default, STATUS mode was chosen for any non-error message. CMake version 3.15 added the additional modes NOTICE, VERBOSE, DEBUG and TRACE. All message commands with a mode of STATUS were reviewed and most were reclassified as VERBOSE. The new mode is protected by a check for a CMake version of at least 3.15. If CMake version 3.17 or above is used, the user can use the command line option "--log-level" to further restrict which message commands are displayed.

  (ADB - 2021/01/11, HDFFV-11144)

- Fixes Autotools determination of the stat struct having an st_blocks field

  A missing parenthesis in an autoconf macro prevented building the test code used to determine whether the stat struct contains the st_blocks field. Now that the test functions correctly, the H5_HAVE_STAT_ST_BLOCKS #define found in H5pubconf.h will be defined correctly with both the Autotools and CMake. This #define is only used in the tests and does not affect the HDF5 C library.

  (DER - 2021/01/07, HDFFV-11201)

- Add missing ENV variable line to hdfoptions.cmake file

  Using the build options to use system SZIP/ZLIB libraries also requires specifying the library root directory. Setting the {library}_ROOT ENV variable was added to the hdfoptions.cmake file.

  (ADB - 2020/10/19, HDFFV-11108)

Tools
-----
- Fix h5repack to only print output when the verbose option is selected

  When the timing option was added to h5repack, the check for the verbose option was incorrectly implemented.

  (ADB - 2022/12/02, GH #2270)
- Changed how h5dump and h5ls identify long double.

  Long double support is not consistent across platforms. The tools will now always identify long double as a 128-bit [little/big]-endian float with nn-bit precision. A new test file was created for datasets with attributes for float, double and long double. In addition, any unknown integer or float datatype will now also show the number of bits of precision. These files are also used in the Java tests.

  (ADB - 2021/03/24, HDFFV-11229, HDFFV-11113)

- Fixed tools argument parsing.

  Tools parsing used the length of the option from the long-option array to match the option from the command line. This incorrectly matched a shorter long-name option that happened to be a subset of another long option. The parsing was changed to match whole names.

  (ADB - 2021/01/19, HDFFV-11106)

- The tools library was updated by standardizing the error stack process.

  The general sequence is:

    h5tools_setprogname(PROGRAMNAME);
    h5tools_setstatus(EXIT_SUCCESS);
    h5tools_init();
    ... process the command-line (check for error-stack enable) ...
    h5tools_error_report();
    ... (do work) ...
    h5diff_exit(ret);

  (ADB - 2020/07/20, HDFFV-11066)

- h5diff fixed a command line parsing error.

  h5diff would ignore the argument to -d (delta) if it was smaller than DBL_EPSILON. The macro H5_DBL_ABS_EQUAL was removed and a direct value comparison is used instead.

  (ADB - 2020/07/20, HDFFV-10897)

- h5diff added a command line option to ignore attributes.

  h5diff would ignore all objects with a supplied path if the exclude-path argument is used. Adding the new exclude-attribute argument will exclude only attributes, with the supplied path, from the comparison.

  (ADB - 2020/07/20, HDFFV-5935)

- h5diff added another level to the verbose argument to print filenames.

  Added verbose level 3, which is level 2 plus the filenames. The levels are:

    0 : Identical to '-v' or '--verbose'
    1 : All level 0 information plus one-line attribute status summary
    2 : All level 1 information plus extended attribute status report
    3 : All level 2 information plus file names

  (ADB - 2020/07/20, HDFFV-1005)

Performance
-------------
-

Fortran API
-----------
- h5open_f and h5close_f fixes

  * Fixed it so both h5open_f and h5close_f can be called multiple times.
  * Fixed an issue with open objects remaining after h5close_f was called.
  * Added additional tests.

  (MSB, 2022/04/19, HDFFV-11306)

High-Level Library
------------------
- Fixed HL_test_packet, the test for a packet table vlen of vlen.

  Incorrect length assignment.

  (ADB - 2021/10/14)

Fortran High-Level APIs
-----------------------
-

Documentation
-------------
-

F90 APIs
--------
-

C++ APIs
--------
- Added DataSet::operator=

  Some compilers complain if the copy constructor is given explicitly but the assignment operator is implicitly set to default.

  (2021/05/19)

Testing
-------
- Stopped java/test/junit.sh.in installing libs for testing under ${prefix}

  Lib files needed are now copied to a subdirectory in the java/test directory, and on Macs the loader path for libhdf5.xxxs.so is changed in the temporary copy of libhdf5_java.dylib.

  (LRK, 2020/07/02, HDFFV-11063)
Platforms Tested
===================

    Linux 5.16.14-200.fc35 #1 SMP x86_64 GNU/Linux, Fedora35
        GNU gcc (GCC) 11.2.1 20220127 (Red Hat 11.2.1-9)
        GNU Fortran (GCC) 11.2.1 20220127 (Red Hat 11.2.1-9)
        clang version 13.0.0 (Fedora 13.0.0-3.fc35)
        (cmake and autotools)

    Linux 5.15.0-1026-aws #30-Ubuntu SMP x86_64 GNU/Linux, Ubuntu 22.04
        gcc (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
        GNU Fortran (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
        Ubuntu clang version 14.0.0-1ubuntu1
        (cmake and autotools)

    Linux 5.13.0-1031-aws #35-Ubuntu SMP x86_64 GNU/Linux, Ubuntu 20.04
        GNU gcc (GCC) 9.4.0-1ubuntu1
        GNU Fortran (GCC) 9.4.0-1ubuntu1
        clang version 10.0.0-4ubuntu1
        (cmake and autotools)

    Linux 5.3.18-150300-cray_shasta_c #1 SMP x86_64 GNU/Linux (crusher)
        cray-mpich/8.3.3
        Cray clang 14.0.2, 15.0.0
        GCC 11.2.0, 12.1.0
        (cmake)

    Linux 4.18.0-348.7.1.el8_5 #1 SMP x86_64 GNU/Linux, CentOS8
        gcc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-4)
        GNU Fortran (GCC) 8.5.0 20210514 (Red Hat 8.5.0-4)
        clang version 12.0.1 (Red Hat 12.0.1)
        (cmake and autotools)

    Linux 4.14.0-115.35.1.1chaos #1 SMP aarch64 GNU/Linux (stria)
        openmpi 4.0.5
        GCC 9.3.0 (ARM-build-5)
        GCC 7.2.0 (Spack GCC)
        arm/20.1, arm/22.1
        (cmake)

    Linux 4.14.0-115.35.1.3chaos #1 SMP ppc64le GNU/Linux (vortex)
        spectrum-mpi/rolling-release
        clang 12.0.1
        GCC 8.3.1
        XL 16.1.1
        (cmake)

    Linux-4.14.0-115.21.2 #1 SMP ppc64le GNU/Linux (lassen)
        spectrum-mpi/rolling-release
        clang 12.0.1, 14.0.5
        GCC 8.3.1
        XL 16.1.1.2, 2021.09.22, 2022.08.05
        (cmake)

    Linux-4.12.14-197.99-default #1 SMP x86_64 GNU/Linux (theta)
        cray-mpich/7.7.14
        cce 12.0.3
        GCC 11.2.0
        llvm 9.0
        Intel 19.1.2

    Linux 3.10.0-1160.36.2.el7.ppc64 #1 SMP ppc64be GNU/Linux, Power8 (echidna)
        gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39)
        g++ (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39)
        GNU Fortran (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39)

    Linux 3.10.0-1160.24.1.el7 #1 SMP x86_64 GNU/Linux, Centos7 (jelly/kituo/moohan)
        GNU C (gcc), Fortran (gfortran), C++ (g++) compilers:
            Version 4.8.5 20150623 (Red Hat 4.8.5-4)
            Version 4.9.3, Version 5.3.0, Version 6.3.0, Version 7.2.0, Version 8.3.0, Version 9.1.0
        Intel(R) C (icc), C++ (icpc), Fortran (icc) compilers:
            Version 17.0.0.098 Build 20160721
        GNU C (gcc) and C++ (g++) 4.8.5 compilers with NAG Fortran Compiler Release 6.1(Tozai)
        Intel(R) C (icc) and C++ (icpc) 17.0.0.098 compilers with NAG Fortran Compiler Release 6.1(Tozai)
        MPICH 3.1.4 compiled with GCC 4.9.3
        MPICH 3.3 compiled with GCC 7.2.0
        OpenMPI 2.1.6 compiled with icc 18.0.1
        OpenMPI 3.1.3 and 4.0.0 compiled with GCC 7.2.0
        PGI C, Fortran, C++ for 64-bit target on x86_64; Version 19.10-0
        (autotools and cmake)

    Linux-3.10.0-1160.0.0.1chaos #1 SMP x86_64 GNU/Linux (quartz)
        openmpi-4.1.2
        clang 6.0.0, 11.0.1
        GCC 7.3.0, 8.1.0
        Intel 19.0.4, 2022.2, oneapi.2022.2

    Linux-3.10.0-1160.71.1.1chaos #1 SMP x86_64 GNU/Linux (skybridge)
        openmpi/4.1
        GCC 7.2.0
        Intel/19.1
        (cmake)

    Linux-3.10.0-1160.66.1.1chaos #1 SMP x86_64 GNU/Linux (attaway)
        openmpi/4.1
        GCC 7.2.0
        Intel/19.1
        (cmake)

    Linux-3.10.0-1160.59.1.1chaos #1 SMP x86_64 GNU/Linux (chama)
        openmpi/4.1
        Intel/19.1
        (cmake)

    macOS Apple M1 11.6, Darwin 20.6.0 arm64 (macmini-m1)
        Apple clang version 12.0.5 (clang-1205.0.22.11)
        gfortran GNU Fortran (Homebrew GCC 11.2.0) 11.1.0
        Intel icc/icpc/ifort version 2021.3.0 20210609

    macOS Big Sur 11.3.1, Darwin 20.4.0 x86_64 (bigsur-1)
        Apple clang version 12.0.5 (clang-1205.0.22.9)
        gfortran GNU Fortran (Homebrew GCC 10.2.0_3) 10.2.0
        Intel icc/icpc/ifort version 2021.2.0 20210228

    macOS High Sierra 10.13.6, 64-bit (bear)
        Apple LLVM version 10.0.0 (clang-1000.10.44.4)
        gfortran GNU Fortran (GCC) 6.3.0
        Intel icc/icpc/ifort version 19.0.4.233 20190416
    macOS Sierra 10.12.6, 64-bit (kite)
        Apple LLVM version 9.0.0 (clang-900.39.2)
        gfortran GNU Fortran (GCC) 7.4.0
        Intel icc/icpc/ifort version 17.0.2

    Mac OS X El Capitan 10.11.6, 64-bit (osx1011test)
        Apple clang version 7.3.0 from Xcode 7.3
        gfortran GNU Fortran (GCC) 5.2.0
        Intel icc/icpc/ifort version 16.0.2

    Linux 2.6.32-573.22.1.el6 #1 SMP x86_64 GNU/Linux, Centos6 (platypus)
        GNU C (gcc), Fortran (gfortran), C++ (g++) compilers:
            Version 4.4.7 20120313
            Version 4.9.3, 5.3.0, 6.2.0
        MPICH 3.1.4 compiled with GCC 4.9.3
        PGI C, Fortran, C++ for 64-bit target on x86_64; Version 19.10-0

    Windows 10 x64
        Visual Studio 2015 w/ Intel C/C++/Fortran 18 (cmake)
        Visual Studio 2017 w/ Intel C/C++/Fortran 19 (cmake)
        Visual Studio 2019 w/ clang 12.0.0 with MSVC-like command-line (C/C++ only - cmake)
        Visual Studio 2019 w/ Intel C/C++/Fortran oneAPI 2022 (cmake)
        Visual Studio 2022 w/ clang 15.0.1 with MSVC-like command-line (C/C++ only - cmake)
        Visual Studio 2022 w/ Intel C/C++/Fortran oneAPI 2022 (cmake)
        Visual Studio 2019 w/ MSMPI 10.1 (C only - cmake)


Known Problems
==============

    ************************************************************
    *                         WARNING                          *
    ************************************************************

    Please refrain from running any program (including HDF5 tests) which uses the subfiling VFD on Perlmutter at the National Energy Research Scientific Computing Center, NERSC. Doing so may cause a system disruption due to subfiling crashing Lustre. The system's Lustre bug is expected to be resolved by 2023.

    ************************************************************

There is a bug in OpenMPI 4.1.0-4.1.4 that can result in incorrect results from MPI I/O requests unless one of the following parameters is passed to mpirun:

    --mca io ^ompio
    --mca fbtl_posix_read_data_sieving 0

This bug has been fixed in later versions of OpenMPI. Further discussion can be found here:

    https://www.hdfgroup.org/2022/11/workarounds-for-openmpi-bug-exposed-by-make-check-in-hdf5-1-13-3/

When using the subfiling feature with OpenMPI it is often necessary to increase the maximum number of threads:

    --mca common_pami_max_threads 4096

There is a bug in MPICH 4.0.0-4.0.3 where using device=ch4:ofi (the default) can cause failures in the testphdf5 test program. Using ch4:ucx or ch3 allows the test to pass. The bug appears to be fixed in the upcoming 4.1 release.

These MPI implementation bugs may also be present in implementations derived from OpenMPI or MPICH. The workarounds listed above may need to be adjusted to match the derived implementation, or in some cases, there may be no workaround.

The accum test fails on macOS 12.6.2 (Monterey) with clang 14.0.0. The reason for this failure and its impact are unknown.

The onion test has failures on Windows when built using Intel OneAPI 2022.3. The cause of these failures is under investigation.

CMake files do not behave correctly with paths containing spaces. Do not use spaces in paths because the required escaping for handling spaces results in very complex and fragile build files. ADB - 2019/05/07

At present, metadata cache images may not be generated by parallel applications. Parallel applications can read files with metadata cache images, but since this is a collective operation, a deadlock is possible if one or more processes do not participate.
The CPP ptable test fails on both VS2017 and VS2019 with the Intel compiler, JIRA issue: HDFFV-10628. This test will pass with VS2015 with the Intel compiler.

The subsetting option in ph5diff currently will fail and should be avoided. The subsetting option works correctly in serial h5diff.

Several tests currently fail on certain platforms:

    MPI_TEST-t_bigio fails with spectrum-mpi on ppc64le platforms.

    MPI_TEST-t_subfiling_vfd and MPI_TEST_EXAMPLES-ph5_subfiling fail with cray-mpich on theta and with XL compilers on ppc64le platforms.

    MPI_TEST_testphdf5_tldsc fails with cray-mpich 7.7 on cori and theta.

Known problems in previous releases can be found in the HISTORY*.txt files in the HDF5 source. Please report any new problems found to help@hdfgroup.org.


CMake vs. Autotools installations
=================================
While both build systems produce similar results, there are differences. Each system produces the same set of folders on Linux (only CMake works on standard Windows): bin, include, lib and share. Autotools places the COPYING and RELEASE.txt files in the root folder, while CMake places them in the share folder.

The bin folder contains the tools and the build scripts. Additionally, CMake creates dynamic versions of the tools with the suffix "-shared". Autotools installs one set of tools depending on the "--enable-shared" configuration option.

build scripts
-------------
Autotools: h5c++, h5cc, h5fc
CMake: h5c++, h5cc, h5hlc++, h5hlcc

The include folder holds the header files and the Fortran mod files. CMake places the Fortran mod files into separate shared and static subfolders, while Autotools places one set of mod files into the include folder. Because CMake produces a tools library, the header files for tools will appear in the include folder.

The lib folder contains the library files, and CMake adds the pkgconfig subfolder with the hdf5*.pc files used by the bin/build scripts created by the CMake build. CMake separates the C interface code from the Fortran code by creating C-stub libraries for each Fortran library. In addition, only CMake installs the tools library. The names of the szip libraries are different between the build systems.

The share folder will have the most differences because CMake builds include a number of CMake-specific files for support of CMake's find_package and support for the HDF5 Examples CMake project.

The issues with the gif tool are:

    HDFFV-10592 CVE-2018-17433
    HDFFV-10593 CVE-2018-17436
    HDFFV-11048 CVE-2020-10809

These CVE issues have not yet been addressed and are avoided by not building the gif tool by default. Enable building the High-Level tools with these options:

    autotools: --enable-hltools
    cmake:     HDF5_BUILD_HL_TOOLS=ON