HDF5 version 1.13.0 currently under development
================================================================================

INTRODUCTION

This document describes the differences between this release and the previous
HDF5 release. It contains information on the platforms tested and known
problems in this release. For more details check the HISTORY*.txt files in the
HDF5 source.

Note that documentation in the links below will be updated at the time of each
final release.

Links to HDF5 documentation can be found on The HDF5 web page:

     https://portal.hdfgroup.org/display/HDF5/HDF5

The official HDF5 releases can be obtained from:

     https://www.hdfgroup.org/downloads/hdf5/

Changes from Release to Release and New Features in the HDF5-1.13.x release
series can be found at:

     https://portal.hdfgroup.org/display/HDF5/HDF5+Application+Developer%27s+Guide

If you have any questions or comments, please send them to the HDF Help Desk:

     help@hdfgroup.org

CONTENTS

- New Features
- Support for new platforms and languages
- Bug Fixes since HDF5-1.10.3
- Bug Fixes since HDF5-1.10.2
- Supported Platforms
- Tested Configuration Features Summary
- More Tested Platforms
- Known Problems
- CMake vs. Autotools installations

New Features
============

Configuration:
-------------

- Both build systems use the same set of warning flags

  GNU C warning flags were moved to files in a config sub-folder named
  gnu-warnings. Flags that are only available for a specific compiler version
  are in files named with that version. Files named "error-xxx" contain
  warnings that may be promoted to errors; some source files may still need
  fixes. There are also pairs of files named "developer-xxx" and
  "no-developer-xxx" that are chosen by the CMake option
  HDF5_ENABLE_DEV_WARNINGS or the configure option --enable-developer-warnings.

  (ADB - 2020/03/24, TRILAB-192)

- Added test script for file size comparison

  If the CMake minimum version is at least 3.14, the fileCompareTest.cmake
  script will compare file sizes.

  (ADB - 2020/02/24, HDFFV-11036)

- Update CMake minimum version to 3.12

  Updated CMake minimum version to 3.12 and added version checks for Windows
  features.

  (ADB - 2020/02/05, TRILABS-142)

- Fixed CMake include properties for Fortran libraries

  Corrected the library properties for Fortran to use the correct path for
  the Fortran module files.

  (ADB - 2020/02/04, HDFFV-11012)

- Added common warnings files for GNU and Intel

  Added warning files so that one common set of flags is used during configure
  for both the autotools and CMake build systems. The initial implementation
  only affects a general set of flags for GNU and Intel compilers.

  (ADB - 2020/01/17)

- Added new options to CMake for control of testing

  Added CMake options (default ON):
      HDF5_TEST_SERIAL and/or HDF5_TEST_PARALLEL
  combined with:
      HDF5_TEST_TOOLS
      HDF5_TEST_EXAMPLES
      HDF5_TEST_SWMR
      HDF5_TEST_FORTRAN
      HDF5_TEST_CPP
      HDF5_TEST_JAVA

  (ADB - 2020/01/15, HDFFV-11001)

- Added Clang sanitizers to CMake for analyzer support when the compiler is
  clang.

  Added CMake code and files to execute the Clang sanitizers if
  HDF5_ENABLE_SANITIZERS is enabled and the USE_SANITIZER option is set to one
  of the following: Address, Memory, MemoryWithOrigins, Undefined, Thread,
  Leak, 'Address;Undefined'.

  (ADB - 2019/12/12, TRILAB-135)

- Update CMake for VS2019 support

  CMake added support for VS2019 in version 3.15. Changes to the CMake
  generator setting required changes to scripts. Also updated version
  references in CMake files as necessary.
  (ADB - 2019/11/18, HDFFV-10962)

- Update CMake options to match new autotools options

  Added configure options (autotools - CMake):
      enable-asserts       HDF5_ENABLE_ASSERTS
      enable-symbols       HDF5_ENABLE_SYMBOLS
      enable-profiling     HDF5_ENABLE_PROFILING
      enable-optimization  HDF5_ENABLE_OPTIMIZATION
  In addition, NDEBUG is no longer force-defined and relies on the CMake
  process.

  (ADB - 2019/10/07, HDFFV-100901, HDFFV-10637, TRILAB-97)

- Update CMake tests to use FIXTURES

  CMake test fixtures allow setup/cleanup tests and other dependency
  requirements to be expressed as properties for tests. This is more flexible
  for modern CMake code.

  (ADB - 2019/07/23, HDFFV-10529)

- Windows PDB files are always installed

  There are build configurations or flag settings for Windows that may not
  generate PDB files. If those files are not generated, the install utility
  will fail because the PDB files are not found. An optional variable,
  DISABLE_PDB_FILES, was added to skip installing PDB files.

  (ADB - 2019/07/17, HDFFV-10424)

- Add MinGW CMake support with a toolchain file

  A number of MinGW issues have been collected under HDFFV-10845. It was
  decided to implement the CMake cross-compiling technique of toolchain files.
  A Linux platform with the MinGW compiler stack is used for testing. Only the
  C language is fully supported, and the error tests are skipped. The C++
  language works for static builds, but shared builds have a shared library
  issue with the MinGW Standard Exception Handling library, which is not
  available on Windows. Fortran has a common cross-compile problem with the
  Fortran configure tests.

  (ADB - 2019/07/12, HDFFV-10845, HDFFV-10595)

- Windows PDB files are installed incorrectly

  For static builds, the PDB files for Windows should be installed next to the
  static libraries in the lib folder. The debug versions of libraries and PDB
  files are now also correctly built using the default CMAKE_DEBUG_POSTFIX
  setting.

  (ADB - 2019/07/09, HDFFV-10581)

- Add option to build only shared libs

  A request was made to prevent building static libraries and only build
  shared ones. A new option, ONLY_SHARED_LIBS, was added to CMake, which will
  skip building static libraries. Certain utility functions will build with
  static libs but are not published. Tests are adjusted to use the correct
  libraries depending on SHARED/STATIC settings.

  (ADB - 2019/06/12, HDFFV-10805)

- Add options to enable or disable building tools and tests

  Configure options --enable-tests and --enable-tools were added for autotools
  configure. These options are enabled by default and can be disabled with
  either --disable-tests (or --disable-tools) or --enable-tests=no
  (or --enable-tools=no). Build time is reduced ~20% when tools are disabled,
  35% when tests are disabled, and 45% when both are disabled. Re-enabling
  them after the initial build requires running configure again with the
  option(s) enabled.

  (LRK - 2019/06/12, HDFFV-9976)

- Change tools tests that test the error stack

  There are some use cases which can cause the error stack of tools to differ
  from the expected output. These tests now use grepTest.cmake, which was
  changed to allow the error file to be searched for an expected string.

  (ADB - 2019/04/15, HDFFV-10741)

- Keep stderr and stdout separate in tests

  Changed test handling of output capture. Tests now keep the stderr output
  separate from the stdout output. It is up to the test to decide which output
  to check against a reference. Also added the option to grep for a string in
  either output.
  (ADB - 2018/12/12, HDFFV-10632)

- Add toolchain and cross-compile support

  Added info on using a toolchain file to INSTALL_CMAKE.txt. A toolchain file
  is also used in cross-compiling, which requires CMAKE_CROSSCOMPILING_EMULATOR
  to be set. To help with cross-compiling the Fortran configure process, the
  HDF5UseFortran.cmake file macros were improved. Fixed a Fortran configure
  file issue that incorrectly used #cmakedefine instead of #define.

  (ADB - 2018/10/04, HDFFV-10594)

- Add warning flags for Intel compilers

  Identified Intel compiler-specific warning flags that should be used instead
  of the GNU flags.

  (ADB - 2018/10/04, TRILABS-21)

- Add default rpath to targets

  Default rpaths should be set in shared executables and libraries to allow
  dependent libraries to be loaded without requiring LD_LIBRARY_PATH to be
  set. The default path should be relative, using @rpath on macOS and $ORIGIN
  on Linux. Windows is not affected.

  (ADB - 2018/09/26, HDFFV-10594)

- Add missing USE_110_API_DEFAULT option.

  Option USE_110_API_DEFAULT sets the default version of versioned APIs. The
  bin/makevers Perl script did not set the maxidx variable correctly when the
  1.10 branch was created. This caused the versioning process to always use
  the latest version of any API.

  (ADB - 2018/08/17, HDFFV-10552)

- Added configuration checks for the following MPI functions:

      MPI_Mprobe         - Used for the Parallel Compression feature
      MPI_Imrecv         - Used for the Parallel Compression feature
      MPI_Get_elements_x - Used for the "big Parallel I/O" feature
      MPI_Type_size_x    - Used for the "big Parallel I/O" feature

  (JTH - 2018/08/02, HDFFV-10512)

- Added section to the libhdf5.settings file to indicate the status of the
  Parallel Compression and "big Parallel I/O" features.

  (JTH - 2018/08/02, HDFFV-10512)

- Add option to execute SWMR shell scripts from CMake.

  Option TEST_SHELL_SCRIPTS redirects processing into a separate
  ShellTests.cmake file for UNIX types. The tests execute the shell scripts if
  an SH program is found.

  (ADB - 2018/07/16)

Library:
--------

- Refactored public exposure of the haddr_t type in favor of "object tokens"

  To better accommodate HDF5 VOL connectors where "object addresses in a file"
  may not make much sense, the following changes were made to the library:

  * Introduced the new H5O_token_t "object token" type, which represents a
    unique and permanent identifier for referencing an HDF5 object within a
    container; these "object tokens" are meant to replace object addresses.
    Along with the new type, a new H5Oopen_by_token API call was introduced to
    open an object by a token, similar to how object addresses were previously
    used with H5Oopen_by_addr.

  * Introduced new H5Lget_info2, H5Lget_info_by_idx2, H5Literate2,
    H5Literate_by_name2, H5Lvisit2 and H5Lvisit_by_name2 API calls, along with
    their associated H5L_info2_t struct and H5L_iterate2_t callback function,
    which work with the newly-introduced object tokens instead of object
    addresses. The original functions have been renamed to version 1 functions
    and are deprecated in favor of the new version 2 functions. The H5L_info_t
    and H5L_iterate_t types have been renamed to version 1 types and are now
    deprecated in favor of their version 2 counterparts. For each of the
    functions and types, compatibility macros take the place of the original
    symbols.

  * Introduced new H5Oget_info3, H5Oget_info_by_name3, H5Oget_info_by_idx3,
    H5Ovisit3 and H5Ovisit_by_name3 API calls, along with their associated
    H5O_info2_t struct and H5O_iterate2_t callback function, which work with
    the newly-introduced object tokens instead of object addresses. The
    version 2 functions are now deprecated in favor of the version 3
    functions. The H5O_info_t and H5O_iterate_t types have been renamed to
    version 1 types and are now deprecated in favor of their version 2
    counterparts. For each, compatibility macros take the place of the
    original symbols.

  * Introduced new H5Oget_native_info, H5Oget_native_info_by_name and
    H5Oget_native_info_by_idx API calls, along with their associated
    H5O_native_info_t struct, which are used to retrieve the native HDF5 file
    format-specific information about an object. This information (such as
    object header info and B-tree/heap info) has been removed from the new
    H5O_info2_t struct so that the more generic H5Oget_info(_by_name/_by_idx)3
    routines will not try to retrieve it for non-native VOL connectors.

  * Added new H5Otoken_cmp, H5Otoken_to_str and H5Otoken_from_str routines to
    compare two object tokens, convert an object token into a nicely-readable
    string format, and convert an object token string back into a real object
    token, respectively.

  (DER, QAK, JTH - 2020/01/16)
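  As a rough illustration of the token-based workflow, the sketch below
  retrieves an object's token, prints it, and re-opens the object from the
  token. It is a minimal sketch only; the file identifier file_id and the
  path "/group/dset" are hypothetical placeholders, and error checking is
  omitted.

      #include "hdf5.h"
      #include <stdio.h>

      static void show_token(hid_t file_id)
      {
          H5O_info2_t oinfo;
          char       *token_str = NULL;
          hid_t       obj_id;

          /* Retrieve basic object info, including the object token */
          H5Oget_info_by_name3(file_id, "/group/dset", &oinfo, H5O_INFO_BASIC,
                               H5P_DEFAULT);

          /* Convert the token to a printable string; free with H5free_memory */
          H5Otoken_to_str(file_id, &oinfo.token, &token_str);
          printf("object token: %s\n", token_str);
          H5free_memory(token_str);

          /* Re-open the object from its token (replaces H5Oopen_by_addr) */
          obj_id = H5Oopen_by_token(file_id, oinfo.token);
          H5Oclose(obj_id);
      }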
- Add new public function H5Sselect_adjust.

  This function shifts a dataspace selection by a specified logical offset
  within the dataspace extent. This can be useful for VOL developers to
  implement chunked datasets.

  (NAF - 2019/11/18)

- Add new public function H5Sselect_project_intersection.

  This function computes the intersection between two dataspace selections and
  projects that intersection into a third selection. This can be useful for
  VOL developers to implement chunked or virtual datasets.

  (NAF - 2019/11/13, ID-148)

- Add new public function H5VLget_file_type.

  This function returns a datatype equivalent to the supplied datatype but
  with the location set to be in the file. This datatype can then be used with
  H5Tconvert to convert data between file and in-memory representations. This
  function is intended for use only by VOL connector developers.

  (NAF - 2019/11/08, ID-127)

- Add S3 and HDFS VFDs to HDF5 maintenance

  Fixed Windows requirements and Java tests. Windows requires CMake 3.13.

  Install the OpenSSL library (with dev files) from "Shining Light
  Productions"; the msi package is preferred.
      - PATH should have been updated with the installation dir.
      - Set the ENV variable OPENSSL_ROOT_DIR to the installation dir.
      - Set the ENV variable OPENSSL_CONF to the cfg file, likely
        %OPENSSL_ROOT_DIR%\bin\openssl.cfg

  Install the libcurl library (with dev files); download the latest released
  version using git: https://github.com/curl/curl.git
      - Open a Visual Studio Command Prompt and change to the libcurl root
        folder.
      - Run the "buildconf.bat" batch file.
      - Change to the winbuild directory.
      - Run: nmake /f Makefile.vc mode=dll MACHINE=x64
      - Copy the libcurl-vc-x64-release-dll-ipv6-sspi-winssl dir to C:\curl
        (installation dir).
      - Set the ENV variable CURL_ROOT to C:\curl (installation dir).
      - Update the PATH ENV variable with %CURL_ROOT%\bin (installation bin
        dir).

  The AWS credentials file should be in the %USERPROFILE%\.aws folder. Set the
  ENV variable
  "HDF5_ROS3_TEST_BUCKET_URL=https://s3.us-east-2.amazonaws.com/hdf5ros3".

  (ADB - 2019/09/12, HDFFV-10854)

- Added new chunk query functions

  The following public functions were added to discover information about the
  chunks in an HDF5 file:

      herr_t H5Dget_num_chunks(dset_id, fspace_id, *nchunks)
      herr_t H5Dget_chunk_info_by_coord(dset_id, *coord, *filter_mask, *addr,
                                        *size)
      herr_t H5Dget_chunk_info(dset_id, fspace_id, index, *coord,
                               *filter_mask, *addr, *size)

  (BMR - 2019/06/11, HDFFV-10677)
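  As a brief, hedged illustration of these query functions, the sketch below
  walks the allocated chunks of an already-open dataset and prints each
  chunk's logical offset, file address, and size. The identifier dset_id and
  the rank of 2 are hypothetical placeholders, and error checking is omitted.

      #include "hdf5.h"
      #include <stdio.h>

      static void list_chunks(hid_t dset_id)
      {
          hsize_t nchunks = 0;

          /* Number of chunks currently allocated in the file */
          H5Dget_num_chunks(dset_id, H5S_ALL, &nchunks);

          for (hsize_t i = 0; i < nchunks; i++) {
              hsize_t  offset[2];       /* assumes a rank-2 dataset */
              unsigned filter_mask = 0;
              haddr_t  addr        = HADDR_UNDEF;
              hsize_t  size        = 0;

              H5Dget_chunk_info(dset_id, H5S_ALL, i, offset, &filter_mask,
                                &addr, &size);
              printf("chunk %llu at [%llu, %llu], addr %llu, %llu bytes\n",
                     (unsigned long long)i,
                     (unsigned long long)offset[0],
                     (unsigned long long)offset[1],
                     (unsigned long long)addr,
                     (unsigned long long)size);
          }
      }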
- Improved the performance of virtual dataset I/O

  Refactored the internal dataspace routines used by the virtual dataset code
  to improve performance, especially when one of the selections involved is
  very long and non-contiguous.

  (NAF - 2019/05/31, HDFFV-10693)

- Added the ability to open files with UTF-8 file names on Windows.

  The POSIX open(2) API call on Windows is limited to ASCII file names. The
  library has been updated to convert incoming file names to UTF-16 (via
  MultiByteToWideChar(CP_UTF8, ...)) and to use _wopen() instead.

  (DER - 2019/03/15, HDFFV-2714, HDFFV-3914, HDFFV-3895, HDFFV-8237,
  HDFFV-10413, HDFFV-10691)

- Add new API H5M for map objects.

  Currently not supported by the native library; can be supported by VOL
  connectors.

  (NAF - 2019/03/01)

- Add new H5R_ref_t type for object, dataset region and _attribute_
  references.

  This new type will deprecate the current hobj_ref_t and hdset_reg_ref_t
  types for references. Added the H5T_REF datatype to read and write the new
  reference types. As opposed to previous reference types, reference creation
  no longer modifies existing files. The new reference types also support
  references to external files. (A usage sketch appears below, after the
  Fortran Library entries.)

  (JS - 2019/10/08)

- Remove H5I_REFERENCE from the library

  This ID class was never used by the library and has been removed.

  (DER - 2018/12/08, HDFFV-10252)

- Allow pre-generated H5Tinit.c and H5make_libsettings.c to be used.

  Rather than always running H5detect and generating H5Tinit.c and
  H5make_libsettings.c, a location for those files can now be supplied.

  (ADB - 2018/09/18, HDFFV-10332)

- Fix shutdown failure when using H5VLregister_connector_by_name/value

  When using H5VLregister_connector_by_name/value to dynamically load a VOL
  connector plugin, the library could experience segmentation faults when the
  library was closed. This was due to the library unloading the plugin
  interface before the virtual object layer. When the VOL shutdown occurred,
  it would attempt to close the VOL connector; however, this would fail since
  the plugin had already been unloaded.

  (DER - 2020/03/18, HDFFV-11057)

Parallel Library:
-----------------

- Changed the default behavior in parallel when reading the same dataset in
  its entirety (i.e. an H5S_ALL dataset selection) collectively by all the
  processes. The dataset must be contiguous, less than 2GB, and of an atomic
  datatype. With the new behavior, the HDF5 library will use an MPI_Bcast to
  pass the data read from disk by the root process to the remaining processes
  in the MPI communicator associated with the HDF5 file.

  (MSB - 2019/01/02, HDFFV-10652)

Fortran Library:
----------------

- Added a new Fortran derived type, c_h5o_info_t, which is interoperable with
  C's h5o_info_t. This is needed for callback functions which pass C's
  h5o_info_t data type definition.

  (MSB, 2019/01/08, HDFFV-10443)

- Added a new Fortran API, H5gmtime, which converts a (C) 'time_t' structure
  to the Fortran DATE AND TIME storage format.

  (MSB, 2019/01/08, HDFFV-10443)

- Added a new Fortran 'fields' optional parameter to: h5ovisit_f,
  h5oget_info_by_name_f, h5oget_info, h5oget_info_by_idx and
  h5ovisit_by_name_f.

  (MSB, 2019/01/08, HDFFV-10443)
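  As referenced in the H5R_ref_t entry above, the following is a minimal,
  hedged sketch of the new reference workflow: create an object reference,
  re-open the referenced object, and release the reference. The identifier
  file_id and the path "/group/dset" are hypothetical placeholders, error
  checking is omitted, and such references could equally be stored in a
  dataset or attribute of type H5T_STD_REF.

      #include "hdf5.h"

      static void reference_roundtrip(hid_t file_id)
      {
          H5R_ref_t ref;
          hid_t     obj_id;

          /* Create an object reference; the file is not modified */
          H5Rcreate_object(file_id, "/group/dset", H5P_DEFAULT, &ref);

          /* Dereference it to get the object back */
          obj_id = H5Ropen_object(&ref, H5P_DEFAULT, H5P_DEFAULT);
          H5Oclose(obj_id);

          /* References own resources and must be released */
          H5Rdestroy(&ref);
      }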
C++ Library:
------------

- Added new wrappers for H5Pset/get_create_intermediate_group()

      LinkCreatPropList::setCreateIntermediateGroup()
      LinkCreatPropList::getCreateIntermediateGroup()

  (BMR - 2019/04/22, HDFFV-10622)

- Added new wrapper for H5Ovisit2()

      H5Object::visit()

  (BMR - 2019/02/14, HDFFV-10532)

Java Library:
-------------

- Added ability to test the Java library with VOLs.

  Created a new CMake script that combines the Java and VOL test scripts.

  (ADB - 2020/02/03, HDFFV-10996)

- Tests fail for non-English locales.

  In the JUnit tests with a non-English locale, only the part before the
  decimal comma is replaced by XXXX, and this leads to a comparison error.
  Changed the regex for the Time substitution.

  (ADB - 2020/01/09, HDFFV-10995)

- Fix a failure in JUnit-TestH5P on 32-bit architectures

  (JTH - 2019/04/30)

- Duplicate the data read/write functions of Datasets for Attributes.

  Region references could not be displayed for attributes as they could for
  datasets. Datasets had overloaded read and write functions for different
  datatypes that were not available for attributes. After adding similar
  functions, attribute region references work normally.

  (ADB - 2018/12/12, HDFVIEW-4)

- Removed H5I_REFERENCE from the Java wrappers

  This ID class was never used by the library and has been removed from the
  Java wrappers.

  (DER - 2018/12/08, HDFFV-10252)

Tools:
------

- h5repack was fixed to repack reference attributes properly.

  The line of code that checks whether a reference inside a compound datatype
  needs updating was misplaced outside the loop that carries out the check. As
  a consequence, the next attribute, which was not of reference type, was
  repacked as a reference type, causing the repack to fail. The fix moves the
  corresponding line of code into the correct code block.

  (KY - 2020/02/07, HDFFV-11014)

- h5diff was updated to use the new reference APIs.

  h5diff uses the new reference APIs to compare references. Attribute
  references can also be compared.

  (ADB - 2019/12/19, HDFFV-10980)

- h5dump and h5ls were updated to use the new reference APIs.

  The tools library now uses the new reference APIs to inspect a file. The DDL
  spec was also updated to reflect the format changes produced with the new
  APIs. The export API and support functions in the JNI were updated to match.

  (ADB - 2019/12/06, HDFFV-10876 and HDFFV-10877)

- h5repack was fixed to repack datasets with external storage to other types
  of storage.

  A new test was added to repack files and verify the correct data using
  h5diff.

  (JS - 2019/09/25, HDFFV-10408)
  (ADB - 2019/10/02, HDFFV-10918)

- h5dump was fixed for 128-bit floats, but was missing a test.

  The new test greps for the first 15 digits of the 128-bit value.

  (ADB - 2019/06/23, HDFFV-9407)

High-Level APIs:
---------------
-

C Packet Table API
------------------
-

Internal header file
--------------------
-

Documentation
-------------
-

Support for new platforms, languages and compilers.
=======================================
-

Bug Fixes since HDF5-1.10.3 release
==================================

Library
-------

- Improved performance when creating a large number of small datasets by
  retrieving default property values from the API context instead of doing
  skip list searches.

  (CJH - 2019/12/10, HDFFV-10658)

- Fixed user-created data access properties not existing in the property list
  returned by H5Dget_access_plist. Thanks to Steven Varga for submitting a
  reproducer and a patch.

  (CJH - 2019/12/9, HDFFV-10934)
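  To illustrate the H5Dget_access_plist fix above, the hedged sketch below
  sets a chunk-cache property on a dataset access property list and then
  reads it back from the property list returned by H5Dget_access_plist;
  after the fix, the user-set values are expected to be present. The
  identifiers file_id, space_id and dcpl_id (assumed to describe a chunked
  dataset), the dataset name "dset", and the cache numbers are hypothetical
  placeholders, and error checking is omitted.

      #include "hdf5.h"

      static void check_dapl(hid_t file_id, hid_t space_id, hid_t dcpl_id)
      {
          hid_t  dapl_id, dset_id, dapl_out;
          size_t nslots = 0, nbytes = 0;
          double w0     = 0.0;

          /* Create a dataset access plist with a custom chunk cache */
          dapl_id = H5Pcreate(H5P_DATASET_ACCESS);
          H5Pset_chunk_cache(dapl_id, 521, 16 * 1024 * 1024, 0.75);

          dset_id = H5Dcreate2(file_id, "dset", H5T_NATIVE_INT, space_id,
                               H5P_DEFAULT, dcpl_id, dapl_id);

          /* The returned plist should now carry the user-set cache values */
          dapl_out = H5Dget_access_plist(dset_id);
          H5Pget_chunk_cache(dapl_out, &nslots, &nbytes, &w0);

          H5Pclose(dapl_out);
          H5Pclose(dapl_id);
          H5Dclose(dset_id);
      }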
- Fixed an assertion failure in the parallel library when collectively filling
  chunks.

  As it is required that chunks be written in monotonically non-decreasing
  order of offset in the file, this assertion was triggered when the list of
  chunk file space allocations passed to the collective chunk filling routine
  was not sorted according to this requirement. Sorting the out-of-order
  chunks trades a bit of performance for the elimination of this assertion and
  of any complaints from MPI implementations about the file offsets used being
  out of order.

  (JTH - 2019/10/07, HDFFV-10792)

- Fixed the iteration error in test_versionbounds() in test/dtypes.c

  The test was supposed to loop through all valid combinations of low and high
  bounds in the array versions[], but they were always set to
  H5F_LIBVER_EARLIEST and never changed. The problem was fixed by indexing low
  and high into the array versions[].

  (VC - 2019/09/30)

- Fixed the slowness of regular hyperslab selection in a chunked dataset

  It was reported that the selection of every 10th element from a 20G chunked
  dataset was extremely slow and sometimes could hang the system. The problem
  was due to the iteration and the building of the span tree for all the
  selected elements in file space. As the selected elements go to a 1-D
  contiguous single-block memory space, the problem was fixed by building
  regular hyperslab selections in memory space for the selected elements in
  file space.

  (VC - 2019/09/26, HDFFV-10585)

- Fixed a bug caused by a bad tag value when condensing object header messages

  There was an assertion failure when moving messages while running a user
  test program with library release HDF5 1.10.4. This was because the tag
  value (the object header's address) was not set up when entering the library
  routine H5O__chunk_update_idx(), which eventually verifies the metadata tag
  value when protecting the object header. The problem was fixed by replacing
  FUNC_ENTER_PACKAGE in H5O__chunk_update_idx() with
  FUNC_ENTER_PACKAGE_TAG(oh->cache_info.addr) to set up the metadata tag.

  (VC - 2019/08/23, HDFFV-10873)

- Fixed the test failure from test_metadata_read_retry_info() in test/swmr.c

  The test failure was due to an incorrect number of bins returned for the
  retry info (info.nbins). The number of bins expected for 101 read attempts
  is 3 instead of 2. The routine H5F_set_retries() in src/H5Fint.c calculates
  the number of bins by first obtaining the log10 value of
  (read attempts - 1). With PGI/19, the log10 value for 100 read attempts is
  1.9999999999999998 instead of 2.00000. When the log10 value is later cast to
  unsigned, the decimal part is chopped off, causing the test failure. This
  was fixed by obtaining the rounded integer value (HDceil) of the log10 value
  of read attempts before casting the result to unsigned.

  (VC - 2019/8/14, HDFFV-10813)

- Fixed an issue when creating a file with non-default file space info while
  the library high bound is set to H5F_LIBVER_V18.

  When setting non-default file space info in the fcpl via
  H5Pset_file_space_strategy() and then creating a file with both high and low
  library bounds set to H5F_LIBVER_V18 in the fapl, the library succeeded in
  creating the file. File creation should fail because the feature of setting
  non-default file space info does not exist in library release 1.8 or
  earlier. This was fixed by setting and checking the proper version of the
  file space info message based on the library low and high bounds when
  creating and opening the HDF5 file.

  (VC - 2019/6/25, HDFFV-10808)
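  The hedged sketch below shows the combination described in the entry above:
  a non-default file space strategy together with 1.8 library bounds. After
  this fix, the H5Fcreate call is expected to fail. The file name, strategy,
  and threshold are hypothetical placeholders, and error checking is reduced
  to the final check.

      #include "hdf5.h"
      #include <stdio.h>

      static void try_v18_paged_create(void)
      {
          hid_t fcpl_id = H5Pcreate(H5P_FILE_CREATE);
          hid_t fapl_id = H5Pcreate(H5P_FILE_ACCESS);
          hid_t file_id;

          /* Non-default file space strategy (paged aggregation) */
          H5Pset_file_space_strategy(fcpl_id, H5F_FSPACE_STRATEGY_PAGE, 0,
                                     (hsize_t)1);

          /* Restrict the file format to the 1.8 feature set */
          H5Pset_libver_bounds(fapl_id, H5F_LIBVER_V18, H5F_LIBVER_V18);

          /* With the fix, this combination is rejected */
          file_id = H5Fcreate("bounds_v18.h5", H5F_ACC_TRUNC, fcpl_id, fapl_id);
          if (file_id < 0)
              printf("file creation failed, as expected\n");
          else
              H5Fclose(file_id);

          H5Pclose(fapl_id);
          H5Pclose(fcpl_id);
      }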
- When iterating over an old-style group (i.e., when not using the latest file
  format) of size 0, a NULL pointer representing the empty links table would
  be sent to qsort(3) for sorting, which is undefined behavior. Iterating over
  an empty group is explicitly tested in the links test. This has not caused
  any failures to date and was flagged by gcc's -fsanitize=undefined. The
  library no longer attempts to sort an empty array.

  (DER - 2019/06/18, HDFFV-10829)

- Fixed an issue where copying a version 1.8 dataset between files using
  H5Ocopy fails due to an incompatible fill version

  When using the HDF5 1.10.x H5Ocopy() API call to copy a version 1.8 dataset
  to a file created with both high and low library bounds set to
  H5F_LIBVER_V18, the H5Ocopy() call will fail with the error stack indicating
  that the fill value version is out of bounds. This was fixed by changing the
  fill value message version to H5O_FILL_VERSION_3 (from H5O_FILL_VERSION_2)
  for H5F_LIBVER_V18.

  (VC - 2019/6/14, HDFFV-10800)

- Some oversights in the index iterating area of the library caused a callback
  function to continue iterating even though it was supposed to stop. Added a
  check of the returned value to the for-loop conditions in H5EA_iterate(),
  H5FA_iterate(), and H5D__none_idx_iterate(). The iteration now stops when it
  should.

  (BMR - 2019/06/11, HDFFV-10661)

- Fixed a bug that would cause an error or cause fill values to be incorrectly
  read from a chunked dataset using the "single chunk" index if the data was
  held in cache and there was no data on disk.

  (NAF - 2019/03/06)

- Fixed a bug that could cause an error or cause fill values to be incorrectly
  read from a dataset that was written to using H5Dwrite_chunk if the dataset
  was not closed after writing.

  (NAF - 2019/03/06, HDFFV-10716)

- Fixed a memory leak in the scale-offset filter

  In a special case where MinBits is the same as the number of bits in the
  datatype's precision, the filter's data buffer was not freed, causing the
  memory usage to grow. In general the buffer was freed correctly. MinBits is
  the minimum number of bits needed to store the data values. Please see the
  reference manual entry for H5Pset_scaleoffset for details.

  (RL - 2019/3/4, HDFFV-10705)

- Fix hangs with collective metadata reads during chunked dataset I/O

  In the parallel library, it was discovered that when a particular sequence
  of operations following the pattern "write to chunked dataset" -> "flush
  file" -> "read from dataset" occurred with collective metadata reads
  enabled, hangs could be observed due to certain MPI ranks not participating
  in the collective metadata reads. To fix the issue, collective metadata
  reads are now disabled during chunked dataset raw data I/O.

  (JTH - 2019/02/11, HDFFV-10563, HDFFV-10688)

- Performance issue when closing an object

  The slowdown was due to searching the "tag_list" to find the "corked" status
  of an object and "uncork" it if so. Performance was improved by skipping the
  search of the "tag_list" if there are no "corked" objects when closing an
  object.

  (VC - 2019/2/6)

- Fixed a potential invalid memory access and failure that could occur when
  decoding an unknown object header message (from a future version of the
  library).

  (NAF - 2019/01/07)

- Deleting attributes in dense storage

  The library aborts with "infinite loop closing library" after attributes in
  dense storage are created and then deleted.
  When deleting the attribute nodes from the name index v2 B-tree, if an
  attribute is found in the intermediate B-tree nodes, which may be
  merged/redistributed in the process, we need to free the dynamically
  allocated space for the intermediate decoded attribute.

  (VC - 2018/12/26, HDFFV-10659)

- Allow H5detect and H5make_libsettings to take a file as an argument.

  Rather than only writing to stdout, a command argument was added to name the
  file that H5detect and H5make_libsettings will use for output. Without an
  argument, stdout is still used, so backwards compatibility is maintained.

  (ADB - 2018/09/05, HDFFV-9059)

- A bug was discovered in the parallel library where an application would hang
  if a collective read/write of a chunked dataset occurred when collective
  metadata reads were enabled and some of the ranks had no selection in the
  dataset's dataspace. The ranks which had no selection in the dataset's
  dataspace called H5D__chunk_addrmap() to retrieve the lowest chunk address
  in the dataset. This is because we require reads/writes to be performed in
  strictly non-decreasing order of chunk address in the file. When the chunk
  index used was a version 1 or 2 B-tree, these non-participating ranks would
  issue a collective MPI_Bcast() call that the participating ranks would not
  issue, causing the hang. Since the non-participating ranks are not actually
  reading/writing anything, the H5D__chunk_addrmap() call can be safely
  removed and the address used for the read/write can be set to an arbitrary
  number (0 was chosen).

  (JTH - 2018/08/25, HDFFV-10501)

- fcntl(2)-based file locking incorrectly passed the lock argument struct
  instead of a pointer to the struct, causing errors on systems where flock(2)
  is not available.

  File locking is used when files are opened to enforce SWMR semantics. A lock
  operation takes place on all file opens unless the HDF5_USE_FILE_LOCKING
  environment variable is set to the string "FALSE". flock(2) is
  preferentially used, with fcntl(2) locks as a backup if flock(2) is
  unavailable on a system (if neither is available, the lock operation fails).
  On these systems, the file lock will often fail, which causes HDF5 to not
  open the file and report an error. This bug only affects POSIX systems.
  Win32 builds on Windows use a no-op locking call which always succeeds.
  Systems which exhibit this bug will have H5_HAVE_FCNTL defined but not
  H5_HAVE_FLOCK in the configure output. This bug affects HDF5 1.10.0 through
  1.10.5.

  fcntl(2)-based file locking now correctly passes the struct pointer.

  (DER - 2019/08/27, HDFFV-10892)

- Torn pread/pwrite I/O would result in read and write corruption.

  In the sec2, log, and core (with backing store) virtual file drivers (VFDs),
  the read and write calls incorrectly reset the offset parameter on torn
  pread and pwrite operations (i.e., I/O operations which fail to be written
  atomically by the OS). For this bug to occur, pread/pwrite have to be
  configured (this is the default if they are present on the system) and the
  pread/pwrite operation has to fail to transfer all the bytes, resulting in
  multiple pread/pwrite calls. This feature was initially enabled in HDF5
  1.10.5, so the bug is limited to that version.

  (DER - 2019/12/09, HDFFV-10945)

Java Library:
-------------

- JNI native library dependencies

  The build for the hdf5_java native library used the wrong hdf5 target
  library for CMake builds. Correcting the hdf5_java library to build with the
  shared hdf5 library also required changes to the testing paths.
  (ADB - 2018/08/31, HDFFV-10568)

- Java iterator callbacks

  Changed the global callback object to a small stack structure in order to
  fix a runtime crash. This crash was discovered when iterating through a file
  with nested group members. The global variable visit_callback is overwritten
  when recursion starts. When recursion completes, visit_callback will be
  pointing to the wrong callback method.

  (ADB - 2018/08/15, HDFFV-10536)

- Java HDFLibraryException class

  Changed the parent class from Exception to RuntimeException.

  (ADB - 2018/07/30, HDFFV-10534)

- JNI Read and Write

  Refactored the variable-length functions, H5DreadVL and H5AreadVL, to
  correct dataset and attribute reads. New write functions, H5DwriteVL and
  H5AwriteVL, are under construction.

  (ADB - 2018/06/02, HDFFV-10519)

Configuration
-------------

- Correct option for default API version

  The CMake options for the default API version are not mutually exclusive.
  Changed the multiple BOOL options to a single STRING option with the
  strings: v16, v18, v110, v112.

  (ADB - 2019/08/12, HDFFV-10879)

Performance
-------------
-

Fortran
--------

- Added symbolic links libhdf5_hl_fortran.so to libhdf5hl_fortran.so and
  libhdf5_hl_fortran.a to libhdf5hl_fortran.a in the hdf5/lib directory for
  autotools installs. These were added to match the names of the files
  installed by CMake and the general pattern of hl lib files. We will change
  the names of the installed lib files to the matching name in the next major
  release.

  (LRK - 2019/01/04, HDFFV-10596)

- Made Fortran-specific subroutines PRIVATE in generic procedures.

  Affected generic procedures were functions in H5A, H5D, H5P, H5R and H5T.

  (MSB, 2018/12/04, HDFFV-10511)

- Fixed issue with Fortran not returning the h5o_info_t field values
  meta_size%attr%index_size and meta_size%attr%heap_size.

  (MSB, 2018/1/8, HDFFV-10443)

- Corrected INTERFACE INTENT(IN) to INTENT(OUT) for buf_size in
  h5fget_file_image_f.

  (MSB - 2020/2/18, HDFFV-11029)

Tools
-----
-

High-Level APIs:
------
-

Fortran High-Level APIs:
------
-

Documentation
-------------
-

F90 APIs
--------
-

C++ APIs
--------
-

Testing
-------

- Fixed a test failure in testpar/t_dset.c caused by the test trying to use
  the parallel filters feature on MPI-2 implementations.

  (JTH, 2019/2/7)

Bug Fixes since HDF5-1.10.2 release
==================================

Library
-------

- Java HDF5LibraryException class

  The error minor and major values would be lost after the constructor
  executed. Created two local class variables to hold the values obtained
  during execution of the constructor. Refactored the class functions to
  retrieve the class values rather than calling the native functions. The
  native functions were renamed and are called only during execution of the
  constructor. Added error checking to the calling class constructors in JNI
  classes.

  (ADB - 2018/08/06, HDFFV-10544)

- Added checks of the defined MPI_VERSION to guard against usage of MPI-3
  functions in the Parallel Compression and "big Parallel I/O" features when
  HDF5 is built with MPI-2. Previously, the configure step would pass but the
  build itself would fail when it could not locate the MPI-3 functions used.
  As a result of these new checks, HDF5 can again be built with MPI-2, but the
  Parallel Compression feature will be disabled as it relies on the MPI-3
  functions used.
  (JTH - 2018/08/02, HDFFV-10512)

- User's patches: CVEs

  The following patches have been applied:

      CVE-2018-11202 - A NULL pointer dereference was discovered in
                       H5S_hyper_make_spans in H5Shyper.c (HDFFV-10476)
                       https://security-tracker.debian.org/tracker/CVE-2018-11202
                       https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11202

      CVE-2018-11203 - A division by zero was discovered in
                       H5D__btree_decode_key in H5Dbtree.c (HDFFV-10477)
                       https://security-tracker.debian.org/tracker/CVE-2018-11203
                       https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11203

      CVE-2018-11204 - A NULL pointer dereference was discovered in
                       H5O__chunk_deserialize in H5Ocache.c (HDFFV-10478)
                       https://security-tracker.debian.org/tracker/CVE-2018-11204
                       https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11204

      CVE-2018-11206 - An out-of-bounds read was discovered in
                       H5O_fill_new_decode and H5O_fill_old_decode in
                       H5Ofill.c (HDFFV-10480)
                       https://security-tracker.debian.org/tracker/CVE-2018-11206
                       https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11206

      CVE-2018-11207 - A division by zero was discovered in H5D__chunk_init
                       in H5Dchunk.c (HDFFV-10481)
                       https://security-tracker.debian.org/tracker/CVE-2018-11207
                       https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11207

  (BMR - 2018/7/22, PR#s: 1134 and 1139, HDFFV-10476, HDFFV-10477,
  HDFFV-10478, HDFFV-10480, HDFFV-10481)

- H5Adelete

  H5Adelete failed when deleting the last "large" attribute that is stored
  densely via a fractal heap/v2 B-tree. After removing the attribute, update
  the ainfo message. If the number of attributes goes to zero, remove the
  message.

  (VC - 2018/07/20, HDFFV-9277)

- A bug was discovered in the parallel library which caused partial parallel
  reads of filtered datasets to return incorrect data. The library used the
  incorrect dataspace for each chunk read, causing the selection used in each
  chunk to be wrong. The bug was not caught during testing because all of the
  current tests which do parallel reads of filtered data read all of the data
  using an H5S_ALL selection. Several tests were added which exercise partial
  parallel reads.

  (JTH - 2018/07/16, HDFFV-10467)

- A bug was discovered in the parallel library which caused parallel writes of
  filtered datasets to trigger an assertion failure in the file free space
  manager. This occurred when the filter used caused chunks to repeatedly
  shrink and grow over the course of several dataset writes. The previous
  chunk information, such as the size of the chunk and the offset in the file,
  was being cached and not updated after each write, causing the next write to
  the chunk to retrieve the incorrect cached information and run into issues
  when reallocating space in the file for the chunk.

  (JTH - 2018/07/16, HDFFV-10509)

- A bug was discovered in the parallel library which caused the
  H5D__mpio_array_gatherv() function to allocate too much memory. When the
  function is called with the 'allgather' parameter set to a non-true value,
  the function will receive data from all MPI ranks and gather it to the
  single rank specified by the 'root' parameter. However, the bug in the
  function caused memory for the received data to be allocated on all MPI
  ranks, not just the singular rank specified as the receiver. In some
  circumstances, this would cause an application to fail due to the large
  amounts of memory being allocated.

  (JTH - 2018/07/16, HDFFV-10467)

- Error checks in h5stat and when decoding messages

  h5stat exited with a seg fault/core dump when errors were encountered in the
  internal library.
  Added error checks and an --enable-error-stack option to h5stat. Added range
  checks when decoding messages: old fill value, old layout and refcount.

  (VC - 2018/07/11, HDFFV-10333)

- If an HDF5 file contains a malformed compound datatype with a suitably large
  offset, the type conversion code can run off the end of the type conversion
  buffer, causing a segmentation fault. This issue was reported to The HDF
  Group as issue CVE-2017-17507.

  NOTE: The HDF5 C library cannot produce such a file. This condition should
  only occur in a corrupt (or deliberately altered) file or a file created by
  third-party software.

  THE HDF GROUP WILL NOT FIX THIS BUG AT THIS TIME

  Fixing this problem would involve updating the publicly visible H5T_conv_t
  function pointer typedef and versioning the API calls which use it. We
  normally only modify the public API during major releases, so this bug will
  not be fixed at this time.

  (DER - 2018/02/26, HDFFV-10356)

- Inappropriate linking with deprecated MPI C++ libraries

  HDF5 did not define *_SKIP_MPICXX in the public headers, so applications
  could inadvertently wind up linking to the deprecated MPI C++ wrappers.
  MPICH_SKIP_MPICXX and OMPI_SKIP_MPICXX have both been defined in H5public.h,
  so this should no longer be an issue. HDF5 makes no use of the deprecated
  MPI C++ wrappers.

  (DER - 2019/09/17, HDFFV-10893)

Configuration
-------------

- Applied patches to address Cygwin build issues

  There were three issues for Cygwin builds:
      - Shared libs were not built.
      - The -std=c99 flag caused a SIG_SETMASK undeclared error.
      - Undefined errors when building test shared libraries.

  Patches to address these issues were received and incorporated in this
  version.

  (LRK - 2018/07/18, HDFFV-10475)

- Moved the location of the gcc attribute.

  The gcc attribute(no_sanitize), named as the macro HDF_NO_UBSAN, was located
  after the function name. Builds with GCC 7 did not indicate any problem, but
  GCC 8 issued errors. Moved the attribute before the function name, as
  required.

  (ADB - 2018/05/22, HDFFV-10473)

- Reworked the Java test suite into individual JUnit tests.

  Testing the whole suite of Java unit tests in a single JUnit run made it
  difficult to determine actual failures when tests would fail. Running each
  file's set of tests individually allows individual failures to be diagnosed
  more easily. A side benefit is that tests for optional components of the
  library can be disabled if not configured.

  (ADB - 2018/05/16, HDFFV-9739)

- Converted the CMake global commands ADD_DEFINITIONS and INCLUDE_DIRECTORIES
  to use target_* type commands. This change modernizes the CMake usage in the
  HDF5 library.

  In addition, there is the intention to convert to generator expressions,
  where possible. The exception is Fortran FLAGS on Windows Visual Studio. The
  HDF macros TARGET_C_PROPERTIES and TARGET_FORTRAN_PROPERTIES have been
  removed with this change in usage. The additional language (C++ and Fortran)
  checks have also been localized to only be checked when that language is
  enabled.
  (ADB - 2018/05/08)

Performance
-------------
-

Fortran
--------
-

Tools
-----
-

High-Level APIs:
------
-

Fortran High-Level APIs:
------
-

Documentation
-------------
-

F90 APIs
--------
-

C++ APIs
--------

- Adding default arguments to existing functions

  Added the following items:
      + Two more property list arguments are added to
        H5Location::createDataSet:
            const DSetAccPropList& dapl = DSetAccPropList::DEFAULT
            const LinkCreatPropList& lcpl = LinkCreatPropList::DEFAULT
      + One more property list argument is added to H5Location::openDataSet:
            const DSetAccPropList& dapl = DSetAccPropList::DEFAULT

  (BMR - 2018/07/21, PR# 1146)

- Improved the C++ documentation

  Replaced the table in the main page of the C++ documentation from mht to htm
  format for portability.

  (BMR - 2018/07/17, PR# 1141)

Testing
-------

- The dt_arith test failed on IBM Power8 and Power9 machines when testing
  conversions from or to long double types, especially when special values
  such as infinity or NAN were involved. In some cases the results differed by
  extremely small amounts from those on other machines, while some other tests
  resulted in segmentation faults. These conversion tests with long double
  types have been disabled for ppc64 machines until the problems are better
  understood and can be properly addressed.

  (SRL - 2019/01/07, TRILAB-98)

Supported Platforms
===================

    Linux 2.6.32-696.16.1.el6.ppc64   gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-18)
    #1 SMP ppc64 GNU/Linux            g++ (GCC) 4.4.7 20120313 (Red Hat 4.4.7-18)
    (ostrich)                         GNU Fortran (GCC) 4.4.7 20120313 (Red Hat 4.4.7-18)
                                      IBM XL C/C++ V13.1
                                      IBM XL Fortran V15.1

    Linux 3.10.0-327.10.1.el7         GNU C (gcc), Fortran (gfortran), C++ (g++)
    #1 SMP x86_64 GNU/Linux           compilers:
    (kituo/moohan)                        Version 4.8.5 20150623 (Red Hat 4.8.5-4)
                                          Version 4.9.3, Version 5.2.0
                                      Intel(R) C (icc), C++ (icpc), Fortran (icc)
                                      compilers:
                                          Version 17.0.0.098 Build 20160721
                                      MPICH 3.1.4 compiled with GCC 4.9.3

    SunOS 5.11 32- and 64-bit         Sun C 5.12 SunOS_sparc
    (emu)                             Sun Fortran 95 8.6 SunOS_sparc
                                      Sun C++ 5.12 SunOS_sparc

    Windows 7 x64                     Visual Studio 2015 w/ Intel C, Fortran 2018 (cmake)
                                      Visual Studio 2015 w/ MSMPI 10 (cmake)

    Windows 10 x64                    Visual Studio 2015 w/ Intel Fortran 18 (cmake)
                                      Visual Studio 2017 w/ Intel Fortran 19 (cmake)
                                      Visual Studio 2019 w/ Intel Fortran 19 (cmake)

    Mac OS X Yosemite 10.10.5         Apple clang/clang++ version 6.1 from Xcode 7.0
    64-bit                            gfortran GNU Fortran (GCC) 4.9.2
    (osx1010dev/osx1010test)          Intel icc/icpc/ifort version 15.0.3

    Mac OS X El Capitan 10.11.6       Apple clang/clang++ version 7.3.0 from Xcode 7.3
    64-bit                            gfortran GNU Fortran (GCC) 5.2.0
    (osx1011dev/osx1011test)          Intel icc/icpc/ifort version 16.0.2

    Mac OS Sierra 10.12.6             Apple LLVM version 8.1.0 (clang/clang++-802.0.42)
    64-bit                            gfortran GNU Fortran (GCC) 7.1.0
    (swallow/kite)                    Intel icc/icpc/ifort version 17.0.2

Tested Configuration Features Summary
=====================================

    In the tables below
        y       = tested
        n       = not tested in this release
        C       = Cluster
        W       = Workstation
        x       = not working in this release
        dna     = does not apply
        ( )     = footnote appears below second table
        <blank> = testing incomplete on this feature or platform

    Platform                              C        F90/   F90      C++  zlib SZIP
                                          parallel F2003  parallel
    Solaris2.11 32-bit                    n        y/y    n        y    y    y
    Solaris2.11 64-bit                    n        y/n    n        y    y    y
    Windows 7                             y        y/y    n        y    y    y
    Windows 7 x64                         y        y/y    y        y    y    y
    Windows 7 Cygwin                      n        y/n    n        y    y    y
    Windows 7 x64 Cygwin                  n        y/n    n        y    y    y
    Windows 10                            y        y/y    n        y    y    y
    Windows 10 x64                        y        y/y    n        y    y    y
    Mac OS X Mountain Lion 10.8.5 64-bit  n        y/y    n        y    y    y
    Mac OS X Mavericks 10.9.5 64-bit      n        y/y    n        y    y    ?
    Mac OS X Yosemite 10.10.5 64-bit      n        y/y    n        y    y    ?
    Mac OS X El Capitan 10.11.6 64-bit    n        y/y    n        y    y    ?
    CentOS 6.7 Linux 2.6.18 x86_64 GNU    n        y/y    n        y    y    y
    CentOS 6.7 Linux 2.6.18 x86_64 Intel  n        y/y    n        y    y    y
    CentOS 6.7 Linux 2.6.32 x86_64 PGI    n        y/y    n        y    y    y
    CentOS 7.2 Linux 2.6.32 x86_64 GNU    y        y/y    y        y    y    y
    CentOS 7.2 Linux 2.6.32 x86_64 Intel  n        y/y    n        y    y    y
    Linux 2.6.32-573.18.1.el6.ppc64       n        y/n    n        y    y    y

    Platform                              Shared  Shared    Shared    Thread-
                                          C libs  F90 libs  C++ libs  safe
    Solaris2.11 32-bit                    y       y         y         y
    Solaris2.11 64-bit                    y       y         y         y
    Windows 7                             y       y         y         y
    Windows 7 x64                         y       y         y         y
    Windows 7 Cygwin                      n       n         n         y
    Windows 7 x64 Cygwin                  n       n         n         y
    Windows 10                            y       y         y         y
    Windows 10 x64                        y       y         y         y
    Mac OS X Mountain Lion 10.8.5 64-bit  y       n         y         y
    Mac OS X Mavericks 10.9.5 64-bit      y       n         y         y
    Mac OS X Yosemite 10.10.5 64-bit      y       n         y         y
    Mac OS X El Capitan 10.11.6 64-bit    y       n         y         y
    CentOS 6.7 Linux 2.6.18 x86_64 GNU    y       y         y         y
    CentOS 6.7 Linux 2.6.18 x86_64 Intel  y       y         y         n
    CentOS 6.7 Linux 2.6.32 x86_64 PGI    y       y         y         n
    CentOS 7.2 Linux 2.6.32 x86_64 GNU    y       y         y         n
    CentOS 7.2 Linux 2.6.32 x86_64 Intel  y       y         y         n
    Linux 2.6.32-573.18.1.el6.ppc64       y       y         y         n

    Compiler versions for each platform are listed in the preceding
    "Supported Platforms" table.

More Tested Platforms
=====================

The following platforms are not supported but have been tested for this
release.

    Linux 2.6.32-573.22.1.el6         GNU C (gcc), Fortran (gfortran), C++ (g++)
    #1 SMP x86_64 GNU/Linux           compilers:
    (mayll/platypus)                      Version 4.4.7 20120313
                                          Version 4.9.3, 5.3.0, 6.2.0
                                      PGI C, Fortran, C++ for 64-bit target on
                                      x86-64;
                                          Version 17.10-0
                                      Intel(R) C (icc), C++ (icpc), Fortran (icc)
                                      compilers:
                                          Version 17.0.4.196 Build 20170411
                                      MPICH 3.1.4 compiled with GCC 4.9.3

    Linux 3.10.0-327.18.2.el7         GNU C (gcc) and C++ (g++) compilers
    #1 SMP x86_64 GNU/Linux               Version 4.8.5 20150623 (Red Hat 4.8.5-4)
    (jelly)                               with NAG Fortran Compiler Release 6.1(Tozai)
                                      GCC Version 7.1.0
                                      OpenMPI 3.0.0-GCC-7.2.0-2.29
                                      Intel(R) C (icc) and C++ (icpc) compilers
                                          Version 17.0.0.098 Build 20160721
                                          with NAG Fortran Compiler Release 6.1(Tozai)

    Linux 3.10.0-327.10.1.el7         MPICH 3.2 compiled with GCC 5.3.0
    #1 SMP x86_64 GNU/Linux
    (moohan)

    Linux 2.6.32-573.18.1.el6.ppc64   MPICH 3.1.4 compiled with
    #1 SMP ppc64 GNU/Linux            IBM XL C/C++ for Linux, V13.1
    (ostrich)                         and IBM XL Fortran for Linux, V15.1

    Fedora30 5.3.11-200.fc30.x86_64   GNU gcc (GCC) 9.2.1 20190827 (Red Hat 9.2.1 20190827)
    #1 SMP x86_64 GNU/Linux           GNU Fortran (GCC) 9.2.1 20190827 (Red Hat 9.2.1 20190827)
                                      (cmake and autotools)

Known Problems
==============

    CMake files do not behave correctly with paths containing spaces. Do not
    use spaces in paths because the required escaping for handling spaces
    results in very complex and fragile build files.
    ADB - 2019/05/07

    At present, metadata cache images may not be generated by parallel
    applications. Parallel applications can read files with metadata cache
    images, but since this is a collective operation, a deadlock is possible
    if one or more processes do not participate.

    Known problems in previous releases can be found in the HISTORY*.txt files
    in the HDF5 source. Please report any new problems found to
    help@hdfgroup.org.

CMake vs. Autotools installations
=================================

While both build systems produce similar results, there are differences. Each
system produces the same set of folders on Linux (only CMake works on standard
Windows): bin, include, lib and share. Autotools places the COPYING and
RELEASE.txt files in the root folder; CMake places them in the share folder.

The bin folder contains the tools and the build scripts.
Additionally, CMake creates dynamic versions of the tools with the suffix
"-shared". Autotools installs one set of tools depending on the
"--enable-shared" configuration option.

    build scripts
    -------------
    Autotools: h5c++, h5cc, h5fc
    CMake: h5c++, h5cc, h5hlc++, h5hlcc

The include folder holds the header files and the Fortran mod files. CMake
places the Fortran mod files into separate shared and static subfolders, while
Autotools places one set of mod files into the include folder. Because CMake
produces a tools library, the header files for tools will appear in the
include folder.

The lib folder contains the library files, and CMake adds the pkgconfig
subfolder with the hdf5*.pc files used by the bin/build scripts created by the
CMake build. CMake separates the C interface code from the Fortran code by
creating C-stub libraries for each Fortran library. In addition, only CMake
installs the tools library. The names of the szip libraries are different
between the build systems.

The share folder will have the most differences because CMake builds include a
number of CMake-specific files for support of CMake's find_package and support
for the HDF5 Examples CMake project.
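  As a quick way to check either type of installation, the hedged sketch below
  is a minimal C program that creates and closes an HDF5 file; the file name
  is a hypothetical placeholder. It can be compiled with the installed compile
  script from the bin folder (for example, the h5cc script mentioned above).

      #include "hdf5.h"

      int main(void)
      {
          /* Create an empty HDF5 file to verify the installation */
          hid_t file_id = H5Fcreate("install_check.h5", H5F_ACC_TRUNC,
                                    H5P_DEFAULT, H5P_DEFAULT);
          if (file_id < 0)
              return 1;
          H5Fclose(file_id);
          return 0;
      }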