commit:    2c9362c53046f39ec14cd7b14404c13427d7abd5
Author:    Larry Knox <lrknox@hdfgroup.org>  2019-10-21 20:50:44 (GMT)
Committer: Larry Knox <lrknox@hdfgroup.org>  2019-10-21 20:50:44 (GMT)
Tree:      01002ef4779e56a1519739bde0dfca4c9a47d3fe /release_docs
Parent:    fcb10fc1bf59f4ebc5db1579a96eb8afc766eaf4
Update RELEASE.txt and add HISTORY-1_10_0-1_12_0.txt file.
Set version to 1.12.0-alpha0 for snapshot release.
Diffstat (limited to 'release_docs')
-rw-r--r--  release_docs/RELEASE.txt  1082
1 file changed, 129 insertions(+), 953 deletions(-)
diff --git a/release_docs/RELEASE.txt b/release_docs/RELEASE.txt
index 01b7d6d..ecb2ef3 100644
--- a/release_docs/RELEASE.txt
+++ b/release_docs/RELEASE.txt
@@ -1,16 +1,15 @@
-HDF5 version 1.12.0 currently under development
+HDF5 version 1.12.0-alpha0 currently under development
================================================================================
INTRODUCTION
-This document describes the differences between this release and the previous
-HDF5 release. It contains information on the platforms tested and known
-problems in this release. For more details check the HISTORY*.txt files in the
-HDF5 source.
+This document describes the new features introduced in the HDF5 1.12.0 release.
+It contains information on the platforms tested and known problems in this
+release. For more details check the HISTORY*.txt files in the HDF5 source.
-Note that documentation in the links below will be updated at the time of each
-final release.
+Note that documentation in the links below will be updated at the time of the
+release.
Links to HDF5 documentation can be found on The HDF5 web page:
@@ -20,10 +19,9 @@ The official HDF5 releases can be obtained from:
https://www.hdfgroup.org/downloads/hdf5/
-Changes from Release to Release and New Features in the HDF5-1.10.x release series
-can be found at:
+More information about the new features can be found at:
- https://portal.hdfgroup.org/display/HDF5/HDF5+Application+Developer%27s+Guide
+ https://portal.hdfgroup.org/display/HDF5/New+Features+in+HDF5+Release+1.12
If you have any questions or comments, please send them to the HDF Help Desk:
@@ -34,8 +32,7 @@ CONTENTS
- New Features
- Support for new platforms and languages
-- Bug Fixes since HDF5-1.10.3
-- Bug Fixes since HDF5-1.10.2
+- Major Bug Fixes since HDF5-1.10.0
- Supported Platforms
- Tested Configuration Features Summary
- More Tested Platforms
@@ -46,928 +43,102 @@ CONTENTS
New Features
============
- Configuration:
- -------------
- - Update CMake tests to use FIXTURES
-
- CMake test fixtures allow setup/cleanup tests and other dependency
- requirements as properties for tests. This is more flexible for
- modern CMake code.
-
- (ADB - 2019/07/23, HDFFV-10529)
-
- - Windows PDB files are always installed
-
- There are build configuration or flag settings for Windows that may not
- generate PDB files. If those files are not generated then the install
- utility will fail because those PDB files are not found. An optional
- variable, DISABLE_PDB_FILES, was added to not install PDB files.
-
- (ADB - 2019/07/17, HDFFV-10424)
-
- - Add mingw CMake support with a toolchain file
-
- A number of MinGW issues have been tracked under HDFFV-10845. It was decided
- to implement the CMake cross-compiling technique of toolchain files. A Linux
- platform with the MinGW compiler stack is used for testing. Only the C
- language is fully supported, and the error tests are skipped. The C++
- language works for static builds, but shared builds have an issue with the
- MinGW Standard Exception Handling library, which is not available on Windows.
- Fortran has a common cross-compile problem with the Fortran configure tests.
-
- (ADB - 2019/07/12, HDFFV-10845, HDFFV-10595)
-
- - Windows PDB files are installed incorrectly
-
- For static builds, the PDB files for windows should be installed next
- to the static libraries in the lib folder. Also the debug versions of
- libraries and PDB files are now correctly built using the default
- CMAKE_DEBUG_POSTFIX setting.
-
- (ADB - 2019/07/09, HDFFV-10581)
-
- - Add option to build only shared libs
-
- A request was made to prevent building static libraries and only build
- shared. A new option was added to CMake, ONLY_SHARED_LIBS, which will
- skip building static libraries. Certain utility functions will build with
- static libs but are not published. Tests are adjusted to use the correct
- libraries depending on SHARED/STATIC settings.
-
- (ADB - 2019/06/12, HDFFV-10805)
-
- - Add options to enable or disable building tools and tests
-
- Configure options --enable-tests and --enable-tools were added for
- autotools configure. These options are enabled by default, and can be
- disabled with either --disable-tests (or tools) or --enable-tests=no
- (or --enable-tools=no). Build time is reduced ~20% when tools are
- disabled, 35% when tests are disabled, 45% when both are disabled.
- Reenabling them after the initial build requires running configure
- again with the option(s) enabled.
-
- (LRK - 2019/06/12, HDFFV-9976)
-
- - Change tools tests that test the error stack
-
- There are some use cases which can cause the error stack of tools to be
- different from the expected output. These tests now use grepTest.cmake,
- which was changed to allow the error file to be searched for an expected string.
-
- (ADB - 2019/04/15, HDFFV-10741)
-
- - Keep stderr and stdout separate in tests
-
- Changed test handling of output capture. Tests now keep the stderr
- output separate from the stdout output. It is up to the test to decide
- which output to check against a reference. Also added the option
- to grep for a string in either output.
-
- (ADB - 2018/12/12, HDFFV-10632)
-
- - Add toolchain and cross-compile support
-
- Added info on using a toolchain file to INSTALL_CMAKE.txt. A
- toolchain file is also used in cross-compiling, which requires
- CMAKE_CROSSCOMPILING_EMULATOR to be set. To help with cross-compiling
- the fortran configure process, the HDF5UseFortran.cmake file macros
- were improved. Fixed a Fortran configure file issue that incorrectly
- used #cmakedefine instead of #define.
-
- (ADB - 2018/10/04, HDFFV-10594)
-
- - Add warning flags for Intel compilers
-
- Identified Intel-compiler-specific warning flags that should be used
- instead of the GNU flags.
-
- (ADB - 2018/10/04, TRILABS-21)
-
- - Add default rpath to targets
-
- Default rpaths should be set in shared executables and
- libraries to allow dependent libraries to be loaded
- without requiring LD_LIBRARY_PATH to be set. The default
- path should be relative, using @rpath on macOS and $ORIGIN
- on Linux. Windows is not affected.
-
- (ADB - 2018/09/26, HDFFV-10594)
-
- - Add missing USE_110_API_DEFAULT option.
-
- Option USE_110_API_DEFAULT sets the default version of
- versioned APIs. The bin/makevers perl script did not set
- the maxidx variable correctly when the 1.10 branch was
- created. This caused the versioning process to always use
- the latest version of any API.
-
- (ADB - 2018/08/17, HDFFV-10552)
-
- - Added configuration checks for the following MPI functions:
-
- MPI_Mprobe - Used for the Parallel Compression feature
- MPI_Imrecv - Used for the Parallel Compression feature
-
- MPI_Get_elements_x - Used for the "big Parallel I/O" feature
- MPI_Type_size_x - Used for the "big Parallel I/O" feature
-
- (JTH - 2018/08/02, HDFFV-10512)
-
- - Added section to the libhdf5.settings file to indicate
- the status of the Parallel Compression and "big Parallel I/O"
- features.
-
- (JTH - 2018/08/02, HDFFV-10512)
-
- - Add option to execute swmr shell scripts from CMake.
-
- Option TEST_SHELL_SCRIPTS redirects processing into a
- separate ShellTests.cmake file for UNIX types. The tests
- execute the shell scripts if a SH program is found.
-
- (ADB - 2018/07/16)
-
-
Library:
--------
- - Add S3 and HDFS VFDs to HDF5 maintenance
+ - Virtual Object Layer (VOL)
+
+ This major HDF5 release introduces the HDF5 Virtual Object Layer (VOL).
+ The VOL is an abstraction layer within the HDF5 library that enables different
+ methods for accessing data and objects that conform to the HDF5 data model.
+ The VOL layer intercepts all HDF5 API calls that potentially modify data
+ on disk and forwards those calls to a plugin "object driver". The data on
+ disk can be in a format different from the HDF5 format. For more information
+ about the VOL we refer the reader to the following documents (under review):
+
+ VOL HDF5 APIs
+ https://portal.hdfgroup.org/display/HDF5/Virtual+Object++Layer
+
+ VOL Documentation
+ https://bitbucket.hdfgroup.org/projects/HDFFV/repos/hdf5doc/browse/RFCs/HDF5/VOL
+
+ Repository with VOL plugins
+ https://bitbucket.hdfgroup.org/projects/HDF5VOL
+
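      As a rough illustration of how a connector is selected at run time, the
      sketch below registers a connector by name and sets it on a file access
      property list before opening a file; the connector name "my_vol" and the
      file name are placeholders, and error checking is omitted.

          #include "hdf5.h"

          int main(void)
          {
              /* Register a connector by name (assumes a "my_vol" plugin can be
                 found on the VOL plugin path) and set it on a FAPL. */
              hid_t vol_id  = H5VLregister_connector_by_name("my_vol", H5P_DEFAULT);
              hid_t fapl_id = H5Pcreate(H5P_FILE_ACCESS);
              H5Pset_vol(fapl_id, vol_id, NULL);

              /* HDF5 calls on this file are now routed through the connector. */
              hid_t file_id = H5Fopen("example.h5", H5F_ACC_RDONLY, fapl_id);

              H5Fclose(file_id);
              H5Pclose(fapl_id);
              H5VLclose(vol_id);
              return 0;
          }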
+ - Enhancements to HDF5 References
+
+ HDF5 references were extended to support attributes, and object and dataset
+ selections that reside in another HDF5 file. For more information including
+ a list of new APIs, see
+
+ https://portal.hdfgroup.org/display/HDF5/Update+to+References
+
+ Current known limitations for references in this release:
+ • HDF5 command-line tools have not been updated to read the new reference types
+ • When reading data with the H5T_STD_REF type, if data is filled with 0s,
+ H5A/Dread() currently returns an error. This will be fixed in an upcoming
+ release.
+ • h5dump will fail to display references on big-endian machines
+
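      A minimal sketch of the new reference type, assuming an existing file
      "file.h5" containing a dataset "/dset" (both names are placeholders);
      error checking is omitted.

          #include "hdf5.h"

          int main(void)
          {
              hid_t file_id = H5Fopen("file.h5", H5F_ACC_RDWR, H5P_DEFAULT);

              /* Create an object reference to /dset using the new H5R_ref_t type;
                 unlike the old reference types, this does not modify the file. */
              H5R_ref_t ref;
              H5Rcreate_object(file_id, "/dset", H5P_DEFAULT, &ref);

              /* Dereference it again (it could also be stored in an H5T_STD_REF
                 dataset or attribute first). */
              hid_t dset_id = H5Ropen_object(&ref, H5P_DEFAULT, H5P_DEFAULT);

              H5Dclose(dset_id);
              H5Rdestroy(&ref);   /* new references must be released explicitly */
              H5Fclose(file_id);
              return 0;
          }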
+ - New S3 and HDFS Virtual File Drivers (VFDs)
+
+ This release has two new VFDs. The S3 VFD allows access to HDF5 files in
+ AWS S3 buckets. The HDFS VFD allows access to HDF5 files stored on Apache HDFS.
+ See https://portal.hdfgroup.org/display/HDF5/H5P_SET_FAPL_ROS3
+ and https://portal.hdfgroup.org/display/HDF5/H5P_SET_FAPL_HDFS
+ for how to use those APIs.
+ See https://portal.hdfgroup.org/display/HDF5/Virtual+Object+Layer+and+Virtual+File+Drivers
+ for more information about configuring and setting up either the S3 or HDFS VFD.
+ Below are specific instructions on how to enable the S3 VFD on Windows; a usage
+ sketch follows these notes.
Fix windows requirements and java tests. Windows requires CMake 3.13.
- Install openssl library (with dev files);
- from "Shining Light Productions". msi package preferred.
-
- PATH should have been updated with the installation dir.
- set ENV variable OPENSSL_ROOT_DIR to the installation dir.
- set ENV variable OPENSSL_CONF to the cfg file, likely %OPENSSL_ROOT_DIR%\bin\openssl.cfg
- Install libcurl library (with dev files);
- download the latest released version using git: https://github.com/curl/curl.git
-
- Open a Visual Studio Command prompt
- change to the libcurl root folder
- run the "buildconf.bat" batch file
- change to the winbuild directory
- nmake /f Makefile.vc mode=dll MACHINE=x64
- copy libcurl-vc-x64-release-dll-ipv6-sspi-winssl dir to C:\curl (installation dir)
- set ENV variable CURL_ROOT to C:\curl (installation dir)
- update PATH ENV variable to %CURL_ROOT%\bin (installation bin dir).
- the aws credentials file should be in %USERPROFILE%\.aws folder
- set the ENV variable "HDF5_ROS3_TEST_BUCKET_URL=https://s3.us-east-2.amazonaws.com/hdf5ros3"
-
- (ADB - 2019/09/12, HDFFV-10854)
-
- - Added new chunk query functions
-
- The following public functions were added to discover information about
- the chunks in an HDF5 file.
- herr_t H5Dget_num_chunks(dset_id, fspace_id, *nchunks)
- herr_t H5Dget_chunk_info_by_coord(dset_id, *coord, *filter_mask, *addr, *size)
- herr_t H5Dget_chunk_info(dset_id, fspace_id, index, *coord, *filter_mask, *addr, *size)
-
- (BMR - 2019/06/11, HDFFV-10677)
-
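      A rough usage sketch for the chunk query functions listed above, assuming
      an already-open chunked dataset (error checking omitted):

          #include <stdio.h>
          #include "hdf5.h"

          /* Print the file address and size of every chunk of a chunked dataset. */
          static void print_chunks(hid_t dset_id)
          {
              hsize_t nchunks = 0;
              H5Dget_num_chunks(dset_id, H5S_ALL, &nchunks);

              for (hsize_t i = 0; i < nchunks; i++) {
                  hsize_t  offset[H5S_MAX_RANK];
                  unsigned filter_mask;
                  haddr_t  addr;
                  hsize_t  size;
                  H5Dget_chunk_info(dset_id, H5S_ALL, i, offset, &filter_mask,
                                    &addr, &size);
                  printf("chunk %llu: addr %llu, %llu bytes\n",
                         (unsigned long long)i, (unsigned long long)addr,
                         (unsigned long long)size);
              }
          }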
- - Improved the performance of virtual dataset I/O
-
- Refactored the internal dataspace routines used by the virtual dataset
- code to improve performance, especially when one of the selections
- involved is very long and non-contiguous.
-
- (NAF - 2019/05/31, HDFFV-10693)
-
- - Added the ability to open files with UTF-8 file names on Windows.
-
- The POSIX open(2) API call on Windows is limited to ASCII
- file names. The library has been updated to convert incoming file
- names to UTF-16 (via MultiByteToWideChar(CP_UTF8, ...) and use
- _wopen() instead.
-
- (DER - 2019/03/15, HDFFV-2714, HDFFV-3914, HDFFV-3895, HDFFV-8237, HDFFV-10413, HDFFV-10691)
-
- - Add new API H5M for map objects. It is currently not supported by the
- native library, but can be supported by VOL connectors.
-
- (NAF - 2019/03/01)
-
- - Add new H5R_ref_t type for object, dataset region and _attribute_
- references. This new type will deprecate the current hobj_ref_t
- and hdset_reg_ref_t types for references. Added H5T_REF datatype
- to read and write new reference types. As opposed to previous
- reference types, reference creation no longer modifies existing
- files. New reference types also now support references to external
- files.
-
- (JS - 2019/10/08)
-
- - Remove H5I_REFERENCE from the library
-
- This ID class was never used by the library and has been removed.
-
- (DER - 2018/12/08, HDFFV-10252)
-
- - Allow pre-generated H5Tinit.c and H5make_libsettings.c to be used.
-
- Rather than always running H5detect and generating H5Tinit.c and
- H5make_libsettings.c, supply a location for those files.
-
- (ADB - 2018/09/18, HDFFV-10332)
-
-
- Parallel Library:
- -----------------
- - Changed the default behavior in parallel when reading the same dataset in its entirety
- (i.e. an H5S_ALL dataset selection) when it is being read by all the processes collectively.
- The dataset must be contiguous, less than 2GB, and of an atomic datatype.
- The new behavior is that the HDF5 library will use an MPI_Bcast to pass the data read from
- the disk by the root process to the remaining processes in the MPI communicator associated
- with the HDF5 file.
-
- (MSB - 2019/01/02, HDFFV-10652)
+ - Install the OpenSSL library (with dev files)
+ from "Shining Light Productions"; the MSI package is preferred.
+ - PATH should have been updated with the installation dir.
+ - Set the ENV variable OPENSSL_ROOT_DIR to the installation dir.
+ - Set the ENV variable OPENSSL_CONF to the cfg file, likely %OPENSSL_ROOT_DIR%\bin\openssl.cfg
+ - Install the libcurl library (with dev files):
+ - Download the latest released version using git: https://github.com/curl/curl.git
+ - Open a Visual Studio Command prompt
+ - Change to the libcurl root folder
+ - Run the "buildconf.bat" batch file
+ - Change to the winbuild directory
+ - nmake /f Makefile.vc mode=dll MACHINE=x64
+ - Copy the libcurl-vc-x64-release-dll-ipv6-sspi-winssl dir to C:\curl (installation dir)
+ - Set the ENV variable CURL_ROOT to C:\curl (installation dir)
+ - Update the PATH ENV variable with %CURL_ROOT%\bin (installation bin dir).
+ - The AWS credentials file should be in the %USERPROFILE%\.aws folder
+ - Set the ENV variable HDF5_ROS3_TEST_BUCKET_URL to the S3 URL for the
+ S3 bucket containing the HDF5 files to be accessed.
+
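      A minimal sketch of opening a file through the read-only S3 (ros3) VFD,
      assuming the library was configured with the VFD enabled; the bucket URL
      is a placeholder and anonymous access is used (error checking omitted).

          #include <string.h>
          #include "hdf5.h"
          #include "H5FDros3.h"

          int main(void)
          {
              /* Anonymous access; fill in aws_region/secret_id/secret_key and
                 set authenticate to 1 for credentialed access. */
              H5FD_ros3_fapl_t fa;
              memset(&fa, 0, sizeof(fa));
              fa.version      = 1;   /* current H5FD_ros3_fapl_t version */
              fa.authenticate = 0;

              hid_t fapl_id = H5Pcreate(H5P_FILE_ACCESS);
              H5Pset_fapl_ros3(fapl_id, &fa);

              hid_t file_id = H5Fopen("https://mybucket.s3.amazonaws.com/example.h5",
                                      H5F_ACC_RDONLY, fapl_id);

              H5Fclose(file_id);
              H5Pclose(fapl_id);
              return 0;
          }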
+ Other improvements and changes:
+
+ - Hyperslab selection code was reworked to improve performance, achieving more
+ than a 10x speedup in some cases.
+
+ - The HDF5 Library was enhanced to open files with Unicode names on Windows.
+
+ - Deprecated H5Dvlen_reclaim() and replaced it with H5Treclaim().
+ This routine is meant to be used when resources are internally allocated
+ while reading data, i.e. when using either variable-length (vlen) or the
+ new reference types. It is applicable to both attribute and dataset reads.
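      For example, memory allocated by the library during a variable-length read
      would now be released with H5Treclaim() instead of H5Dvlen_reclaim(); a
      rough sketch, assuming the dataset, memory datatype and dataspace are
      already open (error checking omitted):

          #include <stdlib.h>
          #include "hdf5.h"

          /* Read vlen data, use it, then let the library free what it allocated. */
          static void read_and_reclaim(hid_t dset_id, hid_t mem_type_id,
                                       hid_t mem_space_id, size_t nelems)
          {
              hvl_t *buf = (hvl_t *)malloc(nelems * sizeof(hvl_t));

              H5Dread(dset_id, mem_type_id, mem_space_id, H5S_ALL, H5P_DEFAULT, buf);

              /* ... use buf ... */

              /* Replaces the deprecated
                 H5Dvlen_reclaim(mem_type_id, mem_space_id, H5P_DEFAULT, buf). */
              H5Treclaim(mem_type_id, mem_space_id, H5P_DEFAULT, buf);
              free(buf);
          }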
- Fortran Library:
- ----------------
- - Added new Fortran derived type, c_h5o_info_t, which is interoperable with
- C's h5o_info_t. This is needed for callback functions which
- pass C's h5o_info_t data type definition.
-
- (MSB, 2019/01/08, HDFFV-10443)
-
- - Added new Fortran API, H5gmtime, which converts (C) 'time_t' structure
- to Fortran DATE AND TIME storage format.
-
- (MSB, 2019/01/08, HDFFV-10443)
-
- - Added new Fortran 'fields' optional parameter to: h5ovisit_f, h5oget_info_by_name_f,
- h5oget_info, h5oget_info_by_idx and h5ovisit_by_name_f.
-
- (MSB, 2019/01/08, HDFFV-10443)
-
- C++ Library:
- ------------
- - Added new wrappers for H5Pset/get_create_intermediate_group()
- LinkCreatPropList::setCreateIntermediateGroup()
- LinkCreatPropList::getCreateIntermediateGroup()
-
- (BMR - 2019/04/22, HDFFV-10622)
-
- - Added new wrapper for H5Ovisit2()
- H5Object::visit()
-
- (BMR - 2019/02/14, HDFFV-10532)
-
-
- Java Library:
- ----------------
- - Fix a failure in JUnit-TestH5P on 32-bit architectures
-
- (JTH - 2019/04/30)
-
- - Duplicate the data read/write functions of Datasets for Attributes.
-
- Region references could not be displayed for attributes as they could
- for datasets. Datasets had overloaded read and write functions for different
- datatypes that were not available for attributes. After adding similar
- functions, attribute region references work normally.
-
- (ADB - 2018/12/12, HDFVIEW-4)
-
- - Removed H5I_REFERENCE from the Java wrappers
-
- This ID class was never used by the library and has been removed
- from the Java wrappers.
-
- (DER - 2018/12/08, HDFFV-10252)
-
-
- Tools:
- ------
- h5repack was fixed to repack datasets with external storage
to other types of storage.
- New test added to repack files and verify the correct data using h5diff.
-
- (JS - 2019/09/25, HDFFV-10408)
- (ADB - 2019/10/02, HDFFV-10918)
-
- - h5dump was fixed for 128-bit floats, but was missing a test.
-
- New test greps for the first 15 numbers of the 128-bit value.
-
- (ADB - 2019/06/23, HDFFV-9407)
-
-
- High-Level APIs:
- ---------------
- -
-
- C Packet Table API
- ------------------
- -
-
- Internal header file
- --------------------
- -
-
- Documentation
- -------------
- -
Support for new platforms, languages and compilers.
=======================================
- -
-
-Bug Fixes since HDF5-1.10.3 release
-==================================
-
- Library
- -------
- - Fixed the iteration error in test_versionbounds() in test/dtypes.c
-
- The test was supposed to loop through all valid combinations of
- low and high bounds in the array versions[], but both were always
- set to H5F_LIBVER_EARLIEST and never changed.
-
- The problem was fixed by indexing low and high into the array versions[].
-
- (VC - 2019/09/30)
-
- - Fixed the slowness of regular hyperslab selection in a chunked dataset
-
- It was reported that the selection of every 10th element from a 20G
- chunked dataset was extremely slow and sometimes could hang the system.
- The problem was due to the iteration and the building of the span tree
- for all the selected elements in file space.
-
- As the selected elements are going to a 1-d contiguous single block
- memory space, the problem was fixed by building regular hyperslab selections
- in memory space for the selected elements in file space.
-
- (VC - 2019/09/26, HDFFV-10585)
-
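      For reference, a file-space selection of that shape is built roughly as
      below (a 1-D sketch with illustrative sizes):

          #include "hdf5.h"

          /* Select every 10th element of a 1-D file dataspace of nelems elements. */
          static void select_every_tenth(hid_t fspace_id, hsize_t nelems)
          {
              hsize_t start[1]  = {0};
              hsize_t stride[1] = {10};
              hsize_t count[1]  = {nelems / 10};
              hsize_t block[1]  = {1};

              H5Sselect_hyperslab(fspace_id, H5S_SELECT_SET, start, stride, count, block);
          }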
- - Fixed a bug caused by bad tag value when condensing object header
- messages
-
- There was an assertion failure when moving messages while running a
- user test program with library release HDF5 1.10.4. It occurred because
- the tag value (the object header's address) was not set up when entering
- the library routine H5O__chunk_update_idx(), which eventually
- verifies the metadata tag value when protecting the object header.
-
- The problem was fixed by replacing FUNC_ENTER_PACKAGE in H5O__chunk_update_idx()
- with FUNC_ENTER_PACKAGE_TAG(oh->cache_info.addr) to set up the metadata tag.
-
- (VC - 2019/08/23, HDFFV-10873)
-
- - Fixed the test failure from test_metadata_read_retry_info() in
- test/swmr.c
-
- The test failure is due to the incorrect number of bins returned for
- retry info (info.nbins). The # of bins expected for 101 read attempts
- is 3 instead of 2. The routine H5F_set_retries() in src/H5Fint.c
- calculates the # of bins by first obtaining the log10 value for
- (read attempts - 1). For PGI/19, the log10 value for 100 read attempts
- is 1.9999999999999998 instead of 2.00000. When casting the log10 value
- to unsigned later on, the decimal part is chopped off causing the test
- failure.
-
- This was fixed by obtaining the rounded integer value (HDceil) for the
- log10 value of read attempts first before casting the result to unsigned.
-
- (VC - 2019/8/14, HDFFV-10813)
-
- - Fixed an issue when creating a file with non-default file space info
- together with the library high bound set to H5F_LIBVER_V18.
-
- When setting non-default file space info in fcpl via
- H5Pset_file_space_strategy() and then creating a file with
- both high and low library bounds set to
- H5F_LIBVER_V18 in fapl, the library succeeds in creating the file.
- File creation should fail because the feature of setting non-default
- file space info does not exist in library release 1.8 or earlier.
-
- This was fixed by setting and checking the proper version in the
- file space info message based on the library low and high bounds
- when creating and opening the HDF5 file.
-
- (VC - 2019/6/25, HDFFV-10808)
-
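      The failing combination looks roughly like the sketch below (the strategy,
      threshold and file name are illustrative):

          #include "hdf5.h"

          int main(void)
          {
              /* Non-default file space info in the file creation property list... */
              hid_t fcpl_id = H5Pcreate(H5P_FILE_CREATE);
              H5Pset_file_space_strategy(fcpl_id, H5F_FSPACE_STRATEGY_PAGE, 1, (hsize_t)1);

              /* ...combined with 1.8 library bounds in the file access property list. */
              hid_t fapl_id = H5Pcreate(H5P_FILE_ACCESS);
              H5Pset_libver_bounds(fapl_id, H5F_LIBVER_V18, H5F_LIBVER_V18);

              /* With the fix, this create call now fails instead of succeeding. */
              hid_t file_id = H5Fcreate("fsinfo.h5", H5F_ACC_TRUNC, fcpl_id, fapl_id);
              if (file_id >= 0)
                  H5Fclose(file_id);

              H5Pclose(fapl_id);
              H5Pclose(fcpl_id);
              return 0;
          }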
- - When iterating over an old-style group (i.e., when not using the latest
- file format) of size 0, a NULL pointer representing the empty links
- table would be sent to qsort(3) for sorting, which is undefined behavior.
-
- Iterating over an empty group is explicitly tested in the links test.
- This has not caused any failures to date and was flagged by gcc's
- -fsanitize=undefined.
-
- The library no longer attempts to sort an empty array.
-
- (DER - 2019/06/18, HDFFV-10829)
-
- - Fixed an issue where copying a version 1.8 dataset between files using
- H5Ocopy fails due to an incompatible fill version
-
- When using the HDF5 1.10.x H5Ocopy() API call to copy a version 1.8
- dataset to a file created with both high and low library bounds set to
- H5F_LIBVER_V18, the H5Ocopy() call will fail with the error stack indicating
- that the fill value version is out of bounds.
-
- This was fixed by changing the fill value message version to H5O_FILL_VERSION_3
- (from H5O_FILL_VERSION_2) for H5F_LIBVER_V18.
-
- (VC - 2019/6/14, HDFFV-10800)
-
- - Some oversights in the index iteration area of the library caused
- a callback function to continue iterating even though it was supposed
- to stop.
-
- Added the returned value check to the for loop's conditions in
- H5EA_iterate(), H5FA_iterate(), and H5D__none_idx_iterate(). The
- iteration now stops when it should.
-
- (BMR - 2019/06/11, HDFFV-10661)
-
- - Fixed a bug that would cause an error or cause fill values to be
- incorrectly read from a chunked dataset using the "single chunk" index if
- the data was held in cache and there was no data on disk.
-
- (NAF - 2019/03/06)
-
- - Fixed a bug that could cause an error or cause fill values to be
- incorrectly read from a dataset that was written to using H5Dwrite_chunk
- if the dataset was not closed after writing.
-
- (NAF - 2019/03/06, HDFFV-10716)
-
- - Fixed memory leak in scale offset filter
-
- In a special case where MinBits is the same as the number of bits in
- the datatype's precision, the filter's data buffer was not freed, causing
- the memory usage to grow. In general the buffer was freed correctly. MinBits
- is the minimal number of bits needed to store the data values. Please
- see the reference manual entry for H5Pset_scaleoffset for details.
-
- (RL - 2019/3/4, HDFFV-10705)
-
- - Fix hangs with collective metadata reads during chunked dataset I/O
+ - Added spectrum-mpi with clang, gcc and xl compilers on Linux 3.10.0
+ - Added OpenMPI 3.1 and 4.0 with clang, gcc and Intel compilers on Linux 3.10.0
+ - Added cray-mpich/PrgEnv with gcc and Intel compilers on Linux 4.14.180
+ - Added spectrum mpi with clang, gcc and xl compilers on Linux 4.14.0
+
- In the parallel library, it was discovered that when a particular
- sequence of operations following a pattern of:
-
- "write to chunked dataset" -> "flush file" -> "read from dataset"
-
- occurred with collective metadata reads enabled, hangs could be
- observed due to certain MPI ranks not participating in the collective
- metadata reads.
-
- To fix the issue, collective metadata reads are now disabled during
- chunked dataset raw data I/O.
-
- (JTH - 2019/02/11, HDFFV-10563, HDFFV-10688)
-
- - Performance issue when closing an object
-
- The slowdown is due to searching the "tag_list" to find
- out the "corked" status of an object and "uncork" it if so.
-
- Performance was improved by skipping the search of the "tag_list"
- if there are no "corked" objects when closing an object.
-
- (VC - 2019/2/6)
-
- - Fixed a potential invalid memory access and failure that could occur when
- decoding an unknown object header message (from a future version of the
- library).
-
- (NAF - 2019/01/07)
-
- - Deleting attributes in dense storage
-
- The library aborts with "infinite loop closing library" after
- attributes in dense storage are created and then deleted.
-
- When deleting the attribute nodes from the name index v2 B-tree,
- if an attribute is found in the intermediate B-tree nodes,
- which may be merged/redistributed in the process, we need to
- free the dynamically allocated spaces for the intermediate
- decoded attribute.
-
- (VC - 2018/12/26, HDFFV-10659)
-
- - Allow H5detect and H5make_libsettings to take a file as an argument.
-
- Rather than only writing to stdout, add a command argument to name
- the file that H5detect and H5make_libsettings will use for output.
- Without an argument, stdout is still used, so backwards compatibility
- is maintained.
-
- (ADB - 2018/09/05, HDFFV-9059)
-
- - A bug was discovered in the parallel library where an application
- would hang if a collective read/write of a chunked dataset occurred
- when collective metadata reads were enabled and some of the ranks
- had no selection in the dataset's dataspace. The ranks which had no
- selection in the dataset's dataspace called H5D__chunk_addrmap() to
- retrieve the lowest chunk address in the dataset. This is because we
- require reads/writes to be performed in strictly non-decreasing order
- of chunk address in the file.
-
- When the chunk index used was a version 1 or 2 B-tree, these
- non-participating ranks would issue a collective MPI_Bcast() call
- that the participating ranks would not issue, causing the hang. Since
- the non-participating ranks are not actually reading/writing anything,
- the H5D__chunk_addrmap() call can be safely removed and the address used
- for the read/write can be set to an arbitrary number (0 was chosen).
-
- (JTH - 2018/08/25, HDFFV-10501)
-
- - fcntl(2)-based file locking incorrectly passed the lock argument struct
- instead of a pointer to the struct, causing errors on systems where
- flock(2) is not available.
-
- File locking is used when files are opened to enforce SWMR semantics. A
- lock operation takes place on all file opens unless the
- HDF5_USE_FILE_LOCKING environment variable is set to the string "FALSE".
- flock(2) is preferentially used, with fcntl(2) locks as a backup if
- flock(2) is unavailable on a system (if neither is available, the lock
- operation fails). On these systems, the file lock will often fail, which
- causes HDF5 to not open the file and report an error.
-
- This bug only affects POSIX systems. Win32 builds on Windows use a no-op
- locking call which always succeeds. Systems which exhibit this bug will
- have H5_HAVE_FCNTL defined but not H5_HAVE_FLOCK in the configure output.
-
- This bug affects HDF5 1.10.0 through 1.10.5.
-
- fcntl(2)-based file locking now correctly passes the struct pointer.
-
- (DER - 2019/08/27, HDFFV-10892)
-
-
- Java Library:
- ----------------
- - JNI native library dependencies
-
- The build for the hdf5_java native library used the wrong
- hdf5 target library for CMake builds. Correcting the hdf5_java
- library to build with the shared hdf5 library required testing
- paths to change also.
-
- (ADB - 2018/08/31, HDFFV-10568)
- - Java iterator callbacks
-
- Change global callback object to a small stack structure in order
- to fix a runtime crash. This crash was discovered when iterating
- through a file with nested group members. The global variable
- visit_callback is overwritten when recursion starts. When recursion
- completes, visit_callback will be pointing to the wrong callback method.
-
- (ADB - 2018/08/15, HDFFV-10536)
-
- - Java HDFLibraryException class
-
- Change parent class from Exception to RuntimeException.
-
- (ADB - 2018/07/30, HDFFV-10534)
-
- - JNI Read and Write
-
- Refactored variable-length functions, H5DreadVL and H5AreadVL,
- to correct dataset and attribute reads. New write functions,
- H5DwriteVL and H5AwriteVL, are under construction.
-
- (ADB - 2018/06/02, HDFFV-10519)
-
- Configuration
- -------------
- - Correct option for default API version
-
- CMake options for the default API version are not mutually exclusive.
- Changed the multiple BOOL options to a single STRING option with the
- strings: v16, v18, v110, v112.
-
- (ADB - 2019/08/12, HDFFV-10879)
-
- Performance
- -------------
- -
-
- Fortran
- --------
- - Added symbolic links libhdf5_hl_fortran.so to libhdf5hl_fortran.so and
- libhdf5_hl_fortran.a to libhdf5hl_fortran.a in hdf5/lib directory for
- autotools installs. These were added to match the name of the files
- installed by cmake and the general pattern of hl lib files. We will
- change the names of the installed lib files to the matching name in
- the next major release.
-
- (LRK - 2019/01/04, HDFFV-10596)
-
- - Made Fortran specific subroutines PRIVATE in generic procedures.
-
- Affected generic procedures were functions in H5A, H5D, H5P, H5R and H5T.
-
- (MSB, 2018/12/04, HDFFV-10511)
-
- - Fixed issue with Fortran not returning h5o_info_t field values
- meta_size%attr%index_size and meta_size%attr%heap_size.
-
- (MSB, 2018/1/8, HDFFV-10443)
-
-
- Tools
- -----
- -
-
- High-Level APIs:
- ------
- -
-
- Fortran High-Level APIs:
- ------
- -
-
- Documentation
- -------------
- -
-
- F90 APIs
- --------
- -
-
- C++ APIs
- --------
- -
-
- Testing
- -------
- - Fixed a test failure in testpar/t_dset.c caused by
- the test trying to use the parallel filters feature
- on MPI-2 implementations.
-
- (JTH, 2019/2/7)
-
-Bug Fixes since HDF5-1.10.2 release
+Major Bug Fixes since HDF5-1.10.0 release
==================================
- Library
- -------
- - Java HDF5LibraryException class
-
- The error minor and major values would be lost after the
- constructor executed.
-
- Created two local class variables to hold the values obtained during
- execution of the constructor. Refactored the class functions to retrieve
- the class values rather than calling the native functions.
- The native functions were renamed and called only during execution
- of the constructor.
- Added error checking to calling class constructors in JNI classes.
-
- (ADB - 2018/08/06, HDFFV-10544)
-
- - Added checks of the defined MPI_VERSION to guard against usage of
- MPI-3 functions in the Parallel Compression and "big Parallel I/O"
- features when HDF5 is built with MPI-2. Previously, the configure
- step would pass but the build itself would fail when it could not
- locate the MPI-3 functions used.
-
- As a result of these new checks, HDF5 can again be built with MPI-2,
- but the Parallel Compression feature will be disabled as it relies
- on the MPI-3 functions used.
-
- (JTH - 2018/08/02, HDFFV-10512)
-
- - User's patches: CVEs
-
- The following patches have been applied:
-
- CVE-2018-11202 - NULL pointer dereference was discovered in
- H5S_hyper_make_spans in H5Shyper.c (HDFFV-10476)
- https://security-tracker.debian.org/tracker/CVE-2018-11202
- https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11202
+ - For major bug fixes, please see the HISTORY-1_10_0-1_12_0.txt file
- CVE-2018-11203 - A division by zero was discovered in
- H5D__btree_decode_key in H5Dbtree.c (HDFFV-10477)
- https://security-tracker.debian.org/tracker/CVE-2018-11203
- https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11203
-
- CVE-2018-11204 - A NULL pointer dereference was discovered in
- H5O__chunk_deserialize in H5Ocache.c (HDFFV-10478)
- https://security-tracker.debian.org/tracker/CVE-2018-11204
- https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11204
-
- CVE-2018-11206 - An out of bound read was discovered in
- H5O_fill_new_decode and H5O_fill_old_decode in H5Ofill.c
- (HDFFV-10480)
- https://security-tracker.debian.org/tracker/CVE-2018-11206
- https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11206
-
- CVE-2018-11207 - A division by zero was discovered in
- H5D__chunk_init in H5Dchunk.c (HDFFV-10481)
- https://security-tracker.debian.org/tracker/CVE-2018-11207
- https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11207
-
- (BMR - 2018/7/22, PR#s: 1134 and 1139,
- HDFFV-10476, HDFFV-10477, HDFFV-10478, HDFFV-10480, HDFFV-10481)
-
- - H5Adelete
-
- H5Adelete failed when deleting the last "large" attribute that
- is stored densely via fractal heap/v2 b-tree.
-
- After removing the attribute, update the ainfo message. If the
- number of attributes goes to zero, remove the message.
-
- (VC - 2018/07/20, HDFFV-9277)
-
- - A bug was discovered in the parallel library which caused partial
- parallel reads of filtered datasets to return incorrect data. The
- library used the incorrect dataspace for each chunk read, causing
- the selection used in each chunk to be wrong.
-
- The bug was not caught during testing because all of the current
- tests which do parallel reads of filtered data read all of the data
- using an H5S_ALL selection. Several tests were added which exercise
- partial parallel reads.
-
- (JTH - 2018/07/16, HDFFV-10467)
-
- - A bug was discovered in the parallel library which caused parallel
- writes of filtered datasets to trigger an assertion failure in the
- file free space manager.
-
- This occurred when the filter used caused chunks to repeatedly shrink
- and grow over the course of several dataset writes. The previous chunk
- information, such as the size of the chunk and the offset in the file,
- was being cached and not updated after each write, causing the next write
- to the chunk to retrieve the incorrect cached information and run into
- issues when reallocating space in the file for the chunk.
-
- (JTH - 2018/07/16, HDFFV-10509)
-
- - A bug was discovered in the parallel library which caused the
- H5D__mpio_array_gatherv() function to allocate too much memory.
-
- When the function is called with the 'allgather' parameter set
- to a non-true value, the function will receive data from all MPI
- ranks and gather it to the single rank specified by the 'root'
- parameter. However, the bug in the function caused memory for
- the received data to be allocated on all MPI ranks, not just the
- singular rank specified as the receiver. In some circumstances,
- this would cause an application to fail due to the large amounts
- of memory being allocated.
-
- (JTH - 2018/07/16, HDFFV-10467)
-
- - Error checks in h5stat and when decoding messages
-
- h5stat exited with a segmentation fault/core dump when
- errors were encountered in the internal library.
-
- Add error checks and --enable-error-stack option to h5stat.
- Add range checks when decoding messages: old fill value, old
- layout and refcount.
-
- (VC - 2018/07/11, HDFFV-10333)
-
- - If an HDF5 file contains a malformed compound datatype with a
- suitably large offset, the type conversion code can run off
- the end of the type conversion buffer, causing a segmentation
- fault.
-
- This issue was reported to The HDF Group as issue #CVE-2017-17507.
-
- NOTE: The HDF5 C library cannot produce such a file. This condition
- should only occur in a corrupt (or deliberately altered) file
- or a file created by third-party software.
-
- THE HDF GROUP WILL NOT FIX THIS BUG AT THIS TIME
-
- Fixing this problem would involve updating the publicly visible
- H5T_conv_t function pointer typedef and versioning the API calls
- which use it. We normally only modify the public API during
- major releases, so this bug will not be fixed at this time.
-
- (DER - 2018/02/26, HDFFV-10356)
-
- - Inappropriate linking with deprecated MPI C++ libraries
-
- HDF5 does not define *_SKIP_MPICXX in the public headers, so applications
- can inadvertently wind up linking to the deprecated MPI C++ wrappers.
-
- MPICH_SKIP_MPICXX and OMPI_SKIP_MPICXX have both been defined in H5public.h
- so this should no longer be an issue. HDF5 makes no use of the deprecated
- MPI C++ wrappers.
-
- (DER - 2019/09/17, HDFFV-10893)
-
-
-
- Configuration
- -------------
- - Applied patches to address Cygwin build issues
-
- There were three issues for Cygwin builds:
- - Shared libs were not built.
- - The -std=c99 flag caused a SIG_SETMASK undeclared error.
- - Undefined errors when building test shared libraries.
-
- Patches to address these issues were received and incorporated in this version.
-
- (LRK - 2018/07/18, HDFFV-10475)
-
- - Moved the location of gcc attribute.
-
- The gcc attribute(no_sanitize), named as the macro HDF_NO_UBSAN,
- was located after the function name. Builds with GCC 7 did not
- indicate any problem, but GCC 8 issued errors. Moved the
- attribute before the function name, as required.
-
- (ADB - 2018/05/22, HDFFV-10473)
-
- - Reworked java test suite into individual JUnit tests.
-
- Testing the whole suite of java unit tests in a single JUnit run
- made it difficult to determine actual failures when tests would fail.
- Running each file's set of tests individually allows individual failures
- to be diagnosed more easily. A side benefit is that tests for optional components
- of the library can be disabled if not configured.
-
- (ADB - 2018/05/16, HDFFV-9739)
-
- - Converted CMake global commands ADD_DEFINITIONS and INCLUDE_DIRECTORIES
- to use target_* type commands. This change modernizes the CMake usage
- in the HDF5 library.
-
- In addition, there is the intention to convert to generator expressions,
- where possible. The exception is Fortran FLAGS on Windows Visual Studio.
- The HDF macros TARGET_C_PROPERTIES and TARGET_FORTRAN_PROPERTIES have
- been removed with this change in usage.
-
- The additional language (C++ and Fortran) checks have also been localized
- to only be checked when that language is enabled.
-
- (ADB - 2018/05/08)
-
- Performance
- -------------
- -
-
- Fortran
- --------
- -
-
- Tools
- -----
- -
-
- High-Level APIs:
- ------
- -
-
- Fortran High-Level APIs:
- ------
- -
-
- Documentation
- -------------
- -
-
- F90 APIs
- --------
- -
-
- C++ APIs
- --------
- - Adding default arguments to existing functions
-
- Added the following items:
- + Two more property list arguments are added to H5Location::createDataSet:
- const DSetAccPropList& dapl = DSetAccPropList::DEFAULT
- const LinkCreatPropList& lcpl = LinkCreatPropList::DEFAULT
-
- + One more property list argument is added to H5Location::openDataSet:
- const DSetAccPropList& dapl = DSetAccPropList::DEFAULT
-
- (BMR - 2018/07/21, PR# 1146)
-
- - Improved C++ documentation
-
- Replaced the table on the main page of the C++ documentation, converting it from
- mht to htm format for portability.
-
- (BMR - 2018/07/17, PR# 1141)
-
- Testing
- -------
- - The dt_arith test failed on IBM Power8 and Power9 machines when testing
- conversions from or to long double types, especially when special values
- such as infinity or NAN were involved. In some cases the results differed
- by extremely small amounts from those on other machines, while some other
- tests resulted in segmentation faults. These conversion tests with long
- double types have been disabled for ppc64 machines until the problems are
- better understood and can be properly addressed.
-
- (SRL - 2019/01/07, TRILAB-98)
Supported Platforms
===================
@@ -981,11 +152,35 @@ Supported Platforms
Linux 3.10.0-327.10.1.el7 GNU C (gcc), Fortran (gfortran), C++ (g++)
#1 SMP x86_64 GNU/Linux compilers:
(kituo/moohan) Version 4.8.5 20150623 (Red Hat 4.8.5-4)
- Version 4.9.3, Version 5.2.0,
+ Version 4.9.3, 5.2.0, 7.1.0
Intel(R) C (icc), C++ (icpc), Fortran (icc)
compilers:
Version 17.0.0.098 Build 20160721
- MPICH 3.1.4 compiled with GCC 4.9.3
+ MPICH 3.1.4
+
+ Linux-3.10.0- spectrum-mpi/rolling-release with cmake>3.10 and
+ 862.14.4.1chaos.ch6.ppc64le clang/3.9,8.0
+ #1 SMP ppc64le GNU/Linux gcc/7.3
+ (ray) xl/2016,2019
+
+ Linux 3.10.0- openmpi/3.1,4.0 with cmake>3.10 and
+ 957.12.2.1chaos.ch6.x86_64 clang 5.0
+ #1 SMP x86_64 GNU/Linux gcc/7.3,8.2
+ (serrano) intel/17.0,18.0/19.0
+
+ Linux 3.10.0- openmpi/3.1/4.0 with cmake>3.10 and
+ 1062.1.1.1chaos.ch6.x86_64 clang/3.9,5.0,8.0
+ #1 SMP x86_64 GNU/Linux gcc/7.3,8.1,8.2
+ (chama,quartz) intel/16.0,18.0,19.0
+
+ Linux 4.4.180-94.100-default cray-mpich/7.7.6 with PrgEnv-*/6.0.5, cmake>3.10 and
+ #1 SMP x86_64 GNU/Linux gcc/7.2.0,8.2.0
+ (mutrino) intel/17.0,18.0
+
+ Linux 4.14.0- spectrum-mpi/rolling-release with cmake>3.10 and
+ 49.18.1.bl6.ppc64le clang/6.0,8.0
+ #1 SMP ppc64le GNU/Linux gcc/7.3
+ (lassen) xl/2019
SunOS 5.11 32- and 64-bit Sun C 5.12 SunOS_sparc
(emu) Sun Fortran 95 8.6 SunOS_sparc
@@ -993,8 +188,7 @@ Supported Platforms
Windows 7 Visual Studio 2015 w/ Intel Fortran 16 (cmake)
- Windows 7 x64 Visual Studio 2013
- Visual Studio 2015 w/ Intel Fortran 16 (cmake)
+ Windows 7 x64 Visual Studio 2015 w/ Intel Fortran 16 (cmake)
Visual Studio 2015 w/ Intel C, Fortran 2018 (cmake)
Visual Studio 2015 w/ MSMPI 8 (cmake)
@@ -1011,9 +205,13 @@ Supported Platforms
64-bit gfortran GNU Fortran (GCC) 5.2.0
(osx1011dev/osx1011test) Intel icc/icpc/ifort version 16.0.2
- Mac OS Sierra 10.12.6 Apple LLVM version 8.1.0 (clang/clang++-802.0.42)
- 64-bit gfortran GNU Fortran (GCC) 7.1.0
- (swallow/kite) Intel icc/icpc/ifort version 17.0.2
+ Mac OS High Sierra 10.13.6 Apple LLVM version 10.0.0 (clang/clang++-1000.10.44.4)
+ 64-bit gfortran GNU Fortran (GCC) 6.3.0
+ (bear) Intel icc/icpc/ifort version 19.0.4
+
+ Mac OS Mojave 10.14.6 Apple LLVM version 10.0.1 (clang/clang++-1001.0.46.4)
+ 64-bit gfortran GNU Fortran (GCC) 6.3.0
+ (bobcat) Intel icc/icpc/ifort version 19.0.4
Tested Configuration Features Summary
@@ -1100,33 +298,11 @@ The following platforms are not supported but have been tested for this release.
Intel(R) C (icc) and C++ (icpc) compilers
Version 17.0.0.098 Build 20160721
with NAG Fortran Compiler Release 6.1(Tozai)
-
- Linux 3.10.0-327.10.1.el7 MPICH 3.2 compiled with GCC 5.3.0
- #1 SMP x86_64 GNU/Linux
- (moohan)
-
- Linux 2.6.32-573.18.1.el6.ppc64 MPICH mpich 3.1.4 compiled with
- #1 SMP ppc64 GNU/Linux IBM XL C/C++ for Linux, V13.1
- (ostrich) and IBM XL Fortran for Linux, V15.1
-
- Debian 8.4 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1 x86_64 GNU/Linux
- gcc, g++ (Debian 4.9.2-10) 4.9.2
- GNU Fortran (Debian 4.9.2-10) 4.9.2
- (cmake and autotools)
-
- Fedora 24 4.7.2-201.fc24.x86_64 #1 SMP x86_64 x86_64 x86_64 GNU/Linux
- gcc, g++ (GCC) 6.1.1 20160621
- (Red Hat 6.1.1-3)
- GNU Fortran (GCC) 6.1.1 20160621
- (Red Hat 6.1.1-3)
- (cmake and autotools)
-
- Ubuntu 16.04.1 4.4.0-38-generic #57-Ubuntu SMP x86_64 GNU/Linux
- gcc, g++ (Ubuntu 5.4.0-6ubuntu1~16.04.2)
- 5.4.0 20160609
- GNU Fortran (Ubuntu 5.4.0-6ubuntu1~16.04.2)
- 5.4.0 20160609
- (cmake and autotools)
+ PGI C (pgcc), C++ (pgc++), Fortran (pgf90)
+ compilers:
+ Version 18.4, 19.4
+ MPICH 3.3
+ OpenMPI 2.1.5, 3.1.3, 4.0.0
Known Problems