author     Allen Byrne <byrn@hdfgroup.org>    2019-07-23 21:22:45 (GMT)
committer  Allen Byrne <byrn@hdfgroup.org>    2019-07-23 21:22:45 (GMT)
commit     ab0b1a00aff3bee4a3cf0d31614368935c435b6f (patch)
tree       0c71793fb36cded8f5e8d26e30fc7a99e4a49031 /release_docs
parent     c04ed97d4c587d6e7feeef6412e31cf38701b6d4 (diff)
parent     308393a020bd7a812c231eee8130c9365d192e18 (diff)
Merging in latest from upstream (HDFFV/hdf5:refs/heads/hdf5_1_10)
* commit '308393a020bd7a812c231eee8130c9365d192e18':
    Change "bad" hid_t value to H5I_INVALID_HID in test_libver_bounds_copy() test.
    Bring pull request #1729 from develop to 1.10: Fix for HDFFV-10800 H5Ocopy
    failure: The value for the H5F_LIBVER_V18 index in H5O_fill_ver_bounds[], the
    format version bounds array for the fill value message, should be version 3,
    not 2.
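For context, the fix referenced above changes one entry in the fill value
message's format version bounds table. The following is a sketch only (the
exact declaration lives in the library's H5Ofill source and may differ in
names and surrounding code) of the corrected mapping from library bound to
fill value message version:

    /* Format version bounds for the fill value message, indexed by
     * H5F_libver_t -- sketch of the corrected table, not a verbatim copy */
    const unsigned H5O_fill_ver_bounds[] = {
        H5O_FILL_VERSION_1,      /* H5F_LIBVER_EARLIEST */
        H5O_FILL_VERSION_3,      /* H5F_LIBVER_V18: was version 2 before this fix */
        H5O_FILL_VERSION_LATEST  /* H5F_LIBVER_LATEST */
    };

With the H5F_LIBVER_V18 entry at version 2, a version 3 fill value message
carried by a 1.8-format dataset was rejected as out of bounds during H5Ocopy().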
Diffstat (limited to 'release_docs')
-rw-r--r--   release_docs/RELEASE.txt   13
1 file changed, 13 insertions(+), 0 deletions(-)
diff --git a/release_docs/RELEASE.txt b/release_docs/RELEASE.txt
index 5ef9d9f..4f4efe7 100644
--- a/release_docs/RELEASE.txt
+++ b/release_docs/RELEASE.txt
@@ -385,6 +385,19 @@ Bug Fixes since HDF5-1.10.4 release
Library
-------
+ - Fixed an issue where copying a version 1.8 dataset between files using
+ H5Ocopy failed due to an incompatible fill value message version
+
+ When using the HDF5 1.10.x H5Ocopy() API call to copy a version 1.8
+ dataset to a file created with both high and low library bounds set to
+ H5F_LIBVER_V18, the H5Ocopy() call would fail with an error stack indicating
+ that the fill value version was out of bounds.
+
+ This was fixed by changing the fill value message version to H5O_FILL_VERSION_3
+ (from H5O_FILL_VERSION_2) for H5F_LIBVER_V18.
+
+ (VC - 2019/6/14, HDFFV-10800)
+
- Fixed a bug that would cause an error or cause fill values to be
incorrectly read from a chunked dataset using the "single chunk" index if
the data was held in cache and there was no data on disk.
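
To make the H5Ocopy() scenario in the note above concrete, the sketch below
reproduces the failing setup. The file names ("v18_source.h5", "v18_dest.h5")
and dataset name ("dset") are illustrative only, and error checking is omitted
for brevity.

    #include "hdf5.h"

    int main(void)
    {
        /* Destination file restricted to the 1.8 file format on both ends */
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_libver_bounds(fapl, H5F_LIBVER_V18, H5F_LIBVER_V18);

        /* Source file contains a dataset written with the 1.8 format */
        hid_t src = H5Fopen("v18_source.h5", H5F_ACC_RDONLY, H5P_DEFAULT);
        hid_t dst = H5Fcreate("v18_dest.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

        /* Before this fix, the copy failed with an error stack reporting
         * that the fill value message version was out of bounds. */
        herr_t status = H5Ocopy(src, "dset", dst, "dset_copy",
                                H5P_DEFAULT, H5P_DEFAULT);

        H5Fclose(dst);
        H5Fclose(src);
        H5Pclose(fapl);
        return (status < 0) ? 1 : 0;
    }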