author     David Young <dyoung@hdfgroup.org>    2019-12-09 16:30:58 (GMT)
committer  David Young <dyoung@hdfgroup.org>    2019-12-09 16:30:58 (GMT)
commit     c8f533cfc33ac743227cbed8eba361c715a2976f (patch)
tree       bcae5320f80bac774647cacbbd8493604f9384d2 /release_docs
parent     adcf8a315e82c0848d126e7e46b662930c081896 (diff)
Merge all of my changes from merge-back-to-feature-vfd_swmr-attempt-1,
including the merge of `hdffv/hdf5/develop`, back to the branch that Vailin and I share. Now I need to put this branch on a fork with a less confusing name than vchoi_fork!
Diffstat (limited to 'release_docs')
-rw-r--r--  release_docs/HISTORY-1_10.txt          1035
-rw-r--r--  release_docs/INSTALL_CMake.txt           48
-rw-r--r--  release_docs/INSTALL_Cygwin.txt          12
-rw-r--r--  release_docs/INSTALL_parallel            10
-rw-r--r--  release_docs/README_HDF5_CMake           23
-rw-r--r--  release_docs/README_HPC                 206
-rw-r--r--  release_docs/RELEASE.txt                605
-rw-r--r--  release_docs/USING_CMake_Examples.txt    24
8 files changed, 1853 insertions, 110 deletions
diff --git a/release_docs/HISTORY-1_10.txt b/release_docs/HISTORY-1_10.txt
index 9887a54..ad8beb2 100644
--- a/release_docs/HISTORY-1_10.txt
+++ b/release_docs/HISTORY-1_10.txt
@@ -3,6 +3,8 @@ HDF5 History
This file contains development history of the HDF5 1.10 branch
+06. Release Information for hdf5-1.10.4
+05. Release Information for hdf5-1.10.3
04. Release Information for hdf5-1.10.2
03. Release Information for hdf5-1.10.1
02. Release Information for hdf5-1.10.0-patch1
@@ -10,6 +12,1039 @@ This file contains development history of the HDF5 1.10 branch
[Search on the string '%%%%' for section breaks of each release.]
+%%%%1.10.4%%%%
+
+HDF5 version 1.10.4 released on 2018-10-05
+================================================================================
+
+
+INTRODUCTION
+
+This document describes the differences between this release and the previous
+HDF5 release. It contains information on the platforms tested and known
+problems in this release. For more details check the HISTORY*.txt files in the
+HDF5 source.
+
+Note that documentation in the links below will be updated at the time of each
+final release.
+
+Links to HDF5 documentation can be found on The HDF5 web page:
+
+ https://portal.hdfgroup.org/display/HDF5/HDF5
+
+The official HDF5 releases can be obtained from:
+
+ https://www.hdfgroup.org/downloads/hdf5/
+
+Changes from Release to Release and New Features in the HDF5-1.10.x release series
+can be found at:
+
+ https://portal.hdfgroup.org/display/HDF5/HDF5+Application+Developer%27s+Guide
+
+If you have any questions or comments, please send them to the HDF Help Desk:
+
+ help@hdfgroup.org
+
+
+CONTENTS
+
+- New Features
+- Bug Fixes since HDF5-1.10.3
+- Supported Platforms
+- Tested Configuration Features Summary
+- More Tested Platforms
+- Known Problems
+- CMake vs. Autotools installations
+
+
+New Features
+============
+
+ Configuration:
+ -------------
+ - Add toolchain and cross-compile support
+
+ Added info on using a toolchain file to INSTALL_CMAKE.txt. A
+ toolchain file is also used in cross-compiling, which requires
+ CMAKE_CROSSCOMPILING_EMULATOR to be set. To help with cross-compiling
+ the fortran configure process, the HDF5UseFortran.cmake file macros
+ were improved. Fixed a Fortran configure file issue that incorrectly
+ used #cmakedefine instead of #define.
+
+ (ADB - 2018/10/04, HDFFV-10594)
+
+ - Add warning flags for Intel compilers
+
+      Identified Intel-compiler-specific warning flags that should be used
+      instead of the GNU flags.
+
+ (ADB - 2018/10/04, TRILABS-21)
+
+ - Add default rpath to targets
+
+      Default rpaths should be set in shared executables and libraries
+      so that dependent libraries can be loaded without requiring
+      LD_LIBRARY_PATH to be set. The default path should be relative,
+      using @rpath on macOS and $ORIGIN on Linux. Windows is not
+      affected.
+
+ (ADB - 2018/09/26, HDFFV-10594)
+
+ Library:
+ --------
+ - Allow pre-generated H5Tinit.c and H5make_libsettings.c to be used.
+
+ Rather than always running H5detect and generating H5Tinit.c and
+ H5make_libsettings.c, supply a location for those files.
+
+ (ADB - 2018/09/18, HDFFV-10332)
+
+
+Bug Fixes since HDF5-1.10.3 release
+==================================
+
+ Library
+ -------
+ - Allow H5detect and H5make_libsettings to take a file as an argument.
+
+ Rather than only writing to stdout, add a command argument to name
+ the file that H5detect and H5make_libsettings will use for output.
+ Without an argument, stdout is still used, so backwards compatibility
+ is maintained.
+
+ (ADB - 2018/09/05, HDFFV-9059)
+
+ - A bug was discovered in the parallel library where an application
+ would hang if a collective read/write of a chunked dataset occurred
+ when collective metadata reads were enabled and some of the ranks
+ had no selection in the dataset's dataspace. The ranks which had no
+ selection in the dataset's dataspace called H5D__chunk_addrmap() to
+ retrieve the lowest chunk address in the dataset. This is because we
+ require reads/writes to be performed in strictly non-decreasing order
+ of chunk address in the file.
+
+ When the chunk index used was a version 1 or 2 B-tree, these
+ non-participating ranks would issue a collective MPI_Bcast() call
+ that the participating ranks would not issue, causing the hang. Since
+ the non-participating ranks are not actually reading/writing anything,
+ the H5D__chunk_addrmap() call can be safely removed and the address used
+ for the read/write can be set to an arbitrary number (0 was chosen).
+
+ (JTH - 2018/08/25, HDFFV-10501)
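+
+      For context, collective metadata reads and writes are requested through
+      the file access property list; a minimal C sketch (the file name and
+      helper name are illustrative, not taken from the report above):
+
+          #include "hdf5.h"
+          #include <mpi.h>
+
+          /* Open a file with collective metadata reads/writes enabled. */
+          static hid_t open_coll_md(const char *name, MPI_Comm comm)
+          {
+              hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
+              H5Pset_fapl_mpio(fapl, comm, MPI_INFO_NULL);
+              H5Pset_all_coll_metadata_ops(fapl, 1); /* collective metadata reads  */
+              H5Pset_coll_metadata_write(fapl, 1);   /* collective metadata writes */
+              hid_t file = H5Fopen(name, H5F_ACC_RDWR, fapl);
+              H5Pclose(fapl);
+              return file;
+          }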
+
+ Java Library:
+ ----------------
+ - JNI native library dependencies
+
+      The build for the hdf5_java native library used the wrong
+      hdf5 target library for CMake builds. Correcting the hdf5_java
+      library to build with the shared hdf5 library required the
+      testing paths to change as well.
+
+ (ADB - 2018/08/31, HDFFV-10568)
+
+ - Java iterator callbacks
+
+ Change global callback object to a small stack structure in order
+ to fix a runtime crash. This crash was discovered when iterating
+ through a file with nested group members. The global variable
+ visit_callback is overwritten when recursion starts. When recursion
+ completes, visit_callback will be pointing to the wrong callback method.
+
+ (ADB - 2018/08/15, HDFFV-10536)
+
+ - Java HDFLibraryException class
+
+ Change parent class from Exception to RuntimeException.
+
+ (ADB - 2018/07/30, HDFFV-10534)
+
+ - JNI Read and Write
+
+ Refactored variable-length functions, H5DreadVL and H5AreadVL,
+ to correct dataset and attribute reads. New write functions,
+ H5DwriteVL and H5AwriteVL, are under construction.
+
+ (ADB - 2018/06/02, HDFFV-10519)
+
+
+Supported Platforms
+===================
+
+ Linux 2.6.32-696.16.1.el6.ppc64 gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-18)
+ #1 SMP ppc64 GNU/Linux g++ (GCC) 4.4.7 20120313 (Red Hat 4.4.7-18)
+ (ostrich) GNU Fortran (GCC) 4.4.7 20120313 (Red Hat 4.4.7-18)
+ IBM XL C/C++ V13.1
+ IBM XL Fortran V15.1
+
+ Linux 3.10.0-327.10.1.el7 GNU C (gcc), Fortran (gfortran), C++ (g++)
+ #1 SMP x86_64 GNU/Linux compilers:
+ (kituo/moohan) Version 4.8.5 20150623 (Red Hat 4.8.5-4)
+ Version 4.9.3, Version 5.2.0
+ Intel(R) C (icc), C++ (icpc), Fortran (icc)
+ compilers:
+ Version 17.0.0.098 Build 20160721
+ MPICH 3.1.4 compiled with GCC 4.9.3
+
+ SunOS 5.11 32- and 64-bit Sun C 5.12 SunOS_sparc
+ (emu) Sun Fortran 95 8.6 SunOS_sparc
+ Sun C++ 5.12 SunOS_sparc
+
+ Windows 7 Visual Studio 2015 w/ Intel Fortran 16 (cmake)
+
+ Windows 7 x64 Visual Studio 2012 w/ Intel Fortran 15 (cmake)
+ Visual Studio 2013 w/ Intel Fortran 15 (cmake)
+ Visual Studio 2015 w/ Intel Fortran 16 (cmake)
+ Visual Studio 2015 w/ Intel C, Fortran 2017 (cmake)
+ Visual Studio 2015 w/ MSMPI 8 (cmake)
+
+ Windows 10 Visual Studio 2015 w/ Intel Fortran 18 (cmake)
+
+ Windows 10 x64 Visual Studio 2015 w/ Intel Fortran 18 (cmake)
+ Visual Studio 2017 w/ Intel Fortran 18 (cmake)
+
+ Mac OS X Yosemite 10.10.5 Apple clang/clang++ version 6.1 from Xcode 7.0
+ 64-bit gfortran GNU Fortran (GCC) 4.9.2
+ (osx1010dev/osx1010test) Intel icc/icpc/ifort version 15.0.3
+
+ Mac OS X El Capitan 10.11.6 Apple clang/clang++ version 7.3.0 from Xcode 7.3
+ 64-bit gfortran GNU Fortran (GCC) 5.2.0
+ (osx1011dev/osx1011test) Intel icc/icpc/ifort version 16.0.2
+
+ Mac OS Sierra 10.12.6 Apple LLVM version 8.1.0 (clang/clang++-802.0.42)
+ 64-bit gfortran GNU Fortran (GCC) 7.1.0
+ (kite) Intel icc/icpc/ifort version 17.0.2
+
+
+Tested Configuration Features Summary
+=====================================
+
+ In the tables below
+ y = tested
+ n = not tested in this release
+ C = Cluster
+ W = Workstation
+ x = not working in this release
+ dna = does not apply
+ ( ) = footnote appears below second table
+ <blank> = testing incomplete on this feature or platform
+
+Platform C F90/ F90 C++ zlib SZIP
+ parallel F2003 parallel
+Solaris2.11 32-bit n y/y n y y y
+Solaris2.11 64-bit n y/n n y y y
+Windows 7 y y/y n y y y
+Windows 7 x64 y y/y y y y y
+Windows 7 Cygwin n y/n n y y y
+Windows 7 x64 Cygwin n y/n n y y y
+Windows 10 y y/y n y y y
+Windows 10 x64 y y/y n y y y
+Mac OS X Mavericks 10.9.5 64-bit n y/y n y y y
+Mac OS X Yosemite 10.10.5 64-bit n y/y n y y y
+Mac OS X El Capitan 10.11.6 64-bit n y/y n y y y
+Mac OS Sierra 10.12.6 64-bit n y/y n y y y
+CentOS 7.2 Linux 3.10.0 x86_64 PGI n y/y n y y y
+CentOS 7.2 Linux 3.10.0 x86_64 GNU y y/y y y y y
+CentOS 7.2 Linux 3.10.0 x86_64 Intel n y/y n y y y
+Linux 2.6.32-573.18.1.el6.ppc64 n y/y n y y y
+
+
+Platform Shared Shared Shared Thread-
+ C libs F90 libs C++ libs safe
+Solaris2.11 32-bit y y y y
+Solaris2.11 64-bit y y y y
+Windows 7 y y y y
+Windows 7 x64 y y y y
+Windows 7 Cygwin n n n y
+Windows 7 x64 Cygwin n n n y
+Windows 10 y y y y
+Windows 10 x64 y y y y
+Mac OS X Mavericks 10.9.5 64-bit y n y y
+Mac OS X Yosemite 10.10.5 64-bit y n y y
+Mac OS X El Capitan 10.11.6 64-bit y n y y
+Mac OS Sierra 10.12.6 64-bit y n y y
+CentOS 7.2 Linux 3.10.0 x86_64 PGI y y y n
+CentOS 7.2 Linux 3.10.0 x86_64 GNU y y y y
+CentOS 7.2 Linux 3.10.0 x86_64 Intel y y y n
+Linux 2.6.32-573.18.1.el6.ppc64 y y y n
+
+Compiler versions for each platform are listed in the preceding
+"Supported Platforms" table.
+
+
+More Tested Platforms
+=====================
+The following platforms are not supported but have been tested for this release.
+
+ Linux 2.6.32-573.22.1.el6 GNU C (gcc), Fortran (gfortran), C++ (g++)
+ #1 SMP x86_64 GNU/Linux compilers:
+ (mayll/platypus) Version 4.4.7 20120313
+ Version 4.9.3, 5.3.0, 6.2.0
+ PGI C, Fortran, C++ for 64-bit target on
+ x86-64;
+ Version 17.10-0
+ Intel(R) C (icc), C++ (icpc), Fortran (icc)
+ compilers:
+ Version 17.0.4.196 Build 20170411
+ MPICH 3.1.4 compiled with GCC 4.9.3
+
+ Linux 3.10.0-327.18.2.el7 GNU C (gcc) and C++ (g++) compilers
+ #1 SMP x86_64 GNU/Linux Version 4.8.5 20150623 (Red Hat 4.8.5-4)
+ (jelly) with NAG Fortran Compiler Release 6.1(Tozai)
+ GCC Version 7.1.0
+ OpenMPI 3.0.0-GCC-7.2.0-2.29,
+ 3.1.0-GCC-7.2.0-2.29
+ Intel(R) C (icc) and C++ (icpc) compilers
+ Version 17.0.0.098 Build 20160721
+ with NAG Fortran Compiler Release 6.1(Tozai)
+
+ Linux 3.10.0-327.10.1.el7 MPICH 3.2 compiled with GCC 5.3.0
+ #1 SMP x86_64 GNU/Linux
+ (moohan)
+
+ Linux 2.6.32-573.18.1.el6.ppc64 MPICH mpich 3.1.4 compiled with
+ #1 SMP ppc64 GNU/Linux IBM XL C/C++ for Linux, V13.1
+ (ostrich) and IBM XL Fortran for Linux, V15.1
+
+ Debian 8.4 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1 x86_64 GNU/Linux
+ gcc, g++ (Debian 4.9.2-10) 4.9.2
+ GNU Fortran (Debian 4.9.2-10) 4.9.2
+ (cmake and autotools)
+
+ Fedora 24 4.7.2-201.fc24.x86_64 #1 SMP x86_64 x86_64 x86_64 GNU/Linux
+ gcc, g++ (GCC) 6.1.1 20160621
+ (Red Hat 6.1.1-3)
+ GNU Fortran (GCC) 6.1.1 20160621
+ (Red Hat 6.1.1-3)
+ (cmake and autotools)
+
+ Ubuntu 16.04.1 4.4.0-38-generic #57-Ubuntu SMP x86_64 GNU/Linux
+ gcc, g++ (Ubuntu 5.4.0-6ubuntu1~16.04.2)
+ 5.4.0 20160609
+ GNU Fortran (Ubuntu 5.4.0-6ubuntu1~16.04.2)
+ 5.4.0 20160609
+ (cmake and autotools)
+
+
+Known Problems
+==============
+
+ At present, metadata cache images may not be generated by parallel
+ applications. Parallel applications can read files with metadata cache
+ images, but since this is a collective operation, a deadlock is possible
+ if one or more processes do not participate.
+
+ Three tests fail with OpenMPI 3.0.0/GCC-7.2.0-2.29:
+ testphdf5 (ecdsetw, selnone, cchunk1, cchunk3, cchunk4, and actualio)
+ t_shapesame (sscontig2)
+ t_pflush1/fails on exit
+ The first two tests fail attempting collective writes.
+
+ Known problems in previous releases can be found in the HISTORY*.txt files
+ in the HDF5 source. Please report any new problems found to
+ help@hdfgroup.org.
+
+
+CMake vs. Autotools installations
+=================================
+While both build systems produce similar results, there are differences.
+Each system produces the same set of folders on Linux (only CMake works
+on standard Windows): bin, include, lib and share. Autotools places the
+COPYING and RELEASE.txt files in the root folder, while CMake places them
+in the share folder.
+
+The bin folder contains the tools and the build scripts. Additionally, CMake
+creates dynamic versions of the tools with the suffix "-shared". Autotools
+installs one set of tools depending on the "--enable-shared" configuration
+option.
+ build scripts
+ -------------
+ Autotools: h5c++, h5cc, h5fc
+ CMake: h5c++, h5cc, h5hlc++, h5hlcc
+
+The include folder holds the header files and the fortran mod files. CMake
+places the fortran mod files into separate shared and static subfolders,
+while Autotools places one set of mod files into the include folder. Because
+CMake produces a tools library, the header files for tools will appear in
+the include folder.
+
+The lib folder contains the library files, and CMake adds the pkgconfig
+subfolder with the hdf5*.pc files used by the bin/build scripts created by
+the CMake build. CMake separates the C interface code from the fortran code by
+creating C-stub libraries for each Fortran library. In addition, only CMake
+installs the tools library. The names of the szip libraries are different
+between the build systems.
+
+The share folder will have the most differences because CMake builds include
+a number of CMake-specific files to support CMake's find_package and the
+HDF5 Examples CMake project.
+
+%%%%1.10.3%%%%
+
+HDF5 version 1.10.3 released on 2018-08-21
+================================================================================
+
+
+INTRODUCTION
+
+This document describes the differences between this release and the previous
+HDF5 release. It contains information on the platforms tested and known
+problems in this release. For more details check the HISTORY*.txt files in the
+HDF5 source.
+
+Note that documentation in the links below will be updated at the time of each
+final release.
+
+Links to HDF5 documentation can be found on The HDF5 web page:
+
+ https://portal.hdfgroup.org/display/HDF5/HDF5
+
+The official HDF5 releases can be obtained from:
+
+ https://www.hdfgroup.org/downloads/hdf5/
+
+Changes from Release to Release and New Features in the HDF5-1.10.x release series
+can be found at:
+
+ https://portal.hdfgroup.org/display/HDF5/HDF5+Application+Developer%27s+Guide
+
+If you have any questions or comments, please send them to the HDF Help Desk:
+
+ help@hdfgroup.org
+
+
+CONTENTS
+
+- New Features
+- Bug Fixes since HDF5-1.10.2
+- Supported Platforms
+- Tested Configuration Features Summary
+- More Tested Platforms
+- Known Problems
+- CMake vs. Autotools installations
+
+
+New Features
+============
+
+ Library
+ -------
+ - Moved the H5DOread/write_chunk() API calls to H5Dread/write_chunk()
+
+ The functionality of the direct chunk I/O calls in the high-level
+ library has been moved to the H5D package in the main library. This
+ will allow using those functions without building the high-level
+ library. The parameters and functionality of the H5D calls are
+ identical to the H5DO calls.
+
+ The original H5DO high-level API calls have been retained, though
+ they are now just wrappers for the H5D calls. They are marked as
+ deprecated and are only available when the library is built with
+ deprecated functions. New code should use the H5D calls for this
+ reason.
+
+ As a part of this work, the following symbols from H5Dpublic.h are no
+ longer used:
+
+ H5D_XFER_DIRECT_CHUNK_WRITE_FLAG_NAME
+ H5D_XFER_DIRECT_CHUNK_WRITE_FILTERS_NAME
+ H5D_XFER_DIRECT_CHUNK_WRITE_OFFSET_NAME
+ H5D_XFER_DIRECT_CHUNK_WRITE_DATASIZE_NAME
+ H5D_XFER_DIRECT_CHUNK_READ_FLAG_NAME
+ H5D_XFER_DIRECT_CHUNK_READ_OFFSET_NAME
+ H5D_XFER_DIRECT_CHUNK_READ_FILTERS_NAME
+
+ And properties with these names are no longer stored in the dataset
+ transfer property lists. The symbols are still defined in H5Dpublic.h,
+ but only when the library is built with deprecated symbols.
+
+ (DER - 2018/05/04)
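+
+      For illustration, a minimal C sketch of the main-library calls (the
+      helper name, chunk offset, and buffer size are hypothetical):
+
+          #include "hdf5.h"
+          #include <stdint.h>
+
+          /* Write one chunk's worth of raw bytes directly, then read it
+           * back, bypassing datatype conversion and hyperslab selection. */
+          static void direct_chunk_io(hid_t dset)
+          {
+              int      buf[64]   = {0};    /* one 64-element chunk          */
+              hsize_t  offset[1] = {0};    /* chunk offset in the dataspace */
+              uint32_t filters   = 0;      /* filter mask, 0 = no filters   */
+
+              H5Dwrite_chunk(dset, H5P_DEFAULT, filters, offset,
+                             sizeof(buf), buf);
+              H5Dread_chunk(dset, H5P_DEFAULT, offset, &filters, buf);
+          }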
+
+ Configuration:
+ -------------
+ - Add missing USE_110_API_DEFAULT option.
+
+ Option USE_110_API_DEFAULT sets the default version of
+ versioned APIs. The bin/makevers perl script did not set
+ the maxidx variable correctly when the 1.10 branch was
+ created. This caused the versioning process to always use
+ the latest version of any API.
+
+ (ADB - 2018/08/17, HDFFV-10552)
+
+ - Added configuration checks for the following MPI functions:
+
+ MPI_Mprobe - Used for the Parallel Compression feature
+ MPI_Imrecv - Used for the Parallel Compression feature
+
+ MPI_Get_elements_x - Used for the "big Parallel I/O" feature
+ MPI_Type_size_x - Used for the "big Parallel I/O" feature
+
+ (JTH - 2018/08/02, HDFFV-10512)
+
+ - Added section to the libhdf5.settings file to indicate
+ the status of the Parallel Compression and "big Parallel I/O"
+ features.
+
+ (JTH - 2018/08/02, HDFFV-10512)
+
+ - Add option to execute swmr shell scripts from CMake.
+
+ Option TEST_SHELL_SCRIPTS redirects processing into a
+ separate ShellTests.cmake file for UNIX types. The tests
+ execute the shell scripts if a SH program is found.
+
+ (ADB - 2018/07/16)
+
+
+ C++ Library:
+ ------------
+ - New wrappers
+
+ Added the following items:
+
+ + Class DSetAccPropList for the dataset access property list.
+
+ + Wrapper for H5Dget_access_plist to class DataSet
+ // Gets the access property list of this dataset.
+ DSetAccPropList getAccessPlist() const;
+
+ + Wrappers for H5Pset_chunk_cache and H5Pget_chunk_cache to class DSetAccPropList
+ // Sets the raw data chunk cache parameters.
+ void setChunkCache(size_t rdcc_nslots, size_t rdcc_nbytes, double rdcc_w0)
+
+ // Retrieves the raw data chunk cache parameters.
+ void getChunkCache(size_t &rdcc_nslots, size_t &rdcc_nbytes, double &rdcc_w0)
+
+ + New operator!= to class DataType (HDFFV-10472)
+ // Determines whether two datatypes are not the same.
+ bool operator!=(const DataType& compared_type)
+
+ + Wrappers for H5Oget_info2, H5Oget_info_by_name2, and H5Oget_info_by_idx2
+ (HDFFV-10458)
+
+ // Retrieves information about an HDF5 object.
+ void getObjinfo(H5O_info_t& objinfo, unsigned fields = H5O_INFO_BASIC) const;
+
+ // Retrieves information about an HDF5 object, given its name.
+ void getObjinfo(const char* name, H5O_info_t& objinfo,
+ unsigned fields = H5O_INFO_BASIC,
+ const LinkAccPropList& lapl = LinkAccPropList::DEFAULT) const;
+ void getObjinfo(const H5std_string& name, H5O_info_t& objinfo,
+ unsigned fields = H5O_INFO_BASIC,
+ const LinkAccPropList& lapl = LinkAccPropList::DEFAULT) const;
+
+ // Retrieves information about an HDF5 object, given its index.
+ void getObjinfo(const char* grp_name, H5_index_t idx_type,
+ H5_iter_order_t order, hsize_t idx, H5O_info_t& objinfo,
+ unsigned fields = H5O_INFO_BASIC,
+ const LinkAccPropList& lapl = LinkAccPropList::DEFAULT) const;
+ void getObjinfo(const H5std_string& grp_name, H5_index_t idx_type,
+ H5_iter_order_t order, hsize_t idx, H5O_info_t& objinfo,
+ unsigned fields = H5O_INFO_BASIC,
+ const LinkAccPropList& lapl = LinkAccPropList::DEFAULT) const;
+
+ (BMR - 2018/07/22, HDFFV-10150, HDFFV-10458, HDFFV-1047)
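+
+      The new wrappers map onto existing C routines; a small C sketch of
+      those underlying calls (the dataset name and cache sizes are made up):
+
+          #include "hdf5.h"
+
+          /* Open a dataset with a custom raw data chunk cache. */
+          static hid_t open_with_chunk_cache(hid_t file)
+          {
+              hid_t dapl = H5Pcreate(H5P_DATASET_ACCESS);
+              /* 12421 slots, 16 MiB cache, evict fully read/written chunks first */
+              H5Pset_chunk_cache(dapl, 12421, 16 * 1024 * 1024, 1.0);
+              hid_t dset = H5Dopen2(file, "/data", dapl);
+              H5Pclose(dapl);
+              return dset;
+          }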
+
+
+ Java Library:
+ ----------------
+ - Java HDFLibraryException class
+
+ Change parent class from Exception to RuntimeException.
+
+ (ADB - 2018/07/30, HDFFV-10534)
+
+ - JNI Read and Write
+
+ Refactored variable-length functions, H5DreadVL and H5AreadVL,
+ to correct dataset and attribute reads. New write functions,
+ H5DwriteVL and H5AwriteVL, are under construction.
+
+ (ADB - 2018/06/02, HDFFV-10519)
+
+
+Bug Fixes since HDF5-1.10.2 release
+==================================
+
+ Library
+ -------
+ - Performance issue with H5Oget_info
+
+      The H5Oget_info family of routines retrieves information for an object,
+      such as object type, access time, number of attributes, and storage
+      space. Retrieving all of this information regardless of need is
+      overkill and causes a performance problem when applied to many objects.
+
+      Add an additional parameter "fields" to the H5Oget_info family of
+      routines indicating the type of information to be retrieved. The same
+      is done to the H5Ovisit family of routines, which recursively visit an
+      object, returning object information in a callback function. Both sets
+      of routines are versioned and the corresponding compatibility macros
+      are added.
+
+ The version 2 names of the two sets of routines are:
+ (1) H5Oget_info2, H5Oget_info_by_idx2, H5Oget_info_by_name2
+ (2) H5Ovisit2, H5Ovisit_by_name2
+
+ (VC - 2018/08/15, HDFFV-10180)
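+
+      A minimal C sketch of a versioned call using the new "fields"
+      parameter (the object path is hypothetical):
+
+          #include "hdf5.h"
+
+          /* Retrieve only basic object info, skipping the more expensive
+           * time, attribute-count, and header fields. */
+          static H5O_type_t object_type(hid_t file)
+          {
+              H5O_info_t oinfo;
+              H5Oget_info_by_name2(file, "/group/dataset", &oinfo,
+                                   H5O_INFO_BASIC, H5P_DEFAULT);
+              return oinfo.type;
+          }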
+
+ - Test failure due to metadata size in test/vds.c
+
+ The size of metadata from test_api_get_ex_dcpl() in test/vds.c is not as expected
+ because the latest format should be used when encoding the layout for VDS.
+
+ Set the latest format in a temporary fapl and pass the setting to the routines that
+ encode the dataset selection for VDS.
+
+ (VC - 2018/08/14 HDFFV-10469)
+
+ - Java HDF5LibraryException class
+
+ The error minor and major values would be lost after the
+ constructor executed.
+
+ Created two local class variables to hold the values obtained during
+ execution of the constructor. Refactored the class functions to retrieve
+      the class values rather than calling the native functions.
+ The native functions were renamed and called only during execution
+ of the constructor.
+ Added error checking to calling class constructors in JNI classes.
+
+ (ADB - 2018/08/06, HDFFV-10544)
+
+ - Added checks of the defined MPI_VERSION to guard against usage of
+ MPI-3 functions in the Parallel Compression and "big Parallel I/O"
+ features when HDF5 is built with MPI-2. Previously, the configure
+ step would pass but the build itself would fail when it could not
+ locate the MPI-3 functions used.
+
+ As a result of these new checks, HDF5 can again be built with MPI-2,
+ but the Parallel Compression feature will be disabled as it relies
+ on the MPI-3 functions used.
+
+ (JTH - 2018/08/02, HDFFV-10512)
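+
+      The guard is a standard preprocessor check on the MPI version macros,
+      roughly:
+
+          #include <mpi.h>
+
+          #if MPI_VERSION >= 3
+          /* MPI-3 available: the Parallel Compression and "big Parallel I/O"
+           * code paths (MPI_Mprobe, MPI_Imrecv, MPI_Get_elements_x,
+           * MPI_Type_size_x) can be compiled. */
+          #else
+          /* MPI-2 only: these features are disabled at compile time. */
+          #endif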
+
+ - User's patches: CVEs
+
+ The following patches have been applied:
+
+ CVE-2018-11202 - NULL pointer dereference was discovered in
+ H5S_hyper_make_spans in H5Shyper.c (HDFFV-10476)
+ https://security-tracker.debian.org/tracker/CVE-2018-11202
+                       https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11202
+
+ CVE-2018-11203 - A division by zero was discovered in
+ H5D__btree_decode_key in H5Dbtree.c (HDFFV-10477)
+ https://security-tracker.debian.org/tracker/CVE-2018-11203
+                       https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11203
+
+ CVE-2018-11204 - A NULL pointer dereference was discovered in
+ H5O__chunk_deserialize in H5Ocache.c (HDFFV-10478)
+ https://security-tracker.debian.org/tracker/CVE-2018-11204
+                       https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11204
+
+ CVE-2018-11206 - An out of bound read was discovered in
+ H5O_fill_new_decode and H5O_fill_old_decode in H5Ofill.c
+ (HDFFV-10480)
+ https://security-tracker.debian.org/tracker/CVE-2018-11206
+                       https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11206
+
+ CVE-2018-11207 - A division by zero was discovered in
+ H5D__chunk_init in H5Dchunk.c (HDFFV-10481)
+ https://security-tracker.debian.org/tracker/CVE-2018-11207
+                       https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11207
+
+ (BMR - 2018/7/22, PR#s: 1134 and 1139,
+ HDFFV-10476, HDFFV-10477, HDFFV-10478, HDFFV-10480, HDFFV-10481)
+
+ - H5Adelete
+
+ H5Adelete failed when deleting the last "large" attribute that
+ is stored densely via fractal heap/v2 b-tree.
+
+ After removing the attribute, update the ainfo message. If the
+ number of attributes goes to zero, remove the message.
+
+ (VC - 2018/07/20, HDFFV-9277)
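+
+      For context, attributes move to dense (fractal heap / v2 B-tree)
+      storage once the compact limit is exceeded; a C sketch of a setup that
+      forces dense storage (the limits and names are illustrative):
+
+          #include "hdf5.h"
+          #include <stdio.h>
+
+          /* Force dense attribute storage, then delete a dense attribute. */
+          static void dense_attr_delete(hid_t file, hid_t space)
+          {
+              hid_t dcpl = H5Pcreate(H5P_DATASET_CREATE);
+              H5Pset_attr_phase_change(dcpl, 0, 0);    /* always store densely */
+              hid_t dset = H5Dcreate2(file, "d", H5T_NATIVE_INT, space,
+                                      H5P_DEFAULT, dcpl, H5P_DEFAULT);
+              char name[16];
+              for (int i = 0; i < 10; i++) {
+                  snprintf(name, sizeof(name), "attr%d", i);
+                  hid_t attr = H5Acreate2(dset, name, H5T_NATIVE_INT, space,
+                                          H5P_DEFAULT, H5P_DEFAULT);
+                  H5Aclose(attr);
+              }
+              H5Adelete(dset, "attr9");    /* delete the last one created */
+          }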
+
+ - A bug was discovered in the parallel library which caused partial
+ parallel reads of filtered datasets to return incorrect data. The
+ library used the incorrect dataspace for each chunk read, causing
+ the selection used in each chunk to be wrong.
+
+ The bug was not caught during testing because all of the current
+ tests which do parallel reads of filtered data read all of the data
+ using an H5S_ALL selection. Several tests were added which exercise
+ partial parallel reads.
+
+ (JTH - 2018/07/16, HDFFV-10467)
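+
+      A partial parallel read of this kind looks roughly like the following
+      C sketch (the per-rank hyperslab bounds and names are illustrative):
+
+          #include "hdf5.h"
+
+          /* Each rank collectively reads its own hyperslab of a dataset. */
+          static void partial_collective_read(hid_t dset, hsize_t start,
+                                              hsize_t count, int *buf)
+          {
+              hid_t fspace = H5Dget_space(dset);
+              H5Sselect_hyperslab(fspace, H5S_SELECT_SET, &start, NULL,
+                                  &count, NULL);
+              hid_t mspace = H5Screate_simple(1, &count, NULL);
+
+              hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
+              H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);
+              H5Dread(dset, H5T_NATIVE_INT, mspace, fspace, dxpl, buf);
+
+              H5Pclose(dxpl);
+              H5Sclose(mspace);
+              H5Sclose(fspace);
+          }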
+
+ - A bug was discovered in the parallel library which caused parallel
+ writes of filtered datasets to trigger an assertion failure in the
+ file free space manager.
+
+ This occurred when the filter used caused chunks to repeatedly shrink
+ and grow over the course of several dataset writes. The previous chunk
+ information, such as the size of the chunk and the offset in the file,
+ was being cached and not updated after each write, causing the next write
+ to the chunk to retrieve the incorrect cached information and run into
+ issues when reallocating space in the file for the chunk.
+
+ (JTH - 2018/07/16, HDFFV-10509)
+
+ - A bug was discovered in the parallel library which caused the
+ H5D__mpio_array_gatherv() function to allocate too much memory.
+
+ When the function is called with the 'allgather' parameter set
+ to a non-true value, the function will receive data from all MPI
+      ranks and gather it to the single rank specified by the 'root'
+ parameter. However, the bug in the function caused memory for
+ the received data to be allocated on all MPI ranks, not just the
+ singular rank specified as the receiver. In some circumstances,
+ this would cause an application to fail due to the large amounts
+ of memory being allocated.
+
+ (JTH - 2018/07/16, HDFFV-10467)
+
+ - Error checks in h5stat and when decoding messages
+
+      h5stat exited with a segmentation fault / core dump when
+      errors were encountered in the internal library.
+
+ Add error checks and --enable-error-stack option to h5stat.
+ Add range checks when decoding messages: old fill value, old
+ layout and refcount.
+
+ (VC - 2018/07/11, HDFFV-10333)
+
+ - If an HDF5 file contains a malformed compound datatype with a
+ suitably large offset, the type conversion code can run off
+ the end of the type conversion buffer, causing a segmentation
+ fault.
+
+ This issue was reported to The HDF Group as issue #CVE-2017-17507.
+
+ NOTE: The HDF5 C library cannot produce such a file. This condition
+ should only occur in a corrupt (or deliberately altered) file
+ or a file created by third-party software.
+
+ THE HDF GROUP WILL NOT FIX THIS BUG AT THIS TIME
+
+ Fixing this problem would involve updating the publicly visible
+ H5T_conv_t function pointer typedef and versioning the API calls
+ which use it. We normally only modify the public API during
+ major releases, so this bug will not be fixed at this time.
+
+ (DER - 2018/02/26, HDFFV-10356)
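+
+      For reference, a well-formed compound datatype keeps every member
+      offset inside the declared type size; a C sketch (the struct is made
+      up):
+
+          #include "hdf5.h"
+          #include <stddef.h>
+
+          struct rec { int id; double value; };
+
+          /* Member offsets (HOFFSET) always fall within sizeof(struct rec);
+           * the corrupt files described above violate this invariant. */
+          static hid_t make_compound(void)
+          {
+              hid_t t = H5Tcreate(H5T_COMPOUND, sizeof(struct rec));
+              H5Tinsert(t, "id",    HOFFSET(struct rec, id),    H5T_NATIVE_INT);
+              H5Tinsert(t, "value", HOFFSET(struct rec, value), H5T_NATIVE_DOUBLE);
+              return t;
+          }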
+
+
+ Configuration
+ -------------
+    - Applied patches to address Cygwin build issues
+
+ There were three issues for Cygwin builds:
+ - Shared libs were not built.
+ - The -std=c99 flag caused a SIG_SETMASK undeclared error.
+        - Undefined errors when building test shared libraries.
+
+ Patches to address these issues were received and incorporated in this version.
+
+ (LRK - 2018/07/18, HDFFV-10475)
+
+ - The --enable-debug/production configure flags are listed as 'deprecated'
+ when they should really be listed as 'removed'.
+
+ In the autotools overhaul several years ago, we removed these flags and
+ implemented a new --enable-build-mode= flag. This was done because we
+ changed the semantics of the modes and didn't want users to silently
+      be exposed to them. The newer system is also more flexible and allows
+      us to add other modes (like 'clean').
+
+ The --enable-debug/production flags are now listed as removed.
+
+ (DER - 2018/05/31, HDFFV-10505)
+
+ - Moved the location of gcc attribute.
+
+ The gcc attribute(no_sanitize), named as the macro HDF_NO_UBSAN,
+ was located after the function name. Builds with GCC 7 did not
+ indicate any problem, but GCC 8 issued errors. Moved the
+ attribute before the function name, as required.
+
+ (ADB - 2018/05/22, HDFFV-10473)
+
+ - Reworked java test suite into individual JUnit tests.
+
+ Testing the whole suite of java unit tests in a single JUnit run
+ made it difficult to determine actual failures when tests would fail.
+      Running each file's set of tests individually allows individual
+      failures to be diagnosed more easily. A side benefit is that tests
+      for optional components
+ of the library can be disabled if not configured.
+
+ (ADB - 2018/05/16, HDFFV-9739)
+
+ - Converted CMake global commands ADD_DEFINITIONS and INCLUDE_DIRECTORIES
+ to use target_* type commands. This change modernizes the CMake usage
+ in the HDF5 library.
+
+ In addition, there is the intention to convert to generator expressions,
+ where possible. The exception is Fortran FLAGS on Windows Visual Studio.
+ The HDF macros TARGET_C_PROPERTIES and TARGET_FORTRAN_PROPERTIES have
+ been removed with this change in usage.
+
+ The additional language (C++ and Fortran) checks have also been localized
+ to only be checked when that language is enabled.
+
+ (ADB - 2018/05/08)
+
+
+ Performance
+ -------------
+ - Revamped internal use of DXPLs, improving performance
+
+ (QAK - 2018/05/20)
+
+
+ Fortran
+ --------
+ - Fixed issue with h5fget_obj_count_f and using a file id of H5F_OBJ_ALL_F not
+ returning the correct count.
+
+ (MSB - 2018/5/15, HDFFV-10405)
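+
+      The C counterpart is H5Fget_obj_count; a minimal sketch of counting
+      open objects across all open files:
+
+          #include "hdf5.h"
+
+          /* Count every open HDF5 object (files, datasets, groups, ...)
+           * across all currently open files. */
+          static ssize_t open_object_count(void)
+          {
+              return H5Fget_obj_count((hid_t)H5F_OBJ_ALL, H5F_OBJ_ALL);
+          }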
+
+
+ C++ APIs
+ --------
+ - Adding default arguments to existing functions
+
+ Added the following items:
+ + Two more property list arguments are added to H5Location::createDataSet:
+ const DSetAccPropList& dapl = DSetAccPropList::DEFAULT
+ const LinkCreatPropList& lcpl = LinkCreatPropList::DEFAULT
+
+ + One more property list argument is added to H5Location::openDataSet:
+ const DSetAccPropList& dapl = DSetAccPropList::DEFAULT
+
+ (BMR - 2018/07/21, PR# 1146)
+
+    - Improved C++ documentation
+
+      Converted the table in the main page of the C++ documentation from mht
+      to htm format for portability.
+
+ (BMR - 2018/07/17, PR# 1141)
+
+
+Supported Platforms
+===================
+
+ Linux 2.6.32-696.16.1.el6.ppc64 gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-18)
+ #1 SMP ppc64 GNU/Linux g++ (GCC) 4.4.7 20120313 (Red Hat 4.4.7-18)
+ (ostrich) GNU Fortran (GCC) 4.4.7 20120313 (Red Hat 4.4.7-18)
+ IBM XL C/C++ V13.1
+ IBM XL Fortran V15.1
+
+ Linux 3.10.0-327.10.1.el7 GNU C (gcc), Fortran (gfortran), C++ (g++)
+ #1 SMP x86_64 GNU/Linux compilers:
+ (kituo/moohan) Version 4.8.5 20150623 (Red Hat 4.8.5-4)
+ Version 4.9.3, Version 5.2.0
+ Intel(R) C (icc), C++ (icpc), Fortran (icc)
+ compilers:
+ Version 17.0.0.098 Build 20160721
+ MPICH 3.1.4 compiled with GCC 4.9.3
+
+ SunOS 5.11 32- and 64-bit Sun C 5.12 SunOS_sparc
+ (emu) Sun Fortran 95 8.6 SunOS_sparc
+ Sun C++ 5.12 SunOS_sparc
+
+ Windows 7 Visual Studio 2015 w/ Intel Fortran 16 (cmake)
+
+ Windows 7 x64 Visual Studio 2012 w/ Intel Fortran 15 (cmake)
+ Visual Studio 2013 w/ Intel Fortran 15 (cmake)
+ Visual Studio 2015 w/ Intel Fortran 16 (cmake)
+ Visual Studio 2015 w/ Intel C, Fortran 2017 (cmake)
+ Visual Studio 2015 w/ MSMPI 8 (cmake)
+
+ Windows 10 Visual Studio 2015 w/ Intel Fortran 18 (cmake)
+
+ Windows 10 x64 Visual Studio 2015 w/ Intel Fortran 18 (cmake)
+ Visual Studio 2017 w/ Intel Fortran 18 (cmake)
+
+ Mac OS X Yosemite 10.10.5 Apple clang/clang++ version 6.1 from Xcode 7.0
+ 64-bit gfortran GNU Fortran (GCC) 4.9.2
+ (osx1010dev/osx1010test) Intel icc/icpc/ifort version 15.0.3
+
+ Mac OS X El Capitan 10.11.6 Apple clang/clang++ version 7.3.0 from Xcode 7.3
+ 64-bit gfortran GNU Fortran (GCC) 5.2.0
+ (osx1011dev/osx1011test) Intel icc/icpc/ifort version 16.0.2
+
+ Mac OS Sierra 10.12.6 Apple LLVM version 8.1.0 (clang/clang++-802.0.42)
+ 64-bit gfortran GNU Fortran (GCC) 7.1.0
+ (swallow/kite) Intel icc/icpc/ifort version 17.0.2
+
+Tested Configuration Features Summary
+=====================================
+
+ In the tables below
+ y = tested
+ n = not tested in this release
+ C = Cluster
+ W = Workstation
+ x = not working in this release
+ dna = does not apply
+ ( ) = footnote appears below second table
+ <blank> = testing incomplete on this feature or platform
+
+Platform C F90/ F90 C++ zlib SZIP
+ parallel F2003 parallel
+Solaris2.11 32-bit n y/y n y y y
+Solaris2.11 64-bit n y/n n y y y
+Windows 7 y y/y n y y y
+Windows 7 x64 y y/y y y y y
+Windows 7 Cygwin n y/n n y y y
+Windows 7 x64 Cygwin n y/n n y y y
+Windows 10 y y/y n y y y
+Windows 10 x64 y y/y n y y y
+Mac OS X Mavericks 10.9.5 64-bit n y/y n y y y
+Mac OS X Yosemite 10.10.5 64-bit n y/y n y y y
+Mac OS X El Capitan 10.11.6 64-bit n y/y n y y y
+Mac OS Sierra 10.12.6 64-bit n y/y n y y y
+CentOS 7.2 Linux 2.6.32 x86_64 PGI n y/y n y y y
+CentOS 7.2 Linux 2.6.32 x86_64 GNU y y/y y y y y
+CentOS 7.2 Linux 2.6.32 x86_64 Intel n y/y n y y y
+Linux 2.6.32-573.18.1.el6.ppc64 n y/y n y y y
+
+
+Platform Shared Shared Shared Thread-
+ C libs F90 libs C++ libs safe
+Solaris2.11 32-bit y y y y
+Solaris2.11 64-bit y y y y
+Windows 7 y y y y
+Windows 7 x64 y y y y
+Windows 7 Cygwin n n n y
+Windows 7 x64 Cygwin n n n y
+Windows 10 y y y y
+Windows 10 x64 y y y y
+Mac OS X Mavericks 10.9.5 64-bit y n y y
+Mac OS X Yosemite 10.10.5 64-bit y n y y
+Mac OS X El Capitan 10.11.6 64-bit y n y y
+Mac OS Sierra 10.12.6 64-bit y n y y
+CentOS 7.2 Linux 2.6.32 x86_64 PGI y y y n
+CentOS 7.2 Linux 2.6.32 x86_64 GNU y y y y
+CentOS 7.2 Linux 2.6.32 x86_64 Intel y y y n
+Linux 2.6.32-573.18.1.el6.ppc64 y y y n
+
+Compiler versions for each platform are listed in the preceding
+"Supported Platforms" table.
+
+
+More Tested Platforms
+=====================
+The following platforms are not supported but have been tested for this release.
+
+ Linux 2.6.32-573.22.1.el6 GNU C (gcc), Fortran (gfortran), C++ (g++)
+ #1 SMP x86_64 GNU/Linux compilers:
+ (mayll/platypus) Version 4.4.7 20120313
+ Version 4.9.3, 5.3.0, 6.2.0
+ PGI C, Fortran, C++ for 64-bit target on
+ x86-64;
+ Version 17.10-0
+ Intel(R) C (icc), C++ (icpc), Fortran (icc)
+ compilers:
+ Version 17.0.4.196 Build 20170411
+ MPICH 3.1.4 compiled with GCC 4.9.3
+
+ Linux 3.10.0-327.18.2.el7 GNU C (gcc) and C++ (g++) compilers
+ #1 SMP x86_64 GNU/Linux Version 4.8.5 20150623 (Red Hat 4.8.5-4)
+ (jelly) with NAG Fortran Compiler Release 6.1(Tozai)
+ GCC Version 7.1.0
+ OpenMPI 3.0.0-GCC-7.2.0-2.29,
+ 3.1.0-GCC-7.2.0-2.29
+ Intel(R) C (icc) and C++ (icpc) compilers
+ Version 17.0.0.098 Build 20160721
+ with NAG Fortran Compiler Release 6.1(Tozai)
+
+ Linux 3.10.0-327.10.1.el7 MPICH 3.2 compiled with GCC 5.3.0
+ #1 SMP x86_64 GNU/Linux
+ (moohan)
+
+ Linux 2.6.32-573.18.1.el6.ppc64 MPICH mpich 3.1.4 compiled with
+ #1 SMP ppc64 GNU/Linux IBM XL C/C++ for Linux, V13.1
+ (ostrich) and IBM XL Fortran for Linux, V15.1
+
+ Debian 8.4 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1 x86_64 GNU/Linux
+ gcc, g++ (Debian 4.9.2-10) 4.9.2
+ GNU Fortran (Debian 4.9.2-10) 4.9.2
+ (cmake and autotools)
+
+ Fedora 24 4.7.2-201.fc24.x86_64 #1 SMP x86_64 x86_64 x86_64 GNU/Linux
+ gcc, g++ (GCC) 6.1.1 20160621
+ (Red Hat 6.1.1-3)
+ GNU Fortran (GCC) 6.1.1 20160621
+ (Red Hat 6.1.1-3)
+ (cmake and autotools)
+
+ Ubuntu 16.04.1 4.4.0-38-generic #57-Ubuntu SMP x86_64 GNU/Linux
+ gcc, g++ (Ubuntu 5.4.0-6ubuntu1~16.04.2)
+ 5.4.0 20160609
+ GNU Fortran (Ubuntu 5.4.0-6ubuntu1~16.04.2)
+ 5.4.0 20160609
+ (cmake and autotools)
+
+
+Known Problems
+==============
+
+ At present, metadata cache images may not be generated by parallel
+ applications. Parallel applications can read files with metadata cache
+ images, but since this is a collective operation, a deadlock is possible
+ if one or more processes do not participate.
+
+ Three tests fail with OpenMPI 3.0.0/GCC-7.2.0-2.29:
+ testphdf5 (ecdsetw, selnone, cchunk1, cchunk3, cchunk4, and actualio)
+ t_shapesame (sscontig2)
+ t_pflush1/fails on exit
+ The first two tests fail attempting collective writes.
+
+ Known problems in previous releases can be found in the HISTORY*.txt files
+ in the HDF5 source. Please report any new problems found to
+ help@hdfgroup.org.
+
+
+CMake vs. Autotools installations
+=================================
+While both build systems produce similar results, there are differences.
+Each system produces the same set of folders on Linux (only CMake works
+on standard Windows): bin, include, lib and share. Autotools places the
+COPYING and RELEASE.txt files in the root folder, while CMake places them
+in the share folder.
+
+The bin folder contains the tools and the build scripts. Additionally, CMake
+creates dynamic versions of the tools with the suffix "-shared". Autotools
+installs one set of tools depending on the "--enable-shared" configuration
+option.
+ build scripts
+ -------------
+ Autotools: h5c++, h5cc, h5fc
+ CMake: h5c++, h5cc, h5hlc++, h5hlcc
+
+The include folder holds the header files and the fortran mod files. CMake
+places the fortran mod files into separate shared and static subfolders,
+while Autotools places one set of mod files into the include folder. Because
+CMake produces a tools library, the header files for tools will appear in
+the include folder.
+
+The lib folder contains the library files, and CMake adds the pkgconfig
+subfolder with the hdf5*.pc files used by the bin/build scripts created by
+the CMake build. CMake separates the C interface code from the fortran code by
+creating C-stub libraries for each Fortran library. In addition, only CMake
+installs the tools library. The names of the szip libraries are different
+between the build systems.
+
+The share folder will have the most differences because CMake builds include
+a number of CMake-specific files to support CMake's find_package and the
+HDF5 Examples CMake project.
+
+
%%%%1.10.2%%%%
HDF5 version 1.10.2 released on 2018-03-29
diff --git a/release_docs/INSTALL_CMake.txt b/release_docs/INSTALL_CMake.txt
index 7352dae..49fef76 100644
--- a/release_docs/INSTALL_CMake.txt
+++ b/release_docs/INSTALL_CMake.txt
@@ -324,6 +324,34 @@ IV. Further considerations
-DHDF5_ENABLE_SZIP_SUPPORT:BOOL=OFF -DHDF5_ENABLE_Z_LIB_SUPPORT:BOOL=OFF \
-DCMAKE_BUILD_TYPE:STRING=Release ..
+ 6. CMake uses a toolchain of utilities to compile, link libraries and
+ create archives, and other tasks to drive the build. The toolchain
+ utilities available are determined by the languages enabled. In normal
+ builds, CMake automatically determines the toolchain for host builds
+ based on system introspection and defaults. In cross-compiling
+ scenarios, a toolchain file may be specified with information about
+ compiler and utility paths.
+ Variables and Properties
+ Several variables relate to the language components of a toolchain which
+ are enabled. CMAKE_<LANG>_COMPILER is the full path to the compiler used
+ for <LANG>. CMAKE_<LANG>_COMPILER_ID is the identifier used by CMake for
+ the compiler and CMAKE_<LANG>_COMPILER_VERSION is the version of the compiler.
+
+ The CMAKE_<LANG>_FLAGS variables and the configuration-specific equivalents
+ contain flags that will be added to the compile command when compiling a
+ file of a particular language.
+
+ As the linker is invoked by the compiler driver, CMake needs a way to
+ determine which compiler to use to invoke the linker. This is calculated
+ by the LANGUAGE of source files in the target, and in the case of static
+ libraries, the language of the dependent libraries. The choice CMake makes
+ may be overridden with the LINKER_LANGUAGE target property.
+
+ See the CMake help for more information on using toolchain files.
+
+ To use a toolchain file with the supplied cmake scripts, see the
+ HDF5options.cmake file under the toolchain section.
+
Notes: CMake and HDF5
1. Using CMake for building and using HDF5 is under active development.
@@ -607,13 +635,13 @@ HDF5_ENABLE_DIRECT_VFD "Build the Direct I/O Virtual File Driver"
HDF5_ENABLE_EMBEDDED_LIBINFO "embed library info into executables" ON
HDF5_ENABLE_HSIZET "Enable datasets larger than memory" ON
HDF5_ENABLE_PARALLEL "Enable parallel build (requires MPI)" OFF
+HDF5_ENABLE_PREADWRITE "Use pread/pwrite in sec2/log/core VFDs in place of read/write (when available)" ON
HDF5_ENABLE_TRACE "Enable API tracing capability" OFF
HDF5_ENABLE_USING_MEMCHECKER "Indicate that a memory checker is used" OFF
HDF5_GENERATE_HEADERS "Rebuild Generated Files" ON
HDF5_BUILD_GENERATORS "Build Test Generators" OFF
HDF5_JAVA_PACK_JRE "Package a JRE installer directory" OFF
HDF5_MEMORY_ALLOC_SANITY_CHECK "Indicate that internal memory allocation sanity checks are enabled" OFF
-HDF5_METADATA_TRACE_FILE "Enable metadata trace file collection" OFF
HDF5_NO_PACKAGES "Do not include CPack Packaging" OFF
HDF5_PACK_EXAMPLES "Package the HDF5 Library Examples Compressed File" OFF
HDF5_PACK_MACOSX_FRAMEWORK "Package the HDF5 Library in a Frameworks" OFF
@@ -623,10 +651,8 @@ HDF5_PACKAGE_EXTLIBS "CPACK - include external libraries"
HDF5_STRICT_FORMAT_CHECKS "Whether to perform strict file format checks" OFF
HDF_TEST_EXPRESS "Control testing framework (0-3)" "0"
HDF5_TEST_VFD "Execute tests with different VFDs" OFF
-HDF5_USE_16_API_DEFAULT "Use the HDF5 1.6.x API by default" OFF
-HDF5_USE_18_API_DEFAULT "Use the HDF5 1.8.x API by default" OFF
-HDF5_USE_110_API_DEFAULT "Use the HDF5 1.10.x API by default" OFF
-HDF5_USE_112_API_DEFAULT "Use the HDF5 1.12.x API by default" ON
+HDF5_TEST_PASSTHROUGH_VOL "Execute tests with different passthrough VOL connectors" OFF
+DEFAULT_API_VERSION "Enable default API (v16, v18, v110, v112)" "v112"
HDF5_USE_FOLDERS "Enable folder grouping of projects in IDEs." ON
HDF5_WANT_DATA_ACCURACY "IF data accuracy is guaranteed during data conversions" ON
HDF5_WANT_DCONV_EXCEPTION "exception handling functions is checked during data conversions" ON
@@ -639,13 +665,13 @@ if (HDF5_TEST_VFD)
HDF5_TEST_FHEAP_VFD "Execute fheap test with different VFDs" ON
---------------- External Library Options ---------------------
-HDF5_ALLOW_EXTERNAL_SUPPORT "Allow External Library Building" "NO"
-HDF5_ENABLE_SZIP_SUPPORT "Use SZip Filter" OFF
-HDF5_ENABLE_Z_LIB_SUPPORT "Enable Zlib Filters" OFF
-ZLIB_USE_EXTERNAL "Use External Library Building for ZLIB" 0
-SZIP_USE_EXTERNAL "Use External Library Building for SZIP" 0
+HDF5_ALLOW_EXTERNAL_SUPPORT "Allow External Library Building (NO GIT TGZ)" "NO"
+HDF5_ENABLE_SZIP_SUPPORT "Use SZip Filter" OFF
+HDF5_ENABLE_Z_LIB_SUPPORT "Enable Zlib Filters" OFF
+ZLIB_USE_EXTERNAL "Use External Library Building for ZLIB" 0
+SZIP_USE_EXTERNAL "Use External Library Building for SZIP" 0
if (HDF5_ENABLE_SZIP_SUPPORT)
- HDF5_ENABLE_SZIP_ENCODING "Use SZip Encoding" OFF
+ HDF5_ENABLE_SZIP_ENCODING "Use SZip Encoding" OFF
if (WINDOWS)
H5_DEFAULT_PLUGINDIR "%ALLUSERSPROFILE%/hdf5/lib/plugin"
else ()
diff --git a/release_docs/INSTALL_Cygwin.txt b/release_docs/INSTALL_Cygwin.txt
index ddffcf1..74f494c 100644
--- a/release_docs/INSTALL_Cygwin.txt
+++ b/release_docs/INSTALL_Cygwin.txt
@@ -66,12 +66,11 @@ Preconditions:
2.2.2 Szip
The HDF5 library has a predefined compression filter that uses
the extended-Rice lossless compression algorithm for chunked
- datatsets. For more information about Szip compression and
- license terms see
- http://hdfgroup.org/HDF5/doc_resource/SZIP/index.html.
+          datasets. For more information on Szip compression, license terms,
+          and obtaining the Szip source code, see:
+
+ https://portal.hdfgroup.org/display/HDF5/Szip+Compression+in+HDF+Products
- The latest supported public release of SZIP is available from
- ftp://ftp.hdfgroup.org/lib-external/szip/2.1.
2.3 Additional Utilities
@@ -266,4 +265,5 @@ Build, Test and Install HDF5 on Cygwin
-----------------------------------------------------------------------
-Need Further assistance, email help@hdfgroup.org
+ HDF Forum: https://forum.hdfgroup.org/
+ HDF Helpdesk: https://portal.hdfgroup.org/display/support/The+HDF+Help+Desk
diff --git a/release_docs/INSTALL_parallel b/release_docs/INSTALL_parallel
index f32fffc..d3d7830 100644
--- a/release_docs/INSTALL_parallel
+++ b/release_docs/INSTALL_parallel
@@ -102,7 +102,7 @@ qsub -I -q debug -l mppwidth=8
- configure HDF5:
RUNSERIAL="aprun -q -n 1" RUNPARALLEL="aprun -q -n 6" FC=ftn CC=cc /path/to/source/configure --enable-fortran --enable-parallel --disable-shared
- RUNSERIAL and RUNPARALLEL tells the library how it should launch programs that are part of the build procedure.
+ RUNSERIAL and RUNPARALLEL tell the library how it should launch programs that are part of the build procedure.
- Compile HDF5:
gmake
@@ -155,12 +155,16 @@ to run a parallel application on one processor and on many processors. If the
compiler is `mpicc' and the user hasn't specified values for RUNSERIAL and
RUNPARALLEL then configure chooses `mpiexec' from the same directory as `mpicc':
- RUNSERIAL: /usr/local/mpi/bin/mpiexec -np 1
- RUNPARALLEL: /usr/local/mpi/bin/mpiexec -np $${NPROCS:=6}
+ RUNSERIAL: mpiexec -n 1
+ RUNPARALLEL: mpiexec -n $${NPROCS:=6}
The `$${NPROCS:=6}' will be substituted with the value of the NPROCS
environment variable at the time `make check' is run (or the value 6).
+Note that some MPI implementations (e.g. OpenMPI 4.0) disallow oversubscribing
+nodes by default, so you'll have to either set NPROCS equal to the number of
+processors available (or fewer) or redefine RUNPARALLEL with the appropriate
+flag(s) (--oversubscribe in OpenMPI).
4. Parallel test suite
----------------------
diff --git a/release_docs/README_HDF5_CMake b/release_docs/README_HDF5_CMake
new file mode 100644
index 0000000..5737624
--- /dev/null
+++ b/release_docs/README_HDF5_CMake
@@ -0,0 +1,23 @@
+This tar file contains
+
+ build-unix.sh script to build HDF5 with CMake on unix machines
+ build-unix-hpc.sh script to build HDF5 with CMake on unix machines and run
+ tests with batch scripts (sbatch).
+ CTestScript.cmake
+ HDF5config.cmake CMake scripts for building HDF5
+ HDF5options.cmake
+ hdf5-1.13.0 HDF5 1.13.0 source
+ SZip.tar.gz source for building SZIP
+ ZLib.tar.gz source for building ZLIB
+
+For more information about building HDF5 with CMake, see USING_HDF5_CMake.txt in
+hdf5-1.13.0/release_docs, or
+https://portal.hdfgroup.org/display/support/Building+HDF5+with+CMake.
+
+For more information about building HDF5 with CMake on HPC machines, including
+cross compiling on Cray XC40, see README_HPC in hdf5-1.13.0/release_docs.
+
+
+
+
+
diff --git a/release_docs/README_HPC b/release_docs/README_HPC
new file mode 100644
index 0000000..67a5d6c
--- /dev/null
+++ b/release_docs/README_HPC
@@ -0,0 +1,206 @@
+************************************************************************
+* Using CMake to build and test HDF5 source on HPC machines *
+************************************************************************
+
+ Contents
+
+Section I: Prerequisites
+Section II: Obtain HDF5 source
+Section III: Using ctest command to build and test
+Section IV: Cross compiling
+Section V: Manual alternatives
+Section VI: Other cross compiling options
+
+************************************************************************
+
+========================================================================
+I. Prerequisites
+========================================================================
+ 1. Create a working directory that is accessible from the compute nodes for
+ running tests; the working directory should be in a scratch space or a
+ parallel file system space since testing will use this space. Building
+ from HDF5 source in a 'home' directory typically results in test
+ failures and should be avoided.
+
+  2. Load modules for the desired compilers and a module for cmake version 3.10
+     or greater, and set any needed environment variables for compilers (e.g.,
+     CC, FC, CXX). Unload any problematic modules (e.g., craype-hugepages2M).
+
+========================================================================
+II. Obtain HDF5 source
+========================================================================
+Obtain HDF5 source code from the HDF5 repository using a git command or
+from a release tar file in a working directory:
+
+ git clone https://git@bitbucket.hdfgroup.org/scm/hdffv/hdf5.git
+ [-b branch] [source directory]
+
+If no branch is specified, then the 'develop' version will be checked out.
+If no source directory is specified, then the source will be located in the
+'hdf5' directory. The CMake scripts expect the source to be in a directory
+named hdf5-<version string>, where 'version string' uses the format '1.xx.xx'.
+For example, for the current 'develop' version, the "hdf5" directory should
+be renamed "hdf5-1.11.4", or for the first hdf5_1_10_5 pre-release version,
+it should be renamed "hdf5-1.10.5-pre1".
+
+If the version number is not known a priori, the version string
+can be obtained by running bin/h5vers in the top level directory of the source clone, and
+the source directory renamed 'hdf5-<version string>'.
+
+Release or snapshot tar files may also be extracted and used.
+
+========================================================================
+III. Using ctest command to build and test
+========================================================================
+
+The ctest command [1]:
+
+ ctest -S HDF5config.cmake,BUILD_GENERATOR=Unix -C Release -V -O hdf5.log
+
+will configure, build, test and package HDF5 from the downloaded source
+after the setup steps outlined below are followed.
+
+CMake option variables are available to allow running test programs in batch
+scripts on compute nodes and to cross-compile for compute node hardware using
+a cross-compiling emulator. The setup steps will make default settings for
+parallel or serial only builds available to the CMake command.
+
+ 1. For the current 'develop' version the "hdf5" directory should be renamed
+ "hdf5-1.11.4".
+
+  2. Three cmake script files need to be copied to the working directory, or
+     have symbolic links to them created in the working directory:
+
+         hdf5-1.11.4/config/cmake/scripts/HDF5config.cmake
+         hdf5-1.11.4/config/cmake/scripts/CTestScript.cmake
+         hdf5-1.11.4/config/cmake/scripts/HDF5options.cmake
+
+ 3. The resulting contents of the working directory are then:
+
+ CTestScript.cmake
+ HDF5config.cmake
+ HDF5options.cmake
+ hdf5-1.11.4
+
+ Additionally, when the ctest command runs [1], it will add a build directory
+ in the working directory.
+
+ 4. The following options (among others) can be added to the ctest
+ command [1], following '-S HDF5config.cmake,' and separated by ',':
+
+ HPC=sbatch (or 'bsub' or 'raybsub') indicates which type of batch
+                          files to use for running tests. If omitted, tests
+                          will run on the local machine or login node.
+
+ KNL=true to cross-compile for KNL compute nodes on CrayXC40
+ (see section IV)
+
+ MPI=true enables parallel, disables c++, java, and threadsafe
+
+ LOCAL_BATCH_SCRIPT_ARGS="--account=<account#>" to supply user account
+ information for batch jobs
+
+     Any of the three HPC options will add BUILD_GENERATOR=Unix.
+ An example ctest command for a parallel build on a system using sbatch is
+
+ ctest -S HDF5config.cmake,HPC=sbatch,MPI=true -C Release -V -O hdf5.log
+
+ Adding the option 'KNL=true' to the above list will compile for KNL nodes,
+ for example, on 'mutrino' and other CrayXC40 machines.
+
+     Changing -V to -VV will produce more logging information in hdf5.log.
+
+ More detailed CMake information can be found in the HDF5 source in
+ release_docs/INSTALL_CMake.txt.
+
+========================================================================
+IV. Cross-compiling
+========================================================================
+For cross-compiling on Cray, set environment variables CC=cc, FC=ftn
+and CXX=CC (for c++) after all compiler modules are loaded since switching
+compiler modules may unset or reset these variables.
+
+CMake provides options for cross-compiling. To cross-compile for KNL hardware
+on mutrino and other CrayXC40 machines, add HPC=sbatch,KNL=true to the
+ctest command line. This will set the following options from the
+config/cmake/scripts/HPC/sbatch-HDF5options.cmake file:
+
+ set (COMPILENODE_HWCOMPILE_MODULE "craype-haswell")
+ set (COMPUTENODE_HWCOMPILE_MODULE "craype-mic-knl")
+ set (LOCAL_BATCH_SCRIPT_NAME "knl_ctestS.sl")
+ set (LOCAL_BATCH_SCRIPT_PARALLEL_NAME "knl_ctestP.sl")
+ set (ADD_BUILD_OPTIONS "${ADD_BUILD_OPTIONS} -DCMAKE_TOOLCHAIN_FILE:STRING=config/toolchain/crayle.cmake")
+
+On the Cray XC40 the craype-haswell module is needed for configuring, and the
+craype-mic-knl module is needed for building to run on the KNL nodes. CMake
+with the above options will swap modules after configuring is complete,
+but before compiling programs for KNL.
+
+The sbatch script arguments for running jobs on KNL nodes may differ on CrayXC40
+machines other than mutrino. The batch scripts knl_ctestS.sl and knl_ctestP.sl
+have the correct arguments for mutrino: "#SBATCH -p knl -C quad,cache". For
+cori, another CrayXC40, that line is replaced by "#SBATCH -C knl,quad,cache".
+For cori (and other machines), the values in LOCAL_BATCH_SCRIPT_NAME and
+LOCAL_BATCH_SCRIPT_PARALLEL_NAME in the config/cmake/scripts/HPC/sbatch-HDF5options.cmake
+file can be replaced by cori_knl_ctestS.sl and cori_knl_ctestP.sl, or the lines
+can be edited in the batch files in hdf5-1.11.4/bin/batch.
+
+========================================================================
+V. Manual alternatives
+========================================================================
+If using ctest is undesirable, one can create a build directory and run the cmake
+configure command, for example
+
+"/projects/Mutrino/hpcsoft/cle6.0/common/cmake/3.10.2/bin/cmake"
+-C "<working directory>/hdf5-1.11.4/config/cmake/cacheinit.cmake"
+-DCMAKE_BUILD_TYPE:STRING=Release -DHDF5_BUILD_FORTRAN:BOOL=ON
+-DHDF5_BUILD_JAVA:BOOL=OFF
+-DCMAKE_INSTALL_PREFIX:PATH=<working directory>/HDF_Group/HDF5/1.11.4
+-DHDF5_ENABLE_Z_LIB_SUPPORT:BOOL=OFF -DHDF5_ENABLE_SZIP_SUPPORT:BOOL=OFF
+-DHDF5_ENABLE_PARALLEL:BOOL=ON -DHDF5_BUILD_CPP_LIB:BOOL=OFF
+-DHDF5_BUILD_JAVA:BOOL=OFF -DHDF5_ENABLE_THREADSAFE:BOOL=OFF
+-DHDF5_PACKAGE_EXTLIBS:BOOL=ON -DLOCAL_BATCH_TEST:BOOL=ON
+-DMPIEXEC_EXECUTABLE:STRING=srun -DMPIEXEC_NUMPROC_FLAG:STRING=-n
+-DMPIEXEC_MAX_NUMPROCS:STRING=6
+-DCMAKE_TOOLCHAIN_FILE:STRING=config/toolchain/crayle.cmake
+-DLOCAL_BATCH_SCRIPT_NAME:STRING=knl_ctestS.sl
+-DLOCAL_BATCH_SCRIPT_PARALLEL_NAME:STRING=knl_ctestP.sl -DSITE:STRING=mutrino
+-DBUILDNAME:STRING=par-knl_GCC493-SHARED-Linux-4.4.156-94.61.1.16335.0.PTF.1107299-default-x86_64
+"-GUnix Makefiles" "" "<working directory>/hdf5-1.11.4"
+
+followed by make and batch jobs to run tests.
+
+To cross-compile on CrayXC40, run the configure command with the craype-haswell
+module loaded, then switch to the craype-mic-knl module for the build process.
+
+Tests on machines using slurm can be run with
+
+"sbatch -p knl -C quad,cache ctestS.sl"
+
+or
+
+"sbatch -p knl -C quad,cache ctestP.sl"
+
+for parallel builds.
+
+Tests on machines using LSF will typically use "bsub ctestS.lsf", etc.
+
+========================================================================
+VI. Other cross compiling options
+========================================================================
+Settings for two other cross-compiling options are also in the config/toolchain
+files; these options do not seem to be necessary with the Cray PrgEnv-* modules.
+
+1. HDF5_USE_PREGEN. This option, along with the HDF5_USE_PREGEN_DIR CMake
+   variable, would allow an appropriate H5Tinit.c file, with type
+   information generated on a compute node, to be used when cross-compiling
+   for those compute nodes. The use of the variables in lines 110 and 111
+   of the HDF5options.cmake file seems to preclude needing this option with
+   the available Cray modules and CMake options.
+
+2. HDF5_BATCH_H5DETECT and associated CMake variables. This option, when
+   properly configured, will run H5detect in a batch job on a compute node
+ at the beginning of the CMake build process. It was also found to be
+ unnecessary with the available Cray modules and CMake options.
diff --git a/release_docs/RELEASE.txt b/release_docs/RELEASE.txt
index 393a9b2..c2f2ebe 100644
--- a/release_docs/RELEASE.txt
+++ b/release_docs/RELEASE.txt
@@ -1,4 +1,4 @@
-HDF5 version 1.11.3 currently under development
+HDF5 version 1.13.0 currently under development
================================================================================
@@ -48,6 +48,112 @@ New Features
Configuration:
-------------
+ - Update CMake tests to use FIXTURES
+
+      CMake test fixtures allow setup/cleanup tests and other dependency
+      requirements to be expressed as properties of tests. This is more
+      flexible and suits modern CMake code.
+
+ (ADB - 2019/07/23, HDFFV-10529)
+
+ - Windows PDB files are always installed
+
+      There are build configurations or flag settings for Windows that do not
+      generate PDB files. If those files are not generated, the install
+      utility fails because the expected PDB files are not found. An optional
+      variable, DISABLE_PDB_FILES, was added to skip installing PDB files.
+
+ (ADB - 2019/07/17, HDFFV-10424)
+
+ - Add mingw CMake support with a toolchain file
+
+      A number of mingw issues have been collected under HDFFV-10845. It was
+      decided to use the CMake cross-compiling technique of toolchain files;
+      a Linux platform with the mingw compiler stack is used for testing.
+      Only the C language is fully supported, and the error tests are skipped.
+      The C++ language works for static builds, but shared builds have an
+      issue with the mingw Standard Exception Handling library, which is not
+      available on Windows. Fortran has a common cross-compile problem with
+      the Fortran configure tests.
+
+ (ADB - 2019/07/12, HDFFV-10845, HDFFV-10595)
+
+ - Windows PDB files are installed incorrectly
+
+      For static builds, the PDB files for Windows should be installed next
+      to the static libraries in the lib folder. Also, the debug versions of
+      libraries and PDB files are now correctly built using the default
+      CMAKE_DEBUG_POSTFIX setting.
+
+ (ADB - 2019/07/09, HDFFV-10581)
+
+ - Add option to build only shared libs
+
+      A request was made to prevent building static libraries and build only
+      shared libraries. A new CMake option, ONLY_SHARED_LIBS, was added to
+      skip building static libraries. Certain utility functions are still
+      built with static libs but are not published. Tests are adjusted to use
+      the correct libraries depending on the SHARED/STATIC settings.
+
+ (ADB - 2019/06/12, HDFFV-10805)
+
+ - Add options to enable or disable building tools and tests
+
+      Configure options --enable-tests and --enable-tools were added for the
+      autotools configure. These options are enabled by default and can be
+      disabled with either --disable-tests (or --disable-tools) or
+      --enable-tests=no (or --enable-tools=no). Build time is reduced by
+      roughly 20% when tools are disabled, 35% when tests are disabled, and
+      45% when both are disabled. Re-enabling them after the initial build
+      requires running configure again with the option(s) enabled.
+
+ (LRK - 2019/06/12, HDFFV-9976)
+
+    - Changed tools tests that test the error stack
+
+      There are some use cases which can cause the error stack of tools to be
+      different from the expected output. These tests now use grepTest.cmake;
+      this was changed to allow the error file to be searched for an expected
+      string.
+
+ (ADB - 2019/04/15, HDFFV-10741)
+
+ - Keep stderr and stdout separate in tests
+
+ Changed test handling of output capture. Tests now keep the stderr
+ output separate from the stdout output. It is up to the test to decide
+ which output to check against a reference. Also added the option
+ to grep for a string in either output.
+
+ (ADB - 2018/12/12, HDFFV-10632)
+
+ - Add toolchain and cross-compile support
+
+ Added info on using a toolchain file to INSTALL_CMAKE.txt. A
+ toolchain file is also used in cross-compiling, which requires
+ CMAKE_CROSSCOMPILING_EMULATOR to be set. To help with cross-compiling
+ the fortran configure process, the HDF5UseFortran.cmake file macros
+ were improved. Fixed a Fortran configure file issue that incorrectly
+ used #cmakedefine instead of #define.
+
+ (ADB - 2018/10/04, HDFFV-10594)
+
+ - Add warning flags for Intel compilers
+
+      Identified Intel compiler specific warning flags that should be used
+      instead of the GNU flags.
+
+ (ADB - 2018/10/04, TRILABS-21)
+
+ - Add default rpath to targets
+
+      Default rpaths should be set in shared executables and
+      libraries so that dependent libraries can be loaded
+      without requiring LD_LIBRARY_PATH to be set. The default
+      path should be relative, using @rpath on macOS and $ORIGIN
+      on Linux. Windows is not affected.
+
+ (ADB - 2018/09/26, HDFFV-10594)
+
- Add missing USE_110_API_DEFAULT option.
Option USE_110_API_DEFAULT sets the default version of
@@ -82,8 +188,82 @@ New Features
(ADB - 2018/07/16)
+
Library:
--------
+ - Add S3 and HDFS VFDs to HDF5 maintenance
+
+      Fixed Windows requirements and Java tests. Windows requires CMake 3.13.
+
+      Install the openssl library (with dev files); the msi package from
+      "Shining Light Productions" is preferred:
+        - PATH should have been updated with the installation dir.
+        - Set the ENV variable OPENSSL_ROOT_DIR to the installation dir.
+        - Set the ENV variable OPENSSL_CONF to the cfg file, likely
+          %OPENSSL_ROOT_DIR%\bin\openssl.cfg
+
+      Install the libcurl library (with dev files):
+        - Download the latest released version using git:
+          https://github.com/curl/curl.git
+        - Open a Visual Studio command prompt and change to the libcurl
+          root folder.
+        - Run the "buildconf.bat" batch file.
+        - Change to the winbuild directory and run
+          nmake /f Makefile.vc mode=dll MACHINE=x64
+        - Copy the libcurl-vc-x64-release-dll-ipv6-sspi-winssl dir to C:\curl
+          (the installation dir).
+        - Set the ENV variable CURL_ROOT to C:\curl (the installation dir).
+        - Update the PATH ENV variable with %CURL_ROOT%\bin (the installation
+          bin dir).
+
+      The AWS credentials file should be in the %USERPROFILE%\.aws folder.
+      Set the ENV variable
+      "HDF5_ROS3_TEST_BUCKET_URL=https://s3.us-east-2.amazonaws.com/hdf5ros3".
+
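+      A minimal sketch of opening an S3-hosted file through the read-only
+      S3 (ros3) VFD once the above is in place. The object URL is
+      hypothetical, and the H5FD_ros3_fapl_t layout and version value shown
+      are assumptions based on the H5FDros3.h header; check that header for
+      the exact definitions.
+
+        #ifdef H5_HAVE_ROS3_VFD
+        #include "hdf5.h"
+        #include <string.h>
+
+        int main(void)
+        {
+            H5FD_ros3_fapl_t fa;    /* assumed layout from H5FDros3.h */
+            hid_t            fapl, file;
+
+            memset(&fa, 0, sizeof(fa));
+            fa.version      = 1;    /* assumed current fapl structure version */
+            fa.authenticate = 0;    /* anonymous access to a public bucket    */
+
+            fapl = H5Pcreate(H5P_FILE_ACCESS);
+            H5Pset_fapl_ros3(fapl, &fa);
+
+            /* hypothetical object in the test bucket named above */
+            file = H5Fopen("https://s3.us-east-2.amazonaws.com/hdf5ros3/example.h5",
+                           H5F_ACC_RDONLY, fapl);
+            if (file >= 0)
+                H5Fclose(file);
+            H5Pclose(fapl);
+            return 0;
+        }
+        #endif
+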
+ (ADB - 2019/09/12, HDFFV-10854)
+
+ - Added new chunk query functions
+
+ The following public functions were added to discover information about
+ the chunks in an HDF5 file.
+ herr_t H5Dget_num_chunks(dset_id, fspace_id, *nchunks)
+ herr_t H5Dget_chunk_info_by_coord(dset_id, *coord, *filter_mask, *addr, *size)
+ herr_t H5Dget_chunk_info(dset_id, fspace_id, index, *coord, *filter_mask, *addr, *size)
+
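+      A minimal usage sketch of the new queries; the 2-D chunked dataset
+      handle passed in is illustrative and assumed to be open already:
+
+        #include "hdf5.h"
+        #include <stdio.h>
+
+        /* dset: an open chunked dataset (2-D here, for illustration) */
+        static void print_chunks(hid_t dset)
+        {
+            hid_t    fspace = H5Dget_space(dset);
+            hsize_t  nchunks = 0, coord[2], size;
+            unsigned filter_mask;
+            haddr_t  addr;
+            hsize_t  i;
+
+            H5Dget_num_chunks(dset, fspace, &nchunks);
+            for (i = 0; i < nchunks; i++) {
+                H5Dget_chunk_info(dset, fspace, i, coord, &filter_mask,
+                                  &addr, &size);
+                printf("chunk %llu at [%llu, %llu]: %llu bytes on disk\n",
+                       (unsigned long long)i, (unsigned long long)coord[0],
+                       (unsigned long long)coord[1], (unsigned long long)size);
+            }
+            H5Sclose(fspace);
+        }
+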
+ (BMR - 2019/06/11, HDFFV-10677)
+
+ - Improved the performance of virtual dataset I/O
+
+ Refactored the internal dataspace routines used by the virtual dataset
+ code to improve performance, especially when one of the selections
+ involved is very long and non-contiguous.
+
+ (NAF - 2019/05/31, HDFFV-10693)
+
+ - Added the ability to open files with UTF-8 file names on Windows.
+
+ The POSIX open(2) API call on Windows is limited to ASCII
+ file names. The library has been updated to convert incoming file
+      names to UTF-16 (via MultiByteToWideChar(CP_UTF8, ...)) and use
+      _wopen() instead.
+
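+      A sketch of the conversion pattern described above; this is an
+      illustrative helper, not the library's internal code:
+
+        #include <windows.h>
+        #include <fcntl.h>
+        #include <io.h>
+        #include <stdlib.h>
+
+        /* Open a file whose name is UTF-8 encoded (sketch of the pattern). */
+        static int open_utf8(const char *name, int flags, int mode)
+        {
+            int      fd    = -1;
+            int      wlen  = MultiByteToWideChar(CP_UTF8, 0, name, -1, NULL, 0);
+            wchar_t *wname = (wchar_t *)malloc((size_t)wlen * sizeof(wchar_t));
+
+            if (wname && MultiByteToWideChar(CP_UTF8, 0, name, -1, wname, wlen) > 0)
+                fd = _wopen(wname, flags, mode);
+            free(wname);
+            return fd;
+        }
+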
+ (DER - 2019/03/15, HDFFV-2714, HDFFV-3914, HDFFV-3895, HDFFV-8237, HDFFV-10413, HDFFV-10691)
+
+    - Added a new API, H5M, for map objects. It is currently not supported by
+      the native library, but can be supported by VOL connectors.
+
+ (NAF - 2019/03/01)
+
+ - Add new H5R_ref_t type for object, dataset region and _attribute_
+ references. This new type will deprecate the current hobj_ref_t
+ and hdset_reg_ref_t types for references. Added H5T_REF datatype
+ to read and write new reference types. As opposed to previous
+ reference types, reference creation no longer modifies existing
+ files. New reference types also now support references to external
+ files.
+
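+      A minimal sketch of the new reference API; the dataset path "/data" is
+      illustrative:
+
+        #include "hdf5.h"
+
+        static void reference_example(hid_t file)
+        {
+            H5R_ref_t ref;
+            hid_t     obj;
+
+            /* Reference an existing object ("/data" is illustrative);
+               the file itself is not modified by reference creation. */
+            H5Rcreate_object(file, "/data", H5P_DEFAULT, &ref);
+
+            /* References of this kind can be stored in datasets or
+               attributes of datatype H5T_STD_REF. */
+
+            obj = H5Ropen_object(&ref, H5P_DEFAULT, H5P_DEFAULT);
+            if (obj >= 0)
+                H5Oclose(obj);
+            H5Rdestroy(&ref);  /* new references hold resources; destroy them */
+        }
+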
+ (JS - 2019/10/08)
+
+ - Remove H5I_REFERENCE from the library
+
+ This ID class was never used by the library and has been removed.
+
+ (DER - 2018/12/08, HDFFV-10252)
+
- Allow pre-generated H5Tinit.c and H5make_libsettings.c to be used.
Rather than always running H5detect and generating H5Tinit.c and
@@ -94,100 +274,86 @@ New Features
Parallel Library:
-----------------
- -
+    - Changed the default behavior in parallel when reading the same dataset
+      in its entirety (i.e. an H5S_ALL dataset selection) collectively on all
+      processes. The dataset must be contiguous, less than 2GB, and of an
+      atomic datatype. With the new behavior, the HDF5 library uses an
+      MPI_Bcast to pass the data read from the disk by the root process to
+      the remaining processes in the MPI communicator associated with the
+      HDF5 file.
+
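+      A sketch of the kind of read this change affects; the dataset name and
+      element type are illustrative, and the optimization is transparent to
+      the application:
+
+        #include "hdf5.h"
+
+        /* Every rank reads the whole contiguous dataset collectively; under
+           the conditions above the library now reads once and broadcasts. */
+        static void read_whole_dataset(hid_t file, double *buf)
+        {
+            hid_t dset = H5Dopen2(file, "/data", H5P_DEFAULT);
+            hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
+
+            H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);
+            H5Dread(dset, H5T_NATIVE_DOUBLE, H5S_ALL, H5S_ALL, dxpl, buf);
+
+            H5Pclose(dxpl);
+            H5Dclose(dset);
+        }
+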
+ (MSB - 2019/01/02, HDFFV-10652)
Fortran Library:
----------------
- -
+ - Added new Fortran derived type, c_h5o_info_t, which is interoperable with
+ C's h5o_info_t. This is needed for callback functions which
+ pass C's h5o_info_t data type definition.
- C++ Library:
- ------------
- - New wrappers
+ (MSB, 2019/01/08, HDFFV-10443)
- Added the following items:
+ - Added new Fortran API, H5gmtime, which converts (C) 'time_t' structure
+ to Fortran DATE AND TIME storage format.
- + Class DSetAccPropList for the dataset access property list.
+ (MSB, 2019/01/08, HDFFV-10443)
- + Wrapper for H5Dget_access_plist to class DataSet
- // Gets the access property list of this dataset.
- DSetAccPropList getAccessPlist() const;
+ - Added new Fortran 'fields' optional parameter to: h5ovisit_f, h5oget_info_by_name_f,
+ h5oget_info, h5oget_info_by_idx and h5ovisit_by_name_f.
- + Wrappers for H5Pset_chunk_cache and H5Pget_chunk_cache to class DSetAccPropList
- // Sets the raw data chunk cache parameters.
- void setChunkCache(size_t rdcc_nslots, size_t rdcc_nbytes, double rdcc_w0)
+ (MSB, 2019/01/08, HDFFV-10443)
- // Retrieves the raw data chunk cache parameters.
- void getChunkCache(size_t &rdcc_nslots, size_t &rdcc_nbytes, double &rdcc_w0)
+ C++ Library:
+ ------------
+ - Added new wrappers for H5Pset/get_create_intermediate_group()
+ LinkCreatPropList::setCreateIntermediateGroup()
+ LinkCreatPropList::getCreateIntermediateGroup()
- + New operator!= to class DataType (HDFFV-10472)
- // Determines whether two datatypes are not the same.
- bool operator!=(const DataType& compared_type)
+ (BMR - 2019/04/22, HDFFV-10622)
- + Wrappers for H5Oget_info2, H5Oget_info_by_name2, and H5Oget_info_by_idx2
- (HDFFV-10458)
+ - Added new wrapper for H5Ovisit2()
+ H5Object::visit()
- // Retrieves information about an HDF5 object.
- void getObjinfo(H5O_info_t& objinfo, unsigned fields = H5O_INFO_BASIC) const;
+ (BMR - 2019/02/14, HDFFV-10532)
- // Retrieves information about an HDF5 object, given its name.
- void getObjinfo(const char* name, H5O_info_t& objinfo,
- unsigned fields = H5O_INFO_BASIC,
- const LinkAccPropList& lapl = LinkAccPropList::DEFAULT) const;
- void getObjinfo(const H5std_string& name, H5O_info_t& objinfo,
- unsigned fields = H5O_INFO_BASIC,
- const LinkAccPropList& lapl = LinkAccPropList::DEFAULT) const;
- // Retrieves information about an HDF5 object, given its index.
- void getObjinfo(const char* grp_name, H5_index_t idx_type,
- H5_iter_order_t order, hsize_t idx, H5O_info_t& objinfo,
- unsigned fields = H5O_INFO_BASIC,
- const LinkAccPropList& lapl = LinkAccPropList::DEFAULT) const;
- void getObjinfo(const H5std_string& grp_name, H5_index_t idx_type,
- H5_iter_order_t order, hsize_t idx, H5O_info_t& objinfo,
- unsigned fields = H5O_INFO_BASIC,
- const LinkAccPropList& lapl = LinkAccPropList::DEFAULT) const;
+ Java Library:
+ ----------------
+ - Fix a failure in JUnit-TestH5P on 32-bit architectures
- (BMR - 2018/07/22, HDFFV-10150, HDFFV-10458, HDFFV-1047)
+ (JTH - 2019/04/30)
+ - Duplicate the data read/write functions of Datasets for Attributes.
- Java Library:
- ----------------
- - JNI native library dependencies
+ Region references could not be displayed for attributes as they could
+ for datasets. Datasets had overloaded read and write functions for different
+ datatypes that were not available for attributes. After adding similar
+ functions, attribute region references work normally.
- The build for the hdf5_java native library used the wrong
- hdf5 target library for CMake builds. Correcting the hdf5_java
- library to build with the shared hdf5 library required testing
- paths to change also.
+ (ADB - 2018/12/12, HDFVIEW-4)
- (ADB - 2018/08/31, HDFFV-10568)
+ - Removed H5I_REFERENCE from the Java wrappers
- - Java iterator callbacks
+ This ID class was never used by the library and has been removed
+ from the Java wrappers.
- Change global callback object to a small stack structure in order
- to fix a runtime crash. This crash was discovered when iterating
- through a file with nested group members. The global variable
- visit_callback is overwritten when recursion starts. When recursion
- completes, visit_callback will be pointing to the wrong callback method.
+ (DER - 2018/12/08, HDFFV-10252)
- (ADB - 2018/08/15, HDFFV-10536)
- - Java HDFLibraryException class
+ Tools:
+ ------
+ - h5repack was fixed to repack datasets with external storage
+ to other types of storage.
- Change parent class from Exception to RuntimeException.
+ New test added to repack files and verify the correct data using h5diff.
- (ADB - 2018/07/30, HDFFV-10534)
+ (JS - 2019/09/25, HDFFV-10408)
+ (ADB - 2019/10/02, HDFFV-10918)
- - JNI Read and Write
+ - h5dump was fixed for 128-bit floats, but was missing a test.
- Refactored variable-length functions, H5DreadVL and H5AreadVL,
- to correct dataset and attribute reads. New write functions,
- H5DwriteVL and H5AwriteVL, are under construction.
+ New test greps for the first 15 numbers of the 128-bit value.
- (ADB - 2018/06/02, HDFFV-10519)
+ (ADB - 2019/06/23, HDFFV-9407)
- Tools:
- ------
- -
High-Level APIs:
---------------
@@ -214,6 +380,178 @@ Bug Fixes since HDF5-1.10.3 release
Library
-------
+ - Fixed the iteration error in test_versionbounds() in test/dtypes.c
+
+ The test was supposed to loop through all valid combinations of
+      low and high bounds in the array versions[], but they were always set
+      to H5F_LIBVER_EARLIEST and never changed.
+
+ The problem was fixed by indexing low and high into the array versions[].
+
+ (VC - 2019/09/30)
+
+ - Fixed the slowness of regular hyperslab selection in a chunked dataset
+
+ It was reported that the selection of every 10th element from a 20G
+ chunked dataset was extremely slow and sometimes could hang the system.
+ The problem was due to the iteration and the building of the span tree
+ for all the selected elements in file space.
+
+      Since the selected elements are read into a 1-D contiguous, single-block
+      memory space, the problem was fixed by building regular hyperslab
+      selections in memory space for the selected elements in file space.
+
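+      A sketch of the access pattern in question, reading every 10th element
+      of a 1-D dataset into a contiguous buffer (names and the element type
+      are illustrative):
+
+        #include "hdf5.h"
+
+        static void read_every_tenth(hid_t dset, double *buf, hsize_t nselected)
+        {
+            hid_t   fspace = H5Dget_space(dset);
+            hid_t   mspace = H5Screate_simple(1, &nselected, NULL);
+            hsize_t start = 0, stride = 10, count = nselected, block = 1;
+
+            /* select every 10th element in the file dataspace */
+            H5Sselect_hyperslab(fspace, H5S_SELECT_SET, &start, &stride,
+                                &count, &block);
+            H5Dread(dset, H5T_NATIVE_DOUBLE, mspace, fspace, H5P_DEFAULT, buf);
+
+            H5Sclose(mspace);
+            H5Sclose(fspace);
+        }
+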
+ (VC - 2019/09/26, HDFFV-10585)
+
+ - Fixed a bug caused by bad tag value when condensing object header
+ messages
+
+      There was an assertion failure when moving messages while running a
+      user test program with library release hdf5-1.10.4. It occurred because
+      the tag value (the object header's address) was not set up when entering
+      the library routine H5O__chunk_update_idx(), which eventually
+      verifies the metadata tag value when protecting the object header.
+
+ The problem was fixed by replacing FUNC_ENTER_PACKAGE in H5O__chunk_update_idx()
+ with FUNC_ENTER_PACKAGE_TAG(oh->cache_info.addr) to set up the metadata tag.
+
+ (VC - 2019/08/23, HDFFV-10873)
+
+ - Fixed the test failure from test_metadata_read_retry_info() in
+ test/swmr.c
+
+ The test failure is due to the incorrect number of bins returned for
+ retry info (info.nbins). The # of bins expected for 101 read attempts
+ is 3 instead of 2. The routine H5F_set_retries() in src/H5Fint.c
+ calculates the # of bins by first obtaining the log10 value for
+ (read attempts - 1). For PGI/19, the log10 value for 100 read attempts
+ is 1.9999999999999998 instead of 2.00000. When casting the log10 value
+ to unsigned later on, the decimal part is chopped off causing the test
+ failure.
+
+ This was fixed by obtaining the rounded integer value (HDceil) for the
+ log10 value of read attempts first before casting the result to unsigned.
+
+ (VC - 2019/8/14, HDFFV-10813)
+
+    - Fixed an issue where a file could be created with non-default file
+      space info while the library high bound was set to H5F_LIBVER_V18.
+
+ When setting non-default file space info in fcpl via
+ H5Pset_file_space_strategy() and then creating a file with
+ both high and low library bounds set to
+ H5F_LIBVER_V18 in fapl, the library succeeds in creating the file.
+ File creation should fail because the feature of setting non-default
+ file space info does not exist in library release 1.8 or earlier.
+
+ This was fixed by setting and checking the proper version in the
+ file space info message based on the library low and high bounds
+ when creating and opening the HDF5 file.
+
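+      A sketch of the scenario; with the fix, a creation like this is
+      expected to fail, since paged (non-default) file space management is
+      one example of file space info that cannot be represented with 1.8
+      bounds:
+
+        #include "hdf5.h"
+
+        static hid_t create_paged_v18_file(const char *name)
+        {
+            hid_t fcpl = H5Pcreate(H5P_FILE_CREATE);
+            hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
+            hid_t file;
+
+            /* non-default file space strategy in the fcpl ... */
+            H5Pset_file_space_strategy(fcpl, H5F_FSPACE_STRATEGY_PAGE, 1,
+                                       (hsize_t)1);
+            /* ... combined with 1.8 low/high bounds in the fapl */
+            H5Pset_libver_bounds(fapl, H5F_LIBVER_V18, H5F_LIBVER_V18);
+
+            file = H5Fcreate(name, H5F_ACC_TRUNC, fcpl, fapl); /* should fail */
+
+            H5Pclose(fapl);
+            H5Pclose(fcpl);
+            return file;
+        }
+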
+ (VC - 2019/6/25, HDFFV-10808)
+
+ - When iterating over an old-style group (i.e., when not using the latest
+ file format) of size 0, a NULL pointer representing the empty links
+ table would be sent to qsort(3) for sorting, which is undefined behavior.
+
+ Iterating over an empty group is explicitly tested in the links test.
+ This has not caused any failures to date and was flagged by gcc's
+ -fsanitize=undefined.
+
+ The library no longer attempts to sort an empty array.
+
+ (DER - 2019/06/18, HDFFV-10829)
+
+ - Fixed an issue where copying a version 1.8 dataset between files using
+ H5Ocopy fails due to an incompatible fill version
+
+ When using the HDF5 1.10.x H5Ocopy() API call to copy a version 1.8
+ dataset to a file created with both high and low library bounds set to
+ H5F_LIBVER_V18, the H5Ocopy() call will fail with the error stack indicating
+ that the fill value version is out of bounds.
+
+ This was fixed by changing the fill value message version to H5O_FILL_VERSION_3
+ (from H5O_FILL_VERSION_2) for H5F_LIBVER_V18.
+
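+      A sketch of the previously failing scenario; file and object names are
+      illustrative:
+
+        #include "hdf5.h"
+
+        static void copy_dataset_to_v18_file(const char *src_name,
+                                             const char *dst_name)
+        {
+            hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
+            hid_t src, dst;
+
+            H5Pset_libver_bounds(fapl, H5F_LIBVER_V18, H5F_LIBVER_V18);
+
+            src = H5Fopen(src_name, H5F_ACC_RDONLY, H5P_DEFAULT);
+            dst = H5Fcreate(dst_name, H5F_ACC_TRUNC, H5P_DEFAULT, fapl);
+
+            /* copy a version 1.8 dataset ("/data" is illustrative) */
+            H5Ocopy(src, "/data", dst, "/data", H5P_DEFAULT, H5P_DEFAULT);
+
+            H5Fclose(dst);
+            H5Fclose(src);
+            H5Pclose(fapl);
+        }
+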
+ (VC - 2019/6/14, HDFFV-10800)
+
+    - Some oversights in the index-iterating area of the library caused
+      a callback function to continue iterating even though it was supposed
+      to stop.
+
+ Added the returned value check to the for loop's conditions in
+ H5EA_iterate(), H5FA_iterate(), and H5D__none_idx_iterate(). The
+ iteration now stops when it should.
+
+ (BMR - 2019/06/11, HDFFV-10661)
+
+ - Fixed a bug that would cause an error or cause fill values to be
+ incorrectly read from a chunked dataset using the "single chunk" index if
+ the data was held in cache and there was no data on disk.
+
+ (NAF - 2019/03/06)
+
+ - Fixed a bug that could cause an error or cause fill values to be
+ incorrectly read from a dataset that was written to using H5Dwrite_chunk
+ if the dataset was not closed after writing.
+
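+      A sketch of the affected sequence: one chunk is written directly and
+      then read back through the normal path without closing the dataset in
+      between (the dataset, chunk geometry, and datatype are illustrative):
+
+        #include "hdf5.h"
+
+        static void direct_chunk_then_read(hid_t dset, const int *chunk,
+                                           size_t chunk_bytes, int *buf)
+        {
+            hsize_t offset[2] = {0, 0};  /* logical offset of the chunk */
+
+            /* filters == 0: the chunk data is written unfiltered */
+            H5Dwrite_chunk(dset, H5P_DEFAULT, 0, offset, chunk_bytes, chunk);
+
+            /* read back without closing the dataset first */
+            H5Dread(dset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, H5P_DEFAULT, buf);
+        }
+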
+ (NAF - 2019/03/06, HDFFV-10716)
+
+ - Fixed memory leak in scale offset filter
+
+ In a special case where the MinBits is the same as the number of bits in
+ the datatype's precision, the filter's data buffer was not freed, causing
+      the memory usage to grow. In general the buffer was freed correctly. The
+      MinBits value is the minimal number of bits needed to store the data
+      values. Please see the reference manual for H5Pset_scaleoffset for
+      details.
+
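+      For reference, a minimal sketch of enabling the scale-offset filter on
+      a dataset creation property list, letting the filter compute MinBits
+      itself (the chunk size is illustrative):
+
+        #include "hdf5.h"
+
+        static hid_t make_scaleoffset_dcpl(void)
+        {
+            hsize_t chunk[1] = {1024};   /* illustrative chunk size */
+            hid_t   dcpl = H5Pcreate(H5P_DATASET_CREATE);
+
+            H5Pset_chunk(dcpl, 1, chunk);
+            /* H5Z_SO_INT_MINBITS_DEFAULT lets the filter calculate MinBits */
+            H5Pset_scaleoffset(dcpl, H5Z_SO_INT, H5Z_SO_INT_MINBITS_DEFAULT);
+            return dcpl;
+        }
+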
+ (RL - 2019/3/4, HDFFV-10705)
+
+ - Fix hangs with collective metadata reads during chunked dataset I/O
+
+ In the parallel library, it was discovered that when a particular
+ sequence of operations following a pattern of:
+
+ "write to chunked dataset" -> "flush file" -> "read from dataset"
+
+ occurred with collective metadata reads enabled, hangs could be
+ observed due to certain MPI ranks not participating in the collective
+ metadata reads.
+
+ To fix the issue, collective metadata reads are now disabled during
+ chunked dataset raw data I/O.
+
+ (JTH - 2019/02/11, HDFFV-10563, HDFFV-10688)
+
+ - Performance issue when closing an object
+
+      The slowdown was due to searching the "tag_list" to find
+      out the "corked" status of an object and to "uncork" it if so.
+
+      Performance was improved by skipping the search of the "tag_list"
+      if there are no "corked" objects when closing an object.
+
+ (VC - 2019/2/6)
+
+ - Fixed a potential invalid memory access and failure that could occur when
+ decoding an unknown object header message (from a future version of the
+ library).
+
+ (NAF - 2019/01/07)
+
+ - Deleting attributes in dense storage
+
+ The library aborts with "infinite loop closing library" after
+ attributes in dense storage are created and then deleted.
+
+ When deleting the attribute nodes from the name index v2 B-tree,
+ if an attribute is found in the intermediate B-tree nodes,
+ which may be merged/redistributed in the process, we need to
+ free the dynamically allocated spaces for the intermediate
+ decoded attribute.
+
+ (VC - 2018/12/26, HDFFV-10659)
+
- Allow H5detect and H5make_libsettings to take a file as an argument.
Rather than only writing to stdout, add a command argument to name
@@ -241,9 +579,72 @@ Bug Fixes since HDF5-1.10.3 release
(JTH - 2018/08/25, HDFFV-10501)
+ - fcntl(2)-based file locking incorrectly passed the lock argument struct
+ instead of a pointer to the struct, causing errors on systems where
+ flock(2) is not available.
+
+ File locking is used when files are opened to enforce SWMR semantics. A
+ lock operation takes place on all file opens unless the
+ HDF5_USE_FILE_LOCKING environment variable is set to the string "FALSE".
+ flock(2) is preferentially used, with fcntl(2) locks as a backup if
+ flock(2) is unavailable on a system (if neither is available, the lock
+ operation fails). On these systems, the file lock will often fail, which
+ causes HDF5 to not open the file and report an error.
+
+ This bug only affects POSIX systems. Win32 builds on Windows use a no-op
+ locking call which always succeeds. Systems which exhibit this bug will
+ have H5_HAVE_FCNTL defined but not H5_HAVE_FLOCK in the configure output.
+
+ This bug affects HDF5 1.10.0 through 1.10.5.
+
+ fcntl(2)-based file locking now correctly passes the struct pointer.
+
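+      A sketch of the corrected call pattern (not the library's exact code):
+      fcntl(2) expects a pointer to the struct flock.
+
+        #include <fcntl.h>
+        #include <stdio.h>     /* SEEK_SET */
+
+        static int lock_whole_file(int fd)
+        {
+            struct flock fl;
+
+            fl.l_type   = F_WRLCK;    /* exclusive lock           */
+            fl.l_whence = SEEK_SET;
+            fl.l_start  = 0;
+            fl.l_len    = 0;          /* 0 means "to end of file" */
+            fl.l_pid    = 0;
+
+            /* pointer to the struct, not the struct itself */
+            return fcntl(fd, F_SETLK, &fl);
+        }
+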
+ (DER - 2019/08/27, HDFFV-10892)
+
+
+ Java Library:
+ ----------------
+ - JNI native library dependencies
+
+ The build for the hdf5_java native library used the wrong
+ hdf5 target library for CMake builds. Correcting the hdf5_java
+ library to build with the shared hdf5 library required testing
+ paths to change also.
+
+      (ADB - 2018/08/31, HDFFV-10568)
+
+    - Java iterator callbacks
+
+ Change global callback object to a small stack structure in order
+ to fix a runtime crash. This crash was discovered when iterating
+ through a file with nested group members. The global variable
+ visit_callback is overwritten when recursion starts. When recursion
+ completes, visit_callback will be pointing to the wrong callback method.
+
+ (ADB - 2018/08/15, HDFFV-10536)
+
+ - Java HDFLibraryException class
+
+ Change parent class from Exception to RuntimeException.
+
+ (ADB - 2018/07/30, HDFFV-10534)
+
+ - JNI Read and Write
+
+ Refactored variable-length functions, H5DreadVL and H5AreadVL,
+ to correct dataset and attribute reads. New write functions,
+ H5DwriteVL and H5AwriteVL, are under construction.
+
+ (ADB - 2018/06/02, HDFFV-10519)
+
Configuration
-------------
- -
+ - Correct option for default API version
+
+      The CMake options for the default API version were not mutually
+      exclusive. The multiple BOOL options were changed to a single STRING
+      option taking one of the values v16, v18, v110, or v112.
+
+ (ADB - 2019/08/12, HDFFV-10879)
Performance
-------------
@@ -251,7 +652,26 @@ Bug Fixes since HDF5-1.10.3 release
Fortran
--------
- -
+ - Added symbolic links libhdf5_hl_fortran.so to libhdf5hl_fortran.so and
+ libhdf5_hl_fortran.a to libhdf5hl_fortran.a in hdf5/lib directory for
+ autotools installs. These were added to match the name of the files
+ installed by cmake and the general pattern of hl lib files. We will
+ change the names of the installed lib files to the matching name in
+ the next major release.
+
+ (LRK - 2019/01/04, HDFFV-10596)
+
+ - Made Fortran specific subroutines PRIVATE in generic procedures.
+
+      Affected generic procedures were functions in H5A, H5D, H5P, H5R and H5T.
+
+ (MSB, 2018/12/04, HDFFV-10511)
+
+ - Fixed issue with Fortran not returning h5o_info_t field values
+ meta_size%attr%index_size and meta_size%attr%heap_size.
+
+ (MSB, 2018/1/8, HDFFV-10443)
+
Tools
-----
@@ -279,6 +699,11 @@ Bug Fixes since HDF5-1.10.3 release
Testing
-------
+ - Fixed a test failure in testpar/t_dset.c caused by
+ the test trying to use the parallel filters feature
+ on MPI-2 implementations.
+
+ (JTH, 2019/2/7)
Bug Fixes since HDF5-1.10.2 release
==================================
@@ -424,6 +849,19 @@ Bug Fixes since HDF5-1.10.2 release
(DER - 2018/02/26, HDFFV-10356)
+ - Inappropriate linking with deprecated MPI C++ libraries
+
+ HDF5 does not define *_SKIP_MPICXX in the public headers, so applications
+ can inadvertently wind up linking to the deprecated MPI C++ wrappers.
+
+ MPICH_SKIP_MPICXX and OMPI_SKIP_MPICXX have both been defined in H5public.h
+ so this should no longer be an issue. HDF5 makes no use of the deprecated
+ MPI C++ wrappers.
+
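+      The guards now provided by H5public.h look roughly like this sketch;
+      the macros must be defined before mpi.h is included so the C++
+      bindings are skipped:
+
+        /* sketch of the guard pattern, not the header's exact text */
+        #ifndef MPICH_SKIP_MPICXX
+        #define MPICH_SKIP_MPICXX 1
+        #endif
+        #ifndef OMPI_SKIP_MPICXX
+        #define OMPI_SKIP_MPICXX 1
+        #endif
+        #include <mpi.h>
+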
+ (DER - 2019/09/17, HDFFV-10893)
+
+
+
Configuration
-------------
- Applied patches to address Cygwin build issues
@@ -437,19 +875,6 @@ Bug Fixes since HDF5-1.10.2 release
(LRK - 2018/07/18, HDFFV-10475)
- - The --enable-debug/production configure flags are listed as 'deprecated'
- when they should really be listed as 'removed'.
-
- In the autotools overhaul several years ago, we removed these flags and
- implemented a new --enable-build-mode= flag. This was done because we
- changed the semantics of the modes and didn't want users to silently
- be exposed to them. The newer system is also more flexible and us to
- add other modes (like 'clean').
-
- The --enable-debug/production flags are now listed as removed.
-
- (DER - 2018/05/31, HDFFV-10505)
-
- Moved the location of gcc attribute.
The gcc attribute(no_sanitize), named as the macro HDF_NO_UBSAN,
@@ -534,6 +959,15 @@ Bug Fixes since HDF5-1.10.2 release
Testing
-------
+ - The dt_arith test failed on IBM Power8 and Power9 machines when testing
+ conversions from or to long double types, especially when special values
+ such as infinity or NAN were involved. In some cases the results differed
+ by extremely small amounts from those on other machines, while some other
+ tests resulted in segmentation faults. These conversion tests with long
+ double types have been disabled for ppc64 machines until the problems are
+ better understood and can be properly addressed.
+
+ (SRL - 2019/01/07, TRILAB-98)
Supported Platforms
===================
@@ -559,10 +993,9 @@ Supported Platforms
Windows 7 Visual Studio 2015 w/ Intel Fortran 16 (cmake)
- Windows 7 x64 Visual Studio 2012 w/ Intel Fortran 15 (cmake)
- Visual Studio 2013 w/ Intel Fortran 15 (cmake)
+ Windows 7 x64 Visual Studio 2013
Visual Studio 2015 w/ Intel Fortran 16 (cmake)
- Visual Studio 2015 w/ Intel C, Fortran 2017 (cmake)
+ Visual Studio 2015 w/ Intel C, Fortran 2018 (cmake)
Visual Studio 2015 w/ MSMPI 8 (cmake)
Windows 10 Visual Studio 2015 w/ Intel Fortran 18 (cmake)
@@ -698,6 +1131,10 @@ The following platforms are not supported but have been tested for this release.
Known Problems
==============
+ CMake files do not behave correctly with paths containing spaces.
+ Do not use spaces in paths because the required escaping for handling spaces
+ results in very complex and fragile build files.
+ ADB - 2019/05/07
At present, metadata cache images may not be generated by parallel
applications. Parallel applications can read files with metadata cache
diff --git a/release_docs/USING_CMake_Examples.txt b/release_docs/USING_CMake_Examples.txt
index d5fae39..ea352fe 100644
--- a/release_docs/USING_CMake_Examples.txt
+++ b/release_docs/USING_CMake_Examples.txt
@@ -22,7 +22,7 @@ I. Preconditions
1. We suggest you obtain the latest CMake for windows from the Kitware
web site. The HDF5 1.10.x product requires a minimum CMake version
- of 3.2.2.
+ of 3.10.2.
2. You have installed the HDF5 library built with CMake, by executing
the HDF Install Utility (the *.msi file in the binary package for
@@ -38,12 +38,16 @@ II. Building HDF5 Examples with CMake
Files in the HDF5 install directory:
HDF5Examples folder
+ CTestScript.cmake
HDF5_Examples.cmake
+ HDF5_Examples_options.cmake
Default installation process:
Create a directory to run the examples, i.e. \test_hdf5.
Copy HDF5Examples folder to this directory.
+ Copy CTestScript.cmake to this directory.
Copy HDF5_Examples.cmake to this directory.
+ Copy HDF5_Examples_options.cmake to this directory.
The default source folder is defined as "HDF5Examples". It can be changed
with the CTEST_SOURCE_NAME script option.
The default installation folder is defined for the platform.
@@ -54,8 +58,9 @@ Default installation process:
with the CTEST_CONFIGURATION_TYPE script option. Note that this must
be the same as the value used with the -C command line option.
The default build configuration is defined to build and use static libraries.
- Shared libraries can be used with the STATIC_ONLY script option set to "NO".
- Other options can be changed by editing the HDF5_Examples.cmake file.
+
+ Shared libraries and other options can be changed by editing the
+ HDF5_Examples_options.cmake file.
If the defaults are okay, execute from this directory:
ctest -S HDF5_Examples.cmake -C Release -V -O test.log
@@ -69,9 +74,16 @@ Default installation process:
========================================================================
-III. Other changes to the HDF5_Examples.cmake file
+III. Defaults in the HDF5_Examples_options.cmake file
========================================================================
-Line 45-48: uncomment to use a source tarball or zipfile;
- Add script option "TAR_SOURCE=MySource.tar".
+#### DEFAULT: ###
+#### BUILD_SHARED_LIBS:BOOL=OFF ###
+#### HDF_BUILD_C:BOOL=ON ###
+#### HDF_BUILD_CXX:BOOL=OFF ###
+#### HDF_BUILD_FORTRAN:BOOL=OFF ###
+#### HDF_BUILD_JAVA:BOOL=OFF ###
+#### BUILD_TESTING:BOOL=OFF ###
+#### HDF_ENABLE_PARALLEL:BOOL=OFF ###
+#### HDF_ENABLE_THREADSAFE:BOOL=OFF ###