author    M. Scot Breitenfeld <brtnfld@hdfgroup.org>   2019-06-25 17:39:35 (GMT)
committer M. Scot Breitenfeld <brtnfld@hdfgroup.org>   2019-06-25 17:39:35 (GMT)
commit    35c9af8371c4da7f5327c76ddab097b442128f59 (patch)
tree      d51be51c385a9b463388ba154efc3fa37cad49e8 /release_docs
parent    c752332bfd0e9c3090f3a0c02d0253cd45c2e2ce (diff)
parent    1d8f7bf297100ec11204442708a7f670a89f3f02 (diff)
Merge branch 'develop' into parallel_vds_develop (inactive/parallel_vds_develop)
Diffstat (limited to 'release_docs')
-rw-r--r--   release_docs/HISTORY-1_10.txt           1035
-rw-r--r--   release_docs/INSTALL_CMake.txt             2
-rw-r--r--   release_docs/README_HDF5_CMake            23
-rw-r--r--   release_docs/README_HPC                  285
-rw-r--r--   release_docs/RELEASE.txt                 205
-rw-r--r--   release_docs/USING_CMake_Examples.txt     24
6 files changed, 1428 insertions, 146 deletions
diff --git a/release_docs/HISTORY-1_10.txt b/release_docs/HISTORY-1_10.txt
index 9887a54..ad8beb2 100644
--- a/release_docs/HISTORY-1_10.txt
+++ b/release_docs/HISTORY-1_10.txt
@@ -3,6 +3,8 @@ HDF5 History
This file contains development history of the HDF5 1.10 branch
+06. Release Information for hdf5-1.10.4
+05. Release Information for hdf5-1.10.3
04. Release Information for hdf5-1.10.2
03. Release Information for hdf5-1.10.1
02. Release Information for hdf5-1.10.0-patch1
@@ -10,6 +12,1039 @@ This file contains development history of the HDF5 1.10 branch
[Search on the string '%%%%' for section breaks of each release.]
+%%%%1.10.4%%%%
+
+HDF5 version 1.10.4 released on 2018-10-05
+================================================================================
+
+
+INTRODUCTION
+
+This document describes the differences between this release and the previous
+HDF5 release. It contains information on the platforms tested and known
+problems in this release. For more details check the HISTORY*.txt files in the
+HDF5 source.
+
+Note that documentation in the links below will be updated at the time of each
+final release.
+
+Links to HDF5 documentation can be found on The HDF5 web page:
+
+ https://portal.hdfgroup.org/display/HDF5/HDF5
+
+The official HDF5 releases can be obtained from:
+
+ https://www.hdfgroup.org/downloads/hdf5/
+
+Changes from Release to Release and New Features in the HDF5-1.10.x release series
+can be found at:
+
+ https://portal.hdfgroup.org/display/HDF5/HDF5+Application+Developer%27s+Guide
+
+If you have any questions or comments, please send them to the HDF Help Desk:
+
+ help@hdfgroup.org
+
+
+CONTENTS
+
+- New Features
+- Bug Fixes since HDF5-1.10.3
+- Supported Platforms
+- Tested Configuration Features Summary
+- More Tested Platforms
+- Known Problems
+- CMake vs. Autotools installations
+
+
+New Features
+============
+
+ Configuration:
+ -------------
+ - Add toolchain and cross-compile support
+
+      Added info on using a toolchain file to INSTALL_CMake.txt. A
+ toolchain file is also used in cross-compiling, which requires
+ CMAKE_CROSSCOMPILING_EMULATOR to be set. To help with cross-compiling
+ the fortran configure process, the HDF5UseFortran.cmake file macros
+ were improved. Fixed a Fortran configure file issue that incorrectly
+ used #cmakedefine instead of #define.
+
+ (ADB - 2018/10/04, HDFFV-10594)
+
+ - Add warning flags for Intel compilers
+
+      Identified Intel compiler-specific warning flags that should be used
+ instead of GNU flags.
+
+ (ADB - 2018/10/04, TRILABS-21)
+
+ - Add default rpath to targets
+
+      Default rpaths should be set in shared executables and
+      libraries to allow dependent libraries to be loaded without
+      requiring LD_LIBRARY_PATH to be set. The default path should
+      be relative, using @rpath on macOS and $ORIGIN on Linux.
+      Windows is not affected.
+
+ (ADB - 2018/09/26, HDFFV-10594)
+
+ Library:
+ --------
+ - Allow pre-generated H5Tinit.c and H5make_libsettings.c to be used.
+
+ Rather than always running H5detect and generating H5Tinit.c and
+ H5make_libsettings.c, supply a location for those files.
+
+ (ADB - 2018/09/18, HDFFV-10332)
+
+
+Bug Fixes since HDF5-1.10.3 release
+==================================
+
+ Library
+ -------
+ - Allow H5detect and H5make_libsettings to take a file as an argument.
+
+ Rather than only writing to stdout, add a command argument to name
+ the file that H5detect and H5make_libsettings will use for output.
+ Without an argument, stdout is still used, so backwards compatibility
+ is maintained.
+
+ (ADB - 2018/09/05, HDFFV-9059)
+
+ - A bug was discovered in the parallel library where an application
+ would hang if a collective read/write of a chunked dataset occurred
+ when collective metadata reads were enabled and some of the ranks
+ had no selection in the dataset's dataspace. The ranks which had no
+ selection in the dataset's dataspace called H5D__chunk_addrmap() to
+ retrieve the lowest chunk address in the dataset. This is because we
+ require reads/writes to be performed in strictly non-decreasing order
+ of chunk address in the file.
+
+ When the chunk index used was a version 1 or 2 B-tree, these
+ non-participating ranks would issue a collective MPI_Bcast() call
+ that the participating ranks would not issue, causing the hang. Since
+ the non-participating ranks are not actually reading/writing anything,
+ the H5D__chunk_addrmap() call can be safely removed and the address used
+ for the read/write can be set to an arbitrary number (0 was chosen).
+
+ (JTH - 2018/08/25, HDFFV-10501)
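+
+      As an illustration of the access pattern affected by the fix above
+      (a hedged sketch, not code from the library or its tests; a 1-D
+      chunked dataset is assumed, and 'dset', 'have_data', 'start',
+      'count', and 'buf' are placeholder names):
+
+          /* Every rank makes the collective H5Dwrite() call; a rank with
+           * nothing to contribute selects nothing in the file dataspace. */
+          hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
+          H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);
+
+          hid_t fspace = H5Dget_space(dset);
+          hid_t mspace;
+          if (have_data) {
+              H5Sselect_hyperslab(fspace, H5S_SELECT_SET, start, NULL, count, NULL);
+              mspace = H5Screate_simple(1, count, NULL);
+          }
+          else {
+              H5Sselect_none(fspace);          /* empty selection on this rank */
+              mspace = H5Screate(H5S_NULL);
+          }
+          H5Dwrite(dset, H5T_NATIVE_INT, mspace, fspace, dxpl, buf);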
+
+ Java Library:
+ ----------------
+ - JNI native library dependencies
+
+ The build for the hdf5_java native library used the wrong
+      hdf5 target library for CMake builds. Correcting the hdf5_java
+      library to build against the shared hdf5 library also required
+      changes to the testing paths.
+
+ (ADB - 2018/08/31, HDFFV-10568)
+
+ - Java iterator callbacks
+
+ Change global callback object to a small stack structure in order
+ to fix a runtime crash. This crash was discovered when iterating
+ through a file with nested group members. The global variable
+ visit_callback is overwritten when recursion starts. When recursion
+ completes, visit_callback will be pointing to the wrong callback method.
+
+ (ADB - 2018/08/15, HDFFV-10536)
+
+ - Java HDFLibraryException class
+
+ Change parent class from Exception to RuntimeException.
+
+ (ADB - 2018/07/30, HDFFV-10534)
+
+ - JNI Read and Write
+
+ Refactored variable-length functions, H5DreadVL and H5AreadVL,
+ to correct dataset and attribute reads. New write functions,
+ H5DwriteVL and H5AwriteVL, are under construction.
+
+ (ADB - 2018/06/02, HDFFV-10519)
+
+
+Supported Platforms
+===================
+
+ Linux 2.6.32-696.16.1.el6.ppc64 gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-18)
+ #1 SMP ppc64 GNU/Linux g++ (GCC) 4.4.7 20120313 (Red Hat 4.4.7-18)
+ (ostrich) GNU Fortran (GCC) 4.4.7 20120313 (Red Hat 4.4.7-18)
+ IBM XL C/C++ V13.1
+ IBM XL Fortran V15.1
+
+ Linux 3.10.0-327.10.1.el7 GNU C (gcc), Fortran (gfortran), C++ (g++)
+ #1 SMP x86_64 GNU/Linux compilers:
+ (kituo/moohan) Version 4.8.5 20150623 (Red Hat 4.8.5-4)
+ Version 4.9.3, Version 5.2.0
+ Intel(R) C (icc), C++ (icpc), Fortran (icc)
+ compilers:
+ Version 17.0.0.098 Build 20160721
+ MPICH 3.1.4 compiled with GCC 4.9.3
+
+ SunOS 5.11 32- and 64-bit Sun C 5.12 SunOS_sparc
+ (emu) Sun Fortran 95 8.6 SunOS_sparc
+ Sun C++ 5.12 SunOS_sparc
+
+ Windows 7 Visual Studio 2015 w/ Intel Fortran 16 (cmake)
+
+ Windows 7 x64 Visual Studio 2012 w/ Intel Fortran 15 (cmake)
+ Visual Studio 2013 w/ Intel Fortran 15 (cmake)
+ Visual Studio 2015 w/ Intel Fortran 16 (cmake)
+ Visual Studio 2015 w/ Intel C, Fortran 2017 (cmake)
+ Visual Studio 2015 w/ MSMPI 8 (cmake)
+
+ Windows 10 Visual Studio 2015 w/ Intel Fortran 18 (cmake)
+
+ Windows 10 x64 Visual Studio 2015 w/ Intel Fortran 18 (cmake)
+ Visual Studio 2017 w/ Intel Fortran 18 (cmake)
+
+ Mac OS X Yosemite 10.10.5 Apple clang/clang++ version 6.1 from Xcode 7.0
+ 64-bit gfortran GNU Fortran (GCC) 4.9.2
+ (osx1010dev/osx1010test) Intel icc/icpc/ifort version 15.0.3
+
+ Mac OS X El Capitan 10.11.6 Apple clang/clang++ version 7.3.0 from Xcode 7.3
+ 64-bit gfortran GNU Fortran (GCC) 5.2.0
+ (osx1011dev/osx1011test) Intel icc/icpc/ifort version 16.0.2
+
+ Mac OS Sierra 10.12.6 Apple LLVM version 8.1.0 (clang/clang++-802.0.42)
+ 64-bit gfortran GNU Fortran (GCC) 7.1.0
+ (kite) Intel icc/icpc/ifort version 17.0.2
+
+
+Tested Configuration Features Summary
+=====================================
+
+ In the tables below
+ y = tested
+ n = not tested in this release
+ C = Cluster
+ W = Workstation
+ x = not working in this release
+ dna = does not apply
+ ( ) = footnote appears below second table
+ <blank> = testing incomplete on this feature or platform
+
+Platform C F90/ F90 C++ zlib SZIP
+ parallel F2003 parallel
+Solaris2.11 32-bit n y/y n y y y
+Solaris2.11 64-bit n y/n n y y y
+Windows 7 y y/y n y y y
+Windows 7 x64 y y/y y y y y
+Windows 7 Cygwin n y/n n y y y
+Windows 7 x64 Cygwin n y/n n y y y
+Windows 10 y y/y n y y y
+Windows 10 x64 y y/y n y y y
+Mac OS X Mavericks 10.9.5 64-bit n y/y n y y y
+Mac OS X Yosemite 10.10.5 64-bit n y/y n y y y
+Mac OS X El Capitan 10.11.6 64-bit n y/y n y y y
+Mac OS Sierra 10.12.6 64-bit n y/y n y y y
+CentOS 7.2 Linux 3.10.0 x86_64 PGI n y/y n y y y
+CentOS 7.2 Linux 3.10.0 x86_64 GNU y y/y y y y y
+CentOS 7.2 Linux 3.10.0 x86_64 Intel n y/y n y y y
+Linux 2.6.32-573.18.1.el6.ppc64 n y/y n y y y
+
+
+Platform Shared Shared Shared Thread-
+ C libs F90 libs C++ libs safe
+Solaris2.11 32-bit y y y y
+Solaris2.11 64-bit y y y y
+Windows 7 y y y y
+Windows 7 x64 y y y y
+Windows 7 Cygwin n n n y
+Windows 7 x64 Cygwin n n n y
+Windows 10 y y y y
+Windows 10 x64 y y y y
+Mac OS X Mavericks 10.9.5 64-bit y n y y
+Mac OS X Yosemite 10.10.5 64-bit y n y y
+Mac OS X El Capitan 10.11.6 64-bit y n y y
+Mac OS Sierra 10.12.6 64-bit y n y y
+CentOS 7.2 Linux 3.10.0 x86_64 PGI y y y n
+CentOS 7.2 Linux 3.10.0 x86_64 GNU y y y y
+CentOS 7.2 Linux 3.10.0 x86_64 Intel y y y n
+Linux 2.6.32-573.18.1.el6.ppc64 y y y n
+
+Compiler versions for each platform are listed in the preceding
+"Supported Platforms" table.
+
+
+More Tested Platforms
+=====================
+The following platforms are not supported but have been tested for this release.
+
+ Linux 2.6.32-573.22.1.el6 GNU C (gcc), Fortran (gfortran), C++ (g++)
+ #1 SMP x86_64 GNU/Linux compilers:
+ (mayll/platypus) Version 4.4.7 20120313
+ Version 4.9.3, 5.3.0, 6.2.0
+ PGI C, Fortran, C++ for 64-bit target on
+ x86-64;
+ Version 17.10-0
+ Intel(R) C (icc), C++ (icpc), Fortran (icc)
+ compilers:
+ Version 17.0.4.196 Build 20170411
+ MPICH 3.1.4 compiled with GCC 4.9.3
+
+ Linux 3.10.0-327.18.2.el7 GNU C (gcc) and C++ (g++) compilers
+ #1 SMP x86_64 GNU/Linux Version 4.8.5 20150623 (Red Hat 4.8.5-4)
+ (jelly) with NAG Fortran Compiler Release 6.1(Tozai)
+ GCC Version 7.1.0
+ OpenMPI 3.0.0-GCC-7.2.0-2.29,
+ 3.1.0-GCC-7.2.0-2.29
+ Intel(R) C (icc) and C++ (icpc) compilers
+ Version 17.0.0.098 Build 20160721
+ with NAG Fortran Compiler Release 6.1(Tozai)
+
+ Linux 3.10.0-327.10.1.el7 MPICH 3.2 compiled with GCC 5.3.0
+ #1 SMP x86_64 GNU/Linux
+ (moohan)
+
+ Linux 2.6.32-573.18.1.el6.ppc64 MPICH mpich 3.1.4 compiled with
+ #1 SMP ppc64 GNU/Linux IBM XL C/C++ for Linux, V13.1
+ (ostrich) and IBM XL Fortran for Linux, V15.1
+
+ Debian 8.4 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1 x86_64 GNU/Linux
+ gcc, g++ (Debian 4.9.2-10) 4.9.2
+ GNU Fortran (Debian 4.9.2-10) 4.9.2
+ (cmake and autotools)
+
+ Fedora 24 4.7.2-201.fc24.x86_64 #1 SMP x86_64 x86_64 x86_64 GNU/Linux
+ gcc, g++ (GCC) 6.1.1 20160621
+ (Red Hat 6.1.1-3)
+ GNU Fortran (GCC) 6.1.1 20160621
+ (Red Hat 6.1.1-3)
+ (cmake and autotools)
+
+ Ubuntu 16.04.1 4.4.0-38-generic #57-Ubuntu SMP x86_64 GNU/Linux
+ gcc, g++ (Ubuntu 5.4.0-6ubuntu1~16.04.2)
+ 5.4.0 20160609
+ GNU Fortran (Ubuntu 5.4.0-6ubuntu1~16.04.2)
+ 5.4.0 20160609
+ (cmake and autotools)
+
+
+Known Problems
+==============
+
+ At present, metadata cache images may not be generated by parallel
+ applications. Parallel applications can read files with metadata cache
+ images, but since this is a collective operation, a deadlock is possible
+ if one or more processes do not participate.
+
+ Three tests fail with OpenMPI 3.0.0/GCC-7.2.0-2.29:
+ testphdf5 (ecdsetw, selnone, cchunk1, cchunk3, cchunk4, and actualio)
+ t_shapesame (sscontig2)
+ t_pflush1/fails on exit
+ The first two tests fail attempting collective writes.
+
+ Known problems in previous releases can be found in the HISTORY*.txt files
+ in the HDF5 source. Please report any new problems found to
+ help@hdfgroup.org.
+
+
+CMake vs. Autotools installations
+=================================
+While both build systems produce similar results, there are differences.
+Each system produces the same set of folders on Linux (only CMake works
+on standard Windows): bin, include, lib and share. Autotools places the
+COPYING and RELEASE.txt files in the root folder; CMake places them in
+the share folder.
+
+The bin folder contains the tools and the build scripts. Additionally, CMake
+creates dynamic versions of the tools with the suffix "-shared". Autotools
+installs one set of tools depending on the "--enable-shared" configuration
+option.
+ build scripts
+ -------------
+ Autotools: h5c++, h5cc, h5fc
+ CMake: h5c++, h5cc, h5hlc++, h5hlcc
+
+The include folder holds the header files and the fortran mod files. CMake
+places the fortran mod files into separate shared and static subfolders,
+while Autotools places one set of mod files into the include folder. Because
+CMake produces a tools library, the header files for tools will appear in
+the include folder.
+
+The lib folder contains the library files, and CMake adds the pkgconfig
+subfolder with the hdf5*.pc files used by the bin/build scripts created by
+the CMake build. CMake separates the C interface code from the fortran code by
+creating C-stub libraries for each Fortran library. In addition, only CMake
+installs the tools library. The names of the szip libraries are different
+between the build systems.
+
+The share folder will have the most differences because CMake builds include
+a number of CMake specific files for support of CMake's find_package and support
+for the HDF5 Examples CMake project.
+
+%%%%1.10.3%%%%
+
+HDF5 version 1.10.3 released on 2018-08-21
+================================================================================
+
+
+INTRODUCTION
+
+This document describes the differences between this release and the previous
+HDF5 release. It contains information on the platforms tested and known
+problems in this release. For more details check the HISTORY*.txt files in the
+HDF5 source.
+
+Note that documentation in the links below will be updated at the time of each
+final release.
+
+Links to HDF5 documentation can be found on The HDF5 web page:
+
+ https://portal.hdfgroup.org/display/HDF5/HDF5
+
+The official HDF5 releases can be obtained from:
+
+ https://www.hdfgroup.org/downloads/hdf5/
+
+Changes from Release to Release and New Features in the HDF5-1.10.x release series
+can be found at:
+
+ https://portal.hdfgroup.org/display/HDF5/HDF5+Application+Developer%27s+Guide
+
+If you have any questions or comments, please send them to the HDF Help Desk:
+
+ help@hdfgroup.org
+
+
+CONTENTS
+
+- New Features
+- Bug Fixes since HDF5-1.10.2
+- Supported Platforms
+- Tested Configuration Features Summary
+- More Tested Platforms
+- Known Problems
+- CMake vs. Autotools installations
+
+
+New Features
+============
+
+ Library
+ -------
+ - Moved the H5DOread/write_chunk() API calls to H5Dread/write_chunk()
+
+ The functionality of the direct chunk I/O calls in the high-level
+ library has been moved to the H5D package in the main library. This
+ will allow using those functions without building the high-level
+ library. The parameters and functionality of the H5D calls are
+ identical to the H5DO calls.
+
+ The original H5DO high-level API calls have been retained, though
+ they are now just wrappers for the H5D calls. They are marked as
+ deprecated and are only available when the library is built with
+ deprecated functions. New code should use the H5D calls for this
+ reason.
+
+ As a part of this work, the following symbols from H5Dpublic.h are no
+ longer used:
+
+ H5D_XFER_DIRECT_CHUNK_WRITE_FLAG_NAME
+ H5D_XFER_DIRECT_CHUNK_WRITE_FILTERS_NAME
+ H5D_XFER_DIRECT_CHUNK_WRITE_OFFSET_NAME
+ H5D_XFER_DIRECT_CHUNK_WRITE_DATASIZE_NAME
+ H5D_XFER_DIRECT_CHUNK_READ_FLAG_NAME
+ H5D_XFER_DIRECT_CHUNK_READ_OFFSET_NAME
+ H5D_XFER_DIRECT_CHUNK_READ_FILTERS_NAME
+
+ And properties with these names are no longer stored in the dataset
+ transfer property lists. The symbols are still defined in H5Dpublic.h,
+ but only when the library is built with deprecated symbols.
+
+ (DER - 2018/05/04)
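+
+      A brief usage sketch of the relocated calls (hedged; 'dset' is assumed
+      to be a chunked dataset, and 'compressed_buf', 'compressed_size', and
+      'read_buf' are placeholder names):
+
+          /* Write one pre-filtered chunk directly, bypassing the filter
+           * pipeline and datatype conversion. */
+          hsize_t  offset[2]   = {0, 0};    /* logical chunk position  */
+          uint32_t filter_mask = 0;         /* 0 = all filters applied */
+          H5Dwrite_chunk(dset, H5P_DEFAULT, filter_mask,
+                         offset, compressed_size, compressed_buf);
+
+          /* Read the raw chunk back; the mask describing which filters
+           * were applied is returned to the caller. */
+          uint32_t read_mask = 0;
+          H5Dread_chunk(dset, H5P_DEFAULT, offset, &read_mask, read_buf);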
+
+ Configuration:
+ -------------
+ - Add missing USE_110_API_DEFAULT option.
+
+ Option USE_110_API_DEFAULT sets the default version of
+ versioned APIs. The bin/makevers perl script did not set
+ the maxidx variable correctly when the 1.10 branch was
+ created. This caused the versioning process to always use
+ the latest version of any API.
+
+ (ADB - 2018/08/17, HDFFV-10552)
+
+ - Added configuration checks for the following MPI functions:
+
+ MPI_Mprobe - Used for the Parallel Compression feature
+ MPI_Imrecv - Used for the Parallel Compression feature
+
+ MPI_Get_elements_x - Used for the "big Parallel I/O" feature
+ MPI_Type_size_x - Used for the "big Parallel I/O" feature
+
+ (JTH - 2018/08/02, HDFFV-10512)
+
+ - Added section to the libhdf5.settings file to indicate
+ the status of the Parallel Compression and "big Parallel I/O"
+ features.
+
+ (JTH - 2018/08/02, HDFFV-10512)
+
+ - Add option to execute swmr shell scripts from CMake.
+
+ Option TEST_SHELL_SCRIPTS redirects processing into a
+ separate ShellTests.cmake file for UNIX types. The tests
+ execute the shell scripts if a SH program is found.
+
+ (ADB - 2018/07/16)
+
+
+ C++ Library:
+ ------------
+ - New wrappers
+
+ Added the following items:
+
+ + Class DSetAccPropList for the dataset access property list.
+
+ + Wrapper for H5Dget_access_plist to class DataSet
+ // Gets the access property list of this dataset.
+ DSetAccPropList getAccessPlist() const;
+
+ + Wrappers for H5Pset_chunk_cache and H5Pget_chunk_cache to class DSetAccPropList
+ // Sets the raw data chunk cache parameters.
+ void setChunkCache(size_t rdcc_nslots, size_t rdcc_nbytes, double rdcc_w0)
+
+ // Retrieves the raw data chunk cache parameters.
+ void getChunkCache(size_t &rdcc_nslots, size_t &rdcc_nbytes, double &rdcc_w0)
+
+ + New operator!= to class DataType (HDFFV-10472)
+ // Determines whether two datatypes are not the same.
+ bool operator!=(const DataType& compared_type)
+
+ + Wrappers for H5Oget_info2, H5Oget_info_by_name2, and H5Oget_info_by_idx2
+ (HDFFV-10458)
+
+ // Retrieves information about an HDF5 object.
+ void getObjinfo(H5O_info_t& objinfo, unsigned fields = H5O_INFO_BASIC) const;
+
+ // Retrieves information about an HDF5 object, given its name.
+ void getObjinfo(const char* name, H5O_info_t& objinfo,
+ unsigned fields = H5O_INFO_BASIC,
+ const LinkAccPropList& lapl = LinkAccPropList::DEFAULT) const;
+ void getObjinfo(const H5std_string& name, H5O_info_t& objinfo,
+ unsigned fields = H5O_INFO_BASIC,
+ const LinkAccPropList& lapl = LinkAccPropList::DEFAULT) const;
+
+ // Retrieves information about an HDF5 object, given its index.
+ void getObjinfo(const char* grp_name, H5_index_t idx_type,
+ H5_iter_order_t order, hsize_t idx, H5O_info_t& objinfo,
+ unsigned fields = H5O_INFO_BASIC,
+ const LinkAccPropList& lapl = LinkAccPropList::DEFAULT) const;
+ void getObjinfo(const H5std_string& grp_name, H5_index_t idx_type,
+ H5_iter_order_t order, hsize_t idx, H5O_info_t& objinfo,
+ unsigned fields = H5O_INFO_BASIC,
+ const LinkAccPropList& lapl = LinkAccPropList::DEFAULT) const;
+
+ (BMR - 2018/07/22, HDFFV-10150, HDFFV-10458, HDFFV-1047)
+
+
+ Java Library:
+ ----------------
+ - Java HDFLibraryException class
+
+ Change parent class from Exception to RuntimeException.
+
+ (ADB - 2018/07/30, HDFFV-10534)
+
+ - JNI Read and Write
+
+ Refactored variable-length functions, H5DreadVL and H5AreadVL,
+ to correct dataset and attribute reads. New write functions,
+ H5DwriteVL and H5AwriteVL, are under construction.
+
+ (ADB - 2018/06/02, HDFFV-10519)
+
+
+Bug Fixes since HDF5-1.10.2 release
+==================================
+
+ Library
+ -------
+ - Performance issue with H5Oget_info
+
+      The H5Oget_info family of routines retrieves information for an object,
+      such as object type, access time, number of attributes, and storage space.
+      Retrieving all of this information regardless of need is overkill and
+      causes a performance problem when done for many objects.
+
+      Add an additional parameter "fields" to the H5Oget_info family of routines
+ indicating the type of information to be retrieved. The same is done to
+ the H5Ovisit family of routines which recursively visits an object
+ returning object information in a callback function. Both sets of routines
+ are versioned and the corresponding compatibility macros are added.
+
+ The version 2 names of the two sets of routines are:
+ (1) H5Oget_info2, H5Oget_info_by_idx2, H5Oget_info_by_name2
+ (2) H5Ovisit2, H5Ovisit_by_name2
+
+ (VC - 2018/08/15, HDFFV-10180)
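+
+      A usage sketch of the new "fields" parameter (hedged; 'obj_id' is a
+      placeholder for any open object handle):
+
+          /* Request only basic object info plus the attribute count,
+           * instead of every field (times, header info, meta sizes). */
+          H5O_info_t oinfo;
+          if (H5Oget_info2(obj_id, &oinfo,
+                           H5O_INFO_BASIC | H5O_INFO_NUM_ATTRS) >= 0)
+              printf("type %d, %llu attributes\n",
+                     (int)oinfo.type, (unsigned long long)oinfo.num_attrs);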
+
+ - Test failure due to metadata size in test/vds.c
+
+ The size of metadata from test_api_get_ex_dcpl() in test/vds.c is not as expected
+ because the latest format should be used when encoding the layout for VDS.
+
+ Set the latest format in a temporary fapl and pass the setting to the routines that
+ encode the dataset selection for VDS.
+
+ (VC - 2018/08/14 HDFFV-10469)
+
+ - Java HDF5LibraryException class
+
+ The error minor and major values would be lost after the
+ constructor executed.
+
+ Created two local class variables to hold the values obtained during
+ execution of the constructor. Refactored the class functions to retrieve
+      the class values rather than calling the native functions.
+ The native functions were renamed and called only during execution
+ of the constructor.
+ Added error checking to calling class constructors in JNI classes.
+
+ (ADB - 2018/08/06, HDFFV-10544)
+
+ - Added checks of the defined MPI_VERSION to guard against usage of
+ MPI-3 functions in the Parallel Compression and "big Parallel I/O"
+ features when HDF5 is built with MPI-2. Previously, the configure
+ step would pass but the build itself would fail when it could not
+ locate the MPI-3 functions used.
+
+ As a result of these new checks, HDF5 can again be built with MPI-2,
+ but the Parallel Compression feature will be disabled as it relies
+ on the MPI-3 functions used.
+
+ (JTH - 2018/08/02, HDFFV-10512)
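+
+      The shape of the guard (a sketch of the idea only, not the library's
+      actual source; 'src', 'tag', 'comm', 'buf', 'count', and 'request' are
+      assumed variables):
+
+          #if MPI_VERSION >= 3
+              /* MPI-3 path: matched probe/receive used by the
+               * Parallel Compression feature. */
+              MPI_Message msg;
+              MPI_Status  mstat;
+              MPI_Mprobe(src, tag, comm, &msg, &mstat);
+              MPI_Imrecv(buf, count, MPI_BYTE, &msg, &request);
+          #else
+              /* MPI-2 build: the feature is disabled at compile time. */
+          #endif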
+
+ - User's patches: CVEs
+
+ The following patches have been applied:
+
+ CVE-2018-11202 - NULL pointer dereference was discovered in
+ H5S_hyper_make_spans in H5Shyper.c (HDFFV-10476)
+ https://security-tracker.debian.org/tracker/CVE-2018-11202
+            https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11202
+
+ CVE-2018-11203 - A division by zero was discovered in
+ H5D__btree_decode_key in H5Dbtree.c (HDFFV-10477)
+ https://security-tracker.debian.org/tracker/CVE-2018-11203
+            https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11203
+
+ CVE-2018-11204 - A NULL pointer dereference was discovered in
+ H5O__chunk_deserialize in H5Ocache.c (HDFFV-10478)
+ https://security-tracker.debian.org/tracker/CVE-2018-11204
+            https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11204
+
+ CVE-2018-11206 - An out of bound read was discovered in
+ H5O_fill_new_decode and H5O_fill_old_decode in H5Ofill.c
+ (HDFFV-10480)
+ https://security-tracker.debian.org/tracker/CVE-2018-11206
+            https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11206
+
+ CVE-2018-11207 - A division by zero was discovered in
+ H5D__chunk_init in H5Dchunk.c (HDFFV-10481)
+ https://security-tracker.debian.org/tracker/CVE-2018-11207
+            https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11207
+
+ (BMR - 2018/7/22, PR#s: 1134 and 1139,
+ HDFFV-10476, HDFFV-10477, HDFFV-10478, HDFFV-10480, HDFFV-10481)
+
+ - H5Adelete
+
+ H5Adelete failed when deleting the last "large" attribute that
+ is stored densely via fractal heap/v2 b-tree.
+
+ After removing the attribute, update the ainfo message. If the
+ number of attributes goes to zero, remove the message.
+
+ (VC - 2018/07/20, HDFFV-9277)
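+
+      A sketch of the affected scenario (hedged; the attribute name and the
+      phase-change thresholds below are illustrative only):
+
+          /* Force dense (fractal heap / v2 b-tree) attribute storage. */
+          hid_t dcpl = H5Pcreate(H5P_DATASET_CREATE);
+          H5Pset_attr_phase_change(dcpl, 0, 0);  /* max_compact=0, min_dense=0 */
+
+          /* ... create a dataset 'dset' with dcpl and add "large_attr" ... */
+
+          /* Deleting the last densely stored attribute previously failed. */
+          H5Adelete(dset, "large_attr");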
+
+ - A bug was discovered in the parallel library which caused partial
+ parallel reads of filtered datasets to return incorrect data. The
+ library used the incorrect dataspace for each chunk read, causing
+ the selection used in each chunk to be wrong.
+
+ The bug was not caught during testing because all of the current
+ tests which do parallel reads of filtered data read all of the data
+ using an H5S_ALL selection. Several tests were added which exercise
+ partial parallel reads.
+
+ (JTH - 2018/07/16, HDFFV-10467)
+
+ - A bug was discovered in the parallel library which caused parallel
+ writes of filtered datasets to trigger an assertion failure in the
+ file free space manager.
+
+ This occurred when the filter used caused chunks to repeatedly shrink
+ and grow over the course of several dataset writes. The previous chunk
+ information, such as the size of the chunk and the offset in the file,
+ was being cached and not updated after each write, causing the next write
+ to the chunk to retrieve the incorrect cached information and run into
+ issues when reallocating space in the file for the chunk.
+
+ (JTH - 2018/07/16, HDFFV-10509)
+
+ - A bug was discovered in the parallel library which caused the
+ H5D__mpio_array_gatherv() function to allocate too much memory.
+
+ When the function is called with the 'allgather' parameter set
+ to a non-true value, the function will receive data from all MPI
+      ranks and gather it to the single rank specified by the 'root'
+ parameter. However, the bug in the function caused memory for
+ the received data to be allocated on all MPI ranks, not just the
+ singular rank specified as the receiver. In some circumstances,
+ this would cause an application to fail due to the large amounts
+ of memory being allocated.
+
+ (JTH - 2018/07/16, HDFFV-10467)
+
+ - Error checks in h5stat and when decoding messages
+
+      h5stat exited with a segmentation fault/core dump when
+      errors were encountered in the internal library.
+
+ Add error checks and --enable-error-stack option to h5stat.
+ Add range checks when decoding messages: old fill value, old
+ layout and refcount.
+
+ (VC - 2018/07/11, HDFFV-10333)
+
+ - If an HDF5 file contains a malformed compound datatype with a
+ suitably large offset, the type conversion code can run off
+ the end of the type conversion buffer, causing a segmentation
+ fault.
+
+ This issue was reported to The HDF Group as issue #CVE-2017-17507.
+
+ NOTE: The HDF5 C library cannot produce such a file. This condition
+ should only occur in a corrupt (or deliberately altered) file
+ or a file created by third-party software.
+
+ THE HDF GROUP WILL NOT FIX THIS BUG AT THIS TIME
+
+ Fixing this problem would involve updating the publicly visible
+ H5T_conv_t function pointer typedef and versioning the API calls
+ which use it. We normally only modify the public API during
+ major releases, so this bug will not be fixed at this time.
+
+ (DER - 2018/02/26, HDFFV-10356)
+
+
+ Configuration
+ -------------
+    - Applied patches to address Cygwin build issues
+
+ There were three issues for Cygwin builds:
+        - Shared libs were not built.
+        - The -std=c99 flag caused a SIG_SETMASK undeclared error.
+        - Undefined errors when building test shared libraries.
+
+ Patches to address these issues were received and incorporated in this version.
+
+ (LRK - 2018/07/18, HDFFV-10475)
+
+ - The --enable-debug/production configure flags are listed as 'deprecated'
+ when they should really be listed as 'removed'.
+
+ In the autotools overhaul several years ago, we removed these flags and
+ implemented a new --enable-build-mode= flag. This was done because we
+ changed the semantics of the modes and didn't want users to silently
+ be exposed to them. The newer system is also more flexible and us to
+      be exposed to them. The newer system is also more flexible and allows us to
+
+ The --enable-debug/production flags are now listed as removed.
+
+ (DER - 2018/05/31, HDFFV-10505)
+
+ - Moved the location of gcc attribute.
+
+ The gcc attribute(no_sanitize), named as the macro HDF_NO_UBSAN,
+ was located after the function name. Builds with GCC 7 did not
+ indicate any problem, but GCC 8 issued errors. Moved the
+ attribute before the function name, as required.
+
+ (ADB - 2018/05/22, HDFFV-10473)
+
+ - Reworked java test suite into individual JUnit tests.
+
+      Testing the whole suite of Java unit tests in a single JUnit run
+      made it difficult to determine actual failures when tests failed.
+      Running each file's set of tests individually allows individual failures
+      to be diagnosed more easily. A side benefit is that tests for optional
+      components of the library can be disabled if not configured.
+
+ (ADB - 2018/05/16, HDFFV-9739)
+
+ - Converted CMake global commands ADD_DEFINITIONS and INCLUDE_DIRECTORIES
+ to use target_* type commands. This change modernizes the CMake usage
+ in the HDF5 library.
+
+ In addition, there is the intention to convert to generator expressions,
+ where possible. The exception is Fortran FLAGS on Windows Visual Studio.
+ The HDF macros TARGET_C_PROPERTIES and TARGET_FORTRAN_PROPERTIES have
+ been removed with this change in usage.
+
+ The additional language (C++ and Fortran) checks have also been localized
+ to only be checked when that language is enabled.
+
+ (ADB - 2018/05/08)
+
+
+ Performance
+ -------------
+ - Revamped internal use of DXPLs, improving performance
+
+ (QAK - 2018/05/20)
+
+
+ Fortran
+ --------
+ - Fixed issue with h5fget_obj_count_f and using a file id of H5F_OBJ_ALL_F not
+ returning the correct count.
+
+ (MSB - 2018/5/15, HDFFV-10405)
+
+
+ C++ APIs
+ --------
+ - Adding default arguments to existing functions
+
+ Added the following items:
+ + Two more property list arguments are added to H5Location::createDataSet:
+ const DSetAccPropList& dapl = DSetAccPropList::DEFAULT
+ const LinkCreatPropList& lcpl = LinkCreatPropList::DEFAULT
+
+ + One more property list argument is added to H5Location::openDataSet:
+ const DSetAccPropList& dapl = DSetAccPropList::DEFAULT
+
+ (BMR - 2018/07/21, PR# 1146)
+
+    - Improved C++ documentation
+
+      Replaced the table in the main page of the C++ documentation, converting it
+      from mht to htm format for portability.
+
+ (BMR - 2018/07/17, PR# 1141)
+
+
+Supported Platforms
+===================
+
+ Linux 2.6.32-696.16.1.el6.ppc64 gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-18)
+ #1 SMP ppc64 GNU/Linux g++ (GCC) 4.4.7 20120313 (Red Hat 4.4.7-18)
+ (ostrich) GNU Fortran (GCC) 4.4.7 20120313 (Red Hat 4.4.7-18)
+ IBM XL C/C++ V13.1
+ IBM XL Fortran V15.1
+
+ Linux 3.10.0-327.10.1.el7 GNU C (gcc), Fortran (gfortran), C++ (g++)
+ #1 SMP x86_64 GNU/Linux compilers:
+ (kituo/moohan) Version 4.8.5 20150623 (Red Hat 4.8.5-4)
+ Version 4.9.3, Version 5.2.0
+ Intel(R) C (icc), C++ (icpc), Fortran (icc)
+ compilers:
+ Version 17.0.0.098 Build 20160721
+ MPICH 3.1.4 compiled with GCC 4.9.3
+
+ SunOS 5.11 32- and 64-bit Sun C 5.12 SunOS_sparc
+ (emu) Sun Fortran 95 8.6 SunOS_sparc
+ Sun C++ 5.12 SunOS_sparc
+
+ Windows 7 Visual Studio 2015 w/ Intel Fortran 16 (cmake)
+
+ Windows 7 x64 Visual Studio 2012 w/ Intel Fortran 15 (cmake)
+ Visual Studio 2013 w/ Intel Fortran 15 (cmake)
+ Visual Studio 2015 w/ Intel Fortran 16 (cmake)
+ Visual Studio 2015 w/ Intel C, Fortran 2017 (cmake)
+ Visual Studio 2015 w/ MSMPI 8 (cmake)
+
+ Windows 10 Visual Studio 2015 w/ Intel Fortran 18 (cmake)
+
+ Windows 10 x64 Visual Studio 2015 w/ Intel Fortran 18 (cmake)
+ Visual Studio 2017 w/ Intel Fortran 18 (cmake)
+
+ Mac OS X Yosemite 10.10.5 Apple clang/clang++ version 6.1 from Xcode 7.0
+ 64-bit gfortran GNU Fortran (GCC) 4.9.2
+ (osx1010dev/osx1010test) Intel icc/icpc/ifort version 15.0.3
+
+ Mac OS X El Capitan 10.11.6 Apple clang/clang++ version 7.3.0 from Xcode 7.3
+ 64-bit gfortran GNU Fortran (GCC) 5.2.0
+ (osx1011dev/osx1011test) Intel icc/icpc/ifort version 16.0.2
+
+ Mac OS Sierra 10.12.6 Apple LLVM version 8.1.0 (clang/clang++-802.0.42)
+ 64-bit gfortran GNU Fortran (GCC) 7.1.0
+ (swallow/kite) Intel icc/icpc/ifort version 17.0.2
+
+Tested Configuration Features Summary
+=====================================
+
+ In the tables below
+ y = tested
+ n = not tested in this release
+ C = Cluster
+ W = Workstation
+ x = not working in this release
+ dna = does not apply
+ ( ) = footnote appears below second table
+ <blank> = testing incomplete on this feature or platform
+
+Platform C F90/ F90 C++ zlib SZIP
+ parallel F2003 parallel
+Solaris2.11 32-bit n y/y n y y y
+Solaris2.11 64-bit n y/n n y y y
+Windows 7 y y/y n y y y
+Windows 7 x64 y y/y y y y y
+Windows 7 Cygwin n y/n n y y y
+Windows 7 x64 Cygwin n y/n n y y y
+Windows 10 y y/y n y y y
+Windows 10 x64 y y/y n y y y
+Mac OS X Mavericks 10.9.5 64-bit n y/y n y y y
+Mac OS X Yosemite 10.10.5 64-bit n y/y n y y y
+Mac OS X El Capitan 10.11.6 64-bit n y/y n y y y
+Mac OS Sierra 10.12.6 64-bit n y/y n y y y
+CentOS 7.2 Linux 2.6.32 x86_64 PGI n y/y n y y y
+CentOS 7.2 Linux 2.6.32 x86_64 GNU y y/y y y y y
+CentOS 7.2 Linux 2.6.32 x86_64 Intel n y/y n y y y
+Linux 2.6.32-573.18.1.el6.ppc64 n y/y n y y y
+
+
+Platform Shared Shared Shared Thread-
+ C libs F90 libs C++ libs safe
+Solaris2.11 32-bit y y y y
+Solaris2.11 64-bit y y y y
+Windows 7 y y y y
+Windows 7 x64 y y y y
+Windows 7 Cygwin n n n y
+Windows 7 x64 Cygwin n n n y
+Windows 10 y y y y
+Windows 10 x64 y y y y
+Mac OS X Mavericks 10.9.5 64-bit y n y y
+Mac OS X Yosemite 10.10.5 64-bit y n y y
+Mac OS X El Capitan 10.11.6 64-bit y n y y
+Mac OS Sierra 10.12.6 64-bit y n y y
+CentOS 7.2 Linux 2.6.32 x86_64 PGI y y y n
+CentOS 7.2 Linux 2.6.32 x86_64 GNU y y y y
+CentOS 7.2 Linux 2.6.32 x86_64 Intel y y y n
+Linux 2.6.32-573.18.1.el6.ppc64 y y y n
+
+Compiler versions for each platform are listed in the preceding
+"Supported Platforms" table.
+
+
+More Tested Platforms
+=====================
+The following platforms are not supported but have been tested for this release.
+
+ Linux 2.6.32-573.22.1.el6 GNU C (gcc), Fortran (gfortran), C++ (g++)
+ #1 SMP x86_64 GNU/Linux compilers:
+ (mayll/platypus) Version 4.4.7 20120313
+ Version 4.9.3, 5.3.0, 6.2.0
+ PGI C, Fortran, C++ for 64-bit target on
+ x86-64;
+ Version 17.10-0
+ Intel(R) C (icc), C++ (icpc), Fortran (icc)
+ compilers:
+ Version 17.0.4.196 Build 20170411
+ MPICH 3.1.4 compiled with GCC 4.9.3
+
+ Linux 3.10.0-327.18.2.el7 GNU C (gcc) and C++ (g++) compilers
+ #1 SMP x86_64 GNU/Linux Version 4.8.5 20150623 (Red Hat 4.8.5-4)
+ (jelly) with NAG Fortran Compiler Release 6.1(Tozai)
+ GCC Version 7.1.0
+ OpenMPI 3.0.0-GCC-7.2.0-2.29,
+ 3.1.0-GCC-7.2.0-2.29
+ Intel(R) C (icc) and C++ (icpc) compilers
+ Version 17.0.0.098 Build 20160721
+ with NAG Fortran Compiler Release 6.1(Tozai)
+
+ Linux 3.10.0-327.10.1.el7 MPICH 3.2 compiled with GCC 5.3.0
+ #1 SMP x86_64 GNU/Linux
+ (moohan)
+
+ Linux 2.6.32-573.18.1.el6.ppc64 MPICH mpich 3.1.4 compiled with
+ #1 SMP ppc64 GNU/Linux IBM XL C/C++ for Linux, V13.1
+ (ostrich) and IBM XL Fortran for Linux, V15.1
+
+ Debian 8.4 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1 x86_64 GNU/Linux
+ gcc, g++ (Debian 4.9.2-10) 4.9.2
+ GNU Fortran (Debian 4.9.2-10) 4.9.2
+ (cmake and autotools)
+
+ Fedora 24 4.7.2-201.fc24.x86_64 #1 SMP x86_64 x86_64 x86_64 GNU/Linux
+ gcc, g++ (GCC) 6.1.1 20160621
+ (Red Hat 6.1.1-3)
+ GNU Fortran (GCC) 6.1.1 20160621
+ (Red Hat 6.1.1-3)
+ (cmake and autotools)
+
+ Ubuntu 16.04.1 4.4.0-38-generic #57-Ubuntu SMP x86_64 GNU/Linux
+ gcc, g++ (Ubuntu 5.4.0-6ubuntu1~16.04.2)
+ 5.4.0 20160609
+ GNU Fortran (Ubuntu 5.4.0-6ubuntu1~16.04.2)
+ 5.4.0 20160609
+ (cmake and autotools)
+
+
+Known Problems
+==============
+
+ At present, metadata cache images may not be generated by parallel
+ applications. Parallel applications can read files with metadata cache
+ images, but since this is a collective operation, a deadlock is possible
+ if one or more processes do not participate.
+
+ Three tests fail with OpenMPI 3.0.0/GCC-7.2.0-2.29:
+ testphdf5 (ecdsetw, selnone, cchunk1, cchunk3, cchunk4, and actualio)
+ t_shapesame (sscontig2)
+ t_pflush1/fails on exit
+ The first two tests fail attempting collective writes.
+
+ Known problems in previous releases can be found in the HISTORY*.txt files
+ in the HDF5 source. Please report any new problems found to
+ help@hdfgroup.org.
+
+
+CMake vs. Autotools installations
+=================================
+While both build systems produce similar results, there are differences.
+Each system produces the same set of folders on Linux (only CMake works
+on standard Windows): bin, include, lib and share. Autotools places the
+COPYING and RELEASE.txt files in the root folder; CMake places them in
+the share folder.
+
+The bin folder contains the tools and the build scripts. Additionally, CMake
+creates dynamic versions of the tools with the suffix "-shared". Autotools
+installs one set of tools depending on the "--enable-shared" configuration
+option.
+ build scripts
+ -------------
+ Autotools: h5c++, h5cc, h5fc
+ CMake: h5c++, h5cc, h5hlc++, h5hlcc
+
+The include folder holds the header files and the fortran mod files. CMake
+places the fortran mod files into separate shared and static subfolders,
+while Autotools places one set of mod files into the include folder. Because
+CMake produces a tools library, the header files for tools will appear in
+the include folder.
+
+The lib folder contains the library files, and CMake adds the pkgconfig
+subfolder with the hdf5*.pc files used by the bin/build scripts created by
+the CMake build. CMake separates the C interface code from the fortran code by
+creating C-stub libraries for each Fortran library. In addition, only CMake
+installs the tools library. The names of the szip libraries are different
+between the build systems.
+
+The share folder will have the most differences because CMake builds include
+a number of CMake specific files for support of CMake's find_package and support
+for the HDF5 Examples CMake project.
+
+
%%%%1.10.2%%%%
HDF5 version 1.10.2 released on 2018-03-29
diff --git a/release_docs/INSTALL_CMake.txt b/release_docs/INSTALL_CMake.txt
index 233c2cc..4bba5e8 100644
--- a/release_docs/INSTALL_CMake.txt
+++ b/release_docs/INSTALL_CMake.txt
@@ -651,7 +651,7 @@ HDF5_PACKAGE_EXTLIBS "CPACK - include external libraries"
HDF5_STRICT_FORMAT_CHECKS "Whether to perform strict file format checks" OFF
HDF_TEST_EXPRESS "Control testing framework (0-3)" "0"
HDF5_TEST_VFD "Execute tests with different VFDs" OFF
-HDF5_TEST_VOL "Execute tests with different VOL connectors" OFF
+HDF5_TEST_PASSTHROUGH_VOL "Execute tests with different passthrough VOL connectors" OFF
HDF5_USE_16_API_DEFAULT "Use the HDF5 1.6.x API by default" OFF
HDF5_USE_18_API_DEFAULT "Use the HDF5 1.8.x API by default" OFF
HDF5_USE_110_API_DEFAULT "Use the HDF5 1.10.x API by default" OFF
diff --git a/release_docs/README_HDF5_CMake b/release_docs/README_HDF5_CMake
new file mode 100644
index 0000000..a2e7dce
--- /dev/null
+++ b/release_docs/README_HDF5_CMake
@@ -0,0 +1,23 @@
+This tar file contains
+
+ build-unix.sh script to build HDF5 with CMake on unix machines
+ build-unix-hpc.sh script to build HDF5 with CMake on unix machines and run
+ tests with batch scripts (sbatch).
+ CTestScript.cmake
+ HDF5config.cmake CMake scripts for building HDF5
+ HDF5options.cmake
+ hdf5-1.11.4 HDF5 1.11.4 source
+ SZip.tar.gz source for building SZIP
+ ZLib.tar.gz source for building ZLIB
+
+For more information about building HDF5 with CMake, see USING_HDF5_CMake.txt in
+hdf5-1.11.4/release_docs, or
+https://portal.hdfgroup.org/display/support/Building+HDF5+with+CMake.
+
+For more information about building HDF5 with CMake on HPC machines, including
+cross compiling on Cray XC40, see README_HPC in hdf5-1.11.4/release_docs.
+
+
+
+
+
diff --git a/release_docs/README_HPC b/release_docs/README_HPC
index bdeab67..67a5d6c 100644
--- a/release_docs/README_HPC
+++ b/release_docs/README_HPC
@@ -1,79 +1,206 @@
-HDF5 version 1.11.4 currently under development
-
-HDF5 source tar files with the HPC prefix are intended for use on clusters where
-configuration and build steps will be done on a login node and executable and
-lib files that are built will be run on compute nodes.
-
-Note these differences from the regular CMake tar and zip files:
- - Test programs produced by this tar file will be run using batch scripts.
- - Serial and parallel HDF5options.cmake files, using parallel options by default.
-
-Note also that options are now available in HDF5 source to facilitate use of
-toolchain files for using cross compilers available on login nodes to compile
-HDF5 for compute nodes.
-
-Instructions to configure build and test HDF5 using CMake:
-
-1. The cmake version must be 3.10 or later (cmake --version).
-2. Load or switch modules and set CC, FC, CXX for compilers desired.
-3. run build-unix.sh to configure, build, test and package HDF5 with CMake.
-
-Contents:
-
-build-unix.sh Simple script for running CMake to configure, build,
- test, and package HDF5.
-CTestScript.cmake CMake script to configure, build, test and package
- HDF5.
-hdf5-<version> HDF5 source for <version>.
-HDF5config.cmake CMake script to configure, build, test and package
- HDF5.
-HDF5Examples Source for HDF5 Examples.
-HDF5options.cmake symlink to parallel or serial HDF5options.cmake files.
- Default is parallel file, which builds and tests both
- serial and parallel C and Fortran wrappers.
- To build serial only, C Fortran and C++ wrappers, delete
- The HDF5options.cmake link and run
- 'ln -s ser-HDF5options.cmake HDF5options.cmake' to switch.
-par-HDF5options.cmake Options file for HDF5 serial and parallel build and test.
-ser-HDF5options.cmake Options file for HDF5 serial only build and test.
-SZip.tar.gz Source for building SZip.
-ZLib.tar.gz Source for buildng Zlib.
-
-
-To cross compile with this HPC-CMake tar.gz HDF5 source file:
-On Cray XC40 haswell login node for knl compute nodes using CMake and Cray modules:
- 1. Uncomment line in HDF5options.txt to use a toolchain file - line 106 for
- config/toolchain/crayle.cmake.
- 2. Uncomment lines 110, 111, and 115 - 122 of HDF5options.cmake.
- Line 110 allows configuring to complete on the haswell node.
- Line 111 switches the compiler to build files for knl nodes.
- Lines 115 - 122 set up test files to use sbatch to run build tests
- in batch jobs on a knl compute node with 6 processes.
- 3. Compiler module may be the default PrgEnv-intel/6.0.4 to use
- intel/18.0.2 or other intel, PrgEnv-cray/6.0.4 to use cce/8.7.4,
- or PrgEnv-gnu/6.0.4 for GCC compilers. PrgEnv-pgi/6.0.4 is also
- available but has not been tested with this tar file.
- 4. These CMake options are set in config/toolchain/crayle.cmake:
- set(CMAKE_SYSTEM_NAME Linux)
- set(CMAKE_COMPILER_VENDOR "CrayLinuxEnvironment")
- set(CMAKE_C_COMPILER cc)
- set(CMAKE_CXX_COMPILER c++)
- set(CMAKE_Fortran_COMPILER ftn)
- set(CMAKE_CROSSCOMPILING_EMULATOR "")
-
- 5. Settings for two other cross-compiling options are also in the
- config/toolchain files which do not seem to be necessary with the
- Cray PrgEnv-* modules
- a. HDF5_USE_PREGEN. This option, along with the HDF5_USE_PREGEN_DIR
- CMake variable would allow the use of an appropriate H5Tinit.c
- file with type information generated on a compute node to be used
- when cross compiling for those compute nodes. The use of the
- variables in lines 110 and 111 of HDF5options.cmake file seem to
- preclude needing this option with the available Cray modules and
- CMake options.
- b. HDF5_BATCH_H5DETECT and associated CMake variables. This option
- when properly configured will run H5detect in a batch job on a
- compute node at the beginning of the CMake build process. It
- was also found to be unnecessary with the available Cray modules
- and CMake options.
--
+************************************************************************
+* Using CMake to build and test HDF5 source on HPC machines *
+************************************************************************
+
+ Contents
+
+Section I: Prerequisites
+Section II: Obtain HDF5 source
+Section III: Using ctest command to build and test
+Section IV: Cross compiling
+Section V: Manual alternatives
+Section VI: Other cross compiling options
+
+************************************************************************
+
+========================================================================
+I. Prerequisites
+========================================================================
+ 1. Create a working directory that is accessible from the compute nodes for
+ running tests; the working directory should be in a scratch space or a
+ parallel file system space since testing will use this space. Building
+ from HDF5 source in a 'home' directory typically results in test
+ failures and should be avoided.
+
+ 2. Load modules for desired compilers, module for cmake version 3.10 or greater,
+       and set any needed environment variables for compilers (e.g., CC, FC, CXX).
+       Unload any problematic modules (e.g., craype-hugepages2M).
+
+========================================================================
+II. Obtain HDF5 source
+========================================================================
+Obtain HDF5 source code from the HDF5 repository using a git command or
+from a release tar file in a working directory:
+
+ git clone https://git@bitbucket.hdfgroup.org/scm/hdffv/hdf5.git
+ [-b branch] [source directory]
+
+If no branch is specified, then the 'develop' version will be checked out.
+If no source directory is specified, then the source will be located in the
+'hdf5' directory. The CMake scripts expect the source to be in a directory
+named hdf5-<version string>, where 'version string' uses the format '1.xx.xx'.
+For example, for the current 'develop' version, the "hdf5" directory should
+be renamed "hdf5-1.11.4", or for the first hdf5_1_10_5 pre-release version,
+it should be renamed "hdf5-1.10.5-pre1".
+
+If the version number is not known a priori, the version string
+can be obtained by running bin/h5vers in the top level directory of the source clone, and
+the source directory renamed 'hdf5-<version string>'.
+
+Release or snapshot tar files may also be extracted and used.
+
+========================================================================
+III. Using ctest command to build and test
+========================================================================
+
+The ctest command [1]:
+
+ ctest -S HDF5config.cmake,BUILD_GENERATOR=Unix -C Release -V -O hdf5.log
+
+will configure, build, test and package HDF5 from the downloaded source
+after the setup steps outlined below are followed.
+
+CMake option variables are available to allow running test programs in batch
+scripts on compute nodes and to cross-compile for compute node hardware using
+a cross-compiling emulator. The setup steps will make default settings for
+parallel or serial only builds available to the CMake command.
+
+ 1. For the current 'develop' version the "hdf5" directory should be renamed
+ "hdf5-1.11.4".
+
+    2. Three cmake script files need to be copied to the working directory, or
+       have symbolic links to them created in the working directory:
+
+          hdf5-1.11.4/config/cmake/scripts/HDF5config.cmake
+          hdf5-1.11.4/config/cmake/scripts/CTestScript.cmake
+          hdf5-1.11.4/config/cmake/scripts/HDF5options.cmake
+
+ 3. The resulting contents of the working directory are then:
+
+ CTestScript.cmake
+ HDF5config.cmake
+ HDF5options.cmake
+ hdf5-1.11.4
+
+ Additionally, when the ctest command runs [1], it will add a build directory
+ in the working directory.
+
+ 4. The following options (among others) can be added to the ctest
+ command [1], following '-S HDF5config.cmake,' and separated by ',':
+
+ HPC=sbatch (or 'bsub' or 'raybsub') indicates which type of batch
+                           files to use for running tests. If omitted, tests
+                           will run on the local machine or login node.
+
+ KNL=true to cross-compile for KNL compute nodes on CrayXC40
+ (see section IV)
+
+ MPI=true enables parallel, disables c++, java, and threadsafe
+
+ LOCAL_BATCH_SCRIPT_ARGS="--account=<account#>" to supply user account
+ information for batch jobs
+
+      Any of the three HPC options will add BUILD_GENERATOR=Unix automatically.
+ An example ctest command for a parallel build on a system using sbatch is
+
+ ctest -S HDF5config.cmake,HPC=sbatch,MPI=true -C Release -V -O hdf5.log
+
+ Adding the option 'KNL=true' to the above list will compile for KNL nodes,
+ for example, on 'mutrino' and other CrayXC40 machines.
+
+      Changing -V to -VV will produce more logging information in hdf5.log.
+
+ More detailed CMake information can be found in the HDF5 source in
+ release_docs/INSTALL_CMake.txt.
+
+========================================================================
+IV. Cross-compiling
+========================================================================
+For cross-compiling on Cray, set environment variables CC=cc, FC=ftn
+and CXX=CC (for c++) after all compiler modules are loaded since switching
+compiler modules may unset or reset these variables.
+
+CMake provides options for cross-compiling. To cross-compile for KNL hardware
+on mutrino and other CrayXC40 machines, add HPC=sbatch,KNL=true to the
+ctest command line. This will set the following options from the
+config/cmake/scripts/HPC/sbatch-HDF5options.cmake file:
+
+ set (COMPILENODE_HWCOMPILE_MODULE "craype-haswell")
+ set (COMPUTENODE_HWCOMPILE_MODULE "craype-mic-knl")
+ set (LOCAL_BATCH_SCRIPT_NAME "knl_ctestS.sl")
+ set (LOCAL_BATCH_SCRIPT_PARALLEL_NAME "knl_ctestP.sl")
+ set (ADD_BUILD_OPTIONS "${ADD_BUILD_OPTIONS} -DCMAKE_TOOLCHAIN_FILE:STRING=config/toolchain/crayle.cmake")
+
+On the Cray XC40 the craype-haswell module is needed for configuring, and the
+craype-mic-knl module is needed for building to run on the KNL nodes. CMake
+with the above options will swap modules after configuring is complete,
+but before compiling programs for KNL.
+
+The sbatch script arguments for running jobs on KNL nodes may differ on CrayXC40
+machines other than mutrino. The batch scripts knl_ctestS.sl and knl_ctestP.sl
+have the correct arguments for mutrino: "#SBATCH -p knl -C quad,cache". For
+cori, another CrayXC40, that line is replaced by "#SBATCH -C knl,quad,cache".
+For cori (and other machines), the values in LOCAL_BATCH_SCRIPT_NAME and
+LOCAL_BATCH_SCRIPT_PARALLEL_NAME in the config/cmake/scripts/HPC/sbatch-HDF5options.cmake
+file can be replaced by cori_knl_ctestS.sl and cori_knl_ctestP.sl, or the lines
+can be edited in the batch files in hdf5-1.11.4/bin/batch.
+
+========================================================================
+V. Manual alternatives
+========================================================================
+If using ctest is undesirable, one can create a build directory and run the cmake
+configure command, for example
+
+"/projects/Mutrino/hpcsoft/cle6.0/common/cmake/3.10.2/bin/cmake"
+-C "<working directory>/hdf5-1.11.4/config/cmake/cacheinit.cmake"
+-DCMAKE_BUILD_TYPE:STRING=Release -DHDF5_BUILD_FORTRAN:BOOL=ON
+-DHDF5_BUILD_JAVA:BOOL=OFF
+-DCMAKE_INSTALL_PREFIX:PATH=<working directory>/HDF_Group/HDF5/1.11.4
+-DHDF5_ENABLE_Z_LIB_SUPPORT:BOOL=OFF -DHDF5_ENABLE_SZIP_SUPPORT:BOOL=OFF
+-DHDF5_ENABLE_PARALLEL:BOOL=ON -DHDF5_BUILD_CPP_LIB:BOOL=OFF
+-DHDF5_BUILD_JAVA:BOOL=OFF -DHDF5_ENABLE_THREADSAFE:BOOL=OFF
+-DHDF5_PACKAGE_EXTLIBS:BOOL=ON -DLOCAL_BATCH_TEST:BOOL=ON
+-DMPIEXEC_EXECUTABLE:STRING=srun -DMPIEXEC_NUMPROC_FLAG:STRING=-n
+-DMPIEXEC_MAX_NUMPROCS:STRING=6
+-DCMAKE_TOOLCHAIN_FILE:STRING=config/toolchain/crayle.cmake
+-DLOCAL_BATCH_SCRIPT_NAME:STRING=knl_ctestS.sl
+-DLOCAL_BATCH_SCRIPT_PARALLEL_NAME:STRING=knl_ctestP.sl -DSITE:STRING=mutrino
+-DBUILDNAME:STRING=par-knl_GCC493-SHARED-Linux-4.4.156-94.61.1.16335.0.PTF.1107299-default-x86_64
+"-GUnix Makefiles" "" "<working directory>/hdf5-1.11.4"
+
+followed by make and batch jobs to run tests.
+
+To cross-compile on CrayXC40, run the configure command with the craype-haswell
+module loaded, then switch to the craype-mic-knl module for the build process.
+
+Tests on machines using slurm can be run with
+
+"sbatch -p knl -C quad,cache ctestS.sl"
+
+or
+
+"sbatch -p knl -C quad,cache ctestP.sl"
+
+for parallel builds.
+
+Tests on machines using LSF will typically use "bsub ctestS.lsf", etc.
+
+========================================================================
+VI. Other cross compiling options
+========================================================================
+Settings for two other cross-compiling options are also in the config/toolchain
+files; they do not seem to be necessary with the Cray PrgEnv-* modules:
+
+1. HDF5_USE_PREGEN. This option, along with the HDF5_USE_PREGEN_DIR CMake
+ variable would allow the use of an appropriate H5Tinit.c file with type
+ information generated on a compute node to be used when cross compiling
+ for those compute nodes. The use of the variables in lines 110 and 111
+   for those compute nodes. The use of the variables in lines 110 and 111
+   of the HDF5options.cmake file seems to preclude needing this option with the
+   available Cray modules and CMake options.
+2. HDF5_BATCH_H5DETECT and associated CMake variables. This option when
+ properly configured will run H5detect in a batch job on a compute node
+ at the beginning of the CMake build process. It was also found to be
+ unnecessary with the available Cray modules and CMake options.
diff --git a/release_docs/RELEASE.txt b/release_docs/RELEASE.txt
index f9b2362..647cb8f 100644
--- a/release_docs/RELEASE.txt
+++ b/release_docs/RELEASE.txt
@@ -1,4 +1,4 @@
- version 1.11.4 currently under development
+HDF5 version 1.11.6 currently under development
================================================================================
@@ -48,6 +48,36 @@ New Features
Configuration:
-------------
+ - Add option to build only shared libs
+
+ A request was made to prevent building static libraries and only build
+ shared. A new option was added to CMake, ONLY_SHARED_LIBS, which will
+ skip building static libraries. Certain utility functions will build with
+ static libs but are not published. Tests are adjusted to use the correct
+ libraries.
+
+ (ADB - 2019/06/12, HDFFV-10805)
+
+ - Add options to enable or disable building tools and tests
+
+ Configure options --enable-tests and --enable-tools were added for
+ autotools configure. These options are enabled by default, and can be
+ disabled with either --disable-tests (or tools) or --enable-tests=no
+ (or --enable-tools=no). Build time is reduced ~20% when tools are
+ disabled, 35% when tests are disabled, 45% when both are disabled.
+ Reenabling them after the initial build requires running configure
+ again with the option(s) enabled.
+
+ (LRK - 2019/06/12, HDFFV-9976)
+
+ - Change tools test that test the error stack
+
+ There are some use cases which can cause the error stack of tools to be
+      different than expected. These tests now use grepTest.cmake; this was
+ changed to allow the error file to be searched for an expected string.
+
+ (ADB - 2019/04/15, HDFFV-10741)
+
- Keep stderr and stdout separate in tests
Changed test handling of output capture. Tests now keep the stderr
@@ -122,6 +152,14 @@ New Features
Library:
--------
+ - Improved the performance of virtual dataset I/O
+
+ Refactored the internal dataspace routines used by the virtual dataset
+ code to improve performance, especially when one of the selections
+ involved is very long and non-contiguous.
+
+ (NAF - 2019/05/31, HDFFV-10693)
+
- Allow pre-generated H5Tinit.c and H5make_libsettings.c to be used.
Rather than always running H5detect and generating H5Tinit.c and
@@ -135,6 +173,14 @@ New Features
(DER - 2018/12/08, HDFFV-10252)
+ - Added the ability to open files with UTF-8 file names on Windows.
+
+ The POSIX open(2) API call on Windows is limited to ASCII
+ file names. The library has been updated to convert incoming file
+      names to UTF-16 (via MultiByteToWideChar(CP_UTF8, ...)) and use
+ _wopen() instead.
+
+ (DER - 2019/03/15, HDFFV-2714, HDFFV-3914, HDFFV-3895, HDFFV-8237, HDFFV-10413, HDFFV-10691)
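+
+      A hypothetical illustration (the file name below is an example only):
+      the name is passed to the C API as UTF-8 bytes, and on Windows the
+      library now converts it to UTF-16 internally before opening the file.
+
+          const char *name = "caf\xc3\xa9_data.h5";  /* "café_data.h5" in UTF-8 */
+          hid_t file = H5Fcreate(name, H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
+          /* ... */
+          H5Fclose(file);
+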
Parallel Library:
-----------------
@@ -154,8 +200,8 @@ New Features
   - Changed the default behavior in parallel when reading the same dataset in its entirety
     (i.e. H5S_ALL dataset selection) which is being read by all the processes collectively.
     The dataset must be contiguous, less than 2GB, and of an atomic datatype.
- The new behavior is the HDF5 library will use an MPI_Bcast to pass the data read from
- the disk by the root process to the remain processes in the MPI communicator associated
+ The new behavior is that the HDF5 library will use an MPI_Bcast to pass the data read from
+ the disk by the root process to the remaining processes in the MPI communicator associated
with the HDF5 file.
(MSB - 2019/01/02, HDFFV-10652)
@@ -170,66 +216,34 @@ New Features
- Added new Fortran API, H5gmtime, which converts (C) 'time_t' structure
to Fortran DATE AND TIME storage format.
-
+
(MSB, 2019/01/08, HDFFV-10443)
- - Added new Fortran 'fields' optional parameter to: h5ovisit_f, h5oget_info_by_name_f,
+ - Added new Fortran 'fields' optional parameter to: h5ovisit_f, h5oget_info_by_name_f,
h5oget_info, h5oget_info_by_idx and h5ovisit_by_name_f.
(MSB, 2019/01/08, HDFFV-10443)
C++ Library:
------------
- - New wrappers
-
- Added the following items:
-
- + Class DSetAccPropList for the dataset access property list.
-
- + Wrapper for H5Dget_access_plist to class DataSet
- // Gets the access property list of this dataset.
- DSetAccPropList getAccessPlist() const;
-
- + Wrappers for H5Pset_chunk_cache and H5Pget_chunk_cache to class DSetAccPropList
- // Sets the raw data chunk cache parameters.
- void setChunkCache(size_t rdcc_nslots, size_t rdcc_nbytes, double rdcc_w0)
+ - Added new wrapper for H5Ovisit2()
+ H5Object::visit()
- // Retrieves the raw data chunk cache parameters.
- void getChunkCache(size_t &rdcc_nslots, size_t &rdcc_nbytes, double &rdcc_w0)
+ (BMR - 2019/02/14, HDFFV-10532)
- + New operator!= to class DataType (HDFFV-10472)
- // Determines whether two datatypes are not the same.
- bool operator!=(const DataType& compared_type)
+ - Added new wrappers for H5Pset/get_create_intermediate_group()
+ LinkCreatPropList::setCreateIntermediateGroup()
+ LinkCreatPropList::getCreateIntermediateGroup()
- + Wrappers for H5Oget_info2, H5Oget_info_by_name2, and H5Oget_info_by_idx2
- (HDFFV-10458)
-
- // Retrieves information about an HDF5 object.
- void getObjinfo(H5O_info_t& objinfo, unsigned fields = H5O_INFO_BASIC) const;
-
- // Retrieves information about an HDF5 object, given its name.
- void getObjinfo(const char* name, H5O_info_t& objinfo,
- unsigned fields = H5O_INFO_BASIC,
- const LinkAccPropList& lapl = LinkAccPropList::DEFAULT) const;
- void getObjinfo(const H5std_string& name, H5O_info_t& objinfo,
- unsigned fields = H5O_INFO_BASIC,
- const LinkAccPropList& lapl = LinkAccPropList::DEFAULT) const;
-
- // Retrieves information about an HDF5 object, given its index.
- void getObjinfo(const char* grp_name, H5_index_t idx_type,
- H5_iter_order_t order, hsize_t idx, H5O_info_t& objinfo,
- unsigned fields = H5O_INFO_BASIC,
- const LinkAccPropList& lapl = LinkAccPropList::DEFAULT) const;
- void getObjinfo(const H5std_string& grp_name, H5_index_t idx_type,
- H5_iter_order_t order, hsize_t idx, H5O_info_t& objinfo,
- unsigned fields = H5O_INFO_BASIC,
- const LinkAccPropList& lapl = LinkAccPropList::DEFAULT) const;
-
- (BMR - 2018/07/22, HDFFV-10150, HDFFV-10458, HDFFV-1047)
+ (BMR - 2019/04/22, HDFFV-10622)
Java Library:
----------------
+ - Fix a failure in JUnit-TestH5P on 32-bit architectures
+
+ (JTH - 2019/04/30)
+
- Duplicate the data read/write functions of Datasets for Attributes.
Region references could not be displayed for attributes as they could
@@ -239,7 +253,6 @@ New Features
(ADB - 2018/12/12, HDFVIEW-4)
-
- Removed H5I_REFERENCE from the Java wrappers
This ID class was never used by the library and has been removed
@@ -250,7 +263,12 @@ New Features
Tools:
------
- -
+ - h5dump was fixed for 128-bit floats, but the fix was missing a test.
+
+ A new test greps for the first 15 digits of the 128-bit value.
+
+ (ADB - 2019/06/23, HDFFV-9407)
+
High-Level APIs:
---------------
@@ -277,6 +295,57 @@ Bug Fixes since HDF5-1.10.3 release
Library
-------
+ - Fixed an issue where copying a version 1.8 dataset between files using
+ H5Ocopy fails due to an incompatible fill version
+
+ When using the HDF5 1.10.x H5Ocopy() API call to copy a version 1.8
+ dataset to a file created with both high and low library bounds set to
+ H5F_LIBVER_V18, the H5Ocopy() call will fail with the error stack indicating
+ that the fill value version is out of bounds.
+
+ This was fixed by changing the fill value message version to H5O_FILL_VERSION_3
+ (from H5O_FILL_VERSION_2) for H5F_LIBVER_V18.
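+
+ A sketch of the scenario that previously failed (the file and dataset
+ names are illustrative):
+
+     hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
+     H5Pset_libver_bounds(fapl, H5F_LIBVER_V18, H5F_LIBVER_V18);
+
+     hid_t dst = H5Fcreate("dest.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);
+     hid_t src = H5Fopen("v18_source.h5", H5F_ACC_RDONLY, H5P_DEFAULT);
+
+     /* Copying a version 1.8 dataset into the bounded file previously
+        failed with a "fill value version out of bounds" error stack. */
+     H5Ocopy(src, "dset", dst, "dset", H5P_DEFAULT, H5P_DEFAULT);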
+
+ (VC - 2019/6/14, HDFFV-10800)
+
+ - Fixed a bug that would cause an error or cause fill values to be
+ incorrectly read from a chunked dataset using the "single chunk" index if
+ the data was held in cache and there was no data on disk.
+
+ (NAF - 2019/03/06)
+
+ - Fixed a bug that could cause an error or cause fill values to be
+ incorrectly read from a dataset that was written to using H5Dwrite_chunk
+ if the dataset was not closed after writing.
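+
+ A sketch of the affected pattern (dataset creation and buffer setup are
+ omitted; the identifiers are illustrative):
+
+     hsize_t offset[1] = {0};
+
+     /* Write one raw chunk, then read back without closing the dataset. */
+     H5Dwrite_chunk(dset, H5P_DEFAULT, 0, offset, chunk_bytes, chunk_buf);
+     H5Dread(dset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, H5P_DEFAULT, read_buf);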
+
+ (NAF - 2019/03/06, HDFFV-10716)
+
+ - Fixed memory leak in scale offset filter
+
+ In a special case where MinBits is the same as the number of bits in
+ the datatype's precision, the filter's data buffer was not freed, causing
+ memory usage to grow. In the general case the buffer was freed correctly.
+ MinBits is the minimum number of bits needed to store the data values.
+ Please see the reference manual entry for H5Pset_scaleoffset for details.
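+
+ For reference, a minimal sketch of enabling the filter on a dataset
+ creation property list (the chunk dimensions are placeholders):
+
+     hsize_t chunk_dims[1] = {1024};
+     hid_t   dcpl = H5Pcreate(H5P_DATASET_CREATE);
+
+     H5Pset_chunk(dcpl, 1, chunk_dims);   /* the filter requires chunked layout */
+     /* H5Z_SO_INT_MINBITS_DEFAULT (0) lets the library compute MinBits */
+     H5Pset_scaleoffset(dcpl, H5Z_SO_INT, H5Z_SO_INT_MINBITS_DEFAULT);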
+
+ (RL - 2019/3/4, HDFFV-10705)
+
+ - Fixed hangs with collective metadata reads during chunked dataset I/O
+
+ In the parallel library, it was discovered that when a particular
+ sequence of operations following a pattern of:
+
+ "write to chunked dataset" -> "flush file" -> "read from dataset"
+
+ occurred with collective metadata reads enabled, hangs could be
+ observed due to certain MPI ranks not participating in the collective
+ metadata reads.
+
+ To fix the issue, collective metadata reads are now disabled during
+ chunked dataset raw data I/O.
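+
+ For context, a minimal sketch of how collective metadata reads are
+ enabled on a file access property list (the communicator choice is
+ application-specific):
+
+     hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
+     H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
+     H5Pset_all_coll_metadata_ops(fapl, 1);   /* collective metadata reads */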
+
+ (JTH - 2019/02/11, HDFFV-10563, HDFFV-10688)
+
- Performance issue when closing an object
The slow down is due to the search of the "tag_list" to find
@@ -300,8 +369,8 @@ Bug Fixes since HDF5-1.10.3 release
When deleting the attribute nodes from the name index v2 B-tree,
if an attribute is found in the intermediate B-tree nodes,
- which may be merged/redistributed in the process, we need to
- free the dynamically allocated spaces for the intermediate
+ which may be merged/redistributed in the process, we need to
+ free the dynamically allocated spaces for the intermediate
decoded attribute.
(VC - 2018/12/26, HDFFV-10659)
@@ -333,6 +402,18 @@ Bug Fixes since HDF5-1.10.3 release
(JTH - 2018/08/25, HDFFV-10501)
+ - When iterating over an old-style group (i.e., when not using the latest
+ file format) of size 0, a NULL pointer representing the empty links
+ table would be sent to qsort(3) for sorting, which is undefined behavior.
+
+ Iterating over an empty group is explicitly tested in the links test.
+ This has not caused any failures to date and was flagged by gcc's
+ -fsanitize=undefined.
+
+ The library no longer attempts to sort an empty array.
+
+ (DER - 2019/06/18, HDFFV-10829)
+
Java Library:
----------------
- JNI native library dependencies
@@ -378,10 +459,10 @@ Bug Fixes since HDF5-1.10.3 release
Fortran
--------
- Added symbolic links libhdf5_hl_fortran.so to libhdf5hl_fortran.so and
- libhdf5_hl_fortran.a to libhdf5hl_fortran.a in hdf5/lib directory for
- autotools installs. These were added to match the name of the files
- installed by cmake and the general pattern of hl lib files. We will
- change the names of the installed lib files to the matching name in
+ libhdf5_hl_fortran.a to libhdf5hl_fortran.a in hdf5/lib directory for
+ autotools installs. These were added to match the name of the files
+ installed by cmake and the general pattern of hl lib files. We will
+ change the names of the installed lib files to the matching name in
the next major release.
(LRK - 2019/01/04, HDFFV-10596)
@@ -392,7 +473,7 @@ Bug Fixes since HDF5-1.10.3 release
(MSB, 2018/12/04, HDFFV-10511)
- - Fixed issue with Fortran not returning h5o_info_t field values
+ - Fixed issue with Fortran not returning h5o_info_t field values
meta_size%attr%index_size and meta_size%attr%heap_size.
(MSB, 2018/1/8, HDFFV-10443)
@@ -675,10 +756,10 @@ Bug Fixes since HDF5-1.10.2 release
conversions from or to long double types, especially when special values
such as infinity or NAN were involved. In some cases the results differed
by extremely small amounts from those on other machines, while some other
- tests resulted in segmentation faults. These conversion tests with long
- double types have been disabled for ppc64 machines until the problems are
+ tests resulted in segmentation faults. These conversion tests with long
+ double types have been disabled for ppc64 machines until the problems are
better understood and can be properly addressed.
-
+
(SRL - 2019/01/07, TRILAB-98)
Supported Platforms
@@ -843,6 +924,10 @@ The following platforms are not supported but have been tested for this release.
Known Problems
==============
+ CMake files do not behave correctly with paths containing spaces.
+ Do not use spaces in paths because the required escaping for handling spaces
+ results in very complex and fragile build files.
+ ADB - 2019/05/07
At present, metadata cache images may not be generated by parallel
applications. Parallel applications can read files with metadata cache
diff --git a/release_docs/USING_CMake_Examples.txt b/release_docs/USING_CMake_Examples.txt
index d5fae39..ea352fe 100644
--- a/release_docs/USING_CMake_Examples.txt
+++ b/release_docs/USING_CMake_Examples.txt
@@ -22,7 +22,7 @@ I. Preconditions
1. We suggest you obtain the latest CMake for Windows from the Kitware
web site. The HDF5 1.10.x product requires a minimum CMake version
- of 3.2.2.
+ of 3.10.2.
2. You have installed the HDF5 library built with CMake, by executing
the HDF Install Utility (the *.msi file in the binary package for
@@ -38,12 +38,16 @@ II. Building HDF5 Examples with CMake
Files in the HDF5 install directory:
HDF5Examples folder
+ CTestScript.cmake
HDF5_Examples.cmake
+ HDF5_Examples_options.cmake
Default installation process:
Create a directory to run the examples, e.g. \test_hdf5.
Copy HDF5Examples folder to this directory.
+ Copy CTestScript.cmake to this directory.
Copy HDF5_Examples.cmake to this directory.
+ Copy HDF5_Examples_options.cmake to this directory.
The default source folder is defined as "HDF5Examples". It can be changed
with the CTEST_SOURCE_NAME script option.
The default installation folder is defined for the platform.
@@ -54,8 +58,9 @@ Default installation process:
with the CTEST_CONFIGURATION_TYPE script option. Note that this must
be the same as the value used with the -C command line option.
The default build configuration is defined to build and use static libraries.
- Shared libraries can be used with the STATIC_ONLY script option set to "NO".
- Other options can be changed by editing the HDF5_Examples.cmake file.
+
+ Shared libraries and other options can be changed by editing the
+ HDF5_Examples_options.cmake file.
If the defaults are okay, execute from this directory:
ctest -S HDF5_Examples.cmake -C Release -V -O test.log
@@ -69,9 +74,16 @@ Default installation process:
========================================================================
-III. Other changes to the HDF5_Examples.cmake file
+III. Defaults in the HDF5_Examples_options.cmake file
========================================================================
-Line 45-48: uncomment to use a source tarball or zipfile;
- Add script option "TAR_SOURCE=MySource.tar".
+#### DEFAULT: ###
+#### BUILD_SHARED_LIBS:BOOL=OFF ###
+#### HDF_BUILD_C:BOOL=ON ###
+#### HDF_BUILD_CXX:BOOL=OFF ###
+#### HDF_BUILD_FORTRAN:BOOL=OFF ###
+#### HDF_BUILD_JAVA:BOOL=OFF ###
+#### BUILD_TESTING:BOOL=OFF ###
+#### HDF_ENABLE_PARALLEL:BOOL=OFF ###
+#### HDF_ENABLE_THREADSAFE:BOOL=OFF ###