Diffstat (limited to 'release_docs')
 -rw-r--r--  release_docs/INSTALL_CMake.txt |  1
 -rw-r--r--  release_docs/README_HPC        | 79
 -rw-r--r--  release_docs/RELEASE.txt       | 94
 3 files changed, 130 insertions(+), 44 deletions(-)
diff --git a/release_docs/INSTALL_CMake.txt b/release_docs/INSTALL_CMake.txt
index 26cf3ce..233c2cc 100644
--- a/release_docs/INSTALL_CMake.txt
+++ b/release_docs/INSTALL_CMake.txt
@@ -635,6 +635,7 @@ HDF5_ENABLE_DIRECT_VFD "Build the Direct I/O Virtual File Driver"
HDF5_ENABLE_EMBEDDED_LIBINFO "embed library info into executables" ON
HDF5_ENABLE_HSIZET "Enable datasets larger than memory" ON
HDF5_ENABLE_PARALLEL "Enable parallel build (requires MPI)" OFF
+HDF5_ENABLE_PREADWRITE "Use pread/pwrite in sec2/log/core VFDs in place of read/write (when available)" ON
HDF5_ENABLE_TRACE "Enable API tracing capability" OFF
HDF5_ENABLE_USING_MEMCHECKER "Indicate that a memory checker is used" OFF
HDF5_GENERATE_HEADERS "Rebuild Generated Files" ON
diff --git a/release_docs/README_HPC b/release_docs/README_HPC
new file mode 100644
index 0000000..bdeab67
--- /dev/null
+++ b/release_docs/README_HPC
@@ -0,0 +1,79 @@
+HDF5 version 1.11.4 currently under development
+
+HDF5 source tar files with the HPC prefix are intended for use on clusters where
+the configuration and build steps are done on a login node and the resulting
+executables and libraries are run on compute nodes.
+
+Note these differences from the regular CMake tar and zip files:
+  - Test programs produced by this tar file will be run using batch scripts.
+  - Both serial and parallel HDF5options.cmake files are included; the
+    parallel options are used by default.
+
+Note also that options are now available in the HDF5 source to facilitate the
+use of toolchain files with cross compilers available on login nodes, so that
+HDF5 can be compiled for compute nodes.
+
+Instructions to configure, build, and test HDF5 using CMake:
+
+1. The cmake version must be 3.10 or later (check with cmake --version).
+2. Load or switch modules and set CC, FC, and CXX to the desired compilers.
+3. Run build-unix.sh to configure, build, test, and package HDF5 with CMake.
+
+Contents:
+
+build-unix.sh Simple script for running CMake to configure, build,
+ test, and package HDF5.
+CTestScript.cmake CMake script to configure, build, test and package
+ HDF5.
+hdf5-<version> HDF5 source for <version>.
+HDF5config.cmake CMake script to configure, build, test and package
+ HDF5.
+HDF5Examples Source for HDF5 Examples.
+HDF5options.cmake     symlink to the parallel or serial HDF5options.cmake file.
+                      The default is the parallel file, which builds and tests
+                      both serial and parallel C and Fortran wrappers.
+                      To build serial only (C, Fortran, and C++ wrappers),
+                      delete the HDF5options.cmake link and run
+                      'ln -s ser-HDF5options.cmake HDF5options.cmake' to switch.
+par-HDF5options.cmake Options file for HDF5 serial and parallel build and test.
+ser-HDF5options.cmake Options file for HDF5 serial only build and test.
+SZip.tar.gz Source for building SZip.
+ZLib.tar.gz Source for building Zlib.
+
+
+To cross compile with this HPC-CMake tar.gz HDF5 source file:
+On Cray XC40 haswell login node for knl compute nodes using CMake and Cray modules:
+  1. Uncomment the line in HDF5options.cmake that enables a toolchain file -
+     line 106 for config/toolchain/crayle.cmake.
+ 2. Uncomment lines 110, 111, and 115 - 122 of HDF5options.cmake.
+ Line 110 allows configuring to complete on the haswell node.
+ Line 111 switches the compiler to build files for knl nodes.
+ Lines 115 - 122 set up test files to use sbatch to run build tests
+ in batch jobs on a knl compute node with 6 processes.
+  3. The compiler module may be the default PrgEnv-intel/6.0.4 (to use
+     intel/18.0.2 or another Intel compiler), PrgEnv-cray/6.0.4 (to use
+     cce/8.7.4), or PrgEnv-gnu/6.0.4 (for GCC compilers). PrgEnv-pgi/6.0.4
+     is also available but has not been tested with this tar file.
+ 4. These CMake options are set in config/toolchain/crayle.cmake:
+ set(CMAKE_SYSTEM_NAME Linux)
+ set(CMAKE_COMPILER_VENDOR "CrayLinuxEnvironment")
+ set(CMAKE_C_COMPILER cc)
+ set(CMAKE_CXX_COMPILER c++)
+ set(CMAKE_Fortran_COMPILER ftn)
+ set(CMAKE_CROSSCOMPILING_EMULATOR "")
+
+  5. Settings for two other cross-compiling options are also in the
+     config/toolchain files; these do not seem to be necessary with the
+     Cray PrgEnv-* modules:
+     a. HDF5_USE_PREGEN. This option, along with the HDF5_USE_PREGEN_DIR
+        CMake variable, would allow an appropriate H5Tinit.c file, with type
+        information generated on a compute node, to be used when cross
+        compiling for those compute nodes. The variables in lines 110 and
+        111 of the HDF5options.cmake file seem to preclude needing this
+        option with the available Cray modules and CMake options.
+     b. HDF5_BATCH_H5DETECT and associated CMake variables. This option,
+        when properly configured, will run H5detect in a batch job on a
+        compute node at the beginning of the CMake build process. It was
+        also found to be unnecessary with the available Cray modules and
+        CMake options.
+-
diff --git a/release_docs/RELEASE.txt b/release_docs/RELEASE.txt
index 2dfcfc1..879c3f3 100644
--- a/release_docs/RELEASE.txt
+++ b/release_docs/RELEASE.txt
@@ -167,52 +167,12 @@ New Features
C++ Library:
------------
- - New wrappers
+ - Added new function to the C++ interface
- Added the following items:
-
- + Class DSetAccPropList for the dataset access property list.
-
- + Wrapper for H5Dget_access_plist to class DataSet
- // Gets the access property list of this dataset.
- DSetAccPropList getAccessPlist() const;
-
- + Wrappers for H5Pset_chunk_cache and H5Pget_chunk_cache to class DSetAccPropList
- // Sets the raw data chunk cache parameters.
- void setChunkCache(size_t rdcc_nslots, size_t rdcc_nbytes, double rdcc_w0)
-
- // Retrieves the raw data chunk cache parameters.
- void getChunkCache(size_t &rdcc_nslots, size_t &rdcc_nbytes, double &rdcc_w0)
-
- + New operator!= to class DataType (HDFFV-10472)
- // Determines whether two datatypes are not the same.
- bool operator!=(const DataType& compared_type)
+ Added wrapper for H5Ovisit2:
+ H5Object::visit()
- + Wrappers for H5Oget_info2, H5Oget_info_by_name2, and H5Oget_info_by_idx2
- (HDFFV-10458)
-
- // Retrieves information about an HDF5 object.
- void getObjinfo(H5O_info_t& objinfo, unsigned fields = H5O_INFO_BASIC) const;
-
- // Retrieves information about an HDF5 object, given its name.
- void getObjinfo(const char* name, H5O_info_t& objinfo,
- unsigned fields = H5O_INFO_BASIC,
- const LinkAccPropList& lapl = LinkAccPropList::DEFAULT) const;
- void getObjinfo(const H5std_string& name, H5O_info_t& objinfo,
- unsigned fields = H5O_INFO_BASIC,
- const LinkAccPropList& lapl = LinkAccPropList::DEFAULT) const;
-
- // Retrieves information about an HDF5 object, given its index.
- void getObjinfo(const char* grp_name, H5_index_t idx_type,
- H5_iter_order_t order, hsize_t idx, H5O_info_t& objinfo,
- unsigned fields = H5O_INFO_BASIC,
- const LinkAccPropList& lapl = LinkAccPropList::DEFAULT) const;
- void getObjinfo(const H5std_string& grp_name, H5_index_t idx_type,
- H5_iter_order_t order, hsize_t idx, H5O_info_t& objinfo,
- unsigned fields = H5O_INFO_BASIC,
- const LinkAccPropList& lapl = LinkAccPropList::DEFAULT) const;
-
- (BMR - 2018/07/22, HDFFV-10150, HDFFV-10458, HDFFV-1047)
+ (BMR - 2019/02/14, HDFFV-10532)
Java Library:
@@ -264,6 +224,38 @@ Bug Fixes since HDF5-1.10.3 release
Library
-------
+ - Fix hangs with collective metadata reads during chunked dataset I/O
+
+      In the parallel library, it was discovered that when a sequence of
+      operations following the pattern:
+
+      "write to chunked dataset" -> "flush file" -> "read from dataset"
+
+      occurred with collective metadata reads enabled, hangs could result
+      because certain MPI ranks did not participate in the collective
+      metadata reads.
+
+ To fix the issue, collective metadata reads are now disabled during
+ chunked dataset raw data I/O.
+
+ (JTH - 2019/02/11, HDFFV-10563, HDFFV-10688)
+
+    - Performance issue when closing an object
+
+      The slowdown was due to searching the "tag_list" to determine the
+      "corked" status of an object and "uncork" it if needed.
+
+      Performance is improved by skipping the "tag_list" search when
+      there are no "corked" objects at the time an object is closed.
+
+      (VC - 2019/02/06)
+
+ - Fixed a potential invalid memory access and failure that could occur when
+ decoding an unknown object header message (from a future version of the
+ library).
+
+ (NAF - 2019/01/07)
+
- Deleting attributes in dense storage
The library aborts with "infinite loop closing library" after
@@ -395,6 +387,11 @@ Bug Fixes since HDF5-1.10.3 release
Testing
-------
+ - Fixed a test failure in testpar/t_dset.c caused by
+ the test trying to use the parallel filters feature
+ on MPI-2 implementations.
+
+      (JTH - 2019/02/07)
Bug Fixes since HDF5-1.10.2 release
==================================
@@ -637,6 +634,15 @@ Bug Fixes since HDF5-1.10.2 release
Testing
-------
+ - The dt_arith test failed on IBM Power8 and Power9 machines when testing
+ conversions from or to long double types, especially when special values
+ such as infinity or NAN were involved. In some cases the results differed
+ by extremely small amounts from those on other machines, while some other
+ tests resulted in segmentation faults. These conversion tests with long
+ double types have been disabled for ppc64 machines until the problems are
+ better understood and can be properly addressed.
+
+ (SRL - 2019/01/07, TRILAB-98)
Supported Platforms
===================