author    Jordan Henderson <jhenderson@hdfgroup.org>    2019-02-12 01:13:39 (GMT)
committer Jordan Henderson <jhenderson@hdfgroup.org>    2019-02-12 01:13:39 (GMT)
commit    34508f0620363d90ef3f76b314d52e4b01c20a81 (patch)
tree      af8054ab51fc5f4cc495882df9ed4343960586b9 /release_docs
parent    65a820ae8981a84fe7fbac87c48482e9f82b35f4 (diff)
parent    fa83ab9f7c7dd7966f475932325b5e3740810cfd (diff)
Merge in latest from develop
Diffstat (limited to 'release_docs')
-rw-r--r--  release_docs/INSTALL_CMake.txt   1
-rw-r--r--  release_docs/README_HPC         79
-rw-r--r--  release_docs/RELEASE.txt        34
3 files changed, 104 insertions, 10 deletions
diff --git a/release_docs/INSTALL_CMake.txt b/release_docs/INSTALL_CMake.txt
index 26cf3ce..233c2cc 100644
--- a/release_docs/INSTALL_CMake.txt
+++ b/release_docs/INSTALL_CMake.txt
@@ -635,6 +635,7 @@ HDF5_ENABLE_DIRECT_VFD "Build the Direct I/O Virtual File Driver"
HDF5_ENABLE_EMBEDDED_LIBINFO "embed library info into executables" ON
HDF5_ENABLE_HSIZET "Enable datasets larger than memory" ON
HDF5_ENABLE_PARALLEL "Enable parallel build (requires MPI)" OFF
+HDF5_ENABLE_PREADWRITE "Use pread/pwrite in sec2/log/core VFDs in place of read/write (when available)" ON
HDF5_ENABLE_TRACE "Enable API tracing capability" OFF
HDF5_ENABLE_USING_MEMCHECKER "Indicate that a memory checker is used" OFF
HDF5_GENERATE_HEADERS "Rebuild Generated Files" ON
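The new HDF5_ENABLE_PREADWRITE option is set at configure time like the other options in this table; a minimal sketch, assuming an out-of-source build directory next to the hdf5 source tree (the paths are placeholders):

```shell
# Configure HDF5, explicitly setting the (default-ON) pread/pwrite option
# for the sec2/log/core VFDs. The source path is hypothetical.
cmake -DHDF5_ENABLE_PREADWRITE:BOOL=ON ../hdf5
```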
diff --git a/release_docs/README_HPC b/release_docs/README_HPC
new file mode 100644
index 0000000..bdeab67
--- /dev/null
+++ b/release_docs/README_HPC
@@ -0,0 +1,79 @@
+HDF5 version 1.11.4 currently under development
+
+HDF5 source tar files with the HPC prefix are intended for use on clusters where
+the configuration and build steps are done on a login node and the resulting
+executables and libraries are run on compute nodes.
+
+Note these differences from the regular CMake tar and zip files:
+ - Test programs produced by this tar file will be run using batch scripts.
+ - Both serial and parallel HDF5options.cmake files are included; the parallel options are used by default.
+
+Note also that options are now available in the HDF5 source to facilitate the
+use of toolchain files, so that cross compilers available on login nodes can be
+used to compile HDF5 for compute nodes.
+
+Instructions to configure, build, and test HDF5 using CMake:
+
+1. The CMake version must be 3.10 or later (check with cmake --version).
+2. Load or switch modules and set CC, FC, and CXX to the desired compilers.
+3. Run build-unix.sh to configure, build, test, and package HDF5 with CMake.
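As a rough sketch, the three steps above might look like this on a login node (the module and compiler names are examples for a Cray system, not part of this tar file):

```shell
# Hypothetical module names; adjust for the target cluster.
module load cmake                 # step 1: CMake must be 3.10 or later
cmake --version                   #         verify the version
export CC=cc FC=ftn CXX=CC        # step 2: compiler wrappers for Cray PrgEnv
./build-unix.sh                   # step 3: configure, build, test, and package
```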
+
+Contents:
+
+build-unix.sh Simple script for running CMake to configure, build,
+ test, and package HDF5.
+CTestScript.cmake CMake script to configure, build, test and package
+ HDF5.
+hdf5-<version> HDF5 source for <version>.
+HDF5config.cmake CMake script to configure, build, test and package
+ HDF5.
+HDF5Examples Source for HDF5 Examples.
+HDF5options.cmake     Symlink to the parallel or serial HDF5options.cmake file.
+                      The default is the parallel file, which builds and tests
+                      both serial and parallel C and Fortran wrappers.
+                      To build serial only (C, Fortran, and C++ wrappers), delete
+                      the HDF5options.cmake link and run
+                      'ln -s ser-HDF5options.cmake HDF5options.cmake' to switch.
+par-HDF5options.cmake Options file for HDF5 serial and parallel build and test.
+ser-HDF5options.cmake Options file for HDF5 serial only build and test.
+SZip.tar.gz Source for building SZip.
+ZLib.tar.gz Source for building Zlib.
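Switching the HDF5options.cmake symlink from the default parallel file to the serial one, as described in the Contents list above, amounts to:

```shell
# Replace the default (parallel) options link with the serial one.
rm -f HDF5options.cmake
ln -s ser-HDF5options.cmake HDF5options.cmake
```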
+
+
+To cross compile with this HPC-CMake tar.gz HDF5 source file:
+On a Cray XC40 haswell login node, targeting knl compute nodes, using CMake and Cray modules:
+ 1. Uncomment the line in HDF5options.cmake that selects a toolchain file -
+    line 106 for config/toolchain/crayle.cmake.
+ 2. Uncomment lines 110, 111, and 115 - 122 of HDF5options.cmake.
+ Line 110 allows configuring to complete on the haswell node.
+ Line 111 switches the compiler to build files for knl nodes.
+ Lines 115 - 122 set up test files to use sbatch to run build tests
+ in batch jobs on a knl compute node with 6 processes.
+ 3. The compiler module may be the default PrgEnv-intel/6.0.4 (using
+    intel/18.0.2 or another intel version), PrgEnv-cray/6.0.4 (using
+    cce/8.7.4), or PrgEnv-gnu/6.0.4 for GCC compilers. PrgEnv-pgi/6.0.4
+    is also available but has not been tested with this tar file.
+ 4. These CMake options are set in config/toolchain/crayle.cmake:
+ set(CMAKE_SYSTEM_NAME Linux)
+ set(CMAKE_COMPILER_VENDOR "CrayLinuxEnvironment")
+ set(CMAKE_C_COMPILER cc)
+ set(CMAKE_CXX_COMPILER c++)
+ set(CMAKE_Fortran_COMPILER ftn)
+ set(CMAKE_CROSSCOMPILING_EMULATOR "")
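With crayle.cmake as the toolchain file, a manual configure step (outside the CTestScript.cmake/HDF5config.cmake drivers that the tar file normally uses) might look like the following sketch; the source path is a placeholder:

```shell
# Hypothetical manual invocation; the HPC tar file normally drives this
# through 'ctest -S HDF5config.cmake ...'.
cmake -DCMAKE_TOOLCHAIN_FILE=config/toolchain/crayle.cmake \
      -DHDF5_ENABLE_PARALLEL:BOOL=ON \
      ../hdf5-1.11.4
```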
+
+ 5. Settings for two other cross-compiling options are also in the
+    config/toolchain files; they do not seem to be necessary with the
+    Cray PrgEnv-* modules.
+    a. HDF5_USE_PREGEN. This option, along with the HDF5_USE_PREGEN_DIR
+       CMake variable, allows an appropriate H5Tinit.c file, containing
+       type information generated on a compute node, to be used when
+       cross compiling for those compute nodes. The use of the variables
+       in lines 110 and 111 of the HDF5options.cmake file seems to
+       preclude needing this option with the available Cray modules and
+       CMake options.
+    b. HDF5_BATCH_H5DETECT and associated CMake variables. This option,
+       when properly configured, will run H5detect in a batch job on a
+       compute node at the beginning of the CMake build process. It was
+       also found to be unnecessary with the available Cray modules and
+       CMake options.
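For illustration only, a batch script in the spirit of the sbatch-driven tests mentioned in step 2 might look like this; the constraint and task count are assumptions, not values taken from this tar file:

```shell
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=6          # six processes, matching the knl test setup above
#SBATCH --constraint=knl    # hypothetical: request a knl compute node
# Run the parallel test suite that was built on the login node.
ctest . -C Release -VV
```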
+-
diff --git a/release_docs/RELEASE.txt b/release_docs/RELEASE.txt
index 9e63045..ef3bda0 100644
--- a/release_docs/RELEASE.txt
+++ b/release_docs/RELEASE.txt
@@ -264,21 +264,21 @@ Bug Fixes since HDF5-1.10.3 release
Library
-------
- - Fix hangs with collective metadata reads during chunked dataset I/O
+ - Performance issue when closing an object
- In the parallel library, it was discovered that when a particular
- sequence of operations following a pattern of:
+ The slowdown is caused by searching the "tag_list" to determine
+ whether an object is "corked" and to "uncork" it if so.
- "write to chunked dataset" -> "flush file" -> "read from dataset"
+ Performance is improved by skipping the "tag_list" search
+ when no objects are "corked" at the time an object is closed.
- occurred with collective metadata reads enabled, hangs could be
- observed due to certain MPI ranks not participating in the collective
- metadata reads.
+ (VC - 2019/2/6)
- To fix the issue, collective metadata reads are now disabled during
- chunked dataset raw data I/O.
+ - Fixed a potential invalid memory access and failure that could occur when
+ decoding an unknown object header message (from a future version of the
+ library).
- (JTH - 2019/02/11, HDFFV-10563, HDFFV-10688)
+ (NAF - 2019/01/07)
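The "corked" status mentioned in the first entry above is the state an application sets with the metadata-cache cork calls; a minimal sketch of corking and uncorking a dataset (the file and dataset names are hypothetical):

```c
#include "hdf5.h"

int main(void)
{
    /* Hypothetical file and dataset names */
    hid_t file = H5Fopen("example.h5", H5F_ACC_RDWR, H5P_DEFAULT);
    hid_t dset = H5Dopen2(file, "/data", H5P_DEFAULT);

    /* "Cork" the dataset: keep its metadata cache entries from flushing */
    H5Odisable_mdc_flushes(dset);

    /* ... perform metadata updates on the dataset ... */

    /* "Uncork" before closing; closing an object is the path whose
     * tag_list search this fix speeds up when nothing is corked. */
    H5Oenable_mdc_flushes(dset);

    H5Dclose(dset);
    H5Fclose(file);
    return 0;
}
```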
- Deleting attributes in dense storage
@@ -411,6 +411,11 @@ Bug Fixes since HDF5-1.10.3 release
Testing
-------
+ - Fixed a test failure in testpar/t_dset.c caused by
+ the test trying to use the parallel filters feature
+ on MPI-2 implementations.
+
+ (JTH, 2019/2/7)
Bug Fixes since HDF5-1.10.2 release
==================================
@@ -653,6 +658,15 @@ Bug Fixes since HDF5-1.10.2 release
Testing
-------
+ - The dt_arith test failed on IBM Power8 and Power9 machines when testing
+ conversions from or to long double types, especially when special values
+ such as infinity or NAN were involved. In some cases the results differed
+ by extremely small amounts from those on other machines, while some other
+ tests resulted in segmentation faults. These conversion tests with long
+ double types have been disabled for ppc64 machines until the problems are
+ better understood and can be properly addressed.
+
+ (SRL - 2019/01/07, TRILAB-98)
Supported Platforms
===================