From 239206ef1982d735ad2b9900496de58c42ad03ab Mon Sep 17 00:00:00 2001
From: Richard Warren
Date: Fri, 25 Sep 2020 20:09:45 -0400
Subject: Edits (mostly spelling) to the SUBFILING_README.txt file

---
 SUBFILING_README.txt | 35 +++++++++++++++++------------------
 1 file changed, 17 insertions(+), 18 deletions(-)

diff --git a/SUBFILING_README.txt b/SUBFILING_README.txt
index 028dcff..71dd9e4 100644
--- a/SUBFILING_README.txt
+++ b/SUBFILING_README.txt
@@ -7,19 +7,19 @@
 complete but provides support for HDF5 contigous reads and writes of datasets
 via a Sub-filing VOL and a prototype VFD.
 
 HDF5 Sub-filing is designed as a software RAID-0 implementation in which
-software controlers called IO-Concentrators provide access to individual data
-files. At present, only a single IO-Conentrator (IOC) is allocated per NODE,
-though is can be tuned via environment variable.
+software controllers called IO-Concentrators provide access to individual data
+files. At present, only a single IO-Concentrator (IOC) is allocated per NODE,
+though this can be tuned via an environment variable.
 
 An important detail with respect to the IOC implementation is that this
-functionality is provided via a dedicated exeution thread on the lowest MPI
+functionality is provided via a dedicated execution thread on the lowest MPI
 rank on each node. This thread is augmented with a supporting pool of
-"worker threads" to off-load the actual file reads and writes and thereby
-improve the IOC thread message handling latencies. Communication between the
-parallel HDF5 appplication processes and the collection of IOCs is
+"worker threads" to off-load the actual file reading and writing and thereby
+improve IOC thread message handling latency. Communication between the
+parallel HDF5 application processes and the collection of IOCs is
 accomplished by utilizing MPI. A consequence of this reliance on MPI for
-IOC messaging is that parallel HDF5 applications *MUST* initialize MPI
-using MPI_Init_thread, e.g.
+IOC messaging is that parallel HDF5 applications *MUST* initialize the MPI
+library using MPI_Init_thread, e.g.
 int
 main(int argc, char **argv)
@@ -35,18 +35,17 @@ main(int argc, char **argv)
 }
 
 NOTE:
-On Cori, the default modules providing the 'cc' (compiler) and access to
+On Cori, the default modules providing the 'cc' compiler and access to
 MPI libraries are not sufficient for use with SUB-FILING. In particular,
 the default MPI library does not support MPI_THREAD_MULTIPLE. I believe
-that supports only MPI_THREAD_FUNNELED or potentially MPI_THREAD_SERIALIZED.
-The initial benchmarking efforts utilize OpenMPI to provide the necessary
-functionality.
+that it supports multithreading, but only MPI_THREAD_FUNNELED or potentially
+MPI_THREAD_SERIALIZED. The initial benchmarking efforts utilize OpenMPI to
+provide the necessary thread-safe functionality.
 
 Interestingly, the default C compiler (or at least a compiler wrapper which
 calls the actual C compiler) appears to know about the default MPI and thus
-avoids the necessity by users to provide an include path or a library link
-path when creating executables or libraries. As a consequence, we unload
-these specific modules prior to selecting alternatives for compilation and
-the MPI library.
-
+avoids the necessity of users providing an include path (-I) or a library
+link path (-L) when creating executables or shared objects (libraries).
+As a consequence, we unload these specific modules prior to selecting
+alternatives for compilation and for the MPI library implementation.
-- 
cgit v0.12