HDF5 Raw I/O Flow Notes

Quincey Koziol
koziol@ncsa.uiuc.edu
August 20, 2003

  1. Document's Audience:

    • Current H5 library designers and knowledgeable external developers.
  2. Background Reading:

  3. Introduction:

    What is this document about?
    This document supplements the accompanying flow charts, which describe the flow of control for raw data I/O in the library.

  4. Figures:

    The following figures provide the main information:

    High-Level View of Writing Raw Data
    Perform Serial or Parallel I/O
    Gather/Convert/Scatter
  5. Notes From Accompanying Figures:

    This section provides notes to augment the information in the accompanying figures.

    1. Validate Parameters - Resolve any H5S_ALL parameters for dataspace selections to actual dataspaces, allocate datatype conversion buffers, etc. (sketch A in section 6 below shows a minimal write that passes H5S_ALL for both selections).
    2. Space Allocated in File? - Space may not yet have been allocated in the file to store the dataset's data if "late allocation" was chosen as the allocation time when the dataset was created (sketch B in section 6 below shows how an application requests late allocation).
    3. Allocate & Fill Space - These operations allocate file space for both contiguous and chunked datasets. For chunked datasets, the allocation iterates through all the chunks and allocates both the B-tree information and the raw data in the file. Because of the way filters work, fill values for chunked datasets are written out as the chunks are allocated, rather than as a separate step. In parallel I/O, chunked dataset allocation can be time-consuming, since all the raw data in the dataset is allocated from one process.
    4. Datatype Conversion Needed? - This is currently the deciding factor between performing "direct I/O" (in serial or parallel) and performing gather/convert/scatter operations (sketch C in section 6 below shows a write that requires conversion). I believe MPI is capable of a limited range of type conversions; if so, we should add support for detecting when they can be used, which would allow more I/O operations to be performed collectively.
    5. Collective I/O Requested/Allowed? - The user must both request that collective I/O occur and issue an I/O operation that meets the requirements the library sets for collective parallel I/O (sketch D in section 6 below satisfies all three):
      • The dataspace must be scalar or simple (effectively a no-op, since the library does not currently support "complex" dataspaces).
      • The selection must be regular. "All" selections and hyperslab selections made with a single call to H5Sselect_hyperslab() (i.e. not hyperslab selections aggregated over multiple selection calls) are regular. Supporting point selections and irregular hyperslab selections is on the "to do" list.
      • The dataset must be stored contiguously on disk (as the figure also shows). Supporting chunked dataset storage is also on the "to do" list.
    6. Build "chunk map" - This step still has scalability issues: it creates a data structure proportional in size to the number of chunks that will be written, which can be very large (sketch E in section 6 below shows how a chunked layout is selected). Building the "chunk map" information incrementally is also on the "to do" list.
    7. Perform Chunked I/O - As the figure shows, there is no support for collective parallel I/O on chunked datasets currently. As noted earlier, this is on the "to do" list.
    8. Perform "Direct" Serial I/O - "Direct" serial I/O writes data straight from the application's buffer, without any intervening buffer or memory copies (see sketch A in section 6 below). For best performance, the elements in the selections should be adjoining.
    9. Perform Collective Parallel I/O - This step also writes data directly from an application buffer, but additionally uses collective MPI I/O operations to combine the data from each process in the parallel application in an efficient manner.
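
  6. Example Code Sketches:

    The following minimal C sketches illustrate, from the application's side, the conditions that steer the library down the paths described above. They are illustrative sketches, not library internals: they use current API names (H5Dcreate2 and friends, which postdate this note), invented file and dataset names, and omit error checking for brevity.

    Sketch A - "Direct" serial write (notes 1 and 8): passing H5S_ALL for both dataspace arguments makes the library resolve them to the dataset's full dataspace during parameter validation. With matching memory and file datatypes and (default) contiguous storage, the write should follow the "direct" serial path, with no datatype conversion or intervening buffer.

      #include "hdf5.h"

      int main(void)
      {
          hsize_t dims[2] = {10, 20};
          int     buf[10][20] = {{0}};   /* data to write */

          hid_t file  = H5Fcreate("direct.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
          hid_t space = H5Screate_simple(2, dims, NULL);

          /* Memory and file datatypes match; storage defaults to contiguous */
          hid_t dset = H5Dcreate2(file, "data", H5T_NATIVE_INT, space,
                                  H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

          /* H5S_ALL for both selections is resolved to the full dataspace
           * during parameter validation (note 1) */
          H5Dwrite(dset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, H5P_DEFAULT, buf);

          H5Dclose(dset);
          H5Sclose(space);
          H5Fclose(file);
          return 0;
      }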
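    Sketch B - Late allocation (notes 2 and 3): a dataset creation property list requesting "late" allocation defers allocating file space until the first write, which is when the "Space Allocated in File?" check fails and the "Allocate & Fill Space" step runs. The function name is invented for illustration.

      #include "hdf5.h"

      /* Create a dataset whose file space is allocated lazily, at first write */
      hid_t create_late_alloc_dataset(hid_t file)
      {
          hsize_t dims[1] = {1000};
          int     fill    = -1;

          hid_t space = H5Screate_simple(1, dims, NULL);
          hid_t dcpl  = H5Pcreate(H5P_DATASET_CREATE);

          H5Pset_alloc_time(dcpl, H5D_ALLOC_TIME_LATE);   /* defer allocation */
          H5Pset_fill_value(dcpl, H5T_NATIVE_INT, &fill); /* fill written when space is allocated */

          hid_t dset = H5Dcreate2(file, "late", H5T_NATIVE_INT, space,
                                  H5P_DEFAULT, dcpl, H5P_DEFAULT);
          H5Pclose(dcpl);
          H5Sclose(space);
          return dset;
      }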
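    Sketch C - Write requiring datatype conversion (note 4): storing a big-endian 64-bit integer dataset while supplying a native-int buffer makes the memory and file datatypes differ, so the write should take the gather/convert/scatter path instead of "direct I/O". Again, names are invented for illustration.

      #include "hdf5.h"

      /* Write native ints into a big-endian 64-bit dataset: conversion needed */
      void write_with_conversion(hid_t file)
      {
          hsize_t dims[1]  = {100};
          int     buf[100] = {0};

          hid_t space = H5Screate_simple(1, dims, NULL);
          hid_t dset  = H5Dcreate2(file, "converted", H5T_STD_I64BE, space,
                                   H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

          /* Memory type (native int) != file type (big-endian 64-bit), so the
           * library gathers, converts, and scatters through its conversion buffer */
          H5Dwrite(dset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, H5P_DEFAULT, buf);

          H5Dclose(dset);
          H5Sclose(space);
      }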
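    Sketch D - Collective parallel write (notes 5 and 9): this assumes an HDF5 library built with parallel (MPI-IO) support. Each process makes one regular hyperslab selection (a single H5Sselect_hyperslab() call) on a contiguous dataset with matching datatypes, and requests collective transfer, so the write should satisfy the requirements listed in note 5.

      #include "hdf5.h"
      #include <mpi.h>

      int main(int argc, char **argv)
      {
          int rank, nprocs;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

          /* Open the file through the MPI-IO file driver */
          hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
          H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
          hid_t file = H5Fcreate("collective.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

          /* One row of the dataset per process; contiguous (default) storage */
          hsize_t dims[2] = {(hsize_t)nprocs, 100};
          hid_t   fspace  = H5Screate_simple(2, dims, NULL);
          hid_t   dset    = H5Dcreate2(file, "data", H5T_NATIVE_INT, fspace,
                                       H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

          /* A single hyperslab call per process keeps the selection "regular" */
          hsize_t start[2] = {(hsize_t)rank, 0};
          hsize_t count[2] = {1, 100};
          H5Sselect_hyperslab(fspace, H5S_SELECT_SET, start, NULL, count, NULL);
          hid_t mspace = H5Screate_simple(2, count, NULL);

          /* Request collective transfer for this write */
          hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
          H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);

          int buf[100] = {0};
          H5Dwrite(dset, H5T_NATIVE_INT, mspace, fspace, dxpl, buf);

          H5Pclose(dxpl);
          H5Sclose(mspace);
          H5Dclose(dset);
          H5Sclose(fspace);
          H5Pclose(fapl);
          H5Fclose(file);
          MPI_Finalize();
          return 0;
      }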
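    Sketch E - Chunked storage (notes 6 and 7): setting a chunked layout on the dataset creation property list routes every subsequent read or write through the "chunk map" and chunked I/O path described above. The chunk dimensions here are arbitrary; the function name is invented for illustration.

      #include "hdf5.h"

      /* Create a chunked dataset; its I/O goes through the chunk-map path */
      hid_t create_chunked_dataset(hid_t file)
      {
          hsize_t dims[2]  = {1000, 1000};
          hsize_t chunk[2] = {100, 100};   /* 10 x 10 = 100 chunks total */

          hid_t space = H5Screate_simple(2, dims, NULL);
          hid_t dcpl  = H5Pcreate(H5P_DATASET_CREATE);
          H5Pset_chunk(dcpl, 2, chunk);    /* select chunked layout */

          hid_t dset = H5Dcreate2(file, "chunked", H5T_NATIVE_INT, space,
                                  H5P_DEFAULT, dcpl, H5P_DEFAULT);
          H5Pclose(dcpl);
          H5Sclose(space);
          return dset;
      }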