author    | Quincey Koziol <koziol@hdfgroup.org> | 2002-05-17 12:53:46 (GMT)
committer | Quincey Koziol <koziol@hdfgroup.org> | 2002-05-17 12:53:46 (GMT)
commit    | a6b4cba798a494dea1d29474cc5658f7003615d9 (patch)
tree      | 5ffa6f7b9868849e81a6392b29ad59ec9218dfe1 /src/H5FDmpio.h
parent    | 567c04276158059089d64e0e9fd5b9c7e1b8d7ba (diff)
[svn-r5429] Purpose:
Bug fix/Code improvement.
Description:
Currently, the chunk data allocation routine invoked to allocate space for
the entire dataset is inefficient. It writes out every chunk in the dataset,
whether or not that chunk is already allocated. Additionally, this happens
not only when the dataset is created, but also whenever it is opened for
writing or extended. Worse, there is too much parallel I/O synchronization,
which slows things down even more.
Solution:
Only attempt to write out chunks that don't already exist. Additionally,
share the writes among all the MPI processes instead of writing everything
from process 0. Finally, block with MPI_Barrier only if chunks were actually
created. A sketch of this strategy appears below.
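A minimal sketch of that allocation strategy follows. It is illustrative only:
the helpers chunk_exists() and write_fill_chunk() are hypothetical stand-ins
for the real chunk B-tree lookup and fill-value write routines, not the actual
HDF5 internals.

/* Illustrative sketch only: chunk_exists() and write_fill_chunk() are
 * hypothetical stand-ins for the real chunk B-tree and I/O routines. */
#include <mpi.h>
#include <stddef.h>

int chunk_exists(size_t chunk_idx);     /* does the chunk already have file space? */
int write_fill_chunk(size_t chunk_idx); /* allocate and write one fill-value chunk */

static int
allocate_missing_chunks(int mpi_rank, int mpi_size, size_t n_chunks)
{
    size_t u;
    int    chunks_created = 0;

    for (u = 0; u < n_chunks; u++) {
        /* Skip chunks that already exist in the file */
        if (chunk_exists(u))
            continue;

        /* Every rank sees the same set of missing chunks, so every rank
         * computes the same value for chunks_created. */
        chunks_created = 1;

        /* Share the writes: each rank handles a disjoint, round-robin
         * subset of the missing chunks instead of rank 0 doing them all. */
        if (u % (size_t)mpi_size == (size_t)mpi_rank)
            if (write_fill_chunk(u) < 0)
                return -1;
    }

    /* Block only if chunks were actually created; all ranks take the
     * same branch because chunks_created is computed identically. */
    if (chunks_created)
        MPI_Barrier(MPI_COMM_WORLD);

    return 0;
}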
Platforms tested:
IRIX64 6.5 (modi4)
Diffstat (limited to 'src/H5FDmpio.h')
-rw-r--r-- | src/H5FDmpio.h | 2
1 file changed, 2 insertions, 0 deletions
diff --git a/src/H5FDmpio.h b/src/H5FDmpio.h
index 425a346..4750ef2 100644
--- a/src/H5FDmpio.h
+++ b/src/H5FDmpio.h
@@ -62,6 +62,8 @@ __DLL__ herr_t H5FD_mpio_setup(H5FD_t *_file, MPI_Datatype btype, MPI_Datatype f
 __DLL__ herr_t H5FD_mpio_wait_for_left_neighbor(H5FD_t *file);
 __DLL__ herr_t H5FD_mpio_signal_right_neighbor(H5FD_t *file);
 __DLL__ herr_t H5FD_mpio_closing(H5FD_t *file);
+__DLL__ int H5FD_mpio_mpi_rank(H5FD_t *_file);
+__DLL__ int H5FD_mpio_mpi_size(H5FD_t *_file);
 #ifdef __cplusplus
 }
 #endif
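The two declarations added by this patch expose the MPI rank and size of the
file's MPI-IO driver to internal callers. A rough usage sketch follows; only
H5FD_mpio_mpi_rank() and H5FD_mpio_mpi_size() come from this header, while the
wrapper function, its arguments, and the include set are hypothetical.

/* Sketch: only H5FD_mpio_mpi_rank()/H5FD_mpio_mpi_size() are declared in
 * H5FDmpio.h; the wrapper below and its "file" argument are hypothetical. */
#include <stdio.h>
#include "H5FDmpio.h"

static void
print_mpi_layout(H5FD_t *file)
{
    int mpi_rank = H5FD_mpio_mpi_rank(file); /* this process's rank in the file's communicator */
    int mpi_size = H5FD_mpio_mpi_size(file); /* number of processes in the file's communicator */

    printf("process %d of %d writes every %d-th missing chunk\n",
           mpi_rank, mpi_size, mpi_size);
}

Presumably these accessors are what let the chunk allocation code decide which
rank should write each missing chunk, as described in the commit message above.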