author	Quincey Koziol <koziol@hdfgroup.org>	2002-05-17 12:53:46 (GMT)
committer	Quincey Koziol <koziol@hdfgroup.org>	2002-05-17 12:53:46 (GMT)
commit	a6b4cba798a494dea1d29474cc5658f7003615d9 (patch)
tree	5ffa6f7b9868849e81a6392b29ad59ec9218dfe1 /pablo
parent	567c04276158059089d64e0e9fd5b9c7e1b8d7ba (diff)
[svn-r5429] Purpose:
Bug fix/Code improvement.
Description:
Currently, the routine that allocates space for all of a chunked dataset's
chunks is inefficient: it writes out every chunk in the dataset, whether or
not that chunk has already been allocated. Worse, this happens not only
when the dataset is created, but also every time it is opened for writing
or extended. On top of that, there is too much parallel I/O
synchronization, which slows things down even more.
Solution:
Only attempt to write out chunks that don't already exist. Additionally,
share the I/O writing among all the processes, instead of writing
everything from process 0. Then, only block with MPI_Barrier if chunks
were actually created.
Platforms tested:
IRIX64 6.5 (modi4)
Diffstat (limited to 'pablo')
0 files changed, 0 insertions, 0 deletions