author | MuQun Yang <ymuqun@hdfgroup.org> | 2005-08-11 18:48:09 (GMT)
committer | MuQun Yang <ymuqun@hdfgroup.org> | 2005-08-11 18:48:09 (GMT)
commit | 870c5b2f66c158446b385cf67f507f1641aca1e2 (patch)
tree | 73d52b3101ef2b1a054e035710670bd5a694b48c /acsite.m4
parent | 0e1b41d0fd1521784128e8637b5afa8371d2779d (diff)
download | hdf5-870c5b2f66c158446b385cf67f507f1641aca1e2.zip hdf5-870c5b2f66c158446b385cf67f507f1641aca1e2.tar.gz hdf5-870c5b2f66c158446b385cf67f507f1641aca1e2.tar.bz2
[svn-r11231] Purpose:
Bug fix for collective chunk IO, phase 1.
Optimization has not been done yet, but the collective chunk IO bug should be fixed.
Description:
With chunked storage, the memory space and file space are remapped chunk by chunk. So to decide
whether the file space and memory space are regular enough to use an optimized MPI derived
datatype for the collective call, the check has to be done per chunk rather than per hyperslab.
Even a regular memory space is stored as a span tree and may become irregular before the chunk IO, as illustrated in the sketch below.
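
A minimal sketch of that per-chunk check, assuming hypothetical names (chunk_sel_t, selection_is_regular) that only stand in for the real internal routines:

/* Hypothetical types and helpers; the real HDF5 internals differ. The point
 * is that regularity is tested on the selections clipped to one chunk, not
 * on the original whole-dataset hyperslab selections. */
typedef struct {
    const void *file_sel;  /* file-space selection clipped to this chunk */
    const void *mem_sel;   /* memory-space piece remapped to this chunk  */
} chunk_sel_t;

/* Placeholder for the real "is this one regular hyperslab?" test; returning
 * 0 here only keeps the sketch compilable. */
static int selection_is_regular(const void *sel) { (void)sel; return 0; }

/* An optimized MPI derived datatype is only safe when both the file-space
 * and memory-space pieces of this chunk are regular. */
static int chunk_can_use_derived_type(const chunk_sel_t *cs)
{
    return selection_is_regular(cs->file_sel) &&
           selection_is_regular(cs->mem_sel);
}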
Solution:
1. Check the file space and memory space per chunk rather than per hyperslab.
2. In collective IO mode, the number of chunks covered by the hyperslab may differ from process to
process. Since IO is now handled chunk by chunk, a collective call for a chunk that only some (not
all) processes select would cause the program to hang, so independent IO has to be used for those extra chunks.
3. On some platforms, complex MPI derived datatypes do not work, so independent IO has to be used within collective IO mode when the selection is irregular. When the selection is regular, collective IO is still used because it improves performance; special care is needed for this case (see the sketch after this list).
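
A minimal sketch of the per-chunk IO mode decision in items 2 and 3. The names (chunk_io_t, per_chunk_io) are hypothetical, not the actual HDF5 routines; the essential idea is that every process walks the same set of chunks and agrees on the mode before issuing a collective call, so a chunk selected by only some processes cannot leave the others waiting.

#include <mpi.h>

/* Hypothetical per-chunk state derived from the selections; not the real
 * HDF5 structures. One entry per chunk covered by the whole hyperslab. */
typedef struct {
    int touched;     /* this process selects data inside the chunk         */
    int is_regular;  /* file and memory pieces are both regular selections */
} chunk_io_t;

static void per_chunk_io(MPI_Comm comm, const chunk_io_t *chunks, int nchunks)
{
    int i;

    /* Every process loops over the same global chunk count so that the
     * collective calls stay matched across processes. */
    for (i = 0; i < nchunks; i++) {
        int want_collective = chunks[i].touched && chunks[i].is_regular;
        int all_collective  = 0;

        /* Agree on the IO mode for this chunk; if any process cannot do
         * collective IO (irregular selection or no data), fall back. */
        MPI_Allreduce(&want_collective, &all_collective, 1, MPI_INT,
                      MPI_LAND, comm);

        if (all_collective) {
            /* build the MPI derived datatypes and issue the collective call */
        } else if (chunks[i].touched) {
            /* independent IO for this chunk on this process */
        }
        /* processes that do not touch the chunk simply move on */
    }
}

The actual fix may make this decision differently (for example without a per-chunk Allreduce); the sketch only shows why the mode has to be chosen chunk by chunk and consistently across all processes.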
Platforms tested:
copper (AIX 5.1), heping (Linux, MPICH 1.2.6), TeraGrid machine, Cobalt (Altix), modi4
Misc. update:
Diffstat (limited to 'acsite.m4')
0 files changed, 0 insertions, 0 deletions