Bug Fix/Code Cleanup/Doc Cleanup/Optimization/Branch Sync :-)
Description:
Generally speaking, this is the "signed->unsigned" change to selections.
However, in the process of merging code back, things became progressively
more entangled until I ended up doing a full "sync the two branches up"
operation. So I brought all of the "infrastructure" fixes from the
development branch back to the release branch (which I believe were actually
improving performance) and also fixed several bugs that had been fixed in
one branch but not the other.
Before making this checkin, I tagged the repository with the label
"before_signed_unsigned_changes".
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel & fphdf5
FreeBSD 4.10 (sleipnir) w/threadsafe
FreeBSD 4.10 (sleipnir) w/backward compatibility
Solaris 2.7 (arabica) w/"purify options"
Solaris 2.8 (sol) w/FORTRAN & C++
AIX 5.x (copper) w/parallel & FORTRAN
IRIX64 6.5 (modi4) w/FORTRAN
Linux 2.4 (heping) w/FORTRAN & C++
Misc. update:
Bug fix
Description:
Relax restrictions on parallel I/O to allow compressed, chunked datasets
to be read in parallel (collective access will be degraded to independent
access, but the data will still be retrieved; see the sketch after this
entry).
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel
Solaris 2.7 (arabica)
IRIX64 6.5 (modi4)
h5committest
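A minimal sketch of the read path this fix enables, assuming the HDF5
1.6-era C API and hypothetical file/dataset names ("chunked.h5",
"compressed_dset"); the collective transfer request is what the library
silently downgrades to independent access for compressed chunks:

    #include <mpi.h>
    #include <hdf5.h>

    int main(int argc, char *argv[])
    {
        MPI_Init(&argc, &argv);

        /* Open the file with the MPI-IO file driver */
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
        hid_t file = H5Fopen("chunked.h5", H5F_ACC_RDONLY, fapl);

        /* Open a chunked, compressed dataset (hypothetical name) */
        hid_t dset = H5Dopen(file, "compressed_dset");

        /* Request collective I/O; for compressed chunks the library
         * falls back to independent access but still returns the data. */
        hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
        H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);

        int buf[100];   /* assumes the dataset holds at most 100 ints */
        H5Dread(dset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, dxpl, buf);

        H5Pclose(dxpl);
        H5Dclose(dset);
        H5Fclose(file);
        H5Pclose(fapl);
        MPI_Finalize();
        return 0;
    }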
Make collective chunk IO test more general
Description:
Previously, the collective chunk I/O test only worked with exactly 4
processes and a small dimension size; when people ran the test with more
than 4 processes (5, 6, or more), it failed.
Solution:
To make the test case more general, the dimension size of the data is
increased (currently 288 in each dimension) and the disjoint hyperslab
selection is recalculated, so the test now passes with 5, 6, or 12
processes. Note that there is nothing wrong with the library
implementation; it is the test case itself that failed when the number of
processes was greater than 4. (A sketch of the per-process selection
follows this entry.)
Platforms tested:
Only eirene, since only the test code was modified slightly and parallel
testing is very slow.
Misc. update:
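As an illustration of the recalculated selection (not the actual test
code), here is a minimal sketch of a disjoint, per-process hyperslab over
a 288x288 dataspace, assuming the process count evenly divides 288 and the
post signed->unsigned selection API (hsize_t offsets):

    #include <hdf5.h>

    /* Select a disjoint band of rows of a 288x288 dataspace for this
     * rank.  Assumes mpi_size evenly divides 288 (e.g. 4, 6, 8, 12). */
    static hid_t select_band(int mpi_rank, int mpi_size)
    {
        hsize_t dims[2]  = {288, 288};
        hid_t   space    = H5Screate_simple(2, dims, NULL);

        hsize_t count[2] = {dims[0] / (hsize_t)mpi_size, dims[1]};
        hsize_t start[2] = {(hsize_t)mpi_rank * count[0], 0};

        /* Each rank's block is disjoint from every other rank's block */
        H5Sselect_hyperslab(space, H5S_SELECT_SET, start, NULL, count, NULL);
        return space;
    }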
Code cleanup
Description:
Clean up the collective chunking code a bit.
Also, add an '--enable-instrument' configure flag to provide a mechanism
for verifying that optimized operations actually happened in the library
(instead of just the "normal" code path), by allowing 'flag' properties to
be set from outside the library and then set by the library when the
"right" thing happens. This is mainly for debugging and regression checks,
so we can be certain we don't break optimized I/O by accident. It is
enabled by default when --enable-debug is on (which is the default in the
development branch and off by default in the release branch), but it can
also be controlled independently with its own configure flag (see the
usage note after this entry).
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel
IBM p690 (copper) w/parallel
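A hedged usage note: only --enable-instrument and --enable-debug are named
in this entry; the --enable-parallel flag shown alongside them is the usual
HDF5 configure option for parallel builds, not something stated here.

    ./configure --enable-parallel --enable-instrument
    # or rely on the default coupling in the development branch:
    ./configure --enable-parallel --enable-debug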
To test collective chunk IO properly.
Description:
See the previous message.
Solution:
See the previous message.
Platforms tested:
arabica(Sol 2.7), eirene(Linux), copper(AIX)
Misc. update:
To add collective chunk IO tests.
Description:
Three tests are added:
1. One hyperslab per process, fitting exactly within one chunk.
2. Non-contiguous hyperslabs in each process, all fitting within one chunk.
3. A single hyperslab per process with a smaller chunk size, so that every
process covers an equal number of chunks.
(A sketch of the first case follows this entry.)
Solution:
The dataset size is set very small for now and will be enlarged later.
Platforms tested:
AIX 5.1(copper)
Misc. update:
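A minimal sketch of what the first test case exercises, assuming the HDF5
1.6-era C API, a 4-process run, and hypothetical sizes and names; each
rank writes one hyperslab that lies entirely inside one chunk:

    #include <mpi.h>
    #include <hdf5.h>

    /* Each of 4 ranks writes one 4x8 hyperslab that falls entirely
     * inside one 4x8 chunk of an 8x16 chunked dataset. */
    static void write_one_chunk_per_rank(hid_t file, int mpi_rank)
    {
        hsize_t dims[2]  = {8, 16};
        hsize_t chunk[2] = {4, 8};

        hid_t space = H5Screate_simple(2, dims, NULL);
        hid_t dcpl  = H5Pcreate(H5P_DATASET_CREATE);
        H5Pset_chunk(dcpl, 2, chunk);
        hid_t dset  = H5Dcreate(file, "coll_chunk_dset", H5T_NATIVE_INT,
                                space, dcpl);

        /* Rank r selects the r-th 4x8 block, i.e. exactly one chunk */
        hsize_t start[2] = {(hsize_t)(mpi_rank / 2) * 4,
                            (hsize_t)(mpi_rank % 2) * 8};
        hsize_t count[2] = {4, 8};
        H5Sselect_hyperslab(space, H5S_SELECT_SET, start, NULL, count, NULL);

        hid_t mem  = H5Screate_simple(2, count, NULL);
        hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
        H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);

        int buf[4][8] = {{0}};   /* this rank's chunk-sized data */
        H5Dwrite(dset, H5T_NATIVE_INT, mem, space, dxpl, buf);

        H5Pclose(dxpl); H5Sclose(mem); H5Dclose(dset);
        H5Pclose(dcpl); H5Sclose(space);
    }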