Bug fix.
Description:
Some tests intermittently reported an unexpected file size. This was a
race condition: some processes finish extend_chunked_dataset() sooner
than others and return to the main body, which proceeds to call the next
test. That test uses the same test data file and alters it, which
confuses the "slower" processes, which then see an unexpected file size.
Also, the routine create_chunked_dataset(), which creates the test data
file, was executed by all processes. That is wrong.
Solution:
Added a barrier at the end of extend_chunked_dataset() to make sure all
processes are done with the test data file before returning.
Changed create_chunked_dataset() so that only one process creates the
test data file; the rest do nothing but wait for it to finish. (A sketch
of both fixes follows this message.)
Platforms tested:
Tested on TG-NCSA, where the errors were detected.
Misc. update:
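
A minimal sketch of the two fixes, assuming a plain MPI test harness;
the helper names mirror the test routines named above, but their bodies
here are hypothetical stand-ins, not the actual test code:

```c
#include <mpi.h>
#include <stdio.h>

/* Only rank 0 creates the shared test data file; the rest do
 * nothing but wait at the barrier for the creation to finish. */
static void create_chunked_dataset(const char *filename, int mpi_rank)
{
    if (mpi_rank == 0) {
        FILE *fp = fopen(filename, "wb");
        /* ... write the initial chunked dataset ... */
        if (fp)
            fclose(fp);
    }
    MPI_Barrier(MPI_COMM_WORLD);
}

/* Each process extends and verifies the dataset, then waits at a
 * barrier so that no process returns (and lets the next test alter
 * the same file) while slower processes are still checking the
 * file size. */
static void extend_chunked_dataset(const char *filename)
{
    (void)filename;
    /* ... extend the dataset and verify the file size ... */
    MPI_Barrier(MPI_COMM_WORLD);
}

int main(int argc, char **argv)
{
    int mpi_rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &mpi_rank);

    create_chunked_dataset("chunk_test.data", mpi_rank);
    extend_chunked_dataset("chunk_test.data");

    MPI_Finalize();
    return 0;
}
```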
Bug fix #281
Description:
Committed a wrong copy in the previous checkin.
Solution:
Checked in the right one and did some code cleanup and rearrangement.
Platforms tested:
heping pp.
Misc. update:
Bug #281
Description:
A dataset created in serial mode with the H5D_ALLOC_TIME_INCR allocation
setting was not extendible, either explicitly by H5Dextend or implicitly
by writing to unallocated chunks. This was because parallel mode expects
the allocation mode to be H5D_ALLOC_TIME_EARLY only.
Solution:
Modified the library to allocate more space when needed or directed if
the file is opened in parallel mode, independent of the dataset's
allocation mode. (A sketch of the previously failing pattern follows
this message.)
Platforms tested:
Heping pp.
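
A minimal sketch of the previously failing pattern, written against the
HDF5 1.6-era API in use at the time (error checking omitted; the file
and dataset names are illustrative). The dataset is created in serial
with incremental allocation; before this fix, reopening the file in
parallel mode and then extending the dataset, or writing to an
unallocated chunk, would fail because no space had been allocated for
the new chunks:

```c
#include "hdf5.h"

int main(void)
{
    hsize_t dims[1]    = {100};
    hsize_t maxdims[1] = {H5S_UNLIMITED};
    hsize_t chunk[1]   = {10};
    hsize_t newsize[1] = {200};

    /* serial creation with incremental chunk allocation */
    hid_t file  = H5Fcreate("chunk_alloc.h5", H5F_ACC_TRUNC,
                            H5P_DEFAULT, H5P_DEFAULT);
    hid_t space = H5Screate_simple(1, dims, maxdims);
    hid_t dcpl  = H5Pcreate(H5P_DATASET_CREATE);

    H5Pset_chunk(dcpl, 1, chunk);
    H5Pset_alloc_time(dcpl, H5D_ALLOC_TIME_INCR);

    hid_t dset = H5Dcreate(file, "data", H5T_NATIVE_INT, space, dcpl);

    /* explicit extension; writing to a chunk that has not yet been
     * allocated exercises the same allocation path implicitly */
    H5Dextend(dset, newsize);

    H5Dclose(dset);
    H5Pclose(dcpl);
    H5Sclose(space);
    H5Fclose(file);
    return 0;
}
```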