author     John Mainzer <mainzer@hdfgroup.org>    2004-04-19 17:42:34 (GMT)
committer  John Mainzer <mainzer@hdfgroup.org>    2004-04-19 17:42:34 (GMT)
commit     4a85877fdc3b6b4a4e87dad27149148c7ac6ebf5 (patch)
tree       40d5a2ea5107c34e5163495cd59acdfa6fb029f2 /src/H5AC.c
parent     55a64a7359e61dc1d67c4bce9d50fc5166735b10 (diff)
[svn-r8391] Purpose:
Checkpoint checkin of FP bug fixes. FP is still quite
buggy, but I must go deal with other matters.
Description:
Fixed two major bugs:
1) H5FPserver.c was clobbering metadata in its care.
2) H5FPserver.c was allocating the same space multiple
times, causing both data and metadata corruption.
Also made minor fixes, added debugging code, and familiarized
myself with the FP code.
All development work with FP enabled was done on Eirene.
On this platform, FP now passes its test reliably with
up to 9 processes. At 10 processes it segfaults every
time; I haven't looked into this issue yet.
There are also several known locking bugs which have to
be fixed. However, they are of sufficiently low probability
that I didn't bother with them on this pass.
FP has not been tested with deletions -- this should be
done.
Also, need to test FP chunked I/O.
Solution:
1) Modified cache in H5FPserver.c to merge changes correctly.
Found and fixed a bug in H5TB.c in passing.
2) The multiple space allocation was caused by a race
condition between set-eoa requests.
Most of these eoa requests appeared to be superfluous, so
I deleted them.
Those issued during the superblock read seemed necessary,
so I inserted a barrier at the end of the superblock read,
to prevent races with allocations.
Platforms tested:
h5committested
Diffstat (limited to 'src/H5AC.c')
-rw-r--r--  src/H5AC.c  12
1 file changed, 11 insertions(+), 1 deletion(-)
@@ -1283,17 +1283,27 @@ H5AC_protect(H5F_t *f, hid_t dxpl_id, const H5AC_class_t *type, haddr_t addr,
         if (H5FP_request_lock(H5FD_fphdf5_file_id(lf), addr,
                               rw == H5AC_WRITE ? H5FP_LOCK_WRITE : H5FP_LOCK_READ,
                               TRUE, &req_id, &status) < 0) {
+#if 0
+            HDfprintf(stdout, "H5AC_protect: Lock failed.\n");
             /*
              * FIXME: Check the status variable. If the lock is got
              * by some other process, we can loop and wait or bail
              * out of this function
              */
-HDfprintf(stderr, "Couldn't get lock for metadata at address %a\n", addr);
+            HDfprintf(stderr, "Couldn't get lock for metadata at address %a\n", addr);
+#endif /* 0 */
             HGOTO_ERROR(H5E_FPHDF5, H5E_CANTLOCK, NULL, "can't lock data on SAP!")
         }

     /* Load a thing from the SAP. */
     if (NULL == (thing = type->load(f, dxpl_id, addr, udata1, udata2))) {
+#if 0
+        HDfprintf(stdout,
+                  "%s: Load failed. addr = %a, type->id = %d.\n",
+                  "H5AC_protect",
+                  addr,
+                  (int)(type->id));
+#endif /* 0 */
         HCOMMON_ERROR(H5E_CACHE, H5E_CANTLOAD, "unable to load object")

         if (H5FP_request_release_lock(H5FD_fphdf5_file_id(lf), addr,