| field | value | date |
|---|---|---|
| author | Quincey Koziol <koziol@hdfgroup.org> | 2004-07-13 18:42:47 (GMT) |
| committer | Quincey Koziol <koziol@hdfgroup.org> | 2004-07-13 18:42:47 (GMT) |
| commit | 803bb3e532c0c2ff26f6b7cc115a8c6f33ea00f5 (patch) | |
| tree | 6e71ca792f570ce7280c5338e9e7b9f5f7fdc003 /src | |
| parent | 0a8d8c54b249b81c58e4ab7d6481d737e2857c7a (diff) | |
[svn-r8867] Purpose:
Bug fix
Description:
Fix an error in chunked dataset I/O in which data written to a chunked,
extendible dataset was not read back correctly after the dataset was extended.
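For illustration, here is a minimal serial sketch (not the HDF5 test code) of the access pattern involved, written against the 1.6-era API of the time (five-argument H5Dcreate, H5Dextend); file and dataset names and sizes are arbitrary. In the failing case the file would additionally be opened through the MPI-IO driver, as sketched after the Solution below.

```c
/* Sketch only: write to a chunked, extendible dataset, extend it,
 * then read the originally written elements back. */
#include "hdf5.h"

int main(void)
{
    hsize_t dims[1]    = {10};
    hsize_t maxdims[1] = {H5S_UNLIMITED};
    hsize_t chunk[1]   = {5};
    hsize_t newdims[1] = {20};
    hsize_t start[1]   = {0};
    int     wbuf[10], rbuf[10], i;

    for (i = 0; i < 10; i++)
        wbuf[i] = i;

    hid_t file  = H5Fcreate("extend.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
    hid_t space = H5Screate_simple(1, dims, maxdims);
    hid_t dcpl  = H5Pcreate(H5P_DATASET_CREATE);
    H5Pset_chunk(dcpl, 1, chunk);

    /* 1.6-era H5Dcreate/H5Dextend calls */
    hid_t dset = H5Dcreate(file, "data", H5T_NATIVE_INT, space, dcpl);
    H5Dwrite(dset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, H5P_DEFAULT, wbuf);

    H5Dextend(dset, newdims);    /* grow the dataset */

    /* Read the originally written elements back; before the fix this
     * read could be wrong when the MPI-IO driver was in use. */
    hid_t memspace  = H5Screate_simple(1, dims, NULL);
    hid_t filespace = H5Dget_space(dset);
    H5Sselect_hyperslab(filespace, H5S_SELECT_SET, start, NULL, dims, NULL);
    H5Dread(dset, H5T_NATIVE_INT, memspace, filespace, H5P_DEFAULT, rbuf);

    H5Sclose(filespace);
    H5Sclose(memspace);
    H5Dclose(dset);
    H5Pclose(dcpl);
    H5Sclose(space);
    H5Fclose(file);
    return 0;
}
```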
Also, fix the parallel I/O tests to gather error results from all processes,
so that errors occurring on only one process are detected.
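One common way to do this, shown here only as a sketch and not necessarily the exact change made in the HDF5 test harness, is to reduce the per-process error count across all ranks:

```c
/* Sketch: sum per-rank error counts so a failure on any one rank
 * fails the whole parallel test. */
#include <mpi.h>

int gather_errors(int local_nerrors)
{
    int total_nerrors = 0;

    /* Sum the error counts from every process onto every process */
    MPI_Allreduce(&local_nerrors, &total_nerrors, 1, MPI_INT, MPI_SUM,
                  MPI_COMM_WORLD);

    return total_nerrors;   /* nonzero if any rank saw an error */
}
```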
Solution:
Bypass the chunk cache for reads as well as writes when an MPI-based
parallel I/O driver is in use and the file is opened for writing.
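From the application side, this code path applies when the file is accessed through the MPI-IO virtual file driver with write intent, for example (a sketch with an arbitrary helper name, requiring parallel HDF5):

```c
/* Sketch: open a file through the MPI-IO driver with write intent,
 * the combination under which chunked reads now bypass the cache. */
#include "hdf5.h"
#include <mpi.h>

hid_t open_parallel_rdwr(const char *name)
{
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);

    /* Select the MPI-IO virtual file driver */
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);

    /* H5F_ACC_RDWR gives the write intent checked by the fix */
    hid_t file = H5Fopen(name, H5F_ACC_RDWR, fapl);

    H5Pclose(fapl);
    return file;
}
```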
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel
Too minor to require h5committest
Diffstat (limited to 'src')
-rw-r--r-- | src/H5Distore.c | 16 |
1 file changed, 13 insertions(+), 3 deletions(-)
```diff
diff --git a/src/H5Distore.c b/src/H5Distore.c
index 4520e65..1720a09 100644
--- a/src/H5Distore.c
+++ b/src/H5Distore.c
@@ -1860,10 +1860,20 @@ HDfprintf(stderr,"%s: mem_offset_arr[%Zu]=%Hu\n",FUNC,*mem_curr_seq,mem_offset_a
      * If the chunk is too large to load into the cache and it has no
      * filters in the pipeline (i.e. not compressed) and if the address
      * for the chunk has been defined, then don't load the chunk into the
-     * cache, just write the data to it directly.
+     * cache, just read the data from it directly.
+     *
+     * If MPI based VFD is used, must bypass the
+     * chunk-cache scheme because other MPI processes could be
+     * writing to other elements in the same chunk.  Do a direct
+     * read-through of only the elements requested.
      */
-    if (dset->layout.u.chunk.size>dset->cache.chunk.nbytes && dset->dcpl_cache.pline.nused==0 &&
-            chunk_addr!=HADDR_UNDEF) {
+    if ((dset->layout.u.chunk.size>dset->cache.chunk.nbytes && dset->dcpl_cache.pline.nused==0 && chunk_addr!=HADDR_UNDEF)
+        || (IS_H5FD_MPI(f) && (H5F_ACC_RDWR & H5F_get_intent(f)))) {
+#ifdef H5_HAVE_PARALLEL
+        /* Additional sanity check when operating in parallel */
+        if (chunk_addr==HADDR_UNDEF || dset->dcpl_cache.pline.nused>0)
+            HGOTO_ERROR (H5E_IO, H5E_WRITEERROR, FAIL, "unable to locate raw data chunk");
+#endif /* H5_HAVE_PARALLEL */
         if ((ret_value=H5D_contig_readvv(f, dxpl_id, dset, chunk_addr, (hsize_t)dset->layout.u.chunk.size, chunk_max_nseq, chunk_curr_seq, chunk_len_arr, chunk_offset_arr, mem_max_nseq, mem_curr_seq, mem_len_arr, mem_offset_arr, buf))<0)
             HGOTO_ERROR (H5E_IO, H5E_READERROR, FAIL, "unable to read raw data to file");
     } /* end if */
```