author | Quincey Koziol <koziol@hdfgroup.org> | 2004-07-13 18:42:50 (GMT)
---|---|---
committer | Quincey Koziol <koziol@hdfgroup.org> | 2004-07-13 18:42:50 (GMT)
commit | e240c00154d986f20e8c7c0158a222324ec10d68 (patch) |
tree | 620633d183bd557ab86a7a2e9ebbf9850d984638 /src |
parent | 0b2827f9ced045e36e4a88e16cfef7ca9250c89e (diff) |
[svn-r8868] Purpose:
Bug fix
Description:
Fix a chunked dataset I/O error where data written to a chunked, extendible
dataset was not read back correctly after the dataset was extended.
Also, fix the parallel I/O tests to gather error results from all processes,
so that errors occurring on only one process are detected.
Solution:
Bypass the chunk cache for reads as well as writes when an MPI-based
parallel I/O driver is in use and the file is opened for writing.
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel
Too minor to require h5committest
Diffstat (limited to 'src')
-rw-r--r-- | src/H5Distore.c | 16 |
1 file changed, 13 insertions(+), 3 deletions(-)
```diff
diff --git a/src/H5Distore.c b/src/H5Distore.c
index fb21b75..209dd46 100644
--- a/src/H5Distore.c
+++ b/src/H5Distore.c
@@ -1868,10 +1868,20 @@ HDfprintf(stderr,"%s: mem_offset_arr[%Zu]=%Hu\n",FUNC,*mem_curr_seq,mem_offset_a
      * If the chunk is too large to load into the cache and it has no
      * filters in the pipeline (i.e. not compressed) and if the address
      * for the chunk has been defined, then don't load the chunk into the
-     * cache, just write the data to it directly.
+     * cache, just read the data from it directly.
+     *
+     * If MPI based VFD is used, must bypass the
+     * chunk-cache scheme because other MPI processes could be
+     * writing to other elements in the same chunk.  Do a direct
+     * read-through of only the elements requested.
      */
-    if (dset->layout.u.chunk.size>dset->cache.chunk.nbytes && dset->dcpl_cache.pline.nused==0 &&
-            chunk_addr!=HADDR_UNDEF) {
+    if ((dset->layout.u.chunk.size>dset->cache.chunk.nbytes && dset->dcpl_cache.pline.nused==0 && chunk_addr!=HADDR_UNDEF)
+        || (IS_H5FD_MPI(f) && (H5F_ACC_RDWR & H5F_get_intent(f)))) {
+#ifdef H5_HAVE_PARALLEL
+        /* Additional sanity check when operating in parallel */
+        if (chunk_addr==HADDR_UNDEF || dset->dcpl_cache.pline.nused>0)
+            HGOTO_ERROR (H5E_IO, H5E_WRITEERROR, FAIL, "unable to locate raw data chunk");
+#endif /* H5_HAVE_PARALLEL */
         if ((ret_value=H5D_contig_readvv(f, dxpl_id, dset, chunk_addr, (hsize_t)dset->layout.u.chunk.size, chunk_max_nseq, chunk_curr_seq, chunk_len_arr, chunk_offset_arr, mem_max_nseq, mem_curr_seq, mem_len_arr, mem_offset_arr, buf))<0)
             HGOTO_ERROR (H5E_IO, H5E_READERROR, FAIL, "unable to read raw data to file");
     } /* end if */
```