author | Pedro Vicente Nunes <pvn@hdfgroup.org> | 2009-03-25 20:28:50 (GMT) |
---|---|---|
committer | Pedro Vicente Nunes <pvn@hdfgroup.org> | 2009-03-25 20:28:50 (GMT) |
commit | bd1fe8cd74d80f03f84ade1b0d0255222830e916 (patch) | |
tree | c9801bd36835a5440505e1c1a0a530b292e96668 /tools/h5repack/h5repack_copy.c | |
parent | b03ffd19f4fe522930f90700b60db47e537c84e1 (diff) | |
download | hdf5-bd1fe8cd74d80f03f84ade1b0d0255222830e916.zip hdf5-bd1fe8cd74d80f03f84ade1b0d0255222830e916.tar.gz hdf5-bd1fe8cd74d80f03f84ade1b0d0255222830e916.tar.bz2 |
[svn-r16614] 3. #1501 (B1) tools bug if dataset is larger than H5TOOLS_BUFSIZE limit.
ISSUE: the tools read datasets by hyperslabs using the formula hyperslab_size[i] = MIN(dim_size[i], H5TOOLS_BUFSIZE / datum_size), where H5TOOLS_BUFSIZE is a constant defined as 1024K. This is fine as long as datum_size does not exceed 1024K; otherwise the hyperslab size becomes 0 (since 1024K divided by anything greater than 1024K is 0). This affects h5dump, h5repack, and h5diff.
SOLUTION: add a check for a size of 0 and set it to 1 in that case.
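For illustration, a minimal standalone sketch of the clamped formula; the function name hyperslab_dim_size and the demo values are illustrative, not taken from the tools library:

```c
#include <stdio.h>

#define H5TOOLS_BUFSIZE (1024 * 1024)   /* the tools' 1024K I/O buffer */
#define MIN(a,b) (((a) < (b)) ? (a) : (b))

/* Per-dimension hyperslab size for a buffered read.
 * If datum_size exceeds H5TOOLS_BUFSIZE, the old formula yields 0;
 * the fix clamps it to 1 so at least one element is read per pass. */
static unsigned long long
hyperslab_dim_size(unsigned long long dim_size, unsigned long long datum_size)
{
    unsigned long long size = H5TOOLS_BUFSIZE / datum_size;
    if (size == 0)          /* datum size > H5TOOLS_BUFSIZE */
        size = 1;
    return MIN(dim_size, size);
}

int main(void)
{
    /* a 2 MB datum: the old formula returns 0, the clamped one returns 1 */
    printf("%llu\n", hyperslab_dim_size(1000, 2 * 1024 * 1024ULL));
    return 0;
}
```

With a 2 MB datum the unclamped formula produces a hyperslab size of 0 and the later assert on sm_nbytes fires; the clamp makes each pass read a single element instead.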
TEST FOR H5DUMP: defined a case of such a type in the h5dump test generator program (an array type of doubles with a large array dimension, which was the case the user reported). Since the written file committed to svn would be around 1024K, opted not to write the data (the part of the code where the hyperslab is defined is still executed, since h5dump always reads the files). Defined a macro WRITE_ARRAY to enable the writing if needed. Added a run to the h5dump shell script. Added 2 new files to svn: tools/testfiles/tarray8.ddl and tools/testfiles/tarray8.h5. NOTE: while doing this I considered adding this dataset to an existing file instead, but that would add the large array output to those files' ddls; the downside of new files is that the test-file list keeps growing.
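A hedged sketch of the kind of dataset the test generator creates: an array datatype of doubles whose element size exceeds H5TOOLS_BUFSIZE, with the actual write guarded by WRITE_ARRAY. The file name, dataset name, and dimensions below are assumptions, not the values used by the h5dump test generator:

```c
#include <stdlib.h>
#include "hdf5.h"

/* #define WRITE_ARRAY */              /* define to actually write the data */
#define ARRAY_DIM  (160 * 1024)        /* 160K doubles = 1.25 MB per datum */

int main(void)
{
    hsize_t adims[1] = { ARRAY_DIM };
    hsize_t sdims[1] = { 4 };          /* 4 elements of the array type */
    hid_t   fid, tid, sid, did;

    fid = H5Fcreate("tarray8_like.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
    tid = H5Tarray_create2(H5T_NATIVE_DOUBLE, 1, adims);   /* datum > 1024K */
    sid = H5Screate_simple(1, sdims, NULL);
    did = H5Dcreate2(fid, "array_of_doubles", tid, sid,
                     H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

#ifdef WRITE_ARRAY
    {   /* only write the (large) data when explicitly requested */
        double *buf = (double *)calloc((size_t)ARRAY_DIM * 4, sizeof(double));
        H5Dwrite(did, tid, H5S_ALL, H5S_ALL, H5P_DEFAULT, buf);
        free(buf);
    }
#endif

    H5Dclose(did);
    H5Sclose(sid);
    H5Tclose(tid);
    H5Fclose(fid);
    return 0;
}
```

Even without WRITE_ARRAY the dataset's datatype and dataspace exist in the file, which is enough for h5dump to run through the hyperslab-sizing code.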
TEST FOR H5DIFF: for h5diff the threshold for reading by hyperslabs is H5TOOLS_MALLOCSIZE (128 * H5TOOLS_BUFSIZE), or 128 MB. That makes it impractical to add such a file to svn, so the same method as for h5dump was used (only write the dataset if WRITE_ARRAY is defined). Unlike h5dump, the hyperslab code is NOT executed when the dataset is empty (the dataset is not read). Added the new dataset to existing files and a shell run (tools/h5diff/testfiles/h5diff_dset1.h5 and tools/h5diff/testfiles/h5diff_dset2.h5, with output in tools/h5diff/testfiles/h5diff_80.txt).
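For context, a small sketch of the size threshold described above; the function and variable names are illustrative, and only the H5TOOLS_MALLOCSIZE = 128 * H5TOOLS_BUFSIZE relationship comes from the commit message:

```c
#include <stdio.h>

#define H5TOOLS_BUFSIZE    (1024 * 1024)
#define H5TOOLS_MALLOCSIZE (128 * H5TOOLS_BUFSIZE)   /* ~128 MB threshold */

/* h5diff only falls back to hyperslab reads when the whole dataset
 * would not fit in a single H5TOOLS_MALLOCSIZE allocation. */
static int read_by_hyperslabs(unsigned long long nelmts,
                              unsigned long long datum_size)
{
    return (nelmts * datum_size) > H5TOOLS_MALLOCSIZE;
}

int main(void)
{
    /* a few large-datum elements still fit in 128 MB: read whole,
     * so the hyperslab code is never reached for the empty test dataset */
    printf("%d\n", read_by_hyperslabs(4, 1250 * 1024));      /* 0: read whole   */
    printf("%d\n", read_by_hyperslabs(200000, 1250 * 1024)); /* 1: hyperslabs   */
    return 0;
}
```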
TEST FOR H5REPACK: similar issue as h5diff, with the difference that the hyperslab code is run. Added a run to the shell script (with a filter; otherwise the code uses H5Ocopy).
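A compilable sketch (not h5repack's actual code) of why the shell-script run needs a filter: when nothing is requested for an object, h5repack can copy it structurally with H5Ocopy, so the buffered hyperslab path fixed here would never run. The file and dataset names and the request_filter flag are stand-ins:

```c
#include "hdf5.h"

int main(void)
{
    hsize_t dims[1] = { 10 };
    hid_t fin  = H5Fcreate("repack_in.h5",  H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
    hid_t fout = H5Fcreate("repack_out.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
    hid_t sid  = H5Screate_simple(1, dims, NULL);
    hid_t did  = H5Dcreate2(fin, "dset", H5T_NATIVE_INT, sid,
                            H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
    int request_filter = 0;   /* the test run sets a filter, e.g. -f GZIP=1 */

    H5Dclose(did);

    if (!request_filter)
        /* fast path: structural copy, the hyperslab buffer is never sized */
        H5Ocopy(fin, "dset", fout, "dset", H5P_DEFAULT, H5P_DEFAULT);
    /* else: h5repack re-reads the data by hyperslabs and rewrites it with
       the requested filter -- the path patched by this commit */

    H5Sclose(sid);
    H5Fclose(fin);
    H5Fclose(fout);
    return 0;
}
```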
tested: linux (h5committest failed; apparently it did not detect the code changes in /tools/lib that fix the bug, so it hit the assertion on the hyperslab size of 0. I am sure that running h5committest with --distclean will pick up the new code, but I don't want to wait another 3 hours :-) )
Diffstat (limited to 'tools/h5repack/h5repack_copy.c')
-rw-r--r-- | tools/h5repack/h5repack_copy.c | 10 |
1 file changed, 7 insertions, 3 deletions
```diff
diff --git a/tools/h5repack/h5repack_copy.c b/tools/h5repack/h5repack_copy.c
index b165dc7..4e2e036 100644
--- a/tools/h5repack/h5repack_copy.c
+++ b/tools/h5repack/h5repack_copy.c
@@ -650,7 +650,7 @@ int do_copy_objects(hid_t fidin,
 {
     int j;
-
+
     if((dset_in = H5Dopen2(fidin, travt->objs[i].name, H5P_DEFAULT)) < 0)
         goto error;
     if((f_space_id = H5Dget_space(dset_in)) < 0)
@@ -790,8 +790,12 @@ int do_copy_objects(hid_t fidin,
      */
     sm_nbytes = p_type_nbytes;
 
-    for (k = rank; k > 0; --k) {
-        sm_size[k - 1] = MIN(dims[k - 1], H5TOOLS_BUFSIZE / sm_nbytes);
+    for (k = rank; k > 0; --k)
+    {
+        hsize_t size = H5TOOLS_BUFSIZE / sm_nbytes;
+        if ( size == 0) /* datum size > H5TOOLS_BUFSIZE */
+            size = 1;
+        sm_size[k - 1] = MIN(dims[k - 1], size);
         sm_nbytes *= sm_size[k - 1];
         assert(sm_nbytes > 0);
     }
```