author     Quincey Koziol <koziol@hdfgroup.org>    2002-07-24 18:56:48 (GMT)
committer  Quincey Koziol <koziol@hdfgroup.org>    2002-07-24 18:56:48 (GMT)
commit     40df66ebd027351e48eea8b7499490c9654d3943 (patch)
tree       9dd8b9170bec792e840e2b350ca48153cd45ce88 /src/H5Tconv.c
parent     c968525b29ce8cee8e842f6a7a1bbd78acc6b63f (diff)
[svn-r5834] Purpose:
Large code cleanup/re-write
Description:
This is phase 1 of the data I/O re-architecture, with the following changes:
- Changed the selection drivers so that they no longer perform any I/O
  themselves; they only generate the sequences of offset/length pairs
  needed for the I/O (or memory access, in the case of iterating over
  or filling a selection in a memory buffer)
- Wrote more abstract I/O routines which take the sequence of offset/
  length pairs for each selection and perform the I/O or memory
  access (see the sketch below).
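
The split described above is essentially a generator/consumer pattern. The sketch below is a minimal illustration in plain C; seq_t, stride_sel_t, stride_gen_seq and gather_seq are hypothetical names, not the actual HDF5 internals. A selection "driver" only emits offset/length pairs, and a single selection-agnostic routine performs the memory access.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch: a selection "driver" only generates offset/length
 * pairs; one selection-agnostic routine performs the memory access. */
typedef struct {
    size_t off;                 /* byte offset of a contiguous run */
    size_t len;                 /* length of the run in bytes      */
} seq_t;

/* Toy driver state: every other 4-byte element of a 32-byte buffer. */
typedef struct {
    size_t next;                /* next offset to emit             */
    size_t total;               /* stop when next reaches this     */
} stride_sel_t;

/* Generate up to 'maxseq' pairs; return how many were produced.
 * No I/O happens here: that is the whole point of the split. */
static size_t
stride_gen_seq(stride_sel_t *sel, seq_t *seq, size_t maxseq)
{
    size_t n = 0;
    while (n < maxseq && sel->next < sel->total) {
        seq[n].off = sel->next;
        seq[n].len = 4;
        sel->next += 8;         /* skip every other element */
        n++;
    }
    return n;
}

/* Generic "gather": the only code that actually touches the buffers. */
static size_t
gather_seq(const unsigned char *src, unsigned char *dst, stride_sel_t *sel)
{
    seq_t  seq[4];
    size_t nseq, u, dst_off = 0;

    while ((nseq = stride_gen_seq(sel, seq, 4)) > 0)
        for (u = 0; u < nseq; u++) {
            memcpy(dst + dst_off, src + seq[u].off, seq[u].len);
            dst_off += seq[u].len;
        }
    return dst_off;             /* total bytes gathered */
}

int main(void)
{
    unsigned char src[32], dst[16];
    stride_sel_t  sel = {0, sizeof(src)};
    size_t        i;

    for (i = 0; i < sizeof(src); i++)
        src[i] = (unsigned char)i;
    printf("gathered %zu bytes\n", gather_seq(src, dst, &sel));
    return 0;
}
```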
Benefits of this change include:
- Removed ~3400 lines of largely redundant code, with a corresponding
  reduction in the size of the library binary.
- Any selection can now access memory directly when performing I/O
  (if no type conversion is required), instead of only "regular"
  hyperslab and 'all' selections, which speeds up I/O.
- Sped up I/O for hyperslab selections whose lower dimensions are
  contiguous by "flattening" them into lower-dimensional objects
  for the I/O (see the sketch below).
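
To illustrate the "flattening" in the last item, the sketch below folds fully selected, contiguous trailing dimensions into their parent dimension so the I/O loop has fewer, larger blocks to walk. flatten_dims is a made-up helper, not HDF5 code, and it assumes the hyperslab starts at offset 0 with unit stride in the folded dimensions.

```c
#include <stdio.h>

/* Fold trailing dimensions that are selected in full (and therefore
 * contiguous) into the next-slower dimension; return the new rank. */
static unsigned
flatten_dims(unsigned ndims, const unsigned long dims[],
             const unsigned long count[], unsigned long flat[])
{
    unsigned i, nflat = ndims;

    for (i = 0; i < ndims; i++)
        flat[i] = count[i];

    while (nflat > 1 && count[nflat - 1] == dims[nflat - 1]) {
        flat[nflat - 2] *= flat[nflat - 1];
        nflat--;
    }
    return nflat;
}

int main(void)
{
    unsigned long dims[3]  = {10, 20, 30};  /* dataset extent         */
    unsigned long count[3] = { 4, 20, 30};  /* hyperslab block counts */
    unsigned long flat[3];
    unsigned nflat = flatten_dims(3, dims, count, flat);

    /* Prints: 1 dimension(s), 2400 elements per block */
    printf("%u dimension(s), %lu elements per block\n", nflat, flat[0]);
    return 0;
}
```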
No file format or API changes were necessary.
The next phase will be to create a "selection driver" for each type of
selection, allowing each type of selection to directly call certain
methods that only apply to that type of selection, instead of passing
through dozens of functions which have switch statements to call the
appropriate method for each selection type. This will also reduce
the amount of code in the library and speed things up a bit more.
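
One way to picture the planned "selection driver" is a per-selection-type method table, so generic code dispatches through function pointers rather than switch statements. The sketch below is only an assumption about the shape of such an interface; sel_class_t, all_gen_seq and all_selection_class are illustrative names, not HDF5 symbols.

```c
#include <stddef.h>

/* Hypothetical method table: each selection type ('all', hyperslab, point,
 * none) supplies its own callbacks, and generic dataset code dispatches
 * through the table instead of switching on a selection-type enum. */
typedef struct sel_class_t {
    /* produce up to 'maxseq' offset/length pairs; return how many */
    size_t (*gen_seq)(void *sel, size_t *off, size_t *len, size_t maxseq);
    /* can the whole selection be treated as one contiguous block? */
    int    (*is_contiguous)(const void *sel);
    /* release any per-selection state */
    void   (*release)(void *sel);
} sel_class_t;

/* Trivial 'all' selection: one contiguous run covering the whole buffer. */
static size_t
all_gen_seq(void *sel, size_t *off, size_t *len, size_t maxseq)
{
    size_t *remaining = (size_t *)sel;   /* bytes not yet emitted */

    if (maxseq == 0 || *remaining == 0)
        return 0;
    off[0] = 0;
    len[0] = *remaining;
    *remaining = 0;
    return 1;
}

static int  all_is_contiguous(const void *sel) { (void)sel; return 1; }
static void all_release(void *sel)             { (void)sel; }

static const sel_class_t all_selection_class = {
    all_gen_seq, all_is_contiguous, all_release
};

int main(void)
{
    size_t remaining = 1024;             /* pretend 1024-byte selection */
    size_t off[8], len[8];
    size_t n;

    /* generic code sees only the method table, never the selection type */
    n = all_selection_class.gen_seq(&remaining, off, len, 8);
    return (n == 1 && len[0] == 1024 &&
            all_selection_class.is_contiguous(&remaining)) ? 0 : 1;
}
```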
Phase 3 will involve generating an MPI datatype for all types of selections,
instead of only "regular" hyperslab and 'all' selections. This will
allow collective parallel I/O for all I/O operations which don't
require type conversions. It will also open up the door for allowing
collective I/O on datasets which require type conversion.
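
Such a phase would presumably build on MPI derived datatypes. The sketch below shows how a list of byte offset/length pairs, which any selection can now produce, could be turned into one MPI datatype for collective file access. seq_to_mpi_type is an illustrative helper, not an HDF5 routine; MPI_Type_create_hindexed, MPI_Type_commit and MPI_File_set_view are standard MPI calls.

```c
#include <mpi.h>

/* Illustrative: each (offset, length) pair becomes one block of MPI_BYTE
 * in a single derived datatype describing the whole selection. */
static int
seq_to_mpi_type(int nseq, MPI_Aint off[], int len[], MPI_Datatype *new_type)
{
    if (MPI_Type_create_hindexed(nseq, len, off, MPI_BYTE, new_type) != MPI_SUCCESS)
        return -1;
    return (MPI_Type_commit(new_type) == MPI_SUCCESS) ? 0 : -1;
}

int main(int argc, char **argv)
{
    MPI_Aint     off[2] = {0, 4096};     /* two runs in the file */
    int          len[2] = {512, 512};    /* 512 bytes each       */
    MPI_Datatype ftype;

    MPI_Init(&argc, &argv);
    if (seq_to_mpi_type(2, off, len, &ftype) == 0) {
        /* ftype could now serve as the filetype in MPI_File_set_view()
         * ahead of MPI_File_read_all()/MPI_File_write_all() */
        MPI_Type_free(&ftype);
    }
    MPI_Finalize();
    return 0;
}
```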
Phase 4 will involve changing the access pattern to deal with chunked
datasets in a more optimal way (in serial).
Phase 5 will deal with accessing chunked datasets more optimally for
collective parallel I/O operations.
Platforms tested:
FreeBSD 4.6 (sleipnir) w/ parallel & C++ and IRIX64 6.5 (modi4) w/parallel
Diffstat (limited to 'src/H5Tconv.c')
-rw-r--r--   src/H5Tconv.c | 10 +++++++---
1 files changed, 7 insertions, 3 deletions
```diff
diff --git a/src/H5Tconv.c b/src/H5Tconv.c
index e61e2cb..beeb7bf 100644
--- a/src/H5Tconv.c
+++ b/src/H5Tconv.c
@@ -2372,8 +2372,11 @@ H5T_conv_vlen(hid_t src_id, hid_t dst_id, H5T_cdata_t *cdata, hsize_t nelmts,
                 } /* end if */
 
                 /* If the sequence gets shorter, pad out the original sequence with zeros */
-                if(bg_seq_len<seq_len)
-                    HDmemset((uint8_t *)tmp_buf+dst_base_size*bg_seq_len,0,(seq_len-bg_seq_len)*dst_base_size);
+                H5_CHECK_OVERFLOW(bg_seq_len,hsize_t,hssize_t);
+                if((hssize_t)bg_seq_len<seq_len) {
+                    H5_CHECK_OVERFLOW((seq_len-bg_seq_len),hsize_t,size_t);
+                    HDmemset((uint8_t *)tmp_buf+dst_base_size*bg_seq_len,0,(size_t)(seq_len-bg_seq_len)*dst_base_size);
+                } /* end if */
             } /* end if */
 
             /* Convert VL sequence */
@@ -2388,7 +2391,8 @@ H5T_conv_vlen(hid_t src_id, hid_t dst_id, H5T_cdata_t *cdata, hsize_t nelmts,
                     "can't write VL data");
 
             /* For nested VL case, free leftover heap objects from the deeper level if the length of new data elements is shorted than the old data elements.*/
-            if(nested && seq_len<bg_seq_len) {
+            H5_CHECK_OVERFLOW(bg_seq_len,hsize_t,hssize_t);
+            if(nested && seq_len<(hssize_t)bg_seq_len) {
                 uint8_t *tmp_p=tmp_buf;
                 tmp_p += seq_len*dst_base_size;
                 for(i=0; i<(bg_seq_len-seq_len); i++) {
```
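
The casts added in this hunk guard against C's mixed signed/unsigned comparison rules: bg_seq_len is an unsigned hsize_t while seq_len is signed (hssize_t), so comparing them directly converts a negative seq_len to a huge unsigned value. The patch first checks, via H5_CHECK_OVERFLOW, that bg_seq_len fits in the signed type and then casts it, keeping the comparison in signed arithmetic. The standalone example below demonstrates the pitfall; unsigned long long and long long merely stand in for hsize_t and hssize_t, and the variable names are reused only for clarity.

```c
#include <stdio.h>

int main(void)
{
    unsigned long long bg_seq_len = 10;   /* stands in for hsize_t  */
    long long          seq_len    = -1;   /* stands in for hssize_t */

    /* Mixed signed/unsigned comparison: seq_len is converted to unsigned,
     * so -1 becomes a huge value and the test goes the "wrong" way. */
    if (bg_seq_len < seq_len)
        printf("unsigned compare: bg_seq_len < seq_len (surprising)\n");

    /* Casting the unsigned side first (what the patch does, after checking
     * that the value fits) keeps the comparison in signed arithmetic. */
    if ((long long)bg_seq_len < seq_len)
        printf("signed compare: bg_seq_len < seq_len\n");
    else
        printf("signed compare: bg_seq_len >= seq_len\n");
    return 0;
}
```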