author     Quincey Koziol <koziol@hdfgroup.org>    2008-05-16 03:04:56 (GMT)
committer  Quincey Koziol <koziol@hdfgroup.org>    2008-05-16 03:04:56 (GMT)
commit     22f48585bdf5e13898b7728b33ec71fd7c9cf4ec
tree       3c6f99b03d177a2b1c88442a93cf017a8c465a24 /src/H5Dint.c
parent     afbdbb8e93d2b2d96098abfa4bf1615205487ca5
[svn-r15015] Description:
Detect chunks that are >4GB before the dataset gets created and return an
error to the application.
Tweak lots of internal variables that hold the chunk size/dimensions to
use a 'uint32_t' instead of a 'size_t', so that the integer size is constant
across platforms.
Correct a number of our tests that were creating datasets with chunks >4GB,
and add some specific tests for >4GB chunk size detection (a sketch of such
a test follows below).
Minor whitespace & other code cleanups.
Tested on:
Mac OS X/32 10.5.2 (amazon)
Forthcoming testing on other platforms...
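
From the application's side, the new check surfaces as a failed dataset
create: with this change, creating a chunked dataset whose chunk would
occupy more than 4GB fails at H5Dcreate2() time. A minimal sketch of the
kind of >4GB-detection test the message describes (the file and dataset
names are invented for illustration, not taken from the HDF5 test suite):

#include "hdf5.h"
#include <stdio.h>

int
main(void)
{
    hsize_t dims[3]  = {1024, 1024, 1024};
    hsize_t chunk[3] = {1024, 1024, 1024};  /* 2^30 doubles = 8GB per chunk */
    hid_t   file, space, dcpl;
    hid_t   dset = -1;

    /* Error checking on the setup calls is omitted for brevity */
    file  = H5Fcreate("big_chunk.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
    space = H5Screate_simple(3, dims, NULL);
    dcpl  = H5Pcreate(H5P_DATASET_CREATE);
    H5Pset_chunk(dcpl, 3, chunk);

    /* Suppress the error stack; the create is expected to fail */
    H5E_BEGIN_TRY {
        dset = H5Dcreate2(file, "dset", H5T_NATIVE_DOUBLE, space,
                          H5P_DEFAULT, dcpl, H5P_DEFAULT);
    } H5E_END_TRY;

    if(dset < 0)
        printf("H5Dcreate2 failed as expected: chunk size must be < 4GB\n");

    H5Pclose(dcpl);
    H5Sclose(space);
    H5Fclose(file);
    return 0;
}

(Later HDF5 releases may reject the oversized chunk even earlier, at
H5Pset_chunk() time; at this commit the detection happens during dataset
creation, as the message says.)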
Diffstat (limited to 'src/H5Dint.c')
-rw-r--r--  src/H5Dint.c  14
1 file changed, 12 insertions, 2 deletions
diff --git a/src/H5Dint.c b/src/H5Dint.c
index acb95d2..f8577cc 100644
--- a/src/H5Dint.c
+++ b/src/H5Dint.c
@@ -1174,6 +1174,7 @@ H5D_create(H5F_t *file, hid_t type_id, const H5S_t *space, hid_t dcpl_id,
         case H5D_CHUNKED:
             {
                 hsize_t max_dim[H5O_LAYOUT_NDIMS];      /* Maximum size of data in elements */
+                uint64_t chunk_size;                    /* Size of chunk in bytes */
 
                 /* Set up layout information */
                 if((ndims = H5S_GET_EXTENT_NDIMS(new_dset->shared->space)) < 0)
@@ -1208,8 +1209,17 @@ H5D_create(H5F_t *file, hid_t type_id, const H5S_t *space, hid_t dcpl_id,
                     HGOTO_ERROR(H5E_DATASET, H5E_CANTINIT, NULL, "chunk size must be <= maximum dimension size for fixed-sized dimensions")
 
                 /* Compute the total size of a chunk */
-                for(u = 1, new_dset->shared->layout.u.chunk.size = new_dset->shared->layout.u.chunk.dim[0]; u < new_dset->shared->layout.u.chunk.ndims; u++)
-                    new_dset->shared->layout.u.chunk.size *= new_dset->shared->layout.u.chunk.dim[u];
+                /* (Use 64-bit value to ensure that we can detect >4GB chunks) */
+                for(u = 1, chunk_size = (uint64_t)new_dset->shared->layout.u.chunk.dim[0]; u < new_dset->shared->layout.u.chunk.ndims; u++)
+                    chunk_size *= (uint64_t)new_dset->shared->layout.u.chunk.dim[u];
+
+                /* Check for chunk larger than can be represented in 32-bits */
+                /* (Chunk size is encoded in 32-bit value in v1 B-tree records) */
+                if(chunk_size > (uint64_t)0xffffffff)
+                    HGOTO_ERROR(H5E_DATASET, H5E_CANTINIT, NULL, "chunk size must be < 4GB")
+
+                /* Retain computed chunk size */
+                H5_ASSIGN_OVERFLOW(new_dset->shared->layout.u.chunk.size, chunk_size, uint64_t, uint32_t);
 
                 /* Initialize the chunk cache for the dataset */
                 if(H5D_istore_init(file, new_dset) < 0)
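
Pulled out of its dataset-creation context, the arithmetic the second hunk
adds amounts to the following standalone sketch (the helper name and
simplified signature are hypothetical, not HDF5 internals): widen to 64 bits
before multiplying so a >4GB product cannot silently wrap, reject anything
that does not fit the 32-bit chunk-size field of a v1 B-tree record, and
only then narrow the value back to uint32_t.

#include <stdint.h>

/* Hypothetical standalone version of the check above: compute the chunk
 * size in 64 bits so the product cannot wrap, then refuse any value that
 * will not fit the 32-bit on-disk field. */
static int
compute_chunk_size(const uint32_t dim[], unsigned ndims, uint32_t *size_out)
{
    uint64_t chunk_size = (uint64_t)dim[0];
    unsigned u;

    for(u = 1; u < ndims; u++)
        chunk_size *= (uint64_t)dim[u];

    /* Chunk size is encoded in a 32-bit value in v1 B-tree records */
    if(chunk_size > (uint64_t)0xffffffff)
        return -1;                      /* chunk size must be < 4GB */

    *size_out = (uint32_t)chunk_size;   /* lossless: range checked above */
    return 0;
}

The real code performs the final narrowing with H5_ASSIGN_OVERFLOW, HDF5's
internal macro for assignments between integer types of different widths,
which (as I understand it) additionally asserts in debug builds that the
value fits in the destination type.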