author | MuQun Yang <ymuqun@hdfgroup.org> | 2001-11-19 21:29:26 (GMT) |
---|---|---|
committer | MuQun Yang <ymuqun@hdfgroup.org> | 2001-11-19 21:29:26 (GMT) |
commit | debeaf6e6438cfacd102db792b87d31a6fd3ac3d (patch) | |
tree | 24568e4154359fd6edceb0666b895e3c77ecc9a2 /test/tconfig.c | |
parent | 6db1b78950a6494d50d6d77f53222024cd120a09 (diff) | |
[svn-r4612]
Purpose:
A new feature
Description:
While testing the h4toh5 utility with real NASA files, we found a case where one data array (a single SDS) is so big that it exceeds the physical memory of some machines (>128 MB) and the conversion fails. Until a smarter hyperslab operation is available, I am dividing the whole SDS into smaller hyperslabs, each proportional to the original SDS array dimensions. For example, a three-dimensional array with 1000*1000*1000 elements can be divided into eight 500*500*500 pieces; each piece is read and written separately, with its starting and ending points recorded. In this way the memory allocation failure can be avoided, although it may not be the most efficient approach.
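A minimal sketch, in plain C, of the piece-by-piece bookkeeping described above (this is not the actual h4toh5 code; the fixed split factor of two per dimension, the variable names, and the printed output are assumptions made purely for illustration):

```c
/*
 * Sketch: divide an N-dimensional array into pieces proportional to the
 * original dimensions and compute the starting offset and edge length of
 * each piece.  Each piece would then be handled as one hyperslab.
 */
#include <stdio.h>

#define RANK 3

int main(void)
{
    unsigned long dims[RANK]  = {1000, 1000, 1000}; /* original SDS dimensions */
    int           split[RANK] = {2, 2, 2};          /* pieces per dimension    */
    int           idx[RANK]   = {0, 0, 0};          /* current piece index     */
    int           done = 0;

    while (!done) {
        unsigned long start[RANK], count[RANK];
        int i;

        /* starting point and edge length of this piece */
        for (i = 0; i < RANK; i++) {
            unsigned long edge = dims[i] / split[i];
            start[i] = (unsigned long)idx[i] * edge;
            /* the last piece along a dimension absorbs any remainder */
            count[i] = (idx[i] == split[i] - 1) ? dims[i] - start[i] : edge;
        }

        printf("piece start=(%lu,%lu,%lu) count=(%lu,%lu,%lu)\n",
               start[0], start[1], start[2], count[0], count[1], count[2]);

        /* the real converter would read this hyperslab from the SDS and
         * write it to the HDF5 dataset here, one piece at a time */

        /* advance the piece index like an odometer */
        for (i = RANK - 1; i >= 0; i--) {
            if (++idx[i] < split[i])
                break;
            idx[i] = 0;
            if (i == 0)
                done = 1;
        }
    }
    return 0;
}
```

In the real converter, each computed start/count pair would presumably drive one SDreaddata call on the HDF4 side and one H5Sselect_hyperslab plus H5Dwrite on the HDF5 side, so that only one piece of the array needs to be in memory at a time.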
I have tested this feature with an SDS without chunking, and it works fine. However, when testing an SDS with chunking, it is extremely slow; this appears to be a bug in the HDF5 library at the moment. Quincey may fix it later and give me a more efficient way to handle the problem. Currently all of my test files use UNLIMITED dimensions, so in HDF5 the chunking feature is required.
So by default, this feature is not turned on.
Solution:
See above.
Platforms tested:
Linux 2.2.18
Diffstat (limited to 'test/tconfig.c')
0 files changed, 0 insertions, 0 deletions