Code removal
Description:
Removed the HDF4 source files from the HDF5 tree. The directories
will remain. Use the "-P" option when doing a cvs checkout or update
to "prune" the empty directories from your personal tree.
Purpose:
Minor code changes for the SDS conversion in the unlimited-dimension case.
Description:
Solution:
Platforms tested:
eirene
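For context, a minimal sketch of the HDF5 side of the unlimited-dimension case, with illustrative names and sizes rather than the converter's actual code; an H5S_UNLIMITED maximum dimension requires a chunked layout:

    #include "hdf5.h"

    /* Dataset whose first dimension can grow, as for an unlimited SDS.
     * The current size may even start at 0 rows. */
    hid_t create_unlimited_dataset(hid_t file_id, hsize_t ncols)
    {
        hsize_t dims[2]    = {0, ncols};               /* current size  */
        hsize_t maxdims[2] = {H5S_UNLIMITED, ncols};   /* growable dim  */
        hsize_t chunk[2]   = {16, ncols};              /* illustrative  */
        hid_t   space = H5Screate_simple(2, dims, maxdims);
        hid_t   dcpl  = H5Pcreate(H5P_DATASET_CREATE);
        hid_t   dset;

        H5Pset_chunk(dcpl, 2, chunk);  /* unlimited dims need chunking */
        dset = H5Dcreate2(file_id, "/SDS", H5T_NATIVE_INT, space,
                          H5P_DEFAULT, dcpl, H5P_DEFAULT);
        H5Pclose(dcpl);
        H5Sclose(space);
        return dset;
    }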
Purpose:
A new feature
Description:
While testing the h4toh5 utility with real NASA files, we found an example where the data array (one SDS) is so big that it exceeds the physical memory of some machines (>128 MB), and the conversion failed. Until the smart hyperslab operation is available, I am dividing the whole SDS into smaller hyperslabs, with each hyperslab proportional to the original SDS array dimensions. For example, a three-dimensional array with 1000*1000*1000 elements can be divided into eight 500*500*500 pieces; each piece is read and written separately, and its starting and ending points are remembered. In this way the memory-allocation failure can be avoided, although it may not be the most efficient approach.
I've tested this feature using an SDS without chunking, and it works fine. However, when testing an SDS with chunking, it is extremely slow. This turns out to be a bug in the HDF5 library; Quincey may fix it later and provide a more efficient way to handle the problem. Currently all my testing files have UNLIMITED dimensions, so in HDF5 the chunking feature is required.
So by default, this feature is not turned on.
Solution:
See the above, and the sketch after this message.
Platforms tested:
linux 2.2.18
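A minimal sketch of the piecewise copy described above, assuming a rank-3 SDS of 32-bit integers halved along every dimension; sds_id, dset_id, file_space, and the even dims[] are illustrative assumptions, not the converter's actual interface:

    #include <stdlib.h>
    #include "mfhdf.h"   /* HDF4 SD interface (SDreaddata, int32)  */
    #include "hdf5.h"    /* HDF5 (H5Sselect_hyperslab, H5Dwrite)   */

    /* Copy a rank-3 SDS into an HDF5 dataset in eight half-size
     * pieces, so the in-memory buffer is 1/8 of the full array.
     * Assumes the objects are already open, the data is 32-bit
     * integer, and each dimension is even. */
    static herr_t copy_in_pieces(int32 sds_id, hid_t dset_id,
                                 hid_t file_space, const hsize_t dims[3])
    {
        hsize_t half[3] = {dims[0] / 2, dims[1] / 2, dims[2] / 2};
        hid_t   mem_space = H5Screate_simple(3, half, NULL);
        int32  *buf = (int32 *)malloc((size_t)(half[0] * half[1] * half[2])
                                      * sizeof(int32));
        int     i, j, k, d;

        for (i = 0; i < 2; i++)
          for (j = 0; j < 2; j++)
            for (k = 0; k < 2; k++) {
                int32   start4[3], edge4[3];   /* HDF4 wants int32   */
                hsize_t start5[3];             /* HDF5 wants hsize_t */
                int     corner[3] = {i, j, k};

                for (d = 0; d < 3; d++) {      /* start of this piece */
                    start5[d] = corner[d] * half[d];
                    start4[d] = (int32)start5[d];
                    edge4[d]  = (int32)half[d];
                }
                /* read one piece from the HDF4 SDS ... */
                if (SDreaddata(sds_id, start4, NULL, edge4, buf) == FAIL) {
                    free(buf);
                    return -1;
                }
                /* ... and write it to the matching HDF5 hyperslab */
                H5Sselect_hyperslab(file_space, H5S_SELECT_SET,
                                    start5, NULL, half, NULL);
                H5Dwrite(dset_id, H5T_NATIVE_INT, mem_space, file_space,
                         H5P_DEFAULT, buf);
            }
        free(buf);
        H5Sclose(mem_space);
        return 0;
    }

Halving every dimension gives 2^rank pieces; a real converter would derive the split factor from the available memory rather than hard-coding two.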
Purpose:
1) fix the implementation of images according to the image specification
2) fix two bugs in the SDS implementation: the first is
to handle an unlimited SDS whose first dimension size is set to 0;
the second is to change the way the HDF5 dataset is written.
Description:
1) map 24-bit images to 3D arrays instead of a 2D compound datatype (see the sketch after this message).
2) previously forgot to consider an unlimited SDS with the size set to 0.
3) H5Pset_buffer seems not to work well for an extremely small size.
Solution:
1) see above.
2) add a special case to deal with this.
3) don't use H5Pset_buffer.
Platforms tested:
RedHat Zoot 6.2
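A minimal sketch of the 3D-array mapping for a 24-bit image, with an illustrative dataset name, using the current H5Dcreate2 signature (the library of that era used a shorter H5Dcreate):

    #include "hdf5.h"

    /* Store a 24-bit image as height x width x 3 unsigned chars,
     * instead of a 2D array of a compound {R,G,B} datatype. */
    hid_t create_image24(hid_t file_id, hsize_t height, hsize_t width)
    {
        hsize_t dims[3] = {height, width, 3};   /* 3 = R, G, B */
        hid_t   space = H5Screate_simple(3, dims, NULL);
        hid_t   dset  = H5Dcreate2(file_id, "/Image24", H5T_NATIVE_UCHAR,
                                   space, H5P_DEFAULT, H5P_DEFAULT,
                                   H5P_DEFAULT);
        H5Sclose(space);
        return dset;
    }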
Purpose:
New features
Description:
1. Add an option to convert an HDF4 file without the HDF4-specific attributes such as
HDF4_OBJECT_TYPE, HDF4_REF_NUM, etc.
This can be done by running "h4toh5 -na input.hdf".
By default the converter will still keep the HDF4-specific attributes.
2. Add compression (gzip) for images too. A compressed HDF4 image can now
be supported by using HDF5 gzip (see the sketch after this message). Not sure whether the tools can read it; this needs to be tested.
3. Change SPACEPAD to NULLTERM for the HDF4 dimension name list. We could use variable-length HDF5 strings to represent these names; however, H5dump and H5view currently cannot support variable-length HDF5 strings, so the converter will wait for the other tools to be updated.
Solution:
Platforms tested:
eirene (Red Hat 6.2) and arabica (Solaris 2.7)
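A minimal sketch of points 2 and 3, with illustrative names and chunk sizes; H5Pset_deflate requires a chunked layout, and H5Tset_strpad switches the padding from space-padded to null-terminated:

    #include "hdf5.h"

    /* 2. gzip-compressed image dataset: deflate requires chunking. */
    hid_t create_gzip_image(hid_t file_id, hsize_t h, hsize_t w)
    {
        hsize_t dims[2]  = {h, w};
        hsize_t chunk[2] = {64, 64};            /* illustrative size */
        hid_t   space = H5Screate_simple(2, dims, NULL);
        hid_t   dcpl  = H5Pcreate(H5P_DATASET_CREATE);
        hid_t   dset;

        H5Pset_chunk(dcpl, 2, chunk);
        H5Pset_deflate(dcpl, 6);                /* gzip level 6 */
        dset = H5Dcreate2(file_id, "/Image", H5T_NATIVE_UCHAR, space,
                          H5P_DEFAULT, dcpl, H5P_DEFAULT);
        H5Pclose(dcpl);
        H5Sclose(space);
        return dset;
    }

    /* 3. fixed-length string type padded with NULLTERM, not SPACEPAD. */
    hid_t make_name_type(size_t len)
    {
        hid_t str_type = H5Tcopy(H5T_C_S1);
        H5Tset_size(str_type, len);
        H5Tset_strpad(str_type, H5T_STR_NULLTERM);
        return str_type;
    }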
Purpose:
Add definitions of two new functions.
Description:
Solution:
Platforms tested:
eirene
Code Movement
Description:
Moved tools code into their own special subdirectories.
Platforms tested:
Linux, Kelgia