Code removal
Description:
Removed the HDF4 source files from the HDF5 tree. The directories
will remain. Use the "-P" option when doing a cvs checkout or update
to "prune" the empty directories from your personal tree.
Purpose:
minor code changes for the SDS conversion with the unlimited-dimension case
Platforms tested:
eirene
Purpose:
a bug fix and a new feature
Description:
1. Conversion of unlimited-dimension data whose current dimension size is 0.
2. Use a parameter file to control some special cases:
1) To subdivide a huge array into hyperslabs, a memory buffer size has
to be set.
2) When the current dimension size is 0, a default chunk size can be set.
Solution:
1. Old approach: the current dimension size was set to H5S_UNLIMITED, which
is a huge number, and the chunk size was set to a *FIXED* default value.
2. New approach: the current dimension size is set to the actual current
value, 0, and users can provide the chunk size through a parameter file;
a sketch of this approach follows below.
Platforms tested:
eirene
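A minimal sketch of the new approach, assuming a caller-supplied chunk size
in place of the parameter-file parsing (omitted here); the HDF5 1.4-era
H5Dcreate signature is used:

    #include "hdf5.h"

    /* Create a 1-D dataset whose current size is 0 and whose maximum size
     * is unlimited.  `chunk_size` stands in for the value read from the
     * user's parameter file. */
    hid_t
    create_unlimited_dset(hid_t file, const char *name, hsize_t chunk_size)
    {
        hsize_t cur_dims[1] = {0};              /* current size: 0 elements */
        hsize_t max_dims[1] = {H5S_UNLIMITED};  /* may grow without bound   */
        hsize_t chunk_dims[1];
        hid_t   space, dcpl, dset;

        space = H5Screate_simple(1, cur_dims, max_dims);

        /* Chunking is required for an extendible dataset. */
        dcpl = H5Pcreate(H5P_DATASET_CREATE);
        chunk_dims[0] = chunk_size;
        H5Pset_chunk(dcpl, 1, chunk_dims);

        /* HDF5 1.4-era signature; newer releases use H5Dcreate2(). */
        dset = H5Dcreate(file, name, H5T_NATIVE_INT, space, dcpl);

        H5Pclose(dcpl);
        H5Sclose(space);
        return dset;
    }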
Backward Compatibility Fix
Description:
One of H5P[gs]et_buffer's parameters changed between v1.4 and the
development branch.
Solution:
Added v1.4 compatibility code around the H5P[gs]et_buffer implementation
and tests so that v1.4.x users can continue to use their source code
without modification.
These changes cover everything except the FORTRAN wrappers; I spoke with
Elena and she will make the FORTRAN wrapper changes.
Platforms tested:
FreeBSD 4.4 (hawkwind)
Purpose:
A new feature
Description:
While testing the h4toh5 utility with real NASA files, we found an example
where the data array (one SDS) is so big that it exceeds the physical
memory of some machines (>128 MB) and the conversion fails. Until a
smarter hyperslab operation is available, I am dividing the whole SDS into
smaller hyperslabs, each proportional to the original SDS array
dimensions. For example, a three-dimensional array with 1000*1000*1000
elements can be divided into eight 500*500*500 pieces; each piece is read
and written in turn, remembering its starting and ending points. In this
way the memory allocation failure is avoided, although it may not be the
most efficient approach.
I've tested this feature with SDSs without chunking, and it works fine.
However, with chunked SDSs it is extremely slow; this turns out to be a
bug in the HDF5 library. Quincey may fix it later and give me a more
efficient way to handle the problem. Currently all my test files have
UNLIMITED dimensions, so chunking is required on the HDF5 side.
So by default, this feature is not turned on.
Solution:
See above; a sketch of the hyperslab loop follows below.
Platforms tested:
linux 2.2.18
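A minimal sketch of the piecewise copy for the 1-D case, assuming
hypothetical src_dset/dst_dset handles of equal shape and a native-int
element type; the real converter splits every dimension proportionally,
which is omitted here for brevity:

    #include <stdlib.h>
    #include "hdf5.h"

    /* Copy `total` elements from src_dset to dst_dset in pieces of at
     * most `piece` elements, so the in-memory buffer never exceeds the
     * piece size. */
    void
    copy_in_pieces(hid_t src_dset, hid_t dst_dset, hsize_t total, hsize_t piece)
    {
        hid_t   src_space = H5Dget_space(src_dset);
        hid_t   dst_space = H5Dget_space(dst_dset);
        int    *buf = malloc((size_t)piece * sizeof(int));
        hsize_t start[1], count[1];

        for (start[0] = 0; start[0] < total; start[0] += piece) {
            count[0] = (total - start[0] < piece) ? total - start[0] : piece;

            /* Memory dataspace for just this piece */
            hid_t mem_space = H5Screate_simple(1, count, NULL);

            /* Select the same slab in the source and destination files */
            H5Sselect_hyperslab(src_space, H5S_SELECT_SET, start, NULL, count, NULL);
            H5Sselect_hyperslab(dst_space, H5S_SELECT_SET, start, NULL, count, NULL);

            H5Dread(src_dset, H5T_NATIVE_INT, mem_space, src_space, H5P_DEFAULT, buf);
            H5Dwrite(dst_dset, H5T_NATIVE_INT, mem_space, dst_space, H5P_DEFAULT, buf);

            H5Sclose(mem_space);
        }

        free(buf);
        H5Sclose(src_space);
        H5Sclose(dst_space);
    }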
Code cleanup
Description:
Fix a bunch of warnings
Platforms tested:
Linux 2.2 (eirene)
Purpose:
Changed the dataset creation property list over to the new generic
property list implementation.
Platforms tested:
Arabica, modi4 and Hawkwind
Code cleanup for better compatibility with C++ compilers
Description:
C++ compilers choke on our C code for various reasons:
we used our UNUSED macro incorrectly when referring to pointer types,
we used various C++ keywords as variables, etc.,
and we incremented enums with the ++ operator.
Solution:
Changed variables, etc. to avoid C++ keywords (new, class, typename,
typeid, template).
Fixed usage of the UNUSED macro from this:
char UNUSED *c
to this:
char * UNUSED c
(see the sketch below)
Switched the enums from x++ to x=x+1.
Platforms tested:
FreeBSD 4.4 (hawkwind)
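For illustration, a minimal sketch of why the placement matters, assuming
UNUSED expands to a GCC-style attribute (the macro's actual definition in
the HDF5 sources may differ):

    /* Hypothetical definition standing in for the real one. */
    #define UNUSED __attribute__((unused))

    /* Wrong: the attribute attaches to the pointed-to char type rather
     * than to the parameter, which is what C++ compilers choked on. */
    void f(char UNUSED *c);

    /* Right: placed after the '*', the attribute applies to the
     * parameter c itself. */
    void g(char * UNUSED c);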
Code cleanup (sorta)
Description:
When the first versions of the HDF5 library were designed, I remembered
vividly the difficulties of porting code from a 32-bit platform to a 16-bit
platform and asked that people use intn & uintn instead of int & unsigned
int, respectively. However, in hindsight, this was overkill and
unnecessary since we weren't going to be porting the HDF5 library to
16-bit architectures.
Currently, the extra uintn & intn typedefs are causing problems for users
who'd like to include both the HDF5 and HDF4 header files in one source
module (like Kent's h4toh5 library).
Solution:
Changed the uintn & intn's to unsigned and int's respectively.
Platforms tested:
FreeBSD 4.4 (hawkwind)
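A minimal sketch of the header clash being avoided, with hypothetical
typedefs standing in for the old definitions:

    /* What the old HDF5 headers effectively declared: */
    typedef int      intn;
    typedef unsigned uintn;

    /* HDF4's headers declare the same names, so a source module such as
     * Kent's h4toh5 library, which includes both sets of headers, ran
     * into conflicting typedefs.  After this change HDF5 code uses the
     * built-in types directly: */
    unsigned nelmts;    /* was: uintn nelmts; */
    int      status;    /* was: intn  status; */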
Purpose:
Checking in a second time to update the handling of data transfer in
h4toh5; this makes up for the CVS conflict during the check-in a couple of
hours ago.
Platforms tested:
eirene
Purpose:
1) fix the image implementation according to the image specification
2) fix two bugs in the SDS implementation: the first is to handle an
unlimited SDS whose first dimension size is set to 0; the second is to
change the way the HDF5 dataset is written.
Description:
1) map a 24-bit image to a 3D array instead of a 2D compound datatype
(see the sketch below).
2) previously did not consider an unlimited SDS with the size set to 0.
3) H5Pset_buffer seems not to work well for an extremely small size.
Solution:
1) see above.
2) add a special case to deal with this.
3) don't use H5Pset_buffer.
Platforms tested:
RedHat Zoot 6.2
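A minimal sketch of the 3-D mapping, assuming an interlaced RGB image of
height x width pixels; the names are illustrative and the HDF5 1.4-era
H5Dcreate signature is used:

    #include "hdf5.h"

    /* Store a 24-bit (RGB) image as a height x width x 3 array of bytes
     * rather than as a 2-D array of a 3-member compound type. */
    hid_t
    write_rgb_image(hid_t file, const char *name,
                    const unsigned char *pixels, hsize_t height, hsize_t width)
    {
        hsize_t dims[3] = {height, width, 3};   /* third axis = R,G,B */
        hid_t   space = H5Screate_simple(3, dims, NULL);

        /* HDF5 1.4-era signature; newer releases use H5Dcreate2(). */
        hid_t dset = H5Dcreate(file, name, H5T_NATIVE_UCHAR, space, H5P_DEFAULT);

        H5Dwrite(dset, H5T_NATIVE_UCHAR, H5S_ALL, H5S_ALL, H5P_DEFAULT, pixels);

        H5Sclose(space);
        return dset;
    }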
New Features!
Description:
Start migrating the library's internal use of property lists from the
older implementation to the new generic property lists.
Currently, only the dataset transfer property lists are migrated to the
new architecture; all the other property list types still use the older
architecture.
Also, the backward compatibility features are not implemented yet, so
applications which use dataset transfer properties may need to make the
following changes (illustrated in the sketch below):
H5Pcreate(H5P_DATASET_XFER) -> H5Pcreate_list(H5P_DATASET_XFER_NEW)
and
H5Pclose(<a dataset transfer property list>) -> H5Pclose_list(id)
This may still have some bugs in it, especially with Fortran, but I should
be wrapping those up later today.
Platforms tested:
FreeBSD 4.4 (hawkwind)
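A minimal before/after sketch of the interim API, using only the
transitional names quoted above:

    #include "hdf5.h"

    void xfer_example(hid_t dset, void *buf)
    {
        /* Before (old property list architecture):
         *     hid_t xfer = H5Pcreate(H5P_DATASET_XFER);
         *     ...
         *     H5Pclose(xfer);
         */

        /* After (interim generic property list API from this change): */
        hid_t xfer = H5Pcreate_list(H5P_DATASET_XFER_NEW);
        H5Dread(dset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, xfer, buf);
        H5Pclose_list(xfer);
    }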
Purpose:
new features
Description:
1. Add an option to convert an HDF4 file without HDF4-specific attributes
such as HDF4_OBJECT_TYPE, HDF4_REF_NUM, etc. This is done by invoking
"h4toh5 -na input.hdf". By default the converter still keeps the
HDF4-specific attributes.
2. Add compression (gzip) for images too. A compressed HDF4 image can now
be stored with HDF5 gzip compression (see the sketch below). Not sure
whether the tools can read it; this needs to be tested.
3. Change SPACEPAD to NULLTERM for the HDF4 dimension name list. We could
use variable-length HDF5 strings to represent these names, but currently
H5dump and H5view cannot support variable-length HDF5 strings, so the
converter will wait for the other tools to be updated.
Platforms tested:
eirene (Red Hat 6.2) and arabica (Solaris 2.7)
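A minimal sketch of enabling gzip on a converted image dataset; the chunk
dimensions are illustrative, and the HDF5 1.4-era H5Dcreate signature is
used:

    #include "hdf5.h"

    /* Create a gzip-compressed 2-D byte dataset; compression in HDF5
     * requires a chunked layout. */
    hid_t
    create_gzipped(hid_t file, const char *name, hsize_t h, hsize_t w)
    {
        hsize_t dims[2]  = {h, w};
        hsize_t chunk[2] = {64, 64};        /* illustrative chunk size */
        hid_t   space = H5Screate_simple(2, dims, NULL);
        hid_t   dcpl  = H5Pcreate(H5P_DATASET_CREATE);

        H5Pset_chunk(dcpl, 2, chunk);
        H5Pset_deflate(dcpl, 6);            /* gzip, compression level 6 */

        /* HDF5 1.4-era signature; newer releases use H5Dcreate2(). */
        hid_t dset = H5Dcreate(file, name, H5T_NATIVE_UCHAR, space, dcpl);

        H5Pclose(dcpl);
        H5Sclose(space);
        return dset;
    }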
Purpose:
bug fixes; adding more features
Description:
Bugs:
1) HDF4 dimension scale data can be absent while the dimension name is
still defined by the user, so the number of HDF4 dimension names and the
number of object references may differ. Previously this case was not
considered.
2) SDcheckempty returns true when a fill value is set on an HDF4 SDS, and
the fill value information was then lost.
3) Check whether the SDS has a fill value set even though SDcheckempty
returns true, and use H5Pset_fill_value and H5Dcreate on the HDF5 side
(see the sketch below); this still needs to wait for further HDF5
development, and we also need to investigate whether this part of the
code has bugs.
New features: a compressed SDS is now compressed with gzip when it is
converted, which saves some space.
Solution:
See above and the design document.
Platforms tested:
eirene (linux)
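A minimal sketch of carrying the fill value over on the HDF5 side,
assuming the HDF4 fill value has already been read into `fill`; the HDF5
1.4-era H5Dcreate signature is used:

    #include "hdf5.h"

    /* Create a dataset whose fill value is copied from the HDF4 SDS, so
     * an "empty" SDS still round-trips its fill value. */
    hid_t
    create_with_fill(hid_t file, const char *name, hid_t space, int fill)
    {
        hid_t dcpl = H5Pcreate(H5P_DATASET_CREATE);

        /* Record the fill value in the dataset creation property list. */
        H5Pset_fill_value(dcpl, H5T_NATIVE_INT, &fill);

        /* HDF5 1.4-era signature; newer releases use H5Dcreate2(). */
        hid_t dset = H5Dcreate(file, name, H5T_NATIVE_INT, space, dcpl);

        H5Pclose(dcpl);
        return dset;
    }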
Code Movement
Description:
Moved tools code into their own special subdirectories.
Platforms tested:
Linux, Kelgia