Purpose: Maintenance
Description: Documented SZIP change.
Solution:
Platforms tested:
Misc. update:
Purpose: Improvement
Description: The HDF5 Library set the pixels_per_scanline parameter to the size of the
chunk's fastest-changing dimension. As a result, the chunk's fastest-changing
dimension could be no larger than 4K and no smaller than the pixels_per_block
value, so SZIP compression could not be used for many real datasets.
Solution: Reworked the algorithm HDF5 uses to set the pixels_per_scanline value; only
chunks whose total number of elements is less than the pixels_per_block value are rejected.
There is no longer any restriction on the size of the chunk's fastest-changing
dimension (see the sketch after this entry).
Modified the test according to the new algorithm.
Platforms tested: verbena, copper, sol
Misc. update:
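A minimal sketch of enabling SZIP on a chunked dataset whose fastest-changing chunk dimension is smaller than pixels_per_block, which the reworked algorithm now accepts; it assumes the public H5Pset_chunk/H5Pset_szip calls and the five-argument H5Dcreate of that era, and the file name, dataset shape, and chunk shape are illustrative:

    #include "hdf5.h"

    int main(void)
    {
        /* Chunk whose fastest-changing dimension (5) is smaller than
         * pixels_per_block (8); the total number of chunk elements
         * (20 * 5 = 100) is >= pixels_per_block, so the chunk is accepted. */
        hsize_t dims[2]  = {100, 100};
        hsize_t chunk[2] = {20, 5};
        hid_t   file, space, dcpl, dset;

        file  = H5Fcreate("szip_chunks.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
        space = H5Screate_simple(2, dims, NULL);

        dcpl = H5Pcreate(H5P_DATASET_CREATE);
        H5Pset_chunk(dcpl, 2, chunk);
        H5Pset_szip(dcpl, H5_SZIP_NN_OPTION_MASK, 8);   /* NN coding, 8 pixels per block */

        dset = H5Dcreate(file, "dset", H5T_NATIVE_INT, space, dcpl);

        H5Dclose(dset);
        H5Pclose(dcpl);
        H5Sclose(space);
        H5Fclose(file);
        return 0;
    }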
Purpose: Improvement
Description: The HDF5 Library set the pixels_per_scanline parameter to the size of the
chunk's fastest-changing dimension. As a result, the chunk's fastest-changing
dimension could be no larger than 4K and no smaller than the pixels_per_block
value, so SZIP compression could not be used for many real datasets.
Solution: Reworked the algorithm HDF5 uses to set the pixels_per_scanline value; only
chunks whose total number of elements is less than the pixels_per_block value
are rejected. There is no longer any restriction on the size of the chunk's
fastest-changing dimension.
Platforms tested: verbena, copper, sol
Misc. update:
Bug fix and feature.
Description:
The setenv was done in runtest, but its effect lingers into the next test.
So, if the first test sets $CXX to a certain value, that value lingers for
all following tests on the same host. This is usually not desired.
Solution:
Moved the actual setenv code to snapshot. Runtest now just parses the
settings and passes the setenv requests along to snapshot.
Platforms tested:
No h5committest, which would not really test this change.
Hand tested on eirene and TG-NCSA.
Misc. update:
bug fixes
Description:
The return error code for a function was not initialized. On HP-UX this
variable happened to be initialized to -1, causing the function to return
with an error condition.
Solution: initialize the variable to 0.
The name of the dataset was printed after the differences in verbose and
report modes when differences were found.
Solution: check first whether differences were found, and only then print
the name of the dataset and the differences;
in verbose mode always print the name first.
Solution:
Platforms tested:
linux
aix
solaris
Misc. update:
Add t_coll_chunk.c to testpar for the collective chunk IO test.
Description:
Solution:
Platforms tested:
Misc. update:
update documentation and usage message
Description:
updated the HTML documentation for the new h5diff modes
added a section for h5repack
Solution:
Platforms tested:
linux
Misc. update:
To add collective chunk IO tests.
Description:
Three tests are added.
1. Only one hyperslab per process; the hyperslab fits in exactly one chunk.
2. Non-contiguous hyperslabs in each process; these hyperslabs fit in one chunk.
3. A single hyperslab per process, with a smaller chunk assigned; the number of
chunks is equal for every process.
Solution:
The dataset size is set to be very small; it will be enlarged later.
Platforms tested:
AIX 5.1(copper)
Misc. update:
Adding the first round of patches about supporting collective chunk IO in HDF5
Description:
The current HDF5 library does not support collective MPI-IO with chunked storage. When users select the collective option for a data transfer with chunked storage, the library silently converts the option to INDEPENDENT, which causes a tremendous performance penalty. Some applications, such as
the WRF parallel HDF5 IO module, have to use contiguous storage for this reason. However, chunked storage has its own advantages (it supports compression filters and extensible datasets), so making collective MPI-IO possible inside HDF5 with chunked storage is a very important task.
This check-in makes collective chunk IO possible for some special cases. The conditions are as follows (either case allows collective chunk IO):
1. For each process, the hyperslab selection in the file dataspace of the dataset is regular and fits in one chunk.
2. For each process, the hyperslab selection in the file dataspace of the dataset is a single hyperslab, and the number of chunks covered by the selection is equal across processes.
Solution:
Lift the contiguous storage requirement for collective IO.
Use H5D_istore_get_addr to get the corresponding chunk address. The original library routines then take care of using the correct address so that the MPI file type is built correctly for collective IO (see the sketch after this entry).
Platforms tested:
arabica(sol), copper(AIX), eirene(Linux)
parallel test is checked at copper.
Misc. update:
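A minimal sketch of the first supported case, assuming the public MPI-IO property-list calls (H5Pset_fapl_mpio, H5Pset_dxpl_mpio) and the five-argument H5Dcreate of that era; each process selects one regular hyperslab that falls inside exactly one chunk, and the file name and sizes are illustrative:

    #include "hdf5.h"
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int     mpi_rank, mpi_size, i;
        int     data[4];
        hsize_t dims[1], chunk_dims[1], start[1], count[1];
        hid_t   fapl, file, space, dcpl, dset, memspace, dxpl;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &mpi_rank);
        MPI_Comm_size(MPI_COMM_WORLD, &mpi_size);

        /* Create the file with the MPI-IO file driver */
        fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
        file = H5Fcreate("coll_chunk.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

        /* Chunked dataset with one chunk per process */
        dims[0]       = (hsize_t)mpi_size * 4;
        chunk_dims[0] = 4;
        space = H5Screate_simple(1, dims, NULL);
        dcpl  = H5Pcreate(H5P_DATASET_CREATE);
        H5Pset_chunk(dcpl, 1, chunk_dims);
        dset  = H5Dcreate(file, "dset", H5T_NATIVE_INT, space, dcpl);

        /* Each process selects a regular hyperslab inside a single chunk */
        start[0] = (hsize_t)mpi_rank * 4;
        count[0] = 4;
        H5Sselect_hyperslab(space, H5S_SELECT_SET, start, NULL, count, NULL);
        memspace = H5Screate_simple(1, count, NULL);

        /* Request collective (rather than independent) transfer */
        dxpl = H5Pcreate(H5P_DATASET_XFER);
        H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);

        for (i = 0; i < 4; i++)
            data[i] = mpi_rank;
        H5Dwrite(dset, H5T_NATIVE_INT, memspace, space, dxpl, data);

        H5Pclose(dxpl);
        H5Sclose(memspace);
        H5Dclose(dset);
        H5Pclose(dcpl);
        H5Sclose(space);
        H5Pclose(fapl);
        H5Fclose(file);
        MPI_Finalize();
        return 0;
    }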
bug fix for windows testing
Description:
The function H5C_stats_reset does not have H5_DLL in front of it,
which causes the Windows DLL test to fail.
Solution:
Add H5_DLL in front of it.
Platforms tested:
windows xp, sol 2.7, linux 2.4, aix 5.1
Misc. update:
h5diff and h5repack changes
Description:
h5diff
introduced the following four modes of output:
Normal mode: print the number of differences found and where they occurred
Report mode: print the above plus the differences
Verbose mode: print the above plus a list of objects and warnings
Quiet mode: do not print output (h5diff always returns an exit code of 1 when differences are found)
h5repack
added an extra parameter for SZIP filter (coding method)
the new syntax is
-f SZIP=<pixels per block,coding>
(pixels per block is an even number in the range 2-32 and the coding method is 'EC' or 'NN')
Example of use:
./h5repack -i file1 -o file2 -f SZIP=8,NN -v
updated usage messages, test scripts and files accordingly
Solution:
Platforms tested:
linux
AIX
solaris
Misc. update:
Bug fix
Description:
Allow the buffer parameter of H5Dread & H5Dwrite to be NULL if there are no
elements to transfer (see the sketch after this entry).
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel
Too minor to require h5committest
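A minimal sketch of the newly allowed call, assuming the zero-element case is produced with H5Sselect_none; the file and dataset names are illustrative and the five-argument H5Dcreate of that era is used:

    #include "hdf5.h"

    int main(void)
    {
        hsize_t dims[1] = {10};
        hid_t   file, mspace, dset, fspace;

        file   = H5Fcreate("null_buf.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
        mspace = H5Screate_simple(1, dims, NULL);
        dset   = H5Dcreate(file, "dset", H5T_NATIVE_INT, mspace, H5P_DEFAULT);

        /* Select no elements in both the file and memory dataspaces ... */
        fspace = H5Dget_space(dset);
        H5Sselect_none(fspace);
        H5Sselect_none(mspace);

        /* ... so there is nothing to transfer and a NULL buffer is accepted */
        H5Dwrite(dset, H5T_NATIVE_INT, mspace, fspace, H5P_DEFAULT, NULL);

        H5Sclose(fspace);
        H5Dclose(dset);
        H5Sclose(mspace);
        H5Fclose(file);
        return 0;
    }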
Purpose:
Bug Fix
Description:
Calling H5Sset_extent_simple to change a dataspace's maxdims from nonzero to
zero causes errors (infinite loops, seg faults, asserts) because the pointer
to the maximum size isn't cleaned up properly
Solution:
Clean up that pointer. Added a test for this case.
Platforms tested:
sleipnir (very minor change)
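A minimal sketch of the call sequence described, assuming that dropping the maximum dimensions means passing a NULL maxdims on the second call; the sizes are illustrative:

    #include "hdf5.h"

    int main(void)
    {
        hsize_t dims[1]    = {10};
        hsize_t maxdims[1] = {H5S_UNLIMITED};
        hid_t   space      = H5Screate_simple(1, dims, maxdims);

        /* Re-set the extent with no explicit maximum; the previously stored
         * maximum-size pointer must be released cleanly instead of looping
         * or faulting. */
        H5Sset_extent_simple(space, 1, dims, NULL);

        H5Sclose(space);
        return 0;
    }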
Purpose:
Bug Fix
Description:
Trying to create the root group or the working group ("/" or ".") fakes out
HDF5 so that it neither creates a group nor returns an error.
Solution:
H5G_namei now throws an error if it was supposed to insert but didn't.
Platforms tested:
sleipnir, Visual Studio 7 (very minor change)
Misc. update:
Feature
Description:
Showed the Fortran compiler and FFLAGS, and the C++ compiler and CXXFLAGS,
when the corresponding language API is enabled.
Platforms tested:
No h5committest since it is just a simple shell script change.
Tested in Eirene.
Misc. update:
Code cleanup
Description:
Clean up a bunch of warnings and bring new code more in line with current
library coding practice.
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel
Too minor to require h5committest
Misc. update:
Update.
Description:
Due to source code changes, added 2 new projects to the Windows tests and
removed some files from a Windows project. Updated the h5repack testing batch file
on Windows. Did some minor updates to the cache project.
Solution:
1. Added 2 new projects, reserved and reserveddll, to the Windows workspace. These two projects include
the new source file reserved.c.
2. testh5repack_filters.c and testh5repack_layout.c were removed from the HDF5 1.7 branch by Pedro. Removed
these 2 files from the h5repacktst project.
3. Pedro updated the h5repack tests. Updated the repacktest.bat batch file to match the new h5repack tests
on Unix.
4. In the cache project settings->Link->Ignore libraries: added libcd.lib for the release version and libc.lib for
the debug version.
Platforms tested:
Microsoft Visual C++ 6.0/.NET on Windows XP.
(Will test on Windows 2000 with Visual C++ 6.0 after this check-in).
Misc. update:
Bump version # after making snapshot
bug fix.
Description:
The previous patch adding -D__GNUC__ was causing failures with the
newer compiler, and the original failure could no longer be reproduced,
so the patch was removed.
Platforms tested:
Tested only in TG-NCSA since the change affects only the ia64 platform.
Misc. update:
Bug fix
Description:
Configure uses the value of $ARCH as a gcc option, but the Linux clusters
at NCSA define $ARCH as an environment variable with values that are
not valid compiler options. That caused configure to fail
because it could not compile at all.
Solution:
Changed ARCH to the lower-case $arch (convention dictates that environment
variables are upper case). Also preset $arch to null and do not
honor any passed-in values.
Platforms tested:
Attempted to run h5committest, but sol was failing due to a full /tmp.
Copper and verbena passed. Also passed on TG-NCSA.
Misc. update:
bug fix, new feature
Description:
Fixed a bug in the parse function for cases where a name has already been
inserted but a new name also appears, for example:
-f dset1:GZIP=1 -l dset1,dset2:CHUNK=20x20
Here dset1 is already inserted, but dset2 must also be inserted (it was not).
Added a CHECK_SZIP symbol to enable/disable checking of library-related SZIP parameters.
Added printing of the filter name in verbose mode (visually confirms that the filter was applied).
Solution:
Platforms tested:
linux
solaris
AIX
Misc. update:
Purpose:
Bug fix
Description:
Replaced "unsigned long long" with hsize_t in H5MF
Added "return 0" at end of reserved.c test
Platforms tested:
arabica, sleipnir
Purpose: New feature
Description: New APIs H5Tencode and H5Tdecode. Given an object ID, H5Tencode encodes the object information into a binary form. H5Tdecode decodes object information from its binary form, reconstructs the object, and returns a new object ID (see the sketch after this entry).
Solution: Use the object header functions H5O_dtype_decode and H5O_dtype_encode to
implement them. The encoded binary form is exactly like the object header information.
This is the first-step check-in. Will check in H5Sencode and H5Sdecode later.
Platforms tested: h5committested and fuss.
Misc. update: will update RELEASE.txt after the 2nd-step check-in.
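A minimal sketch of the encode/decode round trip, assuming the H5Tencode/H5Tdecode signatures as later released (a NULL buffer on the first call just reports the required size):

    #include "hdf5.h"
    #include <stdlib.h>

    int main(void)
    {
        hid_t  dtype   = H5Tcopy(H5T_NATIVE_INT);
        size_t size    = 0;
        void  *buf     = NULL;
        hid_t  decoded = -1;

        /* First call with a NULL buffer only reports the required size */
        H5Tencode(dtype, NULL, &size);
        buf = malloc(size);
        H5Tencode(dtype, buf, &size);

        /* Reconstruct the datatype from the binary form; a new ID is returned */
        decoded = H5Tdecode(buf);

        H5Tclose(decoded);
        H5Tclose(dtype);
        free(buf);
        return 0;
    }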
Purpose:
Description:
Missed adding a test file in previous commit.
Solution:
Platforms tested:
Misc. update:
Purpose:
Bug Fix
Description:
If an HDF5 file grows larger than its address space, it dies and is unable to
write any data. This is more likely to happen since users are able to change
the number of bytes used to store addresses in the file.
Solution:
HDF5 now throws an error instead of dying. In addition, it "reserves" address
space for the local heap and for object headers (which do not allocate space
immediately). This ensures that after the error occurs, there is enough address
space left to flush the entire file to disk, so no data is lost.
A more complete explanation is at /doc/html/TechNotes/ReservedFileSpace.html
Platforms tested:
sleipnir, copper (parallel), verbena, arabica, Windows (Visual Studio 7)
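For context, a hedged sketch of how a file's address size can be shrunk with H5Pset_sizes, which makes address-space exhaustion easy to reach; the 2-byte sizes and file name are illustrative, and failing with an error instead of dying is the behaviour this fix provides:

    #include "hdf5.h"

    int main(void)
    {
        hid_t fcpl = H5Pcreate(H5P_FILE_CREATE);
        hid_t file;

        /* 2-byte addresses and length fields instead of the 8-byte default,
         * so the file's address space is only 64 KB */
        H5Pset_sizes(fcpl, 2, 2);

        file = H5Fcreate("tiny_addr.h5", H5F_ACC_TRUNC, fcpl, H5P_DEFAULT);
        /* Writes that would exceed the 64 KB address space now fail with an
         * error, and enough reserved space remains to flush the file */
        H5Fclose(file);
        H5Pclose(fcpl);
        return 0;
    }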
Bug fix
Description:
Correct problems with "resurrecting" a dataset in a file. (This occurs
when a dataset which is open gets unlinked from the group hierarchy (making it
"dead" and marked for deletion in the file) and then is re-linked to the group
hierarchy). Note that the current solution applies only to datasets; further
work will fix this for groups and named datatypes also.
Also, fix the "debug" routines to be a little more helpful in certain
situations.
Additionally, fix a locking bug in the symbol table node splitting routine
which could be [one of] the cause[s] of the file corruption in flexible
parallel operation.
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel
h5committested
h5repack changes
Description:
There were some requests to change some minor h5repack features.
h5repack only issued a warning about an unavailable filter in verbose mode ( -v );
without -v it kept silent, and users sometimes missed this warning.
The request was that it should always print this warning, so the new format is, e.g.,
./h5repack -i test_szip.h5 -o out.h5
Warning: dataset </dset_szip> cannot be read, SZIP filter is not available
Because of this, and to avoid a lot of these messages in the shell test script, I modified
the script h5repack.sh so that it detects the presence of all filters in the environment
(previously it only detected SZIP).
The test files were also divided into more files, to make the script code easier to
follow.
Solution:
Platforms tested:
linux
AIX (no szip)
solaris (no szip, no gzip )
Misc. update:
Bug fix
Description:
Fix error in chunked dataset I/O where data written out wasn't read
correctly from a chunked, extendible dataset after the dataset was extended.
Also, fix parallel I/O tests to gather error results from all processes,
in order to detect errors that only occur on one process.
Solution:
Bypass the chunk cache for reads as well as writes if the parallel I/O driver is
used and the file is opened for writing.
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel
Too minor to require h5committest
Code optimization
Description:
Re-work the insertion of a new child into an existing node, to exploit
some speedups for adding the rightmost child, since this is a very common case
when appending records to an unlimited size dataset.
Also, hoist the checks for the tree's 'K' value into a field in the shared
information about the tree, instead of re-calculating them all the time.
Platforms tested:
Solaris 2.7 (arabica)
FreeBSD 4.10 (sleipnir) w/parallel
Too minor to require h5committest
Code optimization
Description:
Avoid calling vector comparison routine when operating on 1-D chunks.
Platforms tested:
Solaris 2.7 (arabica)
FreeBSD 4.10 (sleipnir) w/parallel
Too minor to require h5committest
Correct typo in file format for compact layout information
Update.
Description:
Added new files to the Windows workspace.
Solution:
Add H5RC.c and H5RCprivate.h to hdf5 and hdf5dll projects in Windows workspace.
Platforms tested:
MS Visual C++ 6.0 on Windows 2000.
(will test on Windows XP with Visual C++ 6.0 and .NET after this check-in)
Misc. update:
Code optimization
Description:
Refactor the B-tree code to extract all common information for a B-tree into a
shared structure that is pointed to by all the nodes in the tree (instead of being
included in each node).
Also re-order B-tree node comparison checks for chunked datasets to
check for >= the upper node first, since the comparison is a bit "heavy" and
this check is more likely to succeed when you are adding records to the
dataset.
Platforms tested:
Solaris 2.7 (arabica)
FreeBSD 4.10 (sleipnir) w/parallel
Too minor to require h5committest
(also, recent h5dump commits have broken testing...)
h5dump new tests
Description:
added new tests for the printing of array indices (nested objects, several ranks)
Solution:
Platforms tested:
linux
AIX
solaris
Misc. update:
Description:
Changed call to H5File::getFileSize according to C library and
removed CHECK for this call because failure will be handled by
exception.
Platforms tested:
FreeBSD 4.10 (sleipnir)
Linux 2.4 (eirene)
Description:
Added function headers with doxygen.
Changed H5File::getFileSize according to C library.
Platforms tested:
Linux 2.4 (eirene)
FreeBSD 4.10 (sleipnir)
Misc. update:
Code cleanup & small optimization
Description:
Eliminate redundant recomputation of native key pointer offsets.
Platforms tested:
Solaris 2.7 (arabica)
FreeBSD 4.10 (sleipnir) w/parallel
too minor to require h5committest
Bug fix
Description:
The "shared" raw B-tree node could get freed before all the B-tree nodes
had been flushed out to disk and released by the cache.
Solution:
Implement a simple reference counting wrapper for objects in the library
and use it to hold the shared raw B-tree nodes so they aren't freed before all
references to them in memory are released.
Platforms tested:
Solaris 2.7 (arabica)
FreeBSD 4.10 (sleipnir)
IRIX64 6.5 (modei4)
bug fix
Description:
When printing array indices, the calculation of the current column was not done correctly.
Solution:
Platforms tested:
linux
AIX
solaris
Misc. update:
Purpose:
Version 3 of document, from August 2003
Description:
H5IdComponent.cpp: initialized a pointer to NULL
H5Object.cpp: removed functions that were added by mistake
Updated function headers for the rest.
Platforms tested:
SunOS 5.7 (arabica)
Linux 2.4 (eirene)
Purpose: Maintenance
Description: Added h5fget_name_f and h5fget_filesize_f subroutines and tests.
Solution: N/A
Platforms tested: arabica (32-bit), sol (64-bit)
The parallel build on copper failed for the C library with the
following error:
ld: 0711-317 ERROR: Undefined symbol: .H5FD_stdio_term
Since this change doesn't affect the C library, I am checking it in
and will retest a fresh CVS copy after this check-in.
Misc. update:
Revert whitespace commit since CVS seems to be working correctly from
sleipnir.
Whitespace commit to double check that the repository is working correctly
(esp. from sleipnir)
Remove testing file after verifying that binary adds of *.h5 files are
working correctly.
Testing "binary" addition of new *.h5 files. (This file will be removed
immediately)
Update.
Description:
Added cache.c to the Windows tests. Updated H5Tinit.c.
Solution:
1. Added 2 new projects cache and cachedll to the Windows workspace. These two projects include
the new source code cache.c.
2. Updated H5Tinit.c.
Platforms tested:
MS Visual C++ 6.0 and .NET in Windows XP.
Misc. update:
h5dump new tests
Description:
added more tests for the escape/no-escape feature for string data (with vlen, with
compound, and with char data)
Solution:
Platforms tested:
linux
solaris
AIX
Misc. update:
Code optimization
Description:
Since the raw B-tree nodes are the same size and only used when reading in
or writing out a B-tree node, move the raw B-tree node buffer from being per-node
to a single one that is shared among all B-tree nodes of a particular tree,
freeing up a lot of space and eliminating lots of memory copies, etc.
Platforms tested:
Solaris 2.7 (arabica)
FreeBSD 4.10 (sleipnir) w/parallel
Too minor to require h5committest