Commit log
Update dependencies
Description:
Update dependencies after config/depend1.in bugfix
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel
IRIX64 6.5 (modi4)
h5committested
Bug fix
Description:
Correct typo which was causing incorrect srcdir paths in generated
dependencies.
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel
IRIX64 6.5 (modi4)
h5committested
Code cleanup
Description:
Various minor tweaks to clean up the code and bring it into closer
synchronization with the release branch.
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel
h5committested
IRIX64 6.5 (modi4)
Purpose:
Updated RELEASE.txt for previous bug fixes
Bug fix.
Description:
Address two problems:
- The computation of the scanline in the szip filter was being
performed in the "can apply" callback routine instead of the
"set local" routine.
- The routine which allocated all the chunks for an entire dataset
(which is invoked when the allocation time is early or late,
rather than incremental) wasn't recording a failed filter in
the information for the chunk, causing the library to believe
that the chunk had the filter applied when it really hadn't.
Solution:
- Move the scanline computation to the "set local" callback.
- Record the filter mask with each chunk created when allocating them.
Platforms tested:
FreeBSD 4.10 (sleipnir) w/szip
Too obscure to require h5committest
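As an illustration of the "set local" pattern involved here, a minimal sketch of a filter callback that derives a scanline value from the chunk dimensions and records it with H5Pmodify_filter(); the filter ID, callback name and cd_values layout are made up, and this is not the library's actual szip code.

    #include "hdf5.h"

    #define EXAMPLE_FILTER_ID ((H5Z_filter_t)256)   /* hypothetical filter ID */

    /* Hypothetical "set local" callback: compute per-dataset parameters
     * from the chunk layout, rather than in the "can apply" callback. */
    static herr_t
    example_set_local(hid_t dcpl_id, hid_t type_id, hid_t space_id)
    {
        hsize_t  chunk_dims[H5S_MAX_RANK];
        unsigned cd_values[4] = {0, 0, 0, 0};       /* layout illustrative only */
        int      ndims;

        (void)type_id; (void)space_id;              /* unused in this sketch */

        if ((ndims = H5Pget_chunk(dcpl_id, H5S_MAX_RANK, chunk_dims)) < 1)
            return -1;

        /* scanline = fastest-changing chunk dimension */
        cd_values[0] = (unsigned)chunk_dims[ndims - 1];

        /* record the computed parameters back on the creation property list */
        return H5Pmodify_filter(dcpl_id, EXAMPLE_FILTER_ID, H5Z_FLAG_OPTIONAL,
                                4, cd_values);
    }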
Description: Added support for the Absoft Fortran compiler.
Solution: Modified the configuration file to check which Fortran compiler is used
and to set the appropriate flags.
Platforms tested: verbena with pgf90 and Absoft f95 compilers
Misc. update:
Revise new feature
Description:
Add buffer type and version # bytes to the encoded datatype and dataspace
buffers (for H5Tencode & H5Sencode)
Platforms tested:
FreeBSD 4.10 (sleipnir)
Too minor to require h5committest
Bug fix
Description:
Allow I/O on extendible chunked datasets with (currently) zero-sized
dimensions to proceed harmlessly instead of dumping core on an assertion.
Solution:
Removed assertion and added checks to avoid problem situation in H5TB_end
Platforms tested:
FreeBSD 4.10 (sleipnir) w/ & w/o parallel
Too minor to require h5committest
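To illustrate the case this fix covers, a minimal sketch (file and dataset names invented, 1.6-era H5Dcreate signature, error checking omitted) that creates an extendible chunked dataset whose current size is zero and then performs I/O on it, which should now proceed harmlessly:

    #include "hdf5.h"

    int main(void)
    {
        hsize_t dims[1]    = {0};               /* currently zero-sized ... */
        hsize_t maxdims[1] = {H5S_UNLIMITED};   /* ... but extendible        */
        hsize_t chunk[1]   = {16};
        int     buf[1];

        hid_t file  = H5Fcreate("zero.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
        hid_t space = H5Screate_simple(1, dims, maxdims);
        hid_t dcpl  = H5Pcreate(H5P_DATASET_CREATE);
        H5Pset_chunk(dcpl, 1, chunk);

        hid_t dset = H5Dcreate(file, "dset", H5T_NATIVE_INT, space, dcpl);

        /* nothing to transfer yet, but the call should not abort */
        H5Dread(dset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, H5P_DEFAULT, buf);

        H5Dclose(dset);
        H5Pclose(dcpl);
        H5Sclose(space);
        H5Fclose(file);
        return 0;
    }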
Bug fix
Description:
Clean up new testfile from earlier checkin.
Platforms tested:
FreeBSD 4.10 (sleipnir)
Too minor to require h5committest
Bug fix
Description:
Always write fill values to chunks when initializing entire B-tree and
any filters are defined.
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel
Solaris 2.7 (arabica)
Too minor to require h5committest
Update.
Description:
Update new source code in the Windows projects and rename the hdf5_cpp debug library.
Solution:
1. Add hdf5\src\H5Dmpio.c to the hdf5 and hdf5dll projects.
2. Rename the hdf5_cpp debug library from hdf5_cpp.lib to hdf5_cppd.lib to differentiate
it from the hdf5_cpp release library hdf5_cpp.lib.
Platforms tested:
Microsoft Visual C++ 6.0 on Windows 2000/XP.
(will test with .NET on Windows XP after this check-in.)
Misc. update:
Purpose: Maintenance for MAC OSX
Description: Added support for the Absoft Fortran compiler f95;
the default compiler is set to IBM xlf.
Solution:
Platforms tested: pommier with xlf and Absoft f95 compilers
Misc. update:
Purpose:
Bug fix
Description:
When a simple dataspace is created, its extent should be set before using it,
or it will silently function as a NULL dataspace.
Solution:
Added checks on user-supplied dataspaces. Now dataspaces without extents set
will throw errors; users must explicitly set a dataspace to be NULL.
Platforms tested:
sleipnir, windows
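A minimal sketch of the usage pattern the new checks target (names are illustrative): a dataspace created with H5Screate(H5S_SIMPLE) has no extent until H5Sset_extent_simple() is called, and using it before then should now raise an error instead of silently acting like a NULL dataspace.

    #include "hdf5.h"

    int extent_example(void)
    {
        hsize_t dims[2] = {10, 20};

        hid_t space = H5Screate(H5S_SIMPLE);    /* extent not yet set */

        /* Using 'space' at this point (e.g. to create a dataset) should now
         * fail with an error rather than behave like a NULL dataspace. */

        if (H5Sset_extent_simple(space, 2, dims, NULL) < 0)
            return -1;                          /* set the extent first */

        /* 'space' is now a valid 10x20 simple dataspace and may be used. */
        H5Sclose(space);
        return 0;
    }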
Code cleanup, sorta
Description:
Added ifdef sections for "H5_USING_PURIFY" in various places in the code,
which are designed to reduce the spurious "uninitialized memory read" warnings
from Purify that are actually harmless. Note that this macro has to be
turned on by adding it to the CFLAGS for the build - I didn't think it was
important enough to add a configure flag for.
Also, the changes in H5HG.c optimize the walks through the objects in a
heap to only look at the 'used' entries instead of all the 'allocated' entries.
Platforms tested:
Solaris 2.7 (arabica) w/purify
Not tested by h5committest
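A minimal sketch of the kind of guarded initialization this macro enables (the helper function is made up); H5_USING_PURIFY is assumed to be defined by hand, e.g. by adding -DH5_USING_PURIFY to CFLAGS:

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical helper: allocate a scratch buffer.  The extra memset is
     * dead weight in normal builds, so it is compiled in only when
     * H5_USING_PURIFY is defined, to silence spurious "uninitialized
     * memory read" reports from Purify. */
    static void *alloc_scratch(size_t size)
    {
        void *buf = malloc(size);

    #ifdef H5_USING_PURIFY
        if (buf)
            memset(buf, 0, size);
    #endif

        return buf;
    }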
Code optimization
Description:
Eliminate duplicated call to H5T_detect_class()
Platforms tested:
Solaris 2.7 (arabica)
FreeBSD 4.10 (sleipnir) w/parallel
Too minor to require h5committest
Bug fix
Description:
Close memory leak I introduced in H5Sencode() routine.
Platforms tested:
Solaris 2.7 (arabica)
FreeBSD 4.10 (sleipnir) w/parallel
Too minor to require h5committest
Code optimization
Description:
Changed calloc() calls to malloc() calls when allocating background buffers
during dataset writes, since the background buffer information will be read
from disk anyway, overwriting any existing values.
Platforms tested:
Solaris 2.7 (arabica)
FreeBSD 4.10 (sleipnir) w/parallel
Too minor to require h5committest
Purpose:
Bug fix
Description:
The modification time (mtime) test would die silently on some systems because
the code is very system-dependent (it relies on getting the current time and
the timezone from the OS).
Solution:
The mtime test now uses the TEST_ERROR macro to print "FAILED" and to report
where the failure occurred. The configure script is a little smarter about
whether the gettimeofday() function returns the timezone correctly.
Further bugs will need to be addressed on a system-by-system basis.
Platforms tested:
sleipnir, arabica, verbena, copper, windows (VC7)
Purpose: Bug fix
Description: While working on the SZIP documentation with Frank, I realized
that when the scanline was less than 4K and bigger than pixels_per_block,
it was not adjusted if number_of_blocks_per_scanline was bigger
than max_number_of_blocks_per_scanline.
Solution: Fixed the code. Unfortunately it didn't help with the problem
I had using h5repack with the DOQGROD.he5 file.
Platforms tested: copper
Misc. update:
Code cleanup
Description:
Clean up the collective chunking code a bit.
Also, add an '--enable-instrument' configure flag to provide a mechanism for
verifying that optimized operations actually happened in the library (instead
of just the "normal" path), by allowing 'flag' properties to be set outside the
library and set when the "right" thing happens. This is mainly for debugging
and regression checks, so we make certain we don't break optimized I/O by
accident. It's enabled by default when --enable-debug is on (which is the default
in the development branch and off by default in the release branch), but it can
also be controlled independently with its own configure flag.
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel
IBM p690 (copper) w/parallel
Code cleanup
Description:
Fixed reserved.c test to use h5_fileaccess/h5_fixname/h5_cleanup.
Updated RELEASE.txt for previous bug fix
Platforms tested:
sleipnir, verbena
Code cleanup
Description:
Clean up various recent changes a little.
Platforms tested:
FreeBSD 4.10 (sleipnir)
Too minor to require h5committest
Update.
Description:
Update all the HDF5 testing batch files.
Solution:
1. Updated the dumptest.bat, difftest.bat, lstest.bat and repacktest.bat files
to match the corresponding tests on Unix.
2. Added one new batch file, mask.bat, to mask off the time information displayed
in some h5ls tests. The time displayed by h5ls uses a system call that accounts
for the local timezone of the computer running the tests. To solve this issue,
the time information has to be masked off; otherwise the expected output may
differ from the actual output.
3. Updated H5pubconf.h to enable the szlib encoder.
Platforms tested:
Microsoft Visual C++ 6.0 on Windows XP and 2000.
(will test with .NET on Windows XP after this check-in.)
Misc. update:
To test collective chunk IO properly.
Description:
See the previous message.
Solution:
See the previous message.
Platforms tested:
arabica(Sol 2.7), eirene(Linux), copper(AIX)
Misc. update:
To add a special section of code for collective chunk IO tests.
Description:
The current patch for collective chunk IO support in HDF5 can only handle
some special (though common in applications) cases.
Inside the source code we did careful checking to make sure that other cases
would not fall into this category and would not use collective IO.
We would also like to test whether those collective conditions were met in our
test programs.
The current parallel HDF5 handles collective IO requests in a special way:
if the library finds it cannot do collective IO, it silently changes to
independent IO. So there is no good way to check whether the library is doing
what it should do without "hacking" into the HDF5 source code for testing
purposes. The "hacking" should not affect how the library works and should be
easy to pull out once a more general collective IO algorithm is in place.
With Quincey's suggestion, we used the HDF5 property APIs to finish the job.
Solution:
The approach includes three parts (a rough sketch follows this entry):
1) In the test program, insert a property into the data transfer property list
and set a default value for it.
2) Inside H5Dio.c, when the library finds that it cannot do collective IO with
chunked storage, it changes that default value.
3) The test program then rechecks the value after H5Dwrite or H5Dread to
evaluate whether the current collective IO case did the right thing.
Note: The test won't stop when it finds that the library is not doing the
right thing; it will probably finish normally. For now the test program just
prints an error message. This should be changed later.
Platforms tested:
copper, arabica, eirene
Misc. update:
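A rough sketch of the flag-property mechanism just described, written against the generic property-list API as it exists in later releases (H5Pinsert2/H5Pget); the property name and the exact calls used in the 2004-era branch are assumptions.

    #include "hdf5.h"

    #define COLL_CHUNK_PROP "coll_chunk_path_taken"    /* illustrative name */

    /* Test side: register a flag property on the data transfer plist. */
    static hid_t make_instrumented_dxpl(void)
    {
        static int flag = 1;                 /* assume the collective path */
        hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);

        H5Pinsert2(dxpl, COLL_CHUNK_PROP, sizeof(int), &flag,
                   NULL, NULL, NULL, NULL, NULL, NULL);
        return dxpl;
    }

    /* After H5Dwrite/H5Dread: the library (conceptually) resets the flag
     * when it falls back to independent I/O, so the test just reads it. */
    static int collective_path_was_used(hid_t dxpl)
    {
        int flag = 0;
        H5Pget(dxpl, COLL_CHUNK_PROP, &flag);
        return flag;
    }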
Description: Line 38, "#define H5_NO_FREE_LISTS", was commented out during debugging.
Solution: Put it back in.
Platforms tested: No test needed
Description: This is the second step of the check-in for encoding and decoding objects.
H5Tencode and H5Tdecode were committed in the previous step; H5Sencode
and H5Sdecode are checked in this time.
Solution: Given an object ID, these functions encode and decode object information
to and from a binary buffer and return a new object ID. They take advantage of the
existing object header message code and encode in the same format.
Platforms tested: fuss and h5committest.
Misc. update: RELEASE.txt
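A minimal round-trip sketch of the new dataspace calls (error checking omitted). As introduced here, H5Sencode takes (obj_id, buf, nalloc); much later releases version these routines (H5Sencode1), so treat the exact signature as era-dependent.

    #include <stdlib.h>
    #include "hdf5.h"

    int roundtrip_example(void)
    {
        hsize_t dims[2] = {4, 6};
        hid_t   space   = H5Screate_simple(2, dims, NULL);

        size_t nalloc = 0;
        H5Sencode(space, NULL, &nalloc);        /* query the required size */

        void *buf = malloc(nalloc);
        H5Sencode(space, buf, &nalloc);         /* encode into the buffer */

        hid_t copy = H5Sdecode(buf);            /* decode into a new object ID */

        int ok = (H5Sget_simple_extent_ndims(copy) == 2);

        H5Sclose(copy);
        H5Sclose(space);
        free(buf);
        return ok ? 0 : -1;
    }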
Purpose: Maintenance
Description: Documented SZIP change.
Solution:
Platforms tested:
Misc. update:
Purpose: Improvement
Description: The HDF5 library set the pixels_per_scanline parameter to the size of the chunk's
fastest-changing dimension. As a result, the chunk's fastest-changing dimension
could not be bigger than 4K or smaller than the pixels_per_block value, and
szip compression couldn't be used for many real datasets.
Solution: Reworked the algorithm by which HDF5 sets the pixels_per_scanline value; only chunks
with a total number of elements less than the pixels_per_block value are rejected.
There is no longer any restriction on the size of the chunk's fastest-changing
dimension.
Modified the test according to the new algorithm.
Platforms tested: verbena, copper, sol
Misc. update:
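A schematic sketch of the new acceptance rule described above (not the library's actual code): a chunk is rejected for szip only when it holds fewer elements than pixels_per_block, with no separate limit on the fastest-changing dimension.

    #include "hdf5.h"

    /* Return 1 if szip may be applied to a chunk, 0 otherwise (sketch only). */
    static int szip_chunk_acceptable(const hsize_t *chunk_dims, int ndims,
                                     unsigned pixels_per_block)
    {
        hsize_t npoints = 1;
        int     i;

        for (i = 0; i < ndims; i++)
            npoints *= chunk_dims[i];

        /* Only chunks with fewer total elements than pixels_per_block are
         * rejected; each individual dimension is unrestricted. */
        return (npoints >= pixels_per_block) ? 1 : 0;
    }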
Purpose: Improvement
Description: The HDF5 library set the pixels_per_scanline parameter to the size of the chunk's
fastest-changing dimension. As a result, the chunk's fastest-changing dimension
could not be bigger than 4K or smaller than the pixels_per_block value, and
szip compression couldn't be used for many real datasets.
Solution: Reworked the algorithm by which HDF5 sets the pixels_per_scanline value; only chunks
with a total number of elements less than the pixels_per_block value are rejected.
There is no longer any restriction on the size of the chunk's fastest-changing
dimension.
Platforms tested: verbena, copper, sol
Misc. update:
Bug fix and feature.
Description:
The setenv was done in runtest, but its effect lingers into the next test.
So, if the first test sets $CXX to a certain value, it lingers for all
following tests on the same host. This is usually not desired.
Solution:
Move the actual setenv code to snapshot. Runtest now just parses the requests
and passes the setenv request along to snapshot.
Platforms tested:
No h5committest, which would not really test the change.
Hand-tested on eirene with TG-NCSA.
Misc. update:
Bug fixes
Description:
1. The return error code for a function was not initialized. On HP-UX this
variable happened to be initialized to -1, causing the function to return
with an error condition.
2. The name of the dataset was printed after the differences in verbose and
report modes when differences were found.
Solution:
1. Initialize the variable to 0.
2. Check first whether differences were found, then print the name of the
dataset and the differences; in verbose mode always print the name first.
Platforms tested:
linux
aix
solaris
Misc. update:
Add t_coll_chunk.c in testpar for the collective chunk IO test.
Description:
Solution:
Platforms tested:
Misc. update:
Update documentation and usage message
Description:
Updated the HTML documentation for the new h5diff modes;
added a section for h5repack.
Solution:
Platforms tested:
linux
Misc. update:
To add collective chunk IO tests.
Description:
Three tests are added:
1. Only one hyperslab for each process, and this hyperslab fits in exactly one
chunk (see the selection sketch below).
2. Non-contiguous hyperslabs in each process; these hyperslabs fit in one chunk.
3. A single hyperslab for each process with a smaller chunk assigned; the number
of chunks is equal for every process.
Solution:
The dataset size is set to be very small; it will be enlarged later.
Platforms tested:
AIX 5.1 (copper)
Misc. update:
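For the first case (one hyperslab per process, fitting exactly one chunk), a sketch of how such a per-process selection might be computed; the chunk size, rank and names are invented for illustration.

    #include "hdf5.h"

    #define CHUNK_DIM 32    /* hypothetical 1-D chunk size, in elements */

    /* Select, for this MPI rank, a hyperslab that coincides exactly with
     * the rank-th chunk of a 1-D chunked dataset (sketch only). */
    static herr_t select_one_chunk(hid_t file_space, int mpi_rank)
    {
        hsize_t start[1] = {(hsize_t)mpi_rank * CHUNK_DIM};
        hsize_t count[1] = {CHUNK_DIM};

        return H5Sselect_hyperslab(file_space, H5S_SELECT_SET,
                                   start, NULL, count, NULL);
    }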
Adding the first round of patches for supporting collective chunk IO in HDF5
Description:
The current HDF5 library doesn't support collective MPI-IO with chunked storage.
When users select the collective option for a data transfer with chunked storage,
the library silently converts the option to independent IO, which causes a
tremendous performance penalty. Some applications, such as the WRF parallel HDF5
IO module, have had to use contiguous storage for this reason. However, chunked
storage has its own advantages (it supports compression filters and extensible
datasets), so making collective MPI-IO possible inside HDF5 with chunked storage
is a very important task.
This check-in makes collective chunk IO possible for some special cases. The
conditions are as follows (either case can use collective chunk IO; see the
sketch below):
1. For each process, the hyperslab selection of the file dataspace of each
dataset is regular and fits in one chunk.
2. For each process, the hyperslab selection of the file dataspace of each
dataset is a single block, and the number of chunks covered by the selection is
equal for every process.
Solution:
Lift the contiguous-storage requirement for collective IO.
Use H5D_istore_get_addr to get the corresponding chunk address; the original
library routines then take care of getting the correct address so that the MPI
file type is built correctly for collective IO.
Platforms tested:
arabica (Sol 2.7), copper (AIX), eirene (Linux)
The parallel test was checked on copper.
Misc. update:
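A minimal sketch, from an application's point of view, of requesting collective I/O on a chunked dataset (parallel build assumed; names are illustrative). When the selection meets one of the two conditions above, the transfer can now stay collective instead of silently dropping to independent I/O.

    #include "hdf5.h"      /* library built with --enable-parallel */

    /* Write 'buf' collectively; 'dset' is a chunked dataset, and 'mem_space'
     * and 'file_space' carry the per-process hyperslab selection. */
    static herr_t write_collectively(hid_t dset, hid_t mem_space,
                                     hid_t file_space, const int *buf)
    {
        herr_t status;
        hid_t  dxpl = H5Pcreate(H5P_DATASET_XFER);

        /* Ask for collective MPI-IO on the data transfer. */
        H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);

        status = H5Dwrite(dset, H5T_NATIVE_INT, mem_space, file_space, dxpl, buf);

        H5Pclose(dxpl);
        return status;
    }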
Bug fix for Windows testing
Description:
The function H5C_stats_reset doesn't have H5_DLL in front of it,
which caused the Windows DLL test to fail.
Solution:
Add H5_DLL in front of it.
Platforms tested:
windows xp, sol 2.7, linux 2.4, aix 5.1
Misc. update:
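A schematic of the export-macro pattern involved (not the actual HDF5 header text; the guard macros and the prototype's argument are shown only illustratively): on Windows, a symbol that is not marked with H5_DLL is not exported from the DLL, so the test executable cannot link against it.

    /* Rough shape of the macro (guard names are assumptions): */
    #if defined(_WIN32) && defined(_HDF5DLL_)
    #  define H5_DLL __declspec(dllexport)
    #else
    #  define H5_DLL
    #endif

    /* With the fix, the declaration carries the export attribute: */
    H5_DLL void H5C_stats_reset(void *cache_ptr);   /* argument shown schematically */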
h5diff and h5repack changes
Description:
h5diff
Introduced the following four modes of output:
Normal mode: print the number of differences found and where they occurred
Report mode: print the above plus the differences
Verbose mode: print the above plus a list of objects and warnings
Quiet mode: do not print output (h5diff always returns an exit code of 1 when differences are found)
h5repack
Added an extra parameter for the SZIP filter (coding method).
The new syntax is
-f SZIP=<pixels per block,coding>
(pixels per block is an even number in the range 2-32, and the coding method is 'EC' or 'NN')
Example of use:
./h5repack -i file1 -o file2 -f SZIP=8,NN -v
Updated usage messages, test scripts and files accordingly.
Solution:
Platforms tested:
linux
AIX
solaris
Misc. update:
Bug fix
Description:
Allow buffer parameter to H5Dread & H5Dwrite to be NULL if there are no
elements to transfer.
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel
Too minor to require h5committest
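A minimal sketch of the now-permitted call pattern (error checking omitted): with nothing selected there are no elements to transfer, so a NULL buffer pointer is accepted.

    #include "hdf5.h"

    static herr_t write_nothing(hid_t dset, hid_t mem_space, hid_t file_space)
    {
        H5Sselect_none(mem_space);              /* zero elements in memory ... */
        H5Sselect_none(file_space);             /* ... and zero in the file    */

        return H5Dwrite(dset, H5T_NATIVE_INT, mem_space, file_space,
                        H5P_DEFAULT, NULL);     /* NULL buffer is now allowed */
    }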
Purpose:
Bug Fix
Description:
Calling H5Sset_extent_simple to change a dataspace's maxdims from nonzero to
zero causes errors (infinite loops, seg faults, asserts) because the pointer
to the maximum size isn't cleaned up properly
Solution:
Clean up that pointer. Added a test for this case.
Platforms tested:
sleipnir (very minor change)
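A sketch of the call sequence involved, paraphrased from the description above (the exact failing case may have differed): the second H5Sset_extent_simple() call drops the previously set maximum size and should now succeed cleanly.

    #include "hdf5.h"

    int extent_change_example(void)
    {
        hsize_t dims[1]    = {10};
        hsize_t maxdims[1] = {100};

        hid_t space = H5Screate_simple(1, dims, maxdims);   /* maxdims set */

        /* Re-set the extent without a separate maximum size (NULL maxdims
         * means the maximum equals the current size); previously the old
         * maximum-size pointer was not cleaned up properly. */
        if (H5Sset_extent_simple(space, 1, dims, NULL) < 0)
            return -1;

        H5Sclose(space);
        return 0;
    }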
Purpose:
Bug Fix
Description:
Trying to create the root group or the working group ("/" or ".") fakes out
HDF5 so that it neither creates a group nor returns an error.
Solution:
H5G_namei now throws an error if it was supposed to insert but didn't.
Platforms tested:
sleipnir, Visual Studio 7 (very minor change)
Misc. update:
Feature
Description:
Show the Fortran compiler and FFLAGS, and the C++ compiler and CXXFLAGS,
when the corresponding language API is enabled.
Platforms tested:
No h5committest since it is just a simple shell script change.
Tested on eirene.
Misc. update:
Code cleanup
Description:
Clean up a bunch of warnings and bring the new code more in line with current
library coding practice.
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel
Too minor to require h5committest
Misc. update:
Update.
Description:
Due to source code changes, added 2 new projects to the Windows tests and
removed some files from a Windows project. Updated the h5repack testing batch file
on Windows. Did some minor updates for the cache project.
Solution:
1. Added 2 new projects, reserved and reserveddll, to the Windows workspace. These two projects include
the new source file reserved.c.
2. testh5repack_filters.c and testh5repack_layout.c were removed from the HDF5 1.7 branch by Pedro. Removed
these 2 files from the h5repacktst project.
3. Pedro updated the h5repack tests. Updated the repacktest.bat batch file to match the new h5repack tests
on Unix.
4. cache project settings->Link->Ignore libraries: add libcd.lib for the release version and libc.lib for the
debug version.
Platforms tested:
Microsoft Visual C++ 6.0/.NET on Windows XP.
(Will test on Windows 2000 with Visual C++ 6.0 after this check-in).
Misc. update:
Bump version # after making snapshot
bug fix.
Description:
The previous patch adding -D__GNUC__ was causing a failure with the
newer compiler, and the earlier failure could no longer be reproduced,
so the patch was removed.
Platforms tested:
Tested only on TG-NCSA since the change affects only the ia64 platform.
Misc. update:
Bug fix
Description:
It uses the value of $ARCH as a gcc option, but the Linux clusters
at NCSA define $ARCH as an environment variable with values that are
not valid compiler options. That caused configure to fail
because it was not able to compile at all.
Solution:
Change ARCH to the lower-case $arch (convention dictates that environment
variables are upper case). Also preset $arch to null and do not
honor any passed-in values.
Platforms tested:
Attempted to run h5committest but sol was failing due to /tmp
being full. Copper and verbena passed. Also passed on TG-NCSA.
Misc. update:
bug fix, new feature
Description:
Fixed a bug in the parse function for cases where a name is already inserted
but a new name also appears. Example:
-f dset1:GZIP=1 -l dset1,dset2:CHUNK=20x20
dset1 is already inserted, but dset2 must also be inserted (it was not).
Added a CHECK_SZIP symbol to enable/disable checking of library-related szip parameters.
Added printing of the filter name in verbose mode (visually confirms that the filter was applied).
Solution:
Platforms tested:
linux
solaris
AIX
Misc. update:
Purpose:
Bug fix
Description:
Replaced "unsigned long long" with hsize_t in H5MF
Added "return 0" at end of reserved.c test
Platforms tested:
arabica, sleipnir