Bump version # after creating private snapshot.
Code cleanup
Description:
Rename some variables whose names are C++ keywords to non-keyword names.
Platforms tested:
FreeBSD 4.8 (sleipnir)
too minor to require full h5committest
Bug fix and refactored code
Description:
Release (return) the file space that has already been allocated when a dataset
creation fails.
Also track the change to the H5Fget_obj_<foo> APIs.
Platforms tested:
FreeBSD 4.8 (sleipnir)
h5committest
Bug fix, refactored code
Description:
Fixed closing objects for "strong" file close degree, which previously would
sometimes attempt to close the same object twice (when a named datatype and
a dataset which used it were both left open before the file was closed).
Stopped datatype iteration from querying for the group entry of non-named
datatypes.
Added attributes to the list of objects that can be queried by
H5Fget_obj_count and H5Fget_obj_ids, since they can also hold a file open.
Took a suggestion from Robb to return the number of open objects
in the return values of H5Fget_obj_count and H5Fget_obj_ids.
Also, added a "max_objs" parameter to the H5Fget_obj_ids function, so that
it can work well with statically allocated arrays.
Platforms tested:
FreeBSD 4.8 (sleipnir)
h5committest
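Illustration (not part of the original checkin): a minimal sketch of how the
reworked calls might be used, assuming the flag names and parameter types
match the form these functions later shipped with.

    #include <stdio.h>
    #include "hdf5.h"

    void report_open_objects(hid_t file_id)     /* hypothetical helper */
    {
        hid_t objs[64];   /* statically allocated array, hence "max_objs" */
        int   nall, ngot, i;

        /* The return value is now the number of open objects (negative
         * on failure), so it can be used directly.                      */
        nall = (int)H5Fget_obj_count(file_id, H5F_OBJ_ALL);

        /* The new "max_objs" parameter keeps the library from writing
         * past the end of the fixed-size array.                         */
        ngot = (int)H5Fget_obj_ids(file_id, H5F_OBJ_DATASET | H5F_OBJ_ATTR,
                                   64, objs);

        printf("%d open objects total, %d datasets/attributes returned\n",
               nall, ngot);
        for (i = 0; i < ngot; i++)
            printf("  open object id: %ld\n", (long)objs[i]);
    }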
Bug fix.
Description:
When an attempt was made to create a duplicate object, the library would
leak file memory and object references in the file, potentially causing an
infinite loop when shutting the library down.
Solution:
Clean up after ourselves... :-)
Platforms tested:
FreeBSD 4.8 (sleipnir)
h5committest
Code cleanup
Description:
Correct name of H5FD_term_interface function in FUNC_ENTER macro.
Platforms tested:
FreeBSD 4.8 (sleipnir)
h5committest
Bug fix
Description:
When objects were left open in a file, the API shutdown code would end
up closing property lists "out from underneath" files that were left open
pending the closing of the objects in them. This caused problems later when
file objects would try to access their property list and fail to close,
causing an infinite loop in the library shutdown.
Solution:
Create some dependencies in the order that the APIs are shut down, trying to
close "higher" level APIs before "lower" level APIs. This still isn't
precise, but it does work correctly now.
Platforms tested:
FreeBSD 4.8 (sleipnir)
h5committest
Windows portability fix
Description:
Change "long long"s in code to "long_long"s, which is the portable version
of this type.
Platforms tested:
FreeBSD 4.8 (sleipnir)
too small for h5committest
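Illustration (not from this checkin): one way such a portable alias can be set
up; the real definition is generated by HDF5's configure machinery, so the
#ifdef below is only a hypothetical sketch.

    /* Hypothetical sketch of a portable 64-bit integer alias. */
    #if defined(_WIN32) && !defined(__GNUC__)
    typedef __int64 long_long;          /* older MSVC spelling           */
    #else
    typedef long long long_long;        /* C99 / most Unix compilers     */
    #endif

    long_long big_offset = 0;           /* usable where a bare "long long"
                                           would not compile everywhere   */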
Description: H5Tget_native_type fails on the Cray for compound datatypes.
Solution: there was size confusion in the library's H5T_get_native_int function.
Platforms tested: Cray, h5committest
Misc. update:
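Illustration (not from the failing Cray test): the kind of call involved; the
compound member layout below is made up.

    #include "hdf5.h"

    typedef struct { int i; double d; } rec_t;   /* hypothetical record */

    hid_t ftype = H5Tcreate(H5T_COMPOUND, sizeof(rec_t));
    H5Tinsert(ftype, "i", HOFFSET(rec_t, i), H5T_NATIVE_INT);
    H5Tinsert(ftype, "d", HOFFSET(rec_t, d), H5T_NATIVE_DOUBLE);

    /* Build the in-memory (native) equivalent of the stored datatype;
     * this is the operation that failed for compound types on the Cray. */
    hid_t ntype = H5Tget_native_type(ftype, H5T_DIR_DEFAULT);

    H5Tclose(ntype);
    H5Tclose(ftype);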
Bug fix
Description:
The fill time for a dataset created with an older version of the library,
which carries no fill value information, was getting set to H5D_FILL_TIME_ALLOC
instead of the new default H5D_FILL_TIME_IFSET, causing H5Dcreate() calls with
that dataset creation property list to fail.
Solution:
Set the new default in the fill time initialization for missing fill value
information.
Platforms tested:
FreeBSD 4.8 (sleipnir)
h5committest
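Illustration (not from the failing case): the property involved; the "dcpl"
handle and fill value below are made up.

    #include "hdf5.h"

    hid_t dcpl = H5Pcreate(H5P_DATASET_CREATE);
    int   fill = -1;

    /* With H5D_FILL_TIME_IFSET the fill value is written only if one has
     * actually been defined with H5Pset_fill_value().                    */
    H5Pset_fill_value(dcpl, H5T_NATIVE_INT, &fill);
    H5Pset_fill_time(dcpl, H5D_FILL_TIME_IFSET);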
Code cleanup
Description:
Add new H5D_FILL_TIME_IFSET value to debugging output.
Platforms tested:
FreeBSD 4.8 (sleipnir)
h5committest
New feature/Bug fix
Description:
Add a new fill time value, H5D_FILL_TIME_IFSET, which writes the fill value
to a dataset only if the user has defined one; otherwise the fill value is not
written to the dataset.
Platforms tested:
FreeBSD 4.8 (sleipnir) serial & parallel
h5committest
Bug fix
Description:
The MPE instrumentation was out of date and wasn't reporting the correct
names of the API functions in the library.
Platforms tested:
IBM p690 (copper)
h5committest not performed because it doesn't test MPE.
Purpose: Maintenance for the third round of testing
Description: Increased the version number to 1.5.59 after creating
a tar ball for testing.
Solution:
Platforms tested:
Misc. update:
Description: H5Tget_native_type fails for multiple kinds of datatypes on the Cray; it also
fails for fixed-length string types.
Platforms tested: Cray, h5committest
Bump version number after making snapshot
Compatibility fix
Description:
The H5P[set|get]_fapl_mpiposix calls changed between v1.4.x and v1.5.x.
Solution:
Wrap them in the v1.4 backward compatibility #ifdefs and update tests, etc.
Platforms tested:
FreeBSD 4.8 (sleipnir) w/parallel & v1.4 compatibility
h5committest pointless
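Illustration (not from this checkin): the shape of the compatibility wrapping;
the macro name and the exact pre/post parameter lists are assumptions based on
the released 1.4 and 1.6 forms of the call.

    #include <mpi.h>
    #include "hdf5.h"

    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);

    #ifdef H5_WANT_H5_V1_4_COMPAT
        H5Pset_fapl_mpiposix(fapl, MPI_COMM_WORLD);     /* assumed v1.4 form  */
    #else
        H5Pset_fapl_mpiposix(fapl, MPI_COMM_WORLD, 0);  /* assumed newer form,
                                                           extra hbool_t arg  */
    #endif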
Bug fix
Description:
The dataset's modification time was getting set whenever raw data was
written with H5Dwrite. Unfortunately, this is a metadata change (and metadata
changes are required to be performed collectively), while H5Dwrite may be
called independently from a parallel program, resulting in metadata cache
corruption and/or program hangs.
Solution:
Don't update the modification time when raw data is written. :-(
Platforms tested:
FreeBSD 4.8 (sleipnir) w/parallel
h5committest
Misc. update:
Noted in release notes and also sent to Frank for updating the docs.
Bug fix
Description:
Chunked datasets with early space allocation and unlimited dimensions were
running into problems where the dataset's "layout" message was marked as
constant too early, preventing the dataset's dimensions from being extended.
Solution:
Change logic for marking the layout message constant to wait a bit longer.
Platforms tested:
FreeBSD 4.8 (sleipnir) w/parallel
h5committest
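Illustration (names made up): the case this fix enables, i.e. early space
allocation combined with an unlimited dimension that is later extended.

    #include "hdf5.h"

    /* "file_id" is assumed to be an already-open file. */
    hsize_t dims[1]    = {100};
    hsize_t maxdims[1] = {H5S_UNLIMITED};
    hsize_t chunk[1]   = {10};
    hsize_t newsize[1] = {200};

    hid_t space = H5Screate_simple(1, dims, maxdims);
    hid_t dcpl  = H5Pcreate(H5P_DATASET_CREATE);
    H5Pset_chunk(dcpl, 1, chunk);
    H5Pset_alloc_time(dcpl, H5D_ALLOC_TIME_EARLY);   /* early space allocation */

    hid_t dset = H5Dcreate(file_id, "data", H5T_NATIVE_INT, space, dcpl);

    /* Previously this could fail once the "layout" message had been
     * marked constant too early.                                        */
    H5Dextend(dset, newsize);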
Bug fix
Description:
libhdf5.settings should be removed in DISTCLEAN, not CLEAN.
Solution:
Platforms tested:
Copper.
Misc. update:
Bug fix
Description:
Correct file->mpi->size to just mpi_size.
Platforms tested:
FreeBSD 4.8 (sleipnir)
Misc. update:
Reported by: Bill
Bug fix.
Description:
Changed "x=(x++)%y" to "x=(x+1)%y", since the former modifies x twice without
an intervening sequence point (undefined behavior) and is not guaranteed to
produce the same result as the latter.
Platforms tested:
Eyeballed.
Misc. update:
Pointed out by: Forsythe, Christi <caforsy@sandia.gov>
New feature
Description:
Dump variable-length sequence datatype info with h5debug
Platforms tested:
FreeBSD 4.8 (sleipnir)
h5committest
Misc. update:
Patch provided by Robb Matzke (matzke@llnl.gov)
Bug fix.
Description:
Don't dump core when displaying global heaps in h5debug.
Platforms tested:
FreeBSD 4.8 (sleipnir)
h5committest
Misc. update:
Patch submitted by Robb
Bug fix
Description:
Don't attempt to perform collective I/O on chunked datasets.
Platforms tested:
FreeBSD 4.8 (sleipnir) w/parallel
h5committested
Bump version #
Description:
Bump the version # of the library after creating snapshot for SAF developers
to test with.
Twist code
Description:
Try to find a happy definition of HSSIZET_MAX, HSSIZET_MIN and HSIZET_MAX
for all platforms.
Platforms tested:
FreeBSD 4.8 (sleipnir)
h5committested
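Illustration (not the library's actual definitions, which are
platform-conditional -- that being the whole point of this checkin): one
plausible form, assuming hsize_t/hssize_t are 64-bit unsigned/signed types
and <limits.h> provides the C99 limits.

    #include <limits.h>

    #define HSIZET_MAX   ((hsize_t)ULLONG_MAX)
    #define HSSIZET_MAX  ((hssize_t)LLONG_MAX)
    #define HSSIZET_MIN  (~(HSSIZET_MAX))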
Code cleanup
Description:
Correct a couple of mistakes in error macros.
Platforms tested:
FreeBSD 4.8 (sleipnir)
triple check not necessary.
Code cleanup
Description:
Fix HSIZET_MAX, HSSIZET_MAX and HSSIZET_MIN to work with Windows (hopefully)
Platforms tested:
h5committested
Code cleanup
Description:
Update dependencies and tracing information
Platforms tested:
h5committested
Bug fix
Description:
An earlier checkin changed some of the assumptions about single block
hyperslabs, causing them to fail in odd ways.
Solution:
Fix errors with single block hyperslabs by keying off of count==1 instead
of stride==1.
Platforms tested:
FreeBSD 4.8 (sleipnir) w/parallel
h5committested
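Illustration (names made up, "space_id" assumed to be an existing 2-D
dataspace): a single-block hyperslab is one where count is 1 in every
dimension, regardless of stride; start is hssize_t here to match the
dataspace API of this era.

    #include "hdf5.h"

    hssize_t start[2]  = {5, 5};
    hsize_t  stride[2] = {3, 3};   /* irrelevant when count is 1 */
    hsize_t  count[2]  = {1, 1};   /* a single block...          */
    hsize_t  block[2]  = {10, 20}; /* ...of 10x20 elements       */

    H5Sselect_hyperslab(space_id, H5S_SELECT_SET, start, stride, count, block);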
Code cleanup
Description:
Move dataspace testing code into a separate module to avoid linking it into
users' applications.
Platforms tested:
FreeBSD 4.8 (sleipnir) w/C++
FreeBSD 4.8 (sleipnir) w/parallel
h5committested
Code cleanup & performance improvements
Description:
Optimize hyperslab construction to detect situations where "regular"
hyperslabs can be recovered from span tree descriptions.
Also, improve the "same shape" routine to work correctly with all the
different combinations of selections.
Platforms tested:
FreeBSD 4.8 (sleipnir) w/C++
FreeBSD 4.8 (sleipnir) w/parallel
h5committested
Code cleanup
Description:
Add another error code...
Platforms tested:
FreeBSD 4.8 (sleipnir) w/C++
FreeBSD 4.8 (sleipnir) w/parallel
h5committested
Code cleanup
Description:
Tweak the HSIZET_MAX macro and add the HSSIZET_MAX and HSSIZET_MIN macros.
Platforms tested:
FreeBSD 4.8 (sleipnir) w/C++
FreeBSD 4.8 (sleipnir) w/parallel
h5committested
Code cleanup
Description:
Move testing routines into their own module, to avoid linking them into
users' applications needlessly.
Platforms tested:
FreeBSD 4.8 (sleipnir) w/C++
FreeBSD 4.8 (sleipnir) w/parallel
h5committested
Code cleanup
Description:
Clean up various warnings & comment out unused code.
Platforms tested:
FreeBSD 4.8 (sleipnir) w/C++
FreeBSD 4.8 (sleipnir) w/parallel
h5committested
API tweak.
Description:
The H5Sget_select_bounds() API call was using hsize_t arrays for retrieving
the 'start' and 'end' coordinates, which is inconsistent with the rest of the
dataspace API.
Solution:
Change the arrays to be hssize_t instead.
Platforms tested:
FreeBSD 4.8 (sleipnir) w/C++
FreeBSD 4.8 (sleipnir) w/parallel
h5committested
Misc. update:
Updated all docs for this change.
Added 1.4 compatibility #ifdef's
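Illustration (not from this checkin): a call following the post-change
signature described above, with the bounds arrays typed as hssize_t;
"space_id" is assumed to be a dataspace with a selection.

    #include "hdf5.h"

    hssize_t start[2], end[2];

    /* Retrieve the bounding box of the current selection. */
    H5Sget_select_bounds(space_id, start, end);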
Performance improvement
Description:
Speed up chunked dataset I/O. This breaks down into several areas:
- Compute chunk selections in the file by using hyperslab operations
instead of iterating over each element in the selection.
- If the file and memory selections are the same shape, use the file
chunk selections to compute the memory chunk selections.
This required several additional dataspace, dataspace selection and
hyperslab routines.
Platforms tested:
h5committested (although Fortran tests failed for some reason)
Bug fix
Description:
When free lists were disabled, the macros for array free list operations
could possibly expand in ways that allowed incorrect numbers of objects to be
operated on.
Solution:
Put parentheses around macro arguments when performing operations with them.
Platforms tested:
h5committested (although Fortran tests failed for some reason)
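Illustration (hypothetical macro and helper, not the library's free list
code): why unparenthesized macro arguments misbehave.

    void free_elements(void *ptr, unsigned n);       /* hypothetical helper */

    /* Unsafe: "n" is expanded without parentheses, so
     * FL_ARR_FREE_BAD(p, count + 1) frees count + 2 elements
     * instead of (count + 1) * 2.                                         */
    #define FL_ARR_FREE_BAD(ptr, n)  free_elements(ptr, n * 2)

    /* Safe: every use of a macro argument is parenthesized.               */
    #define FL_ARR_FREE(ptr, n)      free_elements((ptr), (n) * 2)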
Bug fix
Description:
On the Cray, (void *)~((size_t)NULL) is a different value than
(foo *)~((size_t)NULL), which causes some of the hyperslab algorithms to
fail.
Solution:
Change all the 'void *' forms to 'foo *' forms.
Platforms tested:
Cray SV1
h5committest not needed.
Bug fix
Description:
Correct typo which didn't show up during my previous testing in production
mode.
Platforms tested:
FreeBSD 4.8 (sleipnir)
h5committest not necessary.
Code cleanup
Description:
Fix some unused parameter warnings.
Platforms tested:
FreeBSD 4.8 (sleipnir)
h5committest not necessary.
Bug fix
Description:
The chunk dataspace selection information for raw data I/O was being leaked.
Solution:
Free the chunk information during the cleanup code.
Platforms tested:
Solaris 2.6 (baldric) w/purify
h5committest not needed.
Purpose: Maintenance for the second round of testing
Description: Used bin/h5vers to change version number to 1.5.55
Solution:
Platforms tested: arabica
Misc. update:
Bug fix
Description:
Fix three (!) bugs in szip filter:
- we were using bytes per pixel instead of bits per pixel
- we were using the size of the slowest changing dimension instead of
the fastest changing dimension to compute the blocks per scanline
parameter
- we swapped two parameters when setting up the szip_options block.
Solution:
Addressed bugs above.
Platforms tested:
FreeBSD 4.8 (sleipnir)
h5committest not needed
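Illustration (values made up): enabling the szip filter on a dataset creation
property list; 16 is the pixels-per-block parameter, which must be even and
no larger than 32.

    #include "hdf5.h"

    hid_t   dcpl     = H5Pcreate(H5P_DATASET_CREATE);
    hsize_t chunk[2] = {64, 64};

    H5Pset_chunk(dcpl, 2, chunk);                  /* szip requires chunking */
    H5Pset_szip(dcpl, H5_SZIP_NN_OPTION_MASK, 16);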
Code cleanup
Description:
Improve error reporting for pixels per scanline check.
Platforms tested:
FreeBSD 4.8 (sleipnir)
h5committest not needed.
Bug fix
Description:
Raw data I/O on chunked datasets would attempt to allocate data structures
proportional to the number of chunks in the dataset on disk, instead of just
the number of chunks that the I/O operation would interact with, causing
operations on datasets with large #'s of chunks to fail (or become very slow),
even though the actual I/O operation was very modest.
Solution:
This is the "scalability fix" for chunked datasets that I've mentioned
we need to do, although it's not the complete fix for the issue. Read on
for the details...
Only create data structures for the chunks that the I/O operation will
actually act on, which normally reduces the amount of information allocated
in memory.
I say "normally", because this algorithm has the same problems as the
original algorithm (worse actually, since the data structure for each chunk
is larger now) if _all_ the chunks in a dataset with a lot of chunks are
actually involved in the I/O operation. If that is the case, this code
will fail in a similar way.
To truly fix the problem, we would need to only create data structures for
a particular number of chunks, perform the I/O on just those chunks, then
release the data structures for those chunks and create data structures for
the next set of chunks to access, etc. However, I think this case is pretty
rare right now and we should worry about it after the 1.6.0 release.
Platforms tested:
h5committested
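Illustration (a hypothetical, simplified 1-D sketch of the idea, not the
library's code): allocate per-chunk records only for the chunks the selection
actually touches, rather than one per chunk in the dataset.

    #include <stdlib.h>

    typedef struct { unsigned long index; } chunk_info_t;  /* per-chunk info */

    /* Assumes sel_count > 0 and chunk_size > 0. */
    chunk_info_t *build_chunk_infos(unsigned long sel_start,
                                    unsigned long sel_count,
                                    unsigned long chunk_size,
                                    unsigned long *n_out)
    {
        unsigned long first = sel_start / chunk_size;
        unsigned long last  = (sel_start + sel_count - 1) / chunk_size;
        unsigned long n     = last - first + 1;  /* chunks touched, not total */
        chunk_info_t *infos = malloc(n * sizeof *infos);
        unsigned long i;

        if (infos)
            for (i = 0; i < n; i++)
                infos[i].index = first + i;
        *n_out = infos ? n : 0;
        return infos;
    }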
New feature
Description:
Added "fast comparison" code for hsize_t's, since they are used in the
raw data chunking I/O code now.
Platforms tested:
h5committested