Bug fix/code cleanup
Description:
Clean up the raw data I/O code to bundle the I/O parameters (dataset, DXPL ID, etc.) into a single struct that is passed through the dataset I/O routines, since these parameters always travel together until very near the bottom of the I/O stack.
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel
Solaris 2.7 (arabica)
IRIX64 6.5 (modi4)
h5committest
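A minimal sketch of the bundling pattern described in the commit above; the struct and field names are illustrative stand-ins, not the actual HDF5 internals:

```c
/* Illustrative only -- "long" stands in for hid_t and "void *" for the
 * library's internal dataset type. */
typedef struct io_info_t {
    void *dset;      /* dataset being read or written     */
    long  dxpl_id;   /* dataset transfer property list ID */
    int   is_write;  /* read or write operation           */
} io_info_t;

/* Lower layers of the I/O stack take one pointer instead of a growing
 * argument list of always-together parameters. */
static int do_raw_io(const io_info_t *io_info, void *buf)
{
    (void)io_info;
    (void)buf;
    /* ... use io_info->dset, io_info->dxpl_id, ... */
    return 0;
}
```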
Bug fix
Description:
Fix another couple of int <-> pointer checks.
Platforms tested:
AIX 5.1 (copper)
Too minor to require h5committest
Bug fix
Description:
Correct assertion to check pointer value correctly.
Platforms tested:
AIX 5.1 (copper)
too minor to require h5committest
Bug fix
Description:
Another attempt to fix the address overflow in the core VFL, hopefully one
that works on 64-bit platforms.
Platforms tested:
AIX 5.1 (copper)
Purpose:
Feature
Description:
Datatypes and groups now use H5FO "file object" code that was previously
only used by datasets. These objects will hold a file open if the file
is closed but they have not yet been closed. If these objects are unlinked
then relinked, they will not be destroyed. If they are opened twice (even
by two different names), both IDs will "see" changes made to the object
using the other ID.
When an object is opened using two different names (e.g., if a dataset was
opened under one name, then mounted and opened under its new name), calling
H5Iget_name() on a given hid_t will return the name used to open that hid_t,
not the current name of the object (this is a feature, and a change from the
previous behavior of datasets).
Solution:
Used the H5FO code that was already in place for datasets. Broke H5D_t, H5T_t, and H5G_t into a "shared" struct and a private struct. The shared structs (H5D_shared_t, etc.) hold the object's information and are used by all IDs that point to a given object in the file. The private structs are pointed to by the hid_t and contain the object's group entry information (including its name) and a pointer to the shared struct for that object.
This changed how struct members are accessed throughout the library (e.g., datatype->size is now datatype->shared->size). I added an updated H5Tinit.c to windows.zip.
Platforms tested:
Visual Studio 7, sleipnir, arabica, verbena
Misc. update:
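A hedged sketch of the shared/private split described above, using the dataset case; the field names are simplified stand-ins for the real H5D internals:

```c
/* Illustrative stand-ins -- not the actual HDF5 definitions. */
typedef struct H5D_shared_t {
    int refcount;   /* how many open IDs point at this object in the file */
    int dirty;      /* per-object state seen by every open ID             */
} H5D_shared_t;

typedef struct H5D_t {
    char         *open_name;  /* the name this particular hid_t was opened under */
    H5D_shared_t *shared;     /* one copy per object in the file                 */
} H5D_t;
```

With this split, two hid_t values opened on the same object get distinct private wrappers (so H5Iget_name() reports the name each was opened under) but share one H5D_shared_t, which is why changes made through either ID are visible through the other.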
Bug fix.
Description:
Fix an off-by-one error in the core VFL driver which would cause spurious address or size overflow errors when an odd-valued address or size was checked.
Platforms tested:
FreeBSD 4.10 (sleipnir)
Linux 2.4 (heping)
Solaris 2.7 (arabica)
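One hedged illustration of how such an off-by-one can flag only odd values; the macro names and limit below are invented for the example and are not the actual core VFL code:

```c
#include <stdint.h>

typedef uint64_t haddr_t;              /* stand-in for the HDF5 address type */
#define MAXADDR ((haddr_t)UINT32_MAX)  /* assumed file-size limit            */

/* Buggy form: building the mask from (MAXADDR - 1) leaves bit 0 set in the
 * complement, so every odd address is reported as an overflow. */
#define ADDR_OVERFLOW_BUGGY(a)  (((a) & ~(haddr_t)(MAXADDR - 1)) != 0)

/* Fixed form: compare against the limit directly. */
#define ADDR_OVERFLOW_FIXED(a)  ((a) > MAXADDR)
```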
Bug fix
Description:
Fix a situation where deleting a chunked dataset whose B-tree nodes weren't in the metadata cache would die with a core dump.
Platforms tested:
FreeBSD 4.10 (sleipnir)
Linux 2.4 (heping)
Solaris 2.7 (arabica)
Bug fix
Description:
Fix a small memory leak that occurred when destroying the data transform property: the code forgot to free the array of pointers to the temporary data.
Solution:
Freed memory.
Platforms tested:
sol + eirene
Misc. update:
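A hedged sketch of the cleanup pattern implied above (illustrative names; not the real property-close code):

```c
#include <stdlib.h>

/* Free each temporary buffer, then the array of pointers itself --
 * that final free() is the step the original code was missing. */
static void free_tmp(void **tmp, size_t n)
{
    for (size_t i = 0; i < n; i++)
        free(tmp[i]);
    free(tmp);
}
```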
Bug fix.
Description:
Allow I/O to occur on zero-element selections.
Platforms tested:
h5committest
Code optimization
Description:
Avoid performing a check on the number of objects in a group (which currently involves iterating over all entries in the group's B-tree) before calling H5G_get_obj<foo>_by_idx(). Instead, just have H5G_get_obj<foo>_by_idx() notice when the index has walked off the end and return failure at that point.
Platforms tested:
FreeBSD 4.10 (sleipnir)
h5committest
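A hedged sketch of the pattern: instead of pre-counting the group's objects just to validate the index, the indexed lookup itself reports when the index runs past the end. The types below are illustrative, not the H5G internals:

```c
#include <string.h>

struct entry { const char *name; struct entry *next; };

/* Returns 0 on success, -1 when idx walks off the end of the list. */
static int get_name_by_idx(const struct entry *head, unsigned long idx,
                           char *buf, size_t buflen)
{
    unsigned long seen = 0;

    for (const struct entry *e = head; e != NULL; e = e->next, seen++)
        if (seen == idx) {
            strncpy(buf, e->name, buflen - 1);
            buf[buflen - 1] = '\0';
            return 0;
        }
    return -1;   /* no separate "count all objects" pass needed */
}
```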
Bug fix.
Description:
The code would attempt to calloc with a zero count when a simple expression had no x term. On some platforms (such as AIX) that returns NULL, which appeared as a failure and was treated as running out of space.
Solution:
Check that the count is larger than 0 before making the calloc request.
Platforms tested:
Tested in copper (pp) where the failure appeared. Also in eirene
as double check. No h5committest as the change is trivial.
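A hedged sketch of the guard described above; calloc(0, size) is permitted to return NULL, so NULL should only be treated as an error when the count is nonzero (names are illustrative):

```c
#include <stdlib.h>

/* Allocate 'count' elements, treating NULL as an error only when count > 0. */
static int alloc_terms(size_t count, size_t elem_size, void **out)
{
    *out = NULL;
    if (count > 0) {
        *out = calloc(count, elem_size);
        if (*out == NULL)
            return -1;   /* genuine out-of-memory */
    }
    return 0;            /* count == 0: nothing to allocate, not an error */
}
```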
Port fix of parallel I/O mode confusion bug from the 1.6 tree to the 1.7 tree.
Description:
Collective I/O is not supported for point selections. Thus when some
processes attempted I/O with point selections, and others without, some
attempted collective I/O while others did independent I/O.
Solution:
Arranged for all processes to compare notes before starting I/O, and
for all to use independent I/O if any one of them can't do collective
I/O.
Platforms tested:
copper
h5committested
eirene (parallel)
Misc. update:
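A hedged sketch of the "compare notes" step (not the actual HDF5 code): every rank reports whether it can take part in collective I/O, and all ranks fall back to independent I/O unless everyone agrees.

```c
#include <mpi.h>

/* Returns nonzero only if every rank in 'comm' set local_ok. */
static int use_collective_io(MPI_Comm comm, int local_ok)
{
    int global_ok = 0;

    /* Logical AND across all ranks: one rank with a point selection
     * (local_ok == 0) forces everyone to use independent I/O. */
    MPI_Allreduce(&local_ok, &global_ok, 1, MPI_INT, MPI_LAND, comm);
    return global_ok;
}
```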
Update for the new API, H5Pget_data_transform.
Platforms tested:
Copper only. No h5committest since this is trivial.
Added H5Pget_data_transform
Added support for polynomial data transforms
Description:
There is now support for polynomial data transforms (e.g., (2+x)*(x-5)) instead of just linear ones.
Note that, in order to compute a polynomial transform, one temporary copy of the original data must be stored for each occurrence of "x" in the transform expression. This can result in very high memory usage for expressions of high order.
Platforms tested:
sol + eirene
Misc. update:
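A hedged usage sketch of the new property (error checking omitted; the exact signatures in this development branch may differ slightly from the final API):

```c
#include "hdf5.h"

static void transform_example(void)
{
    char  expr[64];
    hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);

    /* Polynomial transform, applied element-wise during dataset I/O. */
    H5Pset_data_transform(dxpl, "(2+x)*(x-5)");

    /* Read the expression back with the new query function. */
    H5Pget_data_transform(dxpl, expr, sizeof(expr));

    H5Pclose(dxpl);
}
```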
Bug fix
Description:
Correct typedef for dataset region references to avoid struct alignment
issues on Crays.
Solution:
Change the typedef for hdset_reg_ref_t from a struct to an array of unsigned char of the correct size and propagate the appropriate adjustments throughout the code.
Platforms tested:
FreeBSD 4.10 (sleipnir)
Linux 2.4 (verbena) w/fortran
Cray T90 (subzero) w/fortran
Cray SV1 (wind) w/fortran & parallel
Cray T3E (cyclone) w/fortran & parallel
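A hedged sketch of the change; the "before" struct and the 12-byte size are assumptions for illustration (the real size is derived from the heap address width plus an index):

```c
/* Before (illustrative): a struct whose padding/alignment can differ
 * between platforms such as the Crays. */
typedef struct {
    unsigned char heap_addr[8];
    unsigned char idx[4];
} hdset_reg_ref_struct_t;

/* After: an opaque byte array with a fixed, padding-free layout. */
typedef unsigned char hdset_reg_ref_t[12];
```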
Code cleanup
Description:
Tweak recent "forward compatibility" changes to the H5E* API (which allowed
for the old H5E API functions to remain unchanged) by allowing for the error
stack callback function (H5E_auto_t) to also remain unchanged from the 1.6
branch. This required changing the H5E{get|set}_auto routines to have the
old style H5E_auto_t type (which didn't have a stack ID parameter) and the new
H5E{get|set}_auto_stack routines to have a newer "H5E_auto_stack_t" type (which
has a stack ID parameter). This should make the H5E API changes as forwardly
compatible as possible.
One side-affect of this change was that it was impossible to determine if
the current auto error callback was the old style (H5E_auto_t) or the new style
(H5E_auto_stack_t) of callback, so a new API function (H5Eauto_is_stack) was
adde to query this.
Platforms tested:
FreeBSD 4.10 (sleipnir)
IRIX64 6.5 (modi4)
h5committest
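A hedged sketch of the two callback styles (the scalar typedefs are stand-ins so the fragment is self-contained; the real declarations live in H5Epublic.h and may differ in detail):

```c
typedef int       herr_t;   /* stand-in for the HDF5 status type */
typedef long long hid_t;    /* stand-in for the HDF5 ID type     */

/* 1.6-style callback: no error-stack ID parameter. */
typedef herr_t (*H5E_auto_t)(void *client_data);

/* New stack-aware callback used by H5E{get,set}_auto_stack(). */
typedef herr_t (*H5E_auto_stack_t)(hid_t estack_id, void *client_data);

/* H5Eauto_is_stack() exists because the stored callback pointer alone
 * cannot reveal which of these two styles is currently installed. */
```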
bug fix
Description:
The H5O_mtime_decode function was not properly handling the case of the Code Warrior compiler.
Solution:
Platforms tested:
Code Warrior
Misc. update:
bug fix
Description:
On the Cray SV1, an INT type was wrongly converted to a SHORT type by the get_native_integer function.
Solution:
Choose the type based on the precision; this supports cases like the Cray SV1, where the size of a short is 8 bytes but its precision is 32 bits (e.g., an INT (size 8, prec 64) would be converted to a SHORT (size 8, prec 32) if size were the deciding factor).
Platforms tested:
linux
solaris
aix
Misc. update:
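A hedged sketch of matching a native type by precision rather than by size (illustrative types, not the real H5Tget_native_type internals); the candidate list is assumed to be ordered from smallest to largest precision:

```c
#include <stddef.h>

struct native_int {
    const char *name;
    size_t      size;        /* bytes, e.g. 8 for short on the Cray SV1  */
    unsigned    precision;   /* significant bits, e.g. 32 for that short */
};

/* Return the first (smallest-precision) candidate that can hold
 * 'wanted_precision' bits -- size alone is not a reliable guide. */
static const struct native_int *
match_native(const struct native_int *c, size_t n, unsigned wanted_precision)
{
    for (size_t i = 0; i < n; i++)
        if (c[i].precision >= wanted_precision)
            return &c[i];
    return NULL;
}
```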
Description: A few items were left out when the old Error API was restored. There are also a few minor bug fixes.
Platforms tested: arabica fuss h5committest.
bug fix
Description:
The dataset region reference data was not portable between the Cray T3E and other machines.
Solution:
The buffer used to store the heap ID and index was sized with sizeof(int). A constant 4 is used instead of sizeof(int) to permit portability between the Crays and other machines (the heap ID is always encoded as an int32 anyway).
Platforms tested:
linux
aix
solaris
Misc. update:
Description: Restore 6 old error API functions to the library for backward compatibility with v1.6: H5Epush, H5Eprint, H5Ewalk, H5Eclear, H5Eset_auto, and H5Eget_auto. These functions do not take an error stack as a parameter.
Solution: Internally, these functions use the default error stack.
Platforms tested: h5committest and fuss.
Misc. update: RELEASE.txt
"bug fix" sort of.
Description:
The current mpicc at TG-NCSA recognizes uint64_t but not int64_t. The HDF5 code reasonably assumes that when the unsigned type is defined, the corresponding signed type is valid too, so when it detected that uint64_t was valid it went ahead and used int64_t, which ended in a compile failure.
Solution:
Changed the detection to check for int64_t instead. This does not change any logic; it just works around the mpicc compiler error at TG-NCSA.
Platforms tested:
h5committested and tested at TG-NCSA (pp) too.
Misc. update:
Bug fix (sorta)
Description:
Change reading of "missing" chunks from datasets with undefined fill
values to not overwrite the application buffer with random garbage from
memory. Note that this is almost the same, since whatever garbage the
application had in those locations will still be there...
Platforms tested:
FreeBSD 4.10 (sleipnir)
IRIX64 6.5 (modi4)
h5committested
Purpose: Maintenance/bug fixes (OSF1 C++ and missing Fortran APIs)
Description: bringing 1.6 changes to 1.7
Solution:
Platforms tested: OSF1, Solaris 2.8, AIX5.1
Misc. update:
Description: In H5O_fill_new_decode and H5O_fill_new_encode, UINT32DECODE and
UINT32ENCODE were used to decode and encode message size, which is ssize_t.
Solution: Change to INT32DECODE and INT32ENCODE.
Platforms tested: fuss - very simple change.
Misc. update:
Description: In H5O_fill_new_decode, the code tried to read the message size (-1) when the fill value is undefined for version 1. During UINT32DECODE on a 64-bit machine, the value 0x00000000ffffffff is returned, which looks like a valid value.
Solution: If the fill value is undefined, don't read the message size; simply assign -1 to it.
Platforms tested: fuss - did h5committest for v1.6 already.
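A self-contained illustration of the 64-bit pitfall behind the two fill-value fixes above: widening the on-disk 0xffffffff through an unsigned 32-bit path yields a large positive "size", while a signed decode preserves the intended -1.

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t raw = 0xffffffffu;                       /* "size = -1" as stored */
    int64_t  via_unsigned = (int64_t)(uint64_t)raw;   /* 4294967295 -- wrong   */
    int64_t  via_signed   = (int64_t)(int32_t)raw;    /* -1 -- what was meant  */

    printf("%lld %lld\n", (long long)via_unsigned, (long long)via_signed);
    return 0;
}
```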
Bug fix
Description:
1 - The dataset contiguous storage cache had a bug where it was possible to access invalid cache information if the cache wasn't filled on the first trip through the loop over the list of offset/length vectors.
2 - Additionally, the contiguous storage cache information was being used in certain circumstances from the chunked dataset I/O code path, which was generally fatal since the chunk storage and contiguous storage information were stored together in a union.
Solution:
1 - Avoid the special case for the first trip through the loop over offset/length I/O vectors and always check whether the contiguous storage sieve buffer is NULL.
2 - Change the union containing the chunk and contiguous storage cache
information into a struct, allowing both to be used at the same time.
Platforms tested:
FreeBSD 4.10 (sleipnir)
h5committested
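A hedged sketch of the layout change in item 2 (placeholder fields; not the actual H5D cache definitions):

```c
struct chunk_cache  { int   nslots;    };   /* placeholder members */
struct contig_cache { void *sieve_buf; };

/* Before: the two caches shared storage, so using the contiguous cache
 * from the chunked I/O path clobbered the chunk cache (and vice versa). */
union dataset_cache_before {
    struct chunk_cache  chunk;
    struct contig_cache contig;
};

/* After: a struct lets both caches be valid at the same time. */
struct dataset_cache_after {
    struct chunk_cache  chunk;
    struct contig_cache contig;
};
```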
Bug. See other checkin.
Description:
Solution:
Platforms tested:
Misc. update:
Bug fix
Description:
Correct possible core dump when a datatype conversion function is
registered with the library after a compound datatype has been converted
(having its type conversion information cached by the library). The compound
datatype must have been created by inserting the fields in non-increasing
offset order to see the bug.
Solution:
Re-sort the fields in the compound datatypes before recalculating the
cached information when performing the conversion on them.
Platforms tested:
FreeBSD 4.10 (sleipnir)
h5committested
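A hedged sketch of the re-sort step (illustrative member type; the real code operates on the H5T compound member table):

```c
#include <stdlib.h>

struct member { size_t offset; size_t size; };

static int cmp_offset(const void *a, const void *b)
{
    const struct member *ma = a, *mb = b;
    return (ma->offset > mb->offset) - (ma->offset < mb->offset);
}

/* Put members back into increasing-offset order before the cached
 * conversion information is recalculated. */
static void sort_members(struct member *m, size_t n)
{
    qsort(m, n, sizeof *m, cmp_offset);
}
```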
Code cleanup/bug fix
Description:
Check for _O_BINARY being defined instead of O_BINARY, since we actually use
_O_BINARY. (Note that this only affects Windows)
Platforms tested:
FreeBSD 4.10 (sleipnir)
Not tested w/h5committest
Bug fix
Description:
Correct buffer overrun in "multi" VFL driver that was writing past the
end of the "driver name" buffer when encoding the driver info block for the
file's superblock.
Platforms tested:
FreeBSD 4.10 (sleipnir)
Too minor to require h5committest
Description: Cache was running too slowly.
Solution: Added a hash table for indexing. Retained the tree, but
only for dirty entries. As we need to flush dirty entries
in increasing address order, this is sufficient.
Updated statistics collection code for the above.
Converted a number of local functions into macros to avoid
the function call overhead.
Added code to disable the clean and dirty LRU lists in serial
mode.
Updated test code to account for the above changes.
Platforms tested: h5committested + serial, parallel, and fp on Eirene.
Misc. update:
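A hedged sketch of the resulting cache organization (illustrative names and sizes; not the actual H5AC/H5C structures):

```c
#define N_BUCKETS 1024   /* illustrative table size */

struct cache_entry {
    unsigned long       addr;        /* file address; also the hash key */
    int                 is_dirty;
    struct cache_entry *hash_next;   /* bucket chain for O(1) lookup    */
};

struct metadata_cache {
    struct cache_entry *buckets[N_BUCKETS]; /* hash table indexes every entry */
    void               *dirty_tree;         /* address-ordered tree holding
                                             * only dirty entries, so a flush
                                             * can visit them in increasing
                                             * address order                  */
};
```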
Bug fix (Failures when dataset size >= 1 GB, reported by Bill Loewe.)
Description:
In the IBM AIX system using 32-bit mode, if a dataset size was 1 GB or larger, when the "end" of the dataset was selected, MPI would complain that it could not keep the upper bound of a datatype within the range of MPI_Aint. This was because the old algorithm would derive the selection from the extent of each row first. After all dimensions were processed, it then calculated the start position and simply displaced the whole MPI derived type. So the final MPI type actually covered the start position plus the whole dataset. Since the start can be as big as the whole dataset, the final derived type could be nearly twice 1 GB, which hit the 2 GB MPI_Aint range limit in 32-bit mode.
Solution:
Use a different algorithm that includes the start position in the definition of the MPI type for each dimension. When all dimensions are processed, the MPI type represents the selection exactly.
Platforms tested:
h5committested
Misc. update:
Update Szip to accept 'n-bit' data
Description:
See earlier checkins.
Solution:
Platforms tested:
Misc. update:
Code cleanup
Description:
Fix another batch of minor differences between the development and release
branches.
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel
Too minor to require h5committest
Update dependencies
Description:
Update dependencies after config/depend1.in bugfix
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel
IRIX64 6.5 (modi4)
h5committested
Code cleanup
Description:
Various minor tweaks to clean up the code and bring it into closer synchronization with the release branch.
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel
h5committested
IRIX64 6.5 (modi4)
Bug fix.
Description:
Address two problems:
- The computation of the scanline in the szip filter was being
performed in the "can apply" callback routine instead of the
"set local" routine.
- The routine which allocated all the chunks for an entire dataset
(which is invoked when the allocation time is early or late,
rather than incremental) wasn't recording a failed filter in
the information for the chunk, causing the library to believe
that the chunk had the filter applied when it really hadn't.
Solution:
- Move the scanline computation to the "set local" callback.
- Record the filter mask with each chunk created when allocating them.
Platforms tested:
FreeBSD 4.10 (sleipnir) w/szip
Too obscure to require h5committest
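A hedged sketch of the division of labor described in the first item: "can apply" only answers whether the filter fits the dataset, while "set local" computes per-dataset parameters such as the szip scanline. The bodies below are illustrative, not the real szip filter code:

```c
#include "hdf5.h"

static htri_t my_can_apply(hid_t dcpl_id, hid_t type_id, hid_t space_id)
{
    (void)dcpl_id; (void)type_id; (void)space_id;
    return 1;   /* yes/no decision only -- no side effects */
}

static herr_t my_set_local(hid_t dcpl_id, hid_t type_id, hid_t space_id)
{
    hsize_t dims[H5S_MAX_RANK];

    if (H5Sget_simple_extent_dims(space_id, dims, NULL) < 0)
        return -1;

    /* Compute the scanline from dims[] and the datatype size here, then
     * store it in the filter's parameters, e.g. via H5Pmodify_filter(). */
    (void)dcpl_id; (void)type_id;
    return 0;
}
```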
Revise new feature
Description:
Add buffer type and version # bytes to the encoded datatype and dataspace
buffers (for H5Tencode & H5Sencode)
Platforms tested:
FreeBSD 4.10 (sleipnir)
Too minor to require h5committest
Bug fix
Description:
Allow I/O on extendible chunked datasets with (currently) zero-sized
dimensions to proceed harmlessly instead of dumping core on an assertion.
Solution:
Removed the assertion and added checks to avoid the problem situation in H5TB_end.
Platforms tested:
FreeBSD 4.10 (sleipnir) w/ & w/o parallel
Too minor to require h5committest
Bug fix
Description:
Always write fill values to chunks when initializing the entire B-tree and any filters are defined.
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel
Solaris 2.7 (arabica)
Too minor to require h5committest
Purpose:
Bug fix
Description:
When a simple dataspace is created, its extent should be set before using it,
or it will silently function as a NULL dataspace.
Solution:
Added checks on user-supplied dataspaces. Now dataspaces without extents set
will throw errors; users must explicitly set a dataspace to be NULL.
Platforms tested:
sleipnir, windows
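A hedged usage sketch of the rule above, using the null dataspace class (error checking omitted):

```c
#include "hdf5.h"

static void dataspace_example(void)
{
    hsize_t dims[1] = {10};

    /* A simple dataspace needs its extent set before it is used... */
    hid_t simple = H5Screate(H5S_SIMPLE);
    H5Sset_extent_simple(simple, 1, dims, NULL);

    /* ...while a dataspace with no elements should be created explicitly
     * as a null dataspace rather than left without an extent. */
    hid_t null_space = H5Screate(H5S_NULL);

    H5Sclose(simple);
    H5Sclose(null_space);
}
```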