Bug fix
Description:
Fix a core dump when flushing a file while a newly created attribute that
hasn't had a value written to it is still open.
Solution:
Write the attribute fill value when appropriate.
Platforms tested:
FreeBSD 4.10 (sleipnir)
Linux 2.4 (verbena)
Solaris 2.7 (arabica)
 
Code cleanup
Description:
Fix a couple of return values from NULL -> FAIL.
Platforms tested:
FreeBSD 4.10 (sleipnir)
Too minor to require h5committest
 
Code cleanup
Description:
Further diff reductions against development branch, in preparation for
merge of new metadata cache code.
Solution:
Change 'dirty' field in metadata cache info struct (H5AC_info_t) to
match development branch 'is_dirty' name.
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel
Solaris 2.7 (arabica)
Linux 2.4 (verbena) w/FORTRAN
 
Port development branch changes to release branch.
Description:
Initial step in bringing changes to support new metadata cache from the
development branch to the release branch.
Solution:
This checkin just aligns the H5AC* API changes, as well as bringing back
various minor code cleanups, etc.
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel
Solaris 2.7 (arabica)
Linux 2.4 (verbena) w/FORTRAN
 
Typo fix
Description:
Fix another typo with mis-merged info.
Platforms tested:
Solaris 2.7 (arabica)
Too minor to require h5committest
 
Typo fix
Description:
Correct minor error value to something appropriate for the release branch.
Platforms tested:
FreeBSD 4.10 (sleipnir)
too minor to require h5committest
 
Bug fix
Description:
Close a couple of memory leaks
Platforms tested:
FreeBSD 4.10 (sleipnir)
Solaris 2.7 (arabica) w/purify
Linux 2.4 (verbena)
too minor for h5committest
 
Description: Prevent creating a datatype of size 0.
Platforms tested: heping (simple change)
 
Purpose: change feature
Description: Back out support for bitfield and time datatypes in H5Tget_native_type and leave it for future support. The function simply returns a "not supported" error message for now.
Platforms tested: h5committest
Misc. update: RELEASE.txt
 
Bug fix
Description:
Relax restrictions on parallel I/O to allow compressed, chunked datasets
to be read in parallel (collective access will be degraded to independent
access, but the information will still be retrieved).
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel
Solaris 2.7 (arabica)
IRIX64 6.5 (modi4)
h5committest
 
Bug fix & code cleanup
Description:
More dataset cleanups to get to a point where we can fix the chunked I/O
bug.
Also fix a couple of errors in the recent file object resurrection changes
which should hopefully address the recent daily test failures (H5T.c).
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel
Solaris 2.7 (arabica)
h5committest
 
Purpose: enhanced feature
Description: Enable size adjustment for compound datatypes. The size
can be increased or decreased (as long as no members are cut off).
Solution: Mainly check whether any member would be cut off when decreasing the
size. The other changes simply remove the assertion against a size of 0.
Platforms tested: h5committest and fuss.
Misc. update: RELEASE.txt
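A rough usage sketch of the new behavior (the function name is illustrative, error checking is omitted, and 4-byte native ints are assumed):

```c
#include "hdf5.h"

static void resize_compound_example(void)
{
    hid_t t = H5Tcreate(H5T_COMPOUND, 16);   /* 16-byte compound datatype     */

    H5Tinsert(t, "a", 0, H5T_NATIVE_INT);    /* member occupies bytes 0..3    */
    H5Tinsert(t, "b", 4, H5T_NATIVE_INT);    /* member occupies bytes 4..7    */

    H5Tset_size(t, 32);  /* increase: always allowed                          */
    H5Tset_size(t, 8);   /* decrease: allowed, since no member is cut off     */
    /* H5Tset_size(t, 6) would fail, because member "b" would be cut off      */

    H5Tclose(t);
}
```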
 
Bug fix/code cleanup
Description:
Clean up the raw data I/O code to bundle the I/O parameters (dataset, DXPL ID,
etc.) into a single struct that is passed around through the dataset I/O routines,
since these parameters are always passed together until very near the bottom of the
I/O stack.
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel
Solaris 2.7 (arabica)
IRIX64 6.5 (modi4)
h5committest
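Very roughly, the shape of that change (type and field names here are placeholders, not the library's actual internals):

```c
/* Hypothetical sketch: parameters that always travel together become one
 * struct that is threaded through the dataset I/O routines as a single
 * argument. */
typedef struct {
    void *dset;      /* the dataset being accessed (an H5D_t * in the library) */
    int   dxpl_id;   /* dataset transfer property list ID (an hid_t)           */
    /* ... any other per-operation parameters that are always passed together  */
} dset_io_info_t;
```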
 
Bug fix
Description:
Correct a couple of typos in the recent object resurrection code.
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel
Too minor to require h5committest
 
Purpose:
Bug fix
Description:
Fix bugs found by daily tests.
Platforms tested:
copper
 
Purpose:
Feature
Description:
(Same change to release branch)
Datatypes and groups now use H5FO "file object" code that was previously
only used by datasets. These objects will hold a file open if the file
is closed but they have not yet been closed. If these objects are unlinked
then relinked, they will not be destroyed. If they are opened twice (even
by two different names), both IDs will "see" changes made to the object
using the other ID.
When an object is opened using two different names (e.g., if a dataset was
opened under one name, then mounted and opened under its new name), calling
H5Iget_name() on a given hid_t will return the name used to open that hid_t,
not the current name of the object (this is a feature, and a change from the
previous behavior of datasets).
Solution:
Used H5FO code that was already in place for datasets. Broke H5D_t's, H5T_t's,
and H5G_t's into a "shared" struct and a private struct. The shared structs
(H5D_shared_t, etc.) hold the object's information and are used by all IDs
that point to a given object in the file. The private structs are pointed
to by the hid_t and contain the object's group entry information (including its
name) and a pointer to the shared struct for that object.
This changed how these structs are accessed throughout the library (e.g., datatype->size
is now datatype->shared->size). I added an updated H5Tinit.c to windows.zip.
Platforms tested:
Visual Studio 7, sleipnir, arabica, verbena
Misc. update:
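A highly simplified sketch of that split (field names abbreviated and the group-entry type reduced to a placeholder; the real structs carry much more state):

```c
/* The object's state lives once, in a shared struct ...                        */
typedef struct H5T_shared_t {
    size_t size;            /* e.g. what used to be datatype->size              */
    /* ... reference count, parent type, member info, etc.                      */
} H5T_shared_t;

/* ... while each hid_t points at its own small private struct.                 */
typedef struct H5T_t {
    void         *ent;      /* per-ID group entry, including the name this ID was opened under */
    H5T_shared_t *shared;   /* state common to every ID on this datatype: datatype->shared->size */
} H5T_t;
```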
 
Bug fix
Description:
Another attempt to fix the address overflow in the core VFL, hopefully one
that works on 64-bit platforms.
Platforms tested:
AIX 5.1 (copper)
 
Bug fix.
Description:
Fix off-by-one error in Core VFL driver which would cause spurious address
or size overflow errors when an odd valued address or size was checked.
Platforms tested:
FreeBSD 4.10 (sleipnir)
Linux 2.4 (heping)
Solaris 2.7 (arabica)
 
Bug fix
Description:
Fix a situation where deleting a chunked dataset with B-tree nodes that
weren't in the metadata cache would die with a core dump.
Platforms tested:
FreeBSD 4.10 (sleipnir)
Linux 2.4 (heping)
Solaris 2.7 (arabica)
 
Bug fix.
Description:
Allow I/O to occur on 0 element selections.
Platforms tested:
h5committest
 
Code optimization
Description:
Avoid performing a check on the number of objects in a group (which
currently involves iterating over all entries in the group's B-tree) before
calling H5G_get_obj<foo>_by_idx. Instead, just have H5G_get_obj<foo>_by_idx()
notice when the index walks off the end of the group and return failure at that point.
Platforms tested:
FreeBSD 4.10 (sleipnir)
h5committest
 
Description: Changed version number to 1.6.3-post0
Solution: ran bin/h5vers script on eirene
Platforms tested:
Misc. update:
 
Description: I changed the version number to 1.6.3
Solution: Ran bin/h5vers -s 1.6.3 on eirene
Platforms tested:
Misc. update:
 
Description: Created test tar ball and changed version to 1.6.3-pre4
using bin/h5vers script on eirene
Solution:
Platforms tested:
Misc. update:
 
Bug fix
Description:
Correct a leftover issue from the recent changes to dataset region references
which occurred when 1.4 backward compatibility is turned on.
Platforms tested:
FreeBSD 4.10 (sleipnir) w/1.4 compat
Configuration not tested by h5committest
 
Bug fix
Description:
Correct typedef for dataset region references to avoid struct alignment
issues on Crays.
Solution:
Change the typedef for hdset_reg_ref_t from a struct to an array of
unsigned char of the correct size and propagate the appropriate adjustments
throughout the code.
Platforms tested:
FreeBSD 4.10 (sleipnir)
Linux 2.4 (verbena) w/fortran
Cray T90 (subzero) w/fortran
Cray SV1 (wind) w/fortran & parallel
Cray T3E (cyclone) w/fortran & parallel
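In outline, the typedef becomes an opaque fixed-size byte array (the size constant below is an assumed value for illustration, not the actual one):

```c
/* An opaque byte array has the same size and layout on every platform, which
 * avoids the struct padding/alignment differences seen on the Crays. */
#define DSET_REG_REF_SIZE 12   /* assumed size, for illustration only */
typedef unsigned char hdset_reg_ref_t[DSET_REG_REF_SIZE];
```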
 
Description: I created a tar ball and changed version to 1.6.3-pre3
Solution: run bin/h5vers on eirene
Platforms tested: eirene
Misc. update:
 
bug fix
Description:
The H5O_mtime_decode function was not properly handling the case for the
Code Warrior compiler.
Solution:
Platforms tested:
Code Warrior
Misc. update:
 
bug fix
Description:
On the Cray SV1, an INT type was wrongly converted to a SHORT type
by the get_native_integer function.
Solution:
Choose the type based on the precision rather than the size; this supports cases
like the Cray SV1, where the size of a short is 8 bytes but its precision is 32 bits
(e.g. an INT (size 8, prec 64) would be converted to a SHORT
(size 8, prec 32) if the size were the deciding factor).
Platforms tested:
linux
solaris
aix
cray sv1
Misc. update:
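The matching idea, as a hedged sketch (this is not the library's actual selection code; the struct and names are illustrative):

```c
#include <stddef.h>

/* Candidate native integer types carry both a storage size and a precision;
 * matching on precision avoids picking the Cray SV1 "short" (size 8 bytes,
 * precision 32 bits) for a 64-bit-precision INT. */
typedef struct {
    size_t   size;        /* bytes of storage  */
    unsigned precision;   /* significant bits  */
} int_type_info_t;

static int is_native_match(const int_type_info_t *candidate, unsigned wanted_precision)
{
    return candidate->precision == wanted_precision;   /* precision decides, not size */
}
```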
 
Updated some comments regarding the latest Cray bug fix
Description:
Solution:
Platforms tested:
Misc. update:
 
bug fix
Description:
the dataset region reference data was not portable between the Cray T3E and other machines
Solution:
The buffer used to store the heap ID and index was sized with sizeof(int).
A fixed size of 4 is now used instead of sizeof(int) to permit portability between
the Crays and other machines (the heap ID is always encoded as an int32 anyway).
Platforms tested:
Cray T3E (read data from linux)
linux
solaris
aix
Misc. update:
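A hypothetical sketch of that sizing change (names made up; only the fixed 4-byte width is taken from the fix above):

```c
#include <stddef.h>

enum { HEAP_ID_DISK_SIZE = 4 };   /* the heap ID is always encoded as an int32 */

static size_t region_ref_buf_size(size_t index_size)
{
    /* was (roughly): sizeof(int) + index_size -- 8 + index_size on the Cray T3E,
     * 4 + index_size elsewhere, so the encoded data was not portable. */
    return HEAP_ID_DISK_SIZE + index_size;
}
```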
 
Description: After discussing with Albert:
I changed the version to hdf5-1.6.3-pre1 and created
a tar ball for the first round of testing.
Then I changed the version to hdf5-1.6.3-pre2 and
now I am committing the changes.
pre1 on ftp is the same as snap6 :-)
Solution:
Platforms tested:
Misc. update:
 
Fix parallel bug reported by Thomas Guignon, in which different
processes became confused as to whether they were doing collective
or individual I/O.
Description:
When one process had a point selection and another didn't, the
first concluded that the I/O was independent, while the second
presumed that it was collective. A hang resulted.
Solution:
Get all processes involved in an I/O to compare notes. If all
agree that the I/O is collective, they proceed collectively. If
any think the I/O should be independent, all use independent
I/O.
Note that this is an interim fix -- the correct solution is to
support collective I/O on point selections. This will take a
while.
Platforms tested:
copper
h5committested
Eirene (serial and parallel)
In the parallel test on Eirene, I encountered a bug in h5repacktst.
However the problem vanished on recompile. Since I couldn't reproduce
it elsewhere, I went ahead with the checkin.
Given my druthers, I would have liked to study the code more
carefully before this check-in. However, there is some time pressure.
The new code implementing the consensus check must not be executed
unless MPI is initialized, and there is a communicator associated
with the file. I think my guards against this case are adequate,
but if we run into a hang or an illegal instruction error, this
change should be suspect.
Misc. update:
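A minimal sketch of the "compare notes" step (illustrative only, not the actual HDF5 routine; per the note above, it must only run when MPI is initialized and the file has a communicator):

```c
#include <mpi.h>

/* Returns non-zero only if every rank wants collective I/O; if any rank asks
 * for independent I/O, all ranks fall back to independent I/O together. */
static int io_mode_consensus(int want_collective, MPI_Comm comm)
{
    int all_collective = 0;

    /* logical AND across ranks: collective only if everyone agrees */
    MPI_Allreduce(&want_collective, &all_collective, 1, MPI_INT, MPI_LAND, comm);
    return all_collective;
}
```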
 
"bug fix" sort of.
Description:
The current mpicc at TG-NCSA recognizes uint64_t but not
int64_t. The HDF5 code reasonably assumes that when the unsigned type
is defined, the corresponding signed type should be valid
too. So, when it detected that uint64_t was valid, it went ahead
and used int64_t, which ended in a compile failure.
Solution:
Changed the detection to check on int64_t instead. This does
not change any logic, just goes around the mpicc compiler error
at TG-NCSA.
Platforms tested:
h5committested and tested at TG-NCSA (pp) too.
Misc. update:
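The probe now effectively asks the compiler about the signed type directly; a sketch of such a test program (illustrative, not the actual configure fragment):

```c
#include <stdint.h>

int main(void)
{
    int64_t v = -1;   /* compilation fails right here if int64_t is missing */

    return v < 0 ? 0 : 1;
}
```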
 
Bug fix (sorta)
Description:
Change reading of "missing" chunks from datasets with undefined fill
values to not overwrite the application buffer with random garbage from
memory. Note that this is almost the same, since whatever garbage the
application had in those locations will still be there...
Platforms tested:
FreeBSD 4.10 (sleipnir)
IRIX64 6.5 (modi4)
h5committested
 
Purpose: Maintenance/bug fix
Description: On OSF1 C++ compilation failed complaining about stdint.h file
being missing.
Solution: Per Binh-Minh's suggestion, include the stdint.h file only when
__cplusplus is NOT defined. inttypes.h is used for the C++ compiler;
it is more common than stdint.h.
Platforms tested: OSF1 (lemieux), Solaris 2.8 (sol), AIX 5.1 (copper)
Misc. update:
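The include logic described above, as a small sketch:

```c
/* Per the suggestion above: C++ builds (e.g. on OSF1) get inttypes.h, which is
 * more widely available to C++ compilers, while plain C keeps using stdint.h. */
#ifdef __cplusplus
#include <inttypes.h>
#else
#include <stdint.h>
#endif
```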
 
Description: In H5O_fill_new_decode and H5O_fill_new_encode, macros UINT32DECODE
and UINT32ENCODE were used to decode and encode message size, which is ssize_t.
Solution: Changed to INT32DECODE and INT32ENCODE.
Platforms tested: Tested v1.7 on fuss - very simple change.
 
Description: In H5O_fill_new_decode, the code tries to read the message size (-1) when the fill
value is undefined for version 1. During UINT32DECODE on a 64-bit machine,
the value 0x00000000ffffffff is returned, which looks like a valid value.
Solution: Don't decode the message size if the fill value is undefined; simply assign
-1 to the message size.
Platforms tested: h5committest.
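A standalone illustration of the 64-bit symptom and the fix's effect (not HDF5 code):

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t encoded = 0xffffffffu;       /* the undefined fill value's size, -1, as 4 raw bytes        */
    int64_t  unsigned_decode = encoded;   /* what an unsigned 32-bit decode yields on a 64-bit machine  */
    int64_t  assigned        = -1;        /* the fix: skip the decode and just assign -1                */

    printf("unsigned decode: %lld  direct assignment: %lld\n",
           (long long)unsigned_decode, (long long)assigned);
    return 0;
}
```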
 
Bug fix
Description:
1 - Dataset contiguous storage cache information had a bug where it was
possible to try to access invalid cache information if the cache wasn't filled
the first time it attempted to loop through the list of offset/length vectors.
2 - Additionally, the contiguous storage cache information was being used
in certain circumstances from the chunked dataset I/O code path, which was
generally fatal since the chunk storage and contiguous storage information
were stored together in a union.
Solution:
1 - Avoid the special case for the first trip through the loop over offset/length
I/O vectors and always check for the contiguous storage sieve buffer
being NULL.
2 - Change the union containing the chunk and contiguous storage cache
information into a struct, allowing both to be used at the same time.
Platforms tested:
FreeBSD 4.10 (sleipnir)
h5committested
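Item 2 of the solution, sketched with placeholder names:

```c
/* Field and type names are hypothetical, for illustration only. */
typedef struct { unsigned char *sieve_buf;  /* ... */ } contig_cache_t;
typedef struct { void          *chunk_info; /* ... */ } chunk_cache_t;

/* Before: a union, so touching the chunked state could clobber (or misread)
 * the contiguous sieve-buffer state.  After: a struct, so both can be valid
 * at the same time. */
typedef struct {
    chunk_cache_t  chunk;
    contig_cache_t contig;
} rawdata_cache_t;
```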
 
Bug fix
Description:
Wrong value returned for error
Solution:
Return correct value
Platforms tested:
verbena
Misc. update:
 
Bug fix
Description:
Correct possible core dump when a datatype conversion function is
registered with the library after a compound datatype has been converted
(having its type conversion information cached by the library). The compound
datatype must have been created by inserting the fields in non-increasing
offset order to see the bug.
Solution:
Re-sort the fields in the compound datatypes before recalculating the
cached information when performing the conversion on them.
Platforms tested:
FreeBSD 4.10 (sleipnir)
h5committested
 
Code cleanup/bug fix
Description:
Check for _O_BINARY being defined instead of O_BINARY, since we actually use
_O_BINARY. (Note that this only affects Windows)
Platforms tested:
FreeBSD 4.10 (sleipnir)
Not tested w/h5committest
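In other words (a sketch; the EXAMPLE_OPEN_FLAGS macro is made up):

```c
#include <fcntl.h>

/* Guard on the macro that is actually used (_O_BINARY), not O_BINARY; on
 * non-Windows platforms neither is defined and the flag is simply omitted. */
#ifdef _O_BINARY
#  define EXAMPLE_OPEN_FLAGS (O_RDWR | O_CREAT | _O_BINARY)
#else
#  define EXAMPLE_OPEN_FLAGS (O_RDWR | O_CREAT)
#endif
```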
 
Bug fix (Failures when dataset size >= 1 GB, reported by Bill Loewe.)
Description:
On the IBM AIX system in 32-bit mode, if a dataset was 1 GB or
larger, when the "end" of the dataset was selected, MPI would complain
that it could not keep the upper bound of a datatype within the range of
MPI_Aint. This was because the old algorithm would derive the selection
with the extent of each row first. After all dimensions were processed,
it then calculated the start position and just displaced the whole
MPI derived type. So, the final MPI type actually covered the start
position plus the whole dataset. Since the start can be as big as
the whole dataset, this made the final derived type twice as big as 1 GB.
That would hit the 2 GB MPI_Aint range limit in 32-bit mode.
Solution:
Use a different algorithm to include the start position in the
definition of the MPI type for each dimension. When all dimensions
are processed, the MPI type represents the selection exactly.
Platforms tested:
h5committested
Misc. update:
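Back-of-the-envelope arithmetic for the old algorithm's failure (illustrative numbers only):

```c
#include <stdio.h>

int main(void)
{
    long long dataset_bytes = (1LL << 30) + (1LL << 28);  /* a dataset a bit over 1 GB      */
    long long start_offset  = dataset_bytes - 4096;        /* selection near the "end"       */

    /* Old approach: build a whole-dataset-sized type, then displace it by the
     * start position, so the required extent is roughly start + dataset size. */
    long long old_extent = start_offset + dataset_bytes;
    long long aint_max32 = (1LL << 31) - 1;                /* MPI_Aint limit in 32-bit mode  */

    printf("old extent %lld vs 32-bit MPI_Aint max %lld\n", old_extent, aint_max32);
    return 0;
}
```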
 
Bug fix
Description:
Correct buffer overrun in "multi" VFL driver that was writing past the
end of the "driver name" buffer when encoding the driver info block for the
file's superblock.
Platforms tested:
FreeBSD 4.10 (sleipnir)
Too minor to require h5committest
 
Fix to feature added yesterday
Description:
Needed an additional check on the SZIP bits-per-pixel parameter
Solution:
if (precision > 24 && precision < 31) precision = 32
if (precision > 32 && precision < 64) precision = 64
Platforms tested:
arabica,verbena,hirdls
Misc. update:
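The same check wrapped up as a small helper (the function name is illustrative; the conditions are exactly those in the solution above):

```c
static unsigned adjust_szip_pixel_bits(unsigned precision)
{
    if (precision > 24 && precision < 31)
        precision = 32;
    if (precision > 32 && precision < 64)
        precision = 64;
    return precision;
}
```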