Purpose: Bug fix
Description: On the Cray T90IEEE, the compact storage test (in test_misc8)
             fails because the dataset dimensions are too big; the dataset
             will not fit into the object header message.
Solution: Reduced the dimension sizes from 100 to 50.
Platforms tested: h5committest on arabica and modi4; verbena failed
                  because of the F90 license problem, so I tested verbena
                  by hand (C only). Also tested on the Cray T90IEEE.
Misc. update:
Update
Description:
    Updated the copyright statement.
Platforms tested:
    Linux (this change only affects comments, so I just checked that the
    modules still compile)
Misc. update:
Bug Fix
Description:
    Calling H5Fopen with the core VFL driver, but without the
    H5F_ACC_CREAT flag, went ahead and created a memory file anyway.
Solution:
    Check for the H5F_ACC_CREAT flag before allowing the memory file to be
    created.
Platforms tested:
    FreeBSD 4.7 (sleipnir)
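A rough sketch of the access pattern this fix affects (not the actual test code; the file names are invented). With the core (in-memory) VFL driver selected through the file-access property list, H5Fopen on a file that does not exist should now fail instead of silently creating a memory file; creating one explicitly still goes through H5Fcreate.

    #include <hdf5.h>
    #include <stdio.h>

    int main(void)
    {
        /* File-access property list that selects the core (memory) VFL driver */
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_fapl_core(fapl, 1024 * 1024 /* increment */, 0 /* no backing store */);

        /* Before the fix, this could quietly create an in-memory "file" even
         * though H5F_ACC_CREAT was never requested; afterwards it fails when
         * "missing.h5" (an invented name) does not exist. */
        hid_t file = H5Fopen("missing.h5", H5F_ACC_RDWR, fapl);
        if (file < 0)
            printf("H5Fopen correctly refused to create a new memory file\n");
        else
            H5Fclose(file);

        /* Creating an in-memory file explicitly still works */
        hid_t newfile = H5Fcreate("scratch.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);
        H5Fclose(newfile);
        H5Pclose(fapl);
        return 0;
    }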
Code Cleanup & New Tests
Description:
    tmisc.c:
        Switched from the H5_HAVE_COMPRESSION flag to H5_HAVE_FILTER_DEFLATE.
    dsets.c:
        Switched from the H5_HAVE_COMPRESSION flag to H5_HAVE_FILTER_DEFLATE.
        Refactored the I/O filter tests so that new filters can be added more
        easily.
        Added tests for the shuffle and deflate+shuffle I/O filters (when the
        filters are enabled).
        Added a test for creating a new dataset with a filter that is not
        available.
        Added a test for attempting to read a dataset created with a filter
        that is not available.
Platforms tested:
    h5committest {arabica (fortran), eirene (fortran, C++),
    modi4 (parallel, fortran)}
    FreeBSD 4.7 (sleipnir)
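A rough sketch of the kind of filter setup these tests exercise (not taken from dsets.c or tmisc.c; the dataset name and sizes are invented, and the 1.6-era five-argument H5Dcreate is assumed). Filters apply per chunk, so the dataset must use chunked layout, and the deflate filter is only requested when the library was built with it.

    #include <hdf5.h>

    /* Create a chunked dataset with shuffle + deflate when deflate was
     * compiled in.  Assumes an open file "file". */
    static hid_t create_filtered_dset(hid_t file)
    {
        hsize_t dims[2]  = {100, 200};
        hsize_t chunk[2] = {10, 20};
        hid_t   space    = H5Screate_simple(2, dims, NULL);
        hid_t   dcpl     = H5Pcreate(H5P_DATASET_CREATE);

        H5Pset_chunk(dcpl, 2, chunk);        /* filters require chunked layout */

    #ifdef H5_HAVE_FILTER_DEFLATE            /* the macro the tests now check */
        H5Pset_shuffle(dcpl);                /* byte-shuffle before compressing */
        H5Pset_deflate(dcpl, 6);             /* gzip level 6 */
    #endif

        hid_t dset = H5Dcreate(file, "filtered", H5T_NATIVE_INT, space, dcpl);

        H5Pclose(dcpl);
        H5Sclose(space);
        return dset;
    }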
API name change
Description:
Change all "space time" references to "alloc time", including API functions
and macro definitions, etc.
Platforms tested:
FreeBSD 4.6 (sleipnir) w/C++
Solaris 2.7 (arabica) w/FORTRAN
IRIX64 6.5 (modi4) w/parallel & FORTRAN
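After this rename, the property is set with H5Pset_alloc_time and the H5D_ALLOC_TIME_* values; a minimal sketch:

    #include <hdf5.h>

    /* Request early space allocation for a dataset's raw data, using the
     * renamed "alloc time" property (formerly the "space time" property). */
    hid_t make_early_alloc_dcpl(void)
    {
        hid_t dcpl = H5Pcreate(H5P_DATASET_CREATE);

        /* Allocate file space at dataset creation instead of on first write */
        H5Pset_alloc_time(dcpl, H5D_ALLOC_TIME_EARLY);

        /* Other values: H5D_ALLOC_TIME_LATE, H5D_ALLOC_TIME_INCR,
         * H5D_ALLOC_TIME_DEFAULT */
        return dcpl;
    }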
Code cleanup/More tests
Description:
Cleaned up some compiler warnings and wrote additional tests for space
allocation and storage size routines.
Platforms tested:
FreeBSD 4.6 (sleipnir) w/serial & parallel. Will be testing on IRIX64
6.5 (modi4) in serial & parallel shortly.
Purpose:
    bug fix
Description:
    The H5Dget_storage_size test doesn't handle the case where no
    compression is configured into the library.
Platforms tested:
    IRIX 6.5
Regression test for bug fix
Description:
    Adjusted the selection so that chunked data must be read from pre-allocated
    chunks with filters, to verify that the filter is applied correctly.
Platforms tested:
    FreeBSD 4.6 (sleipnir) w/parallel
Purpose:
    Design for compact dataset
Description:
    A compact dataset's raw data is stored in the dataset layout header
    message.
Platforms tested:
    arabica, eirene.
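A sketch of what a compact dataset looks like at the API level (assuming the 1.6-era five-argument H5Dcreate; the dataset name and sizes are invented). Because the raw data is stored directly in the layout header message, a compact dataset has to stay small -- roughly under 64 KB of raw data.

    #include <hdf5.h>

    /* Create a small dataset stored compactly, i.e. the raw data is kept in
     * the dataset's layout header message instead of a separate data block.
     * Assumes an open file "file". */
    static hid_t create_compact_dset(hid_t file)
    {
        hsize_t dims[2] = {50, 50};                  /* small enough to fit */
        hid_t   space   = H5Screate_simple(2, dims, NULL);
        hid_t   dcpl    = H5Pcreate(H5P_DATASET_CREATE);

        H5Pset_layout(dcpl, H5D_COMPACT);            /* store data in the header */

        hid_t dset = H5Dcreate(file, "compact_data", H5T_NATIVE_SHORT, space, dcpl);

        H5Pclose(dcpl);
        H5Sclose(space);
        return dset;
    }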
Additional regression tests & bug fixes
Description:
    There was no testing for the H5Dget_storage_size function, and it seemed to
    be having problems with compressed, chunked datasets, so wrote some tests
    to verify that it's working correctly.
    Also fixed the case of allocating storage early for chunked datasets.
Platforms tested:
    FreeBSD 4.6 (sleipnir) serial & parallel
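A sketch of the kind of check such tests can make (not the actual test code; assuming the 1.6-era H5Dcreate, with an invented dataset name and sizes). With early allocation, H5Dget_storage_size should report the space allocated for all chunks as soon as the dataset is created:

    #include <hdf5.h>
    #include <stdio.h>

    /* Check how much file space is allocated for a chunked dataset that uses
     * early allocation.  Assumes an open file "file". */
    static void check_storage(hid_t file)
    {
        hsize_t dims[1]  = {200};
        hsize_t chunk[1] = {50};
        hid_t   space    = H5Screate_simple(1, dims, NULL);
        hid_t   dcpl     = H5Pcreate(H5P_DATASET_CREATE);

        H5Pset_chunk(dcpl, 1, chunk);
        H5Pset_alloc_time(dcpl, H5D_ALLOC_TIME_EARLY);   /* allocate chunks now */

        hid_t dset = H5Dcreate(file, "early_chunks", H5T_NATIVE_INT, space, dcpl);

        /* With early allocation all four chunks exist immediately, so this
         * should already report 200 * sizeof(int) bytes in the unfiltered case. */
        printf("allocated storage: %llu bytes\n",
               (unsigned long long)H5Dget_storage_size(dset));

        H5Dclose(dset);
        H5Pclose(dcpl);
        H5Sclose(space);
    }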
Bug Fix
Description:
    H5Dcreate and H5Tcommit allow "empty" compound and enumerated types (i.e.
    ones with no members) to be stored in the file, but this causes an
    assertion failure and is somewhat vapid.
Solution:
    Check that the datatype "makes sense" before using it in H5Dcreate and
    H5Tcommit.
Platforms tested:
    FreeBSD 4.5 (sleipnir)
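A sketch of the now-rejected case (assuming the 1.6-era three-argument H5Tcommit; the type name is invented). A compound type with no members inserted cannot describe any data, so committing it should now fail cleanly instead of tripping an assertion:

    #include <hdf5.h>
    #include <stdio.h>

    /* Try to commit an "empty" compound datatype -- one with no members.
     * Assumes an open file "file". */
    static void empty_compound_check(hid_t file)
    {
        /* Reserve 8 bytes but never insert any members */
        hid_t cmpd = H5Tcreate(H5T_COMPOUND, (size_t)8);

        /* After the fix this is rejected instead of asserting */
        if (H5Tcommit(file, "empty_type", cmpd) < 0)
            printf("empty compound type correctly rejected\n");

        H5Tclose(cmpd);
    }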
Test Bug Fix
Description:
    Under certain [obscure] circumstances, an object header would get paged out
    of the metadata cache.  If it was then accessed again and brought back into
    the cache, immediately had additional metadata added to it (usually an
    attribute, or perhaps an object added to a group), needed to be extended
    with a continuation message, had no room in any existing object header
    chunk for that continuation message, and so had to move an existing object
    header message into the new object header chunk (I told you it was obscure
    :-), then the object header message moved to the new chunk (not the new
    metadata being added) would get corrupted.  *whew* :-)
Solution:
    Actually copy the "raw" object header message information of the message
    being moved to the new chunk, instead of relying on the "native" object
    header message information being re-encoded when the object header is
    flushed.  This is necessary because when an object header is paged out of
    the metadata cache and subsequently brought back in, the "native"
    information pointer in memory is reset to NULL and only the "raw"
    information exists.
    [This additional testing doesn't actually trigger the bug, which needs
    _lots_ of objects to be created and accessed, but it does exercise the
    object header continuation code more than the other tests in the library.]
Platforms tested:
    Solaris 2.7 (arabica) & FreeBSD 4.5 (sleipnir)
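The fix itself is internal to the object header code, but the continuation machinery can be exercised from user code roughly like this (a sketch, assuming the 1.6-era H5Gcreate/H5Acreate signatures; the group name and attribute count are invented): keep adding metadata to one object until its header spills into additional chunks.

    #include <hdf5.h>
    #include <stdio.h>

    /* Grow an object header by attaching many attributes, which eventually
     * forces the header into additional chunks via continuation messages.
     * Assumes an open file "file". */
    static void grow_object_header(hid_t file)
    {
        hid_t space = H5Screate(H5S_SCALAR);
        hid_t group = H5Gcreate(file, "victim", (size_t)0);
        char  name[32];
        int   i, value = 0;

        for (i = 0; i < 100; i++) {          /* invented count */
            sprintf(name, "attr%03d", i);
            hid_t attr = H5Acreate(group, name, H5T_NATIVE_INT, space, H5P_DEFAULT);
            H5Awrite(attr, H5T_NATIVE_INT, &value);
            H5Aclose(attr);
        }

        H5Gclose(group);
        H5Sclose(space);
    }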
Bug fix
Description:
    When compound & VL datatypes are nested several levels deep, the data in
    the nested compound datatypes incorrectly shares the same "background
    buffer", causing data corruption when the data is written to the file.
Solution:
    Allocate a separate background buffer for each level of the nested types
    being converted.  (Also allocate temporary background buffers for array
    datatypes, where this sort of problem could occur as well.)
    Added more regression tests to check for these errors.
Platforms tested:
    FreeBSD 4.5 (sleipnir) & Solaris 2.6 (baldric)
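A sketch of the nesting pattern involved (assuming the 1.6-era API; the struct and member names are invented): a compound type containing a variable-length field whose base type is itself a compound, which is the case that now gets its own background buffer at each level during conversion.

    #include <hdf5.h>

    /* Inner record stored inside a variable-length field */
    typedef struct {
        int    id;
        double weight;
    } inner_t;

    /* Outer record: a scalar plus a VL sequence of inner records */
    typedef struct {
        int   key;
        hvl_t items;       /* variable-length sequence of inner_t */
    } outer_t;

    /* Build the nested compound/VL memory datatype */
    static hid_t make_nested_type(void)
    {
        hid_t inner = H5Tcreate(H5T_COMPOUND, sizeof(inner_t));
        H5Tinsert(inner, "id",     HOFFSET(inner_t, id),     H5T_NATIVE_INT);
        H5Tinsert(inner, "weight", HOFFSET(inner_t, weight), H5T_NATIVE_DOUBLE);

        hid_t vl = H5Tvlen_create(inner);   /* VL sequence of the inner compound */

        hid_t outer = H5Tcreate(H5T_COMPOUND, sizeof(outer_t));
        H5Tinsert(outer, "key",   HOFFSET(outer_t, key),   H5T_NATIVE_INT);
        H5Tinsert(outer, "items", HOFFSET(outer_t, items), vl);

        /* H5Tinsert copies member types, so the intermediates can be closed */
        H5Tclose(vl);
        H5Tclose(inner);
        return outer;
    }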
Bug Fix
Description:
    Regression test for the following bug:
    The H5Gget_objinfo() function was not setting the 'fileno' field in the
    H5G_stat_t struct passed in.
Solution:
    Added a "file serial number" to each file currently open in the library
    and put that in the 'fileno' field.  If a file is opened twice (with
    H5Fopen) and the VFL driver detects that it is the same file (i.e. the
    two file structures share the same "shared file info" in the library's
    memory structures), the two handles will have the same serial number.
    This serial number has two drawbacks:
        - If a VFL driver doesn't/can't detect that two calls to H5Fopen with
          the same file actually _are_ the same file, each will get a
          different serial number.
        - If the same file is closed and re-opened, the serial number will be
          different.
    It would be possible to fix the second drawback for many VFL drivers, but
    it would be a lot of effort and probably isn't worth it until we've got a
    good reason to do it.  Dunno if we'll ever be able to fix the first
    drawback...
Platforms tested:
    FreeBSD 4.5 (sleipnir)
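A sketch of what the new field makes possible (assuming the 1.6-era H5Gget_objinfo and an H5G_stat_t whose 'fileno' member is a small array; the file name is invented). Two handles obtained by opening the same file twice should now report the same serial number, subject to the first drawback above:

    #include <hdf5.h>
    #include <stdio.h>
    #include <string.h>

    /* Open the same file twice and compare the 'fileno' serial numbers that
     * H5Gget_objinfo() now fills in for the root group of each handle. */
    int main(void)
    {
        hid_t f1 = H5Fopen("data.h5", H5F_ACC_RDONLY, H5P_DEFAULT);
        hid_t f2 = H5Fopen("data.h5", H5F_ACC_RDONLY, H5P_DEFAULT);
        H5G_stat_t s1, s2;

        H5Gget_objinfo(f1, "/", 1, &s1);
        H5Gget_objinfo(f2, "/", 1, &s2);

        if (memcmp(&s1.fileno, &s2.fileno, sizeof(s1.fileno)) == 0)
            printf("same underlying file (same serial number)\n");
        else
            printf("driver could not tell these are the same file\n");

        H5Fclose(f2);
        H5Fclose(f1);
        return 0;
    }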
Bug Fix
Description:
    If a non-zero fill-value is used for a chunked dataset, reading a
    not-yet-written chunk with an "all" selection (or a contiguous hyperslab
    selection) returns zeros for those elements instead of the user's
    fill-value.
Solution:
    Fixed the I/O code to pass the fill-value down to the "optimized" I/O
    routines, so it is available to fill the user's buffer with.
Platforms tested:
    FreeBSD 4.5 (sleipnir)
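A sketch of the case this fixes (assuming the 1.6-era H5Dcreate; the dataset name, sizes, and fill-value are invented). Reading from chunks that have never been written should now return the user's fill-value rather than zeros:

    #include <hdf5.h>
    #include <stdio.h>

    /* Read from a chunked dataset whose chunks have never been written; every
     * element should come back as the fill-value.  Assumes an open file "file". */
    static void fill_value_check(hid_t file)
    {
        hsize_t dims[1]  = {100};
        hsize_t chunk[1] = {10};
        int     fill     = -99;                /* non-zero fill-value */
        int     buf[100];
        hid_t   space    = H5Screate_simple(1, dims, NULL);
        hid_t   dcpl     = H5Pcreate(H5P_DATASET_CREATE);

        H5Pset_chunk(dcpl, 1, chunk);
        H5Pset_fill_value(dcpl, H5T_NATIVE_INT, &fill);

        hid_t dset = H5Dcreate(file, "unwritten", H5T_NATIVE_INT, space, dcpl);

        /* "All" selection over chunks that do not exist in the file yet:
         * after the fix this yields -99 everywhere instead of zeros. */
        H5Dread(dset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, H5P_DEFAULT, buf);
        printf("buf[0] = %d\n", buf[0]);

        H5Dclose(dset);
        H5Pclose(dcpl);
        H5Sclose(space);
    }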
Add regression tests
Description:
Added regression test for using a VL-datatype in two separate files.
Platforms tested:
FreeBSD 4.5 (sleipnir)
Bug Fix
Description:
    When file space was returned to the file space free-list for reuse,
    raw data allocations that used space from the free-list would occasionally
    overlap with the metadata accumulator and get overwritten with the cached
    information in the accumulator, corrupting the data.
Solution:
    Check whether the space about to be recycled from the free-list is going
    to be used for raw data and also overlaps with the metadata accumulator
    cache, and avoid reusing space that meets both criteria.
    This fixes bug #701.
Platforms tested:
    FreeBSD 4.5 (sleipnir)