Actually set the exit value for the test (*sigh*)
Clean up many compiler warnings and make the test return a non-zero exit
code when a failure is detected (a minimal sketch of the exit-status
pattern follows the platform list below).
Tested on:
FreeBSD/32 6.3 (duty) in debug mode
FreeBSD/64 6.3 (liberty) w/C++ & FORTRAN, in debug mode
Linux/32 2.6 (jam) w/PGI compilers, w/C++ & FORTRAN, w/threadsafe,
in debug mode
Linux/64-amd64 2.6 (smirom) w/Intel compilers w/default API=1.6.x,
w/C++ & FORTRAN, in production mode
Solaris/32 2.10 (linew) w/deprecated symbols disabled, w/C++ & FORTRAN,
w/szip filter, in production mode
Linux/64-ia64 2.6 (cobalt) w/Intel compilers, w/C++ & FORTRAN,
in production mode
Linux/64-ia64 2.4 (tg-login3) w/parallel, w/FORTRAN, in production mode
Linux/64-amd64 2.6 (abe) w/parallel, w/FORTRAN, in production mode
Mac OS X/32 10.5.6 (amazon) in debug mode
Mac OS X/32 10.5.6 (amazon) w/C++ & FORTRAN, w/threadsafe,
in production mode
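A minimal sketch of the exit-status pattern described above (names are illustrative, not the test's actual code):

    #include <stdlib.h>

    static int nerrors = 0;      /* bumped by each failing check */

    int main(void)
    {
        /* ... run checks, incrementing nerrors on each failure ... */

        /* Report failure through the process exit status so that
           "make check" and shell scripts can detect it. */
        return nerrors ? EXIT_FAILURE : EXIT_SUCCESS;
    }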
the script
tool validates the output with h5dump (comparing the current h5dump output
with the h5dump output from a pre-existing h5 file).
Re-added the byte order keyword that was removed in the last check-in.
Tested: linux, solaris
was being selected.
When reading the compression parameter keyword, the compression type read
flag was incorrectly set; removed this line of code:
    in->configOptionVector[COMPRESS] = 1;
Modified one configuration file to use the COMPRESSION-TYPE GZIP keyword.
Entered a bug fix description:
- h5import: By selecting a compression type, a big-endian byte order was
  being selected (PVN - 2009/11/3)
Tested: linux
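A simplified sketch of the corrected keyword handling; only the removed line is taken from the commit, while the COMPRESS_TYPE index and parse_compression helper are illustrative assumptions:

    if (!strcmp(keyword, "COMPRESSION-TYPE")) {
        /* Record only the compression type read from the configuration file. */
        in->compressionType = parse_compression(value);   /* hypothetical helper */
        in->configOptionVector[COMPRESS_TYPE] = 1;        /* hypothetical index */
        /* Removed: in->configOptionVector[COMPRESS] = 1;
           Setting that flag here made an unrelated option look already
           read, which is what caused the big-endian byte order bug. */
    }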
tested: linux
Bug Fix
Description:
Fixes BZ #1381. The --includedir=DIR configure option, which is used
to specify the installation location of C header files, did not work
correctly because the path was hard-coded in config/commence.am,
presumably because an older version of automake didn't know where to put
C header files. Removing this line defaults includedir to the same
directory that was previously hard-coded, and also fixes the configure
flag to allow customization of this value.
Tested:
jam, liberty
Remove another call to H5E_clear_stack() from within the library.
Clean up lots of compiler warnings.
Tested on:
Mac OS X/32 10.5.6 (amazon)
(followup on other platforms forthcoming)
Bug Fix
Description:
Removing the code from configure which strips the '-g' flag from CFLAGS
when in production mode. The current default CFLAGS in production mode
does not include '-g', as intended, but we should allow users to
override this and enable '-g' by setting the CFLAGS environment variable
if desired. Note that this applies to FCFLAGS and CXXFLAGS as well.
Tested:
kagiso, linew, liberty
string pointers belonging to the buffer in H5DSset_label and H5DSget_label. Also added freeing of the buffers in the error sections of both functions. Potential memory leaks may exist elsewhere, so this will not close the bug.
Tested:
h5committest
vista 32 VS2008
been added with r16489.
Not tested yet.
H5G_dense_iterate
Tested: Fedora 10 (too minor for full committest)
test_big_chunks_bypass_cache, to test the correctness of the data whether
or not the fill value is defined. The library should let chunks bypass the
cache depending on the size of the chunks and on whether the fill value is
to be written to them (a test sketch follows below).
Tested on jam - simple change.
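A minimal sketch of the test idea (file and dataset names are illustrative): whether a large chunk goes through the cache or bypasses it, a never-written chunk must still read back as the fill value.

    #include "hdf5.h"

    int main(void)
    {
        int     fill = -1, buf[1024];
        hsize_t dims[1] = {1024}, chunk[1] = {1024};     /* one big chunk */

        hid_t dcpl = H5Pcreate(H5P_DATASET_CREATE);
        H5Pset_chunk(dcpl, 1, chunk);
        H5Pset_fill_value(dcpl, H5T_NATIVE_INT, &fill);  /* fill value defined */

        hid_t file  = H5Fcreate("bypass.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
        hid_t space = H5Screate_simple(1, dims, NULL);
        hid_t dset  = H5Dcreate2(file, "d", H5T_NATIVE_INT, space,
                                 H5P_DEFAULT, dcpl, H5P_DEFAULT);

        /* The chunk was never written, so every element must come back as
           the fill value, cached or not. */
        H5Dread(dset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, H5P_DEFAULT, buf);

        H5Dclose(dset); H5Sclose(space); H5Pclose(dcpl); H5Fclose(file);
        return 0;
    }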
Pass the chunk "user data" to H5D_chunk_unlock(), so that chunks that
already have an address aren't reallocated.
Tested on:
FreeBSD/32 6.3 (duty) in debug mode
FreeBSD/64 6.3 (liberty) w/C++ & FORTRAN, in debug mode
Linux/64-ia64 2.4 (tg-login3) w/parallel, w/FORTRAN, in production mode
Description:
In some situations it was possible for the fill value not to be written to
parts of a chunked dataset, particularly when extending and/or shrinking.
Prior to the fix for the chunk cache (bug 1015), these bugs would have been
exceedingly rare (a minimal repro sketch follows below).
Tested: jam, smirom, linew (h5committest)
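A minimal repro sketch of the scenario (file and dataset names are illustrative): after shrinking and re-extending a chunked dataset with a defined fill value, the re-grown region must read back as fill.

    #include "hdf5.h"

    int main(void)
    {
        int     fill = 0;
        int     buf[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        hsize_t dims[1] = {8}, maxdims[1] = {H5S_UNLIMITED}, chunk[1] = {4};

        hid_t space = H5Screate_simple(1, dims, maxdims);
        hid_t dcpl  = H5Pcreate(H5P_DATASET_CREATE);
        H5Pset_chunk(dcpl, 1, chunk);
        H5Pset_fill_value(dcpl, H5T_NATIVE_INT, &fill);

        hid_t file = H5Fcreate("extend.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
        hid_t dset = H5Dcreate2(file, "d", H5T_NATIVE_INT, space,
                                H5P_DEFAULT, dcpl, H5P_DEFAULT);
        H5Dwrite(dset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, H5P_DEFAULT, buf);

        dims[0] = 4;               /* shrink away the second chunk ... */
        H5Dset_extent(dset, dims);
        dims[0] = 8;               /* ... then grow the dataset back */
        H5Dset_extent(dset, dims);

        /* Elements 4..7 must now be the fill value, not the old data. */
        H5Dread(dset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, H5P_DEFAULT, buf);

        H5Dclose(dset); H5Pclose(dcpl); H5Sclose(space); H5Fclose(file);
        return 0;
    }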
always LE on all platforms; simply added "le" to the two locations where these files are used.
Tested:
Vista 32 VS2008
big or little endian machine. Configure.in was modified to export a variable
carrying endianness information to testh5ls.sh. The script then compares the
current run with two expected outputs, one for a big-endian machine (linew
was used to generate the output) and the other for a little-endian machine
(jam was used to generate the output).
Because of the way h5ls prints types, it starts searching for NATIVE types
first. One solution would be for h5ls not to detect these native types,
using for example the same print datatype function that h5dump does; that
would make the output look the same on all platforms ("32-bit little-endian
integer" would be printed instead). The drawback is that this "native"
information would no longer be available. The other solution is to have not
one but two expected outputs and make the shell script detect the
endianness and compare with one output or the other.
Tested: h5committest
#1404). When the index was set to creation order in a query function but no
creation order index exists in the file, the library tried to build and
sort a table of all links. To optimize this, let the library use the B-tree
for link names instead (a small iteration sketch follows below).
Tested on jam. I tested the same change for v1.8 with h5committest.
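For context, a small sketch of the two link indexes involved (file and group names are illustrative); a group must track and index creation order for the creation-order index to exist in the file:

    #include "hdf5.h"
    #include <stdio.h>

    static herr_t print_link(hid_t group, const char *name,
                             const H5L_info_t *info, void *op_data)
    {
        (void)group; (void)info; (void)op_data;
        printf("%s\n", name);
        return 0;
    }

    int main(void)
    {
        hid_t file = H5Fcreate("links.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);

        /* Track and index creation order so the creation-order index exists. */
        hid_t gcpl = H5Pcreate(H5P_GROUP_CREATE);
        H5Pset_link_creation_order(gcpl, H5P_CRT_ORDER_TRACKED | H5P_CRT_ORDER_INDEXED);
        hid_t grp = H5Gcreate2(file, "g", H5P_DEFAULT, gcpl, H5P_DEFAULT);
        H5Gclose(H5Gcreate2(grp, "b", H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT));
        H5Gclose(H5Gcreate2(grp, "a", H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT));

        /* Iterate over the B-tree of link names ... */
        H5Literate(grp, H5_INDEX_NAME, H5_ITER_INC, NULL, print_link, NULL);
        /* ... and over the creation-order index. */
        H5Literate(grp, H5_INDEX_CRT_ORDER, H5_ITER_INC, NULL, print_link, NULL);

        H5Pclose(gcpl); H5Gclose(grp); H5Fclose(file);
        return 0;
    }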
(it adds an extra space at the beginning of output, for indentation) was already available for HL and caused compilation problems on AIX. Replaced the name with TESTING_2.
Tested: h5committest
Platforms tested: Mac OS X and AIX (by Ed) (minor fix)
causing a compilation error on AIX.
Tested: h5committest
tested: h5committest
Cache chunk info for newly created chunk.
Tested on:
FreeBSD/32 6.3 (duty)
(Tests included in upcoming revise_chunks branch changes)
by rev 16489.
replacing all instances with long long.
Tested:
h5committest
Fedora 10 x64
Vista 32, VS2005, IVF101
XP32, Cygwin
Clean up code and eliminate resource leaks. Also avoid "null" I/O when a
chunk doesn't exist and we can skip it.
Tested on:
Mac OS X/32 10.5.6 (amazon)
(too minor to require h5committest)
Clean up (i.e. remove) more internal calls to H5E_clear_stack(), along with
some other minor code cleanups.
Tested on:
Mac OS X/32 10.5.6 (amazon)
(too minor to require h5committest)
Description:
The meaning of the "nbytes" field in H5D_rdcc_t was not clear: some places
assumed it was the maximum size of the chunk cache, while others assumed it
was the current size of the chunk cache. The end result was that only one
chunk could be held in the cache at a time. This field has been replaced by
"nbytes_max" and "nbytes_used". Performance of cached I/O should improve
greatly.
Tested: jam, smirom (h5committest)
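A short usage sketch: with maximum and currently-used bytes tracked separately, the chunk cache can actually hold several chunks up to the configured limit, e.g. as set through H5Pset_cache (file name is illustrative):

    #include "hdf5.h"

    int main(void)
    {
        /* Ask for a 4 MiB raw data chunk cache with 521 hash slots
           (the mdc_nelmts argument is ignored in the 1.8 API). */
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_cache(fapl, 0, 521, 4 * 1024 * 1024, 0.75);

        hid_t file = H5Fcreate("cache.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);
        /* ... chunked dataset I/O can now keep several chunks cached ... */
        H5Fclose(file);
        H5Pclose(fapl);
        return 0;
    }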
Tested: kate
through multiple file handles.
Description:
An attribute's "oloc" field, which specifies the file it resides in, was
located in the attribute's "shared" structure. So when an attribute was
opened multiple times, all of the handles for that attribute pointed to the
same file id, even if different file ids were used to open the different
handles. The "oloc" has been moved to the top-level H5A_t struct.
Tested: jam, smirom (h5committest)
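A minimal sketch of the situation the fix addresses (file and attribute names are illustrative): the same attribute opened through two different file handles should be associated with the handle it was opened through.

    #include "hdf5.h"

    int main(void)
    {
        /* Create a file with one root-group attribute. */
        hid_t file  = H5Fcreate("attr.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
        hid_t space = H5Screate(H5S_SCALAR);
        hid_t attr  = H5Acreate2(file, "a", H5T_NATIVE_INT, space,
                                 H5P_DEFAULT, H5P_DEFAULT);
        H5Aclose(attr); H5Sclose(space); H5Fclose(file);

        /* Open the file twice and open the same attribute through each
           handle.  With the fix, each attribute handle refers to the file
           handle it was opened through, instead of all sharing one file id. */
        hid_t f1 = H5Fopen("attr.h5", H5F_ACC_RDONLY, H5P_DEFAULT);
        hid_t f2 = H5Fopen("attr.h5", H5F_ACC_RDONLY, H5P_DEFAULT);
        hid_t a1 = H5Aopen(f1, "a", H5P_DEFAULT);
        hid_t a2 = H5Aopen(f2, "a", H5P_DEFAULT);

        H5Aclose(a1); H5Aclose(a2);
        H5Fclose(f1); H5Fclose(f2);
        return 0;
    }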
Description:
Since the new object header format, it has been possible to create a
situation where none of the messages are large enough to hold a
continuation message and there are no null messages to merge with. This
makes it impossible to add a new object header chunk. This case is now
handled by moving every message in the last chunk to the newly allocated
one, except for null messages, which are deleted.
Tested: jam, smirom (h5committest)
Description:
When an attribute was created with a datatype or dataspace that was shared
in the same object header the attribute was in, the attribute could not be
deleted. Changes were made to ensure that the attribute can be deleted both
when it is in the object header and when it is shared in the heap. Object
header message decode routines now take an "open_oh" parameter to enable
them to avoid opening the same object header twice.
Tested: jam, smirom (h5committest)
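A small sketch of one related configuration (names are illustrative, and this is the committed-datatype variant, not necessarily the exact layout the fix targets): an attribute whose datatype is a shared message must still be deletable.

    #include "hdf5.h"

    int main(void)
    {
        hid_t file = H5Fcreate("shared.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);

        /* Commit a datatype, then create an attribute that uses it, so the
           attribute's datatype message is stored as a shared message. */
        hid_t dtype = H5Tcopy(H5T_NATIVE_INT);
        H5Tcommit2(file, "shared_type", dtype, H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

        hid_t space = H5Screate(H5S_SCALAR);
        hid_t attr  = H5Acreate2(file, "a", dtype, space,
                                 H5P_DEFAULT, H5P_DEFAULT);
        H5Aclose(attr);

        /* Deleting the attribute must succeed even though its datatype
           is shared. */
        H5Adelete(file, "a");

        H5Sclose(space); H5Tclose(dtype); H5Fclose(file);
        return 0;
    }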
arguments for H5Dopen1 when using the v1.6 compatibility flag. Trivial change; tested on smirom and jam.
Updated the script file from rev #16461.
Tested:
Vista32, XP64 - VS2005, VS2008
than the cache size and isn't allocated on disk, the library still loaded
it into the cache, which is redundant. Changed it to bypass the cache, and
added a test in dsets.c (a conceptual sketch of the decision follows below).
Tested on jam and smirom.
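A conceptual sketch of the decision, not the library's actual code (the two flag parameters stand in for the library's internal checks):

    /* Decide whether a chunk access should bypass the chunk cache. */
    static int bypass_chunk_cache(size_t chunk_size, size_t cache_max_bytes,
                                  int allocated_on_disk, int fill_defined)
    {
        /* A chunk that fits in the cache is cached as usual. */
        if (chunk_size <= cache_max_bytes)
            return 0;
        /* An unallocated chunk with a defined fill value still needs the
           fill value applied, which constrains skipping the cache. */
        if (!allocated_on_disk && fill_defined)
            return 0;
        /* Too big to ever stay cached: go straight to disk. */
        return 1;
    }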
than the cache size and isn't allocated on disk, the library still loaded
it into the cache, which is redundant. Changed it to bypass the cache, and
added a test in dsets.c.
Tested on jam and smirom.
members was not done.
Solution: for compound types, recursively apply that check.
Two new cases are added:
1) The compound types have a different number of members. The message
   printed is
   <obj1> has X members <obj2> has Y members
   where X and Y are the number of members of each compound type being
   compared.
2) The compound types have members that are not comparable (for example a
   double and an int at the same index). In this case the message
   Comparison not possible: object1 is of class1 and object2 is of class2
   is replaced with
   Comparison not possible: object1 has a class1 and object2 has a class2
Modified the test generator program to include these 2 cases.
Added a shell run for these 2 cases. A sketch of the recursive check
follows below.
Tested: windows, h5committest
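A minimal sketch of the recursive comparability check (not h5diff's actual code):

    #include "hdf5.h"

    /* Two datatypes are comparable only if their classes match and, for
       compound types, they have the same number of members and each pair
       of member types is itself comparable. */
    static int types_comparable(hid_t t1, hid_t t2)
    {
        int i, n1, n2;

        if (H5Tget_class(t1) != H5Tget_class(t2))
            return 0;                 /* "object1 is of class1 ..." case */
        if (H5Tget_class(t1) == H5T_COMPOUND) {
            n1 = H5Tget_nmembers(t1);
            n2 = H5Tget_nmembers(t2);
            if (n1 != n2)
                return 0;             /* "<obj1> has X members ..." case */
            for (i = 0; i < n1; i++) {
                hid_t m1 = H5Tget_member_type(t1, (unsigned)i);
                hid_t m2 = H5Tget_member_type(t2, (unsigned)i);
                int   ok = types_comparable(m1, m2);
                H5Tclose(m1);
                H5Tclose(m2);
                if (!ok)
                    return 0;
            }
        }
        return 1;
    }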
Tested: Notepad
The failure was caused by some overactive sanity checking code in
unlock_entry(). In essence, the code did not consider the possibility
that, under certain very unusual circumstances, an entry could be flushed
to disk during the H5AC_unprotect() call. Instead, it simply failed
if a dirty entry was marked clean after the call to H5AC_unprotect().
This bug in the test code was exposed by recent changes to the default
cache configuration made as part of the "metadata blizzard" bug fix.
Fixed the bug by adding code to detect when an entry is flushed during
the call to H5AC_unprotect(), and to not trigger a failure if a dirty
entry is marked clean after a call to H5AC_unprotect() when the entry has
been flushed (see the sketch below).
In passing, also found and fixed another test bug in which expunged
entries were erroneously marked as dirty in the test code's independent
register of entry status.
Tested parallel on Phoenix (AMD64 Linux) and Jam. Also ran t_cache
manually hundreds of times looking for intermittent failures.
Larry kindly tested (parallel) on Mercury.
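A toy model of the corrected sanity check (all names are illustrative, not the test code's actual fields):

    /* Toy model of the corrected check in unlock_entry(). */
    struct entry {
        int is_dirty;                  /* dirty state after unprotect */
        int flushed_during_unprotect;  /* set if unprotect wrote it out */
    };

    /* Returns 1 only for a genuinely unexpected transition: a dirty entry
       that came back clean WITHOUT having been flushed during unprotect. */
    static int bad_transition(int was_dirty, const struct entry *e)
    {
        return was_dirty && !e->is_dirty && !e->flushed_during_unprotect;
    }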