| Commit message | Author | Age | Files | Lines |
| |
same property in property list multiple times.
Fix that bug.
Tested with h5committest.
| |
Clean up more FUNC_ENTER/FUNC_LEAVE macros and move H5D & H5T code toward
the final design (as exemplified by the H5EA & H5FA code).
Tested on:
Mac OSX/64 10.7.3 (amazon) w/debug & parallel
| |
Tested: windows under debug
| |
Tested: local linux
| |
Correct several errors in fractal heap code: root indirect block was
getting pinned/protected more than once, "single" free space sections weren't
getting "re-parented" correctly when the heap transitioned between having a
root indirect block and a root direct block, and several related issues. Also
cleaned up some warnings in library/tests.
Tested on:
FreeBSD/32 8.2 (loyalty) w/gcc4.6, w/C++ & FORTRAN, in debug mode
FreeBSD/64 8.2 (freedom) w/gcc4.6, w/C++ & FORTRAN, in debug mode
Linux/32 2.6 (jam) w/PGI compilers, w/default API=1.8.x,
w/C++ & FORTRAN, w/threadsafe, in debug mode
Linux/64-amd64 2.6 (koala) w/Intel compilers, w/default API=1.6.x,
w/C++ & FORTRAN, in production mode
Mac OSX/64 10.7.3 (amazon) w/debug
| |
Add 'H5O_mcdt_search_cb_t' to bin/trace script and re-run the
bin/reconfigure script.
Tested on:
FreeBSD/32 8.2 (loyalty) w/gcc4.6, w/C++ & FORTRAN, in debug mode
FreeBSD/64 8.2 (freedom) w/gcc4.6, w/C++ & FORTRAN, in debug mode
Linux/32 2.6 (jam) w/PGI compilers, w/default API=1.8.x,
w/C++ & FORTRAN, w/threadsafe, in debug mode
Linux/64-amd64 2.6 (koala) w/Intel compilers, w/default API=1.6.x,
w/C++ & FORTRAN, in production mode
| |
Merge "file image" changes from feature branch back to trunk.
Tested on:
Mac OSX/64 10.7.3 (amazon) w/debug
(h5committest upcoming)
| |
Description:
When copying an object with attribute creation order tracked, the attribute
creation order was not copied correctly to the destination file, causing an
error if the creation order was also indexed (due to attempting to insert
duplicate keys) or incorrect creation orders otherwise. Fixed to copy the
creation order correctly.
Also fixed the attribute character set not being copied, and fixed an issue
where an attribute opened with H5Aopen (or similar, but not by_idx), from an
object using the latest format but without creation order being tracked, would
always report the creation order as 0 (and mark it as valid).
Tested: jam, koala, ostrich (h5committest), durandal
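A minimal sketch of the scenario this fixes, using hypothetical file and object names: copy a group whose attribute creation order is tracked and indexed, then check the copied attribute's creation order and character set.

    #include "hdf5.h"

    int main(void)
    {
        hsize_t    dim = 1;
        int        val = 0;
        H5A_info_t ainfo;

        hid_t src  = H5Fcreate("src.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
        hid_t gcpl = H5Pcreate(H5P_GROUP_CREATE);

        /* Track and index attribute creation order on the source group */
        H5Pset_attr_creation_order(gcpl, H5P_CRT_ORDER_TRACKED | H5P_CRT_ORDER_INDEXED);
        hid_t grp   = H5Gcreate2(src, "g", H5P_DEFAULT, gcpl, H5P_DEFAULT);
        hid_t space = H5Screate_simple(1, &dim, NULL);

        /* Two attributes, so the destination must preserve distinct creation orders */
        hid_t a0 = H5Acreate2(grp, "a0", H5T_NATIVE_INT, space, H5P_DEFAULT, H5P_DEFAULT);
        hid_t a1 = H5Acreate2(grp, "a1", H5T_NATIVE_INT, space, H5P_DEFAULT, H5P_DEFAULT);
        H5Awrite(a0, H5T_NATIVE_INT, &val);
        H5Awrite(a1, H5T_NATIVE_INT, &val);

        /* Copy the group into a second file and inspect the copied attribute */
        hid_t dst = H5Fcreate("dst.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
        H5Ocopy(src, "g", dst, "g", H5P_DEFAULT, H5P_DEFAULT);
        H5Aget_info_by_name(dst, "g", "a1", &ainfo, H5P_DEFAULT);
        /* With the fix, ainfo.corder is 1 and ainfo.cset matches the source attribute */

        H5Aclose(a0); H5Aclose(a1); H5Sclose(space); H5Gclose(grp);
        H5Pclose(gcpl); H5Fclose(src); H5Fclose(dst);
        return 0;
    }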
| |
Bring the "merge committed datatypes during H5Ocopy" feature from the branch to
the trunk. (It also includes some minor bug fixes.)
Tested on:
Mac OSX/64 10.7.3 (amazon) w/debug
(h5committest coming up)
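A hedged sketch of how the merged feature is driven from the object-copy property list; the dataset name and search path are illustrative, and the calls below are the ones this feature is understood to add.

    #include "hdf5.h"

    /* Copy "dset" from src_file to dst_file, merging any committed datatype it
     * uses with an equal committed datatype already present in dst_file. */
    static herr_t copy_with_dtype_merge(hid_t src_file, hid_t dst_file)
    {
        hid_t  ocpypl = H5Pcreate(H5P_OBJECT_COPY);
        herr_t ret;

        /* Reuse a matching committed datatype in the destination instead of
         * writing a duplicate copy of it */
        H5Pset_copy_object(ocpypl, H5O_COPY_MERGE_COMMITTED_DTYPE_FLAG);

        /* Optionally point the search at a known location first (illustrative path) */
        H5Padd_merge_committed_dtype_path(ocpypl, "/shared_types/int32");

        ret = H5Ocopy(src_file, "dset", dst_file, "dset", ocpypl, H5P_DEFAULT);
        H5Pclose(ocpypl);
        return ret;
    }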
| |
the object's
flush class action to ensure that cached data is flushed so that H5Ocopy will get
the correct data. (HDFFV-7853)
| |
a lack of sparse file support.
Minor change: tested on 64-bit Windows 7
| |
Minor change: tested on jam (test not skipped) and Mac OS-X Lion (test skipped
due to HFS not supporting sparse files).
| |
Minor code safety issue in test/fheap.c and whitespace in other files.
Tested on:
Mac OSX/64 10.7.3 (amazon) w/debug
(Too minor to require h5committest)
| |
VFD is set).
This occurs due to the istore test creating very large files on systems which
do not have POSIX-like sparse file semantics. The large amount of I/O causes
the test to run for a very long period of time.
The fix was to copy the "big" test's sparse file check and only run the
largest sparse file test when POSIX-like sparse file semantics are found.
Tested on:
jam (nfs)
ostrich (nfs)
loyalty (ufs)
64-bit linux VM (ext4)
64-bit OS-X Lion (hfs, detected as not POSIX sparse)
64-bit Windows 7 (NTFS, detected as not POSIX sparse)
The OS-X failure to pass the sparse check is likely an error and will be
entered as a new bug.
| |
operations like x*-100.
The parser mistook the "-" as subtraction. I fixed it and also fixed another problem
with some special cases like 100-x and 2/x.
Tested on jam, koala, and ostrich.
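These expressions belong to the data transform feature of the dataset transfer property list; a small sketch of the cases above, assuming an open integer dataset dset and a suitably sized buffer buf:

    #include "hdf5.h"

    static void read_with_transforms(hid_t dset, int *buf)
    {
        hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);

        /* The "-" here is a sign, not subtraction: each element is scaled by -100 */
        H5Pset_data_transform(dxpl, "x*-100");
        H5Dread(dset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, dxpl, buf);

        /* The other special cases mentioned above */
        H5Pset_data_transform(dxpl, "100-x");
        H5Dread(dset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, dxpl, buf);

        H5Pset_data_transform(dxpl, "2/x");
        H5Dread(dset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, dxpl, buf);

        H5Pclose(dxpl);
    }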
| |
Description:
When using the new object header format, it was possible for corruption to occur
if the first object header chunk changed size such that the length of the "chunk
0 size" field changed. This only occurred if there were messages that had not
been decoded. The original algorithm that changed the object header chunk size
marked all messages as dirty, causing those that had not been decoded to have
both the raw and native form invalidated. Changed the algorithm to avoid
marking messages dirty and added assertions to catch the case where messages
are dirtied without being decoded (or recently created) first.
Tested: jam, koala, ostrich (h5committest), durandal
| |
Description:
When using the new object header format and adding an attribute with a size near
64K, it was possible for file corruption to occur. This happened only if the
first object header chunk was smaller than 256 bytes and then grew to larger
than 64K after the attribute was added.
Tested: ostrich, jam, koala (h5committest), durandal
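A minimal sketch of the failing pattern; the file name and sizes are illustrative. With latest-format bounds the object header starts small, then an attribute close to 64 KiB pushes the first header chunk past the 64 KiB boundary.

    #include <stdlib.h>
    #include "hdf5.h"

    int main(void)
    {
        hsize_t dim  = 65000;      /* attribute raw data just under 64 KiB */
        char   *buf  = calloc((size_t)dim, 1);
        hid_t   fapl = H5Pcreate(H5P_FILE_ACCESS);

        /* Latest file format => new-format object headers */
        H5Pset_libver_bounds(fapl, H5F_LIBVER_LATEST, H5F_LIBVER_LATEST);

        hid_t file  = H5Fcreate("big_attr.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);
        hid_t grp   = H5Gcreate2(file, "g", H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
        hid_t space = H5Screate_simple(1, &dim, NULL);

        /* Adding this attribute grows the first (small) header chunk past 64 KiB */
        hid_t attr = H5Acreate2(grp, "big", H5T_NATIVE_CHAR, space, H5P_DEFAULT, H5P_DEFAULT);
        H5Awrite(attr, H5T_NATIVE_CHAR, buf);

        H5Aclose(attr); H5Sclose(space); H5Gclose(grp);
        H5Pclose(fapl); H5Fclose(file);
        free(buf);
        return 0;
    }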
| |
Correct corner case for creating a contiguous dataset with a zero-sized
dataspace, when the allocation time is set to early.
Also clean up a few compiler warnings in the dataspace code.
Tested on:
Mac OSX/64 10.7.3 (amazon) w/debug & parallel
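A minimal sketch of the corner case, assuming an open file identifier file (the dataset name is illustrative): a contiguous dataset with a zero-sized dataspace and early allocation time.

    #include "hdf5.h"

    static hid_t create_empty_early(hid_t file)
    {
        hsize_t dim   = 0;                                  /* zero-sized dataspace */
        hid_t   space = H5Screate_simple(1, &dim, NULL);
        hid_t   dcpl  = H5Pcreate(H5P_DATASET_CREATE);
        hid_t   dset;

        /* Contiguous layout is the default; request allocation at creation time */
        H5Pset_alloc_time(dcpl, H5D_ALLOC_TIME_EARLY);

        /* With the fix, this succeeds and allocates no raw data */
        dset = H5Dcreate2(file, "empty", H5T_NATIVE_INT, space,
                          H5P_DEFAULT, dcpl, H5P_DEFAULT);

        H5Pclose(dcpl);
        H5Sclose(space);
        return dset;
    }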
| |
allocation/free in tests.
Tested: local linux; changes h5committest against the 1.8 version.
| |
Remove some leftover uses of the __FUNCTION__ macro, replacing them with the
FUNC macro, as used everywhere else.
Tested on:
Mac OSX/64 10.7.3 (amazon) w/debug, production & parallel
(too minor to require h5committest)
| |
Add FUNC_ENTER macros for package-private routines and begin process of
switching package routines to use them. All H5G routines are currently
finished.
Tested on:
Mac OSX/64 10.7.3 (amazon) w/debug, production & parallel
| |
in a read-only file caused a seg fault when the file is closed. I changed the error ID from H5E_CACHE to H5E_OHDR in the error report macro in H5O_create and fixed a minor problem in tfile.c.
Tested on jam and MacGoblin - minor changes.
| |
fault when the file is closed. I fixed the problem by putting a condition check early in H5O_create of H5O.c. The old code checked it too late, not until file space was created. I added a test case in tfile.c to check the creation of a group, dataset, attribute, and datatype.
Tested on koala, jam, and linew.
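A minimal sketch of the scenario, assuming an existing HDF5 file "ro.h5": creating an object in a file opened read-only should fail cleanly, and the later H5Fclose should no longer seg fault.

    #include "hdf5.h"

    static void try_create_in_readonly(void)
    {
        hid_t file = H5Fopen("ro.h5", H5F_ACC_RDONLY, H5P_DEFAULT);
        hid_t grp  = -1;

        /* Creation must fail up front on a read-only file */
        H5E_BEGIN_TRY {
            grp = H5Gcreate2(file, "new_group", H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
        } H5E_END_TRY;

        if (grp >= 0)
            H5Gclose(grp);          /* not expected: creation should have failed */

        /* With the fix, this close no longer seg faults after the failed creation */
        H5Fclose(file);
    }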
| |
standard 2.8.6
| |
Tested: durandal
| |
Description:
When shrinking a chunked dataset, the library fills in the unused parts of
chunks that have been shrunk. The fill value buffer allocated for this purpose
had a maximum size of 1 MB, but the fill was performed in a single operation.
Therefore, if the amount of unused space in a chunk after being shrunk was
greater than 1 MB, the library would read off the end of the fill value buffer.
Changed the maximum fill buffer size to be equal to the chunk size.
Tested: durandal; jam, koala, heiwa (h5committest)
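A minimal sketch of the trigger, assuming an open file identifier file (sizes are illustrative): a single ~4 MB chunk is shrunk so that well over 1 MB of it becomes unused and must be re-filled.

    #include "hdf5.h"

    static void shrink_past_old_fill_limit(hid_t file)
    {
        hsize_t dim = 1000000, maxdim = H5S_UNLIMITED, chunk = 1000000;
        hsize_t new_dim = 10;
        int     fill = -1;

        hid_t space = H5Screate_simple(1, &dim, &maxdim);
        hid_t dcpl  = H5Pcreate(H5P_DATASET_CREATE);
        H5Pset_chunk(dcpl, 1, &chunk);                      /* one ~4 MB chunk of ints */
        H5Pset_fill_value(dcpl, H5T_NATIVE_INT, &fill);

        hid_t dset = H5Dcreate2(file, "d", H5T_NATIVE_INT, space,
                                H5P_DEFAULT, dcpl, H5P_DEFAULT);

        /* Shrinking leaves ~4 MB of the chunk unused; the fill value buffer must
         * now handle more than the old 1 MB limit at a time */
        H5Dset_extent(dset, &new_dim);

        H5Dclose(dset); H5Pclose(dcpl); H5Sclose(space);
    }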
| |
between ASCII and UTF8. I added more test cases to the previous commit. Now it covers conversion from UTF8 to ASCII, ASCII to UTF8, VL and fixed-length strings, and H5Tconvert.
Tested on jam, koala, linew.
| |
between ASCII and UTF8. I corrected it by adding a condition check in H5T_conv_s_s and H5T_conv_vlen to report an error in this situation.
Tested on jam, koala, linew.
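A minimal sketch of the situation now caught (string contents are illustrative): converting a buffer between a fixed-length ASCII string type and an otherwise identical UTF-8 string type is reported as an error instead of proceeding silently.

    #include "hdf5.h"

    static herr_t convert_ascii_to_utf8(void)
    {
        char   buf[2][16] = {"hello", "world"};
        herr_t status;

        hid_t ascii_t = H5Tcopy(H5T_C_S1);
        H5Tset_size(ascii_t, 16);
        H5Tset_cset(ascii_t, H5T_CSET_ASCII);

        hid_t utf8_t = H5Tcopy(H5T_C_S1);
        H5Tset_size(utf8_t, 16);
        H5Tset_cset(utf8_t, H5T_CSET_UTF8);

        /* With the fix, this returns a negative value for ASCII <-> UTF-8 */
        status = H5Tconvert(ascii_t, utf8_t, 2, buf, NULL, H5P_DEFAULT);

        H5Tclose(ascii_t);
        H5Tclose(utf8_t);
        return status;
    }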
| |
Don't check dataset storage size for compressed datasets with region
reference datatypes. (The address of the region reference type in the file
varies and affects the compressed size)
Tested on:
Mac OS X/32 10.7.2 (amazon) w/debug & production + check-vfd
| |
the size of a compound datatype through H5Tset_size immediately after the type
was created. I fixed it in this commit.
Tested on jam, linew, and koala.
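A minimal sketch of the sequence in question (the member layout is illustrative): resizing a compound datatype with H5Tset_size immediately after H5Tcreate, before any members are inserted.

    #include "hdf5.h"

    static hid_t make_resized_compound(void)
    {
        /* Create a compound type with a provisional size, then resize it right
         * away, which is the sequence this commit addresses */
        hid_t cmpd = H5Tcreate(H5T_COMPOUND, 8);
        H5Tset_size(cmpd, 16);

        /* Members now fit within the enlarged 16-byte type */
        H5Tinsert(cmpd, "a", 0, H5T_NATIVE_INT);
        H5Tinsert(cmpd, "b", 8, H5T_NATIVE_DOUBLE);
        return cmpd;
    }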
| |
OPTION command for solution folder and no packaging.
Tested: local linux
| |
needs some polish - the solution folder assignment should go closer to the target declaration, and not all projects are grouped (parallel, c++, fortran, hl, and a few others).
Tested on Windows.
| |
Description:
An old patch was mistakenly committed in r21556. Replaced this fix with the
latest.
Tested: jam, koala, heiwa (h5committest)
| |
Recalculate the size of destination attribute message when the source and
destination versions are different during an object copy operation.
(Jira: HDFFV-7718)
Tested on:
Mac OS X/32 10.7.2 (amazon) w/debug
(h5committest upcoming)
| |
Description:
Added new H5SL_TYPE_GENERIC skip list type, which uses void *'s as keys and a
client-supplied callback for key comparison. This was added to support the
upcoming "merge named datatype" feature for H5Ocopy, but may be used in other
places as well. Also added testing.
Also fixed a potential bug with the H5SL_TYPE_OBJ implementation, and added
testing for that.
Tested: jam, koala, heiwa (h5committest), durandal
| |
1.8 branch)
Tested: durandal (too minor for full h5committest)
| |
Tested: durandal (too minor for full h5committest)
| |
Correct error in loading local heap prefix & data block from the file.
Sometimes the local heap's prefix could be loaded before the data block (e.g.
using H5Oget_info), but then when the data block was loaded later, the free
list information would get lost, causing the heap's size to grow larger than
necessary. This is Jira bug #HDFFV-7767
Tested on:
Mac OS X/32 10.7.2 (amazon) w/debug
(h5committest coming up)
| |
Description:
H5Ocopy could get confused when copying a named datatype containing an
attribute which used that named datatype as its datatype. This happened
because H5Ocopy would recurse into the attribute's datatype before the object
the attribute was in was fully copied (i.e. before the "post-copy" pass).
Modified H5Ocopy to avoid recursing before the post-copy step in this case.
Required many changes, including to how non-committed shared messages are
copied.
Tested: jam, koala, heiwa (h5committest); durandal
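A minimal sketch of the self-referential layout described above, assuming open source and destination file identifiers src and dst (object names are illustrative):

    #include "hdf5.h"

    static void copy_self_typed_dtype(hid_t src, hid_t dst)
    {
        hsize_t dim   = 1;
        hid_t   space = H5Screate_simple(1, &dim, NULL);

        /* Commit a datatype, then attach an attribute to it that uses the type itself */
        hid_t dtype = H5Tcopy(H5T_NATIVE_INT);
        H5Tcommit2(src, "named_int", dtype, H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
        hid_t attr = H5Acreate2(dtype, "self_typed", dtype, space, H5P_DEFAULT, H5P_DEFAULT);

        /* Copying this used to recurse into the attribute's datatype before the
         * post-copy pass; with the fix the copy completes correctly */
        H5Ocopy(src, "named_int", dst, "named_int", H5P_DEFAULT, H5P_DEFAULT);

        H5Aclose(attr); H5Tclose(dtype); H5Sclose(space);
    }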
| |
pending a careful evaluation of enum conversion behavior.
| |
Some machines, like LLNL uDawn, a Blue Gene machine, require all executables
to be launched by a command like mpirun.
Solution:
Added $RUNSERIAL to launch the executable.
Tested: LLNL uDawn.
| |
defines H5_HAVE_WIN32_API and H5_HAVE_VISUAL_STUDIO defines to use. These can be properly set during configuration.
Tested: windows and local linux - reviewed internally
| |
-ftrapv discovered several problems in the test suite. One of them is in the INIT_INTEGER macro
definition in dt_arith.c. It complained about line 150, where the code tried to subtract 1 from
the minimum (most negative) value of "int", causing it to overflow. So I revised the code to avoid it.
Tested on jam, koala, linew, and Mac OS Lion with CLANG compiler.
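The underlying issue is ordinary C signed overflow; a small sketch (not the actual INIT_INTEGER macro) of the problematic pattern and an overflow-safe rewrite:

    #include <limits.h>

    /* Problematic: one less than INT_MIN is signed overflow, which -ftrapv
     * turns into a runtime trap */
    int bad_value(void)
    {
        return INT_MIN - 1;                         /* undefined behavior */
    }

    /* Safe rewrite: do the wrap in unsigned arithmetic, where it is well defined;
     * (unsigned)INT_MIN - 1u equals INT_MAX, so the cast back stays in range */
    int safe_value(void)
    {
        return (int)((unsigned int)INT_MIN - 1u);
    }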
| |
temp_point->l = (unsigned long long)((i * 100 + j * 1000) * n);
The value can overflow a signed int before being converted to unsigned long long, so I changed it to
temp_point->l = (unsigned long long)((i * 40 + j * 400) * n);
to keep it under the maximum value.
Tested on jam. Simple change.
| |
Tested: local linux