Commit messages

non-executable files.
Reviewed HDF5-68
Tested: Windows
the object's
flush class action to ensure that cached data is flushed so that H5Ocopy will get
the correct data. (HDFFV-7853)
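
The scenario the fix targets looks roughly like the following sketch (file and object names are hypothetical, and error handling is omitted): an object whose data may still be cached in memory is copied with H5Ocopy, and the flush ensures the copy sees the current contents.

    #include "hdf5.h"

    /* Sketch only: copy an object whose data may still be cached in memory.
     * The fix makes sure cached data is flushed so H5Ocopy reads current data. */
    int main(void)
    {
        hid_t src = H5Fopen("src.h5", H5F_ACC_RDWR, H5P_DEFAULT);
        hid_t dst = H5Fcreate("dst.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);

        /* ... write to "dset" in src.h5 without closing or flushing it ... */

        if (H5Ocopy(src, "dset", dst, "dset_copy", H5P_DEFAULT, H5P_DEFAULT) < 0)
            return 1;

        H5Fclose(dst);
        H5Fclose(src);
        return 0;
    }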
HDFFV-5897.
Tested: jam, durandal (too minor for full h5committest)
structure in H5D_read as I did in H5D_write.
Tested on jam - simple change.
memory and caused a seg fault. I added checks in two places to make sure the library returns an error stack
when it fails to allocate memory. I didn't add any test to the test suite since there is no good way to test this, but I verified the error stack by hand.
Tested on jam, koala, ostrich.
core VFD) and HDFFV-7603 (core VFD has trouble with 2GB+ files on Windows).
Propagates the SEC2 driver fixes from HDF5 1.8.8 to the core VFD (mainly concerning the backing store). These fixes also conveniently resolved HDFFV-7603.
Tested on:
64-bit Windows 7
jam
koala
ostrich
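
For reference, the core (in-memory) VFD with a backing store is selected roughly as follows (a sketch; the file name and 1 MiB increment are arbitrary):

    #include "hdf5.h"

    /* Sketch: open a file through the core VFD with the backing store
     * enabled, so in-memory changes are written back to disk on close. */
    int main(void)
    {
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        hid_t file;

        if (H5Pset_fapl_core(fapl, (size_t)(1024 * 1024), 1) < 0)   /* backing_store = TRUE */
            return 1;
        file = H5Fcreate("core_backed.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

        /* ... files that grow past 2 GB are where HDFFV-7603 showed up on Windows ... */

        H5Fclose(file);
        H5Pclose(fapl);
        return 0;
    }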
private functions and a macro according to Quincey's suggestion.
Tested on jam - simple change.
opened with the MPI-IO VFD
Add test cases for these two routines
Jira issue HDFFV-7961
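
The new routines are not named in this fragment; for context, a file is opened with the MPI-IO VFD roughly as follows (a sketch assuming a parallel build of the library; the file name is made up):

    #include <mpi.h>
    #include "hdf5.h"

    /* Sketch: open a file through the MPI-IO VFD; the routines referred to
     * above operate on files opened this way. */
    int main(int argc, char *argv[])
    {
        hid_t fapl, file;

        MPI_Init(&argc, &argv);
        fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
        file = H5Fcreate("parallel.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

        /* ... call the new routines on 'file' here ... */

        H5Fclose(file);
        H5Pclose(fapl);
        MPI_Finalize();
        return 0;
    }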
Minor code safety issue in test/fheap.c and whitespace in other files.
Tested on:
Mac OSX/64 10.7.3 (amazon) w/debug
(Too minor to require h5committest)
(11.8-0) on jam and koala has trouble with the statement "*p++ = *p OP tree_val" in the macro definition of H5Z_XFORM_DO_OP1 in H5Ztrans.c: it increments p before performing the operation. So I broke the statement into two: "*p = *p OP tree_val; p++;" I also reported the problem to PGI.
Tested on jam, koala, and ostrich.
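
A reduced sketch of the change (OP stands for the arithmetic operator the macro substitutes, so this is illustrative rather than compilable):

    /* Original form, which PGI 11.8-0 mis-evaluates; note that it also reads
     * and modifies p with no sequence point in between, so the evaluation
     * order is not guaranteed by the C standard: */
    *p++ = *p OP tree_val;

    /* Rewritten form, split into two statements: */
    *p = *p OP tree_val;
    p++;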
operations like x*-100.
The parser mistook the "-" for subtraction. I fixed that and also fixed another problem
with some special cases like 100-x and 2/x.
Tested on jam, koala, and ostrich.
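
These expressions reach the parser through the data transform property; a usage sketch (the transfer property list here is hypothetical and error handling is omitted):

    #include "hdf5.h"

    /* Sketch: expressions like those above are installed on a dataset transfer
     * property list; the parser must treat the "-" in "x*-100" as a unary
     * minus rather than subtraction. */
    int main(void)
    {
        hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);

        if (H5Pset_data_transform(dxpl, "x*-100") < 0)   /* likewise "100-x", "2/x" */
            return 1;
        /* ... pass dxpl to H5Dread/H5Dwrite so the transform is applied ... */
        H5Pclose(dxpl);
        return 0;
    }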
Description:
When using the new object header format, it was possible for corruption to occur
if the first object header chunk changed size such that the length of the "chunk
0 size" field changed. This only occurred if there were messages that had not
been decoded. The original algorithm that changed the object header chunk size
marked all messages as dirty, causing those that had not been decoded to have
both the raw and native form invalidated. Changed the algorithm to avoid
marking messages dirty and added assertions to catch the case where messages
are dirtied without being decoded (or recently created) first.
Tested: jam, koala, ostrich (h5committest), durandal
Description:
When using the new object header format and adding an attribute with a size near
64K, it was possible for file corruption to occur. This happened only if the
first object header chunk was smaller than 256 bytes and then grew to larger
than 64K after the attribute was added.
Tested: ostrich, jam, koala (h5committest), durandal
Better fix for zero-sized dataset error (r22053).
Tested on:
Mac OSX/64 10.7.3 (amazon) w/debug & parallel
Correct corner case for creating a contiguous dataset with a zero-sized
dataspace, when the allocation time is set to early.
Also clean up a few compiler warnings in the dataspace code.
Tested on:
Mac OSX/64 10.7.3 (amazon) w/debug & parallel
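
The corner case corresponds roughly to the following sketch (file and dataset names are made up):

    #include "hdf5.h"

    /* Sketch: a contiguous dataset (the default layout) with a zero-sized
     * dataspace and early space allocation. */
    int main(void)
    {
        hsize_t dims[1] = {0};
        hid_t   file  = H5Fcreate("zero.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
        hid_t   space = H5Screate_simple(1, dims, NULL);
        hid_t   dcpl  = H5Pcreate(H5P_DATASET_CREATE);
        hid_t   dset;

        H5Pset_alloc_time(dcpl, H5D_ALLOC_TIME_EARLY);   /* allocate space at creation time */
        dset = H5Dcreate2(file, "empty", H5T_NATIVE_INT, space, H5P_DEFAULT, dcpl, H5P_DEFAULT);

        H5Dclose(dset);
        H5Pclose(dcpl);
        H5Sclose(space);
        H5Fclose(file);
        return 0;
    }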
Catch a missing FUNC_ENTER that escaped the recent pass through the source
code.
Tested on:
None, too minor, just eyeballed.
No testing necessary.
Correct misnamed FUNC_ENTER macro.
Tested on:
Mac OSX/64 10.7.3 (amazon) w/debug, production & parallel
(too minor to require h5committest)
Correct a few typos in r21923 checkin that caused failures on linew & ember.
Tested on:
Mac OSX/64 10.7.3 (amazon) w/debug, production & parallel
Add FUNC_ENTER macros for package-private routines and begin process of
switching package routines to use them. All H5G routines are currently
finished.
Tested on:
Mac OSX/64 10.7.3 (amazon) w/debug, production & parallel
Refactor function name macros and simplify the FUNC_ENTER macros, to clear
away the cruft and prepare for further cleanups.
Tested on:
Mac OSX/64 10.7.3 (amazon) w/debug, production & parallel
Add more braces to master conversion macro that was changed in r21919
Tested on:
Mac OSX/64 10.7.3 (amazon) w/debug, production & parallel
Refactor function name macros and simplify the FUNC_ENTER macros, to clear
away the cruft and prepare for further cleanups.
Tested on:
Mac OSX/64 10.7.3 (amazon) w/debug, production & parallel
Refactor function name macros and simplify the FUNC_ENTER macros, to clear
away the cruft and prepare for further cleanups.
Tested on:
Mac OSX/64 10.7.3 (amazon) w/debug, production & parallel
Bring [spirit of] r20393 from coverity branch back to trunk:
Move initialization ocrt_info.new_obj = NULL; before FUNC_ENTER_NOAPI -- gh
Tested on:
Mac OS X/64 10.7.3 (amazon) w/debug, production & parallel
(too minor to require h5committest)
in a read-only file caused a seg fault when the file is closed. I changed the error ID from H5E_CACHE to H5E_OHDR in the error report macro in H5O_create and fixed a minor problem in tfile.c.
Tested on jam and MacGoblin - minor changes.
fault when the file is closed. I fixed the problem by adding a condition check early in H5O_create in H5O.c; the old code checked too late, not until file space was created. I added a test case in tfile.c to check the creation of a group, dataset, attribute, and datatype.
Tested on koala, jam, and linew.
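
The failure mode corresponds roughly to this usage (a sketch with hypothetical names): creating an object in a file opened read-only should fail cleanly instead of crashing later at file close.

    #include "hdf5.h"

    /* Sketch: attempt to create an object in a read-only file. The call is
     * expected to fail with an error; before the fix, closing the file
     * afterwards could seg fault. */
    int main(void)
    {
        hid_t file = H5Fopen("existing.h5", H5F_ACC_RDONLY, H5P_DEFAULT);
        hid_t grp  = H5Gcreate2(file, "new_group", H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

        if (grp >= 0)      /* should not succeed on a read-only file */
            H5Gclose(grp);
        H5Fclose(file);    /* must not crash */
        return 0;
    }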
Tested: durandal (too minor for full h5committest)
Updates Windows thread-safe code in H5TS.c to use _beginthread instead of CreateThread.
Tested on 64-bit Windows 7 with Visual Studio 2010 using CMake. Both 32- and 64-bit builds were tested.
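
A sketch of the distinction (not the actual H5TS.c code): _beginthread from the C runtime sets up and tears down per-thread CRT state, which the raw Win32 CreateThread call does not do for CRT-using code.

    #include <windows.h>
    #include <process.h>

    /* Sketch only: start a worker with the CRT's _beginthread instead of
     * CreateThread. */
    static void __cdecl worker(void *arg)
    {
        (void)arg;
        /* ... thread body ... */
    }

    int main(void)
    {
        uintptr_t th = _beginthread(worker, 0, NULL);

        if (th == (uintptr_t)(-1L))
            return 1;      /* thread creation failed */
        Sleep(100);        /* crude wait; real code would synchronize properly */
        return 0;
    }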
standard 2.8.6
Back out r21782 while I figure out what the problem is with the change.
Tested on:
Daily tests... :-/
Description:
When shrinking a chunked dataset, the library fills in the unused parts of
chunks that have been shrunk. The fill value buffer allocated for this purpose
had a maximum size of 1 MB, but the fill was performed in a single operation.
Therefore, if the amount of unused space in a chunk after being shrunk was
greater than 1 MB, the library would read off the end of the fill value buffer.
Changed the maximum fill buffer size to be equal to the chunk size.
Tested: durandal; jam, koala, heiwa (h5committest)
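
The trigger corresponds to shrinking a chunked dataset with H5Dset_extent so that more than 1 MB of an existing chunk becomes unused; a sketch (sizes and names are illustrative only):

    #include "hdf5.h"

    /* Sketch: one large (16 MB) chunk, then a shrink that leaves most of the
     * chunk unused and forces it to be re-filled with the fill value. */
    int main(void)
    {
        hsize_t dims[1]     = {4 * 1024 * 1024};
        hsize_t maxdims[1]  = {H5S_UNLIMITED};
        hsize_t chunk[1]    = {4 * 1024 * 1024};
        hsize_t new_dims[1] = {1024};

        hid_t file  = H5Fcreate("shrink.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
        hid_t space = H5Screate_simple(1, dims, maxdims);
        hid_t dcpl  = H5Pcreate(H5P_DATASET_CREATE);
        hid_t dset;

        H5Pset_chunk(dcpl, 1, chunk);
        dset = H5Dcreate2(file, "d", H5T_NATIVE_INT, space, H5P_DEFAULT, dcpl, H5P_DEFAULT);

        /* ... write the dataset ... */

        H5Dset_extent(dset, new_dims);   /* unused part of the chunk gets re-filled */

        H5Dclose(dset);
        H5Pclose(dcpl);
        H5Sclose(space);
        H5Fclose(file);
        return 0;
    }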
Rearrange checks for reasons why we break collective I/O back to independent
I/O into "global" and "local" sections. We should try to minimize the checks
in the "local" section...
Tested on:
Mac OS X/32 10.7.2 (amazon) w/parallel
(too minor to require h5committest)
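
For context, collective transfer is requested on the dataset transfer property list; the checks described above decide whether the library silently falls back to independent I/O. A sketch of the request side only (assuming a parallel build; error handling omitted):

    #include "hdf5.h"

    /* Sketch: build a transfer property list that asks for collective I/O;
     * the library may still break the access back to independent I/O. */
    static hid_t make_collective_dxpl(void)
    {
        hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);

        H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);
        return dxpl;   /* pass to H5Dread/H5Dwrite */
    }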
between ASCII and UTF8. I corrected it by adding a condition check in H5T_conv_s_s and H5T_conv_vlen to report an error in this situation.
Tested on jam, koala, linew.
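
The rejected conversion involves string datatypes whose character sets differ; a sketch (sizes are arbitrary):

    #include "hdf5.h"

    /* Sketch: two fixed-length string datatypes that differ only in character
     * set. Converting between them is not supported, so the library should
     * report an error rather than silently produce wrong data. */
    int main(void)
    {
        char  buf[16] = "hello";
        hid_t ascii_t = H5Tcopy(H5T_C_S1);
        hid_t utf8_t  = H5Tcopy(H5T_C_S1);

        H5Tset_size(ascii_t, 16);
        H5Tset_size(utf8_t, 16);
        H5Tset_cset(ascii_t, H5T_CSET_ASCII);
        H5Tset_cset(utf8_t, H5T_CSET_UTF8);

        if (H5Tconvert(ascii_t, utf8_t, 1, buf, NULL, H5P_DEFAULT) < 0)
            H5Eprint2(H5E_DEFAULT, NULL);   /* expected: the conversion is rejected */

        H5Tclose(utf8_t);
        H5Tclose(ascii_t);
        return 0;
    }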