- h5repacktst creates some files that h5repack.sh reads, so h5repacktst must run before h5repack.sh. Added that dependency.
  Tested: jam, by hand (ran "make h5repack.sh.chkexe_" and verified that h5repacktst is invoked first and that h5repack.sh does not run until it finishes).
- Changed exit(1) to exit(EXIT_FAILURE) and exit(0) to exit(EXIT_SUCCESS) for better coding style.
  Tested: jam.
- The comment says the error exit code is -1, but the code actually uses 1. Changed it to EXIT_FAILURE (1) and exit(0) to exit(EXIT_SUCCESS) for better coding style.
  Tested: jam.
- An error exit code of -1 is illegal (the exit code is unsigned). Changed it to EXIT_FAILURE (1). Also changed exit(0) to exit(EXIT_SUCCESS) for better coding style.
  Tested: jam.
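The three exit-status fixes above follow one pattern; a minimal sketch of that pattern (illustrative only, not the actual tool code — `tool_status` is a hypothetical name):

```c
#include <stdlib.h>

/* EXIT_SUCCESS is guaranteed to be 0; EXIT_FAILURE is a nonzero
 * implementation-defined value (1 on common platforms).  exit(-1) is
 * misleading: the status is reported modulo 256, so the shell sees
 * 255, never -1. */
static int tool_status(int had_error)
{
    return had_error ? EXIT_FAILURE : EXIT_SUCCESS;
}
```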
- Two h5diff fixes:
  - h5diff treated two INFINITY values as different. Fixed by checking (value == expect) before calling ABS(...) in h5diff_array.c; this makes (INF == INF) true (INF is treated as a number instead of as NaN). (PC -- 2009/07/28)
  - Added the option "--use-system-epsilon" to print a difference only if (|a-b| > EPSILON), and changed the default to strict equality. (PC -- 2009/09/12)
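A sketch of why the ordering of the checks matters (hypothetical function name, not the actual h5diff_array.c code): testing `value == expect` first makes INF compare equal to INF, because `fabs(INF - INF)` is NaN and every comparison against NaN is false.

```c
#include <math.h>

/* Returns nonzero if the pair should be reported as a difference. */
static int values_differ(double value, double expect,
                         int use_system_epsilon, double epsilon)
{
    if (value == expect)     /* handles INF == INF; NaN falls through */
        return 0;
    if (use_system_epsilon)  /* --use-system-epsilon behavior */
        return fabs(value - expect) > epsilon;
    return 1;                /* default: strict equality */
}
```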
- ...with HDF macro calls (HDstrdup, HDdup, HDclose, ...). No real code change.
  Tested: jam; h5committest in progress, expected okay.
- Add a backward compatibility test to make certain that the 1.6 library handles encountering a file with a fixed array chunk index gracefully. Also, remove the (generated) testh5ls.sh at 'make distclean'.
  Tested on: FreeBSD/32 6.3 (duty) w/production (too minor to require h5committest)
- The test generates an array datatype of ints larger than the display buffer; the bug was exposed when the buffer was reallocated.
  Tested: linux
- ...functions: va_start(), HDvsnprintf(), and va_end(). When the end of the string buffer was reached and the buffer resized, HDvsnprintf() was called again inside the loop without the accompanying va_start()/va_end() calls. This usage exposed bug 1520, reported by a user.
  Tested: linux
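The underlying rule is that a `va_list` is consumed by `vsnprintf()`, so each retry after growing the buffer must work from a fresh copy. A minimal sketch of the corrected pattern (hypothetical function, not the HDF5 source):

```c
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Format into a freshly allocated buffer, growing it as needed.
 * Calling vsnprintf() twice on the same va_list is undefined
 * behavior; va_copy() gives each attempt its own copy. */
static char *format_alloc(const char *fmt, ...)
{
    va_list ap;
    size_t  size = 8;            /* deliberately small initial buffer */
    char   *buf  = malloc(size);

    va_start(ap, fmt);
    for (;;) {
        va_list ap2;
        int     n;

        va_copy(ap2, ap);        /* fresh copy for every attempt */
        n = vsnprintf(buf, size, fmt, ap2);
        va_end(ap2);

        if (n < 0) {             /* encoding error */
            free(buf);
            buf = NULL;
            break;
        }
        if ((size_t)n < size)    /* it fit, including the NUL */
            break;
        size = (size_t)n + 1;    /* grow exactly and retry */
        buf  = realloc(buf, size);
    }
    va_end(ap);
    return buf;
}
```

(Allocation-failure checks are omitted for brevity.)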
- Also updated the usage output.
  Tested: linux
- Bug fix: h5repack was not applying a requested contiguous layout to a dataset with filters. Added a test to the C test program (not to the script) that verifies the layout and filters.
  Tested: windows (developed and tested manually), linux
- Added a run to the h5repack shell script that reads a family file. The input file is located in the common tools testfiles area, tools/testfiles; modified the h5repack shell script to read files from this location (h5repack normally reads its input files from a dedicated testfiles location in h5repack/testfiles). Changed the h5diff open-file call to use h5tools_fopen, so that it can open all file drivers.
  Tested: linux
- Changed the -c messages.
  Tested: windows, linux
- Tested: windows, linux
- Tested: Fedora 10 gcc
- ...limit.
  ISSUE: the tools read by hyperslabs using the formula hyperslab_size[i] = MIN(dim_size[i], H5TOOLS_BUFSIZE / datum_size), where H5TOOLS_BUFSIZE is a constant defined as 1024K. This is fine as long as datum_size does not exceed 1024K; otherwise the hyperslab size becomes 0 (since 1024K / (anything greater than 1024K) = 0). This affects h5dump, h5repack, and h5diff.
  SOLUTION: add a check for a 0 size and define it as 1 in that case.
  TEST FOR H5DUMP: defined a case of such a type in the h5dump test generator program (an array type of doubles with a large array dimension, the case the user reported). Since the written file committed to svn would be around 1024K, opted not to write the data (the part of the code where the hyperslab is defined still executes, since h5dump always reads the files). Defined a macro WRITE_ARRAY to enable such writing if needed. Added a run to the h5dump shell script and 2 new files to svn: tools/testfiles/tarray8.ddl and tools/testfiles/tarray8.h5. NOTE: considered adding this dataset case to an existing file, but that would add the large array output to those ddls, and the file list is already increasing.
  TEST FOR H5DIFF: for h5diff the threshold for reading by hyperslabs is H5TOOLS_MALLOCSIZE (128 * H5TOOLS_BUFSIZE), or 128 MB. That makes it impossible to add such a file to svn, so used the same method as h5dump (only write the dataset if WRITE_ARRAY is defined). As opposed to h5dump, the hyperslab code is NOT executed when the dataset is empty (the dataset is not read). Added the new dataset to existing files and to a shell run (tools/h5diff/testfiles/h5diff_dset1.h5 and tools/h5diff/testfiles/h5diff_dset2.h5, with output in tools/h5diff/testfiles/h5diff_80.txt).
  TEST FOR H5REPACK: similar issue as h5diff, with the difference that the hyperslab code is run. Added a run to the shell script (with a filter; otherwise the code uses H5Ocopy).
  FURTHER ISSUES: the type in question ("double") prints differently across platforms (e.g. on liberty some garbage number is printed at some array locations).
  SOLUTION: defined an "int" type for this test. However, printing such an array still produces bogus output on at least one platform (FreeBSD), so eliminated that test run altogether and filed a bug report on it.
  Tested: h5committest
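The ISSUE/SOLUTION above can be sketched in a few lines (hypothetical helper; `unsigned long long` stands in for HDF5's `hsize_t`): the integer division truncates to 0 when the datum is larger than the buffer, so the result is clamped to at least one element.

```c
#include <stddef.h>

#define H5TOOLS_BUFSIZE (1024 * 1024)   /* 1024K, as in the tools */

/* hyperslab_size[i] = MIN(dim_size[i], H5TOOLS_BUFSIZE / datum_size),
 * clamped so a datum larger than the whole buffer still reads one
 * element at a time instead of a zero-sized hyperslab. */
static unsigned long long
hyperslab_dim(unsigned long long dim_size, size_t datum_size)
{
    unsigned long long n = H5TOOLS_BUFSIZE / datum_size;

    if (n == 0)        /* datum_size > H5TOOLS_BUFSIZE */
        n = 1;         /* read at least one element */
    return dim_size < n ? dim_size : n;
}
```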
- ...regarding an AIX failure
- ...was being selected. When reading the compression parameter keyword, the compression-type read flag was incorrectly set to "read"; removed this line of code: in->configOptionVector[COMPRESS] = 1; Modified one configuration file to have the COMPRESSION-TYPE GZIP keyword. Entered a bug-fix description:
  - h5import: by selecting a compression type, a big-endian byte order was being selected (PVN - 2009/11/3)
  Tested: linux
- ...big- or little-endian machine. configure.in was modified to export a variable carrying endianness information to testh5ls.sh. The script then compares the current run against 2 expected outputs: one for a big-endian machine (linew was used to generate the output), the other for little-endian (jam was used to generate the output). Because of the way h5ls prints types, it searches for NATIVE types first. One solution would be for h5ls not to detect these native types, using for example the same print-datatype function that h5dump does; that would make the output look the same on all platforms ("32-bit little-endian integer" would be printed instead). The drawback is that the "native" information would not be available. The other solution is to keep not one but 2 expected outputs and have the shell script detect the endianness and compare against one output or the other.
  Tested: h5committest
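A minimal sketch of host-endianness detection of the kind the test harness relies on (hypothetical helper; the actual fix exports the result from configure.in rather than probing at run time):

```c
/* Inspect the first byte of an unsigned int holding 1: on a
 * little-endian host the low-order byte comes first. */
static const char *host_endianness(void)
{
    unsigned int  one   = 1u;
    unsigned char first = *(unsigned char *)&one;

    return first ? "little" : "big";
}
```

The returned string could then select which expected-output file the script compares against.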
- ...causing a compilation error on AIX
  Tested: h5committest
- Implemented the RFC. The new option is <-c, --compare  List objects that are not comparable>. Added some test cases and new test files for output.
  Tested: h5committest
- Compare the return value of H5Tequal with 1, so that error return codes (-1) don't cause the type to be printed. This came up because on Windows the call H5Tequal(type, H5T_NATIVE_INT_LEAST8) returns an error (-1), making h5ls print "native int_least8_t" for a VL type case (unix went to the correct case).
  Tested: windows, linux
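The pitfall is that H5Tequal returns a tri-state (`htri_t`): positive for equal, zero for not equal, negative for error, so a bare `if (H5Tequal(a, b))` treats the error code -1 as "equal". A sketch of the fixed check, using a stand-in type rather than the real HDF5 headers:

```c
typedef int htri_t;           /* stand-in for HDF5's htri_t */

/* Only a definite "yes" (1) from the comparison counts as a match;
 * 0 (not equal) and negative values (error) both mean "no match". */
static int types_match(htri_t cmp_result)
{
    return cmp_result == 1;
}
```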
- Tested: linux
- ...parameter ported from the 1.8 branch
  Tested: linux
- Changed 'THG' to 'The HDF Group' in various HDF5 source files, most of which are <subdirectory>/COPYING.
  Closes Bugzilla entry 1403.
- ...-N, --nan  Avoid NaNs detection
  Note: there is no shell-script run for datasets with NaN because the output is not portable (different results and NaN strings on different systems).
  Tested: windows, linux
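A sketch of the assumed -N/--nan semantics (hypothetical names; the exact default behavior is an assumption, not taken from the commit): by default the tool detects NaN on both sides and treats a NaN/NaN pair as equal; with the option, the isnan() checks are skipped, so any NaN compares as a difference because NaN != NaN is always true.

```c
#include <math.h>

/* Returns nonzero if the pair should be reported as a difference. */
static int diff_one(double a, double b, int skip_nan_detection)
{
    if (!skip_nan_detection && isnan(a) && isnan(b))
        return 0;          /* both NaN: report no difference */
    return a != b;         /* NaN on one side always differs */
}
```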
- ..."Storage: information not available" when displaying storage information for VL and dataset-region types. Added 2 shell runs that display this information. (#818)
  Tested: windows, linux
- ...compression is 1024 bytes. M is an integer greater than 1, the size of the dataset in bytes (default is 1024).
  #bgz 1426
  Tested: windows, linux
- The PG compiler complains about array out of bounds (a rank of zero was not checked). Added a scalar dataset to the test generator program; this case runs in a previously existing run and was added to 2 existing files. Changed the print_pos function so that it avoids printing the dimension brackets [] for scalar datasets.
  Tested: windows, linux
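A minimal sketch of the rank-0 guard in a position-printing helper (hypothetical signature, not the actual print_pos code; it assumes `out` is large enough for the formatted position):

```c
#include <stdio.h>

/* Format an element position like "[ 3 5 ]" into out; for a scalar
 * dataset (rank 0) there is no array position, so nothing is printed
 * and out is left as an empty string. */
static int print_pos(char *out, size_t outlen,
                     int rank, const unsigned long long *pos)
{
    int n = 0, i;

    if (rank == 0) {       /* scalar: no dimension brackets */
        out[0] = '\0';
        return 0;
    }
    n += snprintf(out + n, outlen - n, "[ ");
    for (i = 0; i < rank; i++)
        n += snprintf(out + n, outlen - n, "%llu ", pos[i]);
    n += snprintf(out + n, outlen - n, "]");
    return n;
}
```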
- ...match 1.8 and 1.9
  Tested: windows, linux
- ...currently as the same hyperslab size (comparing the dimension sizes against a predefined constant size and choosing the minimum of the two).
  Tested: linux, windows
- ...to the additional argument ":"). When "*" is present after a letter switch, that letter may or may not take the extra argument. Used for -b in h5dump to make the default NATIVE when no extra argument is present.
  Tested: windows, linux
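A simplified sketch of the "*" extension to an option-string lookup (hypothetical helper, much smaller than the tools' real get_option): ':' after a letter means a required argument, '*' means an optional one.

```c
#include <string.h>

/* Returns -1 for an unknown switch, 0 for a switch with no argument,
 * 1 for a switch that takes an argument; *arg_optional is set when
 * the argument may be omitted ('*' suffix, e.g. "b*" for -b). */
static int option_wants_arg(const char *optstring, char opt,
                            int *arg_optional)
{
    const char *p = strchr(optstring, opt);

    *arg_optional = 0;
    if (p == NULL)
        return -1;         /* unknown switch */
    if (p[1] == ':')
        return 1;          /* argument required */
    if (p[1] == '*') {
        *arg_optional = 1; /* argument may be omitted */
        return 1;
    }
    return 0;              /* no argument */
}
```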
- For scalar string datasets, print the character position when a difference is found, instead of a non-existing array position.
  Tested: windows, linux
- ...difference is found, instead of a non-existing array position.
  Tested: windows
- ...file names
  Tested: windows
- ...files, 1 difference). Objects with the same name must be of the same type.
  Tested: windows, linux
- Description: porting the fix to the 'make check install' bug that was applied in 1.8, as the problem appears in 1.6 as well. Tools will now be built as 'ordered', with h5dump getting built last. Received permission from Elena to check in post-code-freeze.
  Tested: kagiso, liberty
- ...duplicating a dataset generation
  Tested: linux
- ...trying individual objects, the file graph is not currently compared, so make h5diff return 0 (no differences).
  Tested: windows, linux
- ...currently compared, so make h5diff return 0 (no differences).
  Tested: windows, linux
- Tested: linux
- Tested: windows, linux
- Tested: linux
- Tested: windows, linux
- ...return 2 instead of -1 as the error status.
  Tested: windows, linux
- ...option was not taken into consideration when printing the compression ratio.
  Tested: windows, linux
- Solution: check for the existence of objects in the file before calling malloc with the number of objects (because on AIX, malloc(0) returns NULL).
  Tested: windows, linux
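A minimal sketch of that guard (hypothetical names, not the actual tool code): on AIX `malloc(0)` returns NULL, which the caller would misread as an allocation failure, so the object count is checked first.

```c
#include <stdlib.h>

/* Allocate a table of object pointers; an empty file yields NULL
 * deliberately (meaning "no objects"), never via malloc(0). */
static void **alloc_obj_table(size_t nobjects)
{
    if (nobjects == 0)     /* nothing in the file: skip malloc */
        return NULL;
    return malloc(nobjects * sizeof(void *));
}
```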