Verified by the user who reported the issue.
Add new files for repack tests.
Add repack tests for VDS files.
revise_chunks.
Tested on: 64-bit Ubuntu 15.10 w/ gcc 5.2.1
autotools serial
cmake serial
compound type.
Added the testing to h5repack, where it belongs, and undid the tests added to h5dump.
Tested: h5committested, plus tested by hand on jam.
to test script changes in the autotools test scripts.
Tested: local linux
Tested: local linux autotools
CMake only, until the library updates are approved.
for objects in examples.
Tested: local linux
Reviewed in H5T-61
Tested: local linux - cmake and autotools
Reverted the code change and changed the default from 1024 to 0. Changed the limit test to use h5dump instead of h5diff to compare the repacked file.
Corrected the test scripts for the test folder path.
Tested: h5committest and local linux with CMake
Reverted the code change and changed the default from 1024 to 0. Changed the limit test to use h5dump instead of h5diff to compare the repacked file.
Tested: local linux with CMake
HDFFV-8012 - h5repack changes max dims and causes a failure if only "-f none" is used, without changing the layout, on a chunked dataset whose chunk dim is bigger than a dataset dim
Description:
The command "h5repack -f <obj>:NONE <file.h5> out.h5" failed if the source file contained a chunked dataset with a chunk dim bigger than a dataset dim.
The command also changed the max dims if a chunk dim was smaller than the dataset dim.
Both issues occurred when the dataset size was smaller than 64K (the compact size limit).
Fixed them.
Tested:
jam (linux32-LE), koala (linux64-LE), ostrich (linuxppc64-BE), tejeda (mac32-LE), linew (solaris-BE), Windows (32-LE cmake), cmake (jam)
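A minimal sketch of an input that exhibits this condition (file and dataset names are illustrative, not the committed test files): a chunked dataset whose chunk dim (8) is larger than its current dim (4) but within its fixed max dim (10). Repacking it with "h5repack -f dset:NONE in.h5 out.h5" should now succeed and leave the dims and max dims unchanged.

    /* Sketch: chunk dim (8) > current dim (4), within the fixed max dim (10). */
    #include "hdf5.h"

    int main(void)
    {
        hsize_t dims[1]    = {4};
        hsize_t maxdims[1] = {10};
        hsize_t chunk[1]   = {8};    /* bigger than the current dim */
        int     data[4]    = {1, 2, 3, 4};

        hid_t file  = H5Fcreate("in.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
        hid_t space = H5Screate_simple(1, dims, maxdims);
        hid_t dcpl  = H5Pcreate(H5P_DATASET_CREATE);
        H5Pset_chunk(dcpl, 1, chunk);
        hid_t dset  = H5Dcreate2(file, "dset", H5T_NATIVE_INT, space,
                                 H5P_DEFAULT, dcpl, H5P_DEFAULT);
        H5Dwrite(dset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, H5P_DEFAULT, data);

        H5Dclose(dset); H5Pclose(dcpl); H5Sclose(space); H5Fclose(file);
        return 0;
    }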
Fix for HDFFV-7993 - h5repack fails with the error "chunk size must be <= maximum dimension size for fixed-sized dimensions"
Description:
Fixed a failure when changing the chunk size of a specified chunked dataset with unlimited max dims.
Also took care of converting such a dataset to contiguous and compact layouts.
Test cases were added and tagged with the JIRA number.
Tested:
jam (linux32-LE), koala (linux64-LE), ostrich (linuxppc64-BE), tejeda (mac32-LE), linew (solaris-BE), Windows (32-LE cmake), cmake (jam)
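A minimal sketch of the failing input (illustrative names, not the committed test files): a chunked dataset with an unlimited max dim. Invocations such as "h5repack -l dset:CHUNK=64 in.h5 out.h5", "h5repack -l dset:CONTI ...", and "h5repack -l dset:COMPA ..." exercise the paths fixed here.

    /* Sketch: a chunked dataset with an unlimited max dim, the layout
     * source for the re-chunk / contiguous / compact conversions above. */
    #include "hdf5.h"

    int main(void)
    {
        hsize_t dims[1]    = {100};
        hsize_t maxdims[1] = {H5S_UNLIMITED};  /* unlimited: chunking required */
        hsize_t chunk[1]   = {10};

        hid_t file  = H5Fcreate("in.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
        hid_t space = H5Screate_simple(1, dims, maxdims);
        hid_t dcpl  = H5Pcreate(H5P_DATASET_CREATE);
        H5Pset_chunk(dcpl, 1, chunk);
        hid_t dset  = H5Dcreate2(file, "dset", H5T_NATIVE_INT, space,
                                 H5P_DEFAULT, dcpl, H5P_DEFAULT);

        H5Dclose(dset); H5Pclose(dcpl); H5Sclose(space); H5Fclose(file);
        return 0;
    }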
- h5repack: h5repack failed to copy a dataset if the layout was changed from chunked with
unlimited dims to contiguous. (PC -- 2011/07/15)
- h5diff: the "--delta" option considered two NaNs of the same type to be different, which is wrong
based on http://www.hdfgroup.org/HDF5/doc/RM/Tools.html#Tools-Diff. (PC -- 2011/07/15)
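A sketch of the documented "--delta" semantics (an illustration of the rule, not h5diff's actual code): two elements differ only when |a - b| > delta, and a pair of NaNs must compare as equal.

    /* Sketch of the documented --delta rule for floating-point elements. */
    #include <math.h>
    #include <stdbool.h>

    static bool delta_differs(double a, double b, double delta)
    {
        if (isnan(a) && isnan(b))  /* two NaNs: equal per the documented rule */
            return false;
        if (isnan(a) || isnan(b))  /* NaN vs. number: a real difference; note */
            return true;           /* that fabs(a - b) > delta alone would be */
                                   /* false here, since comparisons involving */
                                   /* NaN always fail                         */
        return fabs(a - b) > delta;
    }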
Fix for Bug1896 - h5repack: changing the layout to COMPACT does not work
Description:
Made h5repack able to convert the layout of small datasets to COMPACT as the default. Also added verification of layout changes to our test script.
Tested:
jam, amani, heiwa
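For reference, a sketch (illustrative, not h5repack's code) of what the COMPACT conversion amounts to at the library level; compact storage keeps the raw data in the object header, so it only applies to small datasets (under 64K).

    /* Sketch: a dataset creation property list requesting compact layout,
     * as "h5repack -l dset:COMPA in.h5 out.h5" would produce for a small
     * dataset. */
    #include "hdf5.h"

    hid_t make_compact_dcpl(void)
    {
        hid_t dcpl = H5Pcreate(H5P_DATASET_CREATE);
        H5Pset_layout(dcpl, H5D_COMPACT);  /* vs. H5D_CONTIGUOUS / H5D_CHUNKED */
        return dcpl;
    }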
Add test cases for bug1797 - h5repack doesn't handle references in
compound and vlen datatypes for attributes
Description:
Add the test cases ahead of the fix, which waits on H5Acopy to be completed:
1. object references in an attribute of compound type
2. region references in an attribute of compound type
3. object references in an attribute of vlen type
4. region references in an attribute of vlen type
(A sketch of case 1 follows this entry.)
NOTE:
These tests are skipped for now and will be enabled when the code part is completed
(H5Acopy() replaces copy_attr()).
The file and code can be used for the library-portion test.
Tested:
jam, amani, linew
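A minimal sketch of case 1 (illustrative names, not the committed test file): an attribute whose compound datatype contains an object reference.

    /* Sketch of case 1: an object reference inside a compound-typed attribute. */
    #include "hdf5.h"

    typedef struct {
        int        val;
        hobj_ref_t ref;                 /* object-reference member */
    } comp_t;

    int main(void)
    {
        hid_t file = H5Fcreate("attr_ref.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
        hid_t grp  = H5Gcreate2(file, "g1", H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

        hid_t ctype = H5Tcreate(H5T_COMPOUND, sizeof(comp_t));
        H5Tinsert(ctype, "val", HOFFSET(comp_t, val), H5T_NATIVE_INT);
        H5Tinsert(ctype, "ref", HOFFSET(comp_t, ref), H5T_STD_REF_OBJ);

        comp_t buf = {7, 0};
        H5Rcreate(&buf.ref, file, "g1", H5R_OBJECT, (hid_t)-1);

        hid_t space = H5Screate(H5S_SCALAR);
        hid_t attr  = H5Acreate2(grp, "a1", ctype, space, H5P_DEFAULT, H5P_DEFAULT);
        H5Awrite(attr, ctype, &buf);

        H5Aclose(attr); H5Sclose(space); H5Tclose(ctype);
        H5Gclose(grp); H5Fclose(file);
        return 0;
    }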
Fix for bug1726 - NPOESS: h5repack loses attributes for datasets of
type H5T_REFERENCE.
Description:
Includes test cases, among them test cases for attributes with object and region references.
Tested:
jam, amani, linew
Fix for bug1814 - NPOESS: h5repack doesn't handle references to groups
as elements of a dataset
Description:
Also handles object references to named datatypes.
Added test cases.
Tested:
jam, amani, linew
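A minimal sketch (illustrative names, not the committed test files): a dataset of object references whose elements point at a group and at a named datatype, the two target kinds this fix covers.

    /* Sketch: object references to a group and to a committed datatype,
     * stored as elements of a dataset. */
    #include "hdf5.h"

    int main(void)
    {
        hid_t file = H5Fcreate("obj_refs.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
        hid_t grp  = H5Gcreate2(file, "g1", H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

        hid_t ntype = H5Tcopy(H5T_NATIVE_INT);          /* commit a named datatype */
        H5Tcommit2(file, "mytype", ntype, H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

        hobj_ref_t refs[2];
        H5Rcreate(&refs[0], file, "g1",     H5R_OBJECT, (hid_t)-1);
        H5Rcreate(&refs[1], file, "mytype", H5R_OBJECT, (hid_t)-1);

        hsize_t dims[1] = {2};
        hid_t space = H5Screate_simple(1, dims, NULL);
        hid_t dset  = H5Dcreate2(file, "refs", H5T_STD_REF_OBJ, space,
                                 H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
        H5Dwrite(dset, H5T_STD_REF_OBJ, H5S_ALL, H5S_ALL, H5P_DEFAULT, refs);

        H5Dclose(dset); H5Sclose(space); H5Tclose(ntype);
        H5Gclose(grp); H5Fclose(file);
        return 0;
    }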
Description:
h5repack previously would not take named datatypes into consideration when copying
datasets and attributes. This would cause extra anonymous datatypes in the target file
at best, and cause errors halfway through the repacking at worst. h5repack should now
always handle named datatypes correctly. Named datatypes are also now converted to the
native type when -n is given.
Tested: jam, linew, smirom (h5committest)
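A minimal sketch of the kind of input this concerns (illustrative names): a dataset created with a committed (named) datatype. Plain repacking must now preserve the named type rather than producing anonymous copies, and "h5repack -n in.h5 out.h5" converts the data to the native type.

    /* Sketch: a dataset whose type is a committed (named) datatype. */
    #include "hdf5.h"

    int main(void)
    {
        hid_t file  = H5Fcreate("named.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);

        hid_t ntype = H5Tcopy(H5T_STD_I32BE);          /* non-native on LE hosts */
        H5Tcommit2(file, "int32be", ntype, H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

        hsize_t dims[1] = {10};
        hid_t space = H5Screate_simple(1, dims, NULL);
        hid_t dset  = H5Dcreate2(file, "dset", ntype, space,  /* uses the named type */
                                 H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

        H5Dclose(dset); H5Sclose(space); H5Tclose(ntype); H5Fclose(file);
        return 0;
    }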
H5TOOLS_BUFSIZE limit.
ISSUE: the tools read by hyperslabs using the formula hyperslab_size[i] = MIN(dim_size[i], H5TOOLS_BUFSIZE / datum_size), where H5TOOLS_BUFSIZE is a constant defined as 1024K. This is fine as long as datum_size does not exceed 1024K; otherwise the hyperslab size becomes 0 (since 1024K / (anything greater than 1024K) = 0 in integer division). This affects h5dump, h5repack, and h5diff.
SOLUTION: add a check for a size of 0 and use 1 in that case.
TEST FOR H5DUMP: defined a case of such a type in the h5dump test generator program (an array type of doubles with a large array dimension, the case the user reported). Since the written file committed to svn would be around 1024K, opted not to write the data (the code that defines the hyperslab is still executed, since h5dump always reads the file). Defined a macro WRITE_ARRAY to enable the writing if needed. Added a run to the h5dump shell script and 2 new files to svn: tools/testfiles/tarray8.ddl and tools/testfiles/tarray8.h5. NOTE: considered adding this dataset to an existing file instead, but that would add the large array output to the existing ddls, and the file list is increasing as it is.
TEST FOR H5DIFF: h5diff's threshold for reading by hyperslabs is H5TOOLS_MALLOCSIZE (128 * H5TOOLS_BUFSIZE, i.e. 128 MB), which makes it impossible to add such a file to svn, so the same method as for h5dump was used (only write the dataset if WRITE_ARRAY is defined). As opposed to h5dump, the hyperslab code is NOT executed when the dataset is empty (the dataset is not read). Added the new dataset to existing files and a shell run (tools/h5diff/testfiles/h5diff_dset1.h5, tools/h5diff/testfiles/h5diff_dset2.h5, and output in tools/h5diff/testfiles/h5diff_80.txt).
TEST FOR H5REPACK: similar issue to h5diff, with the difference that the hyperslab code is run. Added a run to the shell script (with a filter, since otherwise the code uses H5Ocopy).
Tested: linux (h5committest failed; apparently it did not detect the code changes in tools/lib that fix the bug, hitting the assertion on the 0-sized hyperslab. Running h5committest with --distclean would pick up the new code, but that means waiting 3 more hours :-) )
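A sketch of the fix's logic (a paraphrase, not the exact tools/lib code): clamp the computed hyperslab dimension to at least 1 element.

    /* Sketch: when one datum exceeds the I/O buffer, the old formula's
     * integer division yields 0; clamp to 1 so one element is read at a time. */
    #include <stddef.h>

    #define H5TOOLS_BUFSIZE (1024 * 1024)          /* 1024K, as described above */
    #define MIN(a, b)       ((a) < (b) ? (a) : (b))

    static size_t hyperslab_dim(size_t dim_size, size_t datum_size)
    {
        size_t n = MIN(dim_size, H5TOOLS_BUFSIZE / datum_size);
        if (n == 0)   /* datum_size > H5TOOLS_BUFSIZE */
            n = 1;
        return n;
    }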
Correct error introduced in r16353 with layout version, and add test
so it gets caught earlier.
Tested on:
FreeBSD/32 6.3 (duty)
Too minor to require h5committest
Windows shell script only. Note: this file is not used by the Unix shell script.
files with incorrect datatype versions are encountered.
Description: The library now recognizes some problems with datatype versions in
H5O_decode_helper() and, if not performing strict format checks, automatically
corrects them. A framework was added so that other message decode routines can
automatically correct file errors. Datatype version information was added to
h5debug.
Tested: kagiso, smirom, linew (h5committest)
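A hypothetical sketch of the correction pattern described above (the struct, field names, and the specific consistency rule are illustrative only, not the library's actual internals): on decode, an inconsistent version is either reported or silently repaired in memory, depending on whether strict format checks are enabled.

    /* Hypothetical sketch of decode-time version correction; names and the
     * consistency rule shown are illustrative, not the library's internals. */
    typedef struct {
        unsigned version;      /* datatype version stored in the file      */
        unsigned min_version;  /* lowest version consistent with the       */
                               /* message's own contents                   */
    } dtype_info_t;

    static int correct_dtype_version(dtype_info_t *dt, int strict_checks)
    {
        if (dt->version < dt->min_version) {
            if (strict_checks)
                return -1;                 /* strict mode: flag the file error */
            dt->version = dt->min_version; /* lenient mode: repair in memory   */
        }
        return 0;
    }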
tools/h5repack/testfiles
Tested: linux