and replace it with the new URL for the COPYING file.
Fix 2 lines in the Java error test's expected output file, where the reported
line numbers shifted because the copyright header was shortened by 2 lines.
|
Complete revamp of package initialization/shutdown mechanism in the library.
Each package now has a single init/term routine.
This new way should avoid packages being re-initialized during library
shutdown and is also _much_ more proactive about reporting resource leaks
internal to the library.
Introduces a new "module" header file for packages in the library
(e.g. src/H5Fmodule.h) which sets up some necessary package configuration macros
for the FUNC_ENTER/LEAVE macros. (The VFL drivers have their own slightly
modified version of this header, src/H5FDdrvr_module.h.)
Also cleaned up a bunch of resource leaks all across the library and tests,
along with addressing many warnings as I encountered them.
Tested on:
MacOSX/64 10.10.5 (amazon) w/serial & parallel
Linux/64 3.10.x (kituo) w/serial & parallel
Linux/64 2.6.x (ostrich) w/serial
|
* H5_ATTR_FORMAT(X,Y,Z) __attribute__((format(X, Y, Z)))
- Rename the UNUSED attribute characteristic to H5_ATTR_UNUSED.
- Rename the NORETURN attribute characteristic to H5_ATTR_NORETURN.
Tested with h5committest.
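For illustration, a sketch of how such attribute macros are typically defined and applied; the example function names below are hypothetical and not part of this checkin:

    #if defined(__GNUC__)
    #  define H5_ATTR_FORMAT(X, Y, Z) __attribute__((format(X, Y, Z)))
    #  define H5_ATTR_UNUSED          __attribute__((unused))
    #  define H5_ATTR_NORETURN        __attribute__((noreturn))
    #else
    #  define H5_ATTR_FORMAT(X, Y, Z) /* nothing */
    #  define H5_ATTR_UNUSED          /* nothing */
    #  define H5_ATTR_NORETURN        /* nothing */
    #endif

    /* printf-style format checking: argument 1 is the format string,
     * the variable arguments begin at argument 2 */
    static int example_log(const char *fmt, ...) H5_ATTR_FORMAT(printf, 1, 2);

    /* silence "unused parameter" warnings for a deliberately unused argument */
    static void example_cb(void *udata H5_ATTR_UNUSED)
    {
    }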
|
the arguments to be in a more logical order.
Tested on: h5committest
|
Revert r25273, 25283 & 25439 (the hyperslab improvement changes). They
are buggy and it's taking me a long time to correct the problem. I'll check
in a revised form of the changes when I've got them straightened out.
Tested on:
Mac OSX 10.10.0 (amazon) w/gcc 4.9.x, C++, FORTRAN
Linux 2.6.x (jam) w/parallel
|
Bring in Chao/Neil/my changes to optimize hyperslab selection operations
further, along with 3 new public API routines: H5Scombine_hyperslab(),
H5Sselect_select() and H5Scombine_select(), plus many minor code cleanups
and fixes for a few compiler warnings.
Tested on:
Mac OSX/64 10.9.3 w/gcc 4.9.x and parallel w/OpenMPI
(h5committest forthcoming)
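A rough usage sketch of two of the new routines; the exact prototypes shown here are my assumption and may differ from what eventually ships:

    #include "hdf5.h"

    static void
    combine_example(void)
    {
        hsize_t dims[2]   = {100, 100};
        hsize_t start1[2] = {0, 0},   count1[2] = {10, 10};
        hsize_t start2[2] = {20, 20}, count2[2] = {5, 5};
        hid_t   space     = H5Screate_simple(2, dims, NULL);

        /* Select one hyperslab on the dataspace */
        H5Sselect_hyperslab(space, H5S_SELECT_SET, start1, NULL, count1, NULL);

        /* Build a new dataspace whose selection is the union of the current
         * selection and a second hyperslab, leaving 'space' untouched */
        hid_t combined = H5Scombine_hyperslab(space, H5S_SELECT_OR,
                                              start2, NULL, count2, NULL);

        /* Combine the selections of two existing dataspaces into a third */
        hid_t copy   = H5Scopy(space);
        hid_t merged = H5Scombine_select(space, H5S_SELECT_OR, copy);

        H5Sclose(merged);
        H5Sclose(copy);
        H5Sclose(combined);
        H5Sclose(space);
    }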
|
Clean up warnings, switch library code to use Standard C/POSIX wrapper
macros, remove internal calls to API routines, update checkapi and checkposix
scripts.
Tested on:
Mac OSX/64 10.8.3 (amazon) w/C++ & FORTRAN
Big-Endian Linux/64 (ostrich)
|
Refactor function name macros and simplify the FUNC_ENTER macros, to clear
away the cruft and prepare for further cleanups.
Tested on:
Mac OSX/64 10.7.3 (amazon) w/debug, production & parallel
|
should return htri_t, not
herr_t. To minimize the change to the library's behavior, in the function
H5Z_prelude_callback of H5Z.c, if the return value of can_apply is FALSE and
the filter is mandatory, this function returns FAILURE. If the return value is FALSE
but the filter is optional, this function returns SUCCEED. During the I/O, the filter
will fail and return a size of zero, but the pipeline will skip this filter.
Tested on jam, linew, and amani. Tested on jam with szip.
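For reference, a minimal sketch of a "can apply" callback with the corrected htri_t return type; the size restriction it enforces is purely hypothetical:

    #include "hdf5.h"

    /* Return a positive value if the filter can be applied to this dataset,
     * zero if it cannot, and a negative value on error.  A zero (FALSE)
     * return fails dataset creation for a mandatory filter; for an optional
     * filter the pipeline simply skips the filter, as described above. */
    static htri_t
    example_can_apply(hid_t dcpl_id, hid_t type_id, hid_t space_id)
    {
        size_t dtype_size;

        (void)dcpl_id;
        (void)space_id;

        if ((dtype_size = H5Tget_size(type_id)) == 0)
            return -1;                        /* error querying the datatype */

        return (dtype_size == 4) ? 1 : 0;     /* hypothetical: 4-byte elements only */
    }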
|
Description:
Previously, there was no versioning for H5Z_class_t. This prevented applications
written for 1.6 using custom filters from being able to use the 1.8 library.
There is now an H5Z_class1_t and an H5Z_class2_t to enable compatibility. H5Zregister is
*not* versioned; it determines which version of the struct has been passed in by the
value of the first field (id or version, both of which are ints).
Tested: jam, linew, smirom (h5committest), jam (--with-default-api-version=v16)
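A minimal registration sketch using the version-2 struct; the filter ID and the pass-through filter body are made up for illustration:

    #include "hdf5.h"

    #define EXAMPLE_FILTER_ID 256              /* hypothetical user-defined filter ID */

    /* Pass-through filter body: returns the number of valid bytes in *buf
     * (zero signals failure) and keeps *buf_size in sync with the buffer. */
    static size_t
    example_filter(unsigned int flags, size_t cd_nelmts, const unsigned int cd_values[],
                   size_t nbytes, size_t *buf_size, void **buf)
    {
        (void)flags; (void)cd_nelmts; (void)cd_values; (void)buf_size; (void)buf;
        return nbytes;                         /* no transformation performed */
    }

    /* The first field is H5Z_CLASS_T_VERS (the struct version) rather than a
     * filter ID, which is how H5Zregister tells the two layouts apart. */
    static const H5Z_class2_t EXAMPLE_FILTER_CLASS[1] = {{
        H5Z_CLASS_T_VERS,                      /* struct version */
        EXAMPLE_FILTER_ID,                     /* filter ID */
        1, 1,                                  /* encoder present, decoder present */
        "example pass-through filter",         /* filter name */
        NULL,                                  /* "can apply" callback (optional) */
        NULL,                                  /* "set local" callback (optional) */
        example_filter                         /* the filter function itself */
    }};

    /* ...and later:  H5Zregister(EXAMPLE_FILTER_CLASS);  */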
|
Bug fix (#1357)
Description:
Three filters (N-bit, szip, and scale-offset) did not assign the correct value to the value-result argument "buf_size".
However, I don't think this problem caused a buffer overrun, because those filters were telling the caller that "buf", another value-result argument, is smaller than it actually is. If there was an actual buffer overrun, I believe another problem exists, although I don't know of one.
Tested:
jam, smirom, linew
Although all tests passed, I'm concerned about a valgrind memcheck error; there may be another miscommunication between the filters and the caller.
|
NOTE: szip client symbols were made public
Tested: windows, linux, solaris
|
Make H5Pget_filter_by_id() API versioned and switch internal usage
to H5Pget_filter_by_id2().
Add simple regression test for H5Pget_filter_by_id1().
Tested on:
FreeBSD/32 6.2 (duty) in debug mode
FreeBSD/64 6.2 (liberty) w/C++ & FORTRAN, in debug mode
Linux/32 2.6 (kagiso) w/PGI compilers, w/C++ & FORTRAN, w/threadsafe,
in debug mode
Linux/64-amd64 2.6 (smirom) w/default API=1.6.x, w/C++ & FORTRAN,
in production mode
Linux/64-ia64 2.6 (cobalt) w/Intel compilers, w/C++ & FORTRAN,
in production mode
Solaris/32 2.10 (linew) w/deprecated symbols disabled, w/C++ & FORTRAN,
w/szip filter, in production mode
Mac OS X/32 10.4.10 (amazon) in debug mode
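A short usage sketch of the new versioned call (querying the deflate filter on a dataset creation property list; error handling mostly omitted):

    #include "hdf5.h"
    #include <stdio.h>

    static void
    show_deflate_settings(hid_t dcpl_id)
    {
        unsigned int flags;
        size_t       cd_nelmts = 1;
        unsigned int cd_values[1];
        char         name[64];
        unsigned int filter_config;

        /* H5Pget_filter_by_id2() takes the extra filter_config argument
         * that the older H5Pget_filter_by_id1() call does not have. */
        if (H5Pget_filter_by_id2(dcpl_id, H5Z_FILTER_DEFLATE, &flags, &cd_nelmts,
                                 cd_values, sizeof(name), name, &filter_config) >= 0)
            printf("deflate level = %u, optional = %d\n",
                   cd_values[0], (flags & H5Z_FLAG_OPTIONAL) != 0);
    }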
|
copyright notice.
Tested platform:
Kagiso only, since it is only a comment-block change. If it works on one
machine, it should work on all of them, I hope. Still need to check the parallel
build on copper.
|
Code cleanup
Description:
Clean up some compiler warnings (esp. those flagged on Windows builds)
Platforms tested:
FreeBSD 4.11 (sleipnir)
Too minor to require h5committest
|
Code cleanup
Description:
Trim trailing whitespace, which is making 'diff'ing the two branches
difficult.
Solution:
Ran this script in each directory:
foreach f (*.[ch] *.cpp)
sed 's/[[:blank:]]*$//' $f > sed.out && mv sed.out $f
end
Platforms tested:
FreeBSD 4.11 (sleipnir)
Too minor to require h5committest
|
Description: Removed PABLO from the source
Solution:
Platforms tested: arabica with 64-bit, copper with parallel,
heping with GNU C and C++ and PGI fortran (but
I disabled hl; there is some weird problem only
on heping: F9XMODFLAG is not
propagated to the Makefile files)
Misc. update:
|
Code cleanup & optimization
Description:
Improve ADF/CGNS benchmark by reducing the number of internal attribute
copies made during creations, opens and writes.
Added a new H5O_iterate() routine for iterating through messages of a certain
type in the object header (attributes are currently the only message type that can
have multiple instances in the object header).
Cross-pollinated various minor code cleanups to reduce diffs between
branches.
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel
Solaris 2.7 (arabica)
Too minor to require h5committest
|
Fix SZIP filter to dynamically detect the encoder.
Description:
Solution:
See:
http://hdf.ncsa.uiuc.edu/RFC/SZIP/Szip_dynamic_12_Oct.pdf
Changes to h5dump tests, contingent on detecting SZIP encoder.
|
Bug. See other checkin.
Description:
Solution:
Platforms tested:
Misc. update:
|
Update Szip to accept 'n-bit' data
Description:
See earlier checkins.
Solution:
Platforms tested:
Misc. update:
|
Bug fix.
Description:
Address two problems:
- The computation of the scanline in the szip filter was being
performed in the "can apply" callback routine instead of the
"set local" routine.
- The routine which allocated all the chunks for an entire dataset
(which is invoked when the allocation time is early or late,
rather than incremental) wasn't recording a failed filter in
the information for the chunk, causing the library to believe
that the chunk had the filter applied when it really hadn't.
Solution:
- Move the scanline computation to the "set local" callback.
- Record the filter mask with each chunk created when allocating them.
Platforms tested:
FreeBSD 4.10 (sleipnir) w/szip
Too obscure to require h5committest
|
Purpose: Bug fix
Description: While working on the SZIP documentation with Frank, I realized
that when scanline was less than 4k and bigger than pixels_per_block,
it was not adjusted if number_of_blocks_per_scanline was bigger
than max_number_of_blocks_per_scanline.
Solution: Fixed the code. Unfortunately it didn't help with the problem
I had using h5repack with the DOQGROD.he5 file.
Platforms tested: copper
Misc. update:
|
Code cleanup
Description:
Clean up various recent changes a little.
Platforms tested:
FreeBSD 4.10 (sleipnir)
Too minor to require h5committest
|
Purpose: Improvement
Description: The HDF5 library set the pixels_per_scanline parameter to the size of the chunk's
fastest-changing dimension. As a result, the chunk's fastest-changing dimension
could not be bigger than 4K or smaller than the pixels_per_block
value, and szip compression couldn't be used for many real datasets.
Solution: Reworked the algorithm for how HDF5 sets the pixels_per_scanline value; only chunks
with a total number of elements less than the pixels_per_block value are rejected.
There is no longer any restriction on the size of the chunk's fastest-changing
dimension.
Platforms tested: verbena, copper, sol
Misc. update:
|
Purpose:
HDF5 now supports SZIP with no encoder.
Description:
SZIP can be configured to have both encoder and decoder or just to have the decoder. HDF5 can now query the configuration of any filter, and will throw errors if users try to write using a filter with encoding disabled.
Solution:
Added the H5Zget_filter_info function and changed the API for H5Pget_filter and H5Pget_filter_by_id. See the SZIP RFC.
Platforms tested:
Copper (fortran, C++, parallel), Sleipnir (C++), Arabica (fortran, C++), Verbena (fortran, C++)
Misc. update:
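A short sketch of how an application can use the new query routine before trying to write szip-compressed data (the helper function name is made up):

    #include "hdf5.h"
    #include <stdio.h>

    static int
    szip_encoder_available(void)
    {
        unsigned int config = 0;

        if (H5Zget_filter_info(H5Z_FILTER_SZIP, &config) < 0)
            return 0;                              /* query itself failed */

        /* Writing needs the encoder; reading only needs the decoder */
        if (config & H5Z_FILTER_CONFIG_ENCODE_ENABLED)
            return 1;

        printf("szip is present but decode-only; writes with szip will fail\n");
        return 0;
    }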
|
Code cleanup & minor optimization
Description:
Re-work the way interface initialization routines are specified in the
library to avoid the overhead of checking for them in routines where there is
no interface initialization routine. This cleans up warnings with gcc 3.4,
reduces the library binary size a bit (about 2-3%) and should speed up the
library's execution slightly.
Platforms tested:
FreeBSD 4.10 (sleipnir) w/gcc34
h5committest
|
Bug fix
Description:
Range check the szip 'pixels per block' against the chunk size of the
dataset when attempting to create a new dataset, since the szip library
requires the chunk's fastest changing dimension to be at least the size
of the PPB.
Platforms tested:
FreeBSD 4.9 (sleipnir)
too minor for h5committest
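For illustration, the kind of property-list setup this range check applies to (a sketch with a made-up chunk shape; error handling omitted):

    #include "hdf5.h"

    static hid_t
    make_szip_dcpl(void)
    {
        /* Chunk of 64 rows by 32 columns; 32 is the chunk's
         * fastest changing dimension */
        hsize_t chunk_dims[2] = {64, 32};
        hid_t   dcpl          = H5Pcreate(H5P_DATASET_CREATE);

        H5Pset_chunk(dcpl, 2, chunk_dims);

        /* The 'pixels per block' value (last argument) is what now gets
         * range checked against the chunk dimensions at dataset creation
         * time, rather than failing later inside the szip library. */
        H5Pset_szip(dcpl, H5_SZIP_NN_OPTION_MASK, 32);

        return dcpl;
    }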
|
Bug fix
Description:
Don't check the number of szip parameters set during the "can apply" and
"set local" callbacks; be safe about setting the parameters instead.
Platforms tested:
FreeBSD 4.9 (sleipnir)
too minor to require h5committest
|
Code cleanup
Description:
Move PABLO_MASK above including headers.
Platforms tested:
FreeBSD 4.8 (sleipnir)
too minor for h5committest
|
Code cleanup
Description:
Clean up more warnings from lint.
Platforms tested:
FreeBSD 4.8 (sleipnir)
too minor for h5committest
|
Code cleanup
Description:
Add in the rest of the szip "options mask" macros that were missing. Also made the
"raw" options mask be set by the library, instead of requiring users to always set
it.
Platforms tested:
FreeBSD 4.8 (sleipnir)
Minor tweaks too small for h5committest
|
Bug fix
Description:
Fix three (!) bugs in szip filter:
- we were using bytes per pixel instead of bits per pixel
- we were using the size of the slowest changing dimension instead of
the fastest changing dimension to compute the blocks per scanline
parameter
- we swapped two parameters when setting up the szip_options block.
Solution:
Addressed bugs above.
Platforms tested:
FreeBSD 4.8 (sleipnir)
h5committest not needed
|
Code cleanup
Description:
Uncompressed buffers can't get to the szip filter's decompression code now
that they are handled correctly by the chunk's filter mask.
Solution:
Remove handling of uncompressed buffers from szip filter's decompression
code.
Platforms tested:
FreeBSD 4.8 (sleipnir) w/szip
h5committest not necessary & doesn't test szip code.
|
Fix
Description:
This is an analogue to the previous bug-fix for filters not being
applied to data written but being applied when read. The old way was that
if the SZlib library couldn't deflate a dataset, then we'd munge
along pretending that it was okay.
Solution:
Trigger it as an error in this situation. The H5Z_pipeline code which
calls this function can clear the error stack if need be.
Platforms tested:
Modi4 (Parallel & Fortran)
Burrwhite (Fortran & C++)
Baldric (Fortran), but make check didn't work because of a "libucb.so"
error that I can't fix.
Misc. update:
|
Code cleanup/new features
Description:
Switch over to a new style for registering filters with the library -
instead of passing an ID, a string and a callback function to H5Zregister,
the client should pass in a single pointer to an H5Z_class_t struct which
contains the ID, the description string and all the function callbacks as
fields.
Added support for a new "can apply" callback for each filter, which is
called when a dataset is created to check whether the parameters for that
filter apply correctly to the combination of the datatype and the chunk size
(i.e. dataspace) for the dataset.
Added support for a new "set local" callback for each filter, which
is called when a dataset is created (after the "can apply" filter callback)
and sets filter parameters that are specific to that particular dataset.
Switched the filters we ship over to use the new H5Z_class_t struct for
their internal registrations and also added "set local" callbacks to the
szip and shuffle filters and a "can apply" callback to the szip filter.
Lots of other code cleanups, etc., as well.
Solution:
Platforms tested:
FreeBSD 4.8 (sleipnir) w/szip
Linux 2.4 (sleipnir) w/szip
Solaris 2.7 (arabica) w/FORTRAN
IRIX64 6.5 (modi4) w/szip, FORTRAN & parallel
Misc. update:
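For reference, a minimal "set local" callback sketch for a hypothetical user-defined filter; it records the dataset's datatype size as a filter parameter (a real callback would normally query and preserve the filter's existing flags and parameters first):

    #include "hdf5.h"

    #define EXAMPLE_FILTER_ID 256              /* hypothetical user-defined filter ID */

    /* Called at dataset creation time, after the "can apply" callback, to
     * record dataset-specific parameters for this filter in the pipeline. */
    static herr_t
    example_set_local(hid_t dcpl_id, hid_t type_id, hid_t space_id)
    {
        unsigned int cd_values[1];
        size_t       dtype_size;

        (void)space_id;

        if ((dtype_size = H5Tget_size(type_id)) == 0)
            return -1;

        cd_values[0] = (unsigned int)dtype_size;

        /* Update this filter's client-data values in the dataset creation
         * property list with the dataset-specific value computed above. */
        return H5Pmodify_filter(dcpl_id, EXAMPLE_FILTER_ID, H5Z_FLAG_OPTIONAL,
                                1, cd_values);
    }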
|
Code cleanup & bug fix
Description:
Cleaned up comments & copyright statement.
Blocks which were unable to be compressed were missing the "length" prefix,
causing the decompression routine to fail.
Solution:
Always add "length" prefix to all blocks, even ones the fail to compress.
Platforms tested:
FreeBSD 4.8 (sleipnir)
IRIX64 6.5 (modi4) w/parallel
Misc. update:
|
To support szip compression in HDF5
Description:
This is where the szip filter function is located.
Solution:
The filter function consists of an "encode" and a "decode" part, which
is similar to the deflate filter.
One critical difference:
when szip decompresses the data, it needs to know in advance the size of the buffer
that will hold the decompressed data.
Currently it's hard for the HDF5 library to provide that buffer size easily. So to avoid
this problem, we add a small header to each chunk that holds the buffer size of
that chunk. The code uses UINT32 ENCODE and UINT32 DECODE to do this
part of the work.
Platforms tested:
Since there are changes to configure.in and configure, I didn't use h5committest.
I tested on four platforms:
1) Linux 2.4 (eirene)
2) Solaris 2.7 (arabica)
3) Windows 2000 (VS 6.0)
4) SGI IRIX6.5-64 (modi4)
For tests 1)-3), only basic C tests were done.
For the modi4 test, I tested 64-bit C, parallel, and Fortran.
All tests passed, except for a warning message from the szip library when checksum is used in some order, which doesn't
cause any real problems.
Misc. update:
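Conceptually, the per-chunk header works like the sketch below, with plain C standing in for the library's UINT32 ENCODE / UINT32 DECODE macros:

    #include <stdint.h>

    /* Write a 32-bit size into a fixed 4-byte prefix, least significant
     * byte first, placed in front of the compressed chunk data. */
    static void
    encode_uint32(unsigned char *p, uint32_t n)
    {
        p[0] = (unsigned char)(n & 0xff);
        p[1] = (unsigned char)((n >> 8) & 0xff);
        p[2] = (unsigned char)((n >> 16) & 0xff);
        p[3] = (unsigned char)((n >> 24) & 0xff);
    }

    /* Read the prefix back when decompressing, so a buffer of exactly the
     * decompressed size can be allocated before calling into szip. */
    static uint32_t
    decode_uint32(const unsigned char *p)
    {
        return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
               ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
    }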
|