Commit messages

hdf5_1_10
* commit '54957d37f5aa73912763dbb6e308555e863c43f4':
Commit copyright header change for src/H5PLpkg.c which was added after running script to make changes.
Add new files in release_docs to MANIFEST. Commit changes to Makefile.in(s) and H5PL.c that resulted from running autogen.sh.
Merge pull request #407 in HDFFV/hdf5 from ~LRKNOX/hdf5_lrk:hdf5_1_10_1 to hdf5_1_10_1
Change copyright headers to replace the URL referring to the file being
removed with the new URL for the COPYING file.

... makes the H5Fcreate fail when a prefix is specified.
Tested parallel on Jam.

... is used to run the test.
- fix warning in t_chunk_alloc.

... MPI_File_set_size, as they are supported by most MPI implementations.
- fix bug in t_mpi.c (HDFFV-8856)
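For context on the calls named above: MPI_File_set_size truncates or extends a
file to an exact byte count, and MPI_File_get_size reports the current size. A
minimal, self-contained C sketch (the file name and size are illustrative, not
taken from t_mpi.c):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    MPI_File   fh;
    MPI_Offset size = 0;

    MPI_Init(&argc, &argv);

    /* Collectively create a file and set its size to exactly 1 MiB. */
    MPI_File_open(MPI_COMM_WORLD, "setsize.dat",
                  MPI_MODE_CREATE | MPI_MODE_RDWR, MPI_INFO_NULL, &fh);
    MPI_File_set_size(fh, (MPI_Offset)(1024 * 1024));   /* collective call */

    /* Query the size back; error checking omitted for brevity. */
    MPI_File_get_size(fh, &size);
    printf("file size = %lld bytes\n", (long long)size);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}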
... where necessary.
Reviewed.

Trim trailing whitespace from source code files with this command:
find . \( -name "*.[ch]" -or -name "*.cpp" -or -name "*.f90" \) -print |xargs -n 1 sed -i "" 's/[[:blank:]]*$//'
Tested on:
None - eyeballed only

... that are multiples of the number of processors.
Tested on jam and abe.

... that are multiples of the number of processors.
Tested on jam and abe.

Remove another call to H5E_clear_stack() from within the library.
Clean up lots of compiler warnings.
Tested on:
Mac OS X/32 10.5.6 (amazon)
(followup on other platforms forthcoming)

Add H5Dcreate to API versioned routines, replacing internal usage with
H5Dcreate2
Fix thread-safe error stack initialization for API versioned error
stack printing routines.
Tested on:
FreeBSD/32 6.2 (duty) in debug mode
FreeBSD/64 6.2 (liberty) w/C++ & FORTRAN, in debug mode
Linux/32 2.6 (kagiso) w/PGI compilers, w/C++ & FORTRAN, w/threadsafe,
in debug mode
Linux/64-amd64 2.6 (smirom) w/default API=1.6.x, w/C++ & FORTRAN,
in production mode
Linux/64-ia64 2.6 (cobalt) w/Intel compilers, w/C++ & FORTRAN,
in production mode
Solaris/32 2.10 (linew) w/deprecated symbols disabled, w/C++ & FORTRAN,
w/szip filter, in production mode
Mac OS X/32 10.4.10 (amazon) in debug mode
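For readers unfamiliar with HDF5 API versioning: the plain name H5Dcreate is
mapped by macro to one of the versioned routines, and the versioned names can
also be called directly. A minimal sketch of both signatures, assuming
deprecated (1.6) symbols are enabled; the file and dataset names are
illustrative:

#include "hdf5.h"

int main(void)
{
    hsize_t dims[1] = {10};
    hid_t file  = H5Fcreate("example.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
    hid_t space = H5Screate_simple(1, dims, NULL);

    /* New versioned routine: link-creation, dataset-creation, and
     * dataset-access property lists are passed explicitly. */
    hid_t d2 = H5Dcreate2(file, "/data2", H5T_NATIVE_INT, space,
                          H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

    /* Old 1.6-style routine, still callable as H5Dcreate1. */
    hid_t d1 = H5Dcreate1(file, "/data1", H5T_NATIVE_INT, space, H5P_DEFAULT);

    H5Dclose(d1);
    H5Dclose(d2);
    H5Sclose(space);
    H5Fclose(file);
    return 0;
}

Which routine the unversioned name resolves to depends on how the library was
configured, for example the default-API setting mentioned in the platform list
above.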
Make H5Dopen versioned and change all internal usage to use H5Dopen2
Add simple regression test for H5Dopen1
Tested on:
FreeBSD/32 6.2 (duty) in debug mode
FreeBSD/64 6.2 (liberty) w/C++ & FORTRAN, in debug mode
Linux/32 2.6 (kagiso) w/PGI compilers, w/C++ & FORTRAN, w/threadsafe,
in debug mode
Linux/64-amd64 2.6 (smirom) w/default API=1.6.x, w/C++ & FORTRAN,
in production mode
Linux/64-ia64 2.6 (cobalt) w/Intel compilers, w/C++ & FORTRAN,
in production mode
Solaris/32 2.10 (linew) w/deprecated symbols disabled, w/C++ & FORTRAN,
w/szip filter, in production mode
Mac OS X/32 10.4.10 (amazon) in debug mode
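The versioned open calls differ only in that H5Dopen2 takes a dataset access
property list. A short sketch, assuming a file like the one created in the
previous example already exists:

#include "hdf5.h"

int main(void)
{
    hid_t file = H5Fopen("example.h5", H5F_ACC_RDONLY, H5P_DEFAULT);

    hid_t d2 = H5Dopen2(file, "/data2", H5P_DEFAULT);  /* new form, with DAPL */
    hid_t d1 = H5Dopen1(file, "/data2");               /* deprecated 1.6 form */

    H5Dclose(d1);
    H5Dclose(d2);
    H5Fclose(file);
    return 0;
}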
Deprecate H5Dextend in favor of H5Dset_extent (without using API
versioning, due to changed behavior) and switch internal usage to H5Dset_extent
Tested on:
FreeBSD/32 6.2 (duty) in debug mode
FreeBSD/64 6.2 (liberty) w/C++ & FORTRAN, in debug mode
Linux/32 2.6 (kagiso) w/PGI compilers, w/C++ & FORTRAN, w/threadsafe,
in debug mode
Linux/64-amd64 2.6 (smirom) w/default API=1.6.x, w/C++ & FORTRAN,
in production mode
Linux/64-ia64 2.6 (cobalt) w/Intel compilers, w/C++ & FORTRAN,
in production mode
Solaris/32 2.10 (linew) w/deprecated symbols disabled, w/C++ & FORTRAN,
w/szip filter, in production mode
Mac OS X/32 10.4.10 (amazon) in debug mode
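The behavioral difference alluded to above is that H5Dset_extent sets the
dataspace to exactly the requested dimensions and may therefore also shrink a
dataset, whereas H5Dextend could only grow one. A minimal sketch; names and
sizes are illustrative:

#include "hdf5.h"

int main(void)
{
    hsize_t dims[1]     = {10};
    hsize_t maxdims[1]  = {H5S_UNLIMITED};
    hsize_t chunk[1]    = {10};
    hsize_t new_dims[1] = {100};

    hid_t file  = H5Fcreate("extend.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
    hid_t space = H5Screate_simple(1, dims, maxdims);
    hid_t dcpl  = H5Pcreate(H5P_DATASET_CREATE);
    H5Pset_chunk(dcpl, 1, chunk);         /* extendible datasets must be chunked */

    hid_t dset = H5Dcreate2(file, "/d", H5T_NATIVE_INT, space,
                            H5P_DEFAULT, dcpl, H5P_DEFAULT);

    /* Replacement for H5Dextend(dset, new_dims): sets the extent to exactly
     * new_dims, and unlike H5Dextend it can also make a dataset smaller. */
    H5Dset_extent(dset, new_dims);

    H5Dclose(dset);
    H5Pclose(dcpl);
    H5Sclose(space);
    H5Fclose(file);
    return 0;
}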
... doesn't return the correct file size from MPI_File_get_size. Bypass this
problem by replacing it with stat. Add an option --disable-mpi-size in
configure to indicate that this function doesn't work properly. Add a test in
testpar/t_mpi.c, too. If it returns the wrong file size, print out a warning.
Tested on kagiso (parallel) only, because the same change was already tested
for v1.6 on several platforms (kagiso, cobalt, copper, and sol).
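A sketch of the kind of fallback described above, assuming POSIX stat() is
available; the helper name, file name, and flag handling are illustrative, not
the actual t_mpi.c or library code:

#include <sys/stat.h>
#include <mpi.h>
#include <stdio.h>

/* Return the size of `name`, preferring MPI_File_get_size but falling back
 * to stat() when the MPI implementation is known to report a wrong size
 * (the situation the --disable-mpi-size configure option describes). */
static MPI_Offset get_file_size(const char *name, int mpi_size_works)
{
    if (mpi_size_works) {
        MPI_File   fh;
        MPI_Offset size = 0;
        MPI_File_open(MPI_COMM_SELF, name, MPI_MODE_RDONLY, MPI_INFO_NULL, &fh);
        MPI_File_get_size(fh, &size);
        MPI_File_close(&fh);
        return size;
    } else {
        struct stat sb;
        return (stat(name, &sb) == 0) ? (MPI_Offset)sb.st_size : (MPI_Offset)-1;
    }
}

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);
    printf("size = %lld\n", (long long)get_file_size("t_mpi_data", 0));
    MPI_Finalize();
    return 0;
}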
... copyright notice.
Tested platform:
Kagiso only, since it is only a comment-block change. If it works on one
machine, it should work on all, I hope. Still need to check the parallel
build on copper.

Code cleanup
Description:
Trim trailing whitespace in Makefile.am and C/C++ source files to make
diffing changes easier.
Platforms tested:
None necessary, whitespace only change

... slight cleanup.
Description:
Changed the name of write type from write to write_pattern.
Platforms tested:
h5committested.

Bug fix.
Description:
It failed when only one process was used to run the test.
Solution:
Cleaned up the code a little so that it works for any number of processes.
Platforms tested:
h5committested.

Bug fix.
Description:
Sometimes the parallel prefix is given in a form such as nfs:/mnt/pfs, which
fails if passed to a non-MPIO VFD such as the one used by the default
H5Fcreate. Now h5_rmprefix is called; it returns the non-prefix component of
the file name, which is acceptable to the default H5Fcreate and the like.
Platforms tested:
Tested in heping parallel.
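The idea is that a name such as nfs:/mnt/pfs/test.h5 should be reduced to
/mnt/pfs/test.h5 before being handed to a non-MPIO VFD. A hypothetical
stand-alone sketch of that behavior (this is not the actual h5_rmprefix from
the HDF5 test library, and the file names are made up):

#include <stdio.h>
#include <string.h>

/* Hypothetical helper: return the part of `filename` after a leading
 * "prefix:" component, or the unchanged name if there is no prefix. */
static const char *strip_prefix(const char *filename)
{
    const char *colon = strchr(filename, ':');
    return (colon != NULL) ? colon + 1 : filename;
}

int main(void)
{
    printf("%s\n", strip_prefix("nfs:/mnt/pfs/test.h5"));  /* -> /mnt/pfs/test.h5 */
    printf("%s\n", strip_prefix("plain.h5"));              /* -> plain.h5 */
    return 0;
}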
Feature
(Code and tests are done by Christian. I just committed the code.)
Description:
Rewrote this test. It now covers these three cases:
/*
* Test following possible scenarios,
* Case 1:
* Sequential create a file and dataset with H5D_ALLOC_TIME_EARLY and large
* size, no write, close, reopen in parallel, read to verify all return
* the fill value.
* Case 2:
* Sequential create a file and dataset with H5D_ALLOC_TIME_EARLY but small
* size, no write, close, reopen in parallel, extend to large size, then close,
* then reopen in parallel and read to verify all return the fill value.
* Case 3:
* Sequential create a file and dataset with H5D_ALLOC_TIME_EARLY and large
* size, write just a small part of the dataset (second to the last), close,
* then reopen in parallel, read to verify all return the fill value except
* the small portion that has been written. Without closing it, write
* all parts of the dataset in an interleaved pattern, close it, and reopen
* it, and read to verify all data are as written.
*/
Platforms tested:
Tested in copper, tg-ncsa and heping, all in parallel mode.
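All three cases hinge on the dataset being created with early space
allocation, so that a later parallel open can read fill values from storage
that already exists. A minimal serial sketch of that creation step, using the
current API names; the file name and size are illustrative, not the test's
actual values:

#include "hdf5.h"

int main(void)
{
    hsize_t dims[1] = {1024 * 1024};

    hid_t file  = H5Fcreate("fill_early.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
    hid_t space = H5Screate_simple(1, dims, NULL);

    /* Allocate all dataset space at creation time and write fill values
     * as part of that allocation. */
    hid_t dcpl = H5Pcreate(H5P_DATASET_CREATE);
    H5Pset_alloc_time(dcpl, H5D_ALLOC_TIME_EARLY);
    H5Pset_fill_time(dcpl, H5D_FILL_TIME_ALLOC);

    hid_t dset = H5Dcreate2(file, "/filled", H5T_NATIVE_INT, space,
                            H5P_DEFAULT, dcpl, H5P_DEFAULT);

    H5Dclose(dset);
    H5Pclose(dcpl);
    H5Sclose(space);
    H5Fclose(file);
    return 0;
}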
Bug fix.
Description:
Some tests showed the file size was not as expected, but the error was
intermittent. This was a race condition: some processes finish
extend_chunked_dataset() sooner than others and return to the main
body, which proceeds to call the next test, which uses the same
test data file and alters it. That messes up the "slower" processes,
which then see an unexpected file size.
Also, the routine create_chunked_dataset(), which creates the test data
file, was actually executed by all processes. That is wrong.
Solution:
Added a barrier at the end of extend_chunked_dataset to make sure all
processes are done with the test data file before returning.
Changed create_chunked_dataset such that only one process would create
the test data file. The rest do nothing but wait for it to finish.
Platforms tested:
Tested in TG-NCSA in which the errors were detected.
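The shape of the fix is roughly the following sketch; the function names
follow the description above, but the bodies are placeholders and the file
name is made up, not the actual test code:

#include <mpi.h>

/* Only one process creates the shared test data file; everyone else waits. */
static void create_chunked_dataset(MPI_Comm comm, const char *filename)
{
    int rank;
    MPI_Comm_rank(comm, &rank);
    if (rank == 0) {
        /* ... create and populate the test data file here ... */
        (void)filename;
    }
    /* No process may touch the file before it is completely written. */
    MPI_Barrier(comm);
}

static void extend_chunked_dataset(MPI_Comm comm)
{
    /* ... per-process work on the shared test data file ... */

    /* Make sure every process is done with the file before the next test
     * reuses and modifies it. */
    MPI_Barrier(comm);
}

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);
    create_chunked_dataset(MPI_COMM_WORLD, "chunk_test.h5");
    extend_chunked_dataset(MPI_COMM_WORLD);
    MPI_Finalize();
    return 0;
}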
Bug fix #281
Description:
Committed a wrong copy in the previous checkin.
Solution:
Checked in the right one and did some code cleanup and rearrangement.
Platforms tested:
heping pp.

Bug #281
Description:
A dataset created in serial mode with the H5D_ALLOC_TIME_INCR allocation
setting was not extendible when the file was later opened in parallel mode,
either explicitly by H5Dextend or implicitly by writing to unallocated
chunks. This was because parallel mode expects the allocation mode to be
H5D_ALLOC_TIME_EARLY only.
Solution:
Modified the library to allocate more space when needed or directed if the
file is opened in parallel mode, independent of the dataset's allocation
mode.
Platforms tested:
Heping pp.
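To illustrate the failing scenario: a dataset like the one below, created
serially with incremental allocation, could previously not be extended (or
have its unallocated chunks written) once the file was reopened with a
parallel file driver. A serial sketch of the creation side only, using the
current API names; the names and sizes are illustrative:

#include "hdf5.h"

int main(void)
{
    hsize_t dims[1]     = {10};
    hsize_t maxdims[1]  = {H5S_UNLIMITED};
    hsize_t chunk[1]    = {10};
    hsize_t new_dims[1] = {1000};

    hid_t file  = H5Fcreate("incr.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
    hid_t space = H5Screate_simple(1, dims, maxdims);
    hid_t dcpl  = H5Pcreate(H5P_DATASET_CREATE);
    H5Pset_chunk(dcpl, 1, chunk);
    H5Pset_alloc_time(dcpl, H5D_ALLOC_TIME_INCR);  /* chunks allocated on first write */

    hid_t dset = H5Dcreate2(file, "/d", H5T_NATIVE_INT, space,
                            H5P_DEFAULT, dcpl, H5P_DEFAULT);

    /* Extending the dataset (H5Dextend at the time of this commit, now
     * H5Dset_extent) is the operation that failed in parallel mode. */
    H5Dset_extent(dset, new_dims);

    H5Dclose(dset);
    H5Pclose(dcpl);
    H5Sclose(space);
    H5Fclose(file);
    return 0;
}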