Commit message log
|
doesn't return the correct file size from MPI_File_get_size. Bypass this
problem by replacing it with stat. Add a --disable-mpi-size option in
configure to indicate that this function doesn't work properly. Also add a
test in testpar/t_mpi.c; if it returns the wrong file size, print a warning.
Tested on kagiso (parallel), because the same change was already tested for
v1.6 on several platforms (kagiso, cobalt, copper, and sol).
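For illustration, the cross-check in t_mpi.c amounts to roughly the
following sketch (not the actual test code; the helper name
check_mpi_file_size and the error handling are invented for this example):

#include <stdio.h>
#include <sys/stat.h>
#include <mpi.h>

/* Compare the size reported by MPI_File_get_size against stat(2) and
 * warn when they disagree.  Informational only; the caller can still
 * exit 0. */
static int check_mpi_file_size(const char *name)
{
    MPI_File    fh;
    MPI_Offset  mpi_size = 0;
    struct stat sb;

    /* Cast keeps older MPI prototypes (char *filename) happy. */
    if (MPI_File_open(MPI_COMM_SELF, (char *)name, MPI_MODE_RDONLY,
                      MPI_INFO_NULL, &fh) != MPI_SUCCESS)
        return -1;
    MPI_File_get_size(fh, &mpi_size);
    MPI_File_close(&fh);

    if (stat(name, &sb) < 0)
        return -1;

    if ((MPI_Offset)sb.st_size != mpi_size) {
        printf("Warning: MPI_File_get_size reported %lld bytes but stat "
               "reports %lld bytes\n",
               (long long)mpi_size, (long long)sb.st_size);
        return 1;
    }
    return 0;
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    if (argc > 1)
        check_mpi_file_size(argv[1]);
    MPI_Finalize();
    return 0;
}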
|
Regenerate configuration files after latest checkin
|
Added a header message explaining the purpose of the test and that it is
for information only and always exits 0.
Also added a summary at the end.
Some other cosmetic changes (moved a couple of functions up; added some
more printf and fflush statements).
Tested platform:
kagiso.
|
(combination of) filters. Tested on copper and kagiso.
|
WIN32, so I've standardized all #ifdefs to use _WIN32. This should not
affect any other platform.
Tested:
Visual Studio (32- and 64-bit) on Windows XP
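For illustration, the standardized guard is simply the usual pattern below
(a generic example rather than an excerpt from this checkin):

/* Use _WIN32 (defined by both 32- and 64-bit Windows compilers) rather
 * than the older WIN32 macro when guarding Windows-only code. */
#ifdef _WIN32
#include <windows.h>
#include <io.h>
#else
#include <unistd.h>
#endif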
|
Clean up problems from error handling API changes in parallel and
threadsafe builds.
Tested on:
FreeBSD/64 6.2 (liberty) w/parallel & threadsafe
|
most recent versions of the autotools.
The updated versions are: autoconf 2.61, automake 1.10.0, and libtool 1.5.22.
Tested on kagiso.
|
some comments in the
template file config/Makefile.am.blank.
This is just a cleanup checkin. Tested on kagiso.
|
bin/makehelp (formatting
the output in the makefile was pretty hard).
Tested that make still works on kagiso; no code changes at all.
|
Description: Multiple copies of the copyright notice appeared in
Makefile.in. This was due to automake copying the copyright notice from
included files such as config/commence.am.
Solution: Automake treats double hashes as comments and does not copy them
to Makefile.in. Changed all the copyright notices in config/*.am to use
double hashes.
Tested: kagiso via bin/reconfigure.
|
Dependency line in the Makefile.
Makefile change only. Tested on kagiso; to be tested on cobalt as well.
|
Ran reconfigure to generate the Makefile.in files.
|
Tested: visual inspection as they are all just comments.
|
copyright notice.
Tested platform:
Kagiso only, since it is only a comment block change. If it works on one
machine, it should work on all, I hope. Still need to check the parallel
build on copper.
|
It seems that while Cygwin supports the time command, it has trouble with
the syntax
srcdir="../../hdf5/test" time ./testhdf5
and complains.
The solution is to test the above case in configure and not to use the time
command if it fails; Cygwin is fine with
srcdir="../../hdf5/test" ./testhdf5
Tested on Cygwin and kagiso. This feature shouldn't be a major compatibility
problem since every platform but Cygwin is already fine with the current
syntax.
|
h5_mpi_get_file_size() is no longer used, and the unused code caused some
compiler warnings. Removed the whole routine.
Tested on heping in parallel mode.
|
Tested in heping pp.
|
The version of libtool used by HDF5 isn't directly affected by the reconfigure
script; instead, libtoolize --force must be used by hand. Libtool was the
source of the problem, so rolling its version back to 1.5.14 should solve the
issue (at least temporarily).
Reconfigure should still work on both heping and kagiso.
Tested on heping, kagiso, and tg-login3.
|
Hopefully this will fix
issues on tg-login3.
bin/reconfigure should still work on both heping/mir and kagiso.
|
Should disable linking against shared libraries in Fortran for compilers
that don't support shared libraries.
Should also fix a problem when the wrong Fortran file extension was
specified.
If these changes don't solve the Daily Test issues, I'll look at backing
out the autotool version change until I have time to fix them.
Tested on heping, kagiso, juniper.
|
Linux machines.
Updated to the latest versions of autotools.
Tested on kagiso, heping, and juniper. Let me know if you have any problems.
|
Fix parallel build failure for property list class initialization refactor.
Tested on:
AIX (copper) w/parallel
|
Bug fix.
Description:
AIX complained if some files were still open when MPI_Finalize was called,
so the code called _exit without calling MPI_Finalize. But on Linux hosts
with MPICH, the MPI processes terminated while the launch processes got
stuck waiting for them to end properly and would hang forever. As more
tests ran, more processes got stuck.
Solution:
To please both AIX and MPICH, the MPI file handles are retrieved and
closed outside of the HDF5 library; then MPI_Finalize is called, and
then _exit.
Tested:
on heping and copper.
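A minimal sketch of that shutdown sequence, assuming the MPI-IO VFD (where
H5Fget_vfd_handle hands back a pointer to the underlying MPI_File); the
function name and error handling are illustrative, not the actual test code:

#include <unistd.h>
#include <mpi.h>
#include "hdf5.h"

/* Close the file's MPI handle behind HDF5's back, then finalize MPI and
 * exit without going through the normal library shutdown path. */
static void finalize_and_bail(hid_t file_id, hid_t fapl_id)
{
    MPI_File *mpifh = NULL;

    /* For the MPI-IO driver the VFD file handle is an MPI_File. */
    if (H5Fget_vfd_handle(file_id, fapl_id, (void **)&mpifh) >= 0 && mpifh)
        MPI_File_close(mpifh);

    MPI_Finalize();
    _exit(0);
}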
|
Bug fix (Bug 544)
Description:
SGI Altix's MPI_File_get_size overflowed at 2 GB and above.
Replaced h5_mpi_get_file_size calls with h5_get_file_size.
Tested:
Cobalt.
|
only tested if it is enabled.
Added Direct VFD status to the configure summary.
Removed a line left over from Pablo support. Oops!
|
Bug fix (Bug 544)
Description:
SGI Altix's MPI_File_get_size overflowed at 2 GB and above.
Put in a temporary patch to use stat() instead, to make Cobalt pass this
test (bigdset). A better fix (such as detecting whether MPI_File_get_size
works before using it) is preferred.
Tested:
Cobalt and Heping.
|
Description:
It seems that, on AIX, calling MPI_Finalize without closing all files
results in an error. This causes t_pflush1 to fail because the whole point
of the test is to see what happens if you don't close a file. Try getting
rid of the call to MPI_Finalize to see if this will silence the error.
Tested:
AIX (copper)
|
utility
and quote its arguments. Also checks for the 'socket' library on
Solaris.
If this patch passes the Daily Tests and makes the user happy, I'll
port it back to the 1.6 branch.
Tested on mir and sol.
|
Description:
Fix copper failures by adding an MPI_Finalize call, and also close the
dataset and file in case of failure prior to exiting.
|
Description:
Preliminary test of H5Fflush to verify that it still works when using MPIO collective mode.
Platforms:
Linux (heping)
|
Re-run 'bin/reconfigure' script after recent checkins.
Tested on:
none - shouldn't have any effect on compilation.
|
Description:
Add per-directory abbreviated copyright notices
(abbreviated COPYING files pointing to full notices).
Tested:
MANIFEST verified; not otherwise tested.
|
Since these examples need to follow filesystem paths, the Makefiles need
to create directories in the examples directory; added this to the
Makefile.am.
Tested on Windows, mir, juniper
|
H5C_insert_entry() allowing insertion and pinning of a cache entry in one
call.
h5committested.
|
VFD is being used
when a test is run.
Running reconfigure also regenerated error header files (because someone edited
them manually?).
|
independent IO with file setview.
To activate this test, add the command option -i. For example, on IBM AIX,
typing "poe testphdf5 -i" will test the library with independent IO with
file setview; it simply replaces all the collective IO tests with
independent IO with file setview.
|
"make check-vfd" will now run all tests in the test directory with different
file drivers (at least, all of those tests that use the testing framework's
FAPL). Tests that fail will be skipped.
This is not a perfect fix, but is better than nothing.
Along with this change, check-vfd should be added to the Daily Tests.
|
1) Added trace file support to the metadata cache. This allows all
metadata cache calls to be captured in trace files for optimization
and debugging purposes.
2) Added an expunge entry function. This allows an entry to be deleted
from the cache without writing it to disk, even if it is dirty.
3) Added a function call to resize pinned entries.
4) Added code to deal with entries that are dirty on load. This is
needed in support of a bug fix which can alter object headers on load
to repair files.
5) Added progress reporting code to the "MDC API smoke check" test in
cache_api.c. To enable the progress reporting, set report_progress to
TRUE in mdc_api_call_smoke_check().
Tested with h5committest, and a parallel test on phoenix (a dual-Athlon
Linux box).
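If the trace facility is driven through the usual metadata-cache
configuration interface, turning it on from an application would look
roughly like the sketch below (hedged: the available fields depend on the
library version, and the trace file name is made up):

#include <string.h>
#include "hdf5.h"

/* Enable metadata-cache trace-file output for an open file. */
static herr_t enable_mdc_trace(hid_t file_id)
{
    H5AC_cache_config_t config;

    /* The version field must be set before querying the current config. */
    memset(&config, 0, sizeof(config));
    config.version = H5AC__CURR_CACHE_CONFIG_VERSION;
    if (H5Fget_mdc_config(file_id, &config) < 0)
        return -1;

    config.open_trace_file = 1;                       /* hbool_t TRUE */
    strcpy(config.trace_file_name, "mdc_trace.txt");  /* illustrative name */

    return H5Fset_mdc_config(file_id, &config);
}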
|
Code cleanup
Description:
Trim trailing whitespace in Makefile.am and C/C++ source files to make
diffing changes easier.
Platforms tested:
None necessary, whitespace only change
|
Configuration feature
Description:
Previously, shared libraries were only tested when static libraries were
not installed.
Solution:
'make install' now tests both static and shared libraries if both are
installed. Also cleaned up a line in commence.am that was including the HL
library in all Makefiles.
Platforms tested:
mir (Makefile change only)
|
Bug fix - Bugzilla #552
Description:
On Cray X1, trying to use ':' as an argument confused the system.
Solution:
Added a test in configure to see whether ':' as an argument is bad.
If so, the test is skipped.
Platforms tested:
mir, Cray X1 (change to configure only)
|
slight cleanup.
Description:
Changed the name of the write type from write to write_pattern.
Platforms tested:
h5committested.
|
Bug fix.
Description:
It failed when only one process was used to run the test.
Solution:
Cleaned up the code a little so that it works for any number of processes.
Platforms tested:
h5committested.
|
Some collective chunk IO macro names are confusing; change them to more
meaningful names.
Description:
H5Pset_dxpl_mpio_chunk_opt sets a flag so that the library will directly do
one linked IO or multi-chunk IO collectively on chunked storage; that is,
the library won't do any analysis to determine this.
The flags for the enum type we used before were:
H5FD_MPIO_OPT_ONE_IO
H5FD_MPIO_OPT_MULTI_IO
They are not good names for two reasons:
1. They don't reflect chunked storage.
2. OPT is redundant and misleading.
Solution:
Change the names to
H5FD_MPIO_CHUNK_ONE_IO
H5FD_MPIO_CHUNK_MULTI_IO
Platforms tested:
Since only macro names are changed, no need to test with h5committest.
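Roughly how the renamed flags are used on a data transfer property list (a
sketch; the surrounding property-list setup is generic rather than taken
from this checkin):

#include "hdf5.h"

/* Build a collective transfer property list that forces the linked-chunk
 * ("one IO") path instead of letting the library analyze the access. */
static hid_t make_chunk_one_io_dxpl(void)
{
    hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);

    if (dxpl < 0)
        return -1;
    H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);

    /* Use H5FD_MPIO_CHUNK_MULTI_IO here to force the multi-chunk path. */
    H5Pset_dxpl_mpio_chunk_opt(dxpl, H5FD_MPIO_CHUNK_ONE_IO);

    return dxpl;
}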
|
Bug fix.
Description:
Sometimes the parallel prefix is given in a form such as nfs:/mnt/pfs,
which fails if handed to a non-MPIO VFD such as the default one used by
H5Fcreate. Now h5_rmprefix is called; it returns the non-prefix component
of the file name, which is acceptable to the default H5Fcreate and the like.
Platforms tested:
Tested on heping in parallel.
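The prefix stripping amounts to something like the following stand-in
(illustrative only; this is not the real h5_rmprefix implementation and the
helper name is hypothetical):

#include <string.h>

/* Return the part of a parallel file name after a "driver:" prefix,
 * e.g. "nfs:/mnt/pfs/f.h5" -> "/mnt/pfs/f.h5"; return the name unchanged
 * if no prefix is present. */
static const char *strip_parallel_prefix(const char *name)
{
    const char *colon = strchr(name, ':');

    return (colon != NULL) ? colon + 1 : name;
}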
|
Feature
(Code and tests were done by Christian; I just committed the code.)
Description:
Rewrote the purpose of this test. It now tests these three cases:
/*
 * Test the following possible scenarios:
 * Case 1:
 * Sequentially create a file and dataset with H5D_ALLOC_TIME_EARLY and
 * large size, no write, close, reopen in parallel, read to verify all
 * return the fill value.
 * Case 2:
 * Sequentially create a file and dataset with H5D_ALLOC_TIME_EARLY but
 * small size, no write, close, reopen in parallel, extend to large size,
 * then close, then reopen in parallel and read to verify all return the
 * fill value.
 * Case 3:
 * Sequentially create a file and dataset with H5D_ALLOC_TIME_EARLY and
 * large size, write just a small part of the dataset (second to the last),
 * close, then reopen in parallel, read to verify all return the fill value
 * except the small portion that has been written. Without closing it,
 * write all parts of the dataset in an interleaved pattern, close it,
 * reopen it, and read to verify all data are as written.
 */
Platforms tested:
Tested on copper, tg-ncsa, and heping, all in parallel mode.