Bug fix
Description:
Hyperslab selections that had a selection offset and were applied to a
chunked dataset could get into an infinite loop or core dump if the same
selection was used multiple times, with different selection offsets.
Solution:
"Normalize" the selection using the selection offset, generate the selections
for the overlapped chunks, and then "denormalize" the selection.
Platforms tested:
FreeBSD 4.11 (sleipnir)
Too minor to require h5committest
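
A minimal user-level sketch of the access pattern that triggered the bug (1.6-era C API; the file "example.h5" and chunked dataset "dset" are assumptions, and this shows the triggering pattern, not the internal fix):

    #include "hdf5.h"

    int main(void)
    {
        hid_t   file   = H5Fopen("example.h5", H5F_ACC_RDONLY, H5P_DEFAULT);
        hid_t   dset   = H5Dopen(file, "dset");   /* H5Dopen2(file, "dset", H5P_DEFAULT) with newer APIs */
        hid_t   fspace = H5Dget_space(dset);
        hsize_t start[2] = {0, 0}, count[2] = {4, 4};
        hid_t   mspace = H5Screate_simple(2, count, NULL);
        int     buf[16];

        /* One hyperslab selection, reused below with different offsets */
        H5Sselect_hyperslab(fspace, H5S_SELECT_SET, start, NULL, count, NULL);

        for (int i = 0; i < 3; i++) {
            hssize_t offset[2] = {i * 4, 0};
            H5Soffset_simple(fspace, offset);   /* same selection, new offset each pass */
            H5Dread(dset, H5T_NATIVE_INT, mspace, fspace, H5P_DEFAULT, buf);
        }

        H5Sclose(mspace); H5Sclose(fspace); H5Dclose(dset); H5Fclose(file);
        return 0;
    }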
Code cleanup
Description:
Clean up a few compiler warnings of various sorts...
Platforms tested:
FreeBSD 4.11 (sleipnir)
Too minor to require h5committest
Code cleanup
Description:
Clarify error string.
Platforms tested:
FreeBSD 4.11 (sleipnir)
Too minor to require h5committest
Activate collective IO support for irregular selections inside HDF5 datasets.
This support does not yet include building the final, more complicated MPI derived datatype support for chunked storage.
Description:
Support collective IO for both contiguous and chunked storage with irregular selections (built with H5S_SELECT_OR from multiple hyperslab selections).
Solution:
Use MPI derived datatypes to implement this feature (a sketch of the selection setup follows this entry).
Problems that still need to be investigated:
Large irregular hyperslab selection tests may cause MPI to hang or exit abnormally with the MPICH family on various platforms. This is hard to debug because the failure is intermittent: sometimes it works and sometimes it does not. We will continue investigating these cases. It may not be a parallel HDF5 bug, since all tests pass with the recent version of poe on IBM platforms.
Platforms tested:
1. Linux heping 2.4.21-27.0.1 with mpich
2. AIX 5.1 copper with mpcc_r
3. Altix cobalt SGI linux 2.4.21-sgi304rp05031014_10149 with icc -lmpi
4. Linux Cluster SDSC TG, intel 8-r2 with mpich 1.2.5
5. NCSA Linux Cluster Tungsten, MPICH-TCP-1.2.5.2, Intel 8.0 under lustre
6. NCSA Linux Cluster Tungsten, MPICH-LAM-INTEL711, sometimes not working
7. NCSA Linux Cluster Tungsten, champion-pro-1.2.0-3, not working for other collective IO tests, but working for the irregular selection collective IO test
Misc. update:
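
A hedged sketch of the kind of irregular selection this enables, using public APIs (requires a parallel HDF5 build; the dataset handle dset and buffer buf are assumed, with a dataset at least 6x6):

    #include "hdf5.h"

    /* Build an irregular (multi-block) selection with H5S_SELECT_OR and
     * request collective transfer for it. */
    void collective_irregular_write(hid_t dset, const int *buf)
    {
        hsize_t start1[2] = {0, 0}, count1[2] = {2, 2};
        hsize_t start2[2] = {4, 4}, count2[2] = {2, 2};
        hsize_t nelem     = 8;                          /* 2x2 + 2x2 elements */

        hid_t fspace = H5Dget_space(dset);
        H5Sselect_hyperslab(fspace, H5S_SELECT_SET, start1, NULL, count1, NULL);
        H5Sselect_hyperslab(fspace, H5S_SELECT_OR,  start2, NULL, count2, NULL);

        hid_t mspace = H5Screate_simple(1, &nelem, NULL);

        hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
        H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);   /* collective, not independent */

        H5Dwrite(dset, H5T_NATIVE_INT, mspace, fspace, dxpl, buf);

        H5Pclose(dxpl); H5Sclose(mspace); H5Sclose(fspace);
    }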
Code cleanup, mostly
Description:
Remove remaining TBBT error info
Add new error code for block tracker
Platforms tested:
FreeBSD 4.11 (sleipnir)
Solaris 2.9 (shanti)
Description: For variable-length strings, H5Tget_class returned H5T_STRING as
its class, but H5Tdetect_class and H5Tget_member_class treated it as
H5T_VLEN. This is fixed so that all three functions treat it as H5T_STRING.
Some test cases have been added to dtypes.c.
Platforms tested: heping - already tested for v1.6 with h5committest
Misc. update: RELEASE.txt
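
A short sketch of the now-consistent behavior, using public APIs:

    #include "hdf5.h"
    #include <assert.h>

    int main(void)
    {
        /* Variable-length string datatype */
        hid_t str = H5Tcopy(H5T_C_S1);
        H5Tset_size(str, H5T_VARIABLE);

        assert(H5Tget_class(str) == H5T_STRING);        /* already reported H5T_STRING */
        assert(H5Tdetect_class(str, H5T_STRING) > 0);   /* now detected as a string, not as H5T_VLEN */

        H5Tclose(str);
        return 0;
    }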
Description: Removed PABLO from the source
Solution:
Platforms tested: arabica with 64-bit, copper with parallel,
heping with GNU C and C++ and PGI fortran (but
I disabled hl; there is some weird problem, only
on heping, where F9XMODFLAG is not
propagated to the Makefile files)
Misc. update:
Code cleanup
Description:
Change pablo mask to conform to the style used by the rest of the library
Platforms tested:
None needed - very, very minor
Code cleanup
Description:
Convert chunk iteration code to use skip lists instead of threaded, balanced
binary trees.
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel & szip
Too minor to require h5committest
Bug Fix/Code Cleanup/Doc Cleanup/Optimization/Branch Sync :-)
Description:
Generally speaking, this is the "signed->unsigned" change to selections.
However, in the process of merging code back, things got stickier and stickier
until I ended up doing a big "sync the two branches up" operation. So... I
brought back all the "infrastructure" fixes from the development branch to the
release branch (which I think were actually making some improvement in
performance) as well as fixed several bugs which had been fixed in one branch,
but not the other.
I've also tagged the repository before making this checkin with the label
"before_signed_unsigned_changes".
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel & fphdf5
FreeBSD 4.10 (sleipnir) w/threadsafe
FreeBSD 4.10 (sleipnir) w/backward compatibility
Solaris 2.7 (arabica) w/"purify options"
Solaris 2.8 (sol) w/FORTRAN & C++
AIX 5.x (copper) w/parallel & FORTRAN
IRIX64 6.5 (modi4) w/FORTRAN
Linux 2.4 (heping) w/FORTRAN & C++
Misc. update:
Fix whitespace
Description:
Clean up whitespace a bit
Platforms tested:
None, just eyeballed, very minor
small bug fix
Description:
When checking whether the current chunk covers the irregular hyperslab,
the ending point of the chunk was not updated.
This caused incorrect checking results and failures for some irregular hyperslab
selections.
Solution:
Update the ending point of the chunk.
Platforms tested:
Linux 2.4 + parallel (too small to use h5committest)
Misc. update:
Adding code for using MPI derived datatype to handle collective IO
Description:
No testing yet, won't affect the library.
Solution:
Platforms tested:
linux 2.4 + mpich 1.2.6
AIX 5.1 + mpcc_r
Misc. update:
Check in some new fixes for MPI derived datatype routines
Description:
The MPI derived datatype algorithm seems to work for a simple case;
however, there are still other problems that need to be solved, so the
code cannot be used for the time being. This check-in is only for debugging.
It won't affect other parts of the library.
Solution:
Platforms tested:
Linux 2.4 (heping, serial and parallel)
(Since no new tests were added and changes are mostly restricted to
one function, there is no need to test three platforms.)
Misc. update:
Adding code for the general MPI derived datatype in order to
better incorporate new fixes in the HDF5 library.
Description:
Note: This code has not been tested for general use.
Don't call these functions in your development of the HDF5 library.
Also, this code is stand-alone; it should not affect
other library code.
Solution:
Platforms tested:
Heping (C and parallel, Linux 2.4, mpich 1.2.6)
Arabica (C, C++, Fortran, Solaris 2.7)
Copper (C, C++, Fortran, AIX 5.1; NOTE: C++ FAILED, but it seems not due to this recent check-in)
Misc. update:
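
For context, a generic sketch of describing a noncontiguous selection with an MPI derived datatype (plain MPI calls; this is illustrative and not the HDF5-internal routines being checked in):

    #include <mpi.h>

    /* Describe two noncontiguous runs of ints (2 elements at offset 0,
     * 3 elements at offset 5) as a single MPI derived datatype. */
    void build_selection_type(MPI_Datatype *seltype)
    {
        int blocklens[2] = {2, 3};
        int displs[2]    = {0, 5};   /* displacements in units of MPI_INT */

        MPI_Type_indexed(2, blocklens, displs, MPI_INT, seltype);
        MPI_Type_commit(seltype);    /* usable as an MPI file view or buffer type */
    }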
Code cleanup
Description:
Fix a couple of return values from NULL -> FAIL.
Platforms tested:
FreeBSD 4.10 (sleipnir)
Too minor to require h5committest
Bug fix
Description:
Relax restrictions on parallel I/O to allow compressed, chunked datasets
to be read in parallel (collective access will be degraded to independent
access, but the information will still be retrieved).
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel
Solaris 2.7 (arabica)
IRIX64 6.5 (modi4)
h5committest
Bug fix & code cleanup
Description:
More dataset cleanups to get to a point where we can fix the chunked I/O
bug.
Also fix a couple of errors in the recent file object resurrection changes,
which should hopefully address the recent daily test failures (H5T.c).
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel
Solaris 2.7 (arabica)
h5committest
Bug fix/code cleanup
Description:
Clean up raw data I/O code to bundle the I/O parameters (dataset, DXPL ID,
etc) into a single struct to pass around through the dataset I/O routines,
since they are always passed together, until very near the bottom of the I/O
stack.
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel
Solaris 2.7 (arabica)
IRIX64 6.5 (modi4)
h5committest
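
A hypothetical illustration of the bundling pattern (the struct and field names are invented for illustration; they are not the actual internal H5D declarations):

    #include "hdf5.h"

    /* Hypothetical: gather the parameters that always travel together through
     * the I/O stack into one struct and pass a single pointer instead. */
    typedef struct dset_io_info {
        hid_t dset_id;      /* dataset being read or written */
        hid_t dxpl_id;      /* dataset transfer property list */
        hid_t file_space;   /* selection in the file */
        hid_t mem_space;    /* selection in memory */
    } dset_io_info_t;

    static herr_t do_read(const dset_io_info_t *io, hid_t mem_type, void *buf)
    {
        return H5Dread(io->dset_id, mem_type, io->mem_space, io->file_space,
                       io->dxpl_id, buf);
    }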
Purpose:
Feature
Description:
Datatypes and groups now use H5FO "file object" code that was previously
only used by datasets. These objects will hold a file open if the file
is closed but they have not yet been closed. If these objects are unlinked
then relinked, they will not be destroyed. If they are opened twice (even
by two different names), both IDs will "see" changes made to the object
using the other ID.
When an object is opened using two different names (e.g., if a dataset was
opened under one name, then mounted and opened under its new name), calling
H5Iget_name() on a given hid_t will return the name used to open that hid_t,
not the current name of the object (this is a feature, and a change from the
previous behavior of datasets).
Solution:
Used H5FO code that was already in place for datasets. Broke H5D_t's, H5T_t's,
and H5G_t's into a "shared" struct and a private struct. The shared structs
(H5D_shared_t, etc.) hold the object's information and are used by all IDs
that point to a given object in the file. The private structs are pointed
to by the hid_t and contain the object's group entry information (including its
name) and a pointer to the shared struct for that object.
This changed the naming of structs throughout the library (e.g., datatype->size
is now datatype->shared->size). I added an updated H5Tinit.c to windows.zip.
Platforms tested:
Visual Studio 7, sleipnir, arabica, verbena
Misc. update:
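
A hedged sketch of the name-reporting behavior described above (1.6-era calls; the file handle and an existing dataset at "/data" are assumptions):

    #include "hdf5.h"

    void show_name_behavior(hid_t file)
    {
        char name[64];

        /* Give the same object a second name, then open it under both names */
        H5Glink(file, H5G_LINK_HARD, "/data", "/alias");
        hid_t d1 = H5Dopen(file, "/data");     /* H5Dopen2(..., H5P_DEFAULT) in newer APIs */
        hid_t d2 = H5Dopen(file, "/alias");

        H5Iget_name(d1, name, sizeof(name));   /* returns "/data": the name d1 was opened with */
        H5Iget_name(d2, name, sizeof(name));   /* returns "/alias": likewise, per handle */

        H5Dclose(d1); H5Dclose(d2);
    }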
Bug fix.
Description:
Allow I/O to occur on 0 element selections.
Platforms tested:
h5committest
Code cleanup
Description:
Fix another batch of minor differences between the development and release
branches.
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel
Too minor to require h5committest
Purpose:
Bug fix
Description:
When a simple dataspace is created, its extent should be set before using it,
or it will silently function as a NULL dataspace.
Solution:
Added checks on user-supplied dataspaces. Now dataspaces without extents set
will throw errors; users must explicitly set a dataspace to be NULL.
Platforms tested:
sleipnir, windows
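
A brief sketch of the distinction (assuming the development-branch API where an explicitly NULL dataspace class is available):

    #include "hdf5.h"

    int main(void)
    {
        /* A simple dataspace whose extent was never set now raises an error
         * when it is used for I/O... */
        hid_t no_extent = H5Screate(H5S_SIMPLE);

        /* ...while a dataspace with no elements must be requested explicitly. */
        hid_t null_space = H5Screate(H5S_NULL);

        H5Sclose(no_extent);
        H5Sclose(null_space);
        return 0;
    }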
Code optimization
Description:
Changed calloc() calls to malloc() calls allocating background buffers
during dataset writes, since the background buffer information will be read
from disk anyway, overwriting any existing values.
Platforms tested:
Solaris 2.7 (arabica)
FreeBSD 4.10 (sleipnir) w/parallel
Too minor to require h5committest
Code cleanup
Description:
Clean up collective chunking code a bit.
Also, add '--enable-instrument' configure flag to have a mechanism for
determining that optimized operations happened correctly in the library (instead
of just the "normal" way) by allowing 'flag' properties to be set outside the
library and set when the "right" thing happens. This is mainly for debugging
and regression checks, so we make certain we don't break optimized I/O by
accident. It's enabled by default when --enable-debug is on (which is on by
default in the development branch and off by default in the release branch),
but can also be independently controlled with its own configure flag.
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel
IBM p690 (copper) w/parallel
Add a special section of code for collective chunk IO tests.
Description:
The current patch for collective chunk IO support in HDF5 can only handle
some special cases (which, however, cover many applications).
Inside the source code, we check carefully to make sure that other cases do
not fall into this category and do not use collective IO.
We would also like to test whether those collective conditions were met in our
test programs.
Parallel HDF5 currently handles these collective IO requests in a special way:
if the library finds it cannot do collective IO, it silently changes to independent IO.
So there is basically no better way to check whether the library is doing what it should do without "hacking" into the HDF5 source code for testing purposes. But the "hacking" should not affect how the library works and should be easy to pull out once a more general collective IO algorithm is working.
With Quincey's suggestion, we used the HDF5 property APIs to finish the job.
Solution:
The approach includes three parts (a rough sketch of the property mechanism follows this entry):
1) In the test program, insert a property into the data transfer property list
and set a default value for this property.
2) Inside H5Dio.c, when the library finds that it cannot do collective IO with chunked storage, it changes the default value.
3) The test program then rechecks the value after H5Dwrite or H5Dread to evaluate whether
the current collective IO case is doing the right thing.
Note: The test won't stop when it finds that the library is not doing the right thing; it will probably finish normally. For now, the test program just prints an error message. This should be changed later.
Platforms tested:
copper, arabica, eirene
Misc. update:
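
A hedged sketch of the property-list mechanism, shown with the modern H5Pinsert2 spelling (the property name "coll_chunk_instrument" and its default are invented for illustration; the real tests use their own internal names; a parallel build and the dataset, dataspaces, and buffer are assumed):

    #include <stdio.h>
    #include "hdf5.h"

    /* Plant a temporary property in the DXPL, do the transfer, then check
     * whether the library changed it on the way through. */
    void check_collective_path(hid_t dset, hid_t mspace, hid_t fspace, const int *buf)
    {
        int   used_collective = 1;                 /* assume the optimized path */
        hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);

        H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);
        H5Pinsert2(dxpl, "coll_chunk_instrument", sizeof(int), &used_collective,
                   NULL, NULL, NULL, NULL, NULL, NULL);

        H5Dwrite(dset, H5T_NATIVE_INT, mspace, fspace, dxpl, buf);

        H5Pget(dxpl, "coll_chunk_instrument", &used_collective);
        if (!used_collective)
            printf("library fell back to independent I/O\n");

        H5Pclose(dxpl);
    }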
Adding the first round of patches for supporting collective chunk IO in HDF5.
Description:
The current HDF5 library doesn't support collective MPI-IO with chunked storage. When users set the collective option for a data transfer with chunked storage, the library silently converted the option to INDEPENDENT, and that caused a tremendous performance penalty. Some applications, like the
WRF parallel HDF5 IO module, have had to use contiguous storage for this reason. However, chunked storage has its own advantages (support for compression filters and extendible datasets), so making collective MPI-IO possible inside HDF5 with chunked storage is a very important task.
This check-in makes collective chunk IO possible for some special cases. The conditions are as follows (either case can use collective chunk IO):
1. For each process, the hyperslab selection of the file dataspace of each dataset is regular and fits in one chunk.
2. For each process, the hyperslab selection of the file dataspace of each dataset is a single hyperslab and the number of chunks for the hyperslab selection is equal.
Solution:
Lift the contiguous storage requirement for collective IO.
Use H5D_isstore_get_addr to get the corresponding chunk address. Then the original library routines take care of getting the correct address and make sure that the MPI file type is built correctly for collective IO.
Platforms tested:
arabica (sol), copper (AIX), eirene (Linux)
Parallel tests were checked on copper.
Misc. update:
Bug fix
Description:
Allow buffer parameter to H5Dread & H5Dwrite to be NULL if there are no
elements to transfer.
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel
Too minor to require h5committest
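
A minimal sketch of the now-permitted call (the dataset handle is assumed):

    #include "hdf5.h"

    /* Zero-element transfer: both selections are empty, so a NULL buffer is allowed. */
    void empty_write(hid_t dset)
    {
        hid_t fspace = H5Dget_space(dset);
        hid_t mspace = H5Scopy(fspace);

        H5Sselect_none(fspace);
        H5Sselect_none(mspace);

        H5Dwrite(dset, H5T_NATIVE_INT, mspace, fspace, H5P_DEFAULT, NULL);

        H5Sclose(mspace);
        H5Sclose(fspace);
    }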
Code optimization
Description:
Set up datatype ID for dataset's datatype on disk. This allows us to avoid
repeatedly copying the datatype when an ID is needed.
Also, clean up a few warnings in various other places.
Platforms tested:
Solaris 2.7 (arabica)
FreeBSD 4.10 (sleipnir) w/parallel
Too minor to require h5committest
Code cleanup
Description:
Clean up almost all warnings from Windows builds.
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel
Too minor to require h5committest
Purpose:
HDF5 now supports SZIP with no encoder.
Description:
SZIP can be configured to have both encoder and decoder or just to have the decoder. HDF5 can now query the configuration of any filter, and will throw errors if users try to write using a filter with encoding disabled.
Solution:
Added H5Zget_filter_info function, changed API for H5Pget_filter and H5P_get_filter_by_id. See SZIP RFC.
Platforms tested:
Copper (fortran, C++, parallel), Sleipnir (C++), Arabica (fortran, C++), Verbena (fortran, C++)
Misc. update:
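
A short sketch of the new query, using the public API:

    #include <stdio.h>
    #include "hdf5.h"

    int main(void)
    {
        unsigned int cfg = 0;

        /* Ask how the SZIP filter was built into this library */
        if (H5Zget_filter_info(H5Z_FILTER_SZIP, &cfg) < 0)
            return 1;

        if (cfg & H5Z_FILTER_CONFIG_ENCODE_ENABLED)
            printf("SZIP encoder present: datasets can be written with SZIP\n");
        if (cfg & H5Z_FILTER_CONFIG_DECODE_ENABLED)
            printf("SZIP decoder present: SZIP-compressed data can be read\n");

        return 0;
    }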
Code cleanup & minor optimization
Description:
Re-work the way interface initialization routines are specified in the
library to avoid the overhead of checking for them in routines where there is
no interface initialization routine. This cleans up warnings with gcc 3.4,
reduces the library binary size a bit (about 2-3%) and should speed up the
library's execution slightly.
Platforms tested:
FreeBSD 4.10 (sleipnir) w/gcc34
h5committest
Code optimization
Description:
Avoid memcpy() when setting up new chunk coordinates
Platforms tested:
Solaris 2.7 (arabica)
FreeBSD 4.10 (sleipnir) w/parallel
Too minor to require h5committest
Code optimization
Description:
Eliminate memcpy() when using the default DXPL by pointing at the existing
default object instead of copying it.
Platforms tested:
Solaris 2.7 (arabica)
FreeBSD 4.10 (sleipnir) w/parallel
Too minor to require h5committest
Code optimization
Description:
Use 'size_t' instead of 'hsize_t' to track the number of elements in
memory buffers, especially for type conversion.
Platforms tested:
Solaris 2.7 (arabica)
FreeBSD 4.10 (sleipnir) w/parallel
Too minor to require h5committest
Code optimization
Description:
Revised dataspace selections to use a more "object oriented" mechanism
to set the function pointers for each selection and selection iterator. This
reduces the amount and number of times that dataspace selection info has to
be copied.
Additionally, change hyperslab selection information to be dynamically
allocated instead of an inline struct.
Platforms tested:
Solaris 2.7 (arabica)
FreeBSD 4.10 (sleipnir) w/parallel
Too minor to require h5committest
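
A hypothetical illustration of the dispatch pattern (invented names, not the actual H5S internals):

    #include "hdf5.h"

    /* Hypothetical: each selection type fills in one table of operations, and
     * the generic selection code calls through the table instead of switching
     * on the selection type everywhere. */
    typedef struct sel_ops {
        herr_t   (*copy)(void *dst, const void *src);
        hssize_t (*num_elem)(const void *sel);
        herr_t   (*iter_init)(const void *sel, void *iter);
        herr_t   (*release)(void *sel);
    } sel_ops_t;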
Code optimization
Description:
Eliminate redundant memset() when creating chunk map structure.
Platforms tested:
Solaris 2.7 (arabica)
FreeBSD 4.10 (sleipnir) w/parallel
Code optimization
Description:
Don't allocate conversion buffer larger than the user's buffer.
Platforms tested:
Solaris 2.7 (arabica)
FreeBSD 4.10 (sleipnir) w/parallel
too minor to require h5committest
Refactor code
Description:
Move chunk and contiguous cached raw data from file information to dataset
information. This simplifies a number of internal interfaces, aligns the
code with its purpose better, and should allow more optimizations to the
chunked data I/O performance.
Platforms tested:
Solaris 2.7 (arabica)
FreeBSD 4.10 (sleipnir)
h5committest
Misc. update:
Code optimization
Description:
Don't recompute the internal index value for looking up the chunk in the
hash table, just use the value already computed from iterating through the
chunks.
Platforms tested:
Solaris 2.7 (arabica)
FreeBSD 4.9 (sleipnir) w/parallel
Code optimization & bug fix
Description:
When dimension information is being stored in the storage layout message
on disk, it is stored as 32-bit quantities, possibly truncating the dimension
information if a dimension is greater than 32 bits in size.
Solution:
Fix the storage layout message problem by revising file format to not store
dimension information, since it is already available in the dataspace.
Also revise the storage layout data structures to be more compartmentalized
for the information for contiguous, chunked and compact storage.
Platforms tested:
FreeBSD 4.9 (sleipnir) w/parallel
Solaris 2.7 (arabica)
h5committest
Code optimization
Description:
Expand the use of macros to inline trivial function pointer lookup and
calls to reduce the overall number of functions invoked during normal operation
of the library.
Platforms tested:
Solaris 2.7 (arabica)
FreeBSD 4.9 (sleipnir) w/parallel
Too minor to require h5committest
Code optimization
Description:
Defer creating the span trees for hyperslab selections until they are
actually needed (which may be never, in certain circumstances).
Platforms tested:
Solaris 2.7 (arabica)
FreeBSD 4.9 (sleipnir) w/parallel
Code optimization
Description:
Further reduce the number of copies we make of a hyperslab selection for
chunked I/O, especially when we are only going to throw the old selection away
for a new one.
Platforms tested:
Solaris 2.7 (arabica)
FreeBSD 4.9 (sleipnir) w/parallel
Bug fix.
Description:
Fixed handling of trivial data transform expressions (like 5/3 + 3) and made some
data conversion fixes. Also added more tests to dtransform.c.
Solution:
Added some more checks in the H5Z_xform_reduce_tree function to see if perhaps
the transform expression is complicated and is a non-trivial reduction.
Added tests for data conversion to dtransform as well as tests for a trivial
data transform expression.
Platforms tested:
h5committest'ed, except used arabica instead of sol and didn't do on copper
b/c no logon there. Problem noted with mtime test...doesn't appear to be
related to anything having to do with data transforms.
Code cleanup
Description:
Refactored data transform code to reduce amount of symbols in the global
scope and also cleaned up & simplified the code a bit.
Platforms tested:
h5committest (minus copper, plus serial modi4)
FreeBSD 4.9 (sleipnir) w & w/o parallel
New Feature
Description:
Add the data transform function, H5Pset_transform().
Platforms tested:
"h5committested".
Copper was down. Ran parallel tests in sol instead.
Misc. update:
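
A brief usage sketch (in released HDF5 versions the public call is spelled H5Pset_data_transform; the dataset, dataspaces, and buffer are assumed):

    #include "hdf5.h"

    /* Apply an element-wise expression to the data while it is transferred. */
    void scaled_write(hid_t dset, hid_t mspace, hid_t fspace, const int *buf)
    {
        hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);

        H5Pset_data_transform(dxpl, "(x + 100) / 2");   /* x stands for each data element */
        H5Dwrite(dset, H5T_NATIVE_INT, mspace, fspace, dxpl, buf);

        H5Pclose(dxpl);
    }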
Code optimization
Description:
Reduce the number of times the number of elements in a selection is
computed.
Platforms tested:
Solaris 2.7 (arabica)
FreeBSD 4.9 (sleipnir) w/parallel
too minor to require h5committest
Code optimization
Description:
Move the element size for the selection into the selection iterator instead
of always passing it as a parameter.
Also, eliminate another 64-bit multiply for "all" selections.
Platforms tested:
Solaris 2.7 (arabica)
FreeBSD 4.9 (sleipnir)
too minor to require h5committest
Code optimization
Description:
Avoid dividing the chunk coordinates at the top levels of the chunk I/O
routines, only to multiply them at the bottom of the routines.
Platforms tested:
Solaris 2.7 (arabica)
FreeBSD 4.9 (sleipnir)
too minor to require h5committest