Bring r16354 back from trunk:
Refactor internal layout information, making it easier to add another
type of chunk index.
Tested on:
FreeBSD/32 (duty)
(other configurations tested with original change)

Bring revision 16278 back from revise_chunks branch:
Update layout information in DCPL to unify all information in one
underlying property and switch to using H5O_layout_t for storing it, which
simplifies things considerably.
Also, fix many compiler warnings.
Tested on:
FreeBSD/32 6.3 (duty) in debug mode
(Original patch tested on many machines)

Bring r16182 back from trunk:
Rename internal routines, variables, macros, typedefs, etc. for chunked
dataset storage from "istore" to some variant of "chunk" or "btree".
Tested on:
FreeBSD/32 6.3 (duty) in debug mode

Bring the H5Dget_access_plist functions to the 1.8 branch.
Tested: kagiso, jam (trunk version tested with h5committest)
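A minimal sketch of how an application might use H5Dget_access_plist once it is in the 1.8 branch, e.g. to inspect the chunk cache settings a dataset was opened with; the file and dataset names below are hypothetical:

    #include "hdf5.h"

    int main(void)
    {
        hid_t  file, dset, dapl;
        size_t rdcc_nslots, rdcc_nbytes;
        double rdcc_w0;

        file = H5Fopen("example.h5", H5F_ACC_RDONLY, H5P_DEFAULT);
        dset = H5Dopen2(file, "/dset", H5P_DEFAULT);

        /* Retrieve a copy of the access property list used to open the dataset */
        dapl = H5Dget_access_plist(dset);

        /* Query the raw data chunk cache parameters recorded in that list */
        H5Pget_chunk_cache(dapl, &rdcc_nslots, &rdcc_nbytes, &rdcc_w0);

        H5Pclose(dapl);
        H5Dclose(dset);
        H5Fclose(file);
        return 0;
    }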

When --enable-debug is turned on, a special macro block (H5_HAVE_INSTRUMENTED_LIBRARY) inside HDF5 is compiled in to check whether certain collective chunk IO test cases are run with the correct settings (one link, multiple chunk, etc.). However, when complicated derived datatypes are not supported by some MPI-IO packages, the library has to switch from one-link IO (with or without the optimization) to multi-chunk IO (with or without the optimization). The current test suite doesn't know this and generates a false assertion failure message.
This check-in fixes the problem by providing a second property to avoid the false failure message.

Bring back revision 15131 from trunk:
Finish omnibus chunked dataset I/O refactoring, to separate general
actions on chunked datasets from actions that are specific to using the v1
B-tree index.
Cleaned up a few bugs and added some additional tests also.
Tested on:
Mac OS X/32 10.5.3 (amazon)
Linux/32 2.4 (chicago)

Bring revision 14860 back to the 1.8 branch; the change log for rev 14860 is:
Description:
Omnibus raw data I/O revisions, with wide-ranging changes and
refactoring, in order to prepare for implementing the "fast append" feature.
These changes remove the majority of the code duplication for raw data
I/O which has crept in over the last ten years and introduce a more
object-oriented design for operating on different types of dataset storage.
Chunked storage no longer has its own I/O routines; it is now handled
as either contiguous (if the chunk is not pulled into the cache) or compact
(if the chunk is cached in memory).
No bug or feature changes, at least intentionally... :-)
Tested on:
Mac OS X/32 10.5.2 (amazon) w/production

Update copyright notice.
Tested platform:
Kagiso only, since this is only a comment block change. If it works on one
machine, it should work on all, I hope. Still need to check the parallel
build on copper.

Get rid of two different types of fill value struct (merge H5O_fill_t
and H5O_fill_new_t) and clean up & simplify dataset initialization code.
(In preparation for shared object header message method call
refactoring).
Tested on:
FreeBSD/32 6.2 (duty)
Mac OS X/32 10.4.8 (amazon)

Finish refactoring job on the library's property list class initialization
code, so that the library determines the parent class dependencies at run-time,
eliminating the need for developers to initialize the classes in a particular
order. Also eliminates some more redundant code...
Tested on:
FreeBSD/32 4.11 (sleipnir) w/threadsafe & debugging turned on
Linux/32 2.4 (heping) w/FORTRAN & C++
Linux/64 2.4 (mir) w/build-all & 1.6 compat enabled
AIX/32 5.x (copper) w/FORTRAN & parallel

Fix parallel build failure for property list class initialization refactor.
Tested on:
AIX (copper) w/parallel

Refactor generic property list initialization code to put property list
specific routines in property list modules, instead of scattered to the four
winds. Also, introduce property list class initialization objects, to make
adding new property list classes in the library easier.
Fix daily test failure by using H5Pget_elink_prefix() API routine instead
of looking at the "raw" generic property list information.
Tested on:
Mac OS X/32 10.4.8 (amazon)
FreeBSD/32 4.11 (sleipnir) w/threadsafe
Linux/32 2.4 (heping) w/C++ & FORTRAN
Linux/64 2.4 (mir) w/build-all & 1.6 compat

Changes to the collective chunk IO optimization code:
1. Provide another option for users to do independent IO with MPI file setview (collectively).
2. When users request collective IO, use independent IO with MPI file setview if we find that collective IO is not good for the application in the IO-per-chunk (multi-chunk IO) case. Previously we used pure independent IO, which actually performed small IO (one row at a time) for this case. A recent performance study suggested that independent IO with file setview can achieve significantly better performance than collective IO when not many processes participate in the IO.
3. For applications that explicitly choose to do collective IO per chunk, the library won't do any optimization (gather/broadcast) operations. The library simply passes the collective IO request to MPI-IO.
Tested at copper, kagiso, heping, mir and tungsten (cmpi and mpich).
Kagiso is using LAM; even the t_mpi test was broken.
The cchunk10 test failed at heping and mir. I suspect it is an MPICH problem and will investigate later.
Everything passed at copper.
At tungsten: the old cmpi bug (failure at esetw) is still there. Other tests passed.
Some sequential fheap tests failed at kagiso.

Some collective chunk IO macro names are confusing; change them to more
meaningful names.
Description:
H5Pset_dxpl_mpio_chunk_opt will set a flag so that the library can do one-link IO or multi-chunk IO collectively on chunked storage directly; that is, the library won't do any analysis to determine which to use.
The flags for the enum type we used before are:
H5FD_MPIO_OPT_ONE_IO
H5FD_MPIO_OPT_MULTI_IO
They are not good names for the following two reasons:
1. They don't reflect chunked storage.
2. "OPT" is redundant and misleading.
Solution:
We change the names to
H5FD_MPIO_CHUNK_ONE_IO
H5FD_MPIO_CHUNK_MULTI_IO
Platforms tested:
Since only macro names are changed, no need to test with h5committest.
heping(mpich 1.2.6)
Misc. update:
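A short sketch, not from the check-in itself, of how the renamed flags are used from an application: the transfer property list below forces linked-chunk ("one IO") collective I/O instead of letting the library decide. The dataset, dataspaces, and buffer are assumed to be set up elsewhere.

    #include "hdf5.h"

    /* Assumes a parallel HDF5 build. */
    herr_t write_one_link_collective(hid_t dset, hid_t mem_space,
                                     hid_t file_space, const int *buf)
    {
        hid_t  dxpl = H5Pcreate(H5P_DATASET_XFER);
        herr_t status;

        /* Request collective transfer mode */
        H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);

        /* Skip the library's analysis and do linked-chunk collective I/O directly.
         * Use H5FD_MPIO_CHUNK_MULTI_IO to force the multi-chunk path instead. */
        H5Pset_dxpl_mpio_chunk_opt(dxpl, H5FD_MPIO_CHUNK_ONE_IO);

        status = H5Dwrite(dset, H5T_NATIVE_INT, mem_space, file_space, dxpl, buf);
        H5Pclose(dxpl);
        return status;
    }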

Adding parallel tests for optional collective chunk APIs
Description:
Three new APIs
"H5Pset_dxpl_mpio_chunk_opt_ratio
H5Pset_dxpl_mpio_chunk_opt_num
H5Pset_dxpl_mpio_chunk_opt"
for optional optimization choices from users
have been added to the library.
This check-in adds six tests to verify the functionality and correctness
of these APIs.
These tests need to be run with 3 or more processes and with the MPI-IO driver only.
Solution:
Use H5Pinsert, H5Pget, and H5Pset to verify that the library indeed goes into the branch we hope for (see the sketch after this entry).
Use the H5_HAVE_INSTRUMENT macro to isolate these changes so that they won't affect or be misused by applications.
Platforms tested:
h5committest (shanti still refused to be connected)
Parallel tests on heping were somehow skipped; tested manually on heping with
1, 2, 3, 4, and 5 processes.
Misc. update:
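A rough sketch of the verification pattern described above, assuming an instrumented (--enable-instrument) build; the property name "coll_chunk_flag" and its meaning are hypothetical stand-ins for whatever flag the test and the library agree on, and the 1.8-style H5Pinsert2 call is used here.

    #include "hdf5.h"

    /* Insert a temporary flag property into the transfer property list, perform
     * the collective write, then read the flag back to check which internal
     * branch the library actually took. */
    int check_optimization_path(hid_t dset, hid_t mem_space, hid_t file_space,
                                const int *buf)
    {
        hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
        int   flag = 0;

        H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);

        /* Hypothetical instrumentation property; the library is expected to set
         * it to 1 when the hoped-for optimization path is taken. */
        H5Pinsert2(dxpl, "coll_chunk_flag", sizeof(int), &flag,
                   NULL, NULL, NULL, NULL, NULL, NULL);

        H5Dwrite(dset, H5T_NATIVE_INT, mem_space, file_space, dxpl, buf);

        H5Pget(dxpl, "coll_chunk_flag", &flag);
        H5Pclose(dxpl);
        return flag;
    }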

New APIs for collective chunk IO
Description:
Three new APIs
H5Pset_dxpl_mpio_chunk_opt_ratio
H5Pset_dxpl_mpio_chunk_opt_num
H5Pset_dxpl_mpio_chunk_opt
for optional optimization choices from users.
Solution:
Haven't added tests yet; this won't affect other parts of the library.
Will add tests after the urgent investigation of memory leak problems reported by the NASA Aura team.
Platforms tested:
heping: both parallel and sequential
shanti
Misc. update:
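A hedged sketch of how the two threshold-style routines might be used together on a transfer property list; the specific threshold values, and the one-line reading of their semantics in the comments, are illustrative assumptions rather than part of this check-in.

    #include "hdf5.h"

    hid_t make_collective_dxpl(void)
    {
        hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);

        H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);

        /* Roughly: if each process touches at least this many chunks, prefer
         * the one-link (linked-chunk) collective path. */
        H5Pset_dxpl_mpio_chunk_opt_num(dxpl, 4);

        /* Roughly: in the multi-chunk path, do collective I/O on a chunk only
         * if at least this percentage of processes participate in that chunk. */
        H5Pset_dxpl_mpio_chunk_opt_ratio(dxpl, 50);

        return dxpl;
    }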

Bug fix & new feature
Description:
Support variable-length datatypes in compact data storage and chunked
data storage, along with attributes.
Bug fix in H5T_vlen_set_loc to allow changing the file of a
variable-length datatype on disk.
Platforms tested:
FreeBSD 4.11 (sleipnir)
Linux 2.4
Can't h5committest right now, due to missing cache files.
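A small sketch of the newly supported combination: writing variable-length data to a chunked dataset. The file name, dataset name, and sizes are made up for illustration, and the 1.8-style H5Dcreate2 call is used.

    #include "hdf5.h"

    int main(void)
    {
        hid_t   file, vltype, space, dcpl, dset;
        hsize_t dims[1]  = {4};
        hsize_t chunk[1] = {2};
        int     elems[4] = {10, 20, 30, 40};
        hvl_t   buf[4];
        int     i;

        /* Each VL element points at a varying-length run of integers */
        for (i = 0; i < 4; i++) {
            buf[i].len = (size_t)(i + 1);
            buf[i].p   = elems;            /* share one backing array for brevity */
        }

        file   = H5Fcreate("vl_chunked.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
        vltype = H5Tvlen_create(H5T_NATIVE_INT);
        space  = H5Screate_simple(1, dims, NULL);

        /* Chunked layout, which this change makes legal for VL datatypes */
        dcpl = H5Pcreate(H5P_DATASET_CREATE);
        H5Pset_chunk(dcpl, 1, chunk);

        dset = H5Dcreate2(file, "vl_data", vltype, space, H5P_DEFAULT, dcpl, H5P_DEFAULT);
        H5Dwrite(dset, vltype, H5S_ALL, H5S_ALL, H5P_DEFAULT, buf);

        H5Dclose(dset); H5Pclose(dcpl); H5Sclose(space);
        H5Tclose(vltype); H5Fclose(file);
        return 0;
    }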

New feature
Description:
Add in a combination of Peter's & my code to support copying
variable-length data from one file to another, although currently only
supported with contiguous data storage.
Platforms tested:
FreeBSD 4.11 (sleipnir)
h5committest
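In released HDF5 versions this capability surfaces through the object-copy API; a hedged sketch (H5Ocopy is the 1.8-era public routine, and the file and dataset names are hypothetical) of copying a contiguous variable-length dataset from one file to another:

    #include "hdf5.h"

    int main(void)
    {
        hid_t src, dst;

        src = H5Fopen("source.h5", H5F_ACC_RDONLY, H5P_DEFAULT);
        dst = H5Fcreate("dest.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);

        /* Copy the (contiguous) variable-length dataset, data and all,
         * into the destination file. */
        H5Ocopy(src, "/vl_data", dst, "/vl_data", H5P_DEFAULT, H5P_DEFAULT);

        H5Fclose(dst);
        H5Fclose(src);
        return 0;
    }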

New feature
Description:
Check in baseline for compact group revisions, which radically revises the
source code for managing groups and object headers.
WARNING!!!! WARNING!!!! WARNING!!!! WARNING!!!! WARNING!!!! WARNING!!!!
WARNING!!!! WARNING!!!! WARNING!!!! WARNING!!!! WARNING!!!! WARNING!!!!
This initiates the "unstable" phase of the 1.7.x branch, leading up
to the 1.8.0 release. Please test this code, but do _NOT_ keep files created
with it - the format will change again before the release and you will not
be able to read your old files!!!
WARNING!!!! WARNING!!!! WARNING!!!! WARNING!!!! WARNING!!!! WARNING!!!!
WARNING!!!! WARNING!!!! WARNING!!!! WARNING!!!! WARNING!!!! WARNING!!!!
Solution:
There are too many changes to really describe them all, but some of them
include:
- Stop abusing the H5G_entry_t structure and split it into two separate
structures for non-symbol table node use within the library: H5O_loc_t
for object locations in a file and H5G_name_t to store the path to
an opened object. H5G_entry_t is now only used for storing symbol
table entries on disk.
- Retire H5G_namei() in favor of a more general mechanism for traversing
group paths and issuing callbacks on objects located. This gets us out
of the business of hacking H5G_namei() for new features, generally.
- Revised H5O* routines to take an H5O_loc_t instead of an H5G_entry_t
- Lots more...
Platforms tested:
h5committested and maybe another dozen configurations.... :-)

Code cleanup
Description:
Clean up & standardize a bit in preparation for coding standards
discussion.
Platforms tested:
FreeBSD 4.11 (sleipnir)
Too minor to require h5committest

Code cleanup
Description:
Trim trailing whitespace, which is making 'diff'ing the two branches
difficult.
Solution:
Ran this script in each directory:
foreach f (*.[ch] *.cpp)
sed 's/[[:blank:]]*$//' $f > sed.out && mv sed.out $f
end
Platforms tested:
FreeBSD 4.11 (sleipnir)
Too minor to require h5committest

Code cleanup
Description:
Clean up some compiler warnings
Platforms tested:
FreeBSD 4.11 (sleipnir)
h5committest

Bug Fix/Code Cleanup/Doc Cleanup/Optimization/Branch Sync :-)
Description:
Generally speaking, this is the "signed->unsigned" change to selections.
However, in the process of merging code back, things got stickier and stickier
until I ended up doing a big "sync the two branches up" operation. So... I
brought back all the "infrastructure" fixes from the development branch to the
release branch (which I think were actually making some improvement in
performance) as well as fixed several bugs which had been fixed in one branch,
but not the other.
I've also tagged the repository before making this checkin with the label
"before_signed_unsigned_changes".
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel & fphdf5
FreeBSD 4.10 (sleipnir) w/threadsafe
FreeBSD 4.10 (sleipnir) w/backward compatibility
Solaris 2.7 (arabica) w/"purify options"
Solaris 2.8 (sol) w/FORTRAN & C++
AIX 5.x (copper) w/parallel & FORTRAN
IRIX64 6.5 (modi4) w/FORTRAN
Linux 2.4 (heping) w/FORTRAN & C++
Misc. update:

Code optimization
Description:
Change how default allocation time is handled internally to the library,
to avoid some performance issues with property lists.
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel
Solaris 2.7 (arabica)
Too minor to require h5committest

Bug fix & code cleanup
Description:
More dataset cleanups to get to a point where we can fix the chunked I/O
bug.
Also fix a couple of errors in the recent file object resurrection changes,
which should hopefully address the recent daily test failures (H5T.c).
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel
Solaris 2.7 (arabica)
h5committest

Bug fix/code cleanup
Description:
Clean up raw data I/O code to bundle the I/O parameters (dataset, DXPL ID,
etc) into a single struct to pass around through the dataset I/O routines,
since they are always passed together, until very near the bottom of the I/O
stack.
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel
Solaris 2.7 (arabica)
IRIX64 6.5 (modi4)
h5committest

Purpose:
Feature
Description:
Datatypes and groups now use H5FO "file object" code that was previously
only used by datasets. These objects will hold a file open if the file
is closed but they have not yet been closed. If these objects are unlinked
then relinked, they will not be destroyed. If they are opened twice (even
by two different names), both IDs will "see" changes made to the object
using the other ID.
When an object is opened using two different names (e.g., if a dataset was
opened under one name, then mounted and opened under its new name), calling
H5Iget_name() on a given hid_t will return the name used to open that hid_t,
not the current name of the object (this is a feature, and a change from the
previous behavior of datasets).
Solution:
Used H5FO code that was already in place for datasets. Broke H5D_t's, H5T_t's,
and H5G_t's into a "shared" struct and a private struct. The shared structs
(H5D_shared_t, etc.) hold the object's information and are used by all IDs
that point to a given object in the file. The private structs are pointed
to by the hid_t and contain the object's group entry information (including its
name) and a pointer to the shared struct for that object.
This changed the naming of structs throughout the library (e.g., datatype->size
is now datatype->shared->size). I added an updated H5Tinit.c to windows.zip.
Platforms tested:
Visual Studio 7, sleipnir, arabica, verbena
Misc. update:
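A small sketch of the H5Iget_name() behavior described above, using the 1.8-style link and open calls (the original code of this era used the 1.6 API); each handle reports the name it was opened under, not the object's "current" name.

    #include "hdf5.h"
    #include <stdio.h>

    int main(void)
    {
        hid_t   file, space, d1, d2;
        hsize_t dims[1] = {10};
        char    name[64];

        file  = H5Fcreate("names.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
        space = H5Screate_simple(1, dims, NULL);
        d1    = H5Dcreate2(file, "/A", H5T_NATIVE_INT, space, H5P_DEFAULT,
                           H5P_DEFAULT, H5P_DEFAULT);

        /* Give the same dataset a second name and open it through that name */
        H5Lcreate_hard(file, "/A", file, "/B", H5P_DEFAULT, H5P_DEFAULT);
        d2 = H5Dopen2(file, "/B", H5P_DEFAULT);

        H5Iget_name(d1, name, sizeof(name));
        printf("d1 opened as: %s\n", name);   /* "/A" */
        H5Iget_name(d2, name, sizeof(name));
        printf("d2 opened as: %s\n", name);   /* "/B" */

        /* Both handles point at the same shared object state underneath */
        H5Dclose(d2); H5Dclose(d1); H5Sclose(space); H5Fclose(file);
        return 0;
    }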

Code cleanup
Description:
Clean up collective chunking code a bit.
Also, add '--enable-instrument' configure flag to have a mechanism for
determining that optimized operations happened correctly in the library (instead
of just the "normal" way) by allowing 'flag' properties to be set outside the
library and set when the "right" thing happens. This is mainly for debugging
and regression checks, so we make certain we don't break optimized I/O by
accident. It's enabled by default when --enable-debug is on (which is on by
default in the development branch and off by default in the release branch),
but can also be independently controlled with its own configure flag.
Platforms tested:
FreeBSD 4.10 (sleipnir) w/parallel
IBM p690 (copper) w/parallel

Adding the first round of patches for supporting collective chunk IO in HDF5
Description:
The current HDF5 library doesn't support collective MPI-IO with chunked storage. When users set the collective option in their data transfer with chunked storage, the library silently converts the option to INDEPENDENT, which causes a tremendous performance penalty. Some applications, like the WRF parallel HDF5 IO module, have to use contiguous storage for this reason. However, chunked storage has its own advantages (support for compression filters and extensible datasets), so making collective MPI-IO possible inside HDF5 with chunked storage is a very important task.
This check-in makes collective chunk IO possible for some special cases. The conditions are as follows (either case is fine for using collective chunk IO):
1. For each process, the hyperslab selection of the file dataspace of each dataset is regular and fits in one chunk.
2. For each process, the hyperslab selection of the file dataspace of each dataset is a single block, and the number of chunks covered by the selection is equal across processes.
Solution:
Lift the contiguous storage requirement for collective IO.
Use H5D_istore_get_addr to get the corresponding chunk address. Then the original library routines will take care of getting the correct address to make sure that the MPI file type is built correctly for collective IO.
Platforms tested:
arabica (sol), copper (AIX), eirene (Linux)
Parallel tests were checked on copper.
Misc. update:
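A hedged sketch of an access pattern that satisfies condition 1 above (each process's regular hyperslab selection falls inside a single chunk), requesting collective transfer on a chunked dataset. The file name, sizes, and layout are illustrative only, and the 1.8-style create call is used.

    #include "hdf5.h"
    #include <mpi.h>

    #define ROWS_PER_PROC 8
    #define COLS          16

    int main(int argc, char *argv[])
    {
        int     rank, nprocs, buf[ROWS_PER_PROC][COLS] = {{0}};
        hid_t   fapl, file, space, mspace, dcpl, dset, dxpl;
        hsize_t chunk[2] = {ROWS_PER_PROC, COLS};
        hsize_t mdims[2] = {ROWS_PER_PROC, COLS};
        hsize_t dims[2], start[2], count[2];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        dims[0] = (hsize_t)nprocs * ROWS_PER_PROC;
        dims[1] = COLS;

        fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
        file = H5Fcreate("coll_chunk.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

        space = H5Screate_simple(2, dims, NULL);
        dcpl  = H5Pcreate(H5P_DATASET_CREATE);
        H5Pset_chunk(dcpl, 2, chunk);           /* one chunk per process's block */
        dset = H5Dcreate2(file, "data", H5T_NATIVE_INT, space, H5P_DEFAULT,
                          dcpl, H5P_DEFAULT);

        /* Each process selects the rows of exactly one chunk */
        start[0] = (hsize_t)rank * ROWS_PER_PROC; start[1] = 0;
        count[0] = ROWS_PER_PROC;                 count[1] = COLS;
        H5Sselect_hyperslab(space, H5S_SELECT_SET, start, NULL, count, NULL);
        mspace = H5Screate_simple(2, mdims, NULL);

        dxpl = H5Pcreate(H5P_DATASET_XFER);
        H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);   /* collective chunk I/O */
        H5Dwrite(dset, H5T_NATIVE_INT, mspace, space, dxpl, buf);

        H5Pclose(dxpl); H5Sclose(mspace); H5Dclose(dset); H5Pclose(dcpl);
        H5Sclose(space); H5Fclose(file); H5Pclose(fapl);
        MPI_Finalize();
        return 0;
    }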

Code optimization
Description:
Eliminate memcpy() when using default DXPL by pointing at existing
default object, instead of copying it.
Platforms tested:
Solaris 2.7 (arabica)
FreeBSD 4.10 (sleipnir) w/parallel
Too minor to require h5committest

Refactor code
Description:
Move chunk and contiguous cached raw data from file information to dataset
information. This simplifies a number of internal interfaces, aligns the
code with its purpose better, and should allow more optimizations to
chunked data I/O performance.
Platforms tested:
Solaris 2.7 (arabica)
FreeBSD 4.10 (sleipnir)
h5committest
Misc. update:

Code optimization
Description:
Don't recompute the internal index value for looking up the chunk in the
hash table, just use the value already computed from iterating through the
chunks.
Platforms tested:
Solaris 2.7 (arabica)
FreeBSD 4.9 (sleipnir) w/parallel

Code optimization & bug fix
Description:
When dimension information is being stored in the storage layout message
on disk, it is stored as 32-bit quantities, possibly truncating the dimension
information if a dimension is greater than 32 bits in size.
Solution:
Fix the storage layout message problem by revising file format to not store
dimension information, since it is already available in the dataspace.
Also revise the storage layout data structures to be more compartmentalized
for the information for contiguous, chunked and compact storage.
Platforms tested:
FreeBSD 4.9 (sleipnir) w/parallel
Solaris 2.7 (arabica)
h5committest

Code cleanup
Description:
Refactored data transform code to reduce amount of symbols in the global
scope and also cleaned up & simplified the code a bit.
Platforms tested:
h5committest (minus copper, plus serial modi4)
FreeBSD 4.9 (sleipnir) w & w/o parallel

New Feature
Description:
Add the data transform function, H5Pset_transform().
Platforms tested:
"h5committested".
Copper was down. Ran parallel tests in sol instead.
Misc. update:
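A minimal sketch of the feature; note that the routine appears as H5Pset_data_transform in released headers (H5Pset_transform above is the name used in the check-in note), and the dataset handle and buffer here are assumed to exist already.

    #include "hdf5.h"

    /* Apply the algebraic expression to each element while it is written:
     * the file ends up holding 2*x + 1 for every value x in buf. */
    herr_t write_transformed(hid_t dset, const int *buf)
    {
        hid_t  dxpl = H5Pcreate(H5P_DATASET_XFER);
        herr_t status;

        H5Pset_data_transform(dxpl, "2*x + 1");
        status = H5Dwrite(dset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, dxpl, buf);

        H5Pclose(dxpl);
        return status;
    }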

Code optimization
Description:
Eliminate the B-tree "split_ratios" as a parameter and pull it from the
DXPL instead.
Platforms tested:
Solaris 2.7 (arabica)
FreeBSD 4.9 (sleipnir) w/parallel
too minor to require h5committest

Code optimization
Description:
Eliminate memory allocations for I/O vectors when using the default
vector size.
Platforms tested:
Solaris 2.7 (arabica)
FreeBSD 4.9 (sleipnir)
too minor to require h5committest

Code optimization
Description:
Query the dataset creation and transfer properties less often.
Platforms tested:
Solaris 2.7 (arabica)
h5committested

Code cleanup/optimization
Description:
Query property list values once, at the beginning of the I/O routines,
instead of querying the property list values multiple (lots!) of times in
lower level routines.
Solution:
Create "property list caches" for internal library queries of the property
list values.
Platforms tested:
IBM p690 (copper) w/parallel & fphdf5
h5committest

Bug fix
Description:
When two property lists are compared, the H5Pequal routine was just
comparing the raw information for the property values. This causes problems
when the raw information contains pointers to other information.
Solution:
Allow a 'compare' callback to be registered for properties, so that a user
application can perform the comparison itself, allowing for "deep" compares of
the property value.
This was exported to the H5Pregister & H5Pinsert routines in the development
branch, but not the release branch.
Platforms tested:
FreeBSD 4.9 (sleipnir)
h5committest
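A rough sketch of the idea using the 1.8-style H5Pinsert2, whose argument list includes the compare callback; the property name, the struct, and the deep-compare logic are all illustrative assumptions.

    #include "hdf5.h"
    #include <string.h>

    typedef struct { char *text; } note_t;   /* property value holds a pointer */

    /* Deep compare: follow the pointer instead of comparing raw bytes */
    static int note_cmp(const void *v1, const void *v2, size_t size)
    {
        const note_t *a = (const note_t *)v1;
        const note_t *b = (const note_t *)v2;
        (void)size;
        return strcmp(a->text, b->text);
    }

    int main(void)
    {
        hid_t  p1 = H5Pcreate(H5P_DATASET_XFER);
        hid_t  p2 = H5Pcreate(H5P_DATASET_XFER);
        char   s1[] = "hello", s2[] = "hello";  /* equal contents, different pointers */
        note_t n1 = {s1}, n2 = {s2};
        htri_t equal;

        /* Insert the same temporary property into both lists, registering the
         * deep-compare callback so H5Pequal() is not fooled by the pointers. */
        H5Pinsert2(p1, "note", sizeof(note_t), &n1,
                   NULL, NULL, NULL, NULL, note_cmp, NULL);
        H5Pinsert2(p2, "note", sizeof(note_t), &n2,
                   NULL, NULL, NULL, NULL, note_cmp, NULL);

        equal = H5Pequal(p1, p2);   /* TRUE with the deep compare */

        H5Pclose(p2); H5Pclose(p1);
        return equal > 0 ? 0 : 1;
    }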

Code cleanup
Description:
More linting...
Platforms tested:
FreeBSD 4.8 (sleipnir)
too minor to need h5committest

Code cleanup
Description:
Various code cleanups suggested by lint tool
Platforms tested:
FreeBSD 4.8 (sleipnir)
h5committest

Version update
Description:
Removed 1.4 compatibility code in the library.
Platforms tested:
FreeBSD 4.8 (sleipnir)
h5committest

New feature/Bug fix
Description:
Add a new fill time value, H5D_FILL_TIME_IFSET, which writes the fill value
to a dataset if the user has defined one, and otherwise does not write a fill
value to the dataset.
Platforms tested:
FreeBSD 4.8 (sleipnir) serial & parallel
h5committest
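A brief sketch of the new fill-time setting: with H5D_FILL_TIME_IFSET, the fill value below is written at allocation because the application defined one, and would simply be skipped if the H5Pset_fill_value call were omitted.

    #include "hdf5.h"

    hid_t make_dcpl_with_ifset_fill(void)
    {
        hid_t dcpl = H5Pcreate(H5P_DATASET_CREATE);
        int   fill = -1;

        /* Define an application fill value... */
        H5Pset_fill_value(dcpl, H5T_NATIVE_INT, &fill);

        /* ...and only write a fill value when one has been defined */
        H5Pset_fill_time(dcpl, H5D_FILL_TIME_IFSET);

        return dcpl;
    }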

Code cleanup
Description:
Limit the scope on more function prototypes/macros/typedefs.
Platforms tested:
FreeBSD 4.8 (sleipnir)
h5committest not necessary.

Code cleanup.
Description:
Move many package or internal function prototypes and macro definitions
into tighter scope according to their current use.
Added more comments where appropriate.
Eliminate ancient, unused functions.
Added a couple "accessor" functions to get parts of data structures which
were moved out of scope.
Platforms tested:
h5committested

New feature/enhancement
Description:
Chunked datasets are handled poorly in several circumstances involving
certain selections and chunks that are too large for the chunk cache and/or
chunks with filters, causing the chunk to be read from disk multiple times.
Solution:
Rearrange raw data I/O infrastructure to handle chunked datasets in a much
more friendly way by creating a selection in memory and on disk for each chunk
in a chunked dataset and performing all of the I/O on that chunk at one time.
There are still some scalability issues (the current code attempts to
create a selection for every chunk in the dataset, instead of just the
chunks that are accessed, requiring portions of the istore.c and fillval.c
tests to be commented out) and performance issues, but checking this in will
allow the changes to be tested by a much wider audience while I address the
remaining issues.
Platforms tested:
h5committested, FreeBSD 4.8 (sleipnir) serial & parallel, Linux 2.4 (eirene)

Code cleanup
Description:
Array declaration was using hard-coded constant for maximum number of
dimensions.
Solution:
Changed to use H5O_LAYOUT_NDIMS.
Platforms tested:
FreeBSD 4.8 (sleipnir) w/szip
Linux 2.4 (sleipnir) w/szip
Solaris 2.7 (arabica) w/FORTRAN
IRIX64 6.5 (modi4) w/szip, FORTRAN & parallel
Misc. update:

Update
Description:
Updated copyright statement in files which hadn't been updated yet.
Platforms tested:
Linux (Only comment change)
Misc. update: