New feature/enhancement
Description:
Chunked datasets are handled poorly in several circumstances: certain
selections, combined with chunks that are too large for the chunk cache
and/or chunks with filters, cause chunks to be read from disk multiple times.
Solution:
Rearrange the raw data I/O infrastructure to handle chunked datasets in a much
more efficient way by creating a selection in memory and on disk for each chunk
in a chunked dataset and performing all of the I/O on that chunk at one time.
There are still some scalability issues (the current code attempts to
create a selection for every chunk in the dataset, rather than only the
chunks that are actually accessed, which requires portions of the istore.c and
fillval.c tests to be commented out) and some performance issues, but checking
this in will allow the changes to be tested by a much wider audience while I
address the remaining issues.
Platforms tested:
h5committested, FreeBSD 4.8 (sleipnir) serial & parallel, Linux 2.4 (eirene)
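The per-chunk approach described above can be illustrated, purely conceptually, with the public dataspace API: copy the caller's file selection, clip it to each chunk's extent, and service each non-empty piece with a single I/O operation. The sketch below is only an illustration of that idea under assumed chunk/dataset sizes; the read_one_chunk() helper is hypothetical, and the real change lives in the library's internal raw data I/O code, not in application code like this.

    #include "hdf5.h"

    /* Hypothetical helper: perform the actual read for one clipped selection
     * (e.g. via H5Dread with a matching memory selection). */
    void read_one_chunk(hid_t dset, hid_t chunk_space);

    /* Conceptual sketch of the per-chunk selection idea: assumes file_space
     * already carries a hyperslab selection over a 100x100 dataset stored in
     * 10x10 chunks. */
    static void read_chunk_at_a_time(hid_t dset, hid_t file_space)
    {
        hsize_t dims[2]  = {100, 100};   /* assumed dataset extent   */
        hsize_t chunk[2] = {10, 10};     /* assumed chunk dimensions */
        hsize_t start[2];

        for (start[0] = 0; start[0] < dims[0]; start[0] += chunk[0]) {
            for (start[1] = 0; start[1] < dims[1]; start[1] += chunk[1]) {
                /* Copy the caller's selection and clip it to this chunk. */
                hid_t chunk_space = H5Scopy(file_space);
                H5Sselect_hyperslab(chunk_space, H5S_SELECT_AND, start, NULL,
                                    chunk, NULL);

                /* Only chunks that intersect the selection are touched, and
                 * each such chunk is handled exactly once. */
                if (H5Sget_select_npoints(chunk_space) > 0)
                    read_one_chunk(dset, chunk_space);

                H5Sclose(chunk_space);
            }
        }
    }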
Update
Description:
Updated copyright statement in files which hadn't been updated yet.
Platforms tested:
Linux (Only comment change)
Misc. update:
Code cleanup/new feature.
Description:
Split FUNC_LEAVE into API-specific and non-API-specific versions. This makes
it possible to compile this branch with C++ and also reduces the size of the
binaries produced.
Platforms tested:
FreeBSD 4.7 (sleipnir) w/serial, parallel (including MPE) & thread-safe
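Later HDF5 sources name the two variants FUNC_LEAVE_API and FUNC_LEAVE_NOAPI; the sketch below only illustrates the general pattern of splitting one exit macro into an API variant (which carries API-only bookkeeping such as releasing the global lock in thread-safe builds) and a lighter internal variant. It is not the library's actual macro definition.

    /* Hypothetical, simplified illustration of the FUNC_LEAVE split; not
     * HDF5's actual macros. */

    /* Exit from a public H5* API function: API-only cleanup (thread-safety
     * unlock, etc.) happens here before returning. */
    #define FUNC_LEAVE_API(ret_value)                                         \
        do {                                                                  \
            /* ... API-only cleanup would go here ... */                      \
            return (ret_value);                                               \
        } while (0)

    /* Exit from an internal function: no API-only bookkeeping, so the
     * expansion is smaller. */
    #define FUNC_LEAVE_NOAPI(ret_value)                                       \
        do {                                                                  \
            return (ret_value);                                               \
        } while (0)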
Bug fix/Code cleanup/New Feature
Description:
Correct problems with writing fill-values to external storage and allocate
the data storage at the correct times.
Also straighten out most of the convoluted code that allocates and fills
raw data storage for datasets. Things are still a bit odd in that the
fill-values for chunked datasets are written when the space is allocated,
instead of in a separate routine, but there are two reasons for this:
it's inefficient (especially in parallel) to iterate through all the chunks
twice, and (more importantly) the space needed to store compressed chunks
isn't known until we have a buffer of compressed fill-values ready to
write to the chunk.
Additionally, add the H5D_SPACE_ALLOC_INCR and H5D_SPACE_ALLOC_DEFAULT
settings for the "space time" property, which incorporate the previous
behavior of space allocation for chunked datasets.
The default settings for the different types of dataset storage are now
as follows:
Contiguous - Late
Chunked - Incremental
Compact - Early
This checkin also changes the behavior of external data storage in two
ways: fill-values are _never_ written to external storage (under the
assumption that writing fill-values is triggered by allocating space in an
HDF5 file, and since no space is allocated in the file for external data,
the fill-values should not be written), and external data files are now
created if they don't exist when data is written to them. The fill-value
behavior will probably need to be revisited at some point in the future;
this just seemed like the safer course for now.
I think I also cleaned up some compiler errors before getting bogged down
in the fixes for the space allocation and fill-values.
Platforms tested:
FreeBSD 4.6 (sleipnir) w/serial & parallel. Will be testing on IRIX64
6.5 (modi4) in serial & parallel shortly.
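For reference, the "space time" property described in this entry is set through the dataset creation property list. The sketch below uses what appear to be the later public names for this API (H5Pset_alloc_time and the H5D_ALLOC_TIME_* values, corresponding to the H5D_SPACE_ALLOC_* names above) and the 1.8+ H5Dcreate signature; file and dataset names are made up.

    #include "hdf5.h"

    int main(void)
    {
        hsize_t dims[1]  = {1000};
        hsize_t chunk[1] = {100};

        hid_t file  = H5Fcreate("example.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
        hid_t space = H5Screate_simple(1, dims, NULL);
        hid_t dcpl  = H5Pcreate(H5P_DATASET_CREATE);

        H5Pset_chunk(dcpl, 1, chunk);

        /* Chunked storage defaults to incremental allocation; request early
         * allocation instead, so all chunks (and their fill-values) are
         * written when the dataset is created. */
        H5Pset_alloc_time(dcpl, H5D_ALLOC_TIME_EARLY);

        hid_t dset = H5Dcreate(file, "data", H5T_NATIVE_INT, space,
                               H5P_DEFAULT, dcpl, H5P_DEFAULT);

        H5Dclose(dset);
        H5Pclose(dcpl);
        H5Sclose(space);
        H5Fclose(file);
        return 0;
    }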
Purpose:
Design for compact dataset
Description:
The raw data for a compact dataset is stored in the dataset's layout header message, rather than in separate storage in the file.
Platforms tested:
arabica, eirene.
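A compact layout can be requested through the dataset creation property list with H5Pset_layout; the minimal sketch below shows the application-facing side of the feature (file and dataset names are made up, and the H5Dcreate call uses the 1.8+ signature).

    #include "hdf5.h"

    int main(void)
    {
        hsize_t dims[1] = {64};    /* compact data must fit in the object header */
        int     buf[64] = {0};

        hid_t file  = H5Fcreate("compact.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
        hid_t space = H5Screate_simple(1, dims, NULL);
        hid_t dcpl  = H5Pcreate(H5P_DATASET_CREATE);

        /* Store the raw data in the dataset's object header. */
        H5Pset_layout(dcpl, H5D_COMPACT);

        hid_t dset = H5Dcreate(file, "small", H5T_NATIVE_INT, space,
                               H5P_DEFAULT, dcpl, H5P_DEFAULT);
        H5Dwrite(dset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, H5P_DEFAULT, buf);

        H5Dclose(dset);
        H5Pclose(dcpl);
        H5Sclose(space);
        H5Fclose(file);
        return 0;
    }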