Commit message log
Bug fix.
Description:
The chunking code was using internal allocation routines to put blocks on
a free list for reuse, instead of using the system allocation routines (i.e.
malloc, free, etc.). This causes problems when user filters attempt to
allocate/free chunks for their algorithm's use.
Solution:
Switched the chunking code back to using the system allocation routines;
we can address performance issues with them if that becomes a real problem.
Platforms tested:
Linux 2.2.x (eirene) && IRIX64 6.5 (modi4)
Code optimization
Description:
Avoid creating MPI types (and thus requiring an MPI_File_set_view() call)
when contiguous selections are used for dataset I/O. This should be a
performance improvement for those kinds of selections.
Platforms tested:
Linux 2.2.x (eirene) w/parallel && IRIX64 6.5 (modi4) w/parallel & FORTRAN
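A minimal sketch of the idea, not the library's internal code (the file handle, offset, and buffer names are illustrative): a contiguous selection can be transferred as one flat run of bytes, so no derived MPI datatype has to be built and no MPI_File_set_view() call is needed.

    #include <mpi.h>

    /* Write one contiguous run of bytes at a known file offset.  Because the
     * data is contiguous in both memory and the file, a plain byte-count write
     * suffices; no derived MPI type or MPI_File_set_view() call is required. */
    static int write_contiguous(MPI_File fh, MPI_Offset offset,
                                const void *buf, int nbytes)
    {
        MPI_Status status;

        return MPI_File_write_at(fh, offset, (void *)buf, nbytes,
                                 MPI_BYTE, &status);
    }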
Default change
Description:
Enable the use of MPI types for collective I/O by default.
Platforms tested:
Linux 2.2.x (eirene) w/parallel
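For context, a hedged sketch of how an application requests collective MPI-IO through the data-transfer property list (handle names are illustrative); the commit above changes the library to build MPI derived types for such collective transfers by default.

    #include <hdf5.h>

    /* Request collective MPI-IO for a single dataset write. */
    static herr_t write_collectively(hid_t dset, hid_t memspace, hid_t filespace,
                                     const int *buf)
    {
        hid_t  dxpl = H5Pcreate(H5P_DATASET_XFER);
        herr_t ret;

        H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);   /* or H5FD_MPIO_INDEPENDENT */
        ret = H5Dwrite(dset, H5T_NATIVE_INT, memspace, filespace, dxpl, buf);
        H5Pclose(dxpl);
        return ret;
    }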
Bug fix
Description:
I/O on "Regular" hyperslab selections could fail to transfer correctly
if the number of elements in the selection's row did not fit "evenly"
into the buffer being used for the transfer.
Solution:
Correct the calculation of the block & count offsets within the optimized
"regular" hyperslab routines.
Platforms tested:
FreeBSD 4.5 (sleipnir)
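A hedged sketch of the kind of selection affected (handles, extents, and the modern hsize_t-based H5Sselect_hyperslab signature are assumptions here): a strided, "regular" hyperslab whose rows do not divide evenly into the library's internal transfer buffer.

    #include <hdf5.h>

    /* Read a strided 2-D hyperslab; when the elements of one selection row do
     * not fit evenly into the internal transfer buffer, the block/count offset
     * bookkeeping fixed above comes into play. */
    static herr_t read_regular_hyperslab(hid_t dset, hid_t filespace,
                                         hid_t memspace, int *buf)
    {
        hsize_t start[2]  = {0, 1};
        hsize_t stride[2] = {1, 2};
        hsize_t count[2]  = {100, 50};

        H5Sselect_hyperslab(filespace, H5S_SELECT_SET, start, stride, count, NULL);
        return H5Dread(dset, H5T_NATIVE_INT, memspace, filespace, H5P_DEFAULT, buf);
    }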
Update
Description:
Explained that, if the user uses the "make install prefix=NEW_DIR"
option, they'll need to modify the installed h5cc file to reflect the
change.
Bug Fix
Description:
H5Dcreate and H5Tcommit allow "empty" compound and enumerated types (i.e.
ones with no members) to be stored in the file, but this causes an assertion
failure and such types are not useful anyway.
Solution:
Check that the datatype "makes sense" before using it for H5Dcreate and
H5Tcommit.
Platforms tested:
FreeBSD 4.5 (sleipnir)
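A hedged illustration of the rejected case, using the three-argument H5Tcommit signature of that era (file handle and type name are illustrative): a compound type with no members can no longer be committed or used to create a dataset.

    #include <hdf5.h>
    #include <stdio.h>

    /* Try to commit a member-less compound type; after this fix the call is
     * expected to fail cleanly instead of tripping an assertion. */
    static void reject_empty_compound(hid_t file)
    {
        hid_t cmpd = H5Tcreate(H5T_COMPOUND, sizeof(int));  /* no H5Tinsert() calls */

        if (H5Tcommit(file, "empty_type", cmpd) < 0)
            fprintf(stderr, "empty compound type rejected, as expected\n");
        H5Tclose(cmpd);
    }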
Bug Fix (#709)/Code improvement.
Description:
Allow chunks for chunked datasets to be cached when the file is open for
read-only access.
Platforms tested:
IRIX64 6.5 (modi4) w/parallel
Bug fix (bug #777)
Description:
Current code allows a compound datatype to be inserted into itself.
Solution:
Check if the ID for the member is the same as the ID for the compound
datatype and reject it if so.
Platforms tested:
FreeBSD 4.5 (sleipnir)
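A hedged illustration of the newly rejected operation (the type size and member name are illustrative):

    #include <hdf5.h>
    #include <stdio.h>

    /* Inserting a compound datatype into itself is now detected and rejected. */
    static void reject_self_insert(void)
    {
        hid_t cmpd = H5Tcreate(H5T_COMPOUND, 16);

        /* Attempt to make the type a member of itself; the ID comparison added
         * by this fix rejects the operation. */
        if (H5Tinsert(cmpd, "self", 0, cmpd) < 0)
            fprintf(stderr, "self-insertion rejected, as expected\n");
        H5Tclose(cmpd);
    }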
Bug Fix for bug #789
Description:
Creating a 1-D dataset region reference caused the library to hang (go into
an infinite loop).
Solution:
Corrected algorithm for serializing hyperslab regions.
Platforms tested:
FreeBSD 4.5 (sleipnir)
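A hedged sketch of the previously hanging case, using the 1.x H5Rcreate signature (the dataset name and selection extents are illustrative):

    #include <hdf5.h>

    /* Create a region reference to a hyperslab of a 1-D dataset; serializing
     * this selection used to loop forever. */
    static herr_t make_1d_region_ref(hid_t file, hid_t space_1d,
                                     hdset_reg_ref_t *ref)
    {
        hsize_t start = 2, count = 5;

        H5Sselect_hyperslab(space_1d, H5S_SELECT_SET, &start, NULL, &count, NULL);
        return H5Rcreate(ref, file, "dset1d", H5R_DATASET_REGION, space_1d);
    }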
New feature.
Description:
Added a "small data" block allocation mechanism to the library, similar to
the mechanism currently used for allocating metadata.
See the RFC for more details:
http://hdf.ncsa.uiuc.edu/RFC/SmallData/SmallData.html
This reduces the number of I/O operations that hit the disk for my test
program from 19 to 15 (i.e. from 393 to 15, overall).
Platforms tested:
Solaris 2.7 (arabica) w/FORTRAN and FreeBSD 4.5 (sleipnir) w/C++
Updated the instructions for tflops and O2K.
Platforms tested:
Eyeballed.
Purpose:
Bug fix (#699), fix provided by a user, approved by Quincey
Description:
When a scalar dataspace was written to the file and subsequently
queried with the H5Sget_simple_extent_type function, the type was
reported as H5S_SIMPLE instead of H5S_SCALAR.
Solution:
Applied a fix (see bug report 699)
Platforms tested:
Solaris 2.7 and Linux 2.2.18
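A hedged illustration of the corrected behavior (the dataset handle is assumed to refer to a dataset created with an H5Screate(H5S_SCALAR) dataspace):

    #include <hdf5.h>
    #include <stdio.h>

    /* Query the dataspace class of a dataset created with a scalar dataspace;
     * after the fix this reports H5S_SCALAR rather than H5S_SIMPLE. */
    static void check_scalar_class(hid_t dset)
    {
        hid_t space = H5Dget_space(dset);

        if (H5Sget_simple_extent_type(space) == H5S_SCALAR)
            printf("dataspace class correctly reported as H5S_SCALAR\n");
        H5Sclose(space);
    }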
Code improvement
Description:
The metadata aggregation code in the library was not terribly smart about
extending contiguous regions of metadata in the file and would not extend
them as far as possible, which also wasted space in the file.
Solution:
Be smarter about extending the space used in the file for metadata by
checking whether new metadata blocks allocated in the file are at the end
of the current metadata aggregation region and appending them to that
region if so. This has the nice side benefit of reducing the number of
bytes we waste in the file and shrinking the file by a small amount in
some cases.
This reduces the number of I/O operations that hit the disk for my test
program from 53 to 19 (i.e. from 393 to 19, overall).
Platforms tested:
Solaris 2.7 (arabica) w/FORTRAN and FreeBSD 4.5 (sleipnir) w/C++
Bug Fix
Description:
The "dirty" flag for symbol table entries and symbol table nodes was not
being cleared when they were flushed to the file, causing lots of extra
metadata I/O.
Solution:
Reset the symbol table entries' and nodes' dirty flags when they are flushed
to disk.
This reduces the number of I/O operations that hit the disk for my test
program from 83 to 53 (i.e. from 393 to 53, overall).
Platforms tested:
Solaris 2.7 (arabica) w/FORTRAN & FreeBSD 4.5 (sleipnir) w/C++
Code cleanup/bug fix
Description:
The "metadata accumulator" cache in the library (which is designed to catch
small metadata writes/reads and bundle them together into larger I/O
buffers) was not correctly detecting the important case of metadata pieces
being written sequentially to the file, adjoining but not overlapping.
Additionally, the metadata accumulator was only being used to cache writes,
not data read in from disk.
Solution:
Fix the accumulator to correctly coalesce adjoining metadata writes and also
to cache metadata read from disk.
Between these two fixes, the number of I/O requests that resulted in actual
reads/writes to the filesystem dropped from 393 to 82 for the particular
test I was using. :-)
Platforms tested:
Solaris 2.7 (arabica) w/FORTRAN & FreeBSD 4.5 (sleipnir) w/C++
Document Bug Fix
Description:
Under certain [obscure] circumstances, an object header would get paged out
of the metadata cache and later brought back in when it was accessed again.
If it then immediately had additional metadata added to it (an attribute,
usually, or perhaps a new object added to a group), needed to be extended
with a continuation message, had no room in any existing object header chunk
for that continuation message, and so an existing object header message had
to be moved to the new object header chunk (I told you it was obscure :-),
then the object header message moved to the new chunk (not the new metadata
being added) would get corrupted. *whew* :-)
Solution:
Actually copy the "raw" object header message information of the object
header message being moved to the new chunk, instead of relying on the
"native" object header message information being re-encoded when the object
header is flushed. This matters because, when an object header is paged out
of the metadata cache and subsequently brought back in, the "native"
information pointer in memory is reset to NULL and only the "raw"
information exists.
Platforms tested:
Solaris 2.7 (arabica) & FreeBSD 4.5 (sleipnir)
Purpose:
Update; remove hdf4-related material.
Description:
The hdf4-related tools have been moved out of the HDF5 CVS tree; the
install doc should reflect this.
Solution:
Platforms tested:
Document VFL "flush" changes.
Document Code improvement below:
Description:
Propagated the "fill time" property into the parallel chunk allocation
routine, allowing it to avoid writing fill values to each newly allocated
chunk. This brings the performance of chunked datasets in parallel I/O
back on par with contiguous datasets (on modi4).
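A hedged sketch of the user-visible property involved, as exposed in the released C API (handle names are illustrative): setting the fill time on the dataset-creation property list is what lets the parallel chunk-allocation path skip the fill-value writes.

    #include <hdf5.h>

    /* Build a dataset-creation property list that never writes fill values,
     * so newly allocated chunks are not initialized on allocation. */
    static hid_t make_no_fill_dcpl(void)
    {
        hid_t dcpl = H5Pcreate(H5P_DATASET_CREATE);

        H5Pset_fill_time(dcpl, H5D_FILL_TIME_NEVER);  /* vs. _IFSET or _ALLOC */
        return dcpl;
    }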
Document Bug fix/Code improvement below:
Description:
Currently, the chunk data allocation routine invoked to allocate space for
the entire dataset is inefficient. It writes out each chunk in the dataset,
whether that chunk is already allocated or not. Additionally, this happens
not only when the dataset is created, but also any time it is opened for
writing or extended. Worse, there is too much parallel I/O synchronization,
which slows things down even more.
Solution:
Only attempt to write out chunks that don't already exist. Additionally,
share the writing among all the processes, instead of writing everything
from process 0. Finally, only block with MPI_Barrier if chunks were
actually created.
Document Bug Fix
Document Performance enhancement
Back out change
Description:
Back out description of VFL 'flush' change.
Update
Description:
Added documentation on how you can install in a different directory
than the one you specified during configuration.
Document new VFL flush parameter.
Update release notes about rotating metadata writes.
Update
Description:
Updated how to compile HDF5 with Intel compilers (ecc or icc).
Purpose:
Maintenance
Description:
Added information about Parallel Fortran Support for HP-UX 11.00 SysV
and write/read overloaded subroutines (bug #670)
Bug Fix
Description:
Selection offsets were not being used correctly when iterating through
hyperslab selections and point selections.
Solution:
Use the selection offset appropriately.
Platforms tested:
FreeBSD 4.5 (sleipnir)
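A hedged illustration of the feature exercised by this fix (handles and offset values are illustrative): an offset applied with H5Soffset_simple() shifts the whole selection and must be honored while the selection is iterated during I/O.

    #include <hdf5.h>

    /* Shift a file-space selection by (1,1) and read through it; the iterator
     * must apply this offset to every hyperslab block and point it visits. */
    static herr_t read_with_offset(hid_t dset, hid_t memspace, hid_t filespace,
                                   int *buf)
    {
        hssize_t offset[2] = {1, 1};

        H5Soffset_simple(filespace, offset);
        return H5Dread(dset, H5T_NATIVE_INT, memspace, filespace, H5P_DEFAULT, buf);
    }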
Purpose:
New feature
Description:
Allow H5Glink and H5Gmove to handle links across different locations.
Solution:
Added H5Glink2 and H5Gmove2 functions with a new parameter for the
destination location.
Platforms tested:
Linux 2.2(eirene)
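A hedged sketch of the new calls (group handles and object names are illustrative): both take an explicit destination location, so the link or move can cross group locations.

    #include <hdf5.h>

    /* Move a dataset from one group to another, then create a hard link back
     * into the source group under a different name. */
    static void cross_location_links(hid_t src_group, hid_t dst_group)
    {
        H5Gmove2(src_group, "dset", dst_group, "dset");
        H5Glink2(dst_group, "dset", H5G_LINK_HARD, src_group, "dset_alias");
    }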
Updated the installation instructions for the Tflops machine.
Moved the parallel HDF5 building instructions to the front
and put in a NOTE that the sequential version is no longer supported,
because building sequential applications for the Tflops machine has
little practical value.
Update
Description:
Added the supported-platforms summary paragraph.
Thread safety is supported for Solaris 2.8 (32-bit).
Platforms tested:
hdfsun8
Maintenance
Description:
The local modification for the tflops option in bin/config.sub was wiped
out during the latest autoconf tools upgrade. Instead of adding it back
after every autoconf tools upgrade, I changed the instructions to use a
standard configure feature:
    ./configure --host=i386-intel-osf1
This is a bit more typing, but no local modification is needed any more.
Bug fix
Description:
When compound & VL datatypes nested several levels deep are used, the data
in the nested compound datatypes incorrectly shares the same "background
buffer", causing data corruption when the data is written to the file.
Solution:
Allocate a separate background buffer for each level of the nested types
to convert. (Also allocate temporary background buffers for array
datatypes, where the same sort of problem could occur.)
Added more regression tests to check for these errors.
Platforms tested:
FreeBSD 4.5 (sleipnir) & Solaris 2.6 (baldric)
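A hedged sketch of the kind of datatype affected (field names and layout are illustrative): a compound type containing a variable-length sequence of another compound type, which needs its own background buffer at each nesting level during conversion.

    #include <hdf5.h>

    /* Build a compound type whose single field is a variable-length sequence
     * of a nested compound type. */
    static hid_t make_nested_vl_compound(void)
    {
        hid_t inner = H5Tcreate(H5T_COMPOUND, sizeof(int));
        hid_t vl, outer;

        H5Tinsert(inner, "i", 0, H5T_NATIVE_INT);
        vl = H5Tvlen_create(inner);

        outer = H5Tcreate(H5T_COMPOUND, sizeof(hvl_t));
        H5Tinsert(outer, "seq", 0, vl);

        H5Tclose(vl);
        H5Tclose(inner);
        return outer;
    }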
Purpose:
Added a description of H5Dset_extent.
Purpose:
New feature
Description:
The fill-value behavior for contiguous datasets has been redefined.
Basically, a dataset won't allocate space until it is necessary. Full
details are currently available at http://hdf.ncsa.uiuc.edu/RFC/Fill_Value.
Platforms tested:
Linux 2.2.
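For reference, the allocation-time policy described in the RFC is exposed in the released C API as H5Pset_alloc_time(); a hedged sketch (handle names are illustrative):

    #include <hdf5.h>

    /* Delay space allocation for a contiguous dataset until data is actually
     * written (the "late" policy described in the RFC). */
    static hid_t make_late_alloc_dcpl(void)
    {
        hid_t dcpl = H5Pcreate(H5P_DATASET_CREATE);

        H5Pset_alloc_time(dcpl, H5D_ALLOC_TIME_LATE);  /* vs. _EARLY or _INCR */
        return dcpl;
    }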
New Feature
Description:
Added a new H5Dfill() routine to fill the selected elements of a memory
buffer with a fill value. This is a user-level API wrapper around some
internal routines that were needed for the fill-value modifications
from Raymond as well as Pedro's code for reducing the size of a chunked
dataset.
Platforms tested:
FreeBSD 4.5 (sleipnir) [and IRIX64 6.5 (modi4) in parallel, in a few
minutes]
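A hedged illustration of the new routine (the buffer size and fill value are illustrative): fill the selected elements of an in-memory buffer with a fill value.

    #include <hdf5.h>

    /* Set every element of a 100-element memory buffer to the fill value. */
    static void fill_buffer(int *buf /* at least 100 ints */)
    {
        int     fillval = -1;
        hsize_t dims[1] = {100};
        hid_t   space   = H5Screate_simple(1, dims, NULL);

        H5Sselect_all(space);
        H5Dfill(&fillval, H5T_NATIVE_INT, buf, H5T_NATIVE_INT, space);
        H5Sclose(space);
    }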
Purpose:
New feature
Description:
Added a query function, H5Tget_member_index, for compound and enumerated
datatypes, to retrieve a member's index by its name.
Platforms tested:
Linux 2.2
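A hedged illustration of the new query (the field names and layout are illustrative):

    #include <hdf5.h>

    /* Look up a compound member's index by name; a negative value is returned
     * if no member has that name. */
    static int index_of_y(void)
    {
        hid_t cmpd = H5Tcreate(H5T_COMPOUND, 2 * sizeof(int));
        int   idx;

        H5Tinsert(cmpd, "x", 0, H5T_NATIVE_INT);
        H5Tinsert(cmpd, "y", sizeof(int), H5T_NATIVE_INT);

        idx = H5Tget_member_index(cmpd, "y");   /* 1 for this layout */
        H5Tclose(cmpd);
        return idx;
    }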
Bug Fix & Code Cleanup
Description:
The MPI-IO optimized transfer routines
(H5S_mpio_spaces_read/H5S_mpio_spaces_write) are not being invoked in all
the cases where they could be used.
Additionally, the code for determining whether an optimized transfer can be
used is wrapped into the actual I/O transfer routine in a very confusing
way.
Solution:
Re-enabled the MPI-IO optimized transfer routines in all the cases where
they should work.
Extracted all the pre-conditions for optimized transfers into routines
separate from the transfer routines themselves.
Platforms tested:
FreeBSD 4.5 (sleipnir) & IRIX64 6.5 (modi4)
Bug Fix & Feature
Description:
The selection offset was being ignored for optimized hyperslab selection
I/O operations.
Additionally, the restrictions on optimized selection I/O operations were
too strict; I found a way to allow more hyperslab selections to use the
optimized I/O routines.
Solution:
Incorporate the selection offset into the selection location when performing
optimized I/O operations.
Allow optimized I/O on any single hyperslab selection, and also allow
optimized hyperslab operations on chunked datasets.
Platforms tested:
FreeBSD 4.5 (sleipnir)