Purpose:
Maintenance: I am committing files that were modified by the h5vers -s 1.4.4
command and the files that made it into the hdf5-1.4.4 released tar ball
(*.zip and INSTALL_windows.txt). Please do not check anything in
until I send an email telling you that you can do it!
My next step will be to tag the release with the hdf5_1_4_4 tag.
Thank you!
Purpose:
Added 1 blank line for more uniform spacing.
Platforms tested:
Viewed in vi.
Description:
Filled in "Documentation" section.
Made a light editorial pass on the entire file.
Platforms tested:
Viewed in vi. Printed.
Purpose:
Maintenance for the hdf5-1.4.4 release
Description:
Added h5redeploy description, and removed post0 from the text.
I also added the link to the official Zlib Website instead of ???.
Frank is going to update Documentation section after this checkin.
Purpose:
Inserting the R1.4.4 version of "Supported Configuration Features Summary".
Platforms tested:
Viewed in vi.
Purpose:
Update the install-on-Windows documentation.
Description:
Solution:
Platforms tested:
Purpose:
Remove outdated zlib-related information.
Description:
Solution:
Platforms tested:
Purpose:
correct typos.
Description:
Solution:
Platforms tested:
Purpose:
Maintenance
Description:
Updated INSTALL_Windows_withF90.txt for the upcoming release.
Purpose:
Information update.
Description:
Added IA-32 platform result.
Platforms tested:
platinum.
Purpose:
Update release doc for Windows-specific issues.
Description:
Issues such as moving zlib out of the source tree, building binaries and
sources against different zlib versions, and backward compatibility are addressed.
Solution:
Platforms tested:
Purpose:
update INSTALL_Windows.txt
Description:
zlib has been moved out of the HDF5 source tree; rearranged the project files accordingly.
Solution:
Platforms tested:
Described the removal of the H4 to H5 tools from the main source
tarball...
Added "Performance Improvements" section.
Purpose:
Maintenance
Description:
I forgot to describe the C compiler flag -DIA64, which must be specified
in order to build the Fortran library on IA64.
Purpose:
Maintenance
Description:
Did more clean-up of the "Tested Platforms" section and
added information on how to build with the Intel Fortran compiler.
Update FreeBSD version info.
Purpose:
Maintenance
Description:
Updated information about AIX 5.1, SV1 and Linux 2.4 testing platforms.
Bug Fix
Description:
When parallel I/O is used, the MPI-I/O VFL driver uses a "lazy" model to
call MPI_File_set_view() in order to reduce the number of calls to this
function. However, this is unsafe, because if a collective I/O which uses
MPI derived types (and thus uses MPI_File_set_view()) is immediately
followed by an independent I/O, the code will attempt to call
MPI_File_set_view() in order to switch back to the default view of the
file. MPI_File_set_view() is a collective call however, and this causes
the application to hang.
Solution:
Removed the "lazy" MPI_File_set_view() code; instead, set the file view when it
is needed (with MPI derived types) and immediately restore the default view
before leaving the I/O routine.
Platforms tested:
IRIX64 6.5 (modi4) w/parallel. Also, tested with the latest development
and release code for the SAF library, which now works correctly with this
change. (Although the release branch of the SAF library seems to have a
bug, this 1.4.4 release candidate code gets as far as the version the SAF
library is released on top of (1.4.2-patch1, I believe)).
Make a tarball of 1.4.4-pre4 that contains CVS stuff for easier
commits later. Bumped the version information with bin/h5vers -i.
Platforms tested:
No test since this is the same process the release script will do.
Version bump
Description:
I'm making another prerelease available for the SAF team, with some
features I'd like them to test, so bump the prerelease number again.
Update version info.
Description:
Bumped the prerelease number to reflect the fact that I gave out a prerelease
to the SAF developers yesterday and I don't want to confuse people when we
make another prerelease tarball.
New feature.
Description:
There is some discussion among the SAF team as to whether it is better
to use MPI derived types for raw data transfers (thus needing a
MPI_File_set_view() call), or whether it is better to use a sequence of
low-level MPI types (i.e. MPI_BYTE) for the raw data transfer.
Solution:
Added an internal flag to determine whether derived types are preferred
(the default), or whether they should be avoided. An environment variable
("HDF5_MPI_PREFER_DERIVED_TYPES") can be set by users to control whether MPI
types should be used or not. Set the environment variable to "0" (i.e.:
'setenv HDF5_MPI_PREFER_DERIVED_TYPES 0') to avoid using MPI derived types.
Platforms tested:
IRIX64 6.5 (modi4) w/parallel
Bug fix.
Description:
The chunking code was using internal allocation routines to put blocks on
a free list for reuse, instead of using the system allocation routines (i.e.
malloc, free, etc.). This causes problems when user filters attempt to
allocate/free chunks for their algorithm's use.
Solution:
Switched the chunking code back to using the system allocation routines;
we can address performance issues with them if they become a real problem.
Platforms tested:
Linux 2.2.x (eirene) && IRIX64 6.5 (modi4)
Code optimization
Description:
Avoid creating MPI types (and thus requiring a MPI_File_set_view() call)
when contiguous selections are used for dataset I/O. This should be a
performance improvement for those sorts of selections.
Platforms tested:
Linux 2.2.x (eirene) w/parallel && IRIX64 6.5 (modi4) w/parallel & FORTRAN
Version information was not updated when 1.4.4-pre1 was created,
so 1.4.4-pre1 actually contained 1.4.4-snap6.
Updated the version information to 1.4.4-pre2.
Platforms tested:
eyeballed all changes. Pretty sure only text changes in some character
strings.
Default change
Description:
Enable the use of MPI types for collective I/O by default.
Platforms tested:
Linux 2.2.x (eirene) w/parallel
Bug fix
Description:
I/O on "Regular" hyperslab selections could fail to transfer correctly
if the number of elements in the selection's row did not fit "evenly"
into the buffer being used for the transfer.
Solution:
Correct the calculation of the block & count offsets within the optimized
"regular" hyperslab routines.
Platforms tested:
FreeBSD 4.5 (sleipnir)
Update
Description:
Explained that if the user uses the "make install prefix=NEW_DIR"
option, they'll need to update the installed h5cc script.
Bug Fix
Description:
H5Dcreate and H5Tcommit allow "empty" compound and enumerated types (i.e.
ones with no members) to be stored in the file, but this causes an assertion
failure and is somewhat vapid.
Solution:
Check the datatype "makes sense" before using it for H5Dcreate and
H5Tcommit.
Platforms tested:
FreeBSD 4.5 (sleipnir)
Bug Fix (#709)/Code improvement.
Description:
Allow chunks for chunked datasets to be cached when the file is open for
read-only access.
Platforms tested:
IRIX64 6.5 (modi4) w/parallel
Bug fix (bug #777)
Description:
Current code allows a compound datatype to be inserted into itself.
Solution:
Check if the ID for the member is the same as the ID for the compound
datatype and reject it if so.
Platforms tested:
FreeBSD 4.5 (sleipnir)
Bug Fix for bug #789
Description:
Creating a 1-D dataset region reference caused the library to hang (go into
an infinite loop).
Solution:
Corrected algorithm for serializing hyperslab regions.
Platforms tested:
FreeBSD 4.5 (sleipnir)
New feature.
Description:
Added a "small data" block allocation mechanism to the library, similar to
the mechanism used for allocating metadata currently.
See the RFC for more details:
http://hdf.ncsa.uiuc.edu/RFC/SmallData/SmallData.html
This reduces the number of I/O operations which hit the disk for my test
program from 19 to 15 (i.e. from 393 to 15, overall).
Platforms tested:
Solaris 2.7 (arabica) w/FORTRAN and FreeBSD 4.5 (sleipnir) w/C++
Updated the instructions for tflops and O2K.
Platforms tested:
Eyeballed.
Purpose:
Bug fix (#699), fix provided by a user, approved by Quincey
Description:
When a scalar dataspace was written to the file and then
subsequently queried with the H5Sget_simple_extent_type function,
the type was reported as H5S_SIMPLE instead of H5S_SCALAR.
Solution:
Applied a fix
Platforms tested:
Solaris 2.7 and Linux 2.2.18
Code improvement
Description:
The metadata aggregation code in the library was not terribly smart about
extending contiguous regions of metadata in the file and would not extend
them as far as possible. This also wastes space in the file.
Solution:
Be smarter about extending the space used in the file for metadata by
checking whether new metadata blocks allocated in the file are at the end
of the current metadata aggregation region and append them to the metadata
region if so. This has the nice side benefit of reducing the number of
bytes we waste in the file and reducing the size of the file by a small
amount in some cases.
This reduces the number of I/O operations which hit the disk for my test
program from 53 to 19 (i.e. from 393 to 19, overall).
Platforms tested:
Solaris 2.7 (arabica) w/FORTRAN and FreeBSD 4.5 (sleipnir) w/C++
Bug Fix
Description:
The "dirty" flag for symbol table entries and symbol table nodes was not
being cleared when they were flushed to the file, causing lots of extra
metadata I/O.
Solution:
Reset the symbol table entry & nodes' flags when they are flushed to disk.
This reduces the number of I/O operations which hit the disk for my test
program from 83 to 53 (i.e. from 393 to 53, overall).
Platforms tested:
Solaris 2.7 (arabica) w/FORTRAN & FreeBSD 4.5 (sleipnir) w/C++
Code cleanup/bug fix
Description:
The "metadata accumulator" cache in the library (which is designed to catch
small metadata writes/reads and bundle them together into larger I/O
buffers) was incorrectly detecting the important case of metadata pieces
being written sequentially to the file, adjoining but not overlapping.
Additionally, the metadata accumulator was not being used to cache data
read in from disk, only caching writes.
Solution:
Fix accumulator to correctly cache adjoining metadata writes and also to
cache metadata read from disk.
Between these two fixes, the number of I/O requests which resulted in actual
reads/writes to the filesystem dropped from 393 requests to 82. :-)
Platforms tested:
Solaris 2.7 (arabica) w/FORTRAN & FreeBSD 4.5 (sleipnir) w/C++
Document Bug Fix
Description:
Under certain [obscure] circumstances, an object header would get paged out
of the metadata cache, and when it was accessed again and brought back into
the cache, and immediately had additional metadata added to it (an
attribute, usually, or perhaps adding an object to a group), and needed to
be extended with a continuation message, but there was no room in any
existing object header chunks for the continuation message and an existing
object header message needed to be moved to the new object header chunk (I
told you it was obscure :-), the object header message moved to the new
chunk (not the new metadata being added) would get corrupted. *whew* :-)
Solution:
Actually copy the "raw" object header message information of the object
header message being moved to the new chunk, instead of relying on the
"native" object header message information being re-encoded when the object
header is flushed. This is because when an object header is paged out of
the metadata cache and subsequently brought back in, the "native"
information pointer in memory is reset to NULL and only the "raw"
information exists.
Platforms tested:
Solaris 2.7 (arabica) & FreeBSD 4.5 (sleipnir)
Purpose:
Update install_windows.txt:
remove the HDF4-related material in install_windows.txt and
modify the file to reflect the retirement of the HAVE_*** macros in the HDF5 configuration process.
Description:
1) The HDF4-related tools have been moved out of the HDF5 CVS tree; the install doc should reflect this.
2) Albert finished the macro changes from HAVE_*** to H5_HAVE_***; the doc should reflect this.
Solution:
Platforms tested:
Document Bug fix/Code improvement below:
Description:
Currently, the chunk data allocation routine invoked to allocate space for
the entire dataset is inefficient. It writes out each chunk in the dataset,
whether it is already allocated or not. Additionally, this happens not
only when the dataset is created, but also any time it is opened for writing
or extended. Worse, there is too much parallel I/O synchronization,
which slows things down even more.
Solution:
Only attempt to write out chunks that don't already exist. Additionally,
share the I/O writing between all the nodes, instead of writing everything
with process 0. Then, only block with MPI_Barrier if chunks were actually
created.
Document Bug Fix