author    Bill Wendling <wendling@ncsa.uiuc.edu>    2000-12-08 18:33:20 (GMT)
committer Bill Wendling <wendling@ncsa.uiuc.edu>    2000-12-08 18:33:20 (GMT)
commit    28ee815050e97f563eeab009fdb1df6f859af784 (patch)
tree      ecd2aac82cb230e887368760c6bf8d92e599b8d1
parent    dc08de60a62bfeecd3f35553f8bfc353053b0986 (diff)
[svn-r3101] Purpose:
    Updates and reformatting

Description:
    Reformatting of RELEASE document. Updated some of the platforms in
    the INSTALL doc.
-rw-r--r--  INSTALL |   7
-rw-r--r--  RELEASE | 156
2 files changed, 83 insertions(+), 80 deletions(-)
diff --git a/INSTALL b/INSTALL
index a95286f..e8fb3a2 100644
--- a/INSTALL
+++ b/INSTALL
@@ -102,7 +102,7 @@
used to configure, build, test, and install the HDF5 library,
header files, and support programs.
- $ gunzip <hdf5-1.2.0.tar.gz |tar xf -
+ $ gunzip < hdf5-1.2.0.tar.gz | tar xf -
$ cd hdf5-1.2.0
$ make check
$ make install
@@ -345,7 +345,10 @@
saying `--disable-static' or `--disable-shared'.
$ ./configure --disable-shared
-
+
+ The C++ and Fortran libraries are currently available only in
+ static form.
+
To build only statically linked executables on platforms which
support shared libraries, use the `--enable-static-exec' flag.
diff --git a/RELEASE b/RELEASE
index 4113ec0..d275626 100644
--- a/RELEASE
+++ b/RELEASE
@@ -6,13 +6,12 @@
INTRODUCTION
-This document describes the differences between HDF5-1.2.0 and
+This document describes the differences between HDF5-1.2.0 and
HDF5-1.3.x, and contains information on the platforms where HDF5-1.3.x
-was tested (????? careful, under construction)
-and known problems in HDF5-1.3.x. For more details check the
-HISTORY file in the HDF5 source.
+was tested (????? careful, under construction) and known problems in
+HDF5-1.3.x. For more details check the HISTORY file in the HDF5 source.
-The HDF5 documentation can be found on the NCSA ftp server
+The HDF5 documentation can be found on the NCSA ftp server
(ftp.ncsa.uiuc.edu) in the directory:
/HDF/HDF5/docs/
@@ -37,35 +36,34 @@ CONTENTS
New features
============
* The Virtual File Layer, VFL, is added to replace the old file
- drivers. It also provides an API for user defined file drivers.
- * New features added to snapshots. Use 'snapshot help' to see a
+ drivers. It also provides an API for user defined file drivers.
+ * New features added to snapshots. Use 'snapshot help' to see a
complete list of features.
* Improved configure to detect if MPIO routines are available when
parallel mode is requested.
- * Added Thread-Safe support. Phase I implemented.
- * Added data sieve buffering to raw data I/O path. This is enabled for
- all VFL drivers except the mpio & core drivers. Setting the sieve buffer
- size is controlled with new API functions: H5Pset_sieve_buf_size() and
- retrieved with H5Pget_sieve_buf_size().
+ * Added Thread-Safe support. Phase I implemented.
+ * Added data sieve buffering to raw data I/O path. This is enabled
+ for all VFL drivers except the mpio & core drivers. Setting the
+ sieve buffer size is controlled with new API functions:
+ H5Pset_sieve_buf_size() and retrieved with H5Pget_sieve_buf_size().
* Added new Virtual File Driver, Stream VFD, to send/receive entire
HDF5 files via socket connections.
- * As parts of VFL, HDF-GASS and HDF-SRB are also added to this release.
- To find out details, please read INSTALL_VFL file.
- * Increased maximum number of dimensions for a dataset (H5S_MAX_RANK) from
- 31 to 32 to align with HDF4 & netCDF.
+ * As parts of VFL, HDF-GASS and HDF-SRB are also added to this
+ release. To find out details, please read INSTALL_VFL file.
+ * Increased maximum number of dimensions for a dataset (H5S_MAX_RANK)
+ from 31 to 32 to align with HDF4 & netCDF.
* Added 'query' function to VFL drivers. Also added 'type' parameter to
- VFL 'read' & 'write' calls, so they are aware of the type of data being
- accessed in the file. Updated the VFL document also.
- * A new h4toh5 uitlity, to convert HDF4 files to analogous
- HDF5 files.
+ VFL 'read' & 'write' calls, so they are aware of the type of data
+ being accessed in the file. Updated the VFL document also.
+ * A new h4toh5 utility, to convert HDF4 files to analogous HDF5 files.
* Added a new array datatype to the datatypes which can be created. Removed
"array fields" from compound datatypes (use an array datatype instead).
Release Notes for h4toh5 beta
=============================
- The h4toh5 utility converts an HDF4 file to an HDF5 file.
- See the document, "Mapping HDF4 Objects to HDF5 Objects",
+ The h4toh5 utility converts an HDF4 file to an HDF5 file. See the
+ document, "Mapping HDF4 Objects to HDF5 Objects",
http://hdf.ncsa.uiuc.edu/HDF5/papers/H4-H5MappingGuidelines.pdf
Known Limitations of the h4toh5 beta release
@@ -77,10 +75,10 @@ Release Notes for h4toh5 beta
2. String datatype
- HDF4 has no 'string' type. String valued data are usually defined
- as an array of 'char' in HDF4. The h4toh5 utility will generally
- map these to HDF5 'String' types rather than array of char, with
- the following additional rules:
+ HDF4 has no 'string' type. String valued data are usually defined as
+ an array of 'char' in HDF4. The h4toh5 utility will generally map
+ these to HDF5 'String' types rather than array of char, with the
+ following additional rules:
* For the data of HDF4 SDS, image, and palette, if the data is
declared 'DFNT_CHAR8' it will be assumed to be integer and
@@ -95,25 +93,23 @@ Release Notes for h4toh5 beta
3. Compression, Chunking and External storage
- Chunking is supported, but compression and external storage is
- not.
+ Chunking is supported, but compression and external storage is not.
- An HDF4 object that uses chunking will be converted to an HDF5
- file with analogous chunked storage.
+ An HDF4 object that uses chunking will be converted to an HDF5 file
+ with analogous chunked storage.
An HDF4 object that uses compression will be converted to an
uncompressed HDF5 object.
- An HDF4 object that uses external storage will be converted to an
- an HDF5 object without external storage.
+ An HDF4 object that uses external storage will be converted to an
+ HDF5 object without external storage.
4. Memory use
- The beta version of the h4toh5 utility copies data from HDF4
- objects in a single read followed by a single write to the
- HDF5 object. For large objects, this requires a very large
- amount of memory, which may be extremely slow or fail on
- some platforms.
+ The beta version of the h4toh5 utility copies data from HDF4 objects
+ in a single read followed by a single write to the HDF5 object. For
+ large objects, this requires a very large amount of memory, which may
+ be extremely slow or fail on some platforms.
Note that a dataset that has only been partly written will
be read completely, including uninitialized data, and all the
@@ -133,61 +129,62 @@ Bug fixes since HDF5-1.2.0
Library
-------
+
* The function H5Pset_mpi is renamed as H5Pset_fapl_mpio.
- * Corrected a floating point number conversion error for the
- Cray J90 platform. The error did not convert the value 0.0
- correctly.
- * Error was fixed which was not allowing dataset region references to have
- their regions retrieved correctly.
+ * Corrected a floating point number conversion error for the Cray J90
+ platform. The error did not convert the value 0.0 correctly.
+ * Fixed an error which was not allowing dataset region references to
+ have their regions retrieved correctly.
* Corrected a bug that caused non-parallel file drivers to fail in
the parallel version.
- * Added internal free-lists to reduce memory required by the library and
- H5garbage_collect API function
- * Fixed error in H5Giterate which was not updating the "index" parameter
- correctly.
+ * Added internal free-lists to reduce memory required by the library
+ and the H5garbage_collect API function.
+ * Fixed error in H5Giterate which was not updating the "index"
+ parameter correctly.
* Fixed error in hyperslab iteration which was not walking through the
correct sequence of array elements if hyperslabs were staggered in a
certain pattern
* Fixed several other problems in hyperslab iteration code.
- * Fixed another H5Giterate bug which was causes groups with large numbers
- of objects in them to misbehave when the callback function returned
- non-zero values.
+ * Fixed another H5Giterate bug which was causing groups with large
+ numbers of objects in them to misbehave when the callback function
+ returned non-zero values.
* Changed return type of H5Aiterate and H5A_operator_t typedef to be
herr_t, to align them with the dataset and group iterator functions.
- * Changed H5Screate_simple and H5Sset_extent_simple to not allow dimensions
- of size 0 with out the same dimension being unlimited.
- * QAK - 4/19/00 - Improved metadata hashing & caching algorithms to avoid
- many hash flushes and also remove some redundant I/O when moving metadata
- blocks in the file.
+ * Changed H5Screate_simple and H5Sset_extent_simple to not allow
+ dimensions of size 0 without the same dimension being unlimited.
+ * QAK - 4/19/00 - Improved metadata hashing & caching algorithms to
+ avoid many hash flushes and also remove some redundant I/O when
+ moving metadata blocks in the file.
* The "struct(opt)" type conversion function which gets invoked for
certain compound datatype conversions was fixed for nested compound
types. This required a small change in the datatype conversion
function API.
* Re-wrote lots of the hyperslab code to speed it up quite a bit.
- * Added bounded garbage collection for the free lists when they run out of
- memory and also added H5set_free_list_limits API call to allow users to
- put an upper limit on the amount of memory used for free lists.
- * Checked for non-existent or deleted objects when dereferencing one with
- object or region references and disallow dereference.
- * "Time" datatypes (H5T_UNIX_D*) were not being stored and retrieved from
- object headers correctly, fixed now.
- * Fixed H5Dread or H5Dwrite calls with H5FD_MPIO_COLLECTIVE requests that
- may hang because not all processes are transfer the same amount of data.
- (A.K.A. prematured collective return when zero amount data requested.)
- Collective calls that may cause hanging is done via the corresponding
- MPI-IO independent calls.
+ * Added bounded garbage collection for the free lists when they run
+ out of memory and also added H5set_free_list_limits API call to
+ allow users to put an upper limit on the amount of memory used for
+ free lists.
+ * Checked for non-existent or deleted objects when dereferencing one
+ with object or region references and disallowed the dereference.
+ * "Time" datatypes (H5T_UNIX_D*) were not being stored and retrieved
+ from object headers correctly; this is now fixed.
+ * Fixed H5Dread or H5Dwrite calls with H5FD_MPIO_COLLECTIVE requests
+ that may hang because not all processes transfer the same amount of
+ data (i.e., a premature collective return when a zero amount of data
+ is requested). Such collective calls are now done via the
+ corresponding MPI-IO independent calls.
Configuration
-------------
- * The hdf5.h include file was fixed to allow the HDF5 Library to be compiled
- with other libraries/applications that use GNU autoconf.
- * Configuration for parallel HDF5 was improved. Configure now attempts to
- link with libmpi.a and/or libmpio.a as the MPI libraries by default.
- It also uses "mpirun" to launch MPI tests by default. It tests to
- link MPIO routines during the configuration stage, rather than failing
- later as before. One can just do "./configure --enable-parallel"
- if the MPI library is in the system library.
+ * The hdf5.h include file was fixed to allow the HDF5 Library to be
+ compiled with other libraries/applications that use GNU autoconf.
+ * Configuration for parallel HDF5 was improved. Configure now attempts
+ to link with libmpi.a and/or libmpio.a as the MPI libraries by
+ default. It also uses "mpirun" to launch MPI tests by default. It
+ tests linking of MPIO routines during the configuration stage, rather
+ than failing later as before. One can just do "./configure
+ --enable-parallel" if the MPI library is in the system library.
* Added support for pthread library and thread-safe option.
* The libhdf5.settings file shows the correct machine byte-sex.
* Added option "--enable-stream-vfd" to configure w/o the Stream VFD.
@@ -206,9 +203,9 @@ Tools
* The test script for h5toh4 used to not able to detect the hdp
dumper command was not valid. It now detects and reports the
failure of hdp execution.
- * Merged the tools with the 1.2.2 branch. Required adding new macros, VERSION12
- and VERSION13, used in conditional compilation. Updated the Windows project files
- for the tools.
+ * Merged the tools with the 1.2.2 branch. Required adding new
+ macros, VERSION12 and VERSION13, used in conditional compilation.
+ Updated the Windows project files for the tools.
* h5dump displays opaque and bitfield data correctly.
* h5dump and h5ls can browse files created with the Stream VFD
(eg. "h5ls <hostname>:<port>").
@@ -227,7 +224,8 @@ Documentation
Platforms Tested:
================
- Note: Due to the nature of bug fixes, only static versions of the library and tools were tested.
+ Note: Due to the nature of bug fixes, only static versions of the
+ library and tools were tested.
AIX 4.3.2 (IBM SP) 3.6.6
@@ -235,6 +233,7 @@ Platforms Tested:
mpt.1.3
FreeBSD 3.3-STABLE gcc 2.95.2
HP-UX B.10.20 HP C HP92453-01 A.10.32
+ HP-UX B.11.00 HP C HP92453-01 A.11.00.13
IRIX 6.5 MIPSpro cc 7.30
IRIX64 6.5 (64 & n32) MIPSpro cc 7.3.1m
mpt.1.3 (SGI MPI 3.2.0.0)
@@ -242,6 +241,7 @@ Platforms Tested:
Linux 2.2.10 SuSE egcs-2.91.66 configured with
(i686-pc-linux-gnu) --disable-hsizet
mpich-1.2.0 egcs-2.91.66 19990314/Linux
+ Linux 2.2.16 RedHat gcc-2.95.2
OSF1 V4.0 DEC-V5.2-040
SunOS 5.6 cc WorkShop Compilers 4.2 no optimization
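
A minimal, hypothetical C sketch of the sieve-buffer property functions and
the H5set_free_list_limits call mentioned in the RELEASE notes above. The
signatures follow current HDF5 headers (size_t sieve sizes, int limits), and
the file name "example.h5" is only a placeholder, so the 1.3.x snapshots may
differ in detail.

    /* Sketch only: exercise the sieve-buffer FAPL functions and the
     * free-list limit call described in the release notes. */
    #include <stdio.h>
    #include "hdf5.h"

    int main(void)
    {
        hid_t  fapl, file;
        size_t sieve = 0;

        /* Cap the memory held by the library's internal free lists
         * (regular, array, and block lists; -1 would mean "no limit"). */
        H5set_free_list_limits(1 << 20, 1 << 16, 1 << 20, 1 << 16,
                               1 << 20, 1 << 16);

        /* Request a 512 KB data sieve buffer on a file-access property
         * list; sieving applies to all VFL drivers except mpio and core. */
        fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_sieve_buf_size(fapl, 512 * 1024);
        H5Pget_sieve_buf_size(fapl, &sieve);
        printf("sieve buffer size: %lu bytes\n", (unsigned long)sieve);

        /* Create a file using the tuned access property list
         * ("example.h5" is a placeholder name for this sketch). */
        file = H5Fcreate("example.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

        H5Fclose(file);
        H5Pclose(fapl);
        return 0;
    }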