Diffstat (limited to 'release_docs')
-rw-r--r--  release_docs/HISTORY.txt          984
-rw-r--r--  release_docs/INSTALL              538
-rw-r--r--  release_docs/INSTALL_TFLOPS       163
-rw-r--r--  release_docs/INSTALL_VFL          130
-rw-r--r--  release_docs/INSTALL_Windows.txt  915
-rw-r--r--  release_docs/INSTALL_parallel     177
-rw-r--r--  release_docs/RELEASE.txt          227
7 files changed, 3134 insertions, 0 deletions
diff --git a/release_docs/HISTORY.txt b/release_docs/HISTORY.txt
new file mode 100644
index 0000000..79f5065
--- /dev/null
+++ b/release_docs/HISTORY.txt
@@ -0,0 +1,984 @@
+HDF5 HISTORY
+============
+
+CONTENTS
+I. Release Information for hdf5-1.2.2
+II. Release Information for hdf5-1.2.1
+III. Release Information for hdf5-1.2.0
+ A. Platforms Supported
+ B. Known Problems
+ C. Changes Since Version 1.0.1
+ 1. Documentation
+ 2. Configuration
+ 3. Debugging
+ 4. Datatypes
+ 5. Dataspaces
+ 6. Persistent Pointers
+ 7. Parallel Support
+ 8. New API Functions
+ a. Property List Interface
+ b. Dataset Interface
+ c. Dataspace Interface
+ d. Datatype Interface
+ e. Identifier Interface
+ f. Reference Interface
+ g. Ragged Arrays
+ 9. Tools
+
+IV. Changes from Release 1.0.0 to Release 1.0.1
+V. Changes from the Beta 1.0.0 Release to Release 1.0.0
+VI. Changes from the Second Alpha 1.0.0 Release to the Beta 1.0.0 Release
+VII. Changes from the First Alpha 1.0.0 Release to the
+ Second Alpha 1.0.0 Release
+
+[Search on the string '%%%%' for per-release section breaks.]
+
+-----------------------------------------------------------------------
+
+
+
+%%%%1.2.2%%%% Release Information for hdf5-1.2.2 (6/23/00)
+
+I. Release Information for hdf5-1.2.2
+
+INTRODUCTION
+
+This document describes the differences between HDF5-1.2.1 and
+HDF5-1.2.2, and contains information on the platforms where HDF5-1.2.2
+was tested and known problems in HDF5-1.2.2.
+
+The HDF5 documentation can be found on the NCSA ftp server
+(ftp.ncsa.uiuc.edu) in the directory:
+
+ /HDF/HDF5/docs/
+
+For more information look at the HDF5 home page at:
+
+ http://hdf.ncsa.uiuc.edu/HDF5/
+
+If you have any questions or comments, please send them to:
+
+ hdfhelp@ncsa.uiuc.edu
+
+
+CONTENTS
+
+- Features Added since HDF5-1.2.1
+- Bug Fixes since HDF5-1.2.1
+- Known Problems
+- Platforms Tested
+
+
+Features Added since HDF5-1.2.1
+===============================
+ * Added internal free-lists to reduce memory required by the library,
+ and added the H5garbage_collect API function.
+ * h5dump displays opaque and bitfield types.
+ * New features added to snapshots. Use 'snapshot help' to see a
+ complete list of features.
+ * Improved configure to detect if MPIO routines are available when
+ parallel mode is requested.
+
+Bug Fixes since HDF5-1.2.1
+==========================
+ * h5dump correctly displays compound datatypes, including simple and
+ nested compound types.
+ * h5dump correctly displays the committed copy of predefined types.
+ * Corrected an error in h5toh4 which did not convert the 32-bit
+ int from HDF5 to HDF4 correctly for the T3E platform.
+ * Corrected a floating point number conversion error for the
+ Cray J90 platform. The conversion did not handle the value 0.0
+ correctly.
+ * Fixed error in H5Giterate which was not updating the "index" parameter
+ correctly.
+ * Fixed error in hyperslab iteration which was not walking through the
+ correct sequence of array elements if hyperslabs were staggered in a
+ certain pattern.
+ * Fixed several other problems in hyperslab iteration code.
+ * Fixed another H5Giterate bug which caused groups with large numbers
+ of objects in them to misbehave when the callback function returned
+ non-zero values.
+ * Changed return type of H5Aiterate and H5A_operator_t typedef to be
+ herr_t, to align them with the dataset and group iterator functions.
+ * Changed H5Screate_simple and H5Sset_extent_simple to not allow dimensions
+ of size 0 without the same dimension being unlimited.
+ * Improved metadata hashing & caching algorithms to avoid
+ many hash flushes and also removed some redundant I/O when moving metadata
+ blocks in the file.
+ * The libhdf5.settings file shows the correct machine byte-sex.
+ * The "struct(opt)" type conversion function which gets invoked for
+ certain compound datatype conversions was fixed for nested compound
+ types. This required a small change in the datatype conversion
+ function API.
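The H5Aiterate return-type change above aligns attribute callbacks with the herr_t convention used by the group and dataset iterators. As a minimal sketch (the callback name and usage are illustrative, not from this release's test suite):

```c
#include <hdf5.h>
#include <stdio.h>

/* Attribute iterator callback using the herr_t convention described
 * above: return 0 to continue, a positive value to stop early. */
static herr_t print_attr_name(hid_t loc_id, const char *attr_name,
                              void *op_data)
{
    (void)loc_id;
    (void)op_data;
    printf("attribute: %s\n", attr_name);
    return 0;  /* continue iterating */
}

/* Usage (obj_id is an open group or dataset):
 *     unsigned idx = 0;
 *     H5Aiterate(obj_id, &idx, print_attr_name, NULL);
 */
```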
+
+Known Problems
+==============
+
+o SunOS 5.6 with C WorkShop Compilers 4.2: hyperslab selections will
+ fail if library is compiled using optimization of any level.
+o TFLOPS: dsets test fails if compiled with optimization turned on.
+o J90: tools fail to display data for datasets with a compound datatype.
+
+Platforms Tested
+================
+
+ AIX 4.3.3 (IBM SP) 3.6.6 | binaries
+ mpicc using mpich 1.1.2 | are not
+ mpicc_r using IBM MPI-IO prototype | available
+ AIX 4.3.2.0 (IBM SP) xlc 5.0.1.0
+ Cray J90 10.0.0.7 cc 6.3.0.2
+ Cray T3E 2.0.5.29 cc 6.3.0.2
+ mpt.1.3
+ FreeBSD 4.0 gcc 2.95.2
+ HP-UX B.10.20 HP C HP92453-01 A.10.32
+ HP-UX B.11.00 HP92453-01 A.11.00.13 HP C Compiler
+ (static library only, h5toh4 tool is not available)
+ IRIX 6.5 MIPSpro cc 7.30
+ IRIX64 6.5 (64 & n32) MIPSpro cc 7.3.1m
+ mpt.1.4
+
+ Linux 2.2.10 SMP gcc 2.95.1
+ mpicc(gcc-2.95.1)
+ gcc (egcs-2.91.66)
+ mpicc (egcs-2.91.66)
+ Linux 2.2.16 (RedHat 6.2) gcc 2.95.2
+
+ OSF1 V4.0 DEC-V5.2-040
+ SunOS 5.6 cc WorkShop Compilers 5.0 no optimization
+ SunOS 5.7 cc WorkShop Compilers 5.0
+ SolarisX86 SunOS 5.5.1 gcc version 2.7.2 with --disable-hsizet
+ TFLOPS 3.2.1 pgcc Rel 3.1-3i
+ mpich-1.1.2 with local changes
+ Windows NT4.0 sp5 MSVC++ 6.0
+ Windows 98 MSVC++ 6.0
+ Windows 2000 MSVC++ 6.0
+
+
+
+%%%%1.2.1%%%% Release Information for hdf5-1.2.1
+
+II. Release Information for hdf5-1.2.1
+
+Bug fixes since HDF5-1.2.0
+==========================
+
+Configuration
+-------------
+
+ * The hdf5.h include file was fixed to allow the HDF5 Library to be compiled
+ with other libraries/applications that use GNU autoconf.
+ * Configuration for parallel HDF5 was improved. Configure now attempts to
+ link with libmpi.a and/or libmpio.a as the MPI libraries by default.
+ It also uses "mpirun" to launch MPI tests by default. It tests
+ linking of the MPIO routines during the configuration stage, rather
+ than failing later as before. One can just run
+ "./configure --enable-parallel" if the MPI library is in the
+ system library path.
+
+Library
+-------
+
+ * Error was fixed which was not allowing dataset region references to have
+ their regions retrieved correctly.
+ * Added internal free-lists to reduce memory required by the library,
+ and added the H5garbage_collect API function.
+ * Fixed error in H5Giterate which was not updating the "index" parameter
+ correctly.
+ * Fixed error in hyperslab iteration which was not walking through the
+ correct sequence of array elements if hyperslabs were staggered in a
+ certain pattern
+ * Fixed several other problems in hyperslab iteration code.
+
+Tests
+------
+
+ * Added additional tests for group and attribute iteration.
+ * Added additional test for staggered hyperslab iteration.
+ * Added additional test for random 5-D hyperslab selection.
+
+Tools
+------
+
+ * Added an option, -V, to show the version information of h5dump.
+ * Fixed a core dumping bug of h5toh4 when executed on platforms like
+ TFLOPS.
+ * The test script for h5toh4 previously was not able to detect when
+ the hdp dumper command was not valid. It now detects and reports
+ the failure of hdp execution.
+
+Documentation
+-------------
+
+ * User's Guide and Reference Manual were updated.
+ See doc/html/PSandPDF/index.html for more details.
+
+
+Platforms Tested:
+================
+Note: Due to the nature of bug fixes, only static versions of the library and tools were tested.
+
+
+ AIX 4.3.2 (IBM SP) 3.6.6
+ Cray T3E 2.0.4.81 cc 6.3.0.1
+ mpt.1.3
+ FreeBSD 3.3-STABLE gcc 2.95.2
+ HP-UX B.10.20 HP C HP92453-01 A.10.32
+ IRIX 6.5 MIPSpro cc 7.30
+ IRIX64 6.5 (64 & n32) MIPSpro cc 7.3.1m
+ mpt.1.3 (SGI MPI 3.2.0.0)
+
+ Linux 2.2.10 SuSE egcs-2.91.66 configured with
+ (i686-pc-linux-gnu) --disable-hsizet
+ mpich-1.2.0 egcs-2.91.66 19990314/Linux
+
+ OSF1 V4.0 DEC-V5.2-040
+ SunOS 5.6 cc WorkShop Compilers 4.2 no optimization
+ SunOS 5.7 cc WorkShop Compilers 5.0
+ TFLOPS 2.8 cicc (pgcc Rel 3.0-5i)
+ mpich-1.1.2 with local changes
+ Windows NT4.0 sp5 MSVC++ 6.0
+
+Known Problems:
+==============
+
+o SunOS 5.6 with C WorkShop Compilers 4.2: Hyperslab selections will
+ fail if library is compiled using optimization of any level.
+
+
+
+%%%%1.2.0%%%% Release Information for hdf5-1.2.0
+
+III. Release Information for hdf5-1.2.0
+
+A. Platforms Supported
+ -------------------
+
+Operating systems listed below with compiler information and MPI library, if
+applicable, are systems that HDF5 1.2.0 was tested on.
+
+ Compiler & libraries
+ Platform Information Comment
+ -------- ---------- --------
+
+ AIX 4.3.2 (IBM SP) 3.6.6
+
+ Cray J90 10.0.0.6 cc 6.3.0.0
+
+ Cray T3E 2.0.4.61 cc 6.2.1.0
+ mpt.1.3
+
+ FreeBSD 3.2 gcc 2.95.1
+
+ HP-UX B.10.20 HP C HP92453-01 A.10.32
+ gcc 2.8.1
+
+ IRIX 6.5 MIPSpro cc 7.30
+
+ IRIX64 6.5 (64 & n32) MIPSpro cc 7.3.1m
+ mpt.1.3 (SGI MPI 3.2.0.0)
+
+ Linux 2.2.10 egcs-2.91.66 configured with
+ --disable-hsizet
+ libraries: glibc2
+
+ OSF1 V4.0 DEC-V5.2-040
+
+ SunOS 5.6 cc WorkShop Compilers 4.2
+ no optimization
+ gcc 2.8.1
+
+ SunOS 5.7 cc WorkShop Compilers 5.0
+ gcc 2.8.1
+
+ TFLOPS 2.7.1 cicc (pgcc Rel 3.0-4i)
+ mpich-1.1.2 with local changes
+
+ Windows NT4.0 intel MSVC++ 5.0 and 6.0
+
+ Windows NT alpha 4.0 MSVC++ 5.0
+
+ Windows 98 MSVC++ 5.0
+
+
+B. Known Problems
+ --------------
+
+* NT alpha 4.0
+ Dumper utility h5dump fails if linked with DLL.
+
+* SunOS 5.6 with C WorkShop Compilers 4.2
+ Hyperslab selections will fail if library is compiled using optimization
+ of any level.
+
+
+C. Changes Since Version 1.0.1
+ ---------------------------
+
+1. Documentation
+ -------------
+
+* More examples
+
+* Updated user guide, reference manual, and format specification.
+
+* Self-contained documentation for installations isolated from the
+ Internet.
+
+* HDF5 Tutorial was added to the documentation
+
+2. Configuration
+ -------------
+
+* Better detection and support for MPI-IO.
+
+* Recognition of compilers with known code generation problems.
+
+* Support for various compilers on a single architecture (e.g., the
+ native compiler and the GNU compilers).
+
+* Ability to build from read-only media and with different compilers
+ and/or options concurrently.
+
+* Added a libhdf5.settings file which summarizes the configuration
+ information and is installed along with the library.
+
+* Builds a shared library on most systems that support it.
+
+* Support for Cray T3E, J90 and Windows/NT.
+
+3. Debugging
+ ---------
+
+* Improved control and redirection of debugging and tracing messages.
+
+4. Datatypes
+ ---------
+
+* Optimizations to compound datatype conversions and I/O operations.
+
+* Added nearly 100 optimized conversion functions for native datatypes
+ including support for non-aligned data.
+
+* Added support for bitfield, opaque, and enumeration types.
+
+* Added distinctions between signed and unsigned char types to the
+ list of predefined native hdf5 datatypes.
+
+* Added HDF5 type definitions for C9x types like int32_t.
+
+* Application-defined type conversion functions can handle non-packed
+ data.
+
+* Changed the H5Tunregister() function to use wildcards when matching
+ conversion functions. H5Tregister_hard() and H5Tregister_soft()
+ were combined into H5Tregister().
+
+* Support for variable-length datatypes (arrays of varying length per
+ dataset element). Variable length strings currently supported only
+ as variable length arrays of 1-byte integers.
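The variable-length datatype support above can be sketched roughly as follows. This is an illustrative sketch, not code from the release; the helper name is invented, and writing/reading details are summarized in comments.

```c
#include <hdf5.h>

/* Sketch: create a variable-length datatype whose elements are
 * sequences of native ints, per the feature described above. */
static hid_t make_vlen_int_type(void)
{
    /* Each dataset element is an hvl_t: a length plus a pointer. */
    return H5Tvlen_create(H5T_NATIVE_INT);
}

/* Writing uses hvl_t element buffers, e.g.:
 *     int   data[3] = {1, 2, 3};
 *     hvl_t elem    = { 3, data };   /* len, p */
 * After reading, H5Dvlen_reclaim() frees library-allocated buffers. */
```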
+
+5. Dataspaces
+ ----------
+
+* New query functions for selections.
+
+* I/O operations bypass the stripmining loop and go directly to
+ storage for certain contiguous selections in the absence of type
+ conversions. In other cases the stripmining buffers are used more
+ effectively.
+
+* Reduced the number of I/O requests under certain circumstances,
+ improving performance on systems with high I/O latency.
+
+6. Persistent Pointers
+ -------------------
+
+* Object (serial and parallel) and dataset region (serial only)
+ references are implemented.
+
+7. Parallel Support
+ ----------------
+
+* Improved parallel I/O performance.
+
+* Supported new platforms: Cray T3E, Linux, DEC Cluster.
+
+* Use vendor supported version of MPIO on SGI O2K and Cray platforms.
+
+* Improved the algorithm that translates an HDF5 hyperslab selection
+ into an MPI type for better collective I/O performance.
+
+8. New API functions
+ -----------------
+
+ a. Property List Interface:
+ ------------------------
+
+ H5Pset_xfer - set data transfer properties
+ H5Pset_preserve - set dataset transfer property list status
+ H5Pget_preserve - get dataset transfer property list status
+ H5Pset_hyper_cache - indicates whether to cache hyperslab blocks during I/O
+ H5Pget_hyper_cache - returns information regarding the caching of
+ hyperslab blocks during I/O
+ H5Pget_btree_ratios - gets B-tree split ratios for a dataset
+ transfer property list
+ H5Pset_btree_ratios - sets B-tree split ratios for a dataset
+ transfer property list
+ H5Pset_vlen_mem_manager - sets the memory manager for variable-length
+ datatype allocation
+ H5Pget_vlen_mem_manager - gets the memory manager for variable-length
+ datatype allocation
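As a hedged illustration of the new property list functions above, the B-tree split ratios might be tuned on a dataset transfer property list like this (the ratio values are illustrative only):

```c
#include <hdf5.h>

/* Sketch: tune B-tree split ratios on a dataset transfer property
 * list.  Ratios near 1.0 favor append-style sequential writes. */
static hid_t make_tuned_xfer_plist(void)
{
    hid_t plist = H5Pcreate(H5P_DATASET_XFER);

    /* left, middle, right split ratios (illustrative values) */
    H5Pset_btree_ratios(plist, 0.1, 0.5, 0.9);
    return plist;
}
```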
+
+ b. Dataset Interface:
+ ------------------
+
+ H5Diterate - iterate over all selected elements in a dataspace
+ H5Dget_storage_size - return the amount of storage required for a dataset
+ H5Dvlen_reclaim - reclaim VL datatype memory buffers
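A minimal sketch of an H5Diterate element callback follows. The callback name is invented, and the exact coordinate argument types have varied between releases, so treat this as an outline rather than a guaranteed-to-compile example:

```c
#include <hdf5.h>

/* Sketch: H5Diterate callback that sums selected int elements.
 * 'point' gives the element's coordinates within the dataspace. */
static herr_t sum_elem(void *elem, hid_t type_id, hsize_t ndim,
                       hssize_t *point, void *op_data)
{
    (void)type_id; (void)ndim; (void)point;
    *(long *)op_data += *(int *)elem;
    return 0;  /* 0 continues iteration */
}

/* Usage (buf holds the selected data, space_id is its dataspace):
 *     long total = 0;
 *     H5Diterate(buf, H5T_NATIVE_INT, space_id, sum_elem, &total);
 */
```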
+
+ c. Dataspace Interface:
+ --------------------
+ H5Sget_select_hyper_nblocks - get number of hyperslab blocks
+ H5Sget_select_hyper_blocklist - get the list of hyperslab blocks
+ currently selected
+ H5Sget_select_elem_npoints - get the number of element points
+ in the current selection
+ H5Sget_select_elem_pointlist - get the list of element points
+ currently selected
+ H5Sget_select_bounds - gets the bounding box containing
+ the current selection
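The selection query functions above might be used as sketched below. This assumes a 2-D dataspace with a selection already made; the coordinate argument types shown follow later releases and may differ slightly in this one:

```c
#include <hdf5.h>
#include <stdio.h>

/* Sketch: report a selection's point count and bounding box using
 * the query functions listed above. */
static void report_selection(hid_t space_id)
{
    hsize_t  start[2], end[2];
    hssize_t npoints = H5Sget_select_npoints(space_id);

    if (H5Sget_select_bounds(space_id, start, end) >= 0)
        printf("%ld points in [%lu,%lu]..[%lu,%lu]\n",
               (long)npoints,
               (unsigned long)start[0], (unsigned long)start[1],
               (unsigned long)end[0],   (unsigned long)end[1]);
}
```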
+
+ d. Datatype Interface:
+ -------------------
+ H5Tget_super - return the base datatype from which a
+ datatype is derived
+ H5Tvlen_create - creates a new variable-length datatype
+ H5Tenum_create - creates a new enumeration datatype
+ H5Tenum_insert - inserts a new enumeration datatype member
+ H5Tenum_nameof - returns the symbol name corresponding to a
+ specified member of an enumeration datatype
+ H5Tenum_valueof - return the value corresponding to a
+ specified member of an enumeration datatype
+ H5Tget_member_value - return the value of an enumeration datatype member
+ H5Tset_tag - tags an opaque datatype
+ H5Tget_tag - gets the tag associated with an opaque datatype
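A small sketch of the enumeration datatype functions above; the member names and values are illustrative:

```c
#include <hdf5.h>

/* Sketch: build a small enumeration datatype with the functions
 * listed above.  Members map symbolic names to int values. */
static hid_t make_color_enum(void)
{
    hid_t etype = H5Tenum_create(H5T_NATIVE_INT);
    int   v;

    v = 0; H5Tenum_insert(etype, "RED",   &v);
    v = 1; H5Tenum_insert(etype, "GREEN", &v);
    v = 2; H5Tenum_insert(etype, "BLUE",  &v);
    return etype;
}
```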
+
+ e. Identifier Interface:
+ ---------------------
+ H5Iget_type - retrieve the type of an object
+
+ f. Reference Interface:
+ --------------------
+ H5Rcreate - creates a reference
+ H5Rdereference - open the HDF5 object referenced
+ H5Rget_region - retrieve a dataspace with the specified region selected
+ H5Rget_object_type - retrieve the type of object that an
+ object reference points to
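The Reference Interface above can be sketched as follows for plain object references. The dataset path "/dset1" is invented for illustration, and this is an outline of the calls rather than tested code:

```c
#include <hdf5.h>

/* Sketch: create an object reference to a dataset, then reopen the
 * object from that reference, per the interface above. */
static void reference_roundtrip(hid_t file_id)
{
    hobj_ref_t ref;

    /* The final dataspace id is unused (-1) for object references. */
    H5Rcreate(&ref, file_id, "/dset1", H5R_OBJECT, -1);

    /* Reopen the referenced object from the stored reference. */
    hid_t obj = H5Rdereference(file_id, H5R_OBJECT, &ref);
    H5Dclose(obj);
}
```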
+
+ g. Ragged Arrays (alpha) (names of those API functions were changed):
+ ------------------------------------------------------------------
+ H5RAcreate - create a new ragged array (old name was H5Rcreate)
+ H5RAopen - open an existing array (old name was H5Ropen)
+ H5RAclose - close a ragged array (old name was H5Rclose)
+ H5RAwrite - write to an array (old name was H5Rwrite)
+ H5RAread - read from an array (old name was H5Rread)
+
+
+9. Tools
+ -----
+
+* Enhancements to the h5ls tool including the ability to list objects
+ from more than one file, to display raw hexadecimal data, to
+ show file addresses for raw data, to format output more reasonably,
+ to show object attributes, and to perform a recursive listing.
+
+* Enhancements to h5dump: support new data types added since previous
+ versions.
+
+* h5toh4: An hdf5 to hdf4 converter.
+
+
+
+%%%%1.0.1%%%% Release Information for hdf5-1.0.1
+
+IV. Changes from Release 1.0.0 to Release 1.0.1
+
+* [Improvement]: configure sets up the Makefile in the parallel test
+ suite (testpar/) correctly.
+
+* [Bug-Fix]: Configure failed for all IRIX versions other than 6.3.
+ It now configures correctly for all IRIX 6.x versions.
+
+* Released Parallel HDF5
+
+ Supported Features:
+ ------------------
+
+ HDF5 files are accessed according to the communicator and INFO
+ object defined in the property list set by H5Pset_mpi.
+
+ Independent read and write accesses to fixed and extendable dimension
+ datasets.
+
+ Collective read and write accesses to fixed dimension datasets.
+
+ Supported Platforms:
+ -------------------
+
+ Intel Red
+ IBM SP2
+ SGI Origin 2000
+
+ Changes In This Release:
+ -----------------------
+
+ o Support of Access to Extendable Dimension Datasets.
+ Extendable dimension datasets must use chunked storage methods.
+ A new function, H5Dextend, was created to extend the current
+ dimensions of a dataset. The current release requires that an
+ MPI application make a collective call to H5Dextend before
+ writing to the newly extended area of an extendable dataset.
+ (The serial library does not require the call to H5Dextend;
+ the dimensions of an extendable dataset are increased when data
+ is written beyond the current dimensions but within the maximum
+ dimensions.) The required collective call to H5Dextend may be
+ relaxed in a future release.
+
+ This release supports only independent read and write accesses
+ to extendable datasets. Collective accesses to extendable
+ datasets will be implemented in future releases.
+
+ o Collective access to fixed dimension datasets.
+ Collective access to a dataset can be specified in the transfer
+ property list argument in H5Dread and H5Dwrite. The current
+ release supports collective access to fixed dimension datasets.
+ Collective access to extendable datasets will be implemented in
+ future releases.
+
+ o HDF5 files are opened according to Communicator and INFO object.
+ H5Fopen now records the communicator and INFO set up by H5Pset_mpi
+ and passes them to the corresponding MPIO open file calls for
+ processing.
+
+ o This release has been tested on IBM SP2, Intel Red and SGI Origin 2000
+ systems. It uses the ROMIO version of MPIO interface for parallel
+ I/O supports.
+
+
+
+%%%%1.0.0%%%% Release Information for hdf5-1.0.0
+
+V. Changes from the Beta 1.0.0 Release to Release 1.0.0
+
+* Added fill values for datasets. For contiguous datasets fill value
+ performance may be quite poor since the fill value is written to the
+ entire dataset when the dataset is created. This will be remedied
+ in a future version. Chunked datasets using fill values do not
+ incur any additional overhead. See H5Pset_fill_value().
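The fill-value feature above can be sketched as follows. The fill value and chunk size are illustrative; chunked layout is used here since, as noted above, chunked datasets avoid the contiguous-fill overhead:

```c
#include <hdf5.h>

/* Sketch: set a fill value on a dataset creation property list,
 * per H5Pset_fill_value() described above. */
static hid_t make_filled_create_plist(void)
{
    hid_t   plist    = H5Pcreate(H5P_DATASET_CREATE);
    int     fill     = -1;        /* illustrative fill value */
    hsize_t chunk[1] = {64};      /* illustrative chunk size */

    H5Pset_chunk(plist, 1, chunk);
    H5Pset_fill_value(plist, H5T_NATIVE_INT, &fill);
    return plist;
}
```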
+
+* Multiple hdf5 files can be "mounted" on one another to create a
+ larger virtual file. See H5Fmount().
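As a hedged sketch of the mounting feature above (the file names are invented, and the mount-point group /mnt is assumed to already exist in parent.h5):

```c
#include <hdf5.h>

/* Sketch: mount one HDF5 file at a group of another, per H5Fmount()
 * described above. */
static void mount_example(void)
{
    hid_t parent = H5Fopen("parent.h5", H5F_ACC_RDWR,   H5P_DEFAULT);
    hid_t child  = H5Fopen("child.h5",  H5F_ACC_RDONLY, H5P_DEFAULT);

    /* Objects in child.h5 now appear under /mnt in parent.h5
     * (assumes the group /mnt already exists). */
    H5Fmount(parent, "/mnt", child, H5P_DEFAULT);

    /* ... access /mnt/... through 'parent' ... */

    H5Funmount(parent, "/mnt");
    H5Fclose(child);
    H5Fclose(parent);
}
```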
+
+* Object names can be removed or changed but objects are never
+ actually removed from the file yet. See H5Gunlink() and H5Gmove().
+
+* Added a tuning mechanism for B-trees to ensure that sequential
+ writes to chunked datasets use less overhead. See H5Pset_btree_ratios().
+
+* Various optimizations and bug fixes.
+
+
+
+%%%%1.0.0 Beta%%%% Release Information for hdf5-1.0.0 Beta
+
+VI. Changes from the Second Alpha 1.0.0 Release to the Beta 1.0.0 Release
+
+* Strided hyperslab selections in dataspaces now working.
+
+* The compression API has been replaced with a more general filter
+ API. See doc/html/Filters.html for details.
+
+* Alpha-quality 2d ragged arrays are implemented as a layer built on
+ top of other hdf5 objects. The API and storage format will almost
+ certainly change.
+
+* More debugging support including API tracing. See Debugging.html.
+
+* C and Fortran style 8-bit fixed-length character string types are
+ supported with space or null padding or null termination and
+ translations between them.
+
+* Added function H5Fflush() to write all cached data immediately to
+ the file.
+
+* Datasets maintain a modification time which can be retrieved with
+ H5Gstat().
+
+* The h5ls tool can display much more information, including all the
+ values of a dataset.
+
+
+
+%%%%1.0.0 Alpha 2%%%% Release Information for hdf5-1.0.0 Alpha 2
+
+VII. Changes from the First Alpha 1.0.0 Release to
+ the Second Alpha 1.0.0 Release
+
+* Two of the packages have been renamed. The data space API has been
+ renamed from `H5P' to `H5S' and the property list (template) API has
+ been renamed from `H5C' to `H5P'.
+
+* The new attribute API `H5A' has been added. An attribute is a small
+ dataset which can be attached to some other object (for instance, a
+ 4x4 transformation matrix attached to a 3-dimensional dataset, or an
+ English abstract attached to a group).
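The 4x4 transformation matrix example above can be sketched with the H5A calls; the attribute name and values are illustrative:

```c
#include <hdf5.h>

/* Sketch: attach a 4x4 matrix attribute to an object, per the
 * example in the text (here, an identity matrix). */
static void attach_matrix_attr(hid_t dset_id)
{
    hsize_t dims[2] = {4, 4};
    double  m[4][4] = {{1, 0, 0, 0}, {0, 1, 0, 0},
                       {0, 0, 1, 0}, {0, 0, 0, 1}};

    hid_t space = H5Screate_simple(2, dims, NULL);
    hid_t attr  = H5Acreate(dset_id, "transform", H5T_NATIVE_DOUBLE,
                            space, H5P_DEFAULT);

    H5Awrite(attr, H5T_NATIVE_DOUBLE, m);
    H5Aclose(attr);
    H5Sclose(space);
}
```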
+
+* The error handling API `H5E' has been completed. By default, when an
+ API function returns failure an error stack is displayed on the
+ standard error stream. The H5Eset_auto() controls the automatic
+ printing and H5E_BEGIN_TRY/H5E_END_TRY macros can temporarily
+ disable the automatic error printing.
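The H5E_BEGIN_TRY/H5E_END_TRY macros above might be used as sketched below to probe for an object without triggering the automatic error printout (the helper name is invented):

```c
#include <hdf5.h>

/* Sketch: temporarily silence automatic error printing while probing
 * for a dataset that may not exist, using the macros named above. */
static hid_t open_optional_dataset(hid_t file_id, const char *name)
{
    hid_t dset;

    H5E_BEGIN_TRY {
        dset = H5Dopen(file_id, name);   /* may fail silently here */
    } H5E_END_TRY;

    return dset;  /* negative if the dataset was absent */
}
```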
+
+* Support for large files and datasets (>2GB) has been added. There
+ is an html document that describes how it works. Some of the types
+ for function arguments have changed to support this: all arguments
+ pertaining to sizes of memory objects are `size_t' and all arguments
+ pertaining to file sizes are `hsize_t'.
+
+* More data type conversions have been added although none of them are
+ fine tuned for performance. There are new converters from integer
+ to integer and float to float, but not between integers and floating
+ points. A bug has been fixed in the converter between compound
+ types.
+
+* The numbered types have been removed from the API: int8, uint8,
+ int16, uint16, int32, uint32, int64, uint64, float32, and float64.
+ Use standard C types instead. Similarly, the numbered types were
+ removed from the H5T_NATIVE_* architecture; use unnumbered types
+ which correspond to the standard C types like H5T_NATIVE_INT.
+
+* More debugging support was added. If tracing is enabled at
+ configuration time (the default) and the HDF5_TRACE environment
+ variable is set to a file descriptor then all API calls will emit
+ the function name, argument names and values, and return value on
+ that file number. There is an html document that describes this.
+ If appropriate debugging options are enabled at configuration time,
+ some packages will display performance information on stderr.
+
+* Data types can be stored in the file as independent objects and
+ multiple datasets can share a data type.
+
+* The raw data I/O stream has been implemented and the application can
+ control meta and raw data caches, so I/O performance should be
+ improved from the first alpha release.
+
+* Group and attribute query functions have been implemented so it is
+ now possible to find out the contents of a file with no prior
+ knowledge.
+
+* External raw data storage allows datasets to be written by other
+ applications or I/O libraries and described and accessed through
+ HDF5.
+
+* Hard and soft (symbolic) links are implemented which allow groups to
+ share objects. Dangling and recursive symbolic links are supported.
+
+* User-defined data compression is implemented although we may
+ generalize the interface to allow arbitrary user-defined filters
+ which can be used for compression, checksums, encryption,
+ performance monitoring, etc. The publicly-available `deflate'
+ method is predefined if the GNU libz.a can be found at configuration
+ time.
+
+* The configuration scripts have been modified to make it easier to
+ build debugging vs. production versions of the library.
+
+* The library automatically checks that the application was compiled
+ with the correct version of header files.
+
+
+ Parallel HDF5 Changes
+
+* Parallel support for fixed dimension datasets with contiguous or
+ chunked storage. Unlimited dimension datasets, which must use
+ chunked storage, are also supported. There is no parallel support
+ for compressed datasets.
+
+* Collective data transfer for H5Dread/H5Dwrite. Collective access
+ support for datasets with contiguous storage only, thus only fixed
+ dimension datasets for now.
+
+* H5Pset_mpi and H5Pget_mpi no longer have the access_mode
+ argument. It is taken over by the data-transfer property list
+ of H5Dread/H5Dwrite.
+
+* New functions H5Pset_xfer and H5Pget_xfer to handle the
+ specification of independent or collective data transfer_mode
+ in the dataset transfer properties list. The properties
+ list can be used to specify data transfer mode in the H5Dwrite
+ and H5Dread function calls.
+
+* Added parallel support for datasets with chunked storage layout.
+ When a dataset is extended in a PHDF5 file, all processes that open
+ the file must collectively call H5Dextend with identical new dimension
+ sizes.
+
+
+ LIST OF API FUNCTIONS
+
+The following functions are implemented. Errors are returned if an
+attempt is made to use some feature which is not implemented and
+printing the error stack will show `not implemented yet'.
+
+Library
+ H5check - check that lib version matches header version
+ H5open - initialize library (happens automatically)
+ H5close - shut down the library (happens automatically)
+ H5dont_atexit - don't call H5close on exit
+ H5get_libversion - retrieve library version info
+ H5check_version - check for specific library version
+
+Property Lists
+ H5Pclose - release template resources
+ H5Pcopy - copy a template
+ H5Pcreate - create a new template
+ H5Pget_chunk - get chunked storage properties
+ H5Pset_chunk - set chunked storage properties
+ H5Pget_class - get template class
+ H5Pget_istore_k - get chunked storage properties
+ H5Pset_istore_k - set chunked storage properties
+ H5Pget_layout - get raw data layout class
+ H5Pset_layout - set raw data layout class
+ H5Pget_sizes - get address and size sizes
+ H5Pset_sizes - set address and size sizes
+ H5Pget_sym_k - get symbol table storage properties
+ H5Pset_sym_k - set symbol table storage properties
+ H5Pget_userblock - get user-block size
+ H5Pset_userblock - set user-block size
+ H5Pget_version - get file version numbers
+ H5Pget_alignment - get data alignment properties
+ H5Pset_alignment - set data alignment properties
+ H5Pget_external_count- get count of external data files
+ H5Pget_external - get information about an external data file
+ H5Pset_external - add a new external data file to the list
+ H5Pget_driver - get low-level file driver class
+ H5Pget_stdio - get properties for stdio low-level driver
+ H5Pset_stdio - set properties for stdio low-level driver
+ H5Pget_sec2 - get properties for sec2 low-level driver
+ H5Pset_sec2 - set properties for sec2 low-level driver
+ H5Pget_core - get properties for core low-level driver
+ H5Pset_core - set properties for core low-level driver
+ H5Pget_split - get properties for split low-level driver
+ H5Pset_split - set properties for split low-level driver
+ H5P_get_family - get properties for family low-level driver
+ H5P_set_family - set properties for family low-level driver
+ H5Pget_cache - get meta- and raw-data caching properties
+ H5Pset_cache - set meta- and raw-data caching properties
+ H5Pget_buffer - get raw-data I/O pipe buffer properties
+ H5Pset_buffer - set raw-data I/O pipe buffer properties
+ H5Pget_preserve - get type conversion preservation properties
+ H5Pset_preserve - set type conversion preservation properties
+ H5Pget_nfilters - get number of raw data filters
+ H5Pget_filter - get raw data filter properties
+ H5Pset_filter - set raw data filter properties
+ H5Pset_deflate - set deflate compression filter properties
+ H5Pget_mpi - get MPI-IO properties
+ H5Pset_mpi - set MPI-IO properties
+ H5Pget_xfer - get data transfer properties
+ + H5Pset_xfer - set data transfer properties
+ + H5Pset_preserve - set dataset transfer property list status
+ + H5Pget_preserve - get dataset transfer property list status
+ + H5Pset_hyper_cache - indicates whether to cache hyperslab blocks during I/O
+ + H5Pget_hyper_cache - returns information regarding the caching of
+ hyperslab blocks during I/O
+ + H5Pget_btree_ratios - gets B-tree split ratios for a dataset
+ transfer property list
+ + H5Pset_btree_ratios - sets B-tree split ratios for a dataset
+ transfer property list
+ + H5Pset_vlen_mem_manager - sets the memory manager for variable-length
+ datatype allocation
+ + H5Pget_vlen_mem_manager - gets the memory manager for variable-length
+ datatype allocation
+
+Datasets
+ H5Dclose - release dataset resources
+ H5Dcreate - create a new dataset
+ H5Dget_space - get data space
+ H5Dget_type - get data type
+ H5Dget_create_plist - get dataset creation properties
+ H5Dopen - open an existing dataset
+ H5Dread - read raw data
+ H5Dwrite - write raw data
+ H5Dextend - extend a dataset
+ + H5Diterate - iterate over all selected elements in a dataspace
+ + H5Dget_storage_size - return the amount of storage required for a dataset
+ + H5Dvlen_reclaim - reclaim VL datatype memory buffers
+
+Attributes
+ H5Acreate - create a new attribute
+ H5Aopen_name - open an attribute by name
+ H5Aopen_idx - open an attribute by number
+ H5Awrite - write values into an attribute
+ H5Aread - read values from an attribute
+ H5Aget_space - get attribute data space
+ H5Aget_type - get attribute data type
+ H5Aget_name - get attribute name
+ H5Anum_attrs - return the number of attributes for an object
+ H5Aiterate - iterate over an object's attributes
+ H5Adelete - delete an attribute
+ H5Aclose - close an attribute
+
+Errors
+ H5Eclear - clear the error stack
+ H5Eprint - print an error stack
+ H5Eget_auto - get automatic error reporting settings
+ H5Eset_auto - set automatic error reporting
+ H5Ewalk - iterate over the error stack
+ H5Ewalk_cb - the default error stack iterator function
+ H5Eget_major - get the message for the major error number
+ H5Eget_minor - get the message for the minor error number
+
+Files
+ H5Fclose - close a file and release resources
+ H5Fcreate - create a new file
+ H5Fget_create_plist - get file creation property list
+ H5Fget_access_plist - get file access property list
+ H5Fis_hdf5 - determine if a file is an hdf5 file
+ H5Fopen - open an existing file
+ H5Freopen - reopen an HDF5 file
+ H5Fmount - mount a file
+ H5Funmount - unmount a file
+ H5Fflush - flush all buffers associated with a file to disk
+
+Groups
+ H5Gclose - close a group and release resources
+ H5Gcreate - create a new group
+ H5Gopen - open an existing group
+ H5Giterate - iterate over the contents of a group
+ H5Gmove - change the name of some object
+ H5Glink - create a hard or soft link to an object
+ H5Gunlink - break the link between a name and an object
+ H5Gget_objinfo - get information about a group entry
+ H5Gget_linkval - get the value of a soft link
+ H5Gget_comment - get the comment string for an object
+ H5Gset_comment - set the comment string for an object
+
+Dataspaces
+ H5Screate - create a new data space
+ H5Scopy - copy a data space
+ H5Sclose - release data space
+ H5Screate_simple - create a new simple data space
+ H5Sset_space - set simple data space extents
+ H5Sis_simple - determine if data space is simple
+ H5Sset_extent_simple - set simple data space dimensionality and size
+ H5Sget_simple_extent_npoints - get number of points in simple extent
+ H5Sget_simple_extent_ndims - get simple data space dimensionality
+ H5Sget_simple_extent_dims - get simple data space size
+ H5Sget_simple_extent_type - get type of simple extent
+ H5Sset_extent_none - reset extent to be empty
+ H5Sextent_copy - copy the extent from one data space to another
+ H5Sget_select_npoints - get number of points selected for I/O
+ H5Sselect_hyperslab - set hyperslab dataspace selection
+ H5Sselect_elements - set element sequence dataspace selection
+ H5Sselect_all - select entire extent for I/O
+ H5Sselect_none - deselect all elements of extent
+ H5Soffset_simple - set selection offset
+ H5Sselect_valid - determine if selection is valid for extent
+ + H5Sget_select_hyper_nblocks - get number of hyperslab blocks
+ + H5Sget_select_hyper_blocklist - get the list of hyperslab blocks
+ currently selected
+ + H5Sget_select_elem_npoints - get the number of element points
+ in the current selection
+ + H5Sget_select_elem_pointlist - get the list of element points
+ currently selected
+ + H5Sget_select_bounds - gets the bounding box containing
+ the current selection
+
+Datatypes
+ H5Tclose - release data type resources
+ H5Topen - open a named data type
+ H5Tcommit - name a data type
+ H5Tcommitted - determine if a type is named
+ H5Tcopy - copy a data type
+ H5Tcreate - create a new data type
+ H5Tequal - compare two data types
+ H5Tlock - lock type to prevent changes
+ H5Tfind - find a data type conversion function
+ H5Tconvert - convert data from one type to another
+ H5Tregister - register a conversion function
+ H5Tunregister - remove a conversion function
+ H5Tget_overflow - get function that handles overflow conversion cases
+ H5Tset_overflow - set function to handle overflow conversion cases
+ H5Tget_class - get data type class
+ H5Tget_cset - get character set
+ H5Tget_ebias - get exponent bias
+ H5Tget_fields - get floating point fields
+ H5Tget_inpad - get inter-field padding
+ H5Tget_member_dims - get struct member dimensions
+ H5Tget_member_name - get struct member name
+ H5Tget_member_offset - get struct member byte offset
+ H5Tget_member_type - get struct member type
+ H5Tget_nmembers - get number of struct members
+ H5Tget_norm - get floating point normalization
+ H5Tget_offset - get bit offset within type
+ H5Tget_order - get byte order
+ H5Tget_pad - get padding type
+ H5Tget_precision - get precision in bits
+ H5Tget_sign - get integer sign type
+ H5Tget_size - get size in bytes
+ H5Tget_strpad - get string padding
+ H5Tinsert - insert scalar struct member
+ H5Tinsert_array - insert array struct member
+ H5Tpack - pack struct members
+ H5Tset_cset - set character set
+ H5Tset_ebias - set exponent bias
+ H5Tset_fields - set floating point fields
+ H5Tset_inpad - set inter-field padding
+ H5Tset_norm - set floating point normalization
+ H5Tset_offset - set bit offset within type
+ H5Tset_order - set byte order
+ H5Tset_pad - set padding type
+ H5Tset_precision - set precision in bits
+ H5Tset_sign - set integer sign type
+ H5Tset_size - set size in bytes
+ H5Tset_strpad - set string padding
+ + H5Tget_super - return the base datatype from which a
+ datatype is derived
+ + H5Tvlen_create - creates a new variable-length datatype
+ + H5Tenum_create - creates a new enumeration datatype
+ + H5Tenum_insert - inserts a new enumeration datatype member
+ + H5Tenum_nameof - returns the symbol name corresponding to a
+ specified member of an enumeration datatype
+ + H5Tenum_valueof - return the value corresponding to a
+ specified member of an enumeration datatype
+ + H5Tget_member_value - return the value of an enumeration datatype member
+ + H5Tset_tag - tags an opaque datatype
+ + H5Tget_tag - gets the tag associated with an opaque datatype
+
+ - H5Tregister_hard - register specific type conversion function
+ - H5Tregister_soft - register general type conversion function
+
+Filters
+ H5Tregister - register a conversion function
+
+Compression
+ H5Zregister - register new compression and uncompression
+ functions for a method specified by a method number
+
+Identifiers
+ + H5Iget_type - retrieve the type of an object
+
+References
+ + H5Rcreate - creates a reference
+ + H5Rdereference - open the HDF5 object referenced
+ + H5Rget_region - retrieve a dataspace with the specified region selected
+ + H5Rget_object_type - retrieve the type of object that an
+ object reference points to
+
+Ragged Arrays (alpha)
+ H5RAcreate - create a new ragged array
+ H5RAopen - open an existing array
+ H5RAclose - close a ragged array
+ H5RAwrite - write to an array
+ H5RAread - read from an array
+
+
diff --git a/release_docs/INSTALL b/release_docs/INSTALL
new file mode 100644
index 0000000..20ce060
--- /dev/null
+++ b/release_docs/INSTALL
@@ -0,0 +1,538 @@
+
+ Instructions for the Installation of HDF5 Software
+ ==================================================
+
+ CONTENTS
+ --------
+ 1. Obtaining HDF5
+
+ 2. Warnings about compilers
+ 2.1. GNU (Intel platforms)
+ 2.2. DEC
+ 2.3. SGI (Irix64 6.2)
+ 2.4. Windows/NT
+
+ 3. Quick installation
+ 3.1. TFLOPS
+ 3.2. Windows
+ 3.3. Certain Virtual File Layers (VFL)
+
+ 4. HDF5 dependencies
+ 4.1. Zlib
+ 4.2. MPI and MPI-IO
+
+ 5. Full installation instructions for source distributions
+ 5.1. Unpacking the distribution
+ 5.1.1. Non-compressed tar archive (*.tar)
+ 5.1.2. Compressed tar archive (*.tar.Z)
+ 5.1.3. Gzip'd tar archive (*.tar.gz)
+ 5.1.4. Bzip'd tar archive (*.tar.bz2)
+ 5.2. Source vs. Build Directories
+ 5.3. Configuring
+ 5.3.1. Specifying the installation directories
+ 5.3.2. Using an alternate C compiler
+ 5.3.3. Additional compilation flags
+ 5.3.4. Compiling HDF5 wrapper libraries
+ 5.3.5. Specifying other programs
+ 5.3.6. Specifying other libraries and headers
+ 5.3.7. Static versus shared linking
+ 5.3.8. Optimization versus symbolic debugging
+ 5.3.9. Large (>2GB) vs. small (<2GB) file capability
+ 5.3.10. Parallel vs. serial library
+ 5.4. Building
+ 5.5. Testing
+ 5.6. Installing
+
+ 6. Using the Library
+
+ 7. Support
+
+*****************************************************************************
+
+1. Obtaining HDF5
+ The latest supported public release of HDF5 is available from
+ ftp://hdf.ncsa.uiuc.edu/pub/dist/HDF5. For Unix platforms, it is
+ available in tar format uncompressed or compressed with compress,
+ gzip, or bzip2. For Microsoft Windows, it is in ZIP format.
+
+ The HDF team also makes snapshots of the source code available on
+ a regular basis. These snapshots are unsupported (that is, the
+ HDF team will not release a bug-fix on a particular snapshot;
+ rather any bug fixes will be rolled into the next snapshot).
+ Furthermore, the snapshots have only been tested on a few
+ machines and may not test correctly for parallel applications.
+ Snapshots can be found at
+ ftp://hdf.ncsa.uiuc.edu/pub/outgoing/hdf5/snapshots in a limited
+ number of formats.
+
+2. Warnings about compilers
+ OUTPUT FROM THE FOLLOWING COMPILERS SHOULD BE EXTREMELY SUSPECT
+ WHEN USED TO COMPILE THE HDF5 LIBRARY, ESPECIALLY IF
+ OPTIMIZATIONS ARE ENABLED. IN ALL CASES, HDF5 ATTEMPTS TO WORK
+ AROUND THE COMPILER BUGS, BUT THE HDF5 DEVELOPMENT TEAM MAKES NO
+ GUARANTEE THAT THERE ARE NO OTHER CODE GENERATION PROBLEMS.
+
+2.1. GNU (Intel platforms)
+ Versions before 2.8.1 have serious problems allocating registers
+ when functions contain operations on `long long' data types.
+ Supplying the `--disable-hsizet' switch to configure (documented
+ below) will prevent hdf5 from using `long long' data types in
+ situations that are known not to work, but it limits the hdf5
+ address space to 2GB.
+
+2.2. DEC
+ The V5.2-038 compiler (and possibly others) occasionally
+ generates incorrect code for memcpy() calls when optimizations
+ are enabled, resulting in unaligned access faults. HDF5 works
+ around the problem by casting the second argument to `char *'.
+
+2.3. SGI (Irix64 6.2)
+ The Mongoose 7.00 compiler has serious optimization bugs and
+ should be upgraded to MIPSpro 7.2.1.2m. Patches are available
+ from SGI.
+
+2.4. Windows/NT
+ The Microsoft Win32 5.0 compiler is unable to cast unsigned long
+ long values to doubles. HDF5 works around this bug by first
+ casting to signed long long and then to double.
+ A link warning (defaultlib "LIBC" conflicts with use of other
+ libs) appears for the debug version of VC++ 6.0. This warning
+ does not affect building and testing the hdf5 libraries.
+
+
+3. Quick installation
+ For those who don't like to read ;-) the following steps can be
+ used to configure, build, test, and install the HDF5 library,
+ header files, and support programs.
+
+ $ gunzip < hdf5-1.4.0.tar.gz | tar xf -
+ $ cd hdf5-1.4.0
+ $ ./configure
+ $ make
+ $ make check
+ $ make install
+
+3.1. TFLOPS
+ Users of the Intel TFLOPS machine, after reading this file,
+ should see the INSTALL_TFLOPS file for more instructions.
+
+3.2. Windows
+ Users of Microsoft Windows should see the INSTALL_Windows.txt
+ file for detailed instructions.
+
+3.3. Certain Virtual File Layers (VFL)
+ Users who want to install HDF5 with a special Virtual File Layer
+ (VFL) should read the INSTALL_VFL file. SRB and Globus-GASS have
+ been documented.
+
+
+4. HDF5 dependencies
+4.1. Zlib
+ The HDF5 library has a predefined compression filter that uses
+ the "deflate" method for chunked datasets. If zlib-1.1.2 or
+ later is found then HDF5 will use it, otherwise HDF5's predefined
+ compression method will degenerate to a no-op (the compression
+ filter will succeed but the data will not be compressed).
+
+4.2. MPI and MPI-IO
+ The parallel version of the library is built upon the foundation
+ provided by MPI and MPI-IO. If these libraries are not available
+ when HDF5 is configured then only a serial version of HDF5 can be
+ built.
+
+5. Full installation instructions for source distributions
+5.1. Unpacking the distribution
+ The HDF5 source code is distributed in a variety of formats which
+ can be unpacked with the following commands, each of which
+ creates an `hdf5-1.4.0' directory.
+
+5.1.1. Non-compressed tar archive (*.tar)
+
+ $ tar xf hdf5-1.4.0.tar
+
+5.1.2. Compressed tar archive (*.tar.Z)
+
+ $ uncompress -c < hdf5-1.4.0.tar.Z | tar xf -
+
+5.1.3. Gzip'd tar archive (*.tar.gz)
+
+ $ gunzip < hdf5-1.4.0.tar.gz | tar xf -
+
+5.1.4. Bzip'd tar archive (*.tar.bz2)
+
+ $ bunzip2 < hdf5-1.4.0.tar.bz2 | tar xf -
+
+5.2. Source vs. Build Directories
+ On most systems the build can occur in a directory other than the
+ source directory, allowing multiple concurrent builds and/or
+ read-only source code. In order to accomplish this, one should
+ create a build directory, cd into that directory, and run the
+ `configure' script found in the source directory (configure
+ details are below).
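+
+ For example, to build in a sibling directory of the unpacked
+ source tree (directory names here are illustrative):
+
+ $ mkdir hdf5-build
+ $ cd hdf5-build
+ $ ../hdf5-1.4.0/configure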
+
+ Unfortunately, this does not work on recent Irix platforms (6.5?
+ and later) because that `make' doesn't understand the VPATH
+ variable. However, hdf5 also supports Irix `pmake' which has a
+ .PATH target which serves a similar purpose. Here's what the man
+ pages say about VPATH, which is the facility used by HDF5
+ makefiles for this feature:
+
+ The VPATH facility is a derivation of the undocumented VPATH
+ feature in the System V Release 3 version of make. System V
+ Release 4 has a new VPATH implementation, much like the
+ pmake(1) .PATH feature. This new feature is also undocumented
+ in the standard System V Release 4 manual pages. For this
+ reason it is not available in the IRIX version of make. The
+ VPATH facility should not be used with the new parallel make
+ option.
+
+5.3. Configuring
+ HDF5 uses the GNU autoconf system for configuration, which
+ detects various features of the host system and creates the
+ Makefiles. On most systems it should be sufficient to say:
+
+ $ ./configure OR
+ $ sh configure
+
+ The configuration process can be controlled through environment
+ variables, command-line switches, and host configuration files.
+ For a complete list of switches type:
+
+ $ ./configure --help
+
+ The host configuration files are located in the `config'
+ directory and are based on architecture name, vendor name, and/or
+ operating system which are displayed near the beginning of the
+ `configure' output. The host config file influences the behavior
+ of configure by setting or augmenting shell variables.
+
+5.3.1. Specifying the installation directories
+ Typing `make install' will install the HDF5 library, header
+ files, and support programs in /usr/local/lib,
+ /usr/local/include, and /usr/local/bin. To use a path other than
+ /usr/local specify the path with the `--prefix=PATH' switch:
+
+ $ ./configure --prefix=$HOME
+
+ If shared libraries are being built (the default) then the final
+ home of the shared library must be specified with this switch
+ before the library and executables are built.
+
+5.3.2. Using an alternate C compiler
+ By default, configure will look for the C compiler by trying
+ `gcc' and `cc'. However, if the environment variable "CC" is set
+ then its value is used as the C compiler (users of csh and
+ derivatives will need to prefix the commands below with `env').
+ For instance, to use the native C compiler on a system which also
+ has the GNU gcc compiler:
+
+ $ CC=cc ./configure
+
+ A parallel version of hdf5 can be built by specifying `mpicc' as
+ the C compiler (the `--enable-parallel' flag documented below is
+ optional). Using the `mpicc' compiler will ensure that the
+ correct MPI and MPI-IO header files and libraries are used.
+
+ $ CC=/usr/local/mpi/bin/mpicc ./configure
+
+ On Irix64 the default compiler is `cc'. To use an alternate
+ compiler specify it with the CC variable:
+
+ $ CC='cc -o32' ./configure
+
+ One may also use various environment variables to change the
+ behavior of the compiler. E.g., to ask for -n32 ABI:
+
+ $ SGI_ABI=-n32
+ $ export SGI_ABI
+ $ ./configure
+
+ Similarly, users compiling on a Solaris machine and desiring to
+ build the distribution with 64-bit support should specify the
+ correct flags with the CC variable:
+
+ $ CC='cc -xarch=v9' ./configure
+
+ Specifying these machine architecture flags in the CFLAGS variable
+ (see below) will not work correctly.
+
+5.3.3. Additional compilation flags
+ If additional flags must be passed to the compilation commands
+ then specify those flags with the CFLAGS variable. For instance,
+ to enable symbolic debugging of a production version of HDF5 one
+ might say:
+
+ $ CFLAGS=-g ./configure --enable-production
+
+5.3.4. Compiling HDF5 wrapper libraries
+ One can optionally build the Fortran and/or C++ interface to the
+ HDF5 C library. By default, both options are disabled. To build
+ them, specify `--enable-fortran' and `--enable-cxx' respectively.
+
+ $ ./configure --enable-fortran
+ $ ./configure --enable-cxx
+
+ Configuration will halt if a working Fortran 90 or 95 compiler or
+ C++ compiler is not found. Currently, the Fortran configure tests
+ for these compilers in order: f90, pgf90, f95. To use an
+ alternative compiler specify it with the F9X variable:
+
+ $ F9X=/usr/local/bin/g95 ./configure --enable-fortran
+
+ Note: The Fortran and C++ interfaces are not supported on all the
+ platforms the main HDF5 library supports. Also, the Fortran
+ interface supports parallel HDF5 while the C++ interface does
+ not.
+
+ Note: On T3E and J90 the following files should be modified before
+ building the Fortran Library:
+ fortran/src/H5Dff.f90
+ fortran/src/H5Aff.f90
+ fortran/src/H5Pff.f90
+ Look for the "Comment if on T3E ..." comments and comment out
+ the specified lines.
+
+
+5.3.5. Specifying other programs
+ The build system has been tuned for use with GNU make but also
+ works with other versions of make. If the `make' command runs a
+ non-GNU version but a GNU version is available under a different
+ name (perhaps `gmake') then HDF5 can be configured to use it by
+ setting the MAKE variable. Note that whatever value is used for
+ MAKE must also be used as the make command when building the
+ library:
+
+ $ MAKE=gmake ./configure
+ $ gmake
+
+ The `AR' and `RANLIB' variables can also be set to the names of
+ the `ar' and `ranlib' (or `:') commands to override values
+ detected by configure.
+
+ The HDF5 library, include files, and utilities are installed
+ during `make install' (described below) with a BSD-compatible
+ install program detected automatically by configure. If none is
+ found then the shell script bin/install-sh is used. Configure
+ doesn't check that the install script actually works, but if a
+ bad install is detected on your system (e.g., on the ASCI blue
+ machine as of March 2, 1999) you have two choices:
+
+ 1. Copy the bin/install-sh program to your $HOME/bin
+ directory, name it `install', and make sure that $HOME/bin
+ is searched before the system bin directories.
+
+ 2. Specify the full path name of the `install-sh' program
+ as the value of the INSTALL environment variable. Note: do
+ not use `cp' or some other program in place of install
+ because the HDF5 makefiles also use the install program to
+ change file ownership and/or access permissions.
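+
+ For example, the first choice might look like this (run from the
+ top of the source tree; paths are illustrative):
+
+ $ mkdir -p $HOME/bin
+ $ cp bin/install-sh $HOME/bin/install
+ $ chmod +x $HOME/bin/install
+ $ PATH=$HOME/bin:$PATH; export PATH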
+
+5.3.6. Specifying other libraries and headers
+ Configure searches the standard places (those places known by the
+ system's compiler) for include files and libraries. However,
+ additional directories can be specified by using the CPPFLAGS
+ and/or LDFLAGS variables:
+
+ $ CPPFLAGS=-I/home/robb/include \
+ LDFLAGS=-L/home/robb/lib \
+ ./configure
+
+ HDF5 uses the zlib library for two purposes: it provides support
+ for the HDF5 deflate data compression filter, and it is used by
+ the h5toh4 converter and the h4toh5 converter in support of
+ HDF4. Configure searches the standard places (plus those
+ specified above with CPPFLAGS and LDFLAGS variables) for the zlib
+ headers and library. The search can be disabled by specifying
+ `--without-zlib' or alternate directories can be specified with
+ `--with-zlib=INCDIR,LIBDIR' or through the CPPFLAGS and LDFLAGS
+ variables:
+
+ $ ./configure --with-zlib=/usr/unsup/include,/usr/unsup/lib
+
+ $ CPPFLAGS=-I/usr/unsup/include \
+ LDFLAGS=-L/usr/unsup/lib \
+ ./configure
+
+ The HDF5-to-HDF4 and HDF4-to-HDF5 conversion tools require the
+ HDF4 library and header files, which are detected the same way as
+ zlib. The switch to give to configure is `--with-hdf4'. Note
+ that HDF5 requires a newer version of zlib than the one shipped
+ with some versions of HDF4. Also, unless you have the "correct"
+ version of hdf4 the confidence testing will fail in the tools
+ directory.
+
+5.3.7. Static versus shared linking
+ The build process will create static libraries on all systems and
+ shared libraries on systems that support dynamic linking to a
+ sufficient degree. Either form of library may be suppressed by
+ saying `--disable-static' or `--disable-shared'.
+
+ $ ./configure --disable-shared
+
+ The C++ and Fortran libraries are currently only available in the
+ static format.
+
+ To build only statically linked executables on platforms which
+ support shared libraries, use the `--enable-static-exec' flag.
+
+ $ ./configure --enable-static-exec
+
+5.3.8. Optimization versus symbolic debugging
+ The library can be compiled to provide symbolic debugging support
+ so it can be debugged with gdb, dbx, ddd, etc or it can be
+ compiled with various optimizations. To compile for symbolic
+ debugging (the default for snapshots) say `--disable-production';
+ to compile with optimizations (the default for supported public
+ releases) say `--enable-production'. On some systems the library
+ can also be compiled for profiling with gprof by saying
+ `--enable-production=profile'.
+
+ $ ./configure --disable-production #symbolic debugging
+ $ ./configure --enable-production #optimized code
+ $ ./configure --enable-production=profile #for use with gprof
+
+ Regardless of whether support for symbolic debugging is enabled,
+ the library also is able to perform runtime debugging of certain
+ packages (such as type conversion execution times, and extensive
+ invariant condition checking). To enable this debugging supply a
+ comma-separated list of package names to the `--enable-debug'
+ switch (see Debugging.html for a list of package names).
+ Debugging can be disabled by saying `--disable-debug'. The
+ default debugging level for snapshots is a subset of the
+ available packages; the default for supported releases is no
+ debugging (debugging can incur a significant runtime penalty).
+
+ $ ./configure --enable-debug=s,t #debug only H5S and H5T
+ $ ./configure --enable-debug #debug normal packages
+ $ ./configure --enable-debug=all #debug all packages
+ $ ./configure --disable-debug #no debugging
+
+ HDF5 is also able to print a trace of all API function calls,
+ their arguments, and the return values. To enable or disable the
+ ability to trace the API say `--enable-trace' (the default for
+ snapshots) or `--disable-trace' (the default for public
+ releases). The tracing must also be enabled at runtime to see any
+ output (see Debugging.html).
+
+5.3.9. Large (>2GB) vs. small (<2GB) file capability
+ In order to read or write files that could potentially be larger
+ than 2GB it is necessary to use the non-ANSI `long long' data
+ type on some platforms. However, some compilers (e.g., GNU gcc
+ versions before 2.8.1 on Intel platforms) are unable to produce
+ correct machine code for this data type. To disable use of the
+ `long long' type on these machines say:
+
+ $ ./configure --disable-hsizet
+
+5.3.10. Parallel vs. serial library
+ The HDF5 library can be configured to use MPI and MPI-IO for
+ parallelism on a distributed multi-processor system. Read the
+ file INSTALL_parallel for detailed explanations.
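+
+ A typical invocation (the path to mpicc varies by site; see
+ INSTALL_parallel for details):
+
+ $ CC=mpicc ./configure --enable-parallel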
+
+5.3.11. Threadsafe capability
+ The HDF5 library can be configured to be thread-safe (on a very
+ large scale) with the `--enable-threadsafe' flag to
+ configure. Read the file doc/TechNotes/ThreadSafeLibrary.html for
+ further details.
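+
+ For example:
+
+ $ ./configure --enable-threadsafe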
+
+5.3.12. Backward compatibility
+ The 1.4 version of the HDF5 library can be configured to operate
+ identically to the v1.2 library with the `--enable-hdf5v1_2'
+ configure flag. This allows existing code to be compiled with the
+ v1.4 library without requiring immediate changes to the
+ application source code. This flag will only be supported in the
+ v1.4 branch of the library; it will not be available in v1.5+.
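+
+ For example:
+
+ $ ./configure --enable-hdf5v1_2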
+
+5.3.13. Network stream capability
+ The HDF5 library can be configured with a network stream file
+ driver with the `--enable-stream-vfd' configure flag. This option
+ compiles the "stream" Virtual File Driver into the main library.
+ See the documentation on the Virtual File Layer for more details
+ about the use of this driver.
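+
+ For example:
+
+ $ ./configure --enable-stream-vfd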
+
+
+5.4. Building
+ The library, confidence tests, and programs can be built by
+ saying just:
+
+ $ make
+
+ Note that if you supplied some other make command via the MAKE
+ variable during the configuration step then that same command
+ must be used here.
+
+ When using GNU make you can add `-j -l6' to the make command to
+ compile in parallel on SMP machines. Do not give a number after
+ the `-j' since GNU make will turn it off for recursive invocations
+ of make.
+
+ $ make -j -l6
+
+5.5. Testing
+ HDF5 comes with various test suites, all of which can be run by
+ saying
+
+ $ make check
+
+ To run only the tests for the library change to the `test'
+ directory before issuing the command. Similarly, tests for the
+ parallel aspects of the library are in `testpar' and tests for
+ the support programs are in `tools'.
+
+ Temporary files will be deleted by each test when it completes,
+ but may continue to exist in an incomplete state if the test
+ fails. To prevent deletion of the files define the HDF5_NOCLEANUP
+ environment variable.
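+
+ For example, using Bourne shell syntax:
+
+ $ HDF5_NOCLEANUP=yes
+ $ export HDF5_NOCLEANUP
+ $ make check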
+
+5.6. Installing
+ The HDF5 library, include files, and support programs can be
+ installed in a (semi-)public place by saying `make install'. The
+ files are installed under the directory specified with
+ `--prefix=DIR' (or '/usr/local') in directories named `lib',
+ `include', and `bin'. The prefix directory must exist prior to
+ `make install', but its subdirectories are created automatically.
+
+ If `make install' fails because the install command at your site
+ somehow fails, you can use the install-sh script that comes with
+ the source. You will need to run ./configure again:
+
+ $ INSTALL="$PWD/bin/install-sh -c" ./configure ...
+ $ make install
+
+ The library can be used without installing it by pointing the
+ compiler at the `src' directory for both include files and
+ libraries. However, the minimum which must be installed to make
+ the library publicly available is:
+
+ The library:
+ ./src/libhdf5.a
+
+ The public header files:
+ ./src/H5*public.h
+
+ The main header file:
+ ./src/hdf5.h
+
+ The configuration information:
+ ./src/H5pubconf.h
+
+ The support programs that are useful are:
+ ./tools/h5ls (list file contents)
+ ./tools/h5dump (dump file contents)
+ ./tools/h5repart (repartition file families)
+ ./tools/h5toh4 (hdf5 to hdf4 file converter)
+ ./tools/h5debug (low-level file debugging)
+ ./tools/h5import (a demo)
+ ./tools/h4toh5 (hdf4 to hdf5 file converter)
+
+6. Using the Library
+ Please see the User Manual in the doc/html directory.
+
+ Most programs will include <hdf5.h> and link with -lhdf5.
+ Additional libraries may also be necessary depending on whether
+ support for compression, etc. was compiled into the hdf5 library.
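+
+ For example, with the default installation prefix (extra
+ libraries such as -lz and -lm may be needed, depending on how
+ HDF5 was configured):
+
+ $ cc myprog.c -I/usr/local/include -L/usr/local/lib -lhdf5 -lz -lm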
+
+ A summary of the hdf5 installation can be found in the
+ libhdf5.settings file in the same directory as the static and/or
+ shared hdf5 libraries.
+
+7. Support
+ Support is described in the README file.
diff --git a/release_docs/INSTALL_TFLOPS b/release_docs/INSTALL_TFLOPS
new file mode 100644
index 0000000..a05ef7f
--- /dev/null
+++ b/release_docs/INSTALL_TFLOPS
@@ -0,0 +1,163 @@
+
+FOR THE INTEL TFLOPS MACHINE:
+
+Below are the step-by-step procedures for building, testing, and
+installing both the sequential and parallel versions of the HDF5 library.
+
+------------------
+Software locations
+------------------
+The zlib compression library is installed in /usr/community/hdf5/ZLIB.
+The mpich library, including mpi-io support, is in
+/usr/community/mpich/mpich-1.2.0.
+
+---------------
+Sequential HDF5:
+---------------
+
+The setup process for building the sequential HDF5 library for the
+ASCI Red machine is done by a coordination of events from sasn100 and
+janus. Though janus can do compiling, it is better to build
+from sasn100, which has more complete building tools and runs faster.
+It is also anti-social to tie up janus with compiling. The HDF5 build
+requires the use of janus because one of the steps is to execute a program
+to find out the run-time characteristics of the TFLOPS machine.
+
+Assuming you have already unpacked the HDF5 tar-file into the
+directory <hdf5>, follow the steps below:
+
+FROM SASN100,
+
+1) cd <hdf5>
+
+2) ./configure tflop
+ Due to a bug, you need to patch up two Makefiles, src/Makefile and
+ test/Makefile, before the next step. You can use the following
+ shell commands.
+
+# Patch up various Makefile's.
+# patch up src/Makefile
+echo "patching src/Makefile"
+ed - src/Makefile <<'EOF'
+/^LT_RUN=.*/s//LT_RUN=$(RUNTEST)/
+w
+q
+EOF
+
+# patch up test/Makefile
+echo "patching test/Makefile"
+ed - test/Makefile <<'EOF'
+/^RUNTEST=$(LT_RUN)/s/^/#/
+w
+q
+EOF
+
+3) make H5detect
+
+
+FROM JANUS,
+
+4) cd <hdf5>
+
+5) make H5Tinit.c
+
+
+FROM SASN100,
+
+6) make
+
+
+When everything is finished compiling and linking,
+you can run the tests by
+FROM JANUS,
+
+7) make check
+ Sometimes the "make check" fails in the sub-directories of test
+ or tools with a message such as "print not found". This is due to
+ the "make" on Janus thinking some binary code needs to be recompiled.
+ The easiest way to fix it is
+ FROM SASN100
+ cd <hdf5>/test # or cd <hdf5>/tools
+ make clean; make # re-make all binaries
+
+
+Once satisfied with the test results, you can install
+the software by
+FROM SASN100,
+
+8) make install
+
+
+---------------
+Parallel HDF5:
+---------------
+
+The setup process for building the parallel version of the HDF5 library for the
+ASCI Red machine is very similar to the sequential version. Since TFLOPS
+does not support MPIO, we have prepared a shell-script file that configures
+with the appropriate MPI library.
+
+Assuming you have already unpacked the HDF5 tar-file into the
+directory <hdf5>, follow the steps below:
+FROM SASN100,
+
+1) cd <hdf5>
+
+2) CC=/usr/community/mpich/mpich-1.2.0/bin/mpicc ./configure tflop
+ Due to a bug, you need to patch up two Makefiles, src/Makefile and
+ test/Makefile, before the next step. You can use the following
+ shell commands.
+
+# Patch up various Makefile's.
+# patch up src/Makefile
+echo "patching src/Makefile"
+ed - src/Makefile <<'EOF'
+/^LT_RUN=.*/s//LT_RUN=$(RUNTEST)/
+w
+q
+EOF
+
+# patch up test/Makefile
+echo "patching test/Makefile"
+ed - test/Makefile <<'EOF'
+/^RUNTEST=$(LT_RUN)/s/^/#/
+w
+q
+EOF
+
+
+3) make H5detect
+
+
+FROM JANUS,
+
+4) cd <hdf5>
+
+5) make H5Tinit.c
+
+
+FROM SASN100,
+
+6) make
+
+
+When everything is finished compiling and linking,
+FROM JANUS,
+
+7) make check
+ Sometimes the "make check" fails in the sub-directories of test
+ or tools with a message such as "print not found". This is due to
+ the "make" on Janus thinking some binary code needs to be recompiled.
+ The easiest way to fix it is
+ FROM SASN100
+ cd <hdf5>/test # or cd <hdf5>/tools
+ make clean; make # re-make all binaries
+
+
+Once satisfied with the parallel test results, as long as you
+have the correct permission,
+FROM SASN100,
+
+8) make install
+
+
diff --git a/release_docs/INSTALL_VFL b/release_docs/INSTALL_VFL
new file mode 100644
index 0000000..d0466a3
--- /dev/null
+++ b/release_docs/INSTALL_VFL
@@ -0,0 +1,130 @@
+ Installation Instructions for HDF5
+ with Different Virtual File Layers
+
+ * * *
+
+This file contains installation instructions for HDF5 with certain Virtual
+File Layers (VFL) to handle file I/O. We currently have documented SRB and
+Globus-GASS.
+
+
+
+ --- Part I. SRB ---
+I. Overview
+-----------
+This part contains instructions for remote-accessing HDF5 through SRB. The
+SRB version 1.1.7 on the Sun Solaris 2.7 platform has been tested. If you
+have difficulties installing the software on your system, please send mail
+to me (Raymond Lu) at
+
+ slu@ncsa.uiuc.edu
+
+First, you must obtain and unpack the HDF5 source as described in the file
+INSTALL. You need the SRB library (client part) installed. You should also
+have access to an SRB server.
+
+
+The Storage Resource Broker (SRB) from San Diego Supercomputer Center is
+client-server middleware that provides a uniform interface for connecting to
+heterogeneous data resources over a network and accessing replicated data
+sets. SRB, in conjunction with the Metadata Catalog (MCAT), provides a way
+to access data sets and resources based on their attributes rather than
+their names or physical locations. Their webpage is at
+http://www.npaci.edu/Research/DI/srb.
+
+HDF5 is built on top of SRB as a client to remotely access files on an SRB
+server through SRB. Right now, HDF-SRB only supports low-level file transfer;
+the MCAT part is not supported yet. Low-level file transfer means files
+are treated just like Unix files. Files can be read, written, and
+appended. Partial access (reading and writing a chunk of a file without
+transferring the whole) is also supported.
+
+
+II. Installation Steps
+----------------------
+The installation steps are similar to the ones in INSTALL file:
+
+1. Run 'configure' with the SRB option:
+ configure --with-srb=$SRB/include,$SRB/lib
+ where $SRB is the directory where the SRB library is installed.
+
+ For example, below is a script file to run 'configure':
+ #! /bin/sh
+ # how to configure to use the SRB
+
+ SRB_DIR=/afs/ncsa.uiuc.edu/projects/hdf/users/slu/srb_install
+ configure --with-srb=$SRB_DIR/include,$SRB_DIR/lib
+
+2. Run 'make'
+
+3. Run 'make check'
+
+4. Run 'make install'
+
+5. Run the testing programs (optional):
+ Go to the testing directory (cd test) and run srb_write, srb_read,
+ and srb_append. (These tests have already been run in step 3.)
+
+ srb_write: Connect to the SRB server, write an HDF5 file with an integer
+ dataset to the SRB server.
+ srb_read: Connect to the SRB server, read part of an HDF5 file on the
+ SRB server.
+ srb_append: Connect to the SRB server, append an integer dataset to an
+ existing HDF5 file on the SRB server.
+
+6. To use HDF-SRB, please read the comments in srb_write.c, srb_read.c,
+ and srb_append.c in the hdf5/test directory.
+
+
+
+ --- Part II. Globus-GASS ---
+
+I. Overview
+-----------
+This part contains instructions for remote access to HDF5 through Globus-GASS.
+The SGI IRIX64 (and IRIX) 6.5 platforms have been tested. If you have
+difficulties installing the software on your system, please send mail to me
+(Raymond Lu) at
+    slu@ncsa.uiuc.edu
+
+First, you must obtain and unpack the HDF5 source as described in the file
+INSTALL. You need the Globus 1.1.x and SSL (which should have come with
+Globus) packages.
+
+HDF5 is built on top of Globus-GASS (1.1.x) to handle remote access.
+Globus-GASS (1.1.x) only supports the HTTP and HTTPS protocols for whole-file
+read and write. More features may be added in the future.
+
+II. Installation Steps
+----------------------
+The installation steps are similar to the ones in INSTALL file:
+
+1. Run 'configure' with the SSL and GASS options:
+        configure --with-ssl=$SSL/lib --with-gass=$GASS/include,$GASS/lib
+   where $SSL is your SSL directory and $GASS is your Globus directory.
+
+ For example, below is a script file to run 'configure':
+ #! /bin/sh
+ # how to configure to use the Globus-GASS(1.1.x)
+
+ GASS_DIR=/usr/local/globus-install-1.1.1/development/mips-sgi-irix6.5-64_nothreads_standard_debug
+ SSL_LIB=/usr/local/ssl/lib
+
+ configure --with-ssl=$SSL_LIB --with-gass=$GASS_DIR/include,$GASS_DIR/lib
+
+2. Run 'make'
+
+3. Run 'make check'
+
+4. Run 'make install'
+
+5. Run the test programs:
+   There is one read test program called 'gass_read' in the 'test'
+   directory. It does a whole-file read over the HTTP protocol. The URL
+   is hard-coded as
+        http://hdf.ncsa.uiuc.edu/GLOBUS/a.h5
+
+   Write testing really depends on your web server: you have to set up the
+   server to accept file writes. We have tested it using the Apache Server
+   (1.3.12) without authentication. If you need more details about our
+   testing, please contact us. Globus suggests using their GASS server.
+
+ There is another program called 'gass_append' used for experiments.
diff --git a/release_docs/INSTALL_Windows.txt b/release_docs/INSTALL_Windows.txt
new file mode 100644
index 0000000..46e85e8
--- /dev/null
+++ b/release_docs/INSTALL_Windows.txt
@@ -0,0 +1,915 @@
+HDF5 Build and Install Instructions for Windows 2000/NT/98.
+----------------------------------------------------------
+
+The instructions which follow assume that you will be using the
+source code release 'zip' file (hdf5-1_4_0.zip).
+
+***************************WARNINGS****************************
+Please read the Preconditions CAREFULLY before you go on to the
+following sections.
+
+Preconditions
+
+    1. Microsoft Developer Studio, Visual C++ 6.0, and WinZip
+       must be installed.
+
+ 2. Set up a directory structure to unpack the library.
+ For example:
+
+ c:\ (any drive)
+ MyHDFstuff\ (any folder name)
+
+    3. Run WinZip on hdf5-1_4_0.zip (the entire source tree) and
+       extract the hdf5 package into c:\MyHDFstuff (or whatever
+       drive and folder name you would like to choose). This
+       creates a directory called 'hdf5' under MyHDFstuff which
+       contains several files and directories.
+
+    4. HDF5 uses zlib for compression, and zlib is distributed
+       with the hdf5 library. If you have your own version, read
+       Section VI about the zlib library.
+
+    5. You do need the hdf4 (hdf and mfhdf) static and DLL
+       libraries to generate the hdf4-related tools.
+
+    6. Currently you can build and test either the hdf5 libraries
+       and non-hdf4 related tools, or the hdf5 libraries and
+       hdf4-related tools, but not BOTH. In other words, you may
+       follow either Section II or Section III, but NOT both, to
+       build the HDF5 libraries and related tools.
+
+---------------------------------------------------------------
+
+The following sections discuss installation procedures in detail:
+
+ Section I: What do we build and install
+ Section II: Building and testing hdf5 libraries and non-hdf4
+ related hdf5 tools
+ Section III: Building and testing hdf5 libraries and all hdf5
+ tools
+ Section IV: Building an application using the HDF5 library or
+ DLL
+ Section V: Some more helpful pointers
+ Section VI: ZLIB library - removing or changing the path
+
+***************************************************************
+
+Section I:
+
+What do we build and install?
+
+ HDF5 static library:
+ debug and release version
+
+    HDF5 Dynamic Link Library (DLL):
+ debug and release version as well as export libraries for
+ DLL
+
+ HDF5 tool library:
+ debug and release version
+
+ HDF5 tool export library for DLL:
+ debug and release version
+
+ HDF5 tools:
+ non-hdf4 related tools and hdf4 related tools
+
+ HDF5 library testing programs:
+ hdf5 library related comprehensive tests
+
+ HDF5 related tools testing programs:
+ hdf5 tools testing programs
+
+ HDF5 examples:
+ simple HDF5 examples
+
+**************************************************************
+
+Section II:
+
+ Building and testing hdf5 libraries and non-hdf4 related tools
+
+ ==================================================
+
+STEP 1: Building hdf5 libraries and non-hdf4 related tools
+
+
+ 1. Unpack all.zip in 'hdf5' and move the zlib.dll from
+ c:\myHDFstuff\hdf5\src\zlib\dll to the Windows system
+ directory.
+
+ The system directory can usually be found under the path
+ C:\WINNT\system or C:\WINDOWS\system
+
+ 2. Invoke Microsoft Visual C++, go to "File" and select
+ the "Open Workspace" option.
+
+ Then open the c:\myHDFstuff\hdf5\proj\all\all.dsw
+ workspace.
+
+ 3. If you don't want to build and test backward
+ compatibility with hdf5 1.2, you may skip this part and
+ go to part 4.
+
+       To build and test backward compatibility with hdf5 1.2,
+          a) go to "Project" and select "Settings".
+          b) A window called "Project Settings" pops up; in the
+             upper-left corner is a small box "Settings For". Set
+             the option to "Win32 Release".
+          c) Go to the right side of the "Project Settings"
+             window and choose C/C++ on the menu.
+          d) Go to the "Category" box below the menu bar and
+             choose Preprocessor.
+          e) Go to the "Preprocessor definitions" box below
+             "Category". Inside the box, type
+             WANT_H5_V1_2_COMPAT. Note: don't type the double
+             quotes around "WANT_H5_V1_2_COMPAT", and add a comma
+             between "WANT_H5_V1_2_COMPAT" and the last
+             preprocessor definition (WIN32).
+          f) Click OK at the bottom of the "Project Settings"
+             window.
+          g) Repeat a)-f), choosing the option "Win32 Debug" in
+             step b).
+
+ 4. Select "Build", then Select "Set Active Configuration".
+
+ On Windows platform select as the active configuration
+
+ "all -- Win32 Debug" to build debug versions of
+ single-threaded static libraries, Debug multithreaded
+ DLLs and tests.
+ or
+
+ "all -- Win32 Release" to build release versions of
+ single-threaded static libraries, multithreaded DLLs
+ and tests.
+
+
+          NOTE: "all" is a dummy target. You will get a link
+          error when "all.exe" is built:
+
+          LINK: error LNK2001: unresolved external symbol
+          _mainCRTStartup.....
+          all.exe - 2 error(s), ....
+
+          These messages can be ignored; "all.exe" is never
+          created, so this is expected.
+
+ When the debug or release build is done the directories
+ listed below will contain the following files:
+
+ c:\MyHDFstuff\hdf5\proj\hdf5\debug -
+ c:\MyHDFstuff\hdf5\proj\hdf5\release -
+
+ hdf5.lib- the hdf5 library
+
+ c:\MyHDFstuff\hdf5\proj\hdf5dll\debug -
+
+ hdf5ddll.dll- the hdf5 library
+ hdf5ddll.lib- the dll export library
+
+ c:\MyHDFstuff\hdf5\proj\hdf5dll\release -
+
+ hdf5dll.dll- the hdf5 library
+ hdf5dll.lib- the dll export library
+
+ c:\MyHDFstuff\hdf5\test\"test directory"-
+
+ where test directory is one of the following:
+
+ big(dll)
+
+ bittests(dll)
+
+ chunk(dll)
+
+ cmpd_dset(dll)
+
+ dsets(dll)
+
+ dtypes(dll)
+
+ enum(dll)
+
+ extend(dll)
+
+ external(dll)
+
+ fillval(dll)
+
+ flush1(dll)
+
+ flush2(dll)
+
+ gheap(dll)
+
+ hyperslab(dll)
+
+ iopipe(dll)
+
+ istore(dll)
+
+ links(dll)
+
+ mount(dll)
+
+ mtime(dll)
+
+ ohdr(dll)
+
+ overhead(dll)
+
+ stab(dll)
+
+ testhdf5(dll)
+
+ unlink(dll)
+
+
+ c:\MyHDFstuff\hdf5\tools\toolslib\debug
+ c:\MyHDFstuff\hdf5\tools\toolslib\release
+
+ toolslib.lib- the tools library
+
+ c:\MyHDFstuff\hdf5\tools\toolslibD\debug
+ c:\MyHDFstuff\hdf5\tools\toolslibD\release
+
+ toolslibD.lib- the dll export library
+
+ c:\MyHDFstuff\hdf5\tools\"tools directory"-
+ where non-hdf4 related tools directory is one of the
+ following:
+
+ h5dump(dll)
+
+ h5ls(dll)
+
+ h5debug(dll)
+
+ h5import(dll)
+
+ h5repart(dll)
+
+
+ Test and tool directory contains debug and release
+ subdirectories with the corresponding tests and tools.
+
+
+STEP 2: testing hdf5 libraries and non-hdf4 related tools
+
+In a command prompt window, run the test batch file, which resides in
+the hdf5\test directory (for example, C:\MyHDFstuff\hdf5\test), to make
+sure that the library was built correctly.
+
+You can test up to four versions of the hdf5 library and tools. They are:
+
+ release version
+ release dll version
+ debug version
+ debug dll version
+
+NOTE: The hdf5ddll.dll and hdf5dll.dll should be placed into
+the C:\WINNT\system or C:\WINDOWS\system directory before using the dlls.
+
+We strongly suggest that you redirect your test results into an
+output file so that you can easily check them. You may use NotePad,
+NoteTab Light, or any other Windows tool to inspect the results.
+For printing, please choose a font size smaller than 14 for better
+alignment of the text.
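+The redirect-and-check idea can be sketched in a few portable shell
+lines (a hypothetical illustration only; the file name results.txt and
+the sample test output are made up, and on Windows you would use the
+batch files above together with findstr instead of grep):

```shell
# Simulate a captured test run: redirect the output into a file...
printf 'Testing dataset I/O                 PASSED\nTesting groups                      PASSED\n' > results.txt
# ...then scan the file for FAILED marks; finding none means success.
if grep -q "FAILED" results.txt; then
  echo "some tests failed; inspect results.txt"
else
  echo "no FAILED marks found"
fi
```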
+
+ 1. hdf5 library testing
+
+ cd into the hdf5\test directory.
+
+ (1) basic tests
+
+ Go to a) b) c) or d) to test your chosen version
+
+ a) release static version
+ type:
+ hdf5test release >"Your output filename"
+
+ b) release dll version
+ type:
+ hdf5test release dll > "Your output filename"
+
+ c) debug static version
+ type:
+ hdf5test debug >"Your output filename"
+
+ d) debug dll version
+ type:
+ hdf5test debug dll >"Your output filename"
+
+ (2) timing tests
+
+ Go to a) b) c) or d) to test your chosen version
+
+ a) release static version
+ type:
+ hdf5timingtest release >"Your output filename"
+
+ b) release dll version
+ type:
+ hdf5timingtest release dll >"Your output filename"
+
+ c) debug static version
+ type:
+ hdf5timingtest debug >"Your output filename"
+
+ d) debug dll version
+ type:
+ hdf5timingtest debug dll >"Your output filename"
+
+
+    Use NotePad or NoteTab Light to check the results. You should
+not find any FAILED marks in your output files.
+
+    Note: the big test is currently not working on Windows; we are
+still investigating this.
+
+    2. hdf5 tools testing
+
+       Currently we support only the h5dump test. We are
+investigating the h5ls test now.
+
+     1) h5dump test
+
+       cd back into the hdf5 directory and then cd into the tools
+directory (...\hdf5\tools).
+
+ Go to a) b) c) or d) to test your chosen version
+
+ a) release static version
+ type:
+ dumptest release >"Your output filename"
+
+ b) release dll version
+ type:
+ dumptest release dll > "Your output filename"
+
+ c) debug static version
+ type:
+ dumptest debug >"Your output filename"
+
+ d) debug dll version
+ type:
+ dumptest debug dll >"Your output filename"
+
+
+    We use the "fc" command to check whether the dumper
+generates correct results. You should find
+"FC: no differences encountered" in your output file.
+However, since the actual dumper output is compared with the
+expected dumper output under a different directory,
+you may see something like:
+
+"
+***** ..\TESTFILES\tall-1.ddl
+#############################
+Expected output for 'h5dump tall.h5'
+#############################
+HDF5 "tall.h5" {
+GROUP "/" {
+***** TALL-1.RESULTS
+HDF5 "..\..\testfiles\tall.h5" {
+GROUP "/" {
+*****
+"
+
+    The actual dumper output is correct. The difference shown
+here is only the different representations of the name of the
+same file.
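+To see why this difference is harmless, compare the two outputs with
+the filename header line stripped off (a hypothetical sh sketch with
+made-up miniature .ddl contents; the batch files themselves run "fc"
+on the full files):

```shell
# Two dumper outputs that differ only in how the filename is written.
printf 'HDF5 "tall.h5" {\nGROUP "/" {\n}\n}\n' > expected.ddl
printf 'HDF5 "..\\..\\testfiles\\tall.h5" {\nGROUP "/" {\n}\n}\n' > actual.ddl
# Skip the first line (the filename header) and compare the rest.
tail -n +2 expected.ddl > expected.body
tail -n +2 actual.ddl   > actual.body
diff expected.body actual.body && echo "structure matches"
```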
+
+STEP 3: BUILDING THE EXAMPLES
+
+ 1. Invoke Microsoft Visual C++, go to "File" and select
+ the "Open Workspace" option.
+ Then open the workspace
+ c:\myHDFstuff\hdf5\examples\allexamples\allexamples.dsw.
+
+ 2. Select "Build", then Select "Set Active Configuration".
+
+ On Windows platform select as the active configuration
+
+ "allexamples -- Win32 Debug" to build debug versions
+ of the examples.
+
+ or
+
+ "allexamples -- Win32 Release" to build release
+ versions the examples.
+
+ When the debug build or release build is done
+ there should be the following subdirectories in
+ C:\myHDFstuff\hdf5\examples\
+
+ attributetest
+
+ chunkread
+
+ compoundtest
+
+ extendwritetest
+
+ grouptest
+
+ readtest
+
+ selecttest
+
+ writetest
+
+
+
+    3. Run the batch file "InstallExamples.bat", which
+       resides in the top-level directory (C:\MyHDFSTUFF\hdf5).
+       This file creates 2 new directories, examplesREL and
+       examplesDBG, in the examples directory and places
+       all the executables in them. Both the release and debug
+       versions of the examples should be built before this
+       step is done. The examples should be tested in these 2
+       new directories because of dependencies between the
+       examples. In particular, writetest.exe and extendwritetest.exe
+       should be executed before chunkread.exe and readtest.exe
+       due to dependencies among these files.
+
+**************************************************************
+
+Section III: BUILDING AND TESTING HDF5 LIBRARIES AND ALL HDF5 TOOLS
+
+------------------WARNINGS---------------------------------
+
+1. This section is specifically for building HDF5 tools that need to call the
+HDF4 library.
+
+Currently we support two hdf4-related tools: the h4toh5 converter
+and the h5toh4 converter. If you are not using these tools, please go
+back to Section II for building and installing information.
+
+2. This section builds and tests all versions of the hdf5 libraries,
+testing programs, and tools covered in Section II.
+Additionally, it also builds and tests the hdf4-related tools. For some
+duplicated parts of this section we refer back to Section II.
+
+3. In case
+   a) you have not installed the hdf4 libraries and related tools
+      on your machine,
+   b) or the top directory of your hdf4 libraries and
+      tools is not under C:\hdf41r4 and you are not familiar
+      with changing the settings of VC++ projects,
+
+   you may get the binary distribution from
+ftp://ftp.ncsa.uiuc.edu/HDF/HDF/HDF4.1r4/windows_precompiled_code/HDF41r4.zip
+and use WinZip to unpack HDF41r4.zip into C:\hdf41r4.
+
+4. We assume that you've installed the hdf4 (mfhdf and hdf)
+libraries on drive C. The top-level path should be
+C:\HDF41r4. Under C:\HDF41r4 there should be at least
+the following six directories:
+
+C:\HDF41r4\bin where hdf4 utilities are stored
+
+C:\HDF41r4\dlllib where release dll versions of hdf and mfhdf
+libraries and export libraries of dlls are stored
+
+C:\HDF41r4\dlllibdbg where debug dll versions of hdf and mfhdf
+libraries and export libraries of dlls are stored
+
+C:\HDF41r4\lib where release versions of hdf and mfhdf
+libraries are stored
+
+C:\HDF41r4\libdbg where debug versions of hdf and mfhdf
+libraries are stored
+
+C:\HDF41r4\include where the header files are stored
+
+Make sure that you copy all *.dll files under C:\HDF41r4 into Windows
+system directory before the next step.
+
+If the path to your hdf and mfhdf libraries is different
+from this default assumption, please DO follow part 4 of Step 1
+below:
+
+
+Step 1.
+
+ 1. Unpack all_withhdf4.zip in 'hdf5' and move the zlib.dll
+ from c:\myHDFstuff\hdf5\src\zlib\dll to the Windows
+ system directory.
+
+       The system directory can usually be found under the path
+       C:\WINNT\system or C:\WINDOWS\system
+
+ 2. Invoke Microsoft Visual C++, go to "File" and select the
+ "Open Workspace" option.
+
+ Then open the c:\myHDFstuff\hdf5\proj\all\all.dsw
+ workspace.
+
+ 3. If you don't want to build and test backward
+ compatibility with hdf5 1.2, you may skip this part and
+ go to part 4.
+
+       To build and test backward compatibility with hdf5 1.2,
+          a) go to "Project" and select "Settings".
+          b) A window called "Project Settings" pops up; in the
+             upper-left corner is a small box "Settings For".
+             Set the option to "Win32 Release".
+          c) Go to the right side of the "Project Settings"
+             window and choose C/C++ on the menu.
+          d) Go to the "Category" box below the menu bar and
+             choose Preprocessor.
+          e) Go to the "Preprocessor definitions" box below
+             "Category". Inside the box, type
+             WANT_H5_V1_2_COMPAT. Note: don't type the
+             double quotes around "WANT_H5_V1_2_COMPAT", and add
+             a comma between "WANT_H5_V1_2_COMPAT" and the last
+             preprocessor definition (WIN32).
+          f) Click OK at the bottom of the "Project Settings"
+             window.
+          g) Repeat a)-f), setting the option to "Win32 Debug"
+             in step b).
+
+    4. This part is for users who are familiar with handling
+       the settings of VC++ projects and whose hdf4 libraries
+       are not stored under C:\hdf41r4. Other users can skip
+       this part.
+
+ 4.1 Change the path where hdf4 library header files are
+ located
+
+       a) On the View menu, click Workspace; you should see a
+          pop-up window with the names of the projects in all.dsw.
+       b) Click FileView at the bottom of this window if you do
+          not already see "all files big files ......."
+       c) You need to modify the settings of four projects:
+          h4toh5, h4toh5dll, h5toh4, and h5toh4dll.
+          You also need to modify both the debug and release
+          versions.
+
+          You may do as follows:
+          c1) Right-click the selected project and then click
+              "Settings".
+          c2) A dialog box called "Project Settings" pops up.
+          c3) In the upper-left part of the "Project Settings"
+              box, you will find a small window "Settings for".
+              Make sure this "Settings for" box shows either
+              "Win32 Debug" or "Win32 Release"; change its
+              contents to "Win32 Debug" or "Win32 Release"
+              otherwise. Remember the version (Win32 Release or
+              Debug) you chose.
+          c4) On the upper-right menu of the "Project Settings"
+              window, find C/C++ and click it.
+          c5) Just below the upper-right menu, find a small
+              window called Category; make sure that
+              "Preprocessor" appears in this window.
+          c6) In the middle of the "Project Settings" window,
+              you will find a box called
+              "Additional include directories:". You may notice
+              "C:\hdf41r4\include" inside this box. This is the
+              path where the default hdf4 header files are
+              located. Replace only this path
+              (C:\hdf41r4\include) with your own path that
+              contains your hdf4 header files. Don't touch any
+              other paths.
+          c7) After you've done this, click OK at the bottom of
+              the "Project Settings" window.
+          c8) Repeat c1)-c7), but change the contents of
+              "Settings for" in c3) from "Win32 Release" to
+              "Win32 Debug" or vice versa.
+       d) Repeat step c) for the other three projects.
+
+ 4.2 Replace the user's hdf and mfhdf libraries, export
+ libraries of hdf and mfhdf DLLs
+
+ You also need to modify four projects: h4toh5,
+ h4toh5dll, h5toh4 and h5toh4dll:
+ a) select project h4toh5 following instruction 4.1 a)
+ and b).
+       b) Click h4toh5; you will find four libraries:
+          hm414d.lib, hd414d.lib and hm414.lib, hd414.lib
+          attached under the project h4toh5. hm414d.lib and
+          hd414d.lib are the debug versions of the mfhdf and hdf
+          libraries. hm414.lib and hd414.lib are the release
+          versions of the mfhdf and hdf libraries.
+       c) Select these four libraries; go back to the Edit menu
+          and choose the "delete" option to delete the templates
+          of these libraries.
+       d) Select the project h4toh5 and right-click the mouse,
+          find "Add Files to Projects", and follow the
+          instructions in the pop-up box to insert your own
+          hm414d.lib, hd414d.lib, hm414.lib, and hd414.lib. You
+          must know their paths first.
+ e) select project h4toh5dll following instruction 4.1 a)
+ and b).
+       f) Click h4toh5dll; you will also find four libraries:
+          hd414m.lib, hd414md.lib and hm414m.lib, hm414md.lib
+          attached under the project h4toh5dll. These libraries
+          are the debug and release versions of the export
+          libraries of the mfhdf and hdf DLLs.
+       g) Select these four libraries; go back to the Edit menu
+          and choose the "delete" option to delete the templates
+          of these libraries.
+       h) Select the project h4toh5dll and right-click the mouse,
+          find "Add Files to Projects", and follow the
+          instructions in the pop-up box to insert your own
+          hd414m.lib, hd414md.lib, hm414m.lib, and hm414md.lib.
+          You must know their paths first.
+       i) Repeat a)-h) for h5toh4 and h5toh4dll.
+
+ 5. Select "Build", then Select "Set Active Configuration".
+
+ On Windows platform select as the active configuration
+
+          "all -- Win32 Debug" to build debug versions of the
+          single-threaded static libraries and tests,
+
+       or
+
+          "all -- Win32 Release" to build release versions of the
+          single-threaded static libraries and tests.
+
+
+          NOTE: "all" is a dummy target. You will get a link
+          error when "all.exe" is built:
+
+          LINK: error LNK2001: unresolved external symbol
+          _mainCRTStartup.....
+          all.exe - 2 error(s), ....
+
+          These messages can be ignored; "all.exe" is never
+          created, so this is expected.
+
+       You should see the hdf5 libraries, tests, and tools listed
+       in Section II, Step 1.
+       In addition, under c:\MyHDFstuff\hdf5\tools\
+
+       you may also find:
+               h4toh5
+               h5toh4
+               h4toh5dll
+               h5toh4dll
+       for both the debug and release versions.
+
+STEP 2: testing hdf5 libraries and all hdf5 tools
+
+ 1. hdf5 library testing
+ Follow all instructions of the same part in Section II
+ STEP 2
+
+ 2. non-hdf4 related tools testing
+ Follow all instructions of the same part in Section II
+ STEP 2
+
+ 3. hdf4-related tools testing
+
+ 1) h4toh5 converter tool testing
+
+ First cd into hdf5\tools
+
+ Go to a) b) c) or d) to test your chosen version
+
+ a) release static version
+ type:
+ h4toh5testrun release >"Your output filename"
+
+ b) release dll version
+ type:
+ h4toh5testrun release dll > "Your output filename"
+
+ c) debug static version
+ type:
+ h4toh5testrun debug >"Your output filename"
+
+ d) debug dll version
+ type:
+ h4toh5testrun debug dll >"Your output filename"
+
+      We use the "fc" command to check whether the h4toh5
+      converter converts the hdf4 file into the correct hdf5
+      file. In your output files, please pay attention only to
+      those lines which start with "FC:"; you should find
+      "FC: no differences encountered"
+      for all tested hdf4 files in your output.
+
+ 2) h5toh4 converter tool testing
+
+      To test the h5toh4 utility, you need to have the hdf4
+      dumper utility "hdp" on your system.
+
+      Note: Currently the h5toh4 release dll doesn't work for
+      all test cases, possibly due to the way Windows handles
+      "free memory" conventions for dll versions.
+
+ 1) If your hdp utility is located at C:\hdf41r4\bin,
+ you may skip this part. Otherwise, copy your hdp.exe
+ file into the directory where your hdf5 tools are
+ located. For example, if your hdf5 tools directory is
+ C:\myHDFstuff\hdf5\tools; please copy hdp.exe into
+ this directory.
+
+ 2) cd into \...\hdf5\tools.
+ Go to a) b) c) or d) to test your chosen version
+
+ a) release static version
+ type:
+ h5toh4testrun release >"Your output filename"
+
+ b) release dll version
+ type:
+ h5toh4testrun release dll > "Your output filename"
+
+ c) debug static version
+ type:
+ h5toh4testrun debug >"Your output filename"
+
+ d) debug dll version
+ type:
+ h5toh4testrun debug dll >"Your output filename"
+
+      We use the "fc" command to check whether the h5toh4
+      converter converts the hdf5 file into the correct hdf4
+      file. In your output files, please pay attention only to
+      those lines which start with "FC:"; you should find
+      "FC: no differences encountered"
+      for all tested files in your output.
+      Warnings that appear at the prompt when testing the h5toh4
+      converter can be ignored.
+
+
+STEP 3: BUILDING THE EXAMPLES
+
+ Follow all instructions of SECTION II STEP 3.
+
+Section IV:
+BUILDING AN APPLICATION USING THE HDF5 LIBRARY OR DLL- SOME HELPFUL
+POINTERS
+====================================================================
+
+If you are building an application that uses the HDF5 library, the
+following location will need to be specified for locating header files
+and linking in the HDF libraries:
+
+ <top-level HDF5 directory>\src
+
+where <top-level HDF5 directory> may be
+
+ C:\MyHDFstuff\hdf5\
+
+To specify this location in the settings for your VC++ project:
+
+ 1. Open your VC project in Microsoft Visual C++ and make sure it is
+ the active project.
+
+    2. Go to the Project menu and choose the 'Settings' option.
+
+    3. Choose the build configuration you would like to modify in the
+       drop-down menu labeled 'Settings For:'.
+
+    4. Choose the C/C++ tab.
+
+    5. At the bottom of the window, there should be a text area labeled
+       'Project Options:'. In this text area, scroll until you
+       reach the end, type /I "<top-level HDF5 directory>\src", and
+       then click OK.
+
+To link the HDF5 library with your application:
+
+ 1. Open your VC project in Microsoft Visual C++ and make sure it is
+ the active project.
+
+    2. Go to the Project menu and choose the 'Add to Project' option
+       and then the 'Files' option.
+
+    3. Change 'Files of type:' to 'Library Files (.lib)'.
+
+ 4. Navigate through the directories until you find the location of
+ the hdf5.lib.
+
+ 5. Select hdf5.lib and click OK.
+
+
+To use the DLL:
+
+    1. Follow the steps for specifying the location of the header files
+       as shown above.
+
+    2. Follow the steps for linking the HDF5 library as shown above,
+       except now link the export library that is created with the DLL.
+       The export library is called hdf5dll.lib, or hdf5ddll.lib for
+       the debug version.
+
+    3. Place the DLL in a location where Windows will be able to
+       locate it. The search path and order for DLLs is:
+
+       a) The directory where the executable module for the current
+          process is located.
+       b) The current directory.
+       c) The Windows system directory. The GetSystemDirectory function
+          retrieves the path of this directory.
+       d) The Windows directory. The GetWindowsDirectory function
+          retrieves the path of this directory.
+       e) The directories listed in the PATH environment variable.
+
+
+Section V:
+MORE HELPFUL POINTERS
+=====================
+
+
+Here are some notes that may be of help if you are not familiar with
+using the Visual C++ Development Environment.
+
+Project name and location issues:
+
+ The files in all.zip must end up in the hdf5\ directory installed by
+ hdf5-1_4_0.zip
+
+ If you must install all.dsw and all.dsp in another directory,
+ relative to hdf5\ , you will be asked to locate the sub-project
+ files, when you open the project all.dsw.
+
+ If you want to rename all (the entire project), you will need to
+ modify two files all.dsw and all.dsp as text (contrary to the
+ explicit warnings in the files).
+
+ You can also modify all.dsw and all.dsp as text, to allow these 2
+ files to be installed in another directory.
+
+
+Settings... details:
+
+  If you create your own project, the necessary settings can be read
+  from the all.dsp file (as text), or from the Project Settings in the
+  Developer Studio project settings dialog.
+
+ Project
+ Settings
+ C/C++
+ Category
+ PreProcessor
+ Code Generation
+ Use run-time Library
+ These are all set to use
+ Single-Threaded
+
+DLL... hints:
+
+  If you want to use DLL versions of the hdf5 library in your
+  application, you should
+    1) put the hdf5 DLL into the Windows system directory
+    2) add the hdf5 DLL export library to your project
+    3) follow "Settings... details" down to the last line:
+       change Single-Threaded to Multithreaded DLL or
+       Debug Multithreaded DLL
+    4) follow "Settings... details" to PreProcessor:
+       Project
+         Settings
+           C/C++
+             Category
+               PreProcessor
+
+       Find the preprocessor definitions and add _HDF5USEDLL_ at the
+       end of the preprocessor definitions.
+
+
+
+
+Section VI:
+ZLIB LIBRARY- REMOVING OR CHANGING THE PATH
+============================================
+
+If you would like to remove the zlib library from the hdf5 library or
+use your own version of the zlib library then follow the steps below.
+
+Removing the zlib library completely:
+
+  Open the all.dsw workspace file in Microsoft Visual C++. Go to the
+  hdf5 project. Select the zlib.lib file in this project and
+  delete it (press the 'delete' key). Next, open the H5config.h and
+  H5pubconf.h files from the src directory. Remove the following two
+  lines:
+
+  #define HAVE_LIBZ 1
+  #define HAVE_COMPRESS2
+
+  then save the files.
+
+ Next go to the hdf5dll project. Remove the zlib.lib from this
+ project too. Open the project settings for the hdf5dll project.
+ Go to the C/C++ settings tab and under the preprocessor definitions
+ remove the ZLIB_DLL in both the debug and the release settings.
+ Recompile the all project and then save the workspace.
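+If you prefer to script the header edit instead of doing it by hand,
+something like the following works (a sketch only; the miniature
+H5config.h created here is a stand-in so the example is
+self-contained, and the same command would be run on H5pubconf.h):

```shell
# Create a miniature stand-in for src/H5config.h.
mkdir -p src
printf '#define HAVE_LIBZ 1\n#define HAVE_COMPRESS2\n#define H5_VERS_MAJOR 1\n' > src/H5config.h
# Drop the two zlib feature macros, keeping everything else.
sed -e '/HAVE_LIBZ/d' -e '/HAVE_COMPRESS2/d' src/H5config.h > src/H5config.h.tmp
mv src/H5config.h.tmp src/H5config.h
cat src/H5config.h    # prints: #define H5_VERS_MAJOR 1
```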
+
+
+Replacing the zlib library:
+
+  Open the all.dsw workspace and go to the hdf5 project. Delete the
+  zlib.lib file from the file listing. Then select the hdf5 project
+  and right-click to get a menu. Pick the "Add files to project..."
+  option and find the version of zlib that you would like to use.
+  Then click OK in the file chooser dialog. Repeat the steps for the
+  hdf5dll project. You may also want to replace the zlib.h and zconf.h
+  files, which are in the src directory, with your own versions of
+  these files. Then recompile the all project.
diff --git a/release_docs/INSTALL_parallel b/release_docs/INSTALL_parallel
new file mode 100644
index 0000000..e2a0709
--- /dev/null
+++ b/release_docs/INSTALL_parallel
@@ -0,0 +1,177 @@
+ Installation instructions for Parallel HDF5
+ -------------------------------------------
+
+
+1. Overview
+-----------
+
+This file contains instructions for the installation of parallel
+HDF5. Platforms supported by this release are the SGI Origin 2000, IBM
+SP2, Intel TFLOPS, and Linux version 2.4 and greater. The steps are
+somewhat manual and will be further automated in the next release. If
+you have difficulties installing the software on your system, please
+send mail to
+
+ hdfparallel@ncsa.uiuc.edu
+
+In your mail, please include the output of "uname -a". Also attach the
+content of "config.log" if you ran the "configure" command.
+
+First, you must obtain and unpack the HDF5 source as described in the
+INSTALL file. You also need the include and library paths of the MPI
+and MPIO software installed on your system, since the parallel HDF5
+library uses them for parallel I/O access.
+
+
+2. Quick Instruction for known systems
+--------------------------------------
+
+The following shows the particular steps to run the parallel HDF5
+configure for a few machines we've tested. If your particular platform
+is not shown, or the steps somehow do not work for you, please go
+to the next section for more detailed explanations.
+
+------
+TFLOPS
+------
+
+Follow the instructions in INSTALL_TFLOPS.
+
+-------
+IBM SP2
+-------
+
+First of all, make sure your environment variables are set correctly
+to compile and execute a single-process MPI application on the SP2
+machine. They should be similar to the following:
+
+ setenv CC mpcc_r
+ setenv MP_PROCS 1
+ setenv MP_NODES 1
+ setenv MP_LABELIO no
+ setenv MP_RMPOOL 0
+ setenv RUNPARALLEL "MP_PROCS=2 MP_TASKS_PER_NODE=2 poe"
+ setenv LLNL_COMPILE_SINGLE_THREADED TRUE
+
+The shared library configuration for this version is broken, so only
+the static library is supported.
+
+An error for powerpc-ibm-aix4.3.2.0's (LLNL Blue) install method was
+discovered after the code freeze. You need to remove the following line
+from config/powerpc-ibm-aix4.3.2.0 before configuration:
+
+ ac_cv_path_install=${ac_cv_path_install='cp -r'}
+
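+The removal can also be scripted; for example (a sketch that uses a
+two-line stand-in copy of the config file so it is self-contained;
+only the ac_cv_path_install line matters):

```shell
# Stand-in for config/powerpc-ibm-aix4.3.2.0.
mkdir -p config
cat > config/powerpc-ibm-aix4.3.2.0 <<'EOF'
CC=mpcc_r
ac_cv_path_install=${ac_cv_path_install='cp -r'}
EOF
# Strip the broken install override before running configure.
grep -v 'ac_cv_path_install' config/powerpc-ibm-aix4.3.2.0 > config/aix.tmp
mv config/aix.tmp config/powerpc-ibm-aix4.3.2.0
cat config/powerpc-ibm-aix4.3.2.0    # prints: CC=mpcc_r
```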
+Then do the following steps:
+
+ $ ./configure --disable-shared --prefix=<install-directory>
+ $ make # build the library
+ $ make check # verify the correctness
+ $ make install
+
+
+---------------
+SGI Origin 2000
+Cray T3E
+(where MPI-IO is part of system MPI library such as mpt 1.3)
+---------------
+
+#!/bin/sh
+
+RUNPARALLEL="mpirun -np 3"
+export RUNPARALLEL
+LIBS="-lmpi"
+export LIBS
+./configure --enable-parallel --disable-shared --prefix=$PWD/installdir
+make
+make check
+make install
+
+
+---------------
+SGI Origin 2000
+Cray T3E
+(where MPI-IO is not part of the system MPI library, or you want to use
+ your own version of MPIO)
+---------------
+
+mpi1_inc="" #mpi-1 include
+mpi1_lib="" #mpi-1 library
+mpio_inc=-I$HOME/ROMIO/include #mpio include
+mpio_lib="-L$HOME/ROMIO/lib/IRIX64" #mpio library
+
+MPI_INC="$mpio_inc $mpi1_inc"
+MPI_LIB="$mpio_lib $mpi1_lib"
+
+#for version 1.1
+CPPFLAGS=$MPI_INC
+export CPPFLAGS
+LDFLAGS=$MPI_LIB
+export LDFLAGS
+RUNPARALLEL="mpirun -np 3"
+export RUNPARALLEL
+LIBS="-lmpio -lmpi"
+export LIBS
+
+./configure --enable-parallel --disable-shared --prefix=$PWD/installdir
+make
+make check
+make install
+
+
+---------------------
+Linux 2.4 and greater
+---------------------
+
+Be sure that your installation of MPICH was configured with the following
+configuration command-line option:
+
+ -cflags="-D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64"
+
+This allows for >2GB sized files on Linux systems and is only available
+with Linux kernels 2.4 and greater.
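As a sketch only (the MPICH source directory and install prefix below are
assumptions, not taken from this document), a matching MPICH configuration
might look like:

```shell
# Hypothetical example: rebuild MPICH with the large-file flags above.
# The source directory and --prefix are placeholders; adjust for your site.
cd mpich-1.2.1
./configure \
    -cflags="-D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64" \
    --prefix=/usr/local/mpi
make
make install
```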
+
+
+------------------
+HP V2500 and N4000
+------------------
+
+Follow the instructions in section 3.
+
+
+3. Detailed explanation
+-----------------------
+
+The HDF5 library can be configured to use MPI and MPI-IO for parallelism
+on a distributed multi-processor system. The easiest way to do this is to
+have a properly installed parallel compiler (e.g., MPICH's mpicc or IBM's
+mpcc) and supply that executable as the value of the CC environment
+variable:
+
+ $ CC=mpcc ./configure
+ $ CC=/usr/local/mpi/bin/mpicc ./configure
+
+If no such wrapper script is available then you must specify your normal
+C compiler along with the distribution of MPI/MPI-IO which is to be used
+(values other than `mpich' will be added at a later date):
+
+ $ ./configure --enable-parallel=mpich
+
+If the MPI/MPI-IO include files and/or libraries cannot be found by the
+compiler then their directories must be given as arguments to CPPFLAGS
+and/or LDFLAGS:
+
+ $ CPPFLAGS=-I/usr/local/mpi/include \
+ LDFLAGS=-L/usr/local/mpi/lib/LINUX/ch_p4 \
+ ./configure --enable-parallel=mpich
+
+If a parallel library is being built then configure attempts to determine
+how to run a parallel application on one processor and on many
+processors. If the compiler is `mpicc' and the user hasn't specified
+values for RUNSERIAL and RUNPARALLEL then configure chooses `mpirun' from
+the same directory as `mpicc':
+
+ RUNSERIAL: /usr/local/mpi/bin/mpirun -np 1
+ RUNPARALLEL: /usr/local/mpi/bin/mpirun -np $${NPROCS:=3}
+
+The `$${NPROCS:=3}' will be substituted with the value of the NPROCS
+environment variable at the time `make check' is run (or the value 3).
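As a quick illustration, `$${NPROCS:=3}' relies on ordinary shell
default-value expansion; the sketch below (plain sh, no HDF5 required)
shows how it behaves:

```shell
# ${VAR:=default} substitutes (and assigns) the default when VAR is unset;
# an existing value is left alone.
unset NPROCS
echo "mpirun -np ${NPROCS:=3}"   # -> mpirun -np 3
NPROCS=8
echo "mpirun -np ${NPROCS:=3}"   # -> mpirun -np 8 (existing value wins)
```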
diff --git a/release_docs/RELEASE.txt b/release_docs/RELEASE.txt
new file mode 100644
index 0000000..eca88b5
--- /dev/null
+++ b/release_docs/RELEASE.txt
@@ -0,0 +1,227 @@
+HDF5 library version 1.5.7 currently under development
+================================================================================
+
+
+INTRODUCTION
+
+This document describes the differences between HDF5-1.4.0 and
+HDF5-1.5-snap0, and contains information on the platforms tested and
+known problems in HDF5-1.5-snap0. For more details, check the
+HISTORY.txt file in the HDF5 source.
+
+The HDF5 documentation can be found on the NCSA ftp server
+(ftp.ncsa.uiuc.edu) in the directory:
+
+ /HDF/HDF5/docs/
+
+For more information look at the HDF5 home page at:
+
+ http://hdf.ncsa.uiuc.edu/HDF5/
+
+If you have any questions or comments, please send them to:
+
+ hdfhelp@ncsa.uiuc.edu
+
+CONTENTS
+
+- New Features
+- Bug Fixes since HDF5-1.4.0
+- Platforms Tested
+- Known Problems
+
+Bug Fixes since HDF5-1.4.0
+==========================
+
+Library
+-------
+ * Fixed bug with contiguous hyperslabs not being detected, causing
+ slower I/O than necessary.
+ * Fixed bug where non-aligned hyperslab I/O on chunked datasets was
+   causing errors during I/O.
+ * The RCSID string in H5public.h was causing a C++ compilation problem:
+   when the header was included multiple times, C++ did not allow
+   multiple definitions of the same static variable. All occurrences of
+   the RCSID definition have been removed, since it had not been used
+   consistently before.
+ * Fixed bug with non-zero userblock sizes causing raw data to not write
+ correctly.
+ * Fixed build on Linux systems with --enable-static-exec flag. It now
+ works correctly.
+ * IMPORTANT: Fixed file metadata corruption bug which could cause metadata
+ data loss in certain situations.
+ * The allocation-by-alignment (H5Pset_alignment) feature code was
+   somehow dropped in some 1.3.x version. It has been re-implemented
+   with a "new and improved" algorithm that also keeps track of "wasted"
+   file fragments in the free-list.
+ * Removed the limitation that the data transfer buffer size needed to
+   be set for datasets whose dimensions were too large for the 'all'
+   selection code to handle. Datasets with dimensions of any size should
+   now be handled correctly.
+
+Configuration
+-------------
+ * Changed the default value of $NPROCS from 2 to 3, since 3 processes
+   have a much better chance of catching parallel errors than just 2.
+ * Basic port to Compaq (nee DEC) Alpha OSF 5.
+
+
+Tools
+-----
+
+Documentation
+-------------
+
+
+New Features
+============
+
+ * C++ API:
+     - Added two new member functions, Exception::getFuncName() and
+       Exception::getCFuncName(), to provide the name of the member
+       function where an exception is thrown.
+ - IdComponent::operator= becomes a virtual function because
+ DataType, DataSpace, and PropList provide their own
+ implementation. The new operator= functions invoke H5Tcopy,
+ H5Scopy, and H5Pcopy to make a copy of a datatype, dataspace,
+ and property list, respectively.
+ * F90 API:
+     - Added an additional parameter "dims" to the h5dread/h5dwrite and
+       h5aread/h5awrite subroutines. This parameter is a 1D array of
+       size 7 containing the sizes of the data buffer dimensions.
+ * F90 static library is available on Windows platforms. See
+ INSTALL_Windows.txt for details.
+ * F90 APIs are available on HPUX 11.00 and IBM SP platforms.
+ * File sizes greater than 2GB are now supported on Linux systems with
+ version 2.4.x or higher kernels.
+ * Added a global string variable H5_lib_vers_info_g which holds the
+ HDF5 library version information. This can be used to identify
+ an hdf5 library or hdf5 application binary.
+ Also added a verification of the consistency between H5_lib_vers_info_g
+ and other version information in the source code.
+ * An H5 <-> GIF converter has been added, available under
+   tools/gifconv. The converter can also create animated GIFs.
+ * Parallel HDF5 now runs on the HP V2500 and HP N4000 machines.
+ * Verified correct operation of library on Solaris 2.8 in both 64-bit and
+ 32-bit compilation modes. See INSTALL document for instructions on
+ compiling the distribution with 64-bit support.
+ * Modified the Pablo build procedure to permit building the
+   instrumented library to link either with the Trace libraries as
+   before or with the Pablo Performance Capture Facility.
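The H5_lib_vers_info_g string mentioned above can be located with standard
tools. The sketch below is hypothetical: the stand-in file simulates a
library binary, and the exact embedded string format is an assumption.

```shell
# Hypothetical sketch: find the embedded HDF5 version string in a binary.
# A stand-in file is created here; on a real system you would point
# `strings` at libhdf5.a or your application binary instead.
printf 'HDF5 library version: 1.5.7\0' > /tmp/fake_libhdf5.a
strings /tmp/fake_libhdf5.a | grep 'HDF5 library version'
```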
+
+Platforms Tested
+================
+
+ AIX 4.3.3.0 (IBM SP powerpc) mpcc_r 3.6.6
+ Cray T3E sn6711 2.0.5.45 Cray Standard C Version 6.4.0.0
+ Cray Fortran Version 3.4.0.2
+ Cray SV1 sn9605 10.0.0.7 Cray Standard C Version 6.4.0.0
+ Cray Fortran Version 3.4.0.2
+ FreeBSD 4.3 gcc 2.95.2
+ g++ 2.95.2
+ HP-UX B.10.20 HP C HP92453-01 A.10.32.30
+ HP-UX B.11.00 HP C HP92453-01 A.11.00.13
+ HP C HP92453-01 A.11.01.20
+ IRIX 6.5 MIPSpro cc 7.30
+ IRIX64 6.5 (64 & n32) MIPSpro cc 7.3.1m
+ mpt.1.4.0.2
+ mpich-1.2.1
+ Linux 2.4.4 gcc-2.95.3
+ g++ 2.95.3
+ Linux 2.2.16-3smp gcc-2.95.2
+ g++ 2.95.2
+ pgf90 3.1-3
+ OSF1 V4.0 DEC-V5.2-040
+ Digital Fortran 90 V4.1-270
+ SunOS 5.6 WorkShop Compilers 5.0 98/12/15 C 5.0
+ (Solaris 2.6) WorkShop Compilers 5.0 99/10/25 Fortran 90
+ 2.0 Patch 107356-04
+ Workshop Compilers 5.0 98/12/15 C++ 5.0
+ SunOS 5.7 WorkShop Compilers 5.0 98/12/15 C 5.0
+ (Solaris 2.7) WorkShop Compilers 5.0 99/10/25 Fortran 90
+ 2.0 Patch 107356-04
+ Workshop Compilers 5.0 98/12/15 C++ 5.0
+ TFLOPS r1.0.4 v4.0 mpich-1.2.1 with local changes
+ Windows NT4.0, 2000 (NT5.0) MSVC++ 6.0
+ Windows 98 MSVC++ 6.0
+
+Known Problems
+==============
+* DLLs do not work on Windows 98 (and probably on NT and 2000 too).
+
+* The stream-vfd test uses IP port 10007 for testing. If another
+  application is already using that port, the test will hang indefinitely
+  and must be terminated with the kill command. To try the test again,
+  change the port address in test/stream_test.c to one not in use on the
+  host.
+
+* The --enable-static-exec configure flag fails to compile for Solaris
+ platforms. This is due to the fact that not all of the system
+ libraries on Solaris are available in a static format.
+
+ The --enable-static-exec configure flag also fails to correctly compile
+ on IBM SP2 platform for the serial mode. The parallel mode works fine
+ with this option.
+
+ It is suggested that you don't use this option on these platforms
+ during configuration.
+
+* With the gcc 2.95.2 compiler, HDF5 uses the `-ansi' flag during
+ compilation. The ANSI version of the compiler complains about not being
+ able to handle the `long long' datatype with the warning:
+
+ warning: ANSI C does not support `long long'
+
+ This warning is innocuous and can be safely ignored.
+
+* SunOS 5.6 with C WorkShop Compilers 4.2: Hyperslab selections will
+ fail if library is compiled using optimization of any level.
+
+* When building hdf5 tools and applications on the Windows platform, a
+  linker warning (defaultlib "LIBC" conflicts with use of other libs)
+  will appear for the debug version when using VC++ 6.0. This warning
+  doesn't affect building or testing hdf5 applications. We will continue
+  investigating it.
+
+* The h5toh4 converter fails two cases (tstr.h5 and tmany.h5) for the
+  release DLL version on Windows 2000 and NT. The cause is possibly the
+  Windows NT DLL convention on freeing memory: it seems that memory
+  cannot be freed across library or DLL boundaries. This is still under
+  investigation.
+
+* The Stream VFD has not yet been tested under Windows.
+  It is not supported on the TFLOPS machine.
+
+* The shared library option is broken for IBM SP and some Origin 2000
+  platforms. One needs to run ./configure with '--disable-shared'.
+
+* The ./dsets test fails on the TFLOPS machine if the test program,
+  dsets.c, is compiled with the -O option. The hdf5 library itself still
+  works correctly with -O. The test program works fine if compiled with
+  -O1 or -O0; only -O (same as -O2) causes it to fail.
+
+* Certain platforms give false negatives when testing h5ls:
+ - Solaris x86 2.5.1, Cray T3E and Cray J90 give errors during testing
+ when displaying object references in certain files. These are benign
+ differences due to the difference in sizes of the objects created on
+ those platforms. h5ls appears to be dumping object references
+ correctly.
+ - Cray J90 (and Cray T3E?) give errors during testing when displaying
+ some floating-point values. These are benign differences due to the
+ different precision in the values displayed and h5ls appears to be
+ dumping floating-point numbers correctly.
+
+* Before building the HDF5 F90 library from source on Crays (T3E and
+  J90), replace the H5Aff.f90, H5Dff.f90, and H5Pff.f90 files in the
+  fortran/src subdirectory of the top-level directory with the
+  Cray-specific files from the
+  ftp://hdf.ncsa.uiuc.edu/pub/ougoing/hdf5/hdf5-1.4.0-beta/F90_source_for_Crays
+  directory.
+
+* The h4toh5 utility produces images that do not correctly conform
+ to the HDF5 Image and Palette Specification.
+
+ http://hdf.ncsa.uiuc.edu/HDF5/doc/ImageSpec.html
+
+  Several required HDF5 attributes are omitted, and the dataspace is
+  reversed (i.e., the height and width of the image dataset are
+  incorrectly described). For more information, please see:
+
+ http://hdf.ncsa.uiuc.edu/HDF5/H5Image/ImageDetails.htm
+