HDF5 Library Changes from Release to Release

This document describes the changes in the HDF5 library as it has progressed from release to release.

Release 1.4 (current release)

The following material is from the file hdf5/RELEASE, as distributed with the HDF5 library source code.

                       HDF5 Release 1.4-Beta2


INTRODUCTION

This document describes the differences between HDF5-1.2.0 and
HDF5-1.4-Beta2, and contains information on the platforms tested and
known problems in HDF5-1.4-Beta2. For more details check the HISTORY 
file in the HDF5 source.

The HDF5 documentation can be found on the NCSA ftp server
(ftp.ncsa.uiuc.edu) in the directory:

     /HDF/HDF5/docs/

For more information look at the HDF5 home page at:
   
    http://hdf.ncsa.uiuc.edu/HDF5/

If you have any questions or comments, please send them to:

    hdfhelp@ncsa.uiuc.edu


CONTENTS

- New Features
- h4toh5 Utility 
- F90 Support
- C++ Support
- Bug Fixes since HDF5-1.2.0 
- Bug Fixes since HDF5-1.4.0-beta2
- Platforms Tested
- Known Problems

New Features
============
   * The Virtual File Layer, VFL, was added to replace the old file
     drivers. It also provides an API for user-defined file drivers.
   * New features added to snapshots. Use 'snapshot help' to see a
     complete list of features.
   * Improved configure to detect if MPIO routines are available when
     parallel mode is requested.
   * Added Thread-Safe support. Phase I implemented.
   * Added data sieve buffering to the raw data I/O path. This is enabled
     for all VFL drivers except the mpio and core drivers. The sieve
     buffer size is set with the new API function H5Pset_sieve_buf_size()
     and retrieved with H5Pget_sieve_buf_size() (see the sketch after
     this list).
   * Added new Virtual File Driver, Stream VFD, to send/receive entire
     HDF5 files via socket connections.
   * As part of the VFL, HDF-GASS and HDF-SRB are also added in this
     release. For details, please read the INSTALL_VFL file. 
   * Increased maximum number of dimensions for a dataset (H5S_MAX_RANK)
     from 31 to 32 to align with HDF4 & netCDF.
   * Added 'query' function to VFL drivers.  Also added 'type' parameter to
     VFL 'read' & 'write' calls, so they are aware of the type of data
     being accessed in the file.  Updated the VFL document also.
   * A new h4toh5 utility converts HDF4 files to analogous HDF5 files.
   * Added a new array datatype to the datatypes which can be created.  Removed
     "array fields" from compound datatypes (use an array datatype instead).
   * Parallel HDF5 works correctly with mpich-1.2.1 on Solaris, SGI, and
     Linux.
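
   The following sketch (not part of the release notes; the file name is
   hypothetical and error checking is omitted) shows how the new sieve
   buffer calls might be used on a file access property list:

      #include "hdf5.h"

      int main(void)
      {
          hid_t  fapl, file;
          size_t sieve_size = 256 * 1024;             /* 256 KB sieve buffer */

          fapl = H5Pcreate(H5P_FILE_ACCESS);
          H5Pset_sieve_buf_size(fapl, sieve_size);    /* set the buffer size */
          H5Pget_sieve_buf_size(fapl, &sieve_size);   /* read it back */

          file = H5Fopen("example.h5", H5F_ACC_RDONLY, fapl);
          /* Raw data I/O on this file now goes through the sieve buffer
           * for all VFL drivers except the mpio and core drivers. */
          H5Fclose(file);
          H5Pclose(fapl);
          return 0;
      }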

h4toh5 Utility 
==============

    The h4toh5 utility is a new utility that converts an HDF4 file to an 
    HDF5 file.  For details, see the document, "Mapping HDF4 Objects to 
    HDF5 Objects":
       http://hdf.ncsa.uiuc.edu/HDF5/papers/H4-H5MappingGuidelines.pdf
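
    As an illustration (the command form and file names below are assumed
    from typical usage, not taken from this document), a conversion might
    be run as:

        h4toh5 input.hdf output.h5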

    Known Limitations of the h4toh5 beta release
    ---------------------------------------------

    1. Error Handling

    Error reporting is minimal.

    2. String Datatype

    HDF4 has no 'string' type. String valued data are usually defined as
    an array of 'char' in HDF4. The h4toh5 utility will generally map
    these to HDF5 'String' types rather than array of char, with the
    following additional rules:

       * For the data of an HDF4 SDS, image, and palette, if the data is
	 declared 'DFNT_CHAR8' it will be assumed to be integer and
	 will be an H5T_INTEGER type.
       * For attributes of any HDF4 object, data of type 'DFNT_CHAR8'
	 will be converted to an HDF5 'H5T_STRING' type.
       * For an HDF4 Vdata, it is difficult to determine whether data 
	 of type 'DFNT_CHAR8' is intended to be bytes or characters.
	 The h4toh5 utility will consider them to be C characters, and
	 will convert them to an HDF5 'H5T_STRING' type.


    3. Compression, Chunking and External Storage

    Chunking is supported, but compression and external storage are not.

    An HDF4 object that uses chunking will be converted to an HDF5 file
    with analogous chunked storage.

    An HDF4 object that uses compression will be converted to an
    uncompressed HDF5 object.  

    An HDF4 object that uses external storage will be converted to an
    HDF5 object without external storage.  

    4. Memory Use

    The beta version of the h4toh5 utility copies data from HDF4 objects
    in a single read followed by a single write to the HDF5 object. For
    large objects, this requires a very large amount of memory, and the
    conversion may be extremely slow or fail on some platforms.

    Note that a dataset that has only been partly written will
    be read completely, including uninitialized data, and all the
    data will be written to the HDF5 object.

    5. Platforms

    The h4toh5 utility requires HDF5 Release 1.4.

    The beta h4toh5 utility has been tested on Solaris 2.6, Solaris 2.5,
    Irix 6.5, HPUX 11.0, DEC Unix, FreeBSD, and Windows 2000.

F90 Support
===========
 
    This is the first release of the HDF5 Library with fully integrated 
    F90 API support.  The Fortran Library is created when the 
    --enable-fortran flag is specified during configuration.

    Not all F90 subroutines are implemented. Please refer to the HDF5 
    Reference Manual for more details.
   
    F90 APIs are available on the Solaris 2.6 and 2.7, Linux, DEC UNIX, 
    T3E, J90, and O2K (64-bit option only) platforms. The parallel version of 
    the HDF5 F90 Library is supported on the O2K and T3E platforms.

    Changes since the last prototype release (July 2000)
    ----------------------------------------------------

       * h5open_f and h5close_f must be called instead of h5init_types and 
         h5close_types.

       * The following subroutines are no longer available: 
             h5pset_xfer_f
             h5pget_xfer_f
             h5pset_mpi_f
             h5pget_mpi_f
             h5pset_stdio_f
             h5pget_stdio_f
             h5pset_sec2_f
             h5pget_sec2_f
             h5pset_core_f
             h5pget_core_f
             h5pset_family_f
             h5pget_family_f
             
       * The following functions have been added:
             h5pset_fapl_mpio_f
             h5pget_fapl_mpio_f
             h5pset_dxpl_mpio_f
             h5pget_dxpl_mpio_f
        
       * In the previous HDF5 F90 releases, the implementation of object 
         references and dataset region references was not portable. This 
         release introduces a portable implementation, but it also introduces 
         changes to the read/write APIs that handle references.  If object or 
         dataset region references are written or read to/from an HDF5 file, 
         h5dwrite_f and h5dread_f must use the extra parameter, n, for the 
         buffer size:

            h5dwrite(read)_f(dset_id, mem_type_id, buf, n, hdferr, &
                                                       ^^^ 
                             mem_space_id, file_space_id, xfer_prp)

         For other datatypes the APIs were not changed. 
              
             
C++ Support
===========

        This is the first release of the HDF5 Library with fully integrated 
        C++ API support. The HDF5 C++ library is built when the --enable-cxx
        flag is specified during configuration.

        Check the HDF5 Reference Manual for available C++ documentation.

        C++ APIs are available for Solaris 2.6 and 2.7, Linux, and FreeBSD. 
 

Bug Fixes since HDF5-1.2.0
==========================

Library
-------

   * The function H5Pset_mpi was renamed to H5Pset_fapl_mpio.
   * Corrected a floating-point number conversion error on the Cray J90
     platform: the value 0.0 was not converted correctly.
   * Fixed an error that prevented dataset region references from having
     their regions retrieved correctly.
   * Corrected a bug that caused non-parallel file drivers to fail in
     the parallel version.
   * Added internal free lists to reduce the memory required by the
     library, along with the H5garbage_collect API function.
   * Fixed an error in H5Giterate that did not update the "index"
     parameter correctly.
   * Fixed an error in hyperslab iteration that did not walk through the
     correct sequence of array elements when hyperslabs were staggered in
     a certain pattern.
   * Fixed several other problems in hyperslab iteration code.
   * Fixed another H5Giterate bug that caused groups with large numbers
     of objects in them to misbehave when the callback function returned
     non-zero values.
   * Changed the return type of H5Aiterate and the H5A_operator_t typedef
     to herr_t, to align them with the dataset and group iterator
     functions (see the sketch after this list).
   * Changed H5Screate_simple and H5Sset_extent_simple to not allow
     dimensions of size 0 without the same dimension being unlimited.
   * Improved metadata hashing and caching algorithms to avoid many hash
     flushes and to remove some redundant I/O when moving metadata blocks
     in the file.
   * The "struct(opt)" type conversion function which gets invoked for
     certain compound datatype conversions was fixed for nested compound
     types. This required a small change in the datatype conversion
     function API.
   * Rewrote much of the hyperslab code to make it significantly faster.
   * Added bounded garbage collection for the free lists when they run
     out of memory and also added H5set_free_list_limits API call to
     allow users to put an upper limit on the amount of memory used for
     free lists.
   * Dereferencing an object or dataset region reference now checks for
     non-existent or deleted objects and disallows the dereference.
   * "Time" datatypes (H5T_UNIX_D*) were not being stored to and
     retrieved from object headers correctly; this is now fixed.
   * Fixed H5Dread and H5Dwrite calls with H5FD_MPIO_COLLECTIVE requests
     that could hang because not all processes transferred the same
     amount of data (i.e., a premature collective return when zero data
     was requested). Collective calls that might cause such hangs are now
     carried out via the corresponding MPI-IO independent calls.
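
   The following sketch (not part of the release notes; the file and
   dataset names are hypothetical and error checking is omitted) shows an
   attribute iteration callback using the new herr_t return type:

      #include <stdio.h>
      #include "hdf5.h"

      /* H5A_operator_t callback: return 0 to continue the iteration,
       * non-zero to stop it. */
      static herr_t print_attr(hid_t loc_id, const char *attr_name, void *op_data)
      {
          (void)loc_id;
          (void)op_data;
          printf("attribute: %s\n", attr_name);
          return 0;
      }

      int main(void)
      {
          hid_t    file, dset;
          unsigned idx = 0;

          file = H5Fopen("example.h5", H5F_ACC_RDONLY, H5P_DEFAULT);
          dset = H5Dopen(file, "/dataset1");
          H5Aiterate(dset, &idx, print_attr, NULL);   /* also returns herr_t */
          H5Dclose(dset);
          H5Fclose(file);
          return 0;
      }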

Configuration
-------------

   * The hdf5.h include file was fixed to allow the HDF5 Library to be
     compiled with other libraries/applications that use GNU autoconf. 
   * Configuration for parallel HDF5 was improved. Configure now attempts
     to link with libmpi.a and/or libmpio.a as the MPI libraries by
     default. It also uses "mpirun" to launch MPI tests by default. It
     tests linking of MPIO routines during the configuration stage,
     rather than failing later as before. One can just run "./configure
     --enable-parallel" if the MPI library is in the system library path.
   * Added support for pthread library and thread-safe option.
   * The libhdf5.settings file shows the correct machine byte-sex.
   * Added the "--enable-stream-vfd" option to configure to control
     whether the Stream VFD is built. For Solaris, added -lsocket to the
     LIBS list of libraries.

Tools
-----
    * h5dump now accepts both short and long command-line parameters. A
      change from the old behavior is that multiple attributes, datasets,
      groups, soft links, and object IDs can no longer be specified with
      just one flag; a flag must be given with each object. I.e., instead
      of doing this:

        h5dump -a /attr1 /attr2 foo.h5

      do this:

        h5dump -a /attr1 -a /attr2 foo.h5

      The cases are similar for the other object types.
    * h5dump correctly displays compound datatypes.
    * Corrected an error in h5toh4 that did not convert 32-bit integers
      from HDF5 to HDF4 correctly on the T3E platform.
    * h5dump now correctly displays the committed copy of predefined
      types.
    * Added an option, -V, to show the version information of h5dump.
    * Fixed a bug that caused h5toh4 to dump core when executed on
      platforms like TFLOPS.
    * The test script for h5toh4 used to be unable to detect that the hdp
      dumper command was not valid.  It now detects and reports the
      failure of hdp execution.
    * Merged the tools with the 1.2.2 branch. Required adding new
      macros, VERSION12 and VERSION13, used in conditional compilation.
      Updated the Windows project files for the tools.            
    * h5dump displays opaque and bitfield data correctly.
    * h5dump and h5ls can browse files created with the Stream VFD
      (e.g., "h5ls :").
    * h5dump has a new option, "-o ", which outputs the raw data of the
      dataset to an ASCII text file.
    * h5toh4 used to convert the HDF5 string type to the HDF4 DFNT_INT8
      type.  It has been corrected to produce the HDF4 DFNT_CHAR type
      instead.
    * h5dump and h5ls display array data correctly.

Documentation
-------------

    * User's Guide and Reference Manual were updated. 
      See doc/html/PSandPDF/index.html for more details. 

   
Bug Fixes since HDF5-1.4.0-beta2
================================

    * Corrected a configuration error that did not include compression
        support correctly.
    * Cleaned up lots of warnings.
    * Changed a few h5dump command line switches and added long versions of
        the switches.

Platforms Tested
================

 Note: Due to the nature of the bug fixes, only static versions of the
       library and tools were tested.


  AIX 4.3.2 (IBM SP)            mpcc_r 3.6.6
  Cray T3E sn6711 2.0.539b      Cray Standard C Version 6.3.0.2
                                Cray Fortran Version 3.4.0.2
  FreeBSD 4.2-STABLE            gcc 2.95.2
                                g++ 2.95.2
  HP-UX B.10.20                 HP C  HP92453-01 A.10.32.30
  HP-UX B.11.00                 HP C  HP92453-01 A.11.00.13
  IRIX 6.5                      MIPSpro cc 7.30
  IRIX64 6.5 (64 & n32)         MIPSpro cc 7.3.1m
                                mpt.1.4.0.2
				mpich-1.2.1
  Linux 2.2.16-3smp             gcc-2.95.2
                                g++ 2.95.2
                                pgf90 3.1-3
  OSF1 V4.0                     DEC-V5.2-040
                                Digital Fortran 90 V4.1-270
  SunOS 5.6                     WorkShop Compilers 5.0 98/12/15 C 5.0 
  (Solaris 2.6)                 WorkShop Compilers 5.0 99/10/25 Fortran 90
                                       2.0 Patch 107356-04
                                Workshop Compilers 5.0 98/12/15 C++ 5.0
  SunOS 5.7                     WorkShop Compilers 5.0 98/12/15 C 5.0
  (Solaris 2.7)                 WorkShop Compilers 5.0 99/10/25 Fortran 90
                                       2.0 Patch 107356-04
                                Workshop Compilers 5.0 98/12/15 C++ 5.0
  TFLOPS 3.3                    mpich-1.2.0 with local changes
  Windows NT4.0, 2000 (NT5.0)   MSVC++ 6.0

Known Problems
==============

* When building the HDF5 test project on Windows NT 4.0 (testhdf5
  and testhdf5dll), the compiler fails to compile tvstr.c within
  the whole project; however, when separately selecting the 
  tvstr.c source code, it passes the compiler and everything that
  depends on tvstr.obj links correctly.

* h4toh5 fails on object references on the Cray T3E.

* The installation of the DEC Fortran binaries fails.  It can be
  done manually by copying the *.mod files from the fortran/src
  directory.

* Fortran modules are not installed when created. This should be done
  manually by copying the modules from the fortran/src directory.

* SunOS 5.6 with C WorkShop Compilers 4.2:  Hyperslab selections will 
  fail if library is compiled using optimization of any level.

* The Stream VFD has not yet been tested under Windows.
  It is not supported on the TFLOPS machine.

* The shared library option is broken for the IBM SP and some Origin 2000
  platforms.  One needs to run ./configure with '--disable-shared'.

* The ./dsets test fails on the TFLOPS machine if the test program,
  dsets.c, is compiled with the -O option.  The HDF5 library still works
  correctly with the -O option.  The test program works fine if it is
  compiled with -O1 or -O0.  Only -O (same as -O2) causes the test
  program to fail.

* Certain platforms give false negatives when testing h5ls:
    - Solaris x86 2.5.1, Cray T3E and Cray J90 give errors during testing
        when displaying object references in certain files.  These are benign 
        differences due to the difference in sizes of the objects created on
        those platforms.  h5ls appears to be dumping object references
        correctly.
    - Cray J90 (and Cray T3E?) give errors during testing when displaying
        some floating-point values.  These are benign differences due to the
        different precision in the values displayed and h5ls appears to be
        dumping floating-point numbers correctly.

Previous Releases

The following material is from the file hdf5/HISTORY, as distributed with the HDF5 library source code.

CONTENTS
I.   Release Information for hdf5-1.2.1
II.  Release Information for hdf5-1.2.0
     A. Platforms Supported
     B. Known Problems
     C. Changes Since Version 1.0.1  
        1. Documentation
        2. Configuration
        3. Debugging
        4. Datatypes
        5. Dataspaces
        6. Persistent Pointers
        7. Parallel Support
        8. New API Functions
           a. Property List Interface
           b. Dataset Interface
           c. Dataspace Interface
           d. Datatype Interface
           e. Identifier Interface
           f. Reference Interface
           g. Ragged Arrays
        9. Tools

III.  Changes Since the Version 1.0.0 Release   

IV.   Changes Since the Beta 1.0.0 Release

V.    Changes Since the Second Alpha 1.0.0 Release

VI.   Changes Since the First Alpha 1.0.0 Release 

-----------------------------------------------------------------------
I. Release Information for hdf5-1.2.1

Bug fixes since HDF5-1.2.0
==========================

Configuration
-------------

   * The hdf5.h include file was fixed to allow the HDF5 Library to be compiled
     with other libraries/applications that use GNU autoconf. 
   * Configuration for parallel HDF5 was improved. Configure now attempts to
     link with libmpi.a and/or libmpio.a as the MPI libraries by default.
     It also uses "mpirun" to launch MPI tests by default.  It tests to
     link MPIO routines during the configuration stage, rather than failing
     later as before.  One can just do "./configure --enable-parallel"
     if the MPI library is in the system library.

Library
-------

   * Fixed an error that prevented dataset region references from having
     their regions retrieved correctly.
   * Added internal free lists to reduce the memory required by the
     library, along with the H5garbage_collect API function.
   * Fixed an error in H5Giterate that did not update the "index"
     parameter correctly.
   * Fixed an error in hyperslab iteration that did not walk through the
     correct sequence of array elements when hyperslabs were staggered in
     a certain pattern.
   * Fixed several other problems in hyperslab iteration code.

Tests
------
   * Added additional tests for group and attribute iteration.
   * Added additional test for staggered hyperslab iteration.
   * Added additional test for random 5-D hyperslab selection.

Tools
------

   * Added an option, -V, to show the version information of h5dump.
   * Fixed a bug that caused h5toh4 to dump core when executed on
     platforms like TFLOPS.
   * The test script for h5toh4 used to be unable to detect that the hdp
     dumper command was not valid.  It now detects and reports the
     failure of hdp execution.

Documentation
-------------

   * User's Guide and Reference Manual were updated. 
     See doc/html/PSandPDF/index.html for more details. 

   
Platforms Tested:
================
 Note: Due to the nature of bug fixes, only static versions of the library and tools were tested.


  AIX 4.3.2 (IBM SP)            3.6.6
  Cray T3E 2.0.4.81             cc 6.3.0.1
                                mpt.1.3
  FreeBSD 3.3-STABLE            gcc 2.95.2
  HP-UX B.10.20                 HP C  HP92453-01 A.10.32
  IRIX 6.5                      MIPSpro cc 7.30
  IRIX64 6.5 (64 & n32)         MIPSpro cc 7.3.1m
                                mpt.1.3 (SGI MPI 3.2.0.0)

  Linux 2.2.10 SuSE             egcs-2.91.66               configured with
  (i686-pc-linux-gnu)                                      --disable-hsizet
                                mpich-1.2.0 egcs-2.91.66 19990314/Linux 

  OSF1 V4.0                     DEC-V5.2-040
  SunOS 5.6                     cc WorkShop Compilers 4.2  no optimization
  SunOS 5.7                     cc WorkShop Compilers 5.0
  TFLOPS 2.8                    cicc (pgcc Rel 3.0-5i)
                                mpich-1.1.2 with local changes
  Windows NT4.0 sp5             MSVC++ 6.0

Known Problems:
==============

o SunOS 5.6 with C WorkShop Compilers 4.2:  Hyperslab selections will 
  fail if library is compiled using optimization of any level.


         
II. Release Information for hdf5-1.2.0

A. Platforms Supported
   -------------------

Operating systems listed below with compiler information and MPI library, if
applicable, are systems that HDF5 1.2.0 was tested on.

                           Compiler & libraries
  Platform                      Information              Comment
  --------                      ----------               -------- 
                                
  AIX 4.3.2 (IBM SP)            3.6.6

  Cray J90 10.0.0.6             cc 6.3.0.0

  Cray T3E 2.0.4.61             cc 6.2.1.0 
                                mpt.1.3

  FreeBSD 3.2                   gcc 2.95.1

  HP-UX B.10.20                 HP C  HP92453-01 A.10.32
                                gcc 2.8.1

  IRIX 6.5                      MIPSpro cc 7.30 

  IRIX64 6.5 (64 & n32)         MIPSpro cc 7.3.1m
  			        mpt.1.3 (SGI MPI 3.2.0.0)

  Linux 2.2.10                  egcs-2.91.66               configured with
  						         --disable-hsizet
                                                           libraries: glibc2

  OSF1 V4.0                     DEC-V5.2-040

  SunOS 5.6                     cc WorkShop Compilers 4.2   
                                                           no optimization
  			        gcc 2.8.1

  SunOS 5.7                     cc WorkShop Compilers 5.0
                                gcc 2.8.1
 
  TFLOPS 2.7.1                  cicc (pgcc Rel 3.0-4i)
  			        mpich-1.1.2 with local changes

  Windows NT4.0 intel           MSVC++ 5.0 and 6.0

  Windows NT alpha 4.0          MSVC++ 5.0 

  Windows 98                    MSVC++ 5.0


B. Known Problems
   --------------

* NT alpha 4.0
  Dumper utility h5dump fails if linked with the DLL.

* SunOS 5.6 with C WorkShop Compilers 4.2
  Hyperslab selections will fail if library is compiled using optimization
  of any level.

 
C. Changes Since Version 1.0.1
   ---------------------------

1. Documentation
   -------------

* More examples

* Updated user guide, reference manual, and format specification.

* Self-contained documentation for installations isolated from the
  Internet.

* The HDF5 Tutorial was added to the documentation.

2. Configuration
   -------------

* Better detection and support for MPI-IO.

* Recognition of compilers with known code generation problems.

* Support for various compilers on a single architecture (e.g., the
  native compiler and the GNU compilers).

* Ability to build from read-only media and with different compilers
  and/or options concurrently.

* Added a libhdf5.settings file which summarizes the configuration
  information and is installed along with the library.

* Builds a shared library on most systems that support it.

* Support for Cray T3E, J90 and Windows/NT.

3. Debugging
   ---------

* Improved control and redirection of debugging and tracing messages.

4. Datatypes
   ---------

* Optimizations to compound datatype conversions and I/O operations.

* Added nearly 100 optimized conversion functions for native datatypes 
  including support for non-aligned data.

* Added support for bitfield, opaque, and enumeration types.

* Added distinctions between signed and unsigned char types to the
  list of predefined native hdf5 datatypes.

* Added HDF5 type definitions for C9x types like int32_t.

* Application-defined type conversion functions can handle non-packed
  data.

* Changed the H5Tunregister() function to use wildcards when matching
  conversion functions.  H5Tregister_hard() and H5Tregister_soft()
  were combined into H5Tregister().

* Support for variable-length datatypes (arrays of varying length per
  dataset element). Variable-length strings are currently supported only
  as variable-length arrays of 1-byte integers.
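
The following sketch (not part of the original notes; the file and dataset
names are hypothetical and error checking is omitted) shows a
variable-length datatype being created and written, with H5Dvlen_reclaim()
noted for freeing library-allocated buffers after a read:

    #include "hdf5.h"

    int main(void)
    {
        hid_t   file, space, vltype, dset;
        hsize_t dims[1] = {2};
        int     a[3] = {1, 2, 3}, b[2] = {4, 5};
        hvl_t   buf[2];

        buf[0].len = 3;  buf[0].p = a;      /* first element: 3 ints  */
        buf[1].len = 2;  buf[1].p = b;      /* second element: 2 ints */

        file   = H5Fcreate("vlen.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
        space  = H5Screate_simple(1, dims, NULL);
        vltype = H5Tvlen_create(H5T_NATIVE_INT);    /* VL sequence of int */
        dset   = H5Dcreate(file, "vl_data", vltype, space, H5P_DEFAULT);

        H5Dwrite(dset, vltype, H5S_ALL, H5S_ALL, H5P_DEFAULT, buf);

        /* When reading VL data back, the library allocates the element
         * buffers; H5Dvlen_reclaim(vltype, space, H5P_DEFAULT, rbuf)
         * frees them afterwards. */

        H5Dclose(dset);
        H5Tclose(vltype);
        H5Sclose(space);
        H5Fclose(file);
        return 0;
    }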

5. Dataspaces
   ----------

* New query functions for selections.

* I/O operations bypass the stripmining loop and go directly to
  storage for certain contiguous selections in the absence of type
  conversions.  In other cases the stripmining buffers are used more
  effectively.

* Reduced the number of I/O requests under certain circumstances,
  improving performance on systems with high I/O latency.

6. Persistent Pointers
   -------------------

* Object (serial and parallel) and dataset region (serial only)
  references are implemented.
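
The following sketch (not part of the original notes; the file and dataset
names are hypothetical and error checking is omitted) creates an object
reference and then reopens the object through it:

    #include "hdf5.h"

    int main(void)
    {
        hid_t      file, dset;
        hobj_ref_t ref;

        file = H5Fopen("example.h5", H5F_ACC_RDWR, H5P_DEFAULT);

        /* Create an object reference to an existing dataset
         * (the dataspace argument is -1 for object references). */
        H5Rcreate(&ref, file, "/dataset1", H5R_OBJECT, -1);

        /* Later, reopen the referenced object. */
        dset = H5Rdereference(file, H5R_OBJECT, &ref);

        H5Dclose(dset);
        H5Fclose(file);
        return 0;
    }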

7. Parallel Support
   ----------------

* Improved parallel I/O performance.

* Supported new platforms: Cray T3E, Linux, DEC Cluster.

* Use vendor supported version of MPIO on SGI O2K and Cray platforms.

* Improved the algorithm that translates an HDF5 hyperslab selection
  into an MPI type for better collective I/O performance.

8. New API functions 
   -----------------

  a. Property List Interface:
     ------------------------

  H5Pset_xfer		- set data transfer properties
  H5Pset_preserve      - set dataset transfer property list status 
  H5Pget_preserve      - get dataset transfer property list status
  H5Pset_hyper_cache   - indicates whether to cache hyperslab blocks during I/O
  H5Pget_hyper_cache   - returns information regarding the caching of 
                         hyperslab blocks during I/O
  H5Pget_btree_ratios  - gets B-tree split ratios for a dataset 
                         transfer property list
  H5Pset_btree_ratios  - sets B-tree split ratios for a dataset
                         transfer property list
  H5Pset_vlen_mem_manager - sets the memory manager for variable-length 
                            datatype allocation
  H5Pget_vlen_mem_manager - gets the memory manager for variable-length
                            datatype allocation
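
  A minimal sketch (not part of the original notes; error checking is
  omitted) of the new B-tree split ratio calls on a dataset transfer
  property list:

     #include "hdf5.h"

     int main(void)
     {
         hid_t  dxpl;
         double left, middle, right;

         dxpl = H5Pcreate(H5P_DATASET_XFER);

         /* Favor append-style writes by splitting B-tree nodes unevenly. */
         H5Pset_btree_ratios(dxpl, 0.1, 0.5, 0.9);
         H5Pget_btree_ratios(dxpl, &left, &middle, &right);

         /* dxpl can now be passed as the transfer property list argument
          * of H5Dread()/H5Dwrite(). */
         H5Pclose(dxpl);
         return 0;
     }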

  b. Dataset Interface:
     ------------------

  H5Diterate           - iterate over all selected elements in a dataspace
  H5Dget_storage_size  - return the amount of storage required for a dataset
  H5Dvlen_reclaim      - reclaim VL datatype memory buffers

  c. Dataspace Interface:
     --------------------
  H5Sget_select_hyper_nblocks   - get number of hyperslab blocks
  H5Sget_select_hyper_blocklist - get the list of hyperslab blocks 
                                  currently selected
  H5Sget_select_elem_npoints    - get the number of element points 
                                  in the current selection
  H5Sget_select_elem_pointlist  - get the list of element points 
                                  currently selected
  H5Sget_select_bounds          - gets the bounding box containing 
                                  the current selection
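
  A minimal sketch (not part of the original notes; error checking is
  omitted, and argument types follow the C API of this era, where the
  hyperslab start coordinates are hssize_t) of the new selection query
  functions:

     #include "hdf5.h"

     int main(void)
     {
         hid_t    space;
         hsize_t  dims[2]  = {10, 10};
         hssize_t start[2] = {2, 2};
         hsize_t  count[2] = {4, 4};
         hsize_t  lo[2], hi[2];
         hssize_t nblocks;

         space = H5Screate_simple(2, dims, NULL);
         H5Sselect_hyperslab(space, H5S_SELECT_SET, start, NULL, count, NULL);

         /* Query the current selection. */
         H5Sget_select_bounds(space, lo, hi);             /* bounding box     */
         nblocks = H5Sget_select_hyper_nblocks(space);    /* number of blocks */
         (void)nblocks;

         H5Sclose(space);
         return 0;
     }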

  d. Datatype Interface:
     -------------------
  H5Tget_super         - return the base datatype from which a 
                         datatype is derived
  H5Tvlen_create       - creates a new variable-length datatype
  H5Tenum_create       - creates a new enumeration datatype
  H5Tenum_insert       - inserts a new enumeration datatype member
  H5Tenum_nameof       - returns the symbol name corresponding to a 
                         specified member of an enumeration datatype
  H5Tenum_valueof      - return the value corresponding to a 
                         specified member of an enumeration datatype 
  H5Tget_member_value  - return the value of an enumeration datatype member
  H5Tset_tag           - tags an opaque datatype
  H5Tget_tag           - gets the tag associated with an opaque datatype

  e. Identifier Interface:
     ---------------------
  H5Iget_type          - retrieve the type of an object

  f. Reference Interface:
     --------------------
  H5Rcreate            - creates a reference
  H5Rdereference       - open the HDF5 object referenced
  H5Rget_region        - retrieve a dataspace with the specified region selected
  H5Rget_object_type   - retrieve the type of object that an 
                         object reference points to

  g. Ragged Arrays (alpha) (names of those API functions were changed):
     ------------------------------------------------------------------
   H5RAcreate		- create a new ragged array (old name was H5Rcreate)
   H5RAopen		- open an existing array    (old name was H5Ropen)
   H5RAclose		- close a ragged array      (old name was H5Rclose)
   H5RAwrite		- write to an array         (old name was H5Rwrite)
   H5RAread		- read from an array        (old name was H5Rread)


9. Tools
   -----

* Enhancements to the h5ls tool including the ability to list objects
  from more than one file, to display raw hexadecimal data, to
  show file addresses for raw data, to format output more reasonably,
  to show object attributes, and to perform a recursive listing.

* Enhancements to h5dump: support new data types added since previous
  versions.

* h5toh4: An hdf5 to hdf4 converter.



III. Changes Since The Version 1.0.0 Release

* [Improvement]: configure sets up the Makefile in the parallel test
  suite (testpar/) correctly.

* [Bug-Fix]: Configure failed for all IRIX versions other than 6.3.
  It now configures correctly for all IRIX 6.x versions.

* Released Parallel HDF5 

     Supported Features:
     ------------------

     HDF5 files are accessed according to the communicator and INFO
     object defined in the property list set by H5Pset_mpi.

     Independent read and write accesses to fixed and extendable dimension
     datasets.

     Collective read and write accesses to fixed dimension datasets.

     Supported Platforms:
     -------------------

     Intel Red
     IBM SP2
     SGI Origin 2000

     Changes In This Release: 
     -----------------------

   o Support for access to extendable dimension datasets.
     Extendable dimension datasets must use chunked storage.
     A new function, H5Dextend, was created to extend the current
     dimensions of a dataset.  The current release requires that the
     MPI application make a collective call to extend the dimensions
     of an extendable dataset before writing to the newly extended
     area.  (The serial version does not require the call to H5Dextend;
     the dimensions of an extendable dataset are increased when data is
     written beyond the current dimensions but within the maximum
     dimensions.)  The required collective call to H5Dextend may be
     relaxed in a future release.  (A sketch of the extend call appears
     after this list.)

     This release only supports independent read and write accesses
     to extendable datasets.  Collective accesses to extendable
     datasets will be implemented in future releases.

   o Collective access to fixed dimension datasets.
     Collective access to a dataset can be specified in the transfer
     property list argument in H5Dread and H5Dwrite.  The current
     release supports collective access to fixed dimension datasets.
     Collective access to extendable datasets will be implemented in
     future releases.

   o HDF5 files are opened according to the communicator and INFO object.
     H5Fopen now records the communicator and INFO set up by H5Pset_mpi
     and passes them to the corresponding MPIO open file calls for
     processing.

   o This release has been tested on IBM SP2, Intel Red and SGI Origin 2000
     systems.  It uses the ROMIO version of MPIO interface for parallel
     I/O supports.
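
   A minimal serial sketch (not part of the original notes; the file and
   dataset names are hypothetical and error checking is omitted) of
   creating an extendable, chunked dataset and extending it with
   H5Dextend.  In a parallel (PHDF5) file, the H5Dextend call must be
   made collectively by every process with identical new sizes:

      #include "hdf5.h"

      int main(void)
      {
          hid_t   file, space, dcpl, dset;
          hsize_t dims[1]     = {100};
          hsize_t maxdims[1]  = {H5S_UNLIMITED};
          hsize_t chunk[1]    = {25};
          hsize_t new_dims[1] = {200};

          file  = H5Fcreate("extend.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
          space = H5Screate_simple(1, dims, maxdims);

          dcpl = H5Pcreate(H5P_DATASET_CREATE);
          H5Pset_chunk(dcpl, 1, chunk);       /* extendable => chunked storage */

          dset = H5Dcreate(file, "grow", H5T_NATIVE_INT, space, dcpl);

          /* Extend before writing past the current dimensions.  In PHDF5
           * this call must be collective across all processes. */
          H5Dextend(dset, new_dims);

          H5Dclose(dset);
          H5Pclose(dcpl);
          H5Sclose(space);
          H5Fclose(file);
          return 0;
      }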



IV. Changes Since The Beta 1.0.0 Release

* Added fill values for datasets.  For contiguous datasets, fill value
  performance may be quite poor since the fill value is written to the 
  entire dataset when the dataset is created.  This will be remedied
  in a future version.  Chunked datasets using fill values do not
  incur any additional overhead.  See H5Pset_fill_value() and the
  sketch after this list.

* Multiple hdf5 files can be "mounted" on one another to create a
  larger virtual file. See H5Fmount().

* Object names can be removed or changed but objects are never
  actually removed from the file yet. See H5Gunlink() and H5Gmove().

* Added a tuning mechanism for B-trees to ensure that sequential
  writes to chunked datasets use less overhead.  See H5Pset_btree_ratios().

* Various optimizations and bug fixes.
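
A minimal sketch (not part of the original notes; the file and dataset
names are hypothetical and error checking is omitted) of setting a fill
value on a dataset creation property list:

    #include "hdf5.h"

    int main(void)
    {
        hid_t   file, space, dcpl, dset;
        hsize_t dims[2] = {10, 10};
        int     fill    = -1;          /* value for unwritten elements */

        file  = H5Fcreate("fill.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
        space = H5Screate_simple(2, dims, NULL);

        dcpl = H5Pcreate(H5P_DATASET_CREATE);
        H5Pset_fill_value(dcpl, H5T_NATIVE_INT, &fill);

        dset = H5Dcreate(file, "filled", H5T_NATIVE_INT, space, dcpl);

        H5Dclose(dset);
        H5Pclose(dcpl);
        H5Sclose(space);
        H5Fclose(file);
        return 0;
    }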


V. Changes Since The Second Alpha 1.0.0 Release

* Strided hyperslab selections in dataspaces now working.

* The compression API has been replaced with a more general filter
  API.  See doc/html/Filters.html for details.

* Alpha-quality 2d ragged arrays are implemented as a layer built on
  top of other hdf5 objects.  The API and storage format will almost
  certainly change.

* More debugging support including API tracing.  See Debugging.html.

* C and Fortran style 8-bit fixed-length character string types are
  supported with space or null padding or null termination and
  translations between them.

* Added function H5Fflush() to write all cached data immediately to
  the file.

* Datasets maintain a modification time which can be retrieved with
  H5Gstat().

* The h5ls tool can display much more information, including all the
  values of a dataset.


VI. Changes Since The First Alpha 1.0.0 Release

* Two of the packages have been renamed.  The data space API has been
  renamed from `H5P' to `H5S' and the property list (template) API has 
  been renamed from `H5C' to `H5P'.

* The new attribute API `H5A' has been added.  An attribute is a small 
  dataset which can be attached to some other object (for instance, a
  4x4 transformation matrix attached to a 3-dimensional dataset, or an 
  English abstract attached to a group).

* The error handling API `H5E' has been completed.  By default, when an
  API function returns failure, an error stack is displayed on the
  standard error stream.  H5Eset_auto() controls the automatic
  printing, and the H5E_BEGIN_TRY/H5E_END_TRY macros can temporarily
  disable the automatic error printing (see the sketch at the end of
  this list).

* Support for large files and datasets (>2GB) has been added.  There
  is an html document that describes how it works.  Some of the types
  for function arguments have changed to support this: all arguments
  pertaining to sizes of memory objects are `size_t' and all arguments 
  pertaining to file sizes are `hsize_t'.

* More data type conversions have been added although none of them are
  fine tuned for performance.  There are new converters from integer
  to integer and float to float, but not between integers and floating
  points.  A bug has been fixed in the converter between compound
  types.

* The numbered types have been removed from the API: int8, uint8,
  int16, uint16, int32, uint32, int64, uint64, float32, and float64.
  Use standard C types instead.  Similarly, the numbered types were
  removed from the H5T_NATIVE_* architecture; use unnumbered types
  which correspond to the standard C types like H5T_NATIVE_INT.

* More debugging support was added.  If tracing is enabled at
  configuration time (the default) and the HDF5_TRACE environment
  variable is set to a file descriptor then all API calls will emit
  the function name, argument names and values, and return value on
  that file number.  There is an html document that describes this.
  If appropriate debugging options are enabled at configuration time,
  some packages will display performance information on stderr.

* Data types can be stored in the file as independent objects and
  multiple datasets can share a data type.

* The raw data I/O stream has been implemented and the application can 
  control meta and raw data caches, so I/O performance should be
  improved from the first alpha release.

* Group and attribute query functions have been implemented so it is
  now possible to find out the contents of a file with no prior
  knowledge.

* External raw data storage allows datasets to be written by other
  applications or I/O libraries and described and accessed through
  HDF5.

* Hard and soft (symbolic) links are implemented which allow groups to 
  share objects. Dangling and recursive symbolic links are supported.

* User-defined data compression is implemented although we may
  generalize the interface to allow arbitrary user-defined filters
  which can be used for compression, checksums, encryption,
  performance monitoring, etc.  The publicly-available `deflate'
  method is predefined if the GNU libz.a can be found at configuration 
  time.

* The configuration scripts have been modified to make it easier to
  build debugging vs. production versions of the library.

* The library automatically checks that the application was compiled
  with the correct version of header files.
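
A minimal sketch (not part of the original notes; the file name is
hypothetical) of suppressing the automatic error output around a call
that is expected to fail:

    #include "hdf5.h"

    int main(void)
    {
        hid_t file;

        /* Turn off automatic error printing entirely ... */
        H5Eset_auto(NULL, NULL);

        /* ... or suppress it only around specific calls. */
        H5E_BEGIN_TRY {
            file = H5Fopen("maybe-missing.h5", H5F_ACC_RDONLY, H5P_DEFAULT);
        } H5E_END_TRY;

        if (file < 0) {
            /* handle the failure without an error stack on stderr */
        } else {
            H5Fclose(file);
        }
        return 0;
    }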


		    Parallel HDF5 Changes

* Parallel support for fixed dimension datasets with contiguous or
  chunked storage.  Unlimited dimension datasets, which must use chunked
  storage, are also supported.  No parallel support for compressed
  datasets.

* Collective data transfer for H5Dread/H5Dwrite.  Collective access
  support for datasets with contiguous storage only, thus only fixed
  dimension datasets for now.

* H5Pset_mpi and H5Pget_mpi no longer have the access_mode
  argument.  It is taken over by the data-transfer property list
  of H5Dread/H5Dwrite.

* New functions H5Pset_xfer and H5Pget_xfer handle the specification
  of independent or collective data transfer mode in the dataset
  transfer property list.  The property list can be used to specify the
  data transfer mode in the H5Dwrite and H5Dread function calls.

* Added parallel support for datasets with chunked storage layout.
  When a dataset is extended in a PHDF5 file, all processes that open
  the file must collectively call H5Dextend with identical new dimension
  sizes.


			LIST OF API FUNCTIONS

The following functions are implemented. Errors are returned if an
attempt is made to use some feature which is not implemented and
printing the error stack will show `not implemented yet'.

Library
   H5check		- check that lib version matches header version
   H5open		- initialize library (happens automatically)
   H5close		- shut down the library (happens automatically)
   H5dont_atexit	- don't call H5close on exit
   H5get_libversion	- retrieve library version info
   H5check_version	- check for specific library version

Property Lists
   H5Pclose		- release template resources
   H5Pcopy		- copy a template
   H5Pcreate		- create a new template
   H5Pget_chunk		- get chunked storage properties
   H5Pset_chunk		- set chunked storage properties
   H5Pget_class		- get template class
   H5Pget_istore_k	- get chunked storage properties
   H5Pset_istore_k	- set chunked storage properties
   H5Pget_layout	- get raw data layout class
   H5Pset_layout	- set raw data layout class
   H5Pget_sizes		- get address and size sizes
   H5Pset_sizes		- set address and size sizes
   H5Pget_sym_k		- get symbol table storage properties
   H5Pset_sym_k		- set symbol table storage properties
   H5Pget_userblock	- get user-block size
   H5Pset_userblock	- set user-block size
   H5Pget_version	- get file version numbers
   H5Pget_alignment	- get data alignment properties
   H5Pset_alignment	- set data alignment properties
   H5Pget_external_count - get count of external data files
   H5Pget_external	- get information about an external data file
   H5Pset_external	- add a new external data file to the list
   H5Pget_driver	- get low-level file driver class
   H5Pget_stdio		- get properties for stdio low-level driver
   H5Pset_stdio		- set properties for stdio low-level driver
   H5Pget_sec2		- get properties for sec2 low-level driver
   H5Pset_sec2		- set properties for sec2 low-level driver
   H5Pget_core		- get properties for core low-level driver
   H5Pset_core		- set properties for core low-level driver
   H5Pget_split		- get properties for split low-level driver
   H5Pset_split		- set properties for split low-level driver
   H5Pget_family	- get properties for family low-level driver
   H5Pset_family	- set properties for family low-level driver
   H5Pget_cache		- get meta- and raw-data caching properties
   H5Pset_cache		- set meta- and raw-data caching properties
   H5Pget_buffer	- get raw-data I/O pipe buffer properties
   H5Pset_buffer	- set raw-data I/O pipe buffer properties
   H5Pget_preserve	- get type conversion preservation properties
   H5Pset_preserve	- set type conversion preservation properties
   H5Pget_nfilters	- get number of raw data filters
   H5Pget_filter	- get raw data filter properties
   H5Pset_filter	- set raw data filter properties
   H5Pset_deflate	- set deflate compression filter properties
   H5Pget_mpi		- get MPI-IO properties
   H5Pset_mpi		- set MPI-IO properties
   H5Pget_xfer		- get data transfer properties
 + H5Pset_xfer		- set data transfer properties
 + H5Pset_preserve      - set dataset transfer property list status 
 + H5Pget_preserve      - get dataset transfer property list status
 + H5Pset_hyper_cache   - indicates whether to cache hyperslab blocks during I/O
 + H5Pget_hyper_cache   - returns information regarding the caching of 
                          hyperslab blocks during I/O
 + H5Pget_btree_ratios  - gets B-tree split ratios for a dataset 
                           transfer property list
 + H5Pset_btree_ratios  - sets B-tree split ratios for a dataset
                           transfer property list
 + H5Pset_vlen_mem_manager - sets the memory manager for variable-length 
                              datatype allocation
 + H5Pget_vlen_mem_manager - gets the memory manager for variable-length
                              datatype allocation

Datasets
   H5Dclose		- release dataset resources
   H5Dcreate		- create a new dataset
   H5Dget_space		- get data space
   H5Dget_type		- get data type
   H5Dget_create_plist	- get dataset creation properties
   H5Dopen		- open an existing dataset
   H5Dread		- read raw data
   H5Dwrite		- write raw data
   H5Dextend		- extend a dataset
 + H5Diterate           - iterate over all selected elements in a dataspace
 + H5Dget_storage_size  - return the amount of storage required for a dataset
 + H5Dvlen_reclaim      - reclaim VL datatype memory buffers

Attributes
   H5Acreate		- create a new attribute
   H5Aopen_name		- open an attribute by name
   H5Aopen_idx		- open an attribute by number
   H5Awrite		- write values into an attribute
   H5Aread		- read values from an attribute
   H5Aget_space		- get attribute data space
   H5Aget_type		- get attribute data type
   H5Aget_name		- get attribute name
   H5Anum_attrs		- return the number of attributes for an object
   H5Aiterate		- iterate over an object's attributes
   H5Adelete		- delete an attribute
   H5Aclose		- close an attribute

Errors
   H5Eclear		- clear the error stack
   H5Eprint		- print an error stack
   H5Eget_auto		- get automatic error reporting settings
   H5Eset_auto		- set automatic error reporting
   H5Ewalk		- iterate over the error stack
   H5Ewalk_cb		- the default error stack iterator function
   H5Eget_major		- get the message for the major error number
   H5Eget_minor		- get the message for the minor error number

Files
   H5Fclose		- close a file and release resources
   H5Fcreate		- create a new file
   H5Fget_create_plist	- get file creation property list
   H5Fget_access_plist	- get file access property list
   H5Fis_hdf5		- determine if a file is an hdf5 file
   H5Fopen		- open an existing file
   H5Freopen            - reopen an HDF5 file
   H5Fmount             - mount a file
   H5Funmount           - unmount a file
   H5Fflush             - flush all buffers associated with a file to disk

Groups
   H5Gclose		- close a group and release resources
   H5Gcreate    	- create a new group
   H5Gopen		- open an existing group
   H5Giterate   	- iterate over the contents of a group
   H5Gmove		- change the name of some object
   H5Glink		- create a hard or soft link to an object
   H5Gunlink    	- break the link between a name and an object
   H5Gget_objinfo	- get information about a group entry
   H5Gget_linkval	- get the value of a soft link
   H5Gget_comment	- get the comment string for an object
   H5Gset_comment	- set the comment string for an object

Dataspaces
   H5Screate	        - create a new data space
   H5Scopy		- copy a data space
   H5Sclose		- release data space
   H5Screate_simple	- create a new simple data space
   H5Sset_space		- set simple data space extents
   H5Sis_simple		- determine if data space is simple
   H5Sset_extent_simple	- set simple data space dimensionality and size
   H5Sget_simple_extent_npoints	- get number of points in simple extent
   H5Sget_simple_extent_ndims - get simple data space dimensionality
   H5Sget_simple_extent_dims - get simple data space size
   H5Sget_simple_extent_type - get type of simple extent
   H5Sset_extent_none	- reset extent to be empty
   H5Sextent_copy	- copy the extent from one data space to another
   H5Sget_select_npoints - get number of points selected for I/O
   H5Sselect_hyperslab	- set hyperslab dataspace selection
   H5Sselect_elements   - set element sequence dataspace selection
   H5Sselect_all	- select entire extent for I/O
   H5Sselect_none	- deselect all elements of extent
   H5Soffset_simple	- set selection offset
   H5Sselect_valid	- determine if selection is valid for extent
 + H5Sget_select_hyper_nblocks   - get number of hyperslab blocks
 + H5Sget_select_hyper_blocklist - get the list of hyperslab blocks 
                                   currently selected
 + H5Sget_select_elem_npoints    - get the number of element points 
                                   in the current selection
 + H5Sget_select_elem_pointlist  - get the list of element points 
                                   currently selected
 + H5Sget_select_bounds          - gets the bounding box containing 
                                   the current selection

Datatypes
   H5Tclose		- release data type resources
   H5Topen		- open a named data type
   H5Tcommit		- name a data type
   H5Tcommitted		- determine if a type is named
   H5Tcopy		- copy a data type
   H5Tcreate		- create a new data type
   H5Tequal		- compare two data types
   H5Tlock		- lock type to prevent changes
   H5Tfind		- find a data type conversion function
   H5Tconvert		- convert data from one type to another
   H5Tregister    	- register a conversion function
   H5Tunregister	- remove a conversion function
   H5Tget_overflow	- get function that handles overflow conv. cases
   H5Tset_overflow	- set function to handle overflow conversion cases
   H5Tget_class		- get data type class
   H5Tget_cset		- get character set
   H5Tget_ebias		- get exponent bias
   H5Tget_fields	- get floating point fields
   H5Tget_inpad		- get inter-field padding
   H5Tget_member_dims	- get struct member dimensions
   H5Tget_member_name	- get struct member name
   H5Tget_member_offset	- get struct member byte offset
   H5Tget_member_type	- get struct member type
   H5Tget_nmembers	- get number of struct members
   H5Tget_norm		- get floating point normalization
   H5Tget_offset	- get bit offset within type
   H5Tget_order		- get byte order
   H5Tget_pad		- get padding type
   H5Tget_precision	- get precision in bits
   H5Tget_sign		- get integer sign type
   H5Tget_size		- get size in bytes
   H5Tget_strpad	- get string padding
   H5Tinsert		- insert scalar struct member
   H5Tinsert_array	- insert array struct member
   H5Tpack		- pack struct members
   H5Tset_cset		- set character set
   H5Tset_ebias		- set exponent bias
   H5Tset_fields	- set floating point fields
   H5Tset_inpad		- set inter-field padding
   H5Tset_norm		- set floating point normalization
   H5Tset_offset	- set bit offset within type
   H5Tset_order		- set byte order
   H5Tset_pad		- set padding type
   H5Tset_precision	- set precision in bits
   H5Tset_sign		- set integer sign type
   H5Tset_size		- set size in bytes
   H5Tset_strpad	- set string padding
 + H5Tget_super         - return the base datatype from which a 
                          datatype is derived
 + H5Tvlen_create       - creates a new variable-length datatype
 + H5Tenum_create       - creates a new enumeration datatype
 + H5Tenum_insert       - inserts a new enumeration datatype member
 + H5Tenum_nameof       - returns the symbol name corresponding to a 
                          specified member of an enumeration datatype
 + H5Tenum_valueof      - return the value corresponding to a 
                           specified member of an enumeration datatype 
 + H5Tget_member_value  - return the value of an enumeration datatype member
 + H5Tset_tag           - tags an opaque datatype
 + H5Tget_tag           - gets the tag associated with an opaque datatype

 - H5Tregister_hard	- register specific type conversion function
 - H5Tregister_soft	- register general type conversion function

Filters
   H5Tregister		- register a conversion function 

Compression
   H5Zregister		- register new compression and uncompression 
                          functions for a method specified by a method number

Identifiers
 + H5Iget_type          - retrieve the type of an object

References
 + H5Rcreate            - creates a reference
 + H5Rdereference       - open the HDF5 object referenced
 + H5Rget_region        - retrieve a dataspace with the specified region selected
 + H5Rget_object_type   - retrieve the type of object that an 
                          object reference points to

Ragged Arrays (alpha)
   H5RAcreate		- create a new ragged array
   H5RAopen		- open an existing array
   H5RAclose		- close a ragged array
   H5RAwrite		- write to an array
   H5RAread		- read from an array



HDF Help Desk
Last modified: 2 June 2000
Describes HDF5 Release 1.4 Beta, December 2000