HDF5 documents and links 
Introduction to HDF5 
HDF5 User Guide 
And in this document, the HDF5 Reference Manual  
H5   H5A   H5D   H5E   H5F   H5G   H5I   H5P  
H5R   H5S   H5T   H5Z   Tools   Datatypes  

HDF5 Tools

HDF5 Tool Interfaces

HDF5-related tools are available to assist the user in a variety of activities, including examining or managing HDF5 files, converting raw data between HDF5 and other special-purpose formats, moving data and files between the HDF4 and HDF5 formats, measuring HDF5 library performance, and managing HDF5 library and application compilation, installation and configuration. Unless otherwise specified below, these tools are distributed and installed with HDF5.


Tool Name: h5dump
Syntax:
h5dump [OPTIONS] file
Purpose:
Displays HDF5 file contents.
Description:
h5dump enables the user to examine the contents of an HDF5 file and dump those contents, in human-readable form, to an ASCII file.

h5dump dumps HDF5 file content to standard output. It can display the contents of the entire HDF5 file or selected objects, which can be groups, datasets, a subset of a dataset, links, attributes, or datatypes.

The --header option displays object header information only.

Names are the absolute names of the objects. h5dump displays objects in the same order as they appear on the command line. If a name does not start with a slash, h5dump begins the search for the specified object at the root group.

If an object is hard-linked under multiple names, h5dump displays the object's content at the first occurrence; later occurrences show only the link information.

h5dump assigns a name for any unnamed datatype in the form of #oid1:oid2, where oid1 and oid2 are the object identifiers assigned by the library. The unnamed types are displayed within the root group.

Datatypes are displayed with standard type names. For example, if a dataset is created with H5T_NATIVE_INT type and the standard type name for integer on that machine is H5T_STD_I32BE, h5dump displays H5T_STD_I32BE as the type of the dataset.

h5dump can also dump a subset of a dataset. This feature operates in much the same way as hyperslabs in HDF5; the parameters specified on the command line are passed to the function H5Sselect_hyperslab and the resulting selection is displayed.
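The start/stride/count/block arithmetic can be sketched in a few lines (a pure-Python illustration of the selection rule, not part of h5dump; per dimension, count blocks of block elements each are taken, stride elements apart, beginning at start):

```python
# Illustrative sketch of the hyperslab parameters' meaning in one dimension.
def hyperslab_indices(start, stride, count, block):
    """Return the sorted indices selected along one dimension."""
    return sorted(start + i * stride + j
                  for i in range(count)    # count blocks...
                  for j in range(block))   # ...each block elements wide

# With --start=1 --stride=3 --count=2 --block=2 in one dimension,
# elements 1,2 and 4,5 are selected.
print(hyperslab_indices(1, 3, 2, 2))
```

The same rule is applied independently in each dimension of the dataset.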

The h5dump output is described in detail in the DDL for HDF5, the Data Description Language document.

Note: It is not permissible to specify multiple attributes, datasets, datatypes, groups, or soft links with one flag. For example, one may not issue the command
         WRONG:   h5dump -a /attr1 /attr2 foo.h5
to display both /attr1 and /attr2. One must issue the following command:
         CORRECT:   h5dump -a /attr1 -a /attr2 foo.h5

It's possible to select the file driver with which to open the HDF5 file by using the --filedriver (-f) command-line option. Acceptable values for the --filedriver option are: "sec2", "family", "split", "multi", and "stream". If the file driver flag isn't specified, the file will be opened with each driver in turn, in the order listed above, until one driver succeeds in opening the file.
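The fallback behavior amounts to a simple loop over the driver list. The sketch below is illustrative only; open_with and stub are hypothetical stand-ins, not HDF5 APIs:

```python
# Documented fallback order when no --filedriver is given.
DRIVER_ORDER = ["sec2", "family", "split", "multi", "stream"]

def open_with_fallback(filename, open_with, requested=None):
    """Try each driver in order until one opens the file."""
    drivers = [requested] if requested else DRIVER_ORDER
    for drv in drivers:
        try:
            return drv, open_with(filename, drv)
        except IOError:
            continue
    raise IOError("no file driver could open %r" % filename)

# Stub opener for illustration: only the 'multi' driver "succeeds".
def stub(fname, drv):
    if drv != "multi":
        raise IOError(drv)
    return object()

print(open_with_fallback("example.h5", stub)[0])
```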

XML Output:
With the --xml option, h5dump generates XML output. This output contains a complete description of the file, marked up in XML. The XML conforms to the HDF5 Document Type Definition (DTD) available at http://hdf.ncsa.uiuc.edu/DTDs/HDF5-File.dtd.

The XML output is suitable for use with other tools, including the HDF5 Java Tools.

Options and Parameters:

Examples:
  1. Dumping the group /GroupFoo/GroupBar in the file quux.h5:
         h5dump -g /GroupFoo/GroupBar quux.h5

  2. Dumping the dataset Fnord in the group /GroupFoo/GroupBar in the file quux.h5:
         h5dump -d /GroupFoo/GroupBar/Fnord quux.h5

  3. Dumping the attribute metadata of the dataset Fnord which is in group /GroupFoo/GroupBar in the file quux.h5:
         h5dump -a /GroupFoo/GroupBar/Fnord/metadata quux.h5

  4. Dumping the attribute metadata which is an attribute of the root group in the file quux.h5:
         h5dump -a /metadata quux.h5

  5. Producing an XML listing of the file bobo.h5:
         h5dump --xml bobo.h5 > bobo.h5.xml

  6. Dumping a subset of the dataset /GroupFoo/databar in the file quux.h5:
         h5dump -d /GroupFoo/databar --start="1,1" --stride="2,3"
             --count="3,19" --block="1,1" quux.h5


  7. The same example using the short form to specify the subsetting parameters:
         h5dump -d "/GroupFoo/databar[1,1;2,3;3,19;1,1]" quux.h5
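The bracketed short form can be unpacked mechanically. The parser below is a hypothetical illustration of the start;stride;count;block field layout, not part of h5dump:

```python
# Hypothetical parser (illustration only) of the short subsetting form
#     dataset[start;stride;count;block]
# where each field is a comma-separated list, one value per dimension.
def parse_subset(spec):
    path, _, rest = spec.partition("[")
    fields = rest.rstrip("]").split(";")
    start, stride, count, block = (tuple(int(v) for v in f.split(","))
                                   for f in fields)
    return path, start, stride, count, block

print(parse_subset("/GroupFoo/databar[1,1;2,3;3,19;1,1]"))
```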

Current Status:
The current version of h5dump displays the following information:
See Also:

Tool Name: h5ls
Syntax:
h5ls [OPTIONS] file [OBJECTS...]
Purpose:
Prints information about a file or dataset.
Description:
h5ls prints selected information about file objects in the specified format.
Options and Parameters:

Tool Name: h5diff    
Syntax:
h5diff file1 file2 [OPTIONS] [object1 [object2 ] ]
Purpose:
Compares two HDF5 files and reports the differences.
Description:
h5diff is a command line tool that compares two HDF5 files, file1 and file2, and reports the differences between them. 

Optionally, h5diff will compare two objects within these files. If only one object, object1, is specified, h5diff will compare object1 in file1 with object1 in file2. If two objects, object1 and object2, are specified, h5diff will compare object1 in file1 with object2 in file2. These objects must be HDF5 datasets.

object1 and object2 must be expressed as absolute paths from the respective file's root group.

h5diff has the following four modes of output:
Normal mode: print the number of differences found and where they occurred
Report mode (-r): print the above plus the differences
Verbose mode (-v): print the above plus a list of objects and warnings
Quiet mode (-q): do not print output (h5diff always returns an exit code of 1 when differences are found).

Additional information, with several sample cases, can be found in the document H5diff Examples.

Options and Parameters:
-d delta
Print only differences that are greater than the limit delta. delta must be a positive number. The comparison criterion is whether the absolute value of the difference of two corresponding values is greater than delta (e.g., |a-b| > delta, where a is a value in file1 and b is a value in file2).
Examples:
The following h5diff call compares the object /a/b in file1 with the object /a/c in file2:
    h5diff file1 file2 /a/b /a/c
This h5diff call compares the object /a/b in file1 with the same object in file2:
    h5diff file1 file2 /a/b
And this h5diff call compares all objects in both files:
    h5diff file1 file2

Tool Name: h5repack    
Syntax:
h5repack -i file1 -o file2 [-h] [-v] [-f 'filter'] [-l 'layout'] [-m number] [-e file]
Purpose:
Copies an HDF5 file to a new file with or without compression/chunking.
Description:
h5repack is a command line tool that applies HDF5 filters to an input file file1, saving the output in a new file, file2.

'filter' is a string with the format 
<list of objects> : <name of filter> = <filter parameters>.

 <list of objects> is a comma-separated list of object names; the filter is applied only to those objects. If no object names are specified, the filter is applied to all objects.
 <name of filter> can be: 
GZIP, to apply the HDF5 GZIP filter (GZIP compression)
SZIP, to apply the HDF5 SZIP filter (SZIP compression)
SHUF, to apply the HDF5 shuffle filter
FLET, to apply the HDF5 Fletcher32 checksum filter
NONE, to remove the filter 
<filter parameters> is optional compression information:
SHUF (no parameter) 
FLET (no parameter) 
GZIP=<deflation level> from 1-9 
SZIP=<pixels per block,coding> (pixels per block is an even number between 2 and 32, and the coding method is 'EC' or 'NN')

 
'layout' is a string with the format
 <list of objects> : <layout type> 

<list of objects> is a comma-separated list of object names; layout information is applied only to those objects. If no object names are specified, the layout is applied to all objects.
<layout type> can be: 
CHUNK, to apply chunked layout 
COMPA, to apply compact layout 
CONTI, to apply contiguous layout 
<layout parameters> is present only for the chunked case; it is the chunk size of each dimension: <dim_1 x dim_2 x ... x dim_n>
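Both the 'filter' and 'layout' strings share the shape <list of objects> : <name> = <parameters>, with the object list optional. A hypothetical Python sketch of that grammar (not part of h5repack):

```python
# Illustration only: split an h5repack 'filter' or 'layout' string into
# (object list, name, parameters). An empty object list means "all objects".
def parse_repack_spec(spec):
    objects, sep, rest = spec.partition(":")
    if not sep:                  # no object list given: apply to all objects
        objects, rest = "", objects
    name, _, params = rest.partition("=")
    return ([o for o in objects.split(",") if o], name, params)

print(parse_repack_spec("dset1:SZIP=8,NN"))          # filter, one object
print(parse_repack_spec("GZIP=1"))                   # filter, all objects
print(parse_repack_spec("dset1,dset2:CHUNK=20x10"))  # chunked layout
```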
 
Options and Parameters:
file1
file2
The input and output HDF5 files
-h
help message.
-f filter
Filter type
-l layout
Layout type
-v
Verbose mode. Print output (list of objects in the file, filters and layout applied).
-e file
File containing the -f and -l options (filter and layout flags only)
-m number
Do not apply the filter to objects whose size in bytes is smaller than number. If no size is specified, a minimum of 1024 bytes is assumed.
Examples:
1) h5repack -i file1 -o file2 -f GZIP=1 -v
        Applies GZIP compression to all objects in file1 and saves the output in file2

2) h5repack -i file1 -o file2 -f dset1:SZIP=8,NN -v
        Applies SZIP compression only to object 'dset1'

3) h5repack -i file1 -o file2 -l dset1,dset2:CHUNK=20x10 -v
        Applies chunked layout to objects 'dset1' and 'dset2'

 


Tool Name: h5repart
Syntax:
h5repart [-v] [-V] [-[b|m]N[g|m|k]] source_file dest_file
Purpose:
Repartitions a file or family of files.
Description:
h5repart splits a single file into a family of files, joins a family of files into a single file, or copies one family of files to another while changing the size of the family members. h5repart can also be used to copy a single file to a single file with holes.

Sizes associated with the -b and -m options may be suffixed with g for gigabytes, m for megabytes, or k for kilobytes.

File family member names include a printf-style integer format specifier such as %d.
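Since the family name is a printf-style pattern, the member file names expand as printf would format successive integers. A small Python illustration (the pattern itself is hypothetical):

```python
# Python's % operator mirrors printf formatting, so family member names
# expand like this (the pattern "data%d.h5" is just an example):
pattern = "data%d.h5"
members = [pattern % i for i in range(3)]
print(members)                      # ['data0.h5', 'data1.h5', 'data2.h5']
padded = "data%05d.h5" % 1          # zero-padded variant: 'data00001.h5'
```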

Options and Parameters:

Tool Name: h5import
Syntax:
h5import infile in_options [infile in_options ...] -o outfile
h5import infile in_options [infile in_options ...] -outfile outfile
h5import -h
h5import -help
Purpose:
Imports data into an existing or new HDF5 file.
Description:
h5import converts data from one or more ASCII or binary files, infile, into the same number of HDF5 datasets in the existing or new HDF5 file, outfile. Data conversion is performed in accordance with the user-specified type and storage properties specified in in_options.

The primary objective of h5import is to import floating point or integer data. The utility's design allows for future versions that accept ASCII text files and store the contents as a compact array of one-dimensional strings, but that capability is not implemented in HDF5 Release 1.6.

Input data and options:
Input data can be provided in one of the following forms:

Each input file, infile, contains a single n-dimensional array of values of one of the above types expressed in the order of fastest-changing dimensions first.

Floating point data in an ASCII input file must be expressed in fixed notation (e.g., 323.56). h5import is designed to accept scientific notation (e.g., 3.23E+02) in an ASCII file, but that capability is not implemented in HDF5 Release 1.6.

Each input file can be associated with options specifying the datatype and storage properties. These options can be specified either as command line arguments or in a configuration file. Note that exactly one of these approaches must be used with a single input file.

Command line arguments, best used with simple input files, can be used to specify the class, size, dimensions of the input data and a path identifying the output dataset.

The recommended means of specifying input data options is in a configuration file; this is also the only means of specifying advanced storage features. See further discussion in "The configuration file" below.

The only required option for input data is dimension sizes; defaults are available for all others.

h5import will accept up to 30 input files in a single call. Other considerations, such as the maximum length of a command line, may impose a more stringent limitation.

Output data and options:
The name of the output file is specified following the -o or -outfile option in outfile. The data from each input file is stored as a separate dataset in this output file. outfile may be an existing file. If it does not yet exist, h5import will create it.

Output dataset information and storage properties can be specified only by means of a configuration file.
  Dataset path
      If the groups in the path leading to the dataset do not exist, h5import will create them.
      If no group is specified, the dataset will be created as a member of the root group.
      If no dataset name is specified, the default name is dataset1 for the first input dataset, dataset2 for the second input dataset, dataset3 for the third input dataset, etc.
      h5import does not overwrite a pre-existing dataset of the specified or default name. When an existing dataset of a conflicting name is encountered, h5import quits with an error; the current input file and any subsequent input files are not processed.
  Output type
      Datatype parameters for output data:
      Output data class: Signed or unsigned integer, or floating point
      Output data size: 8-, 16-, 32-, or 64-bit integer; 32- or 64-bit floating point
      Output architecture: IEEE, STD, or NATIVE (default). Other architectures are included in the h5import design but are not implemented in this release.
      Output byte order: Little- or big-endian. Relevant only if the output architecture is IEEE, UNIX, or STD; fixed for other architectures.
  Dataset layout and storage properties
      Denote how raw data is to be organized on disk. If none of the following are specified, the default configuration is contiguous layout with no compression.
      Layout: Contiguous (default) or chunked.
      External storage: Allows raw data to be stored in a non-HDF5 file or in an external HDF5 file. Requires contiguous layout.
      Compressed: Sets the type of compression and the level to which the dataset must be compressed. Requires chunked layout.
      Extendable: Allows the dimensions of the dataset to increase over time and/or to be unlimited. Requires chunked layout.
      Compressed and extendable: Requires chunked layout.

Command-line arguments:
The h5import syntax for the command-line arguments, in_options, is as follows:
     h5import infile -d dim_list [-p pathname] [-t input_class] [-s input_size] [infile ...] -o outfile
or
h5import infile -dims dim_list [-path pathname] [-type input_class] [-size input_size] [infile ...] -outfile outfile
or
h5import infile -c config_file [infile ...] -outfile outfile
Note the following: If the -c config_file option is used with an input file, no other argument can be used with that input file. If the -c config_file option is not used with an input data file, the -d dim_list argument (or -dims dim_list) must be used and any combination of the remaining options may be used. Any arguments used must appear in exactly the order used in the syntax declarations immediately above.

The configuration file:
A configuration file is specified with the -c config_file option:
     h5import infile -c config_file [infile -c config_file2 ...] -outfile outfile

The configuration file is an ASCII file and must be organized as "Configuration_Keyword Value" pairs, with one pair on each line. For example, the line indicating that the input data class (configuration keyword INPUT-CLASS) is floating point in a text file (value TEXTFP) would appear as follows:
    INPUT-CLASS TEXTFP
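Such a file can be read with a minimal keyword/value parser. The sketch below is illustrative only (not h5import's actual parser); note that a value such as DIMENSION-SIZES may itself contain spaces:

```python
# Illustration only: parse "Configuration_Keyword Value" pairs, one per
# line, into a dictionary. Everything after the first space is the value.
def parse_config(text):
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        keyword, _, value = line.partition(" ")
        config[keyword] = value.strip()
    return config

sample = """\
INPUT-CLASS TEXTFP
RANK 3
DIMENSION-SIZES 5 2 4
"""
print(parse_config(sample)["DIMENSION-SIZES"])   # '5 2 4'
```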

A configuration file may have the following keywords each followed by one of the following defined values. One entry for each of the first two keywords, RANK and DIMENSION-SIZES, is required; all other keywords are optional.


Keyword  
    Value

Description

RANK  

The number of dimensions in the dataset. (Required)
    rank An integer specifying the number of dimensions in the dataset.
Example:   4   for a 4-dimensional dataset.

DIMENSION-SIZES

Sizes of the dataset dimensions. (Required)
    dim_sizes A string of space-separated integers specifying the sizes of the dimensions in the dataset. The number of sizes in this entry must match the value in the RANK entry. The fastest-changing dimension must be listed first.
Example:   4 3 4 38   for a 38x4x3x4 dataset.

PATH

Path of the output dataset.
    path The full HDF5 pathname identifying the output dataset relative to the root group within the output file.
I.e., path is a string consisting of optional group names, each followed by a slash, and ending with a dataset name. If the groups in the path do not exist, they will be created.
If PATH is not specified, the output dataset is stored as a member of the root group and the default dataset name is dataset1 for the first input dataset, dataset2 for the second input dataset, dataset3 for the third input dataset, etc.
Note that h5import does not overwrite a pre-existing dataset of the specified or default name. When an existing dataset of a conflicting name is encountered, h5import quits with an error; the current input file and any subsequent input files are not processed.
Example: The configuration file entry
     PATH grp1/grp2/dataset1
indicates that the output dataset dataset1 will be written in the group grp2/ which is in the group grp1/, a member of the root group in the output file.

INPUT-CLASS  

A string denoting the type of input data.
    TEXTIN Input is signed integer data in an ASCII file.
    TEXTUIN Input is unsigned integer data in an ASCII file.
    TEXTFP Input is floating point data in fixed notation (e.g., 325.34) in an ASCII file.
    TEXTFPE Input is floating point data in scientific notation (e.g., 3.2534E+02) in an ASCII file.
(Not implemented in this release.)
    IN Input is signed integer data in a binary file.
    UIN Input is unsigned integer data in a binary file.
    FP Input is floating point data in a binary file. (Default)
    STR Input is character data in an ASCII file. With this value, the configuration keywords RANK, DIMENSION-SIZES, OUTPUT-CLASS, OUTPUT-SIZE, OUTPUT-ARCHITECTURE, and OUTPUT-BYTE-ORDER will be ignored.
(Not implemented in this release.)

INPUT-SIZE

An integer denoting the size of the input data, in bits.
    8
    16
    32
    64
For signed and unsigned integer data: TEXTIN, TEXTUIN, IN, or UIN. (Default: 32)
    32
    64
For floating point data: TEXTFP, TEXTFPE, or FP. (Default: 32)

OUTPUT-CLASS  

A string denoting the type of output data.
    IN Output is signed integer data.
(Default if INPUT-CLASS is IN or TEXTIN)
    UIN Output is unsigned integer data.
(Default if INPUT-CLASS is UIN or TEXTUIN)
    FP Output is floating point data.
(Default if INPUT-CLASS is not specified or is FP, TEXTFP, or TEXTFPE)
    STR Output is character data, to be written as a 1-dimensional array of strings.
(Default if INPUT-CLASS is STR)
(Not implemented in this release.)

OUTPUT-SIZE

An integer denoting the size of the output data, in bits.
    8
    16
    32
    64
For signed and unsigned integer data: IN or UIN. (Default: Same as INPUT-SIZE, else 32)
    32
    64
For floating point data: FP. (Default: Same as INPUT-SIZE, else 32)

OUTPUT-ARCHITECTURE

A string denoting the type of output architecture.
    NATIVE
    STD
    IEEE
    INTEL *
    CRAY *
    MIPS *
    ALPHA *
    UNIX *
See the "Predefined Atomic Types" section in the "HDF5 Datatypes" chapter of the HDF5 User's Guide for a discussion of these architectures.
Values marked with an asterisk (*) are not implemented in this release.
(Default: NATIVE)

OUTPUT-BYTE-ORDER

A string denoting the output byte order. This entry is ignored if the OUTPUT-ARCHITECTURE is not specified or if it is not specified as IEEE, UNIX, or STD.
    BE Big-endian. (Default)
    LE Little-endian.
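The practical difference between BE and LE is simply the order in which the bytes of a value are stored, as Python's struct module shows:

```python
import struct

# The 16-bit integer 1 written big-endian puts the high-order byte first;
# little-endian reverses the byte order.
big    = struct.pack(">H", 1)   # big-endian    (BE, the default)
little = struct.pack("<H", 1)   # little-endian (LE)
print(big, little)              # b'\x00\x01' b'\x01\x00'
```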

The following options are disabled by default, making the default storage properties no chunking, no compression, no external storage, and no extensible dimensions.

CHUNKED-DIMENSION-SIZES

Dimension sizes of the chunk for chunked output data.
    chunk_dims A string of space-separated integers specifying the dimension sizes of the chunk for chunked output data. The number of dimensions must correspond to the value of RANK.
The presence of this field indicates that the output dataset is to be stored in chunked layout; if this configuration field is absent, the dataset will be stored in contiguous layout.

COMPRESSION-TYPE

Type of compression to be used with chunked storage. Requires that CHUNKED-DIMENSION-SIZES be specified.
    GZIP Gzip compression.
Other compression algorithms are not implemented in this release of h5import.

COMPRESSION-PARAM

Compression level. Required if COMPRESSION-TYPE is specified.
    1 through 9 Gzip compression levels: 1 results in the fastest compression, while 9 results in the best compression ratio.
(Default: 6. Not all compression methods have a default level.)
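The speed/ratio trade-off across levels can be seen with zlib, which implements the same deflate algorithm gzip uses (exact sizes depend on the data, so only the ordering is shown):

```python
import zlib

# Deflate (the algorithm behind gzip) trades speed for compression ratio:
# level 1 is fastest, level 9 yields the best ratio.
data = b"0123456789" * 10000
fast = zlib.compress(data, 1)   # fastest compression
best = zlib.compress(data, 9)   # best compression ratio
print(len(data), len(fast), len(best))
```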

EXTERNAL-STORAGE

Name of an external file in which to create the output dataset. Cannot be used with CHUNKED-DIMENSION-SIZES, COMPRESSION-TYPE, or MAXIMUM-DIMENSIONS.
    external_file        A string specifying the name of an external file.

MAXIMUM-DIMENSIONS

Maximum sizes of all dimensions. Requires that CHUNKED-DIMENSION-SIZES be specified.
    max_dims A string of space-separated integers specifying the maximum size of each dimension of the output dataset. A value of -1 for any dimension implies unlimited size for that particular dimension.
The number of dimensions must correspond to the value of RANK.


Options and Parameters:
Examples:
Using command-line arguments:
h5import infile -dims 2,3,4 -type TEXTIN -size 32 -o out1
     This command creates a file out1 containing a single 2x3x4 32-bit integer dataset. Since no pathname is specified, the dataset is stored in out1 as /dataset1.
h5import infile -dims 20,50 -path bin1/dset1 -type FP -size 64 -o out2
     This command creates a file out2 containing a single 20x50 64-bit floating point dataset. The dataset is stored in out2 as /bin1/dset1.
Sample configuration files:
The following configuration file specifies the following:
– The input data is a 5x2x4 floating point array in an ASCII file.
– The output dataset will be saved in chunked layout, with chunk dimension sizes of 2x2x2.
– The output datatype will be 64-bit floating point, little-endian, IEEE.
– The output dataset will be stored in outfile at /work/h5/pkamat/First-set.
– The maximum dimension sizes of the output dataset will be 8x8x(unlimited).
            PATH work/h5/pkamat/First-set
            INPUT-CLASS TEXTFP
            RANK 3
            DIMENSION-SIZES 5 2 4
            OUTPUT-CLASS FP
            OUTPUT-SIZE 64
            OUTPUT-ARCHITECTURE IEEE
            OUTPUT-BYTE-ORDER LE
            CHUNKED-DIMENSION-SIZES 2 2 2 
            MAXIMUM-DIMENSIONS 8 8 -1
        
The next configuration file specifies the following:
– The input data is a 6x3x5x2x4 integer array in a binary file.
– The output dataset will be saved in chunked layout, with chunk dimension sizes of 2x2x2x2x2.
– The output datatype will be 32-bit integer in NATIVE format (as the output architecture is not specified).
– The output dataset will be compressed using Gzip compression with a compression level of 7.
– The output dataset will be stored in outfile at /Second-set.
            PATH Second-set
            INPUT-CLASS IN
            RANK 5
            DIMENSION-SIZES 6 3 5 2 4
            OUTPUT-CLASS IN
            OUTPUT-SIZE 32
            CHUNKED-DIMENSION-SIZES 2 2 2 2 2
            COMPRESSION-TYPE GZIP
            COMPRESSION-PARAM 7
        

Tool Name: gif2h5
Syntax:
gif2h5 gif_file h5_file
Purpose:
Converts a GIF file to an HDF5 file.
Description:
gif2h5 accepts as input the GIF file gif_file and produces the HDF5 file h5_file as output.
Options and Parameters:

Tool Name: h52gif
Syntax:
h52gif h5_file gif_file -i h5_image [-p h5_palette]
Purpose:
Converts an HDF5 file to a GIF file.
Description:
h52gif accepts as input the HDF5 file h5_file and the names of images and associated palettes within that file as input and produces the GIF file gif_file, containing those images, as output.

h52gif expects at least one h5_image. You may repeat
     -i h5_image [-p h5_palette]
up to 50 times, for a maximum of 50 images.

Options and Parameters:

Tool Name: h5toh4
Syntax:
h5toh4 -h
h5toh4 h5file h4file
h5toh4 h5file
h5toh4 -m h5file1 h5file2 h5file3 ...
Purpose:
Converts an HDF5 file into an HDF4 file.
Description:
h5toh4 is an HDF5 utility which reads an HDF5 file, h5file, and converts all supported objects and pathways to produce an HDF4 file, h4file. If h4file already exists, it will be replaced.

If only one file name is given, the name must end in .h5 and is assumed to represent the HDF5 input file. h5toh4 replaces the .h5 suffix with .hdf to form the name of the resulting HDF4 file and proceeds as above. If a file with the name of the intended HDF4 file already exists, h5toh4 exits with an error without changing the contents of any file.

The -m option allows multiple HDF5 file arguments. Each file name is treated the same as the single file name case above.

The -h option causes the following syntax summary to be displayed:

              h5toh4 file.h5 file.hdf
              h5toh4 file.h5
              h5toh4 -m file1.h5 file2.h5 ...

The following HDF5 objects occurring in an HDF5 file are converted to HDF4 objects in the HDF4 file:

Other objects are not converted and are not recorded in the resulting h4file.

Attributes associated with any of the supported HDF5 objects are carried over to the HDF4 objects. Attributes may be of integer, floating point, or fixed length string datatype and they may have up to 32 fixed dimensions.

All datatypes are converted to big-endian. Floating point datatypes are converted to IEEE format.

Note:
The h5toh4 and h4toh5 utilities are no longer part of the HDF5 product; they are distributed separately through the page Converting between HDF (4.x) and HDF5.

Options and Parameters:

Tool Name: h4toh5
Syntax:
h4toh5 -h
h4toh5 h4file h5file
h4toh5 h4file
Purpose:
Converts an HDF4 file to an HDF5 file.
Description:
h4toh5 is a file conversion utility that reads an HDF4 file, h4file (input.hdf for example), and writes an HDF5 file, h5file (output.h5 for example), containing the same data.

If no output file h5file is specified, h4toh5 uses the input filename to designate the output file, replacing the extension .hdf with .h5. For example, if the input file scheme3.hdf is specified with no output filename, h4toh5 will name the output file scheme3.h5.

The -h option causes a syntax summary similar to the following to be displayed:

              h4toh5 inputfile.hdf outputfile.h5
              h4toh5 inputfile.hdf                     

Each object in the HDF4 file is converted to an equivalent HDF5 object, according to the mapping described in Mapping HDF4 Objects to HDF5 Objects. (If this mapping changes between HDF5 Library releases, a more up-to-date version may be available at Mapping HDF4 Objects to HDF5 Objects on the HDF FTP server.)

In this initial version, h4toh5 converts the following HDF4 objects:

HDF4 Object                  Resulting HDF5 Object
SDS                          Dataset
GR, RI8, and RI24 image      Dataset
Vdata                        Dataset
Vgroup                       Group
Annotation                   Attribute
Palette                      Dataset
Note:
The h4toh5 and h5toh4 utilities are no longer part of the HDF5 product; they are distributed separately through the page Converting between HDF (4.x) and HDF5.

Options and Parameters:

Tool Name: h5perf
Syntax:
h5perf [-h | --help]
h5perf [options]
Purpose:
Tests Parallel HDF5 performance.
Description:
h5perf provides tools for testing the performance of the Parallel HDF5 library.

The following environment variables affect h5perf behavior:
     HDF5_NOCLEANUP     If set, h5perf does not remove data files. (Default: Remove)
     HDF5_MPI_INFO      Must be set to a string containing a list of semicolon-separated key=value pairs for the MPI INFO object.
     HDF5_PARAPREFIX    Sets the prefix for parallel output data files.

Options and Parameters:

Tool Name: h5redeploy
Syntax:
h5redeploy [help | -help]
h5redeploy [-echo] [-force] [-prefix=dir] [-tool=tool] [-show]
Purpose:
Updates HDF5 compiler tools after an HDF5 software installation in a new location.
Description:
h5redeploy updates the HDF5 compiler tools after the HDF5 software has been installed in a new location.
Options and Parameters:

Tool Name: h5cc
Syntax:
h5cc [OPTIONS] <compile line>
Purpose:
Helper script to compile HDF5 applications.
Description:
h5cc can be used in much the same way mpicc is used with MPICH to compile an HDF5 program. It takes care of specifying where the HDF5 header files and libraries are on the command line.

h5cc supersedes all other compiler scripts in that if you've used them to compile the HDF5 library, then h5cc also uses those scripts. For example, when compiling an MPICH program, you use the mpicc script. If you've built HDF5 using MPICH, then h5cc uses mpicc for compilation.

Some programs use HDF5 in only a few modules. It isn't necessary to use h5cc to compile those modules which don't use HDF5. In fact, since h5cc is only a convenience script, you are still able to compile HDF5 modules in the normal way. In that case, you will have to specify the HDF5 libraries and include paths yourself.

An example of how to use h5cc to compile the program hdf_prog, which consists of modules prog1.c and prog2.c and uses the HDF5 shared library, would be as follows:
        # h5cc -c prog1.c
        # h5cc -c prog2.c
        # h5cc -shlib -o hdf_prog prog1.o prog2.o
Options and Parameters:
Environment Variables:
When set, these environment variables override some of the built-in defaults of h5cc.

Tool Name: h5fc
Syntax:
h5fc [OPTIONS] <compile line>
Purpose:
Helper script to compile HDF5 Fortran90 applications.
Description:

h5fc can be used in much the same way mpif90 is used with MPICH to compile an HDF5 program. It takes care of specifying where the HDF5 header files and libraries are on the command line.

h5fc supersedes all other compiler scripts in that if you've used them to compile the HDF5 Fortran library, then h5fc also uses those scripts. For example, when compiling an MPICH program, you use the mpif90 script. If you've built HDF5 using MPICH, then h5fc uses mpif90 for compilation.

Some programs use HDF5 in only a few modules. It isn't necessary to use h5fc to compile those modules which don't use HDF5. In fact, since h5fc is only a convenience script, you are still able to compile HDF5 Fortran modules in the normal way. In that case, you will have to specify the HDF5 libraries and include paths yourself.

An example of how to use h5fc to compile the program hdf_prog, which consists of modules prog1.f90 and prog2.f90 and uses the HDF5 Fortran library, would be as follows:

        # h5fc -c prog1.f90
        # h5fc -c prog2.f90
        # h5fc -o hdf_prog prog1.o prog2.o
Options and Parameters:
Environment Variables:
When set, these environment variables override some of the built-in defaults of h5fc.

Tool Name: h5c++
Syntax:
h5c++ [OPTIONS] <compile line>
Purpose:
Helper script to compile HDF5 C++ applications.
Description:

h5c++ can be used in much the same way mpiCC is used with MPICH to compile an HDF5 program. It takes care of specifying where the HDF5 header files and libraries are on the command line.

h5c++ supersedes all other compiler scripts in that if you've used one set of compiler scripts to compile the HDF5 C++ library, then h5c++ uses those same scripts. For example, when compiling an MPICH program, you use the mpiCC script.

Some programs use HDF5 in only a few modules. It isn't necessary to use h5c++ to compile those modules which don't use HDF5. In fact, since h5c++ is only a convenience script, you are still able to compile HDF5 C++ modules in the normal way. In that case, you will have to specify the HDF5 libraries and include paths yourself.

An example of how to use h5c++ to compile the program hdf_prog, which consists of modules prog1.cpp and prog2.cpp and uses the HDF5 C++ library, would be as follows:

        # h5c++ -c prog1.cpp
        # h5c++ -c prog2.cpp
        # h5c++ -o hdf_prog prog1.o prog2.o
Options and Parameters:
Environment Variables:
When set, these environment variables override some of the built-in defaults of h5c++.


HDF Help Desk
Describes HDF5 Release 1.7, the unreleased development branch; working toward HDF5 Release 1.8.0