HDF5-related tools are available to assist the user in a variety of activities, including examining or managing HDF5 files, converting raw data between HDF5 and other special-purpose formats, moving data and files between the HDF4 and HDF5 formats, measuring HDF5 library performance, and managing HDF5 library and application compilation, installation and configuration. Unless otherwise specified below, these tools are distributed and installed with HDF5.
Some tools are distributed separately. HDFview (http://hdf.ncsa.uiuc.edu/hdf-java-html/) is a browser that works with both HDF4 and HDF5 files and can be used to transfer data between the two formats. See also http://hdf.ncsa.uiuc.edu/h4toh5/ and http://hdf.ncsa.uiuc.edu/tools5.html.
h5dump [OPTIONS] file
h5dump enables the user to examine the contents of an HDF5 file and dump those contents, in human-readable form, to an ASCII file. h5dump dumps HDF5 file content to standard output. It can display the contents of the entire HDF5 file or selected objects, which can be groups, datasets, a subset of a dataset, links, attributes, or datatypes. The --header option displays object header information only.
Names are the absolute names of the objects. h5dump displays objects in the order in which they appear on the command line. If a name does not start with a slash, h5dump begins searching for the specified object at the root group. If an object is hard linked with multiple names, h5dump displays the content of the object at its first occurrence; later occurrences display only the link information.
h5dump assigns a name to any unnamed datatype in the form #oid1:oid2, where oid1 and oid2 are the object identifiers assigned by the library. The unnamed datatypes are displayed within the root group.
Datatypes are displayed with standard type names. For example, if a dataset is created with the H5T_NATIVE_INT type and the standard type name for integer on that machine is H5T_STD_I32BE, h5dump displays H5T_STD_I32BE as the type of the dataset.
h5dump can also dump a subset of a dataset. This feature operates in much the same way as hyperslabs in HDF5; the parameters specified on the command line are passed to the function H5Sselect_hyperslab and the resulting selection is displayed.

The h5dump output is described in detail in the DDL for HDF5, the Data Description Language document.
Note: It is not permissible to specify multiple attributes, datasets, datatypes, groups, or soft links with one flag. For example, one may not issue the command
    WRONG: h5dump -a /attr1 /attr2 foo.h5
to display both /attr1 and /attr2. One must issue the following command instead:
    CORRECT: h5dump -a /attr1 -a /attr2 foo.h5
It's possible to select the file driver with which to open the HDF5 file by using the --filedriver (-f) command-line option. Acceptable values for the --filedriver option are: "sec2", "family", "split", "multi", and "stream". If the file driver flag isn't specified, then the file will be opened with each driver in turn and in the order specified above until one driver succeeds in opening the file.
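The fallback behavior described above can be sketched in Python; open_with_driver and try_open are illustrative stand-ins for the tool's internal logic, not part of the HDF5 library:

```python
# Sketch of h5dump's file-driver selection: use an explicit driver if
# --filedriver was given, otherwise try each driver in the fixed order
# until one succeeds. `try_open` is a hypothetical stand-in for the
# library's open call.
DRIVERS = ["sec2", "family", "split", "multi", "stream"]

def open_with_driver(path, try_open, driver=None):
    if driver is not None:
        return try_open(path, driver)  # --filedriver (-f) was specified
    for d in DRIVERS:                  # otherwise: fixed fallback order
        try:
            return try_open(path, d)
        except OSError:
            continue
    raise OSError(f"no driver could open {path!r}")
```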
With the --xml option, h5dump generates XML output. This output contains a complete description of the file, marked up in XML. The XML conforms to the HDF5 Document Type Definition (DTD) available at http://hdf.ncsa.uiuc.edu/DTDs/HDF5-File.dtd. The XML output is suitable for use with other tools, including the HDF5 Java Tools.
-h or --help | Print a usage message and exit.
-B or --bootblock | Print the content of the boot block. (This option is not yet implemented.)
-H or --header | Print the header only; no data is displayed.
-A | Print the header and value of attributes; data of datasets is not displayed.
-i or --object-ids | Print the object ids.
-r or --string | Print 1-byte integer datasets as ASCII.
-V or --version | Print version number and exit.
-a P or --attribute=P | Print the specified attribute.
-d P or --dataset=P | Print the specified dataset.
-f D or --filedriver=D | Specify which driver to open the file with.
-g P or --group=P | Print the specified group and all members.
-l P or --soft-link=P | Print the value(s) of the specified soft link.
-o F or --output=F | Output raw data into file F.
-t T or --datatype=T | Print the specified named datatype.
-w N or --width=N | Set the number of columns of output.
-x or --xml | Output XML using XML schema (default) instead of DDL.
-u or --use-dtd | Output XML using XML DTD instead of DDL.
-D U or --xml-dtd=U | In XML output, refer to the DTD or schema at U instead of the default schema/DTD.
-X S or --xml-ns=S | In XML output (XML Schema), use qualified names with the prefix S (":" for no namespace; default: "hdf5:").
-s L or --start=L | Offset of the start of the subsetting selection. Default: the beginning of the dataset.
-S L or --stride=L | Hyperslab stride. Default: 1 in all dimensions.
-c L or --count=L | Number of blocks to include in the selection.
-k L or --block=L | Size of the block in the hyperslab. Default: 1 in all dimensions.
-- | Indicate that all following arguments are non-options. E.g., to dump a file called `-f', use h5dump -- -f.
file | The file to be examined.
D | The file driver to use in opening the file. Acceptable values are "sec2", "family", "split", "multi", and "stream". Without the file driver flag, the file is opened with each driver in turn, in the order listed above, until one driver succeeds.
P | The full path from the root group to the object.
T | The name of the datatype.
F | A filename.
N | An integer greater than 1.
L | A list of integers, the number of which is equal to the number of dimensions in the dataspace being queried.
U | A URI (as defined in [IETF RFC 2396], updated by [IETF RFC 2732]) that refers to the DTD to be used to validate the XML.
Subsetting parameters can also be expressed in a convenient compact form:
    --dataset="/foo/mydataset[START;STRIDE;COUNT;BLOCK]"
All of the semicolons (;) are required, even when a parameter value is not specified. When a parameter value is not specified, the default value is used.
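Along a single dimension, the START, STRIDE, COUNT, and BLOCK parameters select elements the same way an H5Sselect_hyperslab call does: COUNT blocks of BLOCK elements each, with the starts of consecutive blocks STRIDE elements apart. A minimal sketch (hyperslab_indices is an illustrative helper, not h5dump code):

```python
# Indices selected by [START;STRIDE;COUNT;BLOCK] along one dimension,
# mirroring H5Sselect_hyperslab semantics: COUNT blocks of BLOCK
# elements, block origins STRIDE apart.
def hyperslab_indices(start, stride, count, block):
    return [start + b * stride + j
            for b in range(count)
            for j in range(block)]
```

For example, the selection [0;4;2;2] picks elements 0, 1, 4, and 5.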
h5ls [OPTIONS] file [OBJECTS...]

h5ls prints selected information about file objects in the specified format.
-h or -? or --help | Print a usage message and exit.
-a or --address | Print addresses for raw data.
-d or --data | Print the values of datasets.
-e or --errors | Show all HDF5 error reporting.
-f or --full | Print full path names instead of base names.
-g or --group | Show information about a group, not its contents.
-l or --label | Label members of compound datasets.
-r or --recursive | List all groups recursively, avoiding cycles.
-s or --string | Print 1-byte integer datasets as ASCII.
-S or --simple | Use a machine-readable output format.
-w N or --width=N | Set the number of columns of output.
-v or --verbose | Generate more verbose output.
-V or --version | Print version number and exit.
-x or --hexdump | Show raw data in hexadecimal format.
file | The file name may include a printf(3C) integer format such as "%05d" to open a file family.
objects | Each object consists of an HDF5 file name optionally followed by a slash and an object name within the file. If no object is specified within the file, the contents of the root group are displayed. The file name may include a printf(3C) integer format such as "%05d" to open a file family.
h5diff file1 file2 [OPTIONS] [object1 [object2]]
h5diff is a command line tool that compares two HDF5 files, file1 and file2, and reports the differences between them. Optionally, h5diff will compare two objects within these files. If only one object, object1, is specified, h5diff will compare object1 in file1 with object1 in file2. If two objects, object1 and object2, are specified, h5diff will compare object1 in file1 with object2 in file2. These objects must be HDF5 datasets. object1 and object2 must be expressed as absolute paths from the respective file's root group.

h5diff has the following four modes of output:
Normal mode: print the number of differences found and where they occurred.
Report mode (-r): print the above plus the differences.
Verbose mode (-v): print the above plus a list of objects and warnings.
Quiet mode (-q): do not print output (h5diff always returns an exit code of 1 when differences are found).

Additional information, with several sample cases, can be found in the document H5diff Examples.
file1, file2 | The HDF5 files to be compared.
-h | Print a help message.
-r | Report mode. Print the differences.
-v | Verbose mode. Print the differences, list of objects, and warnings.
-q | Quiet mode. Do not print output.
-n count | Print differences up to count differences, then stop. count must be a positive integer.
-d delta | Print only differences that are greater than the limit delta. delta must be a positive number. The comparison criterion is whether the absolute value of the difference of two corresponding values is greater than delta (i.e., |a-b| > delta, where a is a value in file1 and b is a value in file2).
-p relative | Print only differences that are greater than a relative error. relative must be a positive number. The comparison criterion is whether the absolute value of the difference of 1 and the ratio of two corresponding values is greater than relative (i.e., |1-(b/a)| > relative, where a is a value in file1 and b is a value in file2).
object1, object2 | Specific object(s) within the files to be compared.
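The -d and -p comparison criteria can be written out explicitly; these helper names are illustrative and do not appear in h5diff itself:

```python
# The h5diff difference criteria, where a is a value from file1 and
# b the corresponding value from file2.
def differs_absolute(a, b, delta):
    return abs(a - b) > delta          # -d: |a - b| > delta

def differs_relative(a, b, relative):
    return abs(1 - b / a) > relative   # -p: |1 - (b/a)| > relative
```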
The following h5diff call compares the object /a/b in file1 with the object /a/c in file2:
    h5diff file1 file2 /a/b /a/c
This h5diff call compares the object /a/b in file1 with the same object in file2:
    h5diff file1 file2 /a/b
And this h5diff call compares all objects in the two files:
    h5diff file1 file2
h5repack -i file1 -o file2 [-h] [-v] [-f 'filter'] [-l 'layout'] [-m number] [-e file]
h5repack is a command line tool that applies HDF5 filters to an input file, file1, saving the output in a new file, file2.

'filter' is a string with the format
    <list of objects>:<name of filter>=<filter parameters>
<list of objects> is a comma-separated list of object names, meaning that compression is applied only to those objects. If no object names are specified, the filter is applied to all objects.
<name of filter> can be:
    GZIP, to apply the HDF5 GZIP filter (GZIP compression)
    SZIP, to apply the HDF5 SZIP filter (SZIP compression)
    SHUF, to apply the HDF5 shuffle filter
    FLET, to apply the HDF5 checksum filter
    NONE, to remove the filter
<filter parameters> is optional compression information:
    SHUF (no parameter)
    FLET (no parameter)
    GZIP=<deflation level>, from 1 to 9
    SZIP=<pixels per block,coding> (pixels per block is an even number in the range 2-32, and the coding method is 'EC' or 'NN')
-h | Print a help message.
-f filter | Filter string, as described above.
-l layout | Layout string.
-v | Verbose mode.
-e file | File containing the -f and -l options.
-m number | Do not apply the filter to datasets smaller than number bytes.
2) h5repack -i file1 -o file2 -f dset1:SZIP=8,NN -v
   Applies SZIP compression only to the object 'dset1'.
3) h5repack -i file1 -o file2 -l dset1,dset2:CHUNK=20x10 -v
   Applies a chunked layout to the objects 'dset1' and 'dset2'.
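The 'filter' string syntax can be illustrated with a small parser for the documented format; parse_filter is a sketch, not code taken from h5repack:

```python
# Parse an h5repack 'filter' string of the form
#   <list of objects>:<name of filter>=<filter parameters>
# Both the object list and the parameters are optional.
def parse_filter(spec):
    objects = []
    if ":" in spec:
        obj_part, spec = spec.split(":", 1)
        objects = obj_part.split(",") if obj_part else []
    name, _, params = spec.partition("=")
    return objects, name, params
```

For example, "dset1:SZIP=8,NN" yields (["dset1"], "SZIP", "8,NN"), and "GZIP=6" applies to all objects.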
h5repart [-v] [-V] [-[b|m] N[g|m|k]] source_file dest_file
h5repart splits a single file into a family of files, joins a family of files into a single file, or copies one family of files to another while changing the size of the family members. h5repart can also be used to copy a single file to a single file with holes.

Sizes associated with the -b and -m options may be suffixed with g for gigabytes, m for megabytes, or k for kilobytes. File family names include an integer printf format such as %d.
-v | Produce verbose output.
-V | Print a version number and exit.
-b N | The I/O block size. Defaults to 1kB.
-m N | The destination member size. Defaults to 1GB.
source_file | The name of the source file.
dest_file | The name of the destination file(s).
h5import infile in_options [infile in_options ...] -o outfile
h5import infile in_options [infile in_options ...] -outfile outfile
h5import -h
h5import -help
h5import converts data from one or more ASCII or binary files, infile, into the same number of HDF5 datasets in an existing or new HDF5 file, outfile. Data conversion is performed in accordance with the user-specified type and storage properties specified in in_options.

The primary objective of h5import is to import floating point or integer data. The utility's design allows for future versions that accept ASCII text files and store the contents as a compact array of one-dimensional strings, but that capability is not implemented in HDF5 Release 1.6.
Input data and options:
Input data can be provided as ASCII (text) files or binary files. Each input file, infile, contains a single n-dimensional array of values, expressed with the fastest-changing dimension first. Floating point data in an ASCII input file must be expressed in fixed notation (e.g., 323.56); h5import is designed to accept scientific notation (e.g., 3.23E+02) in ASCII input, but that capability is not implemented in HDF5 Release 1.6.
Each input file can be associated with options specifying the datatype and storage properties. These options can be specified either as command line arguments or in a configuration file. Note that exactly one of these approaches must be used with a single input file.
Command line arguments, best used with simple input files, can be used to specify the class, size, dimensions of the input data and a path identifying the output dataset.
The recommended means of specifying input data options is in a configuration file; this is also the only means of specifying advanced storage features. See further discussion in "The configuration file" below.
The only required option for input data is dimension sizes; defaults are available for all others.
h5import will accept up to 30 input files in a single call. Other considerations, such as the maximum length of a command line, may impose a more stringent limitation.
Output data and options:
The name of the output file is specified following the -o or -outfile option in outfile. The data from each input file is stored as a separate dataset in this output file. outfile may be an existing file; if it does not yet exist, h5import will create it.

Output dataset information and storage properties can be specified only by means of a configuration file.
Dataset path | If the groups in the path leading to the dataset do not exist, h5import will create them. If no group is specified, the dataset will be created as a member of the root group. If no dataset name is specified, the default name is dataset1 for the first input dataset, dataset2 for the second input dataset, dataset3 for the third input dataset, etc. h5import does not overwrite a pre-existing dataset of the specified or default name. When an existing dataset of a conflicting name is encountered, h5import quits with an error; the current input file and any subsequent input files are not processed.
Output type | Datatype parameters for output data.
Output data class | Signed or unsigned integer or floating point.
Output data size | 8-, 16-, 32-, or 64-bit integer; 32- or 64-bit floating point.
Output architecture | IEEE, STD, NATIVE (default). Other architectures are included in the h5import design but are not implemented in this release.
Output byte order | Little- or big-endian. Relevant only if the output architecture is IEEE, UNIX, or STD; fixed for other architectures.
Dataset layout and storage properties | Denote how raw data is to be organized on disk. If none of the following are specified, the default configuration is contiguous layout with no compression.
Layout | Contiguous (default) or chunked.
External storage | Allows raw data to be stored in a non-HDF5 file or in an external HDF5 file. Requires contiguous layout.
Compressed | Sets the type of compression and the level to which the dataset must be compressed. Requires chunked layout.
Extendable | Allows the dimensions of the dataset to increase over time and/or to be unlimited. Requires chunked layout.
Compressed and extendable | Requires chunked layout.
Command-line arguments:
The h5import syntax for the command-line arguments, in_options, is as follows:
    h5import infile -d dim_list [-p pathname] [-t input_class] [-s input_size] [infile ...] -o outfile
or
    h5import infile -dims dim_list [-path pathname] [-type input_class] [-size input_size] [infile ...] -outfile outfile
or
    h5import infile -c config_file [infile ...] -outfile outfile
When the -c config_file option is used with an input file, no other argument can be used with that input file. If the -c config_file option is not used with an input data file, the -d dim_list argument (or -dims dim_list) must be used, and any combination of the remaining options may be used. Any arguments used must appear in exactly the order used in the syntax declarations immediately above.
The configuration file:
A configuration file is specified with the -c config_file option:
    h5import infile -c config_file [infile -c config_file2 ...] -outfile outfile
The configuration file is an ASCII file and must be organized as "Configuration_Keyword Value" pairs, with one pair on each line. For example, the line indicating that the input data class (configuration keyword INPUT-CLASS) is floating point in a text file (value TEXTFP) would appear as follows:
    INPUT-CLASS TEXTFP
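Reading such a file amounts to splitting each non-blank line at the first space; a sketch of that structure (read_config is an illustrative helper, not part of h5import):

```python
# Read an h5import-style configuration: one "Keyword Value" pair per
# line, where the value is everything after the keyword (so entries
# such as DIMENSION-SIZES may hold several space-separated integers).
def read_config(text):
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        keyword, _, value = line.partition(" ")
        config[keyword] = value.strip()
    return config
```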
A configuration file may have the following keywords, each followed by one of the defined values. One entry for each of the first two keywords, RANK and DIMENSION-SIZES, is required; all other keywords are optional.

Keyword / Value | Description
RANK | The number of dimensions in the dataset. (Required)
  rank | An integer specifying the number of dimensions in the dataset. Example: 4 for a 4-dimensional dataset.
DIMENSION-SIZES | Sizes of the dataset dimensions. (Required)
  dim_sizes | A string of space-separated integers specifying the sizes of the dimensions in the dataset. The number of sizes in this entry must match the value in the RANK entry. The fastest-changing dimension must be listed first. Example: 4 3 4 38 for a 38x4x3x4 dataset.
PATH | Path of the output dataset.
  path | The full HDF5 pathname identifying the output dataset relative to the root group within the output file. That is, path is a string consisting of optional group names, each followed by a slash, and ending with a dataset name. If the groups in the path do not exist, they will be created. If PATH is not specified, the output dataset is stored as a member of the root group and the default dataset name is dataset1 for the first input dataset, dataset2 for the second input dataset, dataset3 for the third input dataset, etc. Note that h5import does not overwrite a pre-existing dataset of the specified or default name. When an existing dataset of a conflicting name is encountered, h5import quits with an error; the current input file and any subsequent input files are not processed. Example: The configuration file entry PATH grp1/grp2/dataset1 indicates that the dataset dataset1 will be written in the group grp2/, which is in the group grp1/, a member of the root group in the output file.
INPUT-CLASS | A string denoting the type of input data.
  TEXTIN | Input is signed integer data in an ASCII file.
  TEXTUIN | Input is unsigned integer data in an ASCII file.
  TEXTFP | Input is floating point data in fixed notation (e.g., 325.34) in an ASCII file.
  TEXTFPE | Input is floating point data in scientific notation (e.g., 3.2534E+02) in an ASCII file. (Not implemented in this release.)
  IN | Input is signed integer data in a binary file.
  UIN | Input is unsigned integer data in a binary file.
  FP | Input is floating point data in a binary file. (Default)
  STR | Input is character data in an ASCII file. With this value, the configuration keywords RANK, DIMENSION-SIZES, OUTPUT-CLASS, OUTPUT-SIZE, OUTPUT-ARCHITECTURE, and OUTPUT-BYTE-ORDER will be ignored. (Not implemented in this release.)
INPUT-SIZE | An integer denoting the size of the input data, in bits.
  8 16 32 64 | For signed and unsigned integer data: TEXTIN, TEXTUIN, IN, or UIN. (Default: 32)
  32 64 | For floating point data: TEXTFP, TEXTFPE, or FP. (Default: 32)
OUTPUT-CLASS | A string denoting the type of output data.
  IN | Output is signed integer data. (Default if INPUT-CLASS is IN or TEXTIN)
  UIN | Output is unsigned integer data. (Default if INPUT-CLASS is UIN or TEXTUIN)
  FP | Output is floating point data. (Default if INPUT-CLASS is not specified or is FP, TEXTFP, or TEXTFPE)
  STR | Output is character data, to be written as a 1-dimensional array of strings. (Default if INPUT-CLASS is STR) (Not implemented in this release.)
OUTPUT-SIZE | An integer denoting the size of the output data, in bits.
  8 16 32 64 | For signed and unsigned integer data: IN or UIN. (Default: same as INPUT-SIZE, else 32)
  32 64 | For floating point data: FP. (Default: same as INPUT-SIZE, else 32)
OUTPUT-ARCHITECTURE | A string denoting the type of output architecture.
  NATIVE STD IEEE INTEL* CRAY* MIPS* ALPHA* UNIX* | See the "Predefined Atomic Types" section in the "HDF5 Datatypes" chapter of the HDF5 User's Guide for a discussion of these architectures. Values marked with an asterisk (*) are not implemented in this release. (Default: NATIVE)
OUTPUT-BYTE-ORDER | A string denoting the output byte order. This entry is ignored if OUTPUT-ARCHITECTURE is not specified or if it is not specified as IEEE, UNIX, or STD.
  BE | Big-endian. (Default)
  LE | Little-endian.
The following options are disabled by default, making the default storage properties no chunking, no compression, no external storage, and no extensible dimensions.
CHUNKED-DIMENSION-SIZES | Dimension sizes of the chunk for chunked output data.
  chunk_dims | A string of space-separated integers specifying the dimension sizes of the chunk for chunked output data. The number of dimensions must correspond to the value of RANK. The presence of this field indicates that the output dataset is to be stored in chunked layout; if this configuration field is absent, the dataset will be stored in contiguous layout.
COMPRESSION-TYPE | Type of compression to be used with chunked storage. Requires that CHUNKED-DIMENSION-SIZES be specified.
  GZIP | Gzip compression. Other compression algorithms are not implemented in this release of h5import.
COMPRESSION-PARAM | Compression level. Required if COMPRESSION-TYPE is specified.
  1 through 9 | Gzip compression levels: 1 will result in the fastest compression while 9 will result in the best compression ratio. (Default: 6. Not all compression methods have a default level.)
EXTERNAL-STORAGE | Name of an external file in which to create the output dataset. Cannot be used with CHUNKED-DIMENSION-SIZES, COMPRESSION-TYPE, or MAXIMUM-DIMENSIONS.
  external_file | A string specifying the name of an external file.
MAXIMUM-DIMENSIONS | Maximum sizes of all dimensions. Requires that CHUNKED-DIMENSION-SIZES be specified.
  max_dims | A string of space-separated integers specifying the maximum size of each dimension of the output dataset. A value of -1 for any dimension implies unlimited size for that particular dimension. The number of dimensions must correspond to the value of RANK.
infile(s) | Name of the input file(s).
in_options | Input options. Note that while only the -dims argument is required, arguments must be used in the order in which they are listed below.
-d dim_list or -dims dim_list | Input data dimensions. dim_list is a string of comma-separated numbers with no spaces describing the dimensions of the input data. For example, a 50 x 100 2-dimensional array would be specified as -dims 50,100. Required argument: if no configuration file is used, this command-line argument is mandatory.
-p pathname or -path pathname | pathname is a string consisting of one or more strings separated by slashes (/) specifying the path of the dataset in the output file. If the groups in the path do not exist, they will be created. Optional argument: if not specified, the default dataset name is dataset1 for the first input dataset, dataset2 for the second input dataset, dataset3 for the third input dataset, etc. h5import does not overwrite a pre-existing dataset of the specified or default name. When an existing dataset of a conflicting name is encountered, h5import quits with an error; the current input file and any subsequent input files are not processed.
-t input_class or -type input_class | input_class specifies the class of the input data and determines the class of the output data. Valid values are as defined in the Keyword/Values table in the section "The configuration file" above. Optional argument: if not specified, the default value is FP.
-s input_size or -size input_size | input_size specifies the size in bits of the input data and determines the size of the output data. Valid values for signed or unsigned integers are 8, 16, 32, and 64. Valid values for floating point data are 32 and 64. Optional argument: if not specified, the default value is 32.
-c config_file | config_file specifies a configuration file. This argument replaces all other arguments except infile and -o outfile.
-h or -help | Prints the h5import usage summary, then exits.
outfile | Name of the HDF5 output file.
h5import infile -dims 2,3,4 -type TEXTIN -size 32 -o out1
This command creates a file out1 containing a single 2x3x4 32-bit integer dataset. Since no pathname is specified, the dataset is stored in out1 as /dataset1.

h5import infile -dims 20,50 -path bin1/dset1 -type FP -size 64 -o out2
This command creates a file out2 containing a single 20x50 64-bit floating point dataset. The dataset is stored in out2 as /bin1/dset1.
The following configuration file specifies a dataset to be written in the output file outfile at /work/h5/pkamat/First-set:

    PATH work/h5/pkamat/First-set
    INPUT-CLASS TEXTFP
    RANK 3
    DIMENSION-SIZES 5 2 4
    OUTPUT-CLASS FP
    OUTPUT-SIZE 64
    OUTPUT-ARCHITECTURE IEEE
    OUTPUT-BYTE-ORDER LE
    CHUNKED-DIMENSION-SIZES 2 2 2
    MAXIMUM-DIMENSIONS 8 8 -1

The next configuration file specifies 32-bit integer data to be written in NATIVE format (as the output architecture is not specified) in the output file outfile at /Second-set:

    PATH Second-set
    INPUT-CLASS IN
    RANK 5
    DIMENSION-SIZES 6 3 5 2 4
    OUTPUT-CLASS IN
    OUTPUT-SIZE 32
    CHUNKED-DIMENSION-SIZES 2 2 2 2 2
    COMPRESSION-TYPE GZIP
    COMPRESSION-PARAM 7
gif2h5 gif_file h5_file

gif2h5 accepts as input the GIF file gif_file and produces the HDF5 file h5_file as output.

gif_file | The name of the input GIF file.
h5_file | The name of the output HDF5 file.
h52gif h5_file gif_file -i h5_image [-p h5_palette]

h52gif accepts as input the HDF5 file h5_file and the names of images and associated palettes within that file, and produces the GIF file gif_file, containing those images, as output. h52gif expects at least one h5_image. You may repeat -i h5_image [-p h5_palette] up to 50 times, for a maximum of 50 images.

h5_file | The name of the input HDF5 file.
gif_file | The name of the output GIF file.
-i h5_image | Image option, specifying the name of an HDF5 image or dataset containing an image to be converted.
-p h5_palette | Palette option, specifying the name of an HDF5 dataset containing a palette to be used in an image conversion.
h5toh4 -h
h5toh4 h5file h4file
h5toh4 h5file
h5toh4 -m h5file1 h5file2 h5file3 ...
h5toh4 is an HDF5 utility which reads an HDF5 file, h5file, and converts all supported objects and pathways to produce an HDF4 file, h4file. If h4file already exists, it will be replaced.

If only one file name is given, the name must end in .h5 and is assumed to represent the HDF5 input file. h5toh4 replaces the .h5 suffix with .hdf to form the name of the resulting HDF4 file and proceeds as above. If a file with the name of the intended HDF4 file already exists, h5toh4 exits with an error without changing the contents of any file.

The -m option allows multiple HDF5 file arguments. Each file name is treated the same as in the single file name case above.

The -h option causes the following syntax summary to be displayed:
    h5toh4 file.h5 file.hdf
    h5toh4 file.h5
    h5toh4 -m file1.h5 file2.h5 ...
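The suffix substitution in the single-file-name case can be sketched as follows (derive_h4_name is an illustrative helper, not h5toh4 code):

```python
# How h5toh4 derives the HDF4 file name when only one name is given:
# the name must end in .h5, and that suffix becomes .hdf.
def derive_h4_name(h5name):
    if not h5name.endswith(".h5"):
        raise ValueError("input file name must end in .h5")
    return h5name[:-len(".h5")] + ".hdf"
```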
The following HDF5 objects occurring in an HDF5 file are converted to HDF4 objects in the HDF4 file:
Attributes associated with any of the supported HDF5 objects are carried over to the HDF4 objects. Attributes may be of integer, floating point, or fixed length string datatype and they may have up to 32 fixed dimensions.
All datatypes are converted to big-endian. Floating point datatypes are converted to IEEE format.
The h5toh4 and h4toh5 utilities are no longer part of the HDF5 product; they are distributed separately through the page Converting between HDF (4.x) and HDF5.

-h | Displays a syntax summary.
-m | Converts multiple HDF5 files to multiple HDF4 files.
h5file | The HDF5 file to be converted.
h4file | The HDF4 file to be created.
h4toh5 -h
h4toh5 h4file h5file
h4toh5 h4file

h4toh5 is a file conversion utility that reads an HDF4 file, h4file (input.hdf, for example), and writes an HDF5 file, h5file (output.h5, for example), containing the same data.

If no output file h5file is specified, h4toh5 uses the input filename to designate the output file, replacing the extension .hdf with .h5. For example, if the input file scheme3.hdf is specified with no output filename, h4toh5 will name the output file scheme3.h5.

The -h option causes a syntax summary similar to the following to be displayed:
    h4toh5 inputfile.hdf outputfile.h5
    h4toh5 inputfile.hdf
Each object in the HDF4 file is converted to an equivalent HDF5 object, according to the mapping described in Mapping HDF4 Objects to HDF5 Objects. (If this mapping changes between HDF5 Library releases, a more up-to-date version may be available at Mapping HDF4 Objects to HDF5 Objects on the HDF FTP server.)
In this initial version, h4toh5 converts the following HDF4 objects:
HDF4 Object | Resulting HDF5 Object |
---|---|
SDS | Dataset |
GR, RI8, and RI24 image | Dataset |
Vdata | Dataset |
Vgroup | Group |
Annotation | Attribute |
Palette | Dataset |
The h4toh5 and h5toh4 utilities are no longer part of the HDF5 product; they are distributed separately through the page Converting between HDF (4.x) and HDF5.
-h |
Displays a syntax summary. |
h4file | The HDF4 file to be converted. |
h5file | The HDF5 file to be created. |
h5perf [-h | --help]
h5perf [options]

h5perf provides tools for testing the performance of the Parallel HDF5 library.

The following environment variables affect h5perf behavior:
HDF5_NOCLEANUP | If set, h5perf does not remove data files. (Default: Remove)
HDF5_MPI_INFO | Must be set to a string containing a list of semicolon-separated key=value pairs for the MPI INFO object.
HDF5_PARAPREFIX | Sets the prefix for parallel output data files.
These terms are used as follows in this section:
file | A filename.
size | A size specifier, expressed as an integer greater than or equal to 0 (zero) followed by a size indicator: K for kilobytes (1024 bytes), M for megabytes (1048576 bytes), or G for gigabytes (1073741824 bytes). Example: 37M specifies 37 megabytes, or 38797312 bytes.
N | An integer greater than or equal to 0 (zero).
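The size specifier can be decoded as follows; parse_size is an illustrative helper, not part of h5perf:

```python
# Decode an h5perf size specifier: a non-negative integer optionally
# followed by K (kilobytes), M (megabytes), or G (gigabytes).
MULTIPLIERS = {"K": 1024, "M": 1048576, "G": 1073741824}

def parse_size(spec):
    if spec and spec[-1].upper() in MULTIPLIERS:
        return int(spec[:-1]) * MULTIPLIERS[spec[-1].upper()]
    return int(spec)  # bare integer: a plain byte count
```

For example, 37M decodes to 38797312 bytes, matching the example above.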
-h , --help |
Prints a usage message and exits. |
-a size, --align= size |
Specifies the alignment of objects in the HDF5 file. (Default: 1) |
-A api_list, --api= api_list |
Specifies which APIs to test. api_list
is a comma-separated list of API names.
For example, --api=mpiio,phdf5 specifies that the MPI I/O
and Parallel HDF5 APIs are to be monitored. |
-B size, --block-size= size |
Specifies the block size within the transfer
buffer. (Default: 128K)
Block size versus transfer buffer size: The transfer buffer size is the size of a buffer in memory. The data in that buffer is broken into block-size pieces and written to the file. The transfer buffer size is set by the -x (or --min-xfer-size )
and -X (or --max-xfer-size ) options.
The pattern in which the blocks are written to the file is described in the discussion of the -I (or --interleaved )
option. |
-c , --chunk |
Creates HDF5 datasets in chunked layout. (Default: Off) |
-C , --collective |
Uses collective I/O for the MPI I/O and
Parallel HDF5 APIs. (Default: Off, i.e., independent I/O)
If this option is set and the MPI I/O and Parallel HDF5 APIs are in use, all the blocks in each transfer buffer will be written at once with an MPI derived type. |
-d N, --num-dsets N |
Sets the number of datasets per file. (Default: 1 ) |
-D debug_flags, --debug= debug_flags |
Sets the debugging level. debug_flags
is a comma-separated list of debugging flags.
For example, --debug=2,r,t specifies a moderate level
of debugging while collecting raw data I/O throughput information
and verifying the correctness of the data. |
-e size, --num-bytes= size |
Specifies the number of bytes per process per dataset.
(Default: 256K ) |
-F N, --num-files= N |
Specifies the number of files. (Default: 1 ) |
-i N, --num-iterations= N |
Sets the number of iterations to perform. (Default:
1 ) |
-I , --interleaved |
Sets interleaved block I/O. (Default: Contiguous block I/O)
Interleaved vs. contiguous blocks in a parallel environment:
When contiguous blocks are written to a dataset, the dataset is divided into m regions, where m is the number of processes writing separate portions of the dataset, and each process writes data to its own region.
When interleaved blocks are written to a dataset, space for the first block of the first process is allocated in the dataset, then space is allocated for the first block of the second process, and so on, until space has been allocated for the first block of each process. Space is then allocated for the second block of the first process, the second block of the second process, and so on.
For example, in the case of a 4-process run with 1M bytes per process, a 256K transfer buffer size, and a 64K block size, 16 contiguous blocks per process would be written to the file in the manner
1111111111111111222222222222222233333333333333334444444444444444
while 16 interleaved blocks per process would be written to the file as
1234123412341234123412341234123412341234123412341234123412341234
If collective I/O is turned on, all four blocks per transfer buffer will be written in one collective I/O call. |
-m , --mpi-posix |
Sets use of the MPI-POSIX driver for HDF5 I/O. (Default: MPI-I/O driver) |
-n , --no-fill |
Specifies that fill values are not to be written to HDF5 datasets.
This option is supported only in HDF5 release 1.6 or later. (Default: Off, i.e., write fill values) |
-o file, --output= file |
Sets the output file for raw data to file. (Default: None) |
-p N, --min-num-processes= N |
Sets the minimum number of processes to be used. (Default:
1 ) |
-P N, --max-num-processes= N
|
Sets the maximum number of processes to be used. (Default: All MPI_COMM_WORLD processes) |
-T size, --threshold= size |
Sets the threshold for alignment of objects in the
HDF5 file. (Default: 1 ) |
-w , --write-only |
Performs only write tests, not read tests. (Default: Read and write tests) |
-x size, --min-xfer-size= size |
Sets the minimum transfer buffer size. (Default: 128K ) |
-X size, --max-xfer-size= size |
Sets the maximum transfer buffer size. (Default: 1M ) |
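The contiguous and interleaved layouts described under the -I option can be reproduced with plain shell loops. This sketch only models the write ordering from the example above (4 processes, 16 blocks per process); it does not invoke h5perf:

```shell
blocks=16    # blocks per process, as in the 1M / 64K example above

# Contiguous: each process writes all of its blocks in its own region.
contiguous=""
for p in 1 2 3 4; do
  b=0
  while [ "$b" -lt "$blocks" ]; do
    contiguous="${contiguous}${p}"
    b=$((b + 1))
  done
done

# Interleaved: one block per process, round-robin, until all are written.
interleaved=""
b=0
while [ "$b" -lt "$blocks" ]; do
  for p in 1 2 3 4; do
    interleaved="${interleaved}${p}"
  done
  b=$((b + 1))
done

echo "$contiguous"
echo "$interleaved"
```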
h5redeploy [help | -help]
h5redeploy [-echo] [-force] [-prefix=dir] [-tool=tool] [-show]
h5redeploy
updates the HDF5 compiler tools after
the HDF5 software has been installed in a new location.
help , -help |
Prints a help message. |
-echo |
Shows all the shell commands executed. |
-force |
Performs the requested action without offering any prompt requesting confirmation. |
-prefix= dir |
Specifies a new directory in which to find the
HDF5 subdirectories lib/ and include/ .
(Default: current working directory) |
-tool= tool |
Specifies the tool to update. tool must
be in the current directory and must be writable.
(Default: h5cc ) |
-show |
Shows all of the shell commands to be executed without actually executing them. |
h5cc [OPTIONS] <compile line>
h5cc
can be used in much the same way MPICH is used
to compile an HDF5 program. It takes care of specifying where the
HDF5 header files and libraries are on the command line.
h5cc
supersedes all other compiler scripts in that
if you've used them to compile the HDF5 library, then
h5cc
also uses those scripts. For example, when
compiling an MPICH program, you use the mpicc
script. If you've built HDF5 using MPICH, then h5cc
uses the MPICH program for compilation.
Some programs use HDF5 in only a few modules. It isn't necessary
to use h5cc
to compile those modules which don't use
HDF5. In fact, since h5cc
is only a convenience
script, you are still able to compile HDF5 modules in the normal
way. In that case, you will have to specify the HDF5 libraries
and include paths yourself.
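When compiling such modules without h5cc, the flags might look like the following sketch. The /usr/local/hdf5 prefix is a hypothetical install location, and -lz and -lm are libraries HDF5 commonly depends on; the commands are printed for illustration rather than executed:

```shell
# Hypothetical HDF5 install prefix; adjust to your installation.
HDF5_DIR=/usr/local/hdf5
CPPFLAGS="-I${HDF5_DIR}/include"
LDFLAGS="-L${HDF5_DIR}/lib"
LIBS="-lhdf5 -lz -lm"        # -lz and -lm are common, not universal

# Equivalent manual compile and link commands, printed for illustration.
echo "cc $CPPFLAGS -c prog1.c"
echo "cc -o hdf_prog prog1.o $LDFLAGS $LIBS"
```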
An example of how to use h5cc
to compile the program
hdf_prog,
which consists of modules
prog1.c
and prog2.c
and uses the HDF5
shared library, would be as follows:
# h5cc -c prog1.c
# h5cc -c prog2.c
# h5cc -shlib -o hdf_prog prog1.o prog2.o
-help |
Prints a help message. |
-echo |
Show all the shell commands executed. |
-prefix=DIR |
Use the directory DIR to find the HDF5
lib/ and include/ subdirectories.
Default: prefix specified when configuring HDF5. |
-show |
Show the commands without executing them. |
-shlib |
Compile using shared HDF5 libraries. |
-noshlib |
Compile using static HDF5 libraries [default]. |
<compile line> | The normal compile line options for your compiler.
h5cc uses the same compiler you used to compile HDF5.
Check your compiler's manual for more information on which
options are needed. |
The following environment variables affect the behavior of
h5cc:
HDF5_CC |
Use a different C compiler. |
HDF5_CLINKER |
Use a different linker. |
HDF5_USE_SHLIB=[yes|no] |
Use shared version of the HDF5 library [default: no]. |
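A sketch of setting these overrides is shown below. clang is an arbitrary example compiler, and the h5cc invocation is left as a comment since it requires an installed HDF5:

```shell
# clang is an arbitrary example; any compiler compatible with the one
# used to build HDF5 could be substituted.
export HDF5_CC=clang
export HDF5_USE_SHLIB=yes
# h5cc -c prog1.c    # would now compile with clang against shared HDF5
```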
h5fc [OPTIONS] <compile line>
h5fc
can be used in much the same way MPICH is used
to compile an HDF5 program. It takes care of specifying where the
HDF5 header files and libraries are on the command line.
h5fc
supersedes all other compiler scripts in that
if you've used them to compile the HDF5 Fortran library, then
h5fc
also uses those scripts. For example, when
compiling an MPICH program, you use the mpif90
script. If you've built HDF5 using MPICH, then h5fc
uses the MPICH program for compilation.
Some programs use HDF5 in only a few modules. It isn't necessary
to use h5fc
to compile those modules which don't use
HDF5. In fact, since h5fc
is only a convenience
script, you are still able to compile HDF5 Fortran modules in the
normal way. In that case, you will have to specify the HDF5 libraries
and include paths yourself.
An example of how to use h5fc
to compile the program
hdf_prog,
which consists of modules
prog1.f90
and prog2.f90
and uses the HDF5 Fortran library, would be as follows:
# h5fc -c prog1.f90
# h5fc -c prog2.f90
# h5fc -o hdf_prog prog1.o prog2.o
-help |
Prints a help message. |
-echo |
Show all the shell commands executed. |
-prefix=DIR |
Use the directory DIR to find the HDF5
lib/ and include/ subdirectories.
Default: prefix specified when configuring HDF5. |
-show |
Show the commands without executing them. |
<compile line> | The normal compile line options for your compiler.
h5fc uses the same compiler you used
to compile HDF5. Check your compiler's manual for
more information on which options are needed. |
The following environment variables affect the behavior of
h5fc:
HDF5_FC |
Use a different Fortran90 compiler. |
HDF5_FLINKER |
Use a different linker. |
h5c++ [OPTIONS] <compile line>
h5c++
can be used in much the same way MPICH is used
to compile an HDF5 program. It takes care of specifying where the
HDF5 header files and libraries are on the command line.
h5c++
supersedes all other compiler scripts in that
if you've used one set of compiler scripts to compile the
HDF5 C++ library, then h5c++
uses those same scripts.
For example, when compiling an MPICH program,
you use the mpiCC
script.
Some programs use HDF5 in only a few modules. It isn't necessary
to use h5c++
to compile those modules which don't use
HDF5. In fact, since h5c++
is only a convenience
script, you are still able to compile HDF5 C++ modules in the
normal way. In that case, you will have to specify the HDF5 libraries
and include paths yourself.
An example of how to use h5c++
to compile the program
hdf_prog,
which consists of modules
prog1.cpp
and prog2.cpp
and uses the HDF5 C++ library, would be as follows:
# h5c++ -c prog1.cpp
# h5c++ -c prog2.cpp
# h5c++ -o hdf_prog prog1.o prog2.o
-help |
Prints a help message. |
-echo |
Show all the shell commands executed. |
-prefix=DIR |
Use the directory DIR to find the HDF5
lib/ and include/ subdirectories.
Default: prefix specified when configuring HDF5. |
-show |
Show the commands without executing them. |
<compile line> |
The normal compile line options for your compiler.
h5c++ uses the same compiler you used
to compile HDF5. Check your compiler's manual for
more information on which options are needed. |
The following environment variables affect the behavior of
h5c++:
HDF5_CXX |
Use a different C++ compiler. |
HDF5_CXXLINKER |
Use a different linker. |