+
\section sec_learn Learning HDF5
There are several resources for learning about HDF5. The HDF Group provides an on-line HDF5 tutorial,
documentation, examples, and videos. There are also tutorials provided by other organizations that are very useful for learning about HDF5.
@@ -42,7 +45,7 @@ Parallel HDF5, and the HDF5-1.10 VDS and SWMR new features:
Using the High Level APIs
-@subpage IntroHDF5
+
+Navigate back: \ref index "Main"
+
*/
diff --git a/doxygen/dox/IntroHDF5.dox b/doxygen/dox/IntroHDF5.dox
index cd192a3..ec46217 100644
--- a/doxygen/dox/IntroHDF5.dox
+++ b/doxygen/dox/IntroHDF5.dox
@@ -1,5 +1,8 @@
/** @page IntroHDF5 Introduction to HDF5
+Navigate back: \ref index "Main" / \ref GettingStarted
+
+
\section sec_intro_desc HDF5 Description
HDF5 consists of a file format for storing HDF5 data, a data model for logically organizing and accessing
HDF5 data from an application, and the software (libraries, language interfaces, and tools) for working with this format.
@@ -188,11 +191,11 @@ of object on which the function operates:
The HDF5 High Level APIs simplify many of the steps required to create and access objects, as well
as providing templates for storing objects. Following is a list of the High Level APIs:
-\li HDF5 @ref H5LT (H5LT) – simplifies steps in creating datasets and attributes
-\li HDF5 @ref H5IM (H5IM) – defines a standard for storing images in HDF5
-\li HDF5 @ref H5TB (H5TB) – condenses the steps required to create tables
-\li HDF5 @ref H5DS (H5DS) – provides a standard for dimension scale storage
-\li HDF5 @ref H5PT (H5PT) – provides a standard for storing packet data
+\li @ref H5LT – simplifies steps in creating datasets and attributes
+\li @ref H5IM – defines a standard for storing images in HDF5
+\li @ref H5TB – condenses the steps required to create tables
+\li @ref H5DS – provides a standard for dimension scale storage
+\li @ref H5PT – provides a standard for storing packet data
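+
+To give a feel for how much the High Level APIs condense the work, here is a minimal sketch using the Lite (H5LT) interface; the file name, dataset name, and attribute are illustrative assumptions only:
+\code
+#include "hdf5.h"
+#include "hdf5_hl.h"
+
+int main(void)
+{
+    hid_t   file;
+    hsize_t dims[2] = {2, 3};
+    int     data[6] = {1, 2, 3, 4, 5, 6};
+
+    file = H5Fcreate("lite.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
+
+    /* One H5LT call replaces the usual create-dataspace/create-dataset/write/close sequence. */
+    H5LTmake_dataset_int(file, "/dset", 2, dims, data);
+    H5LTset_attribute_string(file, "/dset", "units", "meters");
+
+    H5Fclose(file);
+    return 0;
+}
+\endcode
+The same result with the base library would require separate dataspace, dataset, attribute, and close calls.
+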
\subsubsection subsec_intro_desc_soft_tools Tools
Useful tools for working with HDF5 files include:
@@ -257,8 +260,8 @@ operation is to be performed.
FORTRAN routines are similar; they begin with “h5*” and end with “_f”.
-Java routines are similar; they begin with “H5*” and begin with “H5.” as the class. Constants are
-in the HDF5Constants class and begin with "HDF5Constants.". The function arguments
+Java routines are similar; the routine names begin with “H5*” and are prefixed with “H5.” as the class. Constants are
+in the HDF5Constants class and are prefixed with "HDF5Constants.". The function arguments
are usually similar, @see @ref HDF5LIB
@@ -592,4 +595,33 @@ to it and then close it in separate steps:
+
+Navigate back: \ref index "Main" / \ref GettingStarted
+
+
+@page HDF5Examples HDF5 Examples
+Example programs of how to use HDF5 are provided below.
+For HDF-EOS specific examples, see the examples
+of how to access and visualize NASA HDF-EOS files using IDL, MATLAB, and NCL on the
+HDF-EOS Tools and Information Center page.
+
+\section secHDF5Examples Examples
+\li \ref LBExamples
+\li Examples by API
+\li Examples in the Source Code
+\li Other Examples
+
+\section secHDF5ExamplesCompile How To Compile
+For information on compiling in C, C++ and Fortran, see: \ref LBCompiling
+
+\section secHDF5ExamplesOther Other Examples
+<strong>IDL, MATLAB, and NCL Examples for HDF-EOS</strong><br />
+Examples of how to access and visualize NASA HDF-EOS files using IDL, MATLAB, and NCL.
+
+<strong>Miscellaneous Examples</strong><br />
+These (very old) examples resulted from working with users, and are not fully tested. Most of them are in C, with a few in Fortran and Java.
+
+<strong>Using Special Values</strong><br />
+These examples show how to create special values in an HDF5 application.
+
*/
diff --git a/doxygen/dox/LearnBasics.dox b/doxygen/dox/LearnBasics.dox
index a4f5cc6..298672d 100644
--- a/doxygen/dox/LearnBasics.dox
+++ b/doxygen/dox/LearnBasics.dox
@@ -1,30 +1,183 @@
/** @page LearnBasics Learning the Basics
+
+Navigate back: \ref index "Main" / \ref GettingStarted
+
+
+\section secIntro Introduction
The following topics cover the basic features in HDF5. The topics build on each other and are
intended to be completed in order. Some sections use files created in earlier sections. The
-examples used can also be found on the Examples from Learning the Basics
+examples used can also be found on the \ref LBExamples
page and in the HDF5 source code (C, C++, Fortran).
-
+
+*See HDF5Mathematica for user-contributed
+HDF5 Mathematica Wrappers and Introductory Tutorial Examples. The examples use P/Invoke.
+
+\section secLBExamplesAddl Additional Examples
+These examples make minor changes to the tutorial examples.
+
+
+
+
+Navigate back: \ref index "Main" / \ref GettingStarted / \ref LearnBasics
*/
diff --git a/doxygen/dox/LearnBasics1.dox b/doxygen/dox/LearnBasics1.dox
new file mode 100644
index 0000000..a9b6d0e
--- /dev/null
+++ b/doxygen/dox/LearnBasics1.dox
@@ -0,0 +1,1023 @@
+/** @page LBFileOrg HDF5 File Organization
+
+Navigate back: \ref index "Main" / \ref GettingStarted / \ref LearnBasics
+
+
+\section secLBFileOrg HDF5 file
+An HDF5 file is a container for storing a variety of scientific data and is composed of two primary types of objects: groups and datasets.
+
+\li HDF5 group: a grouping structure containing zero or more HDF5 objects, together with supporting metadata
+\li HDF5 dataset: a multidimensional array of data elements, together with supporting metadata
+
+Any HDF5 group or dataset may have an associated attribute list. An HDF5 attribute is a user-defined HDF5 structure
+that provides extra information about an HDF5 object.
+
+Working with groups and datasets is similar in many ways to working with directories and files in UNIX. As with UNIX
+directories and files, an HDF5 object in an HDF5 file is often referred to by its full path name (also called an absolute path name).
+
+\li / signifies the root group.
+
+\li /foo signifies a member of the root group called foo.
+
+\li /foo/zoo signifies a member of the group foo, which in turn is a member of the root group.
+
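+To make the path-name analogy concrete, the following minimal sketch creates the group /foo and a small dataset /foo/zoo. The file name and the choice of a dataset for zoo are assumptions for illustration; the calls themselves are covered in detail later in this tutorial.
+\code
+#include "hdf5.h"
+
+int main(void)
+{
+    hid_t   file, foo, space, zoo;
+    hsize_t dims[1] = {10};
+
+    /* "/" (the root group) exists as soon as the file is created. */
+    file = H5Fcreate("paths.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
+
+    /* Create the group /foo ... */
+    foo = H5Gcreate(file, "/foo", H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
+
+    /* ... and the dataset /foo/zoo, a member of the group foo. */
+    space = H5Screate_simple(1, dims, NULL);
+    zoo   = H5Dcreate(file, "/foo/zoo", H5T_NATIVE_INT, space,
+                      H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
+
+    H5Dclose(zoo);
+    H5Sclose(space);
+    H5Gclose(foo);
+    H5Fclose(file);
+    return 0;
+}
+\endcode
+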
+Navigate back: \ref index "Main" / \ref GettingStarted / \ref LearnBasics
+
+@page LBAPI The HDF5 API
+
+Navigate back: \ref index "Main" / \ref GettingStarted / \ref LearnBasics
+
+
+\section secLBAPI HDF5 C API
+The HDF5 library provides several interfaces, or APIs. These APIs provide routines for creating,
+accessing, and manipulating HDF5 files and objects.
+
+The library itself is implemented in C. To facilitate the work of FORTRAN 90, C++ and Java programmers,
+HDF5 function wrappers have been developed in each of these languages. This tutorial discusses the use
+of the C functions and the FORTRAN wrappers.
+
+All C routines in the HDF5 library begin with a prefix of the form H5*, where * is one or two uppercase
+letters indicating the type of object on which the function operates.
+The FORTRAN wrappers come in the form of subroutines that begin with h5 and end with _f.
+Java routine names begin with “H5*” and are prefixed with “H5.” as the class. Constants are
+in the HDF5Constants class and are prefixed with "HDF5Constants.".
+The APIs are listed below:
+
+<table>
+<tr><th>API</th><th>DESCRIPTION</th></tr>
+<tr><td>H5</td><td>Library Functions: general-purpose H5 functions</td></tr>
+<tr><td>H5A</td><td>Annotation Interface: attribute access and manipulation routines</td></tr>
+<tr><td>H5D</td><td>Dataset Interface: dataset access and manipulation routines</td></tr>
+<tr><td>H5E</td><td>Error Interface: error handling routines</td></tr>
+<tr><td>H5F</td><td>File Interface: file access routines</td></tr>
+<tr><td>H5G</td><td>Group Interface: group creation and operation routines</td></tr>
+<tr><td>H5I</td><td>Identifier Interface: identifier routines</td></tr>
+<tr><td>H5L</td><td>Link Interface: link routines</td></tr>
+<tr><td>H5O</td><td>Object Interface: object routines</td></tr>
+<tr><td>H5P</td><td>Property List Interface: object property list manipulation routines</td></tr>
+<tr><td>H5R</td><td>Reference Interface: reference routines</td></tr>
+<tr><td>H5S</td><td>Dataspace Interface: dataspace definition and access routines</td></tr>
+<tr><td>H5T</td><td>Datatype Interface: datatype creation and manipulation routines</td></tr>
+<tr><td>H5Z</td><td>Compression Interface: compression routine(s)</td></tr>
+</table>
+
+
+
+
+Navigate back: \ref index "Main" / \ref GettingStarted / \ref LearnBasics
+
+@page LBProg Programming Issues
+
+Navigate back: \ref index "Main" / \ref GettingStarted / \ref LearnBasics
+
+
+Keep the following in mind when looking at the example programs included in this tutorial:
+
+\section LBProgAPI APIs vary with different languages
+\li C routines begin with the prefix “H5*” where * is a single letter indicating the object on which the operation is to be performed:
+
+<table>
+<tr><td>File Interface:</td><td>#H5Fopen</td></tr>
+<tr><td>Dataset Interface:</td><td>#H5Dopen</td></tr>
+</table>
+
+\li FORTRAN routines begin with “h5*” and end with “_f”:
+
+<table>
+<tr><td>File Interface:</td><td>h5fopen_f</td></tr>
+<tr><td>Dataset Interface:</td><td>h5dopen_f</td></tr>
+</table>
+
+\li Java routine names begin with “H5*” and are prefixed with “H5.” as the class. Constants are
+in the HDF5Constants class and are prefixed with "HDF5Constants.".:
+
+<table>
+<tr><td>File Interface:</td><td>H5.H5Fopen</td></tr>
+<tr><td>Dataset Interface:</td><td>H5.H5Dopen</td></tr>
+</table>
+
+\li APIs for languages like C++, Java, and Python use methods associated with specific objects.
+
+\section LBProgTypes HDF5 library has its own defined types
+\li #hid_t is used for object handles
+\li hsize_t is used for dimensions
+\li #herr_t is used for many return values
+
+\section LBProgLang Language specific files must be included in applications
+
+
+<ul>
+<li>Python: Add <code>import h5py / import numpy</code></li>
+<li>C: Add <code>\#include "hdf5.h"</code></li>
+<li>FORTRAN: Add <code>USE HDF5</code> and call <code>h5open_f</code> and <code>h5close_f</code> to initialize and close the HDF5 FORTRAN interface</li>
+</ul>
+
+
+Navigate back: \ref index "Main" / \ref GettingStarted / \ref LearnBasics
+
+@page LBFileCreate Creating an HDF5 File
+
+Navigate back: \ref index "Main" / \ref GettingStarted / \ref LearnBasics
+
+
+An HDF5 file is a binary file containing scientific data and supporting metadata.
+\section secLBFileCreate HDF5 File Access
+To create an HDF5 file, an application must specify not only a file name, but a file access mode,
+a file creation property list, and a file access property list. These terms are described below:
+
+
+<ul>
+<li><strong>File access mode:</strong>
+When creating a file, the file access mode specifies the action to take if the file already exists:
+<ul>
+<li>#H5F_ACC_TRUNC specifies that if the file already exists, the current contents will be deleted so
+that the application can rewrite the file with new data.</li>
+<li>#H5F_ACC_EXCL specifies that the open will fail if the file already exists. If the file does not
+already exist, the file access parameter is ignored.</li>
+</ul>
+In either case, the application has both read and write access to the successfully created file.
+
+Note that there are two different access modes for opening existing files:
+<ul>
+<li>#H5F_ACC_RDONLY specifies that the application has read access but will not be allowed to write any data.</li>
+<li>#H5F_ACC_RDWR specifies that the application has read and write access.</li>
+</ul>
+</li>
+<li><strong>File creation property list:</strong> The file creation property list is used to
+control the file metadata. File metadata contains information about the size of the user-block*,
+the size of various file data structures used by the HDF5 library, etc. In this tutorial, the
+default file creation property list, #H5P_DEFAULT, is used.<br />
+*The user-block is a fixed-length block of data located at the beginning of the file which is
+ignored by the HDF5 library. The user-block may be used to store any data or information found
+to be useful to applications.</li>
+<li><strong>File access property list:</strong> The file access property list is used to
+control different methods of performing I/O on files. It also can be used to control how a file
+is closed (whether or not to delay the actual file close until all objects in a file are closed).
+The default file access property list, #H5P_DEFAULT, is used in this tutorial.</li>
+</ul>
+
+
+
+Please refer to the \ref sec_file section of the \ref UG and \ref H5F section in the \ref RM for
+detailed information regarding file access/creation property lists and access modes.
+
+The steps to create and close an HDF5 file are as follows:
+
+<ol>
+<li>Specify the file creation and access property lists, if necessary.</li>
+<li>Create the file.</li>
+<li>Close the file, and if necessary, close the property lists.</li>
+</ol>
+
+
+\section secLBFileExample Programming Example
+
+\subsection subsecLBFileExampleDesc Description
+The following example code demonstrates how to create and close an HDF5 file.
+
+C
+\code
+#include "hdf5.h"
+ #define FILE "file.h5"
+
+ int main() {
+
+ hid_t file_id; /* file identifier */
+ herr_t status;
+
+ /* Create a new file using default properties. */
+ file_id = H5Fcreate(FILE, H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
+
+ /* Terminate access to the file. */
+ status = H5Fclose(file_id);
+ }
+\endcode
+
+Fortran
+\code
+ PROGRAM FILEEXAMPLE
+
+ USE HDF5 ! This module contains all necessary modules
+
+ IMPLICIT NONE
+
+ CHARACTER(LEN=8), PARAMETER :: filename = "filef.h5" ! File name
+ INTEGER(HID_T) :: file_id ! File identifier
+
+ INTEGER :: error ! Error flag
+
+!
+! Initialize FORTRAN interface.
+!
+ CALL h5open_f (error)
+ !
+ ! Create a new file using default properties.
+ !
+ CALL h5fcreate_f(filename, H5F_ACC_TRUNC_F, file_id, error)
+
+ !
+ ! Terminate access to the file.
+ !
+ CALL h5fclose_f(file_id, error)
+!
+! Close FORTRAN interface.
+!
+ CALL h5close_f(error)
+ END PROGRAM FILEEXAMPLE
+\endcode
+
+See \ref LBExamples for the examples used in the Learning the Basics tutorial.
+
+For details on compiling an HDF5 application:
+[ \ref LBCompiling ]
+
+\subsection subsecLBFileExampleRem Remarks
+\li In C: The include file hdf5.h contains definitions and declarations and must be included
+in any program that uses the HDF5 library.
+
+In FORTRAN: The module HDF5 contains definitions and declarations and must be used in any
+program that uses the HDF5 library. Also note that h5open_f MUST be called at the beginning of an HDF5 Fortran
+application (prior to any HDF5 calls) to initialize the library and variables, and the h5close_f call MUST be at
+the end of the HDF5 Fortran application.
+\li #H5Fcreate creates an HDF5 file and returns the file identifier.
+For Fortran, the file creation property list and file access property list are optional. They can be omitted if the
+default values are to be used.
+The root group is automatically created when a file is created. Every file has a root group and the path name of
+the root group is always /.
+\li #H5Fclose terminates access to an HDF5 file.
+When an HDF5 file is no longer accessed by a program, #H5Fclose must be called to release the resources used by the file.
+This call is mandatory.
+Note that if #H5Fclose is called for a file, but one or more objects within the file remain open, those objects will
+remain accessible until they are individually closed. This can cause access problems for other users, if objects were
+inadvertently left open. A File Access property controls how the file is closed.
+
+\subsection subsecLBFileExampleCont File Contents
+The HDF Group has developed tools for examining the contents of HDF5 files. The tool used throughout the HDF5 tutorial
+is the HDF5 dumper, h5dump, which displays the file contents in human-readable form. The output of h5dump is an ASCII
+display formatted according to the HDF5 DDL grammar. This grammar is defined, using Backus-Naur Form, in the
+\ref DDLBNF110.
+
+To view the HDF5 file contents, simply type:
+\code
+h5dump <filename>
+\endcode
+
+
+<em>(Figure: the contents of file.h5 described as a directed graph.)</em>
+
+The selections above were tested with the
+h5_subsetbk.c
+example code. The memory dataspace was defined as one-dimensional.
+
+\subsection subsecLBDsetSubRWProgRem Remarks
+\li In addition to #H5Sselect_hyperslab, this example introduces the #H5Dget_space call to obtain the dataspace of a dataset.
+\li If using the default values for the stride and block parameters of #H5Sselect_hyperslab, then, for C you can specify NULL
+for these parameters, rather than passing in an array for each, and for Fortran 90 you can omit these parameters.
+
+
+Navigate back: \ref index "Main" / \ref GettingStarted / \ref LearnBasics
+
+@page LBDatatypes Datatype Basics
+Navigate back: \ref index "Main" / \ref GettingStarted / \ref LearnBasics
+
+
+\section secLBDtype What is a Datatype?
+A datatype is a collection of datatype properties which provide complete information for data conversion to or from that datatype.
+
+Datatypes in HDF5 can be grouped as follows:
+\li Pre-Defined Datatypes: These are datatypes that are created by HDF5. They are actually opened
+(and closed) by HDF5, and can have a different value from one HDF5 session to the next.
+\li Derived Datatypes: These are datatypes that are created or derived from the pre-defined datatypes.
+Although created from pre-defined types, they represent a category unto themselves. An example of a commonly used derived
+datatype is a string of more than one character.
+
+\section secLBDtypePre Pre-defined Datatypes
+The properties of pre-defined datatypes are:
+\li Pre-defined datatypes are opened and closed by HDF5.
+\li A pre-defined datatype is a handle and is NOT PERSISTENT. Its value can be different from one HDF5 session to the next.
+\li Pre-defined datatypes are Read-Only.
+\li As mentioned, other datatypes can be derived from pre-defined datatypes.
+
+There are two types of pre-defined datatypes, standard (file) and native.
+
+
+<strong>Standard</strong><br />
+A standard (or file) datatype can be:
+\li <strong>Atomic</strong>: A datatype which cannot be decomposed into smaller datatype units at the API level.
+The atomic datatypes are: integer, float, string (1-character), date and time, bitfield, reference, and opaque.
+\li <strong>Composite</strong>: An aggregation of one or more datatypes.
+Composite datatypes include: array, variable length, enumeration, and compound datatypes.
+Array, variable length, and enumeration datatypes are defined in terms of a single atomic datatype,
+whereas a compound datatype is a datatype composed of a sequence of datatypes.
+
+Notes:
+\li Standard pre-defined datatypes are the SAME on all platforms.
+\li They are the datatypes that you see in an HDF5 file.
+\li They are typically used when creating a dataset.
+
+<strong>Native</strong><br />
+Native pre-defined datatypes are used for memory operations, such as reading and writing. They are
+NOT THE SAME on different platforms. They are similar to C type names, and are aliased to the
+appropriate HDF5 standard pre-defined datatype for a given platform.
+
+For example, when on an Intel based PC, #H5T_NATIVE_INT is aliased to the standard pre-defined type,
+#H5T_STD_I32LE. On a MIPS machine, it is aliased to #H5T_STD_I32BE.
+
+Notes:
+\li Native datatypes are NOT THE SAME on all platforms.
+\li Native datatypes simplify memory operations (read/write). The HDF5 library automatically converts as needed.
+\li Native datatypes are NOT in an HDF5 File. The standard pre-defined datatype that a native datatype corresponds
+to is what you will see in the file.
+
+<strong>Pre-Defined</strong><br />
+The following table shows the native types and the standard pre-defined datatypes they correspond
+to. (Keep in mind that HDF5 can convert between datatypes, so you can specify a buffer of a larger
+type for a dataset of a given type. For example, you can read a dataset that has a short datatype
+into a long integer buffer.)
+
+
+
+<table>
+<caption>Some HDF5 pre-defined native datatypes and corresponding standard (file) type</caption>
+<tr><th>C Type</th><th>HDF5 Memory Type</th><th>HDF5 File Type*</th></tr>
+<tr><th colspan="3">Integer</th></tr>
+<tr><td>int</td><td>#H5T_NATIVE_INT</td><td>#H5T_STD_I32BE or #H5T_STD_I32LE</td></tr>
+<tr><td>short</td><td>#H5T_NATIVE_SHORT</td><td>#H5T_STD_I16BE or #H5T_STD_I16LE</td></tr>
+<tr><td>long</td><td>#H5T_NATIVE_LONG</td><td>#H5T_STD_I32BE, #H5T_STD_I32LE, #H5T_STD_I64BE or #H5T_STD_I64LE</td></tr>
+<tr><td>long long</td><td>#H5T_NATIVE_LLONG</td><td>#H5T_STD_I64BE or #H5T_STD_I64LE</td></tr>
+<tr><td>unsigned int</td><td>#H5T_NATIVE_UINT</td><td>#H5T_STD_U32BE or #H5T_STD_U32LE</td></tr>
+<tr><td>unsigned short</td><td>#H5T_NATIVE_USHORT</td><td>#H5T_STD_U16BE or #H5T_STD_U16LE</td></tr>
+<tr><td>unsigned long</td><td>#H5T_NATIVE_ULONG</td><td>#H5T_STD_U32BE, #H5T_STD_U32LE, #H5T_STD_U64BE or #H5T_STD_U64LE</td></tr>
+<tr><td>unsigned long long</td><td>#H5T_NATIVE_ULLONG</td><td>#H5T_STD_U64BE or #H5T_STD_U64LE</td></tr>
+<tr><th colspan="3">Float</th></tr>
+<tr><td>float</td><td>#H5T_NATIVE_FLOAT</td><td>#H5T_IEEE_F32BE or #H5T_IEEE_F32LE</td></tr>
+<tr><td>double</td><td>#H5T_NATIVE_DOUBLE</td><td>#H5T_IEEE_F64BE or #H5T_IEEE_F64LE</td></tr>
+</table>
+
+<table>
+<caption>Some HDF5 pre-defined native datatypes and corresponding standard (file) type</caption>
+<tr><th>F90 Type</th><th>HDF5 Memory Type</th><th>HDF5 File Type*</th></tr>
+<tr><td>integer</td><td>H5T_NATIVE_INTEGER</td><td>#H5T_STD_I32BE(8,16) or #H5T_STD_I32LE(8,16)</td></tr>
+<tr><td>real</td><td>H5T_NATIVE_REAL</td><td>#H5T_IEEE_F32BE or #H5T_IEEE_F32LE</td></tr>
+<tr><td>double precision</td><td>#H5T_NATIVE_DOUBLE</td><td>#H5T_IEEE_F64BE or #H5T_IEEE_F64LE</td></tr>
+</table>
+
+* Note that the HDF5 File Types listed are those that are most commonly created.
+ The file type created depends on the compiler switches and platforms being
+ used. For example, on the Cray an integer is 64-bit, and using #H5T_NATIVE_INT (C)
+ or H5T_NATIVE_INTEGER (F90) would result in an #H5T_STD_I64BE file type.
+
+
+
+The following code is an example of when you would use standard pre-defined datatypes vs. native types:
+\code
+ #include "hdf5.h"
+
+ int main(void) {
+
+ hid_t file_id, dataset_id, dataspace_id;
+ herr_t status;
+ hsize_t dims[2]={4,6};
+ int i, j, dset_data[4][6];
+
+ for (i = 0; i < 4; i++)
+ for (j = 0; j < 6; j++)
+ dset_data[i][j] = i * 6 + j + 1;
+
+ file_id = H5Fcreate ("dtypes.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
+
+ dataspace_id = H5Screate_simple (2, dims, NULL);
+
+ dataset_id = H5Dcreate (file_id, "/dset", H5T_STD_I32BE, dataspace_id,
+ H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
+
+ status = H5Dwrite (dataset_id, H5T_NATIVE_INT, H5S_ALL, H5S_ALL,
+ H5P_DEFAULT, dset_data);
+
+ status = H5Dclose (dataset_id);
+
+ status = H5Sclose (dataspace_id);
+
+ status = H5Fclose (file_id);
+
+ return 0;
+ }
+\endcode
+By using the native types when reading and writing, the code that reads from or writes to a dataset
+can be the same for different platforms.
+
+Can native types also be used when creating a dataset? Yes. However, just be aware that the resulting
+datatype in the file will be one of the standard pre-defined types and may be different than expected.
+
+What happens if you do not use the correct native datatype for a standard (file) datatype? Your data
+may be incorrect or not what you expect.
+
+\section secLBDtypeDer Derived Datatypes
+ANY pre-defined datatype can be used to derive user-defined datatypes.
+
+To create a datatype derived from a pre-defined type:
+
+
+\li Make a copy of the pre-defined datatype:
+\code
+ tid = H5Tcopy (H5T_STD_I32BE);
+\endcode
+\li Change the datatype.
+
+
+There are numerous datatype functions that allow a user to alter a pre-defined datatype. See
+\ref subsecLBDtypeSpecStr below for a simple example.
+
+Refer to the \ref H5T in the \ref RM. Example functions are #H5Tset_size and #H5Tset_precision.
+
+\section secLBDtypeSpec Specific Datatypes
+On the Examples by API
+page under Datatypes
+you will find many example programs for creating and reading datasets with different datatypes.
+
+Below is additional information on some of the datatypes. See
+the Examples by API
+page for examples of these datatypes.
+
+\subsection subsecLBDtypeSpec Array Datatype vs Array Dataspace
+#H5T_ARRAY is a datatype, and it should not be confused with the dataspace of a dataset. The dataspace
+of a dataset can consist of a regular array of elements. For example, the datatype for a dataset
+could be an atomic datatype like integer, and the dataset could be an N-dimensional appendable array,
+as specified by the dataspace. See #H5Screate and #H5Screate_simple for details.
+
+Unlimited dimensions and subsetting are not supported when using the #H5T_ARRAY datatype.
+
+The #H5T_ARRAY datatype was primarily created to address the simple case of a compound datatype
+when all members of the compound datatype are of the same type and there is no need to subset by
+compound datatype members. Creation of such a datatype is more efficient and I/O also requires
+less work, because there is no alignment involved.
+
+\subsection subsecLBDtypeSpecArr Array Datatype
+The array class of datatypes, #H5T_ARRAY, allows the construction of true, homogeneous,
+multi-dimensional arrays. Since these are homogeneous arrays, each element of the array
+will be of the same datatype, designated at the time the array is created.
+
+Users may be confused by this datatype, as opposed to a dataset with a simple atomic
+datatype (e.g., integer) that is an array. See \ref subsecLBDtypeSpec for more information.
+
+Arrays can be nested. Not only is an array datatype used as an element of an HDF5 dataset,
+but the elements of an array datatype may be of any datatype, including another array datatype.
+
+Array datatypes cannot be subdivided for I/O; the entire array must be transferred from one
+dataset to another.
+
+Within certain limitations, outlined in the next paragraph, array datatypes may be N-dimensional
+and of any dimension size. Unlimited dimensions, however, are not supported. Functionality similar
+to unlimited dimension arrays is available through the use of variable-length datatypes.
+
+The maximum number of dimensions, i.e., the maximum rank, of an array datatype is specified by
+the HDF5 library constant #H5S_MAX_RANK. The minimum rank is 1 (one). All dimension sizes must
+be greater than 0 (zero).
+
+One array datatype may only be converted to another array datatype if the number of dimensions
+and the sizes of the dimensions are equal and the datatype of the first array's elements can be
+converted to the datatype of the second array's elements.
+
+\subsubsection subsubsecLBDtypeSpecArrAPI Array Datatype APIs
+There are three functions that are specific to array datatypes: one, #H5Tarray_create, for creating
+an array datatype, and two, #H5Tget_array_ndims and #H5Tget_array_dims
+for working with existing array datatypes.
+
+
+<strong>Creating</strong><br />
+The function #H5Tarray_create creates a new array datatype object. Parameters specify
+\li the base datatype of each element of the array,
+\li the rank of the array, i.e., the number of dimensions,
+\li the size of each dimension, and
+\li the dimension permutation of the array, i.e., whether the elements of the array are listed in C or FORTRAN order.
+
+<strong>Working with existing array datatypes</strong><br />
+When working with existing arrays, one must first determine the rank, or number of dimensions, of the array.
+
+The function #H5Tget_array_ndims returns the rank of a specified array datatype.
+
+In many instances, one needs further information. The function #H5Tget_array_dims retrieves the
+size of each dimension.
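+
+As a rough illustration of these calls, the sketch below creates a two-dimensional array datatype and then queries its rank and dimension sizes. It assumes the HDF5 1.8+ versions of the routines (#H5Tarray_create2, #H5Tget_array_dims2) and uses made-up file and dataset names:
+\code
+#include "hdf5.h"
+#include <stdio.h>
+
+int main(void)
+{
+    hid_t   file, atype, space, dset;
+    hsize_t adims[2] = {3, 4};      /* each element is a 3x4 array of ints */
+    hsize_t dims[1]  = {5};         /* the dataset holds 5 such elements   */
+    hsize_t qdims[2];
+    int     ndims;
+
+    file  = H5Fcreate("array_type.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
+
+    /* Create an array datatype with a native int base type. */
+    atype = H5Tarray_create2(H5T_NATIVE_INT, 2, adims);
+
+    space = H5Screate_simple(1, dims, NULL);
+    dset  = H5Dcreate(file, "arrays", atype, space, H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
+
+    /* Query an existing array datatype: rank first, then dimension sizes. */
+    ndims = H5Tget_array_ndims(atype);
+    H5Tget_array_dims2(atype, qdims);
+    printf("array datatype rank %d, dims %llu x %llu\n", ndims,
+           (unsigned long long)qdims[0], (unsigned long long)qdims[1]);
+
+    H5Dclose(dset);
+    H5Sclose(space);
+    H5Tclose(atype);
+    H5Fclose(file);
+    return 0;
+}
+\endcode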
+
+\subsection subsecLBDtypeSpecCmpd Compound
+
+\subsubsection subsubsecLBDtypeSpecCmpdProp Properties of compound datatypes
+A compound datatype is similar to a struct in C or a common block in Fortran. It is a collection of
+one or more atomic types or small arrays of such types. To create and use a compound datatype
+you need to refer to various properties of the compound datatype:
+\li It is of class compound.
+\li It has a fixed total size, in bytes.
+\li It consists of zero or more members (defined in any order) with unique names and which occupy non-overlapping regions within the datum.
+\li Each member has its own datatype.
+\li Each member is referenced by an index number between zero and N-1, where N is the number of members in the compound datatype.
+\li Each member has a name which is unique among its siblings in a compound datatype.
+\li Each member has a fixed byte offset, which is the first byte (smallest byte address) of that member in a compound datatype.
+\li Each member can be a small array of up to four dimensions.
+
+Properties of members of a compound datatype are defined when the member is added to the compound type and cannot be subsequently modified.
+
+\subsubsection subsubsecLBDtypeSpecCmpdDef Defining compound datatypes
+Compound datatypes must be built out of other datatypes. First, one creates an empty compound
+datatype and specifies its total size. Then members are added to the compound datatype in any order.
+
+Member names. Each member must have a descriptive name, which is the key used to uniquely identify
+the member within the compound datatype. A member name in an HDF5 datatype does not necessarily
+have to be the same as the name of the corresponding member in the C struct in memory, although
+this is often the case. Nor does one need to define all members of the C struct in the HDF5
+compound datatype (or vice versa).
+
+Offsets. Usually a C struct will be defined to hold a data point in memory, and the offsets of the
+members in memory will be the offsets of the struct members from the beginning of an instance of the
+struct. The library defines the macro to compute the offset of a member within a struct:
+\code
+ HOFFSET(s,m)
+\endcode
+This macro computes the offset of member m within a struct variable s.
+
+Here is an example in which a compound datatype is created to describe complex numbers whose type
+is defined by the complex_t struct.
+\code
+typedef struct {
+ double re; /*real part */
+ double im; /*imaginary part */
+} complex_t;
+
+complex_t tmp; /*used only to compute offsets */
+hid_t complex_id = H5Tcreate (H5T_COMPOUND, sizeof tmp);
+H5Tinsert (complex_id, "real", HOFFSET(tmp,re), H5T_NATIVE_DOUBLE);
+H5Tinsert (complex_id, "imaginary", HOFFSET(tmp,im), H5T_NATIVE_DOUBLE);
+\endcode
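+
+To show where such a compound type fits in, here is a short, hedged sketch that writes a few complex values to a dataset using the type defined above; the file and dataset names are illustrative only:
+\code
+#include "hdf5.h"
+
+typedef struct {
+    double re;   /* real part      */
+    double im;   /* imaginary part */
+} complex_t;
+
+int main(void)
+{
+    complex_t data[3] = {{1.0, 0.5}, {2.0, -1.0}, {0.0, 3.25}};
+    hsize_t   dims[1] = {3};
+    hid_t     file, ctype, space, dset;
+
+    /* Build the compound datatype as in the text above (HOFFSET used with the struct type). */
+    ctype = H5Tcreate(H5T_COMPOUND, sizeof(complex_t));
+    H5Tinsert(ctype, "real", HOFFSET(complex_t, re), H5T_NATIVE_DOUBLE);
+    H5Tinsert(ctype, "imaginary", HOFFSET(complex_t, im), H5T_NATIVE_DOUBLE);
+
+    /* Write three complex values to a dataset that uses the compound type. */
+    file  = H5Fcreate("complex.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
+    space = H5Screate_simple(1, dims, NULL);
+    dset  = H5Dcreate(file, "complex_values", ctype, space, H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
+    H5Dwrite(dset, ctype, H5S_ALL, H5S_ALL, H5P_DEFAULT, data);
+
+    H5Dclose(dset);
+    H5Sclose(space);
+    H5Tclose(ctype);
+    H5Fclose(file);
+    return 0;
+}
+\endcode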
+
+\subsection subsecLBDtypeSpecRef Reference
+There are two types of Reference datatypes in HDF5:
+\li \ref subsubsecLBDtypeSpecRefObj
+\li \ref subsubsecLBDtypeSpecRefDset
+
+\subsubsection subsubsecLBDtypeSpecRefObj Reference to objects
+In HDF5, objects (i.e. groups, datasets, and named datatypes) are usually accessed by name.
+There is another way to access stored objects -- by reference.
+
+An object reference is based on the relative file address of the object header in the file
+and is constant for the life of the object. Once a reference to an object is created and
+stored in a dataset in the file, it can be used to dereference the object it points to.
+References are handy for creating a file index or for grouping related objects by storing
+references to them in one dataset.
+
+
+<strong>Creating and storing references to objects</strong><br />
+The following steps are involved in creating and storing file references to objects:
+<ol>
+<li>Create the objects or open them if they already exist in the file.</li>
+<li>Create a dataset to store the objects' references, by specifying #H5T_STD_REF_OBJ as the datatype.</li>
+<li>Create and store references to the objects in a buffer, using #H5Rcreate.</li>
+<li>Write a buffer with the references to the dataset, using #H5Dwrite with the #H5T_STD_REF_OBJ datatype.</li>
+</ol>
+
+<strong>Reading references and accessing objects using references</strong><br />
+The following steps are involved:
+<ol>
+<li>Open the dataset with the references and read them. The #H5T_STD_REF_OBJ datatype must be used to describe the memory datatype.</li>
+<li>Use the read reference to obtain the identifier of the object the reference points to using #H5Rdereference.</li>
+<li>Open the dereferenced object and perform the desired operations.</li>
+<li>Close all objects when the task is complete.</li>
+</ol>
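+A minimal sketch of the creating-and-storing steps might look like the following; the object and file names are made up for illustration and error checking is omitted:
+\code
+#include "hdf5.h"
+
+int main(void)
+{
+    hid_t      file, grp, space, dset, ref_space, ref_dset;
+    hsize_t    dims[1] = {2};
+    hobj_ref_t refs[2];
+
+    file = H5Fcreate("refs.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
+
+    /* Step 1: create the objects the references will point to. */
+    grp   = H5Gcreate(file, "a_group", H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
+    space = H5Screate(H5S_SCALAR);
+    dset  = H5Dcreate(file, "a_dataset", H5T_NATIVE_INT, space, H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
+
+    /* Step 2: a dataset of object references uses the H5T_STD_REF_OBJ datatype. */
+    ref_space = H5Screate_simple(1, dims, NULL);
+    ref_dset  = H5Dcreate(file, "object_index", H5T_STD_REF_OBJ, ref_space,
+                          H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
+
+    /* Step 3: create references to the group and the dataset (space id is -1 for object refs). */
+    H5Rcreate(&refs[0], file, "a_group", H5R_OBJECT, -1);
+    H5Rcreate(&refs[1], file, "a_dataset", H5R_OBJECT, -1);
+
+    /* Step 4: write the reference buffer to the file. */
+    H5Dwrite(ref_dset, H5T_STD_REF_OBJ, H5S_ALL, H5S_ALL, H5P_DEFAULT, refs);
+
+    H5Dclose(ref_dset); H5Sclose(ref_space);
+    H5Dclose(dset); H5Sclose(space); H5Gclose(grp);
+    H5Fclose(file);
+    return 0;
+}
+\endcode
+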
+
+\subsubsection subsubsecLBDtypeSpecRefDset Reference to a dataset region
+A dataset region reference points to a dataset selection in another dataset.
+A reference to the dataset selection (region) is constant for the life of the dataset.
+
+
+<strong>Creating and storing references to dataset regions</strong><br />
+The following steps are involved in creating and storing references to a dataset region:
+\li Create a dataset to store the dataset region (selection), by passing in #H5T_STD_REF_DSETREG for the datatype when calling #H5Dcreate.
+\li Create selection(s) in existing dataset(s) using #H5Sselect_hyperslab and/or #H5Sselect_elements.
+\li Create reference(s) to the selection(s) using #H5Rcreate and store them in a buffer.
+\li Write the references to the dataset regions in the file.
+\li Close all objects.
+
+<strong>Reading references to dataset regions</strong><br />
+The following steps are involved in reading references to dataset regions and referenced dataset regions (selections):
+<ol>
+<li>Open and read the dataset containing references to the dataset regions.
+The datatype #H5T_STD_REF_DSETREG must be used during the read operation.</li>
+<li>Use #H5Rdereference to obtain the dataset identifier from the read dataset region reference,
+OR
+use #H5Rget_region to obtain the dataspace identifier for the dataset containing the selection from the read dataset region reference.</li>
+<li>With the dataspace identifier, the \ref H5S interface functions, H5Sget_select_*,
+can be used to obtain information about the selection.</li>
+<li>Close all objects when they are no longer needed.</li>
+</ol>
+
+
+The dataset with the region references was read by #H5Dread with the #H5T_STD_REF_DSETREG datatype specified.
+
+The read reference can be used to obtain the dataset identifier by calling #H5Rdereference, or to obtain
+spatial information (dataspace and selection) with a call to #H5Rget_region.
+
+The reference to the dataset region has information for both the dataset itself and its selection. In both functions:
+\li The first parameter is an identifier of the dataset with the region references.
+\li The second parameter specifies the type of reference stored. In this example, a reference to the dataset region is stored.
+\li The third parameter is a buffer containing the reference of the specified type.
+
+This example introduces several H5Sget_select_* functions used to obtain information about selections:
+
+
+<table>
+<caption>H5Sget_select_* functions for obtaining information about a selection</caption>
+<tr><th>Function</th><th>Description</th></tr>
+<tr><td>#H5Sget_select_npoints</td><td>Returns the number of elements in the hyperslab</td></tr>
+<tr><td>#H5Sget_select_hyper_nblocks</td><td>Returns the number of blocks in the hyperslab</td></tr>
+<tr><td>#H5Sget_select_hyper_blocklist</td><td>Returns the "lower left" and "upper right" coordinates of the blocks in the hyperslab selection</td></tr>
+<tr><td>#H5Sget_select_bounds</td><td>Returns the coordinates of the "minimal" block containing a hyperslab selection</td></tr>
+<tr><td>#H5Sget_select_elem_npoints</td><td>Returns the number of points in the element selection</td></tr>
+<tr><td>#H5Sget_select_elem_pointlist</td><td>Returns the coordinates of points in the element selection</td></tr>
+</table>
+
+
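+As a small illustration (not part of the tutorial's own example set), the sketch below selects a hyperslab in a 10x10 dataspace and queries it with two of the functions above:
+\code
+#include "hdf5.h"
+#include <stdio.h>
+
+int main(void)
+{
+    hid_t    space;
+    hsize_t  dims[2]  = {10, 10};
+    hsize_t  start[2] = {2, 3};    /* offset of the hyperslab */
+    hsize_t  count[2] = {3, 4};    /* size of the hyperslab   */
+    hsize_t  lo[2], hi[2];
+    hssize_t npoints;
+
+    /* Create a 10x10 dataspace and select a 3x4 hyperslab within it. */
+    space = H5Screate_simple(2, dims, NULL);
+    H5Sselect_hyperslab(space, H5S_SELECT_SET, start, NULL, count, NULL);
+
+    /* Query the selection. */
+    npoints = H5Sget_select_npoints(space);
+    H5Sget_select_bounds(space, lo, hi);
+    printf("selection has %lld points, bounding box (%llu,%llu)-(%llu,%llu)\n",
+           (long long)npoints,
+           (unsigned long long)lo[0], (unsigned long long)lo[1],
+           (unsigned long long)hi[0], (unsigned long long)hi[1]);
+
+    H5Sclose(space);
+    return 0;
+}
+\endcode
+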
+\subsection subsecLBDtypeSpecStr String
+A simple example of creating a derived datatype is using the string datatype,
+#H5T_C_S1 (#H5T_FORTRAN_S1) to create strings of more than one character. Strings
+can be stored as either fixed or variable length, and may have different rules
+for padding of unused storage.
+
+\subsubsection subsecLBDtypeSpecStrFix Fixed Length 5-character String Datatype
+\code
+ hid_t strtype; /* Datatype ID */
+ herr_t status;
+
+ strtype = H5Tcopy (H5T_C_S1);
+ status = H5Tset_size (strtype, 5); /* create string of length 5 */
+\endcode
+
+\subsubsection subsecLBDtypeSpecStrVar Variable Length String Datatype
+\code
+ strtype = H5Tcopy (H5T_C_S1);
+ status = H5Tset_size (strtype, H5T_VARIABLE);
+\endcode
+
+The ability to derive datatypes from pre-defined types allows users to create any number of datatypes,
+from simple to very complex.
+
+As the term implies, variable length strings are strings of varying lengths. They are stored internally
+in a heap, potentially impacting efficiency in the following ways:
+\li Heap storage requires more space than regular raw data storage.
+\li Heap access generally reduces I/O efficiency because it requires individual read or write operations
+for each data element rather than one read or write per dataset or per data selection.
+\li A variable length dataset consists of pointers to the heaps of data, not the actual data. Chunking
+and filters, including compression, are not available for heaps.
+
+See \ref subsubsec_datatype_other_strings in the \ref UG, for more information on how fixed and variable
+length strings are stored.
+
+\subsection subsecLBDtypeSpecVL Variable Length
+Variable-length (VL) datatypes are sequences of an existing datatype (atomic, VL, or compound)
+which are not fixed in length from one dataset location to another. In essence, they are similar
+to C character strings -- a sequence of a type which is pointed to by a particular type of
+pointer -- although they are implemented more closely to FORTRAN strings by including an explicit
+length in the pointer instead of using a particular value to terminate the sequence.
+
+VL datatypes are useful to the scientific community in many different ways, some of which are listed below:
+
+
+\li <strong>Ragged arrays</strong>: Multi-dimensional ragged arrays can be implemented with the last (fastest changing)
+dimension being ragged by using a VL datatype as the type of the element stored. (Or as a field in a compound datatype.)
+
+
+\li <strong>Fractal arrays</strong>: If a compound datatype has a VL field of another compound type with VL fields
+(a nested VL datatype), this can be used to implement ragged arrays of ragged arrays, to whatever
+nesting depth is required for the user.
+
+
+\li <strong>Polygon lists</strong>: A common storage requirement is to efficiently store arrays of polygons with
+different numbers of vertices. VL datatypes can be used to efficiently and succinctly describe an
+array of polygons with different numbers of vertices.
+
+
+\li <strong>Character strings</strong>: Perhaps the most common use of VL datatypes will be to store C-like VL character
+strings in dataset elements or as attributes of objects.
+
+
+\li <strong>Indices</strong>: An array of VL object references could be used as an index to all the objects in a file
+which contain a particular sequence of dataset values. Perhaps an array something like the following:
+\code
+ Value1: Object1, Object3, Object9
+ Value2: Object0, Object12, Object14, Object21, Object22
+ Value3: Object2
+ Value4:
+ Value5: Object1, Object10, Object12
+ .
+ .
+\endcode
+
+
+\li <strong>Object Tracking</strong>: An array of VL dataset region references can be used as a method of tracking
+objects or features appearing in a sequence of datasets. Perhaps an array of them would look like:
+\code
+ Feature1: Dataset1:Region, Dataset3:Region, Dataset9:Region
+ Feature2: Dataset0:Region, Dataset12:Region, Dataset14:Region,
+ Dataset21:Region, Dataset22:Region
+ Feature3: Dataset2:Region
+ Feature4:
+ Feature5: Dataset1:Region, Dataset10:Region, Dataset12:Region
+ .
+ .
+\endcode
+
+
+
+\subsubsection subsubsecLBDtypeSpecVLMem Variable-length datatype memory management
+With each element possibly being of different sequence lengths for a dataset with a VL datatype,
+the memory for the VL datatype must be dynamically allocated. Currently there are two methods
+of managing the memory for VL datatypes: the standard C malloc/free memory allocation routines
+or a method of calling user-defined memory management routines to allocate or free memory. Since
+the memory allocated when reading (or writing) may be complicated to release, an HDF5 routine is
+provided to traverse a memory buffer and free the VL datatype information without leaking memory.
+
+\subsubsection subsubsecLBDtypeSpecVLDiv Variable-length datatypes cannot be divided
+VL datatypes are designed so that they cannot be subdivided by the library with selections, etc.
+This design was chosen due to the complexities in specifying selections on each VL element of a
+dataset through a selection API that is easy to understand. Also, the selection APIs work on
+dataspaces, not on datatypes. At some point in time, we may want to create a way for dataspaces
+to have VL components to them and we would need to allow selections of those VL regions, but
+that is beyond the scope of this document.
+
+\subsubsection subsubsecLBDtypeSpecVLErr What happens if the library runs out of memory while reading?
+It is possible for a call to #H5Dread to fail while reading in VL datatype information if the memory
+required exceeds that which is available. In this case, the #H5Dread call will fail gracefully and any
+VL data which has been allocated prior to the memory shortage will be returned to the system via the
+memory management routines detailed below. It may be possible to design a partial read API function
+at a later date, if demand for such a function warrants.
+
+\subsubsection subsubsecLBDtypeSpecVLStr Strings as variable-length datatypes
+Since character strings are a special case of VL data that is implemented in many different ways on
+different machines and in different programming languages, they are handled somewhat differently from
+other VL datatypes in HDF5.
+
+HDF5 has native VL strings for each language API, which are stored the same way on disk, but are
+exported through each language API in a natural way for that language. When retrieving VL strings
+from a dataset, users may choose to have them stored in memory as a native VL string or in HDF5's
+#hvl_t struct for VL datatypes.
+
+VL strings may be created in one of two ways: by creating a VL datatype with a base type of
+character, or by creating a string datatype (#H5T_C_S1) and setting its length to #H5T_VARIABLE. The second method is used to access native VL strings in memory. The
+library will convert between the two types, but they are stored on disk using different datatypes
+and have different memory representations.
+
+Multi-byte character representations, such as \em UNICODE or \em wide characters in C/C++, will need the
+appropriate character and string datatypes created so that they can be described properly through
+the datatype API. Additional conversions between these types and the current ASCII characters
+will also be required.
+
+Variable-width character strings (which might be compressed data or some other encoding) are not
+currently handled by this design. We will evaluate how to implement them based on user feedback.
+
+\subsubsection subsubsecLBDtypeSpecVLAPIs Variable-length datatype APIs
+
+
+<strong>Creation</strong><br />
+VL datatypes are created with the #H5Tvlen_create function as follows:
+\code
+type_id = H5Tvlen_create(hid_t base_type_id);
+\endcode
+The base datatype will be the datatype that the sequence is composed of, characters for character
+strings, vertex coordinates for polygon lists, etc. The base datatype specified for the VL datatype
+can be of any HDF5 datatype, including another VL datatype, a compound datatype, or an atomic datatype.
+
+
+<strong>Querying base datatype of VL datatype</strong><br />
+It may be necessary to know the base datatype of a VL datatype before memory is allocated, etc.
+The base datatype is queried with the #H5Tget_super function, described in the \ref H5T documentation.
+
+
+<strong>Querying minimum memory required for VL information</strong><br />
+In order to predict the memory usage that #H5Dread may need to allocate to store VL data while
+reading the data, the #H5Dvlen_get_buf_size function is provided:
+\code
+herr_t H5Dvlen_get_buf_size(hid_t dataset_id, hid_t type_id, hid_t space_id, hsize_t *size)
+\endcode
+This routine checks the number of bytes required to store the VL data from the dataset, using
+the \em space_id for the selection in the dataset on disk and the \em type_id for the memory representation
+of the VL data in memory. The *\em size value is modified according to how many bytes are required
+to store the VL data in memory.
+
+
+<strong>Specifying how to manage memory for the VL datatype</strong><br />
+The memory management method is determined by dataset transfer properties passed into the
+#H5Dread and #H5Dwrite functions with the dataset transfer property list.
+
+Default memory management is set by using #H5P_DEFAULT for the dataset transfer
+property list identifier. If #H5P_DEFAULT is used with #H5Dread, the system \em malloc and \em free
+calls will be used for allocating and freeing memory. In such a case, #H5P_DEFAULT should
+also be passed as the property list identifier to #H5Dvlen_reclaim.
+
+The rest of this subsection is relevant only to those who choose not to use default memory management.
+
+The user can choose whether to use the system \em malloc and \em free calls or user-defined, or custom,
+memory management functions. If user-defined memory management functions are to be used, the
+memory allocation and free routines must be defined via #H5Pset_vlen_mem_manager(), as follows:
+\code
+herr_t H5Pset_vlen_mem_manager(hid_t plist_id, H5MM_allocate_t alloc, void *alloc_info, H5MM_free_t free, void *free_info)
+\endcode
+The \em alloc and \em free parameters identify the memory management routines to be used. If the user
+has defined custom memory management routines, \em alloc and/or \em free should be set to make those
+routine calls (i.e., the name of the routine is used as the value of the parameter); if the user
+prefers to use the system's \em malloc and/or \em free, the \em alloc and \em free parameters, respectively, should be set to \em NULL.
+
+The prototypes for the user-defined functions would appear as follows:
+\code
+typedef void *(*H5MM_allocate_t)(size_t size, void *info);
+typedef void (*H5MM_free_t)(void *mem, void *free_info);
+\endcode
+The \em alloc_info and \em free_info parameters can be used to pass along any required information to
+the user's memory management routines.
+
+In summary, if the user has defined custom memory management routines, the name(s) of the routines
+are passed in the \em alloc and \em free parameters and the custom routines' parameters are passed in the
+\em alloc_info and \em free_info parameters. If the user wishes to use the system \em malloc and \em free functions,
+the \em alloc and/or \em free parameters are set to \em NULL and the \em alloc_info and \em free_info parameters are ignored.
+
+
+<strong>Recovering memory from VL buffers read in</strong><br />
+The complex memory buffers created for a VL datatype may be reclaimed with the #H5Dvlen_reclaim
+function call, as follows:
+\code
+herr_t H5Dvlen_reclaim(hid_t type_id, hid_t space_id, hid_t plist_id, void *buf);
+\endcode
+
+The \em type_id must be the datatype stored in the buffer, \em space_id describes the selection for the
+memory buffer to free the VL datatypes within, \em plist_id is the dataset transfer property list
+which was used for the I/O transfer to create the buffer, and \em buf is the pointer to the buffer
+to free the VL memory within. The VL structures (#hvl_t) in the user's buffer are modified to zero
+out the VL information after it has been freed.
+
+If nested VL datatypes were used to create the buffer, this routine frees them from the bottom up,
+releasing all the memory without creating memory leaks.
+
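+Putting the pieces together, here is a hedged sketch that creates a VL datatype of integers, writes two sequences of different lengths, and frees the application-allocated buffers; the file and dataset names are assumptions:
+\code
+#include "hdf5.h"
+#include <stdlib.h>
+
+int main(void)
+{
+    hid_t   file, vltype, space, dset;
+    hsize_t dims[1] = {2};
+    hvl_t   wdata[2];
+    int     i;
+
+    /* Build two variable-length integer sequences of different lengths. */
+    wdata[0].len = 3;  wdata[0].p = malloc(3 * sizeof(int));
+    wdata[1].len = 5;  wdata[1].p = malloc(5 * sizeof(int));
+    for (i = 0; i < 3; i++) ((int *)wdata[0].p)[i] = i;
+    for (i = 0; i < 5; i++) ((int *)wdata[1].p)[i] = i * i;
+
+    file   = H5Fcreate("vl.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
+    vltype = H5Tvlen_create(H5T_NATIVE_INT);     /* VL sequence of native ints */
+    space  = H5Screate_simple(1, dims, NULL);
+    dset   = H5Dcreate(file, "vl_data", vltype, space, H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
+
+    H5Dwrite(dset, vltype, H5S_ALL, H5S_ALL, H5P_DEFAULT, wdata);
+
+    /* The application allocated this memory itself, so it frees it directly;
+       buffers allocated by H5Dread would instead be released with H5Dvlen_reclaim. */
+    free(wdata[0].p);
+    free(wdata[1].p);
+
+    H5Dclose(dset); H5Sclose(space); H5Tclose(vltype); H5Fclose(file);
+    return 0;
+}
+\endcode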
+
+Navigate back: \ref index "Main" / \ref GettingStarted / \ref LearnBasics
+
+*/
diff --git a/doxygen/dox/LearnBasics3.dox b/doxygen/dox/LearnBasics3.dox
new file mode 100644
index 0000000..2fe0f52
--- /dev/null
+++ b/doxygen/dox/LearnBasics3.dox
@@ -0,0 +1,1015 @@
+/** @page LBPropsList Property Lists Basics
+Navigate back: \ref index "Main" / \ref GettingStarted / \ref LearnBasics
+
+
+\section secLBPList What is a Property (or Property List)?
+In HDF5, a property or property list is a characteristic or feature associated with an HDF5 object.
+There are default properties which handle the most common needs. These default properties are
+specified by passing in #H5P_DEFAULT for the Property List parameter of a function. Default properties
+can be modified by use of the \ref H5P interface and function parameters.
+
+The \ref H5P API allows a user to take advantage of the more powerful features in HDF5. It typically
+supports unusual cases when creating or accessing HDF5 objects. There is a programming model for
+working with Property Lists in HDF5 (see below).
+
+For examples of modifying a property list, see these tutorial topics:
+\li \see \ref LBDsetLayout
+\li \see \ref LBExtDset
+\li \see \ref LBComDset
+
+There are many Property Lists associated with creating and accessing objects in HDF5. See the
+\ref H5P Interface documentation in the HDF5 \ref RM for a list of the different
+properties associated with HDF5 interfaces.
+
+In summary:
+\li Properties are features of HDF5 objects, that can be changed by use of the Property List API and function parameters.
+\li Property lists provide a mechanism for adding functionality to HDF5 calls without increasing the number of arguments used for a given call.
+\li The Property List API supports unusual cases when creating and accessing HDF5 objects.
+
+\section secLBPListProg Programming Model
+Default properties are specified by simply passing in #H5P_DEFAULT (C) / H5P_DEFAULT_F (F90) for
+the property list parameter in those functions for which properties can be changed.
+
+The programming model for changing a property list is as follows:
+\li Create a copy or "instance" of the desired pre-defined property type, using the #H5Pcreate call. This
+will return a property list identifier. Please see the \ref RM entry for #H5Pcreate, for a comprehensive
+list of the property types.
+\li With the property list identifier, modify the property, using the \ref H5P APIs.
+\li Modify the object feature, by passing the property list identifier into the corresponding HDF5 object function.
+\li Close the property list when done, using #H5Pclose.
+
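+As a small illustration of this model (not one of the tutorial's own examples), the sketch below follows the four steps with a file creation property list and the #H5Pset_userblock property; the file name and the 512-byte size are assumptions:
+\code
+#include "hdf5.h"
+
+int main(void)
+{
+    hid_t fcpl, file;
+
+    /* 1. Create an instance of the file creation property list class. */
+    fcpl = H5Pcreate(H5P_FILE_CREATE);
+
+    /* 2. Modify a property: reserve a 512-byte user-block at the start of the file. */
+    H5Pset_userblock(fcpl, 512);
+
+    /* 3. Pass the property list to the object function that uses it. */
+    file = H5Fcreate("userblock.h5", H5F_ACC_TRUNC, fcpl, H5P_DEFAULT);
+
+    /* 4. Close the property list (and the file) when done. */
+    H5Pclose(fcpl);
+    H5Fclose(file);
+    return 0;
+}
+\endcode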
+
+Navigate back: \ref index "Main" / \ref GettingStarted / \ref LearnBasics
+
+@page LBDsetLayout Dataset Storage Layout
+Navigate back: \ref index "Main" / \ref GettingStarted / \ref LearnBasics
+
+
+\section secLBDsetLayoutDesc Description of a Dataset
+
+\section secLBDsetLayout Dataset Storage Layout
+The storage information, or storage layout, defines how the raw data values in the dataset are
+physically stored on disk. There are three ways that a dataset can be stored:
+\li contiguous
+\li chunked
+\li compact
+
+See the #H5Pset_layout/#H5Pget_layout APIs for details.
+
+\subsection subsecLBDsetLayoutCont Contiguous
+If the storage layout is contiguous, then the raw data values will be stored physically adjacent
+to each other in the HDF5 file (in one contiguous block). This is the default layout for a dataset.
+In other words, if you do not explicitly change the storage layout for the dataset, then it will
+be stored contiguously.
+
+
+
+\image html tutr-locons.png
+
+
+
+
+\subsection subsecLBDsetLayoutChunk Chunked
+With a chunked storage layout the data is stored in equal-sized blocks or chunks of
+a pre-defined size. The HDF5 library always writes and reads the entire chunk:
+
+
+
+\image html tutr-lochk.png
+
+
+
+
+Each chunk is stored as a separate contiguous block in the HDF5 file. There is a chunk index
+which keeps track of the chunks associated with a dataset:
+
+
+
+\image html tutr-lochks.png
+
+
+
+
+
+\subsubsection susubsecLBDsetLayoutChunkWhy Why Chunking?
+Chunking is required for enabling compression and other filters, as well as for creating extendible
+or unlimited dimension datasets.
+
+It is also commonly used when subsetting very large datasets. Using the chunking layout can
+greatly improve performance when subsetting large datasets, because only the chunks required
+will need to be accessed. However, it is easy to use chunking without considering the consequences
+of the chunk size, which can lead to strikingly poor performance.
+
+Note that a chunk always has the same rank as the dataset and the chunk's dimensions do not need
+to be factors of the dataset dimensions.
+
+Writing or reading a chunked dataset is transparent to the application. You would use the same
+set of operations that you would use for a contiguous dataset. For example:
+\code
+ H5Dopen (...);
+ H5Sselect_hyperslab (...);
+ H5Dread (...);
+\endcode
+
+\subsubsection susubsecLBDsetLayoutChunkProb Problems Using Chunking
+Issues that can cause performance problems with chunking include:
+\li Chunks are too small.
+If a very small chunk size is specified for a dataset it can cause the dataset to be excessively
+large and it can result in degraded performance when accessing the dataset. The smaller the chunk
+size the more chunks that HDF5 has to keep track of, and the more time it will take to search for a chunk.
+\li Chunks are too large.
+An entire chunk has to be read and uncompressed before performing an operation. There can be a
+performance penalty for reading a small subset, if the chunk size is substantially larger than
+the subset. Also, a dataset may be larger than expected if there are chunks that only contain a
+small amount of data.
+\li A chunk does not fit in the Chunk Cache.
+Every chunked dataset has a chunk cache associated with it that has a default size of 1 MB. The
+purpose of the chunk cache is to improve performance by keeping chunks that are accessed frequently
+in memory so that they do not have to be accessed from disk. If a chunk is too large to fit in the
+chunk cache, it can significantly degrade performance. However, the size of the chunk cache can be
+increased by calling #H5Pset_chunk_cache.
+
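+Following up on the chunk cache point above, a hedged sketch of raising the cache size through a dataset access property list might look like this; the file name, dataset name, and cache parameters are assumptions:
+\code
+#include "hdf5.h"
+
+int main(void)
+{
+    hid_t file, dapl, dset;
+
+    file = H5Fopen("chunked.h5", H5F_ACC_RDONLY, H5P_DEFAULT);
+
+    /* Raise the chunk cache for this dataset from the 1 MB default to 16 MB. */
+    dapl = H5Pcreate(H5P_DATASET_ACCESS);
+    H5Pset_chunk_cache(dapl, 12421, 16 * 1024 * 1024, H5D_CHUNK_CACHE_W0_DEFAULT);
+
+    dset = H5Dopen(file, "dset", dapl);
+
+    /* ... read or write the dataset as usual ... */
+
+    H5Dclose(dset);
+    H5Pclose(dapl);
+    H5Fclose(file);
+    return 0;
+}
+\endcode
+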
+It is a good idea to:
+\li Avoid very small chunk sizes, and be aware of the 1 MB chunk cache size default.
+\li Test the data with different chunk sizes to determine the optimal chunk size to use.
+\li Consider the chunk size in terms of the most common access patterns that will be used once the dataset has been created.
+
+\subsection subsecLBDsetLayoutCom Compact
+A compact dataset is one in which the raw data is stored in the object header of the dataset.
+This layout is for very small datasets that can easily fit in the object header.
+
+The compact layout can improve storage and access performance for files that have many very tiny
+datasets. With one I/O access both the header and data values can be read. The compact layout reduces
+the size of a file, as the data is stored with the header which will always be allocated for a dataset.
+However, the object header is 64 KB in size, so this layout can only be used for very small datasets.
+
+\section secLBDsetLayoutProg Programming Model to Modify the Storage Layout
+To modify the storage layout, the following steps must be done:
+\li Create a Dataset Creation Property list. (See #H5Pcreate)
+\li Modify the property list.
+To use chunked storage layout, call: #H5Pset_chunk
+To use the compact storage layout, call: #H5Pset_layout
+\li Create a dataset with the modified property list. (See #H5Dcreate)
+\li Close the property list. (See #H5Pclose)
+For example code, see the \ref HDF5Examples page.
+Specifically look at the Examples by API.
+There are examples for different languages.
+
+The C example to create a chunked dataset is:
+h5ex_d_chunk.c
+The C example to create a compact dataset is:
+h5ex_d_compact.c
+
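+For orientation, a bare-bones sketch of this programming model for a chunked dataset follows (names and sizes are made up; a compact dataset would instead call #H5Pset_layout with H5D_COMPACT on the same property list):
+\code
+#include "hdf5.h"
+
+int main(void)
+{
+    hid_t   file, space, dcpl, dset;
+    hsize_t dims[2]  = {8, 8};
+    hsize_t chunk[2] = {4, 4};
+
+    file  = H5Fcreate("layout.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
+    space = H5Screate_simple(2, dims, NULL);
+
+    /* Dataset creation property list with a 4x4 chunked layout. */
+    dcpl = H5Pcreate(H5P_DATASET_CREATE);
+    H5Pset_chunk(dcpl, 2, chunk);
+
+    dset = H5Dcreate(file, "chunked", H5T_NATIVE_INT, space,
+                     H5P_DEFAULT, dcpl, H5P_DEFAULT);
+
+    H5Dclose(dset);
+    H5Pclose(dcpl);
+    H5Sclose(space);
+    H5Fclose(file);
+    return 0;
+}
+\endcode
+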
+\section secLBDsetLayoutChange Changing the Layout after Dataset Creation
+The dataset layout is a Dataset Creation Property List. This means that once the dataset has been
+created the dataset layout cannot be changed. The h5repack utility can be used to write a file
+to a new file with a new layout.
+
+\section secLBDsetLayoutSource Sources of Information
+Chunking in HDF5
+(See the documentation on Advanced Topics in HDF5)
+\see \ref sec_plist in the HDF5 \ref UG.
+
+
+Navigate back: \ref index "Main" / \ref GettingStarted / \ref LearnBasics
+
+
+@page LBExtDset Extendible Datasets
+Navigate back: \ref index "Main" / \ref GettingStarted / \ref LearnBasics
+
+
+\section secLBExtDsetCreate Creating an Extendible Dataset
+An extendible dataset is one whose dimensions can grow. HDF5 allows you to define a dataset to have
+certain initial dimensions, then to later increase the size of any of the initial dimensions.
+
+HDF5 requires you to use chunking to define extendible datasets. This makes it possible to extend
+datasets efficiently without having to excessively reorganize storage. (To use chunking efficiently,
+be sure to see the advanced topic, Chunking in HDF5.)
+
+The following operations are required in order to extend a dataset:
+\li Declare the dataspace of the dataset to have unlimited dimensions for all dimensions that might eventually be extended.
+\li Set dataset creation properties to enable chunking.
+\li Create the dataset.
+\li Extend the size of the dataset.
+
+\section secLBExtDsetProg Programming Example
+
+\subsection subsecLBExtDsetProgDesc Description
+See \ref LBExamples for the examples used in the \ref LearnBasics tutorial.
+
+The example shows how to create a 3 x 3 extendible dataset, write to that dataset, extend the dataset
+to 10x3, and write to the dataset again.
+
+For details on compiling an HDF5 application:
+[ \ref LBCompiling ]
+
+\subsection subsecLBExtDsetProgRem Remarks
+\li An unlimited dimension dataspace is specified with the #H5Screate_simple call, by passing in
+#H5S_UNLIMITED as an element of the maxdims array.
+\li The #H5Pcreate call creates a new property as an instance of a property list class. For creating
+an extendible array dataset, pass in #H5P_DATASET_CREATE for the property list class.
+\li The #H5Pset_chunk call modifies a Dataset Creation Property List instance to store a chunked
+layout dataset and sets the size of the chunks used.
+\li To extend an unlimited dimension dataset, use the #H5Dset_extent call. Please be aware that
+after this call, the dataset's dataspace must be refreshed with #H5Dget_space before more data can be accessed.
+\li The #H5Pget_chunk call retrieves the size of chunks for the raw data of a chunked layout dataset.
+\li Once there is no longer a need for a Property List instance, it should be closed with the #H5Pclose call.
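+
+Putting these remarks together, a minimal C sketch of creating and extending such a dataset might look
+as follows (illustrative only, assuming an already-open file_id and omitting error checking):
+\code
+hsize_t dims[2]    = {3, 3};              /* initial size              */
+hsize_t maxdims[2] = {H5S_UNLIMITED, 3};  /* first dimension can grow  */
+hsize_t chunk[2]   = {3, 3};
+hsize_t newdims[2] = {10, 3};
+hid_t   space_id, dcpl_id, dset_id;
+
+/* Dataspace with an unlimited first dimension */
+space_id = H5Screate_simple(2, dims, maxdims);
+
+/* Chunking is required for extendible datasets */
+dcpl_id = H5Pcreate(H5P_DATASET_CREATE);
+H5Pset_chunk(dcpl_id, 2, chunk);
+
+dset_id = H5Dcreate(file_id, "/ext_dset", H5T_NATIVE_INT, space_id,
+                    H5P_DEFAULT, dcpl_id, H5P_DEFAULT);
+
+/* ... write the initial 3 x 3 data ... */
+
+/* Extend the dataset to 10 x 3, then refresh the dataspace before further I/O */
+H5Dset_extent(dset_id, newdims);
+H5Sclose(space_id);
+space_id = H5Dget_space(dset_id);
+\endcode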
+
+
+Navigate back: \ref index "Main" / \ref GettingStarted / \ref LearnBasics
+
+@page LBComDset Compressed Datasets
+Navigate back: \ref index "Main" / \ref GettingStarted / \ref LearnBasics
+
+
+\section secLBComDsetCreate Creating a Compressed Dataset
+HDF5 requires you to use chunking to create a compressed dataset. (To use chunking efficiently,
+be sure to see the advanced topic, Chunking in HDF5.)
+
+The following operations are required in order to create a compressed dataset:
+\li Create a dataset creation property list.
+\li Modify the dataset creation property list instance to enable chunking and to enable compression.
+\li Create the dataset.
+\li Close the dataset creation property list and dataset.
+
+For more information on compression, see the FAQ question on Using Compression in HDF5.
+
+\section secLBComDsetProg Programming Example
+
+\subsection subsecLBComDsetProgDesc Description
+See \ref LBExamples for the examples used in the \ref LearnBasics tutorial.
+
+The example creates a chunked and ZLIB compressed dataset. It also includes comments for what needs
+to be done to create an SZIP compressed dataset. The example then reopens the dataset, prints the
+filter information, and reads the dataset.
+
+For details on compiling an HDF5 application:
+[ \ref LBCompiling ]
+
+\subsection subsecLBComDsetProgRem Remarks
+\li The #H5Pset_chunk call modifies a Dataset Creation Property List instance to store a chunked layout
+dataset and sets the size of the chunks used.
+\li The #H5Pset_deflate call modifies the Dataset Creation Property List instance to use ZLIB or DEFLATE
+compression. The #H5Pset_szip call modifies it to use SZIP compression. There are different compression
+parameters required for each compression method.
+\li SZIP compression can only be used with atomic datatypes that are integer, float, or char. It cannot be
+applied to compound, array, variable-length, enumerations, or other user-defined datatypes. The call
+to #H5Dcreate will fail if attempting to create an SZIP compressed dataset with a non-allowed datatype.
+The conflict can only be detected when the property list is used.
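+
+A minimal C sketch of these steps (illustrative only, using ZLIB/DEFLATE level 9, assuming an
+already-open file_id, and omitting error checking) might look like this:
+\code
+hsize_t dims[2]  = {32, 64};
+hsize_t chunk[2] = {4, 8};
+hid_t   space_id, dcpl_id, dset_id;
+
+space_id = H5Screate_simple(2, dims, NULL);
+
+/* Chunking must be enabled before a compression filter can be applied */
+dcpl_id = H5Pcreate(H5P_DATASET_CREATE);
+H5Pset_chunk(dcpl_id, 2, chunk);
+H5Pset_deflate(dcpl_id, 9);    /* ZLIB/DEFLATE compression, level 9 */
+/* For SZIP, call H5Pset_szip(dcpl_id, H5_SZIP_NN_OPTION_MASK, 8) instead */
+
+dset_id = H5Dcreate(file_id, "/DS1", H5T_STD_I32LE, space_id,
+                    H5P_DEFAULT, dcpl_id, H5P_DEFAULT);
+
+/* ... write data, then close the property list and dataset ... */
+H5Pclose(dcpl_id);
+H5Dclose(dset_id);
+H5Sclose(space_id);
+\endcode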
+
+
+Navigate back: \ref index "Main" / \ref GettingStarted / \ref LearnBasics
+
+@page LBContents Discovering the Contents of an HDF5 File
+Navigate back: \ref index "Main" / \ref GettingStarted / \ref LearnBasics
+
+
+\section secLBContents Discovering what is in an HDF5 file
+HDFView and h5dump are standalone tools which cannot be called within an application, and using
+#H5Dopen and #H5Dread requires that you know the name of the HDF5 dataset. How would an application
+that has no prior knowledge of an HDF5 file be able to determine or discover its contents,
+much like HDFView and h5dump do?
+
+The answer is that there are ways to discover the contents of an HDF5 file, by using the
+\ref H5G, \ref H5L and \ref H5O APIs:
+\li The \ref H5G interface (covered earlier) consists of routines for working with groups. A group is
+a structure that can be used to organize zero or more HDF5 objects, not unlike a Unix directory.
+\li The \ref H5L interface consists of link routines. A link is a path between groups. The \ref H5L interface
+allows objects to be accessed by use of these links.
+\li The \ref H5O interface consists of routines for working with objects. Datasets, groups, and committed
+datatypes are all objects in HDF5.
+
+Interface routines that simplify the process:
+\li #H5Literate traverses the links in a specified group, in the order of the specified index, using a
+user-defined callback routine. (A callback function is one that will be called when a certain condition
+is met, at a certain point in the future.)
+\li #H5Ovisit / #H5Lvisit recursively visit all objects/links accessible from a specified object/group.
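+
+As a rough illustration (not one of the distributed examples), the links in a group can be listed with
+#H5Literate and a user-defined callback similar to the following. The classic H5L_info_t callback
+signature is assumed here (newer releases also offer a versioned H5Literate2 routine), and the file
+name is hypothetical:
+\code
+#include <stdio.h>
+#include "hdf5.h"
+
+/* Callback invoked once for each link in the group being iterated over */
+static herr_t
+print_link_name(hid_t group, const char *name, const H5L_info_t *info, void *op_data)
+{
+    printf("Found link: %s\n", name);
+    return 0;    /* returning 0 continues the iteration */
+}
+
+int
+main(void)
+{
+    hid_t   file_id = H5Fopen("somefile.h5", H5F_ACC_RDONLY, H5P_DEFAULT);
+    hsize_t idx     = 0;
+
+    /* Visit the links in the root group in increasing name order */
+    H5Literate(file_id, H5_INDEX_NAME, H5_ITER_INC, &idx, print_link_name, NULL);
+
+    H5Fclose(file_id);
+    return 0;
+}
+\endcode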
+
+
+\section secLBContentsProg Programming Example
+
+\subsection subsecLBContentsProgUsing Using #H5Literate, #H5Lvisit and #H5Ovisit
+For example code, see the \ref HDF5Examples page.
+Specifically look at the Examples by API.
+There are examples for different languages, where examples of using #H5Literate and #H5Ovisit/#H5Lvisit are included.
+
+The h5ex_g_traverse example traverses a file using H5Literate:
+\li C: h5ex_g_traverse.c
+\li F90: h5ex_g_traverse_F03.f90
+
+The h5ex_g_visit example traverses a file using H5Ovisit and H5Lvisit:
+\li C: h5ex_g_visit.c
+\li F90: h5ex_g_visit_F03.f90
+
+
+Navigate back: \ref index "Main" / \ref GettingStarted / \ref LearnBasics
+
+@page LBQuiz Learning the basics QUIZ
+Navigate back: \ref index "Main" / \ref GettingStarted / \ref LearnBasics
+
+
+\ref LBFileOrg
+
+
Name and describe the two primary objects that can be stored in an HDF5 file.
+
+
What is an attribute?
+
+
Give the path name for an object called harry that is a member of a group called dick, which, in turn, is a member of the root group.
+
+
+
+\ref LBAPI
+
+
Describe the purpose of each of the following HDF5 APIs:
+\code
+ H5A, H5D, H5E, H5F, H5G, H5T, H5Z
+\endcode
+
+
+
+\ref LBFileCreate
+
+
What two HDF5 routines must be called to create an HDF5 file?
+
+
What include file must be included in any file that uses the HDF5 library?
+
+
An HDF5 file is never completely empty because as soon as it is created, it automatically contains a certain primary object. What is that object?
+
+
+
+\ref LBDsetCreate
+
+
Name and describe two major datatype categories.
+
+
List the HDF5 atomic datatypes. Give an example of a predefined datatype. How would you create a string dataset?
+
+
What does the dataspace describe? What are the major characteristics of the simple dataspace?
+
+
What information needs to be passed to the #H5Dcreate function, i.e., what information is needed to describe a dataset at creation time?
+
+
+
+
+\ref LBDsetRW
+
+
What are six pieces of information which need to be specified for reading and writing a dataset?
+
+
Why are both the memory dataspace and file dataspace needed for read/write operations, while only the memory datatype is required?
+
+
In Figure 6.1, what does this line mean?
+\code
+DATASPACE { SIMPLE (4 , 6 ) / ( 4 , 6 ) }
+\endcode
+
+
+
+
+\ref LBAttrCreate
+
+
What is an attribute?
+
+
Can partial I/O operations be performed on attributes?
+
+
+
+
+\ref LBGrpCreate
+
+
What are the two primary objects that can be included in a group?
+
+
+
+
+\ref LBGrpCreateNames
+
+
Group names can be specified in two ways. What are these two types of group names?
+
+
You have a dataset named moo in the group boo, which is in the group foo, which, in turn,
+is in the root group. How would you specify an absolute name to access this dataset?
+
+
+
+
+\ref LBGrpDset
+
+
Describe a way to access the dataset moo described in the previous section
+(question 2) using a relative name. Describe a way to access the same dataset using an absolute name.
+
+
+
+
+Navigate back: \ref index "Main" / \ref GettingStarted / \ref LearnBasics
+
+@page LBQuizAnswers Learning the basics QUIZ with Answers
+Navigate back: \ref index "Main" / \ref GettingStarted / \ref LearnBasics
+
+
+\ref LBFileOrg
+
+
Name and describe the two primary objects that can be stored in an HDF5 file.
+
+
+
Answer
+
+
Group: A grouping structure containing zero or more HDF5 objects, together with supporting metadata.
+Dataset: A multidimensional array of data elements, together with supporting metadata.
+
+
+
+
+
What is an attribute?
+
+
+
Answer
+
+
An HDF5 attribute is a user-defined HDF5 structure that provides extra information about an HDF5 object.
+
+
+
+
+
Give the path name for an object called harry that is a member of a group called dick, which, in turn, is a member of the root group.
+
+
+
Answer
+
+
/dick/harry
+
+
+
+
+
+
+\ref LBAPI
+
+
Describe the purpose of each of the following HDF5 APIs:
+\code
+ H5A, H5D, H5E, H5F, H5G, H5T, H5Z
+\endcode
+
+
+
Answer
+
+
H5A: Attribute access and manipulation routines
+
+H5D: Dataset access and manipulation routines
+
+H5E: Error handling routines
+
+H5F: File access routines
+
+H5G: Routines for creating and operating on groups
+
+H5T: Routines for creating and manipulating the datatypes of dataset elements
+
+H5Z: Data compression routines
+
+
+
+
+
+
+\ref LBFileCreate
+
+
What two HDF5 routines must be called to create an HDF5 file?
+
+
+
Answer
+
+
#H5Fcreate and #H5Fclose.
+
+
+
+
+
What include file must be included in any file that uses the HDF5 library?
+
+
+
Answer
+
+
hdf5.h must be included because it contains definitions and declarations used by the library.
+
+
+
+
+
An HDF5 file is never completely empty because as soon as it is created, it automatically contains a certain primary object. What is that object?
+
+
+
Answer
+
+
The root group.
+
+
+
+
+
+
+\ref LBDsetCreate
+
+
Name and describe two major datatype categories.
+
+
+
Answer
+
+
Atomic datatype: An atomic datatype cannot be decomposed into smaller units at the API level.
+
+Compound datatype: A compound datatype is a collection of atomic and compound datatypes, or small arrays of such types.
+
+
+
+
+
List the HDF5 atomic datatypes. Give an example of a predefined datatype. How would you create a string dataset?
+
+
+
Answer
+
+
There are six HDF5 atomic datatypes: integer, floating point, date and time, character string, bit field, and opaque.
+
+Examples of predefined datatypes include the following:
+\li #H5T_IEEE_F32LE - 4-byte little-endian, IEEE floating point
+\li #H5T_NATIVE_INT - native integer
+
+You would create a string dataset with the #H5T_C_S1 datatype, and set the size of the string with the #H5Tset_size call.
+
+
+
+
+
What does the dataspace describe? What are the major characteristics of the simple dataspace?
+
+
+
Answer
+
+
The dataspace describes the dimensionality of the dataset. A simple dataspace is characterized by its rank and dimension sizes.
+
+
+
+
+
What information needs to be passed to the #H5Dcreate function, i.e., what information is needed to describe a dataset at creation time?
+
+
+
Answer
+
+
The dataset location, name, dataspace, datatype, and dataset creation property list.
+
+
+
+
+
+
+
+\ref LBDsetRW
+
+
What are six pieces of information which need to be specified for reading and writing a dataset?
+
+
+
Answer
+
+
The dataset identifier, the dataset's datatype and dataspace in memory, the dataspace in the file,
+the dataset transfer property list, and a data buffer.
+
+
+
+
+
Why are both the memory dataspace and file dataspace needed for read/write operations, while only the memory datatype is required?
+
+
+
Answer
+
+
A dataset's file datatype is not required for a read/write operation because the file datatype is specified
+when the dataset is created and cannot be changed. Both file and memory dataspaces are required for dataset
+subsetting and for performing partial I/O operations.
+
+
+
+
+
In Figure 6.1, what does this line mean?
+\code
+DATASPACE { SIMPLE (4 , 6 ) / ( 4 , 6 ) }
+\endcode
+
+
+
Answer
+
+
It means that the dataset dset has a simple dataspace with the current dimensions (4,6) and the maximum size of the dimensions (4,6).
+
+
+
+
+
+
+
+\ref LBAttrCreate
+
+
What is an attribute?
+
+
+
Answer
+
+
An attribute is a dataset attached to an object. It describes the nature and/or the intended usage of the object.
+
+
+
+
+
Can partial I/O operations be performed on attributes?
+
+
+
Answer
+
+
No.
+
+
+
+
+
+
+
+\ref LBGrpCreate
+
+
What are the two primary objects that can be included in a group?
+
+
+
Answer
+
+
A group and a dataset.
+
+
+
+
+
+
+
+\ref LBGrpCreateNames
+
+
Group names can be specified in two ways. What are these two types of group names?
+
+
+
Answer
+
+
Relative and absolute.
+
+
+
+
+
You have a dataset named moo in the group boo, which is in the group foo, which, in turn,
+is in the root group. How would you specify an absolute name to access this dataset?
+
+
+
Answer
+
+
/foo/boo/moo
+
+
+
+
+
+
+
+\ref LBGrpDset
+
+
Describe a way to access the dataset moo described in the previous section
+(question 2) using a relative name. Describe a way to access the same dataset using an absolute name.
+
+
+
Answer
+
+
Access the group /foo and get the group ID. Access the group boo using the group ID obtained in Step 1.
+Access the dataset moo using the group ID obtained in Step 2.
+\code
+gid = H5Gopen (file_id, "/foo", H5P_DEFAULT);  /* absolute path */
+gid1 = H5Gopen (gid, "boo", H5P_DEFAULT);      /* relative path */
+did = H5Dopen (gid1, "moo", H5P_DEFAULT);      /* relative path */
+\endcode
+Access the group /foo and get the group ID. Access the dataset boo/moo with the group ID just obtained.
+\code
+gid = H5Gopen (file_id, "/foo", H5P_DEFAULT);  /* absolute path */
+did = H5Dopen (gid, "boo/moo", H5P_DEFAULT);   /* relative path */
+\endcode
+Access the dataset with an absolute path.
+\code
+did = H5Dopen (file_id, "/foo/boo/moo", H5P_DEFAULT); /* absolute path */
+\endcode
+
+
+
+
+
+
+
+Navigate back: \ref index "Main" / \ref GettingStarted / \ref LearnBasics
+
+@page LBCompiling Compiling HDF5 Applications
+Navigate back: \ref index "Main" / \ref GettingStarted / \ref LearnBasics
+
+
+\section secLBCompiling Tools and Instructions on Compiling
+Compiling applications to use the HDF5 Library can be as simple as executing:
+\code
+h5cc -o myprog myprog.c
+\endcode
+
+As an application's code base evolves, there are better solutions using autotools and makefiles or
+CMake and CMakeLists.txt files. Many tutorials and references can be found with a simple search.
+
+This tutorial section will discuss the use of compile scripts on Linux.
+See the \ref secLBCompilingVS section for compiling with Visual Studio.
+
+\section secLBCompilingLinux Compile Scripts
+When the library is built, the following compile scripts are included:
+\li h5cc: compile script for HDF5 C programs
+\li h5fc: compile script for HDF5 F90 programs
+\li h5c++: compile script for HDF5 C++ programs
+
+These scripts are easily used to compile single-file applications, such as those included in the tutorial.
+
+
+
Warning
+
+
+The h5cc/h5fc/h5c++ compile scripts are included when building with configure. Versions of
+these compile scripts have also been added to the CMake build for Linux ONLY. The CMake versions rely on pkg-config files.
+
+
+
+
+
Examples of Using the Unix Compile Scripts:
+Following are examples of compiling and running an application with the Unix compile scripts:
+\code
+ h5fc myprog.f90
+ ./a.out
+
+ h5cc -o myprog myprog.c
+ ./myprog
+\endcode
+
+To see how the libraries linked in with a compile script were configured and built, use the
+-showconfig option. For example, if using h5cc type:
+\code
+ h5cc -showconfig
+\endcode
+
+
Detailed Description of Unix Compile Scripts:
+The h5cc, h5c++, and h5fc compile scripts come with the HDF5 binary distributions (include files,
+libraries, and utilities) for the platforms we support. The h5c++ and h5fc utilities are ONLY present
+if the library was built with C++ and Fortran.
+
+\section secLBCompilingVS Using Visual Studio
+
+ 1. If you are building on 64-bit Windows, find the "Platform" dropdown
+ and select "x64". Also select the correct Configuration (Debug, Release, RelWithDebInfo, etc)
+
+ 2. Set up path for external headers
+
+ The HDF5 install path settings will need to be in the project property sheets per project.
+ Go to "Project" and select "Properties", find "Configuration Properties",
+ and then "C/C++".
+
+ 2.1 Add the header path to the "Additional Include Directories" setting. Under "C/C++"
+ find "General" and select "Additional Include Directories". Select "Edit" from the dropdown
+ and add the HDF5 install/include path to the list.
+ (Ex: "C:\Program Files\HDF_Group\HDF5\1.10.9\include")
+
+ 2.2 Building applications with the dynamic/shared hdf5 libraries requires
+ that the "H5_BUILT_AS_DYNAMIC_LIB" compile definition be used. Under "C/C++"
+ find "Preprocessor" and select "Preprocessor Definitions". Select "Edit" from the dropdown
+ and add "H5_BUILT_AS_DYNAMIC_LIB" to the list.
+
+ 3. Set up path for external libraries
+
+ The HDF5 install path/lib settings will need to be in the project property sheets per project.
+ Go to "Project" and select "Properties", find "Configuration Properties",
+ and then "Linker".
+
+ 3.1 Add the libraries to the "Additional Dependencies" setting. Under "Linker"
+ find "Input" and select "Additional Dependencies". Select "Edit" from the dropdown
+ and add the required HDF5 install/lib path to the list.
+ (Ex: "C:\Program Files\HDF_Group\HDF5\1.10.9\lib\hdf5.lib")
+
+ 3.2 For static builds, the external libraries should be added.
+ For example, to compile a C++ application, enter:
+ libhdf5_cpp.lib libhdf5.lib libz.lib libszaec.lib libaec.lib
+
+\section secLBCompilingLibs HDF5 Libraries
+Following are the libraries included with HDF5. Whether you are using the Unix compile scripts,
+Makefiles, or are compiling on Windows, these libraries may need to be specified explicitly. The order
+in which they are specified is important on Linux:
+
+
+
+HDF5 Static Libraries (the platform-specific file names differ between Linux, Mac, and Windows; the
+libraries are listed below in the order in which they should be linked):
+\code
+HDF5 High Level C++ APIs
+HDF5 C++ Library
+HDF5 High Level Fortran APIs
+HDF5 Fortran Library
+HDF5 High Level C APIs
+HDF5 C Library
+\endcode
+
+
+The pre-compiled binaries, in particular, are built (if at all possible) with these libraries as well as with
+SZIP and ZLIB. If you are using shared libraries, you may need to add the library path to LD_LIBRARY_PATH on Linux,
+or add the path to the bin folder to PATH on Windows.
+
+\section secLBCompilingCMake Compiling an Application with CMake
+
+\subsection subsecLBCompilingCMakeScripts CMake Scripts for Building Applications
+Simple scripts are provided for building applications with different languages and options.
+See CMake Scripts for Building Applications.
+
+For a more complete script (and to help resolve issues) see the script provided with the HDF5 Examples project.
+
+\subsection subsecLBCompilingCMakeExamples HDF5 Examples
+The installed HDF5 can be verified by compiling the HDF5 Examples project, included with the CMake built HDF5 binaries
+in the share folder or you can go to the HDF5 Examples github repository.
+
+Go into the share directory and follow the instructions in USING_CMake_examples.txt to build the examples.
+
+In general, users must first set the HDF5_ROOT environment variable to the installed location of the CMake
+configuration files for HDF5. For example, on Windows the following path might be set:
+
+\code
+ HDF5_ROOT=C:/Program Files/HDF_Group/HDF5/1.N.N
+\endcode
+
+\subsection subsecLBCompilingCMakeTroubless Troubleshooting CMake
+
How do you use find_package with HDF5?
+To use find_package you will first need to make sure that HDF5_ROOT is set correctly. For setting this
+environment variable see the Preconditions in the USING_HDF5_CMake.txt file in the share directory.
+
+See the CMakeLists.txt file provided with these examples for how to use find_package with HDF5.
+
+Please note that the find_package invocation changed to require "shared" or "static":
+\code
+ FIND_PACKAGE(HDF5 COMPONENTS C HL NO_MODULE REQUIRED shared)
+ FIND_PACKAGE(HDF5 COMPONENTS C HL NO_MODULE REQUIRED static)
+\endcode
+
+Previously, the find_package invocation was:
+\code
+ FIND_PACKAGE(HDF5 COMPONENTS C HL NO_MODULE REQUIRED)
+\endcode
+
+
My platform/compiler is not included. Can I still use the configuration files?
+Yes, you can but you will have to edit the HDF5_Examples.cmake file and update the variable:
+\code
+ CTEST_CMAKE_GENERATOR
+\endcode
+
+The generators for your platform can be seen by typing:
+\code
+ cmake --help
+\endcode
+
+
What do I do if the build fails?
+I received an error during the build and the application binary is not in the
+build directory as I expected. How do I determine what the problem is?
+
+If the error is not clear, then the first thing you may want to do is replace the -V (Dash Uppercase Vee)
+option for ctest in the build script with -VV (Dash Uppercase Vee Uppercase Vee). Then remove the build
+directory and re-run the build script. The output should be more verbose.
+
+If the error is still not clear, then check the log files. You will find those in the build directory.
+For example, on Unix the log files will be in:
+\code
+ build/Testing/Temporary/
+\endcode
+There are log files for the configure, build, and test steps.
+
+
+Navigate back: \ref index "Main" / \ref GettingStarted / \ref LearnBasics
+
+@page LBTraining Training Videos
+Navigate back: \ref index "Main" / \ref GettingStarted / \ref LearnBasics
+
+
+Training Videos
+
+
+Navigate back: \ref index "Main" / \ref GettingStarted / \ref LearnBasics
+
+*/
diff --git a/doxygen/dox/LearnHDFView.dox b/doxygen/dox/LearnHDFView.dox
index e62eb2f..b1f632c 100644
--- a/doxygen/dox/LearnHDFView.dox
+++ b/doxygen/dox/LearnHDFView.dox
@@ -1,4 +1,8 @@
/** @page LearnHDFView Learning HDF5 with HDFView
+
+Navigate back: \ref index "Main" / \ref GettingStarted
+
+
This tutorial enables you to get a feel for HDF5 by using the HDFView browser. It does NOT require
any programming experience.
@@ -92,11 +96,6 @@ Datatype information as is):
-
Under Dataspace, Maximum Size
-
-
57x57
-
-
Layout
Contiguous (default)
@@ -104,7 +103,7 @@ Datatype information as is):
-
Left click on the Data group in the tree view to see the Storm dataset in the TableView:
+
Click to expand the Data group in the tree view to see the Storm dataset:
@@ -117,7 +116,7 @@ Datatype information as is):
Copy the data from the storm1.txt file into the dataset.
If you downloaded storm1.txt,
-then right click on the Table menu and select Import Data from Text File.
+then click on the Import/Export Data menu and select Import Data from -> Text File.
Specify a location, select storm1.txt
and click on the Open button. Answer Yes in the dialog box that
pops up (which asks if you wish to paste the selected data).
@@ -134,7 +133,7 @@ The values will be entered into the spreadsheet.
-
Close the dataset, and save the data.
+
Table -> Close the dataset, and save the data.
\subsection subsec_learn_hv_topics_image Displaying a Dataset as an Image
@@ -191,15 +190,12 @@ The following illustrates how to add an attribute to the group /Data:
32
-
-
Value
-
-
3343
-
-
-
Select the Ok button. The attribute will show up in the Properties window.
-
Close the Properties window.
+
Select the Ok button. The attribute will show up under the Object Attribute Info tab.
+
Double-click the BatchID attribute line to open the data table for BatchID.
+
Click in the first cell and enter 3343 followed by the enter key.
+
Table -> Close, answer Yes in the dialog box that
+pops up (which asks if you wish to paste the selected data).
Adding an attribute to a dataset is very similar to adding an attribute to a group. For example,
the following adds an attribute to the /Storm dataset:
@@ -226,15 +222,12 @@ these values. (Be sure to add a String Length or the string will be tru
3
-
-
Value
-
-
m/s
-
-
-
Select the Ok button. The attribute will be displayed in the window.
-
Close the Properties window.
+
Select the Ok button. The attribute will show up under the Object Attribute Info tab.
+
Double-click the Units attribute line to open the data table for Units.
+
Click in the first cell and enter m/s followed by the enter key.
+
Table -> Close, answer Yes in the dialog box that
+pops up (which asks if you wish to paste the selected data).
@@ -253,9 +246,9 @@ in the file).
Please note that the chunk sizes used in this topic are for demonstration purposes only. For
information on chunking and specifying an appropriate chunk size, see the
-Chunking in HDF5 documentation.
+Chunking in HDF5 documentation.
-Also see the HDF5 Tutorial topic on Creating a Compressed Dataset.
+Also see the HDF5 Tutorial topic on \ref secLBComDsetCreate.
Right click on storm.h5. Select New -> Group.
Enter Image for the name of the group, and click the OK button to create the group.
@@ -283,12 +276,6 @@ Also see the HDF5 Tutorial topic on storm1.txt file into the dataset. (See the previous topic for copying
storm1.txt into a dataset.)
-
Close the table, and save the data.
+
Table -> Close, and save the data.
Right click on Another Storm, and select Open As.
Select the Image button in the Dataset Selection window that pops up. Click the Ok button at the
bottom of the window to view the dataset as an image.
@@ -373,7 +360,7 @@ create an image to begin with, as is shown below.
Close the dataset.
-
Double left-mouse click on the Data group to see its contents. You will see the Storm Image dataset.
+
Expand the Data group to see its contents. You will see the Storm Image dataset.
@@ -393,17 +380,6 @@ as a spreadsheet.
Left double click on Storm Image to see the image. Close the dataset.
-
Right click on Storm Image and select Show Properties from the pop-up menu, to open the Properties
-window. Select the Attributes tab to see the attributes:
-
-
-
-\image html hdfview-imgprop.png
-
-
-
-
-
Close the Properties window.
Right click on Storm Image and select Open As to bring up the Data Selection window.
Select a subset by clicking the left mouse on the image in the window and dragging the mouse.
Notice that the Height and Width values change. Select to display it as an image. Click Ok.
@@ -437,7 +413,7 @@ dataspace for the compound dataset is one-dimensional, then the dataset can be v
HDFView, as is shown below.
Right button click on the group Data. Select New -> Compound DS.
-
A window pops up on the right side of the screen. Only fill in the following fields:
+
A window pops up. Only fill in the following fields:
Dataset name
@@ -490,4 +466,7 @@ HDFView, as is shown below.
Close the dataset.
+
+Navigate back: \ref index "Main" / \ref GettingStarted
+
*/
diff --git a/doxygen/dox/UsersGuide.dox b/doxygen/dox/UsersGuide.dox
index 1898d3a..53c8ad7 100644
--- a/doxygen/dox/UsersGuide.dox
+++ b/doxygen/dox/UsersGuide.dox
@@ -285,7 +285,7 @@ These documents provide additional information for the use and tuning of specifi
diff --git a/doxygen/dox/ViewTools.dox b/doxygen/dox/ViewTools.dox
new file mode 100644
index 0000000..82d0ed6
--- /dev/null
+++ b/doxygen/dox/ViewTools.dox
@@ -0,0 +1,1198 @@
+/** @page ViewTools Tools for Viewing and Editing HDF5 Files
+
+Navigate back: \ref index "Main" / \ref GettingStarted
+
+
+\section secToolsBasic Basic Facts about HDF5
+The following are basic facts about HDF5 files to keep in mind while completing these tutorial topics:
+\li All HDF5 files contain a root group "/".
+\li There are two primary objects in HDF5, a group and a dataset:
+ Groups allow objects to be organized into a group structure, such as a tree.
+ Datasets contain raw data values.
+\li Additional information about an HDF5 object may optionally be stored in attributes attached to the object.
+
+\section secToolsTopics Tutorial Topics
+
+
+
+\li @ref LearnHDFView : Use HDFView to create, edit, and view files.
+\li @ref ViewToolsCommand : Use the HDF5 command-line tools for viewing, editing, and comparing HDF5 files.
+\li @ref ViewToolsJPSS : Use HDF5 tools to examine and work with JPSS NPP files.
+
+Navigate back: \ref index "Main" / \ref GettingStarted
+
+@page ViewToolsCommand Command-line Tools
+Navigate back: \ref index "Main" / \ref GettingStarted
+
+
+\section secViewToolsCommandObtain Obtain Tools and Files (Optional)
+Pre-built binaries for Linux and Windows are distributed within the respective HDF5 binary release
+packages, which can be obtained from the Download HDF5 page.
+
+HDF5 files can be obtained from various places, such as \ref HDF5Examples and the HDF-EOS Tools and
+Information Center. Specifically, the following examples are used in this tutorial topic:
+\li HDF5 Files created from compiling the \ref LBExamples
+\li HDF5 Files on the Examples by API page
+\li NPP JPSS files, SVM01_npp.. (gzipped)
+and SVM09_npp.. (gzipped)
+\li HDF-EOS OMI-Aura file
+
+\section secViewToolsCommandTutor Tutorial Topics
+A variety of command-line tools are included in the HDF5 binary distribution. There are tools to view,
+edit, convert and compare HDF5 files. This tutorial discusses the tools by their functionality. It
+does not cover all of the HDF5 tools.
+
+
+
+\section secViewToolsViewContent File Content and Structure
+The h5dump and h5ls tools can both be used to view the contents of an HDF5 file. The tools are discussed below:
+
+
+\li \ref subsecViewToolsViewContent_h5dump
+\li \ref subsecViewToolsViewContent_h5ls
+
+
+\subsection subsecViewToolsViewContent_h5dump h5dump
+The h5dump tool dumps or displays the contents of an HDF5 file as text. If you specify no options,
+h5dump displays the entire contents of the file. There are many h5dump options for examining specific details
+of a file. To see all of the available h5dump options, specify the -h
+or --help option:
+\code
+h5dump -h
+\endcode
+
+The following h5dump options can be helpful in viewing the content and structure of a file:
+
+
+
+\li -n, --contents : Displays a list of the objects in a file (see @ref subsubsecViewToolsViewContent_h5dumpEx1)
+\li -n 1, --contents=1 : Displays a list of the objects and attributes in a file (see @ref subsubsecViewToolsViewAttr_h5dumpEx6)
+\li -H, --header : Displays header information only, no data (see @ref subsubsecViewToolsViewContent_h5dumpEx2)
+\li -A 0, --onlyattr=0 : Suppresses the display of attributes (see @ref subsubsecViewToolsViewContent_h5dumpEx2)
+\li -N P, --any_path=P : Displays any object or attribute that matches path P (see @ref subsubsecViewToolsViewAttr_h5dumpEx6)
+
+
+\subsubsection subsubsecViewToolsViewContent_h5dumpEx1 Example 1
+The following command displays a list of the objects in the file OMI-Aura.he5 (an HDF-EOS5 file):
+\code
+h5dump -n OMI-Aura.he5
+\endcode
+
+As shown in the output below, the objects (groups, datasets) are listed to the left, followed by their
+names. You can see that this file contains two root groups, HDFEOS and HDFEOS INFORMATION:
+\code
+HDF5 "OMI-Aura.he5" {
+FILE_CONTENTS {
+ group /
+ group /HDFEOS
+ group /HDFEOS/ADDITIONAL
+ group /HDFEOS/ADDITIONAL/FILE_ATTRIBUTES
+ group /HDFEOS/GRIDS
+ group /HDFEOS/GRIDS/OMI Column Amount O3
+ group /HDFEOS/GRIDS/OMI Column Amount O3/Data Fields
+ dataset /HDFEOS/GRIDS/OMI Column Amount O3/Data Fields/ColumnAmountO3
+ dataset /HDFEOS/GRIDS/OMI Column Amount O3/Data Fields/RadiativeCloudFraction
+ dataset /HDFEOS/GRIDS/OMI Column Amount O3/Data Fields/SolarZenithAngle
+ dataset /HDFEOS/GRIDS/OMI Column Amount O3/Data Fields/ViewingZenithAngle
+ group /HDFEOS INFORMATION
+ dataset /HDFEOS INFORMATION/StructMetadata.0
+ }
+}
+\endcode
+
+\subsubsection subsubsecViewToolsViewContent_h5dumpEx2 Example 2
+The file structure of the OMI-Aura.he5 file can be seen with the following command. The -A 0 option suppresses the display of attributes:
+\code
+h5dump -H -A 0 OMI-Aura.he5
+\endcode
+
+Output of this command is shown below:
+\code
+HDF5 "OMI-Aura.he5" {
+GROUP "/" {
+ GROUP "HDFEOS" {
+ GROUP "ADDITIONAL" {
+ GROUP "FILE_ATTRIBUTES" {
+ }
+ }
+ GROUP "GRIDS" {
+ GROUP "OMI Column Amount O3" {
+ GROUP "Data Fields" {
+ DATASET "ColumnAmountO3" {
+ DATATYPE H5T_IEEE_F32LE
+ DATASPACE SIMPLE { ( 720, 1440 ) / ( 720, 1440 ) }
+ }
+ DATASET "RadiativeCloudFraction" {
+ DATATYPE H5T_IEEE_F32LE
+ DATASPACE SIMPLE { ( 720, 1440 ) / ( 720, 1440 ) }
+ }
+ DATASET "SolarZenithAngle" {
+ DATATYPE H5T_IEEE_F32LE
+ DATASPACE SIMPLE { ( 720, 1440 ) / ( 720, 1440 ) }
+ }
+ DATASET "ViewingZenithAngle" {
+ DATATYPE H5T_IEEE_F32LE
+ DATASPACE SIMPLE { ( 720, 1440 ) / ( 720, 1440 ) }
+ }
+ }
+ }
+ }
+ }
+ GROUP "HDFEOS INFORMATION" {
+ DATASET "StructMetadata.0" {
+ DATATYPE H5T_STRING {
+ STRSIZE 32000;
+ STRPAD H5T_STR_NULLTERM;
+ CSET H5T_CSET_ASCII;
+ CTYPE H5T_C_S1;
+ }
+ DATASPACE SCALAR
+ }
+ }
+}
+}
+\endcode
+
+\subsection subsecViewToolsViewContent_h5ls h5ls
+The h5ls tool by default just displays the objects in the root group. It will not display
+items in groups beneath the root group unless specified. Useful h5ls options for viewing
+file content and structure are:
+
+
+
+\li -r : Lists all groups and objects recursively (see @ref subsubsecViewToolsViewContent_h5lsEx3)
+\li -v : Generates verbose output (lists dataset properties, attributes and attribute values, but no dataset values)
+
+
+
+\subsubsection subsubsecViewToolsViewContent_h5lsEx3 Example 3
+The following command shows the contents of the HDF-EOS5 file OMI-Aura.he5. The output is similar to h5dump, except that h5ls also shows dataspace information for each dataset:
+\code
+h5ls -r OMI-Aura.he5
+\endcode
+
+The output is shown below:
+\code
+/ Group
+/HDFEOS Group
+/HDFEOS/ADDITIONAL Group
+/HDFEOS/ADDITIONAL/FILE_ATTRIBUTES Group
+/HDFEOS/GRIDS Group
+/HDFEOS/GRIDS/OMI\ Column\ Amount\ O3 Group
+/HDFEOS/GRIDS/OMI\ Column\ Amount\ O3/Data\ Fields Group
+/HDFEOS/GRIDS/OMI\ Column\ Amount\ O3/Data\ Fields/ColumnAmountO3 Dataset {720, 1440}
+/HDFEOS/GRIDS/OMI\ Column\ Amount\ O3/Data\ Fields/RadiativeCloudFraction Dataset {720, 1440}
+/HDFEOS/GRIDS/OMI\ Column\ Amount\ O3/Data\ Fields/SolarZenithAngle Dataset {720, 1440}
+/HDFEOS/GRIDS/OMI\ Column\ Amount\ O3/Data\ Fields/ViewingZenithAngle Dataset {720, 1440}
+/HDFEOS\ INFORMATION Group
+/HDFEOS\ INFORMATION/StructMetadata.0 Dataset {SCALAR}
+\endcode
+
+\section secViewToolsViewDset Datasets and Dataset Properties
+Both h5dump and h5ls can be used to view specific datasets.
+
+
+\li \ref subsecViewToolsViewDset_h5dump
+\li \ref subsecViewToolsViewDset_h5ls
+
+
+\subsection subsecViewToolsViewDset_h5dump h5dump
+Useful h5dump options for examining specific datasets include:
+
+
+
+\li -d D, --dataset=D : Displays dataset D (see @ref subsubsecViewToolsViewDset_h5dumpEx4)
+\li -H, --header : Displays header information only (see @ref subsubsecViewToolsViewDset_h5dumpEx4)
+\li -p, --properties : Displays dataset filters, storage layout, and fill value properties (see @ref subsubsecViewToolsViewDset_h5dumpEx5)
+\li -A 0, --onlyattr=0 : Suppresses the display of attributes (see @ref subsubsecViewToolsViewContent_h5dumpEx2)
+\li -N P, --any_path=P : Displays any object or attribute that matches path P (see @ref subsubsecViewToolsViewAttr_h5dumpEx6)
+
+
+\subsubsection subsubsecViewToolsViewDset_h5dumpEx4 Example 4
+A specific dataset can be viewed with h5dump using the -d D option and specifying the entire
+path and name of the dataset for D. The path is important in identifying the correct dataset,
+as there can be multiple datasets with the same name. The path can be determined by looking at
+the objects in the file with h5dump -n.
+
+The following example uses the groups.h5 file that is created by the
+\ref LBExamples
+example h5_crtgrpar.c. To display dset1 in the groups.h5 file below, specify dataset
+/MyGroup/dset1. The -H option is used to suppress printing of the data values:
+
+Contents of groups.h5
+\code
+ $ h5dump -n groups.h5
+ HDF5 "groups.h5" {
+ FILE_CONTENTS {
+ group /
+ group /MyGroup
+ group /MyGroup/Group_A
+ dataset /MyGroup/Group_A/dset2
+ group /MyGroup/Group_B
+ dataset /MyGroup/dset1
+ }
+ }
+\endcode
+
+Display dataset "dset1"
+\code
+ $ h5dump -d "/MyGroup/dset1" -H groups.h5
+ HDF5 "groups.h5" {
+ DATASET "/MyGroup/dset1" {
+ DATATYPE H5T_STD_I32BE
+ DATASPACE SIMPLE { ( 3, 3 ) / ( 3, 3 ) }
+ }
+ }
+\endcode
+
+\subsubsection subsubsecViewToolsViewDset_h5dumpEx5 Example 5
+The -p option is used to examine the dataset filters, storage layout, and fill value properties of a dataset.
+
+This option can be useful for checking how well compression works, or even for analyzing performance
+and dataset size issues related to chunking. (The smaller the chunk size, the more chunks that HDF5
+has to keep track of, which increases the size of the file and potentially affects performance.)
+
+In the file shown below the dataset /DS1 is both chunked and compressed:
+\code
+ $ h5dump -H -p -d "/DS1" h5ex_d_gzip.h5
+ HDF5 "h5ex_d_gzip.h5" {
+ DATASET "/DS1" {
+ DATATYPE H5T_STD_I32LE
+ DATASPACE SIMPLE { ( 32, 64 ) / ( 32, 64 ) }
+ STORAGE_LAYOUT {
+ CHUNKED ( 4, 8 )
+ SIZE 5278 (1.552:1 COMPRESSION)
+ }
+ FILTERS {
+ COMPRESSION DEFLATE { LEVEL 9 }
+ }
+ FILLVALUE {
+ FILL_TIME H5D_FILL_TIME_IFSET
+ VALUE 0
+ }
+ ALLOCATION_TIME {
+ H5D_ALLOC_TIME_INCR
+ }
+ }
+ }
+\endcode
+
+You can obtain the h5ex_d_gzip.c program that created this file, as well as the file created,
+from the Examples by API page.
+
+\subsection subsecViewToolsViewDset_h5ls h5ls
+Specific datasets can be specified with h5ls by simply adding the dataset path and name after the
+file name. As an example, this command displays dataset dset2 in the groups.h5
+file used in @ref subsubsecViewToolsViewDset_h5dumpEx4 :
+\code
+h5ls groups.h5/MyGroup/Group_A/dset2
+\endcode
+
+Just the dataspace information gets displayed:
+\code
+dset2 Dataset {2, 10}
+\endcode
+
+The following options can be used to see detailed information about a dataset.
+
+
+
+\li -v, --verbose : Generates verbose output (lists dataset properties, attributes and attribute values, but no dataset values)
+\li -d, --data : Displays dataset values
+
+The output of using -v is shown below:
+\code
+ $ h5ls -v groups.h5/MyGroup/Group_A/dset2
+ Opened "groups.h5" with sec2 driver.
+ dset2 Dataset {2/2, 10/10}
+ Location: 1:3840
+ Links: 1
+ Storage: 80 logical bytes, 80 allocated bytes, 100.00% utilization
+ Type: 32-bit big-endian integer
+\endcode
+
+The output of using -d is shown below:
+\code
+ $ h5ls -d groups.h5/MyGroup/Group_A/dset2
+ dset2 Dataset {2, 10}
+ Data:
+ (0,0) 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
+\endcode
+
+\section secViewToolsViewGrps Groups
+Both h5dump and h5ls can be used to view specific groups in a file.
+
+
+\li \ref subsecViewToolsViewGrps_h5dump
+\li \ref subsecViewToolsViewGrps_h5ls
+
+
+\subsection subsecViewToolsViewGrps_h5dump h5dump
+The h5dump options that are useful for examining groups are:
+
+
+
+\li -g G, --group=G : Displays group G and its members
+\li -H, --header : Displays header information only
+\li -A 0, --onlyattr=0 : Suppresses the display of attributes
+
+To view the contents of the HDFEOS group in the OMI file mentioned previously, you can specify the path and name of the group as follows:
+\code
+h5dump -g "/HDFEOS" -H -A 0 OMI-Aura.he5
+\endcode
+
+The -A 0 option suppresses attributes and -H suppresses printing of data values:
+\code
+ HDF5 "OMI-Aura.he5" {
+ GROUP "/HDFEOS" {
+ GROUP "ADDITIONAL" {
+ GROUP "FILE_ATTRIBUTES" {
+ }
+ }
+ GROUP "GRIDS" {
+ GROUP "OMI Column Amount O3" {
+ GROUP "Data Fields" {
+ DATASET "ColumnAmountO3" {
+ DATATYPE H5T_IEEE_F32LE
+ DATASPACE SIMPLE { ( 720, 1440 ) / ( 720, 1440 ) }
+ }
+ DATASET "RadiativeCloudFraction" {
+ DATATYPE H5T_IEEE_F32LE
+ DATASPACE SIMPLE { ( 720, 1440 ) / ( 720, 1440 ) }
+ }
+ DATASET "SolarZenithAngle" {
+ DATATYPE H5T_IEEE_F32LE
+ DATASPACE SIMPLE { ( 720, 1440 ) / ( 720, 1440 ) }
+ }
+ DATASET "ViewingZenithAngle" {
+ DATATYPE H5T_IEEE_F32LE
+ DATASPACE SIMPLE { ( 720, 1440 ) / ( 720, 1440 ) }
+ }
+ }
+ }
+ }
+ }
+ }
+\endcode
+
+\subsection subsecViewToolsViewGrps_h5ls h5ls
+You can view the contents of a group with h5ls by specifying the group after the file name.
+To use h5ls to view the contents of the /HDFEOS group in the OMI-Aura.he5 file, type:
+\code
+h5ls -r OMI-Aura.he5/HDFEOS
+\endcode
+
+The output of this command is:
+\code
+ /ADDITIONAL Group
+ /ADDITIONAL/FILE_ATTRIBUTES Group
+ /GRIDS Group
+ /GRIDS/OMI\ Column\ Amount\ O3 Group
+ /GRIDS/OMI\ Column\ Amount\ O3/Data\ Fields Group
+ /GRIDS/OMI\ Column\ Amount\ O3/Data\ Fields/ColumnAmountO3 Dataset {720, 1440}
+ /GRIDS/OMI\ Column\ Amount\ O3/Data\ Fields/RadiativeCloudFraction Dataset {720, 1440}
+ /GRIDS/OMI\ Column\ Amount\ O3/Data\ Fields/SolarZenithAngle Dataset {720, 1440}
+ /GRIDS/OMI\ Column\ Amount\ O3/Data\ Fields/ViewingZenithAngle Dataset {720, 1440}
+\endcode
+
+If you specify the -v option, you can also see the attributes and properties of the datasets.
+
+\section secViewToolsViewAttr Attributes
+
+\subsection subsecViewToolsViewAttr_h5dump h5dump
+Attributes are displayed by default if using h5dump. Some files contain many attributes, which
+can make it difficult to examine the objects in the file. Shown below are options that can help
+when using h5dump to work with files that have attributes.
+
+\subsubsection subsubsecViewToolsViewAttr_h5dumpEx6 Example 6
+The -a A option will display an attribute. However, the path to the attribute must be included
+when specifying this option. For example, to see the ScaleFactor attribute in the OMI-Aura.he5 file, type:
+\code
+h5dump -a "/HDFEOS/GRIDS/OMI Column Amount O3/Data Fields/SolarZenithAngle/ScaleFactor" OMI-Aura.he5
+\endcode
+
+This command displays:
+\code
+ HDF5 "OMI-Aura.he5" {
+ ATTRIBUTE "ScaleFactor" {
+ DATATYPE H5T_IEEE_F64LE
+ DATASPACE SIMPLE { ( 1 ) / ( 1 ) }
+ DATA {
+ (0): 1
+ }
+ }
+ }
+\endcode
+
+How can you determine the path to the attribute? This can be done by looking at the file contents with the -n 1 option:
+\code
+h5dump -n 1 OMI-Aura.he5
+\endcode
+
+Below is a portion of the output for this command:
+\code
+ HDF5 "OMI-Aura.he5" {
+ FILE_CONTENTS {
+ group /
+ group /HDFEOS
+ group /HDFEOS/ADDITIONAL
+ group /HDFEOS/ADDITIONAL/FILE_ATTRIBUTES
+ attribute /HDFEOS/ADDITIONAL/FILE_ATTRIBUTES/EndUTC
+ attribute /HDFEOS/ADDITIONAL/FILE_ATTRIBUTES/GranuleDay
+ attribute /HDFEOS/ADDITIONAL/FILE_ATTRIBUTES/GranuleDayOfYear
+ attribute /HDFEOS/ADDITIONAL/FILE_ATTRIBUTES/GranuleMonth
+ attribute /HDFEOS/ADDITIONAL/FILE_ATTRIBUTES/GranuleYear
+ attribute /HDFEOS/ADDITIONAL/FILE_ATTRIBUTES/InstrumentName
+ attribute /HDFEOS/ADDITIONAL/FILE_ATTRIBUTES/OrbitNumber
+ attribute /HDFEOS/ADDITIONAL/FILE_ATTRIBUTES/OrbitPeriod
+ attribute /HDFEOS/ADDITIONAL/FILE_ATTRIBUTES/PGEVersion
+ attribute /HDFEOS/ADDITIONAL/FILE_ATTRIBUTES/Period
+ attribute /HDFEOS/ADDITIONAL/FILE_ATTRIBUTES/ProcessLevel
+ attribute /HDFEOS/ADDITIONAL/FILE_ATTRIBUTES/StartUTC
+ attribute /HDFEOS/ADDITIONAL/FILE_ATTRIBUTES/TAI93At0zOfGranule
+
+ ...
+\endcode
+
+There can be multiple objects or attributes with the same name in a file. How can you make sure
+you are finding the correct object or attribute? You can first determine how many attributes
+there are with a specified name, and then examine the paths to them.
+
+The -N option can be used to display all objects or attributes with a given name.
+For example, there are four attributes with the name ScaleFactor in the OMI-Aura.he5 file,
+as can be seen below with the -N option:
+\code
+h5dump -N ScaleFactor OMI-Aura.he5
+\endcode
+
+It outputs:
+\code
+HDF5 "OMI-Aura.he5" {
+ATTRIBUTE "ScaleFactor" {
+ DATATYPE H5T_IEEE_F64LE
+ DATASPACE SIMPLE { ( 1 ) / ( 1 ) }
+ DATA {
+ (0): 1
+ }
+}
+ATTRIBUTE "ScaleFactor" {
+ DATATYPE H5T_IEEE_F64LE
+ DATASPACE SIMPLE { ( 1 ) / ( 1 ) }
+ DATA {
+ (0): 1
+ }
+}
+ATTRIBUTE "ScaleFactor" {
+ DATATYPE H5T_IEEE_F64LE
+ DATASPACE SIMPLE { ( 1 ) / ( 1 ) }
+ DATA {
+ (0): 1
+ }
+}
+ATTRIBUTE "ScaleFactor" {
+ DATATYPE H5T_IEEE_F64LE
+ DATASPACE SIMPLE { ( 1 ) / ( 1 ) }
+ DATA {
+ (0): 1
+ }
+}
+}
+\endcode
+
+\subsection subsecViewToolsViewAttr_h5ls h5ls
+If you include the -v (verbose) option for h5ls, you will see all of the attributes for the
+specified file, dataset or group. You cannot display individual attributes.
+
+\section secViewToolsViewSub Dataset Subset
+
+\subsection subsecViewToolsViewSub_h5dump h5dump
+If you have a very large dataset, you may wish to subset or see just a portion of the dataset.
+This can be done with the following h5dump options.
+
+
+
Option
+
Description
+
+
+
-d D, --dataset=D
+
+
Dataset D
+
+
+
+
-s START, --start=START
+
+
Offset or start of subsetting selection
+
+
+
+
-S STRIDE, --stride=STRIDE
+
+
Stride (sampling along a dimension). The default (unspecified, or 1) selects
+every element along a dimension, a value of 2 selects every other element,
+a value of 3 selects every third element, ...
+
+
+
+
-c COUNT, --count=COUNT
+
+
Number of blocks to include in the selection
+
+
+
+
-k BLOCK, --block=BLOCK
+
+
Size of the block in a hyperslab. The default (unspecified, or 1) is for
+the block size to be the size of a single element.
+
+
+
+
+The START (s), STRIDE (S), COUNT (c), and BLOCK (k) options
+define the shape and size of the selection. They are arrays with the same number of dimensions as the rank
+of the dataset's dataspace, and they all work together to define the selection. A change to one of
+these arrays can affect the others.
+
+When specifying these h5dump options, a comma is used as the delimiter for each dimension in the
+option value. For example, with a 2-dimensional dataset, the option value is specified as "H,W",
+where H is the height and W is the width. If the offset is 0 for both dimensions, then
+START would be specified as follows:
+\code
+-s "0,0"
+\endcode
+
+There is also a shorthand way to specify these options with brackets at the end of the dataset name:
+\code
+-d DATASETNAME[s;S;c;k]
+\endcode
+
+Multiple dimensions are separated by commas. For example, a subset for a 2-dimensional dataset would be specified as follows:
+\code
+-d DATASETNAME[s,s;S,S;c,c;k,k]
+\endcode
+
+For a detailed understanding of how selections works, see the #H5Sselect_hyperslab API in the \ref RM.
+
+The dataset SolarZenithAngle in the OMI-Aura.he5 file can be used to illustrate these options. This
+dataset is a 2-dimensional dataset of size 720 (height) x 1440 (width). Too much data will be displayed
+by simply viewing the specified dataset with the -d option:
+\code
+h5dump -d "HDFEOS/GRIDS/OMI Column Amount O3/Data Fields/SolarZenithAngle" OMI-Aura.he5
+\endcode
+Subsetting narrows down the output that is displayed. In the following example, the first
+15x10 elements (-c "15,10") are specified, beginning with position (0,0) (-s "0,0"):
+\code
+ h5dump -A 0 -d "HDFEOS/GRIDS/OMI Column Amount O3/Data Fields/SolarZenithAngle" -s "0,0" -c "15,10" -w 0 OMI-Aura.he5
+\endcode
+
+If using the shorthand method, specify:
+\code
+ h5dump -A 0 -d "HDFEOS/GRIDS/OMI Column Amount O3/Data Fields/SolarZenithAngle[0,0;;15,10;]" -w 0 OMI-Aura.he5
+\endcode
+
+Where:
+
+The -d option must be specified before the subsetting options (if not using the shorthand method).
+
+The -A 0 option suppresses the printing of attributes.
+
+The -w 0 option sets the number of columns of output to the maximum allowed value (65535).
+This ensures that there are enough columns specified for displaying the data.
+
+Either command displays:
+\code
+ HDF5 "OMI-Aura.he5" {
+ DATASET "HDFEOS/GRIDS/OMI Column Amount O3/Data Fields/SolarZenithAngle" {
+ DATATYPE H5T_IEEE_F32LE
+ DATASPACE SIMPLE { ( 720, 1440 ) / ( 720, 1440 ) }
+ SUBSET {
+ START ( 0, 0 );
+ STRIDE ( 1, 1 );
+ COUNT ( 15, 10 );
+ BLOCK ( 1, 1 );
+ DATA {
+ (0,0): 79.403, 79.403, 79.403, 79.403, 79.403, 79.403, 79.403, 79.403, 79.403, 79.403,
+ (1,0): 79.071, 79.071, 79.071, 79.071, 79.071, 79.071, 79.071, 79.071, 79.071, 79.071,
+ (2,0): 78.867, 78.867, 78.867, 78.867, 78.867, 78.867, 78.867, 78.867, 78.867, 78.867,
+ (3,0): 78.632, 78.632, 78.632, 78.632, 78.632, 78.632, 78.632, 78.632, 78.632, 78.632,
+ (4,0): 78.429, 78.429, 78.429, 78.429, 78.429, 78.429, 78.429, 78.429, 78.429, 78.429,
+ (5,0): 78.225, 78.225, 78.225, 78.225, 78.225, 78.225, 78.225, 78.225, 78.225, 78.225,
+ (6,0): 78.021, 78.021, 78.021, 78.021, 78.021, 78.021, 78.021, 78.021, 78.021, 78.021,
+ (7,0): 77.715, 77.715, 77.715, 77.715, 77.715, 77.715, 77.715, 77.715, 77.715, 77.715,
+ (8,0): 77.511, 77.511, 77.511, 77.511, 77.511, 77.511, 77.511, 77.511, 77.511, 77.511,
+ (9,0): 77.658, 77.658, 77.658, 77.307, 77.307, 77.307, 77.307, 77.307, 77.307, 77.307,
+ (10,0): 77.556, 77.556, 77.556, 77.556, 77.556, 77.556, 77.556, 77.556, 77.102, 77.102,
+ (11,0): 78.408, 78.408, 78.408, 78.408, 78.408, 78.408, 78.408, 78.408, 77.102, 77.102,
+ (12,0): 76.34, 78.413, 78.413, 78.413, 78.413, 78.413, 78.413, 78.413, 78.413, 78.413,
+ (13,0): 78.107, 78.107, 78.107, 78.107, 78.107, 78.107, 78.107, 78.107, 78.107, 77.195,
+ (14,0): 78.005, 78.005, 78.005, 78.005, 78.005, 78.005, 76.991, 76.991, 76.991, 76.991
+ }
+ }
+ }
+ }
+\endcode
+
+What if we wish to read three rows and three columns of blocks at a time (-c "3,3"), where each block
+is 2 x 3 elements (-k "2,3"), and we wish to begin reading from the second row (-s "1,0")?
+
+You can do that with the following command:
+\code
+ h5dump -A 0 -d "HDFEOS/GRIDS/OMI Column Amount O3/Data Fields/SolarZenithAngle"
+ -s "1,0" -S "2,3" -c "3,3" -k "2,3" -w 0 OMI-Aura.he5
+\endcode
+
+In this case, the stride must be specified as 2 by 3 (or larger) to accommodate the reading of 2 by 3 blocks.
+If it is smaller, the command will fail with the error,
+\code
+h5dump error: wrong subset selection; blocks overlap.
+\endcode
+
+The output of the above command is shown below:
+\code
+ HDF5 "OMI-Aura.he5" {
+ DATASET "HDFEOS/GRIDS/OMI Column Amount O3/Data Fields/SolarZenithAngle" {
+ DATATYPE H5T_IEEE_F32LE
+ DATASPACE SIMPLE { ( 720, 1440 ) / ( 720, 1440 ) }
+ SUBSET {
+ START ( 1, 0 );
+ STRIDE ( 2, 3 );
+ COUNT ( 3, 3 );
+ BLOCK ( 2, 3 );
+ DATA {
+ (1,0): 79.071, 79.071, 79.071, 79.071, 79.071, 79.071, 79.071, 79.071, 79.071,
+ (2,0): 78.867, 78.867, 78.867, 78.867, 78.867, 78.867, 78.867, 78.867, 78.867,
+ (3,0): 78.632, 78.632, 78.632, 78.632, 78.632, 78.632, 78.632, 78.632, 78.632,
+ (4,0): 78.429, 78.429, 78.429, 78.429, 78.429, 78.429, 78.429, 78.429, 78.429,
+ (5,0): 78.225, 78.225, 78.225, 78.225, 78.225, 78.225, 78.225, 78.225, 78.225,
+ (6,0): 78.021, 78.021, 78.021, 78.021, 78.021, 78.021, 78.021, 78.021, 78.021
+ }
+ }
+ }
+ }
+\endcode
+
+\section secViewToolsViewDtypes Datatypes
+
+\subsection subsecViewToolsViewDtypes_h5dump h5dump
+The following datatypes are discussed, using the output of h5dump with HDF5 files from the
+Examples by API page:
+
+
+\li @ref subsubsecViewToolsViewDtypes_array
+\li @ref subsubsecViewToolsViewDtypes_objref
+\li @ref subsubsecViewToolsViewDtypes_regref
+\li @ref subsubsecViewToolsViewDtypes_string
+
+
+\subsubsection subsubsecViewToolsViewDtypes_array Array
+Users have been confused by the difference between an Array datatype (#H5T_ARRAY) and a dataset
+whose dataspace is an array.
+
+Typically, these users want a dataset that has a simple datatype (like integer or float) that is an
+array, like the following dataset /DS1. It has a datatype of #H5T_STD_I32LE (32-bit Little-Endian Integer)
+and is a 4 by 7 array:
+\code
+$ h5dump h5ex_d_rdwr.h5
+HDF5 "h5ex_d_rdwr.h5" {
+GROUP "/" {
+ DATASET "DS1" {
+ DATATYPE H5T_STD_I32LE
+ DATASPACE SIMPLE { ( 4, 7 ) / ( 4, 7 ) }
+ DATA {
+ (0,0): 0, -1, -2, -3, -4, -5, -6,
+ (1,0): 0, 0, 0, 0, 0, 0, 0,
+ (2,0): 0, 1, 2, 3, 4, 5, 6,
+ (3,0): 0, 2, 4, 6, 8, 10, 12
+ }
+ }
+}
+}
+\endcode
+
+Contrast that with the following dataset that has both an Array datatype and is an array:
+\code
+$ h5dump h5ex_t_array.h5
+HDF5 "h5ex_t_array.h5" {
+GROUP "/" {
+ DATASET "DS1" {
+ DATATYPE H5T_ARRAY { [3][5] H5T_STD_I64LE }
+ DATASPACE SIMPLE { ( 4 ) / ( 4 ) }
+ DATA {
+ (0): [ 0, 0, 0, 0, 0,
+ 0, -1, -2, -3, -4,
+ 0, -2, -4, -6, -8 ],
+ (1): [ 0, 1, 2, 3, 4,
+ 1, 1, 1, 1, 1,
+ 2, 1, 0, -1, -2 ],
+ (2): [ 0, 2, 4, 6, 8,
+ 2, 3, 4, 5, 6,
+ 4, 4, 4, 4, 4 ],
+ (3): [ 0, 3, 6, 9, 12,
+ 3, 5, 7, 9, 11,
+ 6, 7, 8, 9, 10 ]
+ }
+ }
+}
+}
+\endcode
+
+In this file, dataset /DS1 has a datatype of
+\code
+H5T_ARRAY { [3][5] H5T_STD_I64LE }
+\endcode
+and it also has a dataspace of
+\code
+SIMPLE { ( 4 ) / ( 4 ) }
+\endcode
+In other words, it is an array of four elements, in which each element is a 3 by 5 array of #H5T_STD_I64LE.
+
+This dataset is much more complex. Also note that subsetting cannot be done on Array datatypes.
+
+See this FAQ for more information on the Array datatype.
+
+\subsubsection subsubsecViewToolsViewDtypes_objref Object Reference
+An Object Reference is a reference to an entire object (dataset, group, or named datatype).
+A dataset with an Object Reference datatype consists of one or more Object References.
+An Object Reference dataset can be used as an index to an HDF5 file.
+
+The /DS1 dataset in the following file (h5ex_t_objref.h5) is an Object Reference dataset.
+It contains two references, one to group /G1 and the other to dataset /DS2:
+\code
+$ h5dump h5ex_t_objref.h5
+HDF5 "h5ex_t_objref.h5" {
+GROUP "/" {
+ DATASET "DS1" {
+ DATATYPE H5T_REFERENCE { H5T_STD_REF_OBJECT }
+ DATASPACE SIMPLE { ( 2 ) / ( 2 ) }
+ DATA {
+ (0): GROUP 1400 /G1 , DATASET 800 /DS2
+ }
+ }
+ DATASET "DS2" {
+ DATATYPE H5T_STD_I32LE
+ DATASPACE NULL
+ DATA {
+ }
+ }
+ GROUP "G1" {
+ }
+}
+}
+\endcode
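+
+For illustration, the two references stored in /DS1 could have been created with calls along these lines
+(a sketch using the classic hobj_ref_t object-reference API; the file, group /G1, and dataset /DS2 are
+assumed to already exist, and error checking is omitted):
+\code
+hobj_ref_t wdata[2];                 /* buffer of object references */
+hsize_t    dims[1] = {2};
+hid_t      space, dset;
+
+/* Create references to the group /G1 and the dataset /DS2 */
+H5Rcreate(&wdata[0], file, "G1",  H5R_OBJECT, -1);
+H5Rcreate(&wdata[1], file, "DS2", H5R_OBJECT, -1);
+
+/* Store the references in a dataset with the object reference datatype */
+space = H5Screate_simple(1, dims, NULL);
+dset  = H5Dcreate(file, "DS1", H5T_STD_REF_OBJ, space,
+                  H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
+H5Dwrite(dset, H5T_STD_REF_OBJ, H5S_ALL, H5S_ALL, H5P_DEFAULT, wdata);
+\endcode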
+
+\subsubsection subsubsecViewToolsViewDtypes_regref Region Reference
+A Region Reference is a reference to a selection within a dataset. A selection can be either
+individual elements or a hyperslab. In h5dump you will see the name of the dataset along with
+the elements or slab that is selected. A dataset with a Region Reference datatype consists of
+one or more Region References.
+
+An example of a Region Reference dataset (h5ex_t_regref.h5) can be found on the
+Examples by API page,
+under Datatypes. If you examine this dataset with h5dump you will see that /DS1 is a
+Region Reference dataset, as indicated by its datatype in the output below:
+\code
+$ h5dump h5ex_t_regref.h5
+HDF5 "h5ex_t_regref.h5" {
+GROUP "/" {
+ DATASET "DS1" {
+ DATATYPE H5T_REFERENCE { H5T_STD_REF_DSETREG }
+ DATASPACE SIMPLE { ( 2 ) / ( 2 ) }
+ DATA {
+ DATASET /DS2 {(0,1), (2,11), (1,0), (2,4)},
+ DATASET /DS2 {(0,0)-(0,2), (0,11)-(0,13), (2,0)-(2,2), (2,11)-(2,13)}
+ }
+ }
+ DATASET "DS2" {
+ DATATYPE H5T_STD_I8LE
+ DATASPACE SIMPLE { ( 3, 16 ) / ( 3, 16 ) }
+ DATA {
+ (0,0): 84, 104, 101, 32, 113, 117, 105, 99, 107, 32, 98, 114, 111, 119,
+ (0,14): 110, 0,
+ (1,0): 102, 111, 120, 32, 106, 117, 109, 112, 115, 32, 111, 118, 101,
+ (1,13): 114, 32, 0,
+ (2,0): 116, 104, 101, 32, 53, 32, 108, 97, 122, 121, 32, 100, 111, 103,
+ (2,14): 115, 0
+ }
+ }
+}
+}
+\endcode
+
+It contains two Region References:
+\li A selection of four individual elements in dataset /DS2 : (0,1), (2,11), (1,0), (2,4)
+See the #H5Sselect_elements API in the \ref UG for information on selecting individual elements.
+\li A selection of these blocks in dataset /DS2 : (0,0)-(0,2), (0,11)-(0,13), (2,0)-(2,2), (2,11)-(2,13)
+See the #H5Sselect_hyperslab API in the \ref UG for how to do hyperslab selection.
+
+
+If you look at the code that creates the dataset (h5ex_t_regref.c) you will see that the
+first reference is created with these calls:
+\code
+ status = H5Sselect_elements (space, H5S_SELECT_SET, 4, coords[0]);
+ status = H5Rcreate (&wdata[0], file, DATASET2, H5R_DATASET_REGION, space);
+\endcode
+
+where the buffer containing the coordinates to select is:
+\code
+ coords[4][2] = { {0, 1},
+ {2, 11},
+ {1, 0},
+ {2, 4} },
+\endcode
+
+The second reference is created by calling,
+\code
+ status = H5Sselect_hyperslab (space, H5S_SELECT_SET, start, stride, count, block);
+ status = H5Rcreate (&wdata[1], file, DATASET2, H5R_DATASET_REGION, space);
+\endcode
+where start, stride, count, and block have these values:
+\code
+ start[2] = {0, 0},
+ stride[2] = {2, 11},
+ count[2] = {2, 2},
+ block[2] = {1, 3};
+\endcode
+
+These start, stride, count, and block values select the first three values and the twelfth through fourteenth values of the first and third rows of the dataset:
+\code
+84 104 101 32 113 117 105 99 107 32 98 114 111 119 110 0
+102 111 120 32 106 117 109 112 115 32 111 118 101 114 32 0
+116 104 101 32 53 32 108 97 122 121 32 100 111 103 115 0
+\endcode
+
+If you use h5dump to select a subset of dataset
+/DS2 with these start, stride, count, and block values, you will see that the same elements are selected:
+\code
+$ h5dump -d "/DS2" -s "0,0" -S "2,11" -c "2,2" -k "1,3" h5ex_t_regref.h5
+HDF5 "h5ex_t_regref.h5" {
+DATASET "/DS2" {
+ DATATYPE H5T_STD_I8LE
+ DATASPACE SIMPLE { ( 3, 16 ) / ( 3, 16 ) }
+ SUBSET {
+ START ( 0, 0 );
+ STRIDE ( 2, 11 );
+ COUNT ( 2, 2 );
+ BLOCK ( 1, 3 );
+ DATA {
+ (0,0): 84, 104, 101, 114, 111, 119,
+ (2,0): 116, 104, 101, 100, 111, 103
+ }
+ }
+}
+}
+\endcode
+
+For more information on selections, see the tutorial topic on
+@ref LBDsetSubRW. Also see the
+\ref secViewToolsViewSub tutorial topic on using h5dump to view a subset.
+
+\subsubsection subsubsecViewToolsViewDtypes_string String
+There are two types of string data, fixed length strings and variable length strings.
+
+Below is the h5dump output for two files that have the same strings written to them. In one file,
+the strings are fixed in length, and in the other, the strings have different sizes (and are variable in size).
+
+Dataset of Fixed Length Strings
+\code
+HDF5 "h5ex_t_string.h5" {
+GROUP "/" {
+ DATASET "DS1" {
+ DATATYPE H5T_STRING {
+ STRSIZE 7;
+ STRPAD H5T_STR_SPACEPAD;
+ CSET H5T_CSET_ASCII;
+ CTYPE H5T_C_S1;
+ }
+ DATASPACE SIMPLE { ( 4 ) / ( 4 ) }
+ DATA {
+ (0): "Parting", "is such", "sweet ", "sorrow."
+ }
+ }
+}
+}
+\endcode
+
+Dataset of Variable Length Strings
+\code
+HDF5 "h5ex_t_vlstring.h5" {
+GROUP "/" {
+ DATASET "DS1" {
+ DATATYPE H5T_STRING {
+ STRSIZE H5T_VARIABLE;
+ STRPAD H5T_STR_SPACEPAD;
+ CSET H5T_CSET_ASCII;
+ CTYPE H5T_C_S1;
+ }
+ DATASPACE SIMPLE { ( 4 ) / ( 4 ) }
+ DATA {
+ (0): "Parting", "is such", "sweet", "sorrow."
+ }
+ }
+}
+}
+\endcode
+
+You might wonder which to use. Some comments to consider are included below.
+\li In general, a variable length string dataset is more complex than a fixed length string. If you don't
+specifically need a variable length type, then just use the fixed length string.
+\li A variable length dataset consists of pointers to heaps in different locations in the file. For this
+reason, a variable length dataset cannot be compressed. (Basically, the pointers get compressed and
+not the actual data!) If compression is needed, then do not use variable length types.
+\li If you need to create an array of different length strings, you can either use fixed length strings
+along with compression, or use a variable length string.
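+
+For reference, the two string datatypes shown above would be created in C roughly as follows (a minimal
+sketch; error checking omitted):
+\code
+/* Fixed-length strings: every element holds 7 characters */
+hid_t fixed_str = H5Tcopy(H5T_C_S1);
+H5Tset_size(fixed_str, 7);
+
+/* Variable-length strings: each element may have a different length */
+hid_t vl_str = H5Tcopy(H5T_C_S1);
+H5Tset_size(vl_str, H5T_VARIABLE);
+\endcode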
+
+
+Navigate back: \ref index "Main" / \ref GettingStarted / \ref ViewToolsCommand
+
+*/
diff --git a/doxygen/dox/ViewTools2.dox b/doxygen/dox/ViewTools2.dox
new file mode 100644
index 0000000..4d8788a
--- /dev/null
+++ b/doxygen/dox/ViewTools2.dox
@@ -0,0 +1,786 @@
+/** @page ViewToolsEdit Command-line Tools For Editing HDF5 Files
+Navigate back: \ref index "Main" / \ref GettingStarted / \ref ViewToolsCommand
+
+
+\section secViewToolsEditTOC Contents
+
+
+\li \ref secViewToolsEditRemove
+\li \ref secViewToolsEditChange
+\li \ref secViewToolsEditApply
+\li \ref secViewToolsEditCopy
+\li \ref secViewToolsEditAdd
+
+
+\section secViewToolsEditRemove Remove Inaccessible Objects and Unused Space in a File
+HDF5 files may accumulate unused space when they are read and rewritten to or if objects are deleted within
+them. With many edits and deletions this unused space can add up to a sizable amount.
+
+The h5repack tool can be used to remove unused space in an HDF5
+file. If no options other than the input and output HDF5 files are specified on the
+h5repack command line, it will copy the objects in the input file to the new output
+file, getting rid of the unused space:
+\code
+h5repack