From d38baac55f8d16ecdccdc2a3313f8fe48beaf326 Mon Sep 17 00:00:00 2001
From: Frank Baker
Date: Tue, 15 Sep 1998 16:00:50 -0500
Subject: [svn-r698] Corrected octal apostrophe problem. Clarifying edit re:
 named datatypes. Assorted spelling corrections and minor edits.
---
 doc/html/H5.intro.html | 36 ++++++++++++++++++------------------
 1 file changed, 18 insertions(+), 18 deletions(-)

diff --git a/doc/html/H5.intro.html b/doc/html/H5.intro.html
index 44f7145..feab1c1 100644
--- a/doc/html/H5.intro.html
+++ b/doc/html/H5.intro.html
@@ -2,7 +2,7 @@
-H5introH
+Introduction to HDF5
@@ -21,7 +21,7 @@

This is an introduction to the HDF5 data model and programming model. As a Getting Started or QuickStart document, this Introduction to HDF5 is intended to provide enough information for you to develop a basic understanding of how HDF5 works and how it is meant to be used. Knowledge of the current version of HDF will make it easier to follow the text, but it is not required. More complete information of the sort you will need to actually use HDF5 is available in the HDF5 documentation at http://hdf.ncsa.uiuc.edu/HDF5/. Available documents include the following:

Code examples are available in the source code tree when you install HDF5.
@@ -73,7 +73,7 @@

+
  • HDF5 dataset: a multidimensional array of data elements, together with supporting metadata.

    Working with groups and group members is similar in many ways to working with directories and files in UNIX. As with UNIX directories and files, objects in an HDF5 file are often described by giving their full path names.

    @@ -94,7 +94,7 @@

    The header contains information that is needed to interpret the array portion of the dataset, as well as metadata (or pointers to metadata) that describes or annotates the dataset. Header information includes the name of the object, its dimensionality, its number-type, information about how the data itself is stored on disk, and other information used by the library to speed up access to the dataset or maintain the file's integrity.

    There are four essential classes of information in any header: name, datatype, dataspace, and storage layout:

Name. A dataset name is a sequence of alphanumeric ASCII characters.

-Datatype. HDF5 allows one to define many different kinds of datatypes. There are two categories of datatypes: atomic datatypes and compound datatypes. Atomic datatypes are those that are not decomposed at the datatype interface level, such as integers and floats. NATIVE datatypes are system-specific instances of atomic datatypes. Compound datatypes are made up of atomic datatypes. And named dataypes are either atomic or compound datatypes that are have been specifically designated to be shared across datasets.
+Datatype. HDF5 allows one to define many different kinds of datatypes. There are two categories of datatypes: atomic datatypes and compound datatypes. Atomic datatypes are those that are not decomposed at the datatype interface level, such as integers and floats. NATIVE datatypes are system-specific instances of atomic datatypes. Compound datatypes are made up of atomic datatypes. And named datatypes are either atomic or compound datatypes that have been specifically designated to be shared across datasets.

    Atomic datatypes include integers and floating-point numbers. Each atomic type belongs to a particular class and has several properties: size, order, precision, and offset. In this introduction, we consider only a few of these properties.

    Atomic datatypes include integer, float, date and time, string, bit field, and opaque. (Note: Only integer, float and string classes are available in the current implementation.)

Properties of integer types include size, order (endian-ness), and signed-ness (signed/unsigned).
@@ -203,13 +203,13 @@
-See Datatypes at http://hdf.ncsa.uiuc.edu/HDF5/Datatypes.html in the HDF User’s Guide for further information.
+See Datatypes at http://hdf.ncsa.uiuc.edu/HDF5/Datatypes.html in the HDF User’s Guide for further information.

    A compound datatype is one in which a collection of simple datatypes are represented as a single unit, similar to a struct in C. The parts of a compound datatype are called members. The members of a compound datatype may be of any datatype, including another compound datatype. It is possible to read members from a compound type without reading the whole type.

-Named datatypes. Normally each dataset has its own datatype, but sometimes we may want to share a datatype among several datasets. This can be done using a named datatype. A named data type is stored in a file independent of any dataset, and referenced by all datasets that have that datatype. Named datatypes may have an associated attributes list.
-See Datatypes at http://hdf.ncsa.uiuc.edu/HDF5/Datatypes.html in the HDF User’s Guide for further information.
+Named datatypes. Normally each dataset has its own datatype, but sometimes we may want to share a datatype among several datasets. This can be done using a named datatype. A named datatype is stored in the file independently of any dataset, and referenced by all datasets that have that datatype. Named datatypes may have an associated attribute list.
+See Datatypes at http://hdf.ncsa.uiuc.edu/HDF5/Datatypes.html in the HDF User’s Guide for further information.

    Dataspace. A dataset dataspace describes the dimensionality of the dataset. The dimensions of a dataset can be fixed (unchanging), or they may be unlimited, which means that they are extendible (i.e. they can grow larger).

Properties of a dataspace consist of the rank (number of dimensions) of the data array, the actual sizes of the dimensions of the array, and the maximum sizes of the dimensions of the array. For a fixed-dimension dataset, the actual size is the same as the maximum size of a dimension. When a dimension is unlimited, the maximum size is set to the value H5S_UNLIMITED. (An example below shows how to create extendible datasets.)

A dataspace can also describe portions of a dataset, making it possible to do partial I/O operations on selections. Selection is supported by the dataspace interface (H5S). Given an n-dimensional dataset, there are currently three ways to do partial selection:
@@ -220,7 +220,7 @@ See Datatypes at

Since I/O operations have two end-points, the raw data transfer functions require two dataspace arguments: one describes the application memory dataspace or subset thereof, and the other describes the file dataspace or subset thereof.

-See Dataspaces at http://hdf.ncsa.uiuc.edu/HDF5/Dataspaces.html in the HDF User’s Guide for further information.
+See Dataspaces at http://hdf.ncsa.uiuc.edu/HDF5/Dataspaces.html in the HDF User’s Guide for further information.

    Storage layout. The HDF5 format makes it possible to store data in a variety of ways. The default storage layout format is contiguous, meaning that data is stored in the same linear way that it is organized in memory. Two other storage layout formats are currently defined for HDF5: compact, and chunked. In the future, other storage layouts may be added.

    Compact storage is used when the amount of data is small and can be stored directly in the object header. (Note: Compact storage is not supported in this release.)

Chunked storage involves dividing the dataset into equal-sized "chunks" that are stored separately. Chunking has three important benefits.
@@ -230,12 +230,12 @@
-See Datatypes at http://hdf.ncsa.uiuc.edu/HDF5/Datasets.html in the HDF User’s Guide for further information.
+See Datasets at http://hdf.ncsa.uiuc.edu/HDF5/Datasets.html in the HDF User’s Guide for further information.

    HDF5 Attributes

    Attributes are small named datasets that are attached to primary datasets, groups, or named datatypes. Attributes can be used to describe the nature and/or the intended usage of a dataset or group. An attribute has two parts: (1) a name and (2) a value. The value part contains one or more data entries of the same data type.

    The Attribute API (H5A) is used to read or write attribute information. When accessing attributes, they can be identified by name or by an index value. The use of an index value makes it possible to iterate through all of the attributes associated with a given object.

The HDF5 format and I/O library are designed with the assumption that attributes are small datasets. They are always stored in the object header of the object they are attached to. Because of this, large datasets should not be stored as attributes. How large is "large" is not defined by the library and is up to the user's interpretation. (Large datasets with metadata can be stored as supplemental datasets in a group with the primary dataset.)

-See Attributes at http://hdf.ncsa.uiuc.edu/HDF5/Attributes.html in the HDF User’s Guide for further information.
+See Attributes at http://hdf.ncsa.uiuc.edu/HDF5/Attributes.html in the HDF User’s Guide for further information.

    The HDF5 Applications Programming Interface (API)

    The current HDF5 API is implemented only in C. The API provides routines for creating HDF5 files, creating and writing groups, datasets, and their attributes to HDF5 files, and reading groups, datasets and their attributes from HDF5 files.

    Naming conventions

@@ -1219,7 +1219,7 @@ H5Tinsert (complex_id, "imaginary", HOFFSET(tmp,im),
 2 2 2 3 3
 2 2 2 3 3

The current version of HDF5 requires you to use chunking in order to define extendible datasets. Chunking makes it possible to extend datasets efficiently, without having to reorganize storage excessively.

-Three operations are required in order to write an extendible dataset:
+The following operations are required in order to write an extendible dataset:

1. Declare the dataspace of the dataset to have unlimited dimensions for all dimensions that might eventually be extended.
@@ -1245,7 +1245,7 @@
 cparms = H5Pcreate (H5P_DATASET_CREATE);
 status = H5Pset_chunk( cparms, RANK, chunk_dims);
-Then create a datset.
+Then create a dataset.
       /*
        * Create a new dataset within the file using cparms
      @@ -1344,7 +1344,7 @@ ret  = H5Aread(attr, H5T_NATIVE_INT, &point_out);
       printf("The value of the attribute \"Integer attribute\" is %d \n", point_out); 
       ret =  H5Aclose(attr);
       
-Reading an attribute whose characterstics are not known. It may be necessary to query a file to obtain information about an attribute, namely its name, data type, rank and dimensions. The following code opens an attribute by its index value using H5Aopen_index, then reads in information about its datatype.
+Reading an attribute whose characteristics are not known. It may be necessary to query a file to obtain information about an attribute, namely its name, data type, rank and dimensions. The following code opens an attribute by its index value using H5Aopen_index, then reads in information about its datatype.

       /*
      @@ -1359,7 +1359,7 @@ printf("The value of the attribute with the index 2 is %s \n", string_out);
       
       

In practice, if the characteristics of attributes are not known, the code involved in accessing and processing the attribute can be quite complex. For this reason, HDF5 includes a function called H5Aiterate, which applies a user-supplied function to each of a set of attributes. The user-supplied function can contain the code that interprets, accesses and processes each attribute.

-Example 8 illustrates the use of the H5Aiterate function, as well as the other attribute examples described above.
+Example 8 illustrates the use of the H5Aiterate function, as well as the other attribute examples described above.


@@ -1540,7 +1540,7 @@ main (void)
              (unsigned long)(dims_out[0]), (unsigned long)(dims_out[1]));

     /*
-     * Define hyperslab in the datatset.
+     * Define hyperslab in the dataset.
      */
     offset[0] = 1;
     offset[1] = 2;
@@ -1850,7 +1850,7 @@ main(void)
         double c;
     } s1_t;
     s1_t       s1[LENGTH];
-    hid_t      s1_tid;     /* File datatype hadle */
+    hid_t      s1_tid;     /* File datatype identifier */

     /* Second structure (subset of s1_t) and dataset*/
     typedef struct s2_t {
@@ -2034,7 +2034,7 @@ main (void)
                  {3, 3} };

     /*
-     * Create the data space with ulimited dimensions.
+     * Create the data space with unlimited dimensions.
      */
     dataspace = H5Screate_simple(RANK, dims, maxdims);

@@ -2617,7 +2617,7 @@ main (void)
     ret = H5Awrite(attr3, atype, string);

     /*
-     * Close attribute and file datapsaces.
+     * Close attribute and file dataspaces.
      */
     ret = H5Sclose(aid1);
     ret = H5Sclose(aid2);
--
cgit v0.12