From 794ba0a251af47b8e3c60afa2fe92d267e2a6b55 Mon Sep 17 00:00:00 2001
From: Frank Baker
-
-
-
- The above functions will eventually be removed from the HDF5
- distribution and from the HDF5 Reference Manual.
-
-
-
- Backward compatibility with the Release 1.4.x syntax is available
- for the functions indicated above with a leading asterisk (*).
- The backward compatibility features are available only when the
- HDF5 library is configured with the flag
-
-
-
-
-
-
-
-
-
-
- Advice to users: User applications should release the communicator and
- Info object returned by
-
- The following functions are new for Release 1.4.0, but
- are intended only for use in specialized environments.
- These are also included in the
- HDF5 Reference Manual.
-
-
- The following functions are new for Release 1.4.0 but are intended
- only for driver development work, not for general use.
- They are listed in the
- List of VFL Functions
- document in the
- HDF5 Technical Notes.
- They are described in detail only in the source code and
- do not appear in the HDF5 Reference Manual.
-
-
-
-
-
- This specification is primarily concerned with two dimensional raster
-data similar to HDF4 Raster Images. Specifications for storing other
-types of imagery will be covered in other documents.
- This specification defines:
- The dataset for an image is distinguished from other datasets by giving
-it an attribute "CLASS=IMAGE". In addition, the Image dataset may
-have an optional attribute "PALETTE" that is an array of object references
-for zero or more palettes. The Image dataset may have additional attributes
-to describe the image data, as defined in Section 1.2.
- A Palette is an HDF5 dataset which contains color map information.
-A Palette dataset has an attribute "CLASS=PALETTE" and other attributes
-indicating the type and size of the palette, as defined in Section
-2.1. A Palette is an independent object, which can be shared
-among several Image datasets.
-
- For example, consider a 5 (rows) by 10 (columns) image, with Red, Green,
-and Blue components. Each component is an unsigned byte. In HDF5,
-the datatype would be declared as an unsigned 8-bit integer. For
-pixel interlace, the dataspace would be a three-dimensional array with
-dimensions [10][5][3]. For plane interlace, the dataspace would
-have three dimensions: [3][10][5].
- In the case of images with only one component, the dataspace may be
-either a two dimensional array, or a three dimensional array with the third
-dimension of size 1. For example, a 5 by 10 image with 8 bit color
-indexes would be an HDF5 dataset with type unsigned 8 bit integer.
-The dataspace could be either a two dimensional array, with dimensions
-[10][5], or three dimensions, with dimensions either [10][5][1] or [1][10][5].
- Image datasets may be stored with any chunking or compression properties
-supported by HDF5.
- A note concerning compatibility with HDF5 GR interface: An Image
-dataset is stored as an HDF5 dataset. It is important to note that
-the order of the dimensions is the same as for any other HDF5 dataset.
-For a two dimensional image that is to be stored as a series of horizontal
-scan lines, with the scan lines contiguous (i.e., the fastest changing
-dimension is 'width'), the image will have a dataspace with dim[0] =
-height and dim[1] = width. This is completely consistent
-with all other HDF5 datasets.
- Users familiar with HDF4 should be cautioned that this is not the
-same as HDF4, and specifically is not consistent with what the HDF4
-GR interface does.
- In this example, the color component numeric type is an 8 bit unsigned
-integer. While this is most common and recommended for general use, other
-component color numeric datatypes, such as a 16-bit unsigned integer,
-may be used. This type is specified as the type attribute of the palette
-dataset. (see H5Tget_type(), H5Tset_type())
- The minimum and maximum values of the color components are specified
-as attributes of the palette dataset. See below (attribute PAL_MINMAXNUMERIC).
-If these attributes do not exist, it is assumed that the range of values
-fills the space of the color numeric type; i.e., with an 8-bit unsigned
-integer, the valid range would be 0 to 255 for each color component.
- The HDF5 palette specification additionally allows for color models
-beyond RGB. YUV, HSV, CMY, CMYK, YCbCr color models are supported, and
-may be specified as a color model attribute of the palette dataset. (see
-"Palette Attributes" for details).
- In HDF 4 and earlier, palettes were limited to 256 colors. The HDF5
-palette specification allows for palettes of varying length. The length
-is specified as the number of rows of the palette dataset.
- In a standard palette, the color entries are indexed directly. HDF5
-supports the notion of a range index table. Such a table defines an ascending
-ordered list of ranges that map dataset values to the palette. If a range
-index table exists for the palette, the PAL_TYPE attribute will be set
-to "RANGEINDEX", and the PAL_RANGEINDEX attribute will contain an object
-reference to a range index table array. If not, the PAL_TYPE attribute
-either does not exist, or will be set to "STANDARD".
- The range index table is a one-dimensional array whose length is
-one less than the length of the palette dataset. Ideally, the range index would
-be of the same type as the dataset it refers to; however, this is not a
-requirement.
- Example 2: A range index array of type floating point
- The range index array attribute defines the "to" (upper bound) of each range.
-Notice that the range index array attribute has one less entry than
-the palette. The first entry, 0.1259, specifies that all values below
-and up to 0.1259 inclusive map to the first palette entry. The second
-entry signifies that all values greater than 0.1259 up to 0.3278 inclusive
-map to the second palette entry, etc. All values greater than the last
-range index array entry (100000) map to the last entry in the palette.
-
- These attributes are defined as follows:
- If the PAL_TYPE is set to "RANGEINDEX", there will be an additional
-attribute named "PAL_RANGEINDEX" (see Example 2
-for more details).
-
- The Image and Palette specifications include several redundant standard
-attributes, such as the IMAGE_COLORMODEL and the PAL_COLORMODEL.
-These attributes are informative, not normative, in that it is acceptable
-to attach a Palette to an Image dataset even if their attributes do not
-match. Software is not required to enforce consistency, and files
-may contain mismatched associations of Images and Palettes. In all
-cases, it is up to applications to determine what kinds of images and color
-models can be supported.
- For example, an Image that was created from a file with an "RGB" palette may
-have a "YUV" Palette in its PALETTE attribute array. This
-would be a legal HDF5 file and also conforms to this specification, although
-it may or may not be correct for a given application.
-
- The attribute API (H5A) is primarily designed to easily allow small
- datasets to be attached to primary datasets as metadata information.
- Additional goals for the H5A interface include keeping storage
- requirements for each attribute to a minimum and easily sharing
- attributes among datasets.
- Because attributes are intended to be small objects, large datasets
- intended as additional information for a primary dataset should be
- stored as supplemental datasets in a group with the primary dataset.
- Attributes can then be attached to the group to
- indicate that a particular type of dataset, with supplemental datasets, is
- located in the group. How small is "small" is not defined by the
- library and is up to the user's interpretation.
- Attributes are not separate objects in the file; they are always
- contained in the object header of the object they are attached to.
- Attribute information must be read or written with the I/O functions
- defined below, not with the H5D I/O routines.
-
- Attributes are created with the H5Acreate function.
-
- Attributes may only be written as an entire object; no partial I/O
- is currently supported.
-
- The iterator returns a negative value if something is wrong, the return
- value of the last operator if it was non-zero, or zero if all attributes
- were processed.
- The prototype for H5A_operator_t is:
-
- The operation receives the ID for the group or dataset being iterated over
- (loc_id), the name of the current attribute of the object (attr_name),
- and the pointer to the operator data passed in to H5Aiterate
- (operator_data). The return values from an operator are:
- The HDF5 library is able to handle files larger than the
- maximum file size, and datasets larger than the maximum memory
- size. For instance, a machine where
-
- Two "tricks" must be employed on these small systems in order
- to store large datasets. The first trick circumvents the
- Systems that have 64-bit file addresses will be able to access
- those files automatically. One should see the following output
- from configure:
-
- Also, some 32-bit operating systems have special file systems
- that can support large (>2GB) files and HDF5 will detect
- these and use them automatically. If this is the case, the
- output from configure will show:
-
- Otherwise one must use an HDF5 file family. Such a family is
- created by setting file family properties in a file access
- property list and then supplying a file name that includes a
- The second argument (
-
- If the effective HDF5 address space is limited, then one may be
- able to store datasets as external datasets each spanning
- multiple files of any length since HDF5 opens external dataset
- files one at a time. To arrange storage for a 5TB dataset split
- among 1GB files one could say:
-
- The second limit which must be overcome is that of
-
- To create a dataset with 8*2^30 4-byte integers for a total of
- 32GB one first creates the dataspace. We give two examples
- here: a 4-dimensional dataset whose dimension sizes are smaller
- than the maximum value of a
-
- However, the
-
- For compilers that don't support
-
- The HDF5 library caches two types of data: meta data and raw
- data. The meta data cache holds file objects like the file
- header, symbol table nodes, global heap collections, object
- headers and their messages, etc. in a partially decoded
- state. The cache has a fixed number of entries which is set with
- the file access property list (defaults to 10k) and each entry
- can hold a single meta data object. Collisions between objects
- are handled by preempting the older object in favor of the new
- one.
-
- Raw data chunks are cached because I/O requests at the
- application level typically don't map well to chunks at the
- storage level. The chunk cache has a maximum size in bytes
- set with the file access property list (defaults to 1MB) and
- when the limit is reached chunks are preempted based on the
- following set of heuristics.
-
- One should choose large values for w0 if I/O requests
- typically do not overlap but smaller values for w0 if
- the requests do overlap. For instance, reading an entire 2d
- array by reading from non-overlapping "windows" in a row-major
- order would benefit from a high w0 value while reading
- a diagonal across the dataset where each request overlaps the
- previous request would benefit from a small w0.
-
- The cache parameters for both caches are part of a file access
- property list and are set and queried with this pair of
- functions:
-
- Chunking refers to a storage layout where a dataset is
- partitioned into fixed-size multi-dimensional chunks. The
- chunks cover the dataset but the dataset need not be an integral
- number of chunks. If no data is ever written to a chunk then
- that chunk isn't allocated on disk. Figure 1 shows a 25x48
- element dataset covered by nine 10x20 chunks and 11 data points
- written to the dataset. No data was written to the region of
- the dataset covered by three of the chunks so those chunks were
- never allocated in the file -- the other chunks are allocated at
- independent locations in the file and written in their entirety.
-
- The HDF5 library treats chunks as atomic objects -- disk I/O is
- always in terms of complete chunks(1). This
- allows data filters to be defined by the application to perform
- tasks such as compression, encryption, checksumming,
- etc. on entire chunks. As shown in Figure 2, if
- It's obvious from Figure 2 that calling
-
- The preemption policy for the cache favors certain chunks and
- tries not to preempt them.
-
- Now for some real numbers... A 2000x2000 element dataset is
- created and covered by a 20x20 array of chunks (each chunk is 100x100
- elements). The raw data cache is adjusted to hold at most 25 chunks by
- setting the maximum number of bytes to 25 times the chunk size in
- bytes. Then the application creates a square, two-dimensional memory
- buffer and uses it as a window into the dataset, first reading and then
- rewriting in row-major order by moving the window across the dataset
- (the read and write tests both start with a cold cache).
-
- The measure of efficiency in Figure 3 is the number of bytes requested
- by the application divided by the number of bytes transferred from the
- file. There are at least a couple of ways to get an estimate of the cache
- performance: one way is to turn on cache
- debugging and look at the number of cache misses. A more accurate
- and specific way is to register a data filter whose sole purpose is to
- count the number of bytes that pass through it (that's the method used
- below).
-
- The read efficiency is less than one for two reasons: collisions in the
- cache are handled by preempting one of the colliding chunks, and the
- preemption algorithm occasionally preempts a chunk which hasn't been
- referenced for a long time but is about to be referenced in the near
- future.
-
- The write test results in lower efficiency for most window
- sizes because HDF5 is unaware that the application is about to
- overwrite the entire dataset and must read in most chunks before
- modifying parts of them.
-
- There is a simple way to improve efficiency for this example.
- It turns out that any chunk that has been completely read or
- written is a good candidate for preemption. If we increase the
- penalty for such chunks from the default 0.75 to the maximum
- 1.00 then efficiency improves.
-
- The read efficiency is still less than one because of
- collisions in the cache. The number of collisions can often be
- reduced by increasing the number of slots in the cache. Figure
- 5 shows what happens when the maximum number of slots is
- increased by an order of magnitude from the default (this change
- has no major effect on memory used by the test since the byte
- limit was not increased for the cache).
-
- Although the application eventually overwrites every chunk
- completely, the library has no way of knowing this beforehand
- since most calls to
-
- Even if the application transfers the entire dataset contents with a
- single call to
-
- By default the strip size is 1MB, but it can be changed by calling
-
- The chunks of the dataset are allocated at independent
- locations throughout the HDF5 file and a B-tree maps chunk
- N-dimensional addresses to file addresses. The more chunks that
- are allocated for a dataset the larger the B-tree. Large B-trees
- have two disadvantages:
-
- There are three ways to reduce the number of B-tree nodes. The
- obvious way is to reduce the number of chunks by choosing a larger chunk
- size (doubling the chunk size will cut the number of B-tree nodes in
- half). Another method is to adjust the split ratios for the B-tree by
- calling
-
- Dataset chunks can be compressed through the use of filters.
- See the chapter “Filters in HDF5.”
-
- Reading and rewriting compressed chunked data can result in holes
- in an HDF5 file. In time, enough such holes can increase the
- file size enough to impair application or library performance
- when working with that file. See
- “Freespace Management”
- in the chapter
- “Performance Analysis and Issues.”
-
-
- Footnote 1: Parallel versions of the library
- can access individual bytes of a chunk when the underlying file
- uses MPI-IO.
-
- Footnote 2: The raw data chunk cache was
- added before the second alpha release.
-
- This is one of the functions exported from the
- All pointer arguments are initialized when defined. I don't
- worry much about non-pointers because it's usually obvious when
- the value isn't initialized.
-
- I use
-
- You'll see this quite often in the low-level stuff and it's
- documented in the
-
- The alternative is to call the slightly cheaper
- Code is arranged in paragraphs with a comment starting each
- paragraph. The previous paragraph is a standard binary search
- algorithm. The
-
- It's also my standard practice to have side effects in
- conditional expressions because I can write code faster and it's
- more apparent to me what the condition is testing. But if I
- have an assignment in a conditional expr, then I use an extra
- set of parens even if they're not required (usually they are, as
- in this case) so it's clear that I meant
-
- Here I broke the "side effect in conditional" rule, which I
- sometimes do if the expression is so long that the
- For lack of a better way to handle errors during error cleanup,
- I just call the
-
- The following code is an API function from the H5F package...
-
- An API prologue is used for each API function instead of my
- normal function prologue. I use the prologue from Code Review 1
- for non-API functions because it's more suited to C programmers,
- it requires less work to keep it synchronized with the code, and
- I have better editing tools for it.
-
- API functions are never called internally, therefore I always
- clear the error stack before doing anything.
-
- If something is wrong with the arguments then we raise an
- error. We never
-
- An internal version of the function does the real work. That
- internal version calls
-
- For example:
-
-
- For example:
-
-
- For example:
-
-
- and a header file of private stuff
-
-
-
- and a header for private prototypes
-
-
-
- By splitting the prototypes into separate include files we don't
- have to recompile everything when just one function prototype
- changes.
-
-
-
- PACKAGES
-
-
-
-Names exported beyond function scope begin with `H5' followed by zero,
-one, or two upper-case letters that describe the class of object.
-This prefix is the package name. The implementation of packages
-doesn't necessarily have to map 1:1 to the source files.
-
-
-
-Each package implements a single main class of object (e.g., the H5B
-package implements B-link trees). The main data type of a package is
-the package name followed by `_t'.
-
-
-
-
-Not all packages implement a data type (H5, H5MF) and some
-packages provide access to a preexisting data type (H5MM, H5S).
-
-
-
- PUBLIC vs PRIVATE
-
-
-If the symbol is for internal use only, then the package name is
-followed by an underscore and the rest of the name. Otherwise, the
-symbol is part of the API and there is no underscore between the
-package name and the rest of the name.
-
-
-
-For functions, this is important because the API functions never pass
-pointers around (they use atoms instead for hiding the implementation)
-and they perform stringent checks on their arguments. Internal
-functions, on the other hand, check arguments with assert().
-
-Data types like H5B_t carry no information about whether the type is
-public or private since it doesn't matter.
-
-
-
-
- INTEGRAL TYPES
-
-
-Integral fixed-point type names are an optional `u' followed by `int'
-followed by the size in bits (8, 16,
-32, or 64). There is no trailing `_t' because these are common
-enough and follow their own naming convention.
-
-
-
- OTHER TYPES
-
-
-
-
-Other data types are always followed by `_t'.
-
-
-
-However, if the name is so common that it's used almost everywhere,
-then we make an alias for it by removing the package name and leading
-underscore and replacing it with an `h' (the main datatype for a
-package already has a short enough name, so we don't have aliases for
-them).
-
-
-
- GLOBAL VARIABLES
-
-
-Global variables include the package name and end with `_g'.
-
-
-
-
-
-
-MACROS, PREPROCESSOR CONSTANTS, AND ENUM MEMBERS
-
-
-
-Same rules as other symbols except the name is all upper case. There
-are a few exceptions:
- No naming scheme; determined by OS and compiler.
-
-
-
-
-
-
-
-
-NCSA HDF5 (Hierarchical Data Format 5) Software Library and Utilities
-
-
-Contributors: National Center for Supercomputing Applications (NCSA) at
-the University of Illinois at Urbana-Champaign (UIUC), Lawrence Livermore
-National Laboratory (LLNL), Sandia National Laboratories (SNL), Los Alamos
-National Laboratory (LANL), Jean-loup Gailly and Mark Adler (gzip library).
-
-
-Redistribution and use in source and binary forms, with or without
-modification, are permitted for any purpose (including commercial purposes)
-provided that the following conditions are met:
-
-
-
- DISCLAIMER:
- This work was prepared as an account of work sponsored by an agency
- of the United States Government. Neither the United States
- Government nor the University of California nor any of their
- employees, makes any warranty, express or implied, or assumes any
- liability or responsibility for the accuracy, completeness, or
- usefulness of any information, apparatus, product, or process
- disclosed, or represents that its use would not infringe
- privately-owned rights. Reference herein to any specific commercial products,
- process, or service by trade name, trademark, manufacturer, or
- otherwise, does not necessarily constitute or imply its endorsement,
- recommendation, or favoring by the United States Government or the
- University of California. The views and opinions of authors
- expressed herein do not necessarily state or reflect those of the
- United States Government or the University of California, and shall
- not be used for advertising or product endorsement purposes.
- The purpose of the dataset interface is to provide a mechanism
- to describe properties of datasets and to transfer data between
- memory and disk. A dataset is composed of a collection of raw
- data points and four classes of meta data to describe the data
- points. The interface is hopefully designed in such a way as to
- allow new features to be added without disrupting current
- applications that use the dataset interface.
-
- The four classes of meta data are:
-
- Each of these classes of meta data is handled differently by
- the library although the same API might be used to create them.
- For instance, the datatype exists as constant meta data and as
- memory meta data; the same API (the
-
- The dataset API partitions these terms on three orthogonal axes
- (layout, compression, and external storage) and uses a
- dataset creation property list to hold the various
- settings and pass them through the dataset interface. This is
- similar to the way HDF5 files are created with a file creation
- property list. A dataset creation property list is always
- derived from the default dataset creation property list (use
-
- Once the general layout is defined, the user can define
-
- This example shows how a two-dimensional dataset
- is partitioned into chunks. The library can manage file
- memory by moving the chunks around, and each chunk could be
- compressed. The chunks are allocated in the file on demand
- when data is written to the chunk.
- Although it is most efficient if I/O requests are aligned on chunk
- boundaries, this is not a constraint. The application can perform I/O
- on any set of data points as long as the set can be described by the
- data space. The set on which I/O is performed is called the
- selection.
-
- Chunked data storage
- (see
-
- Some storage formats may allow storage of data across a set of
- non-HDF5 files. Currently, only the
-
- This example shows how a contiguous, one-dimensional dataset
- is partitioned into three parts and each of those parts is
- stored in a segment of an external file. The top rectangle
- represents the logical address space of the dataset
- while the bottom rectangle represents an external file.
- One should note that the segments are defined in order of the
- logical addresses they represent, not their order within the
- external file. It would also have been possible to put the
- segments in separate files. Care should be taken when setting
- up segments in a single file since the library doesn't
- automatically check for segments that overlap.
-
- This example shows how a contiguous, two-dimensional dataset
- is partitioned into three parts and each of those parts is
- stored in a separate external file. The top rectangle
- represents the logical address space of the dataset
- while the bottom rectangles represent external files.
- The library maps the multi-dimensional array onto a linear
- address space like normal, and then maps that address space
- into the segments defined in the external file list.
- The segments of an external file can exist beyond the end of the
- file. The library reads that part of a segment as zeros. When writing
- to a segment that exists beyond the end of a file, the file is
- automatically extended. Using this feature, one can create a segment
- (or set of segments) which is larger than the current size of the
- dataset, which allows the dataset to be extended at a future time
- (provided the data space also allows the extension).
-
- All referenced external data files must exist before performing raw
- data I/O on the dataset. This is normally not a problem since those
- files are being managed directly by the application, or indirectly
- through some other library.
-
-
- Raw data has a constant datatype which describes the datatype
- of the raw data stored in the file, and a memory datatype that
- describes the datatype stored in application memory. Both data
- types are manipulated with the H5T API.
-
- The constant file datatype is associated with the dataset when
- the dataset is created in a manner described below. Once
- assigned, the constant datatype can never be changed.
-
- The memory datatype is specified when data is transferred
- to/from application memory. In the name of data sharability,
- the memory datatype must be specified, but can be the same
- type identifier as the constant datatype.
-
- During dataset I/O operations, the library translates the raw
- data from the constant datatype to the memory datatype or vice
- versa. Structured datatypes include member offsets to allow
- reordering of struct members and/or selection of a subset of
- members and array datatypes include index permutation
- information to allow things like transpose operations (the
- prototype does not support array reordering). Permutations
- are relative to some extrinsic description of the dataset.
-
-
-
- The dataspace of a dataset defines the number of dimensions
- and the size of each dimension and is manipulated with the
- H5S API.
-
- The dataspace can also be used to define partial I/O
- operations. Since I/O operations have two end-points, the raw
- data transfer functions take two data space arguments: one which
- describes the application memory data space or subset thereof
- and another which describes the file data space or subset
- thereof.
-
-
- Each dataset has a set of constant and persistent properties
- which describe the layout method, pre-compression
- transformation, compression method, datatype, external storage,
- and data space. The constant properties are set as described
- above in a dataset creation property list whose identifier is
- passed to H5Dcreate.
-
- Constant or persistent properties can be queried with a set of
- three functions. Each function returns an identifier for a copy
- of the requested properties. The identifier can be passed to
- various functions which modify the underlying object to derive a
- new object; the original dataset is completely unchanged. The
- return values from these functions should be properly destroyed
- when no longer needed.
-
- A dataset also has memory properties which describe memory
- within the application, and transfer properties that control
- various aspects of the I/O operations. The memory can have a
- datatype different than the permanent file datatype (different
- number types, different struct member offsets, different array
- element orderings) and can also be a different size (memory is a
- subset of the permanent dataset elements, or vice versa). The
- transfer properties might provide caching hints or collective
- I/O information. Therefore, each I/O operation must specify
- memory and transfer properties.
-
- The memory properties are specified with type_id and
- space_id arguments while the transfer properties are
- specified with the transfer_id property list for the
- H5Dread and H5Dwrite calls.
-
- If the maximum size of the temporary I/O pipeline buffers is
- too small to hold the entire I/O request, then the I/O request
- will be fragmented and the transfer operation will be strip
- mined. However, certain restrictions apply to the strip
- mining. For instance, when performing I/O on a hyperslab of a
- simple data space the strip mining is in terms of the slowest
- varying dimension. So if a 100x200x300 hyperslab is requested,
- the temporary buffer must be large enough to hold a 1x200x300
- sub-hyperslab.
-
- To prevent strip mining from happening, the application should
- use a temporary buffer large enough to hold the entire request.
-
- This example shows how to define a function that sets
- a dataset transfer property list so that strip mining
- does not occur. It takes an (optional) dataset transfer
- property list, a dataset, a data space that describes
- what data points are being transfered, and a datatype
- for the data points in memory. It returns a (new)
- dataset transfer property list with the temporary
- buffer size set to an appropriate value. The return
- value should be passed as the fifth argument to
- Unlike constant and persistent properties, a dataset cannot be
- queried for its memory or transfer properties. Memory
- properties cannot be queried because the application already
- stores those properties separate from the buffer that holds the
- raw data, and the buffer may hold multiple segments from various
- datasets and thus have more than one set of memory properties.
- The transfer properties cannot be queried from the dataset
- because they're associated with the transfer itself and not with
- the dataset (but one can call
- All raw data I/O is accomplished through these functions which
- take a dataset handle, a memory datatype, a memory data space,
- a file data space, transfer properties, and an application
- memory buffer. They translate data between the memory datatype
- and space and the file datatype and space. The data spaces can
- be used to describe partial I/O operations.
-
- In the name of sharability, the memory datatype must be
- supplied. However, it can be the same identifier as was used to
- create the dataset or as was returned by H5Dget_type().
-
- For complete reads of the dataset one may supply H5S_ALL for both
- the memory and file data spaces.
-
- The examples in this section illustrate some common dataset
- practices.
-
-
- This example shows how to create a dataset which is stored in
- memory as a two-dimensional array of native
-
- This example uses the file created in Example 1 and reads a
- hyperslab of the 500x600 file dataset. The hyperslab size is
- 100x200 and it is located beginning at element
- <200,200>. We read the hyperslab into a 200x400 array in
- memory beginning at element <0,0> in memory. Visually,
- the transfer looks something like this:
-
-
- If the file contains a compound data structure one of whose
- members is a floating point value (call it "delta") but the
- application is interested in reading an array of floating point
- values which are just the "delta" values, then the application
- should cast the floating point array as a struct with a single
- "delta" member.
-
-
- A dataspace describes the locations of a dataset's elements.
-A dataspace is either a regular N-dimensional array of data points,
-called a simple dataspace, or a more general collection of data
-points organized in another manner, called a complex dataspace.
-A scalar dataspace is a special case of the simple dataspace
-and is defined to be a single, 0-dimensional data point. Currently
-only scalar and simple dataspaces are supported in this version
-of the H5S interface.
-Complex dataspaces will be defined and implemented in a future
-version. Complex dataspaces are intended to be used for such structures
-which are awkward to express in simple dataspaces, such as irregularly
-gridded data or adaptive mesh refinement data. This interface provides
-functions to set and query properties of a dataspace.
-
- Operations on a dataspace include defining or extending the extent of
-the dataspace, selecting portions of the dataspace for I/O and storing the
-dataspaces in the file. The extent of a dataspace is the range of coordinates
-over which dataset elements are defined and stored. Dataspace selections are
-subsets of the extent (up to the entire extent) which are selected for some
-operation.
-
- For example, a 2-dimensional dataspace with an extent of 10 by 10 may have
-the following very simple selection:
- Selections within dataspaces have an offset within the extent which is used
-to locate the selection within the extent of the dataspace. Selection offsets
-default to 0 in each dimension, but may be changed to move the selection within
-a dataspace. In example 2 above, if the offset was changed to 1,1, the selection
-would look like this:
- Selections also have a linearization ordering of the points selected
-(defaulting to "C" order, i.e., last dimension changing fastest). The
-linearization order may be specified for each point or it may be chosen by
-the axis of the dataspace. For example, with the default "C" ordering,
-example 1's selected points are iterated through in this order: (1,1), (1,2),
-(1,3), (2,1), (2,2), etc. With "FORTRAN" ordering, example 1's selected points
-would be iterated through in this order: (1,1), (2,1), (3,1), (4,1), (5,1),
-(1,2), (2,2), etc.
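The two orderings can be illustrated with a small, self-contained C sketch; the 5-row by 3-column selection is hypothetical, chosen only because it is consistent with the point sequences listed above:

```c
#include <assert.h>

enum { ROWS = 5, COLS = 3, N = ROWS * COLS };

typedef struct { int r, c; } point_t;

/* Enumerate a ROWS x COLS selection (1-based, as in the example) in
 * "C" order: last dimension changing fastest. */
static void c_order(point_t out[N])
{
    int k = 0;
    for (int r = 1; r <= ROWS; r++)
        for (int c = 1; c <= COLS; c++)
            out[k++] = (point_t){ r, c };
}

/* The same selection in "FORTRAN" order: first dimension fastest. */
static void fortran_order(point_t out[N])
{
    int k = 0;
    for (int c = 1; c <= COLS; c++)
        for (int r = 1; r <= ROWS; r++)
            out[k++] = (point_t){ r, c };
}
```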
-
- A dataspace may be stored in the file as a permanent object, to allow many
-datasets to use a commonly defined dataspace. Dataspaces with extendible
-extents (i.e., unlimited dimensions) cannot be stored as permanent
-dataspaces.
-
- Dataspaces may be created using an existing permanent dataspace as a
-container to locate the new dataspace within. These dataspaces are complete
-dataspaces and may be used to define datasets. A dataspace with a "parent"
-can be queried to determine the parent dataspace and the location within the
-parent. These dataspaces must currently be the same number of dimensions as
-the parent dataspace.
-
-
-The start array determines the starting coordinates of the hyperslab
-to select. The stride array chooses array locations from the dataspace
-with each value in the stride array determining how many elements to move
-in each dimension. Setting a value in the stride array to 1 moves to
-each element in that dimension of the dataspace, setting a value of 2 in a
-location in the stride array moves to every other element in that
-dimension of the dataspace. In other words, the stride determines the
-number of elements to move from the start location in each dimension.
-Stride values of 0 are not allowed. If the stride parameter is NULL,
-a contiguous hyperslab is selected (as if every value in the stride array
-were set to 1). The count array determines how many blocks to
-select from the dataspace, in each dimension. The block array determines
-the size of the element block selected from the dataspace. If the block
-parameter is set to NULL, the block size defaults to a single element
-in each dimension (as if every value in the block array were set to 1).
- For example, in a 2-dimensional dataspace, setting start to [1,1],
-stride to [4,4], count to [3,7] and block to [2,2] selects
-21 2x2 blocks of array elements starting with location (1,1) and selecting
-blocks at locations (1,1), (5,1), (9,1), (1,5), (5,5), etc.
- Regions selected with this function call default to 'C' order iteration when
-I/O is performed.
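As a check on the arithmetic, the following self-contained C sketch (not HDF5 API code; `hyperslab_elements` is a hypothetical helper) enumerates the elements that a start/stride/count/block description selects:

```c
#include <assert.h>
#include <stddef.h>

/* Enumerate the elements of a 2-D hyperslab selection described by
 * start/stride/count/block, the same four arrays used by HDF5's
 * hyperslab selection call.  Element coordinates are written into
 * `coords` (up to `max` of them); the total count is returned. */
static size_t hyperslab_elements(const int start[2], const int stride[2],
                                 const int count[2], const int block[2],
                                 int coords[][2], size_t max)
{
    size_t n = 0;
    for (int bi = 0; bi < count[0]; bi++)          /* block index, dim 0 */
        for (int bj = 0; bj < count[1]; bj++)      /* block index, dim 1 */
            for (int i = 0; i < block[0]; i++)     /* within the block  */
                for (int j = 0; j < block[1]; j++) {
                    if (n < max) {
                        coords[n][0] = start[0] + bi * stride[0] + i;
                        coords[n][1] = start[1] + bj * stride[1] + j;
                    }
                    n++;
                }
    return n;
}
```

With start [1,1], stride [4,4], count [3,7], and block [2,2] this yields the 21 blocks (84 elements) of the example above.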
- The selection operator op determines how the new selection is to be
-combined with the already existing selection for the dataspace.
-The following operators are supported:
- The datatype interface provides a mechanism to describe the
- storage format of individual data points of a dataset and is
- designed so that new features can be added without disrupting
- applications that use the datatype interface. A dataset (the H5D
- interface) is composed of a collection of raw data points of
- homogeneous type organized according to the dataspace (the H5S
- interface).
-
- A datatype is a collection of datatype properties, all of
- which can be stored on disk, and which when taken as a whole,
- provide complete information for data conversion to or from that
- datatype. The interface provides functions to set and query
- properties of a datatype.
-
- A data point is an instance of a datatype,
- which is an instance of a type class. We have defined
- a set of type classes and properties which can be extended at a
- later time. The atomic type classes are those which describe
- types which cannot be decomposed at the datatype interface
- level; all other classes are compound.
-
- The functions defined in this section operate on datatypes as
- a whole. New datatypes can be created from scratch or copied
- from existing datatypes. When a datatype is no longer needed
- its resources should be released by calling Datatypes come in two flavors: named datatypes and transient
- datatypes. A named datatype is stored in a file while the
- transient flavor is independent of any file. Named datatypes
- are always read-only, but transient types come in three
- varieties: modifiable, read-only, and immutable. The difference
- between read-only and immutable types is that immutable types
- cannot be closed except when the entire library is closed (the
- predefined types like An atomic type is a type which cannot be decomposed into
- smaller units at the API level. All atomic types have a common
- set of properties which are augmented by properties specific to
- a particular type class. Some of these properties also apply to
- compound datatypes, but we discuss them only as they apply to
- atomic datatypes here. The properties and the functions that
- query and set their values are:
-
- Integer atomic types ( The library supports floating-point atomic types
- ( Dates and times ( Fixed-length character string types are used to store textual
- information. The
- Converting a bit field ( Opaque types ( A compound datatype is similar to a Properties of members of a compound datatype are
- defined when the member is added to the compound type (see
- The library predefines a modest number of datatypes having
- names like
- The base name of most types consists of a letter, a precision
- in bits, and an indication of the byte order. The letters are:
-
-
- The byte order is a two-letter sequence:
-
-
-
- The
-
- To create a 128-bit, little-endian signed integer
- type one could use the following (increasing the
- precision of a type automatically increases the total
- size):
-
-
- To create an 80-byte null terminated string type one
- might do this (the offset of a character string is
- always zero and the precision is adjusted
- automatically to match the size):
-
- A complete list of the datatypes predefined in HDF5 can be found in
- HDF5 Predefined Datatypes
- in the HDF5 Reference Manual.
-
-
- Unlike atomic datatypes which are derived from other atomic
- datatypes, compound datatypes are created from scratch. First,
- one creates an empty compound datatype and specifies its total
- size. Then members are added to the compound datatype in any
- order.
-
- Usually a C struct will be defined to hold a data point in
- memory, and the offsets of the members in memory will be the
- offsets of the struct members from the beginning of an instance
- of the struct.
-
- Each member must have a descriptive name which is the
- key used to uniquely identify the member within the compound
- datatype. A member name in an HDF5 datatype does not
- necessarily have to be the same as the name of the member in the
- C struct, although this is often the case. Nor does one need to
- define all members of the C struct in the HDF5 compound
- datatype (or vice versa).
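Since the memory offsets normally come from the C struct, `offsetof` from `<stddef.h>` is the natural way to compute them. The struct below is hypothetical; each commented name is the member name one might use when inserting that member into the compound datatype:

```c
#include <assert.h>
#include <stddef.h>

/* A C struct holding one data point in memory.  When building the
 * matching HDF5 compound datatype, each member would be inserted
 * with the offset that offsetof() reports, so the library reads and
 * writes directly into instances of the struct. */
struct point {
    int    serial;   /* member "serial"   */
    double temp;     /* member "temp"     */
    double pressure; /* member "pressure" */
};
```

Using `offsetof` rather than hand-computed offsets keeps the datatype correct even when the compiler inserts padding between members.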
-
-
- An HDF5 datatype is created to describe complex
- numbers whose type is defined by the
- Member alignment is handled by the
- This example shows how to create a disk version of a
- compound datatype in order to store data on disk in
- as compact a form as possible. Packed compound
- datatypes should generally not be used to describe memory
- as they may violate alignment constraints for the
- architecture being used. Note also that using a
- packed datatype for disk storage may involve a higher
- data conversion cost.
-
- Compound datatypes that have a compound datatype
- member can be handled two ways. This example shows
- that the compound datatype can be flattened,
- resulting in a compound type with only atomic
- members.
-
-
- However, when the An HDF enumeration datatype is a 1:1 mapping between a set of
- symbols and a set of integer values, and an order is imposed on
- the symbols by their integer values. The symbols are passed
- between the application and library as character strings and all
- the values for a particular enumeration type are of the same
- integer type, which is not necessarily a native type.
-
- Creation of an enumeration datatype resembles creation of a
- compound datatype: first an empty enumeration type is created,
- then members are added to the type, then the type is optionally
- locked.
-
- Because an enumeration datatype is derived from an integer
- datatype, any operation which can be performed on integer
- datatypes can also be performed on enumeration datatypes. This
- includes:
-
-
- In addition, the new function A small set of functions is available for querying properties
- of an enumeration type. These functions are likely to be used
- by browsers to display datatype information.
-
-
- Output:
- In addition to querying about the enumeration type properties,
- an application may want to make queries about enumerated
- data. These functions perform efficient mappings between symbol
- names and values.
-
-
- Output:
- Enumerated data can be converted from one type to another
- provided the destination enumeration type contains all the
- symbols of the source enumeration type. The conversion operates
- by matching up the symbol names of the source and destination
- enumeration types to build a mapping from source value to
- destination value. For instance, if we are translating from an
- enumeration type that defines a sequence of integers as the
- values for the colors to a type that defines a different bit for
- each color then the mapping might look like this:
-
-
-
- That is, a source value of
- Output:
- If the source data stream contains values which are not in the
- domain of the conversion map then an overflow exception is
- raised within the library, causing the application defined
- overflow handler to be invoked (see
- The HDF library will not provide conversions between enumerated
- data and integers although the application is free to do so
- (this is a policy we apply to all classes of HDF datatypes).
- However, since enumeration types are derived from
- integer types it is permissible to treat enumerated data as
- integers and perform integer conversions in that context.
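The name-matching conversion described above can be modeled in a few lines of plain C; the color members and the `convert` helper are hypothetical, illustrating only the mapping rule, not the library's implementation:

```c
#include <assert.h>
#include <string.h>

/* A hypothetical color enumeration expressed two ways: as a
 * sequence of integers and as one bit per color. */
typedef struct { const char *name; int value; } member_t;

static const member_t seq[]  = { {"RED", 0}, {"GREEN", 1}, {"BLUE", 2} };
static const member_t bits[] = { {"RED", 1}, {"GREEN", 2}, {"BLUE", 4} };
enum { NMEMB = 3 };

/* Convert one value from the source type to the destination type by
 * matching symbol names, the way the library builds its conversion
 * map.  Returns -1 when the value is not in the source domain
 * (HDF5 would raise an overflow exception instead). */
static int convert(int v, const member_t *src, const member_t *dst, int n)
{
    for (int i = 0; i < n; i++)
        if (src[i].value == v)
            for (int j = 0; j < n; j++)
                if (strcmp(dst[j].name, src[i].name) == 0)
                    return dst[j].value;
    return -1;
}
```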
-
- Symbol order is determined by the integer values associated
- with each symbol. When the integer datatype is a native type,
- testing the relative order of two symbols is an easy process:
- simply compare the values of the symbols. If only the symbol
- names are available then the values must first be determined by
- calling When the underlying integer datatype is not a native type then
- the easiest way to compare symbols is to first create a similar
- enumeration type that contains all the same symbols but has a
- native integer type (HDF type conversion features can be used to
- convert the non-native values to native values). Once we have a
- native type we can compare symbol order as just described. If
- It is also possible to convert enumerated data to a new type
- that has a different order defined for the symbols. For
- instance, we can define a new type,
- Output:
- The order that members are inserted into an enumeration type is
- unimportant; the important part is the associations between the
- symbol names and the values. Thus, two enumeration datatypes
- will be considered equal if and only if both types have the same
- symbol/value associations and both have equal underlying integer
- datatypes. Type equality is tested with the
- Although HDF enumeration datatypes are similar to C
-
- The examples below use the following C datatypes:
-
-
- An HDF enumeration datatype can be created from a C
-
- Occasionally two applications wish to exchange data but they
- use different names for the constants they exchange. For
- instance, an English and a Spanish program may want to
- communicate color names although they use different symbols in
- the C
- Since symbol ordering is completely determined by the integer values
- assigned to each symbol in the For example, an application may be defined to use the
- definition of A case of this reordering of symbol names was also shown in the
- previous code snippet (as well as a change of language), where
- HDF changed the integer values so 0 ( In fact, the ability to change the order of symbols is often
- convenient when the enumeration type is used only to group
- related symbols that don't have any well defined order
- relationship.
-
- The HDF enumeration type conversion features can also be used
- to provide internationalization of debugging output. A program
- written with the
- The main goal of enumeration types is to provide communication
- of enumerated data using symbolic equivalence. That is, a
- symbol written to a dataset by one application should be read as
- the same symbol by some other application.
-
-
-
-VL datatypes are useful to the scientific community in many different ways,
-some of which are listed below:
-
-HDF5 has native VL strings for each language API, which are stored the
-same way on disk, but are exported through each language API in a natural way
-for that language. When retrieving VL strings from a dataset, users may choose
-to have them stored in memory as a native VL string or in HDF5's
-VL strings may be created in one of two ways: by creating a VL datatype with
-a base type of
-Multi-byte character representations, such as UNICODE or wide
-characters in C/C++, will need the appropriate character and string datatypes
-created so that they can be described properly through the datatype API.
-Additional conversions between these types and the current ASCII characters
-will also be required.
-
-
-Variable-width character strings (which might be compressed data or some
-other encoding) are not currently handled by this design. We will evaluate
-how to implement them based on user feedback.
-
-
-
-The base datatype will be the datatype that the sequence is composed of,
-characters for character strings, vertex coordinates for polygon lists, etc.
-The base datatype specified for the VL datatype can be of any HDF5 datatype,
-including another VL datatype, a compound datatype, or an atomic datatype.
-
-
-
-This routine checks the number of bytes required to store the VL data from
-the dataset, using the
-Default memory management is set by using
-The rest of this subsection is relevant only to those who choose
-not to use default memory management.
-
-
-The user can choose whether to use the
-system
-The
-The prototypes for the user-defined functions would appear as follows:
-
-The
-In summary, if the user has defined custom memory management
-routines, the name(s) of the routines are passed in the
-
-The
-If nested VL datatypes were used to create the buffer,
-this routine frees them from the bottom up,
-releasing all the memory without creating memory leaks.
-
-
-
-The array is stored in the dataset and then read back into memory.
-Default memory management routines are used for writing the VL data.
-Custom memory management routines are used for reading the VL data and
-reclaiming memory space.
-
-
-For further samples of VL datatype code, see the tests in
-Arrays can be nested.
-Not only is an array datatype used as an element of an HDF5 dataset,
-but the elements of an array datatype may be of any datatype,
-including another array datatype.
-
-
-Array datatypes cannot be subdivided for I/O; the entire array must
-be transferred from one dataset to another.
-
-
-Within the limitations outlined in the next paragraph, array datatypes
-may be N-dimensional and of any dimension size.
-Unlimited dimensions, however, are not supported.
-Functionality similar to unlimited dimension arrays is available through
-the use of variable-length datatypes.
-
-
-The maximum number of dimensions, i.e., the maximum rank, of an array
-datatype is specified by the HDF5 library constant
-One array datatype may only be converted to another array datatype
-if the number of dimensions and the sizes of the dimensions are equal
-and the datatype of the first array's elements can be converted
-to the datatype of the second array's elements.
-
-
-The use of the array datatype class will not interfere with the
-use of existing compound datatypes. Applications may continue to
-read and write the older field arrays, but they will no longer be
-able to create array fields in newly-defined compound datatypes.
-
-Existing array fields will be transparently mapped to array datatypes
-when they are read in.
-
-
-
-
- If a file has lots of datasets which have a common datatype,
- then the file could be made smaller by having all the datasets
- share a single datatype. Instead of storing a copy of the
- datatype in each dataset object header, a single datatype is stored
- and the object headers point to it. The space savings is
- probably only significant for datasets with a compound datatype,
- since the atomic datatypes can be described with just a few
- bytes anyway.
-
- To create a bunch of datasets that share a single datatype
- just create the datasets with a committed (named) datatype.
-
-
- To create two datasets that share a common datatype
- one just commits the datatype, giving it a name, and
- then uses that datatype to create the datasets.
-
- And to create two additional datasets later which
- share the same type as the first two datasets:
-
- The library is capable of converting data from one type to
- another and does so automatically when reading or writing the
- raw data of a dataset, attribute data, or fill values. The
- application can also change the type of data stored in an array.
-
- To ensure that data conversion can keep up with disk I/O rates,
- common data conversion paths can be hand-tuned and optimized for
- performance. The library contains very efficient code for
- conversions between most native datatypes and a few non-native
- datatypes, but if a hand-tuned conversion function is not
- available, then the library falls back to a slower but more
- general conversion function. The application programmer can
- define additional conversion functions when the library's
- repertoire is insufficient. In fact, if an application does
- define a conversion function which would be of general interest,
- we request that the function be submitted to the HDF5
- development team for inclusion in the library.
-
- Note: The HDF5 library contains a deliberately limited
- set of conversion routines. It can convert from one integer
- format to another, from one floating point format to another,
- and from one struct to another. It can also perform byte
- swapping when the source and destination types are otherwise the
- same. The library does not contain any functions for converting
- data between integer and floating point formats. It is
- anticipated that some users will find it necessary to develop
- float to integer or integer to float conversion functions at the
- application level; users are invited to submit those functions
- to be considered for inclusion in future versions of the
- library.
-
- A conversion path contains a source and destination datatype
- and each path contains a hard conversion function
- and/or a soft conversion function. The only difference
- between hard and soft functions is the way in which the library
- chooses which function applies: A hard function applies to a
- specific conversion path while a soft function may apply to
- multiple paths. When both hard and soft functions apply to a
- conversion path, then the hard function is favored and when
- multiple soft functions apply, the one defined last is favored.
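The precedence rule can be sketched with a toy registry in plain C; the types, functions, and wildcard matching here are hypothetical stand-ins, not the library's data structures:

```c
#include <assert.h>

/* Toy registry illustrating how a conversion path is chosen: a hard
 * function bound to the exact (src,dst) pair wins over any soft
 * function, and among applicable soft functions the one registered
 * last wins.  Types and functions are represented by small ints; a
 * negative src/dst pattern is a wildcard ("applies to many paths"). */
typedef struct { int src, dst, func, hard; } entry_t;

static int matches(int pat, int val) { return pat < 0 || pat == val; }

static int choose(const entry_t *reg, int n, int src, int dst)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (!matches(reg[i].src, src) || !matches(reg[i].dst, dst))
            continue;
        if (reg[i].hard)
            return reg[i].func;  /* hard function is favored outright */
        best = reg[i].func;      /* a later soft overrides an earlier one */
    }
    return best;
}
```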
-
- A data conversion function is of type The conversion function is called with
- the source and destination datatypes (src_id and
- dst_id),
- the path-constant data struct (cdata),
- the number of instances of the datatype to convert (nelmts),
- a conversion buffer (buffer) which initially contains
- an array of data having the source type and on return will
- contain an array of data having the destination type,
- a temporary or background buffer (bkg_buffer,
- see description of buf_stride and bkg_stride are in bytes and
- are related to the size of the datatype.
- If every data element is to be converted, the parameter's value
- is equal to the size of the datatype;
- if every other data element is to be converted, the parameter's value
- is equal to twice the size of the datatype; etc.
-
- dset_xfer_plist may contain properties that are passed
- to the read and write calls.
- This parameter is currently used only with variable-length data.
-
- bkg_buffer and bkg_stride are used only with
- compound datatypes.
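A minimal sketch of how a conversion function might walk its buffer using `buf_stride` (plain C, not the actual library interface; the 4-byte byte swap stands in for an arbitrary in-place conversion):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* `buf_stride` is a byte distance between elements to convert.
 * With buf_stride == sizeof(uint32_t) every element is converted;
 * with twice that, every other element, exactly as described in
 * the text above.  The "conversion" here is an in-place 4-byte
 * byte swap. */
static void swap_elements(void *buf, size_t nelmts, size_t buf_stride)
{
    unsigned char *p = buf;
    for (size_t n = 0; n < nelmts; n++, p += buf_stride) {
        unsigned char t;
        t = p[0]; p[0] = p[3]; p[3] = t;
        t = p[1]; p[1] = p[2]; p[2] = t;
    }
}
```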
-
- The path-constant data struct, The Whether a background buffer is supplied to a conversion
- function, and whether the background buffer is initialized
- depends on the value of The Once a conversion function is written it can be registered and
- unregistered with these functions:
-
-
- Here's an example application-level function that
- converts Cray The background argument is ignored since
- it is generally not applicable to atomic datatypes.
-
- The conversion function described in the previous
- example applies to more than one conversion path.
- Instead of enumerating all possible paths, we register
- it as a soft function and allow it to decide which
- paths it can handle.
-
- This causes it to be consulted for any conversion
- from an integer type to another integer type. The
- first argument is just a short identifier which will
- be printed with the datatype conversion statistics.
- NOTE: The idea of a master soft list and being able to
- query conversion functions for their abilities tries to overcome
- problems we saw with AIO. Namely, that there was a dichotomy
- between generic conversions and specific conversions that made
- it very difficult to write a conversion function that operated
- on, say, integers of any size and order as long as they don't
- have zero padding. The AIO mechanism required such a function
- to be explicitly registered (like
-
- (Return to Data Types Interface (H5T).)
-
-
- An HDF enumeration data type is a 1:1 mapping between a set of
- symbols and a set of integer values, and an order is imposed on
- the symbols by their integer values. The symbols are passed
- between the application and library as character strings and all
- the values for a particular enumeration type are of the same
- integer type, which is not necessarily a native type.
-
- Creation of an enumeration data type resembles creation of a
- compound data type: first an empty enumeration type is created,
- then members are added to the type, then the type is optionally
- locked.
-
- Because an enumeration data type is derived from an integer
- data type, any operation which can be performed on integer data
- types can also be performed on enumeration data types. This
- includes:
-
-
- In addition, the new function A small set of functions is available for querying properties
- of an enumeration type. These functions are likely to be used
- by browsers to display data type information.
-
-
- Output:
- In addition to querying about the enumeration type properties,
- an application may want to make queries about enumerated
- data. These functions perform efficient mappings between symbol
- names and values.
-
-
- Output:
- Enumerated data can be converted from one type to another
- provided the destination enumeration type contains all the
- symbols of the source enumeration type. The conversion operates
- by matching up the symbol names of the source and destination
- enumeration types to build a mapping from source value to
- destination value. For instance, if we are translating from an
- enumeration type that defines a sequence of integers as the
- values for the colors to a type that defines a different bit for
- each color then the mapping might look like this:
-
-
-
- That is, a source value of
- Output:
- If the source data stream contains values which are not in the
- domain of the conversion map then an overflow exception is
- raised within the library, causing the application defined
- overflow handler to be invoked (see
- The HDF library will not provide conversions between enumerated
- data and integers although the application is free to do so
- (this is a policy we apply to all classes of HDF data
- types). However, since enumeration types are derived from
- integer types it is permissible to treat enumerated data as
- integers and perform integer conversions in that context.
-
- Symbol order is determined by the integer values associated
- with each symbol. When the integer data type is a native type,
- testing the relative order of two symbols is an easy process:
- simply compare the values of the symbols. If only the symbol
- names are available then the values must first be determined by
- calling When the underlying integer data type is not a native type then
- the easiest way to compare symbols is to first create a similar
- enumeration type that contains all the same symbols but has a
- native integer type (HDF type conversion features can be used to
- convert the non-native values to native values). Once we have a
- native type we can compare symbol order as just described. If
- It is also possible to convert enumerated data to a new type
- that has a different order defined for the symbols. For
- instance, we can define a new type,
- Output:
- The order that members are inserted into an enumeration type is
- unimportant; the important part is the associations between the
- symbol names and the values. Thus, two enumeration data types
- will be considered equal if and only if both types have the same
- symbol/value associations and both have equal underlying integer
- data types. Type equality is tested with the
- Although HDF enumeration data types are similar to C
-
- The examples below use the following C data types:
-
-
- An HDF enumeration data type can be created from a C
-
- Occasionally two applications wish to exchange data but they
- use different names for the constants they exchange. For
- instance, an English and a Spanish program may want to
- communicate color names although they use different symbols in
- the C
- Since symbol ordering is completely determined by the integer values
- assigned to each symbol in the For example, an application may be defined to use the
- definition of A case of this reordering of symbol names was also shown in the
- previous code snippet (as well as a change of language), where
- HDF changed the integer values so 0 ( In fact, the ability to change the order of symbols is often
- convenient when the enumeration type is used only to group
- related symbols that don't have any well defined order
- relationship.
-
- The HDF enumeration type conversion features can also be used
- to provide internationalization of debugging output. A program
- written with the
- The main goal of enumeration types is to provide communication
- of enumerated data using symbolic equivalence. That is, a
- symbol written to a dataset by one application should be read as
- the same symbol by some other application.
-
-
-
- (Return to Data Types Interface (H5T).)
-
-
- The HDF5 library contains a number of debugging features to
- make programmers' lives easier including the ability to print
- detailed error messages, check invariant conditions, display
- timings and other statistics, and trace API function calls and
- return values.
-
- The statistics and tracing can be displayed on any output
- stream (including streams opened by the shell) with output from
- different packages even going to different streams.
-
- By default any API function that fails will print an error
- stack to the standard error stream.
-
-
- The error handling package (H5E) is described
- elsewhere.
-
- To include checks for invariant conditions the library should
- be configured with
- Code to accumulate statistics is included at compile time by
- using the
- In addition to including the code at compile time the
- application must enable each package at runtime. This is done
- by listing the package names in the
- The components of the The HDF5 library can trace API calls by printing the
- function name, the argument names and their values, and the
- return value. Some people like to see lots of output during
- program execution instead of using a good symbolic debugger, and
- this feature is intended for their consumption. For example,
- the output from
- The code that performs the tracing must be included in the
- library by specifying the
- If the library was not configured for tracing then there is no
- unnecessary overhead since all tracing code is excluded.
- However, if tracing is enabled but not used there is a small
- penalty. First, code size is larger because of extra
- statically-declared character strings used to store argument
- types and names and an extra automatic pointer variable in each
- function. Also, execution is slower because each function sets
- and tests a local variable and each API function calls the
- If tracing is enabled and turned on then the penalties from the
- previous paragraph apply plus the time required to format each
- line of tracing information. There is also an extra call to
- H5_trace() for each API function to print the return value.
-
- The tracing mechanism is invoked for each API function before
- arguments are checked for validity. If bad arguments are passed
- to an API function it could result in a segmentation fault.
- However, the tracing output is line-buffered so all previous
- output will appear.
-
- There are two API functions that don't participate in
- tracing. They are On the other hand, a number of API functions are called during
- library initialization and they print tracing information.
-
- For those interested in the implementation here is a
- description. Each API function should have a call to one of the
- In order to keep the
- Note: The warning message is the result of a comment of the
- form Error messages have the same format as a compiler so that they
- can be parsed from program development environments like
- Emacs. Any function which generates an error will not be
- modified. When an error occurs deep within the HDF5 library a record is
- pushed onto an error stack and that function returns a failure
- indication. Its caller detects the failure, pushes another
- record onto the stack, and returns a failure indication. This
- continues until the application-called API function returns a
- failure indication (a negative integer or null pointer). The
- next API function which is called (with a few exceptions) resets
- the stack.
-
- In normal circumstances, an error causes the stack to be
- printed on the standard error stream. The first item, numbered
- "#000", is produced by the API function itself and is usually
- sufficient to indicate to the application programmer what went
- wrong.
-
-
- The error stack can also be printed and manipulated explicitly
- through the error API, but an application that makes explicit
- calls to print the stack should first disable automatic
- printing so that messages are not displayed twice.
-
- Sometimes an application will call a function for the sake of
- its return value, fully expecting the function to fail. Under
- these conditions, it would be misleading if an error message
- were automatically printed. Automatic printing of messages is
- controlled by the H5Eset_auto() function.
- An application can temporarily turn off error
- messages while "probing" a function.
-
- Or automatic printing can be disabled altogether and
- error messages can be explicitly printed.
-
- The application is allowed to define an automatic error
- traversal function other than the default, which prints the
- stack to the standard error stream.
-
- The application defines a function to print a simple
- error message to the standard error stream.
-
- The function is installed as the error handler by
- passing it to H5Eset_auto().
-
- The default error stack traversal callback is implemented
- in essentially the same way inside the library.
-
- This table shows some of the layers of HDF5. Each layer calls
- functions at the same or lower layers and never functions at
- higher layers. An object identifier (OID) takes various forms
- at the various layers: at layer 0 an OID is an absolute physical
- file address; at layers 1 and 2 it's an absolute virtual file
- address. At layers 3 through 6 it's a relative address, and at
- layers 7 and above it's an object handle.
-
- The simplest form of hdf5 file is a single file containing only
- hdf5 data. The file begins with the super block, which is
- followed until the end of the file by hdf5 data. The next most
- complicated file allows non-hdf5 data (user defined data or
- internal wrappers) to appear before the super block and after the
- end of the hdf5 data. The hdf5 data is treated as a single
- linear address space in both cases.
-
- The next level of complexity comes when non-hdf5 data is
- interspersed with the hdf5 data. We handle that by including
- the non-hdf5 interspersed data in the hdf5 address space and
- simply not referencing it (eventually we might add those
- addresses to a "do-not-disturb" list using the same mechanism as
- the hdf5 free list, but it's not absolutely necessary). This is
- implemented except for the "do-not-disturb" list.
-
- The most complicated single address space hdf5 file is when we
- allow the address space to be split among multiple physical
- files. For instance, a >2GB file can be split into smaller
- chunks and transferred to a 32-bit machine, then accessed as a
- single logical hdf5 file. The library already supports >32-bit
- addresses, so at layer 1 we split a 64-bit address into a 32-bit
- file number and a 32-bit offset (the 64 and 32 are
- arbitrary). The rest of the library still operates with a linear
- address space.
-
- Another variation might be a family of two files where all the
- meta data is stored in one file and all the raw data is stored
- in another file to allow the HDF5 wrapper to be easily replaced
- with some other wrapper.
-
- I've implemented fixed-size family members. The entire hdf5
- file is partitioned into members where each member is the same
- size. The family scheme is used if one passes a file name
- containing a printf-style integer format specifier to the open
- or create call.
- I haven't implemented a split meta/raw family yet but am rather
- curious to see how it would perform. I was planning to use the
- `.h5' extension for the meta data file and `.raw' for the raw
- data file. The high-order bit in the address would determine
- whether the address refers to meta data or raw data. If the user
- passes a name that ends with `.raw' to the open call, the split
- scheme would be used.
-
- We also need the ability to point to raw data that isn't in the
- HDF5 linear address space. For instance, a dataset might be
- striped across several raw data files.
-
- Fortunately, the only two packages that need to be aware of
- this are the packages for reading/writing contiguous raw data
- and discontiguous raw data. Since contiguous raw data is a
- special case, I'll discuss how to implement external raw data in
- the discontiguous case.
-
- Discontiguous data is stored as a B-tree whose keys are the
- chunk indices and whose leaf nodes point to the raw data by
- storing a file address. So what we need is some way to name the
- external files, and a way to efficiently store the external file
- name for each chunk.
-
- I propose adding to the object header an External File
- List message that is a 1-origin array of file names.
- Then, in the B-tree, each key has an index into the External
- File List (or zero for the HDF5 file) for the file where the
- chunk can be found. The external file index is only used at
- the leaf nodes to get to the raw data (the entire B-tree is in
- the HDF5 file) but because of the way keys are copied among
- the B-tree nodes, it's much easier to store the index with
- every key.
-
- One might also want to combine two or more HDF5 files in a
- manner similar to mounting file systems in Unix. That is, the
- group structure and meta data from one file appear as though
- they exist in the first file. One opens File-A, and then
- mounts File-B at some point in File-A, the mount
- point, so that traversing into the mount point actually
- causes one to enter the root object of File-B. File-A and
- File-B are each complete HDF5 files and can be accessed
- individually without mounting them.
-
- We need a couple of additional pieces of machinery to make this
- work. First, an haddr_t type (a file address) doesn't contain
- any information about which HDF5 file's address space the
- address belongs to. But since haddr_t is an opaque type except at
- layers 2 and below, it should be quite easy to add a pointer to
- the HDF5 file. This would also remove the H5F_t argument from
- most of the low-level functions since it would be part of the
- OID.
-
- The other thing we need is a table of mount points and some
- functions that understand them. We would add the following
- table to each H5F_t struct:
-
- I'm expecting to be able to implement the two new flavors of
- single linear address space in about two days. It took two hours
- to implement the malloc/free file driver at level zero and I
- don't expect this to be much more work.
-
- I'm expecting three days to implement the external raw data for
- discontiguous arrays. Adding the file index to the B-tree is
- quite trivial; adding the external file list message shouldn't
- be too hard since the object header message class from which this
- message derives is fully implemented; and changing the chunked
- I/O code to use the external file index should be straightforward.
- I'm expecting four days to implement being able to mount one
- HDF5 file on another. I was originally planning a lot more, but
- making haddr_t aware of its file should simplify things.
-
- The external raw data could be implemented as a single linear
- address space, but doing so would require one to allocate large
- enough file addresses throughout the file (>32bits) before the
- file was created. It would make mixing an HDF5 file family with
- external raw data, or an HDF5 wrapper around an HDF4 file,
- a more difficult process. So I consider the implementation of
- external raw data files as a single HDF5 linear address space a
- kludge.
-
- The ability to mount one HDF5 file on another might not be a
- very important feature especially since each HDF5 file must be a
- complete file by itself. It's not possible to stripe an array
- over multiple HDF5 files because the B-tree wouldn't be complete
- in any one file, so the only choice is to stripe the array
- across multiple raw data files and store the B-tree in the HDF5
- file. On the other hand, it might be useful if one file
- contains some public data which can be mounted by other files
- (e.g., a mesh topology shared among collaborators and mounted by
- files that contain other fields defined on the mesh). Of course
- the applications can open the two files separately, but it might
- be more portable if we support it in the library.
-
- So we're looking at about two weeks to implement all three
- versions. I didn't get a chance to do any of them in AIO
- although we had long-term plans for the first two with a
- possibility of the third. They'll be much easier to implement in
- HDF5 than AIO since I've been keeping these in mind from the
- start.
-
- HDF5 files are composed of a super block describing information
- required to portably access files on multiple platforms, followed
- by information about the groups and datasets in the file. The
- super block contains information about the sizes of object
- offsets and lengths, the number of entries in symbol tables
- (used to store groups), and additional version information for
- the file.
-
- The HDF5 library assumes that all files are implicitly opened
- for read access at all times. Passing H5F_ACC_RDWR to
- H5Fopen() allows write access to a file as well.
-
- Files are created with the H5Fcreate() function, and existing
- files are opened with H5Fopen(). Both functions return an
- object identifier which should eventually be released by
- calling H5Fclose(). Additional parameters to both functions
- accept property lists: file creation property lists apply only
- to H5Fcreate() and control how the file's meta data is laid
- out, while file access property lists apply to both functions
- and control how the file is accessed. For example, a file can
- be created with 64-bit object offsets and lengths by setting
- those sizes in a file creation property list, and an existing
- file can be opened for independent dataset access through MPI
- parallel I/O by supplying a suitable file access property list.
-
- HDF5 is able to access its address space through various types of
- low-level file drivers. For instance, an address space might
- correspond to a single file on a Unix file system, multiple files on a
- Unix file system, multiple files on a parallel file system, or a block
- of memory within the application. Generally, an HDF5 address space is
- referred to as an HDF5 file regardless of how the space is organized
- at the storage level.
-
- The sec2 driver uses functions from section 2 of the
- Posix manual to access files stored on a local file system:
- open(), lseek(), read(), write(), and close().
-
- The stdio driver uses the buffered I/O functions declared in
- stdio.h.
-
- The core driver uses malloc() and free() to hold the file image
- in memory, so the file exists only for the life of the program.
-
- The mpio driver uses MPI I/O to provide parallel access to a file.
-
- A single HDF5 address space may be split into multiple files which,
- together, form a file family. Each member of the family must be the
- same logical size, although the size and disk storage reported by
- the file system for each member may differ. Any HDF5 file can be
- split into a family of files by running the file through a
- splitting utility such as the Unix split(1) command.
-
- On occasion, it might be useful to separate meta data from raw
- data. The split driver does this by creating two files: one for
- meta data and another for raw data. The application provides a base
- file name to which the driver adds a meta-data extension and a
- raw-data extension.
-
- HDF5 allows chunked data[1]
- to pass through user-defined filters
- on the way to or from disk. The filters operate on the chunks
- of a dataset stored with a chunked layout.
- Each filter has a two-byte identification number (of type
- H5Z_filter_t).
-
- Two types of filters can be applied to raw data I/O: permanent
- filters and transient filters. The permanent filter pipeline is
- defined when the dataset is created, while the transient pipeline
- is defined for each I/O operation.
-
- The permanent filter pipeline is defined by calling
- H5Pset_filter() on a dataset creation property list.
- The flags argument to the functions above is a bit vector of
- the following fields:
-
-
- Each filter is bidirectional, handling both input and output to
- the file, and a flag is passed to the filter to indicate the
- direction. In either case the filter reads a chunk of data from
- a buffer, usually performs some sort of transformation on the
- data, places the result in the same or new buffer, and returns
- the buffer pointer and size to the caller. If something goes
- wrong the filter should return zero to indicate a failure.
-
- During output, a filter that fails or isn't defined and is
- marked as optional is silently excluded from the pipeline and
- will not be used when reading that chunk of data. A required
- filter that fails or isn't defined causes the entire output
- operation to fail. During input, any filter that has not been
- excluded from the pipeline during output and fails or is not
- defined will cause the entire input operation to fail.
-
- Filters are defined in two phases. The first phase is to
- define a function to act as the filter and link the function
- into the application. The second phase is to register the
- function, associating it with a filter identification number
- by calling H5Zregister(). A convenience function exists for
- adding the predefined deflate filter to a pipeline, and even a
- filter that is not available can appear in a pipeline if it is
- marked as optional.
-
- This example shows how to define and register a simple filter
- that adds a checksum capability to the data stream.
-
- The function that acts as the filter always returns zero
- (failure) if the checksum does not verify.
-
- Once the filter function is defined it must be registered so
- the HDF5 library knows about it. Since we're testing this
- filter we choose an identification number from the range
- reserved for testing.
- Now we can use the filter in a pipeline. We could have added
- the filter to the pipeline before defining or registering the
- filter as long as the filter was defined and registered by the time
- we tried to use it (if the filter is marked as optional then we
- could have used it without defining it and the library would
- have automatically removed it from the pipeline for each chunk
- written before the filter was defined and registered).
-
-
- If the library is compiled with debugging turned on for the H5Z
- layer (usually as a result of a configure-time debugging
- option), then statistics about the filters are printed when the
- library closes.
-
- Footnote 1: Dataset chunks can be compressed
- through the use of filters. Developers should be aware that
- reading and rewriting compressed chunked data can result in holes
- in an HDF5 file. In time, enough such holes can increase the
- file size enough to impair application or library performance
- when working with that file. See
- “Freespace Management”
- in the chapter
- “Performance Analysis and Issues.”
-
-
- An object in HDF5 consists of an object header at a fixed file
- address that contains messages describing various properties of
- the object such as its storage location, layout, compression,
- etc. and some of these messages point to other data such as the
- raw data of a dataset. The address of the object header is also
- known as an OID and HDF5 has facilities for translating
- names to OIDs.
-
- Every HDF5 object has at least one name and a set of names can
- be stored together in a group. Each group implements a name
- space in which names may be of any length and must be unique
- with respect to other names in the group.
-
- Since a group is a type of HDF5 object it has an object header
- and a name which exists as a member of some other group. In this
- way, groups can be linked together to form a directed graph.
- One particular group is called the Root Group and is
- the group to which the HDF5 file super block points. Its name is
- "/" by convention. The full name of an object is
- created by joining component names with slashes much like Unix.
-
-
- However, unlike Unix which arranges directories hierarchically,
- HDF5 arranges groups in a directed graph. Therefore, there is
- no ".." entry in a group since a group can have more than one
- parent. There is no "." entry either but the library understands
- it internally.
-
- HDF5 places few restrictions on names: component names may be
- any length except zero and may contain any character except
- slash ("/") and the null terminator. A full name may be
- composed of any number of component names separated by slashes,
- with any of the component names being the special name ".". A
- name which begins with a slash is an absolute name
- which is looked up beginning at the root group of the file while
- all other relative names are looked up beginning at the
- specified group.
- Multiple consecutive slashes in a full name are treated as
- single slashes and trailing slashes are not significant. A
- special case is the name "/" (or equivalent) which refers to the
- root group.
-
- Functions which operate on names generally take a location
- identifier which is either a file ID or a group ID and perform
- the lookup with respect to that location. Some possibilities
- are:
-
-
- Note, however, that object names within a group must be unique.
- For example, it is an error to create two objects with the
- same name in the same group.
-
- Groups are created with the H5Gcreate() function and existing
- groups are opened with H5Gopen(); both return a group
- identifier that should eventually be released with H5Gclose().
-
- An object (including a group) can have more than one
- name. Creating the object gives it the first name, and then
- functions described here can be used to give it additional
- names. The association between a name and the object is called
- a link and HDF5 supports two types of links: a hard
- link is a direct association between the name and the
- object where both exist in a single HDF5 address space, and a
- soft link is an indirect association.
-
-
-
- Objects can have a comment associated with them. The comment
- is set and queried with the H5Gset_comment() and
- H5Gget_comment() functions.
-
- Exercise caution when removing names: unlinking the last hard
- link to an object leaves the object inaccessible, and its file
- space is not necessarily reclaimed.
-
-
-
-
-
- HDF5 documents and links
-
- Introduction to HDF5
-
-
- HDF5 User's Guide
- HDF5 Reference Manual
- HDF5 Application Developer's Guide
-
-HDF5 Application Developer's Guide
-
-
-
-
-
-
-
- These documents provide information of particular interest to
- developers of applications that employ the HDF5 library.
-
-
-
-
-
-
-
- HDF5 Library Changes
-
- from Release to Release
- A summary of changes in the HDF5
- library
-
-
-
-
-
- Supported Configuration
-
- Features Summary
- A summary of configuration features
- supported in this release
-
- (external link)
-
-
-
-
-
- HDF5 Image and
-
- Palette Specification
- A specification for the implementation
- of images and palettes in HDF5 applications
-
-
-
-
-
- Mapping HDF4 Objects
-
- to HDF5 Objects
- Guidelines for translating
- HDF4 file objects into valid HDF5 file objects
- (PDF format only)
-
-
-
-
-
- Fill Value and Space
-
- Allocation Issues
- A summary of HDF5 fill value and storage allocation issues
- (external link)
-
-
-
- Fill Value and Space
-
- Allocation Behavior
- A table summarizing the behavioral interactions
- of HDF5 fill value and storage allocation settings
- (external link)
-
-
-
-
- SZIP Compression
-
-
- in HDF5
- A description of SZIP compression in HDF5,
- H5Pset_szip
, terms of use and copyright notice,
- and references
- (external link)
-
-
-
-
-
- Shuffle Performance
- An analysis of bzip and gzip compression
- performance in HDF5 with and without the shuffle filter,
- H5Pset_shuffle
- (external link)
-
-
-
-
-
- Generic Properties
- An overview of and the motivation for
- the implementation and use of generic properties in HDF5
- (external link)
-
-
-
-
-
- Error-detecting Codes
-
-
- for HDF5
- A discussion of error-detection codes,
- e.g., checksums, in HDF5
- (external link)
-
-
-
- Fletcher32 Checksum
-
-
- Design and Spec
- Design, API function specification, and test
- for the Fletcher32 checksum implementation in HDF5
- (external link)
-
-
-
-
-
-
- The HDF5 source code, as distributed to users and developers,
- contains two additional files that will be of interest to readers
- of this document. Both files are located at the top level of the
- HDF5 source code tree and are duplicated here for your reference:
-
-
-
- RELEASE.txt
-
- Technical notes regarding this release
-
-
-
-
-
-
-
-
- HISTORY.txt
-
- A release-by-release history of the HDF5 library
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
diff --git a/doc/html/ADGuide/Changes.html b/doc/html/ADGuide/Changes.html
deleted file mode 100755
index 813da28..0000000
--- a/doc/html/ADGuide/Changes.html
+++ /dev/null
@@ -1,1086 +0,0 @@
-
-
-
-
-
-
-HDF Help Desk
-
-
-Describes HDF5 Release 1.7, the unreleased development branch; working toward HDF5 Release 1.8.0
-
-
-Last modified: 3 July 2003
-
-
-Copyright
-
-
-
-
-
-
-HDF5 Software Changes from Release to Release
-Release 1.7.x (current release) versus Release 1.6
-
-Deleted Functions
- The following functions have been removed from HDF5 Release 1.7.x,
- as the GASS virtual file driver has been retired.
-
-
-
-
-
-
-
-
-
- H5Pget_fapl_gass
-
-
-
-
-
- H5Pset_fapl_gass
-
-
-
- Release 1.6.0 versus Release 1.4.5
-
-New Functions and Tools
-
-
-
-
-
-
-
-
-
- H5Dget_offset
hsize_t
- H5Dget_offset
(hid_t dset_id
)
-
-
- H5Dget_space_status
hid_t
-
- H5Dget_space_status
(hid_t
- dset_id
, H5D_space_status_t *status
)
-
-
-
- H5Fget_obj_ids
- int
- H5Fget_obj_ids
(hid_t file_id
,
- unsigned int types
,
- int max_objs
, hid_t *obj_id_list
)
-
-
-
- H5Fget_vfd_handle
- herr_t
- H5Fget_vfd_handle
(hid_t file_id
,
- hid_t fapl_id
, void *file_handle
)
-
-
-
- H5Gget_num_objs
- herr_t
- H5Gget_num_objs
(hid_t loc_id
,
- hsize_t* num_obj
)
-
-
-
- H5Gget_objname_by_idx
- ssize_t
- H5Gget_objname_by_idx
(hid_t group_id
,
- hsize_t idx
, char *name
,
- size_t* size
)
-
-
-
- H5Gget_objtype_by_idx
- int
- H5Gget_objtype_by_idx
(hid_t group_id
,
- hsize_t idx
)
-
-
-
- H5Iget_name
- ssize_t
- H5Iget_name
(hid_t obj_id
,
- char *name
, size_t size
)
-
-
-
- H5Pall_filters_avail
- htri_t
- H5Pall_filters_avail
(hid_t dcpl_id
)
-
-
-
- H5Pfill_value_defined
- herr_t
- H5Pfill_value_defined
(hid_t plist_id
,
- H5D_fill_value_t *status
)
-
-
-
- H5Pget_alloc_time
- herr_t
- H5Pget_alloc_time
(hid_t plist_id
,
- H5D_alloc_time_t *alloc_time
)
-
-
-
- H5Pget_edc_check
- H5Z_EDC_t
- H5Pget_edc_check
(hid_t
- plist
)
-
-
-
- H5Pget_family_offset
- herr_t
- H5Pget_family_offset
(hid_t fapl_id
,
- hsize_t *offset
)
-
-
-
- H5Pget_fapl_mpiposix
- herr_t
- H5Pget_fapl_mpiposix
(hid_t fapl_id
,
- MPI_Comm *comm
)
-
-
-
- H5Pget_fill_time
- herr_t
- H5Pget_fill_time
(hid_t plist_id
,
- H5D_fill_time_t *fill_time
)
-
-
-
- H5Pget_filter_by_id
- herr_t
- H5Pget_filter_by_id
(hid_t plist_id
,
- H5Z_filter_t filter
, unsigned int *flags
,
- size_t *cd_nelmts
, unsigned int cd_values[]
,
- size_t namelen
, char *name[]
)
-
-
-
- H5Pget_hyper_vector_size
- herr_t
- H5Pget_hyper_vector_size
(hid_t dxpl_id
,
- size_t *vector_size
)
-
-
-
- H5Pget_multi_type
- herr_t
- H5Pget_multi_type
(hid_t fapl_id
,
- H5FD_mem_t *type
)
-
-
-
- H5Pmodify_filter
- herr_t
- H5Pmodify_filter
(hid_t plist
,
- H5Z_filter_t filter
, unsigned int flags
,
- size_t cd_nelmts
, const unsigned int cd_values[]
)
-
-
-
- H5Pset_alloc_time
- herr_t
- H5Pset_alloc_time
(hid_t
- plist_id
, H5D_alloc_time_t alloc_time
)
-
-
-
- H5Pset_edc_check
- herr_t
- H5Pset_edc_check
(hid_t
- plist
, H5Z_EDC_t check
)
-
-
-
- H5Pset_family_offset
- herr_t
- H5Pset_family_offset
(hid_t fapl_id
,
- hsize_t offset
)
-
-
-
- H5Pset_fapl_mpiposix
- herr_t
- H5Pset_fapl_mpiposix
(hid_t fapl_id
,
- MPI_Comm comm
)
-
-
-
- H5Pset_fill_time
- herr_t
- H5Pset_fill_time
(hid_t plist_id
,
- H5D_fill_time_t fill_time
)
-
-
-
- H5Pset_filter
- herr_t
- H5Pset_filter
- (hid_t plist
, H5Z_filter_t filter
,
- unsigned int flags
, size_t cd_nelmts
,
- const unsigned int cd_values[])
-
-
-
- H5Pset_filter_callback
- herr_t
- H5Pset_filter_callback
(hid_t
- plist
, H5Z_filter_func_t func
,
- void *op_data
)
-
-
-
- H5Pset_fletcher32
- herr_t
- H5Pset_fletcher32
(hid_t
- plist
)
-
-
-
- H5Pset_hyper_vector_size
- herr_t
- H5Pset_hyper_vector_size
(hid_t dxpl_id
,
- size_t vector_size
)
-
-
-
- H5Pset_multi_type
- herr_t
- H5Pset_multi_type
(hid_t fapl_id
,
- H5FD_mem_t type
)
-
-
-
- H5Pset_shuffle
- herr_t
- H5Pset_shuffle
(hid_t plist_id
)
-
-
-
- H5Pset_szip
- herr_t
- H5Pset_szip
(hid_t plist
,
- unsigned int options_mask
, unsigned int
- pixels_per_block
)
-
-
-
- H5Rget_object_type
- int
- H5Rget_object_type
(hid_t id
,
- void *ref
)
-
-
-
- H5set_free_list_limits
- herr_t
- H5set_free_list_limits
(int reg_global_lim
,
- int reg_list_lim
, int arr_global_lim
,
- int arr_list_lim
, int blk_global_lim
,
- int blk_list_lim
)
-
-
-
- H5Sget_select_type
- H5S_sel_type
- H5Sget_select_type
(hid_t space_id
)
-
-
-
- H5Tdetect_class
- htri_t
- H5Tdetect_class
(hid_t dtype_id
,
- H5T_class_t dtype_class
)
-
-
-
- H5Tget_native_type
- hid_t
- H5Tget_native_type
(hid_t type_id
,
- H5T_direction_t direction
)
-
-
-
- H5Tis_variable_str
- htri_t
- H5Tis_variable_str
(hid_t dtype_id
)
-
-
-
- H5Zfilter_avail
- herr_t
- H5Zfilter_avail
(H5Z_filter_t filter
)
-
-
-
- H5Zunregister
- herr_t
- H5Zunregister
(H5Z_filter_t filter
)
- h5diff
- h5import
- h5fc
- h5c++
- h5perf
- h5redeploy
-
-Deprecated Functions
- The following functions are deprecated in HDF5 Release 1.6.0.
- A backward compatibility mode is provided in this release,
- enabling these functions and other Release 1.4.x compatibility
- features, but is available only when the HDF5 library is
- configured with the flag H5_WANT_H5_V1_4_COMPAT
.
- The backward compatibility mode is not enabled in the
- binaries distributed by NCSA.
-
-
-
-
-
-
-
-
-
- H5Pset_hyper_cache
-H5Pget_hyper_cache
-
-
-
-
-
- H5Rget_object_type
-
-
-
-
-
-
- Functions with Changed Syntax
- The following functions have changed as noted.
-
-
-
-
- H5FDflush and VFL "flush" callbacks
- A new parameter, closing, has been added to
- these functions,
- to allow the library to indicate that the file will be closed
- following the call to "flush". Actions in the "flush" call
- that are duplicated in the VFL "close" call may be omitted by
- the VFL driver.
- H5Gget_objtype_by_idx
- The return type has changed from int to
- the enumerated type H5G_obj_t.
- H5Pset(get)_buffer
- size
parameter for H5Pset_buffer
- has changed from type hsize_t
to
- size_t
.
- H5Pget_buffer
return type has similarly
- changed from hsize_t
to
- size_t
.
- H5Pset(get)_cache
- rdcc_nelmts
parameter has changed from type
- int
to
- size_t
.
- H5Pset_fapl_log
- The verbosity parameter has been removed.
- Two new parameters have been added:
- flags of type unsigned and
- buf_size of type size_t.
- H5Pset(get)_fapl_mpiposix
- use_gpfs
parameter of type
- hbool_t
has been added.
- H5Pset(get)_sieve_buf_size
- size
parameter has changed from type
- hsize_t
to
- size_t
.
- H5Pset(get)_sym_k
- lk
parameter has changed from type
- int
to
- unsigned
.
- H5Sget_select_bounds
- start
and end
parameters have
- changed from type hsize_t *
- to hssize_t *
to better match the
- rest of the dataspace API.
- H5Zregister
- This function now takes an H5Z_class_t struct, which includes
- new "set local" and "can apply" callback functions.
- h5pset(get)_fapl_core_f
- backing_store
parameter has changed from
- INTEGER
to LOGICAL
- to better match the C API.
- h5pset(get)_preserve_f
- flag
parameter has changed from
- INTEGER
to LOGICAL
- to better match the C API.
- The old behavior is available in the Release 1.4.x compatibility
- mode, enabled by configuring with H5_WANT_H5_V1_4_COMPAT.
- That mode is not enabled in the binaries distributed by NCSA and
- will eventually be removed from the HDF5 distribution.
-
-
- Constants with Changed Values
-
-
-
-Release 1.4.5 versus Release 1.4.4
-C Library
-
-
-
-
-
-herr_t H5Pset_fapl_mpiposix(hid_t fapl_id, MPI_Comm comm);
-herr_t H5Pget_fapl_mpiposix(hid_t fapl_id, MPI_Comm *comm/*out*/);
-
-
-
-
-
- H5Pset_fapl_mpio
- H5Pget_fapl_mpio
- H5Fcreate
- H5Fopen
- H5Fclose
-
- Previously, the Communicator and Info object arguments supplied
- to H5Pset_fapl_mpio
were stored in the property with
- its handle values.
- This meant changes to the communicator or the Info object
- after calling H5Pset_fapl_mpio
 would affect how
- the property list functioned.
- This was also the case when H5Fopen/create
operated.
- They just stored the handle value. This does not conform to the
- MPI-2 defined behavior for Info objects.
- (MPI-2 specifies that Info objects must be evaluated when they
- are passed.)
- H5Pset_fapl_mpio
now stores a duplicate of each of
- the communicator and Info object.
- H5Pget_fapl_mpio
now returns a duplicate of its
- stored communicator and Info object.
- It is now the responsibility of the applications to free
- those objects when done.
-
- H5Fcreate
and H5Fopen
also store
- a duplicate of the communicator and Info
- object supplied by the file access property List.
- H5Fclose
frees the duplicates.H5Pget_fapl_mpio
when they are
- no longer needed.
-
-
-
-
-None
-
-
-
-
-
-
-
- Fortran90 Library
-
-
- h5get_libversion_f, h5check_version_f, h5garbage_collect_f, h5dont_atexit_f
-
- h5tget_member_index_f, h5tvlen_create_f
-
- h5dget_storage_size_f, h5dvlen_get_max_len_f , h5dwrite_vl_f, h5dread_vl_f
-
-
- Only integer, real
and
- character
types are supported for VL datatypes.
-
- Release 1.4.4 versus Release 1.4.3
-C Library
-
-
-
-
-
-H5Pget_small_data_block_size
-H5Pset_small_data_block_size
-H5Tget_member_index
-
-
-
-
-
-
-
-
-
-
-
-None
-
-
-
-
-
-
-
-
-
-
-
-None
-
-
-
-
-
-
-
- Fortran90 Library
- h5dwrite_f, h5dread_f, h5awrite_f, and h5aread_f were
- overloaded so that the dims argument is an assumed-size array
- of type INTEGER(HSIZE_T). We recommend using the subroutines
- with the new type. Module subroutines that accept dims as an
- INTEGER array of size 7 will be deprecated in the 1.6.0
- release.
-
- Release 1.4.3 versus Release 1.4.2
-C Library
-
-
-
-
-
-H5Pset_fapl_dpss
-
-
-
-
-
-
-
-
-
-
-
-
- Fortran90 Library
- Release 1.4.2 versus Release 1.4.1
-C Library
- The HDF5 Release 1.4.2 C library is a "Bugfix Release";
- there are no API changes in the underlying HDF5 library.
-
- Fortran90 Library
- The following functions in the HDF5 Release 1.4.2 Fortran90 library
- have an additional parameter, dims
, that was not present
- in Release 1.4.1:
-
-
-h5aread_f(attr_id, memtype_id, buf, dims, hdferr)
-h5awrite_f(attr_id, memtype_id, buf, dims, hdferr)
-h5dread_f(dset_id, mem_type_id, buf, dims, hdferr, mem_space_id, &
- file_space_id, xfer_prp)
-h5dwrite_f(dset_id, mem_type_id, buf, dims, hdferr, mem_space_id, &
- file_space_id, xfer_prp)
-
-The new dims parameter enables library portability
- between the UNIX and Microsoft Windows platforms.
-
-Release 1.4.1 versus Release 1.4.0
-Release 1.4.0 versus Release 1.2.2
-
-New Functions
- The following functions are new for Release 1.4.0 and are included in the
- HDF5 Reference Manual.
-
-
-herr_t H5Dvlen_get_buf_size (hid_t dataset_id, hid_t type_id,
- hid_t space_id, hsize_t *size);
-herr_t H5Epush (const char *file, const char *func,
- unsigned line, H5E_major_t maj, H5E_minor_t min,
- const char *str);
-hid_t H5Pget_driver (hid_t plist_id);
-void *H5Pget_driver_info (hid_t plist_id);
-herr_t H5Pget_dxpl_mpio (hid_t dxpl_id,
- H5FD_mpio_xfer_t *xfer_mode/*out*/);
-herr_t H5Pget_dxpl_multi (hid_t dxpl_id,
- hid_t *memb_dxpl/*out*/);
-herr_t H5Pget_fapl_core (hid_t fapl_id, size_t *increment/*out*/,
- hbool_t *backing_store/*out*/)
-herr_t H5Pget_fapl_family (hid_t fapl_id,
- hsize_t *memb_size/*out*/, hid_t *memb_fapl_id/*out*/);
-herr_t H5Pget_fapl_mpio (hid_t fapl_id, MPI_Comm *comm/*out*/,
- MPI_Info *info/*out*/);
-herr_t H5Pget_fapl_multi (hid_t fapl_id,
- H5FD_mem_t *memb_map/*out*/, hid_t *memb_fapl/*out*/,
- char **memb_name/*out*/, haddr_t *memb_addr/*out*/,
- hbool_t *relax/*out*/);
-herr_t H5Pget_fapl_stream (hid_t fapl_id,
- H5FD_stream_fapl_t *fapl /*out*/ );
-herr_t H5Pget_meta_block_size (hid_t fapl_id,
- hsize_t *size/*out*/);
-herr_t H5Pget_sieve_buf_size (hid_t fapl_id,
- hsize_t *size/*out*/);
-herr_t H5Pset_driver (hid_t plist_id, hid_t driver_id,
- const void *driver_info);
-herr_t H5Pset_dxpl_mpio (hid_t dxpl_id,
- H5FD_mpio_xfer_t xfer_mode);
-herr_t H5Pset_dxpl_multi (hid_t dxpl_id,
- const hid_t *memb_dxpl);
-herr_t H5Pset_fapl_core (hid_t fapl_id, size_t increment,
- hbool_t backing_store)
-herr_t H5Pset_fapl_family (hid_t fapl_id, hsize_t memb_size,
- hid_t memb_fapl_id);
-herr_t H5Pset_fapl_log (hid_t fapl_id, char *logfile,
- int verbosity);
-herr_t H5Pset_fapl_mpio (hid_t fapl_id, MPI_Comm comm,
- MPI_Info info);
-herr_t H5Pset_fapl_multi (hid_t fapl_id,
- const H5FD_mem_t *memb_map, const hid_t *memb_fapl,
- const char **memb_name, const haddr_t *memb_addr,
- hbool_t relax);
-herr_t H5Pset_fapl_sec2 (hid_t fapl_id);
-herr_t H5Pset_fapl_split (hid_t fapl, const char *meta_ext,
- hid_t meta_plist_id, const char *raw_ext,
- hid_t raw_plist_id);
-herr_t H5Pset_fapl_stdio (hid_t fapl_id);
-herr_t H5Pset_fapl_stream (hid_t fapl_id,
- H5FD_stream_fapl_t *fapl);
-herr_t H5Pset_meta_block_size(hid_t fapl_id, hsize_t size);
-herr_t H5Pset_sieve_buf_size(hid_t fapl_id, hsize_t size);
-hid_t H5Tarray_create (hid_t base, int rank, const hsize_t dims[],
-    const int perm[]);
-int H5Tget_array_dims (hid_t adtype_id, hsize_t *dims[], int *perm[]);
-int H5Tget_array_ndims (hid_t adtype_id);
-
-
-herr_t H5Pget_fapl_dpss (hid_t fapl_id);
-herr_t H5Pget_fapl_gass (hid_t fapl_id, GASS_Info *info/*out*/);
-herr_t H5Pget_fapl_srb (hid_t fapl_id, SRB_Info *info);
-herr_t H5Pset_fapl_dpss (hid_t fapl_id);
-herr_t H5Pset_fapl_gass (hid_t fapl_id, GASS_Info info);
-herr_t H5Pset_fapl_srb (hid_t fapl_id, SRB_Info info);
-
-
-haddr_t H5FDalloc (H5FD_t *file, H5FD_mem_t type,
- hsize_t size);
-herr_t H5FDclose (H5FD_t *file);
-int H5FDcmp (const H5FD_t *f1, const H5FD_t *f2);
-herr_t H5FDflush (H5FD_t *file);
-herr_t H5FDfree (H5FD_t *file, H5FD_mem_t type,
- haddr_t addr, hsize_t size);
-haddr_t H5FDget_eoa (H5FD_t *file);
-haddr_t H5FDget_eof (H5FD_t *file);
-H5FD_t *H5FDopen (const char *name, unsigned flags,
- hid_t fapl_id, haddr_t maxaddr);
-int H5FDquery (const H5FD_t *f, unsigned long *flags);
-herr_t H5FDread (H5FD_t *file, hid_t dxpl_id, haddr_t addr,
- hsize_t size, void *buf/*out*/);
-haddr_t H5FDrealloc (H5FD_t *file, H5FD_mem_t type,
- haddr_t addr, hsize_t old_size, hsize_t new_size);
-hid_t H5FDregister (const H5FD_class_t *cls);
-herr_t H5FDset_eoa (H5FD_t *file, haddr_t eof);
-herr_t H5FDunregister (hid_t driver_id);
-herr_t H5FDwrite (H5FD_t *file, H5FD_mem_t type,
- hid_t dxpl_id, haddr_t addr, hsize_t size,
- const void *buf);
-
-Deleted Functions
- The following functions have been removed from the HDF5 library
- and from the HDF5 Reference Manual.
-
-
-
-
-
-H5Pget_core
-H5Pget_driver
-H5Pget_family
-H5Pget_mpi
-H5Pget_sec2
-H5Pget_split
-H5Pget_stdio
-H5Pget_xfer
-
-
-
-
-
-H5Pset_core
-H5Pset_family
-H5Pset_mpi
-H5Pset_sec2
-H5Pset_split
-H5Pset_stdio
-H5Pset_xfer
-
-
-
-
-
-H5RAclose
-H5RAcreate
-H5RAopen
-H5RAread
-H5RAwrite
-H5Tget_member_dims
-H5Tinsert_array
-
- Functions with Changed Syntax
- The following functions have changed slightly.
-
-
-
-
- H5Pget_buffer and H5Pset_buffer:
- the size parameter has changed to hsize_t.
- H5Tconvert:
- the nelmts parameter has changed to hsize_t.
- Constants with Changed Values
- The values of the constants H5P_DEFAULT and H5S_ALL
- have been changed from -2 to 0.
- The previous values had to be special-cased in situations where
- they could be returned, to distinguish them from error values.
-
-
-Migration from Release 1.2.2 to Release 1.4.x
-
-H5Tinsert_array
- The functionality of H5Tinsert_array has been replaced by
- H5Tarray_create. Here is an example of changing code from
- H5Tinsert_array to H5Tarray_create.
-
-V1.2.2
-{
- struct tmp_struct {
- int a;
- float f[3];
- double d[2][4];
- };
- size_t f_dims[1]={3};
- size_t d_dims[2]={2,4};
- hid_t compound_type;
-
- compound_type=H5Tcreate(H5T_COMPOUND,sizeof(struct tmp_struct));
- H5Tinsert(compound_type,"a",HOFFSET(struct tmp_struct,a),H5T_NATIVE_INT);
- H5Tinsert_array(compound_type,"f",HOFFSET(struct tmp_struct,f),1,f_dims,NULL,H5T_NATIVE_FLOAT);
- H5Tinsert_array(compound_type,"d",HOFFSET(struct tmp_struct,d),2,d_dims,NULL,H5T_NATIVE_DOUBLE);
-}
-
-V1.4.0
-{
- struct tmp_struct {
- int a;
- float f[3];
- double d[2][4];
- };
- hsize_t f_dims[1]={3};
- hsize_t d_dims[2]={2,4};
- hid_t compound_type;
- hid_t array_type;
-
- compound_type=H5Tcreate(H5T_COMPOUND,sizeof(struct tmp_struct));
- H5Tinsert(compound_type,"a",HOFFSET(struct tmp_struct,a),H5T_NATIVE_INT);
- array_type=H5Tarray_create(H5T_NATIVE_FLOAT,1,f_dims,NULL);
- H5Tinsert(compound_type,"f",HOFFSET(struct tmp_struct,f),array_type);
- H5Tclose(array_type);
- array_type=H5Tarray_create(H5T_NATIVE_DOUBLE,2,d_dims,NULL);
- H5Tinsert(compound_type,"d",HOFFSET(struct tmp_struct,d),array_type);
- H5Tclose(array_type);
-}
-
-
-This and Prior Releases: The RELEASE.txt and HISTORY.txt Files
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-HDF Help Desk
-
-Describes HDF5 Release 1.7, the unreleased development branch; working toward HDF5 Release 1.8.0
-
-
-Last modified: 3 March 2005
-
-
-
-
diff --git a/doc/html/ADGuide/H4toH5Mapping.doc b/doc/html/ADGuide/H4toH5Mapping.doc
deleted file mode 100755
index 2f9340f..0000000
Binary files a/doc/html/ADGuide/H4toH5Mapping.doc and /dev/null differ
diff --git a/doc/html/ADGuide/H4toH5Mapping.pdf b/doc/html/ADGuide/H4toH5Mapping.pdf
deleted file mode 100644
index 548912b..0000000
Binary files a/doc/html/ADGuide/H4toH5Mapping.pdf and /dev/null differ
diff --git a/doc/html/ADGuide/HISTORY.txt b/doc/html/ADGuide/HISTORY.txt
deleted file mode 100644
index b6f2585..0000000
--- a/doc/html/ADGuide/HISTORY.txt
+++ /dev/null
@@ -1,3180 +0,0 @@
-HDF5 HISTORY
-============
-This file contains history of the HDF5 libraries releases
-
-CONTENTS
-
-13. Release Information for hdf5-1.4.5
-12. Release Information for hdf5-1.4.4
-11. Release Information for hdf5-1.4.3
-10. Release Information for hdf5-1.4.2
-9. Release Information for hdf5-1.4.1
-8. Release Information for hdf5-1.4.0
-7. Release Information for hdf5-1.2.2
-6. Release Information for hdf5-1.2.1
-5. Release Information for hdf5-1.2.0
-4. Changes from Release 1.0.0 to Release 1.0.1
-3. Changes from the Beta 1.0.0 Release to Release 1.0.0
-2. Changes from the Second Alpha 1.0.0 Release to the Beta 1.0.0 Release
-1. Changes from the First Alpha 1.0.0 Release to the
- Second Alpha 1.0.0 Release
-
-[Search on the string '%%%%' for per-release section breaks.]
-
------------------------------------------------------------------------
-%%%%1.4.5%%%% Release Information for hdf5-1.4.5 (02/February/03)
-
-
-13. Release information for HDF5 version 1.4.5
-==============================================================================
-
-
-INTRODUCTION
-
-This document describes the differences between HDF5-1.4.4 and
-HDF5-1.4.5, and contains information on the platforms tested and
-known problems in HDF5-1.4.5. For additional information check the
-HISTORY.txt file in the HDF5 source.
-
-The HDF5 documentation can be found on the NCSA ftp server
-(ftp.ncsa.uiuc.edu) in the directory:
-
- /HDF/HDF5/docs/
-
-For more information, see the HDF5 home page at:
-
- http://hdf.ncsa.uiuc.edu/HDF5/
-
-If you have any questions or comments, please send them to:
-
- hdfhelp@ncsa.uiuc.edu
-
-
-CONTENTS
-
-- New Features
-- Bug Fixes since HDF5-1.4.4
-- Performance Improvements
-- Documentation
-- Platforms Tested
-- Supported Configuration Features
-- Known Problems
-
-
-New Features
-============
- o Configuration
- ================
- * Added "unofficial support" for building with a C++ compiler (or at least
- not failing badly when building with a C++ compiler). QAK - 2003/01/09
- * Added "unofficial support" for AIX 64bits. See INSTALL for configure
- details. AKC - 2002/08/29
- * Added "--with-dmalloc" flag, to easily enable support for the 'dmalloc'
- debugging malloc implementation. QAK - 2002/07/15
-
- o Library
- =========
- o General
- ---------
- * Allow scalar dataspaces to be used for parallel I/O. QAK - 2002/11/05
- * Added environment variable "HDF5_DISABLE_VERSION_CHECK", which disables
- the version checking between the header files and the library linked
- into an application if set to '1'. This should be used with caution,
- mis-matched headers and library binaries can cause _serious_ problems.
- QAK - 2002/10/15
- * Partially fixed space allocation inefficiencies in the file by
- improving our algorithms for re-using freed space. QAK - 2002/08/27
- * API tracing has been improved. Nested API calls don't screw up the
- output format; function call and return event times can be logged;
- total time spent in each function can be logged. The following
- HDF5_DEBUG environment variable words affect tracing:
- trace -- turn on/off basic tracing
- ttimes -- turn on tracing and report event times and
- time spent in each API function.
- ttop -- turn on tracing but display only top-level
- API calls.
-
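The tracing and version-check settings above are plain environment variables; a minimal POSIX-shell sketch of enabling them (variable names and words taken from the notes above; this only sets the environment, the actual effect requires an HDF5 1.4.5 application):

```shell
# Enable HDF5 API tracing with event times and per-function totals
# ("ttimes" is one of the HDF5_DEBUG words listed above).
export HDF5_DEBUG="ttimes"

# Disable the header/library version check. Use with caution:
# mismatched headers and library binaries can cause serious problems.
export HDF5_DISABLE_VERSION_CHECK=1

echo "HDF5_DEBUG=$HDF5_DEBUG"
echo "HDF5_DISABLE_VERSION_CHECK=$HDF5_DISABLE_VERSION_CHECK"
```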
- o APIs
- ------
- * Several missing fortran APIs have been added to the library:
-
- h5get_libversion_f h5tget_member_index_f h5dget_storage_size_f
- h5check_version_f h5tvlen_create_f h5dvlen_get_max_len_f
- h5garbage_collect_f h5dwrite_vl_f
- h5dont_atexit_f h5dread_vl_f
-
-      Functions h5dvlen_get_max_len_f, h5dwrite_vl_f, and h5dread_vl_f provide
-      the functionality of the C variable-length (VL) APIs for integer, real,
-      and string datatypes. See the HDF5 Reference Manual and the HDF5
-      FORTRAN90 User's Notes for more information and function descriptions.
-
- o Parallel library
- ==================
-    * The MPI-posix virtual file driver issues gpfs_fcntl() hints to tell
-      the underlying GPFS file system to avoid prefetching byte range
-      tokens, if USE_GPFS_HINTS is defined when this file is compiled.
-      This temporary solution is intended to be removed once the HDF5
-      API supports the functionality needed for this sort of thing
-      to be done at a higher software layer.
-      RPM - 2002/12/03
- * Added MPI-posix VFL driver. This VFL driver uses MPI functions to
- coordinate actions, but performs I/O directly with POSIX sec(2)
- (i.e. open/close/read/write/etc.) calls. This driver should _NOT_
- be used to access files that are not on a parallel filesystem.
- The following API functions were added:
- herr_t H5Pset_fapl_mpiposix(hid_t fapl_id, MPI_Comm comm);
- herr_t H5Pget_fapl_mpiposix(hid_t fapl_id, MPI_Comm *comm/*out*/);
- QAK - 2002/07/15
-
-
-
- o Support for new platforms and languages
- =========================================
-    * The C++ API now works on the Origin2000 (IRIX 6.5.14). BMR - 2002/11/14
-
-
- o Misc.
- =========================================
-    * HDF5 1.4.5 works with the Portland Group compilers (pgcc, pgf90,
-      and pgCC version 4.0-2) on Linux 2.4.
-
-
-Bug Fixes since HDF5-1.4.4 Release
-==================================
- * H5Fopen without the H5F_ACC_CREAT flag should not succeed in creating
- a new file with the 'core' VFL driver. QAK - 2003/01/24
- * Corrected metadata caching bug in parallel I/O which could cause hangs
- when chunked datasets were accessed with independent transfer mode.
- QAK - 2003/01/23
- * Allow opening objects with unknown object header messages.
- QAK - 2003/01/21
-    * Added improved error reporting for nil VL strings. The library now
-      returns an error stack instead of failing a simple assertion.
-      SLU - 2002/12/16
-    * Fixed an h5dump bug (failure to dump data and datatype) for VL strings.
-      SLU - 2002/11/18
- * Fixed error condition where "none" selections were not being handled
- correctly in serial & parallel. QAK - 2002/10/29
- * Fixed problem where optimized hyperslab routines were incorrectly
- invoked for parallel I/O operations in collective mode. QAK - 2002/07/22
- * Fixed metadata corruption problem which could occur when many objects
- are created in a file during parallel I/O. QAK - 2002/07/19
- * Fixed minor problem with configuration when users specified /usr/include
- and /usr/lib for the --with-* options that some compilers can't
- handle. BW - 2003/01/23
-
-
-
-Documentation
-=============
- New PDF files are not available for this release.
-
-
-Platforms Tested
-================
-
- AIX 5.1 (32 and 64-bit) C for AIX Compiler, Version 6
- xlf 8.1.0.2
- poe 3.2.0.11
- Cray T3E sn6606 2.0.6.08 Cray Standard C Version 6.6.0.1.3
- Cray Fortran Version 3.6.0.0.12
-    Cray SV1 10.0.1.0          Cray Standard C Version 6.6.0.1.3
- Cray Fortran Version 3.6.0.0.12
- Cray T90IEEE 10.0.1.01u Cray Standard C Version 6.4.0.2.3
- Cray Fortran Version 3.4.0.3
- FreeBSD 4.7 gcc 2.95.4
- g++ 2.95.5
- HP-UX B.11.00 HP C HP92453-01 A.11.01.20
- HP F90 v2.4
- IRIX 6.5 MIPSpro cc 7.30
- IRIX64 6.5 (64 & n32) MIPSpro cc 7.3.1.3m
- F90 MIPSpro 7.3.1.3m (64 only)
- Linux 2.4.18 gcc 3.2.1
- g++ 3.2.1
- Intel(R) C++ Version 6.0
- Intel(R) Fortran Compiler Version 6.0
- PGI compilers (pgcc, pgf90, pgCC) version 4.0-2
- pgf90 3.2-4
- OSF1 V5.1 Compaq C V6.4-014
- Compaq Fortran X5.4A-1684
- gcc version 3.0 for C++
- SunOS 5.7 WorkShop Compilers 5.0 98/12/15 C 5.0
- (Solaris 2.7) WorkShop Compilers 5.0 98/12/15 C++ 5.0
- WorkShop Compilers 5.0 98/10/25
- FORTRAN 90 2.0 Patch 107356-04
- SunOS 5.8/32 Sun WorkShop 6 update 1 C 5.2 2000/09/11
- (Solaris 2.8) Sun WorkShop 6 update 1 Fortran 95 6.1
- Patch 109503-07 2001/08/11
- Sun WorkShop 6 update 1 C++ 5.2 Patch
- 109508-04 2001/07/11
- SunOS 5.8/64 Sun WorkShop 6 update 1 C 5.2 2000/09/11
- (Solaris 2.8) Sun WorkShop 6 update 1 Fortran 95 6.1
- Patch 109503-07 2001/08/11
- Sun WorkShop 6 update 1 C++ 5.2 Patch
- 109508-04 2001/07/11
- TFLOPS r1.0.4 v4.3.3 i386 pgcc Rel 3.1-4i with mpich-1.2.4 with
- local modifications
- IA-32 Linux 2.4.9 gcc 2.96
- Intel(R) C++ Version 7.0
- Intel(R) Fortran Compiler Version 7.0
-
- IA-64 Linux 2.4.16 ia64 gcc version 2.96 20000731
- Intel(R) C++ Version 7.0
- Intel(R) Fortran Compiler Version 7.0
- Windows 2000 (NT5.0) MSVC++ 6.0
- DEC Visual Fortran 6.0
- Windows XP .NET
- Windows NT4.0 Code Warrior 6.0
- MAC OS X Darwin 6.2
- gcc and g++ Apple Computer, Inc. GCC
- version 1161, based on gcc version 3.1
-
-
-
-Supported Configuration Features Summary
-========================================
-
- In the tables below
- y = tested and supported
- n = not supported or not tested in this release
- x = not working in this release
- dna = does not apply
- ( ) = footnote appears below second table
-
-
- Platform C C F90 F90 C++ Shared zlib
- parallel parallel libraries (5)
- Solaris2.6 y n y n y y y
- Solaris2.7 64-bit y y (1) y y (1) y y y
- Solaris2.7 32-bit y y (1) y y (1) y y y
- Solaris2.8 64-bit y n y y (1) y y y
- Solaris2.8 32-bit y n y y (1) y y y
- IRIX6.5 y y (1) n n n y y
- IRIX64_6.5 64-bit y y (2) y y y y y
- IRIX64_6.5 32-bit y y (2) n n n y y
- HPUX11.00 y y (1) y n n y y
- OSF1 v5.1 y n y n y y y
- T3E (6) y n y n n n y
- SV1 y n y n n n y
- T90 IEEE y n y n n n y
- TFLOPS n y (1) n n n n y
- AIX-5.1 32-bit y y y y y n y
- AIX-5.1 64-bit y y y y y n y
- WinXP (7) y n n n y y y
- WinNT/2000 y n y n y y y
- WinNT CW y n n n n n y
- Mac OS X 10.2 y n n n y y y
- FreeBSD y y (1) n n y y y
- Linux 2.2 y y (1) y y (1) y y y
- Linux 2.4 gcc (3) y y (1) y n y y y
- Linux 2.4 Intel (3) y n y n n n y
- Linux 2.4 PGI (3) y n y n y n y
- Linux 2.4 IA32 y n y n n n y
- Linux 2.4 IA64 y n y n n n y
-
-
- Platform static- Thread- SRB GASS STREAM-
- exec safe VFD
- Solaris2.6 x y n n y
- Solaris2.7 64-bit x y n n y
- Solaris2.7 32-bit x y n n y
- Solaris2.8 64-bit x n n n y
- Solaris2.8 32-bit x y n n y
- IRIX6.5 x n n n y
- IRIX64_6.5 64-bit x y n y y
- IRIX64_6.5 32-bit x y n y y
- HPUX11.00 x n n n y
- OSF1 v5.1 y n n n y
- T3E (6) y n n n y
- SV1 y n n n y
- T90 IEEE y n n n y
- TFLOPS y n n n n
- AIX-5.1 32-bit y n n n y
- AIX-5.1 64-bit y n n n y
- WinXP (7) dna n n n n
- WinNT/2000 dna n n n n
- WinNT CW dna n n n n
- Mac OS X 10.2 y n n n y
- FreeBSD y y n n y
- Linux 2.2 y y n n y
- Linux 2.4 gcc (3) y y n n y
- Linux 2.4 Intel (3) y n n n y
- Linux 2.4 PGI (3) y n n n y
- Linux 2.4 IA32 y n n n y
- Linux 2.4 IA64 y n n n y
-
- Notes: (1) Using mpich 1.2.4.
- (2) Using mpt and mpich 1.2.4.
- (3) Linux 2.4 with GNU, Intel, and PGI compilers.
- (4) No HDF4-related tools.
- (5) Shared libraries are provided only for the C library,
- except on Windows where they are provided for all languages.
- (6) Debug mode only.
- (7) Binaries only; source code for this platform is not being
- released at this time.
-
-
-Known Problems
-==============
-
- * On Linux 2.4 IA64, Fortran test fails for h5dwrite_vl_f
- for integer and real base datatypes.
-
-    * When the Fortran library is built with the Intel compilers, compilation
-      of fflush1.f90, fflush2.f90, and fortanlib_test.f90 will fail,
-      complaining about the EXEC function. Comment out the call to the EXEC
-      subroutine in each program, or get a patch for the HDF5 Fortran source code.
-
- * Fortran external dataset test fails on Linux 2.4 with pgf90 compiler.
-
-    * On Windows, h5dump may abort printing if a VL string is longer than 4096
-      bytes, due to a compiler problem. This will be fixed in the v1.6 release.
-
- * Datasets or attributes which have a variable-length string datatype are
- not printing correctly with h5dump and h5ls.
-
- * When a dataset with the variable-length datatype is overwritten,
- the library can develop memory leaks that cause the file to become
- unnecessarily large. This is planned to be fixed in the next release.
-
- * On the SV1, the h5ls test fails due to a difference between the
- SV1 printf precision and the printf precision on other platforms.
-
- * The h5dump tests may fail to match the expected output on some
- platforms (e.g. SP2 parallel, Windows) where the error messages
- directed to "stderr" do not appear in the "right order" with output
- from stdout. This is not an error.
-
- * The --enable-static-exec configure flag fails to compile for HP-UX
- 11.00 platforms.
-
- * The executables are always dynamic on IRIX64 6.5(64 and n32) and
- IRIX 6.5 even if they are configured with --enable-static-exec.
-
- * IRIX 6.5 fails to compile if configured with --enable-static-exec.
-
-    * The executables are always dynamic on Solaris 2.7 and 2.8 (64 and n32)
-      even if they are configured with --enable-static-exec.
-
- * The HDF5_MPI_OPT_TYPES optimization code in the parallel HDF5 will cause
- a hang in some cases when chunked storage is used. This is now set to
- be off by default. One may turn it on by setting the environment
- variable HDF5_MPI_OPT_TYPES to a non-zero value such as 1.
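The workaround above amounts to setting a single environment variable before launching the parallel application; a minimal POSIX-shell sketch (the variable name and value are from the note above):

```shell
# Re-enable the HDF5_MPI_OPT_TYPES optimization described above.
# It is off by default in this release because it can hang with
# chunked storage; set it to a non-zero value such as 1 to turn it on.
export HDF5_MPI_OPT_TYPES=1
echo "HDF5_MPI_OPT_TYPES=$HDF5_MPI_OPT_TYPES"
```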
-
-    * On OSF1 v5.1 and IA32, the h5dumpgentst program, which generates test
-      files for h5dump, gives a segmentation fault.
-
- * On Windows platforms, C and Fortran tests fail with the debug DLL version
- of the Library if built from all_withf90.zip file.
-
-    * On Cray T3E (sn6606 2.0.6.08 unicosmk CRAY T3E) with Cray Standard C Version 6.6.0.1.3,
-      compiler optimization causes errors in many HDF5 Library tests. Use the -g -h zero
-      flags to build the HDF5 Library.
-
-    * On Cray SV1 10.0.1.0, the datatype conversion test fails. Please check the HDF FTP
-      site for a patch; we will try to provide one in the near future.
-
- * For configuration, building and testing with Intel and PGI compilers see
- corresponding section in INSTALL file.
-
-
-%%%%1.4.4%%%% Release Information for hdf5-1.4.4 (02/July/02)
-
-12. Release information for HDF5 version 1.4.4
-==============================================================================
-
-INTRODUCTION
-
-This document describes the differences between HDF5-1.4.3 and
-HDF5-1.4.4, and contains information on the platforms tested and
-known problems in HDF5-1.4.4. For more details check the HISTORY.txt
-file in the HDF5 source.
-
-The HDF5 documentation can be found on the NCSA ftp server
-(ftp.ncsa.uiuc.edu) in the directory:
-
- /HDF/HDF5/docs/
-
-For more information, see the HDF5 home page at:
-
- http://hdf.ncsa.uiuc.edu/HDF5/
-
-If you have any questions or comments, please send them to:
-
- hdfhelp@ncsa.uiuc.edu
-
-
-CONTENTS
-
-- New Features
-- Bug Fixes since HDF5-1.4.3
-- Performance Improvements
-- Documentation
-- Platforms Tested
-- Supported Configuration Features
-- Known Problems
-
-
-New Features
-============
- o Configuration
- ================
- * The H4 to H5 tools have been removed from the main source and placed
- in a separate package. You can get these tools from the HDF ftp site
- (ftp://hdf.ncsa.uiuc.edu/). The "--with-hdf4" command-line option
- during configure is no longer valid. BW - 2002/06/25
-
- o Library
- =========
- o General
- ---------
- * Fill-value forward-compatibility with release 1.5 was added. SLU -
- 2002/04/11
- * A new query function H5Tget_member_index has been added for compound
- and enumeration data types. This function retrieves a member's index
- by name. SLU - 2002/04/05
- * Added serial multi-gigabyte file size test. "test/big -h" shows
- the help page. AKC - 2002/03/29
-
- o APIs
- ------
-    * The F90 subroutines h5dwrite_f, h5dread_f, h5awrite_f, and h5aread_f
-      were overloaded with a "dims" argument of type INTEGER(HSIZE_T) to
-      specify the size of the array. We recommend using these subroutines
-      with the new type; module subroutines that accept "dims" as an
-      INTEGER array of size 7 will be deprecated in release 1.6.
-      EIP - 2002/05/06
-
- o Performance
- -------------
- * Added internal "small data" aggregation, which can reduce the number of
- actual I/O calls made, improving performance. QAK - 2002/06/05
- * Improved internal metadata aggregation, which can reduce the number of
- actual I/O calls made, improving performance. Additionally, this can
- reduce the size of files produced. QAK - 2002/06/04
- * Improved internal metadata caching, which can reduce the number of
- actual I/O calls made by a substantial amount, improving
- performance. QAK - 2002/06/03
-
-
- o Parallel library
- ==================
-    * Fixed a bug in the parallel I/O routines where a collective I/O using
-      MPI derived types, followed by an independent I/O, would cause the
-      library to hang. QAK 2002/06/24
- * Added environment variable flag to control whether creating MPI derived
- types is preferred or not. This can affect performance, depending on
- which way the MPI-I/O library is optimized. The default is set to
- prefer MPI derived types for collective raw data transfers; setting the
- HDF5_MPI_PREFER_DERIVED_TYPES environment variable to "0" (i.e.:
- "setenv HDF5_MPI_PREFER_DERIVED_TYPES 0") changes the preference to avoid
- using them whenever possible. QAK - 2002/06/19
- * Changed MPI I/O routines to avoid creating MPI derived types (and thus
- needing to set the file view) for contiguous selections within datasets.
- This should result in some performance improvement for those types of
- selections. QAK - 2002/06/18
- * Changed MPI type support for collective I/O to be enabled by default.
- This can be disabled by setting the HDF5_MPI_OPT_TYPES environment
- variable to the value "0". QAK - 2002/06/14
- * Allowed chunks in chunked datasets to be cached when parallel file is
- opened for read-only access (bug #709). QAK - 2002/06/10
- * Changed method for allocating chunked dataset blocks to only allocate
- blocks that don't already exist, instead of attempting to create all the
- blocks all the time. This improves performance for chunked
- datasets. QAK - 2002/05/17
- * Allowed the call to MPI_File_sync to be avoided when the file is going to
- immediately be closed, improving performance. QAK - 2002/05/13
- * Allowed the metadata writes to be shared among all processes, easing the
- burden on process 0. QAK - 2002/05/10
-
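The derived-type preference described above is likewise a single environment variable; a minimal POSIX-shell sketch equivalent to the csh `setenv` command quoted in the note:

```shell
# Tell HDF5 to avoid creating MPI derived types for collective raw
# data transfers (the default is to prefer them).
# csh form from the notes: setenv HDF5_MPI_PREFER_DERIVED_TYPES 0
export HDF5_MPI_PREFER_DERIVED_TYPES=0
echo "HDF5_MPI_PREFER_DERIVED_TYPES=$HDF5_MPI_PREFER_DERIVED_TYPES"
```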
-
- o Tools
- =======
- * h5redeploy utility was added. It updates HDF5 compiler tools
- after the HDF5 software has been installed in a new location.
-
-
- o Support for new platforms and languages
- =========================================
-    * The parallel Fortran library now works on HP-UX B.11.00 SysV.
-      EIP - 2002/05/06
- * Intel C++ and F90 compilers Version 6.0 are supported on Linux 2.4.
- * Intel C++ compilers Version 6.0 are supported on Windows 2000.
-
-
- o Misc.
- =========================================
-    * zlib has been moved out of the Windows source release. Users should go to
-      the zlib home page (http://www.zlib.org) to download the corresponding
-      zlib library.
- * The Windows binary release is built with the old version of the zlib
- library. We expect users to use zlib 1.1.4 to build with the source
- release.
- * In the Windows-specific install document, we specify how to test backward
- compatibility. However, in this release, we are not testing the backward
- compatibility of HDF5.
-
-
-Bug Fixes since HDF5-1.4.3 Release
-==================================
- * Fixed bug in chunking routines where they were using internal allocation
- free routines, instead of malloc/free, preventing user filters from
- working correctly. Chunks are now allocated/freed with malloc/free and
- so should the chunks in user filters. QAK 2002/06/18
- * Fixed bug where regular hyperslab selection could get incorrectly
- transferred when the number of elements in a row did not fit evenly
- into the buffer provided. QAK 2002/06/12
- * Fixed bug (#499) which allowed an "empty" compound or enumerated datatype
- (one with no members) to be used to create a dataset or to be committed
- to a file. QAK - 2002/06/11
- * Fixed bug (#777) which allowed a compound datatype to be inserted into
- itself. QAK - 2002/06/10
- * Fixed bug (#789) where creating 1-D dataset region reference caused the
- library to go into infinite loop. QAK - 2002/06/10
- * Fixed bug (#699, fix provided by a user) where a scalar dataspace was
- written to the file and then subsequently queried with the
- H5Sget_simple_extent_type function; type was reported as H5S_SIMPLE
- instead of H5S_SCALAR. EIP - 2002/06/04
- * Clear symbol table node "dirty" flag when flushing symbol tables to
- disk, to reduce I/O calls made & improve performance. QAK - 2002/06/03
- * Fixed bug where an object's header could get corrupted in certain
- obscure situations when many objects were created in the
- file. QAK - 2002/05/31
- * Fixed bug where read/write intent in file IDs created with H5Freopen
- was not being kept the same as the original file. QAK - 2002/05/14
- * Fixed bug where selection offsets were not being used when iterating
- through point and hyperslab selections with
- H5Diterate(). QAK - 2002/04/29
- * Fixed bug where the data for several level deep nested compound &
- variable-length datatypes used for datasets were getting corrupted when
- written to the file. QAK - 2002/04/17
- * Fixed bug where selection offset was being ignored for certain hyperslab
- selections when optimized I/O was being performed. QAK - 2002/04/02
-    * Fixed a limitation in h5dump with object names over 1024 characters
-      in length. Arbitrarily long object names are now handled.
-      BW - 2002/03/29
- * Fixed bug where variable-length string type did not behave as a
- string. SLU - 2002/03/28
- * Fixed bug in H5Gget_objinfo() which was not setting the 'fileno'
- of the H5G_stat_t struct. QAK - 2002/03/27
-    * Fixed a data corruption bug in the hyperslab routines that occurred
-      when reading a contiguous hyperslab that spans an entire dimension
-      and is larger than the type conversion buffer. QAK - 2002/03/26
-
-
-Performance Improvements
-========================
- This release of the HDF5 library has been extensively tuned to improve
-performance, especially to improve parallel I/O performance.
- Most of the specific information for particular performance improvements
-is mentioned in the "New Features" and "Bug Fixes since HDF5-1.4.3" sections
-of this document, but in general, the library should make fewer and larger
-I/O requests when accessing a file. Additionally, improvements to the parallel
-I/O portions of the library should have reduced the communications and barriers
-used in various internal algorithms, improving the performance of the library.
-    However, with the extensive changes to some portions of the library that
-were required for these improvements, some errors or unanticipated results
-may also have been introduced. Please report any problems encountered to our
-support team at hdfhelp@ncsa.uiuc.edu.
- Hopefully these improvements will benefit all HDF5 applications, but if
-there are particular I/O patterns that appear to be slower than necessary,
-please send e-mail to hdfhelp@ncsa.uiuc.edu with a sample program showing the
-problem behavior; we will look into the issue to see if it is possible to
-address it.
-
-
-Documentation
-=============
- * Documentation was updated for the hdf5-1.4.4 release.
- * A new "HDF5 User's Guide" is under development. See
- http://hdf.ncsa.uiuc.edu/HDF5/doc_dev_snapshot/H5_NewUG/current/.
- * A "Parallel HDF5 Tutorial" is available at
- http://hdf.ncsa.uiuc.edu/HDF5/doc/Tutor/.
- * The "HDF5 Tutorial" is not distributed with this release. It is
- available at http://hdf.ncsa.uiuc.edu/HDF5/doc/Tutor/.
-
-
-Platforms Tested
-================
-
- AIX 4.3.3.0 (IBM SP powerpc) xlc 5.0.2.0
- mpcc_r 5.0.2.0
- xlf 07.01.0000.0002
- mpxlf 07.01.0000.0002
- AIX 4.3 (IBM SP RS6000) C for AIX Compiler, Version 5.0.2.0
- xlf 7.1.0.2
- poe 3.1.0.12 (includes mpi)
- AIX 5.1 xlc 5.0.2.0
- xlf 07.01.0000.0002
- mpcc_r 5.0.2.0; mpxlf_r 07.01.0000.0002
- Cray T3E sn6711 2.0.5.57 Cray Standard C Version 6.5.0.3
- Cray Fortran Version 3.5.0.4
- Cray SV1 10.0.1.1 Cray Standard C Version 6.5.0.3
- Cray Fortran Version 3.5.0.4
- FreeBSD 4.6 gcc 2.95.4
- g++ 2.95.4
- HP-UX B.10.20 HP C HP92453-01 A.10.32.30
- HP F90 v2.3
- HP-UX B.11.00 HP C HP92453-01 A.11.01.20
- HP F90 v2.4
- HP-UX B.11.00 SysV HP C HP92453-01 A.11.01.20
- HP F90 v2.4
- HP MPI [not a product] (03/24/2000) B6060BA
- IRIX 6.5 MIPSpro cc 7.30
- IRIX64 6.5 (64 & n32) MIPSpro cc 7.3.1.3m
- F90 MIPSpro 7.3.1.3m (64 only)
- Linux 2.4.9-31smp gcc 2.95.3
- g++ 2.95.3
- Intel(R) C++ Version 6.0
- Intel(R) Fortran Compiler Version 6.0
- MPICH 1.2.2
- Linux 2.2.18smp gcc 2.95.2
- gcc 2.95.2 with mpich 1.2.1
- g++ 2.95.2
- pgf90 3.2-4
- OSF1 V5.1 Compaq C V6.4-014
- Compaq Fortran V5.5-1877-48BBF
- gcc version 3.0 for C++
- SunOS 5.7 WorkShop Compilers 5.0 98/12/15 C 5.0
- (Solaris 2.7) WorkShop Compilers 5.0 98/12/15 C++ 5.0
- WorkShop Compilers 5.0 98/10/25
- FORTRAN 90 2.0 Patch 107356-04
- SunOS 5.8/32 Sun WorkShop 6 update 1 C 5.2 2000/09/11
- (Solaris 2.8) Sun WorkShop 6 update 1 Fortran 95 6.1
- Patch 109503-07 2001/08/11
- Sun WorkShop 6 update 1 C++ 5.2 Patch
- 109508-04 2001/07/11
- SunOS 5.8/64 Sun WorkShop 6 update 1 C 5.2 2000/09/11
- (Solaris 2.8) Sun WorkShop 6 update 1 Fortran 95 6.1
- Patch 109503-07 2001/08/11
- Sun WorkShop 6 update 1 C++ 5.2 Patch
- 109508-04 2001/07/11
- TFLOPS r1.0.4 v4.2.2 i386 pgcc Rel 3.1-4i with mpich-1.2.3 with
- local modifications
- IA-32 Linux 2.4.9 cc Intel 5.0.1
- gcc 2.96
- Intel(R) C++ Version 6.0
- Intel(R) Fortran Compiler Version 6.0
-
- IA-64 Linux 2.4.16 ia64 gcc version 2.96 20000731
- Intel(R) C++ Version 6.0
- Intel(R) Fortran Compiler Version 6.0
- Windows 2000 (NT5.0) MSVC++ 6.0
- DEC Visual Fortran 6.0
- Windows NT4.0 MSVC++ 6.0
- DEC Visual Fortran 6.0
- Windows NT4.0 Code Warrior 6.0
-
-
-Supported Configuration Features Summary
-========================================
-
- In the tables below
- y = tested and supported
- n = not supported or not tested in this release
- x = not working in this release
- ( ) = footnote appears below second table
-
-
- Platform C C F90 F90 C++ Shared zlib Tools
- parallel parallel libraries(5)
- Solaris2.6 y n y n y y y y
- Solaris2.7 64 y y (1) y n y y y y
- Solaris2.7 32 y y (1) y n y y y y
- Solaris2.8 64 y n y n y y y y
- Solaris2.8 32 y n y n y y y y
- IRIX6.5 y y (1) n n n y y y
- IRIX64_6.5 64 y y (2) y y n y y y
- IRIX64_6.5 n32 y y (2) n n n y y y
- HPUX10.20 y n y n n y y y
- HPUX11.00 y n y n n y y y
- HPUX11 SysV y y y y n y y y
- OSF1 v5.1 y n y n y y y y
- T3E y y y y n n y y
- SV1 y n y n n n y y
- TFLOPS n y (1) n n n n y y (4)
- AIX-4.3 y y y y y n y y
- AIX-5.1 y y y y n n y y
- WinNT/2000 y n y n y y y y
- WinNT CW y n n n n n y y
- FreeBSD y n n n y y y y
- Linux 2.2 y y (1) y n y y y y
- Linux 2.4 y y (1) n n y y y y
- Linux 2.4 Intel(6) y n y n y n y y
- Linux 2.4 IA32 y n y n n n y y
- Linux 2.4 IA64 y n y n n n y y
-
-
- Platform 1.2 static- Thread- SRB GASS STREAM-
- compatibility exec safe VFD
- Solaris2.6 y x y n n y
- Solaris2.7 64 y x y n n y
- Solaris2.7 32 y x y n n y
- Solaris2.8 64 y y n n n y
- Solaris2.8 32 y x y n n y
- IRIX6.5 y x n n n y
- IRIX64_6.5 64 y x y n y y
- IRIX64_6.5 n32 y x y n y y
- HPUX10.20 y y n n n y
- HPUX11.00 y x n n n y
- HPUX11 SysV y x n n n y
- OSF1 v5.1 y y n n n y
- T3E y y n n n y
- SV1 y y n n n y
- TFLOPS y y n n n n
- AIX-4.3 y y (3) n n n y
- AIX-5.1 y y n n n y
- WinNT/2000 y y n n n n
- WinNT CW n n n n n n
- FreeBSD y y y n n y
- Linux 2.2 y y y n n y
- Linux 2.4 y y y n n y
- Linux 2.4 Intel(6) y y n n n y
- Linux 2.4 IA32 y y n n n y
- Linux 2.4 IA64 y y n n n y
-
-
- Footnotes: (1) Using mpich.
- (2) Using mpt and mpich.
- (3) When configured with static-exec enabled, tests fail in
- serial mode.
- (4) No HDF4-related tools.
- (5) Shared libraries are provided only for the C library,
- except on Windows where they are provided for all languages.
- (6) Linux 2.4 with Intel compilers.
-
-
-Known Problems
-==============
-
- * Datasets or attributes which have a variable-length string datatype are
- not printing correctly with h5dump and h5ls.
-
- * When a dataset with the variable-length datatype is overwritten,
- the library can develop memory leaks that cause the file to become
- unnecessarily large. This is planned to be fixed in the next release.
-
- * On the SV1, the h5ls test fails due to a difference between the
- SV1 printf precision and the printf precision on other platforms.
-
- * The h5dump tests may fail to match the expected output on some
- platforms (e.g. SP2 parallel, Windows) where the error messages
- directed to "stderr" do not appear in the "right order" with output
- from stdout. This is not an error.
-
- * The --enable-static-exec configure flag fails to compile for HP-UX
- 11.00 platforms.
-
-  * The executables are always dynamic on IRIX64 6.5 (64 and n32) and
- IRIX 6.5 even if they are configured with --enable-static-exec.
-
- * IRIX 6.5 fails to compile if configured with --enable-static-exec.
-
- * The HDF5_MPI_OPT_TYPES optimization code in the parallel HDF5 will cause
- a hang in some cases when chunked storage is used. This is now set to
- be off by default. One may turn it on by setting the environment
- variable HDF5_MPI_OPT_TYPES to a non-zero value such as 1.
-
- * On IA32 and IA64 systems, if you use a compiler other than GCC (such as
- Intel's ecc or icc compilers), you will need to modify the generated
- "libtool" program after configuration is finished. On or around line 104
- of the libtool file, there are lines which look like:
-
- # How to pass a linker flag through the compiler.
- wl=""
-
- Change these lines to this:
-
- # How to pass a linker flag through the compiler.
- wl="-Wl,"
-
-  * To build the Fortran library using Intel compilers, one has to
-    modify the source code in the fortran/src directory to remove the
-    !DEC and !MS compiler directives.
-    The build will fail in the fortran/test directory and then in the
-    fortran/examples directory; to proceed, edit the work.pcl files in
-    those directories to contain two lines:
-
- work.pc
- ../src/work.pc
-
-  * To build the Fortran library on IA64, use
-        setenv CC "ecc -DIA64"
-        setenv F9X "efc -cl,work.pcl"
-    before running configure, and follow the steps described above.
-
-
-%%%%1.4.3%%%% Release Information for hdf5-1.4.3 (18/February/02)
-
-11. Release information for HDF5 version 1.4.3
-==============================================================================
-
-
-INTRODUCTION
-
-This document describes the differences between HDF5-1.4.2 and
-HDF5-1.4.3, and contains information on the platforms tested and
-known problems in HDF5-1.4.3. For more details, check the HISTORY.txt
-file in the HDF5 source.
-
-The HDF5 documentation can be found on the NCSA ftp server
-(ftp.ncsa.uiuc.edu) in the directory:
-
- /HDF/HDF5/docs/
-
-For more information look at the HDF5 home page at:
-
- http://hdf.ncsa.uiuc.edu/HDF5/
-
-If you have any questions or comments, please send them to:
-
- hdfhelp@ncsa.uiuc.edu
-
-
-CONTENTS
-
-- New Features
-- Bug Fixes since HDF5-1.4.2
-- Documentation
-- Platforms Tested
-- Supported Configuration Features
-- Known Problems
-
-
-New Features
-============
- o Configuration
- ================
-    * Can use just --enable-threadsafe if the C compiler has built-in pthreads
- support.
-
- o Library
- =========
- o General
- ---------
- * Added a new test to verify the information provided by the configure
- command.
- * Changed internal error handling macros to reduce code size of library by
- about 10%.
-
- o APIs
- ------
- * Changed prototype for H5Awrite from:
- H5Awrite(hid_t attr_id, hid_t type_id, void *buf)
- to:
- H5Awrite(hid_t attr_id, hid_t type_id, const void *buf)
-    * H5Pset_fapl_split() now accepts raw and metadata file names with
-      syntax similar to that of H5Pset_fapl_multi(), in addition to what
-      it previously accepted.
-
- C++ API:
- * Added operator= to class PredType
-    * Added the overloaded member function Attribute::getName to return
-      the length of the attribute's name, as in the C API. Note that the
-      existing Attribute::getName, which returns a "string", is still
-      available.
- * Following the change in the C library, the corresponding C++ API
- is changed from:
- void Attribute::write( const DataType& mem_type, void *buf )
- to:
- void Attribute::write( const DataType& mem_type, const void *buf )
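
The const-qualified buffer parameter above is a source-compatibility improvement: write buffers can now be passed without casting away const. A minimal sketch of the pattern, using a hypothetical fake_awrite() stand-in rather than the real H5Awrite():

```c
#include <assert.h>
#include <string.h>

/* Hypothetical stand-in for an H5Awrite-style call. The const
 * qualifier promises the function will not modify the caller's
 * buffer; it may only read through the pointer. */
static size_t fake_awrite(char *dest, size_t cap, const void *buf, size_t n)
{
    if (n > cap)
        n = cap;
    memcpy(dest, buf, n);   /* reading through a const pointer is fine */
    return n;
}
```

With the old "void *" prototype, passing a const-qualified data buffer required a cast (or drew a compiler warning); with "const void *" the call compiles cleanly.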
-
- o Performance
- -------------
-    * Added programs to test HDF5 library performance. They are
-      installed in the perform/ directory.
- * Improved performance of byte-swapping during data conversions.
- * Improved performance of single, contiguous hyperslabs when reading or
- writing.
- * Added support to read/write portions of chunks directly, if they are
- uncompressed and too large to cache. This should speed up I/O on chunked
- datasets for a few more cases. -QAK, 1/31/02
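
The byte-swapping improvement applies to datatype conversion between big- and little-endian data. A hedged sketch of the basic operation being optimized (illustrative only, not HDF5's actual conversion code):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Reverse the byte order of one 32-bit value. */
static uint32_t swap32(uint32_t v)
{
    return ((v & 0x000000FFu) << 24) |
           ((v & 0x0000FF00u) <<  8) |
           ((v & 0x00FF0000u) >>  8) |
           ((v & 0xFF000000u) >> 24);
}

/* Swap an array in place, as a conversion pass over an I/O buffer
 * would; this inner loop is the kind of code such tuning targets. */
static void swap32_buf(uint32_t *buf, size_t n)
{
    for (size_t i = 0; i < n; i++)
        buf[i] = swap32(buf[i]);
}
```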
-
- o Parallel Library
- ==================
- * Parallel C HDF5 now works on HP-UX platforms, Compaq clusters,
- Linux clusters, Cplants (alpha-linux clusters).
-
- o Tools
- =======
- * A helper script called ``h5cc'', which helps compilation of HDF5
- programs, is now distributed with HDF5. See the reference manual
- for information on how to use this feature.
- * The H5Dumper can now dump comments associated with groups. -WCW 01-05-02
-
- o Support for new platforms and languages
- =========================================
- * HDF5 C++ Library is supported on Windows platforms (shared and static)
- * HDF5 F90 shared library is supported on Windows platforms.
- * HDF5 C Library is supported on IA32 and IA64 platforms.
-
-
-
-Bug Fixes since HDF5-1.4.2 Release
-==================================
-
- * Fixed a bug when reading chunked datasets where the edge of the dataset
- would be incorrectly detected and generate an assertion failure.
-  * Fixed a bug where reading an entire dataset wasn't handled
-    optimally when the dataset had unlimited dimensions. The dataset is
-    now read in a single low-level I/O operation instead of being broken
-    into separate pieces internally.
- * Fixed a bug where reading or writing chunked data which needed datatype
- conversion could result in data values getting corrupted.
- * Fixed a bug where appending a point selection to the current selection
- would not actually append the point when there were no points defined
- currently.
-  * Fixed a bug where 'or'ing a hyperslab with a 'none' selection would
-    fail. Now adds that hyperslab as the first hyperslab in the selection.
- * Fixed a bug in the 'big' test where quota limits weren't being detected
- properly if they caused close() to fail.
- * Fixed a bug in internal B-tree code where a B-tree was not being copied
- correctly.
-  * Fixed an off-by-one error in H5Sselect_valid that caused hyperslab
-    selections overlapping the edge of the dataspace by one element to
-    be reported as valid.
- * Fixed the internal macros used to encode & decode file metadata, to avoid
- an unaligned access warning on IA64 machines.
- * Corrected behavior of H5Tinsert to not allow compound datatype fields to
- be inserted past the end of the datatype.
- * Retired the DPSS virtual file driver (--with-gridstorage configure
- option).
-  * Fixed bug where variable-length datatypes for attributes were not
-    working correctly.
- * Fixed bug where raw data re-allocated from the free-list would sometimes
- overlap with the metadata accumulator and get corrupted. QAK - 1/23/02
- * Fixed bug where a preempted chunk in the chunk data could still be
- used by an internal pointer and cause an assertion failure or core
- dump. QAK - 2/13/02
- * Fixed bug where non-zero fill-value was not being read correctly from
- certain chunked datasets when using an "all" or contiguous hyperslab
- selection. QAK - 2/14/02
-
-
-Documentation
-=============
- * Documentation was updated for the hdf5-1.4.3 release.
- * A new "HDF5 User's Guide" is under development. See
- http://hdf.ncsa.uiuc.edu/HDF5/doc_dev_snapshot/H5_NewUG/current/.
- * Parallel Tutorial is available at http://hdf.ncsa.uiuc.edu/HDF5/doc/Tutor/
-
-
-Platforms Tested
-================
-
- AIX 4.3.3.0 (IBM SP powerpc) xlc 5.0.2.0
- mpcc_r 5.0.2.0
- xlf 07.01.0000.0002
- mpxlf 07.01.0000.0002
- AIX 4.3 (IBM SP RS6000) C for AIX Compiler, Version 5.0.2.0
- xlf 7.1.0.2
- poe 3.1.0.12 (includes mpi)
- Cray T3E sn6711 2.0.5.57 Cray Standard C Version 6.5.0.3
- Cray Fortran Version 3.5.0.4
- Cray SV1 10.0.0.8 Cray Standard C Version 6.5.0.3
- Cray Fortran Version 3.5.0.4
- FreeBSD 4.5 gcc 2.95.3
- g++ 2.95.3
- HP-UX B.10.20 HP C HP92453-01 A.10.32.30
- HP F90 v2.3
- HP-UX B.11.00 HP C HP92453-01 A.11.01.20
- HP F90 v2.4
- HP-UX B.11.00 SysV HP C HP92453-01 A.11.01.20
- HP F90 v2.4
- HP MPI [not a product] (03/24/2000) B6060BA
- IRIX 6.5 MIPSpro cc 7.30
- IRIX64 6.5 (64 & n32) MIPSpro cc 7.3.1.2m
- Linux 2.4.4 gcc 2.95.3
- g++ 2.95.3
- Linux 2.2.18smp gcc 2.95.2
- gcc 2.95.2 with mpich 1.2.1
- g++ 2.95.2
- pgf90 3.2-4
- OSF1 V5.1 Compaq C V6.3-028
- Compaq Fortran V5.4-1283
- SunOS 5.7 WorkShop Compilers 5.0 98/12/15 C 5.0
- (Solaris 2.7) Workshop Compilers 5.0 98/12/15 C++ 5.0
- Workshop Compilers 5.0 98/10/25
- FORTRAN 90 2.0 Patch 107356-04
- SunOS 5.8/32 Sun WorkShop 6 update 1 C 5.2 2000/09/11
- (Solaris 2.8) Sun WorkShop 6 update 1 Fortran 95 6.1
- Patch 109503-07 2001/08/11
- Sun WorkShop 6 update 1 C++ 5.2 Patch
- 109508-04 2001/07/11
- SunOS 5.8/64 Sun WorkShop 6 update 1 C 5.2 2000/09/11
- (Solaris 2.8) Sun WorkShop 6 update 1 Fortran 95 6.1
- Patch 109503-07 2001/08/11
- Sun WorkShop 6 update 1 C++ 5.2 Patch
- 109508-04 2001/07/11
- TFLOPS r1.0.4 v4.0.8 i386 pgcc Rel 3.1-4i with mpich-1.2.1 with
- local modifications
- IA-32 Linux 2.2.10smpx cc Intel 5.0.1
- egcs-2.91.66
- IA-64 Linux 2.4.16 ia64 gcc version 2.96 20000731
- Intel(R) C++ Itanium(TM) Compiler
- for the Itanium(TM)-based applications,
- Version 6.0 Beta, Build 20010905
- Windows 2000 (NT5.0) MSVC++ 6.0
- DEC Visual Fortran 6.0
- Windows NT4.0 MSVC++ 6.0
- DEC Visual Fortran 6.0
- Windows NT4.0 Code Warrior 6.0
- Windows 98 MSVC++ 6.0
- DEC Visual Fortran 6.0
-
-
-Supported Configuration Features Summary
-========================================
-
- In the tables below
- y = tested and supported
- n = not supported or not tested in this release
- x = not working in this release
- ( ) = footnote appears below second table
-
-
- Platform C C F90 F90 C++ Shared zlib Tools
- parallel parallel libraries
- (5)
- Solaris2.7 y y (1) y n y y y y
- Solaris2.8 64 y n y n y y y y
- Solaris2.8 32 y n y n y y y y
- IA-64 y n n n n n y y
- IRIX6.5 y y (1) n n n y y y
- IRIX64_6.5 64 y y (2) y y n y y y
- IRIX64_6.5 32 y y (2) n n n y y y
- HPUX10.20 y n y n n y y y
- HPUX11.00 y y y n n y y y
- HPUX11 SysV y y y n n y y y
- DECOSF y n y n y y y y
- T3E y y y y n n y y
- SV1 y n y n n n y y
- TFLOPS y y (1) n n n n y y (4)
- AIX-4.3 SP2 y y y y n n y n
- AIX-4.3 SP3 y y y y y n y n
- Win2000 y n y n y (6) y y y
- Win98 y n y n y (6) y y y
- WinNT y n y n y (6) y y y
- WinNT CW y n n n n n y y
- FreeBSD y n n n y y y y
- Linux 2.2 y y (1) y n y y y y
- Linux 2.4 y y (1) n n y y y y
-
-
- Platform 1.2 static- Thread- SRB GASS STREAM-
- compatibility exec safe VFD
- Solaris2.7 n x y n n y
- Solaris2.8 64 n y n n n y
- Solaris2.8 32 n x n n n y
- IA-64 n n n n n y
- IRIX6.5 n x y n n y
- IRIX64_6.5 64 n x y n y y
- IRIX64_6.5 32 n x y n y y
- HPUX10.20 n y n n n y
- HPUX11.00 n x n n n y
- HPUX11 SysV n x n n n y
- DECOSF n y n n n y
- T3E n y n n n y
- SV1 n y n n n y
- TFLOPS n y n n n n
- AIX-4.3 SP2 n y (3) n n n y
- AIX-4.3 SP3 n y n n n y
- Win2000 n y n n n n
- Win98 n y n n n n
- WinNT n y n n n n
- WinNT CW n n n n n n
- FreeBSD n y y n n y
- Linux 2.2 n y y n n y
- Linux 2.4 n y y n n y
-
-
- Footnotes: (1) Using mpich.
- (2) Using mpt and mpich.
- (3) When configured with static-exec enabled, tests fail
- in serial mode.
- (4) No HDF4-related tools.
- (5) Shared libraries are provided only for the C library.
-             (6) Exception to (5): a DLL is available for the C++ API
-                 on Windows.
-
-
-Known Problems
-==============
-
- * Datasets or attributes which have a variable-length string datatype are
- not printing correctly with h5dump and h5ls.
-
-  * When a dataset with a variable-length datatype is overwritten,
- the library can develop memory leaks that cause the file to become
- unnecessarily large. This is planned to be fixed in the next release.
-
- * On the SV1, the h5ls test fails due to a difference between the
- SV1 printf precision and the printf precision on other platforms.
-
-  * The h5dump tests may fail to match the expected output on some
- platforms (e.g. SP2 parallel, Windows) where the error messages
- directed to "stderr" do not appear in the "right order" with output
- from stdout. This is not an error.
-
- * The --enable-static-exec configure flag fails to compile for HP-UX
- 11.00 platforms.
-
-  * The executables are always dynamic on IRIX64 6.5 (64 and n32) and
- IRIX 6.5 even if they are configured with --enable-static-exec.
-
- * IRIX 6.5 fails to compile if configured with --enable-static-exec.
-
- * The HDF5_MPI_OPT_TYPES optimization code in the parallel HDF5 will cause
- a hang in some cases when chunked storage is used. This is now set to
- be off by default. One may turn it on by setting environment variable
- HDF5_MPI_OPT_TYPES to a non-zero value such as 1.
-
-  * On IA64 systems, one has to use the -DIA64 compilation flag to
-    compile the h4toh5 and h5toh4 utilities. After the configuration
-    step, manually modify the Makefiles in the tools/h4toh5 and
-    tools/h5toh4 directories to add -DIA64 to the compilation flags.
-
-  * On IA32 and IA64 systems, if you use a compiler other than GCC
- (such as Intel's ecc compiler), you will need to modify the generated
- "libtool" program after configuration is finished. On or around line 102
- of the libtool file, there are lines which look like:
-
- # How to pass a linker flag through the compiler.
- wl=""
-
-    Change these lines to this:
-
- # How to pass a linker flag through the compiler.
- wl="-Wl,"
-
-
-%%%%1.4.2%%%% Release Information for hdf5-1.4.2 (31/July/01)
-
-10. Release Information for hdf5-1.4.2
-=================================================================
-
-
-INTRODUCTION
-
-This document describes the differences between HDF5-1.4.1 and
-HDF5-1.4.2, and contains information on the platforms tested and
-known problems in HDF5-1.4.2.
-
-The HDF5 documentation can be found on the NCSA ftp server
-(ftp.ncsa.uiuc.edu) in the directory:
-
- /HDF/HDF5/docs/
-
-For more information look at the HDF5 home page at:
-
- http://hdf.ncsa.uiuc.edu/HDF5/
-
-If you have any questions or comments, please send them to:
-
- hdfhelp@ncsa.uiuc.edu
-
-
-CONTENTS
-
-- New Features
-- Bug Fixes since HDF5-1.4.1
-- Documentation
-- Platforms Tested
-- Supported Configuration Features
-- Known Problems
-
-
-New Features
-============
-
- * File sizes greater than 2GB are now supported on Linux systems with
- version 2.4.x or higher kernels.
-  * Added a global string variable H5_lib_vers_info_g which holds the
-    HDF5 library version information. It can be used to identify an
-    HDF5 library or HDF5 application binary.
-    Also added a verification of the consistency between
-    H5_lib_vers_info_g and other version information in the source code.
- * Parallel HDF5 now runs on the HP V2500 and HP N4000 machines.
- * F90 API:
-    - Added an additional parameter "dims" to the h5dread_f/h5dwrite_f and
- h5aread_f/h5awrite_f subroutines. This parameter is a 1-D array
- of size 7 and contains the sizes of the data buffer dimensions.
- This change enables portability between Windows and UNIX platforms.
- In previous versions of the F90 APIs, the data buffer parameters of
- the above functions were declared as assumed-shape arrays, which
- were passed to the C functions by a descriptor. There is no
- portable means, however, of passing descriptors from F90 to C,
- causing portability problems between Windows and UNIX and among
- UNIX platforms. With this change, the data buffers are assumed-
- size arrays, which can be portably passed to the C functions.
- * F90 static library is available on Windows platforms.
- See INSTALL_Windows_withF90.txt for details.
- * F90 APIs are available on HPUX 11.00 and 10.20 and IBM SP platforms.
-  * An H5 <-> GIF converter has been added; it is available under
-    tools/gifconv. The converter can also create animated GIFs.
- * Verified correct operation of library on Solaris 2.8 in both 64-bit and
- 32-bit compilation modes. See INSTALL document for instructions on
- compiling the distribution with 64-bit support.
- * Added support for the Metrowerks Code Warrior compiler for Windows.
-  * For the H4->H5 converter utility, added a new option to choose not
-    to convert HDF4-specified attributes (reference number, class) into
-    HDF5 attributes.
-  * Added support for chunking and compression of SDS and image data in
-    the H4->H5 converter. Currently HDF5 supports only gzip compression,
-    so by default an HDF4 file using any other compression method will
-    be converted into an HDF5 file with gzip compression.
-  * Corrected the order of reading the HDF4 image array in H4->H5
-    conversion.
-  * Added new parallel HDF5 tests in t_mpi. The new test checks whether
-    the filesystem or MPI-IO can really handle files greater than 2GB.
-    If it cannot, the test prints an informational message without
-    failing.
- * Added a parallel HDF5 example examples/ph5example.c to illustrate
- the basic way of using parallel HDF5.
-  * Added a new public macro, H5_VERS_INFO, which is a string holding
-    the HDF5 library version information. This string is also compiled
-    into all HDF5 binary code, which helps to identify the version of
-    the binary code. One may use the Unix strings command on the binary
-    file and look for the pattern "HDF5 library version".
-  * Added a new check in H5check_version() to verify that the five HDF5
-    version information macros (H5_VERS_MAJOR, H5_VERS_MINOR,
-    H5_VERS_RELEASE, H5_VERS_SUBRELEASE and H5_VERS_INFO) are consistent.
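
The consistency check above amounts to rebuilding a version string from the numeric macros and comparing it with the compiled-in info string. A simplified sketch of the idea (the macro values and checking logic here are illustrative, not copied from H5check_version()):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Illustrative values standing in for H5_VERS_MAJOR and friends. */
#define VERS_MAJOR   1
#define VERS_MINOR   4
#define VERS_RELEASE 2
#define VERS_INFO    "HDF5 library version: 1.4.2"

/* Rebuild a version string from the numeric macros and compare it with
 * the compiled-in info string; a mismatch means the macros disagree. */
static int version_consistent(void)
{
    char buf[64];
    snprintf(buf, sizeof buf, "HDF5 library version: %d.%d.%d",
             VERS_MAJOR, VERS_MINOR, VERS_RELEASE);
    return strcmp(buf, VERS_INFO) == 0;
}
```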
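
Large-file support on Linux hinges on the file offset type being 64 bits wide. A quick way to verify a toolchain is configured for it (the macro below is the standard glibc large-file-support switch, not an HDF5 API):

```c
/* Must be defined before any system header to request a 64-bit off_t
 * on 32-bit Linux systems; on 64-bit systems off_t is already 64-bit. */
#define _FILE_OFFSET_BITS 64

#include <assert.h>
#include <sys/types.h>

/* Returns nonzero when off_t can address files beyond 2 GB. */
static int have_large_files(void)
{
    return sizeof(off_t) >= 8;
}
```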
-
-
-Bug Fixes since HDF5-1.4.1 Release
-==================================
-
- * Fixed bug with non-zero userblock sizes causing raw data to not
- write correctly.
- * Fixed problems with Pablo build and linking with non-standard MPI I/O.
- * Fixed build on Linux systems with --enable-static-exec flag. It now
- works correctly.
- * IMPORTANT: Fixed file metadata corruption bug which could cause
- metadata data loss in certain situations.
-  * The allocation-by-alignment (H5Pset_alignment) feature code was
-    dropped in some 1.3.x version. It has been re-implemented with a
-    new and improved algorithm that also keeps track of "wasted" file
-    fragments in the free-list.
-  * Removed the limitation that the data transfer buffer size needed to
-    be set for datasets whose dimensions were too large for the 'all'
-    selection code to handle. Datasets with dimensions of any size
-    should be handled correctly now.
- * Changed behavior of H5Tget_member_type to correctly emulate HDF5 v1.2.x
- when --enable-hdf5v1_2 configure flag is enabled.
- * Added --enable-linux-lfs flag to allow more control over whether to
- enable or disable large file support on Linux.
-  * Fixed various bugs related to SDS dimensional scale conversions in
-    the H4->H5 converter.
- * Fixed a bug to correctly convert HDF4 objects with fill value into HDF5.
-  * Fixed a bug in H5pubconf.h causing repeated definitions if it was
-    included more than once. hdf5.h now includes H5public.h, which
-    includes H5pubconf.h. Applications should #include hdf5.h, which
-    handles multiple inclusion correctly.
-  * Fixed H5FDmpio.h to be C++ friendly by making the Parallel HDF5
-    APIs external to C++.
-  * Fixed a bug in H5FD_mpio_flush() that might result in a negative
-    file seek if both the MPIO and split-file drivers are used together.
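
The H5pubconf.h fix is an application of the classic include-guard pattern. A minimal sketch, simulating a double inclusion inside one translation unit (the guard macro name is made up for illustration):

```c
#include <assert.h>

/* First "inclusion" of the guarded header body. */
#ifndef EXAMPLE_CONF_H
#define EXAMPLE_CONF_H
static const int example_conf_value = 1;
#endif

/* Second "inclusion": the guard macro is already defined, so the
 * preprocessor skips the body and no duplicate definition occurs. */
#ifndef EXAMPLE_CONF_H
#define EXAMPLE_CONF_H
static const int example_conf_value = 2; /* never compiled */
#endif
```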
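
The re-implemented alignment feature rounds raw-data allocations up to an alignment boundary and records the skipped gap so the free-list can reuse it. The rounding arithmetic is simple (a hedged sketch, not the library's actual allocator code):

```c
#include <assert.h>
#include <stddef.h>

/* Round an offset up to the next multiple of align (align > 0). */
static size_t align_up(size_t offset, size_t align)
{
    return ((offset + align - 1) / align) * align;
}

/* Bytes "wasted" by the alignment; an allocator can keep this
 * fragment on a free-list instead of losing it. */
static size_t align_waste(size_t offset, size_t align)
{
    return align_up(offset, align) - offset;
}
```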
-
-
-
-Documentation
-=============
-
- * The H5T_conv_t and H5T_cdata_t structures are now properly defined
- in the H5Tregister entry in the "H5T" section of the "HDF5 Reference
- Manual" and described in detail in section 12, "Data Conversions," in
- the "Datatypes" chapter of the "HDF5 User's Guide."
- * The new tools h52gif and gif2h5 have been added to the "Tools" section
- of the Reference Manual.
- * A "Freespace Management" section has been added to the "Performance"
- chapter of the User's Guide.
- * Several user-reported bugs have been fixed since Release 1.4.1.
- * The "HDF5 Image and Palette Specification" (in the "HDF5 Application
- Developer's Guide") has been heavily revised. Based on extensive user
- feedback and input from visualization software developers, Version 1.2
- of the image specification is substantially different from prior
- versions.
-
-
-Platforms Tested
-================
-
- AIX 4.3.3.0 (IBM SP powerpc) xlc 3.6.6.0
- mpcc_r 3.6.6.0
- xlf 07.01.0000.0002
- mpxlf 07.01.0000.0002
- AIX 4.3 (IBM SP RS6000) C for AIX Compiler, Version 5.0.2.0
- xlf 7.1.0.2
- poe 2.4.0.14 (includes mpi)
- Cray T3E sn6711 2.0.5.49a Cray Standard C Version 6.5.0.1
- Cray SV1 10.0.0.2 Cray Standard C Version 6.5.0.1
- Cray Fortran Version 3.5.0.1
- FreeBSD 4.3 gcc 2.95.3
- g++ 2.95.3
- HP-UX B.10.20 HP C HP92453-01 A.10.32.30
- HP F90 v2.3
- HP-UX B.11.00 HP C HP92453-01 A.11.01.20
- HP F90 v2.4
- HP-UX B.11.00 SysV HP C HP92453-01 A.11.01.20
- HP F90 v2.4
- IRIX 6.5 MIPSpro cc 7.30
- IRIX64 6.5 (64 & n32) MIPSpro cc 7.3.1.2m
- Linux 2.4.4 gcc 2.95.3
- g++ 2.95.3
- Linux 2.2.18smp gcc 2.95.2
- gcc 2.95.2 with mpich 1.2.1
- g++ 2.95.2
- pgf90 3.2-4
- OSF1 V4.0 DEC-V5.2-040 on Digital UNIX V4.0 (Rev 564)
- Digital Fortran 90 V4.1-270
- SunOS 5.6 WorkShop Compilers 5.0 98/12/15 C 5.0
- (Solaris 2.6) WorkShop Compilers 5.0 98/10/25 FORTRAN 90
- 2.0 Patch 107356-04
- SunOS 5.7 WorkShop Compilers 5.0 98/12/15 C 5.0
- (Solaris 2.7) Workshop Compilers 5.0 98/12/15 C++ 5.0
- Workshop Compilers 5.0 98/10/25 FORTRAN 90
- 2.0 Patch 107356-04
- SunOS 5.8/32 Sun WorkShop 6 update 1 C 5.2 2000/09/11
- (Solaris 2.8) Sun WorkShop 6 update 1 Fortran 95 6.1
- 2000/09/11
- Sun WorkShop 6 update 1 C++ 5.2 2000/09/11
- SunOS 5.8/64 Sun WorkShop 6 update 1 C 5.2 2000/09/11
- (Solaris 2.8) Sun WorkShop 6 update 1 Fortran 95 6.1
- 2000/09/11
- Sun WorkShop 6 update 1 C++ 5.2 2000/09/11
- TFLOPS r1.0.4 v4.0.7 i386 pgcc Rel 3.1-4i with mpich-1.2.1 with
- local modifications
- Windows 2000 (NT5.0) MSVC++ 6.0
- Windows NT4.0 MSVC++ 6.0
- DEC Visual Fortran 6.0
- Windows NT4.0 Code Warrior 6.0
- Windows 98 MSVC++ 6.0
- DEC Visual Fortran 6.0
-
-
-Supported Configuration Features Summary
-========================================
-
- In the tables below
- y = tested and supported
- n = not supported or not tested in this release
- x = not working in this release
- ( ) = footnote appears below second table
-
-
- Platform C C F90 F90 C++ Shared zlib Tools
- parallel parallel libraries
- (5)
- Solaris2.6 y n y n y y y y
- Solaris2.7 y y (1) y n y y y y
- Solaris2.8 64 y n n n y y y y
- Solaris2.8 32 y n y n y y y y
- IRIX6.5 y y (1) n n n y y y
- IRIX64_6.5 64 y y (2) y y n y y y
- IRIX64_6.5 32 y y (2) n n n y y y
- HPUX10.20 y n y n n y y y
- HPUX11.00 y n y n n y y y
- HPUX11 SysV y n y n n y y y
- DECOSF y n y n n y y y
- T3E y y y y n n y y
- SV1 y n y n n n y y
- TFLOPS y y (1) n n n n y y (4)
- AIX-4.3 SP2 y y y y n n y n
- AIX-4.3 SP3 y y y y n n y n
- Win2000 y n n n n y y y
- Win98 y n y n n y y y
- WinNT y n y n n y y y
- WinNT CW y n n n n n y y
- FreeBSD y n n n y y y y
- Linux 2.2 y y (1) y n y y y y
- Linux 2.4 y y (1) n n y y y y
-
-
- Platform 1.2 static- Thread- SRB GASS STREAM-
- compatibility exec safe VFD
- Solaris2.6 y x n n n y
- Solaris2.7 y x y n n y
- Solaris2.8 64 y y n n n y
- Solaris2.8 32 y x n n n y
- IRIX6.5 y x y n n y
- IRIX64_6.5 64 y x n n n y
- IRIX64_6.5 32 y x n n n y
- HPUX10.20 y y n n n y
- HPUX11.00 y x n n n y
- HPUX11 SysV y x n n n y
- DECOSF y y n n n y
- T3E y y n n n y
- SV1 y y n n n y
- TFLOPS y y n n n n
- AIX-4.3 SP2 y y (3) n n n y
- AIX-4.3 SP3 y y n n n y
- Win2000 y y n n n n
- Win98 n y n n n n
- WinNT y y n n n n
- WinNT CW n n n n n n
- FreeBSD y y n n n y
- Linux 2.2 y y y n n y
- Linux 2.4 y y y n n y
-
-
- Footnotes: (1) Using mpich.
- (2) Using mpt and mpich.
- (3) When configured with static-exec enabled, tests fail
- in serial mode.
- (4) No HDF4-related tools.
- (5) Shared libraries are provided only for the C library.
-
-
-Known Problems
-==============
-
-  * When a dataset with a variable-length datatype is overwritten,
- the library can develop memory leaks that cause the file to become
- unnecessarily large. This is planned to be fixed in the next release.
-
- * On the SV1, the h5ls test fails due to a difference between the
- SV1 printf precision and the printf precision on other platforms.
-
-  * The h5dump tests may fail to match the expected output on some
- platforms (e.g. SP2 parallel, Windows) where the error messages
- directed to "stderr" do not appear in the "right order" with output
- from stdout. This is not an error.
-
- * The --enable-static-exec configure flag fails to compile for HP-UX
- 11.00 platforms.
-
-  * The executables are always dynamic on IRIX64 6.5 (64 and n32) and
- IRIX 6.5 even if they are configured with --enable-static-exec.
-
- * IRIX 6.5 fails to compile if configured with --enable-static-exec.
-
-  * For 24-bit image conversion from H4->H5, the current conversion is
-    not consistent with the HDF5 image specification.
-
-  * In some cases, an SDS with an UNLIMITED dimension that has not
-    been written (current size = 0) is not converted correctly.
-
- * After "make install" or "make install-doc" one may need to reload
- the source from the tar file before doing another build.
-
- * The HDF5_MPI_OPT_TYPES optimization code in the parallel HDF5 will cause
- a hang in some cases when chunked storage is used. This is now set to
- be off by default. One may turn it on by setting environment variable
- HDF5_MPI_OPT_TYPES to a non-zero value such as 1.
-
-%%%%1.4.1%%%% Release Information for hdf5-1.4.1 (April/01)
-
-9. Release Information for hdf5-1.4.1 (April/01)
-=====================================================================
-
-
-
- HDF5 Release 1.4.1
-
-
-INTRODUCTION
-
-This document describes the differences between HDF5-1.4.0 and
-HDF5-1.4.1, and contains information on the platforms tested and
-known problems in HDF5-1.4.1.
-
-The HDF5 documentation can be found on the NCSA ftp server
-(ftp.ncsa.uiuc.edu) in the directory:
-
- /HDF/HDF5/docs/
-
-For more information look at the HDF5 home page at:
-
- http://hdf.ncsa.uiuc.edu/HDF5/
-
-If you have any questions or comments, please send them to:
-
- hdfhelp@ncsa.uiuc.edu
-
-
-CONTENTS
-
-- New Features
-- Bug Fixes since HDF5-1.4.0
-- Documentation
-- Platforms Tested
-- Supported Configuration Features
-- Known Problems
-
-
-New Features
-============
-
- * XML output option for h5dump utility.
-
- A new option --xml to output data in XML format has been added. The
- XML output contains a complete description of the file, marked up in
- XML.
-
- The XML conforms to the HDF5 Document Type Definition (DTD), which
- is available at:
-
- http://hdf.ncsa.uiuc.edu/DTDs/HDF5-File.dtd
-
- The XML output is suitable for use with other tools, including the
- Java Tools:
-
- http://hdf.ncsa.uiuc.edu/java-hdf5-html
-
-
-Bug Fixes since HDF5-1.4.0 Release
-==================================
-
- * h4toh5 utility: conversion of images is fixed
-
- Earlier releases of the h4toh5 utility produced images that did not
- correctly conform to the HDF5 Image and Palette Specification.
-
- http://hdf.ncsa.uiuc.edu/HDF5/doc/ImageSpec.html
-
-    Several required HDF5 attributes were omitted, and the dataspace
-    was reversed (i.e., the height and width of the image dataset were
-    incorrectly described). For more information, please see:
-
- http://hdf.ncsa.uiuc.edu/HDF5/H5Image/ImageDetails.htm
-
- * Fixed bug with contiguous hyperslabs not being detected, causing
- slower I/O than necessary.
-  * Fixed bug where non-aligned hyperslab I/O on chunked datasets was
-    causing errors during I/O.
-  * The RCSID string in H5public.h was causing a C++ compilation
-    problem: when the header was included multiple times, C++ rejected
-    the multiple definitions of the same static variable. All
-    occurrences of the RCSID definition have been removed, since it had
-    not been used consistently before.
-
-
-Documentation
-=============
-
- PDF and Postscript versions of the following documents are available
- for this release:
- Document Filename
- -------- --------
- Introduction to HDF5 H5-R141-Introduction.pdf
- HDF5 Reference Manual H5-R141-RefManual.pdf
- C++ APIs to HDF5 documents H5-R141-Cplusplus.pdf
- Fortran90 APIs to HDF5 documents H5-R141-Fortran90.pdf
-
- PDF and Postscript files containing H5-R141-DocSet.pdf
- all of the above H5-R141-DocSet.ps
-
- These files are not included in this distribution, but are available
- via the Web or FTP at the following locations:
- http://hdf.ncsa.uiuc.edu/HDF5/doc/PSandPDF/
- ftp://ftp.ncsa.uiuc.edu/HDF/HDF5/docs/
-
- While these documents are labeled Release 1.4.1, they describe
- Release 1.4.0 as well.
-
-
-Platforms Tested
-================
-
-Due to the nature of this release, only the C and C++ libraries and tools were tested.
-
- AIX 4.3.3.0 (IBM SP powerpc) xlc 3.6.6
- mpcc_r 3.6.6
- Cray T3E sn6711 2.0.5.47 Cray Standard C Version 6.5.0.0
- Cray SV1 10.0.0.8 Cray Standard C Version 6.5.0.0
- FreeBSD 4.3 gcc 2.95.2
- HP-UX B.10.20 HP C HP92453-01 A.10.32.30
- HP-UX B.11.00 HP C HP92453-01 A.11.01.20
- IRIX 6.5 MIPSpro cc 7.30
- IRIX64 6.5 (64 & n32) MIPSpro cc 7.3.1m
- Linux 2.2.18smp gcc-2.95.2
- g++ 2.95.2
- OSF1 V4.0 DEC-V5.2-040
- Digital Fortran 90 V4.1-270
- SunOS 5.6 WorkShop Compilers 5.0 98/12/15 C 5.0
- (Solaris 2.6)
-
- SunOS 5.7 WorkShop Compilers 5.0 98/12/15 C 5.0
- (Solaris 2.7) Workshop Compilers 5.0 98/12/15 C++ 5.0
- TFLOPS r1.0.4 v4.0 mpich-1.2.1 with local changes
- Windows NT4.0, 2000 (NT5.0) MSVC++ 6.0
- Windows 98 MSVC++ 6.0
-
-
-Supported Configuration Features Summary
-========================================
-
- * See "Supported Configuration Features Summary" section for the HDF5
- 1.4.0 release in the HISTORY.txt file.
-
-Known Problems
-==============
-
- * The h5dump tests may fail to match the expected output in some
- platforms (e.g. SP2 parallel, Windows) where the error messages
- directed to "stderr" do not appear in the "right order" with output
- from stdout. This is not an error.
-
- * The --enable-static-exec configure flag fails to compile for HP-UX
- 11.00 platforms.
-
-  * The executables are always dynamic on IRIX64 6.5 (64 and n32) and
-    IRIX 6.5 even if they are configured with --enable-static-exec.
-
- * The shared library failed compilation on IRIX 6.5.
-
- * After "make install" or "make install-doc" one may need to reload the source
- from the tar file before doing another build.
-
- * See "Known problems" section for the HDF5 1.4.0 release in the
- HISTORY.txt file.
-
-%%%%1.4.0%%%% Release Information for hdf5-1.4.0 (2/22/01)
-
-8. Release Information for hdf5-1.4.0
-===================================================================
-
- HDF5 Release 1.4.0
-
-
-INTRODUCTION
-
-This document describes the differences between HDF5-1.2.0 and
-HDF5-1.4.0, and contains information on the platforms tested and
-known problems in HDF5-1.4.0. For more details check the HISTORY.txt
-file in the HDF5 source.
-
-The HDF5 documentation can be found on the NCSA ftp server
-(ftp.ncsa.uiuc.edu) in the directory:
-
- /HDF/HDF5/docs/
-
-For more information look at the HDF5 home page at:
-
- http://hdf.ncsa.uiuc.edu/HDF5/
-
-If you have any questions or comments, please send them to:
-
- hdfhelp@ncsa.uiuc.edu
-
-
-CONTENTS
-
-- New Features
-- h4toh5 Utility
-- F90 Support
-- C++ Support
-- Pablo Support
-- Bug Fixes since HDF5-1.2.0
-- Bug Fixes since HDF5-1.4.0-beta2
-- Bug Fixes since HDF5-1.4.0
-- Documentation
-- Platforms Tested
-- Supported Configuration Features
-- Known Problems
-
-
-New Features
-============
- * The Virtual File Layer (VFL) was added to replace the old file
- drivers. It also provides an API for user-defined file drivers.
- * New features added to snapshots. Use 'snapshot help' to see a
- complete list of features.
- * Improved configure to detect if MPIO routines are available when
- parallel mode is requested.
- * Added Thread-Safe support. Phase I implemented. See:
-
- http://hdf.ncsa.uiuc.edu/HDF5/papers/mthdf/MTHDFpaper.htm
-
- for more details.
- * Added data sieve buffering to raw data I/O path. This is enabled
- for all VFL drivers except the mpio & core drivers. Setting the
- sieve buffer size is controlled with the new API function,
- H5Pset_sieve_buf_size(), and retrieved with H5Pget_sieve_buf_size().
- * Added new Virtual File Driver, Stream VFD, to send/receive entire
- HDF5 files via socket connections.
- * As part of the VFL, HDF-GASS and HDF-SRB are also added to this
- release. For details, please read the INSTALL_VFL file.
- * Increased maximum number of dimensions for a dataset (H5S_MAX_RANK)
- from 31 to 32 to align with HDF4 & netCDF.
- * Added 'query' function to VFL drivers. Also added 'type' parameter to
- VFL 'read' & 'write' calls, so they are aware of the type of data
- being accessed in the file. Updated the VFL document also.
- * A new h4toh5 utility, to convert HDF4 files to analogous HDF5 files.
- * Added a new array datatype to the datatypes which can be created.
- Removed "array fields" from compound datatypes (use an array datatype
- instead).
- * Parallel HDF5 works correctly with mpich-1.2.1 on Solaris, SGI, Linux.
- * You can now install the HDF5 documentation using the
- ``make install-doc'' command. The documentation is installed in the
- $(prefix)/doc directory where $(prefix) is the prefix specified by
- the (optional) ``--prefix'' flag during configuration.
- * HDF5 can operate correctly in the OpenMP environment in a limited way.
- Check doc/html/TechNotes/openmp-hdf5.html for details.
-
-
-h4toh5 Utility
-==============
- The h4toh5 utility is a new utility that converts an HDF4 file to an
- HDF5 file. For details, see the document, "Mapping HDF4 Objects to
- HDF5 Objects":
- http://hdf.ncsa.uiuc.edu/HDF5/papers/H4-H5MappingGuidelines.pdf
-
- Known Bugs:
-
- The h4toh5 utility produces images that do not correctly conform
- to the HDF5 Image and Palette Specification.
-
- http://hdf.ncsa.uiuc.edu/HDF5/doc/ImageSpec.html
-
- Several required HDF5 attributes are omitted, and the dataspace
- is reversed (i.e., the height and width of the image dataset are
- incorrectly described). For more information, please see:
-
- http://hdf.ncsa.uiuc.edu/HDF5/H5Image/ImageDetails.html
-
- This bug has been fixed in the snapshot of the hdf5 1.4 release (March 12, 2001).
-
- Known Limitations of the h4toh5 release
- ---------------------------------------------
-
- 1. Error handling
-
- The h4toh5 utility prints an error message when an error occurs.
-
- 2. String Datatype
-
- HDF4 has no 'string' type. String-valued data are usually defined as
- an array of 'char' in HDF4. The h4toh5 utility will generally map
- these to HDF5 'String' types rather than arrays of char, with the
- following additional rules:
-
- * For the data of an HDF4 SDS, image, and palette, if the data is
- declared 'DFNT_CHAR8' it will be assumed to be integer and will
- be an H5T_INTEGER type.
- * For attributes of any HDF4 object, data of type 'DFNT_CHAR8'
- will be converted to an HDF5 'H5T_STRING' type.
- * For an HDF4 Vdata, it is difficult to determine whether data
- of type 'DFNT_CHAR8' is intended to be bytes or characters. The
- h4toh5 utility will consider them to be C characters, and will
- convert them to an HDF5 'H5T_STRING' type.
-
-
- 3. Compression, Chunking and External Storage
-
- Chunking is supported, but compression and external storage are not.
-
- An HDF4 object that uses chunking will be converted to an HDF5 file
- with analogous chunked storage.
-
- An HDF4 object that uses compression will be converted to an
- uncompressed HDF5 object.
-
- An HDF4 object that uses external storage will be converted to an
- HDF5 object without external storage.
-
- 4. Memory Use
-
- This version of the h4toh5 utility copies data from HDF4 objects
- in a single read followed by a single write to the HDF5 object. For
- large objects, this requires a very large amount of memory, which may
- be extremely slow or fail on some platforms.
-
- Note that a dataset that has only been partly written will
- be read completely, including uninitialized data, and all the
- data will be written to the HDF5 object.
-
- 5. Platforms
-
- The h4toh5 utility requires HDF5-1.4.0 and HDF4r1.4.
-
- The h4toh5 utility has been tested on all platforms listed below (see
- section "Platforms Tested") except TFLOPS.
-
-
-F90 Support
-===========
- This is the first release of the HDF5 Library with fully integrated
- F90 API support. The Fortran Library is created when the
- --enable-fortran flag is specified during configuration.
-
- Not all F90 subroutines are implemented. Please refer to the HDF5
- Reference Manual for more details.
-
- F90 APIs are available for the Solaris 2.6 and 2.7, Linux, DEC UNIX,
- T3E, SV1 and O2K (64 bit option only) platforms. The Parallel version of
- the HDF5 F90 Library is supported on the O2K and T3E platforms.
-
- Changes since the last prototype release (July 2000)
- ----------------------------------------------------
- * h5open_f and h5close_f must be called instead of h5init_types and
- h5close_types.
-
- * The following subroutines are no longer available:
-
- h5pset_xfer_f
- h5pget_xfer_f
- h5pset_mpi_f
- h5pget_mpi_f
- h5pset_stdio_f
- h5pget_stdio_f
- h5pset_sec2_f
- h5pget_sec2_f
- h5pset_core_f
- h5pget_core_f
- h5pset_family_f
- h5pget_family_f
-
- * The following functions have been added:
-
- h5pset_fapl_mpio_f
- h5pget_fapl_mpio_f
- h5pset_dxpl_mpio_f
- h5pget_dxpl_mpio_f
-
- * In the previous HDF5 F90 releases, the implementation of object
- references and dataset region references was not portable. This
- release introduces a portable implementation, but it also introduces
- changes to the read/write APIs that handle references. If object or
- dataset region references are written or read to/from an HDF5 file,
- h5dwrite_f and h5dread_f must use the extra parameter, n, for the
- buffer size:
-
- h5dwrite(read)_f(dset_id, mem_type_id, buf, n, hdferr, &
- ^^^
- mem_space_id, file_space_id, xfer_prp)
-
- For other datatypes the APIs were not changed.
-
-
-C++ Support
-===========
- This is the first release of the HDF5 Library with fully integrated
- C++ API support. The HDF5 C++ library is built when the --enable-cxx
- flag is specified during configuration.
-
- Check the HDF5 Reference Manual for available C++ documentation.
-
- C++ APIs are available for Solaris 2.6 and 2.7, Linux, and FreeBSD.
-
-
-Pablo Support
-=============
- This version does not allow proper building of the Pablo-instrumented
- version of the library. A version supporting the pablo build is
- available on the Pablo Website at
- www-pablo.cs.uiuc.edu/pub/Pablo.Release.5/HDFLibrary/hdf5_v1.4.tar.gz
-
-
-Bug Fixes since HDF5-1.2.0
-==========================
-
-Library
--------
- * The function H5Pset_mpi is renamed as H5Pset_fapl_mpio.
- * Corrected a floating point number conversion error for the Cray J90
- platform. The error did not convert the value 0.0 correctly.
- * Fixed an error that prevented dataset region references from
- having their regions retrieved correctly.
- * Corrected a bug that caused non-parallel file drivers to fail in
- the parallel version.
- * Added internal free lists to reduce the memory required by the
- library, and added the H5garbage_collect API function.
- * Fixed error in H5Giterate which was not updating the "index"
- parameter correctly.
- * Fixed an error in hyperslab iteration which was not walking through
- the correct sequence of array elements if hyperslabs were staggered
- in a certain pattern.
- * Fixed several other problems in the hyperslab iteration code.
- * Fixed another H5Giterate bug which caused groups with large
- numbers of objects in them to misbehave when the callback function
- returned non-zero values.
- * Changed return type of H5Aiterate and H5A_operator_t typedef to be
- herr_t, to align them with the dataset and group iterator functions.
- * Changed H5Screate_simple and H5Sset_extent_simple to disallow
- dimensions of size 0 unless the same dimension is unlimited.
- * QAK - 4/19/00 - Improved metadata hashing & caching algorithms to
- avoid many hash flushes and also remove some redundant I/O when
- moving metadata blocks in the file.
- * The "struct(opt)" type conversion function which gets invoked for
- certain compound datatype conversions was fixed for nested compound
- types. This required a small change in the datatype conversion
- function API.
- * Re-wrote lots of the hyperslab code to speed it up quite a bit.
- * Added bounded garbage collection for the free lists when they run
- out of memory and also added H5set_free_list_limits API call to
- allow users to put an upper limit on the amount of memory used for
- free lists.
- * Checked for non-existent or deleted objects when dereferencing one
- with object or region references and disallow dereference.
- * "Time" datatypes (H5T_UNIX_D*) were not being stored and retrieved
- from object headers correctly, fixed now.
- * Fixed H5Dread and H5Dwrite calls with H5FD_MPIO_COLLECTIVE requests
- that could hang because not all processes transfer the same amount
- of data (i.e., a premature collective return when a zero amount of
- data is requested). Collective calls that could cause hanging are
- now carried out via the corresponding MPI-IO independent calls.
- * If configured with --enable-debug=all, a couple of functions would
- issue warning messages to "stderr" stating that the operation is
- expensive time-wise. This confused applications (such as tests) that
- did not expect the extra output. This was changed so that the warning
- is printed only if the corresponding Debug key is set.
-
-Configuration
--------------
- * The hdf5.h include file was fixed to allow the HDF5 Library to be
- compiled with other libraries/applications that use GNU autoconf.
- * Configuration for parallel HDF5 was improved. Configure now attempts
- to link with libmpi.a and/or libmpio.a as the MPI libraries by
- default. It also uses "mpirun" to launch MPI tests by default. It
- tests linking of MPIO routines during the configuration stage, rather
- than failing later as before. One can just do "./configure
- --enable-parallel" if the MPI library is in the system library.
- * Added support for pthread library and thread-safe option.
- * The libhdf5.settings file shows the correct machine byte-sex.
- * Added the option "--enable-stream-vfd" to configure the library
- with or without the Stream VFD.
- For Solaris, added -lsocket to the LIBS list of libraries.
-
-Tools
------
- * h5dump now accepts both short and long command-line parameters:
- -h, --help Print a usage message and exit
- -B, --bootblock Print the content of the boot block
- -H, --header Print the header only; no data is displayed
- -i, --object-ids Print the object ids
- -V, --version Print version number and exit
- -a P, --attribute=P Print the specified attribute
- -d P, --dataset=P Print the specified dataset
- -g P, --group=P Print the specified group and all members
- -l P, --soft-link=P Print the value(s) of the specified soft link
- -o F, --output=F Output raw data into file F
- -t T, --datatype=T Print the specified named data type
- -w #, --width=# Set the number of columns
-
- P - is the full path from the root group to the object.
- T - is the name of the data type.
- F - is a filename.
- # - is an integer greater than 1.
- * A change from the old command-line interpretation is that multiple
- attributes, datasets, groups, soft links, and object ids can no
- longer be specified with just one flag; a flag must accompany each
- object. I.e., instead of doing this:
-
- h5dump -a /attr1 /attr2 foo.h5
-
- do this:
-
- h5dump -a /attr1 -a /attr2 foo.h5
-
- The cases are similar for the other object types.
- * h5dump correctly displays compound datatypes.
- * Corrected an error in h5toh4 which did not correctly convert 32-bit
- integers from HDF5 to HDF4 on the T3E platform.
- * h5dump now correctly displays the committed copy of predefined
- types.
- * Added an option, -V, to show the version information of h5dump.
- * Fixed a bug that caused h5toh4 to dump core when executed on
- platforms like TFLOPS.
- * The test script for h5toh4 was previously unable to detect that the
- hdp dumper command was not valid. It now detects and reports the
- failure of hdp execution.
- * Merged the tools with the 1.2.2 branch. Required adding new
- macros, VERSION12 and VERSION13, used in conditional compilation.
- Updated the Windows project files for the tools.
- * h5dump displays opaque and bitfield data correctly.
- * h5dump and h5ls can browse files created with the Stream VFD
- (eg. "h5ls
-
-
-
-
- HDF5 documents and links
-
- Introduction to HDF5
-
-
- HDF5 User's Guide
- HDF5 Reference Manual
- HDF5 Application Developer's Guide
-
-
-HDF5 Image and Palette Specification
-Version 1.2
-
-
-
-1. HDF5 Image Specification
-
-
-1.1 Overview
-Image data is stored as an HDF5 dataset with values of HDF5 class Integer
-or Float. A common example would be a two dimensional dataset, with
-elements of class Integer, e.g., a two dimensional array of unsigned 8
-bit integers. However, this specification does not limit the dimensions
-or number type that may be used for an Image.
-
-1.2 Image Attributes
-The attributes for the Image are scalars unless otherwise noted.
-The length of String valued attributes should be at least the number of
-characters. Optionally, String valued attributes may be stored in a String
-longer than the minimum, in which case it must be zero terminated or null
-padded. "Required" attributes must always be used. "Optional" attributes
-must be used when required.
-
-
-Attributes
-
-Attribute name="IMAGE_VERSION" (Required)
-
-
-Table 2 summarizes the standard attributes for Image datasets using
-the common sub-classes. R means that the attribute listed in the leftmost
-column is Required for the image subclass in the first row, O means that
-the attribute is Optional for that subclass, and N means that the attribute
-cannot be applied to that subclass. The first two rows show the only
-attributes required for all subclasses.
-
-
-
-Attribute Name           (R = Required,  Type               String   Value
-                         O = Optional)                      Size
----------------------------------------------------------------------------
-CLASS                    R               String             5        "IMAGE"
-PALETTE                  O               Array Object                <references to Palette
-                                         References                  datasets>1
-IMAGE_SUBCLASS           O2              String             15, 12,  "IMAGE_GRAYSCALE", "IMAGE_BITMAP",
-                                                            15, 13   "IMAGE_TRUECOLOR", or "IMAGE_INDEXED"
-INTERLACE_MODE           O3,6            String             15       The layout of components if more
-                                                                     than one component per pixel.
-DISPLAY_ORIGIN           O               String             2        If set, indicates the intended
-                                                                     location of the pixel (0,0).
-IMAGE_WHITE_IS_ZERO      O3,4            Unsigned Integer            0 = false, 1 = true
-IMAGE_MINMAXRANGE        O3,5            Array [2] <same             The (<minimum>, <maximum>)
-                                         datatype as data            value of the data.
-                                         values>
-IMAGE_BACKGROUNDINDEX    O3              Unsigned Integer            The index of the background color.
-IMAGE_TRANSPARENCY       O3,5            Unsigned Integer            The index of the transparent color.
-IMAGE_ASPECTRATIO        O3,4            Unsigned Integer            The aspect ratio.
-IMAGE_COLORMODEL         O3,6            String             3, 4,    The color model, as defined below
-                                                            or 5     in the Palette specification for
-                                                                     attribute PAL_COLORMODEL.
-IMAGE_GAMMACORRECTION    O3,6            Float                       The gamma correction.
-IMAGE_VERSION            R               String             3        "1.2"
-
-1. The first element of the array is the default Palette.
-
-2. This attribute is required for images that use one of the standard
-color map types listed.
-
-3. This attribute is required if set for the source image, in the case
-that the image is translated from another file into HDF5.
-
-4. This applies to: IMAGE_SUBCLASS="IMAGE_GRAYSCALE" or "IMAGE_BITMAP".
-
-5. This applies to: IMAGE_SUBCLASS="IMAGE_GRAYSCALE", "IMAGE_BITMAP",
-or "IMAGE_INDEXED".
-
-6. This applies to: IMAGE_SUBCLASS="IMAGE_TRUECOLOR" or "IMAGE_INDEXED".
-
-
-
-
-
-
-IMAGE_SUBCLASS1           IMAGE_GRAYSCALE   IMAGE_BITMAP
----------------------------------------------------------
-CLASS                     R                 R
-IMAGE_VERSION             R                 R
-INTERLACE_MODE            N                 N
-IMAGE_WHITE_IS_ZERO       R                 R
-IMAGE_MINMAXRANGE         O                 O
-IMAGE_BACKGROUNDINDEX     O                 O
-IMAGE_TRANSPARENCY        O                 O
-IMAGE_ASPECTRATIO         O                 O
-IMAGE_COLORMODEL          N                 N
-IMAGE_GAMMACORRECTION     N                 N
-PALETTE                   O                 O
-DISPLAY_ORIGIN            O                 O
-
-
-
-
-
-
-
-
-IMAGE_SUBCLASS            IMAGE_TRUECOLOR   IMAGE_INDEXED
----------------------------------------------------------
-CLASS                     R                 R
-IMAGE_VERSION             R                 R
-INTERLACE_MODE            R                 N
-IMAGE_WHITE_IS_ZERO       N                 N
-IMAGE_MINMAXRANGE         N                 O
-IMAGE_BACKGROUNDINDEX     N                 O
-IMAGE_TRANSPARENCY        N                 O
-IMAGE_ASPECTRATIO         O                 O
-IMAGE_COLORMODEL          O                 O
-IMAGE_GAMMACORRECTION     O                 O
-PALETTE                   O                 O
-DISPLAY_ORIGIN            O                 O
-
-1.3 Storage Layout and Properties for Images
-In the case of an image with more than one component per pixel (e.g., Red,
-Green, and Blue), the data may be arranged in one of two ways. Following
-HDF4 terminology, the data may be interlaced by pixel or by plane, which
-should be indicated by the INTERLACE_MODE attribute. In both
-cases, the dataset will have a dataspace with three dimensions, height,
-width, and components. The interlace modes specify different orders
-for the dimensions.
-
-
-Interlace Mode          Dimensions in the Dataspace
-----------------------------------------------------
-INTERLACE_PIXEL         [height][width][pixel components]
-INTERLACE_PLANE         [pixel components][height][width]
-
-
-2. HDF5 Palette Specification
-
-
-2.1 Overview
-A palette is the means by which color is applied to an image and is also
-referred to as a color lookup table. It is a table in which every row contains
-the numerical representation of a particular color. In the example of an
-8 bit standard RGB color model palette, this numerical representation of
-a color is presented as a triplet specifying the intensity of red, green,
-and blue components that make up each color.
-
-
-
-
-
-
-
-Important Note: The specification of the Indexed
-Palette will change substantially in the next version. The Palette
-described here is deprecated and is not supported.
-
-2.2. Palette Attributes
-A palette exists in an HDF5 file as an independent dataset with accompanying
-attributes. The Palette attributes are scalars except where noted
-otherwise. String values should have a size equal to the length of the
-string value plus one. "Required" attributes must be used. "Optional"
-attributes must be used when required.
-
-
-
-
-
-
-
-
-Table 5 summarizes the uses of the standard attributes for a palette dataset.
-R means that the attribute listed in the leftmost column is Required for
-the palette type in the first row, O means that the attribute is Optional
-for that type, and N means that the attribute cannot be applied to that type.
-The first four rows show the attributes that are always required
-for the two palette types.
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-Attribute name="PAL_MINMAXNUMERIC" (Optional)
-
-They specify the minimum and maximum values of the color numeric components.
-For example, if the palette was an RGB of type Float, the color numeric
-range for Red, Green, and Blue could be set to be between 0.0 and 1.0.
-The intensity of the color guns would then be scaled accordingly to be
-between this minimum and maximum attribute.
-
-Attribute name="PAL_VERSION" (Required)
-
-This attribute is of type H5T_C_S1, with size corresponding to the
-length of the version string. This attribute identifies the version
-number of this specification to which it conforms. The current version
-is "1.2".
-
-
-Attribute Name            (R = Required,  Type                  String   Value
-                          O = Optional)                         Size
--------------------------------------------------------------------------------
-CLASS                     R               String                         "PALETTE"
-PAL_COLORMODEL            R               String                         Color Model: "RGB", "YUV",
-                                                                         "CMY", "CMYK", "YCbCr", or
-                                                                         "HSV"
-PAL_TYPE                  R               String                9 or 10  "STANDARD8" or
-                                                                         "RANGEINDEX" (Deprecated)
-RANGE_INDEX (Deprecated)  1               Object Reference               <Object Reference to Dataset
-                                                                         of range index values>
-PAL_MINMAXNUMERIC         O               Array[2] of <same              The first value is the
-                                          datatype as palette>           <Minimum value for color
-                                                                         values>, the second value is
-                                                                         <Maximum value for color
-                                                                         values>2
-PAL_VERSION               R               String                4        "1.2"
-
-
-
-
-1. The RANGE_INDEX attribute is required if the PAL_TYPE is
-"RANGEINDEX". Otherwise, the RANGE_INDEX attribute should be
-omitted. (Range index is deprecated.)
-
-2. The minimum and maximum are optional. If not set, the range is
-assumed to be the maximum range of the number type. If one of these
-attributes is set, then both should be set. The value of the minimum
-must be less than or equal to the value of the maximum.
-
-
-
-
-
-
-
-
-PAL_TYPE                  STANDARD8   RANGEINDEX
--------------------------------------------------
-CLASS                     R           R
-PAL_VERSION               R           R
-PAL_COLORMODEL            R           R
-RANGE_INDEX               N           R
-PAL_MINMAXNUMERIC         O           O
-
-2.3. Storage Layout for Palettes
-The values of the Palette are stored as a dataset. The datatype can
-be any HDF5 atomic numeric type. The dataset will have dimensions
-(nentries by ncomponents), where 'nentries'
-is the number of colors (usually 256) and 'ncomponents' is the
-number of values per color (3 for RGB, 4 for CMYK, etc.).
-
-
-3. Consistency and Correlation of Image and Palette
-Attributes
-The objects in this specification are an extension to the base HDF5 specification
-and library. They are accessible with the standard HDF5 library,
-but the semantics of the objects are not enforced by the base library.
-For example, it is perfectly possible to add an attribute called IMAGE
-to any dataset, or to include an object reference to any
-HDF5 dataset in a PALETTE attribute. This would be a valid
-HDF5 file, but not conformant to this specification. The rules defined
-in this specification must be implemented with appropriate software, and
-applications must use conforming software to assure correctness.
-
-
-
-
-
-
-
-HDF Help Desk
-
-Describes HDF5 Release 1.7, the unreleased development branch; working toward HDF5 Release 1.8.0
-
-
-Last modified: 8 June 2005
-
-
-
-
diff --git a/doc/html/ADGuide/Makefile.am b/doc/html/ADGuide/Makefile.am
deleted file mode 100644
index fde4097..0000000
--- a/doc/html/ADGuide/Makefile.am
+++ /dev/null
@@ -1,18 +0,0 @@
-# HDF5 Library Doc Makefile(.in)
-#
-# Copyright (C) 1997, 2002
-# National Center for Supercomputing Applications.
-# All rights reserved.
-#
-##
-## Makefile.am
-## Run automake to generate a Makefile.in from this file.
-#
-
-include $(top_srcdir)/config/commence-doc.am
-
-localdocdir = $(docdir)/hdf5/ADGuide
-
-# Public doc files (to be installed)...
-localdoc_DATA=Changes.html H4toH5Mapping.pdf HISTORY.txt ImageSpec.html \
- PaletteExample1.gif Palettes.fm.anc.gif RELEASE.txt
diff --git a/doc/html/ADGuide/Makefile.in b/doc/html/ADGuide/Makefile.in
deleted file mode 100644
index 81d0f44..0000000
--- a/doc/html/ADGuide/Makefile.in
+++ /dev/null
@@ -1,487 +0,0 @@
-# Makefile.in generated by automake 1.9.5 from Makefile.am.
-# @configure_input@
-
-# Copyright (C) 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002,
-# 2003, 2004, 2005 Free Software Foundation, Inc.
-# This Makefile.in is free software; the Free Software Foundation
-# gives unlimited permission to copy and/or distribute it,
-# with or without modifications, as long as this notice is preserved.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
-# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
-# PARTICULAR PURPOSE.
-
-@SET_MAKE@
-
-# HDF5 Library Doc Makefile(.in)
-#
-# Copyright (C) 1997, 2002
-# National Center for Supercomputing Applications.
-# All rights reserved.
-#
-#
-
-srcdir = @srcdir@
-top_srcdir = @top_srcdir@
-VPATH = @srcdir@
-pkgdatadir = $(datadir)/@PACKAGE@
-pkglibdir = $(libdir)/@PACKAGE@
-pkgincludedir = $(includedir)/@PACKAGE@
-top_builddir = ../../..
-am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd
-INSTALL = @INSTALL@
-install_sh_DATA = $(install_sh) -c -m 644
-install_sh_PROGRAM = $(install_sh) -c
-install_sh_SCRIPT = $(install_sh) -c
-INSTALL_HEADER = $(INSTALL_DATA)
-transform = $(program_transform_name)
-NORMAL_INSTALL = :
-PRE_INSTALL = :
-POST_INSTALL = :
-NORMAL_UNINSTALL = :
-PRE_UNINSTALL = :
-POST_UNINSTALL = :
-build_triplet = @build@
-host_triplet = @host@
-DIST_COMMON = $(srcdir)/Makefile.am $(srcdir)/Makefile.in \
- $(top_srcdir)/config/commence-doc.am \
- $(top_srcdir)/config/commence.am
-subdir = doc/html/ADGuide
-ACLOCAL_M4 = $(top_srcdir)/aclocal.m4
-am__aclocal_m4_deps = $(top_srcdir)/configure.in
-am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \
- $(ACLOCAL_M4)
-mkinstalldirs = $(SHELL) $(top_srcdir)/bin/mkinstalldirs
-CONFIG_HEADER = $(top_builddir)/src/H5config.h
-CONFIG_CLEAN_FILES =
-SOURCES =
-DIST_SOURCES =
-am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`;
-am__vpath_adj = case $$p in \
- $(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \
- *) f=$$p;; \
- esac;
-am__strip_dir = `echo $$p | sed -e 's|^.*/||'`;
-am__installdirs = "$(DESTDIR)$(localdocdir)"
-localdocDATA_INSTALL = $(INSTALL_DATA)
-DATA = $(localdoc_DATA)
-DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST)
-
-# Set the paths for AFS installs of autotools for Linux machines
-# Ideally, these tools should never be needed during the build.
-ACLOCAL = /afs/ncsa/projects/hdf/packages/automake_1.9.5/Linux_2.4/bin/aclocal -I /afs/ncsa/projects/hdf/packages/libtool_1.5.14/Linux_2.4/share/aclocal
-ADD_PARALLEL_FILES = @ADD_PARALLEL_FILES@
-AMDEP_FALSE = @AMDEP_FALSE@
-AMDEP_TRUE = @AMDEP_TRUE@
-AMTAR = @AMTAR@
-AM_MAKEFLAGS = @AM_MAKEFLAGS@
-AR = @AR@
-AUTOCONF = /afs/ncsa/projects/hdf/packages/autoconf_2.59/Linux_2.4/bin/autoconf
-AUTOHEADER = /afs/ncsa/projects/hdf/packages/autoconf_2.59/Linux_2.4/bin/autoheader
-AUTOMAKE = /afs/ncsa/projects/hdf/packages/automake_1.9.5/Linux_2.4/bin/automake
-AWK = @AWK@
-BUILD_CXX_CONDITIONAL_FALSE = @BUILD_CXX_CONDITIONAL_FALSE@
-BUILD_CXX_CONDITIONAL_TRUE = @BUILD_CXX_CONDITIONAL_TRUE@
-BUILD_FORTRAN_CONDITIONAL_FALSE = @BUILD_FORTRAN_CONDITIONAL_FALSE@
-BUILD_FORTRAN_CONDITIONAL_TRUE = @BUILD_FORTRAN_CONDITIONAL_TRUE@
-BUILD_HDF5_HL_CONDITIONAL_FALSE = @BUILD_HDF5_HL_CONDITIONAL_FALSE@
-BUILD_HDF5_HL_CONDITIONAL_TRUE = @BUILD_HDF5_HL_CONDITIONAL_TRUE@
-BUILD_PABLO_CONDITIONAL_FALSE = @BUILD_PABLO_CONDITIONAL_FALSE@
-BUILD_PABLO_CONDITIONAL_TRUE = @BUILD_PABLO_CONDITIONAL_TRUE@
-BUILD_PARALLEL_CONDITIONAL_FALSE = @BUILD_PARALLEL_CONDITIONAL_FALSE@
-BUILD_PARALLEL_CONDITIONAL_TRUE = @BUILD_PARALLEL_CONDITIONAL_TRUE@
-BUILD_PDB2HDF = @BUILD_PDB2HDF@
-BUILD_PDB2HDF_CONDITIONAL_FALSE = @BUILD_PDB2HDF_CONDITIONAL_FALSE@
-BUILD_PDB2HDF_CONDITIONAL_TRUE = @BUILD_PDB2HDF_CONDITIONAL_TRUE@
-BYTESEX = @BYTESEX@
-CC = @CC@
-CCDEPMODE = @CCDEPMODE@
-CC_VERSION = @CC_VERSION@
-CFLAGS = @CFLAGS@
-CONFIG_DATE = @CONFIG_DATE@
-CONFIG_MODE = @CONFIG_MODE@
-CONFIG_USER = @CONFIG_USER@
-CPP = @CPP@
-CPPFLAGS = @CPPFLAGS@
-CXX = @CXX@
-CXXCPP = @CXXCPP@
-CXXDEPMODE = @CXXDEPMODE@
-CXXFLAGS = @CXXFLAGS@
-CYGPATH_W = @CYGPATH_W@
-DEBUG_PKG = @DEBUG_PKG@
-DEFS = @DEFS@
-DEPDIR = @DEPDIR@
-DYNAMIC_DIRS = @DYNAMIC_DIRS@
-ECHO = @ECHO@
-ECHO_C = @ECHO_C@
-ECHO_N = @ECHO_N@
-ECHO_T = @ECHO_T@
-EGREP = @EGREP@
-EXEEXT = @EXEEXT@
-F77 = @F77@
-
-# Make sure that these variables are exported to the Makefiles
-F9XMODEXT = @F9XMODEXT@
-F9XMODFLAG = @F9XMODFLAG@
-F9XSUFFIXFLAG = @F9XSUFFIXFLAG@
-FC = @FC@
-FCFLAGS = @FCFLAGS@
-FCLIBS = @FCLIBS@
-FFLAGS = @FFLAGS@
-FILTERS = @FILTERS@
-FSEARCH_DIRS = @FSEARCH_DIRS@
-H5_VERSION = @H5_VERSION@
-HADDR_T = @HADDR_T@
-HDF5_INTERFACES = @HDF5_INTERFACES@
-HID_T = @HID_T@
-HL = @HL@
-HL_FOR = @HL_FOR@
-HSIZET = @HSIZET@
-HSIZE_T = @HSIZE_T@
-HSSIZE_T = @HSSIZE_T@
-INSTALL_DATA = @INSTALL_DATA@
-INSTALL_PROGRAM = @INSTALL_PROGRAM@
-INSTALL_SCRIPT = @INSTALL_SCRIPT@
-INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@
-INSTRUMENT_LIBRARY = @INSTRUMENT_LIBRARY@
-LDFLAGS = @LDFLAGS@
-LIBOBJS = @LIBOBJS@
-LIBS = @LIBS@
-LIBTOOL = @LIBTOOL@
-LN_S = @LN_S@
-LTLIBOBJS = @LTLIBOBJS@
-LT_STATIC_EXEC = @LT_STATIC_EXEC@
-MAINT = @MAINT@
-MAINTAINER_MODE_FALSE = @MAINTAINER_MODE_FALSE@
-MAINTAINER_MODE_TRUE = @MAINTAINER_MODE_TRUE@
-MAKEINFO = @MAKEINFO@
-MPE = @MPE@
-OBJECT_NAMELEN_DEFAULT_F = @OBJECT_NAMELEN_DEFAULT_F@
-OBJEXT = @OBJEXT@
-PACKAGE = @PACKAGE@
-PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@
-PACKAGE_NAME = @PACKAGE_NAME@
-PACKAGE_STRING = @PACKAGE_STRING@
-PACKAGE_TARNAME = @PACKAGE_TARNAME@
-PACKAGE_VERSION = @PACKAGE_VERSION@
-PARALLEL = @PARALLEL@
-PATH_SEPARATOR = @PATH_SEPARATOR@
-PERL = @PERL@
-PTHREAD = @PTHREAD@
-RANLIB = @RANLIB@
-ROOT = @ROOT@
-RUNPARALLEL = @RUNPARALLEL@
-RUNSERIAL = @RUNSERIAL@
-R_INTEGER = @R_INTEGER@
-R_LARGE = @R_LARGE@
-SEARCH = @SEARCH@
-SETX = @SETX@
-SET_MAKE = @SET_MAKE@
-
-# Hardcode SHELL to be /bin/sh. Most machines have this shell, and
-# on at least one machine configure fails to detect its existence (janus).
-# Also, when HDF5 is configured on one machine but run on another,
-# configure's automatic SHELL detection may not work on the build machine.
-SHELL = /bin/sh
-SIZE_T = @SIZE_T@
-STATIC_SHARED = @STATIC_SHARED@
-STRIP = @STRIP@
-TESTPARALLEL = @TESTPARALLEL@
-TRACE_API = @TRACE_API@
-USE_FILTER_DEFLATE = @USE_FILTER_DEFLATE@
-USE_FILTER_FLETCHER32 = @USE_FILTER_FLETCHER32@
-USE_FILTER_NBIT = @USE_FILTER_NBIT@
-USE_FILTER_SCALEOFFSET = @USE_FILTER_SCALEOFFSET@
-USE_FILTER_SHUFFLE = @USE_FILTER_SHUFFLE@
-USE_FILTER_SZIP = @USE_FILTER_SZIP@
-VERSION = @VERSION@
-ac_ct_AR = @ac_ct_AR@
-ac_ct_CC = @ac_ct_CC@
-ac_ct_CXX = @ac_ct_CXX@
-ac_ct_F77 = @ac_ct_F77@
-ac_ct_FC = @ac_ct_FC@
-ac_ct_RANLIB = @ac_ct_RANLIB@
-ac_ct_STRIP = @ac_ct_STRIP@
-am__fastdepCC_FALSE = @am__fastdepCC_FALSE@
-am__fastdepCC_TRUE = @am__fastdepCC_TRUE@
-am__fastdepCXX_FALSE = @am__fastdepCXX_FALSE@
-am__fastdepCXX_TRUE = @am__fastdepCXX_TRUE@
-am__include = @am__include@
-am__leading_dot = @am__leading_dot@
-am__quote = @am__quote@
-am__tar = @am__tar@
-am__untar = @am__untar@
-bindir = @bindir@
-build = @build@
-build_alias = @build_alias@
-build_cpu = @build_cpu@
-build_os = @build_os@
-build_vendor = @build_vendor@
-datadir = @datadir@
-exec_prefix = @exec_prefix@
-host = @host@
-host_alias = @host_alias@
-host_cpu = @host_cpu@
-host_os = @host_os@
-host_vendor = @host_vendor@
-
-# Install directories that automake doesn't know about
-includedir = $(exec_prefix)/include
-infodir = @infodir@
-install_sh = @install_sh@
-libdir = @libdir@
-libexecdir = @libexecdir@
-localstatedir = @localstatedir@
-mandir = @mandir@
-mkdir_p = @mkdir_p@
-oldincludedir = @oldincludedir@
-prefix = @prefix@
-program_transform_name = @program_transform_name@
-sbindir = @sbindir@
-sharedstatedir = @sharedstatedir@
-sysconfdir = @sysconfdir@
-target_alias = @target_alias@
-
-# Shell commands used in Makefiles
-RM = rm -f
-CP = cp
-
-# Some machines need a command to run executables; this is that command
-# so that our tests will run.
-# We use RUNTESTS instead of RUNSERIAL directly because it may be that
-# some tests need to be run with a different command. Older versions
-# of the makefiles used the command
-# $(LIBTOOL) --mode=execute
-# in some directories, for instance.
-RUNTESTS = $(RUNSERIAL)
-
-# Libraries to link to while building
-LIBHDF5 = $(top_builddir)/src/libhdf5.la
-LIBH5TEST = $(top_builddir)/test/libh5test.la
-LIBH5F = $(top_builddir)/fortran/src/libhdf5_fortran.la
-LIBH5FTEST = $(top_builddir)/fortran/test/libh5test_fortran.la
-LIBH5CPP = $(top_builddir)/c++/src/libhdf5_cpp.la
-LIBH5TOOLS = $(top_builddir)/tools/lib/libh5tools.la
-LIBH5_HL = $(top_builddir)/hl/src/libhdf5_hl.la
-LIBH5F_HL = $(top_builddir)/hl/fortran/src/libhdf5hl_fortran.la
-LIBH5CPP_HL = $(top_builddir)/hl/c++/src/libhdf5_hl_cpp.la
-docdir = $(exec_prefix)/doc
-
-# Scripts used to build examples
-H5CC = $(bindir)/h5cc
-H5CC_PP = $(bindir)/h5pcc
-H5FC = $(bindir)/h5fc
-H5FC_PP = $(bindir)/h5pfc
-
-# .chkexe and .chksh files are used to mark tests that have run successfully.
-MOSTLYCLEANFILES = *.chkexe *.chksh
-localdocdir = $(docdir)/hdf5/ADGuide
-
-# Public doc files (to be installed)...
-localdoc_DATA = Changes.html H4toH5Mapping.pdf HISTORY.txt ImageSpec.html \
- PaletteExample1.gif Palettes.fm.anc.gif RELEASE.txt
-
-all: all-am
-
-.SUFFIXES:
-$(srcdir)/Makefile.in: @MAINTAINER_MODE_TRUE@ $(srcdir)/Makefile.am $(top_srcdir)/config/commence-doc.am $(top_srcdir)/config/commence.am $(am__configure_deps)
- @for dep in $?; do \
- case '$(am__configure_deps)' in \
- *$$dep*) \
- cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh \
- && exit 0; \
- exit 1;; \
- esac; \
- done; \
- echo ' cd $(top_srcdir) && $(AUTOMAKE) --foreign doc/html/ADGuide/Makefile'; \
- cd $(top_srcdir) && \
- $(AUTOMAKE) --foreign doc/html/ADGuide/Makefile
-.PRECIOUS: Makefile
-Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status
- @case '$?' in \
- *config.status*) \
- cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \
- *) \
- echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \
- cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \
- esac;
-
-$(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES)
- cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
-
-$(top_srcdir)/configure: @MAINTAINER_MODE_TRUE@ $(am__configure_deps)
- cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
-$(ACLOCAL_M4): @MAINTAINER_MODE_TRUE@ $(am__aclocal_m4_deps)
- cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
-
-mostlyclean-libtool:
- -rm -f *.lo
-
-clean-libtool:
- -rm -rf .libs _libs
-
-distclean-libtool:
- -rm -f libtool
-uninstall-info-am:
-install-localdocDATA: $(localdoc_DATA)
- @$(NORMAL_INSTALL)
- test -z "$(localdocdir)" || $(mkdir_p) "$(DESTDIR)$(localdocdir)"
- @list='$(localdoc_DATA)'; for p in $$list; do \
- if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \
- f=$(am__strip_dir) \
- echo " $(localdocDATA_INSTALL) '$$d$$p' '$(DESTDIR)$(localdocdir)/$$f'"; \
- $(localdocDATA_INSTALL) "$$d$$p" "$(DESTDIR)$(localdocdir)/$$f"; \
- done
-
-uninstall-localdocDATA:
- @$(NORMAL_UNINSTALL)
- @list='$(localdoc_DATA)'; for p in $$list; do \
- f=$(am__strip_dir) \
- echo " rm -f '$(DESTDIR)$(localdocdir)/$$f'"; \
- rm -f "$(DESTDIR)$(localdocdir)/$$f"; \
- done
-tags: TAGS
-TAGS:
-
-ctags: CTAGS
-CTAGS:
-
-
-distdir: $(DISTFILES)
- $(mkdir_p) $(distdir)/../../../config
- @srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; \
- topsrcdirstrip=`echo "$(top_srcdir)" | sed 's|.|.|g'`; \
- list='$(DISTFILES)'; for file in $$list; do \
- case $$file in \
- $(srcdir)/*) file=`echo "$$file" | sed "s|^$$srcdirstrip/||"`;; \
- $(top_srcdir)/*) file=`echo "$$file" | sed "s|^$$topsrcdirstrip/|$(top_builddir)/|"`;; \
- esac; \
- if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \
- dir=`echo "$$file" | sed -e 's,/[^/]*$$,,'`; \
- if test "$$dir" != "$$file" && test "$$dir" != "."; then \
- dir="/$$dir"; \
- $(mkdir_p) "$(distdir)$$dir"; \
- else \
- dir=''; \
- fi; \
- if test -d $$d/$$file; then \
- if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \
- cp -pR $(srcdir)/$$file $(distdir)$$dir || exit 1; \
- fi; \
- cp -pR $$d/$$file $(distdir)$$dir || exit 1; \
- else \
- test -f $(distdir)/$$file \
- || cp -p $$d/$$file $(distdir)/$$file \
- || exit 1; \
- fi; \
- done
-check-am: all-am
-check: check-am
-all-am: Makefile $(DATA)
-installdirs:
- for dir in "$(DESTDIR)$(localdocdir)"; do \
- test -z "$$dir" || $(mkdir_p) "$$dir"; \
- done
-install: install-am
-install-exec: install-exec-am
-install-data: install-data-am
-uninstall: uninstall-am
-
-install-am: all-am
- @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am
-
-installcheck: installcheck-am
-install-strip:
- $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
- install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
- `test -z '$(STRIP)' || \
- echo "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'"` install
-mostlyclean-generic:
- -test -z "$(MOSTLYCLEANFILES)" || rm -f $(MOSTLYCLEANFILES)
-
-clean-generic:
-
-distclean-generic:
- -test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES)
-
-maintainer-clean-generic:
- @echo "This command is intended for maintainers to use"
- @echo "it deletes files that may require special tools to rebuild."
-clean: clean-am
-
-clean-am: clean-generic clean-libtool mostlyclean-am
-
-distclean: distclean-am
- -rm -f Makefile
-distclean-am: clean-am distclean-generic distclean-libtool
-
-dvi: dvi-am
-
-dvi-am:
-
-html: html-am
-
-info: info-am
-
-info-am:
-
-install-data-am: install-localdocDATA
-
-install-exec-am:
-
-install-info: install-info-am
-
-install-man:
-
-installcheck-am:
-
-maintainer-clean: maintainer-clean-am
- -rm -f Makefile
-maintainer-clean-am: distclean-am maintainer-clean-generic
-
-mostlyclean: mostlyclean-am
-
-mostlyclean-am: mostlyclean-generic mostlyclean-libtool
-
-pdf: pdf-am
-
-pdf-am:
-
-ps: ps-am
-
-ps-am:
-
-uninstall-am: uninstall-info-am uninstall-localdocDATA
-
-.PHONY: all all-am check check-am clean clean-generic clean-libtool \
- distclean distclean-generic distclean-libtool distdir dvi \
- dvi-am html html-am info info-am install install-am \
- install-data install-data-am install-exec install-exec-am \
- install-info install-info-am install-localdocDATA install-man \
- install-strip installcheck installcheck-am installdirs \
- maintainer-clean maintainer-clean-generic mostlyclean \
- mostlyclean-generic mostlyclean-libtool pdf pdf-am ps ps-am \
- uninstall uninstall-am uninstall-info-am \
- uninstall-localdocDATA
-
-
-# Ignore most rules
-lib progs check test _test check-p check-s:
- @echo "Nothing to be done"
-
-tests dep depend:
- @@SETX@; for d in X $(SUBDIRS); do \
- if test $$d != X; then \
- (cd $$d && $(MAKE) $(AM_MAKEFLAGS) $@) || exit 1; \
-	  fi; \
- done
-
-# In docs directory, install-doc is the same as install
-install-doc install-all:
- $(MAKE) $(AM_MAKEFLAGS) install
-uninstall-doc uninstall-all:
- $(MAKE) $(AM_MAKEFLAGS) uninstall
-# Tell versions [3.59,3.63) of GNU make to not export all variables.
-# Otherwise a system limit (for SysV at least) may be exceeded.
-.NOEXPORT:
diff --git a/doc/html/ADGuide/PaletteExample1.gif b/doc/html/ADGuide/PaletteExample1.gif
deleted file mode 100755
index 8694d9d..0000000
Binary files a/doc/html/ADGuide/PaletteExample1.gif and /dev/null differ
diff --git a/doc/html/ADGuide/Palettes.fm.anc.gif b/doc/html/ADGuide/Palettes.fm.anc.gif
deleted file mode 100755
index d344c03..0000000
Binary files a/doc/html/ADGuide/Palettes.fm.anc.gif and /dev/null differ
diff --git a/doc/html/ADGuide/RELEASE.txt b/doc/html/ADGuide/RELEASE.txt
deleted file mode 100644
index 0e58c12..0000000
--- a/doc/html/ADGuide/RELEASE.txt
+++ /dev/null
@@ -1,906 +0,0 @@
-HDF5 version 1.7.48 released on Mon Jul 18 16:18:26 CDT 2005
-================================================================================
-
-
-INTRODUCTION
-
-This document describes the differences between HDF5-1.6.* and
-HDF5-1.7.*, and contains information on the platforms tested and
-known problems in HDF5-1.7.*. For more details check the HISTORY.txt
-file in the HDF5 source.
-
-The HDF5 documentation can be found on the NCSA ftp server
-(ftp.ncsa.uiuc.edu) in the directory:
-
- /HDF/HDF5/docs/
-
-For more information look at the HDF5 home page at:
-
- http://hdf.ncsa.uiuc.edu/HDF5/
-
-If you have any questions or comments, please send them to:
-
- hdfhelp@ncsa.uiuc.edu
-
-CONTENTS
-
-- New Features
-- Support for new platforms and languages
-- Bug Fixes since HDF5-1.6.0
-- Platforms Tested
-- Known Problems
-
-
-New Features
-============
-
- Configuration:
- --------------
- - When make is invoked in parallel (using -j), sequential tests
- are now executed simultaneously. This should make them execute
- more quickly on some machines.
- Also, when tests pass, they will create a foo.chkexe file.
- This prevents the test from executing again until the test or
- main library changes.
-      - On Windows, all.zip is deprecated; users should
-        read INSTALL_Windows.txt for the details.
-        Reasons to deprecate all.zip:
-        1. Avoid conflicts for Windows programmers
-        2. Decrease the size of the CVS tree
-        3. Avoid using WinZip as an intermediate step
- --KY 2005/04/22
- - When HDF5 is created as a shared library, it now uses libtool's
- shared library versioning scheme. -JML 2005/04/18
- - HDF5 now uses automake 1.9.5 to generate Makefiles.in.
- This has a number of effects on users:
- The Fortran compiler should be set using the environment
-        variable $FC, not $F9X. F9X still works, but is deprecated.
- The output of make may be different. This should be only a
- cosmetic effect.
-        make depend (or make dep) is no longer recognized, since automake
- handles dependency tracking.
- Some new configure options exist. --enable-dependency-tracking
- and --disable-dependency-tracking are used to control automake's
- dependency tracking. Dependencies are on by default *on most
- platforms and compilers*. If --enable-dependency-tracking is
- used, they will be enabled on any platform. However, this can
- slow down builds or even cause build errors in some cases.
- Likewise, --disable-dependency-tracking can speed up builds and
- avoid some build errors.
- Some make targets have alternate names. make check-install and
- make installcheck do the same thing, for instance.
- pmake on IRIX can be invoked from the root directory, but the
- -V flag must be used to invoke it in any subdirectory or it
- will give an error about undefined variables.
- JML 2005/01 - 2005/03
- - Hardware conversion between long double and integers is also added.
- SLU 2005/02/10
- - Started to support software conversion between long double and
- integers. Hardware conversion will come very soon. SLU - 2005/1/6
-      - The Intel v8.0 compiler would enter an infinite loop when compiling
-        some test code with the -O3 option. Changed the enable-production
-        default compiler option to -O2. AKC - 2004/12/06
-      - Long double is assumed to be a supported C data type. It is a
-        standard C89 type. AKC - 2004/10/22
- - The IA64 will use ecc as the C++ compiler by default.
- - Added some initial support for making purify (or similar memory
- checking products) happier by initializing buffers to zero and
- disabling the internal free list code. To take advantage of this,
- define 'H5_USING_PURIFY' in your CFLAGS when building the library.
- QAK - 2004/07/23
- - Fixed the long compile time of H5detect.c when v7.x Intel Compiler
- is used with optimization NOT off. AKC - 2004/05/20
- - Fixed configure setting of C++ for OSF1 platform. AKC - 2004/01/06
- - Prefix default is changed from /usr/local to `pwd`/hdf5.
- AKC - 2003/07/09
-
- Library:
- --------
- - Added H5F_OBJ_LOCAL flag to H5Fget_obj_count() & H5Fget_obj_ids(), to
- allow querying for objects in file that were opened with a particular
- file ID, instead of all objects opened in file with any file ID.
- QAK - 2005/06/01
- - Added H5T_CSET_UTF8 character set to mark datatypes that use the
- UTF-8 Unicode character encoding. Added tests to ensure that
- library handles UTF-8 object names, attributes, etc. -JL 2005/05/13
-      - HDF5 supports collective MPI-IO for irregular selections within an
-        HDF5 dataset. An irregular selection is made when users call
-        H5Sselect_hyperslab more than once for the same dataset.
-        Currently, not all MPI-IO packages support the complicated MPI
-        derived datatypes used in the implementation of irregular
-        selection INSIDE HDF5.
-        1) DEC 5.x does not support complicated derived datatypes.
- 2) For AIX 5.1 32-bit,
- if your poe version number is 3.2.0.19 or lower,
- please edit powerpc-ibm-aix5.x under hdf5/config,
- Find the line with
- << hdf5_mpi_complex_derived_datatype_works>>
- and UNCOMMENT this line before the configure.
- check poe version with the following command:
- lpp -l all | grep ppe.poe
- For AIX 5.1 64-bit,
- regardless of poe version number, please UNCOMMENT
- << hdf5_mpi_complex_derived_datatype_works>> under hdf5/config.
- We suspect there are some problems for MPI-IO implementation
- for 64-bit.
-        3) For Linux clusters,
-           if the mpich version is 1.2.5 or lower, collective irregular
-           selection IO is not supported; independent IO is used internally.
-        4) For IRIX 6.5,
-           if the C compiler version is 7.3 or lower, collective irregular
-           selection IO is not supported; independent IO is used internally.
- KY - 2005/07/13
-      - HDF5 N-bit filter
-        HDF5 supports the N-bit filter as of this version.
-        The N-bit filter effectively compresses data of N-bit
-        datatypes, as well as compound and array datatypes with N-bit fields.
- KY - 2005/04/15
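The bit-packing idea behind the N-bit filter can be sketched in plain C. This is only an illustration of the technique, not HDF5's actual filter code: two 12-bit values fit in 3 bytes instead of the 4 bytes that two uint16_t values would occupy.

```c
#include <stdint.h>

/* Illustration of N-bit style packing (not HDF5's implementation):
 * store two 12-bit values in 3 bytes instead of 2 bytes each. */
void pack12(uint16_t a, uint16_t b, uint8_t out[3])
{
    out[0] = (uint8_t)(a >> 4);              /* high 8 bits of a */
    out[1] = (uint8_t)((a << 4) | (b >> 8)); /* low 4 of a, high 4 of b */
    out[2] = (uint8_t)(b & 0xFF);            /* low 8 bits of b */
}

void unpack12(const uint8_t in[3], uint16_t *a, uint16_t *b)
{
    *a = (uint16_t)((in[0] << 4) | (in[1] >> 4));
    *b = (uint16_t)(((in[1] & 0x0F) << 8) | in[2]);
}
```

The real filter generalizes this to any bit width and to compound and array members, as described above.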
-      - HDF5 scale-offset filter
-        HDF5 supports the scale-offset filter for data compression
-        through the HDF5 library.
-        Scale-offset compression performs a scale and/or offset operation
-        on each data value, truncates the resulting value to a minimum
-        number of bits, and then stores the data.
-        The scale-offset filter supports floating-point and integer datatypes.
-        Please check the HDF5 reference manual for details.
- KY - 2005/06/06
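The scale-and-offset operation described above can be sketched in plain C. This is a toy illustration of the idea, not HDF5's filter code: after scaling and removing the common offset, the values in this range fit in 8 bits.

```c
#include <stdint.h>

/* Illustration of the scale-offset idea (not HDF5's implementation):
 * scale each value to an integer, subtract the minimum, and keep only
 * the few bits the shifted values still need. */
uint8_t scale_offset_encode(double v, double min, int scale_factor)
{
    long scaled = (long)(v * scale_factor + 0.5);   /* scale to integer */
    long base   = (long)(min * scale_factor + 0.5); /* common offset */
    return (uint8_t)(scaled - base);  /* fits in 8 bits for this range */
}

double scale_offset_decode(uint8_t stored, double min, int scale_factor)
{
    long base = (long)(min * scale_factor + 0.5);
    return (double)(stored + base) / scale_factor;
}
```

With values between 100.0 and 102.55 and a scale factor of 100, each value is stored in a single byte instead of an 8-byte double.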
- - Retired SRB vfd (--with-srb). Functions H5Pset_fapl_srb and
- H5Pget_fapl_srb were removed. EIP - 2005/04/07
- - Retired GASS vfd (--with-gass). Functions H5Pset_fapl_gass and
- H5Pget_fapl_gass are removed too. AKC - 2005/3/3
- - Pablo was removed from the source code EIP - 2005/01/21
- - Modified registration of SZIP to dynamically detect the presence
- or absence of the encoder. Changed configure and Makefiles,
- and tests to dynamically detect encoder. BEM - 2004/11/02
- - Added function H5Pget_data_transform, together with the previously
- added H5Pset_data_transform, to support the data transform
- feature. AKC - 2004/10/26
-      - The compound datatype has been enhanced with a new size-adjustment
-        feature. The size can be increased or decreased (without
-        cutting the last member) as long as it doesn't go down to zero.
-        No API change is involved. SLU - 2004/10/1
- - Put back 6 old error API functions to be backward compatible with
- version 1.6. They are H5Epush, H5Eprint, H5Ewalk, H5Eclear,
- H5Eset_auto, H5Eget_auto. Their new equivalent functions are
- called H5Epush_stack, H5Eprint_stack, H5Ewalk_stack,
- H5Eclear_stack, H5Eset_auto_stack, H5Eget_auto_stack. SLU -
- 2004/9/2
-      - Four new API functions, H5Tencode, H5Tdecode, H5Sencode, and
-        H5Sdecode, were added to the library. Given an object ID, these
-        functions encode and decode HDF5 object (datatype and dataspace)
-        information into and from a binary buffer. SLU - 2004/07/21
-      - Modified the way HDF5 calculates the 'pixels_per_scanline' parameter
-        for SZIP compression. Now there is no restriction on the size and
-        shape of the chunk except that the total number of elements in the
-        chunk cannot be smaller than the 'pixels_per_block' parameter
-        provided by the user.
-        EIP - 2004/07/21
- - Added support for SZIP without encoder. Added H5Zget_filter_info
- and changed H5Pget_filter and H5Pget_filter_by_id to support this
- change. JL/NF - 2004/06/30
- - SZIP always uses K13 compression. This flag no longer needs to
- be set when calling H5Pset_szip. If the flag for CHIP
- compression is set, it will be ignored (since the two are mutually
- exclusive). JL/NF - 2004/6/30
-      - A new API function, H5Fget_name, was added. It returns the name
-        of the file, given an object (file, group, dataset, named datatype,
-        or attribute) ID. SLU - 2004/06/29
- - Added support for user defined identifier types. NF/JL - 2004/06/29
- - A new API function H5Fget_filesize was added. It returns the
- actual file size of the opened file. SLU - 2004/06/24
- - New Feature of Data transformation is added. AKC - 2004/05/03.
- - New exception handler for datatype conversion is put in to
- replace the old overflow callback function. This exception
- handler is set through H5Pset_type_conv_cb function.
- SLU - 2004/4/27
-      - Added an option: if $HDF5_DISABLE_VERSION_CHECK is set to 2, all
-        library version mismatch warning messages are suppressed.
- AKC - 2004/4/14
-      - A new type of dataspace, the null dataspace (a dataspace with no
-        elements), was added. SLU - 2004/3/24
- - Data type conversion(software) from integer to float was added.
- SLU - 2004/3/13
- - Data type conversion(software) from float to integer was added.
- Conversion from integer to float will be added later.
- SLU -2004/2/4
- - Added new H5Premove_filter routine to remove I/O pipeline filters
- from dataset creation property lists. PVN - 2004/01/26
- - Added new 'compare' callback parameter to H5Pregister & H5Pinsert
- routines. QAK - 2004/01/07
- - Data type conversion(hardware) between integers and floats was added.
- SLU 2003/11/21
- - New function H5Iget_file_id() was added. It returns file ID given
- an object(dataset, group, or attribute) ID. SLU 2003/10/29
- - Added new fields to the H5G_stat_t for more information about an
- object's object header. QAK 2003/10/06
- - Added new H5Fget_freespace() routine to query the free space in a
- given file. QAK 2003/10/06
-      - Added backward compatibility with v1.6 for the new Error API. SLU -
-        2003/09/24
- - Changed 'objno' field in H5G_stat_t structure from 'unsigned long[2]'
- to 'haddr_t'. QAK - 2003/08/08
- - Changed 'fileno' field in H5G_stat_t structure from 'unsigned long[2]'
- to 'unsigned long'. QAK - 2003/08/08
- - Changed 'hobj_ref_t' type from structure with array field to 'haddr_t'.
- QAK - 2003/08/08
- - Object references (hobj_ref_t) can now be compared with the 'objno'
- field in the H5G_stat_t struct for testing if two objects are the
- same within a file. QAK - 2003/08/08
- - Switched over to new error API. SLU - 2003/07/25
-
- Parallel Library:
- -----------------
- - Allow compressed, chunked datasets to be read in parallel.
- QAK - 2004/10/04
-      - Added options for using atomicity and file-sync to test_mpio_1wMr.
-        AKC - 2003/11/13
-      - Added a parallel test, test_mpio_1wMr, which tests whether the
-        underlying parallel I/O system conforms to the POSIX
-        write/read requirement. AKC - 2003/11/12
-
- Fortran Library:
- ----------------
- - added missing h5tget_member_class_f function
- EIP 2005/04/06
- - added new functions h5fget_name_f and h5fget_filesize_f
- EIP 2004/07/08
- - h5dwrite/read_f and h5awrite/read_f functions only accept dims parameter
- of the type INTEGER(HSIZE_T).
- - added support for native integers of 8 bytes (i.e. when special
- compiler flag is specified to set native fortran integers to 8 bytes,
- for example, -i8 flag for PGI and Absoft Fortran compilers,
- -qintsize=8 flag for IBM xlf compiler).
- EIP 2005/06/20
-
-
- Tools:
- ------
- - new tool, h5jam. See reference manual. 2004/10/08
- - h5repack.sh did not report errors encountered during tests. It does
- now. AKC - 2004/04/02
- - Added the MPI-I/O and MPI-POSIX drivers to the list of VFL drivers
- available for h5dump and h5ls. RPM & QAK - 2004/02/01
- - Added option --vfd= to h5ls to allow a VFL driver to be selected
- by a user. RPM & QAK - 2004/02/01
- - Added option -showconfig to compiler tools (h5cc,h5fc,h5c++).
- AKC - 2004/01/08
- - Install the "h5cc" and "h5fc" tools as "h5pcc" and "h5pfc"
- respectively if library is built in parallel mode.
- WCW - 2003/11/04
- - Added metadata benchmark (perform/perf_meta). SLU - 2003/10/03
- - Changed output of "OID"s from h5dump from "
-
-
-
-
- HDF5 documents and links
-
- Introduction to HDF5
- HDF5 Reference Manual
- HDF5 User's Guide for Release 1.6
-
-
- And in this document, the
- HDF5 User's Guide from Release 1.4.5:
-
- Files
- Datasets
- Datatypes
- Dataspaces
- Groups
-
- References
- Attributes
- Property Lists
- Error Handling
-
- Filters
- Caching
- Chunking
- Mounting Files
-
- Performance
- Debugging
- Environment
- DDL
-
-The Attribute Interface (H5A)
-
- 1. Introduction
-
- 2. Creating, Opening, Closing and Deleting Attributes
-
- New attributes can be created with the H5Acreate()
function,
- and existing attributes can be accessed with either the
- H5Aopen_name()
or H5Aopen_idx()
functions. All
- three functions return an object ID which should be eventually released
- by calling H5Aclose()
.
-
-
-
-
- hid_t H5Acreate (hid_t loc_id, const char
- *name, hid_t type_id, hid_t space_id,
- hid_t create_plist_id)
-
- hid_t H5Aopen_name (hid_t loc_id, const char
- *name)
-
- hid_t H5Aopen_idx (hid_t loc_id, unsigned
- idx)
-
- herr_t H5Aclose (hid_t attr_id)
-
- herr_t H5Adelete (hid_t loc_id,
- const char *name)
- 3. Attribute I/O Functions
-
-
-
-
- herr_t H5Awrite (hid_t attr_id,
- hid_t mem_type_id, void *buf)
-
- herr_t H5Aread (hid_t attr_id,
- hid_t mem_type_id, void *buf)
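- As a brief usage sketch of the I/O calls above (illustrative only:
- loc_id stands for any open file or group identifier, and error
- checking is omitted):

```
hid_t space, attr;
int   value = 42;                        /* hypothetical attribute data */

space = H5Screate(H5S_SCALAR);
attr  = H5Acreate(loc_id, "count", H5T_NATIVE_INT, space, H5P_DEFAULT);
H5Awrite(attr, H5T_NATIVE_INT, &value);  /* write from the memory buffer */
H5Aread(attr, H5T_NATIVE_INT, &value);   /* read back into the same buffer */
H5Aclose(attr);
H5Sclose(space);
```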
- 4. Attribute Inquiry Functions
-
-
-
-
-
-herr_t H5Aiterate (hid_t loc_id,
- unsigned *attr_number,
- H5A_operator operator,
- void *operator_data)
-
- typedef herr_t (*H5A_operator_t)(hid_t loc_id,
- const char *attr_name, void *operator_data);
-
-
-
- hid_t H5Aget_space (hid_t attr_id)
-
- hid_t H5Aget_type (hid_t attr_id)
-
- ssize_t H5Aget_name (hid_t attr_id,
- size_t buf_size, char *buf)
-
- int H5Aget_num_attrs (hid_t loc_id)
-
-
-
-
-
-
-
-HDF Help Desk
-
-Describes HDF5 Release 1.4.5, February 2003
-
-
-
-Last modified: 6 July 2000
-
-
-
-
diff --git a/doc/html/Big.html b/doc/html/Big.html
deleted file mode 100644
index fe00ff8..0000000
--- a/doc/html/Big.html
+++ /dev/null
@@ -1,122 +0,0 @@
-
-
-
- Big Datasets on Small Machines
-
- 1. Introduction
-
- The HDF5 library is able to handle files larger than the maximum
- file size and datasets larger than the maximum memory size. For
- instance, a machine where sizeof(off_t)
- and sizeof(size_t)
are both four bytes can handle
- datasets and files as large as 18x10^18 bytes. However, most
- Unix systems limit the number of concurrently open files, so a
- practical file size limit is closer to 512GB or 1TB.
-
- Two tricks are employed on these systems to store large datasets:
- the first circumvents the off_t
file size limit and the second circumvents
- the size_t
main memory limit.
-
- 2. File Size Limits
-
-
-
-
-checking size of off_t... 8
-
-
-
-checking for lseek64... yes
-checking for fseek64... yes
-
- The file name passed to H5Fcreate() should contain a printf-
-style integer format. For instance:
-
-
-
-
-hid_t plist, file;
-plist = H5Pcreate (H5P_FILE_ACCESS);
-H5Pset_family (plist, 1<<30, H5P_DEFAULT);
-file = H5Fcreate ("big%03d.h5", H5F_ACC_TRUNC, H5P_DEFAULT, plist);
-
- The second argument (1<<30) to
- H5Pset_family()
indicates that the family members
- are to be 2^30 bytes (1GB) each although we could have used any
- reasonably large value. In general, family members cannot be
- 2GB because writes to byte number 2,147,483,647 will fail, so
- the largest safe value for a family member is 2,147,483,647.
- HDF5 will create family members on demand as the HDF5 address
- space increases, but since most Unix systems limit the number of
- concurrently open files the effective maximum size of the HDF5
- address space will be limited (the system on which this was
- developed allows 1024 open files, so if each family member is
- approx 2GB then the largest HDF5 file is approx 2TB).
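The estimate in parentheses can be checked with a quick sketch (the 1024-file limit is that system's, not a universal constant):

```c
#include <stdint.h>

/* Total family-file capacity is the member size times the number of
 * members the OS lets us keep open at once (1024 on the system
 * described in the text above). */
uint64_t family_capacity(uint64_t member_bytes, uint64_t max_open)
{
    return member_bytes * max_open;
}
```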
-
-
-
-
-hid_t plist = H5Pcreate (H5P_DATASET_CREATE);
-for (i=0; i<5*1024; i++) {
- sprintf (name, "velocity-%04d.raw", i);
- H5Pset_external (plist, name, 0, (size_t)1<<30);
-}
-
3. Dataset Size Limits
-
- The other limit is the maximum memory size, governed by
- sizeof(size_t)
. HDF5 defines a data type called
- hsize_t
which is used for sizes of datasets and is,
- by default, defined as unsigned long long
.
-
- The example below creates a 4-dimensional dataset whose total size
- is too large to fit in a size_t
, and a
- 1-dimensional dataset whose dimension size is too large to fit
- in a size_t
.
-
-
-
-
-hsize_t size1[4] = {8, 1024, 1024, 1024};
-hid_t space1 = H5Screate_simple (4, size1, size1);
-
-hsize_t size2[1] = {8589934592LL};
-hid_t space2 = H5Screate_simple (1, size2, size2);
-
- However, the LL
suffix is not portable, so it may
- be better to replace the number with
- (hsize_t)8*1024*1024*1024
.
-
- If the C compiler doesn't support long long, then
large
- datasets will not be possible. The library performs too much
- arithmetic on hsize_t
types to make the use of a
- struct feasible.
-
-
- Robb Matzke
-
-
-Last modified: Sun Jul 19 11:37:25 EDT 1998
-
-
-
diff --git a/doc/html/Caching.html b/doc/html/Caching.html
deleted file mode 100644
index d194ba3..0000000
--- a/doc/html/Caching.html
+++ /dev/null
@@ -1,190 +0,0 @@
-
-
-
-
-
-
-
-
-
-Data Caching
-
- 1. Meta Data Caching
-
- 2. Raw Data Chunk Caching
-
-
-
-
- 3. Data Caching Operations
-
-
-
-
-
-herr_t H5Pset_cache(hid_t plist, unsigned int
- mdc_nelmts, size_t rdcc_nbytes, double
- w0)
- herr_t H5Pget_cache(hid_t plist, unsigned int
- *mdc_nelmts, size_t *rdcc_nbytes, double
- w0)
- When calling H5Pget_cache(),
any (or all) of
- the pointer arguments may be null pointers.
-
-
-
-
-
-
-
-HDF Help Desk
-
-Describes HDF5 Release 1.4.5, February 2003
-
-
-Last modified: 13 December 1999
-
-
-
-
-
diff --git a/doc/html/Chunk_f1.gif b/doc/html/Chunk_f1.gif
deleted file mode 100644
index d73201a..0000000
Binary files a/doc/html/Chunk_f1.gif and /dev/null differ
diff --git a/doc/html/Chunk_f1.obj b/doc/html/Chunk_f1.obj
deleted file mode 100644
index 004204a..0000000
--- a/doc/html/Chunk_f1.obj
+++ /dev/null
@@ -1,252 +0,0 @@
-%TGIF 3.0-p17
-state(0,33,100.000,0,0,0,16,1,9,1,1,0,0,0,1,1,1,'Courier',0,17,0,0,0,10,0,0,1,1,0,16,0,0,1,1,1,0,1088,1408,0,0,2880).
-%
-% @(#)$Header$
-% %W%
-%
-unit("1 pixel/pixel").
-color_info(11,65535,0,[
- "magenta", 65535, 0, 65535, 65280, 0, 65280, 1,
- "red", 65535, 0, 0, 65280, 0, 0, 1,
- "green", 0, 65535, 0, 0, 65280, 0, 1,
- "blue", 0, 0, 65535, 0, 0, 65280, 1,
- "yellow", 65535, 65535, 0, 65280, 65280, 0, 1,
- "pink", 65535, 49344, 52171, 65280, 49152, 51968, 1,
- "cyan", 0, 65535, 65535, 0, 65280, 65280, 1,
- "CadetBlue", 24415, 40606, 41120, 24320, 40448, 40960, 1,
- "white", 65535, 65535, 65535, 65280, 65280, 65280, 1,
- "black", 0, 0, 0, 0, 0, 0, 1,
- "DarkSlateGray", 12079, 20303, 20303, 12032, 20224, 20224, 1
-]).
-page(1,"",1).
-text('black',432,272,'Courier',0,17,2,1,0,1,49,28,302,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "Point",
- "Written"]).
-box('black',256,288,320,352,0,3,1,70,0,0,0,0,0,'3',[
-]).
-text('black',288,272,'Courier',0,17,1,1,0,1,49,14,75,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "Dataset"]).
-box('black',352,288,384,320,5,1,1,77,5,0,0,0,0,'1',[
-]).
-text('black',368,272,'Courier',0,17,1,1,0,1,35,14,80,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "Chunk"]).
-box('black',96,32,544,384,0,1,1,118,0,0,0,0,0,'1',[
-]).
-box('black',128,64,256,128,5,1,1,131,5,0,0,0,0,'1',[
-]).
-box('black',128,128,256,192,5,1,1,132,5,0,0,0,0,'1',[
-]).
-box('black',384,64,512,128,5,1,1,137,5,0,0,0,0,'1',[
-]).
-box('black',256,128,384,192,5,1,1,142,5,0,0,0,0,'1',[
-]).
-box('black',256,192,384,256,5,1,1,144,5,0,0,0,0,'1',[
-]).
-box('black',384,192,512,256,5,1,1,146,5,0,0,0,0,'1',[
-]).
-box('black',128,64,432,224,0,3,1,26,0,0,0,0,0,'3',[
-]).
-group([
-polygon('black',11,[
- 152,80,154,86,160,86,155,89,157,94,152,91,147,94,149,89,
- 144,86,150,86,152,80],1,1,1,0,178,0,0,0,0,0,'1',
- "000",[
-]),
-box('black',148,84,156,92,0,1,0,179,0,0,0,0,0,'1',[
-attr("", "auto_center_attr", 0, 1, 0,
-text('black',152,83,'Courier',0,17,1,1,0,1,112,14,180,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "auto_center_attr"])),
-attr("label=", "", 1, 0, 0,
-text('black',152,80,'Courier',0,17,1,1,0,1,0,14,181,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- ""]))
-])
-],
-182,0,0,[
-]).
-group([
-polygon('black',11,[
- 200,96,202,102,208,102,203,105,205,110,200,107,195,110,197,105,
- 192,102,198,102,200,96],1,1,1,0,188,0,0,0,0,0,'1',
- "000",[
-]),
-box('black',196,100,204,108,0,1,0,189,0,0,0,0,0,'1',[
-attr("", "auto_center_attr", 0, 1, 0,
-text('black',200,99,'Courier',0,17,1,1,0,1,112,14,190,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "auto_center_attr"])),
-attr("label=", "", 1, 0, 0,
-text('black',200,96,'Courier',0,17,1,1,0,1,0,14,191,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- ""]))
-])
-],
-192,0,0,[
-]).
-group([
-polygon('black',11,[
- 168,128,170,134,176,134,171,137,173,142,168,139,163,142,165,137,
- 160,134,166,134,168,128],1,1,1,0,198,0,0,0,0,0,'1',
- "000",[
-]),
-box('black',164,132,172,140,0,1,0,199,0,0,0,0,0,'1',[
-attr("", "auto_center_attr", 0, 1, 0,
-text('black',168,131,'Courier',0,17,1,1,0,1,112,14,200,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "auto_center_attr"])),
-attr("label=", "", 1, 0, 0,
-text('black',168,128,'Courier',0,17,1,1,0,1,0,14,201,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- ""]))
-])
-],
-202,0,0,[
-]).
-group([
-polygon('black',11,[
- 168,160,170,166,176,166,171,169,173,174,168,171,163,174,165,169,
- 160,166,166,166,168,160],1,1,1,0,208,0,0,0,0,0,'1',
- "000",[
-]),
-box('black',164,164,172,172,0,1,0,209,0,0,0,0,0,'1',[
-attr("", "auto_center_attr", 0, 1, 0,
-text('black',168,163,'Courier',0,17,1,1,0,1,112,14,210,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "auto_center_attr"])),
-attr("label=", "", 1, 0, 0,
-text('black',168,160,'Courier',0,17,1,1,0,1,0,14,211,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- ""]))
-])
-],
-212,0,0,[
-]).
-group([
-polygon('black',11,[
- 136,144,138,150,144,150,139,153,141,158,136,155,131,158,133,153,
- 128,150,134,150,136,144],1,1,1,0,218,0,0,0,0,0,'1',
- "000",[
-]),
-box('black',132,148,140,156,0,1,0,219,0,0,0,0,0,'1',[
-attr("", "auto_center_attr", 0, 1, 0,
-text('black',136,147,'Courier',0,17,1,1,0,1,112,14,220,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "auto_center_attr"])),
-attr("label=", "", 1, 0, 0,
-text('black',136,144,'Courier',0,17,1,1,0,1,0,14,221,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- ""]))
-])
-],
-222,0,0,[
-]).
-group([
-polygon('black',11,[
- 248,144,250,150,256,150,251,153,253,158,248,155,243,158,245,153,
- 240,150,246,150,248,144],1,1,1,0,228,0,0,0,0,0,'1',
- "000",[
-]),
-box('black',244,148,252,156,0,1,0,229,0,0,0,0,0,'1',[
-attr("", "auto_center_attr", 0, 1, 0,
-text('black',248,147,'Courier',0,17,1,1,0,1,112,14,230,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "auto_center_attr"])),
-attr("label=", "", 1, 0, 0,
-text('black',248,144,'Courier',0,17,1,1,0,1,0,14,231,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- ""]))
-])
-],
-232,0,0,[
-]).
-group([
-polygon('black',11,[
- 296,176,298,182,304,182,299,185,301,190,296,187,291,190,293,185,
- 288,182,294,182,296,176],1,1,1,0,238,0,0,0,0,0,'1',
- "000",[
-]),
-box('black',292,180,300,188,0,1,0,239,0,0,0,0,0,'1',[
-attr("", "auto_center_attr", 0, 1, 0,
-text('black',296,179,'Courier',0,17,1,1,0,1,112,14,240,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "auto_center_attr"])),
-attr("label=", "", 1, 0, 0,
-text('black',296,176,'Courier',0,17,1,1,0,1,0,14,241,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- ""]))
-])
-],
-242,0,0,[
-]).
-group([
-polygon('black',11,[
- 360,208,362,214,368,214,363,217,365,222,360,219,355,222,357,217,
- 352,214,358,214,360,208],1,1,1,0,248,0,0,0,0,0,'1',
- "000",[
-]),
-box('black',356,212,364,220,0,1,0,249,0,0,0,0,0,'1',[
-attr("", "auto_center_attr", 0, 1, 0,
-text('black',360,211,'Courier',0,17,1,1,0,1,112,14,250,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "auto_center_attr"])),
-attr("label=", "", 1, 0, 0,
-text('black',360,208,'Courier',0,17,1,1,0,1,0,14,251,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- ""]))
-])
-],
-252,0,0,[
-]).
-group([
-polygon('black',11,[
- 408,192,410,198,416,198,411,201,413,206,408,203,403,206,405,201,
- 400,198,406,198,408,192],1,1,1,0,258,0,0,0,0,0,'1',
- "000",[
-]),
-box('black',404,196,412,204,0,1,0,259,0,0,0,0,0,'1',[
-attr("", "auto_center_attr", 0, 1, 0,
-text('black',408,195,'Courier',0,17,1,1,0,1,112,14,260,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "auto_center_attr"])),
-attr("label=", "", 1, 0, 0,
-text('black',408,192,'Courier',0,17,1,1,0,1,0,14,261,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- ""]))
-])
-],
-262,0,0,[
-]).
-group([
-polygon('black',11,[
- 376,128,378,134,384,134,379,137,381,142,376,139,371,142,373,137,
- 368,134,374,134,376,128],1,1,1,0,268,0,0,0,0,0,'1',
- "000",[
-]),
-box('black',372,132,380,140,0,1,0,269,0,0,0,0,0,'1',[
-attr("", "auto_center_attr", 0, 1, 0,
-text('black',376,131,'Courier',0,17,1,1,0,1,112,14,270,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "auto_center_attr"])),
-attr("label=", "", 1, 0, 0,
-text('black',376,128,'Courier',0,17,1,1,0,1,0,14,271,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- ""]))
-])
-],
-272,0,0,[
-]).
-group([
-polygon('black',11,[
- 408,80,410,86,416,86,411,89,413,94,408,91,403,94,405,89,
- 400,86,406,86,408,80],1,1,1,0,278,0,0,0,0,0,'1',
- "000",[
-]),
-box('black',404,84,412,92,0,1,0,279,0,0,0,0,0,'1',[
-attr("", "auto_center_attr", 0, 1, 0,
-text('black',408,83,'Courier',0,17,1,1,0,1,112,14,280,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "auto_center_attr"])),
-attr("label=", "", 1, 0, 0,
-text('black',408,80,'Courier',0,17,1,1,0,1,0,14,281,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- ""]))
-])
-],
-282,0,0,[
-]).
-group([
-polygon('black',11,[
- 424,304,426,310,432,310,427,313,429,318,424,315,419,318,421,313,
- 416,310,422,310,424,304],1,1,1,0,288,0,0,0,0,0,'1',
- "000",[
-]),
-box('black',420,308,428,316,0,1,0,289,0,0,0,0,0,'1',[
-attr("", "auto_center_attr", 0, 1, 0,
-text('black',424,307,'Courier',0,17,1,1,0,1,112,14,290,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "auto_center_attr"])),
-attr("label=", "", 1, 0, 0,
-text('black',424,304,'Courier',0,17,1,1,0,1,0,14,291,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- ""]))
-])
-],
-292,0,0,[
-]).
diff --git a/doc/html/Chunk_f2.gif b/doc/html/Chunk_f2.gif
deleted file mode 100644
index 68f9433..0000000
Binary files a/doc/html/Chunk_f2.gif and /dev/null differ
diff --git a/doc/html/Chunk_f2.obj b/doc/html/Chunk_f2.obj
deleted file mode 100644
index 7361c1c..0000000
--- a/doc/html/Chunk_f2.obj
+++ /dev/null
@@ -1,95 +0,0 @@
-%TGIF 3.0-p17
-state(0,33,100.000,0,0,0,16,1,9,1,1,6,1,1,0,1,0,'Courier',0,17,0,2,0,10,0,0,1,1,0,16,0,0,1,1,1,0,1088,1408,0,0,2880).
-%
-% @(#)$Header$
-% %W%
-%
-unit("1 pixel/pixel").
-color_info(11,65535,0,[
- "magenta", 65535, 0, 65535, 65280, 0, 65280, 1,
- "red", 65535, 0, 0, 65280, 0, 0, 1,
- "green", 0, 65535, 0, 0, 65280, 0, 1,
- "blue", 0, 0, 65535, 0, 0, 65280, 1,
- "yellow", 65535, 65535, 0, 65280, 65280, 0, 1,
- "pink", 65535, 49344, 52171, 65280, 49152, 51968, 1,
- "cyan", 0, 65535, 65535, 0, 65280, 65280, 1,
- "CadetBlue", 24415, 40606, 41120, 24320, 40448, 40960, 1,
- "white", 65535, 65535, 65535, 65280, 65280, 65280, 1,
- "black", 0, 0, 0, 0, 0, 0, 1,
- "DarkSlateGray", 12079, 20303, 20303, 12032, 20224, 20224, 1
-]).
-page(1,"",1).
-group([
-box('black',192,416,512,544,0,1,0,22,0,0,0,0,0,'1',[
-]),
-oval('black',192,384,512,448,0,1,1,23,0,0,0,0,0,'1',[
-]),
-arc('black',0,1,1,0,192,512,352,544,192,544,512,544,0,320,64,11520,11520,24,0,0,8,3,0,0,0,'1','8','3',[
-]),
-poly('black',2,[
- 192,416,192,544],0,1,1,25,0,0,0,0,8,3,0,0,0,'1','8','3',
- "0",[
-]),
-poly('black',2,[
- 512,416,512,544],0,1,1,26,0,0,0,0,8,3,0,0,0,'1','8','3',
- "0",[
-]),
-box('black',196,452,508,572,0,1,0,27,0,0,0,0,0,'1',[
-attr("", "auto_center_attr", 0, 1, 0,
-text('black',352,451,'Courier',0,17,1,1,0,1,112,14,28,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "auto_center_attr"])),
-attr("label=", "HDF5 File", 1, 0, 0,
-text('black',351,505,'Courier',0,17,1,1,0,1,63,14,29,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "HDF5 File"]))
-])
-],
-30,0,0,[
-]).
-group([
-polygon('black',5,[
- 240,160,240,352,464,352,464,160,240,160],0,1,1,0,63,0,0,0,0,0,'1',
- "00",[
-]),
-box('black',254,164,450,348,0,1,0,64,0,0,0,0,0,'1',[
-attr("", "auto_center_attr", 0, 1, 0,
-text('black',352,163,'Courier',0,17,1,1,0,1,112,14,65,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "auto_center_attr"])),
-attr("label=", "Filter", 1, 0, 0,
-text('black',351,242,'Courier',0,17,2,1,0,1,49,28,66,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "Filter",
- "Pipeline"]))
-])
-],
-62,0,0,[
-]).
-group([
-polygon('black',13,[
- 304,85,304,107,336,107,336,128,368,128,368,107,400,107,400,85,
- 368,85,368,64,336,64,336,85,304,85],0,1,1,0,103,0,0,0,0,0,'1',
- "0000",[
-]),
-box('black',307,68,397,124,0,1,0,104,0,0,0,0,0,'1',[
-attr("", "auto_center_attr", 0, 1, 0,
-text('black',352,67,'Courier',0,17,1,1,0,1,112,14,105,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "auto_center_attr"])),
-attr("label=", "Modify Bytes", 1, 0, 0,
-text('black',352,89,'Courier',0,17,1,1,0,1,84,14,106,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "Modify Bytes"]))
-])
-],
-107,0,0,[
-]).
-box('black',176,48,528,592,0,1,1,143,0,0,0,0,0,'1',[
-]).
-poly('black',4,[
- 256,416,256,128,256,96,304,96],1,7,1,168,1,0,2,0,22,9,0,0,0,'7','22','9',
- "6",[
-]).
-poly('black',4,[
- 400,96,448,96,448,128,448,416],1,7,1,173,1,0,2,0,22,9,0,0,0,'7','22','9',
- "6",[
-]).
-text('black',432,128,'Courier',0,17,1,0,0,1,35,14,312,0,11,3,2,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "Chunk"]).
-text('black',240,368,'Courier',0,17,1,0,0,1,35,14,314,0,11,3,2,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "Chunk"]).
diff --git a/doc/html/Chunk_f3.gif b/doc/html/Chunk_f3.gif
deleted file mode 100644
index e6e8457..0000000
Binary files a/doc/html/Chunk_f3.gif and /dev/null differ
diff --git a/doc/html/Chunk_f4.gif b/doc/html/Chunk_f4.gif
deleted file mode 100644
index 76f0994..0000000
Binary files a/doc/html/Chunk_f4.gif and /dev/null differ
diff --git a/doc/html/Chunk_f5.gif b/doc/html/Chunk_f5.gif
deleted file mode 100644
index 3b12174..0000000
Binary files a/doc/html/Chunk_f5.gif and /dev/null differ
diff --git a/doc/html/Chunk_f6.gif b/doc/html/Chunk_f6.gif
deleted file mode 100644
index 616946d..0000000
Binary files a/doc/html/Chunk_f6.gif and /dev/null differ
diff --git a/doc/html/Chunk_f6.obj b/doc/html/Chunk_f6.obj
deleted file mode 100644
index 2b2f371..0000000
--- a/doc/html/Chunk_f6.obj
+++ /dev/null
@@ -1,107 +0,0 @@
-%TGIF 3.0-p17
-state(0,33,100.000,0,0,0,8,1,9,1,1,0,1,1,0,1,1,'Courier',0,17,0,0,0,10,0,0,1,1,0,16,0,0,1,1,1,0,1088,1408,0,0,2880).
-%
-% @(#)$Header$
-% %W%
-%
-unit("1 pixel/pixel").
-color_info(11,65535,0,[
- "magenta", 65535, 0, 65535, 65280, 0, 65280, 1,
- "red", 65535, 0, 0, 65280, 0, 0, 1,
- "green", 0, 65535, 0, 0, 65280, 0, 1,
- "blue", 0, 0, 65535, 0, 0, 65280, 1,
- "yellow", 65535, 65535, 0, 65280, 65280, 0, 1,
- "pink", 65535, 49344, 52171, 65280, 49152, 51968, 1,
- "cyan", 0, 65535, 65535, 0, 65280, 65280, 1,
- "CadetBlue", 24415, 40606, 41120, 24320, 40448, 40960, 1,
- "white", 65535, 65535, 65535, 65280, 65280, 65280, 1,
- "black", 0, 0, 0, 0, 0, 0, 1,
- "DarkSlateGray", 12079, 20303, 20303, 12032, 20224, 20224, 1
-]).
-page(1,"",1).
-polygon('black',5,[
- 128,256,256,256,256,320,128,320,128,256],5,1,1,0,26,5,0,0,0,0,'1',
- "00",[
-]).
-polygon('black',7,[
- 256,128,256,256,128,256,128,192,192,192,192,128,256,128],5,1,1,0,25,5,0,0,0,0,'1',
- "00",[
-]).
-polygon('black',7,[
- 128,64,256,64,256,128,192,128,192,192,128,192,128,64],5,1,1,0,24,5,0,0,0,0,'1',
- "00",[
-]).
-box('black',128,64,256,320,0,3,1,22,0,0,0,0,0,'3',[
-]).
-text('black',192,96,'Courier',0,17,1,1,0,1,49,14,34,0,11,3,2,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "Strip 1"]).
-text('black',224,160,'Courier',0,17,1,1,0,1,49,14,40,0,11,3,2,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "Strip 2"]).
-text('black',192,272,'Courier',0,17,1,1,0,1,49,14,46,0,11,3,2,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "Strip 3"]).
-polygon('black',5,[
- 448,256,576,256,576,320,448,320,448,256],5,1,1,0,59,5,0,0,0,0,'1',
- "00",[
-]).
-polygon('black',7,[
- 576,128,576,256,448,256,448,192,512,192,512,128,576,128],5,1,1,0,60,5,0,0,0,0,'1',
- "00",[
-]).
-polygon('black',7,[
- 448,64,576,64,576,128,512,128,512,192,448,192,448,64],5,1,1,0,61,5,0,0,0,0,'1',
- "00",[
-]).
-box('black',448,64,576,320,0,3,1,62,0,0,0,0,0,'3',[
-]).
-text('black',512,96,'Courier',0,17,1,1,0,1,49,14,63,0,11,3,2,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "Strip 1"]).
-text('black',544,160,'Courier',0,17,1,1,0,1,49,14,64,0,11,3,2,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "Strip 2"]).
-text('black',512,272,'Courier',0,17,1,1,0,1,49,14,65,0,11,3,2,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "Strip 3"]).
-text('black',192,32,'Courier',0,17,1,1,0,1,28,14,68,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "FILE"]).
-text('black',512,32,'Courier',0,17,1,1,0,1,42,14,70,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "MEMORY"]).
-group([
-polygon('black',6,[
- 320,160,320,208,384,208,416,184,384,160,320,160],0,3,1,0,72,0,0,0,0,0,'3',
- "00",[
-]),
-box('black',324,164,388,204,0,3,0,73,0,0,0,0,0,'3',[
-attr("", "auto_center_attr", 0, 1, 0,
-text('black',356,162,'Courier',0,17,1,1,0,1,112,14,74,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "auto_center_attr"])),
-attr("label=", "TCONV", 1, 0, 0,
-text('black',355,177,'Courier',0,17,1,1,0,1,35,14,75,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "TCONV"]))
-])
-],
-76,0,0,[
-]).
-poly('black',5,[
- 256,96,288,96,320,96,320,128,320,160],1,7,1,87,1,0,5,0,22,9,0,0,0,'7','22','9',
- "70",[
-]).
-poly('black',2,[
- 256,184,320,184],1,7,1,88,1,0,5,0,22,9,0,0,0,'7','22','9',
- "0",[
-]).
-poly('black',5,[
- 256,288,288,288,320,288,320,256,320,208],1,7,1,89,1,0,5,0,22,9,0,0,0,'7','22','9',
- "70",[
-]).
-poly('black',5,[
- 400,160,400,128,400,96,432,96,448,96],1,7,1,92,1,0,5,0,22,9,0,0,0,'7','22','9',
- "70",[
-]).
-poly('black',2,[
- 416,184,512,184],1,7,1,93,1,0,5,0,22,9,0,0,0,'7','22','9',
- "0",[
-]).
-poly('black',5,[
- 400,208,400,256,400,288,432,288,448,288],1,7,1,94,1,0,5,0,22,9,0,0,0,'7','22','9',
- "70",[
-]).
-box('black',96,0,608,352,0,1,1,99,0,0,0,0,0,'1',[
-]).
diff --git a/doc/html/Chunking.html b/doc/html/Chunking.html
deleted file mode 100644
index 3738d9a..0000000
--- a/doc/html/Chunking.html
+++ /dev/null
@@ -1,313 +0,0 @@
-
-
-
-
-
-
-
-
- HDF5 documents and links
-
- Introduction to HDF5
- HDF5 Reference Manual
- HDF5 User's Guide for Release 1.6
-
-
- And in this document, the
- HDF5 User's Guide from Release 1.4.5:
-
- Files
- Datasets
- Datatypes
- Dataspaces
- Groups
-
- References
- Attributes
- Property Lists
- Error Handling
-
- Filters
- Caching
- Chunking
- Mounting Files
-
- Performance
- Debugging
- Environment
- DDL
-
-Dataset Chunking Issues
-
- Table of Contents
-
-
-
-
- 1. Introduction
-
-
-
Figure 1

 If an H5Dwrite()
touches only a few bytes of the chunk,
- the entire chunk is read from the file, the data passes upward
- through the filter pipeline, the few bytes are modified, the
- data passes downward through the filter pipeline, and the entire
- chunk is written back to the file.
-
-
Figure 2

 2. The Raw Data Chunk Cache
-
- Calling H5Dwrite()
- many times from the application would result in poor performance
- even if the data being written all falls within a single chunk.
- A raw data chunk cache layer was added between the top of the
- filter stack and the bottom of the byte modification layer(2). By default, the chunk cache will store 521
- chunks or 1MB of data (whichever is less) but these values can
- be modified with H5Pset_cache()
.
-
-
-
-
- 3. Cache Efficiency
-
-
Figure 3
Figure 4
Figure 5

 Calls to H5Dwrite() modify only a
modify only a
- portion of any given chunk. Therefore, the first modification of
- a chunk will cause the chunk to be read from disk into the chunk
- buffer through the filter pipeline. Eventually HDF5 might
- contain a data set transfer property that can turn off this read
- operation resulting in write efficiency which is equal to read
- efficiency.
-
-
- 4. Fragmentation
-
- H5Dread()
or H5Dwrite()
it's
- possible the request will be broken into smaller, more manageable
- pieces by the library. This is almost certainly true if the data
- transfer includes a type conversion.
-
-
Figure 6

- The size of this buffer can be changed with H5Pset_buffer()
.
-
-
- 5. File Storage Overhead
-
-
-
-
- H5Pset_btree_ratios()
, but this method typically
- results in only a slight improvement over the default settings.
- Finally, the out-degree of each node can be increased by calling
- H5Pset_istore_k()
(increasing the out degree actually
- increases file overhead while decreasing the number of nodes).
-
-
- 6. Chunk Compression
-
-
-
-
-
-
-
-
- HDF5 documents and links
-
- Introduction to HDF5
- HDF5 Reference Manual
- HDF5 User's Guide for Release 1.6
-
-
- And in this document, the
- HDF5 User's Guide from Release 1.4.5:
-
- Files
- Datasets
- Datatypes
- Dataspaces
- Groups
-
- References
- Attributes
- Property Lists
- Error Handling
-
- Filters
- Caching
- Chunking
- Mounting Files
-
- Performance
- Debugging
- Environment
- DDL
-
-
-HDF Help Desk
-
-Describes HDF5 Release 1.4.5, February 2003
-
-
-Last modified: 2 August 2001
-
-
-
-
-
diff --git a/doc/html/CodeReview.html b/doc/html/CodeReview.html
deleted file mode 100644
index 213cbbe..0000000
--- a/doc/html/CodeReview.html
+++ /dev/null
@@ -1,300 +0,0 @@
-
-
-
- Code Review 1
Some background...
- H5B.c
file that implements a B-link-tree class
- without worrying about concurrency yet (thus the `Note:' in the
- function prologue). The H5B.c
file provides the
- basic machinery for operating on generic B-trees, but it isn't
- much use by itself. Various subclasses of the B-tree (like
- symbol tables or indirect storage) provide their own interface
- and back end to this function. For instance,
- H5G_stab_find()
takes a symbol table OID and a name
- and calls H5B_find()
with an appropriate
- udata
argument that eventually gets passed to the
- symbol-table subclass's found callback
function.
-
-
-
-
- 1 /*-------------------------------------------------------------------------
- 2 * Function: H5B_find
- 3 *
- 4 * Purpose: Locate the specified information in a B-tree and return
- 5 * that information by filling in fields of the caller-supplied
- 6 * UDATA pointer depending on the type of leaf node
- 7 * requested. The UDATA can point to additional data passed
- 8 * to the key comparison function.
- 9 *
-10 * Note: This function does not follow the left/right sibling
-11 * pointers since it assumes that all nodes can be reached
-12 * from the parent node.
-13 *
-14 * Return: Success: SUCCEED if found, values returned through the
-15 * UDATA argument.
-16 *
-17 * Failure: FAIL if not found, UDATA is undefined.
-18 *
-19 * Programmer: Robb Matzke
-20 * matzke@llnl.gov
-21 * Jun 23 1997
-22 *
-23 * Modifications:
-24 *
-25 *-------------------------------------------------------------------------
-26 */
-27 herr_t
-28 H5B_find (H5F_t *f, const H5B_class_t *type, const haddr_t *addr, void *udata)
-29 {
-30 H5B_t *bt=NULL;
-31 intn idx=-1, lt=0, rt, cmp=1;
-32 int ret_value = FAIL;
-
-
-
-33
-34 FUNC_ENTER (H5B_find, NULL, FAIL);
-35
-36 /*
-37 * Check arguments.
-38 */
-39 assert (f);
-40 assert (type);
-41 assert (type->decode);
-42 assert (type->cmp3);
-43 assert (type->found);
-44 assert (addr && H5F_addr_defined (addr));
-
assert
to check invariant conditions. At
- this level of the library, none of these assertions should fail
- unless something is majorly wrong. The arguments should have
- already been checked by higher layers. It also provides
- documentation about what arguments might be optional.
-
-
-
-
-45
-46 /*
-47 * Perform a binary search to locate the child which contains
-48 * the thing for which we're searching.
-49 */
-50 if (NULL==(bt=H5AC_protect (f, H5AC_BT, addr, type, udata))) {
-51 HGOTO_ERROR (H5E_BTREE, H5E_CANTLOAD, FAIL);
-52 }
-
H5AC.c
file. The
- H5AC_protect
ensures that the B-tree node (which
- inherits from the H5AC package) whose OID is addr
- is locked into memory for the duration of this function (see the
- H5AC_unprotect
on line 90). Most likely, if this
- node has been accessed in the not-too-distant past, it will still
- be in memory and the H5AC_protect
is almost a
- no-op. If cache debugging is compiled in, then the protect also
- prevents other parts of the library from accessing the node
- while this function is protecting it, so this function can allow
- the node to be in an inconsistent state while calling other
- parts of the library.
-
- H5AC_find
and assume that the pointer it returns is
- valid only until some other library function is called, but
- since we're accessing the pointer throughout this function, I
- chose to use the simpler protect scheme. All protected objects
- must be unprotected before the file is closed, thus the
- use of HGOTO_ERROR
instead of
- HRETURN_ERROR
.
-
-
-
-
-53 rt = bt->nchildren;
-54
-55 while (lt<rt && cmp) {
-56 idx = (lt + rt) / 2;
-57 if (H5B_decode_keys (f, bt, idx)<0) {
-58 HGOTO_ERROR (H5E_BTREE, H5E_CANTDECODE, FAIL);
-59 }
-60
-61 /* compare */
-62 if ((cmp=(type->cmp3)(f, bt->key[idx].nkey, udata,
-63 bt->key[idx+1].nkey))<0) {
-64 rt = idx;
-65 } else {
-66 lt = idx+1;
-67 }
-68 }
-69 if (cmp) {
-70 HGOTO_ERROR (H5E_BTREE, H5E_NOTFOUND, FAIL);
-71 }
-
(type->cmp3)()
is an indirect
- function call into the subclass of the B-tree. All indirect
- function calls have the function part in parentheses to document
- that it's indirect (quite obvious here, but not so obvious when
- the function is a variable).
-
- =
instead
- of ==
.
-
-
-
-
-72
-73 /*
-74 * Follow the link to the subtree or to the data node.
-75 */
-76 assert (idx>=0 && idx<bt->nchildren);
<0
gets lost at the end. Another thing to note is
- that success/failure is always determined by comparing with zero
- instead of SUCCEED
or FAIL
. I do this
- because occasionally one might want to return other meaningful
- values (always non-negative) or distinguish between various types of
- failure (always negative).
-
-
-
-
-88
-89 done:
-90 if (bt && H5AC_unprotect (f, H5AC_BT, addr, bt)<0) {
-91 HRETURN_ERROR (H5E_BTREE, H5E_PROTECT, FAIL);
-92 }
-93 FUNC_LEAVE (ret_value);
-94 }
-
HRETURN_ERROR
macro even though it
- will make the error stack not quite right. I also use short
- circuiting boolean operators instead of nested if
- statements since that's standard C practice.
-
- Code Review 2
-
-
- 1 /*--------------------------------------------------------------------------
- 2 NAME
- 3 H5Fflush
- 4
- 5 PURPOSE
- 6 Flush all cached data to disk and optionally invalidates all cached
- 7 data.
- 8
- 9 USAGE
-10 herr_t H5Fflush(fid, invalidate)
-11 hid_t fid; IN: File ID of file to close.
-12 hbool_t invalidate; IN: Invalidate all of the cache?
-13
-14 ERRORS
-15 ARGS BADTYPE Not a file atom.
-16 ATOM BADATOM Can't get file struct.
-17 CACHE CANTFLUSH Flush failed.
-18
-19 RETURNS
-20 SUCCEED/FAIL
-21
-22 DESCRIPTION
-23 This function flushes all cached data to disk and, if INVALIDATE
-24 is non-zero, removes cached objects from the cache so they must be
-25 re-read from the file on the next access to the object.
-26
-27 MODIFICATIONS:
-28 --------------------------------------------------------------------------*/
-
-
-
-29 herr_t
-30 H5Fflush (hid_t fid, hbool_t invalidate)
-31 {
-32 H5F_t *file = NULL;
-33
-34 FUNC_ENTER (H5Fflush, H5F_init_interface, FAIL);
-35 H5ECLEAR;
-
-
-
-36
-37 /* check arguments */
-38 if (H5_FILE!=H5Aatom_group (fid)) {
-39 HRETURN_ERROR (H5E_ARGS, H5E_BADTYPE, FAIL); /*not a file atom*/
-40 }
-41 if (NULL==(file=H5Aatom_object (fid))) {
-42 HRETURN_ERROR (H5E_ATOM, H5E_BADATOM, FAIL); /*can't get file struct*/
-43 }
-
assert
arguments at this level.
- We also convert atoms to pointers since atoms are really just a
- pointer-hiding mechanism. Functions that can be called
- internally always have pointer arguments instead of atoms
- because (1) then they don't have to always convert atoms to
- pointers, and (2) the various pointer data types provide more
- documentation and type checking than just an hid_t
- type.
-
-
-
-
-44
-45 /* do work */
-46 if (H5F_flush (file, invalidate)<0) {
-47 HRETURN_ERROR (H5E_CACHE, H5E_CANTFLUSH, FAIL); /*flush failed*/
-48 }
-
assert
to check/document
- its arguments and can be called from other library functions.
-
-
-
-
-49
-50 FUNC_LEAVE (SUCCEED);
-51 }
-
- Robb Matzke
-
-
-Last modified: Mon Nov 10 15:33:33 EST 1997
-
-
-
diff --git a/doc/html/Coding.html b/doc/html/Coding.html
deleted file mode 100644
index dbf55bf..0000000
--- a/doc/html/Coding.html
+++ /dev/null
@@ -1,300 +0,0 @@
-
-
-
-
-
-
-
-
-
- FILES
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
- These appear only in one header file anyway.
-
- Always start with `HDF5_HAVE_' like HDF5_HAVE_STDARG_H for a
- header file, or HDF5_HAVE_DEV_T for a data type, or
- HDF5_HAVE_DIV for a function.
-
-
-
-Copyright Notice and Statement for
-
-
-NCSA HDF5 (Hierarchical Data Format 5) Software Library and Utilities
-
-Copyright 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005 by
-the Board of Trustees of the University of Illinois
-
-All rights reserved.
-
-
-
-
-
-
-
-Portions of HDF5 were developed with support from the University of
-California, Lawrence Livermore National Laboratory (UC LLNL).
-The following statement applies to those portions of the product
-and must be retained in any redistribution of source code, binaries,
-documentation, and/or accompanying materials:
-
-
-
-
-HDF Help Desk
-
-Describes HDF5 Release 1.7, the unreleased development branch; working toward HDF5 Release 1.8.0
-
-
-
-
-
-
diff --git a/doc/html/Datasets.html b/doc/html/Datasets.html
deleted file mode 100644
index eca195d..0000000
--- a/doc/html/Datasets.html
+++ /dev/null
@@ -1,954 +0,0 @@
-
-
-
-
-
-
-
-
- HDF5 documents and links
-
- Introduction to HDF5
- HDF5 Reference Manual
- HDF5 User's Guide for Release 1.6
-
-
- And in this document, the
- HDF5 User's Guide from Release 1.4.5:
-
- Files
- Datasets
- Datatypes
- Dataspaces
- Groups
-
- References
- Attributes
- Property Lists
- Error Handling
-
- Filters
- Caching
- Chunking
- Mounting Files
-
- Performance
- Debugging
- Environment
- DDL
-
-The Dataset Interface (H5D)
-
- 1. Introduction
-
-
-
-
- H5T
API) is
- used to manipulate both pieces of meta data but they're handled
- by the dataset API (the H5D
API) in different
- manners.
-
-
-
- 2. Storage Layout Properties
-
- H5Pcreate()
to get a copy of the default property
- list) by modifying properties with various
- H5Pset_property()
functions.
-
-
-
-
-
- herr_t H5Pset_layout (hid_t plist_id,
- H5D_layout_t layout)
-
-
-
- H5D_COMPACT
(Not yet implemented.)
-
- H5D_CONTIGUOUS
-
- H5D_CHUNKED
- H5Pset_chunk()
.
-
-
- H5D_CHUNKED
layout,
- which needs to know the dimensionality and chunk size.
-
-
-
- herr_t H5Pset_chunk (hid_t plist_id, int
- ndims, hsize_t dim[])
- H5D_CHUNKED
and the chunk size is set to
- dim. The number of elements in the dim array
- is the dimensionality, ndims. One need not call
- H5Pset_layout()
when using this function since
- the chunked layout is implied.
-
-
- Example: Chunked Storage
-
-
-
-
-
-hsize_t hsize[2] = {1000, 1000};
-plist = H5Pcreate (H5P_DATASET_CREATE);
-H5Pset_chunk (plist, 2, hsize);
-
3. Compression Properties
-
- H5Pset_chunk
)
- allows data compression as defined by the function
- H5Pset_deflate
.
-
-
-
-
- herr_t H5Pset_deflate (hid_t plist_id,
- int level)
- int H5Pget_deflate (hid_t plist_id)
- H5Pset_deflate()
sets the compression method to
- H5Z_DEFLATE
and sets the compression level to
- some integer between one and nine (inclusive). One results in
- the fastest compression while nine results in the best
- compression ratio. The default value is six if
- H5Pset_deflate()
isn't called. The
- H5Pget_deflate()
returns the compression level
- for the deflate method, or negative if the method is not the
- deflate method.
- 4. External Storage Properties
-
- H5D_CONTIGUOUS
storage
- format allows external storage. A set of segments (offsets and sizes) in
- one or more files is defined as an external file list, or EFL,
- and the contiguous logical addresses of the data storage are mapped onto
- these segments.
-
-
-
-
- herr_t H5Pset_external (hid_t plist, const
- char *name, off_t offset, hsize_t
- size)
- H5F_UNLIMITED
, in which case the external file may be
- of unlimited size and no more files can be added to the external files list.
-
-
- int H5Pget_external_count (hid_t plist)
-
- herr_t H5Pget_external (hid_t plist, unsigned
- idx, size_t name_size, char *name, off_t
- *offset, hsize_t *size)
- H5Pset_external()
- function. Given a dataset creation property list and a zero-based
- index into that list, the file name, byte offset, and segment size are
- returned through non-null arguments. At most name_size
- characters are copied into the name argument which is not
- null terminated if the file name is longer than the supplied name
- buffer (this is similar to strncpy()
).
-
-
- Example: Multiple Segments
-
-
-
-
-
-
-plist = H5Pcreate (H5P_DATASET_CREATE);
-H5Pset_external (plist, "velocity.data", 3000, 1000);
-H5Pset_external (plist, "velocity.data", 0, 2500);
-H5Pset_external (plist, "velocity.data", 4500, 1500);
-
-
- Example: Multi-Dimensional
-
-
-
-
-
-
-plist = H5Pcreate (H5P_DATASET_CREATE);
-H5Pset_external (plist, "scan1.data", 0, 24);
-H5Pset_external (plist, "scan2.data", 0, 24);
-H5Pset_external (plist, "scan3.data", 0, 16);
-
5. Datatype
-
- H5T
API.
-
- 6. Data Space
-
- H5S
API. The simple dataspace consists of
- maximum dimension sizes and actual dimension sizes, which are
- usually the same. However, maximum dimension sizes can be the
- constant H5S_UNLIMITED in which case the actual
in which case the actual
- dimension size can be incremented with calls to
- H5Dextend()
. The maximum dimension sizes are
- constant meta data while the actual dimension sizes are
- persistent meta data. Initial actual dimension sizes are
- supplied at the same time as the maximum dimension sizes when
- the dataset is created.
-
- 7. Setting Constant or Persistent Properties
-
- H5Dcreate()
.
-
-
-
-
-
-
- hid_t H5Dcreate (hid_t file_id, const char
- *name, hid_t type_id, hid_t
- space_id, hid_t create_plist_id)
- H5Dcreate
with
- a file identifier, a dataset name, a datatype, a dataspace,
- and constant properties. The datatype and dataspace are the
- type and space of the dataset as it will exist in the file,
- which may be different than in application memory.
- Dataset names within a group must be unique:
- H5Dcreate
returns an error if a dataset with the
- name specified in name
already exists
- at the location specified in file_id
.
- The create_plist_id is a H5P_DATASET_CREATE
- property list created with H5Pcreate()
and
- initialized with the various functions described above.
- H5Dcreate()
returns a dataset handle for success
- or negative for failure. The handle should eventually be
- closed by calling H5Dclose()
to release resources
- it uses.
-
-
- hid_t H5Dopen (hid_t file_id, const char
- *name)
- H5Dclose()
to
- release resources it uses.
-
-
- herr_t H5Dclose (hid_t dataset_id)
-
- herr_t H5Dextend (hid_t dataset_id,
- hsize_t dim[])
- 8. Querying Constant or Persistent Properties
-
-
-
-
-
-
- hid_t H5Dget_type (hid_t dataset_id)
- hid_t H5Dget_space (hid_t dataset_id)
- H5Dextend()
.
-
- hid_t H5Dget_create_plist (hid_t
- dataset_id)
- 9. Setting Memory and Transfer Properties
-
- H5Dread()
and H5Dwrite()
functions
- (these functions are described below).
-
-
-
-
- herr_t H5Pset_buffer (hid_t xfer_plist,
- hsize_t max_buf_size, void *tconv_buf, void
- *bkg_buf)
- hsize_t H5Pget_buffer (hid_t xfer_plist, void
- **tconv_buf, void **bkg_buf)
- H5Pget_buffer()
function returns the maximum
- buffer size or zero on error.
- H5Pset_buffer()
to set the size of the
- temporary buffer so it's large enough to hold the entire
- request.
-
-
-
- Example
-
-
-
- H5Dread()
or H5Dwrite()
.
-
-
- 1 hid_t
- 2 disable_strip_mining (hid_t xfer_plist, hid_t dataset,
- 3 hid_t space, hid_t mem_type)
- 4 {
- 5 hid_t file_type; /* File datatype */
- 6 size_t type_size; /* Sizeof larger type */
- 7 size_t size; /* Temp buffer size */
- 8 /* xfer_plist argument doubles as the return value */
- 9
-10 file_type = H5Dget_type (dataset);
-11 type_size = MAX(H5Tget_size(file_type), H5Tget_size(mem_type));
-12 H5Tclose (file_type);
-13 size = H5Sget_npoints(space) * type_size;
-14 if (xfer_plist<0) xfer_plist = H5Pcreate (H5P_DATASET_XFER);
-15 H5Pset_buffer(xfer_plist, size, NULL, NULL);
-16 return xfer_plist;
-17 }
-
10. Querying Memory or Transfer Properties
-
- H5Pget_property()
to query transfer
- properties from a template).
-
-
- 11. Raw Data I/O
-
-
-
-
-
- herr_t H5Dread (hid_t dataset_id, hid_t
- mem_type_id, hid_t mem_space_id, hid_t
- file_space_id, hid_t xfer_plist_id,
- void *buf/*out*/)
-
- herr_t H5Dwrite (hid_t dataset_id, hid_t
- mem_type_id, hid_t mem_space_id, hid_t
- file_space_id, hid_t xfer_plist_id,
- const void *buf)
- H5Dget_type()
; the library will not implicitly
- derive memory datatypes from constant datatypes.
-
- H5S_ALL
as the argument for the file data space.
- If H5S_ALL
is also supplied as the memory data
- space then no data space conversion is performed. This is a
- somewhat dangerous situation since the file data space might be
- different than what the application expects.
-
-
-
- 12. Examples
-
- double
- values but is stored in the file in Cray float
- format using LZ77 compression. The dataset is written to the
- HDF5 file and then read back as a two-dimensional array of
- float
values.
-
-
-
- Example 1
-
-
-
-
-
-hid_t file, data_space, dataset, properties;
-double dd[500][600];
-float ff[500][600];
-hsize_t dims[2], chunk_size[2];
-
-/* Describe the size of the array */
-dims[0] = 500;
-dims[1] = 600;
-data_space = H5Screate_simple (2, dims, NULL);
-
-/*
- * Create a new file with default file creation and file
- * access properties, truncating any existing file of the
- * same name.
- */
-file = H5Fcreate ("test.h5", H5F_ACC_TRUNC, H5P_DEFAULT,
-                  H5P_DEFAULT);
-
-/*
- * Set the dataset creation plist to specify that
- * the raw data is to be partitioned into 100x100 element
- * chunks and that each chunk is to be compressed with
- * LZ77.
- */
-chunk_size[0] = chunk_size[1] = 100;
-properties = H5Pcreate (H5P_DATASET_CREATE);
-H5Pset_chunk (properties, 2, chunk_size);
-H5Pset_deflate (properties, 9);
-
-/*
- * Create a new dataset within the file. The datatype
- * and data space describe the data on disk, which may
- * be different from the format used in the application's
- * memory.
- */
-dataset = H5Dcreate (file, "dataset", H5T_CRAY_F64,
-                     data_space, properties);
-
-/*
- * Write the array to the file. The datatype and data
- * space describe the format of the data in the `dd'
- * buffer. The raw data is translated to the format
- * required on disk, defined above. We use default raw
- * data transfer properties.
- */
-H5Dwrite (dataset, H5T_NATIVE_DOUBLE, H5S_ALL, H5S_ALL,
-          H5P_DEFAULT, dd);
-
-/*
- * Read the array as floats. This is similar to writing
- * data except the data flows in the opposite direction.
- */
-H5Dread (dataset, H5T_NATIVE_FLOAT, H5S_ALL, H5S_ALL,
-         H5P_DEFAULT, ff);
-
-H5Dclose (dataset);
-H5Sclose (data_space);
-H5Pclose (properties);
-H5Fclose (file);
-
-
- Example 2
-
-
-
-
-
-hid_t file, mem_space, file_space, dataset;
-double dd[200][400];
-hsize_t offset[2];
-hsize_t size[2];
-
-/*
- * Open an existing file and its dataset.
- */
-file = H5Fopen ("test.h5", H5F_ACC_RDONLY, H5P_DEFAULT);
-dataset = H5Dopen (file, "dataset");
-
-/*
- * Describe the file data space.
- */
-offset[0] = 200;   /*offset of hyperslab in file*/
-offset[1] = 200;
-size[0] = 100;     /*size of hyperslab*/
-size[1] = 200;
-file_space = H5Dget_space (dataset);
-H5Sselect_hyperslab (file_space, H5S_SELECT_SET, offset, NULL, size, NULL);
-
-/*
- * Describe the memory data space.
- */
-size[0] = 200;     /*size of memory array*/
-size[1] = 400;
-mem_space = H5Screate_simple (2, size, NULL);
-
-offset[0] = 0;     /*offset of hyperslab in memory*/
-offset[1] = 0;
-size[0] = 100;     /*size of hyperslab*/
-size[1] = 200;
-H5Sselect_hyperslab (mem_space, H5S_SELECT_SET, offset, NULL, size, NULL);
-
-/*
- * Read the dataset.
- */
-H5Dread (dataset, H5T_NATIVE_DOUBLE, mem_space,
-         file_space, H5P_DEFAULT, dd);
-
-/*
- * Close/release resources.
- */
-H5Dclose (dataset);
-H5Sclose (mem_space);
-H5Sclose (file_space);
-H5Fclose (file);
-
-
- Example 3
-
-
-
-
-
-hid_t file, dataset, type;
-double delta[200];
-
-/*
- * Open an existing file and its dataset.
- */
-file = H5Fopen ("test.h5", H5F_ACC_RDONLY, H5P_DEFAULT);
-dataset = H5Dopen (file, "dataset");
-
-/*
- * Describe the memory datatype, a struct with a single
- * "delta" member.
- */
-type = H5Tcreate (H5T_COMPOUND, sizeof(double));
-H5Tinsert (type, "delta", 0, H5T_NATIVE_DOUBLE);
-
-/*
- * Read the dataset.
- */
-H5Dread (dataset, type, H5S_ALL, H5S_ALL,
-         H5P_DEFAULT, delta);
-
-/*
- * Close/release resources.
- */
-H5Dclose (dataset);
-H5Tclose (type);
-H5Fclose (file);
-
-
-
-
-
- HDF5 documents and links
-
- Introduction to HDF5
- HDF5 Reference Manual
- HDF5 User's Guide for Release 1.6
-
-
- And in this document, the
- HDF5 User's Guide from Release 1.4.5:
-
- Files
- Datasets
- Datatypes
- Dataspaces
- Groups
-
- References
- Attributes
- Property Lists
- Error Handling
-
- Filters
- Caching
- Chunking
- Mounting Files
-
- Performance
- Debugging
- Environment
- DDL
-
-
-HDF Help Desk
-
-Describes HDF5 Release 1.4.5, February 2003
-
-
- Last modified: 2 March 2001
-
-
-
-
-
diff --git a/doc/html/Dataspaces.html b/doc/html/Dataspaces.html
deleted file mode 100644
index c83d285..0000000
--- a/doc/html/Dataspaces.html
+++ /dev/null
@@ -1,742 +0,0 @@
-
-
-
-
-
-
-
-
-
-The Dataspace Interface (H5S)
-
-
-1. Introduction
-The dataspace interface (H5S) describes the positions of the elements
-of a dataset and is designed so that new features can be added without
-disrupting applications that already use it. A dataset (defined with
-the dataset interface) is a collection of raw data points of
-homogeneous type, described by the datatype (H5T) interface and
-organized according to its dataspace.
-
-
-
-
-        0 1 2 3 4 5 6 7 8 9
-    0   - - - - - - - - - -
-    1   - X X X - - - - - -
-    2   - X X X - - - - - -
-    3   - X X X - - - - - -
-    4   - X X X - - - - - -
-    5   - X X X - - - - - -
-    6   - - - - - - - - - -
-    7   - - - - - - - - - -
-    8   - - - - - - - - - -
-    9   - - - - - - - - - -
-
-Example 1: Contiguous rectangular selection
-
Or, a more complex selection may be defined:
-
-
-
-        0 1 2 3 4 5 6 7 8 9
-    0   - - - - - - - - - -
-    1   - X X X - - X - - -
-    2   - X - X - - - - - -
-    3   - X - X - - X - - -
-    4   - X - X - - - - - -
-    5   - X X X - - X - - -
-    6   - - - - - - - - - -
-    7   - - X X X X - - - -
-    8   - - - - - - - - - -
-    9   - - - - - - - - - -
-
-Example 2: Non-contiguous selection
-
-
-
-        0 1 2 3 4 5 6 7 8 9
-    0   - - - - - - - - - -
-    1   - - - - - - - - - -
-    2   - - X X X - - X - -
-    3   - - X - X - - - - -
-    4   - - X - X - - X - -
-    5   - - X - X - - - - -
-    6   - - X X X - - X - -
-    7   - - - - - - - - - -
-    8   - - - X X X X - - -
-    9   - - - - - - - - - -
-
-Example 3: Non-contiguous selection with 1,1 offset
- 2. General Dataspace Operations
-The functions defined in this section operate on dataspaces as a whole.
-New dataspaces can be created from scratch or copied from existing data
-spaces. When a dataspace is no longer needed its resources should be released
-by calling H5Sclose().
-
-
-
-
-
- 3. Dataspace Extent Operations
-These functions operate on the extent portion of a dataspace.
-
-
-
-
- 4. Dataspace Selection Operations
-Selections are maintained separately from extents, and operations on
-the selection of a dataspace do not affect its extent. Selections are
-independent of extent type; the boundaries of a selection are
-reconciled with the extent at the time of the data transfer. A
-selection offset positions a selection within an extent, allowing the
-same selection to be moved within the extent without specifying a new
-selection. Offsets default to 0 when the dataspace is created and are
-applied when an I/O transfer is performed (and checked during calls to
-H5Sselect_valid). Selections have an iteration order for the points
-selected, which can be any permutation of the dimensions involved
-(defaulting to 'C' array order) or, for selections composed of single
-array elements chosen with H5Sselect_elements, a specific order for
-the selected points.
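The effect of a selection offset and the default 'C' iteration order can be sketched in plain C. This is an illustrative model only, not the HDF5 API; the helper name `hyperslab_indices` is hypothetical:

```c
#include <stddef.h>

/* Illustrative model (not library code): enumerate the linear,
 * C-order element indices of a 2-D hyperslab selection after a
 * selection offset has been applied.  `dims` is the extent,
 * `start`/`count` describe the hyperslab, `off` is the selection
 * offset.  Returns the number of selected elements. */
static size_t hyperslab_indices(const size_t dims[2],
                                const size_t start[2],
                                const size_t count[2],
                                const long off[2],
                                size_t *out, size_t cap)
{
    size_t n = 0;
    for (size_t i = 0; i < count[0]; i++) {       /* 'C' order: rows vary slowest */
        for (size_t j = 0; j < count[1]; j++) {
            size_t row = (size_t)((long)(start[0] + i) + off[0]);
            size_t col = (size_t)((long)(start[1] + j) + off[1]);
            if (n < cap)
                out[n] = row * dims[1] + col;     /* linearize in row-major order */
            n++;
        }
    }
    return n;
}
```

Moving the same `start`/`count` pair by changing only `off` mirrors how a selection offset relocates a selection without redefining it.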
-
-
-
-Further methods of selecting
-portions of a dataspace may be added in the future.
-
-
-
-
-
-
-
-
- H5S_SELECT_SET
-
- Replaces the existing selection with the parameters from this call.
- Overlapping blocks are not supported with this operator.
-
-
- H5S_SELECT_OR
-
- Adds the new selection to the existing selection.
-
-
-
- H5S_SELECT_SET
-
- Replaces the existing selection with the parameters from this call.
- Overlapping blocks are not supported with this operator.
-
-
- H5S_SELECT_OR
-
- Adds the new selection to the existing selection.
- 5. Convenience Dataspace Operation
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-HDF Help Desk
-
-Describes HDF5 Release 1.4.5, February 2003
-
-
-Last modified: 7 May 2002
-
-
-
-
-
diff --git a/doc/html/Datatypes.html b/doc/html/Datatypes.html
deleted file mode 100644
index 232d7fb..0000000
--- a/doc/html/Datatypes.html
+++ /dev/null
@@ -1,3114 +0,0 @@
-
-
-
-
-
-
-
-
-
-The Datatype Interface (H5T)
-
- 1. Introduction
-
- 2. General Datatype Operations
-
- H5Tclose()
.
-
- H5T_NATIVE_INT
are immutable
- transient types).
-
-
-
-
- hid_t H5Tcreate (H5T_class_t class, size_t
- size)
- H5T_COMPOUND
to create a new empty compound
- datatype where size is the total size in bytes of an
- instance of this datatype. Other datatypes are created with
- H5Tcopy()
. All functions that return datatype
- identifiers return a negative value for failure.
-
-
- hid_t H5Topen (hid_t location, const char
- *name)
- H5Tclose()
to
- release resources. The named datatype returned by this
- function is read-only or a negative value is returned for
- failure. The location is either a file or group
- identifier.
-
-
- herr_t H5Tcommit (hid_t location, const char
- *name, hid_t type)
-
- htri_t H5Tcommitted (hid_t type)
- H5Dget_type()
are able to share
- the datatype with other datasets in the same file.
-
-
- hid_t H5Tcopy (hid_t type)
-
- herr_t H5Tclose (hid_t type)
-
- htri_t H5Tequal (hid_t type1, hid_t
- type2)
- TRUE
, otherwise it returns FALSE
(an
- error results in a negative return value).
-
-
- herr_t H5Tlock (hid_t type)
- H5close()
or by normal program termination).
- 3. Properties of Atomic Types
-
-
-
-
- H5T_class_t H5Tget_class (hid_t type)
- H5T_INTEGER, H5T_FLOAT, H5T_TIME, H5T_STRING, or
- H5T_BITFIELD
. This property is read-only and is set
- when the datatype is created or copied (see
- H5Tcreate()
, H5Tcopy()
). If this
- function fails it returns H5T_NO_CLASS
which has
- a negative value (all other class constants are non-negative).
-
-
- size_t H5Tget_size (hid_t type)
- herr_t H5Tset_size (hid_t type, size_t
- size)
- offset
property is
- decremented a bit at a time. If the offset reaches zero and
- the significant part of the data still extends beyond the edge
- of the datatype then the precision
property is
- decremented a bit at a time. Decreasing the size of a
- datatype may fail if the H5T_FLOAT
bit fields would
- extend beyond the significant part of the type. Adjusting the
- size of an H5T_STRING
automatically adjusts the
- precision as well. On error, H5Tget_size()
- returns zero which is never a valid size.
-
-
- H5T_order_t H5Tget_order (hid_t type)
- herr_t H5Tset_order (hid_t type, H5T_order_t
- order)
- H5T_ORDER_LE
. If the bytes are in the opposite
- order then they are said to be big-endian or
- H5T_ORDER_BE
. Some datatypes have the same byte
- order on all machines and are H5T_ORDER_NONE
- (like character strings). If H5Tget_order()
- fails then it returns H5T_ORDER_ERROR
which is a
- negative value (all successful return values are
- non-negative).
-
-
- size_t H5Tget_precision (hid_t type)
- herr_t H5Tset_precision (hid_t type, size_t
- precision)
- short
on a Cray
- is 32 significant bits in an eight-byte field. The
- precision
property identifies the number of
- significant bits of a datatype and the offset
- property (defined below) identifies its location. The
- size
property defined above represents the entire
- size (in bytes) of the datatype. If the precision is
- decreased then padding bits are inserted on the MSB side of
- the significant bits (this will fail for
- H5T_FLOAT
types if it results in the sign,
- mantissa, or exponent bit field extending beyond the edge of
- the significant bit field). On the other hand, if the
- precision is increased so that it "hangs over" the edge of the
- total size then the offset
property is
- decremented a bit at a time. If the offset
- reaches zero and the significant bits still hang over the
- edge, then the total size is increased a byte at a time. The
- precision of an H5T_STRING
is read-only and is
- always eight times the value returned by
- H5Tget_size()
. H5Tget_precision()
- returns zero on failure since zero is never a valid precision.
-
-
- size_t H5Tget_offset (hid_t type)
- herr_t H5Tset_offset (hid_t type, size_t
- offset)
- precision
property defines the number
- of significant bits, the offset
property defines
- the location of those bits within the entire datum. The bits
- of the entire data are numbered beginning at zero at the least
- significant bit of the least significant byte (the byte at the
- lowest memory address for a little-endian type or the byte at
- the highest address for a big-endian type). The
- offset
property defines the bit location of the
- least significant bit of a bit field whose length is
- precision
. If the offset is increased so the
- significant bits "hang over" the edge of the datum, then the
- size
property is automatically incremented. The
- offset is a read-only property of an H5T_STRING
- and is always zero. H5Tget_offset()
returns zero
- on failure which is also a valid offset, but is guaranteed to
- succeed if a call to H5Tget_precision()
succeeds
- with the same arguments.
-
-
- herr_t H5Tget_pad (hid_t type, H5T_pad_t
- *lsb, H5T_pad_t *msb)
- herr_t H5Tset_pad (hid_t type, H5T_pad_t
- lsb, H5T_pad_t msb)
- precision
and offset
properties
- are called padding. Padding falls into two
- categories: padding in the low-numbered bits is lsb
- padding and padding in the high-numbered bits is msb
- padding (bits are numbered according to the description for
- the offset
property). Padding bits can always be
- set to zero (H5T_PAD_ZERO
) or always set to one
- (H5T_PAD_ONE
). The current pad types are returned
- through arguments of H5Tget_pad()
either of which
- may be null pointers.
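The interaction of the offset, precision, and padding properties can be sketched with plain C bit operations. This uses the same bit numbering as above but is an illustrative model, not library code; `get_field` and `pad_one` are hypothetical names:

```c
#include <stdint.h>

/* Extract the `precision` significant bits starting at bit `offset`
 * (bit 0 = least significant bit, as in the offset property). */
static uint64_t get_field(uint64_t datum, unsigned offset, unsigned precision)
{
    uint64_t mask = (precision >= 64) ? ~0ULL : ((1ULL << precision) - 1);
    return (datum >> offset) & mask;
}

/* Force every padding bit (everything outside the significant field)
 * to one, in the spirit of H5T_PAD_ONE. */
static uint64_t pad_one(uint64_t datum, unsigned offset, unsigned precision)
{
    uint64_t mask = ((precision >= 64) ? ~0ULL
                                       : ((1ULL << precision) - 1)) << offset;
    return (datum & mask) | ~mask;
}
```

Bits below `offset` are the lsb padding and bits above `offset + precision - 1` are the msb padding in this numbering.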
- 3.1. Properties of Integer Atomic Types
-
- class=H5T_INTEGER
)
- describe integer number formats. Such types include the
- following information which describes the type completely and
- allows conversion between various integer atomic types.
-
-
-
-
- H5T_sign_t H5Tget_sign (hid_t type)
- herr_t H5Tset_sign (hid_t type, H5T_sign_t
- sign)
- H5T_SGN_2
) or unsigned
- (H5T_SGN_NONE
). Whether data is signed or not
- becomes important when converting between two integer
- datatypes of differing sizes as it determines how values are
- truncated and sign extended.
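The role of the sign property in size-changing conversions parallels ordinary C cast semantics, sketched below for illustration (the helper names are hypothetical):

```c
#include <stdint.h>

/* Narrowing keeps the low-order bits of the value; widening a signed
 * (H5T_SGN_2-style) value sign-extends, while an unsigned
 * (H5T_SGN_NONE-style) destination just reinterprets the kept bits. */
static int8_t  narrow_signed(int32_t v)   { return (int8_t)v;  }
static uint8_t narrow_unsigned(int32_t v) { return (uint8_t)v; }
static int32_t widen_signed(int8_t v)     { return v; /* sign-extends */ }
```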
- 3.2. Properties of Floating-point Atomic Types
-
- class=H5T_FLOAT
) as long as the bits of the
- exponent are contiguous and stored as a biased positive number,
- the bits of the mantissa are contiguous and stored as a positive
- magnitude, and a sign bit exists which is set for negative
- values. Properties specific to floating-point types are:
-
-
-
-
- herr_t H5Tget_fields (hid_t type, size_t
- *spos, size_t *epos, size_t
- *esize, size_t *mpos, size_t
- *msize)
- herr_t H5Tset_fields (hid_t type, size_t
- spos, size_t epos, size_t esize,
- size_t mpos, size_t msize)
- precision
and offset
- properties). The sign bit is always of length one and none of
- the fields are allowed to overlap. When expanding a
- floating-point type one should set the precision first; when
- decreasing the size one should set the field positions and
- sizes first.
-
-
- size_t H5Tget_ebias (hid_t type)
- herr_t H5Tset_ebias (hid_t type, size_t
- ebias)
- ebias
larger than the true exponent.
- H5Tget_ebias()
returns zero on failure which is
- also a valid exponent bias, but the function is guaranteed to
- succeed if H5Tget_precision()
succeeds when
- called with the same arguments.
-
-
- H5T_norm_t H5Tget_norm (hid_t type)
- herr_t H5Tset_norm (hid_t type, H5T_norm_t
- norm)
-
-
-
- H5T_NORM_MSBSET
then the
- mantissa is shifted left (if non-zero) until the first bit
- after the radix point is set and the exponent is adjusted
- accordingly. All bits of the mantissa after the radix
- point are stored.
-
- H5T_NORM_IMPLIED
then the
- mantissa is shifted left (if non-zero) until the first bit
- after the radix point is set and the exponent is adjusted
- accordingly. The first bit after the radix point is not stored
- since it's always set.
-
- H5T_NORM_NONE
then the fractional
- part of the mantissa is stored without normalizing it.
-
- H5T_pad_t H5Tget_inpad (hid_t type)
- herr_t H5Tset_inpad (hid_t type, H5T_pad_t
- inpad)
- H5T_PAD_ZERO
if the internal
- padding should always be set to zero, or H5T_PAD_ONE
- if it should always be set to one.
- H5Tget_inpad()
returns H5T_PAD_ERROR
- on failure which is a negative value (successful return is
- always non-negative).
- 3.3. Properties of Date and Time Atomic Types
-
- class=H5T_TIME
) are stored as
- character strings in one of the ISO-8601 formats like
- "1997-12-05 16:25:30"; as character strings using the
- Unix asctime(3) format like "Thu Dec 05 16:25:30 1997";
- as an integer value by juxtaposition of the year, month, and
- day-of-month, hour, minute and second in decimal like
- 19971205162530; as an integer value in Unix time(2)
- format; or other variations.
-
- 3.4. Properties of Character String Atomic Types
-
- offset
property of a string is
- always zero and the precision
property is eight
- times as large as the value returned by
- H5Tget_size()
(since precision is measured in bits
- while size is measured in bytes). Both properties are
- read-only.
-
-
-
-
- H5T_cset_t H5Tget_cset (hid_t type)
- herr_t H5Tset_cset (hid_t type, H5T_cset_t
- cset)
- H5T_CSET_ASCII
.
-
-
- H5T_str_t H5Tget_strpad (hid_t type)
- herr_t H5Tset_strpad (hid_t type, H5T_str_t
- strpad)
-
-
-
- H5T_STR_NULLTERM
-
- H5T_STR_NULLPAD
- H5T_STR_NULLPAD
- string will truncate but not null terminate. Conversion
- from a short value to a longer value will append null
- characters as with H5T_STR_NULLTERM
.
-
-
- H5T_STR_SPACEPAD
- H5T_STR_NULLPAD
except the padding character
- is a space instead of a null.
- H5Tget_strpad()
returns
- H5T_STR_ERROR
on failure, a negative value (all
- successful return values are non-negative).
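The difference between the padding conventions amounts to how the unused tail of a fixed-size buffer is filled. A minimal sketch in plain C (not library code; `null_to_space` is a hypothetical name) of what an `H5T_STR_NULLPAD` to `H5T_STR_SPACEPAD` conversion does:

```c
#include <string.h>

/* Rewrite the padding of a fixed-size string buffer: find the length
 * of the stored value, then overwrite the remaining bytes with
 * spaces instead of nulls. */
static void null_to_space(char *buf, size_t n)
{
    size_t len = 0;
    while (len < n && buf[len] != '\0')
        len++;                        /* length of the stored string */
    memset(buf + len, ' ', n - len);  /* space-pad the tail */
}
```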
- 3.5. Properties of Bit Field Atomic Types
-
- class=H5T_BITFIELD
) from
- one type to another simply copies the significant bits. If the
- destination is smaller than the source then bits are truncated.
- Otherwise new bits are filled according to the msb
- padding type.
-
- 3.6. Character and String Datatype Issues
-
- The H5T_NATIVE_CHAR
and H5T_NATIVE_UCHAR
- datatypes are actually numeric data (1-byte integers). If the
- application wishes to store character data, then an HDF5
- string datatype should be derived from
- H5T_C_S1
instead.
-
- Motivation
-
- HDF5 defines at least three classes of datatypes:
- integer data, floating point data, and character data.
- However, the C language defines only integer and
- floating point datatypes; character data in C is
- overloaded on the 8- or 16-bit integer types and
- character strings are overloaded on arrays of those
- integer types which, by convention, are terminated with
- a zero element.
-
- In C, the variable unsigned char s[256]
is
- either an array of numeric data, a single character string
- with at most 255 characters, or an array of 256 characters,
- depending entirely on usage. For uniformity with the
- other H5T_NATIVE_
types, HDF5 uses the
- numeric interpretation of H5T_NATIVE_CHAR
- and H5T_NATIVE_UCHAR
.
-
-
- Usage
-
- To store unsigned char s[256]
data as an
- array of integer values, use the HDF5 datatype
- H5T_NATIVE_UCHAR
and a data space that
- describes the 256-element array. Some other application
- that reads the data will then be able to read, say, a
- 256-element array of 2-byte integers and HDF5 will
- perform the numeric translation.
-
- To store unsigned char s[256]
data as a
- character string, derive a fixed length string datatype
- from H5T_C_S1
by increasing its size to
- 256 characters. Some other application that reads the
- data will be able to read, say, a space padded string
- of 16-bit characters and HDF5 will perform the character
- and padding translations.
-
-
- hid_t s256 = H5Tcopy(H5T_C_S1);
- H5Tset_size(s256, 256);
-
-
- To store unsigned char s[256]
data as
- an array of 256 ASCII characters, use an
- HDF5 data space to describe the array and derive a
- one-character string type from H5T_C_S1
.
- Some other application will be able to read a subset
- of the array as 16-bit characters and HDF5 will
- perform the character translations.
- The H5T_STR_NULLPAD
is necessary because
- if H5T_STR_NULLTERM
were used
- (the default) then the single character of storage
- would be for the null terminator and no useful data
- would actually be stored (unless the length were
- incremented to more than one character).
-
-
- hid_t s1 = H5Tcopy(H5T_C_S1);
- H5Tset_strpad(s1, H5T_STR_NULLPAD);
-
-
- Summary
-
- The C language uses the term char
to
- represent one-byte numeric data and does not make
- character strings a first-class datatype.
- HDF5 makes a distinction between integer and
- character data and maps the C signed char
- (H5T_NATIVE_CHAR
) and
- unsigned char
(H5T_NATIVE_UCHAR
)
- datatypes to the HDF5 integer type class.
-
- 4. Properties of Opaque Types
-
- class=H5T_OPAQUE
) provide the
- application with a mechanism for describing data which cannot be
- otherwise described by HDF5. The only properties associated with
- opaque types are a size in bytes and an ASCII tag which is
- manipulated with H5Tset_tag()
and
- H5Tget_tag()
functions. The library contains no
- predefined conversion functions but the application is free to
- register conversions between any two opaque types or between an
- opaque type and some other type.
-
- 5. Properties of Compound Types
-
- struct
in C
- or a common block in Fortran: it is a collection of one or more
- atomic types or small arrays of such types. Each
- member of a compound type has a name which is unique
- within that type, and a byte offset that determines the first
- byte (smallest byte address) of that member in a compound datum.
- A compound datatype has the following properties:
-
-
-
-
- H5T_class_t H5Tget_class (hid_t type)
- H5T_COMPOUND
. This property is read-only and is
- defined when a datatype is created or copied (see
- H5Tcreate()
or H5Tcopy()
).
-
-
- size_t H5Tget_size (hid_t type)
-
- int H5Tget_nmembers (hid_t type)
- H5Tget_nmembers()
returns -1 on failure.
-
-
- char *H5Tget_member_name (hid_t type, unsigned
- membno)
- malloc()
or the null pointer on failure. The
- caller is responsible for freeing the memory returned by this
- function.
-
-
- size_t H5Tget_member_offset (hid_t type, unsigned
- membno)
- H5Tget_member_class()
- succeeds when called with the same type and
- membno arguments.
-
-
- hid_t H5Tget_member_type (hid_t type, unsigned
- membno)
- H5Tclose()
on that type.
- H5Tinsert()
) and cannot be subsequently modified.
- This makes it impossible to define recursive data structures.
-
-
-
- 6. Predefined Atomic Datatypes
-
-
- H5T_arch_base
where
- arch is an architecture name and base is a
- programming type name. New types can be derived from the
- predefined types by copying the predefined type (see
- H5Tcopy()
) and then modifying the result.
-
-
-
-
-
-
- Architecture Name
- Description
-
-
-
-
- IEEE
This architecture defines standard floating point
- types in various byte orders.
-
-
-
-
- STD
This is an architecture that contains semi-standard
- datatypes like signed two's complement integers,
- unsigned integers, and bitfields in various byte
- orders.
-
-
-
-
- UNIX
Types which are specific to Unix operating systems are
- defined in this architecture. The only type currently
- defined is the Unix date and time types
- (
- time_t
).
-
-
-
- C
FORTRAN
- Types which are specific to the C or Fortran
- programming languages are defined in these
- architectures. For instance,
- H5T_C_S1
- defines a base string type with null termination which
- can be used to derive string types of other
- lengths.
-
-
-
- NATIVE
This architecture contains C-like datatypes for the
- machine on which the library was compiled. The types
- were actually defined by running the
-
- H5detect
program when the library was
- compiled. In order to be portable, applications should
- almost always use this architecture to describe things
- in memory.
-
-
-
- CRAY
Cray architectures. These are word-addressable,
- big-endian systems with non-IEEE floating point.
-
-
-
-
- INTEL
All Intel and compatible CPU's including 80286, 80386,
- 80486, Pentium, Pentium-Pro, and Pentium-II. These are
- little-endian systems with IEEE floating-point.
-
-
-
-
- MIPS
All MIPS CPU's commonly used in SGI systems. These
- are big-endian systems with IEEE floating-point.
-
-
-
- ALPHA
All DEC Alpha CPU's, little-endian systems with IEEE
- floating-point.
-
-
-
-
- B
- Bitfield
-
-
- D
- Date and time
-
-
- F
- Floating point
-
-
- I
- Signed integer
-
-
- R
- References
-
-
- S
- Character string
-
-
- U
- Unsigned integer
-
-
-
-
- BE
- Big endian
-
-
- LE
- Little endian
-
-
- VX
- Vax order
-
-
-
-
-
-
-
Example
-
Description
-
-
- H5T_IEEE_F64LE
Eight-byte, little-endian, IEEE floating-point
-
-
-
- H5T_IEEE_F32BE
Four-byte, big-endian, IEEE floating point
-
-
-
- H5T_STD_I32LE
Four-byte, little-endian, signed two's complement integer
-
-
-
- H5T_STD_U16BE
Two-byte, big-endian, unsigned integer
-
-
-
- H5T_UNIX_D32LE
Four-byte, little-endian, time_t
-
-
-
- H5T_C_S1
One-byte, null-terminated string of eight-bit characters
-
-
-
- H5T_INTEL_B64
Eight-byte bit field on an Intel CPU
-
-
-
- H5T_CRAY_F64
Eight-byte Cray floating point
-
-
-
- H5T_STD_ROBJ
Reference to an entire object in a file
- NATIVE
architecture has base names which don't
- follow the same rules as the others. Instead, native type names
- are similar to the C type names. Here are some examples:
-
-
-
-
-
-
-
Example
-
Corresponding C Type
-
-
- H5T_NATIVE_CHAR
- char
-
-
- H5T_NATIVE_SCHAR
- signed char
-
-
- H5T_NATIVE_UCHAR
- unsigned char
-
-
- H5T_NATIVE_SHORT
- short
-
-
- H5T_NATIVE_USHORT
- unsigned short
-
-
- H5T_NATIVE_INT
- int
-
-
- H5T_NATIVE_UINT
- unsigned
-
-
- H5T_NATIVE_LONG
- long
-
-
- H5T_NATIVE_ULONG
- unsigned long
-
-
- H5T_NATIVE_LLONG
- long long
-
-
- H5T_NATIVE_ULLONG
- unsigned long long
-
-
- H5T_NATIVE_FLOAT
- float
-
-
- H5T_NATIVE_DOUBLE
- double
-
-
- H5T_NATIVE_LDOUBLE
- long double
-
-
- H5T_NATIVE_HSIZE
- hsize_t
-
-
- H5T_NATIVE_HSSIZE
- hssize_t
-
-
- H5T_NATIVE_HERR
- herr_t
-
-
- H5T_NATIVE_HBOOL
- hbool_t
-
- Example: A 128-bit
- integer
-
-
-
-
-
-hid_t new_type = H5Tcopy (H5T_NATIVE_INT);
-H5Tset_precision (new_type, 128);
-H5Tset_order (new_type, H5T_ORDER_LE);
-
-
- Example: An 80-character
- string
-
-
-
-
-
-hid_t str80 = H5Tcopy (H5T_C_S1);
-H5Tset_size (str80, 80);
-
7. Defining Compound Datatypes
-
-
-
-
- HOFFSET(s,m)
- offsetof(s,m)
- stddef.h
does
- exactly the same thing as the HOFFSET()
macro.
-
-
- Example: A simple struct
-
-
-
- complex_t
struct.
-
-
-
-typedef struct {
- double re; /*real part*/
- double im; /*imaginary part*/
-} complex_t;
-
-hid_t complex_id = H5Tcreate (H5T_COMPOUND, sizeof(complex_t));
-H5Tinsert (complex_id, "real", HOFFSET(complex_t,re),
- H5T_NATIVE_DOUBLE);
-H5Tinsert (complex_id, "imaginary", HOFFSET(complex_t,im),
- H5T_NATIVE_DOUBLE);
-
HOFFSET
- macro. However, data stored on disk does not require alignment,
- so unaligned versions of compound data structures can be created
- to improve space efficiency on disk. These unaligned compound
- datatypes can be created by computing offsets by hand to
- eliminate inter-member padding, or the members can be packed by
- calling H5Tpack()
(which modifies a datatype
- directly, so it is usually preceded by a call to
- H5Tcopy()
):
-
-
-
- Example: A packed struct
-
-
-
-
-
-hid_t complex_disk_id = H5Tcopy (complex_id);
-H5Tpack (complex_disk_id);
-
-
- Example: A flattened struct
-
-
-
-
-
-typedef struct {
- complex_t x;
- complex_t y;
-} surf_t;
-
-hid_t surf_id = H5Tcreate (H5T_COMPOUND, sizeof(surf_t));
-H5Tinsert (surf_id, "x-re", HOFFSET(surf_t,x.re),
- H5T_NATIVE_DOUBLE);
-H5Tinsert (surf_id, "x-im", HOFFSET(surf_t,x.im),
- H5T_NATIVE_DOUBLE);
-H5Tinsert (surf_id, "y-re", HOFFSET(surf_t,y.re),
- H5T_NATIVE_DOUBLE);
-H5Tinsert (surf_id, "y-im", HOFFSET(surf_t,y.im),
- H5T_NATIVE_DOUBLE);
-
-
- Example: A nested struct
-
-
-
- complex_t
is used
- often it becomes inconvenient to list its members over
- and over again. So the alternative approach to
- flattening is to define a compound datatype and then
- use it as the type of the compound members, as is done
- here (the typedefs are defined in the previous
- examples).
-
-
-
-hid_t complex_id, surf_id; /*hdf5 datatypes*/
-
-complex_id = H5Tcreate (H5T_COMPOUND, sizeof(complex_t));
-H5Tinsert (complex_id, "re", HOFFSET(complex_t,re),
- H5T_NATIVE_DOUBLE);
-H5Tinsert (complex_id, "im", HOFFSET(complex_t,im),
- H5T_NATIVE_DOUBLE);
-
-surf_id = H5Tcreate (H5T_COMPOUND, sizeof(surf_t));
-H5Tinsert (surf_id, "x", HOFFSET(surf_t,x), complex_id);
-H5Tinsert (surf_id, "y", HOFFSET(surf_t,y), complex_id);
-
8. Enumeration Datatypes
-
- 8.1. Introduction
-
- 8.2. Creation
-
-
-
-
- hid_t H5Tcreate(H5T_class_t type_class,
- size_t size)
- H5T_ENUM
and the second argument is the
- size in bytes of the native integer on which the enumeration
- type is based. If the architecture does not support a native
- signed integer of the specified size then an error is
- returned.
-
-
-/* Based on a native signed short */
-hid_t hdf_en_colors = H5Tcreate(H5T_ENUM, sizeof(short));
-
-
- hid_t H5Tenum_create(hid_t base)
- H5Tcreate()
function. This
- function is useful when creating an enumeration type based on
- some non-native integer datatype, but it can be used for
- native types as well.
-
-
-/* Based on a native unsigned short */
-hid_t hdf_en_colors_1 = H5Tenum_create(H5T_NATIVE_USHORT);
-
-/* Based on a MIPS 16-bit unsigned integer */
-hid_t hdf_en_colors_2 = H5Tenum_create(H5T_MIPS_UINT16);
-
-/* Based on a big-endian 16-bit unsigned integer */
-hid_t hdf_en_colors_3 = H5Tenum_create(H5T_STD_U16BE);
-
-
- herr_t H5Tenum_insert(hid_t etype, const char
- *symbol, void *value)
-
-short val;
-H5Tenum_insert(hdf_en_colors, "RED", (val=0,&val));
-H5Tenum_insert(hdf_en_colors, "GREEN", (val=1,&val));
-H5Tenum_insert(hdf_en_colors, "BLUE", (val=2,&val));
-H5Tenum_insert(hdf_en_colors, "WHITE", (val=3,&val));
-H5Tenum_insert(hdf_en_colors, "BLACK", (val=4,&val));
-
-
- herr_t H5Tlock(hid_t etype)
-
-H5Tlock(hdf_en_colors);
-
- 8.3. Integer Operations
-
-
-
-
-
- H5Topen()
- H5Tcreate()
- H5Tcopy()
- H5Tclose()
-
- H5Tequal()
- H5Tlock()
- H5Tcommit()
- H5Tcommitted()
-
- H5Tget_class()
- H5Tget_size()
- H5Tget_order()
- H5Tget_pad()
-
- H5Tget_precision()
- H5Tget_offset()
- H5Tget_sign()
- H5Tset_size()
-
- H5Tset_order()
- H5Tset_precision()
- H5Tset_offset()
- H5Tset_pad()
-
-
- H5Tset_sign()
H5Tget_super()
will
- be defined for all datatypes that are derived from existing
- types (currently just enumeration types).
-
-
-
-
- hid_t H5Tget_super(hid_t type)
-
-hid_t itype = H5Tget_super(hdf_en_colors);
-hid_t hdf_fr_colors = H5Tenum_create(itype);
-H5Tclose(itype);
-
-short val;
-H5Tenum_insert(hdf_fr_colors, "ouge", (val=0,&val));
-H5Tenum_insert(hdf_fr_colors, "vert", (val=1,&val));
-H5Tenum_insert(hdf_fr_colors, "bleu", (val=2,&val));
-H5Tenum_insert(hdf_fr_colors, "blanc", (val=3,&val));
-H5Tenum_insert(hdf_fr_colors, "noir", (val=4,&val));
-H5Tlock(hdf_fr_colors);
- 8.4. Type Functions
-
-
-
-
- int H5Tget_nmembers(hid_t etype)
-
- char *H5Tget_member_name(hid_t etype, unsigned
- membno)
- H5Tget_nmembers()
. The members are stored in no
- particular order. This function is already implemented for
- compound datatypes. If an error occurs then the null pointer
- is returned. The return value should be freed by calling
- free()
.
-
-
- herr_t H5Tget_member_value(hid_t etype, unsigned
- membno, void *value/*out*/)
- H5Tget_member_name()
). The value returned
- is in the domain of the underlying integer
- datatype which is often a native integer type. The
- application should ensure that the memory pointed to by
- value is large enough to contain the result (the size
- can be obtained by calling H5Tget_size()
on
- either the enumeration type or the underlying integer type
- when the type is not known by the C compiler).
-
-
-int n = H5Tget_nmembers(hdf_en_colors);
-unsigned u;
-for (u=0; u<(unsigned)n; u++) {
- char *symbol = H5Tget_member_name(hdf_en_colors, u);
- short val;
- H5Tget_member_value(hdf_en_colors, u, &val);
- printf("#%u %20s = %d\n", u, symbol, val);
- free(symbol);
-}
-
-
-#0 BLACK = 4
-#1 BLUE = 2
-#2 GREEN = 1
-#3 RED = 0
-#4 WHITE = 3
- 8.5. Data Functions
-
-
-
-
- herr_t H5Tenum_valueof(hid_t etype, const char
- *symbol, void *value/*out*/)
-
- herr_t H5Tenum_nameof(hid_t etype, void
- *value, char *symbol, size_t
- size)
-
-short data[1000] = {4, 2, 0, 0, 5, 1, ...};
-int i;
-char symbol[32];
-
-for (i=0; i<1000; i++) {
- if (H5Tenum_nameof(hdf_en_colors, data+i, symbol,
-                    sizeof symbol)<0) {
- if (symbol[0]) {
- strcpy(symbol+sizeof(symbol)-4, "...");
- } else {
- strcpy(symbol, "UNKNOWN");
- }
- }
- printf("%d %s\n", data[i], symbol);
-}
-
-
-4 BLACK
-2 BLUE
-0 RED
-0 RED
-5 UNKNOWN
-1 GREEN
-...
- 8.6. Conversion
-
- For example, the value 2
which corresponds to
- BLUE
would be mapped to 0x0004
. The
- following code snippet builds the second datatype, then
- converts a raw data array from one datatype to another, and
- then prints the result.
-
-
-/* Create a new enumeration type */
-short val;
-hid_t bits = H5Tcreate(H5T_ENUM, sizeof val);
-H5Tenum_insert(bits, "RED", (val=0x0001,&val));
-H5Tenum_insert(bits, "GREEN", (val=0x0002,&val));
-H5Tenum_insert(bits, "BLUE", (val=0x0004,&val));
-H5Tenum_insert(bits, "WHITE", (val=0x0008,&val));
-H5Tenum_insert(bits, "BLACK", (val=0x0010,&val));
-
-/* The data */
-short data[6] = {1, 4, 2, 0, 3, 5};
-int i;
-
-/* Convert all six elements from one type to the other */
-H5Tconvert(hdf_en_colors, bits, 6, data, NULL, plist_id);
-
-/* Print the data */
-for (i=0; i<6; i++) {
- printf("0x%04x\n", (unsigned)(data[i]));
-}
-
-
-
-0x0002
-0x0010
-0x0004
-0x0001
-0x0008
-0xffff
-
- H5Tset_overflow()
). If no overflow handler is
- defined then all bits of the destination value will be set.
-
- 8.7. Symbol Order
-
- The relative order of two symbols can be determined by
- comparing the values obtained with H5Tenum_valueof()
.
-
-
-short val1, val2;
-H5Tenum_valueof(hdf_en_colors, "WHITE", &val1);
-H5Tenum_valueof(hdf_en_colors, "BLACK", &val2);
-if (val1 < val2) ...
-
- If foreign
is some non-native enumeration type then a
- native type can be created as follows:
-
-
-int n = H5Tget_nmembers(foreign);
-hid_t itype = H5Tget_super(foreign);
-void *val = malloc(n * MAX(H5Tget_size(itype), sizeof(int)));
-char **name = malloc(n * sizeof(char*));
-unsigned u;
-int i;
-
-/* Get foreign type information */
-for (u=0; u<(unsigned)n; u++) {
-    name[u] = H5Tget_member_name(foreign, u);
-    H5Tget_member_value(foreign, u,
-                        (char*)val+u*H5Tget_size(itype));
-}
-
-/* Convert integer values to the native type */
-H5Tconvert(itype, H5T_NATIVE_INT, n, val, NULL, plist_id);
-
-/* Build a native type */
-hid_t native = H5Tenum_create(H5T_NATIVE_INT);
-for (i=0; i<n; i++) {
-    H5Tenum_insert(native, name[i], (int*)val+i);
-    free(name[i]);
-}
-free(name);
-free(val);
-
- The following example creates a datatype called reverse
that
- defines the same five colors but in the reverse order.
-
-
-short val;
-int i;
-char sym[8];
-short data[5] = {0, 1, 2, 3, 4};
-
-hid_t reverse = H5Tenum_create(H5T_NATIVE_SHORT);
-H5Tenum_insert(reverse, "BLACK", (val=0,&val));
-H5Tenum_insert(reverse, "WHITE", (val=1,&val));
-H5Tenum_insert(reverse, "BLUE", (val=2,&val));
-H5Tenum_insert(reverse, "GREEN", (val=3,&val));
-H5Tenum_insert(reverse, "RED", (val=4,&val));
-
-/* Print data */
-for (i=0; i<5; i++) {
- H5Tenum_nameof(hdf_en_colors, data+i, sym, sizeof sym);
- printf ("%d %s\n", data[i], sym);
-}
-
-puts("Converting...");
-H5Tconvert(hdf_en_colors, reverse, 5, data, NULL, plist_id);
-
-/* Print data */
-for (i=0; i<5; i++) {
- H5Tenum_nameof(reverse, data+i, sym, sizeof sym);
- printf ("%d %s\n", data[i], sym);
-}
-
-
-0 RED
-1 GREEN
-2 BLUE
-3 WHITE
-4 BLACK
-Converting...
-4 RED
-3 GREEN
-2 BLUE
-1 WHITE
-0 BLACK
-
- 8.8. Equality
-
- Whether two enumeration datatypes are the same can be
- determined with the H5Tequal()
function.
-
- 8.9. Interacting with C's enum Type
-
- While HDF enumeration datatypes are similar to C
- enum
 datatypes, there are some important
- differences:
-
-
-
-
-
-
- Difference
- Motivation/Implications
-
-
-
- Symbols are unquoted in C but quoted in
- HDF.
- This allows the application to manipulate
- symbol names in ways that are not possible with C.
-
-
-
- The C compiler automatically replaces all
- symbols with their integer values but HDF requires
- explicit calls to do the same.
- C resolves symbols at compile time while
- HDF resolves symbols at run time.
-
-
-
- The mapping from symbols to integers is
- N:1 in C but 1:1 in HDF.
- HDF can translate from value to name
- uniquely and large
- switch
statements are
- not necessary to print values in human-readable
- format.
-
-
- A symbol must appear in only one C
-
- enum
type but may appear in multiple HDF
- enumeration types.The translation from symbol to value in HDF
- requires the datatype to be specified while in C the
- datatype is not necessary because it can be inferred
- from the symbol.
-
-
-
- The underlying integer value is always a
- native integer in C but can be a foreign integer type in
- HDF.
- This allows HDF to describe data that might
- reside on a foreign architecture, such as data stored in
- a file.
-
-
- The sign and size of the underlying integer
- datatype is chosen automatically by the C compiler but
- must be fully specified with HDF.
- Since HDF doesn't require finalization of a
- datatype, complete specification of the type must be
- supplied before the type is used. Requiring that
- information at the time of type creation was a design
- decision to simplify the library.
-
-
-
-
-
-
-
-
-
-/* English color names */
-typedef enum {
- RED,
- GREEN,
- BLUE,
- WHITE,
- BLACK
-} c_en_colors;
-
-/* Spanish color names, reverse order */
-typedef enum {
-    NEGRO,
-    BLANCO,
-    AZUL,
-    VERDE,
-    ROJO
-} c_sp_colors;
-
-/* No enum definition for French names */
-
Creating HDF Types from C Types
-
- An HDF enumeration datatype can be created from a C
- enum
type simply by passing pointers to the C
- enum
values to H5Tenum_insert()
. For
- instance, to create HDF types for the c_en_colors
- type shown above:
-
-
-
-
-
-
-
-
-
-
-
-c_en_colors val;
-hid_t hdf_en_colors = H5Tcreate(H5T_ENUM, sizeof(c_en_colors));
-H5Tenum_insert(hdf_en_colors, "RED", (val=RED, &val));
-H5Tenum_insert(hdf_en_colors, "GREEN", (val=GREEN,&val));
-H5Tenum_insert(hdf_en_colors, "BLUE", (val=BLUE, &val));
-H5Tenum_insert(hdf_en_colors, "WHITE", (val=WHITE,&val));
-H5Tenum_insert(hdf_en_colors, "BLACK", (val=BLACK,&val));
Name Changes between Applications
-
- Applications may use different symbol names in their C
- enum
definitions. The communication is still
- possible although the applications must agree on common terms
- for the colors. The following example shows the Spanish code to
- read the values assuming that the applications have agreed that
- the color information will be exchanged using English color
- names:
-
-
-
-
-
-
-
-
-
-
-
-
-c_sp_colors val, data[1000];
-hid_t hdf_sp_colors = H5Tcreate(H5T_ENUM, sizeof(c_sp_colors));
-H5Tenum_insert(hdf_sp_colors, "RED", (val=ROJO, &val));
-H5Tenum_insert(hdf_sp_colors, "GREEN", (val=VERDE, &val));
-H5Tenum_insert(hdf_sp_colors, "BLUE", (val=AZUL, &val));
-H5Tenum_insert(hdf_sp_colors, "WHITE", (val=BLANCO, &val));
-H5Tenum_insert(hdf_sp_colors, "BLACK", (val=NEGRO, &val));
-
-H5Dread(dataset, hdf_sp_colors, H5S_ALL, H5S_ALL, H5P_DEFAULT, data);
Symbol Ordering across Applications
-
- Because each application has its own C
- enum
definition,
- ordering of enum
symbols cannot be preserved across
- files like with HDF enumeration types. HDF can convert from one
- application's integer values to the other's so a symbol in one
- application's C enum
gets mapped to the same symbol
- in the other application's C enum
, but the relative
- order of the symbols is not preserved.
-
- Consider the c_en_colors
defined above where
- WHITE
is less than BLACK
, but some
- other application might define the colors in some other
- order. If each application defines an HDF enumeration type based
- on that application's C enum
type then HDF will
- modify the integer values as data is communicated from one
- application to the other so that a RED
value
- in the first application is also a RED
value in the
- other application.
-
- In the example above, 0 (RED
) in the
- input file became 4 (ROJO
) in the data
- array. In the input file, WHITE
was less than
- BLACK
; in the application the opposite is true.
-
- Internationalization
-
- An application using the c_en_colors
datatype could define
- a separate HDF datatype for languages such as English, Spanish,
- and French and cast the enumerated value to one of these HDF
- types to print the result.
-
-
-
-
-
-
-
-
-
-
-
-c_en_colors val, *data=...;
-
-hid_t hdf_sp_colors = H5Tcreate(H5T_ENUM, sizeof val);
-H5Tenum_insert(hdf_sp_colors, "ROJO", (val=RED, &val));
-H5Tenum_insert(hdf_sp_colors, "VERDE", (val=GREEN, &val));
-H5Tenum_insert(hdf_sp_colors, "AZUL", (val=BLUE, &val));
-H5Tenum_insert(hdf_sp_colors, "BLANCO", (val=WHITE, &val));
-H5Tenum_insert(hdf_sp_colors, "NEGRO", (val=BLACK, &val));
-
-hid_t hdf_fr_colors = H5Tcreate(H5T_ENUM, sizeof val);
-H5Tenum_insert(hdf_fr_colors, "ROUGE", (val=RED, &val));
-H5Tenum_insert(hdf_fr_colors, "VERT", (val=GREEN, &val));
-H5Tenum_insert(hdf_fr_colors, "BLEU", (val=BLUE, &val));
-H5Tenum_insert(hdf_fr_colors, "BLANC", (val=WHITE, &val));
-H5Tenum_insert(hdf_fr_colors, "NOIR", (val=BLACK, &val));
-
-void
-nameof(lang_t language, c_en_colors val, char *name, size_t size)
-{
- switch (language) {
- case ENGLISH:
- H5Tenum_nameof(hdf_en_colors, &val, name, size);
- break;
- case SPANISH:
- H5Tenum_nameof(hdf_sp_colors, &val, name, size);
- break;
- case FRENCH:
- H5Tenum_nameof(hdf_fr_colors, &val, name, size);
- break;
- }
-}
8.10. Goals That Have Been Met
-
-
-
-
-
-
-
-
-
-
-
-
-
- Architecture Independence
- Two applications shall be able to exchange
- enumerated data even when the underlying integer values
- have different storage formats. HDF accomplishes this for
- enumeration types by building them upon integer types.
-
-
-
- Preservation of Order Relationship
- The relative order of symbols shall be
- preserved between two applications that use equivalent
- enumeration datatypes. Unlike numeric values that have
- an implicit ordering, enumerated data has an explicit
- order defined by the enumeration datatype and HDF
- records this order in the file.
-
-
-
- Order Independence
- An application shall be able to change the
- relative ordering of the symbols in an enumeration
- datatype. This is accomplished by defining a new type with
- different integer values and converting data from one type
- to the other.
-
-
-
- Subsets
- An application shall be able to read
- enumerated data from an archived dataset even after the
- application has defined additional members for the
- enumeration type. An application shall be able to write
- to a dataset when the dataset contains a superset of the
- members defined by the application. Similar rules apply
- for in-core conversions between enumerated datatypes.
-
-
-
- Targetable
- An application shall be able to target a
- particular architecture or application when storing
- enumerated data. This is accomplished by allowing
- non-native underlying integer types and converting the
- native data to non-native data.
-
-
-
- Efficient Data Transfer
- An application that defines a file dataset
- that corresponds to some native C enumerated data array
- shall be able to read and write to that dataset directly
- using only POSIX read and write functions. HDF already
- optimizes this case for integers, so the same optimization
- will apply to enumerated data.
-
-
- Efficient Storage
- Enumerated data shall be stored in a manner
- which is space efficient. HDF stores the enumerated data
- as integers and allows the application to choose the size
- and format of those integers.
- 9. Variable-length Datatypes
-
-9.1. Overview And Justification
-
-Variable-length (VL) datatypes are sequences of an existing datatype
-(atomic, VL, or compound) whose length is not fixed from one dataset element
-to another. In essence, they are similar to C character strings -- a sequence of
-a type reached through a pointer -- although their implementation is closer
-to FORTRAN strings in that an explicit length is stored alongside the pointer
-instead of a special value terminating the sequence.
-
-
-
-
-
-
- Value1: Object1, Object3, Object9
- Value2: Object0, Object12, Object14, Object21, Object22
- Value3: Object2
- Value4: <none>
- Value5: Object1, Object10, Object12
- .
- .
-
-
- Feature1: Dataset1:Region, Dataset3:Region, Dataset9:Region
- Feature2: Dataset0:Region, Dataset12:Region, Dataset14:Region,
- Dataset21:Region, Dataset22:Region
- Feature3: Dataset2:Region
- Feature4: <none>
- Feature5: Dataset1:Region, Dataset10:Region, Dataset12:Region
- .
- .
-
-9.2. Variable-length Datatype Memory Management
-
-Because each element of a dataset with a VL datatype may have a different
-sequence length, the memory for the VL data must be dynamically
-allocated. Currently there are two methods of managing the memory for
-VL datatypes: the standard C malloc/free memory allocation routines or
-user-defined memory management routines that allocate and free memory.
-Since the memory allocated when reading (or writing) may be complicated to
-release, an HDF5 routine is provided to traverse a memory buffer and free the
-VL datatype information without leaking memory.
-
-
-Variable-length datatypes cannot be divided
-
-VL datatypes are designed so that they cannot be subdivided by the library
-with selections, etc. This design was chosen due to the complexities in
-specifying selections on each VL element of a dataset through a selection API
-that is easy to understand. Also, the selection APIs work on dataspaces, not
-on datatypes. At some point in time, we may want to create a way for
-dataspaces to have VL components to them and we would need to allow selections
-of those VL regions, but that is beyond the scope of this document.
-
-
-What happens if the library runs out of memory while reading?
-
-It is possible for a call to H5Dread
to fail while reading in
-VL datatype information if the memory required exceeds that which is available.
-In this case, the H5Dread
call will fail gracefully and any
-VL data which has been allocated prior to the memory shortage will be returned
-to the system via the memory management routines detailed below.
-It may be possible to design a partial read API function at a
-later date, if demand for such a function warrants.
-
-
-Strings as variable-length datatypes
-
-Since character strings are a special case of VL data that is implemented
-in many different ways on different machines and in different programming
-languages, they are handled somewhat differently from other VL datatypes in HDF5.
-
-VL data in memory is accessed through the hvl_t
-struct for VL datatypes.
-
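For reference, the hvl_t structure that describes one VL sequence in memory is declared in the HDF5 public headers essentially as follows (reproduced here for convenience):

```c
#include <stddef.h>

/* In-memory descriptor for one variable-length sequence. The
 * application fills in p and len when writing; the library fills
 * them in when reading. */
typedef struct {
    size_t len; /* Length of the VL data (in base-type elements) */
    void  *p;   /* Pointer to the VL data */
} hvl_t;
```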
-H5T_NATIVE_ASCII
, H5T_NATIVE_UNICODE
,
-etc., or by creating a string datatype and setting its length to
-H5T_VARIABLE
. The second method is used to access
-native VL strings in memory. The library will convert between the two types,
-but they are stored on disk using different datatypes and have different
-memory representations.
-
-9.3. Variable-length Datatype API
-
-Creation
-
-VL datatypes are created with the H5Tvlen_create()
function
-as follows:
-
-
-
-H5Tvlen_create
(hid_t base_type_id
);
-Query base datatype of VL datatype
-
-It may be necessary to know the base datatype of a VL datatype before
-memory is allocated, etc. The base datatype is queried with the
-H5Tget_super()
function, described in the H5T documentation.
-
-
-Query minimum memory required for VL information
-
-In order to predict the memory usage that H5Dread
may need
-to allocate to store VL data while reading the data, the
-H5Dget_vlen_buf_size()
function is provided:
-
-
- (This function is not implemented in Release 1.2.)
-
-H5Dget_vlen_buf_size
(hid_t dataset_id
,
- hid_t type_id
,
- hid_t space_id
,
- hsize_t *size
)
-The function takes the space_id
for the selection in the dataset
-on disk and the type_id
for the memory representation of the
-VL data in memory. The *size
value is modified according to
-how many bytes are required to store the VL data in memory.
-
-
-Specifying how to manage memory for the VL datatype
-
-The memory management method is determined by dataset transfer properties
-passed into the H5Dread
and H5Dwrite
functions
-with the dataset transfer property list.
-
-The simplest way is to pass H5P_DEFAULT
-for the dataset transfer property list identifier.
-If H5P_DEFAULT
is used with H5Dread
,
-the system malloc
and free
calls
-will be used for allocating and freeing memory.
-In such a case, H5P_DEFAULT
should also be passed
-as the property list identifier to H5Dvlen_reclaim
.
-
-The application may use either the system malloc
and free
calls or
-user-defined, or custom, memory management functions.
-If user-defined memory management functions are to be used,
-the memory allocation and free routines must be defined via
-H5Pset_vlen_mem_manager()
, as follows:
-
-
-
-
-H5Pset_vlen_mem_manager
(hid_t plist_id
,
- H5MM_allocate_t alloc
,
- void *alloc_info
,
- H5MM_free_t free
,
- void *free_info
)
-The alloc
and free
parameters
-identify the memory management routines to be used.
-If the user has defined custom memory management routines,
-alloc
and/or free
should be set to make
-those routine calls (i.e., the name of the routine is used as
-the value of the parameter);
-if the user prefers to use the system's malloc
-and/or free
, the alloc
and
-free
parameters, respectively, should be set to
- NULL.
-
-
-
-typedef void *(*H5MM_allocate_t)(size_t size, void *info);
-typedef void (*H5MM_free_t)(void *mem, void *free_info);
-The alloc_info
and free_info
parameters can be
-used to pass along any required information to the user's memory management
-routines.
-
-The custom routines are passed in the alloc
and free
parameters and the
-custom routines' parameters are passed in the
-alloc_info
and free_info
parameters.
-If the user wishes to use the system malloc
and
-free
functions, the alloc
and/or
-free
parameters are set to NULL
-and the alloc_info
and free_info
-parameters are ignored.
-
-Recovering memory from VL buffers read in
-
-The complex memory buffers created for a VL datatype may be reclaimed with
-the H5Dvlen_reclaim()
function call, as follows:
-
-
-
-H5Dvlen_reclaim
(hid_t type_id
,
- hid_t space_id
,
- hid_t plist_id
,
- void *buf
);
-type_id
must be the datatype stored in the buffer,
-space_id
describes the selection for the memory buffer
-to free the VL datatypes within,
-plist_id
is the dataset transfer property list which
-was used for the I/O transfer to create the buffer, and
-buf
is the pointer to the buffer to free the VL memory within.
-The VL structures (hvl_t
) in the user's buffer are
-modified to zero out the VL information after it has been freed.
-
-9.4. Code Examples
-
-The following example creates the following one-dimensional array
-of size 4 of variable-length datatype.
-
- 0 10 20 30
- 11 21 31
- 22 32
- 33
-
-Each element of the VL datatype is of H5T_NATIVE_UINT type.
-
-
-Example: Variable-length Datatypes
-
-
-
-
-#include <hdf5.h>
-
-#define FILE "vltypes.h5"
-#define MAX(X,Y) ((X)>(Y)?(X):(Y))
-
-/* 1-D dataset with fixed dimensions */
-#define SPACE_NAME "Space"
-#define SPACE_RANK 1
-#define SPACE_DIM 4
-
-void *vltypes_alloc_custom(size_t size, void *info);
-void vltypes_free_custom(void *mem, void *info);
-
-/****************************************************************
-**
-** vltypes_alloc_custom(): VL datatype custom memory
-** allocation routine. This routine just uses malloc to
-** allocate the memory and increments the amount of memory
-** allocated.
-**
-****************************************************************/
-void *vltypes_alloc_custom(size_t size, void *info)
-{
-
- void *ret_value=NULL; /* Pointer to return */
- int *mem_used=(int *)info; /* Get the pointer to the memory used */
- size_t extra; /* Extra space needed */
-
- /*
- * This weird contortion is required on the DEC Alpha to keep the
- * alignment correct.
- */
- extra=MAX(sizeof(void *),sizeof(int));
-
- if((ret_value=(void *)malloc(extra+size))!=NULL) {
-     *(int *)ret_value=(int)size;
-     *mem_used+=(int)size;
-     ret_value=((unsigned char *)ret_value)+extra;
- } /* end if */
- return(ret_value);
-}
-/******************************************************************
-** vltypes_free_custom(): VL datatype custom memory
-** allocation routine. This routine just uses free to
-** release the memory and decrements the amount of memory
-** allocated.
-** ****************************************************************/
-void vltypes_free_custom(void *_mem, void *info)
-
-{
- unsigned char *mem;
- int *mem_used=(int *)info; /* Get the pointer to the memory used */
- size_t extra; /* Extra space needed */
- /*
- * This weird contortion is required on the DEC Alpha to keep the
- * alignment correct.
- */
- extra=MAX(sizeof(void *),sizeof(int));
- if(_mem!=NULL) {
- mem=((unsigned char *)_mem)-extra;
- *mem_used-=*(int *)mem;
- free(mem);
- } /* end if */
-}
-
-int main(void)
-
-{
- hvl_t wdata[SPACE_DIM]; /* Information to write */
- hvl_t rdata[SPACE_DIM]; /* Information read in */
- hid_t fid; /* HDF5 File IDs */
- hid_t dataset; /* Dataset ID */
- hid_t sid; /* Dataspace ID */
- hid_t tid; /* Datatype ID */
- hid_t xfer_pid; /* Dataset transfer property list ID */
- hsize_t dims[] = {SPACE_DIM};
- unsigned i,j; /* counting variables */
- int mem_used=0; /* Memory used during allocation */
- herr_t ret; /* Generic return value */
-
- /*
- * Allocate and initialize VL data to write
- */
- for(i=0; i<SPACE_DIM; i++) {
-
- wdata[i].p= (unsigned int *)malloc((i+1)*sizeof(unsigned int));
- wdata[i].len=i+1;
- for(j=0; j<(i+1); j++)
- ((unsigned int *)wdata[i].p)[j]=i*10+j;
- } /* end for */
-
- /*
- * Create file.
- */
- fid = H5Fcreate(FILE, H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
-
- /*
- * Create dataspace for datasets.
- */
- sid = H5Screate_simple(SPACE_RANK, dims, NULL);
-
- /*
- * Create a datatype to refer to.
- */
- tid = H5Tvlen_create (H5T_NATIVE_UINT);
-
- /*
- * Create a dataset.
- */
- dataset=H5Dcreate(fid, "Dataset", tid, sid, H5P_DEFAULT);
-
- /*
- * Write dataset to disk.
- */
- ret=H5Dwrite(dataset, tid, H5S_ALL, H5S_ALL, H5P_DEFAULT, wdata);
-
- /*
- * Change to the custom memory allocation routines for reading
- * VL data
- */
- xfer_pid=H5Pcreate(H5P_DATASET_XFER);
-
- ret=H5Pset_vlen_mem_manager(xfer_pid, vltypes_alloc_custom,
- &mem_used, vltypes_free_custom,
- &mem_used);
-
- /*
- * Read dataset from disk. vltypes_alloc_custom and
- * vltypes_free_custom will be used to manage memory.
- */
- ret=H5Dread(dataset, tid, H5S_ALL, H5S_ALL, xfer_pid, rdata);
-
- /*
- * Display data read in
- */
- for(i=0; i<SPACE_DIM; i++) {
- printf("%d-th element length is %d \n", i,
- (unsigned) rdata[i].len);
- for(j=0; j<rdata[i].len; j++) {
- printf(" %d ",((unsigned int *)rdata[i].p)[j] );
- }
- printf("\n");
- } /* end for */
-
- /*
- * Reclaim the read VL data. vltypes_free_custom will be used
- * to reclaim the space.
- */
- ret=H5Dvlen_reclaim(tid, sid, xfer_pid, rdata);
-
- /*
- * Reclaim the write VL data. C language free function will be
- * used to reclaim space.
- */
- ret=H5Dvlen_reclaim(tid, sid, H5P_DEFAULT, wdata);
-
- /*
- * Close Dataset
- */
- ret = H5Dclose(dataset);
-
- /*
- * Close datatype
- */
- ret = H5Tclose(tid);
-
- /*
- * Close disk dataspace
- */
- ret = H5Sclose(sid);
-
- /*
- * Close dataset transfer property list
- */
- ret = H5Pclose(xfer_pid);
-
- /*
- * Close file
- */
- ret = H5Fclose(fid);
-
- return 0;
-}
-
-
-
-Example: Variable-length Datatypes, Sample Output
-
-
-
-
-0-th element length is 1
-0
-1-th element length is 2
-10 11
-2-th element length is 3
-20 21 22
-3-th element length is 4
-30 31 32 33
-
- A more complete example may be found in test/tvltypes.c
-in the HDF5 distribution.
-
-
-
-
-10. Array Datatypes
-
-The array class of datatypes, H5T_ARRAY
, allows the
-construction of true, homogeneous, multi-dimensional arrays.
-Since these are homogeneous arrays, each element of the array will be
-of the same datatype, designated at the time the array is created.
-
-The maximum number of dimensions (rank) of an array datatype is H5S_MAX_RANK
.
-The minimum rank is 1 (one).
-All dimension sizes must be greater than 0 (zero).
-
-10.1 Array Datatype APIs
-
-The functions for creating and manipulating array datatypes are
-as follows:
-
-
-
-
- H5Tarray_create
-
- Creates an array datatype.
- H5Tarray_create
(
- hid_t base
,
- int rank
,
- const hsize_t dims[/*rank*/]
,
- const int perm[/*rank*/]
- )
-
- H5Tget_array_ndims
-
- Retrieves the rank of the array datatype.
- H5Tget_array_ndims
(
- hid_t adtype_id
- )
-
- H5Tget_array_dims
-
- Retrieves the dimension sizes of the array datatype.
-
-H5Tget_array_dims
(
- hid_t adtype_id
,
- hsize_t *dims[]
,
- int *perm[]
- )
- 10.2 Transition Issues in Adapting Existing Software
-
-The array datatype class is new with Release 1.4;
-prior releases included an array element for compound datatypes.
-
-
-(Transition to HDF5 Release 1.4 Only)
-
-10.3 Code Example
-
-The following example creates an array datatype and a dataset
-containing elements of the array datatype in an HDF5 file.
-It then writes the dataset to the file.
-
-
-Example: Array Datatype
-
-
-
-
-#include <hdf5.h>
-
-#define FILE "SDS_array_type.h5"
-#define DATASETNAME "IntArray"
-#define ARRAY_DIM1 5 /* array dimensions and rank */
-#define ARRAY_DIM2 4
-#define ARRAY_RANK 2
-#define SPACE_DIM 10 /* dataset dimensions and rank */
-#define RANK 1
-
-int
-main (void)
-{
- hid_t file, dataset; /* file and dataset handles */
- hid_t datatype, dataspace; /* handles */
- hsize_t sdims[] = {SPACE_DIM}; /* dataset dimensions */
- hsize_t adims[] = {ARRAY_DIM1, ARRAY_DIM2}; /* array dimensions */
- hsize_t adims_out[2];
- herr_t status;
- int data[SPACE_DIM][ARRAY_DIM1][ARRAY_DIM2]; /* data to write */
- int k, i, j;
- int array_rank_out;
-
- /*
- * Data and output buffer initialization.
- */
- for (k = 0; k < SPACE_DIM; k++) {
- for (j = 0; j < ARRAY_DIM1; j++) {
- for (i = 0; i < ARRAY_DIM2; i++)
- data[k][j][i] = k;
- }
- }
- /*
- * Create a new file using H5F_ACC_TRUNC access,
- * default file creation properties, and default file
- * access properties.
- */
- file = H5Fcreate(FILE, H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
-
- /*
- * Describe the size of the array and create the data space for fixed
- * size dataset.
- */
- dataspace = H5Screate_simple(RANK, sdims, NULL);
-
- /*
- * Define array datatype for the data in the file.
- */
- datatype = H5Tarray_create(H5T_NATIVE_INT, ARRAY_RANK, adims, NULL);
-
- /*
- * Create a new dataset within the file using defined dataspace and
- * datatype and default dataset creation properties.
- */
- dataset = H5Dcreate(file, DATASETNAME, datatype, dataspace,
- H5P_DEFAULT);
-
- /*
- * Write the data to the dataset using default transfer properties.
- */
- status = H5Dwrite(dataset, datatype, H5S_ALL, H5S_ALL,
- H5P_DEFAULT, data);
-
-
- /*
- * Close/release resources.
- */
- H5Sclose(dataspace);
- H5Tclose(datatype);
- H5Dclose(dataset);
- /*
- * Reopen dataset, and return information about its datatype.
- */
- dataset = H5Dopen(file, DATASETNAME);
- datatype = H5Dget_type(dataset);
- array_rank_out = H5Tget_array_ndims(datatype);
- status = H5Tget_array_dims(datatype, adims_out, NULL);
- printf(" Array datatype rank is %d \n", array_rank_out);
- printf(" Array dimensions are %d x %d \n", (int)adims_out[0],
- (int)adims_out[1]);
-
- H5Tclose(datatype);
- H5Dclose(dataset);
- H5Fclose(file);
-
- return 0;
-}
-
- 11. Sharing Datatypes among Datasets
-
-
-
- Example: Shared Datatypes
-
-
-
-
-
-
-hid_t t1 = ...some transient type...;
-H5Tcommit (file, "shared_type", t1);
-hid_t dset1 = H5Dcreate (file, "dset1", t1, space, H5P_DEFAULT);
-hid_t dset2 = H5Dcreate (file, "dset2", t1, space, H5P_DEFAULT);
-
-
-hid_t dset1 = H5Dopen (file, "dset1");
-hid_t t2 = H5Dget_type (dset1);
-hid_t dset3 = H5Dcreate (file, "dset3", t2, space, H5P_DEFAULT);
-hid_t dset4 = H5Dcreate (file, "dset4", t2, space, H5P_DEFAULT);
-
12. Data Conversion
-
-
- A conversion function is of type H5T_conv_t
,
- which is defined as follows:
-
-typedef herr_t (*H5T_conv_t) (hid_t src_id,
- hid_t dst_id,
- H5T_cdata_t *cdata,
- hsize_t nelmts,
- size_t buf_stride,
- size_t bkg_stride,
- void *buffer,
- void *bkg_buffer,
- hid_t dset_xfer_plist);
H5T_BKG_YES
below),
- conversion and background buffer strides (buf_stride and
- bkg_stride) that indicate what data is to be converted, and
- a dataset transfer properties list (dset_xfer_plist).
-
- The cdata argument, of type H5T_cdata_t
,
- is declared as follows:
-
-typedef struct H5T_cdata_t {
-    H5T_cmd_t command;  /* what should the conversion function do? */
-    H5T_bkg_t need_bkg; /* is the background buffer needed?        */
-    hbool_t recalc;     /* private data needs to be recomputed     */
-    void *priv;         /* private data                            */
-} H5T_cdata_t;
command
field of the cdata argument
- determines what happens within the conversion function. Its
- values can be:
-
-
-
-
-
- H5T_CONV_INIT
- The function may initialize the priv
field of cdata (or private data can
- be initialized later). It should also initialize the
- need_bkg
field described below. The buf
- and background pointers will be null pointers.
-
-
- H5T_CONV_CONV
- The function should initialize the priv
field of cdata if it wasn't
- initialized during the H5T_CONV_INIT
command and
- then convert nelmts instances of the
- src_type to the dst_type. The
- buffer serves as both input and output. The
- background buffer is supplied according to the value
- of the need_bkg
field of cdata (the
- values are described below).
-
-
- H5T_CONV_FREE
- Any private data (in the cdata->priv
pointer) should be freed and
- set to null. All other pointer arguments are null, the
- src_type and dst_type are invalid
- (negative), and the nelmts argument is zero.
-
-
- The background buffer requirement is given by cdata->need_bkg,
- which the conversion function should have initialized during the
- H5T_CONV_INIT command. It can have one of these values:
-
-
-
-
- H5T_BKG_NONE
-
- H5T_BKG_TEMP
-
- H5T_BKG_YES
- The recalc
field of cdata is set when the
- conversion path table changes. It can be used by conversion
- functions that cache other conversion paths so they know when
- their caches need to be recomputed.
-
-
-
-
-
- herr_t H5Tregister(H5T_pers_t pers, const
- char *name, hid_t src_type, hid_t
- dest_type, H5T_conv_t func)
- This function registers a hard (H5T_PERS_HARD
) or soft
- (H5T_PERS_SOFT
) conversion depending on the value
- of pers, displacing any previous conversions for all
- applicable paths. The name is used only for
- debugging but must be supplied. If pers is
- H5T_PERS_SOFT
then only the type classes of the
- src_type and dst_type are used. For
- instance, to register a general soft conversion function that
- can be applied to any integer to integer conversion one could
- say: H5Tregister(H5T_PERS_SOFT, "i2i", H5T_NATIVE_INT,
- H5T_NATIVE_INT, convert_i2i)
. One special conversion
- path called the "no-op" conversion path is always defined by
- the library and used as the conversion function when no data
- transformation is necessary. The application can redefine this
- path by specifying a new hard conversion function with a
- negative value for both the source and destination datatypes,
- but the library might not call the function under certain
- circumstances.
-
-
- herr_t H5Tunregister (H5T_pers_t pers, const
- char *name, hid_t src_type, hid_t
- dest_type, H5T_conv_t func)
- This function removes conversion paths; the arguments are the same as for H5Tregister()
with the added feature that any
- (or all) may be wild cards. The
- H5T_PERS_DONTCARE
constant should be used to
- indicate a wild card for the pers argument. The wild
- card name is the null pointer or empty string, the
- wild card for the src_type and dest_type
- arguments is any negative value, and the wild card for the
- func argument is the null pointer. The special no-op
- conversion path is never removed by this function.
-
-
- Example: A conversion
- function
-
-
-
- This example shows a conversion function from Cray unsigned short
to any other
- 16-bit unsigned big-endian integer. A cray
- short
is a big-endian value which has 32
- bits of precision in the high-order bits of a 64-bit
- word.
-
-
-
-
-typedef struct {
-    size_t dst_size;
-    int direction;
-} cray_ushort2be_t;
-
-herr_t
-cray_ushort2be (hid_t src_id, hid_t dst_id,
-                H5T_cdata_t *cdata, hsize_t nelmts,
-                size_t buf_str, size_t bkg_str, void *buf,
-                const void *background, hid_t plist)
-{
-    unsigned char *src = (unsigned char *)buf;
-    unsigned char *dst = src;
-    cray_ushort2be_t *priv = NULL;
-    size_t dst_size, j;
-    int direction;
-    hsize_t i;
-
-    switch (cdata->command) {
-    case H5T_CONV_INIT:
-        /*
-         * We are being queried to see if we handle this
-         * conversion. We can handle conversion from
-         * Cray unsigned short to any other big-endian
-         * unsigned integer that doesn't have padding.
-         */
-        if (!H5Tequal (src_id, H5T_CRAY_USHORT) ||
-            H5T_ORDER_BE != H5Tget_order (dst_id) ||
-            H5T_SGN_NONE != H5Tget_sign (dst_id) ||
-            8*H5Tget_size (dst_id) != H5Tget_precision (dst_id)) {
-            return -1;
-        }
-
-        /*
-         * Initialize private data. If the destination size
-         * is larger than the source size, then we must
-         * process the elements from right to left.
-         */
-        cdata->priv = priv = malloc (sizeof(cray_ushort2be_t));
-        priv->dst_size = H5Tget_size (dst_id);
-        if (priv->dst_size > 8) {
-            priv->direction = -1;
-        } else {
-            priv->direction = 1;
-        }
-        break;
-
-    case H5T_CONV_FREE:
-        /*
-         * Free private data.
-         */
-        free (cdata->priv);
-        cdata->priv = NULL;
-        break;
-
-    case H5T_CONV_CONV:
-        /*
-         * Convert each element, watching out for overlap of
-         * src with dst on the left-most element of the buffer.
-         */
-        priv = (cray_ushort2be_t *)(cdata->priv);
-        dst_size = priv->dst_size;
-        direction = priv->direction;
-        if (direction < 0) {
-            src += (nelmts - 1) * 8;
-            dst += (nelmts - 1) * dst_size;
-        }
-        for (i = 0; i < nelmts; i++) {
-            if (src == dst && dst_size < 4) {
-                for (j = 0; j < dst_size; j++) {
-                    dst[j] = src[j+4-dst_size];
-                }
-            } else {
-                for (j = 0; j < 4 && j < dst_size; j++) {
-                    dst[dst_size-(j+1)] = src[3-j];
-                }
-                for (j = 4; j < dst_size; j++) {
-                    dst[dst_size-(j+1)] = 0;
-                }
-            }
-            src += 8 * direction;
-            dst += dst_size * direction;
-        }
-        break;
-
-    default:
-        /*
-         * Unknown command.
-         */
-        return -1;
-    }
-    return 0;
-}
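The byte-level core of that conversion can be exercised in isolation. The helper below is a self-contained sketch, assuming the layout described earlier (a 64-bit big-endian word whose 32 high-order bits hold the value); the function name is invented here and is not part of HDF5:

```c
#include <assert.h>
#include <stddef.h>

/* Convert one "Cray unsigned short" representation (a 64-bit
 * big-endian word whose 32 high-order bits hold the value) into a
 * big-endian unsigned integer of dst_size bytes (dst_size <= 8). */
static void cray_word_to_be(const unsigned char src[8],
                            unsigned char *dst, size_t dst_size)
{
    size_t j;
    /* Copy up to four value bytes, least significant first. */
    for (j = 0; j < 4 && j < dst_size; j++)
        dst[dst_size - 1 - j] = src[3 - j];
    /* Zero-fill any remaining high-order destination bytes. */
    for (; j < dst_size; j++)
        dst[dst_size - 1 - j] = 0;
}
```

For a source word holding 0x1234, a 2-byte destination receives the bytes 0x12, 0x34, matching the `dst[dst_size-(j+1)] = src[3-j]` copy in the callback above.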
-
-
- Example: Soft
- Registration
-
-
-
-
-
-
-H5Tregister(H5T_PERS_SOFT, "cus2be",
- H5T_NATIVE_INT, H5T_NATIVE_INT,
- cray_ushort2be);
-
H5Tregister_hard()
 ) for each and every possible
- conversion path whether that conversion path was actually used
- or not.
-
-
-
-
- HDF5 documents and links
-
- Introduction to HDF5
- HDF5 Reference Manual
- HDF5 User's Guide for Release 1.6
-
-
- And in this document, the
- HDF5 User's Guide from Release 1.4.5:
-
- Files
- Datasets
- Datatypes
- Dataspaces
- Groups
-
- References
- Attributes
- Property Lists
- Error Handling
-
- Filters
- Caching
- Chunking
- Mounting Files
-
- Performance
- Debugging
- Environment
- DDL
-
-
-HDF Help Desk
-
-Describes HDF5 Release 1.4.5, February 2003
-
-
-Last modified: 2 August 2001
-
-
-
-
-
diff --git a/doc/html/DatatypesEnum.html b/doc/html/DatatypesEnum.html
deleted file mode 100644
index 607030a..0000000
--- a/doc/html/DatatypesEnum.html
+++ /dev/null
@@ -1,926 +0,0 @@
-
-
-
-
-
-
-
-
-
-
-
-
- The Data Type Interface (H5T) (continued)
-
- 7. Enumeration Data Types
-
- 7.1. Introduction
-
-
7.2. Creation
-
-
-
-
- hid_t H5Tcreate(H5T_class_t type_class,
- size_t size)
- H5T_ENUM
and the second argument is the
- size in bytes of the native integer on which the enumeration
- type is based. If the architecture does not support a native
- signed integer of the specified size then an error is
- returned.
-
-
-/* Based on a native signed short */
-hid_t hdf_en_colors = H5Tcreate(H5T_ENUM, sizeof(short));
-
-
- hid_t H5Tenum_create(hid_t base)
- H5Tcreate()
function. This
- function is useful when creating an enumeration type based on
- some non-native integer data type, but it can be used for
- native types as well.
-
-
-/* Based on a native unsigned short */
-hid_t hdf_en_colors_1 = H5Tenum_create(H5T_NATIVE_USHORT);
-
-/* Based on a MIPS 16-bit unsigned integer */
-hid_t hdf_en_colors_2 = H5Tenum_create(H5T_MIPS_UINT16);
-
-/* Based on a big-endian 16-bit unsigned integer */
-hid_t hdf_en_colors_3 = H5Tenum_create(H5T_STD_U16BE);
-
-
- herr_t H5Tenum_insert(hid_t etype, const char
- *symbol, void *value)
-
-short val;
-H5Tenum_insert(hdf_en_colors, "RED", (val=0,&val));
-H5Tenum_insert(hdf_en_colors, "GREEN", (val=1,&val));
-H5Tenum_insert(hdf_en_colors, "BLUE", (val=2,&val));
-H5Tenum_insert(hdf_en_colors, "WHITE", (val=3,&val));
-H5Tenum_insert(hdf_en_colors, "BLACK", (val=4,&val));
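The `(val=0,&val)` expressions above rely on C's comma operator: the assignment runs first, then the expression yields `&val`, so each call passes the address of a variable that has just been set to the desired value. A minimal standalone illustration of the idiom (`take_by_address` is a stand-in invented here, not an HDF5 function):

```c
#include <assert.h>

/* Stand-in for an API that takes a value by address,
 * the way H5Tenum_insert() takes its value argument. */
static short take_by_address(const short *p)
{
    return *p;
}
```

Calling `take_by_address((val = 3, &val))` assigns 3 to `val` and then passes `&val`, so the callee sees 3.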
-
-
- herr_t H5Tlock(hid_t etype)
-
-H5Tlock(hdf_en_colors);
-
- 7.3. Integer Operations
-
-
-
-
-
- H5Topen()
- H5Tcreate()
- H5Tcopy()
- H5Tclose()
-
- H5Tequal()
- H5Tlock()
- H5Tcommit()
- H5Tcommitted()
-
- H5Tget_class()
- H5Tget_size()
- H5Tget_order()
- H5Tget_pad()
-
- H5Tget_precision()
- H5Tget_offset()
- H5Tget_sign()
- H5Tset_size()
-
- H5Tset_order()
- H5Tset_precision()
- H5Tset_offset()
- H5Tset_pad()
-
-
- H5Tset_sign()
H5Tget_super()
will
- be defined for all data types that are derived from existing
- types (currently just enumeration types).
-
-
-
-
- hid_t H5Tget_super(hid_t type)
-
-hid_t itype = H5Tget_super(hdf_en_colors);
-hid_t hdf_fr_colors = H5Tenum_create(itype);
-H5Tclose(itype);
-
-short val;
-H5Tenum_insert(hdf_fr_colors, "rouge", (val=0,&val));
-H5Tenum_insert(hdf_fr_colors, "vert", (val=1,&val));
-H5Tenum_insert(hdf_fr_colors, "bleu", (val=2,&val));
-H5Tenum_insert(hdf_fr_colors, "blanc", (val=3,&val));
-H5Tenum_insert(hdf_fr_colors, "noir", (val=4,&val));
-H5Tlock(hdf_fr_colors);
- 7.4. Type Functions
-
-
-
-
- int H5Tget_nmembers(hid_t etype)
-
- char *H5Tget_member_name(hid_t etype, unsigned
- membno)
- H5Tget_nmembers()
. The members are stored in no
- particular order. This function is already implemented for
- compound data types. If an error occurs then the null pointer
- is returned. The return value should be freed by calling
- free()
.
-
-
- herr_t H5Tget_member_value(hid_t etype, unsigned
- membno, void *value/*out*/)
- H5Tget_member_name()
). The value returned
- is in the domain of the underlying integer
- data type which is often a native integer type. The
- application should ensure that the memory pointed to by
- value is large enough to contain the result (the size
- can be obtained by calling H5Tget_size()
on
- either the enumeration type or the underlying integer type
- when the type is not known by the C compiler).
-
-
-int n = H5Tget_nmembers(hdf_en_colors);
-unsigned u;
-for (u=0; u<(unsigned)n; u++) {
- char *symbol = H5Tget_member_name(hdf_en_colors, u);
- short val;
- H5Tget_member_value(hdf_en_colors, u, &val);
- printf("#%u %20s = %d\n", u, symbol, val);
- free(symbol);
-}
-
-
-#0 BLACK = 4
-#1 BLUE = 2
-#2 GREEN = 1
-#3 RED = 0
-#4 WHITE = 3
- 7.5. Data Functions
-
-
-
-
- herr_t H5Tenum_valueof(hid_t etype, const char
- *symbol, void *value/*out*/)
-
- herr_t H5Tenum_nameof(hid_t etype, void
- *value, char *symbol, size_t
- size)
-
-short data[1000] = {4, 2, 0, 0, 5, 1, ...};
-int i;
-char symbol[32];
-
-for (i=0; i<1000; i++) {
- if (H5Tenum_nameof(hdf_en_colors, data+i, symbol,
- sizeof symbol)<0) {
- if (symbol[0]) {
- strcpy(symbol+sizeof(symbol)-4, "...");
- } else {
- strcpy(symbol, "UNKNOWN");
- }
- }
- printf("%d %s\n", data[i], symbol);
-}
-printf("}\n");
-
-
-
-4 BLACK
-2 BLUE
-0 RED
-0 RED
-5 UNKNOWN
-1 GREEN
-...
- 7.6. Conversion
-
-
2
which corresponds to
- BLUE
would be mapped to 0x0004
. The
- following code snippet builds the second data type, then
- converts a raw data array from one data type to another, and
- then prints the result.
-
-
-/* Create a new enumeration type */
-short val;
-hid_t bits = H5Tcreate(H5T_ENUM, sizeof val);
-H5Tenum_insert(bits, "RED", (val=0x0001,&val));
-H5Tenum_insert(bits, "GREEN", (val=0x0002,&val));
-H5Tenum_insert(bits, "BLUE", (val=0x0004,&val));
-H5Tenum_insert(bits, "WHITE", (val=0x0008,&val));
-H5Tenum_insert(bits, "BLACK", (val=0x0010,&val));
-
-/* The data */
-short data[6] = {1, 4, 2, 0, 3, 5};
-
-/* Convert the data from one type to another */
-H5Tconvert(hdf_en_colors, bits, 6, data, NULL);
-
-/* Print the data */
-for (i=0; i<6; i++) {
- printf("0x%04x\n", (unsigned)(data[i]));
-}
-
-
-
-0x0002
-0x0010
-0x0004
-0x0001
-0x0008
-0xffff
-
- H5Tset_overflow()
). If no overflow handler is
- defined then all bits of the destination value will be set.
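That default overflow behavior, mapping any source value with no matching symbol to a destination with all bits set, can be sketched in plain C. The function below is an illustration invented for this note, not the library's conversion code:

```c
#include <assert.h>

/* Map a source enumeration value to its destination value; when no
 * symbol matches, set every bit of the 16-bit destination (0xffff),
 * mirroring the default overflow handling described above. */
static unsigned short map_or_saturate(short v,
                                      const short *src_vals,
                                      const unsigned short *dst_vals,
                                      int n)
{
    int i;
    for (i = 0; i < n; i++)
        if (src_vals[i] == v)
            return dst_vals[i];
    return 0xffff;   /* no match: all destination bits set */
}
```

With the color values 0..4 mapped to the bit flags 0x0001..0x0010, the unmapped input 5 produces 0xffff, which is exactly the last line of the sample output above.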
-
- 7.7. Symbol Order
-
-
H5Tenum_valueof()
.
-
-
-short val1, val2;
-H5Tenum_valueof(hdf_en_colors, "WHITE", &val1);
-H5Tenum_valueof(hdf_en_colors, "BLACK", &val2);
-if (val1 < val2) ...
-
- foreign
is some non-native enumeration type then a
- native type can be created as follows:
-
-
-int n = H5Tget_nmembers(foreign);
-hid_t itype = H5Tget_super(foreign);
-void *val = malloc(n * MAX(H5Tget_size(itype), sizeof(int)));
-char **name = malloc(n * sizeof(char*));
-unsigned u;
-int i;
-
-/* Get foreign type information */
-for (u=0; u<(unsigned)n; u++) {
- name[u] = H5Tget_member_name(foreign, u);
- H5Tget_member_value(foreign, u,
- (char*)val+u*H5Tget_size(foreign));
-}
-
-/* Convert integer values to new type */
-H5Tconvert(itype, H5T_NATIVE_INT, n, val, NULL);
-
-/* Build a native type */
-hid_t native = H5Tenum_create(H5T_NATIVE_INT);
-for (i=0; i<n; i++) {
- H5Tenum_insert(native, name[i], ((int*)val)[i]);
- free(name[i]);
-}
-free(name);
-free(val);
-
- reverse
that
- defines the same five colors but in the reverse order.
-
-
-short val;
-int i;
-char sym[8];
-short data[5] = {0, 1, 2, 3, 4};
-
-hid_t reverse = H5Tenum_create(H5T_NATIVE_SHORT);
-H5Tenum_insert(reverse, "BLACK", (val=0,&val));
-H5Tenum_insert(reverse, "WHITE", (val=1,&val));
-H5Tenum_insert(reverse, "BLUE", (val=2,&val));
-H5Tenum_insert(reverse, "GREEN", (val=3,&val));
-H5Tenum_insert(reverse, "RED", (val=4,&val));
-
-/* Print data */
-for (i=0; i<5; i++) {
- H5Tenum_nameof(hdf_en_colors, data+i, sym, sizeof sym);
- printf ("%d %s\n", data[i], sym);
-}
-
-puts("Converting...");
-H5Tconvert(hdf_en_colors, reverse, 5, data, NULL);
-
-/* Print data */
-for (i=0; i<5; i++) {
- H5Tenum_nameof(reverse, data+i, sym, sizeof sym);
- printf ("%d %s\n", data[i], sym);
-}
-
-
-0 RED
-1 GREEN
-2 BLUE
-3 WHITE
-4 BLACK
-Converting...
-4 RED
-3 GREEN
-2 BLUE
-1 WHITE
-0 BLACK
-
- 7.8. Equality
-
-
H5Tequal()
function.
-
- 7.9. Interacting with C's
enum
Type
-
- enum
data types, there are some important
- differences:
-
-
-
-
-
-
- Difference
- Motivation/Implications
-
-
-
- Symbols are unquoted in C but quoted in
- HDF.
- This allows the application to manipulate
- symbol names in ways that are not possible with C.
-
-
-
- The C compiler automatically replaces all
- symbols with their integer values but HDF requires
- explicit calls to do the same.
- C resolves symbols at compile time while
- HDF resolves symbols at run time.
-
-
-
- The mapping from symbols to integers is
- N:1 in C but 1:1 in HDF.
- HDF can translate from value to name
- uniquely and large
- switch
statements are
- not necessary to print values in human-readable
- format.
-
-
- A symbol must appear in only one C
-
- enum
type but may appear in multiple HDF
- enumeration types. The translation from symbol to value in HDF
- requires the data type to be specified while in C the
- data type is not necessary because it can be inferred
- from the symbol.
-
-
-
- The underlying integer value is always a
- native integer in C but can be a foreign integer type in
- HDF.
- This allows HDF to describe data that might
- reside on a foreign architecture, such as data stored in
- a file.
-
-
- The sign and size of the underlying integer
- data type is chosen automatically by the C compiler but
- must be fully specified with HDF.
- Since HDF doesn't require finalization of a
- data type, complete specification of the type must be
- supplied before the type is used. Requiring that
- information at the time of type creation was a design
- decision to simplify the library.
-
-
-
-
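The N:1 versus 1:1 difference in the table can be seen directly in plain C, where nothing stops two enumeration constants from sharing a value (the names here are invented for illustration):

```c
#include <assert.h>

/* In C the symbol-to-value mapping may be N:1: several symbols can
 * share one integer value, so mapping a value back to a unique symbol
 * is impossible. HDF enumeration types require the mapping to be 1:1,
 * which is what makes value-to-name translation well defined. */
enum status { OK = 0, SUCCESS = 0, FAILURE = -1 };
```

Given a stored value of 0, there is no way to decide whether it meant `OK` or `SUCCESS`; an HDF enumeration type rules this situation out.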
-
-
-
-
-
-/* English color names */
-typedef enum {
- RED,
- GREEN,
- BLUE,
- WHITE,
- BLACK
-} c_en_colors;
-
-/* Spanish color names, reverse order */
-typedef enum {
- NEGRO,
- BLANCO,
- AZUL,
- VERDE,
- ROJO
-} c_sp_colors;
-
-/* No enum definition for French names */
-
Creating HDF Types from C Types
-
-
enum
type simply by passing pointers to the C
- enum
values to H5Tenum_insert()
. For
- instance, to create HDF types for the c_en_colors
- type shown above:
-
-
-
-
-
-
-
-
-
-
-
-c_en_colors val;
-hid_t hdf_en_colors = H5Tcreate(H5T_ENUM, sizeof(c_en_colors));
-H5Tenum_insert(hdf_en_colors, "RED", (val=RED, &val));
-H5Tenum_insert(hdf_en_colors, "GREEN", (val=GREEN,&val));
-H5Tenum_insert(hdf_en_colors, "BLUE", (val=BLUE, &val));
-H5Tenum_insert(hdf_en_colors, "WHITE", (val=WHITE,&val));
-H5Tenum_insert(hdf_en_colors, "BLACK", (val=BLACK,&val));
Name Changes between Applications
-
-
enum
definitions. The communication is still
- possible although the applications must agree on common terms
- for the colors. The following example shows the Spanish code to
- read the values assuming that the applications have agreed that
- the color information will be exchanged using English color
- names:
-
-
-
-
-
-
-
-
-
-
-
-
-c_sp_colors val, data[1000];
-hid_t hdf_sp_colors = H5Tcreate(H5T_ENUM, sizeof(c_sp_colors));
-H5Tenum_insert(hdf_sp_colors, "RED", (val=ROJO, &val));
-H5Tenum_insert(hdf_sp_colors, "GREEN", (val=VERDE, &val));
-H5Tenum_insert(hdf_sp_colors, "BLUE", (val=AZUL, &val));
-H5Tenum_insert(hdf_sp_colors, "WHITE", (val=BLANCO, &val));
-H5Tenum_insert(hdf_sp_colors, "BLACK", (val=NEGRO, &val));
-
-H5Dread(dataset, hdf_sp_colors, H5S_ALL, H5S_ALL, H5P_DEFAULT, data);
Symbol Ordering across Applications
-
-
enum
definition,
- ordering of enum
symbols cannot be preserved across
- files like with HDF enumeration types. HDF can convert from one
- application's integer values to the other's so a symbol in one
- application's C enum
gets mapped to the same symbol
- in the other application's C enum
, but the relative
- order of the symbols is not preserved.
-
- c_en_colors
defined above where
- WHITE
is less than BLACK
, but some
- other application might define the colors in some other
- order. If each application defines an HDF enumeration type based
- on that application's C enum
type then HDF will
- modify the integer values as data is communicated from one
- application to the other so that a RED
value
- in the first application is also a RED
value in the
- other application.
-
- RED
) in the
- input file became 4 (ROJO
) in the data
- array. In the input file, WHITE
was less than
- BLACK
; in the application the opposite is true.
-
- Internationalization
-
-
c_en_colors
data type could define
- a separate HDF data type for languages such as English, Spanish,
- and French and cast the enumerated value to one of these HDF
- types to print the result.
-
-
-
-
-
-
-
-
-
-
-
-c_en_colors val, *data=...;
-
-hid_t hdf_sp_colors = H5Tcreate(H5T_ENUM, sizeof val);
-H5Tenum_insert(hdf_sp_colors, "ROJO", (val=RED, &val));
-H5Tenum_insert(hdf_sp_colors, "VERDE", (val=GREEN, &val));
-H5Tenum_insert(hdf_sp_colors, "AZUL", (val=BLUE, &val));
-H5Tenum_insert(hdf_sp_colors, "BLANCO", (val=WHITE, &val));
-H5Tenum_insert(hdf_sp_colors, "NEGRO", (val=BLACK, &val));
-
-hid_t hdf_fr_colors = H5Tcreate(H5T_ENUM, sizeof val);
-H5Tenum_insert(hdf_fr_colors, "ROUGE", (val=RED, &val));
-H5Tenum_insert(hdf_fr_colors, "VERT", (val=GREEN, &val));
-H5Tenum_insert(hdf_fr_colors, "BLEU", (val=BLUE, &val));
-H5Tenum_insert(hdf_fr_colors, "BLANC", (val=WHITE, &val));
-H5Tenum_insert(hdf_fr_colors, "NOIR", (val=BLACK, &val));
-
-void
-nameof(lang_t language, c_en_colors val, char *name, size_t size)
-{
- switch (language) {
- case ENGLISH:
- H5Tenum_nameof(hdf_en_colors, &val, name, size);
- break;
- case SPANISH:
- H5Tenum_nameof(hdf_sp_colors, &val, name, size);
- break;
- case FRENCH:
- H5Tenum_nameof(hdf_fr_colors, &val, name, size);
- break;
- }
-}
7.10. Goals That Have Been Met
-
-
-
-
-
-
-
-
- Architecture Independence
- Two applications shall be able to exchange
- enumerated data even when the underlying integer values
- have different storage formats. HDF accomplishes this for
- enumeration types by building them upon integer types.
-
-
-
- Preservation of Order Relationship
- The relative order of symbols shall be
- preserved between two applications that use equivalent
- enumeration data types. Unlike numeric values that have
- an implicit ordering, enumerated data has an explicit
- order defined by the enumeration data type and HDF
- records this order in the file.
-
-
-
- Order Independence
- An application shall be able to change the
- relative ordering of the symbols in an enumeration data
- type. This is accomplished by defining a new type with
- different integer values and converting data from one type
- to the other.
-
-
-
- Subsets
- An application shall be able to read
- enumerated data from an archived dataset even after the
- application has defined additional members for the
- enumeration type. An application shall be able to write
- to a dataset when the dataset contains a superset of the
- members defined by the application. Similar rules apply
- for in-core conversions between enumerated data
- types.
-
-
-
- Targetable
- An application shall be able to target a
- particular architecture or application when storing
- enumerated data. This is accomplished by allowing
- non-native underlying integer types and converting the
- native data to non-native data.
-
-
-
- Efficient Data Transfer
- An application that defines a file dataset
- that corresponds to some native C enumerated data array
- shall be able to read and write to that dataset directly
- using only Posix read and write functions. HDF already
- optimizes this case for integers, so the same optimization
- will apply to enumerated data.
-
-
- Efficient Storage
- Enumerated data shall be stored in a manner
- which is space efficient. HDF stores the enumerated data
- as integers and allows the application to choose the size
- and format of those integers.
-
-
-
-
-
-
-
-
-HDF Help Desk
-
-
-
-Last modified: 30 April 1999
-Footer modified: 3 July 2002
-
-
-
-This file is no longer used; the material has been integrated into Datatypes.html.
-
-
-
-
diff --git a/doc/html/Debugging.html b/doc/html/Debugging.html
deleted file mode 100644
index d04cf27..0000000
--- a/doc/html/Debugging.html
+++ /dev/null
@@ -1,516 +0,0 @@
-
-
-
-
-
-
-
-
-
-Debugging HDF5 Applications
-
- Introduction
-
-
-
-
-
- Unless NDEBUG
 is defined during compilation, the
- library will include code to verify that invariant conditions
- have the expected values. When a problem is detected the
- library will display the file and line number within the
- library and the invariant condition that failed. A core dump
- may be generated for post mortem debugging. The code to
- perform these checks can be included on a per-package basis.
-
-
-
- Error Messages
-
-
-
-
-
-
-
-
-
-
-HDF5-DIAG: Error detected in thread 0. Back trace follows.
- #000: H5F.c line 1245 in H5Fopen(): unable to open file
- major(04): File interface
- minor(10): Unable to open file
- #001: H5F.c line 846 in H5F_open(): file does not exist
- major(04): File interface
- minor(10): Unable to open file
-
Invariant Conditions
-
- --disable-production
, the
- default for versions before 1.2. The library designers have made
- every attempt to handle error conditions gracefully but an
- invariant condition assertion may fail in certain cases. The
- output from a failure usually looks something like this:
-
-
-
-
-
-
-
-
-
-
-Assertion failed: H5.c:123: i<NELMTS(H5_debug_g)
-IOT Trap, core dumped.
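The invariant-checking pattern described above can be sketched with the standard `assert` macro, which behaves the same way: compiled out when `NDEBUG` is defined, otherwise printing the file, line, and failed condition before aborting. The function below is an invented example, not library code:

```c
#include <assert.h>

/* Invariant-checking sketch: with NDEBUG defined at compile time the
 * check disappears entirely; without it, a failing index would print
 * the file, line, and condition, then abort with a core dump. */
static int table_lookup(const int *table, int nelmts, int i)
{
    assert(i >= 0 && i < nelmts);   /* invariant: index within bounds */
    return table[i];
}
```

In a production build (`-DNDEBUG`) the bounds check costs nothing; in a debug build an out-of-range index fails loudly, which is the behavior shown in the sample assertion message above.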
-
Timings and Statistics
-
- --enable-debug
configure switch. The
- switch can be followed by an equal sign and a comma-separated
- list of package names or else a default list is used.
-
-
-
-
-
- Name
- Default
- Description
-
-
- a
- No
- Attributes
-
-
- ac
- Yes
- Meta data cache
-
-
- b
- Yes
- B-Trees
-
-
- d
- Yes
- Datasets
-
-
- e
- Yes
- Error handling
-
-
- f
- Yes
- Files
-
-
- g
- Yes
- Groups
-
-
- hg
- Yes
- Global heap
-
-
- hl
- No
- Local heaps
-
-
- i
- Yes
- Interface abstraction
-
-
- mf
- No
- File memory management
-
-
- mm
- Yes
- Library memory management
-
-
- o
- No
- Object headers and messages
-
-
- p
- Yes
- Property lists
-
-
- s
- Yes
- Data spaces
-
-
- t
- Yes
- Datatypes
-
-
- v
- Yes
- Vectors
-
-
- z
- Yes
- Raw data filters
- HDF5_DEBUG
- environment variable. That variable may also contain file
- descriptor numbers (the default is `2') which control the output
- for all following packages up to the next file number. The
- word all
 refers to all packages. Any word may be
- preceded by a minus sign to turn debugging off for the package.
-
-
-
-
-
-
- all
This causes debugging output from all packages to be
- sent to the standard error stream.
-
-
-
- all -t -s
Debugging output for all packages except datatypes
- and data spaces will appear on the standard error
- stream.
-
-
-
- -all ac 255 t,s
This disables all debugging even if the default was to
- debug something, then output from the meta data cache is
- send to the standard error stream and output from data
- types and spaces is sent to file descriptor 255 which
- should be redirected by the shell.
- HDF5_DEBUG
value may be
- separated by any non-lowercase letter.
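The tokenizing rules just described (lowercase words, an optional leading minus sign, digits as file-descriptor numbers, anything else as a separator) can be sketched in a few lines of C. This is a hypothetical illustration of the parsing scheme, not the library's actual parser:

```c
#include <assert.h>
#include <ctype.h>
#include <string.h>

/* Sketch of tokenizing an HDF5_DEBUG-style value: words are runs of
 * lowercase letters (optionally preceded by '-' to disable a package),
 * digits form file-descriptor numbers, and any other character acts
 * as a separator. Returns the number of words found and records
 * whether the word "all" appeared. */
static int count_packages(const char *spec, int *saw_all)
{
    int n = 0;
    *saw_all = 0;
    while (*spec) {
        if (*spec == '-' && islower((unsigned char)spec[1]))
            spec++;                      /* leading minus disables */
        if (islower((unsigned char)*spec)) {
            const char *start = spec;
            while (islower((unsigned char)*spec))
                spec++;
            if (spec - start == 3 && strncmp(start, "all", 3) == 0)
                *saw_all = 1;
            n++;
        } else {
            spec++;                      /* digit or separator */
        }
    }
    return n;
}
```

Applied to the example value `-all ac 255 t,s`, this finds the four words `all`, `ac`, `t`, and `s`, skipping the file-descriptor number 255.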
-
- API Tracing
-
- h5ls foo
after turning on tracing,
- includes:
-
-
-
-
-
-
-
-
-
-
-H5Tcopy(type=184549388) = 184549419 (type);
-H5Tcopy(type=184549392) = 184549424 (type);
-H5Tlock(type=184549424) = SUCCEED;
-H5Tcopy(type=184549393) = 184549425 (type);
-H5Tlock(type=184549425) = SUCCEED;
-H5Fopen(filename="foo", flags=0, access=H5P_DEFAULT) = FAIL;
-HDF5-DIAG: Error detected in thread 0. Back trace follows.
- #000: H5F.c line 1245 in H5Fopen(): unable to open file
- major(04): File interface
- minor(10): Unable to open file
- #001: H5F.c line 846 in H5F_open(): file does not exist
- major(04): File interface
- minor(10): Unable to open file
-
--enable-trace
- configuration switch (the default for versions before 1.2). Then
- the word trace
must appear in the value of the
- HDF5_DEBUG
variable. The output will appear on the
- last file descriptor before the word trace
, or on file descriptor two
- (standard error) by default.
-
-
-
-
-
- To display the trace on the standard error stream:
-
-
-
-$ env HDF5_DEBUG=trace a.out
-
-
- To send the trace to a file:
-
-
-
-$ env HDF5_DEBUG="55 trace" a.out 55>trace-output
-
Performance
-
- H5_trace()
function.
-
- Safety
-
- Completeness
-
- H5Eprint()
and
- H5Eprint_cb()
because their participation would
- mess up output during automatic error reporting.
-
- Implementation
-
- H5TRACE()
macros immediately after the
- FUNC_ENTER()
macro. The first argument is the
- return type encoded as a string. The second argument is the
- types of all the function arguments encoded as a string. The
- remaining arguments are the function arguments. This macro was
- designed to be as terse and unobtrusive as possible.
-
- H5TRACE()
calls synchronized
- with the source code we've written a perl script which gets
- called automatically just before Makefile dependencies are
- calculated for the file. However, this only works when one is
- using GNU make. To reinstrument the tracing explicitly, invoke
- the trace
program from the hdf5 bin directory with
- the names of the source files that need to be updated. If any
- file needs to be modified then a backup is created by appending
- a tilde to the file name.
-
-
-
-
-
-
-
-
-
-
-$ ../bin/trace *.c
-H5E.c: in function `H5Ewalk_cb':
-H5E.c:336: warning: trace info was not inserted
-
/*NO TRACE*/
somewhere in the function
- body. Tracing information will not be updated or inserted if
- such a comment exists.
-
-
-
-
-
-
-
-
-HDF Help Desk
-
-Describes HDF5 Release 1.4.5, February 2003
-
-
-Last modified: 13 December 1999
-
-
-
-
-
diff --git a/doc/html/EnumMap.gif b/doc/html/EnumMap.gif
deleted file mode 100644
index d06f06a..0000000
Binary files a/doc/html/EnumMap.gif and /dev/null differ
diff --git a/doc/html/Environment.html b/doc/html/Environment.html
deleted file mode 100644
index a00998b..0000000
--- a/doc/html/Environment.html
+++ /dev/null
@@ -1,166 +0,0 @@
-
-
-
-
-
-
-
-
-
-
-HDF5 Library Environment Variables and Configuration Parameters
-
-1. Environment Variables
-
-The HDF5 library uses UNIX environment variables to control
-or adjust certain library features at runtime. The variables and
-their defined effects are as follows:
-
-
-
-
-
-1
, HDF5 will not abort when the version
- of the HDF5 headers doesn't match the version of the HDF5 library.
-
- 1
, PHDF5 will use the MPI optimized
- code to perform parallel read/write accesses to datasets.
- Currently, this optimization fails when accessing extendable
- datasets. The default is not to use the optimized code.
-
-2. Configuration Parameters
-
-The HDF5 configuration script accepts a list of parameters to control
-configuration features when creating the Makefiles for the library.
-The command
-
- configure --help
-
-will display the current list of parameters and their effects.
-
-
-
-
-
-
-
-
-
-
-HDF Help Desk
-
-Describes HDF5 Release 1.4.5, February 2003
-
-
-Last modified: 13 December 1999
-
-
-
-
diff --git a/doc/html/Errors.html b/doc/html/Errors.html
deleted file mode 100644
index 29a00ba..0000000
--- a/doc/html/Errors.html
+++ /dev/null
@@ -1,386 +0,0 @@
-
-
-
-
-
-
-
-
-
-The Error Handling Interface (H5E)
-
- 1. Introduction
-
- 2. Error Handling Operations
-
-
-
- Example: An Error Message
-
-
-
- H5Tclose
on a
- predefined datatype then the following message is
- printed on the standard error stream. This is a
- simple error that has only one component, the API
- function; other errors may have many components.
-
-
-
-HDF5-DIAG: Error detected in thread 0. Back trace follows.
- #000: H5T.c line 462 in H5Tclose(): predefined datatype
- major(01): Function argument
- minor(05): Bad value
-
H5Eprint()
then the automatic printing should be
- turned off to prevent error messages from being displayed twice
- (see H5Eset_auto()
below).
-
-
-
-
- herr_t H5Eprint (FILE *stream)
- HDF5-DIAG: Error detected in thread 0.
-
-
- herr_t H5Eclear (void)
- H5Eprint()
).
- H5Eset_auto()
function:
-
-
-
-
- herr_t H5Eset_auto (herr_t(*func)(void*),
- void *client_data)
- H5Eprint()
(cast appropriately) and
- client_data is the standard error stream pointer,
- stderr
.
-
-
- herr_t H5Eget_auto (herr_t(**func)(void*),
- void **client_data)
-
-
- Example: Error Control
-
-
-
-
-
-
-/* Save old error handler */
-herr_t (*old_func)(void*);
-void *old_client_data;
-H5Eget_auto(&old_func, &old_client_data);
-
-/* Turn off error handling */
-H5Eset_auto(NULL, NULL);
-
-/* Probe. Likely to fail, but that's okay */
-status = H5Fopen (......);
-
-/* Restore previous error handler */
-H5Eset_auto(old_func, old_client_data);
-
-
-/* Turn off error handling permanently */
-H5Eset_auto (NULL, NULL);
-
-/* If failure, print error message */
-if (H5Fopen (....)<0) {
- H5Eprint (stderr);
- exit (1);
-}
-
H5Eprint()
. For instance, one could define a
- function that prints a simple, one-line error message to the
- standard error stream and then exits.
-
-
-
- Example: Simple Messages
-
-
-
-
-
-
-herr_t
-my_hdf5_error_handler (void *unused)
-{
- fprintf (stderr, "An HDF5 error was detected. Bye.\n");
- exit (1);
-}
-
-
-H5Eset_auto (my_hdf5_error_handler, NULL);
-
H5Eprint()
function is actually just a wrapper
- around the more complex H5Ewalk()
function which
- traverses an error stack and calls a user-defined function for
- each member of the stack.
-
-
-
-
- herr_t H5Ewalk (H5E_direction_t direction,
- H5E_walk_t func, void *client_data)
- H5E_WALK_UPWARD
then traversal begins at the
- inner-most function that detected the error and concludes with
- the API function. The opposite order is
- H5E_WALK_DOWNWARD
.
-
-
- typedef herr_t (*H5E_walk_t)(int n,
- H5E_error_t *eptr, void
- *client_data)
- H5Ewalk()
.
-
-
-
- typedef struct {
- H5E_major_t maj_num;
- H5E_minor_t min_num;
- const char *func_name;
- const char *file_name;
- unsigned line;
- const char *desc;
-} H5E_error_t;
- const char *H5Eget_major (H5E_major_t num)
- const char *H5Eget_minor (H5E_minor_t num)
-
-
- Example: H5Ewalk_cb
-
-
-
-
-
-herr_t
-H5Ewalk_cb(int n, H5E_error_t *err_desc, void *client_data)
-{
- FILE *stream = (FILE *)client_data;
- const char *maj_str = NULL;
- const char *min_str = NULL;
- const int indent = 2;
-
- /* Check arguments */
- assert (err_desc);
- if (!client_data) client_data = stderr;
-
- /* Get descriptions for the major and minor error numbers */
- maj_str = H5Eget_major (err_desc->maj_num);
- min_str = H5Eget_minor (err_desc->min_num);
-
- /* Print error message */
- fprintf (stream, "%*s#%03d: %s line %u in %s(): %s\n",
- indent, "", n, err_desc->file_name, err_desc->line,
- err_desc->func_name, err_desc->desc);
- fprintf (stream, "%*smajor(%02d): %s\n",
- indent*2, "", err_desc->maj_num, maj_str);
- fprintf (stream, "%*sminor(%02d): %s\n",
- indent*2, "", err_desc->min_num, min_str);
-
- return 0;
-}
-
-
-
-
-
-
-
-HDF Help Desk
-
-Describes HDF5 Release 1.4.5, February 2003
-
-
-Last modified: 13 December 1999
-
-
-
-
diff --git a/doc/html/ExternalFiles.html b/doc/html/ExternalFiles.html
deleted file mode 100644
index 0213ea8..0000000
--- a/doc/html/ExternalFiles.html
+++ /dev/null
@@ -1,279 +0,0 @@
-
-
-
- External Files in HDF5
Overview of Layers
-
-
-
-
-
- Layer-7
- Groups
- Datasets
-
-
- Layer-6
- Indirect Storage
- Symbol Tables
-
-
- Layer-5
- B-trees
- Object Hdrs
- Heaps
-
-
- Layer-4
- Caching
-
-
- Layer-3
- H5F chunk I/O
-
-
- Layer-2
- H5F low
-
-
- Layer-1
- File Family
- Split Meta/Raw
-
-
- Layer-0
- Section-2 I/O
- Standard I/O
- Malloc/Free
- Single Address Space
-
- H5Fcreate
and H5Fopen
functions
- would need to be modified to pass file-type info down to layer 2
- so the correct drivers can be called and parameters passed to
- the drivers to initialize them.
-
- Implementation
-
- H5F_open
(which is called by H5Fopen()
- and H5Fcreate
) that contains a
- printf(3c)
-style integer format specifier.
- Currently, the default low-level file driver is used for all
- family members (H5F_LOW_DFLT, usually set to be Section 2 I/O or
- Section 3 stdio), but we'll probably eventually want to pass
- that as a parameter of the file access property list, which
- hasn't been implemented yet. When creating a family, a default
- family member size is used (defined at the top H5Ffamily.c,
- currently 64MB) but that also should be settable in the file
- access property list. When opening an existing family, the size
- of the first member is used to determine the member size
- (flushing/closing a family ensures that the first member is the
- correct size) but the other family members don't have to be that
- large (the local address space, however, is logically the same
- size for all members).
-
- H5F_open
- then we'll choose the split family and use the default low-level
- driver for each of the two family members. Eventually we'll
- want to pass these kinds of things through the file access
- property list instead of relying on naming convention.
-
- External Raw Data
-
- Multiple HDF5 Files
-
-
-
-
-struct H5F_mount_t {
- H5F_t *parent; /* Parent HDF5 file if any */
- struct {
- H5F_t *f; /* File which is mounted */
- haddr_t where; /* Address of mount point */
- } *mount; /* Array sorted by mount point */
- intn nmounts; /* Number of mounted files */
- intn alloc; /* Size of mount table */
-}
-
H5Fmount
function takes the ID of an open
- file or group, the name of a to-be-mounted file, the name of the mount
- point, and a file access property list (like H5Fopen
).
- It opens the new file and adds a record to the parent's mount
- table. The H5Funmount
function takes the parent
- file or group ID and the name of the mount point and disassociates
- the mounted file from the mount point. It does not close the
- mounted file. The H5Fclose
- function closes/unmounts files recursively.
-
- H5G_iname
function which translates a name to
- a file address (haddr_t
) looks at the mount table
- at each step in the translation and switches files where
- appropriate. All name-to-address translations occur through
- this function.
-
- How Long?
-
- H5F_istore_read
should be trivial. Most of the
- time will be spent designing a way to cache Unix file
- descriptors efficiently, since the total number of open files
- allowed per process could be much smaller than the total number
- of HDF5 files and external raw data files.
-
- haddr_t
opaque turned out to be much easier
- than I planned (I did it last Fri). Most of the work will
- probably be removing the redundant H5F_t arguments for lots of
- functions.
-
- Conclusion
-
-
- Robb Matzke
-
-
-Last modified: Tue Sep 8 14:43:32 EDT 1998
-
-
-
diff --git a/doc/html/FF-IH_FileGroup.gif b/doc/html/FF-IH_FileGroup.gif
deleted file mode 100644
index b0d76f5..0000000
Binary files a/doc/html/FF-IH_FileGroup.gif and /dev/null differ
diff --git a/doc/html/FF-IH_FileObject.gif b/doc/html/FF-IH_FileObject.gif
deleted file mode 100644
index 8eba623..0000000
Binary files a/doc/html/FF-IH_FileObject.gif and /dev/null differ
diff --git a/doc/html/Files.html b/doc/html/Files.html
deleted file mode 100644
index d490436..0000000
--- a/doc/html/Files.html
+++ /dev/null
@@ -1,607 +0,0 @@
-
-
-
-
-
-
-
-
-
-The File Interface (H5F)
-
- 1. Introduction
-
- 2. File access modes
-
- H5F_ACC_RDWR
- parameter to H5Fopen()
allows write access to a
- file also. H5Fcreate()
assumes write access as
- well as read access; passing H5F_ACC_TRUNC
 forces
- the truncation of an existing file, while otherwise H5Fcreate() will
- fail if a file of that name already exists.
-
- 3. Creating, Opening, and Closing Files
-
- H5Fcreate()
function,
- and existing files can be accessed with H5Fopen()
. Both
- functions return an object ID which should be eventually released by
- calling H5Fclose()
.
-
-
-
-
- hid_t H5Fcreate (const char *name, uintn
- flags, hid_t create_properties, hid_t
- access_properties)
- H5F_ACC_TRUNC
flag is set,
- any current file is truncated when the new file is created.
- If a file of the same name exists and the
- H5F_ACC_TRUNC
flag is not set (or the
- H5F_ACC_EXCL
bit is set), this function will
- fail. Passing H5P_DEFAULT
for the creation
- and/or access property lists uses the library's default
- values for those properties. Creating and changing the
- values of a property list is documented further below. The
- return value is an ID for the open file and it should be
- closed by calling H5Fclose()
when it's no longer
- needed. A negative value is returned for failure.
-
-
- hid_t H5Fopen (const char *name, uintn
- flags, hid_t access_properties)
- H5F_ACC_RDWR
flag is
- set. The access_properties is a file access property
- list ID or H5P_DEFAULT
for the default I/O access
- parameters. Creating and changing the parameters for access
- property lists is documented further below. Files which are opened
- more than once return a unique identifier for each
- H5Fopen()
call and can be accessed through all
- file IDs. The return value is an ID for the open file and it
- should be closed by calling H5Fclose()
when it's
- no longer needed. A negative value is returned for failure.
-
-
- herr_t H5Fclose (hid_t file_id)
- H5Fcreate()
or H5Fopen()
. After
- closing a file the file_id should not be used again. This
- function returns zero for success or a negative value for failure.
-
-
- herr_t H5Fflush (hid_t object_id,
- H5F_scope_t scope)
- 4. File Property Lists
-
- H5Fcreate()
or
- H5Fopen()
are passed through property list
- objects, which are created with the H5Pcreate()
- function. These objects allow many parameters of a file's
- creation or access to be changed from the default values.
- Property lists are used as a portable and extensible method of
- modifying multiple parameter values with simple API functions.
- There are two kinds of file-related property lists,
- namely file creation properties and file access properties.
-
- 4.1. File Creation Properties
-
- H5Fcreate()
only
- and are used to control the file meta-data which is maintained
- in the super block of the file. The parameters which can be
- modified are:
-
-
-
-
- H5Pset_userblock()
and
- H5Pget_userblock()
calls.
-
-
- H5Pset_sizes()
and
- H5Pget_sizes()
calls.
-
-
- H5Pset_sym_k()
and H5Pget_sym_k()
calls.
-
-
- H5Pset_istore_k()
and H5Pget_istore_k()
- calls.
- 4.2. File Access Property Lists
-
- H5Fcreate()
or
- H5Fopen()
and are used to control different methods of
- performing I/O on files.
-
-
-
-
- open()
,
- lseek()
, read()
, write()
, and
- close()
. The lseek64()
function is used
- on operating systems that support it. This driver is enabled and
- configured with H5Pset_fapl_sec2()
.
-
-
- stdio.h
, namely
- fopen()
, fseek()
, fread()
,
- fwrite()
, and fclose()
. The
- fseek64()
function is used on operating systems that
- support it. This driver is enabled and configured with
- H5Pset_fapl_stdio()
.
-
-
- malloc()
and free()
to create storage
- space for the file. The total size of the file must be small enough
- to fit in virtual memory. The name supplied to
- H5Fcreate()
is irrelevant, and H5Fopen()
- will always fail.
-
-
- MPI_File_open()
during file creation or open.
- The access_mode controls the kind of parallel access the application
- intends. (Note that it is likely that the next API revision will
- remove the access_mode parameter and have access control specified
- via the raw data transfer property list of H5Dread()
- and H5Dwrite()
.) These parameters are set and queried
- with the H5Pset_fapl_mpi()
and
- H5Pget_fapl_mpi()
calls.
-
-
- H5Pset_alignment()
function. Any allocation
- request at least as large as some threshold will be aligned on
- an address which is a multiple of some number.
- 5. Examples of using file property lists
-
- 5.1. Example of using file creation property lists
-
-
-
- hid_t create_plist;
- hid_t file_id;
-
- create_plist = H5Pcreate(H5P_FILE_CREATE);
- H5Pset_sizes(create_plist, 8, 8);
-
- file_id = H5Fcreate("test.h5", H5F_ACC_TRUNC,
- create_plist, H5P_DEFAULT);
- .
- .
- .
- H5Fclose(file_id);
-
-
- 5.2. Example of using file creation plist
-
-
-
- hid_t access_plist;
- hid_t file_id;
-
- access_plist = H5Pcreate(H5P_FILE_ACCESS);
- H5Pset_fapl_mpi(access_plist, MPI_COMM_WORLD, MPI_INFO_NULL);
-
- /* H5Fopen must be called collectively */
- file_id = H5Fopen("test.h5", H5F_ACC_RDWR, access_plist);
- .
- .
- .
- /* H5Fclose must be called collectively */
- H5Fclose(file_id);
-
-
-
- 6. Low-level File Drivers
-
- 6.1. Unbuffered Permanent Files
-
- open()
, close()
, read()
,
- write()
, and lseek()
functions. If the
- operating system supports lseek64()
then it is used instead
- of lseek()
. The library buffers meta data regardless of
- the low-level driver, but using this driver prevents data from being
- buffered again by the lowest layers of the HDF5 library.
-
-
-
-
- hid_t H5Pget_driver (hid_t access_properties)
- H5FD_SEC2
if the
- sec2 driver is defined as the low-level driver for the
- specified access property list.
-
-
- herr_t H5Pset_fapl_sec2
- (hid_t access_properties)
- 6.2. Buffered Permanent Files
-
- stdio.h
header file to access permanent files in a local
- file system. These are the fopen()
, fclose()
,
- fread()
, fwrite()
, and fseek()
- functions. If the operating system supports fseek64()
then
- it is used instead of fseek()
. Use of this driver
- introduces an additional layer of buffering beneath the HDF5 library.
-
-
-
-
- hid_t H5Pget_driver(hid_t access_properties)
- H5FD_STDIO
if the
- stdio driver is defined as the low-level driver for the
- specified access property list.
-
-
- herr_t H5Pset_fapl_stdio
- (hid_t access_properties)
- 6.3. Buffered Temporary Files
-
- malloc()
and
- free()
to allocate space for a file in the heap. Reading
- and writing to a file of this type results in mem-to-mem copies instead
- of disk I/O and as a result is somewhat faster. However, the total file
- size must not exceed the amount of available virtual memory, and only
- one HDF5 file handle can access the file (because the name of such a
- file is insignificant and H5Fopen()
always fails).
-
-
-
-
- hid_t H5Pget_driver (hid_t access_properties)
- H5FD_CORE
if the
- core driver is defined as the low-level driver for the
- specified access property list.
-
-
- herr_t H5Pset_fapl_core (hid_t access_properties,
- size_t block_size,
- hbool_t backing_store)
-
- herr_t H5Pget_fapl_core (hid_t access_properties,
- size_t *block_size),
- hbool_t *backing_store)
- H5Pset_fapl_core()
.
- 6.4. Parallel Files
-
-
-
-
-
- hid_t H5Pget_driver (hid_t access_properties)
- H5FD_MPI
if the
- mpi driver is defined as the low-level driver for the
- specified access property list.
-
-
- herr_t H5Pset_fapl_mpi (hid_t access_properties, MPI_Comm
- comm, MPI_info info)
-
- herr_t H5Pget_fapl_mpi
- (hid_t access_properties,
- MPI_Comm *comm,
- MPI_info *info)
- H5Pset_fapl_mpi()
.
- 6.5. File Families
-
-
- ls
(1) may be substantially smaller. The name passed to
- H5Fcreate()
or H5Fopen()
should include a
- printf(3c)
style integer format specifier which will be
- replaced with the family member number (the first family member is
- zero).
-
- split
(1) and numbering the output
- files. However, because HDF5 is lazy about extending the size
- of family members, a valid file cannot generally be created by
- concatenation of the family members. Additionally,
- split
and cat
don't attempt to
- generate files with holes. The h5repart
program
- can be used to repartition an HDF5 file or family into another
- file or family and preserves holes in the files.
-
-
-
-
- h5repart
[-v
] [-b
- block_size[suffix]] [-m
- member_size[suffix]] source
- destination
- printf
-style integer format such as "%d". The
- -v
switch prints input and output file names on
- the standard error stream for progress monitoring,
- -b
sets the I/O block size (the default is 1kB),
- and -m
sets the output member size if the
- destination is a family name (the default is 1GB). The block
- and member sizes may be suffixed with the letters
- g
, m
, or k
for GB, MB,
- or kB respectively.
-
-
- hid_t H5Pget_driver (hid_t access_properties)
- H5FD_FAMILY
if
- the family driver is defined as the low-level driver for the
- specified access property list.
-
-
- herr_t H5Pset_fapl_family (hid_t access_properties,
- hsize_t memb_size, hid_t member_properties)
- off_t
type is
- four bytes then the maximum family member size is usually
- 2^31-1 because the byte at offset 2,147,483,647 is generally
- inaccessible. Additional parameters may be added to this
- function in the future.
-
-
- herr_t H5Pget_fapl_family (hid_t access_properties,
- hsize_t *memb_size,
- hid_t *member_properties)
- H5Pclose()
when the application is finished with
- it. If memb_size is non-null then it will contain
- the logical size in bytes of each family member. In the
- future, additional arguments may be added to this function to
- match those added to H5Pset_fapl_family()
.
- 6.6. Split Meta/Raw Files
-
- H5Fcreate()
or H5Fopen()
and this
- driver appends a file extension which defaults to .meta
for
- the meta data file and .raw
for the raw data file.
- Each file can have its own
- file access property list which allows, for instance, a split file with
- meta data stored with the core driver and raw data stored with
- the sec2 driver.
-
-
-
-
-
-hid_t H5Pget_driver (hid_t access_properties)
- H5FD_SPLIT
if
- the split driver is defined as the low-level driver for the
- specified access property list.
-
-
- herr_t H5Pset_fapl_split (hid_t access_properties,
- const char *meta_extension,
- hid_t meta_properties, const char *raw_extension,
- hid_t raw_properties)
- .meta
) to the end of
- the base name and will be accessed according to the
- meta_properties. The raw file will have a name which is
- formed by appending raw_extension (or .raw
) to the base
- name and will be accessed according to the raw_properties.
- Additional parameters may be added to this function in the future.
-
-
-
-
-
-
-
-
-HDF Help Desk
-
-Describes HDF5 Release 1.4.5, February 2003
-
-
-Last modified: 26 April 2001
-
-
-
-
diff --git a/doc/html/Filters.html b/doc/html/Filters.html
deleted file mode 100644
index a253cfb..0000000
--- a/doc/html/Filters.html
+++ /dev/null
@@ -1,593 +0,0 @@
-
-
-
-
-
-
-
-
-
-Filters in HDF5
-
- Note: Transient pipelines described in this document have not
- been implemented.
-
- 1. Introduction
-
- H5D_CHUNKED
dataset can be arranged in a pipeline
- so output of one filter becomes the input of the next filter.
-
- H5Z_filter_t
) allocated by NCSA and can also be
- passed application-defined integer resources to control its
- behavior. Each filter also has an optional ASCII comment
- string.
-
-
-
- H5Z_filter_t
-
-
-
- Value
- Description
-
-
-
-
- 0-255
These values are reserved for filters predefined and
- registered by the HDF5 library and of use to the general
- public. They are described in a separate section
- below.
-
-
-
-
- 256-511
Filter numbers in this range are used for testing only
- and can be used temporarily by any organization. No
- attempt is made to resolve numbering conflicts since all
- definitions are by nature temporary.
-
-
- 512-65535
Reserved for future assignment. Please contact the
- HDF5 development
- team to reserve a value or range of values for
- use by your filters.
- 2. Defining and Querying the Filter Pipeline
-
- H5Dwrite()
the transient filters are applied first
- in the order defined and then the permanent filters are applied
- in the order defined. For an H5Dread()
the
- opposite order is used: permanent filters in reverse order, then
- transient filters in reverse order. An H5Dread()
- must result in the same amount of data for a chunk as the
- original H5Dwrite()
.
-
- H5Pset_filter()
for a dataset creation property
- list while the transient filter pipeline is defined by calling
- that function for a dataset transfer property list.
-
-
-
-
- herr_t H5Pset_filter (hid_t plist,
- H5Z_filter_t filter, unsigned int flags,
- size_t cd_nelmts, const unsigned int
- cd_values[])
-
- int H5Pget_nfilters (hid_t plist)
-
- H5Z_filter_t H5Pget_filter (hid_t plist,
- int filter_number, unsigned int *flags,
- size_t *cd_nelmts, unsigned int
- *cd_values, size_t namelen, char name[])
- H5Pset_filter()
and returns information about a
- particular filter number in a permanent or transient pipeline
- depending on whether plist is a dataset creation or
- dataset transfer property list. On input, cd_nelmts
- indicates the number of entries in the cd_values
- array allocated by the caller while on exit it contains the
- number of values defined by the filter. The
- filter_number should be a value between zero and
- N-1 as described for H5Pget_nfilters()
- and the function will return failure (a negative value) if the
- filter number is out of range. If name is a pointer
- to an array of at least namelen bytes then the filter
- name will be copied into that array. The name will be null
- terminated if the namelen is large enough. The
- filter name returned will be the name stored in the file if present,
- otherwise the name registered for the filter, otherwise an empty string.
-
-
-
-
-
- Value
- Description
-
-
-
- H5Z_FLAG_OPTIONAL
If this bit is set then the filter is optional. If
- the filter fails (see below) during an
-
- H5Dwrite()
operation then the filter is
- just excluded from the pipeline for the chunk for which
- it failed; the filter will not participate in the
- pipeline during an H5Dread()
of the chunk.
- This is commonly used for compression filters: if the
- compression result would be larger than the input then
- the compression filter returns failure and the
- uncompressed data is stored in the file. If this bit is
- clear and a filter fails then the
- H5Dwrite()
or H5Dread()
also
- fails.
-
- 3. Defining Filters
-
- H5Z_filter_t
identification number and a comment.
-
-
-
-
-
- typedef size_t (*H5Z_func_t)(unsigned int
- flags, size_t cd_nelmts, const unsigned int
- cd_values[], size_t nbytes, size_t
- *buf_size, void **buf)
- H5Pset_filter()
function with the additional flag
- H5Z_FLAG_REVERSE
which is set when the filter is
- called as part of the input pipeline. The input buffer is
- pointed to by *buf and has a total size of
- *buf_size bytes but only nbytes are valid
- data. The filter should perform the transformation in place if
- possible and return the number of valid bytes or zero for
- failure. If the transformation cannot be done in place then
- the filter should allocate a new buffer with
- malloc()
and assign it to *buf,
- assigning the allocated size of that buffer to
- *buf_size. The old buffer should be freed
- by calling free()
.
-
-
- herr_t H5Zregister (H5Z_filter_t filter_id,
- const char *comment, H5Z_func_t
- filter)
- 4. Predefined Filters
-
- zlib
version 1.1.2 or later was found
- during configuration then the library will define a filter whose
- H5Z_filter_t
number is
- H5Z_FILTER_DEFLATE
. Since this compression method
- has the potential for generating compressed data which is larger
- than the original, the H5Z_FLAG_OPTIONAL
flag
- should be turned on so such cases can be handled gracefully by
- storing the original data instead of the compressed data. The
- cd_nelmts should be one, with cd_values[0]
- being a compression aggression level between zero and nine,
- inclusive (zero is the fastest compression while nine results in
- the best compression ratio).
-
- H5Z_FILTER_DEFLATE
filter to a pipeline is:
-
-
-
-
- herr_t H5Pset_deflate (hid_t plist, unsigned
- aggression)
- zlib
isn't detected during
- configuration the application can define
- H5Z_FILTER_DEFLATE
as a permanent filter. If the
- filter is marked as optional (as with
- H5Pset_deflate()
) then it will always fail and be
- automatically removed from the pipeline. Applications that read
- data will fail only if the data is actually compressed; they
- won't fail if H5Z_FILTER_DEFLATE
was part of the
- permanent output pipeline but was automatically excluded because
- it didn't exist when the data was written.
-
- zlib
can be acquired from
- http://www.cdrom.com/pub/infozip/zlib/
.
-
- 5. Example
-
- md5()
function was not detected at
- configuration time (left as an exercise for the reader).
- Otherwise the function is divided into an input half and an output
- half. The output half calculates a checksum, increases the size
- of the output buffer if necessary, and appends the checksum to
- the end of the buffer. The input half calculates the checksum
- on the first part of the buffer and compares it to the checksum
- already stored at the end of the buffer. If the two differ then
- zero (failure) is returned, otherwise the buffer size is reduced
- to exclude the checksum.
-
-
-
-
-
-
-
-
-
-
-size_t
-md5_filter(unsigned int flags, size_t cd_nelmts,
- const unsigned int cd_values[], size_t nbytes,
- size_t *buf_size, void **buf)
-{
-#ifdef HAVE_MD5
- unsigned char cksum[16];
-
- if (flags & H5Z_REVERSE) {
- /* Input */
- assert(nbytes>=16);
- md5(nbytes-16, *buf, cksum);
-
- /* Compare */
- if (memcmp(cksum, (char*)(*buf)+nbytes-16, 16)) {
- return 0; /*fail*/
- }
-
- /* Strip off checksum */
- return nbytes-16;
-
- } else {
- /* Output */
- md5(nbytes, *buf, cksum);
-
- /* Increase buffer size if necessary */
- if (nbytes+16>*buf_size) {
- *buf_size = nbytes + 16;
- *buf = realloc(*buf, *buf_size);
- }
-
- /* Append checksum */
- memcpy((char*)(*buf)+nbytes, cksum, 16);
- return nbytes+16;
- }
-#else
- return 0; /*fail*/
-#endif
-}
-
H5Z_filter_t
numbers
- from the reserved range. We'll randomly choose 305.
-
-
-
-
-
-
-
-
-
-
-#define FILTER_MD5 305
-herr_t status = H5Zregister(FILTER_MD5, "md5 checksum", md5_filter);
-
-
-
-
-
-
-
-
-
-hid_t dcpl = H5Pcreate(H5P_DATASET_CREATE);
-hsize_t chunk_size[3] = {10,10,10};
-H5Pset_chunk(dcpl, 3, chunk_size);
-H5Pset_filter(dcpl, FILTER_MD5, 0, 0, NULL);
-hid_t dset = H5Dcreate(file, "dset", H5T_NATIVE_DOUBLE, space, dcpl);
-
6. Filter Diagnostics
-
- configure
- --enable-debug=z
) then filter statistics are printed when
- the application exits normally or the library is closed. The
- statistics are written to the standard error stream and include
- two lines for each filter that was used: one for input and one
- for output. The following fields are displayed:
-
-
-
-
-
-
- Field Name
- Description
-
-
-
- Method
- This is the name of the method as defined with
-
- H5Zregister()
 with the characters
- "&lt;" or "&gt;" prepended to indicate
- input or output.
-
-
- Total
- The total number of bytes processed by the filter
- including errors. This is the maximum of the
- nbytes argument or the return value.
-
-
-
- Errors
- This field shows the number of bytes of the Total
- column which can be attributed to errors.
-
-
-
- User, System, Elapsed
- These are the amount of user time, system time, and
- elapsed time in seconds spent in the filter function.
- Elapsed time is sensitive to system load. These times
- may be zero on operating systems that don't support the
- required operations.
-
-
- Bandwidth
- This is the filter bandwidth which is the total
- number of bytes processed divided by elapsed time.
- Since elapsed time is subject to system load the
- bandwidth numbers cannot always be trusted.
- Furthermore, the bandwidth includes bytes attributed to
- errors, which may significantly taint the value if the
- function is able to detect errors without much
- expense.
-
-
-
-
-
-
-
-
-H5Z: filter statistics accumulated over life of library:
- Method Total Errors User System Elapsed Bandwidth
- ------ ----- ------ ---- ------ ------- ---------
- >deflate 160000 40000 0.62 0.74 1.33 117.5 kBs
- <deflate 120000 0 0.11 0.00 0.12 1.000 MBs
-
-
-
-
-
-
-
-
-
-
-HDF Help Desk
-
-Describes HDF5 Release 1.4.5, February 2003
-
-
-Last modified: 2 August 2001
-
-
-
-
-
diff --git a/doc/html/Glossary.html b/doc/html/Glossary.html
deleted file mode 100644
index fd32c97..0000000
--- a/doc/html/Glossary.html
+++ /dev/null
@@ -1,573 +0,0 @@
-
-
-
-
-
- HDF5 documents and links
-
- Introduction to HDF5
-
-
- HDF5 User's Guide
- HDF5 Reference Manual
- HDF5 Application Developer's Guide
-
-HDF5 Glossary
-
-
-
-
-HDF Help Desk
-
-Describes HDF5 Release 1.7, the unreleased development branch; working toward HDF5 Release 1.8.0
-
-
-
-
diff --git a/doc/html/Graphics/C++.gif b/doc/html/Graphics/C++.gif
deleted file mode 100755
index 120b7cc..0000000
Binary files a/doc/html/Graphics/C++.gif and /dev/null differ
diff --git a/doc/html/Graphics/FORTRAN.gif b/doc/html/Graphics/FORTRAN.gif
deleted file mode 100755
index d08a451..0000000
Binary files a/doc/html/Graphics/FORTRAN.gif and /dev/null differ
diff --git a/doc/html/Graphics/Java.gif b/doc/html/Graphics/Java.gif
deleted file mode 100755
index a064d1d..0000000
Binary files a/doc/html/Graphics/Java.gif and /dev/null differ
diff --git a/doc/html/Graphics/Makefile.am b/doc/html/Graphics/Makefile.am
deleted file mode 100644
index 3e65c67..0000000
--- a/doc/html/Graphics/Makefile.am
+++ /dev/null
@@ -1,17 +0,0 @@
-# HDF5 Library Doc Makefile(.in)
-#
-# Copyright (C) 1997, 2002
-# National Center for Supercomputing Applications.
-# All rights reserved.
-#
-##
-## Makefile.am
-## Run automake to generate a Makefile.in from this file.
-#
-
-include $(top_srcdir)/config/commence-doc.am
-
-localdocdir=$(docdir)/hdf5/Graphics
-
-# Public doc files (to be installed)...
-localdoc_DATA=C++.gif FORTRAN.gif Java.gif OtherAPIs.gif
diff --git a/doc/html/Graphics/Makefile.in b/doc/html/Graphics/Makefile.in
deleted file mode 100644
index 50d0abf..0000000
--- a/doc/html/Graphics/Makefile.in
+++ /dev/null
@@ -1,485 +0,0 @@
-# Makefile.in generated by automake 1.9.5 from Makefile.am.
-# @configure_input@
-
-# Copyright (C) 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002,
-# 2003, 2004, 2005 Free Software Foundation, Inc.
-# This Makefile.in is free software; the Free Software Foundation
-# gives unlimited permission to copy and/or distribute it,
-# with or without modifications, as long as this notice is preserved.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
-# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
-# PARTICULAR PURPOSE.
-
-@SET_MAKE@
-
-# HDF5 Library Doc Makefile(.in)
-#
-# Copyright (C) 1997, 2002
-# National Center for Supercomputing Applications.
-# All rights reserved.
-#
-#
-
-srcdir = @srcdir@
-top_srcdir = @top_srcdir@
-VPATH = @srcdir@
-pkgdatadir = $(datadir)/@PACKAGE@
-pkglibdir = $(libdir)/@PACKAGE@
-pkgincludedir = $(includedir)/@PACKAGE@
-top_builddir = ../../..
-am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd
-INSTALL = @INSTALL@
-install_sh_DATA = $(install_sh) -c -m 644
-install_sh_PROGRAM = $(install_sh) -c
-install_sh_SCRIPT = $(install_sh) -c
-INSTALL_HEADER = $(INSTALL_DATA)
-transform = $(program_transform_name)
-NORMAL_INSTALL = :
-PRE_INSTALL = :
-POST_INSTALL = :
-NORMAL_UNINSTALL = :
-PRE_UNINSTALL = :
-POST_UNINSTALL = :
-build_triplet = @build@
-host_triplet = @host@
-DIST_COMMON = $(srcdir)/Makefile.am $(srcdir)/Makefile.in \
- $(top_srcdir)/config/commence-doc.am \
- $(top_srcdir)/config/commence.am
-subdir = doc/html/Graphics
-ACLOCAL_M4 = $(top_srcdir)/aclocal.m4
-am__aclocal_m4_deps = $(top_srcdir)/configure.in
-am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \
- $(ACLOCAL_M4)
-mkinstalldirs = $(SHELL) $(top_srcdir)/bin/mkinstalldirs
-CONFIG_HEADER = $(top_builddir)/src/H5config.h
-CONFIG_CLEAN_FILES =
-SOURCES =
-DIST_SOURCES =
-am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`;
-am__vpath_adj = case $$p in \
- $(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \
- *) f=$$p;; \
- esac;
-am__strip_dir = `echo $$p | sed -e 's|^.*/||'`;
-am__installdirs = "$(DESTDIR)$(localdocdir)"
-localdocDATA_INSTALL = $(INSTALL_DATA)
-DATA = $(localdoc_DATA)
-DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST)
-
-# Set the paths for AFS installs of autotools for Linux machines
-# Ideally, these tools should never be needed during the build.
-ACLOCAL = /afs/ncsa/projects/hdf/packages/automake_1.9.5/Linux_2.4/bin/aclocal -I /afs/ncsa/projects/hdf/packages/libtool_1.5.14/Linux_2.4/share/aclocal
-ADD_PARALLEL_FILES = @ADD_PARALLEL_FILES@
-AMDEP_FALSE = @AMDEP_FALSE@
-AMDEP_TRUE = @AMDEP_TRUE@
-AMTAR = @AMTAR@
-AM_MAKEFLAGS = @AM_MAKEFLAGS@
-AR = @AR@
-AUTOCONF = /afs/ncsa/projects/hdf/packages/autoconf_2.59/Linux_2.4/bin/autoconf
-AUTOHEADER = /afs/ncsa/projects/hdf/packages/autoconf_2.59/Linux_2.4/bin/autoheader
-AUTOMAKE = /afs/ncsa/projects/hdf/packages/automake_1.9.5/Linux_2.4/bin/automake
-AWK = @AWK@
-BUILD_CXX_CONDITIONAL_FALSE = @BUILD_CXX_CONDITIONAL_FALSE@
-BUILD_CXX_CONDITIONAL_TRUE = @BUILD_CXX_CONDITIONAL_TRUE@
-BUILD_FORTRAN_CONDITIONAL_FALSE = @BUILD_FORTRAN_CONDITIONAL_FALSE@
-BUILD_FORTRAN_CONDITIONAL_TRUE = @BUILD_FORTRAN_CONDITIONAL_TRUE@
-BUILD_HDF5_HL_CONDITIONAL_FALSE = @BUILD_HDF5_HL_CONDITIONAL_FALSE@
-BUILD_HDF5_HL_CONDITIONAL_TRUE = @BUILD_HDF5_HL_CONDITIONAL_TRUE@
-BUILD_PABLO_CONDITIONAL_FALSE = @BUILD_PABLO_CONDITIONAL_FALSE@
-BUILD_PABLO_CONDITIONAL_TRUE = @BUILD_PABLO_CONDITIONAL_TRUE@
-BUILD_PARALLEL_CONDITIONAL_FALSE = @BUILD_PARALLEL_CONDITIONAL_FALSE@
-BUILD_PARALLEL_CONDITIONAL_TRUE = @BUILD_PARALLEL_CONDITIONAL_TRUE@
-BUILD_PDB2HDF = @BUILD_PDB2HDF@
-BUILD_PDB2HDF_CONDITIONAL_FALSE = @BUILD_PDB2HDF_CONDITIONAL_FALSE@
-BUILD_PDB2HDF_CONDITIONAL_TRUE = @BUILD_PDB2HDF_CONDITIONAL_TRUE@
-BYTESEX = @BYTESEX@
-CC = @CC@
-CCDEPMODE = @CCDEPMODE@
-CC_VERSION = @CC_VERSION@
-CFLAGS = @CFLAGS@
-CONFIG_DATE = @CONFIG_DATE@
-CONFIG_MODE = @CONFIG_MODE@
-CONFIG_USER = @CONFIG_USER@
-CPP = @CPP@
-CPPFLAGS = @CPPFLAGS@
-CXX = @CXX@
-CXXCPP = @CXXCPP@
-CXXDEPMODE = @CXXDEPMODE@
-CXXFLAGS = @CXXFLAGS@
-CYGPATH_W = @CYGPATH_W@
-DEBUG_PKG = @DEBUG_PKG@
-DEFS = @DEFS@
-DEPDIR = @DEPDIR@
-DYNAMIC_DIRS = @DYNAMIC_DIRS@
-ECHO = @ECHO@
-ECHO_C = @ECHO_C@
-ECHO_N = @ECHO_N@
-ECHO_T = @ECHO_T@
-EGREP = @EGREP@
-EXEEXT = @EXEEXT@
-F77 = @F77@
-
-# Make sure that these variables are exported to the Makefiles
-F9XMODEXT = @F9XMODEXT@
-F9XMODFLAG = @F9XMODFLAG@
-F9XSUFFIXFLAG = @F9XSUFFIXFLAG@
-FC = @FC@
-FCFLAGS = @FCFLAGS@
-FCLIBS = @FCLIBS@
-FFLAGS = @FFLAGS@
-FILTERS = @FILTERS@
-FSEARCH_DIRS = @FSEARCH_DIRS@
-H5_VERSION = @H5_VERSION@
-HADDR_T = @HADDR_T@
-HDF5_INTERFACES = @HDF5_INTERFACES@
-HID_T = @HID_T@
-HL = @HL@
-HL_FOR = @HL_FOR@
-HSIZET = @HSIZET@
-HSIZE_T = @HSIZE_T@
-HSSIZE_T = @HSSIZE_T@
-INSTALL_DATA = @INSTALL_DATA@
-INSTALL_PROGRAM = @INSTALL_PROGRAM@
-INSTALL_SCRIPT = @INSTALL_SCRIPT@
-INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@
-INSTRUMENT_LIBRARY = @INSTRUMENT_LIBRARY@
-LDFLAGS = @LDFLAGS@
-LIBOBJS = @LIBOBJS@
-LIBS = @LIBS@
-LIBTOOL = @LIBTOOL@
-LN_S = @LN_S@
-LTLIBOBJS = @LTLIBOBJS@
-LT_STATIC_EXEC = @LT_STATIC_EXEC@
-MAINT = @MAINT@
-MAINTAINER_MODE_FALSE = @MAINTAINER_MODE_FALSE@
-MAINTAINER_MODE_TRUE = @MAINTAINER_MODE_TRUE@
-MAKEINFO = @MAKEINFO@
-MPE = @MPE@
-OBJECT_NAMELEN_DEFAULT_F = @OBJECT_NAMELEN_DEFAULT_F@
-OBJEXT = @OBJEXT@
-PACKAGE = @PACKAGE@
-PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@
-PACKAGE_NAME = @PACKAGE_NAME@
-PACKAGE_STRING = @PACKAGE_STRING@
-PACKAGE_TARNAME = @PACKAGE_TARNAME@
-PACKAGE_VERSION = @PACKAGE_VERSION@
-PARALLEL = @PARALLEL@
-PATH_SEPARATOR = @PATH_SEPARATOR@
-PERL = @PERL@
-PTHREAD = @PTHREAD@
-RANLIB = @RANLIB@
-ROOT = @ROOT@
-RUNPARALLEL = @RUNPARALLEL@
-RUNSERIAL = @RUNSERIAL@
-R_INTEGER = @R_INTEGER@
-R_LARGE = @R_LARGE@
-SEARCH = @SEARCH@
-SETX = @SETX@
-SET_MAKE = @SET_MAKE@
-
-# Hardcode SHELL to be /bin/sh. Most machines have this shell, and
-# on at least one machine configure fails to detect its existence (janus).
-# Also, when HDF5 is configured on one machine but run on another,
-# configure's automatic SHELL detection may not work on the build machine.
-SHELL = /bin/sh
-SIZE_T = @SIZE_T@
-STATIC_SHARED = @STATIC_SHARED@
-STRIP = @STRIP@
-TESTPARALLEL = @TESTPARALLEL@
-TRACE_API = @TRACE_API@
-USE_FILTER_DEFLATE = @USE_FILTER_DEFLATE@
-USE_FILTER_FLETCHER32 = @USE_FILTER_FLETCHER32@
-USE_FILTER_NBIT = @USE_FILTER_NBIT@
-USE_FILTER_SCALEOFFSET = @USE_FILTER_SCALEOFFSET@
-USE_FILTER_SHUFFLE = @USE_FILTER_SHUFFLE@
-USE_FILTER_SZIP = @USE_FILTER_SZIP@
-VERSION = @VERSION@
-ac_ct_AR = @ac_ct_AR@
-ac_ct_CC = @ac_ct_CC@
-ac_ct_CXX = @ac_ct_CXX@
-ac_ct_F77 = @ac_ct_F77@
-ac_ct_FC = @ac_ct_FC@
-ac_ct_RANLIB = @ac_ct_RANLIB@
-ac_ct_STRIP = @ac_ct_STRIP@
-am__fastdepCC_FALSE = @am__fastdepCC_FALSE@
-am__fastdepCC_TRUE = @am__fastdepCC_TRUE@
-am__fastdepCXX_FALSE = @am__fastdepCXX_FALSE@
-am__fastdepCXX_TRUE = @am__fastdepCXX_TRUE@
-am__include = @am__include@
-am__leading_dot = @am__leading_dot@
-am__quote = @am__quote@
-am__tar = @am__tar@
-am__untar = @am__untar@
-bindir = @bindir@
-build = @build@
-build_alias = @build_alias@
-build_cpu = @build_cpu@
-build_os = @build_os@
-build_vendor = @build_vendor@
-datadir = @datadir@
-exec_prefix = @exec_prefix@
-host = @host@
-host_alias = @host_alias@
-host_cpu = @host_cpu@
-host_os = @host_os@
-host_vendor = @host_vendor@
-
-# Install directories that automake doesn't know about
-includedir = $(exec_prefix)/include
-infodir = @infodir@
-install_sh = @install_sh@
-libdir = @libdir@
-libexecdir = @libexecdir@
-localstatedir = @localstatedir@
-mandir = @mandir@
-mkdir_p = @mkdir_p@
-oldincludedir = @oldincludedir@
-prefix = @prefix@
-program_transform_name = @program_transform_name@
-sbindir = @sbindir@
-sharedstatedir = @sharedstatedir@
-sysconfdir = @sysconfdir@
-target_alias = @target_alias@
-
-# Shell commands used in Makefiles
-RM = rm -f
-CP = cp
-
-# Some machines need a command to run executables; this is that command
-# so that our tests will run.
-# We use RUNTESTS instead of RUNSERIAL directly because it may be that
-# some tests need to be run with a different command. Older versions
-# of the makefiles used the command
-# $(LIBTOOL) --mode=execute
-# in some directories, for instance.
-RUNTESTS = $(RUNSERIAL)
-
-# Libraries to link to while building
-LIBHDF5 = $(top_builddir)/src/libhdf5.la
-LIBH5TEST = $(top_builddir)/test/libh5test.la
-LIBH5F = $(top_builddir)/fortran/src/libhdf5_fortran.la
-LIBH5FTEST = $(top_builddir)/fortran/test/libh5test_fortran.la
-LIBH5CPP = $(top_builddir)/c++/src/libhdf5_cpp.la
-LIBH5TOOLS = $(top_builddir)/tools/lib/libh5tools.la
-LIBH5_HL = $(top_builddir)/hl/src/libhdf5_hl.la
-LIBH5F_HL = $(top_builddir)/hl/fortran/src/libhdf5hl_fortran.la
-LIBH5CPP_HL = $(top_builddir)/hl/c++/src/libhdf5_hl_cpp.la
-docdir = $(exec_prefix)/doc
-
-# Scripts used to build examples
-H5CC = $(bindir)/h5cc
-H5CC_PP = $(bindir)/h5pcc
-H5FC = $(bindir)/h5fc
-H5FC_PP = $(bindir)/h5pfc
-
-# .chkexe and .chksh files are used to mark tests that have run successfully.
-MOSTLYCLEANFILES = *.chkexe *.chksh
-localdocdir = $(docdir)/hdf5/Graphics
-
-# Public doc files (to be installed)...
-localdoc_DATA = C++.gif FORTRAN.gif Java.gif OtherAPIs.gif
-all: all-am
-
-.SUFFIXES:
-$(srcdir)/Makefile.in: @MAINTAINER_MODE_TRUE@ $(srcdir)/Makefile.am $(top_srcdir)/config/commence-doc.am $(top_srcdir)/config/commence.am $(am__configure_deps)
- @for dep in $?; do \
- case '$(am__configure_deps)' in \
- *$$dep*) \
- cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh \
- && exit 0; \
- exit 1;; \
- esac; \
- done; \
- echo ' cd $(top_srcdir) && $(AUTOMAKE) --foreign doc/html/Graphics/Makefile'; \
- cd $(top_srcdir) && \
- $(AUTOMAKE) --foreign doc/html/Graphics/Makefile
-.PRECIOUS: Makefile
-Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status
- @case '$?' in \
- *config.status*) \
- cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \
- *) \
- echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \
- cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \
- esac;
-
-$(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES)
- cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
-
-$(top_srcdir)/configure: @MAINTAINER_MODE_TRUE@ $(am__configure_deps)
- cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
-$(ACLOCAL_M4): @MAINTAINER_MODE_TRUE@ $(am__aclocal_m4_deps)
- cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
-
-mostlyclean-libtool:
- -rm -f *.lo
-
-clean-libtool:
- -rm -rf .libs _libs
-
-distclean-libtool:
- -rm -f libtool
-uninstall-info-am:
-install-localdocDATA: $(localdoc_DATA)
- @$(NORMAL_INSTALL)
- test -z "$(localdocdir)" || $(mkdir_p) "$(DESTDIR)$(localdocdir)"
- @list='$(localdoc_DATA)'; for p in $$list; do \
- if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \
- f=$(am__strip_dir) \
- echo " $(localdocDATA_INSTALL) '$$d$$p' '$(DESTDIR)$(localdocdir)/$$f'"; \
- $(localdocDATA_INSTALL) "$$d$$p" "$(DESTDIR)$(localdocdir)/$$f"; \
- done
-
-uninstall-localdocDATA:
- @$(NORMAL_UNINSTALL)
- @list='$(localdoc_DATA)'; for p in $$list; do \
- f=$(am__strip_dir) \
- echo " rm -f '$(DESTDIR)$(localdocdir)/$$f'"; \
- rm -f "$(DESTDIR)$(localdocdir)/$$f"; \
- done
-tags: TAGS
-TAGS:
-
-ctags: CTAGS
-CTAGS:
-
-
-distdir: $(DISTFILES)
- $(mkdir_p) $(distdir)/../../../config
- @srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; \
- topsrcdirstrip=`echo "$(top_srcdir)" | sed 's|.|.|g'`; \
- list='$(DISTFILES)'; for file in $$list; do \
- case $$file in \
- $(srcdir)/*) file=`echo "$$file" | sed "s|^$$srcdirstrip/||"`;; \
- $(top_srcdir)/*) file=`echo "$$file" | sed "s|^$$topsrcdirstrip/|$(top_builddir)/|"`;; \
- esac; \
- if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \
- dir=`echo "$$file" | sed -e 's,/[^/]*$$,,'`; \
- if test "$$dir" != "$$file" && test "$$dir" != "."; then \
- dir="/$$dir"; \
- $(mkdir_p) "$(distdir)$$dir"; \
- else \
- dir=''; \
- fi; \
- if test -d $$d/$$file; then \
- if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \
- cp -pR $(srcdir)/$$file $(distdir)$$dir || exit 1; \
- fi; \
- cp -pR $$d/$$file $(distdir)$$dir || exit 1; \
- else \
- test -f $(distdir)/$$file \
- || cp -p $$d/$$file $(distdir)/$$file \
- || exit 1; \
- fi; \
- done
-check-am: all-am
-check: check-am
-all-am: Makefile $(DATA)
-installdirs:
- for dir in "$(DESTDIR)$(localdocdir)"; do \
- test -z "$$dir" || $(mkdir_p) "$$dir"; \
- done
-install: install-am
-install-exec: install-exec-am
-install-data: install-data-am
-uninstall: uninstall-am
-
-install-am: all-am
- @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am
-
-installcheck: installcheck-am
-install-strip:
- $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
- install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
- `test -z '$(STRIP)' || \
- echo "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'"` install
-mostlyclean-generic:
- -test -z "$(MOSTLYCLEANFILES)" || rm -f $(MOSTLYCLEANFILES)
-
-clean-generic:
-
-distclean-generic:
- -test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES)
-
-maintainer-clean-generic:
- @echo "This command is intended for maintainers to use"
- @echo "it deletes files that may require special tools to rebuild."
-clean: clean-am
-
-clean-am: clean-generic clean-libtool mostlyclean-am
-
-distclean: distclean-am
- -rm -f Makefile
-distclean-am: clean-am distclean-generic distclean-libtool
-
-dvi: dvi-am
-
-dvi-am:
-
-html: html-am
-
-info: info-am
-
-info-am:
-
-install-data-am: install-localdocDATA
-
-install-exec-am:
-
-install-info: install-info-am
-
-install-man:
-
-installcheck-am:
-
-maintainer-clean: maintainer-clean-am
- -rm -f Makefile
-maintainer-clean-am: distclean-am maintainer-clean-generic
-
-mostlyclean: mostlyclean-am
-
-mostlyclean-am: mostlyclean-generic mostlyclean-libtool
-
-pdf: pdf-am
-
-pdf-am:
-
-ps: ps-am
-
-ps-am:
-
-uninstall-am: uninstall-info-am uninstall-localdocDATA
-
-.PHONY: all all-am check check-am clean clean-generic clean-libtool \
- distclean distclean-generic distclean-libtool distdir dvi \
- dvi-am html html-am info info-am install install-am \
- install-data install-data-am install-exec install-exec-am \
- install-info install-info-am install-localdocDATA install-man \
- install-strip installcheck installcheck-am installdirs \
- maintainer-clean maintainer-clean-generic mostlyclean \
- mostlyclean-generic mostlyclean-libtool pdf pdf-am ps ps-am \
- uninstall uninstall-am uninstall-info-am \
- uninstall-localdocDATA
-
-
-# Ignore most rules
-lib progs check test _test check-p check-s:
- @echo "Nothing to be done"
-
-tests dep depend:
- @@SETX@; for d in X $(SUBDIRS); do \
- if test $$d != X; then \
- (cd $$d && $(MAKE) $(AM_MAKEFLAGS) $@) || exit 1; \
-	  fi; \
- done
-
-# In docs directory, install-doc is the same as install
-install-doc install-all:
- $(MAKE) $(AM_MAKEFLAGS) install
-uninstall-doc uninstall-all:
- $(MAKE) $(AM_MAKEFLAGS) uninstall
-# Tell versions [3.59,3.63) of GNU make to not export all variables.
-# Otherwise a system limit (for SysV at least) may be exceeded.
-.NOEXPORT:
diff --git a/doc/html/Graphics/OtherAPIs.gif b/doc/html/Graphics/OtherAPIs.gif
deleted file mode 100755
index 8ae8902..0000000
Binary files a/doc/html/Graphics/OtherAPIs.gif and /dev/null differ
diff --git a/doc/html/Groups.html b/doc/html/Groups.html
deleted file mode 100644
index 2941008..0000000
--- a/doc/html/Groups.html
+++ /dev/null
@@ -1,404 +0,0 @@
-
-
-
-
-
-
-
-
- HDF5 documents and links
-
- Introduction to HDF5
- HDF5 Reference Manual
- HDF5 User's Guide for Release 1.6
-
-
- And in this document, the
- HDF5 User's Guide from Release 1.4.5:
-
- Files
- Datasets
- Datatypes
- Dataspaces
- Groups
-
- References
- Attributes
- Property Lists
- Error Handling
-
- Filters
- Caching
- Chunking
- Mounting Files
-
- Performance
- Debugging
- Environment
- DDL
-
-The Group Interface (H5G)
-
- 1. Introduction
-
- 2. Names
-
-
-
-
-
-
- Location Type
- Object Name
- Description
-
-
-
- File ID
-
- /foo/bar
The object
- bar
in group foo
- in the root group.
-
-
- Group ID
-
- /foo/bar
The object
- bar
in group foo
- in the root group of the file containing the specified
- group. In other words, the group ID's only purpose is
- to supply a file.
-
-
- File ID
-
- /
The root group of the specified file.
-
-
-
- Group ID
-
- /
The root group of the file containing the specified
- group.
-
-
-
- File ID
-
- foo/bar
The object
- bar
in group foo
- in the specified group.
-
-
- Group ID
-
- foo/bar
The object
- bar
in group foo
- in the specified group.
-
-
- File ID
-
- .
The root group of the file.
-
-
-
- Group ID
-
- .
The specified group.
-
-
-
- Other ID
-
- .
The specified object.
- H5Dcreate
returns an error if a
- dataset with the dataset name specified in the parameter list
- already exists at the location specified in the parameter list.
-
-
- 3. Creating, Opening, and Closing Groups
-
- H5Gcreate()
function,
- and existing groups can be accessed with
- H5Gopen()
. Both functions return an object ID which
- should be eventually released by calling
- H5Gclose()
.
-
-
-
-
- hid_t H5Gcreate (hid_t location_id, const char
- *name, size_t size_hint)
- H5Gclose()
- when it's no longer needed. A negative value is returned for
- failure.
-
-
- hid_t H5Gopen (hid_t location_id, const char
- *name)
- H5Gclose()
when it is no
- longer needed. A negative value is returned for failure.
-
-
- herr_t H5Gclose (hid_t group_id)
- H5Gcreate()
or
- H5Gopen()
. After closing a group the
- group_id should not be used again. This function
- returns zero for success or a negative value for failure.
- 4. Objects with Multiple Names
-
-
-
-
-
- herr_t H5Glink (hid_t file_id, H5G_link_t
- link_type, const char *current_name,
- const char *new_name)
- H5G_LINK_HARD
then a new
- hard link is created. Otherwise if link_type is
- H5G_LINK_SOFT
a soft link is created which is an
- alias for the current_name. When creating a soft
- link the object need not exist. This function returns zero
- for success or negative for failure.
-
-
- herr_t H5Gunlink (hid_t file_id, const char
- *name)
- 5. Comments
-
-
-
-
-
- herr_t H5Gset_comment (hid_t loc_id, const
- char *name, const char *comment)
-
- herr_t H5Gget_comment (hid_t loc_id, const
- char *name, size_t bufsize, char
- *comment)
- 6. Unlinking Datasets with H5Gmove and H5Gunlink
-
-
- H5Gmove
and
- H5Gunlink
.
-
- H5Gmove
and H5Gunlink
- each include a step that unlinks pointers to a dataset or group.
- If the link that is removed is on the only path leading
- to a dataset or group, that dataset or group will become
- inaccessible in the file.
-
- group2 can only be accessed via the following path,
- where top_group is a member of the file's root group:
-
-     /top_group/group1/group2/
-
- Using H5Gmove, top_group is renamed
- to be a member of group2. At this point, since
- top_group was the only route from the root group
- to group1, there is no longer a path by which
- one can access group1, group2, or
- any member datasets.
- And top_group and any member datasets have also
- become inaccessible.
Functionality | -netCDF | -SD | -AIO | -HDF5 | -Comments | -
---|---|---|---|---|---|
Open existing file for read/write | -ncopen | -SDstart | -AIO_open | -H5Fopen | -|
Creates new file for read/write. | -nccreate | -H5Fcreate | -SD API handles this with SDopen | -||
Close file | -ncclose | -SDend | -AIO_close | -H5Fclose | -|
Redefine parameters | -ncredef | -Unnecessary under SD & HDF5 data-models | -|||
End "define" mode | -ncendef | -Unneccessary under SD & HDF5 data-models | -|||
Query the number of datasets, dimensions and attributes in a file | -ncinquire | -SDfileinfo | -H5Dget_info H5Rget_num_relations H5Gget_num_contents |
-HDF5 interface is more granular and flexible | -|
Update a writeable file with current changes | -ncsync | -AIO_flush | -H5Mflush | -HDF5 interface is more flexible because it can be applied to parts of the -file hierarchy instead of the whole file at once. The SD interface does not -have this feature, although most of the lower HDF library supports it. | -|
Close file access without applying recent changes | -ncabort | -How useful is this feature? | -|||
Create new dimension | -ncdimdef | -SDsetdimname | -H5Mcreate | -SD interface actually creates dimensions with datasets, this just allows -naming them | -|
Get ID of existing dimension | -ncdimid | -SDgetdimid | -H5Maccess | -SD interface looks up dimensions by index and the netCDF interface uses -names, but they are close enough. The HDF5 interface does not currently allow -access to particular dimensions, only the dataspace as a whole. | -|
Get size & name of dimension | -ncdiminq | -SDdiminfo | -H5Mget_name H5Sget_lrank |
-Only a rough match | -|
Rename dimension | -ncdimrename | -SDsetdimname | -H5Mset_name | -- | |
Create a new dataset | -ncvardef | -SDcreate | -AIO_mkarray | -H5Mcreate | -- |
Attach to an existing dataset | -ncvarid | -SDselect | -AIO_arr_load | -H5Maccess | -- |
Get basic information about a dataset | -ncvarinq | -SDgetinfo | -AIO_arr_get_btype AIO_arr_get_nelmts AIO_arr_get_nbdims AIO_arr_get_bdims AIO_arr_get_slab |
-H5Dget_info | -All interfaces have different levels of information that they return, some -use of auxiliary functions is required to get an equivalent amount of information | -
Write a single value to a dataset | -ncvarput1 | -SDwritedata | -AIO_write | -H5Dwrite | -What is this useful for? | -
Read a single value from a dataset | -ncvarget1 | -SDreaddata | -AIO_read | -H5Dread | -What is this useful for? | -
Write a solid hyperslab of data (i.e. subset) to a dataset | -ncvarput | -SDwritedata | -AIO_write | -H5Dwrite | -- |
Read a solid hyperslab of data (i.e. subset) from a dataset | -ncvarget | -SDreaddata | -AIO_read | -H5Dread | -- |
Write a general hyperslab of data (i.e. possibly subsampled) to a dataset | -ncvarputg | -SDwritedata | -AIO_write | -H5Dwrite | -- |
Read a general hyperslab of data (i.e. possibly subsampled) from a dataset | -ncvargetg | -SDreaddata | -AIO_read | -H5Dread | -- |
Rename a dataset variable | -ncvarrename | -H5Mset_name | -- | ||
Add an attribute to a dataset | -ncattput | -SDsetattr | -H5Rattach_oid | -HDF5 requires creating a separate object to attach to a dataset, but it also -allows objects to be attributes of any other object, even nested. | -|
Get attribute information | -ncattinq | -SDattrinfo | -H5Dget_info | -HDF5 has no specific function for attributes, they are treated as all other -objects in the file. | -|
Retrieve attribute for a dataset | -ncattget | -SDreadattr | -H5Dread | -HDF5 uses general dataset I/O for attributes. | -|
Copy attribute from one dataset to another | -ncattcopy | -What is this used for? | -|||
Get name of attribute | -ncattname | -SDattrinfo | -H5Mget_name | -- | |
Rename attribute | -ncattrename | -H5Mset_name | -- | ||
Delete attribute | -ncattdel | -H5Mdelete | -This can be faked in the current HDF interface with lower-level calls | -||
Compute # of bytes to store a number-type | -nctypelen | -DFKNTsize | -Hmm, the HDF5 Datatype interface needs this functionality. | -||
Indicate that fill-values are to be written to dataset | -ncsetfill | -SDsetfillmode | -HDF5 Datatype interface should work on this functionality | -||
Get information about "record" variables (Those datasets which share the -same unlimited dimension | -ncrecinq | -This should probably be wrapped in a higher layer interface, if it's -needed for HDF5. | -|||
Get a record from each dataset sharing the unlimited dimension | -ncrecget | -This is somewhat equivalent to reading a vdata with non-interlaced -fields, only in a dataset oriented way. This should also be wrapped in a -higher layer interface if it's necessary for HDF5. | -|||
Put a record from each dataset sharing the unlimited dimension | -ncrecput | -This is somewhat equivalent to writing a vdata with non-interlaced -fields, only in a dataset oriented way. This should also be wrapped in a -higher layer interface if it's necessary for HDF5. | -|||
Map a dataset's name to an index to reference it with | -SDnametoindex | -H5Mfind_name | -Equivalent functionality except HDF5 call returns an OID instead of an -index. | -||
Get the valid range of values for data in a dataset | -SDgetrange | -Easily implemented with attributes at a higher level for HDF5. | -|||
Release access to a dataset | -SDendaccess | -AIO_arr_destroy | -H5Mrelease | -Odd that the netCDF API doesn't have this... | -|
Set the valid range of data in a dataset | -SDsetrange | -Easily implemented with attributes at a higher level for HDF5. | -|||
Set the label, units, format, etc. of the data values in a dataset | -SDsetdatastrs | -Easily implemented with attributes at a higher level for HDF5. | -|||
Get the label, units, format, etc. of the data values in a dataset | -SDgetdatastrs | -Easily implemented with attributes at a higher level for HDF5. | -|||
Set the label, units, format, etc. of the dimensions in a dataset | -SDsetdimstrs | -Easily implemented with attributes at a higher level for HDF5. | -|||
Get the label, units, format, etc. of the dimensions in a dataset | -SDgetdimstrs | -Easily implemented with attributes at a higher level for HDF5. | -|||
Set the scale of the dimensions in a dataset | -SDsetdimscale | -Easily implemented with attributes at a higher level for HDF5. | -|||
Get the scale of the dimensions in a dataset | -SDgetdimscale | -Easily implemented with attributes at a higher level for HDF5. | -|||
Set the calibration parameters of the data values in a dataset | -SDsetcal | -Easily implemented with attributes at a higher level for HDF5. | -|||
Get the calibration parameters of the data values in a dataset | -SDgetcal | -Easily implemented with attributes at a higher level for HDF5. | -|||
Set the fill value for the data values in a dataset | -SDsetfillvalue | -HDF5 needs something like this, I'm not certain where to put it. | -|||
Get the fill value for the data values in a dataset | -SDgetfillvalue | -HDF5 needs something like this, I'm not certain where to put it. | -|||
Move/Set the dataset to be in an 'external' file | -SDsetexternalfile | -H5Dset_storage | -HDF5 has simple functions for this, but needs an API for setting up the -storage flow. | -||
Move/Set the dataset to be stored using only certain bits from the dataset | -SDsetnbitdataset | -H5Dset_storage | -HDF5 has simple functions for this, but needs an API for setting up the -storage flow. | -||
Move/Set the dataset to be stored in compressed form | -SDsetcompress | -H5Dset_storage | -HDF5 has simple functions for this, but needs an API for setting up the -storage flow. | -||
Search for a dataset attribute with a particular name | -SDfindattr | -H5Mfind_name H5Mwild_search |
-HDF5 can handle wildcard searches for this feature. | -||
Map a run-time dataset handle to a persistent disk reference | -SDidtoref | -I'm not certain this is needed for HDF5. | -|||
Map a persistent disk reference for a dataset to an index in a group | -SDreftoindex | -I'm not certain this is needed for HDF5. | -|||
Determine if a dataset is a 'record' variable (i.e. it has an unlimited dimension) | -SDisrecord | -Easily implemented by querying the dimensionality at a higher level for HDF5. | -|||
Determine if a dataset is a 'coordinate' variable (i.e. it is used as a dimension) | -SDiscoord | -I'm not certain this is needed for HDF5. | -|||
Set the access type (i.e. parallel or serial) for dataset I/O | -SDsetaccesstype | -HDF5 has functions for reading the information about this, but needs a better -API for setting up the storage flow. | -|||
Set the size of blocks used to store a dataset with unlimited dimensions | -SDsetblocksize | -HDF5 has functions for reading the information about this, but needs a better -API for setting up the storage flow. | -|||
Sets backward compatibility of dimensions created. | -SDsetdimval_comp | -Unnecessary in HDF5. | -|||
Checks backward compatibility of dimensions created. | -SDisdimval_comp | -Unnecessary in HDF5. | -|||
Move/Set the dataset to be stored in chunked form | -SDsetchunk | -H5Dset_storage | -HDF5 has simple functions for this, but needs an API for setting up the -storage flow. | -||
Get the chunking information for a dataset stored in chunked form | -SDgetchunkinfo | -H5Dstorage_detail | -- | ||
Read/Write chunks of a dataset using a chunk index | -SDreadchunk SDwritechunk |
-I'm not certain that HDF5 needs something like this. | -|||
Tune chunk caching parameters for chunked datasets | -SDsetchunkcache | -HDF5 needs something like this. | -|||
Change some default behavior of the library | -AIO_defaults | -Something like this would be useful in HDF5, to tune I/O pipelines, etc. | -|||
Flush and close all open files | -AIO_exit | -Something like this might be useful in HDF5, although it could be - encapsulated with a higher-level function. | -|||
Target an architecture for data-type storage | -AIO_target | -There are some rough parallels with using the data-type in HDF5 to create - data-type objects which can be used to write out future datasets. | -|||
Map a filename to a file ID | -AIO_filename | -H5Mget_name | -- | ||
Get the active directory (where new datasets are created) | -AIO_getcwd | -HDF5 allows multiple directories (groups) to be attached to, any of which - can have new datasets created within it. | -|||
Change active directory | -AIO_chdir | -Since HDF5 has a slightly different access method for directories (groups), - this functionality can be wrapped around calls to H5Gget_oid_by_name. | -|||
Create directory | -AIO_mkdir | -H5Mcreate | -- | ||
Return detailed information about an object | -AIO_stat | -H5Dget_info H5Dstorage_detail |
-Perhaps more information should be provided through another function in - HDF5? | -||
Get "flag" information | -AIO_getflags | -Not required in HDF5. | -|||
Set "flag" information | -AIO_setflags | -Not required in HDF5. | -|||
Get detailed information about all objects in a directory | -AIO_ls | -H5Gget_content_info_mult H5Dget_info H5Dstorage_detail |
-Only roughly equivalent functionality in HDF5, perhaps more should be - added? | -||
Get base type of object | -AIO_BASIC | -H5Gget_content_info | -- | ||
Set base type of dataset | -AIO_arr_set_btype | -H5Mcreate(DATATYPE) | -- | ||
Set dimensionality of dataset | -AIO_arr_set_bdims | -H5Mcreate(DATASPACE) | -- | ||
Set slab of dataset to write | -AIO_arr_set_slab | -This is similar to the process of creating a dataspace for use when - performing I/O on an HDF5 dataset | -|||
Describe chunking of dataset to write | -AIO_arr_set_chunk | -H5Dset_storage | -- | ||
Describe array index permutation of dataset to write | -AIO_arr_set_perm | -H5Dset_storage | -- | ||
Create a new dataset with dataspace and datatype information from an - existing dataset. | -AIO_arr_copy | -This can be mimicked in HDF5 by attaching to the datatype and dataspace of -an existing dataset and using the IDs to create new datasets. | -|||
Create a new directory to group objects within | -AIO_mkgroup | -H5Mcreate(GROUP) | -- | ||
Read name of objects in directory | -AIO_read_group | -H5Gget_content_info_mult | -- | ||
Add objects to directory | -AIO_write_group | -H5Ginsert_item_mult | -- | ||
Combine an architecture and numeric type to derive the format's datatype | -AIO_COMBINE | -This is a nice feature to add to HDF5. | -|||
Derive an architecture from the format's datatype | -AIO_ARCH | -This is a nice feature to add to HDF5. | -|||
Derive a numeric type from the format's datatype | -AIO_PNT | -This is a nice feature to add to HDF5. | -|||
Register error handling function for library to call when errors occur | -AIO_error_handler | -This should be added to HDF5. | -
- [Figure 1: Relationships among the HDF5 root group, other groups, and objects]
-
- [Figure 2: HDF5 objects -- datasets, datatypes, or dataspaces]
The format of an HDF5 file on disk encompasses several - key ideas of the HDF4 and AIO file formats as well as - addressing some shortcomings therein. The new format is - more self-describing than the HDF4 format and is more - uniformly applied to data objects in the file. - -
An HDF5 file appears to the user as a directed graph. - The nodes of this graph are the higher-level HDF5 objects - that are exposed by the HDF5 APIs: - -
At the lowest level, as information is actually written to the disk, - an HDF5 file is made up of the following objects: -
The HDF5 library uses these low-level objects to represent the
 - higher-level objects that are then presented to the user or
 - to applications through the APIs.
 - For instance, a group is an object header that contains a message that
 - points to a local heap and to a B-tree which points to symbol nodes.
 - A dataset is an object header that contains messages that describe
 - datatype, space, layout, filters, external files, fill value, etc.,
 - with the layout message pointing to either a raw data chunk or to a
 - B-tree that points to raw data chunks.
 -
 -
This document describes the lower-level data objects; - the higher-level objects and their properties are described - in the HDF5 User's Guide. - -
Three levels of information comprise the file format.
 - Level 0 contains basic information for identifying and
 - defining information about the file. Level 1 information contains
 - the information about the pieces of a file shared by many objects
 - in the file (such as B-trees and heaps). Level 2 is the rest
 - of the file and contains all of the data objects, with each object
 - partitioned into header information, also known as
 - metadata, and data.
 -
The sizes of various fields in the following layout tables are - determined by looking at the number of columns the field spans - in the table. There are three exceptions: (1) The size may be - overridden by specifying a size in parentheses, (2) the size of - addresses is determined by the Size of Offsets field - in the super block and is indicated in this document with a - superscripted 'O', and (3) the size of length fields is determined - by the Size of Lengths field in the super block and is - indicated in this document with a superscripted 'L'. - -
Values for all fields in this document should be treated as unsigned - integers, unless otherwise noted in the description of a field. - Additionally, all metadata fields are stored in little-endian byte - order. -
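The field-width and byte-order rules above can be sketched in C. Here `read_le` is an illustrative helper (not part of the HDF5 library) that decodes an n-byte little-endian unsigned field; a reader of the format would take n from the Size of Offsets or Size of Lengths field of the super block.

```c
#include <stddef.h>
#include <stdint.h>

/* Decode an nbytes-wide (1..8) unsigned integer stored least-significant
 * byte first, as HDF5 metadata fields are. */
static uint64_t read_le(const unsigned char *buf, size_t nbytes)
{
    uint64_t value = 0;
    for (size_t i = 0; i < nbytes; i++)
        value |= (uint64_t)buf[i] << (8 * i);
    return value;
}
```

For example, a 4-byte address field containing the bytes 00 02 00 00 decodes to 512.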
- 
-The super block may begin at certain predefined offsets within
 - the HDF5 file, allowing a block of unspecified content for
 - users to place additional information at the beginning (and
 - end) of the HDF5 file without limiting the HDF5 library's
 - ability to manage the objects within the file itself. This
 - feature was designed to accommodate wrapping an HDF5 file in
 - another file format or adding descriptive information to the
 - file without requiring modification of the actual file's
 - information. The super block is located by searching for the
 - HDF5 file signature at byte offset 0, byte offset 512, and at
 - successive locations in the file, each double the previous
 - location, i.e. 0, 512, 1024, 2048, etc.
 -
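The search sequence described above (offset 0, then 512, then each candidate doubling) can be sketched as a small helper; the function name is hypothetical, not an HDF5 API call.

```c
#include <stdint.h>

/* Given the previous candidate byte offset, return the next offset at
 * which to look for the HDF5 file signature: 0, 512, 1024, 2048, ...
 * Every candidate after the first two is double the previous one. */
static uint64_t next_superblock_offset(uint64_t prev)
{
    if (prev == 0)
        return 512;
    return prev * 2;
}
```

A reader would start at offset 0 and call this until the signature is found or the end of file is reached.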
The super block is composed of a file signature, followed by super block and group version numbers, information about the sizes of offset and length values used to describe items within the file, the size of each group page, and a group entry for the root object in the file.
| byte | byte | byte | byte |
|---|---|---|---|
| HDF5 File Signature (8 bytes) ||||
| Version # of Super Block | Version # of Global Free-space Storage | Version # of Root Group Symbol Table Entry | Reserved (zero) |
| Version # of Shared Header Message Format | Size of Offsets | Size of Lengths | Reserved (zero) |
| Group Leaf Node K | Group Internal Node K |||
| File Consistency Flags ||||
| Indexed Storage Internal Node K^1 | Reserved (zero)^1 |||
| Base Address^O ||||
| Address of Global Free-space Heap^O ||||
| End of File Address^O ||||
| Driver Information Block Address^O ||||
| Root Group Symbol Table Entry ||||

(Items marked with an 'O' in the above table are of the size specified in "Size of Offsets.")

(Items marked with a '1' in the above table are new in version 1 of the superblock.)
**HDF5 File Signature**: This field contains a constant value and can be used to quickly identify a file as being an HDF5 file. The constant value is designed to allow easy identification of an HDF5 file and to allow certain types of data corruption to be detected. The file signature of an HDF5 file always contains the eight bytes 137, 72, 68, 70, 13, 10, 26, 10 (in C notation, `\211 H D F \r \n \032 \n`). This signature both identifies the file as an HDF5 file and provides for immediate detection of common file-transfer problems. The first two bytes distinguish HDF5 files on systems that expect the first two bytes to identify the file type uniquely. The first byte is chosen as a non-ASCII value to reduce the probability that a text file may be misrecognized as an HDF5 file; also, it catches bad file transfers that clear bit 7. Bytes two through four name the format. The CR-LF sequence catches bad file transfers that alter newline sequences. The control-Z character stops file display under MS-DOS. The final line feed checks for the inverse of the CR-LF translation problem. (This is a direct descendant of the PNG file signature.) This field is present in version 0+ of the superblock.

**Version Number of the Super Block**: This value is used to determine the format of the information in the super block. When the format of the information in the super block is changed, the version number is incremented to the next integer and can be used to determine how the information in the super block is formatted. Values of 0 and 1 are defined for this field. This field is present in version 0+ of the superblock.

**Version Number of the File Free-space Information**: This value is used to determine the format of the information in the File Free-space Information. The only value currently valid in this field is '0', which indicates that the free space index is formatted as described below. This field is present in version 0+ of the superblock.

**Version Number of the Root Group Symbol Table Entry**: This value is used to determine the format of the information in the Root Group Symbol Table Entry. When the format of the information in that field is changed, the version number is incremented to the next integer and can be used to determine how the information in the field is formatted. The only value currently valid in this field is '0', which indicates that the root group symbol table entry is formatted as described below. This field is present in version 0+ of the superblock.

**Version Number of the Shared Header Message Format**: This value is used to determine the format of the information in a shared object header message, which is stored in the global small-data heap. Since the format of the shared header messages differs from the private header messages, a version number is used to identify changes in the format. The only value currently valid in this field is '0', which indicates that shared header messages are formatted as described below. This field is present in version 0+ of the superblock.

**Size of Offsets**: This value contains the number of bytes used to store addresses in the file. The values for the addresses of objects in the file are offsets relative to a base address, usually the address of the super block signature. This allows a wrapper to be added after the file is created without invalidating the internal offset locations. This field is present in version 0+ of the superblock.

**Size of Lengths**: This value contains the number of bytes used to store the size of an object. This field is present in version 0+ of the superblock.

**Group Leaf Node K**: Each leaf node of a group B-tree will have at least this many entries but not more than twice this many. If a group has a single leaf node then it may have fewer entries. This value must be greater than zero. See the description of B-trees below. This field is present in version 0+ of the superblock.

**Group Internal Node K**: Each internal node of a group B-tree will have at least this many entries but not more than twice this many. If the group has only one internal node then it might have fewer entries. This value must be greater than zero. See the description of B-trees below. This field is present in version 0+ of the superblock.

**File Consistency Flags**: This value contains flags to indicate information about the consistency of the information contained within the file. Currently, the following bit flags are defined: This field is present in version 0+ of the superblock.

**Indexed Storage Internal Node K**: Each internal node of an indexed storage B-tree will have at least this many entries but not more than twice this many. If the tree has only one internal node then it might have fewer entries. This value must be greater than zero. See the description of B-trees below. This field is present in version 1+ of the superblock.

**Base Address**: This is the absolute file address of the first byte of the HDF5 data within the file. The library currently constrains this value to be the absolute file address of the super block itself when creating new files; future versions of the library may provide greater flexibility. When opening an existing file whose base address does not match the offset of the superblock, the library assumes that the entire contents of the HDF5 file have been shifted within the file and adjusts the base address and end of file address to reflect their new positions. Unless otherwise noted, all other file addresses are relative to this base address. This field is present in version 0+ of the superblock.

**Address of Global Free-space Index**: Free-space management is not yet defined in the HDF5 file format and is not handled by the library. Currently this field always contains the undefined address. This field is present in version 0+ of the superblock.

**End of File Address**: This is the absolute file address of the first byte past the end of all HDF5 data. It is used to determine whether a file has been accidentally truncated and as an address where file data allocation can occur if space from the free list is not used. This field is present in version 0+ of the superblock.

**Driver Information Block Address**: This is the relative file address of the file driver information block which contains driver-specific information needed to reopen the file. If there is no driver information block then this entry should be the undefined address. This field is present in version 0+ of the superblock.

**Root Group Symbol Table Entry**: This is the symbol table entry of the root group, which serves as the entry point into the group graph for the file. This field is present in version 0+ of the superblock.
The file driver information block is an optional region of the file which contains information needed by the file driver in order to reopen a file. The format of the file driver information block is:
| byte | byte | byte | byte |
|---|---|---|---|
| Version | Reserved (zero) |||
| Driver Information Size (4 bytes) ||||
| Driver Identification (8 bytes) ||||
| Driver Information (n bytes) ||||
**Version**: The version number of the driver information block. The file format documented here is version zero.

**Driver Information Size**: The size in bytes of the Driver Information part of this structure.

**Driver Identification**: This is an eight-byte ASCII string without null termination which identifies the driver and version number of the Driver Information block. The predefined drivers supplied with the HDF5 library are identified by the letters `NCSA` followed by the first four characters of the driver name. For example, the various versions of the family driver are identified by `NCSAfami`. Identification for user-defined drivers is arbitrary but should be unique and avoid the four-character prefix "NCSA".

**Driver Information**: Driver information is stored in a format defined by the file driver and encoded/decoded by the driver callbacks invoked from the `H5FD_sb_encode` and `H5FD_sb_decode` functions.
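As an illustrative sketch (not part of the specification), the fixed portion of the driver information block can be decoded as follows. The little-endian byte order follows the note at the start of this document, and the field offsets follow the layout above; `parse_driver_info_block` is a hypothetical helper.

```python
import struct

def parse_driver_info_block(buf: bytes):
    """Decode the fixed header of a file driver information block.

    Layout (from the table above): 1-byte Version, 3 reserved bytes,
    4-byte Driver Information Size, 8-byte Driver Identification,
    then `size` bytes of driver-defined information.
    """
    version = buf[0]                            # version zero documented here
    (size,) = struct.unpack_from("<I", buf, 4)  # little-endian, per this spec
    ident = buf[8:16].decode("ascii")           # eight-byte driver tag
    info = buf[16:16 + size]                    # driver-defined payload
    return version, ident, info
```

The payload itself remains opaque here, since its format is defined by each file driver.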
B-link trees allow flexible storage for objects which tend to grow in ways that cause the object to be stored discontiguously. B-trees are described in various algorithms books, including "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, and Ronald L. Rivest. The B-link tree, in which the sibling nodes at a particular level in the tree are stored in a doubly-linked list, is described in the "Efficient Locking for Concurrent Operations on B-trees" paper by Phillip Lehman and S. Bing Yao, published in the ACM Transactions on Database Systems, Vol. 6, No. 4, December 1981.
The B-link trees implemented by the file format contain one more key than the number of children. In other words, each child pointer out of a B-tree node has a left key and a right key. The pointers out of internal nodes point to sub-trees while the pointers out of leaf nodes point to symbol nodes and raw data chunks. Aside from that difference, internal nodes and leaf nodes are identical.
| byte | byte | byte | byte |
|---|---|---|---|
| Signature ||||
| Node Type | Node Level | Entries Used ||
| Address of Left Sibling^O ||||
| Address of Right Sibling^O ||||
| Key 0 (variable size) ||||
| Address of Child 0^O ||||
| Key 1 (variable size) ||||
| Address of Child 1^O ||||
| ... ||||
| Key 2K (variable size) ||||
| Address of Child 2K^O ||||
| Key 2K+1 (variable size) ||||

(Items marked with an 'O' in the above table are of the size specified in "Size of Offsets.")
**Signature**: The ASCII character string "`TREE`" is used to indicate the beginning of a B-link tree node.

**Node Type**: Each B-link tree points to a particular type of data. This field indicates the type of data as well as implying the maximum degree K of the tree and the size of each Key field. The node types described in this document are 0 (group nodes) and 1 (chunked raw data nodes).

**Node Level**: The node level indicates the level at which this node appears in the tree (leaf nodes are at level zero). Not only does the level indicate whether child pointers point to sub-trees or to data, but it can also be used to help file consistency checking utilities reconstruct damaged trees.

**Entries Used**: This determines the number of children to which this node points. All nodes of a particular type of tree have the same maximum degree, but most nodes will point to fewer than that number of children. The valid child pointers and keys appear at the beginning of the node and the unused pointers and keys appear at the end of the node. The unused pointers and keys have undefined values.

**Address of Left Sibling**: This is the relative file address of the left sibling of the current node. If the current node is the left-most node at this level then this field is the undefined address.

**Address of Right Sibling**: This is the relative file address of the right sibling of the current node. If the current node is the right-most node at this level then this field is the undefined address.

**Keys and Child Pointers**: Each tree has 2K+1 keys with 2K child pointers interleaved between the keys. The number of keys and child pointers actually containing valid values is determined by the node's Entries Used field. If that field is N then the B-link tree contains N child pointers and N+1 keys.

**Key**: The format and size of the key values is determined by the type of data to which this tree points. The keys are ordered and are boundaries for the contents of the child pointer; that is, the key values represented by child N fall between Key N and Key N+1. Whether the interval is open or closed on each end is determined by the type of data to which the tree points. The format of the key depends on the node type: nodes of node type 0 (group nodes) and nodes of node type 1 (chunked raw data nodes) use different key formats.

**Child Pointer**: The tree node contains file addresses of subtrees or data depending on the node level. Nodes at level 0 point to data addresses, either raw data chunks or group nodes. Nodes at non-zero levels point to other nodes of the same B-tree. For raw data chunk nodes, the child pointer is the address of a single raw data chunk. For group nodes, the child pointer points to a symbol table, which contains information for multiple symbol table entries.
Conceptually, each B-tree node looks like this:

    key[0] child[0] key[1] child[1] key[2] ... key[N-1] child[N-1] key[N]
The following question must next be answered: "Is the value described by key[i] contained in child[i-1] or in child[i]?" The answer depends on the type of tree. In trees for groups (node type 0) the object described by key[i] is the greatest object contained in child[i-1], while in chunk trees (node type 1) the chunk described by key[i] is the least chunk in child[i].

That means that key[0] for group trees is sometimes unused; it points to offset zero in the heap, which is always the empty string and compares as "less-than" any valid object name.

And key[N] for chunk trees is sometimes unused; it contains a chunk offset which compares as "greater-than" any other chunk offset and has a chunk byte size of zero to indicate that it is not actually allocated.
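The two conventions above can be illustrated with a small sketch (not the library's implementation); `child_index` is a hypothetical helper that picks the child whose interval covers a search key:

```python
def child_index(keys, search, node_type):
    """Pick the child whose key interval contains `search`.

    `keys` holds the N+1 keys of a node with N children.
    Group trees (node type 0): key[i] is the greatest item in
    child[i-1], so child[i] covers the interval (key[i], key[i+1]].
    Chunk trees (node type 1): key[i] is the least chunk in
    child[i], so child[i] covers the interval [key[i], key[i+1]).
    """
    n_children = len(keys) - 1
    for i in range(n_children):
        if node_type == 0 and keys[i] < search <= keys[i + 1]:
            return i
        if node_type == 1 and keys[i] <= search < keys[i + 1]:
            return i
    return None  # search key falls outside this node's range
```

Note how the same search value can select different children depending on the tree type, because the open/closed ends of the intervals differ.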
A group is an object internal to the file that allows arbitrary nesting of objects within the file (including other groups). A group maps a set of names in the group to a set of relative file addresses where objects with those names are located in the file. Certain metadata for an object to which the group points can be cached in the group's symbol table in addition to the object's header.

An HDF5 object name space can be stored hierarchically by partitioning the name into components and storing each component in a group. The group entry for a non-ultimate component points to the group containing the next component. The group entry for the last component points to the object being named.
A group is a collection of group nodes pointed to by a B-link tree. Each group node contains entries for one or more symbols. If an attempt is made to add a symbol to an already full group node containing 2K entries, then the node is split and one node contains K symbols and the other contains K+1 symbols.
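The split described above can be sketched as follows; which of the two nodes receives the extra symbol is an illustrative choice here, not something this section specifies:

```python
def split_and_insert(entries, new_entry, k):
    """Split a full group node (2K entries) while inserting a new symbol.

    The 2K existing symbols plus the new one are redistributed so that
    one node holds K symbols and the other holds K+1.
    """
    assert len(entries) == 2 * k, "node must be full before it splits"
    merged = sorted(entries + [new_entry])   # symbols stay in name order
    return merged[:k], merged[k:]            # K entries and K+1 entries
```

The invariant that every node holds between K and 2K entries (the "Group Leaf Node K" value from the super block) is preserved by this redistribution.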
| byte | byte | byte | byte |
|---|---|---|---|
| Signature ||||
| Version Number | Reserved (0) | Number of Symbols ||
| Group Entries ||||
**Signature**: The ASCII character string "`SNOD`" is used to indicate the beginning of a group node.

**Version Number**: The version number for the group node. This document describes version 1. (There is no version '0' of the group node.)

**Number of Symbols**: Although all group nodes have the same length, most contain fewer than the maximum possible number of symbol entries. This field indicates how many entries contain valid data. The valid entries are packed at the beginning of the group node while the remaining entries contain undefined values.

**Group Entries**: Each symbol has an entry in the group node. The format of the entry is described below. There are 2K entries in each group node, where K is the "Group Leaf Node K" value from the super block.
Each group entry in a group node is designed to allow for very fast browsing of stored objects. Toward that design goal, the group entries include space for caching certain constant metadata from the object header.
| byte | byte | byte | byte |
|---|---|---|---|
| Name Offset^O ||||
| Object Header Address^O ||||
| Cache Type ||||
| Reserved ||||
| Scratch-pad Space (16 bytes) ||||

(Items marked with an 'O' in the above table are of the size specified in "Size of Offsets.")
**Name Offset**: This is the byte offset into the group local heap for the name of the object. The name is null terminated.

**Object Header Address**: Every object has an object header which serves as a permanent location for the object's metadata. In addition to appearing in the object header, some metadata can be cached in the scratch-pad space.

**Cache Type**: The cache type is determined from the object header. It also determines the format for the scratch-pad space. The values currently defined are zero (0), one (1), and two (2); the corresponding scratch-pad formats are described below.

**Reserved**: These four bytes are present so that the scratch-pad space is aligned on an eight-byte boundary. They are always set to zero.

**Scratch-pad Space**: This space is used for different purposes, depending on the value of the Cache Type field. Any metadata about a dataset object represented in the scratch-pad space is duplicated in the object header for that dataset. This metadata can include the datatype and the size of the dataspace for a dataset whose datatype is atomic and whose dataspace is fixed and less than four dimensions. Furthermore, no data is cached in the group entry scratch-pad space if the object header for the group entry has a link count greater than one.
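As an illustrative sketch (assuming 8-byte offsets and the little-endian encoding noted earlier in this document), a group entry can be decoded from the layout above like this; `parse_group_entry` is a hypothetical helper, not a library function:

```python
import struct

def parse_group_entry(buf: bytes, off_size: int = 8):
    """Decode one group (symbol table) entry.

    Layout: Name Offset (off_size bytes), Object Header Address
    (off_size bytes), 4-byte Cache Type, 4 reserved bytes, then
    16 bytes of scratch-pad space.
    """
    fmt = "<Q" if off_size == 8 else "<I"            # 8- or 4-byte offsets
    name_offset = struct.unpack_from(fmt, buf, 0)[0]
    header_addr = struct.unpack_from(fmt, buf, off_size)[0]
    cache_type = struct.unpack_from("<I", buf, 2 * off_size)[0]
    scratch = buf[2 * off_size + 8:2 * off_size + 24]
    return name_offset, header_addr, cache_type, scratch
```

How the 16 scratch-pad bytes are interpreted depends on the Cache Type value, as described next.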
The group entry scratch-pad space is formatted according to the value in the Cache Type field.

If the Cache Type field contains the value zero (0), then no information is stored in the scratch-pad space.
If the Cache Type field contains the value one (1), then the scratch-pad space contains cached metadata for another object header in the following format:
| byte | byte | byte | byte |
|---|---|---|---|
| Address of B-tree^O ||||
| Address of Name Heap^O ||||

(Items marked with an 'O' in the above table are of the size specified in "Size of Offsets.")
**Address of B-tree**: This is the file address for the root of the group's B-tree.

**Address of Name Heap**: This is the file address for the group's local heap, in which are stored the group's symbol names.
If the Cache Type field contains the value two (2), then the scratch-pad space contains cached metadata for a symbolic link in the following format:
| byte | byte | byte | byte |
|---|---|---|---|
| Offset to Link Value ||||
**Offset to Link Value**: The value of a symbolic link (that is, the name of the thing to which it points) is stored in the local heap. This field is the 4-byte offset into the local heap for the start of the link value, which is null terminated.
A heap is a collection of small heap objects. Objects can be inserted into and removed from the heap at any time. The address of a heap does not change once the heap is created. References to objects are stored in the group table; the names of those objects are stored in the local heap.
| byte | byte | byte | byte |
|---|---|---|---|
| Signature ||||
| Version | Reserved (zero) |||
| Data Segment Size^L ||||
| Offset to Head of Free-list^L ||||
| Address of Data Segment^O ||||

(Items marked with an 'L' in the above table are of the size specified in "Size of Lengths.")

(Items marked with an 'O' in the above table are of the size specified in "Size of Offsets.")
**Signature**: The ASCII character string "`HEAP`" is used to indicate the beginning of a local heap.

**Version**: Each local heap has its own version number so that new heaps can be added to old files. This document describes version zero (0) of the local heap.

**Data Segment Size**: The total amount of disk memory allocated for the heap data. This may be larger than the amount of space required by the objects stored in the heap. The extra unused space in the heap holds a linked list of free blocks.

**Offset to Head of Free-list**: This is the offset within the heap data segment of the first free block (or the undefined address if there is no free block). The free block contains "Size of Lengths" bytes that are the offset of the next free block (or the value '1' if this is the last free block) followed by "Size of Lengths" bytes that store the size of this free block. The size of the free block includes the space used to store the offset of the next free block and the size of the current block, making the minimum size of a free block 2 * "Size of Lengths".

**Address of Data Segment**: The data segment originally starts immediately after the heap header, but if the data segment must grow as a result of adding more objects, then the data segment may be relocated, in its entirety, to another part of the file.
Objects within the heap should be aligned on an 8-byte boundary.
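The free-list layout given in the Offset to Head of Free-list description above can be sketched as follows (assuming 8-byte lengths and an all-ones undefined address, both assumptions of this example):

```python
def walk_free_list(segment: bytes, head: int, len_size: int = 8):
    """Collect (offset, size) pairs for each free block in a heap data segment.

    Each free block begins with `len_size` bytes giving the offset of the
    next free block (the value 1 marks the last block), followed by
    `len_size` bytes giving this block's total size.
    """
    undefined = 2 ** (8 * len_size) - 1   # assumed encoding of "undefined"
    blocks = []
    offset = head                          # head may itself be "undefined"
    while offset not in (undefined, 1):
        nxt = int.from_bytes(segment[offset:offset + len_size], "little")
        size = int.from_bytes(
            segment[offset + len_size:offset + 2 * len_size], "little")
        blocks.append((offset, size))
        offset = nxt
    return blocks
```

Since each block's size field counts its own bookkeeping bytes, no reported size can be smaller than 2 * `len_size`.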
Each HDF5 file has a global heap which stores various types of information which is typically shared between datasets. The global heap was designed to satisfy these goals:
The implementation of the heap makes use of the memory management already available at the file level and combines that with a new top-level object called a collection to achieve Goal B. The global heap is the set of all collections. Each global heap object belongs to exactly one collection and each collection contains one or more global heap objects. For the purposes of disk I/O and caching, a collection is treated as an atomic object.
The HDF5 library creates global heap collections as needed, so there may be multiple collections throughout the file. The set of all of them is abstractly called the "global heap", although they don't actually link to each other, and there is no global place in the file where you can discover all of the collections. The collections are found simply by finding a reference to one through another object in the file (e.g., variable-length datatype elements).
| byte | byte | byte | byte |
|---|---|---|---|
| Signature ||||
| Version | Reserved (zero) |||
| Collection Size^L ||||
| Global Heap Object 1 ||||
| Global Heap Object 2 ||||
| ... ||||
| Global Heap Object N ||||
| Global Heap Object 0 (free space) ||||

(Items marked with an 'L' in the above table are of the size specified in "Size of Lengths.")
**Signature**: The ASCII character string "`GCOL`" is used to indicate the beginning of a global heap collection.

**Version**: Each collection has its own version number so that new collections can be added to old files. This document describes version one (1) of the collections (there is no version zero (0)).

**Collection Size**: This is the size in bytes of the entire collection including this field. The default (and minimum) collection size is 4096 bytes, which is a typical file system block size. This allows for 127 16-byte heap objects plus their overhead (the collection header of 16 bytes and the 16 bytes of information about each heap object).

**Global Heap Object 1 through N**: The objects are stored in any order with no intervening unused space.

**Global Heap Object 0**: Global Heap Object 0 (zero), when present, represents the free space in the collection. Free space always appears at the end of the collection. If the free space is too small to store the header for Object 0 (described below) then the header is implied and the collection contains no free space.
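The Collection Size arithmetic above can be checked directly; the names below are illustrative only:

```python
# Check of the default 4096-byte collection described above:
collection_header = 16      # bytes for the collection header
object_header = 16          # bytes of information per heap object
object_data = 16            # a 16-byte heap object
n_objects = 127

used = collection_header + n_objects * (object_header + object_data)
# 16 + 127 * 32 = 4080 bytes, which fits within the default 4096
assert used == 4080 and used <= 4096
```

The remaining 16 bytes are available for the implied Object 0 header when free space is present.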
| byte | byte | byte | byte |
|---|---|---|---|
| Heap Object ID | Reference Count |||
| Reserved ||||
| Object Size^L ||||
| Object Data ||||

(Items marked with an 'L' in the above table are of the size specified in "Size of Lengths.")
**Heap Object ID**: Each object has a unique identification number within a collection. The identification numbers are chosen so that new objects have the smallest value possible, with the exception that the identifier 0 always refers to Global Heap Object 0, the free space in the collection.

**Reference Count**: All heap objects have a reference count field. An object which is referenced from some other part of the file will have a positive reference count. The reference count for Object 0 is always zero.

**Reserved**: Zero padding to align the next field on an 8-byte boundary.

**Object Size**: This is the size of the object data stored for the object. The actual storage space allocated for the object data is rounded up to a multiple of eight.

**Object Data**: The object data is treated as a one-dimensional array of bytes to be interpreted by the caller.
The free-space index is a collection of blocks of data, dispersed throughout the file, which are currently not used by any file objects.

The super block contains a pointer to the root of the free-space description; that pointer is currently required to be the undefined address.

The format of the free-space index is not defined at this time.
Data objects contain the real information in the file. These objects compose the scientific data and other information which are generally thought of as "data" by the end-user. All the other information in the file is provided as a framework for these data objects.

A data object is composed of header information and data information. The header information contains the information needed to interpret the data information for the data object as well as additional "metadata" or pointers to additional "metadata" used to describe or annotate each data object.

The header information of an object is designed to encompass all the information about an object, except for the data itself. This information includes the dataspace, datatype, information about how the data is stored on disk (in external files, compressed, broken up in blocks, and so on), as well as other information used by the library to speed up access to the data objects or maintain a file's integrity. Information stored by user applications as attributes is also stored in the object's header. The header of each object is not necessarily located immediately prior to the object's data in the file and in fact may be located in any position in the file. The order of the messages in an object header is not significant.

Header messages are aligned on 8-byte boundaries.
| byte | byte | byte | byte |
|---|---|---|---|
| Version | Reserved (zero) | Number of Header Messages ||
| Object Reference Count ||||
| Object Header Size ||||
| Header Message Type #1 | Size of Header Message Data #1 |||
| Header Message #1 Flags | Reserved (zero) |||
| Header Message Data #1 ||||
| . . . ||||
| Header Message Type #n | Size of Header Message Data #n |||
| Header Message #n Flags | Reserved (zero) |||
| Header Message Data #n ||||
**Version**: This value is used to determine the format of the information in the object header. When the format of the information in the object header is changed, the version number is incremented and can be used to determine how the information in the object header is formatted. This document describes version one (1) (there was no version zero (0)).

**Number of Header Messages**: This value determines the number of messages listed in object headers for this object. This value includes the messages in continuation messages for this object.

**Object Reference Count**: This value specifies the number of "hard links" to this object within the current file. References to the object from external files, "soft links" in this file, and object references in this file are not tracked.

**Object Header Size**: This value specifies the number of bytes of header message data following this length field that contain object header messages for this object header. This value does not include the size of object header continuation blocks for this object elsewhere in the file.

**Header Message Type**: This value specifies the type of information included in the following header message data. The header message types for the pre-defined header messages are included in sections below.

**Size of Header Message Data**: This value specifies the number of bytes of header message data following the header message type and length information for the current message. The size includes padding bytes to make the message a multiple of eight bytes.

**Header Message Flags**: This is a bit field with the following definition:

**Header Message Data**: The format and length of this field are determined by the header message type and size respectively. Some header message types do not require any data and this information can be eliminated by setting the length of the message to zero. The data is padded with enough zeros to make the size a multiple of eight.
The header message types and the message data associated with them compose the critical "metadata" about each object. Some header messages are required for each object while others are optional. Some optional header messages may also be repeated several times in the header itself; the requirements and the number of times allowed in the header will be noted in each header message description below.

The following is a list of currently defined header messages:
Header Message Type: 0x0000

Length: varies

Status: Optional; may be repeated.

Purpose and Description: The NIL message is used to indicate a message which is to be ignored when reading the header messages for a data object. [Possibly one which has been deleted for some reason.]

Format of Data: Unspecified.
Header Message Type: 0x0001

Length: varies according to the number of dimensions, as described in the following table.

Status: Required for dataset objects; may not be repeated.

Description: The simple dataspace message describes the number of dimensions (i.e., "rank") and size of each dimension that the data object has. This message is only used for datasets which have a simple, rectilinear grid layout; datasets requiring a more complex layout (irregularly structured or unstructured grids, etc.) must use the Complex Dataspace message for expressing the space the dataset inhabits. (Note: the Complex Dataspace functionality is not yet implemented and is not described in this document.)
- -Format of Data:
-
-
byte | -byte | -byte | -byte | -
---|---|---|---|
Version | -Dimensionality | -Flags | -Reserved | -
Reserved | -|||
Dimension #1 SizeL | -|||
. . . |
- |||
Dimension #n SizeL | -|||
Dimension #1 Maximum SizeL | -|||
. . . |
- |||
Dimension #n Maximum SizeL | -|||
Permutation Index #1L | -|||
. . . |
- |||
Permutation Index #nL | -
- (Items marked with an 'L' in the above table are
- - of the size specified in "Size of Lengths.") - |
Field Name | -Description | -
---|---|
Version | -
- This value is used to determine the format of the - Simple Dataspace Message. When the format of the - information in the message is changed, the version number - is incremented and can be used to determine how the - information in the object header is formatted. This - document describes version one (1) (there was no version - zero (0)). - - |
-
Dimensionality | -
- This value is the number of dimensions that the data - object has. - - |
-
Flags | -
- This field is used to store flags to indicate the - presence of parts of this message. Bit 0 (the least - significant bit) is used to indicate that maximum - dimensions are present. Bit 1 is used to indicate that - permutation indices are present. - - |
-
Dimension #n Size | -
- This value is the current size of the dimension of the - data as stored in the file. The first dimension stored in - the list of dimensions is the slowest changing dimension - and the last dimension stored is the fastest changing - dimension. - - |
-
Dimension #n Maximum Size | -
- This value is the maximum size of the dimension of the - data as stored in the file. This value may be the special - "unlimited" size which indicates - that the data may expand along this dimension indefinitely. - If these values are not stored, the maximum size of each - dimension is assumed to be the dimension's current size. - - |
-
Permutation Index #n | -
- This value is the index permutation used to map - each dimension from the canonical representation to an - alternate axis for each dimension. If these values are - not stored, the first dimension stored in the list of - dimensions is the slowest changing dimension and the last - dimension stored is the fastest changing dimension. - - |
-
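The field layout above determines the message's encoded size. As a sketch (the helper name is ours, not part of the format), the size of a version 1 simple dataspace message follows from the rank, the flags, and the file's "Size of Lengths" value:

```python
def dataspace_message_size(rank: int, size_of_lengths: int, flags: int) -> int:
    """Encoded size of a version 1 simple dataspace message:
    8 bytes of fixed fields (version, dimensionality, flags,
    reserved), then `rank` current dimension sizes, plus optional
    maximum sizes (flag bit 0) and permutation indices (flag bit 1),
    each entry `size_of_lengths` bytes wide."""
    blocks = 1  # current dimension sizes are always present
    if flags & 0x1:
        blocks += 1  # maximum dimension sizes are present
    if flags & 0x2:
        blocks += 1  # permutation indices are present
    return 8 + blocks * rank * size_of_lengths
```

For example, a rank-3 dataspace with 8-byte lengths and maximum sizes stored (flags = 1) occupies 8 + 2 × 3 × 8 = 56 bytes.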
Purpose and Description: This message type was skipped during - the initial specification of the file format and may be used in a - future expansion to the format. - - -
Header Message Type: 0x0003 -
-Length: variable -
-Status: Required for dataset or named datatype objects, - may not be repeated. -
- -Description: The datatype message defines the datatype - for each element of a dataset. A datatype can describe an atomic type - like a fixed- or floating-point type or a compound type like a C - struct. - Datatype messages are stored - as a list of datatype classes and - their associated properties. -
- -Datatype messages that are part of a dataset object - do not describe how elements are related to one another; the dataspace - message is used for that purpose. Datatype messages that are part of - a named datatype object describe an "abstract" datatype that can be - used by other objects in the file. -
- -Format of Data:
-
-
byte | -byte | -byte | -byte | -
---|---|---|---|
Class and Version | -Class Bit Field, Bits 0-7 | -Class Bit Field, Bits 8-15 | -Class Bit Field, Bits 16-23 | -
Size | -|||
Properties |
-
Field Name | -Description | -||||||||||||||||||||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Class and Version | -
- The version of the datatype message and the datatype's class - information are packed together in this field. The version - number is packed in the top 4 bits of the field and the class - is contained in the bottom 4 bits. - -The version number information is used for changes in the - format of the datatype message and is described here: -
The class of the datatype determines the format for the class - bit field and properties portion of the datatype message, which - are described below. The - following classes are currently defined: -
|
- ||||||||||||||||||||||||||||||||
Class Bit Fields | -
- The information in these bit fields is specific to each datatype - class and is described below. All bits not defined for a - datatype class are set to zero. - - |
- ||||||||||||||||||||||||||||||||
Size | -
- The size of the datatype in bytes. - - |
- ||||||||||||||||||||||||||||||||
Properties | -
- This variable-sized field encodes information specific to each - datatype class and is described below. If there is no - property information specified for a datatype class, the size - of this field is zero. - - |
-
Class specific information for Fixed-Point Numbers (Class 0):
-
-
-
Bits | -Meaning | -
---|---|
0 | -Byte Order. If zero, byte order is little-endian; - otherwise, byte order is big endian. | -
1, 2 | -Padding type. Bit 1 is the lo_pad type and bit 2 - is the hi_pad type. If a datum has unused bits at either - end, then the lo_pad or hi_pad bit is copied to those - locations. | -
3 | -Signed. If this bit is set then the fixed-point - number is in 2's complement form. | -
4-23 | -Reserved (zero). | -
Byte | -Byte | -Byte | -Byte | -
---|---|---|---|
Bit Offset | -Bit Precision | -
Field Name | -Description | -
---|---|
Bit Offset | -
- The bit offset of the first significant bit of the fixed-point - value within the datatype. The bit offset specifies the number - of bits "to the right of" the value. - - |
-
Bit Precision | -
- The number of bits of precision of the fixed-point value - within the datatype. - - |
-
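A minimal sketch of decoding the fixed-point class bit field, following the bit assignments in the table above (the helper name and dictionary keys are ours):

```python
def decode_fixed_point_bits(bits: int) -> dict:
    """Interpret the 24-bit class bit field of a fixed-point
    (class 0) datatype: bit 0 is byte order, bits 1-2 are the
    lo_pad and hi_pad types, and bit 3 indicates a signed
    (two's complement) value."""
    return {
        "byte_order": "big" if bits & 0x1 else "little",
        "lo_pad": (bits >> 1) & 0x1,
        "hi_pad": (bits >> 2) & 0x1,
        "signed": bool((bits >> 3) & 0x1),
    }
```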
Class specific information for Floating-Point Numbers (Class 1):
-
-
-
Bits | -Meaning | -
---|---|
0 | -Byte Order. If zero, byte order is little-endian; - otherwise, byte order is big endian. | -
1, 2, 3 | -Padding type. Bit 1 is the low bits pad type, bit 2 - is the high bits pad type, and bit 3 is the internal bits - pad type. If a datum has unused bits at either end or between - the sign bit, exponent, or mantissa, then the value of bit - 1, 2, or 3 is copied to those locations. | -
4-5 | -Normalization. The value can be 0 if there is no - normalization, 1 if the most significant bit of the - mantissa is always set (except for 0.0), and 2 if the most - significant bit of the mantissa is not stored but is - implied to be set. The value 3 is reserved and will not - appear in this field. | -
6-7 | -Reserved (zero). | -
8-15 | -Sign Location. This is the bit position of the sign - bit. Bits are numbered with the least significant bit zero. | -
16-23 | -Reserved (zero). | -
Byte | -Byte | -Byte | -Byte | -
---|---|---|---|
Bit Offset | -Bit Precision | -||
Exponent Location | -Exponent Size | -Mantissa Location | -Mantissa Size | -
Exponent Bias | -
Field Name | -Description | -
---|---|
Bit Offset | -
- The bit offset of the first significant bit of the floating-point - value within the datatype. The bit offset specifies the number - of bits "to the right of" the value. - - |
-
Bit Precision | -
- The number of bits of precision of the floating-point value - within the datatype. - - |
-
Exponent Location | -
- The bit position of the exponent field. Bits are numbered with - the least significant bit number zero. - - |
-
Exponent Size | -
- The size of the exponent field in bits. - - |
-
Mantissa Location | -
- The bit position of the mantissa field. Bits are numbered with - the least significant bit number zero. - - |
-
Mantissa Size | -
- The size of the mantissa field in bits. - - |
-
Exponent Bias | -
- The bias of the exponent field. - - |
-
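As a worked example of these fields, a standard IEEE 754 single-precision value would carry bit offset 0, bit precision 32, sign location 31, an 8-bit exponent at bit 23 with bias 127, and a 23-bit mantissa at bit 0. The following sketch decodes a normalized value using those property values (the constants and helper name are ours, shown for illustration, not read from a file):

```python
import struct

# Field positions an IEEE 754 binary32 value would carry in this
# message: sign at bit 31, 8 exponent bits at bit 23 (bias 127),
# 23 mantissa bits at bit 0.
SIGN_LOC, EXP_LOC, EXP_SIZE = 31, 23, 8
MAN_LOC, MAN_SIZE, EXP_BIAS = 0, 23, 127

def decode_f32(bits: int) -> float:
    """Decode a normalized single-precision value from its raw bits
    using the field positions above (denormals, infinities, and
    NaNs are not handled in this sketch)."""
    sign = (bits >> SIGN_LOC) & 1
    exponent = (bits >> EXP_LOC) & ((1 << EXP_SIZE) - 1)
    mantissa = (bits >> MAN_LOC) & ((1 << MAN_SIZE) - 1)
    # Normalized numbers have an implied leading mantissa bit.
    value = (1 + mantissa / (1 << MAN_SIZE)) * 2.0 ** (exponent - EXP_BIAS)
    return -value if sign else value

# Round-trip check: pack a float, then decode its raw bits.
raw = struct.unpack("<I", struct.pack("<f", 6.5))[0]
```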
Class specific information for Time (Class 2):
-
-
-
Bits | -Meaning | -
---|---|
0 | -Byte Order. If zero, byte order is little-endian; - otherwise, byte order is big endian. | -
1-23 | -Reserved (zero). | -
Byte | -Byte | -
---|---|
Bit Precision | -
Field Name | -Description | -
---|---|
Bit Precision | -
- The number of bits of precision of the time value. - - |
-
Class specific information for Strings (Class 3):
-
-
-
Bits | -Meaning | -||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|
0-3 | -Padding type. This four-bit value determines the
- type of padding to use for the string. The values are:
-
-
| ||||||||||
4-7 | -Character Set. The character set to use for - encoding the string. The only character set supported is - the 8-bit ASCII (zero) so no translations have been defined - yet. | -||||||||||
8-23 | -Reserved (zero). | -
There are no properties defined for the string class. -
- - -Class specific information for Bitfields (Class 4):
-
-
-
Bits | -Meaning | -
---|---|
0 | -Byte Order. If zero, byte order is little-endian; - otherwise, byte order is big endian. | -
1, 2 | -Padding type. Bit 1 is the lo_pad type and bit 2 - is the hi_pad type. If a datum has unused bits at either - end, then the lo_pad or hi_pad bit is copied to those - locations. | -
3-23 | -Reserved (zero). | -
Byte | -Byte | -Byte | -Byte | -
---|---|---|---|
Bit Offset | -Bit Precision | -
Field Name | -Description | -
---|---|
Bit Offset | -
- The bit offset of the first significant bit of the bitfield - within the datatype. The bit offset specifies the number - of bits "to the right of" the value. - - |
-
Bit Precision | -
- The number of bits of precision of the bitfield - within the datatype. - - |
-
Class specific information for Opaque (Class 5):
-
-
-
Bits | -Meaning | -
---|---|
0-7 | -Length of ASCII tag in bytes. | -
8-23 | -Reserved (zero). | -
Byte | -Byte | -Byte | -Byte | -
---|---|---|---|
ASCII Tag - |
-
Field Name | -Description | -
---|---|
ASCII Tag | -
- This NUL-terminated string provides a description for the - opaque type. It is NUL-padded to a multiple of 8 bytes. - - |
-
Class specific information for Compound (Class 6):
-
-
-
Bits | -Meaning | -
---|---|
0-15 | -Number of Members. This field contains the number - of members defined for the compound datatype. The member - definitions are listed in the Properties field of the data - type message. - |
16-23 | -Reserved (zero). | -
The Properties field of a compound datatype is a list of the - member definitions of the compound datatype. The member - definitions appear one after another with no intervening bytes. - The member types are described with a recursive datatype - message. - -
Note that the property descriptions are different for different
- versions of the datatype message. Additionally note that the version
- 0 properties are deprecated and have been replaced with the version
- 1 properties in versions of the HDF5 library from the 1.4 release
- onward.
-
-
-
Byte | -Byte | -Byte | -Byte | -
---|---|---|---|
Name |
- |||
Byte Offset of Member | -|||
Dimensionality | -Reserved (zero) | -||
Dimension Permutation | -|||
Reserved (zero) | -|||
Dimension #1 Size (required) | -|||
Dimension #2 Size (required) | -|||
Dimension #3 Size (required) | -|||
Dimension #4 Size (required) | -|||
Member Type Message |
-
Field Name | -Description | -
---|---|
Name | -
- This NUL-terminated string provides the name of the compound - datatype member. It is NUL-padded to a multiple of 8 bytes. - - |
-
Byte Offset of Member | -
- This is the byte offset of the member within the datatype. - - |
-
Dimensionality | -
- If set to zero, this field indicates a scalar member. If set - to a value greater than zero, this field indicates that the - member is an array of values. For array members, the size of - the array is indicated by the 'Size of Dimension n' field in - this message. - - |
-
Dimension Permutation | -
- This field was intended to allow an array field to have - its dimensions permuted, but this was never implemented. - This field should always be set to zero. - - |
-
Dimension #n Size | -
- This field is the size of a dimension of the array field as - stored in the file. The first dimension stored in the list of - dimensions is the slowest changing dimension and the last - dimension stored is the fastest changing dimension. - - |
-
Member Type Message | -
- This field is a datatype message describing the datatype of - the member. - - |
-
Byte | -Byte | -Byte | -Byte | -
---|---|---|---|
Name |
- |||
Byte Offset of Member | -|||
Member Type Message |
-
Field Name | -Description | -
---|---|
Name | -
- This NUL-terminated string provides the name of the compound - datatype member. It is NUL-padded to a multiple of 8 bytes. - - |
-
Byte Offset of Member | -
- This is the byte offset of the member within the datatype. - - |
-
Member Type Message | -
- This field is a datatype message describing the datatype of - the member. - - |
-
Class specific information for Reference (Class 7):
-
-
-
Bits | -Meaning | -||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|
0-3 | -Type. This four-bit value contains the type of reference
- described. The values defined are:
-
-
|
- ||||||||||
4-23 | -Reserved (zero). | -
There are no properties defined for the reference class. -
- - -Class specific information for Enumeration (Class 8):
-
-
-
Bits | -Meaning | -
---|---|
0-15 | -Number of Members. The number of name/value - pairs defined for the enumeration type. | -
16-23 | -Reserved (zero). | -
Byte | -Byte | -Byte | -Byte | -
---|---|---|---|
Base Type |
- |||
Names |
- |||
Values |
-
Field Name | -Description | -
---|---|
Base Type | -
- Each enumeration type is based on some parent type, usually an - integer. The information for that parent type is described - recursively by this field. - - |
-
Names | -
- The name for each name/value pair. Each name is stored as a null - terminated ASCII string in a multiple of eight bytes. The names - are in no particular order. - - |
-
Values | -
- The list of values in the same order as the names. The values - are packed (no inter-value padding) and the size of each value - is determined by the parent type. - - |
-
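The name padding rule above can be sketched as follows (the helper name is ours; names are assumed to be ASCII):

```python
def padded_name_size(name: str) -> int:
    """Bytes occupied on disk by one enumeration member name:
    the ASCII bytes plus a NUL terminator, rounded up to a
    multiple of eight, per the Names field description above."""
    n = len(name.encode("ascii")) + 1  # +1 for the NUL terminator
    return (n + 7) & ~7
```

For example, the name "RED" (3 bytes + NUL) occupies 8 bytes, while "COLORFUL" (8 bytes + NUL) occupies 16.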
Class specific information for Variable-Length (Class 9):
-
-
-
Bits | -Meaning | -||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|
0-3 | -Type. This four-bit value contains the type of
- variable-length datatype described. The values defined are:
-
-
|
- ||||||||||
4-7 | -Padding type. (variable-length string only)
- This four-bit value determines the type of padding
- used for variable-length strings. The values are the same
- as for the string padding type, as follows:
-
|
- ||||||||||
8-11 | -Character Set. (variable-length string only)
- This four-bit value specifies the character set
- to be used for encoding the string:
-
|
- ||||||||||
12-23 | -Reserved (zero). | -
Byte | -Byte | -Byte | -Byte | -
---|---|---|---|
Base Type |
-
Field Name | -Description | -
---|---|
Base Type | -
- Each variable-length type is based on some parent type. The - information for that parent type is described recursively by - this field. - - |
-
Class specific information for Array (Class 10): - -
There are no bit fields defined for the array class. -
- -Note that the dimension information defined in the property for this - datatype class is independent of dataspace information for a dataset. - The dimension information here describes the dimensionality of the - information within a data element (or a component of an element, if the - array datatype is nested within another datatype) and the dataspace for a - dataset describes the location of the elements in a dataset. -
- -Byte | -Byte | -Byte | -Byte | -
---|---|---|---|
Dimensionality | -Reserved (zero) | -||
Dimension #1 Size | -|||
. . . |
- |||
Dimension #n Size | -|||
Permutation Index #1 | -|||
. . . |
- |||
Permutation Index #n | -|||
Base Type |
-
Field Name | -Description | -
---|---|
Dimensionality | -
- This value is the number of dimensions that the array has. - - |
-
Dimension #n Size | -
- This value is the size of the dimension of the array - as stored in the file. The first dimension stored in - the list of dimensions is the slowest changing dimension - and the last dimension stored is the fastest changing - dimension. - - |
-
Permutation Index #n | -
- This value is the index permutation used to map - each dimension from the canonical representation to an - alternate axis for each dimension. Currently, dimension - permutations are not supported and these indices should be set - to the index position minus one (i.e. the first dimension should - be set to 0, the second dimension should be set to 1, etc.) - - |
-
Base Type | -
- Each array type is based on some parent type. The - information for that parent type is described recursively by - this field. - - |
-
Header Message Type: 0x0004 -
-Length: varies -
-Status: Optional, may not be repeated. -
- -Description: The fill value message stores a single - data value which is returned to the application when an uninitialized - data element is read from a dataset. The fill value is interpreted - with the same datatype as the dataset. If no fill value message is - present then a fill value of all zero bytes is assumed. -
- -This fill value message is deprecated in favor of the "new" - fill value message (Message Type 0x0005) and is only written to the - file for backward compatibility with versions of the HDF5 library before - the 1.6.0 version. Additionally, it only appears for datasets with a - user defined fill value (as opposed to the library default fill value - or an explicitly set "undefined" fill value). -
- -Format of Data:
-
-
byte | -byte | -byte | -byte | -
---|---|---|---|
Size | -|||
Fill Value |
-
Field Name | -Description | -
---|---|
Size | -
- This is the size of the Fill Value field in bytes. - - |
-
Fill Value | -
- The fill value. The bytes of the fill value are interpreted - using the same datatype as for the dataset. - - |
-
Header Message Type: 0x0005 -
-Length: varies -
-Status: Required for dataset objects, may not be repeated. -
- -Description: The fill value message stores a single - data value which is returned to the application when an uninitialized - data element is read from a dataset. The fill value is interpreted - with the same datatype as the dataset. -
- -Format of Data:
-
-
byte | -byte | -byte | -byte | -
---|---|---|---|
Version | -Space Allocation Time | -Fill Value Write Time | -Fill Value Defined | -
Size | -|||
Fill Value |
-
Field Name | -Description | -||||||||
---|---|---|---|---|---|---|---|---|---|
Version | -
- The version number information is used for changes in the - format of the fill value message and is described here: -
|
- ||||||||
Space Allocation Time | -
- When the storage space for the dataset's raw data will be - allocated. The allowed values are: -
|
- ||||||||
Fill Value Write Time | -
- At the time that storage space for the dataset's raw data is - allocated, this value indicates whether the fill value should - be written to the raw data storage elements. The allowed values - are: -
|
- ||||||||
Fill Value Defined | -
- This value indicates if a fill value is defined for this - dataset. If this value is 0, the fill value is undefined. - If this value is 1, a fill value is defined for this dataset. - For version 2 or later of the fill value message, this value - controls the presence of the Size field. - - |
- ||||||||
Size | -
- This is the size of the Fill Value field in bytes. This field - is not present if the Version field is >1 and the Fill Value - Defined field is set to 0. - - |
- ||||||||
Fill Value | -
- The fill value. The bytes of the fill value are interpreted - using the same datatype as for the dataset. This field is - not present if the Version field is >1 and the Fill Value - Defined field is set to 0. - - |
-
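The presence rule for the Size and Fill Value fields can be summarized in a small predicate (a sketch; the helper name is ours):

```python
def fill_value_present(version: int, fill_value_defined: int) -> bool:
    """Whether the Size and Fill Value fields appear in the message.
    For version 1 they are always present; for version 2 or later
    they are omitted when the Fill Value Defined field is 0, per
    the field descriptions above."""
    return version < 2 or fill_value_defined == 1
```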
Purpose and Description: This message type was skipped during - the initial specification of the file format and may be used in a - future expansion to the format. - -
Purpose and Description: The external object message - indicates that the data for an object is stored outside the HDF5 - file. The name of the object is stored as a Uniform - Resource Locator (URL) naming the actual file that contains the - data. An external file list record also contains the byte offset - of the start of the data within the file and the amount of space - reserved in the file for that data. - -
-
byte | -byte | -byte | -byte | -
---|---|---|---|
Version | -Reserved | -||
Allocated Slots | -Used Slots | -||
Heap Address |
- |||
Slot Definitions... |
-
-
Field Name | -Description | -
---|---|
Version | -This value is used to determine the format of the - External File List Message. When the format of the - information in the message is changed, the version number - is incremented and can be used to determine how the - information in the object header is formatted. | -
Reserved | -This field is reserved for future use. | -
Allocated Slots | -The total number of slots allocated in the message. Its - value must be at least as large as the value contained in - the Used Slots field. | -
Used Slots | -The number of initial slots which contain valid - information. The remaining slots are zero filled. | -
Heap Address | -This is the address of a local name heap which contains - the names for the external files. The name at offset zero - in the heap is always the empty string. | -
Slot Definitions | -The slot definitions are stored in order according to - the array addresses they represent. If more slots have - been allocated than what has been used then the defined - slots are all at the beginning of the list. | -
-
byte | -byte | -byte | -byte | -
---|---|---|---|
Name Offset (<size> bytes) |
- |||
File Offset (<size> bytes) |
- |||
Size |
-
-
Field Name | -Description | -
---|---|
Name Offset (<size> bytes) | -The byte offset within the local name heap for the name
- of the file. File names are stored as a URL which has a
- protocol name, a host name, a port number, and a file
- name:
- protocol:port//host/file .
- If the protocol is omitted then "file:" is assumed. If
- the port number is omitted then a default port for that
- protocol is used. If both the protocol and the port
- number are omitted then the colon can also be omitted. If
- the double slash and host name are omitted then
- "localhost" is assumed. The file name is the only
- mandatory part, and if the leading slash is missing then
- it is relative to the application's current working
- directory (the use of relative names is not
- recommended). |
-
File Offset (<size> bytes) | -This is the byte offset to the start of the data in the - specified file. For files that contain data for a single - dataset this will usually be zero. | -
Size | -This is the total number of bytes reserved in the - specified file for raw data storage. For a file that - contains exactly one complete dataset which is not - extendable, the size will usually be the exact size of the - dataset. However, by making the size larger one allows - HDF5 to extend the dataset. The size can be set to a value - larger than the entire file since HDF5 will read zeros - past the end of the file without failing. | -
Purpose and Description: Data layout describes how the - elements of a multi-dimensional array are arranged in the linear - address space of the file. Three types of data layout are - supported: - -
Version 3 of this message re-structured the format into specific - properties that are required for each layout class. - -
-
byte | -byte | -byte | -byte | -
---|---|---|---|
Version | -Dimensionality | -Layout Class | -Reserved | -
Reserved | -|||
Address |
- |||
Dimension 0 (4-bytes) | -|||
Dimension 1 (4-bytes) | -|||
... | -|||
Compact Data Size (4-bytes) | -|||
Compact Data | -|||
... | -
-
Field Name | -Description | -
---|---|
Version | -A version number for the layout message. This value can be - either 1 or 2. | -
Dimensionality | -An array has a fixed dimensionality. This field - specifies the number of dimension size fields later in the - message. | -
Layout Class | -The layout class specifies how the other fields of the - layout message are to be interpreted. A value of one - indicates contiguous storage, a value of two - indicates chunked storage, - while a value of zero - indicates compact storage. Other values will be defined - in the future. | -
Address | -For contiguous storage, this is the address of the first - byte of storage. For chunked storage this is the address - of the B-tree that is used to look up the addresses of the - chunks. This field is not present for compact storage. - If the version for this message is set to 2, the address - may have the "undefined address" value, to indicate that - storage has not yet been allocated for this array. | -
Dimensions | -For contiguous storage the dimensions define the entire - size of the array while for chunked storage they define - the size of a single chunk. | -
Compact Data Size | -This field is only present for compact data storage. - It contains the size of the raw data for the dataset array. | - -
Compact Data | -This field is only present for compact data storage. - It contains the raw data for the dataset array. | -
-
byte | -byte | -byte | -byte | -
---|---|---|---|
Version | -Layout Class | -||
Properties | -
-
Field Name | -Description | -
---|---|
Version | -A version number for the layout message. This value can be - either 1, 2 or 3. | -
Layout Class | -The layout class specifies how the other fields of the - layout message are to be interpreted. A value of zero - indicates compact storage, a value of one - indicates contiguous storage, - while a value of two - indicates chunked storage. | -
Properties | -This variable-sized field encodes information specific to each - layout class and is described below. If there is no property - information specified for a layout class, the size of this field - is zero bytes. | -
Class-specific information for contiguous layout (Class 1): - -
-
byte | -byte | -byte | -byte | -
---|---|---|---|
Address |
- |||
Size |
-
-
Field Name | -Description | -
---|---|
Address | -This is the address of the first byte of raw data storage. - The address may have the "undefined address" value, to indicate - that storage has not yet been allocated for this array. | -
Size | -This field contains the size allocated to store the raw data. | -
Class-specific information for chunked layout (Class 2): - -
-
byte | -byte | -byte | -byte | -
---|---|---|---|
Dimensionality | -|||
Address |
- |||
Dimension 0 (4-bytes) | -|||
Dimension 1 (4-bytes) | -|||
... | -
-
Field Name | -Description | -
---|---|
Dimensionality | -A chunk has a fixed dimensionality. This field - specifies the number of dimension size fields later in the - message. | -
Address | -This is the address - of the B-tree that is used to look up the addresses of the - chunks. - The address - may have the "undefined address" value, to indicate that - storage has not yet been allocated for this array. | -
Dimensions | -The dimension sizes define the size of a single chunk. | -
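Although the message stores only the chunk dimensions and the B-tree address, the number of chunks follows from the dataset's current dimensions. An illustrative sketch (the helper name is ours, and this computation is not part of the message format itself):

```python
import math

def chunk_grid(dataset_dims, chunk_dims):
    """Number of chunks needed along each dimension when a dataset
    with the given current dimensions is stored in chunks of the
    given dimensions; partial chunks at the edges still occupy a
    full chunk."""
    return [math.ceil(d / c) for d, c in zip(dataset_dims, chunk_dims)]
```

For example, a 10 x 10 dataset stored in 3 x 4 chunks needs a 4 x 3 grid of chunks.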
Class-specific information for compact layout (Class 0): - -
-
byte | -byte | -byte | -byte | -
---|---|---|---|
Size | -|||
Raw Data | -|||
... | -
-
Field Name | -Description | -
---|---|
Size | -This field contains the size of the raw data for the dataset array. | - -
Raw Data | -This field contains the raw data for the dataset array. | -
Purpose and Description: This message type was skipped during - the initial specification of the file format and may be used in a - future expansion to the format. - -
Purpose and Description: This message type was skipped during - the initial specification of the file format and may be used in a - future expansion to the format. - -
Purpose and Description: This message describes the - filter pipeline which should be applied to the data stream by - providing filter identification numbers, flags, a name, and - client data. - -
-
byte | -byte | -byte | -byte | -
---|---|---|---|
Version | -Number of Filters | -Reserved | -|
Reserved | -|||
Filter List |
-
-
Field Name | -Description | -
---|---|
Version | -The version number for this message. This document - describes version one. | -
Number of Filters | -The total number of filters described by this - message. The maximum possible number of filters in a - message is 32. | -
Filter List | -A description of each filter. A filter description - appears in the next table. | -
-
byte | -byte | -byte | -byte | -
---|---|---|---|
Filter Identification | -Name Length | -||
Flags | -Client Data Number of Values | -||
Name |
- |||
Client Data |
- |||
Padding | -
-
Field Name | -Description | -
---|---|
Filter Identification | -This is a unique (except in the case of testing) - identifier for the filter. Values from zero through 255 - are reserved for filters defined by the NCSA HDF5 - library. Values 256 through 511 have been set aside for - use when developing/testing new filters. The remaining - values are allocated to specific filters by contacting the - HDF5 Development - Team. | -
Name Length | -Each filter has an optional null-terminated ASCII name, - and this field holds the length of the name, including the - null terminator and the null padding that rounds the name up - to a multiple of eight bytes. If the filter has no name then a - value of zero is - stored in this field. | -
Flags | -The flags indicate certain properties for a filter. The
- bit values defined so far are:
-
-
|
Client Data Number of Values | -Each filter can store a few integer values to control - how the filter operates. The number of entries in the - Client Data array is stored in this field. | -
Name | -If the Name Length field is non-zero then it will - contain the size of this field, a multiple of eight. This - field contains a null-terminated, ASCII character - string to serve as a comment/name for the filter. | -
Client Data | -This is an array of four-byte integers which will be - passed to the filter function. The Client Data Number of - Values determines the number of elements in the - array. | -
Padding | -Four bytes of zeros are added to the message at this - point if the Client Data Number of Values field contains - an odd number. | -
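Putting the fields above together, the size of one filter description can be computed as follows (a sketch; the helper name is ours):

```python
def filter_description_size(name_len: int, n_client_values: int) -> int:
    """Bytes occupied by one filter description, per the layout
    above: 8 bytes of fixed fields (identification, name length,
    flags, client-data count), the padded name (`name_len` must
    already be zero or a multiple of eight, as the Name Length
    field requires), the 4-byte client-data values, and 4 pad
    bytes when the value count is odd."""
    size = 8 + name_len + 4 * n_client_values
    if n_client_values % 2 == 1:
        size += 4  # keep the message a multiple of eight bytes
    return size
```

For example, an unnamed filter with one client-data value occupies 8 + 4 + 4 = 16 bytes.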
Purpose and Description: The Attribute - message is used to list objects in the HDF file which are used - as attributes, or "metadata" about the current object. An - attribute is a small dataset; it has a name, a datatype, a data - space, and raw data. Since attributes are stored in the object - header they must be relatively small (<64KB) and can be - associated with any type of object which has an object header - (groups, datasets, named types and spaces, etc.). - -
Note: Attributes on an object must have unique names. (The HDF5 library - currently enforces this by causing the creation of an attribute with - a duplicate name to fail.) - Attributes on different objects may have the same name, however. - -
-
byte | -byte | -byte | -byte | -
---|---|---|---|
Version | -Reserved | -Name Size | -|
Type Size | -Space Size | -||
Name |
- |||
Type |
- |||
Space |
- |||
Data |
-
-
Field Name | -Description | -
---|---|
Version | -Version number for the message. This document describes - version 1 of attribute messages. | -
Reserved | -This field is reserved for later use and is set to - zero. | -
Name Size | -The length of the attribute name in bytes including the - null terminator. Note that the Name field below may - contain additional padding not represented by this - field. | -
Type Size | -The length of the datatype description in the Type - field below. Note that the Type field may contain - additional padding not represented by this field. | -
Space Size | -The length of the dataspace description in the Space - field below. Note that the Space field may contain - additional padding not represented by this field. | -
Name | -The null-terminated attribute name. This field is - padded with additional null characters to make it a - multiple of eight bytes. | -
Type | -The datatype description follows the same format as - described for the datatype object header message. This - field is padded with additional zero bytes to make it a - multiple of eight bytes. | -
Space | -The dataspace description follows the same format as - described for the dataspace object header message. This - field is padded with additional zero bytes to make it a - multiple of eight bytes. | -
Data | -The raw data for the attribute. The size is determined - from the datatype and dataspace descriptions. This - field is not padded with additional zero - bytes. | -
Header Message Type: 0x000D
- Length: varies
- Status: Optional, may not be repeated.
-
-
Purpose and Description: The object comment is
- designed to be a short description of an object. An object comment
- is a sequence of non-zero (that is, non-\0) ASCII characters with no other
- formatting included by the library.
-
-
-
byte | -byte | -byte | -byte | -
---|---|---|---|
Comment |
-
-
Field Name | -Description | -
---|---|
Comment | -A null-terminated ASCII character string. | -
Header Message Type: 0x000E
- Length: fixed
- Status: Optional, may not be repeated.
-
-
Purpose and Description: The object modification date - and time is a timestamp which indicates (using ISO-8601 date and - time format) the last modification of an object. The time is - updated when any object header message changes according to the - system clock where the change was posted. - -
This modification time message is deprecated in favor of the "new" - modification time message (Message Type 0x0012) and is no longer written - to the file in versions of the HDF5 library after the 1.6.0 version. -
- - --
byte | -byte | -byte | -byte | -
---|---|---|---|
Year | -|||
Month | -Day of Month | -||
Hour | -Minute | -||
Second | -Reserved | -
-
Field Name | -Description | -
---|---|
Year | -The four-digit year as an ASCII string. For example,
- 1998 . All fields of this message should be interpreted
- as coordinated universal time (UTC) |
-
Month | -The month number as a two digit ASCII string where
- January is 01 and December is 12 . |
-
Day of Month | -The day number within the month as a two digit ASCII
- string. The first day of the month is 01 . |
-
Hour | -The hour of the day as a two digit ASCII string where
- midnight is 00 and 11:00pm is 23 . |
-
Minute | -The minute of the hour as a two digit ASCII string where
- the first minute of the hour is 00 and
- the last is 59 . |
-
Second | -The second of the minute as a two digit ASCII string
- where the first second of the minute is 00
- and the last is 59 . |
-
Reserved | -This field is reserved and should always be zero. | -
A constant message can be shared among several object headers - by writing that message in the global heap and having the object - headers all point to it. The pointing is accomplished with a - Shared Object message which is understood directly by the object - header layer of the library. It is also possible to have a - message of one object header point to a message in some other - object header, but care must be exercised to prevent cycles. - -
If a message is shared, then the message appears in the global
- heap and its message ID appears in the Header Message Type
- field of the object header. Also, the Flags field in the object
- header for that message will have bit two set (the
- H5O_FLAG_SHARED
bit). The message body in the
- object header will be that of a Shared Object message defined
- here and not that of the pointed-to message.
-
-
-
byte - | byte - | byte - | byte - |
---|---|---|---|
Version | -Flags | -Reserved | -|
Reserved | -|||
Pointer |
-
-
Field Name | -Description | -
---|---|
Version | -The version number for the message. This document - describes version one of shared messages. | -
Flags | -The Shared Message message points to a message which is
- shared among multiple object headers. The Flags field
- describes the type of sharing:
-
-
|
Pointer | -This field points to the actual message. The format of - the pointer depends on the value of the Flags field. If - the actual message is in the global heap then the pointer - is the file address of the global heap collection that - holds the message, and a four-byte index into that - collection. Otherwise the pointer is a group entry - that points to some other object header. | -
- The object header continuation is formatted as follows (assuming a 4-byte -length & offset are being used in the current file): - -
-
byte | -byte | -byte | -byte | - -
---|---|---|---|
Header Continuation Offset | -|||
Header Continuation Length | -
-
The group message is formatted as follows: - -
-
byte | -byte | -byte | -byte | - -
---|---|---|---|
B-tree Address | - -|||
Heap Address | -
-
Header Message Type: 0x0012 -
-Length: Fixed -
-Status: Optional, may not be repeated. -
- -Description: The object modification date - and time is a timestamp which indicates - the last modification of an object. The time is - updated when any object header message changes according to the - system clock where the change was posted. -
- --
byte | -byte | -byte | -byte | -
---|---|---|---|
Version | -Reserved | -||
Seconds After Epoch | -
-
Field Name | -Description | -
---|---|
Version | -The version number for the message. This document - describes version one of the new modification time message. | -
Reserved | -This field is reserved and should always be zero. | -
Seconds After Epoch | -The number of seconds since 0 hours, 0 - minutes, 0 seconds, January 1, 1970, Coordinated Universal Time. - |
In order to share header messages between several dataset objects, object
-header messages may be placed into the global heap. Since these
-messages require additional information beyond the basic object header message
-information, the format of the shared message is detailed below.
-
-
-
byte | -byte | -byte | -byte | - -
---|---|---|---|
Reference Count of Shared Header Message | -|||
Shared Object Header Message |
-
-
The data for an object is stored separately from the header -information in the file and may not actually be located in the HDF5 file -itself if the header indicates that the data is stored externally. The -information for each record in the object is stored according to the -dimensionality of the object (indicated in the dimensionality header message). -Multi-dimensional data is stored in C order [same as current scheme], i.e. the -"last" dimension changes fastest. -
Data whose elements are composed of simple number-types are stored in -native-endian IEEE format, unless they are specifically defined as being stored -in a different machine format with the architecture-type information from the -number-type header message. This means that each architecture will need to -[potentially] byte-swap data values into the internal representation for that -particular machine. -
Data with a variable-length datatype is stored in the global heap -of the HDF5 file. Global heap identifiers are stored in the -data object storage. -
Data whose elements are composed of pointer number-types are stored in several -different ways depending on the particular pointer type involved. Simple -pointers are just stored as the dataset offset of the object being pointed to with the -size of the pointer being the same number of bytes as offsets in the file. -Dataset region references are stored as a heap-ID which points to the following -information within the file-heap: an offset of the object pointed to, number-type -information (same format as header message), dimensionality information (same -format as header message), sub-set start and end information (i.e. a coordinate -location for each), and field start and end names (i.e. a [pointer to the] -string indicating the first field included and a [pointer to the] string name -for the last field). - -
Data of a compound datatype is stored as a contiguous stream of the items -in the structure, with each item formatted according to its datatype.
- -Definitions of various terms used in this document. -
-The "undefined address" for a file is a
-file address with all bits set, i.e. 0xffff...ff
.
-
The "unlimited size" for a size is a
-value with all bits set, i.e. 0xffff...ff
.
-
-
This is an introduction to the HDF5 data model and programming model. Being a Getting Started or QuickStart document, this
Introduction to HDF5 is intended to provide enough information for you to develop a basic understanding of how HDF5 works and is meant to be used. Knowledge of the current version of HDF will make it easier to follow the text, but it is not required. More complete information of the sort you will need to actually use HDF5 is available in the HDF5 documentation. Available documents include the following: - -Code examples are available in the source code tree when you install HDF5. - -
hdf5/examples
,
-hdf5/doc/html/examples/
, and
-hdf5/doc/html/Tutor/examples/
contain the examples
-used in this document.
-- -
HDF5 is a completely new Hierarchical Data Format -product consisting of a data format specification and a -supporting library implementation. HDF5 is designed to address some -of the limitations of the older HDF product and to address current and -anticipated requirements of modern systems and applications. -1 -
We urge you to look at HDF5, the format and the library, and give us
-feedback on what you like or do not like about it, and what features
-you would like to see added to it.
-
- Why HDF5?
HDF5 includes the following improvements. - -
(Return to TOC) - - -
A detailed list of changes in HDF5 between the current release and
-the preceding major release can be found in the file
-RELEASE.txt
,
-with a highlights summary in the document
-"HDF5 Software Changes from Release to Release"
-in the
-HDF5 Application Developer's Guide.
-
-
-
HDF5 files are organized in a hierarchical structure, with two primary structures: groups and datasets. - -
Working with groups and group members is similar in many ways to working with directories and files in UNIX. As with UNIX directories and files, objects in an HDF5 file are often described by giving their full (or absolute) path names. -
-- /
signifies the root group.
-/foo
signifies a member of the root group called foo
.
-/foo/zoo
signifies a member of the group foo
, which in turn is a member of the root group.
-
-Any HDF5 group or dataset may have an associated attribute list. An HDF5 attribute is a user-defined HDF5 structure that provides extra information about an HDF5 object. Attributes are described in more detail below. - - -
(Return to TOC) - - -
An HDF5 group is a structure containing zero or more HDF5 objects. A group has two parts: - -
(Return to TOC) - - -
A dataset is stored in a file in two parts: a header and a data array. -
The header contains information that is needed to interpret the array portion of the dataset, as well as metadata (or pointers to metadata) that describes or annotates the dataset. Header information includes the name of the object, its dimensionality, its number-type, information about how the data itself is stored on disk, and other information used by the library to speed up access to the dataset or maintain the file's integrity. -
There are four essential classes of information in any header: name, datatype, dataspace, and storage layout: -
Name.
A dataset name is a sequence of alphanumeric ASCII characters. -Datatype.
HDF5 allows one to define many different kinds of datatypes. There are two categories of datatypes: atomic datatypes and compound datatypes. -Atomic datatypes can also be system-specific, orNATIVE
, and all datatypes can be named:
-NATIVE
datatypes are system-specific instances of atomic datatypes.
-Atomic datatypes include integers and floating-point numbers. Each atomic type belongs to a particular class and has several properties: size, order, precision, and offset. In this introduction, we consider only a few of these properties. -
Atomic classes include integer, float, date and time, string, bit field, and opaque. (Note: Only integer, float and string classes are available in the current implementation.) -
Properties of integer types include size, order (endian-ness), and signed-ness (signed/unsigned). -
Properties of float types include the size and location of the exponent and mantissa, and the location of the sign bit. -
The datatypes that are supported in the current implementation are: - -
-NATIVE
datatypes. Although it is possible to describe nearly any kind of atomic datatype, most applications will use predefined datatypes that are supported by their compiler. In HDF5 these are called native datatypes. NATIVE
datatypes are C-like datatypes that are generally supported by the hardware of the machine on which the library was compiled. In order to be portable, applications should almost always use the NATIVE
designation to describe data values in memory.
-
The NATIVE
architecture has base names which do not follow the same rules as the others. Instead, native type names are similar to the C type names. The following figure shows several examples.
-
- -
- Example |
-
- Corresponding C Type |
-
-H5T_NATIVE_CHAR |
-
-signed char |
-
-H5T_NATIVE_UCHAR |
-
-unsigned char |
-
-H5T_NATIVE_SHORT |
-
-short |
-
-H5T_NATIVE_USHORT |
-
-unsigned short |
-
-H5T_NATIVE_INT |
-
-int |
-
-H5T_NATIVE_UINT |
-
-unsigned |
-
-H5T_NATIVE_LONG |
-
-long |
-
-H5T_NATIVE_ULONG |
-
-unsigned long |
-
-H5T_NATIVE_LLONG |
-
-long long |
-
-H5T_NATIVE_ULLONG |
-
-unsigned long long |
-
-H5T_NATIVE_FLOAT |
-
-float |
-
-H5T_NATIVE_DOUBLE |
-
-double |
-
-H5T_NATIVE_LDOUBLE |
-
-long double |
-
-H5T_NATIVE_HSIZE |
-
-hsize_t |
-
-H5T_NATIVE_HSSIZE |
-
-hssize_t |
-
-H5T_NATIVE_HERR |
-
-herr_t |
-
-H5T_NATIVE_HBOOL |
-
-hbool_t |
-
See Datatypes in the HDF Users Guide for further information.
- - -A compound datatype is one in which a collection of several datatypes is represented as a single unit, similar to a struct in C. The parts of a compound datatype are called members. The members of a compound datatype may be of any datatype, including another compound datatype. It is possible to read members from a compound type without reading the whole type.
- Named datatypes. Normally each dataset has its own datatype, but sometimes we may want to share a datatype among several datasets. This can be done using a named datatype. A named datatype is stored in the file independently of any dataset and is referenced by all datasets that have that datatype. Named datatypes may have an associated attribute list.
-See Datatypes
Dataspace.
A dataset dataspace describes the dimensionality of the dataset. The dimensions of a dataset can be fixed (unchanging), or they may be unlimited, which means that they are extendible (i.e. they can grow larger). -Properties of a dataspace consist of the rank (number of dimensions) of the data array, the actual sizes of the dimensions of the array, and the maximum sizes of the dimensions of the array. For a fixed-dimension dataset, the actual size is the same as the maximum size of a dimension. When a dimension is unlimited, the maximum size is set to the
value H5S_UNLIMITED
. (An example below shows how to create extendible datasets.)
-A dataspace can also describe portions of a dataset, making it possible to do partial I/O operations on selections. Selection is supported by the dataspace interface (H5S). Given an n-dimensional dataset, there are currently four ways to do partial selection: -
Since I/O operations have two end-points, the raw data transfer functions require two dataspace arguments: one describes the application memory dataspace or subset thereof, and the other describes the file dataspace or subset thereof. -
See Dataspaces
in the HDF Users Guide for further information. -Storage layout.
The HDF5 format makes it possible to store data in a variety of ways. The default storage layout format is contiguous, meaning that data is stored in the same linear way that it is organized in memory. Two other storage layout formats are currently defined for HDF5: compact, and chunked. In the future, other storage layouts may be added. -Compact storage is used when the amount of data is small and can be stored directly in the object header. (Note: Compact storage is not supported in this release.) -
Chunked storage involves dividing the dataset into equal-sized "chunks" that are stored separately. Chunking has three important benefits. -
-See Datasets and Dataset Chunking Issues
in the HDF Users Guide for further information. -We particularly encourage you to read Dataset Chunking Issues since the issue is complex and beyond the scope of this document. - - -(Return to TOC) - - -
The Attribute API (H5A) is used to read or write attribute information. When accessing attributes, they can be identified by name or by an index value. The use of an index value makes it possible to iterate through all of the attributes associated with a given object. -
The HDF5 format and I/O library are designed with the assumption that attributes are small datasets. They are always stored in the object header of the object they are attached to. Because of this, large datasets should not be stored as attributes. How large is "large" is not defined by the library and is up to the user's interpretation. (Large datasets with metadata can be stored as supplemental datasets in a group with the primary dataset.) -
See Attributes
in the HDF Users Guide for further information. - - - -(Return to TOC) - - -
For those who are interested, this section takes a look at - the low-level elements of the file as the file is written to disk - (or other storage media) and the relation of those low-level - elements to the higher level elements with which users typically - are more familiar. The HDF5 API generally exposes only the - high-level elements to the user; the low-level elements are - often hidden. - The rest of this Introduction does not assume - an understanding of this material. - -
The format of an HDF5 file on disk encompasses several - key ideas of the HDF4 and AIO file formats as well as - addressing some shortcomings therein. The new format is - more self-describing than the HDF4 format and is more - uniformly applied to data objects in the file. - -
- - - | |
- Figure 1: Relationships among the
- HDF5 root group, other groups, and objects
- - |
An HDF5 file appears to the user as a directed graph. - The nodes of this graph are the higher-level HDF5 objects - that are exposed by the HDF5 APIs: - -
At the lowest level, as information is actually written to the disk, - an HDF5 file is made up of the following objects: -
- - - | |
- Figure 2: HDF5 objects -- datasets, datatypes, or dataspaces
- - |
See the HDF5 File Format
- Specification for further information.
-
-
-
-
-
-
The current HDF5 API is implemented only in C. The API provides routines for creating HDF5 files, creating and writing groups, datasets, and their attributes to HDF5 files, and reading groups, datasets and their attributes from HDF5 files. -
All C routines in the HDF5 library begin with a prefix of the form H5*, where * is a single letter indicating the object on which the operation is to be performed: - -
H5Fopen
, which opens an HDF5 file.
-H5Gset
, which sets the working group to the specified group.
-H5Tcopy
, which creates a copy of an existing datatype.
-H5Screate_simple
, which creates simple dataspaces.
-H5Dread
, which reads all or part of a dataset into a buffer in memory.
-H5Pset_chunk
, which sets the number of dimensions and the size of a chunk.
-H5Aget_name
, which retrieves the name of an attribute.
-H5Zregister
, which registers new compression and uncompression functions for use with the HDF5 library.
-H5Eprint
, which prints the current error stack.
-H5Rcreate
, which creates a reference.
-H5Iget_type
, which retrieves the type of an object.
-
-
-(Return to TOC) - - -
There are a number of definitions and declarations that should be included with any HDF5 program. These definitions and declarations are contained in several include files. The main include
file is hdf5.h
. This file includes all of the other files that your program is likely to need. Be sure to include hdf5.h
in any program that uses the HDF5 library.
-
-
-(Return to TOC) - - -
In this section we describe how to program some basic operations on files, including how to - -
(Return to TOC) - - -
This programming model shows how to create a file and also how to close the file. -
The following code fragment implements the specified model. If there is a possibility that the file already exists, the user must add the flag H5F_ACC_TRUNC
to the access mode to overwrite the previous file's information.
-
-hid_t file; /* identifier */
-/*
-* Create a new file using H5F_ACC_TRUNC access,
-* default file creation properties, and default file
-* access properties.
-* Then close the file.
-*/
-file = H5Fcreate(FILE, H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
-status = H5Fclose(file);
-
(Return to TOC) - - -
Recall that datatypes and dimensionality (dataspace) are independent objects, which are created separately from any dataset that they might be attached to. Because of this the creation of a dataset requires, at a minimum, separate definitions of datatype, dimensionality, and dataset. Hence, to create a dataset the following steps need to be taken: -
The following code illustrates the creation of these three components of a dataset object. -
hid_t dataset, datatype, dataspace; /* declare identifiers */
-
-/*
- * Create dataspace: Describe the size of the array and
- * create the data space for fixed size dataset.
- */
-dimsf[0] = NX;
-dimsf[1] = NY;
-dataspace = H5Screate_simple(RANK, dimsf, NULL);
-/*
- * Define datatype for the data in the file.
- * We will store little endian integer numbers.
- */
-datatype = H5Tcopy(H5T_NATIVE_INT);
-status = H5Tset_order(datatype, H5T_ORDER_LE);
-/*
- * Create a new dataset within the file using defined
- * dataspace and datatype and default dataset creation
- * properties.
- * NOTE: H5T_NATIVE_INT can be used as datatype if conversion
- * to little endian is not needed.
- */
-dataset = H5Dcreate(file, DATASETNAME, datatype, dataspace, H5P_DEFAULT);
-
-
-
-(Return to TOC) - - -
The datatype, dataspace, and dataset objects should be released once they are no longer needed by a program. Since each is an independent object, they must be released (or closed) separately. The following lines of code close the datatype, dataspace, and dataset that were created in the preceding section.
H5Tclose(datatype);
-
H5Dclose(dataset);
-
H5Sclose(dataspace);
-
-
-
-(Return to TOC) - - -
Having defined the datatype, dataset, and dataspace parameters, you write out the data with a call to
H5Dwrite
.
-/*
-* Write the data to the dataset using default transfer
-* properties.
-*/
-status = H5Dwrite(dataset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL,
- H5P_DEFAULT, data);
-
The third and fourth parameters of
H5Dwrite
in the example describe the dataspaces in memory and in the file, respectively. They are set to the value H5S_ALL
to indicate that an entire dataset is to be written. In a later section we look at how we would access a portion of a dataset.
-Example 1 contains a program that creates a file and a dataset, and writes the dataset to the file.
- Reading is analogous to writing. If, in the previous example, we wish to read an entire dataset, we would use the same basic calls with the same parameters. Of course, the routine H5Dread
would replace H5Dwrite
.
-
-
-
-
(Return to TOC) - - -
Although reading is analogous to writing, it is often necessary to query a file to obtain information about a dataset. For instance, we often need to know about the datatype associated with a dataset, as well as dataspace information (e.g. rank and dimensions). There are several "get" routines for obtaining this information. The following code segment illustrates how we would get this kind of information:
/*
-* Get datatype and dataspace identifiers and then query
-* dataset class, order, size, rank and dimensions.
-*/
-
-datatype = H5Dget_type(dataset); /* datatype identifier */
-class = H5Tget_class(datatype);
-if (class == H5T_INTEGER) printf("Data set has INTEGER type \n");
-order = H5Tget_order(datatype);
-if (order == H5T_ORDER_LE) printf("Little endian order \n");
-
-size = H5Tget_size(datatype);
-printf(" Data size is %d \n", size);
-
-dataspace = H5Dget_space(dataset); /* dataspace identifier */
-rank = H5Sget_simple_extent_ndims(dataspace);
-status_n = H5Sget_simple_extent_dims(dataspace, dims_out, NULL);
-printf("rank %d, dimensions %d x %d \n", rank, dims_out[0], dims_out[1]);
-
-
-
-(Return to TOC) - - -
In the previous discussion, we described how to access an entire dataset with one write (or read) operation. HDF5 also supports access to portions (or selections) of a dataset in one read/write operation. Currently selections are limited to hyperslabs, their unions, and lists of independent points. Both types of selection will be discussed in the following sections. Several sample cases of selection reading/writing are shown in the following figure.
[Figure: four sample cases of partial selection: (a) a single hyperslab, (b) a regular series of blocks, (c) an irregular sequence of points, (d) a union of hyperslabs]
In example (a) a single hyperslab is read from the midst of a two-dimensional array in a file and stored in the corner of a smaller two-dimensional array in memory. In (b) a regular series of blocks is read from a two-dimensional array in the file and stored as a contiguous sequence of values at a certain offset in a one-dimensional array in memory. In (c) a sequence of points with no regular pattern is read from a two-dimensional array in a file and stored as a sequence of points with no regular pattern in a three-dimensional array in memory. -In (d) a union of hyperslabs in the file dataspace is read and -the data is stored in another union of hyperslabs in the memory dataspace. -
As these examples illustrate, whenever we perform partial read/write operations on the data, the following information must be provided: file dataspace, file dataspace selection, memory dataspace and memory dataspace selection. After the required information is specified, actual read/write operation on the portion of data is done in a single call to the HDF5 read/write functions H5Dread(write). - - -
Hyperslabs are portions of datasets. A hyperslab selection can be a logically contiguous collection of points in a dataspace, or it can be a regular pattern of points or blocks in a dataspace. The following picture illustrates a selection of regularly spaced 3x2 blocks in an 8x12 dataspace.
-- -
[Figure: an 8x12 dataspace with a selection of eight regularly spaced 3x2 blocks]
Four parameters are required to describe a completely general hyperslab. Each parameter is an array whose rank is the same as that of the dataspace: - -
start
: a starting location for the hyperslab. In the example start
is (0,1).
-stride
: the number of elements to separate each element or block to be selected. In the example stride
is (4,3). If the stride parameter is set to NULL, the stride size defaults to 1 in each dimension.
-count
: the number of elements or blocks to select along each dimension. In the example, count
is (2,4).
-block
: the size of the block selected from the dataspace. In the example, block
is (3,2). If the block parameter is set to NULL, the block size defaults to a single element in each dimension, as if the block array was set to all 1s.
-
-In what order is data copied? When actual I/O is performed data values are copied by default from one dataspace to another in so-called row-major, or C order. That is, it is assumed that the first dimension varies slowest, the second next slowest, and so forth. -
Example without strides or blocks. Suppose we want to read a 3x4 hyperslab from a dataset in a file beginning at the element <1,2> in the dataset. In order to do this, we must create a dataspace that describes the overall rank and dimensions of the dataset in the file, as well as the position and size of the hyperslab that we are extracting from that dataset. The following code illustrates the selection of the hyperslab in the file dataspace.
-
-/*
-* Define file dataspace.
-*/
-dataspace = H5Dget_space(dataset); /* dataspace identifier */
-rank = H5Sget_simple_extent_ndims(dataspace);
-status_n = H5Sget_simple_extent_dims(dataspace, dims_out, NULL);
-
-/*
-* Define hyperslab in the dataset.
-*/
-offset[0] = 1;
-offset[1] = 2;
-count[0] = 3;
-count[1] = 4;
-status = H5Sselect_hyperslab(dataspace, H5S_SELECT_SET, offset, NULL,
- count, NULL);
-
This describes the dataspace from which we wish to read. We need to define the dataspace in memory analogously. Suppose, for instance, that we have in memory a 3-dimensional 7x7x3 array into which we wish to read the 3x4 hyperslab described above beginning at the element <3,0,0>. Since the in-memory dataspace has three dimensions, we have to describe the hyperslab as an array with three dimensions, with the last dimension being 1: <3,4,1>.
-
Notice that we must describe two things: the dimensions of the in-memory array, and the size and position of the hyperslab that we wish to read in. The following code illustrates how this would be done.
-/*
-* Define memory dataspace.
-*/
-dimsm[0] = 7;
-dimsm[1] = 7;
-dimsm[2] = 3;
-memspace = H5Screate_simple(RANK_OUT,dimsm,NULL);
-
-/*
-* Define memory hyperslab.
-*/
-offset_out[0] = 3;
-offset_out[1] = 0;
-offset_out[2] = 0;
-count_out[0] = 3;
-count_out[1] = 4;
-count_out[2] = 1;
-status = H5Sselect_hyperslab(memspace, H5S_SELECT_SET, offset_out, NULL,
- count_out, NULL);
-
-
Example 2 contains a complete program that performs these operations.
- Example with strides and blocks. Consider the 8x12 dataspace described above, in which we selected eight 3x2 blocks. Suppose we wish to fill these eight blocks.
- -
[Figure: the same 8x12 dataspace, showing the eight selected 3x2 blocks to be filled]
This hyperslab has the following parameters: start=(0,1), stride=(4,3), count=(2,4), block=(3,2). Suppose that the source dataspace in memory is this 50-element one-dimensional array called vector:
-
[Figure: the 50-element one-dimensional array vector]
The following code will write 48 elements from
vector
to our file dataset, starting with the second element in vector
.
--/* Select hyperslab for the dataset in the file, using 3x2 blocks, (4,3) stride
- * and (2,4) count starting at the position (0,1).
- */
-start[0] = 0; start[1] = 1;
-stride[0] = 4; stride[1] = 3;
-count[0] = 2; count[1] = 4;
-block[0] = 3; block[1] = 2;
-ret = H5Sselect_hyperslab(fid, H5S_SELECT_SET, start, stride, count, block);
-
-/*
- * Create dataspace for the first dataset.
- */
-mid1 = H5Screate_simple(MSPACE1_RANK, dim1, NULL);
-
-/*
- * Select hyperslab.
- * We will use 48 elements of the vector buffer starting at the second element.
- * Selected elements are 1 2 3 . . . 48
- */
-start[0] = 1;
-stride[0] = 1;
-count[0] = 48;
-block[0] = 1;
-ret = H5Sselect_hyperslab(mid1, H5S_SELECT_SET, start, stride, count, block);
-
-/*
- * Write selection from the vector buffer to the dataset in the file.
- */
-ret = H5Dwrite(dataset, H5T_NATIVE_INT, mid1, fid, H5P_DEFAULT, vector);
-
-
After these operations, the file dataspace will have the following values:

     0   1   2   0   3   4   0   5   6   0   7   8
     0   9  10   0  11  12   0  13  14   0  15  16
     0  17  18   0  19  20   0  21  22   0  23  24
     0   0   0   0   0   0   0   0   0   0   0   0
     0  25  26   0  27  28   0  29  30   0  31  32
     0  33  34   0  35  36   0  37  38   0  39  40
     0  41  42   0  43  44   0  45  46   0  47  48
     0   0   0   0   0   0   0   0   0   0   0   0

Notice that the values are inserted in the file dataset in row-major order.
Example 3 includes this code and other example code illustrating the use of hyperslab selection.
Selecting a list of independent points. A selection may also consist of a list of individual array elements, chosen with H5Sselect_elements. Suppose, for example, that we wish to write the values 53, 59, 61, 67 to the following elements of the 8x12 array used in the previous example: (0,0), (3,3), (3,5), and (5,6). The following code selects the points and writes them to the dataset:
    #define FSPACE_RANK  2   /* Dataset rank as it is stored in the file */
    #define NPOINTS      4   /* Number of points that will be selected
                                and overwritten */
    #define MSPACE2_RANK 1   /* Rank of the second dataset in memory */
    #define MSPACE2_DIM  4   /* Dataset size in memory */

    hsize_t dim2[] = {MSPACE2_DIM};  /* Dimension size of the second
                                        dataset (in memory) */
    int values[] = {53, 59, 61, 67}; /* New values to be written */
    hsize_t coord[NPOINTS][FSPACE_RANK]; /* Array to store selected points
                                            from the file dataspace */

    /*
     * Create dataspace for the second dataset.
     */
    mid2 = H5Screate_simple(MSPACE2_RANK, dim2, NULL);

    /*
     * Select sequence of NPOINTS points in the file dataspace.
     */
    coord[0][0] = 0; coord[0][1] = 0;
    coord[1][0] = 3; coord[1][1] = 3;
    coord[2][0] = 3; coord[2][1] = 5;
    coord[3][0] = 5; coord[3][1] = 6;

    ret = H5Sselect_elements(fid, H5S_SELECT_SET, NPOINTS,
                             (const hsize_t **)coord);

    /*
     * Write new selection of points to the dataset.
     */
    ret = H5Dwrite(dataset, H5T_NATIVE_INT, mid2, fid, H5P_DEFAULT, values);
After these operations, the file dataspace will have the following values:

    53   1   2   0   3   4   0   5   6   0   7   8
     0   9  10   0  11  12   0  13  14   0  15  16
     0  17  18   0  19  20   0  21  22   0  23  24
     0   0   0  59   0  61   0   0   0   0   0   0
     0  25  26   0  27  28   0  29  30   0  31  32
     0  33  34   0  35  36  67  37  38   0  39  40
     0  41  42   0  43  44   0  45  46   0  47  48
     0   0   0   0   0   0   0   0   0   0   0   0
Example 3 contains a complete program that performs these subsetting operations.
Selecting a union of hyperslabs

The HDF5 Library allows the user to select a union of hyperslabs and write or read the selection into another selection. The shapes of the two selections may differ, but the number of elements must be equal.

Suppose that we want to read two overlapping hyperslabs from the dataset written in the previous example into a union of hyperslabs in the memory dataset. This exercise is illustrated in the two figures immediately below. Note that the memory dataset has a different shape from the previously written dataset. Similarly, the selection in the memory dataset could have a different shape than the selected union of hyperslabs in the original file; for simplicity, we will preserve the selection's shape in this example.
The file dataset written in the previous example; the selection is the union of a 3x4 hyperslab at (1,2) and a 6x5 hyperslab at (2,4):

    53   1   2   0   3   4   0   5   6   0   7   8
     0   9  10   0  11  12   0  13  14   0  15  16
     0  17  18   0  19  20   0  21  22   0  23  24
     0   0   0  59   0  61   0   0   0   0   0   0
     0  25  26   0  27  28   0  29  30   0  31  32
     0  33  34   0  35  36  67  37  38   0  39  40
     0  41  42   0  43  44   0  45  46   0  47  48
     0   0   0   0   0   0   0   0   0   0   0   0
The memory dataset, with the selection defined as the union of a 3x4 hyperslab at (0,0) and a 6x5 hyperslab at (1,2); after the read described below, it holds the following values:

    10   0  11  12   0   0   0   0   0
    18   0  19  20   0  21  22   0   0
     0  59   0  61   0   0   0   0   0
     0   0  27  28   0  29  30   0   0
     0   0  35  36  67  37  38   0   0
     0   0  43  44   0  45  46   0   0
     0   0   0   0   0   0   0   0   0
     0   0   0   0   0   0   0   0   0
The following lines of code show the required steps.

First obtain the dataspace identifier for the dataset in the file:

    /*
     * Get dataspace of the open dataset.
     */
    fid = H5Dget_space(dataset);

Then select the hyperslab with the size 3x4 and the left upper corner at the position (1,2):
    /*
     * Select first hyperslab for the dataset in the file. The following
     * elements are selected:
     *     10  0 11 12
     *     18  0 19 20
     *      0 59  0 61
     */
    start[0] = 1; start[1] = 2;
    block[0] = 1; block[1] = 1;
    stride[0] = 1; stride[1] = 1;
    count[0] = 3; count[1] = 4;
    ret = H5Sselect_hyperslab(fid, H5S_SELECT_SET, start, stride, count, block);

Now select the second hyperslab with the size 6x5 at the position (2,4), and create the union with the first hyperslab.
    /*
     * Add second selected hyperslab to the selection.
     * The following elements are selected:
     *     19 20  0 21 22
     *      0 61  0  0  0
     *     27 28  0 29 30
     *     35 36 67 37 38
     *     43 44  0 45 46
     *      0  0  0  0  0
     * Note that the two hyperslabs overlap. Common elements are:
     *     19 20
     *      0 61
     */
    start[0] = 2; start[1] = 4;
    block[0] = 1; block[1] = 1;
    stride[0] = 1; stride[1] = 1;
    count[0] = 6; count[1] = 5;
    ret = H5Sselect_hyperslab(fid, H5S_SELECT_OR, start, stride, count, block);

Note that when we add the selected hyperslab to the union, the second argument to the H5Sselect_hyperslab function has to be H5S_SELECT_OR instead of H5S_SELECT_SET. Using H5S_SELECT_SET would reset the selection to the second hyperslab.
Now define the memory dataspace and select the union of the hyperslabs in the memory dataset.

    /*
     * Create memory dataspace.
     */
    mid = H5Screate_simple(MSPACE_RANK, mdim, NULL);

    /*
     * Select two hyperslabs in memory. The hyperslabs have the same
     * sizes and shapes as the hyperslabs selected in the file dataspace.
     */
    start[0] = 0; start[1] = 0;
    block[0] = 1; block[1] = 1;
    stride[0] = 1; stride[1] = 1;
    count[0] = 3; count[1] = 4;
    ret = H5Sselect_hyperslab(mid, H5S_SELECT_SET, start, stride, count, block);
    start[0] = 1; start[1] = 2;
    block[0] = 1; block[1] = 1;
    stride[0] = 1; stride[1] = 1;
    count[0] = 6; count[1] = 5;
    ret = H5Sselect_hyperslab(mid, H5S_SELECT_OR, start, stride, count, block);

Finally, we can read the selected data from the file dataspace into the selection in memory with one call to the H5Dread function:

    ret = H5Dread(dataset, H5T_NATIVE_INT, mid, fid, H5P_DEFAULT, matrix_out);
Example 3 includes this code along with the previous selection example.
VL datatypes are useful to the scientific community in many different ways, some of which are listed below:
    Value1: Object1, Object3, Object9
    Value2: Object0, Object12, Object14, Object21, Object22
    Value3: Object2
    Value4: <none>
    Value5: Object1, Object10, Object12
    . . .

    Feature1: Dataset1:Region, Dataset3:Region, Dataset9:Region
    Feature2: Dataset0:Region, Dataset12:Region, Dataset14:Region,
              Dataset21:Region, Dataset22:Region
    Feature3: Dataset2:Region
    Feature4: <none>
    Feature5: Dataset1:Region, Dataset10:Region, Dataset12:Region
    . . .
It is possible for H5Dread to fail while reading in VL datatype information if the memory required exceeds that which is available. In this case, the H5Dread call will fail gracefully and any VL data which has been allocated prior to the memory shortage will be returned to the system via the memory management routines detailed below. It may be possible to design a partial read API function at a later date, if demand for such a function warrants.
HDF5 has native VL strings for each language API, which are stored the same way on disk but are exported through each language API in a natural way for that language. When retrieving VL strings from a dataset, users may choose to have them stored in memory as a native VL string or in HDF5's hvl_t struct for VL datatypes.

VL strings may be created in one of two ways: by creating a VL datatype with a base type of H5T_NATIVE_ASCII, H5T_NATIVE_UNICODE, etc., or by creating a string datatype and setting its length to H5T_VARIABLE. The second method is used to access native VL strings in memory. The library will convert between the two types, but they are stored on disk using different datatypes and have different memory representations.
Multi-byte character representations, such as UNICODE or wide characters in C/C++, will need the appropriate character and string datatypes created so that they can be described properly through the datatype API. Additional conversions between these types and the current ASCII characters will also be required.

Variable-width character strings (which might be compressed data or some other encoding) are not currently handled by this design. We will evaluate how to implement them based on user feedback.
VL datatypes are created with the H5Tvlen_create() function as follows:

    hid_t H5Tvlen_create(hid_t base_type_id);

The base datatype will be the datatype that the sequence is composed of: characters for character strings, vertex coordinates for polygon lists, etc. The base datatype specified for the VL datatype can be of any HDF5 datatype, including another VL datatype, a compound datatype, or an atomic datatype.
The base datatype of a VL datatype can be queried with the H5Tget_super() function, described in the H5T documentation.
To determine the number of bytes that H5Dread may need to allocate to store VL data while reading the data, the H5Dvlen_get_buf_size() function is provided:

    herr_t H5Dvlen_get_buf_size(hid_t dataset_id,
                                hid_t type_id,
                                hid_t space_id,
                                hsize_t *size)

This routine checks the number of bytes required to store the VL data from the dataset, using the space_id for the selection in the dataset on disk and the type_id for the memory representation of the VL data. The *size value is modified according to how many bytes are required to store the VL data in memory.
The method for managing memory for VL data is chosen when calling the H5Dread and H5Dwrite functions, through the dataset transfer property list.

Default memory management is set by using H5P_DEFAULT for the dataset transfer property list identifier. If H5P_DEFAULT is used with H5Dread, the system malloc and free calls will be used for allocating and freeing memory. In such a case, H5P_DEFAULT should also be passed as the property list identifier to H5Dvlen_reclaim.
The rest of this subsection is relevant only to those who choose not to use default memory management.

The user can choose whether to use the system malloc and free calls or user-defined, or custom, memory management functions. If user-defined memory management functions are to be used, the memory allocation and free routines must be defined via H5Pset_vlen_mem_manager(), as follows:

    herr_t H5Pset_vlen_mem_manager(hid_t plist_id,
                                   H5MM_allocate_t alloc,
                                   void *alloc_info,
                                   H5MM_free_t free,
                                   void *free_info)

The alloc and free parameters identify the memory management routines to be used. If the user has defined custom memory management routines, alloc and/or free should be set to make those routine calls (i.e., the name of the routine is used as the value of the parameter); if the user prefers to use the system's malloc and/or free, the alloc and free parameters, respectively, should be set to NULL.
The prototypes for the user-defined functions would appear as follows:

    typedef void *(*H5MM_allocate_t)(size_t size, void *alloc_info);
    typedef void  (*H5MM_free_t)(void *mem, void *free_info);
The alloc_info and free_info parameters can be used to pass along any required information to the user's memory management routines.

In summary, if the user has defined custom memory management routines, the name(s) of the routines are passed in the alloc and free parameters and the custom routines' parameters are passed in the alloc_info and free_info parameters. If the user wishes to use the system malloc and free functions, the alloc and/or free parameters are set to NULL and the alloc_info and free_info parameters are ignored.
When data read into a buffer with a VL datatype is no longer needed, the memory can be released with the H5Dvlen_reclaim() function call, as follows:

    herr_t H5Dvlen_reclaim(hid_t type_id,
                           hid_t space_id,
                           hid_t plist_id,
                           void *buf);

The type_id must be the datatype stored in the buffer, space_id describes the selection for the memory buffer to free the VL datatypes within, plist_id is the dataset transfer property list which was used for the I/O transfer to create the buffer, and buf is the pointer to the buffer to free the VL memory within. The VL structures (hvl_t) in the user's buffer are modified to zero out the VL information after it has been freed.
If nested VL datatypes were used to create the buffer, this routine frees them from the bottom up, releasing all the memory without creating memory leaks.

Example 4 creates a dataset with the variable-length datatype using user-defined functions for memory management.
The array class of datatypes, H5T_ARRAY, allows the construction of true, homogeneous, multi-dimensional arrays. Since these are homogeneous arrays, each element of the array will be of the same datatype, designated at the time the array is created.

Arrays can be nested. Not only is an array datatype used as an element of an HDF5 dataset, but the elements of an array datatype may be of any datatype, including another array datatype.

Array datatypes cannot be subdivided for I/O; the entire array must be transferred from one dataset to another.

Within certain limitations, outlined in the next paragraph, array datatypes may be N-dimensional and of any dimension size. Unlimited dimensions, however, are not supported. Functionality similar to unlimited dimension arrays is available through the use of variable-length datatypes.

The maximum number of dimensions, i.e., the maximum rank, of an array datatype is specified by the HDF5 library constant H5S_MAX_RANK. The minimum rank is 1 (one). All dimension sizes must be greater than 0 (zero).
One array datatype may only be converted to another array datatype if the number of dimensions and the sizes of the dimensions are equal and the datatype of the first array's elements can be converted to the datatype of the second array's elements.
The H5T API provides one function, H5Tarray_create, for creating an array datatype, and two, H5Tget_array_ndims and H5Tget_array_dims, for working with existing array datatypes.

H5Tarray_create creates a new array datatype object. Parameters specify the base datatype of each element of the array, the rank of the array, the size of each array dimension, and the dimension permutation of the array.

    hid_t H5Tarray_create(hid_t base,
                          int rank,
                          const hsize_t dims[/*rank*/],
                          const int perm[/*rank*/])

The function H5Tget_array_ndims returns the rank of a specified array datatype.

    int H5Tget_array_ndims(hid_t adtype_id)

The function H5Tget_array_dims retrieves the permutation of the array and the size of each dimension. (Note: The permutation feature is not implemented in Release 1.4.)

    int H5Tget_array_dims(hid_t adtype_id,
                          hsize_t *dims[],
                          int *perm[])

Example 5 creates an array datatype and a dataset containing elements of the array datatype in an HDF5 file. It then writes the dataset to the file.
Properties of compound datatypes. A compound datatype is similar to a struct in C or a common block in Fortran. It is a collection of one or more atomic types or small arrays of such types. To create and use a compound datatype, you need to refer to various properties of the compound datatype:
Properties of members of a compound datatype are defined when the member is added to the compound type and cannot be subsequently modified. -
Defining compound datatypes. Compound datatypes must be built out of other datatypes. First, one creates an empty compound datatype and specifies its total size. Then members are added to the compound datatype in any order. -
Member names. Each member must have a descriptive name, which is the key used to uniquely identify the member within the compound datatype. A member name in an HDF5 datatype does not necessarily have to be the same as the name of the corresponding member in the C struct in memory, although this is often the case. Nor does one need to define all members of the C struct in the HDF5 compound datatype (or vice versa). -
Offsets. Usually a C struct will be defined to hold a data point in memory, and the offsets of the members in memory will be the offsets of the struct members from the beginning of an instance of the struct. The library defines the macro to compute the offset of a member within a struct:

    HOFFSET(s,m)

Here is an example in which a compound datatype is created to describe complex numbers whose type is defined by the complex_t struct.

    typedef struct {
        double re;   /* real part      */
        double im;   /* imaginary part */
    } complex_t;

    complex_t tmp;   /* used only to compute offsets */
    hid_t complex_id = H5Tcreate(H5T_COMPOUND, sizeof tmp);
    H5Tinsert(complex_id, "real", HOFFSET(tmp, re),
              H5T_NATIVE_DOUBLE);
    H5Tinsert(complex_id, "imaginary", HOFFSET(tmp, im),
              H5T_NATIVE_DOUBLE);
Example 6 shows how to create a compound datatype, write an array that has the compound datatype to the file, and read back subsets of the members. - - - -
An extendible dataset is one whose dimensions can grow. In HDF5, it is possible to define a dataset to have certain initial dimensions, then later to increase the size of any of the initial dimensions. -
For example, you can create and store the following 3x3 HDF5 dataset: -
    1 1 1
    1 1 1
    1 1 1
then later to extend this into a 10x3 dataset by adding 7 rows, such as this: -
    1 1 1
    1 1 1
    1 1 1
    2 2 2
    2 2 2
    2 2 2
    2 2 2
    2 2 2
    2 2 2
    2 2 2
then further extend it to a 10x5 dataset by adding two columns, such as this: -
    1 1 1 3 3
    1 1 1 3 3
    1 1 1 3 3
    2 2 2 3 3
    2 2 2 3 3
    2 2 2 3 3
    2 2 2 3 3
    2 2 2 3 3
    2 2 2 3 3
    2 2 2 3 3
HDF5 requires you to use chunking in order to define extendible datasets. Chunking makes it possible to extend datasets efficiently, without having to reorganize storage excessively.
The following operations are required in order to write an extendible dataset: -
For example, suppose we wish to create a dataset similar to the one shown above. We want to start with a 3x3 dataset, then later extend it in both directions. -
Declaring unlimited dimensions. We could declare the dataspace to have unlimited dimensions with the following code, which uses the predefined constant H5S_UNLIMITED to specify unlimited dimensions.

    hsize_t dims[2] = {3, 3};   /* dataset dimensions at the creation time */
    hsize_t maxdims[2] = {H5S_UNLIMITED, H5S_UNLIMITED};

    /*
     * Create the data space with unlimited dimensions.
     */
    dataspace = H5Screate_simple(RANK, dims, maxdims);
Enabling chunking. We can then set the dataset storage layout properties to enable chunking. We do this using the routine H5Pset_chunk:

    hid_t cparms;
    hsize_t chunk_dims[2] = {2, 5};

    /*
     * Modify dataset creation properties to enable chunking.
     */
    cparms = H5Pcreate(H5P_DATASET_CREATE);
    status = H5Pset_chunk(cparms, RANK, chunk_dims);

Then create a dataset.

    /*
     * Create a new dataset within the file using cparms
     * creation properties.
     */
    dataset = H5Dcreate(file, DATASETNAME, H5T_NATIVE_INT, dataspace,
                        cparms);
Extending dataset size. Finally, when we want to extend the size of the dataset, we invoke H5Dextend to extend the size of the dataset. In the following example, we extend the dataset along the first dimension, by seven rows, so that the new dimensions are <10,3>:

    /*
     * Extend the dataset. Dataset becomes 10 x 3.
     */
    dims[0] = dims[0] + 7;
    size[0] = dims[0];
    size[1] = dims[1];
    status = H5Dextend(dataset, size);
Example 7 shows how to create a 3x3 extendible dataset, write the dataset, extend the dataset to 10x3, write the dataset again, extend it again to 10x5, and write the dataset again.
Example 8 shows how to read the data written by Example 7.
Groups provide a mechanism for organizing meaningful and extendible sets of datasets within an HDF5 file. The H5G API contains routines for working with groups. -
Creating a group. To create a group, use H5Gcreate. For example, the following code creates a group called Data in the root group.

    /*
     * Create a group in the file.
     */
    grp = H5Gcreate(file, "/Data", 0);

A group may be created in another group by providing the absolute name of the group to the H5Gcreate function or by specifying its location. For example, to create the group Data_new in the Data group, one can use the following sequence of calls:

    /*
     * Create group "Data_new" in the group "Data" by specifying
     * absolute name of the group.
     */
    grp_new = H5Gcreate(file, "/Data/Data_new", 0);

or

    /*
     * Create group "Data_new" in the "Data" group.
     */
    grp_new = H5Gcreate(grp, "Data_new", 0);

Note that the group identifier grp is used as the first parameter in the H5Gcreate function when the relative name is provided.

The third parameter in H5Gcreate optionally specifies how much file space to reserve to store the names that will appear in this group. If a non-positive value is supplied, then a default size is chosen.

H5Gclose closes the group and releases the group identifier.
Creating a dataset in a particular group. As with groups, a dataset can be created in a particular group by specifying its absolute name as illustrated in the following example:

    /*
     * Create the dataset "Compressed_Data" in the group using the
     * absolute name. The dataset creation property list is modified
     * to use GZIP compression with the compression effort set to 6.
     * Note that compression can be used only when the dataset is
     * chunked.
     */
    dims[0] = 1000;
    dims[1] = 20;
    cdims[0] = 20;
    cdims[1] = 20;
    dataspace = H5Screate_simple(RANK, dims, NULL);
    plist = H5Pcreate(H5P_DATASET_CREATE);
    H5Pset_chunk(plist, 2, cdims);
    H5Pset_deflate(plist, 6);
    dataset = H5Dcreate(file, "/Data/Compressed_Data", H5T_NATIVE_INT,
                        dataspace, plist);

A relative dataset name may also be used when a dataset is created. First obtain the identifier of the group in which the dataset is to be created. Then create the dataset with H5Dcreate as illustrated in the following example:

    /*
     * Open the group.
     */
    grp = H5Gopen(file, "Data");

    /*
     * Create the dataset "Compressed_Data" in the "Data" group
     * by providing a group identifier and a relative dataset
     * name as parameters to the H5Dcreate function.
     */
    dataset = H5Dcreate(grp, "Compressed_Data", H5T_NATIVE_INT,
                        dataspace, plist);
Accessing an object in a group. Any object in a group can be accessed by its absolute or relative name. The following lines of code show how to use the absolute name to access the dataset Compressed_Data in the group Data created in the examples above:

    /*
     * Open the dataset "Compressed_Data" in the "Data" group.
     */
    dataset = H5Dopen(file, "/Data/Compressed_Data");

The same dataset can be accessed in another manner. First access the group to which the dataset belongs, then open the dataset.

    /*
     * Open the group "Data" in the file.
     */
    grp = H5Gopen(file, "Data");

    /*
     * Access the "Compressed_Data" dataset in the group.
     */
    dataset = H5Dopen(grp, "Compressed_Data");
Example 9 shows how to create a group in a file and a dataset in a group. It uses the iterator function H5Giterate to find the names of the objects in the root group, and H5Glink and H5Gunlink to create a new group name and delete the original name.
Think of an attribute as a small dataset that is attached to a normal dataset or group. The H5A API contains routines for working with attributes. Since attributes share many of the characteristics of datasets, the programming model for working with attributes is analogous in many ways to the model for working with datasets. The primary differences are that an attribute must be attached to a dataset or a group, and subsetting operations cannot be performed on attributes.
To create an attribute belonging to a particular dataset or group, first create a dataspace for the attribute with a call to H5Screate, then create the attribute using H5Acreate. For example, the following code creates an attribute called Integer_attribute that is a member of a dataset whose identifier is dataset. The attribute identifier is attr2. H5Awrite then sets the value of the attribute to that of the integer variable point. H5Aclose then releases the attribute identifier.

    int point = 1;   /* Value of the scalar attribute */

    /*
     * Create scalar attribute.
     */
    aid2 = H5Screate(H5S_SCALAR);
    attr2 = H5Acreate(dataset, "Integer attribute", H5T_NATIVE_INT, aid2,
                      H5P_DEFAULT);

    /*
     * Write scalar attribute.
     */
    ret = H5Awrite(attr2, H5T_NATIVE_INT, &point);

    /*
     * Close attribute dataspace.
     */
    ret = H5Sclose(aid2);

    /*
     * Close attribute.
     */
    ret = H5Aclose(attr2);
To read a scalar attribute whose name and datatype are known, first open the attribute using H5Aopen_name, then use H5Aread to get its value. For example, the following reads a scalar attribute called Integer_attribute whose datatype is a native integer and whose parent dataset has the identifier dataset.

    /*
     * Attach to the scalar attribute using attribute name, then read and
     * display its value.
     */
    attr = H5Aopen_name(dataset, "Integer attribute");
    ret = H5Aread(attr, H5T_NATIVE_INT, &point_out);
    printf("The value of the attribute \"Integer attribute\" is %d \n", point_out);
    ret = H5Aclose(attr);
Reading an attribute whose characteristics are not known. It may be necessary to query a file to obtain information about an attribute, namely its name, datatype, rank, and dimensions. The following code opens an attribute by its index value using H5Aopen_idx, then reads in information about its datatype.

    /*
     * Attach to the string attribute using its index, then read and
     * display the value.
     */
    attr = H5Aopen_idx(dataset, 2);
    atype = H5Tcopy(H5T_C_S1);
    H5Tset_size(atype, 4);
    ret = H5Aread(attr, atype, string_out);
    printf("The value of the attribute with the index 2 is %s \n", string_out);
In practice, if the characteristics of attributes are not known, the code involved in accessing and processing the attribute can be quite complex. For this reason, HDF5 includes a function called H5Aiterate, which applies a user-supplied function to each of a set of attributes. The user-supplied function can contain the code that interprets, accesses, and processes each attribute.

Example 10 illustrates the use of the H5Aiterate function, as well as the other attribute examples described above.
An object reference is based on the relative file address of the object header in the file and is constant for the life of the object. Once a reference to an object is created and stored in a dataset in the file, it can be used to dereference the object it points to. References are handy for creating a file index or for grouping related objects by storing references to them in one dataset.

Note the following elements of this example:
- The function call

      dataset = H5Dcreate(fid1, "Dataset3", H5T_STD_REF_OBJ, sid1, H5P_DEFAULT);

  creates a dataset to store references. Notice that the H5T_STD_REF_OBJ datatype is used to specify that references to objects will be stored. The datatype H5T_STD_REF_DSETREG, used to store dataset region references, is discussed later.

- Calls to the H5Rcreate function create references to the objects and store them in the buffer wbuf. The signature of the H5Rcreate function is:

      herr_t H5Rcreate(void *buf, hid_t loc_id, const char *name,
                       H5R_type_t ref_type, hid_t space_id)

- The loc_id and name parameters, here the file identifier and the absolute name /Group1/Dataset1, identify the dataset. One could also use the group identifier of group Group1 and the relative name of the dataset Dataset1 to create the same reference.

- The ref_type parameter specifies the type of the reference; here it is a reference to an object (H5R_OBJECT). Another type of reference, a reference to a dataset region (H5R_DATASET_REGION), is discussed later.

- The space_id parameter is not used for object references and is set to -1.

- The H5Dwrite function writes a dataset with the references to the file. Notice that the H5T_STD_REF_OBJ datatype is used to describe the dataset's memory datatype.

The contents of the trefer1.h5 file created by this example are as follows:

    HDF5 "trefer1.h5" {
    GROUP "/" {
       DATASET "Dataset3" {
          DATATYPE { H5T_REFERENCE }
          DATASPACE { SIMPLE ( 4 ) / ( 4 ) }
          DATA {
             DATASET 0:1696, DATASET 0:2152, GROUP 0:1320, DATATYPE 0:2268
          }
       }
       GROUP "Group1" {
          DATASET "Dataset1" {
             DATATYPE { H5T_STD_U32LE }
             DATASPACE { SIMPLE ( 4 ) / ( 4 ) }
             DATA {
                0, 3, 6, 9
             }
          }
          DATASET "Dataset2" {
             DATATYPE { H5T_STD_U8LE }
             DATASPACE { SIMPLE ( 4 ) / ( 4 ) }
             DATA {
                0, 0, 0, 0
             }
          }
          DATATYPE "Datatype1" {
             H5T_STD_I32BE "a";
             H5T_STD_I32BE "b";
             H5T_IEEE_F32BE "c";
          }
       }
    }
    }

Notice how the data in dataset Dataset3 is described. The two numbers with the colon in between represent a unique identifier of the object. These numbers are constant for the life of the object.
-
-
When reading the references back, the H5T_STD_REF_OBJ datatype must be used to describe the memory datatype.

The next program reads the dataset Dataset3 from the file created in Example 11. Then the program dereferences the references to dataset Dataset1, the group, and the named datatype, and opens those objects. The program reads and displays the dataset's data, the group's comment, and the number of members of the compound datatype.
The output of this program is as follows:

    Dataset data :
       0 3 6 9

    Group comment is Foo!

    Number of compound datatype members is 3
Note the following elements of this example:

- The H5Dread function was used to read dataset Dataset3 containing the references to the objects. The H5T_STD_REF_OBJ memory datatype was used to read references to memory.

- H5Rdereference obtains the object's identifier. The signature of this function is:

      hid_t H5Rdereference(hid_t dataset, H5R_type_t ref_type, void *ref)

- H5R_OBJECT was used to specify a reference to an object. Another type, used to specify a reference to a dataset region and discussed later, is H5R_DATASET_REGION.

- H5Rget_object_type should be used to identify the type of object the reference points to.
Note the following elements of this example:
- The function call

      dset1 = H5Dcreate(fid1, "Dataset1", H5T_STD_REF_DSETREG, sid1, H5P_DEFAULT);

  creates a dataset to store references to the dataset(s) regions (selections). Notice that the H5T_STD_REF_DSETREG datatype is used.

- The selections are made with calls to H5Sselect_hyperslab and H5Sselect_elements. The dataspace handle was created when dataset Dataset2 was created, and it describes the dataset's dataspace. It was not closed when the dataset was closed, to decrease the number of function calls used in the example. In a real application program, one should open the dataset and determine its dataspace using the H5Dget_space function.

- H5Rcreate is used to create a dataset region reference and store it in a buffer. The signature of the function is:

      herr_t H5Rcreate(void *buf, hid_t loc_id, const char *name,
                       H5R_type_t ref_type, hid_t space_id)

- The loc_id and name parameters, here the file identifier and the absolute name /Dataset2, were used to identify the dataset. The reference to the region of this dataset is stored in the buffer buf.

- For dataset region references, the H5R_DATASET_REGION datatype is used.

The contents of the file trefer2.h5 created by this program are as follows:
    HDF5 "trefer2.h5" {
    GROUP "/" {
       DATASET "Dataset1" {
          DATATYPE { H5T_REFERENCE }
          DATASPACE { SIMPLE ( 4 ) / ( 4 ) }
          DATA {
             DATASET 0:744 {(2,2)-(7,7)}, DATASET 0:744 {(6,9), (2,2), (8,4), (1,6),
             (2,8), (3,2), (0,4), (9,0), (7,1), (3,3)}, NULL, NULL
          }
       }
       DATASET "Dataset2" {
          DATATYPE { H5T_STD_U8LE }
          DATASPACE { SIMPLE ( 10, 10 ) / ( 10, 10 ) }
          DATA {
             0, 3, 6, 9, 12, 15, 18, 21, 24, 27,
             30, 33, 36, 39, 42, 45, 48, 51, 54, 57,
             60, 63, 66, 69, 72, 75, 78, 81, 84, 87,
             90, 93, 96, 99, 102, 105, 108, 111, 114, 117,
             120, 123, 126, 129, 132, 135, 138, 141, 144, 147,
             150, 153, 156, 159, 162, 165, 168, 171, 174, 177,
             180, 183, 186, 189, 192, 195, 198, 201, 204, 207,
             210, 213, 216, 219, 222, 225, 228, 231, 234, 237,
             240, 243, 246, 249, 252, 255, 255, 255, 255, 255,
             255, 255, 255, 255, 255, 255, 255, 255, 255, 255
          }
       }
    }
    }

Notice how raw data of the dataset with the dataset regions is displayed. Each element of the raw data consists of a reference to the dataset (DATASET number1:number2) and its selected region.
-If the selection is a hyperslab, the corner coordinates of the hyperslab
-are displayed.
-For the point selection, the coordinates of each point are displayed.
-Since only two selections were stored, the third and fourth elements of the
-dataset Dataset1
are set to NULL
.
-This was done by the buffer initialization in the program.
-
-H5T_STD_REF_DSETREG
must be used during
- read operation.
-H5Rdereference
to obtain the dataset identifier
- from the read dataset region reference.
- OR -- Use
H5Rget_region
to obtain the dataspace identifier for
- the dataset containing the selection from the read dataset region reference.
-The H5Sget_select_* functions can be used to obtain information
- about the selection.
--Output: -The output of this program is : -
- - Number of elements in the dataset is : 100 - 0 3 6 9 12 15 18 21 24 27 - 30 33 36 39 42 45 48 51 54 57 - 60 63 66 69 72 75 78 81 84 87 - 90 93 96 99 102 105 108 111 114 117 - 120 123 126 129 132 135 138 141 144 147 - 150 153 156 159 162 165 168 171 174 177 - 180 183 186 189 192 195 198 201 204 207 - 210 213 216 219 222 225 228 231 234 237 - 240 243 246 249 252 255 255 255 255 255 - 255 255 255 255 255 255 255 255 255 255 - Number of elements in the hyperslab is : 36 - Hyperslab coordinates are : - ( 2 , 2 ) ( 7 , 7 ) - Number of selected elements is : 10 - Coordinates of selected elements are : - ( 6 , 9 ) - ( 2 , 2 ) - ( 8 , 4 ) - ( 1 , 6 ) - ( 2 , 8 ) - ( 3 , 2 ) - ( 0 , 4 ) - ( 9 , 0 ) - ( 7 , 1 ) - ( 3 , 3 ) - -- -Notes: -Note the following elements of this example: -
H5Dread
- with the H5T_STD_REF_DSETREG
datatype specified.
-- dset2 = H5Rdereference (dset1,H5R_DATASET_REGION,&rbuf[0]); -- or to obtain spacial information (dataspace and selection) with the call - to
H5Rget_region
:
-- sid2=H5Rget_region(dset1,H5R_DATASET_REGION,&rbuf[0]); -- The reference to the dataset region has information for both the dataset - itself and its selection. In both functions: -
H5Sget_select
*
- functions used to obtain information about selections:
-
-H5Sget_select_npoints: returns the number of elements in the hyperslab
-H5Sget_select_hyper_nblocks: returns the number of blocks in the hyperslab
-H5Sget_select_blocklist: returns the "lower left" and "upper right" coordinates of the blocks in the hyperslab selection
-H5Sget_select_bounds: returns the coordinates of the "minimal" block containing a hyperslab selection
-H5Sget_select_elem_npoints: returns the number of points in the element selection
-H5Sget_select_elem_points: returns the coordinates of the element selection
-(Return to TOC) - - -
-Introduction to HDF5 -HDF5 User Guide - - |
-
-HDF5 Reference Manual -Other HDF5 documents and links - |
-
-
-HDF Help Desk
- -Describes HDF5 Release 1.7, the unreleased development branch; working toward HDF5 Release 1.8.0 - - -Last modified: 1 July 2004 - - | -Copyright - |
Example programs/sections of code below: -
Notes:
-This example creates a new HDF5 file with write access.
-Because the H5F_ACC_EXCL flag is used, the call fails if the file already exists;
-the H5F_ACC_TRUNC flag would be necessary to overwrite an existing file.
-
-
Code:
-
-
-
-
- hid_t file_id;
-
- file_id=H5Fcreate("example1.h5",H5F_ACC_EXCL,H5P_DEFAULT,H5P_DEFAULT);
-
- H5Fclose(file_id);
-
-
Notes:
-This example creates a 4-dimensional dataset of 32-bit floating-point
-numbers, corresponding to the current Scientific Dataset functionality.
-
-
Code:
-
-
-
-
- 1 hid_t file_id; /* new file's ID */
- 2 hid_t dim_id; /* new dimensionality's ID */
- 3 int rank=4; /* the number of dimensions */
- 4 hsize_t dims[4]={6,5,4,3}; /* the size of each dimension */
- 5 hid_t dataset_id; /* new dataset's ID */
- 6 float buf[6][5][4][3]; /* storage for the dataset's data */
- 7 herr_t status; /* function return status */
- 8
- 9 file_id = H5Fcreate ("example3.h5", H5F_ACC_TRUNC, H5P_DEFAULT,
-10 H5P_DEFAULT);
-11 assert (file_id >= 0);
-12
-13 /* Create & initialize a dimensionality object */
-14 dim_id = H5Screate_simple (rank, dims);
-15 assert (dim_id >= 0);
-16
-17 /* Create & initialize the dataset object */
-18 dataset_id = H5Dcreate (file_id, "Simple Object", H5T_NATIVE_FLOAT,
-19 dim_id, H5P_DEFAULT);
-20 assert (dataset_id >= 0);
-21
-22 <initialize data array>
-23
-24 /* Write the entire dataset out */
-25 status = H5Dwrite (dataset_id, H5T_NATIVE_FLOAT, H5S_ALL, H5S_ALL,
-26 H5P_DEFAULT, buf);
-27 assert (status >= 0);
-28
-29 /* Release the IDs we've created */
-30 H5Sclose (dim_id);
-31 H5Dclose (dataset_id);
-32 H5Fclose (file_id);
-
Notes:
-This example shows how to get the information for and display a generic
-dataset.
-
-
Code:
-
-
diff --git a/doc/html/H5.user.PrintGen.html b/doc/html/H5.user.PrintGen.html
deleted file mode 100644
index b73f093..0000000
--- a/doc/html/H5.user.PrintGen.html
+++ /dev/null
@@ -1,132 +0,0 @@
-
-
- 1 hid_t file_id; /* file's ID */
- 2 hid_t dataset_id; /* dataset's ID in memory */
- 3 hid_t space_id; /* dataspace's ID in memory */
- 4 uintn nelems; /* number of elements in array */
- 5 double *buf; /* pointer to the dataset's data */
- 6 herr_t status; /* function return value */
- 7
- 8 file_id = H5Fopen ("example6.h5", H5F_ACC_RDONLY, H5P_DEFAULT);
- 9 assert (file_id >= 0);
-10
-11 /* Attach to a datatype object */
-12 dataset_id = H5Dopen (file_id, "dataset1");
-13 assert (dataset_id >= 0);
-14
-15 /* Get the OID for the dataspace */
-16 space_id = H5Dget_space (dataset_id);
-17 assert (space_id >= 0);
-18
-19 /* Allocate space for the data */
-20 nelems = H5Sget_npoints (space_id);
-21 buf = malloc (nelems * sizeof(double));
-22
-23 /* Read in the dataset */
-24 status = H5Dread (dataset_id, H5T_NATIVE_DOUBLE, H5S_ALL, H5S_ALL,
-25 H5P_DEFAULT, buf);
-26 assert (status >= 0);
-27
-28 /* Release the IDs we've accessed */
-29 H5Sclose (space_id);
-30 H5Dclose (dataset_id);
-31 H5Fclose (file_id);
-
Title Page - | - | Title page. - |
Copyright - | - | The HDF5 copyright notice, contact information, - and other back-of-the-title-page material. - |
TOC - | - | Table of contents. - |
HDF5 Files - | - | A guide to the H5F interface. - |
Datasets - | A guide to the H5D - interface. - | |
Datatypes - | A guide to the H5T - interface. - | |
Dataspaces - | A guide to the H5S - interface. - | |
Groups - | A guide to the H5G - interface. - | |
References and - Identifiers - | A guide to the H5R - and H5I interfaces. - | |
Attributes - | A guide to the H5A - interface. - | |
Property Lists - | A guide to the H5P - interface. - | |
Error Handling - | A guide to the H5E - interface. - | |
Filters - | A guide to the H5Z - interface. - | |
Caching - | A guide for meta and raw data caching. - | |
Dataset Chunking - | A guide to the issues and pitfalls - of dataset chunking. - | |
Mounting Files - | A guide to mounting files containing - external HDF5 datasets. - | |
Debugging - | A guide to debugging HDF5 API calls. - | |
Environment Variables - and Configuration Parameters - | A list of HDF5 environment variables
- and configuration parameters. - | |
DDL for HDF5 - | A DDL in BNF for HDF5. - |
-HDF Help Desk
- -Last modified: 22 July 1999 - - |
-A Note to the Reader:
-The primary HDF5 user documents are the online HTML documents
-distributed with the HDF5 code and binaries and found on the HDF5 website.
-These PDF and PostScript versions are generated from the HTML to provide
-the following capabilities:
-- To provide a version that can be reasonably printed in a
- single print operation.
-- To provide an easily searchable version.
-
-In this package, you will find four PDF and PostScript documents:
-- Introduction to HDF5
-- A User's Guide for HDF5
-- HDF5 Reference Manual
-- All three of the above documents concatenated into a single file
-
-Note that these versions were created in response to user feedback;
-the HDF Group is eager to hear from you so as to improve the delivered
-product.
- HDF5 documents and links - Introduction to HDF5 - HDF5 Reference Manual - HDF5 User's Guide for Release 1.6 - - |
-
- And in this document, the
- HDF5 User's Guide from Release 1.4.5:
- - Files - Datasets - Datatypes - Dataspaces - Groups - - References - Attributes - Property Lists - Error Handling - - Filters - Caching - Chunking - Mounting Files - - Performance - Debugging - Environment - DDL - |
This document is the HDF5 User's Guide from - HDF5 Release 1.4.5. - Since a - new - HDF5 User's Guide is under development, - this version has not been updated for Release 1.6.0. -
The following documents form a loosely organized user's guide - to the HDF5 library. -
- -
HDF5 Files - | - | A guide to the H5F interface. - |
Datasets - | A guide to the H5D - interface. - | |
Datatypes - | A guide to the H5T - interface. - | |
Dataspaces - | A guide to the H5S - interface. - | |
Groups - | A guide to the H5G - interface. - | |
References and - Identifiers - | A guide to the H5R - and H5I interfaces. - | |
Attributes - | A guide to the H5A - interface. - | |
Property Lists - | A guide to the H5P - interface. - | |
Error Handling - | A guide to the H5E - interface. - | |
Filters - | A guide to the H5Z - interface. - | |
Caching - | A guide for meta and raw data caching. - | |
Dataset Chunking - | A guide to the issues and pitfalls - of dataset chunking. - | |
Mounting Files - | A guide to mounting files containing - external HDF5 datasets. - | |
Performance - | A guide to performance issues and - analysis tools. - | |
Debugging - | A guide to debugging HDF5 API calls. - | |
Environment Variables
- and
- Configuration - Parameters - | A list of HDF5 environment variables
- and configuration parameters. - | |
DDL for HDF5 - | A DDL in BNF for HDF5. - |
-
-
-HDF Help Desk
- -Describes HDF5 Release 1.4.5, February 2003 - - -Last modified: 3 July 2003 - - - | -Copyright - |
The HDF5 raw data pipeline is a complicated beast that handles - all aspects of raw data storage and transfer of that data - between the file and the application. Data can be stored - contiguously (internal or external), in variable size external - segments, or regularly chunked; it can be sparse, extendible, - and/or compressible. Data transfers must be able to convert from - one data space to another, convert from one number type to - another, and perform partial I/O operations. Furthermore, - applications will expect their common usage of the pipeline to - perform well. - -
To accomplish these goals, the pipeline has been designed in a - modular way so no single subroutine is overly complicated and so - functionality can be inserted easily at the appropriate - locations in the pipeline. A general pipeline was developed and - then certain paths through the pipeline were optimized for - performance. - -
We describe only the file-to-memory side of the pipeline since - the memory-to-file side is a mirror image. We also assume that a - proper hyperslab of a simple data space is being read from the - file into a proper hyperslab of a simple data space in memory, - and that the data type is a compound type which may require - various number conversions on its members. - - - -
The diagrams should be read from the top down. The Line A
- in the figure above shows that H5Dread()
copies
- data from a hyperslab of a file dataset to a hyperslab of an
- application buffer by calling H5D_read()
. And
- H5D_read()
calls, in a loop,
- H5S_simp_fgath()
, H5T_conv_struct()
,
- and H5S_simp_mscat()
. A temporary buffer, TCONV, is
- loaded with data points from the file, then data type conversion
- is performed on the temporary buffer, and finally data points
- are scattered out to application memory. Thus, data type
- conversion is an in-place operation and data space conversion
- consists of two steps. An additional temporary buffer, BKG, is
- large enough to hold N instances of the destination
- data type where N is the same number of data points
- that can be held by the TCONV buffer (which is large enough to
- hold either source or destination data points).
-
-
The application sets an upper limit for the size of the TCONV
- buffer and optionally supplies a buffer. If no buffer is
- supplied then one will be created by calling
- malloc()
when the pipeline is executed (when
- necessary) and freed when the pipeline exits. The size of the
- BKG buffer depends on the size of the TCONV buffer and if the
- application supplies a BKG buffer it should be at least as large
- as the TCONV buffer. The default size for these buffers is one
- megabyte but the buffer might not be used to full capacity if
- the buffer size is not an integer multiple of the source or
- destination data point size (whichever is larger, but only
- destination for the BKG buffer).
-
-
-
-
Occasionally the destination data points will be partially
- initialized and the H5Dread()
operation should not
- clobber those values. For instance, the destination type might
- be a struct with members a
and b
where
- a
is already initialized and we're reading
- b
from the file. An extra line, G, is added to the
- pipeline to provide the type conversion functions with the
- existing data.
-
-
-
-
It will most likely be quite common that no data type - conversion is necessary. In such cases a temporary buffer for - data type conversion is not needed and data space conversion - can happen in a single step. In fact, when the source and - destination data are both contiguous (they aren't in the - picture) the loop degenerates to a single iteration. - - - - -
So far we've looked only at internal contiguous storage, but by - replacing Line B in Figures 1 and 2 and Line A in Figure 3 with - Figure 4 the pipeline is able to handle regularly chunked - objects. Line B of Figure 4 is executed once for each chunk - which contains data to be read and the chunk address is found by - looking at a multi-dimensional key in a chunk B-tree which has - one entry per chunk. - - - -
If a single chunk is requested and the destination buffer is - the same size/shape as the chunk, then the CHUNK buffer is - bypassed and the destination buffer is used instead as shown in - Figure 5. - - - -
Table of Contents | ||
---|---|---|
-
-
-    
- 1. Creating and writing a
- dataset -     - 2. Reading a hyperslab -     - 3. Writing selected data -     - 4. Working with compound datatypes -     - 5. Creating and writing an extendible -     -     - dataset -     - 6. Reading data -     - 7. Creating groups - - - |
-
-
-    
- 8. Writing and reading
- attributes -     - 9. Creating and writing references -     -     - to objects -     - 10. Reading references to objects -     - 11. Creating and writing references -     -     - to dataset regions -     - 12. Reading references to dataset -     -     - regions - - |
- -
This example creates a 2-dimensional HDF5 dataset of little-endian 32-bit integers.
- -/* - * This example writes data to the HDF5 file. - * Data conversion is performed during write operation. - */ - -#include- - - - -- -#define FILE "SDS.h5" -#define DATASETNAME "IntArray" -#define NX 5 /* dataset dimensions */ -#define NY 6 -#define RANK 2 - -int -main (void) -{ - hid_t file, dataset; /* file and dataset handles */ - hid_t datatype, dataspace; /* handles */ - hsize_t dimsf[2]; /* dataset dimensions */ - herr_t status; - int data[NX][NY]; /* data to write */ - int i, j; - - /* - * Data and output buffer initialization. - */ - for (j = 0; j < NX; j++) { - for (i = 0; i < NY; i++) - data[j][i] = i + j; - } - /* - * 0 1 2 3 4 5 - * 1 2 3 4 5 6 - * 2 3 4 5 6 7 - * 3 4 5 6 7 8 - * 4 5 6 7 8 9 - */ - - /* - * Create a new file using H5F_ACC_TRUNC access, - * default file creation properties, and default file - * access properties. - */ - file = H5Fcreate(FILE, H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT); - - /* - * Describe the size of the array and create the data space for fixed - * size dataset. - */ - dimsf[0] = NX; - dimsf[1] = NY; - dataspace = H5Screate_simple(RANK, dimsf, NULL); - - /* - * Define datatype for the data in the file. - * We will store little endian INT numbers. - */ - datatype = H5Tcopy(H5T_NATIVE_INT); - status = H5Tset_order(datatype, H5T_ORDER_LE); - - /* - * Create a new dataset within the file using defined dataspace and - * datatype and default dataset creation properties. - */ - dataset = H5Dcreate(file, DATASETNAME, datatype, dataspace, - H5P_DEFAULT); - - /* - * Write the data to the dataset using default transfer properties. - */ - status = H5Dwrite(dataset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, - H5P_DEFAULT, data); - - /* - * Close/release resources. - */ - H5Sclose(dataspace); - H5Tclose(datatype); - H5Dclose(dataset); - H5Fclose(file); - - return 0; -} -
-
(Return to TOC) - - - -
This example reads a hyperslab from a 2-d HDF5 dataset into a 3-d dataset in memory. -
- -/* - * This example reads hyperslab from the SDS.h5 file - * created by h5_write.c program into two-dimensional - * plane of the three-dimensional array. - * Information about dataset in the SDS.h5 file is obtained. - */ - -#include "hdf5.h" - -#define FILE "SDS.h5" -#define DATASETNAME "IntArray" -#define NX_SUB 3 /* hyperslab dimensions */ -#define NY_SUB 4 -#define NX 7 /* output buffer dimensions */ -#define NY 7 -#define NZ 3 -#define RANK 2 -#define RANK_OUT 3 - -int -main (void) -{ - hid_t file, dataset; /* handles */ - hid_t datatype, dataspace; - hid_t memspace; - H5T_class_t class; /* datatype class */ - H5T_order_t order; /* data order */ - size_t size; /* - * size of the data element - * stored in file - */ - hsize_t dimsm[3]; /* memory space dimensions */ - hsize_t dims_out[2]; /* dataset dimensions */ - herr_t status; - - int data_out[NX][NY][NZ ]; /* output buffer */ - - hsize_t count[2]; /* size of the hyperslab in the file */ - hsize_t offset[2]; /* hyperslab offset in the file */ - hsize_t count_out[3]; /* size of the hyperslab in memory */ - hsize_t offset_out[3]; /* hyperslab offset in memory */ - int i, j, k, status_n, rank; - - for (j = 0; j < NX; j++) { - for (i = 0; i < NY; i++) { - for (k = 0; k < NZ ; k++) - data_out[j][i][k] = 0; - } - } - - /* - * Open the file and the dataset. - */ - file = H5Fopen(FILE, H5F_ACC_RDONLY, H5P_DEFAULT); - dataset = H5Dopen(file, DATASETNAME); - - /* - * Get datatype and dataspace handles and then query - * dataset class, order, size, rank and dimensions. 
- */ - datatype = H5Dget_type(dataset); /* datatype handle */ - class = H5Tget_class(datatype); - if (class == H5T_INTEGER) printf("Data set has INTEGER type \n"); - order = H5Tget_order(datatype); - if (order == H5T_ORDER_LE) printf("Little endian order \n"); - - size = H5Tget_size(datatype); - printf(" Data size is %d \n", size); - - dataspace = H5Dget_space(dataset); /* dataspace handle */ - rank = H5Sget_simple_extent_ndims(dataspace); - status_n = H5Sget_simple_extent_dims(dataspace, dims_out, NULL); - printf("rank %d, dimensions %lu x %lu \n", rank, - (unsigned long)(dims_out[0]), (unsigned long)(dims_out[1])); - - /* - * Define hyperslab in the dataset. - */ - offset[0] = 1; - offset[1] = 2; - count[0] = NX_SUB; - count[1] = NY_SUB; - status = H5Sselect_hyperslab(dataspace, H5S_SELECT_SET, offset, NULL, - count, NULL); - - /* - * Define the memory dataspace. - */ - dimsm[0] = NX; - dimsm[1] = NY; - dimsm[2] = NZ ; - memspace = H5Screate_simple(RANK_OUT,dimsm,NULL); - - /* - * Define memory hyperslab. - */ - offset_out[0] = 3; - offset_out[1] = 0; - offset_out[2] = 0; - count_out[0] = NX_SUB; - count_out[1] = NY_SUB; - count_out[2] = 1; - status = H5Sselect_hyperslab(memspace, H5S_SELECT_SET, offset_out, NULL, - count_out, NULL); - - /* - * Read data from hyperslab in the file into the hyperslab in - * memory and display. - */ - status = H5Dread(dataset, H5T_NATIVE_INT, memspace, dataspace, - H5P_DEFAULT, data_out); - for (j = 0; j < NX; j++) { - for (i = 0; i < NY; i++) printf("%d ", data_out[j][i][0]); - printf("\n"); - } - /* - * 0 0 0 0 0 0 0 - * 0 0 0 0 0 0 0 - * 0 0 0 0 0 0 0 - * 3 4 5 6 0 0 0 - * 4 5 6 7 0 0 0 - * 5 6 7 8 0 0 0 - * 0 0 0 0 0 0 0 - */ - - /* - * Close/release resources. - */ - H5Tclose(datatype); - H5Dclose(dataset); - H5Sclose(dataspace); - H5Sclose(memspace); - H5Fclose(file); - - return 0; -} -- - - - - -
-
(Return to TOC) - - -
This example shows how to use the selection capabilities of HDF5 to write selected data to a file. It includes the examples discussed in the text. - -
- -/* - * This program shows how the H5Sselect_hyperslab and H5Sselect_elements - * functions are used to write selected data from memory to the file. - * Program takes 48 elements from the linear buffer and writes them into - * the matrix using 3x2 blocks, (4,3) stride and (2,4) count. - * Then four elements of the matrix are overwritten with the new values and - * file is closed. Program reopens the file and reads and displays the result. - */ - -#include- - - - -- -#define FILE "Select.h5" - -#define MSPACE1_RANK 1 /* Rank of the first dataset in memory */ -#define MSPACE1_DIM 50 /* Dataset size in memory */ - -#define MSPACE2_RANK 1 /* Rank of the second dataset in memory */ -#define MSPACE2_DIM 4 /* Dataset size in memory */ - -#define FSPACE_RANK 2 /* Dataset rank as it is stored in the file */ -#define FSPACE_DIM1 8 /* Dimension sizes of the dataset as it is - stored in the file */ -#define FSPACE_DIM2 12 - - /* We will read dataset back from the file - to the dataset in memory with these - dataspace parameters. 
*/ -#define MSPACE_RANK 2 -#define MSPACE_DIM1 8 -#define MSPACE_DIM2 12 - -#define NPOINTS 4 /* Number of points that will be selected - and overwritten */ -int main (void) -{ - - hid_t file, dataset; /* File and dataset identifiers */ - hid_t mid1, mid2, fid; /* Dataspace identifiers */ - hsize_t dim1[] = {MSPACE1_DIM}; /* Dimension size of the first dataset - (in memory) */ - hsize_t dim2[] = {MSPACE2_DIM}; /* Dimension size of the second dataset - (in memory */ - hsize_t fdim[] = {FSPACE_DIM1, FSPACE_DIM2}; - /* Dimension sizes of the dataset (on disk) */ - - hsize_t start[2]; /* Start of hyperslab */ - hsize_t stride[2]; /* Stride of hyperslab */ - hsize_t count[2]; /* Block count */ - hsize_t block[2]; /* Block sizes */ - - hsize_t coord[NPOINTS][FSPACE_RANK]; /* Array to store selected points - from the file dataspace */ - herr_t ret; - uint i,j; - int matrix[MSPACE_DIM1][MSPACE_DIM2]; - int vector[MSPACE1_DIM]; - int values[] = {53, 59, 61, 67}; /* New values to be written */ - - /* - * Buffers' initialization. - */ - vector[0] = vector[MSPACE1_DIM - 1] = -1; - for (i = 1; i < MSPACE1_DIM - 1; i++) vector[i] = i; - - for (i = 0; i < MSPACE_DIM1; i++) { - for (j = 0; j < MSPACE_DIM2; j++) - matrix[i][j] = 0; - } - - /* - * Create a file. - */ - file = H5Fcreate(FILE, H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT); - - /* - * Create dataspace for the dataset in the file. - */ - fid = H5Screate_simple(FSPACE_RANK, fdim, NULL); - - /* - * Create dataset and write it into the file. - */ - dataset = H5Dcreate(file, "Matrix in file", H5T_NATIVE_INT, fid, H5P_DEFAULT); - ret = H5Dwrite(dataset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, H5P_DEFAULT, matrix); - - /* - * Select hyperslab for the dataset in the file, using 3x2 blocks, - * (4,3) stride and (2,4) count starting at the position (0,1). 
- */ - start[0] = 0; start[1] = 1; - stride[0] = 4; stride[1] = 3; - count[0] = 2; count[1] = 4; - block[0] = 3; block[1] = 2; - ret = H5Sselect_hyperslab(fid, H5S_SELECT_SET, start, stride, count, block); - - /* - * Create dataspace for the first dataset. - */ - mid1 = H5Screate_simple(MSPACE1_RANK, dim1, NULL); - - /* - * Select hyperslab. - * We will use 48 elements of the vector buffer starting at the second element. - * Selected elements are 1 2 3 . . . 48 - */ - start[0] = 1; - stride[0] = 1; - count[0] = 48; - block[0] = 1; - ret = H5Sselect_hyperslab(mid1, H5S_SELECT_SET, start, stride, count, block); - - /* - * Write selection from the vector buffer to the dataset in the file. - * - * File dataset should look like this: - * 0 1 2 0 3 4 0 5 6 0 7 8 - * 0 9 10 0 11 12 0 13 14 0 15 16 - * 0 17 18 0 19 20 0 21 22 0 23 24 - * 0 0 0 0 0 0 0 0 0 0 0 0 - * 0 25 26 0 27 28 0 29 30 0 31 32 - * 0 33 34 0 35 36 0 37 38 0 39 40 - * 0 41 42 0 43 44 0 45 46 0 47 48 - * 0 0 0 0 0 0 0 0 0 0 0 0 - */ - ret = H5Dwrite(dataset, H5T_NATIVE_INT, mid1, fid, H5P_DEFAULT, vector); - - /* - * Reset the selection for the file dataspace fid. - */ - ret = H5Sselect_none(fid); - - /* - * Create dataspace for the second dataset. - */ - mid2 = H5Screate_simple(MSPACE2_RANK, dim2, NULL); - - /* - * Select sequence of NPOINTS points in the file dataspace. - */ - coord[0][0] = 0; coord[0][1] = 0; - coord[1][0] = 3; coord[1][1] = 3; - coord[2][0] = 3; coord[2][1] = 5; - coord[3][0] = 5; coord[3][1] = 6; - - ret = H5Sselect_elements(fid, H5S_SELECT_SET, NPOINTS, - (const hsize_t **)coord); - - /* - * Write new selection of points to the dataset. 
- */ - ret = H5Dwrite(dataset, H5T_NATIVE_INT, mid2, fid, H5P_DEFAULT, values); - - /* - * File dataset should look like this: - * 53 1 2 0 3 4 0 5 6 0 7 8 - * 0 9 10 0 11 12 0 13 14 0 15 16 - * 0 17 18 0 19 20 0 21 22 0 23 24 - * 0 0 0 59 0 61 0 0 0 0 0 0 - * 0 25 26 0 27 28 0 29 30 0 31 32 - * 0 33 34 0 35 36 67 37 38 0 39 40 - * 0 41 42 0 43 44 0 45 46 0 47 48 - * 0 0 0 0 0 0 0 0 0 0 0 0 - * - */ - - /* - * Close memory file and memory dataspaces. - */ - ret = H5Sclose(mid1); - ret = H5Sclose(mid2); - ret = H5Sclose(fid); - - /* - * Close dataset. - */ - ret = H5Dclose(dataset); - - /* - * Close the file. - */ - ret = H5Fclose(file); - - /* - * Open the file. - */ - file = H5Fopen(FILE, H5F_ACC_RDONLY, H5P_DEFAULT); - - /* - * Open the dataset. - */ - dataset = dataset = H5Dopen(file,"Matrix in file"); - - /* - * Read data back to the buffer matrix. - */ - ret = H5Dread(dataset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, - H5P_DEFAULT, matrix); - - /* - * Display the result. - */ - for (i=0; i < MSPACE_DIM1; i++) { - for(j=0; j < MSPACE_DIM2; j++) printf("%3d ", matrix[i][j]); - printf("\n"); - } - - return 0; -} -
-
(Return to TOC) - - -
This example shows how to create a compound datatype, write an array which has the compound datatype to the file, and read back subsets of fields. -
- -/* - * This example shows how to create a compound datatype, - * write an array which has the compound datatype to the file, - * and read back fields' subsets. - */ - -#include "hdf5.h" - -#define FILE "SDScompound.h5" -#define DATASETNAME "ArrayOfStructures" -#define LENGTH 10 -#define RANK 1 - -int -main(void) -{ - - /* First structure and dataset*/ - typedef struct s1_t { - int a; - float b; - double c; - } s1_t; - s1_t s1[LENGTH]; - hid_t s1_tid; /* File datatype identifier */ - - /* Second structure (subset of s1_t) and dataset*/ - typedef struct s2_t { - double c; - int a; - } s2_t; - s2_t s2[LENGTH]; - hid_t s2_tid; /* Memory datatype handle */ - - /* Third "structure" ( will be used to read float field of s1) */ - hid_t s3_tid; /* Memory datatype handle */ - float s3[LENGTH]; - - int i; - hid_t file, dataset, space; /* Handles */ - herr_t status; - hsize_t dim[] = {LENGTH}; /* Dataspace dimensions */ - - - /* - * Initialize the data - */ - for (i = 0; i< LENGTH; i++) { - s1[i].a = i; - s1[i].b = i*i; - s1[i].c = 1./(i+1); - } - - /* - * Create the data space. - */ - space = H5Screate_simple(RANK, dim, NULL); - - /* - * Create the file. - */ - file = H5Fcreate(FILE, H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT); - - /* - * Create the memory datatype. - */ - s1_tid = H5Tcreate (H5T_COMPOUND, sizeof(s1_t)); - H5Tinsert(s1_tid, "a_name", HOFFSET(s1_t, a), H5T_NATIVE_INT); - H5Tinsert(s1_tid, "c_name", HOFFSET(s1_t, c), H5T_NATIVE_DOUBLE); - H5Tinsert(s1_tid, "b_name", HOFFSET(s1_t, b), H5T_NATIVE_FLOAT); - - /* - * Create the dataset. - */ - dataset = H5Dcreate(file, DATASETNAME, s1_tid, space, H5P_DEFAULT); - - /* - * Wtite data to the dataset; - */ - status = H5Dwrite(dataset, s1_tid, H5S_ALL, H5S_ALL, H5P_DEFAULT, s1); - - /* - * Release resources - */ - H5Tclose(s1_tid); - H5Sclose(space); - H5Dclose(dataset); - H5Fclose(file); - - /* - * Open the file and the dataset. 
- */ - file = H5Fopen(FILE, H5F_ACC_RDONLY, H5P_DEFAULT); - - dataset = H5Dopen(file, DATASETNAME); - - /* - * Create a datatype for s2 - */ - s2_tid = H5Tcreate(H5T_COMPOUND, sizeof(s2_t)); - - H5Tinsert(s2_tid, "c_name", HOFFSET(s2_t, c), H5T_NATIVE_DOUBLE); - H5Tinsert(s2_tid, "a_name", HOFFSET(s2_t, a), H5T_NATIVE_INT); - - /* - * Read two fields c and a from s1 dataset. Fields in the file - * are found by their names "c_name" and "a_name". - */ - status = H5Dread(dataset, s2_tid, H5S_ALL, H5S_ALL, H5P_DEFAULT, s2); - - /* - * Display the fields - */ - printf("\n"); - printf("Field c : \n"); - for( i = 0; i < LENGTH; i++) printf("%.4f ", s2[i].c); - printf("\n"); - - printf("\n"); - printf("Field a : \n"); - for( i = 0; i < LENGTH; i++) printf("%d ", s2[i].a); - printf("\n"); - - /* - * Create a datatype for s3. - */ - s3_tid = H5Tcreate(H5T_COMPOUND, sizeof(float)); - - status = H5Tinsert(s3_tid, "b_name", 0, H5T_NATIVE_FLOAT); - - /* - * Read field b from s1 dataset. Field in the file is found by its name. - */ - status = H5Dread(dataset, s3_tid, H5S_ALL, H5S_ALL, H5P_DEFAULT, s3); - - /* - * Display the field - */ - printf("\n"); - printf("Field b : \n"); - for( i = 0; i < LENGTH; i++) printf("%.4f ", s3[i]); - printf("\n"); - - /* - * Release resources - */ - H5Tclose(s2_tid); - H5Tclose(s3_tid); - H5Dclose(dataset); - H5Fclose(file); - - return 0; -} -- - - - -
-
(Return to TOC) - - -
This example shows how to create a 3x3 extendible dataset, to extend the dataset to 10x3, then to extend it again to 10x5. -
- -/* - * This example shows how to work with extendible dataset. - * In the current version of the library dataset MUST be - * chunked. - * - */ - -#include "hdf5.h" - -#define FILE "SDSextendible.h5" -#define DATASETNAME "ExtendibleArray" -#define RANK 2 -#define NX 10 -#define NY 5 - -int -main (void) -{ - hid_t file; /* handles */ - hid_t dataspace, dataset; - hid_t filespace; - hid_t cparms; - hsize_t dims[2] = { 3, 3}; /* - * dataset dimensions - * at the creation time - */ - hsize_t dims1[2] = { 3, 3}; /* data1 dimensions */ - hsize_t dims2[2] = { 7, 1}; /* data2 dimensions */ - hsize_t dims3[2] = { 2, 2}; /* data3 dimensions */ - - hsize_t maxdims[2] = {H5S_UNLIMITED, H5S_UNLIMITED}; - hsize_t chunk_dims[2] ={2, 5}; - hsize_t size[2]; - hsize_t offset[2]; - - herr_t status; - - int data1[3][3] = { {1, 1, 1}, /* data to write */ - {1, 1, 1}, - {1, 1, 1} }; - - int data2[7] = { 2, 2, 2, 2, 2, 2, 2}; - - int data3[2][2] = { {3, 3}, - {3, 3} }; - - /* - * Create the data space with unlimited dimensions. - */ - dataspace = H5Screate_simple(RANK, dims, maxdims); - - /* - * Create a new file. If file exists its contents will be overwritten. - */ - file = H5Fcreate(FILE, H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT); - - /* - * Modify dataset creation properties, i.e. enable chunking. - */ - cparms = H5Pcreate (H5P_DATASET_CREATE); - status = H5Pset_chunk( cparms, RANK, chunk_dims); - - /* - * Create a new dataset within the file using cparms - * creation properties. - */ - dataset = H5Dcreate(file, DATASETNAME, H5T_NATIVE_INT, dataspace, - cparms); - - /* - * Extend the dataset. This call assures that dataset is at least 3 x 3. - */ - size[0] = 3; - size[1] = 3; - status = H5Dextend (dataset, size); - - /* - * Select a hyperslab. - */ - filespace = H5Dget_space (dataset); - offset[0] = 0; - offset[1] = 0; - status = H5Sselect_hyperslab(filespace, H5S_SELECT_SET, offset, NULL, - dims1, NULL); - - /* - * Write the data to the hyperslab. 
- */ - status = H5Dwrite(dataset, H5T_NATIVE_INT, dataspace, filespace, - H5P_DEFAULT, data1); - - /* - * Extend the dataset. Dataset becomes 10 x 3. - */ - dims[0] = dims1[0] + dims2[0]; - size[0] = dims[0]; - size[1] = dims[1]; - status = H5Dextend (dataset, size); - - /* - * Select a hyperslab. - */ - filespace = H5Dget_space (dataset); - offset[0] = 3; - offset[1] = 0; - status = H5Sselect_hyperslab(filespace, H5S_SELECT_SET, offset, NULL, - dims2, NULL); - - /* - * Define memory space - */ - dataspace = H5Screate_simple(RANK, dims2, NULL); - - /* - * Write the data to the hyperslab. - */ - status = H5Dwrite(dataset, H5T_NATIVE_INT, dataspace, filespace, - H5P_DEFAULT, data2); - - /* - * Extend the dataset. Dataset becomes 10 x 5. - */ - dims[1] = dims1[1] + dims3[1]; - size[0] = dims[0]; - size[1] = dims[1]; - status = H5Dextend (dataset, size); - - /* - * Select a hyperslab - */ - filespace = H5Dget_space (dataset); - offset[0] = 0; - offset[1] = 3; - status = H5Sselect_hyperslab(filespace, H5S_SELECT_SET, offset, NULL, - dims3, NULL); - - /* - * Define memory space. - */ - dataspace = H5Screate_simple(RANK, dims3, NULL); - - /* - * Write the data to the hyperslab. - */ - status = H5Dwrite(dataset, H5T_NATIVE_INT, dataspace, filespace, - H5P_DEFAULT, data3); - - /* - * Resulting dataset - * - * 1 1 1 3 3 - * 1 1 1 3 3 - * 1 1 1 0 0 - * 2 0 0 0 0 - * 2 0 0 0 0 - * 2 0 0 0 0 - * 2 0 0 0 0 - * 2 0 0 0 0 - * 2 0 0 0 0 - * 2 0 0 0 0 - */ - /* - * Close/release resources. - */ - H5Dclose(dataset); - H5Sclose(dataspace); - H5Sclose(filespace); - H5Fclose(file); - - return 0; -} -- - - - -
(Return to TOC)
This example shows how to read information from the chunked dataset written by Example 5.
- -/* - * This example shows how to read data from a chunked dataset. - * We will read from the file created by h5_extend_write.c - */ - -#include "hdf5.h" - -#define FILE "SDSextendible.h5" -#define DATASETNAME "ExtendibleArray" -#define RANK 2 -#define RANKC 1 -#define NX 10 -#define NY 5 - -int -main (void) -{ - hid_t file; /* handles */ - hid_t dataset; - hid_t filespace; - hid_t memspace; - hid_t cparms; - hsize_t dims[2]; /* dataset and chunk dimensions*/ - hsize_t chunk_dims[2]; - hsize_t col_dims[1]; - hsize_t count[2]; - hsize_t offset[2]; - - herr_t status, status_n; - - int data_out[NX][NY]; /* buffer for dataset to be read */ - int chunk_out[2][5]; /* buffer for chunk to be read */ - int column[10]; /* buffer for column to be read */ - int rank, rank_chunk; - hsize_t i, j; - - - - /* - * Open the file and the dataset. - */ - file = H5Fopen(FILE, H5F_ACC_RDONLY, H5P_DEFAULT); - dataset = H5Dopen(file, DATASETNAME); - - /* - * Get dataset rank and dimension. - */ - - filespace = H5Dget_space(dataset); /* Get filespace handle first. */ - rank = H5Sget_simple_extent_ndims(filespace); - status_n = H5Sget_simple_extent_dims(filespace, dims, NULL); - printf("dataset rank %d, dimensions %lu x %lu\n", - rank, (unsigned long)(dims[0]), (unsigned long)(dims[1])); - - /* - * Get creation properties list. - */ - cparms = H5Dget_create_plist(dataset); /* Get properties handle first. */ - - /* - * Check if dataset is chunked. - */ - if (H5D_CHUNKED == H5Pget_layout(cparms)) { - - /* - * Get chunking information: rank and dimensions - */ - rank_chunk = H5Pget_chunk(cparms, 2, chunk_dims); - printf("chunk rank %d, dimensions %lu x %lu\n", rank_chunk, - (unsigned long)(chunk_dims[0]), (unsigned long)(chunk_dims[1])); - } - - /* - * Define the memory space to read dataset. - */ - memspace = H5Screate_simple(RANK,dims,NULL); - - /* - * Read dataset back and display. 
- */ - status = H5Dread(dataset, H5T_NATIVE_INT, memspace, filespace, - H5P_DEFAULT, data_out); - printf("\n"); - printf("Dataset: \n"); - for (j = 0; j < dims[0]; j++) { - for (i = 0; i < dims[1]; i++) printf("%d ", data_out[j][i]); - printf("\n"); - } - - /* - * dataset rank 2, dimensions 10 x 5 - * chunk rank 2, dimensions 2 x 5 - - * Dataset: - * 1 1 1 3 3 - * 1 1 1 3 3 - * 1 1 1 0 0 - * 2 0 0 0 0 - * 2 0 0 0 0 - * 2 0 0 0 0 - * 2 0 0 0 0 - * 2 0 0 0 0 - * 2 0 0 0 0 - * 2 0 0 0 0 - */ - - /* - * Read the third column from the dataset. - * First define memory dataspace, then define hyperslab - * and read it into column array. - */ - col_dims[0] = 10; - memspace = H5Screate_simple(RANKC, col_dims, NULL); - - /* - * Define the column (hyperslab) to read. - */ - offset[0] = 0; - offset[1] = 2; - count[0] = 10; - count[1] = 1; - status = H5Sselect_hyperslab(filespace, H5S_SELECT_SET, offset, NULL, - count, NULL); - status = H5Dread(dataset, H5T_NATIVE_INT, memspace, filespace, - H5P_DEFAULT, column); - printf("\n"); - printf("Third column: \n"); - for (i = 0; i < 10; i++) { - printf("%d \n", column[i]); - } - - /* - * Third column: - * 1 - * 1 - * 1 - * 0 - * 0 - * 0 - * 0 - * 0 - * 0 - * 0 - */ - - /* - * Define the memory space to read a chunk. - */ - memspace = H5Screate_simple(rank_chunk,chunk_dims,NULL); - - /* - * Define chunk in the file (hyperslab) to read. - */ - offset[0] = 2; - offset[1] = 0; - count[0] = chunk_dims[0]; - count[1] = chunk_dims[1]; - status = H5Sselect_hyperslab(filespace, H5S_SELECT_SET, offset, NULL, - count, NULL); - - /* - * Read chunk back and display. - */ - status = H5Dread(dataset, H5T_NATIVE_INT, memspace, filespace, - H5P_DEFAULT, chunk_out); - printf("\n"); - printf("Chunk: \n"); - for (j = 0; j < chunk_dims[0]; j++) { - for (i = 0; i < chunk_dims[1]; i++) printf("%d ", chunk_out[j][i]); - printf("\n"); - } - /* - * Chunk: - * 1 1 1 0 0 - * 2 0 0 0 0 - */ - - /* - * Close/release resources. 
- */ - H5Pclose(cparms); - H5Dclose(dataset); - H5Sclose(filespace); - H5Sclose(memspace); - H5Fclose(file); - - return 0; -} - -- - - - -
(Return to TOC)
This example shows how to create and access a group in an HDF5 file and to place a dataset within this group. It also illustrates the usage of the H5Giterate, H5Glink, and H5Gunlink functions.
/*
 * This example creates a group in the file and a dataset in the group.
 * A hard link to the group object is created and the dataset is accessed
 * under different names.
 * An iterator function is used to find the object names in the root group.
 */

#include "hdf5.h"

#define FILE "group.h5"
#define RANK 2

herr_t file_info(hid_t loc_id, const char *name, void *opdata);
                                            /* Operator function */
int
main(void)
{
    hid_t    file;
    hid_t    grp;
    hid_t    dataset, dataspace;
    hid_t    plist;

    herr_t   status;
    hsize_t  dims[2];
    hsize_t  cdims[2];

    int      idx;

    /*
     * Create a file.
     */
    file = H5Fcreate(FILE, H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);

    /*
     * Create a group in the file.
     */
    grp = H5Gcreate(file, "/Data", 0);

    /*
     * Create dataset "Compressed_Data" in the group using an absolute
     * name. The dataset creation property list is modified to use
     * GZIP compression with the compression effort set to 6.
     * Note that compression can be used only when the dataset is chunked.
     */
    dims[0]  = 1000;
    dims[1]  = 20;
    cdims[0] = 20;
    cdims[1] = 20;
    dataspace = H5Screate_simple(RANK, dims, NULL);
    plist     = H5Pcreate(H5P_DATASET_CREATE);
    H5Pset_chunk(plist, 2, cdims);
    H5Pset_deflate(plist, 6);
    dataset = H5Dcreate(file, "/Data/Compressed_Data", H5T_NATIVE_INT,
                        dataspace, plist);

    /*
     * Close the dataset, property list, group, and file.
     */
    H5Sclose(dataspace);
    H5Pclose(plist);
    H5Dclose(dataset);
    H5Gclose(grp);
    H5Fclose(file);

    /*
     * Now reopen the file and the group in the file.
     */
    file = H5Fopen(FILE, H5F_ACC_RDWR, H5P_DEFAULT);
    grp  = H5Gopen(file, "Data");

    /*
     * Access the "Compressed_Data" dataset in the group.
     */
    dataset = H5Dopen(grp, "Compressed_Data");
    if (dataset < 0) printf(" Dataset is not found. \n");
    printf("\"/Data/Compressed_Data\" dataset is open \n");

    /*
     * Close the dataset.
     */
    status = H5Dclose(dataset);

    /*
     * Create a hard link to the Data group.
     */
    status = H5Glink(file, H5G_LINK_HARD, "Data", "Data_new");

    /*
     * We can access the "Compressed_Data" dataset using the newly
     * created hard link "Data_new".
     */
    dataset = H5Dopen(file, "/Data_new/Compressed_Data");
    if (dataset < 0) printf(" Dataset is not found. \n");
    printf("\"/Data_new/Compressed_Data\" dataset is open \n");

    /*
     * Close the dataset.
     */
    status = H5Dclose(dataset);

    /*
     * Use the iterator to see the names of the objects in the file
     * root directory.
     */
    idx = H5Giterate(file, "/", NULL, file_info, NULL);

    /*
     * Unlink the name "Data" and use the iterator to see the names
     * of the objects in the file root directory.
     */
    if (H5Gunlink(file, "Data") < 0)
        printf(" H5Gunlink failed \n");
    else
        printf("\"Data\" is unlinked \n");

    idx = H5Giterate(file, "/", NULL, file_info, NULL);

    /*
     * Close the group and the file.
     */
    status = H5Gclose(grp);
    status = H5Fclose(file);

    return 0;
}

/*
 * Operator function.
 */
herr_t
file_info(hid_t loc_id, const char *name, void *opdata)
{
    hid_t grp;

    /*
     * Open the group using its name.
     */
    grp = H5Gopen(loc_id, name);

    /*
     * Display the group name.
     */
    printf("\n");
    printf("Name : ");
    puts(name);

    H5Gclose(grp);
    return 0;
}
(Return to TOC)
This example shows how to create HDF5 attributes, attach them to a dataset, and read through all of the attributes of a dataset.
- -/* - * This program illustrates the usage of the H5A Interface functions. - * It creates and writes a dataset, and then creates and writes array, - * scalar, and string attributes of the dataset. - * Program reopens the file, attaches to the scalar attribute using - * attribute name and reads and displays its value. Then index of the - * third attribute is used to read and display attribute values. - * The H5Aiterate function is used to iterate through the dataset attributes, - * and display their names. The function is also reads and displays the values - * of the array attribute. - */ - -#include- - - --#include - -#define FILE "Attributes.h5" - -#define RANK 1 /* Rank and size of the dataset */ -#define SIZE 7 - -#define ARANK 2 /* Rank and dimension sizes of the first dataset attribute */ -#define ADIM1 2 -#define ADIM2 3 -#define ANAME "Float attribute" /* Name of the array attribute */ -#define ANAMES "Character attribute" /* Name of the string attribute */ - -herr_t attr_info(hid_t loc_id, const char *name, void *opdata); - /* Operator function */ - -int -main (void) -{ - - hid_t file, dataset; /* File and dataset identifiers */ - - hid_t fid; /* Dataspace identifier */ - hid_t attr1, attr2, attr3; /* Attribute identifiers */ - hid_t attr; - hid_t aid1, aid2, aid3; /* Attribute dataspace identifiers */ - hid_t atype; /* Attribute type */ - - hsize_t fdim[] = {SIZE}; - hsize_t adim[] = {ADIM1, ADIM2}; /* Dimensions of the first attribute */ - - float matrix[ADIM1][ADIM2]; /* Attribute data */ - - herr_t ret; /* Return value */ - uint i,j; /* Counters */ - int idx; /* Attribute index */ - char string_out[80]; /* Buffer to read string attribute back */ - int point_out; /* Buffer to read scalar attribute back */ - - /* - * Data initialization. 
- */ - int vector[] = {1, 2, 3, 4, 5, 6, 7}; /* Dataset data */ - int point = 1; /* Value of the scalar attribute */ - char string[] = "ABCD"; /* Value of the string attribute */ - - - for (i=0; i < ADIM1; i++) { /* Values of the array attribute */ - for (j=0; j < ADIM2; j++) - matrix[i][j] = -1.; - } - - /* - * Create a file. - */ - file = H5Fcreate(FILE, H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT); - - /* - * Create the dataspace for the dataset in the file. - */ - fid = H5Screate(H5S_SIMPLE); - ret = H5Sset_extent_simple(fid, RANK, fdim, NULL); - - /* - * Create the dataset in the file. - */ - dataset = H5Dcreate(file, "Dataset", H5T_NATIVE_INT, fid, H5P_DEFAULT); - - /* - * Write data to the dataset. - */ - ret = H5Dwrite(dataset, H5T_NATIVE_INT, H5S_ALL , H5S_ALL, H5P_DEFAULT, vector); - - /* - * Create dataspace for the first attribute. - */ - aid1 = H5Screate(H5S_SIMPLE); - ret = H5Sset_extent_simple(aid1, ARANK, adim, NULL); - - /* - * Create array attribute. - */ - attr1 = H5Acreate(dataset, ANAME, H5T_NATIVE_FLOAT, aid1, H5P_DEFAULT); - - /* - * Write array attribute. - */ - ret = H5Awrite(attr1, H5T_NATIVE_FLOAT, matrix); - - /* - * Create scalar attribute. - */ - aid2 = H5Screate(H5S_SCALAR); - attr2 = H5Acreate(dataset, "Integer attribute", H5T_NATIVE_INT, aid2, - H5P_DEFAULT); - - /* - * Write scalar attribute. - */ - ret = H5Awrite(attr2, H5T_NATIVE_INT, &point); - - /* - * Create string attribute. - */ - aid3 = H5Screate(H5S_SCALAR); - atype = H5Tcopy(H5T_C_S1); - H5Tset_size(atype, 4); - attr3 = H5Acreate(dataset, ANAMES, atype, aid3, H5P_DEFAULT); - - /* - * Write string attribute. - */ - ret = H5Awrite(attr3, atype, string); - - /* - * Close attribute and file dataspaces. - */ - ret = H5Sclose(aid1); - ret = H5Sclose(aid2); - ret = H5Sclose(aid3); - ret = H5Sclose(fid); - - /* - * Close the attributes. - */ - ret = H5Aclose(attr1); - ret = H5Aclose(attr2); - ret = H5Aclose(attr3); - - /* - * Close the dataset. 
- */ - ret = H5Dclose(dataset); - - /* - * Close the file. - */ - ret = H5Fclose(file); - - /* - * Reopen the file. - */ - file = H5Fopen(FILE, H5F_ACC_RDONLY, H5P_DEFAULT); - - /* - * Open the dataset. - */ - dataset = H5Dopen(file,"Dataset"); - - /* - * Attach to the scalar attribute using attribute name, then read and - * display its value. - */ - attr = H5Aopen_name(dataset,"Integer attribute"); - ret = H5Aread(attr, H5T_NATIVE_INT, &point_out); - printf("The value of the attribute \"Integer attribute\" is %d \n", point_out); - ret = H5Aclose(attr); - - /* - * Attach to the string attribute using its index, then read and display the value. - */ - attr = H5Aopen_idx(dataset, 2); - atype = H5Tcopy(H5T_C_S1); - H5Tset_size(atype, 4); - ret = H5Aread(attr, atype, string_out); - printf("The value of the attribute with the index 2 is %s \n", string_out); - ret = H5Aclose(attr); - ret = H5Tclose(atype); - - /* - * Get attribute info using iteration function. - */ - idx = H5Aiterate(dataset, NULL, attr_info, NULL); - - /* - * Close the dataset and the file. - */ - H5Dclose(dataset); - H5Fclose(file); - - return 0; -} - -/* - * Operator function. - */ -herr_t -attr_info(hid_t loc_id, const char *name, void *opdata) -{ - hid_t attr, atype, aspace; /* Attribute, datatype and dataspace identifiers */ - int rank; - hsize_t sdim[64]; - herr_t ret; - int i; - size_t npoints; /* Number of elements in the array attribute. */ - float *float_array; /* Pointer to the array attribute. */ - /* - * Open the attribute using its name. - */ - attr = H5Aopen_name(loc_id, name); - - /* - * Display attribute name. - */ - printf("\n"); - printf("Name : "); - puts(name); - - /* - * Get attribute datatype, dataspace, rank, and dimensions. - */ - atype = H5Aget_type(attr); - aspace = H5Aget_space(attr); - rank = H5Sget_simple_extent_ndims(aspace); - ret = H5Sget_simple_extent_dims(aspace, sdim, NULL); - - /* - * Display rank and dimension sizes for the array attribute. 
- */ - - if(rank > 0) { - printf("Rank : %d \n", rank); - printf("Dimension sizes : "); - for (i=0; i< rank; i++) printf("%d ", (int)sdim[i]); - printf("\n"); - } - - /* - * Read array attribute and display its type and values. - */ - - if (H5T_FLOAT == H5Tget_class(atype)) { - printf("Type : FLOAT \n"); - npoints = H5Sget_simple_extent_npoints(aspace); - float_array = (float *)malloc(sizeof(float)*(int)npoints); - ret = H5Aread(attr, atype, float_array); - printf("Values : "); - for( i = 0; i < (int)npoints; i++) printf("%f ", float_array[i]); - printf("\n"); - free(float_array); - } - - /* - * Release all identifiers. - */ - H5Tclose(atype); - H5Sclose(aspace); - H5Aclose(attr); - - return 0; -} -
(Return to TOC)
- -#include <hdf5.h> - -#define FILE1 "trefer1.h5" - -/* 1-D dataset with fixed dimensions */ -#define SPACE1_NAME "Space1" -#define SPACE1_RANK 1 -#define SPACE1_DIM1 4 - -/* 2-D dataset with fixed dimensions */ -#define SPACE2_NAME "Space2" -#define SPACE2_RANK 2 -#define SPACE2_DIM1 10 -#define SPACE2_DIM2 10 - -int -main(void) { - hid_t fid1; /* HDF5 File IDs */ - hid_t dataset; /* Dataset ID */ - hid_t group; /* Group ID */ - hid_t sid1; /* Dataspace ID */ - hid_t tid1; /* Datatype ID */ - hsize_t dims1[] = {SPACE1_DIM1}; - hobj_ref_t *wbuf; /* buffer to write to disk */ - int *tu32; /* Temporary pointer to int data */ - int i; /* counting variables */ - const char *write_comment="Foo!"; /* Comments for group */ - herr_t ret; /* Generic return value */ - -/* Compound datatype */ -typedef struct s1_t { - unsigned int a; - unsigned int b; - float c; -} s1_t; - - /* Allocate write buffers */ - wbuf=(hobj_ref_t *)malloc(sizeof(hobj_ref_t)*SPACE1_DIM1); - tu32=malloc(sizeof(int)*SPACE1_DIM1); - - /* Create file */ - fid1 = H5Fcreate(FILE1, H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT); - - /* Create dataspace for datasets */ - sid1 = H5Screate_simple(SPACE1_RANK, dims1, NULL); - - /* Create a group */ - group=H5Gcreate(fid1,"Group1",-1); - - /* Set group's comment */ - ret=H5Gset_comment(group,".",write_comment); - - /* Create a dataset (inside Group1) */ - dataset=H5Dcreate(group,"Dataset1",H5T_STD_U32LE,sid1,H5P_DEFAULT); - - for(i=0; i < SPACE1_DIM1; i++) - tu32[i] = i*3; - - /* Write selection to disk */ - ret=H5Dwrite(dataset,H5T_NATIVE_INT,H5S_ALL,H5S_ALL,H5P_DEFAULT,tu32); - - /* Close Dataset */ - ret = H5Dclose(dataset); - - /* Create another dataset (inside Group1) */ - dataset=H5Dcreate(group,"Dataset2",H5T_NATIVE_UCHAR,sid1,H5P_DEFAULT); - - /* Close Dataset */ - ret = H5Dclose(dataset); - - /* Create a datatype to refer to */ - tid1 = H5Tcreate (H5T_COMPOUND, sizeof(s1_t)); - - /* Insert fields */ - ret=H5Tinsert (tid1, "a", HOFFSET(s1_t,a), 
H5T_NATIVE_INT); - - ret=H5Tinsert (tid1, "b", HOFFSET(s1_t,b), H5T_NATIVE_INT); - - ret=H5Tinsert (tid1, "c", HOFFSET(s1_t,c), H5T_NATIVE_FLOAT); - - /* Save datatype for later */ - ret=H5Tcommit (group, "Datatype1", tid1); - - /* Close datatype */ - ret = H5Tclose(tid1); - - /* Close group */ - ret = H5Gclose(group); - - /* Create a dataset to store references */ - dataset=H5Dcreate(fid1,"Dataset3",H5T_STD_REF_OBJ,sid1,H5P_DEFAULT); - - /* Create reference to dataset */ - ret = H5Rcreate(&wbuf[0],fid1,"/Group1/Dataset1",H5R_OBJECT,-1); - - /* Create reference to dataset */ - ret = H5Rcreate(&wbuf[1],fid1,"/Group1/Dataset2",H5R_OBJECT,-1); - - /* Create reference to group */ - ret = H5Rcreate(&wbuf[2],fid1,"/Group1",H5R_OBJECT,-1); - - /* Create reference to named datatype */ - ret = H5Rcreate(&wbuf[3],fid1,"/Group1/Datatype1",H5R_OBJECT,-1); - - /* Write selection to disk */ - ret=H5Dwrite(dataset,H5T_STD_REF_OBJ,H5S_ALL,H5S_ALL,H5P_DEFAULT,wbuf); - - /* Close disk dataspace */ - ret = H5Sclose(sid1); - - /* Close Dataset */ - ret = H5Dclose(dataset); - - /* Close file */ - ret = H5Fclose(fid1); - free(wbuf); - free(tu32); - return 0; -} - -- - - - -
(Return to TOC)
This example opens and reads dataset Dataset3 from the file created in Example 9. The program then dereferences the references to dataset Dataset1, the group, and the named datatype, and opens those objects. It reads and displays the dataset's data, the group's comment, and the number of members of the compound datatype.
-- -#include <stdlib.h> -#include <hdf5.h> - -#define FILE1 "trefer1.h5" - -/* dataset with fixed dimensions */ -#define SPACE1_NAME "Space1" -#define SPACE1_RANK 1 -#define SPACE1_DIM1 4 - -int -main(void) -{ - hid_t fid1; /* HDF5 File IDs */ - hid_t dataset, /* Dataset ID */ - dset2; /* Dereferenced dataset ID */ - hid_t group; /* Group ID */ - hid_t sid1; /* Dataspace ID */ - hid_t tid1; /* Datatype ID */ - hobj_ref_t *rbuf; /* buffer to read from disk */ - int *tu32; /* temp. buffer read from disk */ - int i; /* counting variables */ - char read_comment[10]; - herr_t ret; /* Generic return value */ - - /* Allocate read buffers */ - rbuf = malloc(sizeof(hobj_ref_t)*SPACE1_DIM1); - tu32 = malloc(sizeof(int)*SPACE1_DIM1); - - /* Open the file */ - fid1 = H5Fopen(FILE1, H5F_ACC_RDWR, H5P_DEFAULT); - - /* Open the dataset */ - dataset=H5Dopen(fid1,"/Dataset3"); - - /* Read selection from disk */ - ret=H5Dread(dataset,H5T_STD_REF_OBJ,H5S_ALL,H5S_ALL,H5P_DEFAULT,rbuf); - - /* Open dataset object */ - dset2 = H5Rdereference(dataset,H5R_OBJECT,&rbuf[0]); - - /* Check information in referenced dataset */ - sid1 = H5Dget_space(dset2); - - ret=H5Sget_simple_extent_npoints(sid1); - - /* Read from disk */ - ret=H5Dread(dset2,H5T_NATIVE_INT,H5S_ALL,H5S_ALL,H5P_DEFAULT,tu32); - printf("Dataset data : \n"); - for (i=0; i < SPACE1_DIM1 ; i++) printf (" %d ", tu32[i]); - printf("\n"); - printf("\n"); - - /* Close dereferenced Dataset */ - ret = H5Dclose(dset2); - - /* Open group object */ - group = H5Rdereference(dataset,H5R_OBJECT,&rbuf[2]); - - /* Get group's comment */ - ret=H5Gget_comment(group,".",10,read_comment); - printf("Group comment is %s \n", read_comment); - printf(" \n"); - /* Close group */ - ret = H5Gclose(group); - - /* Open datatype object */ - tid1 = H5Rdereference(dataset,H5R_OBJECT,&rbuf[3]); - - /* Verify correct datatype */ - { - H5T_class_t tclass; - - tclass= H5Tget_class(tid1); - if ((tclass == H5T_COMPOUND)) - printf ("Number of compound datatype 
members is %d \n", H5Tget_nmembers(tid1)); - printf(" \n"); - } - - /* Close datatype */ - ret = H5Tclose(tid1); - - /* Close Dataset */ - ret = H5Dclose(dataset); - - /* Close file */ - ret = H5Fclose(fid1); - - /* Free memory buffers */ - free(rbuf); - free(tu32); - return 0; -} - -- - - -
(Return to TOC)
-#include <stdlib.h> -#include <hdf5.h> - -#define FILE2 "trefer2.h5" -#define SPACE1_NAME "Space1" -#define SPACE1_RANK 1 -#define SPACE1_DIM1 4 - -/* Dataset with fixed dimensions */ -#define SPACE2_NAME "Space2" -#define SPACE2_RANK 2 -#define SPACE2_DIM1 10 -#define SPACE2_DIM2 10 - -/* Element selection information */ -#define POINT1_NPOINTS 10 - -int -main(void) -{ - hid_t fid1; /* HDF5 File IDs */ - hid_t dset1, /* Dataset ID */ - dset2; /* Dereferenced dataset ID */ - hid_t sid1, /* Dataspace ID #1 */ - sid2; /* Dataspace ID #2 */ - hsize_t dims1[] = {SPACE1_DIM1}, - dims2[] = {SPACE2_DIM1, SPACE2_DIM2}; - hsize_t start[SPACE2_RANK]; /* Starting location of hyperslab */ - hsize_t stride[SPACE2_RANK]; /* Stride of hyperslab */ - hsize_t count[SPACE2_RANK]; /* Element count of hyperslab */ - hsize_t block[SPACE2_RANK]; /* Block size of hyperslab */ - hsize_t coord1[POINT1_NPOINTS][SPACE2_RANK]; - /* Coordinates for point selection */ - hdset_reg_ref_t *wbuf; /* buffer to write to disk */ - int *dwbuf; /* Buffer for writing numeric data to disk */ - int i; /* counting variables */ - herr_t ret; /* Generic return value */ - - - /* Allocate write & read buffers */ - wbuf=calloc(sizeof(hdset_reg_ref_t), SPACE1_DIM1); - dwbuf=malloc(sizeof(int)*SPACE2_DIM1*SPACE2_DIM2); - - /* Create file */ - fid1 = H5Fcreate(FILE2, H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT); - - /* Create dataspace for datasets */ - sid2 = H5Screate_simple(SPACE2_RANK, dims2, NULL); - - /* Create a dataset */ - dset2=H5Dcreate(fid1,"Dataset2",H5T_STD_U8LE,sid2,H5P_DEFAULT); - - for(i=0; i < SPACE2_DIM1*SPACE2_DIM2; i++) - dwbuf[i]=i*3; - - /* Write selection to disk */ - ret=H5Dwrite(dset2,H5T_NATIVE_INT,H5S_ALL,H5S_ALL,H5P_DEFAULT,dwbuf); - - /* Close Dataset */ - ret = H5Dclose(dset2); - - /* Create dataspace for the reference dataset */ - sid1 = H5Screate_simple(SPACE1_RANK, dims1, NULL); - - /* Create a dataset */ - dset1=H5Dcreate(fid1,"Dataset1",H5T_STD_REF_DSETREG,sid1,H5P_DEFAULT); - - /* 
Create references */ - - /* Select 6x6 hyperslab for first reference */ - start[0]=2; start[1]=2; - stride[0]=1; stride[1]=1; - count[0]=6; count[1]=6; - block[0]=1; block[1]=1; - ret = H5Sselect_hyperslab(sid2,H5S_SELECT_SET,start,stride,count,block); - - /* Store first dataset region */ - ret = H5Rcreate(&wbuf[0],fid1,"/Dataset2",H5R_DATASET_REGION,sid2); - - /* Select sequence of ten points for second reference */ - coord1[0][0]=6; coord1[0][1]=9; - coord1[1][0]=2; coord1[1][1]=2; - coord1[2][0]=8; coord1[2][1]=4; - coord1[3][0]=1; coord1[3][1]=6; - coord1[4][0]=2; coord1[4][1]=8; - coord1[5][0]=3; coord1[5][1]=2; - coord1[6][0]=0; coord1[6][1]=4; - coord1[7][0]=9; coord1[7][1]=0; - coord1[8][0]=7; coord1[8][1]=1; - coord1[9][0]=3; coord1[9][1]=3; - ret = H5Sselect_elements(sid2,H5S_SELECT_SET,POINT1_NPOINTS,(const hsize_t **)coord1); - - /* Store second dataset region */ - ret = H5Rcreate(&wbuf[1],fid1,"/Dataset2",H5R_DATASET_REGION,sid2); - - /* Write selection to disk */ - ret=H5Dwrite(dset1,H5T_STD_REF_DSETREG,H5S_ALL,H5S_ALL,H5P_DEFAULT,wbuf); - - /* Close all objects */ - ret = H5Sclose(sid1); - ret = H5Dclose(dset1); - ret = H5Sclose(sid2); - - /* Close file */ - ret = H5Fclose(fid1); - - free(wbuf); - free(dwbuf); - return 0; -} - -- - - -
(Return to TOC)
- -#include <stdlib.h> -#include <hdf5.h> - -#define FILE2 "trefer2.h5" -#define NPOINTS 10 - -/* 1-D dataset with fixed dimensions */ -#define SPACE1_NAME "Space1" -#define SPACE1_RANK 1 -#define SPACE1_DIM1 4 - -/* 2-D dataset with fixed dimensions */ -#define SPACE2_NAME "Space2" -#define SPACE2_RANK 2 -#define SPACE2_DIM1 10 -#define SPACE2_DIM2 10 - -int -main(void) -{ - hid_t fid1; /* HDF5 File IDs */ - hid_t dset1, /* Dataset ID */ - dset2; /* Dereferenced dataset ID */ - hid_t sid1, /* Dataspace ID #1 */ - sid2; /* Dataspace ID #2 */ - hsize_t * coords; /* Coordinate buffer */ - hsize_t low[SPACE2_RANK]; /* Selection bounds */ - hsize_t high[SPACE2_RANK]; /* Selection bounds */ - hdset_reg_ref_t *rbuf; /* buffer to to read disk */ - int *drbuf; /* Buffer for reading numeric data from disk */ - int i, j; /* counting variables */ - herr_t ret; /* Generic return value */ - - /* Output message about test being performed */ - - /* Allocate write & read buffers */ - rbuf=malloc(sizeof(hdset_reg_ref_t)*SPACE1_DIM1); - drbuf=calloc(sizeof(int),SPACE2_DIM1*SPACE2_DIM2); - - /* Open the file */ - fid1 = H5Fopen(FILE2, H5F_ACC_RDWR, H5P_DEFAULT); - - /* Open the dataset */ - dset1=H5Dopen(fid1,"/Dataset1"); - - /* Read selection from disk */ - ret=H5Dread(dset1,H5T_STD_REF_DSETREG,H5S_ALL,H5S_ALL,H5P_DEFAULT,rbuf); - - /* Try to open objects */ - dset2 = H5Rdereference(dset1,H5R_DATASET_REGION,&rbuf[0]); - - /* Check information in referenced dataset */ - sid1 = H5Dget_space(dset2); - - ret=H5Sget_simple_extent_npoints(sid1); - printf(" Number of elements in the dataset is : %d\n",ret); - - /* Read from disk */ - ret=H5Dread(dset2,H5T_NATIVE_INT,H5S_ALL,H5S_ALL,H5P_DEFAULT,drbuf); - - for(i=0; i < SPACE2_DIM1; i++) { - for (j=0; j < SPACE2_DIM2; j++) printf (" %d ", drbuf[i*SPACE2_DIM2+j]); - printf("\n"); } - - /* Get the hyperslab selection */ - sid2=H5Rget_region(dset1,H5R_DATASET_REGION,&rbuf[0]); - - /* Verify correct hyperslab selected */ - ret = 
H5Sget_select_npoints(sid2); - printf(" Number of elements in the hyperslab is : %d \n", ret); - ret = H5Sget_select_hyper_nblocks(sid2); - coords=malloc(ret*SPACE2_RANK*sizeof(hsize_t)*2); /* allocate space for the hyperslab blocks */ - ret = H5Sget_select_hyper_blocklist(sid2,0,ret,coords); - printf(" Hyperslab coordinates are : \n"); - printf (" ( %lu , %lu ) ( %lu , %lu ) \n", \ -(unsigned long)coords[0],(unsigned long)coords[1],(unsigned long)coords[2],(unsigned long)coords[3]); - free(coords); - ret = H5Sget_select_bounds(sid2,low,high); - - /* Close region space */ - ret = H5Sclose(sid2); - - /* Get the element selection */ - sid2=H5Rget_region(dset1,H5R_DATASET_REGION,&rbuf[1]); - - /* Verify correct elements selected */ - ret = H5Sget_select_elem_npoints(sid2); - printf(" Number of selected elements is : %d\n", ret); - - /* Allocate space for the element points */ - coords= malloc(ret*SPACE2_RANK*sizeof(hsize_t)); - ret = H5Sget_select_elem_pointlist(sid2,0,ret,coords); - printf(" Coordinates of selected elements are : \n"); - for (i=0; i < 2*NPOINTS; i=i+2) - printf(" ( %lu , %lu ) \n", (unsigned long)coords[i],(unsigned long)coords[i+1]); - - free(coords); - ret = H5Sget_select_bounds(sid2,low,high); - - /* Close region space */ - ret = H5Sclose(sid2); - - /* Close first space */ - ret = H5Sclose(sid1); - - /* Close dereferenced Dataset */ - ret = H5Dclose(dset2); - - /* Close Dataset */ - ret = H5Dclose(dset1); - - /* Close file */ - ret = H5Fclose(fid1); - - /* Free memory buffers */ - free(rbuf); - free(drbuf); - return 0; -} - -- - -
(Return to TOC)
Introduction to HDF5 | HDF5 User Guide | HDF5 Reference Manual | Other HDF5 documents and links

HDF Help Desk
Describes HDF5 Release 1.7, the unreleased development branch; working toward HDF5 Release 1.8.0
Last modified: 3 August 2004
Copyright
-Information for HDF5 maintainers: - -* You can run make from any directory. However, running in a - subdirectory only knows how to build things in that directory and - below. However, all makefiles know when their target depends on - something outside the local directory tree: - - $ cd test - $ make - make: *** No rule to make target ../src/libhdf5.a - -* All Makefiles understand the following targets: - - all -- build locally. - install -- install libs, headers, progs. - uninstall -- remove installed files. - mostlyclean -- remove temp files (eg, *.o but not *.a). - clean -- mostlyclean plus libs and progs. - distclean -- all non-distributed files. - maintainer-clean -- all derived files but H5config.h.in and configure. - -* Most Makefiles also understand: - - TAGS -- build a tags table - dep, depend -- recalculate source dependencies - lib -- build just the libraries w/o programs - -* If you have personal preferences for which make, compiler, compiler - flags, preprocessor flags, etc., that you use and you don't want to - set environment variables, then use a site configuration file. - - When configure starts, it looks in the config directory for files - whose name is some combination of the CPU name, vendor, and - operating system in this order: - - CPU-VENDOR-OS - VENDOR-OS - CPU-VENDOR - OS - VENDOR - CPU - - The first file which is found is sourced and can therefore affect - the behavior of the rest of configure. See config/BlankForm for the - template. - -* If you use GNU make along with gcc the Makefile will contain targets - that automatically maintain a list of source interdependencies; you - seldom have to say `make clean'. I say `seldom' because if you - change how one `*.h' file includes other `*.h' files you'll have - to force an update. - - To force an update of all dependency information remove the - `.depend' file from each directory and type `make'. For - instance: - - $ cd $HDF5_HOME - $ find . 
-name .depend -exec rm {} \; - $ make - - If you're not using GNU make and gcc then dependencies come from - ".distdep" files in each directory. Those files are generated on - GNU systems and inserted into the Makefile's by running - config.status (which happens near the end of configure). - -* If you use GNU make along with gcc then the Perl script `trace' is - run just before dependencies are calculated to update any H5TRACE() - calls that might appear in the file. Otherwise, after changing the - type of a function (return type or argument types) one should run - `trace' manually on those source files (e.g., ../bin/trace *.c). - -* Object files stay in the directory and are added to the library as a - final step instead of placing the file in the library immediately - and removing it from the directory. The reason is three-fold: - - 1. Most versions of make don't allow `$(LIB)($(SRC:.c=.o))' - which makes it necessary to have two lists of files, one - that ends with `.c' and the other that has the library - name wrapped around each `.o' file. - - 2. Some versions of make/ar have problems with modification - times of archive members. - - 3. Adding object files immediately causes problems on SMP - machines where make is doing more than one thing at a - time. - -* When using GNU make on an SMP you can cause it to compile more than - one thing at a time. At the top of the source tree invoke make as - - $ make -j -l6 - - which causes make to fork as many children as possible as long as - the load average doesn't go above 6. In subdirectories one can say - - $ make -j2 - - which limits the number of children to two (this doesn't work at the - top level because the `-j2' is not passed to recursive makes). - -* To create a release tarball go to the top-level directory and run - ./bin/release. You can optionally supply one or more of the words - `tar', `gzip', `bzip2' or `compress' on the command line. 
The - result will be a (compressed) tar file(s) in the `releases' - directory. The README file is updated to contain the release date - and version number. - -* To create a tarball of all the files which are part of HDF5 go to - the top-level directory and type: - - tar cvf foo.tar `grep '^\.' MANIFEST |unexpand |cut -f1` -diff --git a/doc/html/Makefile.am b/doc/html/Makefile.am deleted file mode 100644 index 2d89255..0000000 --- a/doc/html/Makefile.am +++ /dev/null @@ -1,43 +0,0 @@ -# HDF5 Library Doc Makefile(.in) -# -# Copyright (C) 1997, 2002 -# National Center for Supercomputing Applications. -# All rights reserved. -# -# -# This is the top level makefile of the Doc directory. It mostly just -# reinvokes make in the various subdirectories. -# You can alternatively invoke make from each subdirectory manually. -## -## Makefile.am -## Run automake to generate a Makefile.in from this file. -# - -include $(top_srcdir)/config/commence-doc.am - -localdocdir=$(docdir)/hdf5 - -# Subdirectories in build-order -SUBDIRS=ADGuide Graphics Intro PSandPDF TechNotes Tutor \ - cpplus ed_libs ed_styles fortran - -# Public doc files (to be installed)... 
-localdoc_DATA=ADGuide.html Attributes.html Big.html Caching.html Chunk_f1.gif \ - Chunk_f2.gif Chunk_f3.gif Chunk_f4.gif Chunk_f5.gif Chunk_f6.gif \ - Chunking.html Coding.html Copyright.html Datasets.html \ - Dataspaces.html Datatypes.html DatatypesEnum.html Debugging.html \ - EnumMap.gif Environment.html Errors.html FF-IH_FileGroup.gif \ - FF-IH_FileObject.gif Files.html Filters.html Glossary.html \ - Groups.html H5.api_map.html H5.format.html H5.intro.html \ - H5.sample_code.html H5.user.PrintGen.html H5.user.PrintTpg.html \ - H5.user.html IH_map1.gif IH_map2.gif IH_map3.gif IH_map4.gif \ - IH_mapFoot.gif IH_mapHead.gif IOPipe.html MountingFiles.html \ - NCSAfooterlogo.gif Performance.html PredefDTypes.html \ - Properties.html RM_H5.html RM_H5A.html RM_H5D.html RM_H5E.html \ - RM_H5F.html RM_H5Front.html RM_H5G.html RM_H5I.html RM_H5P.html \ - RM_H5R.html RM_H5S.html RM_H5T.html RM_H5Z.html References.html \ - TechNotes.html Tools.html Version.html chunk1.gif compat.html \ - dataset_p1.gif ddl.html extern1.gif extern2.gif group_p1.gif \ - group_p2.gif group_p3.gif h5s.examples hdf2.jpg ph5design.html \ - ph5example.c ph5implement.txt pipe1.gif pipe2.gif pipe3.gif \ - pipe4.gif pipe5.gif index.html version.gif diff --git a/doc/html/Makefile.in b/doc/html/Makefile.in deleted file mode 100644 index 2f39b5f..0000000 --- a/doc/html/Makefile.in +++ /dev/null @@ -1,670 +0,0 @@ -# Makefile.in generated by automake 1.9.5 from Makefile.am. -# @configure_input@ - -# Copyright (C) 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002, -# 2003, 2004, 2005 Free Software Foundation, Inc. -# This Makefile.in is free software; the Free Software Foundation -# gives unlimited permission to copy and/or distribute it, -# with or without modifications, as long as this notice is preserved. 
- -# This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY, to the extent permitted by law; without -# even the implied warranty of MERCHANTABILITY or FITNESS FOR A -# PARTICULAR PURPOSE. - -@SET_MAKE@ - -# HDF5 Library Doc Makefile(.in) -# -# Copyright (C) 1997, 2002 -# National Center for Supercomputing Applications. -# All rights reserved. -# -# -# This is the top level makefile of the Doc directory. It mostly just -# reinvokes make in the various subdirectories. -# You can alternatively invoke make from each subdirectory manually. -# - -srcdir = @srcdir@ -top_srcdir = @top_srcdir@ -VPATH = @srcdir@ -pkgdatadir = $(datadir)/@PACKAGE@ -pkglibdir = $(libdir)/@PACKAGE@ -pkgincludedir = $(includedir)/@PACKAGE@ -top_builddir = ../.. -am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd -INSTALL = @INSTALL@ -install_sh_DATA = $(install_sh) -c -m 644 -install_sh_PROGRAM = $(install_sh) -c -install_sh_SCRIPT = $(install_sh) -c -INSTALL_HEADER = $(INSTALL_DATA) -transform = $(program_transform_name) -NORMAL_INSTALL = : -PRE_INSTALL = : -POST_INSTALL = : -NORMAL_UNINSTALL = : -PRE_UNINSTALL = : -POST_UNINSTALL = : -build_triplet = @build@ -host_triplet = @host@ -DIST_COMMON = $(srcdir)/Makefile.am $(srcdir)/Makefile.in \ - $(top_srcdir)/config/commence-doc.am \ - $(top_srcdir)/config/commence.am -subdir = doc/html -ACLOCAL_M4 = $(top_srcdir)/aclocal.m4 -am__aclocal_m4_deps = $(top_srcdir)/configure.in -am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \ - $(ACLOCAL_M4) -mkinstalldirs = $(SHELL) $(top_srcdir)/bin/mkinstalldirs -CONFIG_HEADER = $(top_builddir)/src/H5config.h -CONFIG_CLEAN_FILES = -SOURCES = -DIST_SOURCES = -RECURSIVE_TARGETS = all-recursive check-recursive dvi-recursive \ - html-recursive info-recursive install-data-recursive \ - install-exec-recursive install-info-recursive \ - install-recursive installcheck-recursive installdirs-recursive \ - pdf-recursive ps-recursive 
uninstall-info-recursive \ - uninstall-recursive -am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; -am__vpath_adj = case $$p in \ - $(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \ - *) f=$$p;; \ - esac; -am__strip_dir = `echo $$p | sed -e 's|^.*/||'`; -am__installdirs = "$(DESTDIR)$(localdocdir)" -localdocDATA_INSTALL = $(INSTALL_DATA) -DATA = $(localdoc_DATA) -ETAGS = etags -CTAGS = ctags -DIST_SUBDIRS = $(SUBDIRS) -DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST) - -# Set the paths for AFS installs of autotools for Linux machines -# Ideally, these tools should never be needed during the build. -ACLOCAL = /afs/ncsa/projects/hdf/packages/automake_1.9.5/Linux_2.4/bin/aclocal -I /afs/ncsa/projects/hdf/packages/libtool_1.5.14/Linux_2.4/share/aclocal -ADD_PARALLEL_FILES = @ADD_PARALLEL_FILES@ -AMDEP_FALSE = @AMDEP_FALSE@ -AMDEP_TRUE = @AMDEP_TRUE@ -AMTAR = @AMTAR@ -AM_MAKEFLAGS = @AM_MAKEFLAGS@ -AR = @AR@ -AUTOCONF = /afs/ncsa/projects/hdf/packages/autoconf_2.59/Linux_2.4/bin/autoconf -AUTOHEADER = /afs/ncsa/projects/hdf/packages/autoconf_2.59/Linux_2.4/bin/autoheader -AUTOMAKE = /afs/ncsa/projects/hdf/packages/automake_1.9.5/Linux_2.4/bin/automake -AWK = @AWK@ -BUILD_CXX_CONDITIONAL_FALSE = @BUILD_CXX_CONDITIONAL_FALSE@ -BUILD_CXX_CONDITIONAL_TRUE = @BUILD_CXX_CONDITIONAL_TRUE@ -BUILD_FORTRAN_CONDITIONAL_FALSE = @BUILD_FORTRAN_CONDITIONAL_FALSE@ -BUILD_FORTRAN_CONDITIONAL_TRUE = @BUILD_FORTRAN_CONDITIONAL_TRUE@ -BUILD_HDF5_HL_CONDITIONAL_FALSE = @BUILD_HDF5_HL_CONDITIONAL_FALSE@ -BUILD_HDF5_HL_CONDITIONAL_TRUE = @BUILD_HDF5_HL_CONDITIONAL_TRUE@ -BUILD_PABLO_CONDITIONAL_FALSE = @BUILD_PABLO_CONDITIONAL_FALSE@ -BUILD_PABLO_CONDITIONAL_TRUE = @BUILD_PABLO_CONDITIONAL_TRUE@ -BUILD_PARALLEL_CONDITIONAL_FALSE = @BUILD_PARALLEL_CONDITIONAL_FALSE@ -BUILD_PARALLEL_CONDITIONAL_TRUE = @BUILD_PARALLEL_CONDITIONAL_TRUE@ -BUILD_PDB2HDF = @BUILD_PDB2HDF@ -BUILD_PDB2HDF_CONDITIONAL_FALSE = @BUILD_PDB2HDF_CONDITIONAL_FALSE@ 
-BUILD_PDB2HDF_CONDITIONAL_TRUE = @BUILD_PDB2HDF_CONDITIONAL_TRUE@ -BYTESEX = @BYTESEX@ -CC = @CC@ -CCDEPMODE = @CCDEPMODE@ -CC_VERSION = @CC_VERSION@ -CFLAGS = @CFLAGS@ -CONFIG_DATE = @CONFIG_DATE@ -CONFIG_MODE = @CONFIG_MODE@ -CONFIG_USER = @CONFIG_USER@ -CPP = @CPP@ -CPPFLAGS = @CPPFLAGS@ -CXX = @CXX@ -CXXCPP = @CXXCPP@ -CXXDEPMODE = @CXXDEPMODE@ -CXXFLAGS = @CXXFLAGS@ -CYGPATH_W = @CYGPATH_W@ -DEBUG_PKG = @DEBUG_PKG@ -DEFS = @DEFS@ -DEPDIR = @DEPDIR@ -DYNAMIC_DIRS = @DYNAMIC_DIRS@ -ECHO = @ECHO@ -ECHO_C = @ECHO_C@ -ECHO_N = @ECHO_N@ -ECHO_T = @ECHO_T@ -EGREP = @EGREP@ -EXEEXT = @EXEEXT@ -F77 = @F77@ - -# Make sure that these variables are exported to the Makefiles -F9XMODEXT = @F9XMODEXT@ -F9XMODFLAG = @F9XMODFLAG@ -F9XSUFFIXFLAG = @F9XSUFFIXFLAG@ -FC = @FC@ -FCFLAGS = @FCFLAGS@ -FCLIBS = @FCLIBS@ -FFLAGS = @FFLAGS@ -FILTERS = @FILTERS@ -FSEARCH_DIRS = @FSEARCH_DIRS@ -H5_VERSION = @H5_VERSION@ -HADDR_T = @HADDR_T@ -HDF5_INTERFACES = @HDF5_INTERFACES@ -HID_T = @HID_T@ -HL = @HL@ -HL_FOR = @HL_FOR@ -HSIZET = @HSIZET@ -HSIZE_T = @HSIZE_T@ -HSSIZE_T = @HSSIZE_T@ -INSTALL_DATA = @INSTALL_DATA@ -INSTALL_PROGRAM = @INSTALL_PROGRAM@ -INSTALL_SCRIPT = @INSTALL_SCRIPT@ -INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@ -INSTRUMENT_LIBRARY = @INSTRUMENT_LIBRARY@ -LDFLAGS = @LDFLAGS@ -LIBOBJS = @LIBOBJS@ -LIBS = @LIBS@ -LIBTOOL = @LIBTOOL@ -LN_S = @LN_S@ -LTLIBOBJS = @LTLIBOBJS@ -LT_STATIC_EXEC = @LT_STATIC_EXEC@ -MAINT = @MAINT@ -MAINTAINER_MODE_FALSE = @MAINTAINER_MODE_FALSE@ -MAINTAINER_MODE_TRUE = @MAINTAINER_MODE_TRUE@ -MAKEINFO = @MAKEINFO@ -MPE = @MPE@ -OBJECT_NAMELEN_DEFAULT_F = @OBJECT_NAMELEN_DEFAULT_F@ -OBJEXT = @OBJEXT@ -PACKAGE = @PACKAGE@ -PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@ -PACKAGE_NAME = @PACKAGE_NAME@ -PACKAGE_STRING = @PACKAGE_STRING@ -PACKAGE_TARNAME = @PACKAGE_TARNAME@ -PACKAGE_VERSION = @PACKAGE_VERSION@ -PARALLEL = @PARALLEL@ -PATH_SEPARATOR = @PATH_SEPARATOR@ -PERL = @PERL@ -PTHREAD = @PTHREAD@ -RANLIB = @RANLIB@ -ROOT = @ROOT@ -RUNPARALLEL = 
@RUNPARALLEL@ -RUNSERIAL = @RUNSERIAL@ -R_INTEGER = @R_INTEGER@ -R_LARGE = @R_LARGE@ -SEARCH = @SEARCH@ -SETX = @SETX@ -SET_MAKE = @SET_MAKE@ - -# Hardcode SHELL to be /bin/sh. Most machines have this shell, and -# on at least one machine configure fails to detect its existence (janus). -# Also, when HDF5 is configured on one machine but run on another, -# configure's automatic SHELL detection may not work on the build machine. -SHELL = /bin/sh -SIZE_T = @SIZE_T@ -STATIC_SHARED = @STATIC_SHARED@ -STRIP = @STRIP@ -TESTPARALLEL = @TESTPARALLEL@ -TRACE_API = @TRACE_API@ -USE_FILTER_DEFLATE = @USE_FILTER_DEFLATE@ -USE_FILTER_FLETCHER32 = @USE_FILTER_FLETCHER32@ -USE_FILTER_NBIT = @USE_FILTER_NBIT@ -USE_FILTER_SCALEOFFSET = @USE_FILTER_SCALEOFFSET@ -USE_FILTER_SHUFFLE = @USE_FILTER_SHUFFLE@ -USE_FILTER_SZIP = @USE_FILTER_SZIP@ -VERSION = @VERSION@ -ac_ct_AR = @ac_ct_AR@ -ac_ct_CC = @ac_ct_CC@ -ac_ct_CXX = @ac_ct_CXX@ -ac_ct_F77 = @ac_ct_F77@ -ac_ct_FC = @ac_ct_FC@ -ac_ct_RANLIB = @ac_ct_RANLIB@ -ac_ct_STRIP = @ac_ct_STRIP@ -am__fastdepCC_FALSE = @am__fastdepCC_FALSE@ -am__fastdepCC_TRUE = @am__fastdepCC_TRUE@ -am__fastdepCXX_FALSE = @am__fastdepCXX_FALSE@ -am__fastdepCXX_TRUE = @am__fastdepCXX_TRUE@ -am__include = @am__include@ -am__leading_dot = @am__leading_dot@ -am__quote = @am__quote@ -am__tar = @am__tar@ -am__untar = @am__untar@ -bindir = @bindir@ -build = @build@ -build_alias = @build_alias@ -build_cpu = @build_cpu@ -build_os = @build_os@ -build_vendor = @build_vendor@ -datadir = @datadir@ -exec_prefix = @exec_prefix@ -host = @host@ -host_alias = @host_alias@ -host_cpu = @host_cpu@ -host_os = @host_os@ -host_vendor = @host_vendor@ - -# Install directories that automake doesn't know about -includedir = $(exec_prefix)/include -infodir = @infodir@ -install_sh = @install_sh@ -libdir = @libdir@ -libexecdir = @libexecdir@ -localstatedir = @localstatedir@ -mandir = @mandir@ -mkdir_p = @mkdir_p@ -oldincludedir = @oldincludedir@ -prefix = @prefix@ -program_transform_name = 
@program_transform_name@ -sbindir = @sbindir@ -sharedstatedir = @sharedstatedir@ -sysconfdir = @sysconfdir@ -target_alias = @target_alias@ - -# Shell commands used in Makefiles -RM = rm -f -CP = cp - -# Some machines need a command to run executables; this is that command -# so that our tests will run. -# We use RUNTESTS instead of RUNSERIAL directly because it may be that -# some tests need to be run with a different command. Older versions -# of the makefiles used the command -# $(LIBTOOL) --mode=execute -# in some directories, for instance. -RUNTESTS = $(RUNSERIAL) - -# Libraries to link to while building -LIBHDF5 = $(top_builddir)/src/libhdf5.la -LIBH5TEST = $(top_builddir)/test/libh5test.la -LIBH5F = $(top_builddir)/fortran/src/libhdf5_fortran.la -LIBH5FTEST = $(top_builddir)/fortran/test/libh5test_fortran.la -LIBH5CPP = $(top_builddir)/c++/src/libhdf5_cpp.la -LIBH5TOOLS = $(top_builddir)/tools/lib/libh5tools.la -LIBH5_HL = $(top_builddir)/hl/src/libhdf5_hl.la -LIBH5F_HL = $(top_builddir)/hl/fortran/src/libhdf5hl_fortran.la -LIBH5CPP_HL = $(top_builddir)/hl/c++/src/libhdf5_hl_cpp.la -docdir = $(exec_prefix)/doc - -# Scripts used to build examples -H5CC = $(bindir)/h5cc -H5CC_PP = $(bindir)/h5pcc -H5FC = $(bindir)/h5fc -H5FC_PP = $(bindir)/h5pfc - -# .chkexe and .chksh files are used to mark tests that have run successfully. -MOSTLYCLEANFILES = *.chkexe *.chksh -localdocdir = $(docdir)/hdf5 - -# Subdirectories in build-order -SUBDIRS = ADGuide Graphics Intro PSandPDF TechNotes Tutor \ - cpplus ed_libs ed_styles fortran - - -# Public doc files (to be installed)... 
-localdoc_DATA = ADGuide.html Attributes.html Big.html Caching.html Chunk_f1.gif \ - Chunk_f2.gif Chunk_f3.gif Chunk_f4.gif Chunk_f5.gif Chunk_f6.gif \ - Chunking.html Coding.html Copyright.html Datasets.html \ - Dataspaces.html Datatypes.html DatatypesEnum.html Debugging.html \ - EnumMap.gif Environment.html Errors.html FF-IH_FileGroup.gif \ - FF-IH_FileObject.gif Files.html Filters.html Glossary.html \ - Groups.html H5.api_map.html H5.format.html H5.intro.html \ - H5.sample_code.html H5.user.PrintGen.html H5.user.PrintTpg.html \ - H5.user.html IH_map1.gif IH_map2.gif IH_map3.gif IH_map4.gif \ - IH_mapFoot.gif IH_mapHead.gif IOPipe.html MountingFiles.html \ - NCSAfooterlogo.gif Performance.html PredefDTypes.html \ - Properties.html RM_H5.html RM_H5A.html RM_H5D.html RM_H5E.html \ - RM_H5F.html RM_H5Front.html RM_H5G.html RM_H5I.html RM_H5P.html \ - RM_H5R.html RM_H5S.html RM_H5T.html RM_H5Z.html References.html \ - TechNotes.html Tools.html Version.html chunk1.gif compat.html \ - dataset_p1.gif ddl.html extern1.gif extern2.gif group_p1.gif \ - group_p2.gif group_p3.gif h5s.examples hdf2.jpg ph5design.html \ - ph5example.c ph5implement.txt pipe1.gif pipe2.gif pipe3.gif \ - pipe4.gif pipe5.gif index.html version.gif - -all: all-recursive - -.SUFFIXES: -$(srcdir)/Makefile.in: @MAINTAINER_MODE_TRUE@ $(srcdir)/Makefile.am $(top_srcdir)/config/commence-doc.am $(top_srcdir)/config/commence.am $(am__configure_deps) - @for dep in $?; do \ - case '$(am__configure_deps)' in \ - *$$dep*) \ - cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh \ - && exit 0; \ - exit 1;; \ - esac; \ - done; \ - echo ' cd $(top_srcdir) && $(AUTOMAKE) --foreign doc/html/Makefile'; \ - cd $(top_srcdir) && \ - $(AUTOMAKE) --foreign doc/html/Makefile -.PRECIOUS: Makefile -Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status - @case '$?' 
in \ - *config.status*) \ - cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \ - *) \ - echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \ - cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \ - esac; - -$(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES) - cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh - -$(top_srcdir)/configure: @MAINTAINER_MODE_TRUE@ $(am__configure_deps) - cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh -$(ACLOCAL_M4): @MAINTAINER_MODE_TRUE@ $(am__aclocal_m4_deps) - cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh - -mostlyclean-libtool: - -rm -f *.lo - -clean-libtool: - -rm -rf .libs _libs - -distclean-libtool: - -rm -f libtool -uninstall-info-am: -install-localdocDATA: $(localdoc_DATA) - @$(NORMAL_INSTALL) - test -z "$(localdocdir)" || $(mkdir_p) "$(DESTDIR)$(localdocdir)" - @list='$(localdoc_DATA)'; for p in $$list; do \ - if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \ - f=$(am__strip_dir) \ - echo " $(localdocDATA_INSTALL) '$$d$$p' '$(DESTDIR)$(localdocdir)/$$f'"; \ - $(localdocDATA_INSTALL) "$$d$$p" "$(DESTDIR)$(localdocdir)/$$f"; \ - done - -uninstall-localdocDATA: - @$(NORMAL_UNINSTALL) - @list='$(localdoc_DATA)'; for p in $$list; do \ - f=$(am__strip_dir) \ - echo " rm -f '$(DESTDIR)$(localdocdir)/$$f'"; \ - rm -f "$(DESTDIR)$(localdocdir)/$$f"; \ - done - -# This directory's subdirectories are mostly independent; you can cd -# into them and run `make' without going through this Makefile. -# To change the values of `make' variables: instead of editing Makefiles, -# (1) if the variable is set in `config.status', edit `config.status' -# (which will cause the Makefiles to be regenerated when you run `make'); -# (2) otherwise, pass the desired values on the `make' command line. 
-$(RECURSIVE_TARGETS): - @failcom='exit 1'; \ - for f in x $$MAKEFLAGS; do \ - case $$f in \ - *=* | --[!k]*);; \ - *k*) failcom='fail=yes';; \ - esac; \ - done; \ - dot_seen=no; \ - target=`echo $@ | sed s/-recursive//`; \ - list='$(SUBDIRS)'; for subdir in $$list; do \ - echo "Making $$target in $$subdir"; \ - if test "$$subdir" = "."; then \ - dot_seen=yes; \ - local_target="$$target-am"; \ - else \ - local_target="$$target"; \ - fi; \ - (cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) $$local_target) \ - || eval $$failcom; \ - done; \ - if test "$$dot_seen" = "no"; then \ - $(MAKE) $(AM_MAKEFLAGS) "$$target-am" || exit 1; \ - fi; test -z "$$fail" - -mostlyclean-recursive clean-recursive distclean-recursive \ -maintainer-clean-recursive: - @failcom='exit 1'; \ - for f in x $$MAKEFLAGS; do \ - case $$f in \ - *=* | --[!k]*);; \ - *k*) failcom='fail=yes';; \ - esac; \ - done; \ - dot_seen=no; \ - case "$@" in \ - distclean-* | maintainer-clean-*) list='$(DIST_SUBDIRS)' ;; \ - *) list='$(SUBDIRS)' ;; \ - esac; \ - rev=''; for subdir in $$list; do \ - if test "$$subdir" = "."; then :; else \ - rev="$$subdir $$rev"; \ - fi; \ - done; \ - rev="$$rev ."; \ - target=`echo $@ | sed s/-recursive//`; \ - for subdir in $$rev; do \ - echo "Making $$target in $$subdir"; \ - if test "$$subdir" = "."; then \ - local_target="$$target-am"; \ - else \ - local_target="$$target"; \ - fi; \ - (cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) $$local_target) \ - || eval $$failcom; \ - done && test -z "$$fail" -tags-recursive: - list='$(SUBDIRS)'; for subdir in $$list; do \ - test "$$subdir" = . || (cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) tags); \ - done -ctags-recursive: - list='$(SUBDIRS)'; for subdir in $$list; do \ - test "$$subdir" = . 
|| (cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) ctags); \ - done - -ID: $(HEADERS) $(SOURCES) $(LISP) $(TAGS_FILES) - list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \ - unique=`for i in $$list; do \ - if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \ - done | \ - $(AWK) ' { files[$$0] = 1; } \ - END { for (i in files) print i; }'`; \ - mkid -fID $$unique -tags: TAGS - -TAGS: tags-recursive $(HEADERS) $(SOURCES) $(TAGS_DEPENDENCIES) \ - $(TAGS_FILES) $(LISP) - tags=; \ - here=`pwd`; \ - if ($(ETAGS) --etags-include --version) >/dev/null 2>&1; then \ - include_option=--etags-include; \ - empty_fix=.; \ - else \ - include_option=--include; \ - empty_fix=; \ - fi; \ - list='$(SUBDIRS)'; for subdir in $$list; do \ - if test "$$subdir" = .; then :; else \ - test ! -f $$subdir/TAGS || \ - tags="$$tags $$include_option=$$here/$$subdir/TAGS"; \ - fi; \ - done; \ - list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \ - unique=`for i in $$list; do \ - if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \ - done | \ - $(AWK) ' { files[$$0] = 1; } \ - END { for (i in files) print i; }'`; \ - if test -z "$(ETAGS_ARGS)$$tags$$unique"; then :; else \ - test -n "$$unique" || unique=$$empty_fix; \ - $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ - $$tags $$unique; \ - fi -ctags: CTAGS -CTAGS: ctags-recursive $(HEADERS) $(SOURCES) $(TAGS_DEPENDENCIES) \ - $(TAGS_FILES) $(LISP) - tags=; \ - here=`pwd`; \ - list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \ - unique=`for i in $$list; do \ - if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \ - done | \ - $(AWK) ' { files[$$0] = 1; } \ - END { for (i in files) print i; }'`; \ - test -z "$(CTAGS_ARGS)$$tags$$unique" \ - || $(CTAGS) $(CTAGSFLAGS) $(AM_CTAGSFLAGS) $(CTAGS_ARGS) \ - $$tags $$unique - -GTAGS: - here=`$(am__cd) $(top_builddir) && pwd` \ - && cd $(top_srcdir) \ - && gtags -i $(GTAGS_ARGS) $$here - -distclean-tags: - -rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags - -distdir: 
$(DISTFILES) - $(mkdir_p) $(distdir)/../../config - @srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; \ - topsrcdirstrip=`echo "$(top_srcdir)" | sed 's|.|.|g'`; \ - list='$(DISTFILES)'; for file in $$list; do \ - case $$file in \ - $(srcdir)/*) file=`echo "$$file" | sed "s|^$$srcdirstrip/||"`;; \ - $(top_srcdir)/*) file=`echo "$$file" | sed "s|^$$topsrcdirstrip/|$(top_builddir)/|"`;; \ - esac; \ - if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \ - dir=`echo "$$file" | sed -e 's,/[^/]*$$,,'`; \ - if test "$$dir" != "$$file" && test "$$dir" != "."; then \ - dir="/$$dir"; \ - $(mkdir_p) "$(distdir)$$dir"; \ - else \ - dir=''; \ - fi; \ - if test -d $$d/$$file; then \ - if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \ - cp -pR $(srcdir)/$$file $(distdir)$$dir || exit 1; \ - fi; \ - cp -pR $$d/$$file $(distdir)$$dir || exit 1; \ - else \ - test -f $(distdir)/$$file \ - || cp -p $$d/$$file $(distdir)/$$file \ - || exit 1; \ - fi; \ - done - list='$(DIST_SUBDIRS)'; for subdir in $$list; do \ - if test "$$subdir" = .; then :; else \ - test -d "$(distdir)/$$subdir" \ - || $(mkdir_p) "$(distdir)/$$subdir" \ - || exit 1; \ - distdir=`$(am__cd) $(distdir) && pwd`; \ - top_distdir=`$(am__cd) $(top_distdir) && pwd`; \ - (cd $$subdir && \ - $(MAKE) $(AM_MAKEFLAGS) \ - top_distdir="$$top_distdir" \ - distdir="$$distdir/$$subdir" \ - distdir) \ - || exit 1; \ - fi; \ - done -check-am: all-am -check: check-recursive -all-am: Makefile $(DATA) -installdirs: installdirs-recursive -installdirs-am: - for dir in "$(DESTDIR)$(localdocdir)"; do \ - test -z "$$dir" || $(mkdir_p) "$$dir"; \ - done -install: install-recursive -install-exec: install-exec-recursive -install-data: install-data-recursive -uninstall: uninstall-recursive - -install-am: all-am - @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am - -installcheck: installcheck-recursive -install-strip: - $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ - 
install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ - `test -z '$(STRIP)' || \ - echo "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'"` install -mostlyclean-generic: - -test -z "$(MOSTLYCLEANFILES)" || rm -f $(MOSTLYCLEANFILES) - -clean-generic: - -distclean-generic: - -test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES) - -maintainer-clean-generic: - @echo "This command is intended for maintainers to use" - @echo "it deletes files that may require special tools to rebuild." -clean: clean-recursive - -clean-am: clean-generic clean-libtool mostlyclean-am - -distclean: distclean-recursive - -rm -f Makefile -distclean-am: clean-am distclean-generic distclean-libtool \ - distclean-tags - -dvi: dvi-recursive - -dvi-am: - -html: html-recursive - -info: info-recursive - -info-am: - -install-data-am: install-localdocDATA - -install-exec-am: - -install-info: install-info-recursive - -install-man: - -installcheck-am: - -maintainer-clean: maintainer-clean-recursive - -rm -f Makefile -maintainer-clean-am: distclean-am maintainer-clean-generic - -mostlyclean: mostlyclean-recursive - -mostlyclean-am: mostlyclean-generic mostlyclean-libtool - -pdf: pdf-recursive - -pdf-am: - -ps: ps-recursive - -ps-am: - -uninstall-am: uninstall-info-am uninstall-localdocDATA - -uninstall-info: uninstall-info-recursive - -.PHONY: $(RECURSIVE_TARGETS) CTAGS GTAGS all all-am check check-am \ - clean clean-generic clean-libtool clean-recursive ctags \ - ctags-recursive distclean distclean-generic distclean-libtool \ - distclean-recursive distclean-tags distdir dvi dvi-am html \ - html-am info info-am install install-am install-data \ - install-data-am install-exec install-exec-am install-info \ - install-info-am install-localdocDATA install-man install-strip \ - installcheck installcheck-am installdirs installdirs-am \ - maintainer-clean maintainer-clean-generic \ - maintainer-clean-recursive mostlyclean mostlyclean-generic \ - mostlyclean-libtool mostlyclean-recursive pdf 
pdf-am ps ps-am \ - tags tags-recursive uninstall uninstall-am uninstall-info-am \ - uninstall-localdocDATA - - -# Ignore most rules -lib progs check test _test check-p check-s: - @echo "Nothing to be done" - -tests dep depend: - @@SETX@; for d in X $(SUBDIRS); do \ - if test $$d != X; then \ - (cd $$d && $(MAKE) $(AM_MAKEFLAGS) $@) || exit 1; \ - fi; - done - -# In docs directory, install-doc is the same as install -install-doc install-all: - $(MAKE) $(AM_MAKEFLAGS) install -uninstall-doc uninstall-all: - $(MAKE) $(AM_MAKEFLAGS) uninstall -# Tell versions [3.59,3.63) of GNU make to not export all variables. -# Otherwise a system limit (for SysV at least) may be exceeded. -.NOEXPORT: diff --git a/doc/html/MemoryManagement.html b/doc/html/MemoryManagement.html deleted file mode 100644 index c93dc10..0000000 --- a/doc/html/MemoryManagement.html +++ /dev/null @@ -1,510 +0,0 @@ - - - -
Some form of memory management may be necessary in HDF5 when - the various deletion operators are implemented so that the - file memory is not permanently orphaned. However, since an - HDF5 file was designed with persistent data in mind, the - importance of a memory manager is questionable. - -
On the other hand, when certain meta data containers (file glue) - grow, they may need to be relocated in order to keep the - container contiguous. - -
- Example: An object header consists of up to two - chunks of contiguous memory. The first chunk is a fixed - size at a fixed location when the header link count is - greater than one. Thus, inserting additional items into an - object header may require the second chunk to expand. When - this occurs, the second chunk may need to move to another - location in the file, freeing the file memory which that - chunk originally occupied. -- -
The relocation of meta data containers could potentially - orphan a significant amount of file memory if the application - has made poor estimates for preallocation sizes. - - -
Memory management by the library can be independent of memory - management support by the file format. The file format can - support no memory management, some memory management, or full - memory management. Similarly with the library. - -
We now evaluate each combination of library support with file - support: - -
The file contains an unsorted, doubly-linked list of free - blocks. The address of the head of the list appears in the - super block. Each free block contains the following fields: - -
byte | -byte | -byte | -byte | - -
---|---|---|---|
Free Block Signature | - -|||
Total Free Block Size | - -|||
Address of Left Sibling | - -|||
Address of Right Sibling | - -|||
Remainder of Free Block |
-
The library reads as much of the free list as convenient when - convenient and pushes those entries onto stacks. This can - occur when a file is opened or any time during the life of the - file. There is one stack for each free block size and the - stacks are sorted by size in a balanced tree in memory. - -
Deallocation involves finding the correct stack or creating - a new one (an O(log K) operation where K is - the number of stacks), pushing the free block info onto the - stack (a constant-time operation), and inserting the free - block into the file free block list (a constant-time operation - which doesn't necessarily involve any I/O since the free blocks - can be cached like other objects). No attempt is made to - coalesce adjacent free blocks into larger blocks. - -
Allocation involves finding the correct stack (an O(log - K) operation), removing the top item from the stack - (a constant-time operation), and removing the block from the - file free block list (a constant-time operation). If there is - no free block of the requested size or larger, then the file - is extended. - -
To provide shareability of the free list between processes, - the last step of an allocation will check for the free block - signature and, if it doesn't find one, will repeat the process. - Alternatively, a process can temporarily remove free blocks - from the file and hold them in its own private pool. - -
To summarize... -
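The stack-per-size scheme above can be sketched in plain C. This is not HDF5 library code; it is a self-contained illustration in which a sorted array of stacks stands in for the in-memory balanced tree keyed by block size, so stack lookup stays O(log K). All names (`FreeList`, `fl_alloc`, `fl_free`) are invented for the sketch, and remainder handling when a larger block satisfies a smaller request is omitted.

```c
#include <stdlib.h>
#include <string.h>

/* One stack of free-block file addresses, all blocks the same size. */
typedef struct {
    size_t block_size;
    size_t count, cap;
    long  *addrs;
} SizeStack;

/* A sorted array of stacks stands in for the balanced tree keyed by
 * block size described above. */
typedef struct {
    SizeStack *stacks;
    size_t     nstacks, cap;
} FreeList;

/* Index of the first stack whose block size is >= size (binary search). */
static size_t fl_lower_bound(const FreeList *fl, size_t size) {
    size_t lo = 0, hi = fl->nstacks;
    while (lo < hi) {
        size_t mid = (lo + hi) / 2;
        if (fl->stacks[mid].block_size < size) lo = mid + 1;
        else hi = mid;
    }
    return lo;
}

/* Deallocation: find or create the stack for this size (O(log K)) and
 * push the block's address onto it (constant time). */
static void fl_free(FreeList *fl, long addr, size_t size) {
    size_t i = fl_lower_bound(fl, size);
    SizeStack *s;
    if (i == fl->nstacks || fl->stacks[i].block_size != size) {
        if (fl->nstacks == fl->cap) {            /* grow the stack array */
            fl->cap = fl->cap ? fl->cap * 2 : 8;
            fl->stacks = realloc(fl->stacks, fl->cap * sizeof *fl->stacks);
        }
        memmove(&fl->stacks[i + 1], &fl->stacks[i],
                (fl->nstacks - i) * sizeof *fl->stacks);
        fl->nstacks++;
        fl->stacks[i] = (SizeStack){size, 0, 0, NULL};
    }
    s = &fl->stacks[i];
    if (s->count == s->cap) {                    /* grow this stack */
        s->cap = s->cap ? s->cap * 2 : 8;
        s->addrs = realloc(s->addrs, s->cap * sizeof *s->addrs);
    }
    s->addrs[s->count++] = addr;
}

/* Allocation: pop from the stack of the requested size, or from the
 * next larger one.  Returns -1 when no block is big enough, meaning
 * the file must be extended. */
static long fl_alloc(FreeList *fl, size_t size) {
    size_t i = fl_lower_bound(fl, size);
    for (; i < fl->nstacks; i++)
        if (fl->stacks[i].count > 0)
            return fl->stacks[i].addrs[--fl->stacks[i].count];
    return -1;
}
```

The real library would additionally mirror each push and pop into the on-disk doubly-linked free-block list, which this sketch leaves out.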
The HDF5 file format supports a general B-tree mechanism - for storing data with keys. If we use a B-tree to represent - all parts of the file that are free and the B-tree is indexed - so that a free file chunk can be found if we know the starting - or ending address, then we can efficiently determine whether a - free chunk begins or ends at the specified address. Call this - the Address B-Tree. - -
If a second B-tree points to a set of stacks where the - members of a particular stack are all free chunks of the same - size, and the tree is indexed by chunk size, then we can - efficiently find the best-fit chunk size for a memory request. - Call this the Size B-Tree. - -
All free blocks of a particular size can be linked together - with an unsorted, doubly-linked, circular list and the left - and right sibling addresses can be stored within the free - chunk, allowing us to remove or insert items from the list in - constant time. - -
Deallocation of a block of file memory consists of: - -
Allocation is similar to deallocation. - -
To summarize... - -
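The dual B-tree scheme can be sketched the same way. In this self-contained illustration (not HDF5 code) a small array sorted by address stands in for the Address B-Tree, and a linear best-fit scan stands in for the Size B-Tree lookup; in the real structures both the neighbour check and the best-fit search would be O(log n). The names `free_extent` and `alloc_extent` are invented for the sketch.

```c
/* Free extents kept sorted by address; this array stands in for the
 * Address B-Tree, and the best-fit scan for the Size B-Tree lookup. */
#define MAXEXT 64

typedef struct { long addr, size; } Extent;

static Extent ext[MAXEXT];
static int    nfree;

/* Deallocation: insert [addr, addr+size) and coalesce with any
 * adjacent free extent found through the address index. */
static void free_extent(long addr, long size) {
    int i = 0, j;
    while (i < nfree && ext[i].addr < addr) i++;
    if (i < nfree && addr + size == ext[i].addr) {   /* merge right */
        ext[i].addr = addr;
        ext[i].size += size;
    } else {                                         /* insert new extent */
        for (j = nfree++; j > i; j--) ext[j] = ext[j - 1];
        ext[i] = (Extent){addr, size};
    }
    if (i > 0 && ext[i - 1].addr + ext[i - 1].size == ext[i].addr) {
        ext[i - 1].size += ext[i].size;              /* merge left */
        for (j = i; j < nfree - 1; j++) ext[j] = ext[j + 1];
        nfree--;
    }
}

/* Allocation: best-fit over the free extents; the Size B-Tree would
 * find this in O(log n).  Returns -1 when the file must be extended. */
static long alloc_extent(long size) {
    int i, best = -1;
    long addr;
    for (i = 0; i < nfree; i++)
        if (ext[i].size >= size && (best < 0 || ext[i].size < ext[best].size))
            best = i;
    if (best < 0) return -1;
    addr = ext[best].addr;
    ext[best].addr += size;
    if ((ext[best].size -= size) == 0) {             /* extent fully used */
        for (i = best; i < nfree - 1; i++) ext[i] = ext[i + 1];
        nfree--;
    }
    return addr;
}
```

Unlike the stack scheme, freeing here coalesces adjacent blocks into larger ones, which is the main advantage the Address B-Tree buys.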
- HDF5 documents and links - Introduction to HDF5 - HDF5 Reference Manual - HDF5 User's Guide for Release 1.6 - - |
-
- And in this document, the
- HDF5 User's Guide from Release 1.4.5:
- - Files - Datasets - Datatypes - Dataspaces - Groups - - References - Attributes - Property Lists - Error Handling - - Filters - Caching - Chunking - Mounting Files - - Performance - Debugging - Environment - DDL - |
This document contrasts two methods for mounting an HDF5 file - on another HDF5 file: the case where the relationship between - files is a tree and the case where it is a graph. The tree case - simplifies current working group functions and allows symbolic - links to point into ancestor files, whereas the graph case is - more consistent with the organization of groups within a - particular file. - -
If file child
is mounted on file
- parent
at group /mnt
in
- parent
then the contents of the root group of
- child
will appear in the group /mnt
of
- parent
. The group /mnt
is called the
- mount point of the child in the parent.
-
-
These features are common to both mounting schemes. - -
/mnt
in
- parent
is temporarily hidden. If objects in that
- group had names from other groups then the objects will still
- be visible by those other names.
-
- /mnt
for that group then
- the root group of the child will be visible in all those
- names.
-
- H5Gmove()
in such a way
- that the new location would be in a different file than the
- original location.
-
Tree | Graph
---|---
The set of mount-related files makes a tree. | The set of mount-related files makes a directed graph.
A file can be mounted at only one mount point. | A file can be mounted at any number of mount points.
Symbolic links in the child that have a link value which is an absolute name can be interpreted with respect to the root group of either the child or the root of the mount tree, a property which is determined when the child is mounted. | Symbolic links in the child that have a link value which is an absolute name are interpreted with respect to the root group of the child.
Closing a child causes it to be unmounted from the parent. | Closing a child has no effect on its relationship with the parent. One can continue to access the child contents through the parent.
Closing the parent recursively unmounts and closes all mounted children. | Closing the parent unmounts all children but does not close them or unmount their children.
The current working group functions H5Gset(), H5Gpush(), and H5Gpop() operate on the root of the mount tree. | The current working group functions operate on the file specified by their first argument.
Absolute name lookups (like for H5Dopen()) are always performed with respect to the root of the mount tree. | Absolute name lookups are performed with respect to the file specified by the first argument.
Relative name lookups (like for H5Dopen()) are always performed with respect to the specified group or the current working group of the root of the mount tree. | Relative name lookups are always performed with respect to the specified group or the current working group of the file specified by the first argument.
Mounting a child temporarily hides the current working group stack for that child. | Mounting a child has no effect on its current working group stack.
Calling H5Fflush() will flush all files of the mount tree regardless of which file is specified as the argument. | Calling H5Fflush() will flush only the specified file.
herr_t H5Fmount(hid_t loc, const char *name, hid_t child, hid_t plist)
Tree | Graph
---|---
The call will fail if the child is already mounted elsewhere. | A child can be mounted at numerous mount points.
The call will fail if the child is an ancestor of the parent. | The mount graph is allowed to have cycles.
Subsequently closing the child will cause it to be unmounted from the parent. | Closing the child has no effect on its mount relationship with the parent.
herr_t H5Funmount(hid_t loc, const char *name)

hid_t H5Pcreate(H5P_MOUNT)

herr_t H5Pset_symlink_locality(hid_t plist, H5G_symlink_t locality)
herr_t H5Pget_symlink_locality(hid_t plist, H5G_symlink_t *locality)

The symlink locality is either H5G_SYMLINK_LOCAL or H5G_SYMLINK_GLOBAL (the default).

hid_t H5Freopen(hid_t file)
Tree | Graph
---|---
The new handle is not mounted but the old handle continues to be mounted. | The new handle is mounted at the same location(s) as the original handle.
A file eos.h5 contains data which is constant for all problems. The output of a particular physics application is dumped into data1.h5 and data2.h5, and the physics expects various constants from eos.h5 in the eos group of the two data files. Instead of copying the contents of eos.h5 into every physics output file, we simply mount eos.h5 as a read-only child of data1.h5 and data2.h5.
- HDF5 documents and links - Introduction to HDF5 - HDF5 Reference Manual - HDF5 User's Guide for Release 1.6 - - |
-
- And in this document, the
- HDF5 User's Guide from Release 1.4.5:
- - Files - Datasets - Datatypes - Dataspaces - Groups - - References - Attributes - Property Lists - Error Handling - - Filters - Caching - Chunking - Mounting Files - - Performance - Debugging - Environment - DDL - |
This section includes brief discussions of performance issues in HDF5 and performance analysis tools for HDF5, or pointers to such discussions.

HDF5 does not yet manage freespace as effectively as it might. While a file is open, the library actively tracks and re-uses freespace, i.e., space that is freed (or released) during the run. But the library does not yet manage freespace across the closing and reopening of a file; when a file is closed, all knowledge of available freespace is lost. What was freespace becomes an unusable hole in the file.

There are several circumstances that can result in freespace in an HDF5 file:
When an object is unlinked (e.g., with H5Gunlink), the space previously occupied by the object is released and identified as freespace.

As stated above, freespace is not managed across the closing and reopening of an HDF5 file; file space that was known freespace while the file remained open becomes an inaccessible hole when the file is closed. Thus, if a file is often closed and reopened, datasets frequently rewritten, or groups and/or datasets frequently added and deleted, that file can develop large numbers of holes and grow unnecessarily large. This can, in turn, seriously impair application or library performance as the file ages.
An h5pack utility would enable packing a file to remove the holes, but writing such a utility to universally pack the file correctly is a complex task, and the HDF5 development team has not to date had the resources to complete the task.

For application developers or researchers who find themselves working with files that become bloated in this manner, there are, at this time, two remedies:
H5view, an HDF5 Java tool, allows the user to open a file and, using the Save As... feature, save the file under a new filename. The new file can then be closed and will be a packed version of the original file. This approach is reasonably reliable, but with two caveats:

The Pablo software consists of an instrumented copy of the HDF5 library, the Pablo Trace and Trace Extensions libraries, and some utilities for processing the output. The instrumented version of the HDF5 library has hooks inserted into the HDF5 code which call routines in the Pablo Trace library just after entry to each instrumented HDF5 routine and just prior to exit from the routine. The Pablo Trace Extension library has programs that track the I/O activity between the entry and exit of the HDF5 routine during execution.
A few lines of code must be inserted in the user's main program to enable tracing and to specify which HDF5 procedures are to be traced. The program is linked with the special HDF5 and Pablo libraries to produce an executable. Running this executable on a single processor produces an output file, called the trace file, which contains records, called Pablo Self-Defining Data Format (SDDF) records, which can later be analyzed using the HDF5 Analysis Utilities. The HDF5 Analysis Utilities can be used to interpret the SDDF records in the trace files to produce a report describing the HDF5 I/O activity that occurred during execution.
For further instructions, see the file READ_ME in the $(toplevel)/hdf5/pablo/ subdirectory of the HDF5 source code distribution.

For further information about Pablo and the Self-Defining Data Format, visit the Pablo website at http://www-pablo.cs.uiuc.edu/.
-HDF5 documents and links -Introduction to HDF5 -HDF5 User Guide - - |
-
-And in this document, the
-HDF5 Reference Manual
- -H5IM -H5LT -H5PT -H5TB - -H5 -H5A -H5D -H5E -H5F -H5G -H5I -H5P - -H5R -H5S -H5T -H5Z -Tools -Datatypes - |
IEEE floating-point datatypes:
    H5T_IEEE_F32BE, H5T_IEEE_F32LE, H5T_IEEE_F64BE, H5T_IEEE_F64LE

Standard signed integer datatypes:
    H5T_STD_I8BE, H5T_STD_I8LE, H5T_STD_I16BE, H5T_STD_I16LE,
    H5T_STD_I32BE, H5T_STD_I32LE, H5T_STD_I64BE, H5T_STD_I64LE

Standard unsigned integer datatypes:
    H5T_STD_U8BE, H5T_STD_U8LE, H5T_STD_U16BE, H5T_STD_U16LE,
    H5T_STD_U32BE, H5T_STD_U32LE, H5T_STD_U64BE, H5T_STD_U64LE

Standard bitfield datatypes:
    H5T_STD_B8BE, H5T_STD_B8LE, H5T_STD_B16BE, H5T_STD_B16LE,
    H5T_STD_B32BE, H5T_STD_B32LE, H5T_STD_B64BE, H5T_STD_B64LE

Reference datatypes:
    H5T_STD_REF_OBJ, H5T_STD_REF_DSETREG

UNIX time datatypes:
    H5T_UNIX_D32BE, H5T_UNIX_D32LE, H5T_UNIX_D64BE, H5T_UNIX_D64LE

C-style string datatype:
    H5T_C_S1

Fortran-style string datatype:
    H5T_FORTRAN_S1
Intel architecture datatypes:
    H5T_INTEL_I8, H5T_INTEL_I16, H5T_INTEL_I32, H5T_INTEL_I64,
    H5T_INTEL_U8, H5T_INTEL_U16, H5T_INTEL_U32, H5T_INTEL_U64,
    H5T_INTEL_B8, H5T_INTEL_B16, H5T_INTEL_B32, H5T_INTEL_B64,
    H5T_INTEL_F32, H5T_INTEL_F64

Alpha architecture datatypes:
    H5T_ALPHA_I8, H5T_ALPHA_I16, H5T_ALPHA_I32, H5T_ALPHA_I64,
    H5T_ALPHA_U8, H5T_ALPHA_U16, H5T_ALPHA_U32, H5T_ALPHA_U64,
    H5T_ALPHA_B8, H5T_ALPHA_B16, H5T_ALPHA_B32, H5T_ALPHA_B64,
    H5T_ALPHA_F32, H5T_ALPHA_F64

MIPS architecture datatypes:
    H5T_MIPS_I8, H5T_MIPS_I16, H5T_MIPS_I32, H5T_MIPS_I64,
    H5T_MIPS_U8, H5T_MIPS_U16, H5T_MIPS_U32, H5T_MIPS_U64,
    H5T_MIPS_B8, H5T_MIPS_B16, H5T_MIPS_B32, H5T_MIPS_B64,
    H5T_MIPS_F32, H5T_MIPS_F64
The native datatypes are defined by H5detect. Their names differ from other HDF5 datatype names as follows: if the name contains U, it is the unsigned version of the integer datatype; other integer datatypes are signed. LLONG corresponds to C's long long and LDOUBLE is long double. These datatypes might be the same as LONG and DOUBLE, respectively.
Native integer and character datatypes:
    H5T_NATIVE_CHAR, H5T_NATIVE_SCHAR, H5T_NATIVE_UCHAR,
    H5T_NATIVE_SHORT, H5T_NATIVE_USHORT,
    H5T_NATIVE_INT, H5T_NATIVE_UINT,
    H5T_NATIVE_LONG, H5T_NATIVE_ULONG, H5T_NATIVE_LLONG, H5T_NATIVE_ULLONG

Native floating-point, bitfield, and library-defined datatypes:
    H5T_NATIVE_FLOAT, H5T_NATIVE_DOUBLE, H5T_NATIVE_LDOUBLE,
    H5T_NATIVE_B8, H5T_NATIVE_B16, H5T_NATIVE_B32, H5T_NATIVE_B64,
    H5T_NATIVE_OPAQUE, H5T_NATIVE_HADDR, H5T_NATIVE_HSIZE,
    H5T_NATIVE_HSSIZE, H5T_NATIVE_HERR, H5T_NATIVE_HBOOL
In the following names, LEAST indicates storage using the least amount of space and FAST indicates storage maximizing performance:

    H5T_NATIVE_INT8, H5T_NATIVE_UINT8,
    H5T_NATIVE_INT_LEAST8, H5T_NATIVE_UINT_LEAST8,
    H5T_NATIVE_INT_FAST8, H5T_NATIVE_UINT_FAST8,
    H5T_NATIVE_INT16, H5T_NATIVE_UINT16,
    H5T_NATIVE_INT_LEAST16, H5T_NATIVE_UINT_LEAST16,
    H5T_NATIVE_INT_FAST16, H5T_NATIVE_UINT_FAST16,
    H5T_NATIVE_INT32, H5T_NATIVE_UINT32,
    H5T_NATIVE_INT_LEAST32, H5T_NATIVE_UINT_LEAST32,
    H5T_NATIVE_INT_FAST32, H5T_NATIVE_UINT_FAST32,
    H5T_NATIVE_INT64, H5T_NATIVE_UINT64,
    H5T_NATIVE_INT_LEAST64, H5T_NATIVE_UINT_LEAST64,
    H5T_NATIVE_INT_FAST64, H5T_NATIVE_UINT_FAST64
Native Fortran datatypes:
    H5T_NATIVE_INTEGER, H5T_NATIVE_REAL, H5T_NATIVE_DOUBLE, H5T_NATIVE_CHARACTER
    H5T_STD_I8BE, H5T_STD_I8LE, H5T_STD_I16BE, H5T_STD_I16LE,
    H5T_STD_I32BE, H5T_STD_I32LE, H5T_STD_I64BE, H5T_STD_I64LE,
    H5T_STD_U8BE, H5T_STD_U8LE, H5T_STD_U16BE, H5T_STD_U16LE,
    H5T_STD_U32BE, H5T_STD_U32LE, H5T_STD_U64BE, H5T_STD_U64LE,
    H5T_IEEE_F32BE, H5T_IEEE_F32LE, H5T_IEEE_F64BE, H5T_IEEE_F64LE,
    H5T_STD_REF_OBJ, H5T_STD_REF_DSETREG
The property list (a.k.a., template) interface provides a mechanism for default named arguments for a C function interface. A property list is a collection of name/value pairs which can be passed to various other HDF5 functions to control features that are typically unimportant or whose default values are usually used.
For instance, file creation needs to know various things, such as the size of the user-block at the beginning of the file or the size of various file data structures. Wrapping this information in a property list simplifies the API by reducing the number of arguments to H5Fcreate().
Property lists follow the same create/open/close paradigm as the rest of the library.

hid_t H5Pcreate (H5P_class_t class)
    Creates a new property list of one of the following classes:
    H5P_FILE_CREATE, H5P_FILE_ACCESS, H5P_DATASET_CREATE, H5P_DATASET_XFER

hid_t H5Pcopy (hid_t plist)

herr_t H5Pclose (hid_t plist)

H5P_class_t H5Pget_class (hid_t plist)
    Returns the class of the property list, as passed to H5Pcreate().
The C Interfaces:

    H5check_version   H5close   H5dont_atexit   H5garbage_collect
    H5get_libversion  H5open    H5set_free_list_limits
herr_t H5check_version(unsigned majnum, unsigned minnum, unsigned relnum)
-H5check_version
verifies that the arguments provided
- with the function call match the version numbers compiled into
- the library.
-
- H5check_version
serves two slightly differing purposes.
-
- First, the function is intended to be called by the user to verify
- that the version of the header files compiled into an application
- matches the version of the HDF5 library being used.
- One may look at the H5check
definition in the file
- H5public.h
as an example.
-
- Due to the risks of data corruption or segmentation faults,
- H5check_version
causes the application to abort if the
- version numbers do not match.
- The abort is achieved by means of a call to the
- standard C function abort()
.
-
- Note that H5check_version
verifies only the
- major and minor version numbers and the release number;
- it does not verify the sub-release value as that should be
- an empty string for any official release.
- This means that any two incompatible library versions must
- have different {major,minor,release} numbers. (Notice the
- reverse is not necessarily true.)
-
- Secondarily, H5check_version
verifies that the
- library version identifiers H5_VERS_MAJOR
,
- H5_VERS_MINOR
, H5_VERS_RELEASE
,
- H5_VERS_SUBRELEASE
, and H5_VERS_INFO
- are consistent.
- This is designed to catch source code inconsistencies,
- but does not generate the fatal error as in the first stage
- because this inconsistency does not cause errors in the data files.
- If this check reveals inconsistencies, the library issues a warning
- but the function does not fail.
-
-
    unsigned majnum | IN: The major version of the library.
    unsigned minnum | IN: The minor version of the library.
    unsigned relnum | IN: The release number of the library.
SUBROUTINE h5check_version_f(majnum, minnum, relnum, hdferr)
  IMPLICIT NONE
  INTEGER, INTENT(IN) :: majnum  ! The major version of the library
  INTEGER, INTENT(IN) :: minnum  ! The minor version of the library
  INTEGER, INTENT(IN) :: relnum  ! The release number
  INTEGER, INTENT(OUT) :: hdferr ! Error code
END SUBROUTINE h5check_version_f
herr_t H5close(void)

H5close flushes all data to disk, closes all file identifiers, and cleans up all memory used by the library. This function is generally called when the application calls exit(), but may be called earlier in the event of an emergency shutdown or out of a desire to free all resources used by the HDF5 library.

h5close_f and h5open_f are required calls in Fortran90 applications.

SUBROUTINE h5close_f(hdferr)
  IMPLICIT NONE
  INTEGER, INTENT(OUT) :: hdferr ! Error code
END SUBROUTINE h5close_f
herr_t H5dont_atexit(void)

Purpose: suppresses installation of the atexit cleanup routine.

H5dont_atexit indicates to the library that an atexit() cleanup routine should not be installed. The major purpose for this is in situations where the library is dynamically linked into an application and is unlinked from the application before exit() gets called. In those situations, a routine installed with atexit() would jump to a routine which was no longer in memory, causing errors.

In order to be effective, this routine must be called before any other HDF function calls, and must be called each time the library is loaded/linked into the application (the first time and after it has been unloaded).

SUBROUTINE h5dont_atexit_f(hdferr)
  IMPLICIT NONE
  INTEGER, INTENT(OUT) :: hdferr ! Error code
END SUBROUTINE h5dont_atexit_f
herr_t H5garbage_collect(void)

H5garbage_collect walks through all the garbage collection routines of the library, freeing any unused memory.

It is not required that H5garbage_collect be called at any particular time; it is only necessary in certain situations where the application has performed actions that cause the library to allocate many objects. The application should call H5garbage_collect if it eventually releases those objects and wants to reduce the memory used by the library from the peak usage required.

The library automatically garbage collects all the free lists when the application ends.

SUBROUTINE h5garbage_collect_f(hdferr)
  IMPLICIT NONE
  INTEGER, INTENT(OUT) :: hdferr ! Error code
END SUBROUTINE h5garbage_collect_f
herr_t H5get_libversion(unsigned *majnum, unsigned *minnum, unsigned *relnum)

H5get_libversion retrieves the major, minor, and release numbers of the version of the HDF library which is linked to the application.

    unsigned *majnum | OUT: The major version of the library.
    unsigned *minnum | OUT: The minor version of the library.
    unsigned *relnum | OUT: The release number of the library.

SUBROUTINE h5get_libversion_f(majnum, minnum, relnum, hdferr)
  IMPLICIT NONE
  INTEGER, INTENT(OUT) :: majnum ! The major version of the library
  INTEGER, INTENT(OUT) :: minnum ! The minor version of the library
  INTEGER, INTENT(OUT) :: relnum ! The release number
  INTEGER, INTENT(OUT) :: hdferr ! Error code
END SUBROUTINE h5get_libversion_f
herr_t H5open(void)

H5open initializes the library.

When the HDF5 Library is employed in a C application, this function is normally called automatically, but if you find that an HDF5 library function is failing inexplicably, try calling this function first. If you wish to eliminate this possibility, it is safe to routinely call H5open before an application starts working with the library, as there are no damaging side-effects in calling it more than once.

When the HDF5 Library is employed in a Fortran90 application, h5open_f initializes global variables (e.g., predefined types) and performs other tasks required to initialize the library. h5open_f and h5close_f are therefore required calls in Fortran90 applications.

SUBROUTINE h5open_f(hdferr)
  IMPLICIT NONE
  INTEGER, INTENT(OUT) :: hdferr ! Error code
END SUBROUTINE h5open_f
herr_t H5set_free_list_limits(int reg_global_lim, int reg_list_lim, int arr_global_lim, int arr_list_lim, int blk_global_lim, int blk_list_lim)

H5set_free_list_limits sets size limits on all types of free lists. The HDF5 library uses free lists internally to manage memory. There are three types of free lists: regular, array, and block.

These are global limits, but each limit applies only to free lists of the specified type. Therefore, if an application sets a 1 MB limit on each of the global lists, up to 3 MB of total storage might be allocated, 1 MB for each of the regular, array, and block type lists.

Using a value of -1 for a limit means that no limit is set for the specified type of free list.

    int reg_global_lim | IN: The limit on all regular free list memory used
    int reg_list_lim   | IN: The limit on memory used in each regular free list
    int arr_global_lim | IN: The limit on all array free list memory used
    int arr_list_lim   | IN: The limit on memory used in each array free list
    int blk_global_lim | IN: The limit on all block free list memory used
    int blk_list_lim   | IN: The limit on memory used in each block free list
The C Interfaces:

    H5Aclose          H5Acreate     H5Adelete     H5Aget_name
    H5Aget_num_attrs  H5Aget_space  H5Aget_type   H5Aiterate
    H5Aopen_idx       H5Aopen_name  H5Aread       H5Awrite
The Attribute interface, H5A, is primarily designed to easily allow small datasets to be attached to primary datasets as metadata information. Additional goals for the H5A interface include keeping storage requirements for each attribute to a minimum and easily sharing attributes among datasets.

Because attributes are intended to be small objects, large datasets intended as additional information for a primary dataset should be stored as supplemental datasets in a group with the primary dataset. Attributes can then be attached to the group containing everything to indicate that a particular type of dataset with supplemental datasets is located in the group. How small is "small" is not defined by the library and is up to the user's interpretation.

See Attributes in the HDF5 User's Guide for further information.
herr_t H5Aclose(hid_t attr_id)

H5Aclose terminates access to the attribute specified by attr_id by releasing the identifier. Further use of a released attribute identifier is illegal; a function using such an identifier will fail.

    hid_t attr_id | IN: Attribute to release access to.

SUBROUTINE h5aclose_f(attr_id, hdferr)
  IMPLICIT NONE
  INTEGER(HID_T), INTENT(IN) :: attr_id ! Attribute identifier
  INTEGER, INTENT(OUT) :: hdferr        ! Error code:
                                        ! 0 on success and -1 on failure
END SUBROUTINE h5aclose_f
hid_t H5Acreate(hid_t loc_id, const char *name, hid_t type_id, hid_t space_id, hid_t create_plist)

H5Acreate creates an attribute named name and attached to the object specified with loc_id. loc_id is a group, dataset, or named datatype identifier.

The attribute name specified in name must be unique. Attempting to create an attribute with the same name as an already existing attribute will fail, leaving the pre-existing attribute in place. To overwrite an existing attribute with a new attribute of the same name, first call H5Adelete, then recreate the attribute with H5Acreate.

The datatype and dataspace identifiers of the attribute, type_id and space_id, are created with the H5T and H5S interfaces, respectively. Currently, only simple dataspaces are allowed for attribute dataspaces.

The attribute creation property list, create_plist, is currently unused; it may be used in the future for optional attribute properties. At this time, H5P_DEFAULT is the only accepted value.

The attribute identifier returned from this function must be released with H5Aclose or resource leaks will develop.

    hid_t loc_id       | IN: Object (dataset, group, or named datatype) to be attached to.
    const char *name   | IN: Name of attribute to create.
    hid_t type_id      | IN: Identifier of datatype for attribute.
    hid_t space_id     | IN: Identifier of dataspace for attribute.
    hid_t create_plist | IN: Identifier of creation property list. (Currently unused; the only accepted value is H5P_DEFAULT.)
SUBROUTINE h5acreate_f(obj_id, name, type_id, space_id, attr_id, &
                       hdferr, creation_prp)
  IMPLICIT NONE
  INTEGER(HID_T), INTENT(IN) :: obj_id    ! Object identifier
  CHARACTER(LEN=*), INTENT(IN) :: name    ! Attribute name
  INTEGER(HID_T), INTENT(IN) :: type_id   ! Attribute datatype identifier
  INTEGER(HID_T), INTENT(IN) :: space_id  ! Attribute dataspace identifier
  INTEGER(HID_T), INTENT(OUT) :: attr_id  ! Attribute identifier
  INTEGER, INTENT(OUT) :: hdferr          ! Error code:
                                          ! 0 on success and -1 on failure
  INTEGER(HID_T), OPTIONAL, INTENT(IN) :: creation_prp
                                          ! Attribute creation property
                                          ! list identifier
END SUBROUTINE h5acreate_f
herr_t H5Adelete(hid_t loc_id, const char *name)

H5Adelete removes the attribute specified by its name, name, from a dataset, group, or named datatype. This function should not be used when attribute identifiers are open on loc_id as it may cause the internal indexes of the attributes to change and future writes to the open attributes to produce incorrect results.

    hid_t loc_id     | IN: Identifier of the dataset, group, or named datatype to have the attribute deleted from.
    const char *name | IN: Name of the attribute to delete.

SUBROUTINE h5adelete_f(obj_id, name, hdferr)
  IMPLICIT NONE
  INTEGER(HID_T), INTENT(IN) :: obj_id ! Object identifier
  CHARACTER(LEN=*), INTENT(IN) :: name ! Attribute name
  INTEGER, INTENT(OUT) :: hdferr       ! Error code:
                                       ! 0 on success and -1 on failure
END SUBROUTINE h5adelete_f
ssize_t H5Aget_name(hid_t attr_id, size_t buf_size, char *buf)

H5Aget_name retrieves the name of an attribute specified by the identifier, attr_id. Up to buf_size characters are stored in buf followed by a \0 string terminator. If the name of the attribute is longer than (buf_size - 1), the string terminator is stored in the last position of the buffer to properly terminate the string.

    hid_t attr_id   | IN: Identifier of the attribute.
    size_t buf_size | IN: The size of the buffer to store the name in.
    char *buf       | OUT: Buffer to store name in.

Returns the length of the attribute's name, which may be longer than buf_size, if successful. Otherwise returns a negative value.

SUBROUTINE h5aget_name_f(attr_id, size, buf, hdferr)
  IMPLICIT NONE
  INTEGER(HID_T), INTENT(IN) :: attr_id ! Attribute identifier
  INTEGER, INTENT(IN) :: size           ! Buffer size
  CHARACTER(LEN=*), INTENT(OUT) :: buf  ! Buffer to hold attribute name
  INTEGER, INTENT(OUT) :: hdferr        ! Error code: name length
                                        ! on success and -1 on failure
END SUBROUTINE h5aget_name_f
int H5Aget_num_attrs(hid_t loc_id)

H5Aget_num_attrs returns the number of attributes attached to the object specified by its identifier, loc_id. The object can be a group, dataset, or named datatype.

    hid_t loc_id | IN: Identifier of a group, dataset, or named datatype.

SUBROUTINE h5aget_num_attrs_f(obj_id, attr_num, hdferr)
  IMPLICIT NONE
  INTEGER(HID_T), INTENT(IN) :: obj_id ! Object identifier
  INTEGER, INTENT(OUT) :: attr_num     ! Number of attributes of the object
  INTEGER, INTENT(OUT) :: hdferr       ! Error code:
                                       ! 0 on success and -1 on failure
END SUBROUTINE h5aget_num_attrs_f
hid_t H5Aget_space(hid_t attr_id)

H5Aget_space retrieves a copy of the dataspace for an attribute. The dataspace identifier returned from this function must be released with H5Sclose or resource leaks will develop.

    hid_t attr_id | IN: Identifier of an attribute.

SUBROUTINE h5aget_space_f(attr_id, space_id, hdferr)
  IMPLICIT NONE
  INTEGER(HID_T), INTENT(IN) :: attr_id   ! Attribute identifier
  INTEGER(HID_T), INTENT(OUT) :: space_id ! Attribute dataspace identifier
  INTEGER, INTENT(OUT) :: hdferr          ! Error code:
                                          ! 0 on success and -1 on failure
END SUBROUTINE h5aget_space_f
hid_t H5Aget_type(hid_t attr_id)

H5Aget_type retrieves a copy of the datatype for an attribute.

The datatype is reopened if it is a named type before returning it to the application. The datatypes returned by this function are always read-only. If an error occurs when atomizing the return datatype, then the datatype is closed.

The datatype identifier returned from this function must be released with H5Tclose or resource leaks will develop.

    hid_t attr_id | IN: Identifier of an attribute.

SUBROUTINE h5aget_type_f(attr_id, type_id, hdferr)
  IMPLICIT NONE
  INTEGER(HID_T), INTENT(IN) :: attr_id  ! Attribute identifier
  INTEGER(HID_T), INTENT(OUT) :: type_id ! Attribute datatype identifier
  INTEGER, INTENT(OUT) :: hdferr         ! Error code:
                                         ! 0 on success and -1 on failure
END SUBROUTINE h5aget_type_f
herr_t H5Aiterate(hid_t loc_id, unsigned *idx, H5A_operator_t op, void *op_data)

H5Aiterate iterates over the attributes of the object specified by its identifier, loc_id. The object can be a group, dataset, or named datatype. For each attribute of the object, the op_data and some additional information specified below are passed to the operator function op. The iteration begins with the attribute specified by its index, idx; the index for the next attribute to be processed by the operator, op, is returned in idx. If idx is the null pointer, then all attributes are processed.

The prototype for H5A_operator_t is:

    typedef herr_t (*H5A_operator_t)(hid_t loc_id,
                                     const char *attr_name,
                                     void *operator_data);

The operation receives the identifier for the group, dataset, or named datatype being iterated over, loc_id, the name of the current attribute about the object, attr_name, and the pointer to the operator data passed in to H5Aiterate, op_data. The return values from an operator are:

* Zero causes the iterator to continue, returning zero when all attributes have been processed.
* A positive value causes the iterator to immediately return that positive value, indicating short-circuit success. The iterator can be restarted at the next attribute.
* A negative value causes the iterator to immediately return that value, indicating failure. The iterator can be restarted at the next attribute.

    hid_t loc_id      | IN: Identifier of a group, dataset, or named datatype.
    unsigned *idx     | IN/OUT: Starting (IN) and ending (OUT) attribute index.
    H5A_operator_t op | IN: User's function to pass each attribute to.
    void *op_data     | IN/OUT: User's data to pass through to iterator operator function.
hid_t H5Aopen_idx(hid_t loc_id, unsigned int idx)

H5Aopen_idx opens an attribute which is attached to the object specified with loc_id. The location object may be either a group, dataset, or named datatype, all of which may have any sort of attribute. The attribute specified by the index, idx, indicates the attribute to access. The value of idx is a 0-based, non-negative integer. The attribute identifier returned from this function must be released with H5Aclose or resource leaks will develop.

    hid_t loc_id     | IN: Identifier of the group, dataset, or named datatype to which the attribute is attached.
    unsigned int idx | IN: Index of the attribute to open.

SUBROUTINE h5aopen_idx_f(obj_id, index, attr_id, hdferr)
  IMPLICIT NONE
  INTEGER(HID_T), INTENT(IN) :: obj_id   ! Object identifier
  INTEGER, INTENT(IN) :: index           ! Attribute index
  INTEGER(HID_T), INTENT(OUT) :: attr_id ! Attribute identifier
  INTEGER, INTENT(OUT) :: hdferr         ! Error code:
                                         ! 0 on success and -1 on failure
END SUBROUTINE h5aopen_idx_f
hid_t H5Aopen_name(hid_t loc_id, const char *name)

H5Aopen_name opens an attribute specified by its name, name, which is attached to the object specified with loc_id. The location object may be either a group, dataset, or named datatype, which may have any sort of attribute. The attribute identifier returned from this function must be released with H5Aclose or resource leaks will develop.

    hid_t loc_id     | IN: Identifier of a group, dataset, or named datatype to which the attribute is attached.
    const char *name | IN: Attribute name.

SUBROUTINE h5aopen_name_f(obj_id, name, attr_id, hdferr)
  IMPLICIT NONE
  INTEGER(HID_T), INTENT(IN) :: obj_id   ! Object identifier
  CHARACTER(LEN=*), INTENT(IN) :: name   ! Attribute name
  INTEGER(HID_T), INTENT(OUT) :: attr_id ! Attribute identifier
  INTEGER, INTENT(OUT) :: hdferr         ! Error code:
                                         ! 0 on success and -1 on failure
END SUBROUTINE h5aopen_name_f
herr_t H5Aread(hid_t attr_id, hid_t mem_type_id, void *buf)

H5Aread reads an attribute, specified with attr_id. The attribute's memory datatype is specified with mem_type_id. The entire attribute is read into buf from the file.

Datatype conversion takes place at the time of a read or write and is automatic. See the Data Conversion section of The Data Type Interface (H5T) in the HDF5 User's Guide for a discussion of data conversion, including the range of conversions currently supported by the HDF5 libraries.

    hid_t attr_id     | IN: Identifier of an attribute to read.
    hid_t mem_type_id | IN: Identifier of the attribute datatype (in memory).
    void *buf         | OUT: Buffer for data to be read.

SUBROUTINE h5aread_f(attr_id, memtype_id, buf, dims, hdferr)
  IMPLICIT NONE
  INTEGER(HID_T), INTENT(IN) :: attr_id    ! Attribute identifier
  INTEGER(HID_T), INTENT(IN) :: memtype_id ! Attribute datatype
                                           ! identifier (in memory)
  TYPE, INTENT(INOUT) :: buf               ! Data buffer; may be a scalar or
                                           ! an array
  INTEGER(HSIZE_T), DIMENSION(*), INTENT(IN) :: dims
                                           ! Array to hold corresponding
                                           ! dimension sizes of data buffer buf;
                                           ! dims(k) has the value of the
                                           ! k-th dimension of buffer buf;
                                           ! values are ignored if buf is a
                                           ! scalar
  INTEGER, INTENT(OUT) :: hdferr           ! Error code:
                                           ! 0 on success and -1 on failure
END SUBROUTINE h5aread_f
H5Awrite
(hid_t attr_id
,
- hid_t mem_type_id
,
- const void *buf
- )
-H5Awrite
writes an attribute, specified with
- attr_id
. The attribute's memory datatype
- is specified with mem_type_id
. The entire
- attribute is written from buf
to the file.
- - Datatype conversion takes place at the time of a read or write - and is automatic. See the - Data Conversion - section of The Data Type Interface (H5T) in the - HDF5 User's Guide for a discussion of - data conversion, including the range of conversions currently - supported by the HDF5 libraries. -
hid_t attr_id |
- IN: Identifier of an attribute to write. |
hid_t mem_type_id |
- IN: Identifier of the attribute datatype (in memory). |
const void *buf |
- IN: Data to be written. |
-SUBROUTINE h5awrite_f(attr_id, memtype_id, buf, dims, hdferr)
-  IMPLICIT NONE
-  INTEGER(HID_T), INTENT(IN) :: attr_id     ! Attribute identifier
-  INTEGER(HID_T), INTENT(IN) :: memtype_id  ! Attribute datatype
-                                            ! identifier (in memory)
-  TYPE, INTENT(IN) :: buf                   ! Data buffer; may be a scalar
-                                            ! or an array
-  INTEGER(HSIZE_T), DIMENSION(*), INTENT(IN) :: dims
-                                            ! Array to hold corresponding
-                                            ! dimension sizes of data buffer buf;
-                                            ! dim(k) has value of the k-th
-                                            ! dimension of buffer buf;
-                                            ! values are ignored if buf is
-                                            ! a scalar
-  INTEGER, INTENT(OUT) :: hdferr            ! Error code:
-                                            ! 0 on success and -1 on failure
-END SUBROUTINE h5awrite_f
-HDF5 documents and links -Introduction to HDF5 -HDF5 User Guide - - |
-
-And in this document, the
-HDF5 Reference Manual
- -H5IM -H5LT -H5PT -H5TB - -H5 -H5A -H5D -H5E -H5F -H5G -H5I -H5P - -H5R -H5S -H5T -H5Z -Tools -Datatypes - |
-The C Interfaces:
H5Dclose
(hid_t dataset_id
- )
-H5Dclose
ends access to a dataset specified by
- dataset_id
and releases resources used by it.
- Further use of the dataset identifier is illegal in calls to
- the dataset API.
-hid_t dataset_id |
- IN: Identifier of the dataset to close access to. |
-SUBROUTINE h5dclose_f(dset_id, hdferr)
-  IMPLICIT NONE
-  INTEGER(HID_T), INTENT(IN) :: dset_id  ! Dataset identifier
-  INTEGER, INTENT(OUT) :: hdferr         ! Error code
-                                         ! 0 on success and -1 on failure
-END SUBROUTINE h5dclose_f
H5Dcreate
(hid_t loc_id
,
- const char *name
,
- hid_t type_id
,
- hid_t space_id
,
- hid_t create_plist_id
- )
-H5Dcreate
creates a data set with a name,
- name
, in the file or in the group specified by
- the identifier loc_id
.
- The dataset has the datatype and dataspace identified by
- type_id
and space_id
, respectively.
- The specified datatype and dataspace are the datatype and
- dataspace of the dataset as it will exist in the file,
- which may differ from the datatype and dataspace in application memory.
- Dataset creation properties are specified by the argument
- create_plist_id
.
-
- Dataset names within a group are unique:
- H5Dcreate
will return an error if a dataset with
- the name specified in name
already exists at the
- location specified in loc_id
.
-
- create_plist_id
is an H5P_DATASET_CREATE
- property list created with H5Pcreate
and
- initialized with the various functions described above.
-
- H5Dcreate
returns an error if the dataset's datatype
- includes a variable-length (VL) datatype and the fill value
- is undefined, i.e., set to NULL
in the
- dataset creation property list.
- Such a VL datatype may be directly included,
- indirectly included as part of a compound or array datatype, or
- indirectly included as part of a nested compound or array datatype.
-
- H5Dcreate
returns a dataset identifier for success
- or a negative value for failure.
- The dataset identifier should eventually be closed by
- calling H5Dclose
to release resources it uses.
-
- Fill values and space allocation:
- The HDF5 library provides flexible means
- of specifying a fill value,
- of specifying when space will be allocated for a dataset, and
- of specifying when fill values will be written to a dataset.
- For further information on these topics, see the document
-
- Fill Value and Dataset Storage Allocation Issues in HDF5
- and the descriptions of the following HDF5 functions in this
- HDF5 Reference Manual:
-
- |
- H5Dfill - H5Pset_fill_value - H5Pget_fill_value - H5Pfill_value_defined
- |
- H5Pset_fill_time - H5Pget_fill_time - H5Pset_alloc_time - H5Pget_alloc_time
- |
H5Dcreate
can fail if there has been an error
- in setting up an element of the dataset creation property list.
- In such cases, each item in the property list must be examined
- to ensure that the setup satisfies all required conditions.
- This problem is most likely to occur with the use of filters.
-
- For example, H5Dcreate
will fail without a meaningful
- explanation if
-
pixels_per_block
- is set to an inappropriate value.
-
- In such a case, one would refer to the description of
- H5Pset_szip
,
- looking for any conditions or requirements that might affect the
- local computing environment.
-
-
hid_t loc_id |
- IN: Identifier of the file or group - within which to create the dataset. |
const char * name |
- IN: The name of the dataset to create. |
hid_t type_id |
- IN: Identifier of the datatype to use - when creating the dataset. |
hid_t space_id |
- IN: Identifier of the dataspace to use - when creating the dataset. |
hid_t create_plist_id |
- IN: Identifier of the set creation property list. |
-SUBROUTINE h5dcreate_f(loc_id, name, type_id, space_id, dset_id, &
-                       hdferr, creation_prp)
-  IMPLICIT NONE
-  INTEGER(HID_T), INTENT(IN) :: loc_id    ! File or group identifier
-  CHARACTER(LEN=*), INTENT(IN) :: name    ! Name of the dataset
-  INTEGER(HID_T), INTENT(IN) :: type_id   ! Datatype identifier
-  INTEGER(HID_T), INTENT(IN) :: space_id  ! Dataspace identifier
-  INTEGER(HID_T), INTENT(OUT) :: dset_id  ! Dataset identifier
-  INTEGER, INTENT(OUT) :: hdferr          ! Error code
-                                          ! 0 on success and -1 on failure
-  INTEGER(HID_T), OPTIONAL, INTENT(IN) :: creation_prp
-                                          ! Dataset creation property
-                                          ! list identifier; default
-                                          ! value is H5P_DEFAULT_F (6)
-END SUBROUTINE h5dcreate_f
H5Dextend
(hid_t dataset_id
,
- const hsize_t * size
- )
-H5Dextend
verifies that the dataset is at least of size
- size
.
- The dimensionality of size
is the same as that of
- the dataspace of the dataset being changed.
- This function cannot be applied to a dataset with fixed dimensions.
-
- Space on disk is immediately allocated for the new dataset extent
- if the dataset's space allocation time is set to
- H5D_ALLOC_TIME_EARLY
.
- Fill values will be written to the dataset if the dataset's fill time
- is set to H5D_FILL_TIME_IFSET
or
- H5D_FILL_TIME_ALLOC
.
- (Also see
- H5Pset_fill_time
- and
- H5Pset_alloc_time.)
-
-
hid_t dataset_id |
- IN: Identifier of the dataset. |
const hsize_t * size |
- IN: Array containing the new magnitude of each dimension. |
-SUBROUTINE h5dextend_f(dataset_id, size, hdferr)
-  IMPLICIT NONE
-  INTEGER(HID_T), INTENT(IN) :: dataset_id  ! Dataset identifier
-  INTEGER(HSIZE_T), DIMENSION(*), INTENT(IN) :: size
-                                            ! Array containing
-                                            ! dimensions' sizes
-  INTEGER, INTENT(OUT) :: hdferr            ! Error code
-                                            ! 0 on success and -1 on failure
-END SUBROUTINE h5dextend_f
H5Dfill
(
- const void *fill
,
- hid_t fill_type_id
,
- void *buf
,
- hid_t buf_type_id
,
- hid_t space_id
- )
-H5Dfill
explicitly fills
- the dataspace selection in memory, space_id
,
- with the fill value specified in fill
.
- If fill
is NULL
,
- a fill value of 0
(zero) is used.
-
- fill_type_id
specifies the datatype
- of the fill value.
- buf
specifies the buffer in which
- the dataspace elements will be written.
- buf_type_id
specifies the datatype of
- those data elements.
-
- Note that if the fill value datatype differs - from the memory buffer datatype, the fill value - will be converted to the memory buffer datatype - before filling the selection. -
const void *fill |
- IN: Pointer to the fill value to be used. |
hid_t fill_type_id |
- IN: Fill value datatype identifier. |
void *buf |
- IN/OUT: Pointer to the memory buffer containing the - selection to be filled. |
hid_t buf_type_id |
- IN: Datatype of dataspace elements to be filled. |
hid_t space_id |
- IN: Dataspace describing memory buffer and - containing the selection to be filled. |
-SUBROUTINE h5dfill_f(fill_value, space_id, buf, hdferr)
-  IMPLICIT NONE
-  TYPE, INTENT(IN) :: fill_value          ! Fill value; may have one of the
-                                          ! following types:
-                                          ! INTEGER, REAL, DOUBLE PRECISION,
-                                          ! CHARACTER
-  INTEGER(HID_T), INTENT(IN) :: space_id  ! Memory dataspace selection identifier
-  TYPE, DIMENSION(*) :: buf               ! Memory buffer to fill in; must have
-                                          ! the same datatype as fill value
-  INTEGER, INTENT(OUT) :: hdferr          ! Error code
-                                          ! 0 on success and -1 on failure
-END SUBROUTINE h5dfill_f
H5Dget_create_plist
(hid_t dataset_id
- )
-H5Dget_create_plist
returns an identifier for a
- copy of the dataset creation property list for a dataset.
- The creation property list identifier should be released with
- the H5Pclose
function.
-hid_t dataset_id |
- IN: Identifier of the dataset to query. |
-SUBROUTINE h5dget_create_plist_f(dataset_id, creation_prp, hdferr)
-  IMPLICIT NONE
-  INTEGER(HID_T), INTENT(IN) :: dataset_id    ! Dataset identifier
-  INTEGER(HID_T), INTENT(OUT) :: creation_prp ! Dataset creation
-                                              ! property list identifier
-  INTEGER, INTENT(OUT) :: hdferr              ! Error code
-                                              ! 0 on success and -1 on failure
-END SUBROUTINE h5dget_create_plist_f
H5Dget_offset
(hid_t dset_id
)
-H5Dget_offset
returns the address in the file
- of the dataset dset_id
.
- That address is expressed as the offset in bytes from
- the beginning of the file.
-hid_t dset_id |
- Dataset identifier. |
HADDR_UNDEF
, a negative value.
-H5Dget_space
(hid_t dataset_id
- )
-H5Dget_space
returns an identifier for a copy of the
- dataspace for a dataset.
- The dataspace identifier should be released with the
- H5Sclose
function.
-hid_t dataset_id |
- IN: Identifier of the dataset to query. |
-SUBROUTINE h5dget_space_f(dataset_id, dataspace_id, hdferr)
-  IMPLICIT NONE
-  INTEGER(HID_T), INTENT(IN) :: dataset_id    ! Dataset identifier
-  INTEGER(HID_T), INTENT(OUT) :: dataspace_id ! Dataspace identifier
-  INTEGER, INTENT(OUT) :: hdferr              ! Error code
-                                              ! 0 on success and -1 on failure
-END SUBROUTINE h5dget_space_f
H5Dget_space_status
(hid_t dset_id
,
- H5D_space_status_t *status
)
-H5Dget_space_status
determines whether space has been
- allocated for the dataset dset_id
.
-
- Space allocation status is returned in status
,
- which will have one of the following values:
-
- H5D_SPACE_STATUS_NOT_ALLOCATED
- | - Space has not been allocated for this dataset. - | |
- H5D_SPACE_STATUS_ALLOCATED
- | - Space has been allocated for this dataset. - | |
- H5D_SPACE_STATUS_PART_ALLOCATED
- | - Space has been partially allocated for this dataset. - (Used only for datasets with chunked storage.) - |
hid_t dset_id |
- IN: Identifier of the dataset to query. |
H5D_space_status_t *status |
- OUT: Space allocation status. |
-SUBROUTINE h5dget_space_status_f(dset_id, flag, hdferr)
-  IMPLICIT NONE
-  INTEGER(HID_T), INTENT(IN) :: dset_id  ! Dataset identifier
-  INTEGER, INTENT(OUT) :: flag           ! Status flag; possible values:
-                                         ! H5D_SPACE_STS_ERROR_F
-                                         ! H5D_SPACE_STS_NOT_ALLOCATED_F
-                                         ! H5D_SPACE_STS_PART_ALLOCATED_F
-                                         ! H5D_SPACE_STS_ALLOCATED_F
-  INTEGER, INTENT(OUT) :: hdferr         ! Error code
-                                         ! 0 on success and -1 on failure
-END SUBROUTINE h5dget_space_status_f
H5Dget_storage_size
(hid_t dataset_id
- )
-H5Dget_storage_size
returns the amount of storage
- that is required for the specified dataset, dataset_id
.
- For chunked datasets, this is the number of allocated chunks times
- the chunk size.
- The return value may be zero if no data has been stored.
-hid_t dataset_id |
- IN: Identifier of the dataset to query. |
-SUBROUTINE h5dget_storage_size_f(dset_id, size, hdferr)
-  IMPLICIT NONE
-  INTEGER(HID_T), INTENT(IN) :: dset_id  ! Dataset identifier
-  INTEGER(HSIZE_T), INTENT(OUT) :: size  ! Amount of storage required
-                                         ! for dataset
-  INTEGER, INTENT(OUT) :: hdferr         ! Error code
-                                         ! 0 on success and -1 on failure
-END SUBROUTINE h5dget_storage_size_f
H5Dget_type
(hid_t dataset_id
- )
-H5Dget_type
returns an identifier for a copy of the
- datatype for a dataset.
- The datatype should be released with the H5Tclose
function.
- - If a dataset has a named datatype, then an identifier to the - opened datatype is returned. - Otherwise, the returned datatype is read-only. - If atomization of the datatype fails, then the datatype is closed. -
hid_t dataset_id |
- IN: Identifier of the dataset to query. |
-SUBROUTINE h5dget_type_f(dataset_id, datatype_id, hdferr)
-  IMPLICIT NONE
-  INTEGER(HID_T), INTENT(IN) :: dataset_id   ! Dataset identifier
-  INTEGER(HID_T), INTENT(OUT) :: datatype_id ! Datatype identifier
-  INTEGER, INTENT(OUT) :: hdferr             ! Error code
-                                             ! 0 on success and -1 on failure
-END SUBROUTINE h5dget_type_f
H5Diterate
(
- void *buf
,
- hid_t type_id
,
- hid_t space_id
,
- H5D_operator_t operator
,
- void *operator_data
- )
-H5Diterate
iterates over all the elements selected
- in a memory buffer. The callback function is called once for each
- element selected in the dataspace.
-
- The selection in the dataspace is modified so that any elements
- already iterated over are removed from the selection if the
- iteration is interrupted (by the H5D_operator_t
- function returning non-zero) before the iteration is complete;
- the iteration may then be re-started by the user where it left off.
-
-
void *buf |
- IN/OUT: Pointer to the buffer in memory containing the - elements to iterate over. |
hid_t type_id |
- IN: Datatype identifier for the elements stored in
- buf . |
hid_t space_id |
- IN: Dataspace identifier for buf .
- Also contains the selection to iterate over. |
H5D_operator_t operator |
- IN: Function pointer to the routine to be called
- for each element in buf iterated over. |
void *operator_data |
- IN/OUT: Pointer to any user-defined data associated - with the operation. |
H5Dopen
(hid_t loc_id
,
- const char *name
- )
-H5Dopen
opens an existing dataset for access in the file
- or group specified in loc_id
. name
is
- a dataset name and is used to identify the dataset in the file.
-hid_t loc_id |
- IN: Identifier of the file or group - within which the dataset to be accessed will be found. |
const char * name |
- IN: The name of the dataset to access. |
-SUBROUTINE h5dopen_f(loc_id, name, dset_id, hdferr)
-  IMPLICIT NONE
-  INTEGER(HID_T), INTENT(IN) :: loc_id   ! File or group identifier
-  CHARACTER(LEN=*), INTENT(IN) :: name   ! Name of the dataset
-  INTEGER(HID_T), INTENT(OUT) :: dset_id ! Dataset identifier
-  INTEGER, INTENT(OUT) :: hdferr         ! Error code
-                                         ! 0 on success and -1 on failure
-END SUBROUTINE h5dopen_f
H5Dread
(hid_t dataset_id
,
- hid_t mem_type_id
,
- hid_t mem_space_id
,
- hid_t file_space_id
,
- hid_t xfer_plist_id
,
- void * buf
- )
-H5Dread
reads a (partial) dataset, specified by its
- identifier dataset_id
, from the
- file into an application memory buffer buf
.
- Data transfer properties are defined by the argument
- xfer_plist_id
.
- The memory datatype of the (partial) dataset is identified by
- the identifier mem_type_id
.
- The part of the dataset to read is defined by
- mem_space_id
and file_space_id
.
-
- file_space_id
is used to specify only the selection within
- the file dataset's dataspace. Any dataspace specified in file_space_id
- is ignored by the library and the dataset's dataspace is always used.
- file_space_id
can be the constant H5S_ALL
.
- which indicates that the entire file dataspace, as defined by the
- current dimensions of the dataset, is to be selected.
-
- mem_space_id
is used to specify both the memory dataspace
- and the selection within that dataspace.
- mem_space_id
can be the constant H5S_ALL
,
- in which case the file dataspace is used for the memory dataspace and
- the selection defined with file_space_id
is used for the
- selection within that dataspace.
-
- If raw data storage space has not been allocated for the dataset
- and a fill value has been defined, the returned buffer buf
- is filled with the fill value.
-
- The behavior of the library for the various combinations of valid
- dataspace identifiers and H5S_ALL for the mem_space_id
and the
- file_space_id
parameters is described below:
-
-
-
- mem_space_id
- |
-
- file_space_id
- |
- - Behavior - | -
---|---|---|
- valid dataspace identifier - | -- valid dataspace identifier - | -
- mem_space_id specifies the memory dataspace and the
- selection within it.
- file_space_id specifies the selection within the file
- dataset's dataspace.
- |
-
- H5S_ALL
- |
- - valid dataspace identifier - | -
- The file dataset's dataspace is used for the memory dataspace and the
- selection specified with file_space_id specifies the
- selection within it.
- The combination of the file dataset's dataspace and the selection from
- file_space_id is used for memory also.
- |
-
- valid dataspace identifier - | -
- H5S_ALL
- |
-
- mem_space_id specifies the memory dataspace and the
- selection within it.
- The selection within the file dataset's dataspace is set to the "all"
- selection.
- |
-
- H5S_ALL
- |
-
- H5S_ALL
- |
- - The file dataset's dataspace is used for the memory dataspace and the - selection within the memory dataspace is set to the "all" selection. - The selection within the file dataset's dataspace is set to the "all" - selection. - | -
- Setting an H5S_ALL
selection indicates that the entire dataspace, as
- defined by the current dimensions of a dataspace, will be selected.
- The number of elements selected in the memory dataspace must match the
- number of elements selected in the file dataspace.
-
- xfer_plist_id
can be the constant H5P_DEFAULT
,
- in which case the default data transfer properties are used.
-
- Data is automatically converted from the file datatype - and dataspace to the memory datatype and dataspace - at the time of the read. See the - Data Conversion - section of The Data Type Interface (H5T) in the - HDF5 User's Guide for a discussion of - data conversion, including the range of conversions currently - supported by the HDF5 libraries. -
hid_t dataset_id |
- IN: Identifier of the dataset read from. |
hid_t mem_type_id |
- IN: Identifier of the memory datatype. |
hid_t mem_space_id |
- IN: Identifier of the memory dataspace. |
hid_t file_space_id |
- IN: Identifier of the dataset's dataspace in the file. |
hid_t xfer_plist_id |
- IN: Identifier of a transfer property list - for this I/O operation. |
void * buf |
- OUT: Buffer to receive data read from file. |
-SUBROUTINE h5dread_f(dset_id, mem_type_id, buf, dims, hdferr, &
-                     mem_space_id, file_space_id, xfer_prp)
-  IMPLICIT NONE
-  INTEGER(HID_T), INTENT(IN) :: dset_id      ! Dataset identifier
-  INTEGER(HID_T), INTENT(IN) :: mem_type_id  ! Memory datatype identifier
-  TYPE, INTENT(INOUT) :: buf                 ! Data buffer; may be a scalar
-                                             ! or an array
-  INTEGER(HSIZE_T), DIMENSION(*), INTENT(IN) :: dims
-                                             ! Array to hold corresponding
-                                             ! dimension sizes of data
-                                             ! buffer buf;
-                                             ! dim(k) has value of the k-th
-                                             ! dimension of buffer buf;
-                                             ! values are ignored if buf is
-                                             ! a scalar
-  INTEGER, INTENT(OUT) :: hdferr             ! Error code
-                                             ! 0 on success and -1 on failure
-  INTEGER(HID_T), OPTIONAL, INTENT(IN) :: mem_space_id
-                                             ! Memory dataspace identifier
-                                             ! Default value is H5S_ALL_F
-  INTEGER(HID_T), OPTIONAL, INTENT(IN) :: file_space_id
-                                             ! File dataspace identifier
-                                             ! Default value is H5S_ALL_F
-  INTEGER(HID_T), OPTIONAL, INTENT(IN) :: xfer_prp
-                                             ! Transfer property list identifier
-                                             ! Default value is H5P_DEFAULT_F
-END SUBROUTINE h5dread_f
H5Dvlen_get_buf_size
(hid_t dataset_id
,
- hid_t type_id
,
- hid_t space_id
,
- hsize_t *size
- )
-H5Dvlen_get_buf_size
determines the number of bytes
- required to store the VL data from the dataset, using the
- space_id
for the selection in the dataset on
- disk and the type_id
for the memory representation
- of the VL data in memory.
-
- *size
is returned with the number of bytes
- required to store the VL data in memory.
-
hid_t dataset_id |
- IN: Identifier of the dataset to query. |
hid_t type_id |
- IN: Datatype identifier. |
hid_t space_id |
- IN: Dataspace identifier. |
hsize_t *size |
- OUT: The size in bytes of the memory - buffer required to store the VL data. |
H5Dvlen_get_buf_size
;
- corresponding functionality is provided by the FORTRAN function
- h5dvlen_get_max_len_f
.
- -SUBROUTINE h5dvlen_get_max_len_f(dset_id, type_id, space_id, elem_len, hdferr)
-  IMPLICIT NONE
-  INTEGER(HID_T), INTENT(IN) :: dset_id    ! Dataset identifier
-  INTEGER(HID_T), INTENT(IN) :: type_id    ! Datatype identifier
-  INTEGER(HID_T), INTENT(IN) :: space_id   ! Dataspace identifier
-  INTEGER(SIZE_T), INTENT(OUT) :: elem_len ! Maximum length of the element
-  INTEGER, INTENT(OUT) :: hdferr           ! Error code
-                                           ! 0 on success and -1 on failure
-END SUBROUTINE h5dvlen_get_max_len_f
H5Dvlen_reclaim
(hid_t type_id
,
- hid_t space_id
,
- hid_t plist_id
,
- void *buf
- )
-H5Dvlen_reclaim
reclaims memory buffers created to
- store VL datatypes.
-
- The type_id
must be the datatype stored in the buffer.
- The space_id
describes the selection for the memory buffer
- to free the VL datatypes within.
- The plist_id
is the dataset transfer property list which
- was used for the I/O transfer to create the buffer.
- And buf
is the pointer to the buffer to be reclaimed.
-
- The VL structures (hvl_t
) in the user's buffer are
- modified to zero out the VL information after the memory has been reclaimed.
-
- If nested VL datatypes were used to create the buffer, - this routine frees them from the bottom up, releasing all - the memory without creating memory leaks. -
hid_t type_id |
- IN: Identifier of the datatype. |
hid_t space_id |
- IN: Identifier of the dataspace. |
hid_t plist_id |
- IN: Identifier of the property list used to create the buffer. |
void *buf |
- IN: Pointer to the buffer to be reclaimed. |
H5Dwrite
(hid_t dataset_id
,
- hid_t mem_type_id
,
- hid_t mem_space_id
,
- hid_t file_space_id
,
- hid_t xfer_plist_id
,
- const void * buf
- )
-H5Dwrite
writes a (partial) dataset, specified by its
- identifier dataset_id
, from the
- application memory buffer buf
into the file.
- Data transfer properties are defined by the argument
- xfer_plist_id
.
- The memory datatype of the (partial) dataset is identified by
- the identifier mem_type_id
.
- The part of the dataset to write is defined by
- mem_space_id
and file_space_id
.
-
- file_space_id
is used to specify only the selection within
- the file dataset's dataspace. Any dataspace specified in file_space_id
- is ignored by the library and the dataset's dataspace is always used.
- file_space_id
can be the constant H5S_ALL
,
- which indicates that the entire file dataspace, as defined by the
- current dimensions of the dataset, is to be selected.
-
- mem_space_id
is used to specify both the memory dataspace
- and the selection within that dataspace.
- mem_space_id
can be the constant H5S_ALL
,
- in which case the file dataspace is used for the memory dataspace and
- the selection defined with file_space_id
is used for the
- selection within that dataspace.
-
- The behavior of the library for the various combinations of valid
- dataspace IDs and H5S_ALL for the mem_space_id
and the
- file_space_id
parameters is described below:
-
-
-
- mem_space_id
- |
-
- file_space_id
- |
- - Behavior - | -
---|---|---|
- valid dataspace identifier - | -- valid dataspace identifier - | -
- mem_space_id specifies the memory dataspace and the
- selection within it.
- file_space_id specifies the selection within the file
- dataset's dataspace.
- |
-
- H5S_ALL - | -- valid dataspace identifier - | -
- The file dataset's dataspace is used for the memory dataspace and the
- selection specified with file_space_id specifies the
- selection within it.
- The combination of the file dataset's dataspace and the selection from
- file_space_id is used for memory also.
- |
-
- valid dataspace identifier - | -- H5S_ALL - | -
- mem_space_id specifies the memory dataspace and the
- selection within it.
- The selection within the file dataset's dataspace is set to the "all"
- selection.
- |
-
- H5S_ALL - | -- H5S_ALL - | -- The file dataset's dataspace is used for the memory dataspace and the - selection within the memory dataspace is set to the "all" selection. - The selection within the file dataset's dataspace is set to the "all" - selection. - | -
- Setting an "all" selection indicates that the entire dataspace, as - defined by the current dimensions of a dataspace, will be selected. - The number of elements selected in the memory dataspace must match the - number of elements selected in the file dataspace. -
- xfer_plist_id
can be the constant H5P_DEFAULT
,
- in which case the default data transfer properties are used.
-
- Writing to a dataset will fail if the HDF5 file was - not opened with write access permissions. -
- Data is automatically converted from the memory datatype - and dataspace to the file datatype and dataspace - at the time of the write. See the - Data Conversion - section of The Data Type Interface (H5T) in the - HDF5 User's Guide for a discussion of - data conversion, including the range of conversions currently - supported by the HDF5 libraries. -
- If the dataset's space allocation time is set to
- H5D_ALLOC_TIME_LATE
or H5D_ALLOC_TIME_INCR
- and the space for the dataset has not yet been allocated,
- that space is allocated when the first raw data is written to the
- dataset.
- Unused space in the dataset will be written with fill values at the
- same time if the dataset's fill time is set to
- H5D_FILL_TIME_IFSET
or H5D_FILL_TIME_ALLOC
.
- (Also see
- H5Pset_fill_time
- and
- H5Pset_alloc_time.)
-
- If a dataset's storage layout is 'compact', care must be taken when - writing data to the dataset in parallel. A compact dataset's raw data - is cached in memory and may be flushed to the file from any of the - parallel processes, so parallel applications should always attempt to - write identical data to the dataset from all processes. - -
hid_t dataset_id |
- IN: Identifier of the dataset to write to. |
hid_t mem_type_id |
- IN: Identifier of the memory datatype. |
hid_t mem_space_id |
- IN: Identifier of the memory dataspace. |
hid_t file_space_id |
- IN: Identifier of the dataset's dataspace in the file. |
hid_t xfer_plist_id |
- IN: Identifier of a transfer property list - for this I/O operation. |
const void * buf |
- IN: Buffer with data to be written to the file. |
-SUBROUTINE h5dwrite_f(dset_id, mem_type_id, buf, dims, hdferr, &
-                      mem_space_id, file_space_id, xfer_prp)
-  IMPLICIT NONE
-  INTEGER(HID_T), INTENT(IN) :: dset_id      ! Dataset identifier
-  INTEGER(HID_T), INTENT(IN) :: mem_type_id  ! Memory datatype identifier
-  TYPE, INTENT(IN) :: buf                    ! Data buffer; may be a scalar
-                                             ! or an array
-  INTEGER(HSIZE_T), DIMENSION(*), INTENT(IN) :: dims
-                                             ! Array to hold corresponding
-                                             ! dimension sizes of data
-                                             ! buffer buf; dim(k) has value
-                                             ! of the k-th dimension of
-                                             ! buffer buf; values are
-                                             ! ignored if buf is a scalar
-  INTEGER, INTENT(OUT) :: hdferr             ! Error code
-                                             ! 0 on success and -1 on failure
-  INTEGER(HID_T), OPTIONAL, INTENT(IN) :: mem_space_id
-                                             ! Memory dataspace identifier
-                                             ! Default value is H5S_ALL_F
-  INTEGER(HID_T), OPTIONAL, INTENT(IN) :: file_space_id
-                                             ! File dataspace identifier
-                                             ! Default value is H5S_ALL_F
-  INTEGER(HID_T), OPTIONAL, INTENT(IN) :: xfer_prp
-                                             ! Transfer property list
-                                             ! identifier; default value
-                                             ! is H5P_DEFAULT_F
-END SUBROUTINE h5dwrite_f
-HDF5 documents and links -Introduction to HDF5 -HDF5 User Guide - - |
-
-And in this document, the
-HDF5 Reference Manual
- -H5IM -H5LT -H5PT -H5TB - -H5 -H5A -H5D -H5E -H5F -H5G -H5I -H5P - -H5R -H5S -H5T -H5Z -Tools -Datatypes - |
-The C Interfaces:
-The Error interface provides error handling in the form of a stack.
-The FUNC_ENTER()
macro clears the error stack whenever
-an interface function is entered.
-When an error is detected, an entry is pushed onto the stack.
-As the functions unwind, additional entries are pushed onto the stack.
-The API function will return some indication that an error occurred and
-the application can print the error stack.
-
-Certain API functions in the H5E package, such as H5Eprint
,
-do not clear the error stack. Otherwise, any function which
-does not have an underscore immediately after the package name
-will clear the error stack. For instance, H5Fopen
-clears the error stack while H5F_open
does not.
-
-An error stack has a fixed maximum size. -If this size is exceeded then the stack will be truncated and only the -inner-most functions will have entries on the stack. -This is expected to be a rare condition. -
-Each thread has its own error stack, but since -multi-threading has not been added to the library yet, this -package maintains a single error stack. The error stack is -statically allocated to reduce the complexity of handling -errors within the H5E package. - - - - - -
H5Eauto_is_stack
(hid_t
- estack_id
, unsigned *is_stack
)
- H5Eauto_is_stack
determines whether the automatic error
- reporting function for an error stack conforms to the
- H5E_auto_stack_t
typedef or the
- H5E_auto_t
typedef.
-
- The is_stack
parameter is set to 1
- if the error stack conforms to H5E_auto_stack_t
- and to 0 if it conforms to H5E_auto_t
.
-
hid_t estack_id |
- The error stack identifier |
unsigned* is_stack |
- A flag indicating which error stack typedef - the specified error stack conforms to. |
H5Eclear
(void
)
-H5Eclear
clears the error stack for the current thread.
-
- The stack is also cleared whenever an API function is called,
- with certain exceptions (for instance, H5Eprint
).
-
- H5Eclear
can fail if there are problems initializing
- the library.
-
- Note:
- As of HDF5 Release 1.8, H5Eclear_stack
- replaces H5Eclear
and H5Eclear
is designated
- a deprecated function. H5Eclear
may be removed
- from the library at a future release.
-
None |
-SUBROUTINE h5eclear_f(hdferr)
-  IMPLICIT NONE
-  INTEGER, INTENT(OUT) :: hdferr  ! Error code
-END SUBROUTINE h5eclear_f
H5Eclear_stack
(hid_t estack_id
)
-H5Eclear_stack
clears the error stack specified
- by estack_id
for the current thread.
-
- If the value of estack_id
is H5E_DEFAULT
,
- the current error stack will be cleared.
-
- The current error stack is also cleared whenever an API function
- is called, with certain exceptions
- (for instance, H5Eprint
).
-
- H5Eclear_stack
can fail if there are problems initializing
- the library.
-
hid_t estack_id |
- IN: Error stack identifier. |
H5Eclose_msg
(hid_t
- mesg_id
)
- H5Eclose_msg
closes an error message identifier,
- which can be either a major or minor message.
- hid_t mesg_id |
- IN: Error message identifier. |
H5Eclose_stack
(hid_t
- estack_id
)
- H5Eclose_stack
closes the object handle for an
- error stack and releases its resources. H5E_DEFAULT
- cannot be closed.
- hid_t estack_id |
- IN: Error stack identifier. |
H5Ecreate_msg
(hid_t
- class
, H5E_type_t msg_type
,
- const char* mesg
)
- H5Ecreate_msg
adds an error message to an error class
- defined by a client library or application program. The error message
- can be either major or minor, as indicated
- by the parameter msg_type
.
- hid_t class |
- IN: Error class identifier. |
H5E_type_t msg_type |
- IN: The type of the error message.
- - Valid values are H5E_MAJOR and
- H5E_MINOR . |
const char* mesg |
- IN: Major error message. |
H5Eget_auto
(H5E_auto_t * func
,
- void **client_data
- )
-H5Eget_auto
returns the current settings for the
- automatic error stack traversal function, func
,
- and its data, client_data
. Either (or both)
- arguments may be null, in which case the value is not returned.
-
- Note:
- As of HDF5 Release 1.8, H5Eget_auto_stack
- replaces H5Eget_auto
and H5Eget_auto
is designated
- a deprecated function. H5Eget_auto
may be removed
- from the library at a future release.
-
H5E_auto_t * func |
- OUT: Current setting for the function to be called upon an - error condition. |
void **client_data |
- OUT: Current setting for the data passed to the error function. |
H5Eget_auto_stack
(
- hid_t estack_id
,
- H5E_auto_stack_t * func
,
- void **client_data
- )
-H5Eget_auto_stack
returns the current settings for the
- automatic error stack traversal function, func
,
- and its data, client_data
, that are associated with
- the error stack specified by estack_id
.
-
- Either or both of the func
and client_data
- arguments may be null, in which case the value is not returned.
-
hid_t estack_id
- |
- IN: Error stack identifier.
- H5E_DEFAULT indicates the current stack. |
H5E_auto_stack_t * func |
- OUT: The function currently set to be - called upon an error condition. |
void **client_data |
- OUT: Data currently set to be passed - to the error function. |
H5Eget_class_name
(hid_t
- class_id
, char* name
,
- size_t size
)
- H5Eget_class_name
retrieves the name of the error class
- specified by the class identifier.
- If a non-NULL pointer is passed in for name
 and
- size
 is greater than zero, up to size
- characters of the class name are returned. The length of the error
- class name is also returned.
- If NULL is passed in as name, only the length of the
- class name is returned. A returned length of zero means the class has no name.
- The caller is responsible for allocating a buffer large enough for the name.
- hid_t class_id |
- IN: Error class identifier. |
char* name |
- OUT: The name of the class to be queried. |
size_t size |
- IN: The length of class name to be returned - by this function. |
H5Eget_current_stack
(void)
- H5Eget_current_stack
registers the current error stack,
- returns an object identifier, and clears the current error stack.
- An empty error stack will also be assigned an identifier.
- None. |
H5Eget_major
(H5E_major_t n
)
-H5Eget_major
returns a
- constant character string that describes the error.
-
- Note:
- As of HDF5 Release 1.8, H5Eget_msg
- replaces H5Eget_major
and H5Eget_major
is designated
- a deprecated function. H5Eget_major
may be removed
- from the library at a future release.
-
H5E_major_t n |
- IN: Major error number. |
-SUBROUTINE h5eget_major_f(error_no, name, hdferr)
- INTEGER, INTENT(IN) :: error_no       ! Major error number
- CHARACTER(LEN=*), INTENT(OUT) :: name ! Error message string
- INTEGER, INTENT(OUT) :: hdferr        ! Error code
-
-END SUBROUTINE h5eget_major_f
H5Eget_minor
(H5E_minor_t n
)
-H5Eget_minor
returns a
- constant character string that describes the error.
-
- Note:
- As of HDF5 Release 1.8, H5Eget_msg
- replaces H5Eget_minor
and H5Eget_minor
is designated
- a deprecated function. H5Eget_minor
may be removed
- from the library at a future release.
-
H5E_minor_t n |
- IN: Minor error number. |
-SUBROUTINE h5eget_minor_f(error_no, name, hdferr)
- INTEGER, INTENT(IN) :: error_no       ! Minor error number
- CHARACTER(LEN=*), INTENT(OUT) :: name ! Error message string
- INTEGER, INTENT(OUT) :: hdferr        ! Error code
-
-END SUBROUTINE h5eget_minor_f
H5Eget_msg
(hid_t
- mesg_id
, H5E_type_t* mesg_type
,
- char* mesg
, size_t size
)
- H5Eget_msg
retrieves the error message including its
- length and type. The error message is specified by mesg_id
.
- The caller is responsible for passing in a buffer large enough for the message.
- If mesg
 is not NULL and size
 is greater than zero,
- up to size
 characters of the error message are returned. The length of the
- message is also returned. If NULL is passed in as mesg
, only the
- length and type of the message are returned. A return value of zero
- means there is no message.
- hid_t mesg_id |
- IN: Identifier of the error message to be queried. |
H5E_type_t* mesg_type |
- OUT: The type of the error message.
- - Valid values are H5E_MAJOR and
- H5E_MINOR . |
char* mesg |
- OUT: Error message buffer. |
size_t size |
- IN: The length of error message to be returned - by this function. |
H5Eget_num
(hid_t estack_id
)
- H5Eget_num
retrieves the number of error records
- in the error stack specified by estack_id
- (including major, minor messages and description).
- hid_t estack_id
- |
- IN: Error stack identifier. |
H5Epop
(hid_t
- estack_id
, size_t count
)
- H5Epop
deletes the number of error records specified
- in count
from the top of the error stack
- specified by estack_id
- (including major, minor messages and description).
- The number of error messages to be deleted is specified by count.
- hid_t estack_id
- |
- IN: Error stack identifier. |
size_t count |
- IN: The number of error messages to be deleted - from the top of error stack. |
H5Eprint
(FILE * stream
)
-H5Eprint
prints the error stack on the specified
- stream, stream
.
- Even if the error stack is empty, a one-line message will be printed:
- HDF5-DIAG: Error detected in thread 0.
-
- H5Eprint
is a convenience function for
- H5Ewalk
with a function that prints error messages.
- Users are encouraged to write their own more specific error handlers.
-
- Note:
- As of HDF5 Release 1.8, H5Eprint_stack
- replaces H5Eprint
and H5Eprint
is designated
- a deprecated function. H5Eprint
may be removed
- from the library at a future release.
-
FILE * stream |
- IN: File pointer, or stderr if NULL. |
-SUBROUTINE h5eprint_f(hdferr, name)
- CHARACTER(LEN=*), OPTIONAL, INTENT(IN) :: name ! File name
- INTEGER, INTENT(OUT) :: hdferr                 ! Error code
-
-END SUBROUTINE h5eprint_f
H5Eprint_stack
(
- hid_t estack_id
,
- FILE * stream
)
-H5Eprint_stack
prints the error stack specified by
- estack_id
on the specified stream, stream
.
- Even if the error stack is empty, a one-line message of the
- following form will be printed:
- HDF5-DIAG: Error detected in HDF5 library version: 1.5.62
- thread 0.
- - A similar line will appear before the error messages of each - error class stating the library name, library version number, and - thread identifier. -
- If estack_id
is H5E_DEFAULT
,
- the current error stack will be printed.
-
- H5Eprint_stack
is a convenience function for
- H5Ewalk_stack
with a function that prints error messages.
- Users are encouraged to write their own more specific error handlers.
-
hid_t estack_id |
- IN: Identifier of the error stack to be printed.
- If the identifier is H5E_DEFAULT ,
- the current error stack will be printed. |
FILE * stream |
- IN: File pointer, or stderr if NULL. |
H5Epush
(
- const char *file
,
- const char *func
,
- unsigned line
,
- H5E_major_t maj_num
,
- H5E_minor_t min_num
,
- const char *str
- )
-H5Epush
pushes a new error record onto the
- error stack for the current thread.
-
- The error has major and minor numbers maj_num
and
- min_num
,
- the function func
where the error was detected,
- the name of the file file
where the error was detected,
- the line line
within that file,
- and an error description string str
.
-
- The function name, filename, and error description strings - must be statically allocated. -
- Note:
- As of HDF5 Release 1.8, H5Epush_stack
- replaces H5Epush
and H5Epush
is designated
- a deprecated function. H5Epush
may be removed
- from the library at a future release.
-
H5Epush
:
- const char *file |
- IN: Name of the file in which the error - was detected. |
const char *func |
- IN: Name of the function in which the error - was detected. |
unsigned line |
- IN: Line within the file at which the error - was detected. |
H5E_major_t maj_num |
- IN: Major error number. |
H5E_minor_t min_num |
- IN: Minor error number. |
const char *str |
- IN: Error description string. |
H5Epush_stack
(
- hid_t estack_id
,
- const char *file
,
- const char *func
,
- unsigned line
,
- hid_t class_id
,
- hid_t major_id
,
- hid_t minor_id
,
- const char *msg
,
- ...)
-H5Epush_stack
pushes a new error record onto the
- error stack for the current thread.
-
- The error record contains
- the error class identifier class_id
,
- the major and minor message identifiers major_id
and
- minor_id
,
- the function name func
where the error was detected,
- the filename file
and line number line
- within that file where the error was detected, and
- an error description msg
.
-
- The major and minor errors must be in the same error class. -
- The function name, filename, and error description strings - must be statically allocated. -
- msg
can be a format control string with
- additional arguments. This design of appending additional arguments
- is similar to the system and C functions printf
and
- fprintf
.
-
hid_t estack_id |
- IN: Identifier of the error stack to which
- the error record is to be pushed.
- If the identifier is H5E_DEFAULT , the error record
- will be pushed to the current stack. |
const char *file |
- IN: Name of the file in which the error was - detected. |
const char *func |
- IN: Name of the function in which the error was - detected. |
unsigned line |
- IN: Line number within the file at which the - error was detected. |
hid_t class_id |
- IN: Error class identifier. |
hid_t major_id |
- IN: Major error identifier. |
hid_t minor_id |
- IN: Minor error identifier. |
const char *msg |
- IN: Error description string. |
H5Eregister_class
(const char*
- cls_name
, const char* lib_name
,
- const char* version
)
- H5Eregister_class
registers a client library or
- application program with the HDF5 error API so that the client library
- or application program can report errors together with the HDF5 library.
- The function returns an identifier for the new error class for use in
- further error operations. The library name and version number will
- be printed in error messages as a preamble.
- const char* cls_name |
- IN: Name of the error class. |
const char* lib_name |
- IN: Name of the client library or application - to which the error class belongs. |
const char* version |
- IN: Version of the client library or application - to which the error class belongs. - A NULL can be passed in. |
H5Eset_auto
(H5E_auto_t func
,
- void *client_data
- )
-H5Eset_auto
turns on or off automatic printing of
- errors. When turned on (non-null func
pointer),
- any API function which returns an error indication will
- first call func
, passing it client_data
- as an argument.
-
- When the library is first initialized the auto printing function
- is set to H5Eprint
(cast appropriately) and
- client_data
is the standard error stream pointer,
- stderr
.
-
- Automatic stack traversal is always in the
- H5E_WALK_DOWNWARD
direction.
-
- Note:
- As of HDF5 Release 1.8, H5Eset_auto_stack
- replaces H5Eset_auto
and H5Eset_auto
is designated
- a deprecated function. H5Eset_auto
may be removed
- from the library at a future release.
-
H5E_auto_t func |
- IN: Function to be called upon an error condition. |
void *client_data |
- IN: Data passed to the error function. |
-SUBROUTINE h5eset_auto_f(printflag, hdferr)
- INTEGER, INTENT(IN) :: printflag ! Flag to turn automatic error
-                                  ! printing on or off;
-                                  ! possible values are:
-                                  ! printon (1)
-                                  ! printoff (0)
- INTEGER, INTENT(OUT) :: hdferr   ! Error code
-
-END SUBROUTINE h5eset_auto_f
H5Eset_auto_stack
(
- hid_t estack_id
,
- H5E_auto_stack_t func
,
- void *client_data
- )
-H5Eset_auto_stack
turns on or off automatic printing of
- errors for the error stack specified with estack_id
.
- An estack_id
value of H5E_DEFAULT
- indicates the current stack.
-
- When automatic printing is turned on,
- by the use of a non-null func
pointer,
- any API function which returns an error indication will
- first call func
, passing it client_data
- as an argument.
-
- When the library is first initialized, the auto printing function
- is set to H5Eprint_stack
(cast appropriately) and
- client_data
is the standard error stream pointer,
- stderr
.
-
- Automatic stack traversal is always in the
- H5E_WALK_DOWNWARD
direction.
-
- Automatic error printing is turned off with a
- H5Eset_auto_stack
call with a NULL
- func
pointer.
-
hid_t estack_id |
- IN: Error stack identifier. |
H5E_auto_stack_t func |
- IN: Function to be called upon an error - condition. |
void *client_data |
- IN: Data passed to the error function. |
H5Eset_current_stack
(hid_t
- estack_id
)
- H5Eset_current_stack
replaces the content of
- the current error stack with a copy of the content of the error stack
- specified by estack_id
.
- hid_t estack_id |
- IN: Error stack identifier. |
H5Eunregister_class
(hid_t
- class_id
)
- H5Eunregister_class
removes the error class specified
- by class_id
.
- All the major and minor errors in this class will also be closed.
- hid_t class_id |
- IN: Error class identifier. |
H5Ewalk
(H5E_direction_t direction
,
- H5E_walk_t func
,
- void * client_data
- )
-H5Ewalk
walks the error stack for the current thread
- and calls the specified function for each error along the way.
-
- direction
determines whether the stack is walked
- from the inside out or the outside in.
- A value of H5E_WALK_UPWARD
means begin with the
- most specific error and end at the API;
- a value of H5E_WALK_DOWNWARD
means to start at the
- API and end at the inner-most function where the error was first
- detected.
-
- func
will be called for each error in the error stack.
- Its arguments will include an index number (beginning at zero
- regardless of stack traversal direction), an error stack entry,
- and the client_data
pointer passed to
- H5E_print
.
- The H5E_walk_t
prototype is as follows:
-
- typedef
herr_t (*H5E_walk_t)(
int n,
- H5E_error_t *err_desc,
- void *client_data)
-
- where the parameters have the following meanings: -
n
: a counter of the number of times func
 has been called for this traversal of the stack, beginning at zero.
- err_desc
: an error description of type H5E_error_t
 (this structure is declared in hdf5/src/H5Epublic.h
.
- That file also contains the definitive list of major
- and minor error codes. That information will
- eventually be presented as an appendix to this
- Reference Manual.)
- client_data
: the client_data pointer passed in by the caller.
-
- H5Ewalk
can fail if there are problems initializing
- the library.
-
- Note:
- As of HDF5 Release 1.8, H5Ewalk_stack
- replaces H5Ewalk
and H5Ewalk
is designated
- a deprecated function. H5Ewalk
may be removed
- from the library at a future release.
-
H5E_direction_t direction |
- IN: Direction in which the error stack is to be walked. |
H5E_walk_t func |
- IN: Function to be called for each error encountered. |
void * client_data |
- IN: Data to be passed with func . |
H5Ewalk_stack
(
- hid_t estack_id
,
- H5E_direction_t direction
,
- H5E_walk_t func
,
- void * client_data
- )
-H5Ewalk_stack
walks the error stack specified by
- estack_id
for the current thread and calls the function
- specified in func
for each error along the way.
-
- If the value of estack_id
is H5E_DEFAULT
,
- then H5Ewalk_stack
walks the current error stack.
-
- direction
specifies whether the stack is walked
- from the inside out or the outside in.
- A value of H5E_WALK_UPWARD
means to begin with the
- most specific error and end at the API;
- a value of H5E_WALK_DOWNWARD
means to start at the
- API and end at the innermost function where the error was first
- detected.
-
- func
, a function compliant with the
- H5E_walk_t
prototype, will be called for each error
- in the error stack.
- Its arguments will include an index number n
- (beginning at zero regardless of stack traversal direction),
- an error stack entry err_desc
,
- and the client_data
pointer passed to
- H5E_print
.
- The H5E_walk_t
prototype is as follows:
-
- typedef
herr_t (*H5E_walk_t)(
int n,
- H5E_error_t *err_desc,
- void *client_data)
-
- where the parameters have the following meanings: -
n
: a counter of the number of times func
 has been called for this traversal of the stack, beginning at zero.
- err_desc
: an error description of type H5E_error_t
 (this structure is declared in hdf5/src/H5Epublic.h
.
- That file also contains the definitive list of major
- and minor error codes; that information will
- eventually be presented as an appendix to this
- HDF5 Reference Manual.)
- client_data
: the client_data pointer passed in by the caller.
-
- H5Ewalk_stack
can fail if there are problems initializing
- the library.
-
hid_t estack_id |
- IN: Error stack identifier. |
H5E_direction_t direction |
- IN: Direction in which the error stack is - to be walked. |
H5E_walk_t func |
- IN: Function to be called for each error - encountered. |
void * client_data |
- IN: Data to be passed with func .
- |
H5Ewalk_cb
(int n
,
- H5E_error_t *err_desc
,
- void *client_data
- )
-H5Ewalk_cb
is a default error stack traversal callback
- function that prints error messages to the specified output stream.
- It is not meant to be called directly but rather as an
- argument to the H5Ewalk
function.
- This function is called also by H5Eprint
.
- Application writers are encouraged to use this function as a
- model for their own error stack walking functions.
-
- n
is a counter for how many times this function
- has been called for this particular traversal of the stack.
- It always begins at zero for the first error on the stack
- (either the top or bottom error, or even both, depending on
- the traversal direction and the size of the stack).
-
- err_desc
is an error description. It contains all the
- information about a particular error.
-
- client_data
is the same pointer that was passed as the
- client_data
argument of H5Ewalk
.
- It is expected to be a file pointer (or stderr if NULL).
-
int n |
- IN/OUT: Number of times this function has been called - for this traversal of the stack. |
H5E_error_t *err_desc |
- OUT: Error description. |
void *client_data |
- IN: A file pointer, or stderr if NULL. |
-The C Interfaces:
H5Fclose
(hid_t file_id
- )
-H5Fclose
terminates access to an HDF5 file
- by flushing all data to storage and terminating access
- to the file through file_id
.
- - If this is the last file identifier open for the file - and no other access identifier is open (e.g., a dataset - identifier, group identifier, or shared datatype identifier), - the file will be fully closed and access will end. -
- Delayed close:
-
- Note the following deviation from the above-described behavior.
- If H5Fclose
is called for a file but one or more
- objects within the file remain open, those objects will remain
- accessible until they are individually closed.
- Thus, if the dataset data_sample
is open when
- H5Fclose
is called for the file containing it,
- data_sample
will remain open and accessible
- (including writable) until it is explicitly closed.
- The file will be automatically closed once all objects in the
- file have been closed.
-
- Be warned, however, that there are circumstances where it is
- not possible to delay closing a file.
- For example, an MPI-IO file close is a collective call; all of
- the processes that opened the file must close it collectively.
- The file cannot be closed at some time in the future by each
- process in an independent fashion.
- Another example is that an application using an AFS token-based
- file access privilege may destroy its AFS token after
- H5Fclose
has returned successfully.
- This would make any future access to the file, or any object
- within it, illegal.
-
- In such situations, applications must close all open objects
- in a file before calling H5Fclose
.
- It is generally recommended to do so in all cases.
-
hid_t file_id |
- IN: Identifier of a file to terminate access to. |
-SUBROUTINE h5fclose_f(file_id, hdferr)
- IMPLICIT NONE
- INTEGER(HID_T), INTENT(IN) :: file_id ! File identifier
- INTEGER, INTENT(OUT) :: hdferr        ! Error code
-                                       ! 0 on success and -1 on failure
-END SUBROUTINE h5fclose_f
H5Fcreate
(const char *name
,
- unsigned flags
,
- hid_t create_id
,
- hid_t access_id
- )
-H5Fcreate
is the primary function for creating
- HDF5 files.
-
- The flags
parameter determines whether an
- existing file will be overwritten. All newly created files
- are opened for both reading and writing. All flags may be
- combined with the bit-wise OR operator (`|') to change
- the behavior of the H5Fcreate
call.
-
- The more complex behaviors of file creation and access
- are controlled through the file-creation and file-access
- property lists. The value of H5P_DEFAULT
for
- a property list value indicates that the library should use
- the default values for the appropriate property list.
-
- The return value is a file identifier for the newly-created file;
- this file identifier should be closed by calling
- H5Fclose
when it is no longer needed.
-
-
- Special case -- File creation in the case of an
- already-open file:
-
- If a file being created is already opened, by either a
- previous H5Fopen
or H5Fcreate
call,
- the HDF5 library may or may not detect that the open file and
- the new file are the same physical file.
- (See H5Fopen
regarding
- the limitations in detecting the re-opening of an already-open
- file.)
-
- If the library detects that the file is already opened,
- H5Fcreate
will return a failure, regardless
- of the use of H5F_ACC_TRUNC
.
-
- If the library does not detect that the file is already opened
- and H5F_ACC_TRUNC
is not used,
- H5Fcreate
will return a failure because the file
- already exists. Note that this is correct behavior.
-
- But if the library does not detect that the file is already
- opened and H5F_ACC_TRUNC
is used,
- H5Fcreate
will truncate the existing file
- and return a valid file identifier.
- Such a truncation of a currently-opened file will almost
- certainly result in errors.
- While unlikely, the HDF5 library may not be able to detect,
- and thus report, such errors.
-
- Applications should avoid calling H5Fcreate
- with an already opened file.
-
-
const char *name |
- IN: Name of the file to access. |
uintn flags |
- IN: File access flags. Allowable values are:
-
H5F_ACC_TRUNC and H5F_ACC_EXCL
- are mutually exclusive; use exactly one.
- H5F_ACC_DEBUG , prints
- debug information. This flag is used only by HDF5 library
- developers; it is neither tested nor supported
- for use in applications. |
hid_t create_id |
- IN: File creation property list identifier, used when modifying
- default file meta-data.
- Use H5P_DEFAULT for default file creation properties. |
hid_t access_id |
- IN: File access property list identifier.
- If parallel file access is desired, this is a collective
- call according to the communicator stored in the
- access_id .
- Use H5P_DEFAULT for default file access properties. |
-SUBROUTINE h5fcreate_f(name, access_flags, file_id, hdferr, &
-                       creation_prp, access_prp)
- IMPLICIT NONE
- CHARACTER(LEN=*), INTENT(IN) :: name ! Name of the file
- INTEGER, INTENT(IN) :: access_flags  ! File access flags
-                                      ! Possible values are:
-                                      ! H5F_ACC_RDWR_F
-                                      ! H5F_ACC_RDONLY_F
-                                      ! H5F_ACC_TRUNC_F
-                                      ! H5F_ACC_EXCL_F
-                                      ! H5F_ACC_DEBUG_F
- INTEGER(HID_T), INTENT(OUT) :: file_id ! File identifier
- INTEGER, INTENT(OUT) :: hdferr         ! Error code
-                                        ! 0 on success and -1 on failure
- INTEGER(HID_T), OPTIONAL, INTENT(IN) :: creation_prp
-                                        ! File creation property
-                                        ! list identifier; if not
-                                        ! specified its value is
-                                        ! H5P_DEFAULT_F
- INTEGER(HID_T), OPTIONAL, INTENT(IN) :: access_prp
-                                        ! File access property list
-                                        ! identifier; if not
-                                        ! specified its value is
-                                        ! H5P_DEFAULT_F
-END SUBROUTINE h5fcreate_f
H5Fflush
(hid_t object_id
,
- H5F_scope_t scope
- )
-H5Fflush
causes all buffers associated with a
- file to be immediately flushed to disk without removing the
- data from the cache.
-
- object_id
can be any object associated with the file,
- including the file itself, a dataset, a group, an attribute, or
- a named data type.
-
- scope
specifies whether the scope of the flushing
- action is global or local. Valid values are
-
H5F_SCOPE_GLOBAL |
- - | Flushes the entire virtual file. |
H5F_SCOPE_LOCAL |
- - | Flushes only the specified file. |
H5Fflush
flushes the internal HDF5 buffers then
- asks the operating system (the OS) to flush the system buffers for the
- open files. After that, the OS is responsible for ensuring that
- the data is actually flushed to disk.
-hid_t object_id |
- IN: Identifier of object used to identify the file. |
H5F_scope_t scope |
- IN: Specifies the scope of the flushing action. |
-SUBROUTINE h5fflush_f(obj_id, scope, hdferr)
- IMPLICIT NONE
- INTEGER(HID_T), INTENT(IN) :: obj_id ! Object identifier
- INTEGER, INTENT(IN) :: scope         ! Flag with two possible values:
-                                      ! H5F_SCOPE_GLOBAL_F
-                                      ! H5F_SCOPE_LOCAL_F
- INTEGER, INTENT(OUT) :: hdferr       ! Error code
-                                      ! 0 on success and -1 on failure
-END SUBROUTINE h5fflush_f
H5Fget_access_plist
(hid_t file_id
)
-H5Fget_access_plist
returns the
- file access property list identifier of the specified file.
- - See "File Access Properties" in - H5P: Property List Interface - in this reference manual and - "File Access Property Lists" - in Files in the - HDF5 User's Guide for - additional information and related functions. -
hid_t file_id |
- IN: Identifier of file to get access property list of |
-SUBROUTINE h5fget_access_plist_f(file_id, fapl_id, hdferr)
-
- IMPLICIT NONE
- INTEGER(HID_T), INTENT(IN) :: file_id  ! File identifier
- INTEGER(HID_T), INTENT(OUT) :: fapl_id ! File access property list identifier
- INTEGER, INTENT(OUT) :: hdferr         ! Error code
-                                        ! 0 on success and -1 on failure
-END SUBROUTINE h5fget_access_plist_f
H5Fget_create_plist
(hid_t file_id
- )
-H5Fget_create_plist
returns a file creation
- property list identifier identifying the creation properties
- used to create this file. This function is useful for
- duplicating properties when creating another file.
- - See "File Creation Properties" in - H5P: Property List Interface - in this reference manual and - "File Creation Properties" - in Files in the - HDF5 User's Guide for - additional information and related functions. -
hid_t file_id |
- IN: Identifier of the file to get creation property list of | -
-SUBROUTINE h5fget_create_plist_f(file_id, fcpl_id, hdferr)
-
- IMPLICIT NONE
- INTEGER(HID_T), INTENT(IN) :: file_id  ! File identifier
- INTEGER(HID_T), INTENT(OUT) :: fcpl_id ! File creation property list
-                                        ! identifier
- INTEGER, INTENT(OUT) :: hdferr         ! Error code
-                                        ! 0 on success and -1 on failure
-END SUBROUTINE h5fget_create_plist_f
H5Fget_filesize
(hid_t file_id
,
- hsize_t *size
- )
-H5Fget_filesize
returns the size
- of the HDF5 file specified by file_id
.
-
- The returned size is that of the entire file,
- as opposed to only the HDF5 portion of the file.
- I.e., size
includes the user block, if any,
- the HDF5 portion of the file, and
- any data that may have been appended
- beyond the data written through the HDF5 Library.
-
file_id
- size
-
-SUBROUTINE h5fget_filesize_f(file_id, size, hdferr)
-
- IMPLICIT NONE
- INTEGER(HID_T), INTENT(IN) :: file_id ! File identifier
- INTEGER(HSIZE_T), INTENT(OUT) :: size ! Size of the file
- INTEGER, INTENT(OUT) :: hdferr        ! Error code: 0 on success,
-                                       ! -1 if fail
-END SUBROUTINE h5fget_filesize_f
H5Fget_freespace
(hid_t file_id
)
-file_id
,
- H5Fget_freespace
returns the amount of space that is
- unused by any objects in the file.
- - Currently, the HDF5 library only tracks free space in a file from a - file open or create until that file is closed, so this routine will - only report the free space that has been created during that - interval. -
hid_t file_id |
- IN: Identifier of a currently-open HDF5 file |
-SUBROUTINE h5fget_freespace_f(file_id, free_space, hdferr)
-
- IMPLICIT NONE
- INTEGER(HID_T), INTENT(IN) :: file_id        ! File identifier
- INTEGER(HSSIZE_T), INTENT(OUT) :: free_space ! Amount of free space in file
- INTEGER, INTENT(OUT) :: hdferr               ! Error code
-                                              ! 0 on success and -1 on failure
-END SUBROUTINE h5fget_freespace_f
H5Fget_mdc_config
(hid_t
- file_id
, H5AC_cache_config_t *config_ptr
)
-H5Fget_mdc_config
loads the current metadata cache
- configuration into the instance of H5AC_cache_config_t
- pointed to by the config_ptr
parameter.
-
- Note that the version field of *config_ptr
must
- be initialized --this allows the library to support old versions
- of the H5AC_cache_config_t
structure.
-
-
See the overview of the metadata cache in the special topics section - of the user manual for details on metadata cache configuration. - If you haven't read and understood that documentation, the results - of this call will not make much sense. -
hid_t file_id |
- IN: Identifier of the target file |
H5AC_cache_config_t *config_ptr |
- IN/OUT: Pointer to the instance of H5AC_cache_config_t - in which the current metadata cache configuration is to be reported. - The fields of this structure are discussed below: |
- | |
General configuration section: | -|
int version |
- IN: Integer field indicating the version - of the H5AC_cache_config_t in use. This field should be - set to H5AC__CURR_CACHE_CONFIG_VERSION - (defined in H5ACpublic.h). |
hbool_t rpt_fcn_enabled |
- OUT: Boolean flag indicating whether the adaptive
- cache resize report function is enabled. This field should almost
- always be set to FALSE. Since resize algorithm activity is reported
- via stdout, it MUST be set to FALSE on Windows machines.
- The report function is not supported code, and can be - expected to change between versions of the library. - Use it at your own risk. |
hbool_t set_initial_size |
- OUT: Boolean flag indicating whether the cache
- should be created with a user specified initial maximum size.
- If the configuration is loaded from the cache, - this flag will always be FALSE. |
size_t initial_size |
- OUT: Initial maximum size of the cache in bytes,
- if applicable.
- If the configuration is loaded from the cache, this - field will contain the cache maximum size as of the - time of the call. |
double min_clean_fraction |
- OUT: This field is only used in the parallel - version of the library. It specifies the minimum fraction - of the cache that must be kept either clean or - empty when possible. |
size_t max_size |
- OUT: Upper bound (in bytes) on the range of - values that the adaptive cache resize code can select as - the maximum cache size. |
size_t min_size |
- OUT: Lower bound (in bytes) on the range - of values that the adaptive cache resize code can - select as the maximum cache size. |
long int epoch_length |
- OUT: Number of cache accesses between runs - of the adaptive cache resize code. |
- | |
Increment configuration section: | -|
enum H5C_cache_incr_mode incr_mode |
- OUT: Enumerated value indicating the operational
- mode of the automatic cache size increase code. At present,
- only the following values are legal:
- H5C_incr__off: Automatic cache size increase is disabled. - H5C_incr__threshold: Automatic cache size increase is - enabled using the hit rate threshold algorithm. |
double lower_hr_threshold |
- OUT: Hit rate threshold used in the hit rate - threshold cache size increase algorithm. |
double increment |
- OUT: The factor by which the current maximum - cache size is multiplied to obtain an initial new maximum cache - size if a size increase is triggered in the hit rate - threshold cache size increase algorithm. |
hbool_t apply_max_increment |
- OUT: Boolean flag indicating whether an upper - limit will be applied to the size of cache size increases. |
size_t max_increment |
- OUT: The maximum number of bytes by which the - maximum cache size can be increased in a single step -- if - applicable. |
- | |
Decrement configuration section: | -|
enum H5C_cache_decr_mode decr_mode |
- OUT: Enumerated value indicating the operational
- mode of the automatic cache size decrease code. At present,
- the following values are legal:
- H5C_decr__off: Automatic cache size decrease is disabled, - and the remaining decrement fields are ignored. - H5C_decr__threshold: Automatic cache size decrease is - enabled using the hit rate threshold algorithm. - H5C_decr__age_out: Automatic cache size decrease is enabled - using the ageout algorithm. - H5C_decr__age_out_with_threshold: Automatic cache size - decrease is enabled using the ageout with hit rate - threshold algorithm |
double upper_hr_threshold |
- OUT: Upper hit rate threshold. This value is only - used if the decr_mode is either H5C_decr__threshold or - H5C_decr__age_out_with_threshold. |
double decrement |
- OUT: Factor by which the current max cache size - is multiplied to obtain an initial value for the new cache - size when cache size reduction is triggered in the hit rate - threshold cache size reduction algorithm. |
hbool_t apply_max_decrement |
- OUT: Boolean flag indicating whether an upper - limit should be applied to the size of cache size - decreases. |
size_t max_decrement |
- OUT: The maximum number of bytes by which the cache - size can be decreased in any single step, if applicable. |
int epochs_before_eviction |
- OUT: The minimum number of epochs that an entry - must reside unaccessed in cache before being evicted under - either of the ageout cache size reduction algorithms. |
hbool_t apply_empty_reserve |
- OUT: Boolean flag indicating whether an empty - reserve should be maintained under either of the ageout - cache size reduction algorithms. |
double empty_reserve |
- OUT: Empty reserve for use with the ageout - cache size reduction algorithms, if applicable. |
H5Fget_mdc_hit_rate
(hid_t
- file_id
, double *hit_rate_ptr
)
-The hit rate stats can be reset either manually (via - H5Freset_mdc_hit_rate_stats()), or automatically. If the cache's - adaptive resize code is enabled, the hit rate stats will be reset - once per epoch. If they are reset manually as well, - the cache may behave oddly. -
See the overview of the metadata cache in the special - topics section of the user manual for details on the metadata - cache and its adaptive resize algorithms. -
hid_t file_id
- |
- IN: Identifier of the target file. |
double * hit_rate_ptr
- |
- OUT: Pointer to the double in which the
- hit rate is returned. Note that *hit_rate_ptr is
- undefined if the API call fails. |
H5Fget_mdc_size
(hid_t file_id
,
- size_t *max_size_ptr
,
- size_t *min_clean_size_ptr
,
- size_t *cur_size_ptr
,
- int *cur_num_entries_ptr
)
-If the API call fails, the values returned via the pointer - parameters are undefined. -
If adaptive cache resizing is enabled, the cache maximum size - and minimum clean size may change at the end of each epoch. Current - size and current number of entries can change on each cache access. -
Current size can exceed maximum size under certain conditions. - See the overview of the metadata cache in the special topics - section of the user manual for a discussion of this. -
hid_t file_id
- |
- IN: Identifier of the target file. |
size_t *max_size_ptr
- |
- OUT: Pointer to the location in which the - current cache maximum size is to be returned, or NULL if - this datum is not desired. |
size_t *min_clean_size_ptr
- |
- OUT: Pointer to the location in which the - current cache minimum clean size is to be returned, or - NULL if that datum is not desired. |
size_t *cur_size_ptr
- |
- OUT: Pointer to the location in which the - current cache size is to be returned, or NULL if that - datum is not desired. |
int *cur_num_entries_ptr
- |
- OUT: Pointer to the location in which the - current number of entries in the cache is to be returned, - or NULL if that datum is not desired. |
H5Fget_name
(hid_t obj_id
,
- char *name
,
- size_t size
- )
-
-H5Fget_name
retrieves the name of the file
- to which the object obj_id
belongs.
- The object can be a group, dataset, attribute, or
- named datatype.
-
- Up to size
characters of the filename
- are returned in name
;
- additional characters, if any, are not returned to
- the user application.
-
- If the length of the name,
- which determines the required value of size
,
- is unknown, a preliminary H5Fget_name
call
- can be made by setting name
to NULL.
- The return value of this call will be the size of the filename;
- that value can then be assigned to size
- for a second H5Fget_name
call,
- which will retrieve the actual name.
-
- If an error occurs, the buffer pointed to by
- name
is unchanged and
- the function returns a negative value.
-
obj_id
- name
- size
- name
buffer.
- -SUBROUTINE h5fget_name_f(obj_id, buf, size, hdferr) - - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: obj_id ! Object identifier - CHARACTER(LEN=*), INTENT(INOUT) :: buf ! Buffer to hold filename - INTEGER(SIZE_T), INTENT(OUT) :: size ! Size of the filename - INTEGER, INTENT(OUT) :: hdferr ! Error code: 0 on success, - ! -1 if fail -END SUBROUTINE h5fget_name_f -- - -
H5Fget_obj_count
(hid_t file_id
,
- unsigned int types
- )
-file_id
,
- and the desired object types, types
,
- H5Fget_obj_count
returns the number of
- open object identifiers for the file.
-
- To retrieve a count of open identifiers for open objects in
- all HDF5 application files that are currently open,
- pass the value H5F_OBJ_ALL
in file_id
.
-
- The types of objects to be counted are specified
- in types
as follows:
-
- H5F_OBJ_FILE
- | - Files only - |
- H5F_OBJ_DATASET
- | - Datasets only - |
- H5F_OBJ_GROUP
- | - Groups only - |
- H5F_OBJ_DATATYPE
- | - Named datatypes only - |
- H5F_OBJ_ATTR
- | - Attributes only - |
- H5F_OBJ_ALL
- |
- All of the above
- - (I.e., H5F_OBJ_FILE | H5F_OBJ_DATASET |
- H5F_OBJ_GROUP | H5F_OBJ_DATATYPE
- | H5F_OBJ_ATTR )
- |
OR
operator (|
).
- For example, the expression (H5F_OBJ_DATASET|H5F_OBJ_GROUP)
would call for
- datasets and groups.
-hid_t file_id |
- IN: Identifier of a currently-open HDF5 file or
- H5F_OBJ_ALL for all currently-open HDF5 files. |
unsigned int types |
- IN: Type of object for which identifiers are to be returned. |
-SUBROUTINE h5fget_obj_count_f(file_id, obj_type, obj_count, hdferr) - - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: file_id ! File identifier - INTEGER, INTENT(IN) :: obj_type ! Object types, possible values are: - ! H5F_OBJ_FILE_F - ! H5F_OBJ_GROUP_F - ! H5F_OBJ_DATASET_F - ! H5F_OBJ_DATATYPE_F - ! H5F_OBJ_ALL_F - INTEGER, INTENT(OUT) :: obj_count ! Number of opened objects - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5fget_obj_count_f -- - -
H5Fget_obj_ids
(hid_t file_id
,
- unsigned int types
,
- int max_objs
,
- hid_t *obj_id_list
- )
-file_id
and
- the type of objects to be identified, types
,
- H5Fget_obj_ids
returns the list of identifiers
- for all open HDF5 objects fitting the specified criteria.
-
- To retrieve identifiers for open objects in all HDF5 application
- files that are currently open, pass the value
- H5F_OBJ_ALL
in file_id
.
-
- The types of object identifiers to be retrieved are specified
- in types
using the codes listed for the same
- parameter in H5Fget_obj_count.
-
- To retrieve identifiers for all open objects, pass a negative value
- for max_objs
.
-
hid_t file_id |
- IN: Identifier of a currently-open HDF5 file or
- H5F_OBJ_ALL for all currently-open HDF5 files. |
unsigned int types |
- IN: Type of object for which identifiers are to be returned. |
int max_objs |
- IN: Maximum number of object identifiers to place into
- obj_id_list . |
hid_t *obj_id_list |
- OUT: Pointer to the returned list of open object identifiers. |
obj_id_list
if successful;
- otherwise returns a negative value.
--SUBROUTINE h5fget_obj_ids_f(file_id, obj_type, max_objs, obj_ids, hdferr) - - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: file_id ! File identifier - INTEGER, INTENT(IN) :: obj_type ! Object types, possible values are: - ! H5F_OBJ_FILE_F - ! H5F_OBJ_GROUP_F - ! H5F_OBJ_DATASET_F - ! H5F_OBJ_DATATYPE_F - ! H5F_OBJ_ALL_F - INTEGER, INTENT(IN) :: max_objs ! Maximum number of object - ! identifiers to retrieve - INTEGER(HID_T), DIMENSION(*), INTENT(OUT) :: obj_ids - ! Array of requested object - ! identifiers - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5fget_obj_ids_f -- - -
H5Fget_vfd_handle
(hid_t file_id
,
- hid_t fapl_id
,
- void *file_handle
- )
-file_id
and
- the file access property list fapl_id
,
- H5Fget_vfd_handle
returns a pointer to the file handle
- from the low-level file driver currently being used by the
- HDF5 library for file I/O.
-- This file handle is dynamic and is valid only while the file remains - open; it will be invalid if the file is closed and reopened or - opened during a subsequent session. -
hid_t file_id |
- IN: Identifier of the file to be queried. |
hid_t fapl_id |
- IN: File access property list identifier.
- For most drivers, the value will be H5P_DEFAULT .
- For the FAMILY or MULTI drivers,
- this value should be defined through the property list
- functions:
- H5Pset_family_offset for the FAMILY
- driver and H5Pset_multi_type for the
- MULTI driver. |
void *file_handle |
- OUT: Pointer to the file handle being used by - the low-level virtual file driver. |
H5Fis_hdf5
(const char *name
- )
-H5Fis_hdf5
determines whether a file is in
- the HDF5 format.
-const char *name |
- IN: File name to check format. |
TRUE
,
- or 0
(zero), for FALSE
.
- Otherwise returns a negative value.
--SUBROUTINE h5fis_hdf5_f(name, status, hdferr) - IMPLICIT NONE - CHARACTER(LEN=*), INTENT(IN) :: name ! Name of the file - LOGICAL, INTENT(OUT) :: status ! This parameter indicates - ! whether file is an HDF5 file - ! ( TRUE or FALSE ) - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5fis_hdf5_f -- - -
H5Fmount
(hid_t loc_id
,
- const char *name
,
- hid_t child_id
,
- hid_t plist_id
- )
-H5Fmount
mounts the file specified by
- child_id
onto the group specified by
- loc_id
and name
using
- the mount properties plist_id
.
-
- Note that loc_id
is either a file or group identifier
- and name
is relative to loc_id
.
-
hid_t loc_id |
- IN: Identifier of the file or group in
- which name is defined. |
-
const char *name |
- IN: Name of the group onto which the
- file specified by child_id
- is to be mounted. |
-
hid_t child_id |
- IN: Identifier of the file to be mounted. | -
hid_t plist_id |
- IN: Identifier of the property list to be used. | -
-SUBROUTINE h5fmount_f(loc_id, name, child_id, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: loc_id ! File or group identifier - CHARACTER(LEN=*), INTENT(IN):: name ! Group name at location loc_id - INTEGER(HID_T), INTENT(IN) :: child_id ! File (to be mounted) identifier - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5fmount_f -- - -
H5Fopen
(const char *name
,
- unsigned flags
,
- hid_t access_id
- )
-H5Fopen
opens an existing file and is the primary
- function for accessing existing HDF5 files.
-
- The parameter access_id
is a file access property
- list identifier or H5P_DEFAULT
if the
- default I/O access parameters are to be used.
-
- The flags
argument determines whether writing
- to an existing file will be allowed.
- The file is opened with read and write permission if
- flags
is set to H5F_ACC_RDWR
.
- All flags may be combined with the bit-wise OR operator (`|')
- to change the behavior of the file open call.
- More complex behaviors of file access are controlled
- through the file-access property list.
-
- The return value is a file identifier for the open file;
- this file identifier should be closed by calling
- H5Fclose
when it is no longer needed.
-
-
- Special case -- Multiple opens:
-
- A file can often be opened with a new H5Fopen
- call without closing an already-open identifier established
- in a previous H5Fopen
or H5Fcreate
- call. Each such H5Fopen
call will return a
- unique identifier and the file can be accessed through any
- of these identifiers as long as the identifier remains valid.
- In such multiply-opened cases, all the open calls should
- use the same flags
argument.
-
- In some cases, such as files on a local Unix file system, - the HDF5 library can detect that a file is multiply opened and - will maintain coherent access among the file identifiers. -
- But in many other cases, such as parallel file systems or - networked file systems, it is not always possible to detect - multiple opens of the same physical file. - In such cases, HDF5 will treat the file identifiers - as though they are accessing different files and - will be unable to maintain coherent access. - Errors are likely to result in these cases, and the - library may not even be able to detect, and thus report, them. -
- It is generally recommended that applications avoid - multiple opens of the same file. - -
const char *name |
- IN: Name of the file to access. |
unsigned flags |
- IN: File access flags. Allowable values are:
-
|
hid_t access_id |
- IN: Identifier for the file access properties list.
- If parallel file access is desired, this is a collective
- call according to the communicator stored in the
- access_id .
- Use H5P_DEFAULT for default file access properties. |
-SUBROUTINE h5fopen_f(name, access_flags, file_id, hdferr, & - access_prp) - IMPLICIT NONE - CHARACTER(LEN=*), INTENT(IN) :: name ! Name of the file - INTEGER, INTENT(IN) :: access_flags ! File access flags - ! Possible values are: - ! H5F_ACC_RDWR_F - ! H5F_ACC_RDONLY_F - INTEGER(HID_T), INTENT(OUT) :: file_id ! File identifier - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure - INTEGER(HID_T), OPTIONAL, INTENT(IN) :: access_prp - ! File access property list - ! identifier -END SUBROUTINE h5fopen_f -- - -
H5Freopen
(hid_t file_id
- )
-H5Freopen
returns a new file identifier for an
- already-open HDF5 file, as specified by file_id
.
- Both identifiers share caches and other information.
- The only difference between the identifiers is that the
- new identifier is not mounted anywhere and no files are
- mounted on it.
-
- Note that there is no circumstance under which
- H5Freopen
can actually open a closed file;
- the file must already be open and have an active
- file_id
. E.g., one cannot close a file with
- H5Fclose (file_id)
then use
- H5Freopen (file_id)
to reopen it.
-
- The new file identifier should be closed by calling
- H5Fclose
when it is no longer needed.
-
hid_t file_id |
- IN: Identifier of a file for which an additional identifier - is required. |
-SUBROUTINE h5freopen_f(file_id, new_file_id, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: file_id ! File identifier - INTEGER(HID_T), INTENT(OUT) :: new_file_id ! New file identifier - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5freopen_f -- - -
H5Freset_mdc_hit_rate_stats
(hid_t
- file_id
)
-If the adaptive cache resizing code is enabled, the hit - rate statistics are reset at the beginning of each epoch. - This API call allows you to do the same thing from your program. -
The adaptive cache resizing code may behave oddly if you use - this call when adaptive cache resizing is enabled. However, - the call should be useful if you choose to control metadata - cache size from your program. -
See the overview of the metadata cache in the special topics - section of the user manual for details of the metadata cache and - the adaptive cache resizing algorithms. If you haven't read, - understood, and thought about the material covered in that - documentation, you shouldn't be using this API call. -
hid_t file_id |
- IN: Identifier of the target file. |
H5Fset_mdc_config
(hid_t
- file_id
, H5AC_cache_config_t *config_ptr
)
-See the overview of the metadata cache in the special topics - section of the user manual for details on what is being configured. - If you haven't read and understood that documentation, you really - shouldn't be using this API call. -
hid_t file_id
- |
- IN: Identifier of the target file |
H5AC_cache_config_t *config_ptr
- |
- IN: Pointer to the instance of H5AC_cache_config_t - containing the desired configuration. The fields of this structure - are discussed below: |
- | |
General configuration section: | -|
int version
- |
- IN: Integer field indicating the version of - the H5AC_cache_config_t in use. This field should be set to - H5AC__CURR_CACHE_CONFIG_VERSION (defined in H5ACpublic.h). |
hbool_t rpt_fcn_enabled
- |
- IN: Boolean flag indicating whether the adaptive
- cache resize report function is enabled. This field should almost
- always be set to FALSE. Since resize algorithm activity is reported
- via stdout, it MUST be set to FALSE on Windows machines.
- The report function is not supported code, and can be expected to - change between versions of the library. Use it at your own risk. |
hbool_t set_initial_size
- |
- IN: Boolean flag indicating whether the cache should be - forced to the user specified initial size. |
size_t initial_size
- |
- IN: If set_initial_size is TRUE, initial_size must - contain the desired initial size in bytes. This value must lie - in the closed interval [min_size, max_size] (see below). |
double min_clean_fraction
- |
- IN: This field is only used in the parallel version
- of the library. It specifies the minimum fraction of the cache that
- must be kept either clean or empty.
- The value must lie in the interval [0.0, 1.0]. 0.25 is a good place - to start. |
size_t max_size
- |
- IN: Upper bound (in bytes) on the range of values - that the adaptive cache resize code can select as the maximum - cache size. |
size_t min_size
- |
- IN: Lower bound (in bytes) on the range of values - that the adaptive cache resize code can select as the maximum - cache size. |
long int epoch_length
- |
- IN: Number of cache accesses between runs of the - adaptive cache resize code. 50,000 is a good starting number. |
- | |
Increment configuration section: | -|
enum H5C_cache_incr_mode incr_mode
- |
- IN: Enumerated value indicating the operational mode
- of the automatic cache size increase code. At present, only two
- values are legal:
- H5C_incr__off: Automatic cache size increase is disabled, - and the remaining increment fields are ignored. - H5C_incr__threshold: Automatic cache size increase is enabled - using the hit rate threshold algorithm. |
double lower_hr_threshold
- |
- IN: Hit rate threshold used by the hit rate threshold
- cache size increment algorithm.
- When the hit rate over an epoch is below this threshold and the - cache is full, the maximum size of the cache is multiplied by - increment (below), and then clipped as necessary to stay within - max_size, and possibly max_increment. - This field must lie in the interval [0.0, 1.0]. 0.8 or 0.9 - is a good starting point. |
double increment
- |
- IN: Factor by which the hit rate threshold cache
- size increment algorithm multiplies the current cache max size
- to obtain a tentative new cache size.
- The actual cache size increase will be clipped to satisfy the - max_size specified in the general configuration, and possibly - max_increment below. - The parameter must be greater than or equal to 1.0 -- 2.0 - is a reasonable value. - If you set it to 1.0, you will effectively disable cache size - increases. |
hbool_t apply_max_increment
- |
- IN: Boolean flag indicating whether an upper limit - should be applied to the size of cache size increases. |
size_t max_increment
- |
- IN: Maximum number of bytes by which cache size can - be increased in a single step -- if applicable. |
- | |
Decrement configuration section: | -|
enum H5C_cache_decr_mode decr_mode
- |
- IN: Enumerated value indicating the operational
- mode of the automatic cache size decrease code. At present,
- the following values are legal:
- H5C_decr__off: Automatic cache size decrease is disabled. - H5C_decr__threshold: Automatic cache size decrease is - enabled using the hit rate threshold algorithm. - H5C_decr__age_out: Automatic cache size decrease is enabled - using the ageout algorithm. - H5C_decr__age_out_with_threshold: Automatic cache size - decrease is enabled using the ageout with hit rate threshold - algorithm |
double upper_hr_threshold
- |
- IN: Hit rate threshold for the hit rate threshold and
- ageout with hit rate threshold cache size decrement algorithms.
- When decr_mode is H5C_decr__threshold, and the hit rate over a - given epoch exceeds the supplied threshold, the current maximum - cache size is multiplied by decrement to obtain a tentative new - (and smaller) maximum cache size. - When decr_mode is H5C_decr__age_out_with_threshold, there is no - attempt to find and evict aged out entries unless the hit rate in - the previous epoch exceeded the supplied threshold. - This field must lie in the interval [0.0, 1.0]. - For H5C_decr__threshold, 0.9995 or 0.99995 is a good place to start. - For H5C_decr__age_out_with_threshold, 0.999 might be - more useful. |
double decrement
- |
- IN: In the hit rate threshold cache size decrease
- algorithm, this parameter contains the factor by which the
- current max cache size is multiplied to produce a tentative
- new cache size.
- The actual cache size decrease will be clipped to satisfy the - min_size specified in the general configuration, and possibly - max_decrement below. - The parameter must be in the interval [0.0, 1.0]. - If you set it to 1.0, you will effectively disable cache size - decreases. 0.9 is a reasonable starting point. |
hbool_t apply_max_decrement
- |
- IN: Boolean flag indicating whether an upper limit - should be applied to the size of cache size decreases. |
size_t max_decrement
- |
- IN: Maximum number of bytes by which the maximum cache - size can be decreased in any single step -- if applicable. |
int epochs_before_eviction
- |
- IN: In the ageout based cache size reduction algorithms, - this field contains the minimum number of epochs an entry must remain - unaccessed in cache before the cache size reduction algorithm tries to - evict it. 3 is a reasonable value. |
hbool_t apply_empty_reserve
- |
- IN: Boolean flag indicating whether the ageout based - decrement algorithms will maintain an empty reserve when decreasing - cache size. |
double empty_reserve
- |
- IN: Empty reserve as a fraction of maximum cache
- size if applicable.
- When so directed, the ageout based algorithms will not decrease - the maximum cache size unless the empty reserve can be met. - The parameter must lie in the interval [0.0, 1.0]. - 0.1 or 0.05 is a good place to start. |
H5Funmount
(hid_t loc_id
,
- const char *name
- )
-H5Funmount
- disassociates the mount point's file
- from the file mounted there. This function
- does not close either file.
- - The mount point can be either the group in the - parent or the root group of the mounted file - (both groups have the same name). If the mount - point was opened before the mount then it is the - group in the parent; if it was opened after the - mount then it is the root group of the child. -
- Note that loc_id
is either a file or group identifier
- and name
is relative to loc_id
.
-
hid_t loc_id |
- IN: File or group identifier for the location at which - the specified file is to be unmounted. |
const char *name |
- IN: Name of the mount point. |
-SUBROUTINE h5funmount_f(loc_id, name, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: loc_id ! File or group identifier - CHARACTER(LEN=*), INTENT(IN):: name ! Group name at location loc_id - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5funmount_f -- - -
-HDF5 documents and links -Introduction to HDF5 -HDF5 User Guide - - |
-
-And in this document, the
-HDF5 Reference Manual
- -H5IM -H5LT -H5PT -H5TB - -H5 -H5A -H5D -H5E -H5F -H5G -H5I -H5P - -H5R -H5S -H5T -H5Z -Tools -Datatypes - |
- - - -
- -
High-level APIs | -Main HDF5 Library,
- including Low-level APIs |
- Fortran and C++
- Interfaces |
-
- High-level HDF5 APIs- The HDF5 Library includes several sets of convenience and - standard-use APIs. - The HDF5 Lite APIs are convenience functions designed to - facilitate common HDF5 operations. - The HDF5 Image, HDF5 Table and HDF5 Packet Table APIs - implement standardized approaches to common use cases - with the intention of improving interoperability. - |
- |||
- | Lite | -- | The H5LT - API general higher-level functions | -
- | Image | -- | The H5IM API for images | -
- | Table | -- | The H5TB API for manipulating - table datasets | -
- | - Packet Table - | -- | The H5PT API for managing packet tables (and C++ H5PT wrappers) - | -
- | - Dimension Scales | -- | The H5DS API for managing dimension scales | -
- | - | - | - |
- Main HDF5 Library, or Low-level APIs- The main HDF5 Library includes all of the low-level APIs, - providing user applications with fine-grain control of - HDF5 functionality. - |
- |||
- | Library Functions | -The general-purpose - H5 functions. | -|
- | Attribute Interface | -- | The H5A API for attributes. | -
- | Dataset Interface | -- | The H5D API for manipulating - scientific datasets. | -
- | Error Interface | -- | The H5E API for error handling. | -
- | File Interface | -- | The H5F API for accessing HDF files. | -
- | Group Interface | -- | The H5G API for creating physical - groups of objects on disk. | -
- | Identifier Interface | -- | The H5I API for working with - object identifiers. | -
- | Property List Interface | -- | The H5P API for manipulating - object property lists. | -
- | Reference Interface | -- | The H5R API for references. | -
- | Dataspace Interface | -- | The H5S API for defining dataset - dataspace. | -
- | Datatype Interface | -- | The H5T API for defining dataset - element information. | -
- | Filters and - Compression Interface |
- - | The H5Z API for inline data filters - and data compression. | -
- | Tools | -- | Interactive tools for the examination - of existing HDF5 files. | -
- | Predefined Datatypes | -- | Predefined datatypes in HDF5. - - | -
-A PDF version of this HDF5 Reference Manual will be available
-from
-http://hdf.ncsa.uiuc.edu/HDF5/doc/PSandPDF/
-approximately one week after each release.
-
-
-
-
-
-
-
-
-The Fortran90 and C++ APIs to HDF5
-
-
-The HDF5 Library distribution includes FORTRAN90 and C++ APIs,
-which are described in the following documents.
-
-
-Fortran90 API -
- Fortran90 APIs in the Reference Manual: - The current version of the HDF5 Reference Manual includes - descriptions of the Fortran90 APIs to HDF5. - Fortran subroutines exist in the H5, H5A, H5D, H5E, H5F, H5G, H5I, H5P, - H5R, H5S, H5T, and H5Z interfaces and are described on those pages. - In general, each Fortran subroutine performs exactly the same task - as the corresponding C function. - -
- Whereas Fortran subroutines had been described on separate pages in - prior releases, those descriptions were fully integrated into the - body of the reference manual for HDF5 Release 1.6.2 - (and mostly so for Release 1.6.1). -
- - HDF5 Fortran90 Flags and Datatypes - lists the flags employed in the Fortran90 interface and - contains a pointer to the HDF5 Fortran90 datatypes. -
- - HDF5 C++ Interfaces - -
-
-
-HDF Help Desk
- -Describes HDF5 Release 1.7, the unreleased development branch; working toward HDF5 Release 1.8.0 - - - - | - - |
-The C Interfaces: - -
-A group associates names with objects and provides a mechanism -for mapping a name to an object. Since all objects appear in at -least one group (with the possible exception of the root object) -and since objects can have names in more than one group, the set -of all objects in an HDF5 file is a directed graph. The internal -nodes (nodes with out-degree greater than zero) must be groups -while the leaf nodes (nodes with out-degree zero) are either empty -groups or objects of some other type. Exactly one object in every -non-empty file is the root object. The root object always has a -positive in-degree because it is pointed to by the file super block. - -
-An object name consists of one or more components separated from -one another by slashes. An absolute name begins with a slash and the -object is located by looking for the first component in the root -object, then looking for the second component in the first object, etc., -until the entire name is traversed. A relative name does not begin -with a slash and the traversal begins at the location specified by the -create or access function. - -
- - - - - -
H5Gclose
(hid_t group_id
)
- H5Gclose
releases resources used by a group which was
- opened by H5Gcreate
or H5Gopen
.
- After closing a group, the group_id
cannot be used again.
- - Failure to release a group with this call will result in resource leaks. -
hid_t group_id |
- IN: Group identifier to release. |
-SUBROUTINE h5gclose_f( gr_id, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: gr_id ! Group identifier - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5gclose_f -- - -
H5Gcreate
(hid_t loc_id
,
- const char *name
,
- size_t size_hint
- )
- H5Gcreate
creates a new group with the specified
- name at the specified location, loc_id
.
- The location is identified by a file or group identifier.
- The name, name
, must not already be taken by some
- other object and all parent groups must already exist.
-
- size_hint
is a hint for the number of bytes to
- reserve to store the names which will be eventually added to
- the new group. Passing a value of zero for size_hint
- is usually adequate since the library is able to dynamically
- resize the name heap, but a correct hint may result in better
- performance.
- If a non-positive value is supplied for size_hint
,
- then a default size is chosen.
-
- The return value is a group identifier for the open group.
- This group identifier should be closed by calling
- H5Gclose
when it is no longer needed.
-
hid_t loc_id |
- IN: File or group identifier. |
const char *name |
- IN: Absolute or relative name of the new group. |
size_t size_hint |
- IN: Optional parameter indicating the number of bytes - to reserve for the names that will appear in the group. - A conservative estimate could result in multiple - system-level I/O requests to read the group name heap; - a liberal estimate could result in a single large - I/O request even when the group has just a few names. - HDF5 stores each name with a null terminator. |
-SUBROUTINE h5gcreate_f(loc_id, name, gr_id, hdferr, size_hint) - - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: loc_id ! File or group identifier - CHARACTER(LEN=*), INTENT(IN) :: name ! Name of the group to be created - INTEGER(HID_T), INTENT(OUT) :: gr_id ! Group identifier - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure - INTEGER(SIZE_T), OPTIONAL, INTENT(IN) :: size_hint - ! Number of bytes to store the names - ! of objects in the group. - ! Default value is - ! OBJECT_NAMELEN_DEFAULT_F -END SUBROUTINE h5gcreate_f -- - -
H5Gget_comment
(hid_t loc_id
,
- const char *name
,
- size_t bufsize
,
- char *comment
- )
- H5Gget_comment
retrieves the comment for the
- object specified by loc_id
and name
.
- The comment is returned in the buffer comment
.
-
- At most bufsize
characters, including a null
- terminator, are returned in comment
.
- The returned value is not null terminated
- if the comment is longer than the supplied buffer.
-
- If an object does not have a comment, the empty string - is returned. -
hid_t loc_id |
- IN: Identifier of the file, group, dataset, or - named datatype. |
const char *name |
- IN: Name of the object in loc_id whose
- comment is to be retrieved.
- - name can be '.' (dot) if loc_id
- fully specifies the object for which the associated comment
- is to be retrieved.
- - name is ignored if loc_id
- is a dataset or named datatype.
- |
size_t bufsize |
- IN: Anticipated required size of the
- comment buffer. |
char *comment |
- OUT: The comment. |
bufsize
.
- Otherwise returns a negative value.
- -SUBROUTINE h5gget_comment_f(loc_id, name, size, buffer, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: loc_id ! File, group, dataset, or - ! named datatype identifier - CHARACTER(LEN=*), INTENT(IN) :: name ! Name of the object link - CHARACTER(LEN=size), INTENT(OUT) :: buffer ! Buffer to hold the comment - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5gget_comment_f -- - -
H5Gget_linkval
(hid_t loc_id
,
- const char *name
,
- size_t size
,
- char *value
- )
- H5Gget_linkval
returns size
- characters of the name of the object that the symbolic link name
points to.
-
- The parameter loc_id
is a file or group identifier.
-
- The parameter name
must be a symbolic link pointing to
- the desired object and must be defined relative to loc_id
.
-
- If size
is smaller than the size of the returned object name, then
- the name stored in the buffer value
will not be null terminated.
-
- This function fails if name
is not a symbolic link.
- The presence of a symbolic link can be tested by passing zero for
- size
and NULL for value
.
-
- This function should be used only after H5Gget_objinfo
has been called
- to verify that name
is a symbolic link.
-
hid_t loc_id |
- IN: Identifier of the file or group. |
const char *name |
- IN: Symbolic link to the object whose name is to be returned. |
size_t size |
- IN: Maximum number of characters of value
- to be returned. |
char *value |
- OUT: A buffer to hold the name of the object being sought. |
value
,
- if successful.
- Otherwise returns a negative value.
- -SUBROUTINE h5gget_linkval_f(loc_id, name, size, buffer, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: loc_id ! File or group identifier - CHARACTER(LEN=*), INTENT(IN) :: name ! Name of the symbolic link - CHARACTER(LEN=size), INTENT(OUT) :: buffer ! Buffer to hold a - ! name of the object - ! symbolic link points to - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5gget_linkval_f -- - -
H5Gget_num_objs
(hid_t loc_id
,
- hsize_t* num_obj
)
-H5Gget_num_objs
returns the number of objects in a group.
- The group is specified by its identifier loc_id
.
- If a file identifier is passed in, then the number of objects in the
- root group is returned.
-hid_t loc_id |
- IN: Identifier of the group or the file |
hsize_t *num_obj |
- OUT: Number of objects in the group. |
H5Gget_objinfo
(hid_t loc_id
,
- const char *name
,
- hbool_t follow_link
,
- H5G_stat_t *statbuf
- )
- H5Gget_objinfo
returns information about the
- specified object through the statbuf
argument.
- loc_id
(a file or group identifier) and
- name
together determine the object.
- If the object is a symbolic link and follow_link
is
- zero (0
), then the information returned is that for the link itself;
- otherwise the link is followed and information is returned about
- the object to which the link points.
- If follow_link
is non-zero but the final symbolic link
- is dangling (does not point to anything), then an error is returned.
- The statbuf
fields are undefined for an error.
- The existence of an object can be tested by calling this function
- with a null statbuf
.
-
- H5Gget_objinfo
fills in the following data structure
- (defined in H5Gpublic.h):
-
- typedef struct H5G_stat_t { - unsigned long fileno; - haddr_t objno; - unsigned nlink; - H5G_obj_t type; - time_t mtime; - size_t linklen; - H5O_stat_t ohdr; - } H5G_stat_t -- - where H5O_stat_t (defined in H5Opublic.h) is: - -
- typedef struct H5O_stat_t { - hsize_t size; - hsize_t free; - unsigned nmesgs; - unsigned nchunks; - } H5O_stat_t -- The
fileno
and objno
fields contain
- four values which uniquely identify an object among those
- HDF5 files which are open: if all four values are the same
- between two objects, then the two objects are the same
- (provided both files are still open).
- If the file is closed and reopened, the value of fileno
- will change. Unless two H5Fopen
- calls referencing the same file
- actually open the same file, each will get a different
- fileno
- .
-
- The nlink
field is the number of hard links to
- the object or zero when information is being returned about a
- symbolic link (symbolic links do not have hard links but
- all other objects always have at least one).
-
- The type
field contains the type of the object,
- one of
- H5G_GROUP
,
- H5G_DATASET
,
- H5G_LINK
, or
- H5G_TYPE
.
-
- The mtime
field contains the modification time.
-
- If information is being returned about a symbolic link then
- linklen
will be the length of the link value
- (the name of the pointed-to object with the null terminator);
- otherwise linklen
will be zero.
-
- The fields in the H5O_stat_t
struct contain information
- about the object header for the object queried:
-
size
- free
- nmesgs
- nchunks
- - Other fields may be added to this structure in the future. -
mtime
value of 0 (zero).
- hid_t loc_id |
- IN: File or group identifier. |
const char *name |
- IN: Name of the object for which status is being sought. |
hbool_t follow_link |
- IN: Link flag. |
H5G_stat_t *statbuf |
- OUT: Buffer in which to return information about the object. |
statbuf
(if non-null) initialized.
- Otherwise returns a negative value.
- H5Gget_objname_by_idx
(hid_t loc_id
,
- hsize_t idx
,
- char *name
,
- size_t size
)
-H5Gget_objname_by_idx
returns a name of the object
- specified by the index idx
in the group loc_id
.
-
- The group is specified by a group identifier loc_id
.
- If preferred, a file identifier may be passed in loc_id
;
- that file's root group will be assumed.
-
- idx
is the transient index used to iterate through
- the objects in the group.
- The value of idx
is any nonnegative number less than
- the total number of objects in the group, which is returned by the
- function H5Gget_num_objs
.
- Note that this is a transient index; an object may have a
- different index each time a group is opened.
-
- The object name is returned in the user-specified buffer name
.
-
- If the size of the provided buffer name
is
- less than or equal to the actual object name length,
- the object name is truncated to size - 1
characters.
-
- Note that if the size of the object's name is unknown, a
- preliminary call to H5Gget_objname_by_idx
with name
- set to NULL will return the length of the object's name.
- A second call to H5Gget_objname_by_idx
- can then be used to retrieve the actual name.
-
hid_t loc_id |
- IN: Group or file identifier. |
hsize_t idx |
- IN: Transient index identifying object. |
char *name |
- IN/OUT: Pointer to the user-provided buffer to hold the object name. |
size_t size |
- IN: Name length. |
0
if no name is associated with the group identifier.
- Otherwise returns a negative value.
- H5Gget_objtype_by_idx
(hid_t loc_id
,
- hsize_t idx
)
-H5Gget_objtype_by_idx
returns the type of the object
- specified by the index idx
in the group loc_id
.
-
- The group is specified by a group identifier loc_id
.
- If preferred, a file identifier may be passed in loc_id
;
- that file's root group will be assumed.
-
- idx
is the transient index used to iterate through
- the objects in the group.
- This parameter is described in more detail in the discussion of
- H5Gget_objname_by_idx
.
-
- The object type is returned as the function return value:
- H5G_LINK = 0 | Object is a symbolic link. |
- H5G_GROUP = 1 | Object is a group. |
- H5G_DATASET = 2 | Object is a dataset. |
- H5G_TYPE = 3 | Object is a named datatype. |
hid_t loc_id |
- IN: Group or file identifier. |
hsize_t idx |
- IN: Transient index identifying object. |
H5Giterate
(hid_t loc_id
,
- const char *name
,
- int *idx
,
- H5G_iterate_t operator
,
- void *operator_data
- )
- H5Giterate
iterates over the members of
- name
in the file or group specified with
- loc_id
.
- For each object in the group, the operator_data
- and some additional information, specified below, are
- passed to the operator
function.
- The iteration begins with the idx
object in the
- group and the next element to be processed by the operator is
- returned in idx
. If idx
- is NULL, then the iterator starts at the first group member;
- since no stopping point is returned in this case, the iterator
- cannot be restarted if one of the calls to its operator returns
- non-zero.
-
- The prototype for H5G_iterate_t
is:
-
- | typedef herr_t (*H5G_iterate_t )
- (hid_t group_id , const char *
- member_name , void *operator_data ); |
The operation receives the group identifier for the group being
- iterated over, group_id
, the name of the current
- object within the group, member_name
, and the
- pointer to the operator data passed in to H5Giterate
,
- operator_data
.
-
- The return values from an operator are:
- Zero causes the iterator to continue, returning zero when all group members have been processed.
- A positive value causes the iterator to immediately return that positive value, indicating short-circuit success. The iterator can be restarted at the next group member.
- A negative value causes the iterator to immediately return that value, indicating failure. The iterator can be restarted at the next group member.
- H5Giterate
assumes that the membership of the group
- identified by name
remains unchanged through the
- iteration. If the membership changes during the iteration,
- the function's behavior is undefined.
-
hid_t loc_id |
- IN: File or group identifier. |
const char *name |
- IN: Group over which the iteration is performed. |
int *idx |
- IN/OUT: Location at which to begin the iteration. |
H5G_iterate_t operator |
- IN: Operation to be performed on an object at each step of - the iteration. |
void *operator_data |
- IN/OUT: Data associated with the operation. |
H5Giterate
.
- Instead, that functionality is provided by two FORTRAN functions:
-
- h5gn_members_f | Purpose: Returns the number of group members. |
- h5gget_obj_info_idx_f | Purpose: Returns name and type of the group member identified by its index. |
-SUBROUTINE h5gn_members_f(loc_id, name, nmembers, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: loc_id ! File or group identifier - CHARACTER(LEN=*), INTENT(IN) :: name ! Name of the group - INTEGER, INTENT(OUT) :: nmembers ! Number of members in the group - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5gn_members_f -- -
-SUBROUTINE h5gget_obj_info_idx_f(loc_id, name, idx, & - obj_name, obj_type, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: loc_id ! File or group identifier - CHARACTER(LEN=*), INTENT(IN) :: name ! Name of the group - INTEGER, INTENT(IN) :: idx ! Index of member object - CHARACTER(LEN=*), INTENT(OUT) :: obj_name ! Name of the object - INTEGER, INTENT(OUT) :: obj_type ! Object type : - ! H5G_LINK_F - ! H5G_GROUP_F - ! H5G_DATASET_F - ! H5G_TYPE_F - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5gget_obj_info_idx_f --
H5Glink
(hid_t loc_id
,
- H5G_link_t link_type
,
- const char *current_name
,
- const char *new_name
- )
- new_name
- to current_name
.
- H5Glink
creates a new name for an object that has some current
- name, possibly one of many names it currently has.
-
- If link_type
is H5G_LINK_HARD
, then
- current_name
must specify the name of an
- existing object and both
- names are interpreted relative to loc_id
, which is
- either a file identifier or a group identifier.
-
- If link_type
is H5G_LINK_SOFT
, then
- current_name
can be anything and is interpreted at
- lookup time relative to the group which contains the final
- component of new_name
. For instance, if
- current_name
is ./foo
,
- new_name
is ./x/y/bar
, and a request
- is made for ./x/y/bar
, then the actual object looked
- up is ./x/y/./foo
.
-
hid_t loc_id |
- IN: File or group identifier. |
H5G_link_t link_type |
- IN: Link type.
- Possible values are H5G_LINK_HARD and
- H5G_LINK_SOFT . |
const char * current_name |
- IN: Name of the existing object if link is a hard link. - Can be anything for the soft link. |
const char * new_name |
- IN: New name for the object. |
-SUBROUTINE h5glink_f(loc_id, link_type, current_name, new_name, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: loc_id ! File or group location identifier - INTEGER, INTENT(IN) :: link_type ! Link type, possible values are: - ! H5G_LINK_HARD_F - ! H5G_LINK_SOFT_F - CHARACTER(LEN=*), INTENT(IN) :: current_name - ! Current object name relative - ! to loc_id - CHARACTER(LEN=*), INTENT(IN) :: new_name ! New object name - INTEGER, INTENT(OUT) :: hdferr ! Error code - -END SUBROUTINE h5glink_f -- - -
H5Glink2
(
- hid_t curr_loc_id
, const char *current_name
,
- H5G_link_t link_type
,
- hid_t new_loc_id
, const char *new_name
)
- new_name
- to current_name
.
- H5Glink2
creates a new name for an object that has some current
- name, possibly one of many names it currently has.
-
- If link_type
is H5G_LINK_HARD
, then current_name
- must specify the name of an existing object.
- In this case, current_name
and new_name
are interpreted
- relative to curr_loc_id
and new_loc_id
, respectively,
- which are either file or group identifiers.
-
- If link_type
is H5G_LINK_SOFT
, then
- current_name
can be anything and is interpreted at
- lookup time relative to the group which contains the final
- component of new_name
. For instance, if
- current_name
is ./foo
,
- new_name
is ./x/y/bar
, and a request
- is made for ./x/y/bar
, then the actual object looked
- up is ./x/y/./foo
.
-
hid_t curr_loc_id |
- IN: The file or group identifier for the original object. |
const char * current_name |
- IN: Name of the existing object if link is a hard link. - Can be anything for the soft link. |
H5G_link_t link_type |
- IN: Link type.
- Possible values are H5G_LINK_HARD and
- H5G_LINK_SOFT . |
hid_t new_loc_id |
- IN: The file or group identifier for the new link. | const char * new_name |
- IN: New name for the object. | -
-SUBROUTINE h5glink2_f(cur_loc_id, cur_name, link_type, new_loc_id, new_name, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: cur_loc_id ! File or group location identifier - CHARACTER(LEN=*), INTENT(IN) :: cur_name ! Name of the existing object - ! is relative to cur_loc_id - ! Can be anything for the soft link - INTEGER, INTENT(IN) :: link_type ! Link type, possible values are: - ! H5G_LINK_HARD_F - ! H5G_LINK_SOFT_F - INTEGER(HID_T), INTENT(IN) :: new_loc_id ! New location identifier - CHARACTER(LEN=*), INTENT(IN) :: new_name ! New object name - INTEGER, INTENT(OUT) :: hdferr ! Error code - -END SUBROUTINE h5glink2_f -- - -
H5Gmove
(hid_t loc_id
,
- const char *src_name
,
- const char *dst_name
- )
- H5Gmove
renames an object within an HDF5 file.
- The original name, src_name
, is unlinked from the
- group graph and the new name, dst_name
, is inserted
- as an atomic operation. Both names are interpreted relative
- to loc_id
, which is either a file or a group
- identifier.
- H5Gmove
.
- See The Group Interface
- in the HDF5 User's Guide.
- hid_t loc_id |
- IN: File or group identifier. |
const char *src_name |
- IN: Object's original name. |
const char *dst_name |
- IN: Object's new name. |
-SUBROUTINE h5gmove_f(loc_id, name, new_name, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: loc_id ! File or group identifier - CHARACTER(LEN=*), INTENT(IN) :: name ! Original name of an object - CHARACTER(LEN=*), INTENT(IN) :: new_name ! New name of an object - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5gmove_f -- - -
H5Gmove2
( hid_t src_loc_id
,
- const char *src_name
, hid_t dst_loc_id
,
- const char *dst_name
)
- H5Gmove2
renames an object within an HDF5 file. The original
- name, src_name
, is unlinked from the group graph and the new
- name, dst_name
, is inserted as an atomic operation.
- -
src_name
and dst_name
are interpreted relative to
- src_loc_id
and dst_loc_id
, respectively,
- which are either file or group identifiers.
- H5Gmove
. See The
- Group Interface in the HDF5 User's Guide.
- hid_t src_loc_id |
- IN: Original file or group identifier. |
const char *src_name |
- IN: Object's original name. |
hid_t dst_loc_id |
- IN: Destination file or group identifier. |
const char *dst_name |
- IN: Object's new name. |
-SUBROUTINE h5gmove2_f(src_loc_id, src_name, dst_loc_id, dst_name, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: src_loc_id ! File or group identifier - CHARACTER(LEN=*), INTENT(IN) :: src_name ! Original name of an object - ! relative to src_loc_id - INTEGER(HID_T), INTENT(IN) :: dst_loc_id ! File or group identifier - CHARACTER(LEN=*), INTENT(IN) :: dst_name ! New name of an object - ! relative to dst_loc_id - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5gmove2_f -- - -
H5Gopen
(hid_t loc_id
,
- const char *name
- )
- H5Gopen
opens an existing group with the specified
- name at the specified location, loc_id
.
- - The location is identified by a file or group identifier. -
- H5Gopen
returns a group identifier for the group
- that was opened. This group identifier should be released by
- calling H5Gclose
when it is no longer needed.
-
hid_t loc_id |
- IN: File or group identifier within which group is to be open. |
const char * name |
- IN: Name of group to open. |
-SUBROUTINE h5gopen_f(loc_id, name, gr_id, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: loc_id ! File or group identifier - CHARACTER(LEN=*), INTENT(IN) :: name ! Name of the group to open - INTEGER(HID_T), INTENT(OUT) :: gr_id ! Group identifier - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5gopen_f -- - -
H5Gset_comment
(hid_t loc_id
,
- const char *name
,
- const char *comment
- )
- H5Gset_comment
sets the comment for the
- object specified by loc_id
and name
- to comment
.
- Any previously existing comment is overwritten.
-
- If comment
is the empty string or a
- null pointer, the comment message is removed from the object.
-
- Comments should be relatively short, null-terminated, - ASCII strings. -
- Comments can be attached to any object that has an object header, - e.g., datasets, groups, named datatypes, and dataspaces, but - not symbolic links. -
hid_t loc_id |
- IN: Identifier of the file, group, dataset, - or named datatype. |
const char *name |
- IN: Name of the object whose comment is to be
- set or reset.
- - name can be '.' (dot) if loc_id
- fully specifies the object for which the comment is to be set.
- - name is ignored if loc_id
- is a dataset or named datatype.
- |
const char *comment |
- IN: The new comment. |
-SUBROUTINE h5gset_comment_f(loc_id, name, comment, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: loc_id ! File, group, dataset, or - ! named datatype identifier - CHARACTER(LEN=*), INTENT(IN) :: name ! Name of object - CHARACTER(LEN=*), INTENT(IN) :: comment ! Comment for the object - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5gset_comment_f -- - -
H5Gunlink
(hid_t loc_id
,
- const char *name
- )
- H5Gunlink
removes the object specified by
- name
from the group graph and decrements the
- link count for the object to which name
points.
- This action eliminates any association between name
- and the object to which name
pointed.
- - Object headers keep track of how many hard links refer to an object; - when the link count reaches zero, the object can be removed - from the file. Objects which are open are not removed until all - identifiers to the object are closed. -
- If the link count reaches zero, all file space associated with - the object will be released, i.e., identified in memory as freespace. - If any object identifier is open for the object, the space - will not be released until after the object identifier is closed. -
- Note that space identified as freespace is available for re-use - only as long as the file remains open; once a file has been - closed, the HDF5 library loses track of freespace. See - “Freespace Management” - in the HDF5 User's Guide for further details. -
H5Gunlink
.
- See The Group Interface
- in the HDF5 User's Guide.
- hid_t loc_id |
- IN: Identifier of the file or group containing the object. |
const char * name |
- IN: Name of the object to unlink. |
-SUBROUTINE h5gunlink_f(loc_id, name, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: loc_id ! File or group identifier - CHARACTER(LEN=*), INTENT(IN) :: name ! Name of the object to unlink - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5gunlink_f -- - -
-The C Interface:
H5Iclear_type
(H5I_type_t type
,
- hbool_t force
)
-H5Iclear_type
deletes all IDs of the type identified by the argument type.
- - The type’s free function is first called on all of these IDs to free their memory, - then they are removed from the type. - -
- If the force
flag is set to false, only those IDs whose reference
- counts are equal to 1 will be deleted, and all other IDs will be entirely unchanged.
- If the force
flag is true, all IDs of this type will be deleted.
-
H5I_type_t type |
- IN: Identifier of ID type which is to be cleared of IDs | -
hbool_t force |
- IN: Whether or not to force deletion of all IDs | -
H5Idec_ref
(hid_t obj_id
)
-H5Idec_ref
decrements the reference count of the object
- identified by obj_id
.
-
- - The reference count for an object ID is attached to the information - about an object in memory and has no relation to the number of links to - an object on disk. - -
- The reference count for a newly created object will be 1.
- Reference counts for objects may be explicitly modified with this
- function or with H5Iinc_ref
.
- When an object ID's reference count reaches zero, the object will be
- closed.
- Calling an object ID's 'close' function decrements the reference count
- for the ID which normally closes the object, but
- if the reference count for the ID has been incremented with
- H5Iinc_ref
, the object will only be closed when the
- reference count
- reaches zero with further calls to this function or the
- object ID's 'close' function.
-
-
- If the object ID was created by a collective parallel call (such as
- H5Dcreate
, H5Gopen
, etc.), the reference
- count should be modified by all the processes which have copies of
- the ID. Generally this means that group, dataset, attribute, file
- and named datatype IDs should be modified by all the processes and
- that all other types of IDs are safe to modify by individual processes.
-
-
- This function is of particular value when an application is maintaining - multiple copies of an object ID. The object ID can be incremented when - a copy is made. Each copy of the ID can then be safely closed or - decremented and the HDF5 object will be closed when the reference count - for that object drops to zero. -
hid_t obj_id |
- IN: Object identifier whose reference count will be modified. |
-SUBROUTINE h5idec_ref_f(obj_id, ref_count, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: obj_id !Object identifier - INTEGER, INTENT(OUT) :: ref_count !Reference count of object ID - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success, and -1 on failure -END SUBROUTINE h5idec_ref_f -- - -
H5Idec_type_ref
(H5I_type_t type
)
-H5Idec_type_ref
decrements the reference count on an ID type.
- The reference count is used by the library to indicate when an ID type can
- be destroyed. If the reference count reaches zero, this function will destroy it.
-
- The type
parameter is the identifier for the ID type whose
- reference count is to be decremented. This identifier must have been
- created by a call to H5Iregister_type
.
-
H5I_type_t type |
- IN: The identifier of the type whose reference count is to be decremented | -
H5Idestroy_type
(H5I_type_t type
)
-type
and all IDs within that type.
-H5Idestroy_type
deletes an entire ID type. All IDs of this
- type are destroyed and no new IDs of this type can be registered.
-
- - The type’s free function is called on all of the IDs which are deleted by - this function, freeing their memory. In addition, all memory used by this - type’s hash table is freed. - -
- Since the H5I_type_t values of destroyed ID types are reused
- when new types are registered, it is a good idea to set the variable
- holding the value of the destroyed type to H5I_UNINIT
.
-
H5I_type_t type |
- IN: Identifier of ID type which is to be destroyed | -
H5Iget_file_id
(hid_t obj_id
)
-H5Iget_file_id
returns the identifier of the file
- associated with the object referenced by obj_id
.
-
- obj_id
can be a file, group, dataset, named datatype,
- or attribute identifier.
-
- Note that the HDF5 Library permits an application to close a file
- while objects within the file remain open.
- If the file containing the object obj_id
- is still open, H5Iget_file_id
will retrieve the
- existing file identifier.
- If there is no existing file identifier for the file,
- i.e., the file has been closed,
- H5Iget_file_id
will reopen the file and
- return a new file identifier.
- In either case, the file identifier must eventually be released
- using H5Fclose
.
-
hid_t obj_id |
- IN: Identifier of the object whose associated - file identifier will be returned. | -
-SUBROUTINE h5iget_file_id_f(obj_id, file_id, hdferr) - - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: obj_id ! Object identifier - INTEGER(HID_T), INTENT(OUT) :: file_id ! File identifier - INTEGER, INTENT(OUT) :: hdferr ! Error code - -END SUBROUTINE h5iget_file_id_f -- - -
H5Iget_name
(hid_t obj_id
,
- char *name
,
- size_t size
- )
-H5Iget_name
retrieves a name for the object identified
- by obj_id
.
-
- Up to size
characters of the name are returned in
- name
; additional characters, if any, are not returned
- to the user application.
-
- If the length of the name, which determines the required
- value of size
, is unknown, a preliminary
- H5Iget_name
call can be made.
- The return value of this call will be the size of the
- object name.
- That value can then be assigned to size
- for a second H5Iget_name
call,
- which will retrieve the actual name.
-
- If there is no name associated with the object identifier
- or if the name is NULL
, H5Iget_name
- returns 0
(zero).
-
- Note that an object in an HDF5 file may have multiple names, - varying according to the path through the HDF5 group - hierarchy used to reach that object. -
hid_t obj_id |
- IN: Identifier of the object. - This identifier can refer to a group, dataset, or named datatype. |
char *name |
- OUT: A name associated with the identifier. |
size_t size | -IN: The size of the name buffer. |
0
(zero) if no name is associated with the identifier.
- Otherwise returns a negative value.
--SUBROUTINE h5iget_name_f(obj_id, buf, buf_size, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: obj_id ! Object identifier - CHARACTER(LEN=*), INTENT(OUT) :: buf ! Buffer to hold object name - INTEGER(SIZE_T), INTENT(IN) :: buf_size ! Buffer size - INTEGER(SIZE_T), INTENT(OUT) :: name_size ! Name size - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success, and -1 on failure -END SUBROUTINE h5iget_name_f -- - -
H5Iget_ref
(hid_t obj_id
)
-H5Iget_ref
retrieves the reference count of the object
- identified by obj_id
.
-
- - The reference count for an object ID is attached to the information - about an object in memory and has no relation to the number of links to - an object on disk. - -
- This function can also be used to check if an object ID is still valid. - A non-negative return value from this function indicates that the ID - is still valid. -
hid_t obj_id |
- IN: Object identifier whose reference count will be retrieved. |
-SUBROUTINE h5iget_ref_f(obj_id, ref_count, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: obj_id !Object identifier - INTEGER, INTENT(OUT) :: ref_count !Reference count of object ID - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success, and -1 on failure -END SUBROUTINE h5iget_ref_f -- - -
H5Iget_type
(hid_t obj_id
)
-H5Iget_type
retrieves the type of the object
- identified by obj_id
.
- - Valid types returned by the function are -
H5I_FILE
- | File |
H5I_GROUP
- | Group |
H5I_DATATYPE
- | Datatype |
H5I_DATASPACE
- | Dataspace |
H5I_DATASET
- | Dataset |
H5I_ATTR
- | Attribute |
H5I_BADID
- | Invalid identifier |
- This function is of particular value in determining the
- type of object closing function (H5Dclose
,
- H5Gclose
, etc.) to call after a call to
- H5Rdereference
.
-
hid_t obj_id |
- IN: Object identifier whose type is to be determined. |
H5I_BADID
.
-
-SUBROUTINE h5iget_type_f(obj_id, type, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: obj_id !Object identifier - INTEGER, INTENT(OUT) :: type !type of an object. - !possible values are: - !H5I_FILE_F - !H5I_GROUP_F - !H5I_DATATYPE_F - !H5I_DATASPACE_F - !H5I_DATASET_F - !H5I_ATTR_F - !H5I_BADID_F - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success, and -1 on failure -END SUBROUTINE h5iget_type_f -- - -
H5Iget_type_ref
(H5I_type_t type
)
-H5Iget_type_ref
retrieves the reference count on an ID type.
- The reference count is used by the library to indicate when an
- ID type can be destroyed.
-
-
- The type
parameter is the identifier for the ID type whose
- reference count is to be retrieved. This identifier must have been created
- by a call to H5Iregister_type
.
-
H5I_type_t type |
- IN: The identifier of the type whose reference count is to be retrieved | -
H5Iinc_ref
(hid_t obj_id
)
-H5Iinc_ref
increments the reference count of the object
- identified by obj_id
.
-
- - The reference count for an object ID is attached to the information - about an object in memory and has no relation to the number of links to - an object on disk. - -
- The reference count for a newly created object will be 1.
- Reference counts for objects may be explicitly modified with this
- function or with H5Idec_ref
.
- When an object ID's reference count reaches zero, the object will be
- closed.
- Calling an object ID's 'close' function decrements the reference count
- for the ID which normally closes the object, but
- if the reference count for the ID has been incremented with this
- function, the object will only be closed when the reference count
- reaches zero with further calls to H5Idec_ref
or the
- object ID's 'close' function.
-
-
- If the object ID was created by a collective parallel call (such as
- H5Dcreate
, H5Gopen
, etc.), the reference
- count should be modified by all the processes which have copies of
- the ID. Generally this means that group, dataset, attribute, file
- and named datatype IDs should be modified by all the processes and
- that all other types of IDs are safe to modify by individual processes.
-
-
- This function is of particular value when an application is maintaining - multiple copies of an object ID. The object ID can be incremented when - a copy is made. Each copy of the ID can then be safely closed or - decremented and the HDF5 object will be closed when the reference count - for that object drops to zero. -
hid_t obj_id |
- IN: Object identifier whose reference count will be modified. |
-SUBROUTINE h5iinc_ref_f(obj_id, ref_count, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: obj_id !Object identifier - INTEGER, INTENT(OUT) :: ref_count !Reference count of object ID - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success, and -1 on failure -END SUBROUTINE h5iinc_ref_f -- - -
H5Iinc_type_ref
(H5I_type_t type
)
-H5Iinc_type_ref
increments the reference count on an ID type.
- The reference count is used by the library to indicate when an ID type can be destroyed.
-
-
- The type
parameter is the identifier for the ID type whose
- reference count is to be incremented. This identifier must have been created
- by a call to H5Iregister_type
.
-
H5I_type_t type |
- IN: The identifier of the type whose reference count is to be incremented | -
H5Inmembers
(H5I_type_t type
)
-H5Inmembers
returns the number of IDs of a given ID type.
- If no IDs of this type have been registered, H5Inmembers returns 0.
- If the type does not exist or has been destroyed, H5Inmembers
- also returns 0.
-H5I_type_t type |
- IN: Identifier of ID type whose member count will be retrieved | -
H5Iobject_verify
(hid_t id
,
- H5I_type_t id_type
)
-H5Iobject_verify
returns a pointer to the memory referenced by
- id
after verifying that id
is of type id_type
.
- This function is analogous to dereferencing a pointer in C with type checking.
-
-
- H5Iregister
(H5I_type_t type
,
- void *object
) takes an H5I_type_t and a
- void pointer to an object, returning an hid_t of that type.
- This hid_t can then be passed to H5Iobject_verify
- along with its type to retrieve the object.
-
-
- H5Iobject_verify
does not change the ID it is called on in any
- way (as opposed to H5Iremove_verify
, which removes the ID from its
- type’s hash table).
-
hid_t id |
- IN: ID to be dereferenced | -
H5I_type_t type |
- IN: ID type to which id should belong | -
NULL
on failure.
-H5Iregister
(H5I_type_t type
,
- void *object
)
-H5Iregister
allocates space for a new ID and returns an identifier for it.
-
-
- The type
parameter is the identifier for the ID type to which
- this new ID will belong. This identifier must have been created by a call
- to H5Iregister_type
.
-
-
- The object
parameter is a pointer to the memory which the new
- ID will be a reference to. This pointer will be stored by the library and
- returned to you via a call to H5Iobject_verify
.
-
H5I_type_t type |
- IN: The identifier of the type to which the new ID will belong | -
void *object |
- IN: Pointer to memory for the library to store | -
H5Iregister_type
(size_t
- hash_size
, unsigned reserved
,
- H5I_free_t free_func
)
-H5Iregister_type
allocates space for a new ID type and
- returns an identifier for it.
-
-
- The hash_size
parameter indicates the minimum size of the hash
- table used to store IDs in the new type.
-
-
- The reserved
parameter indicates the number of IDs in this new
- type to be reserved. Reserved IDs are valid IDs which are not associated with
- any storage within the library.
-
-
- The free_func
parameter is a function pointer to a function
- which returns an herr_t and accepts a void *. The purpose
- of this function is to deallocate memory for a single ID. It will be called
- by H5Iclear_type
and H5Idestroy_type
on each ID.
- This function is NOT called by H5Iremove_verify
.
- The void * will be the same pointer which was passed in to
- the H5Iregister
function. The free_func
- function should return 0 on success and -1 on failure.
-
size_t hash_size |
- IN: Size of the hash table (in entries) used to store IDs for the new type | -
unsigned reserved |
- IN: Number of reserved IDs for the new type | -
H5I_free_t free_func |
- IN: Function used to deallocate space for a single ID | -
H5Iremove_verify
(hid_t id
,
- H5I_type_t id_type
)
-H5Iremove_verify
first ensures that id
belongs to
- id_type
. If so, it removes id
from internal storage
- and returns the pointer to the memory it referred to. This pointer is the
- same pointer that was placed in storage by H5Iregister
.
- If id
does not belong to id_type
,
- then NULL
is returned.
-
-
- The id
parameter is the ID which is to be removed from
- internal storage. Note: this function does NOT deallocate the memory that
- id
refers to. The pointer returned by H5Iregister
- must be deallocated by the user to avoid memory leaks.
-
-
- The type
parameter is the identifier for the ID type
- which id
is supposed to belong to. This identifier must
- have been created by a call to H5Iregister_type
.
-
hid_t id |
- IN: The ID to be removed from internal storage | -
H5I_type_t type |
- IN: The identifier of the type whose reference count is to be retrieved | -
id
- on success, NULL
on failure.
-H5Isearch
(H5I_type_t type
,
- H5I_search_func_t func
, void *key
)
-H5Isearch
searches through a given ID type to find an object
- that satisfies the criteria defined by func
. If such an object
- is found, the pointer to the memory containing this object is returned.
- Otherwise, NULL
is returned. To do this, func
is
- called on every member of type
. The first member to satisfy
- func
is returned.
-
-
- The type
parameter is the identifier for the ID type which is
- to be searched. This identifier must have been created by a call to
- H5Iregister_type
.
-
-
- The parameter func
is a function pointer to a function
- which takes three parameters. The first parameter is a void *.
- It will be a pointer to the object to be tested. This is the same object
- that was placed in storage using H5Iregister
. The second
- parameter is a hid_t. It is the ID of the object to be tested.
- The last parameter is a void *. This is the key
parameter
- and can be used however the user finds helpful. Or it can simply be ignored
- if it is not needed. func
returns 0 if the object it is testing
- does not pass its criteria. A non-zero value should be returned if the object
- does pass its criteria.
-
-
- The key
parameter will be passed to the search function as a
- parameter. It can be used to further define the search at run-time.
-
H5I_type_t type |
- IN: The identifier of the type to be searched | -
H5I_search_func_t func |
- IN: The function defining the search criteria | -
void *key |
- IN: A key for the search function | -
NULL
on failure.
-
-HDF5 documents and links -Introduction to HDF5 -HDF5 User Guide - - |
-
-And in this document, the
-HDF5 Reference Manual
- -H5IM -H5LT -H5PT -H5TB - -H5 -H5A -H5D -H5E -H5F -H5G -H5I -H5P - -H5R -H5S -H5T -H5Z -Tools -Datatypes - |
-The C Interfaces: - - -
- - -Alphabetical Listing -
H5Pall_filters_avail
(hid_t dcpl_id
)
- H5Pall_filters_avail
verifies that all of the filters
- set in the dataset creation property list dcpl_id
are
- currently available.
- hid_t dcpl_id |
- IN: Dataset creation property list identifier. |
TRUE
if all filters are available
- and FALSE
if one or more is not currently available.FAIL
, a negative value, on error.
-H5Pclose
(hid_t plist
- )
- H5Pclose
terminates access to a property list.
- All property lists should be closed when the application is
- finished accessing them.
- This frees resources used by the property list.
- hid_t plist |
- IN: Identifier of the property list to terminate access to. |
-SUBROUTINE h5pclose_f(prp_id, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pclose_f -- - -
H5Pclose_class
(
- hid_t class
- )
-
- H5Pclose_class closes an existing property list class.
- Existing property lists of this class will continue to exist, but new ones cannot be created.
hid_t class |
- IN: Property list class to close |
-SUBROUTINE h5pclose_class_f(class, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: class ! Property list class identifier - ! to close - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pclose_class_f -- - -
H5Pclose_list
(
- hid_t plist
- )
-
- H5Pclose_list
closes a property list.
-
-
- If a close
callback exists for the property list class,
- it is called before the property list is destroyed.
- If close
callbacks exist for any individual properties
- in the property list, they are called after the class
- close
callback.
-
-
hid_t plist | - | IN: Property list to close |
-SUBROUTINE h5pclose_list_f(plist, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: plist ! Property list identifier to close - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pclose_list_f -- - -
H5Pcopy
(hid_t plist
- )
- H5Pcopy
copies an existing property list to create
- a new property list.
- The new property list has the same properties and values
- as the original property list.
- hid_t plist |
- IN: Identifier of property list to duplicate. |
-SUBROUTINE h5pcopy_f(prp_id, new_prp_id, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER(HID_T), INTENT(OUT) :: new_prp_id ! Identifier of property list - ! copy - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pcopy_f -- - -
H5Pcopy_prop
(
- hid_t dst_id
,
- hid_t src_id
,
- const char *name
- )
-
- H5Pcopy_prop
copies a property from one property
- list or class to another.
-
- If a property is copied from one class to another, the property information is first deleted from the destination class and then copied from the source class into the destination class.
- If a property is copied from one list to another, the property
- will be first deleted from the destination list (generating a call
- to the close
callback for the property, if one exists)
- and then the property is copied from the source list to the
- destination list (generating a call to the copy
- callback for the property, if one exists).
-
-
- If the property does not exist in the class or list, this call is
- equivalent to calling H5Pregister
or H5Pinsert
- (for a class or list, as appropriate) and the create
- callback will be called in the case of the property being
- copied into a list (if such a callback exists for the property).
-
-
hid_t dst_id |
- IN: Identifier of the destination property list or - class |
hid_t src_id |
- IN: Identifier of the source property list or class |
const char *name |
- IN: Name of the property to copy |
-SUBROUTINE h5pcopy_prop_f(dst_id, src_id, name, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: dst_id ! Destination property list - ! identifier - INTEGER(HID_T), INTENT(IN) :: src_id ! Source property list identifier - CHARACTER(LEN=*), INTENT(IN) :: name ! Property name - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pcopy_prop_f -- - -
H5Pcreate
(hid_t cls_id
- )
- H5Pcreate
creates a new property list as an instance of some
- property list class. The new property list is initialized
- with default values for the specified class. The classes are:
- H5P_FILE_CREATE
- H5P_FILE_ACCESS
- H5P_DATASET_CREATE
- H5P_DATASET_XFER
- H5P_MOUNT
- H5Pcreate
- creates and returns a new mount property list
- initialized with default values.
-
- This property list must eventually be closed with
- H5Pclose
;
- otherwise, errors are likely to occur.
-
hid_t cls_id |
- IN: The class of the property list to create. |
plist
) if successful;
- otherwise Fail (-1).
- -SUBROUTINE h5pcreate_f(classtype, prp_id, hdferr) - IMPLICIT NONE - INTEGER, INTENT(IN) :: classtype ! The type of the property list - ! to be created - ! Possible values are: - ! H5P_FILE_CREATE_F - ! H5P_FILE_ACCESS_F - ! H5P_DATASET_CREATE_F - ! H5P_DATASET_XFER_F - ! H5P_MOUNT_F - INTEGER(HID_T), INTENT(OUT) :: prp_id ! Property list identifier - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pcreate_f -- - -
H5Pcreate_class
(
- hid_t class
,
- const char *name
,
- H5P_cls_create_func_t create
,
- H5P_cls_copy_func_t copy
,
- H5P_cls_close_func_t close
- )
-
- H5Pcreate_class
registers a new property list class
- with the library.
- The new property list class can inherit from an existing property
- list class or may be derived from the default "empty" class.
- New classes with inherited properties from existing classes
- may not remove those existing properties, only add or remove
- their own class properties.
-
-
- The create
routine is called when a new property list
- of this class is being created.
- The H5P_cls_create_func_t
callback function is defined
- as follows:
- H5P_cls_create_func_t
)(
- hid_t prop_id
,
- void * create_data
- );
- hid_t prop_id |
- IN: The identifier of the property list being created |
void * create_data |
- IN/OUT: User pointer to any class creation information needed |
create
routine is called after any registered
- create
function is called for each property value.
- If the create
routine returns a negative value,
- the new list is not returned to the user and the
- property list creation routine returns an error value.
-
-
- The copy
routine is called when an existing property list
- of this class is copied.
- The H5P_cls_copy_func_t
callback function
- is defined as follows:
- H5P_cls_copy_func_t
)(
- hid_t prop_id
,
- void * copy_data
- );
- hid_t prop_id |
- IN: The identifier of the property list created by copying |
void * copy_data |
- IN/OUT: User pointer to any class copy information needed |
copy
routine is called after any registered
- copy
function is called for each property value.
- If the copy
routine returns a negative value, the new list
- is not returned to the user and the property list copy routine returns
- an error value.
-
-
- The close
routine is called when a property list of this
- class
- is being closed.
- The
H5P_cls_close_func_t callback function is defined
- as follows:
- H5P_cls_close_func_t
)(
- hid_t prop_id
,
- void * close_data
- );
- hid_t prop_id |
- IN: The identifier of the property list being closed |
void * close_data |
- IN/OUT: User pointer to any class close information needed |
close
routine is called before any registered
- close
function is called for each property value.
- If the close
routine returns a negative value,
- the property list close routine returns an error value
- but the property list is still closed.
-
- hid_t class |
- IN: Property list class to inherit from. |
const char *name |
- IN: Name of property list class to register |
H5P_cls_create_func_t create |
- IN: Callback routine called when a property list is created |
H5P_cls_copy_func_t copy |
- IN: Callback routine called when a property list is copied |
H5P_cls_close_func_t close |
- IN: Callback routine called when a property list is being closed |
-SUBROUTINE h5pcreate_class_f(parent, name, class, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: parent ! Parent property list class - ! identifier - ! Possible values include: - ! H5P_NO_CLASS_F - ! H5P_FILE_CREATE_F - ! H5P_FILE_ACCESS_F - ! H5P_DATASET_CREATE_F - ! H5P_DATASET_XFER_F - ! H5P_MOUNT_F - CHARACTER(LEN=*), INTENT(IN) :: name ! Name of property to create - INTEGER(HID_T), INTENT(OUT) :: class ! Property list class identifier - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pcreate_class_f -- - -
H5Pcreate_list
(
- hid_t class
)
-
- H5Pcreate_list
creates a new property list of a
- given class. If a create
callback exists for the
- property list class, it is called before the property list
- is passed back to the user.
- If create
callbacks exist for any individual properties
- in the property list, they are called before the class
- create
callback.
-
- hid_t class; |
- IN: Class of property list to create. |
H5Premove_filter
(hid_t plist
,
- H5Z_filter_t filter
- )
- H5Premove_filter
removes the specified
- filter
from the filter pipeline in the
- dataset creation property list plist
.
-
- The filter
parameter specifies the filter to be removed.
- Valid values for use in filter
are as follows:
-
-
- H5Z_FILTER_ALL
- | - Removes all filters from the permanent filter pipeline. - |
- H5Z_FILTER_DEFLATE
- | - Data compression filter, employing the gzip algorithm - |
- H5Z_FILTER_SHUFFLE
- | - Data shuffling filter - |
- H5Z_FILTER_FLETCHER32
- | - Error detection filter, employing the Fletcher32 checksum algorithm - |
- H5Z_FILTER_SZIP
- | - Data compression filter, employing the SZIP algorithm - |
- Additionally, user-defined filters can be removed with this routine by passing the filter identifier with which they were registered with the HDF5 library.
- Attempting to remove a filter that is not in the permanent filter pipeline is an error.
plist
must be a dataset creation
- property list.
hid_t plist |
- IN: Dataset creation property list identifier. |
H5Z_filter_t filter |
- IN: Filter to be removed. |
- -SUBROUTINE h5premove_filter_f(prp_id, filter, hdferr) - - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Dataset creation property - ! list identifier - INTEGER, INTENT(IN) :: filter ! Filter to be removed - ! Valid values are: - ! H5Z_FILTER_ALL_F - ! H5Z_FILTER_DEFLATE_F - ! H5Z_FILTER_SHUFFLE_F - ! H5Z_FILTER_FLETCHER32_F - ! H5Z_FILTER_SZIP_F - ! - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success, -1 on failure -END SUBROUTINE h5premove_filter_f -- - -
H5Pequal
(
- hid_t id1,
- hid_t id2
- )
-
- H5Pequal
compares two property lists or classes
- to determine whether they are equal to one another.
-
-
- Either both id1
and id2
must be
- property lists or both must be classes; comparing a list to a
- class is an error.
-
-
hid_t id1 |
- IN: First property object to be compared |
hid_t id2 |
- IN: Second property object to be compared |
-SUBROUTINE h5pequal_f(plist1_id, plist2_id, flag, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: plist1_id ! Property list identifier - INTEGER(HID_T), INTENT(IN) :: plist2_id ! Property list identifier - LOGICAL, INTENT(OUT) :: flag ! Flag - ! .TRUE. if lists are equal - ! .FALSE. otherwise - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pequal_f -- - -
H5Pexist
(
- hid_t id
;
- const char *name
- )
-
- H5Pexist
determines whether a property exists
- within a property list or class.
-
- hid_t id |
- IN: Identifier for the property to query |
const char *name |
- IN: Name of property to check for |
-SUBROUTINE h5pexist_f(prp_id, name, flag, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - CHARACTER(LEN=*), INTENT(IN) :: name ! Name of property to modify - LOGICAL, INTENT(OUT) :: flag ! Logical flag - ! .TRUE. if exists - ! .FALSE. otherwise - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pexist_f -- - -
H5Pfill_value_defined
(hid_t plist_id
,
- H5D_fill_value_t *status
- )
- H5Pfill_value_defined
determines whether a fill value
- is defined in the dataset creation property list plist_id
.
-
- Valid values returned in status
are as follows:
-
- H5D_FILL_VALUE_UNDEFINED
- | - Fill value is undefined. - | |
- H5D_FILL_VALUE_DEFAULT
- | - Fill value is the library default. - | |
- H5D_FILL_VALUE_USER_DEFINED
- | - Fill value is defined by the application. - |
H5Pfill_value_defined
is designed for use in
- concert with the dataset fill value properties functions
- H5Pget_fill_value
and H5Pget_fill_time
.
- - See H5Dcreate for - further cross-references. -
hid_t plist_id |
- IN: Dataset creation property list identifier. |
H5D_fill_value_t *status |
- OUT: Status of fill value in property list. |
H5Pget
(
- hid_t plid
,
- const char *name
,
- void *value
- )
-
- H5Pget
retrieves a copy of the value for a property
- in a property list. If there is a get
callback routine
- registered for this property, the copy of the value of the property
- will first be passed to that routine and any changes to the copy of
- the value will be used when returning the property value from this
- routine.
-
-
- This routine may be called for zero-sized properties with the
- value
set to NULL. The get
routine
- will be called with a NULL value if the callback exists.
-
-
- The property name must exist or this routine will fail.
- If the get
callback routine returns an error,
- value
will not be modified.
-
-
hid_t plid |
- IN: Identifier of the property list to query |
const char *name |
- IN: Name of property to query |
void *value |
- OUT: Pointer to a location to which to copy the value of - of the property |
-SUBROUTINE h5pget_f(plid, name, value, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: plid ! Property list identifier - CHARACTER(LEN=*), INTENT(IN) :: name ! Name of property to get - TYPE, INTENT(OUT) :: value ! Property value - ! Supported types are: - ! INTEGER - ! REAL - ! DOUBLE PRECISION - ! CHARACTER(LEN=*) - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pget_f -- - -
H5Pget_alignment
(hid_t plist
,
- hsize_t *threshold
,
- hsize_t *alignment
- )
- H5Pget_alignment
retrieves the current settings for
- alignment properties from a file access property list.
- The threshold
and/or alignment
pointers
- may be null pointers (NULL).
- hid_t plist |
- IN: Identifier of a file access property list. |
hsize_t *threshold |
- OUT: Pointer to location of return threshold value. |
hsize_t *alignment |
- OUT: Pointer to location of return alignment value. |
-SUBROUTINE h5pget_alignment_f(prp_id, threshold, alignment, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER(HSIZE_T), INTENT(OUT) :: threshold ! Threshold value - INTEGER(HSIZE_T), INTENT(OUT) :: alignment ! Alignment value - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pget_alignment_f -- - -
H5Pget_alloc_time
(hid_t plist_id
,
- H5D_alloc_time_t *alloc_time
- )
- H5Pget_alloc_time
retrieves the timing for allocating
- storage space for a dataset's raw data.
- This property is set in the dataset creation property list
- plist_id
.
-
- The timing setting is returned in alloc_time
as one of the
- following values:
-
- H5D_ALLOC_TIME_DEFAULT
- |
- Uses the default allocation time, based on the dataset storage method. - See the fill_time description in
- H5Pset_alloc_time for
- default allocation times for various storage methods.
- | |
- H5D_ALLOC_TIME_EARLY
- | - All space is allocated when the dataset is created. - | |
- H5D_ALLOC_TIME_INCR
- | - Space is allocated incrementally as data is written to the dataset. - | |
- H5D_ALLOC_TIME_LATE
- | - All space is allocated when data is first written to the dataset. - |
H5Pget_alloc_time
is designed to work in concert
- with the dataset fill value and fill value write time properties,
- set with the functions
- H5Pget_fill_value
and H5Pget_fill_time
.
- hid_t plist_id |
- IN: Dataset creation property list identifier. |
H5D_alloc_time_t *alloc_time |
- IN: When to allocate dataset storage space. |
-SUBROUTINE h5pget_alloc_time_f(plist_id, flag, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: plist_id ! Dataset creation - ! property list identifier - INTEGER(HSIZE_T), INTENT(OUT) :: flag ! Allocation time flag - ! Possible values are: - ! H5D_ALLOC_TIME_ERROR_F - ! H5D_ALLOC_TIME_DEFAULT_F - ! H5D_ALLOC_TIME_EARLY_F - ! H5D_ALLOC_TIME_LATE_F - ! H5D_ALLOC_TIME_INCR_F - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pget_alloc_time_f -- - -
H5Pget_btree_ratios
(hid_t plist
,
- double *left
,
- double *middle
,
- double *right
- )
- H5Pget_btree_ratios
returns the B-tree split ratios
- for a dataset transfer property list.
-
- The B-tree split ratios are returned through the non-NULL
- arguments left
, middle
, and right
,
- as set by the H5Pset_btree_ratios function.
-
hid_t plist |
- IN: The dataset transfer property list identifier. |
double left |
- OUT: The B-tree split ratio for left-most nodes. |
double right |
- OUT: The B-tree split ratio for right-most nodes and lone nodes. |
double middle |
- OUT: The B-tree split ratio for all other nodes. |
-SUBROUTINE h5pget_btree_ratios_f(prp_id, left, middle, right, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id - ! Property list identifier - REAL, INTENT(OUT) :: left ! B-tree split ratio for left-most nodes - REAL, INTENT(OUT) :: middle ! B-tree split ratio for all other nodes - REAL, INTENT(OUT) :: right ! The B-tree split ratio for right-most - ! nodes and lone nodes. - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pget_btree_ratios_f -- - -
H5Pget_buffer
(hid_t plist
,
- void **tconv
,
- void **bkg
- )
- H5Pget_buffer
reads values previously set
- with H5Pset_buffer.
- hid_t plist |
- IN: Identifier for the dataset transfer property list. |
void **tconv |
- OUT: Address of the pointer to application-allocated - type conversion buffer. |
void **bkg |
- OUT: Address of the pointer to application-allocated - background buffer. |
-SUBROUTINE h5pget_buffer_f(plist_id, size, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: plist_id ! Dataset transfer - ! property list identifier - INTEGER(HSIZE_T), INTENT(OUT) :: size ! Conversion buffer size - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pget_buffer_f -- - -
H5Pget_cache
(hid_t plist_id
,
- int *mdc_nelmts
,
- int *rdcc_nelmts
,
- size_t *rdcc_nbytes
,
- double *rdcc_w0
- )
- H5Pget_cache
retrieves the maximum possible
- number of elements in the meta
- data cache and raw data chunk cache, the maximum possible number of
- bytes in the raw data chunk cache, and the preemption policy value.
- Any (or all) arguments may be null pointers, in which case the corresponding datum is not returned.
hid_t plist_id |
- IN: Identifier of the file access property list. |
int *mdc_nelmts |
- IN/OUT: Number of elements (objects) in the meta data cache. |
int *rdcc_nelmts |
- IN/OUT: Number of elements (objects) in the raw data chunk cache. |
size_t *rdcc_nbytes |
- IN/OUT: Total size of the raw data chunk cache, in bytes. |
double *rdcc_w0 |
- IN/OUT: Preemption policy. |
- -SUBROUTINE h5pget_cache_f(prp_id, mdc_nelmts, rdcc_nelmts, rdcc_nbytes, - rdcc_w0, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER, INTENT(OUT) :: mdc_nelmts ! Number of elements (objects) - ! in the meta data cache - INTEGER(SIZE_T), INTENT(OUT) :: rdcc_nelmts ! Number of elements (objects) - ! in the raw data chunk cache - INTEGER(SIZE_T), INTENT(OUT) :: rdcc_nbytes ! Total size of the raw data - ! chunk cache, in bytes - REAL, INTENT(OUT) :: rdcc_w0 ! Preemption policy - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pget_cache_f -- - -
H5Pget_chunk
(hid_t plist
,
- int max_ndims
,
- hsize_t * dims
- )
- H5Pget_chunk
retrieves the size of chunks for the
- raw data of a chunked layout dataset.
- This function is only valid for dataset creation property lists.
- At most, max_ndims
elements of dims
- will be initialized.
- hid_t plist |
- IN: Identifier of property list to query. |
int max_ndims |
- IN: Size of the dims array. |
hsize_t * dims |
- OUT: Array to store the chunk dimensions. |
-SUBROUTINE h5pget_chunk_f(prp_id, ndims, dims, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER, INTENT(IN) :: ndims ! Number of chunk dimensions - ! to return - INTEGER(HSIZE_T), DIMENSION(ndims), INTENT(OUT) :: dims - ! Array containing sizes of - ! chunk dimensions - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! chunk rank on success - ! and -1 on failure -END SUBROUTINE h5pget_chunk_f -- - -
H5Pget_class
(hid_t plist
- )
- H5Pget_class
returns the property list class for the
- property list identified by the plist
parameter.
- Valid property list classes are defined in the description of
- H5Pcreate
.
- hid_t plist |
- IN: Identifier of property list to query. |
-SUBROUTINE h5pget_class_f(prp_id, classtype, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER, INTENT(OUT) :: classtype ! The type of the property list - ! to be created - ! Possible values are: - ! H5P_NO_CLASS_F - ! H5P_FILE_CREATE_F - ! H5P_FILE_ACCESS_F - ! H5P_DATASET_CREATE_F - ! H5P_DATASET_XFER_F - ! H5P_MOUNT_F - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pget_class_f -- - -
H5Pget_class_name
(
- hid_t pcid
- )
-
- H5Pget_class_name
retrieves the name of a
- generic property list class. The pointer to the name
- must be freed by the user after each successful call.
-
- hid_t pcid |
- IN: Identifier of the property class to query |
-SUBROUTINE h5pget_class_name_f(prp_id, name, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier to - ! query - CHARACTER(LEN=*), INTENT(INOUT) :: name ! Buffer to retrieve class name - INTEGER, INTENT(OUT) :: hdferr ! Error code, possible values: - ! Success: Actual length of the - ! class name - ! If provided buffer "name" is - ! smaller, than name will be - ! truncated to fit into - ! provided user buffer - ! Failure: -1 -END SUBROUTINE h5pget_class_name_f -- - -
H5Pget_class_parent
(
- hid_t pcid
- )
-
- H5Pget_class_parent
retrieves an identifier for the
- parent class of a property class.
-
- hid_t pcid |
- IN: Identifier of the property class to query |
-SUBROUTINE h5pget_class_parent_f(prp_id, parent_id, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER(HID_T), INTENT(OUT) :: parent_id ! Parent class property list - ! identifier - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pget_class_parent_f -- - -
H5Pget_data_transform
- (hid_t plist_id
,
- char *expression
,
- size_t size
)
- H5Pget_data_transform
retrieves the data
- transform expression previously set in the dataset transfer
- property list plist_id by H5Pset_data_transform
.
- H5Pget_data_transform
can be used to both
- retrieve the transform expression and to query its size.
-
- If expression
is non-NULL, up to size
- bytes of the data transform expression are written to the buffer.
- If expression
is NULL, size
is ignored
- and the function does not write anything to the buffer.
- The function always returns the size of the data transform expression.
-
- If 0
is returned for the size of the expression,
- no data transform expression exists for the property list.
-
- If an error occurs, the buffer pointed to by expression
- is unchanged and the function returns a negative value.
-
hid_t plist_id |
- IN: Identifier of the property list or class |
char *expression |
- OUT: Pointer to memory where the transform - expression will be copied |
size_t size |
- IN: Number of bytes of the transform expression - to copy to |
H5Pget_driver
(
- hid_t plist_id
- )
- H5Pget_driver
returns the identifier of the
- low-level file driver associated with the file access property list
- or data transfer property list plist_id
.
- Valid driver identifiers with the standard HDF5 library distribution include the following:
- H5FD_CORE
- H5FD_FAMILY
- H5FD_GASS
- H5FD_LOG
- H5FD_MPIO
- H5FD_MULTI
- H5FD_SEC2
- H5FD_STDIO
- H5FD_STREAM
- If a user defines and registers custom drivers, or if additional drivers are defined in an HDF5 distribution, this list will be longer.
- The returned driver identifier is only valid as long as the file driver remains registered.
hid_t plist_id |
- IN: File access or data transfer property list identifier. |
-SUBROUTINE h5pget_driver_f(prp_id, driver, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER, INTENT(OUT) :: driver ! Low-level file driver identifier - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pget_driver_f -- - -
H5Pget_dxpl_mpio
(
- hid_t dxpl_id
,
- H5FD_mpio_xfer_t *xfer_mode
- )
- H5Pget_dxpl_mpio
queries the data transfer mode
- currently set in the data transfer property list dxpl_id
.
-
- Upon return, xfer_mode
contains the data transfer mode,
- if it is non-null.
-
- H5Pget_dxpl_mpio
is not a collective function.
-
hid_t dxpl_id |
- IN: Data transfer property list identifier. |
H5FD_mpio_xfer_t *xfer_mode |
- OUT: Data transfer mode. |
-SUBROUTINE h5pget_dxpl_mpio_f(prp_id, data_xfer_mode, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER, INTENT(OUT) :: data_xfer_mode ! Data transfer mode - ! Possible values are: - ! H5FD_MPIO_INDEPENDENT_F - ! H5FD_MPIO_COLLECTIVE_F - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pget_dxpl_mpio_f -- - -
H5Pget_dxpl_multi
(
- hid_t dxpl_id
,
- const hid_t *memb_dxpl
- )
-H5Pget_dxpl_multi
returns the data transfer property list
- information for the multi-file driver.
- hid_t dxpl_id , |
- IN: Data transfer property list identifier. |
const hid_t *memb_dxpl |
- OUT: Array of data access property lists. |
H5Pget_edc_check
(hid_t plist
)
- H5Pget_edc_check
queries the dataset transfer property
- list plist
to determine whether error detection
- is enabled for data read operations.
- hid_t plist |
- IN: Dataset transfer property list identifier. |
H5Z_ENABLE_EDC
or H5Z_DISABLE_EDC
- if successful;
- otherwise returns a negative value.
- -SUBROUTINE h5pget_edc_check_f(prp_id, flag, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Dataset transfer property list - ! identifier - INTEGER, INTENT(OUT) :: flag ! EDC flag; possible values - ! H5Z_DISABLE_EDC_F - ! H5Z_ENABLE_EDC_F - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pget_edc_check_f -- - -
H5Pget_external
(hid_t plist
,
- unsigned idx
,
- size_t name_size
,
- char *name
,
- off_t *offset
,
- hsize_t *size
- )
- H5Pget_external
returns information about an external
- file. The external file is specified by its index, idx
,
- which is a number from zero to N-1, where N is the value
- returned by H5Pget_external_count
.
- At most name_size characters are copied into the name array. If the external file name, including the null terminator, is longer than name_size characters, the returned name is not null terminated (similar to strncpy()).
-
- If name_size
is zero or name
is the
- null pointer, the external file name is not returned.
- If offset
or size
are null pointers
- then the corresponding information is not returned.
-
hid_t plist |
- IN: Identifier of a dataset creation property list. |
unsigned idx |
- IN: External file index. |
size_t name_size |
- IN: Maximum length of name array. |
char *name |
- OUT: Name of the external file. |
off_t *offset |
- OUT: Pointer to a location to return an offset value. |
hsize_t *size |
- OUT: Pointer to a location to return the size of the - external file data. |
-SUBROUTINE h5pget_external_f(prp_id, idx, name_size, name, offset, bytes, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER, INTENT(IN) :: idx ! External file index. - INTEGER, INTENT(IN) :: name_size ! Maximum length of name array - CHARACTER(LEN=*), INTENT(OUT) :: name ! Name of an external file - INTEGER, INTENT(OUT) :: offset ! Offset, in bytes, from the - ! beginning of the file to the - ! location in the file where - ! the data starts. - INTEGER(HSIZE_T), INTENT(OUT) :: bytes ! Number of bytes reserved in - ! the file for the data - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pget_external_f -- - -
H5Pget_external_count
(hid_t plist
- )
- H5Pget_external_count
returns the number of external files
- for the specified dataset.
- hid_t plist |
- IN: Identifier of a dataset creation property list. |
-SUBROUTINE h5pget_external_count_f (prp_id, count, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER, INTENT(OUT) :: count ! Number of external files for - ! the specified dataset - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pget_external_count_f -- - -
H5Pget_family_offset
(
- hid_t fapl_id
,
- hsize_t *offset
- )
- H5Pget_family_offset
retrieves the value of offset
- from the file access property list fapl_id
- so that the user application
- can retrieve a file handle for low-level access to a particular member
- of a family of files. The file handle is retrieved with a separate call
- to H5Fget_vfd_handle
- (or, in special circumstances, to H5FDget_vfd_handle
;
- see Virtual File Layer and List of VFL Functions
- in HDF5 Technical Notes).
-
- The data offset returned in offset
is the offset
- of the data in the HDF5 file that is stored on disk in the selected
- member file in a family of files.
-
- Use of this function is only appropriate for an HDF5 file written as a
- family of files with the FAMILY
file driver.
-
hid_t fapl_id |
- IN: File access property list identifier. |
hsize_t *offset |
- OUT: Offset in bytes within the HDF5 file. |
H5Pget_fapl_core
(
- hid_t fapl_id
,
- size_t *increment
,
- hbool_t *backing_store
- )
- H5Pget_fapl_core
queries the H5FD_CORE
- driver properties as set by H5Pset_fapl_core
.
- hid_t fapl_id |
- IN: File access property list identifier. |
size_t *increment |
- OUT: Size, in bytes, of memory increments. |
hbool_t *backing_store |
- OUT: Boolean flag indicating whether to write the file - contents to disk when the file is closed. |
-SUBROUTINE h5pget_fapl_core_f(prp_id, increment, backing_store, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER(SIZE_T), INTENT(OUT) :: increment ! File block size in bytes - LOGICAL, INTENT(OUT) :: backing_store ! Flag to indicate that entire - ! file contents are flushed to - ! a file with the same name as - ! this core file - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pget_fapl_core_f -- - -
H5Pget_fapl_family
(
- hid_t fapl_id
,
- hsize_t *memb_size
,
- hid_t *memb_fapl_id
- )
- H5Pget_fapl_family
returns file access property list
- for use with the family driver.
- This information is returned through the output parameters.
- hid_t fapl_id |
- IN: File access property list identifier. |
hsize_t *memb_size |
- OUT: Size in bytes of each file member. |
hid_t *memb_fapl_id |
- OUT: Identifier of file access property list for each - family member. |
-SUBROUTINE h5pget_fapl_family_f(prp_id, memb_size, memb_plist, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER(HSIZE_T), INTENT(OUT) :: memb_size ! Logical size, in bytes, - ! of each family member - INTEGER(HID_T), INTENT(OUT) :: memb_plist ! Identifier of the file - ! access property list to be - ! used for each family member - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pget_fapl_family_f -- - -
H5Pget_fapl_gass
(
- hid_t fapl_id
,
- GASS_Info *info
- )
- If fapl_id is set for use of the H5FD_GASS driver,
- H5Pget_fapl_gass
returns the GASS_Info
- object through the info
pointer.
- - The GASS_Info information is not copied, so it is valid - only until the file access property list is modified or closed. -
H5Pget_fapl_gass
is an experimental function.
- It is designed for use only when accessing files via the
- GASS facility of the Globus environment.
- For further information, see
- http://www.globus.org/.
- hid_t fapl_id , |
- IN: File access property list identifier. |
GASS_Info *info |
- OUT: Pointer to the GASS information structure. |
H5Pget_fapl_mpio
(
- hid_t fapl_id
,
- MPI_Comm *comm
,
- MPI_Info *info
- )
- If fapl_id is set for use of the H5FD_MPIO driver,
- H5Pget_fapl_mpio
 returns the MPI communicator and
- information through the comm
and info
- pointers, if those values are non-null.
-
- Neither comm
nor info
is copied,
- so they are valid only until the file access property list
- is either modified or closed.
-
hid_t fapl_id |
- IN: File access property list identifier. |
MPI_Comm *comm |
- OUT: MPI-2 communicator. |
MPI_Info *info |
- OUT: MPI-2 info object. |
-SUBROUTINE h5pget_fapl_mpio_f(prp_id, comm, info, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER, INTENT(OUT) :: comm ! Buffer to return communicator - INTEGER, INTENT(OUT) :: info ! Buffer to return info object as - ! defined in MPI_FILE_OPEN of MPI-2 - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pget_fapl_mpio_f -- - -
H5Pget_fapl_mpiposix
(
- hid_t fapl_id
,
- MPI_Comm *comm
- )
- If fapl_id is set for use of the H5FD_MPIPOSIX driver,
- H5Pget_fapl_mpiposix
 returns the MPI communicator through
- the comm
 pointer, if comm is non-null.
-
- comm
is not copied, so it is valid only
- until the file access property list is either modified or closed.
-
hid_t fapl_id |
- IN: File access property list identifier. |
MPI_Comm *comm |
- OUT: MPI-2 communicator. |
-SUBROUTINE h5pget_fapl_mpiposix_f(prp_id, comm, use_gpfs, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER, INTENT(OUT) :: comm ! Buffer to return communicator - LOGICAL, INTENT(OUT) :: use_gpfs ! Flag indicating use of - ! GPFS hints - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pget_fapl_mpiposix_f -- - -
H5Pget_fapl_multi
(
- hid_t fapl_id
,
- const H5FD_mem_t *memb_map
,
- const hid_t *memb_fapl
,
- const char **memb_name
,
- const haddr_t *memb_addr
,
- hbool_t *relax
- )
- H5Pget_fapl_multi
returns information about the
- multi-file access property list.
- hid_t fapl_id |
- IN: File access property list identifier. |
const H5FD_mem_t *memb_map |
- OUT: Maps memory usage types to other memory usage types. |
const hid_t *memb_fapl |
- OUT: Property list for each memory usage type. |
const char **memb_name |
- OUT: Name generator for names of member files. |
const haddr_t *memb_addr |
- OUT: Offsets within the virtual address space at which each type of data storage begins. |
hbool_t *relax |
- OUT: Allows read-only access to incomplete file sets
- when TRUE . |
-SUBROUTINE h5pget_fapl_multi_f(prp_id, memb_map, memb_fapl, memb_name, - memb_addr, relax, hdferr) - IMPLICIT NONE - INTEGER(HID_T),INTENT(IN) :: prp_id ! Property list identifier - - INTEGER,DIMENSION(0:H5FD_MEM_NTYPES_F-1),INTENT(OUT) :: memb_map - INTEGER(HID_T),DIMENSION(0:H5FD_MEM_NTYPES_F-1),INTENT(OUT) :: memb_fapl - CHARACTER(LEN=*),DIMENSION(0:H5FD_MEM_NTYPES_F-1),INTENT(OUT) :: memb_name - REAL, DIMENSION(0:H5FD_MEM_NTYPES_F-1), INTENT(OUT) :: memb_addr - ! Numbers in the interval [0,1) (e.g. 0.0 0.1 0.5 0.2 0.3 0.4) - ! real address in the file will be calculated as X*HADDR_MAX - - LOGICAL, INTENT(OUT) :: relax - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pget_fapl_multi_f -- - -
H5Pget_fapl_srb
(
- hid_t fapl_id
,
- SRB_Info *info
- )
- If fapl_id is set for use of the H5FD_SRB driver,
- H5Pget_fapl_srb
returns the SRB_Info
- object through the info
pointer.
- - The SRB_Info information is not copied, so it is valid - only until the file access property list is modified or closed. -
H5Pget_fapl_srb
is an experimental function.
- It is designed for use only when accessing files via the
- Storage Resource Broker (SRB). For further information, see
- http://www.npaci.edu/Research/DI/srb/.
- hid_t fapl_id |
- IN: File access property list identifier. |
SRB_Info *info |
- OUT: Pointer to the SRB information structure. |
H5Pget_fapl_stream
(
- hid_t fapl_id
,
- H5FD_stream_fapl_t *fapl
- )
- H5Pget_fapl_stream
returns the file access properties
- set for the use of the streaming I/O driver.
-
- H5Pset_fapl_stream
and H5Pget_fapl_stream
- are not intended for use in a parallel environment.
-
hid_t fapl_id |
- IN: File access property list identifier. |
H5FD_stream_fapl_t *fapl |
- OUT: The streaming I/O file access property list. |
H5Pget_fclose_degree
(hid_t fapl_id
,
- H5F_close_degree_t *fc_degree
)
- H5Pget_fclose_degree
returns the current setting of the file
- close degree property fc_degree
in the file access property list
- fapl_id
.
- The value of fc_degree
determines how aggressively H5Fclose
- deals with objects within a file that remain open when H5Fclose
- is called to close that file. fc_degree
can have any one of
- four valid values as described above in H5Pset_fclose_degree
.
-
hid_t fapl_id |
- IN: File access property list identifier. |
H5F_close_degree_t *fc_degree |
- OUT: Pointer to a location to which to return the file close degree
- property, the value of fc_degree . |
-SUBROUTINE h5pget_fclose_degree_f(fapl_id, degree, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: fapl_id ! File access property list identifier - INTEGER, INTENT(OUT) :: degree ! Info about file close behavior - ! Possible values: - ! H5F_CLOSE_DEFAULT_F - ! H5F_CLOSE_WEAK_F - ! H5F_CLOSE_SEMI_F - ! H5F_CLOSE_STRONG_F - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pget_fclose_degree_f -- - -
H5Pget_fill_time
(hid_t plist_id
,
- H5D_fill_time_t *fill_time
- )
- H5Pget_fill_time
examines the dataset creation
- property list plist_id
to determine when fill values
- are to be written to a dataset.
-
- Valid values returned in fill_time
are as follows:
-
- H5D_FILL_TIME_IFSET
- | - Fill values are written to the dataset when storage space is allocated - only if there is a user-defined fill value, i.e., one set with - H5Pset_fill_value. - (Default) - | |
- H5D_FILL_TIME_ALLOC
- | - Fill values are written to the dataset when storage space is allocated. - | |
- H5D_FILL_TIME_NEVER
- | - Fill values are never written to the dataset. - |
H5Pget_fill_time
is designed to work in coordination
- with the dataset fill value and
- dataset storage allocation time properties, retrieved with the functions
- H5Pget_fill_value
and H5Pget_alloc_time
.
- hid_t plist_id |
- IN: Dataset creation property list identifier. |
H5D_fill_time_t *fill_time |
- OUT: Setting for the timing of writing fill values to the dataset. |
-SUBROUTINE h5pget_fill_time_f(plist_id, flag, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: plist_id ! Dataset creation property - ! list identifier - INTEGER, INTENT(OUT) :: flag ! Fill time flag - ! Possible values are: - ! H5D_FILL_TIME_ERROR_F - ! H5D_FILL_TIME_ALLOC_F - ! H5D_FILL_TIME_NEVER_F - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pget_fill_time_f -- - -
H5Pget_fill_value
(hid_t plist_id
,
- hid_t type_id
,
- void *value
- )
- H5Pget_fill_value
returns the dataset
- fill value defined in the dataset creation property list
- plist_id
.
-
- The fill value is returned through the value
- pointer and will be converted to the datatype specified
- by type_id
.
- This datatype may differ from the
- fill value datatype in the property list,
- but the HDF5 library must be able to convert between the
- two datatypes.
-
- If the fill value is undefined,
- i.e., set to NULL
in the property list,
- H5Pget_fill_value
will return an error.
- H5Pfill_value_defined
should be used to
- check for this condition before
- H5Pget_fill_value
is called.
-
- Memory must be allocated by the calling application. -
H5Pget_fill_value
is designed to coordinate
- with the dataset storage allocation time and
- fill value write time properties, which can be retrieved
- with the functions H5Pget_alloc_time
- and H5Pget_fill_time
, respectively.
-
- hid_t plist_id |
- IN: Dataset creation property list identifier. |
hid_t type_id , |
- IN: Datatype identifier for the value passed
- via value . |
void *value |
- OUT: Pointer to buffer to contain the returned fill value. |
-SUBROUTINE h5pget_fill_value_f(prp_id, type_id, fillvalue, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER(HID_T), INTENT(IN) :: type_id ! Datatype identifier of fill - ! value datatype (in memory) - TYPE(VOID), INTENT(OUT) :: fillvalue ! Fillvalue - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure - -END SUBROUTINE h5pget_fill_value_f -- - -
H5Pget_filter
(hid_t plist
,
- int filter_number
,
- unsigned int *flags
,
- size_t *cd_nelmts
,
- unsigned int *cd_values
,
- size_t namelen
,
- char name[]
- )
- H5Pget_filter
returns information about a
- filter, specified by its filter number, in a filter pipeline,
- specified by the property list with which it is associated.
-
- If plist
is a dataset creation property list,
- the pipeline is a permanent filter pipeline;
- if plist
is a dataset transfer property list,
- the pipeline is a transient filter pipeline.
-
- On input, cd_nelmts
indicates the number of entries
- in the cd_values
array, as allocated by the caller;
- on return, cd_nelmts
contains the number of values
- defined by the filter.
-
- filter_number
is a value between zero and
- N-1, as described in
- H5Pget_nfilters
.
- The function will return a negative value if the filter number
- is out of range.
-
- If name
is a pointer to an array of at least
- namelen
bytes, the filter name will be copied
- into that array. The name will be null terminated if
- namelen
is large enough. The filter name returned
- will be the name appearing in the file, the name registered
- for the filter, or an empty string.
-
- The structure of the flags
argument is discussed
- in H5Pset_filter
.
-
plist
must be a dataset creation property
- list.
- hid_t plist |
- IN: Property list identifier. |
int filter_number |
- IN: Sequence number within the filter pipeline of - the filter for which information is sought. |
unsigned int *flags |
- OUT: Bit vector specifying certain general properties - of the filter. |
size_t *cd_nelmts |
- IN/OUT: Number of elements in cd_values . |
unsigned int *cd_values |
- OUT: Auxiliary data for the filter. |
size_t namelen |
- IN: Anticipated number of characters in name . |
char name[] |
- OUT: Name of the filter. |
- H5Z_FILTER_DEFLATE
- | - Data compression filter, employing the gzip algorithm - |
- H5Z_FILTER_SHUFFLE
- | - Data shuffling filter - |
- H5Z_FILTER_FLETCHER32
- | - Error detection filter, employing the Fletcher32 checksum algorithm - |
- H5Z_FILTER_SZIP
- | - Data compression filter, employing the SZIP algorithm - |
-SUBROUTINE h5pget_filter_f(prp_id, filter_number, flags, cd_nelmts, - cd_values, namelen, name, filter_id, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER, INTENT(IN) :: filter_number ! Sequence number within the filter - ! pipeline of the filter for which - ! information is sought - INTEGER, DIMENSION(*), INTENT(OUT) :: cd_values - ! Auxiliary data for the filter - INTEGER, INTENT(OUT) :: flags ! Bit vector specifying certain - ! general properties of the filter - INTEGER(SIZE_T), INTENT(INOUT) :: cd_nelmts - ! Number of elements in cd_values - INTEGER(SIZE_T), INTENT(IN) :: namelen ! Anticipated number of characters - ! in name - CHARACTER(LEN=*), INTENT(OUT) :: name ! Name of the filter - INTEGER, INTENT(OUT) :: filter_id ! Filter identification number - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pget_filter_f -- - -
H5Pget_filter_by_id
(
- hid_t plist_id
,
- H5Z_filter_t filter
,
- unsigned int *flags
,
- size_t *cd_nelmts
,
- unsigned int cd_values[]
,
- size_t namelen
,
- char name[]
- )
- H5Pget_filter_by_id
returns information about the
- filter specified in filter
, a filter identifier.
-
- plist_id
must identify a dataset creation property list
- and filter
will be in a permanent filter pipeline.
-
- The filter
and flags
parameters are used
- in the same manner as described in the discussion of
- H5Pset_filter
.
-
- Aside from the fact that they are used for output, the
- parameters cd_nelmts
and cd_values[]
are
- used in the same manner as described in the discussion
- of H5Pset_filter
.
- On input, the cd_nelmts
parameter indicates the
- number of entries in the cd_values[]
array
- allocated by the calling program; on exit it contains the
- number of values defined by the filter.
-
- On input, the namelen
parameter indicates the
- number of characters allocated for the filter name
- by the calling program in the array name[]
.
- On exit it contains the length in characters of name of the filter.
- On exit name[]
contains the name of the filter
- with one character of the name in each element of the array.
-
- If the filter specified in filter
is not
- set for the property list, an error will be returned
- and H5Pget_filter_by_id
will fail.
-
hid_t plist_id |
- IN: Property list identifier. |
H5Z_filter_t filter |
- IN: Filter identifier. |
unsigned int *flags |
- OUT: Bit vector specifying certain general properties - of the filter. |
size_t *cd_nelmts |
- IN/OUT: Number of elements in cd_values . |
unsigned int cd_values[] |
- OUT: Auxiliary data for the filter. |
size_t namelen |
- IN/OUT: Length of filter name and
- number of elements in name[] . |
char name[] |
- OUT: Name of filter. |
-SUBROUTINE h5pget_filter_by_id_f(prp_id, filter_id, flags, cd_nelmts, - cd_values, namelen, name, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER, INTENT(IN) :: filter_id ! Filter identifier - INTEGER(SIZE_T), INTENT(INOUT) :: cd_nelmts - ! Number of elements in cd_values - INTEGER, DIMENSION(*), INTENT(OUT) :: cd_values - ! Auxiliary data for the filter - INTEGER, INTENT(OUT) :: flags ! Bit vector specifying certain - ! general properties of the filter - INTEGER(SIZE_T), INTENT(IN) :: namelen ! Anticipated number of characters - ! in name - CHARACTER(LEN=*), INTENT(OUT) :: name ! Name of the filter - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pget_filter_by_id_f -- - -
H5Pget_gc_references
(hid_t plist
,
- unsigned *gc_ref
- )
- H5Pget_gc_references
returns the current setting
- for the garbage collection references property from
- the specified file access property list.
- The garbage collection references property is set
- by H5Pset_gc_references.
- hid_t plist |
- IN: File access property list identifier. |
unsigned *gc_ref |
- OUT: Flag returning the state of reference garbage collection.
- A returned value of 1 indicates that
- garbage collection is on while
- 0 indicates that garbage collection is off. |
-SUBROUTINE h5pget_gc_references_f (prp_id, gc_reference, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER, INTENT(OUT) :: gc_reference ! The flag for garbage collecting - ! references for the file - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pget_gc_references_f -- - -
H5Pget_hyper_vector_size
(hid_t dxpl_id
,
- size_t *vector_size
- )
- H5Pget_hyper_vector_size
retrieves the number of
- I/O vectors to be accumulated in memory before being issued
- to the lower levels of the HDF5 library for reading or writing the
- actual data.
-
- The number of I/O vectors set in the dataset transfer property list
- dxpl_id
is returned in vector_size
.
- Unless the default value is in use, vector_size
- was previously set with a call to
- H5Pset_hyper_vector_size.
-
hid_t dxpl_id |
- IN: Dataset transfer property list identifier. |
size_t *vector_size |
- OUT: Number of I/O vectors to accumulate in memory for I/O operations. |
-SUBROUTINE h5pget_hyper_vector_size_f(plist_id, size, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: plist_id ! Dataset transfer property list - ! identifier - INTEGER(SIZE_T), INTENT(OUT) :: size ! Vector size - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pget_hyper_vector_size_f -- - -
H5Pget_istore_k
(hid_t plist
,
- unsigned * ik
- )
- H5Pget_istore_k
queries the 1/2 rank of
- an indexed storage B-tree.
- The argument ik
may be the null pointer (NULL).
- This function is only valid for file creation property lists.
- - See H5Pset_istore_k for details. -
hid_t plist |
- IN: Identifier of property list to query. |
unsigned * ik |
- OUT: Pointer to location to return the chunked storage B-tree 1/2 rank. |
-SUBROUTINE h5pget_istore_k_f(prp_id, ik, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER, INTENT(OUT) :: ik ! 1/2 rank of chunked storage B-tree - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pget_istore_k_f -- - -
H5Pget_layout
(hid_t plist
)
- H5Pget_layout
returns the layout of the raw data for
- a dataset. This function is only valid for dataset creation
- property lists.
- - Note that a compact storage layout may affect writing data to - the dataset with parallel applications. See note in - H5Dwrite - documentation for details. - -
hid_t plist |
- IN: Identifier for property list to query. |
- Returns the layout type of the dataset creation property list if successful: - H5D_COMPACT, H5D_CONTIGUOUS, or H5D_CHUNKED. - Otherwise, returns a negative value indicating failure. -
-SUBROUTINE h5pget_layout_f (prp_id, layout, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER, INTENT(OUT) :: layout ! Type of storage layout for raw data - ! possible values are: - ! H5D_COMPACT_F - ! H5D_CONTIGUOUS_F - ! H5D_CHUNKED_F - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pget_layout_f -- - -
H5Pget_meta_block_size
(
- hid_t fapl_id
,
- hsize_t *size
- )
- H5Pget_meta_block_size
returns the current
- minimum size, in bytes, of new metadata block allocations.
- This setting is retrieved from the file access property list
- fapl_id
.
-
- This value is set by
- H5Pset_meta_block_size
- and is retrieved from the file access property list
- fapl_id
.
-
hid_t fapl_id |
- IN: File access property list identifier. |
hsize_t *size |
- OUT: Minimum size, in bytes, of metadata block allocations. |
-SUBROUTINE h5pget_meta_block_size_f(plist_id, size, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: plist_id ! File access property list - ! identifier - INTEGER(HSIZE_T), INTENT(OUT) :: size ! Metadata block size - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pget_meta_block_size_f -- - -
H5Pget_multi_type
(
- hid_t fapl_id
,
- H5FD_mem_t *type
- )
- This function is used with the MULTI
 file driver.
- H5Pget_multi_type
retrieves the data type setting from the
- file access or data transfer property list fapl_id
.
- This enables a user application to specify the type of data the
- application wishes to access so that the application
- can retrieve a file handle for low-level access to the particular member
- of a set of MULTI
files in which that type of data is stored.
- The file handle is retrieved with a separate call
- to H5Fget_vfd_handle
- (or, in special circumstances, to H5FDget_vfd_handle
;
- see Virtual File Layer and List of VFL Functions
- in HDF5 Technical Notes).
-
- The type of data returned in type
will be one of those
- listed in the discussion of the type
parameter in the the
- description of the function
- H5Pset_multi_type
.
-
- Use of this function is only appropriate for an HDF5 file written
- as a set of files with the MULTI
file driver.
-
hid_t fapl_id |
- IN: File access property list or data transfer property list identifier. |
H5FD_mem_t *type |
- OUT: Type of data. |
H5Pget_nfilters
(hid_t plist
)
- H5Pget_nfilters
returns the number of filters
- defined in the filter pipeline associated with the property list
- plist
.
- - In each pipeline, the filters are numbered from - 0 through N-1, where N is the value returned - by this function. During output to the file, the filters are - applied in increasing order; during input from the file, they - are applied in decreasing order. -
- H5Pget_nfilters
returns the number of filters
- in the pipeline, including zero (0
) if there
- are none.
-
plist
must be a dataset creation
- property list.
- hid_t plist |
- IN: Property list identifier. |
-SUBROUTINE h5pget_nfilters_f(prp_id, nfilters, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Dataset creation property - ! list identifier - INTEGER, INTENT(OUT) :: nfilters ! The number of filters in - ! the pipeline - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pget_nfilters_f -- - -
H5Pget_nprops
(
- hid_t id
,
- size_t *nprops
- )
-
- H5Pget_nprops
retrieves the number of properties in a
- property list or class.
- If a property class identifier is given, the number of registered
- properties in the class is returned in nprops
.
- If a property list identifier is given, the current number of
- properties in the list is returned in nprops
.
-
- hid_t id |
- IN: Identifier of property object to query |
size_t *nprops |
- OUT: Number of properties in object |
-SUBROUTINE h5pget_nprops_f(prp_id, nprops, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER(SIZE_T), INTENT(OUT) :: nprops ! Number of properties - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pget_nprops_f -- - -
H5Pget_preserve
(hid_t plist
)
- H5Pget_preserve
 checks the status of the
- dataset transfer property list, i.e., whether the preserve flag
- for partially initialized compound datatype fields is enabled.
- hid_t plist |
- IN: Identifier for the dataset transfer property list. |
-SUBROUTINE h5pget_preserve_f(prp_id, flag, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Dataset transfer property - ! list identifier - LOGICAL, INTENT(OUT) :: flag ! Status of for the dataset - ! transfer property list - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pget_preserve_f -- - -
H5Pget_sieve_buf_size
(
- hid_t fapl_id
,
- hsize_t *size
- )
- H5Pget_sieve_buf_size
retrieves, in size,
- the current maximum size of the data sieve buffer.
-
- This value is set by
- H5Pset_sieve_buf_size
- and is retrieved from the file access property list
- fapl_id
.
-
hid_t fapl_id |
- IN: File access property list identifier. |
hsize_t *size |
- OUT: Maximum size, in bytes, of data sieve buffer. |
-SUBROUTINE h5pget_sieve_buf_size_f(plist_id, size, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: plist_id ! File access property list - ! identifier - INTEGER(SIZE_T), INTENT(OUT) :: size ! Sieve buffer size - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pget_sieve_buf_size_f -- - -
H5Pget_size
(
- hid_t id
,
- const char *name
,
- size_t *size
- )
-
- H5Pget_size
retrieves the size of a
- property's value in bytes. This function operates on both
- property lists and property classes.
-
-
- Zero-sized properties are allowed and return 0
.
-
-
-
hid_t id |
- IN: Identifier of property object to query |
const char *name |
- IN: Name of property to query |
size_t *size |
- OUT: Size of property in bytes |
-SUBROUTINE h5pget_size_f(prp_id, name, size, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - CHARACTER(LEN=*), INTENT(IN) :: name ! Name of property to query - INTEGER(SIZE_T), INTENT(OUT) :: size ! Size in bytes - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pget_size_f -- - -
H5Pget_sizes
(hid_t plist
,
- size_t * sizeof_addr
,
- size_t * sizeof_size
- )
- H5Pget_sizes
retrieves the size of the offsets
- and lengths used in an HDF5 file.
- This function is only valid for file creation property lists.
- hid_t plist |
- IN: Identifier of property list to query. |
size_t * sizeof_addr |
- OUT: Pointer to location to return offset size in bytes. |
size_t * sizeof_size |
- OUT: Pointer to location to return length size in bytes. |
-SUBROUTINE h5pget_sizes_f(prp_id, sizeof_addr, sizeof_size, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER(SIZE_T), DIMENSION(:), INTENT(OUT) :: sizeof_addr - ! Size of an object address in bytes - INTEGER(SIZE_T), DIMENSION(:), INTENT(OUT) :: sizeof_size - ! Size of an object in bytes - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pget_sizes_f -- - -
H5Pget_small_data_block_size
(hid_t fapl_id
,
- hsize_t *size
- )
- H5Pget_small_data_block_size
retrieves the
- current setting for the size of the small data block.
-
- If the returned value is zero (0
), the small data
- block mechanism has been disabled for the file.
-
hid_t fapl_id |
- IN: File access property list identifier. |
hsize_t *size |
- OUT: Maximum size, in bytes, of the small data block. |
-SUBROUTINE h5pget_small_data_block_size_f(plist_id, size, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: plist_id ! File access property list - ! identifier - INTEGER(HSIZE_T), INTENT(OUT) :: size ! Small raw data block size - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pget_small_data_block_size_f -- - -
H5Pget_sym_k
(hid_t plist
,
- unsigned * ik
,
- unsigned * lk
- )
- H5Pget_sym_k
retrieves the size of the
- symbol table B-tree 1/2 rank and the symbol table leaf
- node 1/2 size. This function is only valid for file creation
- property lists. If a parameter is passed as NULL, that
- parameter is not retrieved. See the description for
- H5Pset_sym_k for more
- information.
- hid_t plist |
- IN: Property list to query. |
unsigned * ik |
- OUT: Pointer to location to return the symbol table's B-tree 1/2 rank. |
unsigned * lk |
- OUT: Pointer to location to return the symbol table's leaf node 1/2 size. |
-SUBROUTINE h5pget_sym_k_f(prp_id, ik, lk, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER, INTENT(OUT) :: ik ! Symbol table tree rank - INTEGER, INTENT(OUT) :: lk ! Symbol table node size - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pget_sym_k_f -- - -
H5Pget_userblock
(hid_t plist
,
- hsize_t * size
- )
- H5Pget_userblock
retrieves the size of a user block
- in a file creation property list.
- hid_t plist |
- IN: Identifier for property list to query. |
hsize_t * size |
- OUT: Pointer to location to return user-block size. |
-SUBROUTINE h5pget_userblock_f(prp_id, block_size, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER(HSIZE_T), DIMENSION(:), INTENT(OUT) :: block_size - ! Size of the user-block in bytes - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pget_userblock_f -- - -
H5Pget_version
(hid_t plist
,
- unsigned * super
,
- unsigned * freelist
,
- unsigned * stab
,
- unsigned * shhdr
- )
- H5Pget_version
retrieves the version information of various objects
- for a file creation property list. Any pointer parameters which are
- passed as NULL are not queried.
- hid_t plist |
- IN: Identifier of the file creation property list. |
unsigned * super |
- OUT: Pointer to location to return super block version number. |
unsigned * freelist |
- OUT: Pointer to location to return global freelist version number. |
unsigned * stab |
- OUT: Pointer to location to return symbol table version number. |
unsigned * shhdr |
- OUT: Pointer to location to return shared object header version number. |
-SUBROUTINE h5pget_version_f(prp_id, boot, freelist, & - stab, shhdr, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER, DIMENSION(:), INTENT(OUT) :: boot ! Array to put boot block - ! version number - INTEGER, DIMENSION(:), INTENT(OUT) :: freelist - ! Array to put global - ! freelist version number - INTEGER, DIMENSION(:), INTENT(OUT) :: stab ! Array to put symbol table - ! version number - INTEGER, DIMENSION(:), INTENT(OUT) :: shhdr ! Array to put shared object - ! header version number - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pget_version_f -- - -
H5Pget_vlen_mem_manager
(hid_t plist
,
- H5MM_allocate_t *alloc
,
- void **alloc_info
,
- H5MM_free_t *free
,
- void **free_info
- )
- H5Pget_vlen_mem_manager retrieves the custom memory allocation and
- free routines, and their extra parameters, used for variable-length
- datatype data in routines such as H5Dread and H5Dvlen_reclaim
.
- H5Pget_vlen_mem_manager
is the companion function to
- H5Pset_vlen_mem_manager
, returning the parameters
- set by that function.
- hid_t plist |
- IN: Identifier for the dataset transfer property list. |
H5MM_allocate_t alloc |
- OUT: User's allocate routine, or NULL
- for system malloc . |
void *alloc_info |
- OUT: Extra parameter for user's allocation routine.
- - Contents are ignored if preceding parameter is - NULL . |
H5MM_free_t free |
- OUT: User's free routine, or NULL for
- system free . |
void *free_info |
- OUT: Extra parameter for user's free routine.
- - Contents are ignored if preceding parameter is - NULL . |
H5Pinsert
(
- hid_t plid
,
- const char *name
,
- size_t size
,
- void *value
,
- H5P_prp_set_func_t set
,
- H5P_prp_get_func_t get
,
- H5P_prp_delete_func_t delete
,
- H5P_prp_copy_func_t copy
,
- H5P_prp_compare_func_t compare
,
- H5P_prp_close_func_t close
- )
-
- H5Pinsert
creates a new property in a property list.
- The property will exist only in this property list and copies made
- from it.
-
-
- The initial property value must be provided in
- value
and the property value will be set accordingly.
-
-
- The name of the property must not already exist in this list, - or this routine will fail. - -
- The set
and get
callback routines may
- be set to NULL if they are not needed.
-
-
- Zero-sized properties are allowed and do not store any data in the - property list. The default value of a zero-size property may be set - to NULL. They may be used to indicate the presence or absence of a - particular piece of information. -
- - Theset
routine is called before a new value is copied
- into the property.
- The H5P_prp_set_func_t
callback function is defined
- as follows:
- typedef herr_t (*H5P_prp_set_func_t)(
- hid_t prop_id
,
- const char *name
,
- size_t size
,
- void *new_value
);
- hid_t prop_id |
- IN: The identifier of the property list being modified |
const char *name |
- IN: The name of the property being modified |
size_t size |
- IN: The size of the property in bytes |
void *new_value |
- IN: Pointer to new value pointer for the property being - modified |
set
routine may modify the value pointer to be set
- and those changes will be used when setting the property's value.
- If the set
routine returns a negative value, the new
- property value is not copied into the property and the set routine
- returns an error value.
- The set
routine will be called for the initial value.
-
- Note:
- The set
callback function may be useful
- to range check the value being set for the property
- or may perform some transformation or translation of the
- value set. The get
callback would then
- reverse the transformation or translation.
- A single get
or set
callback
- could handle multiple properties by
- performing different actions based on the
- property name or other properties in the property list.
-
-
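The range-checking idea in the note above can be sketched in a few lines. This is a standalone illustration, not library code: the `hid_t`/`herr_t` stand-ins and the callback name `clamp_cache_size` are hypothetical, and the real types come from `hdf5.h`.

```c
#include <stddef.h>

/* Stand-ins for the HDF5 handle/status types so this sketch compiles
 * without hdf5.h (illustration only). */
typedef long long hid_t;
typedef int herr_t;

/* Hypothetical set callback with the H5P_prp_set_func_t shape described
 * above: clamp an int-valued property into [1, 1024] before the library
 * copies it into the property. Returning a negative value would instead
 * reject the new value entirely. */
static herr_t clamp_cache_size(hid_t prop_id, const char *name,
                               size_t size, void *new_value)
{
    (void)prop_id;
    (void)name;
    if (size != sizeof(int) || new_value == NULL)
        return -1;                 /* malformed value: reject */
    int *v = (int *)new_value;
    if (*v < 1)
        *v = 1;                    /* clamp low end */
    if (*v > 1024)
        *v = 1024;                 /* clamp high end */
    return 0;                      /* accept (possibly modified) value */
}
```

In a real program, a pointer to such a function would be passed as the set argument of H5Pinsert, and the library would invoke it before each value copy, including the initial value.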
- The get
routine is called when a value is retrieved
- from a property value.
- The H5P_prp_get_func_t
callback function is defined
- as follows:
-
typedef herr_t (*H5P_prp_get_func_t)(
- hid_t prop_id
,
- const char *name
,
- size_t size
,
- void *value
);
- hid_t prop_id |
- IN: The identifier of the property list being queried |
const char *name |
- IN: The name of the property being queried |
size_t size |
- IN: The size of the property in bytes |
void *value |
- IN: The value of the property being returned |
get
routine may modify the value to be returned from
- the query and those changes will be preserved.
- If the get
routine returns a negative value, the query
- routine returns an error value.
-
-
-
- The delete
routine is called when a property is being
- deleted from a property list.
- The H5P_prp_delete_func_t
callback function is defined
- as follows:
-
typedef herr_t
(*H5P_prp_delete_func_t
)(
- hid_t prop_id
,
- const char *name
,
- size_t size
,
- void *value
);
- hid_t prop_id |
- IN: The identifier of the property list the property is - being deleted from |
const char * name |
- IN: The name of the property in the list |
size_t size |
- IN: The size of the property in bytes |
void * value |
- IN: The value for the property being deleted |
delete
routine may modify the value passed in,
- but the value is not used by the library when the delete
- routine returns. If the delete
routine returns a
- negative value, the property list delete routine returns an
- error value but the property is still deleted.
-
-
-
- The copy
routine is called when a new property list
- with this property is being created through a copy operation.
- The H5P_prp_copy_func_t
callback function is defined
- as follows:
-
typedef herr_t (*H5P_prp_copy_func_t)(
- const char *name
,
- size_t size
,
- void *value
);
- const char *name |
- IN: The name of the property being copied |
size_t size |
- IN: The size of the property in bytes |
void * value |
- IN/OUT: The value for the property being copied |
copy
routine may modify the value to be set and
- those changes will be stored as the new value of the property.
- If the copy
routine returns a negative value, the
- new property value is not copied into the property and the
- copy routine returns an error value.
-
-
-
- The compare
routine is called when a property list with
- this property is compared to another property list with the same property.
- The H5P_prp_compare_func_t
callback function is defined
- as follows:
-
typedef int (*H5P_prp_compare_func_t)(
- const void *value1
,
- const void *value2
,
- size_t size
);
- const void *value1 |
- IN: The value of the first property to compare |
const void *value2 |
- IN: The value of the second property to compare |
size_t size |
- IN: The size of the property in bytes |
compare
routine may not modify the values.
- The compare
routine should return a positive value if
- value1
is greater than value2
, a negative value
- if value2
is greater than value1
and zero if
- value1
and value2
are equal.
-
-
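For property values with no internal pointers, the three-way contract above is exactly what memcmp provides, so a byte-wise comparison makes a reasonable default. A minimal sketch (the function name is hypothetical):

```c
#include <stddef.h>
#include <string.h>

/* A byte-wise property comparison following the contract above:
 * positive if value1 > value2, negative if value1 < value2, and
 * zero if the two values are equal. memcmp already has exactly
 * these three-way semantics. */
static int compare_property_bytes(const void *value1, const void *value2,
                                  size_t size)
{
    return memcmp(value1, value2, size);
}
```

A property whose value embeds pointers or padding bytes would need a field-by-field comparison instead, since raw bytes may differ even when the logical values are equal.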
- The close
routine is called when a property list
- with this property is being closed.
- The H5P_prp_close_func_t
callback function is defined
- as follows:
-
typedef herr_t (*H5P_prp_close_func_t)(
- hid_t prop_id
,
- const char *name
,
- size_t size
,
- void *value
);
- hid_t |
- IN: The ID of the property list being closed |
const char * name |
- IN: The name of the property in the list |
size_t size |
- IN: The size of the property in bytes |
void * value |
- IN: The value for the property being closed |
close
routine may modify the value passed in, but the value
- is not used by the library when the close
routine returns.
- If the close
routine returns a negative value, the
- property list close routine returns an error value but the property list
- is still closed.
-
-
- Note:
- There is no create
callback routine for temporary property
- list objects; the initial value is assumed to have any necessary setup
- already performed on it.
-
-
-
hid_t plid |
- IN: Property list identifier to create temporary property - within |
const char *name |
- IN: Name of property to create |
size_t size |
- IN: Size of property in bytes |
void *value |
- IN: Initial value for the property |
H5P_prp_set_func_t set |
- IN: Callback routine called before a new value is copied into - the property's value |
H5P_prp_get_func_t get |
- IN: Callback routine called when a property value is retrieved - from the property |
H5P_prp_delete_func_t delete |
- IN: Callback routine called when a property is deleted from - a property list |
H5P_prp_copy_func_t copy |
- IN: Callback routine called when a property is copied from - an existing property list |
H5P_prp_compare_func_t compare |
- IN: Callback routine called when a property is compared with - another property list |
H5P_prp_close_func_t close |
- IN: Callback routine called when a property list is being closed - and the property value will be disposed of |
-SUBROUTINE h5pinsert_f(plist, name, size, value, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: plist ! Property list class identifier - CHARACTER(LEN=*), INTENT(IN) :: name ! Name of property to insert - INTEGER(SIZE_T), INTENT(IN) :: size ! Size of the property value - TYPE, INTENT(IN) :: value ! Property value - ! Supported types are: - ! INTEGER - ! REAL - ! DOUBLE PRECISION - ! CHARACTER(LEN=*) - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pinsert_f -- - -
H5Pisa_class
(
- hid_t plist
,
- hid_t pclass
- )
-
- H5Pisa_class
checks to determine whether a property list
- is a member of the specified class.
-
- hid_t plist |
- IN: Identifier of the property list |
hid_t pclass |
- IN: Identifier of the property class |
-SUBROUTINE h5pisa_class_f(plist, pclass, flag, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: plist ! Property list identifier - INTEGER(HID_T), INTENT(IN) :: pclass ! Class identifier - LOGICAL, INTENT(OUT) :: flag ! Logical flag - ! .TRUE. if a member - ! .FALSE. otherwise - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pisa_class_f -- - -
H5Piterate
(
- hid_t id
,
- int * idx
,
- H5P_iterate_t iter_func
,
- void * iter_data
- )
-
- H5Piterate
iterates over the properties in the
- property object specified in id
, which may be either a
- property list or a property class, performing a specified
- operation on each property in turn.
-
-
- For each property in the object, iter_func
and
- the additional information specified below are passed to the
- H5P_iterate_t
operator function.
-
-
-
- The iteration begins with the idx
-th property in
- the object; the next element to be processed by the operator
- is returned in idx
.
- If idx
is NULL, the iterator starts at the first
- property; since no stopping point is returned in this case,
- the iterator cannot be restarted if one of the calls to its
- operator returns non-zero.
-
H5P_iterate_t
operator is
- as follows:
- typedef herr_t (*H5P_iterate_t)(
- hid_t id
,
- const char *name
,
- void *iter_data
- )
- id
,
- the name of the current property within the object, name
,
- and the pointer to the operator data passed in to
- H5Piterate
, iter_data
.
-
-
- The valid return values from an operator are as follows:
- Zero | -Causes the iterator to continue, returning zero when all - properties have been processed |
Positive | -Causes the iterator to immediately return that positive - value, indicating short-circuit success. The iterator can - be restarted at the index of the next property |
Negative | -Causes the iterator to immediately return that value, - indicating failure. The iterator can be restarted at the - index of the next property |
- H5Piterate
assumes that the properties in the object
- identified by id
remain unchanged through the iteration.
- If the membership changes during the iteration, the function's behavior
- is undefined.
-
-
hid_t id |
- IN: Identifier of property object to iterate over |
int * idx |
- IN/OUT: Index of the property to begin with |
H5P_iterate_t iter_func |
- IN: Function pointer to function to be called with each - property iterated over |
void * iter_data |
- IN/OUT: Pointer to iteration data from user |
Returns the value of the last call to iter_func
if it was non-zero;
- zero if all properties have been processed
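The operator return values above can be sketched with a standalone callback. The `hid_t`/`herr_t` stand-ins, the operator name, and the driver loop in the usage note are hypothetical; in a real program the loop is performed by H5Piterate itself.

```c
#include <stddef.h>
#include <string.h>

typedef long long hid_t;   /* stand-in for the HDF5 handle type */
typedef int herr_t;        /* stand-in for the HDF5 status type */

/* Hypothetical operator with the H5P_iterate_t shape described above:
 * count properties via iter_data, and short-circuit with a positive
 * return when a property named "stop_here" is reached. */
static herr_t count_until_stop(hid_t id, const char *name, void *iter_data)
{
    (void)id;
    int *count = (int *)iter_data;
    ++*count;
    if (strcmp(name, "stop_here") == 0)
        return 1;   /* positive: short-circuit; iteration can be restarted */
    return 0;       /* zero: continue with the next property */
}
```

Because the positive return is propagated back by the iterator, the caller can distinguish "found and stopped" from "visited everything" (zero) and from failure (negative).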
- H5Pmodify_filter
(hid_t plist
,
- H5Z_filter_t filter
,
- unsigned int flags
,
- size_t cd_nelmts
,
- const unsigned int cd_values[]
- )
- H5Pmodify_filter
modifies the specified
- filter
in the filter pipeline.
- plist
must be a dataset creation property list
- and the modified filter will be in a permanent filter pipeline.
-
- The filter
, flags,
- cd_nelmts, and cd_values[]
parameters
- are used in the same manner and accept the same values as described
- in the discussion of H5Pset_filter.
-
- hid_t plist_id |
- IN: Property list identifier. |
H5Z_filter_t filter |
- IN: Filter to be modified. |
unsigned int flags |
- IN: Bit vector specifying certain general properties - of the filter. |
size_t cd_nelmts |
- IN: Number of elements in cd_values . |
const unsigned int cd_values[] |
- IN: Auxiliary data for the filter. |
-SUBROUTINE h5pmodify_filter_f(prp_id, filter, flags, cd_nelmts, & - cd_values, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER, INTENT(IN) :: filter ! Filter to be modified - INTEGER, INTENT(IN) :: flags ! Bit vector specifying certain - ! general properties of the filter - INTEGER(SIZE_T), INTENT(IN) :: cd_nelmts ! Number of elements in cd_values - INTEGER, DIMENSION(*), INTENT(IN) :: cd_values - ! Auxiliary data for the filter - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pmodify_filter_f -- - -
H5Pregister
(
- hid_t class
,
- const char * name
,
- size_t size
,
- void * default
,
- H5P_prp_create_func_t create
,
- H5P_prp_set_func_t set
,
- H5P_prp_get_func_t get
,
- H5P_prp_delete_func_t delete
,
- H5P_prp_copy_func_t copy
,
- H5P_prp_compare_func_t compare
,
- H5P_prp_close_func_t close
- )
-
- H5Pregister
registers a new property with a
- property list class.
- The property will exist in all property list objects of
- class
created after this routine finishes. The name
- of the property must not already exist, or this routine will fail.
- The default property value must be provided and all new property
- lists created with this property will have the property value set
- to the default value. Any of the callback routines may be set to
- NULL if they are not needed.
-
-
- Zero-sized properties are allowed and do not store any data in the
- property list. These may be used as flags to indicate the presence
- or absence of a particular piece of information. The default pointer
- for a zero-sized property may be set to NULL.
- The property create
and close
callbacks
- are called for zero-sized properties, but the set
and
- get
callbacks are never called.
-
- The create
routine is called when a new property list
- with this property is being created.
- The H5P_prp_create_func_t
callback function is defined
- as follows:
-
typedef herr_t (*H5P_prp_create_func_t)(
- const char *name
,
- size_t size
,
- void *initial_value
);
- const char *name |
- IN: The name of the property being modified |
size_t size |
- IN: The size of the property in bytes |
void *initial_value |
- IN/OUT: The default value for the property being created,
- which will be passed to H5Pregister |
create
routine may modify the value to be set and
- those changes will be stored as the initial value of the property.
- If the create
routine returns a negative value,
- the new property value is not copied into the property and the
- create routine returns an error value.
-
-
-
- The set
routine is called before a new value is copied
- into the property.
- The H5P_prp_set_func_t
callback function is defined
- as follows:
-
typedef herr_t (*H5P_prp_set_func_t)( hid_t prop_id
,
- const char *name
,
- size_t size
,
- void *new_value
);
- hid_t prop_id |
- IN: The identifier of the property list being modified |
const char *name |
- IN: The name of the property being modified |
size_t size |
- IN: The size of the property in bytes |
void *new_value |
- IN/OUT: Pointer to new value pointer for the property being - modified |
set
routine may modify the value pointer to be set
- and those changes will be used when setting the property's value.
- If the set
routine returns a negative value, the new
- property value is not copied into the property and the
- set
routine returns an error value.
- The set
routine will not be called for the initial
- value, only the create
routine will be called.
-
- Note:
- The set
callback function may be useful
- to range check the value being set for the property
- or may perform some transformation or translation of the
- value set. The get
callback would then
- reverse the transformation or translation.
- A single get
or set
callback
- could handle multiple properties by
- performing different actions based on the
- property name or other properties in the property list.
-
-
- The get
routine is called when a value is retrieved
- from a property value.
- The H5P_prp_get_func_t
callback function is defined
- as follows:
-
typedef herr_t (*H5P_prp_get_func_t)(
- hid_t prop_id
,
- const char *name
,
- size_t size
,
- void *value
);
- hid_t prop_id |
- IN: The identifier of the property list being queried |
const char * name |
- IN: The name of the property being queried |
size_t size |
- IN: The size of the property in bytes |
void * value |
- IN/OUT: The value of the property being returned |
get
routine may modify the value to be returned from
- the query and those changes will be returned to the calling routine.
- If the get
routine returns a negative value, the query
- routine returns an error value.
-
-
-
- The delete
routine is called when a property is being
- deleted from a property list.
- The H5P_prp_delete_func_t
callback function is defined
- as follows:
-
typedef herr_t (*H5P_prp_delete_func_t)(
- hid_t prop_id
,
- const char *name
,
- size_t size
,
- void *value
);
- hid_t prop_id |
- IN: The identifier of the property list the property is being - deleted from |
const char * name |
- IN: The name of the property in the list |
size_t size |
- IN: The size of the property in bytes |
void * value |
- IN: The value for the property being deleted |
delete
routine may modify the value passed in,
- but the value is not used by the library when the delete
- routine returns. If the delete
routine returns
- a negative value, the property list delete routine returns
- an error value but the property is still deleted.
-
-
-
- The copy
routine is called when a new property list with
- this property is being created through a copy operation.
- The H5P_prp_copy_func_t
callback function is defined
- as follows:
-
typedef herr_t (*H5P_prp_copy_func_t)(
- const char *name
,
- size_t size
,
- void *value
);
- const char *name |
- IN: The name of the property being copied |
size_t size |
- IN: The size of the property in bytes |
void *value |
- IN/OUT: The value for the property being copied |
copy
routine may modify the value to be set and
- those changes will be stored as the new value of the property.
- If the copy
routine returns a negative value,
- the new property value is not copied into the property and
- the copy routine returns an error value.
-
-
-
- The compare
routine is called when a property list with
- this property is compared to another property list with the same property.
- The H5P_prp_compare_func_t
callback function is defined
- as follows:
-
typedef int (*H5P_prp_compare_func_t)(
- const void *value1
,
- const void *value2
,
- size_t size
);
- const void *value1 |
- IN: The value of the first property to compare |
const void *value2 |
- IN: The value of the second property to compare |
size_t size |
- IN: The size of the property in bytes |
compare
routine may not modify the values.
- The compare
routine should return a positive value if
- value1
is greater than value2
, a negative value
- if value2
is greater than value1
and zero if
- value1
and value2
are equal.
-
-
-
- The close
routine is called when a property list with
- this property is being closed.
- The H5P_prp_close_func_t
callback function is defined
- as follows:
-
typedef herr_t (*H5P_prp_close_func_t)(
- hid_t prop_id
,
- const char *name
,
- size_t size
,
- void *value
);
- hid_t prop_id |
- IN: The identifier of the property list being - closed |
const char *name |
- IN: The name of the property in the list |
size_t size |
- IN: The size of the property in bytes |
void *value |
- IN: The value for the property being closed |
close
routine may modify the value passed in,
- but the value is not used by the library when the
- close
routine returns.
- If the close
routine returns a negative value,
- the property list close routine returns an error value but
- the property list is still closed.
-
-
-hid_t class |
- IN: Property list class to register permanent property - within |
const char * name |
- IN: Name of property to register |
size_t size |
- IN: Size of property in bytes |
void * default |
- IN: Default value for property in newly created property - lists |
H5P_prp_create_func_t create |
- IN: Callback routine called when a property list is being - created and the property value will be initialized |
H5P_prp_set_func_t set |
- IN: Callback routine called before a new value is copied - into the property's value |
H5P_prp_get_func_t get |
- IN: Callback routine called when a property value is - retrieved from the property |
H5P_prp_delete_func_t delete |
- IN: Callback routine called when a property is deleted from - a property list |
H5P_prp_copy_func_t copy |
- IN: Callback routine called when a property is copied from - a property list |
H5P_prp_compare_func_t compare |
- IN: Callback routine called when a property is compared with - another property list |
H5P_prp_close_func_t close |
- IN: Callback routine called when a property list is being - closed and the property value will be disposed of |
-SUBROUTINE h5pregister_f(class, name, size, value, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: class ! Property list class identifier - CHARACTER(LEN=*), INTENT(IN) :: name ! Name of property to register - INTEGER(SIZE_T), INTENT(IN) :: size ! Size of the property value - TYPE, INTENT(IN) :: value ! Property value - ! Supported types are: - ! INTEGER - ! REAL - ! DOUBLE PRECISION - ! CHARACTER(LEN=*) - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pregister_f -- - -
H5Premove
(
- hid_t plid
,
- const char *name
- )
-
- H5Premove
removes a property from a property list.
-
-
- Both properties which were in existence when the property list
- was created (i.e. properties registered with H5Pregister
)
- and properties added to the list after it was created (i.e. added
- with H5Pinsert
) may be removed from a property list.
- Properties do not need to be removed from a property list before the
- list itself is closed; they will be released automatically when
- H5Pclose
is called.
-
-
- If a close
callback exists for the removed property,
- it will be called before the property is released.
-
-
hid_t plid |
- IN: Identifier of the property list to modify |
const char *name |
- IN: Name of property to remove |
-SUBROUTINE h5premove_f(plid, name, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: plid ! Property list identifier - CHARACTER(LEN=*), INTENT(IN) :: name ! Name of property to remove - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5premove_f -- - -
H5Pset
(
- hid_t plid
,
- const char *name
,
- void *value
)
-
- H5Pset
sets a new value for a property in a
- property list. If there is a set
callback
- routine registered for this property, the value
will be
- passed to that routine and any changes to the value
- will be used when setting the property value.
- The information pointed to by the value
pointer
- (possibly modified by the set
callback) is copied into
- the property list value and may be changed by the application making
- the H5Pset
call without affecting the property value.
-
- - The property name must exist or this routine will fail. - -
- If the set
callback routine returns an error, the
- property value will not be modified.
-
-
- This routine may not be called for zero-sized properties - and will return an error in that case. - -
hid_t plid ;
- | IN: Property list identifier to modify |
const char *name;
- | IN: Name of property to modify |
void *value ;
- | IN: Pointer to value to set the property to |
-SUBROUTINE h5pset_f(plid, name, value, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: plid ! Property list identifier - CHARACTER(LEN=*), INTENT(IN) :: name ! Name of property to set - TYPE, INTENT(IN) :: value ! Property value - ! Supported types are: - ! INTEGER - ! REAL - ! DOUBLE PRECISION - ! CHARACTER(LEN=*) - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pset_f -- - -
H5Pset_alignment
(hid_t plist
,
- hsize_t threshold
,
- hsize_t alignment
- )
- H5Pset_alignment
sets the alignment properties
- of a file access property list
- so that any file object greater than or equal in size to
- threshold
bytes will be aligned on an address
- which is a multiple of alignment
. The addresses
- are relative to the end of the user block; the alignment is
- calculated by subtracting the user block size from the
- absolute file address and then adjusting the address to be a
- multiple of alignment
.
-
- Default values for threshold
and
- alignment
are one, implying
- no alignment. Generally the default values will result in
- the best performance for single-process access to the file.
- For MPI-IO and other parallel systems, choose an alignment
- which is a multiple of the disk block size.
-
hid_t plist |
- IN: Identifier for a file access property list. |
hsize_t threshold |
- IN: Threshold value. - Note that setting the threshold value to 0 (zero) has - the effect of a special case, forcing everything - to be aligned. |
hsize_t alignment |
- IN: Alignment value. |
-SUBROUTINE h5pset_alignment_f(prp_id, threshold, alignment, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER(HSIZE_T), INTENT(IN) :: threshold ! Threshold value - INTEGER(HSIZE_T), INTENT(IN) :: alignment ! Alignment value - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pset_alignment_f -- - -
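The alignment rule described above (subtract the user block, round up to a multiple of the alignment, add the user block back) can be checked with a few lines of arithmetic. This is a sketch following the prose description, with a hypothetical helper name and a local stand-in for `hsize_t`:

```c
#include <stddef.h>

typedef unsigned long long hsize_t;   /* stand-in for the HDF5 size type */

/* Align an absolute file address per the description above: objects
 * smaller than `threshold` are left unaligned; otherwise the address
 * relative to the end of the user block is rounded up to the next
 * multiple of `alignment`. */
static hsize_t align_address(hsize_t addr, hsize_t obj_size,
                             hsize_t userblock, hsize_t threshold,
                             hsize_t alignment)
{
    if (obj_size < threshold || alignment <= 1)
        return addr;                      /* no alignment applied */
    hsize_t rel = addr - userblock;       /* relative to user block end */
    hsize_t rem = rel % alignment;
    if (rem != 0)
        rel += alignment - rem;           /* round up to next multiple */
    return rel + userblock;
}
```

With a 512-byte user block and a 4096-byte alignment, an eligible object at absolute address 1000 moves to 4608 (= 512 + 4096), while an object already at 4608 stays put.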
H5Pset_alloc_time
(hid_t plist_id
,
- H5D_alloc_time_t alloc_time
- )
- H5Pset_alloc_time
sets up the timing for the allocation of
- storage space for a dataset's raw data.
- This property is set in the dataset creation property list
- plist_id
.
-
- Timing is specified in alloc_time
with one of the
- following values:
-
- H5D_ALLOC_TIME_DEFAULT
- |
- Allocate dataset storage space at the default time. - (Defaults differ by storage method.) - | |
- H5D_ALLOC_TIME_EARLY
- |
- Allocate all space when the dataset is created. - (Default for compact datasets.) - | |
- H5D_ALLOC_TIME_INCR
- |
- Allocate space incrementally, as data is written to the dataset. - (Default for chunked storage datasets.) - If this setting is used with a storage layout that does not support incremental allocation, H5Pset_alloc_time will return an error.
- | |
- H5D_ALLOC_TIME_LATE
- |
- Allocate all space when data is first written to the dataset. - (Default for contiguous datasets.) - |
H5Pset_alloc_time
is designed to work in concert
- with the dataset fill value and fill value write time properties,
- set with the functions
- H5Pset_fill_value
and H5Pset_fill_time
.
- -
- See H5Dcreate for - further cross-references. -
hid_t plist_id |
- IN: Dataset creation property list identifier. |
H5D_alloc_time_t alloc_time |
- IN: When to allocate dataset storage space. |
-SUBROUTINE h5pset_alloc_time_f(plist_id, flag, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: plist_id ! Dataset creation property - ! list identifier - INTEGER, INTENT(IN) :: flag ! Allocation time flag - ! Possible values are: - ! H5D_ALLOC_TIME_ERROR_F - ! H5D_ALLOC_TIME_DEFAULT_F - ! H5D_ALLOC_TIME_EARLY_F - ! H5D_ALLOC_TIME_LATE_F - ! H5D_ALLOC_TIME_INCR_F- -
- INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pset_alloc_time_f -- - -
H5Pset_btree_ratios
(hid_t plist
,
- double left
,
- double middle
,
- double right
- )
- H5Pset_btree_ratios
sets the B-tree split ratios
- for a dataset transfer property list. The split ratios determine
- what percent of children go in the first node when a node splits.
-
- The ratio left
is used when the splitting node is
- the left-most node at its level in the tree;
- the ratio right
is used when the splitting node is
- the right-most node at its level;
- and the ratio middle
is used for all other cases.
-
- A node which is the only node at its level in the tree uses
- the ratio right
when it splits.
-
- All ratios are real numbers between 0 and 1, inclusive. -
hid_t plist |
- IN: The dataset transfer property list identifier. |
double left |
- IN: The B-tree split ratio for left-most nodes. |
double right |
- IN: The B-tree split ratio for right-most nodes and lone nodes. |
double middle |
- IN: The B-tree split ratio for all other nodes. |
-SUBROUTINE h5pset_btree_ratios_f(prp_id, left, middle, right, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id - ! Property list identifier - REAL, INTENT(IN) :: left ! The B-tree split ratio for left-most nodes - REAL, INTENT(IN) :: middle ! The B-tree split ratio for all other nodes - REAL, INTENT(IN) :: right ! The B-tree split ratio for right-most - ! nodes and lone nodes. - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pset_btree_ratios_f -- - -
H5Pset_buffer
(hid_t plist
,
- hsize_t size
,
- void *tconv
,
- void *bkg
- )
- H5Pset_buffer
- sets the maximum size
- for the type conversion buffer and background buffer and
- optionally supplies pointers to application-allocated buffers.
- If the buffer size is smaller than the entire amount of data
- being transferred between the application and the file, and a type
- conversion buffer or background buffer is required, then
- strip mining will be used.
- - Note that there are minimum size requirements for the buffer. - Strip mining can only break the data up along the first dimension, - so the buffer must be large enough to accommodate a complete slice - that encompasses all of the remaining dimensions. - For example, when strip mining a 100x200x300 hyperslab - of a simple data space, the buffer must be large enough to - hold 1x200x300 data elements. - When strip mining a 100x200x300x150 hyperslab of a simple data space, - the buffer must be large enough to hold 1x200x300x150 data elements. -
- If tconv
and/or bkg
are null pointers,
- then buffers will be allocated and freed during the data transfer.
-
- The default maximum buffer size is 1 MB. -
hid_t plist |
- IN: Identifier for the dataset transfer property list. |
hsize_t size |
- IN: Size, in bytes, of the type conversion and background buffers. |
void tconv |
- IN: Pointer to application-allocated type conversion buffer. |
void bkg |
- IN: Pointer to application-allocated background buffer. |
-SUBROUTINE h5pset_buffer_f(plist_id, size, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: plist_id ! Dataset transfer property - ! list identifier - INTEGER(HSIZE_T), INTENT(IN) :: size ! Conversion buffer size - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pset_buffer_f -- - -
H5Pset_cache
(hid_t plist_id
,
- int mdc_nelmts
,
- int rdcc_nelmts
,
- size_t rdcc_nbytes
,
- double rdcc_w0
- )
- H5Pset_cache
sets
- the number of elements (objects) in the meta data cache and
- the number of elements, the total number of bytes, and
- the preemption policy value in the raw data chunk cache.
- - The plist_id is a file access property list. - The number of elements (objects) in the meta data cache - and the raw data chunk cache are mdc_nelmts and - rdcc_nelmts, respectively. - The total size of the raw data chunk cache and the preemption policy - are rdcc_nbytes and rdcc_w0. -
- Any (or all) of the H5Pget_cache
pointer arguments
- may be null pointers.
-
- The rdcc_w0 value should be between 0 and 1, inclusive, and - indicates the degree to which chunks that have been fully read are - favored for preemption. A value of zero means fully read - chunks are treated no differently than other chunks (the - preemption is strictly LRU), while a value of one means fully - read chunks are always preempted before other chunks. -
hid_t plist_id |
- IN: Identifier of the file access property list. |
int mdc_nelmts |
- IN: Number of elements (objects) in the meta data cache. |
int rdcc_nelmts |
- IN: Number of elements (objects) in the raw data chunk cache. |
size_t rdcc_nbytes |
- IN: Total size of the raw data chunk cache, in bytes. |
double rdcc_w0 |
- IN: Preemption policy. |
-SUBROUTINE h5pset_cache_f(prp_id, mdc_nelmts, rdcc_nelmts, rdcc_nbytes, rdcc_w0, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER, INTENT(IN) :: mdc_nelmts ! Number of elements (objects) - ! in the meta data cache - INTEGER(SIZE_T), INTENT(IN) :: rdcc_nelmts ! Number of elements (objects) - ! in the raw data chunk cache - INTEGER(SIZE_T), INTENT(IN) :: rdcc_nbytes ! Total size of the raw data - ! chunk cache, in bytes - REAL, INTENT(IN) :: rdcc_w0 ! Preemption policy - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pset_cache_f -- - -
H5Pset_chunk
(hid_t plist
,
- int ndims
,
- const hsize_t * dim
- )
- H5Pset_chunk
sets the size of the chunks used to
- store a chunked layout dataset. This function is only valid
- for dataset creation property lists.
-
- The ndims
parameter currently must be the same size
- as the rank of the dataset.
-
- The values of the dim
- array define the size of the chunks to store the dataset's raw data.
- The unit of measure for dim
values is
- dataset elements.
-
- As a side-effect of this function, the layout of the dataset is
- changed to H5D_CHUNKED
, if it is not already so set.
- (See H5Pset_layout
.)
-
hid_t plist |
- IN: Identifier for property list to query. |
int ndims |
- IN: The number of dimensions of each chunk. |
const hsize_t * dim |
- IN: An array defining the size, in dataset elements, - of each chunk. |
-SUBROUTINE h5pset_chunk_f(prp_id, ndims, dims, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER, INTENT(IN) :: ndims ! Number of chunk dimensions - INTEGER(HSIZE_T), DIMENSION(ndims), INTENT(IN) :: dims - ! Array containing sizes of - ! chunk dimensions - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pset_chunk_f -- - -
H5Pset_data_transform
- (hid_t plist_id
,
- const char *expression
)
- H5Pset_data_transform
sets the data transform to
- be used for reading and writing data.
- This function operates on the dataset transfer property lists
- plist_id.
-
- The expression parameter is a string containing an algebraic
- expression, such as (5/9.0)*(x-32)
- or x*(x-5)
.
- When a dataset is read or written with this property list,
- the transform expression is applied with the x
- being replaced by the values in the dataset.
- When reading data, the values in the file are not changed
- and the transformed data is returned to the user.
-
- Data transforms can only be applied to integer or floating-point - datasets. Order of operations is obeyed and the only supported - operations are +, -, *, and /. Parentheses can be nested arbitrarily - and can be used to change precedence. -
- When writing data back to the dataset, the transformed data is - written to the file and there is no way to recover the original - values to which the transform was applied. -
hid_t plist_id |
- IN: Identifier of the property list or class |
const char *expression |
- IN: Pointer to the null-terminated data transform expression - |
H5Pset_deflate
(hid_t plist
,
- int level
- )
- H5Pset_deflate
sets the compression method for a
- dataset creation property list to H5D_COMPRESS_DEFLATE
- and the compression level to level
, which should
- be a value from zero to nine, inclusive.
- Lower compression levels are faster but result in less compression.
- This is the same algorithm as used by the GNU gzip program.
- hid_t plist |
- IN: Identifier for the dataset creation property list. |
int level |
- IN: Compression level. |
-SUBROUTINE h5pset_deflate_f(prp_id, level, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER, INTENT(IN) :: level ! Compression level - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pset_deflate_f -- - -
H5Pset_dxpl_mpio
(
- hid_t dxpl_id
,
- H5FD_mpio_xfer_t xfer_mode
- )
- H5Pset_dxpl_mpio
sets the data transfer property list
- dxpl_id
to use transfer mode xfer_mode
.
- The property list can then be used to control the I/O transfer mode
- during data I/O operations.
- - Valid transfer modes are as follows: -
H5FD_MPIO_INDEPENDENT
- H5FD_MPIO_COLLECTIVE
- hid_t dxpl_id |
- IN: Data transfer property list identifier. |
H5FD_mpio_xfer_t xfer_mode |
- IN: Transfer mode. |
-SUBROUTINE h5pset_dxpl_mpio_f(prp_id, data_xfer_mode, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER, INTENT(IN) :: data_xfer_mode ! Data transfer mode - ! Possible values are: - ! H5FD_MPIO_INDEPENDENT_F - ! H5FD_MPIO_COLLECTIVE_F - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pset_dxpl_mpio_f -- - -
H5Pset_dxpl_multi
(
- hid_t dxpl_id
,
- const hid_t *memb_dxpl
- )
- H5Pset_dxpl_multi
sets the data transfer property list
- dxpl_id
to use the multi-file driver for each
- memory usage type memb_dxpl[]
.
-
- H5Pset_dxpl_multi
can only be used after
- the member map has been set with H5Pset_fapl_multi
.
-
hid_t dxpl_id , |
- IN: Data transfer property list identifier. |
const hid_t *memb_dxpl |
- IN: Array of data access property lists. |
H5Pset_edc_check
(hid_t plist
,
- H5Z_EDC_t check
)
- H5Pset_edc_check
sets the dataset transfer property
- list plist
to enable or disable error detection
- when reading data.
-
- Whether error detection is enabled or disabled is specified
- in the check
parameter.
- Valid values are as follows:
-
- H5Z_ENABLE_EDC (default)
- - H5Z_DISABLE_EDC
- |
- The error detection algorithm used is the algorithm previously - specified in the corresponding dataset creation property list. -
- This function does not affect the use of error detection when - writing data. -
hid_t plist |
- IN: Dataset transfer property list identifier. |
H5Z_EDC_t check |
- IN: Specifies whether error checking is enabled or disabled - for dataset read operations. |
-SUBROUTINE h5pset_edc_check_f(prp_id, flag, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Dataset transfer property - ! list identifier - INTEGER, INTENT(IN) :: flag ! EDC flag; possible values - ! H5Z_DISABLE_EDC_F - ! H5Z_ENABLE_EDC_F - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure - -END SUBROUTINE h5pset_edc_check_f -- - -
H5Pset_external
(hid_t plist
,
- const char *name
,
- off_t offset
,
- hsize_t size
- )
- H5Pset_external
sets the
- external storage property in the property list,
- thus designating that the dataset will be stored in
- one or more non-HDF5 file(s) external to the HDF5 file.
- This call also adds the file name
as the
- first file in the list of external files.
- Subsequent calls to the function add the named file as
- the next file in the list.
-
- If a dataset is split across multiple files, then the files
- should be defined in order. The total size of the dataset is
- the sum of the size
arguments for all the external files.
- If the total size is larger than the size of the dataset, then the
- dataset can be extended (provided the data space also allows
- the extending).
-
- The size
argument specifies the number of bytes reserved
- for data in the external file.
- If size
is set to H5F_UNLIMITED
, the
- external file can be of unlimited size and no more files can be added
- to the external files list.
-
- All of the external files for a given dataset must be
- specified with H5Pset_external
- before H5Dcreate
is called to create
- the dataset.
- If one of these files does not exist on the system when
- H5Dwrite
is called to write data to it,
- the library will create the file.
-
hid_t plist |
- IN: Identifier of a dataset creation property list. |
const char *name |
- IN: Name of an external file. |
off_t offset |
- IN: Offset, in bytes, from the beginning of the file - to the location in the file where the data starts. |
hsize_t size |
- IN: Number of bytes reserved in the file for the data. |
-SUBROUTINE h5pset_external_f(prp_id, name, offset,bytes, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - CHARACTER(LEN=*), INTENT(IN) :: name ! Name of an external file - INTEGER, INTENT(IN) :: offset ! Offset, in bytes, from the - ! beginning of the file to the - ! location in the file where - ! the data starts - INTEGER(HSIZE_T), INTENT(IN) :: bytes ! Number of bytes reserved in - ! the file for the data - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pset_external_f -- - -
H5Pset_family_offset
(
- hid_t fapl_id
,
- hsize_t offset
- )
- H5Pset_family_offset
sets the offset property in the
- file access property list fapl_id
so that the user application
- can retrieve a file handle for low-level access to a particular member
- of a family of files. The file handle is retrieved with a separate call
- to H5Fget_vfd_handle
- (or, in special circumstances, to H5FDget_vfd_handle
;
- see Virtual File Layer and List of VFL Functions
- in HDF5 Technical Notes).
-
- The value of offset
is an offset in bytes from the
- beginning of the HDF5 file, identifying a user-determined location
- within the HDF5 file. The file handle the user application is seeking
- is for the specific member-file in the associated family of files
- to which this offset is mapped.
-
- Use of this function is only appropriate for an HDF5 file written as a
- family of files with the FAMILY
file driver.
-
hid_t fapl_id |
- IN: File access property list identifier. |
hsize_t offset |
- IN: Offset in bytes within the HDF5 file. |
-SUBROUTINE h5pset_family_offset_f(prp_id, offset, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER(HSIZE_T), INTENT(IN) :: offset ! Offset in bytes - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure - -END SUBROUTINE h5pset_family_offset_f -- - -
H5Pset_fapl_core
(
- hid_t fapl_id
,
- size_t increment
,
- hbool_t backing_store
- )
- H5FD_CORE
driver.
- H5Pset_fapl_core
modifies the file access property list
- to use the H5FD_CORE
driver.
-
- The H5FD_CORE
driver enables an application to work
- with a file in memory, speeding reads and writes as no disk access
- is made. File contents are stored only in memory until the file
- is closed. The backing_store
parameter determines
- whether file contents are ever written to disk.
-
- increment
specifies the increment by which allocated
- memory is to be increased each time more memory is required.
-
- If backing_store
is set to 1
- (TRUE
), the file contents are flushed to a file
- with the same name as this core file when the file is closed
- or access to the file is terminated in memory.
-
H5FD_CORE
driver to manipulate the file.
- hid_t fapl_id |
- IN: File access property list identifier. |
size_t increment |
- IN: Size, in bytes, of memory increments. |
hbool_t backing_store |
- IN: Boolean flag indicating whether to write the file - contents to disk when the file is closed. |
-SUBROUTINE h5pset_fapl_core_f(prp_id, increment, backing_store, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER(SIZE_T), INTENT(IN) :: increment ! File block size in bytes - LOGICAL, INTENT(IN) :: backing_store ! Flag to indicate that entire - ! file contents are flushed to - ! a file with the same name as - ! this core file - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pset_fapl_core_f -- - -
H5Pset_fapl_family
(
- hid_t fapl_id
,
- hsize_t memb_size
,
- hid_t memb_fapl_id
- )
- H5Pset_fapl_family
sets the file access property list
- identifier, fapl_id
, to use the family driver.
-
- memb_size
is the size in bytes of each file member. This size
- will be saved in the file when the property list fapl_id
is used
- to create a new file. If fapl_id
is used to open an existing
- file, memb_size
has to be equal to the original size saved in
- the file. A failure with an error message indicating the correct member
- size will be returned if memb_size
does not match the size saved.
- If the user does not know the original size, H5F_FAMILY_DEFAULT
- can be passed in. The library will retrieve the correct size saved in the file.
-
- memb_fapl_id
is the identifier of the
- file access property list to be used for each family member.
-
hid_t fapl_id |
- IN: File access property list identifier. |
hsize_t memb_size |
- IN: Size in bytes of each file member. |
hid_t memb_fapl_id |
- IN: Identifier of file access property list for each - family member. |
-SUBROUTINE h5pset_fapl_family_f(prp_id, memb_size, memb_plist, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER(HSIZE_T), INTENT(IN) :: memb_size ! Logical size, in bytes, - ! of each family member - INTEGER(HID_T), INTENT(IN) :: memb_plist ! Identifier of the file - ! access property list to be - ! used for each family member - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pset_fapl_family_f -- - -
H5Pset_fapl_gass
(
- hid_t fapl_id
,
- GASS_Info info
- )
- H5Pset_fapl_gass
stores user-supplied GASS information,
- the GASS_Info struct data as passed in info
,
- to the file access property list fapl_id
.
- fapl_id
can then be used to create and/or open the file.
-
- The GASS_Info object, info
, is used for
- file open operations when using GASS in the Globus environment.
-
- Any modification to info
after this function call
- returns may have an undetermined effect on the access property list.
- Users must call H5Pset_fapl_gass
again to setup
- the property list.
-
H5Pset_fapl_gass
is an experimental function.
- It is designed for use only when accessing files via the
- GASS facility of the Globus environment.
- For further information, see
- http://www.globus.org/.
- hid_t fapl_id , |
- IN: File access property list identifier. |
GASS_Info info |
- IN: Pointer to the GASS information structure. |
H5Pset_fapl_log
(
- hid_t fapl_id
,
- const char *logfile
,
- unsigned int flags
,
- size_t buf_size
- )
- H5Pset_fapl_log
modifies the
- file access property list to use the logging driver
- H5FD_LOG
.
-
- logfile
is the name of the file in which the
- logging entries are to be recorded.
-
- The actions to be logged are specified in the parameter flags
- using the pre-defined constants described in the following table.
- Multiple flags can be set through the use of a logical OR contained
- in parentheses. For example, logging read and write locations would
- be specified as (H5FD_LOG_LOC_READ|H5FD_LOG_LOC_WRITE)
.
-
-
- - Flag - - |
- - Description - |
- - H5FD_LOG_LOC_READ
-
- |
- - Track the location and length of every read, write, or seek operation. - |
- H5FD_LOG_LOC_WRITE
-
- | |
- H5FD_LOG_LOC_SEEK
-
- | |
- H5FD_LOG_LOC_IO
-
- | - Track all I/O locations and lengths. - The logical equivalent of the following: - |
-
- |
- (H5FD_LOG_LOC_READ | H5FD_LOG_LOC_WRITE | H5FD_LOG_LOC_SEEK)
- |
- - H5FD_LOG_FILE_READ
-
- |
- - Track the number of times each byte is read or written. - |
- H5FD_LOG_FILE_WRITE
-
- | |
- H5FD_LOG_FILE_IO
-
- | - Track the number of times each byte is read and written. - The logical equivalent of the following: - |
-
- |
- (H5FD_LOG_FILE_READ | H5FD_LOG_FILE_WRITE)
- |
- - H5FD_LOG_FLAVOR
-
- |
- - Track the type, or flavor, of information stored at each byte. - |
- - H5FD_LOG_NUM_READ
-
- |
- - Track the total number of read, write, or seek operations that occur. - |
- H5FD_LOG_NUM_WRITE
-
- | |
- H5FD_LOG_NUM_SEEK
-
- | |
- H5FD_LOG_NUM_IO
-
- | - Track the total number of all types of I/O operations. - The logical equivalent of the following: - |
-
- |
- (H5FD_LOG_NUM_READ | H5FD_LOG_NUM_WRITE | H5FD_LOG_NUM_SEEK)
- |
- - H5FD_LOG_TIME_OPEN
-
- |
- - Track the time spent in open, read, write, seek, or close operations. - - Partially implemented: write and seek - - Fully implemented: close - |
- H5FD_LOG_TIME_READ
-
- | |
- H5FD_LOG_TIME_WRITE
-
- | |
- H5FD_LOG_TIME_SEEK
-
- | |
- H5FD_LOG_TIME_CLOSE
-
- | |
- H5FD_LOG_TIME_IO
-
- | - Track the time spent in each of the above operations. - The logical equivalent of the following: - |
-
- |
- (H5FD_LOG_TIME_OPEN | H5FD_LOG_TIME_READ | H5FD_LOG_TIME_WRITE
- | H5FD_LOG_TIME_SEEK | H5FD_LOG_TIME_CLOSE)
- |
- - H5FD_LOG_ALLOC
-
- |
- - Track the allocation of space in the file. - |
- - H5FD_LOG_ALL
-
- |
- - Track everything. - The logical equivalent of the following: - |
-
- |
- (H5FD_LOG_ALLOC | H5FD_LOG_TIME_IO | H5FD_LOG_NUM_IO | H5FD_LOG_FLAVOR
- |H5FD_LOG_FILE_IO | H5FD_LOG_LOC_IO)
- |
- - |
- - |
- The logging driver can track the number of times
- each byte in the file is read from or written to
- (using H5FD_LOG_FILE_READ
and H5FD_LOG_FILE_WRITE
)
- and what kind of data is at that location
- (e.g., meta data, raw data; using H5FD_LOG_FLAVOR
).
- This information is tracked in a buffer of size buf_size
,
- which must be at least the size in bytes of the file to be logged.
-
-
hid_t fapl_id |
- IN: File access property list identifier. |
char *logfile |
- IN: Name of the log file. |
unsigned int flags |
- IN: Flags specifying the types of logging activity. |
size_t buf_size |
- IN: The size of the logging buffer. |
H5Pset_fapl_mpio
(
- hid_t fapl_id
,
- MPI_Comm comm
,
- MPI_Info info
- )
- H5Pset_fapl_mpio
stores the user-supplied
- MPI IO parameters comm
, for communicator, and
- info
, for information, in
- the file access property list fapl_id
.
- That property list can then be used to create and/or open the file.
-
- H5Pset_fapl_mpio
is available only in the
- parallel HDF5 library and is not a collective function.
-
- comm
is the MPI communicator to be used for
- file open as defined in MPI_FILE_OPEN
of MPI-2.
- This function does not create a duplicated communicator.
- Modifications to comm
after this function call
- returns may have an undetermined effect on the access property list.
- Users should not modify the communicator while it is defined
- in a property list.
-
- info
is the MPI info object to be used for
- file open as defined in MPI_FILE_OPEN
of MPI-2.
- This function does not create a duplicated info object.
- Any modification to the info object after this function call
- returns may have an undetermined effect on the access property list.
- Users should not modify the info while it is defined
- in a property list.
-
hid_t fapl_id |
- IN: File access property list identifier. |
MPI_Comm comm |
- IN: MPI-2 communicator. |
MPI_Info info |
- IN: MPI-2 info object. |
-SUBROUTINE h5pset_fapl_mpio_f(prp_id, comm, info, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER, INTENT(IN) :: comm ! MPI communicator to be used for - ! file open as defined in - ! MPI_FILE_OPEN of MPI-2 - INTEGER, INTENT(IN) :: info ! MPI info object to be used for - ! file open as defined in - ! MPI_FILE_OPEN of MPI-2 - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pset_fapl_mpio_f -- - -
H5Pset_fapl_mpiposix
(
- hid_t fapl_id
,
- MPI_Comm comm
- )
- H5Pset_fapl_mpiposix
stores the user-supplied
- MPI IO parameter comm
, for communicator,
- in the file access property list fapl_id
.
- That property list can then be used to create and/or open the file.
-
- H5Pset_fapl_mpiposix
is available only in the
- parallel HDF5 library and is not a collective function.
-
- comm
is the MPI communicator to be used for
- file open as defined in MPI_FILE_OPEN
of MPI-2.
- This function does not create a duplicated communicator.
- Modifications to comm
after this function call
- returns may have an undetermined effect on the access property list.
- Users should not modify the communicator while it is defined
- in a property list.
-
hid_t fapl_id |
- IN: File access property list identifier. |
MPI_Comm comm |
- IN: MPI-2 communicator. |
-SUBROUTINE h5pset_fapl_mpiposix_f(prp_id, comm, use_gpfs, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER, INTENT(IN) :: comm ! MPI communicator to be used - ! for file open as defined in - ! MPI_FILE_OPEN of MPI-2 - LOGICAL, INTENT(IN) :: use_gpfs ! Flag indicating whether to - ! use GPFS hints - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pset_fapl_mpiposix_f -- - -
H5Pset_fapl_multi
(
- hid_t fapl_id
,
- const H5FD_mem_t *memb_map
,
- const hid_t *memb_fapl
,
- const char * const *memb_name
,
- const haddr_t *memb_addr
,
- hbool_t relax
- )
- H5Pset_fapl_multi
sets the file access property list
- fapl_id
to use the multi-file driver.
- - The multi-file driver enables different types of HDF5 data and - metadata to be written to separate files. These files are viewed - by the HDF5 library and the application as a single virtual HDF5 file - with a single HDF5 file address space. - The types of data that can be broken out into separate files include - raw data, the superblock, B-tree data, global heap data, - local heap data, and object headers. - At the programmer's discretion, two or more types of data can be - written to the same file while other types of data are written to - separate files. -
- The array memb_map
maps memory usage types to other
- memory usage types and is the mechanism that allows the caller
- to specify how many files are created.
- The array contains H5FD_MEM_NTYPES
entries,
- which are either the value H5FD_MEM_DEFAULT
- or a memory usage type.
- The number of unique values determines the number of files
- that are opened.
-
- The array memb_fapl
contains a property list
- for each memory usage type that will be associated with a file.
-
- The array memb_name
should be a name generator
- (a printf-style format with a %s which will be replaced with the
- name passed to H5FDopen
, usually from
- H5Fcreate
or H5Fopen
).
-
- The array memb_addr
specifies the offsets within the
- virtual address space, from 0
(zero) to
- HADDR_MAX
, at which each type of data storage begins.
-
- If relax
is set to TRUE
(or 1
),
- then opening an existing file for read-only access will not fail
- if some file members are missing.
- This allows a file to be accessed in a limited sense if just the
- meta data is available.
-
- Default values for each of the optional arguments are as follows: -
memb_map
- H5FD_MEM_DEFAULT
for each element.
- memb_fapl
- H5P_DEFAULT
for each element.
- memb_name
- %s-X.h5
- where X
is one of the
- following letters:
- s
for H5FD_MEM_SUPER
- b
for H5FD_MEM_BTREE
- r
for H5FD_MEM_DRAW
- g
for H5FD_MEM_GHEAP
- l
for H5FD_MEM_LHEAP
- o
for H5FD_MEM_OHDR
- memb_addr
- HADDR_UNDEF
for each element.
- hid_t fapl_id |
- IN: File access property list identifier. |
const H5FD_mem_t *memb_map |
- IN: Maps memory usage types to other memory usage types. |
const hid_t *memb_fapl |
- IN: Property list for each memory usage type. |
const char * const *memb_name |
- IN: Name generator for names of member files. |
const haddr_t *memb_addr |
- IN: The offsets within the virtual address space,
- from 0 (zero) to HADDR_MAX ,
- at which each type of data storage begins. |
hbool_t relax |
- IN: Allows read-only access to incomplete file sets
- when TRUE . |
- H5FD_mem_t mt, memb_map[H5FD_MEM_NTYPES]; - hid_t memb_fapl[H5FD_MEM_NTYPES]; - const char *memb_name[H5FD_MEM_NTYPES]; - haddr_t memb_addr[H5FD_MEM_NTYPES]; - - // The mapping... - for (mt=0; mt<H5FD_MEM_NTYPES; mt++) { - memb_map[mt] = H5FD_MEM_SUPER; - } - memb_map[H5FD_MEM_DRAW] = H5FD_MEM_DRAW; - - // Member information - memb_fapl[H5FD_MEM_SUPER] = H5P_DEFAULT; - memb_name[H5FD_MEM_SUPER] = "%s.meta"; - memb_addr[H5FD_MEM_SUPER] = 0; - - memb_fapl[H5FD_MEM_DRAW] = H5P_DEFAULT; - memb_name[H5FD_MEM_DRAW] = "%s.raw"; - memb_addr[H5FD_MEM_DRAW] = HADDR_MAX/2; - - hid_t fapl = H5Pcreate(H5P_FILE_ACCESS); - H5Pset_fapl_multi(fapl, memb_map, memb_fapl, - memb_name, memb_addr, TRUE); -- -
-SUBROUTINE h5pset_fapl_multi_f(prp_id, memb_map, memb_fapl, memb_name, - memb_addr, relax, hdferr) - IMPLICIT NONE - INTEGER(HID_T),INTENT(IN) :: prp_id ! Property list identifier - - INTEGER,DIMENSION(0:H5FD_MEM_NTYPES_F-1),INTENT(IN) :: memb_map - INTEGER(HID_T),DIMENSION(0:H5FD_MEM_NTYPES_F-1),INTENT(IN) :: memb_fapl - CHARACTER(LEN=*),DIMENSION(0:H5FD_MEM_NTYPES_F-1),INTENT(IN) :: memb_name - REAL, DIMENSION(0:H5FD_MEM_NTYPES_F-1), INTENT(IN) :: memb_addr - ! Numbers in the interval [0,1) (e.g. 0.0 0.1 0.5 0.2 0.3 0.4) - ! real address in the file will be calculated as X*HADDR_MAX - - LOGICAL, INTENT(IN) :: relax - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pset_fapl_multi_f -- - -
H5Pset_fapl_sec2
(
- hid_t fapl_id
- )
- H5Pset_fapl_sec2
modifies the file access property list
- to use the H5FD_SEC2
driver.
- hid_t fapl_id |
- IN: File access property list identifier. |
-SUBROUTINE h5pset_fapl_sec2_f(prp_id, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pset_fapl_sec2_f -- - -
H5Pset_fapl_split
(
- hid_t fapl_id
,
- const char *meta_ext
,
- hid_t meta_plist_id
,
- const char *raw_ext
,
- hid_t raw_plist_id
- )
- H5Pset_fapl_split
is a compatibility function that
- enables the multi-file driver to emulate the split driver from
- HDF5 Releases 1.0 and 1.2.
- The split file driver stored metadata and raw data in separate files
- but provided no mechanism for separating types of metadata.
-
- fapl_id
is a file access property list identifier.
-
- meta_ext
is the filename extension for the metadata file.
- The extension is appended to the name passed to H5FDopen
,
- usually from H5Fcreate
or H5Fopen
,
- to form the name of the metadata file.
- If the string %s is used in the extension, it works like the
- name generator as in H5Pset_fapl_multi
.
-
- meta_plist_id
is the file access property list identifier
- for the metadata file.
-
- raw_ext
is the filename extension for the raw data file.
- The extension is appended to the name passed to H5FDopen
,
- usually from H5Fcreate
or H5Fopen
,
- to form the name of the raw data file.
- If the string %s is used in the extension, it works like the
- name generator as in H5Pset_fapl_multi
.
-
- raw_plist_id
is the file access property list identifier
- for the raw data file.
-
- If a user wishes to check to see whether this driver is in use,
- the user must call H5Pget_driver
and compare the
- returned value to the string H5FD_MULTI
.
- A positive match will confirm that the multi driver is in use;
- HDF5 provides no mechanism to determine whether it was called
- as the special case invoked by H5Pset_fapl_split
.
-
hid_t fapl_id , |
- IN: File access property list identifier. |
const char *meta_ext, |
- IN: Metadata filename extension. |
hid_t meta_plist_id , |
- IN: File access property list identifier for the metadata file. |
const char *raw_ext , |
- IN: Raw data filename extension. |
hid_t raw_plist_id |
- IN: File access property list identifier for the raw data file. |
-/* Example 1: Both metadata and rawdata files are in the same */ -/* directory. Use Station1-m.h5 and Station1-r.h5 as */ -/* the metadata and rawdata files. */ -hid_t fapl, fid; -fapl = H5Pcreate(H5P_FILE_ACCESS); -H5Pset_fapl_split(fapl, "-m.h5", H5P_DEFAULT, "-r.h5", H5P_DEFAULT); -fid=H5Fcreate("Station1",H5F_ACC_TRUNC,H5P_DEFAULT,fapl); - -/* Example 2: metadata and rawdata files are in different */ -/* directories. Use PointA-m.h5 and /pfs/PointA-r.h5 as */ -/* the metadata and rawdata files. */ -hid_t fapl, fid; -fapl = H5Pcreate(H5P_FILE_ACCESS); -H5Pset_fapl_split(fapl, "-m.h5", H5P_DEFAULT, "/pfs/%s-r.h5", H5P_DEFAULT); -fid=H5Fcreate("PointA",H5F_ACC_TRUNC,H5P_DEFAULT,fapl);- - -
-SUBROUTINE h5pset_fapl_split_f(prp_id, meta_ext, meta_plist, raw_ext, & - raw_plist, hdferr) - IMPLICIT NONE - INTEGER(HID_T),INTENT(IN) :: prp_id ! Property list identifier - CHARACTER(LEN=*),INTENT(IN) :: meta_ext ! Name of the extension for - ! the metafile filename - INTEGER(HID_T),INTENT(IN) :: meta_plist ! Identifier of the meta file - ! access property list - CHARACTER(LEN=*),INTENT(IN) :: raw_ext ! Name extension for the raw - ! file filename - INTEGER(HID_T),INTENT(IN) :: raw_plist ! Identifier of the raw file - ! access property list - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pset_fapl_split_f -- - -
H5Pset_fapl_srb
(
- hid_t fapl_id
,
- SRB_Info info
- )
- H5Pset_fapl_srb
stores the SRB client-to-server
- connection handler SRB_CONN
after the connection
- is established and other user-supplied SRB information.
-
- The user-supplied SRB information is contained in the
- SRB_Info struct pointed to by info
- and is stored in the file access property list fapl_id
.
- This information can then be used to create or open a file.
-
H5Pset_fapl_gass
is an experimental function.
- It is designed for use only when accessing files via the
- Storage Resource Broker (SRB). For further information, see
- http//www.npaci.edu/Research/DI/srb/.
- hid_t fapl_id |
- IN: File access property list identifier. |
SRB_Info info |
- IN: Pointer to the SRB information structure. |
H5Pset_fapl_stdio
(
- hid_t fapl_id
- )
- H5Pset_fapl_stdio
modifies the file access property list
- to use the standard I/O driver, H5FD_STDIO
.
- hid_t fapl_id |
- IN: File access property list identifier. |
-SUBROUTINE h5pset_fapl_stdio_f(prp_id, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pset_fapl_stdio_f -- - -
H5Pset_fapl_stream
(
- hid_t fapl_id
,
- H5FD_stream_fapl_t *fapl
- )
- H5Pset_fapl_stream
sets up the use of the
- streaming I/O driver.
-
- fapl_id
is the identifier for the
- file access property list currently in use.
-
- fapl
is the file access property list.
-
- The H5FD_stream_fapl_t
struct contains the following
- elements:
-
size_t | -increment |
H5FD_STREAM_SOCKET_TYPE | -socket |
hbool_t | -do_socket_io |
unsigned int | -backlog |
H5FD_stream_broadcast_t | -broadcast_fn |
void * | -broadcast_arg |
increment
specifies how much memory to allocate
- each time additional memory is required.
- socket
is an external socket descriptor;
- if a valid socket argument is provided, that socket will be used.
- do_socket_io
is a boolean value specifying whether
- to perform I/O on socket
.
- backlog
is the argument for the
- listen
call.
- broadcast_fn
is the broadcast callback function.
- broadcast_arg
is the user argument to
- the broadcast callback function.
-
- H5Pset_fapl_stream
and H5Pget_fapl_stream
- are not intended for use in a parallel environment.
-
hid_t fapl_id |
- IN: File access property list identifier. |
H5FD_stream_fapl_t *fapl |
- IN: The streaming I/O file access property list. |
H5Pset_fclose_degree
(hid_t fapl_id
,
- H5F_close_degree_t fc_degree
)
- H5Pset_fclose_degree
sets the file close degree property fc_degree
- in the file access property list fapl_id
.
- The value of fc_degree
determines how aggressively H5Fclose
- deals with objects within a file that remain open when H5Fclose
- is called to close that file. fc_degree
can have any one of
- four valid values:
-
Degree name | -H5Fclose behavior with no open object
- in file |
- H5Fclose behavior with open object(s)
- in file |
-
---|---|---|
H5F_CLOSE_WEAK |
- Actual file is closed. | -Access to file identifier is terminated; actual file - close is delayed until all objects in file are closed | -
H5F_CLOSE_SEMI |
- Actual file is closed. | -Function returns FAILURE | -
H5F_CLOSE_STRONG |
- Actual file is closed. | -All open objects remaining in the file are closed then - file is closed | -
H5F_CLOSE_DEFAULT |
- The VFL driver chooses the behavior. Currently,
- all VFL drivers set this value to H5F_CLOSE_WEAK , except
- for the MPI-I/O driver, which sets it to H5F_CLOSE_SEMI .
- |
-
hid_t fapl_id |
- IN: File access property list identifier. |
H5F_close_degree_t fc_degree |
- IN: Pointer to a location containing the file close degree property,
- the value of fc_degree . |
-SUBROUTINE h5pset_fclose_degree_f(fapl_id, degree, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: fapl_id ! File access property list identifier - INTEGER, INTENT(IN) :: degree ! Info about file close behavior - ! Possible values: - ! H5F_CLOSE_DEFAULT_F - ! H5F_CLOSE_WEAK_F - ! H5F_CLOSE_SEMI_F - ! H5F_CLOSE_STRONG_F - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pset_fclose_degree_f -- - -
H5Pset_fill_time
(hid_t plist_id
,
- H5D_fill_time_t fill_time
- )
- H5Pset_fill_time
sets up the timing for writing fill values
- to a dataset.
- This property is set in the dataset creation property list plist_id
.
-
- Timing is specified in fill_time
with one of the following values:
-
- H5D_FILL_TIME_IFSET
- | - Write fill values to the dataset when storage space is allocated - only if there is a user-defined fill value, i.e., one set with - H5Pset_fill_value. - (Default) - | |
- H5D_FILL_TIME_ALLOC
- | - Write fill values to the dataset when storage space is allocated. - | |
- H5D_FILL_TIME_NEVER
- | - Never write fill values to the dataset. - |
H5Pset_fill_time
is designed for coordination
- with the dataset fill value and
- dataset storage allocation time properties, set with the functions
- H5Pset_fill_value
and H5Pset_alloc_time
.
- - See H5Dcreate for - further cross-references. -
hid_t plist_id |
- IN: Dataset creation property list identifier. |
H5D_fill_time_t fill_time |
- IN: When to write fill values to a dataset. |
-SUBROUTINE h5pset_fill_time_f(plist_id, flag, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: plist_id ! Dataset creation property - ! list identifier - INTEGER(HSIZE_T), INTENT(IN) :: flag ! File time flag - ! Possible values are: - ! H5D_FILL_TIME_ERROR_F - ! H5D_FILL_TIME_ALLOC_F - ! H5D_FILL_TIME_NEVER_F - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pset_fill_time_f -- - -
H5Pset_fill_value
(hid_t plist_id
,
- hid_t type_id
,
- const void *value
- )
- H5Pset_fill_value
sets the fill value for
- a dataset in the dataset creation property list.
-
- value
is interpreted as being of datatype
- type_id
. This datatype may differ from that of
- the dataset, but the HDF5 library must be able to convert
- value
to the dataset datatype when the dataset
- is created.
-
- The default fill value is 0
(zero), which is
- interpreted according to the actual dataset datatype.
-
- Setting value
to NULL
indicates
- that the fill value is to be undefined.
-
- A fill value should be defined so that it is appropriate for
- the application. While the HDF5 default fill value is
- 0
(zero), it is often appropriate to use another value.
- It might be useful, for example, to use a value that is
- known to be impossible for the application to legitimately generate.
-
- H5Pset_fill_value
is designed to work in
- concert with H5Pset_alloc_time
and
- H5Pset_fill_time
.
- H5Pset_alloc_time
and H5Pset_fill_time
- govern the timing of dataset storage allocation and fill value
- write operations and can be important in tuning application
- performance.
-
- See H5Dcreate for - further cross-references. -
hid_t plist_id |
- IN: Dataset creation property list identifier. |
hid_t type_id , |
- IN: Datatype of value . |
const void *value |
- IN: Pointer to buffer containing value to use as fill value. |
-SUBROUTINE h5pset_fill_value_f(prp_id, type_id, fillvalue, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER(HID_T), INTENT(IN) :: type_id ! Datatype identifier of fill - ! value datatype (in memory) - TYPE(VOID), INTENT(IN) :: fillvalue ! Fillvalue - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pset_fill_value_f -- - -
H5Pset_filter
(hid_t plist
,
- H5Z_filter_t filter
,
- unsigned int flags
,
- size_t cd_nelmts
,
- const unsigned int cd_values[]
- )
- H5Pset_filter
adds the specified
- filter
and corresponding properties to the
- end of an output filter pipeline.
- If plist
is a dataset creation property list,
- the filter is added to the permanent filter pipeline;
- if plist
is a dataset transfer property list,
- the filter is added to the transient filter pipeline.
-
- The array cd_values
contains
- cd_nelmts
integers which are auxiliary data
- for the filter. The integer values will be stored in the
- dataset object header as part of the filter information.
-
- The flags
argument is a bit vector with
- the following fields specifying certain general properties
- of the filter:
-
H5Z_FLAG_OPTIONAL |
- If this bit is set then the filter is
- optional. If the filter fails (see below) during an
- H5Dwrite operation then the filter is
- just excluded from the pipeline for the chunk for which
- it failed; the filter will not participate in the
- pipeline during an H5Dread of the chunk.
- This is commonly used for compression filters: if the
- filter result would be larger than the input, then
- the compression filter returns failure and the
- uncompressed data is stored in the file. If this bit is
- clear and a filter fails, then H5Dwrite
- or H5Dread also fails.
- - This flag should not be set for the Fletcher32 checksum - filter as it will bypass the checksum filter without - reporting checksum errors to an application. |
-
- The filter
parameter specifies the filter to be set.
- Valid filter identifiers are as follows:
-
-
- H5Z_FILTER_DEFLATE
- | - Data compression filter, employing the gzip algorithm - |
- H5Z_FILTER_SHUFFLE
- | - Data shuffling filter - |
- H5Z_FILTER_FLETCHER32
- | - Error detection filter, employing the Fletcher32 checksum algorithm - |
- H5Z_FILTER_SZIP
- | - Data compression filter, employing the SZIP algorithm - |
- Also see H5Pset_edc_check and - H5Pset_filter_callback. - -
plist
must be a dataset creation
- property list.
- - If multiple filters are set for a property list, they will be - applied to each chunk in the order in which they were set. -
hid_t plist |
- IN: Property list identifier. |
H5Z_filter_t filter |
- IN: Filter identifier for the filter - to be added to the pipeline. |
unsigned int flags |
- IN: Bit vector specifying certain general properties - of the filter. |
size_t cd_nelmts |
- IN: Number of elements in cd_values . |
const unsigned int cd_values[] |
- IN: Auxiliary data for the filter. |
-SUBROUTINE h5pset_filter_f(prp_id, filter, flags, cd_nelmts, cd_values, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER, INTENT(IN) :: filter ! Filter to be added to the pipeline - INTEGER, INTENT(IN) :: flags ! Bit vector specifying certain - ! general properties of the filter - INTEGER(SIZE_T), INTENT(IN) :: cd_nelmts - ! Number of elements in cd_values - INTEGER, DIMENSION(*), INTENT(IN) :: cd_values - ! Auxiliary data for the filter - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pset_filter_f -- - -
H5Pset_filter_callback
(hid_t plist
,
- H5Z_filter_func_t func
,
- void *op_data
)
- H5Pset_filter_callback
sets the user-defined
- filter callback function func
in the
- dataset transfer property list plist
.
-
- The parameter op_data
is a pointer to user-defined
- input data for the callback function and will be passed through
- to the callback function.
-
- The callback function func
defines the actions
- an application is to take when a filter fails.
- The function prototype is as follows:
-
typedef
H5Z_cb_return_t (H5Z_filter_func_t
)
- (H5Z_filter_t filter
,
- void *buf
,
- size_t buf_size
,
- void *op_data
)
-
- where filter
indicates which filter has failed,
- buf
and buf_size
are used to pass in
- the failed data,
- and op_data
is the required input data for this
- callback function.
-
- Valid callback function return values are
- H5Z_CB_FAIL
and H5Z_CB_CONT
.
-
hid_t plist |
- IN: Dataset transfer property list identifier. |
H5Z_filter_func_t func |
- IN: User-defined filter callback function. |
void *op_data |
- IN: User-defined input data for the callback function. |
H5Pset_fletcher32
(hid_t plist
)
- H5Pset_fletcher32
sets the Fletcher32 checksum filter
- in the dataset creation property list plist
.
- hid_t plist |
- IN: Dataset creation property list identifier. |
-SUBROUTINE h5pset_fletcher32_f(prp_id, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Dataset creation property list - ! identifier - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pset_fletcher32_f -- - -
H5Pset_gc_reference
(hid_t plist
,
- unsigned gc_ref
- )
- H5Pset_gc_references
sets the flag for
- garbage collecting references for the file.
- - Dataset region references and other reference types use space - in an HDF5 file's global heap. If garbage collection is on - and the user passes in an uninitialized value in a reference structure, - the heap might get corrupted. When garbage collection is off, however, - and the user re-uses a reference, the previous heap block will be - orphaned and not returned to the free heap space. -
- When garbage collection is on, the user must initialize the - reference structures to 0 or risk heap corruption. -
- The default value for garbage collecting references is off. -
hid_t plist |
- IN: File access property list identifier. |
unsigned gc_ref |
- IN: Flag setting reference garbage collection to
- on (1 ) or off (0 ). |
-SUBROUTINE h5pset_gc_references_f (prp_id, gc_reference, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER, INTENT(IN) :: gc_reference ! Flag for garbage collecting - ! references for the file - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pset_gc_references_f -- - -
H5Pset_hyper_vector_size
(hid_t dxpl_id
,
- size_t vector_size
- )
- H5Pset_hyper_vector_size
sets the number of
- I/O vectors to be accumulated in memory before being issued
- to the lower levels of the HDF5 library for reading or writing the
- actual data.
- - The I/O vectors are hyperslab offset and length pairs - and are generated during hyperslab I/O. -
- The number of I/O vectors is passed in vector_size
- to be set in the dataset transfer property list dxpl_id
.
- vector_size
must be greater than 1
(one).
-
- H5Pset_hyper_vector_size
is an I/O optimization function;
- increasing vector_size
should provide better performance,
- but the library will use more memory during hyperslab I/O.
- The default value of vector_size
is 1024
.
-
hid_t dxpl_id |
- IN: Dataset transfer property list identifier. |
size_t vector_size |
- IN: Number of I/O vectors to accumulate in memory for I/O operations.
- Must be greater than 1 (one). Default value: 1024 . |
-SUBROUTINE h5pset_hyper_vector_size_f(plist_id, size, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: plist_id ! Dataset transfer property list - ! identifier - INTEGER(SIZE_T), INTENT(IN) :: size ! Vector size - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pset_hyper_vector_size_f -- - -
H5Pset_istore_k
(hid_t plist
,
- unsigned ik
- )
- H5Pset_istore_k
sets the size of the parameter
- used to control the B-trees for indexing chunked datasets.
- This function is only valid for file creation property lists.
-
- ik
is one half the rank of a tree that stores
- chunked raw data. On average, such a tree will be 75% full,
- or have an average rank of 1.5 times the value of
- ik
.
-
hid_t plist |
- IN: Identifier of property list to query. |
unsigned ik |
- IN: 1/2 rank of chunked storage B-tree. |
-SUBROUTINE h5pset_istore_k_f (prp_id, ik, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER, INTENT(IN) :: ik ! 1/2 rank of chunked storage B-tree - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pset_istore_k_f -- - -
H5Pset_layout
(hid_t plist
,
- H5D_layout_t layout
- )
- H5Pset_layout
sets the type of storage used to store the
- raw data for a dataset.
- This function is only valid for dataset creation property lists.
-
- Valid values for layout
are:
-
- Note that a compact storage layout may affect writing data to - the dataset with parallel applications. See note in - H5Dwrite - documentation for details. -
hid_t plist |
- IN: Identifier of property list to query. |
H5D_layout_t layout |
- IN: Type of storage layout for raw data. |
-SUBROUTINE h5pset_layout_f (prp_id, layout, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER, INTENT(IN) :: layout ! Type of storage layout for raw data - ! Possible values are: - ! H5D_COMPACT_F - ! H5D_CONTIGUOUS_F - ! H5D_CHUNKED_F - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pset_layout_f -- - -
H5Pset_meta_block_size
(
- hid_t fapl_id
,
- hsize_t size
- )
- H5Pset_meta_block_size
sets the
- minimum size, in bytes, of metadata block allocations when
- H5FD_FEAT_AGGREGATE_METADATA
is set by a VFL driver.
- - Each raw metadata block is initially allocated to be of the - given size. Specific metadata objects (e.g., object headers, - local heaps, B-trees) are then sub-allocated from this block. -
- The default setting is 2048 bytes, meaning that the library
- will attempt to aggregate metadata in at least 2K blocks in the file.
- Setting the value to 0
(zero) with this function
- will turn off metadata aggregation, even if the VFL driver attempts
- to use the metadata aggregation strategy.
-
- Metadata aggregation reduces the number of small data objects - in the file that would otherwise be required for metadata. - The aggregated block of metadata is usually written in a - single write action and always in a contiguous block, - potentially significantly improving library and application - performance. -
hid_t fapl_id |
- IN: File access property list identifier. |
hsize_t size |
- IN: Minimum size, in bytes, of metadata block allocations. |
-SUBROUTINE h5pset_meta_block_size_f(plist_id, size, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: plist_id ! File access property list - ! identifier - INTEGER(HSIZE_T), INTENT(IN) :: size ! Metadata block size - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pset_meta_block_size_f -- - -
H5Pset_multi_type
(
- hid_t fapl_id
,
- H5FD_mem_t type
- )
- MULTI
driver.
- H5Pset_multi_type
sets the data type property in the
- file access or data transfer property list fapl_id
.
- This enables a user application to specify the type of data the
- application wishes to access so that the application
- can retrieve a file handle for low-level access to the particular member
- of a set of MULTI
files in which that type of data is stored.
- The file handle is retrieved with a separate call
- to H5Fget_vfd_handle
- (or, in special circumstances, to H5FDget_vfd_handle
;
- see Virtual File Layer and List of VFL Functions
- in HDF5 Technical Notes).
-
- The type of data specified in type
may be one of the following:
-
- H5FD_MEM_DEFAULT
- | - Need description.... - | |
- H5FD_MEM_SUPER
- | - Super block ... need description.... - | |
- H5FD_MEM_BTREE
- | - Btree ... need description.... - | |
- H5FD_MEM_DRAW
- | - Need description.... - | |
- H5FD_MEM_GHEAP
- | - Global heap ... need description.... - | |
- H5FD_MEM_LHEAP
- | - Local Heap ... need description.... - | |
- H5FD_MEM_OHDR
- | - Need description.... - |
- Use of this function is only appropriate for an HDF5 file written
- as a set of files with the MULTI
file driver.
-
hid_t fapl_id |
- IN: File access property list or data transfer property list identifier. |
H5FD_mem_t type |
- OUT: Type of data. |
H5Pset_preserve
(hid_t plist
,
- hbool_t status
- )
- H5Pset_preserve
sets the
- dataset transfer property list status to TRUE or FALSE.
- - When reading or writing compound data types and the - destination is partially initialized and the read/write is - intended to initialize the other members, one must set this - property to TRUE. Otherwise the I/O pipeline treats the - destination datapoints as completely uninitialized. -
hid_t plist |
- IN: Identifier for the dataset transfer property list. |
hbool_t status |
- IN: Status of for the dataset transfer property list - (TRUE/FALSE). |
-SUBROUTINE h5pset_preserve_f(prp_id, flag, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Dataset transfer property - ! list identifier - LOGICAL, INTENT(IN) :: flag ! Status for the dataset - ! transfer property list - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pset_preserve_f -- - -
H5Pset_shuffle
(hid_t plist_id
)
- H5Pset_shuffle
sets the shuffle filter,
- H5Z_FILTER_SHUFFLE
,
- in the dataset creation property list plist_id
.
-
- The shuffle filter de-interlaces
- a block of data by reordering the bytes.
- All the bytes from one consistent byte position of
- each data element are placed together in one block;
- all bytes from a second consistent byte position of
- each data element are placed together a second block; etc.
- For example, given three data elements of a 4-byte datatype
- stored as 012301230123
,
- shuffling will re-order data as 000111222333
.
- This can be a valuable step in an effective compression
- algorithm because the bytes in each byte position are often
- closely related to each other and putting them together
- can increase the compression ratio.
-
- As implied above, the primary value of the shuffle filter - lies in its coordinated use with a compression filter; - it does not provide data compression when used alone. - When the shuffle filter is applied to a dataset - immediately prior to the use of a compression filter, - the compression ratio achieved is often superior to that - achieved by the use of a compression filter without - the shuffle filter. -
hid_t plist_id |
- IN: Dataset creation property list identifier. |
-SUBROUTINE h5pset_shuffle_f(prp_id, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pset_shuffle_f -- - -
H5Pset_sieve_buf_size
(
- hid_t fapl_id
,
- hsize_t size
- )
- H5Pset_sieve_buf_size
sets size
,
- the maximum size in bytes of the data sieve buffer, which is
- used by file drivers that are capable of using data sieving.
- - The data sieve buffer is used when performing I/O on datasets - in the file. Using a buffer which is large enough to hold - several pieces of the dataset being read in for - hyperslab selections boosts performance by quite a bit. -
- The default value is set to 64KB, indicating that file I/O for - raw data reads and writes will occur in at least 64KB blocks. - Setting the value to 0 with this API function will turn off the - data sieving, even if the VFL driver attempts to use that strategy. -
hid_t fapl_id |
- IN: File access property list identifier. |
hsize_t size |
- IN: Maximum size, in bytes, of data sieve buffer. |
-SUBROUTINE h5pset_sieve_buf_size_f(plist_id, size, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: plist_id ! File access property list - ! identifier - INTEGER(SIZE_T), INTENT(IN) :: size ! Sieve buffer size - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pset_sieve_buf_size_f -- - -
H5Pset_sizes
(hid_t plist
,
- size_t sizeof_addr
,
- size_t sizeof_size
- )
- H5Pset_sizes
sets the byte size of the offsets
- and lengths used to address objects in an HDF5 file.
- This function is only valid for file creation property lists.
- Passing in a value of 0 for one of the sizeof_...
- parameters retains the current value.
- The default value for both values is the same as
- sizeof(hsize_t)
in the library (normally 8 bytes).
- Valid values currently are 2, 4, 8 and 16.
- hid_t plist |
- IN: Identifier of property list to modify. |
size_t sizeof_addr |
- IN: Size of an object offset in bytes. |
size_t sizeof_size |
- IN: Size of an object length in bytes. |
-SUBROUTINE h5pset_sizes_f (prp_id, sizeof_addr, sizeof_size, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER(SIZE_T), INTENT(IN) :: sizeof_addr ! Size of an object offset - ! in bytes - INTEGER(SIZE_T), INTENT(IN) :: sizeof_size ! Size of an object length - ! in bytes - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pset_sizes_f -- - -
H5Pset_small_data_block_size
(hid_t fapl_id
,
- hsize_t size
- )
- H5Pset_small_data_block_size
reserves blocks of
- size
bytes for the contiguous storage of the raw data
- portion of small datasets.
- The HDF5 library then writes the raw data from small datasets
- to this reserved space, thus reducing unnecessary discontinuities
- within blocks of meta data and improving IO performance.
- - A small data block is actually allocated the first time a - qualifying small dataset is written to the file. - Space for the raw data portion of this small dataset is suballocated - within the small data block. - The raw data from each subsequent small dataset is also written to - the small data block until it is filled; additional small data blocks - are allocated as required. -
- The HDF5 library employs an algorithm that determines whether
- IO performance is likely to benefit from the use of this mechanism
- with each dataset as storage space is allocated in the file.
- A larger size
will result in this mechanism being
- employed with larger datasets.
-
- The small data block size is set as an allocation property in the
- file access property list identified by fapl_id
.
-
- Setting size
to zero (0
) disables the
- small data block mechanism.
-
hid_t fapl_id |
- IN: File access property list identifier. |
hsize_t size |
- IN: Maximum size, in bytes, of the small data block.
- - The default size is 2048 . |
-SUBROUTINE h5pset_small_data_block_size_f(plist_id, size, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: plist_id ! File access - ! property list identifier - INTEGER(HSIZE_T), INTENT(IN) :: size ! Small raw data block size - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pset_small_data_block_size_f -- - -
H5Pset_sym_k
(hid_t plist
,
- unsigned ik
,
- unsigned lk
- )
- H5Pset_sym_k
sets the size of parameters used to
- control the symbol table nodes. This function is only valid
- for file creation property lists. Passing in a value of 0 for
- one of the parameters retains the current value.
-
- ik
is one half the rank of a tree that stores a symbol
- table for a group. Internal nodes of the symbol table are on
- average 75% full. That is, the average rank of the tree is
- 1.5 times the value of ik
.
-
- lk
is one half of the number of symbols that can
- be stored in a symbol table node. A symbol table node is the
- leaf of a symbol table tree which is used to store a group.
- When symbols are inserted randomly into a group, the group's
- symbol table nodes are 75% full on average. That is, they
- contain 1.5 times the number of symbols specified by
- lk
.
-
hid_t plist |
- IN: Identifier for property list to query. |
unsigned ik |
- IN: Symbol table tree rank. |
unsigned lk |
- IN: Symbol table node size. |
-SUBROUTINE h5pset_sym_k_f (prp_id, ik, lk, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER, INTENT(IN) :: ik ! Symbol table tree rank - INTEGER, INTENT(IN) :: lk ! Symbol table node size - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pset_sym_k_f -- - -
H5Pset_szip
(hid_t plist
,
- unsigned int options_mask
,
- unsigned int pixels_per_block
)
- H5Pset_szip
sets an SZIP compression filter,
- H5Z_FILTER_SZIP
, for a dataset.
- SZIP is a compression method designed for use with scientific data.
- - Before proceeding, be aware that there are factors that affect - your rights and ability to use SZIP compression. - See the documents at - SZIP Compression in HDF5 - for important information regarding terms of use and - the SZIP copyright notice, - for further discussion of SZIP compression in HDF5, - and for a list of SZIP-related references. - -
- In the text below, the term pixel refers to - an HDF5 data element. - This terminology derives from SZIP compression's use with image data, - where pixel referred to an image pixel. -
- The SZIP bits_per_pixel
value (see Notes, below)
- is automatically set, based on the HDF5 datatype.
- SZIP can be used with atomic datatypes that may have size
- of 8, 16, 32, or 64 bits.
- Specifically, a dataset with a datatype that is
- 8-, 16-, 32-, or 64-bit
- signed or unsigned integer;
- char; or
- 32- or 64-bit float
- can be compressed with SZIP.
- See Notes, below, for further discussion of the
- the SZIP bits_per_pixel
setting.
-
-
- SZIP compression cannot be applied to
- compound datatypes,
- array datatypes,
- variable-length datatypes,
- enumerations, or
- any other user-defined datatypes.
- If an SZIP filter is set up for a dataset containing a non-allowed
- datatype, H5Pset_szip
will succeed but the subsequent call
- to H5Dcreate
- will fail;
- the conflict is detected only when the property list is used.
-
-
-
- SZIP options are passed in an options mask, options_mask
,
- as follows.
-
- - Option - |
- - Description - - (Mutually exclusive; select one.) - |
- - H5_SZIP_EC_OPTION_MASK
- |
- - Selects entropy coding method. - |
- H5_SZIP_NN_OPTION_MASK
- | - Selects nearest neighbor coding method. - |
- - |
- - |
H5_SZIP_EC_OPTION_MASK
, is best suited for
- data that has been processed.
- The EC method works best for small numbers.
- H5_SZIP_NN_OPTION_MASK
,
- preprocesses the data then the applies EC method as above.
-
- SZIP compresses data block by block, with a user-tunable block size.
- This block size is passed in the parameter
- pixels_per_block
and must be even and not greater than 32,
- with typical values being 8
, 10
,
- 16
, or 32
.
- This parameter affects compression ratio;
- the more pixel values vary, the smaller this number should be to
- achieve better performance.
-
- In HDF5, compression can be applied only to chunked datasets.
- If pixels_per_block
is bigger than the total
- number of elements in a dataset chunk,
- H5Pset_szip
will succeed but the subsequent call to
- H5Dcreate
- will fail; the conflict is detected only when the property list
- is used.
-
- To achieve optimal performance for SZIP compression,
- it is recommended that a chunk's fastest-changing dimension
- be equal to N times pixels_per_block
- where N is the maximum number of blocks per scan line
- allowed by the SZIP library.
- In the current version of SZIP, N is set to 128.
-
- H5Pset_szip
will fail if SZIP encoding is
- disabled in the available copy of the SZIP library.
-
- H5Zget_filter_info
can be employed
- to avoid such a failure.
-
hid_t plist |
- IN: Dataset creation property list - identifier. |
unsigned int options_mask |
- IN: A bit-mask conveying the desired SZIP options.
- Valid values are H5_SZIP_EC_OPTION_MASK
- and H5_SZIP_NN_OPTION_MASK . |
unsigned int pixels_per_block |
- IN: The number of pixels or data elements in each data block. |
- In non-HDF5 applications, SZIP typically requires that the - user application supply additional parameters: -
pixels_in_object
,
- the number of pixels in the object to be compressed
- bits_per_pixel
,
- the number of bits per pixel
- pixels_per_scanline
,
- the number of pixels per scan line
-
- These values need not be independently supplied in the HDF5
- environment as they are derived from the datatype and dataspace,
- which are already known.
- In particular, HDF5 sets
- pixels_in_object
to the number of elements in a chunk
- and bits_per_pixel
to the size of the element or
- pixel datatype.
- The following algorithm is used to set
- pixels_per_scanline
:
-
pixels_per_scanline
to
- 128 times pixels_per_block
.
- pixels_per_block
,
- set pixels_per_scanline
to the minimum of
- size and 128 times pixels_per_block
.
- pixels_per_block
- but greater than the number elements in the chunk,
- set pixels_per_scanline
to the minimum of
- the number elements in the chunk and
- 128 times pixels_per_block
.
-
- The HDF5 datatype may have precision that is less than the
- full size of the data element, e.g., an 11-bit integer can be
- defined using
- H5Tset_precision
.
- To a certain extent, SZIP can take advantage of the
- precision of the datatype to improve compression:
-
If the HDF5 datatype size is 24 bits or smaller and the bit offset
- of the datatype is zero (see H5Tset_offset
- or H5Tget_offset),
- the data is in the lowest N bits of the data element.
- In this case, the SZIP bits_per_pixel
- is set to the precision
- of the HDF5 datatype.
- If the bit offset is not zero, bits_per_pixel
- will be set to the number of bits in the full size of the data
- element.
- If the datatype precision is 25 to 32 bits, bits_per_pixel
- will be set to 32.
- If the precision is 33 to 64 bits, bits_per_pixel
- will be set to 64.
-
- HDF5 always modifies the options mask provided by the user
- to set up usage of RAW_OPTION_MASK
,
- ALLOW_K13_OPTION_MASK
, and one of
- LSB_OPTION_MASK
or MSB_OPTION_MASK
,
- depending on the endianness of the datatype.
-
-
-SUBROUTINE h5pset_szip_f(prp_id, options_mask, pixels_per_block, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id - ! Dataset creation property list identifier - INTEGER, INTENT(IN) :: options_mask - ! A bit-mask conveying the desired - ! SZIP options - ! Current valid values in Fortran are: - ! H5_SZIP_EC_OM_F - ! H5_SZIP_NN_OM_F - INTEGER, INTENT(IN) :: pixels_per_block - ! The number of pixels or data elements - ! in each data block - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pset_szip_f -- - - - -
H5Pset_userblock
(hid_t plist
,
- hsize_t size
- )
- H5Pset_userblock
sets the user block size of a
- file creation property list.
- The default user block size is 0; it may be set to any
- power of 2 equal to 512 or greater (512, 1024, 2048, etc.).
- hid_t plist |
- IN: Identifier of property list to modify. |
hsize_t size |
- IN: Size of the user-block in bytes. |
-SUBROUTINE h5pset_userblock_f (prp_id, size, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: prp_id ! Property list identifier - INTEGER(HSIZE_T), INTENT(IN) :: size ! Size of the user-block in bytes - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5pset_userblock_f -- - -
H5Pset_vlen_mem_manager
(hid_t plist
,
- H5MM_allocate_t alloc
,
- void *alloc_info
,
- H5MM_free_t free
,
- void *free_info
- )
- H5Dread
and H5Dvlen_reclaim
.
- H5Pset_vlen_mem_manager
sets the memory manager for
- variable-length datatype allocation in H5Dread
- and free in H5Dvlen_reclaim
.
-
- The alloc
and free
parameters
- identify the memory management routines to be used.
- If the user has defined custom memory management routines,
- alloc
and/or free
should be set to make
- those routine calls (i.e., the name of the routine is used as
- the value of the parameter);
- if the user prefers to use the system's malloc
- and/or free
, the alloc
and
- free
parameters, respectively, should be set to
- NULL.
-
- The prototypes for these user-defined functions would appear as follows:
-
- typedef void *(*H5MM_allocate_t
)(size_t size
,
- void *alloc_info
) ;
-
-
- typedef void (*H5MM_free_t
)(void *mem
,
- void *free_info
) ;
-
- The alloc_info
and free_info
parameters
- can be used to pass along any required information to
- the user's memory management routines.
-
- In summary, if the user has defined custom memory management
- routines, the name(s) of the routines are passed in the
- alloc
and free
parameters and the
- custom routines' parameters are passed in the
- alloc_info
and free_info
parameters.
- If the user wishes to use the system malloc
and
- free
functions, the alloc
and/or
- free
parameters are set to NULL
- and the alloc_info
and free_info
- parameters are ignored.
-
hid_t plist |
- IN: Identifier for the dataset transfer property list. |
H5MM_allocate_t alloc |
- IN: User's allocate routine, or NULL
- for system malloc . |
void *alloc_info |
- IN: Extra parameter for user's allocation routine.
- - Contents are ignored if preceding parameter is - NULL . |
H5MM_free_t free |
- IN: User's free routine, or NULL
- for system free . |
void *free_info |
- IN: Extra parameter for user's free routine.
- - Contents are ignored if preceding parameter is - NULL . |
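User-defined routines matching the prototypes above can be sketched in plain C. `counting_alloc`, `counting_free`, and `vlen_mem_demo` are hypothetical illustrations; the typedefs are reproduced from the prototypes quoted above rather than taken from the HDF5 headers, and the library (not the application) would normally invoke the routines during H5Dread and H5Dvlen_reclaim.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Typedefs reproduced from the prototypes above; in a real program
 * these come from the HDF5 headers. */
typedef void *(*H5MM_allocate_t)(size_t size, void *alloc_info);
typedef void (*H5MM_free_t)(void *mem, void *free_info);

/* Custom routines that count calls through the *_info pointers,
 * standing in for whatever bookkeeping an application needs. */
static void *counting_alloc(size_t size, void *alloc_info)
{
    (*(int *)alloc_info)++;        /* record one allocation */
    return malloc(size);
}

static void counting_free(void *mem, void *free_info)
{
    (*(int *)free_info)++;         /* record one deallocation */
    free(mem);
}

/* Exercise the routines directly, the way the library would. */
int vlen_mem_demo(void)
{
    int n_alloc = 0, n_free = 0;
    H5MM_allocate_t alloc_fn = counting_alloc;
    H5MM_free_t     free_fn  = counting_free;

    void *buf = alloc_fn(64, &n_alloc);
    free_fn(buf, &n_free);
    return n_alloc == 1 && n_free == 1;
}
```

In a real application, `counting_alloc` and `&n_alloc` would be passed as the `alloc` and `alloc_info` parameters of H5Pset_vlen_mem_manager, and likewise for the free pair.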
H5Punregister
(
- H5P_class_t class
,
- const char *name
- )
-
- H5Punregister
removes a property from a
- property list class.
-
- - Future property lists created of that class will not contain - this property; - existing property lists containing this property are not affected. - -
H5P_class_t class |
- IN: Property list class from which to remove - permanent property |
const char *name |
- IN: Name of property to remove |
-SUBROUTINE h5punregister_f(class, name, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: class ! Property list class identifier - CHARACTER(LEN=*), INTENT(IN) :: name ! Name of property to remove - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5punregister_f -- - -
-HDF5 documents and links -Introduction to HDF5 -HDF5 User Guide - - |
-
-And in this document, the
-HDF5 Reference Manual
- -H5IM -H5LT -H5PT -H5TB - -H5 -H5A -H5D -H5E -H5F -H5G -H5I -H5P - -H5R -H5S -H5T -H5Z -Tools -Datatypes - |
-HDF5 documents and links -Introduction to HDF5 -HDF5 User Guide - - |
-
-And in this document, the
-HDF5 Reference Manual
- -H5IM -H5LT -H5PT -H5TB - -H5 -H5A -H5D -H5E -H5F -H5G -H5I -H5P - -H5R -H5S -H5T -H5Z -Tools -Datatypes - |
-The C Interfaces: - -
-
|
-
|
-
|
- -Alphabetical Listing - -
-
|
-
- - - |
-
|
-
- - - |
-
|
-
-
| - - |
-
|
- - - - - -
H5Rcreate
(void *ref
,
- hid_t loc_id
,
- const char *name
,
- H5R_type_t ref_type
,
- hid_t space_id
- )
-H5Rcreate
creates the reference, ref
,
- of the type specified in ref_type
, pointing to
- the object name
located at loc_id
.
-
- The HDF5 library maps the void type specified above
- for ref
to the type specified in ref_type
,
- which will be one of those appearing in the first column of
- the following table.
- The second column of the table lists the HDF5 constant associated
- with each reference type.
-
hdset_reg_ref_t | -H5R_DATASET_REGION |
- Dataset region reference |
hobj_ref_t | -H5R_OBJECT |
- Object reference |
- The parameters loc_id
and name
are
- used to locate the object.
-
- The parameter space_id
identifies the region
- to be pointed to for a dataset region reference.
- This parameter is unused with object references.
-
void *ref |
- OUT: Reference created by the function call. |
hid_t loc_id |
- IN: Location identifier used to locate the object being - pointed to. |
const char *name |
- IN: Name of object at location loc_id . |
H5R_type_t ref_type |
- IN: Type of reference. |
hid_t space_id |
- IN: Dataspace identifier with selection. - Used for dataset region references. |
To create an object reference -
-SUBROUTINE h5rcreate_f(loc_id, name, ref, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: loc_id ! Location identifier - CHARACTER(LEN=*), INTENT(IN) :: name ! Name of the object at location - ! specified by loc_id identifier - TYPE(hobj_ref_t_f), INTENT(OUT) :: ref ! Object reference - INTEGER, INTENT(OUT) :: hdferr ! Error code - -END SUBROUTINE h5rcreate_f -- - -
-SUBROUTINE h5rcreate_f(loc_id, name, space_id, ref, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: loc_id ! Location identifier - CHARACTER(LEN=*), INTENT(IN) :: name ! Name of the dataset at location - ! specified by loc_id identifier - INTEGER(HID_T), INTENT(IN) :: space_id ! Dataset's dataspace identifier - TYPE(hdset_reg_ref_t_f), INTENT(OUT) :: ref ! Dataset region reference - INTEGER, INTENT(OUT) :: hdferr ! Error code - -END SUBROUTINE h5rcreate_f -- - -
H5Rdereference
(hid_t dataset
,
- H5R_type_t ref_type
,
- void *ref
- )
-H5Rdereference
- opens the object pointed to by the reference ref and returns an identifier for it.
-
- The parameter ref_type
specifies the reference type
- of ref
.
- ref_type
may contain either of the following values:
-
H5R_OBJECT
(0
)
- H5R_DATASET_REGION
(1
)
- hid_t dataset |
- IN: Dataset containing reference object. |
H5R_type_t ref_type |
- IN: The reference type of ref . |
void *ref |
- IN: Reference to open. |
To dereference an object -
-SUBROUTINE h5rdereference_f(dset_id, ref, obj_id, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: dset_id ! Dataset identifier - TYPE(hobj_ref_t_f), INTENT(IN) :: ref ! Object reference - INTEGER(HID_T), INTENT(OUT) :: obj_id ! Object identifier - INTEGER, INTENT(OUT) :: hdferr ! Error code - -END SUBROUTINE h5rdereference_f -- - -
-SUBROUTINE h5rdereference_f(dset_id, ref, obj_id, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: dset_id ! Dataset identifier - TYPE(hdset_reg_ref_t_f), INTENT(IN) :: ref ! Object reference - INTEGER(HID_T), INTENT(OUT) :: obj_id ! Object identifier - INTEGER, INTENT(OUT) :: hdferr ! Error code - -END SUBROUTINE h5rdereference_f -- - -
H5Rget_obj_type
(hid_t id
,
- H5R_type_t ref_type
,
- void *ref
- )
-ref_type
,
- and a reference to an object, ref
,
- H5Rget_obj_type
- returns the type of the referenced object.
-
- Valid object reference types, to pass in as ref_type
,
- include the following:
-
- H5R_OBJECT | - Reference is an object reference. - | |
- H5R_DATASET_REGION | - Reference is a dataset region reference. - |
- Valid object type return values include the following: -
- H5G_LINK | - Object is a symbolic link. - | |
- H5G_GROUP | - Object is a group. - | |
- H5G_DATASET | - Object is a dataset. - | |
- H5G_TYPE | - Object is a named datatype. - |
hid_t id , |
- IN: The dataset containing the reference object or - the location identifier of the object that the - dataset is located within. |
H5R_type_t ref_type |
- IN: Type of reference to query. |
void *ref |
- IN: Reference to query. |
H5Gpublic.h
if successful;
- otherwise returns H5G_UNKNOWN
.
--SUBROUTINE h5rget_object_type_f(dset_id, ref, obj_type, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: dset_id ! Dataset identifier - TYPE(hobj_ref_t_f), INTENT(IN) :: ref ! Object reference - INTEGER, INTENT(OUT) :: obj_type ! Object type - ! H5G_UNKNOWN_F (-1) - ! H5G_LINK_F 0 - ! H5G_GROUP_F 1 - ! H5G_DATASET_F 2 - ! H5G_TYPE_F 3 - INTEGER, INTENT(OUT) :: hdferr ! Error code - -END SUBROUTINE h5rget_object_type_f -- - -
H5Rget_region
(hid_t dataset
,
- H5R_type_t ref_type
,
- void *ref
- )
-ref
,
- H5Rget_region
creates a copy of the dataspace
- of the dataset pointed to and defines a selection in the copy
- which is the region pointed to.
-
- The parameter ref_type
specifies the reference type
- of ref
.
- ref_type
may contain the following value:
-
H5R_DATASET_REGION
(1
)
- hid_t dataset |
- IN: Dataset containing reference object. |
H5R_type_t ref_type |
- IN: The reference type of ref . |
void *ref |
- IN: Reference to open. |
-SUBROUTINE h5rget_region_f(dset_id, ref, space_id, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: dset_id ! Dataset identifier - TYPE(hdset_reg_ref_t_f), INTENT(IN) :: ref ! Dataset region reference - INTEGER(HID_T), INTENT(OUT) :: space_id ! Space identifier - INTEGER, INTENT(OUT) :: hdferr ! Error code - -END SUBROUTINE h5rget_region_f -- - -
-HDF5 documents and links -Introduction to HDF5 -HDF5 User Guide - - |
-
-And in this document, the
-HDF5 Reference Manual
- -H5IM -H5LT -H5PT -H5TB - -H5 -H5A -H5D -H5E -H5F -H5G -H5I -H5P - -H5R -H5S -H5T -H5Z -Tools -Datatypes - |
-HDF5 documents and links -Introduction to HDF5 -HDF5 User Guide - - |
-
-And in this document, the
-HDF5 Reference Manual
- -H5IM -H5LT -H5PT -H5TB - -H5 -H5A -H5D -H5E -H5F -H5G -H5I -H5P - -H5R -H5S -H5T -H5Z -Tools -Datatypes - |
-The C Interfaces: - -
H5Sclose
(hid_t space_id
- )
-H5Sclose
releases a dataspace.
- Further access through the dataspace identifier is illegal.
- Failure to release a dataspace with this call will
- result in resource leaks.
-hid_t space_id |
- Identifier of dataspace to release. |
-SUBROUTINE h5sclose_f(space_id, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: space_id ! Dataspace identifier - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5sclose_f -- - -
H5Scopy
(hid_t space_id
- )
-H5Scopy
creates a new dataspace which is an exact
- copy of the dataspace identified by space_id
.
- The dataspace identifier returned from this function should be
- released with H5Sclose
or resource leaks will occur.
-hid_t space_id |
- Identifier of dataspace to copy. |
-SUBROUTINE h5scopy_f(space_id, new_space_id, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: space_id ! Dataspace identifier - INTEGER(HID_T), INTENT(OUT) :: new_space_id ! Identifier of dataspace copy - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5scopy_f -- - -
H5Screate
(H5S_class_t type
)
-H5Screate
creates a new dataspace of a particular
- type
.
- The types currently supported are H5S_SCALAR
and
- H5S_SIMPLE
;
- others are planned to be added later.
-H5S_class_t type |
- The type of dataspace to be created. |
-SUBROUTINE h5screate_f(classtype, space_id, hdferr) - IMPLICIT NONE - INTEGER, INTENT(IN) :: classtype ! The type of the dataspace - ! to be created. Possible values - ! are: - ! H5S_SCALAR_F - ! H5S_SIMPLE_F - INTEGER(HID_T), INTENT(OUT) :: space_id ! Dataspace identifier - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5screate_f -- - -
H5Screate_simple
(int rank
,
- const hsize_t * dims
,
- const hsize_t * maxdims
- )
-H5Screate_simple
creates a new simple dataspace
- and opens it for access.
-
- rank
is the number of dimensions used in the dataspace.
-
- dims
is an array specifying the size of each dimension
- of the dataset while
- maxdims
is an array specifying the upper limit on
- the size of each dimension.
- maxdims
may be the null pointer, in which case the
- upper limit is the same as dims
.
-
- If an element of maxdims
is
- H5S_UNLIMITED
, (-1
),
- the maximum size of the corresponding dimension is unlimited.
- Otherwise, no element of maxdims
should be
- smaller than the corresponding element of dims
.
-
- The dataspace identifier returned from this function must be
- released with H5Sclose
or resource leaks will occur.
-
int rank |
- Number of dimensions of dataspace. |
const hsize_t * dims |
- An array of the size of each dimension. |
const hsize_t * maxdims |
- An array of the maximum size of each dimension. |
-SUBROUTINE h5screate_simple_f(rank, dims, space_id, hdferr, maxdims) - IMPLICIT NONE - INTEGER, INTENT(IN) :: rank ! Number of dataspace dimensions - INTEGER(HSIZE_T), INTENT(IN) :: dims(*) ! Array with the dimension sizes - INTEGER(HID_T), INTENT(OUT) :: space_id ! Dataspace identifier - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure - INTEGER(HSIZE_T), OPTIONAL, INTENT(IN) :: maxdims(*) - ! Array with the maximum - ! dimension sizes -END SUBROUTINE h5screate_simple_f -- - -
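The dims/maxdims rule above can be sketched in plain C. `extent_ok` and `MY_UNLIMITED` are illustrative stand-ins (H5S_UNLIMITED is the real constant); this is not an HDF5 call.

```c
#include <assert.h>
#include <stddef.h>

#define MY_UNLIMITED ((size_t)-1)   /* stand-in for H5S_UNLIMITED (-1) */

/* Sketch of the extent rule described above: maxdims may be NULL
 * (upper limit equals dims), and no finite maxdims entry may be
 * smaller than the matching dims entry. */
static int extent_ok(int rank, const size_t *dims, const size_t *maxdims)
{
    if (maxdims == NULL)
        return 1;                          /* upper limit same as dims */
    for (int d = 0; d < rank; d++)
        if (maxdims[d] != MY_UNLIMITED && maxdims[d] < dims[d])
            return 0;                      /* invalid extent */
    return 1;
}
```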
H5Sextent_copy
(hid_t dest_space_id
,
- hid_t source_space_id
- )
-H5Sextent_copy
copies the extent from
- source_space_id
to dest_space_id
.
- This action may change the type of the dataspace.
-hid_t dest_space_id |
- IN: The identifier for the dataspace to which - the extent is copied. |
hid_t source_space_id |
- IN: The identifier for the dataspace from which - the extent is copied. |
-SUBROUTINE h5sextent_copy_f(dest_space_id, source_space_id, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: dest_space_id ! Identifier of destination - ! dataspace - INTEGER(HID_T), INTENT(IN) :: source_space_id ! Identifier of source - ! dataspace - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5sextent_copy_f -- - -
H5Sget_select_bounds
(hid_t space_id
,
- hsize_t *start
,
- hsize_t *end
- )
-H5Sget_select_bounds
retrieves the coordinates of
- the bounding box containing the current selection and places
- them into user-supplied buffers.
-
- The start and end buffers must be large
- enough to hold a number of coordinates equal to the rank of the dataspace.
-
- The bounding box exactly contains the selection. - I.e., if a 2-dimensional element selection is currently - defined as containing the points (4,5), (6,8), and (10,7), - then the bounding box will be (4, 5), (10, 8). -
- The bounding box calculation includes the current offset of the - selection within the dataspace extent. -
- Calling this function on a none
selection will
- return FAIL
.
-
hid_t space_id |
- IN: Identifier of dataspace to query. |
hsize_t *start |
- OUT: Starting coordinates of the bounding box. |
hsize_t *end |
- OUT: Ending coordinates of the bounding box, - i.e., the coordinates of the diagonally opposite corner. |
-SUBROUTINE h5sget_select_bounds_f(space_id, start, end, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: space_id - ! Dataspace identifier - INTEGER(HSIZE_T), DIMENSION(*), INTENT(OUT) :: start - ! Starting coordinates of the bounding box - INTEGER(HSIZE_T), DIMENSION(*), INTENT(OUT) :: end - ! Ending coordinates of the bounding box, - ! i.e., the coordinates of the diagonally - ! opposite corner - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5sget_select_bounds_f -- - -
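The bounding-box rule above can be sketched in plain C, using the document's own example points (4,5), (6,8), and (10,7). `select_bounds` is a hypothetical helper for a rank-2 selection, not an HDF5 call.

```c
#include <assert.h>
#include <stddef.h>

/* For each dimension, start[] takes the minimum coordinate and end[]
 * the maximum over all selected points, so the box exactly contains
 * the selection. */
static void select_bounds(const unsigned (*points)[2], size_t npoints,
                          unsigned start[2], unsigned end[2])
{
    for (int d = 0; d < 2; d++) {
        start[d] = points[0][d];
        end[d]   = points[0][d];
        for (size_t i = 1; i < npoints; i++) {
            if (points[i][d] < start[d]) start[d] = points[i][d];
            if (points[i][d] > end[d])   end[d]   = points[i][d];
        }
    }
}
```

For the example points this yields the bounding box (4,5), (10,8), as stated above.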
H5Sget_select_elem_npoints
(hid_t space_id
- )
-H5Sget_select_elem_npoints
returns
- the number of element points in the current dataspace selection.
-hid_t space_id |
- IN: Identifier of dataspace to query. |
-SUBROUTINE h5sget_select_elem_npoints_f(space_id, num_points, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: space_id ! Dataspace identifier - INTEGER, INTENT(OUT) :: num_points ! Number of points in - ! the current elements selection - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5sget_select_elem_npoints_f -- - -
H5Sget_select_elem_pointlist
(hid_t space_id
,
- hsize_t startpoint
,
- hsize_t numpoints
,
- hsize_t *buf
- )
-H5Sget_select_elem_pointlist
returns the list of
- element points in the current dataspace selection. Starting with
- the startpoint
-th point in the list of points,
- numpoints
points are put into the user's buffer.
- If the user's buffer fills up before numpoints
- points are inserted, the buffer will contain only as many
- points as fit.
-
- The element point coordinates have the same dimensionality (rank)
- as the dataspace they are located within. The list of element points
- is formatted as follows:
-
- <coordinate>, followed by
-
- the next coordinate,
-
- etc.
-
- until all of the selected element points have been listed.
-
- The points are returned in the order they will be iterated through - when the selection is read/written from/to disk. -
hid_t space_id |
- IN: Dataspace identifier of selection to query. |
hsize_t startpoint |
- IN: Element point to start with. |
hsize_t numpoints |
- IN: Number of element points to get. |
hsize_t *buf |
- OUT: List of element points selected. |
-SUBROUTINE h5sget_select_elem_pointlist_f(space_id, startpoint, num_points, - buf, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: space_id ! Dataspace identifier - INTEGER(HSIZE_T), INTENT(IN) :: startpoint ! Element point to start with - INTEGER, INTENT(OUT) :: num_points ! Number of points to get in - ! the current element selection - INTEGER(HSIZE_T), DIMENSION(*), INTENT(OUT) :: buf - ! List of points selected - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5sget_select_elem_pointlist_f -- - -
H5Sget_select_hyper_blocklist
(hid_t space_id
,
- hsize_t startblock
,
- hsize_t numblocks
,
- hsize_t *buf
- )
-H5Sget_select_hyper_blocklist
returns a list of
- the hyperslab blocks currently selected. Starting with the
- startblock
-th block in the list of blocks,
- numblocks
blocks are put into the user's buffer.
- If the user's buffer fills up before numblocks
- blocks are inserted, the buffer will contain only as many
- blocks as fit.
-
- The block coordinates have the same dimensionality (rank)
- as the dataspace they are located within. The list of blocks
- is formatted as follows:
-
- <"start" coordinate>, immediately followed by
-
- <"opposite" corner coordinate>, followed by
-
- the next "start" and "opposite" coordinates,
-
- etc.
-
- until all of the selected blocks have been listed.
-
- No guarantee is implied as to the order in which blocks are listed. -
hid_t space_id |
- IN: Dataspace identifier of selection to query. |
hsize_t startblock |
- IN: Hyperslab block to start with. |
hsize_t numblocks |
- IN: Number of hyperslab blocks to get. |
hsize_t *buf |
- OUT: List of hyperslab blocks selected. |
-SUBROUTINE h5sget_select_hyper_blocklist_f(space_id, startblock, num_blocks, - buf, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: space_id ! Dataspace identifier - INTEGER(HSIZE_T), INTENT(IN) :: startblock ! Hyperslab block to start with - INTEGER, INTENT(OUT) :: num_blocks ! Number of hyperslab blocks to - ! get in the current hyperslab - ! selection - INTEGER(HSIZE_T), DIMENSION(*), INTENT(OUT) :: buf - ! List of hyperslab blocks selected - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5sget_select_hyper_blocklist_f -- - -
H5Sget_select_hyper_nblocks
(hid_t space_id
- )
-H5Sget_select_hyper_nblocks
returns the
- number of hyperslab blocks in the current dataspace selection.
-hid_t space_id |
- IN: Identifier of dataspace to query. |
-SUBROUTINE h5sget_select_hyper_nblocks_f(space_id, num_blocks, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: space_id ! Dataspace identifier - INTEGER, INTENT(OUT) :: num_blocks ! Number of hyperslab blocks in - ! the current hyperslab selection - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5sget_select_hyper_nblocks_f -- - -
H5Sget_select_npoints
(hid_t space_id
)
-H5Sget_select_npoints
determines the number of elements
- in the current selection of a dataspace.
-hid_t space_id |
- Dataspace identifier. |
-SUBROUTINE h5sget_select_npoints_f(space_id, npoints, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: space_id ! Dataspace identifier - INTEGER(HSSIZE_T), INTENT(OUT) :: npoints ! Number of elements in the - ! selection - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5sget_select_npoints_f -- - -
H5Sget_select_type
(hid_t space_id
)
-H5Sget_select_type
retrieves the
- type of selection currently defined for the dataspace
- space_id
.
-hid_t space_id |
- Dataspace identifier. |
H5S_sel_type
,
- if successful.
- Valid return values are as follows:
-
- H5S_SEL_NONE
- | - No selection is defined. - |
- H5S_SEL_POINTS
- | - A sequence of points is selected. - |
- H5S_SEL_HYPERSLABS
- | - A hyperslab or compound hyperslab is selected. - |
- H5S_SEL_ALL
- | - The entire dataset is selected. - |
-SUBROUTINE h5sget_select_type_f(space_id, type, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: space_id ! Dataspace identifier - INTEGER, INTENT(OUT) :: type ! Selection type - ! Valid values are: - ! H5S_SEL_ERROR_F - ! H5S_SEL_NONE_F - ! H5S_SEL_POINTS_F - ! H5S_SEL_HYPERSLABS_F - ! H5S_SEL_ALL_F - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5sget_select_type_f -- - -
H5Sget_simple_extent_dims
(hid_t space_id
,
- hsize_t *dims
,
- hsize_t *maxdims
- )
-H5Sget_simple_extent_dims
returns the size and maximum sizes
- of each dimension of a dataspace through the dims
- and maxdims
parameters.
-
- Either or both of dims
and maxdims
- may be NULL.
-
- If a value in the returned array maxdims
is
- H5S_UNLIMITED
(-1),
- the maximum size of that dimension is unlimited.
-
hid_t space_id |
- IN: Identifier of the dataspace object to query |
hsize_t *dims |
- OUT: Pointer to array to store the size of each dimension. |
hsize_t *maxdims |
- OUT: Pointer to array to store the maximum size of each dimension. |
-SUBROUTINE h5sget_simple_extent_dims_f(space_id, dims, maxdims, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: space_id ! Dataspace identifier - INTEGER(HSIZE_T), DIMENSION(*), INTENT(OUT) :: dims - ! Array to store dimension sizes - INTEGER(HSIZE_T), DIMENSION(*), INTENT(OUT) :: maxdims - ! Array to store max dimension sizes - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! Dataspace rank on success - ! and -1 on failure -END SUBROUTINE h5sget_simple_extent_dims_f -- - -
H5Sget_simple_extent_ndims
(hid_t space_id
)
-H5Sget_simple_extent_ndims
determines the dimensionality (or rank)
- of a dataspace.
-hid_t space_id |
- Identifier of the dataspace |
-SUBROUTINE h5sget_simple_extent_ndims_f(space_id, rank, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: space_id ! Dataspace identifier - INTEGER, INTENT(OUT) :: rank ! Number of dimensions - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5sget_simple_extent_ndims_f -- - -
H5Sget_simple_extent_npoints
(hid_t space_id
)
-H5Sget_simple_extent_npoints
determines the number of elements
- in a dataspace. For example, a simple 3-dimensional dataspace
- with dimensions 2, 3, and 4 would have 24 elements.
-hid_t space_id |
- ID of the dataspace object to query |
-SUBROUTINE h5sget_simple_extent_npoints_f(space_id, npoints, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: space_id ! Dataspace identifier - INTEGER(HSIZE_T), INTENT(OUT) :: npoints ! Number of elements in dataspace - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5sget_simple_extent_npoints_f -- - -
H5Sget_simple_extent_type
(hid_t space_id
)
-H5Sget_simple_extent_type
queries a dataspace to determine the
- current class of a dataspace.
-
- The function returns a class name, one of the following:
- H5S_SCALAR
,
- H5S_SIMPLE
, or
- H5S_NONE
.
-
hid_t space_id |
- Dataspace identifier. |
-SUBROUTINE h5sget_simple_extent_type_f(space_id, classtype, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: space_id ! Dataspace identifier - INTEGER, INTENT(OUT) :: classtype ! Class type - ! Possible values are: - ! H5S_NO_CLASS_F - ! H5S_SCALAR_F - ! H5S_SIMPLE_F - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5sget_simple_extent_type_f -- - -
H5Sis_simple
(hid_t space_id
)
-H5Sis_simple
determines whether a dataspace is
- a simple dataspace. [Currently, all dataspace objects are simple
- dataspaces; complex dataspace support will be added in the future.]
-hid_t space_id |
- Identifier of the dataspace to query |
TRUE
,
- or 0
(zero), for FALSE
.
- Otherwise returns a negative value.
--SUBROUTINE h5sis_simple_f(space_id, flag, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: space_id ! Dataspace identifier - LOGICAL, INTENT(OUT) :: flag ! Flag, indicates if dataspace - ! is simple or not: - ! TRUE or FALSE - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5sis_simple_f -- - -
H5Soffset_simple
(hid_t space_id
,
- const hssize_t *offset
- )
-H5Soffset_simple
sets the offset of a
- simple dataspace space_id
. The offset
- array must be the same number of elements as the number of
- dimensions for the dataspace. If the offset
- array is set to NULL, the offset for the dataspace
- is reset to 0.
- - This function allows the same shaped selection to be moved - to different locations within a dataspace without requiring it - to be redefined. -
hid_t space_id |
- IN: The identifier for the dataspace object to reset. |
const hssize_t *offset |
- IN: The offset at which to position the selection. |
-SUBROUTINE h5soffset_simple_f(space_id, offset, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: space_id ! Dataspace identifier - INTEGER(HSSIZE_T), DIMENSION(*), INTENT(IN) :: offset - ! The offset at which to position - ! the selection - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5soffset_simple_f -- - -
H5Sselect_all
(hid_t space_id
)
-H5Sselect_all
selects the entire extent
- of the dataspace space_id
.
-
- More specifically, H5Sselect_all
selects
- the special H5S_SELECT_ALL region for the dataspace
- space_id
. H5S_SELECT_ALL selects the
- entire dataspace for any dataspace it is applied to.
-
hid_t space_id |
- IN: The identifier for the dataspace in which the - selection is being made. |
-SUBROUTINE h5sselect_all_f(space_id, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: space_id ! Dataspace identifier - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5sselect_all_f -- - -
H5Sselect_elements
(hid_t space_id
,
- H5S_seloper_t op
,
- const size_t num_elements
,
- const hsize_t *coord
[ ]
- )
-H5Sselect_elements
selects array elements to be
- included in the selection for the space_id
dataspace.
-
- The number of elements selected is set in the
- num_elements
parameter.
-
- The coord
array is a two-dimensional array of
- size dataspace_rank
by num_elements
- containing a list of zero-based values specifying the
- coordinates in the dataset of the selected elements.
- The order of the element coordinates in the
- coord
array specifies the order in which
- the array elements are iterated through when I/O is performed.
- Duplicate coordinate locations are not checked for.
-
- The selection operator op
determines how the
- new selection is to be combined with the previously existing
- selection for the dataspace.
- The following operators are supported:
-
- H5S_SELECT_SET
| - Replaces the existing selection with the parameters from - this call. - Overlapping blocks are not supported with this operator. - |
- H5S_SELECT_APPEND
- | - Adds the new selection following the last element of the - existing selection. - |
- H5S_SELECT_PREPEND
- | - Adds the new selection preceding the first element of the - existing selection. - |
hid_t space_id |
- Identifier of the dataspace. |
H5S_seloper_t op |
- Operator specifying how the new selection is to be - combined with the existing selection for the dataspace. |
const size_t num_elements |
- Number of elements to be selected. |
const hsize_t *coord [ ] |
- A 2-dimensional array of 0-based values specifying the - coordinates of the elements being selected. |
-SUBROUTINE h5sselect_elements_f(space_id, op, num_elements, - coord, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: space_id ! Dataspace identifier - INTEGER, INTENT(IN) :: op ! Flag, valid values are: - ! H5S_SELECT_SET_F - ! H5S_SELECT_OR_F - INTEGER, INTENT(IN) :: num_elements ! Number of elements to be selected - INTEGER(HSIZE_T), DIMENSION(*,*), INTENT(IN) :: coord - ! Array with the coordinates - ! of the selected elements: - ! coord(num_elements, rank)- -
- INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5sselect_elements_f -- - -
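The coord layout described above can be sketched in plain C. `linear_offset` is a hypothetical helper assuming a rank-2 dataspace and row-major (C order) storage; element i of the selection occupies coord[i*rank] through coord[i*rank + rank - 1], and elements are visited in the order they appear in coord.

```c
#include <assert.h>
#include <stddef.h>

/* Row-major linear offset of the i-th selected element, reading its
 * coordinates out of the flattened coord array described above. */
static size_t linear_offset(const size_t *coord, size_t i,
                            size_t rank, const size_t *dims)
{
    size_t off = 0;
    for (size_t d = 0; d < rank; d++)        /* C-order accumulation */
        off = off * dims[d] + coord[i * rank + d];
    return off;
}
```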
H5Sselect_hyperslab
(hid_t space_id
,
- H5S_seloper_t op
,
- const hsize_t *start
,
- const hsize_t *stride
,
- const hsize_t *count
,
- const hsize_t *block
- )
-H5Sselect_hyperslab
selects a hyperslab region
- to add to the current selected region for the dataspace
- specified by space_id
.
-
- The start
, stride
, count
,
- and block
arrays must be the same size as the rank
- of the dataspace.
-
- The selection operator op
determines how the new
- selection is to be combined with the already existing selection
- for the dataspace.
- The following operators are supported:
-
- H5S_SELECT_SET
- | - Replaces the existing selection with the parameters from this call. - Overlapping blocks are not supported with this operator. - |
- H5S_SELECT_OR
- | - Adds the new selection to the existing selection. - - (Binary OR) - |
- H5S_SELECT_AND
- | - Retains only the overlapping portions of the new selection and - the existing selection. - - (Binary AND) - |
- H5S_SELECT_XOR
- | - Retains only the elements that are members of the new selection or - the existing selection, excluding elements that are members of - both selections. - - (Binary exclusive-OR, XOR) - |
- H5S_SELECT_NOTB
- | - Retains only elements of the existing selection that are not in - the new selection. - |
- H5S_SELECT_NOTA
- | - Retains only elements of the new selection that are not in - the existing selection. - |
- The start
array determines the starting coordinates
- of the hyperslab to select.
-
- The stride
array chooses array locations
- from the dataspace with each value in the stride
- array determining how many elements to move in each dimension.
- Setting a value in the stride
array to 1 moves to
- each element in that dimension of the dataspace; setting a value
- of 2
in a location in the stride
array
- moves to every other element in that dimension of the dataspace.
- In other words, the stride
determines the
- number of elements to move from the start
location
- in each dimension.
- Stride values of 0
are not allowed.
- If the stride
parameter is NULL
,
- a contiguous hyperslab is selected (as if each value in the
- stride
array were set to all 1's).
-
- The count
array determines how many blocks to
- select from the dataspace, in each dimension.
-
- The block
array determines
- the size of the element block selected from the dataspace.
- If the block
parameter is set to NULL
,
- the block size defaults to a single element in each dimension
- (as if the block
array were set to all
- 1
's).
-
- For example, in a 2-dimensional dataspace, setting
- start
to [1,1],
- stride
to [4,4],
- count
to [3,7], and
- block
to [2,2]
- selects 21 2x2 blocks of array elements starting with
- location (1,1) and selecting blocks at locations
- (1,1), (5,1), (9,1), (1,5), (5,5), etc.
-
- Regions selected with this function call default to C order - iteration when I/O is performed. -
hid_t space_id |
- IN: Identifier of dataspace selection to modify |
H5S_seloper_t op |
- IN: Operation to perform on current selection. |
const hsize_t *start |
- IN: Offset of start of hyperslab |
const hsize_t *count |
- IN: Number of blocks included in hyperslab. |
const hsize_t *stride |
- IN: Hyperslab stride. |
const hsize_t *block |
- IN: Size of block in hyperslab. |
-SUBROUTINE h5sselect_hyperslab_f(space_id, operator, start, count, &
-                                 hdferr, stride, block)
-  IMPLICIT NONE
-  INTEGER(HID_T), INTENT(IN) :: space_id  ! Dataspace identifier
-  INTEGER, INTENT(IN) :: operator         ! Flag, valid values are:
-                                          !   H5S_SELECT_SET_F
-                                          !   H5S_SELECT_OR_F
-  INTEGER(HSIZE_T), DIMENSION(*), INTENT(IN) :: start
-                                          ! Starting coordinates of hyperslab
-  INTEGER(HSIZE_T), DIMENSION(*), INTENT(IN) :: count
-                                          ! Number of blocks to select
-                                          ! from dataspace
-  INTEGER, INTENT(OUT) :: hdferr          ! Error code
-                                          ! 0 on success and -1 on failure
-  INTEGER(HSIZE_T), DIMENSION(*), OPTIONAL, INTENT(IN) :: stride
-                                          ! Array of how many elements to
-                                          ! move in each direction
-  INTEGER(HSIZE_T), DIMENSION(*), OPTIONAL, INTENT(IN) :: block
-                                          ! Size of the element block
-END SUBROUTINE h5sselect_hyperslab_f
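The start/stride/count/block rules described above can be sketched in plain C. This is only an illustrative enumeration of the coordinates a 2-D hyperslab selects, not part of the HDF5 API:

```c
#include <stddef.h>

/* Illustrative sketch only (not part of the HDF5 API): enumerate the
 * 2-D element coordinates selected by start/stride/count/block,
 * following the rules described above.  Stores up to max (row, column)
 * pairs in out[][2] and returns the total number of selected elements. */
static size_t enumerate_hyperslab_2d(const size_t start[2],
                                     const size_t stride[2],
                                     const size_t count[2],
                                     const size_t block[2],
                                     size_t out[][2], size_t max)
{
    size_t n = 0;
    for (size_t i = 0; i < count[0]; i++)          /* block index, dim 0    */
        for (size_t bi = 0; bi < block[0]; bi++)   /* element within block  */
            for (size_t j = 0; j < count[1]; j++)  /* block index, dim 1    */
                for (size_t bj = 0; bj < block[1]; bj++) {
                    if (n < max) {
                        out[n][0] = start[0] + i * stride[0] + bi;
                        out[n][1] = start[1] + j * stride[1] + bj;
                    }
                    n++;
                }
    return n;
}
```

With the example parameters from the text (start [1,1], stride [4,4], count [3,7], block [2,2]) this enumerates 21 blocks of 4 elements each, 84 elements in total, beginning at (1,1).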
H5Sselect_none
(hid_t space_id
)
-H5Sselect_none
resets the selection region
- for the dataspace space_id
to include no elements.
-hid_t space_id |
- IN: The identifier for the dataspace in which the - selection is being reset. |
-SUBROUTINE h5sselect_none_f(space_id, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: space_id ! Dataspace identifier - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5sselect_none_f -- - -
H5Sselect_valid
(hid_t space_id
)
-H5Sselect_valid
verifies that the selection
- for the dataspace space_id
is within the extent
- of the dataspace if the current offset for the dataspace is used.
-hid_t space_id |
- The identifier for the dataspace being queried. |
TRUE
,
- if the selection is contained within the extent
- or 0
(zero), for FALSE
, if it is not.
- Returns a negative value on error conditions
- such as the selection or extent not being defined.
--SUBROUTINE h5sselect_valid_f(space_id, flag, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: space_id ! Dataspace identifier - LOGICAL, INTENT(OUT) :: flag ! TRUE if the selection is - ! contained within the extent, - ! FALSE otherwise. - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5sselect_valid_f -- - -
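The check H5Sselect_valid performs can be sketched for the hyperslab case in plain C. This is an illustrative bounds test under assumed simple-hyperslab semantics, not the HDF5 implementation:

```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative sketch only (not the HDF5 implementation): in the sense
 * of H5Sselect_valid, a hyperslab selection is valid when every selected
 * element, after the dataspace offset is applied, still falls inside the
 * extent in every dimension. */
static bool selection_within_extent(const long long offset[],
                                    const size_t start[],
                                    const size_t stride[],
                                    const size_t count[],
                                    const size_t block[],
                                    const size_t extent[],
                                    int rank)
{
    for (int d = 0; d < rank; d++) {
        /* lowest and highest coordinate touched in dimension d */
        long long lo = (long long)start[d] + offset[d];
        long long hi = (long long)(start[d]
                                   + (count[d] - 1) * stride[d]
                                   + block[d] - 1) + offset[d];
        if (lo < 0 || hi >= (long long)extent[d])
            return false;
    }
    return true;
}
```

For example, a selection covering a full 5x10 extent is valid with a zero offset but becomes invalid once any nonzero offset pushes its last element past the extent.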
H5Sset_extent_none
(hid_t space_id
)
-H5Sset_extent_none
removes the extent from
- a dataspace and sets the type to H5S_NO_CLASS.
-hid_t space_id |
- The identifier for the dataspace from which - the extent is to be removed. |
-SUBROUTINE h5sset_extent_none_f(space_id, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: space_id ! Dataspace identifier - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5sset_extent_none_f -- - -
H5Sset_extent_simple
(hid_t space_id
,
- int rank
,
- const hsize_t *current_size
,
- const hsize_t *maximum_size
- )
-H5Sset_extent_simple
sets or resets the size of
- an existing dataspace.
-
- rank
is the dimensionality, or number of
- dimensions, of the dataspace.
-
- current_size
is an array of size rank
- which contains the new size of each dimension in the dataspace.
- maximum_size
is an array of size rank
- which contains the maximum size of each dimension in the
- dataspace.
-
- Any previous extent is removed from the dataspace, the dataspace
- type is set to H5S_SIMPLE
, and the extent is set as
- specified.
-
hid_t space_id |
- Dataspace identifier. | -
int rank |
- Rank, or dimensionality, of the dataspace. | -
const hsize_t *current_size |
- Array containing current size of dataspace. | -
const hsize_t *maximum_size |
- Array containing maximum size of dataspace. |
-SUBROUTINE h5sset_extent_simple_f(space_id, rank, current_size, &
-                                  maximum_size, hdferr)
-  IMPLICIT NONE
-  INTEGER(HID_T), INTENT(IN) :: space_id  ! Dataspace identifier
-  INTEGER, INTENT(IN) :: rank             ! Dataspace rank
-  INTEGER(HSIZE_T), DIMENSION(rank), INTENT(IN) :: current_size
-                                          ! Array with the new sizes
-                                          ! of dimensions
-  INTEGER(HSIZE_T), DIMENSION(rank), INTENT(IN) :: maximum_size
-                                          ! Array with the new maximum
-                                          ! sizes of dimensions
-  INTEGER, INTENT(OUT) :: hdferr          ! Error code
-                                          ! 0 on success and -1 on failure
-END SUBROUTINE h5sset_extent_simple_f
-HDF5 documents and links -Introduction to HDF5 -HDF5 User Guide - - |
-
-And in this document, the
-HDF5 Reference Manual
- -H5IM -H5LT -H5PT -H5TB - -H5 -H5A -H5D -H5E -H5F -H5G -H5I -H5P - -H5R -H5S -H5T -H5Z -Tools -Datatypes - |
-The C Interfaces: - -
-General Datatype Operations
- - Enumeration Datatypes - |
-Atomic Datatype Properties
- |
- Array Datatypes
- - Compound Datatype Properties - - Variable-length Datatypes - - Opaque Datatypes - |
-The Datatype interface, H5T, provides a mechanism to describe the
- storage format of individual data points of a dataset and is
- designed so that new features can be added without disrupting
- applications that use the datatype interface. A dataset (the H5D
- interface) is composed of a collection of raw data points of
- homogeneous type organized according to the dataspace (the H5S
- interface).
-
-A datatype is a collection of datatype properties, all of - which can be stored on disk, and which when taken as a whole, - provide complete information for data conversion to or from that - datatype. The interface provides functions to set and query - properties of a datatype. - -
-A data point is an instance of a datatype, - which is an instance of a type class. We have defined - a set of type classes and properties which can be extended at a - later time. The atomic type classes are those which describe - types which cannot be decomposed at the datatype interface - level; all other classes are compound. - -
-See The Datatype Interface (H5T) -in the HDF5 User's Guide for further information, including a complete list of all supported datatypes. - - - - - -
H5Tarray_create
(
- hid_t base
,
- int rank
,
- const hsize_t dims[/*rank*/]
,
- const int perm[/*rank*/]
- )
-H5Tarray_create
creates a new array datatype object.
-
- base
is the datatype of every element of the array,
- i.e., of the number at each position in the array.
-
- rank
is the number of dimensions and the
- size of each dimension is specified in the array dims
.
- The value of rank
is currently limited to
- H5S_MAX_RANK
and must be greater than 0
- (zero).
- All dimension sizes specified in dims
must be greater
- than 0
(zero).
-
- The array perm
is designed to contain the dimension
- permutation, i.e. C versus FORTRAN array order.
-
- (The parameter perm
is currently unused and is not yet implemented.)
-
-
hid_t base |
- IN: Datatype identifier for the array base datatype. |
int rank |
- IN: Rank of the array. |
const hsize_t dims[/*rank*/] |
- IN: Size of each array dimension. |
const int perm[/*rank*/] |
- IN: Dimension permutation. - - (Currently not implemented.) |
-SUBROUTINE h5tarray_create_f(base_id, rank, dims, type_id, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: base_id ! Identifier of array base datatype - INTEGER, INTENT(IN) :: rank ! Rank of the array - INTEGER(HSIZE_T), DIMENSION(*), INTENT(IN) :: dims - ! Sizes of each array dimension - INTEGER(HID_T), INTENT(OUT) :: type_id ! Identifier of the array datatype - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5tarray_create_f -- - -
H5Tclose
(hid_t type_id
- )
-H5Tclose
releases a datatype. Further access
- through the datatype identifier is illegal. Failure to release
- a datatype with this call will result in resource leaks.
-hid_t type_id |
- Identifier of datatype to release. |
-SUBROUTINE h5tclose_f(type_id, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type_id ! Datatype identifier - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5tclose_f -- - -
H5Tcommit
(hid_t loc_id
,
- const char * name
,
- hid_t type
- )
-H5Tcommit
commits a transient datatype
- (not immutable) to a file, turning it into a named datatype.
- The loc_id
is either a file or group identifier
- which, when combined with name
, refers to a new
- named datatype.
-hid_t loc_id |
- IN: A file or group identifier. |
const char * name |
- IN: A datatype name. |
hid_t type |
- IN: A datatype identifier. |
-SUBROUTINE h5tcommit_f(loc_id, name, type_id, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: loc_id ! File or group identifier - CHARACTER(LEN=*), INTENT(IN) :: name ! Datatype name within file or group - INTEGER(HID_T), INTENT(IN) :: type_id ! Datatype identifier - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5tcommit_f -- - -
H5Tcommitted
(hid_t type
)
-H5Tcommitted
queries a type to determine whether
- the type specified by the type
identifier
- is a named type or a transient type. If this function returns
- a positive value, then the type is named (that is, it has been
- committed, perhaps by some other application). Datasets which
- return committed datatypes with H5Dget_type()
are
- able to share the datatype with other datasets in the same file.
-hid_t type |
- IN: Datatype identifier. |
TRUE
,
- if the datatype has been committed, or 0
(zero),
- for FALSE
, if the datatype has not been committed.
- Otherwise returns a negative value.
--SUBROUTINE h5tcommitted_f(type_id, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type_id ! Datatype identifier - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5tcommitted_f -- - -
H5Tconvert
(hid_t src_id
,
- hid_t dst_id
,
- size_t nelmts
,
- void *buf
,
- void *background
,
- hid_t plist_id
- )
-H5Tconvert
converts nelmts
elements
- from the type specified by the src_id
identifier
- to type dst_id
.
- The source elements are packed in buf
and on return
- the destination will be packed in buf
.
- That is, the conversion is performed in place.
- The optional background buffer is an array of nelmts
- values of destination type which are merged with the converted
- values to fill in cracks (for instance, background
- might be an array of structs with the a
and
- b
fields already initialized and the conversion
- of buf
supplies the c
and d
- field values).
-
- The parameter plist_id
contains the dataset transfer
- property list identifier which is passed to the conversion functions.
- As of Release 1.2, this parameter is only used to pass along the
- variable-length datatype custom allocation information.
-
hid_t src_id |
- Identifier for the source datatype. |
hid_t dst_id |
- Identifier for the destination datatype. |
size_t nelmts |
- Size of array buf . |
void *buf |
- Array containing pre- and post-conversion values. |
void *background |
- Optional background buffer. |
hid_t plist_id |
- Dataset transfer property list identifier. |
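The in-place behavior described above (source values packed in buf on entry, destination values packed in buf on return) can be sketched in plain C for one widening conversion. This is an illustrative sketch, not what H5Tconvert itself does internally:

```c
#include <string.h>
#include <stddef.h>

/* Illustrative sketch only: H5Tconvert works in place, so for a
 * widening conversion the buffer must be sized for the destination
 * type.  Converting from the last element backward keeps each source
 * value intact until it has been converted.  (H5Tconvert handles any
 * registered datatype pair; this shows only int -> double.) */
static void convert_int_to_double_in_place(void *buf, size_t nelmts)
{
    for (size_t i = nelmts; i-- > 0; ) {
        int v;
        double d;
        memcpy(&v, (const char *)buf + i * sizeof(int), sizeof v);
        d = (double)v;
        memcpy((char *)buf + i * sizeof(double), &d, sizeof d);
    }
}
```

Iterating backward matters: going forward, writing the first 8-byte double would overwrite the second 4-byte int before it was read.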
H5Tcopy
(hid_t type_id
)
-H5Tcopy
copies an existing datatype.
- The returned type is always transient and unlocked.
-
- The type_id
argument can be either a datatype
- identifier, a predefined datatype (defined in
- H5Tpublic.h
), or a dataset identifier.
- If type_id
is a dataset identifier instead of a
- datatype identifier, then this function returns a transient,
- modifiable datatype which is a copy of the dataset's datatype.
-
- The datatype identifier returned should be released with
- H5Tclose
or resource leaks will occur.
-
-
hid_t type_id |
- Identifier of datatype to copy. Can be a datatype
- identifier, a predefined datatype (defined in
- H5Tpublic.h ), or a dataset identifier. |
-SUBROUTINE h5tcopy_f(type_id, new_type_id, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type_id ! Datatype identifier - INTEGER(HID_T), INTENT(OUT) :: new_type_id ! Identifier of datatype's copy - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5tcopy_f -- - -
H5Tcreate
(H5T_class_t class
,
- size_t size
- )
-H5Tcreate
creates a new datatype of the specified
- class with the specified number of bytes.
- - The following datatype classes are supported with this function: -
H5T_COMPOUND
- H5T_OPAQUE
- H5T_ENUM
-
- Use H5Tcopy
to create integer or floating-point datatypes.
-
- The datatype identifier returned from this function should be
- released with H5Tclose
or resource leaks will result.
-
H5T_class_t class |
- Class of datatype to create. |
size_t size |
- The number of bytes in the datatype to create. |
-SUBROUTINE h5tcreate_f(class, size, type_id, hdferr) - IMPLICIT NONE - INTEGER, INTENT(IN) :: class ! Datatype class can be one of - ! H5T_COMPOUND_F (6) - ! H5T_ENUM_F (8) - ! H5T_OPAQUE_F (9) - INTEGER(SIZE_T), INTENT(IN) :: size ! Size of the datatype - INTEGER(HID_T), INTENT(OUT) :: type_id ! Datatype identifier - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5tcreate_f -- - -
H5Tdetect_class
(hid_t dtype_id
,
- H5T_class_t dtype_class
- )
-H5Tdetect_class
determines whether the datatype
- specified in dtype_id
contains any datatypes of the
- datatype class specified in dtype_class
.
- - This function is useful primarily in recursively examining - all the fields and/or base types - of compound, array, and variable-length datatypes. -
- Valid class identifiers are as defined in
- H5Tget_class
.
-
hid_t dtype_id |
- Datatype identifier. |
H5T_class_t dtype_class |
- Datatype class. |
TRUE
or FALSE
if successful;
- otherwise returns a negative value.
-H5Tenum_create
(hid_t parent_id
- )
-H5Tenum_create
creates a new enumeration datatype
- based on the specified base datatype, parent_id
,
- which must be an integer type.
-hid_t parent_id |
- IN: Datatype identifier for the base datatype. |
-SUBROUTINE h5tenum_create_f(parent_id, new_type_id, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: parent_id ! Datatype identifier for - ! the base datatype - INTEGER(HID_T), INTENT(OUT) :: new_type_id ! Datatype identifier for the - ! new enumeration datatype - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5tenum_create_f -- - -
H5Tenum_insert
(hid_t type
,
- const char *name
,
- void *value
- )
-H5Tenum_insert
inserts a
- new enumeration datatype member into an enumeration datatype.
-
- type
is the enumeration datatype,
- name
is the name of the new member, and
- value
points to the value of the new member.
-
- name
and value
must both
- be unique within type
.
-
- value
points to data which is of the
- datatype defined when the enumeration datatype was created.
-
hid_t type |
- IN: Datatype identifier for the enumeration datatype. |
const char *name |
- IN: Name of the new member. |
void *value |
- IN: Pointer to the value of the new member. |
-SUBROUTINE h5tenum_insert_f(type_id, name, value, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type_id ! Datatype identifier - CHARACTER(LEN=*), INTENT(IN) :: name ! Name of the new member - INTEGER, INTENT(IN) :: value ! Value of the new member - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5tenum_insert_f -- - -
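The member-table semantics described above (names and values both unique within the type, and the H5Tenum_nameof truncation rule below) can be sketched in plain C. This is an illustrative model, not the HDF5 implementation:

```c
#include <string.h>
#include <stddef.h>

/* Illustrative sketch only (not the HDF5 implementation): an
 * enumeration datatype is a set of (name, value) pairs in which both
 * the names and the values must be unique, as H5Tenum_insert requires. */
#define MAX_MEMBERS 16

typedef struct {
    const char *names[MAX_MEMBERS];  /* assumed to outlive the table */
    int         values[MAX_MEMBERS];
    int         nmembers;
} enum_sketch;

/* Returns 0 on success, -1 if the name or value already exists. */
static int enum_insert(enum_sketch *e, const char *name, int value)
{
    for (int i = 0; i < e->nmembers; i++)
        if (strcmp(e->names[i], name) == 0 || e->values[i] == value)
            return -1;
    if (e->nmembers == MAX_MEMBERS)
        return -1;
    e->names[e->nmembers]  = name;
    e->values[e->nmembers] = value;
    e->nmembers++;
    return 0;
}

/* Like H5Tenum_nameof: copies at most size characters into name and
 * fails (returns -1) if the name plus terminator does not fit. */
static int enum_nameof(const enum_sketch *e, int value,
                       char *name, size_t size)
{
    for (int i = 0; i < e->nmembers; i++)
        if (e->values[i] == value) {
            strncpy(name, e->names[i], size);
            return (strlen(e->names[i]) + 1 <= size) ? 0 : -1;
        }
    return -1;
}
```

Note how the failure mode mirrors the documented one: when the buffer is too small, as many characters as fit are copied without a terminator and the call fails.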
H5Tenum_nameof
(hid_t type,
- void *value
,
- char *name
,
- size_t size
- )
-H5Tenum_nameof
finds the symbol name that
- corresponds to the specified value
- of the enumeration datatype type
.
-
- At most size
characters of the symbol
- name are copied into the name
buffer.
- If the entire symbol name and null terminator
- do not fit in the name
buffer, then as
- many characters as possible are copied
- (not null terminated) and the function fails.
-
hid_t type |
- IN: Enumeration datatype identifier. |
void *value, |
- IN: Value of the enumeration datatype. |
char *name , |
- OUT: Buffer for output of the symbol name. |
size_t size |
- IN: Anticipated size of the symbol name, in bytes (characters). |
size
allows it,
- the first character of name
is
- set to NULL
.
--SUBROUTINE h5tenum_nameof_f(type_id, name, namelen, value, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type_id ! Datatype identifier - CHARACTER(LEN=*), INTENT(OUT) :: name ! Name of the enumeration datatype - INTEGER(SIZE_T), INTENT(IN) :: namelen ! Length of the name - INTEGER, INTENT(IN) :: value ! Value of the enumeration datatype - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5tenum_nameof_f -- - -
H5Tenum_valueof
(hid_t type,
- const char *name
,
- void *value
- )
-H5Tenum_valueof
finds the value that
- corresponds to the specified name
- of the enumeration datatype type
.
-
- The value
argument should be at least
- as large as the value of H5Tget_size(type)
- in order to hold the result.
-
hid_t type |
- IN: Enumeration datatype identifier. |
const char *name, |
- IN: Symbol name of the enumeration datatype. |
void *value , |
- OUT: Buffer for output of the value of the enumeration datatype. |
-SUBROUTINE h5tenum_valueof_f(type_id, name, value, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type_id ! Datatype identifier - CHARACTER(LEN=*), INTENT(IN) :: name ! Name of the enumeration datatype - INTEGER, INTENT(OUT) :: value ! Value of the enumeration datatype - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5tenum_valueof_f -- - -
H5Tequal
(hid_t type_id1
,
- hid_t type_id2
- )
-H5Tequal
determines whether two datatype identifiers
- refer to the same datatype.
-hid_t type_id1 |
- Identifier of datatype to compare. |
hid_t type_id2 |
- Identifier of datatype to compare. |
TRUE
,
- if the datatype identifiers refer to the same datatype,
- or 0
(zero), for FALSE
.
- Otherwise returns a negative value.
--SUBROUTINE h5tequal_f(type1_id, type2_id, flag, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type1_id ! Datatype identifier - INTEGER(HID_T), INTENT(IN) :: type2_id ! Datatype identifier - LOGICAL, INTENT(OUT) :: flag ! TRUE/FALSE flag to indicate - ! if two datatypes are equal - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5tequal_f -- - -
H5Tfind
(hid_t src_id
,
- hid_t dst_id
,
- H5T_cdata_t **pcdata
- )
-H5Tfind
finds a conversion function that can
- handle a conversion from type src_id
to type
- dst_id
.
- The pcdata
argument is a pointer to a pointer
- to type conversion data which was created and initialized
- by the soft type conversion function of this path when the
- conversion function was installed on the path.
-hid_t src_id |
- IN: Identifier for the source datatype. |
hid_t dst_id |
- IN: Identifier for the destination datatype. |
H5T_cdata_t **pcdata |
- OUT: Pointer to type conversion data. |
H5Tget_array_dims
(
- hid_t adtype_id
,
- hsize_t *dims[]
,
- int *perm[]
- )
-H5Tget_array_dims
returns the sizes of the dimensions
- and the dimension permutations of the specified array datatype object.
-
- The sizes of the dimensions are returned in the array dims
.
- The dimension permutations, i.e., C versus FORTRAN array order,
- are returned in the array perm
.
-
hid_t adtype_id |
- IN: Datatype identifier of array object. |
hsize_t *dims[] |
- OUT: Sizes of array dimensions. |
int *perm[] |
- OUT: Dimension permutations. |
-SUBROUTINE h5tget_array_dims_f(type_id, dims, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type_id ! Identifier of the array datatype - INTEGER(HSIZE_T), DIMENSION(*), INTENT(OUT) :: dims - ! Buffer to store array datatype - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5tget_array_dims_f -- - -
H5Tget_array_ndims
(
- hid_t adtype_id
- )
-H5Tget_array_ndims
returns the rank,
- the number of dimensions, of an array datatype object.
-hid_t adtype_id |
- IN: Datatype identifier of array object. |
-SUBROUTINE h5tget_array_ndims_f(type_id, ndims, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type_id ! Identifier of the array datatype - INTEGER, INTENT(OUT) :: ndims ! Number of array dimensions - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5tget_array_ndims_f -- - -
H5Tget_class
(hid_t type_id
- )
-H5Tget_class
returns the datatype class identifier.
-
- Valid class identifiers, as defined in H5Tpublic.h
, are:
-
H5T_INTEGER
- H5T_FLOAT
- H5T_TIME
- H5T_STRING
- H5T_BITFIELD
- H5T_OPAQUE
- H5T_COMPOUND
- H5T_REFERENCE
- H5T_ENUM
- H5T_VLEN
- H5T_ARRAY
-
- Note that the library returns H5T_STRING
- for both fixed-length and variable-length strings.
-
hid_t type_id |
- Identifier of datatype to query. |
H5T_NO_CLASS
(-1).
--SUBROUTINE h5tget_class_f(type_id, class, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type_id ! Datatype identifier - INTEGER, INTENT(OUT) :: class ! Datatype class, possible values are: - ! H5T_NO_CLASS_F - ! H5T_INTEGER_F - ! H5T_FLOAT_F - ! H5T_TIME_F - ! H5T_STRING_F - ! H5T_BITFIELD_F - ! H5T_OPAQUE_F - ! H5T_COMPOUND_F - ! H5T_REFERENCE_F - ! H5T_ENUM_F - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5tget_class_f -- - -
H5Tget_cset
(hid_t type_id
- )
-H5Tget_cset
retrieves the character set type
- of a string datatype. Valid character set types are:
- 0
)
- hid_t type_id |
- Identifier of datatype to query. |
H5T_CSET_ERROR
(-1).
--SUBROUTINE h5tget_cset_f(type_id, cset, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type_id ! Datatype identifier - INTEGER, INTENT(OUT) :: cset ! Character set type of a string - ! datatype - ! Possible values of padding type are: - ! H5T_CSET_ASCII_F = 0 - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5tget_cset_f -- - -
H5Tget_ebias
(hid_t type_id
- )
-H5Tget_ebias
retrieves the exponent bias of a floating-point type.
-hid_t type_id |
- Identifier of datatype to query. |
-SUBROUTINE h5tget_ebias_f(type_id, ebias, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type_id ! Datatype identifier - INTEGER, INTENT(OUT) :: ebias ! Datatype exponent bias - ! of a floating-point type - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5tget_ebias_f -- - -
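As a concrete illustration (assuming IEEE 754 single precision, which is what H5Tget_ebias would report a bias of 127 for), the bias is directly visible in the stored exponent field of 1.0:

```c
#include <string.h>
#include <stdint.h>

/* Illustrative sketch only, assuming IEEE 754 single precision: the
 * exponent bias H5Tget_ebias reports for the 32-bit IEEE types is 127
 * (1023 for the 64-bit types).  The stored exponent field of 1.0 is
 * exactly the bias, since the unbiased exponent of 1.0 is 0. */
static unsigned stored_exponent_of(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    return (bits >> 23) & 0xFFu;   /* 8-bit exponent field */
}
```

For 1.0f this returns 127 (the bias itself); for 2.0f, whose unbiased exponent is 1, it returns 128.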
H5Tget_fields
(hid_t type_id
,
- size_t *spos
,
- size_t *epos
,
- size_t *esize
,
- size_t *mpos
,
- size_t *msize
- )
-H5Tget_fields
retrieves information about the locations of the various
- bit fields of a floating point datatype. The field positions are bit
- positions in the significant region of the datatype. Bits are
- numbered with the least significant bit number zero.
- Any (or even all) of the arguments can be null pointers.
-hid_t type_id |
- IN: Identifier of datatype to query. |
size_t *spos |
- OUT: Pointer to location to return floating-point sign bit. |
size_t *epos |
- OUT: Pointer to location to return exponent bit-position. |
size_t *esize |
- OUT: Pointer to location to return size of exponent in bits. |
size_t *mpos |
- OUT: Pointer to location to return mantissa bit-position. |
size_t *msize |
- OUT: Pointer to location to return size of mantissa in bits. |
-SUBROUTINE h5tget_fields_f(type_id, epos, esize, mpos, msize, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type_id ! Datatype identifier - INTEGER, INTENT(OUT) :: epos ! Exponent bit-position - INTEGER, INTENT(OUT) :: esize ! Size of exponent in bits - INTEGER, INTENT(OUT) :: mpos ! Mantissa bit-position - INTEGER, INTENT(OUT) :: msize ! Size of mantissa in bits - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5tget_fields_f -- - -
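As an illustration of the bit-position convention above (positions counted from the least significant bit), here is a plain-C decomposition assuming IEEE 754 single precision, for which H5Tget_fields would report spos=31, epos=23, esize=8, mpos=0, msize=23:

```c
#include <string.h>
#include <stdint.h>

/* Illustrative sketch only, assuming IEEE 754 single precision.
 * Field positions match what H5Tget_fields reports for the 32-bit
 * IEEE types: sign at bit 31, 8 exponent bits at bit 23, and a
 * 23-bit mantissa at bit 0. */
typedef struct { unsigned sign, exponent; uint32_t mantissa; } f32_fields;

static f32_fields decompose_f32(float f)
{
    uint32_t bits;
    f32_fields r;
    memcpy(&bits, &f, sizeof bits);
    r.sign     = (bits >> 31) & 0x1u;       /*  1 bit  at position 31 */
    r.exponent = (bits >> 23) & 0xFFu;      /*  8 bits at position 23 */
    r.mantissa =  bits        & 0x7FFFFFu;  /* 23 bits at position 0  */
    return r;
}
```

For -2.0f this yields sign 1, exponent field 128, and mantissa 0.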
H5Tget_inpad
(hid_t type_id
- )
-H5Tget_inpad
retrieves the internal padding type for
- unused bits in floating-point datatypes.
- Valid padding types are:
- 0
)
- 1
)
- 2
)
- hid_t type_id |
- Identifier of datatype to query. |
H5T_PAD_ERROR
(-1).
--SUBROUTINE h5tget_inpad_f(type_id, padtype, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type_id ! Datatype identifier - INTEGER, INTENT(OUT) :: padtype ! Padding type for unused bits - ! in floating-point datatypes - ! Possible values of padding type are: - ! H5T_PAD_ZERO_F = 0 - ! H5T_PAD_ONE_F = 1 - ! H5T_PAD_BACKGROUND_F = 2 - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5tget_inpad_f -- - -
H5Tget_member_class
(
- hid_t cdtype_id
,
- unsigned member_no
- )
-Given a compound datatype, cdtype_id
, the function
- H5Tget_member_class
returns the datatype class of
- the compound datatype member specified by member_no
.
-
- Valid class identifiers are as defined in
- H5Tget_class
.
-
hid_t cdtype_id |
- IN: Datatype identifier of compound object. |
unsigned member_no |
- IN: Compound object member number. |
-SUBROUTINE h5tget_member_class_f(type_id, member_no, class, hdferr) - INTEGER(HID_T), INTENT(IN) :: type_id ! Datatype identifier - INTEGER, INTENT(IN) :: member_no ! Member number - INTEGER, INTENT(OUT) :: class ! Member class - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5tget_member_class_f -- -
H5Tget_member_index
(hid_t type_id
,
- const char * field_name
- )
-H5Tget_member_index
retrieves the index of a field
- of a compound datatype or an element of an enumeration datatype.
-
- The name of the target field or element is specified in
- field_name
.
-
- Fields are stored in no particular order
- with index values of 0 through N-1, where N is
- the value returned by H5Tget_nmembers
.
-
hid_t type_id |
- Identifier of datatype to query. |
const char * field_name |
- Name of the field or member whose index is to be retrieved. |
-SUBROUTINE h5tget_member_index_f(type_id, name, index, hdferr) - INTEGER(HID_T), INTENT(IN) :: type_id ! Datatype identifier - CHARACTER(LEN=*), INTENT(IN) :: name ! Member name - INTEGER, INTENT(OUT) :: index ! Member index - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5tget_member_index_f -- -
H5Tget_member_name
(hid_t type_id
,
- unsigned field_idx
- )
-H5Tget_member_name
retrieves the name of a field
- of a compound datatype or an element of an enumeration datatype.
-
- The index of the target field or element is specified in
- field_idx
.
- Compound datatype fields and enumeration datatype elements
- are stored in no particular order
- with index values of 0 through N-1, where N
- is the value returned by H5Tget_nmembers
.
-
- A buffer to receive the name of the field is
- allocated with malloc()
and the caller is responsible
- for freeing the memory used.
-
hid_t type_id |
- Identifier of datatype to query. |
unsigned field_idx |
- Zero-based index of the field or element whose name - is to be retrieved. |
malloc()
if successful;
- otherwise returns NULL.
--SUBROUTINE h5tget_member_name_f(type_id,index, member_name, namelen, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type_id ! Datatype identifier - INTEGER, INTENT(IN) :: index ! Field index (0-based) of - ! the field name to retrieve - CHARACTER(LEN=*), INTENT(OUT) :: member_name ! Name of a field of - ! a compound datatype - INTEGER, INTENT(OUT) :: namelen ! Length of the name - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5tget_member_name_f -- - -
H5Tget_member_offset
(hid_t type_id
,
- unsigned memb_no
- )
-H5Tget_member_offset
retrieves the
- byte offset of the beginning of a field within a
- compound datatype with respect to the beginning
- of the compound datatype datum.
-hid_t type_id |
- Identifier of datatype to query. |
unsigned memb_no |
- Number of the field whose offset is requested. |
0
(zero).
- Note that zero is a valid offset and that this function
- will fail only if a call to H5Tget_member_class()
- fails with the same arguments.
--SUBROUTINE h5tget_member_offset_f(type_id, member_no, offset, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type_id ! Datatype identifier - INTEGER, INTENT(IN) :: member_no ! Number of the field - ! whose offset is requested - INTEGER(SIZE_T), INTENT(OUT) :: offset ! Byte offset of the the - ! beginning of the field - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5tget_member_offset_f -- - -
H5Tget_member_type
(hid_t type_id
,
- unsigned field_idx
- )
-H5Tget_member_type
returns the datatype of the specified member. The caller
- should invoke H5Tclose() to release resources associated with the type.
-hid_t type_id |
- Identifier of datatype to query. |
unsigned field_idx |
- Field index (0-based) of the field type to retrieve. |
-SUBROUTINE h5tget_member_type_f(type_id, field_idx, datatype, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type_id ! Datatype identifier - INTEGER, INTENT(IN) :: field_idx ! Field index (0-based) of the - ! field type to retrieve - INTEGER(HID_T), INTENT(OUT) :: datatype ! Identifier of a copy of - ! the datatype of the field - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5tget_member_type_f -- - -
H5Tget_member_value
(hid_t type,
- unsigned memb_no
,
- void *value
- )
-H5Tget_member_value
returns the value of
- the enumeration datatype member memb_no
.
-
- The member value is returned in a user-supplied buffer
- pointed to by value
.
-
hid_t type |
- IN: Datatype identifier for the enumeration datatype. |
unsigned memb_no , |
- IN: Number of the enumeration datatype member. |
void *value |
- OUT: Pointer to a buffer for output of the - value of the enumeration datatype member. |
-SUBROUTINE h5tget_member_value_f(type_id, member_no, value, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type_id ! Datatype identifier - INTEGER, INTENT(IN) :: member_no ! Number of the enumeration - ! datatype member - INTEGER, INTENT(OUT) :: value ! Value of the enumeration datatype - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5tget_member_value_f -- - -
H5Tget_native_type
(hid_t type_id
,
- H5T_direction_t direction
- )
-H5Tget_native_type
returns the equivalent native datatype
- for the datatype specified in type_id
.
-
- H5Tget_native_type
is a high-level function designed
- primarily to facilitate use of the H5Dread
function,
- for which users otherwise must undertake a multi-step process to
- determine the native datatype of a dataset prior to reading it
- into memory.
- It can be used not only to determine
- the native datatype for atomic datatypes,
- but also to determine the native datatypes of the individual components of
- a compound datatype, an enumerated datatype, an array datatype, or
- a variable-length datatype.
-
- H5Tget_native_type
selects the matching native datatype
- from the following list:
-
H5T_NATIVE_CHAR - H5T_NATIVE_SHORT - H5T_NATIVE_INT - H5T_NATIVE_LONG - H5T_NATIVE_LLONG - - H5T_NATIVE_UCHAR - H5T_NATIVE_USHORT - H5T_NATIVE_UINT - H5T_NATIVE_ULONG - H5T_NATIVE_ULLONG - - H5T_NATIVE_FLOAT - H5T_NATIVE_DOUBLE - H5T_NATIVE_LDOUBLE-
- The direction
parameter indicates the order
- in which the library searches for a native datatype match.
- Valid values for direction
are as follows:
-
- H5T_DIR_ASCEND |
- Searches the above list in ascending size of the datatype, - i.e., from top to bottom. (Default) - | |
- H5T_DIR_DESCEND |
- Searches the above list in descending size of the datatype, - i.e., from bottom to top. - |
- H5Tget_native_type
is designed primarily for
- use with integer and floating-point datatypes.
- Time, bitfield, opaque, and reference datatypes are returned
- as a copy of type_id
.
-
- The identifier returned by H5Tget_native_type
- should eventually be closed by calling H5Tclose
- to release resources.
-
hid_t type_id |
- Datatype identifier for the dataset datatype. |
H5T_direction_t direction |
- Direction of search. |
H5Tget_nmembers
(hid_t type_id
- )
-H5Tget_nmembers
retrieves
- the number of fields in a compound datatype or
- the number of members of an enumeration datatype.
-hid_t type_id |
- Identifier of datatype to query. |
-SUBROUTINE h5tget_nmembers_f(type_id, num_members, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type_id ! Datatype identifier - INTEGER, INTENT(OUT) :: num_members ! Number of fields in a - ! compound datatype - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5tget_nmembers_f -- - -
H5Tget_norm
(hid_t type_id
- )
-H5Tget_norm
retrieves the mantissa normalization of
- a floating-point datatype. Valid normalization types are:
- H5T_NORM_IMPLIED (0) - MSB of mantissa is not stored, always 1
- H5T_NORM_MSBSET (1) - MSB of mantissa is always 1
- H5T_NORM_NONE (2) - Mantissa is not normalized
- hid_t type_id |
- Identifier of datatype to query. |
H5T_NORM_ERROR
(-1).
--SUBROUTINE h5tget_norm_f(type_id, norm, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type_id - ! Datatype identifier - INTEGER, INTENT(OUT) :: norm ! Mantissa normalization of a - ! floating-point datatype - ! Valid normalization types are: - ! H5T_NORM_IMPLIED_F(0) - ! MSB of mantissa is not - ! stored, always 1 - ! H5T_NORM_MSBSET_F(1) - ! MSB of mantissa is always 1 - ! H5T_NORM_NONE_F(2) - ! Mantissa is not normalized - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5tget_norm_f -- - -
H5Tget_offset
(hid_t type_id
- )
-H5Tget_offset
retrieves the bit offset of the first significant bit.
- The significant bits of an atomic datum can be offset from the beginning
- of the memory for that datum by an amount of padding. The `offset'
- property specifies the number of bits of padding that appear to the
- "right of" the value. That is, if we have a 32-bit datum with 16-bits
- of precision having the value 0x1122 then it will be laid out in
- memory as (from small byte address toward larger byte addresses):
Byte Position | Big-Endian Offset=0 | Big-Endian Offset=16 | Little-Endian Offset=0 | Little-Endian Offset=16
---|---|---|---|---
0: | [pad] | [0x11] | [0x22] | [pad]
1: | [pad] | [0x22] | [0x11] | [pad]
2: | [0x11] | [pad] | [pad] | [0x22]
3: | [0x22] | [pad] | [pad] | [0x11]
hid_t type_id |
- Identifier of datatype to query. |
-SUBROUTINE h5tget_offset_f(type_id, offset, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type_id ! Datatype identifier - INTEGER, INTENT(OUT) :: offset ! Datatype bit offset of the - ! first significant bit - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5tget_offset_f -- - -
H5Tget_order
(hid_t type_id
- )
-H5Tget_order
returns the byte order of an
- atomic datatype.
- - Possible return values are: -
H5T_ORDER_LE (0)
- H5T_ORDER_BE (1)
- H5T_ORDER_VAX (2)
- hid_t type_id |
- Identifier of datatype to query. |
H5T_ORDER_ERROR
(-1).
--SUBROUTINE h5tget_order_f(type_id, order, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type_id ! Datatype identifier - INTEGER, INTENT(OUT) :: order ! Datatype byte order - ! Possible values are: - ! H5T_ORDER_LE_F - ! H5T_ORDER_BE_F - ! H5T_ORDER_VAX_F - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5tget_order_f -- - -
H5Tget_overflow
(void
)
-H5Tget_overflow
returns a pointer
- to the current global overflow function.
- This is an application-defined function that is called whenever a
- datatype conversion causes an overflow.
-H5Tget_pad
(hid_t type_id
,
- H5T_pad_t * lsb
,
- H5T_pad_t * msb
- )
-H5Tget_pad
retrieves the padding type of the least and most-significant
- bit padding. Valid types are:
- H5T_PAD_ZERO (0) - Set background to zeros
- H5T_PAD_ONE (1) - Set background to ones
- H5T_PAD_BACKGROUND (2) - Leave background alone
- hid_t type_id |
- IN: Identifier of datatype to query. |
H5T_pad_t * lsb |
- OUT: Pointer to location to return least-significant - bit padding type. |
H5T_pad_t * msb |
- OUT: Pointer to location to return most-significant - bit padding type. |
-SUBROUTINE h5tget_pad_f(type_id, lsbpad, msbpad, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type_id ! Datatype identifier - INTEGER, INTENT(OUT) :: lsbpad ! Padding type of the - ! least significant bit - INTEGER, INTENT(OUT) :: msbpad ! Padding type of the - ! most significant bit - ! Possible values of - ! padding type are: - ! H5T_PAD_ZERO_F = 0 - ! H5T_PAD_ONE_F = 1 - ! H5T_PAD_BACKGROUND_F = 2 - ! H5T_PAD_ERROR_F = -1 - ! H5T_PAD_NPAD_F = 3 - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5tget_pad_f -- - -
H5Tget_precision
(hid_t type_id
- )
-H5Tget_precision
returns the precision of an atomic datatype. The
- precision is the number of significant bits which, unless padding is
- present, is 8 times larger than the value returned by H5Tget_size().
-hid_t type_id |
- Identifier of datatype to query. |
-SUBROUTINE h5tget_precision_f(type_id, precision, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type_id ! Datatype identifier - INTEGER, INTENT(OUT) :: precision ! Datatype precision - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5tget_precision_f -- - -
H5Tget_sign
(hid_t type_id
- )
-H5Tget_sign
retrieves the sign type for an integer type.
- Valid types are:
- H5T_SGN_NONE (0) - Unsigned integer type
- H5T_SGN_2 (1) - Two's complement signed integer type
- hid_t type_id |
- Identifier of datatype to query. |
H5T_SGN_ERROR
(-1).
--SUBROUTINE h5tget_sign_f(type_id, sign, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type_id ! Datatype identifier - INTEGER, INTENT(OUT) :: sign ! Sign type for an integer type - ! Possible values are: - ! Unsigned integer type - ! H5T_SGN_NONE_F = 0 - ! Two's complement signed - ! integer type - ! H5T_SGN_2_F = 1 - ! or error value - ! H5T_SGN_ERROR_F = -1 - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5tget_sign_f -- - -
H5Tget_size
(hid_t type_id
- )
-H5Tget_size
returns the size of a datatype in bytes.
-hid_t type_id |
- Identifier of datatype to query. |
-SUBROUTINE h5tget_size_f(type_id, size, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type_id ! Datatype identifier - INTEGER(SIZE_T), INTENT(OUT) :: size ! Datatype size - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5tget_size_f -- - -
H5Tget_strpad
(hid_t type_id
- )
-H5Tget_strpad
retrieves the storage mechanism
- for a string datatype, as defined in
- H5Tset_strpad
.
-hid_t type_id |
- Identifier of datatype to query. |
H5T_STR_ERROR
(-1).
--SUBROUTINE h5tget_strpad_f(type_id, strpad, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type_id - ! Datatype identifier - INTEGER, INTENT(OUT) :: strpad ! String padding method for a string datatype - ! Possible values of padding type are: - ! Pad with zeros (as C does): - ! H5T_STR_NULLPAD_F(0) - ! Pad with spaces (as FORTRAN does): - ! H5T_STR_SPACEPAD_F(1) - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5tget_strpad_f -- - -
H5Tget_super
(hid_t type
- )
-H5Tget_super
returns the base datatype from which the
- datatype type
is derived.
- - In the case of an enumeration type, the return value is an integer type. -
hid_t type |
- Datatype identifier for the derived datatype. |
-SUBROUTINE h5tget_super_f(type_id, base_type_id, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type_id ! Datatype identifier - INTEGER(HID_T), INTENT(OUT) :: base_type_id ! Base datatype identifier - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5tget_super_f -- - -
H5Tget_tag
(hid_t type_id
- )
-H5Tget_tag
returns the tag associated with
- the opaque datatype type_id
.
- - The tag is returned via a pointer to an - allocated string, which the caller must free. -
hid_t type_id |
- Datatype identifier for the opaque datatype. |
NULL
.
--SUBROUTINE h5tget_tag_f(type_id, tag,taglen, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type_id ! Datatype identifier - CHARACTER(LEN=*), INTENT(OUT) :: tag ! Unique ASCII string with which the - ! opaque datatype is to be tagged - INTEGER, INTENT(OUT) :: taglen ! Length of tag - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5tget_tag_f -- - -
H5Tinsert
(hid_t type_id
,
- const char * name
,
- size_t offset
,
- hid_t field_id
- )
-H5Tinsert
adds another member to the compound datatype
- type_id
. The new member has a name
which
- must be unique within the compound datatype.
- The offset
argument defines the start of the member
- in an instance of the compound datatype, and field_id
- is the datatype identifier of the new member.
- - Note: Members of a compound datatype do not have to be atomic datatypes; - a compound datatype can have a member which is a compound datatype. -
hid_t type_id |
- Identifier of compound datatype to modify. |
const char * name |
- Name of the field to insert. |
size_t offset |
- Offset in memory structure of the field to insert. |
hid_t field_id |
- Datatype identifier of the field to insert. |
-SUBROUTINE h5tinsert_f(type_id, name, offset, field_id, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type_id ! Datatype identifier - CHARACTER(LEN=*), INTENT(IN) :: name ! Name of the field to insert - INTEGER(SIZE_T), INTENT(IN) :: offset ! Offset in memory structure - ! of the field to insert - INTEGER(HID_T), INTENT(IN) :: field_id ! Datatype identifier of the - ! new member - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5tinsert_f -- - -
H5Tis_variable_str
(hid_t dtype_id
- )
-H5Tis_variable_str
determines whether the datatype
- identified in dtype_id
is a variable-length string.
- - This function can be used to distinguish between - fixed and variable-length string datatypes. -
hid_t dtype_id |
- Datatype identifier. |
TRUE
or FALSE
if successful;
- otherwise returns a negative value.
--SUBROUTINE h5tis_variable_str_f(type_id, status, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type_id ! Datatype identifier - LOGICAL, INTENT(OUT) :: status ! Logical flag: - ! .TRUE. if datatype is a - ! variable-length string - ! .FALSE. otherwise - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5tis_variable_str_f -- - -
H5Tlock
(hid_t type_id
- )
-H5Tlock
locks the datatype specified by the
- type_id
identifier, making it read-only and
- non-destructible. This is normally done by the library for
- predefined datatypes so the application does not
- inadvertently change or delete a predefined type.
- Once a datatype is locked it can never be unlocked.
-hid_t type_id |
- Identifier of datatype to lock. |
H5Topen
(hid_t loc_id
,
- const char * name
- )
-H5Topen
opens a named datatype at the location
- specified by loc_id
and returns an identifier
- for the datatype. loc_id
is either a file or
- group identifier. The identifier should eventually be closed
- by calling H5Tclose
to release resources.
-hid_t loc_id |
- IN: A file or group identifier. |
const char * name |
- IN: A datatype name, defined within the file
- or group identified by loc_id . |
-SUBROUTINE h5topen_f(loc_id, name, type_id, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: loc_id ! File or group identifier - CHARACTER(LEN=*), INTENT(IN) :: name ! Datatype name within file or - ! group - INTEGER(HID_T), INTENT(out) :: type_id ! Datatype identifier - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5topen_f -- - -
H5Tpack
(hid_t type_id
- )
-H5Tpack
recursively removes padding from within a compound
- datatype to make it more efficient (space-wise) to store that data.
-hid_t type_id |
- Identifier of datatype to modify. |
-SUBROUTINE h5tpack_f(type_id, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type_id ! Datatype identifier - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5tpack_f -- - -
H5Tregister
(H5T_pers_t pers
,
- const char * name
,
- hid_t src_id
,
- hid_t dst_id
,
- H5T_conv_t func
- )
-H5Tregister
registers a hard or soft conversion function
- for a datatype conversion path.
-
- The parameter pers
indicates whether a conversion function
- is hard (H5T_PERS_HARD
)
- or soft (H5T_PERS_SOFT
).
-
- A conversion path can have only one hard function.
- When pers
is H5T_PERS_HARD
,
- func
replaces any previous hard function.
- If pers
is H5T_PERS_HARD
and
- func
is the null pointer, then any hard function
- registered for this path is removed.
-
- When pers
is H5T_PERS_SOFT
,
- H5Tregister
- adds the function to the end of the master soft list and replaces
- the soft function in all applicable existing conversion paths.
- Soft functions are used when determining which conversion function
- is appropriate for this path.
-
- The name
is used only for debugging and should be a
- short identifier for the function.
-
- The path is specified by the source and destination datatypes
- src_id
and dst_id
.
- For soft conversion functions, only the class of these types is important.
-
- The type of the conversion function pointer is declared as: -
typedef herr_t (*H5T_conv_t) (hid_t src_id,
      hid_t dst_id,
      H5T_cdata_t *cdata,
      size_t nelmts,
      size_t buf_stride,
      size_t bkg_stride,
      void *buf,
      void *bkg,
      hid_t dset_xfer_plist)
- The H5T_cdata_t
struct is declared as:
-
typedef struct H5T_cdata_t {
      H5T_cmd_t command;
      H5T_bkg_t need_bkg;
      hbool_t *recalc;
      void *priv;
} H5T_cdata_t
- The H5T_conv_t
parameters and
- the elements of the H5T_cdata_t
struct
- are described more fully in the
- “Data Conversion”
- section of “The Datatype Interface (H5T)”
- in the HDF5 User's Guide.
-
H5T_pers_t pers |
- H5T_PERS_HARD for hard conversion functions;
- H5T_PERS_SOFT for soft conversion functions. |
const char * name |
- Name displayed in diagnostic output. |
hid_t src_id |
- Identifier of source datatype. |
hid_t dst_id |
- Identifier of destination datatype. |
H5T_conv_t func |
- Function to convert between source and destination datatypes. |
H5Tset_cset
(hid_t type_id
,
- H5T_cset_t cset
- )
-H5Tset_cset
sets the character set to be used.
- - HDF5 is able to distinguish between character sets of different - nationalities and to convert between them to the extent possible. - Valid character set types are: -
H5T_CSET_ASCII (0) - Character set of ASCII characters
- hid_t type_id |
- Identifier of datatype to modify. |
H5T_cset_t cset |
- Character set type. |
-SUBROUTINE h5tset_cset_f(type_id, cset, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type_id - ! Datatype identifier - INTEGER, INTENT(IN) :: cset ! Character set type of a string datatype - ! Possible values of padding type are: - ! H5T_CSET_ASCII_F = 0 - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5tset_cset_f -- - -
H5Tset_ebias
(hid_t type_id
,
- size_t ebias
- )
-H5Tset_ebias
sets the exponent bias of a floating-point type.
-hid_t type_id |
- Identifier of datatype to set. |
size_t ebias |
- Exponent bias value. |
-SUBROUTINE h5tset_ebias_f(type_id, ebias, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type_id ! Datatype identifier - INTEGER, INTENT(IN) :: ebias ! Datatype exponent bias - ! of a floating-point type, - ! which cannot be 0 - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5tset_ebias_f -- - -
H5Tset_fields
(hid_t type_id
,
- size_t spos
,
- size_t epos
,
- size_t esize
,
- size_t mpos
,
- size_t msize
- )
-H5Tset_fields
sets the locations and sizes of the various
- floating-point bit fields. The field positions are bit positions in the
- significant region of the datatype. Bits are numbered with the least
- significant bit number zero.
-
- Fields are not allowed to extend beyond the number of bits of - precision, nor are they allowed to overlap with one another. -
hid_t type_id |
- Identifier of datatype to set. |
size_t spos |
- Sign position, i.e., the bit offset of the floating-point - sign bit. |
size_t epos |
- Exponent bit position. |
size_t esize |
- Size of exponent in bits. |
size_t mpos |
- Mantissa bit position. |
size_t msize |
- Size of mantissa in bits. |
-SUBROUTINE h5tset_fields_f(type_id, epos, esize, mpos, msize, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type_id ! Datatype identifier - INTEGER, INTENT(IN) :: epos ! Exponent bit-position - INTEGER, INTENT(IN) :: esize ! Size of exponent in bits - INTEGER, INTENT(IN) :: mpos ! Mantissa bit-position - INTEGER, INTENT(IN) :: msize ! Size of mantissa in bits - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5tset_fields_f -- - -
H5Tset_inpad
(hid_t type_id
,
- H5T_pad_t inpad
- )
-H5Tset_inpad
fills unused internal bits of a floating-point datatype
- (significant bits not part of the sign, exponent, or mantissa fields)
- according to the value of the padding value property inpad
.
- Valid padding types are:
- H5T_PAD_ZERO (0) - Set background to zeros
- H5T_PAD_ONE (1) - Set background to ones
- H5T_PAD_BACKGROUND (2) - Leave background alone
- hid_t type_id |
- Identifier of datatype to modify. |
H5T_pad_t pad |
- Padding type. |
-SUBROUTINE h5tset_inpad_f(type_id, padtype, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type_id - ! Datatype identifier - INTEGER, INTENT(IN) :: padtype ! Padding type for unused bits - ! in floating-point datatypes. - ! Possible values of padding type are: - ! H5T_PAD_ZERO_F = 0 - ! H5T_PAD_ONE_F = 1 - ! H5T_PAD_BACKGROUND_F = 2 - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5tset_inpad_f -- - -
H5Tset_norm
(hid_t type_id
,
- H5T_norm_t norm
- )
-H5Tset_norm
sets the mantissa normalization of
- a floating-point datatype. Valid normalization types are:
- H5T_NORM_IMPLIED (0) - MSB of mantissa is not stored, always 1
- H5T_NORM_MSBSET (1) - MSB of mantissa is always 1
- H5T_NORM_NONE (2) - Mantissa is not normalized
- hid_t type_id |
- Identifier of datatype to set. |
H5T_norm_t norm |
- Mantissa normalization type. |
-SUBROUTINE h5tset_norm_f(type_id, norm, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type_id - ! Datatype identifier - INTEGER, INTENT(IN) :: norm ! Mantissa normalization of a - ! floating-point datatype - ! Valid normalization types are: - ! H5T_NORM_IMPLIED_F(0) - ! MSB of mantissa is not stored, - ! always 1 - ! H5T_NORM_MSBSET_F(1) - ! MSB of mantissa is always 1 - ! H5T_NORM_NONE_F(2) - ! Mantissa is not normalized - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5tset_norm_f -- - -
H5Tset_offset
(hid_t type_id
,
- size_t offset
- )
-H5Tset_offset
sets the bit offset of the first significant bit. The
- significant bits of an atomic datum can be offset from the beginning of
- the memory for that datum by an amount of padding. The `offset'
- property specifies the number of bits of padding that appear to the
- "right of" the value. That is, if we have a 32-bit datum with 16-bits
- of precision having the value 0x1122 then it will be laid out in
- memory as (from small byte address toward larger byte addresses):
Byte Position | Big-Endian Offset=0 | Big-Endian Offset=16 | Little-Endian Offset=0 | Little-Endian Offset=16
---|---|---|---|---
0: | [pad] | [0x11] | [0x22] | [pad]
1: | [pad] | [0x22] | [0x11] | [pad]
2: | [0x11] | [pad] | [pad] | [0x22]
3: | [0x22] | [pad] | [pad] | [0x11]
If the offset is incremented then the total size is -incremented also if necessary to prevent significant bits of -the value from hanging over the edge of the datatype. - -
The offset of an H5T_STRING cannot be set to anything but -zero. -
hid_t type_id |
- Identifier of datatype to set. |
size_t offset |
- Offset of first significant bit. |
-SUBROUTINE h5tset_offset_f(type_id, offset, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type_id ! Datatype identifier - INTEGER, INTENT(IN) :: offset ! Datatype bit offset of - ! the first significant bit - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5tset_offset_f -- - -
H5Tset_order
(hid_t type_id
,
- H5T_order_t order
- )
-H5Tset_order
sets the byte ordering of an atomic datatype.
- Byte orderings currently supported are:
- H5T_ORDER_LE (0) - Little-endian byte ordering
- H5T_ORDER_BE (1) - Big-endian byte ordering
- H5T_ORDER_VAX (2) - VAX-endian byte ordering
- hid_t type_id |
- Identifier of datatype to set. |
H5T_order_t order |
- Byte ordering constant. |
-SUBROUTINE h5tset_order_f(type_id, order, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type_id ! Datatype identifier - INTEGER, INTENT(IN) :: order ! Datatype byte order - ! Possible values are: - ! H5T_ORDER_LE_F - ! H5T_ORDER_BE_F - ! H5T_ORDER_VAX_F - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5tset_order_f -- - -
H5Tset_overflow
(H5T_overflow_t func
)
-H5Tset_overflow
sets the overflow handler
- to be the function specified by func
.
- func
will be called for all datatype conversions that
- result in an overflow.
-
- See the definition of H5T_overflow_t
in
- H5Tpublic.h
for documentation
- of arguments and return values.
- The prototype for H5T_overflow_t
is as follows:
- herr_t (*H5T_overflow_t)(hid_t src_id, hid_t dst_id,
- void *src_buf, void *dst_buf);
-
-
- The NULL pointer may be passed to remove the overflow handler. -
H5T_overflow_t func |
- Overflow function. |
H5Tset_pad
(hid_t type_id
,
- H5T_pad_t lsb
,
- H5T_pad_t msb
- )
-H5Tset_pad
sets the least and most-significant bits padding types.
- H5T_PAD_ZERO (0) - Set background to zeros
- H5T_PAD_ONE (1) - Set background to ones
- H5T_PAD_BACKGROUND (2) - Leave background alone
- hid_t type_id |
- Identifier of datatype to set. |
H5T_pad_t lsb |
- Padding type for least-significant bits. |
H5T_pad_t msb |
- Padding type for most-significant bits. |
-SUBROUTINE h5tset_pad_f(type_id, lsbpad, msbpad, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type_id ! Datatype identifier - INTEGER, INTENT(IN) :: lsbpad ! Padding type of the - ! least significant bit - INTEGER, INTENT(IN) :: msbpad ! Padding type of the - ! most significant bit - ! Possible values of padding - ! type are: - ! H5T_PAD_ZERO_F = 0 - ! H5T_PAD_ONE_F = 1 - ! H5T_PAD_BACKGROUND_F = 2 - ! H5T_PAD_ERROR_F = -1 - ! H5T_PAD_NPAD_F = 3 - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5tset_pad_f -- - -
H5Tset_precision
(hid_t type_id
,
- size_t precision
- )
-H5Tset_precision
sets the precision of an atomic datatype.
- The precision is the number of significant bits which, unless padding
- is present, is 8 times larger than the value returned by H5Tget_size().
- If the precision is increased then the offset is decreased and then - the size is increased to insure that significant bits do not "hang - over" the edge of the datatype. -
Changing the precision of an H5T_STRING automatically changes the - size as well. The precision must be a multiple of 8. -
When decreasing the precision of a floating point type, set the - locations and sizes of the sign, mantissa, and exponent fields - first. -
hid_t type_id |
- Identifier of datatype to set. |
size_t precision |
- Number of bits of precision for datatype. |
-SUBROUTINE h5tset_precision_f(type_id, precision, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type_id ! Datatype identifier - INTEGER, INTENT(IN) :: precision ! Datatype precision - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5tset_precision_f -- - -
H5Tset_sign
(hid_t type_id
,
- H5T_sign_t sign
- )
-H5Tset_sign
sets the sign property for an integer type.
- H5T_SGN_NONE (0) - Unsigned integer type
- H5T_SGN_2 (1) - Two's complement signed integer type
- hid_t type_id |
- Identifier of datatype to set. |
H5T_sign_t sign |
- Sign type. |
-SUBROUTINE h5tset_sign_f(type_id, sign, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type_id - ! Datatype identifier - INTEGER, INTENT(IN) :: sign ! Sign type for an integer type - ! Possible values are: - ! Unsigned integer type - ! H5T_SGN_NONE_F = 0 - ! Two's complement signed integer type - ! H5T_SGN_2_F = 1 - ! or error value - ! H5T_SGN_ERROR_F=-1 - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5tset_sign_f -- - -
H5Tset_size
(hid_t type_id
,
- size_t size
- )
-H5Tset_size
sets the total size in bytes,
- size
, for a datatype. If the datatype is atomic and size
- is decreased so that the significant bits of the datatype extend beyond
- the edge of the new size, then the `offset' property is decreased
- toward zero. If the `offset' becomes zero and the significant
- bits of the datatype still hang over the edge of the new size, then
- the number of significant bits is decreased.
- The size set for a string should include space for the null-terminator
- character, otherwise it will not be stored on (or retrieved from) disk.
- Adjusting the size of a string automatically sets the precision
- to 8*size
. A compound datatype may increase in size,
- but may not shrink. All datatypes must have a positive size.
-hid_t type_id |
- Identifier of datatype to change size. |
size_t size |
- Size in bytes to modify datatype. |
-SUBROUTINE h5tset_size_f(type_id, size, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type_id ! Datatype identifier - INTEGER(SIZE_T), INTENT(IN) :: size ! Datatype size - INTEGER, INTENT(OUT) :: hdferr ! Error code - ! 0 on success and -1 on failure -END SUBROUTINE h5tset_size_f -- - -
H5Tset_strpad
(hid_t type_id
,
- H5T_str_t strpad
- )
-H5Tset_strpad
defines the storage mechanism for the string.
- - The method used to store character strings differs with the - programming language: -
strpad
, are as follows:
- H5T_STR_NULLTERM (0) - Null terminate (as C does)
- H5T_STR_NULLPAD (1) - Pad with zeros
- H5T_STR_SPACEPAD (2) - Pad with spaces (as FORTRAN does)
-
- When converting from a longer string to a shorter string,
- the behavior is as follows.
- If the short string is H5T_STR_NULLPAD
or
- H5T_STR_SPACEPAD
, then the string is simply truncated.
- If the short string is H5T_STR_NULLTERM
, it is
- truncated and a null terminator is appended.
-
- When converting from a shorter string to a longer string, - the long string is padded on the end by appending nulls or spaces. - - -
hid_t type_id |
- Identifier of datatype to modify. |
H5T_str_t strpad |
- String padding type. |
-SUBROUTINE h5tset_strpad_f(type_id, strpad, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type_id - ! Datatype identifier - INTEGER, INTENT(IN) :: strpad ! String padding method for a string datatype - ! Possible values of padding type are: - ! Pad with zeros (as C does): - ! H5T_STR_NULLPAD_F(0) - ! Pad with spaces (as FORTRAN does): - ! H5T_STR_SPACEPAD_F(1) - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5tset_strpad_f -- - -
H5Tset_tag
(hid_t type_id
- const char *tag
- )
-H5Tset_tag
tags an opaque datatype type_id
- with a descriptive ASCII identifier, tag
.
-hid_t type_id |
- IN: Datatype identifier for the opaque datatype to be tagged. |
const char *tag |
- IN: Descriptive ASCII string with which the - opaque datatype is to be tagged. |
-SUBROUTINE h5tset_tag_f(type_id, tag, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type_id ! Datatype identifier - CHARACTER(LEN=*), INTENT(IN) :: tag ! Unique ASCII string with which the - ! opaque datatype is to be tagged - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5tset_tag_f -- - -
H5Tunregister
(H5T_conv_t func
- )
-H5Tunregister
removes a conversion function from all conversion paths.
- - The conversion function pointer type declaration is described in - H5Tregister. -
H5T_conv_t func |
- Function to remove from conversion paths. | -
H5Tvlen_create
(hid_t base_type_id
- )
-H5Tvlen_create
creates a new variable-length (VL) datatype.
- - The base datatype will be the datatype that the sequence is composed of, - characters for character strings, vertex coordinates for polygon lists, etc. - The base type specified for the VL datatype can be of any HDF5 datatype, - including another VL datatype, a compound datatype or an atomic datatype. -
- When necessary, use H5Tget_super
to determine the base type
- of the VL datatype.
-
- The datatype identifier returned from this function should be
- released with H5Tclose
or resource leaks will result.
-
hid_t base_type_id |
- Base type of datatype to create. |
-SUBROUTINE h5tvlen_create_f(type_id, vltype_id, hdferr) - IMPLICIT NONE - INTEGER(HID_T), INTENT(IN) :: type_id ! Datatype identifier of base type - ! Base type can only be atomic - INTEGER(HID_T), INTENT(OUT) :: vltype_id ! VL datatype identifier - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5tvlen_create_f -- - -
-HDF5 documents and links -Introduction to HDF5 -HDF5 User Guide - - |
-
-And in this document, the
-HDF5 Reference Manual
- -H5IM -H5LT -H5PT -H5TB - -H5 -H5A -H5D -H5E -H5F -H5G -H5I -H5P - -H5R -H5S -H5T -H5Z -Tools -Datatypes - |
-HDF5 supports a filter pipeline that provides the capability for standard -and customized raw data processing during I/O operations. -HDF5 is distributed with a small set of standard filters such as -compression (gzip, SZIP, and a shuffling algorithm) and -error checking (Fletcher32 checksum). -For further flexibility, the library allows a -user application to extend the pipeline through the -creation and registration of customized filters. -
-The flexibility of the filter pipeline implementation enables the -definition of additional filters by a user application. -A filter -
H5D_CHUNKED
- storage layout), and
-The HDF5 library does not support filters for contiguous datasets -because of the difficulty of implementing random access for partial I/O. -Filters are not supported for compact datasets because applying them -would not produce significant results.
-Filter identifiers for the filters distributed with the HDF5 Library -are as follows: -
- H5Z_FILTER_DEFLATE | The gzip compression, - or deflation, filter - |
- H5Z_FILTER_SZIP | The SZIP compression filter - |
- H5Z_FILTER_SHUFFLE | The shuffle algorithm filter - |
- H5Z_FILTER_FLETCHER32 | The Fletcher32 checksum, - or error checking, filter - |
-See The Dataset Interface (H5D) -in the HDF5 User's Guide for further information regarding -data compression. - - - - - -
H5Zfilter_avail
(H5Z_filter_t filter
)
- H5Zfilter_avail
determines whether the filter
- specified in filter
is available to the application.
- H5Z_filter_t filter |
- IN: Filter identifier. - See the introduction to this section of the reference manual - for a list of valid filter identifiers. |
-SUBROUTINE h5zfilter_avail_f(filter, status, hdferr) - IMPLICIT NONE - INTEGER, INTENT(IN) :: filter ! Filter - ! Valid values are: - ! H5Z_FILTER_DEFLATE_F - ! H5Z_FILTER_SHUFFLE_F - ! H5Z_FILTER_FLETCHER32_F - ! H5Z_FILTER_SZIP_F - LOGICAL, INTENT(OUT) :: status ! Flag indicating whether - ! filter is available: - ! .TRUE. - ! .FALSE. - INTEGER, INTENT(OUT) :: hdferr ! Error code -END SUBROUTINE h5zfilter_avail_f -- - -
H5Zget_filter_info
(
- H5Z_filter_t filter
,
- unsigned int *filter_config_flags
- )
-H5Zget_filter_info
retrieves information about a filter.
- At present, this means that the function retrieves a
- filter's configuration flags, indicating whether the filter is
- configured to decode data, to encode data, neither, or both.
-
- If filter_config_flags
is not set to NULL
- prior to the function call, the returned parameter contains a
- bit field specifying the available filter configuration.
- The configuration flag values can then be determined through
- a series of bitwise AND operations, as described below.
-
- Valid filter configuration flags include the following: -
|
- H5Z_FILTER_CONFIG_ENCODE_ENABLED |
- Encoding is enabled for this filter - |
- | H5Z_FILTER_CONFIG_DECODE_ENABLED |
- Decoding is enabled for this filter - |
- | (These flags
- are defined in the HDF5 Library source code file
- H5Zpublic.h .)
- |
filter_config_flags
and a valid
- filter configuration flag will reveal whether
- the related configuration option is available.
- For example, if the value of
-
- H5Z_FILTER_CONFIG_ENCODE_ENABLED
- &
- filter_config_flags
- 0
(zero),
- the queried filter is configured to encode data;
- if the value is FALSE
,
- i.e., equal to 0
(zero),
- the filter is not so configured.
-
- If a filter is not encode-enabled, the corresponding
- H5Pset_*
function will return an error if the
- filter is added to a dataset creation property list (which is
- required if the filter is to be used to encode that dataset).
- For example, if the H5Z_FILTER_CONFIG_ENCODE_ENABLED
- flag is not returned for the SZIP filter,
- H5Z_FILTER_SZIP
, a call to H5Pset_szip
- will fail.
-
- If a filter is not decode-enabled, the application will not be - able to read an existing file encoded with that filter. -
- This function should be called, and the returned
- filter_config_flags
analyzed, before calling
- any other function, such as H5Pset_szip
,
- that might require a particular filter configuration.
-
-
filter
- filter_config_flags
-
-SUBROUTINE h5zget_filter_info_f(filter, config_flags, hdferr)
-  IMPLICIT NONE
-  INTEGER, INTENT(IN)  :: filter        ! Filter, may be one of the
-                                        ! following:
-                                        !    H5Z_FILTER_DEFLATE_F
-                                        !    H5Z_FILTER_SHUFFLE_F
-                                        !    H5Z_FILTER_FLETCHER32_F
-                                        !    H5Z_FILTER_SZIP_F
-  INTEGER, INTENT(OUT) :: config_flags  ! Bit field indicating whether
-                                        ! a filter's encoder and/or
-                                        ! decoder are available
-  INTEGER, INTENT(OUT) :: hdferr        ! Error code
-END SUBROUTINE h5zget_filter_info_f
-
H5Zregister
(const H5Z_class_t filter_class
)
-H5Zregister
registers a new filter with the
- HDF5 library.
-
- Making a new filter available to an application is a two-step
- process. The first step is to write
- the three filter callback functions described below:
- can_apply_func
, set_local_func
, and
- filter_func
.
- This call to H5Zregister
,
- registering the filter with the
- library, is the second step.
- The can_apply_func
and set_local_func
- fields can be set to NULL
- if they are not required for the filter being registered.
-
- H5Zregister
accepts a single parameter,
- the filter_class
data structure,
- which is defined as follows:
-
-
-typedef struct H5Z_class_t {
-    H5Z_filter_t          filter_id;
-    const char           *comment;
-    H5Z_can_apply_func_t  can_apply_func;
-    H5Z_set_local_func_t  set_local_func;
-    H5Z_func_t            filter_func;
-} H5Z_class_t;
-
- filter_id
is the identifier for the new filter.
- This is a user-defined value between
- H5Z_FILTER_RESERVED
and H5Z_FILTER_MAX
,
- both of which are defined in the HDF5 source file
- H5Zpublic.h
.
-
- comment
is used for debugging,
- may contain a descriptive name for the filter,
- and may be the null pointer.
-
- can_apply_func
, described in detail below,
- is a user-defined callback function which determines whether
- the combination of the dataset creation property list values,
- the datatype, and the dataspace represent a valid combination
- to apply this filter to.
-
- set_local_func
, described in detail below,
- is a user-defined callback function which sets any parameters that
- are specific to this dataset, based on the combination of the
- dataset creation property list values, the datatype, and the
- dataspace.
-
- filter_func
, described in detail below,
- is a user-defined callback function which performs the action
- of the filter.
-
- The statistics associated with a filter are not reset - by this function; they accumulate over the life of the library. - -
- The callback functions
-
- Before H5Zregister
can link a filter into an
- application, three callback functions must be defined
- as described in the HDF5 Library header file H5Zpublic.h
.
-
-
- The can apply callback function is defined as follows:
-
H5Z_can_apply_func_t
)
- (hid_t dcpl_id
,
- hid_t type_id
,
- hid_t space_id
)
-
- Before a dataset is created, the can apply callbacks for
- any filters used in the dataset creation property list are called
- with the dataset's dataset creation property list, dcpl_id
,
- the dataset's datatype, type_id
, and
- a dataspace describing a chunk, space_id
,
- (for chunked dataset storage).
-
- This callback must determine whether the combination of the - dataset creation property list settings, the datatype, and the - dataspace represent a valid combination to which to apply this filter. - For example, an invalid combination may involve - the filter not operating correctly on certain datatypes, - on certain datatype sizes, or on certain sizes of the chunk dataspace. -
- This callback can be the NULL
pointer, in which case
- the library will assume that the filter can be applied to a dataset with
- any combination of dataset creation property list values, datatypes,
- and dataspaces.
-
- The can apply callback function must return - a positive value for a valid combination, - zero for an invalid combination, and - a negative value for an error. - -
- The set local callback function is defined as follows:
-
H5Z_set_local_func_t
)
- (hid_t dcpl_id
,
- hid_t type_id
,
- hid_t space_id
)
-
- After the can apply callbacks are checked for a new dataset,
- the set local callback functions for any filters used in the
- dataset creation property list are called.
- These callbacks receive
- dcpl_id
, the dataset's private copy of the dataset
- creation property list passed in to H5Dcreate
- (i.e. not the actual property list passed in to H5Dcreate
);
- type_id
, the datatype identifier passed in to
- H5Dcreate
,
- which is not copied and should not be modified; and
- space_id
, a dataspace describing the chunk
- (for chunked dataset storage), which should also not be modified.
-
- The set local callback must set any filter parameters that are - specific to this dataset, based on the combination of the - dataset creation property list values, the datatype, and the dataspace. - For example, some filters perform different actions based on - different datatypes, datatype sizes, numbers of dimensions, - or dataspace sizes. -
- The set local callback may be the NULL
pointer,
- in which case, the library will assume that there are
- no dataset-specific settings for this filter.
-
- The set local callback function must return - a non-negative value on success and - a negative value for an error. - -
- The filter operation callback function, - defining the filter's operation on the data, is defined as follows: -
H5Z_func_t
)
- (unsigned int flags
,
- size_t cd_nelmts
,
- const unsigned int cd_values[]
,
- size_t nbytes
,
- size_t *buf_size
,
- void **buf
)
-
- The parameters flags
, cd_nelmts
,
- and cd_values
are the same as for the function
- H5Pset_filter
.
- The one exception is that an additional flag,
- H5Z_FLAG_REVERSE
, is set when
- the filter is called as part of the input pipeline.
-
- The parameter *buf
points to the input buffer
- which has a size of *buf_size
bytes,
- nbytes
of which are valid data.
-
- The filter should perform the transformation in place if
- possible. If the transformation cannot be done in place,
- then the filter should allocate a new buffer with
- malloc()
and assign it to *buf
,
- assigning the allocated size of that buffer to
- *buf_size
.
- The old buffer should be freed by calling free()
.
-
- If successful, the filter operation callback function
- returns the number of valid bytes of data contained in *buf
.
- In the case of failure, the return value is 0
(zero)
- and all pointer arguments are left unchanged.
-
H5Zregister
interface is substantially revised
- from the HDF5 Release 1.4.x series.
- The H5Z_class_t
struct and
- the set local and can apply callback functions
- first appeared in HDF5 Release 1.6.
-const H5Z_class_t filter_class |
- IN: Struct containing filter-definition information. |
H5Zunregister
(H5Z_filter_t filter
)
- H5Zunregister
unregisters the filter
- specified in filter
.
-
- After a call to H5Zunregister
, the filter
- specified in filter
will no longer be
- available to the application.
-
H5Z_filter_t filter |
- IN: Identifier of the filter to be unregistered. - See the introduction to this section of the reference manual - for a list of identifiers for standard filters - distributed with the HDF5 Library. |
-
-SUBROUTINE h5zunregister_f(filter, hdferr)
-  IMPLICIT NONE
-  INTEGER, INTENT(IN)  :: filter  ! Filter; one of the possible values:
-                                  !    H5Z_FILTER_DEFLATE_F
-                                  !    H5Z_FILTER_SHUFFLE_F
-                                  !    H5Z_FILTER_FLETCHER32_F
-                                  !    H5Z_FILTER_SZIP_F
-  INTEGER, INTENT(OUT) :: hdferr  ! Error code
-                                  !    0 on success, -1 on failure
-END SUBROUTINE h5zunregister_f
-
-HDF5 documents and links -Introduction to HDF5 -HDF5 User Guide - - |
-
-And in this document, the
-HDF5 Reference Manual
- -H5IM -H5LT -H5PT -H5TB - -H5 -H5A -H5D -H5E -H5F -H5G -H5I -H5P - -H5R -H5S -H5T -H5Z -Tools -Datatypes - |
- HDF5 documents and links - Introduction to HDF5 - HDF5 Reference Manual - HDF5 User's Guide for Release 1.6 - - |
-
- And in this document, the
- HDF5 User's Guide from Release 1.4.5:
- - Files - Datasets - Datatypes - Dataspaces - Groups - - References - Attributes - Property Lists - Error Handling - - Filters - Caching - Chunking - Mounting Files - - Performance - Debugging - Environment - DDL - |
- -
- An object reference points to an entire object in the - current HDF5 file by storing the relative file address - (OID) of the object header for the object pointed to. - The relative file address of an object header is - constant for the life of the object. - An object reference is of a fixed size in the file. -
-
- A dataset region reference points to a region of a - dataset in the current HDF5 file by storing the OID - of the dataset and the global heap offset of the - region referenced. The region referenced is located - by retrieving the coordinates of the areas in the - region from the global heap. A dataset region - reference is of a variable size in the file. -
Reference Type | Value | Description |
---|---|---|
H5R_OBJECT |
- 0 |
- Object reference |
H5R_DATASET_REGION |
- 1 |
- Dataset region reference |
-
H5Rcreate(
void *reference,
- hid_t loc_id,
- const char *name,
- H5R_type_t type,
- hid_t space_id)
- H5Rcreate
creates an object which is a
- particular type of reference (specified with the
- type
parameter) to some file object and/or
- location specified with the space_id
parameter.
- For dataset region references, the selection specified
- in the dataspace is the portion of the dataset which
- will be referred to.
- - -
H5Rdereference(
hid_t dset,
- H5R_type_t rtype,
- void *reference)
- H5Rdereference
opens the object referenced
- and returns an identifier for that object.
- The parameter reference
specifies a reference of
- type rtype
that is stored in the dataset
- dset
.
- - -
H5Rget_object_type(
hid_t obj_id,
- void *reference)
- H5Rget_object_type
retrieves the type of object
- that an object reference points to.
- The parameter obj_id
specifies the dataset
- containing the reference object or the location identifier
- of the object that the dataset is located within.
- The parameter reference
specifies the
- reference being queried.
- - -
H5Rget_region(
H5D_t dataset,
- H5R_type_t type,
- void *reference)
- H5Rget_region
creates a copy of the dataspace of
- the dataset that is pointed to and defines a selection in
- the copy which is the location (or region) pointed to.
- The parameter reference
specifies a reference of
- type type
that is stored in the dataset
- dataset
.
- - -
H5Iget_type(
hid_t id)
- id
. Valid return values appear
- in the following list:
- H5I_FILE |
- File objects |
H5I_GROUP |
- Group objects |
H5I_DATATYPE |
- Datatype objects |
H5I_DATASPACE |
- Dataspace objects |
H5I_DATASET |
- Dataset objects |
H5I_ATTR |
- Attribute objects |
- This function was inspired by the need of users to figure
- out which type of object closing function
- (H5Dclose
, H5Gclose
, etc.)
- to call after a call to H5Rdereference
,
- but it is also of general use.
-
-
- -
-
-{
-    hid_t file1;
-    hid_t dataset1;
-    hid_t datatype, dataspace;
-    char buf[128];
-    hobj_ref_t link;
-    hobj_ref_t data[10][10];
-    int rank;
-    hsize_t dimsf[2];
-    int i, j;
-
-    /* Open the file */
-    file1=H5Fopen("example.h5", H5F_ACC_RDWR, H5P_DEFAULT);
-
-    /* Describe the size of the array and create the data space */
-    rank=2;
-    dimsf[0] = 10;
-    dimsf[1] = 10;
-    dataspace = H5Screate_simple(rank, dimsf, NULL);
-
-    /* Define datatype */
-    datatype = H5Tcopy(H5T_STD_REF_OBJ);
-
-    /* Create a dataset */
-    dataset1=H5Dcreate(file1,"Dataset One",datatype,dataspace,H5P_DEFAULT);
-
-    /* Construct array of OIDs for other datasets in the file */
-    /* (somewhat hokey and artificial, but demonstrates the point) */
-    for(i=0; i<10; i++)
-        for(j=0; j<10; j++)
-        {
-            sprintf(buf,"/Group/Linked Set %d-%d",i,j);
-            if(H5Rcreate(&link,file1,buf,H5R_OBJECT,-1)>=0)
-                data[i][j]=link;
-        } /* end for */
-
-    /* Write the data to the dataset using default transfer properties. */
-    H5Dwrite(dataset1, H5T_STD_REF_OBJ, H5S_ALL, H5S_ALL, H5P_DEFAULT, data);
-
-    /* Close everything */
-    H5Sclose(dataspace);
-    H5Tclose(datatype);
-    H5Dclose(dataset1);
-    H5Fclose(file1);
-}
-
-
-Object Reference Reading Example
-
- -
-
-{
-    hid_t file1;
-    hid_t dataset1, tmp_dset;
-    hobj_ref_t data[10][10];
-    int i, j;
-
-    /* Open the file */
-    file1=H5Fopen("example.h5", H5F_ACC_RDWR, H5P_DEFAULT);
-
-    /* Open the dataset */
-    dataset1=H5Dopen(file1,"Dataset One");
-
-    /*
-     * Read the data from the dataset using default transfer properties.
-     * (we are assuming the dataset is the same and not querying the
-     * dimensions, etc.)
-     */
-    H5Dread(dataset1, H5T_STD_REF_OBJ, H5S_ALL, H5S_ALL, H5P_DEFAULT, data);
-
-    /* Analyze array of OIDs of linked datasets in the file */
-    /* (somewhat hokey and artificial, but demonstrates the point) */
-    for(i=0; i<10; i++)
-        for(j=0; j<10; j++)
-        {
-            if((tmp_dset=H5Rdereference(dataset1, H5R_OBJECT, data[i][j]))>=0)
-            {
-                /* ... work with the dereferenced dataset ... */
-
-                H5Dclose(tmp_dset);
-            } /* end if */
-        } /* end for */
-
-    /* Close everything */
-    H5Dclose(dataset1);
-    H5Fclose(file1);
-}
-
-
-Dataset Region Reference Writing Example
-
- -
-
-{
-    hid_t file1;
-    hid_t dataset1, dataset2;
-    hid_t datatype, dataspace1, dataspace2;
-    char buf[128];
-    href_t link;
-    href_t data[10][10];    /* HDF5 reference type */
-    int rank;
-    hsize_t dimsf[2];
-    hsize_t start[3], count[3];
-    int i, j;
-
-    /* Open the file */
-    file1=H5Fopen("example.h5", H5F_ACC_RDWR, H5P_DEFAULT);
-
-    /* Describe the size of the array and create the data space */
-    rank=2;
-    dimsf[0] = 10;
-    dimsf[1] = 10;
-    dataspace1 = H5Screate_simple(rank, dimsf, NULL);
-
-    /* Define Dataset Region Reference datatype */
-    datatype = H5Tcopy(H5T_STD_REF_DATAREG);
-
-    /* Create a dataset */
-    dataset1=H5Dcreate(file1,"Dataset One",datatype,dataspace1,H5P_DEFAULT);
-
-    /* Construct array of OIDs for other datasets in the file */
-    /* (somewhat artificial, but demonstrates the point) */
-    for(i=0; i<10; i++)
-        for(j=0; j<10; j++)
-        {
-            sprintf(buf,"/Group/Linked Set %d-%d",i,j);
-
-            /* Get the dataspace for the object to point to */
-            dataset2=H5Dopen(file1,buf);
-            dataspace2=H5Dget_space(dataset2);
-
-            /* Select the region to point to */
-            /* (could be different region for each pointer) */
-            start[0]=5; start[1]=4; start[2]=3;
-            count[0]=2; count[1]=4; count[2]=1;
-            H5Sselect_hyperslab(dataspace2,H5S_SELECT_SET,start,NULL,count,NULL);
-
-            if(H5Rcreate(&link,file1,buf,H5R_DATASET_REGION,dataspace2)>=0)
-                /* Store the reference */
-                data[i][j]=link;
-
-            H5Sclose(dataspace2);
-            H5Dclose(dataset2);
-        } /* end for */
-
-    /* Write the data to the dataset using default transfer properties. */
-    H5Dwrite(dataset1, H5T_STD_REF_DATAREG, H5S_ALL, H5S_ALL, H5P_DEFAULT, data);
-
-    /* Close everything */
-    H5Sclose(dataspace1);
-    H5Tclose(datatype);
-    H5Dclose(dataset1);
-    H5Fclose(file1);
-}
-
-
-Dataset Region Reference Reading Example
-
- -
-
-{
-    hid_t file1;
-    hid_t dataset1, tmp_dset;
-    hid_t dataspace;
-    href_t data[10][10];    /* HDF5 reference type */
-    int i, j;
-
-    /* Open the file */
-    file1=H5Fopen("example.h5", H5F_ACC_RDWR, H5P_DEFAULT);
-
-    /* Open the dataset */
-    dataset1=H5Dopen(file1,"Dataset One");
-
-    /*
-     * Read the data from the dataset using default transfer properties.
-     * (we are assuming the dataset is the same and not querying the
-     * dimensions, etc.)
-     */
-    H5Dread(dataset1, H5T_STD_REF_DATAREG, H5S_ALL, H5S_ALL, H5P_DEFAULT, data);
-
-    /* Analyze array of OIDs of linked datasets in the file */
-    /* (somewhat artificial, but demonstrates the point) */
-    for(i=0; i<10; i++)
-        for(j=0; j<10; j++)
-        {
-            if((tmp_dset=H5Rdereference(dataset1, H5R_DATASET_REGION, data[i][j]))>=0)
-            {
-                /* Get the dataspace with the pointed-to region selected */
-                dataspace=H5Rget_region(dataset1, H5R_DATASET_REGION, data[i][j]);
-
-                /* ... work with the selected region ... */
-
-                H5Sclose(dataspace);
-                H5Dclose(tmp_dset);
-            } /* end if */
-        } /* end for */
-
-    /* Close everything */
-    H5Dclose(dataset1);
-    H5Fclose(file1);
-}
-
-HDF5 documents and links -Introduction to HDF5 - - |
-
-
-HDF5 User's Guide -HDF5 Application Developer's Guide -HDF5 Reference Manual - - - - |
- This informal volume of technical notes is of interest to
- those who develop and maintain the HDF5 library and
- related, closely-coupled drivers.
- These notes are not generally of interest to applications developers
- and certainly not of interest to users.
- (Some of these documents may be somewhat out of date as they were
- working papers for the design process.)
- - - | ||
- Memory Management - | - A discussion of memory management issues in HDF5 - | |
- Memory Management and
- - Free Lists - | - Notes regarding the implementation of free lists and memory management - | |
Heap Management - | - A discussion of the H5H heap management functions - |
Raw Data Storage - | - A discussion of the storage of raw HDF5 data - | |
Virtual File Layer - | - A description of the HDF5 virtual file layer (VFL), - a public API for the implementation of custom I/O drivers - | |
List of VFL Functions - |
- A list of the VFL functions, H5FD*
- | |
I/O Pipeline - | - A description of the raw data I/O pipeline - | |
- Large Datasets on Small
- - Machines - | - A guide to accessing large datasets on small computers - | |
- Relocating a File Data
- - Structure - | - A discussion of the issues involved in moving file data structures once - they have been cached in memory - | |
- Working with External Files - | - A guide to the use of multiple files with HDF5 - | |
Object Headers - | - A discussion of the H5O object header functions - | |
- Symbol Table Caching Issues - | - A discussion of issues involving caching of object header messages in - symbol table entries - | |
- HDF4/HDF5 Compatibility - | - A discussion of compatibility issues between HDF4 and HDF5 - | |
- Testing the Chunked Layout
- - of HDF5 - | - A white paper discussing the motivation to implement raw data chunking - in the HDF5 library - | |
Library Maintenance - | - A discussion of HDF5 library maintenance issues - | |
Code Review - | - Code Review 1 and 2 - | |
- Release Version Numbers - | - A description of HDF5 release version numbers - | |
Naming Schemes - | - A discussion of naming schemes for HDF5 library modules, functions, - datatypes, etc. - | |
-Thread Safe HDF5 Library
- - Implementation - | - A report on the implementation of a thread safe HDF5 library. - | |
-Using HDF5 with OpenMP - | - A short report on using HDF5 with OpenMP. - | |
HDF5 Software Controls - | - Descriptions of the HDF5 knobs and controls, such as the - environment variables and settings that control the functionality - of the HDF5 libraries and tools. - | |
Daily Test Explained - | - An explanation of the daily testing conducted for HDF software. - |
Test Review - | - Results of reviewing tests for API functions. - | |
Basic Performance Tools - | - A description of the three basic performance tools (chunk, iopipe, overhead). - | |
Variable-Length Datatype Info - | - Description of various aspects of using variable-length datatypes in HDF5. - | |
Reserved File Address Space - | - Description of HDF5's internal system for ensuring that files stay within their address space. - | |
Data Transform Report - | - Report of the Data Transform implementation. - | |
Automake Use Cases - | - Simple explanations of how to make some common changes to HDF5's Automake-generated Makefiles.am. - |
-HDF Help Desk
- -Describes HDF5 Release 1.7, the unreleased development branch; working toward HDF5 Release 1.8.0 - - -Last modified: 15 December 2004 - - |
Suppose you need to make a minor change to the Makefile in the test directory
-(hdf5/test/Makefile
). You have checked out hdf5 from the CVS repository into
-~/scratch/hdf5
. You want to build the library in a directory named
-~/scratch/build
.
-First, edit the Makefile.am in the source tree. You must make any changes in the Makefile.am,
-not the Makefile, since the Makefile is automatically generated.
cd ~/scratch/hdf5/test
-vi Makefile.am
Now, go to the root of the source tree and run the reconfigure script, which updates the -source tree. It will create a new Makefile.in in the test directory with your changes.
- -cd ~/scratch/hdf5
-./bin/reconfigure
After running bin/reconfigure
, you will want to test your change. Go to
-~/scratch/build
and run configure
.
cd ~/scratch/build
-
-../hdf5/configure
-
-make check
Configure generates Makefiles from the Makefiles.in in the source tree. The dependencies are:
- -Makefile.am -> (bin/reconfigure) -> Makefile.in -> (configure) -> Makefile
Reconfigure should also be used when any change is made to configure.in.
-Suppose you want to add the source file h5testfoo.c
to the HDF5 test
-library in the test directory. You open up test/Makefile.am
in your
-favorite text editor and scroll down until you see the line:
libh5test_la_SOURCES=h5test.c testframe.c
Just add h5testfoo.c
to the list of sources. You're done!
-Now run bin/reconfigure
to create a new Makefile.in from the Makefile.am you just
-edited.
Suppose you want to create a new test executable named newtest
with one
-source file, newtest.c
. You open up test/Makefile.am
and find
-the line
TEST_PROG=testhdf5 lheap ohdr ...
Just add newtest
to the list of programs. That's it! Automake will by
-default guess that your program newtest
has one source file named
-newtest.c
.
-Now run bin/reconfigure
to update the Makefile.in.
Suppose you want to create a new test executable named newertest
with
-several source files. You open up test/Makefile.am
as before and find the line
TEST_PROG=testhdf5 lheap ohdr ...
Add newertest
to the list of programs.
-Now you need to tell Automake how to build newertest. Add a new line below
-TEST_PROG
:
newtest_SOURCES = source1.c source2.c source3.c
You don't need to mention header files, as these will be automatically detected.
-Now run bin/reconfigure
to update the Makefile.in.
To add the directory for a new tool, h5merge
, go to the Makefile.am
-in the tools directory (the parent directory of the directory you want to add).
-Find the line that reads
SUBDIRS=lib h5dump...
Add h5merge
to this list of subdirectories.
-Now you probably want to create a Makefile.am in the h5merge directory. A good starting
-point for this Makefile.am might be the sample Makefile.am in the config directory
-(config/Makefile.am.blank
). Alternately, you could copy the Makefile.am
-from another directory.
-Once you have your new Makefile.am in place, edit configure.in
in the root
-directory. Near the end of the file is a list of files generated by configure.
-Add tools/h5merge/Makefile.in
to this list.
-Now run bin/reconfigure
. This will update configure and generate a Makefile.in in the
-tools/h5merge
directory. Don't forget to add both the Makefile.am and the Makefile.in to
-CVS, and to update the manifest.
Suppose you only want to compile a program when HDF5 is configured to run in
-parallel--for example, a parallel version of h5repack called h5prepack
.
-Open up the h5repack Makefile.am
-The simple solution is:
if BUILD_PARALLEL_CONDITIONAL
- H5PREPACK=h5prepack
-endif
Now the variable $H5PREPACK
will be "h5prepack" if parallel is
-enabled and "" if parallel is disabled. Add $H5PREPACK
to the list of
-programs to be built:
bin_PROGRAMS=h5repack $(H5PREPACK)
Add sources for this program as usual:
- -h5prepack_SOURCES=...
Don't forget to run bin/reconfigure
when you're done!
Automake conditionals can be a very powerful tool. Suppose that instead of building
-two versions of h5repack during a parallel build, you want to change the name of
-the tool depending on whether or not HDF5 is configured to run in parallel--you
-want to create either h5repack or h5prepack, but not both.
-Open up the h5repack Makefile.am and use an automake conditional:
if BUILD_PARALLEL_CONDITIONAL
- H5REPACK_NAME=h5prepack
-else
- H5REPACK_NAME=h5repack
-endif
-bin_PROGRAMS=$(H5REPACK_NAME)
Now you only build one program, but the name of that program changes. You still need -to define sources for both h5repack and h5prepack, but you needn't type them out twice if -they are the same:
- -h5repack_SOURCES=...
-h5prepack_SOURCES=$(h5repack_SOURCES)
Don't forget to run bin/reconfigure
when you're done!
Suppose you want to add a new library to the HDF5 build tree, libfoo. The procedure for -building libraries is very similar to that for building programs:
- -lib_LTLIBRARIES=libfoo.la
-libfoo_la_SOURCES=sourcefoo.c sourcefootwo.c
This library will be installed in the lib directory when a user types
-"make install
".
-You might instead be building a convenience library for testing purposes (like
-libh5test.la
) and not want it to be installed. If this is the case, you
-would type
check_LTLIBRARIES=libfoo.la
-instead of
-lib_LTLIBRARIES=libfoo.la
To make it easier for other directories to link to your library,
-you might want to assign its path to a variable in all HDF5 Makefiles. You can
-make changes to all Makefiles by editing config/commence.am
and adding a line
-like
LIBFOO=$(top_builddir)/foo/src/libfoo.la
config/commence.am
is textually included in all Makefiles.am when automake
-processes them.
-As always, if you change a Makefile.am or config/commence.am
, don't forget to run
-bin/reconfigure
.
If you have added or removed a function from HDF5, or if you have changed a function
-signature, you must indicate this by updating the file lt_vers.am
located in
-the config
directory.
-If you have changed the API at all, increment LT_VERS_INTERFACE
and set
-LT_VERS_REVISION
to zero.
-If you have added functions but not altered or removed existing ones, also increment
-LT_VERS_AGE
.
-If instead you have altered or removed any functions, reset LT_VERS_AGE
to
-zero.
Times reads and writes to an HDF5 2-d dataset and compares that with - reads and writes using POSIX I/O. Reports seven measurements in - terms of CPU user time, CPU system time, elapsed time, and - bandwidth: - - -
This is a pretty stupid performance test. It accesses the same area - of file and memory over and over and the file size is way too - small. But it is good at showing how much overhead there is in the - library itself. - - -
Determines how efficient the raw data cache is for various access - patterns of a chunked dataset, both reading and writing. The access - pattern is either (a) we access the entire dataset by moving a window - across and down a 2-d dataset in row-major order a full window - height and width at a time, or (b) we access part of a dataset by moving - the window diagonally from the (0,0) corner to the opposite corner - by half the window height and width at a time. The window is - measured in terms of the chunk size. - - -
The result is:
-
A table written to stdout that contains the window size as a
- fraction of the chunk size and the efficiency of the cache (i.e.,
- number of bytes accessed by H5Dread() or H5Dwrite() divided by the
- number of bytes of the dataset actually read or written by lower
- layers).
-
-
-
A gnuplot script and data files which can be displayed by running - gnuplot and typing the command `load "x-gnuplot"'. - - -
Measures the overhead used by the B-tree for indexing chunked - datasets. As data is written to a chunked dataset the B-tree - grows and its nodes get split. When a node splits one of three - ratios are used to determine how many items from the original node - go into the new left and right nodes, and these ratios affect the - total size of the B-tree in a way that depends on the order that - data is written to the dataset. - - -
Invoke as `overhead usage' for more information. -
The HDF5 library is able to handle files larger than the
- maximum file size, and datasets larger than the maximum memory
- size. For instance, a machine where sizeof(off_t)
- and sizeof(size_t)
are both four bytes can handle
- datasets and files as large as 18x10^18 bytes. However, most
- Unix systems limit the number of concurrently open files, so a
- practical file size limit is closer to 512GB or 1TB.
-
-
Two "tricks" must be employed on these small systems in order
- to store large datasets. The first trick circumvents the
- off_t
file size limit and the second circumvents
- the size_t
main memory limit.
-
-
Systems that have 64-bit file addresses will be able to access - those files automatically. One should see the following output - from configure: - -
-
-
-checking size of off_t... 8
-
Also, some 32-bit operating systems have special file systems - that can support large (>2GB) files and HDF5 will detect - these and use them automatically. If this is the case, the - output from configure will show: - -
-
-
-checking for lseek64... yes
-checking for fseek64... yes
-
Otherwise one must use an HDF5 file family. Such a family is
- created by setting file family properties in a file access
- property list and then supplying a file name that includes a
- printf
-style integer format. For instance:
-
-
-
-
-hid_t plist, file;
-plist = H5Pcreate (H5P_FILE_ACCESS);
-H5Pset_family (plist, 1<<30, H5P_DEFAULT);
-file = H5Fcreate ("big%03d.h5", H5F_ACC_TRUNC, H5P_DEFAULT, plist);
-
The second argument (1<<30
) to
- H5Pset_family()
indicates that the family members
- are to be 2^30 bytes (1GB) each although we could have used any
- reasonably large value. In general, family members cannot be
- 2GB because writes to byte number 2,147,483,647 will fail, so
- the largest safe value for a family member is 2,147,483,647.
- HDF5 will create family members on demand as the HDF5 address
- space increases, but since most Unix systems limit the number of
- concurrently open files the effective maximum size of the HDF5
- address space will be limited (the system on which this was
- developed allows 1024 open files, so if each family member is
- approx 2GB then the largest HDF5 file is approx 2TB).
-
-
If the effective HDF5 address space is limited then one may be - able to store datasets as external datasets each spanning - multiple files of any length since HDF5 opens external dataset - files one at a time. To arrange storage for a 5TB dataset split - among 1GB files one could say: - -
-
-
-hid_t plist = H5Pcreate (H5P_DATASET_CREATE);
-for (i=0; i<5*1024; i++) {
- sprintf (name, "velocity-%04d.raw", i);
- H5Pset_external (plist, name, 0, (size_t)1<<30);
-}
-
The second limit which must be overcome is that of
- sizeof(size_t)
. HDF5 defines a data type called
- hsize_t
which is used for sizes of datasets and is,
- by default, defined as unsigned long long
.
-
-
To create a dataset with 8*2^30 4-byte integers for a total of
- 32GB one first creates the dataspace. We give two examples
- here: a 4-dimensional dataset whose dimension sizes are smaller
- than the maximum value of a size_t
, and a
- 1-dimensional dataset whose dimension size is too large to fit
- in a size_t
.
-
-
-
-
-hsize_t size1[4] = {8, 1024, 1024, 1024};
-hid_t space1 = H5Screate_simple (4, size1, size1);
-
-hsize_t size2[1] = {8589934592LL};
-hid_t space2 = H5Screate_simple (1, size2, size2);
-
However, the LL
suffix is not portable, so it may
- be better to replace the number with
- (hsize_t)8*1024*1024*1024
.
-
-
For compilers that don't support long long
large
- datasets will not be possible. The library performs too much
- arithmetic on hsize_t
types to make the use of a
- struct feasible.
-
-
These are the results of studying the chunked layout policy in
- HDF5. A 1000 by 1000 array of integers was written to a file
- dataset, extending the dataset with each write to create, in the
- end, a 5000 by 5000 array of 4-byte integers for a total data
- storage size of 100 million bytes.
-
-
After the array was written, it was read back in blocks that were 500 by 500 elements in row-major order (that is, the top-left quadrant of output block one, then the top-right quadrant of output block one, then the top-left quadrant of output block two, etc.).
I tried to answer two questions:
I started with chunk sizes that were multiples of the read block size, i.e., k*(500, 500).
-
| Chunk Size (elements) | Meta Data Overhead (ppm) | Raw Data Overhead (ppm) |
|---|---|---|
| 500 by 500 | 85.84 | 0.00 |
| 1000 by 1000 | 23.08 | 0.00 |
| 5000 by 1000 | 23.08 | 0.00 |
| 250 by 250 | 253.30 | 0.00 |
| 499 by 499 | 85.84 | 205164.84 |
-
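The 499-by-499 row can be reproduced by counting the partially filled edge chunks. A minimal sketch, assuming chunks are always allocated in full (`edge_overhead` is a hypothetical helper, not library code):

```c
/* Bytes allocated beyond the raw data when a dim0-by-dim1 array of
 * elem_size-byte elements is stored in chunk0-by-chunk1 chunks:
 * partial chunks along the edges are allocated in full. */
unsigned long long
edge_overhead(unsigned long long dim0, unsigned long long dim1,
              unsigned long long chunk0, unsigned long long chunk1,
              unsigned long long elem_size)
{
    unsigned long long nchunks0 = (dim0 + chunk0 - 1) / chunk0;  /* ceiling */
    unsigned long long nchunks1 = (dim1 + chunk1 - 1) / chunk1;
    unsigned long long allocated = nchunks0 * chunk0 * nchunks1 * chunk1;

    return (allocated - dim0 * dim1) * elem_size;
}
```

For the 5000 by 5000 array of 4-byte integers, 499 by 499 chunks cost 20,516,484 extra bytes (205164.84 ppm of the 100-million-byte raw size), while 500 by 500 chunks divide evenly and cost nothing.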
The first half of Figure 2 shows output to the file while the second half shows input. Each dot represents a file-level I/O request and the lines that connect the dots are for visual clarity. The size of the request is not indicated in the graph. The output block size is four times the chunk size which results in four file-level write requests per block for a total of 100 requests. Since file space for the chunks was allocated in output order, and the input block size is 1/4 the output block size, the input shows a staircase effect. Each input request results in one file-level read request. The downward spike at about the 60-millionth byte is probably the result of a cache miss for the B-tree and the downward spike at the end is probably a cache flush or file boot block update.
-
In this test I increased the chunk size to match the output block size and one can see from the first half of the graph that 25 file-level write requests were issued, one for each output block. The read half of the test shows that four times the amount of data was read as written. This results from the fact that HDF5 must read the entire chunk for any request that falls within that chunk, which is done because (1) if the data is compressed the entire chunk must be decompressed, and (2) the library assumes that a chunk size was chosen to optimize disk performance.
-
Increasing the chunk size further results in even worse performance since both the read and write halves of the test are re-reading and re-writing vast amounts of data. This proves that one should be careful that chunk sizes are not much larger than the typical partial I/O request.
-
If the chunk size is decreased then the amount of data transferred between the disk and library is optimal for no caching, but the amount of meta data required to describe the chunk locations increases to 250 parts per million. One can also see that the final downward spike contains more file-level write requests as the meta data is flushed to disk just before the file is closed.
-
This test shows the result of choosing a chunk size which is close to the I/O block size. Because the total size of the array isn't a multiple of the chunk size, the library allocates an extra zone of chunks around the top and right edges of the array which are only partially filled. This results in 20,516,484 extra bytes of storage, a 20% increase in the total raw data storage size. But the amount of meta data overhead is the same as for the 500 by 500 test. In addition, the mismatch causes entire chunks to be read in order to update a few elements along the edge of the chunk, which results in a 3.6-fold increase in the amount of data transferred.
This is one of the functions exported from the H5B.c file that implements a B-link-tree class without worrying about concurrency yet (thus the `Note:' in the function prologue). The H5B.c file provides the basic machinery for operating on generic B-trees, but it isn't much use by itself. Various subclasses of the B-tree (like symbol tables or indirect storage) provide their own interface and back end to this function. For instance, H5G_stab_find() takes a symbol table OID and a name and calls H5B_find() with an appropriate udata argument that eventually gets passed to the H5G_stab_find() function.
-
-
-
-
 1 /*-------------------------------------------------------------------------
 2  * Function:    H5B_find
 3  *
 4  * Purpose:     Locate the specified information in a B-tree and return
 5  *              that information by filling in fields of the caller-supplied
 6  *              UDATA pointer depending on the type of leaf node
 7  *              requested.  The UDATA can point to additional data passed
 8  *              to the key comparison function.
 9  *
10  * Note:        This function does not follow the left/right sibling
11  *              pointers since it assumes that all nodes can be reached
12  *              from the parent node.
13  *
14  * Return:      Success:  SUCCEED if found, values returned through the
15  *                        UDATA argument.
16  *
17  *              Failure:  FAIL if not found, UDATA is undefined.
18  *
19  * Programmer:  Robb Matzke
20  *              matzke@llnl.gov
21  *              Jun 23 1997
22  *
23  * Modifications:
24  *
25  *-------------------------------------------------------------------------
26  */
27 herr_t
28 H5B_find (H5F_t *f, const H5B_class_t *type, const haddr_t *addr, void *udata)
29 {
30    H5B_t  *bt=NULL;
31    intn   idx=-1, lt=0, rt, cmp=1;
32    int    ret_value = FAIL;
-
All pointer arguments are initialized when defined. I don't worry much about non-pointers because it's usually obvious when the value isn't initialized.
-
-
33
34    FUNC_ENTER (H5B_find, NULL, FAIL);
35
36    /*
37     * Check arguments.
38     */
39    assert (f);
40    assert (type);
41    assert (type->decode);
42    assert (type->cmp3);
43    assert (type->found);
44    assert (addr && H5F_addr_defined (addr));
-
I use assert to check invariant conditions. At this level of the library, none of these assertions should fail unless something is majorly wrong. The arguments should have already been checked by higher layers. It also provides documentation about what arguments might be optional.
-
-
-
45
46    /*
47     * Perform a binary search to locate the child which contains
48     * the thing for which we're searching.
49     */
50    if (NULL==(bt=H5AC_protect (f, H5AC_BT, addr, type, udata))) {
51       HGOTO_ERROR (H5E_BTREE, H5E_CANTLOAD, FAIL);
52    }
-
You'll see this quite often in the low-level stuff and it's documented in the H5AC.c file. The H5AC_protect ensures that the B-tree node (which inherits from the H5AC package) whose OID is addr is locked into memory for the duration of this function (see the H5AC_unprotect on line 90). Most likely, if this node has been accessed in the not-too-distant past, it will still be in memory and the H5AC_protect is almost a no-op. If cache debugging is compiled in, then the protect also prevents other parts of the library from accessing the node while this function is protecting it, so this function can allow the node to be in an inconsistent state while calling other parts of the library.
-
-
The alternative is to call the slightly cheaper H5AC_find and assume that the pointer it returns is valid only until some other library function is called, but since we're accessing the pointer throughout this function, I chose to use the simpler protect scheme. All protected objects must be unprotected before the file is closed, thus the use of HGOTO_ERROR instead of HRETURN_ERROR.
-
-
-
-
53    rt = bt->nchildren;
54
55    while (lt<rt && cmp) {
56       idx = (lt + rt) / 2;
57       if (H5B_decode_keys (f, bt, idx)<0) {
58          HGOTO_ERROR (H5E_BTREE, H5E_CANTDECODE, FAIL);
59       }
60
61       /* compare */
62       if ((cmp=(type->cmp3)(f, bt->key[idx].nkey, udata,
63                             bt->key[idx+1].nkey))<0) {
64          rt = idx;
65       } else {
66          lt = idx+1;
67       }
68    }
69    if (cmp) {
70       HGOTO_ERROR (H5E_BTREE, H5E_NOTFOUND, FAIL);
71    }
-
Code is arranged in paragraphs with a comment starting each paragraph. The previous paragraph is a standard binary search algorithm. The (type->cmp3)() is an indirect function call into the subclass of the B-tree. All indirect function calls have the function part in parentheses to document that it's indirect (quite obvious here, but not so obvious when the function is a variable).
-
-
It's also my standard practice to have side effects in conditional expressions because I can write code faster and it's more apparent to me what the condition is testing. But if I have an assignment in a conditional expr, then I use an extra set of parens even if they're not required (usually they are, as in this case) so it's clear that I meant = instead of ==.
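Stripped of the caching and error machinery, lines 55-71 are a binary search over key ranges. The following standalone sketch uses integer keys and a three-way comparison standing in for the subclass's cmp3 callback (both are illustrative assumptions, not library code):

```c
/* Report whether target falls before, inside, or after [lo, hi),
 * playing the role of the B-tree subclass's cmp3 callback. */
int
cmp3_int(int lo, int target, int hi)
{
    if (target < lo)  return -1;   /* descend left of this child  */
    if (target >= hi) return  1;   /* descend right of this child */
    return 0;                      /* target within this child    */
}

/* Return the index of the child whose key range [keys[i], keys[i+1])
 * contains target, or -1 if no child contains it. */
int
find_child(const int *keys, int nchildren, int target)
{
    int lt = 0, rt = nchildren, idx = -1, cmp = 1;

    while (lt < rt && cmp) {
        idx = (lt + rt) / 2;
        if ((cmp = cmp3_int(keys[idx], target, keys[idx + 1])) < 0) {
            rt = idx;              /* search the left half  */
        } else {
            lt = idx + 1;          /* search the right half */
        }
    }
    return cmp ? -1 : idx;
}
```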
-
-
-
-
72
73    /*
74     * Follow the link to the subtree or to the data node.
75     */
76    assert (idx>=0 && idx<bt->nchildren);
Here I broke the "side effect in conditional" rule, which I sometimes do if the expression is so long that the <0 gets lost at the end. Another thing to note is that success/failure is always determined by comparing with zero instead of SUCCEED or FAIL. I do this because occasionally one might want to return other meaningful values (always non-negative) or distinguish between various types of failure (always negative).
-
-
-
-
88
89 done:
90    if (bt && H5AC_unprotect (f, H5AC_BT, addr, bt)<0) {
91       HRETURN_ERROR (H5E_BTREE, H5E_PROTECT, FAIL);
92    }
93    FUNC_LEAVE (ret_value);
94 }
-
For lack of a better way to handle errors during error cleanup, I just call the HRETURN_ERROR macro even though it will make the error stack not quite right. I also use short-circuiting boolean operators instead of nested if statements since that's standard C practice.
-
-
The following code is an API function from the H5F package...
-
-
 1 /*--------------------------------------------------------------------------
 2 NAME
 3    H5Fflush
 4
 5 PURPOSE
 6    Flush all cached data to disk and optionally invalidates all cached
 7    data.
 8
 9 USAGE
10    herr_t H5Fflush(fid, invalidate)
11       hid_t fid;           IN: File ID of file to close.
12       hbool_t invalidate;  IN: Invalidate all of the cache?
13
14 ERRORS
15    ARGS  BADTYPE    Not a file atom.
16    ATOM  BADATOM    Can't get file struct.
17    CACHE CANTFLUSH  Flush failed.
18
19 RETURNS
20    SUCCEED/FAIL
21
22 DESCRIPTION
23    This function flushes all cached data to disk and, if INVALIDATE
24    is non-zero, removes cached objects from the cache so they must be
25    re-read from the file on the next access to the object.
26
27 MODIFICATIONS:
28 --------------------------------------------------------------------------*/
-
An API prologue is used for each API function instead of my normal function prologue. I use the prologue from Code Review 1 for non-API functions because it's more suited to C programmers, it requires less work to keep it synchronized with the code, and I have better editing tools for it.
-
-
29 herr_t
30 H5Fflush (hid_t fid, hbool_t invalidate)
31 {
32    H5F_t *file = NULL;
33
34    FUNC_ENTER (H5Fflush, H5F_init_interface, FAIL);
35    H5ECLEAR;
-
API functions are never called internally, therefore I always clear the error stack before doing anything.
-
-
36
37    /* check arguments */
38    if (H5_FILE!=H5Aatom_group (fid)) {
39       HRETURN_ERROR (H5E_ARGS, H5E_BADTYPE, FAIL); /*not a file atom*/
40    }
41    if (NULL==(file=H5Aatom_object (fid))) {
42       HRETURN_ERROR (H5E_ATOM, H5E_BADATOM, FAIL); /*can't get file struct*/
43    }
-
If something is wrong with the arguments then we raise an error. We never assert arguments at this level. We also convert atoms to pointers since atoms are really just a pointer-hiding mechanism. Functions that can be called internally always have pointer arguments instead of atoms because (1) then they don't have to always convert atoms to pointers, and (2) the various pointer data types provide more documentation and type checking than just an hid_t type.
-
-
-
-
44
45    /* do work */
46    if (H5F_flush (file, invalidate)<0) {
47       HRETURN_ERROR (H5E_CACHE, H5E_CANTFLUSH, FAIL); /*flush failed*/
48    }
-
An internal version of the function does the real work. That internal version calls assert to check/document its arguments and can be called from other library functions.
-
-
-
-
49
50    FUNC_LEAVE (SUCCEED);
51 }
-
$HOME/snapshots-XXX is where the daily tests occur.

· current/          latest version
· previous/         last released version
· log/              log files of most recent tests
· log/OLD/          previous log files
· TestDir/<host>/   build and test area of machine <host> supporting srcdir build
· allhostfile       holds all test host names
· snaptest.cfg      holds various test configurations
· release_always    always make a snapshot release tarball if all tests pass (implemented for hdf4 daily tests only)
· release_asap      make one snapshot release tarball if all tests pass (file is renamed after release)
· release_not       do not make a snapshot release tarball even if all tests pass

This shows the steps of the daily tests for the HDF5 development version (currently v1.5). HDF5 v1.4 and HDF4 are similar. snapshots-XXX here means $HOME/snapshots-hdf5/.
- -
· CVS updates some documents on websites.
· CVS updates $HOME/HDF5/v_1_5/hdf5/ (the bin/runtest in it is ready to be used in the next step).
· Clean up the snapshots-XXX/log area:
  a. Purge older files from OLD/.
  b. Move log files from yesterday to OLD/.
· cd $HOME/HDF5/v_1_5/hdf5
· Launch "bin/runtest -all" from eirene.
· CVS updates $HOME/snapshots-XXX/current (the commands in bin/ are now ready to be used in the following steps).
· Execute snapshots-XXX/current/bin/chkmanifest to check the MANIFEST file.
· Diff the current/ and previous/ versions. If no significant differences are found, there is no need to run the daily test on each host, and no snapshot release tarball will be made.
· If significant differences are found, prepare to run the daily tests for all hosts.
· Read allhostfile for the test hosts. For each host:
  a. Use ping, then rsh/ssh, to make sure the host is online and responding.
  b. If srcdir is supported, fork off the following command for all hosts and wait for them to finish; otherwise, launch one at a time.
  c. rsh host "cd $HOME/snapshots-XXX/hdf5; bin/runtest" >& #<host>
· Since "-all" is not used, this launches the test for this host only.
· Read snapshots-XXX/snaptest.cfg and look for configuration entries for this host.
· For each configuration, run snapshots-XXX/bin/snapshot with that configuration.
· Configure, build, and test results are stored in log/<host>_YYMMDD_HHMM (e.g., arabica_021024_0019).
· Gather all the #<host> files and other summary reports into one daily report (e.g., DailyHDF5Tests-eirene_021024).
· Check the tail of log/<host>_YYMMDD_HHMM to make sure it completed properly.
· Do a snapshot release if
      test-succeeded &&
      release-not-is-not-present &&
      ( today-is-saturday || release-asap-is-requested )
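The release rule reduces to a single boolean predicate. A sketch under the flag-file semantics described above (`should_release` is hypothetical, not part of the actual scripts):

```c
#include <stdbool.h>

/* Snapshot-release decision: all tests passed, no release_not flag
 * present, and either it is Saturday or a release_asap was requested. */
bool
should_release(bool tests_passed, bool release_not_present,
               bool release_asap, bool is_saturday)
{
    return tests_passed && !release_not_present &&
           (is_saturday || release_asap);
}
```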
· HDF4 does not know how to create a release tarball. Its release process only renames current/ as previous/ to reduce future test time. It also supports an option, release-always, which tells the daily test to make a release whenever all tests pass. The release-asap option only makes the release once, and the file is renamed, blocking any future ASAP release until someone turns it on again.
Robb Matzke first set up the snapshot directory structure and created fairly complete versions of the commands snaptest, release, and h5ver. The initial version was for testing on one host with the default configuration. I just added more bells and whistles. Jim Barlow helped me authenticate a cron task with a keytab.
----

First created by Albert Cheng, October 24, 2002. Revised October 28, 2002.
Arithmetic Data Transforms

Leon Arber, Albert Cheng, William Wendling[1]
Data can be stored and represented in many different ways. In most fields of science, for example, the metric system is used for storing all data. However, many fields of engineering still use the English system. In such scenarios, there needs to be a way to easily perform arbitrary scaling of data. The data transforms provide just such functionality. They allow arbitrary arithmetic expressions to be applied to a dataset during read and write operations. This means that data can be stored in Celsius in a data file, but read in and automatically converted to Fahrenheit. Alternatively, data that is obtained in Fahrenheit can be written out to the data file in Celsius.
Although a user can always manually modify the data they read and write, having the data transform as a property means that the user doesn't have to worry about forgetting to call the conversion function or even writing it in the first place.

The data transform functionality is implemented as a property that is set on a dataset transfer property list. There are two functions available: one for setting the transform and another for finding out what transform, if any, is currently set.
The function for setting the transform is:

    herr_t H5Pset_data_transform(hid_t plist_id, const char* expression)

plist_id is the identifier of the dataset transfer property list on which the data transform property should be set.

expression is a pointer to a string of the form "(5/9.0)*(x-32)" which describes the transform.

The function for getting the transform is:

    ssize_t H5Pget_data_transform(hid_t plist_id, char* expression, size_t size)

plist_id is the identifier of the dataset transfer property list which will be queried for its data transform property.

expression is either NULL or a pointer to memory where the data transform string, if present, will be copied.

size is the number of bytes to copy from the transform string into expression. H5Pget_data_transform will never copy more than the length of the transform expression.
Data transforms are set by passing a pointer to a string, which is the data transform expression. This string describes what sort of arithmetic transform should be done during data transfer on read or write. The string is a standard mathematical expression, as would be entered into something like MATLAB.
Expressions are defined by the following context-free grammar:

    expr   := term | term + term | term - term
    term   := factor | factor * factor | factor / factor
    factor := number | symbol | - factor | + factor | ( expr )
    symbol := [a-zA-Z][a-zA-Z0-9]*
    number := INT | FLOAT

where INT is interpreted as a C long int and FLOAT is interpreted as a C double.
This grammar allows for order of operations (multiplication and division take precedence over addition and subtraction), floating and integer constants, and grouping of terms by way of parentheses. Although the grammar allows symbols to be arbitrary strings, this documentation will always use 'x' for symbols.
Within a transform expression, the symbol represents a variable which contains the data to be manipulated. For this reason, the terms symbol and variable will be used interchangeably. Furthermore, in the current implementation of data transforms, all symbols appearing in an expression are interpreted as referring to the same dataset. So, an expression such as "alpha + 5" is equivalent to "x+5", and an expression such as "alpha + 3*beta + 5" is equivalent to "alpha + 3*alpha + 5", which is equivalent to "4*x + 5".
When the data transform property of a dataset transfer property list is set, a parse tree of the expression is immediately generated and its root is saved in the property list. The generation of the parse tree involves several steps.
First, the expression is reduced, so as to simplify the final parse tree and speed up the transform operations. Expressions such as "(5/9.0)*(x-32)" will be reduced to ".555555*(x-32)". While further simplification is algebraically possible, the data transform code will only reduce simple trivial arithmetic operations.
Then, this reduced expression is parsed into a set of tokens, from which the parse tree is generated. From the expression "(5/9.0)*(x-32)", for example, the following parse tree would be created:

              *
             / \
      .555555   -
               / \
              x   32
When a read is performed with a dataset transfer property list that has the data transform property set, the following sequence of events occurs:

Step 2 works like this:

If the transform expression is "(5/9.0)*(x-32)", with the parse tree shown above, and the buffer contains [-10 0 10 50 100], then the intermediate steps involved in the transform are:
Note that the original data in the file was not modified.
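Applied elementwise, the transform behaves like the following sketch, where `f_to_c` is a hypothetical stand-in for evaluating the parse tree over the read buffer (the buffer is modified in place; the file copy is untouched):

```c
/* Evaluate "(5/9.0)*(x-32)" over each element of the buffer, as the
 * transform does during a read. */
void
f_to_c(double *buf, int n)
{
    int i;

    for (i = 0; i < n; i++)
        buf[i] = (5 / 9.0) * (buf[i] - 32);
}
```

For the buffer [-10 0 10 50 100] this yields approximately [-23.33 -17.78 -12.22 10 37.78].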
The process of a write works much the same way, but in the reverse order. When a file is written out with a dataset transfer property list that has the data transform property set:
- -- -
Step 2 works exactly as in the read example. Note that the user's data is not modified. Also, since the transform property is not saved with the dataset, in order to recover the original data a user must know the inverse of the transform that was applied. In the case of "(5/9.0)*(x-32)" this inverse would be "(9/5.0)*x + 32". Reading from a data file that had previously been written out with a transform string of "(5/9.0)*(x-32)", using a transform string of "(9/5.0)*x + 32", would effectively recover the original data the author of the file had been using.[2]
Because the data transform sits between the file space and the memory space and modifies data in transit, various effects can occur that are the result of the typecasting that may be involved in the operations. In addition, because constants in the data transform expression can be either INT or FLOAT, the data transform itself can be a source of truncation.

In the example above, the reason that the transform expression is always written as "(5/9.0)*(x-32)" is because, if it were written without a floating point constant, it would always evaluate to 0. The expression "(5/9)*(x-32)" would, when set, get reduced to "0*(x-32)" because both 5 and 9 would get read as C long ints and, when divided, the result would get truncated to 0. This resulting expression, "0*(x-32)", would cause any data read or written to be saved as an array of all 0's.

Another source of unpredictability caused by truncation occurs when intermediate data is of a type that is more precise than the destination memory type. For example, if the transform expression "(1/2.0)*x" is applied to data read from a file into an integer memory buffer, the results can be unpredictable. If the source array is [1 2 3 4], then the resulting array could be either [0 1 1 2] or [0 0 1 1], depending on the floating point unit of the processor. Note that this result is independent of the source data type. It doesn't matter if the source data is integer or floating point because the 2.0 in the data transform expression will cause everything to be evaluated in a floating-point context.

When setting transform expressions, care must be taken to ensure that the truncation does not adversely affect the data. A workaround for the possible effects of a transform such as "(1/2.0)*x" would be to use the transform expression "(1/2.0)*x + 0.5" instead of the original. This will ensure that all truncation rounds up, with the possible exception of a boundary condition.
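Both effects follow directly from C arithmetic on the constants, as this sketch shows (the helper names are illustrative only, not library code):

```c
/* "(5/9)*(x-32)": 5/9 is integer division, truncates to 0, and the
 * whole transform collapses to 0; "(5/9.0)" promotes to double. */
double bad_f_to_c(double x)  { return (5 / 9) * (x - 32); }
double good_f_to_c(double x) { return (5 / 9.0) * (x - 32); }

/* "(1/2.0)*x + 0.5": adding 0.5 before the value is truncated into an
 * integer destination makes the truncation round to nearest. */
int halve_rounded(int x) { return (int)((1 / 2.0) * x + 0.5); }
```

With the +0.5 workaround the source [1 2 3 4] lands as [1 1 2 2] instead of possibly [0 1 1 2] or [0 0 1 1].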
The following code snippet shows an example using a data transform, where the data transform property is set and a write is performed. Then, a read is performed with no data transform property set. It is assumed that dataset is a dataset that has been opened, and windchillF and windchillC are both arrays that hold floating point data. The result of this snippet is to fill windchillC with the data in windchillF, converted to Celsius.
hid_t dxpl_id_f_to_c;
const char* f_to_c = "(5/9.0)*(x-32)";

/* Create the dataset transfer property list */
dxpl_id_f_to_c = H5Pcreate(H5P_DATASET_XFER);

/* Set the data transform to be used on the write */
H5Pset_data_transform(dxpl_id_f_to_c, f_to_c);

/*
 * Write the data to the dataset using the f_to_c transform
 */
status = H5Dwrite(dataset, H5T_NATIVE_FLOAT, H5S_ALL, H5S_ALL,
                  dxpl_id_f_to_c, windchillF);

/* Read the data back with no data transform */
H5Dread(dataset, H5T_NATIVE_FLOAT, H5S_ALL, H5S_ALL,
        H5P_DEFAULT, windchillC);
Querying the data transform string of a dataset transfer property list requires the use of the H5Pget_data_transform function. This function provides the ability to both query the size of the string stored and retrieve part or all of it. Note that H5Pget_data_transform will return the expression that was set by H5Pset_data_transform. The reduced transform string, computed when H5Pset_data_transform is called, is not stored in string form and is not available to the user.

In order to ascertain the size of the string, a NULL expression should be passed to the function. This will make the function return the length of the transform string (not including the terminating '\0' character).

To actually retrieve the string, a pointer to a valid memory location should be passed in for expression, and the number of bytes from the string that should be copied to that memory location should be passed in as size.
Some additional functionality could still be added to the data transform. Currently the most important missing feature is the addition of operators, such as exponentiation and the trigonometric functions. Although exponentiation can be explicitly carried out with a transform expression such as "x*x*x", it may be easier to support expressions like "x^3". Also lacking are the commonly used trigonometric functions, such as sin, cos, and tan.
Popular constants could also be added, such as π or e.
More advanced functionality, such as the ability to perform a transform on multiple datasets, is also a possibility, but such a feature would be more a completely new addition than an extension to data transforms.
[1] Mr. Wendling, who was involved in the initial design and implemented the expression parser, has left NCSA.
[2] See the h5_dtransform.c example in the examples directory of the hdf5 library for just such an illustration.
This table shows some of the layers of HDF5. Each layer calls functions at the same or lower layers and never functions at higher layers. An object identifier (OID) takes various forms at the various layers: at layer 0 an OID is an absolute physical file address; at layers 1 and 2 it's an absolute virtual file address. At layers 3 through 6 it's a relative address, and at layers 7 and above it's an object handle.
Layer-7 | Groups           | Datasets       |
Layer-6 | Indirect Storage | Symbol Tables  |
Layer-5 | B-trees          | Object Hdrs    | Heaps
Layer-4 | Caching
Layer-3 | H5F chunk I/O
Layer-2 | H5F low
Layer-1 | File Family      | Split Meta/Raw |
Layer-0 | Section-2 I/O    | Standard I/O   | Malloc/Free
The simplest form of hdf5 file is a single file containing only hdf5 data. The file begins with the boot block, which is followed until the end of the file by hdf5 data. The next most complicated file allows non-hdf5 data (user defined data or internal wrappers) to appear before the boot block and after the end of the hdf5 data. The hdf5 data is treated as a single linear address space in both cases.

The next level of complexity comes when non-hdf5 data is interspersed with the hdf5 data. We handle that by including the non-hdf5 interspersed data in the hdf5 address space and simply not referencing it (eventually we might add those addresses to a "do-not-disturb" list using the same mechanism as the hdf5 free list, but it's not absolutely necessary). This is implemented except for the "do-not-disturb" list.
The most complicated single address space hdf5 file is when we allow the address space to be split among multiple physical files. For instance, a >2GB file can be split into smaller chunks and transferred to a 32-bit machine, then accessed as a single logical hdf5 file. The library already supports >32-bit addresses, so at layer 1 we split a 64-bit address into a 32-bit file number and a 32-bit offset (the 64 and 32 are arbitrary). The rest of the library still operates with a linear address space.
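The 32/32 split is just a shift and a mask. A minimal sketch of this layer-1 address decoding (the helper names are hypothetical, and the actual split point is arbitrary, as noted above):

```c
#include <stdint.h>

/* Decode a 64-bit family address into a 32-bit member file number
 * and a 32-bit offset within that member. */
uint32_t family_file(uint64_t addr)   { return (uint32_t)(addr >> 32); }
uint32_t family_offset(uint64_t addr) { return (uint32_t)(addr & 0xFFFFFFFFu); }
```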
Another variation might be a family of two files where all the meta data is stored in one file and all the raw data is stored in another file to allow the HDF5 wrapper to be easily replaced with some other wrapper.
The H5Fcreate and H5Fopen functions would need to be modified to pass file-type info down to layer 2 so the correct drivers can be called and parameters passed to the drivers to initialize them.
-
-
I've implemented fixed-size family members. The entire hdf5 file is partitioned into members where each member is the same size. The family scheme is used if one passes a name to H5F_open (which is called by H5Fopen() and H5Fcreate()) that contains a printf(3c)-style integer format specifier. Currently, the default low-level file driver is used for all family members (H5F_LOW_DFLT, usually set to be Section 2 I/O or Section 3 stdio), but we'll probably eventually want to pass that as a parameter of the file access property list, which hasn't been implemented yet. When creating a family, a default family member size is used (defined at the top of H5Ffamily.c, currently 64MB) but that also should be settable in the file access property list. When opening an existing family, the size of the first member is used to determine the member size (flushing/closing a family ensures that the first member is the correct size) but the other family members don't have to be that large (the local address space, however, is logically the same size for all members).
-
-
I haven't implemented a split meta/raw family yet but am rather curious to see how it would perform. I was planning to use the `.h5' extension for the meta data file and `.raw' for the raw data file. The high-order bit in the address would determine whether the address refers to meta data or raw data. If the user passes a name that ends with `.raw' to H5F_open then we'll choose the split family and use the default low level driver for each of the two family members. Eventually we'll want to pass these kinds of things through the file access property list instead of relying on naming convention.
-
-
We also need the ability to point to raw data that isn't in the HDF5 linear address space. For instance, a dataset might be striped across several raw data files.

Fortunately, the only two packages that need to be aware of this are the packages for reading/writing contiguous raw data and discontiguous raw data. Since contiguous raw data is a special case, I'll discuss how to implement external raw data in the discontiguous case.

Discontiguous data is stored as a B-tree whose keys are the chunk indices and whose leaf nodes point to the raw data by storing a file address. So what we need is some way to name the external files, and a way to efficiently store the external file name for each chunk.

I propose adding to the object header an External File List message that is a 1-origin array of file names. Then, in the B-tree, each key has an index into the External File List (or zero for the HDF5 file) for the file where the chunk can be found. The external file index is only used at the leaf nodes to get to the raw data (the entire B-tree is in the HDF5 file) but because of the way keys are copied among the B-tree nodes, it's much easier to store the index with every key.

One might also want to combine two or more HDF5 files in a manner similar to mounting file systems in Unix. That is, the group structure and meta data from one file appear as though they exist in the first file. One opens File-A, and then mounts File-B at some point in File-A, the mount point, so that traversing into the mount point actually causes one to enter the root object of File-B. File-A and File-B are each complete HDF5 files and can be accessed individually without mounting them.

We need a couple of additional pieces of machinery to make this work. First, an haddr_t type (a file address) doesn't contain any info about which HDF5 file's address space the address belongs to. But since haddr_t is an opaque type except at layers 2 and below, it should be quite easy to add a pointer to the HDF5 file. This would also remove the H5F_t argument from most of the low-level functions since it would be part of the OID.
The other thing we need is a table of mount points and some - functions that understand them. We would add the following - table to each H5F_t struct: - -
-
-
-struct H5F_mount_t {
- H5F_t *parent; /* Parent HDF5 file if any */
- struct {
- H5F_t *f; /* File which is mounted */
- haddr_t where; /* Address of mount point */
- } *mount; /* Array sorted by mount point */
- intn nmounts; /* Number of mounted files */
- intn alloc; /* Size of mount table */
-};
-
The H5Fmount
function takes the ID of an open
- file or group, the name of a to-be-mounted file, the name of the mount
- point, and a file access property list (like H5Fopen
).
- It opens the new file and adds a record to the parent's mount
- table. The H5Funmount
function takes the parent
- file or group ID and the name of the mount point and disassociates
- the mounted file from the mount point. It does not close the
- mounted file. The H5Fclose
- function closes/unmounts files recursively.
-
-
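A minimal sketch of how such a mount table might be kept "sorted by mount point" so each traversal step can binary-search it. The types and names below are simplified stand-ins for the proposed H5F_mount_t (an `int` file id instead of `H5F_t *`, a `long` instead of `haddr_t`), not actual HDF5 code.

```c
#include <assert.h>
#include <stddef.h>

/* Sketch only: simplified from the proposed H5F_mount_t.  The mount
 * array stays sorted by mount-point address so that name traversal
 * can binary-search it at each step. */
typedef struct {
    int  file_id;   /* stand-in for the mounted H5F_t* */
    long where;     /* address of the mount point */
} mount_t;

typedef struct {
    mount_t mount[16];
    int     nmounts;
} mount_table_t;

/* Insert a record, keeping the array sorted by `where`. */
static void mount_insert(mount_table_t *t, int file_id, long where)
{
    int i = t->nmounts;
    while (i > 0 && t->mount[i - 1].where > where) {
        t->mount[i] = t->mount[i - 1];  /* shift larger entries up */
        i--;
    }
    t->mount[i].file_id = file_id;
    t->mount[i].where = where;
    t->nmounts++;
}

/* Return the file mounted at `addr`, or -1 if nothing is mounted there. */
static int mount_lookup(const mount_table_t *t, long addr)
{
    int lo = 0, hi = t->nmounts - 1;
    while (lo <= hi) {
        int mid = (lo + hi) / 2;
        if (t->mount[mid].where == addr)
            return t->mount[mid].file_id;
        if (t->mount[mid].where < addr)
            lo = mid + 1;
        else
            hi = mid - 1;
    }
    return -1;
}
```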
The H5G_iname
function which translates a name to
- a file address (haddr_t
) looks at the mount table
- at each step in the translation and switches files where
- appropriate. All name-to-address translations occur through
- this function.
-
-
I'm expecting to be able to implement the two new flavors of - single linear address space in about two days. It took two hours - to implement the malloc/free file driver at level zero and I - don't expect this to be much more work. - -
I'm expecting three days to implement the external raw data for
- discontiguous arrays. Adding the file index to the B-tree is
- quite trivial; adding the external file list message shouldn't
- be too hard since the object header message class from which this
- message derives is fully implemented; and changing
- H5F_istore_read
should be trivial. Most of the
- time will be spent designing a way to cache Unix file
- descriptors efficiently since the total number of open files
- allowed per process could be much smaller than the total number
- of HDF5 files and external raw data files.
-
-
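The file-descriptor caching problem mentioned above can be sketched as a tiny LRU cache. This is purely illustrative: `open_file()` is a fake stand-in for open(2), the two-slot size is arbitrary, and all names are invented.

```c
#include <assert.h>
#include <string.h>

/* Toy LRU cache of open file descriptors, sketching the idea that the
 * number of simultaneously open Unix files may be far smaller than the
 * number of external raw-data files.  Nothing here is actual HDF5 code. */
#define FD_CACHE_SLOTS 2

typedef struct {
    char name[32];
    int  fd;        /* 0 means the slot is empty */
    long stamp;     /* last-use time, for LRU eviction */
} fd_slot_t;

static fd_slot_t cache[FD_CACHE_SLOTS];
static long clock_tick;
static int  opens;          /* counts simulated open(2) calls */

static int open_file(const char *name)
{
    (void)name;
    return 100 + opens++;   /* fake descriptor */
}

/* Return a descriptor for `name`, evicting the least recently used
 * slot when the cache is full. */
static int fd_lookup(const char *name)
{
    int i, victim = 0;
    for (i = 0; i < FD_CACHE_SLOTS; i++) {
        if (cache[i].fd && strcmp(cache[i].name, name) == 0) {
            cache[i].stamp = ++clock_tick;   /* cache hit */
            return cache[i].fd;
        }
        if (cache[i].stamp < cache[victim].stamp)
            victim = i;                      /* oldest slot so far */
    }
    strcpy(cache[victim].name, name);
    cache[victim].fd = open_file(name);
    cache[victim].stamp = ++clock_tick;
    return cache[victim].fd;
}
```

A real version would of course have to close the evicted descriptor and cope with descriptors being invalidated while the B-tree still references the external file.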
I'm expecting four days to implement being able to mount one
- HDF5 file on another. I was originally planning a lot more, but
- making haddr_t
opaque turned out to be much easier
- than I planned (I did it last Fri). Most of the work will
- probably be removing the redundant H5F_t arguments for lots of
- functions.
-
-
The external raw data could be implemented as a single linear - address space, but doing so would require one to allocate large - enough file addresses throughout the file (>32bits) before the - file was created. It would make mixing an HDF5 file family with - external raw data, or an external HDF5 wrapper around an HDF4 file, - a more difficult process. So I consider the implementation of - external raw data files as a single HDF5 linear address space a - kludge. - -
The ability to mount one HDF5 file on another might not be a - very important feature especially since each HDF5 file must be a - complete file by itself. It's not possible to stripe an array - over multiple HDF5 files because the B-tree wouldn't be complete - in any one file, so the only choice is to stripe the array - across multiple raw data files and store the B-tree in the HDF5 - file. On the other hand, it might be useful if one file - contains some public data which can be mounted by other files - (e.g., a mesh topology shared among collaborators and mounted by - files that contain other fields defined on the mesh). Of course - the applications can open the two files separately, but it might - be more portable if we support it in the library. - -
So we're looking at about two weeks to implement all three - versions. I didn't get a chance to do any of them in AIO - although we had long-term plans for the first two with a - possibility of the third. They'll be much easier to implement in - HDF5 than AIO since I've been keeping these in mind from the - start. - -
-
-At Release 1.2.2, free list management code was introduced to the HDF5
-library. This included one user-level function, H5garbage_collect, which
-garbage collects on all the free-lists. H5garbage_collect is the only
-user-accessible (i.e., application developer-accessible) element of this
-functionality.
-
-The free-lists generally reduce the amount of dynamic memory used to around
-75% of the pre-optimized amount as well as speed up the time in the library
-code by ~5%. The free-lists also help linearize the amount of memory used with
-increasing numbers of datasets or re-writes on the data, so the amount of
-memory used for the 1500/45 free-list case is only 66% of the memory used for
-the unoptimized case.
-
-Overall, the introduction of free list management is a win: the library is
-slightly faster and uses far fewer system resources than before. Most of the
-emphasis has been focused on the main "thoroughfares" through the code;
-less attention was paid to the "back streets" which are used much less
-frequently and offer less potential for abuse.
-
-Adding a free-list for a data structure in the HDF5 library code is easy:
-
-Old code:
----------
-    int foo(void)
-    {
-        H5W_t *w;
-        int i;
-
-        for(i=0; i<x; i++) {
-            w=H5MM_malloc(sizeof(H5W_t));
-            <use w>
-            H5MM_xfree(w);
-        }
-    }
-
-New code:
----------
-H5FL_DEFINE(H5W_t);
-
-    int foo(void)
-    {
-        H5W_t *w;
-        int i;
-
-        for(i=0; i<x; i++) {
-            w=H5FL_ALLOC(H5W_t,0);
-            <use w>
-            H5FL_FREE(H5W_t,w);
-        }
-    }
-
-
-There are three kinds of free-lists:
-
-- for "regular" objects,
-
-- arrays of fixed size objects (both fixed length and unknown length), and
-
-- "blocks" of bytes.
-
-    "Regular" free-lists use the H5FL_<*> macros in H5FLprivate.h and are
-    designed for single, fixed-size data structures like typedef'ed structs,
-    etc.
-
-    Arrays of objects use the H5FL_ARR_<*> macros and are designed for arrays
-    (both fixed in length and varying lengths) of fixed length data structures
-    (like typedef'ed types).
-
-    "Block" free-lists use the H5FL_BLK_<*> macros and are designed to hold
-    varying sized buffers of bytes, with no structure.
-
-    H5S.c contains examples for "regular" and fixed-sized arrays;
-    H5B.c contains examples for variable-sized arrays and "blocks".
-
-A free-list doesn't have to be used for every data structure allocated and
-freed, just for those which are prone to abuse when multiple operations are
-being performed. It is important to use the macros for declaring and
-manipulating the free-lists, however; they allow the free'd objects on the
-lists to be garbage collected by the library at the library's termination
-or at the user's request.
-
-One public API function has been added: H5garbage_collect, which garbage
-collects on all the free-lists of all the different types. It is not required
-to be called and is only necessary when the application performs actions
-which cause the library to allocate many objects, then eventually releases
-those objects and wants to reduce the library's memory use from its peak.
-The library automatically garbage collects all the free lists when the
-application ends.
-
-Questions should be sent to the HDF Help Desk at hdfhelp@ncsa.uiuc.edu.
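The mechanism behind the H5FL_* macros described above can be sketched in a few lines of C. This is an illustrative model of the idea only (freed objects are chained onto a per-type list and handed back by the next allocation instead of calling malloc() again); the names `free_list_t`, `fl_alloc`, and `fl_free` are invented, not the library's.

```c
#include <assert.h>
#include <stdlib.h>

/* Each freed object is large enough to hold a link pointer, so the
 * free list threads itself through the freed memory. */
typedef struct node_t {
    struct node_t *next;
} node_t;

typedef struct {
    node_t *head;       /* chain of freed, reusable objects */
    size_t  size;       /* object size; must be >= sizeof(node_t) */
    int     mallocs;    /* how many times we actually hit malloc() */
} free_list_t;

static void *fl_alloc(free_list_t *fl)
{
    if (fl->head) {                 /* reuse a previously freed object */
        void *p = fl->head;
        fl->head = fl->head->next;
        return p;
    }
    fl->mallocs++;
    return malloc(fl->size);
}

static void fl_free(free_list_t *fl, void *p)
{
    node_t *n = p;                  /* push onto the free list */
    n->next = fl->head;
    fl->head = n;
}

/* Garbage collection: release everything on the list back to the OS. */
static void fl_garbage_collect(free_list_t *fl)
{
    while (fl->head) {
        node_t *n = fl->head;
        fl->head = n->next;
        free(n);
    }
}
```

This is why repeated allocate/free cycles stop costing malloc() calls after the first pass, matching the large drop in malloc counts in the benchmarks below.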
- - -=========================================== -BENCHMARK INFORMATION -=========================================== - -New version with free lists: - -Datasets=500, Data Rewrites=15: - Peak Heap Usage: 18210820 bytes - Time in library code: 2.260 seconds - # of malloc calls: 22864 - -Datasets=1000, Data Rewrites=15: - Peak Heap Usage: 31932420 bytes - Time in library code: 5.090 seconds - # of malloc calls: 43045 - -Datasets=1500, Data Rewrites=15: - Peak Heap Usage: 41566212 bytes - Time in library code: 8.623 seconds - # of malloc calls: 60623 - -Datasets=500, Data Rewrites=30: - Peak Heap Usage: 19456004 bytes - Time in library code: 4.274 seconds - # of malloc calls: 23353 - -Datasets=1000, Data Rewrites=30: - Peak Heap Usage: 33988612 bytes - Time in library code: 9.955 seconds - # of malloc calls: 43855 - -Datasets=1500, Data Rewrites=30: - Peak Heap Usage: 43950084 bytes - Time in library code: 17.413 seconds - # of malloc calls: 61554 - -Datasets=500, Data Rewrites=45: - Peak Heap Usage: 20717572 bytes - Time in library code: 6.326 seconds - # of malloc calls: 23848 - -Datasets=1000, Data Rewrites=45: - Peak Heap Usage: 35807236 bytes - Time in library code: 15.146 seconds - # of malloc calls: 44572 - -Datasets=1500, Data Rewrites=45: - Peak Heap Usage: 46022660 bytes - Time in library code: 27.140 seconds - # of malloc calls: 62370 - - -Older version with no free lists: - -Datasets=500, Data Rewrites=15: - Peak Heap Usage: 25370628 bytes - Time in library code: 2.329 seconds - # of malloc calls: 194991 - -Datasets=1000, Data Rewrites=15: - Peak Heap Usage: 39550980 bytes - Time in library code: 5.251 seconds - # of malloc calls: 417971 - -Datasets=1500, Data Rewrites=15: - Peak Heap Usage: 68870148 bytes - Time in library code: 8.913 seconds - # of malloc calls: 676564 - -Datasets=500, Data Rewrites=30: - Peak Heap Usage: 31670276 bytes - Time in library code: 4.435 seconds - # of malloc calls: 370320 - -Datasets=1000, Data Rewrites=30: - Peak Heap Usage: 
44646404 bytes - Time in library code: 10.325 seconds - # of malloc calls: 797125 - -Datasets=1500, Data Rewrites=30: - Peak Heap Usage: 68870148 bytes - Time in library code: 18.057 seconds - # of malloc calls: 1295336 - -Datasets=500, Data Rewrites=45: - Peak Heap Usage: 33906692 bytes - Time in library code: 6.577 seconds - # of malloc calls: 545656 - -Datasets=1000, Data Rewrites=45: - Peak Heap Usage: 56778756 bytes - Time in library code: 15.720 seconds - # of malloc calls: 1176285 - -Datasets=1500, Data Rewrites=45: - Peak Heap Usage: 68870148 bytes - Time in library code: 28.138 seconds - # of malloc calls: 1914097 - - -=========================================== -Last Modified: 3 May 2000 -HDF Help Desk: hdfhelp@ncsa.uiuc.edu - -- - diff --git a/doc/html/TechNotes/H4-H5Compat.html b/doc/html/TechNotes/H4-H5Compat.html deleted file mode 100644 index 2992476..0000000 --- a/doc/html/TechNotes/H4-H5Compat.html +++ /dev/null @@ -1,271 +0,0 @@ - - - -
The HDF5 development must proceed in such a manner as to - satisfy the following conditions: - -
There's at least one invariant: new object features introduced - in the HDF5 file format (like 2-d arrays of structs) might be - impossible to "translate" to a format that an old HDF4 - application can understand, either because the HDF4 file format - or the HDF4 API has no mechanism to describe the object. - -
What follows is one possible implementation based on how - Condition B was solved in the AIO/PDB world. It also attempts - to satisfy these goals: - -
The proposed implementation uses wrappers to handle - compatibility issues. A Format-X file is wrapped in a - Format-Y file by creating a Format-Y skeleton that replicates - the Format-X meta data. The Format-Y skeleton points to the raw - data stored in Format-X without moving the raw data. The - restriction is that the raw data storage methods in Format-Y must be a - superset of those in Format-X (otherwise the - raw data must be copied to Format-Y). We're assuming that meta - data is small with respect to the entire file. - -
The wrapper can be a separate file that has pointers into the - first file or it can be contained within the first file. If - contained in a single file, the file can appear as a Format-Y - file or simultaneously a Format-Y and Format-X file. - -
The Format-X meta-data can be thought of as the original - wrapper around raw data and Format-Y is a second wrapper around - the same data. The wrappers are independent of one another; - modifying the meta-data in one wrapper causes the other to - become out of date. Modification of raw data doesn't invalidate - either view as long as the meta data that describes its storage - isn't modified. For instance, an array element can change values - if storage is already allocated for the element, but if storage - isn't allocated then the meta data describing the storage must - change, invalidating all wrappers but one. - -
It's perfectly legal to modify the meta data of one wrapper - without modifying the meta data in the other wrapper(s). The - illegal part is accessing the raw data through a wrapper which - is out of date. - -
If raw data is wrapped by more than one internal wrapper - (internal means that the wrapper is in the same file as - the raw data) then access to that file must assume that - unreferenced parts of that file contain meta data for another - wrapper and cannot be reclaimed as free memory. - -
Since this is a temporary situation which can't be - automatically detected by the HDF5 library, we must rely - on the application to notify the HDF5 library whether or not it - must satisfy Condition B. (Even if we don't rely on the - application, at some point someone is going to remove the - Condition B constraint from the library.) So the module that - handles Condition B is conditionally compiled and then enabled - on a per-file basis. - -
If the application desires to produce an HDF4 file (determined
- by arguments to H5Fopen
), and the Condition B
- module is compiled into the library, then H5Fclose
- calls the module to traverse the HDF5 wrapper and generate an
- additional internal or external HDF4 wrapper (wrapper specifics
- are described below). If Condition B is implemented as a module
- then it can benefit from the metadata already cached by the main
- library.
-
-
An internal HDF4 wrapper would be used if the HDF5 file is - writable and the user doesn't mind that the HDF5 file is - modified. An external wrapper would be used if the file isn't - writable or if the user wants the data file to be primarily HDF5 - but a few applications need an HDF4 view of the data. - -
Modifying through the HDF5 library an HDF5 file that has
- internal HDF4 wrapper should invalidate the HDF4 wrapper (and
- optionally regenerate it when H5Fclose
is
- called). The HDF5 library must understand how wrappers work, but
- not necessarily anything about the HDF4 file format.
-
-
Modifying through the HDF5 library an HDF5 file that has an
- external HDF4 wrapper will cause the HDF4 wrapper to become out
- of date (but possibly regenerated during H5Fclose
).
- Note: Perhaps the next release of the HDF4 library should
- ensure that the HDF4 wrapper file has a more recent modification
- time than the raw data file (the HDF5 file) to which it
- points(?)
-
-
Modifying through the HDF4 library an HDF5 file that has an - internal or external HDF4 wrapper will cause the HDF5 wrapper to - become out of date. However, there is no way for the old HDF4 - library to notify the HDF5 wrapper that it's out of date. - Therefore the HDF5 library must be able to detect when the HDF5 - wrapper is out of date and be able to fix it. If the HDF4 - wrapper is complete then the easy way is to ignore the original - HDF5 wrapper and generate a new one from the HDF4 wrapper. The - other approach is to compare the HDF4 and HDF5 wrappers and - assume that if they differ HDF4 is the right one, if HDF4 omits - data then it was because HDF4 is a partial wrapper (rather than - assume HDF4 deleted the data), and if HDF4 has new data then - copy the new meta data to the HDF5 wrapper. On the other hand, - perhaps we don't need to allow these situations (modifying an - HDF5 file with the old HDF4 library and then accessing it with - the HDF5 library is either disallowed or causes HDF5 objects - that can't be described by HDF4 to be lost). - -
To convert an HDF5 file to an HDF4 file on demand, one simply - opens the file with the HDF4 flag and closes it. This is also - how AIO implemented backward compatibility with PDB in its file - format. - -
This condition must be satisfied for all time because there
- will always be archived HDF4 files. If a pure HDF4 file (that
- is, one without HDF5 meta data) is opened with an HDF5 library,
- the H5Fopen
builds an internal or external HDF5
- wrapper and then accesses the raw data through that wrapper. If
- the HDF5 library modifies the file then the HDF4 wrapper becomes
- out of date. However, since the HDF5 library hasn't been
- released, we can at least implement it to disable and/or reclaim
- the HDF4 wrapper.
-
-
If an external and temporary HDF5 wrapper is desired, the
- wrapper is created through the cache like all other HDF5 files.
- The data appears on disk only if a particular cached datum is
- preempted. Instead of calling H5Fclose
on the HDF5
- wrapper file we call H5Fabort
which immediately
- releases all file resources without updating the file, and then
- we unlink the file from Unix.
-
-
External wrappers are quite obvious: they contain only things - from the format specs for the wrapper and nothing from the - format specs of the format which they wrap. - -
An internal HDF4 wrapper is added to an HDF5 file in such a way
- that the file appears to be both an HDF4 file and an HDF5
- file. HDF4 requires an HDF4 file header at file offset zero. If
- a user block is present then we just move the user block down a
- bit (and truncate it) and insert the minimum HDF4 signature.
- The HDF4 dd
list and any other data it needs are
- appended to the end of the file and the HDF5 signature uses the
- logical file length field to determine the beginning of the
- trailing part of the wrapper.
-
-
-
HDF4 minimal file header. Its main job is to point to
- the dd list at the end of the file. |
-
User-defined block which is truncated by the size of the - HDF4 file header so that the HDF5 boot block file address - doesn't change. | -
The HDF5 boot block and data, unmodified by adding the - HDF4 wrapper. | -
The main part of the HDF4 wrapper. The dd
- list will have entries for all parts of the file so
- hdpack(?) doesn't (re)move anything. |
-
When such a file is opened by the HDF5 library for - modification it shifts the user block back down to address zero - and fills with zeros, then truncates the file at the end of the - HDF5 data or adds the trailing HDF4 wrapper to the free - list. This prevents HDF4 applications from reading the file with - an out of date wrapper. - -
If there is no user block then we have a problem. The HDF5 - boot block must be moved to make room for the HDF4 file header. - But moving just the boot block causes problems because all file - addresses stored in the file are relative to the boot block - address. The only option is to shift the entire file contents - by 512 bytes to open up a user block (too bad we don't have - hooks into the Unix i-node stuff so we could shift the entire - file contents by the size of a file system page without ever - performing I/O on the file :-) - -
Is it possible to place an HDF5 wrapper in an HDF4 file? I
- don't know enough about the HDF4 format, but I would suspect it
- might be possible to open a hole at file address 512 (and
- possibly before) by moving some things to the end of the file
- to make room for the HDF5 signature. The remainder of the HDF5
- wrapper goes at the end of the file and entries are added to the
- HDF4 dd
list to mark the location(s) of the HDF5
- wrapper.
-
-
Conversion programs that copy an entire HDF4 file to a separate, - self-contained HDF5 file and vice versa might be useful. - - - - -
-
-Heap functions are in the H5H package.
-
-
-off_t
-H5H_new (hdf5_file_t *f, size_t size_hint, size_t realloc_hint);
-
-    Creates a new heap in the specified file which can efficiently
-    store at least SIZE_HINT bytes.  The heap can store more than
-    that, but doing so may cause the heap to become less efficient
-    (for instance, a heap implemented as a B-tree might become
-    discontiguous).  The REALLOC_HINT is the minimum number of bytes
-    by which the heap will grow when it must be resized.  The hints
-    may be zero, in which case reasonable (but probably not
-    optimal) values will be chosen.
-
-    The return value is the address of the new heap relative to
-    the beginning of the file boot block.
-
-off_t
-H5H_insert (hdf5_file_t *f, off_t addr, size_t size, const void *buf);
-
-    Copies SIZE bytes of data from BUF into the heap whose address
-    is ADDR in file F.  BUF must be the _entire_ heap object.  The
-    return value is the byte offset of the new data in the heap.
-
-void *
-H5H_read (hdf5_file_t *f, off_t addr, off_t offset, size_t size, void *buf);
-
-    Copies SIZE bytes of data from the heap whose address is ADDR
-    in file F into BUF and then returns the address of BUF.  If
-    BUF is the null pointer then a new buffer will be malloc'd by
-    this function and its address is returned.
-
-    Returns buffer address or null.
-
-const void *
-H5H_peek (hdf5_file_t *f, off_t addr, off_t offset)
-
-    A more efficient version of H5H_read that returns a pointer
-    directly into the cache; the data is not copied from the cache
-    to a buffer.  The pointer is valid until the next call to an
-    H5AC function, directly or indirectly.
-
-    Returns a pointer or null.  Do not free the pointer.
-
-void *
-H5H_write (hdf5_file_t *f, off_t addr, off_t offset, size_t size,
-           const void *buf);
-
-    Modifies (part of) an object in the heap at address ADDR of
-    file F by copying SIZE bytes from the beginning of BUF to the
-    file.  OFFSET is the address within the heap where the output
-    is to occur.
- - This function can fail if the combination of OFFSET and SIZE - would write over a boundary between two heap objects. - -herr_t -H5H_remove (hdf5_file_t *f, off_t addr, off_t offset, size_t size); - - Removes an object or part of an object which begins at byte - OFFSET within a heap whose address is ADDR in file F. SIZE - bytes are returned to the free list. Removing the middle of - an object has the side effect that one object is now split - into two objects. - - Returns success or failure. - -=========================================== -Last Modified: 8 July 1998 (technical content) -Last Modified: 28 April 2000 (included in HDF5 Technical Notes) -HDF Help Desk: hdfhelp@ncsa.uiuc.edu - -- - - diff --git a/doc/html/TechNotes/IOPipe.html b/doc/html/TechNotes/IOPipe.html deleted file mode 100644 index 7c24e2c..0000000 --- a/doc/html/TechNotes/IOPipe.html +++ /dev/null @@ -1,114 +0,0 @@ - - - -
The HDF5 raw data pipeline is a complicated beast that handles - all aspects of raw data storage and transfer of that data - between the file and the application. Data can be stored - contiguously (internal or external), in variable size external - segments, or regularly chunked; it can be sparse, extendible, - and/or compressible. Data transfers must be able to convert from - one data space to another, convert from one number type to - another, and perform partial I/O operations. Furthermore, - applications will expect their common usage of the pipeline to - perform well. - -
To accomplish these goals, the pipeline has been designed in a - modular way so no single subroutine is overly complicated and so - functionality can be inserted easily at the appropriate - locations in the pipeline. A general pipeline was developed and - then certain paths through the pipeline were optimized for - performance. - -
We describe only the file-to-memory side of the pipeline since - the memory-to-file side is a mirror image. We also assume that a - proper hyperslab of a simple data space is being read from the - file into a proper hyperslab of a simple data space in memory, - and that the data type is a compound type which may require - various number conversions on its members. - - - -
The diagrams should be read from the top down. The Line A
- in the figure above shows that H5Dread()
copies
- data from a hyperslab of a file dataset to a hyperslab of an
- application buffer by calling H5D_read()
. And
- H5D_read()
calls, in a loop,
- H5S_simp_fgath()
, H5T_conv_struct()
,
- and H5S_simp_mscat()
. A temporary buffer, TCONV, is
- loaded with data points from the file, then data type conversion
- is performed on the temporary buffer, and finally data points
- are scattered out to application memory. Thus, data type
- conversion is an in-place operation and data space conversion
- consists of two steps. An additional temporary buffer, BKG, is
- large enough to hold N instances of the destination
- data type where N is the same number of data points
- that can be held by the TCONV buffer (which is large enough to
- hold either source or destination data points).
-
-
The application sets an upper limit for the size of the TCONV
- buffer and optionally supplies a buffer. If no buffer is
- supplied then one will be created by calling
- malloc()
when the pipeline is executed (when
- necessary) and freed when the pipeline exits. The size of the
- BKG buffer depends on the size of the TCONV buffer and if the
- application supplies a BKG buffer it should be at least as large
- as the TCONV buffer. The default size for these buffers is one
- megabyte but the buffer might not be used to full capacity if
- the buffer size is not an integer multiple of the source or
- destination data point size (whichever is larger, but only
- destination for the BKG buffer).
-
-
-
-
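The gather/convert/scatter loop described above can be sketched with the file and memory sides reduced to flat arrays and the type conversion reduced to int-to-double widening. This is an illustrative model only, not the H5S/H5T internals; `pipeline_read` and `TCONV_ELEMS` are invented names. Note how the in-place conversion walks the TCONV buffer backwards so a wider destination type never clobbers unconverted source points.

```c
#include <string.h>

#define TCONV_ELEMS 4   /* TCONV holds at most this many data points */

/* Sketch of the H5Dread()-style pipeline: gather points into TCONV,
 * convert in place, scatter to application memory. */
static void pipeline_read(const int *file, double *mem, int n)
{
    unsigned char tconv[TCONV_ELEMS * sizeof(double)];
    int done = 0;
    while (done < n) {
        int chunk = n - done > TCONV_ELEMS ? TCONV_ELEMS : n - done;
        int i;
        /* gather: copy source points from the "file" into TCONV */
        memcpy(tconv, file + done, chunk * sizeof(int));
        /* convert in place, last element first, so the wider doubles
         * never overwrite ints that haven't been converted yet */
        for (i = chunk - 1; i >= 0; i--) {
            int v;
            double d;
            memcpy(&v, tconv + i * sizeof(int), sizeof(int));
            d = (double)v;
            memcpy(tconv + i * sizeof(double), &d, sizeof(double));
        }
        /* scatter: copy converted points out to application memory */
        memcpy(mem + done, tconv, chunk * sizeof(double));
        done += chunk;
    }
}
```

The BKG buffer of the text would enter this loop as a second staging array holding the partially initialized destination points; it is omitted here to keep the sketch minimal.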
Occasionally the destination data points will be partially
- initialized and the H5Dread()
operation should not
- clobber those values. For instance, the destination type might
- be a struct with members a
and b
where
- a
is already initialized and we're reading
- b
from the file. An extra line, G, is added to the
- pipeline to provide the type conversion functions with the
- existing data.
-
-
-
-
It will most likely be quite common that no data type - conversion is necessary. In such cases a temporary buffer for - data type conversion is not needed and data space conversion - can happen in a single step. In fact, when the source and - destination data are both contiguous (they aren't in the - picture) the loop degenerates to a single iteration. - - - - -
So far we've looked only at internal contiguous storage, but by - replacing Line B in Figures 1 and 2 and Line A in Figure 3 with - Figure 4 the pipeline is able to handle regularly chunked - objects. Line B of Figure 4 is executed once for each chunk - which contains data to be read and the chunk address is found by - looking at a multi-dimensional key in a chunk B-tree which has - one entry per chunk. - - - -
If a single chunk is requested and the destination buffer is - the same size/shape as the chunk, then the CHUNK buffer is - bypassed and the destination buffer is used instead as shown in - Figure 5. - - - -
- -* You can run make from any directory. However, running in a - subdirectory only knows how to build things in that directory and - below. However, all makefiles know when their target depends on - something outside the local directory tree: - - $ cd test - $ make - make: *** No rule to make target ../src/libhdf5.a - -* All Makefiles understand the following targets: - - all -- build locally. - install -- install libs, headers, progs. - uninstall -- remove installed files. - mostlyclean -- remove temp files (eg, *.o but not *.a). - clean -- mostlyclean plus libs and progs. - distclean -- all non-distributed files. - maintainer-clean -- all derived files but H5config.h.in and configure. - -* Most Makefiles also understand: - - TAGS -- build a tags table - dep, depend -- recalculate source dependencies - lib -- build just the libraries w/o programs - -* If you have personal preferences for which make, compiler, compiler - flags, preprocessor flags, etc., that you use and you don't want to - set environment variables, then use a site configuration file. - - When configure starts, it looks in the config directory for files - whose name is some combination of the CPU name, vendor, and - operating system in this order: - - CPU-VENDOR-OS - VENDOR-OS - CPU-VENDOR - OS - VENDOR - CPU - - The first file which is found is sourced and can therefore affect - the behavior of the rest of configure. See config/BlankForm for the - template. - -* If you use GNU make along with gcc the Makefile will contain targets - that automatically maintain a list of source interdependencies; you - seldom have to say `make clean'. I say `seldom' because if you - change how one `*.h' file includes other `*.h' files you'll have - to force an update. - - To force an update of all dependency information remove the - `.depend' file from each directory and type `make'. For - instance: - - $ cd $HDF5_HOME - $ find . 
-name .depend -exec rm {} \; - $ make - - If you're not using GNU make and gcc then dependencies come from - ".distdep" files in each directory. Those files are generated on - GNU systems and inserted into the Makefile's by running - config.status (which happens near the end of configure). - -* If you use GNU make along with gcc then the Perl script `trace' is - run just before dependencies are calculated to update any H5TRACE() - calls that might appear in the file. Otherwise, after changing the - type of a function (return type or argument types) one should run - `trace' manually on those source files (e.g., ../bin/trace *.c). - -* Object files stay in the directory and are added to the library as a - final step instead of placing the file in the library immediately - and removing it from the directory. The reason is three-fold: - - 1. Most versions of make don't allow `$(LIB)($(SRC:.c=.o))' - which makes it necessary to have two lists of files, one - that ends with `.c' and the other that has the library - name wrapped around each `.o' file. - - 2. Some versions of make/ar have problems with modification - times of archive members. - - 3. Adding object files immediately causes problems on SMP - machines where make is doing more than one thing at a - time. - -* When using GNU make on an SMP you can cause it to compile more than - one thing at a time. At the top of the source tree invoke make as - - $ make -j -l6 - - which causes make to fork as many children as possible as long as - the load average doesn't go above 6. In subdirectories one can say - - $ make -j2 - - which limits the number of children to two (this doesn't work at the - top level because the `-j2' is not passed to recursive makes). - -* To create a release tarball go to the top-level directory and run - ./bin/release. You can optionally supply one or more of the words - `tar', `gzip', `bzip2' or `compress' on the command line. 
The - result will be a (compressed) tar file(s) in the `releases' - directory. The README file is updated to contain the release date - and version number. - -* To create a tarball of all the files which are part of HDF5 go to - the top-level directory and type: - - tar cvf foo.tar `grep '^\.' MANIFEST |unexpand |cut -f1` - - -=========================================== -Last Modified: 15 October 1999 (technical content) -Last Modified: 28 April 2000 (included in HDF5 Technical Notes) -HDF Help Desk: hdfhelp@ncsa.uiuc.edu - -- - - diff --git a/doc/html/TechNotes/Makefile.am b/doc/html/TechNotes/Makefile.am deleted file mode 100644 index a0aca2d..0000000 --- a/doc/html/TechNotes/Makefile.am +++ /dev/null @@ -1,25 +0,0 @@ -# HDF5 Library Doc Makefile(.in) -# -# Copyright (C) 1997, 2002 -# National Center for Supercomputing Applications. -# All rights reserved. -# -## -## Makefile.am -## Run automake to generate a Makefile.in from this file. -# - -include $(top_srcdir)/config/commence-doc.am - -localdocdir = $(docdir)/hdf5/TechNotes - -# Public doc files (to be installed)... -localdoc_DATA=BigDataSmMach.html ChStudy_1000x1000.gif ChStudy_250x250.gif \ - ChStudy_499x499.gif ChStudy_5000x1000.gif ChStudy_500x500.gif \ - ChStudy_p1.gif ChunkingStudy.html CodeReview.html \ - ExternalFiles.html FreeLists.html H4-H5Compat.html HeapMgmt.html \ - IOPipe.html LibMaint.html MemoryMgmt.html MoveDStruct.html \ - NamingScheme.html ObjectHeader.html RawDStorage.html \ - SWControls.html SymbolTables.html ThreadSafeLibrary.html VFL.html \ - VFLfunc.html Version.html openmp-hdf5.c openmp-hdf5.html \ - pipe1.gif pipe2.gif pipe3.gif pipe4.gif pipe5.gif version.gif diff --git a/doc/html/TechNotes/Makefile.in b/doc/html/TechNotes/Makefile.in deleted file mode 100644 index 2dc4278..0000000 --- a/doc/html/TechNotes/Makefile.in +++ /dev/null @@ -1,494 +0,0 @@ -# Makefile.in generated by automake 1.9.5 from Makefile.am. 
-# @configure_input@ - -# Copyright (C) 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002, -# 2003, 2004, 2005 Free Software Foundation, Inc. -# This Makefile.in is free software; the Free Software Foundation -# gives unlimited permission to copy and/or distribute it, -# with or without modifications, as long as this notice is preserved. - -# This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY, to the extent permitted by law; without -# even the implied warranty of MERCHANTABILITY or FITNESS FOR A -# PARTICULAR PURPOSE. - -@SET_MAKE@ - -# HDF5 Library Doc Makefile(.in) -# -# Copyright (C) 1997, 2002 -# National Center for Supercomputing Applications. -# All rights reserved. -# -# - -srcdir = @srcdir@ -top_srcdir = @top_srcdir@ -VPATH = @srcdir@ -pkgdatadir = $(datadir)/@PACKAGE@ -pkglibdir = $(libdir)/@PACKAGE@ -pkgincludedir = $(includedir)/@PACKAGE@ -top_builddir = ../../.. -am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd -INSTALL = @INSTALL@ -install_sh_DATA = $(install_sh) -c -m 644 -install_sh_PROGRAM = $(install_sh) -c -install_sh_SCRIPT = $(install_sh) -c -INSTALL_HEADER = $(INSTALL_DATA) -transform = $(program_transform_name) -NORMAL_INSTALL = : -PRE_INSTALL = : -POST_INSTALL = : -NORMAL_UNINSTALL = : -PRE_UNINSTALL = : -POST_UNINSTALL = : -build_triplet = @build@ -host_triplet = @host@ -DIST_COMMON = $(srcdir)/Makefile.am $(srcdir)/Makefile.in \ - $(top_srcdir)/config/commence-doc.am \ - $(top_srcdir)/config/commence.am -subdir = doc/html/TechNotes -ACLOCAL_M4 = $(top_srcdir)/aclocal.m4 -am__aclocal_m4_deps = $(top_srcdir)/configure.in -am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \ - $(ACLOCAL_M4) -mkinstalldirs = $(SHELL) $(top_srcdir)/bin/mkinstalldirs -CONFIG_HEADER = $(top_builddir)/src/H5config.h -CONFIG_CLEAN_FILES = -SOURCES = -DIST_SOURCES = -am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; -am__vpath_adj = case $$p in \ - $(srcdir)/*) f=`echo "$$p" 
| sed "s|^$$srcdirstrip/||"`;; \ - *) f=$$p;; \ - esac; -am__strip_dir = `echo $$p | sed -e 's|^.*/||'`; -am__installdirs = "$(DESTDIR)$(localdocdir)" -localdocDATA_INSTALL = $(INSTALL_DATA) -DATA = $(localdoc_DATA) -DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST) - -# Set the paths for AFS installs of autotools for Linux machines -# Ideally, these tools should never be needed during the build. -ACLOCAL = /afs/ncsa/projects/hdf/packages/automake_1.9.5/Linux_2.4/bin/aclocal -I /afs/ncsa/projects/hdf/packages/libtool_1.5.14/Linux_2.4/share/aclocal -ADD_PARALLEL_FILES = @ADD_PARALLEL_FILES@ -AMDEP_FALSE = @AMDEP_FALSE@ -AMDEP_TRUE = @AMDEP_TRUE@ -AMTAR = @AMTAR@ -AM_MAKEFLAGS = @AM_MAKEFLAGS@ -AR = @AR@ -AUTOCONF = /afs/ncsa/projects/hdf/packages/autoconf_2.59/Linux_2.4/bin/autoconf -AUTOHEADER = /afs/ncsa/projects/hdf/packages/autoconf_2.59/Linux_2.4/bin/autoheader -AUTOMAKE = /afs/ncsa/projects/hdf/packages/automake_1.9.5/Linux_2.4/bin/automake -AWK = @AWK@ -BUILD_CXX_CONDITIONAL_FALSE = @BUILD_CXX_CONDITIONAL_FALSE@ -BUILD_CXX_CONDITIONAL_TRUE = @BUILD_CXX_CONDITIONAL_TRUE@ -BUILD_FORTRAN_CONDITIONAL_FALSE = @BUILD_FORTRAN_CONDITIONAL_FALSE@ -BUILD_FORTRAN_CONDITIONAL_TRUE = @BUILD_FORTRAN_CONDITIONAL_TRUE@ -BUILD_HDF5_HL_CONDITIONAL_FALSE = @BUILD_HDF5_HL_CONDITIONAL_FALSE@ -BUILD_HDF5_HL_CONDITIONAL_TRUE = @BUILD_HDF5_HL_CONDITIONAL_TRUE@ -BUILD_PABLO_CONDITIONAL_FALSE = @BUILD_PABLO_CONDITIONAL_FALSE@ -BUILD_PABLO_CONDITIONAL_TRUE = @BUILD_PABLO_CONDITIONAL_TRUE@ -BUILD_PARALLEL_CONDITIONAL_FALSE = @BUILD_PARALLEL_CONDITIONAL_FALSE@ -BUILD_PARALLEL_CONDITIONAL_TRUE = @BUILD_PARALLEL_CONDITIONAL_TRUE@ -BUILD_PDB2HDF = @BUILD_PDB2HDF@ -BUILD_PDB2HDF_CONDITIONAL_FALSE = @BUILD_PDB2HDF_CONDITIONAL_FALSE@ -BUILD_PDB2HDF_CONDITIONAL_TRUE = @BUILD_PDB2HDF_CONDITIONAL_TRUE@ -BYTESEX = @BYTESEX@ -CC = @CC@ -CCDEPMODE = @CCDEPMODE@ -CC_VERSION = @CC_VERSION@ -CFLAGS = @CFLAGS@ -CONFIG_DATE = @CONFIG_DATE@ -CONFIG_MODE = @CONFIG_MODE@ -CONFIG_USER = 
@CONFIG_USER@ -CPP = @CPP@ -CPPFLAGS = @CPPFLAGS@ -CXX = @CXX@ -CXXCPP = @CXXCPP@ -CXXDEPMODE = @CXXDEPMODE@ -CXXFLAGS = @CXXFLAGS@ -CYGPATH_W = @CYGPATH_W@ -DEBUG_PKG = @DEBUG_PKG@ -DEFS = @DEFS@ -DEPDIR = @DEPDIR@ -DYNAMIC_DIRS = @DYNAMIC_DIRS@ -ECHO = @ECHO@ -ECHO_C = @ECHO_C@ -ECHO_N = @ECHO_N@ -ECHO_T = @ECHO_T@ -EGREP = @EGREP@ -EXEEXT = @EXEEXT@ -F77 = @F77@ - -# Make sure that these variables are exported to the Makefiles -F9XMODEXT = @F9XMODEXT@ -F9XMODFLAG = @F9XMODFLAG@ -F9XSUFFIXFLAG = @F9XSUFFIXFLAG@ -FC = @FC@ -FCFLAGS = @FCFLAGS@ -FCLIBS = @FCLIBS@ -FFLAGS = @FFLAGS@ -FILTERS = @FILTERS@ -FSEARCH_DIRS = @FSEARCH_DIRS@ -H5_VERSION = @H5_VERSION@ -HADDR_T = @HADDR_T@ -HDF5_INTERFACES = @HDF5_INTERFACES@ -HID_T = @HID_T@ -HL = @HL@ -HL_FOR = @HL_FOR@ -HSIZET = @HSIZET@ -HSIZE_T = @HSIZE_T@ -HSSIZE_T = @HSSIZE_T@ -INSTALL_DATA = @INSTALL_DATA@ -INSTALL_PROGRAM = @INSTALL_PROGRAM@ -INSTALL_SCRIPT = @INSTALL_SCRIPT@ -INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@ -INSTRUMENT_LIBRARY = @INSTRUMENT_LIBRARY@ -LDFLAGS = @LDFLAGS@ -LIBOBJS = @LIBOBJS@ -LIBS = @LIBS@ -LIBTOOL = @LIBTOOL@ -LN_S = @LN_S@ -LTLIBOBJS = @LTLIBOBJS@ -LT_STATIC_EXEC = @LT_STATIC_EXEC@ -MAINT = @MAINT@ -MAINTAINER_MODE_FALSE = @MAINTAINER_MODE_FALSE@ -MAINTAINER_MODE_TRUE = @MAINTAINER_MODE_TRUE@ -MAKEINFO = @MAKEINFO@ -MPE = @MPE@ -OBJECT_NAMELEN_DEFAULT_F = @OBJECT_NAMELEN_DEFAULT_F@ -OBJEXT = @OBJEXT@ -PACKAGE = @PACKAGE@ -PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@ -PACKAGE_NAME = @PACKAGE_NAME@ -PACKAGE_STRING = @PACKAGE_STRING@ -PACKAGE_TARNAME = @PACKAGE_TARNAME@ -PACKAGE_VERSION = @PACKAGE_VERSION@ -PARALLEL = @PARALLEL@ -PATH_SEPARATOR = @PATH_SEPARATOR@ -PERL = @PERL@ -PTHREAD = @PTHREAD@ -RANLIB = @RANLIB@ -ROOT = @ROOT@ -RUNPARALLEL = @RUNPARALLEL@ -RUNSERIAL = @RUNSERIAL@ -R_INTEGER = @R_INTEGER@ -R_LARGE = @R_LARGE@ -SEARCH = @SEARCH@ -SETX = @SETX@ -SET_MAKE = @SET_MAKE@ - -# Hardcode SHELL to be /bin/sh. 
Most machines have this shell, and -# on at least one machine configure fails to detect its existence (janus). -# Also, when HDF5 is configured on one machine but run on another, -# configure's automatic SHELL detection may not work on the build machine. -SHELL = /bin/sh -SIZE_T = @SIZE_T@ -STATIC_SHARED = @STATIC_SHARED@ -STRIP = @STRIP@ -TESTPARALLEL = @TESTPARALLEL@ -TRACE_API = @TRACE_API@ -USE_FILTER_DEFLATE = @USE_FILTER_DEFLATE@ -USE_FILTER_FLETCHER32 = @USE_FILTER_FLETCHER32@ -USE_FILTER_NBIT = @USE_FILTER_NBIT@ -USE_FILTER_SCALEOFFSET = @USE_FILTER_SCALEOFFSET@ -USE_FILTER_SHUFFLE = @USE_FILTER_SHUFFLE@ -USE_FILTER_SZIP = @USE_FILTER_SZIP@ -VERSION = @VERSION@ -ac_ct_AR = @ac_ct_AR@ -ac_ct_CC = @ac_ct_CC@ -ac_ct_CXX = @ac_ct_CXX@ -ac_ct_F77 = @ac_ct_F77@ -ac_ct_FC = @ac_ct_FC@ -ac_ct_RANLIB = @ac_ct_RANLIB@ -ac_ct_STRIP = @ac_ct_STRIP@ -am__fastdepCC_FALSE = @am__fastdepCC_FALSE@ -am__fastdepCC_TRUE = @am__fastdepCC_TRUE@ -am__fastdepCXX_FALSE = @am__fastdepCXX_FALSE@ -am__fastdepCXX_TRUE = @am__fastdepCXX_TRUE@ -am__include = @am__include@ -am__leading_dot = @am__leading_dot@ -am__quote = @am__quote@ -am__tar = @am__tar@ -am__untar = @am__untar@ -bindir = @bindir@ -build = @build@ -build_alias = @build_alias@ -build_cpu = @build_cpu@ -build_os = @build_os@ -build_vendor = @build_vendor@ -datadir = @datadir@ -exec_prefix = @exec_prefix@ -host = @host@ -host_alias = @host_alias@ -host_cpu = @host_cpu@ -host_os = @host_os@ -host_vendor = @host_vendor@ - -# Install directories that automake doesn't know about -includedir = $(exec_prefix)/include -infodir = @infodir@ -install_sh = @install_sh@ -libdir = @libdir@ -libexecdir = @libexecdir@ -localstatedir = @localstatedir@ -mandir = @mandir@ -mkdir_p = @mkdir_p@ -oldincludedir = @oldincludedir@ -prefix = @prefix@ -program_transform_name = @program_transform_name@ -sbindir = @sbindir@ -sharedstatedir = @sharedstatedir@ -sysconfdir = @sysconfdir@ -target_alias = @target_alias@ - -# Shell commands used in Makefiles 
-RM = rm -f -CP = cp - -# Some machines need a command to run executables; this is that command -# so that our tests will run. -# We use RUNTESTS instead of RUNSERIAL directly because it may be that -# some tests need to be run with a different command. Older versions -# of the makefiles used the command -# $(LIBTOOL) --mode=execute -# in some directories, for instance. -RUNTESTS = $(RUNSERIAL) - -# Libraries to link to while building -LIBHDF5 = $(top_builddir)/src/libhdf5.la -LIBH5TEST = $(top_builddir)/test/libh5test.la -LIBH5F = $(top_builddir)/fortran/src/libhdf5_fortran.la -LIBH5FTEST = $(top_builddir)/fortran/test/libh5test_fortran.la -LIBH5CPP = $(top_builddir)/c++/src/libhdf5_cpp.la -LIBH5TOOLS = $(top_builddir)/tools/lib/libh5tools.la -LIBH5_HL = $(top_builddir)/hl/src/libhdf5_hl.la -LIBH5F_HL = $(top_builddir)/hl/fortran/src/libhdf5hl_fortran.la -LIBH5CPP_HL = $(top_builddir)/hl/c++/src/libhdf5_hl_cpp.la -docdir = $(exec_prefix)/doc - -# Scripts used to build examples -H5CC = $(bindir)/h5cc -H5CC_PP = $(bindir)/h5pcc -H5FC = $(bindir)/h5fc -H5FC_PP = $(bindir)/h5pfc - -# .chkexe and .chksh files are used to mark tests that have run successfully. -MOSTLYCLEANFILES = *.chkexe *.chksh -localdocdir = $(docdir)/hdf5/TechNotes - -# Public doc files (to be installed)... 
-localdoc_DATA = BigDataSmMach.html ChStudy_1000x1000.gif ChStudy_250x250.gif \ - ChStudy_499x499.gif ChStudy_5000x1000.gif ChStudy_500x500.gif \ - ChStudy_p1.gif ChunkingStudy.html CodeReview.html \ - ExternalFiles.html FreeLists.html H4-H5Compat.html HeapMgmt.html \ - IOPipe.html LibMaint.html MemoryMgmt.html MoveDStruct.html \ - NamingScheme.html ObjectHeader.html RawDStorage.html \ - SWControls.html SymbolTables.html ThreadSafeLibrary.html VFL.html \ - VFLfunc.html Version.html openmp-hdf5.c openmp-hdf5.html \ - pipe1.gif pipe2.gif pipe3.gif pipe4.gif pipe5.gif version.gif - -all: all-am - -.SUFFIXES: -$(srcdir)/Makefile.in: @MAINTAINER_MODE_TRUE@ $(srcdir)/Makefile.am $(top_srcdir)/config/commence-doc.am $(top_srcdir)/config/commence.am $(am__configure_deps) - @for dep in $?; do \ - case '$(am__configure_deps)' in \ - *$$dep*) \ - cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh \ - && exit 0; \ - exit 1;; \ - esac; \ - done; \ - echo ' cd $(top_srcdir) && $(AUTOMAKE) --foreign doc/html/TechNotes/Makefile'; \ - cd $(top_srcdir) && \ - $(AUTOMAKE) --foreign doc/html/TechNotes/Makefile -.PRECIOUS: Makefile -Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status - @case '$?' 
in \ - *config.status*) \ - cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \ - *) \ - echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \ - cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \ - esac; - -$(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES) - cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh - -$(top_srcdir)/configure: @MAINTAINER_MODE_TRUE@ $(am__configure_deps) - cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh -$(ACLOCAL_M4): @MAINTAINER_MODE_TRUE@ $(am__aclocal_m4_deps) - cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh - -mostlyclean-libtool: - -rm -f *.lo - -clean-libtool: - -rm -rf .libs _libs - -distclean-libtool: - -rm -f libtool -uninstall-info-am: -install-localdocDATA: $(localdoc_DATA) - @$(NORMAL_INSTALL) - test -z "$(localdocdir)" || $(mkdir_p) "$(DESTDIR)$(localdocdir)" - @list='$(localdoc_DATA)'; for p in $$list; do \ - if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \ - f=$(am__strip_dir) \ - echo " $(localdocDATA_INSTALL) '$$d$$p' '$(DESTDIR)$(localdocdir)/$$f'"; \ - $(localdocDATA_INSTALL) "$$d$$p" "$(DESTDIR)$(localdocdir)/$$f"; \ - done - -uninstall-localdocDATA: - @$(NORMAL_UNINSTALL) - @list='$(localdoc_DATA)'; for p in $$list; do \ - f=$(am__strip_dir) \ - echo " rm -f '$(DESTDIR)$(localdocdir)/$$f'"; \ - rm -f "$(DESTDIR)$(localdocdir)/$$f"; \ - done -tags: TAGS -TAGS: - -ctags: CTAGS -CTAGS: - - -distdir: $(DISTFILES) - $(mkdir_p) $(distdir)/../../../config - @srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; \ - topsrcdirstrip=`echo "$(top_srcdir)" | sed 's|.|.|g'`; \ - list='$(DISTFILES)'; for file in $$list; do \ - case $$file in \ - $(srcdir)/*) file=`echo "$$file" | sed "s|^$$srcdirstrip/||"`;; \ - $(top_srcdir)/*) file=`echo "$$file" | sed "s|^$$topsrcdirstrip/|$(top_builddir)/|"`;; \ - esac; \ - if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \ - 
dir=`echo "$$file" | sed -e 's,/[^/]*$$,,'`; \ - if test "$$dir" != "$$file" && test "$$dir" != "."; then \ - dir="/$$dir"; \ - $(mkdir_p) "$(distdir)$$dir"; \ - else \ - dir=''; \ - fi; \ - if test -d $$d/$$file; then \ - if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \ - cp -pR $(srcdir)/$$file $(distdir)$$dir || exit 1; \ - fi; \ - cp -pR $$d/$$file $(distdir)$$dir || exit 1; \ - else \ - test -f $(distdir)/$$file \ - || cp -p $$d/$$file $(distdir)/$$file \ - || exit 1; \ - fi; \ - done -check-am: all-am -check: check-am -all-am: Makefile $(DATA) -installdirs: - for dir in "$(DESTDIR)$(localdocdir)"; do \ - test -z "$$dir" || $(mkdir_p) "$$dir"; \ - done -install: install-am -install-exec: install-exec-am -install-data: install-data-am -uninstall: uninstall-am - -install-am: all-am - @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am - -installcheck: installcheck-am -install-strip: - $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ - install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ - `test -z '$(STRIP)' || \ - echo "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'"` install -mostlyclean-generic: - -test -z "$(MOSTLYCLEANFILES)" || rm -f $(MOSTLYCLEANFILES) - -clean-generic: - -distclean-generic: - -test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES) - -maintainer-clean-generic: - @echo "This command is intended for maintainers to use" - @echo "it deletes files that may require special tools to rebuild." 
-clean: clean-am - -clean-am: clean-generic clean-libtool mostlyclean-am - -distclean: distclean-am - -rm -f Makefile -distclean-am: clean-am distclean-generic distclean-libtool - -dvi: dvi-am - -dvi-am: - -html: html-am - -info: info-am - -info-am: - -install-data-am: install-localdocDATA - -install-exec-am: - -install-info: install-info-am - -install-man: - -installcheck-am: - -maintainer-clean: maintainer-clean-am - -rm -f Makefile -maintainer-clean-am: distclean-am maintainer-clean-generic - -mostlyclean: mostlyclean-am - -mostlyclean-am: mostlyclean-generic mostlyclean-libtool - -pdf: pdf-am - -pdf-am: - -ps: ps-am - -ps-am: - -uninstall-am: uninstall-info-am uninstall-localdocDATA - -.PHONY: all all-am check check-am clean clean-generic clean-libtool \ - distclean distclean-generic distclean-libtool distdir dvi \ - dvi-am html html-am info info-am install install-am \ - install-data install-data-am install-exec install-exec-am \ - install-info install-info-am install-localdocDATA install-man \ - install-strip installcheck installcheck-am installdirs \ - maintainer-clean maintainer-clean-generic mostlyclean \ - mostlyclean-generic mostlyclean-libtool pdf pdf-am ps ps-am \ - uninstall uninstall-am uninstall-info-am \ - uninstall-localdocDATA - - -# Ignore most rules -lib progs check test _test check-p check-s: - @echo "Nothing to be done" - -tests dep depend: - @@SETX@; for d in X $(SUBDIRS); do \ - if test $$d != X; then \ - (cd $$d && $(MAKE) $(AM_MAKEFLAGS) $@) || exit 1; \ - fi; - done - -# In docs directory, install-doc is the same as install -install-doc install-all: - $(MAKE) $(AM_MAKEFLAGS) install -uninstall-doc uninstall-all: - $(MAKE) $(AM_MAKEFLAGS) uninstall -# Tell versions [3.59,3.63) of GNU make to not export all variables. -# Otherwise a system limit (for SysV at least) may be exceeded. 
-.NOEXPORT: diff --git a/doc/html/TechNotes/MemoryMgmt.html b/doc/html/TechNotes/MemoryMgmt.html deleted file mode 100644 index 93782b5..0000000 --- a/doc/html/TechNotes/MemoryMgmt.html +++ /dev/null @@ -1,510 +0,0 @@ - - - -
Some form of memory management may be necessary in HDF5 when - the various deletion operators are implemented so that the - file memory is not permanently orphaned. However, since an - HDF5 file was designed with persistent data in mind, the - importance of a memory manager is questionable. - -
On the other hand, when certain meta data containers (file glue) - grow, they may need to be relocated in order to keep the - container contiguous. - -
- Example: An object header consists of up to two - chunks of contiguous memory. The first chunk is a fixed - size at a fixed location when the header link count is - greater than one. Thus, inserting additional items into an - object header may require the second chunk to expand. When - this occurs, the second chunk may need to move to another - location in the file, freeing the file memory which that - chunk originally occupied. -- -
The relocation of meta data containers could potentially - orphan a significant amount of file memory if the application - has made poor estimates for preallocation sizes. - - -
Memory management by the library can be independent of memory - management support by the file format. The file format can - support no memory management, some memory management, or full - memory management. Similarly with the library. - -
We now evaluate each combination of library support with file - support: - -
The file contains an unsorted, doubly-linked list of free - blocks. The address of the head of the list appears in the - boot block. Each free block contains the following fields: - -
+----------+----------+----------+----------+
|   byte   |   byte   |   byte   |   byte   |
+----------+----------+----------+----------+
|           Free Block Signature            |
+-------------------------------------------+
|           Total Free Block Size           |
+-------------------------------------------+
|          Address of Left Sibling          |
+-------------------------------------------+
|         Address of Right Sibling          |
+-------------------------------------------+
|          Remainder of Free Block          |
+-------------------------------------------+
The library reads as much of the free list as convenient when - convenient and pushes those entries onto stacks. This can - occur when a file is opened or any time during the life of the - file. There is one stack for each free block size and the - stacks are sorted by size in a balanced tree in memory. - -
Deallocation involves finding the correct stack or creating - a new one (an O(log K) operation where K is - the number of stacks), pushing the free block info onto the - stack (a constant-time operation), and inserting the free - block into the file free block list (a constant-time operation - which doesn't necessarily involve any I/O since the free blocks - can be cached like other objects). No attempt is made to - coalesce adjacent free blocks into larger blocks. - -
Allocation involves finding the correct stack (an O(log - K) operation), removing the top item from the stack - (a constant-time operation), and removing the block from the - file free block list (a constant-time operation). If there is - no free block of the requested size or larger, then the file - is extended. - -
To provide shareability of the free list between processes,
 - the last step of an allocation will check for the free block
 - signature and if it doesn't find one will repeat the process.
 - Alternatively, a process can temporarily remove free blocks
 - from the file and hold them in its own private pool.

 -
To summarize... -
The HDF5 file format supports a general B-tree mechanism - for storing data with keys. If we use a B-tree to represent - all parts of the file that are free and the B-tree is indexed - so that a free file chunk can be found if we know the starting - or ending address, then we can efficiently determine whether a - free chunk begins or ends at the specified address. Call this - the Address B-Tree. - -
If a second B-tree points to a set of stacks where the - members of a particular stack are all free chunks of the same - size, and the tree is indexed by chunk size, then we can - efficiently find the best-fit chunk size for a memory request. - Call this the Size B-Tree. - -
All free blocks of a particular size can be linked together - with an unsorted, doubly-linked, circular list and the left - and right sibling addresses can be stored within the free - chunk, allowing us to remove or insert items from the list in - constant time. - -
Deallocation of a block of file memory consists of: - -
Allocation is similar to deallocation. - -
To summarize... - -
Since file data structures can be cached in memory by the H5AC - package it becomes problematic to move such a data structure in - the file. One cannot just copy a portion of the file from one - location to another because: - -
Here's a correct method to move data from one location to
 - another. The example code assumes that one is moving a B-link
 - tree node from old_addr to new_addr.
 -
 - 1. Flush the data structure to disk. The last argument to
 -    H5AC_flush is FALSE, so the cached copy is not destroyed.
 -
 -        H5AC_flush (f, H5AC_BT, old_addr, FALSE);
 -
 - 2. Copy the data from the old file address to the new file
 -    address.
 -
 -        H5F_block_read (f, old_addr, size, buf);
 -        H5F_block_write (f, new_addr, size, buf);
 -
 - 3. Notify the cache that the data structure has moved.
 -
 -        H5AC_rename (f, H5AC_BT, old_addr, new_addr);
- -
-
For Example:
-
-
-
-
For Example:
-
-
-
-
-
For Example:
-
-
- and a header file of private stuff - -
-
- and a header for private prototypes - -
-
- By splitting the prototypes into separate include files we don't - have to recompile everything when just one function prototype - changes. - -
- - PACKAGES - - -
-Names exported beyond function scope begin with `H5' followed by zero, -one, or two upper-case letters that describe the class of object. -This prefix is the package name. The implementation of packages -doesn't necessarily have to map 1:1 to the source files. -
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-Each package implements a single main class of object (e.g., the H5B -package implements B-link trees). The main data type of a package is -the package name followed by `_t'. -
-
-
-
- -Not all packages implement a data type (H5, H5MF) and some -packages provide access to a preexisting data type (H5MM, H5S). -
- - - PUBLIC vs PRIVATE - -
-If the symbol is for internal use only, then the package name is -followed by an underscore and the rest of the name. Otherwise, the -symbol is part of the API and there is no underscore between the -package name and the rest of the name. -
-
-
-
-For functions, this is important because the API functions never pass
-pointers around (they use atoms instead for hiding the implementation)
-and they perform stringent checks on their arguments. Internal
-functions, on the other hand, check arguments with assert().
-
-Data types like H5B_t carry no information about whether the type is -public or private since it doesn't matter. - -
- - - INTEGRAL TYPES - -
-Integral fixed-point type names are an optional `u' followed by `int' -followed by the size in bits (8, 16, -32, or 64). There is no trailing `_t' because these are common -enough and follow their own naming convention. -
-
--
- hbool_t -- boolean values (BTRUE, BFALSE, BFAIL) -
-- int8 -- signed 8-bit integers -
-- uint8 -- unsigned 8-bit integers -
-- int16 -- signed 16-bit integers -
-- uint16 -- unsigned 16-bit integers -
-- int32 -- signed 32-bit integers -
-- uint32 -- unsigned 32-bit integers -
-- int64 -- signed 64-bit integers -
-- uint64 -- unsigned 64-bit integers -
-- intn -- "native" integers -
-- uintn -- "native" unsigned integers - -
- - OTHER TYPES - - -
- -Other data types are always followed by `_t'. -
-
--
- H5B_key_t-- additional data type used by H5B package. -
- -However, if the name is so common that it's used almost everywhere, -then we make an alias for it by removing the package name and leading -underscore and replacing it with an `h' (the main datatype for a -package already has a short enough name, so we don't have aliases for -them). -
-
--
- typedef H5E_err_t herr_t; -
- - GLOBAL VARIABLES - -
-Global variables include the package name and end with `_g'. -
-
--
- H5AC_methods_g -- global variable in the H5AC package. -
- - - - -MACROS, PREPROCESSOR CONSTANTS, AND ENUM MEMBERS - - -
-Same rules as other symbols except the name is all upper case. There
-are a few exceptions:
-
- --
- MIN(x,y), MAX(x,y) and their relatives -
- No naming scheme; determined by OS and compiler.
- These appear only in one header file anyway.
-
-
- - -
-
-
- -haddr_t -H5O_new (hdf5_file_t *f, intn nrefs, size_t size_hint) - - Creates a new empty object header and returns its address. - The SIZE_HINT is the initial size of the data portion of the - object header and NREFS is the number of symbol table entries - that reference this object header (normally one). - - If SIZE_HINT is too small, then at least some default amount - of space is allocated for the object header. - -intn /*num remaining links */ -H5O_link (hdf5_file_t *f, /*file containing header */ - haddr_t addr, /*header file address */ - intn adjust) /*link adjustment amount */ - - -size_t -H5O_sizeof (hdf5_file_t *f, /*file containing header */ - haddr_t addr, /*header file address */ - H5O_class_t *type, /*message type or H5O_ANY */ - intn sequence) /*sequence number, usually zero */ - - Returns the size of a particular instance of a message in an - object header. When an object header has more than one - instance of a particular message type, then SEQUENCE indicates - which instance to return. - -void * -H5O_read (hdf5_file_t *f, /*file containing header */ - haddr_t addr, /*header file address */ - H5G_entry_t *ent, /*optional symbol table entry */ - H5O_class_t *type, /*message type or H5O_ANY */ - intn sequence, /*sequence number, usually zero */ - size_t size, /*size of output message */ - void *mesg) /*output buffer */ - - Reads a message from the object header into memory. 
- -const void * -H5O_peek (hdf5_file_t *f, /*file containing header */ - haddr_t addr, /*header file address */ - H5G_entry_t *ent, /*optional symbol table entry */ - H5O_class_t *type, /*type of message or H5O_ANY */ - intn sequence) /*sequence number, usually zero */ - -haddr_t /*new heap address */ -H5O_modify (hdf5_file_t *f, /*file containing header */ - haddr_t addr, /*header file address */ - H5G_entry_t *ent, /*optional symbol table entry */ - hbool_t *ent_modified, /*entry modification flag */ - H5O_class_t *type, /*message type */ - intn overwrite, /*sequence number or -1 */ - void *mesg) /*the message */ - - -=========================================== -Last Modified: 8 July 1998 (technical content) -Last Modified: 28 April 2000 (included in HDF5 Technical Notes) -HDF Help Desk: hdfhelp@ncsa.uiuc.edu - -- - - diff --git a/doc/html/TechNotes/RawDStorage.html b/doc/html/TechNotes/RawDStorage.html deleted file mode 100644 index 87ea54d..0000000 --- a/doc/html/TechNotes/RawDStorage.html +++ /dev/null @@ -1,274 +0,0 @@ - - - -
This document describes the various ways that raw data is - stored in an HDF5 file and the object header messages which - contain the parameters for the storage. - -
Raw data storage has three components: the mapping from some - logical multi-dimensional element space to the linear address - space of a file, compression of the raw data on disk, and - striping of raw data across multiple files. These components - are orthogonal. - -
Some goals of the storage mechanism are to be able to
 - efficiently store data which is:
 -
The Sparse Large, Dynamic Size, and Subslab Access methods
 - share so much code that they can be described with a single
 - message. The new Indexed Storage Message (0x0008)
 - will replace the old Chunked Object (0x0009) and
 - Sparse Object (0x000A) Messages.
-
-
-
+----------+----------+----------+----------+
|   byte   |   byte   |   byte   |   byte   |
+----------+----------+----------+----------+
|             Address of B-tree             |
+----------+----------+----------+----------+
| Num Dims | Reserved | Reserved | Reserved |
+----------+----------+----------+----------+
|            Reserved (4 bytes)             |
+-------------------------------------------+
|    Alignment for Dimension 0 (4 bytes)    |
+-------------------------------------------+
|    Alignment for Dimension 1 (4 bytes)    |
+-------------------------------------------+
|                    ...                    |
+-------------------------------------------+
|    Alignment for Dimension N (4 bytes)    |
+-------------------------------------------+

(Num Dims is the Number of Dimensions field, one byte; the remaining
three bytes of that row are reserved.)
The alignment fields indicate the alignment in logical space to - use when allocating new storage areas on disk. For instance, - writing every other element of a 100-element one-dimensional - array (using one HDF5 I/O partial write operation per element) - that has unit storage alignment would result in 50 - single-element, discontiguous storage segments. However, using - an alignment of 25 would result in only four discontiguous - segments. The size of the message varies with the number of - dimensions. - -
A B-tree is used to point to the discontiguous portions of - storage which has been allocated for the object. All keys of a - particular B-tree are the same size and are a function of the - number of dimensions. It is therefore not possible to change the - dimensionality of an indexed storage array after its B-tree is - created. - -
-
+----------+----------+----------+----------+
|   byte   |   byte   |   byte   |   byte   |
+----------+----------+----------+----------+
|  External File Number or Zero (4 bytes)   |
+-------------------------------------------+
|   Chunk Offset in Dimension 0 (4 bytes)   |
+-------------------------------------------+
|   Chunk Offset in Dimension 1 (4 bytes)   |
+-------------------------------------------+
|                    ...                    |
+-------------------------------------------+
|   Chunk Offset in Dimension N (4 bytes)   |
+-------------------------------------------+
The keys within a B-tree obey an ordering based on the chunk - offsets. If the offsets in dimension-0 are equal, then - dimension-1 is used, etc. The External File Number field - contains a 1-origin offset into the External File List message - which contains the name of the external file in which that chunk - is stored. - -
The indexed storage will support arbitrary striping at the - chunk level; each chunk can be stored in any file. This is - accomplished by using the External File Number field of an - indexed storage B-tree key as a 1-origin offset into an External - File List Message (0x0009) which takes the form: - -
-
+----------+----------+----------+----------+
|   byte   |   byte   |   byte   |   byte   |
+----------+----------+----------+----------+
|             Name Heap Address             |
+-------------------------------------------+
|    Number of Slots Allocated (4 bytes)    |
+-------------------------------------------+
|      Number of File Names (4 bytes)       |
+-------------------------------------------+
|  Byte Offset of Name 1 in Heap (4 bytes)  |
+-------------------------------------------+
|  Byte Offset of Name 2 in Heap (4 bytes)  |
+-------------------------------------------+
|                    ...                    |
+-------------------------------------------+
|              Unused Slot(s)               |
+-------------------------------------------+
Each indexed storage array that has all or part of its data
 - stored in external files will contain a single external file
 - list message. The size of the message is determined when it
 - is created, but it may be possible to enlarge the
 - message on demand by moving it. At this time, it's not possible
 - for multiple arrays to share a single external file list
 - message.
 -
 - H5O_efl_t *H5O_efl_new (H5G_entry_t *object, intn nslots_hint, intn heap_size_hint)
 -
 - intn H5O_efl_index (H5O_efl_t *efl, const char *filename)
 -
 - H5F_low_t *H5O_efl_open (H5O_efl_t *efl, intn index, uintn mode)
 -
 - herr_t H5O_efl_release (H5O_efl_t *efl)
 -
 - H5O_efl_new flushes the message to disk.
- HDF5 files use 8-byte addresses by default, but users can change this to 2, 4, or even 16 bytes. This means that it is possible to have files that only address 64 KB of space, and thus that HDF must handle the case of files that have enough space on disk but not enough internal address space to be written.
- -Thus, every time space is allocated in a file, HDF needs to check that this allocation is within the file’s address space. If not, HDF should output an error and ensure that all the data currently in the file (everything that is still addressable) is successfully written to disk.
- -Unfortunately, some structures are stored in memory and do not allocate space for themselves until the file is actually flushed to disk (object headers and the local heap). This is good for efficiency, since these structures can grow without creating the fragmentation that would result from frequent allocation and deallocation, but means that if the library runs out of addressable space while allocating memory, these structures will not be present in the file. Without them, HDF5 does not know how to parse the data in the file, rendering it unreadable.
- -Thus, HDF keeps track of the space “reserved for allocation” in the file (H5FD_t struct). When a function tries to allocate space in the file, it first checks that the allocation would not overflow the address space, taking the reserved space into account. When object headers or the heap finally allocate the space they have reserved, they free the reserved space before allocating file space.
- -A given object header is only flushed to disk once, but the heap can be flushed to disk multiple times over the life of the file and will require contiguous space every time. To handle this, the heap keeps track of how much space it has reserved. This allows it to reserve space only when it grows (when it is dirty and needs to be re-written to disk).
- -For instance, if the heap is flushed to disk, it frees its reserved space. If new data is inserted into the heap in memory, the heap may need to flush to disk again in a new, larger section of memory. Thus, not only does it reserve space in the file for this new data, but also for all of the previously-existing data in the heap to be re-written. The next insert, however, will only need to reserve space for its new data, since the rest of the heap already has space reserved for it.
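The reservation bookkeeping described above can be sketched in plain C. The names here (`file_space_t`, `file_try_alloc`, `file_reserve`) are hypothetical stand-ins for the accounting kept in the H5FD_t struct, not the actual HDF5 internals:

```c
#include <stdint.h>

/* Hypothetical stand-in for the allocation bookkeeping kept in H5FD_t. */
typedef struct {
    uint64_t maxaddr;   /* last usable address (limited by the address size) */
    uint64_t eoa;       /* current end of allocated space */
    uint64_t reserved;  /* space promised to in-memory structures (heap, headers) */
} file_space_t;

/* Try to allocate `size` bytes; fail if the allocation, together with
 * outstanding reservations, would overflow the file's address space. */
static int file_try_alloc(file_space_t *f, uint64_t size, uint64_t *addr_out)
{
    if (size > f->maxaddr - f->eoa - f->reserved)
        return -1;          /* out of addressable space */
    *addr_out = f->eoa;
    f->eoa += size;
    return 0;
}

/* Reserve space for a structure that will allocate later (e.g. the local
 * heap); the reserver releases its reservation before actually allocating. */
static int file_reserve(file_space_t *f, uint64_t size)
{
    if (size > f->maxaddr - f->eoa - f->reserved)
        return -1;
    f->reserved += size;
    return 0;
}
```

With 2-byte addresses the address space is 64 KB, so a large reservation can make an otherwise modest allocation fail even though the disk has room.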
- -Potential issues: -
-(Work in progress draft) -
- --A description of the knobs and turns, such as environment variables and settings, -that control the functionality of the HDF5 libraries and tools. This is -intended for HDF5 library and tool developers. HDF5 application users -may consult the document A guide to -debugging HDF5 API calls. -
- --
-
-
-
Last -modified: December 11, 2000
- - - diff --git a/doc/html/TechNotes/SymbolTables.html b/doc/html/TechNotes/SymbolTables.html deleted file mode 100644 index 05ee538..0000000 --- a/doc/html/TechNotes/SymbolTables.html +++ /dev/null @@ -1,329 +0,0 @@ - - - -- -A number of issues involving caching of object header messages in -symbol table entries must be resolved. - -What is the motivation for these changes? - - If we make objects completely independent of object name it allows - us to refer to one object by multiple names (a concept called hard - links in Unix file systems), which in turn provides an easy way to - share data between datasets. - - Every object in an HDF5 file has a unique, constant object header - address which serves as a handle (or OID) for the object. The - object header contains messages which describe the object. - - HDF5 allows some of the object header messages to be cached in - symbol table entries so that the object header doesn't have to be - read from disk. For instance, an entry for a directory caches the - directory disk addresses required to access that directory, so the - object header for that directory is seldom read. - - If an object has multiple names (that is, a link count greater than - one), then it has multiple symbol table entries which point to it. - All symbol table entries must agree on header messages. The - current mechanism is to turn off the caching of header messages in - symbol table entries when the header link count is more than one, - and to allow caching once the link count returns to one. - - However, in the current implementation, a package is allowed to - copy a symbol table entry and use it as a private cache for the - object header. This doesn't work for a number of reasons (all but - one require a `delete symbol entry' operation). - - 1. If two packages hold copies of the same symbol table entry, - they don't notify each other of changes to the symbol table - entry. 
Eventually, one package reads a cached message and - gets the wrong value because the other package changed the - message in the object header. - - 2. If one package holds a copy of the symbol table entry and - some other part of HDF5 removes the object and replaces it - with some other object, then the original package will - continue to access the non-existent object using the new - object header. - - 3. If one package holds a copy of the symbol table entry and - some other part of HDF5 (re)moves the directory which - contains the object, then the package will be unable to - update the symbol table entry with the new cached - data. Packages that refer to the object by the new name will - use old cached data. - - -The basic problem is that there may be multiple copies of the object -symbol table entry floating around in the code when there should -really be at most one per hard link. - - Level 0: A copy may exist on disk as part of a symbol table node, which - is a small 1d array of symbol table entries. - - Level 1: A copy may be cached in memory as part of a symbol table node - in the H5Gnode.c file by the H5AC layer. - - Level 2a: Another package may be holding a copy so it can perform - fast lookup of any header messages that might be cached in - the symbol table entry. It can't point directly to the - cached symbol table node because that node can dissappear - at any time. - - Level 2b: Packages may hold more than one copy of a symbol table - entry. For instance, if H5D_open() is called twice for - the same name, then two copies of the symbol table entry - for the dataset exist in the H5D package. - -How can level 2a and 2b be combined? - - If package data structures contained pointers to symbol table - entries instead of copies of symbol table entries and if H5G - allocated one symbol table entry per hard link, then it's trivial - for Level 2a and 2b to benefit from one another's actions since - they share the same cache. - -How does this work conceptually? 
- - Level 2a and 2b must notify Level 1 of their intent to use (or stop - using) a symbol table entry to access an object header. The - notification of the intent to access an object header is called - `opening' the object and releasing the access is `closing' the - object. - - Opening an object requires an object name which is used to locate - the symbol table entry to use for caching of object header - messages. The return value is a handle for the object. Figure 1 - shows the state after Dataset1 opens Object with a name that maps - through Entry1. The open request created a copy of Entry1 called - Shadow1 which exists even if SymNode1 is preempted from the H5AC - layer. - - ______ - Object / \ - SymNode1 +--------+ | - +--------+ _____\ | Header | | - | | / / +--------+ | - +--------+ +---------+ \______/ - | Entry1 | | Shadow1 | /____ - +--------+ +---------+ \ \ - : : \ - +--------+ +----------+ - | Dataset1 | - +----------+ - FIGURE 1 - - - - The SymNode1 can appear and disappear from the H5AC layer at any - time without affecting the Object Header data cached in the Shadow. - The rules are: - - * If the SymNode1 is present and is about to disappear and the - Shadow1 dirty bit is set, then Shadow1 is copied over Entry1, the - Entry1 dirty bit is set, and the Shadow1 dirty bit is cleared. - - * If something requests a copy of Entry1 (for a read-only peek - request), and Shadow1 exists, then a copy (not pointer) of Shadow1 - is returned instead. - - * Entry1 cannot be deleted while Shadow1 exists. - - * Entry1 cannot change directly if Shadow1 exists since this means - that some other package has opened the object and may be modifying - it. I haven't decided if it's useful to ever change Entry1 - directly (except of course within the H5G layer itself). - - * Shadow1 is created when Dataset1 `opens' the object through - Entry1. Dataset1 is given a pointer to Shadow1 and Shadow1's - reference count is incremented. 
- - * When Dataset1 `closes' the Object the Shadow1 reference count is - decremented. When the reference count reaches zero, if the - Shadow1 dirty bit is set, then Shadow1's contents are copied to - Entry1, and the Entry1 dirty bit is set. Shadow1 is then deleted - if its reference count is zero. This may require reading SymNode1 - back into the H5AC layer. - -What happens when another Dataset opens the Object through Entry1? - - If the current state is represented by the top part of Figure 2, - then Dataset2 will be given a pointer to Shadow1 and the Shadow1 - reference count will be incremented to two. The Object header link - count remains at one so Object Header messages continue to be cached - by Shadow1. Dataset1 and Dataset2 benefit from one another - actions. The resulting state is represented by Figure 2. - - _____ - SymNode1 Object / \ - +--------+ _____\ +--------+ | - | | / / | Header | | - +--------+ +---------+ +--------+ | - | Entry1 | | Shadow1 | /____ \_____/ - +--------+ +---------+ \ \ - : : _ \ - +--------+ |\ +----------+ - \ | Dataset1 | - \________ +----------+ - \ \ - +----------+ | - | Dataset2 | |- New Dataset - +----------+ | - / - FIGURE 2 - - -What happens when the link count for Object increases while Dataset -has the Object open? - - SymNode2 - +--------+ - SymNode1 Object | | - +--------+ ____\ +--------+ /______ +--------+ - | | / / | header | \ `| Entry2 | - +--------+ +---------+ +--------+ +--------+ - | Entry1 | | Shadow1 | /____ : : - +--------+ +---------+ \ \ +--------+ - : : \ - +--------+ +----------+ \________________/ - | Dataset1 | | - +----------+ New Link - - FIGURE 3 - - The current state is represented by the left part of Figure 3. To - create a new link the Object Header had to be located by traversing - through Entry1/Shadow1. On the way through, the Entry1/Shadow1 - cache is invalidated and the Object Header link count is - incremented. Entry2 is then added to SymNode2. 
- - Since the Object Header link count is greater than one, Object - header data will not be cached in Entry1/Shadow1. - - If the initial state had been all of Figure 3 and a third link is - being added and Object is open by Entry1 and Entry2, then creation - of the third link will invalidate the cache in Entry1 or Entry2. It - doesn't matter which since both caches are already invalidated - anyway. - -What happens if another Dataset opens the same object by another name? - - If the current state is represented by Figure 3, then a Shadow2 is - created and associated with Entry2. However, since the Object - Header link count is more than one, nothing gets cached in Shadow2 - (or Shadow1). - -What happens if the link count decreases? - - If the current state is represented by all of Figure 3 then it isn't - possible to delete Entry1 because the object is currently open - through that entry. Therefore, the link count must have - decreased because Entry2 was removed. - - As Dataset1 reads/writes messages in the Object header they will - begin to be cached in Shadow1 again because the Object header link - count is one. - -What happens if the object is removed while it's open? - - That operation is not allowed. - -What happens if the directory containing the object is deleted? - - That operation is not allowed since deleting the directory requires - that the directory be empty. The directory cannot be emptied - because the open object cannot be removed from the directory. - -What happens if the object is moved? - - Moving an object is a process consisting of creating a new - hard-link with the new name and then deleting the old name. - This will fail if the object is open. - -What happens if the directory containing the entry is moved? - - The entry and the shadow still exist and are associated with one - another. - -What if a file is flushed or closed when objects are open? 
- - Flushing a symbol table with open objects writes correct information - to the file since Shadow is copied to Entry before the table is - flushed. - - Closing a file with open objects will create a valid file but will - return failure. - -How is the Shadow associated with the Entry? - - A symbol table is composed of one or more symbol nodes. A node is a - small 1-d array of symbol table entries. The entries can move - around within a node and from node-to-node as entries are added or - removed from the symbol table and nodes can move around within a - symbol table, being created and destroyed as necessary. - - Since a symbol table has an object header with a unique and constant - file offset, and since H5G contains code to efficiently locate a - symbol table entry given it's name, we use these two values as a key - within a shadow to associate the shadow with the symbol table - entry. - - struct H5G_shadow_t { - haddr_t stab_addr; /*symbol table header address*/ - char *name; /*entry name wrt symbol table*/ - hbool_t dirty; /*out-of-date wrt stab entry?*/ - H5G_entry_t ent; /*my copy of stab entry */ - H5G_entry_t *main; /*the level 1 entry or null */ - H5G_shadow_t *next, *prev; /*other shadows for this stab*/ - }; - - The set of shadows will be organized in a hash table of linked - lists. Each linked list will contain the shadows associated with a - particular symbol table header address and the list will be sorted - lexicographically. - - Also, each Entry will have a pointer to the corresponding Shadow or - null if there is no shadow. - - When a symbol table node is loaded into the main cache, we look up - the linked list of shadows in the shadow hash table based on the - address of the symbol table object header. We then traverse that - list matching shadows with symbol table entries. - - We assume that opening/closing objects will be a relatively - infrequent event compared with loading/flushing symbol table - nodes. 
Therefore, if we keep the linked list of shadows sorted it - costs O(N) to open and close objects where N is the number of open - objects in that symbol table (instead of O(1)) but it costs only - O(N) to load a symbol table node (instead of O(N^2)). - -What about the root symbol entry? - - Level 1 storage for the root symbol entry is always available since - it's stored in the hdf5_file_t struct instead of a symbol table - node. However, the contents of that entry can move from the file - handle to a symbol table node by H5G_mkroot(). Therefore, if the - root object is opened, we keep a shadow entry for it whose - `stab_addr' field is zero and whose `name' is null. - - For this reason, the root object should always be read through the - H5G interface. - -One more key invariant: The H5O_STAB message in a symbol table header -never changes. This allows symbol table entries to cache the H5O_STAB -message for the symbol table to which it points without worrying about -whether the cache will ever be invalidated. - - -=========================================== -Last Modified: 8 July 1998 (technical content) -Last Modified: 28 April 2000 (included in HDF5 Technical Notes) -HDF Help Desk: hdfhelp@ncsa.uiuc.edu - -- - - diff --git a/doc/html/TechNotes/TestReview.html b/doc/html/TechNotes/TestReview.html deleted file mode 100644 index 410f662..0000000 --- a/doc/html/TechNotes/TestReview.html +++ /dev/null @@ -1,57 +0,0 @@ - - - -
This document describes the current state of the API test review. Currently, -the tests for each API function are being reviewed on an individual basis and -each API's tests are being described and improvements made. -
- -API Function | -Date Last Reviewed | -Status | -
---|---|---|
H5Dget_offset - | -Tuesday, November 11th, 2002 - | -Tests need to be updated - | -
H5Tget_native_type - | -Tuesday, November 11th, 2002 - | -Tests need to be updated - | -
This document describes the API test review results for H5Dget_offset(). -
- -Test case - | - -Test source file - | - -Test method - | - -Expected test results - | - -Notes - | - -
---|---|---|---|---|
Chunked dataset - | - -dsets.c - | - -
-
|
-
-FAIL - | - -
- Because dataset is stored in chunks that are indexed by a B-tree, there is -no single piece of data to query the offset of. - -It may be possible in the future to -enhance this function by querying the offset of a particular chunk (or chunks), -but that has limited use because chunks could be compressed, etc. with an I/O -filter. - - |
-
-
Compact dataset - | - -dsets.c - | - -
-
|
-
-FAIL - | - -
- Because the dataset is stored in the object header of the dataset, there is -no separate piece of data to query the offset of. - -It may be possible in the future to get the offset of the data in the object -header, but this is problematic because the messages in the object -header can get relocated in the file when changes (like adding attributes, etc.) -are made to the dataset, invalidating the address given to the user. - - |
-
-
Contiguous dataset, [user block size] == 0, not external - | - -dsets.c - | - -
-
|
-
-
- Succeed in getting the proper address and be able to verify -that the data at that address in the file is what was written out. - -When data storage allocation is "late" (the default), querying the offset -should fail if performed before data is written to the dataset. - - |
-
-Needs additional test to verify that the data written out is located at the -correct offset in the file. - | - -
Contiguous dataset, [user block size] != 0, not external - | - -dsets.c - | - -
-
|
-
-
- Succeed in getting the proper address and be able to verify -that the data at that address in the file is what was written out. - -When data storage allocation is "late" (the default), querying the offset -should fail if performed before data is written to the dataset. - - |
-
-Needs test for this case. - | - -
Contiguous dataset, [user block size] == 0, external data storage - | - -external.c - | - -
-
|
-
-FAIL - | - -
- In theory, it's easy to return the offset of the data in the external file, -but this wasn't done because it would be too easy for users to assume that the -offset returned was in the HDF5 file instead of the external file. - - |
-
-
The H5Dget_offset() function is not tested in parallel. Currently, there -does not appear to be a need for this. -
- - -This document describes the API test review results for H5Dget_native_type(). -
- -Test case - | - -Test source file - | - -Test method - | - -Expected test results - | - -Notes - | - -
---|---|---|---|---|
Native int datatype - | - -native.c - | - -
-
|
-
-Check that type's size, order and class are correct. - | - -
- Data is written & read back in for this test. - -It would be convenient to have a function in the test module for choosing
-the correct atomic datatype based on the particular platform settings. This
-should use the H5_SIZEOF_ |
-
-
Native long long datatype - | - -native.c - | - -
-
|
-
-Check that type's size, order and class are correct. - | - -
- Data is NOT written & read back in for this test. - - |
-
-
Native char datatype - | - -native.c - | - -
-
|
-
-Check that type's size, order and class are correct. - | - -
- Data is NOT written & read back in for this test. - - |
-
-
Native float datatype - | - -native.c - | - -
-
|
-
-Check that type's size, order and class are correct. - | - -
- Data is NOT written & read back in for this test. - -Need test for native double datatype (stored as 32-bit floating-point -datatype in file). This will probably require using an "epsilon" if the data -is compared for this test. - - |
-
-
Compound datatype with atomic fields - | - -native.c - | - -
-
|
-
-Check that native and unpacked datatypes are equal. - | - -
- Data is written & read back in for this test. - - |
-
-
Compound datatype with one compound field - | - -native.c - | - -
-
|
-
-Check that native and unpacked datatypes are equal. - | - -
- Data is written & read back in for this test. - -Could use test for compound datatype with multiple compound fields. - -Could use test for 3 or more nested deep compound datatype. - - |
-
-
Enum datatype - | - -native.c - | - -
-
|
-
-Check that native and original datatypes are equal. - | - -
- Data is written & read back in for this test. - - |
-
-
Array datatype - | - -native.c - | - -
-
|
-
-Check that native and original datatypes are equal. - | - -
- This is not tested currently. - - |
-
-
Array of compound datatype - | - -native.c - | - -
-
|
-
-Check that native and original datatypes are equal. - | - -
- Data is written & read back in for this test. - - |
-
-
Compound datatype with array field - | - -native.c - | - -
-
|
-
-Check that native and original datatypes are equal. - | - -
- This is not tested currently. - - |
-
-
VL datatype with atomic base type - | - -native.c - | - -
-
|
-
-Check that native and original datatypes are equal. - | - -
- Data is written & read back in for this test. - -Combinations with VL datatypes in other composite types and with other -datatypes for the base type of the VL datatype are not tested. - - |
-
-
VL string datatype - | - -native.c - | - -
-
|
-
-Check that native and original datatypes are equal. - | - -
- Data is written & read back in for this test. - -Combinations with VL string datatypes in composite types -are not tested. - - |
-
-
Reference datatype - | - -native.c - | - -
-
|
-
-Check that native and original datatypes are equal. - | - -
- Data is written & read back in for this test. - -Combinations with reference datatypes in composite types -are not tested. - - |
-
-
Opaque datatype - | - -native.c - | - -
-
|
-
-Check that native and original datatypes are equal. - | - -
- Data is written & read back in for this test. - -Combinations with opaque datatypes in composite types -are not tested. - - |
-
-
Bitfield datatype - | - -native.c - | - -
-
|
-
-Check that native and original datatypes are equal. - | - -
- Data is written & read back in for this test. - -Combinations with bitfield datatypes in composite types -are not tested. - - |
-
-
Time datatype - | - -native.c - | - -
-
|
-
-Check that native and original datatypes are equal. - | - -
- This is not tested currently. - - |
-
-
The H5Dget_native_type() function is not tested in parallel. Currently, -there does not appear to be a need for this. -
- - -- -
-The following code is placed at the beginning of H5private.h: -
- --- -- #ifdef H5_HAVE_THREADSAFE - #include <pthread.h> - #endif --
-H5_HAVE_THREADSAFE
is defined when the HDF-5 library is
-compiled with the --enable-threadsafe configuration option. In general,
-code for the non-threadsafe version of the HDF-5 library is placed within
-the #else
part of the conditional compilation. The exceptions
-to this rule are the changes to the FUNC_ENTER
(in
-H5private.h), HRETURN
and HRETURN_ERROR
(in
-H5Eprivate.h) macros (see section 3.2).
-
-In the threadsafe implementation, the global library initialization
-variable H5_libinit_g
is changed to a global structure
-consisting of the variable with its associated lock (locks are explained
-in section 4.1):
-
-- -- hbool_t H5_libinit_g = FALSE; --
-becomes -
- --- -- H5_api_t H5_g; --
-where H5_api_t
is
-
-- -- typedef struct H5_api_struct { - H5_mutex_t init_lock; /* API entrance mutex */ - hbool_t H5_libinit_g; - } H5_api_t; --
-All former references to H5_libinit_g
in the library are now
-made using the macro H5_INIT_GLOBAL
. If the threadsafe
-library is to be used, the macro is set to H5_g.H5_libinit_g
-instead.
-
-A new global boolean variable H5_allow_concurrent_g
is used
-to determine if multiple threads are allowed to enter an API call
-simultaneously. This is set to FALSE
.
-
-All APIs that are allowed to do so have their own local variable that
-shadows the global variable and is set to TRUE
. In phase 1,
-no such APIs exist.
-
-It is defined in H5.c
as follows:
-
-- -- hbool_t H5_allow_concurrent_g = FALSE; --
-The global variable H5_first_init_g
of type
-pthread_once_t
is used to allow only the first thread in the
-application process to call an initialization function using
-pthread_once
. All subsequent calls to
-pthread_once
by any thread are disregarded.
-
-The call sets up the mutex in the global structure H5_g
(see
-section 3.1) via an initialization function
-H5_first_thread_init
. The first thread initialization
-function is described in section 4.2.
-
-H5_first_init_g
is defined in H5.c
as follows:
-
-- -- pthread_once_t H5_first_init_g = PTHREAD_ONCE_INIT; --
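The once-only initialization pattern above can be sketched with plain pthreads. The body of `H5_first_thread_init` below is a hypothetical stand-in (a counter) for the real work of setting up `H5_g` and the error-stack key:

```c
#include <pthread.h>

static pthread_once_t H5_first_init_g = PTHREAD_ONCE_INIT;
static int init_calls = 0;   /* counts how many times the init function ran */

/* Stand-in for H5_first_thread_init(): runs exactly once per process. */
static void H5_first_thread_init(void)
{
    init_calls++;            /* real code would set up H5_g's mutex and keys */
}

/* Every API entry funnels through pthread_once; only the first call (from
 * whichever thread gets there first) runs the init function, and all
 * subsequent calls by any thread are disregarded. */
static void H5_enter_api(void)
{
    pthread_once(&H5_first_init_g, H5_first_thread_init);
}
```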
-A global pthread-managed key H5_errstk_key_g
is used to
-allow pthreads to maintain a separate error stack (of type
-H5E_t
) for each thread. This is defined in H5.c
-as:
-
-- -- pthread_key_t H5_errstk_key_g; --
-Error stack management is described in section 4.3. -
- -
-We need to preserve the thread cancellation status of each thread
-individually by using a key H5_cancel_key_g
. The status is
-preserved using a structure (of type H5_cancel_t
) which
-maintains the cancellability state of the thread before it entered the
-library and a count (which works very much like the recursive lock
-counter) which keeps track of the number of API calls the thread makes
-within the library.
-
-The structure is defined in H5private.h
as:
-
-- -- /* cancelability structure */ - typedef struct H5_cancel_struct { - int previous_state; - unsigned int cancel_count; - } H5_cancel_t; --
-Thread cancellation is described in section 4.4. -
- - -
-The FUNC_ENTER
macro is now extended to include macro calls
-to perform first-thread initialization, disable cancellability, and wrap a lock
-operation around the check of the global initialization flag. It
-should be noted that the cancellability should be disabled before
-acquiring the lock on the library. Doing otherwise would allow the
-possibility that the thread is cancelled just after it has acquired the
-lock on the library; in that scenario, if the cleanup routines are not
-properly set, the library would be permanently locked.
-
-The additional macro code and new macro definitions can be found in
-Appendix E.1 to E.5. The changes are made in H5private.h
.
-
-The HRETURN
and HRETURN_ERROR
macros are the
-counterparts to the FUNC_ENTER
macro described in section
-3.1. FUNC_LEAVE
makes a macro call to HRETURN
,
-so it is also covered here.
-
-The basic changes to these two macros involve adding macro calls that perform -an unlock operation and re-enable cancellability if necessary. It should -be noted that the cancellability should be re-enabled only after the -thread has released the lock to the library. The consequence of doing -otherwise would be similar to that described in section 3.1. -
- -
-The additional macro code and new macro definitions can be found in
-Appendix E.9 to E.9. The changes are made in H5Eprivate.h
.
-
-A recursive mutex lock m allows a thread t1 to successfully lock m more -than once without blocking t1. Another thread t2 will block if t2 tries -to lock m while t1 holds the lock to m. If t1 makes k lock calls on m, -then it also needs to make k unlock calls on m before it releases the -lock. -
- --Our implementation of recursive locks is built on top of a pthread mutex -lock (which is not recursive). It makes use of a pthread condition -variable to have unsuccessful threads wait on the mutex. Waiting threads -are awakened by a signal from the final unlock call made by the thread -holding the lock. -
- -
-Recursive locks are defined to be the following type
-(H5private.h
):
-
-- -- typedef struct H5_mutex_struct { - pthread_t owner_thread; /* current lock owner */ - pthread_mutex_t atomic_lock; /* lock for atomicity of new mechanism */ - pthread_cond_t cond_var; /* condition variable */ - unsigned int lock_count; - } H5_mutex_t; --
-Detailed implementation code can be found in Appendix A. The
-implementation changes are made in H5TS.c
.
-
-Because the mutex lock associated with a recursive lock cannot be
-statically initialized, a mechanism is required to initialize the
-recursive lock associated with H5_g
so that it can be used
-for the first time.
-
-The pthreads library allows this through the pthread_once call which, as
-described in section 3.3, allows only the first thread accessing the
-library in an application to initialize H5_g
.
-
-In addition to initializing H5_g
, it also initializes the
-key (see section 3.4) for use with per-thread error stacks (see section
-4.3).
-
-The first thread initialization mechanism is implemented as the function
-call H5_first_thread_init()
in H5TS.c
. This is
-described in appendix B.
-
-Pthreads allows individual threads to access dynamic and persistent
-per-thread data through the use of keys. Each key is associated with
-a table that maps threads to data items. Keys can be initialized by
-pthread_key_create()
in pthreads (see sections 3.4 and 4.2).
-Per-thread data items are accessed using a key through the
-pthread_getspecific()
and pthread_setspecific()
-calls to read and write to the association table respectively.
-
-Per-thread error stacks are accessed through the key
-H5_errstk_key_g
which is initialized by the first thread
-initialization call (see section 4.2).
-
-In the non-threadsafe version of the library, there is a global stack
-variable H5E_stack_g[1]
which is no longer defined in the
-threadsafe version. At the same time, the macro call to gain access to
-the error stack H5E_get_my_stack
is changed from:
-
-- -- #define H5E_get_my_stack() (H5E_stack_g+0) --
-to: -
- --- -- #define H5E_get_my_stack() H5E_get_stack() --
-where H5E_get_stack()
is a surrogate function that does the
-following operations:
-
H5_errstk_key_g
using
- pthread_setspecific()
. The way we detect if it is the
- first time is through pthread_getspecific()
which
- returns NULL
if no previous value is associated with
- the thread using the key.pthread_getspecific()
returns a non-null value,
- then that is the pointer to the error stack associated with the
- thread and the stack can be used as usual.
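The first-use behavior of the surrogate function can be sketched as follows; `err_stack_t` is a hypothetical stand-in for H5E_t, and the real H5E_get_stack() may differ in detail:

```c
#include <pthread.h>
#include <stdlib.h>

/* Hypothetical stand-in for the per-thread error stack type (H5E_t). */
typedef struct { int nused; } err_stack_t;

static pthread_key_t H5_errstk_key_g;   /* created once at library init */

/* Sketch of the H5E_get_stack() surrogate: return this thread's stack,
 * allocating and registering one on first use.  pthread_getspecific()
 * returns NULL when no value has yet been associated with the thread. */
static err_stack_t *H5E_get_stack(void)
{
    err_stack_t *stk = pthread_getspecific(H5_errstk_key_g);
    if (stk == NULL) {                  /* first call from this thread */
        stk = calloc(1, sizeof *stk);
        pthread_setspecific(H5_errstk_key_g, stk);
    }
    return stk;
}
```

Each thread thereby sees its own stack through the same global key, with no locking needed on the error path.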
-A final change to the error reporting routines is as follows: the current
-implementation always reports errors as detected at thread 0. In the
-threadsafe implementation, this is changed to report the number returned
-by a call to pthread_self()
.
-
-The change in code (reflected in H5Eprint
of file
-H5E.c
) is as follows:
-
-- -- #ifdef H5_HAVE_THREADSAFE - fprintf (stream, "HDF5-DIAG: Error detected in thread %d." - ,pthread_self()); - #else - fprintf (stream, "HDF5-DIAG: Error detected in thread 0."); - #endif --
-Code for H5E_get_stack()
can be found in Appendix C. All the
-above changes were made in H5E.c
.
-
-To prevent thread cancellations from killing a thread while it is in the -library, we maintain per-thread information about the cancellability -status of the thread before it entered the library so that we can restore -that same status when the thread leaves the library. -
- --By entering and leaving the library, we mean the point at which a -thread makes an API call from a user application and the point at which that API -call returns. Other API or callback function calls made from within that -API call are considered within the library. -
- --Because other API calls may be made from within the first API call, we -need to maintain a counter to determine which was the first and -correspondingly the last return. -
- -
-When a thread makes an API call, the macro H5_API_SET_CANCEL
-calls the worker function H5_cancel_count_inc()
which does
-the following:
-
PTHREAD_CANCEL_DISABLE
- while storing the previous state into the cancellability structure.
- cancel_count
is also incremented in this case.
-When a thread leaves an API call, the macro
-H5_API_UNSET_CANCEL
calls the worker function
-H5_cancel_count_dec()
which does the following:
-
cancel_count
is greater than 1, indicating that the
- thread is not yet about to leave the library, then
- cancel_count
is simply decremented.
-H5_cancel_count_inc
and H5_cancel_count_dec
are
-described in Appendix D and may be found in H5TS.c
.
-
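The counting scheme above can be sketched in plain C. `pthread_setcancelstate` is the real pthreads call; the single static `cancel_info` is a simplification (the library keeps this record per thread via `H5_cancel_key_g`), and the function names are stand-ins:

```c
#include <pthread.h>

/* Per-thread cancellability record (mirrors H5_cancel_t); a real build
 * would fetch this through a pthread key, simplified here to one static. */
typedef struct {
    int previous_state;
    unsigned int cancel_count;
} cancel_info_t;

static cancel_info_t cancel_info = { PTHREAD_CANCEL_ENABLE, 0 };

/* On API entry: only the outermost entry disables cancellation,
 * saving the caller's previous cancellability state. */
static void cancel_count_inc(void)
{
    if (cancel_info.cancel_count++ == 0)
        pthread_setcancelstate(PTHREAD_CANCEL_DISABLE,
                               &cancel_info.previous_state);
}

/* On API exit: only the outermost exit restores the saved state;
 * nested exits just decrement the counter. */
static void cancel_count_dec(void)
{
    if (--cancel_info.cancel_count == 0)
        pthread_setcancelstate(cancel_info.previous_state, NULL);
}
```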
-Except where stated, all tests involve 16 simultaneous threads that make -use of HDF-5 API calls without the explicit synchronization that would typically be -required in a non-threadsafe environment. -
- --The test program sets up 16 threads to simultaneously create 16 -different datasets named from zero to fifteen for a single -file and then write an integer value into each dataset equal to the -dataset's named value. -
- --The main thread then joins all 16 threads and checks the -resulting HDF-5 file against the expected results: that each dataset contains -the correct value (0 for zero, 1 for one, etc.) and that all -datasets were correctly created. -
- -
-The test is implemented in the file ttsafe_dcreate.c
.
-
-The error stack test is one in which 16 threads simultaneously try to -create datasets with the same name. The result, when properly serialized, -should be equivalent to 16 attempts to create the dataset with the same -name. -
- --The error stack implementation runs correctly if it reports 15 instances -of the dataset name conflict error and finally generates a correct HDF-5 file -containing that single dataset. Each thread should report its own stack -of errors with a thread number associated with it. -
- -
-The test is implemented in the file ttsafe_error.c
.
-
-The main idea in thread cancellation safety is as follows: a child thread
-is spawned to create and write to a dataset. Following that, it makes a
-H5Diterate
call on that dataset which activates a callback
-function.
-
- --A deliberate barrier is invoked in the callback function, which waits for -both the main and child threads to arrive at that point. After that -happens, the main thread proceeds to make a thread cancel call on the -child thread while the latter sleeps for 3 seconds before proceeding to -write a new value to the dataset. -
- --After the iterate call, the child thread logically proceeds to wait -another 3 seconds before writing a newer value to the dataset. -
- --The test is correct if the main thread manages to read the second value -at the end of the test. This means that cancellation did not take place -until the end of the iteration call, despite the 3-second wait within -the iteration callback and the extra dataset write operation. -Furthermore, the cancellation should occur before the child can proceed -to write the last value into the dataset. -
- -
-A main thread makes 16 threaded calls to H5Acreate
with a
-generated name for each attribute. Sixteen attributes should be created
-for the single dataset in random (chronological) order, each receiving a value
-derived from its generated attribute name (e.g. attrib010 would
-receive the value 10).
-
- --After joining with all child threads, the main thread proceeds to read -each attribute by generated name to see if the values tally. Failure is -detected if an attribute name does not exist (meaning the attribute was never -created) or if the wrong value was read back. -
- --- -- void H5_mutex_init(H5_mutex_t *H5_mutex) - { - H5_mutex->owner_thread = NULL; - pthread_mutex_init(&H5_mutex->atomic_lock, NULL); - pthread_cond_init(&H5_mutex->cond_var, NULL); - H5_mutex->lock_count = 0; - } - - void H5_mutex_lock(H5_mutex_t *H5_mutex) - { - pthread_mutex_lock(&H5_mutex->atomic_lock); - - if (pthread_equal(pthread_self(), H5_mutex->owner_thread)) { - /* already owned by self - increment count */ - H5_mutex->lock_count++; - } else { - if (H5_mutex->owner_thread == NULL) { - /* no one else has locked it - set owner and grab lock */ - H5_mutex->owner_thread = pthread_self(); - H5_mutex->lock_count = 1; - } else { - /* if already locked by someone else */ - while (1) { - pthread_cond_wait(&H5_mutex->cond_var, &H5_mutex->atomic_lock); - - if (H5_mutex->owner_thread == NULL) { - H5_mutex->owner_thread = pthread_self(); - H5_mutex->lock_count = 1; - break; - } /* else do nothing and loop back to wait on condition*/ - } - } - } - - pthread_mutex_unlock(&H5_mutex->atomic_lock); - } - - void H5_mutex_unlock(H5_mutex_t *H5_mutex) - { - pthread_mutex_lock(&H5_mutex->atomic_lock); - H5_mutex->lock_count--; - - if (H5_mutex->lock_count == 0) { - H5_mutex->owner_thread = NULL; - pthread_cond_signal(&H5_mutex->cond_var); - } - pthread_mutex_unlock(&H5_mutex->atomic_lock); - } --
-- - -- void H5_first_thread_init(void) - { - /* initialize global API mutex lock */ - H5_g.H5_libinit_g = FALSE; - H5_g.init_lock.owner_thread = NULL; - pthread_mutex_init(&H5_g.init_lock.atomic_lock, NULL); - pthread_cond_init(&H5_g.init_lock.cond_var, NULL); - H5_g.init_lock.lock_count = 0; - - /* initialize key for thread-specific error stacks */ - pthread_key_create(&H5_errstk_key_g, NULL); - - /* initialize key for thread cancellability mechanism */ - pthread_key_create(&H5_cancel_key_g, NULL); - } --
-- -- H5E_t *H5E_get_stack(void) - { - H5E_t *estack; - - if (estack = pthread_getspecific(H5_errstk_key_g)) { - return estack; - } else { - /* no associated value with current thread - create one */ - estack = (H5E_t *)malloc(sizeof(H5E_t)); - pthread_setspecific(H5_errstk_key_g, (void *)estack); - return estack; - } - } --
-- -- void H5_cancel_count_inc(void) - { - H5_cancel_t *cancel_counter; - - if (cancel_counter = pthread_getspecific(H5_cancel_key_g)) { - /* do nothing here */ - } else { - /* - * first time thread calls library - create new counter and - * associate with key - */ - cancel_counter = (H5_cancel_t *)malloc(sizeof(H5_cancel_t)); - cancel_counter->cancel_count = 0; - pthread_setspecific(H5_cancel_key_g, (void *)cancel_counter); - } - - if (cancel_counter->cancel_count == 0) { - /* thread entering library */ - pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, - &(cancel_counter->previous_state)); - } - - cancel_counter->cancel_count++; - } - - void H5_cancel_count_dec(void) - { - H5_cancel_t *cancel_counter = pthread_getspecific(H5_cancel_key_g); - - if (cancel_counter->cancel_count == 1) - pthread_setcancelstate(cancel_counter->previous_state, NULL); - - cancel_counter->cancel_count--; - } --
FUNC_ENTER
-- -- /* Initialize the library */ \ - H5_FIRST_THREAD_INIT \ - H5_API_UNSET_CANCEL \ - H5_API_LOCK_BEGIN \ - if (!(H5_INIT_GLOBAL)) { \ - H5_INIT_GLOBAL = TRUE; \ - if (H5_init_library() < 0) { \ - HRETURN_ERROR (H5E_FUNC, H5E_CANTINIT, err, \ - "library initialization failed"); \ - } \ - } \ - H5_API_LOCK_END \ - : - : - : --
H5_FIRST_THREAD_INIT
-- - -- /* Macro for first thread initialization */ - #define H5_FIRST_THREAD_INIT \ - pthread_once(&H5_first_init_g, H5_first_thread_init); --
H5_API_UNSET_CANCEL
-- - -- #define H5_API_UNSET_CANCEL \ - if (H5_IS_API(FUNC)) { \ - H5_cancel_count_inc(); \ - } --
H5_API_LOCK_BEGIN
-- - -- #define H5_API_LOCK_BEGIN \ - if (H5_IS_API(FUNC)) { \ - H5_mutex_lock(&H5_g.init_lock); --
H5_API_LOCK_END
-- - -- #define H5_API_LOCK_END } --
HRETURN
and HRETURN_ERROR
-- -- : - : - H5_API_UNLOCK_BEGIN \ - H5_API_UNLOCK_END \ - H5_API_SET_CANCEL \ - return ret_val; \ - } --
H5_API_UNLOCK_BEGIN
-- -- #define H5_API_UNLOCK_BEGIN \ - if (H5_IS_API(FUNC)) { \ - H5_mutex_unlock(&H5_g.init_lock); --
H5_API_UNLOCK_END
-- - -- #define H5_API_UNLOCK_END } --
H5_API_SET_CANCEL
-- -- #define H5_API_SET_CANCEL \ - if (H5_IS_API(FUNC)) { \ - H5_cancel_count_dec(); \ - } --
-
-
- - -
-The HDF5 file format describes how HDF5 data structures and dataset raw -data are mapped to a linear format address space and the HDF5 -library implements that bidirectional mapping in terms of an -API. However, the HDF5 format specifications do not indicate how -the format address space is mapped onto storage and HDF (version 5 and -earlier) simply mapped the format address space directly onto a single -file by convention. - -
- --Early in the development of HDF5 it became apparent that users want the ability to -map the format address space onto different types of storage (a single file, -multiple files, local memory, global memory, network distributed global -memory, a network protocol, etc.) with various types of maps. For -instance, some users want to be able to handle very large format address -spaces on operating systems that support only 2GB files by partitioning the -format address space into equal-sized parts each served by a separate -file. Other users want the same multi-file storage capability but want to -partition the address space according to purpose (raw data in one file, object -headers in another, global heap in a third, etc.) in order to improve I/O -speeds. - -
- --In fact, the number of storage variations is probably larger than the -number of methods that the HDF5 team is capable of implementing and -supporting. Therefore, a Virtual File Layer API is being -implemented which will allow application teams or departments to design -and implement their own mapping between the HDF5 format address space -and storage, with each mapping being a separate file driver -(possibly written in terms of other file drivers). The HDF5 team will -provide a small set of useful file drivers which will also serve as -examples for those who wish to write their own: - -
-H5FD_SEC2
-read
and write
to perform I/O to a single file. All I/O
-requests are unbuffered although the driver does optimize file seeking
-operations to some extent.
-
-H5FD_STDIO
-H5FD_CORE
-H5FD_MPIIO
-H5FD_FAMILY
-h5repart
tool can be used to change the sizes of the
-family members when stored as files or to convert a family of files to a
-single file or vice versa.
-
-H5FD_SPLIT
--Most application writers will use a driver defined by the HDF5 library or -contributed by another programming team. This chapter describes how existing -drivers are used. - -
- - - --Each file driver is defined in its own public header file which should -be included by any application which plans to use that driver. The -predefined drivers are in header files whose names begin with -`H5FD' followed by the driver name and `.h'. The `hdf5.h' -header file includes all the predefined driver header files. - -
-
-Once the appropriate header file is included a symbol of the form
-`H5FD_' followed by the upper-case driver name will be the driver
-identification number.(1) However, the
-value may change if the library is closed (e.g., by calling
-H5close
) and the symbol is referenced again.
-
-
-In order to create or open a file one must define the method by which the
-storage is accessed(2) and does so by creating a file access property list(3) which is passed to the H5Fcreate
or
-H5Fopen
function. A default file access property list is created by
-calling H5Pcreate
and then the file driver information is inserted by
-calling a driver initialization function such as H5Pset_fapl_family
:
-
-
-hid_t fapl = H5Pcreate(H5P_FILE_ACCESS); -size_t member_size = 100*1024*1024; /*100MB*/ -H5Pset_fapl_family(fapl, member_size, H5P_DEFAULT); -hid_t file = H5Fcreate("foo%05d.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl); -H5Pclose(fapl); -- -
-Each file driver will have its own initialization function
-whose name is H5Pset_fapl_
followed by the driver name and which
-takes a file access property list as the first argument followed by
-additional driver-dependent arguments.
-
-
-An alternative to using the driver initialization function is to set the
-driver directly using the H5Pset_driver
function.(4) Its second argument is the file driver identifier, which may
-have a different numeric value from run to run depending on the order in which
-the file drivers are registered with the library. The third argument
-encapsulates the additional arguments of the driver initialization
-function. This method only works if the file driver writer has made the
-driver-specific property list structure a public datatype, which is
-often not the case.
-
-
-hid_t fapl = H5Pcreate(H5P_FILE_ACCESS); -static H5FD_family_fapl_t fa = {100*1024*1024, H5P_DEFAULT}; -H5Pset_driver(fapl, H5FD_FAMILY, &fa); -hid_t file = H5Fcreate("foo.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl); -H5Pclose(fapl); -- -
-It is also possible to query the file driver information from a file access
-property list by calling H5Pget_driver
to determine the driver and then
-calling a driver-defined query function to obtain the driver information:
-
-
-hid_t driver = H5Pget_driver(fapl); -if (H5FD_SEC2==driver) { - /*nothing further to get*/ -} else if (H5FD_FAMILY==driver) { - hid_t member_fapl; - haddr_t member_size; - H5Pget_fapl_family(fapl, &member_size, &member_fapl); -} else if (....) { - .... -} -- - - -
-The H5Dread
and H5Dwrite
functions transfer data between
-application memory and the file. They both take an optional data transfer
-property list which has some general driver-independent properties and
-optional driver-defined properties. An application will typically perform I/O
-in one of three styles via the H5Dread
or H5Dwrite
function:
-
-
-Like file access properties in the previous section, data transfer properties -can be set using a driver initialization function or a general purpose -function. For example, to set the MPI-IO driver to use independent access for -I/O operations one would say: - -
- --hid_t dxpl = H5Pcreate(H5P_DATA_XFER); -H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_INDEPENDENT); -H5Dread(dataset, type, mspace, fspace, buffer, dxpl); -H5Pclose(dxpl); -- -
-The alternative is to initialize a driver-defined C struct
and pass it
-to the H5Pset_driver
function:
-
-
-hid_t dxpl = H5Pcreate(H5P_DATA_XFER); -static H5FD_mpio_dxpl_t dx = {H5FD_MPIO_INDEPENDENT}; -H5Pset_driver(dxpl, H5FD_MPIO, &dx); -H5Dread(dataset, type, mspace, fspace, buffer, dxpl); -- -
-The transfer property list can be queried in a manner similar to the file -access property list: the driver provides a function (or functions) to return -various information about the transfer property list: - -
- --hid_t driver = H5Pget_driver(dxpl); -if (H5FD_MPIO==driver) { - H5FD_mpio_xfer_t xfer_mode; - H5Pget_dxpl_mpio(dxpl, &xfer_mode); -} else { - .... -} -- - - -
-The HDF5 specifications describe two things: the mapping of data onto a linear -format address space and the C API which performs the mapping. -However, the mapping of the format address space onto storage intentionally -falls outside the scope of the HDF5 specs. This is a direct result of the fact -that it is not generally possible to store information about how to access -storage inside the storage itself. For instance, given only the file name -`/arborea/1225/work/f%03d' the HDF5 library is unable to tell whether the -name refers to a file on the local file system, a family of files on the local -file system, a file on host `arborea' port 1225, a family of files on a -remote system, etc. - -
- --Two ways in which the library could figure out where the storage is located are: -storage access information can be provided by the user, or the library can try -all known file access methods. This implementation uses the former method. - -
--In general, if a file was created with one driver then it isn't possible to -open it with another driver. There are of course exceptions: a file created -with MPIO could probably be opened with the sec2 driver, any file created -by the sec2 driver could be opened as a family of files with one member, -etc. In fact, sometimes a file must not only be opened with the same -driver but also with the same driver properties. The predefined drivers are -written in such a way that specifying the correct driver is sufficient for -opening a file. - -
- - --A driver is simply a collection of functions and data structures which are -registered with the HDF5 library at runtime. The functions fall into these -categories: - -
- --Some drivers need information about file access and data transfers which are -very specific to the driver. The information is usually implemented as a pair -of pointers to C structs which are allocated and initialized as part of an -HDF5 property list and passed down to various driver functions. There are two -classes of settings: file access modes that describe how to access the file -through the driver, and data transfer modes which are settings that control -I/O operations. Each file opened by a particular driver may have a different -access mode; each dataset I/O request for a particular file may have a -different data transfer mode. - -
--Since each driver has its own particular requirements for various settings, -each driver is responsible for defining the mode structures that it -needs. Higher layers of the library treat the structures as opaque but must be -able to copy and free them. Thus, the driver provides either the size of the -structure or a pair of function pointers for each of the mode types. - -
--Example: The family driver needs to know how the format address -space is partitioned and the file access property list to use for the -family members. - -
- --/* Driver-specific file access properties */ -typedef struct H5FD_family_fapl_t { - hsize_t memb_size; /*size of each member */ - hid_t memb_fapl_id; /*file access property list of each memb*/ -} H5FD_family_fapl_t; - -/* Driver specific data transfer properties */ -typedef struct H5FD_family_dxpl_t { - hid_t memb_dxpl_id; /*data xfer property list of each memb */ -} H5FD_family_dxpl_t; -- -
-In order to copy or free one of these structures the member file access -or data transfer properties must also be copied or freed. This is done -by providing a copy and close function for each structure: - -
--Example: The file access property list copy and close functions -for the family driver: - -
- --static void * -H5FD_family_fapl_copy(const void *_old_fa) -{ - const H5FD_family_fapl_t *old_fa = (const H5FD_family_fapl_t*)_old_fa; - H5FD_family_fapl_t *new_fa = malloc(sizeof(H5FD_family_fapl_t)); - assert(new_fa); - - memcpy(new_fa, old_fa, sizeof(H5FD_family_fapl_t)); - new_fa->memb_fapl_id = H5Pcopy(old_fa->memb_fapl_id); - return new_fa; -} - -static herr_t -H5FD_family_fapl_free(void *_fa) -{ - H5FD_family_fapl_t *fa = (H5FD_family_fapl_t*)_fa; - H5Pclose(fa->memb_fapl_id); - free(fa); - return 0; -} -- -
-Generally when a file is created or opened the file access properties
-for the driver are copied into the file pointer which is returned and
-they may be modified from their original value (for instance, the file
-family driver modifies the member size property when opening an existing
-family). In order to support the H5Fget_access_plist
function the
-driver must provide a fapl_get
callback which creates a copy of
-the driver-specific properties based on a particular file.
-
-
-Example: The file family driver copies the member size file -access property list into the return value: - -
- --static void * -H5FD_family_fapl_get(H5FD_t *_file) -{ - H5FD_family_t *file = (H5FD_family_t*)_file; - H5FD_family_fapl_t *fa = calloc(1, sizeof(H5FD_family_fapl_t)); - - fa->memb_size = file->memb_size; - fa->memb_fapl_id = H5Pcopy(file->memb_fapl_id); - return fa; -} --
-The higher layers of the library expect files to have a name and allow the
-file to be accessed in various modes. The driver must be able to create a new
-file, replace an existing file, or open an existing file. Opening or creating
-a file should return a handle, a pointer to a specialization of the
-H5FD_t
struct, which allows read-only or read-write access and which
-will be passed to the other driver functions as they are
-called.(5)
-
-
-typedef struct { - /* Public fields */ - H5FD_class_t *cls; /*class data defined below*/ - - /* Private fields -- driver-defined */ - -} H5FD_t; -- -
-Example: The family driver requires handles to the underlying
-storage, the size of the members for this particular file (which might be
-different than the member size specified in the file access property list if
-an existing file family is being opened), the name used to open the file in
-case additional members must be created, and the flags to use for creating
-those additional members. The eoa
member caches the size of the format
-address space so the family members don't have to be queried in order to find
-it.
-
-
-/* The description of a file belonging to this driver. */ -typedef struct H5FD_family_t { - H5FD_t pub; /*public stuff, must be first */ - hid_t memb_fapl_id; /*file access property list for members */ - hsize_t memb_size; /*maximum size of each member file */ - int nmembs; /*number of family members */ - int amembs; /*number of member slots allocated */ - H5FD_t **memb; /*dynamic array of member pointers */ - haddr_t eoa; /*end of allocated addresses */ - char *name; /*name generator printf format */ - unsigned flags; /*flags for opening additional members */ -} H5FD_family_t; -- -
-Example: The sec2 driver needs to keep track of the underlying Unix
-file descriptor and also the end of format address space and current Unix file
-size. It also keeps track of the current file position and last operation
-(read, write, or unknown) in order to optimize calls to lseek
. The
-device
and inode
fields are defined on Unix in order to uniquely
-identify the file and will be discussed below.
-
-
-typedef struct H5FD_sec2_t { - H5FD_t pub; /*public stuff, must be first */ - int fd; /*the unix file */ - haddr_t eoa; /*end of allocated region */ - haddr_t eof; /*end of file; current file size*/ - haddr_t pos; /*current file I/O position */ - int op; /*last operation */ - dev_t device; /*file device number */ - ino_t inode; /*file i-node number */ -} H5FD_sec2_t; -- - - -
-All drivers must define a function for opening/creating a file. This -function should have a prototype which is: - -
--
-The file name name and file access property list fapl are
-the same as were specified in the H5Fcreate
or H5Fopen
-call. The flags are the same as in those calls also except the
-flag H5F_ACC_CREATE
is also present if the call was to
-H5Fcreate
and they are documented in the `H5Fpublic.h'
-file. The maxaddr argument is the maximum format address that the
-driver should be prepared to handle (the minimum address is always
-zero).
-
-Example: The sec2 driver opens a Unix file with the requested name -and saves information which uniquely identifies the file (the Unix device -number and inode). - -
- --static H5FD_t * -H5FD_sec2_open(const char *name, unsigned flags, hid_t fapl_id/*unused*/, - haddr_t maxaddr) -{ - unsigned o_flags; - int fd; - struct stat sb; - H5FD_sec2_t *file=NULL; - - /* Check arguments */ - if (!name || !*name) return NULL; - if (0==maxaddr || HADDR_UNDEF==maxaddr) return NULL; - if (ADDR_OVERFLOW(maxaddr)) return NULL; - - /* Build the open flags */ - o_flags = (H5F_ACC_RDWR & flags) ? O_RDWR : O_RDONLY; - if (H5F_ACC_TRUNC & flags) o_flags |= O_TRUNC; - if (H5F_ACC_CREAT & flags) o_flags |= O_CREAT; - if (H5F_ACC_EXCL & flags) o_flags |= O_EXCL; - - /* Open the file */ - if ((fd=open(name, o_flags, 0666))<0) return NULL; - if (fstat(fd, &sb)<0) { - close(fd); - return NULL; - } - - /* Create the new file struct */ - file = calloc(1, sizeof(H5FD_sec2_t)); - file->fd = fd; - file->eof = sb.st_size; - file->pos = HADDR_UNDEF; - file->op = OP_UNKNOWN; - file->device = sb.st_dev; - file->inode = sb.st_ino; - - return (H5FD_t*)file; -} -- - - -
-Closing a file simply means that all cached data should be flushed to the next -lower layer, the file should be closed at the next lower layer, and all -file-related data structures should be freed. All information needed by the -close function is already present in the file handle. - -
--
-The file argument is the handle which was returned by the open
-function, and the close
should free only memory associated with the
-driver-specific part of the handle (the public parts will have already been released by HDF5's virtual file layer).
-
- --Example: The sec2 driver just closes the underlying Unix file, -making sure that the actual file size matches the size known to the -library by writing a zero to the last file position if it hasn't already been -written by some previous operation (this happens in the same code that -flushes the file contents and is shown below). - -
- --static herr_t -H5FD_sec2_close(H5FD_t *_file) -{ - H5FD_sec2_t *file = (H5FD_sec2_t*)_file; - - if (H5FD_sec2_flush(_file)<0) return -1; - if (close(file->fd)<0) return -1; - free(file); - return 0; -} -- - - -
-Occasionally an application will attempt to open a single file more than one -time in order to obtain multiple handles to the file. HDF5 allows the files to -share information(6) but in order to -accomplish this HDF5 must be able to tell when two names refer to the same -file. It does this by associating a driver-defined key with each file opened -by a driver and comparing the key for an open request with the keys for all -other files currently open by the same driver. - -
--
-The driver may provide a function which compares two files f1 and
-f2 belonging to the same driver and returns a negative, positive, or
-zero value a la the strcmp
function.(7) If this
-function is not provided then HDF5 assumes that all calls to the open
-callback return unique files regardless of the arguments and it is up to the
-application to avoid doing this if that assumption is incorrect.
-
-Each time a file is opened the library calls the cmp
function to
-compare that file with all other files currently open by the same driver and
-if one of them matches (at most one can match) then the file which was just
-opened is closed and the previously opened file is used instead.
-
-
- --Opening a file twice with incompatible flags will result in failure. For -instance, opening a file with the truncate flag is a two-step process which -first opens the file without truncation so keys can be compared, and if no -matching file is found already open then the file is closed and immediately -reopened with the truncation flag set (if a matching file is already open then -the truncating open will fail). - -
--Example: The sec2 driver uses the Unix device and i-node as the -key. They were initialized when the file was opened. - -
- --static int -H5FD_sec2_cmp(const H5FD_t *_f1, const H5FD_t *_f2) -{ - const H5FD_sec2_t *f1 = (const H5FD_sec2_t*)_f1; - const H5FD_sec2_t *f2 = (const H5FD_sec2_t*)_f2; - - if (f1->device < f2->device) return -1; - if (f1->device > f2->device) return 1; - - if (f1->inode < f2->inode) return -1; - if (f1->inode > f2->inode) return 1; - - return 0; -} -- - - -
-Some drivers may also need to store certain information in the file superblock -in order to be able to reliably open the file at a later date. This is done by -three functions: one to determine how much space will be necessary to store -the information in the superblock, one to encode the information, and one to -decode the information. These functions are optional, but if any one is -defined then the other two must also be defined. - -
--
-The sb_size
function returns the number of bytes necessary to encode
-information needed later if the file is reopened. The sb_encode
-function encodes information from the file into buffer buf
-allocated by the caller. It also writes an 8-character string (plus null
-termination) into the name
argument, which should be a unique
-identification for the driver. The sb_decode
function looks at
-the name
-
-
, decodes -data from the buffer buf, and updates the file argument with the new information, -advancing *p in the process. -
-The part of this which is somewhat tricky is that the file must be readable -before the superblock information is decoded. File access modes fall outside -the scope of the HDF5 file format, but they are placed inside the boot block -for convenience.(8) - -
--Example: To be written later. - -
- - --HDF5 does not assume that a file is a linear address space of bytes. Instead, -the library will call functions to allocate and free portions of the HDF5 -format address space, which in turn map onto functions in the file driver to -allocate and free portions of file address space. The library tells the file -driver how much format address space it wants to allocate and the driver -decides what format address to use and how that format address is mapped onto -the file address space. Usually the format address is chosen so that the file -address can be calculated in constant time for data I/O operations (which are -always specified by format addresses). - -
- --The HDF5 format allows an optional userblock to appear before the actual HDF5 -data in such a way that if the userblock is sucked out of the file and -everything remaining is shifted downward in the file address space, then the -file is still a valid HDF5 file. The userblock size can be zero or any -power of 2 greater than or equal to 512, and the file superblock begins -immediately after the userblock. - -
--HDF5 allocates space for the userblock and superblock by calling an -allocation function defined below, which must return a chunk of memory at -format address zero on the first call. - -
- - --The library makes many types of allocation requests: - -
-H5FD_MEM_SUPER
-H5FD_MEM_BTREE
-H5FD_MEM_DRAW
-H5FD_MEM_META
-H5FD_MEM_GROUP
-H5FD_MEM_GHEAP
-H5FD_MEM_LHEAP
-H5FD_MEM_OHDR
-
-When a chunk of memory is freed the library adds it to a free list and
-allocation requests are satisfied from the free list before requesting memory
-from the file driver. Each type of allocation request enumerated above has its
-own free list, but the file driver can specify that certain object types can
-share a free list. It does so by providing an array which maps a request type
-to a free list. If any value of the map is H5MF_DEFAULT
(zero) then the
-object's own free list is used. The special value H5MF_NOLIST
indicates
-that the library should not attempt to maintain a free list for that
-particular object type, instead calling the file driver each time an object of
-that type is freed.
-
-
-Mappings predefined in the `H5FDpublic.h' file are: -
H5FD_FLMAP_SINGLE
-H5FD_FLMAP_DICHOTOMY
-H5FD_FLMAP_DEFAULT
-
-Example: To make a map that manages object headers on one free list
-and everything else on another free list one might initialize the map with the
-following code: (the use of H5FD_MEM_SUPER
is arbitrary)
-
-
-H5FD_mem_t mt, map[H5FD_MEM_NTYPES]; - -for (mt=0; mt<H5FD_MEM_NTYPES; mt++) { - map[mt] = (H5FD_MEM_OHDR==mt) ? mt : H5FD_MEM_SUPER; -} -- -
-If an allocation request cannot be satisfied from the free list then one of -two things happen. If the driver defines an allocation callback then it is -used to allocate space; otherwise new memory is allocated from the end of the -format address space by incrementing the end-of-address marker. - -
--
-The file argument is the file from which space is to be allocated,
-type is the type of memory being requested (from the list above) without
-being mapped according to the freelist map and size is the number of
-bytes being requested. The library is allowed to allocate large chunks of
-storage and manage them in a layer above the file driver (although the current
-library doesn't do that). The allocation function should return a format
-address for the first byte allocated. The allocated region extends from that
-address for size bytes. If the request cannot be honored then the
-undefined address value is returned (HADDR_UNDEF
). The first call to
-this function for a file which has never had memory allocated must
-return a format address of zero or HADDR_UNDEF
since this is how the
-library allocates space for the userblock and/or superblock.
-
-Example: To be written later. - -
- - -
-When the library is finished using a certain region of the format address
-space it will return the space to the free list according to the type of
-memory being freed and the free list map described above. If the free list has
-been disabled for a particular memory usage type (according to the free list
-map) and the driver defines a free
callback then it will be
-invoked. The free
callback is also invoked for all entries on the free
-list when the file is closed.
-
-
-
-The file argument is the file for which space is being freed; type -is the type of object being freed (from the list above) without being mapped -according to the freelist map; addr is the first format address to free; -and size is the size in bytes of the region being freed. The region -being freed may refer to just part of the region originally allocated and/or -may cross allocation boundaries provided all regions being freed have the same -usage type. However, the library will never attempt to free regions which have -already been freed or which have never been allocated. -
-A driver may choose to not define the free
function, in which case
-format addresses will be leaked. This isn't normally a huge problem since the
-library contains a simple free list of its own and freeing parts of the format
-address space is not a common occurrence.
-
-
-Example: To be written later. - -
- - --Each file driver must have some mechanism for setting and querying the end of -address, or EOA, marker. The EOA marker is the first format address -after the last format address ever allocated. If the last part of the -allocated address range is freed then the driver may optionally decrease the -eoa marker. - -
--
-This function returns the current value of the EOA marker for the specified -file. -
-Example: The sec2 driver just returns the current eoa marker value -which is cached in the file structure: - -
- --static haddr_t -H5FD_sec2_get_eoa(H5FD_t *_file) -{ - H5FD_sec2_t *file = (H5FD_sec2_t*)_file; - return file->eoa; -} -- -
-The eoa marker is initially zero when a file is opened and the library may set
-it to some other value shortly after the file is opened (after the superblock
-is read and the saved eoa marker is determined) or when allocating additional
-memory in the absence of an alloc
callback (described above).
-
-
-Example: The sec2 driver simply caches the eoa marker in the file -structure and does not extend the underlying Unix file. When the file is -flushed or closed then the Unix file size is extended to match the eoa marker. - -
- --static herr_t -H5FD_sec2_set_eoa(H5FD_t *_file, haddr_t addr) -{ - H5FD_sec2_t *file = (H5FD_sec2_t*)_file; - file->eoa = addr; - return 0; -} -- - - -
-These functions operate on data, transferring a region of the format address -space between memory and files. - -
- - - --A driver must specify two functions to transfer data from the library to the -file and vice versa. - -
--
-The read
function reads data from file file beginning at address
-addr and continuing for size bytes into the buffer buf
-supplied by the caller. The write
function transfers data in the
-opposite direction. Both functions take a data transfer property list
-dxpl which indicates the fine points of how the data is to be
-transferred and which comes directly from the H5Dread
or
-H5Dwrite
function. Both functions receive the type of
-data being written, which may allow a driver to tune its behavior for
-different kinds of data.
-
-Both functions should return a negative value if they fail to transfer the -requested data, or non-negative if they succeed. The library will never -attempt to read from unallocated regions of the format address space. - -
-
-Example: The sec2 driver just makes system calls. It tries not to
-call lseek
if the current operation is the same as the previous
-operation and the file position is correct. It also fills the output buffer
-with zeros when reading between the current EOF and EOA markers and restarts
-system calls which were interrupted.
-
-
-static herr_t -H5FD_sec2_read(H5FD_t *_file, H5FD_mem_t type/*unused*/, hid_t dxpl_id/*unused*/, - haddr_t addr, hsize_t size, void *buf/*out*/) -{ - H5FD_sec2_t *file = (H5FD_sec2_t*)_file; - ssize_t nbytes; - - assert(file && file->pub.cls); - assert(buf); - - /* Check for overflow conditions */ - if (REGION_OVERFLOW(addr, size)) return -1; - if (addr+size>file->eoa) return -1; - - /* Seek to the correct location */ - if ((addr!=file->pos || OP_READ!=file->op) && - file_seek(file->fd, (file_offset_t)addr, SEEK_SET)<0) { - file->pos = HADDR_UNDEF; - file->op = OP_UNKNOWN; - return -1; - } - - /* - * Read data, being careful of interrupted system calls, partial results, - * and the end of the file. - */ - while (size>0) { - do nbytes = read(file->fd, buf, size); - while (-1==nbytes && EINTR==errno); - if (-1==nbytes) { - /* error */ - file->pos = HADDR_UNDEF; - file->op = OP_UNKNOWN; - return -1; - } - if (0==nbytes) { - /* end of file but not end of format address space */ - memset(buf, 0, size); - size = 0; - } - assert(nbytes>=0); - assert((hsize_t)nbytes<=size); - size -= (hsize_t)nbytes; - addr += (haddr_t)nbytes; - buf = (char*)buf + nbytes; - } - - /* Update current position */ - file->pos = addr; - file->op = OP_READ; - return 0; -} -- -
-Example: The sec2 write
callback is similar except it updates
-the file EOF marker when extending the file.
-
-
-Some drivers may wish to cache data in memory in order to make larger I/O
-requests to the underlying file, thus improving bandwidth. Such drivers
-should register a cache-flushing function so that the library can ensure that
-data has been flushed out of the driver in response to the application
-calling H5Fflush
.
-
-
-
- - --Example: The sec2 driver doesn't cache any data but it also doesn't -extend the Unix file as aggressively as it should. Therefore, when finalizing a -file it should write a zero to the last byte of the allocated region so that -when reopening the file later the EOF marker will be at least as large as the -EOA marker saved in the superblock (otherwise HDF5 will refuse to open the -file, claiming that the data appears to be truncated). - -
- --static herr_t -H5FD_sec2_flush(H5FD_t *_file) -{ - H5FD_sec2_t *file = (H5FD_sec2_t*)_file; - - if (file->eoa>file->eof) { - if (-1==file_seek(file->fd, file->eoa-1, SEEK_SET)) return -1; - if (write(file->fd, "", 1)!=1) return -1; - file->eof = file->eoa; - file->pos = file->eoa; - file->op = OP_WRITE; - } - - return 0; -} -- - - -
-The library is capable of performing several generic optimizations on I/O, but -these types of optimizations may not be appropriate for a given VFL driver. -
- --Each driver may provide a query function to allow the library to query whether -to enable these optimizations. If a driver lacks a query function, the library -will disable all types of optimizations which can be queried. -
- --
-This function is called by the library to query which optimizations to enable -for I/O to this driver. These are the flags which are currently defined: - -
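The actual flag names and values are defined by the library's public header and are not reproduced here. Purely to illustrate the calling convention of a query callback, the sketch below uses made-up `DEMO_FEAT_*` bits: the driver ORs the optimizations it can tolerate into `*flags`, and the library disables anything not reported.

```c
#include <assert.h>

/* Hypothetical feature bits -- the real flag names and values come from
 * the library's header, not from this sketch. */
#define DEMO_FEAT_AGGREGATE_METADATA  0x01ul
#define DEMO_FEAT_DATA_SIEVE          0x02ul

typedef struct { int dummy; } demo_file_t;

/* A possible 'query' callback for a simple driver. */
static int demo_query(const demo_file_t *file, unsigned long *flags)
{
    (void)file;                 /* a trivial driver ignores the file handle */
    if (flags)
        *flags = DEMO_FEAT_AGGREGATE_METADATA | DEMO_FEAT_DATA_SIEVE;
    return 0;
}
```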
-Before a driver can be used the HDF5 library needs to be told of its -existence. This is done by registering the driver, which results in a driver -identification number. Instead of passing many arguments to the registration -function, the driver information is entered into a structure and the address -of the structure is passed to the registration function where it is -copied. This allows the HDF5 API to be extended while providing backward -compatibility at the source level. - -
--
-The driver described by struct cls is registered with the library and an -ID number for the driver is returned. -
-The H5FD_class_t
type is a struct with the following fields:
-
-
const char *name
-size_t fapl_size
-void *(*fapl_copy)(const void *fapl)
-fapl_size
when both are defined.
-void (*fapl_free)(void *fapl)
-free
function to free the
-structure.
-size_t dxpl_size
-void *(*dxpl_copy)(const void *dxpl)
-dxpl_size
when both are
-defined.
-void (*dxpl_free)(void *dxpl)
-free
function to
-free the structure.
-H5FD_t *(*open)(const char *name, unsigned flags, hid_t fapl, haddr_t maxaddr)
-herr_t (*close)(H5FD_t *file)
-int (*cmp)(const H5FD_t *f1, const H5FD_t *f2)
-int (*query)(const H5FD_t *f, unsigned long *flags)
-haddr_t (*alloc)(H5FD_t *file, H5FD_mem_t type, hsize_t size)
-herr_t (*free)(H5FD_t *file, H5FD_mem_t type, haddr_t addr, hsize_t size)
-haddr_t (*get_eoa)(H5FD_t *file)
-herr_t (*set_eoa)(H5FD_t *file, haddr_t)
-haddr_t (*get_eof)(H5FD_t *file)
-herr_t (*read)(H5FD_t *file, H5FD_mem_t type, hid_t dxpl, haddr_t addr, hsize_t size, void *buffer)
-herr_t (*write)(H5FD_t *file, H5FD_mem_t type, hid_t dxpl, haddr_t addr, hsize_t size, const void *buffer)
-herr_t (*flush)(H5FD_t *file)
-H5FD_mem_t fl_map[H5FD_MEM_NTYPES]
--Example: The sec2 driver would be registered as: - -
- --static const H5FD_class_t H5FD_sec2_g = { - "sec2", /*name */ - MAXADDR, /*maxaddr */ - NULL, /*sb_size */ - NULL, /*sb_encode */ - NULL, /*sb_decode */ - 0, /*fapl_size */ - NULL, /*fapl_get */ - NULL, /*fapl_copy */ - NULL, /*fapl_free */ - 0, /*dxpl_size */ - NULL, /*dxpl_copy */ - NULL, /*dxpl_free */ - H5FD_sec2_open, /*open */ - H5FD_sec2_close, /*close */ - H5FD_sec2_cmp, /*cmp */ - H5FD_sec2_query, /*query */ - NULL, /*alloc */ - NULL, /*free */ - H5FD_sec2_get_eoa, /*get_eoa */ - H5FD_sec2_set_eoa, /*set_eoa */ - H5FD_sec2_get_eof, /*get_eof */ - H5FD_sec2_read, /*read */ - H5FD_sec2_write, /*write */ - H5FD_sec2_flush, /*flush */ - H5FD_FLMAP_SINGLE, /*fl_map */ -}; - -hid_t -H5FD_sec2_init(void) -{ - if (!H5FD_SEC2_g) { - H5FD_SEC2_g = H5FDregister(&H5FD_sec2_g); - } - return H5FD_SEC2_g; -} -- -
-A driver can be removed from the library by unregistering it - -
--
-Unregistering a driver makes it unusable for creating new file access or data -transfer property lists but doesn't affect any property lists or files that -already use that driver. - -
- - - --
-This function is intended to be used by driver functions, not applications.
-It returns a pointer directly into the file access property list
-fapl
which is a copy of the driver's file access mode originally
-provided to the H5Pset_driver
function. If its argument is a data
-transfer property list dxpl
then it returns a pointer to the
-driver-specific data transfer information instead.
-
-The various private H5F_low_*
functions will be replaced by public
-H5FD*
functions so they can be called from drivers.
-
-
-All private functions H5F_addr_*
which operate on addresses will be
-renamed as public functions by removing the first underscore so they can be
-called by drivers.
-
-
-The haddr_t
address data type will be passed by value throughout the
-library. The original intent was that this type would eventually be a union of
-file address types for the various drivers and may become quite large, but
-that was back when drivers were part of HDF5. It will become an alias for an
-unsigned integer type (32 or 64 bits depending on how the library was
-configured).
-
-
-The various H5F*.c
driver files will be renamed H5FD*.c
and each
-will have a corresponding header file. All driver functions except the
-initializer and API will be declared static.
-
-
-This documentation didn't cover optimization functions which would be useful -to drivers like MPI-IO. Some drivers may be able to perform data pipeline -operations more efficiently than HDF5 and need to be given a chance to -override those parts of the pipeline. The pipeline would be designed to call -various H5FD optimization functions at various points which return one of -three values: the operation is not implemented by the driver, the operation is -implemented but failed in a non-recoverable manner, the operation is -implemented and succeeded. - -
--Various parts of HDF5 check only the top-level file driver and do -something special if it is the MPI-IO driver. However, we might want to be -able to put the MPI-IO driver under other drivers such as the raw part of a -split driver or under a debug driver whose sole purpose is to accumulate -statistics as it passes all requests through to the MPI-IO driver. Therefore -we will probably need a function which takes a format address and/or object -type and returns the driver which would have been used at the lowest level to -process the request. - -
- --
The driver name is by convention and might -not apply to drivers which are not distributed with HDF5. -
The access method also indicates how to translate -the storage name to a storage server such as a file, network protocol, or -memory. -
The term -"file access property list" is a misnomer since storage isn't -required to be a file. -
This -function is overloaded to operate on data transfer property lists also, as -described below. -
Read-only access is only appropriate when opening an existing -file. -
For instance, writing data to one handle will cause -the data to be immediately visible on the other handle. -
The ordering is -arbitrary as long as it's consistent within a particular file driver. -
File access modes do not describe data, but rather -describe how the HDF5 format address space is mapped to the underlying -file(s). Thus, in general the mapping must be known before the file superblock -can be read. However, the user usually knows enough about the mapping for the -superblock to be readable and once the superblock is read the library can fill -in the missing parts of the mapping. -
-This document was generated on 18 November 1999 using the -texi2html -translator version 1.51.
--Updated on 10/24/00 by hand, Quincey Koziol -
- - diff --git a/doc/html/TechNotes/VFLfunc.html b/doc/html/TechNotes/VFLfunc.html deleted file mode 100644 index 1e33593..0000000 --- a/doc/html/TechNotes/VFLfunc.html +++ /dev/null @@ -1,64 +0,0 @@ - - --The following functions support the HDF5 virtual file layer (VFL), enabling -the creation of customized I/O drivers. - -At this time, these functions are documented only in the HDF5 Virtual File -Layer design document and in the source code. - - - -herr_t H5Pset_driver(hid_t plist_id, hid_t driver_id, - const void *driver_info) - -void *H5Pget_driver_info(hid_t plist_id) - -hid_t H5FDregister(const H5FD_class_t *cls); - -herr_t H5FDunregister(hid_t driver_id); - -H5FD_t *H5FDopen(const char *name, unsigned flags, hid_t fapl_id, - haddr_t maxaddr); - -herr_t H5FDclose(H5FD_t *file); - -int H5FDcmp(const H5FD_t *f1, const H5FD_t *f2); - -int H5FDquery(const H5FD_t *f, unsigned long *flags); - -haddr_t H5FDalloc(H5FD_t *file, H5FD_mem_t type, hsize_t size); - -herr_t H5FDfree(H5FD_t *file, H5FD_mem_t type, haddr_t addr, hsize_t size); - -haddr_t H5FDrealloc(H5FD_t *file, H5FD_mem_t type, haddr_t addr, - hsize_t old_size, hsize_t new_size); - -haddr_t H5FDget_eoa(H5FD_t *file); - -herr_t H5FDset_eoa(H5FD_t *file, haddr_t eof); - -haddr_t H5FDget_eof(H5FD_t *file); - -herr_t H5FDread(H5FD_t *file, H5FD_mem_t type, hid_t dxpl_id, haddr_t addr, - size_t size, void *buf/*out*/); - -herr_t H5FDwrite(H5FD_t *file, H5FD_mem_t type, hid_t dxpl_id, - haddr_t addr, size_t size, const void *buf); - -herr_t H5FDflush(H5FD_t *file, unsigned closing); - -=========================================== -Last modified: 25 June 2002 -HDF Help Desk: hdfhelp@ncsa.uiuc.edu - -- - diff --git a/doc/html/TechNotes/VLTypes.html b/doc/html/TechNotes/VLTypes.html deleted file mode 100644 index 8a41c10..0000000 --- a/doc/html/TechNotes/VLTypes.html +++ /dev/null @@ -1,150 +0,0 @@ - - -
Variable-length (VL) datatypes have a great deal of flexibility, but can - be over- or mis-used. VL datatypes are ideal at capturing the notion - that elements in an HDF5 dataset (or attribute) can have different - amounts of information (VL strings are the canonical example), - but they have some drawbacks that this document attempts - to address. -
- -Because fast random access to dataset elements requires that each - element be a fixed size, the information stored for VL datatype elements - is actually information to locate the VL information, not - the information itself. -
-VL datatypes are designed to allow the amount of data stored in each - element of a dataset to vary. This change could be - over time, as new values with different lengths are written to the - element. Or, the change can be over "space" - the dataset's space, - with each element in the dataset having the same fundamental type, but - different lengths. "Ragged arrays" are the classic example of elements - that change over the "space" of the dataset. If the elements of a - dataset are not going to change over "space" or time, a VL datatype - should probably not be used. -
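As an illustration of the ragged arrays mentioned above, here is a minimal sketch of a VL element built as a length plus a pointer, analogous in shape to HDF5's hvl_t. The names `demo_vl_t` and `demo_make_row` are invented for this sketch; a real program would use the library's own type.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Memory layout of one VL element: a count plus a pointer to data that
 * lives elsewhere (which is exactly why VL data needs the indirection
 * this document describes). */
typedef struct {
    size_t len;   /* number of items in this element */
    void  *p;     /* the element's actual data       */
} demo_vl_t;

/* Build one row of a ragged integer array. */
static demo_vl_t demo_make_row(const int *values, size_t n)
{
    demo_vl_t row;
    row.len = n;
    row.p = malloc(n * sizeof(int));
    memcpy(row.p, values, n * sizeof(int));
    return row;
}
```

Each element of the dataset has the same fixed-size descriptor, while the data each descriptor points at may have a different length.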
- -Accessing VL information requires reading the element in the file, then - using that element's location information to retrieve the VL - information itself. - In the worst case, this obviously doubles the number of disk accesses - required to access the VL information. -
-However, in order to avoid this extra disk access overhead, the HDF5 - library groups VL information together into larger blocks on disk and - performs I/O only on those larger blocks. Additionally, these blocks of - information are cached in memory as long as possible. For most access - patterns, this amortizes the extra disk accesses over enough pieces of - VL information to hide the extra overhead involved. -
- -Because VL information must be located and retrieved from another - location in the file, extra information must be stored in the file to - locate - each item of VL information (i.e. each element in a dataset or each - VL field in a compound datatype, etc.). - Currently, that extra information amounts to 32 bytes per VL item. -
-- With some judicious re-architecting of the library and file format, - this could be reduced to 18 bytes per VL item with no loss in - functionality or additional time penalties. With some additional - effort, the space could perhaps be pushed down as low as 8-10 - bytes per VL item with no loss in functionality, but potentially a - small time penalty. -
-Storing data as VL information has some effects on chunked storage and - the filters that can be applied to chunked data. Because the data that - is stored in each chunk is the location to access the VL information, - the actual VL information is not broken up into chunks in the same way - as other data stored in chunks. Additionally, because the - actual VL information is not stored in the chunk, any filters which - operate on a chunk will operate on the information to - locate the VL information, not the VL information itself. -
-Because the parallel I/O file drivers (MPI-I/O and MPI-posix) don't - allow objects with varying sizes to be created in the file, attempting - to create - a dataset or attribute with a VL datatype in a file managed by those - drivers will cause the creation call to fail. -
-Additionally, using - VL datatypes and the 'multi' and 'split' file drivers may not operate - in the manner desired. The HDF5 library currently categorizes the - "blocks of VL information" stored in the file as a type of metadata, - which means that they may not be stored with the other raw data for - the file. -
-When VL information in the file is re-written, the old VL information - must be released, space for the new VL information must be allocated, and - the new VL information must be written to the file. This may cause - additional I/O accesses. -
- - - - - diff --git a/doc/html/TechNotes/Version.html b/doc/html/TechNotes/Version.html deleted file mode 100644 index 0e0853b..0000000 --- a/doc/html/TechNotes/Version.html +++ /dev/null @@ -1,137 +0,0 @@ - - - -The HDF5 version number is a set of three integer values
- written as either hdf5-1.2.3
or hdf5 version
- 1.2 release 3
.
-
-
The 5
is part of the library name and will only
- change if the entire file format and library are redesigned,
- a change similar in scope to that between HDF4 and HDF5.
-
-
The 1
is the major version number and
- changes when there is an extensive change to the file format or
- library API. Such a change will likely require files to be
- translated and applications to be modified. This number is not
- expected to change frequently.
-
-
The 2
is the minor version number and is
- incremented by each public release that presents new features.
- Even numbers are reserved for stable public versions of the
- library while odd numbers are reserved for developement
- versions. See the diagram below for examples.
-
-
The 3
is the release number. For public
- versions of the library, the release number is incremented each
- time a bug is fixed and the fix is made available to the public.
- For development versions, the release number is incremented more
- often (perhaps almost daily).
-
-
It's often convenient to drop the release number when referring - to a version of the library, like saying version 1.2 of HDF5. - The release number can be any value in this case. - -
Version 1.0.0 was released for alpha testing the first week of - March, 1998. The development version number was incremented to - 1.0.1 and remained constant until the last week of April, - when the release number started to increase and development - versions were made available to people outside the core HDF5 - development team. -
Version 1.0.23 was released mid-July as a second alpha - version. - -
Version 1.1.0 will be the first official beta release but the - 1.1 branch will also serve as a development branch since we're - not concerned about providing bug fixes separate from normal - development for the beta version. - -
After the beta release we rolled back the version number so the - first release is version 1.0 and development will continue on - version 1.1. We felt that an initial version of 1.0 was more - important than continuing to increment the pre-release version - numbers. - -
The motivation for separate public and development versions is - that the public version will receive only bug fixes while the - development version will receive new features. This also allows - us to release bug fixes expediently without waiting for the - development version to reach a stable state. - -
Eventually, the development version will near completion and a - new development branch will fork while the original one enters a - feature freeze state. When the original development branch is - ready for release the minor version number will be incremented - to an even value. - -
-
The library provides a set of macros and functions to query and - check version numbers. - -
H5_VERS_MAJOR
- H5_VERS_MINOR
- H5_VERS_RELEASE
- herr_t H5get_libversion (unsigned *majnum, unsigned
- *minnum, unsigned *relnum)
- void H5check(void)
- herr_t H5check_version (unsigned majnum,
- unsigned minnum, unsigned relnum)
- H5check()
macro
- with the include file version constants. The function
- compares its arguments to the result returned by
- H5get_libversion()
and if a mismatch is detected prints
- an error message on the standard error stream and aborts.
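The version-checking logic described above can be sketched as follows. The `DEMO_*` names are invented stand-ins for the real constants and functions (`H5_VERS_MAJOR`, `H5get_libversion`, `H5check_version`), and unlike the real function this sketch returns a status instead of printing to stderr and aborting.

```c
#include <assert.h>

/* Stand-ins for the compile-time header constants; values are made up. */
#define DEMO_VERS_MAJOR   1u
#define DEMO_VERS_MINOR   2u
#define DEMO_VERS_RELEASE 3u

/* Stand-in for the runtime query: report the linked library's version. */
static void demo_get_libversion(unsigned *majnum, unsigned *minnum, unsigned *relnum)
{
    *majnum = 1; *minnum = 2; *relnum = 3;
}

/* Mirror of the check: compare the constants the application was
 * compiled with against what the running library reports. */
static int demo_check_version(unsigned majnum, unsigned minnum, unsigned relnum)
{
    unsigned maj, min, rel;
    demo_get_libversion(&maj, &min, &rel);
    return (majnum == maj && minnum == min && relnum == rel) ? 0 : -1;
}
```

An application would invoke the check with the header constants, so a mismatch between include files and linked library is caught at startup rather than surfacing later as corruption.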
- - Using HDF5 with OpenMP - ---------------------- - - -1. Introduction to OpenMP -------------------------- - - - For shared-memory parallelism - - A combination of library and directives - - Available for C/C++ and Fortran - - SGI leading effort - - Information at http://www.openmp.org and - http://www.sgi.com/software/openmp - -2. Programming (SGI MIPSpro compiler and C language) ---------------------------------------------------- - - - Turn on the compiler '-mp' option - - Include 'omp.h' in the program - - Use library functions, directives and environment variables - - -3. Sample Programs ------------------ -Appendix A contains four OpenMP-HDF5 test programs. (They are derived from -hdf5/examples/h5_write.c.) The purpose of these programs is to -test OpenMP parallelism with the HDF5 library. -All tests were run on modi4 with the SGI MIPSpro compiler (cc) and make. -Program 1 and Program 2 are the working programs. Program 3 and Program 4 -work only occasionally, due to race conditions. -Follow these steps to try the programs. - a. have your hdf5 library compiled, - b. go to the hdf5/examples directory, - c. add the -mp option to the end of the CFLAGS list in the Makefile. If you - have the compiled program in another directory, you should go to the - examples in that directory. - d. modify hdf5/examples/h5_write.c according to the program attached - here. - e. use hdf5/tools/h5dump to examine the output file. - - -4. Conclusion -------------- -It is not safe to invoke HDF5 library calls from multiple threads in an -OpenMP program. But if one serializes the HDF5 calls as illustrated in Program 1, -the HDF5 library works correctly with OpenMP programs. -The serialization of HDF5 calls slows down the OpenMP program unnecessarily. -Future study is needed to check possible ways to "un-serialize" the HDF5 calls. -One possibility is that the HDF5 library has a beta version of a thread-safe -implementation, though it is for the Pthreads environment. 
One can check on the -feasibility of running OpenMP programs with this version of HDF5 Thread-safe -library. - - - -Appendix A: OpenMP-HDF5 Programs - -------- -Updated: 2000/11/28 -Contact: hdfhelp@ncsa.uiuc.edu -diff --git a/doc/html/TechNotes/pipe1.gif b/doc/html/TechNotes/pipe1.gif deleted file mode 100644 index 3b489a6..0000000 Binary files a/doc/html/TechNotes/pipe1.gif and /dev/null differ diff --git a/doc/html/TechNotes/pipe1.obj b/doc/html/TechNotes/pipe1.obj deleted file mode 100644 index 41f3461..0000000 --- a/doc/html/TechNotes/pipe1.obj +++ /dev/null @@ -1,136 +0,0 @@ -%TGIF 3.0-p5 -state(1,33,100,0,0,0,8,1,9,1,1,0,0,0,0,1,1,'Helvetica',0,17,0,0,0,10,0,0,1,1,0,16,0,0,1,1,1,0,1408,1088,0,0,2880). -% -% @(#)$Header$ -% %W% -% -unit("1 pixel/pixel"). -page(1,"",1). -box('black',64,64,128,256,0,1,1,22,0,0,0,0,0,'1',[ -]). -box('black',80,96,112,224,26,1,1,23,0,0,0,0,0,'1',[ -]). -poly('black',2,[ - 128,160,912,160],1,2,1,24,0,0,0,0,10,4,0,0,0,'2','10','4', - "0",[ -]). -poly('black',5,[ - 160,160,144,224,160,272,176,224,160,160],1,2,1,25,2,0,0,0,10,4,0,0,0,'2','10','4', - "",[ -]). -poly('black',5,[ - 848,160,832,224,848,272,864,224,848,160],1,2,1,34,2,0,0,0,10,4,0,0,0,'2','10','4', - "",[ -]). -box('black',464,192,496,256,26,1,1,39,0,0,0,0,0,'1',[ -]). -poly('black',2,[ - 160,224,464,224],1,2,1,40,0,26,0,0,10,4,0,0,0,'2','10','4', - "0",[ -]). -poly('black',2,[ - 496,224,848,224],1,2,1,41,0,26,0,0,10,4,0,0,0,'2','10','4', - "0",[ -]). -poly('black',5,[ - 192,224,176,288,192,336,208,288,192,224],1,2,1,42,2,0,0,0,10,4,0,0,0,'2','10','4', - "",[ -]). -poly('black',5,[ - 432,224,416,288,432,336,448,288,432,224],1,2,1,43,2,0,0,0,10,4,0,0,0,'2','10','4', - "",[ -]). -poly('black',2,[ - 192,288,432,288],1,2,1,44,0,26,0,0,10,4,0,0,0,'2','10','4', - "0",[ -]). -box('black',464,352,496,416,26,1,1,45,0,0,0,0,0,'1',[ -]). -poly('black',5,[ - 528,224,512,288,528,336,544,288,528,224],1,2,1,46,2,0,0,0,10,4,0,0,0,'2','10','4', - "",[ -]). 
-poly('black',5,[ - 816,224,800,288,816,336,832,288,816,224],1,2,1,47,2,0,0,0,10,4,0,0,0,'2','10','4', - "",[ -]). -poly('black',2,[ - 528,288,816,288],1,2,1,48,0,26,0,0,10,4,0,0,0,'2','10','4', - "0",[ -]). -poly('black',5,[ - 464,256,456,304,464,328,488,304,488,256],1,2,1,62,2,0,0,0,10,4,0,0,0,'2','10','4', - "",[ -]). -poly('black',2,[ - 480,352,488,304],2,2,1,85,0,0,0,0,10,4,0,0,0,'2','10','4', - "0",[ -]). -box('black',912,64,976,256,0,1,1,87,0,0,0,0,0,'1',[ -]). -box('black',928,96,960,224,26,1,1,88,0,0,0,0,0,'1',[ -]). -text('black',96,48,'Helvetica',0,17,1,1,0,1,21,15,89,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "File"]). -text('black',944,48,'Helvetica',0,17,1,1,0,1,64,15,93,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "Application"]). -text('black',480,144,'Helvetica',0,17,1,1,0,1,65,15,99,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5D_read()"]). -text('black',480,128,'Helvetica',0,17,1,1,0,1,58,15,108,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5Dread()"]). -text('black',304,208,'Helvetica',0,17,1,1,0,1,86,15,115,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5F_arr_read()"]). -text('black',304,192,'Helvetica',0,17,1,1,0,1,99,15,119,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5S_simp_fgath()"]). -text('black',296,288,'Helvetica',0,17,1,1,0,1,101,15,125,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5F_block_read()"]). -text('black',296,304,'Helvetica',0,17,1,1,0,1,90,15,132,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5F_low_read()"]). -text('black',296,320,'Helvetica',0,17,1,1,0,1,98,15,136,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5F_sec2_read()"]). -text('black',296,336,'Helvetica',0,17,1,1,0,1,33,15,140,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "read()"]). -text('black',664,208,'Helvetica',0,17,1,1,0,1,106,15,146,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5V_stride_copy()"]). -text('black',664,176,'Helvetica',0,17,1,1,0,1,104,15,150,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5S_simp_mscat()"]). 
-text('black',664,272,'Helvetica',0,17,1,1,0,1,54,15,154,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "memcpy()"]). -text('black',384,392,'Helvetica',0,17,1,1,0,1,105,15,170,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5T_conv_struct()"]). -poly('black',4,[ - 392,384,400,352,440,368,456,336],1,1,1,172,1,0,0,0,8,3,0,0,0,'1','8','3', - "6",[ -]). -text('black',480,176,'Helvetica',0,17,1,1,0,1,44,15,176,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "TCONV"]). -text('black',480,416,'Helvetica',0,17,1,1,0,1,25,15,182,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "BKG"]). -box('black',48,32,992,512,0,1,1,186,0,0,0,0,0,'1',[ -]). -poly('black',5,[ - 72,392,56,456,72,504,88,456,72,392],1,2,1,188,2,0,0,0,10,4,0,0,0,'2','10','4', - "",[ -]). -text('black',96,448,'Helvetica',0,17,1,0,0,1,46,15,189,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "== Loop"]). -poly('black',3,[ - 48,384,152,384,152,512],0,1,1,191,0,0,0,0,8,3,0,0,0,'1','8','3', - "0",[ -]). -text('black',480,40,'Helvetica',0,24,1,1,0,1,380,29,197,0,24,5,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "Fig 1: Internal Contiguous Storage"]). -text('black',136,144,'Helvetica',0,17,1,1,0,1,9,15,201,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "A"]). -text('black',160,208,'Helvetica',0,17,1,1,0,1,8,15,207,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "B"]). -text('black',192,272,'Helvetica',0,17,1,1,0,1,9,15,211,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "C"]). -text('black',504,208,'Helvetica',0,17,1,1,0,1,8,15,215,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "E"]). -text('black',528,272,'Helvetica',0,17,1,1,0,1,8,15,223,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "F"]). -text('black',464,304,'Helvetica',0,17,1,1,0,1,9,15,231,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "D"]). -text('black',664,192,'Helvetica',0,17,1,1,0,1,107,15,324,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5V_hyper_copy()"]). 
diff --git a/doc/html/TechNotes/pipe2.gif b/doc/html/TechNotes/pipe2.gif deleted file mode 100644 index 3a0c947..0000000 Binary files a/doc/html/TechNotes/pipe2.gif and /dev/null differ diff --git a/doc/html/TechNotes/pipe2.obj b/doc/html/TechNotes/pipe2.obj deleted file mode 100644 index 70d9c18..0000000 --- a/doc/html/TechNotes/pipe2.obj +++ /dev/null @@ -1,168 +0,0 @@ -%TGIF 3.0-p5 -state(1,33,100,0,0,0,8,1,9,1,1,1,1,0,0,1,1,'Helvetica',0,17,0,0,0,10,0,0,1,1,0,16,0,0,1,1,1,0,1408,1088,0,0,2880). -% -% @(#)$Header$ -% %W% -% -unit("1 pixel/pixel"). -page(1,"",1). -box('black',64,64,128,256,0,1,1,22,0,0,0,0,0,'1',[ -]). -box('black',80,96,112,224,26,1,1,23,0,0,0,0,0,'1',[ -]). -poly('black',2,[ - 128,160,912,160],1,2,1,24,0,0,0,0,10,4,0,0,0,'2','10','4', - "0",[ -]). -poly('black',5,[ - 160,160,144,224,160,272,176,224,160,160],1,2,1,25,2,0,0,0,10,4,0,0,0,'2','10','4', - "",[ -]). -poly('black',5,[ - 848,160,832,224,848,272,864,224,848,160],1,2,1,34,2,0,0,0,10,4,0,0,0,'2','10','4', - "",[ -]). -box('black',464,192,496,256,26,1,1,39,0,0,0,0,0,'1',[ -]). -poly('black',2,[ - 160,224,464,224],1,2,1,40,0,26,0,0,10,4,0,0,0,'2','10','4', - "0",[ -]). -poly('black',2,[ - 496,224,848,224],1,2,1,41,0,26,0,0,10,4,0,0,0,'2','10','4', - "0",[ -]). -poly('black',5,[ - 192,224,176,288,192,336,208,288,192,224],1,2,1,42,2,0,0,0,10,4,0,0,0,'2','10','4', - "",[ -]). -poly('black',5,[ - 432,224,416,288,432,336,448,288,432,224],1,2,1,43,2,0,0,0,10,4,0,0,0,'2','10','4', - "",[ -]). -poly('black',2,[ - 192,288,432,288],1,2,1,44,0,26,0,0,10,4,0,0,0,'2','10','4', - "0",[ -]). -box('black',464,352,496,416,26,1,1,45,0,0,0,0,0,'1',[ -]). -poly('black',5,[ - 528,224,512,288,528,336,544,288,528,224],1,2,1,46,2,0,0,0,10,4,0,0,0,'2','10','4', - "",[ -]). -poly('black',5,[ - 816,224,800,288,816,336,832,288,816,224],1,2,1,47,2,0,0,0,10,4,0,0,0,'2','10','4', - "",[ -]). -poly('black',2,[ - 528,288,816,288],1,2,1,48,0,26,0,0,10,4,0,0,0,'2','10','4', - "0",[ -]). 
-poly('black',5,[ - 848,240,848,352,832,384,800,384,496,384],1,2,1,55,1,0,0,0,10,4,0,0,0,'2','10','4', - "70",[ -]). -poly('black',5,[ - 528,384,512,448,528,496,544,448,528,384],1,2,1,57,2,0,0,0,10,4,0,0,0,'2','10','4', - "",[ -]). -poly('black',5,[ - 800,384,784,448,800,496,816,448,800,384],1,2,1,58,2,0,0,0,10,4,0,0,0,'2','10','4', - "",[ -]). -poly('black',2,[ - 800,448,528,448],1,2,1,61,0,0,0,0,10,4,0,0,0,'2','10','4', - "0",[ -]). -poly('black',5,[ - 464,256,456,304,464,328,488,304,488,256],1,2,1,62,2,0,0,0,10,4,0,0,0,'2','10','4', - "",[ -]). -poly('black',2,[ - 480,352,488,304],0,2,1,85,0,0,0,0,10,4,0,0,0,'2','10','4', - "0",[ -]). -box('black',912,64,976,256,0,1,1,87,0,0,0,0,0,'1',[ -]). -box('black',928,96,960,224,26,1,1,88,0,0,0,0,0,'1',[ -]). -text('black',96,48,'Helvetica',0,17,1,1,0,1,21,15,89,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "File"]). -text('black',944,48,'Helvetica',0,17,1,1,0,1,64,15,93,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "Application"]). -text('black',480,144,'Helvetica',0,17,1,1,0,1,65,15,99,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5D_read()"]). -text('black',480,128,'Helvetica',0,17,1,1,0,1,58,15,108,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5Dread()"]). -text('black',304,208,'Helvetica',0,17,1,1,0,1,86,15,115,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5F_arr_read()"]). -text('black',304,192,'Helvetica',0,17,1,1,0,1,99,15,119,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5S_simp_fgath()"]). -text('black',296,288,'Helvetica',0,17,1,1,0,1,101,15,125,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5F_block_read()"]). -text('black',296,304,'Helvetica',0,17,1,1,0,1,90,15,132,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5F_low_read()"]). -text('black',296,320,'Helvetica',0,17,1,1,0,1,98,15,136,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5F_sec2_read()"]). -text('black',296,336,'Helvetica',0,17,1,1,0,1,33,15,140,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "read()"]). 
-text('black',664,208,'Helvetica',0,17,1,1,0,1,106,15,146,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5V_stride_copy()"]). -text('black',664,176,'Helvetica',0,17,1,1,0,1,104,15,150,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5S_simp_mscat()"]). -text('black',664,272,'Helvetica',0,17,1,1,0,1,54,15,154,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "memcpy()"]). -text('black',672,368,'Helvetica',0,17,1,1,0,1,106,15,158,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5V_stride_copy()"]). -text('black',672,336,'Helvetica',0,17,1,1,0,1,105,15,162,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5S_simp_mgath()"]). -text('black',672,432,'Helvetica',0,17,1,1,0,1,54,15,166,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "memcpy()"]). -text('black',384,392,'Helvetica',0,17,1,1,0,1,105,15,170,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5T_conv_struct()"]). -poly('black',4,[ - 392,384,400,352,440,368,456,336],1,1,1,172,1,0,0,0,8,3,0,0,0,'1','8','3', - "6",[ -]). -text('black',480,176,'Helvetica',0,17,1,1,0,1,44,15,176,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "TCONV"]). -text('black',480,416,'Helvetica',0,17,1,1,0,1,25,15,182,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "BKG"]). -box('black',48,32,992,512,0,1,1,186,0,0,0,0,0,'1',[ -]). -poly('black',5,[ - 72,392,56,456,72,504,88,456,72,392],1,2,1,188,2,0,0,0,10,4,0,0,0,'2','10','4', - "",[ -]). -text('black',96,448,'Helvetica',0,17,1,0,0,1,46,15,189,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "== Loop"]). -poly('black',3,[ - 48,384,152,384,152,512],0,1,1,191,0,0,0,0,8,3,0,0,0,'1','8','3', - "0",[ -]). -text('black',480,40,'Helvetica',0,24,1,1,0,1,404,29,197,0,24,5,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "Fig 2: Partially Initialized Destination"]). -text('black',136,144,'Helvetica',0,17,1,1,0,1,9,15,201,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "A"]). -text('black',160,208,'Helvetica',0,17,1,1,0,1,8,15,207,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "B"]). -text('black',192,272,'Helvetica',0,17,1,1,0,1,9,15,211,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "C"]). 
-text('black',504,208,'Helvetica',0,17,1,1,0,1,8,15,215,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "E"]). -text('black',528,272,'Helvetica',0,17,1,1,0,1,8,15,223,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "F"]). -text('black',856,288,'Helvetica',0,17,1,1,0,1,9,15,225,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "G"]). -text('black',800,432,'Helvetica',0,17,1,1,0,1,9,15,229,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H"]). -text('black',464,304,'Helvetica',0,17,1,1,0,1,9,15,231,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "D"]). -poly('black',4,[ - 848,240,848,224,864,224,904,224],0,2,1,318,1,0,0,0,10,4,0,0,0,'2','10','4', - "6",[ -]). -text('black',664,192,'Helvetica',0,17,1,1,0,1,107,15,326,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5V_hyper_copy()"]). -text('black',672,352,'Helvetica',0,17,1,1,0,1,107,15,334,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5V_hyper_copy()"]). diff --git a/doc/html/TechNotes/pipe3.gif b/doc/html/TechNotes/pipe3.gif deleted file mode 100644 index 26d82ad..0000000 Binary files a/doc/html/TechNotes/pipe3.gif and /dev/null differ diff --git a/doc/html/TechNotes/pipe3.obj b/doc/html/TechNotes/pipe3.obj deleted file mode 100644 index cdfef7c..0000000 --- a/doc/html/TechNotes/pipe3.obj +++ /dev/null @@ -1,70 +0,0 @@ -%TGIF 3.0-p5 -state(1,33,100,0,0,0,8,1,9,1,1,0,0,0,0,1,1,'Helvetica',0,17,0,0,0,10,0,0,1,1,0,16,0,0,1,1,1,0,1408,1088,0,0,2880). -% -% @(#)$Header$ -% %W% -% -unit("1 pixel/pixel"). -page(1,"",1). -box('black',64,64,128,256,0,1,1,22,0,0,0,0,0,'1',[ -]). -box('black',80,96,112,224,26,1,1,23,0,0,0,0,0,'1',[ -]). -poly('black',2,[ - 128,160,912,160],1,2,1,24,0,0,0,0,10,4,0,0,0,'2','10','4', - "0",[ -]). -box('black',912,64,976,256,0,1,1,87,0,0,0,0,0,'1',[ -]). -box('black',928,96,960,224,26,1,1,88,0,0,0,0,0,'1',[ -]). -text('black',96,48,'Helvetica',0,17,1,1,0,1,21,15,89,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "File"]). -text('black',944,48,'Helvetica',0,17,1,1,0,1,64,15,93,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "Application"]). 
-text('black',480,104,'Helvetica',0,17,1,1,0,1,65,15,99,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5D_read()"]). -text('black',480,88,'Helvetica',0,17,1,1,0,1,58,15,108,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5Dread()"]). -box('black',48,32,992,512,0,1,1,186,0,0,0,0,0,'1',[ -]). -poly('black',5,[ - 72,392,56,456,72,504,88,456,72,392],1,2,1,188,2,0,0,0,10,4,0,0,0,'2','10','4', - "",[ -]). -text('black',96,448,'Helvetica',0,17,1,0,0,1,46,15,189,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "== Loop"]). -poly('black',3,[ - 48,384,152,384,152,512],0,1,1,191,0,0,0,0,8,3,0,0,0,'1','8','3', - "0",[ -]). -text('black',480,40,'Helvetica',0,24,1,1,0,1,295,29,197,0,24,5,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "Fig 3: No Type Conversion"]). -text('black',136,144,'Helvetica',0,17,1,1,0,1,9,15,201,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "A"]). -poly('black',5,[ - 152,160,136,224,152,272,168,224,152,160],1,2,1,273,2,0,0,0,10,4,0,0,0,'2','10','4', - "",[ -]). -text('black',480,120,'Helvetica',0,17,1,1,0,1,96,15,277,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5S_simp_read()"]). -text('black',480,136,'Helvetica',0,17,1,1,0,1,86,15,281,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5F_arr_read()"]). -poly('black',5,[ - 880,160,864,224,880,272,896,224,880,160],1,2,1,283,2,0,0,0,10,4,0,0,0,'2','10','4', - "",[ -]). -poly('black',2,[ - 152,224,880,224],1,2,1,286,0,0,0,0,10,4,0,0,0,'2','10','4', - "0",[ -]). -text('black',480,232,'Helvetica',0,17,1,1,0,1,101,15,291,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5F_block_read()"]). -text('black',480,248,'Helvetica',0,17,1,1,0,1,90,15,293,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5F_low_read()"]). -text('black',480,264,'Helvetica',0,17,1,1,0,1,98,15,309,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5F_sec2_read()"]). -text('black',480,280,'Helvetica',0,17,1,1,0,1,33,15,311,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "read()"]). -text('black',176,208,'Helvetica',0,17,1,1,0,1,8,15,418,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "B"]). 
diff --git a/doc/html/TechNotes/pipe4.gif b/doc/html/TechNotes/pipe4.gif deleted file mode 100644 index a3a857b..0000000 Binary files a/doc/html/TechNotes/pipe4.gif and /dev/null differ diff --git a/doc/html/TechNotes/pipe4.obj b/doc/html/TechNotes/pipe4.obj deleted file mode 100644 index 6f50123..0000000 --- a/doc/html/TechNotes/pipe4.obj +++ /dev/null @@ -1,92 +0,0 @@ -%TGIF 3.0-p5 -state(1,33,100,0,0,0,8,1,9,1,1,1,2,1,0,1,0,'Helvetica',0,17,0,0,0,10,0,0,1,1,0,16,0,0,1,1,1,0,1408,1088,0,0,2880). -% -% @(#)$Header$ -% %W% -% -unit("1 pixel/pixel"). -page(1,"",1). -box('black',64,64,128,256,0,1,1,22,0,0,0,0,0,'1',[ -]). -box('black',80,96,112,224,26,1,1,23,0,0,0,0,0,'1',[ -]). -poly('black',2,[ - 128,160,912,160],1,2,1,24,0,0,0,0,10,4,0,0,0,'2','10','4', - "0",[ -]). -box('black',912,96,944,224,26,1,1,88,0,0,0,0,0,'1',[ -]). -text('black',96,48,'Helvetica',0,17,1,1,0,1,21,15,89,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "File"]). -text('black',928,72,'Helvetica',0,17,1,1,0,1,32,15,93,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "Buffer"]). -box('black',48,32,992,512,0,1,1,186,0,0,0,0,0,'1',[ -]). -poly('black',5,[ - 72,392,56,456,72,504,88,456,72,392],1,2,1,188,2,0,0,0,10,4,0,0,0,'2','10','4', - "",[ -]). -text('black',96,448,'Helvetica',0,17,1,0,0,1,46,15,189,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "== Loop"]). -poly('black',3,[ - 48,384,152,384,152,512],0,1,1,191,0,0,0,0,8,3,0,0,0,'1','8','3', - "0",[ -]). -text('black',480,40,'Helvetica',0,24,1,1,0,1,372,29,197,0,24,5,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "Fig 4: Regularly Chunked Storage"]). -text('black',136,144,'Helvetica',0,17,1,1,0,1,9,15,201,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "A"]). -text('black',480,104,'Helvetica',0,17,1,1,0,1,86,15,281,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5F_arr_read()"]). -text('black',480,120,'Helvetica',0,17,1,1,0,1,102,15,349,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5F_istore_read()"]). 
-text('black',480,136,'Helvetica',0,17,1,1,0,1,167,15,351,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5F_istore_copy_hyperslab()"]). -poly('black',5,[ - 160,160,144,224,160,272,176,224,160,160],1,2,1,362,2,0,0,0,10,4,0,0,0,'2','10','4', - "",[ -]). -poly('black',5,[ - 880,160,864,224,880,272,896,224,880,160],1,2,1,363,2,0,0,0,10,4,0,0,0,'2','10','4', - "",[ -]). -box('black',448,192,512,256,26,1,1,364,0,0,0,0,0,'1',[ -]). -text('black',480,176,'Helvetica',0,17,1,1,0,1,43,15,367,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "CHUNK"]). -poly('black',2,[ - 160,224,448,224],1,2,1,372,0,0,0,0,10,4,0,0,0,'2','10','4', - "0",[ -]). -poly('black',2,[ - 512,224,880,224],1,2,1,373,0,0,0,0,10,4,0,0,0,'2','10','4', - "0",[ -]). -text('black',288,224,'Helvetica',0,17,1,1,0,1,101,15,385,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5F_block_read()"]). -text('black',288,240,'Helvetica',0,17,1,1,0,1,90,15,387,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5F_low_read()"]). -text('black',288,256,'Helvetica',0,17,1,1,0,1,98,15,391,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5F_sec2_read()"]). -text('black',288,272,'Helvetica',0,17,1,1,0,1,33,15,395,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "read()"]). -poly('black',5,[ - 456,256,448,296,480,320,512,296,504,256],1,2,1,401,2,0,0,0,10,4,0,0,0,'2','10','4', - "",[ -]). -text('black',184,208,'Helvetica',0,17,1,1,0,1,8,15,422,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "B"]). -text('black',520,208,'Helvetica',0,17,1,1,0,1,9,15,434,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "D"]). -text('black',440,272,'Helvetica',0,17,1,1,0,1,9,15,440,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "C"]). -text('black',480,320,'Helvetica',0,17,1,1,0,1,107,15,444,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5Z_uncompress()"]). -text('black',672,224,'Helvetica',0,17,1,1,0,1,107,15,454,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5V_hyper_copy()"]). -text('black',672,240,'Helvetica',0,17,1,1,0,1,106,15,464,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5V_stride_copy()"]). 
-text('black',672,256,'Helvetica',0,17,1,1,0,1,54,15,466,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "memcpy()"]). -text('black',168,488,'Helvetica',0,17,1,0,0,1,282,15,471,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "NOTE: H5Z_uncompress() is not implemented yet."]). diff --git a/doc/html/TechNotes/pipe5.gif b/doc/html/TechNotes/pipe5.gif deleted file mode 100644 index 6ae0098..0000000 Binary files a/doc/html/TechNotes/pipe5.gif and /dev/null differ diff --git a/doc/html/TechNotes/pipe5.obj b/doc/html/TechNotes/pipe5.obj deleted file mode 100644 index 4738bbd..0000000 --- a/doc/html/TechNotes/pipe5.obj +++ /dev/null @@ -1,52 +0,0 @@ -%TGIF 3.0-p5 -state(1,33,100,0,0,0,8,1,9,1,1,1,2,1,0,1,0,'Helvetica',0,17,0,0,0,10,0,0,1,1,0,16,0,0,1,1,1,0,1408,1088,0,0,2880). -% -% @(#)$Header$ -% %W% -% -unit("1 pixel/pixel"). -page(1,"",1). -box('black',64,64,128,256,0,1,1,22,0,0,0,0,0,'1',[ -]). -box('black',80,96,112,224,26,1,1,23,0,0,0,0,0,'1',[ -]). -poly('black',2,[ - 128,160,912,160],1,2,1,24,0,0,0,0,10,4,0,0,0,'2','10','4', - "0",[ -]). -box('black',912,96,944,224,26,1,1,88,0,0,0,0,0,'1',[ -]). -text('black',96,48,'Helvetica',0,17,1,1,0,1,21,15,89,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "File"]). -text('black',928,72,'Helvetica',0,17,1,1,0,1,32,15,93,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "Buffer"]). -box('black',48,32,992,512,0,1,1,186,0,0,0,0,0,'1',[ -]). -text('black',480,40,'Helvetica',0,24,1,1,0,1,333,29,197,0,24,5,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "Fig 5: Reading a Single Chunk"]). -text('black',136,144,'Helvetica',0,17,1,1,0,1,9,15,201,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "A"]). -text('black',480,112,'Helvetica',0,17,1,1,0,1,86,15,281,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5F_arr_read()"]). -text('black',480,128,'Helvetica',0,17,1,1,0,1,102,15,349,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5F_istore_read()"]). -text('black',480,144,'Helvetica',0,17,1,1,0,1,167,15,351,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5F_istore_copy_hyperslab()"]). 
-text('black',480,160,'Helvetica',0,17,1,1,0,1,101,15,385,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5F_block_read()"]). -text('black',480,176,'Helvetica',0,17,1,1,0,1,90,15,387,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5F_low_read()"]). -text('black',480,192,'Helvetica',0,17,1,1,0,1,98,15,391,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5F_sec2_read()"]). -text('black',480,208,'Helvetica',0,17,1,1,0,1,33,15,395,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "read()"]). -text('black',864,240,'Helvetica',0,17,1,1,0,1,107,15,444,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "H5Z_uncompress()"]). -text('black',56,488,'Helvetica',0,17,1,0,0,1,282,15,471,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "NOTE: H5Z_uncompress() is not implemented yet."]). -poly('black',5,[ - 912,176,864,176,840,208,872,232,912,216],1,2,1,490,2,0,0,0,10,4,0,0,0,'2','10','4', - "",[ -]). -text('black',896,184,'Helvetica',0,17,1,0,0,1,8,15,491,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "B"]). diff --git a/doc/html/TechNotes/shuffling-algorithm-report.pdf b/doc/html/TechNotes/shuffling-algorithm-report.pdf deleted file mode 100755 index 459653c..0000000 Binary files a/doc/html/TechNotes/shuffling-algorithm-report.pdf and /dev/null differ diff --git a/doc/html/TechNotes/version.gif b/doc/html/TechNotes/version.gif deleted file mode 100644 index 41d4401..0000000 Binary files a/doc/html/TechNotes/version.gif and /dev/null differ diff --git a/doc/html/TechNotes/version.obj b/doc/html/TechNotes/version.obj deleted file mode 100644 index 96b5b7f..0000000 --- a/doc/html/TechNotes/version.obj +++ /dev/null @@ -1,96 +0,0 @@ -%TGIF 3.0-p5 -state(0,33,100,0,0,0,8,1,9,1,1,0,2,1,0,1,0,'Courier',0,17,0,0,0,10,0,0,1,1,0,16,0,0,1,1,1,0,1088,1408,0,0,2880). -% -% @(#)$Header$ -% %W% -% -unit("1 pixel/pixel"). -page(1,"",1). -poly('black',2,[ - 128,128,128,448],0,3,1,0,0,0,0,0,12,5,0,0,0,'3','12','5', - "0",[ -]). -poly('black',2,[ - 128,128,128,64],0,3,1,1,0,0,2,0,12,5,0,0,0,'3','12','5', - "0",[ -]). 
-poly('black',2,[ - 128,448,128,512],0,3,1,4,0,0,2,0,12,5,0,0,0,'3','12','5', - "0",[ -]). -text('black',144,112,'Courier',0,17,1,0,0,1,42,14,22,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "1.3.30"]). -text('black',144,144,'Courier',0,17,1,0,0,1,42,14,30,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "1.3.31"]). -text('black',144,176,'Courier',0,17,1,0,0,1,42,14,32,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "1.3.32"]). -poly('black',2,[ - 256,208,256,448],0,3,1,34,0,0,0,0,12,5,0,0,0,'3','12','5', - "0",[ -]). -poly('black',2,[ - 256,448,256,512],0,3,1,36,0,0,2,0,12,5,0,0,0,'3','12','5', - "0",[ -]). -poly('black',2,[ - 128,192,256,208],1,1,1,37,0,0,0,0,8,3,0,0,0,'1','8','3', - "0",[ -]). -text('black',144,224,'Courier',0,17,1,0,0,1,42,14,41,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "1.3.33"]). -text('black',144,256,'Courier',0,17,1,0,0,1,42,14,43,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "1.3.34"]). -text('black',272,224,'Courier',0,17,1,0,0,1,35,14,45,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "1.5.0"]). -text('black',272,256,'Courier',0,17,1,0,0,1,35,14,47,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "1.5.1"]). -text('black',272,288,'Courier',0,17,1,0,0,1,35,14,49,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "1.5.2"]). -text('black',272,320,'Courier',0,17,1,0,0,1,35,14,51,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "1.5.3"]). -text('black',144,288,'Courier',0,17,1,0,0,1,42,14,53,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "1.3.35"]). -text('black',144,320,'Courier',0,17,1,0,0,1,35,14,57,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "1.4.0"]). -text('black',144,368,'Courier',0,17,1,0,0,1,35,14,59,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "1.4.1"]). -text('black',272,192,'Helvetica',0,17,1,0,0,1,144,15,67,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "New development branch"]). -text('black',144,64,'Helvetica',0,17,1,0,0,1,163,15,69,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "Original development branch"]). 
-text('black',16,208,'Helvetica',0,17,2,0,0,1,87,30,71,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "Feature Freeze", - "at this point."]). -text('black',16,320,'Helvetica',0,17,2,0,0,1,84,30,73,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "Public Release", - "at this point."]). -poly('black',2,[ - 104,208,128,208],1,1,1,77,0,0,0,0,8,3,0,0,0,'1','8','3', - "0",[ -]). -poly('black',2,[ - 104,320,128,320],1,1,1,78,0,0,0,0,8,3,0,0,0,'1','8','3', - "0",[ -]). -poly('black',2,[ - 256,336,128,352],1,1,1,79,0,0,0,0,8,3,0,0,0,'1','8','3', - "0",[ -]). -text('black',320,368,'Helvetica',0,17,3,0,0,1,137,45,82,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "Merge a bug fix from the", - "development branch to", - "the release branch."]). -box('black',312,368,464,416,0,1,1,87,0,0,0,0,0,'1',[ -]). -poly('black',4,[ - 312,392,240,384,296,344,232,344],1,1,1,90,1,0,0,0,8,3,0,0,0,'1','8','3', - "6",[ -]). -box('black',8,208,104,240,0,1,1,95,0,0,0,0,0,'1',[ -]). -box('black',8,320,104,352,0,1,1,98,0,0,0,0,0,'1',[ -]). -text('black',144,408,'Courier',0,17,1,0,0,1,35,14,102,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[ - "1.4.2"]). -box('black',0,40,480,528,0,1,1,104,0,0,0,0,0,'1',[ -]). diff --git a/doc/html/Tools.html b/doc/html/Tools.html deleted file mode 100644 index 21f967a..0000000 --- a/doc/html/Tools.html +++ /dev/null @@ -1,2760 +0,0 @@ - -
-HDF5 documents and links -Introduction to HDF5 -HDF5 User Guide - - |
-
-And in this document, the
-HDF5 Reference Manual
- -H5IM -H5LT -H5PT -H5TB - -H5 -H5A -H5D -H5E -H5F -H5G -H5I -H5P - -H5R -H5S -H5T -H5Z -Tools -Datatypes - |
- -HDF5-related tools are available to assist the user in a variety of -activities, including - examining or managing HDF5 files, - converting raw data between HDF5 and other special-purpose formats, - moving data and files between the HDF4 and HDF5 formats, - measuring HDF5 library performance, and - managing HDF5 library and application compilation, - installation and configuration. -Unless otherwise specified below, these tools are distributed and -installed with HDF5. - - -
http://hdf.ncsa.uiuc.edu/hdf-java-html/
)
- HDFview
-- a browser that
- works with both HDF4 and HDF5 files and
- can be used to transfer data between the two formats
- http://hdf.ncsa.uiuc.edu/h4toh5/
)
- http://hdf.ncsa.uiuc.edu/tools5.html
)
-
-- -
h5dump
- [
OPTIONS]
file
-h5dump
enables the user to examine
- the contents of an HDF5 file and dump those contents, in human
- readable form, to an ASCII file.
-
- h5dump
dumps HDF5 file content to standard output.
- It can display the contents of the entire HDF5 file or
- selected objects, which can be groups, datasets, a subset of a
- dataset, links, attributes, or datatypes.
-
- The --header
option displays object header
- information only.
-
- Names are the absolute names of the objects. h5dump
- displays objects in the order same as the command order. If a
- name does not start with a slash, h5dump
begins
- searching for the specified object starting at the root group.
-
- If an object is hard linked with multiple names,
- h5dump
displays the content of the object in the
- first occurrence. Only the link information is displayed in later
- occurrences.
-
- h5dump
assigns a name for any unnamed datatype in
- the form of
- #
oid1:
oid2, where
- oid1 and oid2 are the object identifiers
- assigned by the library. The unnamed types are displayed within
- the root group.
-
- Datatypes are displayed with standard type names. For example,
- if a dataset is created with H5T_NATIVE_INT
type
- and the standard type name for integer on that machine is
- H5T_STD_I32BE
, h5dump
displays
- H5T_STD_I32BE
as the type of the dataset.
-
- h5dump
can also dump a subset of a dataset.
- This feature operates in much the same way as hyperslabs in HDF5;
- the parameters specified on the command line are passed to the
- function
- H5Sselect_hyperslab
and the resulting selection
- is displayed.
-
- The h5dump
output is described in detail in the
- DDL for HDF5, the
- Data Description Language document.
-
- Note: It is not permissible to specify multiple
- attributes, datasets, datatypes, groups, or soft links with one
- flag. For example, one may not issue the command
-
-
- WRONG:
- h5dump -a /attr1 /attr2 foo.h5
-
- to display both /attr1
and /attr2
.
- One must issue the following command:
-
-
- CORRECT:
- h5dump -a /attr1 -a /attr2 foo.h5
-
-
- It's possible to select the file driver with which to open the - HDF5 file by using the --filedriver (-f) command-line option. - Acceptable values for the --filedriver option are: "sec2", - "family", "split", "multi", and "stream". If the file driver flag - isn't specified, then the file will be opened with each driver in - turn and in the order specified above until one driver succeeds - in opening the file. -
-- One byte integer type data is displayed in decimal by default. When - displayed in ASCII, a non-printable code is displayed in 3 octal - digits preceded by a backslash unless there is a C language escape - sequence for it. For example, CR and LF are printed as \r and \n. - Though the NUL code is represented as \0 in C, it is printed as - \000 to avoid ambiguity, as illustrated in the following 1-byte - char data (since this is not a string, embedded NUL is possible). -
- 141 142 143 000 060 061 062 012 - a b c \0 0 1 2 \n- h5dump prints them as "abc\000012\n". But if h5dump prints NUL as \0, - the output is "abc\0012\n" which is ambiguous. - - - -
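The escaping rule described above can be sketched as follows (a minimal Python illustration of the rule, not h5dump's actual code):

```python
# Sketch (not h5dump's implementation) of the display rule above:
# printable bytes are shown as-is, bytes with a C escape use it, and
# everything else -- including NUL -- is shown as exactly three octal
# digits, so a following digit cannot merge into the escape.
C_ESCAPES = {0x0a: r"\n", 0x0d: r"\r", 0x09: r"\t"}

def escape_byte(b):
    if b in C_ESCAPES:
        return C_ESCAPES[b]
    if 0x20 <= b < 0x7f:          # printable ASCII
        return chr(b)
    return "\\%03o" % b           # always 3 octal digits, never "\0"

data = bytes([0o141, 0o142, 0o143, 0o000, 0o060, 0o061, 0o062, 0o012])
print("".join(escape_byte(b) for b in data))   # abc\000012\n
```

With `\000` the boundary between the NUL and the following `012` digits is unambiguous; a two-character `\0` escape would not be.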
--xml
option, h5dump
generates
- XML output. This output contains a complete description of the file,
- marked up in XML. The XML conforms to the HDF5 Document Type
- Definition (DTD) available at
-
- http://hdf.ncsa.uiuc.edu/DTDs/HDF5-File.dtd
.
- - The XML output is suitable for use with other tools, including the - HDF5 Java Tools. - -
-h or --help |
- Print a usage message and exit. | -
-B or --bootblock |
- Print the content of the boot block. (This - option is not yet implemented.) |
-
-H or --header |
- Print the header only; no data is displayed. | -
-A |
- Print the header and value of attributes; data of datasets - is not displayed. | -
-i or --object-ids |
- Print the object ids. | -
-r or --string |
- Print 1-byte integer datasets as ASCII. | -
-V or --version |
- Print version number and exit. | -
-a P or --attribute=P |
- Print the specified attribute. | -
-d P or
- --dataset=P |
- Print the specified dataset. | -
-f D or --filedriver=D |
- Specify which driver to open the file with. | -
-g P or
- --group=P |
- Print the specified group and all members. | -
-l P or --soft-link=P |
- Print the value(s) of the specified soft link. | -
-o F or
- --output=F |
- Output raw data into file F. | -
-t T or
- --datatype=T |
- Print the specified named datatype. | -
-w N or
- --width=N |
- Set the number of columns of output. | -
-x or
- --xml |
- Output XML using XML schema (default) instead of DDL. | -
-u or
- --use-dtd |
- Output XML using XML DTD instead of DDL. | -
-D U or
- --xml-dtd=U |
- In XML output, refer to the DTD or schema at U - instead of the default schema/DTD. | -
-X S or
- --xml-dns=S |
- In XML output, (XML Schema) use qualified names in
- the XML: ":": no namespace, default: - "hdf5:" |
-
-s L or
- --start=L |
- Offset of start of subsetting selection. - Default: the beginning of the dataset. |
-
-S L or
- --stride=L |
- Hyperslab stride. - Default: 1 in all dimensions. |
-
-c L or
- --count=L |
- Number of blocks to include in the selection. | -
-k L or
- --block=L |
- Size of block in hyperslab. - Default: 1 in all dimensions. |
-
-- |
- Indicate that all following arguments are non-options. - E.g., to dump a file called `-f', use h5dump -- -f. | -
file | -The file to be examined. | -
D | -which file driver to use in opening the - file. Acceptable values are "sec2", "family", "split", - "multi", and "stream". Without the file driver flag the - file will be opened with each driver in turn and in the - order specified above until one driver succeeds in - opening the file. |
P | -The full path from the root group to - the object |
T | -The name of the datatype |
F | -A filename |
N | -An integer greater than 1 |
L | -A list of integers, the number of which is - equal to the number of dimensions in the dataspace being - queried |
U | -A URI (as defined in - [IETF RFC 2396], - updated by - [IETF RFC 2732]) - that refers to the DTD to be used to validate the XML |
Subsetting parameters can also be expressed in a convenient
- compact form, as follows:
-
-
- --dataset="/foo/mydataset[START;STRIDE;COUNT;BLOCK]"
-
- All of the semicolons (;
) are required, even when
- a parameter value is not specified.
- When not specified, default parameter values are used.
-
-
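The per-dimension meaning of the START;STRIDE;COUNT;BLOCK parameters follows the H5Sselect_hyperslab semantics; a minimal sketch of the selection they describe along one dimension (illustrative Python, not h5dump's own code):

```python
# Sketch of the per-dimension hyperslab selection described by the
# START;STRIDE;COUNT;BLOCK subsetting parameters (this mirrors the
# H5Sselect_hyperslab semantics; it is not h5dump's implementation).
def hyperslab_indices(start, stride, count, block):
    """Return the selected indices along one dimension:
    COUNT blocks of BLOCK consecutive elements, with the start of
    each block spaced STRIDE elements apart."""
    return [start + i * stride + j
            for i in range(count)
            for j in range(block)]

# For example, [1;3;2;2] selects 2 blocks of 2 elements,
# starting at index 1, with the blocks 3 elements apart:
print(hyperslab_indices(start=1, stride=3, count=2, block=2))  # [1, 2, 4, 5]
```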
|
h5dump
displays the
- following information:
-
- |
h5ls
- [
OPTIONS]
- file
- [
OBJECTS...]
-h5ls
prints selected information about file objects
- in the specified format.
--h or -? or
- --help |
- Print a usage message and exit. |
-a or
- --address |
- Print addresses for raw data. |
-d or --data |
- Print the values of datasets. |
-e or
- --errors |
- Show all HDF5 error reporting. |
-f or
- --full |
- Print full path names instead of base names. |
-g or
- --group |
- Show information about a group, not its contents. |
-l or
- --label |
- Label members of compound datasets. |
-r or --recursive |
- List all groups recursively, avoiding cycles. |
-s or
- --string |
- Print 1-byte integer datasets as ASCII. |
-S or
- --simple |
- Use a machine-readable output format. |
-w N or
- --width= N |
- Set the number of columns of output. |
-v or
- --verbose |
- Generate more verbose output. |
-V or
- --version |
- Print version number and exit. |
-x or
- --hexdump |
- Show raw data in hexadecimal format. |
file | -The file name may include a printf(3C) integer format
- such as %05d to open a file family. |
objects | -Each object consists of an HDF5 file name optionally
- followed by a slash and an object name within the file
- (if no object is specified within the file then the
- contents of the root group are displayed). The file name
- may include a printf(3C) integer format such
- as "%05d" to open a file family. |
h5diff
file1 file2
- [OPTIONS]
- [object1 [object2 ] ]
-h5diff
is a command line tool that compares
- two HDF5 files, file1 and file2, and
- reports the differences between them.
-
- Optionally, h5diff
will compare two objects
- within these files.
- If only one object, object1, is specified,
- h5diff
will compare
- object1 in file1
- with object1 in file2.
- In two objects, object1 and object2,
- are specified, h5diff
will compare
- object1 in file1
- with object2 in file2.
- These objects must be HDF5 datasets.
-
- object1 and object2 must be expressed - as absolute paths from the respective file's root group. -
- h5diff
has the following four modes of output:
- Normal mode: print the number of differences found and where they occurred
- Report mode (-r): print the above plus the differences
- Verbose mode (-v): print the above plus a list of objects and warnings
- Quiet mode (-q): do not print output (h5diff always returns an exit code of
- 1 when differences are found).
-
- Additional information, with several sample cases, - can be found in the document - - H5diff Examples. -
file1 | -|
file2 | -The HDF5 files to be compared. |
-h |
- Print a help message. |
-r |
- Report mode. Print the differences. |
-v |
- Verbose mode. Print the differences, list of objects, warnings. | -
-q |
- Quiet mode. Do not print output. | -
-n count |
- Print difference up to count - differences, then stop. count must be a positive integer. |
-d delta |
- Print only differences that are greater than the
- limit delta. delta must be a positive number.
- The comparison criterion is whether the absolute value of the
- difference of two corresponding values is greater than
- delta
- (e.g., |a–b| > delta ,
- where a is a value in file1 and
- b is a value in file2). |
-p relative |
- Print only differences that are greater than a
- relative error. relative must be a positive number.
- The comparison criterion is whether the absolute value of the
- difference 1 and the ratio of two corresponding values
- is greater than relative
- (e.g., |1–(b/a)| > relative
- where a is a value in file1 and
- b is a value in file2). |
object1 | -|
object2 | -Specific object(s) within the files to be compared. |
h5diff
call compares
- the object /a/b
in file1
- with the object /a/c
in file2
: h5diff file1 file2 /a/b /a/c
- h5diff
call compares
- the object /a/b
in file1
- with the same object in file2
: h5diff file1 file2 /a/b
- h5diff
call compares
- all objects in both files: h5diff file1 file2
-
-h5repack
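The -d and -p comparison criteria described above can be sketched as follows (illustrative Python, not h5diff's source):

```python
# Sketch of the h5diff comparison criteria described above
# (an illustration only, not h5diff's actual code).
def differs_delta(a, b, delta):
    """-d: report a difference when |a - b| > delta."""
    return abs(a - b) > delta

def differs_relative(a, b, relative):
    """-p: report a difference when |1 - (b/a)| > relative,
    i.e. when the values differ by more than the given ratio."""
    return abs(1 - b / a) > relative

print(differs_delta(10.0, 10.4, 0.5))      # False: |10 - 10.4| <= 0.5
print(differs_relative(10.0, 12.0, 0.1))   # True:  |1 - 12/10| > 0.1
```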
-i file1-o file2 [-h] [-v] [-f
- 'filter'] [-l 'layout'][-m number][-e file]
-h5repack
is a command line tool that applies HDF5 filters
- to an input file file1, saving the output in a new file, file2.'filter'
- is a string with the format
- <list of objects> : <name of filter> = <filter
- parameters>.
-
- <list of objects> is a comma separated list of object names
- meaning apply compression only to those objects. If no object names are
- specified, the filter is applied to all objects
- <name of filter> can be:
- GZIP, to apply the HDF5 GZIP filter (GZIP compression)
- SZIP, to apply the HDF5 SZIP filter (SZIP compression)
- SHUF, to apply the HDF5 shuffle filter
- FLET, to apply the HDF5 checksum filter
- NONE, to remove the filter
- <filter parameters> is optional compression info
- SHUF (no parameter)
- FLET (no parameter)
- GZIP=<deflation level> from 1-9
- SZIP=<pixels per block,coding> (pixels per block is a even number in
- 2-32 and coding method is 'EC' or 'NN')
-
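The 'filter' string format above can be taken apart as in the following sketch (an illustration of the syntax only, not h5repack's actual parser; the name parse_filter is hypothetical):

```python
# Sketch of splitting a 'filter' string of the form
#   <list of objects>:<name of filter>=<filter parameters>
# where the object list and the parameters are both optional.
# Illustrative only; not h5repack's actual option parser.
def parse_filter(spec):
    objects, _, filt = spec.rpartition(":")
    name, _, params = filt.partition("=")
    return {
        # an empty object list means "apply to all objects"
        "objects": objects.split(",") if objects else [],
        "filter": name,
        "params": params or None,   # SHUF/FLET/NONE take no parameters
    }

print(parse_filter("dset1:SZIP=8,NN"))
# {'objects': ['dset1'], 'filter': 'SZIP', 'params': '8,NN'}
print(parse_filter("GZIP=1"))
# {'objects': [], 'filter': 'GZIP', 'params': '1'}
```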
-h
- -f
filter
- -l
layout
- -v
- -e
file
- -d
delta
- |a–b| > delta
,
- where a
is a value in file1 and
- b
is a value in file2).-m
number
- 2) h5repack -i file1 -o file2 -f dset1:SZIP=8,NN -v
- Applies SZIP compression only
- to object 'dset1'
3) h5repack -i file1 -o file2 -l dset1,dset2:CHUNK=20x10 -v
- Applies chunked layout to
- objects 'dset1' and 'dset2'
-
-
h5repart
- [-v]
- [-V]
- [-[b|m]
N[g|m|k]]
- [-family_to_sec2]
- source_file
- dest_file
-h5repart
joins a family of files into a single file,
- or copies one family of files to another while changing the size
- of the family members. h5repart
can also be used to
- copy a single file to a single file with holes. At this stage,
- h5repart
cannot split a single non-family file into
- a family of file(s).
-
- To convert a family of file(s) to a single non-family file
- (sec2
file), the option -family_to_sec2
- has to be used.
-
- Sizes associated with the -b
and -m
- options may be suffixed with g
for gigabytes,
- m
for megabytes, or k
for kilobytes.
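The g/m/k size suffixes accepted by -b and -m can be interpreted as in this sketch (illustrative only; parse_size is a hypothetical helper, and the 1024-based multipliers are an assumption):

```python
# Sketch of the g/m/k size-suffix convention described above for the
# -b and -m options (illustrative; not h5repart's actual parser, and
# the binary 1024-based multipliers are an assumption).
SUFFIXES = {"g": 1024 ** 3, "m": 1024 ** 2, "k": 1024}

def parse_size(text):
    """Parse sizes such as '1g', '512m', '64k', or a plain byte count."""
    text = text.lower()
    if text[-1] in SUFFIXES:
        return int(text[:-1]) * SUFFIXES[text[-1]]
    return int(text)

print(parse_size("64k"))   # 65536
print(parse_size("1g"))    # 1073741824
```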
-
- File family names include an integer printf
- format such as %d
.
-
-
-v |
- Produce verbose output. |
-V |
- Print a version number and exit. |
-b N |
- The I/O block size, defaults to 1kB |
-m N |
- The destination member size or 1GB |
-family_to_sec2 |
- Convert file driver from family to sec2 |
source_file | -The name of the source file |
dest_file | -The name of the destination files |
h5import
- infile in_options
- [infile in_options ...]
- -o outfile
-
- h5import
- infile in_options
- [infile in_options ...]
- -outfile outfile
-
- h5import -h
- h5import -help
-h5import
converts data
- from one or more ASCII or binary files, infile
,
- into the same number of HDF5 datasets
- in the existing or new HDF5 file, outfile
.
- Data conversion is performed in accordance with the
- user-specified type and storage properties
- specified in in_options
.
-
- The primary objective of h5import
is to
- import floating point or integer data.
- The utility's design allows for future versions that
- accept ASCII text files and store the contents as a
- compact array of one-dimensional strings,
- but that capability is not implemented in HDF5 Release 1.6.
-
-
- Input data and options:
- Input data can be provided in one of the following forms:
-
infile
,
- contains a single n-dimensional
- array of values of one of the above types expressed
- in the order of fastest-changing dimensions first.
-
- Floating point data in an ASCII input file must be
- expressed in the fixed floating form (e.g., 323.56)
- h5import
is designed to accept scientific notation
- (e.g., 3.23E+02) in an ASCII, but that is not implemented in HDF5 release 1.6.
-
- Each input file can be associated with options specifying - the datatype and storage properties. - These options can be specified either as - command line arguments - or in a configuration file. - Note that exactly one of these approaches must be used with a - single input file. -
- Command line arguments, best used with simple input files, - can be used to specify - the class, size, dimensions of the input data and - a path identifying the output dataset. -
- The recommended means of specifying input data options - is in a configuration file; this is also the only means of - specifying advanced storage features. - See further discussion in "The configuration file" below. -
- The only required option for input data is dimension sizes; - defaults are available for all others. -
- h5import
will accept up to 30 input files in a single call.
- Other considerations, such as the maximum length of a command line,
- may impose a more stringent limitation.
-
-
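The "fastest-changing dimension first" ordering of input values described above can be sketched as an index-to-offset mapping. This is an illustrative Python sketch, not code from h5import:

```python
# Illustrative sketch (not h5import source): how element indices map to
# positions in the input stream when dimension sizes are listed with
# the fastest-changing dimension first.

def linear_offset(indices, dims_fastest_first):
    """Return the 0-based position of an element in the input stream.

    `dims_fastest_first` lists the dimension sizes fastest-changing
    first; `indices` is given in the same order.
    """
    offset = 0
    stride = 1
    for idx, size in zip(indices, dims_fastest_first):
        offset += idx * stride
        stride *= size
    return offset

# For a 2x3 array with dimensions "2 3" (fastest-changing first), the
# values appear in the order [0,0], [1,0], [0,1], [1,1], [0,2], [1,2].
order = sorted(((i, j) for i in range(2) for j in range(3)),
               key=lambda ij: linear_offset(ij, (2, 3)))
```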
- Output data and options:
- The name of the output file is specified following
- the -o
or -output
option
- in outfile
.
- The data from each input file is stored as a separate dataset
- in this output file.
- outfile
may be an existing file.
- If it does not yet exist, h5import
will create it.
-
- Output dataset information and storage properties can be - specified only by means of a configuration file. -
- Dataset path - | If the groups in the path leading to the dataset
- do not exist, h5import will create them.- If no group is specified, the dataset will be created - as a member of the root group. - If no dataset name is specified, the default name is - dataset1 for the first input dataset,
- dataset2 for the second input dataset,
- dataset3 for the third input dataset,
- etc.- h5import does not overwrite a pre-existing
- dataset of the specified or default name.
- When an existing dataset of a conflicting name is
- encountered, h5import quits with an error;
- the current input file and any subsequent input files
- are not processed.
- | |
- Output type - | Datatype parameters for output data - | |
- Output data class - | Signed or unsigned integer or floating point - | |
- Output data size - | 8-, 16-, 32-, or 64-bit integer - 32- or 64-bit floating point - | |
- Output architecture - | IEEE - STD - NATIVE (Default)- Other architectures are included in the h5import design
- but are not implemented in this release.
- | |
- Output byte order - | Little- or big-endian. - Relevant only if output architecture - is IEEE , UNIX , or STD ;
- fixed for other architectures.
- | |
- Dataset layout and storage - properties - | Denote how raw data is to be organized on the disk. - If none of the following are specified, - the default configuration is contiguous layout with no compression. - | |
- Layout - | Contiguous (Default) - Chunked - | |
- External storage - | Allows raw data to be stored in a non-HDF5 file or in an
- external HDF5 file. - Requires contiguous layout. - | |
- Compressed - | Sets the type of compression and the
- level to which the dataset must be compressed. - Requires chunked layout. - | |
- Extendable - | Allows the dimensions of the dataset increase over time
- and/or to be unlimited. - Requires chunked layout. - | |
- Compressed and - extendable - | Requires chunked layout. - | |
- - | - |
- -
- Command-line arguments:
- The h5import
syntax for the command-line arguments,
- in_options
, is as follows:
-
- h5import infile -d dim_list
- [-p pathname]
- [-t input_class]
- [-s input_size]
- [infile ...]
- -o outfile - or - h5import infile -dims dim_list
- [-path pathname]
- [-type input_class]
- [-size input_size]
- [infile ...]
- -outfile outfile - or - h5import infile -c config_file
- [infile ...]
- -outfile outfile
- |
- If the -c config_file
option is used with
- an input file, no other argument can be used with that input file.
- If the -c config_file
option is not used with
- an input data file, the -d dim_list
argument
- (or -dims dim_list
)
- must be used and any combination of the remaining options may be used.
- Any arguments used must appear in exactly the order used
- in the syntax declarations immediately above.
-
-
- The configuration file:
- A configuration file is specified with the
- -c config_file
option:
-
- h5import infile -c config_file
- [infile -c config_file2 ...]
- -outfile outfile
- |
- The configuration file is an ASCII file and must be
- organized as "Configuration_Keyword Value" pairs,
- with one pair on each line.
- For example, the line indicating that
- the input data class (configuration keyword INPUT-CLASS
)
- is floating point in a text file (value TEXTFP
)
- would appear as follows:
- INPUT-CLASS TEXTFP
-
- A configuration file may have the following keywords, each
- followed by one of the defined values.
- One entry for each of the first two keywords,
- RANK
and DIMENSION-SIZES
,
- is required; all other keywords are optional.
-
-
-
- Keyword Value
- | Description - | ||
---|---|---|---|
- RANK
- | The number of dimensions in the dataset. (Required) - | ||
- rank
- | An integer specifying the number of dimensions in the dataset. - Example: 4 for a 4-dimensional dataset.
- | ||
- DIMENSION-SIZES
- | Sizes of the dataset dimensions. (Required) - | ||
- dim_sizes
- | A string of space-separated integers
- specifying the sizes of the dimensions in the dataset.
- The number of sizes in this entry must match the value in
- the RANK entry.
- The fastest-changing dimension must be listed first.- Example: 4 3 4 38 for a 38x4x3x4 dataset.
- | ||
- PATH
- | Path of the output dataset. - | ||
- path
- | The full HDF5 pathname identifying the output dataset
- relative to the root group within the output file. - I.e., path is a string consisting of
- optional group names, each followed by a slash,
- and ending with a dataset name.
- If the groups in the path do not exist, they will be
- created.- If PATH is not specified, the output dataset
- is stored as a member of the root group and the
- default dataset name is
- dataset1 for the first input dataset,
- dataset2 for the second input dataset,
- dataset3 for the third input dataset, etc.- Note that h5import does not overwrite a
- pre-existing dataset of the specified or default name.
- When an existing dataset of a conflicting name is
- encountered, h5import quits with an error;
- the current input file and any subsequent input files
- are not processed.- Example: The configuration file entry -
dataset1 will
- be written in the group grp2/ which is in
- the group grp1/ ,
- a member of the root group in the output file.
- | ||
- INPUT-CLASS
- | A string denoting the type of input data. - | ||
- TEXTIN
- | Input is signed integer data in an ASCII file. - | ||
- TEXTUIN
- | Input is unsigned integer data in an ASCII file. - | ||
- TEXTFP
- | Input is floating point data in fixed notation (e.g., 325.34) - in an ASCII file. - | ||
- TEXTFPE
- | Input is floating point data in scientific notation (e.g., 3.2534E+02)
- in an ASCII file. - (Not implemented in this release.) - | ||
- IN
- | Input is signed integer data in a binary file. - | ||
- UIN
- | Input is unsigned integer data in a binary file. - | ||
- FP
- | Input is floating point data in a binary file. (Default) - | ||
- STR
- | Input is character data in an ASCII file.
- With this value, the configuration keywords
- RANK , DIMENSION-SIZES ,
- OUTPUT-CLASS , OUTPUT-SIZE ,
- OUTPUT-ARCHITECTURE , and OUTPUT-BYTE-ORDER
- will be ignored.- (Not implemented in this release.) - | ||
- INPUT-SIZE
- | An integer denoting the size of the input data, in bits. - | ||
- 8 - 16 - 32 - 64
- | For signed and unsigned integer data:
- TEXTIN , TEXTUIN ,
- IN , or UIN .
- (Default: 32 )
- | ||
- 32 - 64
- | For floating point data:
- TEXTFP , TEXTFPE ,
- or FP .
- (Default: 32 )
- | ||
- OUTPUT-CLASS
- | A string denoting the type of output data. - | ||
- IN
- | Output is signed integer data. - (Default if INPUT-CLASS is
- IN or TEXTIN )
- | ||
- UIN
- | Output is unsigned integer data. - (Default if INPUT-CLASS is
- UIN or TEXTUIN )
- | ||
- FP
- | Output is floating point data. - (Default if INPUT-CLASS is not specified or is
- FP , TEXTFP , or TEXTFPE )
- | ||
- STR
- | Output is character data,
- to be written as a 1-dimensional array of strings. - (Default if INPUT-CLASS is STR )- (Not implemented in this release.) - | ||
- OUTPUT-SIZE
- | An integer denoting the size of the output data, in bits. - | ||
- 8 - 16 - 32 - 64
- | For signed and unsigned integer data:
- IN or UIN .
- (Default: Same as INPUT-SIZE , else 32 )
- | ||
- 32 - 64
- | For floating point data:
- FP .
- (Default: Same as INPUT-SIZE , else 32 )
- | ||
- OUTPUT-ARCHITECTURE
- | A string denoting the type of output architecture. - | ||
- NATIVE - STD - IEEE - INTEL *- CRAY *- MIPS *- ALPHA *- UNIX *
- | See the "Predefined Atomic Types" section
- in the "HDF5 Datatypes" chapter
- of the HDF5 User's Guide
- for a discussion of these architectures. - Values marked with an asterisk (*) are not implemented in this release. - (Default: NATIVE )
- | ||
- OUTPUT-BYTE-ORDER
- | A string denoting the output byte order. - This entry is ignored if the OUTPUT-ARCHITECTURE
- is not specified or if it is not specified as IEEE ,
- UNIX , or STD .
- | ||
- BE
- | Big-endian. (Default) - | ||
- LE
- | Little-endian. - | ||
- The following options are disabled by default, making - the default storage properties no chunking, no compression, - no external storage, and no extensible dimensions. - | |||
- CHUNKED-DIMENSION-SIZES - | Dimension sizes of the chunk for chunked output data. - | ||
- chunk_dims
- | A string of space-separated integers specifying the
- dimension sizes of the chunk for chunked output data.
- The number of dimensions must correspond to the value
- of RANK .- The presence of this field indicates that the - output dataset is to be stored in chunked layout; - if this configuration field is absent, - the dataset will be stored in contiguous layout. - | ||
- COMPRESSION-TYPE
- | Type of compression to be used with chunked storage. - Requires that CHUNKED-DIMENSION-SIZES
- be specified.
- | ||
- GZIP
- | Gzip compression. - Other compression algorithms are not implemented - in this release of h5import .
- | ||
- COMPRESSION-PARAM
- | Compression level. - Required if COMPRESSION-TYPE is specified.
- | ||
- 1 through 9
- | Gzip compression levels:
- 1 will result in the fastest compression
- while 9 will result in the
- best compression ratio.- (Default: 6; not all compression methods have a default level.) - | ||
- EXTERNAL-STORAGE
- | Name of an external file in which to create the output dataset. - Cannot be used with CHUNKED-DIMENSION-SIZES ,
- COMPRESSION-TYPE , or MAXIMUM-DIMENSIONS .
- | ||
- external_file
-
-
-
- | A string specifying the name of an external file. - | ||
- MAXIMUM-DIMENSIONS
- | Maximum sizes of all dimensions. - Requires that CHUNKED-DIMENSION-SIZES be specified.
- | ||
- max_dims
- | A string of space-separated integers specifying the
- maximum size of each dimension of the output dataset.
- A value of -1 for any dimension implies
- unlimited size for that particular dimension.- The number of dimensions must correspond to the value - of RANK .- | ||
infile(s) |
- Name of the Input file(s). |
in_options |
- Input options. Note that while only the -dims argument
- is required, arguments must be used in the order in which they are listed below. |
-d dim_list |
- |
-dims dim_list |
- Input data dimensions.
- dim_list is a string of
- comma-separated numbers with no spaces
- describing the dimensions of the input data.
- For example, a 50 x 100 2-dimensional array would be
- specified as -dims 50,100 .- Required argument: if no configuration file is used, - this command-line argument is mandatory. |
-p pathname |
- |
-pathname pathname
- |
- pathname is a string consisting of
- one or more strings separated by slashes (/ )
- specifying the path of the dataset in the output file.
- If the groups in the path do not exist, they will be
- created.- Optional argument: if not specified, - the default path is - dataset1 for the first input dataset,
- dataset2 for the second input dataset,
- dataset3 for the third input dataset,
- etc.- h5import does not overwrite a pre-existing
- dataset of the specified or default name.
- When an existing dataset of a conflicting name is
- encountered, h5import quits with an error;
- the current input file and any subsequent input files
- are not processed. |
-t input_class |
- |
-type input_class |
- input_class specifies the class of the
- input data and determines the class of the output data.- Valid values are as defined in the Keyword/Values table - in the section "The configuration file" above. - Optional argument: if not specified, - the default value is FP . |
-s input_size |
- |
-size input_size |
- input_size specifies the size in bits
- of the input data and determines the size of the output data.- Valid values for signed or unsigned integers are - 8 , 16 , 32 , and 64 .- Valid values for floating point data are - 32 and 64 .- Optional argument: if not specified, - the default value is 32 . |
-c config_file |
- config_file specifies a
- configuration file.- This argument replaces all other arguments except - infile and
- -o outfile |
-h |
- |
-help |
-
- Prints the h5import usage summary:- h5import -h[elp], OR - Then exits. - |
outfile |
- Name of the HDF5 output file. |
- h5import infile -dims 2,3,4 -type TEXTIN -size 32 -o out1
- | |
- This command creates a file out1 containing
- a single 2x3x4 32-bit integer dataset.
- Since no pathname is specified, the dataset is stored
- in out1 as /dataset1 .
- | |
- h5import infile -dims 20,50 -path bin1/dset1 -type FP -size 64 -o out2
- | |
- This command creates a file out2 containing
- a single 20x50 64-bit floating point dataset.
- The dataset is stored in out2 as /bin1/dset1 .
- |
outfile
- at /work/h5/pkamat/First-set
.- PATH work/h5/pkamat/First-set - INPUT-CLASS TEXTFP - RANK 3 - DIMENSION-SIZES 5 2 4 - OUTPUT-CLASS FP - OUTPUT-SIZE 64 - OUTPUT-ARCHITECTURE IEEE - OUTPUT-BYTE-ORDER LE - CHUNKED-DIMENSION-SIZES 2 2 2 - MAXIMUM-DIMENSIONS 8 8 -1 -- - The next configuration file specifies the following:
NATIVE
format
- (as the output architecture is not specified).outfile
at /Second-set
.
- - PATH Second-set - INPUT-CLASS IN - RANK 5 - DIMENSION-SIZES 6 3 5 2 4 - OUTPUT-CLASS IN - OUTPUT-SIZE 32 - CHUNKED-DIMENSION-SIZES 2 2 2 2 2 - COMPRESSION-TYPE GZIP - COMPRESSION-PARAM 7 -- - - -
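A configuration file in the "Configuration_Keyword Value" format shown above is straightforward to read programmatically. The following Python sketch (a hypothetical reader, not h5import's actual parser) parses the second example configuration:

```python
# Illustrative sketch (not h5import source): parse the one-pair-per-line
# "Configuration_Keyword Value" format into a dictionary.

def parse_config(text):
    """Map each configuration keyword to its value string.

    Multi-number values such as DIMENSION-SIZES keep their full
    space-separated string.
    """
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue  # skip blank lines
        keyword, _, value = line.partition(" ")
        config[keyword] = value.strip()
    return config

# The second example configuration file from above:
example = """\
PATH Second-set
INPUT-CLASS IN
RANK 5
DIMENSION-SIZES 6 3 5 2 4
OUTPUT-CLASS IN
OUTPUT-SIZE 32
CHUNKED-DIMENSION-SIZES 2 2 2 2 2
COMPRESSION-TYPE GZIP
COMPRESSION-PARAM 7
"""
config = parse_config(example)
```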
gif2h5
- gif_file h5_file
-gif2h5
accepts as input the GIF file gif_file
- and produces the HDF5 file h5_file as output.
-
-gif_file | -The name of the input GIF file |
h5_file | -The name of the output HDF5 file |
h52gif
- h5_file gif_file
- -i
h5_image
- [-p
h5_palette]
-h52gif
accepts as input the HDF5 file h5_file
- and the names of images and associated palettes within that file
- and produces the GIF file gif_file,
- containing those images, as output.
-
- h52gif
expects at least
- one h5_image.
- You may repeat
-
-
- -i
h5_image
- [-p
h5_palette]
-
- up to 50 times, for a maximum of 50 images.
-
-
h5_file | -The name of the input HDF5 file |
gif_file | -The name of the output GIF file |
-i h5_image |
- Image option, specifying the name of an HDF5 image or - dataset containing an image to be converted |
-p h5_palette |
- Palette option, specifying the name of an HDF5 dataset - containing a palette to be used in an image conversion |
h5toh4 -h
h5toh4
- h5file
- h4fileh5toh4
- h5fileh5toh4 -m
- h5file1
- h5file2
- h5file3 ...
-h5toh4
is an HDF5 utility which reads
- an HDF5 file, h5file, and converts all
- supported objects and pathways to produce an HDF4 file,
- h4file. If h4file already exists,
- it will be replaced.
-
- If only one file name is given, the name must end in
- .h5
and is assumed to represent the
- HDF5 input file. h5toh4
replaces the
- .h5
suffix with .hdf
to form
- the name of the resulting HDF4 file and proceeds as above.
- If a file with the name of the intended HDF4 file already
- exists, h5toh4
exits with an error without
- changing the contents of any file.
-
- The -m
option allows multiple HDF5 file
- arguments. Each file name is treated the same as the
- single file name case above.
-
- The -h
option causes the following
- syntax summary to be displayed:
-
h5toh4 file.h5 file.hdf - h5toh4 file.h5 - h5toh4 -m file1.h5 file2.h5 ...- -
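The single-argument naming rule described above (the input name must end in .h5, and that suffix is replaced with .hdf) can be sketched as follows. This is an illustrative Python sketch of the rule, not h5toh4's actual code:

```python
# Illustrative sketch (not h5toh4 source): derive the HDF4 output file
# name from a single HDF5 input file name.

def hdf4_name(h5name):
    """Replace a trailing ".h5" suffix with ".hdf"."""
    if not h5name.endswith(".h5"):
        raise ValueError("input file name must end in .h5")
    return h5name[:-len(".h5")] + ".hdf"
```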
- - The following HDF5 objects occurring in an HDF5 file are - converted to HDF4 objects in the HDF4 file: - -
- Attributes associated with any of the supported HDF5 - objects are carried over to the HDF4 objects. - Attributes may be of integer, floating point, or fixed length - string datatype and they may have up to 32 fixed dimensions. -
- All datatypes are converted to big-endian. - Floating point datatypes are converted to IEEE format. - -
h5toh4
and h4toh5
utilities
- are no longer part of the HDF5 product;
- they are distributed separately through the page
-
- Converting between HDF (4.x) and HDF5.
-
-
--h |
- Displays a syntax summary. |
-m |
- Converts multiple HDF5 files to multiple HDF4 files. |
h5file | -The HDF5 file to be converted. |
h4file | -The HDF4 file to be created. |
h4toh5 -h
h4toh5
- h4file
- h5fileh4toh5
- h4fileh4toh5
is a file conversion utility that reads
- an HDF4 file, h4file (input.hdf
for example),
- and writes an HDF5 file, h5file (output.h5
- for example), containing the same data.
-
- If no output file h5file is specified,
- h4toh5
uses the input filename to designate
- the output file, replacing the extension .hdf
- with .h5
.
- For example, if the input file scheme3.hdf
is
- specified with no output filename, h4toh5
will
- name the output file scheme3.h5
.
-
-
- The -h
option causes a syntax summary
- similar to the following to be displayed:
-
h4toh5 inputfile.hdf outputfile.h5 - h4toh5 inputfile.hdf-
- Each object in the HDF4 file is converted to an equivalent - HDF5 object, according to the mapping described in - - Mapping HDF4 Objects to HDF5 Objects. - (If this mapping changes between HDF5 Library releases, a more up-to-date - version may be available at - - Mapping HDF4 Objects to HDF5 Objects on the HDF FTP server.) -
- In this initial version, h4toh5
converts the following
- HDF4 objects:
-
- HDF4 Object - | - Resulting HDF5 Object - |
---|---|
- SDS - | - Dataset - |
- GR, RI8, and RI24 image - | - Dataset - |
- Vdata - | - Dataset - |
- Vgroup - | - Group - |
- Annotation - | - Attribute - |
- Palette - | - Dataset - |
h4toh5
and h5toh4
utilities
- are no longer part of the HDF5 product;
- they are distributed separately through the page
-
- Converting between HDF (4.x) and HDF5.
-
--h |
- Displays a syntax summary. |
h4file | -The HDF4 file to be converted. |
h5file | -The HDF5 file to be created. |
h5perf
[-h
| --help
]
- h5perf
[options]
-
-
-h5perf
provides tools for testing the performance
- of the Parallel HDF5 library.
-
- The following environment variables affect
- h5perf
behavior:
-
- HDF5_NOCLEANUP |
- If set, h5perf does not remove data files.
- (Default: Remove) | |
- HDF5_MPI_INFO |
- Must be set to a string containing a list of semicolon-separated
- key=value pairs for the MPI INFO object.- Example: | |
- HDF5_PARAPREFIX | - Sets the prefix for parallel output data files. |
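The HDF5_MPI_INFO format above (semicolon-separated key=value pairs) can be sketched as follows. This is an illustrative Python sketch, not h5perf source, and the hint names in the example are hypothetical:

```python
# Illustrative sketch (not h5perf source): split an HDF5_MPI_INFO-style
# string of semicolon-separated key=value pairs into the entries that
# would populate an MPI Info object.

def parse_mpi_info(spec):
    info = {}
    for pair in spec.split(";"):
        pair = pair.strip()
        if not pair:
            continue  # tolerate empty segments
        key, _, value = pair.partition("=")
        info[key.strip()] = value.strip()
    return info

# Hypothetical example value:
info = parse_mpi_info("IBM_largeblock_io=true;striping_factor=4")
```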
These terms are used as follows in this section: | |
file | -A filename |
size | -A size specifier, expressed as an integer
- greater than or equal to 0 (zero) followed by a size indicator: - K for kilobytes (1024 bytes)- M for megabytes (1048576 bytes)- G for gigabytes (1073741824 bytes)- Example: 37M specifies 37 megabytes or 38797312 bytes. |
N | -An integer greater than or equal to 0 (zero) |
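The size specifier format above (a non-negative integer followed by K, M, or G) can be sketched as follows; this is an illustrative Python sketch, not h5perf's actual argument parser:

```python
# Illustrative sketch (not h5perf source): parse a size specifier such
# as "37M" into a byte count, using the K/M/G multipliers defined above.

def parse_size(spec):
    multipliers = {"K": 1024, "M": 1048576, "G": 1073741824}
    if spec and spec[-1] in multipliers:
        return int(spec[:-1]) * multipliers[spec[-1]]
    return int(spec)  # bare integer: a plain byte count

# "37M" specifies 37 megabytes, i.e., 38797312 bytes.
size = parse_size("37M")
```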
-h , --help |
- ||||||||||||||||||||||
- | Prints a usage message and exits. | -|||||||||||||||||||||
-a size, --align= size |
- ||||||||||||||||||||||
- | Specifies the alignment of objects in the HDF5 file. - (Default: 1) | -|||||||||||||||||||||
-A api_list, --api= api_list |
- ||||||||||||||||||||||
- | Specifies which APIs to test. api_list
- is a comma-separated list with the following valid values:
-
- Example: --api=mpiio,phdf5 specifies that the MPI I/O
- and Parallel HDF5 APIs are to be tested. |
- |||||||||||||||||||||
-B size, --block-size= size |
- ||||||||||||||||||||||
- | Specifies the block size within the transfer
- buffer. (Default: 128K) - Block size versus transfer buffer size: The transfer buffer size - is the size of a buffer in memory. The data in that buffer is broken - into block size pieces and written to the file. - Transfer block size is set by the -x (or --min-xfer-size )
- and -X (or --max-xfer-size ) options.- The pattern in which the blocks are written to the file is described - in the discussion of the -I (or --interleaved )
- option. |
- |||||||||||||||||||||
-c , --chunk |
- ||||||||||||||||||||||
- | Creates HDF5 datasets in chunked layout. (Default: - Off) | -|||||||||||||||||||||
-C , --collective |
- ||||||||||||||||||||||
- | Uses collective I/O for the MPI I/O and
- Parallel HDF5 APIs. - (Default: Off, i.e., independent I/O) - If this option is set and the MPI-I/O and PHDF5 APIs are in use, all - the blocks in each transfer buffer will be written at once with an - MPI derived type. |
- |||||||||||||||||||||
-d N, --num-dsets N |
- ||||||||||||||||||||||
- | Sets the number of datasets per file. (Default: 1 ) |
- |||||||||||||||||||||
-D debug_flags, --debug= debug_flags |
- ||||||||||||||||||||||
- | Sets the debugging level. debug_flags
- is a comma-separated list of debugging flags with the following valid
- values:
-
- Example: --debug=2,r,t specifies to run a moderate level
- of debugging while collecting raw data I/O throughput information
- and verifying the correctness of the data. |
- |||||||||||||||||||||
-e size, --num-bytes= size |
- ||||||||||||||||||||||
- | Specifies the number of bytes per process per dataset.
- (Default: 256K ) |
- |||||||||||||||||||||
-F N, --num-files= N |
- ||||||||||||||||||||||
- | Specifies the number of files. (Default: 1 ) |
- |||||||||||||||||||||
-i N, --num-iterations= N |
- ||||||||||||||||||||||
- | Sets the number of iterations to perform. (Default:
- 1 ) |
-
-I , --interleaved |
- |
- | Sets interleaved block I/O. - (Default: Contiguous block I/O) - Interleaved vs. Contiguous blocks in a parallel environment: - When contiguous blocks are written to a dataset, the dataset is divided - into m regions, where m is the number of processes - writing separate portions of the dataset. Each process then writes - data to its own region. When interleaved blocks are written to a dataset, - space for the first block of the first process is allocated in the - dataset, then space is allocated for the first block of the second - process, etc., until space has been allocated for the first block - of each process. Space is then allocated for the second block of the - first process, the second block of the second process, etc. - For example, in the case of a 4 process run with 1M bytes-per-process, - 256K transfer buffer size, and 64KB block size, 16 contiguous - blocks per process would be written to the file in the manner - 1111111111111111222222222222222233333333333333334444444444444444 - while 16 interleaved blocks per process would be written to the file - as 1234123412341234123412341234123412341234123412341234123412341234 - If collective I/O is turned on, all of the four blocks per transfer - buffer will be written in one collective I/O call. |
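The two block orderings described above can be sketched as follows, reproducing the 4-process, 16-blocks-per-process example. This is an illustrative Python sketch of the write patterns, not h5perf source:

```python
# Illustrative sketch (not h5perf source): the order in which per-process
# blocks land in the file under contiguous vs. interleaved block I/O.

def contiguous_order(nprocs, blocks_per_proc):
    """Each process writes all of its blocks to its own region."""
    return "".join(str(p) for p in range(1, nprocs + 1)
                   for _ in range(blocks_per_proc))

def interleaved_order(nprocs, blocks_per_proc):
    """Block k of every process is written before block k+1 of any."""
    return "".join(str(p) for _ in range(blocks_per_proc)
                   for p in range(1, nprocs + 1))
```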
-
-m , --mpi-posix |
- Sets use of MPI-posix driver for HDF5 I/O. (Default: - MPI-I/O driver) | -
-n , --no-fill |
- Suppresses writing of fill values to HDF5 datasets.
- This option is supported only in HDF5 Release 1.6 or later. - (Default: Off, i.e., write fill values) |
-
-o file, --output= file |
- Sets the output file for raw data to file. - (Default: None) | -
-p N, --min-num-processes= N |
- Sets the minimum number of processes to be used. (Default:
- 1 ) |
-
-P N, --max-num-processes= N |
- Sets the maximum number of processes to be used. - (Default: All MPI_COMM_WORLD processes) |
-
-T size, --threshold= size |
- Sets the threshold for alignment of objects in the
- HDF5 file. (Default: 1 ) |
-
-w , --write-only |
- Performs only write tests, not read tests. (Default: - Read and write tests) | -
-x size, --min-xfer-size= size |
- Sets the minimum transfer buffer size. (Default: 128K ) |
-
-X size, --max-xfer-size= size |
- Sets the maximum transfer buffer size. (Default: 1M ) |
-
h5redeploy
- [help
| -help
]
- h5redeploy
- [-echo
]
- [-force
]
- [-prefix=
dir]
- [-tool=
tool]
- [-show
]
-h5redeploy
updates the HDF5 compiler tools after
- the HDF5 software has been installed in a new location.
-
help , -help |
- Prints a help message. |
-echo |
- Shows all the shell commands executed. |
-force |
- Performs the requested action without offering any prompt - requesting confirmation. |
-prefix= dir |
- Specifies a new directory in which to find the
- HDF5 subdirectories lib/ and include/ .
- (Default: current working directory) |
-tool= tool |
- Specifies the tool to update. tool must
- be in the current directory and must be writable.
- (Default: h5cc ) |
-show |
- Shows all of the shell commands to be executed - without actually executing them. |
h5cc
- [
OPTIONS]
<compile line>
-h5cc
can be used in much the same way MPIch is used
- to compile an HDF5 program. It takes care of specifying where the
- HDF5 header files and libraries are on the command line.
-
- h5cc
supersedes all other compiler scripts in that
- if you've used them to compile the HDF5 library, then
- h5cc
also uses those scripts. For example, when
- compiling an MPIch program, you use the mpicc
- script. If you've built HDF5 using MPIch, then h5cc
- uses the MPIch program for compilation.
-
- Some programs use HDF5 in only a few modules. It isn't necessary
- to use h5cc
to compile those modules which don't use
- HDF5. In fact, since h5cc
is only a convenience
- script, you are still able to compile HDF5 modules in the normal
- way. In that case, you will have to specify the HDF5 libraries
- and include paths yourself.
h5cc
to compile the program
- hdf_prog
, which consists of modules
- prog1.c
and prog2.c
and uses the HDF5
- shared library, would be as follows:
-
- - # h5cc -c prog1.c - # h5cc -c prog2.c - # h5cc -shlib -o hdf_prog prog1.o prog2.o- -
-help |
- Prints a help message. |
-echo |
- Show all the shell commands executed. |
-prefix=DIR |
- Use the directory DIR to find the HDF5
- lib/ and include/ subdirectories.
- - Default: prefix specified when configuring HDF5. |
-show |
- Show the commands without executing them. |
-shlib |
- Compile using shared HDF5 libraries. |
-noshlib |
- Compile using static HDF5 libraries [default]. |
<compile line> | -The normal compile line options for your compiler.
- h5cc uses the same compiler you used to compile HDF5.
- Check your compiler's manual for more information on which
- options are needed. |
h5cc
.
-
- HDF5_CC |
- Use a different C compiler. |
HDF5_CLINKER |
- Use a different linker. |
HDF5_USE_SHLIB=[yes|no] |
- Use shared version of the HDF5 library [default: no]. |
h5fc
- [
OPTIONS]
<compile line>
-
- h5fc
can be used in much the same way MPIch is used
- to compile an HDF5 program. It takes care of specifying where the
- HDF5 header files and libraries are on the command line.
-
- h5fc
supersedes all other compiler scripts in that
- if you've used them to compile the HDF5 Fortran library, then
- h5fc
also uses those scripts. For example, when
- compiling an MPIch program, you use the mpif90
- script. If you've built HDF5 using MPIch, then h5fc
- uses the MPIch program for compilation.
-
- Some programs use HDF5 in only a few modules. It isn't necessary
- to use h5fc
to compile those modules which don't use
- HDF5. In fact, since h5fc
is only a convenience
- script, you are still able to compile HDF5 Fortran modules in the
- normal way. In that case, you will have to specify the HDF5 libraries
- and include paths yourself.
-
- An example of how to use h5fc
to compile the program
- hdf_prog
, which consists of modules
- prog1.f90
and prog2.f90
- and uses the HDF5 Fortran library, would be as follows:
-
- # h5fc -c prog1.f90 - # h5fc -c prog2.f90 - # h5fc -o hdf_prog prog1.o prog2.o- -
-help |
- Prints a help message. |
-echo |
- Show all the shell commands executed. |
-prefix=DIR |
- Use the directory DIR to find HDF5
- lib/ and include/ subdirectories
- - Default: prefix specified when configuring HDF5. |
-show |
- Show the commands without executing them. |
<compile line> | -The normal compile line options for your compiler.
- h5fc uses the same compiler you used
- to compile HDF5. Check your compiler's manual for
- more information on which options are needed. |
h5fc
.
- HDF5_FC |
- Use a different Fortran90 compiler. |
HDF5_FLINKER |
- Use a different linker. |
h5c++
- [
OPTIONS]
<compile line>
-
- h5c++
can be used in much the same way MPIch is used
- to compile an HDF5 program. It takes care of specifying where the
- HDF5 header files and libraries are on the command line.
-
- h5c++
supersedes all other compiler scripts in that
- if you've used one set of compiler scripts to compile the
- HDF5 C++ library, then h5c++
uses those same scripts.
- For example, when compiling an MPIch program,
- you use the mpiCC
script.
-
- Some programs use HDF5 in only a few modules. It isn't necessary
- to use h5c++
to compile those modules which don't use
- HDF5. In fact, since h5c++
is only a convenience
- script, you are still able to compile HDF5 C++ modules in the
- normal way. In that case, you will have to specify the HDF5 libraries
- and include paths yourself.
-
- An example of how to use h5c++
to compile the program
- hdf_prog
, which consists of modules
- prog1.cpp
and prog2.cpp
- and uses the HDF5 C++ library, would be as follows:
-
- # h5c++ -c prog1.cpp - # h5c++ -c prog2.cpp - # h5c++ -o hdf_prog prog1.o prog2.o- -
-help |
- Prints a help message. |
-echo |
- Show all the shell commands executed. |
-prefix=DIR |
- Use the directory DIR to find HDF5
- lib/ and include/ subdirectories
- - Default: prefix specified when configuring HDF5. |
-show |
- Show the commands without executing them. |
<compile line> - |
- The normal compile line options for your compiler.
- h5c++ uses the same compiler you used
- to compile HDF5. Check your compiler's manual for
- more information on which options are needed. |
h5c++
.
- HDF5_CXX |
- Use a different C++ compiler. |
HDF5_CXXLINKER |
- Use a different linker. |
-
-
-NCSA HDF5 (Hierarchical Data Format 5) Software Library and Utilities
-
-Copyright 1998, 1999, 2000, 2001 by the Board of Trustees of the University of Illinois
-
-All rights reserved.
-
- -Contributors: National Center for Supercomputing Applications (NCSA) at -the University of Illinois at Urbana-Champaign (UIUC), Lawrence Livermore -National Laboratory (LLNL), Sandia National Laboratories (SNL), Los Alamos -National Laboratory (LANL), Jean-loup Gailly and Mark Adler (gzip library). -
- -Redistribution and use in source and binary forms, with or without -modification, are permitted for any purpose (including commercial purposes) -provided that the following conditions are met: -
- -
-DISCLAIMER: -This work was prepared as an account of work sponsored by an agency of the -United States Government. Neither the United States Government nor the -University of California nor any of their employees, makes any warranty, -express or implied, or assumes any liability or responsibility for the -accuracy, completeness, or usefulness of any information, apparatus, -product, or process disclosed, or represents that its use would not -infringe privately-owned rights. Reference herein to any specific -commercial products, process, or service by trade name, trademark, -manufacturer, or otherwise, does not necessarily constitute or imply its -endorsement, recommendation, or favoring by the United States Government -or the University of California. The views and opinions of authors -expressed herein do not necessarily state or reflect those of the United -States Government or the University of California, and shall not be used -for advertising or product endorsement purposes. -
-
-
harry
that is
- a member of a group called dick
, which, in turn, is a
- member of the root group.
-
- /dick/harry
- H5Fcreate
and H5Fclose
.
- -
hdf5.h
must be included because it contains definitions
- and declarations used by the library.
- -
-
H5T_IEEE_F32LE
- - 4-byte little-endian, IEEE floating point H5T_NATIVE_INT
- - native integer
- -
-
-
-
DATASPACE { SIMPLE (4 , 6 ) / ( 4 , 6 ) }
- dset
has a simple dataspace
- with the current dimensions (4,6) and the maximum size of the
- dimensions (4,6).
- -
-
moo
in the group
- boo
, which is in the group foo
,
- which, in turn, is in the root group.
- How would you specify an absolute name to access this dataset?
-
- /foo/boo/moo
- moo
described in
- the previous section (Section 9, question 2) using a
- relative name.
- Describe a way to access the same dataset using an absolute name.
-
- /foo
and get the group ID.
- Access the group boo
using the group ID obtained
- in Step 1.
- Access the dataset moo
using the group ID obtained
- in Step 2.
- -gid = H5Gopen (file_id, "/foo"); /* absolute path */ -gid1 = H5Gopen (gid, "boo"); /* relative path */ -did = H5Dopen (gid1, "moo"); /* relative path */- -
/foo
and get the group ID.
- Access the dataset boo/moo
with the group ID
- just obtained.
- -gid = H5Gopen (file_id, "/foo", 0); /* absolute path */ -did = H5Dopen (gid, "boo/moo"); /* relative path */- -
-did = H5Dopen (file_id, "/foo/boo/moo"); /* absolute path */-
-The HDF5 library provides several interfaces, or APIs. -These APIs provide routines for creating, accessing, and manipulating -HDF5 files and objects. -
-The library itself is implemented in C. To facilitate the work of -FORTRAN90 and Java programmers, HDF5 function wrappers have been developed -in each of these languages. -At the time of this writing, a set of C++ wrappers is in development. -This tutorial discusses the use of the C functions and the FORTRAN wrappers. -
-All C routines in the HDF5 library begin with a prefix of the form H5*, -where * is one or two uppercase letters indicating the type of object on which the -function operates. -The FORTRAN wrappers come in the form of subroutines that begin with -h5 and end with _f. The APIs are listed below: -
-
- API
- |
-
- DESCRIPTION
- |
-
- H5
- |
- Library Functions: general-purpose H5 functions | -
- H5A
- |
- Annotation Interface: attribute access and manipulation - routines | -
- H5D
- |
- Dataset Interface: dataset access and manipulation - routines | -
- H5E
- |
- Error Interface: error handling routines | -
- H5F
- |
- File Interface: file access routines | -
- H5G
- |
- Group Interface: group creation and operation routines | -
- H5I
- |
- Identifier Interface: identifier routines | -
- H5P
- |
- Property List Interface: object property list manipulation - routines | -
- H5R
- |
- Reference Interface: reference routines | -
- H5S
- |
- Dataspace Interface: dataspace definition and access - routines | -
- H5T
- |
- Datatype Interface: datatype creation and manipulation - routines | -
- H5Z
- |
- Compression Interface: compression routine(s) | -
-Compound datatypes must be built out of other datatypes. First, one -creates an empty compound datatype and specifies its total size. Then -members are added to the compound datatype in any order. - -
-
h5_compound.c
-compound.f90
-Compound.java
-- -Field c : -1.0000 0.5000 0.3333 0.2500 0.2000 0.1667 0.1429 0.1250 0.1111 0.1000 - -Field a : -0 1 2 3 4 5 6 7 8 9 - -Field b : -0.0000 1.0000 4.0000 9.0000 16.0000 25.0000 36.0000 49.0000 64.0000 81.0000 - -+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ -- - -
H5Tcreate
creates a new datatype of the specified class with
-the specified number of bytes.
-- hid_t H5Tcreate ( H5T_class_t class, size_t size ) --
H5T_COMPOUND
datatype class is supported with this
-function.
--
H5Tinsert
adds a member to the compound datatype specified by
-type_id.
-- herr_t H5Tinsert ( hid_t type_id, const char * name, off_t offset, - hid_t field_id ) --
HOFFSET
macro to compute the offset of a member within
-a struct:
-- HOFFSET ( s, m ) --This macro computes the offset of member m within a struct -variable s. - -
-
H5Tclose
releases a datatype.
-- herr_t H5Tclose ( hid_t type_id ) --The type_id parameter is the identifier of the datatype to release. -
-HDF5 "SDScompound.h5" { -GROUP "/" { - DATASET "ArrayOfStructures" { - DATATYPE { - H5T_STD_I32BE "a_name"; - H5T_IEEE_F32BE "b_name"; - H5T_IEEE_F64BE "c_name"; - } - DATASPACE { SIMPLE ( 10 ) / ( 10 ) } - DATA { - { - [ 0 ], - [ 0 ], - [ 1 ] - }, - { - [ 1 ], - [ 1 ], - [ 0.5 ] - }, - { - [ 2 ], - [ 4 ], - [ 0.333333 ] - }, - { - [ 3 ], - [ 9 ], - [ 0.25 ] - }, - { - [ 4 ], - [ 16 ], - [ 0.2 ] - }, - { - [ 5 ], - [ 25 ], - [ 0.166667 ] - }, - { - [ 6 ], - [ 36 ], - [ 0.142857 ] - }, - { - [ 7 ], - [ 49 ], - [ 0.125 ] - }, - { - [ 8 ], - [ 64 ], - [ 0.111111 ] - }, - { - [ 9 ], - [ 81 ], - [ 0.1 ] - } - } - } -} -} -- - - - - -
-Attributes are small datasets that can be used to describe the nature and/or -the intended usage of the object they are attached to. In this section, we -show how to create, read, and write an attribute. -
-
-
- Creating an attribute is similar to creating a dataset. To create an - attribute, the application must specify the object which the attribute is - attached to, the datatype and dataspace of the attribute data, - and the attribute creation property list. -
- The steps to create an attribute are as follows: -
- To create and close an attribute, the calling program must use
-H5Acreate
/h5acreate_f
and
-H5Aclose
/h5aclose_f
. For example:
-
-C: -
- attr_id = H5Acreate (dset_id, attr_name, type_id, space_id, creation_prp); - status = H5Aclose (attr_id); --FORTRAN: -
- CALL h5acreate_f (dset_id, attr_nam, type_id, space_id, attr_id, & - hdferr, creation_prp=creat_plist_id) - or - CALL h5acreate_f (dset_id, attr_nam, type_id, space_id, attr_id, hdferr) - - CALL h5aclose_f (attr_id, hdferr) -- -
- Attributes may only be read or written as an entire object; no partial I/O is - supported. Therefore, to perform I/O operations on an attribute, the - application needs only to specify the attribute and the attribute's memory - datatype. -
- The steps to read or write an attribute are as follows. -
-To read and/or write an attribute, the calling program must contain the
-H5Aread
/h5aread_f
and/or
-H5Awrite
/h5awrite_f
routines. For example:
-
-C: -
- status = H5Aread (attr_id, mem_type_id, buf); - status = H5Awrite (attr_id, mem_type_id, buf); --FORTRAN: -
- CALL h5awrite_f (attr_id, mem_type_id, buf, hdferr) - CALL h5aread_f (attr_id, mem_type_id, buf, hdferr) --
-
dset.h5
in C
-(dsetf.h5
in FORTRAN),
-obtains the identifier of the dataset /dset
,
-defines the attribute's dataspace, creates the dataset attribute, writes
-the attribute, and then closes the attribute's dataspace, attribute, dataset,
-and file. h5_crtatt.c
attrexample.f90
CreateAttribute.java
H5Acreate
/h5acreate_f
creates an attribute
- which is attached to the object specified by the first parameter,
- and returns an identifier.
--C: -
- hid_t H5Acreate (hid_t obj_id, const char *name, hid_t type_id, - hid_t space_id, hid_t creation_prp) --FORTRAN: -
- h5acreate_f (obj_id, name, type_id, space_id, attr_id, & - hdferr, creation_prp) - - obj_id INTEGER(HID_T) - name CHARACTER(LEN=*) - type_id INTEGER(HID_T) - space_id INTEGER(HID_T) - attr_id INTEGER(HID_T) - hdferr INTEGER - (Possible values: 0 on success and -1 on failure) - creation_prp INTEGER(HID_T), OPTIONAL - --
-
-
-
-
H5P_DEFAULT
in C (H5P_DEFAULT_F
in FORTRAN)
- specifies the default creation property list.
- This parameter is optional in FORTRAN; when it is omitted,
- the default creation property list is used.
--
-
H5Awrite
/h5awrite_f
writes the entire attribute,
- and returns the status of the write.
--C: -
- herr_t H5Awrite (hid_t attr_id, hid_t mem_type_id, void *buf) --FORTRAN: -
- - h5awrite_f (attr_id, mem_type_id, buf, hdferr) - - attr_id INTEGER(HID_T) - mem_type_id INTEGER(HID_T) - buf TYPE(VOID) - hdferr INTEGER - (Possible values: 0 on success and -1 on failure) - -
-
-
-
-
H5Aclose
/h5aclose_f
must be called
- to release the attribute from use.
- The C routine returns a non-negative value if successful;
- otherwise it returns a negative value.
- In FORTRAN, the return value is in the hdferr parameter:
- 0 if successful, -1 otherwise.
--C: -
- herr_t H5Aclose (hid_t attr_id) -- -FORTRAN: -
- h5aclose_f (attr_id, hdferr) - - attr_id INTEGER(HID_T) - hdferr INTEGER - (Possible values: 0 on success and -1 on failure) - --
H5Aclose
/h5aclose_f
call is mandatory.
-
-The contents of dset.h5
(dsetf.h5
for FORTRAN) and the
-attribute definition are shown below:
-
-Fig. 7.1a dset.h5
in DDL
-
-
-HDF5 "dset.h5" { -GROUP "/" { - DATASET "dset" { - DATATYPE { H5T_STD_I32BE } - DATASPACE { SIMPLE ( 4, 6 ) / ( 4, 6 ) } - DATA { - 1, 2, 3, 4, 5, 6, - 7, 8, 9, 10, 11, 12, - 13, 14, 15, 16, 17, 18, - 19, 20, 21, 22, 23, 24 - } - ATTRIBUTE "attr" { - DATATYPE { H5T_STD_I32BE } - DATASPACE { SIMPLE ( 2 ) / ( 2 ) } - DATA { - 100, 200 - } - } - } -} -} --Fig. 7.1b
dsetf.h5
in DDL
--HDF5 "dsetf.h5" { -GROUP "/" { - DATASET "dset" { - DATATYPE { H5T_STD_I32BE } - DATASPACE { SIMPLE ( 6, 4 ) / ( 6, 4 ) } - DATA { - 1, 7, 13, 19, - 2, 8, 14, 20, - 3, 9, 15, 21, - 4, 10, 16, 22, - 5, 11, 17, 23, - 6, 12, 18, 24 - } - ATTRIBUTE "attr" { - DATATYPE { H5T_STD_I32BE } - DATASPACE { SIMPLE ( 2 ) / ( 2 ) } - DATA { - 100, 200 - } - } - } -} -} -- - - - -
- - <attribute> ::= ATTRIBUTE "<attr_name>" { <datatype> - <dataspace> - <data> } - -- - - - -
-A dataset is a multidimensional array of data elements, together with -supporting metadata. To create a dataset, the application program must specify -the location at which to create the dataset, the dataset name, the datatype -and dataspace of the data array, and the dataset creation property list. -
-
- There are two categories of datatypes in HDF5: atomic and compound - datatypes. An atomic datatype is a datatype which cannot be - decomposed into smaller datatype units at the API level. - These include the integer, float, date and time, string, bitfield, and - opaque datatypes. - A compound datatype is a collection of one or more - atomic datatypes and/or small arrays of such datatypes. -
- Figure 5.1 shows the HDF5 datatypes. Some of the HDF5 predefined - atomic datatypes are listed in Figures 5.2a and 5.2b. - In this tutorial, we consider only HDF5 predefined integers. - For further information on datatypes, see - The Datatype Interface (H5T) in the - HDF5 User's Guide. -
- Fig 5.1 HDF5 datatypes -
- - +-- integer - +-- floating point - +---- atomic ----+-- date and time - | +-- character string - HDF5 datatypes --| +-- bitfield - | +-- opaque - | - +---- compound - -- -
-
- Fig. 5.2a Examples of HDF5 predefined datatypes
-
-
|
-
- Fig. 5.2b Examples of HDF5 predefined native datatypes
-
|
- Fig 5.3 HDF5 dataspaces -
- - +-- simple - HDF5 dataspaces --| - +-- complex - -- The dimensions of a dataset can be fixed (unchanging), or they may be - unlimited, which means that they are extensible. A dataspace can also - describe a portion of a dataset, making it possible to do partial I/O - operations on selections. - -
-In HDF5, datatypes and dataspaces are independent objects which are created -separately from any dataset that they might be attached to. Because of this, -the creation of a dataset requires definition of the datatype and dataspace. -In this tutorial, we use HDF5 predefined datatypes (integer) and consider -only simple dataspaces. Hence, only the creation of dataspace objects is -needed. -
- -To create an empty dataset (no data written) the following steps need to be -taken: -
-C: -
- space_id = H5Screate_simple (rank, dims, maxdims); - status = H5Sclose (space_id ); --FORTRAN: -
- CALL h5screate_simple_f (rank, dims, space_id, hdferr, maxdims=max_dims) - or - CALL h5screate_simple_f (rank, dims, space_id, hdferr) - - CALL h5sclose_f (space_id, hdferr) -- -To create a dataset, the calling program must contain calls to create -and close the dataset. For example: -
-C: -
- dset_id = H5Dcreate (hid_t loc_id, const char *name, hid_t type_id, - hid_t space_id, hid_t creation_prp); - status = H5Dclose (dset_id); --FORTRAN: -
- CALL h5dcreate_f (loc_id, name, type_id, space_id, dset_id, & - hdferr, creation_prp=creat_plist_id) - or - CALL h5dcreate_f (loc_id, name, type_id, space_id, dset_id, hdferr) - - CALL h5dclose_f (dset_id, hdferr) --If using the pre-defined datatypes in FORTRAN, then a call must -be made to initialize and terminate access to the pre-defined datatypes: -
- CALL h5init_types_f (hdferr) - CALL h5close_types_f (hdferr) --
h5init_types_f
must be called before any HDF5 library
-subroutine calls are made;
-h5close_types_f
must be called after the final HDF5 library
-subroutine call.
-See the programming example below for an illustration of the use of
-these calls.
-
--
dset.h5
in the C version
-(dsetf.h5
in Fortran), defines the dataset dataspace, creates a
-dataset which is a 4x6 integer array, and then closes the dataspace,
-the dataset, and the file. h5_crtdat.c
dsetexample.f90
CreateDataset.java
-H5Screate_simple
/h5screate_simple_f
-creates a new simple dataspace and returns a dataspace identifier.
--C: - hid_t H5Screate_simple (int rank, const hsize_t * dims, - const hsize_t * maxdims) -FORTRAN: - h5screate_simple_f (rank, dims, space_id, hdferr, maxdims) - - rank INTEGER - dims(*) INTEGER(HSIZE_T) - space_id INTEGER(HID_T) - hdferr INTEGER - (Valid values: 0 on success and -1 on failure) - maxdims(*) INTEGER(HSIZE_T), OPTIONAL --
-
H5Dcreate
/h5dcreate_f
creates a dataset
-at the specified location and returns a dataset identifier.
--C: - hid_t H5Dcreate (hid_t loc_id, const char *name, hid_t type_id, - hid_t space_id, hid_t creation_prp) -FORTRAN: - h5dcreate_f (loc_id, name, type_id, space_id, dset_id, & - hdferr, creation_prp) - - loc_id INTEGER(HID_T) - name CHARACTER(LEN=*) - type_id INTEGER(HID_T) - space_id INTEGER(HID_T) - dset_id INTEGER(HID_T) - hdferr INTEGER - (Valid values: 0 on success and -1 on failure) - creation_prp INTEGER(HID_T), OPTIONAL --
-
-
-
-
H5P_DEFAULT
in C and H5P_DEFAULT_F
in FORTRAN
- specify the default dataset creation property list.
- This parameter is optional in FORTRAN; if it is omitted,
- the default dataset creation property list will be used.
--
-
H5Dcreate
/h5dcreate_f
creates an empty array
-and initializes the data to 0.
--
H5Dclose
/h5dclose_f
must be called to release
-the resource used by the dataset. This call is mandatory.
--C: - hid_t H5Dclose (hid_t dset_id) -FORTRAN: - h5dclose_f (dset_id, hdferr) - - dset_id INTEGER(HID_T) - hdferr INTEGER - (Valid values: 0 on success and -1 on failure) --
dset.h5
(dsetf.h5
-for FORTRAN) are shown in Figure 5.4 and Figures 5.5a
-and 5.5b.
--
-Figure 5.4 Contents of dset.h5 ( dsetf.h5 )
- |
- - |
Figure 5.5a dset.h5 in DDL |
- Figure 5.5b dsetf.h5 in DDL |
-
- -HDF5 "dset.h5" { -GROUP "/" { - DATASET "dset" { - DATATYPE { H5T_STD_I32BE } - DATASPACE { SIMPLE ( 4, 6 ) / ( 4, 6 ) } - DATA { - 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0 - } - } -} -} -- |
-
- -HDF5 "dsetf.h5" { -GROUP "/" { - DATASET "dset" { - DATATYPE { H5T_STD_I32BE } - DATASPACE { SIMPLE ( 6, 4 ) / ( 6, 4 ) } - DATA { - 0, 0, 0, 0, - 0, 0, 0, 0, - 0, 0, 0, 0, - 0, 0, 0, 0, - 0, 0, 0, 0, - 0, 0, 0, 0 - } - } -} -} -- |
-
-Note in Figures 5.5a and 5.5b that H5T_STD_I32BE,
-a 32-bit Big Endian integer, is an HDF atomic datatype.
-
- Fig. 5.6 HDF5 Dataset Definition
-
-
-
-Dataset Definition in DDL
-The following is the simplified DDL dataset definition:
-
- <dataset> ::= DATASET "<dataset_name>" { <datatype>
- <dataspace>
- <data>
- <dataset_attribute>* }
-
- <datatype> ::= DATATYPE { <atomic_type> }
-
- <dataspace> ::= DATASPACE { SIMPLE <current_dims> / <max_dims> }
-
- <dataset_attribute> ::= <attribute>
-
-
-
-
-
-
-
-
- The National Center for Supercomputing Applications
- University of Illinois
- at Urbana-Champaign
-
-
-
-hdfhelp@ncsa.uiuc.edu
-
-
Last Modified: June 22, 2001
-
-
-
-
-
-
-
-
-
diff --git a/doc/html/Tutor/crtfile.html b/doc/html/Tutor/crtfile.html
deleted file mode 100644
index fc235c4..0000000
--- a/doc/html/Tutor/crtfile.html
+++ /dev/null
@@ -1,317 +0,0 @@
-
-
-An HDF5 file is a binary file containing scientific data and supporting -metadata. The primary types of objects stored in an HDF5 file, groups and -datasets, will be discussed in other sections of this tutorial. -
-To create a file, an application must specify a filename, file -access mode, file creation property list, and file access property list. -
-
H5F_ACC_TRUNC
specifies that if the file already exists,
- the current contents will be deleted so that the application can rewrite
- the file with new data.
- H5F_ACC_EXCL
specifies that the open is to fail if
- the file already exists.
- - Note that there are two different access modes for opening existing files: -
H5F_ACC_RDONLY
specifies that the application has
- read access but will not be allowed to write any data.
- H5F_ACC_RDWR
specifies that the application has
- read and write access.
- - For further information, see - The File Interface (H5F) section of the - HDF5 User's Guide and - the H5F: File Interface - section of the HDF5 Reference Manual. -
-
H5P_DEFAULT
, is used.
-- The user-block is a fixed-length block of data located at the beginning - of the file which is ignored by the HDF5 library. - The user-block may be used to store - any data or information found to be useful to applications. -
- For further information, see - The File Interface (H5F) section of the - HDF5 User's Guide. -
-
H5P_DEFAULT
,
- is used in this tutorial.
-- For further information, see - The File Interface (H5F) section of the - HDF5 User's Guide. -
-The steps to create and close an HDF5 file are as follows: -
-C:
- file_id = H5Fcreate (filename, access_mode, create_id, access_id); - status = H5Fclose (file_id); --FORTRAN:
- CALL h5fcreate_f (filename, access_mode, file_id, hdferr, & - creation_prp=create_id, access_prp=access_id) - or - CALL h5fcreate_f (filename, access_mode, file_id, hdferr) - - CALL h5fclose_f (file_id, hdferr) --In FORTRAN, the file creation property list,
creation_prp
,
-and file access property list, access_prp
,
-are optional parameters;
-they can be omitted if the default values are to be used.
--
file.h5
in the C version,
-filef.h5
in FORTRAN, and then closes the file.- -
h5_crtfile.c
fileexample.f90
CreateFile.java
-
-NOTE: To download a tar file of all of the examples, including
-a Makefile, please go to the References page.
-
-
-
-
-
-
-
-
-
-Remarks
-
-
-
-hdf5.h
contains definitions and declarations
- and must be included in any program that uses the HDF5 library.
-
In FORTRAN:
- The module HDF5
contains definitions and declarations
- and must be used in any program that uses the HDF5 library.
-H5Fcreate
/h5fcreate_f
creates
- an HDF5 file and returns the file identifier.
-
-C:
- hid_t H5Fcreate (const char *name, unsigned access_mode, hid_t creation_prp,
- hid_t access_prp)
-FORTRAN:
- h5fcreate_f (name, access_mode, file_id, hdferr, creation_prp, access_prp)
-
- name CHARACTER(LEN=*)
- access_mode INTEGER
- (Valid values: H5F_ACC_RDWR_F, H5F_ACC_RDONLY_F,
- H5F_ACC_TRUNC_F, H5F_ACC_EXCL_F, H5F_ACC_DEBUG_F)
- file_id INTEGER(HID_T)
- hdferr INTEGER
- (Valid values: 0 on success and -1 on failure)
- creation_prp INTEGER(HID_T), OPTIONAL
- (Default value: H5P_DEFAULT_F)
- access_prp INTEGER(HID_T), OPTIONAL
- (Default value: H5P_DEFAULT_F)
-
-
-
-
-H5F_ACC_TRUNC
(H5F_ACC_TRUNC_F
in FORTRAN)
- will truncate a file if it already exists.
-H5P_DEFAULT
indicates that the
- default file creation property list is to be used.
- This option is optional in FORTRAN; if it is omitted, the default file
- creation property list, H5P_DEFAULT_F
, is used.
-H5P_DEFAULT
 indicates that the
- default file access property list is to be used.
- This option is optional in FORTRAN; if it is omitted, the default file
- access property list, H5P_DEFAULT_F
, is used.
-H5Fclose
/h5fclose_f
- must be called to release the resources used by the file. This call
- is mandatory.
-
-C:
- herr_t H5Fclose (hid_t file_id)
-
-FORTRAN:
- h5fclose_f(file_id, hdferr)
-
-/
.
-File Contents
-The HDF team has developed tools for examining the contents of HDF5 files.
-The tool used in this tutorial is the HDF5 dumper, h5dump
,
-which displays the file contents in human-readable form.
-The output of h5dump
is an ASCII display formatted according
-to the HDF5 DDL grammar.
-This grammar is defined, using Backus-Naur Form, in the
-DDL in BNF for HDF5.
-
-To view the file contents, type: -
- h5dump <filename> -- -Figure 4.1 describes the file contents of
file.h5
(filef.h5
)
-using a directed graph.
-The directed graphs in this tutorial use an oval to represent an HDF5 group
-and a rectangle to represent an HDF5 dataset (none in this example).
-Arrows indicate the inclusion direction of the contents (none in this example).
-
-
-Fig. 4.1 Contents of file.h5
(filef.h5
)
-
- -- -Figure 4.2 is the text description of
file.h5
, as generated by
-h5dump
. The HDF5 file called file.h5
contains
-a group called /
, or the root group.
-(The file called filef.h5
,
-created by the FORTRAN version of the example, has the same output except
-that the filename shown is filef.h5
.)
-
- Fig. 4.2 file.h5
in DDL
-
- - HDF5 "file.h5" { - GROUP "/" { - } - } - -- - -
- Fig. 4.3 HDF5 File Definition -
- The following symbol definitions are used in the DDL: -
- - ::= defined as - <tname> a token with the name tname - <a> | <b> one of <a> or <b> - <a>* zero or more occurrences of <a> -- The simplified DDL for file definition is as follows: -
- <file> ::= HDF5 "<file_name>" { <root_group> } - - <root_group> ::= GROUP "/" { <group_attribute>* <group_member>* } - - <group_attribute> ::= <attribute> - - <group_member> ::= <group> | <dataset> -- - -
-An HDF5 group is a structure containing zero or more HDF5 objects. The two -primary HDF5 objects are groups and datasets. To create a group, the calling -program must: -
H5Gcreate
/h5gcreate_f
.
-To close the group, H5Gclose
/h5gclose_f
-must be called. For example:
--C: -
- group_id = H5Gcreate (loc_id, name, size_hint); - status = H5Gclose (group_id); --FORTRAN: -
- CALL h5gcreate_f (loc_id, name, group_id, error, size_hint=size) - or - CALL h5gcreate_f (loc_id, name, group_id, error) - - - CALL h5gclose_f (group_id, error) -- - -
-
group.h5
(groupf.h5
for FORTRAN),
-creates a group called MyGroup
in the root group,
-and then closes the group and file. h5_crtgrp.c
groupexample.f90
CreateGroup.java
-H5Gcreate
/h5gcreate_f
creates
- a new empty group, named MyGroup
and located in the
- root group, and returns a group identifier.
--C: -
- hid_t H5Gcreate (hid_t loc_id, const char *name, size_t size_hint) --FORTRAN: -
- h5gcreate_f (loc_id, name, group_id, hdferr, size_hint) - - loc_id INTEGER(HID_T) - name CHARACTER(LEN=*) - group_id INTEGER(HID_T) - hdferr INTEGER - (Possible values: 0 on success and -1 on failure) - size_hint INTEGER(SIZE_T), OPTIONAL - (Default value: OBJECT_NAMELEN_DEFAULT_F) - --
-
-
-
-
H5Gclose
/h5gclose_f
closes the group.
- This call is mandatory.
--C: -
- herr_t H5Gclose (hid_t group_id) --FORTRAN: -
- h5gclose_f (group_id, hdferr) - - group_id INTEGER(HID_T) - hdferr INTEGER - (Possible values: 0 on success and -1 on failure) - --
group.h5
and the
-definition of the group are shown below. (The FORTRAN program
-creates the HDF5 file groupf.h5
and the resulting DDL shows
-groupf.h5
in the first line.)
--
Fig. 8.1 The Contents of group.h5 .
- |
- - | -Fig. 8.2 group.h5 in DDL |
-
- | - |
- -HDF5 "group.h5" { -GROUP "/" { - GROUP "MyGroup" { - } -} -} -- |
-
-Recall that to create an HDF5 object, we have to specify the location where the
-object is to be created. This location is determined by the identifier of an HDF5
-object and the name of the object to be created. The name of the created
-object can be either an absolute name or a name relative to the specified
-identifier.
-In the previous example, we used the file identifier and the absolute name
-/MyGroup
to create a group.
-
-In this section, we discuss HDF5 names and show how to use absolute and -relative names. - -
/
) and the null terminator.
-A full name
-may be composed of any number of component names separated by slashes, with any
-of the component names being the special name .
(a dot or period).
-A name which begins with a slash is an absolute name which is accessed
-beginning with the root group of the file;
-all other names are relative names, and the named object is
-accessed beginning with the specified group.
-Multiple consecutive slashes in a full name are treated as single slashes
-and trailing slashes are not significant. A special case is the name /
(or
-equivalent) which refers to the root group.
--Functions which operate on names generally take a location identifier, which -can be either a file identifier or a group identifier, and perform the lookup -with respect to that location. -Several possibilities are described in the following table: - -
Location Type | -Object Name | -Description | -
File identifier | -
- /foo/bar |
- The object bar in group foo
- in the root group. |
-
Group identifier | -
- /foo/bar |
- The object bar in group foo in the
- root group of the file containing the specified group.
- In other words, the group identifier's only purpose is to
- specify a file. |
-
File identifier | -
- /
- |
- The root group of the specified file. | -
Group identifier | -
- /
- |
- The root group of the file containing the specified group. | -
Group identifier | -
- foo/bar |
- The object bar in group foo in
- the specified group. |
-
File identifier | -
- .
- |
- The root group of the file. | -
Group identifier | -
- .
- |
- The specified group. | -
Other identifier | -
- .
- |
- The specified object. | -
-
h5_crtgrpar.c
grpsexample.f90
CreateGroupAR.java
-H5Gcreate
/h5gcreate_f
creates a group at the
- location specified by a location identifier and a name.
- The location identifier can be a file identifier or a group identifier
- and the name can be relative or absolute.
--
H5Gcreate
/h5gcreate_f
creates the group
- MyGroup
in the root group of the specified file.
--
H5Gcreate
/h5gcreate_f
creates the group
- Group_A
in the group MyGroup
in the root group
- of the specified file. Note that the parent group (MyGroup
)
- already exists.
--
H5Gcreate
/h5gcreate_f
creates the group
- Group_B
in the specified group.
-
-Fig. 9.1 The Contents of groups.h5
- (groupsf.h5
for FORTRAN)
-
- -
- - - Fig. 9.2groups.h5
in DDL
- (for FORTRAN, the name in the first line is groupsf.h5
)
-- - HDF5 "groups.h5" { - GROUP "/" { - GROUP "MyGroup" { - GROUP "Group_A" { - } - GROUP "Group_B" { - } - } - } - } - -- - - -
H5Dcreate
/h5dcreate_f
-creates a dataset at the location specified by a location identifier and
-a name. Similar to H5Gcreate
/h5gcreate_f
,
-the location identifier can be a
-file identifier or a group identifier and the name can be
-relative or absolute. The location identifier and the name together determine
-the location where the dataset is to be created. If the location identifier
-and name refer to a group, then the dataset is created in that group.
-
-
-h5_crtgrpd.c
-grpdsetexample.f90
CreateGroupDataset.java
-
-Fig. 10.1 The Contents of groups.h5
- (groupsf.h5
for FORTRAN)
-
-- - - - Fig. 10.2a
groups.h5
in DDL
-- -HDF5 "groups.h5" { -GROUP "/" { - GROUP "MyGroup" { - GROUP "Group_A" { - DATASET "dset2" { - DATATYPE { H5T_STD_I32BE } - DATASPACE { SIMPLE ( 2, 10 ) / ( 2, 10 ) } - DATA { - 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, - 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 - } - } - } - GROUP "Group_B" { - } - DATASET "dset1" { - DATATYPE { H5T_STD_I32BE } - DATASPACE { SIMPLE ( 3, 3 ) / ( 3, 3 ) } - DATA { - 1, 2, 3, - 1, 2, 3, - 1, 2, 3 - } - } - } -} -} -- Fig. 10.2b
groupsf.h5
in DDL
-- -HDF5 "groupsf.h5" { -GROUP "/" { - GROUP "MyGroup" { - GROUP "Group_A" { - DATASET "dset2" { - DATATYPE { H5T_STD_I32BE } - DATASPACE { SIMPLE ( 10, 2 ) / ( 10, 2 ) } - DATA { - 1, 1, - 2, 2, - 3, 3, - 4, 4, - 5, 5, - 6, 6, - 7, 7, - 8, 8, - 9, 9, - 10, 10 - } - } - } - GROUP "Group_B" { - } - DATASET "dset1" { - DATATYPE { H5T_STD_I32BE } - DATASPACE { SIMPLE ( 3, 3 ) / ( 3, 3 ) } - DATA { - 1, 1, 1, - 2, 2, 2, - 3, 3, 3 - } - } - } -} -} -- - -
These files are Java versions of the example programs used in the
-HDF-5 tutorial:
-
http://hdf.ncsa.uiuc.edu/training/hdf5/
-
The examples here correspond to the examples explained in the first
-13 sections of the tutorial.
-
-
-
- |
-
-
- |
-
-
- |
-
-
- |
-
Lesson -4 | - -Create an HDF-5 file. | - -h5_crtfile.c | - -CreateFile.java | -
Lesson -5 | - -Create a Dataset in an HDF-5 file | - -h5_crtdat.c | - -CreateDataset.java | -
Lesson 6 | - -Write and Read data in a dataset | - -h5_rdwt.c | - -DatasetRdWt.java | -
Lesson -7 | - -Create an attribute. | - -h5_crtatt.c | - -CreateAttribute.java | -
Lesson -8 | - -Create a group. | - -h5_crtgrp.c | - -CreateGroup.java | -
Lesson -9 | - -Using Absolute and relative paths | - -h5_crtgrpar.c | - -CreateGroupAR.java | -
Lesson -10 | - -Create a dataset in a group. | - -h5_crtgrpd.c | - -CreateGroupDataset.java | -
Lesson -11 | - -Using Compound Datatypes | - -h5_compound.c | - -Compound.java | -
Lesson -12 | - -Selection of a hyperslab. | - -h5_hyperslab.c | - -Hyperslab.java | -
Lesson -13 | - -Selection of elements. | - -h5_copy.c | - -Copy.java | -
-
The Java tutorial programs try to stay close to the corresponding C -programs. The main function's structure is almost the same as in the C -program, with one call for each HDF5 library function. For example, where the C program has -a call to H5Fopen(), the Java program has a call to H5Fopen_wrap(). -
The wrapper functions call the HDF-5 library using the Java HDF-5 Interface -(JHI5). The HDF-5 C interface returns error codes; these are represented -by Java Exceptions in the JHI5. The wrapper function catches the exception -and prints a message. -
For example, the H5Fopen_wrap() method calls the JHI5, and catches -any exceptions which may occur: -
public static int H5Fopen_wrap (String name, int flags, int access_id) - { - int file_id = -1; // file identifier - try - { - // Create a new file using default file properties. - file_id = H5.H5Fopen (name, flags, access_id); - } - catch (HDF5Exception hdf5e) - { - System.out.println - ("DatasetRdWt.H5Fopen_wrap() with HDF5Exception: " - + hdf5e.getMessage()); - } - catch (Exception e) - { - System.out.println - ("DatasetRdWt.H5Fopen_wrap() with other Exception: " - + e.getMessage()); - } - return file_id; - }- -
-
hdfhelp@ncsa.uiuc.edu - - diff --git a/doc/html/Tutor/examples/java/runCompound.sh b/doc/html/Tutor/examples/java/runCompound.sh deleted file mode 100644 index ef2be38..0000000 --- a/doc/html/Tutor/examples/java/runCompound.sh +++ /dev/null @@ -1,17 +0,0 @@ -#!/bin/sh - -JH5INSTALLDIR=/afs/ncsa/projects/hdf/java/java2/mcgrath/arabica/New5 -HDF5LIB=/afs/ncsa/projects/hdf/release/prehdf5-1.2.1/SunOS_5.7/lib - -#make this relative to the source root... -PWD=/afs/ncsa.uiuc.edu/projects/hdf/java/java2/mcgrath/arabica/java-hdf5 -LIBDIR=$JH5INSTALLDIR"/lib" - -CLASSPATH=".:"$LIBDIR"/jhdf5.jar" - -LD_LIBRARY_PATH=$HDF5LIB":"$LIBDIR"/solaris" - -export CLASSPATH -export LD_LIBRARY_PATH - -/usr/java1.2/bin/java Compound $* diff --git a/doc/html/Tutor/examples/java/runCompound.sh.in b/doc/html/Tutor/examples/java/runCompound.sh.in deleted file mode 100644 index bc58088..0000000 --- a/doc/html/Tutor/examples/java/runCompound.sh.in +++ /dev/null @@ -1,17 +0,0 @@ -#!/bin/sh - -JH5INSTALLDIR=@JH5INST@ -HDF5LIB=@HDF5LIB@ - -#make this relative to the source root... -PWD=@PWD@ -LIBDIR=$JH5INSTALLDIR"/lib" - -CLASSPATH=".:"$LIBDIR"/jhdf5.jar" - -LD_LIBRARY_PATH=$HDF5LIB":"$LIBDIR"/@JAVATARG@" - -export CLASSPATH -export LD_LIBRARY_PATH - -@JAVA@ Compound $* diff --git a/doc/html/Tutor/examples/java/runCopy.sh b/doc/html/Tutor/examples/java/runCopy.sh deleted file mode 100644 index de71783..0000000 --- a/doc/html/Tutor/examples/java/runCopy.sh +++ /dev/null @@ -1,17 +0,0 @@ -#!/bin/sh - -JH5INSTALLDIR=/afs/ncsa/projects/hdf/java/java2/mcgrath/arabica/New5 -HDF5LIB=/afs/ncsa/projects/hdf/release/prehdf5-1.2.1/SunOS_5.7/lib - -#make this relative to the source root... 
-PWD=/afs/ncsa.uiuc.edu/projects/hdf/java/java2/mcgrath/arabica/java-hdf5 -LIBDIR=$JH5INSTALLDIR"/lib" - -CLASSPATH=".:"$LIBDIR"/jhdf5.jar" - -LD_LIBRARY_PATH=$HDF5LIB":"$LIBDIR"/solaris" - -export CLASSPATH -export LD_LIBRARY_PATH - -/usr/java1.2/bin/java Copy $* diff --git a/doc/html/Tutor/examples/java/runCopy.sh.in b/doc/html/Tutor/examples/java/runCopy.sh.in deleted file mode 100644 index 2fd8a46..0000000 --- a/doc/html/Tutor/examples/java/runCopy.sh.in +++ /dev/null @@ -1,17 +0,0 @@ -#!/bin/sh - -JH5INSTALLDIR=@JH5INST@ -HDF5LIB=@HDF5LIB@ - -#make this relative to the source root... -PWD=@PWD@ -LIBDIR=$JH5INSTALLDIR"/lib" - -CLASSPATH=".:"$LIBDIR"/jhdf5.jar" - -LD_LIBRARY_PATH=$HDF5LIB":"$LIBDIR"/@JAVATARG@" - -export CLASSPATH -export LD_LIBRARY_PATH - -@JAVA@ Copy $* diff --git a/doc/html/Tutor/examples/java/runCreateAttribute.sh b/doc/html/Tutor/examples/java/runCreateAttribute.sh deleted file mode 100644 index 419abce..0000000 --- a/doc/html/Tutor/examples/java/runCreateAttribute.sh +++ /dev/null @@ -1,17 +0,0 @@ -#!/bin/sh - -JH5INSTALLDIR=/afs/ncsa/projects/hdf/java/java2/mcgrath/arabica/New5 -HDF5LIB=/afs/ncsa/projects/hdf/release/prehdf5-1.2.1/SunOS_5.7/lib - -#make this relative to the source root... -PWD=/afs/ncsa.uiuc.edu/projects/hdf/java/java2/mcgrath/arabica/java-hdf5 -LIBDIR=$JH5INSTALLDIR"/lib" - -CLASSPATH=".:"$LIBDIR"/jhdf5.jar" - -LD_LIBRARY_PATH=$HDF5LIB":"$LIBDIR"/solaris" - -export CLASSPATH -export LD_LIBRARY_PATH - -/usr/java1.2/bin/java CreateAttribute $* diff --git a/doc/html/Tutor/examples/java/runCreateAttribute.sh.in b/doc/html/Tutor/examples/java/runCreateAttribute.sh.in deleted file mode 100644 index 83bcdc7..0000000 --- a/doc/html/Tutor/examples/java/runCreateAttribute.sh.in +++ /dev/null @@ -1,17 +0,0 @@ -#!/bin/sh - -JH5INSTALLDIR=@JH5INST@ -HDF5LIB=@HDF5LIB@ - -#make this relative to the source root... 
-PWD=@PWD@ -LIBDIR=$JH5INSTALLDIR"/lib" - -CLASSPATH=".:"$LIBDIR"/jhdf5.jar" - -LD_LIBRARY_PATH=$HDF5LIB":"$LIBDIR"/@JAVATARG@" - -export CLASSPATH -export LD_LIBRARY_PATH - -@JAVA@ CreateAttribute $* diff --git a/doc/html/Tutor/examples/java/runCreateDataset.sh b/doc/html/Tutor/examples/java/runCreateDataset.sh deleted file mode 100644 index 371e811..0000000 --- a/doc/html/Tutor/examples/java/runCreateDataset.sh +++ /dev/null @@ -1,17 +0,0 @@ -#!/bin/sh - -JH5INSTALLDIR=/afs/ncsa/projects/hdf/java/java2/mcgrath/arabica/New5 -HDF5LIB=/afs/ncsa/projects/hdf/release/prehdf5-1.2.1/SunOS_5.7/lib - -#make this relative to the source root... -PWD=/afs/ncsa.uiuc.edu/projects/hdf/java/java2/mcgrath/arabica/java-hdf5 -LIBDIR=$JH5INSTALLDIR"/lib" - -CLASSPATH=".:"$LIBDIR"/jhdf5.jar" - -LD_LIBRARY_PATH=$HDF5LIB":"$LIBDIR"/solaris" - -export CLASSPATH -export LD_LIBRARY_PATH - -/usr/java1.2/bin/java CreateDataset $* diff --git a/doc/html/Tutor/examples/java/runCreateDataset.sh.in b/doc/html/Tutor/examples/java/runCreateDataset.sh.in deleted file mode 100644 index 606e153..0000000 --- a/doc/html/Tutor/examples/java/runCreateDataset.sh.in +++ /dev/null @@ -1,17 +0,0 @@ -#!/bin/sh - -JH5INSTALLDIR=@JH5INST@ -HDF5LIB=@HDF5LIB@ - -#make this relative to the source root... -PWD=@PWD@ -LIBDIR=$JH5INSTALLDIR"/lib" - -CLASSPATH=".:"$LIBDIR"/jhdf5.jar" - -LD_LIBRARY_PATH=$HDF5LIB":"$LIBDIR"/@JAVATARG@" - -export CLASSPATH -export LD_LIBRARY_PATH - -@JAVA@ CreateDataset $* diff --git a/doc/html/Tutor/examples/java/runCreateFile.sh b/doc/html/Tutor/examples/java/runCreateFile.sh deleted file mode 100644 index e32c0ab..0000000 --- a/doc/html/Tutor/examples/java/runCreateFile.sh +++ /dev/null @@ -1,17 +0,0 @@ -#!/bin/sh - -JH5INSTALLDIR=/afs/ncsa/projects/hdf/java/java2/mcgrath/arabica/New5 -HDF5LIB=/afs/ncsa/projects/hdf/release/prehdf5-1.2.1/SunOS_5.7/lib - -#make this relative to the source root... 
-PWD=/afs/ncsa.uiuc.edu/projects/hdf/java/java2/mcgrath/arabica/java-hdf5 -LIBDIR=$JH5INSTALLDIR"/lib" - -CLASSPATH=".:"$LIBDIR"/jhdf5.jar" - -LD_LIBRARY_PATH=$HDF5LIB":"$LIBDIR"/solaris" - -export CLASSPATH -export LD_LIBRARY_PATH - -/usr/java1.2/bin/java CreateFile $* diff --git a/doc/html/Tutor/examples/java/runCreateFile.sh.in b/doc/html/Tutor/examples/java/runCreateFile.sh.in deleted file mode 100644 index bf48b9c..0000000 --- a/doc/html/Tutor/examples/java/runCreateFile.sh.in +++ /dev/null @@ -1,17 +0,0 @@ -#!/bin/sh - -JH5INSTALLDIR=@JH5INST@ -HDF5LIB=@HDF5LIB@ - -#make this relative to the source root... -PWD=@PWD@ -LIBDIR=$JH5INSTALLDIR"/lib" - -CLASSPATH=".:"$LIBDIR"/jhdf5.jar" - -LD_LIBRARY_PATH=$HDF5LIB":"$LIBDIR"/@JAVATARG@" - -export CLASSPATH -export LD_LIBRARY_PATH - -@JAVA@ CreateFile $* diff --git a/doc/html/Tutor/examples/java/runCreateFileInput.sh b/doc/html/Tutor/examples/java/runCreateFileInput.sh deleted file mode 100644 index fa12f06..0000000 --- a/doc/html/Tutor/examples/java/runCreateFileInput.sh +++ /dev/null @@ -1,17 +0,0 @@ -#!/bin/sh - -JH5INSTALLDIR=/afs/ncsa/projects/hdf/java/java2/mcgrath/arabica/New5 -HDF5LIB=/afs/ncsa/projects/hdf/release/prehdf5-1.2.1/SunOS_5.7/lib - -#make this relative to the source root... -PWD=/afs/ncsa.uiuc.edu/projects/hdf/java/java2/mcgrath/arabica/java-hdf5 -LIBDIR=$JH5INSTALLDIR"/lib" - -CLASSPATH=".:"$LIBDIR"/jhdf5.jar" - -LD_LIBRARY_PATH=$HDF5LIB":"$LIBDIR"/solaris" - -export CLASSPATH -export LD_LIBRARY_PATH - -/usr/java1.2/bin/java CreateFileInput $* diff --git a/doc/html/Tutor/examples/java/runCreateFileInput.sh.in b/doc/html/Tutor/examples/java/runCreateFileInput.sh.in deleted file mode 100644 index 776eac5..0000000 --- a/doc/html/Tutor/examples/java/runCreateFileInput.sh.in +++ /dev/null @@ -1,17 +0,0 @@ -#!/bin/sh - -JH5INSTALLDIR=@JH5INST@ -HDF5LIB=@HDF5LIB@ - -#make this relative to the source root... 
-PWD=@PWD@ -LIBDIR=$JH5INSTALLDIR"/lib" - -CLASSPATH=".:"$LIBDIR"/jhdf5.jar" - -LD_LIBRARY_PATH=$HDF5LIB":"$LIBDIR"/@JAVATARG@" - -export CLASSPATH -export LD_LIBRARY_PATH - -@JAVA@ CreateFileInput $* diff --git a/doc/html/Tutor/examples/java/runCreateGroup.sh b/doc/html/Tutor/examples/java/runCreateGroup.sh deleted file mode 100644 index ee9deee..0000000 --- a/doc/html/Tutor/examples/java/runCreateGroup.sh +++ /dev/null @@ -1,17 +0,0 @@ -#!/bin/sh - -JH5INSTALLDIR=/afs/ncsa/projects/hdf/java/java2/mcgrath/arabica/New5 -HDF5LIB=/afs/ncsa/projects/hdf/release/prehdf5-1.2.1/SunOS_5.7/lib - -#make this relative to the source root... -PWD=/afs/ncsa.uiuc.edu/projects/hdf/java/java2/mcgrath/arabica/java-hdf5 -LIBDIR=$JH5INSTALLDIR"/lib" - -CLASSPATH=".:"$LIBDIR"/jhdf5.jar" - -LD_LIBRARY_PATH=$HDF5LIB":"$LIBDIR"/solaris" - -export CLASSPATH -export LD_LIBRARY_PATH - -/usr/java1.2/bin/java CreateGroup $* diff --git a/doc/html/Tutor/examples/java/runCreateGroup.sh.in b/doc/html/Tutor/examples/java/runCreateGroup.sh.in deleted file mode 100644 index e2eadb5..0000000 --- a/doc/html/Tutor/examples/java/runCreateGroup.sh.in +++ /dev/null @@ -1,17 +0,0 @@ -#!/bin/sh - -JH5INSTALLDIR=@JH5INST@ -HDF5LIB=@HDF5LIB@ - -#make this relative to the source root... -PWD=@PWD@ -LIBDIR=$JH5INSTALLDIR"/lib" - -CLASSPATH=".:"$LIBDIR"/jhdf5.jar" - -LD_LIBRARY_PATH=$HDF5LIB":"$LIBDIR"/@JAVATARG@" - -export CLASSPATH -export LD_LIBRARY_PATH - -@JAVA@ CreateGroup $* diff --git a/doc/html/Tutor/examples/java/runCreateGroupAR.sh b/doc/html/Tutor/examples/java/runCreateGroupAR.sh deleted file mode 100644 index 2619a11..0000000 --- a/doc/html/Tutor/examples/java/runCreateGroupAR.sh +++ /dev/null @@ -1,17 +0,0 @@ -#!/bin/sh - -JH5INSTALLDIR=/afs/ncsa/projects/hdf/java/java2/mcgrath/arabica/New5 -HDF5LIB=/afs/ncsa/projects/hdf/release/prehdf5-1.2.1/SunOS_5.7/lib - -#make this relative to the source root... 
-PWD=/afs/ncsa.uiuc.edu/projects/hdf/java/java2/mcgrath/arabica/java-hdf5 -LIBDIR=$JH5INSTALLDIR"/lib" - -CLASSPATH=".:"$LIBDIR"/jhdf5.jar" - -LD_LIBRARY_PATH=$HDF5LIB":"$LIBDIR"/solaris" - -export CLASSPATH -export LD_LIBRARY_PATH - -/usr/java1.2/bin/java CreateGroupAR $* diff --git a/doc/html/Tutor/examples/java/runCreateGroupAR.sh.in b/doc/html/Tutor/examples/java/runCreateGroupAR.sh.in deleted file mode 100644 index d61d852..0000000 --- a/doc/html/Tutor/examples/java/runCreateGroupAR.sh.in +++ /dev/null @@ -1,17 +0,0 @@ -#!/bin/sh - -JH5INSTALLDIR=@JH5INST@ -HDF5LIB=@HDF5LIB@ - -#make this relative to the source root... -PWD=@PWD@ -LIBDIR=$JH5INSTALLDIR"/lib" - -CLASSPATH=".:"$LIBDIR"/jhdf5.jar" - -LD_LIBRARY_PATH=$HDF5LIB":"$LIBDIR"/@JAVATARG@" - -export CLASSPATH -export LD_LIBRARY_PATH - -@JAVA@ CreateGroupAR $* diff --git a/doc/html/Tutor/examples/java/runCreateGroupDataset.sh b/doc/html/Tutor/examples/java/runCreateGroupDataset.sh deleted file mode 100644 index 15b7bfa..0000000 --- a/doc/html/Tutor/examples/java/runCreateGroupDataset.sh +++ /dev/null @@ -1,17 +0,0 @@ -#!/bin/sh - -JH5INSTALLDIR=/afs/ncsa/projects/hdf/java/java2/mcgrath/arabica/New5 -HDF5LIB=/afs/ncsa/projects/hdf/release/prehdf5-1.2.1/SunOS_5.7/lib - -#make this relative to the source root... -PWD=/afs/ncsa.uiuc.edu/projects/hdf/java/java2/mcgrath/arabica/java-hdf5 -LIBDIR=$JH5INSTALLDIR"/lib" - -CLASSPATH=".:"$LIBDIR"/jhdf5.jar" - -LD_LIBRARY_PATH=$HDF5LIB":"$LIBDIR"/solaris" - -export CLASSPATH -export LD_LIBRARY_PATH - -/usr/java1.2/bin/java CreateGroupDataset $* diff --git a/doc/html/Tutor/examples/java/runCreateGroupDataset.sh.in b/doc/html/Tutor/examples/java/runCreateGroupDataset.sh.in deleted file mode 100644 index af2b4b5..0000000 --- a/doc/html/Tutor/examples/java/runCreateGroupDataset.sh.in +++ /dev/null @@ -1,17 +0,0 @@ -#!/bin/sh - -JH5INSTALLDIR=@JH5INST@ -HDF5LIB=@HDF5LIB@ - -#make this relative to the source root... 
-PWD=@PWD@ -LIBDIR=$JH5INSTALLDIR"/lib" - -CLASSPATH=".:"$LIBDIR"/jhdf5.jar" - -LD_LIBRARY_PATH=$HDF5LIB":"$LIBDIR"/@JAVATARG@" - -export CLASSPATH -export LD_LIBRARY_PATH - -@JAVA@ CreateGroupDataset $* diff --git a/doc/html/Tutor/examples/java/runDatasetRdWt.sh b/doc/html/Tutor/examples/java/runDatasetRdWt.sh deleted file mode 100644 index a049ea8..0000000 --- a/doc/html/Tutor/examples/java/runDatasetRdWt.sh +++ /dev/null @@ -1,17 +0,0 @@ -#!/bin/sh - -JH5INSTALLDIR=/afs/ncsa/projects/hdf/java/java2/mcgrath/arabica/New5 -HDF5LIB=/afs/ncsa/projects/hdf/release/prehdf5-1.2.1/SunOS_5.7/lib - -#make this relative to the source root... -PWD=/afs/ncsa.uiuc.edu/projects/hdf/java/java2/mcgrath/arabica/java-hdf5 -LIBDIR=$JH5INSTALLDIR"/lib" - -CLASSPATH=".:"$LIBDIR"/jhdf5.jar" - -LD_LIBRARY_PATH=$HDF5LIB":"$LIBDIR"/solaris" - -export CLASSPATH -export LD_LIBRARY_PATH - -/usr/java1.2/bin/java DatasetRdWt $* diff --git a/doc/html/Tutor/examples/java/runDatasetRdWt.sh.in b/doc/html/Tutor/examples/java/runDatasetRdWt.sh.in deleted file mode 100644 index ad3a049..0000000 --- a/doc/html/Tutor/examples/java/runDatasetRdWt.sh.in +++ /dev/null @@ -1,17 +0,0 @@ -#!/bin/sh - -JH5INSTALLDIR=@JH5INST@ -HDF5LIB=@HDF5LIB@ - -#make this relative to the source root... -PWD=@PWD@ -LIBDIR=$JH5INSTALLDIR"/lib" - -CLASSPATH=".:"$LIBDIR"/jhdf5.jar" - -LD_LIBRARY_PATH=$HDF5LIB":"$LIBDIR"/@JAVATARG@" - -export CLASSPATH -export LD_LIBRARY_PATH - -@JAVA@ DatasetRdWt $* diff --git a/doc/html/Tutor/examples/java/runHyperSlab.sh b/doc/html/Tutor/examples/java/runHyperSlab.sh deleted file mode 100644 index 549f807..0000000 --- a/doc/html/Tutor/examples/java/runHyperSlab.sh +++ /dev/null @@ -1,17 +0,0 @@ -#!/bin/sh - -JH5INSTALLDIR=/afs/ncsa/projects/hdf/java/java2/mcgrath/arabica/New5 -HDF5LIB=/afs/ncsa/projects/hdf/release/prehdf5-1.2.1/SunOS_5.7/lib - -#make this relative to the source root... 
-PWD=/afs/ncsa.uiuc.edu/projects/hdf/java/java2/mcgrath/arabica/java-hdf5 -LIBDIR=$JH5INSTALLDIR"/lib" - -CLASSPATH=".:"$LIBDIR"/jhdf5.jar" - -LD_LIBRARY_PATH=$HDF5LIB":"$LIBDIR"/solaris" - -export CLASSPATH -export LD_LIBRARY_PATH - -/usr/java1.2/bin/java HyperSlab $* diff --git a/doc/html/Tutor/examples/java/runHyperSlab.sh.in b/doc/html/Tutor/examples/java/runHyperSlab.sh.in deleted file mode 100644 index f515fc9..0000000 --- a/doc/html/Tutor/examples/java/runHyperSlab.sh.in +++ /dev/null @@ -1,17 +0,0 @@ -#!/bin/sh - -JH5INSTALLDIR=@JH5INST@ -HDF5LIB=@HDF5LIB@ - -#make this relative to the source root... -PWD=@PWD@ -LIBDIR=$JH5INSTALLDIR"/lib" - -CLASSPATH=".:"$LIBDIR"/jhdf5.jar" - -LD_LIBRARY_PATH=$HDF5LIB":"$LIBDIR"/@JAVATARG@" - -export CLASSPATH -export LD_LIBRARY_PATH - -@JAVA@ HyperSlab $* diff --git a/doc/html/Tutor/examples/mountexample.f90 b/doc/html/Tutor/examples/mountexample.f90 deleted file mode 100644 index f4341b2..0000000 --- a/doc/html/Tutor/examples/mountexample.f90 +++ /dev/null @@ -1,187 +0,0 @@ -! -!In the following example we create one file with a group in it, -!and another file with a dataset. Mounting is used to -!access the dataset from the second file as a member of a group -!in the first file. -! - - PROGRAM MOUNTEXAMPLE - - USE HDF5 ! This module contains all necessary modules - - IMPLICIT NONE - - ! - ! Filenames are "mount1.h5" and "mount2.h5" - ! - CHARACTER(LEN=9), PARAMETER :: filename1 = "mount1.h5" - CHARACTER(LEN=9), PARAMETER :: filename2 = "mount2.h5" - - ! - !data space rank and dimensions - ! - INTEGER, PARAMETER :: RANK = 2 - INTEGER, PARAMETER :: NX = 4 - INTEGER, PARAMETER :: NY = 5 - - ! - ! File identifiers - ! - INTEGER(HID_T) :: file1_id, file2_id - - ! - ! Group identifier - ! - INTEGER(HID_T) :: gid - - ! - ! Dataset identifier - ! - INTEGER(HID_T) :: dset_id - - ! - ! Data space identifier - ! - INTEGER(HID_T) :: dataspace - - ! - ! Data type identifier - ! - INTEGER(HID_T) :: dtype_id - - ! - ! 
The dimensions for the dataset. - ! - INTEGER(HSIZE_T), DIMENSION(2) :: dims = (/NX,NY/) - - ! - ! Flag to check operation success - ! - INTEGER :: error - - ! - ! General purpose integer - ! - INTEGER :: i, j - - ! - ! Data buffers - ! - INTEGER, DIMENSION(NX,NY) :: data_in, data_out - - ! - ! Initialize FORTRAN interface. - ! - CALL h5open_f(error) - - ! - ! Initialize data_in buffer - ! - do i = 1, NX - do j = 1, NY - data_in(i,j) = (i-1) + (j-1) - end do - end do - - ! - ! Create first file "mount1.h5" using default properties. - ! - CALL h5fcreate_f(filename1, H5F_ACC_TRUNC_F, file1_id, error) - - ! - ! Create group "/G" inside file "mount1.h5". - ! - CALL h5gcreate_f(file1_id, "/G", gid, error) - - ! - ! Close file and group identifiers. - ! - CALL h5gclose_f(gid, error) - CALL h5fclose_f(file1_id, error) - - ! - ! Create second file "mount2.h5" using default properties. - ! - CALL h5fcreate_f(filename2, H5F_ACC_TRUNC_F, file2_id, error) - - ! - ! Create data space for the dataset. - ! - CALL h5screate_simple_f(RANK, dims, dataspace, error) - - ! - ! Create dataset "/D" inside file "mount2.h5". - ! - CALL h5dcreate_f(file2_id, "/D", H5T_NATIVE_INTEGER, dataspace, & - dset_id, error) - - ! - ! Write data_in to the dataset - ! - CALL h5dwrite_f(dset_id, H5T_NATIVE_INTEGER, data_in, error) - - ! - ! Close file, dataset and dataspace identifiers. - ! - CALL h5sclose_f(dataspace, error) - CALL h5dclose_f(dset_id, error) - CALL h5fclose_f(file2_id, error) - - ! - ! Reopen both files. - ! - CALL h5fopen_f (filename1, H5F_ACC_RDWR_F, file1_id, error) - CALL h5fopen_f (filename2, H5F_ACC_RDWR_F, file2_id, error) - - ! - ! Mount the second file under the first file's "/G" group. - ! - CALL h5fmount_f (file1_id, "/G", file2_id, error) - - - ! - ! Access dataset D in the first file under /G/D name. - ! - CALL h5dopen_f(file1_id, "/G/D", dset_id, error) - - ! - ! Get dataset's data type. - ! - CALL h5dget_type_f(dset_id, dtype_id, error) - - ! - ! Read the dataset. - ! 
- CALL h5dread_f(dset_id, dtype_id, data_out, error) - - ! - ! Print out the data. - ! - do i = 1, NX - print *, (data_out(i,j), j = 1, NY) - end do - - - ! - !Close dset_id and dtype_id. - ! - CALL h5dclose_f(dset_id, error) - CALL h5tclose_f(dtype_id, error) - - ! - ! Unmount the second file. - ! - CALL h5funmount_f(file1_id, "/G", error); - - ! - ! Close both files. - ! - CALL h5fclose_f(file1_id, error) - CALL h5fclose_f(file2_id, error) - ! - ! Close FORTRAN interface. - ! - CALL h5close_f(error) - - END PROGRAM MOUNTEXAMPLE - diff --git a/doc/html/Tutor/examples/refobjexample.f90 b/doc/html/Tutor/examples/refobjexample.f90 deleted file mode 100644 index fdbb26d..0000000 --- a/doc/html/Tutor/examples/refobjexample.f90 +++ /dev/null @@ -1,142 +0,0 @@ -! -! This program shows how to create and store references to the objects. -! Program creates a file, two groups, a dataset to store integer data and -! a dataset to store references to the objects. -! Stored references are used to open the objects they point to. -! Data is written to the dereferenced dataset, and class type is displayed for -! the shared datatype. -! - PROGRAM OBJ_REFERENCES - - USE HDF5 ! This module contains all necessary modules - - IMPLICIT NONE - CHARACTER(LEN=10), PARAMETER :: filename = "FORTRAN.h5" ! File - CHARACTER(LEN=8), PARAMETER :: dsetnamei = "INTEGERS" ! Dataset with the integer data - CHARACTER(LEN=17), PARAMETER :: dsetnamer = "OBJECT_REFERENCES" ! Dataset with object - ! references - CHARACTER(LEN=6), PARAMETER :: groupname1 = "GROUP1" ! Groups in the file - CHARACTER(LEN=6), PARAMETER :: groupname2 = "GROUP2" ! - - INTEGER(HID_T) :: file_id ! File identifier - INTEGER(HID_T) :: grp1_id ! Group identifiers - INTEGER(HID_T) :: grp2_id ! - INTEGER(HID_T) :: dset_id ! Dataset identifiers - INTEGER(HID_T) :: dsetr_id ! - INTEGER(HID_T) :: type_id ! Type identifier - INTEGER(HID_T) :: space_id ! Dataspace identifiers - INTEGER(HID_T) :: spacer_id !
- INTEGER :: error - INTEGER(HSIZE_T), DIMENSION(1) :: dims = (/5/) - INTEGER(HSIZE_T), DIMENSION(1) :: dimsr= (/4/) - INTEGER(HSIZE_T), DIMENSION(1) :: my_maxdims = (/5/) - INTEGER :: rank = 1 - INTEGER :: rankr = 1 - TYPE(hobj_ref_t_f), DIMENSION(4) :: ref - TYPE(hobj_ref_t_f), DIMENSION(4) :: ref_out - INTEGER, DIMENSION(5) :: data = (/1, 2, 3, 4, 5/) - INTEGER :: class, ref_size - ! - ! Initialize FORTRAN interface. - ! - CALL h5open_f(error) - ! - ! Create a file - ! - CALL h5fcreate_f(filename, H5F_ACC_TRUNC_F, file_id, error) - ! Default file access and file creation - ! properties are used. - ! - ! Create a group in the file - ! - CALL h5gcreate_f(file_id, groupname1, grp1_id, error) - ! - ! Create a group inside the created group - ! - CALL h5gcreate_f(grp1_id, groupname2, grp2_id, error) - ! - ! Create dataspaces for datasets - ! - CALL h5screate_simple_f(rank, dims, space_id, error, maxdims=my_maxdims) - CALL h5screate_simple_f(rankr, dimsr, spacer_id, error) - ! - ! Create integer dataset - ! - CALL h5dcreate_f(file_id, dsetnamei, H5T_NATIVE_INTEGER, space_id, & - dset_id, error) - ! - ! Create dataset to store references to the objects - ! - CALL h5dcreate_f(file_id, dsetnamer, H5T_STD_REF_OBJ, spacer_id, & - dsetr_id, error) - ! - ! Create a datatype and store it in the file - ! - CALL h5tcopy_f(H5T_NATIVE_REAL, type_id, error) - CALL h5tcommit_f(file_id, "MyType", type_id, error) - ! - ! Close dataspaces, groups and integer dataset - ! - CALL h5sclose_f(space_id, error) - CALL h5sclose_f(spacer_id, error) - CALL h5tclose_f(type_id, error) - CALL h5dclose_f(dset_id, error) - CALL h5gclose_f(grp1_id, error) - CALL h5gclose_f(grp2_id, error) - ! - ! Create references to two groups, integer dataset and shared datatype - ! and write them to the dataset in the file - !
- CALL h5rcreate_f(file_id, groupname1, ref(1), error) - CALL h5rcreate_f(file_id, "/GROUP1/GROUP2", ref(2), error) - CALL h5rcreate_f(file_id, dsetnamei, ref(3), error) - CALL h5rcreate_f(file_id, "MyType", ref(4), error) - ref_size = size(ref) - CALL h5dwrite_f(dsetr_id, H5T_STD_REF_OBJ, ref, ref_size, error) - ! - ! Close the dataset - ! - CALL h5dclose_f(dsetr_id, error) - ! - ! Reopen the dataset with object references and read references to the buffer - ! - CALL h5dopen_f(file_id, dsetnamer,dsetr_id,error) - ref_size = size(ref_out) - CALL h5dread_f(dsetr_id, H5T_STD_REF_OBJ, ref_out, ref_size, error) - ! - ! Dereference the third reference. We know that it is a dataset. In practice - ! one should use the h5rget_object_type_f function to find out - ! the type of an object the reference points to. - ! - CALL h5rdereference_f(dsetr_id, ref(3), dset_id, error) - ! - ! Write data to the dataset. - ! - CALL h5dwrite_f(dset_id, H5T_NATIVE_INTEGER, data, error) - if (error .eq. 0) write(*,*) "Data has been successfully written to the dataset " - ! - ! Dereference the fourth reference. We know that it is a datatype. In practice - ! one should use the h5rget_object_type_f function to find out - ! the type of an object the reference points to. - ! - CALL h5rdereference_f(dsetr_id, ref(4), type_id, error) - ! - ! Get datatype class and display it if it is of a FLOAT class. - ! - CALL h5tget_class_f(type_id, class, error) - if(class .eq. H5T_FLOAT_F) write(*,*) "Stored datatype is of a FLOAT class" - ! - ! Close all objects. - ! - CALL h5dclose_f(dset_id, error) - CALL h5tclose_f(type_id, error) - CALL h5dclose_f(dsetr_id, error) - CALL h5fclose_f(file_id, error) - ! - ! Close FORTRAN interface. - !
- CALL h5close_f(error) - - END PROGRAM OBJ_REFERENCES - - diff --git a/doc/html/Tutor/examples/refregexample.f90 b/doc/html/Tutor/examples/refregexample.f90 deleted file mode 100644 index 5d72f1e..0000000 --- a/doc/html/Tutor/examples/refregexample.f90 +++ /dev/null @@ -1,162 +0,0 @@ -! -! This program shows how to create, store and dereference references -! to the dataset regions. -! Program creates a file and writes a two-dimensional integer dataset -! to it. Then program creates and stores references to the hyperslab -! and 3 points selected in the integer dataset, in the second dataset. -! Program reopens the second dataset, reads and dereferences region -! references, and then reads and displays selected data from the -! integer dataset. -! - PROGRAM REG_REFERENCE - - USE HDF5 ! This module contains all necessary modules - - IMPLICIT NONE - CHARACTER(LEN=10), PARAMETER :: filename = "FORTRAN.h5" - CHARACTER(LEN=6), PARAMETER :: dsetnamev = "MATRIX" - CHARACTER(LEN=17), PARAMETER :: dsetnamer = "REGION_REFERENCES" - - INTEGER(HID_T) :: file_id ! File identifier - INTEGER(HID_T) :: space_id ! Dataspace identifier - INTEGER(HID_T) :: spacer_id ! Dataspace identifier - INTEGER(HID_T) :: dsetv_id ! Dataset identifier - INTEGER(HID_T) :: dsetr_id ! Dataset identifier - INTEGER :: error - TYPE(hdset_reg_ref_t_f) , DIMENSION(2) :: ref ! Buffers to store references - TYPE(hdset_reg_ref_t_f) , DIMENSION(2) :: ref_out ! - INTEGER(HSIZE_T), DIMENSION(2) :: dims = (/2,9/) ! Datasets dimensions - INTEGER(HSIZE_T), DIMENSION(1) :: dimsr = (/2/) ! - INTEGER(HSIZE_T), DIMENSION(2) :: start - INTEGER(HSIZE_T), DIMENSION(2) :: count - INTEGER :: rankr = 1 - INTEGER :: rank = 2 - INTEGER , DIMENSION(2,9) :: data - INTEGER , DIMENSION(2,9) :: data_out = 0 - INTEGER(HSIZE_T) , DIMENSION(2,3) :: coord - INTEGER(SIZE_T) :: num_points = 3 ! Number of selected points - INTEGER :: i, j - INTEGER :: ref_size - coord = reshape((/1,1,2,7,1,9/), (/2,3/)) !
Coordinates of selected points - data = reshape ((/1,1,1,2,2,2,3,3,3,4,4,4,5,5,5,6,6,6/), (/2,9/)) - ! - ! Initialize FORTRAN interface. - ! - CALL h5open_f(error) - ! - ! Create a new file. - ! - CALL h5fcreate_f(filename, H5F_ACC_TRUNC_F, file_id, error) - ! Default file access and file creation - ! properties are used. - ! - ! Create dataspaces: - ! - ! for dataset with references to dataset regions - ! - CALL h5screate_simple_f(rankr, dimsr, spacer_id, error) - ! - ! for integer dataset - ! - CALL h5screate_simple_f(rank, dims, space_id, error) - ! - ! Create and write datasets: - ! - ! Integer dataset - ! - CALL h5dcreate_f(file_id, dsetnamev, H5T_NATIVE_INTEGER, space_id, & - dsetv_id, error) - CALL h5dwrite_f(dsetv_id, H5T_NATIVE_INTEGER, data, error) - CALL h5dclose_f(dsetv_id, error) - ! - ! Dataset with references - ! - CALL h5dcreate_f(file_id, dsetnamer, H5T_STD_REF_DSETREG, spacer_id, & - dsetr_id, error) - ! - ! Create a reference to the hyperslab selection. - ! - start(1) = 0 - start(2) = 3 - count(1) = 2 - count(2) = 3 - CALL h5sselect_hyperslab_f(space_id, H5S_SELECT_SET_F, & - start, count, error) - CALL h5rcreate_f(file_id, dsetnamev, space_id, ref(1), error) - ! - ! Create a reference to elements selection. - ! - CALL h5sselect_none_f(space_id, error) - CALL h5sselect_elements_f(space_id, H5S_SELECT_SET_F, rank, num_points,& - coord, error) - CALL h5rcreate_f(file_id, dsetnamev, space_id, ref(2), error) - ! - ! Write dataset with the references. - ! - ref_size = size(ref) - CALL h5dwrite_f(dsetr_id, H5T_STD_REF_DSETREG, ref, ref_size, error) - ! - ! Close all objects. - ! - CALL h5sclose_f(space_id, error) - CALL h5sclose_f(spacer_id, error) - CALL h5dclose_f(dsetr_id, error) - CALL h5fclose_f(file_id, error) - ! - ! Reopen the file to test selections. - ! - CALL h5fopen_f (filename, H5F_ACC_RDWR_F, file_id, error) - CALL h5dopen_f(file_id, dsetnamer, dsetr_id, error) - ! - ! Read references to the dataset regions. - ! 
- ref_size = size(ref_out) - CALL h5dread_f(dsetr_id, H5T_STD_REF_DSETREG, ref_out, ref_size, error) - ! - ! Dereference the first reference. - ! - CALL H5rdereference_f(dsetr_id, ref_out(1), dsetv_id, error) - CALL H5rget_region_f(dsetr_id, ref_out(1), space_id, error) - ! - ! Read selected data from the dataset. - ! - CALL h5dread_f(dsetv_id, H5T_NATIVE_INTEGER, data_out, error, & - mem_space_id = space_id, file_space_id = space_id) - write(*,*) "Hyperslab selection" - write(*,*) - do i = 1,2 - write(*,*) (data_out (i,j), j = 1,9) - enddo - write(*,*) - CALL h5sclose_f(space_id, error) - CALL h5dclose_f(dsetv_id, error) - data_out = 0 - ! - ! Dereference the second reference. - ! - CALL H5rdereference_f(dsetr_id, ref_out(2), dsetv_id, error) - CALL H5rget_region_f(dsetr_id, ref_out(2), space_id, error) - ! - ! Read selected data from the dataset. - ! - CALL h5dread_f(dsetv_id, H5T_NATIVE_INTEGER, data_out, error, & - mem_space_id = space_id, file_space_id = space_id) - write(*,*) "Point selection" - write(*,*) - do i = 1,2 - write(*,*) (data_out (i,j), j = 1,9) - enddo - ! - ! Close all objects - ! - CALL h5sclose_f(space_id, error) - CALL h5dclose_f(dsetv_id, error) - CALL h5dclose_f(dsetr_id, error) - ! - ! Close FORTRAN interface. - ! - CALL h5close_f(error) - - END PROGRAM REG_REFERENCE - - diff --git a/doc/html/Tutor/examples/rwdsetexample.f90 b/doc/html/Tutor/examples/rwdsetexample.f90 deleted file mode 100644 index 729e84d..0000000 --- a/doc/html/Tutor/examples/rwdsetexample.f90 +++ /dev/null @@ -1,78 +0,0 @@ -! -! The following example shows how to write and read to/from an existing dataset. -! It opens the file created in the previous example, obtains the dataset -! identifier, writes the data to the dataset in the file, -! then reads the dataset to memory. -! - - - PROGRAM RWDSETEXAMPLE - - USE HDF5 ! This module contains all necessary modules - - IMPLICIT NONE - - CHARACTER(LEN=8), PARAMETER :: filename = "dsetf.h5" !
File name - CHARACTER(LEN=4), PARAMETER :: dsetname = "dset" ! Dataset name - - INTEGER(HID_T) :: file_id ! File identifier - INTEGER(HID_T) :: dset_id ! Dataset identifier - - INTEGER :: error ! Error flag - INTEGER :: i, j - - INTEGER, DIMENSION(4,6) :: dset_data, data_out ! Data buffers - - ! - ! Initialize the dset_data array. - ! - do i = 1, 4 - do j = 1, 6 - dset_data(i,j) = (i-1)*6 + j; - end do - end do - - ! - ! Initialize FORTRAN predefined datatypes - ! - CALL h5open_f(error) - - ! - ! Open an existing file. - ! - CALL h5fopen_f (filename, H5F_ACC_RDWR_F, file_id, error) - - ! - ! Open an existing dataset. - ! - CALL h5dopen_f(file_id, dsetname, dset_id, error) - - ! - ! Write the dataset. - ! - CALL h5dwrite_f(dset_id, H5T_NATIVE_INTEGER, dset_data, error) - - ! - ! Read the dataset. - ! - CALL h5dread_f(dset_id, H5T_NATIVE_INTEGER, data_out, error) - - ! - ! Close the dataset. - ! - CALL h5dclose_f(dset_id, error) - - ! - ! Close the file. - ! - CALL h5fclose_f(file_id, error) - - ! - ! Close FORTRAN predefined datatypes. - ! - CALL h5close_f(error) - - END PROGRAM RWDSETEXAMPLE - - - diff --git a/doc/html/Tutor/examples/selectele.f90 b/doc/html/Tutor/examples/selectele.f90 deleted file mode 100644 index 8727bd9..0000000 --- a/doc/html/Tutor/examples/selectele.f90 +++ /dev/null @@ -1,282 +0,0 @@ -! -! This program creates two files, copy1.h5, and copy2.h5. -! In copy1.h5, it creates a 3x4 dataset called 'Copy1', -! and writes 0's to this dataset. -! In copy2.h5, it creates a 3x4 dataset called 'Copy2', -! and writes 1's to this dataset. -! It closes both files, reopens both files, selects two -! points in copy1.h5 and writes values to them. Then it -! uses H5Scopy to write the same selection to copy2.h5. -! Program reopens the files, and reads and prints the contents of -! the two datasets. -! - - PROGRAM SELECTEXAMPLE - - USE HDF5 ! This module contains all necessary modules - - IMPLICIT NONE - - CHARACTER(LEN=8), PARAMETER :: filename1 = "copy1.h5" !
File name - CHARACTER(LEN=8), PARAMETER :: filename2 = "copy2.h5" ! - CHARACTER(LEN=5), PARAMETER :: dsetname1 = "Copy1" ! Dataset name - CHARACTER(LEN=5), PARAMETER :: dsetname2 = "Copy2" ! - - INTEGER, PARAMETER :: RANK = 2 ! Dataset rank - - INTEGER(SIZE_T), PARAMETER :: NUMP = 2 ! Number of points selected - - INTEGER(HID_T) :: file1_id ! File1 identifier - INTEGER(HID_T) :: file2_id ! File2 identifier - INTEGER(HID_T) :: dset1_id ! Dataset1 identifier - INTEGER(HID_T) :: dset2_id ! Dataset2 identifier - INTEGER(HID_T) :: dataspace1 ! Dataspace identifier - INTEGER(HID_T) :: dataspace2 ! Dataspace identifier - INTEGER(HID_T) :: memspace ! memspace identifier - - INTEGER(HSIZE_T), DIMENSION(1) :: dimsm = (/2/) - ! Memory dataspace dimensions - INTEGER(HSIZE_T), DIMENSION(2) :: dimsf = (/3,4/) - ! File dataspace dimensions - INTEGER(HSIZE_T), DIMENSION(RANK,NUMP) :: coord ! Elements coordinates - ! in the file - - INTEGER, DIMENSION(3,4) :: buf1, buf2, bufnew ! Data buffers - INTEGER, DIMENSION(2) :: val = (/53, 59/) ! Values to write - - INTEGER :: memrank = 1 ! Rank of the dataset in memory - - INTEGER :: i, j - - INTEGER :: error ! Error flag - LOGICAL :: status - - - ! - ! Create two files containing identical datasets. Write 0's to one - ! and 1's to the other. - ! - - ! - ! Data initialization. - ! - do i = 1, 3 - do j = 1, 4 - buf1(i,j) = 0; - end do - end do - - do i = 1, 3 - do j = 1, 4 - buf2(i,j) = 1; - end do - end do - - ! - ! Initialize FORTRAN predefined datatypes - ! - CALL h5open_f(error) - - ! - ! Create file1, file2 using default properties. - ! - CALL h5fcreate_f(filename1, H5F_ACC_TRUNC_F, file1_id, error) - - CALL h5fcreate_f(filename2, H5F_ACC_TRUNC_F, file2_id, error) - - ! - ! Create the data space for the datasets. - ! - CALL h5screate_simple_f(RANK, dimsf, dataspace1, error) - - CALL h5screate_simple_f(RANK, dimsf, dataspace2, error) - - ! - ! Create the datasets with default properties. - ! 
- CALL h5dcreate_f(file1_id, dsetname1, H5T_NATIVE_INTEGER, dataspace1, & - dset1_id, error) - - CALL h5dcreate_f(file2_id, dsetname2, H5T_NATIVE_INTEGER, dataspace2, & - dset2_id, error) - - ! - ! Write the datasets. - ! - CALL h5dwrite_f(dset1_id, H5T_NATIVE_INTEGER, buf1, error) - - CALL h5dwrite_f(dset2_id, H5T_NATIVE_INTEGER, buf2, error) - - ! - ! Close the dataspace for the datasets. - ! - CALL h5sclose_f(dataspace1, error) - - CALL h5sclose_f(dataspace2, error) - - ! - ! Close the datasets. - ! - CALL h5dclose_f(dset1_id, error) - - CALL h5dclose_f(dset2_id, error) - - ! - ! Close the files. - ! - CALL h5fclose_f(file1_id, error) - - CALL h5fclose_f(file2_id, error) - - ! - ! Open the two files. Select two points in one file, write values to - ! those point locations, then do H5Scopy and write the values to the - ! other file. Close files. - ! - - ! - ! Open the files. - ! - CALL h5fopen_f (filename1, H5F_ACC_RDWR_F, file1_id, error) - - CALL h5fopen_f (filename2, H5F_ACC_RDWR_F, file2_id, error) - - ! - ! Open the datasets. - ! - CALL h5dopen_f(file1_id, dsetname1, dset1_id, error) - - CALL h5dopen_f(file2_id, dsetname2, dset2_id, error) - - ! - ! Get dataset1's dataspace identifier. - ! - CALL h5dget_space_f(dset1_id, dataspace1, error) - - ! - ! Create memory dataspace. - ! - CALL h5screate_simple_f(memrank, dimsm, memspace, error) - - ! - ! Set the selected point positions. Because Fortran array indices start - ! from 1, add one to the corresponding C selection coordinates. - ! - coord(1,1) = 1 - coord(2,1) = 2 - coord(1,2) = 1 - coord(2,2) = 4 - - ! - ! Select the elements in file space. - ! - CALL h5sselect_elements_f(dataspace1, H5S_SELECT_SET_F, RANK, NUMP,& - coord, error) - - ! - ! Write the values into the selected points in dataset1. - ! - CALL H5dwrite_f(dset1_id, H5T_NATIVE_INTEGER, val, error, & - mem_space_id=memspace, file_space_id=dataspace1) - - ! - ! Copy dataspace1 into dataspace2. - ! - CALL h5scopy_f(dataspace1, dataspace2, error) - - ! - !
Write value into the selected points in dataset2. - ! - CALL H5dwrite_f(dset2_id, H5T_NATIVE_INTEGER, val, error, & - mem_space_id=memspace, file_space_id=dataspace2) - - ! - ! Close the dataspace for the datasets. - ! - CALL h5sclose_f(dataspace1, error) - - CALL h5sclose_f(dataspace2, error) - - ! - ! Close the memoryspace. - ! - CALL h5sclose_f(memspace, error) - - ! - ! Close the datasets. - ! - CALL h5dclose_f(dset1_id, error) - - CALL h5dclose_f(dset2_id, error) - - ! - ! Close the files. - ! - CALL h5fclose_f(file1_id, error) - - CALL h5fclose_f(file2_id, error) - - ! - ! Open both files and print the contents of the datasets. - ! - - ! - ! Open the files. - ! - CALL h5fopen_f (filename1, H5F_ACC_RDWR_F, file1_id, error) - - CALL h5fopen_f (filename2, H5F_ACC_RDWR_F, file2_id, error) - - ! - ! Open the datasets. - ! - CALL h5dopen_f(file1_id, dsetname1, dset1_id, error) - - CALL h5dopen_f(file2_id, dsetname2, dset2_id, error) - - ! - ! Read dataset from the first file. - ! - CALL h5dread_f(dset1_id, H5T_NATIVE_INTEGER, bufnew, error) - - ! - ! Display the data read from dataset "Copy1" - ! - write(*,*) "The data in dataset Copy1 is: " - do i = 1, 3 - print *, (bufnew(i,j), j = 1,4) - end do - - ! - ! Read dataset from the second file. - ! - CALL h5dread_f(dset2_id, H5T_NATIVE_INTEGER, bufnew, error) - - ! - ! Display the data read from dataset "Copy2" - ! - write(*,*) "The data in dataset Copy2 is: " - do i = 1, 3 - print *, (bufnew(i,j), j = 1,4) - end do - - ! - ! Close datasets. - ! - CALL h5dclose_f(dset1_id, error) - - CALL h5dclose_f(dset2_id, error) - - ! - ! Close files. - ! - CALL h5fclose_f(file1_id, error) - - CALL h5fclose_f(file2_id, error) - - ! - ! Close FORTRAN predefined datatypes. - ! - CALL h5close_f(error) - - END PROGRAM SELECTEXAMPLE diff --git a/doc/html/Tutor/extend.html b/doc/html/Tutor/extend.html deleted file mode 100644 index 326a946..0000000 --- a/doc/html/Tutor/extend.html +++ /dev/null @@ -1,284 +0,0 @@ -
--HDF5 requires you to use chunking to define extendible datasets. -This makes it possible to extend datasets efficiently without -having to excessively reorganize storage. -
-The following operations are required in order to write an extendible dataset: -
h5_extend.c
chunk.f90
--
H5Pcreate
/ h5pcreate_f
-creates a new property as an instance of
- a property list. The signature is as follows:
--C: -
- hid_t H5Pcreate (H5P_class_t classtype) --
-FORTRAN: -
- h5pcreate_f (classtype, prp_id, hdferr) - - classtype IN: INTEGER - prp_id OUT: INTEGER(HID_T) - hdferr OUT: INTEGER --
-
C | -FORTRAN | -
|
- |
-
-
H5Pset_chunk
/ h5pset_chunk_f
-sets the size of the chunks used
- to store a chunked layout dataset.
- The signature of this routine is as follows:
--C: -
- herr_t H5Pset_chunk (hid_t prp_id, int ndims, - const hsize_t * dims) --
-FORTRAN: -
- h5pset_chunk_f (prp_id, ndims, dims, hdferr) - - prp_id IN: INTEGER(HID_T) - ndims IN: INTEGER - dims IN: INTEGER(HSIZE_T), DIMENSION(ndims) - hdferr OUT: INTEGER - --
-
-
H5Dextend
/ h5dextend_f
routine
-extends a dataset that has an unlimited
- dimension. The signature is as follows:
--C: -
- herr_t H5Dextend (hid_t dset_id, const hsize_t * size) --
-FORTRAN: -
- h5dextend_f (dset_id, size, hdferr) - - dset_id IN: INTEGER(HID_T) - size IN: INTEGER(HSIZE_T), DIMENSION(*) - hdferr OUT: INTEGER-
-
-
-
H5Dget_create_plist
/ h5dget_create_plist_f
-routine returns an identifier for a
-copy of the dataset creation property list for a dataset.
--
H5Pget_layout
, returns the layout of the raw data for a
-dataset. Valid types are H5D_CONTIGUOUS
and
-H5D_CHUNKED
.
-A FORTRAN routine for H5Pget_layout
does not yet exist.
--
H5Pget_chunk
/ h5pget_chunk_f
-routine retrieves the size of chunks
-for the raw data of a chunked layout dataset.
-The signature is as follows:
--C: -
- int H5Pget_chunk (hid_t prp_id, int ndims, hsize_t * dims) --
-FORTRAN: -
- h5pget_chunk_f (prp_id, ndims, dims, hdferr) - - prp_id IN: INTEGER(HID_T) - ndims IN: INTEGER - dims OUT: INTEGER(HSIZE_T), DIMENSION(ndims) - hdferr OUT: INTEGER --
-
-
H5Pclose
/ h5pclose_f
routine
- terminates access to a property list.
- The signature is as follows:
--C: -
- herr_t H5Pclose (hid_t prp_id) --
-FORTRAN: -
- h5pclose_f (prp_id, hdferr) - - prp_id IN: INTEGER(HID_T) - hdferr OUT: INTEGER --
-
-An HDF5 file is a container for storing a variety of scientific data -and is composed of two primary types of objects: groups and datasets. -
-Working with groups and datasets is similar in many -ways to working with directories and files in UNIX. As with UNIX directories -and files, an HDF5 object in an HDF5 file is often referred to by its -full path name (also called an absolute path name). -
/
signifies the root group./foo
signifies a member of the root group called foo
.
-/foo/zoo
signifies a member of the group foo
, which in
- turn is a member of the root group.
-- -
- - - -
-
-
-
-
-
-
-
- -
- H5F_ACC_RDWR: Allow read and write access to file. - H5F_ACC_RDONLY: Allow read-only access to file. - H5F_ACC_TRUNC: Truncate file, if it already exists, erasing all data - previously stored in the file. - H5F_ACC_EXCL: Fail if file already exists. - H5F_ACC_DEBUG: Print debug information. - H5P_DEFAULT: Apply default file access and creation properties. --
-
-
- -
- -
-
-
- -
- -
- -
-
- -
-
- -
- An object - reference points to an entire object in the current HDF5 file by storing - the relative file address (OID) of the object header for the object - pointed to. The relative file address of an object header is constant - for the life of the object. An object reference is of a fixed size in - the file. -
-DATASET REGION REFERENCE:
- Reference to a specific dataset region.
-
- A dataset region reference points to a region of a dataset in the - current HDF5 file by storing the OID of the dataset and the global - heap offset of the region referenced. The region referenced is - located by retrieving the coordinates of the areas in the region - from the global heap. A dataset region reference is of a variable - size in the file. -
-
-These properties mean that the TSHDF5 library will not interfere with an application's use of threads. A TSHDF5 library is the same -library as the regular HDF5 library, with additional code to synchronize access to the HDF5 library's internal data structures. - 
- If you are reading this message, your browser is not capable of - interpreting HTML frames. A no-frames version of the tutorial - is available by viewing the file title.html. -
- If you would like to upgrade to a frames-capable browser, - we suggest the most recent version of - Netscape Communicator, Microsoft Internet Explorer, or - an equivalent browser. -
- In the meantime, you can view this tutorial by starting with the - file title.html. -
-HDF5 is a file format and library for storing scientific data. -It was designed and implemented - to meet growing and ever-changing scientific data-storage - and data-handling needs, - to take advantage of the power and features of today's - computing systems, and - to address the deficiencies of HDF4.x. -HDF5 has a powerful and flexible data model, - supports files larger than 2 GB (the limit of HDF4.x files), and - supports parallel I/O. -Thread-safety has been designed and will be implemented in the near future. -For a short overview of the HDF5 data model, library, and tools, see -the slide presentation at the following URL: -
- http://hdf.ncsa.uiuc.edu/HDF5/papers/HDF5_overview/index.htm --This tutorial covers the basic HDF5 data objects and file structure, -the HDF5 programming model, and the API functions necessary for creating and -modifying data objects. It also introduces the available HDF5 tools for accessing -HDF5 files. -
-The examples used in this tutorial, along with a Makefile to compile them, -can be found in ./examples/. You can also download -a tar -file with the examples and Makefile. -To use the Makefile, you may have to edit it and update the -compiler and compiler options, as well as the path for the HDF5 -binary distribution. -The Java examples can be found in -a subdirectory of the ./examples/ directory called java/. The java/ -directory contains a Makefile and shell scripts for running the java -programs. -
-Please check the References for pointers to -other examples of HDF5 Programs. -
-We hope that the step-by-step examples and instructions will give you a quick -start with HDF5. -
-Please send your comments and suggestions to hdfhelp@ncsa.uiuc.edu. - - - - - - - -
-The HDF5 Group interface includes the H5Giterate
function,
-which iterates over the group members.
-
-Operations on each group member can be performed during the iteration process -by passing the operator function and its data to the iterator as parameters. -There are no restrictions on what kind of operations can be performed on -group members during the iteration procedure. -
-The following steps are involved: -
h5gn_members_f
returns the number of group members.
- h5gget_obj_info_idx_f
returns the name and type of the
- group member, which is identified by its index.
--
h5_iterate.c
grpit.f90
--Following is the output from these examples: -
-Output from C Example -
- Objects in the root group are: - - Object with name Dataset1 is a dataset - Object with name Datatype1 is a named datatype - Object with name Group1 is a group --Output from FORTRAN Example -
- Number of root group member is 1 - MyGroup 1 - Number of group MyGroup member is 2 - Group_A 1 - dset1 2 - Number of group MyGroup/Group_A member is 1 - dset2 2 -- -
-
- herr_t *(H5G_operator_t) (hid group_id, const char* name, - void *operator_data) --
H5Giterate
.
--
-
H5Giterate
.
-- The operator function in this example simply prints the name and type - of the current object and then exits. - This information can also be used to open the object and perform - different operations or queries. For example, a named datatype object's - name can be used to open the datatype and query its properties. -
- The operator return value defines the behavior of the iterator. -
-
-
-
- In this example the operator function returns 0, which causes the iterator - to continue and go through all group members. -
-
H5Gget_objinfo
is used to determine the type of the object.
- It also returns the modification time, number of hard links, and some
- other information.
-- The signature of this function is as follows: -
- herr_t H5Gget_objinfo (hid_t loc_id, const char * name, - hbool_t follow_link, - H5G_stat_t *statbuf) --
-
- The root group in this example does not have objects that are - links, so this flag is not important for our example. -
-
H5G_stat_t
data structure (statbuf.type
).
- Valid values are
- H5G_GROUP
, H5G_DATASET
,
- H5G_TYPE
, and H5G_LINK
.
--
H5Giterate
function has the following signature:
-- int H5Giterate (hid_t loc_id, const char *name , int *idx, - H5G_operator_t operator, void * operator_data) --
-
h5gn_members_f
to get the number of members in
- each group and h5gget_obj_idx_f
to obtain the group member's
- name and type.
--
h5gn_members_f
call:
-- h5gn_members_f (loc_id, name, nmembers, hdferr) - - loc_id IN: INTEGER (HID_T) - name IN: CHARACTER (LEN=*) - nmembers OUT: INTEGER - hdferr OUT: INTEGER --
-
h5gget_obj_info_idx_f
call:
-- h5gget_obj_info_idx_f (loc_id, name, idx, & - obj_name, obj_type, hdferr) - - loc_id IN: INTEGER (HID_T) - name IN: CHARACTER (LEN=*) - idx IN: INTEGER - obj_name OUT: CHARACTER (LEN=*) - obj_type OUT: INTEGER - hdferr OUT: INTEGER --
- H5G_LINK_F - H5G_GROUP_F - H5G_DATASET_F - H5G_TYPE_F --
H5Fmount
/ h5fmount_f
- to mount the second file (the child file) in the first file.
-
-H5Funmount
/
- h5funmount_f
when the work is done.
-- - FILE1 FILE2 - - -------------------- -------------------- - ! ! ! ! - ! / ! ! / ! - ! | ! ! | ! - ! | ! ! | ! - ! V ! ! V ! - ! -------- ! ! ---------- ! - ! ! Group ! ! ! ! Dataset! ! - ! --------- ! ! ---------- ! - !------------------! !------------------! --After mounting
FILE2
under the group in FILE1
,
-the parent file has the following structure:
-- - FILE1 - - -------------------- - ! ! - ! / ! - ! | ! - ! | ! - ! V ! - ! -------- ! - ! ! Group ! ! - ! --------- ! - ! | ! - ! | ! - ! V ! - ! ----------- ! - ! ! Dataset ! ! - ! !---------- ! - ! ! - !------------------! - --[ C program ] - -
h5_mount.c
mountexample.f90
-
-
-NOTE: To download a tar file of the examples, including a Makefile,
-please go to the References page.
-
-
-
-
-C:
-
-FORTRAN:
-
-
- Below is a description of another scenario:
-
- Suppose the group
-
-
-
-
-
-
-
-C:
-
-FORTRAN:
-
-
-
-
-Remarks
-
-
-
-
-
-
-
-H5Fmount
/ h5fmount_f
.
- If no objects will be modified, the
- files can be opened with H5F_ACC_RDONLY
- (H5F_ACC_RDONLY_F
in FORTRAN).
- If the data is to be modified, the files should be opened with
- H5F_ACC_RDWR
(H5F_ACC_RDWR_F
in FORTRAN).
-
- herr_t H5Fmount (hid_t loc_id, const char *dsetname,
- hid_t file_id, hid_t access_prp)
-
-
- h5fmount_f (loc_id, dsetname, file_id, hdferr, access_prp)
-
- loc_id IN: INTEGER (HID_T)
- dsetname IN: CHARACTER (LEN=*)
- file_id IN: INTEGER (HID_T)
- hdferr OUT: INTEGER
- access_prp IN: INTEGER (HID_T), OPTIONAL
- (Default value: H5P_DEFAULT_F)
-
-
-
-/G
in the
- specified file. Since the group /G
is in the root
- group of the first file, one can also use just G
to
- identify it.
-G
were a member of
- the group H
in the first file.
- Then the mount point G
can be specified in
- two different ways:
-
-
-
- dsetname is H/G
.
-H
.
- dsetname is G
.
-H5P_DEFAULT
, can be used in C.
- In FORTRAN, this argument can be omitted or
- H5P_DEFAULT_F
can be used.
-H5Fmount
returns a non-negative
- value if successful and a negative value otherwise.
- With the FORTRAN routine, h5fmount_f
,
- the return value of the call is returned in hdferr:
- 0 if successful and -1 otherwise.
-D
.
- One can also modify data.
- If the dataset is modified while the file is mounted, it is
- modified in the original file after the file is unmounted.
-H5Funmount
/
-h5funmount_f
:
-
- herr_t H5Funmount (hid_t loc_id, const char *dsetname)
-
-
- h5funmount_f (loc_id, dsetname, hdferr)
-
- loc_id IN: INTEGER (HID_T)
- dsetname IN: CHARACTER (LEN=*)
- hdferr OUT: INTEGER
-
-
-
-/G
.
-H5Funmount
/ h5funmount_f
- does not close files. Files are closed with the respective calls to
- the H5Fclose
/ h5fclose_f
function.
-h5dump
utility cannot display files in memory.
- Therefore, no output of FILE1
after FILE2
- was mounted is provided.
-
-
-
- The National Center for Supercomputing Applications
- University of Illinois
- at Urbana-Champaign
-
-
-
-hdfhelp@ncsa.uiuc.edu
-
-
Last Modified: June 22, 2001
-
-
-
-
-
-
-
-
-
-
-
diff --git a/doc/html/Tutor/property.html b/doc/html/Tutor/property.html
deleted file mode 100644
index 13035f2..0000000
--- a/doc/html/Tutor/property.html
+++ /dev/null
@@ -1,167 +0,0 @@
-
-
-The property list interface provides a mechanism for adding functionality -to HDF5 calls, without increasing the number of arguments used -for a given call. -
-A property list is a collection of values which can
-be passed to various HDF5 functions to control features that
-are typically unimportant or whose default values are usually used
-(by specifying H5P_DEFAULT
/ H5P_DEFAULT_F
).
-
-The property list interface supports the unusual cases when: - -
- - - --The following example shows how to create a file with 64-bit object -offsets and lengths: -
- hid_t create_plist; - hid_t file_id; - - create_plist = H5Pcreate(H5P_FILE_CREATE); - H5Pset_sizes(create_plist, 8, 8); - - file_id = H5Fcreate("test.h5", H5F_ACC_TRUNC, - create_plist, H5P_DEFAULT); - . - . - . - H5Fclose(file_id); -- - -
-Following is an example of using the H5P_FILE_ACCESS property list for creating
-HDF5 files with the metadata and data split into different files:
-
-[ C program ]
- - h5split.c
-
-
-
-
-
-
-
-
-
-The following code sets the maximum size for the type conversion buffer
-and background buffer:
-Creating Datasets
-The Dataset Creation property list, H5P_DATASET_CREATE, applies to
-H5Dcreate() and controls information on how raw data
-is organized on disk and how the raw data is compressed. The dataset API
-partitions these terms by layout, compression, and external storage:
-
-
-
-
-
-
-
-[ C program ]
- - h5_extend.c
-
-[ C program ]
- - h5_crtextd.c
-Reading or Writing Data
-
-The Data Transfer property list, H5P_DATASET_XFER, is used to control
-various aspects of I/O, such as caching hints or collective I/O information.
-
- plist_xfer = H5Pcreate (H5P_DATASET_XFER);
- H5Pset_buffer(plist_xfer, (hsize_t)NX*NY*NZ, NULL, NULL);
- status = H5Dread (dataset, H5T_NATIVE_UCHAR, memspace, dataspace,
- plist_xfer);
-
-
-
-
-
-
-
-
- The National Center for Supercomputing Applications
- University of Illinois
- at Urbana-Champaign
-
-
-
-hdfhelp@ncsa.uiuc.edu
-
-
Last Modified: June 22, 2001
-
-
-
-
-
-
-
-
-
-
diff --git a/doc/html/Tutor/questions.html b/doc/html/Tutor/questions.html
deleted file mode 100644
index d0d3b51..0000000
--- a/doc/html/Tutor/questions.html
+++ /dev/null
@@ -1,159 +0,0 @@
-
-
-
-
harry
that is a member of a
- group called dick
, which, in turn, is a member of the root group.
--
-
-
-
-
H5Dcreate
- function, i.e., what information is needed to describe a dataset at
- creation time?
--
-
DATASPACE { SIMPLE (4 , 6 ) / ( 4 , 6 ) }
- -
-
moo
in the group boo
, which is
- in the group foo
, which, in turn, is in the root group.
- How would you specify an absolute name to access this dataset?
-moo
described in the
-previous section (Section 9, question 2) using a relative name.
-Describe a way to access the same dataset using an absolute name.
--During a dataset I/O operation, the library transfers raw data between memory -and the file. The data in memory can have a datatype different from that of -the file and can also be of a different size -(i.e., the data in memory is a subset of the dataset elements, or vice versa). -Therefore, to perform read or write operations, the application -program must specify: -
-The steps to read from or write to a dataset are -as follows: -
H5Dread
/h5dread_f
and
-H5Dwrite
/h5dwrite_f
-routines are used. -C: -
- status = H5Dread (set_id, mem_type_id, mem_space_id, file_space_id, - xfer_prp, buf ); - status = H5Dwrite (set_id, mem_type_id, mem_space_id, file_space_id, - xfer_prp, buf); - --FORTRAN: -
- CALL h5dread_f(dset_id, mem_type_id, buf, error, & - mem_space_id=mspace_id, file_space_id=fspace_id, & - xfer_prp=xfer_plist_id) - or - CALL h5dread_f(dset_id, mem_type_id, buf, error) - - - CALL h5dwrite_f(dset_id, mem_type_id, buf, error, & - mem_space_id=mspace_id, file_space_id=fspace_id, & - xfer_prp=xfer_plist_id) - or - CALL h5dwrite_f(dset_id, mem_type_id, buf, error) -- - -
-
/dset
,
-writes the dataset to the file, then reads the dataset back into
-memory. It then closes the dataset and file. h5_rdwt.c
rwdsetexample.f90
DatasetRdWt.java
H5Fopen
/h5fopen_f
opens an existing file and
- returns a file identifier.
--C: - hid_t H5Fopen (const char *name, unsigned access_mode, hid_t access_prp) - -FORTRAN: - h5fopen_f (name, access_mode, file_id, hdferr, access_prp) - - name CHARACTER(LEN=*) - access_mode INTEGER - (Possible values: H5F_ACC_RDWR_F, H5F_ACC_RDONLY_F) - file_id INTEGER(HID_T) - hdferr INTEGER - (Possible values: 0 on success and -1 on failure) - access_prp INTEGER(HID_T), OPTIONAL - --
-
H5F_ACC_RDWR
in C
- (H5F_ACC_RDWR_F
in FORTRAN)
- allows read/write access
- while H5F_ACC_RDONLY
in C
- (H5F_ACC_RDONLY_F
in FORTRAN)
- allows read-only access.
-
- -
H5P_DEFAULT
in C and H5P_DEFAULT_F
in FORTRAN
- specify the default file access property list.
- This parameter is optional in FORTRAN; if it is omitted, the default file
- access property list is used.
-
- -
-
H5Dopen
/h5dopen_f
opens an existing dataset
- with the name specified by name at the location specified by
- loc_id.
- For FORTRAN, the return value is passed in the hdferr parameter:
- 0 if successful, -1 if not. For C, the function returns the dataset
- identifier if successful, and a negative value if not.
- -C: -
- hid_t H5Dopen (hid_t loc_id, const char *name) --FORTRAN: -
- h5dopen_f(loc_id, name, hdferr) - - loc_id INTEGER(HID_T) - name CHARACTER(LEN=*) - hdferr INTEGER - (Possible values: 0 on success and -1 on failure) -- -
-
H5Dwrite
/h5dwrite_f
writes raw data
- from an application buffer to the specified
- dataset, converting from the datatype and dataspace of the dataset in
- memory to the datatype and dataspace of the dataset in the file.
--C: -
- herr_t H5Dwrite (hid_t dset_id, hid_t mem_type_id, hid_t mem_space_id, - hid_t file_space_id, hid_t xfer_prp, const void * buf) --FORTRAN: -
- h5dwrite_f (dset_id, mem_type_id, buf, hdferr, mem_space_id, & - file_space_id, xfer_prp) - - dset_id INTEGER(HID_T) - mem_type_id INTEGER(HID_T) - buf(*,...*) TYPE - hdferr INTEGER - (Possible values: 0 on success and -1 on failure) - mem_space_id INTEGER(HID_T), OPTIONAL - (Default value: H5S_ALL_F) - file_space_id INTEGER(HID_T), OPTIONAL - (Default value: H5S_ALL_F) - xfer_prp INTEGER(HID_T), OPTIONAL - (Default value: H5P_DEFAULT_F) --
- -
H5T_NATIVE_INT
in C
- (H5T_NATIVE_INTEGER
in FORTRAN) is an integer datatype
- for the machine on which the library was compiled.
- - -
H5S_ALL
in C (H5S_ALL_F
- in FORTRAN) is the default value and indicates that the whole dataspace
- in memory is selected for the I/O operation.
- This parameter is optional in FORTRAN; if it is omitted, the default
- will be used.
- - -
H5S_ALL
in C (H5S_ALL_F
in FORTRAN)
- is the default value and indicates that the entire dataspace of
- the dataset in the file is selected for the I/O operation.
- This parameter is optional in FORTRAN; if it is omitted, the default
- will be used.
- - -
H5P_DEFAULT
in C
- (H5P_DEFAULT_F
in FORTRAN) is the default value and
- indicates that the default data transfer property list is used.
- This parameter is optional in FORTRAN; if it is omitted, the default
- will be used.
- - -
- -
-
H5Dread
/h5dread_f
reads raw data from the
- specified dataset to an application buffer,
- converting from the file datatype and dataspace to the memory datatype and
- dataspace.
--C: -
- herr_t H5Dread (hid_t dset_id, hid_t mem_type_id, hid_t mem_space_id, - hid_t file_space_id, hid_t xfer_prp, void * buf) --FORTRAN: -
- h5dread_f (dset_id, mem_type_id, buf, hdferr, mem_space_id, & - file_space_id, xfer_prp) - - dset_id INTEGER(HID_T) - mem_type_id INTEGER(HID_T) - buf(*,...*) TYPE - hdferr INTEGER - (Possible values: 0 on success and -1 on failure) - mem_space_id INTEGER(HID_T), OPTIONAL - (Default value: H5S_ALL_F) - file_space_id INTEGER(HID_T), OPTIONAL - (Default value: H5S_ALL_F) - xfer_prp INTEGER(HID_T), OPTIONAL - (Default value: H5P_DEFAULT_F) - -- -
-
- -
H5T_NATIVE_INT
in C
- (H5T_NATIVE_INTEGER
in FORTRAN) is an integer datatype
- for the machine on which the library was compiled.
- - -
H5S_ALL
in C (H5S_ALL_F
- in FORTRAN) is the default value and indicates that the whole dataspace
- in memory is selected for the I/O operation.
- This parameter is optional in FORTRAN; if it is omitted, the default
- will be used.
- - -
H5S_ALL
in C (H5S_ALL_F
in FORTRAN)
- is the default value and indicates that the entire dataspace of
- the dataset in the file is selected for the I/O operation.
- This parameter is optional in FORTRAN; if it is omitted, the default
- will be used.
-
- -
H5P_DEFAULT
in C
- (H5P_DEFAULT_F
in FORTRAN) is the default value and
- indicates that the default data transfer property list is used.
- This parameter is optional in FORTRAN; if it is omitted, the default
- will be used.
- - -
- -
dset.h5
(created by the C program).
-dsetf.h5
(created by the FORTRAN
-program).
-
- Fig. 6.1a dset.h5
in DDL
-
- HDF5 "dset.h5" { - GROUP "/" { - DATASET "dset" { - DATATYPE { H5T_STD_I32BE } - DATASPACE { SIMPLE ( 4, 6 ) / ( 4, 6 ) } - DATA { - 1, 2, 3, 4, 5, 6, - 7, 8, 9, 10, 11, 12, - 13, 14, 15, 16, 17, 18, - 19, 20, 21, 22, 23, 24 - } - } - } - } --
- Fig. 6.1b dsetf.h5
in DDL
-
-HDF5 "dsetf.h5" { -GROUP "/" { - DATASET "dset" { - DATATYPE { H5T_STD_I32BE } - DATASPACE { SIMPLE ( 6, 4 ) / ( 6, 4 ) } - DATA { - 1, 7, 13, 19, - 2, 8, 14, 20, - 3, 9, 15, 21, - 4, 10, 16, 22, - 5, 11, 17, 23, - 6, 12, 18, 24 - } - } -} -} -- - - - - - -
-
-
-
-
-
-An object reference is based on the relative file address of the object header -in the file and is constant for the life of the object. Once a reference to -an object is created and stored in a dataset in the file, it can be used -to dereference the object it points to. References are handy for creating -a file index or for grouping related objects by storing references to them in -one dataset. -
-
-
-
-
-
-
-
-
-After that, it opens and reads the reference dataset from the file created
-previously, then dereferences the references.
-
-Creating and Storing References to Objects
-The following steps are involved in creating and storing file references
-to objects:
-
-
-
-
-Reading References and Accessing Objects Using References
-
-The following steps are involved in reading references to objects and
-accessing objects using references:
-
-
-
-H5T_STD_REF_OBJ
- datatype must be used to describe the memory datatype.
- Programming Example
-
-Description
-The example below first creates a group in the file.
-It then creates two datasets and a named datatype in that group.
-References to these four objects are stored in a dataset in the root group.
-
-[C example ]
- -
-NOTE: To download a tar file of the examples, including a Makefile,
-please go to the References page.
-h5_ref2obj.c
-[FORTRAN example ]
- - refobjexample.f90
-
-Following is the output from the examples: -
- Data has been successfully written to the dataset - Stored datatype is of a FLOAT class -- - - -
-C:
- dset2_id = H5Dcreate (file_id, dsetname, H5T_STD_REF_OBJ, - space_id, H5P_DEFAULT); --
-FORTRAN:
- CALL h5dcreate_f (file_id, dsetname, H5T_STD_REF_OBJ, & - space_id, dset2_id, hdferr) --
- Notice that the H5T_STD_REF_OBJ
- datatype is used to specify that references to objects will be
- stored. The datatype H5T_STD_REF_DSETREG
is
- used to store the dataset
- region references and will be discussed later in this tutorial.
-
-
H5Rcreate
/ h5rcreate_f
- create references to the objects. The signature of
- H5Rcreate
/ h5rcreate_f
is as follows:
--C:
- herr_t H5Rcreate (void* ref, hid_t loc_id, const char *name, - H5R_type_t ref_type, hid_t space_id) --
-FORTRAN:
- h5rcreate_f (loc_id, name, ref, hdferr) - - loc_id IN: INTEGER (HID_T) - name IN: CHARACTER(LEN=*) - ref OUT: TYPE (hobj_ref_t_f) - hdferr OUT: INTEGER --
- - -
-
-
H5R_OBJECT
.
- References to dataset regions, H5R_DATASET_REGION
,
- will be discussed later in this tutorial.
--
-
h5rcreate_f
- call is in hdferr: 0 if successful, -1 otherwise.
- In C, H5Rcreate
returns a non-negative value if
- successful and a negative value otherwise.
--
H5Dwrite
/ h5dwrite_f
writes a
- dataset containing the references.
- Notice that the H5T_STD_REF_OBJ
datatype is used to
- describe the dataset's memory datatype.
--
H5Dread
/ h5dread_f
- reads the dataset containing the
- references to the objects. The H5T_STD_REF_OBJ
memory
- datatype was
- used to read references to memory.
--
H5Rdereference
/ h5rdereference_f
obtains
- the object's identifier. The signature is as follows:
--C:
- hid_t H5Rdereference (hid_t dset_id, H5R_type_t ref_type, - void *ref) --
-FORTRAN:
- h5rdereference_f (dset_id, ref, obj_id, hdferr) - - dset_id IN: INTEGER (HID_T) - ref IN: TYPE (hobj_ref_t_f) - obj_id OUT: INTEGER (HID_T) - hdferr OUT: INTEGER --
-
-
-
-
- In our simplified situation, we know what type of object was
- stored in the dataset. When the type of the object is unknown,
- H5Rget_object_type
should be used to identify the type
- of object the reference points to.
-
-HDF5 File Created by C Example -
-Fig. A REF_OBJ.h5
in DDL
-
-
-HDF5 "REF_OBJ.h5" { -GROUP "/" { - GROUP "GROUP1" { - GROUP "GROUP2" { - } - } - DATASET "INTEGERS" { - DATATYPE { H5T_STD_I32BE } - DATASPACE { SIMPLE ( 5 ) / ( 5 ) } - DATA { - 1, 2, 3, 4, 5 - } - } - DATATYPE "MYTYPE" { - } - DATASET "OBJECT_REFERENCES" { - DATATYPE { H5T_REFERENCE } - DATASPACE { SIMPLE ( 4 ) / ( 4 ) } - DATA { - GROUP 0:1320, GROUP 0:2272, DATASET 0:2648, DATATYPE 0:3244 - } - } -} -} - - --HDF5 File Created by FORTRAN Example: -
-Fig. B FORTRAN.h5
in DDL
-
-
-HDF5 "FORTRAN.h5" { -GROUP "/" { - GROUP "GROUP1" { - GROUP "GROUP2" { - } - } - DATASET "INTEGERS" { - DATATYPE { H5T_STD_I32BE } - DATASPACE { SIMPLE ( 5 ) / ( 5 ) } - DATA { - 1, 2, 3, 4, 5 - } - } - DATATYPE "MyType" { - } - DATASET "OBJECT_REFERENCES" { - DATATYPE { H5T_REFERENCE } - DATASPACE { SIMPLE ( 4 ) / ( 4 ) } - DATA { - GROUP 0:1344, GROUP 0:2320, DATASET 0:2696, DATATYPE 0:3292 - } - } -} -} --
- -Notice how the data in the reference dataset is described. The two numbers -separated by a colon represent a unique identifier of the object. These -numbers are constant for the life of the object. - - - -
-A dataset region reference points to the dataset selection by storing the
-relative file address of the dataset header and the global heap offset of
-the referenced selection. The selection referenced is located by retrieving
-the coordinates of the areas in the selection from the global heap. This
-internal mechanism of storing and retrieving dataset selections is transparent
-to the user. A reference to a dataset selection (a region) is constant for
-the life of the dataset.
-
-
-
-
-
-
-
-
-
-After creating the dataset and references, the program reads the dataset
-containing the dataset region references.
-It reads data from the dereferenced dataset and displays the number of
-elements and raw data. Then it reads two selections, a hyperslab selection
-and a point selection. The program queries a number of points in the
-hyperslab and their coordinates and displays them. Then it queries a number of
-selected points and their coordinates and displays the information.
-To obtain the example, download:
-Creating and Storing References to Dataset Regions
-The following steps are involved in creating and storing references to
-dataset regions:
-
-
-
-
-
-Reading References to Dataset Regions
-
-The following steps are involved in reading references to dataset
-regions and referenced dataset regions (selections).
-
-
-
-H5T_STD_REF_DSETREG
must be used during
-the read operation.
-H5Rdereference
/ h5rdereference_f
to
-obtain the dataset identifier from the read
- dataset region reference.
- H5Rget_region
/ h5rget_region_f
to obtain
- the dataspace identifier for the dataset
- containing the selection from the read dataset region reference.
- Programming Example
-
-Description
-The example below first creates a dataset in the file. Then it creates a
-dataset to store references to the dataset regions (selections).
-The first selection is a 6 x 6 hyperslab. The second selection is a point
-selection in the same dataset.
-References to both selections are created and stored in the buffer and then
-written to the dataset in the file.
-
-
-[C example ]
- -
-NOTE: To download a tar file of the examples, including a Makefile,
-please go to the References page.
-h5_ref2reg.c
-[FORTRAN example ]
- - refregexample.f90
-
- -Following is the output from the examples: -
-Output of C Example -
-Selected hyperslab: -0 0 0 3 3 4 0 0 0 -0 0 0 3 4 4 0 0 0 -Selected points: -1 0 0 0 0 0 0 0 6 -0 0 0 0 0 0 5 0 0 --Output of FORTRAN Example -
- Hyperslab selection - - 3*0, 2*3, 4, 3*0 - 3*0, 3, 2*4, 3*0 - - Point selection - - 1, 7*0, 6 - 6*0, 5, 2*0 -- - -
H5T_STD_REF_DSETREG
datatype is used.
--C: -
- dset1 = H5Dcreate (file_id, dsetnamer, H5T_STD_REF_DSETREG, - spacer_id, creation_prp); --
-FORTRAN: -
- CALL h5dcreate_f (file_id, dsetnamer, H5T_STD_REF_DSETREG, & - spacer_id, dset1, hdferr, creation_prp) --
- -
H5Sselect_hyperslab
/
- h5sselect_hyperslab_f
and
- H5Sselect_elements
/ h5sselect_elements_f
.
- The identifier was obtained when the dataset was
- created and it describes the dataset's dataspace. We did not close it when
- the dataset was closed to decrease the number of function calls used
- in the example.
- In a real application program, one should open the dataset and determine
- its dataspace using the H5Dget_space
/
- h5dget_space_f
function.
--
H5Rcreate
/ h5rcreate_f
is used to create a
-dataset region reference. The signature of the function is as follows:
--C: -
-    herr_t H5Rcreate (void *ref, hid_t loc_id, const char *name,
-                      H5R_type_t ref_type, hid_t space_id)
-FORTRAN: -
-    h5rcreate_f (loc_id, name, space_id, ref, hdferr)
-
-         loc_id    IN:  INTEGER (HID_T)
-         name      IN:  CHARACTER (LEN=*)
-         space_id  IN:  INTEGER (HID_T)
-         ref       OUT: TYPE(hdset_reg_ref_t_f)
-         hdferr    OUT: INTEGER
-
-
-
H5R_DATASET_REGION
datatype is used.
--
-
H5Rcreate
returns a non-negative
- value if successful and a negative value otherwise. In FORTRAN, the
- return code from the h5rcreate_f
subroutine is
- returned in hdferr: 0 if successful and -1 otherwise.
--
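The creation sequence described above can be condensed into a short C sketch. This is an illustrative sketch, not the full example program: identifiers such as file_id are assumed to be valid, the coordinate-array cast follows the 1.4-era tutorial idiom, and error checking is omitted.

```c
#include "hdf5.h"

/* Sketch: create two dataset region references (one hyperslab, one
   point selection) and write them to a 1-D dataset whose datatype is
   H5T_STD_REF_DSETREG.  Error checking is omitted. */
void write_region_refs(hid_t file_id)
{
    hsize_t dimsr[1] = {2};            /* room for two references   */
    hsize_t start[2] = {0, 3};         /* hyperslab (0,3)-(1,5)     */
    hsize_t count[2] = {2, 3};
    hdset_reg_ref_t ref[2];

    hid_t dset_id   = H5Dopen(file_id, "MATRIX");
    hid_t space_id  = H5Dget_space(dset_id);
    hid_t spacer_id = H5Screate_simple(1, dimsr, NULL);
    hid_t dsetr_id  = H5Dcreate(file_id, "REGION_REFERENCES",
                                H5T_STD_REF_DSETREG, spacer_id,
                                H5P_DEFAULT);

    /* First reference: a 2 x 3 hyperslab in MATRIX. */
    H5Sselect_hyperslab(space_id, H5S_SELECT_SET, start, NULL, count, NULL);
    H5Rcreate(&ref[0], file_id, "MATRIX", H5R_DATASET_REGION, space_id);

    /* Second reference: three selected points in the same dataset. */
    hsize_t coord[3][2] = {{0, 0}, {1, 6}, {0, 8}};
    H5Sselect_elements(space_id, H5S_SELECT_SET, 3,
                       (const hsize_t **)coord);
    H5Rcreate(&ref[1], file_id, "MATRIX", H5R_DATASET_REGION, space_id);

    /* Store both references in the reference dataset. */
    H5Dwrite(dsetr_id, H5T_STD_REF_DSETREG, H5S_ALL, H5S_ALL,
             H5P_DEFAULT, ref);
}
```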
H5Dread
/ h5dread_f
with
- the H5T_STD_REF_DSETREG
datatype specified.
--
-C: -
- dset2 = H5Rdereference (dset1, H5R_DATASET_REGION, &ref_out[0]); --
-FORTRAN: -
- CALL h5rdereference_f (dset1, ref_out(1), dset2, hdferr) --
- or to obtain spatial information (dataspace and selection) with the call
- to H5Rget_region
/ h5rget_region_f
:
-
-C: -
- dspace2 = H5Rget_region (dset1, H5R_DATASET_REGION, &ref_out[0]); --
-FORTRAN: -
- CALL H5rget_region_f (dset1, ref_out(1), dspace2, hdferr) --
- The reference to the dataset region has information for both the dataset - itself and its selection. In both calls, -
-
-The C function returns the dataspace identifier or a negative value if it
-is not successful. In FORTRAN, the dataset identifier or dataspace
-identifier is returned in dset2 or dspace2 and the return code for the
-call is returned in hdferr: 0 if successful and -1 otherwise.
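The read-back side of the example can be sketched the same way. Here dset1 is assumed to be an open dataset of region references; error checking is omitted.

```c
#include "hdf5.h"

/* Sketch: read stored region references back, then recover both the
   referenced dataset and the selection each reference describes. */
void read_region_refs(hid_t dset1)
{
    hdset_reg_ref_t ref_out[2];

    /* Step 1: read the references using the H5T_STD_REF_DSETREG type. */
    H5Dread(dset1, H5T_STD_REF_DSETREG, H5S_ALL, H5S_ALL,
            H5P_DEFAULT, ref_out);

    /* Step 2: dereference to obtain an identifier for the dataset. */
    hid_t dset2 = H5Rdereference(dset1, H5R_DATASET_REGION, &ref_out[0]);

    /* Step 3: recover the dataspace that carries the stored selection. */
    hid_t dspace2 = H5Rget_region(dset1, H5R_DATASET_REGION, &ref_out[0]);

    /* The selection can now be inspected, for example with
       H5Sget_select_npoints(dspace2). */
    H5Sclose(dspace2);
    H5Dclose(dset2);
}
```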
-
-HDF5 File Created by C Example -
-Fig. A REF_REG.h5
in DDL
-
- -HDF5 "REF_REG.h5" { -GROUP "/" { - DATASET "MATRIX" { - DATATYPE { H5T_STD_I32BE } - DATASPACE { SIMPLE ( 2, 9 ) / ( 2, 9 ) } - DATA { - 1, 1, 2, 3, 3, 4, 5, 5, 6, - 1, 2, 2, 3, 4, 4, 5, 6, 6 - } - } - DATASET "REGION_REFERENCES" { - DATATYPE { H5T_REFERENCE } - DATASPACE { SIMPLE ( 2 ) / ( 2 ) } - DATA { - DATASET 0:744 {(0,3)-(1,5)}, DATASET 0:744 {(0,0), (1,6), (0,8)} - } - } -} -} - --HDF5 File Created by FORTRAN Example: -
-Fig. B FORTRAN.h5
in DDL
-
- -HDF5 "FORTRAN.h5" { -GROUP "/" { - DATASET "MATRIX" { - DATATYPE { H5T_STD_I32BE } - DATASPACE { SIMPLE ( 9, 2 ) / ( 9, 2 ) } - DATA { - 1, 1, - 1, 2, - 2, 2, - 3, 3, - 3, 4, - 4, 4, - 5, 5, - 5, 6, - 6, 6 - } - } - DATASET "REGION_REFERENCES" { - DATATYPE { H5T_REFERENCE } - DATASPACE { SIMPLE ( 2 ) / ( 2 ) } - DATA { - DATASET 0:744 {(3,0)-(5,1)}, DATASET 0:744 {(0,0), (6,1), (8,0)} - } - } -} -} -- -Notice how the raw data in the dataset with the dataset regions is displayed. -Each element of the raw data consists of a reference to the dataset -(DATASET number1:number2) and its selected region. -If the selection is a hyperslab, the corner coordinates of the hyperslab -are displayed. -For the point selection, the coordinates of each point are displayed. - - - - - -
H5Sselect_hyperslab
/ h5sselect_hyperslab_f
.
--
sds.h5
-(sdsf.h5
in FORTRAN). It
-selects a 3 x 4 hyperslab from the dataset as follows (Dimension 0 is
-offset by 1 and Dimension 1 is offset by 2):
--5 x 6 array: -
-     .   .   .   .   .   .
-     .   .   X   X   X   X
-     .   .   X   X   X   X
-     .   .   X   X   X   X
-     .   .   .   .   .   .
-Then it reads the hyperslab from this file into a 2-dimensional plane -(size 7 x 7) of a 3-dimensional array (size 7 x 7 x 3), as -follows (with Dimension 0 offset by 3): -
-     .   .   .   .   .   .   .
-     .   .   .   .   .   .   .
-     .   .   .   .   .   .   .
-     X   X   X   X   .   .   .
-     X   X   X   X   .   .   .
-     X   X   X   X   .   .   .
-     .   .   .   .   .   .   .
- -To obtain the example, download: -
h5_hyperslab.c
hyperslab.f90
HyperSlab.java
-
-
-
-
-C:
-
-FORTRAN:
-
-
-
-
-
-
-
-
-The start, stride, count, and block arrays must
-be the same size as the rank of the dataspace.
-
-
-
-The FORTRAN example does not use these calls, though they
-are available as
-
-
-
-
-
-
-
-
-
-Remarks
-
-
-
-
-
-H5Sselect_hyperslab
/ h5sselect_hyperslab_f
-selects a hyperslab region to
-add to the current selected region for a specified dataspace.
-
- herr_t H5Sselect_hyperslab (hid_t space_id, H5S_seloper_t operator,
- const hsize_t *start, const hsize_t *stride,
- const hsize_t *count, const hsize_t *block )
-
-
- h5sselect_hyperslab_f (space_id, operator, start, count, &
- hdferr, stride, block)
-
- space_id IN: INTEGER(HID_T)
- operator IN: INTEGER
- start IN: INTEGER(HSIZE_T), DIMENSION(*)
- count IN: INTEGER(HSIZE_T), DIMENSION(*)
- hdferr OUT: INTEGER
- stride IN: INTEGER(HSIZE_T), DIMENSION(*), OPTIONAL
- block IN: INTEGER(HSIZE_T), DIMENSION(*), OPTIONAL
-
-
-
-
-
-
-H5S_SELECT_SET
(H5S_SELECT_SET_F
in FORTRAN)
- H5S_SELECT_OR
(H5S_SELECT_OR_F
in FORTRAN)
-
-
-H5Dget_space / h5dget_space_f:
-
-
-H5Sget_simple_extent_dims:
- H5Sget_simple_extent_ndims:
- h5sget_simple_extent_dims_f
and
-h5sget_simple_extent_ndims_f
.
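In application code the rank and dimensions would be queried as sketched below before sizing the start/count arrays. The dataset name is a placeholder and error checking is omitted.

```c
#include "hdf5.h"

/* Sketch: discover the rank and dimensions of a dataset's dataspace
   before selecting a hyperslab in it. */
void query_extent(hid_t file_id)
{
    hid_t dataset   = H5Dopen(file_id, "IntArray");
    hid_t dataspace = H5Dget_space(dataset);

    int     rank = H5Sget_simple_extent_ndims(dataspace);
    hsize_t dims[H5S_MAX_RANK], maxdims[H5S_MAX_RANK];
    H5Sget_simple_extent_dims(dataspace, dims, maxdims);

    /* rank and dims[0..rank-1] now size the start, stride, count,
       and block arrays passed to H5Sselect_hyperslab. */
    H5Sclose(dataspace);
    H5Dclose(dataset);
}
```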
-
-
-
-
- The National Center for Supercomputing Applications
- University of Illinois
- at Urbana-Champaign
-
-
-
-hdfhelp@ncsa.uiuc.edu
-
-
Last Modified: June 22, 2001
-
-
-
-
-
-
-
-
-
-
-
diff --git a/doc/html/Tutor/selectc.html b/doc/html/Tutor/selectc.html
deleted file mode 100644
index 59c464c..0000000
--- a/doc/html/Tutor/selectc.html
+++ /dev/null
@@ -1,265 +0,0 @@
-
-
H5Sselect_elements
/
-h5sselect_elements_f
function.
-
-The H5Scopy
/ h5scopy_f
function allows
-you to make an exact copy of a dataspace.
-This can reduce the number of function calls needed when
-selecting a dataspace.
-
-
H5Sselect_elements
/
-h5sselect_elements_f
-to select individual points in a dataset and how to use
-H5Scopy
/ h5scopy_f
-to make a copy of a dataspace.
-h5_copy.c
selectele.f90
Copy.java
-H5Sselect_elements
/ h5sselect_elements_f
-selects array elements to be
-included in the selection for a dataspace:
--C: -
-    herr_t H5Sselect_elements (hid_t space_id, H5S_seloper_t operator,
-                               size_t num_elements,
-                               const hsize_t **coord )
-FORTRAN: -
-    h5sselect_elements_f (space_id, operator, num_elements, coord, hdferr)
-
-         space_id      IN:  INTEGER(HID_T)
-         operator      IN:  INTEGER
-         num_elements  IN:  INTEGER
-         coord         IN:  INTEGER(HSIZE_T), DIMENSION(*,*)
-         hdferr        OUT: INTEGER
-
-
H5S_SELECT_SET
(H5S_SELECT_SET_F
in FORTRAN)
- H5S_SELECT_OR
(H5S_SELECT_OR_F
in FORTRAN)
- -
NUMP x RANK
in C
-(RANK x NUMP
in FORTRAN)
-where NUMP
is the number of selected points
-and RANK
is the rank of the dataset.
-Note that these coordinates are 0-based in C and 1-based in FORTRAN.
-Consider the non-zero elements of the following array:
-
-    0 59  0 53
-    0  0  0  0
-    0  0  1  0
-
-In C, the coord array selecting these points would be as follows:
-
-    0 1
-    0 3
-    2 2
-
-While in FORTRAN, the coord array would be as follows:
-
-    1 1 3
-    2 4 3
-
-
H5Scopy
/ h5scopy_f
creates an exact copy of a dataspace:
-C:
-
-    hid_t H5Scopy (hid_t space_id)
-
-FORTRAN:
-
-    h5scopy_f (space_id, new_space_id, hdferr)
-
-         space_id      IN:  INTEGER(HID_T)
-         new_space_id  OUT: INTEGER(HID_T)
-         hdferr        OUT: INTEGER
-
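A typical use of H5Scopy is to describe an extent once and reuse an exact copy for a second dataset, as sketched below. The dataset names follow the example; error checking is omitted.

```c
#include "hdf5.h"

/* Sketch: create one 3 x 4 dataspace and reuse an exact copy of it
   for a second dataset instead of describing the extent twice. */
void two_datasets_one_shape(hid_t file1, hid_t file2)
{
    hsize_t dims[2] = {3, 4};
    hid_t   space1  = H5Screate_simple(2, dims, NULL);
    hid_t   space2  = H5Scopy(space1);   /* exact copy, same extent */

    hid_t d1 = H5Dcreate(file1, "Copy1", H5T_NATIVE_INT,
                         space1, H5P_DEFAULT);
    hid_t d2 = H5Dcreate(file2, "Copy2", H5T_NATIVE_INT,
                         space2, H5P_DEFAULT);

    H5Dclose(d1);
    H5Dclose(d2);
    H5Sclose(space1);
    H5Sclose(space2);
}
```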
-
-
-
-
-
-Fig. S.1a
-Fig. S.2a File Contents
-
-Following is the DDL for copy1.h5 and copy2.h5, as viewed with
-the following commands:
-
-h5dump copy1.h5
-
-h5dump copy2.h5
-
-
-C:copy1.h5
in DDL
-
- HDF5 "copy1.h5" {
- GROUP "/" {
- DATASET "Copy1" {
- DATATYPE { H5T_STD_I32BE }
- DATASPACE { SIMPLE ( 3, 4 ) / ( 3, 4 ) }
- DATA {
- 0, 59, 0, 53,
- 0, 0, 0, 0,
- 0, 0, 0, 0
- }
- }
- }
- }
-
-Fig. S.1b copy2.h5
in DDL
-
- HDF5 "copy2.h5" {
- GROUP "/" {
- DATASET "Copy2" {
- DATATYPE { H5T_STD_I32BE }
- DATASPACE { SIMPLE ( 3, 4 ) / ( 3, 4 ) }
- DATA {
- 1, 59, 1, 53,
- 1, 1, 1, 1,
- 1, 1, 1, 1
- }
- }
- }
- }
-
-
-FORTRAN:copy1.h5
in DDL
-
- HDF5 "copy1.h5" {
- GROUP "/" {
- DATASET "Copy1" {
- DATATYPE { H5T_STD_I32BE }
- DATASPACE { SIMPLE ( 4, 3 ) / ( 4, 3 ) }
- DATA {
- 0, 0, 0,
- 53, 0, 0,
- 0, 0, 0,
- 59, 0, 0
- }
- }
- }
- }
-
-Fig. S.2b copy2.h5
in DDL
-
- HDF5 "copy2.h5" {
- GROUP "/" {
- DATASET "Copy2" {
- DATATYPE { H5T_STD_I32BE }
- DATASPACE { SIMPLE ( 4, 3 ) / ( 4, 3 ) }
- DATA {
- 1, 1, 1,
- 53, 1, 1,
- 1, 1, 1,
- 59, 1, 1
- }
- }
- }
- }
-
-
-
-
-
-
-
-
- The National Center for Supercomputing Applications
- University of Illinois
- at Urbana-Champaign
-
-
-
-hdfhelp@ncsa.uiuc.edu
-
-
Last Modified: June 22, 2001
-
-
-
-
-
-
-
-
-
-
-
diff --git a/doc/html/Tutor/software.html b/doc/html/Tutor/software.html
deleted file mode 100644
index 074bfc8..0000000
--- a/doc/html/Tutor/software.html
+++ /dev/null
@@ -1,85 +0,0 @@
-
-
-If you are using the pre-compiled binaries, you must also obtain the GZIP
-library: the binaries were compiled with GZIP support but do not include
-the library itself. We provide the GZIP library for the platforms on
-which we tested at:
-
-ftp://ftp.ncsa.uiuc.edu/HDF/gzip/
-
-You can build the HDF5 library yourself, if need be. The source code
-can be obtained from:
-
-ftp://ftp.ncsa.uiuc.edu/HDF/HDF5/current/src/
-
-For further information regarding HDF5, check the HDF5 home page:
-
-http://hdf.ncsa.uiuc.edu/HDF5/
-
-
-
-HDF5 Utilities - h5ls/h5dump
-Glossary
-References
-Example Programs from this Tutorial
-
-
- The HDF5 version number is a set of three integer values
- written as either The The The The It's often convenient to drop the release number when referring
- to a version of the library, like saying version 1.2 of HDF5.
- The release number can be any value in this case.
-
- Version 1.0.0 was released for alpha testing the first week of
- March, 1998. The development version number was incremented to
- 1.0.1 and remained constant until the last week of April,
- when the release number started to increase and development
- versions were made available to people outside the core HDF5
- development team.
-
- Version 1.0.23 was released mid-July as a second alpha
- version.
-
- Version 1.1.0 will be the first official beta release but the
- 1.1 branch will also serve as a development branch since we're
- not concerned about providing bug fixes separate from normal
- development for the beta version.
-
- After the beta release we rolled back the version number so the
- first release is version 1.0 and development will continue on
- version 1.1. We felt that an initial version of 1.0 was more
- important than continuing to increment the pre-release version
- numbers.
-
- The motivation for separate public and development versions is
- that the public version will receive only bug fixes while the
- development version will receive new features. This also allows
- us to release bug fixes expediently without waiting for the
- development version to reach a stable state.
-
- Eventually, the development version will near completion and a
- new development branch will fork while the original one enters a
- feature freeze state. When the original development branch is
- ready for release the minor version number will be incremented
- to an even value.
-
-
- The library provides a set of macros and functions to query and
- check version numbers.
-
- The HDF5 development must proceed in such a manner as to
- satisfy the following conditions:
-
- There's at least one invariant: new object features introduced
- in the HDF5 file format (like 2-d arrays of structs) might be
- impossible to "translate" to a format that an old HDF4
- application can understand either because the HDF4 file format
- or the HDF4 API has no mechanism to describe the object.
-
- What follows is one possible implementation based on how
- Condition B was solved in the AIO/PDB world. It also attempts
- to satisfy these goals:
-
- The proposed implementation uses wrappers to handle
- compatibility issues. A Format-X file is wrapped in a
- Format-Y file by creating a Format-Y skeleton that replicates
- the Format-X meta data. The Format-Y skeleton points to the raw
- data stored in Format-X without moving the raw data. The
- restriction is that raw data storage methods in Format-Y are a
- superset of raw data storage methods in Format-X (otherwise the
- raw data must be copied to Format-Y). We're assuming that meta
- data is small relative to the entire file.
-
- The wrapper can be a separate file that has pointers into the
- first file or it can be contained within the first file. If
- contained in a single file, the file can appear as a Format-Y
- file or simultaneously a Format-Y and Format-X file.
-
- The Format-X meta-data can be thought of as the original
- wrapper around raw data and Format-Y is a second wrapper around
- the same data. The wrappers are independent of one another;
- modifying the meta-data in one wrapper causes the other to
- become out of date. Modification of raw data doesn't invalidate
- either view as long as the meta data that describes its storage
- isn't modified. For instance, an array element can change values
- if storage is already allocated for the element, but if storage
- isn't allocated then the meta data describing the storage must
- change, invalidating all wrappers but one.
-
- It's perfectly legal to modify the meta data of one wrapper
- without modifying the meta data in the other wrapper(s). The
- illegal part is accessing the raw data through a wrapper which
- is out of date.
-
- If raw data is wrapped by more than one internal wrapper
- (internal means that the wrapper is in the same file as
- the raw data) then access to that file must assume that
- unreferenced parts of that file contain meta data for another
- wrapper and cannot be reclaimed as free memory.
-
- Since this is a temporary situation which can't be
- automatically detected by the HDF5 library, we must rely
- on the application to notify the HDF5 library whether or not it
- must satisfy Condition B. (Even if we don't rely on the
- application, at some point someone is going to remove the
- Condition B constraint from the library.) So the module that
- handles Condition B is conditionally compiled and then enabled
- on a per-file basis.
-
- If the application desires to produce an HDF4 file (determined
- by arguments to An internal HDF4 wrapper would be used if the HDF5 file is
- writable and the user doesn't mind that the HDF5 file is
- modified. An external wrapper would be used if the file isn't
- writable or if the user wants the data file to be primarily HDF5
- but a few applications need an HDF4 view of the data.
-
- Modifying through the HDF5 library an HDF5 file that has
- internal HDF4 wrapper should invalidate the HDF4 wrapper (and
- optionally regenerate it when Modifying through the HDF5 library an HDF5 file that has an
- external HDF4 wrapper will cause the HDF4 wrapper to become out
- of date (but possibly regenerated during Modifying through the HDF4 library an HDF5 file that has an
- internal or external HDF4 wrapper will cause the HDF5 wrapper to
- become out of date. However, there is no way for the old HDF4
- library to notify the HDF5 wrapper that it's out of date.
- Therefore the HDF5 library must be able to detect when the HDF5
- wrapper is out of date and be able to fix it. If the HDF4
- wrapper is complete then the easy way is to ignore the original
- HDF5 wrapper and generate a new one from the HDF4 wrapper. The
- other approach is to compare the HDF4 and HDF5 wrappers and
- assume that if they differ HDF4 is the right one, if HDF4 omits
- data then it was because HDF4 is a partial wrapper (rather than
- assume HDF4 deleted the data), and if HDF4 has new data then
- copy the new meta data to the HDF5 wrapper. On the other hand,
- perhaps we don't need to allow these situations (modifying an
- HDF5 file with the old HDF4 library and then accessing it with
- the HDF5 library is either disallowed or causes HDF5 objects
- that can't be described by HDF4 to be lost).
-
- To convert an HDF5 file to an HDF4 file on demand, one simply
- opens the file with the HDF4 flag and closes it. This is also
- how AIO implemented backward compatibility with PDB in its file
- format.
-
- This condition must be satisfied for all time because there
- will always be archived HDF4 files. If a pure HDF4 file (that
- is, one without HDF5 meta data) is opened with an HDF5 library,
- the If an external and temporary HDF5 wrapper is desired, the
- wrapper is created through the cache like all other HDF5 files.
- The data appears on disk only if a particular cached datum is
- preempted. Instead of calling External wrappers are quite obvious: they contain only things
- from the format specs for the wrapper and nothing from the
- format specs of the format which they wrap.
-
- An internal HDF4 wrapper is added to an HDF5 file in such a way
- that the file appears to be both an HDF4 file and an HDF5
- file. HDF4 requires an HDF4 file header at file offset zero. If
- a user block is present then we just move the user block down a
- bit (and truncate it) and insert the minimum HDF4 signature.
- The HDF4
- When such a file is opened by the HDF5 library for
- modification it shifts the user block back down to address zero
- and fills with zeros, then truncates the file at the end of the
- HDF5 data or adds the trailing HDF4 wrapper to the free
- list. This prevents HDF4 applications from reading the file with
- an out of date wrapper.
-
- If there is no user block then we have a problem. The HDF5
- super block must be moved to make room for the HDF4 file header.
- But moving just the super block causes problems because all file
- addresses stored in the file are relative to the super block
- address. The only option is to shift the entire file contents
- by 512 bytes to open up a user block (too bad we don't have
- hooks into the Unix i-node stuff so we could shift the entire
- file contents by the size of a file system page without ever
- performing I/O on the file :-)
-
- Is it possible to place an HDF5 wrapper in an HDF4 file? I
- don't know enough about the HDF4 format, but I would suspect it
- might be possible to open a hole at file address 512 (and
- possibly before) by moving some things to the end of the file
- to make room for the HDF5 signature. The remainder of the HDF5
- wrapper goes at the end of the file and entries are added to the
- HDF4 Conversion programs that copy an entire HDF4 file to a separate,
- self-contained HDF5 file and vice versa might be useful.
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-HDF Help Desk
-
-Copyright by the Board of Trustees of the University of Illinois.
- Since file data structures can be cached in memory by the H5AC
- package it becomes problematic to move such a data structure in
- the file. One cannot just copy a portion of the file from one
- location to another because:
-
- Here's a correct method to move data from one location to
- another. The example code assumes that one is moving a B-link
- tree node from Parallel HDF5 Design In this section, I first describe the function requirements of the Parallel HDF5 (PHDF5) software and the assumed system requirements. Section 2 describes the programming model of the PHDF5 interface. Section 3 shows an example PHDF5 program. HDF5 uses optional access template object to control the file access
-mechanism. The general model in accessing an HDF5 file in parallel
-contains the following steps: Each processes of the MPI communicator creates an access template and sets
-it up with MPI parallel access information (communicator, info object,
-access-mode). All processes of the MPI communicator open an HDF5 file by a collective call
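The access-template steps above can be sketched in C with the MPI-IO file driver. The filename is a placeholder, the dataset steps are elided to comments, and error checking is omitted.

```c
#include "hdf5.h"
#include <mpi.h>

/* Sketch: collective parallel file open/close using an access
   template set up with MPI information. */
void parallel_open_example(void)
{
    MPI_Init(NULL, NULL);

    /* Each process sets up an access template with MPI information. */
    hid_t acc_tpl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(acc_tpl, MPI_COMM_WORLD, MPI_INFO_NULL);

    /* All processes open the file with a collective call. */
    hid_t fid = H5Fcreate("ParaEg.h5", H5F_ACC_TRUNC,
                          H5P_DEFAULT, acc_tpl);
    H5Pclose(acc_tpl);

    /* ... collective H5Dcreate, then independent or collective
       H5Dread/H5Dwrite calls controlled by a transfer template
       (H5Pset_dxpl_mpio with H5FD_MPIO_INDEPENDENT or
       H5FD_MPIO_COLLECTIVE) ... */

    H5Fclose(fid);      /* collective close */
    MPI_Finalize();
}
```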
-(H5FCreate or H5Fopen) with the access template. All processes of the MPI communicator open a dataset by a collective call (H5Dcreate or H5Dopen). This version supports only collective dataset open. Future version may support datasets open by a subset of the processes that have opened the file. Each process may do independent and arbitrary number of data I/O access by independent calls (H5Dread or H5Dwrite) to the dataset with the transfer template set for independent access. (The default transfer mode is independent transfer). If the dataset is an unlimited dimension one and if the H5Dwrite is writing data beyond the current dimension size of the dataset, all processes that have opened the dataset must make a collective call (H5Dallocate) to allocate more space for the dataset BEFORE the independent H5Dwrite call. All processes that have opened the dataset may do collective data I/O access by collective calls (H5Dread or H5Dwrite) to the dataset with the transfer template set for collective access. Pre-allocation (H5Dallocate) is not needed for unlimited dimension datasets since the H5Dallocate call, if needed, is done internally by the collective data access call. Changes to attributes can only occur at the "main process" (process 0). Read only access to attributes can occur independent in each process that has opened the dataset. (API to be defined later.) All processes that have opened the dataset must close the dataset by a collective call (H5Dclose). All processes that have opened the file must close the file by a collective call (H5Fclose). h5dump
-
-h5dump [-h] [-bb] [-header] [-a
-
-h5ls
-
-h5ls [OPTIONS] FILE [OBJECTS...]
-
- OPTIONS
- -h, -?, --help Print a usage message and exit
- -d, --dump Print the values of datasets
- -f, --full Print full path names instead of base names
- -l, --label Label members of compound datasets
- -r, --recursive List all groups recursively, avoiding cycles
- -s, --string Print 1-byte integer datasets as ASCII
- -wN, --width=N Set the number of columns of output
- -v, --verbose Generate more verbose output
- -V, --version Print version number and exit
- FILE
- The file name may include a printf(3C) integer format such as
- "%05d" to open a file family.
- OBJECTS
- The names of zero or more objects about which information should be
- displayed. If a group is mentioned then information about each of its
- members is displayed. If no object names are specified then
- information about all of the objects in the root group is displayed.
-
-
-
-
-
-
- The National Center for Supercomputing Applications
- University of Illinois
- at Urbana-Champaign
-
-
-
-hdfhelp@ncsa.uiuc.edu
-
-
Last Modified: June 22, 2001
-
-
-
-
-
-
-
-
diff --git a/doc/html/Version.html b/doc/html/Version.html
deleted file mode 100644
index d465d04..0000000
--- a/doc/html/Version.html
+++ /dev/null
@@ -1,137 +0,0 @@
-
-
-
- Version Numbers
-
- 1. Introduction
-
- hdf5-1.2.3
or hdf5 version
- 1.2 release 3
.
-
- 5
is part of the library name and will only
- change if the entire file format and library are redesigned
- similar in scope to the changes between HDF4 and HDF5.
-
- 1
is the major version number and
- changes when there is an extensive change to the file format or
- library API. Such a change will likely require files to be
- translated and applications to be modified. This number is not
- expected to change frequently.
-
- 2
is the minor version number and is
- incremented by each public release that presents new features.
- Even numbers are reserved for stable public versions of the
- library while odd numbers are reserved for development
- versions. See the diagram below for examples.
-
- 3
is the release number. For public
- versions of the library, the release number is incremented each
- time a bug is fixed and the fix is made available to the public.
- For development versions, the release number is incremented more
- often (perhaps almost daily).
-
- 2. Abbreviated Versions
-
- 3. Special Versions
-
- 4. Public versus Development
-
-
Fig 1: Version Example
- 5. Version Support from the Library
-
-
-
-
-H5_VERS_MAJOR
- H5_VERS_MINOR
- H5_VERS_RELEASE
-
- herr_t H5get_libversion (unsigned *majnum, unsigned
- *minnum, unsigned *relnum)
-
- void H5check(void)
-
- herr_t H5check_version (unsigned majnum,
- unsigned minnum, unsigned relnum)
- H5check()
macro
- with the include file version constants. The function
- compares its arguments to the result returned by
- H5get_libversion()
and if a mismatch is detected prints
- an error message on the standard error stream and aborts.
-
-HDF Help Desk
-
-
-
-
-Last modified: Fri Oct 30 10:32:50 EST 1998
-
-
-
-
diff --git a/doc/html/chunk1.gif b/doc/html/chunk1.gif
deleted file mode 100644
index 0260818..0000000
Binary files a/doc/html/chunk1.gif and /dev/null differ
diff --git a/doc/html/chunk1.obj b/doc/html/chunk1.obj
deleted file mode 100644
index 5936b0c..0000000
--- a/doc/html/chunk1.obj
+++ /dev/null
@@ -1,52 +0,0 @@
-%TGIF 3.0-p5
-state(0,33,100,0,0,0,16,1,9,1,1,0,0,3,0,1,1,'Courier',0,17,0,0,0,10,0,0,1,1,0,16,0,0,1,1,1,0,1088,1408,0,0,2880).
-%
-% @(#)$Header$
-% %W%
-%
-unit("1 pixel/pixel").
-page(1,"",1).
-box('black',64,64,384,384,5,2,1,29,0,0,0,0,0,'2',[
-]).
-poly('black',2,[
- 128,64,128,384],0,2,1,30,0,4,0,0,10,4,0,0,0,'2','10','4',
- "0",[
-]).
-poly('black',2,[
- 192,64,192,384],0,2,1,31,0,4,0,0,10,4,0,0,0,'2','10','4',
- "0",[
-]).
-poly('black',2,[
- 256,64,256,384],0,2,1,32,0,4,0,0,10,4,0,0,0,'2','10','4',
- "0",[
-]).
-poly('black',2,[
- 320,64,320,384],0,2,1,33,0,4,0,0,10,4,0,0,0,'2','10','4',
- "0",[
-]).
-poly('black',2,[
- 64,128,384,128],0,2,1,34,0,4,0,0,10,4,0,0,0,'2','10','4',
- "0",[
-]).
-poly('black',2,[
- 64,192,384,192],0,2,1,35,0,4,0,0,10,4,0,0,0,'2','10','4',
- "0",[
-]).
-poly('black',2,[
- 64,256,384,256],0,2,1,36,0,4,0,0,10,4,0,0,0,'2','10','4',
- "0",[
-]).
-poly('black',2,[
- 64,320,384,320],0,2,1,37,0,4,0,0,10,4,0,0,0,'2','10','4',
- "0",[
-]).
-box('black',128,448,192,512,5,2,1,56,0,0,0,0,0,'2',[
-]).
-text('black',448,208,'Courier',0,17,2,1,0,1,84,28,61,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "Entire array",
- "5000 x 5000"]).
-text('black',256,464,'Courier',0,17,2,1,0,1,84,28,63,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "Single Chunk",
- "1000 x 1000"]).
-box('black',48,48,512,528,0,1,1,71,0,0,0,0,0,'1',[
-]).
diff --git a/doc/html/compat.html b/doc/html/compat.html
deleted file mode 100644
index fd46ca4..0000000
--- a/doc/html/compat.html
+++ /dev/null
@@ -1,271 +0,0 @@
-
-
-
- Backward/Forward Compatability
-
-
-
-
-
-
- Wrappers
-
-
- Implementation of Condition B
-
- H5Fopen
), and the Condition B
- module is compiled into the library, then H5Fclose
- calls the module to traverse the HDF5 wrapper and generate an
- additional internal or external HDF4 wrapper (wrapper specifics
- are described below). If Condition B is implemented as a module
- then it can benefit from the metadata already cached by the main
- library.
-
- H5Fclose
is
- called). The HDF5 library must understand how wrappers work, but
- not necessarily anything about the HDF4 file format.
-
- H5Fclose
).
- Note: Perhaps the next release of the HDF4 library should
- insure that the HDF4 wrapper file has a more recent modification
- time than the raw data file (the HDF5 file) to which it
- points(?)
-
-
- Implementation of Condition C
-
- H5Fopen
builds an internal or external HDF5
- wrapper and then accesses the raw data through that wrapper. If
- the HDF5 library modifies the file then the HDF4 wrapper becomes
- out of date. However, since the HDF5 library hasn't been
- released, we can at least implement it to disable and/or reclaim
- the HDF4 wrapper.
-
- H5Fclose
on the HDF5
- wrapper file we call H5Fabort
which immediately
- releases all file resources without updating the file, and then
- we unlink the file from Unix.
-
-
- What do wrappers look like?
-
- dd
list and any other data it needs are
- appended to the end of the file and the HDF5 signature uses the
- logical file length field to determine the beginning of the
- trailing part of the wrapper.
-
-
-
-
-
- HDF4 minimal file header. Its main job is to point to
- the
- dd
list at the end of the file.
-
- User-defined block which is truncated by the size of the
- HDF4 file header so that the HDF5 super block file address
- doesn't change.
-
-
- The HDF5 super block and data, unmodified by adding the
- HDF4 wrapper.
-
-
- The main part of the HDF4 wrapper. The
- dd
- list will have entries for all parts of the file so
- hdpack(?) doesn't (re)move anything.dd
list to mark the location(s) of the HDF5
- wrapper.
-
-
- Other Thoughts
-
-
- Robb Matzke
-
-
-Last modified: Wed Oct 8 12:34:42 EST 1997
-
-
-
diff --git a/doc/html/cpplus/CppInterfaces.html b/doc/html/cpplus/CppInterfaces.html
deleted file mode 100644
index f8f37f2..0000000
--- a/doc/html/cpplus/CppInterfaces.html
+++ /dev/null
@@ -1,1437 +0,0 @@
-
-
-
-
- HDF5 C++ Interfaces
- ===================
-
-// HDF5 dataset and attribute have some common characteristics, so the
-// term abstract dataset is used to name the element that can represent
-// both objects, dataset and attribute.
-//
-// Class AbstractDs is an abstract base class, from which Attribute and
-// DataSet inherit. It provides the services that are common to both
-// Attribute and DataSet. It also inherits from H5Object and passes down
-// the services that H5Object provides.
-class AbstractDs : public H5Object
-
- // Gets the dataspace of this abstract dataset - pure virtual
- virtual DataSpace getSpace() const = 0;
-
- // Gets the class of the datatype that is used by this abstract
- // dataset
- H5T_class_t getTypeClass() const;
-
- // Gets a copy of the datatype that this abstract dataset uses.
- // Note that this datatype is a generic one and can only be accessed
- // via generic member functions, i.e., member functions belong to
- // DataType. To get specific datatype, i.e. EnumType, FloatType,
- // etc..., use the specific functions, that follow, instead.
- DataType getDataType() const;
-
- // Gets a copy of the specific datatype of this abstract dataset
- EnumType getEnumType() const;
- CompType getCompType() const;
- IntType getIntType() const;
- FloatType getFloatType() const;
- StrType getStrType() const;
-
- // Copy constructor
- AbstractDs( const AbstractDs& original );
-
- virtual ~AbstractDs();
-
-// end of class AbstractDs
-
-// Atomic datatype can be an integer, float, string, or predefined datatype.
-//
-// Class AtomType is a base class, from which IntType, FloatType, StrType,
-// and PredType inherit. It provides the services that are common to these
-// subclasses. It also inherits from DataType and passes down the
-// services that are common to all the datatypes.
-class AtomType : public DataType
-
- // Sets the total size for an atomic datatype.
- void setSize( size_t size ) const;
-
- // Returns the byte order of an atomic datatype.
- H5T_order_t getOrder( string& order_string ) const;
-
- // Sets the byte ordering of an atomic datatype.
- void setOrder( H5T_order_t order ) const;
-
- // Returns the precision of an atomic datatype.
- size_t getPrecision() const;
-
- // Sets the precision of an atomic datatype.
- void setPrecision( size_t precision ) const;
-
- // Gets the bit offset of the first significant bit.
- int getOffset() const;
-
- // Sets the bit offset of the first significant bit.
- void setOffset( size_t offset ) const;
-
- // Copy constructor
- AtomType( const AtomType& original );
-
- virtual ~AtomType();
-
-// end of class AtomType
-
-
-// An attribute is an abstract dataset because it has some characteristics
-// that a dataset also has, but not all.
-//
-// Class Attribute inherits from AbstractDs and provides accesses to an
-// attribute.
-class Attribute : public AbstractDs
-
- // Writes data to this attribute.
- void write(const DataType& mem_type, void *buf ) const;
-
- // Reads data from this attribute.
- void read( const DataType& mem_type, void *buf ) const;
-
- // Gets a copy of the dataspace for this attribute.
- virtual DataSpace getSpace() const;
-
- // Gets the name of this attribute.
- string getName( size_t buf_size ) const;
-
- // An attribute cannot itself have attributes, so it has nothing to
- // iterate over. The implementation of this member, inherited from
- // H5Object, is therefore overridden to do nothing here.
- int iterateAttrs() const;
-
- // Creates a copy of an existing attribute using the attribute id
- Attribute( const hid_t attr_id );
-
- // Copy constructor
- Attribute( const Attribute& original );
-
- virtual ~Attribute();
-
-// end of class Attribute
-
-// CommonFG is a protocol class. Its existence is simply to provide the
-// common services that are provided by H5File and Group. The file or
-// group in the context of this class is referred to as 'location'.
-class CommonFG
- // Creates a new group at this location.
- Group createGroup( const string& name, size_t size_hint = 0 ) const;
- Group createGroup( const char* name, size_t size_hint = 0 ) const;
-
- // Opens an existing group in a location.
- Group openGroup( const string& name ) const;
- Group openGroup( const char* name ) const;
-
- // Creates a new dataset at this location.
- DataSet createDataSet( const string& name, const DataType& data_type, const DataSpace& data_space, const DSetCreatPropList& create_plist = DSetCreatPropList::DEFAULT ) const;
- DataSet createDataSet( const char* name, const DataType& data_type, const DataSpace& data_space, const DSetCreatPropList& create_plist = DSetCreatPropList::DEFAULT ) const;
-
- // Opens an existing dataset at this location.
- DataSet openDataSet( const string& name ) const;
- DataSet openDataSet( const char* name ) const;
-
- // Creates a link of the specified type from new_name to current_name;
- // both names are interpreted relative to this location.
- void link( H5G_link_t link_type, const string& curr_name, const string& new_name ) const;
- void link( H5G_link_t link_type, const char* curr_name, const char* new_name ) const;
-
- // Removes the specified name at this location.
- void unlink( const string& name ) const;
- void unlink( const char* name ) const;
-
- // Renames an HDF5 object at this location.
- void move( const string& src, const string& dst ) const;
- void move( const char* src, const char* dst ) const;
-
- // Returns information about an HDF5 object, given by its name, at this location.
- void getObjinfo( const string& name, hbool_t follow_link, H5G_stat_t& statbuf ) const;
- void getObjinfo( const char* name, hbool_t follow_link, H5G_stat_t& statbuf ) const;
-
- // Returns the name of the HDF5 object that the symbolic link points to.
- string getLinkval( const string& name, size_t size ) const;
- string getLinkval( const char* name, size_t size ) const;
-
- // Sets the comment for the HDF5 object specified by its name.
- void setComment( const string& name, const string& comment ) const;
- void setComment( const char* name, const char* comment ) const;
-
- // Retrieves comment for the HDF5 object specified by its name.
- string getComment( const string& name, size_t bufsize ) const;
- string getComment( const char* name, size_t bufsize ) const;
-
- // Mounts the file 'child' onto this location.
- void mount( const string& name, H5File& child, PropList& plist ) const;
- void mount( const char* name, H5File& child, PropList& plist) const;
-
- // Unmounts the file named 'name' from this location.
- void unmount( const string& name ) const;
- void unmount( const char* name ) const;
-
- // Iterates over the elements of this location - not implemented in
- // C++ style yet
- int iterateElems( const string& name, int *idx, H5G_iterate_t op, void *op_data );
- int iterateElems( const char* name, int *idx, H5G_iterate_t op, void *op_data );
-
- // Opens a generic named datatype at this location
- DataType openDataType( const string& name ) const;
- DataType openDataType( const char* name ) const;
-
- // Opens a named enumeration datatype at this location
- EnumType openEnumType( const string& name ) const;
- EnumType openEnumType( const char* name ) const;
-
- // Opens a named compound datatype at this location
- CompType openCompType( const string& name ) const;
- CompType openCompType( const char* name ) const;
-
- // Opens a named integer datatype at this location
- IntType openIntType( const string& name ) const;
- IntType openIntType( const char* name ) const;
-
- // Opens a named floating-point datatype at this location
- FloatType openFloatType( const string& name ) const;
- FloatType openFloatType( const char* name ) const;
-
- // Opens a named string datatype at this location
- StrType openStrType( const string& name ) const;
- StrType openStrType( const char* name ) const;
-
- // For H5File and Group to throw appropriate exception - pure virtual
- virtual void throwException() const = 0;
-
- // Get id of the location, either group or file - pure virtual
- virtual hid_t getLocId() const = 0;
-
- CommonFG();
- virtual ~CommonFG();
-
-// end of CommonFG declaration
-
-
-// Class CompType inherits from DataType and provides access to a compound
-// datatype.
-class CompType : public DataType
-
- // Creates a new compound datatype, given the type's size.
- CompType( size_t size );
-
- // Creates a compound datatype using an existing id.
- CompType( const hid_t existing_id );
-
- // Gets the compound datatype of the specified dataset.
- CompType( const DataSet& dataset );
-
- // Returns the number of members in this compound datatype.
- int getNmembers() const;
-
- // Returns the name of a member of this compound datatype.
- string getMemberName( unsigned member_num ) const;
-
- // Returns the offset of a member of this compound datatype.
- size_t getMemberOffset( unsigned memb_no ) const;
-
- // Returns the dimensionality of the specified member of this compound datatype.
- int getMemberDims( int member_num, size_t* dims, int* perm ) const;
-
- // Returns the type class of the specified member of this compound
- // datatype. This tells the user which datatype subclass to use when
- // creating another datatype of the same class.
- H5T_class_t getMemberClass( unsigned member_num ) const;
-
- // Returns the generic datatype of the specified member in
- // this compound datatype.
- DataType getMemberDataType( int member_num ) const;
-
- // Returns the enumeration datatype of the specified member in
- // this compound datatype.
- EnumType getMemberEnumType( int member_num ) const;
-
- // Returns the compound datatype of the specified member in
- // this compound datatype.
- CompType getMemberCompType( int member_num ) const;
-
- // Returns the integer datatype of the specified member in
- // this compound datatype.
- IntType getMemberIntType( int member_num ) const;
-
- // Returns the floating-point datatype of the specified member in
- // this compound datatype.
- FloatType getMemberFloatType( int member_num ) const;
-
- // Returns the string datatype of the specified member in
- // this compound datatype.
- StrType getMemberStrType( int member_num ) const;
-
- // Adds a new member to this compound datatype.
- void insertMember( const string name, size_t offset, const DataType& new_member ) const;
-
- // Recursively removes padding from within this compound datatype.
- void pack() const;
-
- // Default constructor
- CompType();
-
- // Copy constructor
- CompType( const CompType& original );
-
- virtual ~CompType();
-
-// end of class CompType
-
-
-// Class DataSet inherits from AbstractDs and provides access to a dataset.
-class DataSet : public AbstractDs
-
- // Gets the dataspace of this dataset.
- virtual DataSpace getSpace() const;
-
- // Gets the creation property list of this dataset.
- DSetCreatPropList getCreatePlist() const;
-
- // Gets the storage size of this dataset.
- hsize_t getStorageSize() const;
-
- // Reads the data of this dataset and stores it in the provided buffer.
- // The memory and file dataspaces and the transferring property list
- // can be defaults.
- void read( void* buf, const DataType& mem_type, const DataSpace& mem_space = DataSpace::ALL, const DataSpace& file_space = DataSpace::ALL, const DSetMemXferPropList& xfer_plist = DSetMemXferPropList::DEFAULT ) const;
-
- // Writes the buffered data to this dataset.
- // The memory and file dataspaces and the transferring property list
- // can be defaults.
- void write( const void* buf, const DataType& mem_type, const DataSpace& mem_space = DataSpace::ALL, const DataSpace& file_space = DataSpace::ALL, const DSetMemXferPropList& xfer_plist = DSetMemXferPropList::DEFAULT ) const;
-
- // Extends this dataset; applicable to datasets with unlimited dimensions.
- void extend( const hsize_t* size ) const;
-
- // Default constructor
- DataSet();
-
- // Copy constructor
- DataSet( const DataSet& original );
-
- virtual ~DataSet();
-
-// end of class DataSet
-
-
-// Class DataSpace provides access to a dataspace.
-class DataSpace : public IdComponent
-
- // Default DataSpace objects
- static const DataSpace ALL;
-
- // Creates a dataspace object given the space type.
- DataSpace( H5S_class_t type );
-
- // Creates a simple dataspace.
- DataSpace( int rank, const hsize_t * dims, const hsize_t * maxdims = NULL);
-
- // Makes copy of an existing dataspace.
- void copy( const DataSpace& like_space );
-
- // Determines if this dataspace is a simple one.
- bool isSimple () const;
-
- // Sets the offset of this simple dataspace.
- void offsetSimple ( const hssize_t* offset ) const;
-
- // Retrieves dataspace dimension size and maximum size.
- int getSimpleExtentDims ( hsize_t *dims, hsize_t *maxdims = NULL ) const;
-
- // Gets the dimensionality of this dataspace.
- int getSimpleExtentNdims () const;
-
- // Gets the number of elements in this dataspace.
- hssize_t getSimpleExtentNpoints () const;
-
- // Gets the current class of this dataspace.
- H5S_class_t getSimpleExtentType () const;
-
- // Copies the extent of this dataspace.
- void extentCopy ( DataSpace& dest_space ) const;
-
- // Sets or resets the size of this dataspace.
- void setExtentSimple( int rank, const hsize_t *current_size, const hsize_t *maximum_size = NULL ) const;
-
- // Removes the extent from this dataspace.
- void setExtentNone () const;
-
- // Gets the number of elements in this dataspace selection.
- hssize_t getSelectNpoints () const;
-
- // Get number of hyperslab blocks.
- hssize_t getSelectHyperNblocks () const;
-
- // Gets the list of hyperslab blocks currently selected.
- void getSelectHyperBlocklist( hsize_t startblock, hsize_t numblocks, hsize_t *buf ) const;
-
- // Gets the number of element points in the current selection.
- hssize_t getSelectElemNpoints () const;
-
- // Retrieves the list of element points currently selected.
- void getSelectElemPointlist ( hsize_t startpoint, hsize_t numpoints, hsize_t *buf ) const;
-
- // Gets the bounding box containing the current selection.
- void getSelectBounds ( hsize_t* start, hsize_t* end ) const;
-
- // Selects array elements to be included in the selection for
- // this dataspace.
- void selectElements ( H5S_seloper_t op, const size_t num_elements, const hsize_t* coord[ ] ) const;
-
- // Selects the entire dataspace.
- void selectAll () const;
-
- // Resets the selection region to include no elements.
- void selectNone () const;
-
- // Verifies that the selection is within the extent of the dataspace.
- bool selectValid () const;
-
- // Selects a hyperslab region to add to the current selected region.
- void selectHyperslab( H5S_seloper_t op, const hsize_t *count, const hsize_t *start, const hsize_t *stride = NULL, const hsize_t *block = NULL ) const;
-
- // Default constructor
- DataSpace();
-
- // Create a dataspace object from a dataspace identifier
- DataSpace( const hid_t space_id );
-
- // Copy constructor
- DataSpace( const DataSpace& original );
-
- virtual ~DataSpace();
-// end of class DataSpace
-
-
-// An HDF5 datatype can be an atomic datatype, a compound datatype, or an
-// enumeration datatype. A datatype is itself a kind of HDF5 object.
-//
-// Class DataType provides access to a generic HDF5 datatype. It has
-// characteristics which AtomType, CompType, and EnumType inherit. It also
-// inherits from H5Object and passes down the services to its subclasses.
-class DataType : public H5Object
-
- // Creates a datatype given its class and size.
- DataType( const H5T_class_t type_class, size_t size );
-
- // Copies an existing datatype to this datatype instance.
- void copy( const DataType& like_type );
-
- // Returns the datatype class identifier of this datatype.
- H5T_class_t getClass() const;
-
- // Commits a transient datatype to a file; this datatype becomes
- // a named datatype which can be accessed from the location.
- void commit( H5Object& loc, const string& name ) const;
- void commit( H5Object& loc, const char* name ) const;
-
- // Determines whether this datatype is a named datatype or
- // a transient datatype.
- bool committed() const;
-
- // Finds a conversion function that can handle the conversion
- // of this datatype to the given datatype, dest.
- H5T_conv_t find( const DataType& dest, H5T_cdata_t **pcdata ) const;
-
- // Converts data from this datatype into the specified datatype, dest.
- void convert( const DataType& dest, size_t nelmts, void *buf, void *background, PropList& plist ) const;
-
- // Sets the overflow handler to a specified function.
- void setOverflow(H5T_overflow_t func) const;
-
- // Returns a pointer to the current global overflow function.
- H5T_overflow_t getOverflow(void) const;
-
- // Locks a datatype.
- void lock() const;
-
- // Returns the size of this datatype.
- size_t getSize() const;
-
- // Returns the base datatype from which a datatype is derived.
- // Not implemented yet
- DataType getSuper() const;
-
- // Registers a conversion function.
- void registerFunc(H5T_pers_t pers, const string& name, const DataType& dest, H5T_conv_t func ) const;
- void registerFunc(H5T_pers_t pers, const char* name, const DataType& dest, H5T_conv_t func ) const;
-
- // Removes a conversion function from all conversion paths.
- void unregister( H5T_pers_t pers, const string& name, const DataType& dest, H5T_conv_t func ) const;
- void unregister( H5T_pers_t pers, const char* name, const DataType& dest, H5T_conv_t func ) const;
-
- // Tags an opaque datatype.
- void setTag( const string& tag ) const;
- void setTag( const char* tag ) const;
-
- // Gets the tag associated with an opaque datatype.
- string getTag() const;
-
- // Creates a DataType using an existing id - this datatype is
- // not a predefined type
- DataType( const hid_t type_id, bool predtype = false );
-
- // Default constructor
- DataType();
-
- // Copy constructor
- DataType( const DataType& original );
-
- virtual ~DataType();
-
-// end of class DataType
-
-
-// Class DSetCreatPropList provides access to a dataset creation
-// property list.
-class DSetCreatPropList : public PropList
-
- // Default DSetCreatPropList object
- static const DSetCreatPropList DEFAULT;
-
- // Copies a dataset creation property list using assignment statement.
- DSetCreatPropList& operator=( const DSetCreatPropList& rhs );
-
- // Sets the type of storage used to store the raw data for the
- // dataset that uses this property list.
- void setLayout( hid_t plist, H5D_layout_t layout ) const;
-
- // Gets the layout of the raw data storage of the data that uses this
- // property list.
- H5D_layout_t getLayout() const;
-
- // Sets the size of the chunks used to store a chunked layout dataset.
- void setChunk( int ndims, const hsize_t* dim ) const;
-
- // Retrieves the size of the chunks used to store a chunked layout dataset.
- int getChunk( int max_ndims, hsize_t* dim ) const;
-
- // Sets compression method and compression level
- void setDeflate( int level ) const;
-
- // Sets a dataset fill value.
- void setFillValue( DataType& fvalue_type, const void* value ) const;
-
- // Retrieves a dataset fill value.
- void getFillValue( DataType& fvalue_type, void* value ) const;
-
- // Adds a filter to the filter pipeline
- void setFilter( H5Z_filter_t filter, unsigned int flags, size_t cd_nelmts, const unsigned int cd_values[] ) const;
-
- // Returns the number of filters in the pipeline.
- int getNfilters() const;
-
- // Returns information about a filter in a pipeline.
- H5Z_filter_t getFilter( int filter_number, unsigned int& flags, size_t& cd_nelmts, unsigned int* cd_values, size_t namelen, char name[] ) const;
-
- // Adds an external file to the list of external files.
- void setExternal( const char* name, off_t offset, hsize_t size ) const;
-
- // Returns the number of external files for a dataset.
- int getExternalCount() const;
-
- // Returns information about an external file
- void getExternal( unsigned idx, size_t name_size, char* name, off_t& offset, hsize_t& size ) const;
-
- // Creates a copy of an existing dataset creation property list
- // using the property list id
- DSetCreatPropList( const hid_t plist_id );
-
- // Default constructor
- DSetCreatPropList();
-
- // Copy constructor
- DSetCreatPropList( const DSetCreatPropList& original );
-
- virtual ~DSetCreatPropList();
-
-// end of class DSetCreatPropList
-
-
-// Class DSetMemXferPropList provides access to a dataset memory and
-// transfer property list.
-class DSetMemXferPropList : public PropList
-
- // Default object for dataset memory and transfer property list
- static const DSetMemXferPropList DEFAULT;
-
- // Copies a dataset memory and transfer property list using
- // assignment statement
- DSetMemXferPropList& operator=( const DSetMemXferPropList& rhs );
-
- // Sets type conversion and background buffers
- void setBuffer( size_t size, void* tconv, void* bkg ) const;
-
- // Reads buffer settings
- size_t getBuffer( void** tconv, void** bkg ) const;
-
- // Sets the dataset transfer property list status to TRUE or FALSE
- void setPreserve( bool status ) const;
-
- // Checks status of the dataset transfer property list
- bool getPreserve() const;
-
- // Indicates whether to cache hyperslab blocks during I/O
- void setHyperCache( bool cache, unsigned limit = 0 ) const;
-
- // Returns information regarding the caching of hyperslab blocks during I/O
- void getHyperCache( bool& cache, unsigned& limit ) const;
-
- // Sets B-tree split ratios for a dataset transfer property list
- void setBtreeRatios( double left, double middle, double right ) const;
-
- // Gets B-tree split ratios for a dataset transfer property list
- void getBtreeRatios( double& left, double& middle, double& right ) const;
-
- // Sets the memory manager for variable-length datatype
- // allocation in H5Dread and H5Dvlen_reclaim
- void setVlenMemManager( H5MM_allocate_t alloc, void* alloc_info,
- H5MM_free_t free, void* free_info ) const;
-
- // Sets alloc and free to NULL, indicating that the system
- // malloc and free are to be used
- void setVlenMemManager() const;
-
- // Gets the memory manager for variable-length datatype
- // allocation in H5Dread and H5Dvlen_reclaim
- void getVlenMemManager( H5MM_allocate_t& alloc, void** alloc_info,
- H5MM_free_t& free, void** free_info ) const;
-
- // Sets the transfer mode - parallel mode, not currently supported
- //void setXfer( H5D_transfer_t data_xfer_mode = H5D_XFER_INDEPENDENT ) const;
-
- // Gets the transfer mode - parallel mode, not currently supported
- //H5D_transfer_t getXfer() const;
-
- // Creates a copy of an existing dataset memory and transfer
- // property list using the property list id
- DSetMemXferPropList (const hid_t plist_id);
-
- // Default constructor
- DSetMemXferPropList();
-
- // Copy constructor
- DSetMemXferPropList( const DSetMemXferPropList& original );
-
- // Default destructor
- virtual ~DSetMemXferPropList();
-
-// end of class DSetMemXferPropList
-
-
-// Class EnumType inherits from DataType and provides access to an
-// enumeration datatype.
-class EnumType : public DataType
-
- // Creates an empty enumeration datatype based on a native signed
- // integer type, whose size is given by size.
- EnumType( size_t size );
-
- // Gets the enum datatype of the specified dataset
- EnumType( const DataSet& dataset ); // H5Dget_type
-
- // Creates a new enum datatype based on an integer datatype
- EnumType( const IntType& data_type ); // H5Tenum_create
-
- // Inserts a new member to this enumeration type.
- void insert( const string& name, void *value ) const;
- void insert( const char* name, void *value ) const;
-
- // Returns the symbol name corresponding to a specified member
- // of this enumeration datatype.
- string nameOf( void *value, size_t size ) const;
-
- // Returns the value corresponding to a specified member of this
- // enumeration datatype.
- void valueOf( const string& name, void *value ) const;
- void valueOf( const char* name, void *value ) const;
-
- // Returns the value of an enumeration datatype member
- void getMemberValue( unsigned memb_no, void *value ) const;
-
- // Default constructor
- EnumType();
-
- // Creates an enumeration datatype using an existing id
- EnumType( const hid_t existing_id );
-
- // Copy constructor
- EnumType( const EnumType& original );
-
- virtual ~EnumType();
-// end of class EnumType
-
-
-// Class Exception provides error handling services for the HDF5 C++ API
-// and is the base class of the *IException classes below.
-class Exception
-
- // Creates an exception with a detailed message
- Exception( const string& message );
-
- Exception( const char* message);
-
- // Returns the character string that describes an error specified by
- // a major error number.
- string getMajorString( H5E_major_t major_num ) const;
-
- // Returns the character string that describes an error specified by
- // a minor error number.
- string getMinorString( H5E_minor_t minor_num ) const;
-
- // Returns the detailed message set at the time the exception is thrown
- string getDetailMesg() const;
-
- // Turns on the automatic error printing.
- void setAutoPrint( H5E_auto_t func,
- void* client_data ) const;
-
- // Turns off the automatic error printing.
- static void dontPrint();
-
- // Retrieves the current settings for the automatic error stack
- // traversal function and its data.
- void getAutoPrint( H5E_auto_t& func,
- void** client_data ) const;
-
- // Clears the error stack for the current thread.
- void clearErrorStack() const;
-
- // Walks the error stack for the current thread, calling the
- // specified function.
- void walkErrorStack( H5E_direction_t direction,
- H5E_walk_t func, void* client_data ) const;
-
- // Default error stack traversal callback function that prints
- // error messages to the specified output stream.
- void walkDefErrorStack( int n, H5E_error_t& err_desc,
- void* client_data ) const;
-
- // Prints the error stack in a default manner.
- //void printError() const;
- void printError( FILE* stream = NULL ) const;
-
- // Creates an exception with no message
- Exception();
-
- // copy constructor
- Exception( const Exception& original );
-
-// end of class Exception
-
-
-// Class FileIException inherits from Exception to provide exception
-// handling for H5File.
-class FileIException : public Exception
- FileIException();
- FileIException( string message );
-// end of class FileIException
-
-
-// Class GroupIException inherits from Exception to provide exception
-// handling for Group.
-class GroupIException : public Exception
- GroupIException();
- GroupIException( string message );
-// end of class GroupIException
-
-
-// Class DataSpaceIException inherits from Exception to provide exception
-// handling for DataSpace.
-class DataSpaceIException : public Exception
- DataSpaceIException();
- DataSpaceIException( string message );
-// end of class DataSpaceIException
-
-
-// Class DataTypeIException inherits from Exception to provide exception
-// handling for DataType.
-class DataTypeIException : public Exception
- DataTypeIException();
- DataTypeIException( string message );
-// end of class DataTypeIException
-
-
-// Class PropListIException inherits from Exception to provide exception
-// handling for PropList.
-class PropListIException : public Exception
- PropListIException();
- PropListIException( string message );
-// end of class PropListIException
-
-
-// Class DataSetIException inherits from Exception to provide exception
-// handling for DataSet.
-class DataSetIException : public Exception
- DataSetIException();
- DataSetIException( string message );
-// end of class DataSetIException
-
-
-// Class AttributeIException inherits from Exception to provide exception
-// handling for Attribute.
-class AttributeIException : public Exception
- AttributeIException();
- AttributeIException( string message );
-// end of class AttributeIException
-
-
-// Class LibraryIException inherits from Exception to provide exception
-// handling for H5Library.
-class LibraryIException : public Exception
- LibraryIException();
- LibraryIException( string message );
-// end of class LibraryIException
-
-
-// Class IdComponentException inherits from Exception to provide exception
-// handling for IdComponent.
-class IdComponentException : public Exception
- IdComponentException();
- IdComponentException( string message );
-// end of class IdComponentException
-
-
-// Class FileAccPropList provides access to a file access property list.
-class FileAccPropList : public PropList
-
- // Default file access property list object
- static const FileAccPropList DEFAULT;
-
- // Copies a file access property list using assignment statement.
- FileAccPropList& operator=( const FileAccPropList& rhs );
-
- // Sets alignment properties of this file access property list.
- void setAlignment( hsize_t threshold = 1, hsize_t alignment = 1 ) const;
-
- // Retrieves the current settings for alignment properties from
- // this file access property list.
- void getAlignment( hsize_t& threshold, hsize_t& alignment ) const;
-
- // Sets the meta data cache and raw data chunk cache parameters.
- void setCache( int mdc_nelmts, size_t rdcc_nelmts, size_t rdcc_nbytes, double rdcc_w0 ) const;
-
- // Retrieves maximum sizes of data caches and the preemption
- // policy value.
- void getCache( int& mdc_nelmts, size_t& rdcc_nelmts, size_t& rdcc_nbytes, double& rdcc_w0 ) const;
-
- // Sets garbage collecting references flag.
- void setGcReferences( unsigned gc_ref = 0 ) const;
-
- // Returns garbage collecting references setting.
- unsigned getGcReferences() const;
-
- // Creates a copy of an existing file access property list
- // using the property list id.
- FileAccPropList (const hid_t plist_id);
-
- // Default constructor
- FileAccPropList();
-
- // Copy constructor
- FileAccPropList( const FileAccPropList& original );
-
- // Default destructor
- virtual ~FileAccPropList();
-
-// end of class FileAccPropList
-
-
-// Class FileCreatPropList provides access to a file creation property list.
-class FileCreatPropList : public PropList
-
- // Default file creation property list object
- static const FileCreatPropList DEFAULT;
-
- // Copies a file creation property list using assignment statement.
- FileCreatPropList& operator=( const FileCreatPropList& rhs );
-
- // Retrieves version information for various parts of a file.
- void getVersion( unsigned& boot, unsigned& freelist, unsigned& stab, unsigned& shhdr ) const;
-
- // Sets the userblock size field of a file creation property list.
- void setUserblock( hsize_t size ) const;
-
- // Gets the size of a user block in this file creation property list.
- hsize_t getUserblock() const;
-
- // Sets file size-of addresses and sizes.
- void setSizes( size_t sizeof_addr = 4, size_t sizeof_size = 4 ) const;
-
- // Retrieves the size-of address and size quantities stored in a
- // file according to this file creation property list.
- void getSizes( size_t& sizeof_addr, size_t& sizeof_size ) const;
-
- // Sets the size of parameters used to control the symbol table nodes.
- void setSymk( unsigned int_nodes_k, unsigned leaf_nodes_k ) const;
-
- // Retrieves the size of the symbol table B-tree 1/2 rank and the
- // symbol table leaf node 1/2 size.
- void getSymk( unsigned& int_nodes_k, unsigned& leaf_nodes_k ) const;
-
- // Sets the size of parameter used to control the B-trees for
- // indexing chunked datasets.
- void setIstorek( unsigned ik ) const;
-
- // Returns the 1/2 rank of an indexed storage B-tree.
- unsigned getIstorek() const;
-
- // Creates a copy of an existing file create property list
- // using the property list id.
- FileCreatPropList (const hid_t plist_id);
-
- // Default constructor
- FileCreatPropList();
-
- // Copy constructor
- FileCreatPropList( const FileCreatPropList& original );
-
- // Default destructor
- virtual ~FileCreatPropList();
-
-// end of class FileCreatPropList
-
-// Class H5File provides access to an HDF5 file. It uses the services
-// provided by CommonFG besides inheriting the HDF5 id management from
-// the IdComponent class.
-class H5File : public IdComponent, public CommonFG
-
- // Creates or opens an HDF5 file. The file creation and access
- // property lists can be default.
- H5File( const string& name, unsigned int flags, const FileCreatPropList& create_plist = FileCreatPropList::DEFAULT, const FileAccPropList& access_plist = FileAccPropList::DEFAULT );
- H5File( const char* name, unsigned int flags, const FileCreatPropList& create_plist = FileCreatPropList::DEFAULT, const FileAccPropList& access_plist = FileAccPropList::DEFAULT );
-
- // Throw file exception - used by CommonFG to specifically throw
- // FileIException.
- virtual void throwException() const;
-
- // Determines if a file, specified by its name, is in HDF5 format.
- static bool isHdf5(const string& name );
- static bool isHdf5(const char* name );
-
- // Reopens this file.
- void reopen();
-
- // Gets the creation property list of this file.
- FileCreatPropList getCreatePlist() const;
-
- // Gets the access property list of this file.
- FileAccPropList getAccessPlist() const;
-
- // Copy constructor
- H5File(const H5File& original );
-
- virtual ~H5File();
-
-// end of class H5File
-
-
-// Class FloatType inherits from AtomType and provides access to a
-// floating-point datatype.
-class FloatType : public AtomType
-
- // Creates a floating-point type using a predefined type.
- FloatType( const PredType& pred_type );
-
- // Gets the floating-point datatype of the specified dataset.
- FloatType( const DataSet& dataset );
-
- // Retrieves floating point datatype bit field information.
- void getFields( size_t& spos, size_t& epos, size_t& esize, size_t& mpos, size_t& msize ) const;
-
- // Sets locations and sizes of floating point bit fields.
- void setFields( size_t spos, size_t epos, size_t esize, size_t mpos, size_t msize ) const;
-
- // Retrieves the exponent bias of a floating-point type.
- size_t getEbias() const;
-
- // Sets the exponent bias of a floating-point type.
- void setEbias( size_t ebias ) const;
-
- // Retrieves mantissa normalization of a floating-point datatype.
- H5T_norm_t getNorm( string& norm_string ) const;
-
- // Sets the mantissa normalization of a floating-point datatype.
- void setNorm( H5T_norm_t norm ) const;
-
- // Retrieves the internal padding type for unused bits in
- // floating-point datatypes.
- H5T_pad_t getInpad( string& pad_string ) const;
-
- // Fills unused internal floating point bits.
- void setInpad( H5T_pad_t inpad ) const;
-
- // Default constructor
- FloatType();
-
- // Creates a floating-point datatype using an existing id.
- FloatType( const hid_t existing_id );
-
- // Copy constructor
- FloatType( const FloatType& original );
-
- virtual ~FloatType();
-
-// end of class FloatType
-
-
-// Class Group provides access to an HDF5 group. Like H5File, it uses the
-// services provided by CommonFG. This class also inherits from H5Object.
-class Group : public H5Object, public CommonFG
-
- // Throw group exception - used by CommonFG to specifically throw
- // GroupIException.
- virtual void throwException() const;
-
- // Default constructor
- Group();
-
- // Copy constructor
- Group( const Group& original );
-
- virtual ~Group();
-
-// end of class Group
-
-// Class IdComponent provides a mechanism to handle reference counting
-// for an identifier of any HDF5 object.
-class IdComponent
- // Sets the identifier of this object to a new value.
- void setId( hid_t new_id );
-
- // Creates an object to hold an HDF5 identifier.
- IdComponent( const hid_t h5_id );
-
- // Gets the value of the current HDF5 object id which is held
- // by this IdComponent object.
- hid_t getId () const;
-
- // Increment reference counter.
- void incRefCount();
-
- // Decrement reference counter.
- void decRefCount();
-
- // Get the reference counter to this identifier.
- int getCounter();
-
- // Decrements the reference counter then determines if there are
-   // no more references to this object.
- bool noReference();
-
- // Reset this object by deleting its reference counter of the old id.
- void reset();
-
- // Copy constructor
- IdComponent( const IdComponent& original );
-
- // Destructor
- virtual ~IdComponent();
-
-}; // end class IdComponent
-
-
-// Class IntType inherits from AtomType and provides access to
-// integer datatypes.
-class IntType : public AtomType
-
-   // Creates an integer type using a predefined type.
- IntType( const PredType& pred_type );
-
- // Gets the integer datatype of the specified dataset.
- IntType( const DataSet& dataset );
-
- // Retrieves the sign type for an integer type.
- H5T_sign_t getSign() const;
-
-   // Sets the sign property for an integer type.
- void setSign( H5T_sign_t sign ) const;
-
- // Default constructor
- IntType();
-
-   // Creates an integer datatype using an existing id.
- IntType( const hid_t existing_id );
-
- // Copy constructor
- IntType( const IntType& original );
-
- virtual ~IntType();
-
-// end of class IntType
-
-
-// Class H5Library provides access to the HDF5 library. All of its
-// member functions are static.
-class H5Library
-
- // Initializes the HDF5 library.
- static void open();
-
- // Flushes all data to disk, closes files, and cleans up memory.
- static void close();
-
- // Instructs library not to install atexit cleanup routine
- static void dontAtExit();
-
-   // Returns the HDF5 library release number.
- static void getLibVersion( unsigned& majnum, unsigned& minnum, unsigned& relnum );
-
- // Verifies that the arguments match the version numbers compiled
- // into the library
- static void checkVersion( unsigned majnum, unsigned minnum, unsigned relnum );
-
-// end of class H5Library
-
-
-// An HDF5 object can be a group, dataset, attribute, or named datatype.
-//
-// Class H5Object provides the services that are typical of an HDF5 object
-// so Group, DataSet, Attribute, and DataType can use them. It also
-// inherits the HDF5 id management from the class IdComponent.
-class H5Object : public IdComponent
-
- // Flushes all buffers associated with this HDF5 object to disk.
- void flush( H5F_scope_t scope ) const;
-
- // Creates an attribute for a group, dataset, or named datatype.
-   // PropList is currently not used; it should always be default.
- Attribute createAttribute( const char* name, const DataType& type, const DataSpace& space, const PropList& create_plist = PropList::DEFAULT ) const;
- Attribute createAttribute( const string& name, const DataType& type, const DataSpace& space, const PropList& create_plist = PropList::DEFAULT ) const;
-
- // Opens an attribute that belongs to this object, given the
- // attribute name.
- Attribute openAttribute( const string& name ) const;
- Attribute openAttribute( const char* name ) const;
-
- // Opens an attribute that belongs to this object, given the
- // attribute index.
- Attribute openAttribute( const unsigned int idx ) const;
-
-   // Iterates the user's function over the attributes of this HDF5 object.
- int iterateAttrs( attr_operator_t user_op, unsigned* idx = NULL, void* op_data = NULL );
-
- // Determines the number of attributes attached to this HDF5 object.
- int getNumAttrs() const;
-
- // Removes an attribute from this HDF5 object, given the attribute
- // name.
- void removeAttr( const string& name ) const;
- void removeAttr( const char* name ) const;
-
- // Copy constructor
- H5Object( const H5Object& original );
-
- virtual ~H5Object();
-
-// end of class H5Object
-
-
-// Class PredType contains all the predefined datatype objects that are
-// currently available.
-class PredType : public AtomType
-
- static const PredType STD_I8BE;
- static const PredType STD_I8LE;
- static const PredType STD_I16BE;
- static const PredType STD_I16LE;
- static const PredType STD_I32BE;
- static const PredType STD_I32LE;
- static const PredType STD_I64BE;
- static const PredType STD_I64LE;
- static const PredType STD_U8BE;
- static const PredType STD_U8LE;
- static const PredType STD_U16BE;
- static const PredType STD_U16LE;
- static const PredType STD_U32BE;
- static const PredType STD_U32LE;
- static const PredType STD_U64BE;
- static const PredType STD_U64LE;
- static const PredType STD_B8BE;
- static const PredType STD_B8LE;
- static const PredType STD_B16BE;
- static const PredType STD_B16LE;
- static const PredType STD_B32BE;
- static const PredType STD_B32LE;
- static const PredType STD_B64BE;
- static const PredType STD_B64LE;
- static const PredType STD_REF_OBJ;
- static const PredType STD_REF_DSETREG;
-
- static const PredType C_S1;
- static const PredType FORTRAN_S1;
-
- static const PredType IEEE_F32BE;
- static const PredType IEEE_F32LE;
- static const PredType IEEE_F64BE;
- static const PredType IEEE_F64LE;
-
- static const PredType UNIX_D32BE;
- static const PredType UNIX_D32LE;
- static const PredType UNIX_D64BE;
- static const PredType UNIX_D64LE;
-
- static const PredType INTEL_I8;
- static const PredType INTEL_I16;
- static const PredType INTEL_I32;
- static const PredType INTEL_I64;
- static const PredType INTEL_U8;
- static const PredType INTEL_U16;
- static const PredType INTEL_U32;
- static const PredType INTEL_U64;
- static const PredType INTEL_B8;
- static const PredType INTEL_B16;
- static const PredType INTEL_B32;
- static const PredType INTEL_B64;
- static const PredType INTEL_F32;
- static const PredType INTEL_F64;
-
- static const PredType ALPHA_I8;
- static const PredType ALPHA_I16;
- static const PredType ALPHA_I32;
- static const PredType ALPHA_I64;
- static const PredType ALPHA_U8;
- static const PredType ALPHA_U16;
- static const PredType ALPHA_U32;
- static const PredType ALPHA_U64;
- static const PredType ALPHA_B8;
- static const PredType ALPHA_B16;
- static const PredType ALPHA_B32;
- static const PredType ALPHA_B64;
- static const PredType ALPHA_F32;
- static const PredType ALPHA_F64;
-
- static const PredType MIPS_I8;
- static const PredType MIPS_I16;
- static const PredType MIPS_I32;
- static const PredType MIPS_I64;
- static const PredType MIPS_U8;
- static const PredType MIPS_U16;
- static const PredType MIPS_U32;
- static const PredType MIPS_U64;
- static const PredType MIPS_B8;
- static const PredType MIPS_B16;
- static const PredType MIPS_B32;
- static const PredType MIPS_B64;
- static const PredType MIPS_F32;
- static const PredType MIPS_F64;
-
- static const PredType NATIVE_CHAR;
- static const PredType NATIVE_SCHAR;
- static const PredType NATIVE_UCHAR;
- static const PredType NATIVE_SHORT;
- static const PredType NATIVE_USHORT;
- static const PredType NATIVE_INT;
- static const PredType NATIVE_UINT;
- static const PredType NATIVE_LONG;
- static const PredType NATIVE_ULONG;
- static const PredType NATIVE_LLONG;
- static const PredType NATIVE_ULLONG;
- static const PredType NATIVE_FLOAT;
- static const PredType NATIVE_DOUBLE;
- static const PredType NATIVE_LDOUBLE;
- static const PredType NATIVE_B8;
- static const PredType NATIVE_B16;
- static const PredType NATIVE_B32;
- static const PredType NATIVE_B64;
- static const PredType NATIVE_OPAQUE;
- static const PredType NATIVE_HSIZE;
- static const PredType NATIVE_HSSIZE;
- static const PredType NATIVE_HERR;
- static const PredType NATIVE_HBOOL;
-
- static const PredType NATIVE_INT8;
- static const PredType NATIVE_UINT8;
- static const PredType NATIVE_INT_LEAST8;
- static const PredType NATIVE_UINT_LEAST8;
- static const PredType NATIVE_INT_FAST8;
- static const PredType NATIVE_UINT_FAST8;
-
- static const PredType NATIVE_INT16;
- static const PredType NATIVE_UINT16;
- static const PredType NATIVE_INT_LEAST16;
- static const PredType NATIVE_UINT_LEAST16;
- static const PredType NATIVE_INT_FAST16;
- static const PredType NATIVE_UINT_FAST16;
-
- static const PredType NATIVE_INT32;
- static const PredType NATIVE_UINT32;
- static const PredType NATIVE_INT_LEAST32;
- static const PredType NATIVE_UINT_LEAST32;
- static const PredType NATIVE_INT_FAST32;
- static const PredType NATIVE_UINT_FAST32;
-
- static const PredType NATIVE_INT64;
- static const PredType NATIVE_UINT64;
- static const PredType NATIVE_INT_LEAST64;
- static const PredType NATIVE_UINT_LEAST64;
- static const PredType NATIVE_INT_FAST64;
- static const PredType NATIVE_UINT_FAST64;
-
- // Copy constructor
- PredType( const PredType& original );
-
- // Default destructor
- virtual ~PredType();
-
- protected:
- // Default constructor
- PredType();
-
- // Creates a pre-defined type using an HDF5 pre-defined constant
- PredType( const hid_t predtype_id ); // used by the library only
-
-// end of class PredType
-
-
-// An HDF5 property list can be a file creation property list, a file
-// access property list, a dataset creation property list, or a dataset
-// memory and transfer property list.
-//
-// Class PropList provides access to an HDF5 property list. Its
-// services are inherited by classes FileCreatPropList, FileAccPropList,
-// DSetCreatPropList, and DSetMemXferPropList. It also inherits the HDF5
-// id management from the class IdComponent.
-class PropList : public IdComponent
-
- // Default property list object
- static const PropList DEFAULT;
-
- // Creates a property list given the property list type.
- PropList( H5P_class_t type );
-
- // Makes a copy of the given property list.
- void copy( const PropList& like_plist );
-
-   // Gets the class of this property list, e.g., H5P_FILE_CREATE,
- // H5P_FILE_ACCESS, ...
- H5P_class_t getClass() const;
-
- // Default constructor
- PropList();
-
- // Copy constructor
- PropList( const PropList& original );
-
- // Creates a default property list or creates a copy of an
-   // existing property list, given the property list id.
- PropList( const hid_t plist_id );
-
- virtual ~PropList();
-
-// end of class PropList
-
-// Class RefCounter provides a reference counting mechanism. It is used
-// mainly by IdComponent to keep track of the references to an HDF5 object
-// identifier.
-class RefCounter
-
- // Returns the value of the counter.
- int getCounter () const;
-
- // Increments and decrements the counter.
- void increment();
- void decrement();
-
- // This bool function is used to determine whether to close an
-   // HDF5 object when there are no more references to that object.
-   // It decrements the counter, then returns true if no other
-   // objects reference the associated identifier. When the
- // function returns true, the associated identifier can be closed
- // safely.
- bool noReference();
-
- // Default constructor
- RefCounter();
-
- ~RefCounter();
-
-// end of class RefCounter
-
-
-// Class StrType inherits from AtomType and provides access to a
-// string datatype.
-class StrType : public AtomType
- public:
- // Creates a string type using a predefined type.
- StrType( const PredType& pred_type );
-
- // Gets the string datatype of the specified dataset.
- StrType( const DataSet& dataset );
-
- // Returns the character set type of this string datatype.
- H5T_cset_t getCset() const;
-
- // Sets character set to be used.
- void setCset( H5T_cset_t cset ) const;
-
- // Returns the string padding method for this string datatype.
- H5T_str_t getStrpad() const;
-
- // Defines the storage mechanism for character strings.
- void setStrpad( H5T_str_t strpad ) const;
-
- // Default constructor
- StrType();
-
- // Copy constructor
- StrType( const StrType& original );
-
- // Creates a string datatype using an existing id.
- StrType( const hid_t existing_id );
-
- virtual ~StrType();
-// end of class StrType
-
-
-// This template function, resetIdComponent, is used to reset an
-// IdComponent object, which includes closing the associated HDF5
-// identifier if it has no other references.
-// 'Type' can be of the following classes: Attribute, DataSet, DataSpace,
-// DataType, H5File, Group, and PropList.
-template
-
-
-
-
-
-HDF Help Desk
-
-Describes HDF5 Release 1.7, the unreleased development branch; working toward HDF5 Release 1.8.0
-
-
-Last modified: 17 December 2000
-
-
-
-
diff --git a/doc/html/cpplus/CppUserNotes.doc b/doc/html/cpplus/CppUserNotes.doc
deleted file mode 100644
index c14d3d6..0000000
Binary files a/doc/html/cpplus/CppUserNotes.doc and /dev/null differ
diff --git a/doc/html/cpplus/CppUserNotes.pdf b/doc/html/cpplus/CppUserNotes.pdf
deleted file mode 100644
index 7d0064f..0000000
Binary files a/doc/html/cpplus/CppUserNotes.pdf and /dev/null differ
diff --git a/doc/html/cpplus/Makefile.am b/doc/html/cpplus/Makefile.am
deleted file mode 100644
index 81af45e..0000000
--- a/doc/html/cpplus/Makefile.am
+++ /dev/null
@@ -1,17 +0,0 @@
-# HDF5 Library Doc Makefile(.in)
-#
-# Copyright (C) 1997, 2002
-# National Center for Supercomputing Applications.
-# All rights reserved.
-#
-##
-## Makefile.am
-## Run automake to generate a Makefile.in from this file.
-#
-
-include $(top_srcdir)/config/commence-doc.am
-
-localdocdir = $(docdir)/hdf5/cpplus
-
-# Public doc files (to be installed)...
-localdoc_DATA=CppInterfaces.html CppUserNotes.doc CppUserNotes.pdf
diff --git a/doc/html/cpplus/Makefile.in b/doc/html/cpplus/Makefile.in
deleted file mode 100644
index 434d2d7..0000000
--- a/doc/html/cpplus/Makefile.in
+++ /dev/null
@@ -1,485 +0,0 @@
-# Makefile.in generated by automake 1.9.5 from Makefile.am.
-# @configure_input@
-
-# Copyright (C) 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002,
-# 2003, 2004, 2005 Free Software Foundation, Inc.
-# This Makefile.in is free software; the Free Software Foundation
-# gives unlimited permission to copy and/or distribute it,
-# with or without modifications, as long as this notice is preserved.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
-# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
-# PARTICULAR PURPOSE.
-
-@SET_MAKE@
-
-# HDF5 Library Doc Makefile(.in)
-#
-# Copyright (C) 1997, 2002
-# National Center for Supercomputing Applications.
-# All rights reserved.
-#
-#
-
-srcdir = @srcdir@
-top_srcdir = @top_srcdir@
-VPATH = @srcdir@
-pkgdatadir = $(datadir)/@PACKAGE@
-pkglibdir = $(libdir)/@PACKAGE@
-pkgincludedir = $(includedir)/@PACKAGE@
-top_builddir = ../../..
-am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd
-INSTALL = @INSTALL@
-install_sh_DATA = $(install_sh) -c -m 644
-install_sh_PROGRAM = $(install_sh) -c
-install_sh_SCRIPT = $(install_sh) -c
-INSTALL_HEADER = $(INSTALL_DATA)
-transform = $(program_transform_name)
-NORMAL_INSTALL = :
-PRE_INSTALL = :
-POST_INSTALL = :
-NORMAL_UNINSTALL = :
-PRE_UNINSTALL = :
-POST_UNINSTALL = :
-build_triplet = @build@
-host_triplet = @host@
-DIST_COMMON = $(srcdir)/Makefile.am $(srcdir)/Makefile.in \
- $(top_srcdir)/config/commence-doc.am \
- $(top_srcdir)/config/commence.am
-subdir = doc/html/cpplus
-ACLOCAL_M4 = $(top_srcdir)/aclocal.m4
-am__aclocal_m4_deps = $(top_srcdir)/configure.in
-am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \
- $(ACLOCAL_M4)
-mkinstalldirs = $(SHELL) $(top_srcdir)/bin/mkinstalldirs
-CONFIG_HEADER = $(top_builddir)/src/H5config.h
-CONFIG_CLEAN_FILES =
-SOURCES =
-DIST_SOURCES =
-am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`;
-am__vpath_adj = case $$p in \
- $(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \
- *) f=$$p;; \
- esac;
-am__strip_dir = `echo $$p | sed -e 's|^.*/||'`;
-am__installdirs = "$(DESTDIR)$(localdocdir)"
-localdocDATA_INSTALL = $(INSTALL_DATA)
-DATA = $(localdoc_DATA)
-DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST)
-
-# Set the paths for AFS installs of autotools for Linux machines
-# Ideally, these tools should never be needed during the build.
-ACLOCAL = /afs/ncsa/projects/hdf/packages/automake_1.9.5/Linux_2.4/bin/aclocal -I /afs/ncsa/projects/hdf/packages/libtool_1.5.14/Linux_2.4/share/aclocal
-ADD_PARALLEL_FILES = @ADD_PARALLEL_FILES@
-AMDEP_FALSE = @AMDEP_FALSE@
-AMDEP_TRUE = @AMDEP_TRUE@
-AMTAR = @AMTAR@
-AM_MAKEFLAGS = @AM_MAKEFLAGS@
-AR = @AR@
-AUTOCONF = /afs/ncsa/projects/hdf/packages/autoconf_2.59/Linux_2.4/bin/autoconf
-AUTOHEADER = /afs/ncsa/projects/hdf/packages/autoconf_2.59/Linux_2.4/bin/autoheader
-AUTOMAKE = /afs/ncsa/projects/hdf/packages/automake_1.9.5/Linux_2.4/bin/automake
-AWK = @AWK@
-BUILD_CXX_CONDITIONAL_FALSE = @BUILD_CXX_CONDITIONAL_FALSE@
-BUILD_CXX_CONDITIONAL_TRUE = @BUILD_CXX_CONDITIONAL_TRUE@
-BUILD_FORTRAN_CONDITIONAL_FALSE = @BUILD_FORTRAN_CONDITIONAL_FALSE@
-BUILD_FORTRAN_CONDITIONAL_TRUE = @BUILD_FORTRAN_CONDITIONAL_TRUE@
-BUILD_HDF5_HL_CONDITIONAL_FALSE = @BUILD_HDF5_HL_CONDITIONAL_FALSE@
-BUILD_HDF5_HL_CONDITIONAL_TRUE = @BUILD_HDF5_HL_CONDITIONAL_TRUE@
-BUILD_PABLO_CONDITIONAL_FALSE = @BUILD_PABLO_CONDITIONAL_FALSE@
-BUILD_PABLO_CONDITIONAL_TRUE = @BUILD_PABLO_CONDITIONAL_TRUE@
-BUILD_PARALLEL_CONDITIONAL_FALSE = @BUILD_PARALLEL_CONDITIONAL_FALSE@
-BUILD_PARALLEL_CONDITIONAL_TRUE = @BUILD_PARALLEL_CONDITIONAL_TRUE@
-BUILD_PDB2HDF = @BUILD_PDB2HDF@
-BUILD_PDB2HDF_CONDITIONAL_FALSE = @BUILD_PDB2HDF_CONDITIONAL_FALSE@
-BUILD_PDB2HDF_CONDITIONAL_TRUE = @BUILD_PDB2HDF_CONDITIONAL_TRUE@
-BYTESEX = @BYTESEX@
-CC = @CC@
-CCDEPMODE = @CCDEPMODE@
-CC_VERSION = @CC_VERSION@
-CFLAGS = @CFLAGS@
-CONFIG_DATE = @CONFIG_DATE@
-CONFIG_MODE = @CONFIG_MODE@
-CONFIG_USER = @CONFIG_USER@
-CPP = @CPP@
-CPPFLAGS = @CPPFLAGS@
-CXX = @CXX@
-CXXCPP = @CXXCPP@
-CXXDEPMODE = @CXXDEPMODE@
-CXXFLAGS = @CXXFLAGS@
-CYGPATH_W = @CYGPATH_W@
-DEBUG_PKG = @DEBUG_PKG@
-DEFS = @DEFS@
-DEPDIR = @DEPDIR@
-DYNAMIC_DIRS = @DYNAMIC_DIRS@
-ECHO = @ECHO@
-ECHO_C = @ECHO_C@
-ECHO_N = @ECHO_N@
-ECHO_T = @ECHO_T@
-EGREP = @EGREP@
-EXEEXT = @EXEEXT@
-F77 = @F77@
-
-# Make sure that these variables are exported to the Makefiles
-F9XMODEXT = @F9XMODEXT@
-F9XMODFLAG = @F9XMODFLAG@
-F9XSUFFIXFLAG = @F9XSUFFIXFLAG@
-FC = @FC@
-FCFLAGS = @FCFLAGS@
-FCLIBS = @FCLIBS@
-FFLAGS = @FFLAGS@
-FILTERS = @FILTERS@
-FSEARCH_DIRS = @FSEARCH_DIRS@
-H5_VERSION = @H5_VERSION@
-HADDR_T = @HADDR_T@
-HDF5_INTERFACES = @HDF5_INTERFACES@
-HID_T = @HID_T@
-HL = @HL@
-HL_FOR = @HL_FOR@
-HSIZET = @HSIZET@
-HSIZE_T = @HSIZE_T@
-HSSIZE_T = @HSSIZE_T@
-INSTALL_DATA = @INSTALL_DATA@
-INSTALL_PROGRAM = @INSTALL_PROGRAM@
-INSTALL_SCRIPT = @INSTALL_SCRIPT@
-INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@
-INSTRUMENT_LIBRARY = @INSTRUMENT_LIBRARY@
-LDFLAGS = @LDFLAGS@
-LIBOBJS = @LIBOBJS@
-LIBS = @LIBS@
-LIBTOOL = @LIBTOOL@
-LN_S = @LN_S@
-LTLIBOBJS = @LTLIBOBJS@
-LT_STATIC_EXEC = @LT_STATIC_EXEC@
-MAINT = @MAINT@
-MAINTAINER_MODE_FALSE = @MAINTAINER_MODE_FALSE@
-MAINTAINER_MODE_TRUE = @MAINTAINER_MODE_TRUE@
-MAKEINFO = @MAKEINFO@
-MPE = @MPE@
-OBJECT_NAMELEN_DEFAULT_F = @OBJECT_NAMELEN_DEFAULT_F@
-OBJEXT = @OBJEXT@
-PACKAGE = @PACKAGE@
-PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@
-PACKAGE_NAME = @PACKAGE_NAME@
-PACKAGE_STRING = @PACKAGE_STRING@
-PACKAGE_TARNAME = @PACKAGE_TARNAME@
-PACKAGE_VERSION = @PACKAGE_VERSION@
-PARALLEL = @PARALLEL@
-PATH_SEPARATOR = @PATH_SEPARATOR@
-PERL = @PERL@
-PTHREAD = @PTHREAD@
-RANLIB = @RANLIB@
-ROOT = @ROOT@
-RUNPARALLEL = @RUNPARALLEL@
-RUNSERIAL = @RUNSERIAL@
-R_INTEGER = @R_INTEGER@
-R_LARGE = @R_LARGE@
-SEARCH = @SEARCH@
-SETX = @SETX@
-SET_MAKE = @SET_MAKE@
-
-# Hardcode SHELL to be /bin/sh. Most machines have this shell, and
-# on at least one machine configure fails to detect its existence (janus).
-# Also, when HDF5 is configured on one machine but run on another,
-# configure's automatic SHELL detection may not work on the build machine.
-SHELL = /bin/sh
-SIZE_T = @SIZE_T@
-STATIC_SHARED = @STATIC_SHARED@
-STRIP = @STRIP@
-TESTPARALLEL = @TESTPARALLEL@
-TRACE_API = @TRACE_API@
-USE_FILTER_DEFLATE = @USE_FILTER_DEFLATE@
-USE_FILTER_FLETCHER32 = @USE_FILTER_FLETCHER32@
-USE_FILTER_NBIT = @USE_FILTER_NBIT@
-USE_FILTER_SCALEOFFSET = @USE_FILTER_SCALEOFFSET@
-USE_FILTER_SHUFFLE = @USE_FILTER_SHUFFLE@
-USE_FILTER_SZIP = @USE_FILTER_SZIP@
-VERSION = @VERSION@
-ac_ct_AR = @ac_ct_AR@
-ac_ct_CC = @ac_ct_CC@
-ac_ct_CXX = @ac_ct_CXX@
-ac_ct_F77 = @ac_ct_F77@
-ac_ct_FC = @ac_ct_FC@
-ac_ct_RANLIB = @ac_ct_RANLIB@
-ac_ct_STRIP = @ac_ct_STRIP@
-am__fastdepCC_FALSE = @am__fastdepCC_FALSE@
-am__fastdepCC_TRUE = @am__fastdepCC_TRUE@
-am__fastdepCXX_FALSE = @am__fastdepCXX_FALSE@
-am__fastdepCXX_TRUE = @am__fastdepCXX_TRUE@
-am__include = @am__include@
-am__leading_dot = @am__leading_dot@
-am__quote = @am__quote@
-am__tar = @am__tar@
-am__untar = @am__untar@
-bindir = @bindir@
-build = @build@
-build_alias = @build_alias@
-build_cpu = @build_cpu@
-build_os = @build_os@
-build_vendor = @build_vendor@
-datadir = @datadir@
-exec_prefix = @exec_prefix@
-host = @host@
-host_alias = @host_alias@
-host_cpu = @host_cpu@
-host_os = @host_os@
-host_vendor = @host_vendor@
-
-# Install directories that automake doesn't know about
-includedir = $(exec_prefix)/include
-infodir = @infodir@
-install_sh = @install_sh@
-libdir = @libdir@
-libexecdir = @libexecdir@
-localstatedir = @localstatedir@
-mandir = @mandir@
-mkdir_p = @mkdir_p@
-oldincludedir = @oldincludedir@
-prefix = @prefix@
-program_transform_name = @program_transform_name@
-sbindir = @sbindir@
-sharedstatedir = @sharedstatedir@
-sysconfdir = @sysconfdir@
-target_alias = @target_alias@
-
-# Shell commands used in Makefiles
-RM = rm -f
-CP = cp
-
-# Some machines need a command to run executables; this is that command
-# so that our tests will run.
-# We use RUNTESTS instead of RUNSERIAL directly because it may be that
-# some tests need to be run with a different command. Older versions
-# of the makefiles used the command
-# $(LIBTOOL) --mode=execute
-# in some directories, for instance.
-RUNTESTS = $(RUNSERIAL)
-
-# Libraries to link to while building
-LIBHDF5 = $(top_builddir)/src/libhdf5.la
-LIBH5TEST = $(top_builddir)/test/libh5test.la
-LIBH5F = $(top_builddir)/fortran/src/libhdf5_fortran.la
-LIBH5FTEST = $(top_builddir)/fortran/test/libh5test_fortran.la
-LIBH5CPP = $(top_builddir)/c++/src/libhdf5_cpp.la
-LIBH5TOOLS = $(top_builddir)/tools/lib/libh5tools.la
-LIBH5_HL = $(top_builddir)/hl/src/libhdf5_hl.la
-LIBH5F_HL = $(top_builddir)/hl/fortran/src/libhdf5hl_fortran.la
-LIBH5CPP_HL = $(top_builddir)/hl/c++/src/libhdf5_hl_cpp.la
-docdir = $(exec_prefix)/doc
-
-# Scripts used to build examples
-H5CC = $(bindir)/h5cc
-H5CC_PP = $(bindir)/h5pcc
-H5FC = $(bindir)/h5fc
-H5FC_PP = $(bindir)/h5pfc
-
-# .chkexe and .chksh files are used to mark tests that have run successfully.
-MOSTLYCLEANFILES = *.chkexe *.chksh
-localdocdir = $(docdir)/hdf5/cpplus
-
-# Public doc files (to be installed)...
-localdoc_DATA = CppInterfaces.html CppUserNotes.doc CppUserNotes.pdf
-all: all-am
-
-.SUFFIXES:
-$(srcdir)/Makefile.in: @MAINTAINER_MODE_TRUE@ $(srcdir)/Makefile.am $(top_srcdir)/config/commence-doc.am $(top_srcdir)/config/commence.am $(am__configure_deps)
- @for dep in $?; do \
- case '$(am__configure_deps)' in \
- *$$dep*) \
- cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh \
- && exit 0; \
- exit 1;; \
- esac; \
- done; \
- echo ' cd $(top_srcdir) && $(AUTOMAKE) --foreign doc/html/cpplus/Makefile'; \
- cd $(top_srcdir) && \
- $(AUTOMAKE) --foreign doc/html/cpplus/Makefile
-.PRECIOUS: Makefile
-Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status
- @case '$?' in \
- *config.status*) \
- cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \
- *) \
- echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \
- cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \
- esac;
-
-$(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES)
- cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
-
-$(top_srcdir)/configure: @MAINTAINER_MODE_TRUE@ $(am__configure_deps)
- cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
-$(ACLOCAL_M4): @MAINTAINER_MODE_TRUE@ $(am__aclocal_m4_deps)
- cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
-
-mostlyclean-libtool:
- -rm -f *.lo
-
-clean-libtool:
- -rm -rf .libs _libs
-
-distclean-libtool:
- -rm -f libtool
-uninstall-info-am:
-install-localdocDATA: $(localdoc_DATA)
- @$(NORMAL_INSTALL)
- test -z "$(localdocdir)" || $(mkdir_p) "$(DESTDIR)$(localdocdir)"
- @list='$(localdoc_DATA)'; for p in $$list; do \
- if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \
- f=$(am__strip_dir) \
- echo " $(localdocDATA_INSTALL) '$$d$$p' '$(DESTDIR)$(localdocdir)/$$f'"; \
- $(localdocDATA_INSTALL) "$$d$$p" "$(DESTDIR)$(localdocdir)/$$f"; \
- done
-
-uninstall-localdocDATA:
- @$(NORMAL_UNINSTALL)
- @list='$(localdoc_DATA)'; for p in $$list; do \
- f=$(am__strip_dir) \
- echo " rm -f '$(DESTDIR)$(localdocdir)/$$f'"; \
- rm -f "$(DESTDIR)$(localdocdir)/$$f"; \
- done
-tags: TAGS
-TAGS:
-
-ctags: CTAGS
-CTAGS:
-
-
-distdir: $(DISTFILES)
- $(mkdir_p) $(distdir)/../../../config
- @srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; \
- topsrcdirstrip=`echo "$(top_srcdir)" | sed 's|.|.|g'`; \
- list='$(DISTFILES)'; for file in $$list; do \
- case $$file in \
- $(srcdir)/*) file=`echo "$$file" | sed "s|^$$srcdirstrip/||"`;; \
- $(top_srcdir)/*) file=`echo "$$file" | sed "s|^$$topsrcdirstrip/|$(top_builddir)/|"`;; \
- esac; \
- if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \
- dir=`echo "$$file" | sed -e 's,/[^/]*$$,,'`; \
- if test "$$dir" != "$$file" && test "$$dir" != "."; then \
- dir="/$$dir"; \
- $(mkdir_p) "$(distdir)$$dir"; \
- else \
- dir=''; \
- fi; \
- if test -d $$d/$$file; then \
- if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \
- cp -pR $(srcdir)/$$file $(distdir)$$dir || exit 1; \
- fi; \
- cp -pR $$d/$$file $(distdir)$$dir || exit 1; \
- else \
- test -f $(distdir)/$$file \
- || cp -p $$d/$$file $(distdir)/$$file \
- || exit 1; \
- fi; \
- done
-check-am: all-am
-check: check-am
-all-am: Makefile $(DATA)
-installdirs:
- for dir in "$(DESTDIR)$(localdocdir)"; do \
- test -z "$$dir" || $(mkdir_p) "$$dir"; \
- done
-install: install-am
-install-exec: install-exec-am
-install-data: install-data-am
-uninstall: uninstall-am
-
-install-am: all-am
- @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am
-
-installcheck: installcheck-am
-install-strip:
- $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
- install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
- `test -z '$(STRIP)' || \
- echo "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'"` install
-mostlyclean-generic:
- -test -z "$(MOSTLYCLEANFILES)" || rm -f $(MOSTLYCLEANFILES)
-
-clean-generic:
-
-distclean-generic:
- -test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES)
-
-maintainer-clean-generic:
- @echo "This command is intended for maintainers to use"
- @echo "it deletes files that may require special tools to rebuild."
-clean: clean-am
-
-clean-am: clean-generic clean-libtool mostlyclean-am
-
-distclean: distclean-am
- -rm -f Makefile
-distclean-am: clean-am distclean-generic distclean-libtool
-
-dvi: dvi-am
-
-dvi-am:
-
-html: html-am
-
-info: info-am
-
-info-am:
-
-install-data-am: install-localdocDATA
-
-install-exec-am:
-
-install-info: install-info-am
-
-install-man:
-
-installcheck-am:
-
-maintainer-clean: maintainer-clean-am
- -rm -f Makefile
-maintainer-clean-am: distclean-am maintainer-clean-generic
-
-mostlyclean: mostlyclean-am
-
-mostlyclean-am: mostlyclean-generic mostlyclean-libtool
-
-pdf: pdf-am
-
-pdf-am:
-
-ps: ps-am
-
-ps-am:
-
-uninstall-am: uninstall-info-am uninstall-localdocDATA
-
-.PHONY: all all-am check check-am clean clean-generic clean-libtool \
- distclean distclean-generic distclean-libtool distdir dvi \
- dvi-am html html-am info info-am install install-am \
- install-data install-data-am install-exec install-exec-am \
- install-info install-info-am install-localdocDATA install-man \
- install-strip installcheck installcheck-am installdirs \
- maintainer-clean maintainer-clean-generic mostlyclean \
- mostlyclean-generic mostlyclean-libtool pdf pdf-am ps ps-am \
- uninstall uninstall-am uninstall-info-am \
- uninstall-localdocDATA
-
-
-# Ignore most rules
-lib progs check test _test check-p check-s:
- @echo "Nothing to be done"
-
-tests dep depend:
- @@SETX@; for d in X $(SUBDIRS); do \
- if test $$d != X; then \
- (cd $$d && $(MAKE) $(AM_MAKEFLAGS) $@) || exit 1; \
-	  fi; \
- done
-
-# In docs directory, install-doc is the same as install
-install-doc install-all:
- $(MAKE) $(AM_MAKEFLAGS) install
-uninstall-doc uninstall-all:
- $(MAKE) $(AM_MAKEFLAGS) uninstall
-# Tell versions [3.59,3.63) of GNU make to not export all variables.
-# Otherwise a system limit (for SysV at least) may be exceeded.
-.NOEXPORT:
diff --git a/doc/html/dataset_p1.gif b/doc/html/dataset_p1.gif
deleted file mode 100644
index 1e7cea0..0000000
Binary files a/doc/html/dataset_p1.gif and /dev/null differ
diff --git a/doc/html/dataset_p1.obj b/doc/html/dataset_p1.obj
deleted file mode 100644
index 42d66fc..0000000
--- a/doc/html/dataset_p1.obj
+++ /dev/null
@@ -1,32 +0,0 @@
-%TGIF 3.0-p5
-state(0,33,100,0,0,0,16,1,9,1,1,0,0,1,0,1,1,'Helvetica',0,24,0,0,0,10,0,0,1,1,0,16,0,0,1,1,1,0,1088,1408,0,0,2880).
-%
-% @(#)$Header$
-% %W%
-%
-unit("1 pixel/pixel").
-page(1,"",1).
-box('black',128,240,288,432,4,1,1,26,0,0,0,0,0,'1',[
-]).
-box('black',400,272,464,400,4,1,1,27,0,0,0,0,0,'1',[
-]).
-box('black',192,304,224,368,6,1,1,28,0,0,0,0,0,'1',[
-]).
-box('black',400,272,432,336,6,1,1,29,0,0,0,0,0,'1',[
-]).
-poly('black',2,[
- 224,304,400,272],1,1,1,32,0,0,0,0,8,3,0,0,0,'1','8','3',
- "0",[
-]).
-poly('black',2,[
- 224,368,400,336],1,1,1,33,0,0,0,0,8,3,0,0,0,'1','8','3',
- "0",[
-]).
-text('black',208,208,'Helvetica',0,20,1,1,0,1,77,17,40,0,14,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "File Dataset"]).
-text('black',432,208,'Helvetica',0,20,1,1,0,1,106,17,42,0,14,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "Memory Dataset"]).
-text('black',320,144,'Helvetica',0,24,1,1,0,1,206,29,68,0,24,5,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "Raw Data Transfer"]).
-box('black',96,128,512,464,0,1,1,70,0,0,0,0,0,'1',[
-]).
diff --git a/doc/html/ddl.html b/doc/html/ddl.html
deleted file mode 100644
index fb0596e..0000000
--- a/doc/html/ddl.html
+++ /dev/null
@@ -1,579 +0,0 @@
-
-
-
-
-
-
-
-
-This document is part of the HDF5 User's Guide from Release 1.4.5.
-
-DDL in BNF for HDF5
-
-
-1. Introduction
-
-This document contains the data description language (DDL) for an HDF5 file.
-The description is in Backus-Naur Form.
-
-2. Explanation of Symbols
-
-This section contains a brief explanation of the symbols used in the DDL.
-
-
- ::= defined as
- <tname> a token with the name tname
- <a> | <b> one of <a> or <b>
- <a>opt zero or one occurrence of <a>
- <a>* zero or more occurrence of <a>
- <a>+ one or more occurrence of <a>
- [0-9] an element in the range between 0 and 9
- `[' the token within the quotes (used for special characters)
- TBD To Be Decided
-
-
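-As a preview of the grammar defined in the next section, the fragment below
-is one possible instance of it: a file whose root group carries a single
-string-valued attribute (the attribute's dataspace and data are elided):

```
HDF5 "example.h5" {
GROUP "/" {
   ATTRIBUTE "author" {
      DATATYPE  H5T_STRING {
            STRSIZE 5;
            STRPAD H5T_STR_NULLTERM;
            CSET H5T_CSET_ASCII;
            CTYPE H5T_C_S1;
      }
      ...
   }
}
}
```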
-3. The DDL
-
-
-<file> ::= HDF5 <file_name> { <file_super_block>opt <root_group> }
-
-<file_name> ::= <identifier>
-
-<file_super_block> ::= BOOT_BLOCK { <super_block_content> }
-
-<super_block_content> ::= TBD
-
-<root_group> ::= GROUP "/" {
- <unamed_datatype>*
- <object_id>opt
- <group_comment>opt
- <group_attribute>*
- <group_member>*
- }
-
-<datatype> ::= <atomic_type> | <compound_type> | <variable_length_type> | <array_type>
-
-<unamed_datatype> ::= DATATYPE <unamed_type_name> { <datatype> }
-
-<unamed_type_name> ::= the assigned name for an unnamed type is of the form
-                       #oid, where oid is the object id of the type
-
-<atomic_type> ::= <integer> | <float> | <time> | <string> |
- <bitfield> | <opaque> | <reference> | <enum>
-
-<integer> ::= H5T_STD_I8BE | H5T_STD_I8LE |
- H5T_STD_I16BE | H5T_STD_I16LE |
- H5T_STD_I32BE | H5T_STD_I32LE |
- H5T_STD_I64BE | H5T_STD_I64LE |
- H5T_STD_U8BE | H5T_STD_U8LE |
- H5T_STD_U16BE | H5T_STD_U16LE |
- H5T_STD_U32BE | H5T_STD_U32LE |
- H5T_STD_U64BE | H5T_STD_U64LE |
- H5T_NATIVE_CHAR | H5T_NATIVE_UCHAR |
- H5T_NATIVE_SHORT | H5T_NATIVE_USHORT |
- H5T_NATIVE_INT | H5T_NATIVE_UINT |
- H5T_NATIVE_LONG | H5T_NATIVE_ULONG |
- H5T_NATIVE_LLONG | H5T_NATIVE_ULLONG
-
-<float> ::= H5T_IEEE_F32BE | H5T_IEEE_F32LE |
- H5T_IEEE_F64BE | H5T_IEEE_F64LE |
- H5T_NATIVE_FLOAT | H5T_NATIVE_DOUBLE |
- H5T_NATIVE_LDOUBLE
-
-<time> ::= TBD
-
-<string> ::= H5T_STRING { STRSIZE <strsize> ;
- STRPAD <strpad> ;
- CSET <cset> ;
- CTYPE <ctype> ; }
-
-<strsize> ::= <int_value>
-
-<strpad> ::= H5T_STR_NULLTERM | H5T_STR_NULLPAD | H5T_STR_SPACEPAD
-
-<cset> ::= H5T_CSET_ASCII
-
-<ctype> ::= H5T_C_S1 | H5T_FORTRAN_S1
-
-<bitfield> ::= TBD
-
-<opaque> ::= H5T_OPAQUE { <identifier> }
-
-<reference> ::= H5T_REFERENCE { <ref_type> }
-
-<ref_type> ::= H5T_STD_REF_OBJECT | H5T_STD_REF_DSETREG
-
-<compound_type> ::= H5T_COMPOUND { <member_type_def>+ }
-
-<member_type_def> ::= <datatype> <field_name> ;
-
-<field_name> ::= <identifier>
-
-<variable_length_type> ::= H5T_VLEN { <datatype> }
-
-<array_type> ::= H5T_ARRAY { <dim_sizes> <datatype> }
-
-<dim_sizes> ::= `['<dimsize>`]' | `['<dimsize>`]'<dim_sizes>
-
-<dimsize> ::= <int_value>
-
-<attribute> ::= ATTRIBUTE <attr_name> { <dataset_type>
- <dataset_space>
- <data>opt }
-
-<attr_name> ::= <identifier>
-
-<dataset_type> ::= DATATYPE <path_name> | <datatype>
-
-<enum> ::= H5T_ENUM { <enum_base_type> <enum_def>+ }
-
-<enum_base_type> ::= <integer>
-// Currently, enums can hold only integer data, but they may be expanded
-// in the future to hold any datatype.
-
-<enum_def> ::= <enum_symbol> <enum_val>;
-
-<enum_symbol> ::= <identifier>
-
-<enum_val> ::= <int_value>
-
-<path_name> ::= <path_part>+
-
-<path_part> ::= /<identifier>
-
-<dataspace> ::= <scalar_space> | <simple_space> | <complex_space>
-
-<scalar_space> ::= SCALAR
-
-<simple_space> ::= SIMPLE { <current_dims> / <max_dims> }
-
-<complex_space> ::= COMPLEX { <complex_space_definition> }
-
-<dataset_space> ::= DATASPACE <path_name> | <dataspace>
-
-<current_dims> ::= <dims>
-
-<max_dims> ::= `(' <max_dim_list> `)'
-
-<max_dim_list> ::= <max_dim> | <max_dim>, <max_dim_list>
-
-<max_dim> ::= <int_value> | H5S_UNLIMITED
-
-<complex_space_definition> ::= TBD
-
-<data> ::= DATA { <scalar_space_data> | <simple_space_data> | <complex_space_data> } | <subset>
-
-<scalar_space_data> ::= <any_element>
-
-<any_element> ::= <atomic_element> | <compound_element> |
- <variable_length_element> | <array_element>
-
-<any_data_seq> ::= <any_element> | <any_element>, <any_data_seq>
-
-<atomic_element> ::= <integer_data> | <float_data> | <time_data> |
- <string_data> | <bitfield_data> | <opaque_data> |
- <enum_data> | <reference_data>
-
-<subset> ::= SUBSET { <start>;
- <stride>;
- <count>;
- <block>;
- DATA { <simple_space_data> }
- }
-
-<start> ::= START (<coor_list>)
-
-<stride> ::= STRIDE (<pos_list>)
-
-<count> ::= COUNT (<coor_list>)
-
-<block> ::= BLOCK (<coor_list>)
-
-<coor_list> ::= <int_value>, <coor_list> | <int_value>
-
-<integer_data> ::= <int_value>
-
-<float_data> ::= a floating point number
-
-<time_data> ::= TBD
-
-<string_data> ::= a string
-// A string is enclosed in double quotes.
-// If a string spans more than one line, the string concatenation
-// operator '//' is used.
-
-<bitfield_data> ::= TBD
-
-<opaque_data> ::= TBD
-
-<enum_data> ::= <enum_symbol>
-
-<reference_data> ::= <object_ref_data> | <data_region_data> | NULL
-
-<object_ref_data> ::= <object_type> <object_num>
-
-<object_type> ::= DATASET | GROUP | DATATYPE
-
-<object_id> ::= OBJECTID { <object_num> }
-
-<object_num> ::= <int_value>:<int_value> | <int_value>
-
-<data_region_data> ::= H5T_STD_REF_DSETREG <object_num> { <data_region_data_list> }
-
-<data_region_data_list> ::= <data_region_data_info>, <data_region_data_list> | <data_region_data_info>
-
-<data_region_data_info> ::= <region_info> | <point_info>
-
-<region_info> ::= (<region_vals>)
-
-<region_vals> ::= <lower_bound>:<upper_bound>, <region_vals> | <lower_bound>:<upper_bound>
-
-<lower_bound> ::= <int_value>
-
-<upper_bound> ::= <int_value>
-
-<point_info> ::= (<point_vals>)
-
-<point_vals> ::= <int_value> | <int_value>, <point_vals>
-
-<compound_element> ::= { <any_data_seq> }
-
-<atomic_simple_data> ::= <atomic_element>, <atomic_simple_data> | <atomic_element>
-
-<simple_space_data> ::= <any_data_seq>
-
-<variable_length_element> ::= ( <any_data_seq> )
-
-<array_element> ::= `[' <any_data_seq> `]'
-
-<complex_space_data> ::= TBD
-
-<named_datatype> ::= DATATYPE <type_name> { <datatype> }
-
-<type_name> ::= <identifier>
-
-<named_dataspace> ::= TBD
-
-<hardlink> ::= HARDLINK <path_name>
-
-<group> ::= GROUP <group_name> { <hardlink> | <group_info> }
-
-<group_comment> ::= COMMENT <string_data>
-
-<group_name> ::= <identifier>
-
-<group_info> ::= <object_id>opt <group_comment>opt <group_attribute>* <group_member>*
-
-<group_attribute> ::= <attribute>
-
-<group_member> ::= <named_datatype> | <named_dataspace> | <group> |
- <dataset> | <softlink>
-
-<dataset> ::= DATASET <dataset_name> { <hardlink> | <dataset_info> }
-
-<dataset_info> ::= <dataset_type> <dataset_space> <storagelayout>opt
- <compression>opt <dataset_attribute>* <object_id>opt
- <data>opt
-// Tokens above can be in any order as long as <data> is
-// after <dataset_type> and <dataset_space>.
-
-<dataset_name> ::= <identifier>
-
-<storagelayout> ::= STORAGELAYOUT <contiguous_layout> |
- STORAGELAYOUT <chunked_layout> |
- STORAGELAYOUT <compact_layout> |
- STORAGELAYOUT <external_layout>
-
-<contiguous_layout> ::= {CONTIGUOUS} // default
-
-<chunked_layout> ::= {CHUNKED <dims> }
-
-<dims> ::= (<dims_values>)
-
-<dims_values> ::= <int_value> | <int_value>, <dims_values>
-
-<compact_layout> ::= TBD
-
-<external_layout> ::= {EXTERNAL <external_file>+ }
-
-<external_file> ::= (<file_name> <offset> <size>)
-
-<offset> ::= <int_value>
-
-<size> ::= <int_value>
-
-<compression> ::= COMPRESSION { TBD }
-
-<dataset_attribute> ::= <attribute>
-
-<softlink> ::= SOFTLINK <softlink_name> { LINKTARGET <target> }
-
-<softlink_name> ::= <identifier>
-
-<target> ::= <identifier>
-
-<identifier> ::= a string
-// The character '/' should be used with care.
-
-<pos_list> ::= <pos_int>, <pos_list> | <pos_int>
-
-<int_value> ::= 0 | <pos_int>
-
-<pos_int> ::= [1-9][0-9]*
-
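The terminal rules for integers translate directly into regular expressions. A minimal Python check of `<int_value>`, `<pos_int>`, and `<coor_list>`, derived from the productions above (a sketch, not an official HDF5 tool):

```python
import re

# <pos_int>   ::= [1-9][0-9]*
# <int_value> ::= 0 | <pos_int>
# <coor_list> ::= <int_value>, <coor_list> | <int_value>
POS_INT   = r"[1-9][0-9]*"
INT_VALUE = rf"(?:0|{POS_INT})"
COOR_LIST = re.compile(rf"^{INT_VALUE}(?:, {INT_VALUE})*$")

def is_int_value(s: str) -> bool:
    """True if s is derivable from <int_value>."""
    return re.fullmatch(INT_VALUE, s) is not None

print(is_int_value("0"))                 # True
print(is_int_value("42"))                # True
print(is_int_value("007"))               # False: the grammar has no leading zeros
print(bool(COOR_LIST.match("1, 2, 3")))  # True
```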
-
-4. An Example of an HDF5 File in DDL
-
-
-HDF5 "example.h5" {
-GROUP "/" {
- ATTRIBUTE "attr1" {
- DATATYPE H5T_STRING {
- STRSIZE 17;
- STRPAD H5T_STR_NULLTERM;
- CSET H5T_CSET_ASCII;
- CTYPE H5T_C_S1;
- }
- DATASPACE SCALAR
- DATA {
- "string attribute"
- }
- }
- DATASET "dset1" {
- DATATYPE H5T_STD_I32BE
- DATASPACE SIMPLE { ( 10, 10 ) / ( 10, 10 ) }
- DATA {
- 0, 1, 2, 3, 4, 5, 6, 7, 8, 9,
- 0, 1, 2, 3, 4, 5, 6, 7, 8, 9,
- 0, 1, 2, 3, 4, 5, 6, 7, 8, 9,
- 0, 1, 2, 3, 4, 5, 6, 7, 8, 9,
- 0, 1, 2, 3, 4, 5, 6, 7, 8, 9,
- 0, 1, 2, 3, 4, 5, 6, 7, 8, 9,
- 0, 1, 2, 3, 4, 5, 6, 7, 8, 9,
- 0, 1, 2, 3, 4, 5, 6, 7, 8, 9,
- 0, 1, 2, 3, 4, 5, 6, 7, 8, 9,
- 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
- }
- }
- DATASET "dset2" {
- DATATYPE H5T_COMPOUND {
- H5T_STD_I32BE "a";
- H5T_IEEE_F32BE "b";
- H5T_IEEE_F64BE "c";
- }
- DATASPACE SIMPLE { ( 5 ) / ( 5 ) }
- DATA {
- {
- 1,
- 0.1,
- 0.01
- },
- {
- 2,
- 0.2,
- 0.02
- },
- {
- 3,
- 0.3,
- 0.03
- },
- {
- 4,
- 0.4,
- 0.04
- },
- {
- 5,
- 0.5,
- 0.05
- }
- }
- }
- GROUP "group1" {
- COMMENT "This is a comment for group1";
- DATASET "dset3" {
- DATATYPE "/type1"
- DATASPACE SIMPLE { ( 5 ) / ( 5 ) }
- DATA {
- {
- [ 0, 1, 2, 3 ],
- [ 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
- 0.2, 0.2, 0.2, 0.2, 0.2, 0.2,
- 0.3, 0.3, 0.3, 0.3, 0.3, 0.3,
- 0.4, 0.4, 0.4, 0.4, 0.4, 0.4,
- 0.5, 0.5, 0.5, 0.5, 0.5, 0.5 ]
- },
- {
- [ 0, 1, 2, 3 ],
- [ 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
- 0.2, 0.2, 0.2, 0.2, 0.2, 0.2,
- 0.3, 0.3, 0.3, 0.3, 0.3, 0.3,
- 0.4, 0.4, 0.4, 0.4, 0.4, 0.4,
- 0.5, 0.5, 0.5, 0.5, 0.5, 0.5 ]
- },
- {
- [ 0, 1, 2, 3 ],
- [ 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
- 0.2, 0.2, 0.2, 0.2, 0.2, 0.2,
- 0.3, 0.3, 0.3, 0.3, 0.3, 0.3,
- 0.4, 0.4, 0.4, 0.4, 0.4, 0.4,
- 0.5, 0.5, 0.5, 0.5, 0.5, 0.5 ]
- },
- {
- [ 0, 1, 2, 3 ],
- [ 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
- 0.2, 0.2, 0.2, 0.2, 0.2, 0.2,
- 0.3, 0.3, 0.3, 0.3, 0.3, 0.3,
- 0.4, 0.4, 0.4, 0.4, 0.4, 0.4,
- 0.5, 0.5, 0.5, 0.5, 0.5, 0.5 ]
- },
- {
- [ 0, 1, 2, 3 ],
- [ 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
- 0.2, 0.2, 0.2, 0.2, 0.2, 0.2,
- 0.3, 0.3, 0.3, 0.3, 0.3, 0.3,
- 0.4, 0.4, 0.4, 0.4, 0.4, 0.4,
- 0.5, 0.5, 0.5, 0.5, 0.5, 0.5 ]
- }
- }
- }
- }
- DATASET "dset3" {
- DATATYPE H5T_VLEN { H5T_STD_I32LE }
- DATASPACE SIMPLE { ( 4 ) / ( 4 ) }
- DATA {
- (0), (10, 11), (20, 21, 22), (30, 31, 32, 33)
- }
- }
- GROUP "group2" {
- HARDLINK "/group1"
- }
- SOFTLINK "slink1" {
- LINKTARGET "somevalue"
- }
- DATATYPE "type1" H5T_COMPOUND {
- H5T_ARRAY { [4] H5T_STD_I32BE } "a";
- H5T_ARRAY { [5][6] H5T_IEEE_F32BE } "b";
- }
-}
-}
-
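Output in this form is what the h5dump tool emits for an HDF5 file. The way the productions nest can also be illustrated by assembling a `<dataset>` fragment by hand; in this Python sketch the function name and formatting are illustrative only, not part of HDF5:

```python
# Build a minimal <dataset> production:
#   DATASET <dataset_name> { <dataset_type> <dataset_space> <data>opt }
def dataset_ddl(name, datatype, dims, values):
    extent = ", ".join(str(d) for d in dims)
    data = ",\n      ".join(str(v) for v in values)
    return (
        f'DATASET "{name}" {{\n'
        f"   DATATYPE {datatype}\n"
        f"   DATASPACE SIMPLE {{ ( {extent} ) / ( {extent} ) }}\n"
        f"   DATA {{\n      {data}\n   }}\n"
        f"}}"
    )

print(dataset_ddl("dset1", "H5T_STD_I32BE", (4,), range(4)))
```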
-
-
-
-
-
- HDF5 documents and links
-
- Introduction to HDF5
- HDF5 Reference Manual
- HDF5 User's Guide for Release 1.6
-
-
- And in this document, the
- HDF5 User's Guide from Release 1.4.5:
-
- Files
- Datasets
- Datatypes
- Dataspaces
- Groups
-
- References
- Attributes
- Property Lists
- Error Handling
-
- Filters
- Caching
- Chunking
- Mounting Files
-
- Performance
- Debugging
- Environment
- DDL
-
-
-HDF Help Desk
-
-Describes HDF5 Release 1.7, the unreleased development branch; working toward HDF5 Release 1.8.0
-
-
-
-Last modified: 17 November 2000
-
-
-
-
diff --git a/doc/html/ed_libs/Footer.lbi b/doc/html/ed_libs/Footer.lbi
deleted file mode 100644
index 8f5031e..0000000
--- a/doc/html/ed_libs/Footer.lbi
+++ /dev/null
@@ -1,5 +0,0 @@
-
-HDF Help Desk
-
-Describes HDF5 Release 1.7, the unreleased development branch; working toward HDF5 Release 1.8.0
-
\ No newline at end of file
diff --git a/doc/html/ed_libs/Makefile.am b/doc/html/ed_libs/Makefile.am
deleted file mode 100644
index 49eb355..0000000
--- a/doc/html/ed_libs/Makefile.am
+++ /dev/null
@@ -1,20 +0,0 @@
-# HDF5 Library Doc Makefile(.in)
-#
-# Copyright (C) 1997, 2002
-# National Center for Supercomputing Applications.
-# All rights reserved.
-#
-##
-## Makefile.am
-## Run automake to generate a Makefile.in from this file.
-#
-
-include $(top_srcdir)/config/commence-doc.am
-
-localdocdir = $(docdir)/hdf5/ed_libs
-
-# Public doc files (to be installed)...
-localdoc_DATA=Footer.lbi NavBar_ADevG.lbi NavBar_Common.lbi NavBar_Intro.lbi \
- NavBar_RM.lbi NavBar_TechN.lbi NavBar_UG.lbi styles_Format.lbi \
- styles_Gen.lbi styles_Index.lbi styles_Intro.lbi styles_RM.lbi \
- styles_UG.lbi
diff --git a/doc/html/ed_libs/Makefile.in b/doc/html/ed_libs/Makefile.in
deleted file mode 100644
index 31803a5..0000000
--- a/doc/html/ed_libs/Makefile.in
+++ /dev/null
@@ -1,489 +0,0 @@
-# Makefile.in generated by automake 1.9.5 from Makefile.am.
-# @configure_input@
-
-# Copyright (C) 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002,
-# 2003, 2004, 2005 Free Software Foundation, Inc.
-# This Makefile.in is free software; the Free Software Foundation
-# gives unlimited permission to copy and/or distribute it,
-# with or without modifications, as long as this notice is preserved.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
-# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
-# PARTICULAR PURPOSE.
-
-@SET_MAKE@
-
-# HDF5 Library Doc Makefile(.in)
-#
-# Copyright (C) 1997, 2002
-# National Center for Supercomputing Applications.
-# All rights reserved.
-#
-#
-
-srcdir = @srcdir@
-top_srcdir = @top_srcdir@
-VPATH = @srcdir@
-pkgdatadir = $(datadir)/@PACKAGE@
-pkglibdir = $(libdir)/@PACKAGE@
-pkgincludedir = $(includedir)/@PACKAGE@
-top_builddir = ../../..
-am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd
-INSTALL = @INSTALL@
-install_sh_DATA = $(install_sh) -c -m 644
-install_sh_PROGRAM = $(install_sh) -c
-install_sh_SCRIPT = $(install_sh) -c
-INSTALL_HEADER = $(INSTALL_DATA)
-transform = $(program_transform_name)
-NORMAL_INSTALL = :
-PRE_INSTALL = :
-POST_INSTALL = :
-NORMAL_UNINSTALL = :
-PRE_UNINSTALL = :
-POST_UNINSTALL = :
-build_triplet = @build@
-host_triplet = @host@
-DIST_COMMON = $(srcdir)/Makefile.am $(srcdir)/Makefile.in \
- $(top_srcdir)/config/commence-doc.am \
- $(top_srcdir)/config/commence.am
-subdir = doc/html/ed_libs
-ACLOCAL_M4 = $(top_srcdir)/aclocal.m4
-am__aclocal_m4_deps = $(top_srcdir)/configure.in
-am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \
- $(ACLOCAL_M4)
-mkinstalldirs = $(SHELL) $(top_srcdir)/bin/mkinstalldirs
-CONFIG_HEADER = $(top_builddir)/src/H5config.h
-CONFIG_CLEAN_FILES =
-SOURCES =
-DIST_SOURCES =
-am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`;
-am__vpath_adj = case $$p in \
- $(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \
- *) f=$$p;; \
- esac;
-am__strip_dir = `echo $$p | sed -e 's|^.*/||'`;
-am__installdirs = "$(DESTDIR)$(localdocdir)"
-localdocDATA_INSTALL = $(INSTALL_DATA)
-DATA = $(localdoc_DATA)
-DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST)
-
-# Set the paths for AFS installs of autotools for Linux machines
-# Ideally, these tools should never be needed during the build.
-ACLOCAL = /afs/ncsa/projects/hdf/packages/automake_1.9.5/Linux_2.4/bin/aclocal -I /afs/ncsa/projects/hdf/packages/libtool_1.5.14/Linux_2.4/share/aclocal
-ADD_PARALLEL_FILES = @ADD_PARALLEL_FILES@
-AMDEP_FALSE = @AMDEP_FALSE@
-AMDEP_TRUE = @AMDEP_TRUE@
-AMTAR = @AMTAR@
-AM_MAKEFLAGS = @AM_MAKEFLAGS@
-AR = @AR@
-AUTOCONF = /afs/ncsa/projects/hdf/packages/autoconf_2.59/Linux_2.4/bin/autoconf
-AUTOHEADER = /afs/ncsa/projects/hdf/packages/autoconf_2.59/Linux_2.4/bin/autoheader
-AUTOMAKE = /afs/ncsa/projects/hdf/packages/automake_1.9.5/Linux_2.4/bin/automake
-AWK = @AWK@
-BUILD_CXX_CONDITIONAL_FALSE = @BUILD_CXX_CONDITIONAL_FALSE@
-BUILD_CXX_CONDITIONAL_TRUE = @BUILD_CXX_CONDITIONAL_TRUE@
-BUILD_FORTRAN_CONDITIONAL_FALSE = @BUILD_FORTRAN_CONDITIONAL_FALSE@
-BUILD_FORTRAN_CONDITIONAL_TRUE = @BUILD_FORTRAN_CONDITIONAL_TRUE@
-BUILD_HDF5_HL_CONDITIONAL_FALSE = @BUILD_HDF5_HL_CONDITIONAL_FALSE@
-BUILD_HDF5_HL_CONDITIONAL_TRUE = @BUILD_HDF5_HL_CONDITIONAL_TRUE@
-BUILD_PABLO_CONDITIONAL_FALSE = @BUILD_PABLO_CONDITIONAL_FALSE@
-BUILD_PABLO_CONDITIONAL_TRUE = @BUILD_PABLO_CONDITIONAL_TRUE@
-BUILD_PARALLEL_CONDITIONAL_FALSE = @BUILD_PARALLEL_CONDITIONAL_FALSE@
-BUILD_PARALLEL_CONDITIONAL_TRUE = @BUILD_PARALLEL_CONDITIONAL_TRUE@
-BUILD_PDB2HDF = @BUILD_PDB2HDF@
-BUILD_PDB2HDF_CONDITIONAL_FALSE = @BUILD_PDB2HDF_CONDITIONAL_FALSE@
-BUILD_PDB2HDF_CONDITIONAL_TRUE = @BUILD_PDB2HDF_CONDITIONAL_TRUE@
-BYTESEX = @BYTESEX@
-CC = @CC@
-CCDEPMODE = @CCDEPMODE@
-CC_VERSION = @CC_VERSION@
-CFLAGS = @CFLAGS@
-CONFIG_DATE = @CONFIG_DATE@
-CONFIG_MODE = @CONFIG_MODE@
-CONFIG_USER = @CONFIG_USER@
-CPP = @CPP@
-CPPFLAGS = @CPPFLAGS@
-CXX = @CXX@
-CXXCPP = @CXXCPP@
-CXXDEPMODE = @CXXDEPMODE@
-CXXFLAGS = @CXXFLAGS@
-CYGPATH_W = @CYGPATH_W@
-DEBUG_PKG = @DEBUG_PKG@
-DEFS = @DEFS@
-DEPDIR = @DEPDIR@
-DYNAMIC_DIRS = @DYNAMIC_DIRS@
-ECHO = @ECHO@
-ECHO_C = @ECHO_C@
-ECHO_N = @ECHO_N@
-ECHO_T = @ECHO_T@
-EGREP = @EGREP@
-EXEEXT = @EXEEXT@
-F77 = @F77@
-
-# Make sure that these variables are exported to the Makefiles
-F9XMODEXT = @F9XMODEXT@
-F9XMODFLAG = @F9XMODFLAG@
-F9XSUFFIXFLAG = @F9XSUFFIXFLAG@
-FC = @FC@
-FCFLAGS = @FCFLAGS@
-FCLIBS = @FCLIBS@
-FFLAGS = @FFLAGS@
-FILTERS = @FILTERS@
-FSEARCH_DIRS = @FSEARCH_DIRS@
-H5_VERSION = @H5_VERSION@
-HADDR_T = @HADDR_T@
-HDF5_INTERFACES = @HDF5_INTERFACES@
-HID_T = @HID_T@
-HL = @HL@
-HL_FOR = @HL_FOR@
-HSIZET = @HSIZET@
-HSIZE_T = @HSIZE_T@
-HSSIZE_T = @HSSIZE_T@
-INSTALL_DATA = @INSTALL_DATA@
-INSTALL_PROGRAM = @INSTALL_PROGRAM@
-INSTALL_SCRIPT = @INSTALL_SCRIPT@
-INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@
-INSTRUMENT_LIBRARY = @INSTRUMENT_LIBRARY@
-LDFLAGS = @LDFLAGS@
-LIBOBJS = @LIBOBJS@
-LIBS = @LIBS@
-LIBTOOL = @LIBTOOL@
-LN_S = @LN_S@
-LTLIBOBJS = @LTLIBOBJS@
-LT_STATIC_EXEC = @LT_STATIC_EXEC@
-MAINT = @MAINT@
-MAINTAINER_MODE_FALSE = @MAINTAINER_MODE_FALSE@
-MAINTAINER_MODE_TRUE = @MAINTAINER_MODE_TRUE@
-MAKEINFO = @MAKEINFO@
-MPE = @MPE@
-OBJECT_NAMELEN_DEFAULT_F = @OBJECT_NAMELEN_DEFAULT_F@
-OBJEXT = @OBJEXT@
-PACKAGE = @PACKAGE@
-PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@
-PACKAGE_NAME = @PACKAGE_NAME@
-PACKAGE_STRING = @PACKAGE_STRING@
-PACKAGE_TARNAME = @PACKAGE_TARNAME@
-PACKAGE_VERSION = @PACKAGE_VERSION@
-PARALLEL = @PARALLEL@
-PATH_SEPARATOR = @PATH_SEPARATOR@
-PERL = @PERL@
-PTHREAD = @PTHREAD@
-RANLIB = @RANLIB@
-ROOT = @ROOT@
-RUNPARALLEL = @RUNPARALLEL@
-RUNSERIAL = @RUNSERIAL@
-R_INTEGER = @R_INTEGER@
-R_LARGE = @R_LARGE@
-SEARCH = @SEARCH@
-SETX = @SETX@
-SET_MAKE = @SET_MAKE@
-
-# Hardcode SHELL to be /bin/sh. Most machines have this shell, and
-# on at least one machine configure fails to detect its existence (janus).
-# Also, when HDF5 is configured on one machine but run on another,
-# configure's automatic SHELL detection may not work on the build machine.
-SHELL = /bin/sh
-SIZE_T = @SIZE_T@
-STATIC_SHARED = @STATIC_SHARED@
-STRIP = @STRIP@
-TESTPARALLEL = @TESTPARALLEL@
-TRACE_API = @TRACE_API@
-USE_FILTER_DEFLATE = @USE_FILTER_DEFLATE@
-USE_FILTER_FLETCHER32 = @USE_FILTER_FLETCHER32@
-USE_FILTER_NBIT = @USE_FILTER_NBIT@
-USE_FILTER_SCALEOFFSET = @USE_FILTER_SCALEOFFSET@
-USE_FILTER_SHUFFLE = @USE_FILTER_SHUFFLE@
-USE_FILTER_SZIP = @USE_FILTER_SZIP@
-VERSION = @VERSION@
-ac_ct_AR = @ac_ct_AR@
-ac_ct_CC = @ac_ct_CC@
-ac_ct_CXX = @ac_ct_CXX@
-ac_ct_F77 = @ac_ct_F77@
-ac_ct_FC = @ac_ct_FC@
-ac_ct_RANLIB = @ac_ct_RANLIB@
-ac_ct_STRIP = @ac_ct_STRIP@
-am__fastdepCC_FALSE = @am__fastdepCC_FALSE@
-am__fastdepCC_TRUE = @am__fastdepCC_TRUE@
-am__fastdepCXX_FALSE = @am__fastdepCXX_FALSE@
-am__fastdepCXX_TRUE = @am__fastdepCXX_TRUE@
-am__include = @am__include@
-am__leading_dot = @am__leading_dot@
-am__quote = @am__quote@
-am__tar = @am__tar@
-am__untar = @am__untar@
-bindir = @bindir@
-build = @build@
-build_alias = @build_alias@
-build_cpu = @build_cpu@
-build_os = @build_os@
-build_vendor = @build_vendor@
-datadir = @datadir@
-exec_prefix = @exec_prefix@
-host = @host@
-host_alias = @host_alias@
-host_cpu = @host_cpu@
-host_os = @host_os@
-host_vendor = @host_vendor@
-
-# Install directories that automake doesn't know about
-includedir = $(exec_prefix)/include
-infodir = @infodir@
-install_sh = @install_sh@
-libdir = @libdir@
-libexecdir = @libexecdir@
-localstatedir = @localstatedir@
-mandir = @mandir@
-mkdir_p = @mkdir_p@
-oldincludedir = @oldincludedir@
-prefix = @prefix@
-program_transform_name = @program_transform_name@
-sbindir = @sbindir@
-sharedstatedir = @sharedstatedir@
-sysconfdir = @sysconfdir@
-target_alias = @target_alias@
-
-# Shell commands used in Makefiles
-RM = rm -f
-CP = cp
-
-# Some machines need a command to run executables; this is that command
-# so that our tests will run.
-# We use RUNTESTS instead of RUNSERIAL directly because it may be that
-# some tests need to be run with a different command. Older versions
-# of the makefiles used the command
-# $(LIBTOOL) --mode=execute
-# in some directories, for instance.
-RUNTESTS = $(RUNSERIAL)
-
-# Libraries to link to while building
-LIBHDF5 = $(top_builddir)/src/libhdf5.la
-LIBH5TEST = $(top_builddir)/test/libh5test.la
-LIBH5F = $(top_builddir)/fortran/src/libhdf5_fortran.la
-LIBH5FTEST = $(top_builddir)/fortran/test/libh5test_fortran.la
-LIBH5CPP = $(top_builddir)/c++/src/libhdf5_cpp.la
-LIBH5TOOLS = $(top_builddir)/tools/lib/libh5tools.la
-LIBH5_HL = $(top_builddir)/hl/src/libhdf5_hl.la
-LIBH5F_HL = $(top_builddir)/hl/fortran/src/libhdf5hl_fortran.la
-LIBH5CPP_HL = $(top_builddir)/hl/c++/src/libhdf5_hl_cpp.la
-docdir = $(exec_prefix)/doc
-
-# Scripts used to build examples
-H5CC = $(bindir)/h5cc
-H5CC_PP = $(bindir)/h5pcc
-H5FC = $(bindir)/h5fc
-H5FC_PP = $(bindir)/h5pfc
-
-# .chkexe and .chksh files are used to mark tests that have run successfully.
-MOSTLYCLEANFILES = *.chkexe *.chksh
-localdocdir = $(docdir)/hdf5/ed_libs
-
-# Public doc files (to be installed)...
-localdoc_DATA = Footer.lbi NavBar_ADevG.lbi NavBar_Common.lbi NavBar_Intro.lbi \
- NavBar_RM.lbi NavBar_TechN.lbi NavBar_UG.lbi styles_Format.lbi \
- styles_Gen.lbi styles_Index.lbi styles_Intro.lbi styles_RM.lbi \
- styles_UG.lbi
-
-all: all-am
-
-.SUFFIXES:
-$(srcdir)/Makefile.in: @MAINTAINER_MODE_TRUE@ $(srcdir)/Makefile.am $(top_srcdir)/config/commence-doc.am $(top_srcdir)/config/commence.am $(am__configure_deps)
- @for dep in $?; do \
- case '$(am__configure_deps)' in \
- *$$dep*) \
- cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh \
- && exit 0; \
- exit 1;; \
- esac; \
- done; \
- echo ' cd $(top_srcdir) && $(AUTOMAKE) --foreign doc/html/ed_libs/Makefile'; \
- cd $(top_srcdir) && \
- $(AUTOMAKE) --foreign doc/html/ed_libs/Makefile
-.PRECIOUS: Makefile
-Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status
- @case '$?' in \
- *config.status*) \
- cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \
- *) \
- echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \
- cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \
- esac;
-
-$(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES)
- cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
-
-$(top_srcdir)/configure: @MAINTAINER_MODE_TRUE@ $(am__configure_deps)
- cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
-$(ACLOCAL_M4): @MAINTAINER_MODE_TRUE@ $(am__aclocal_m4_deps)
- cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
-
-mostlyclean-libtool:
- -rm -f *.lo
-
-clean-libtool:
- -rm -rf .libs _libs
-
-distclean-libtool:
- -rm -f libtool
-uninstall-info-am:
-install-localdocDATA: $(localdoc_DATA)
- @$(NORMAL_INSTALL)
- test -z "$(localdocdir)" || $(mkdir_p) "$(DESTDIR)$(localdocdir)"
- @list='$(localdoc_DATA)'; for p in $$list; do \
- if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \
- f=$(am__strip_dir) \
- echo " $(localdocDATA_INSTALL) '$$d$$p' '$(DESTDIR)$(localdocdir)/$$f'"; \
- $(localdocDATA_INSTALL) "$$d$$p" "$(DESTDIR)$(localdocdir)/$$f"; \
- done
-
-uninstall-localdocDATA:
- @$(NORMAL_UNINSTALL)
- @list='$(localdoc_DATA)'; for p in $$list; do \
- f=$(am__strip_dir) \
- echo " rm -f '$(DESTDIR)$(localdocdir)/$$f'"; \
- rm -f "$(DESTDIR)$(localdocdir)/$$f"; \
- done
-tags: TAGS
-TAGS:
-
-ctags: CTAGS
-CTAGS:
-
-
-distdir: $(DISTFILES)
- $(mkdir_p) $(distdir)/../../../config
- @srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; \
- topsrcdirstrip=`echo "$(top_srcdir)" | sed 's|.|.|g'`; \
- list='$(DISTFILES)'; for file in $$list; do \
- case $$file in \
- $(srcdir)/*) file=`echo "$$file" | sed "s|^$$srcdirstrip/||"`;; \
- $(top_srcdir)/*) file=`echo "$$file" | sed "s|^$$topsrcdirstrip/|$(top_builddir)/|"`;; \
- esac; \
- if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \
- dir=`echo "$$file" | sed -e 's,/[^/]*$$,,'`; \
- if test "$$dir" != "$$file" && test "$$dir" != "."; then \
- dir="/$$dir"; \
- $(mkdir_p) "$(distdir)$$dir"; \
- else \
- dir=''; \
- fi; \
- if test -d $$d/$$file; then \
- if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \
- cp -pR $(srcdir)/$$file $(distdir)$$dir || exit 1; \
- fi; \
- cp -pR $$d/$$file $(distdir)$$dir || exit 1; \
- else \
- test -f $(distdir)/$$file \
- || cp -p $$d/$$file $(distdir)/$$file \
- || exit 1; \
- fi; \
- done
-check-am: all-am
-check: check-am
-all-am: Makefile $(DATA)
-installdirs:
- for dir in "$(DESTDIR)$(localdocdir)"; do \
- test -z "$$dir" || $(mkdir_p) "$$dir"; \
- done
-install: install-am
-install-exec: install-exec-am
-install-data: install-data-am
-uninstall: uninstall-am
-
-install-am: all-am
- @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am
-
-installcheck: installcheck-am
-install-strip:
- $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
- install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
- `test -z '$(STRIP)' || \
- echo "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'"` install
-mostlyclean-generic:
- -test -z "$(MOSTLYCLEANFILES)" || rm -f $(MOSTLYCLEANFILES)
-
-clean-generic:
-
-distclean-generic:
- -test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES)
-
-maintainer-clean-generic:
- @echo "This command is intended for maintainers to use"
- @echo "it deletes files that may require special tools to rebuild."
-clean: clean-am
-
-clean-am: clean-generic clean-libtool mostlyclean-am
-
-distclean: distclean-am
- -rm -f Makefile
-distclean-am: clean-am distclean-generic distclean-libtool
-
-dvi: dvi-am
-
-dvi-am:
-
-html: html-am
-
-info: info-am
-
-info-am:
-
-install-data-am: install-localdocDATA
-
-install-exec-am:
-
-install-info: install-info-am
-
-install-man:
-
-installcheck-am:
-
-maintainer-clean: maintainer-clean-am
- -rm -f Makefile
-maintainer-clean-am: distclean-am maintainer-clean-generic
-
-mostlyclean: mostlyclean-am
-
-mostlyclean-am: mostlyclean-generic mostlyclean-libtool
-
-pdf: pdf-am
-
-pdf-am:
-
-ps: ps-am
-
-ps-am:
-
-uninstall-am: uninstall-info-am uninstall-localdocDATA
-
-.PHONY: all all-am check check-am clean clean-generic clean-libtool \
- distclean distclean-generic distclean-libtool distdir dvi \
- dvi-am html html-am info info-am install install-am \
- install-data install-data-am install-exec install-exec-am \
- install-info install-info-am install-localdocDATA install-man \
- install-strip installcheck installcheck-am installdirs \
- maintainer-clean maintainer-clean-generic mostlyclean \
- mostlyclean-generic mostlyclean-libtool pdf pdf-am ps ps-am \
- uninstall uninstall-am uninstall-info-am \
- uninstall-localdocDATA
-
-
-# Ignore most rules
-lib progs check test _test check-p check-s:
- @echo "Nothing to be done"
-
-tests dep depend:
- @@SETX@; for d in X $(SUBDIRS); do \
- if test $$d != X; then \
- (cd $$d && $(MAKE) $(AM_MAKEFLAGS) $@) || exit 1; \
- fi;
- done
-
-# In docs directory, install-doc is the same as install
-install-doc install-all:
- $(MAKE) $(AM_MAKEFLAGS) install
-uninstall-doc uninstall-all:
- $(MAKE) $(AM_MAKEFLAGS) uninstall
-# Tell versions [3.59,3.63) of GNU make to not export all variables.
-# Otherwise a system limit (for SysV at least) may be exceeded.
-.NOEXPORT:
diff --git a/doc/html/ed_libs/NavBar_ADevG.lbi b/doc/html/ed_libs/NavBar_ADevG.lbi
deleted file mode 100644
index 2178e91..0000000
--- a/doc/html/ed_libs/NavBar_ADevG.lbi
+++ /dev/null
@@ -1,18 +0,0 @@
-
-
-
-
-
- HDF5 documents and links
-
- Introduction to HDF5
-
-
- HDF5 User's Guide
- HDF5 Reference Manual
- HDF5 Application Developer's Guide
-
diff --git a/doc/html/ed_libs/NavBar_Common.lbi b/doc/html/ed_libs/NavBar_Common.lbi
deleted file mode 100644
index 47d2bbd..0000000
--- a/doc/html/ed_libs/NavBar_Common.lbi
+++ /dev/null
@@ -1,17 +0,0 @@
-
-
-
-
-
- HDF5 documents and links
-
- Introduction to HDF5
-
-
- HDF5 User's Guide
- HDF5 Reference Manual
-
diff --git a/doc/html/ed_libs/NavBar_Intro.lbi b/doc/html/ed_libs/NavBar_Intro.lbi
deleted file mode 100644
index 81d035b..0000000
--- a/doc/html/ed_libs/NavBar_Intro.lbi
+++ /dev/null
@@ -1,17 +0,0 @@
-
-
-
-
-
-Introduction to HDF5
-
-HDF5 User Guide
-
-
-HDF5 Reference Manual
-Other HDF5 documents and links
-
diff --git a/doc/html/ed_libs/NavBar_RM.lbi b/doc/html/ed_libs/NavBar_RM.lbi
deleted file mode 100644
index 391a806..0000000
--- a/doc/html/ed_libs/NavBar_RM.lbi
+++ /dev/null
@@ -1,39 +0,0 @@
-
-
-
-
-
-HDF5 documents and links
-
-Introduction to HDF5
-HDF5 User Guide
-
-
-And in this document, the
-HDF5 Reference Manual
-
-H5IM
-H5LT
-H5PT
-H5TB
-
-H5
-H5A
-H5D
-H5E
-H5F
-H5G
-H5I
-H5P
-
-H5R
-H5S
-H5T
-H5Z
-Tools
-Datatypes
-
\ No newline at end of file
diff --git a/doc/html/ed_libs/NavBar_TechN.lbi b/doc/html/ed_libs/NavBar_TechN.lbi
deleted file mode 100644
index 99b4b4f..0000000
--- a/doc/html/ed_libs/NavBar_TechN.lbi
+++ /dev/null
@@ -1,27 +0,0 @@
-
-
-
-
-
-HDF5 documents and links
-
-
-Introduction to HDF5
-
-
-HDF5 User's Guide
-HDF5 Application Developer's Guide
-HDF5 Reference Manual
-
-
-
-
diff --git a/doc/html/ed_libs/NavBar_UG.lbi b/doc/html/ed_libs/NavBar_UG.lbi
deleted file mode 100644
index f6de063..0000000
--- a/doc/html/ed_libs/NavBar_UG.lbi
+++ /dev/null
@@ -1,40 +0,0 @@
-
-
-
-
-
- HDF5 documents and links
-
- Introduction to HDF5
- HDF5 Reference Manual
- HDF5 User's Guide for Release 1.6
-
-
- And in this document, the
- HDF5 User's Guide from Release 1.4.5:
-
- Files
- Datasets
- Datatypes
- Dataspaces
- Groups
-
- References
- Attributes
- Property Lists
- Error Handling
-
- Filters
- Caching
- Chunking
- Mounting Files
-
- Performance
- Debugging
- Environment
- DDL
-
diff --git a/doc/html/ed_libs/styles_Format.lbi b/doc/html/ed_libs/styles_Format.lbi
deleted file mode 100644
index f979cf0..0000000
--- a/doc/html/ed_libs/styles_Format.lbi
+++ /dev/null
@@ -1,18 +0,0 @@
-
-
-
-
diff --git a/doc/html/ed_libs/styles_Gen.lbi b/doc/html/ed_libs/styles_Gen.lbi
deleted file mode 100644
index 26935f2..0000000
--- a/doc/html/ed_libs/styles_Gen.lbi
+++ /dev/null
@@ -1,18 +0,0 @@
-
-
-
-
diff --git a/doc/html/ed_libs/styles_Index.lbi b/doc/html/ed_libs/styles_Index.lbi
deleted file mode 100644
index 25ecd90..0000000
--- a/doc/html/ed_libs/styles_Index.lbi
+++ /dev/null
@@ -1,18 +0,0 @@
-
-
-
-
diff --git a/doc/html/ed_libs/styles_Intro.lbi b/doc/html/ed_libs/styles_Intro.lbi
deleted file mode 100644
index 08547c3..0000000
--- a/doc/html/ed_libs/styles_Intro.lbi
+++ /dev/null
@@ -1,18 +0,0 @@
-
-
-
-
diff --git a/doc/html/ed_libs/styles_RM.lbi b/doc/html/ed_libs/styles_RM.lbi
deleted file mode 100644
index 3dd8eb3..0000000
--- a/doc/html/ed_libs/styles_RM.lbi
+++ /dev/null
@@ -1,19 +0,0 @@
-
-
-
-
-
diff --git a/doc/html/ed_libs/styles_UG.lbi b/doc/html/ed_libs/styles_UG.lbi
deleted file mode 100644
index a21c739..0000000
--- a/doc/html/ed_libs/styles_UG.lbi
+++ /dev/null
@@ -1,18 +0,0 @@
-
-
-
-
diff --git a/doc/html/ed_styles/FormatElect.css b/doc/html/ed_styles/FormatElect.css
deleted file mode 100644
index cd181cd..0000000
--- a/doc/html/ed_styles/FormatElect.css
+++ /dev/null
@@ -1,35 +0,0 @@
-
\ No newline at end of file
diff --git a/doc/html/ed_styles/FormatPrint.css b/doc/html/ed_styles/FormatPrint.css
deleted file mode 100644
index 6b25a73..0000000
--- a/doc/html/ed_styles/FormatPrint.css
+++ /dev/null
@@ -1,58 +0,0 @@
-
\ No newline at end of file
diff --git a/doc/html/ed_styles/GenElect.css b/doc/html/ed_styles/GenElect.css
deleted file mode 100644
index cd181cd..0000000
--- a/doc/html/ed_styles/GenElect.css
+++ /dev/null
@@ -1,35 +0,0 @@
-
\ No newline at end of file
diff --git a/doc/html/ed_styles/GenPrint.css b/doc/html/ed_styles/GenPrint.css
deleted file mode 100644
index 6b25a73..0000000
--- a/doc/html/ed_styles/GenPrint.css
+++ /dev/null
@@ -1,58 +0,0 @@
-
\ No newline at end of file
diff --git a/doc/html/ed_styles/IndexElect.css b/doc/html/ed_styles/IndexElect.css
deleted file mode 100644
index cd181cd..0000000
--- a/doc/html/ed_styles/IndexElect.css
+++ /dev/null
@@ -1,35 +0,0 @@
-
\ No newline at end of file
diff --git a/doc/html/ed_styles/IndexPrint.css b/doc/html/ed_styles/IndexPrint.css
deleted file mode 100644
index 6b25a73..0000000
--- a/doc/html/ed_styles/IndexPrint.css
+++ /dev/null
@@ -1,58 +0,0 @@
-
\ No newline at end of file
diff --git a/doc/html/ed_styles/IntroElect.css b/doc/html/ed_styles/IntroElect.css
deleted file mode 100644
index cd181cd..0000000
--- a/doc/html/ed_styles/IntroElect.css
+++ /dev/null
@@ -1,35 +0,0 @@
-
\ No newline at end of file
diff --git a/doc/html/ed_styles/IntroPrint.css b/doc/html/ed_styles/IntroPrint.css
deleted file mode 100644
index 6b25a73..0000000
--- a/doc/html/ed_styles/IntroPrint.css
+++ /dev/null
@@ -1,58 +0,0 @@
-
\ No newline at end of file
diff --git a/doc/html/ed_styles/Makefile.am b/doc/html/ed_styles/Makefile.am
deleted file mode 100644
index a4b86e9..0000000
--- a/doc/html/ed_styles/Makefile.am
+++ /dev/null
@@ -1,19 +0,0 @@
-# HDF5 Library Doc Makefile(.in)
-#
-# Copyright (C) 1997, 2002
-# National Center for Supercomputing Applications.
-# All rights reserved.
-#
-##
-## Makefile.am
-## Run automake to generate a Makefile.in from this file.
-#
-
-include $(top_srcdir)/config/commence-doc.am
-
-localdocdir = $(docdir)/hdf5/ed_styles
-
-# Public doc files (to be installed)...
-localdoc_DATA=FormatElect.css FormatPrint.css GenElect.css GenPrint.css \
- IndexElect.css IndexPrint.css IntroElect.css IntroPrint.css \
- RMelect.css RMprint.css UGelect.css UGprint.css
diff --git a/doc/html/ed_styles/Makefile.in b/doc/html/ed_styles/Makefile.in
deleted file mode 100644
index 98b1af9..0000000
--- a/doc/html/ed_styles/Makefile.in
+++ /dev/null
@@ -1,488 +0,0 @@
-# Makefile.in generated by automake 1.9.5 from Makefile.am.
-# @configure_input@
-
-# Copyright (C) 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002,
-# 2003, 2004, 2005 Free Software Foundation, Inc.
-# This Makefile.in is free software; the Free Software Foundation
-# gives unlimited permission to copy and/or distribute it,
-# with or without modifications, as long as this notice is preserved.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
-# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
-# PARTICULAR PURPOSE.
-
-@SET_MAKE@
-
-# HDF5 Library Doc Makefile(.in)
-#
-# Copyright (C) 1997, 2002
-# National Center for Supercomputing Applications.
-# All rights reserved.
-#
-#
-
-srcdir = @srcdir@
-top_srcdir = @top_srcdir@
-VPATH = @srcdir@
-pkgdatadir = $(datadir)/@PACKAGE@
-pkglibdir = $(libdir)/@PACKAGE@
-pkgincludedir = $(includedir)/@PACKAGE@
-top_builddir = ../../..
-am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd
-INSTALL = @INSTALL@
-install_sh_DATA = $(install_sh) -c -m 644
-install_sh_PROGRAM = $(install_sh) -c
-install_sh_SCRIPT = $(install_sh) -c
-INSTALL_HEADER = $(INSTALL_DATA)
-transform = $(program_transform_name)
-NORMAL_INSTALL = :
-PRE_INSTALL = :
-POST_INSTALL = :
-NORMAL_UNINSTALL = :
-PRE_UNINSTALL = :
-POST_UNINSTALL = :
-build_triplet = @build@
-host_triplet = @host@
-DIST_COMMON = $(srcdir)/Makefile.am $(srcdir)/Makefile.in \
- $(top_srcdir)/config/commence-doc.am \
- $(top_srcdir)/config/commence.am
-subdir = doc/html/ed_styles
-ACLOCAL_M4 = $(top_srcdir)/aclocal.m4
-am__aclocal_m4_deps = $(top_srcdir)/configure.in
-am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \
- $(ACLOCAL_M4)
-mkinstalldirs = $(SHELL) $(top_srcdir)/bin/mkinstalldirs
-CONFIG_HEADER = $(top_builddir)/src/H5config.h
-CONFIG_CLEAN_FILES =
-SOURCES =
-DIST_SOURCES =
-am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`;
-am__vpath_adj = case $$p in \
- $(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \
- *) f=$$p;; \
- esac;
-am__strip_dir = `echo $$p | sed -e 's|^.*/||'`;
-am__installdirs = "$(DESTDIR)$(localdocdir)"
-localdocDATA_INSTALL = $(INSTALL_DATA)
-DATA = $(localdoc_DATA)
-DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST)
-
-# Set the paths for AFS installs of autotools for Linux machines
-# Ideally, these tools should never be needed during the build.
-ACLOCAL = /afs/ncsa/projects/hdf/packages/automake_1.9.5/Linux_2.4/bin/aclocal -I /afs/ncsa/projects/hdf/packages/libtool_1.5.14/Linux_2.4/share/aclocal
-ADD_PARALLEL_FILES = @ADD_PARALLEL_FILES@
-AMDEP_FALSE = @AMDEP_FALSE@
-AMDEP_TRUE = @AMDEP_TRUE@
-AMTAR = @AMTAR@
-AM_MAKEFLAGS = @AM_MAKEFLAGS@
-AR = @AR@
-AUTOCONF = /afs/ncsa/projects/hdf/packages/autoconf_2.59/Linux_2.4/bin/autoconf
-AUTOHEADER = /afs/ncsa/projects/hdf/packages/autoconf_2.59/Linux_2.4/bin/autoheader
-AUTOMAKE = /afs/ncsa/projects/hdf/packages/automake_1.9.5/Linux_2.4/bin/automake
-AWK = @AWK@
-BUILD_CXX_CONDITIONAL_FALSE = @BUILD_CXX_CONDITIONAL_FALSE@
-BUILD_CXX_CONDITIONAL_TRUE = @BUILD_CXX_CONDITIONAL_TRUE@
-BUILD_FORTRAN_CONDITIONAL_FALSE = @BUILD_FORTRAN_CONDITIONAL_FALSE@
-BUILD_FORTRAN_CONDITIONAL_TRUE = @BUILD_FORTRAN_CONDITIONAL_TRUE@
-BUILD_HDF5_HL_CONDITIONAL_FALSE = @BUILD_HDF5_HL_CONDITIONAL_FALSE@
-BUILD_HDF5_HL_CONDITIONAL_TRUE = @BUILD_HDF5_HL_CONDITIONAL_TRUE@
-BUILD_PABLO_CONDITIONAL_FALSE = @BUILD_PABLO_CONDITIONAL_FALSE@
-BUILD_PABLO_CONDITIONAL_TRUE = @BUILD_PABLO_CONDITIONAL_TRUE@
-BUILD_PARALLEL_CONDITIONAL_FALSE = @BUILD_PARALLEL_CONDITIONAL_FALSE@
-BUILD_PARALLEL_CONDITIONAL_TRUE = @BUILD_PARALLEL_CONDITIONAL_TRUE@
-BUILD_PDB2HDF = @BUILD_PDB2HDF@
-BUILD_PDB2HDF_CONDITIONAL_FALSE = @BUILD_PDB2HDF_CONDITIONAL_FALSE@
-BUILD_PDB2HDF_CONDITIONAL_TRUE = @BUILD_PDB2HDF_CONDITIONAL_TRUE@
-BYTESEX = @BYTESEX@
-CC = @CC@
-CCDEPMODE = @CCDEPMODE@
-CC_VERSION = @CC_VERSION@
-CFLAGS = @CFLAGS@
-CONFIG_DATE = @CONFIG_DATE@
-CONFIG_MODE = @CONFIG_MODE@
-CONFIG_USER = @CONFIG_USER@
-CPP = @CPP@
-CPPFLAGS = @CPPFLAGS@
-CXX = @CXX@
-CXXCPP = @CXXCPP@
-CXXDEPMODE = @CXXDEPMODE@
-CXXFLAGS = @CXXFLAGS@
-CYGPATH_W = @CYGPATH_W@
-DEBUG_PKG = @DEBUG_PKG@
-DEFS = @DEFS@
-DEPDIR = @DEPDIR@
-DYNAMIC_DIRS = @DYNAMIC_DIRS@
-ECHO = @ECHO@
-ECHO_C = @ECHO_C@
-ECHO_N = @ECHO_N@
-ECHO_T = @ECHO_T@
-EGREP = @EGREP@
-EXEEXT = @EXEEXT@
-F77 = @F77@
-
-# Make sure that these variables are exported to the Makefiles
-F9XMODEXT = @F9XMODEXT@
-F9XMODFLAG = @F9XMODFLAG@
-F9XSUFFIXFLAG = @F9XSUFFIXFLAG@
-FC = @FC@
-FCFLAGS = @FCFLAGS@
-FCLIBS = @FCLIBS@
-FFLAGS = @FFLAGS@
-FILTERS = @FILTERS@
-FSEARCH_DIRS = @FSEARCH_DIRS@
-H5_VERSION = @H5_VERSION@
-HADDR_T = @HADDR_T@
-HDF5_INTERFACES = @HDF5_INTERFACES@
-HID_T = @HID_T@
-HL = @HL@
-HL_FOR = @HL_FOR@
-HSIZET = @HSIZET@
-HSIZE_T = @HSIZE_T@
-HSSIZE_T = @HSSIZE_T@
-INSTALL_DATA = @INSTALL_DATA@
-INSTALL_PROGRAM = @INSTALL_PROGRAM@
-INSTALL_SCRIPT = @INSTALL_SCRIPT@
-INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@
-INSTRUMENT_LIBRARY = @INSTRUMENT_LIBRARY@
-LDFLAGS = @LDFLAGS@
-LIBOBJS = @LIBOBJS@
-LIBS = @LIBS@
-LIBTOOL = @LIBTOOL@
-LN_S = @LN_S@
-LTLIBOBJS = @LTLIBOBJS@
-LT_STATIC_EXEC = @LT_STATIC_EXEC@
-MAINT = @MAINT@
-MAINTAINER_MODE_FALSE = @MAINTAINER_MODE_FALSE@
-MAINTAINER_MODE_TRUE = @MAINTAINER_MODE_TRUE@
-MAKEINFO = @MAKEINFO@
-MPE = @MPE@
-OBJECT_NAMELEN_DEFAULT_F = @OBJECT_NAMELEN_DEFAULT_F@
-OBJEXT = @OBJEXT@
-PACKAGE = @PACKAGE@
-PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@
-PACKAGE_NAME = @PACKAGE_NAME@
-PACKAGE_STRING = @PACKAGE_STRING@
-PACKAGE_TARNAME = @PACKAGE_TARNAME@
-PACKAGE_VERSION = @PACKAGE_VERSION@
-PARALLEL = @PARALLEL@
-PATH_SEPARATOR = @PATH_SEPARATOR@
-PERL = @PERL@
-PTHREAD = @PTHREAD@
-RANLIB = @RANLIB@
-ROOT = @ROOT@
-RUNPARALLEL = @RUNPARALLEL@
-RUNSERIAL = @RUNSERIAL@
-R_INTEGER = @R_INTEGER@
-R_LARGE = @R_LARGE@
-SEARCH = @SEARCH@
-SETX = @SETX@
-SET_MAKE = @SET_MAKE@
-
-# Hardcode SHELL to be /bin/sh. Most machines have this shell, and
-# on at least one machine configure fails to detect its existence (janus).
-# Also, when HDF5 is configured on one machine but run on another,
-# configure's automatic SHELL detection may not work on the build machine.
-SHELL = /bin/sh
-SIZE_T = @SIZE_T@
-STATIC_SHARED = @STATIC_SHARED@
-STRIP = @STRIP@
-TESTPARALLEL = @TESTPARALLEL@
-TRACE_API = @TRACE_API@
-USE_FILTER_DEFLATE = @USE_FILTER_DEFLATE@
-USE_FILTER_FLETCHER32 = @USE_FILTER_FLETCHER32@
-USE_FILTER_NBIT = @USE_FILTER_NBIT@
-USE_FILTER_SCALEOFFSET = @USE_FILTER_SCALEOFFSET@
-USE_FILTER_SHUFFLE = @USE_FILTER_SHUFFLE@
-USE_FILTER_SZIP = @USE_FILTER_SZIP@
-VERSION = @VERSION@
-ac_ct_AR = @ac_ct_AR@
-ac_ct_CC = @ac_ct_CC@
-ac_ct_CXX = @ac_ct_CXX@
-ac_ct_F77 = @ac_ct_F77@
-ac_ct_FC = @ac_ct_FC@
-ac_ct_RANLIB = @ac_ct_RANLIB@
-ac_ct_STRIP = @ac_ct_STRIP@
-am__fastdepCC_FALSE = @am__fastdepCC_FALSE@
-am__fastdepCC_TRUE = @am__fastdepCC_TRUE@
-am__fastdepCXX_FALSE = @am__fastdepCXX_FALSE@
-am__fastdepCXX_TRUE = @am__fastdepCXX_TRUE@
-am__include = @am__include@
-am__leading_dot = @am__leading_dot@
-am__quote = @am__quote@
-am__tar = @am__tar@
-am__untar = @am__untar@
-bindir = @bindir@
-build = @build@
-build_alias = @build_alias@
-build_cpu = @build_cpu@
-build_os = @build_os@
-build_vendor = @build_vendor@
-datadir = @datadir@
-exec_prefix = @exec_prefix@
-host = @host@
-host_alias = @host_alias@
-host_cpu = @host_cpu@
-host_os = @host_os@
-host_vendor = @host_vendor@
-
-# Install directories that automake doesn't know about
-includedir = $(exec_prefix)/include
-infodir = @infodir@
-install_sh = @install_sh@
-libdir = @libdir@
-libexecdir = @libexecdir@
-localstatedir = @localstatedir@
-mandir = @mandir@
-mkdir_p = @mkdir_p@
-oldincludedir = @oldincludedir@
-prefix = @prefix@
-program_transform_name = @program_transform_name@
-sbindir = @sbindir@
-sharedstatedir = @sharedstatedir@
-sysconfdir = @sysconfdir@
-target_alias = @target_alias@
-
-# Shell commands used in Makefiles
-RM = rm -f
-CP = cp
-
-# Some machines need a command to run executables; this is that command
-# so that our tests will run.
-# We use RUNTESTS instead of RUNSERIAL directly because it may be that
-# some tests need to be run with a different command. Older versions
-# of the makefiles used the command
-# $(LIBTOOL) --mode=execute
-# in some directories, for instance.
-RUNTESTS = $(RUNSERIAL)
-
-# Libraries to link to while building
-LIBHDF5 = $(top_builddir)/src/libhdf5.la
-LIBH5TEST = $(top_builddir)/test/libh5test.la
-LIBH5F = $(top_builddir)/fortran/src/libhdf5_fortran.la
-LIBH5FTEST = $(top_builddir)/fortran/test/libh5test_fortran.la
-LIBH5CPP = $(top_builddir)/c++/src/libhdf5_cpp.la
-LIBH5TOOLS = $(top_builddir)/tools/lib/libh5tools.la
-LIBH5_HL = $(top_builddir)/hl/src/libhdf5_hl.la
-LIBH5F_HL = $(top_builddir)/hl/fortran/src/libhdf5hl_fortran.la
-LIBH5CPP_HL = $(top_builddir)/hl/c++/src/libhdf5_hl_cpp.la
-docdir = $(exec_prefix)/doc
-
-# Scripts used to build examples
-H5CC = $(bindir)/h5cc
-H5CC_PP = $(bindir)/h5pcc
-H5FC = $(bindir)/h5fc
-H5FC_PP = $(bindir)/h5pfc
-
-# .chkexe and .chksh files are used to mark tests that have run successfully.
-MOSTLYCLEANFILES = *.chkexe *.chksh
-localdocdir = $(docdir)/hdf5/ed_styles
-
-# Public doc files (to be installed)...
-localdoc_DATA = FormatElect.css FormatPrint.css GenElect.css GenPrint.css \
- IndexElect.css IndexPrint.css IntroElect.css IntroPrint.css \
- RMelect.css RMprint.css UGelect.css UGprint.css
-
-all: all-am
-
-.SUFFIXES:
-$(srcdir)/Makefile.in: @MAINTAINER_MODE_TRUE@ $(srcdir)/Makefile.am $(top_srcdir)/config/commence-doc.am $(top_srcdir)/config/commence.am $(am__configure_deps)
- @for dep in $?; do \
- case '$(am__configure_deps)' in \
- *$$dep*) \
- cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh \
- && exit 0; \
- exit 1;; \
- esac; \
- done; \
- echo ' cd $(top_srcdir) && $(AUTOMAKE) --foreign doc/html/ed_styles/Makefile'; \
- cd $(top_srcdir) && \
- $(AUTOMAKE) --foreign doc/html/ed_styles/Makefile
-.PRECIOUS: Makefile
-Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status
- @case '$?' in \
- *config.status*) \
- cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \
- *) \
- echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \
- cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \
- esac;
-
-$(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES)
- cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
-
-$(top_srcdir)/configure: @MAINTAINER_MODE_TRUE@ $(am__configure_deps)
- cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
-$(ACLOCAL_M4): @MAINTAINER_MODE_TRUE@ $(am__aclocal_m4_deps)
- cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
-
-mostlyclean-libtool:
- -rm -f *.lo
-
-clean-libtool:
- -rm -rf .libs _libs
-
-distclean-libtool:
- -rm -f libtool
-uninstall-info-am:
-install-localdocDATA: $(localdoc_DATA)
- @$(NORMAL_INSTALL)
- test -z "$(localdocdir)" || $(mkdir_p) "$(DESTDIR)$(localdocdir)"
- @list='$(localdoc_DATA)'; for p in $$list; do \
- if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \
- f=$(am__strip_dir) \
- echo " $(localdocDATA_INSTALL) '$$d$$p' '$(DESTDIR)$(localdocdir)/$$f'"; \
- $(localdocDATA_INSTALL) "$$d$$p" "$(DESTDIR)$(localdocdir)/$$f"; \
- done
-
-uninstall-localdocDATA:
- @$(NORMAL_UNINSTALL)
- @list='$(localdoc_DATA)'; for p in $$list; do \
- f=$(am__strip_dir) \
- echo " rm -f '$(DESTDIR)$(localdocdir)/$$f'"; \
- rm -f "$(DESTDIR)$(localdocdir)/$$f"; \
- done
-tags: TAGS
-TAGS:
-
-ctags: CTAGS
-CTAGS:
-
-
-distdir: $(DISTFILES)
- $(mkdir_p) $(distdir)/../../../config
- @srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; \
- topsrcdirstrip=`echo "$(top_srcdir)" | sed 's|.|.|g'`; \
- list='$(DISTFILES)'; for file in $$list; do \
- case $$file in \
- $(srcdir)/*) file=`echo "$$file" | sed "s|^$$srcdirstrip/||"`;; \
- $(top_srcdir)/*) file=`echo "$$file" | sed "s|^$$topsrcdirstrip/|$(top_builddir)/|"`;; \
- esac; \
- if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \
- dir=`echo "$$file" | sed -e 's,/[^/]*$$,,'`; \
- if test "$$dir" != "$$file" && test "$$dir" != "."; then \
- dir="/$$dir"; \
- $(mkdir_p) "$(distdir)$$dir"; \
- else \
- dir=''; \
- fi; \
- if test -d $$d/$$file; then \
- if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \
- cp -pR $(srcdir)/$$file $(distdir)$$dir || exit 1; \
- fi; \
- cp -pR $$d/$$file $(distdir)$$dir || exit 1; \
- else \
- test -f $(distdir)/$$file \
- || cp -p $$d/$$file $(distdir)/$$file \
- || exit 1; \
- fi; \
- done
-check-am: all-am
-check: check-am
-all-am: Makefile $(DATA)
-installdirs:
- for dir in "$(DESTDIR)$(localdocdir)"; do \
- test -z "$$dir" || $(mkdir_p) "$$dir"; \
- done
-install: install-am
-install-exec: install-exec-am
-install-data: install-data-am
-uninstall: uninstall-am
-
-install-am: all-am
- @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am
-
-installcheck: installcheck-am
-install-strip:
- $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
- install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
- `test -z '$(STRIP)' || \
- echo "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'"` install
-mostlyclean-generic:
- -test -z "$(MOSTLYCLEANFILES)" || rm -f $(MOSTLYCLEANFILES)
-
-clean-generic:
-
-distclean-generic:
- -test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES)
-
-maintainer-clean-generic:
- @echo "This command is intended for maintainers to use"
- @echo "it deletes files that may require special tools to rebuild."
-clean: clean-am
-
-clean-am: clean-generic clean-libtool mostlyclean-am
-
-distclean: distclean-am
- -rm -f Makefile
-distclean-am: clean-am distclean-generic distclean-libtool
-
-dvi: dvi-am
-
-dvi-am:
-
-html: html-am
-
-info: info-am
-
-info-am:
-
-install-data-am: install-localdocDATA
-
-install-exec-am:
-
-install-info: install-info-am
-
-install-man:
-
-installcheck-am:
-
-maintainer-clean: maintainer-clean-am
- -rm -f Makefile
-maintainer-clean-am: distclean-am maintainer-clean-generic
-
-mostlyclean: mostlyclean-am
-
-mostlyclean-am: mostlyclean-generic mostlyclean-libtool
-
-pdf: pdf-am
-
-pdf-am:
-
-ps: ps-am
-
-ps-am:
-
-uninstall-am: uninstall-info-am uninstall-localdocDATA
-
-.PHONY: all all-am check check-am clean clean-generic clean-libtool \
- distclean distclean-generic distclean-libtool distdir dvi \
- dvi-am html html-am info info-am install install-am \
- install-data install-data-am install-exec install-exec-am \
- install-info install-info-am install-localdocDATA install-man \
- install-strip installcheck installcheck-am installdirs \
- maintainer-clean maintainer-clean-generic mostlyclean \
- mostlyclean-generic mostlyclean-libtool pdf pdf-am ps ps-am \
- uninstall uninstall-am uninstall-info-am \
- uninstall-localdocDATA
-
-
-# Ignore most rules
-lib progs check test _test check-p check-s:
- @echo "Nothing to be done"
-
-tests dep depend:
- @@SETX@; for d in X $(SUBDIRS); do \
- if test $$d != X; then \
- (cd $$d && $(MAKE) $(AM_MAKEFLAGS) $@) || exit 1; \
- fi;
- done
-
-# In docs directory, install-doc is the same as install
-install-doc install-all:
- $(MAKE) $(AM_MAKEFLAGS) install
-uninstall-doc uninstall-all:
- $(MAKE) $(AM_MAKEFLAGS) uninstall
-# Tell versions [3.59,3.63) of GNU make to not export all variables.
-# Otherwise a system limit (for SysV at least) may be exceeded.
-.NOEXPORT:
diff --git a/doc/html/ed_styles/RMelect.css b/doc/html/ed_styles/RMelect.css
deleted file mode 100644
index 478f4e3..0000000
--- a/doc/html/ed_styles/RMelect.css
+++ /dev/null
@@ -1,39 +0,0 @@
-
\ No newline at end of file
diff --git a/doc/html/ed_styles/RMprint.css b/doc/html/ed_styles/RMprint.css
deleted file mode 100644
index 6b25a73..0000000
--- a/doc/html/ed_styles/RMprint.css
+++ /dev/null
@@ -1,58 +0,0 @@
-
\ No newline at end of file
diff --git a/doc/html/ed_styles/UGelect.css b/doc/html/ed_styles/UGelect.css
deleted file mode 100644
index cd181cd..0000000
--- a/doc/html/ed_styles/UGelect.css
+++ /dev/null
@@ -1,35 +0,0 @@
-
\ No newline at end of file
diff --git a/doc/html/ed_styles/UGprint.css b/doc/html/ed_styles/UGprint.css
deleted file mode 100644
index 6b25a73..0000000
--- a/doc/html/ed_styles/UGprint.css
+++ /dev/null
@@ -1,58 +0,0 @@
-
\ No newline at end of file
diff --git a/doc/html/extern1.gif b/doc/html/extern1.gif
deleted file mode 100644
index dcac681..0000000
Binary files a/doc/html/extern1.gif and /dev/null differ
diff --git a/doc/html/extern1.obj b/doc/html/extern1.obj
deleted file mode 100644
index 9c56a50..0000000
--- a/doc/html/extern1.obj
+++ /dev/null
@@ -1,40 +0,0 @@
-%TGIF 3.0-p5
-state(0,33,100,0,0,0,16,1,9,1,1,0,0,1,0,1,0,'Courier',0,17,0,0,0,10,0,0,1,1,0,16,0,0,1,1,1,0,1088,1408,0,0,2880).
-%
-% @(#)$Header$
-% %W%
-%
-unit("1 pixel/pixel").
-page(1,"",1).
-box('black',128,96,192,128,4,1,1,49,0,0,0,0,0,'1',[
-]).
-box('black',192,96,352,128,12,1,1,50,0,0,0,0,0,'1',[
-]).
-box('black',352,96,416,128,18,1,1,51,0,0,0,0,0,'1',[
-]).
-box('black',64,176,224,208,12,1,1,53,0,0,0,0,0,'1',[
-]).
-box('black',256,176,320,208,4,1,1,54,0,0,0,0,0,'1',[
-]).
-box('black',352,176,448,208,18,1,1,55,0,0,0,0,0,'1',[
-]).
-box('black',224,176,256,208,0,1,1,56,0,0,0,0,0,'1',[
-]).
-box('black',320,176,352,208,0,1,1,57,0,0,0,0,0,'1',[
-]).
-box('black',448,176,512,208,0,1,1,58,0,0,0,0,0,'1',[
-]).
-poly('black',2,[
- 176,128,272,176],1,1,1,59,0,0,0,0,8,3,0,0,0,'1','8','3',
- "0",[
-]).
-poly('black',2,[
- 240,128,208,176],1,1,1,60,0,0,0,0,8,3,0,0,0,'1','8','3',
- "0",[
-]).
-poly('black',2,[
- 384,128,384,176],1,1,1,61,0,0,0,0,8,3,0,0,0,'1','8','3',
- "0",[
-]).
-box('black',48,80,528,224,0,1,1,64,0,0,0,0,0,'1',[
-]).
diff --git a/doc/html/extern2.gif b/doc/html/extern2.gif
deleted file mode 100644
index 5f0e942..0000000
Binary files a/doc/html/extern2.gif and /dev/null differ
diff --git a/doc/html/extern2.obj b/doc/html/extern2.obj
deleted file mode 100644
index 3e83452..0000000
--- a/doc/html/extern2.obj
+++ /dev/null
@@ -1,108 +0,0 @@
-%TGIF 3.0-p5
-state(0,33,100,0,0,0,16,1,9,1,1,1,1,0,0,1,1,'Courier',0,17,0,0,0,10,0,0,1,1,0,16,0,0,1,1,1,0,1088,1408,0,0,2880).
-%
-% @(#)$Header$
-% %W%
-%
-unit("1 pixel/pixel").
-page(1,"",1).
-box('black',48,48,464,432,0,1,1,144,0,0,0,0,0,'1',[
-]).
-text('black',80,240,'Courier',0,17,1,0,0,1,70,14,146,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "scan1.data"]).
-text('black',80,304,'Courier',0,17,1,0,0,1,70,14,148,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "scan2.data"]).
-text('black',80,368,'Courier',0,17,1,0,0,1,70,14,150,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "scan3.data"]).
-polygon('black',7,[
- 64,64,64,128,192,128,192,96,320,96,320,64,64,64],20,1,1,0,181,0,0,0,0,0,'1',
- "00",[
-]).
-polygon('black',7,[
- 64,128,64,160,320,160,320,96,192,96,192,128,64,128],4,1,1,0,182,0,0,0,0,0,'1',
- "00",[
-]).
-box('black',64,160,320,192,26,1,1,183,0,0,0,0,0,'1',[
-]).
-poly('black',2,[
- 80,80,304,80],1,1,1,184,0,26,0,0,8,3,0,0,0,'1','8','3',
- "0",[
-]).
-poly('black',2,[
- 80,112,176,112],1,1,1,185,0,26,0,0,8,3,0,0,0,'1','8','3',
- "0",[
-]).
-poly('black',2,[
- 208,112,304,112],1,1,1,186,0,26,0,0,8,3,0,0,0,'1','8','3',
- "0",[
-]).
-poly('black',2,[
- 80,144,304,144],1,1,1,187,0,26,0,0,8,3,0,0,0,'1','8','3',
- "0",[
-]).
-poly('black',2,[
- 80,176,304,176],1,1,1,188,0,26,0,0,8,3,0,0,0,'1','8','3',
- "0",[
-]).
-box('black',64,256,448,288,20,1,1,203,0,0,0,0,0,'1',[
-]).
-box('black',64,320,448,352,4,1,1,216,0,0,0,0,0,'1',[
-]).
-box('black',64,384,320,416,26,1,1,225,0,0,0,0,0,'1',[
-]).
-poly('black',2,[
- 80,272,304,272],1,1,1,226,0,26,0,0,8,3,0,0,0,'1','8','3',
- "0",[
-]).
-poly('black',2,[
- 336,272,432,272],1,1,1,227,0,26,0,0,8,3,0,0,0,'1','8','3',
- "0",[
-]).
-poly('black',2,[
- 80,336,176,336],1,1,1,228,0,26,0,0,8,3,0,0,0,'1','8','3',
- "0",[
-]).
-poly('black',2,[
- 208,336,432,336],1,1,1,229,0,26,0,0,8,3,0,0,0,'1','8','3',
- "0",[
-]).
-poly('black',2,[
- 80,400,304,400],1,1,1,230,0,26,0,0,8,3,0,0,0,'1','8','3',
- "0",[
-]).
-poly('black',2,[
- 192,96,64,96],0,1,1,232,0,26,5,0,8,3,0,0,0,'1','8','3',
- "0",[
-]).
-poly('black',2,[
- 192,128,320,128],0,1,1,233,0,26,5,0,8,3,0,0,0,'1','8','3',
- "0",[
-]).
-poly('black',2,[
- 256,64,256,192],0,1,1,234,0,26,5,0,8,3,0,0,0,'1','8','3',
- "0",[
-]).
-poly('black',2,[
- 192,64,192,192],0,1,1,235,0,26,5,0,8,3,0,0,0,'1','8','3',
- "0",[
-]).
-poly('black',2,[
- 128,64,128,192],0,1,1,236,0,26,5,0,8,3,0,0,0,'1','8','3',
- "0",[
-]).
-poly('black',2,[
- 320,160,64,160],0,2,1,238,0,26,0,0,10,4,0,0,0,'2','10','4',
- "0",[
-]).
-poly('black',4,[
- 320,96,192,96,192,128,64,128],0,2,1,240,0,0,0,0,10,4,0,0,0,'2','10','4',
- "0",[
-]).
-poly('black',6,[
- 336,64,384,64,384,128,384,128,384,192,336,192],3,1,1,241,1,0,0,0,8,3,0,0,0,'1','8','3',
- "78",[
-]).
-text('black',429,124,'Courier',0,17,2,1,0,1,28,49,250,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,1,0,[
- 429,124,405,124,454,152,0,1000,-1000,0,-15,2,404,123,455,153],[
- "2-d",
- "Dataset"]).
diff --git a/doc/html/fortran/F90Flags.html b/doc/html/fortran/F90Flags.html
deleted file mode 100644
index 8619092..0000000
--- a/doc/html/fortran/F90Flags.html
+++ /dev/null
@@ -1,332 +0,0 @@
-
-
-
-HDF5 Fortran90 Flags and Datatypes
-
-
-
-
-Fortran90 Datatypes
-
-The Fortran90 HDF5 datatypes
-are listed in HDF5 Predefined Datatypes
-
-
-
-
-
-Fortran90 Flags
-
-The Fortran90 HDF5 flags have the same meanings as the C flags defined in the
-HDF5 Reference Manual and the
-HDF5 User's Guide.
-
-
-File access flags
-
-
-
-
-
-
-
-
- H5F_ACC_RDWR_F
- H5F_ACC_RDONLY_F
- H5F_ACC_TRUNC_F
-
-
-
-
- H5F_ACC_EXCL_F
- H5F_ACC_DEBUG_F
-
-
-
- H5F_SCOPE_LOCAL_F
- H5F_SCOPE_GLOBAL_F
-
- Group management flags
-
-
-
-
-
-
-
-
- H5G_UNKNOWN_F
- H5G_LINK_F
- H5G_GROUP_F
-
-
-
-
- H5G_DATASET_F
- H5G_TYPE_F
- H5G_LINK_ERROR_F
-
-
-
- H5G_LINK_HARD_F
- H5G_LINK_SOFT_F
-
- Dataset format flags
-
-
-
-
-
-
-
-
- H5D_COMPACT_F
-
-
-
-
- H5D_CONTIGUOUS_F
-
-
-
- H5D_CHUNKED_F
-
- MPI IO data transfer flags
-
-
-
-
-
-
-
-
- H5FD_MPIO_INDEPENDENT_F
-
-
-
-
- H5FD_MPIO_COLLECTIVE_F
-
-
-
-
-
- Error flags
-
-
-
-
-
-
-
-
- H5E_NONE_MAJOR_F
- H5E_ARGS_F
- H5E_RESOURCE_F
- H5E_INTERNAL_F
- H5E_FILE_F
- H5E_IO_F
- H5E_FUNC_F
- H5E_ATOM_F
-
-
-
-
- H5E_CACHE_F
- H5E_BTREE_F
- H5E_SYM_F
- H5E_HEAP_F
- H5E_OHDR_F
- H5E_DATATYPE_F
- H5E_DATASPACE_F
- H5E_DATASET_F
-
-
-
- H5E_STORAGE_F
- H5E_PLIST_F
- H5E_ATTR_F
- H5E_PLINE_F
- H5E_EFL_F
- H5E_REFERENCE_F
- H5E_VFL_F
- H5E_TBBT_F
-
- Object identifier flags
-
-
-
-
-
-
-
-
- H5I_FILE_F
- H5I_GROUP_F
- H5I_DATATYPE_F
-
-
-
-
- H5I_DATASPACE_F
- H5I_DATASET_F
- H5I_ATTR_F
-
-
-
- H5I_BADID_F
-
- Property list flags
-
-
-
-
-
-
-
-
- H5P_FILE_CREATE_F
- H5P_FILE_ACCESS_F
-
-
-
-
- H5P_DATASET_CREATE_F
- H5P_DATASET_XFER_F
-
-
-
- H5P_MOUNT_F
- H5P_DEFAULT_F
-
- Reference pointer flags
-
-
-
-
-
-
-
- H5R_OBJECT_F
-
-
-
-
- H5R_DATASET_REGION_F
-
-
-
-
-
- Dataspace flags
-
-
-
-
-
-
-
-
- H5S_SCALAR_F
- H5S_SIMPLE_F
-
-
-
-
- H5S_SELECT_SET_F
- H5S_SELECT_OR_F
-
-
-
- H5S_UNLIMITED_F
- H5S_ALL_F
-
- Datatype flags
-
-
-
-
-
-
-
-
- H5T_NO_CLASS_F
- H5T_INTEGER_F
- H5T_FLOAT_F
- H5T_TIME_F
- H5T_STRING_F
- H5T_BITFIELD_F
- H5T_OPAQUE_F
- H5T_COMPOUND_F
- H5T_REFERENCE_F
- H5T_ENUM_F
-
-
-
-
- H5T_ORDER_LE_F
- H5T_ORDER_BE_F
- H5T_ORDER_VAX_F
- H5T_PAD_ZERO_F
- H5T_PAD_ONE_F
- H5T_PAD_BACKGROUND_F
- H5T_PAD_ERROR_F
- H5T_SGN_NONE_F
- H5T_SGN_2_F
- H5T_SGN_ERROR_F
-
-
-
- H5T_NORM_IMPLIED_F
- H5T_NORM_MSBSET_F
- H5T_NORM_NONE_F
- H5T_CSET_ASCII_F
- H5T_STR_NULLTERM_F
- H5T_STR_NULLPAD_F
- H5T_STR_SPACEPAD_F
- H5T_STR_ERROR_F
-
-
-
-
-
-HDF Help Desk
-
-Describes HDF5 Release 1.7, the unreleased development branch; working toward HDF5 Release 1.8.0
-
-
-Last modified: 3 April 2001
-
-
-
diff --git a/doc/html/fortran/F90UserNotes.html b/doc/html/fortran/F90UserNotes.html
deleted file mode 100644
index d263cb0..0000000
--- a/doc/html/fortran/F90UserNotes.html
+++ /dev/null
@@ -1,141 +0,0 @@
-
-
-
-
- HDF5 Fortran90 User's Notes
- ===========================
-
-About the source code organization
-==================================
-
-The Fortran APIs are organized in modules parallel to the HDF5 Interfaces.
-Each module is in a separate file with the name H5*ff.f. Corresponding C
-stubs are in the H5*f.c files. For example, the Fortran File APIs are in
-the file H5Fff.f and the corresponding C stubs are in the file H5Ff.c.
-
-Each module contains Fortran definitions of the constants, interfaces to
-the subroutines if needed, and the subroutines themselves.
-
-Users must use constant names in their programs instead of the numerical
-values, as the numerical values are subject to change without notice.
-
-About the Fortran APIs
-=======================
-
-* The Fortran APIs come in the form of Fortran subroutines.
-
-* Each Fortran subroutine name is derived from the corresponding C function
- name by adding "_f" to the name. For example, the name of the C function
- to create an HDF5 file is H5Fcreate; the corresponding Fortran subroutine
- is h5fcreate_f.
-
-* A description of each implemented Fortran subroutine and its parameters
- can be found following the description of the corresponding C function in
- the HDF5 Reference Manual provided with this release.
-
-* The parameter list for each Fortran subroutine has two more parameters
- than the corresponding C function. These additional parameters hold
- the return value and an error code. The order of the Fortran subroutine
- parameters may differ from the order of the C function parameters.
- The Fortran subroutine parameters are listed in the following order:
- -- required input parameters,
- -- output parameters, including return value and error code, and
- -- optional input parameters.
- For example, the C function to create a dataset has the following
- prototype:
-
- hid_t H5Dcreate(hid_it loc_id, char *name, hid_t type_id,
- hid_t space_id, hid_t creation_prp);
-
- The corresponding Fortran subroutine has the following form:
-
- SUBROUTINE h5dcreate_f(loc_id, name, type_id, space_id, dset_id,
- hdferr, creation_prp)
-
- The first four parameters of the Fortran subroutine correspond to the
- C function parameters. The fifth parameter, dset_id, is an output
- parameter and contains a valid dataset identifier if the value of the
- sixth output parameter hdferr indicates successful completion.
- (Error code descriptions are provided with the subroutine descriptions
- in the Reference Manual.) The seventh input parameter, creation_prp,
-
- is optional, and may be omitted when the default creation property
- list is used.
-
-* Parameters to the Fortran subroutines have one of the following
- predefined datatypes (see the file H5fortran_types.f90 for KIND
- definitions):
-
- INTEGER(HID_T) compares with hid_t type in HDF5 C APIs
- INTEGER(HSIZE_T) compares with hsize_t in HDF5 C APIs
- INTEGER(HSSIZE_T) compares with hssize_t in HDF5 C APIs
- INTEGER(SIZE_T) compares with the C size_t type
-
- These integer types usually correspond to 4 or 8 byte integers,
- depending on the FORTRAN90 compiler and the corresponding HDF5
- C library definitions.
-
- The H5R module defines two types of references:
- TYPE(HOBJ_REF_T_F) compares to hobj_ref_t in HDF5 C API
- TYPE(HDSET_REG_REF_T_F) compares to hdset_reg_ref_t in HDF5 C API
-
-* Each Fortran application must call the h5open_f subroutine to
- initialize the Fortran interface and the HDF5 C Library before calling
- any HDF5 Fortran subroutine. The application must call the h5close_f
- subroutine after all calls to the HDF5 Fortran Library to close the
- Fortran interface and HDF5 C Library.
-
-* List of the predefined datatypes can be found in the HDF5 Reference
- Manual provided with this release. See HDF5 Predefined Datatypes.
-
-* When a C application reads data stored from a Fortran program, the data
- will appear to be transposed due to the difference in the C and Fortran
- storage orders. For example, if Fortran writes a 4x6 two-dimensional
- dataset to the file, a C program will read it as a 6x4 two-dimensional
- dataset into memory. The HDF5 C utilities h5dump and h5ls will also
- display transposed data, if data is written from a Fortran program.
-
-* Fortran indices are 1-based.
-
-* Compound datatype datasets can be written or read by atomic fields only.
-
-
-
-
-
-
-
-HDF Help Desk
-
-Describes HDF5 Release 1.7, the unreleased development branch; working toward HDF5 Release 1.8.0
-
-
-Last modified: 15 December 2000
-
-
-
-
diff --git a/doc/html/fortran/Makefile.am b/doc/html/fortran/Makefile.am
deleted file mode 100644
index 3b21a9c..0000000
--- a/doc/html/fortran/Makefile.am
+++ /dev/null
@@ -1,17 +0,0 @@
-# HDF5 Library Doc Makefile(.in)
-#
-# Copyright (C) 1997, 2002
-# National Center for Supercomputing Applications.
-# All rights reserved.
-#
-##
-## Makefile.am
-## Run automake to generate a Makefile.in from this file.
-#
-
-include $(top_srcdir)/config/commence-doc.am
-
-localdocdir = $(docdir)/hdf5/fortran
-
-# Public doc files (to be installed)...
-localdoc_DATA=F90Flags.html F90UserNotes.html
diff --git a/doc/html/fortran/Makefile.in b/doc/html/fortran/Makefile.in
deleted file mode 100644
index d6e9343..0000000
--- a/doc/html/fortran/Makefile.in
+++ /dev/null
@@ -1,485 +0,0 @@
-# Makefile.in generated by automake 1.9.5 from Makefile.am.
-# @configure_input@
-
-# Copyright (C) 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002,
-# 2003, 2004, 2005 Free Software Foundation, Inc.
-# This Makefile.in is free software; the Free Software Foundation
-# gives unlimited permission to copy and/or distribute it,
-# with or without modifications, as long as this notice is preserved.
-
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
-# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
-# PARTICULAR PURPOSE.
-
-@SET_MAKE@
-
-# HDF5 Library Doc Makefile(.in)
-#
-# Copyright (C) 1997, 2002
-# National Center for Supercomputing Applications.
-# All rights reserved.
-#
-#
-
-srcdir = @srcdir@
-top_srcdir = @top_srcdir@
-VPATH = @srcdir@
-pkgdatadir = $(datadir)/@PACKAGE@
-pkglibdir = $(libdir)/@PACKAGE@
-pkgincludedir = $(includedir)/@PACKAGE@
-top_builddir = ../../..
-am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd
-INSTALL = @INSTALL@
-install_sh_DATA = $(install_sh) -c -m 644
-install_sh_PROGRAM = $(install_sh) -c
-install_sh_SCRIPT = $(install_sh) -c
-INSTALL_HEADER = $(INSTALL_DATA)
-transform = $(program_transform_name)
-NORMAL_INSTALL = :
-PRE_INSTALL = :
-POST_INSTALL = :
-NORMAL_UNINSTALL = :
-PRE_UNINSTALL = :
-POST_UNINSTALL = :
-build_triplet = @build@
-host_triplet = @host@
-DIST_COMMON = $(srcdir)/Makefile.am $(srcdir)/Makefile.in \
- $(top_srcdir)/config/commence-doc.am \
- $(top_srcdir)/config/commence.am
-subdir = doc/html/fortran
-ACLOCAL_M4 = $(top_srcdir)/aclocal.m4
-am__aclocal_m4_deps = $(top_srcdir)/configure.in
-am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \
- $(ACLOCAL_M4)
-mkinstalldirs = $(SHELL) $(top_srcdir)/bin/mkinstalldirs
-CONFIG_HEADER = $(top_builddir)/src/H5config.h
-CONFIG_CLEAN_FILES =
-SOURCES =
-DIST_SOURCES =
-am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`;
-am__vpath_adj = case $$p in \
- $(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \
- *) f=$$p;; \
- esac;
-am__strip_dir = `echo $$p | sed -e 's|^.*/||'`;
-am__installdirs = "$(DESTDIR)$(localdocdir)"
-localdocDATA_INSTALL = $(INSTALL_DATA)
-DATA = $(localdoc_DATA)
-DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST)
-
-# Set the paths for AFS installs of autotools for Linux machines
-# Ideally, these tools should never be needed during the build.
-ACLOCAL = /afs/ncsa/projects/hdf/packages/automake_1.9.5/Linux_2.4/bin/aclocal -I /afs/ncsa/projects/hdf/packages/libtool_1.5.14/Linux_2.4/share/aclocal
-ADD_PARALLEL_FILES = @ADD_PARALLEL_FILES@
-AMDEP_FALSE = @AMDEP_FALSE@
-AMDEP_TRUE = @AMDEP_TRUE@
-AMTAR = @AMTAR@
-AM_MAKEFLAGS = @AM_MAKEFLAGS@
-AR = @AR@
-AUTOCONF = /afs/ncsa/projects/hdf/packages/autoconf_2.59/Linux_2.4/bin/autoconf
-AUTOHEADER = /afs/ncsa/projects/hdf/packages/autoconf_2.59/Linux_2.4/bin/autoheader
-AUTOMAKE = /afs/ncsa/projects/hdf/packages/automake_1.9.5/Linux_2.4/bin/automake
-AWK = @AWK@
-BUILD_CXX_CONDITIONAL_FALSE = @BUILD_CXX_CONDITIONAL_FALSE@
-BUILD_CXX_CONDITIONAL_TRUE = @BUILD_CXX_CONDITIONAL_TRUE@
-BUILD_FORTRAN_CONDITIONAL_FALSE = @BUILD_FORTRAN_CONDITIONAL_FALSE@
-BUILD_FORTRAN_CONDITIONAL_TRUE = @BUILD_FORTRAN_CONDITIONAL_TRUE@
-BUILD_HDF5_HL_CONDITIONAL_FALSE = @BUILD_HDF5_HL_CONDITIONAL_FALSE@
-BUILD_HDF5_HL_CONDITIONAL_TRUE = @BUILD_HDF5_HL_CONDITIONAL_TRUE@
-BUILD_PABLO_CONDITIONAL_FALSE = @BUILD_PABLO_CONDITIONAL_FALSE@
-BUILD_PABLO_CONDITIONAL_TRUE = @BUILD_PABLO_CONDITIONAL_TRUE@
-BUILD_PARALLEL_CONDITIONAL_FALSE = @BUILD_PARALLEL_CONDITIONAL_FALSE@
-BUILD_PARALLEL_CONDITIONAL_TRUE = @BUILD_PARALLEL_CONDITIONAL_TRUE@
-BUILD_PDB2HDF = @BUILD_PDB2HDF@
-BUILD_PDB2HDF_CONDITIONAL_FALSE = @BUILD_PDB2HDF_CONDITIONAL_FALSE@
-BUILD_PDB2HDF_CONDITIONAL_TRUE = @BUILD_PDB2HDF_CONDITIONAL_TRUE@
-BYTESEX = @BYTESEX@
-CC = @CC@
-CCDEPMODE = @CCDEPMODE@
-CC_VERSION = @CC_VERSION@
-CFLAGS = @CFLAGS@
-CONFIG_DATE = @CONFIG_DATE@
-CONFIG_MODE = @CONFIG_MODE@
-CONFIG_USER = @CONFIG_USER@
-CPP = @CPP@
-CPPFLAGS = @CPPFLAGS@
-CXX = @CXX@
-CXXCPP = @CXXCPP@
-CXXDEPMODE = @CXXDEPMODE@
-CXXFLAGS = @CXXFLAGS@
-CYGPATH_W = @CYGPATH_W@
-DEBUG_PKG = @DEBUG_PKG@
-DEFS = @DEFS@
-DEPDIR = @DEPDIR@
-DYNAMIC_DIRS = @DYNAMIC_DIRS@
-ECHO = @ECHO@
-ECHO_C = @ECHO_C@
-ECHO_N = @ECHO_N@
-ECHO_T = @ECHO_T@
-EGREP = @EGREP@
-EXEEXT = @EXEEXT@
-F77 = @F77@
-
-# Make sure that these variables are exported to the Makefiles
-F9XMODEXT = @F9XMODEXT@
-F9XMODFLAG = @F9XMODFLAG@
-F9XSUFFIXFLAG = @F9XSUFFIXFLAG@
-FC = @FC@
-FCFLAGS = @FCFLAGS@
-FCLIBS = @FCLIBS@
-FFLAGS = @FFLAGS@
-FILTERS = @FILTERS@
-FSEARCH_DIRS = @FSEARCH_DIRS@
-H5_VERSION = @H5_VERSION@
-HADDR_T = @HADDR_T@
-HDF5_INTERFACES = @HDF5_INTERFACES@
-HID_T = @HID_T@
-HL = @HL@
-HL_FOR = @HL_FOR@
-HSIZET = @HSIZET@
-HSIZE_T = @HSIZE_T@
-HSSIZE_T = @HSSIZE_T@
-INSTALL_DATA = @INSTALL_DATA@
-INSTALL_PROGRAM = @INSTALL_PROGRAM@
-INSTALL_SCRIPT = @INSTALL_SCRIPT@
-INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@
-INSTRUMENT_LIBRARY = @INSTRUMENT_LIBRARY@
-LDFLAGS = @LDFLAGS@
-LIBOBJS = @LIBOBJS@
-LIBS = @LIBS@
-LIBTOOL = @LIBTOOL@
-LN_S = @LN_S@
-LTLIBOBJS = @LTLIBOBJS@
-LT_STATIC_EXEC = @LT_STATIC_EXEC@
-MAINT = @MAINT@
-MAINTAINER_MODE_FALSE = @MAINTAINER_MODE_FALSE@
-MAINTAINER_MODE_TRUE = @MAINTAINER_MODE_TRUE@
-MAKEINFO = @MAKEINFO@
-MPE = @MPE@
-OBJECT_NAMELEN_DEFAULT_F = @OBJECT_NAMELEN_DEFAULT_F@
-OBJEXT = @OBJEXT@
-PACKAGE = @PACKAGE@
-PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@
-PACKAGE_NAME = @PACKAGE_NAME@
-PACKAGE_STRING = @PACKAGE_STRING@
-PACKAGE_TARNAME = @PACKAGE_TARNAME@
-PACKAGE_VERSION = @PACKAGE_VERSION@
-PARALLEL = @PARALLEL@
-PATH_SEPARATOR = @PATH_SEPARATOR@
-PERL = @PERL@
-PTHREAD = @PTHREAD@
-RANLIB = @RANLIB@
-ROOT = @ROOT@
-RUNPARALLEL = @RUNPARALLEL@
-RUNSERIAL = @RUNSERIAL@
-R_INTEGER = @R_INTEGER@
-R_LARGE = @R_LARGE@
-SEARCH = @SEARCH@
-SETX = @SETX@
-SET_MAKE = @SET_MAKE@
-
-# Hardcode SHELL to be /bin/sh. Most machines have this shell, and
-# on at least one machine configure fails to detect its existence (janus).
-# Also, when HDF5 is configured on one machine but run on another,
-# configure's automatic SHELL detection may not work on the build machine.
-SHELL = /bin/sh
-SIZE_T = @SIZE_T@
-STATIC_SHARED = @STATIC_SHARED@
-STRIP = @STRIP@
-TESTPARALLEL = @TESTPARALLEL@
-TRACE_API = @TRACE_API@
-USE_FILTER_DEFLATE = @USE_FILTER_DEFLATE@
-USE_FILTER_FLETCHER32 = @USE_FILTER_FLETCHER32@
-USE_FILTER_NBIT = @USE_FILTER_NBIT@
-USE_FILTER_SCALEOFFSET = @USE_FILTER_SCALEOFFSET@
-USE_FILTER_SHUFFLE = @USE_FILTER_SHUFFLE@
-USE_FILTER_SZIP = @USE_FILTER_SZIP@
-VERSION = @VERSION@
-ac_ct_AR = @ac_ct_AR@
-ac_ct_CC = @ac_ct_CC@
-ac_ct_CXX = @ac_ct_CXX@
-ac_ct_F77 = @ac_ct_F77@
-ac_ct_FC = @ac_ct_FC@
-ac_ct_RANLIB = @ac_ct_RANLIB@
-ac_ct_STRIP = @ac_ct_STRIP@
-am__fastdepCC_FALSE = @am__fastdepCC_FALSE@
-am__fastdepCC_TRUE = @am__fastdepCC_TRUE@
-am__fastdepCXX_FALSE = @am__fastdepCXX_FALSE@
-am__fastdepCXX_TRUE = @am__fastdepCXX_TRUE@
-am__include = @am__include@
-am__leading_dot = @am__leading_dot@
-am__quote = @am__quote@
-am__tar = @am__tar@
-am__untar = @am__untar@
-bindir = @bindir@
-build = @build@
-build_alias = @build_alias@
-build_cpu = @build_cpu@
-build_os = @build_os@
-build_vendor = @build_vendor@
-datadir = @datadir@
-exec_prefix = @exec_prefix@
-host = @host@
-host_alias = @host_alias@
-host_cpu = @host_cpu@
-host_os = @host_os@
-host_vendor = @host_vendor@
-
-# Install directories that automake doesn't know about
-includedir = $(exec_prefix)/include
-infodir = @infodir@
-install_sh = @install_sh@
-libdir = @libdir@
-libexecdir = @libexecdir@
-localstatedir = @localstatedir@
-mandir = @mandir@
-mkdir_p = @mkdir_p@
-oldincludedir = @oldincludedir@
-prefix = @prefix@
-program_transform_name = @program_transform_name@
-sbindir = @sbindir@
-sharedstatedir = @sharedstatedir@
-sysconfdir = @sysconfdir@
-target_alias = @target_alias@
-
-# Shell commands used in Makefiles
-RM = rm -f
-CP = cp
-
-# Some machines need a command to run executables; this is that command
-# so that our tests will run.
-# We use RUNTESTS instead of RUNSERIAL directly because it may be that
-# some tests need to be run with a different command. Older versions
-# of the makefiles used the command
-# $(LIBTOOL) --mode=execute
-# in some directories, for instance.
-RUNTESTS = $(RUNSERIAL)
-
-# Libraries to link to while building
-LIBHDF5 = $(top_builddir)/src/libhdf5.la
-LIBH5TEST = $(top_builddir)/test/libh5test.la
-LIBH5F = $(top_builddir)/fortran/src/libhdf5_fortran.la
-LIBH5FTEST = $(top_builddir)/fortran/test/libh5test_fortran.la
-LIBH5CPP = $(top_builddir)/c++/src/libhdf5_cpp.la
-LIBH5TOOLS = $(top_builddir)/tools/lib/libh5tools.la
-LIBH5_HL = $(top_builddir)/hl/src/libhdf5_hl.la
-LIBH5F_HL = $(top_builddir)/hl/fortran/src/libhdf5hl_fortran.la
-LIBH5CPP_HL = $(top_builddir)/hl/c++/src/libhdf5_hl_cpp.la
-docdir = $(exec_prefix)/doc
-
-# Scripts used to build examples
-H5CC = $(bindir)/h5cc
-H5CC_PP = $(bindir)/h5pcc
-H5FC = $(bindir)/h5fc
-H5FC_PP = $(bindir)/h5pfc
-
-# .chkexe and .chksh files are used to mark tests that have run successfully.
-MOSTLYCLEANFILES = *.chkexe *.chksh
-localdocdir = $(docdir)/hdf5/fortran
-
-# Public doc files (to be installed)...
-localdoc_DATA = F90Flags.html F90UserNotes.html
-all: all-am
-
-.SUFFIXES:
-$(srcdir)/Makefile.in: @MAINTAINER_MODE_TRUE@ $(srcdir)/Makefile.am $(top_srcdir)/config/commence-doc.am $(top_srcdir)/config/commence.am $(am__configure_deps)
- @for dep in $?; do \
- case '$(am__configure_deps)' in \
- *$$dep*) \
- cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh \
- && exit 0; \
- exit 1;; \
- esac; \
- done; \
- echo ' cd $(top_srcdir) && $(AUTOMAKE) --foreign doc/html/fortran/Makefile'; \
- cd $(top_srcdir) && \
- $(AUTOMAKE) --foreign doc/html/fortran/Makefile
-.PRECIOUS: Makefile
-Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status
- @case '$?' in \
- *config.status*) \
- cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \
- *) \
- echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \
- cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \
- esac;
-
-$(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES)
- cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
-
-$(top_srcdir)/configure: @MAINTAINER_MODE_TRUE@ $(am__configure_deps)
- cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
-$(ACLOCAL_M4): @MAINTAINER_MODE_TRUE@ $(am__aclocal_m4_deps)
- cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
-
-mostlyclean-libtool:
- -rm -f *.lo
-
-clean-libtool:
- -rm -rf .libs _libs
-
-distclean-libtool:
- -rm -f libtool
-uninstall-info-am:
-install-localdocDATA: $(localdoc_DATA)
- @$(NORMAL_INSTALL)
- test -z "$(localdocdir)" || $(mkdir_p) "$(DESTDIR)$(localdocdir)"
- @list='$(localdoc_DATA)'; for p in $$list; do \
- if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \
- f=$(am__strip_dir) \
- echo " $(localdocDATA_INSTALL) '$$d$$p' '$(DESTDIR)$(localdocdir)/$$f'"; \
- $(localdocDATA_INSTALL) "$$d$$p" "$(DESTDIR)$(localdocdir)/$$f"; \
- done
-
-uninstall-localdocDATA:
- @$(NORMAL_UNINSTALL)
- @list='$(localdoc_DATA)'; for p in $$list; do \
- f=$(am__strip_dir) \
- echo " rm -f '$(DESTDIR)$(localdocdir)/$$f'"; \
- rm -f "$(DESTDIR)$(localdocdir)/$$f"; \
- done
-tags: TAGS
-TAGS:
-
-ctags: CTAGS
-CTAGS:
-
-
-distdir: $(DISTFILES)
- $(mkdir_p) $(distdir)/../../../config
- @srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; \
- topsrcdirstrip=`echo "$(top_srcdir)" | sed 's|.|.|g'`; \
- list='$(DISTFILES)'; for file in $$list; do \
- case $$file in \
- $(srcdir)/*) file=`echo "$$file" | sed "s|^$$srcdirstrip/||"`;; \
- $(top_srcdir)/*) file=`echo "$$file" | sed "s|^$$topsrcdirstrip/|$(top_builddir)/|"`;; \
- esac; \
- if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \
- dir=`echo "$$file" | sed -e 's,/[^/]*$$,,'`; \
- if test "$$dir" != "$$file" && test "$$dir" != "."; then \
- dir="/$$dir"; \
- $(mkdir_p) "$(distdir)$$dir"; \
- else \
- dir=''; \
- fi; \
- if test -d $$d/$$file; then \
- if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \
- cp -pR $(srcdir)/$$file $(distdir)$$dir || exit 1; \
- fi; \
- cp -pR $$d/$$file $(distdir)$$dir || exit 1; \
- else \
- test -f $(distdir)/$$file \
- || cp -p $$d/$$file $(distdir)/$$file \
- || exit 1; \
- fi; \
- done
-check-am: all-am
-check: check-am
-all-am: Makefile $(DATA)
-installdirs:
- for dir in "$(DESTDIR)$(localdocdir)"; do \
- test -z "$$dir" || $(mkdir_p) "$$dir"; \
- done
-install: install-am
-install-exec: install-exec-am
-install-data: install-data-am
-uninstall: uninstall-am
-
-install-am: all-am
- @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am
-
-installcheck: installcheck-am
-install-strip:
- $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
- install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
- `test -z '$(STRIP)' || \
- echo "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'"` install
-mostlyclean-generic:
- -test -z "$(MOSTLYCLEANFILES)" || rm -f $(MOSTLYCLEANFILES)
-
-clean-generic:
-
-distclean-generic:
- -test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES)
-
-maintainer-clean-generic:
- @echo "This command is intended for maintainers to use"
- @echo "it deletes files that may require special tools to rebuild."
-clean: clean-am
-
-clean-am: clean-generic clean-libtool mostlyclean-am
-
-distclean: distclean-am
- -rm -f Makefile
-distclean-am: clean-am distclean-generic distclean-libtool
-
-dvi: dvi-am
-
-dvi-am:
-
-html: html-am
-
-info: info-am
-
-info-am:
-
-install-data-am: install-localdocDATA
-
-install-exec-am:
-
-install-info: install-info-am
-
-install-man:
-
-installcheck-am:
-
-maintainer-clean: maintainer-clean-am
- -rm -f Makefile
-maintainer-clean-am: distclean-am maintainer-clean-generic
-
-mostlyclean: mostlyclean-am
-
-mostlyclean-am: mostlyclean-generic mostlyclean-libtool
-
-pdf: pdf-am
-
-pdf-am:
-
-ps: ps-am
-
-ps-am:
-
-uninstall-am: uninstall-info-am uninstall-localdocDATA
-
-.PHONY: all all-am check check-am clean clean-generic clean-libtool \
- distclean distclean-generic distclean-libtool distdir dvi \
- dvi-am html html-am info info-am install install-am \
- install-data install-data-am install-exec install-exec-am \
- install-info install-info-am install-localdocDATA install-man \
- install-strip installcheck installcheck-am installdirs \
- maintainer-clean maintainer-clean-generic mostlyclean \
- mostlyclean-generic mostlyclean-libtool pdf pdf-am ps ps-am \
- uninstall uninstall-am uninstall-info-am \
- uninstall-localdocDATA
-
-
-# Ignore most rules
-lib progs check test _test check-p check-s:
- @echo "Nothing to be done"
-
-tests dep depend:
- @@SETX@; for d in X $(SUBDIRS); do \
- if test $$d != X; then \
- (cd $$d && $(MAKE) $(AM_MAKEFLAGS) $@) || exit 1; \
-	  fi; \
- done
-
-# In docs directory, install-doc is the same as install
-install-doc install-all:
- $(MAKE) $(AM_MAKEFLAGS) install
-uninstall-doc uninstall-all:
- $(MAKE) $(AM_MAKEFLAGS) uninstall
-# Tell versions [3.59,3.63) of GNU make to not export all variables.
-# Otherwise a system limit (for SysV at least) may be exceeded.
-.NOEXPORT:
diff --git a/doc/html/group_p1.gif b/doc/html/group_p1.gif
deleted file mode 100644
index 5900446..0000000
Binary files a/doc/html/group_p1.gif and /dev/null differ
diff --git a/doc/html/group_p1.obj b/doc/html/group_p1.obj
deleted file mode 100644
index 5f41959..0000000
--- a/doc/html/group_p1.obj
+++ /dev/null
@@ -1,85 +0,0 @@
-%TGIF 3.0-p5
-state(0,33,100,0,0,0,8,1,9,1,1,0,2,1,0,1,1,'Times-Roman',0,24,0,0,0,10,0,0,1,1,0,16,0,0,1,1,1,0,1088,1408,0,0,2880).
-%
-% @(#)$Header$
-% %W%
-%
-unit("1 pixel/pixel").
-page(1,"",1).
-text('black',80,168,'Courier',0,17,1,0,0,1,7,14,30,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "."]).
-text('black',80,184,'Courier',0,17,1,0,0,1,7,14,34,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "."]).
-text('black',80,200,'Courier',0,17,1,0,0,1,7,14,36,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "."]).
-text('black',80,216,'Courier',0,17,1,0,0,1,21,14,38,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "Foo"]).
-text('black',80,232,'Courier',0,17,1,0,0,1,7,14,43,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "."]).
-text('black',80,248,'Courier',0,17,1,0,0,1,7,14,47,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "."]).
-rcbox('black',64,152,128,280,0,1,1,0,16,49,0,0,0,0,'1',[
-]).
-text('black',208,152,'Courier',0,17,1,0,0,1,7,14,52,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "."]).
-text('black',80,152,'Courier',0,17,1,0,0,1,7,14,56,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "."]).
-text('black',208,168,'Courier',0,17,1,0,0,1,7,14,58,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "."]).
-text('black',208,184,'Courier',0,17,1,0,0,1,21,14,60,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "Bar"]).
-text('black',208,200,'Courier',0,17,1,0,0,1,7,14,62,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "."]).
-text('black',208,216,'Courier',0,17,1,0,0,1,7,14,64,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "."]).
-text('black',208,232,'Courier',0,17,1,0,0,1,7,14,68,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "."]).
-text('black',208,248,'Courier',0,17,1,0,0,1,7,14,72,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "."]).
-rcbox('black',192,152,256,280,0,1,1,0,16,74,0,0,0,0,'1',[
-]).
-text('black',336,152,'Courier',0,17,1,0,0,1,7,14,75,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "."]).
-text('black',336,168,'Courier',0,17,1,0,0,1,7,14,77,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "."]).
-text('black',336,184,'Courier',0,17,1,0,0,1,7,14,81,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "."]).
-text('black',336,200,'Courier',0,17,1,0,0,1,7,14,88,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "."]).
-text('black',336,216,'Courier',0,17,1,0,0,1,7,14,92,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "."]).
-text('black',336,232,'Courier',0,17,1,0,0,1,7,14,94,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "."]).
-text('black',336,248,'Courier',0,17,1,0,0,1,21,14,96,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "Baz"]).
-rcbox('black',320,152,384,280,0,1,1,0,16,98,0,0,0,0,'1',[
-]).
-text('black',224,360,'NewCenturySchlbk-Roman',0,17,2,1,0,1,42,30,99,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "Object",
- "Header"]).
-rcbox('black',192,344,256,408,0,1,1,0,16,101,0,0,0,0,'1',[
-]).
-poly('black',4,[
- 112,224,136,216,152,184,192,168],1,1,1,102,2,0,0,0,8,3,0,0,0,'1','8','3',
- "",[
-]).
-poly('black',4,[
- 232,192,272,184,288,168,320,160],1,1,1,107,2,0,0,0,8,3,0,0,0,'1','8','3',
- "",[
-]).
-poly('black',4,[
- 368,256,416,272,392,336,256,352],1,1,1,110,2,0,0,0,8,3,0,0,0,'1','8','3',
- "",[
-]).
-text('black',96,128,'Times-Roman',0,17,1,1,0,1,40,15,120,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "Group 1"]).
-text('black',224,128,'Times-Roman',0,17,1,1,0,1,40,15,126,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "Group 2"]).
-text('black',352,128,'Times-Roman',0,17,1,1,0,1,40,15,130,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "Group 3"]).
-text('black',224,320,'Times-Roman',0,17,1,1,0,1,64,15,134,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "Some Object"]).
-text('black',224,80,'Times-Roman',0,24,1,1,0,1,258,28,138,0,22,6,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "The name \"/Foo/Bar/Baz\""]).
-box('black',40,64,448,432,0,1,1,140,0,0,0,0,0,'1',[
-]).
diff --git a/doc/html/group_p2.gif b/doc/html/group_p2.gif
deleted file mode 100644
index a2d12a0..0000000
Binary files a/doc/html/group_p2.gif and /dev/null differ
diff --git a/doc/html/group_p2.obj b/doc/html/group_p2.obj
deleted file mode 100644
index cb91258..0000000
--- a/doc/html/group_p2.obj
+++ /dev/null
@@ -1,57 +0,0 @@
-%TGIF 3.0-p5
-state(0,33,100,0,0,0,8,1,9,1,1,0,2,1,0,1,0,'Courier',0,17,0,0,0,10,0,0,1,1,0,16,0,0,1,1,1,0,1088,1408,0,0,2880).
-%
-% @(#)$Header$
-% %W%
-%
-unit("1 pixel/pixel").
-page(1,"",1).
-text('black',144,128,'Courier',0,17,1,0,0,1,7,14,26,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "."]).
-text('black',144,144,'Courier',0,17,1,0,0,1,7,14,30,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "."]).
-text('black',144,160,'Courier',0,17,1,0,0,1,21,14,34,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "Foo"]).
-text('black',144,176,'Courier',0,17,1,0,0,1,7,14,36,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "."]).
-text('black',144,192,'Courier',0,17,1,0,0,1,7,14,38,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "."]).
-rcbox('black',128,128,192,256,0,1,1,0,16,40,0,0,0,0,'1',[
-]).
-text('black',144,320,'Courier',0,17,1,0,0,1,7,14,43,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "."]).
-text('black',144,336,'Courier',0,17,1,0,0,1,7,14,45,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "."]).
-text('black',144,352,'Courier',0,17,1,0,0,1,21,14,47,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "Bar"]).
-text('black',144,368,'Courier',0,17,1,0,0,1,7,14,49,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "."]).
-text('black',144,384,'Courier',0,17,1,0,0,1,7,14,51,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "."]).
-rcbox('black',128,320,192,448,0,1,1,0,16,53,0,0,0,0,'1',[
-]).
-text('black',160,96,'NewCenturySchlbk-Roman',0,17,1,1,0,1,46,15,64,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "Group 1"]).
-text('black',160,288,'NewCenturySchlbk-Roman',0,17,1,1,0,1,46,15,68,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "Group 2"]).
-text('black',352,224,'NewCenturySchlbk-Roman',0,17,2,1,0,1,35,30,70,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "Some",
- "Object"]).
-rcbox('black',320,256,384,384,0,1,1,0,16,72,0,0,0,0,'1',[
-]).
-poly('black',4,[
- 176,168,224,192,264,240,320,264],1,1,1,73,2,0,0,0,8,3,0,0,0,'1','8','3',
- "",[
-]).
-poly('black',4,[
- 176,360,232,344,272,288,320,272],1,1,1,74,2,0,0,0,8,3,0,0,0,'1','8','3',
- "",[
-]).
-text('black',264,40,'Helvetica',0,24,1,1,0,1,206,29,93,0,24,5,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "Hard Link Example"]).
-box('black',88,24,424,496,0,1,1,95,0,0,0,0,0,'1',[
-]).
-text('black',240,192,'Courier',0,17,1,0,0,1,63,14,129,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "hard link"]).
-text('black',248,336,'Courier',0,17,1,0,0,1,63,14,131,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "hard link"]).
diff --git a/doc/html/group_p3.gif b/doc/html/group_p3.gif
deleted file mode 100644
index 85346de..0000000
Binary files a/doc/html/group_p3.gif and /dev/null differ
diff --git a/doc/html/group_p3.obj b/doc/html/group_p3.obj
deleted file mode 100644
index ad93444..0000000
--- a/doc/html/group_p3.obj
+++ /dev/null
@@ -1,59 +0,0 @@
-%TGIF 3.0-p5
-state(0,33,100,0,0,0,8,1,9,1,1,0,2,1,0,1,0,'Courier',0,17,0,0,0,10,0,0,1,1,0,16,0,0,1,1,1,0,1088,1408,0,0,2880).
-%
-% @(#)$Header$
-% %W%
-%
-unit("1 pixel/pixel").
-page(1,"",1).
-text('black',144,128,'Courier',0,17,1,0,0,1,7,14,26,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "."]).
-text('black',144,144,'Courier',0,17,1,0,0,1,7,14,30,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "."]).
-text('black',144,160,'Courier',0,17,1,0,0,1,21,14,34,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "Foo"]).
-text('black',144,176,'Courier',0,17,1,0,0,1,7,14,36,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "."]).
-text('black',144,192,'Courier',0,17,1,0,0,1,7,14,38,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "."]).
-rcbox('black',128,128,192,256,0,1,1,0,16,40,0,0,0,0,'1',[
-]).
-text('black',144,320,'Courier',0,17,1,0,0,1,7,14,43,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "."]).
-text('black',144,336,'Courier',0,17,1,0,0,1,7,14,45,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "."]).
-text('black',144,352,'Courier',0,17,1,0,0,1,21,14,47,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "Bar"]).
-text('black',144,368,'Courier',0,17,1,0,0,1,7,14,49,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "."]).
-text('black',144,384,'Courier',0,17,1,0,0,1,7,14,51,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "."]).
-rcbox('black',128,320,192,448,0,1,1,0,16,53,0,0,0,0,'1',[
-]).
-text('black',160,96,'NewCenturySchlbk-Roman',0,17,1,1,0,1,46,15,64,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "Group 1"]).
-text('black',160,288,'NewCenturySchlbk-Roman',0,17,1,1,0,1,46,15,68,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "Group 2"]).
-text('black',352,96,'NewCenturySchlbk-Roman',0,17,2,1,0,1,35,30,70,0,12,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "Some",
- "Object"]).
-rcbox('black',320,128,384,256,0,1,1,0,16,72,0,0,0,0,'1',[
-]).
-text('black',264,40,'Helvetica',0,24,1,1,0,1,197,29,93,0,24,5,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "Soft Link Example"]).
-box('black',88,24,424,496,0,1,1,95,0,0,0,0,0,'1',[
-]).
-text('black',320,352,'Courier',0,17,1,0,0,1,35,14,105,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "\"Foo\""]).
-poly('black',4,[
- 176,168,232,160,264,144,320,136],1,1,1,111,2,0,0,0,8,3,0,0,0,'1','8','3',
- "",[
-]).
-poly('black',2,[
- 176,360,312,360],1,1,1,116,2,0,0,0,8,3,0,0,0,'1','8','3',
- "",[
-]).
-text('black',240,160,'Courier',0,17,1,0,0,1,63,14,119,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "hard link"]).
-text('black',216,368,'Courier',0,17,1,0,0,1,63,14,121,0,11,3,0,0,0,0,0,2,0,0,0,0,"",0,0,0,[
- "soft link"]).
diff --git a/doc/html/h5s.examples b/doc/html/h5s.examples
deleted file mode 100644
index 688382f..0000000
--- a/doc/html/h5s.examples
+++ /dev/null
@@ -1,347 +0,0 @@
-Example 1: Create a simple fixed size 3-D dataspace in memory and on disk and
- copy the entire dataset to disk.
-
-{
- hid_t file; /* File ID */
- hid_t dataset; /* Dataset ID */
- hid_t mem_space, file_space; /* Dataspaces for memory and the file */
- uint8 *buf; /* Buffer for data */
- hsize_t curr_dims[3]={3,4,5}; /* Dimensions of the dataset */
-
- /* Create file */
- file = H5Fcreate("example1.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
-
- /* Create dataspace for dataset in the file */
- /* Selection for dataspace defaults to entire space */
- file_space=H5Screate(H5S_SIMPLE);
-
- /* Set the extent & type of the dataset's dataspace */
- H5Sset_extent_simple(file_space,3,curr_dims,curr_dims);
-
- /* Create the dataspace for the dataset in memory */
- /* Selection for dataspace defaults to entire space */
- mem_space=H5Screate(H5S_SIMPLE);
-
- /* Set the extent & type of the memory dataspace */
- H5Sset_extent_simple(mem_space,3,curr_dims,curr_dims);
-
- /* Create the dataset on disk */
- dataset=H5Dcreate(file,"Dataset",H5T_NATIVE_UINT8,file_space,H5P_DEFAULT);
-
- /* Write the dataset to the file */
- H5Dwrite(dataset,H5T_NATIVE_UINT8,mem_space,file_space,H5P_DEFAULT,buf);
-
- /* Close dataspaces */
- H5Sclose(mem_space);
- H5Sclose(file_space);
-
- /* Close dataset & file */
- H5Dclose(dataset);
- H5Fclose(file);
-}
-
-
-Example 2: Create a simple fixed size 3-D dataspace in memory and on disk and
- copy a hyperslab to disk. The hyperslab blocks are packed and
- contiguous in memory, but are scattered when written to the dataset
- on disk.
-
-{
- hid_t file; /* File ID */
- hid_t dataset; /* Dataset ID */
- hid_t mem_space, file_space; /* Dataspaces for memory and the file */
- uint8 *buf; /* Buffer for data */
- hsize_t start[3]={3,4,5}; /* Start of hyperslab */
- hsize_t stride[3]={1,2,2}; /* Stride for hyperslab */
- hsize_t count[3]={3,3,3}; /* Hyperslab block count in each dimension */
- hsize_t block[3]={2,2,2}; /* Hyperslab block size in each dimension */
- hsize_t curr_dims[3]={13,14,15}; /* Dimensions of the dataset */
-
- /* Create file */
- file = H5Fcreate("example2.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
-
- /* Create dataspace for dataset in the file */
- /* Selection for dataspace defaults to entire space */
- file_space=H5Screate(H5S_SIMPLE);
-
- /* Set the extent & type of the dataset's dataspace */
- H5Sset_extent_simple(file_space,3,curr_dims,curr_dims);
-
- /* Set the hyperslab selection for a file dataspace */
- H5Sselect_hyperslab(file_space,H5S_SELECT_SET,start,stride,count,block);
-
- /* Create the dataspace for the dataset in memory */
- /* Selection for dataspace defaults to entire space */
- mem_space=H5Screate(H5S_SIMPLE);
-
- /* Set the extent & type of the memory dataspace */
- /* Compute the memory dimensions based on the hyperslab blocks to write */
-    for(int i=0; i<3; i++)
- curr_dims[i]=count[i]*block[i];
- H5Sset_extent_simple(mem_space,3,curr_dims,curr_dims);
-
- /* Create the dataset on disk */
- dataset=H5Dcreate(file,"Dataset",H5T_NATIVE_UINT8,file_space,H5P_DEFAULT);
-
- /* Write the hyperslab to the file */
- H5Dwrite(dataset,H5T_NATIVE_UINT8,mem_space,file_space,H5P_DEFAULT,buf);
-
- /* Close dataspaces */
- H5Sclose(mem_space);
- H5Sclose(file_space);
-
- /* Close dataset & file */
- H5Dclose(dataset);
- H5Fclose(file);
-}
-
-
-Example 3: Create a simple fixed size 3-D dataspace in memory and on disk and
- copy a specific selection of points (with a particular order) to
- disk. The memory and file dataspaces are different sizes, but the number
-	of points selected is the same.
-
-{
- hid_t file; /* File ID */
- hid_t dataset; /* Dataset ID */
- hid_t mem_space, file_space; /* Dataspaces for memory and the file */
- uint8 *buf; /* Buffer for data */
- hsize_t elements[5][3]; /* Dataspace elements selected */
- hsize_t curr_dims[3]={13,14,15}; /* Dimensions of the dataset */
-
- /* Create file */
- file = H5Fcreate("example3.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
-
- /* Create dataspace for dataset in the file */
- /* Selection for dataspace defaults to entire space */
- file_space=H5Screate(H5S_SIMPLE);
-
- /* Set the extent & type of the dataset's dataspace */
- H5Sset_extent_simple(file_space,3,curr_dims,curr_dims);
-
- /* Set the elements for the selection in the file dataspace */
- elements[0]={0,2,4}; /* Yes, I know this won't compile.. :-) */
- elements[1]={3,4,1};
- elements[2]={9,8,3};
- elements[3]={7,2,0};
- elements[4]={6,5,8};
- H5Sselect_elements(file_space,H5S_SELECT_SET,5,elements);
-
- /* Create the dataspace for the dataset in memory */
- /* Selection for dataspace defaults to entire space */
- mem_space=H5Screate(H5S_SIMPLE);
-
- /* Set the extent & type of the memory dataspace */
- curr_dims={23,15,18}; /* This won't compile either :-) */
- H5Sset_extent_simple(mem_space,3,curr_dims,curr_dims);
-
- /* Set the elements for the selection in the file dataspace */
- elements[0]={9,2,1};
- elements[1]={13,1,12};
- elements[2]={4,1,7};
- elements[3]={0,12,0};
- elements[4]={20,10,17};
- H5Sselect_elements(mem_space,H5S_SELECT_SET,5,elements);
-
- /* Create the dataset on disk */
- dataset=H5Dcreate(file,"Dataset",H5T_NATIVE_UINT8,file_space,H5P_DEFAULT);
-
- /* Write the hyperslab to the file */
- H5Dwrite(dataset,H5T_NATIVE_UINT8,mem_space,file_space,H5P_DEFAULT,buf);
-
- /* Close dataspaces */
- H5Sclose(mem_space);
- H5Sclose(file_space);
-
- /* Close dataset & file */
- H5Dclose(dataset);
- H5Fclose(file);
-}
-
-
-Example 4: Create a simple fixed size 3-D dataspace in memory and on disk and
-	build up hyperslab selections to copy from memory to disk.  The
- selection is the same for both dataspaces, but a different offset is used,
- to illustrate the selection offsets.
-
-{
- hid_t file; /* File ID */
- hid_t dataset; /* Dataset ID */
- hid_t mem_space, file_space; /* Dataspaces for memory and the file */
- uint8 *buf; /* Buffer for data */
- hsize_t start[3]; /* Start of hyperslab */
- hsize_t stride[3]; /* Stride for hyperslab */
- hsize_t count[3]; /* Hyperslab block count in each dimension */
- hsize_t block[3]; /* Hyperslab block size in each dimension */
- hssize_t offset[3]; /* Selection offset */
- hsize_t curr_dims[3]={13,14,15}; /* Dimensions of the dataset */
-
- /* Create file */
- file = H5Fcreate("example4.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
-
- /* Create dataspace for dataset in the file */
- /* Selection for dataspace defaults to entire space */
- file_space=H5Screate(H5S_SIMPLE);
-
- /* Set the extent & type of the dataset's dataspace */
- H5Sset_extent_simple(file_space,3,curr_dims,curr_dims);
-
- /* Build up the selection with a series of hyperslab selections */
- start={0,2,4}; /* Again, this won't compile.. :-) */
- stride={1,1,1};
- count={6,5,8};
- block={1,1,1};
-
- /* Set the first selection, union the rest in */
- H5Sselect_hyperslab(file_space,H5S_SELECT_SET,start,stride,count,block);
-
- /* initialize the second hyperslab */
- start={10,9,1}; /* Again, this won't compile.. :-) */
- stride={1,1,1};
- count={2,3,10};
- block={1,1,1};
-
- /* Union the second hyperslab into the file dataspace's selection */
- H5Sselect_hyperslab(file_space,H5S_SELECT_UNION,start,stride,count,block);
-
- /* initialize the third hyperslab */
- start={3,10,5}; /* Again, this won't compile.. :-) */
- stride={1,1,1};
- count={8,2,6};
- block={1,1,1};
-
- /* Union the final hyperslab into the file dataspace's selection */
- H5Sselect_hyperslab(file_space,H5S_SELECT_UNION,start,stride,count,block);
-
- /* Create the dataspace for the dataset in memory */
- /* Selection for dataspace defaults to entire space */
- mem_space=H5Screate(H5S_SIMPLE);
-
- /* Set the extent & type of the memory dataspace */
- curr_dims={23,15,18}; /* This won't compile either :-) */
- H5Sset_extent_simple(mem_space,3,curr_dims,curr_dims);
-
- /* Copy the selection from the file dataspace */
- H5Sselect_op(mem_space,H5S_SELECT_COPY,file_space);
-
- /* Adjust the offset of the selection in the memory dataspace */
- offset={1,1,1};
- H5Soffset_simple(mem_space,offset);
-
- /* Create the dataset on disk */
- dataset=H5Dcreate(file,"Dataset",H5T_NATIVE_UINT8,file_space,H5P_DEFAULT);
-
- /* Write the hyperslab to the file */
- H5Dwrite(dataset,H5T_NATIVE_UINT8,mem_space,file_space,H5P_DEFAULT,buf);
-
- /* Close dataspaces */
- H5Sclose(mem_space);
- H5Sclose(file_space);
-
- /* Close dataset & file */
- H5Dclose(dataset);
- H5Fclose(file);
-}
-
-
-Example 5: Same as example 1 (create a simple fixed size 3-D dataspace in memory and on disk and
- copy the entire dataset to disk), except that the selection order is changed
-	for the memory dataspace, to switch between FORTRAN and C array ordering.
-
-{
- hid_t file; /* File ID */
- hid_t dataset; /* Dataset ID */
- hid_t mem_space, file_space; /* Dataspaces for memory and the file */
- uint8 *buf; /* Buffer for data */
- hsize_t order[3]; /* Dimension ordering for selection */
- hsize_t curr_dims[3]={3,4,5}; /* Dimensions of the dataset */
-
- /* Create file */
- file = H5Fcreate("example5.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
-
- /* Create dataspace for dataset in the file */
- /* Selection for dataspace defaults to entire space and C array order */
- file_space=H5Screate(H5S_SIMPLE);
-
- /* Set the extent & type of the dataset's dataspace */
- H5Sset_extent_simple(file_space,3,curr_dims,curr_dims);
-
- /* Create the dataspace for the dataset in memory */
- /* Selection for dataspace defaults to entire space and C array order */
- mem_space=H5Screate(H5S_SIMPLE);
-
- /* Set the extent & type of the memory dataspace */
- H5Sset_extent_simple(mem_space,3,curr_dims,curr_dims);
-
- /* Change selection ordering to FORTRAN order for memory dataspace */
-    order[0]=0; order[1]=1; order[2]=2;
- H5Sselect_order(mem_space,order);
-
- /* Create the dataset on disk */
- dataset=H5Dcreate(file,"Dataset",H5T_NATIVE_UINT8,file_space,H5P_DEFAULT);
-
- /* Write the dataset to the file */
- H5Dwrite(dataset,H5T_NATIVE_UINT8,mem_space,file_space,H5P_DEFAULT,buf);
-
- /* Close dataspaces */
- H5Sclose(mem_space);
- H5Sclose(file_space);
-
- /* Close dataset & file */
- H5Dclose(dataset);
- H5Fclose(file);
-}
-
-
-Example 6: Create a stored dataspace on disk and use the H5Ssubspace function
- to create a dataspace located within that space.
-
-{
- hid_t file; /* File ID */
- hid_t space1, space2; /* Dataspace IDs */
- hsize_t start[3]; /* Start of hyperslab */
- hsize_t count[3]; /* Hyperslab block count in each dimension */
- hsize_t curr_dims[3]={13,14,15};/* Dimensions of the dataset */
-
- /* Create file */
- file = H5Fcreate("example6.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
-
- /* Create dataspace #1 */
- space1=H5Screate(H5S_SIMPLE);
-
- /* Set the extent & type of dataspace #1 */
- H5Sset_extent_simple(space1,3,curr_dims,curr_dims);
-
- /* Store dataspace #1 on disk */
- H5Scommit(file,"/Dataspaces/Dataspace #1",space1);
-
- /* Select a contiguous hyperslab in dataspace #1 to create dataspace #2 with */
-    start[0]=0; start[1]=2; start[2]=4;
-    count[0]=6; count[1]=5; count[2]=8;
-
- /*
- * Use stride and block set to NULL to get contiguous, single element sized
- * hyperslab. The stride and block parameters could also be set to all
- * 1's, but this is simpler and easier.
- */
- H5Sselect_hyperslab(space1,H5S_SELECT_SET,start,NULL,count,NULL);
-
- /* Create dataspace #2 as a dataspace located within dataspace #1 */
- space2=H5Ssubspace(space1);
-
- /* Store dataspace #2 on disk also */
- H5Scommit(file,"/Dataspaces/Dataspace #2",space2);
-
- /*
- * space1 & space2 can be used to create datasets, etc. Any datasets
- * created with space2 can have their dataspace queried to find the parent
- * dataspace and the location within the parent dataspace
- */
-
- /* Close dataspaces */
- H5Sclose(space1);
- H5Sclose(space2);
-
- /* Close file */
- H5Fclose(file);
-}
diff --git a/doc/html/hdf2.jpg b/doc/html/hdf2.jpg
deleted file mode 100644
index 92b53c9..0000000
Binary files a/doc/html/hdf2.jpg and /dev/null differ
diff --git a/doc/html/heap.txt b/doc/html/heap.txt
deleted file mode 100644
index 6b4c058..0000000
--- a/doc/html/heap.txt
+++ /dev/null
@@ -1,72 +0,0 @@
- HEAP MANAGEMENT IN HDF5
- ------------------------
-
-Heap functions are in the H5H package.
-
-
-off_t
-H5H_new (hdf5_file_t *f, size_t size_hint, size_t realloc_hint);
-
- Creates a new heap in the specified file which can efficiently
- store at least SIZE_HINT bytes. The heap can store more than
- that, but doing so may cause the heap to become less efficient
- (for instance, a heap implemented as a B-tree might become
- discontiguous). The REALLOC_HINT is the minimum number of bytes
- by which the heap will grow when it must be resized. The hints
- may be zero in which case reasonable (but probably not
- optimal) values will be chosen.
-
- The return value is the address of the new heap relative to
- the beginning of the file boot block.
-
-off_t
-H5H_insert (hdf5_file_t *f, off_t addr, size_t size, const void *buf);
-
- Copies SIZE bytes of data from BUF into the heap whose address
- is ADDR in file F. BUF must be the _entire_ heap object. The
- return value is the byte offset of the new data in the heap.
-
-void *
-H5H_read (hdf5_file_t *f, off_t addr, off_t offset, size_t size, void *buf);
-
- Copies SIZE bytes of data from the heap whose address is ADDR
- in file F into BUF and then returns the address of BUF. If
- BUF is the null pointer then a new buffer will be malloc'd by
- this function and its address is returned.
-
- Returns buffer address or null.
-
-const void *
-H5H_peek (hdf5_file_t *f, off_t addr, off_t offset)
-
- A more efficient version of H5H_read that returns a pointer
- directly into the cache; the data is not copied from the cache
- to a buffer. The pointer is valid until the next call to an
- H5AC function directly or indirectly.
-
- Returns a pointer or null. Do not free the pointer.
-
-void *
-H5H_write (hdf5_file_t *f, off_t addr, off_t offset, size_t size,
- const void *buf);
-
- Modifies (part of) an object in the heap at address ADDR of
- file F by copying SIZE bytes from the beginning of BUF to the
- file. OFFSET is the address within the heap where the output
- is to occur.
-
- This function can fail if the combination of OFFSET and SIZE
- would write over a boundary between two heap objects.
-
-herr_t
-H5H_remove (hdf5_file_t *f, off_t addr, off_t offset, size_t size);
-
- Removes an object or part of an object which begins at byte
- OFFSET within a heap whose address is ADDR in file F. SIZE
- bytes are returned to the free list. Removing the middle of
- an object has the side effect that one object is now split
- into two objects.
-
- Returns success or failure.
-
-
diff --git a/doc/html/index.html b/doc/html/index.html
deleted file mode 100644
index 3e37f59..0000000
--- a/doc/html/index.html
+++ /dev/null
@@ -1,308 +0,0 @@
- HDF5 - A New Generation of HDF
-
The Hierarchical Data Format
-
-
-
- HDF Links at NCSA
-
-HDF5 User Documentation
-
-
Release 1.7
- (unreleased development branch)
-
- Served from the HDF5 website at NCSA
-
- HDF5 Tools
-
-
-
-HDFView
, h5dump
,
- h5ls
, h5toh4
, etc.
-
-
-
- HDF5 Library Development Documentation
-
-
-
-
-
-
-
-
-
-
-
-
-
- The National Center for Supercomputing Applications
- University of Illinois
- at Urbana-Champaign
-
-
-
-Last modified: 17 May 2005
-
-
-
-Describes HDF5 Release 1.7, the unreleased development branch;
-working toward HDF5 Release 1.8.0.
-
-
-All rights reserved.
-See full copyright notice.
-
-How to Relocate a File Data Structure
-
-
-
-
- old_addr
to new_addr
.
-
-
-
-
-
-
- H5AC_flush
is
- FALSE
.
-
-
- H5AC_flush (f, H5AC_BT, old_addr, FALSE);
-
-
-
-
- H5F_block_read (f, old_addr, size, buf);
-
- H5F_block_write (f, new_addr, size, buf);
-
-
-
- H5AC_rename (f, H5AC_BT, old_addr, new_addr);
-
-
-
- Robb Matzke
-
-
-Last modified: Mon Jul 14 15:38:29 EST
-
-
-
diff --git a/doc/html/ph5design.html b/doc/html/ph5design.html
deleted file mode 100644
index 1280052..0000000
--- a/doc/html/ph5design.html
+++ /dev/null
@@ -1,77 +0,0 @@
-
-
-
-
-1. Design Overview
-1.1. Function requirements
-
-
-
-
-1.2. System requirements
-
-
-
-
-2. Programming Model
-
-
-
-2.1. Setup access template
-2.2. File open
-2.3. Dataset open
-2.4. Dataset access
-2.4.1. Independent dataset access
-2.4.2. Collective dataset access
-2.4.3. Dataset attributes access
-
- 2.5. Dataset close
-2.6. File close
-
- 3. Parallel HDF5 Example
-
-
-
-
Example code
-
Send comments to
-hdfparallel@ncsa.uiuc.edu
Example programs/sections of code below: -
Notes:
-This example creates a new HDF5 file and allows write access.
-If the file exists already, the H5F_ACC_TRUNC flag would also be necessary to
-overwrite the previous file's information.
-
-
Code:
-
-
-
-
- hid_t file_id;
-
- file_id=H5Fcreate("example1.h5",0);
-
- H5Fclose(file_id);
-
-
Notes:
-This example checks if a file is an HDF5 file and lists the contents of the top
-level (file level) group.
-
-
Code:
-
-
-
-
- hid_t file_id; /* File ID */
- uint32 num_items; /* number of items in top-level group */
- intn i; /* counter */
- char *obj_name; /* object's name as string atom */
- uintn name_len; /* object name's length in chars */
- uintn buf_len=0; /* buffer length for names */
- char *buf=NULL; /* buffer for names */
-
- if(H5Fis_hdf5("example2.h5")==TRUE)
- {
- file_id=H5Fopen("example2.h5",H5F_ACC_RDWR|H5ACC_CREATE);
- num_items=H5GgetNumContents(file_id);
- for(i=0; i<num_items; i++)
- {
- obj_name=H5GgetNameByIndex(file_id,i,NULL,0);
-        printf("object #%d is: %s\n",i,obj_name);
- HDfree(obj_name);
- }
- H5Fclose(file_id);
- }
-
-
Notes:
-This example creates a 4-dimensional dataset of 32-bit floating-point
-numbers, corresponding to the current Scientific Dataset functionality.
-This example assumes that the datatype and dataspace of the dataset will not
-be re-used.
-
-
Code:
-
-
-
-
- hid_t file_id; /* File's ID */
- uint32 dims[4]={6,5,4,3}; /* the size of each dimension */
- hid_t dataset_id; /* new object's ID */
- float32 obj_data[6][5][4][3]; /* storage for the dataset's data */
-
- if((file_id=H5Fcreate("example3.h5",H5F_ACC_TRUNC))>=0)
- {
- /* Create & initialize the dataset object */
- dataset_id=H5Mcreate(file_id,H5OBJ_DATASET,"Simple Object");
-
- /* Create & initialize a datatype object */
- H5TsetType(dataset_id,H5TYPE_FLOAT,4,H5T_BIGENDIAN);
-
- /* Initialize dimensionality of dataset */
-     H5SsetSpace(dataset_id,4,dims);
-
- <initialize data array>
-
- /* Write the entire dataset out */
-     H5Dwrite(dataset_id,H5S_SCALAR,obj_data);
-     <or>
-     H5Dwrite(dataset_id,dataset_id,obj_data);
-
- /* Release the atoms we've created */
- H5Mrelease(dataset_id);
-
- /* close the file */
- H5Fclose(file_id);
- }
-
Notes:
-This example creates a 1-dimensional dataset of compound datatype records,
-corresponding to the current Vdata functionality. This example also assumes
-that the datatype and dataspace will not be re-used.
-
-
Code:
-
-
-
-
- hid_t file_id; /* File's ID */
- uint32 dims[1]={45}; /* the size of the dimension */
- hid_t dataset_id; /* object's ID */
- void *obj_data; /* pointer to the dataset's data */
-
- if((file_id=H5Fcreate("example4.h5",H5F_ACC_TRUNC))>=0)
- {
- /* Create & initialize the dataset object */
- dataset_id=H5Mcreate(file_id,H5OBJ_DATASET,"Compound Object");
-
- /* Initialize datatype */
- H5TsetType(dataset_id,H5TYPE_STRUCT);
- H5TaddField(dataset_id,H5TYPE_FLOAT32,"Float32 Scalar Field",H5SPACE_SCALAR);
- H5TaddField(dataset_id,H5TYPE_CHAR,"Char Field",H5SPACE_SCALAR);
- H5TaddField(dataset_id,H5TYPE_UINT16,"UInt16 Field",H5SPACE_SCALAR);
- H5TendDefine(dataset_id);
-
- /* Initialize dimensionality */
- H5SsetSpace(dataset_id,1,dims);
-
- <initialize data array>
-
- /* Write the entire dataset out */
-     H5Dwrite(dataset_id,H5S_SCALAR,obj_data);
-
- /* Release the atoms we've created */
- H5Mrelease(dataset_id);
-
- /* close the file */
- H5Fclose(file_id);
- }
-
Notes:
-This example creates a 3-dimensional dataset of compound datatype records,
-roughly corresponding to a multi-dimensional Vdata functionality. This
-example also shows the use of multi-dimensional fields in the compound datatype.
-This example uses "stand-alone" datatypes and dataspaces.
-
-
Code:
-
-
-
-
- hid_t file_id; /* File's ID */
- hid_t type_id; /* datatype's ID */
- hid_t dim_id; /* dimensionality's ID */
- uint32 dims[3]={95,67,5}; /* the size of the dimensions */
- hid_t field_dim_id; /* dimensionality ID for fields in the structure */
- uint32 field_dims[4]; /* array for field dimensions */
- hid_t dataset_id; /* object's ID */
- void *obj_data; /* pointer to the dataset's data */
-
- if((file_id=H5Fcreate("example5.h5",H5F_ACC_TRUNC))>=0)
- {
- /* Create & initialize a datatype object */
- type_id=H5Mcreate(file_id,H5OBJ_DATATYPE,"Compound Type #1");
- H5TsetType(type_id,H5TYPE_STRUCT);
-
- /* Create each multi-dimensional field in structure */
- field_dim_id=H5Mcreate(file_id,H5OBJ_DATASPACE,"Lat/Long Dims");
- field_dims[0]=360;
- field_dims[1]=720;
- H5SsetSpace(field_dim_id,2,field_dims);
- H5TaddField(type_id,H5TYPE_FLOAT32,"Lat/Long Locations",field_dim_id);
- H5Mrelease(field_dim_id);
-
- field_dim_id=H5Mcreate(file_id,H5OBJ_DATASPACE,"Browse Dims");
- field_dims[0]=40;
- field_dims[1]=40;
- H5SsetSpace(field_dim_id,2,field_dims);
- H5TaddField(type_id,H5TYPE_CHAR,"Browse Image",field_dim_id);
- H5Mrelease(field_dim_id);
-
- field_dim_id=H5Mcreate(file_id,H5OBJ_DATASPACE,"Multispectral Dims");
- field_dims[0]=80;
- field_dims[1]=60;
- field_dims[2]=40;
- H5SsetSpace(field_dim_id,3,field_dims);
- H5TaddField(type_id,H5TYPE_UINT16,"Multispectral Scans",field_dim_id);
- H5Mrelease(field_dim_id);
- H5TendDefine(type_id);
-
- /* Create & initialize a dimensionality object */
- dim_id=H5Mcreate(file_id,H5OBJ_DATASPACE,"3-D Dim");
- H5SsetSpace(dim_id,3,dims);
-
- /* Create & initialize the dataset object */
- dataset_id=H5Mcreate(file_id,H5OBJ_DATASET,"Compound Multi-Dim Object");
- H5DsetInfo(dataset_id,type_id,dim_id);
-
- <initialize data array>
-
- /* Write the entire dataset out */
-     H5Dwrite(dataset_id,H5S_SCALAR,obj_data);
-
- /* Release the atoms we've created */
- H5Mrelease(type_id);
- H5Mrelease(dim_id);
- H5Mrelease(dataset_id);
-
- /* close the file */
- H5Fclose(file_id);
- }
-
Notes:
-This example shows how to get the information for and display a generic
-dataset.
-
-
Code:
-
-
diff --git a/doc/html/review1a.html b/doc/html/review1a.html
deleted file mode 100644
index 3df8af7..0000000
--- a/doc/html/review1a.html
+++ /dev/null
@@ -1,252 +0,0 @@
-
-
-
- hid_t file_id; /* File's ID */
- hid_t dataset_id; /* dataset's ID in memory */
- uintn elem_size; /* size of each element */
- uintn nelems; /* number of elements in array */
- void *obj_data; /* pointer to the dataset's data */
-
- if((file_id=H5Fopen("example6.h5",0))>=0)
- {
- /* Attach to a datatype object */
- dataset_id=H5MaccessByIndex(obj_oid,0);
-
- if(H5TbaseType(dataset_id)==H5T_COMPOUND)
- {
- <set up for compound object>
- }
- else
- {
- <set up for homogenous object>
- }
-
- elem_size=H5Tsize(dataset_id);
- nelems=H5Snelem(dataset_id);
- <allocate space based on element size and number of elements >
-
- /* Read in the dataset */
-     H5Dread(dataset_id,H5S_SCALAR,obj_data);
-     <or>
-     H5Dread(dataset_id,dataset_id,obj_data);
-
- /* Release the atoms we've accessed */
- H5Mrelease(dataset_id);
-
- /* close the file */
- H5Fclose(file_id);
- }
-
Directories (or now Groups) are currently implemented as - a directed graph with a single entry point into the graph which - is the Root Object. The root object is usually a - group. All objects have at least one predecessor (the Root - Object always has the HDF5 file super block as a - predecessor). The number of predecessors of a group is also - known as the hard link count or just link count. - Unlike Unix directories, HDF5 groups have no ".." entry since - any group can have multiple predecessors. Given the handle or - id of some object and returning a full name for that object - would be an expensive graph traversal. - -
A special optimization is that a file may contain a single - non-group object and no group(s). The object has one - predecessor which is the file super block. However, once a root - group is created it never disappears (although I suppose it - could if we wanted). - -
A special object called a Symbolic Link is simply a - name. Usually the name refers to some (other) object, but that - object need not exist. Symbolic links in HDF5 will have the - same semantics as symbolic links in Unix. - -
The symbol table graph contains "entries" for each name. An - entry contains the file address for the object header and - possibly certain messages cached from the object header. - -
The H5G package understands the notion of opening an object - which means that given the name of the object, a handle to the - object is returned (this isn't an API function). Objects can be - opened multiple times simultaneously through the same name or, - if the object has hard links, through other names. The name of - an object cannot be removed from a group if the object is opened - through that group (although the name can change within the - group). - -
Below the API, object attributes can be read without opening - the object; object attributes cannot change without first - opening that object. The one exception is that the contents of a - group can change without opening the group. - -
Assuming we have a flat name space (that is, the root object is - a group which contains names for all other objects in the file - and none of those objects are groups), then we can build a - hierarchy of groups that also refer to the objects. - -
The file initially contains `foo' `bar' `baz' in the root - group. We wish to add groups `grp1' and `grp2' so that `grp1' - contains objects `foo' and `baz' and `grp2' contains objects - `bar' and `baz' (so `baz' appears in both groups). - -
In either case below, one might want to move the flat objects - into some other group (like `flat') so their names don't - interfere with the rest of the hierarchy (or move the hierarchy - into a directory called `/hierarchy'). - -
Create group `grp1' and add symbolic links called `foo' whose - value is `/foo' and `baz' whose value is `/baz'. Similarly for - `grp2'. - -
Accessing `grp1/foo' involves searching the root group for - the name `grp1', then searching that group for `foo', then - searching the root directory for `foo'. Alternatively, one - could change working groups to the grp1 group and then ask for - `foo' which searches `grp1' for the name `foo', then searches - the root group for the name `foo'. - -
Deleting `/grp1/foo' deletes the symbolic link without - affecting the `/foo' object. Deleting `/foo' leaves the - `/grp1/foo' link dangling. - -
Creating the hierarchy is the same as with symbolic links. - -
Accessing `/grp1/foo' searches the root group for the name - `grp1', then searches that group for the name `foo'. If the - current working group is `/grp1' then we just search for the - name `foo'. - -
Deleting `/grp1/foo' leaves `/foo' and vice versa. - -
Depending on the eventual API...
-
-
-
- or
-
-
-H5Gcreate (file_id, "/grp1");
-H5Glink (file_id, H5G_HARD, "/foo", "/grp1/foo");
-
-
-
-
-group_id = H5Gcreate (root_id, "grp1");
-H5Glink (file_id, H5G_HARD, root_id, "foo", group_id, "foo");
-H5Gclose (group_id);
-
Similar to above, but in this case we have to watch out that - we don't get two names which are the same: what happens to - `/grp1/baz' and `/grp2/baz'? If they really refer to the same - object then we just have `/baz', but if they point to two - different objects what happens? - -
The other thing to watch out for is cycles in the graph when we - traverse it to build the flat namespace. - -
Two things to watch out for are that the group contents don't - appear to change in a manner which would confuse the - application, and that listing everything in a group is as - efficient as possible. - -
Query the number of things in a group and then query each item
- by index. A trivial implementation would be O(n*n) and wouldn't
- protect the caller from changes to the directory which move
- entries around and therefore change their indices.
-
-
-
-
-n = H5GgetNumContents (group_id);
-for (i=0; i<n; i++) {
- H5GgetNameByIndex (group_id, i, ...); /*don't worry about args yet*/
-}
-
The API contains a single function that reads all information
- from the specified group and returns that info through an array.
- The caller is responsible for freeing the array allocated by the
- query and the things to which it points. This also makes it
- clear that the returned value is a snapshot of the group which
- doesn't change if the group is modified.
-
-
-
- Notice that it would be difficult to expand the info struct since
- its definition is part of the API.
-
-
-n = H5Glist (file_id, "/grp1", info, ...);
-for (i=0; i<n; i++) {
- printf ("name = %s\n", info[i].name);
- free (info[i].name); /*and maybe other fields too?*/
-}
-free (info);
-
The caller asks for a snapshot of the group and then accesses
- items in the snapshot through various query-by-index API
- functions. When finished, the caller notifies the library that
- it's done with the snapshot. The word "snapshot" makes it clear
- that subsequent changes to the directory will not be reflected in
- the snapshot_id.
-
-
-
- In fact, we could allow the user to leave off the H5Gsnapshot and
- H5Grelease and use group_id in the H5GgetNumContents and
- H5GgetNameByIndex so they can choose between Method A and Method
- C.
-
-
-snapshot_id = H5Gsnapshot (group_id); /*or perhaps group_name */
-n = H5GgetNumContents (snapshot_id);
-for (i=0; i<n; i++) {
- H5GgetNameByIndex (snapshot_id, i, ...);
-}
-H5Grelease (snapshot_id);
-
hid_t H5Gsnapshot (hid_t group_id)
- H5GgetNameByIndex
is changed. Adding new entries
- to a group doesn't affect the snapshot.
-
- char *H5GgetNameByIndex (hid_t snapshot_id, int
- index)
- index
of
- the snapshot array to get the object name. This is a
- constant-time operation. The name is updated automatically if
- the object is renamed within the group.
-
- H5Gget<whatever>ByIndex...()
- index
,
- which is just a symbol table entry, and reads the appropriate
- object header message(s) which might be cached in the symbol
- table entry. This is a constant-time operation if cached,
- linear in the number of messages if not cached.
-
- H5Grelease (hid_t snapshot_id)
- char*
or some HDF5 string type.In either case, the caller has to release resources associated - with the return value, calling free() or some HDF5 function. - -
Names in the current implementation of the H5G package don't - contain embedded null characters and are always null terminated. - -
Eventually the caller probably wants a char*
so it
- can pass it to some non-HDF5 function, does that require
- strdup'ing the string again? Then the caller has to free() the
- char* and release the HDF5 string.
-
-
This document describes the various ways that raw data is - stored in an HDF5 file and the object header messages which - contain the parameters for the storage. - -
Raw data storage has three components: the mapping from some - logical multi-dimensional element space to the linear address - space of a file, compression of the raw data on disk, and - striping of raw data across multiple files. These components - are orthogonal. - -
Some goals of the storage mechanism are to be able to - efficiently store data which is: -
The Sparse Large, Dynamic Size, and Subslab Access methods
- share so much code that they can be described with a single
- message. The new Indexed Storage Message (0x0008
)
- will replace the old Chunked Object (0x0009
) and
- Sparse Object (0x000A
) Messages.
-
-
-
byte | -byte | -byte | -byte | -
---|---|---|---|
Address of B-tree |
- |||
Number of Dimensions | -Reserved | -Reserved | -Reserved | -
Reserved (4 bytes) | -|||
Alignment for Dimension 0 (4 bytes) | -|||
Alignment for Dimension 1 (4 bytes) | -|||
... | -|||
Alignment for Dimension N (4 bytes) | -
The alignment fields indicate the alignment in logical space to - use when allocating new storage areas on disk. For instance, - writing every other element of a 100-element one-dimensional - array (using one HDF5 I/O partial write operation per element) - that has unit storage alignment would result in 50 - single-element, discontiguous storage segments. However, using - an alignment of 25 would result in only four discontiguous - segments. The size of the message varies with the number of - dimensions. - -
A B-tree is used to point to the discontiguous portions of - storage which has been allocated for the object. All keys of a - particular B-tree are the same size and are a function of the - number of dimensions. It is therefore not possible to change the - dimensionality of an indexed storage array after its B-tree is - created. - -
-
byte | -byte | -byte | -byte | -
---|---|---|---|
External File Number or Zero (4 bytes) | -|||
Chunk Offset in Dimension 0 (4 bytes) | -|||
Chunk Offset in Dimension 1 (4 bytes) | -|||
... | -|||
Chunk Offset in Dimension N (4 bytes) | -
The keys within a B-tree obey an ordering based on the chunk - offsets. If the offsets in dimension-0 are equal, then - dimension-1 is used, etc. The External File Number field - contains a 1-origin offset into the External File List message - which contains the name of the external file in which that chunk - is stored. - -
The indexed storage will support arbitrary striping at the - chunk level; each chunk can be stored in any file. This is - accomplished by using the External File Number field of an - indexed storage B-tree key as a 1-origin offset into an External - File List Message (0x0009) which takes the form: - -
-
byte | -byte | -byte | -byte | -
---|---|---|---|
Name Heap Address |
- |||
Number of Slots Allocated (4 bytes) | -|||
Number of File Names (4 bytes) | -|||
Byte Offset of Name 1 in Heap (4 bytes) | -|||
Byte Offset of Name 2 in Heap (4 bytes) | -|||
... | -|||
Unused Slot(s) |
-
Each indexed storage array that has all or part of its data - stored in external files will contain a single external file - list message. The size of the message is determined when the - message is created, but it may be possible to enlarge the - message on demand by moving it. At this time, it's not possible - for multiple arrays to share a single external file list - message. -
- H5O_efl_t *H5O_efl_new (H5G_entry_t *object, intn
- nslots_hint, intn heap_size_hint)
-
-
- intn H5O_efl_index (H5O_efl_t *efl, const char *filename)
-
-
- H5F_low_t *H5O_efl_open (H5O_efl_t *efl, intn index, uintn mode)
-
-
- herr_t H5O_efl_release (H5O_efl_t *efl)
-
- H5O_efl_new
flushes the message
- to disk.
-
-
-
-NCSA Hierarchical Data Format (HDF) Software Library and Utilities
-
-Copyright 1998 the Board of Trustees of the University of Illinois
-
-All rights reserved.
-
- -Contributors: National Center for Supercomputing Applications (NCSA) at -the University of Illinois, Lawrence Livermore Nat'l Laboratory (LLNL), -Sandia National Laboratories (SNL), Los Alamos National Laboratory (LANL), -Jean-loup Gailly and Mark Adler (gzip library) -
- -Redistribution and use in source and binary forms, with or without -modification, are permitted for any purpose (including commercial purposes) -provided that the following conditions are met: -
- -
- - | - - |
-
|
H5open
(void)
-H5open
initializes the library. This function is
- normally called automatically, but if you find that an
- HDF5 library function is failing inexplicably, try calling
- this function first.
-H5close
(void)
-H5close
flushes all data to disk,
- closes all file identifiers, and cleans up all memory used by
- the library. This function is generally called when the
- application calls exit
, but may be called earlier
- in the event of an emergency shutdown or out of a desire to free all
- resources used by the HDF5 library.
-H5dont_atexit
(void)
-atexit
cleanup routine.
-H5dont_atexit
indicates to the library that an
- atexit()
cleanup routine should not be installed.
- The major purpose for this is in situations where the
- library is dynamically linked into an application and is
- un-linked from the application before exit()
gets
- called. In those situations, a routine installed with
- atexit()
would jump to a routine which was
- no longer in memory, causing errors.
- - In order to be effective, this routine must be called - before any other HDF function calls, and must be called each - time the library is loaded/linked into the application - (the first time and after it's been un-loaded). -
H5get_libversion
(unsigned *majnum
,
- unsigned *minnum
,
- unsigned *relnum
- )
-H5get_libversion
retrieves the major, minor, and release
- numbers of the version of the HDF library which is linked to
- the application.
-majnum
- minnum
- relnum
- H5check_version
(unsigned majnum
,
- unsigned minnum
,
- unsigned relnum
- )
-H5check_version
verifies that the arguments match the
- version numbers compiled into the library. This function is intended
- to be called by the user application to verify that the versions of header files
- compiled into the application match the version of the HDF5 library.
-
- Due to the risks of data corruption or segmentation faults,
- H5check_version
causes the application to abort if the
- version numbers do not match.
-
majnum
- minnum
- relnum
- patnum
- - - |
-
|
-
|
-The Attribute interface, H5A, is primarily designed to easily allow -small datasets to be attached to primary datasets as metadata information. -Additional goals for the H5A interface include keeping storage requirement -for each attribute to a minimum and easily sharing attributes among -datasets. -
-Because attributes are intended to be small objects, large datasets -intended as additional information for a primary dataset should be -stored as supplemental datasets in a group with the primary dataset. -Attributes can then be attached to the group containing everything -to indicate a particular type of dataset with supplemental datasets -is located in the group. How small is "small" is not defined by the -library and is up to the user's interpretation. -
-See Attributes in the -HDF5 User's Guide for further information. - -
H5Acreate
(hid_t loc_id
,
- const char *name
,
- hid_t type_id
,
- hid_t space_id
,
- hid_t create_plist
- )
-H5Acreate
creates an attribute which is attached
- to the object specified with loc_id
.
- loc_id
is an identifier of a group, dataset,
- or named datatype. The name specified with name
- for each attribute for an object must be unique for that object.
- The datatype and dataspace identifiers of the attribute,
- type_id
and space_id
, respectively,
- are created with the H5T and H5S interfaces, respectively.
- Currently only simple dataspaces are allowed for attribute
- dataspaces. The create_plist_id
property list
- is currently unused, but will be used in the future for optional
- properties of attributes. The attribute identifier returned from
- this function must be released with H5Aclose
or
- resource leaks will develop. Attempting to create an attribute
- with the same name as an already existing attribute will fail,
- leaving the pre-existing attribute in place.
-loc_id
- name
- type_id
- space_id
- create_plist
- H5Aopen_name
(hid_t loc_id
,
- const char *name
- )
-H5Aopen_name
opens an attribute specified by
- its name, name
, which is attached to the
- object specified with loc_id
.
- The location object may be either a group, dataset, or
- named datatype, which may have any sort of attribute.
- The attribute identifier returned from this function must
- be released with H5Aclose
or resource leaks
- will develop.
-loc_id
- name
- H5Aopen_idx
(hid_t loc_id
,
- unsigned int idx
- )
-H5Aopen_idx
opens an attribute which is attached
- to the object specified with loc_id
.
- The location object may be either a group, dataset, or
- named datatype, all of which may have any sort of attribute.
- The attribute specified by the index, idx
,
- indicates the attribute to access.
- The value of idx
is a 0-based, non-negative integer.
- The attribute identifier returned from this function must be
- released with H5Aclose
or resource leaks will develop.
-loc_id
- idx
- H5Awrite
(hid_t attr_id
,
- hid_t mem_type_id
,
- void *buf
- )
-H5Awrite
writes an attribute, specified with
- attr_id
. The attribute's memory datatype
- is specified with mem_type_id
. The entire
- attribute is written from buf
to the file.
- - Datatype conversion takes place at the time of a read or write - and is automatic. See the - Data Conversion - section of The Data Type Interface (H5T) in the - HDF5 User's Guide for a discussion of - data conversion, including the range of conversions currently - supported by the HDF5 libraries. -
attr_id
- mem_type_id
- buf
- H5Aread
(hid_t attr_id
,
- hid_t mem_type_id
,
- void *buf
- )
-H5Aread
reads an attribute, specified with
- attr_id
. The attribute's memory datatype
- is specified with mem_type_id
. The entire
- attribute is read into buf
from the file.
- - Datatype conversion takes place at the time of a read or write - and is automatic. See the - Data Conversion - section of The Data Type Interface (H5T) in the - HDF5 User's Guide for a discussion of - data conversion, including the range of conversions currently - supported by the HDF5 libraries. -
attr_id
- mem_type_id
- buf
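For example, the attribute write/read calls above can be sketched as follows. This fragment is illustrative only: `dataset_id` is assumed to identify an already open dataset, and the attribute name is invented for the example.

```c
#include <hdf5.h>

/* Sketch: create a scalar integer attribute on an open dataset,
 * write it, and read it back.  dataset_id is assumed to identify
 * an open dataset; the attribute name is illustrative. */
void attribute_round_trip(hid_t dataset_id)
{
    hid_t space_id = H5Screate(H5S_SCALAR);
    hid_t attr_id  = H5Acreate(dataset_id, "version", H5T_NATIVE_INT,
                               space_id, H5P_DEFAULT);
    int value = 7, read_back = 0;

    H5Awrite(attr_id, H5T_NATIVE_INT, &value);
    H5Aread(attr_id, H5T_NATIVE_INT, &read_back);

    H5Aclose(attr_id);    /* release identifiers to avoid resource leaks */
    H5Sclose(space_id);
}
```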
- H5Aget_space
(hid_t attr_id
)
-H5Aget_space
retrieves a copy of the dataspace
- for an attribute. The dataspace identifier returned from
- this function must be released with H5Sclose
- or resource leaks will develop.
-attr_id
- H5Aget_type
(hid_t attr_id
)
-H5Aget_type
retrieves a copy of the datatype
- for an attribute.
- - The datatype is reopened if it is a named type before returning - it to the application. The datatypes returned by this function - are always read-only. If an error occurs when atomizing the - return datatype, then the datatype is closed. -
- The datatype identifier returned from this function must be
- released with H5Tclose
or resource leaks will develop.
-
attr_id
- H5Aget_name
(hid_t attr_id
,
- char *buf
,
- size_t buf_size
- )
-H5Aget_name
retrieves the name of an attribute
- specified by the identifier, attr_id
.
- Up to buf_size
characters are stored in
- buf
followed by a \0
string
- terminator. If the name of the attribute is longer than
- buf_size
-1, the string terminator is stored in the
- last position of the buffer to properly terminate the string.
-attr_id
- buf
- buf_size
- buf_size
, if successful.
- Otherwise returns FAIL (-1).
-H5Aget_num_attrs
(hid_t loc_id
)
-H5Aget_num_attrs
returns the number of attributes
- attached to the object specified by its identifier,
- loc_id
.
- The object can be a group, dataset, or named datatype.
-loc_id
- H5Aiterate
(hid_t loc_id
,
- unsigned * idx
,
- H5A_operator_t op
,
- void *op_data
- )
-H5Aiterate
iterates over the attributes of
- the object specified by its identifier, loc_id
.
- The object can be a group, dataset, or named datatype.
- For each attribute of the object, the op_data
- and some additional information specified below are passed
- to the operator function op
.
- The iteration begins with the attribute specified by its
- index, idx
; the index for the next attribute
- to be processed by the operator, op
, is
- returned in idx
.
- If idx
is the null pointer, then all attributes
- are processed.
-
- The prototype for H5A_operator_t
is:
- typedef herr_t (*H5A_operator_t)(hid_t loc_id,
- const char *attr_name,
- void *operator_data);
-
-
- The operation receives the identifier for the group, dataset
- or named datatype being iterated over, loc_id
, the
- name of the current attribute about the object, attr_name
,
- and the pointer to the operator data passed in to H5Aiterate
,
- op_data
. The return values from an operator are:
-
loc_id
- idx
- op
- op_data
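As a sketch of the iteration interface, the following operator (function and variable names are illustrative) matches the H5A_operator_t prototype above and prints each attribute name; returning zero tells H5Aiterate to continue with the next attribute.

```c
#include <hdf5.h>
#include <stdio.h>

/* Operator matching the H5A_operator_t prototype; returning 0
 * continues the iteration. */
static herr_t print_attr(hid_t loc_id, const char *attr_name, void *op_data)
{
    (void)loc_id; (void)op_data;
    printf("attribute: %s\n", attr_name);
    return 0;
}

void list_attributes(hid_t dset_id)
{
    unsigned idx = 0;    /* begin with the first attribute */
    H5Aiterate(dset_id, &idx, print_attr, NULL);
}
```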
- H5Adelete
(hid_t loc_id
,
- const char *name
- )
-H5Adelete
removes the attribute specified by its
- name, name
, from a dataset, group, or named datatype.
- This function should not be used when attribute identifiers are
- open on loc_id
as it may cause the internal indexes
- of the attributes to change and future writes to the open
- attributes to produce incorrect results.
-loc_id
- name
- H5Aclose
(hid_t attr_id
)
-H5Aclose
terminates access to the attribute
- specified by its identifier, attr_id
.
- Further use of the attribute identifier will result in
- undefined behavior.
-attr_id
-
-
H5Dcreate
(hid_t loc_id
,
- const char *name
,
- hid_ttype_id
,
- hid_tspace_id
,
- hid_tcreate_plist_id
- )
-H5Dcreate
creates a data set with a name,
- name
, in the file or in the group specified by
- the identifier loc_id
.
- The dataset has the datatype and dataspace identified by
- type_id
and space_id
, respectively.
- The specified datatype and dataspace are the datatype and
- dataspace of the dataset as it will exist in the file,
- which may be different than in application memory.
- Dataset creation properties are specified by the argument
- create_plist_id
.
-
- create_plist_id
is an H5P_DATASET_CREATE
- property list created with H5Pcreate()
and
- initialized with the various functions described above.
- H5Dcreate()
returns a dataset identifier for success
- or negative for failure. The identifier should eventually be
- closed by calling H5Dclose()
to release resources
- it uses.
-
loc_id
- name
- type_id
- space_id
- create_plist_id
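A minimal sketch of dataset creation follows. `file_id` is assumed to identify an open file, and the dataset name and dimensions are illustrative; default creation properties are requested with H5P_DEFAULT.

```c
#include <hdf5.h>

/* Sketch: create a 10x5 dataset of native ints with default
 * creation properties.  file_id is assumed to be an open file. */
hid_t create_example_dataset(hid_t file_id)
{
    hsize_t dims[2] = {10, 5};
    hid_t space_id = H5Screate_simple(2, dims, NULL);
    hid_t dset_id  = H5Dcreate(file_id, "/example", H5T_NATIVE_INT,
                               space_id, H5P_DEFAULT);
    H5Sclose(space_id);   /* the dataset keeps its own copy */
    return dset_id;       /* caller releases it with H5Dclose() */
}
```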
- H5Dopen
(hid_t loc_id
,
- const char *name
- )
-H5Dopen
opens an existing dataset for access in the file
- or group specified in loc_id
. name
is
- a dataset name and is used to identify the dataset in the file.
-loc_id
- name
- H5Dget_space
(hid_t dataset_id
- )
-H5Dget_space
returns an identifier for a copy of the
- dataspace for a dataset.
- The dataspace identifier should be released with the
- H5Sclose()
function.
-dataset_id
- H5Dget_type
(hid_t dataset_id
- )
-H5Dget_type
returns an identifier for a copy of the
- datatype for a dataset.
- The datatype should be released with the H5Tclose()
function.
- - If a dataset has a named datatype, then an identifier to the - opened datatype is returned. - Otherwise, the returned datatype is read-only. - If atomization of the datatype fails, then the datatype is closed. -
dataset_id
- H5Dget_create_plist
(hid_t dataset_id
- )
-H5Dget_create_plist
returns an identifier for a
- copy of the dataset creation property list for a dataset.
- The creation property list identifier should be released with
- the H5Pclose()
function.
-dataset_id
- H5Dread
(hid_t dataset_id
,
- hid_t mem_type_id
,
- hid_t mem_space_id
,
- hid_t file_space_id
,
- hid_t xfer_plist_id
,
- void * buf
- )
-buf
,
- converting from file datatype and dataspace to
- memory datatype and dataspace.
-H5Dread
reads a (partial) dataset, specified by its
- identifier dataset_id
, from the file into the
- application memory buffer buf
.
- Data transfer properties are defined by the argument
- xfer_plist_id
.
- The memory datatype of the (partial) dataset is identified by
- the identifier mem_type_id
.
- The part of the dataset to read is defined by
- mem_space_id
and file_space_id
.
-
- file_space_id
can be the constant H5S_ALL
,
- which indicates that the entire file data space is to be referenced.
-
- mem_space_id
can be the constant H5S_ALL
,
- in which case the memory data space is the same as the file data space
- defined when the dataset was created.
-
- The number of elements in the memory data space must match - the number of elements in the file data space. -
- xfer_plist_id
can be the constant H5P_DEFAULT
,
- in which case the default data transfer properties are used.
-
-
- Datatype conversion takes place at the time of a read or write - and is automatic. See the - Data Conversion - section of The Data Type Interface (H5T) in the - HDF5 User's Guide for a discussion of - data conversion, including the range of conversions currently - supported by the HDF5 libraries. -
dataset_id
- mem_type_id
- mem_space_id
- file_space_id
- xfer_plist_id
- buf
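The common whole-dataset read can be sketched as below, assuming (for illustration only) a 10x5 dataset of native integers. H5S_ALL selects the entire file dataspace and a matching memory dataspace, and H5P_DEFAULT requests default transfer properties.

```c
#include <hdf5.h>

/* Sketch: read an entire 10x5 integer dataset into memory. */
herr_t read_whole(hid_t dset_id, int buf[10][5])
{
    return H5Dread(dset_id, H5T_NATIVE_INT, H5S_ALL, H5S_ALL,
                   H5P_DEFAULT, buf);
}
```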
- H5Dwrite
(hid_t dataset_id
,
- hid_t mem_type_id
,
- hid_t mem_space_id
,
- hid_t file_space_id
,
- hid_t xfer_plist_id
,
- const void * buf
- )
-buf
to
- the specified dataset, converting from
- memory datatype and dataspace to file datatype and dataspace.
-H5Dwrite
writes a (partial) dataset, specified by its
- identifier dataset_id
, from the
- application memory buffer buf
into the file.
- Data transfer properties are defined by the argument
- xfer_plist_id
.
- The memory datatype of the (partial) dataset is identified by
- the identifier mem_type_id
.
- The part of the dataset to write is defined by
- mem_space_id
and file_space_id
.
-
- file_space_id
can be the constant H5S_ALL
,
- which indicates that the entire file data space is to be referenced.
-
- mem_space_id
can be the constant H5S_ALL
,
- in which case the memory data space is the same as the file data space
- defined when the dataset was created.
-
- The number of elements in the memory data space must match - the number of elements in the file data space. -
- xfer_plist_id
can be the constant H5P_DEFAULT
,
- in which case the default data transfer properties are used.
-
- Writing to an external dataset will fail if the HDF5 file is - not open for writing. -
- Datatype conversion takes place at the time of a read or write - and is automatic. See the - Data Conversion - section of The Data Type Interface (H5T) in the - HDF5 User's Guide for a discussion of - data conversion, including the range of conversions currently - supported by the HDF5 libraries. -
dataset_id
- mem_type_id
- mem_space_id
- file_space_id
- xfer_plist_id
- buf
- H5Dextend
(hid_t dataset_id
,
- const hsize_t * size
- )
-H5Dextend
verifies that the dataset is at least of size
- size
.
- The dimensionality of size
is the same as that of
- the dataspace of the dataset being changed.
- This function cannot be applied to a dataset with fixed dimensions.
-dataset_id
- size
- H5Dclose
(hid_t dataset_id
- )
-H5Dclose
ends access to a dataset specified by
- dataset_id
and releases resources used by it.
- Further use of the dataset identifier is illegal in calls to
- the dataset API.
-dataset_id
-
-
-The Error interface provides error handling in the form of a stack.
-The FUNC_ENTER()
macro clears the error stack whenever
-an interface function is entered.
-When an error is detected, an entry is pushed onto the stack.
-As the functions unwind, additional entries are pushed onto the stack.
-The API function will return some indication that an error occurred and
-the application can print the error stack.
-
-Certain API functions in the H5E package, such as H5Eprint()
,
-do not clear the error stack. Otherwise, any function which
-does not have an underscore immediately after the package name
-will clear the error stack. For instance, H5Fopen()
-clears the error stack while H5F_open()
does not.
-
-An error stack has a fixed maximum size. -If this size is exceeded then the stack will be truncated and only the -inner-most functions will have entries on the stack. -This is expected to be a rare condition. -
-Each thread has its own error stack, but since -multi-threading has not been added to the library yet, this -package maintains a single error stack. The error stack is -statically allocated to reduce the complexity of handling -errors within the H5E package. - - -
H5Eset_auto
(H5E_auto_t func
,
- void *client_data
- )
-H5Eset_auto
turns on or off automatic printing of
- errors. When turned on (non-null func
pointer),
- any API function which returns an error indication will
- first call func
, passing it client_data
- as an argument.
-
- When the library is first initialized the auto printing function
- is set to H5Eprint()
(cast appropriately) and
- client_data
is the standard error stream pointer,
- stderr
.
-
- Automatic stack traversal is always in the
- H5E_WALK_DOWNWARD
direction.
-
func
- client_data
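For example, automatic error printing can be suppressed around a call that is expected to fail and then restored. This is a sketch only; the function and file names are illustrative.

```c
#include <hdf5.h>

/* Sketch: save the current auto-print settings, turn auto printing
 * off, probe a file, then restore the previous settings. */
void probe_quietly(const char *filename)
{
    H5E_auto_t old_func;
    void *old_data;
    hid_t file_id;

    H5Eget_auto(&old_func, &old_data);   /* save current settings */
    H5Eset_auto(NULL, NULL);             /* turn off auto printing */

    file_id = H5Fopen(filename, H5F_ACC_RDWR, H5P_DEFAULT);
    if (file_id >= 0)
        H5Fclose(file_id);

    H5Eset_auto(old_func, old_data);     /* restore */
}
```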
- H5Eget_auto
(H5E_auto_t * func
,
- void **client_data
- )
-H5Eget_auto
returns the current settings for the
- automatic error stack traversal function, func
,
- and its data, client_data
. Either (or both)
- arguments may be null in which case the value is not returned.
-func
- client_data
- H5Eclear
(void
)
-H5Eclear
clears the error stack for the current thread.
-
- The stack is also cleared whenever an API function is called,
- with certain exceptions (for instance, H5Eprint()
).
-
- H5Eclear
can fail if there are problems initializing
- the library.
-
H5Eprint
(FILE * stream
)
-H5Eprint
prints the error stack on the specified
- stream, stream
.
- Even if the error stack is empty, a one-line message will be printed:
- HDF5-DIAG: Error detected in thread 0.
-
- H5Eprint
is a convenience function for
- H5Ewalk()
with a function that prints error messages.
- Users are encouraged to write their own more specific error handlers.
-
stream
- H5Ewalk
(H5E_direction_t direction
,
- H5E_walk_t func
,
- void * client_data
- )
-H5Ewalk
walks the error stack for the current thread
- and calls the specified function for each error along the way.
-
- direction
determines whether the stack is walked
- from the inside out or the outside in.
- A value of H5E_WALK_UPWARD
means begin with the
- most specific error and end at the API;
- a value of H5E_WALK_DOWNWARD
means to start at the
- API and end at the inner-most function where the error was first
- detected.
-
- func
will be called for each error in the error stack.
- Its arguments will include an index number (beginning at zero
- regardless of stack traversal direction), an error stack entry,
- and the client_data
pointer passed to
- H5Ewalk
.
-
- H5Ewalk
can fail if there are problems initializing
- the library.
-
direction
- func
- client_data
- func
.
- H5Ewalk_cb
(int n
,
- H5E_error_t *err_desc
,
- void *client_data
- )
-H5Ewalk_cb
is a default error stack traversal callback
- function that prints error messages to the specified output stream.
- It is not meant to be called directly but rather to be passed
- as an argument to the H5Ewalk()
function.
- This function is called also by H5Eprint()
.
- Application writers are encouraged to use this function as a
- model for their own error stack walking functions.
-
- n
is a counter for how many times this function
- has been called for this particular traversal of the stack.
- It always begins at zero for the first error on the stack
- (either the top or bottom error, or even both, depending on
- the traversal direction and the size of the stack).
-
- err_desc
is an error description. It contains all the
- information about a particular error.
-
- client_data
is the same pointer that was passed as the
- client_data
argument of H5Ewalk()
.
- It is expected to be a file pointer (or stderr if null).
-
n
- err_desc
- *client_data
- H5Eget_major
(H5E_major_t n
)
-H5Eget_major
returns a
- constant character string that describes the error.
-n
- H5Eget_minor
(H5E_minor_t n
)
-H5Eget_minor
returns a
- constant character string that describes the error.
-n
-
-
H5Fopen
(const char *name
,
- unsigned flags
,
- hid_t access_id
- )
-H5Fopen
opens an existing file and is the primary
- function for accessing existing HDF5 files.
-
- The parameter access_id
is a file access property
- list identifier or H5P_DEFAULT
for the default I/O access
- parameters.
-
- The flags
argument determines whether writing
- to an existing file will be allowed or not.
- The file is opened with read and write permission if
- flags
is set to H5F_ACC_RDWR
.
- All flags may be combined with the bit-wise OR operator (`|')
- to change the behavior of the file open call.
- The more complex behaviors of a file's access are controlled
- through the file-access property list.
-
- Files which are opened more than once return a unique identifier
- for each H5Fopen()
call and can be accessed
- through all file identifiers.
-
- The return value is a file identifier for the open file and it
- should be closed by calling H5Fclose()
when it is
- no longer needed.
-
name
- flags
- H5Fcreate
- parameters list for a list of possible values.
- access_id
- H5Fcreate
(const char *name
,
- unsigned flags
,
- hid_t create_id
,
- hid_t access_id
- )
-H5Fcreate
is the primary function for creating
- HDF5 files.
-
- The flags
parameter determines whether an
- existing file will be overwritten. All newly created files
- are opened for both reading and writing. All flags may be
- combined with the bit-wise OR operator (`|') to change
- the behavior of the H5Fcreate
call.
-
- The more complex behaviors of file creation and access
- are controlled through the file-creation and file-access
- property lists. The value of H5P_DEFAULT
for
- a property list value indicates that the library should use
- the default values for the appropriate property list. Also see
- H5Fpublic.h
for the list of supported flags.
-
name
- flags
- create_id
- access_id
- access_id
.
- Use 0
for default access properties.
- H5Fis_hdf5
(hid_t object_id
- )
-H5Fflush
causes all buffers associated with a
- file to be immediately flushed to disk without removing the
- data from the cache.
-
- object_id
can be any object associated with the file,
- including the file itself, a dataset, a group, an attribute, or
- a named data type.
-
object_id
- H5Fis_hdf5
(const char *name
- )
-H5Fis_hdf5
determines whether a file is in
- the HDF5 format.
-name
- TRUE
or FALSE
if successful.
- Otherwise returns FAIL (-1).
-H5Fget_create_plist
(hid_t file_id
- )
-H5Fget_create_plist
returns a file creation
- property list identifier identifying the creation properties
- used to create this file. This function is useful for
- duplicating properties when creating another file.
- - See "File Creation Properties" in - H5P: Property List Interface - in this reference manual and - "File Creation Properties" - in Files in the - HDF5 User's Guide for - additional information and related functions. -
file_id
- H5Fget_access_plist
(hid_t file_id
)
-H5Fget_access_plist
returns the
- file access property list identifier of the specified file.
- - See "File Access Properties" in - H5P: Property List Interface - in this reference manual and - "File Access Property Lists" - in Files in the - HDF5 User's Guide for - additional information and related functions. -
file_id
- H5Fclose
(hid_t file_id
- )
-H5Fclose
terminates access to an HDF5 file.
- If this is the last file identifier open for a file
- and if access identifiers are still in use,
- this function will fail.
-file_id
-
-HDF Help Desk
- -Last modified: 8 September 1998 - - | -Copyright - |
-
-(NYI = Not yet implemented)
-A group associates names with objects and provides a mechanism -for mapping a name to an object. Since all objects appear in at -least one group (with the possible exception of the root object) -and since objects can have names in more than one group, the set -of all objects in an HDF5 file is a directed graph. The internal -nodes (nodes with out-degree greater than zero) must be groups -while the leaf nodes (nodes with out-degree zero) are either empty -groups or objects of some other type. Exactly one object in every -non-empty file is the root object. The root object always has a -positive in-degree because it is pointed to by the file boot block. - -
-Every file identifier returned by H5Fcreate
or
-H5Fopen
maintains an independent current working group
-stack, the top item of which is the current working group. The
-stack can be manipulated with H5Gset
, H5Gpush
,
-and H5Gpop
. The root object is the current working group
-if the stack is empty.
-
-
-An object name consists of one or more components separated from -one another by slashes. An absolute name begins with a slash and the -object is located by looking for the first component in the root -object, then looking for the second component in the first object, etc., -until the entire name is traversed. A relative name does not begin -with a slash and the traversal begins with the current working group. - -
-The library does not maintain the full absolute name of its current
-working group because (1) cycles in the graph can make the name length
-unbounded and (2) a group does not necessarily have a unique name. A
-more Unix-like hierarchical naming scheme can be implemented on top of
-the directed graph scheme by creating a ".." entry in each group that
-points to its single predecessor; a getcwd
function would
-then be trivial.
-
-
H5Gcreate
(hid_t loc_id
,
- const char *name
,
- size_t size_hint
- )
- H5Gcreate
creates a new group with the specified
- name at the specified location, loc_id
.
- The location is identified by a file or group identifier.
- The name, name
, must not already be taken by some
- other object and all parent groups must already exist.
-
- size_hint
is a hint for the number of bytes to
- reserve to store the names which will be eventually added to
- the new group. Passing a value of zero for size_hint
- is usually adequate since the library is able to dynamically
- resize the name heap, but a correct hint may result in better
- performance.
- If a non-positive value is supplied for size_hint,
- then a default size is chosen.
-
- The return value is a group identifier for the open group.
- This group identifier should be closed by calling
- H5Gclose()
when it is no longer needed.
-
loc_id
- name
- size_hint
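Group creation can be sketched as follows. The group names are illustrative; a size_hint of zero lets the library size the name heap dynamically, and names without a leading slash are interpreted relative to the given location.

```c
#include <hdf5.h>

/* Sketch: create a group and a nested subgroup, then close both. */
herr_t make_groups(hid_t file_id)
{
    hid_t g1, g2;

    g1 = H5Gcreate(file_id, "/Data", 0);
    if (g1 < 0) return -1;

    g2 = H5Gcreate(g1, "Raw", 0);   /* relative to /Data */
    if (g2 < 0) { H5Gclose(g1); return -1; }

    H5Gclose(g2);
    return H5Gclose(g1);
}
```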
- H5Gopen
(hid_t loc_id
,
- const char *name
- )
- H5Gopen
opens an existing group with the specified name at
- the specified location, loc_id
.
- The location is identified by a file or
- group identifier, and returns a group identifier for the group.
- The obtained group identifier should be released by calling
- H5Gclose()
when it is no longer needed.
- loc_id
- name
- H5Gset
(hid_t loc_id
,
- const char *name
- )
- H5Gset
sets the group with the specified name
- to be the current working group for the file which contains it.
- This function sets the current working group by modifying the
- top element of the current working group stack or, if the
- stack is empty, by pushing a new element onto the stack.
- The initial current working group is the root group.
-
- loc_id
can be a file identifier or a group identifier.
-
- name
is an absolute or relative name and is resolved as follows. Each file identifier
- has a current working group, initially the root group of the
- file. Relative names do not begin with a slash and are relative
- to the specified group or to the current working group.
- Absolute names begin with a slash and are relative to the file's
- root group. For instance, the name /Foo/Bar/Baz
is
- resolved by first looking up Foo
in the root group;
- the name Foo/Bar/Baz
is resolved by first looking
- up the name Foo
in the current working group.
-
- Each file identifier maintains its own notion of the current
- working group. If loc_id
is a group identifier, the
- file identifier is derived from the group identifier.
-
- If a single file is opened with multiple calls to H5Fopen()
,
- which would return multiple file identifiers, then each
- identifier's current working group can be set independently
- of the other file identifiers for that file.
-
loc_id
- name
- H5Gpush
(hid_t loc_id
,
- const char *name
- )
- H5Gpush
pushes a new group
- onto the stack, thus setting a new current working group.
- loc_id
- name
- H5Gpop
(hid_t loc_id
)
- H5Gpop
restores the previous current working group by
- popping an element from the current working group stack.
- An empty stack implies that the current working group is the root
- object. Attempting to pop an empty stack results in failure.
-
- Each file identifier maintains its own notion of the current
- working group. That is, if a single file is opened with
- multiple calls to H5Fopen()
, which returns multiple file
- handles, then each identifier's current working group can be
- set independently of the other file identifiers for that file.
-
- If loc_id
is a group identifier, it is used only to determine the
- file identifier for the stack from which to pop the top entry.
-
loc_id
- H5Gclose
(hid_t group_id
)
- H5Gclose
releases resources used by a group which was
- opened by H5Gcreate()
or H5Gopen()
.
- After closing a group, the group_id
cannot be used again.
- - Failure to release a group with this call will result in resource leaks. -
group_id
- H5Glink
(hid_t loc_id
,
- H5G_link_t link_type
,
- const char *current_name
,
- const char *new_name
- )
- new_name
- to current_name
.
- H5Glink
creates a new name for an object that has some current
- name, possibly one of many names it currently has.
-
- If link_type
is H5G_LINK_HARD
, then
- current_name
must name an existing object and both
- names are interpreted relative to loc_id
, which is
- either a file identifier or a group identifier.
-
- If link_type
is H5G_LINK_SOFT
, then
- current_name
can be anything and is interpreted at
- lookup time relative to the group which contains the final
- component of new_name
. For instance, if
- current_name
is ./foo
,
- new_name
is ./x/y/bar
, and a request
- is made for ./x/y/bar
, then the actual object looked
- up is ./x/y/./foo
.
-
loc_id
- link_type
- H5G_LINK_HARD
and H5G_LINK_SOFT
.
- current_name
- new_name
- H5Gunlink
(hid_t loc_id
,
- const char *name
- )
- name
from the group graph and
- decrements the link count for the object to which name
points
- H5Gunlink
removes an association between a name and an object.
- Object headers keep track of how many hard links refer to the object;
- when the hard link count reaches zero, the object can be removed
- from the file. Objects which are open are not removed until all
- identifiers to the object are closed.
- - If the link count reaches zero, all file-space associated with - the object will be reclaimed. If the object is open, the - reclamation of the file space is delayed until all handles to the - object are closed. -
loc_id
- name
- H5Giterate
(hid_t loc_id
,
- const char *name
,
- int *idx
,
- H5G_operator_t operator
,
- void *operator_data
- )
- H5Giterate
iterates over the members of
- name
in the file or group specified with
- loc_id
.
- For each object in the group, the operator_data
- and some additional information, specified below, are
- passed to the operator
function.
- The iteration begins with the idx
object in the
- group and the next element to be processed by the operator is
- returned in idx
. If idx
- is NULL, then the iterator starts at the first group member;
- since no stopping point is returned in this case, the iterator
- cannot be restarted if one of the calls to its operator returns
- non-zero.
-
- The prototype for H5G_operator_t
is:
-
typedef
herr_t (*H5G_operator_t
)(hid_t group_id
,
- const char *member_name
, void *operator_data/*in,out*/
);
- group_id
, the name of the current
- object within the group, member_name
, and the
- pointer to the operator data passed in to H5Giterate
,
- operator_data
.
- - The return values from an operator are: -
loc_id
- *name
- *idx
- operator
- *operator_data
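As a sketch of group iteration, the following operator (names are illustrative) matches the H5G_operator_t prototype above and lists the members of the root group; returning zero continues the iteration.

```c
#include <hdf5.h>
#include <stdio.h>

/* Operator matching the H5G_operator_t prototype; returning 0
 * asks H5Giterate to continue with the next group member. */
static herr_t print_member(hid_t group_id, const char *member_name,
                           void *operator_data)
{
    (void)group_id; (void)operator_data;
    printf("member: %s\n", member_name);
    return 0;
}

void list_members(hid_t file_id)
{
    int idx = 0;    /* begin with the first group member */
    H5Giterate(file_id, "/", &idx, print_member, NULL);
}
```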
- H5Gmove
(hid_t loc_id
,
- const char *src
,
- const char *dst
- )
- H5Gmove
renames an object within an HDF5 file.
- The original name, src
, is unlinked from the
- group graph and the new name, dst
, is inserted
- as an atomic operation. Both names are interpreted relative
- to loc_id
, which is either a file or a group
- identifier.
- loc_id
- *src
- *dst
- H5Gget_objinfo
(hid_t loc_id
,
- const char *name
,
- hbool_t follow_link
,
- H5G_stat_t *statbuf
- )
- H5Gget_objinfo
returns information about the
- specified object through the statbuf
argument.
- loc_id
(a file, group, or dataset identifier) and
- name
together determine the object.
- If the object is a symbolic link and follow_link
is
- zero (0
), then the information returned is that for the link itself;
- otherwise the link is followed and information is returned about
- the object to which the link points.
- If follow_link
is non-zero but the final symbolic link
- is dangling (does not point to anything), then an error is returned.
- The statbuf
fields are undefined for an error.
- The existence of an object can be tested by calling this function
- with a null statbuf
.
-
- H5Gget_objinfo()
fills in the following data structure:
-
- typedef struct H5G_stat_t { - unsigned long fileno; - haddr_t objno; - unsigned nlink; - H5G_obj_t type; - time_t mtime; - size_t linklen; - } H5G_stat_t -- The
fileno
and objno
fields contain
- values which uniquely identify an object among those
- HDF5 files which are open: if both values are the same
- between two objects, then the two objects are the same
- (provided both files are still open).
- The nlink
field is the number of hard links to
- the object or zero when information is being returned about a
- symbolic link (symbolic links do not have hard links but
- all other objects always have at least one).
- The type
field contains the type of the object,
- one of H5G_GROUP
, H5G_DATASET
,
- or H5G_LINK
.
- The mtime
field contains the modification time.
- If information is being returned about a symbolic link then
- linklen
will be the length of the link value
- (the name of the pointed-to object with the null terminator);
- otherwise linklen
will be zero.
- Other fields may be added to this structure in the future.
- mtime
value of 0 (zero).
- loc_id
- *name
- follow_link
- *statbuf
- statbuf
- (if non-null) initialized.
- Otherwise returns FAIL (-1).
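Because the fileno and objno fields uniquely identify an object among the open HDF5 files, they can be used to test whether two names refer to the same object. The sketch below follows the H5G_stat_t structure shown above; the function name is illustrative.

```c
#include <hdf5.h>

/* Sketch: report whether two names resolve to the same object
 * (1 = same, 0 = different, -1 = error), following links. */
int same_object(hid_t file_id, const char *name1, const char *name2)
{
    H5G_stat_t s1, s2;

    if (H5Gget_objinfo(file_id, name1, 1, &s1) < 0) return -1;
    if (H5Gget_objinfo(file_id, name2, 1, &s2) < 0) return -1;
    return s1.fileno == s2.fileno && s1.objno == s2.objno;
}
```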
-H5Gget_linkval
(hid_t loc_id
,
- const char *name
,
- size_t size
,
- char *value
- )
- H5Gget_linkval
returns size
- characters of the link value through the value
- argument if loc_id
(a file or group identifier)
- and name
specify a symbolic link.
- If size
is smaller than the link value, then
- value
will not be null terminated.
-
- This function fails if the specified object is not a symbolic link.
- The presence of a symbolic link can be tested by passing zero for
- size
and NULL for value
.
-
- Use H5Gget_objinfo()
to get the size of a link value.
-
loc_id
- name
- size
- value
- to be returned.
- value
- value
,
- if successful.
- Otherwise returns FAIL (-1).
-H5Gset_comment
(hid_t loc_id
,
- const char *name
,
- const char *comment
- )
- H5Gset_comment
sets the comment for the
- object name
to comment
.
- Any previously existing comment is overwritten.
-
- If comment
is the empty string or a
- null pointer, the comment message is removed from the object.
-
- Comments should be relatively short, null-terminated, - ASCII strings. -
- Comments can be attached to any object that has an object header, - e.g., data sets, groups, named data types, and data spaces, but - not symbolic links. -
loc_id
- name
- comment
- H5Gget_comment
(hid_t loc_id
,
- const char *name
,
- size_t bufsize
,
- char *comment
- )
- H5Gget_comment
retrieves the comment for the
- object name
. The comment is returned in the buffer
- comment
.
-
- At most bufsize
characters, including a null
- terminator, are copied. The result is not null terminated
- if the comment is longer than the supplied buffer.
-
- If an object does not have a comment, the empty string - is returned. -
loc_id
- name
- bufsize
- comment
.
- comment
- bufsize
.
- Otherwise returns FAIL (-1).
-
-
- General Property List Operations
- File Creation Properties
- File Access Properties
- Dataset Creation Properties
- Dataset Memory and Transfer Properties
-
-|| Available only in the parallel HDF5 library.
H5Pcreate
(H5P_class_t type
- )
- H5Pcreate
creates a new property as an instance of some
- property list class. The new property list is initialized
- with default values for the specified class. The classes are:
- H5P_FILE_CREATE
- H5P_FILE_ACCESS
- H5P_DATASET_CREATE
- H5P_DATASET_XFER
- type
- plist
) if successful;
- otherwise Fail (-1).
-H5Pclose
(hid_t plist
- )
- H5Pclose
terminates access to a property list.
- All property lists should be closed when the application is
- finished accessing them.
- This frees resources used by the property list.
- plist
- H5Pget_class
(hid_t plist
- )
- H5Pget_class
returns the property list class for the
- property list identied by the plist
parameter.
- Valid property list classes are defined in the description of
- H5Pcreate()
.
- plist
- H5Pcopy
(hid_t plist
- )
- H5Pcopy
copies an existing property list to create
- a new property list.
- The new property list has the same properties and values
- as the original property list.
- plist
- H5Pget_version
(hid_t plist
,
- int * boot
,
- int * freelist
,
- int * stab
,
- int * shhdr
- )
- H5Pget_version
retrieves the version information of various objects
- for a file creation property list. Any pointer parameters which are
- passed as NULL are not queried.
- plist
- boot
- freelist
- stab
- shhdr
- H5Pset_userblock
(hid_t plist
,
- hsize_t size
- )
- H5Pset_userblock
sets the user block size of a
- file creation property list.
- The default user block size is 0; it may be set to any
- power of 2 equal to 512 or greater (512, 1024, 2048, etc.).
- plist
- size
- H5Pget_userblock
(hid_t plist
,
- hsize_t * size
- )
- H5Pget_userblock
retrieves the size of a user block
- in a file creation property list.
- plist
- size
- H5Pset_sizes
(hid_t plist
,
- size_t sizeof_addr
,
- size_t sizeof_size
- )
- H5Pset_sizes
sets the byte size of the offsets and lengths used to
- address objects in an HDF5 file. This function is only valid for
- file creation property lists. Passing in a value of 0 for one of the
- sizeof parameters retains the current value. The default value
- for both values is 4 bytes. Valid values currently are 2, 4, 8 and
- 16.
- plist
- sizeof_addr
- sizeof_size
- H5Pget_sizes
(hid_t plist
,
- size_t * sizeof_addr
,
- size_t * sizeof_size
- )
- H5Pget_sizes
retrieves the size of the offsets
- and lengths used in an HDF5 file.
- This function is only valid for file creation property lists.
- plist
- sizeof_addr
- sizeof_size
- H5Pset_mpi
(hid_t plist
,
- MPI_Comm comm
,
- MPI_Info info
- )
- H5Pset_mpi
stores the access mode for MPI-IO calls and the user-supplied
- communicator and Info object in the access property list, which can then
- be used to open a file. This function is available only in the
- parallel HDF5 library and is not a collective function.
- plist
- comm
- comm
. Any modification to comm
after
- this function call returns may have an undetermined effect
- on the access property list. Users should call this function
- again to set up the property list.
- info
- info
. Any modification to info
after
- this function call returns may have an undetermined effect
- on the access property list. Users should call this function
- again to set up the property list.
- H5Pget_mpi
(hid_t plist
,
- MPI_Comm *comm
,
- MPI_Info *info
- )
- H5Pget_mpi
retrieves the communicator and info object
- that have been set by H5Pset_mpi.
- This function is available only in the parallel HDF5 library
- and is not a collective function.
- plist
- comm
- info
- H5Pset_xfer
(hid_t plist
,
- H5D_transfer_t data_xfer_mode
- )
- H5Pset_xfer
sets the transfer mode of the dataset transfer property list.
- The list can then be used to control the I/O transfer mode
- during dataset accesses. This function is available only
- in the parallel HDF5 library and is not a collective function.
- - Valid data transfer modes are: -
plist
- data_xfer_mode
- H5Pget_xfer
(hid_t plist
,
- H5D_transfer_t * data_xfer_mode
- )
- H5Pget_xfer
retrieves the transfer mode from the
- dataset transfer property list.
- This function is available only in the parallel HDF5 library
- and is not a collective function.
- plist
- data_xfer_mode
- H5Pset_sym_k
(hid_t plist
,
- int ik
,
- int lk
- )
- H5Pset_sym_k
sets the size of parameters used to
- control the symbol table nodes. This function is only valid
- for file creation property lists. Passing in a value of 0 for
- one of the parameters retains the current value.
-
- ik
is one half the rank of a tree that stores a symbol
- table for a group. Internal nodes of the symbol table are on
- average 75% full. That is, the average rank of the tree is
- 1.5 times the value of ik
.
-
- lk
is one half of the number of symbols that can
- be stored in a symbol table node. A symbol table node is the
- leaf of a symbol table tree which is used to store a group.
- When symbols are inserted randomly into a group, the group's
- symbol table nodes are 75% full on average. That is, they
- contain 1.5 times the number of symbols specified by
- lk
.
-
plist
- ik
- lk
- H5Pget_sym_k
(hid_t plist
,
- int * ik
,
- int * lk
- )
- H5Pget_sym_k
retrieves the size of the
- symbol table B-tree 1/2 rank and the symbol table leaf
- node 1/2 size. This function is only valid for file creation
- property lists. If a parameter is set to NULL, that
- parameter is not retrieved. See the description for
- H5Pset_sym_k for more
- information.
- plist
- ik
- lk
- H5Pset_istore_k
(hid_t plist
,
- int ik
- )
- H5Pset_istore_k
sets the size of the parameter
- used to control the B-trees for indexing chunked datasets.
- This function is only valid for file creation property lists.
- Passing in a value of 0 for one of the parameters retains
- the current value.
-
- ik
is one half the rank of a tree that stores
- chunked raw data. On average, such a tree will be 75% full,
- or have an average rank of 1.5 times the value of
- ik
.
-
plist
- ik
- H5Pget_istore_k
(hid_t plist
,
- int * ik
- )
- H5Pget_istore_k
queries the 1/2 rank of
- an indexed storage B-tree.
- The argument ik
may be the null pointer (NULL).
- This function is only valid for file creation property lists.
- - See H5Pset_istore_k for details. -
plist
- ik
- H5Pset_layout
(hid_t plist
,
- H5D_layout_t layout
- )
- H5Pset_layout
sets the type of storage used to store the
- raw data for a dataset.
- This function is only valid for dataset creation property lists.
- Valid parameters for layout
are:
- plist
- layout
- H5Pget_layout
(hid_t plist
)
- H5Pget_layout
returns the layout of the raw data for
- a dataset. This function is only valid for dataset creation
- property lists. Valid types for layout
are:
- plist
- H5Pset_chunk
(hid_t plist
,
- int ndims
,
- const hsize_t * dim
- )
- H5Pset_chunk
sets the size of the chunks used to
- store a chunked layout dataset. This function is only valid
- for dataset creation property lists.
- The ndims
parameter currently must be the same size
- as the rank of the dataset. The values of the dim
- array define the size of the chunks to store the dataset's raw data.
- As a side-effect, the layout of the dataset is changed to
- H5D_CHUNKED
, if it is not already.
- plist
- ndims
- dim
- H5Pget_chunk
(hid_t plist
,
- int max_ndims
,
- hsize_t * dims
- )
- H5Pget_chunk
retrieves the size of chunks for the
- raw data of a chunked layout dataset.
- This function is only valid for dataset creation property lists.
- At most, max_ndims
elements of dims
- will be initialized.
- plist
- max_ndims
- dims
array.
- dims
- H5Pset_alignment
(hid_t plist
,
- hsize_t threshold
,
- hsize_t alignment
- )
- H5Pset_alignment
sets the alignment properties
- of a file access property list
- so that any file object >= THRESHOLD bytes will be aligned on
- an address which is a multiple of ALIGNMENT. The addresses
- are relative to the end of the user block; the alignment is
- calculated by subtracting the user block size from the
- absolute file address and then adjusting the address to be a
- multiple of ALIGNMENT.
- - Default values for THRESHOLD and ALIGNMENT are one, implying - no alignment. Generally the default values will result in - the best performance for single-process access to the file. - For MPI-IO and other parallel systems, choose an alignment - which is a multiple of the disk block size. -
plist
- threshold
- alignment
- H5Pget_alignment
(hid_t plist
,
- hsize_t *threshold
,
- hsize_t *alignment
- )
- H5Pget_alignment
retrieves the current settings for
- alignment properties from a file access property list.
- The threshold
and/or alignment
pointers
- may be null pointers (NULL).
- plist
- *threshold
- *alignment
- H5Pset_external
(hid_t plist
,
- const char *name
,
- off_t offset
,
- hsize_t size
- )
- H5Pset_external
adds an external file to the
- list of external files.
- - If a dataset is split across multiple files then the files - should be defined in order. The total size of the dataset is - the sum of the SIZE arguments for all the external files. If - the total size is larger than the size of a dataset then the - dataset can be extended (provided the data space also allows - the extending). -
plist
- *name
- offset
- size
- H5Pget_external_count
(hid_t plist
- )
- H5Pget_external_count
returns the number of external files
- for the specified dataset.
- plist
- H5Pget_external
(hid_t plist
,
- int idx
,
- size_t name_size
,
- char *name
,
- off_t *offset
,
- hsize_t *size
- )
- H5Pget_external
returns information about an external
- file. The external file is specified by its index, idx
,
- which is a number from zero to N-1, where N is the value
- returned by H5Pget_external_count()
.
- At most name_size
characters are copied into the
- name
array. If the external file name is
- longer than name_size
counting the null terminator, the
- return value is not null terminated (similar to strncpy()
).
-
- If name_size
is zero or name
is the
- null pointer, the external file name is not returned.
- If offset
or size
are null pointers
- then the corresponding information is not returned.
-
plist
- idx
- name_size
- name
array.
- *name
- *offset
- *size
- H5Pset_filter
(hid_t plist
,
- H5Z_filter_t filter
,
- unsigned int flags
,
- size_t cd_nelmts
,
- const unsigned int cd_values[]
- )
- H5Pset_filter
adds the specified
- filter
and corresponding properties to the
- end of an output filter pipeline.
- If plist
is a dataset creation property list,
- the filter is added to the permanent filter pipeline;
- if plist
is a dataset transfer property list,
- the filter is added to the transient filter pipeline.
-
- The array cd_values
contains
- cd_nelmts
integers which are auxiliary data
- for the filter. The integer values will be stored in the
- dataset object header as part of the filter information.
-
- The flags
argument is a bit vector with
- the following fields specifying certain general properties
- of the filter:
-
H5Z_FLAG_OPTIONAL |
- - | If this bit is set then the filter is
- optional. If the filter fails (see below) during an
- H5Dwrite() operation then the filter is
- just excluded from the pipeline for the chunk for which
- it failed; the filter will not participate in the
- pipeline during an H5Dread() of the chunk.
- This is commonly used for compression filters: if the
- compression result would be larger than the input then
- the compression filter returns failure and the
- uncompressed data is stored in the file. If this bit is
- clear and a filter fails then H5Dwrite()
- or H5Dread() also fails. |
-
plist_id
must be a dataset creation
- property list.
- plist
- filter
- flags
- cd_nelmts
- cd_values
- cd_values[]
- H5Pget_nfilters
(hid_t plist
)
- H5Pget_nfilters
returns the number of filters
- defined in the filter pipeline associated with the property list
- plist
.
- - In each pipeline, the filters are numbered from - 0 through N-1, where N is the value returned - by this function. During output to the file, the filters are - applied in increasing order; during input from the file, they - are applied in decreasing order. -
- H5Pget_nfilters
returns the number of filters
- in the pipeline, including zero (0
) if there
- are none.
-
plist_id
must be a dataset creation
- property list.
- plist
- H5Pget_filter
(hid_t plist
,
- int filter_number
,
- unsigned int *flags
,
- size_t *cd_nelmts
,
- unsigned int *cd_values
,
- size_t namelen
,
- char name[]
- )
-
- H5Pget_filter
returns information about a
- filter, specified by its filter number, in a filter pipeline,
- specified by the property list with which it is associated.
-
- If plist
is a dataset creation property list,
- the pipeline is a permanent filter pipeline;
- if plist
is a dataset transfer property list,
- the pipeline is a transient filter pipeline.
-
- On input, cd_nelmts
indicates the number of entries
- in the cd_values
array, as allocated by the caller;
- on return, cd_nelmts
contains the number of values
- defined by the filter.
-
- filter_number
is a value between zero and
- N-1, as described in
- H5Pget_nfilters()
.
- The function will return FAIL (-1) if the filter number is out
- of range.
-
- If name
is a pointer to an array of at least
- namelen
bytes, the filter name will be copied
- into that array. The name will be null terminated if
- namelen
is large enough. The filter name returned
- will be the name appearing in the file, the name registered
- for the filter, or an empty string.
-
- The structure of the flags
argument is discussed
- in H5Pset_filter()
.
-
plist
must be a dataset creation property
- list.
- plist
- filter_number
- flags
- cd_nelmts
- cd_values
- cd_values
- namelen
- name
.
- name[]
- H5Pget_driver
(hid_t plist
- )
- H5Pget_driver
returns the identifier of the
- low-level file driver. Valid identifiers are:
- plist
- H5Pset_stdio
(hid_t plist
)
- H5Pset_stdio
sets the low level file driver to use
- the functions declared in the stdio.h file: fopen(), fseek()
- or fseek64(), fread(), fwrite(), and fclose().
- plist
- H5Pget_stdio
(hid_t plist
)
- H5Pget_stdio
checks to determine whether the
- file access property list is set to the stdio driver.
- In the future, additional arguments may be added to this
- function to match those added to H5Pset_stdio().
- plist
- H5Pset_sec2
(hid_t plist
- )
- H5Pset_sec2
sets the low-level file driver to use
- the functions declared
- in the unistd.h file: open(), lseek() or lseek64(), read(),
- write(), and close().
- plist
- H5Pget_sec2
(hid_t plist
)
- H5Pget_sec2
checks to determine whether the
- file access property list is set to the sec2 driver.
- In the future, additional arguments may be
- added to this function to match those added to H5Pset_sec2().
- plist
- H5Pset_core
(hid_t plist
,
- size_t increment
- )
- H5Pset_core
sets the low-level file driver to use
- malloc()
and free()
.
- This driver is restricted to temporary files which are not
- larger than the amount of virtual memory available.
- The increment
argument determines the file block size
- and memory will be allocated in multiples of INCREMENT bytes.
- A liberal increment
results in fewer calls to
- realloc()
and probably less memory fragmentation.
- plist
- increment
- H5Pget_core
(hid_t plist
,
- size_t *increment
- )
- H5Pget_core
checks to determine whether the
- file access property list is set to the core driver.
- On success, the block size is returned through the
- increment
if it is not the null pointer.
- In the future, additional arguments may be added to this
- function to match those added to H5Pset_core()
.
- plist
- *increment
- H5Pset_split
(hid_t plist
,
- const char *meta_ext
,
- hid_t meta_plist
,
- const char *raw_ext
,
- hid_t raw_plist
- )
- H5Pset_split
sets the low-level driver to
- split meta data from raw data, storing meta data in one file and
- raw data in another file. The meta file will have a name
- which is formed by adding meta_extension (recommended
- default value: .meta
) to the end of the base name
- and will be accessed according to the meta_properties.
- The raw file will have a name which is formed by appending
- raw_extension (recommended default value:
- .raw
) to the base name and will be accessed according
- to the raw_properties.
- Additional parameters may be added to this function in the future.
- plist
- *meta_ext
- .meta
.
- meta_plist
- *raw_ext
- .raw
.
- raw_plist
- H5Pget_split
(hid_t plist
,
- size_t meta_ext_size
,
- char *meta_ext
,
- hid_t *meta_properties
,
- size_t raw_ext_size
,
- char *raw_ext
,
- hid_t *raw_properties
- )
- H5Pget_split
checks to determine whether the file
- access property list is set to the split driver.
- On successful return,
- meta_properties
and raw_properties
will
- point to copies of the meta and raw access property lists
- which should be closed by calling H5Pclose()
when
- the application is finished with them, but if the meta and/or
- raw file has no property list then a negative value is
- returned for that property list identifier. Also, if
- meta_extension
and/or raw_extension
are
- non-null pointers, at most meta_ext_size
or
- raw_ext_size
characters of the meta or raw file name
- extension will be copied to the specified buffer. If the
- actual name is longer than what was requested then the result
- will not be null terminated (similar to
- strncpy()
). In the future, additional arguments
- may be added to this function to match those added to
- H5Pset_split()
.
- plist
- meta_ext_size
- meta_ext
buffer.
- *meta_ext
- *meta_properties
- raw_ext_size
- raw_ext
buffer.
- *raw_ext
- *raw_properties
- H5Pset_family
(hid_t plist
,
- hsize_t memb_size
,
- hid_t memb_plist
- )
- H5Pset_family
sets the file access properties
- to use the family
driver; any previously defined
- driver properties are erased from the property list.
- See File Families
- in the HDF5 User's Guide for a discussion
- of file families.
-
- Each member of the file family will use memb_plist
- as its file access property list.
-
- The memb_size
argument gives the logical size
- in bytes of each family member; the actual size could be
- smaller depending on whether the file contains holes.
- The member size is only used when creating a new file or
- truncating an existing file; otherwise the member size comes
- from the size of the first member of the family being
- opened.
-
- Note: If the size of the off_t
type is
- four bytes, then the maximum family member size is usually
- 2^31-1 because the byte at offset 2,147,483,647 is generally
- inaccessible.
-
- Additional parameters may be added to this function in the - future. -
plist
- memb_size
- memb_plist
- H5Pget_family
(hid_t plist
,
- hsize_t *memb_size
,
- hid_t *memb_plist
- )
- H5Pget_family
checks to determine whether the
- file access property list is set to the family driver.
- On successful return,
- memb_plist will point to a copy of the member
- access property list which should be closed by calling
- H5Pclose()
when the application is finished with
- it. If memb_size is non-null then it will contain
- the logical size in bytes of each family member. In the
- future, additional arguments may be added to this function to
- match those added to H5Pset_family()
.
- plist
- *memb_size
- *memb_plist
- H5Pset_cache
(hid_t plist
,
- int mdc_nelmts
,
- size_t rdcc_nbytes
,
- double rdcc_w0
- )
- H5Pset_cache
sets the number of elements (objects)
- in the meta data cache and the total number of bytes in the
- raw data chunk cache.
-
- Sets or queries the meta data cache and raw data chunk cache
- parameters. The plist is a file access property
- list. The number of elements (objects) in the meta data cache
- is mdc_nelmts. The total size of the raw data chunk
- cache and the preemption policy are rdcc_nbytes and
- rdcc_w0. For H5Pget_cache() any (or all) of
any (or all) of
- the pointer arguments may be null pointers.
-
- The RDCC_W0 value should be between 0 and 1 inclusive and - indicates how much chunks that have been fully read are - favored for preemption. A value of zero means fully read - chunks are treated no differently than other chunks (the - preemption is strictly LRU) while a value of one means fully - read chunks are always preempted before other chunks. -
plist
- mdc_nelmts
- rdcc_nbytes
- rdcc_w0
- H5Pget_cache
(hid_t plist
,
- int *mdc_nelmts
,
- size_t *rdcc_nbytes
,
- double *rdcc_w0
- )
- plist
- *mdc_nelmts
- *rdcc_nbytes
- *rdcc_w0
- H5Pset_buffer
(hid_t plist
,
- size_t size
,
- void *tconv
,
- void *bkg
- )
- H5Pset_buffer
- sets the maximum size
- for the type conversion buffer and background buffer and
- optionally supply pointers to application-allocated buffers.
- If the buffer size is smaller than the entire amount of data
- being transferred between application and file, and a type
- conversion buffer or background buffer is required then
- strip mining will be used. However, certain restrictions
- apply for the size of buffer which can be used for strip
- mining. For instance, when strip mining a 100x200x300
- hyperslab of a simple data space the buffer must be large
- enough to hold a 1x200x300 slab.
-
- If tconv
and/or bkg
are null pointers,
- then buffers will be allocated and freed during the data transfer.
-
- The default value for the maximum buffer is 1 Mb. -
plist
- size
- tconv
- bkg
- H5Pget_buffer
(hid_t plist
,
- void **tconv
,
- void **bkg
- )
- H5Pget_buffer
reads values previously set
- with H5Pset_buffer().
- plist
- **tconv
- **bkg
- H5Pset_preserve
(hid_t plist
,
- hbool_t status
- )
- H5Pset_preserve
sets the
- dataset transfer property list status to TRUE or FALSE.
- - When reading or writing compound data types and the - destination is partially initialized and the read/write is - intended to initialize the other members, one must set this - property to TRUE. Otherwise the I/O pipeline treats the - destination datapoints as completely uninitialized. -
plist
- status
- H5Pget_preserve
(hid_t plist
)
- H5Pget_preserve
checks the status of the
- dataset transfer property list.
- plist
- H5Pset_deflate
(hid_t plist
,
- int level
- )
- H5Pset_deflate
sets the compression method for a
- dataset creation property list to H5D_COMPRESS_DEFLATE
- and the compression level to level, which should
- be a value from zero to nine, inclusive.
- Lower compression levels are faster but result in less compression.
- This is the same algorithm as used by the GNU gzip program.
- - Parameters:
-
- - hid_t
plist
- - IN: Identifier for the dataset creation property list.
-
- int
level
- - IN: Compression level.
-
- - Returns:
-
- Returns SUCCEED (0) if successful;
- otherwise FAIL (-1).
-
-
-The H5R Interface is strictly experimental at this time;
-the interface may change dramatically or support for ragged arrays
-may be unavailable in future releases. As a result, future releases
-may be unable to retrieve data stored with this interface.
-Do not create any archives using this interface!
-These functions enable the user to store and retrieve data in ragged arrays.
H5Rcreate
(
,
-
,
-
- )
-H5Rcreate
-
-
-
- H5Ropen
(
,
-
,
-
- )
-H5Ropen
-
-
-
- H5Rclose
(
,
-
,
-
- )
-H5Rclose
-
-
-
- H5Rwrite
(
,
-
,
-
- )
-H5Rwrite
- - Datatype conversion takes place at the time of a read or write - and is automatic. See the - Data Conversion - section of The Data Type Interface (H5T) in the - HDF5 User's Guide for a discussion of - data conversion, including the range of conversions currently - supported by the HDF5 libraries. -
-
-
- H5Rread
(
,
-
,
-
- )
-H5Rread
- - Datatype conversion takes place at the time of a read or write - and is automatic. See the - Data Conversion - section of The Data Type Interface (H5T) in the - HDF5 User's Guide for a discussion of - data conversion, including the range of conversions currently - supported by the HDF5 libraries. -
-
-
- - - | - - | - - |
-The following H5S functions are included in the HDF5 specification, -but have not yet been implemented. They are described in -The Dataspace Interface (H5S) section -of the HDF5 User's Guide. -
H5Screate
(H5S_class_t type
)
-H5Screate
creates a new dataspace of a particular
- type
.
- The types currently supported are H5S_SCALAR
,
- H5S_SIMPLE
, and H5S_NONE
;
- others are planned to be added later. The H5S_NONE
- dataspace can only hold a selection, not an extent.
-type
- H5Screate_simple
(int rank
,
- const hsize_t * dims
,
- const hsize_t * maxdims
- )
-H5Screate_simple
creates a new simple data space
- and opens it for access. The rank
is the number of
- dimensions used in the dataspace.
- The dims
argument is the size
- of the simple dataset and the maxdims
argument is
- the upper limit on the size of the dataset. maxdims
- may be the null pointer in which case the upper limit is the
- same as dims
. If an element of maxdims
- is zero then the corresponding dimension is unlimited, otherwise
- no element of maxdims
should be smaller than the
- corresponding element of dims
. The dataspace
- identifier returned from this function should be released with
- H5Sclose
or resource leaks will occur.
-rank
- dims
- maxdims
- H5Scopy
(hid_t space_id
- )
-H5Scopy
creates a new dataspace which is an exact
- copy of the dataspace identified by space_id
.
- The dataspace identifier returned from this function should be
- released with H5Sclose
or resource leaks will occur.
-space_id
- H5Sselect_elements
(hid_t space_id
,
- dh5s_selopt_t op
,
- const size_t num_elements
,
- const hssize_t *coord
[ ]
- )
-H5Sselect_elements
selects array elements to be
- included in the selection for the space_id
- dataspace. The number of elements selected must be set with
- the num_elements
. The coord
array
- is a two-dimensional array of size dataspace rank
- by num_elements
(i.e., a list of coordinates in
- the array). The order of the element coordinates in the
- coord
array also specifies the order in which
- the array elements are iterated through when I/O is performed.
- Duplicate coordinate locations are not checked for.
-
- The selection operator op
determines how the
- new selection is to be combined with the previously existing
- selection for the dataspace. Currently, only the
- H5S_SELECT_SET
operator is supported, which
- replaces the existing selection with the parameters from
- this call. When operators other than H5S_SELECT_SET
- are used to combine a new selection with an existing selection,
- the selection ordering is reset to 'C' array ordering.
-
space_id
- op
- num_elements
- coord
[ ]
- H5Sselect_all
(hid_t space_id
)
-H5Sselect_all
selects the entire extent
- of the dataspace space_id
.
-
- More specifically, H5Sselect_all
selects
- the special H5S_SELECT_ALL region for the dataspace
- space_id
. H5S_SELECT_ALL selects the
- entire dataspace for any dataspace it is applied to.
-
space_id
- H5Sselect_none
(hid_t space_id
)
-H5Sselect_none
resets the selection region
- for the dataspace space_id
to include no elements.
-space_id
- H5Sselect_valid
(hid_t space_id
)
-H5Sselect_valid
verifies that the selection
- for the dataspace space_id
is within the extent
- of the dataspace if the current offset for the dataspace is used.
-space_id
- H5Sget_simple_extent_npoints
(hid_t space_id
)
-H5Sget_simple_extent_npoints
determines the number of elements
- in a dataspace. For example, a simple 3-dimensional dataspace
- with dimensions 2, 3, and 4 would have 24 elements.
-space_id
- H5Sget_select_npoints
(hid_t space_id
)
-H5Sget_select_npoints
determines the number of elements
- in the current selection of a dataspace.
-space_id
- H5Sget_simple_extent_ndims
(hid_t space_id
)
-H5Sget_simple_extent_ndims
determines the dimensionality (or rank)
- of a dataspace.
-space_id
- H5Sget_simple_extent_dims
(hid_t space_id
,
- hsize_t *dims
,
- hsize_t *maxdims
- )
-H5Sget_simple_extent_dims
returns the size and maximum sizes
- of each dimension of a dataspace through the dims
- and maxdims
parameters.
-space_id
- dims
- maxdims
- H5Sget_space_type
(hid_t space_id
)
-H5Sget_space_type
queries a dataspace to determine the
- current class of a dataspace.
-
- The function returns a class name, one of the following:
- H5S_SCALAR
,
- H5S_SIMPLE
, or
- H5S_NONE
.
-
space_id
- H5Sset_extent_simple
(hid_t space_id
,
- int rank
,
- const hsize_t *current_size
,
- const hsize_t *maximum_size
- )
-H5Sset_extent_simple
sets or resets the size of
- an existing dataspace.
-
- rank
is the dimensionality, or number of
- dimensions, of the dataspace.
-
- current_size
is an array of size rank
- which contains the new size of each dimension in the dataspace.
- maximum_size
is an array of size rank
- which contains the maximum size of each dimension in the
- dataspace.
-
- Any previous extent is removed from the dataspace, the dataspace
- type is set to H5S_SIMPLE
, and the extent is set as
- specified.
-
space_id
- rank
- current_size
- maximum_size
- H5Sis_simple
(hid_t space_id
)
-H5Sis_simple
determines whether a dataspace is
- a simple dataspace. [Currently, all dataspace objects are simple
- dataspaces; complex dataspace support will be added in the future.]
-space_id
- H5Soffset_simple
(hid_t space_id
,
- const hssize_t *offset
- )
-H5Soffset_simple
sets the offset of a
- simple dataspace space_id
. The offset
- array must be the same number of elements as the number of
- dimensions for the dataspace. If the offset
- array is set to NULL, the offset for the dataspace
- is reset to 0.
- - This function allows the same shaped selection to be moved - to different locations within a dataspace without requiring it - to be redefined. -
space_id
- offset
- H5Sextent_class
(hid_t space_id
)
-H5Sextent_class
queries a dataspace to determine the
- current class of a dataspace.
-
- The function returns a class name, one of the following:
- H5S_SCALAR
,
- H5S_SIMPLE
.
-
space_id
- H5Sextent_copy
(hid_t dest_space_id
,
- hid_t source_space_id
- )
-H5Sextent_copy
copies the extent from
- source_space_id
to dest_space_id
.
- This action may change the type of the dataspace.
-dest_space_id
- source_space_id
- H5Sset_extent_none
(hid_t space_id
)
-H5Sset_extent_none
removes the extent from
- a dataspace and sets the type to H5S_NONE.
-space_id
- H5Sselect_hyperslab
(hid_t space_id
,
- h5s_selopt_top
,
- const hssize_t *start
,
- const hsize_t *stride,
- const hsize_t *count
,
- const hsize_t *block
- )
-H5Sselect_hyperslab
selects a hyperslab region
- to add to the current selected region for the dataspace
- specified by space_id
.
-
- The start
, stride
, count
,
- and block
arrays must be the same size as the rank
- of the dataspace.
-
- The selection operator op
determines how the new
- selection is to be combined with the already existing selection
- for the dataspace.
-
- Currently, only the H5S_SELECT_SET
operator is
- supported; it replaces the existing selection with the
- parameters from this call. Overlapping blocks are not
- supported with the H5S_SELECT_SET
operator.
-
-The start
array determines the starting coordinates
-of the hyperslab
-to select.
-
-The stride
array chooses array locations
-from the dataspace
-with each value in the stride
array determining how
-many elements to move
-in each dimension. Setting a value in the stride
-array to 1 moves to
-each element in that dimension of the dataspace; setting a value of 2 in a
-location in the stride
array moves to every other
-element in that
-dimension of the dataspace. In other words, the stride
-determines the
-number of elements to move from the start
location
-in each dimension.
-Stride values of 0 are not allowed. If the stride
-parameter is NULL
,
-a contiguous hyperslab is selected (as if each value in the
-stride
array
-was set to all 1's).
-
-The count
array determines how many blocks to
-select from the dataspace, in each dimension.
-
-The block
array determines
-the size of the element block selected from the dataspace.
-If the block
-parameter is set to NULL
, the block size defaults
-to a single element
-in each dimension (as if the block
array was set to all 1's).
-
-For example, in a 2-dimensional dataspace, setting
-start
to [1,1],
-stride
to [4,4], count
to [3,7], and
-block
to [2,2] selects
-21 2x2 blocks of array elements starting with location (1,1) and selecting
-blocks at locations (1,1), (5,1), (9,1), (1,5), (5,5), etc.
-
-Regions selected with this function call default to C order iteration when -I/O is performed. -
space_id
- op
- start
- count
- stride
- block
- H5Sclose
(hid_t space_id
- )
-H5Sclose
releases a dataspace.
- Further access through the dataspace identifier is illegal.
- Failure to release a dataspace with this call will
- result in resource leaks.
-space_id
-
-General Datatype Operations |
-Atomic Datatype Properties |
-Properties of Compound Types |
-Conversion Functions
-The Datatype interface, H5T, provides a mechanism to describe the storage format of individual data points of a dataset and is designed to allow new features to be added easily without disrupting applications that use the datatype interface. A dataset (the H5D interface) is composed of a collection of raw data points of homogeneous type, organized according to the dataspace (the H5S interface).
-A datatype is a collection of datatype properties, all of which can be stored on disk and which, taken as a whole, provide complete information for data conversion to or from that datatype. The interface provides functions to set and query properties of a datatype.
-A data point is an instance of a datatype, which is an instance of a type class. We have defined a set of type classes and properties which can be extended at a later time. The atomic type classes are those which describe types that cannot be decomposed at the datatype interface level; all other classes are compound.
-See The Datatype Interface (H5T) in the HDF5 User's Guide for further information, including a complete list of all supported datatypes.
H5Topen
(hid_t loc_id
,
- const char * name
- )
-H5Topen
opens a named datatype at the location
- specified by loc_id
and returns an identifier
- for the datatype. loc_id
is either a file or
- group identifier. The identifier should eventually be closed
- by calling H5Tclose()
to release resources.
-loc_id
- name
- H5Tcommit
(hid_t loc_id
,
- const char * name
,
- hid_t type
- )
-H5Tcommit
commits a transient datatype
- (not immutable) to a file, turning it into a named datatype.
- The loc_id
is either a file or group identifier
- which, when combined with name
, refers to a new
- named datatype.
-loc_id
- name
- type
- H5Tcommitted
(hid_t type
)
-H5Tcommitted
queries a type to determine whether
- the type specified by the type
identifier
- is a named type or a transient type. If this function returns
- a positive value, then the type is named (that is, it has been
- committed, perhaps by some other application). Datasets which
- return committed datatypes with H5Dget_type()
are
- able to share the datatype with other datasets in the same file.
-type
- H5Tinsert_array
(hid_t parent_id
,
- const char *name
,
- size_t offset
,
- int ndims
,
- const size_t *dim
,
- const int *perm
,
- hid_t member_id
- )
-H5Tinsert_array
adds a new member to the
- compound datatype parent_id
.
- The member is an array with ndims
dimensionality
- and the size of the array is dim
.
- The new member's name, name
, must be unique
- within the compound datatype.
- The offset
argument defines the start of the
- member in an instance of the compound datatype and
- member_id
is the type identifier of the new member.
- The total member size should be relatively small.
-parent_id
- name
- offset
- ndims
- dim
- perm
- member_id
- H5Tfind
(hid_t src_id
,
- hid_t dst_id
,
- H5T_cdata_t **pcdata
- )
-H5Tfind
finds a conversion function that can
- handle a conversion from type src_id
to type
- dst_id
.
- The pcdata
argument is a pointer to a pointer
- to type conversion data which was created and initialized
- by the soft type conversion function of this path when the
- conversion function was installed on the path.
-src_id
- dst_id
- pcdata
- H5Tconvert
(hid_t src_id
,
- hid_t dst_id
,
- size_t nelmts
,
- void *buf
,
- void *background
- )
-H5Tconvert
converts nelmts
elements
- from the type specified by the src_id
identifier
- to type dst_id
.
- The source elements are packed in buf
and on return
- the destination will be packed in buf
.
- That is, the conversion is performed in place.
- The optional background buffer is an array of nelmts
- values of destination type which are merged with the converted
- values to fill in cracks (for instance, background
- might be an array of structs with the a
and
- b
fields already initialized and the conversion
- of buf
supplies the c
and d
- field values).
-src_id
- dst_id
- nelmts
- buf
- background
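The in-place packing that the text describes can be sketched in plain C. The function below is a hypothetical standalone illustration (not the library's conversion machinery) that narrows int elements to short within the same buffer; memcpy sidesteps aliasing concerns, and a forward walk is safe because destination element i lands at byte 2*i, at or before its source at byte 4*i.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Narrow nelmts int elements to short within the same buffer, the
 * way an in-place conversion packs destination elements into buf.
 * Hypothetical illustration only, not HDF5 library code. */
void int_to_short_in_place(unsigned char *buf, size_t nelmts)
{
    for (size_t i = 0; i < nelmts; i++) {
        int v;
        memcpy(&v, buf + i * sizeof(int), sizeof(int));     /* read source   */
        short s = (short)v;
        memcpy(buf + i * sizeof(short), &s, sizeof(short)); /* pack dest     */
    }
}
```

Because each write lands at or before its read, no source element is clobbered before it is converted; widening conversions, by contrast, need the elements processed from the end of the buffer backward.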
- H5Tset_overflow
(H5T_overflow_t func
)
-H5Tset_overflow
sets the overflow handler
- to be the function specified by func
.
- func
will be called for all datatype conversions that
- result in an overflow.
-
- See the definition of H5T_overflow_t
in
- H5Tpublic.h
for documentation
- of arguments and return values.
- The prototype for H5T_overflow_t
is as follows:
- herr_t (*H5T_overflow_t)(hid_t src_id, hid_t dst_id,
- void *src_buf, void *dst_buf);
-
-
- The NULL pointer may be passed to remove the overflow handler. -
func
- H5Tget_overflow
(void
)
-H5Tget_overflow
returns a pointer
- to the current global overflow function.
- This is an application-defined function that is called whenever a
- datatype conversion causes an overflow.
-H5Tcreate
(H5T_class_t class
,
- size_t size
- )
-H5Tcreate
creates a new datatype of the specified
- class with the specified number of bytes.
- Currently, only the H5T_COMPOUND
datatype class is
- supported with this function. Use H5Tcopy
- to create integer or floating-point datatypes.
- The datatype identifier returned from this function should be
- released with H5Tclose
or resource leaks will result.
-class
- size
- H5Tcopy
(hid_t type_id
)
-H5Tcopy
copies an existing datatype.
- The returned type is always transient and unlocked.
-
- The type_id
argument can be either a datatype
- identifier, a predefined datatype (defined in
- H5Tpublic.h
), or a dataset identifier.
- If type_id
is a dataset identifier instead of a
- datatype identifier, then this function returns a transient,
- modifiable datatype which is a copy of the dataset's datatype.
-
- The datatype identifier returned should be released with
- H5Tclose
or resource leaks will occur.
-
-
type_id
- H5Tpublic.h
), or a dataset identifier.
- H5Tequal
(hid_t type_id1
,
- hid_t type_id2
- )
-H5Tequal
determines whether two datatype identifiers
- refer to the same datatype.
-type_id1
- type_id2
- H5Tlock
(hid_t type_id
- )
-H5Tlock
locks the datatype specified by the
- type_id
identifier, making it read-only and
- non-destructible. This is normally done by the library for
- predefined datatypes so the application does not
- inadvertently change or delete a predefined type.
- Once a datatype is locked it can never be unlocked.
-type_id
- H5Tget_class
(hid_t type_id
- )
-H5Tget_class
returns the datatype class identifier.
-
- Valid class identifiers, as defined in H5Tpublic.h
, are:
-
H5T_INTEGER
(0
)
- H5T_FLOAT
(1
)
- H5T_TIME
(2
)
- H5T_STRING
(3
)
- H5T_BITFIELD
(4
)
- H5T_OPAQUE
(5
)
- H5T_COMPOUND
(6
)
- type_id
- H5Tget_size
(hid_t type_id
- )
-H5Tget_size
returns the size of a datatype in bytes.
-type_id
- H5Tset_size
(hid_t type_id
,
- size_t size
- )
-H5Tset_size
sets the total size in bytes,
- size
, for an atomic datatype (this operation
- is not permitted on compound datatypes). If the size is
- decreased so that the significant bits of the datatype extend beyond
- the edge of the new size, then the `offset' property is decreased
- toward zero. If the `offset' becomes zero and the significant
- bits of the datatype still hang over the edge of the new size, then
- the number of significant bits is decreased.
- Adjusting the size of an H5T_STRING automatically sets the precision
- to 8*size. All datatypes have a positive size.
-type_id
- size
- H5Tget_order
(hid_t type_id
- )
-H5Tget_order
returns the byte order of an
- atomic datatype.
- - Possible return values are: -
H5T_ORDER_LE
(0
)
- H5T_ORDER_BE
(1
)
- H5T_ORDER_VAX
(2
)
- type_id
- H5T_ORDER_ERROR
(-1).
-H5Tset_order
(hid_t type_id
,
- H5T_order_t order
- )
-H5Tset_order
sets the byte ordering of an atomic datatype.
- Byte orderings currently supported are:
- H5T_ORDER_LE (0)
- H5T_ORDER_BE (1)
- H5T_ORDER_VAX (2)
- type_id
- order
- H5Tget_precision
(hid_t type_id
- )
-H5Tget_precision
returns the precision of an atomic datatype. The
- precision is the number of significant bits which, unless padding is
- present, is 8 times larger than the value returned by H5Tget_size().
-type_id
- H5Tset_precision
(hid_t type_id
,
- size_t precision
- )
-H5Tset_precision
sets the precision of an atomic datatype.
- The precision is the number of significant bits which, unless padding
- is present, is 8 times larger than the value returned by H5Tget_size().
- If the precision is increased, then the offset is decreased and the size is increased to ensure that significant bits do not "hang over" the edge of the datatype.
Changing the precision of an H5T_STRING automatically changes the size as well. The precision must be a multiple of 8.
When decreasing the precision of a floating point type, set the locations and sizes of the sign, mantissa, and exponent fields first.
type_id
- precision
- H5Tget_offset
(hid_t type_id
- )
-H5Tget_offset
retrieves the bit offset of the first significant bit.
- The significant bits of an atomic datum can be offset from the beginning
- of the memory for that datum by an amount of padding. The `offset'
- property specifies the number of bits of padding that appear to the
- "right of" the value. That is, if we have a 32-bit datum with 16-bits
- of precision having the value 0x1122 then it will be laid out in
- memory as (from small byte address toward larger byte addresses):
Byte Position | Big-Endian Offset=0 | Big-Endian Offset=16 | Little-Endian Offset=0 | Little-Endian Offset=16
---|---|---|---|---
0: | [ pad] | [0x11] | [0x22] | [ pad]
1: | [ pad] | [0x22] | [0x11] | [ pad]
2: | [0x11] | [ pad] | [ pad] | [0x22]
3: | [0x22] | [ pad] | [ pad] | [0x11]
type_id
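The layout table above follows directly from shifting the value left by the offset inside the 32-bit container. A small standalone C model (illustrative helper name; not HDF5 library code):

```c
#include <assert.h>
#include <stdint.h>

/* Model of the layout table above: a 16-bit-precision value inside a
 * 32-bit datum, with `offset' bits of padding to the right of the
 * value.  Returns the byte stored at byte position pos; pad bits
 * come back as zero.  Illustrative sketch only. */
uint8_t datum_byte(uint16_t value, unsigned offset,
                   int big_endian, unsigned pos)
{
    uint32_t v = (uint32_t)value << offset;       /* offset pads the right */
    unsigned shift = big_endian ? 8u * (3u - pos) : 8u * pos;
    return (uint8_t)(v >> shift);
}
```

For value 0x1122 this reproduces the table: little-endian with offset 0 puts 0x22 in byte 0 and 0x11 in byte 1, while big-endian with offset 16 puts 0x11 in byte 0 and 0x22 in byte 1.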
- H5Tset_offset
(hid_t type_id
,
- size_t offset
- )
-H5Tset_offset
sets the bit offset of the first significant bit. The
- significant bits of an atomic datum can be offset from the beginning of
- the memory for that datum by an amount of padding. The `offset'
- property specifies the number of bits of padding that appear to the
- "right of" the value. That is, if we have a 32-bit datum with 16-bits
- of precision having the value 0x1122 then it will be laid out in
- memory as (from small byte address toward larger byte addresses):
Byte Position | Big-Endian Offset=0 | Big-Endian Offset=16 | Little-Endian Offset=0 | Little-Endian Offset=16
---|---|---|---|---
0: | [ pad] | [0x11] | [0x22] | [ pad]
1: | [ pad] | [0x22] | [0x11] | [ pad]
2: | [0x11] | [ pad] | [ pad] | [0x22]
3: | [0x22] | [ pad] | [ pad] | [0x11]
If the offset is incremented, then the total size is also incremented if necessary to prevent significant bits of the value from hanging over the edge of the datatype.
The offset of an H5T_STRING cannot be set to anything but zero.
type_id
- offset
- H5Tget_pad
(hid_t type_id
,
- H5T_pad_t * lsb
,
- H5T_pad_t * msb
- )
-H5Tget_pad
retrieves the padding type of the least and most-significant
- bit padding. Valid types are:
- H5T_PAD_ZERO (0)
- H5T_PAD_ONE (1)
- H5T_PAD_BACKGROUND (2)
- type_id
- lsb
- msb
- H5Tset_pad
(hid_t type_id
,
- H5T_pad_t lsb
,
- H5T_pad_t msb
- )
-H5Tset_pad
sets the least and most-significant bits padding types.
- H5T_PAD_ZERO (0)
- H5T_PAD_ONE (1)
- H5T_PAD_BACKGROUND (2)
- type_id
- lsb
- msb
- H5Tget_sign
(hid_t type_id
- )
-H5Tget_sign
retrieves the sign type for an integer type.
- Valid types are:
- H5T_SGN_NONE (0)
- H5T_SGN_2 (1)
- type_id
- H5T_SGN_ERROR
(-1).
-H5Tset_sign
(hid_t type_id
,
- H5T_sign_t sign
- )
-H5Tset_sign
sets the sign property for an integer type.
- H5T_SGN_NONE (0)
- H5T_SGN_2 (1)
- type_id
- sign
- H5Tget_fields
(hid_t type_id
,
- size_t * epos
,
- size_t * esize
,
- size_t * mpos
,
- size_t * msize
- )
-H5Tget_fields
retrieves information about the locations of the various
- bit fields of a floating point datatype. The field positions are bit
- positions in the significant region of the datatype. Bits are
- numbered with the least significant bit number zero.
- Any (or even all) of the arguments can be null pointers.
-type_id
- epos
- esize
- mpos
- msize
- H5Tset_fields
(hid_t type_id
,
- size_t epos
,
- size_t esize
,
- size_t mpos
,
- size_t msize
- )
-H5Tset_fields
sets the locations and sizes of the various floating
- point bit fields. The field positions are bit positions in the
- significant region of the datatype. Bits are numbered with the least
- significant bit number zero.
-
- Fields are not allowed to extend beyond the number of bits of precision, nor are they allowed to overlap with one another.
type_id
- epos
- esize
- mpos
- msize
- H5Tget_ebias
(hid_t type_id
- )
-H5Tget_ebias
retrieves the exponent bias of a floating-point type.
-type_id
- H5Tset_ebias
(hid_t type_id
,
- size_t ebias
- )
-H5Tset_ebias
sets the exponent bias of a floating-point type.
-type_id
- ebias
- H5Tget_norm
(hid_t type_id
- )
-H5Tget_norm
retrieves the mantissa normalization of
- a floating-point datatype. Valid normalization types are:
- H5T_NORM_IMPLIED (0)
- H5T_NORM_MSBSET (1)
- H5T_NORM_NONE (2)
- type_id
- H5T_NORM_ERROR
(-1).
-H5Tset_norm
(hid_t type_id
,
- H5T_norm_t norm
- )
-H5Tset_norm
sets the mantissa normalization of
- a floating-point datatype. Valid normalization types are:
- H5T_NORM_IMPLIED (0)
- H5T_NORM_MSBSET (1)
- H5T_NORM_NONE (2)
- type_id
- norm
- H5Tget_inpad
(hid_t type_id
- )
-H5Tget_inpad
retrieves the internal padding type for
- unused bits in floating-point datatypes.
- Valid padding types are:
- H5T_PAD_ZERO (0)
- H5T_PAD_ONE (1)
- H5T_PAD_BACKGROUND (2)
- type_id
- H5T_PAD_ERROR
(-1).
-H5Tset_inpad
(hid_t type_id
,
- H5T_pad_t inpad
- )
-H5Tset_inpad
sets the internal padding type for unused bits in floating-point datatypes; those unused bits will be filled according to the value of the padding type property inpad.
- Valid padding types are:
- H5T_PAD_ZERO (0)
- H5T_PAD_ONE (1)
- H5T_PAD_BACKGROUND (2)
- type_id
- inpad
- H5Tget_cset
(hid_t type_id
- )
-H5Tget_cset
retrieves the character set type
- of a string datatype. Valid character set types are:
- H5T_CSET_ASCII (0)
- type_id
- H5T_CSET_ERROR
(-1).
-H5Tset_cset
(hid_t type_id
,
- H5T_cset_t cset
- )
-H5Tset_cset
sets the character set to be used.
- HDF5 is able to distinguish between character sets of different nationalities and to convert between them to the extent possible. Valid character set types are:
H5T_CSET_ASCII (0)
- type_id
- cset
- H5Tget_strpad
(hid_t type_id
- )
-H5Tget_strpad
retrieves the string padding method
- for a string datatype. Valid string padding types are:
- 0
)
- 1
)
- type_id
- H5T_STR_ERROR
(-1).
-H5Tset_strpad
(hid_t type_id
,
- H5T_str_t strpad
- )
-H5Tset_strpad
defines the storage mechanism for the string.
- Valid string padding values are:
- 0
)
- 1
)
- type_id
- strpad
- H5Tget_nmembers
(hid_t type_id
- )
-H5Tget_nmembers
retrieves the number of fields a compound datatype has.
-type_id
- H5Tget_member_name
(hid_t type_id
,
- int field_idx
- )
-H5Tget_member_name
retrieves the name of a field
- of a compound datatype. Fields are stored in no particular
- order, with indexes 0 through N-1, where N is the value returned
- by H5Tget_nmembers()
. The name of the field is
- allocated with malloc()
and the caller is responsible
- for freeing the memory used by the name.
-type_id
- field_idx
- H5Tget_member_dims
(hid_t type_id
,
- int field_idx
,
- size_t *dims
,
- int *perm
- )
-H5Tget_member_dims
returns the dimensionality of
- the field. The dimensions and permutation vector are returned
- through arguments dims
and perm
,
- both arrays of at least four elements.
- Either (or even both) may be null pointers.
-type_id
- field_idx
- dims
- to retrieve.
- dims
- perm
- H5Tget_member_type
(hid_t type_id
,
- int field_idx
- )
-H5Tget_member_type
returns the datatype of the specified member. The caller
- should invoke H5Tclose() to release resources associated with the type.
-type_id
- field_idx
- H5Tinsert
(hid_t type_id
,
- const char * name
,
- off_t offset
,
- hid_t field_id
- )
-H5Tinsert
adds another member to the compound datatype
- type_id
. The new member has a name
which
- must be unique within the compound datatype.
- The offset
argument defines the start of the member
- in an instance of the compound datatype, and field_id
- is the datatype identifier of the new member.
- Note: All members of a compound datatype must be atomic; a compound datatype cannot have a member which is a compound datatype.
type_id
- name
- offset
- field_id
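In C, the offset argument is typically computed with offsetof from <stddef.h> against the struct the compound datatype mirrors. A hedged sketch with a hypothetical struct and helper names follows; the H5Tcreate/H5Tinsert calls are shown as comments so the fragment stays standalone.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical struct whose layout a compound datatype would mirror.
 * offsetof() yields the `offset' argument for each member. */
struct complex_t {
    double re;               /* real part      */
    double im;               /* imaginary part */
};

size_t re_offset(void) { return offsetof(struct complex_t, re); }
size_t im_offset(void) { return offsetof(struct complex_t, im); }

/* Intended use with the H5T interface (comments only, to keep the
 * fragment free of HDF5 dependencies):
 *
 *   hid_t tid = H5Tcreate(H5T_COMPOUND, sizeof(struct complex_t));
 *   H5Tinsert(tid, "re", offsetof(struct complex_t, re), H5T_NATIVE_DOUBLE);
 *   H5Tinsert(tid, "im", offsetof(struct complex_t, im), H5T_NATIVE_DOUBLE);
 */
```

Using offsetof, rather than hand-computed byte counts, keeps the member offsets correct even when the compiler inserts alignment padding between fields.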
- H5Tpack
(hid_t type_id
- )
-H5Tpack
recursively removes padding from within a compound
- datatype to make it more efficient (space-wise) to store that data.
-type_id
- H5Tregister_hard
(const char
- * name
, hid_t src_id
,
- hid_t dst_id
,
- H5T_conv_t func
- )
-H5Tregister_hard
registers a hard conversion function for a datatype
- conversion path. The path is specified by the source and destination
- datatypes src_id
and dst_id
. A conversion
- path can only have one hard function, so func
replaces any
- previous hard function.
-
- If func
is the null pointer then any hard function
- registered for this path is removed from this path. The soft functions
- are then used when determining which conversion function is appropriate
- for this path. The name
argument is used only
- for debugging and should be a short identifier for the function.
-
- The type of the conversion function pointer is declared as:
-
- typedef
herr_t (*H5T_conv_t
) (hid_t src_id
,
- hid_t dst_id
,
- H5T_cdata_t *cdata
,
- size_t nelmts
,
- void *buf
,
- void *bkg)
;
-
name
- src_id
- dst_id
- func
- H5Tregister_soft
(const char
- * name
, H5T_class_t src_cls
,
- H5T_class_t dst_cls
,
- H5T_conv_t func
- )
-H5Tregister_soft
registers a soft conversion function by adding it to the
- end of the master soft list and replacing the soft function in all
- applicable existing conversion paths. The name
- is used only for debugging and should be a short identifier
- for the function.
-
- The type of the conversion function pointer is declared as:
-
- typedef
herr_t (*H5T_conv_t
) (hid_t src_id
,
- hid_t dst_id
,
- H5T_cdata_t *cdata
,
- size_t nelmts
,
- void *buf
,
- void *bkg)
;
-
name
- src_cls
- dst_cls
- func
- H5Tunregister
(H5T_conv_t func
- )
-H5Tunregister
removes a conversion function from all conversion paths.
-
- The type of the conversion function pointer is declared as:
-
- typedef
herr_t (*H5T_conv_t
) (hid_t src_id
,
- hid_t dst_id
,
- H5T_cdata_t *cdata
,
- size_t nelmts
,
- void *buf
,
- void *bkg)
;
-
func
- H5Tclose
(hid_t type_id
- )
-H5Tclose
releases a datatype. Further access
- through the datatype identifier is illegal. Failure to release
- a datatype with this call will result in resource leaks.
-type_id
-
-
|
-
|
-HDF5 supports compression of raw data by compression methods
-built into the library or defined by an application.
-A compression method is associated with a dataset when the dataset is
-created and is applied independently to each storage chunk of the dataset.
-The dataset must use the H5D_CHUNKED
storage layout.
-
-The HDF5 library does not support compression for contiguous datasets because of the difficulty of implementing random access for partial I/O. Compact dataset compression is not supported because it would not produce significant results.
-See Compression in the HDF5 User's Guide for further information.
H5Zregister
(H5Z_method_t method
,
- const char *name
,
- H5Z_func_t cfunc
,
- H5Z_func_t ufunc
- )
-H5Zregister
registers new compression and uncompression
- functions for a method specified by a method number, method
.
- name
is used for debugging and may be the null pointer.
- Either or both of cfunc
(the compression function) and
- ufunc
(the uncompression method) may be null pointers.
- The statistics associated with a method number are not reset by this function; they accumulate over the life of the library.
method
- name
- cfunc
- ufunc
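A toy compression/uncompression pair of the kind one would wrap for registration can look like the sketch below (hypothetical names and a trivial (count, byte) run-length encoding; the actual H5Z_func_t signature is defined in H5Zpublic.h and is not reproduced here).

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy run-length cfunc/ufunc pair.  Encoding: (count, byte) pairs,
 * runs capped at 255.  Illustrative sketch only, not an HDF5 filter. */
size_t rle_compress(const unsigned char *src, size_t n, unsigned char *dst)
{
    size_t out = 0, i = 0;
    while (i < n) {
        unsigned char b = src[i];
        size_t run = 1;
        while (i + run < n && src[i + run] == b && run < 255)
            run++;
        dst[out++] = (unsigned char)run;       /* run length */
        dst[out++] = b;                        /* repeated byte */
        i += run;
    }
    return out;                                /* compressed size */
}

size_t rle_uncompress(const unsigned char *src, size_t n, unsigned char *dst)
{
    size_t out = 0;
    for (size_t i = 0; i + 1 < n; i += 2)      /* walk (count, byte) pairs */
        for (unsigned k = 0; k < src[i]; k++)
            dst[out++] = src[i + 1];
    return out;                                /* uncompressed size */
}
```

The essential contract the pair must honor is that uncompress(compress(x)) reproduces x exactly for every chunk, since each storage chunk is filtered independently.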
-These tools enable the user to examine HDF5 files interactively.
- - |
- - -
h5dump
- [-h]
- [-bb]
- [-header]
- [-a
names]
- [-d
names]
- [-g
names]
- [-l
names]
- file
-h5dump
enables the user to interactively examine
- the contents of an HDF5 file and dump those contents,
- in human readable form, to an ASCII file or to other tools.
-
- h5dump
displays HDF5 file content on
- standard output. It may display the content of the
- whole HDF5 file or selected objects, which can be groups,
- datasets, links, or attributes.
-
- The -header
option displays object
- header information only and must appear before the
- -a
, -d
, -g
, or
- -l
options.
-
- Native data types created in one machine are displayed with native
- names when h5dump
runs on the same machine type. But when
- h5dump
runs on a different machine type, it displays the
- native data types with standard type names. This will be changed in the
- next release to always display with standard type names.
-
- The h5dump
output is described in detail in
- DDL, the Data Description
- Language document.
-
-h
- -bb
- -header
- -a
names
- -d
names
- -g
names
- -l
names
- h5dump
can display the
- following types of information:
- h5ls
- [
options]
- file
- [
objects...]
-h5ls
prints selected information about file objects
- in the specified format.
--h
or -?
or --help
- -d
or --dump
- -w
N or --width=
N
- -v
or --verbose
- -V
or --version
- %%05d
to open a file family.
- h5repart
- [-v]
- [-V]
- [-[b|m]
N[g|m|k]]
- source_file
- dest_file
-h5repart
splits a single file into a family of
- files, joins a family of files into a single file, or copies
- one family of files to another while changing the size of the
- family members. h5repart
can also be used to
- copy a single file to a single file with holes.
-
- Sizes associated with the -b
and -m
- options may be suffixed with g
for gigabytes,
- m
for megabytes, or k
for kilobytes.
-
- File family names include an integer printf
- format such as %d
.
-
-
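The printf-style family name format mentioned above expands as in this small sketch (hypothetical helper name; the family driver and tools build member file names from the same kind of template):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Expand a family file name format such as "family%05d.h5" for a
 * given member number.  Illustrative helper only. */
void family_member_name(const char *fmt, int member,
                        char *out, size_t outlen)
{
    snprintf(out, outlen, fmt, member);
}
```

For example, the format "family%05d.h5" with member number 3 expands to "family00003.h5", giving zero-padded, lexically sortable member names.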
-v
- -V
- -b
N
- -m
N
-