Release information for hdf5-1.0.23a
------------------------------------

CHANGES SINCE THE SECOND ALPHA

* Strided hyperslab selections in dataspaces are now working (see the
  hyperslab sketch at the end of this file).

* The compression API has been replaced with a more general filter API.
  See doc/html/Filters.html for details.

CHANGES SINCE THE FIRST ALPHA

* Two of the packages have been renamed.  The data space API has been
  renamed from `H5P' to `H5S' and the property list (template) API has
  been renamed from `H5C' to `H5P'.

* The new attribute API `H5A' has been added.  An attribute is a small
  dataset which can be attached to some other object (for instance, a
  4x4 transformation matrix attached to a 3-dimensional dataset, or an
  English abstract attached to a group).  See the attribute sketch at
  the end of this file.

* The error handling API `H5E' has been completed.  By default, when an
  API function returns failure, an error stack is printed on the
  standard error stream.  H5Eset_auto() controls the automatic
  printing, and the H5E_BEGIN_TRY/H5E_END_TRY macros can temporarily
  disable it (see the error-handling sketch at the end of this file).

* Support for large files and datasets (>2GB) has been added.  There is
  an HTML document that describes how it works.  Some function argument
  types have changed to support this: all arguments pertaining to sizes
  of memory objects are `size_t' and all arguments pertaining to file
  sizes are `hsize_t'.

* More data type conversions have been added, although none of them are
  fine-tuned for performance.  There are new converters from integer to
  integer and float to float, but not between integers and floating
  point.  A bug has been fixed in the converter between compound types.

* The numbered types have been removed from the API: int8, uint8,
  int16, uint16, int32, uint32, int64, uint64, float32, and float64.
  Use the standard C types instead.  Similarly, the numbered types were
  removed from the H5T_NATIVE_* architecture; use the unnumbered types
  which correspond to the standard C types, like H5T_NATIVE_INT.

* More debugging support was added.  If tracing is enabled at
  configuration time (the default) and the HDF5_TRACE environment
  variable is set to a file descriptor number, then all API calls emit
  the function name, argument names and values, and return value on
  that file descriptor.  There is an HTML document that describes this.
  If appropriate debugging options are enabled at configuration time,
  some packages will display performance information on stderr.

* Data types can be stored in the file as independent objects, and
  multiple datasets can share a data type.

* The raw data I/O stream has been implemented and the application can
  control the meta- and raw-data caches, so I/O performance should be
  improved over the first alpha release.

* Group and attribute query functions have been implemented, so it is
  now possible to find out the contents of a file with no prior
  knowledge.

* External raw data storage allows datasets to be written by other
  applications or I/O libraries and then described and accessed through
  HDF5.

* Hard and soft (symbolic) links are implemented, allowing groups to
  share objects.  Dangling and recursive symbolic links are supported
  (see the link sketch at the end of this file).

* User-defined data compression is implemented, although we may
  generalize the interface to allow arbitrary user-defined filters
  which can be used for compression, checksums, encryption, performance
  monitoring, etc.  The publicly-available `deflate' method is
  predefined if the GNU libz.a can be found at configuration time (see
  the chunking/deflate sketch at the end of this file).

* The configuration scripts have been modified to make it easier to
  build debugging vs. production versions of the library.
* The library automatically checks that the application was compiled
  with the correct version of the header files.

Parallel HDF5 Changes

* Parallel support for fixed-dimension datasets with contiguous or
  chunked storage.  Unlimited-dimension datasets are also supported but
  must use chunked storage.  There is no parallel support for
  compressed datasets.

* Collective data transfer for H5Dread/H5Dwrite.  Collective access is
  supported only for datasets with contiguous storage, and thus only
  for fixed-dimension datasets for now.

* H5Pset_mpi and H5Pget_mpi no longer have the access_mode argument.
  Its role is taken over by the data transfer property list of
  H5Dread/H5Dwrite.

* New functions H5Pset_xfer and H5Pget_xfer handle the specification of
  independent or collective data transfer mode in the dataset transfer
  property list.  That property list can then be passed to H5Dwrite and
  H5Dread to select the transfer mode (see the parallel sketch at the
  end of this file).

* Added parallel support for datasets with chunked storage layout.
  When a dataset is extended in a PHDF5 file, all processes that opened
  the file must collectively call H5Dextend with identical new
  dimension sizes.

LIST OF API FUNCTIONS

The following functions are implemented.  Errors are returned if an
attempt is made to use a feature which is not implemented, and printing
the error stack will show `not implemented yet'.

Library
   H5check - check that lib version matches header version
   H5open - initialize library (happens automatically)
   H5close - shut down the library (happens automatically)
   H5dont_atexit - don't call H5close on exit
   H5version - retrieve library version info
   H5version_check - check for specific library version

Property Lists
   H5Pclose - release template resources
   H5Pcopy - copy a template
   H5Pcreate - create a new template
   H5Pget_chunk - get chunked storage properties
   H5Pset_chunk - set chunked storage properties
   H5Pget_class - get template class
   H5Pget_istore_k - get chunked storage properties
   H5Pset_istore_k - set chunked storage properties
   H5Pget_layout - get raw data layout class
   H5Pset_layout - set raw data layout class
   H5Pget_sizes - get address and size sizes
   H5Pset_sizes - set address and size sizes
   H5Pget_sym_k - get symbol table storage properties
   H5Pset_sym_k - set symbol table storage properties
   H5Pget_userblock - get user-block size
   H5Pset_userblock - set user-block size
   H5Pget_version - get file version numbers
   H5Pget_alignment - get data alignment properties
   H5Pset_alignment - set data alignment properties
   H5Pget_external_count - get count of external data files
   H5Pget_external - get information about an external data file
   H5Pset_external - add a new external data file to the list
   H5Pget_driver - get low-level file driver class
   H5Pget_stdio - get properties for stdio low-level driver
   H5Pset_stdio - set properties for stdio low-level driver
   H5Pget_sec2 - get properties for sec2 low-level driver
   H5Pset_sec2 - set properties for sec2 low-level driver
   H5Pget_core - get properties for core low-level driver
   H5Pset_core - set properties for core low-level driver
   H5Pget_split - get properties for split low-level driver
   H5Pset_split - set properties for split low-level driver
   H5Pget_family - get properties for family low-level driver
   H5Pset_family - set properties for family low-level driver
   H5Pget_cache - get meta- and raw-data caching properties
   H5Pset_cache - set meta- and raw-data caching properties
   H5Pget_buffer - get raw-data I/O pipe buffer properties
   H5Pset_buffer - set raw-data I/O pipe buffer properties
   H5Pget_preserve - get type conversion preservation properties
   H5Pset_preserve - set type conversion preservation properties
   H5Pget_compression - get raw data compression properties
   H5Pset_compression - set raw data compression properties
   H5Pget_deflate - get deflate compression properties
   H5Pset_deflate - set deflate compression properties
   H5Pget_mpi - get MPI-IO properties
   H5Pset_mpi - set MPI-IO properties
   H5Pget_xfer - get data transfer properties
   H5Pset_xfer - set data transfer properties

Datasets
   H5Dclose - release dataset resources
   H5Dcreate - create a new dataset
   H5Dget_space - get data space
   H5Dget_type - get data type
   H5Dget_create_plist - get dataset creation properties
   H5Dopen - open an existing dataset
   H5Dread - read raw data
   H5Dwrite - write raw data
   H5Dextend - extend a dataset

Attributes
   H5Acreate - create a new attribute
   H5Aopen_name - open an attribute by name
   H5Aopen_idx - open an attribute by number
   H5Awrite - write values into an attribute
   H5Aread - read values from an attribute
   H5Aget_space - get attribute data space
   H5Aget_type - get attribute data type
   H5Aget_name - get attribute name
   H5Anum_attrs - return the number of attributes for an object
   H5Aiterate - iterate over an object's attributes
   H5Adelete - delete an attribute
   H5Aclose - close an attribute

Errors
   H5Eclear - clear the error stack
   H5Eprint - print an error stack
   H5Eget_auto - get automatic error reporting settings
   H5Eset_auto - set automatic error reporting
   H5Ewalk - iterate over the error stack
   H5Ewalk_cb - the default error stack iterator function
   H5Eget_major - get the message for the major error number
   H5Eget_minor - get the message for the minor error number

Files
   H5Fclose - close a file and release resources
   H5Fcreate - create a new file
   H5Fget_create_template - get file creation property list
   H5Fget_access_template - get file access property list
   H5Fis_hdf5 - determine if a file is an HDF5 file
   H5Fopen - open an existing file

Groups
   H5Gclose - close a group and release resources
   H5Gcreate - create a new group
   H5Gopen - open an existing group
   H5Gpop - pop a group from the cwg stack
   H5Gpush - push a group onto the cwg stack
   H5Gset - set the current working group (cwg)
   H5Giterate - iterate over the contents of a group
   H5Gmove - change the name of some object
   H5Glink - create a hard or soft link to an object
   H5Gunlink - break the link between a name and an object
   H5Gstat - get information about a group entry
   H5Gget_linkval - get the value of a soft link

Dataspaces
   H5Sclose - release dataspace
   H5Screate - create a new dataspace
   H5Screate_simple - create a new simple dataspace
   H5Sextent_dims - get dataspace size
   H5Sselect_hyperslab - set hyperslab dataspace selection
   H5Sselect_elements - set element sequence dataspace selection
   H5Sextent_ndims - get dataspace dimensionality
   H5Sextent_npoints - get number of points in extent of dataspace
   H5Sselect_npoints - get number of selected points in dataspace
   H5Sis_simple - determine if dataspace is simple
   H5Sset_extent_simple - set simple dataspace dimensionality and size
   H5Scopy - copy a dataspace

Datatypes
   H5Tclose - release data type resources
   H5Topen - open a named data type
   H5Tcommit - name a data type
   H5Tcommitted - determine if a type is named
   H5Tcopy - copy a data type
   H5Tcreate - create a new data type
   H5Tequal - compare two data types
   H5Tfind - find a data type conversion function
   H5Tconvert - convert data from one type to another
   H5Tget_class - get data type class
   H5Tget_cset - get character set
   H5Tget_ebias - get exponent bias
   H5Tget_fields - get floating point fields
   H5Tget_inpad - get inter-field padding
   H5Tget_member_dims - get struct member dimensions
   H5Tget_member_name - get struct member name
   H5Tget_member_offset - get struct member byte offset
   H5Tget_member_type - get struct member type
   H5Tget_nmembers - get number of struct members
   H5Tget_norm - get floating point normalization
   H5Tget_offset - get bit offset within type
   H5Tget_order - get byte order
   H5Tget_pad - get padding type
   H5Tget_precision - get precision in bits
   H5Tget_sign - get integer sign type
   H5Tget_size - get size in bytes
   H5Tget_strpad - get string padding
   H5Tinsert - insert struct member
   H5Tlock - lock type to prevent changes
   H5Tpack - pack struct members
   H5Tregister_hard - register a specific type conversion function
   H5Tregister_soft - register a general type conversion function
   H5Tset_cset - set character set
   H5Tset_ebias - set exponent bias
   H5Tset_fields - set floating point fields
   H5Tset_inpad - set inter-field padding
   H5Tset_norm - set floating point normalization
   H5Tset_offset - set bit offset within type
   H5Tset_order - set byte order
   H5Tset_pad - set padding type
   H5Tset_precision - set precision in bits
   H5Tset_sign - set integer sign type
   H5Tset_size - set size in bytes
   H5Tset_strpad - set string padding
   H5Tunregister - remove a type conversion function

Compression
   H5Zregister - register a new compression method

This release has been tested on UNIX platforms only; specifically:
Linux, FreeBSD, IRIX, Solaris, and DEC UNIX.

Release information for parallel HDF5
-------------------------------------

+) The current release supports independent access to fixed-dimension
   datasets only.

+) The comm and info arguments of H5Pset_mpi are not used.  All
   parallel I/O is done via MPI_COMM_WORLD.  The access mode for
   H5Pset_mpi can be H5ACC_INDEPENDENT only.

+) This release of parallel HDF5 has been tested on IBM SP2 and SGI
   Origin 2000 systems.  It uses the ROMIO implementation of the MPI-IO
   interface for parallel I/O support.

+) Useful URLs:
      Parallel HDF webpage: "http://hdf.ncsa.uiuc.edu/Parallel_HDF/"
      ROMIO webpage: "http://www.mcs.anl.gov/home/thakur/romio/"

+) Some to-do items for future releases:
      support for the Intel Teraflop platform;
      support for unlimited dimension datasets;
      support for file access via a communicator besides MPI_COMM_WORLD;
      support for collective access to datasets;
      support for independent create/open of datasets.
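
Example sketches
----------------

The C fragments below sketch how some of the features described above
can be used.  They are only sketches: exact argument types and constant
names may differ slightly in this alpha, and all file, dataset, and
group names are illustrative, not part of the library.

Error handling.  A minimal sketch of probing for a dataset that may not
exist without triggering the automatic error-stack printing, using the
H5E_BEGIN_TRY/H5E_END_TRY macros.  H5Dopen is shown with the
two-argument form of this release.

    #include <hdf5.h>

    /* Try to open a dataset that may not exist, without printing an
     * error stack on stderr if it does not. */
    hid_t
    open_if_present(hid_t file, const char *name)
    {
        hid_t dset;

        /* H5Eset_auto(NULL, NULL) would turn automatic printing off
         * for good; the macros below only suspend it for the
         * enclosed calls. */
        H5E_BEGIN_TRY {
            dset = H5Dopen(file, name);   /* negative on failure */
        } H5E_END_TRY;

        return dset;   /* caller checks for a negative value */
    }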
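Attributes.  A sketch of attaching a small attribute (a 4x4
transformation matrix, as in the example above) to a dataset with the
new H5A API.  The five-argument H5Acreate() is the calling convention
assumed for this release; the attribute name is made up.

    #include <hdf5.h>

    /* Attach a 4x4 transformation matrix as an attribute of a dataset. */
    herr_t
    attach_matrix(hid_t dset, double matrix[4][4])
    {
        hsize_t dims[2] = {4, 4};
        hid_t   space, attr;
        herr_t  status;

        space = H5Screate_simple(2, dims, NULL);
        attr = H5Acreate(dset, "transform", H5T_NATIVE_DOUBLE, space,
                         H5P_DEFAULT);
        status = H5Awrite(attr, H5T_NATIVE_DOUBLE, matrix);

        H5Aclose(attr);
        H5Sclose(space);
        return status;
    }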
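Strided hyperslabs.  A sketch of reading every other element of a 1-D
dataset through a strided hyperslab selection.  The dataset name is
made up, and the element type of the start/stride/count arrays
(hsize_t here) is an assumption about this alpha.

    #include <hdf5.h>

    /* Read every other element of a 1-D dataset of 100 ints. */
    herr_t
    read_strided(hid_t file)
    {
        hsize_t start[1] = {0};
        hsize_t stride[1] = {2};
        hsize_t count[1] = {50};
        hsize_t mdims[1] = {50};
        int     buf[50];
        hid_t   dset, fspace, mspace;
        herr_t  status;

        dset = H5Dopen(file, "vector");     /* hypothetical dataset */
        fspace = H5Dget_space(dset);

        /* 50 points starting at 0 with a stride of 2 (block size 1). */
        H5Sselect_hyperslab(fspace, H5S_SELECT_SET,
                            start, stride, count, NULL);

        mspace = H5Screate_simple(1, mdims, NULL);
        status = H5Dread(dset, H5T_NATIVE_INT, mspace, fspace,
                         H5P_DEFAULT, buf);

        H5Sclose(mspace);
        H5Sclose(fspace);
        H5Dclose(dset);
        return status;
    }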
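Chunking and deflate compression.  A sketch of creating a dataset with
chunked storage and the predefined `deflate' method (available when
libz.a is found at configuration time).  The five-argument H5Dcreate()
is the form assumed for this release; sizes and names are illustrative.

    #include <hdf5.h>

    /* Create a chunked, deflate-compressed 2-D dataset of ints. */
    hid_t
    create_compressed(hid_t file)
    {
        hsize_t dims[2] = {1000, 1000};
        hsize_t chunk[2] = {100, 100};
        hid_t   space, plist, dset;

        space = H5Screate_simple(2, dims, NULL);

        plist = H5Pcreate(H5P_DATASET_CREATE);
        H5Pset_chunk(plist, 2, chunk);     /* chunked storage layout */
        H5Pset_deflate(plist, 6);          /* deflate level 6 */

        dset = H5Dcreate(file, "image", H5T_NATIVE_INT, space, plist);

        H5Pclose(plist);
        H5Sclose(space);
        return dset;   /* negative if creation failed */
    }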
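Links.  A sketch of creating soft and hard links with H5Glink.  The
link-type constants H5G_LINK_SOFT and H5G_LINK_HARD are assumed names
for this release, and the object paths are made up.

    #include <hdf5.h>

    /* Make /data/current a soft link to /data/run0042, and give the
     * same dataset a second (hard-linked) name in another group. */
    herr_t
    make_links(hid_t file)
    {
        herr_t status;

        /* Soft (symbolic) link: may dangle if the target is removed. */
        status = H5Glink(file, H5G_LINK_SOFT,
                         "/data/run0042", "/data/current");
        if (status < 0) return status;

        /* Hard link: both names refer to the same object. */
        return H5Glink(file, H5G_LINK_HARD,
                       "/data/run0042", "/results/latest");
    }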
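Parallel access.  A sketch of creating a file shared by all processes
in MPI_COMM_WORLD with H5Pset_mpi and requesting collective transfer
for H5Dwrite through H5Pset_xfer, as described above.  The
transfer-mode constant H5D_XFER_COLLECTIVE is an assumption about this
release's naming; the dataspace is supplied by the caller and the
dataset name is made up.

    #include <hdf5.h>
    #include <mpi.h>

    /* Collectively write a contiguous, fixed-dimension dataset. */
    herr_t
    parallel_write(const char *name, hid_t dset_space, const int *buf)
    {
        hid_t  facc, fid, xfer, dset;
        herr_t status;

        /* File access property list carrying the communicator/info. */
        facc = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_mpi(facc, MPI_COMM_WORLD, MPI_INFO_NULL);
        fid = H5Fcreate(name, H5F_ACC_TRUNC, H5P_DEFAULT, facc);

        dset = H5Dcreate(fid, "values", H5T_NATIVE_INT, dset_space,
                         H5P_DEFAULT);

        /* Dataset transfer property list selecting collective I/O. */
        xfer = H5Pcreate(H5P_DATASET_XFER);
        H5Pset_xfer(xfer, H5D_XFER_COLLECTIVE);

        status = H5Dwrite(dset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL,
                          xfer, buf);

        H5Pclose(xfer);
        H5Dclose(dset);
        H5Pclose(facc);
        H5Fclose(fid);
        return status;
    }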