HDF5 Release 1.3.x
Under Development
INTRODUCTION
This document describes the differences between HDF5-1.2.0 and
HDF5-1.3.x, and contains information on the platforms where HDF5-1.3.x
was tested (note: this list is still under construction) and known
problems in HDF5-1.3.x. For more details, check the HISTORY file in
the HDF5 source.
The HDF5 documentation can be found on the NCSA ftp server
(ftp.ncsa.uiuc.edu) in the directory:
/HDF/HDF5/docs/
For more information look at the HDF5 home page at:
http://hdf.ncsa.uiuc.edu/HDF5/
If you have any questions or comments, please send them to:
hdfhelp@ncsa.uiuc.edu
CONTENTS
- New features
- New h4toh5 utility
- Bug fixes since HDF5-1.2.0
- Platforms Tested
- Known Problems
New features
============
* The Virtual File Layer, VFL, was added to replace the old file
drivers. It also provides an API for user-defined file drivers.
* New features added to snapshots. Use 'snapshot help' to see a
complete list of features.
* Improved configure to detect if MPIO routines are available when
parallel mode is requested.
* Added Thread-Safe support. Phase I implemented.
* Added data sieve buffering to the raw data I/O path. This is enabled for
all VFL drivers except the mpio & core drivers. The sieve buffer size is
set with the new API function H5Pset_sieve_buf_size() and retrieved with
H5Pget_sieve_buf_size(); see the first sketch following this list.
* Added new Virtual File Driver, Stream VFD, to send/receive entire
HDF5 files via socket connections.
* Increased maximum number of dimensions for a dataset (H5S_MAX_RANK) from
31 to 32 to align with HDF4 & netCDF.
* Added 'query' function to VFL drivers. Also added 'type' parameter to
VFL 'read' & 'write' calls, so they are aware of the type of data being
accessed in the file. Updated the VFL document also.
* Added a new h4toh5 utility to convert HDF4 files to analogous
HDF5 files.
* Added a new array datatype to the datatypes which can be created. Removed
"array fields" from compound datatypes (use an array datatype instead);
see the second sketch following this list.
Release Notes for h4toh5 beta
=============================
The h4toh5 utility converts an HDF4 file to an HDF5 file.
See the document, "Mapping HDF4 Objects to HDF5 Objects",
http://hdf.ncsa.uiuc.edu/HDF5/papers/H4-H5MappingGuidelines.pdf
Known Limitations of the h4toh5 beta release
=============================================
1. Error handling
Error reporting is minimal.
2. String datatype
HDF4 has no 'string' type. String-valued data are usually defined
as an array of 'char' in HDF4. The h4toh5 utility will generally
map these to HDF5 'String' types rather than arrays of char, with
the following additional rules:
* For the data of an HDF4 SDS, image, or palette, if the data is
declared 'DFNT_CHAR8' it will be assumed to be integer data and
will be converted to an H5T_INTEGER type.
* For attributes of any HDF4 object, data of type 'DFNT_CHAR8'
will be converted to an HDF5 'H5T_STRING' type.
* For Vdata of HDF4, it is difficult to determine whether data
of type 'DFNT_CHAR8' is intended to be bytes or characters.
The h4toh5 utility treats such data as C characters and
converts it to an HDF5 'H5T_STRING' type.
3. Compression, Chunking and External storage
Chunking is supported, but compression and external storage are
not.
An HDF4 object that uses chunking will be converted to an HDF5
object with analogous chunked storage.
An HDF4 object that uses compression will be converted to an
uncompressed HDF5 object.
An HDF4 object that uses external storage will be converted to
an HDF5 object without external storage.
4. Memory use
The beta version of the h4toh5 utility copies data from HDF4
objects in a single read followed by a single write to the
HDF5 object. For large objects this requires a very large
amount of memory, and the conversion may be extremely slow or
may fail on some platforms.
Note that a dataset that has only been partly written will
be read completely, including uninitialized data, and all the
data will be written to the HDF5 object.
5. Platforms
The h4toh5 utility requires HDF5-1.4.
The beta h4toh5 utility has been tested on Solaris 2.6, Solaris 2.5,
Irix 6.5, HPUX 11.0, DEC Unix, FreeBSD, and Windows 2000.
Bug fixes since HDF5-1.2.0
==========================
Library
-------
* The function H5Pset_mpi was renamed to H5Pset_fapl_mpio; see the first
sketch at the end of this list.
* Corrected a floating-point number conversion error on the
Cray J90 platform; the conversion did not handle the value 0.0
correctly.
* Fixed an error that prevented the regions of dataset region references
from being retrieved correctly.
* Corrected a bug that caused non-parallel file drivers to fail in
the parallel version.
* Added internal free-lists to reduce the memory required by the library,
and added the H5garbage_collect API function.
* Fixed error in H5Giterate which was not updating the "index" parameter
correctly.
* Fixed an error in hyperslab iteration that did not walk through the
correct sequence of array elements when hyperslabs were staggered in a
certain pattern.
* Fixed several other problems in hyperslab iteration code.
* Fixed another H5Giterate bug which caused groups with large numbers
of objects in them to misbehave when the callback function returned
non-zero values.
* Changed return type of H5Aiterate and H5A_operator_t typedef to be
herr_t, to align them with the dataset and group iterator functions.
* Changed H5Screate_simple and H5Sset_extent_simple to disallow dimensions
of size 0 unless the same dimension is unlimited.
* QAK - 4/19/00 - Improved metadata hashing & caching algorithms to avoid
many hash flushes and also remove some redundant I/O when moving metadata
blocks in the file.
* The "struct(opt)" type conversion function which gets invoked for
certain compound datatype conversions was fixed for nested compound
types. This required a small change in the datatype conversion
function API.
* Rewrote much of the hyperslab code to speed it up considerably.
* Added bounded garbage collection for the free lists when they run out of
memory, and added the H5set_free_list_limits API call to allow users to
put an upper limit on the amount of memory used for free lists; see the
second sketch at the end of this list.
* Added checks for non-existent or deleted objects when dereferencing
object or region references; dereferencing such objects is now disallowed.
* "Time" datatypes (H5T_UNIX_D*) were not being stored and retrieved from
object headers correctly, fixed now.
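
A minimal sketch of selecting the renamed MPI-IO file driver follows;
it assumes MPI_Init() has already been called, and the file name is
illustrative:

    #include "hdf5.h"
    #include <mpi.h>

    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);

    /* Select the MPI-IO file driver (formerly set with H5Pset_mpi) */
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);

    hid_t file = H5Fcreate("parallel.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);
    /* ... collective dataset creation and I/O ... */
    H5Fclose(file);
    H5Pclose(fapl);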
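
A second sketch shows the new free-list limit call; the 1 MB and 64 KB
values are arbitrary illustrative limits, and -1 leaves a limit unset:

    #include "hdf5.h"

    /* Arguments are (global limit, per-list limit) pairs, in bytes, for
       the regular, array, and block free lists respectively. */
    H5set_free_list_limits(1024 * 1024, 64 * 1024,   /* regular free lists */
                           1024 * 1024, 64 * 1024,   /* array free lists   */
                           -1, -1);                  /* block free lists   */
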
Configuration
-------------
* The hdf5.h include file was fixed to allow the HDF5 Library to be compiled
with other libraries/applications that use GNU autoconf.
* Configuration for parallel HDF5 was improved. Configure now attempts to
link with libmpi.a and/or libmpio.a as the MPI libraries by default.
It also uses "mpirun" to launch MPI tests by default. It tests the
linking of MPIO routines during the configuration stage, rather than
failing later as before. One can simply run "./configure --enable-parallel"
if the MPI library is in the system library path.
* Added support for pthread library and thread-safe option.
* The libhdf5.settings file shows the correct machine byte-sex.
* Added the option "--enable-stream-vfd" to configure, to build with or
without the Stream VFD. For Solaris, added -lsocket to the LIBS list
of libraries.
Tools
------
* h5dump correctly displays compound datatypes.
* Corrected an error in h5toh4 which did not convert 32-bit
integers from HDF5 to HDF4 correctly on the T3E platform.
* h5dump correctly displays the committed copy of predefined types.
* Added an option, -V, to show the version information of h5dump.
* Fixed a core-dump bug in h5toh4 when executed on platforms like
TFLOPS.
* The test script for h5toh4 was previously unable to detect when the hdp
dumper command was not valid. It now detects and reports the failure of
hdp execution.
* Merged the tools with the 1.2.2 branch. This required adding new macros,
VERSION12 and VERSION13, which are used in conditional compilation. Also
updated the Windows project files for the tools.
* h5dump displays opaque and bitfield data correctly.
* h5dump and h5ls can browse files created with the Stream VFD
(eg. "h5ls <hostname>:<port>").
* h5dump has a new option, "-o <filename>", which outputs the raw data
of the dataset into an ASCII text file <filename>.
* h5toh4 used to convert HDF5 string types to the HDF4 DFNT_INT8 type.
It was corrected to produce the HDF4 DFNT_CHAR type instead.
* h5dump and h5ls display array data correctly.
Documentation
-------------
* User's Guide and Reference Manual were updated.
See doc/html/PSandPDF/index.html for more details.
Platforms Tested:
================
Note: Due to the nature of bug fixes, only static versions of the library and tools were tested.
AIX 4.3.2 (IBM SP) 3.6.6
Cray T3E 2.0.4.81 cc 6.3.0.1
mpt.1.3
FreeBSD 3.3-STABLE gcc 2.95.2
HP-UX B.10.20 HP C HP92453-01 A.10.32
IRIX 6.5 MIPSpro cc 7.30
IRIX64 6.5 (64 & n32) MIPSpro cc 7.3.1m
mpt.1.3 (SGI MPI 3.2.0.0)
Linux 2.2.10 SuSE egcs-2.91.66 configured with
(i686-pc-linux-gnu) --disable-hsizet
mpich-1.2.0 egcs-2.91.66 19990314/Linux
OSF1 V4.0 DEC-V5.2-040
SunOS 5.6 cc WorkShop Compilers 4.2 no optimization
SunOS 5.7 cc WorkShop Compilers 5.0
TFLOPS 2.8 cicc (pgcc Rel 3.0-5i)
mpich-1.1.2 with local changes
Windows NT4.0 sp5 MSVC++ 6.0
Known Problems:
==============
o SunOS 5.6 with C WorkShop Compilers 4.2: Hyperslab selections will
fail if library is compiled using optimization of any level.
o The Stream VFD has not yet been tested under Windows.