Instructions for the Installation of the HDF5 Software
=======================================================

CONTENTS
--------
1. Obtaining HDF5
2. Warnings about compilers
   2.1. GNU (Intel platforms)
   2.2. DEC
   2.3. SGI (Irix64 6.2)
   2.4. Windows/NT
3. Quick installation
   3.1. TFLOPS
   3.2. Windows
4. HDF5 dependencies
   4.1. Zlib
   4.2. MPI and MPI-IO
5. Full installation instructions for source distributions
   5.1. Unpacking the distribution
        5.1.1. Non-compressed tar archive (*.tar)
        5.1.2. Compressed tar archive (*.tar.Z)
        5.1.3. Gzip'd tar archive (*.tar.gz)
        5.1.4. Bzip'd tar archive (*.tar.bz2)
   5.2. Source vs. Build Directories
   5.3. Configuring
        5.3.1. Specifying the installation directories
        5.3.2. Using an alternate C compiler
        5.3.3. Additional compilation flags
        5.3.4. Specifying other programs
        5.3.5. Specifying other libraries and headers
        5.3.6. Static versus shared linking
        5.3.7. Optimization versus symbolic debugging
        5.3.8. Large (>2GB) vs. small (<2GB) file capability
        5.3.9. Parallel vs. serial library
   5.4. Building
   5.5. Testing
   5.6. Installing
6. Using the Library
7. Support

*****************************************************************************

1. Obtaining HDF5

   The latest supported public release of HDF5 is available from
   ftp://hdf.ncsa.uiuc.edu/pub/dist/HDF5.  For Unix platforms it is
   available in tar format, either uncompressed or compressed with
   compress, gzip, or bzip2.  For Microsoft Windows it is in ZIP format.

   The HDF team also makes snapshots of the source code available on a
   regular basis.  These snapshots are unsupported (that is, the HDF team
   will not release a bug fix for a particular snapshot; rather, any bug
   fixes will be rolled into the next snapshot).  Furthermore, the
   snapshots have been tested on only a few machines and may not test
   correctly for parallel applications.  Snapshots can be found at
   ftp://hdf.ncsa.uiuc.edu/pub/outgoing/hdf5/snapshots in a limited
   number of formats.

2. Warnings about compilers

   OUTPUT FROM THE FOLLOWING COMPILERS SHOULD BE EXTREMELY SUSPECT WHEN
   USED TO COMPILE THE HDF5 LIBRARY, ESPECIALLY IF OPTIMIZATIONS ARE
   ENABLED.  IN ALL CASES, HDF5 ATTEMPTS TO WORK AROUND THE COMPILER
   BUGS, BUT THE HDF5 DEVELOPMENT TEAM MAKES NO GUARANTEE THAT THERE ARE
   NO OTHER CODE GENERATION PROBLEMS.

2.1. GNU (Intel platforms)

   Versions before 2.8.1 have serious problems allocating registers when
   functions contain operations on `long long' data types.  Supplying the
   `--disable-hsizet' switch to configure (documented below) will prevent
   HDF5 from using `long long' data types in situations that are known
   not to work, but it limits the HDF5 address space to 2GB.

2.2. DEC

   The V5.2-038 compiler (and possibly others) occasionally generates
   incorrect code for memcpy() calls when optimizations are enabled,
   resulting in unaligned access faults.  HDF5 works around the problem
   by casting the second argument to `char*'.

2.3. SGI (Irix64 6.2)

   The Mongoose 7.00 compiler has serious optimization bugs and should be
   upgraded to MIPSpro 7.2.1.2m.  Patches are available from SGI.

2.4. Windows/NT

   The Microsoft Win32 5.0 compiler is unable to cast unsigned long long
   values to doubles.  HDF5 works around this bug by first casting to
   signed long long and then to double.

3. Quick installation

   For those who don't like to read ;-) the following steps can be used
   to configure, build, test, and install the HDF5 library, header files,
   and support programs.

      $ gunzip < hdf5-1.2.2.tar.gz | tar xf -
      $ cd hdf5-1.2.2
      $ ./configure
      $ make
      $ make check
      $ make install
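   For example (an illustration only; the installation directory below is
   just a placeholder), the same sequence with a non-default installation
   directory, using the `--prefix' switch described in section 5.3.1,
   would be:

      $ gunzip < hdf5-1.2.2.tar.gz | tar xf -
      $ cd hdf5-1.2.2
      $ ./configure --prefix=$HOME/hdf5
      $ make
      $ make check
      $ make install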
3.1. TFLOPS

   Users of the Intel TFLOPS machine, after reading this file, should see
   INSTALL_TFLOPS for more instructions.

3.2. Windows

   Users of Microsoft Windows should see INSTALL_Windows.txt for detailed
   instructions.

4. HDF5 dependencies

4.1. Zlib

   The HDF5 library has a predefined compression filter that uses the
   "deflate" method for chunked datasets.  If zlib-1.1.2 or later is
   found then HDF5 will use it; otherwise HDF5's predefined compression
   method will degenerate to a no-op (the compression filter will succeed
   but the data will not be compressed).

4.2. MPI and MPI-IO

   The parallel version of the library is built upon the foundation
   provided by MPI and MPI-IO.  If these libraries are not available when
   HDF5 is configured then only a serial version of HDF5 can be built.

5. Full installation instructions for source distributions

5.1. Unpacking the distribution

   The HDF5 source code is distributed in a variety of formats which can
   be unpacked with the following commands, each of which creates an
   `hdf5-1.2.2' directory.

5.1.1. Non-compressed tar archive (*.tar)

      $ tar xf hdf5-1.2.2.tar

5.1.2. Compressed tar archive (*.tar.Z)

      $ uncompress -c hdf5-1.2.2.tar.Z | tar xf -

5.1.3. Gzip'd tar archive (*.tar.gz)

      $ gunzip < hdf5-1.2.2.tar.gz | tar xf -

5.1.4. Bzip'd tar archive (*.tar.bz2)

      $ bunzip2 < hdf5-1.2.2.tar.bz2 | tar xf -

5.2. Source vs. Build Directories

   On most systems the build can occur in a directory other than the
   source directory, allowing multiple concurrent builds and/or read-only
   source code.  To accomplish this, create a build directory, cd into
   that directory, and run the `configure' script found in the source
   directory (configure details are below).  Unfortunately, this does not
   work on recent Irix platforms (6.5? and later) because that `make'
   doesn't understand the VPATH variable.  This will be fixed in the next
   release.

5.3. Configuring

   HDF5 uses the GNU autoconf system for configuration, which detects
   various features of the host system and creates the Makefiles.  On
   most systems it should be sufficient to say:

      $ ./configure
   or
      $ sh configure

   The configuration process can be controlled through environment
   variables, command-line switches, and host configuration files.  For a
   complete list of switches say `./configure --help'.  The host
   configuration files are located in the `config' directory and are
   based on the architecture name, vendor name, and/or operating system,
   which are displayed near the beginning of the `configure' output.  The
   host config file influences the behavior of configure by setting or
   augmenting shell variables.

5.3.1. Specifying the installation directories

   Typing `make install' will install the HDF5 library, header files, and
   support programs in /usr/local/lib, /usr/local/include, and
   /usr/local/bin.  To use a path other than /usr/local, specify the path
   with the `--prefix=PATH' switch:

      $ ./configure --prefix=$HOME

   If shared libraries are being built (the default) then the final home
   of the shared library must be specified with this switch before the
   library and executables are built.

5.3.2. Using an alternate C compiler

   By default, configure will look for the C compiler by trying `gcc' and
   `cc'.  However, if the environment variable "CC" is set then its value
   is used as the C compiler (users of csh and derivatives will need to
   prefix the commands below with `env').
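   For example (purely as an illustration of that note), a csh or tcsh
   user would type the first configure command below as:

      $ env CC=cc ./configure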
   For instance, to use the native C compiler on a system which also has
   the GNU gcc compiler:

      $ CC=cc ./configure

   A parallel version of HDF5 can be built by specifying `mpicc' as the C
   compiler (the `--enable-parallel' flag documented below is optional).
   Using the `mpicc' compiler will ensure that the correct MPI and MPI-IO
   header files and libraries are used.

      $ CC=/usr/local/mpi/bin/mpicc ./configure

   On Irix64 the default compiler is `cc'.  To use an alternate compiler,
   specify it with the CC variable:

      $ CC='cc -n32' ./configure

   One may also use various environment variables to change the behavior
   of the compiler.  E.g., to ask for the -n32 ABI:

      $ SGI_ABI=-n32
      $ export SGI_ABI
      $ ./configure

5.3.3. Additional compilation flags

   If additional flags must be passed to the compilation commands,
   specify those flags with the CFLAGS variable.  For instance, to enable
   symbolic debugging of a production version of HDF5 one might say:

      $ CFLAGS=-g ./configure --enable-production

5.3.4. Specifying other programs

   The build system has been tuned for use with GNU make but also works
   with other versions of make.  If the `make' command runs a non-GNU
   version but a GNU version is available under a different name (perhaps
   `gmake'), then HDF5 can be configured to use it by setting the MAKE
   variable.  Note that whatever value is used for MAKE must also be used
   as the make command when building the library:

      $ MAKE=gmake ./configure
      $ gmake

   The `AR' and `RANLIB' variables can also be set to the names of the
   `ar' and `ranlib' (or `:') commands to override values detected by
   configure.

   The HDF5 library, include files, and utilities are installed during
   `make install' (described below) with a BSD-compatible install program
   detected automatically by configure.  If none is found then the shell
   script bin/install-sh is used.  Configure doesn't check that the
   install script actually works, so if a bad install program is detected
   on your system you have two choices:

      1. Copy the bin/install-sh program to your $HOME/bin directory,
         name it `install', and make sure that $HOME/bin is searched
         before the system bin directories.

      2. Specify the full path name of the `install-sh' program as the
         value of the INSTALL environment variable.

   Note: do not use `cp' or some other program in place of install,
   because the HDF5 makefiles also use the install program to change file
   ownership and/or access permissions.

5.3.5. Specifying other libraries and headers

   Configure searches the standard places (those places known by the
   system's compiler) for include files and libraries.  However,
   additional directories can be specified by using the CPPFLAGS and/or
   LDFLAGS variables:

      $ CPPFLAGS=-I/home/robb/include \
        LDFLAGS=-L/home/robb/lib \
        ./configure

   HDF5 uses the zlib library for two purposes: it provides support for
   the HDF5 deflate data compression filter, and it is used by the h5toh4
   converter in support of HDF4.  Configure searches the standard places
   (plus those specified above with the CPPFLAGS and LDFLAGS variables)
   for the zlib headers and library.  The search can be disabled by
   specifying `--without-zlib', or alternate directories can be specified
   with `--with-zlib=INCDIR,LIBDIR' or through the CPPFLAGS and LDFLAGS
   variables:

      $ ./configure --with-zlib=/usr/unsup/include,/usr/unsup/lib

      $ CPPFLAGS=-I/usr/unsup/include \
        LDFLAGS=-L/usr/unsup/lib \
        ./configure

   The HDF5-to-HDF4 conversion tool requires the HDF4 library and header
   files, which are detected the same way as zlib.  The switch to give to
   configure is `--with-hdf4'.
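   As a sketch only (the directory names below are placeholders, not
   paths shipped with the distribution), a configuration that picks up
   both zlib and HDF4 from non-standard locations could combine the
   variables and switches described above:

      $ CPPFLAGS="-I/usr/unsup/include -I/usr/unsup/hdf4/include" \
        LDFLAGS="-L/usr/unsup/lib -L/usr/unsup/hdf4/lib" \
        ./configure --with-hdf4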
   Note that HDF5 requires a newer version of zlib than the one shipped
   with some versions of HDF4.  Also, unless you have the "correct"
   version of HDF4, the confidence testing will fail in the tools
   directory.

5.3.6. Static versus shared linking

   The build process will create static libraries on all systems and
   shared libraries on systems that support dynamic linking to a
   sufficient degree.  Either form of library may be suppressed by saying
   `--disable-static' or `--disable-shared'.

      $ ./configure --disable-shared

5.3.7. Optimization versus symbolic debugging

   The library can be compiled to provide symbolic debugging support so
   it can be debugged with gdb, dbx, ddd, etc., or it can be compiled
   with various optimizations.  To compile for symbolic debugging (the
   default for snapshots) say `--disable-production'; to compile with
   optimizations (the default for supported public releases) say
   `--enable-production'.  On some systems the library can also be
   compiled for profiling with gprof by saying
   `--enable-production=profile'.

      $ ./configure --disable-production          #symbolic debugging
      $ ./configure --enable-production           #optimized code
      $ ./configure --enable-production=profile   #for use with gprof

   Regardless of whether support for symbolic debugging is enabled, the
   library is also able to perform runtime debugging of certain packages
   (such as type conversion execution times and extensive invariant
   condition checking).  To enable this debugging, supply a
   comma-separated list of package names to the `--enable-debug' switch
   (see Debugging.html for a list of package names).  Debugging can be
   disabled by saying `--disable-debug'.  The default debugging level for
   snapshots is a subset of the available packages; the default for
   supported releases is no debugging (debugging can incur a significant
   runtime penalty).

      $ ./configure --enable-debug=s,t   #debug only H5S and H5T
      $ ./configure --enable-debug       #debug normal packages
      $ ./configure --enable-debug=all   #debug all packages
      $ ./configure --disable-debug      #no debugging

   HDF5 is also able to print a trace of all API function calls, their
   arguments, and the return values.  To enable or disable the ability to
   trace the API say `--enable-trace' (the default for snapshots) or
   `--disable-trace' (the default for public releases).  The tracing must
   also be enabled at runtime to see any output (see Debugging.html).

5.3.8. Large (>2GB) vs. small (<2GB) file capability

   In order to read or write files that could potentially be larger than
   2GB, it is necessary to use the non-ANSI `long long' data type on some
   platforms.  However, some compilers (e.g., GNU gcc versions before
   2.8.1 on Intel platforms) are unable to produce correct machine code
   for this data type.  To disable use of the `long long' type on these
   machines say:

      $ ./configure --disable-hsizet

5.3.9. Parallel vs. serial library

   The HDF5 library can be configured to use MPI and MPI-IO for
   parallelism on a distributed multi-processor system.  Read the file
   INSTALL_parallel for detailed explanations.

5.4. Building

   The library, confidence tests, and programs can be built by saying
   just

      $ make

   Note that if you supplied some other make command via the MAKE
   variable during the configuration step, then that same command must be
   used here.

   When using GNU make you can add `-j -l6' to the make command to
   compile in parallel on SMP machines.  Do not give a number after the
   `-j' since GNU make will turn it off for recursive invocations of
   make.

      $ make -j -l6
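   Putting the configuration and build steps together, a build in a
   separate build directory (see section 5.2) might look like the
   following sketch; it assumes the build directory is created next to
   the unpacked source tree, and the installation prefix is only an
   example:

      $ mkdir build
      $ cd build
      $ ../hdf5-1.2.2/configure --prefix=$HOME/hdf5
      $ make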
5.5. Testing

   HDF5 comes with various test suites, all of which can be run by saying

      $ make check

   To run only the tests for the library, change to the `test' directory
   before issuing the command.  Similarly, tests for the parallel aspects
   of the library are in `testpar' and tests for the support programs are
   in `tools'.

   Temporary files will be deleted by each test when it completes, but
   they may continue to exist in an incomplete state if the test fails.
   To prevent deletion of the files, define the HDF5_NOCLEANUP
   environment variable.

5.6. Installing

   The HDF5 library, include files, and support programs can be installed
   in a (semi-)public place by saying `make install'.  The files are
   installed under the directory specified with `--prefix=DIR' (or
   `/usr/local') in directories named `lib', `include', and `bin'.  The
   prefix directory must exist prior to `make install', but its
   subdirectories are created automatically.

   If `make install' fails because the install command at your site
   somehow fails, you may use the install-sh that comes with the source.
   You will need to run ./configure again.

      $ INSTALL="$PWD/bin/install-sh -c" ./configure ...
      $ make install

   The library can be used without installing it by pointing the compiler
   at the `src' directory for both include files and libraries.  However,
   the minimum which must be installed to make the library publicly
   available is:

      The library:
         ./src/libhdf5.a

      The public header files:
         ./src/H5*public.h

      The main header file:
         ./src/hdf5.h

      The configuration information:
         ./src/H5config.h

      The support programs that are useful are:
         ./tools/h5ls      (list file contents)
         ./tools/h5dump    (dump file contents)
         ./tools/h5repart  (repartition file families)
         ./tools/h5toh4    (hdf5 to hdf4 file converter)
         ./tools/h5debug   (low-level file debugging)
         ./tools/h5import  (a demo)

6. Using the Library

   Please see the User Manual in the doc/html directory.

   Most programs will include <hdf5.h> and link with -lhdf5.  Additional
   libraries may also be necessary depending on whether support for
   compression, etc. was compiled into the hdf5 library.

   A summary of the hdf5 installation can be found in the
   libhdf5.settings file in the same directory as the static and/or
   shared hdf5 libraries.

7. Support

   Support is described in the RELEASE file.