author    Scot Breitenfeld <brtnfld@hdfgroup.org>    2008-04-30 19:23:26 (GMT)
committer Scot Breitenfeld <brtnfld@hdfgroup.org>    2008-04-30 19:23:26 (GMT)
commit    5773fd34bc5adf59b4530d95ac9f0c0585902803 (patch)
tree      456ad239799382e1f083fb7fc74399e43b471912 /release_docs/INSTALL
parent    0138995d1ce2068db1f790503435a2121132d3ad (diff)
[svn-r14902] Merged fortran_1_8 branch changes r14505:14901 into the trunk. New fortran wrappers added.
Diffstat (limited to 'release_docs/INSTALL')
-rw-r--r--    release_docs/INSTALL    1315
1 file changed, 628 insertions, 687 deletions
diff --git a/release_docs/INSTALL b/release_docs/INSTALL
index 4a9a567..1444bf2 100644
--- a/release_docs/INSTALL
+++ b/release_docs/INSTALL
@@ -1,733 +1,674 @@
-Instructions for the Installation of HDF5 Software
-==================================================
-
-This file provides instructions for installing the HDF5 software.
-If you have any problems with the installation, please see The HDF Group's
-support page at the following location:
-
- http://www.hdfgroup.org/services/support.html
-
-CONTENTS
---------
- 1. Obtaining HDF5
-
- 2. Quick installation
- 2.1. Windows
- 2.2. RedStorm (Cray XT3)
-
- 3. HDF5 dependencies
- 3.1. Zlib
- 3.2. Szip (optional)
- 3.3. MPI and MPI-IO
-
- 4. Full installation instructions for source distributions
- 4.1. Unpacking the distribution
- 4.1.1. Non-compressed tar archive (*.tar)
- 4.1.2. Compressed tar archive (*.tar.Z)
- 4.1.3. Gzip'd tar archive (*.tar.gz)
- 4.1.4. Bzip'd tar archive (*.tar.bz2)
- 4.2. Source versus build directories
- 4.3. Configuring
- 4.3.1. Specifying the installation directories
- 4.3.2. Using an alternate C compiler
- 4.3.3. Configuring for 64-bit support
- 4.3.4. Additional compilation flags
- 4.3.5. Compiling HDF5 wrapper libraries
- 4.3.6. Specifying other programs
- 4.3.7. Specifying other libraries and headers
- 4.3.8. Static versus shared linking
- 4.3.9. Optimization versus symbolic debugging
- 4.3.10. Parallel versus serial library
- 4.3.11. Threadsafe capability
- 4.3.12. Backward compatibility
- 4.4. Building
- 4.5. Testing
- 4.6. Installing HDF5
-
- 5. Using the Library
-
- 6. Support
-
- A. Warnings about compilers
- A.1. GNU (Intel platforms)
- A.2. DEC
- A.3. SGI (Irix64 6.2)
- A.4. Windows/NT
-
- B. Large (>2GB) versus small (<2GB) file capability
-
- C. Building and testing with other compilers
- C.1. Building and testing with Intel compilers
- C.2. Building and testing with PGI compilers
+ Instructions for the Installation of HDF5 Software
+ ==================================================
+
+WARNING: This file was not updated for the 1.8.0-beta* releases. If you have
+ any problems with the HDF5 installation, please contact help@hdfgroup.org.
+
+ CONTENTS
+ --------
+ 1. Obtaining HDF5
+
+ 2. Warnings about compilers
+ 2.1. GNU (Intel platforms)
+ 2.2. DEC
+ 2.3. SGI (Irix64 6.2)
+ 2.4. Windows/NT
+
+ 3. Quick installation
+ 3.1. Windows
+ 3.2. RedStorm (Cray XT3)
+
+ 4. HDF5 dependencies
+ 4.1. Zlib
+ 4.2. Szip
+ 4.3. MPI and MPI-IO
+
+ 5. Full installation instructions for source distributions
+ 5.1. Unpacking the distribution
+ 5.1.1. Non-compressed tar archive (*.tar)
+ 5.1.2. Compressed tar archive (*.tar.Z)
+ 5.1.3. Gzip'd tar archive (*.tar.gz)
+ 5.1.4. Bzip'd tar archive (*.tar.bz2)
+ 5.2. Source vs. Build Directories
+ 5.3. Configuring
+ 5.3.1. Specifying the installation directories
+ 5.3.2. Using an alternate C compiler
+ 5.3.3. Configuring for 64-bit support
+ 5.3.4. Additional compilation flags
+ 5.3.5. Compiling HDF5 wrapper libraries
+ 5.3.6. Specifying other programs
+ 5.3.7. Specifying other libraries and headers
+ 5.3.8. Static versus shared linking
+ 5.3.9. Optimization versus symbolic debugging
+ 5.3.10. Large (>2GB) vs. small (<2GB) file capability
+ 5.3.11. Parallel vs. serial library
+ 5.3.12. Threadsafe capability
+ 5.3.13. Backward compatibility
+ 5.3.14. Network stream capability
+ 5.4. Building
+ 5.5. Testing
+ 5.6. Installing
+ 5.7 Building and testing with Intel compilers
+ 5.8 Building and testing with PGI compilers
+
+ 6. Using the Library
+
+ 7. Support
*****************************************************************************
1. Obtaining HDF5
- The latest supported public release of HDF5 is available from
- ftp://ftp.hdfgroup.org/HDF5/current/src. For Unix and UNIX-like
- platforms, it is available in tar format compressed with gzip.
- For Microsoft Windows, it is in ZIP format.
-
- The HDF team also makes snapshots of the source code available on
- a regular basis. These snapshots are unsupported (that is, the
- HDF team will not release a bug-fix on a particular snapshot;
- rather any bug fixes will be rolled into the next snapshot).
- Furthermore, the snapshots have only been tested on a few
- machines and may not test correctly for parallel applications.
- Snapshots, in a limited number of formats, can be found on THG's
- development FTP server:
-
- ftp://ftp.hdfgroup.uiuc.edu/pub/outgoing/hdf5/snapshots
-
-
-2. Quick installation
- For those who don't like to read ;-) the following steps can be used
- to configure, build, test, and install the HDF5 Library, header files,
- and support programs. For example, to install HDF5 version X.Y.Z at
- location /usr/local/hdf5, use the following steps.
-
- $ gunzip < hdf5-X.Y.Z.tar.gz | tar xf -
- $ cd hdf5-X.Y.Z
- $ ./configure --prefix=/usr/local/hdf5 <more configure_flags>
- $ make
- $ make check # run test suite.
- $ make install
- $ make check-install # verify installation.
-
- Some versions of the tar command support the -z option. In such cases,
- the first step above can be simplified to the following:
-
- $ tar zxf hdf5-X.Y.Z.tar.gz
-
- <configure_flags> above refers to the configure flags appropriate
- to your installation. For example, to install HDF5 with the
- Fortran and C++ interfaces and with SZIP compression, the
- configure line might read as follows:
-
- $ ./configure --prefix=/usr/local/hdf5 --enable-fortran \
- --enable-cxx --with-szlib=PATH_TO_SZIP
-
- In this case, PATH_TO_SZIP would be replaced with the path to the
- installed location of the SZIP library.
-
-2.1. Windows
- Users of Microsoft Windows should see the INSTALL_Windows files for
- detailed instructions.
-
-2.2. RedStorm (Cray XT3)
- Users of the Red Storm machine, after reading this file, should read
- the Red Storm section in the INSTALL_parallel file for specific
- instructions for the Red Storm machine. The same instructions would
- probably work for other Cray XT3 systems, but they have not been
- verified.
-
-
-3. HDF5 dependencies
-3.1. Zlib
- The HDF5 Library includes a predefined compression filter that
- uses the "deflate" method for chunked datasets. If zlib-1.1.2 or
- later is found, HDF5 will use it. Otherwise, HDF5's predefined
- compression method will degenerate to a no-op; the compression
- filter will succeed but the data will not be compressed.
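-
- For example, to point configure at a zlib installation outside the
- standard search path (the path below is illustrative; see section
- 4.3.7, "Specifying other libraries and headers," for details):
-
- $ ./configure --with-zlib=/PATH_TO_ZLIB/include,/PATH_TO_ZLIB/lib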
-
-3.2. Szip (optional)
- The HDF5 Library includes a predefined compression filter that
- uses the extended-Rice lossless compression algorithm for chunked
- datasets. For more information about Szip compression and license
- terms, see http://hdfgroup.org/doc_resource/SZIP/.
-
- Precompiled Szip binaries for each supported platform and a source
- tar file can be found at ftp://ftp.hdfgroup.org/lib-external/szip/.
-
- To configure the HDF5 Library with the Szip compression filter, use
- the '--with-szlib=/PATH_TO_SZIP' flag. For more information, see
- section 4.3.7, "Specifying other libraries and headers."
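-
- For example (PATH_TO_SZIP below stands for the directory where Szip
- is installed on your system):
-
- $ ./configure --with-szlib=/PATH_TO_SZIP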
-
- Starting with release 1.6.3, Szip library binaries are distributed
- with the encoder enabled (a license may be required to use this binary)
- and with the encoder disabled (freely usable without a license).
- If the encoder enabled binary is used, Szip compression encoding is
- available for an HDF5 application; if the encoder disabled binary is
- used, Szip compression is not available. Szip decoding is always
- available for applications (i.e., an HDF5 application can always read
- Szip-compressed data) if the Szip filter is present, regardless of the
- binary used.
-
-3.3. MPI and MPI-IO
- The parallel version of the library is built upon the foundation
- provided by MPI and MPI-IO. If these libraries are not available
- when HDF5 is configured, only a serial version of HDF5 can be built.
-
-
-4. Full installation instructions for source distributions
-
-4.1. Unpacking the distribution
- The HDF5 source code is distributed in a variety of formats which
- can be unpacked with the following commands, each of which creates an
- 'hdf5-X.Y.Z' directory, where X.Y.Z is the HDF5 version number.
-
-4.1.1. Non-compressed tar archive (*.tar)
-
- $ tar xf hdf5-X.Y.Z.tar
-
-4.1.2. Compressed tar archive (*.tar.Z)
-
- $ uncompress -c < hdf5-X.Y.Z.tar.Z | tar xf -
- Or
- $ tar Zxf hdf5-X.Y.Z.tar.Z
-
-4.1.3. Gzip'd tar archive (*.tar.gz)
-
- $ gunzip < hdf5-X.Y.Z.tar.gz | tar xf -
- Or
- $ tar zxf hdf5-X.Y.Z.tar.gz
-
-4.1.4. Bzip'd tar archive (*.tar.bz2)
-
- $ bunzip2 < hdf5-X.Y.Z.tar.bz2 | tar xf -
- Or
- $ tar jxf hdf5-X.Y.Z.tar.bz2
-
-4.2. Source versus build directories
- On most systems the build can occur in a directory other than the
- source directory, allowing multiple concurrent builds and/or
- read-only source code. In order to accomplish this, one should
- create a build directory, cd into that directory, and run the
- `configure' script found in the source directory (configure
- details are below). For example,
- $ mkdir build-fortran
- $ cd build-fortran
- $ ../hdf5-X.Y.Z/configure --enable-fortran ...
-
- Unfortunately, this does not work on recent Irix platforms (6.5?
- and later) because that `make' does not understand the VPATH variable.
- However, HDF5 also supports Irix `pmake' which has a .PATH target
- which serves a similar purpose. Here's what the Irix man pages say
- about VPATH, the facility used by HDF5 makefiles for this feature:
-
- The VPATH facility is a derivation of the undocumented
- VPATH feature in the System V Release 3 version of make.
- System V Release 4 has a new VPATH implementation, much
- like the pmake(1) .PATH feature. This new feature is also
- undocumented in the standard System V Release 4 manual
- pages. For this reason it is not available in the IRIX
- version of make. The VPATH facility should not be used
- with the new parallel make option.
-
-4.3. Configuring
- HDF5 uses the GNU autoconf system for configuration, which
- detects various features of the host system and creates the
- Makefiles. On most systems it should be sufficient to say:
-
- $ ./configure
- Or
- $ sh configure
-
- The configuration process can be controlled through environment
- variables, command-line switches, and host configuration files.
- For a complete list of switches type:
-
- $ ./configure --help
-
- The host configuration files are located in the `config'
- directory and are based on architecture name, vendor name, and/or
- operating system which are displayed near the beginning of the
- `configure' output. The host config file influences the behavior
- of configure by setting or augmenting shell variables.
-
-4.3.1. Specifying the installation directories
- The default installation location is the HDF5 directory created in
- the build directory. Typing `make install' will install the HDF5
- Library, header files, examples, and support programs in hdf5/lib,
- hdf5/include, hdf5/doc/hdf5/examples, and hdf5/bin. To use a path
- other than hdf5, specify the path with the `--prefix=PATH' switch:
-
- $ ./configure --prefix=/usr/local
-
- If shared libraries are being built (the default), the final
- home of the shared library must be specified with this switch
- before the library and executables are built.
-
- HDF5 can be installed into a different location than the prefix
- specified at configure time; see section 4.6, "Installing HDF5,"
- for more details.
-
-4.3.2. Using an alternate C compiler
- By default, configure will look for the C compiler by trying
- `gcc' and `cc'. However, if the environment variable "CC" is set
- then its value is used as the C compiler. For instance, one would
- use the following line to specify the native C compiler on a system
- that also has the GNU gcc compiler (users of csh and derivatives
- will need to prefix the commands below with `env'):
-
- $ CC=cc ./configure
-
- A parallel version of HDF5 can be built by specifying `mpicc'
- as the C compiler. (The `--enable-parallel' flag documented
- below is optional in this case.) Using the `mpicc' compiler
- will insure that the correct MPI and MPI-IO header files and
- libraries are used.
-
- $ CC=/usr/local/mpi/bin/mpicc ./configure
-
-4.3.3. Configuring for 64-bit support
- Several machine architectures support 32-bit or 64-bit binaries.
- The options below describe how to enable 64-bit support on several platforms.
-
- On Irix64, the default compiler is `cc'. To use an alternate compiler,
- specify it with the CC variable:
-
- $ CC='cc -n32' ./configure
-
- Similarly, users compiling on a Solaris machine and desiring to
- build the distribution with 64-bit support should specify the
- correct flags with the CC variable:
-
- $ CC='cc -xarch=v9' ./configure
-
- To configure AIX 64-bit support including the Fortran and C++ APIs,
- (Note: need to set $AR to 'ar -X 64'.)
- Serial:
- $ CFLAGS=-q64 FFLAGS=-q64 CXXFLAGS=-q64 AR='ar -X 64'\
- ./configure --enable-fortran
- Parallel: (C++ not supported with parallel)
- $ CFLAGS=-q64 FFLAGS=-q64 AR='ar -X 64'\
- ./configure --enable-fortran
-
-4.3.4. Additional compilation flags
- If additional flags must be passed to the compilation commands,
- specify those flags with the CFLAGS variable. For instance,
- to enable symbolic debugging of a production version of HDF5, one
- might say:
-
- $ CFLAGS=-g ./configure --enable-production
-
-4.3.5. Compiling HDF5 wrapper libraries
- One can optionally build the Fortran and/or C++ interfaces to the
- HDF5 C library. By default, both options are disabled. To build
- them, specify `--enable-fortran' and `--enable-cxx', respectively.
-
- $ ./configure --enable-fortran
- $ ./configure --enable-cxx
-
- Configuration will halt if a working Fortran 90 or 95 compiler or
- C++ compiler is not found. Currently, the Fortran configure tests
- for these compilers in order: f90, pgf90, f95. To use an
- alternate compiler specify it with the FC variable:
-
- $ FC=/usr/local/bin/g95 ./configure --enable-fortran
-
- Note: The Fortran and C++ interfaces are not supported on all the
- platforms the main HDF5 Library supports. Also, the Fortran
- interface supports parallel HDF5 while the C++ interface does
- not.
-
- Note: See sections C.1 and C.2 for building the Fortran library with
- Intel or PGI compilers.
-
-4.3.6. Specifying other programs
- The build system has been tuned for use with GNU make but also
- works with other versions of make. If the `make' command runs a
- non-GNU version but a GNU version is available under a different
- name (perhaps `gmake'), then HDF5 can be configured to use it by
- setting the MAKE variable. Note that whatever value is used for
- MAKE must also be used as the make command when building the
- library:
-
- $ MAKE=gmake ./configure
- $ gmake
-
- The `AR' and `RANLIB' variables can also be set to the names of
- the `ar' and `ranlib' (or `:') commands to override values
- detected by configure.
-
- The HDF5 Library, include files, and utilities are installed
- during `make install' (described below) with a BSD-compatible
- install program detected automatically by configure. If none is
- found, the shell script bin/install-sh is used. Configure does not
- check that the install script actually works; if a bad install is
- detected on your system (e.g., on the ASCI blue machine as of
- March 2, 1999) you have two choices:
-
- 1. Copy the bin/install-sh program to your $HOME/bin
- directory, name it `install', and make sure that $HOME/bin
- is searched before the system bin directories.
-
- 2. Specify the full path name of the `install-sh' program
- as the value of the INSTALL environment variable. Note: do
- not use `cp' or some other program in place of install
- because the HDF5 makefiles also use the install program to
- change file ownership and/or access permissions.
-
-4.3.7. Specifying other libraries and headers
- Configure searches the standard places (those places known by the
- systems compiler) for include files and header files. However,
- additional directories can be specified by using the CPPFLAGS
- and/or LDFLAGS variables:
-
- $ CPPFLAGS=-I/home/robb/include \
+ The latest supported public release of HDF5 is available from
+ ftp://hdf.ncsa.uiuc.edu/HDF5/current/src. For Unix platforms, it is
+ available in tar format compressed with gzip. For Microsoft Windows,
+ it is in ZIP format.
+
+ The HDF team also makes snapshots of the source code available on
+ a regular basis. These snapshots are unsupported (that is, the
+ HDF team will not release a bug-fix on a particular snapshot;
+ rather any bug fixes will be rolled into the next snapshot).
+ Furthermore, the snapshots have only been tested on a few
+ machines and may not test correctly for parallel applications.
+ Snapshots can be found at
+ ftp://hdf.ncsa.uiuc.edu/pub/outgoing/hdf5/snapshots in a limited
+ number of formats.
+
+
+2. Warnings about compilers
+ OUTPUT FROM THE FOLLOWING COMPILERS SHOULD BE EXTREMELY SUSPECT
+ WHEN USED TO COMPILE THE HDF5 LIBRARY, ESPECIALLY IF
+ OPTIMIZATIONS ARE ENABLED. IN ALL CASES, HDF5 ATTEMPTS TO WORK
+ AROUND THE COMPILER BUGS, BUT THE HDF5 DEVELOPMENT TEAM MAKES NO
+ GUARANTEE THAT THERE ARE NO OTHER CODE GENERATION PROBLEMS.
+
+2.1. GNU (Intel platforms)
+ Versions before 2.8.1 have serious problems allocating registers
+ when functions contain operations on `long long' data types.
+ Supplying the `--disable-hsizet' switch to configure (documented
+ below) will prevent hdf5 from using `long long' data types in
+ situations that are known not to work, but it limits the hdf5
+ address space to 2GB.
+
+2.2. COMPAQ/DEC
+ The V5.2-038 compiler (and possibly others) occasionally
+ generates incorrect code for memcpy() calls when optimizations
+ are enabled, resulting in unaligned access faults. HDF5 works
+ around the problem by casting the second argument to `char *'.
+ The Fortran module (5.4.1a) fails to compile some Fortran
+ programs; use version 5.5.0 or later.
+
+2.3. SGI (Irix64 6.2)
+ The Mongoose 7.00 compiler has serious optimization bugs and
+ should be upgraded to MIPSpro 7.2.1.2m. Patches are available
+ from SGI.
+
+2.4. Windows/NT
+ The Microsoft Win32 5.0 compiler is unable to cast unsigned long
+ long values to doubles. HDF5 works around this bug by first
+ casting to signed long long and then to double.
+
+ A link warning (defaultlib "LIBC" conflicts with use of other libs)
+ appears for the debug version of VC++ 6.0. This warning does not
+ affect building and testing the hdf5 libraries.
+
+
+3. Quick installation
+ For those who don't like to read ;-) the following steps can be
+ used to configure, build, test, and install the HDF5 library,
+ header files, and support programs.
+
+ $ gunzip < hdf5-1.6.0.tar.gz | tar xf -
+ $ cd hdf5-1.6.0
+ $ ./configure --prefix=/usr/local/hdf5
+ $ make
+ $ make check
+ $ make install
+
+3.1. Windows
+ Users of Microsoft Windows should see the INSTALL_Windows for
+ detailed instructions.
+
+3.2. RedStorm (Cray XT3)
+ Users of the Red Storm machine, after reading this file, should read
+ the Red Storm section in the INSTALL_parallel file for specific
+ instructions for the Red Storm machine. The same instructions would
+ probably work for other Cray XT3 systems but they have not been
+ verified.
+
+
+4. HDF5 dependencies
+4.1. Zlib
+ The HDF5 library has a predefined compression filter that uses
+ the "deflate" method for chunked datatsets. If zlib-1.1.2 or
+ later is found then HDF5 will use it, otherwise HDF5's predefined
+ compression method will degenerate to a no-op (the compression
+ filter will succeed but the data will not be compressed).
+
+4.2. Szip
+ The HDF5 library has a predefined compression filter that uses
+ the extended-Rice lossless compression algorithm for chunked
+ datasets. For more information about Szip compression and license
+ terms, see http://hdf.ncsa.uiuc.edu/HDF5/doc_resource/SZIP/index.html.
+ Precompiled Szip binaries for each supported platform and a source
+ tar file can be found at ftp://ftp.ncsa.uiuc.edu/HDF/HDF5/current/
+
+4.3. MPI and MPI-IO
+ The parallel version of the library is built upon the foundation
+ provided by MPI and MPI-IO. If these libraries are not available
+ when HDF5 is configured then only a serial version of HDF5 can be
+ built.
+
+
+5. Full installation instructions for source distributions
+5.1. Unpacking the distribution
+ The HDF5 source code is distributed in a variety of formats which
+ can be unpacked with the following commands, each of which
+ creates an `hdf5-1.6.0' directory.
+
+5.1.1. Non-compressed tar archive (*.tar)
+
+ $ tar xf hdf5-1.6.0.tar
+
+5.1.2. Compressed tar archive (*.tar.Z)
+
+ $ uncompress -c < hdf5-1.6.0.tar.Z | tar xf -
+
+5.1.3. Gzip'd tar archive (*.tar.gz)
+
+ $ gunzip < hdf5-1.6.0.tar.gz | tar xf -
+
+5.1.4. Bzip'd tar archive (*.tar.bz2)
+
+ $ bunzip2 < hdf5-1.6.0.tar.bz2 | tar xf -
+
+5.2. Source vs. Build Directories
+ On most systems the build can occur in a directory other than the
+ source directory, allowing multiple concurrent builds and/or
+ read-only source code. In order to accomplish this, one should
+ create a build directory, cd into that directory, and run the
+ `configure' script found in the source directory (configure
+ details are below).
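+
+ For example (a sketch; the build directory name is arbitrary):
+
+ $ mkdir build
+ $ cd build
+ $ ../hdf5-1.6.0/configure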
+
+ Unfortunately, this does not work on recent Irix platforms (6.5?
+ and later) because that `make' doesn't understand the VPATH
+ variable. However, hdf5 also supports Irix `pmake' which has a
+ .PATH target which serves a similar purpose. Here's what the man
+ pages say about VPATH, which is the facility used by HDF5
+ makefiles for this feature:
+
+ The VPATH facility is a derivation of the undocumented
+ VPATH feature in the System V Release 3 version of make.
+ System V Release 4 has a new VPATH implementation, much
+ like the pmake(1) .PATH feature. This new feature is also
+ undocumented in the standard System V Release 4 manual
+ pages. For this reason it is not available in the IRIX
+ version of make. The VPATH facility should not be used
+ with the new parallel make option.
+
+5.3. Configuring
+ HDF5 uses the GNU autoconf system for configuration, which
+ detects various features of the host system and creates the
+ Makefiles. On most systems it should be sufficient to say:
+
+ $ ./configure
+ Or
+ $ sh configure
+
+ The configuration process can be controlled through environment
+ variables, command-line switches, and host configuration files.
+ For a complete list of switches type:
+
+ $ ./configure --help
+
+ The host configuration files are located in the `config'
+ directory and are based on architecture name, vendor name, and/or
+ operating system which are displayed near the beginning of the
+ `configure' output. The host config file influences the behavior
+ of configure by setting or augmenting shell variables.
+
+5.3.1. Specifying the installation directories
+ Typing `make install' will install the HDF5 library, header
+ files, examples, and support programs in /usr/local/lib,
+ /usr/local/include, /usr/local/doc/hdf5/examples, and
+ /usr/local/bin. To use a path other than
+ /usr/local specify the path with the `--prefix=PATH' switch:
+
+ $ ./configure --prefix=$HOME
+
+ If shared libraries are being built (the default) then the final
+ home of the shared library must be specified with this switch
+ before the library and executables are built.
+
+ HDF5 can be installed into a different location than the prefix
+ specified at configure time; see the section on Installing HDF5
+ for more details.
+
+5.3.2. Using an alternate C compiler
+ By default, configure will look for the C compiler by trying
+ `gcc' and `cc'. However, if the environment variable "CC" is set
+ then its value is used as the C compiler (users of csh and
+ derivatives will need to prefix the commands below with `env').
+ For instance, to use the native C compiler on a system which also
+ has the GNU gcc compiler:
+
+ $ CC=cc ./configure
+
+ A parallel version of hdf5 can be built by specifying `mpicc'
+ as the C compiler (the `--enable-parallel' flag documented
+ below is optional in this case). Using the `mpicc' compiler
+ will insure that the correct MPI and MPI-IO header files and
+ libraries are used.
+
+ $ CC=/usr/local/mpi/bin/mpicc ./configure
+
+5.3.3. Configuring for 64-bit support
+ Several machine architectures support 32-bit or 64-bit binaries.
+ The options below describe how to enable 64-bit support on several platforms.
+
+ On Irix64 the default compiler is `cc'. To use an alternate
+ compiler specify it with the CC variable:
+
+ $ CC='cc -n32' ./configure
+
+ Similarly, users compiling on a Solaris machine and desiring to
+ build the distribution with 64-bit support should specify the
+ correct flags with the CC variable:
+
+ $ CC='cc -xarch=v9' ./configure
+
+ To configure AIX 64-bit support including the Fortran and C++ APIs,
+ (Note: need to set $AR to 'ar -X 64'.)
+ Serial:
+ $ CFLAGS=-q64 FFLAGS=-q64 CXXFLAGS=-q64 AR='ar -X 64'\
+ ./configure --enable-fortran
+ Parallel: (C++ not supported with parallel)
+ $ CFLAGS=-q64 FFLAGS=-q64 AR='ar -X 64'\
+ ./configure --enable-fortran
+
+5.3.4. Additional compilation flags
+ If additional flags must be passed to the compilation commands
+ then specify those flags with the CFLAGS variable. For instance,
+ to enable symbolic debugging of a production version of HDF5 one
+ might say:
+
+ $ CFLAGS=-g ./configure --enable-production
+
+5.3.5. Compiling HDF5 wrapper libraries
+ One can optionally build the Fortran and/or C++ interface to the
+ HDF5 C library. By default, both options are disabled. To build
+ them, specify `--enable-fortran' and `--enable-cxx' respectively.
+
+ $ ./configure --enable-fortran
+ $ ./configure --enable-cxx
+
+ Configuration will halt if a working Fortran 90 or 95 compiler or
+ C++ compiler is not found. Currently, the Fortran configure tests
+ for these compilers in order: f90, pgf90, f95. To use an
+ alternative compiler specify it with the F9X variable:
+
+ $ F9X=/usr/local/bin/g95 ./configure --enable-fortran
+
+ Note: The Fortran and C++ interfaces are not supported on all the
+ platforms the main HDF5 library supports. Also, the Fortran
+ interface supports parallel HDF5 while the C++ interface does
+ not.
+
+ Note: On Cray T3Es the following files should be modified before
+ building the Fortran Library:
+ fortran/src/H5Dff.f90
+ fortran/src/H5Aff.f90
+ fortran/src/H5Pff.f90
+ Check for "Comment if on T3E ..." comment and comment out
+ specified lines or use a patch from HDF FTP server
+ ftp://ftp.ncsa.uiuc.edu/HDF/HDF5/current/
+
+ Note: See sections 5.7 and 5.8 for how to build the Fortran library
+ with PGI or Intel compilers.
+
+5.3.6. Specifying other programs
+ The build system has been tuned for use with GNU make but works
+ also with other versions of make. If the `make' command runs a
+ non-GNU version but a GNU version is available under a different
+ name (perhaps `gmake') then HDF5 can be configured to use it by
+ setting the MAKE variable. Note that whatever value is used for
+ MAKE must also be used as the make command when building the
+ library:
+
+ $ MAKE=gmake ./configure
+ $ gmake
+
+ The `AR' and `RANLIB' variables can also be set to the names of
+ the `ar' and `ranlib' (or `:') commands to override values
+ detected by configure.
+
+ The HDF5 library, include files, and utilities are installed
+ during `make install' (described below) with a BSD-compatible
+ install program detected automatically by configure. If none is
+ found then the shell script bin/install-sh is used. Configure
+ doesn't check that the install script actually works, but if a
+ bad install is detected on your system (e.g., on the ASCI blue
+ machine as of March 2, 1999) you have two choices:
+
+ 1. Copy the bin/install-sh program to your $HOME/bin
+ directory, name it `install', and make sure that $HOME/bin
+ is searched before the system bin directories.
+
+ 2. Specify the full path name of the `install-sh' program
+ as the value of the INSTALL environment variable. Note: do
+ not use `cp' or some other program in place of install
+ because the HDF5 makefiles also use the install program to
+ change file ownership and/or access permissions.
+
+5.3.7. Specifying other libraries and headers
+ Configure searches the standard places (those places known by the
+ systems compiler) for include files and header files. However,
+ additional directories can be specified by using the CPPFLAGS
+ and/or LDFLAGS variables:
+
+ $ CPPFLAGS=-I/home/robb/include \
LDFLAGS=-L/home/robb/lib \
- ./configure
-
- HDF5 uses the zlib library for two purposes: it provides support
- for the HDF5 deflate data compression filter, and it is used by
- the h5toh4 converter and the h4toh5 converter in support of
- HDF4. Configure searches the standard places (plus those specified
- above with the CPPFLAGS and LDFLAGS variables) for the zlib
- headers and library. The search can be disabled by specifying
- `--without-zlib' or alternate directories can be specified with
- `--with-zlib=INCDIR,LIBDIR' or through the CPPFLAGS and LDFLAGS
- variables:
-
- $ ./configure --with-zlib=/usr/unsup/include,/usr/unsup/lib
-
- $ CPPFLAGS=-I/usr/unsup/include \
- LDFLAGS=-L/usr/unsup/lib \
- ./configure
-
- The HDF5-to-HDF4 and HDF4-to-HDF5 conversion tool requires the
- HDF4 library and header files, which are detected the same way as
- zlib. The switch to give to configure is `--with-hdf4'. Note
- that HDF5 requires a newer version of zlib than the one shipped
- with some versions of HDF4. Also, unless you have the "correct"
- version of HDF4, the confidence testing will fail in the tools
- directory.
-
- HDF5 includes Szip as a predefined compression method (see 3.2).
- To enable Szip compression, the HDF5 Library must be configured
- and built using the Szip Library:
-
- $ ./configure --with-szlib=/Szip_Install_Directory
-
-4.3.8. Static versus shared linking
- The build process will create static libraries on all systems and
- shared libraries on systems that support dynamic linking to a
- sufficient degree. Either form of the library may be suppressed by
- saying `--disable-static' or `--disable-shared'.
-
- $ ./configure --disable-shared
-
- Shared C++ and Fortran libraries will be built if shared libraries
- are enabled.
-
- To build only statically linked executables on platforms which
- support shared libraries, use the `--enable-static-exec' flag.
-
- $ ./configure --enable-static-exec
-
-4.3.9. Optimization versus symbolic debugging
- The library can be compiled to provide symbolic debugging support
- so it can be debugged with gdb, dbx, ddd, etc., or it can be
- compiled with various optimizations. To compile for symbolic
- debugging (the default for snapshots), say `--disable-production';
- to compile with optimizations (the default for supported public
- releases), say `--enable-production'. On some systems the library
- can also be compiled for profiling with gprof by saying
- `--enable-production=profile'.
-
- $ ./configure --disable-production #symbolic debugging
- $ ./configure --enable-production #optimized code
- $ ./configure --enable-production=profile #for use with gprof
-
- Regardless of whether support for symbolic debugging is enabled,
- the library can also perform runtime debugging of certain packages
- (such as type conversion execution times and extensive invariant
- condition checking). To enable this debugging, supply a
- comma-separated list of package names to the `--enable-debug'
- switch. See "Debugging HDF5 Applications" for a list of package
- names:
-
- http://www.hdfgroup.org/HDF5/doc/H5.user/Debugging.html
-
- Debugging can be disabled by saying `--disable-debug'.
- The default debugging level for snapshots is a subset of the
- available packages; the default for supported releases is no
- debugging (debugging can incur a significant runtime penalty).
-
- $ ./configure --enable-debug=s,t #debug only H5S and H5T
- $ ./configure --enable-debug #debug normal packages
- $ ./configure --enable-debug=all #debug all packages
- $ ./configure --disable-debug #no debugging
-
- HDF5 can also print a trace of all API function calls, their
- arguments, and the return values. To enable or disable the
- ability to trace the API say `--enable-trace' (the default for
- snapshots) or `--disable-trace' (the default for public releases).
- The tracing must also be enabled at runtime to see any output
- (see "Debugging HDF5 Applications," reference above).
-
-4.3.10. Parallel versus serial library
- The HDF5 Library can be configured to use MPI and MPI-IO for
- parallelism on a distributed multi-processor system. Read the
- file INSTALL_parallel for detailed explanations.
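-
- A minimal sketch, assuming an MPI compiler wrapper named mpicc is on
- your path (see section 4.3.2 and INSTALL_parallel for details):
-
- $ CC=mpicc ./configure --enable-parallel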
-
-4.3.11. Threadsafe capability
- The HDF5 Library can be configured to be thread-safe (on a very
- large scale) with the `--enable-threadsafe' flag to the configure
- script. Some platforms may also require the '-with-pthread=INC,LIB'
- (or '--with-pthread=DIR') flag to the configure script.
- For further details, see "HDF5 Thread Safe Library":
-
- http://www.hdfgroup.org/HDF5/doc/TechNotes/ThreadSafeLibrary.html
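-
- For example (the Pthread installation directory below is a
- placeholder for the location on your system):
-
- $ ./configure --enable-threadsafe --with-pthread=/PATH_TO_PTHREAD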
-
-4.3.12. Backward compatibility
- The 1.8 version of the HDF5 Library can be configured to operate
- identically to the v1.6 library with the
- --with-default-api-version=v16
- configure flag. This allows existing code to be compiled with the
- v1.8 library without requiring immediate changes to the application
- source code. For additional configuration options and other details,
- see "API Compatibility Macros in HDF5":
-
- http://www.hdfgroup.org/HDF5/doc/RM/APICompatMacros.html
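-
- For example, to build the v1.8 library with the v1.6 API as the
- default:
-
- $ ./configure --with-default-api-version=v16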
-
-4.4. Building
- The library, confidence tests, and programs can be built by
- saying just:
-
- $ make
-
- Note that if you have supplied some other make command via the MAKE
- variable during the configuration step, that same command must be
- used here.
-
- When using GNU make, you can add `-j -l6' to the make command to
- compile in parallel on SMP machines. Do not give a number after
- the `-j' since GNU make will turn it off for recursive invocations
- of make.
-
- $ make -j -l6
-
-4.5. Testing
- HDF5 comes with various test suites, all of which can be run by
- saying
-
- $ make check
-
- To run only the tests for the library, change to the `test'
- directory before issuing the command. Similarly, tests for the
- parallel aspects of the library are in `testpar' and tests for
- the support programs are in `tools'.
-
- The `check' consists of two sub-tests, check-s and check-p, which
- are for serial and parallel tests, respectively. Since serial tests
- and parallel tests must be run with single and multiple processes
- respectively, the two sub-tests work nicely for batch systems in
- which the number of processes is fixed per batch job. One may submit
- one batch job, requesting 1 process, to run all the serial tests by
- "make check-s"; and submit another batch job, requesting multiple
- processes, to run all the parallel tests by "make check-p".
-
- Temporary files will be deleted by each test when it completes,
- but may continue to exist in an incomplete state if the test
- fails. To prevent deletion of the files, define the HDF5_NOCLEANUP
- environment variable.
-
- The HDF5 tests can take a long time to run on some systems. To perform
- a faster (but less thorough) test, set the HDF5TestExpress environment
- variable to 2 or 3 (with 3 being the shortest run). To perform a
- longer test, set HDF5TestExpress to 0. 1 is the default.
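-
- For example (HDF5TestExpress is read from the environment, so it
- must be set before make is invoked):
-
- $ make check-s # serial tests only
- $ make check-p # parallel tests only
- $ HDF5TestExpress=2 make check # shorter, less thorough run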
-
-4.6. Installing HDF5
- The HDF5 Library, include files, and support programs can be
- installed in a (semi-)public place by saying `make install'. The
- files are installed under the directory specified with
- `--prefix=DIR' (default is 'hdf5') in directories named `lib',
- `include', and `bin'. The directories, if not existing, will be
- created automatically, provided the mkdir command supports the -p
- option.
-
- If `make install' fails because the install command at your site
- somehow fails, you may use the install-sh that comes with the
- source. You will need to run ./configure again.
+ ./configure
+
+ HDF5 uses the zlib library for two purposes: it provides support
+ for the HDF5 deflate data compression filter, and it is used by
+ the h5toh4 converter and the h4toh5 converter in support of
+ HDF4. Configure searches the standard places (plus those
+ specified above with CPPFLAGS and LDFLAGS variables) for the zlib
+ headers and library. The search can be disabled by specifying
+ `--without-zlib' or alternate directories can be specified with
+ `--with-zlib=INCDIR,LIBDIR' or through the CPPFLAGS and LDFLAGS
+ variables:
+
+ $ ./configure --with-zlib=/usr/unsup/include,/usr/unsup/lib
+
+ $ CPPFLAGS=-I/usr/unsup/include \
+ LDFLAGS=-L/usr/unsup/lib \
+ ./configure
+
+ The HDF5-to-HDF4 and HDF4-to-HDF5 conversion tool requires the
+ HDF4 library and header files which are detected the same way as
+ zlib. The switch to give to configure is `--with-hdf4'. Note
+ that HDF5 requires a newer version of zlib than the one shipped
+ with some versions of HDF4. Also, unless you have the "correct"
+ version of hdf4 the confidence testing will fail in the tools
+ directory.
+
+ HDF5 includes Szip as a predefined compression method (see 4.2).
+ To enable Szip compression, the HDF5 library has to be configured
+ and built with the Szip library:
+
+ $ ./configure --with-szlib=/Szip_Install_Directory
+
+5.3.8. Static versus shared linking
+ The build process will create static libraries on all systems and
+ shared libraries on systems that support dynamic linking to a
+ sufficient degree. Either form of library may be suppressed by
+ saying `--disable-static' or `--disable-shared'.
+
+ $ ./configure --disable-shared
+
+ Shared C++ and Fortran libraries will be built if shared libraries
+ are enabled.
+
+ To build only statically linked executables on platforms which
+ support shared libraries, use the `--enable-static-exec' flag.
+
+ $ ./configure --enable-static-exec
+
+5.3.9. Optimization versus symbolic debugging
+ The library can be compiled to provide symbolic debugging support
+ so it can be debugged with gdb, dbx, ddd, etc., or it can be
+ compiled with various optimizations. To compile for symbolic
+ debugging (the default for snapshots) say `--disable-production';
+ to compile with optimizations (the default for supported public
+ releases) say `--enable-production'. On some systems the library
+ can also be compiled for profiling with gprof by saying
+ `--enable-production=profile'.
+
+ $ ./configure --disable-production #symbolic debugging
+ $ ./configure --enable-production #optimized code
+ $ ./configure --enable-production=profile #for use with gprof
+
+ Regardless of whether support for symbolic debugging is enabled,
+ the library also is able to perform runtime debugging of certain
+ packages (such as type conversion execution times, and extensive
+ invariant condition checking). To enable this debugging, supply a
+ comma-separated list of package names to the `--enable-debug'
+ switch (see Debugging.html for a list of package names).
+ Debugging can be disabled by saying `--disable-debug'. The
+ default debugging level for snapshots is a subset of the
+ available packages; the default for supported releases is no
+ debugging (debugging can incur a significant runtime penalty).
+
+ $ ./configure --enable-debug=s,t #debug only H5S and H5T
+ $ ./configure --enable-debug #debug normal packages
+ $ ./configure --enable-debug=all #debug all packages
+ $ ./configure --disable-debug #no debugging
+
+ HDF5 is also able to print a trace of all API function calls,
+ their arguments, and the return values. To enable or disable the
+ ability to trace the API say `--enable-trace' (the default for
+ snapshots) or `--disable-trace' (the default for public
+ releases). The tracing must also be enabled at runtime to see any
+ output (see Debugging.html).
+
+5.3.10. Large (>2GB) vs. small (<2GB) file capability
+ In order to read or write files that could potentially be larger
+ than 2GB it is necessary to use the non-ANSI `long long' data
+ type on some platforms. However, some compilers (e.g., GNU gcc
+ versions before 2.8.1 on Intel platforms) are unable to produce
+ correct machine code for this data type. To disable use of the
+ `long long' type on these machines say:
+
+ $ ./configure --disable-hsizet
+
+5.3.11. Parallel vs. serial library
+ The HDF5 library can be configured to use MPI and MPI-IO for
+ parallelism on a distributed multi-processor system. Read the
+ file INSTALL_parallel for detailed explanations.
+
+5.3.12. Threadsafe capability
+ The HDF5 library can be configured to be thread-safe (on a very
+ large scale) with the `--enable-threadsafe' flag to
+ the configure script. Some platforms may also require the
+ '-with-pthread=INC,LIB' (or '--with-pthread=DIR') flag to the
+ configure script. Read the file doc/TechNotes/ThreadSafeLibrary.html
+ for further details.
+
+5.3.13. Backward compatibility
+ The 1.8 version of the HDF5 library can be configured to operate
+ identically to the v1.6 library with the `--enable-hdf5v1_6'
+ configure flag. This allows existing code to be compiled with the
+ v1.8 library without requiring immediate changes to the
+ application source code. This flag will only be supported in the
+ v1.8 branch of the library; it will not be available in v1.9+.
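+
+ For example:
+
+ $ ./configure --enable-hdf5v1_6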
+
+5.3.14. Network stream capability
+ The HDF5 library can be configured with a network stream file
+ driver with the `--enable-stream-vfd' configure flag. This option
+ compiles the "stream" Virtual File Driver into the main library.
+ See the documentation on the Virtual File Layer for more details
+ about the use of this driver. The network stream capability is
+ enabled by default, except for use in parallel or with a parallel
+ compiler, where it is disabled. Explicitly enabling Stream-VFD
+ will allow for its use in parallel.
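+
+ For example, to make sure the Stream VFD is compiled in even for a
+ parallel build:
+
+ $ ./configure --enable-stream-vfd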
+
+5.4. Building
+ The library, confidence tests, and programs can be built by
+ saying just:
+
+ $ make
+
+ Note that if you supplied some other make command via the MAKE
+ variable during the configuration step then that same command
+ must be used here.
+
+ When using GNU make you can add `-j -l6' to the make command to
+ compile in parallel on SMP machines. Do not give a number after
+ the `-j' since GNU make will turn it off for recursive invocations
+ of make.
+
+ $ make -j -l6
+
+5.5. Testing
+ HDF5 comes with various test suites, all of which can be run by
+ saying
+
+ $ make check
+
+ To run only the tests for the library, change to the `test'
+ directory before issuing the command. Similarly, tests for the
+ parallel aspects of the library are in `testpar' and tests for
+ the support programs are in `tools'.
+
+ Temporary files will be deleted by each test when it completes,
+ but may continue to exist in an incomplete state if the test
+ fails. To prevent deletion of the files, define the HDF5_NOCLEANUP
+ environment variable.
+
+ The HDF5 tests can take a long time to run on some systems. To
+ perform a faster (but less thorough) test, set the HDF5TestExpress
+ environment variable to 2 or 3 (with 3 being the shortest run).
+ To perform a longer test, set HDF5TestExpress to 0. 1 is the default.
+
+5.6. Installing
+ The HDF5 library, include files, and support programs can be
+ installed in a (semi-)public place by saying `make install'. The
+ files are installed under the directory specified with
+ `--prefix=DIR' (or '/usr/local') in directories named `lib',
+ `include', and `bin'. The prefix directory must exist prior to
+ `make install', but its subdirectories are created automatically.
+
+ If `make install' fails because the install command at your site
+ somehow fails, you may use the install-sh that comes with the
+ source. You need to run ./configure again.
$ INSTALL="$PWD/bin/install-sh -c" ./configure ...
$ make install
- If you want to install HDF5 in a location other than the location
- specified by the `--prefix=DIR' flag during configuration (or
- instead of the default location, `hdf5'), you can do that
- by running the deploy script:
+ If you want to install HDF5 in a location other than the location
+ specified by the `--prefix=DIR' flag during configuration (or
+ instead of the default location, `/usr/local'), you can do that
+ by running the deploy script:
$ bin/deploy NEW_DIR
- This will install HDF5 in NEW_DIR. Alternately, you can do this
- manually by issuing the command:
+ This will install hdf5 in NEW_DIR. Alternately, you can do this
+ manually by issuing the command:
- $ make install prefix=NEW_DIR
+ $ make install prefix=NEW_DIR
- where NEW_DIR is the new directory where you wish to install HDF5.
- If you do not use the deploy script, you should run h5redeploy in
- NEW_DIR/bin directory. This utility will fix the h5cc, h5fc and
- h5c++ scripts to reflect the new NEW_DIR location.
+ where NEW_DIR is the new directory where you wish to install HDF5. If
+ you do not use the deploy script, you should run h5redeploy in
+ NEW_DIR/bin directory. This utility will fix h5cc, h5fc and
+ h5c++ scripts to reflect the new NEW_DIR location.
- The library can be used without installing it by pointing the
- compiler at the `src' and 'src/.libs' directory for include files and
- libraries. However, the minimum which must be installed to make
- the library publicly available is:
+ The library can be used without installing it by pointing the
+ compiler at the `src' and 'src/.libs' directory for include files and
+ libraries. However, the minimum which must be installed to make
+ the library publicly available is:
- The library:
- ./src/.libs/libhdf5.a
+ The library:
+ ./src/.libs/libhdf5.a
- The public header files:
- ./src/H5*public.h, ./src/H5public.h
+ The public header files:
+ ./src/H5*public.h, ./src/H5public.h
./src/H5FD*.h except ./src/H5FDprivate.h,
./src/H5api_adpt.h
- The main header file:
- ./src/hdf5.h
-
- The configuration information:
- ./src/H5pubconf.h
-
- The support programs that are useful are:
- ./tools/h5ls/h5ls (list file contents)
- ./tools/h5dump/h5dump (dump file contents)
- ./tools/misc/h5repart (repartition file families)
- ./tools/misc/h5debug (low-level file debugging)
- ./tools/h5import/h5import (imports data to HDF5 file)
- ./tools/h5diff/h5diff (compares two HDF5 files)
- ./tools/gifconv/h52gif (HDF5 to GIF converter)
- ./tools/gifconv/gif2h5 (GIF to HDF5 converter)
+ The main header file:
+ ./src/hdf5.h
+ The configuration information:
+ ./src/H5pubconf.h
+
+ The support programs that are useful are:
+ ./tools/h5ls/h5ls (list file contents)
+ ./tools/h5dump/h5dump (dump file contents)
+ ./tools/misc/h5repart (repartition file families)
+ ./tools/misc/h5debug (low-level file debugging)
+ ./tools/h5import/h5import (imports data to HDF5 file)
+ ./tools/h5diff/h5diff (compares two HDF5 files)
+ ./tools/gifconv/h52gif (HDF5 to GIF converter)
+ ./tools/gifconv/gif2h5 (GIF to HDF5 converter)
-5. Using the Library
- Please see the "HDF5 User's Guide" and the "HDF5 Reference Manual":
+5.7 Building and testing with Intel compilers
- http://www.hdfgroup.org/HDF5/doc/
- Most programs will include <hdf5.h> and link with -lhdf5.
- Additional libraries may also be necessary depending on whether
- support for compression, etc., was compiled into the HDF5 Library.
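-
- For example, one way to compile a program against an installed HDF5
- is the h5cc helper script (see the h5cc/h5fc/h5c++ scripts mentioned
- in section 4.6), which supplies the needed include and library
- flags; the file names below are illustrative:
-
- $ h5cc -o h5_example h5_example.c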
+ When Intel compilers are used (icc or ecc), you will need to
+ modify the generated "libtool" program after configuration is finished.
+ On or around line 104 of the libtool file, there are lines which
+ look like:
- A summary of the HDF5 installation can be found in the
- libhdf5.settings file in the same directory as the static and/or
- shared HDF5 Libraries.
+ # How to pass a linker flag through the compiler.
+ wl=""
+ Change these lines to this:
-6. Support
- Support is described in the README file.
+ # How to pass a linker flag through the compiler.
+ wl="-Wl,"
+ UPDATE: This is now done automatically by the configure script. However,
+ if you still experience a problem, you may want to check this line in
+ the libtool file and make sure that it has the correct value.
-*****************************************************************************
- APPENDIX
-*****************************************************************************
+ * To build the Fortran library using the Intel compiler on Linux 2.4,
+ one has to perform the following steps:
+ x Use the -fpp -DDEC$=DEC_ -DMS$=MS_ compiler flags to disable
+ DEC and MS compiler directives in source files in the fortran/src,
+ fortran/test, and fortran/examples directories.
+ e.g., setenv F9X 'ifc -fpp -DDEC$=DEC_ -DMS$=MS_'
+ (Do not use double quotes since $ is interpreted in them.)
-A. Warnings about compilers
- Output from the following compilers should be extremely suspect
- when used to compile the HDF5 Library, especially if optimizations are
- enabled. In all cases, HDF5 attempts to work around the compiler bugs.
-
-A.1. GNU (Intel platforms)
- Versions before 2.8.1 have serious problems allocating registers
- when functions contain operations on `long long' datatypes.
- Supplying the `--disable-hsizet' switch to configure (documented
- in Appendix B, "Large (>2GB) versus small (<2GB) file capability,")
- will prevent HDF5 from using `long long' datatypes in
- situations that are known not to work, but it limits the HDF5
- address space to 2GB.
-
-A.2. COMPAQ/DEC
- The V5.2-038 compiler (and possibly others) occasionally
- generates incorrect code for memcpy() calls when optimizations
- are enabled, resulting in unaligned access faults. HDF5 works
- around the problem by casting the second argument to `char *'.
- The Fortran module (5.4.1a) fails in compiling some Fortran
- programs. Use 5.5.0 or higher.
-
-A.3. SGI (Irix64 6.2)
- The Mongoose 7.00 compiler has serious optimization bugs and
- should be upgraded to MIPSpro 7.2.1.2m. Patches are available
- from SGI.
-
-A.4. Windows/NT
- The Microsoft Win32 5.0 compiler is unable to cast unsigned long
- long values to doubles. HDF5 works around this bug by first
- casting to signed long long and then to double.
-
- A link warning: defaultlib "LIBC" conflicts with use of other libs
- appears for debug version of VC++ 6.0. This warning will not affect
- building and testing HDF5 Libraries.
-
-
-B. Large (>2GB) versus small (<2GB) file capability
- In order to read or write files that could potentially be larger
- than 2GB, it is necessary to use the non-ANSI `long long' data
- type on some platforms. However, some compilers (e.g., GNU gcc
- versions before 2.8.1 on Intel platforms) are unable to produce
- correct machine code for this datatype. To disable use of the
- `long long' type on these machines, say:
-
- $ ./configure --disable-hsizet
-
-C. Building and testing with other compilers
-C.1. Building and testing with Intel compilers
- When Intel compilers are used (icc or ecc), you will need to modify
- the generated "libtool" program after configuration is finished.
- On or around line 104 of the libtool file, there are lines which
- look like:
-
- # How to pass a linker flag through the compiler.
- wl=""
-
- Change these lines to this:
-
- # How to pass a linker flag through the compiler.
- wl="-Wl,"
-
- UPDATE: This is now done automatically by the configure script.
- However, if you still experience a problem, you may want to check this
- line in the libtool file and make sure that it has the correct value.
-
- * To build the Fortran library using Intel compiler on Linux 2.4,
- one has to perform the following steps:
- x Use the -fpp -DDEC$=DEC_ -DMS$=MS_ compiler flags to disable
- DEC and MS compiler directives in source files in the fortran/src,
- fortran/test, and fortran/examples directories.
- E.g., setenv F9X 'ifc -fpp -DDEC$=DEC_ -DMS$=MS_'
- Do not use double quotes since $ is interpreted in them.
-
- x If Version 6.0 of Fortran compiler is used, the build fails in
- the fortran/test directory and then in the fortran/examples
- directory. To proceed, edit the work.pcl files in those
- directories to contain two lines:
+ x If Version 6.0 of the Fortran compiler is used, the build fails in
+ the fortran/test directory and then in the
+ fortran/examples directory; to proceed, edit the work.pcl files in
+ those directories to contain two lines:
work.pc
../src/work.pc
- x Do the same in the fortran/examples directory.
+ x Do the same in the fortran/examples directory.
+ x The problem with the work.pc files was resolved in the newest
+ version of the compiler (7.0).
+
+ * To build the Fortran library on IA32, follow the steps described above,
+ except that the DEC and MS compiler directives should be removed manually
+ or a patch from the HDF FTP server ftp://ftp.ncsa.uiuc.edu/HDF/HDF5/current/
+ should be applied.
+
+
+5.8 Building and testing with PGI compilers
+
+ When PGI C and C++ compilers are used (pgcc or pgCC), you will need to
+ modify the generated "libtool" program after configuration is finished.
+ On or around line 104 of the libtool file, there are lines which
+ look like:
- x A problem with work.pc files was resolved for the newest version
- of the compiler (7.0).
+ # How to pass a linker flag through the compiler.
+ wl=""
- * To build the Fortran library on IA32, follow the steps described
- above, except that the DEC and MS compiler directives should be
- removed manually or use a patch from HDF FTP server:
+ Change these lines to this:
- ftp://ftp.hdfgroup.org/HDF5/current/
+ # How to pass a linker flag through the compiler.
+ wl="-Wl,"
+ UPDATE: This is now done automatically by the configure script. However,
+ if you still experience a problem, you may want to check this line in
+ the libtool file and make sure that it has the correct value.
-C.2. Building and testing with PGI compilers
- When PGI C and C++ compilers are used (pgcc or pgCC), you will need to
- modify the generated "libtool" program after configuration is finished.
- On or around line 104 of the libtool file, there are lines which
- look like this:
+ To build the HDF5 C++ library with pgCC (version 4.0 and later), set
+ the environment variable CXX to "pgCC -tlocal"
+ setenv CXX "pgCC -tlocal"
+ before running the configure script.
- # How to pass a linker flag through the compiler.
- wl=""
- Change these lines to this:
+6. Using the Library
+ Please see the User Manual in the doc/html directory.
- # How to pass a linker flag through the compiler.
- wl="-Wl,"
+ Most programs will include <hdf5.h> and link with -lhdf5.
+ Additional libraries may also be necessary depending on whether
+ support for compression, etc. was compiled into the hdf5 library.
- UPDATE: This is now done automatically by the configure script. However,
- if you still experience a problem, you may want to check this line in
- the libtool file and make sure that it has the correct value.
+ A summary of the hdf5 installation can be found in the
+ libhdf5.settings file in the same directory as the static and/or
+ shared hdf5 libraries.
- To build the HDF5 C++ Library with pgCC (version 4.0 and later), set
- the environment variable CXX to "pgCC -tlocal"
- setenv CXX "pgCC -tlocal"
- before running the configure script.
+7. Support
+ Support is described in the README file.