Instructions for the Installation of HDF5 Software
==================================================

This file provides instructions for installing the HDF5 software.  
If you have any problems with the installation, please see The HDF Group's
support page at the following location:

    http://www.hdfgroup.org/services/support.html

CONTENTS
--------
        1. Obtaining HDF5

        2. Quick installation
        2.1. Windows
        2.2. RedStorm (Cray XT3)

        3. HDF5 dependencies
        3.1. Zlib
        3.2. Szip (optional)
        3.3. MPI and MPI-IO

        4. Full installation instructions for source distributions
        4.1. Unpacking the distribution
        4.1.1. Non-compressed tar archive (*.tar)
        4.1.2. Compressed tar archive (*.tar.Z)
        4.1.3. Gzip'd tar archive (*.tar.gz)
        4.1.4. Bzip'd tar archive (*.tar.bz2)
        4.2. Source versus build directories
        4.3. Configuring
        4.3.1. Specifying the installation directories
        4.3.2. Using an alternate C compiler
        4.3.3. Debug vs. production builds
        4.3.4. Configuring for 64-bit support
        4.3.5. Additional compilation flags
        4.3.6. Compiling HDF5 wrapper libraries
        4.3.7. Specifying other programs
        4.3.8. Specifying other libraries and headers
        4.3.9. Static versus shared linking
        4.3.10. Symbolic debugging and profiling
        4.3.11. Optimization
        4.3.12. Parallel versus serial library
        4.3.13. Threadsafe capability
        4.3.14. Backward compatibility
        4.4. Building
        4.5. Testing
        4.6. Installing HDF5

        5. Using the Library

        6. Support

        A. Warnings about compilers
        A.1. GNU (Intel platforms)
        A.2. COMPAQ/DEC
        A.3. SGI (Irix64 6.2)
        A.4. Windows/NT

        B. Large (>2GB) versus small (<2GB) file capability

        C. Building and testing with other compilers
        C.1.  Building and testing with Intel compilers
        C.2.  Building and testing with PGI compilers

*****************************************************************************

1. Obtaining HDF5
        The latest supported public release of HDF5 is available from
        ftp://ftp.hdfgroup.org/HDF5/current/src.  For Unix and UNIX-like 
        platforms, it is available in tar format compressed with gzip.  
        For Microsoft Windows, it is in ZIP format.

        The HDF team also makes snapshots of the source code available on
        a regular basis. These snapshots are unsupported (that is, the
        HDF team will not release a bug-fix on a particular snapshot;
        rather any bug fixes will be rolled into the next snapshot).
        Furthermore, the snapshots have only been tested on a few
        machines and may not test correctly for parallel applications.
        Snapshots, in a limited number of formats, can be found on THG's 
        development FTP server:

            ftp://ftp.hdfgroup.uiuc.edu/pub/outgoing/hdf5/snapshots 


2. Quick installation
        For those who don't like to read ;-) the following steps can be used
        to configure, build, test, and install the HDF5 Library, header files,
        and support programs. For example, to install HDF5 version X.Y.Z at
        location /usr/local/hdf5, use the following steps.

            $ gunzip < hdf5-X.Y.Z.tar.gz | tar xf -
            $ cd hdf5-X.Y.Z
            $ ./configure --prefix=/usr/local/hdf5 <more configure_flags>
            $ make
            $ make check                # run test suite.
            $ make install
            $ make check-install        # verify installation.

        Some versions of the tar command support the -z option. In such cases,
        the first step above can be simplified to the following:

            $ tar zxf hdf5-X.Y.Z.tar.gz

        <more configure_flags> above refers to the configure flags appropriate
        to your installation.  For example, to install HDF5 with the 
        Fortran and C++ interfaces and with SZIP compression, the 
        configure line might read as follows:
        
            $ ./configure --prefix=/usr/local/hdf5 --enable-fortran \
                          --enable-cxx --with-szlib=PATH_TO_SZIP

        In this case, PATH_TO_SZIP would be replaced with the path to the 
        installed location of the SZIP library.

2.1. Windows
        Users of Microsoft Windows should see the INSTALL_Windows files for
        detailed instructions.

2.2. RedStorm (Cray XT3)
        Users of the Red Storm machine, after reading this file, should read
        the Red Storm section in the INSTALL_parallel file for specific
        instructions for the Red Storm machine.  The same instructions would
        probably work for other Cray XT3 systems, but they have not been
        verified.


3. HDF5 dependencies
3.1. Zlib
        The HDF5 Library includes a predefined compression filter that 
        uses the "deflate" method for chunked datasets. If zlib-1.1.2 or
        later is found, HDF5 will use it.  Otherwise, HDF5's predefined
        compression method will degenerate to a no-op; the compression
        filter will succeed but the data will not be compressed.
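
        For example, to point configure at a zlib installation in a
        non-standard location (see section 4.3.8 for details), a line such
        as the following could be used; the directories shown are only
        illustrative:

            $ ./configure --with-zlib=/usr/local/zlib/include,/usr/local/zlib/lib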

3.2. Szip (optional)
        The HDF5 Library includes a predefined compression filter that
        uses the extended-Rice lossless compression algorithm for chunked
        datasets. For more information about Szip compression and license
        terms, see http://hdfgroup.org/doc_resource/SZIP/.

        The Szip source code can be obtained from the HDF5 Download page
        http://www.hdfgroup.org/HDF5/release/obtain5.html#extlibs. Building
        instructions are available with the Szip source code.

        The HDF Group does not distribute separate Szip precompiled libraries,
        but the HDF5 binaries available from
        http://www.hdfgroup.org/HDF5/release/obtain5.html include 
        the Szip encoder enabled binary for the corresponding platform.

        To configure the HDF5 Library with the Szip compression filter, use
        the '--with-szlib=/PATH_TO_SZIP' flag. For more information, see
        section 4.3.7, "Specifying other libraries and headers."

        Please note that if HDF5 configure cannot find a valid Szip library,
        configure will not fail; in this case, the compression filter will 
        not be available to the applications.

        To check whether Szip compression was successfully configured in,
        look for the "I/O filters (external):" line in the summary section
        of the configure output, which is printed to standard output.
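
        For example, assuming a Bourne-style shell and a purely illustrative
        Szip path, the line can be found with a command such as:

            $ ./configure --with-szlib=/usr/local/szip | grep "I/O filters (external)"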

3.3. MPI and MPI-IO
        The parallel version of the library is built upon the foundation
        provided by MPI and MPI-IO. If these libraries are not available
        when HDF5 is configured, only a serial version of HDF5 can be built.


4. Full installation instructions for source distributions

4.1. Unpacking the distribution
        The HDF5 source code is distributed in a variety of formats, which
        can be unpacked with the following commands, each of which creates
        an 'hdf5-X.Y.Z' directory, where X.Y.Z is the HDF5 version number.

4.1.1. Non-compressed tar archive (*.tar)

            $ tar xf hdf5-X.Y.Z.tar

4.1.2. Compressed tar archive (*.tar.Z)

            $ uncompress -c < hdf5-X.Y.Z.tar.Z | tar xf -
            Or
            $ tar Zxf hdf5-X.Y.Z.tar.Z

4.1.3. Gzip'd tar archive (*.tar.gz)

            $ gunzip < hdf5-X.Y.Z.tar.gz | tar xf -
            Or
            $ tar zxf hdf5-X.Y.Z.tar.gz

4.1.4. Bzip'd tar archive (*.tar.bz2)

            $ bunzip2 < hdf5-X.Y.Z.tar.bz2 | tar xf -
            Or
            $ tar jxf hdf5-X.Y.Z.tar.bz2

4.2. Source versus build directories
        On most systems the build can occur in a directory other than the
        source directory, allowing multiple concurrent builds and/or
        read-only source code. In order to accomplish this, one should
        create a build directory, cd into that directory, and run the
        `configure' script found in the source directory (configure
        details are below). For example,
            $ mkdir build-fortran
            $ cd build-fortran
            $ ../hdf5-X.Y.Z/configure --enable-fortran ...

        Unfortunately, this does not work on recent Irix platforms (6.5?
        and later) because that `make' does not understand the VPATH variable. 
        However, HDF5 also supports Irix `pmake' which has a .PATH target 
        which serves a similar purpose. Here's what the Irix man pages say 
        about VPATH, the facility used by HDF5 makefiles for this feature:

                The VPATH facility is a derivation of the undocumented
                VPATH feature in the System V Release 3 version of make.
                System V Release 4 has a new VPATH implementation, much
                like the pmake(1) .PATH feature. This new feature is also
                undocumented in the standard System V Release 4 manual
                pages.  For this reason it is not available in the IRIX
                version of make. The VPATH facility should not be used
                with the new parallel make option.

4.3. Configuring
        HDF5 uses the GNU autoconf system for configuration, which
        detects various features of the host system and creates the
        Makefiles. On most systems it should be sufficient to say:

            $ ./configure        
            Or
            $ sh configure

        The configuration process can be controlled through environment
        variables, command-line switches, and host configuration files.
        For a complete list of switches type:

            $ ./configure --help

        The host configuration files are located in the `config'
        directory and are based on architecture name, vendor name, and/or
        operating system which are displayed near the beginning of the
        `configure' output. The host config file influences the behavior
        of configure by setting or augmenting shell variables.

4.3.1. Specifying the installation directories
        The default installation location is the HDF5 directory created in
        the build directory. Typing `make install' will install the HDF5
        Library, header files, examples, and support programs in hdf5/lib,
        hdf5/include, hdf5/doc/hdf5/examples, and hdf5/bin. To use a path
        other than hdf5, specify the path with the `--prefix=PATH' switch:

            $ ./configure --prefix=/usr/local

        If shared libraries are being built (the default), the final
        home of the shared library must be specified with this switch
        before the library and executables are built.

        HDF5 can be installed into a different location than the prefix
        specified at configure time; see section 4.6, "Installing HDF5," 
        for more details.

4.3.2. Using an alternate C compiler
        By default, configure will look for the C compiler by trying
        `gcc' and `cc'. However, if the environment variable "CC" is set
        then its value is used as the C compiler.  For instance, one would 
        use the following line to specify the native C compiler on a system 
        that also has the GNU gcc compiler (users of csh and derivatives 
        will need to prefix the commands below with `env'):

            $ CC=cc ./configure

        A parallel version of HDF5 can be built by specifying `mpicc'
        as the C compiler.  (The `--enable-parallel' flag documented
        below is optional in this case.)  Using the `mpicc' compiler
        will ensure that the correct MPI and MPI-IO header files and
        libraries are used.

            $ CC=/usr/local/mpi/bin/mpicc ./configure

4.3.3. Debug vs. production builds

        The HDF5 library can be built in either debug or production mode.
        The primary difference between these in HDF5 1.10.0 is that each
        mode specifies different default values for other configure options
        such as --enable-symbols.

        To build in debug mode:

            $ ./configure --enable-build-mode=debug

        To build in production mode:

            $ ./configure --enable-build-mode=production

        To build in clean mode:

            $ ./configure --enable-build-mode=clean

        "Clean mode" is a minimalist configuration. e.g., no symbols, no
        optimization, etc.

        Previously, --enable-debug and --enable-production were used to
        configure the build mode. Since the new options have somewhat
        different semantics from these original options, these older
        options have been deprecated and will cause the configuration
        step to abort if used.

        Release branches and distributions (e.g., HDF5 1.10.0) are set to
        build in production mode by default. All other branches and
        distributions (e.g., trunk, snapshots, HDF5 1.10 development branch)
        will build in debug mode by default.

        Assertions and other low-overhead sanity checks are controlled
        separately via the --enable-asserts option (which itself primarily
        controls the NDEBUG setting). This is enabled by default in debug
        builds but can be configured independently of the build type.

            $ ./configure --enable-asserts

4.3.4. Configuring for 64-bit support
        Several machine architectures support 32-bit or 64-bit binaries.
        The options below describe how to enable 64-bit support on several
        platforms.

        On Irix64, the default compiler is `cc'. To use an alternate compiler,
        specify it with the CC variable:

            $ CC='cc -n32' ./configure

        Similarly, users compiling on a Solaris machine and desiring to
        build the distribution with 64-bit support should specify the
        correct flags with the CC variable:

            $ CC='cc -m64' ./configure

        To configure AIX 64-bit support, including the Fortran and C++ APIs,
        set $AR to 'ar -X 64' and configure as follows.
        Serial:
            $ CFLAGS=-q64 FCFLAGS=-q64 CXXFLAGS=-q64 AR='ar -X 64' \
              ./configure --enable-fortran --enable-cxx
        Parallel (C++ is not supported with parallel):
            $ CFLAGS=-q64 FCFLAGS=-q64 AR='ar -X 64' \
              ./configure --enable-fortran

        Configure should automatically enable large file system (LFS) support
        for autotools builds regardless of platform and architecture. If
        this is not true for a platform, please contact the HDF Group
        help desk at help@hdfgroup.org.

4.3.5. Additional compilation flags
        If additional flags must be passed to the compilation commands,
        specify those flags with the CFLAGS variable. For instance,
        to use pipes instead of temporary files when compiling, you
        can enter:

            $ CFLAGS=-pipe ./configure

4.3.6. Compiling HDF5 wrapper libraries
        One can optionally build the Fortran, C++, and/or Java JNI interfaces
        to the HDF5 C library. By default, all of these options are disabled.
        To build them, specify --enable-fortran, --enable-cxx, and/or
        --enable-java, respectively.

            $ ./configure --enable-fortran
            $ ./configure --enable-cxx
            $ ./configure --enable-java
        
        Configuration will halt if a working Fortran 2003, C++, or
        Java compiler is not found. To use an alternate Fortran compiler,
        specify it with the FC variable:

            $ FC=/usr/local/bin/g95 ./configure --enable-fortran

        CXX can similarly be used to specify an alternative C++ compiler.

        The wrapper libraries are compiled with flags and settings that are
        distinct from those used for the HDF5 C library. These settings are specified in
        platform- and compiler-specific configuration files in the config/
        directory in the source tree. An exception to this is when a custom
        string is used to specify profiling, symbols, and/or optimization
        options. These flags will be used everywhere.

        Note: The Fortran, C++, and Java interfaces are not supported on all
              platforms the main HDF5 Library supports. Also, the Fortran
              interface supports parallel HDF5 while the C++ and Java
              interfaces do not. None of the interfaces supports 
              thread-safety.

        Note: See sections C.1 and C.2 in Appendix C for building the Fortran
              library with the Intel or PGI compilers.

        A high-level wrapper library that contains some convenience and
        special-purpose functions and tools can also be built with the
        library. By default, this library is built along with the C
        library. The high-level library is not supported when the
        thread-safe HDF5 library is built, however; it must either be
        explicitly disabled or the --enable-unsupported configure option
        must be used.

            $ ./configure --enable-threadsafe --disable-hl

                OR

            $ ./configure --enable-threadsafe --enable-unsupported

        This is admittedly a little awkward but it avoids the more common
        case of users having to specify --enable-hl for all builds if we
        disabled building the high-level library by default.

4.3.7. Specifying other programs
        The build system has been tuned for use with GNU make but also 
        works with other versions of make.  If the `make' command runs a
        non-GNU version but a GNU version is available under a different
        name (perhaps `gmake'), then HDF5 can be configured to use it by
        setting the MAKE variable. Note that whatever value is used for
        MAKE must also be used as the make command when building the
        library:

            $ MAKE=gmake ./configure
            $ gmake

        The `AR' and `RANLIB' variables can also be set to the names of
        the `ar' and `ranlib' (or `:') commands to override values
        detected by configure.
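
        For example (the path to `ar' below is only illustrative):

            $ AR=/usr/local/bin/ar RANLIB=: ./configure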

        The HDF5 Library, include files, and utilities are installed
        during `make install' (described below) with a BSD-compatible
        install program detected automatically by configure. If none is
        found, the shell script bin/install-sh is used. Configure does not
        check that the install script actually works; if a bad install is 
        detected on your system (e.g., on the ASCI blue machine as of 
        March 2, 1999) you have two choices:

            1. Copy the bin/install-sh program to your $HOME/bin
               directory, name it `install', and make sure that $HOME/bin
               is searched before the system bin directories.

            2. Specify the full path name of the `install-sh' program
               as the value of the INSTALL environment variable. Note: do
               not use `cp' or some other program in place of install
               because the HDF5 makefiles also use the install program to
               change file ownership and/or access permissions.

4.3.8. Specifying other libraries and headers
        Configure searches the standard places (those places known by the
        system's compiler) for include files and libraries. However,
        additional directories can be specified by using the CPPFLAGS
        and/or LDFLAGS variables:

            $ CPPFLAGS=-I/home/robb/include \
              LDFLAGS=-L/home/robb/lib \
              ./configure

        HDF5 uses the zlib library to support the HDF5 deflate 
        data compression filter.  Configure searches the standard places 
        (plus those specified above with the CPPFLAGS and LDFLAGS variables) 
        for the zlib headers and library. The search can be disabled by 
        specifying `--without-zlib' or alternate directories can be specified 
        with `--with-zlib=INCDIR,LIBDIR' or through the CPPFLAGS and LDFLAGS
        variables:

            $ ./configure --with-zlib=/usr/unsup/include,/usr/unsup/lib

            $ CPPFLAGS=-I/usr/unsup/include \
              LDFLAGS=-L/usr/unsup/lib \
              ./configure

        HDF5 includes Szip as a predefined compression method (see 3.2).  
        To enable Szip compression, the HDF5 Library must be configured 
        and built using the Szip Library:

            $ ./configure --with-szlib=/Szip_Install_Directory

4.3.9. Static versus shared linking
        The build process will create static libraries on all systems and
        shared libraries on systems that support dynamic linking to a
        sufficient degree. Either form of the library may be suppressed by
        saying `--disable-static' or `--disable-shared'.

            $ ./configure --disable-shared

        Shared C++ and Fortran libraries will be built if shared libraries
        are enabled.

        To build only statically linked executables on platforms which
        support shared libraries, use the `--enable-static-exec' flag.

            $ ./configure --enable-static-exec

4.3.10. Symbolic debugging and profiling
        The library can be compiled to provide symbolic debugging support
        so it can be debugged with gdb, dbx, ddd, etc. To compile with
        symbolic debugging support, use --enable-symbols:

            $ ./configure --enable-symbols

        The --enable-symbols option can optionally take a string of flags
        that will be used when building the library and wrappers. This
        allows special settings such as -ggdb to be used:

            $ ./configure --enable-symbols=-ggdb

        In most cases, when symbols are not enabled, the option to
        strip all symbols (e.g., -s with gcc) will be added to the
        compiler/linker flags.

        Regardless of whether support for symbolic debugging is enabled,
        the library can also perform runtime debugging of certain packages 
        (such as type conversion execution times and extensive invariant 
        condition checking). These checks are generally more intensive
        than the simple asserts and low-overhead checks enabled by
        --enable-asserts. Note that some of these checks can significantly
        slow down the library.
        
        To enable this debugging, supply a comma-separated list of package
        names to the --enable-internal-debug option.

        Current packages that can perform extra checks:

            AC,B,B2,D,F,FA,FL,FS,HL,I,O,S,ST,T,Z

        Any subset can be specified:

            $ ./configure --enable-internal-debug="B,B2,EA,FA"

        Not all packages will generate special debug checks. Configure
        simply defines an H5<pkg>_DEBUG symbol for each package in the
        list so there is no harm in specifying arbitrary packages. The
        string "all" can be used to specify "all reasonable" packages,
        though even this excludes a few packages (e.g.: B) that have
        severe run-time penalties.

        In general, the internal debug option is intended for HDF Group
        use and, outside of the default settings, will not be of
        interest to the average user.

        The HDF5 library can also print a trace of all API function calls,
        their arguments, and the return values. To enable or disable the
        ability to trace the API use --enable-trace (the default for
        debug mode) or --disable-trace (the default for production mode). 
        The tracing must also be enabled at runtime to see any output
        (see "Debugging HDF5 Applications" in the HDF5 documentation).

        Profiling can be enabled with --enable-profiling. This will set
        appropriate flags for a common profiler for the platform (usually
        gprof). Other flags can be specified via a custom string passed
        to the option:

            $ ./configure --enable-profiling        # default profiler
            $ ./configure --enable-profiling=-p     # use prof w/ gcc

        The profiling option is independent of the build mode. In particular,
        note that enabling profiling will NOT automatically enable debugging
        since it can be useful to profile a production library, even
        if the output will be limited.

        The custom strings can be more complicated than a single
        argument:

            $ ./configure --enable-profiling="-pg --coverage"

        Do note that when a custom string is specified, it will *replace*
        the flags that would normally be set for that option. It is NOT
        concatenated. This is true of all configure options that can
        accept a custom string.

        If using Valgrind or some other memory sanity checker, you will
        want to build the library with --enable-using-memchecker. This
        will disable a feature of the HDF5 library that recycles
        certain memory blocks without a trip to the system's allocator.
        This can confuse memory checkers and hide problems.

            $ ./configure --enable-using-memchecker

        The library also includes its own memory sanity checker. It is not
        as involved as Valgrind and is mainly for library developers
        but may be of some use to users trying to track down memory leaks.
        It can be enabled with --enable-memory-alloc-sanity-check and
        is separate from --enable-using-memchecker.

            $ ./configure --enable-memory-alloc-sanity-check

        Both of these options can slow down the library and cause
        increased memory usage.

4.3.11. Optimization
        Various levels of compiler optimization can be specified at
        compile time using --enable-optimization. This option can
        take several strings:

            high        Aggressive optimization (-O3, etc.)

            debug       Optimizations that do not detract significantly
                        from the debugging experience (-Og, etc.)

            none        No optimizations (typically -O0)

        Example:

            $ ./configure --enable-optimization=high

        The exact options used will vary with compiler and platform and can
        be seen in the library settings file created during the configure
        step.

        Additionally, a custom string can be provided, such as "-Ofast":

            $ ./configure --enable-optimization=-Ofast


        Setting the optimization level to be high, debug, or none will
        set appropriate options for the language wrappers as well if they
        are being built (due to compiler quirks, these may be different
        than those used for the main library). If a custom string
        has been provided, however, it will be used for both the HDF5
        library and the language wrappers.

4.3.12. Parallel versus serial library
        The HDF5 Library can be configured to use MPI and MPI-IO for
        parallelism on a distributed multi-processor system.  Read the
        file INSTALL_parallel for detailed explanations.
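
        For example, one common approach (see also section 4.3.2) is to
        configure with an MPI compiler wrapper; the path below is only
        illustrative:

            $ CC=/usr/local/mpi/bin/mpicc ./configure --enable-parallel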

4.3.13. Threadsafe capability
        The HDF5 Library can be configured to be thread-safe with the
        `--enable-threadsafe' flag to the configure script.  Some platforms
        may also require the '--with-pthread=INC,LIB' (or
        '--with-pthread=DIR') flag to the configure script, though this
        is normally not necessary. 
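
        For example, recall from section 4.3.6 that the high-level library
        must be disabled (or --enable-unsupported used) in thread-safe
        builds; the pthread directories below are only illustrative:

            $ ./configure --enable-threadsafe --disable-hl

            $ ./configure --enable-threadsafe --disable-hl \
                          --with-pthread=/usr/local/include,/usr/local/lib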

        In HDF5 1.10.0, the thread-safety code uses pthreads on POSIX systems
        and Win32 threads on Windows systems.
        
        Thread-safety is currently implemented via a global library mutex,
        effectively serializing library access. Please be aware of the
        potential performance implications. The library is currently NOT
        threaded internally though we do use thread-local storage to keep
        track of things like per-thread function stacks for debugging.

        Note that building the thread-safe library with the high-level
        library and/or language wrappers (Fortran, etc.) is not officially
        supported since we've never investigated the thread-safety of
        these high-level or wrapper libraries and there is no testing.
        However, this can be enabled with --enable-unsupported if you
        are willing to take the risk of using it.

        For further details, see "HDF5 Thread Safe Library":

            http://www.hdfgroup.org/HDF5/doc/TechNotes/ThreadSafeLibrary.html

4.3.14. Backward compatibility
        The 1.10 version of the HDF5 Library can be configured to operate
        identically to the v1.6 library with the 
            --with-default-api-version=v16
        and to the v1.8 library with the
            --with-default-api-version=v18
        configure flag. This allows existing code to be compiled with the
        v1.10 library without requiring immediate changes to the application 
        source code.  For additional configuration options and other details, 
        see "API Compatibility Macros in HDF5":

            http://www.hdfgroup.org/HDF5/doc/RM/APICompatMacros.html 
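
        For example, to make the v1.6 API the default (the same pattern
        applies to v18):

            $ ./configure --with-default-api-version=v16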

4.4. Building
        The library, confidence tests, and programs can be built by
        saying just:

            $ make

        Note that if you have supplied some other make command via the MAKE
        variable during the configuration step, that same command must be
        used here.

        When using GNU make, you can add `-j -l6' to the make command to
        compile in parallel on SMP machines. Do not give a number after
        the `-j' since GNU make will turn it off for recursive invocations
        of make.

            $ make -j -l6

4.5. Testing
        HDF5 comes with various test suites, all of which can be run by
        saying

            $ make check

        To run only the tests for the library, change to the `test'
        directory before issuing the command. Similarly, tests for the
        parallel aspects of the library are in `testpar' and tests for
        the support programs are in `tools'.

        The `check' target consists of two sub-tests, check-s and check-p, which
        are for serial and parallel tests, respectively.  Since serial tests
        and parallel tests must be run with single and multiple processes
        respectively, the two sub-tests work nicely for batch systems in
        which the number of processes is fixed per batch job.  One may submit
        one batch job, requesting 1 process, to run all the serial tests by
        "make check-s"; and submit another batch job, requesting multiple
        processes, to run all the parallel tests by "make check-p".

        Temporary files will be deleted by each test when it completes,
        but may continue to exist in an incomplete state if the test
        fails. To prevent deletion of the files, define the HDF5_NOCLEANUP
        environment variable.

        The HDF5 tests can take a long time to run on some systems.  To perform
        a faster (but less thorough) test, set the HDF5TestExpress environment
        variable to 2 or 3 (with 3 being the shortest run).  To perform a
        longer test, set HDF5TestExpress to 0.  1 is the default.
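
        For example, assuming a Bourne-style shell, the variables can be
        set on the command line when running the tests:

            $ HDF5_NOCLEANUP=yes make check     # keep temporary test files
            $ HDF5TestExpress=2 make check      # faster, less thorough run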

4.6. Installing HDF5
        The HDF5 Library, include files, and support programs can be
        installed in a (semi-)public place by saying `make install'. The
        files are installed under the directory specified with
        `--prefix=DIR' (default is 'hdf5') in directories named `lib',
        `include', and `bin'. The directories, if not existing, will be
        created automatically, provided the mkdir command supports the -p
        option.

        If `make install' fails because the install command at your site
        somehow fails, you may use the install-sh that comes with the
        source. You will need to run ./configure again.

            $ INSTALL="$PWD/bin/install-sh -c" ./configure ...
            $ make install

        If you want to install HDF5 in a location other than the location
        specified by the `--prefix=DIR' flag during configuration (or
        instead of the default location, `hdf5'), you can do that
        by running the deploy script:

            $ bin/deploy NEW_DIR

        This will install HDF5 in NEW_DIR.  Alternately, you can do this
        manually by issuing the command:

            $ make install prefix=NEW_DIR

        where NEW_DIR is the new directory where you wish to install HDF5. 
        If you do not use the deploy script, you should run h5redeploy in
        the NEW_DIR/bin directory.  This utility will fix the h5cc, h5fc,
        and h5c++ scripts to reflect the new NEW_DIR location.

        The library can be used without installing it by pointing the
        compiler at the `src' and 'src/.libs' directories for include files and
        libraries. However, the minimum which must be installed to make
        the library publicly available is:

            The library:
                ./src/.libs/libhdf5.a

            The public header files:
                ./src/H5*public.h, ./src/H5public.h
                ./src/H5FD*.h except ./src/H5FDprivate.h,
                ./src/H5api_adpt.h

            The main header file:
                ./src/hdf5.h

            The configuration information:
                ./src/H5pubconf.h
        
            The support programs that are useful are:
                ./tools/h5ls/h5ls               (list file contents)
                ./tools/h5dump/h5dump           (dump file contents)
                ./tools/misc/h5repart           (repartition file families)
                ./tools/misc/h5debug            (low-level file debugging)
                ./tools/h5import/h5import       (imports data to HDF5 file)
                ./tools/h5diff/h5diff           (compares two HDF5 files)
                ./tools/gifconv/h52gif          (HDF5 to GIF converter)
                ./tools/gifconv/gif2h5          (GIF to HDF5 converter)


5. Using the Library
        Please see the "HDF5 User's Guide" and the "HDF5 Reference Manual":

            http://www.hdfgroup.org/HDF5/doc/

        Most programs will include <hdf5.h> and link with -lhdf5.
        Additional libraries may also be necessary depending on whether
        support for compression, etc., was compiled into the HDF5 Library.
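
        For example, a program could be compiled either with the h5cc
        script installed in the bin directory or by pointing the compiler
        at the installed files directly; the paths and file names below
        are only illustrative:

            $ /usr/local/hdf5/bin/h5cc -o myprog myprog.c

            Or

            $ cc -o myprog myprog.c -I/usr/local/hdf5/include \
                  -L/usr/local/hdf5/lib -lhdf5 -lz   # -lz only if zlib was used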

        A summary of the HDF5 installation can be found in the
        libhdf5.settings file in the same directory as the static and/or
        shared HDF5 Libraries.


6. Support
        Support is described in the README file.


*****************************************************************************
			    APPENDIX
*****************************************************************************

A. Warnings about compilers
        Output from the following compilers should be regarded as suspect
        when used to compile the HDF5 Library, especially if optimizations
        are enabled. In all cases, HDF5 attempts to work around the compiler
        bugs.

A.1. GNU (Intel platforms)
        Versions before 2.8.1 have serious problems allocating registers
        when functions contain operations on `long long' datatypes.

A.2. COMPAQ/DEC
        The V5.2-038 compiler (and possibly others) occasionally
        generates incorrect code for memcpy() calls when optimizations
        are enabled, resulting in unaligned access faults. HDF5 works
        around the problem by casting the second argument to `char *'.
        The Fortran module (5.4.1a) fails in compiling some Fortran
        programs.  Use 5.5.0 or higher.

A.3. SGI (Irix64 6.2)
        The Mongoose 7.00 compiler has serious optimization bugs and
        should be upgraded to MIPSpro 7.2.1.2m. Patches are available
        from SGI.

A.4. Windows/NT
        The Microsoft Win32 5.0 compiler is unable to cast unsigned long
        long values to doubles. HDF5 works around this bug by first
        casting to signed long long and then to double.

        A link warning: defaultlib "LIBC" conflicts with use of other libs
        appears for debug version of VC++ 6.0. This warning will not affect
        building and testing HDF5 Libraries. 


B. Large (>2GB) versus small (<2GB) file capability
        In order to read or write files that could potentially be larger
        than 2GB, it is necessary to use the non-ANSI `long long' data
        type on some platforms. However, some compilers (e.g., GNU gcc
        versions before 2.8.1 on Intel platforms) are unable to produce
        correct machine code for this datatype. 


C. Building and testing with other compilers
C.1.  Building and testing with Intel compilers
        When Intel compilers are used (icc or ecc), you will need to modify
        the generated "libtool" program after configuration is finished.
        On or around line 104 of the libtool file, there are lines which 
        look like:

            # How to pass a linker flag through the compiler.
            wl=""

        Change these lines to this:

            # How to pass a linker flag through the compiler.
            wl="-Wl,"

        UPDATE: This is now done automatically by the configure script. 
        However, if you still experience a problem, you may want to check this 
        line in the libtool file and make sure that it has the correct value.

      * To build the Fortran library using Intel compiler on Linux 2.4, 
        one has to perform the following steps:
          x Use the -fpp -DDEC$=DEC_ -DMS$=MS_ compiler flags to disable
            DEC and MS compiler directives in source files in the fortran/src, 
            fortran/test, and fortran/examples directories.
            E.g.,  setenv F9X 'ifc -fpp -DDEC$=DEC_ -DMS$=MS_'
            Do not use double quotes since $ is interpreted in them.

          x If version 6.0 of the Fortran compiler is used, the build fails in 
            the fortran/test directory and then in the fortran/examples 
            directory.  To proceed, edit the work.pcl files in those 
            directories to contain two lines:

                    work.pc
                    ../src/work.pc 

          x Do the same in the fortran/examples directory.

          x The problem with the work.pc files was resolved in version 7.0
            of the compiler.

       * To build the Fortran library on IA32, follow the steps described
         above, except that the DEC and MS compiler directives should be
         removed manually, or use a patch from the HDF FTP server:

             ftp://ftp.hdfgroup.org/HDF5/current/


C.2.  Building and testing with PGI compilers
        When PGI C and C++ compilers are used (pgcc or pgCC), you will need to 
        modify the generated "libtool" program after configuration is finished.
        On or around line 104 of the libtool file, there are lines which 
        look like this:

            # How to pass a linker flag through the compiler.
            wl=""

        Change these lines to this:

            # How to pass a linker flag through the compiler.
            wl="-Wl,"

        UPDATE: This is now done automatically by the configure script. However,
        if you still experience a problem, you may want to check this line in
        the libtool file and make sure that it has the correct value.

        To build the HDF5 C++ Library with pgCC (version 4.0 and later), set
        the environment variable CXX to "pgCC -tlocal"
            setenv CXX "pgCC -tlocal"
        before running the configure script.