From 61f24e38e705c0cc88aceabcd749dc4fe2f220d1 Mon Sep 17 00:00:00 2001
From: Albert Cheng <acheng@hdfgroup.org>
Date: Mon, 17 Apr 2006 02:15:57 -0500
Subject: [svn-r12265] Purpose: Removed references and files for the
 installation of TFLOPS (machine retired) and for the special SRB and GASS
 VFDs, both of which have also been retired.

---
 release_docs/INSTALL        |  20 +++--
 release_docs/INSTALL_TFLOPS | 175 --------------------------------------------
 release_docs/INSTALL_VFL    | 131 ---------------------------------
 3 files changed, 9 insertions(+), 317 deletions(-)
 delete mode 100644 release_docs/INSTALL_TFLOPS
 delete mode 100644 release_docs/INSTALL_VFL

diff --git a/release_docs/INSTALL b/release_docs/INSTALL
index e5ae047..924c29b 100644
--- a/release_docs/INSTALL
+++ b/release_docs/INSTALL
@@ -13,9 +13,8 @@
 	2.4. Windows/NT
 
 	3. Quick installation
-	3.1. TFLOPS
-	3.2. Windows
-	3.3. Certain Virtual File Layer(VFL)
+	3.1. Windows
+	3.2. Red Storm (Cray XT3)
 
 	4. HDF5 dependencies
 	4.1. Zlib
@@ -121,17 +120,16 @@
 	    $ make check
 	    $ make install
 
-3.1. TFLOPS
-	Users of the Intel TFLOPS machine, after reading this file,
-	should see the INSTALL_TFLOPS for more instructions.
-
-3.2. Windows
+3.1. Windows
 	Users of Microsoft Windows should see the INSTALL_Windows for
 	detailed instructions.
 
-3.3. Certain Virtual File Layer(VFL)
-	If users want to install with special Virtual File Layer(VFL),
-	please go to read INSTALL_VFL file.  
+3.2. Red Storm (Cray XT3)
+	Users of the Red Storm machine, after reading this file, should read
+	the Red Storm section in the INSTALL_parallel file for
+	machine-specific instructions.  The same instructions would probably
+	work for other Cray XT3 systems, but they have not been verified.
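+
+	As a sketch only (the cc compiler wrapper here is an assumption;
+	INSTALL_parallel has the verified commands), a parallel build on a
+	Cray XT3 system generally follows the usual pattern, using the Cray
+	compiler wrapper as the MPI C compiler:
+
+	    $ CC=cc ./configure
+	    $ make
+	    $ make check
+	    $ make install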
 
 
 4. HDF5 dependencies
diff --git a/release_docs/INSTALL_TFLOPS b/release_docs/INSTALL_TFLOPS
deleted file mode 100644
index 66dafc2..0000000
--- a/release_docs/INSTALL_TFLOPS
+++ /dev/null
@@ -1,175 +0,0 @@
-FOR THE INTEL TFLOPS MACHINE:
-
-Below are the step-by-step procedures for building, testing, and
-installing the parallel and sequential versions of the HDF5 library.
-
------------------
-Software locations
-------------------
-The zlib compression library is installed in /usr/community/hdf5/ZLIB.
-The latest version is zlib v1.1.4.
-
-The mpich library, including mpi-io support, is now supported by the 
-TFLOPS system staff.  Check Sasn100:/usr/local/FAQ/R4.4.0_Release_Notes
-for details.  A modified version of mpicc was created to simplify the
-command.  It is installed as /usr/community/hdf5/tflop-mpich/bin/mpicc.
-
----------------
-gmake recommended
----------------
-Both Sasn100 and Janus have multiple versions of the make command.
-We recommend the use of GNU gmake, which has some features
-(e.g., -j and --srcdir support) that make life easier.
-
----------------
-Parallel HDF5:
----------------
-
-The setup process for building the parallel version of the HDF5 library for
-the ASCI Red machine is very similar to that of the sequential version.
-Since TFLOPS does not support MPI-IO natively, we have prepared a
-shell-script file that configures with the appropriate MPI library.
-
-Assuming you have already unpacked the HDF5 tar-file into the 
-directory <hdf5>, follow the steps below:
-FROM SASN100,
-
-1) cd <hdf5>
-
-2) CC=/usr/community/hdf5/tflop-mpich/bin/mpicc \
-      ./configure --host=tflops --with-zlib=/usr/community/hdf5/ZLIB
-
-   Alternately, you may specify the host explicitly:
-
-   CC=/usr/community/hdf5/tflop-mpich/bin/mpicc \
-      ./configure --host=i386-intel-osf1 --with-zlib=/usr/community/hdf5/ZLIB
-
-   Skip the "--with-zlib=..." option if you do not wish to include the zlib
-   compression feature.  Without the zlib compression feature, the library
-   will not be able to access zlib compressed datasets.
-
-   You may safely ignore the WARNING message,
-    =========
-    configure: WARNING: If you wanted to set the --build type, don't use --host.
-	If a cross compiler is detected then cross compile mode will be used.
-    =========
-   You may add the option "--build=i386-intel-osf1" to get rid of the WARNING.
-
-   (The previous bugs in src/Makefile and test/Makefile have been resolved.
-    You don't need to edit them any more.)
-
-3) gmake H5detect
-
-
-FROM JANUS, 
-
-4) cd <hdf5>
-
-5) gmake H5Tinit.c
-
-
-FROM SASN100,
-
-6) gmake
-
-
-When everything is finished compiling and linking,
-FROM JANUS, 
-
-7) gmake check 
-   (We have not encountered the following problem for a year.)
-   Sometimes the "gmake check" fails in the sub-directories of test
-   or tools with a message such as "print not found".  This happens when
-   the "gmake" on Janus decides some binaries need to be recompiled.
-   The easiest way to fix it is
-   FROM SASN100
-   cd <hdf5>/test	# or cd <hdf5>/tools
-   gmake clean; gmake   # re-make all binary
-
-
-Once satisfied with the parallel test results, as long as you 
-have the correct permission,
-FROM SASN100,
-
-8) gmake install
-
-
----------------
-Sequential HDF5:
----------------
-(**NOTE** We have stopped testing sequential HDF5 for the TFLOPS machine
-since there is little practical value in building sequential applications
-for it.  The instructions below are kept mostly for historical purposes.)
-
-The setup process for building the sequential HDF5 library for the
-ASCI Red machine requires coordinating steps between sasn100 and
-janus.  Though janus can compile, it is better to build from sasn100,
-which has more complete build tools and runs faster.  It is also
-anti-social to tie up janus with compiling.  The build still requires
-janus because one of the steps executes a program to find out the
-run-time characteristics of the TFLOPS machine.
-
-Assuming you have already unpacked the HDF5 tar-file into the 
-directory <hdf5>, follow the steps below:
-
-FROM SASN100,
-
-1) cd <hdf5>
-
-2) ./configure --host=tflops --with-zlib=/usr/community/hdf5/ZLIB
-
-   Alternately, you may specify the host explicitly:
-
-   ./configure --host=i386-intel-osf1 --with-zlib=/usr/community/hdf5/ZLIB
-
-   Skip the "--with-zlib=..." option if you do not wish to include the zlib
-   compression feature.  Without the zlib compression feature, the library
-   will not be able to access zlib compressed datasets.
-
-   You may safely ignore the WARNING message,
-    =========
-    configure: WARNING: If you wanted to set the --build type, don't use --host.
-	If a cross compiler is detected then cross compile mode will be used.
-    =========
-   You may add the option "--build=i386-intel-osf1" to get rid of the WARNING.
-
-   (The previous bugs in src/Makefile and test/Makefile have been resolved.
-    You don't need to edit them any more.)
-
-3) gmake H5detect
-
-
-FROM JANUS, 
-
-4) cd <hdf5>
-
-5) gmake H5Tinit.c
-
-
-FROM SASN100,
-
-6) gmake
-
-
-When everything is finished compiling and linking,
-you can run the tests by
-FROM JANUS, 
-
-7) gmake check 
-     Sometimes the "gmake check" fails in the sub-directories of test
-     or tools with a message such as "print not found".  This happens when
-     the "gmake" on Janus decides some binaries need to be recompiled.
-     The easiest way to fix it is
-     FROM SASN100
-     cd <hdf5>/test	# or cd <hdf5>/tools
-     gmake clean; gmake   # re-make all binary
-
-
-Once satisfied with the test results, you can install
-the software by
-FROM SASN100,
-
-8) gmake install
-
-
diff --git a/release_docs/INSTALL_VFL b/release_docs/INSTALL_VFL
deleted file mode 100644
index 4385c26..0000000
--- a/release_docs/INSTALL_VFL
+++ /dev/null
@@ -1,131 +0,0 @@
-		Installation Instructions for HDF5
-                with Different Virtual File Layer 
-              
-Support for the SRB and GASS drivers was removed.  See the RELEASE.txt file.
-                   *              *             *
-
-This file contains installation instructions for HDF5 with certain Virtual File
-Layer drivers to handle file I/O.  SRB and Globus-GASS are currently
-documented.
-
-
-
-                         ---   Part I.   SRB   ---
-I. Overview
------------
-This part contains instructions for remotely accessing HDF5 through SRB.
-SRB version 1.1.7 has been tested on the Sun Solaris 2.7 platform.  If you
-have difficulties installing the software on your system, please send mail
-to me (Raymond Lu) at
-
-        slu@ncsa.uiuc.edu
-
-First, you must obtain and unpack the HDF5 source as described in the file
-INSTALL.  You need the SRB library (client part) installed.  You should also
-have access to an SRB server.
-
-
-The Storage Resource Broker (SRB) from the San Diego Supercomputer Center is
-client-server middleware that provides a uniform interface for connecting to
-heterogeneous data resources over a network and accessing replicated data
-sets.  SRB, in conjunction with the Metadata Catalog (MCAT), provides a way
-to access data sets and resources based on their attributes rather than
-their names or physical locations.  Their webpage is at
-http://www.npaci.edu/Research/DI/srb.
-
-HDF5 is built on top of SRB as a client to remotely access files on an SRB
-server through SRB.  Right now, HDF-SRB only supports the low-level file
-transfer of SRB.  The MCAT part is not supported yet.  Low-level file
-transfer means files are treated just like Unix-type files: they can be
-read, written, and appended.  Partial access (reading and writing a chunk
-of a file without transferring the whole) is also supported.
-
-
-II. Installation Steps
-----------------------
-The installation steps are similar to the ones in the INSTALL file:
-
-1. Run 'configure' with the SRB options:
-   configure --with-srb=$SRB/include,$SRB/lib
-   where $SRB is the directory in which the SRB library is installed.
-
-   For example, below is a script file to run 'configure':
-	#! /bin/sh
-	# how to configure to use the SRB
-
-	SRB_DIR=/afs/ncsa.uiuc.edu/projects/hdf/users/slu/srb_install
-	configure --with-srb=$SRB_DIR/include,$SRB_DIR/lib
-
-2. Run 'make'
-
-3. Run 'make check'
-
-4. Run 'make install'
-
-5. Run the test programs (optional):
-   Go to the test directory (cd test) and run srb_write, srb_read, and
-   srb_append.  These tests have already been run as part of step 3.
-
-   srb_write:  Connect to SRB server, write an HDF5 file with an integer 
-               dataset to SRB server.
-   srb_read:   Connect to SRB server, read part of HDF5 file on the SRB server.
-   srb_append: Connect to SRB server, append an integer dataset to an existing
-               HDF5 file on the SRB server.
-
-6. To use HDF-SRB, please read the comments in srb_write.c, srb_read.c, and
-   srb_append.c in the hdf5/test directory.
-
-
-
-                      ---   Part II.   Globus-GASS   ---
-
-I. Overview
------------
-This part contains instructions for remotely accessing HDF5 through
-Globus-GASS.  The SGI IRIX64 (and IRIX) 6.5 platforms have been tested.  If
-you have difficulties installing the software on your system, please send
-mail to me (Raymond Lu) at
-        slu@ncsa.uiuc.edu
-
-First, you must obtain and unpack the HDF5 source as described in the file 
-INSTALL.  You need the Globus 1.1.x and SSL (which should have come with
-Globus) packages.
-
-HDF5 is built on top of Globus-GASS (1.1.x) to handle remote access.
-Globus-GASS (1.1.x) only supports the HTTP and HTTPS protocols for whole-file
-reads and writes.  More features may be added in the future.
-
-II. Installation Steps
-----------------------
-The installation steps are similar to the ones in the INSTALL file:
-
-1. Run 'configure' with the SSL and GASS options:
-   configure --with-ssl=$SSL/lib --with-gass=$GASS/include,$GASS/lib
-   where $SSL is your SSL installation directory and $GASS is your Globus
-   installation directory.
-
-   For example, below is a script file to run 'configure':
-	#! /bin/sh
-	# how to configure to use the Globus-GASS(1.1.x)
-
-	GASS_DIR=/usr/local/globus-install-1.1.1/development/mips-sgi-irix6.5-64_nothreads_standard_debug
-	SSL_LIB=/usr/local/ssl/lib
-
-	configure --with-ssl=$SSL_LIB --with-gass=$GASS_DIR/include,$GASS_DIR/lib
-
-2. Run 'make'
-
-3. Run 'make check'
-
-4. Run 'make install'
-
-5. Run the test program:
-   There is one read test program called 'gass_read' in the 'test'
-   directory.  It does a whole-file read through the HTTP protocol.  The
-   URL is hard-coded as
-      http://hdf.ncsa.uiuc.edu/GLOBUS/a.h5
-
-   Writing really depends on your web server.  You have to set up your
-   server in the right way to accept writes to files.  We have tested it
-   using the Apache Server (1.3.12) without authentication.  If you need
-   more details about our testing, please contact us.  Globus suggests
-   using their GASS server.
-
-   There is another program called 'gass_append' used for experiments.