author | MuQun Yang <ymuqun@hdfgroup.org> | 2006-08-09 03:00:11 (GMT) |
---|---|---|
committer | MuQun Yang <ymuqun@hdfgroup.org> | 2006-08-09 03:00:11 (GMT) |
commit | 6916816a563532fddc3699a6d5e4adb57212968d (patch) | |
tree | 70121257e539ec369455ebd43119873fd96c7489 /src/H5FDmpi.h | |
parent | d17d42acd0fbba4b3433937f448c99930553b038 (diff) | |
[svn-r12553] This check-in includes the following parts of the parallel optimization code:
1. Provide another option for users to do independent IO with MPI file setview (the setview is done collectively); a usage sketch follows this list.
2. When users request collective IO, use independent IO with MPI file setview if we find collective IO is not good for the application in the IO-per-chunk (multi-chunk IO) case. Previously we used pure independent IO, which actually performed small IO (one IO per row) in this case. A recent performance study suggested that independent IO with file setview can achieve significantly better performance than collective IO when not many processes participate in the IO.
3. For applications that explicitly choose to do collective IO in the per-chunk case, the library won't do any optimization (gather/broadcast) operations; it simply passes the collective IO request to MPI-IO.
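
As a usage illustration, here is a minimal sketch of how an application might select the new behavior through a dataset-transfer property list. Only part of the optimization code is in this check-in, so the H5Pset_dxpl_mpio_collective_opt() call is an assumption about the API that accompanies this work (it requires a parallel HDF5 build); error checking is omitted.

```c
#include "hdf5.h"

/* Sketch (assumed API): request collective transfer, but tell the library to
 * perform the actual I/O independently per process while still setting the
 * MPI file view collectively. */
hid_t make_xfer_plist(void)
{
    hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);

    /* Ask for collective I/O on the dataset transfer. */
    H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);

    /* Within that request, choose independent I/O with a collectively set
     * file view (H5FD_MPIO_INDIVIDUAL_IO from the new enum in the diff below). */
    H5Pset_dxpl_mpio_collective_opt(dxpl, H5FD_MPIO_INDIVIDUAL_IO);

    return dxpl;
}
```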
Tested at copper, kagiso, heping, mir, and tungsten (cmpi and mpich).
Kagiso is using LAM; even the t_mpi test was broken there.
The cchunk10 test failed at heping and mir. I suspected it was an MPICH problem. Will investigate later.
Everything passed at copper.
At tungsten: the old cmpi bug (failure at esetw) is still there. Other tests passed.
Some sequential fheap tests failed at kagiso.
Diffstat (limited to 'src/H5FDmpi.h')
-rw-r--r-- | src/H5FDmpi.h | 14 |
1 file changed, 10 insertions, 4 deletions
```diff
diff --git a/src/H5FDmpi.h b/src/H5FDmpi.h
index 6c2a2c5..87eba64 100644
--- a/src/H5FDmpi.h
+++ b/src/H5FDmpi.h
@@ -21,8 +21,8 @@
 #ifndef H5FDmpi_H
 #define H5FDmpi_H
 
-/***** Macros for One linked collective IO case. *****/
-/* The default value to do one linked collective IO for all chunks.
+/***** Macros for One linked collective IO case. *****/
+/* The default value to do one linked collective IO for all chunks.
    If the average number of chunks per process is greater than this value,
    the library will create an MPI derived datatype to link all chunks to do collective IO.
    The user can set this value through an API. */
@@ -30,11 +30,11 @@
 #define H5D_ONE_LINK_CHUNK_IO_THRESHOLD 0
 /***** Macros for multi-chunk collective IO case. *****/
 /* The default value of the threshold to do collective IO for this chunk.
-   If the average number of processes per chunk is greater than the default value,
+   If the average percentage of processes per chunk is greater than the default value,
    collective IO is done for this chunk.
 */
-#define H5D_MULTI_CHUNK_IO_COL_THRESHOLD 50
+#define H5D_MULTI_CHUNK_IO_COL_THRESHOLD 60
 
 /* Type of I/O for data transfer properties */
 typedef enum H5FD_mpio_xfer_t {
     H5FD_MPIO_INDEPENDENT = 0,          /*zero is the default*/
@@ -48,6 +48,12 @@ typedef enum H5FD_mpio_chunk_opt_t {
     H5FD_MPIO_CHUNK_MULTI_IO
 } H5FD_mpio_chunk_opt_t;
 
+/* Type of I/O for data transfer properties */
+typedef enum H5FD_mpio_collective_opt_t {
+    H5FD_MPIO_COLLECTIVE_IO = 0,
+    H5FD_MPIO_INDIVIDUAL_IO         /*zero is the default*/
+} H5FD_mpio_collective_opt_t;
+
 #ifdef H5_HAVE_PARALLEL
```
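
For orientation, a small illustrative sketch of how the multi-chunk threshold above is meant to be read; the helper name and the way the percentage is computed are assumptions made for illustration, not code from this check-in.

```c
#include <stdbool.h>

/* Illustrative only: per chunk, if the percentage of processes that touch the
 * chunk exceeds H5D_MULTI_CHUNK_IO_COL_THRESHOLD (now 60), collective IO is
 * used for that chunk; otherwise independent IO with a collectively set MPI
 * file view is used.  The helper below is hypothetical. */
#define H5D_MULTI_CHUNK_IO_COL_THRESHOLD 60   /* percent, from the diff above */

static bool chunk_uses_collective_io(int nprocs_touching_chunk, int nprocs_total)
{
    int percent = (100 * nprocs_touching_chunk) / nprocs_total;
    return percent > H5D_MULTI_CHUNK_IO_COL_THRESHOLD;
}
```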