path: root/test/vfd_swmr_bigset_writer.c
* Sleep 1/10 s between H5Dopen() tries. Make a couple of warning/error messages more clear/correct. (David Young, 2020-08-18, 1 file, -2/+6)
* Perform the dataset opens in the reverse order of their creation, and if H5Dopen fails, rapidly retry up to 9,999 times. Log H5Dopen failures, but no more than once every five seconds, to avoid spamming the terminal. With these changes, the reader may well try to open the last dataset before the writer has created it, but the reader now recovers instead of quitting with an error. It should only be necessary to retry opening the *last* dataset; all previous datasets should open on the first try if the last one is open. (David Young, 2020-08-18, 1 file, -5/+49)
* Add to the "bigset" writer a `-M` command-line option that enables the use of multiple files with virtual datasets. Add printf(3)-like arguments to vfd_swmr_create_fapl() for setting the shadow filename. Update all callers. (David Young, 2020-07-21, 1 file, -70/+110)
* Fix a copy-paste error in an error message. (David Young, 2020-07-10, 1 file, -1/+1)
* Move the dapl initialization to state_init and, if VDS is enabled, set the virtual view to "first missing." (David Young, 2020-07-10, 1 file, -7/+10)
* Fix a bug where I was trying to store `ndatasets * 4` source-dataset handles in 4 variables and, of course, failing. Refactor the dataspace/dataset initialization. (David Young, 2020-07-07, 1 file, -93/+155)
* Use native byte order unless big-endian is specified with the `-b` option. (David Young, 2020-07-02, 1 file, -12/+19)
* Add a VDS mode to the bigset test. (David Young, 2020-06-30, 1 file, -7/+197)
* Gather a couple of assertions. (David Young, 2020-06-26, 1 file, -2/+1)
* Create one dataset creation property list and one file dataspace and share them across all datasets/iterations. Extract common code into state_destroy(). (David Young, 2020-06-26, 1 file, -31/+45)
* When extending the dataset in one dimension, add columns instead of rows so that it is possible to produce a virtual dataset (VDS) variant of the test. (David Young, 2020-06-26, 1 file, -22/+23)
* Create a dataset access property list (dapl) that disables the chunk cache and apply it individually to each dataset instead of setting the chunk-cache parameters on the file. Alas, it didn't make any difference, but I'll keep the change. (David Young, 2020-06-19, 1 file, -5/+13)
* Close all of the datasets we opened. (David Young, 2020-06-16, 1 file, -0/+21)
* Limit every chunk cache to 1 slot and 1 kB so that the test doesn't run my dinky development server out of memory. (David Young, 2020-06-11, 1 file, -0/+3)
* Make the test more challenging: on every other step, read a chunk-sized region offset by 1 unit from a chunk boundary. (David Young, 2020-06-11, 1 file, -4/+8)
* Delete unused `state_t` members. NFCI. (David Young, 2020-05-27, 1 file, -9/+1)
* Join some lines. NFCI. (David Young, 2020-05-26, 1 file, -4/+2)
* Add an `-a steps` option: if steps != 0, add (or verify) an attribute on each dataset every `steps` steps. Update the usage message. Add a cast to `time_t` to quiet a compiler warning. Replace two occurrences of a debug statement in `verify_extensible_dset()` with one occurrence in `verify_chunk()`. Replace the anonymous constant `2` with `hang_back` and increase `hang_back` to 3. XXX Now that I've fixed a bug, reduce `hang_back` to 2 again. Verify datasets in the reverse of the order they are written, so that less time is spent re-verifying datasets written in the same step. (David Young, 2020-05-26, 1 file, -16/+85)
* Add a missing newline to a dbgf() statement. (David Young, 2020-05-15, 1 file, -1/+1)
* Make `-q` actually quiet the program. (David Young, 2020-05-15, 1 file, -1/+1)
* Make the personality detection more robust, as was done previously for vfd_swmr_zoo_writer. (David Young, 2020-05-15, 1 file, -4/+11)
* Allow the chunk size to be changed with the command-line options `-r rows` and `-c columns`. If the number of datasets is greater than the number of steps, then pause only between steps, not between individual datasets written/verified. Otherwise, pause between each dataset written/verified. (David Young, 2020-05-15, 1 file, -84/+152)
* Delete an extra line. NFCI. (David Young, 2020-05-09, 1 file, -1/+0)
* Add a missing return-value check. (David Young, 2020-05-09, 1 file, -0/+3)
* Take care not to leak property lists or dataspaces. (David Young, 2020-05-09, 1 file, -8/+12)
* Fix bugs in the dataset-dimensions checks. (David Young, 2020-05-07, 1 file, -4/+6)
* Create a reader for the extensible-datasets tests. (David Young, 2020-05-07, 1 file, -22/+231)
* Add my work-in-progress dataset test. It writes a handful of datasets that expand in one or two dimensions, depending on the setting of the `-d` option argument. (David Young, 2020-05-06, 1 file, -0/+401)