Commit message | Author | Age | Files | Lines
attribute: failures are expected.
close activities under the correct condition! Extract the reader-only open and
close code into helper routines.
if H5FD__vfd_swmr_header_deserialize() succeeds, then the header that was
passed in was actually initialized. This squashes a used-before-initialized
warning from GCC.
to close descriptor 0, later.
|
set the virtual view to "first missing."
compiler detect use-before-initialization. Use the safer `memset(p, ...,
sizeof(*p))` idiom.
H5F_t.
instead of cast to H5FD_t *.
multiple opens of the same file with VFD SWMR---i.e., twice for writing, or for
reading and for writing. In the long run, this will help me encapsulate more
of the SWMR functionality in the VFD, too.
the lower virtual file's `exc_owner` field. While I'm here, remove a
gratuitous assertion.
This is part of a changeset that helps us avoid creating multiple H5F_shared_t
for one file when virtual datasets are used with VFD SWMR. The old code for
deduplicating VFD SWMR H5F_shared_t instances did not work correctly with VFD
SWMR, so we'd end up with multiple H5F_shared_t all active on the same file.
member to all virtual files. Add a routine, H5FD_has_conflict(), that returns
true if a new virtual file is identical to an existing virtual file that has an
exclusive owner. Establish an exclusive owner for a VFD SWMR virtual file's
lower virtual file.
Rename bsdqueue.h to H5queue.h and install it, since it's used by H5FDpublic.h.
This is part of a changeset that helps us avoid creating multiple H5F_shared_t
for one file when virtual datasets are used with VFD SWMR. The old code for
deduplicating VFD SWMR H5F_shared_t instances did not work correctly with VFD
SWMR, so we'd end up with multiple H5F_shared_t all active on the same file.
handles in 4 variables and, of course, failing. Refactor the
dataspace/dataset initialization.
deep in H5Dwrite---project the *clipped* virtual selection instead of
the virtual selection:

    assertion "((src_space)->select.num_elem) == ((dst_space)->select.num_elem)"
    failed: file "../../../vchoi_fork/src/H5Sselect.c", line 2617, function
    "H5S_select_project_intersection"

with this backtrace:

    at /home/dyoung/plain-nbsd/src/lib/libc/gen/raise.c:48
    at /home/dyoung/plain-nbsd/src/lib/libc/stdlib/abort.c:74
    file=0xae9e3e80 "../../../vchoi_fork/src/H5Sselect.c", line=2617,
      function=0xae9e4ca0 <__func__.15686> "H5S_select_project_intersection",
      failedexpr=0xae9e0e54 "((src_space)->select.num_elem) ==
      ((dst_space)->select.num_elem)") at
      /home/dyoung/plain-nbsd/src/lib/libc/gen/assert.c:72
    dst_space=0xae26f0dc, src_intersect_space=0xae0b577c,
      new_space_ptr=0xbfb85fac, share_selection=true)
      at ../../../vchoi_fork/src/H5Sselect.c:2749
    type_info=type_info@entry=0xbfb86084,
      file_space=file_space@entry=0xae0b577c, source_dset=0xae24741c,
      io_info=<optimized out>) at ../../../vchoi_fork/src/H5Dvirtual.c:2784
    type_info=0xbfb86084, nelmts=256, file_space=0xae0b577c,
      mem_space=0xae26ec8c, fm=0xadf0401c)
      at ../../../vchoi_fork/src/H5Dvirtual.c:2873
    mem_type_id=216172782113783837, mem_space=0xae26ec8c,
      file_space=0xae0b577c, buf=0xae203808)
      at ../../../vchoi_fork/src/H5Dio.c:780
    mem_type_id=216172782113783837, mem_space_id=288230376151711754,
      file_space_id=288230376151711755, dxpl_id=792633534417207304,
      buf=0xae203808, req=0x0)
      at ../../../vchoi_fork/src/H5VLnative_dataset.c:206
    mem_type_id=216172782113783837, mem_space_id=288230376151711754,
      file_space_id=288230376151711755, dxpl_id=792633534417207304,
      buf=0xae203808, req=0x0, cls=<optimized out>)
      at ../../../vchoi_fork/src/H5VLcallback.c:2152
    mem_type_id=216172782113783837, mem_space_id=288230376151711754,
      file_space_id=288230376151711755, dxpl_id=792633534417207304,
      buf=0xae203808, req=0x0) at ../../../vchoi_fork/src/H5VLcallback.c:2186
    mem_type_id=216172782113783837, mem_space_id=288230376151711754,
      file_space_id=288230376151711755, dxpl_id=792633534417207304,
      buf=0xae203808) at ../../../vchoi_fork/src/H5Dio.c:313
share them across all datasets/iterations. Extract common code into
state_destroy().
so that it's possible to produce a virtual dataset (VDS) variant of the
test.
H5Fvfd_swmr_disable_end_of_tick, H5Fvfd_swmr_enable_end_of_tick().
(2) Tests for the above APIs.
set a non-zero default for the number of steps.
Assert index entries are in sorted order earlier in the loop over
old and new indices.
When looping over the remaining new index entries, just do
`entries_added++` to match the other loops.
In the final log entry in H5F_vfd_swmr_reader_end_of_tick(), mention
whether the call will exit with success or failure.
property lists---from the closed dataset to the reopened dataset. Now my
chunk-cache settings appear to survive H5Drefresh() calls.
apply it individually to each dataset instead of setting the chunk-cache
parameters on the file. Alas, it didn't make any difference, but I'll keep the
change.
ticks elapsed during API calls. Write the histogram to the swmr_stats
log outlet when the SWMR VFD closes.
H5F_vfd_swmr_reader_end_of_tick: delete superfluous assertions and
extract a common subexpression into an H5FD_t * variable.
Carry on with HGOTO_ERROR() cleanup. Delete superfluous parentheses to
reduce visual clutter. Delete superfluous casts. Delete out-of-date
comment: the index size is not fixed any longer.
statement-ify, changing

    HGOTO_ERROR(..., \
    )

to

    HGOTO_ERROR(...,
    );

Remove blank lines between if-clause and HGOTO_ERROR. Add some curly
braces to if-statements where that clarifies things.
NFCI.
that makes assertions fail.
Add an optional `close` method to the `H5D_chunk_ops_t`, and use that to
release "holds" on metadata cache (MDC) entries.
For extensible arrays and v2 B-trees, use the existing `dest`(roy)
method to implement `close`. For v1 B-trees and other chunk indices,
don't provide `close`: we cannot safely close the v1 B-tree index, and
the other indices don't have a meaningful presence in the MDC.
Revert my first attempt at making v1 B-tree chunk indices closeable
with `dest`.
Put my comment about the stopgap fix for VFD SWMR at the right place
in src/H5Dchunk.c.
the entries in a hash bucket after evicting tagged entries. Evicting
tagged entries can affect both entries before and after the current
entry in the bucket's linked list, so we cannot be sure that either
`entry_ptr` or `follow_ptr` is valid.
This stops the assertion (entry_ptr->page != page) ||
(entry_ptr->refreshed_in_tick == tick) from failing in the test
`testvfdswmr.sh many_small`.
H5D__chunk_index_close(), and call it in H5D__chunk_read() after reading
a chunked dataset. In this way, indices based on extensible arrays and
v2 B-trees do not leave pinned/tagged entries in the metadata cache that
we cannot evict/refresh when we load changes from the shadow file.
Make some changes to the v1 B-tree code that set the pointer to the
closed B-tree to NULL and, further, tolerate a NULL pointer where
previously that was impossible.
my dinky development server out of memory.
region offset by 1 unit from a chunk boundary.
in the next expression.
VFD SWMR writer, so that the reader does not lose track of the real
shadow-index content.