Commit message log
* with log outlet `pb_access_sizes`.
* VFD SWMR writer, so that the reader does not lose track of the real
  shadow-index content.
* through the lower VFD. Update error message to match.
* when handling an I/O request on a metadata entry that has been sub-allocated
  from a larger file space allocation (i.e., fixed and extensible arrays), and
  that crosses at least one page boundary.
  This required modifying the metadata cache to provide the type of the
  metadata cache entry in the current I/O request. For now, this is done
  with a function call. Once we are sure this works, it may be appropriate
  to convert this to a macro, or to add a flags parameter to the H5F block
  read/write calls.
  Also updated the metadata cache to report whether a read request is
  speculative -- again via a function call. This allowed me to remove the
  last-address static variable from H5PB_read(), which is necessary to
  support multiple files opened in VFD SWMR mode.
  Also rewrote H5PB_remove_entries() to handle release of large metadata
  file space allocations that have been sub-allocated into multiple
  metadata entries, and modified H5MF__xfree_impl() to invoke
  H5PB_remove_entries() whenever the page buffer is enabled and the size
  of the space to be freed is a page or larger.
  Tested serial / debug on charis and Jelly.
  Found a bug in H5MF__xfree_impl(), in which the call to
  H5PB_remove_entries() is skipped due to HGOTO_DONE calls earlier in the
  function. While the obvious fix is to move the call earlier in the
  function, it is best to consult with Vailin first, as there is much
  going on and it would be best to avoid making the situation worse. If
  nothing else, there are some error management issues.
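  A minimal illustration of the query interface described above, as stand-alone C with invented names and types (the real code lives in the metadata cache and page buffer and uses HDF5's internal structures):

      #include <stdbool.h>

      /* Hypothetical stand-in for the metadata cache's bookkeeping about
       * the I/O request it is currently issuing. */
      typedef enum {
          ENTRY_FIXED_ARRAY,
          ENTRY_EXTENSIBLE_ARRAY,
          ENTRY_OTHER
      } entry_type_t;

      typedef struct {
          entry_type_t curr_io_type;          /* type behind the current I/O */
          bool         curr_read_speculative; /* is the current read speculative? */
      } cache_t;

      /* Function-call interface; as the message notes, this could later become
       * a macro or a flags parameter on the block read/write calls. */
      static entry_type_t
      cache_entry_type_of_current_io(const cache_t *cache)
      {
          return cache->curr_io_type;
      }

      static bool
      cache_current_read_is_speculative(const cache_t *cache)
      {
          return cache->curr_read_speculative;
      }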
* to keep the cumulative bytes in the tick list up-to-date. That prevents
  an assertion failure later.
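  A small sketch of the bookkeeping implied here, with invented field and function names (the real counters live in the VFD SWMR tick-list code):

      #include <stddef.h>

      typedef struct {
          size_t size;    /* current size of the page-buffer entry */
      } pb_entry_t;

      typedef struct {
          size_t nbytes;  /* cumulative bytes of all entries on the tick list */
      } tick_list_t;

      /* When an entry already on the tick list changes size, adjust the
       * cumulative byte count by the difference so that a later sanity
       * assertion over the list still holds. */
      static void
      resize_tick_list_entry(tick_list_t *tl, pb_entry_t *pe, size_t new_size)
      {
          tl->nbytes -= pe->size;
          tl->nbytes += new_size;
          pe->size = new_size;
      }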
* H5PB_remove_entries(), to remove *all* pages overlapped by the freed
  space, instead of just the first one.
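  The page arithmetic involved, sketched as stand-alone C with hypothetical names (the real routine is H5PB_remove_entries()):

      #include <stdint.h>
      #include <stdio.h>

      #define PAGE_SIZE 4096u

      /* Hypothetical eviction hook standing in for the page buffer's own
       * entry-removal machinery. */
      static void
      drop_page(uint64_t page)
      {
          printf("dropping page %llu\n", (unsigned long long)page);
      }

      /* Remove every page overlapped by the freed region [addr, addr + len),
       * not just the page that contains addr. */
      static void
      remove_overlapped_pages(uint64_t addr, uint64_t len)
      {
          uint64_t first, last, page;

          if (len == 0)
              return;

          first = addr / PAGE_SIZE;
          last  = (addr + len - 1) / PAGE_SIZE;   /* inclusive */

          for (page = first; page <= last; page++)
              drop_page(page);
      }

      int
      main(void)
      {
          /* A four-page free that starts mid-page overlaps five pages (2..6). */
          remove_overlapped_pages(2 * PAGE_SIZE + 100, 4 * PAGE_SIZE);
          return 0;
      }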
* log only the global heap-type reads and writes.)
* pagebuffer entry size. If the sizes are different, then release the old
  shadow space (using the correct size) and set the shadow page number to
  0 so that H5F_update_vfd_swmr_metadata_file() will allocate new shadow
  space with the right size.
  Vailin says that this fixes the bug she found, where a 4096-byte buffer
  allocated by H5MV_alloc() is released with H5MV_free() as if it were
  8192 bytes long.
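  A simplified sketch of that fix, with made-up structure and function names (the real code compares the shadow-index entry's recorded length with the page-buffer entry's size and frees through the H5MV routines):

      #include <stdint.h>
      #include <stdio.h>

      /* Hypothetical stand-ins for a shadow-index entry and the shadow-file
       * free-space manager. */
      typedef struct {
          uint64_t shadow_page;   /* 0 means "no shadow space allocated" */
          uint32_t length;        /* size the shadow space was allocated with */
      } shadow_entry_t;

      static void
      shadow_free(uint64_t page, uint32_t length)
      {
          printf("freeing %u bytes at shadow page %llu\n",
                 (unsigned)length, (unsigned long long)page);
      }

      /* If the page-buffer entry's size no longer matches the shadow space,
       * free the old space using its *old* size and clear the page number so
       * that the metadata-file updater allocates fresh space of the right size. */
      static void
      reconcile_shadow_size(shadow_entry_t *se, uint32_t pb_entry_size)
      {
          if (se->shadow_page != 0 && se->length != pb_entry_size) {
              shadow_free(se->shadow_page, se->length);  /* old size, not new */
              se->shadow_page = 0;
              se->length      = pb_entry_size;
          }
      }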
* `hlog_fast(lengthen_pbentry,`. This quiets one of the VFD SWMR tests.
* my investigation of shadow-file freespace leaks.
  Copy the tick number from the H5F_shared_t to a temporary variable and
  use the temporary instead of spelling out `shared->tick_num`.
  Assert that every entry on the tick list is dirty, since clean entries
  will not appear on the tick list. Mark each corresponding shadow-index
  entry dirty, and set its "tick of last change" to the current tick and
  "tick of last flush" to 0. This makes the scan for entries that do not
  appear on the tick list more believable.
  Lower a staircase in the loop that scans for shadow-index entries that
  did not appear on the tick list. Log indices that are marked for
  reclamation.
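  A rough sketch of the per-entry bookkeeping described above, using invented field and function names:

      #include <assert.h>
      #include <stdbool.h>
      #include <stdint.h>

      typedef struct {
          bool dirty;     /* page-buffer entries on the tick list must be dirty */
      } pb_entry_t;

      typedef struct {
          bool     is_dirty;
          uint64_t tick_of_last_change;
          uint64_t tick_of_last_flush;
      } shadow_idx_entry_t;

      /* For each page-buffer entry on the tick list, update the matching
       * shadow-index entry: it changed in the current tick and has not been
       * flushed yet. */
      static void
      note_tick_list_entry(shadow_idx_entry_t *ie, const pb_entry_t *pe,
                           uint64_t tick_num)
      {
          assert(pe->dirty);  /* clean entries never appear on the tick list */

          ie->is_dirty            = true;
          ie->tick_of_last_change = tick_num;
          ie->tick_of_last_flush  = 0;
      }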
* page number to 0 so that we cannot free it again by accident.
* assert a pointer we just dereferenced is non-NULL.
* pagebuffer reads and writes: pbio, pbrd, pbwr. In H5PB_read() and
  H5PB_write(), log only global heap accesses, for now.
* with multiple pages. Update statistics to maintain consistency.
  Refactor a bit: in H5PB__write_meta(), move code that appears in both
  the if- and else-branches to before or after the if-else and
  de-duplicate it.
  In H5PB_vfd_swmr__update_index(), always update the length of a
  shadow-index entry to the current size of its corresponding page-table
  entry.
  In H5PB_vfd_swmr__update_index(), disregard shadow-index entries that
  await garbage collection.
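  A condensed sketch of the update-index loop changes (skip garbage entries, refresh lengths), with simplified, invented types; in the real code the shadow-index record and the page-buffer entry live in separate structures:

      #include <stdbool.h>
      #include <stddef.h>
      #include <stdint.h>

      typedef struct {
          bool     garbage;       /* deleted entry awaiting garbage collection */
          uint32_t index_length;  /* length recorded in the shadow index */
          uint32_t pb_entry_size; /* current size of the page-buffer entry */
      } slot_t;

      static void
      update_index_lengths(slot_t *slots, size_t nslots)
      {
          size_t i;

          for (i = 0; i < nslots; i++) {
              if (slots[i].garbage)
                  continue;       /* disregard entries awaiting collection */
              slots[i].index_length = slots[i].pb_entry_size;
          }
      }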
* H5FD_MEM_GHEAP, that's an out-of-date assumption.
* piece in the beginning, one or more full pages, and whatever is left
  over at the end. Passes all of our tests.
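  The split itself is straightforward; here is a stand-alone illustration (the callback and names are hypothetical, not the VFD's actual I/O routine):

      #include <stdint.h>
      #include <stdio.h>

      #define PAGE_SIZE 4096u

      static void
      do_piece(uint64_t addr, uint64_t len)
      {
          printf("piece at %8llu, %5llu bytes\n",
                 (unsigned long long)addr, (unsigned long long)len);
      }

      /* Split [addr, addr + len) into a partial piece up to the first page
       * boundary, zero or more full pages, and whatever is left over. */
      static void
      split_io(uint64_t addr, uint64_t len)
      {
          uint64_t head = (PAGE_SIZE - addr % PAGE_SIZE) % PAGE_SIZE;

          if (head > len)
              head = len;
          if (head > 0) {
              do_piece(addr, head);
              addr += head;
              len  -= head;
          }

          while (len >= PAGE_SIZE) {   /* full pages */
              do_piece(addr, PAGE_SIZE);
              addr += PAGE_SIZE;
              len  -= PAGE_SIZE;
          }

          if (len > 0)                 /* leftover piece at the end */
              do_piece(addr, len);
      }

      int
      main(void)
      {
          split_io(1000, 3 * PAGE_SIZE + 500); /* head, two full pages, tail */
          return 0;
      }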
* condition, reducing diagnostic checks, single-spacing, etc.
* couldn't change in the meantime, but do be paranoid and re-assert.
* time in H5PB_dest(). While we're in H5PB_dest(), mark deleted
  shadow-index entries as "garbage" and skip the O(n) copy of shadow-index
  entries.
  Rename the shadow index-entry member `moved_to_hdf5_file` to
  `moved_to_lower_file` while I'm in here (NFCI).
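  A tiny sketch of the contrast (invented names): removing an index entry by closing the gap is O(n) per deletion, while marking it as garbage defers that cost to a later pass:

      #include <stdbool.h>
      #include <stddef.h>
      #include <stdint.h>
      #include <string.h>

      typedef struct {
          uint64_t page;
          bool     garbage;   /* deleted, awaiting garbage collection */
      } idx_entry_t;

      /* O(n): close the gap by shifting the tail of the array down by one. */
      static void
      remove_by_shift(idx_entry_t *idx, size_t *nentries, size_t i)
      {
          memmove(&idx[i], &idx[i + 1], (*nentries - i - 1) * sizeof(idx[0]));
          (*nentries)--;
      }

      /* O(1): just mark the slot; scans skip garbage entries and a later
       * pass reclaims them. */
      static void
      remove_by_marking(idx_entry_t *idx, size_t i)
      {
          idx[i].garbage = true;
      }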
* implementation, rename hlog_log() to hlog(), hlog_vlog() to vhlog(),
  et cetera. Rename hlog_lazy() to hlog_fast().
  Define some log sinks and use them in the page buffer and in VFD SWMR.
* Delete a comment on a closing curly brace.
* Add to the H5F_shared_t (!) a new member that records where in the
  shadow file the index should be written.
  Allocate shadow filespace for the header and the index separately so
  that the index can float. Update tests to match the expected original
  location of the index.
  Introduce vfd_swmr_enlarge_shadow_index(), a routine that allocates
  space in the shadow file for a new index that has (up to) twice as many
  entries as the old index, allocates a new in-core index of the same
  size, and copies the old in-core index to the new. Call
  vfd_swmr_enlarge_shadow_index() in H5PB_vfd_swmr__update_index() when
  the in-core index has too few slots.
  In the comment at the top of H5FD__vfd_swmr_load_hdr_and_idx(), describe
  the protocol it now follows when it reads the shadow header and index.
  Delete some dead code in the function and add a bit of diagnostic code.
  TBD: quiet the diagnostic code.
  In H5F_vfd_swmr_init(), follow the protocol: write the index first, then
  the header.
  Modify property-list checks and tests to reserve no fewer than two pages
  at the front of the shadow file for the header and index.
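  A simplified sketch of the enlargement step (stand-alone C with invented types; the shadow-file space allocation and the index-first, header-second write order are left out):

      #include <stdint.h>
      #include <stdlib.h>
      #include <string.h>

      typedef struct {
          uint64_t hdf5_addr;
          uint64_t shadow_page;
          uint32_t length;
      } idx_entry_t;

      typedef struct {
          idx_entry_t *entries;
          size_t       num_used;
          size_t       num_alloc;
      } shadow_index_t;

      /* Grow the in-core index to twice as many slots and copy the old
       * entries over; returns -1 on allocation failure. */
      static int
      enlarge_shadow_index(shadow_index_t *idx)
      {
          size_t       new_alloc   = idx->num_alloc * 2;
          idx_entry_t *new_entries = calloc(new_alloc, sizeof(*new_entries));

          if (new_entries == NULL)
              return -1;

          memcpy(new_entries, idx->entries,
                 idx->num_used * sizeof(*new_entries));
          free(idx->entries);
          idx->entries   = new_entries;
          idx->num_alloc = new_alloc;
          return 0;
      }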
* an H5F_shared_t because the new routine that will relocate the index
  (which will be in a future commit) has to pass an H5F_t to the filespace
  allocator.
* because the H5PB__evict_entry() call should have already done that.
  Instead, just assert() that the index entry is not present.
* names. While I am here, do not copy the last element of the index over
  the element that's being deleted, because in the very next step I'm
  shifting all elements over by one.
* shadow_image_defer_free(), which is also a better description of what
  the routine does.
* Delete the little-used free-list length, dl_len, and just count the
  list entries when diagnostic code needs the length.
  Extract the code that defers freeing a shadow image into a new
  subroutine, `vfd_swmr_idx_entry_defer_free()`.
  Rename the type `deferred_free_head_t` to `deferred_free_queue_t`.
  Remove the disused H5F__LL_{REMOVE,PREPEND} macros.
  Add some diagnostic code and #if 0'd assertions.
  Change `qsort(ptr, n, sizeof(type), cmp)` to
  `qsort(ptr, n, sizeof(*ptr), cmp)`.
  Use a `continue` statement to lower a staircase in
  H5F_update_vfd_swmr_metadata_file().
  Add vfd_swmr_mdf_idx_entry_remove() to delete a shadow-index entry and
  add the image at that entry to a deferred-free list. Call it whenever a
  page is evicted.
  Update the comment in H5PB_remove_entry() that asks whether we need to
  remove shadow-index entries: now we *do* remove them. Remove shadow-index
  entries in H5PB__evict_entry(). Also mention in the comment that the
  index-entry removal performed by H5PB__evict_entry() ought to be
  sufficient.
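  The qsort() change above is the usual "derive the element size from the pointer" idiom, which keeps the call correct if the element type ever changes; for example:

      #include <stdint.h>
      #include <stdlib.h>

      static int
      cmp_uint64(const void *a, const void *b)
      {
          uint64_t x = *(const uint64_t *)a, y = *(const uint64_t *)b;

          return (x > y) - (x < y);
      }

      static void
      sort_offsets(uint64_t *offsets, size_t n)
      {
          /* sizeof(*offsets) follows the array's element type automatically;
           * a spelled-out sizeof(uint64_t) would silently go stale if that
           * type ever changed. */
          qsort(offsets, n, sizeof(*offsets), cmp_uint64);
      }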