Instead, move the epoch backward in time. Additionally, add
nstime_monotonic() and use it in debug builds to assert that time only
goes backward if nstime_update() is using a non-monotonic time source.
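
A minimal, hypothetical sketch of the pattern described above; the names
mirror nstime_monotonic()/nstime_update() from the message, but the bodies
are illustrative stubs rather than jemalloc's actual implementation:

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <time.h>

    typedef struct { uint64_t ns; } nstime_t;

    /* Whether the configured time source is monotonic. */
    static bool
    nstime_monotonic(void) {
    #ifdef CLOCK_MONOTONIC
        return true;
    #else
        return false;
    #endif
    }

    /* Refresh *time from the time source; return true iff it went backward. */
    static bool
    nstime_update(nstime_t *time) {
        struct timespec ts;
    #ifdef CLOCK_MONOTONIC
        clock_gettime(CLOCK_MONOTONIC, &ts);
    #else
        clock_gettime(CLOCK_REALTIME, &ts);
    #endif
        uint64_t now = (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
        bool backward = now < time->ns;
        time->ns = now;  /* accept the reading even if it moved backward */
        return backward;
    }

    static void
    epoch_advance(nstime_t *epoch) {
        if (nstime_update(epoch)) {
            /* Debug-build invariant: time may only go backward when the
             * underlying time source is non-monotonic. */
            assert(!nstime_monotonic());
            /* The epoch has now moved backward rather than being left
             * ahead of the current time. */
        }
    }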
|
Avoid calling s2u() on raw extent sizes in extent_recycle().
Clamp psz2ind() (implemented as psz2ind_clamp()) when inserting/removing
into/from size-segregated extent heaps.
|
This fixes race conditions during purging.
|
This allows the application to access its own extent allocator metadata
during hook invocations.
This resolves #259.
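
As a hedged illustration of why the hooks pointer matters (the struct shape
follows jemalloc's later public extent_hooks_t/extent_alloc_t API; the
app_extent_state_t type and the hook body are invented for this sketch), an
application can embed the hooks in a larger struct and recover its own
metadata inside the hook:

    #include <stdbool.h>
    #include <stddef.h>
    #include <jemalloc/jemalloc.h>

    typedef struct {
        extent_hooks_t hooks;    /* must be the first member for the cast below */
        size_t bytes_requested;  /* application-private metadata */
    } app_extent_state_t;

    static void *
    app_extent_alloc(extent_hooks_t *extent_hooks, void *new_addr, size_t size,
        size_t alignment, bool *zero, bool *commit, unsigned arena_ind) {
        (void)new_addr; (void)alignment; (void)zero; (void)commit; (void)arena_ind;
        /* The hooks pointer identifies this installation, so the hook can
         * reach the application's own metadata without globals. */
        app_extent_state_t *state = (app_extent_state_t *)extent_hooks;
        state->bytes_requested += size;
        /* A real hook would map size bytes honoring alignment; returning
         * NULL here simply reports failure to jemalloc. */
        return NULL;
    }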
|
Rename stats.arenas.<i>.metadata.allocated mallctl to
stats.arenas.<i>.metadata.
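
For context, reading the renamed statistic through mallctl() would look
roughly like this (a sketch assuming a stats-enabled build that exposes
stats.arenas.<i>.metadata as described above):

    #include <stdint.h>
    #include <stdio.h>
    #include <jemalloc/jemalloc.h>

    int
    main(void) {
        /* Refresh jemalloc's cached statistics before reading them. */
        uint64_t epoch = 1;
        size_t sz = sizeof(epoch);
        mallctl("epoch", &epoch, &sz, &epoch, sizeof(epoch));

        size_t metadata;
        sz = sizeof(metadata);
        if (mallctl("stats.arenas.0.metadata", &metadata, &sz, NULL, 0) == 0)
            printf("arena 0 metadata: %zu bytes\n", metadata);
        return 0;
    }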
|
When an allocation is large enough to trigger multiple dumps, use
modular math rather than subtraction to reset the interval counter.
Prior to this change, it was possible for a single allocation to cause
many subsequent allocations to all trigger profile dumps.
When updating usable size for a sampled object, try to cancel out
the difference between LARGE_MINCLASS and usable size from the interval
counter.
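
A toy sketch of the modular reset (the names and the 1 MiB interval are
hypothetical, not jemalloc's): after a dump triggers, the counter keeps only
the remainder, so one oversized allocation cannot leave enough residue to
re-trigger dumps on the allocations that follow.

    #include <stdbool.h>
    #include <stdint.h>

    static uint64_t       prof_accum;               /* bytes since last dump */
    static const uint64_t prof_interval = 1 << 20;  /* example: 1 MiB */

    /* Account for an allocation of usize bytes; return true if the caller
     * should trigger exactly one profile dump. */
    static bool
    prof_accum_add(uint64_t usize) {
        prof_accum += usize;
        if (prof_accum < prof_interval)
            return false;
        /* Subtracting a single interval could leave many intervals' worth
         * of residue after a huge allocation; the modulus cannot. */
        prof_accum %= prof_interval;
        return true;
    }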
|
Precisely size extents for huge size classes that aren't multiples of
chunksize.
|
Refactor [de]registration to maintain interior rtree entries for slabs.
|
Rename arena_extent_[d]alloc() to extent_[d]alloc().
Move all chunk [de]registration responsibility into chunk.c.
|
Remove redundant ptr/oldsize args from huge_*().
Refactor huge/chunk/arena code boundaries.
|
Always initialize extents' runs_dirty and chunks_cache linkage.
|
Look up chunk metadata via the radix tree, rather than using
CHUNK_ADDR2BASE().
Propagate pointer's containing extent.
Minimize extent lookups by doing a single lookup (e.g. in free()) and
propagating the pointer's extent into nearly all the functions that may
need it.
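
A toy two-level radix tree (not jemalloc's actual rtree; the layout and
constants are invented for illustration) shows the lookup shape that replaces
CHUNK_ADDR2BASE()-style address masking:

    #include <stddef.h>
    #include <stdint.h>

    #define LG_PAGE    12
    #define LEVEL_BITS 13                        /* toy split of the key space */
    #define LEVEL_SIZE ((size_t)1 << LEVEL_BITS)

    typedef struct extent_s extent_t;            /* opaque per-extent metadata */

    typedef struct {
        extent_t **subtrees[LEVEL_SIZE];         /* level 0 -> level 1 -> extent */
    } rtree_t;

    /* Map a pointer to the extent that contains it, or NULL if unregistered. */
    static extent_t *
    rtree_lookup(const rtree_t *rtree, const void *ptr) {
        uintptr_t pfn = (uintptr_t)ptr >> LG_PAGE;       /* page frame number */
        size_t i0 = (size_t)(pfn >> LEVEL_BITS) & (LEVEL_SIZE - 1);
        size_t i1 = (size_t)pfn & (LEVEL_SIZE - 1);
        extent_t **subtree = rtree->subtrees[i0];
        return (subtree == NULL) ? NULL : subtree[i1];
    }

The surrounding change then performs this lookup once (e.g. in free()) and
threads the resulting extent pointer through the call chain instead of
repeating the lookup.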
|
Use pszind_t size classes rather than szind_t size classes, and always
reserve space for NPSIZES elements. This removes unused heaps for size
classes that are not multiples of the page size, and adds (currently) unused
heaps for all huge size classes, with the immediate benefit that the size of
arena_t allocations is constant (no longer dependent on chunk size).
|
These new helpers compute size classes and indices analogously to
size2index(), index2size(), and s2u(), respectively, but over the subset of
size classes that are multiples of the page size. Note that pszind_t and
szind_t are not interchangeable.
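
As a deliberately simplified illustration (real jemalloc page size classes
are grouped geometrically; here every page multiple is its own class, and the
psz2ind()/ind2psz()/psz2u() names follow the pattern suggested above), the
three helpers relate roughly as follows:

    #include <assert.h>
    #include <stddef.h>

    #define PAGE ((size_t)4096)

    typedef unsigned pszind_t;

    static pszind_t
    psz2ind(size_t psz) {                 /* size -> page size-class index */
        return (pszind_t)((psz + PAGE - 1) / PAGE - 1);
    }

    static size_t
    ind2psz(pszind_t pind) {              /* index -> size of that class */
        return ((size_t)pind + 1) * PAGE;
    }

    static size_t
    psz2u(size_t psz) {                   /* round a request up to its class */
        return ind2psz(psz2ind(psz));
    }

    int
    main(void) {
        assert(psz2u(1) == PAGE);
        assert(psz2ind(2 * PAGE) == 1);
        assert(ind2psz(psz2ind(3 * PAGE)) == 3 * PAGE);
        return 0;
    }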
|
This resolves #370.
|
This resolves #369.
|
b2c0d6322d2307458ae2b28545f8a5c9903d7ef5 (Add witness, a simple online
locking validator.) caused a broad propagation of tsd throughout the
internal API, but tsd_fetch() was designed to fail prior to tsd
bootstrapping. Fix this by splitting tsd_t into non-nullable tsd_t and
nullable tsdn_t, and modifying all internal APIs that do not critically
rely on tsd to take nullable pointers. Furthermore, add the
tsd_booted_get() function so that tsdn_fetch() can probe whether tsd
bootstrapping is complete and return NULL if not. All dangerous
conversions of nullable pointers are tsdn_tsd() calls that assert-fail
on invalid conversion.
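
The shape of the split, reduced to a hedged stand-alone sketch (the stand-in
types and storage are invented; only the function names follow the message):

    #include <assert.h>
    #include <stdbool.h>
    #include <stddef.h>

    typedef struct tsd_s { int dummy; } tsd_t;  /* non-nullable once booted */
    typedef struct tsdn_s tsdn_t;               /* nullable: NULL before boot */

    static bool  tsd_booted = false;
    static tsd_t tsd_storage;

    static bool
    tsd_booted_get(void) {
        return tsd_booted;
    }

    /* Only callable after bootstrap; never returns NULL. */
    static tsd_t *
    tsd_fetch(void) {
        assert(tsd_booted_get());
        return &tsd_storage;
    }

    /* Safe at any time: probes bootstrap state and returns NULL if tsd is
     * not yet available, instead of failing like tsd_fetch() would. */
    static tsdn_t *
    tsdn_fetch(void) {
        if (!tsd_booted_get())
            return NULL;
        return (tsdn_t *)tsd_fetch();
    }

    /* The one dangerous conversion: assert-fails on an invalid (NULL) handle. */
    static tsd_t *
    tsdn_tsd(tsdn_t *tsdn) {
        assert(tsdn != NULL);
        return (tsd_t *)tsdn;
    }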
|
This is a broader application of optimizations to malloc() and free() in
f4a0f32d340985de477bbe329ecdaecd69ed1055 (Fast-path improvement:
reduce # of branches and unnecessary operations.).
This resolves #321.
|
This resolves #367.
|
Split arena_choose() into arena_[i]choose() and use arena_ichoose() for
arena lookup during internal allocation. This fixes huge_palloc() so
that it always succeeds during extent node allocation.
This regression was introduced by
66cd953514a18477eb49732e40d5c2ab5f1b12c5 (Do not allocate metadata via
non-auto arenas, nor tcaches.).
|
Reset large curruns to 0 during arena reset.
Do not increase huge ndalloc stats during arena reset.
|
This makes it possible to discard all of an arena's allocations in a
single operation.
This resolves #146.
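
A hedged usage sketch, assuming a build that provides the arenas.create and
arena.<i>.reset mallctls along with mallocx()/MALLOCX_ARENA():

    #include <stdio.h>
    #include <jemalloc/jemalloc.h>

    int
    main(void) {
        /* Create a dedicated arena so the reset cannot touch unrelated data. */
        unsigned arena_ind;
        size_t sz = sizeof(arena_ind);
        if (mallctl("arenas.create", &arena_ind, &sz, NULL, 0) != 0)
            return 1;

        /* Allocate from that arena... */
        void *p = mallocx(4096, MALLOCX_ARENA(arena_ind));
        (void)p;

        /* ...then discard every allocation it owns in a single operation. */
        char cmd[64];
        snprintf(cmd, sizeof(cmd), "arena.%u.reset", arena_ind);
        if (mallctl(cmd, NULL, NULL, NULL, 0) != 0)
            return 1;
        printf("arena %u reset; all of its allocations are now invalid\n",
            arena_ind);
        return 0;
    }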