Remove mallctls:
- opt.lg_chunk
- stats.cactive
This resolves #464.
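For context, these names were read through the public mallctl() interface; below is a minimal sketch of reading a statistic that still exists (assuming an unprefixed jemalloc build; error handling omitted). The two removed names now simply fail to resolve.

    #include <stdio.h>
    #include <jemalloc/jemalloc.h>

    int main(void) {
        size_t allocated, sz = sizeof(allocated);
        /* "opt.lg_chunk" and "stats.cactive" no longer exist, so looking
         * them up fails; other statistics are read as before. */
        if (mallctl("stats.allocated", &allocated, &sz, NULL, 0) == 0)
            printf("stats.allocated: %zu\n", allocated);
        return 0;
    }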
|
Precisely size extents for huge size classes that aren't multiples of
chunksize.
|
Refactor [de]registration to maintain interior rtree entries for slabs.
|
Rename arena_extent_[d]alloc() to extent_[d]alloc().
Move all chunk [de]registration responsibility into chunk.c.
|
Remove redundant ptr/oldsize args from huge_*().
Refactor huge/chunk/arena code boundaries.
|
Always initialize extents' runs_dirty and chunks_cache linkage.
|
Set/unset rtree node for last chunk of extents, so that the rtree can be
used for chunk coalescing.
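A sketch of the idea (the helper names below are hypothetical, not jemalloc's internals): keying both the first and the last chunk of an extent means a neighbor can be found by probing one chunksize before or after a range, which is what chunk coalescing needs.

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical declarations standing in for the real internals. */
    typedef struct extent_s extent_t;
    extern size_t chunksize;
    extern void *extent_addr_get(extent_t *extent);
    extern size_t extent_size_get(extent_t *extent);
    extern void rtree_set(uintptr_t key, extent_t *extent);
    extern void rtree_clear(uintptr_t key);

    void extent_register_sketch(extent_t *extent) {
        uintptr_t first = (uintptr_t)extent_addr_get(extent);
        uintptr_t last = first + extent_size_get(extent) - chunksize;
        rtree_set(first, extent);
        if (last != first)
            rtree_set(last, extent);   /* lets a successor find this extent */
    }

    void extent_deregister_sketch(extent_t *extent) {
        uintptr_t first = (uintptr_t)extent_addr_get(extent);
        uintptr_t last = first + extent_size_get(extent) - chunksize;
        rtree_clear(first);
        if (last != first)
            rtree_clear(last);
    }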
|
Look up chunk metadata via the radix tree, rather than using
CHUNK_ADDR2BASE().
Propagate pointer's containing extent.
Minimize extent lookups by doing a single lookup (e.g. in free()) and
propagating the pointer's extent into nearly all the functions that may
need it.
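For contrast, a hedged sketch (only the CHUNK_ADDR2BASE() definition is quoted from the old code; the other names are illustrative): the old scheme derived the chunk header purely by masking the pointer, while the new scheme does one radix-tree lookup up front and passes the extent down.

    #include <stdint.h>
    #include <stddef.h>

    /* Old scheme: chunk metadata found by masking the address. */
    extern size_t chunksize_mask;
    #define CHUNK_ADDR2BASE(a) ((void *)((uintptr_t)(a) & ~chunksize_mask))

    void *old_chunk_base(void *ptr) {
        return CHUNK_ADDR2BASE(ptr);   /* metadata lived at this address */
    }

    /* New scheme (illustrative names): one lookup, then propagate. */
    typedef struct extent_s extent_t;
    extern extent_t *rtree_lookup(uintptr_t key);
    extern void dalloc_with_extent(extent_t *extent, void *ptr);

    void free_sketch(void *ptr) {
        extent_t *extent = rtree_lookup((uintptr_t)ptr);   /* single lookup */
        dalloc_with_extent(extent, ptr);   /* extent travels with the pointer */
    }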
|
This makes it possible to acquire short-term "ownership" of rtree
elements so that it is possible to read an extent pointer *and* read the
extent's contents with a guarantee that the element will not be modified
until the ownership is released. This is intended as a mechanism for
resolving rtree read/write races rather than as a way to lock extents.
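A self-contained illustration of the pattern (not jemalloc's code; the per-element spinlock here is just the simplest way to show it): a reader holds the element long enough to load the extent pointer and read the extent, so a concurrent writer cannot swap or free the extent in between.

    #include <stdatomic.h>

    typedef struct { int committed; } extent_t;   /* stand-in for the real type */

    typedef struct {
        extent_t *extent;
        atomic_flag busy;   /* per-element ownership bit; init with ATOMIC_FLAG_INIT */
    } rtree_elm_t;

    /* Acquire short-term ownership of an element. */
    static extent_t *rtree_elm_acquire(rtree_elm_t *elm) {
        while (atomic_flag_test_and_set_explicit(&elm->busy,
            memory_order_acquire))
            ;                   /* spin; real code would back off */
        return elm->extent;     /* safe to read *extent until release */
    }

    static void rtree_elm_release(rtree_elm_t *elm) {
        atomic_flag_clear_explicit(&elm->busy, memory_order_release);
    }

    int extent_is_committed(rtree_elm_t *elm) {
        extent_t *extent = rtree_elm_acquire(elm);
        int committed = (extent != NULL && extent->committed);
        rtree_elm_release(elm);
        return committed;
    }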
|
b2c0d6322d2307458ae2b28545f8a5c9903d7ef5 (Add witness, a simple online
locking validator.) caused a broad propagation of tsd throughout the
internal API, but tsd_fetch() was designed to fail prior to tsd
bootstrapping. Fix this by splitting tsd_t into non-nullable tsd_t and
nullable tsdn_t, and modifying all internal APIs that do not critically
rely on tsd to take nullable pointers. Furthermore, add the
tsd_booted_get() function so that tsdn_fetch() can probe whether tsd
bootstrapping is complete and return NULL if not. All dangerous
conversions of nullable pointers are tsdn_tsd() calls that assert-fail
on invalid conversion.
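In outline (an illustrative sketch, not the actual jemalloc code, though the function names are the ones introduced here): tsdn_fetch() may return NULL before thread-specific data is bootstrapped, and the only way back to a non-nullable tsd_t is a conversion that asserts.

    #include <assert.h>
    #include <stdbool.h>
    #include <stddef.h>

    typedef struct tsd_s { int witness_depth; /* ... */ } tsd_t;
    typedef tsd_t tsdn_t;           /* same layout; NULL is the extra state */

    static bool tsd_booted;         /* set once TSD bootstrapping completes */
    static _Thread_local tsd_t tsd_tls;

    void tsd_boot(void) { tsd_booted = true; }
    bool tsd_booted_get(void) { return tsd_booted; }

    /* Nullable fetch: internal APIs that do not critically rely on tsd take
     * a tsdn_t * and must tolerate NULL during early bootstrap. */
    tsdn_t *tsdn_fetch(void) {
        if (!tsd_booted_get())
            return NULL;
        return &tsd_tls;
    }

    /* The only sanctioned nullable -> non-nullable conversion. */
    tsd_t *tsdn_tsd(tsdn_t *tsdn) {
        assert(tsdn != NULL);
        return tsdn;
    }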
|
This resolves #367.
|
This resolves #358.
|
Move chunk_dalloc_arena()'s implementation into chunk_dalloc_wrapper(),
so that if the dalloc hook fails, proper decommit/purge/retain cascading
occurs. This fixes three potential chunk leaks on OOM paths, one during
dss-based chunk allocation, one during chunk header commit (currently
relevant only on Windows), and one during rtree write (e.g. if rtree
node allocation fails).

Merge chunk_purge_arena() into chunk_purge_default() (refactor, no
change to functionality).
|
This fixes chunk allocation to reuse retained memory even if an
application-provided chunk allocation function is in use.
This resolves #307.
|
Refactor the arenas array, which contains pointers to all extant arenas,
such that it starts out as a sparse array of maximum size, and use
double-checked atomics-based reads as the basis for fast and simple
arena_get(). Additionally, reduce arenas_lock's role such that it only
protects against arena initialization races. These changes remove the
possibility for arena lookups to trigger locking, which resolves at
least one known (fork-related) deadlock.
This resolves #315.
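A self-contained sketch of the double-checked pattern this describes (illustrative, not jemalloc's code): readers do a lock-free atomic load, and only a reader that sees NULL takes the lock, which now guards nothing but initialization.

    #include <stdatomic.h>
    #include <pthread.h>
    #include <stdlib.h>

    #define NARENAS_MAX 4096                /* sparse array of maximum size */

    typedef struct { int dummy; } arena_t;  /* stand-in for the real type */

    static _Atomic(arena_t *) arenas[NARENAS_MAX];
    static pthread_mutex_t arenas_lock = PTHREAD_MUTEX_INITIALIZER;

    arena_t *arena_get(unsigned ind) {
        /* Fast path: no locking, just an atomic load. */
        arena_t *a = atomic_load_explicit(&arenas[ind], memory_order_acquire);
        if (a != NULL)
            return a;
        /* Slow path: the lock only protects initialization races. */
        pthread_mutex_lock(&arenas_lock);
        a = atomic_load_explicit(&arenas[ind], memory_order_acquire);
        if (a == NULL) {
            a = calloc(1, sizeof(*a));      /* real code constructs an arena */
            atomic_store_explicit(&arenas[ind], a, memory_order_release);
        }
        pthread_mutex_unlock(&arenas_lock);
        return a;
    }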
|
Attempt mmap-based in-place huge reallocation by plumbing new_addr into
chunk_alloc_mmap(). This can dramatically speed up incremental huge
reallocation.
This resolves #335.
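Roughly, the trick is a hinted mapping (a hedged sketch using plain POSIX mmap(), not jemalloc's internal wrapper; the function name is illustrative): request pages at the address just past the existing mapping and accept the result only if the kernel actually placed them there.

    #include <sys/mman.h>
    #include <stddef.h>

    /* Try to grow a mapping in place by mapping size bytes at new_addr,
     * the end of the current mapping.  Returns the new region on success,
     * NULL if the kernel put it somewhere else. */
    void *extend_in_place(void *new_addr, size_t size) {
        void *p = mmap(new_addr, size, PROT_READ | PROT_WRITE,
            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
            return NULL;
        if (p != new_addr) {    /* hint not honored: not in place */
            munmap(p, size);
            return NULL;
        }
        return p;
    }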
|
Use appropriate versions to resolve 64-to-32-bit data loss warnings.
|
Decommit arena chunk header during chunk deallocation if the rest of the
chunk is decommitted.
|
Cascade from decommit to purge when purging unused dirty pages, so that
it is possible to decommit cleaned memory rather than just purging. For
non-Windows debug builds, decommit runs rather than purging them, since
this causes accesses to deallocated runs to segfault.
This resolves #251.
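For background, the two operations differ roughly as follows on POSIX systems (a sketch with standard calls; jemalloc's own pages_decommit()/pages_purge() handle more cases): decommit drops both contents and access rights, so later touches fault, while purge only tells the kernel the contents are disposable.

    #include <sys/mman.h>
    #include <stdbool.h>
    #include <stddef.h>

    /* Decommit: remap as inaccessible.  Any later access segfaults, which
     * is what lets debug builds catch use of deallocated runs. */
    bool pages_decommit_sketch(void *addr, size_t size) {
        return mmap(addr, size, PROT_NONE,
            MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0) != MAP_FAILED;
    }

    /* Purge: keep the mapping usable but let the kernel reclaim the
     * physical pages; contents are undefined afterwards. */
    bool pages_purge_sketch(void *addr, size_t size) {
        return madvise(addr, size, MADV_DONTNEED) == 0;
    }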
|
Add the "arena.<i>.chunk_hooks" mallctl, which replaces and expands on
the "arena.<i>.chunk.{alloc,dalloc,purge}" mallctls. The chunk hooks
allow control over chunk allocation/deallocation, decommit/commit,
purging, and splitting/merging, such that the application can rely on
jemalloc's internal chunk caching and retaining functionality, yet
implement a variety of chunk management mechanisms and policies.

Merge the chunks_[sz]ad_{mmap,dss} red-black trees into
chunks_[sz]ad_retained. This slightly reduces how hard jemalloc tries
to honor the dss precedence setting; prior to this change the precedence
setting was also consulted when recycling chunks.

Fix chunk purging. Don't purge chunks in arena_purge_stashed(); instead
deallocate them in arena_unstash_purged(), so that the dirty memory
linkage remains valid until after the last time it is used.

This resolves #176 and #201.
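Per the jemalloc manual for this interface, applications use the hooks roughly as below (arena index 0 is just an example; error handling omitted): read the arena's current chunk_hooks_t, substitute the members of interest, and write the struct back.

    #include <jemalloc/jemalloc.h>

    void install_chunk_hooks_example(void) {
        chunk_hooks_t hooks;
        size_t sz = sizeof(hooks);

        /* Read the defaults so unmodified operations keep working. */
        mallctl("arena.0.chunk_hooks", &hooks, &sz, NULL, 0);

        /* ... wrap or replace hooks.alloc, hooks.dalloc, hooks.commit,
         * hooks.decommit, hooks.purge, hooks.split, hooks.merge here ... */

        /* Install the modified hooks for arena 0. */
        mallctl("arena.0.chunk_hooks", NULL, NULL, &hooks, sizeof(hooks));
    }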
|
- Do not reallocate huge objects in place if the number of backing
chunks would change.
- Do not cache multi-chunk mappings.
This resolves #213.
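The first rule amounts to a check like the following (CHUNK_CEILING() is jemalloc's existing rounding macro; the surrounding function is illustrative):

    #include <stdbool.h>
    #include <stddef.h>

    extern size_t chunksize_mask;
    #define CHUNK_CEILING(s) (((s) + chunksize_mask) & ~chunksize_mask)

    /* In-place huge reallocation is only attempted when the number of
     * backing chunks stays the same. */
    bool huge_fits_in_place(size_t oldsize, size_t usize) {
        return CHUNK_CEILING(oldsize) == CHUNK_CEILING(usize);
    }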
|
This effectively reverts 97c04a93838c4001688fe31bf018972b4696efe2 (Use
first-fit rather than first-best-fit run/chunk allocation.). In some
pathological cases, first-fit search dominates allocation time, and it
also tends not to converge as readily on a steady state of memory
layout, since precise allocation order has a bigger effect than for
first-best-fit.
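To spell out the two policies (a self-contained toy over an address-sorted array of free runs, not jemalloc's tree-based code): first-fit takes the lowest-addressed run that is large enough, while first-best-fit first narrows to the smallest size that fits and only then takes the lowest address.

    #include <stddef.h>

    typedef struct { void *addr; size_t size; } run_t;

    /* First-fit: lowest address among all runs that fit. */
    run_t *first_fit(run_t *runs, size_t nruns, size_t want) {
        for (size_t i = 0; i < nruns; i++) {
            if (runs[i].size >= want)
                return &runs[i];
        }
        return NULL;
    }

    /* First-best-fit: smallest sufficient size wins; address breaks ties. */
    run_t *first_best_fit(run_t *runs, size_t nruns, size_t want) {
        run_t *best = NULL;
        for (size_t i = 0; i < nruns; i++) {
            if (runs[i].size < want)
                continue;
            if (best == NULL || runs[i].size < best->size)
                best = &runs[i];   /* first of equal sizes keeps lower address */
        }
        return best;
    }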
|