This avoids a gcc diagnostic note:
    note: The ABI for passing parameters with 64-byte alignment has
    changed in GCC 4.6
The note relates to the cacheline alignment of rtree_ctx_t, which was
introduced by 4a346f55939af4f200121cc4454089592d952f18 (Replace rtree
path cache with LRU cache.).
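For illustration, a minimal sketch (type and function names invented
here) of the pattern that provokes this note: passing a 64-byte-aligned
struct by value. Taking such a struct by pointer sidesteps the
parameter-passing ABI entirely.

    #include <stdint.h>

    /* Hypothetical cacheline-aligned context, analogous to rtree_ctx_t. */
    typedef struct {
        uint64_t slots[8];
    } __attribute__((aligned(64))) ctx_t;

    void by_value(ctx_t ctx);           /* by-value passing triggers the note */
    void by_pointer(const ctx_t *ctx);  /* by-pointer passing does not */

    void caller(ctx_t *ctx) {
        by_value(*ctx);    /* gcc may emit the 64-byte-alignment ABI note here */
        by_pointer(ctx);   /* quiet */
    }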
Fix extent_alloc_dss() to account for bytes that are not a multiple of
the page size. This regression was introduced by
577d4572b0821a15e5370f9bf566d884b7cf707c (Make dss operations
lockless.), which was first released in 4.3.0.
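As a rough illustration of the arithmetic involved (constants
simplified; jemalloc's real PAGE_CEILING macro works the same way), the
gap between a non-page-aligned dss break and the next page boundary
must be included in the accounting:

    #include <stddef.h>
    #include <stdint.h>

    #define PAGE ((size_t)4096)
    /* Round up to a page boundary, as jemalloc's PAGE_CEILING does. */
    #define PAGE_CEILING(s) (((s) + (PAGE - 1)) & ~(PAGE - 1))

    /* Bytes between the current dss break and the next page boundary;
     * zero when the break is already page-aligned. */
    static size_t
    dss_lead_gap(uintptr_t dss_max) {
        return (size_t)(PAGE_CEILING(dss_max) - dss_max);
    }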
NULL can never actually be inserted in practice, and removing support
allows a branch to be removed from the fast path.
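A minimal sketch of why this saves a branch (signatures invented): when
NULL is an admissible value, lookups need a separate found flag; once
NULL is disallowed, the returned pointer itself encodes a miss.

    #include <stdbool.h>
    #include <stdint.h>

    /* Before: NULL might be a stored value, so a found flag is needed. */
    void *lookup_with_flag(uintptr_t key, bool *found);

    /* After: NULL uniformly means "no entry"; one comparison suffices. */
    void *lookup(uintptr_t key);

    bool present(uintptr_t key) {
        return lookup(key) != NULL;   /* no second branch needed */
    }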
Rather than dynamically building a table to aid per-level computations,
define a constant table at compile time. Omit both high and low
insignificant bits. Use one to three tree levels, depending on the
number of significant bits.
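A sketch of the resulting shape (numbers illustrative for a 48-bit
virtual address space with 4 KiB pages; jemalloc derives the real
values at compile time):

    /* Only the significant key bits are split across levels. */
    #define LG_VADDR 48
    #define LG_PAGE  12
    #define SIG_BITS (LG_VADDR - LG_PAGE)   /* 36 significant bits */

    /* Constant per-level table: bits consumed at each level plus the
     * cumulative count, e.g. two levels of 18 bits each. */
    static const struct {
        unsigned bits;
        unsigned cumbits;
    } levels[] = {
        {18, 18},
        {18, 36}
    };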
A subsequent change instead ignores insignificant high bits.
Anything but a hit in the first element of the lookup cache is
expensive enough to negate the benefits of inlining.
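The shape this rationale leads to, as a sketch (types and names
invented): inline only the first-element comparison and outline
everything else.

    #include <stdint.h>

    typedef struct {
        uintptr_t key;
        void *leaf;
    } cache_elm_t;

    typedef struct {
        cache_elm_t cache[8];
    } lookup_ctx_t;

    void *lookup_hard(lookup_ctx_t *ctx, uintptr_t key);  /* not inlined */

    /* Only a hit in cache[0] is cheap enough to be worth inlining. */
    static inline void *
    lookup(lookup_ctx_t *ctx, uintptr_t key) {
        if (__builtin_expect(ctx->cache[0].key == key, 1)) {
            return ctx->cache[0].leaf;
        }
        return lookup_hard(ctx, key);
    }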
Read adjacent rtree elements while holding element locks, since the
extents mutex only protects against relevant like-state extent mutation.
Fix management of the 'coalesced' loop state variable to merge
forward/backward results, rather than overwriting the result of forward
coalescing if attempting to coalesce backward. In practice this caused
no correctness issues, but could cause extra iterations in rare cases.
These regressions were introduced by
d27f29b468ae3e9d2b1da4a9880351d76e5a1662 (Disentangle arena and extent
locking.).
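The flag-management part of the fix, in sketch form (helper names
invented):

    #include <stdbool.h>

    typedef struct extent_s extent_t;
    bool coalesce_forward(extent_t *extent);
    bool coalesce_backward(extent_t *extent);

    static bool
    try_coalesce(extent_t *extent) {
        bool coalesced = coalesce_forward(extent);
        /* Fixed: merge the results. The bug overwrote the forward
         * result here with `coalesced = coalesce_backward(...)`, which
         * could cause extra iterations of the caller's retry loop. */
        coalesced |= coalesce_backward(extent);
        return coalesced;
    }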
Set extent as active prior to registration so that other threads can't
modify it in the absence of locking.
This regression was introduced by
d27f29b468ae3e9d2b1da4a9880351d76e5a1662 (Disentangle arena and extent
locking.), via non-obvious means. Removal of extents_mtx protection
during extent_grow_retained() execution opened up the race, but in the
presence of that locking, the code was safe.
This resolves #599.
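In sketch form (function names assumed, not jemalloc's exact
internals), the corrected ordering:

    typedef struct extent_s extent_t;
    enum { STATE_ACTIVE };
    void extent_state_set(extent_t *extent, int state);
    void extent_register(extent_t *extent);   /* publication point */

    /* Registration makes the extent visible to other threads, so mark
     * it active first; the reverse order left a window in which another
     * thread could mutate the extent without any lock protecting it. */
    static void
    extent_publish(extent_t *extent) {
        extent_state_set(extent, STATE_ACTIVE);
        extent_register(extent);
    }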
Do not check for overflow unless it is actually a possibility.
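The idea in sketch form (names invented): only size computations that
multiply can overflow, so the check is gated on a flag that callers set
accordingly.

    #include <stdbool.h>
    #include <stdint.h>

    static bool
    compute_size(bool may_overflow, size_t num, size_t size, size_t *out) {
        if (may_overflow && num != 0 && size > SIZE_MAX / num) {
            return true;    /* the product would overflow */
        }
        /* Callers pass may_overflow=false only on paths (e.g. num == 1)
         * where the product provably fits, skipping the division. */
        *out = num * size;
        return false;
    }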
Fix compute_size_with_overflow() to use a high_bits mask that has the
high bits set, rather than the low bits. This regression was introduced
by 5154ff32ee8c37bacb6afd8a07b923eb33228357 (Unify the allocation
paths).
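The corrected mask and the check it enables, as a sketch (this mirrors
the technique the commit describes; the exact code differs): with the
high half of a size_t set, a zero result of masking both factors proves
the product fits, and the division-based check runs only otherwise.

    #include <stdbool.h>
    #include <stdint.h>

    /* High-half bits set, e.g. 0xffffffff00000000 on 64-bit; the bug
     * used a mask with the LOW bits set instead. */
    static const size_t high_bits = SIZE_MAX << (sizeof(size_t) * 8 / 2);

    static bool
    mul_overflows(size_t num, size_t size, size_t *prod) {
        *prod = num * size;
        if (((num | size) & high_bits) == 0) {
            return false;   /* both factors small: cannot overflow */
        }
        return num != 0 && *prod / num != size;   /* exact, slow check */
    }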
Refactor arena and extent locking protocols such that arena and
extent locks are never held when calling into the extent_*_wrapper()
API. This requires extra care during purging since the arena lock no
longer protects the inner purging logic. It also requires extra care to
protect extents from being merged with adjacent extents.

Convert extent_t's 'active' flag to an enumerated 'state', so that
retained extents are explicitly marked as such, rather than depending on
ring linkage state.

Refactor the extent collections (and their synchronization) for cached
and retained extents into extents_t. Incorporate LRU functionality to
support purging. Incorporate page count accounting, which replaces
arena->ndirty and arena->stats.retained.

Assert that no core locks are held when entering any internal
[de]allocation functions. This is in addition to existing assertions
that no locks are held when entering external [de]allocation functions.

Audit and document synchronization protocols for all arena_t fields.

This fixes a potential deadlock due to recursive allocation during
gdump, in a similar fashion to b49c649bc18fff4bd10a1c8adbaf1f25f6453cb6
(Fix lock order reversal during gdump.), but with a necessarily much
broader code impact.
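A sketch of the enumerated state that replaces the boolean flag (value
names follow the description above; the exact set is assumed):

    typedef enum {
        extent_state_active,    /* in use */
        extent_state_dirty,     /* cached for reuse; contents stale */
        extent_state_retained   /* unused, but address space kept */
    } extent_state_t;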
Synchronize tcaches with tcaches_mtx rather than ctl_mtx. Add missing
synchronization for tcache flushing. This bug was introduced by
1cb181ed632e7573fb4eab194e4d216867222d27 (Implement explicit tcache
support.), which was first released in 4.0.0.
This makes it possible to make lock state assertions about precisely
which locks are held.
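A minimal sketch of the capability this enables (types invented;
jemalloc's actual machinery is richer and records the owning thread):

    #include <assert.h>
    #include <pthread.h>
    #include <stdbool.h>

    typedef struct {
        pthread_mutex_t mtx;
        bool owned;     /* set on lock, cleared on unlock */
    } tracked_mtx_t;

    /* Assertions can now name the precise lock in question. */
    #define assert_owner(m)     assert((m)->owned)
    #define assert_not_owner(m) assert(!(m)->owned)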
This should have been part of 411697adcda2fd75e135cdcdafb95f2bd295dc7f
(Use exponential series to size extents.), which introduced
extent_grow_next.
This reduces the probability of allocating (and thereby indirectly
making a system call) while owning bt2gctx_mtx. Unfortunately it is an
incomplete solution, because ckh insertion/deletion can also
allocate/deallocate, which requires more extensive changes to address.
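The pattern this describes, in sketch form (names invented):

    #include <pthread.h>
    #include <stdlib.h>

    static pthread_mutex_t table_mtx = PTHREAD_MUTEX_INITIALIZER;

    void table_insert(void *node);  /* must be called under table_mtx */

    static void
    insert_node(size_t sz) {
        /* Allocate first: this may make a system call, and doing it
         * here keeps syscalls out of the critical section. */
        void *node = malloc(sz);
        if (node == NULL) {
            return;
        }
        pthread_mutex_lock(&table_mtx);
        table_insert(node);
        pthread_mutex_unlock(&table_mtx);
    }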
This allows the compiler to completely remove dead code.
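This is the usual constant-flag pattern (jemalloc expresses build
options this way as config_* constants); a minimal sketch:

    #include <stdbool.h>

    #ifdef JEMALLOC_STATS
    static const bool config_stats = true;
    #else
    static const bool config_stats = false;
    #endif

    void stats_update(void);

    static void
    on_alloc(void) {
        /* An ordinary if on a compile-time constant: the compiler
         * removes the whole branch when config_stats is false. */
        if (config_stats) {
            stats_update();
        }
    }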
When multiple threads call stats_print, a race can occur because the
counters are read in separate mallctl calls, and the removed assertion
could fail when other operations happen in between those calls. For
simplicity, output "race" in the utilization field in this case.
In the refactoring that unified the allocation paths, usize was substituted for
size. This worked fine under the default test configuration, but triggered
asserts when we started beefing up our CI testing.
This change fixes the issue and clarifies the comment describing the
argument selection that the refactoring got wrong.
Avoid the name secure_getenv so as not to redeclare it when
secure_getenv is present but its use has been manually disabled via
ac_cv_func_secure_getenv=no.
This resolves #564.
This resolves #540.
Add braces around single-line blocks, and remove line breaks before
function-opening braces.
This resolves #537.
This unifies the allocation paths for malloc, posix_memalign, aligned_alloc,
calloc, memalign, valloc, and mallocx, so that they all share common code where
they can.
There's more work that could be done here, but I think this is the smallest
discrete change in this direction.
Fix numerous regressions that were exposed by --disable-stats, both in
the core library and in the tests.
Implement and test a JSON validation parser. Use the parser to validate
JSON output from malloc_stats_print(), with a significant subset of
supported output options.
This resolves #551.
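For reference, the output under test can be produced with the "J"
option to malloc_stats_print():

    #include <jemalloc/jemalloc.h>

    int main(void) {
        /* NULL write callback selects the default sink; "J" selects
         * JSON-formatted output. */
        malloc_stats_print(NULL, NULL, "J");
        return 0;
    }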
Some system libraries use malloc_default_zone() and then call parts of
the malloc_zone_* API. Under normal conditions, those functions check
the malloc_zone_t/malloc_introspection_t struct for the values that are
allowed to be NULL, so that a NULL dereference doesn't happen.

As of OSX 10.12, malloc_default_zone() no longer returns the actual
default zone, but rather a fake wrapper zone. The wrapper zone defines
(almost) all the possible functions in the
malloc_zone_t/malloc_introspection_t struct, and calls the corresponding
function of the registered default zone (jemalloc in our case) on its
own, without checking whether the pointers are NULL.

This means that a system library that calls e.g.
malloc_zone_batch_malloc(malloc_default_zone(), ...) ends up trying to
call jemalloc_zone.batch_malloc, which is NULL, and a crash follows.

So as of OSX 10.12, the default zone is required to have all the
functions available (really, the same set as the wrapper zone), even if
they do nothing.

This is arguably a bug in libsystem_malloc in OSX 10.12, but jemalloc
still needs to work in that case.
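The shape of the required fix, sketched (the signature comes from
malloc/malloc.h; jemalloc's actual implementation differs in detail):
provide a real function for every entry the wrapper zone may call.

    #include <stdlib.h>
    #include <malloc/malloc.h>

    /* A functional batch_malloc, so the zone never leaves this pointer
     * NULL for the 10.12 wrapper zone to call. */
    static unsigned
    zone_batch_malloc(struct _malloc_zone_t *zone, size_t size,
        void **results, unsigned num_requested) {
        unsigned i;
        for (i = 0; i < num_requested; i++) {
            results[i] = malloc(size);
            if (results[i] == NULL) {
                break;
            }
        }
        return i;   /* number of successful allocations */
    }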
The SDK jemalloc is built against might not be the latest for various
reasons, but the resulting binary ought to work on newer versions of
OSX. In order to ensure this, we need the fullest definitions possible,
so copy what we need from the latest version of malloc/malloc.h
available on opensource.apple.com.
Mostly revert the prof_realloc() changes in
498856f44a30b31fe713a18eb2fc7c6ecf3a9f63 (Move slabs out of chunks.) so
that prof_free_sampled_object() is called when appropriate. Leave the
prof_tctx_[re]set() optimization in place, but add an assertion to
verify that all eight cases are correctly handled. Add a comment to
make clear the code ordering, so that the regression originally fixed by
ea8d97b8978a0c0423f0ed64332463a25b787c3d (Fix
prof_{malloc,free}_sample_object() call order in prof_realloc().) is not
repeated.
This resolves #499.
The removed stats merging logic is already taken care of by tcache_flush.
This resolves #535.
Add MALLCTL_ARENAS_DESTROYED for accessing destroyed arena stats as an
analogue to MALLCTL_ARENAS_ALL.
This resolves #382.
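Usage sketch for the new index (the statistic shown is one of the
existing stats.arenas.<i>.* mallctls):

    #include <stdio.h>
    #include <jemalloc/jemalloc.h>

    int main(void) {
        char name[64];
        size_t allocated, sz = sizeof(allocated);

        snprintf(name, sizeof(name), "stats.arenas.%u.small.allocated",
            (unsigned)MALLCTL_ARENAS_DESTROYED);
        if (mallctl(name, &allocated, &sz, NULL, 0) == 0) {
            printf("destroyed arenas, small allocated: %zu\n", allocated);
        }
        return 0;
    }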
Refactor ctl_stats_t to be a demand-zeroed non-growing data structure.
To keep the size from being onerous (~60 MiB) on 32-bit systems, convert
the arenas field to contain pointers rather than directly embedded
ctl_arena_stats_t elements.
Add the MALLCTL_ARENAS_ALL cpp macro as a fixed index for use
in accessing the arena.<i>.{purge,decay,dss} and stats.arenas.<i>.*
mallctls, and deprecate access via the arenas.narenas index (to be
removed in 6.0.0).
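Usage sketch: purging every arena via the fixed index instead of the
deprecated arenas.narenas value:

    #include <stdio.h>
    #include <jemalloc/jemalloc.h>

    int main(void) {
        char name[32];
        snprintf(name, sizeof(name), "arena.%u.purge",
            (unsigned)MALLCTL_ARENAS_ALL);
        mallctl(name, NULL, NULL, NULL, 0);   /* purge all arenas */
        return 0;
    }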
This was a latent bug, since the function is (intentionally) not used.
Decrement ndalloc_large rather than incrementing, in order to cancel out
the increment in arena_large_dalloc_stats_update().
Add/rename related mallctls:
- Add stats.arenas.<i>.base .
- Rename stats.arenas.<i>.metadata to stats.arenas.<i>.internal .
- Add stats.arenas.<i>.resident .
Modify the arenas.extend mallctl to take an optional (extent_hooks_t *)
argument so that it is possible for all base allocations to be serviced
by the specified extent hooks.
This resolves #463.
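Usage sketch for the extended mallctl (my_hooks is a hypothetical,
fully populated extent_hooks_t):

    #include <limits.h>
    #include <jemalloc/jemalloc.h>

    extern extent_hooks_t my_hooks;   /* hypothetical custom hooks */

    static unsigned
    create_arena_with_hooks(void) {
        unsigned arena_ind;
        size_t sz = sizeof(arena_ind);
        extent_hooks_t *hooks = &my_hooks;

        /* The optional new argument routes all of the arena's base
         * allocations through the supplied hooks. */
        if (mallctl("arenas.extend", &arena_ind, &sz, (void *)&hooks,
            sizeof(hooks)) != 0) {
            return UINT_MAX;  /* failure */
        }
        return arena_ind;
    }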