author     Jason Evans <jasone@canonware.com>        2016-05-29 00:29:03 (GMT)
committer  Jason Evans <jasone@canonware.com>        2016-06-06 03:42:23 (GMT)
commit     d28e5a6696fd59a45c156b5c4dc183bb9ed21596 (patch)
tree       046b3a745b7b711224f404ca2b630c34cf378d6f /src/arena.c
parent     ed2c2427a7684bc8f41da54319c5dff00e177f76 (diff)
Improve interval-based profile dump triggering.
When an allocation is large enough to trigger multiple dumps, use
modular math rather than subtraction to reset the interval counter.
Prior to this change, it was possible for a single allocation to cause
many subsequent allocations to all trigger profile dumps.
When updating usable size for a sampled object, try to cancel out
the difference between LARGE_MINCLASS and usable size from the interval
counter.
Diffstat (limited to 'src/arena.c')
-rw-r--r--  src/arena.c  14
1 file changed, 14 insertions, 0 deletions
diff --git a/src/arena.c b/src/arena.c
index d9882a4..0b98ec5 100644
--- a/src/arena.c
+++ b/src/arena.c
@@ -2258,6 +2258,7 @@ void
 arena_prof_promote(tsdn_t *tsdn, extent_t *extent, const void *ptr,
     size_t usize)
 {
+	arena_t *arena = extent_arena_get(extent);
 
 	cassert(config_prof);
 	assert(ptr != NULL);
@@ -2266,6 +2267,19 @@ arena_prof_promote(tsdn_t *tsdn, extent_t *extent, const void *ptr,
 
 	extent_usize_set(extent, usize);
 
+	/*
+	 * Cancel out as much of the excessive prof_accumbytes increase as
+	 * possible without underflowing.  Interval-triggered dumps occur
+	 * slightly more often than intended as a result of incomplete
+	 * canceling.
+	 */
+	malloc_mutex_lock(tsdn, &arena->lock);
+	if (arena->prof_accumbytes >= LARGE_MINCLASS - usize)
+		arena->prof_accumbytes -= LARGE_MINCLASS - usize;
+	else
+		arena->prof_accumbytes = 0;
+	malloc_mutex_unlock(tsdn, &arena->lock);
+
 	assert(isalloc(tsdn, extent, ptr) == usize);
 }