Refactor ph to support configurable comparison functions. Use cpp
macro-based code generation, equivalent to the rb macros, so that
pairing heaps can be used for both run heaps and chunk heaps.
Remove per node parent pointers, and instead use leftmost siblings' prev
pointers to track parents.
Fix multi-pass sibling merging to iterate over intermediate results
using a FIFO, rather than a LIFO. Use this fixed sibling merging
implementation for both merge phases of the auxiliary twopass algorithm
(first merging the aux list, then replacing the root with its merged
children). This fixes both degenerate merge behavior and the potential
for deep recursion.
This regression was introduced by
6bafa6678fc36483e638f1c3a0a9bf79fb89bfc9 (Pairing heap).
This resolves #371.
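
For illustration, a minimal sketch of FIFO-based multi-pass sibling
merging, assuming a simplified intrusive node (jemalloc's actual ph.h
generates its types and functions via cpp macros, and tracks parents via
the leftmost sibling's prev pointer as described above):

    #include <stddef.h>

    typedef struct node_s node_t;
    struct node_s {
        int     key;
        node_t  *child; /* Leftmost child. */
        node_t  *next;  /* Right sibling. */
    };

    /* Meld two heaps; the smaller key becomes the root (min-heap). */
    static node_t *
    ph_meld(node_t *a, node_t *b) {
        if (a == NULL) return b;
        if (b == NULL) return a;
        if (b->key < a->key) { node_t *t = a; a = b; b = t; }
        b->next = a->child; /* Loser becomes leftmost child of winner. */
        a->child = b;
        return a;
    }

    /*
     * Multi-pass sibling merging in FIFO order: dequeue two heaps from
     * the front, meld them, and enqueue the result at the tail.  A LIFO
     * here yields degenerate, unbalanced melds (and deep recursion in a
     * recursive formulation), which is the regression described above.
     */
    static node_t *
    ph_merge_siblings(node_t *head) {
        node_t *tail;

        if (head == NULL || head->next == NULL)
            return head;
        for (tail = head; tail->next != NULL; tail = tail->next)
            ; /* Find the current tail. */
        for (;;) {
            node_t *a = head, *b = a->next, *m;

            head = b->next;
            a->next = b->next = NULL;
            m = ph_meld(a, b);
            if (head == NULL)
                return m; /* Queue drained; m is the new root. */
            tail->next = m;
            tail = m;
        }
    }

Both merge phases then reduce to this one routine: first over the aux
list, then over the removed root's child list.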
|
Replace hardcoded 0xa5 and 0x5a junk values with JEMALLOC_ALLOC_JUNK and
JEMALLOC_FREE_JUNK macros, respectively.
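
A sketch of what the call sites gain (the macro values match the junk
bytes named above; the real definitions live in jemalloc's internal
headers):

    #include <stdint.h>
    #include <string.h>

    #define JEMALLOC_ALLOC_JUNK ((uint8_t)0xa5) /* Fill on allocation. */
    #define JEMALLOC_FREE_JUNK  ((uint8_t)0x5a) /* Fill on deallocation. */

    static void
    junk_free(void *ptr, size_t usize) {
        /* Was: memset(ptr, 0x5a, usize) -- now self-describing. */
        memset(ptr, JEMALLOC_FREE_JUNK, usize);
    }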
|
Stack corruption occurs in 64-bit (x64) builds.
This resolves #347.
|
Add missing stats.arenas.<i>.{dss,lg_dirty_mult,decay_time}
initialization.
Fix stats.arenas.<i>.{pactive,pdirty} to read under the protection of
the arena mutex.
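
Reading these statistics from an application uses the documented mallctl
interface; arena index 0 is just an example:

    #include <stdio.h>
    #include <stdint.h>
    #include <jemalloc/jemalloc.h>

    void
    print_arena0_pages(void) {
        uint64_t epoch = 1;
        size_t pactive, pdirty, sz;

        /* Bump the epoch to refresh cached stats, then read them. */
        mallctl("epoch", NULL, NULL, &epoch, sizeof(epoch));
        sz = sizeof(pactive);
        mallctl("stats.arenas.0.pactive", &pactive, &sz, NULL, 0);
        sz = sizeof(pdirty);
        mallctl("stats.arenas.0.pdirty", &pdirty, &sz, NULL, 0);
        printf("pactive: %zu, pdirty: %zu\n", pactive, pdirty);
    }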
|
This fixes compilation warnings regarding integer overflow that were
introduced by 0c516a00c4cb28cff55ce0995f756b5aae074c9e (Make *allocx()
size class overflow behavior defined.).
|
Limit supported size and alignment to HUGE_MAXCLASS, which in turn is
now limited to be less than PTRDIFF_MAX.
This resolves #278 and #295.
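
A hedged sketch of the kind of up-front check this implies; the
HUGE_MAXCLASS definition below is a stand-in (the real constant is
derived from the size-class data):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Stand-in; jemalloc derives HUGE_MAXCLASS from its size classes
     * and guarantees it is less than PTRDIFF_MAX. */
    #define HUGE_MAXCLASS ((size_t)PTRDIFF_MAX >> 1)

    static bool
    request_ok(size_t size, size_t alignment) {
        if (size == 0 || size > HUGE_MAXCLASS)
            return false;
        /* Alignment padding inflates the worst case; keep the
         * subtraction on the right to avoid size_t overflow. */
        if (alignment > 1 && size > HUGE_MAXCLASS - (alignment - 1))
            return false;
        return true;
    }

Capping every allocation below PTRDIFF_MAX keeps pointer subtraction
over the result well defined.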
|
Since this is an intrusive tree, the rbt_nil sentinel is a full-sized
node and can be quite large; for example, miscelm is ~100 bytes.
|
Reduce run quantization overhead by generating lookup tables during
bootstrapping, and using the tables for all subsequent run quantization.
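
The memoization pattern, sketched with illustrative names and sizes (the
real tables, bounds, and boot hook differ; run_quantize_ceil_slow()
stands in for the original computation):

    #include <stddef.h>

    #define LG_PAGE 12          /* Illustrative 4 KiB pages. */
    #define QTAB_NENTRIES 4096  /* One entry per page multiple. */

    static size_t qtab_ceil[QTAB_NENTRIES];

    size_t run_quantize_ceil_slow(size_t size); /* Original slow path. */

    /* Called once during bootstrap: precompute every quantized size. */
    void
    run_quantize_boot(void) {
        for (size_t i = 0; i < QTAB_NENTRIES; i++)
            qtab_ceil[i] = run_quantize_ceil_slow((i + 1) << LG_PAGE);
    }

    /* Subsequent quantization of in-range, page-multiple sizes is an
     * O(1) lookup. */
    size_t
    run_quantize_ceil(size_t size) {
        return qtab_ceil[(size >> LG_PAGE) - 1];
    }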
|
Also rename run_quantize_*() to improve clarity. The added tests
demonstrate that run_quantize_ceil() is flawed.
|
Use a single uint64_t in nstime_t to store nanoseconds rather than using
struct timespec. This reduces fragility around conversions between long
and uint64_t, especially missing casts that only cause problems on
32-bit platforms.
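
The representation and a few accessors, following the shape of the
nstime API (a sketch, not jemalloc's exact code):

    #include <stdint.h>

    #define BILLION UINT64_C(1000000000)

    typedef struct {
        uint64_t ns; /* Nanoseconds; no separate seconds field. */
    } nstime_t;

    void
    nstime_init2(nstime_t *time, uint64_t sec, uint64_t nsec) {
        time->ns = sec * BILLION + nsec;
    }

    uint64_t
    nstime_sec(const nstime_t *time) {
        return time->ns / BILLION;
    }

    uint64_t
    nstime_nsec(const nstime_t *time) {
        return time->ns % BILLION;
    }

Arithmetic and comparisons then operate on a single unsigned 64-bit
quantity, so long/uint64_t conversions stop at the input boundary.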
|
Reported by Christopher Ferris <cferris@google.com>.
|
This is an alternative to the existing ratio-based unused dirty page
purging, and is intended to eventually become the sole purging
mechanism.
Add mallctls:
- opt.purge
- opt.decay_time
- arena.<i>.decay
- arena.<i>.decay_time
- arenas.decay_time
- stats.arenas.<i>.decay_time
This resolves #325.
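
For example, an application could select decay-based purging at runtime
and tune it like this (illustrative values; the mallctl names are those
listed above):

    #include <jemalloc/jemalloc.h>

    void
    decay_setup(void) {
        ssize_t decay_time = 10; /* Seconds; -1 disables purging. */

        /* Default for subsequently created arenas... */
        mallctl("arenas.decay_time", NULL, NULL, &decay_time,
            sizeof(decay_time));
        /* ...and an immediate decay pass on arena 0. */
        mallctl("arena.0.decay", NULL, NULL, NULL, 0);
    }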
|
Check in a pre-generated smootherstep table as smoothstep.h rather than
generating it at configure time, since not all systems (e.g. Windows)
have the dc calculator utility.
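
The checked-in table samples the smootherstep polynomial in fixed point;
in floating point the function is simply:

    /* Smootherstep: 6x^5 - 15x^4 + 10x^3 on [0, 1]; its first and
     * second derivatives are zero at both endpoints. */
    double
    smootherstep(double x) {
        return x * x * x * (x * (x * 6.0 - 15.0) + 10.0);
    }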
|
Remove 32-bit variant, convert prng64() to prng_lg_range(), and add
prng_range().
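
A sketch of the resulting API shape (the LCG step uses Knuth's 64-bit
constants; jemalloc's prng is in this style, though details may differ):

    #include <stdint.h>

    /* Returns a value in [0, 2^lg_range); requires 0 < lg_range <= 64.
     * The high state bits are used because they are the strongest. */
    static uint64_t
    prng_lg_range(uint64_t *state, unsigned lg_range) {
        *state = *state * 6364136223846793005ULL
            + 1442695040888963407ULL;
        return *state >> (64 - lg_range);
    }

    /* Returns a value in [0, range) for range >= 2, using rejection
     * sampling to avoid modulo bias. */
    static uint64_t
    prng_range(uint64_t *state, uint64_t range) {
        unsigned lg_range = 64 - __builtin_clzll(range - 1);
        uint64_t r;

        do {
            r = prng_lg_range(state, lg_range);
        } while (r >= range);
        return r;
    }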
|
Implement ticker, which provides a simple API for ticking off some
number of events before indicating that the ticker has hit its limit.
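
A minimal sketch of the idea (mirrors the shape of the ticker API, not
its exact code):

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        int32_t tick;   /* Events remaining. */
        int32_t nticks; /* Reload value. */
    } ticker_t;

    void
    ticker_init(ticker_t *ticker, int32_t nticks) {
        ticker->tick = nticks;
        ticker->nticks = nticks;
    }

    /* Returns true once per nticks calls, then rearms. */
    bool
    ticker_tick(ticker_t *ticker) {
        if (--ticker->tick <= 0) {
            ticker->tick = ticker->nticks;
            return true;
        }
        return false;
    }

This lets hot paths amortize periodic maintenance work to a single
decrement-and-test per event.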
|
Add --with-malloc-conf, which makes it possible to embed a default
options string during configuration.
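
This is the build-time analogue of the existing runtime hooks; for
comparison, an application can already supply defaults by defining the
documented malloc_conf symbol (the option string here is illustrative,
using options named elsewhere in this log):

    /* Compiled-in application defaults; --with-malloc-conf instead
     * bakes such a string into the library itself at configure time. */
    const char *malloc_conf = "purge:decay,decay_time:10";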
|
Fix test_stats_arenas_summary to deallocate before asserting that
purging must have happened.
|
Signed-off-by: Steve Dougherty <sdougherty@barracuda.com>
This resolves #281.
|
arena_maxclass is no longer an appropriate name, because arenas also
manage huge allocations.
|
Fix heap profiling to distinguish among otherwise identical sample sites
with interposed resets (triggered via the "prof.reset" mallctl). This
bug could cause data structure corruption that would most likely result
in a segfault.
|
Optimize arena_prof_tctx_set() to avoid reading run metadata when
deciding whether it's actually necessary to write.
|
Fix arenas_cache_cleanup() to handle allocation/deallocation within the
application's thread-specific data cleanup functions even after
arenas_cache is torn down.
|
This resolves #256.
|
Create and use FMT* macros that are equivalent to the PRI* macros that
inttypes.h defines. This allows uniform use of the Unix-specific format
specifiers, e.g. "%zu", as well as avoiding Windows-specific definitions
of e.g. PRIu64.
Add ffs()/ffsl() support for compiling with gcc.
Extract compatibility definitions of ENOENT, EINVAL, EAGAIN, EPERM,
ENOMEM, and ERANGE into include/msvc_compat/windows_extra.h and use
the file for tests as well as for core jemalloc code.
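
The macros' shape, sketched (actual definitions are per platform in
jemalloc's headers):

    #include <inttypes.h>
    #include <stdio.h>

    /* PRI*-style format-specifier macros that also cover MSVC. */
    #ifdef _WIN32
    #  define FMTu64 "I64u"
    #  define FMTzu  "Iu"
    #else
    #  define FMTu64 PRIu64 /* From inttypes.h. */
    #  define FMTzu  "zu"
    #endif

    void
    report(size_t size, uint64_t count) {
        printf("size: %"FMTzu", count: %"FMTu64"\n", size, count);
    }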
|
Add various function attributes to the exported functions to give the
compiler more information to work with during optimization, and also
specify throw() when compiling with C++ on Linux, in order to adequately
match what __THROW does in glibc.
This resolves #237.
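
A hedged sketch of the exception-specification half (the macro name is
illustrative; the attribute set jemalloc applies is broader):

    #include <stddef.h>

    /* Match glibc's __THROW when a C++ compiler sees the prototypes. */
    #if defined(__cplusplus) && defined(__linux__)
    #  define JEMALLOC_NOTHROW throw()
    #else
    #  define JEMALLOC_NOTHROW
    #endif

    void *malloc(size_t size)
        JEMALLOC_NOTHROW __attribute__((malloc));

Without the matching throw(), a C++ translation unit that sees both
glibc's and jemalloc's prototypes gets conflicting declarations.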
|
This bug (a regression) was introduced by
155bfa7da18cab0d21d87aa2dce4554166836f5d (Normalize size classes.).
This resolves #241.
|
Add mallctls:
- arenas.lg_dirty_mult is initialized via opt.lg_dirty_mult, and can be
modified to change the initial lg_dirty_mult setting for newly created
arenas.
- arena.<i>.lg_dirty_mult controls an individual arena's dirty page
purging threshold, and synchronously triggers any purging that may be
necessary to maintain the constraint.
- arena.<i>.chunk.purge allows the per arena dirty page purging function
to be replaced.
This resolves #93.
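
For example (illustrative arena index and ratio):

    #include <jemalloc/jemalloc.h>

    void
    tighten_purging(void) {
        /* Keep dirty pages <= active/2^3 on arena 0; the write itself
         * triggers whatever purging is needed to get there. */
        ssize_t lg_dirty_mult = 3;

        mallctl("arena.0.lg_dirty_mult", NULL, NULL, &lg_dirty_mult,
            sizeof(lg_dirty_mult));
    }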
|
Migrate all centralized data structures related to huge allocations and
recyclable chunks into arena_t, so that each arena can manage huge
allocations and recyclable virtual memory completely independently of
other arenas.
Add chunk node caching to arenas, in order to avoid contention on the
base allocator.
Use chunks_rtree to look up huge allocations rather than a red-black
tree. Maintain a per arena unsorted list of huge allocations (which
will be needed to enumerate huge allocations during arena reset).
Remove the --enable-ivsalloc option, make ivsalloc() always available,
and use it for size queries if --enable-debug is enabled. The only
practical implications of this removal are that 1) ivsalloc() is now
always available during live debugging (and the underlying radix tree is
available during core-based debugging), and 2) size query validation can
no longer be enabled independent of --enable-debug.
Remove the stats.chunks.{current,total,high} mallctls, and replace their
underlying statistics with simpler atomically updated counters used
exclusively for gdump triggering. These statistics are no longer very
useful because each arena manages chunks independently, and per arena
statistics provide similar information.
Simplify chunk synchronization code, now that base chunk allocation
cannot cause recursive lock acquisition.
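
A toy picture of the chunks_rtree lookup, assuming 64-bit pointers and a
fixed two-level shape (jemalloc's rtree has configurable height and
lazily allocated subtrees):

    #include <stddef.h>
    #include <stdint.h>

    #define LG_CHUNK 21u                      /* Illustrative 2 MiB chunks. */
    #define LG_L0    13u                      /* Top-level index bits. */
    #define LG_L1    (64u - LG_CHUNK - LG_L0) /* Bottom-level index bits. */

    typedef struct {
        void **l1[1u << LG_L0]; /* NULL until a subtree is populated. */
    } rtree_t;

    /* Map a chunk address to its tracked value (e.g. the owning huge
     * allocation), or NULL if nothing was ever inserted there. */
    static void *
    rtree_get(const rtree_t *rtree, uintptr_t addr) {
        uintptr_t key = addr >> LG_CHUNK;
        void **leaf = rtree->l1[key >> LG_L1];

        return (leaf == NULL) ? NULL : leaf[key & ((1u << LG_L1) - 1u)];
    }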
|
Add the MALLOCX_TCACHE() and MALLOCX_TCACHE_NONE macros, which can be
used in conjunction with the *allocx() API.
Add the tcache.create, tcache.flush, and tcache.destroy mallctls.
This resolves #145.
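
Typical usage of the new interfaces (documented API; the size is
arbitrary):

    #include <stddef.h>
    #include <jemalloc/jemalloc.h>

    void
    tcache_demo(void) {
        unsigned tc;
        size_t sz = sizeof(tc);
        void *p;

        /* Create an explicit tcache and allocate/deallocate through it. */
        mallctl("tcache.create", &tc, &sz, NULL, 0);
        p = mallocx(4096, MALLOCX_TCACHE(tc));
        dallocx(p, MALLOCX_TCACHE(tc));

        /* tcache.flush empties it; tcache.destroy releases it. */
        mallctl("tcache.destroy", NULL, NULL, &tc, sizeof(tc));
    }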