Commit messages
|
Replace hardcoded 0xa5 and 0x5a junk values with JEMALLOC_ALLOC_JUNK and
JEMALLOC_FREE_JUNK macros, respectively.
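A minimal sketch of the pattern, assuming the macro names from the message; the helper functions below are hypothetical:

    #include <stdint.h>
    #include <string.h>

    /* Centralize the junk-fill byte values instead of scattering the
     * 0xa5/0x5a literals throughout the code. */
    #define JEMALLOC_ALLOC_JUNK ((uint8_t)0xa5) /* newly allocated memory */
    #define JEMALLOC_FREE_JUNK  ((uint8_t)0x5a) /* deallocated memory */

    /* Hypothetical helpers showing how the macros would be used. */
    static inline void
    junk_fill_alloc(void *ptr, size_t usize) {
        memset(ptr, JEMALLOC_ALLOC_JUNK, usize);
    }

    static inline void
    junk_fill_free(void *ptr, size_t usize) {
        memset(ptr, JEMALLOC_FREE_JUNK, usize);
    }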
|
Restructure the test program master header to avoid blindly enabling
assertions. Prior to this change, assertion code in e.g. arena.h was
always enabled for tests, which could skew performance-related testing.
|
Add (size_t) casts to MALLOCX_ALIGN() macros so that passing the integer
constant 0x80000000 does not cause a compiler warning about invalid
shift amount.
This resolves #354.
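A hypothetical sketch of the general pattern (not jemalloc's actual macro definition): force the macro argument to size_t before any shift-based arithmetic so that an int-sized constant cannot trigger an out-of-range shift.

    #include <stddef.h>

    /* Cast the argument to size_t inside the macro so that all shift
     * arithmetic happens at size_t width. */
    #define LG_OF_ALIGN(a) lg_of_align((size_t)(a))

    static inline unsigned
    lg_of_align(size_t align) {
        unsigned lg = 0;
        while ((align >>= 1) != 0)
            lg++;
        return lg; /* e.g. LG_OF_ALIGN(0x80000000) == 31 */
    }

With the cast in place, a call such as mallocx(size, MALLOCX_ALIGN(0x80000000)) compiles without the warning.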
|
Fix stack corruption that occurs on 64-bit (x64) builds.
This resolves #347.
|
Add missing stats.arenas.<i>.{dss,lg_dirty_mult,decay_time}
initialization.
Fix stats.arenas.<i>.{pactive,pdirty} to read under the protection of
the arena mutex.
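For context, a sketch of how these statistics are consumed through the mallctl interface (assumes a stats-enabled build; the arena index and the omission of error handling are purely illustrative). Stats snapshots are refreshed by writing to the "epoch" mallctl before reading:

    #include <stdint.h>
    #include <stdio.h>
    #include <jemalloc/jemalloc.h>

    int
    main(void) {
        uint64_t epoch = 1;
        size_t sz = sizeof(epoch);

        /* Refresh the statistics snapshot. */
        mallctl("epoch", &epoch, &sz, &epoch, sz);

        /* Read the per-arena page counters for arena 0. */
        size_t pactive, pdirty;
        sz = sizeof(size_t);
        mallctl("stats.arenas.0.pactive", &pactive, &sz, NULL, 0);
        mallctl("stats.arenas.0.pdirty", &pdirty, &sz, NULL, 0);

        printf("pactive=%zu pdirty=%zu\n", pactive, pdirty);
        return 0;
    }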
|
Remove invalid tests that were intended to be tests of (hugemax+1) OOM,
for which tests already exist.
|
This fixes compilation warnings regarding integer overflow that were
introduced by 0c516a00c4cb28cff55ce0995f756b5aae074c9e (Make *allocx()
size class overflow behavior defined.).
|
Limit supported size and alignment to HUGE_MAXCLASS, which in turn is
now limited to be less than PTRDIFF_MAX.
This resolves #278 and #295.
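A small usage sketch of the resulting contract: requests beyond the supported range are expected to fail cleanly rather than overflow internal size computations.

    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <jemalloc/jemalloc.h>

    int
    main(void) {
        /* A request this large exceeds every supported size class, so
         * mallocx() is expected to return NULL rather than wrap around. */
        void *p = mallocx((size_t)PTRDIFF_MAX + 1, 0);
        assert(p == NULL);
        return 0;
    }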
|
This resolves #333.
|
Since this is an intrusive tree, rbt_nil is a full node in size and can be
quite large. For example, miscelm is ~100 bytes.
|
Reduce run quantization overhead by generating lookup tables during
bootstrapping, and using the tables for all subsequent run quantization.
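A generic sketch of the table-driven idea; the names, the table span, and the quantization rule below are illustrative, not jemalloc's internals. The point is that the answer for every in-range request is computed once at bootstrap, so steady-state quantization is a bounds check plus an array load.

    #include <stddef.h>

    #define LG_PAGE     12
    #define PAGE        ((size_t)1 << LG_PAGE)
    #define QTAB_NPAGES 1024 /* illustrative: cover run sizes up to 1024 pages */

    static size_t qtab_ceil[QTAB_NPAGES + 1];

    /* Illustrative quantization rule (not jemalloc's): round a page count
     * up to the next power of two. */
    static size_t
    quantize_pages_slow(size_t npages) {
        size_t q = 1;
        while (q < npages)
            q <<= 1;
        return q;
    }

    /* Run once during bootstrap: precompute the result for every page
     * count the fast path will see. */
    static void
    quantize_tab_init(void) {
        for (size_t i = 1; i <= QTAB_NPAGES; i++)
            qtab_ceil[i] = quantize_pages_slow(i);
    }

    /* Fast path: an array load for in-range sizes, falling back to the
     * slow computation only beyond the table. */
    static size_t
    run_size_quantize_ceil(size_t size) {
        size_t npages = (size + PAGE - 1) >> LG_PAGE;
        if (npages <= QTAB_NPAGES)
            return qtab_ceil[npages] << LG_PAGE;
        return quantize_pages_slow(npages) << LG_PAGE;
    }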
|
Also rename run_quantize_*() to improve clarity. These tests
demonstrate that run_quantize_ceil() is flawed.
|
Use a single uint64_t in nstime_t to store nanoseconds rather than using
struct timespec. This reduces fragility around conversions between long
and uint64_t, especially missing casts that only cause problems on
32-bit platforms.
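A minimal sketch of the representation, assuming the nstime naming from the message; the accessor bodies here are illustrative:

    #include <stdint.h>

    #define NS_PER_S UINT64_C(1000000000)

    /* A single unsigned 64-bit nanosecond counter replaces struct
     * timespec, so arithmetic never mixes long (tv_sec/tv_nsec) with
     * uint64_t. */
    typedef struct {
        uint64_t ns;
    } nstime_t;

    static inline void
    nstime_init2(nstime_t *time, uint64_t sec, uint64_t nsec) {
        time->ns = sec * NS_PER_S + nsec;
    }

    static inline uint64_t
    nstime_sec(const nstime_t *time) {
        return time->ns / NS_PER_S;
    }

    static inline uint64_t
    nstime_nsec(const nstime_t *time) {
        return time->ns % NS_PER_S;
    }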
|
Reported by Christopher Ferris <cferris@google.com>.
|
This is an alternative to the existing ratio-based unused dirty page
purging, and is intended to eventually become the sole purging
mechanism.
Add mallctls:
- opt.purge
- opt.decay_time
- arena.<i>.decay
- arena.<i>.decay_time
- arenas.decay_time
- stats.arenas.<i>.decay_time
This resolves #325.
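A usage sketch for the new controls through mallctl (assumes a build with this feature and an ssize_t decay time expressed in seconds, with -1 disabling decay-based purging; error handling omitted):

    #include <stdio.h>
    #include <sys/types.h>
    #include <jemalloc/jemalloc.h>

    int
    main(void) {
        ssize_t decay_time;
        size_t sz = sizeof(decay_time);

        /* Read the default decay time applied to newly created arenas. */
        mallctl("arenas.decay_time", &decay_time, &sz, NULL, 0);
        printf("default decay_time: %zd s\n", decay_time);

        /* Lower the default to 10 seconds; -1 would disable decay-based
         * purging entirely. */
        decay_time = 10;
        mallctl("arenas.decay_time", NULL, NULL, &decay_time,
            sizeof(decay_time));
        return 0;
    }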
|
Check in a generated smootherstep table as smoothstep.h rather than
generating it at configure time, since not all systems (e.g. Windows)
have dc.
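For reference, the smootherstep function such a table samples is y = 6x^5 - 15x^4 + 10x^3 on [0, 1]; a floating-point sketch follows (the checked-in header stores precomputed fixed-point values instead):

    #include <stdio.h>

    /* Smootherstep: monotonically increasing on [0, 1], with zero first
     * and second derivatives at both endpoints. */
    static double
    smootherstep(double x) {
        return x * x * x * (x * (x * 6.0 - 15.0) + 10.0);
    }

    int
    main(void) {
        for (int i = 0; i <= 10; i++) {
            double x = (double)i / 10.0;
            printf("%.1f %.6f\n", x, smootherstep(x));
        }
        return 0;
    }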
|
Remove 32-bit variant, convert prng64() to prng_lg_range(), and add
prng_range().
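A rough sketch of the two-level API implied by those names; the LCG constants and function bodies below are illustrative, not jemalloc's. prng_lg_range() returns a value in [0, 2^lg_range), and prng_range() layers rejection sampling on top to produce [0, range) without modulo bias.

    #include <assert.h>
    #include <stdint.h>

    /* Illustrative 64-bit LCG step (placeholder constants). */
    static uint64_t
    prng_state_next(uint64_t state) {
        return state * UINT64_C(6364136223846793005) +
            UINT64_C(1442695040888963407);
    }

    /* Pseudo-random number in [0, 2^lg_range); advances *state. */
    static uint64_t
    prng_lg_range(uint64_t *state, unsigned lg_range) {
        assert(lg_range >= 1 && lg_range <= 64);
        *state = prng_state_next(*state);
        return *state >> (64 - lg_range);
    }

    /* Pseudo-random number in [0, range), via rejection sampling. */
    static uint64_t
    prng_range(uint64_t *state, uint64_t range) {
        assert(range > 1);

        /* Smallest lg_range such that 2^lg_range >= range. */
        unsigned lg_range = 64;
        while (lg_range > 1 && (UINT64_C(1) << (lg_range - 1)) >= range)
            lg_range--;

        uint64_t ret;
        do {
            ret = prng_lg_range(state, lg_range);
        } while (ret >= range);
        return ret;
    }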
|
Implement ticker, which provides a simple API for ticking off some
number of events before indicating that the ticker has hit its limit.
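A minimal sketch of such an API (field and function names are illustrative): the ticker counts down from a configured number of events and reports when the budget is exhausted, rearming itself for the next period.

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        int32_t tick;   /* events remaining in the current period */
        int32_t nticks; /* configured events per period */
    } ticker_t;

    static inline void
    ticker_init(ticker_t *ticker, int32_t nticks) {
        ticker->tick = nticks;
        ticker->nticks = nticks;
    }

    /* Record one event; return true when the limit is hit, in which case
     * the ticker is reset for the next period. */
    static inline bool
    ticker_tick(ticker_t *ticker) {
        if (--ticker->tick == 0) {
            ticker->tick = ticker->nticks;
            return true;
        }
        return false;
    }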
|
Add --with-malloc-conf, which makes it possible to embed a default
options string during configuration.
|
Fix test_stats_arenas_summary to deallocate before asserting that
purging must have happened.
|
Signed-off-by: Steve Dougherty <sdougherty@barracuda.com>
This resolves #281.
|
Modify xallocx() tests that expect to expand in place to use a separate
arena. This avoids the potential for interposed internal allocations
from e.g. heap profile sampling to disrupt the tests.
This resolves #286.
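A sketch of the pattern (error handling omitted; assumes the public API of that release series): create a dedicated arena via the "arenas.extend" mallctl and direct the allocations under test to it with MALLOCX_ARENA(), so interposed internal allocations land elsewhere.

    #include <stddef.h>
    #include <jemalloc/jemalloc.h>

    int
    main(void) {
        /* Create a fresh arena and obtain its index. */
        unsigned arena_ind;
        size_t sz = sizeof(arena_ind);
        mallctl("arenas.extend", &arena_ind, &sz, NULL, 0);

        /* Keep the allocation under test alone in that arena. */
        int flags = MALLOCX_ARENA(arena_ind);
        void *p = mallocx(4096, flags);

        /* Attempt in-place expansion; xallocx() reports the real size. */
        size_t usize = xallocx(p, 8192, 0, flags);
        (void)usize;

        dallocx(p, flags);
        return 0;
    }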
|
In addition to depending on map coalescing, the test depended on
munmap() being disabled so that chunk recycling would always succeed.
|
Make mallocx() OOM testing work correctly even on systems that can
allocate the majority of virtual address space in a single contiguous
region.
|
Zero all trailing bytes of large allocations when the
--enable-cache-oblivious configure option is enabled. This regression
was introduced by 8a03cf039cd06f9fa6972711195055d865673966 (Implement
cache index randomization for large allocations.).
Zero trailing bytes of huge allocations when resizing from/to a size
class that is not a multiple of the chunk size.
|
Systems that do not support chunk split/merge cannot shrink/grow huge
allocations in place.
|
arena_maxclass is no longer an appropriate name, because arenas also
manage huge allocations.
|
Fix xallocx() bugs related to the 'extra' parameter when specified as
non-zero.
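For context, the contract being exercised: xallocx() resizes in place to at least size bytes, opportunistically using up to size + extra, and returns the resulting real allocation size without ever moving the pointer. A usage sketch:

    #include <stdio.h>
    #include <jemalloc/jemalloc.h>

    int
    main(void) {
        void *p = mallocx(4096, 0);

        /* Require at least 4096 bytes and opportunistically accept up to
         * 4096 + 4096; the allocation is never relocated. */
        size_t usize = xallocx(p, 4096, 4096, 0);
        printf("usable size after xallocx: %zu\n", usize);

        dallocx(p, 0);
        return 0;
    }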
|
Fix heap profiling to distinguish among otherwise identical sample sites
with interposed resets (triggered via the "prof.reset" mallctl). This
bug could cause data structure corruption that would most likely result
in a segfault.
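For reference, such a reset is triggered through the mallctl interface (requires a build with profiling enabled and active, e.g. MALLOC_CONF=prof:true):

    #include <stddef.h>
    #include <jemalloc/jemalloc.h>

    int
    main(void) {
        /* Discard accumulated profile data; sampling continues with the
         * current sample rate. */
        mallctl("prof.reset", NULL, NULL, NULL, 0);
        return 0;
    }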
|
Optimize arena_prof_tctx_set() to avoid reading run metadata when
deciding whether it's actually necessary to write.
|
When junk filling is enabled, shrinking an allocation fills the bytes
that were previously allocated but now aren't. Purging the chunk before
doing that is just a waste of time.
This resolves #260.
|
Fix arenas_cache_cleanup() to handle allocation/deallocation within the
application's thread-specific data cleanup functions even after
arenas_cache is torn down.
|
This resolves #256.
|