Reported by Christopher Ferris <cferris@google.com>.
This is an alternative to the existing ratio-based unused dirty page
purging, and is intended to eventually become the sole purging
mechanism.
Add mallctls:
- opt.purge
- opt.decay_time
- arena.<i>.decay
- arena.<i>.decay_time
- arenas.decay_time
- stats.arenas.<i>.decay_time
This resolves #325.
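
As a rough illustration of how these controls are meant to be driven from an application (this sketch is not part of the commit; it assumes jemalloc's standard mallctl() interface with unprefixed public symbols, and the 30-second value is arbitrary):

```c
#include <stdio.h>
#include <jemalloc/jemalloc.h>

int
main(void)
{
	/* Read the default decay time (seconds) applied to new arenas. */
	ssize_t decay_time;
	size_t sz = sizeof(decay_time);
	if (mallctl("arenas.decay_time", &decay_time, &sz, NULL, 0) != 0)
		return (1);
	printf("default decay_time: %zd s\n", decay_time);

	/* Shorten it to 30 s; writing -1 would disable decay-based purging. */
	ssize_t new_time = 30;
	if (mallctl("arenas.decay_time", NULL, NULL, &new_time,
	    sizeof(new_time)) != 0)
		return (1);

	/* Trigger decay-driven purging of arena 0's dirty pages right now. */
	if (mallctl("arena.0.decay", NULL, NULL, NULL, 0) != 0)
		return (1);
	return (0);
}
```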
Check in a generated smootherstep table as smoothstep.h rather than
generating it at configure time, since not all systems (e.g. Windows)
have dc.
Remove 32-bit variant, convert prng64() to prng_lg_range(), and add
prng_range().
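
The renamed API is easiest to see with a sketch. The implementation below is purely illustrative (a Knuth LCG plus rejection sampling, using the gcc/clang __builtin_clzll builtin); only the two function shapes are taken from the commit message, not jemalloc's actual generator.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative only: next pseudo-random value in [0, 2^lg_range). */
static uint64_t
prng_lg_range(uint64_t *state, unsigned lg_range)
{
	assert(lg_range > 0 && lg_range <= 64);
	/* 64-bit LCG step (Knuth's MMIX constants). */
	*state = *state * 6364136223846793005ULL + 1442695040888963407ULL;
	/* Keep the high bits, which are the best mixed. */
	return (*state >> (64 - lg_range));
}

/* Illustrative only: next pseudo-random value in [0, range). */
static uint64_t
prng_range(uint64_t *state, uint64_t range)
{
	uint64_t ret;
	unsigned lg_range;

	assert(range > 1);
	/* Ceiling of lg(range). */
	lg_range = 64 - __builtin_clzll(range - 1);
	/* Reject out-of-range values to avoid modulo bias. */
	do {
		ret = prng_lg_range(state, lg_range);
	} while (ret >= range);
	return (ret);
}
```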
Implement ticker, which provides a simple API for ticking off some
number of events before indicating that the ticker has hit its limit.
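
A sketch of the idea follows; the struct layout and exact names here are assumptions for illustration, not necessarily the internal header verbatim.

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
	int32_t	tick;	/* Ticks remaining until the limit is hit. */
	int32_t	nticks;	/* Value the counter is re-armed with. */
} ticker_t;

static void
ticker_init(ticker_t *ticker, int32_t nticks)
{
	ticker->tick = nticks;
	ticker->nticks = nticks;
}

/* Returns true on every nticks-th call, then re-arms the counter. */
static bool
ticker_tick(ticker_t *ticker)
{
	if (--ticker->tick == 0) {
		ticker->tick = ticker->nticks;
		return (true);
	}
	return (false);
}
```

A caller can then perform cheap periodic work, e.g. `if (ticker_tick(&t)) maybe_do_maintenance();`, without maintaining its own counters.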
| |
Add --with-malloc-conf, which makes it possible to embed a default
options string during configuration.
Fix test_stats_arenas_summary to deallocate before asserting that
purging must have happened.
Signed-off-by: Steve Dougherty <sdougherty@barracuda.com>
This resolves #281.
Modify xallocx() tests that expect to expand in place to use a separate
arena. This avoids the potential for interposed internal allocations
from e.g. heap profile sampling to disrupt the tests.
This resolves #286.
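
The general pattern, sketched below from the jemalloc 4.x non-standard API (this is not the test code itself), is to create a private arena via the "arenas.extend" mallctl and direct allocations to it with MALLOCX_ARENA():

```c
#include <jemalloc/jemalloc.h>

/* Try to grow an allocation in place inside a freshly created arena. */
static int
expand_in_place(void)
{
	unsigned arena_ind;
	size_t sz = sizeof(arena_ind);

	/* Create a new arena and remember its index. */
	if (mallctl("arenas.extend", &arena_ind, &sz, NULL, 0) != 0)
		return (-1);

	int flags = MALLOCX_ARENA(arena_ind);
	void *p = mallocx(4096, flags);
	if (p == NULL)
		return (-1);

	/* Request 8192 usable bytes without moving the allocation. */
	size_t usize = xallocx(p, 8192, 0, flags);
	dallocx(p, flags);
	return (usize >= 8192 ? 0 : 1);
}
```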
In addition to depending on map coalescing, the test depended on
munmap() being disabled so that chunk recycling would always succeed.
Make mallocx() OOM testing work correctly even on systems that can
allocate the majority of virtual address space in a single contiguous
region.
Zero all trailing bytes of large allocations when
--enable-cache-oblivious configure option is enabled. This regression
was introduced by 8a03cf039cd06f9fa6972711195055d865673966 (Implement
cache index randomization for large allocations.).
Zero trailing bytes of huge allocations when resizing from/to a size
class that is not a multiple of the chunk size.
Systems that do not support chunk split/merge cannot shrink/grow huge
allocations in place.
arena_maxclass is no longer an appropriate name, because arenas also
manage huge allocations.
Fix xallocx() bugs related to the 'extra' parameter when specified as
non-zero.
Fix heap profiling to distinguish among otherwise identical sample sites
with interposed resets (triggered via the "prof.reset" mallctl). This
bug could cause data structure corruption that would most likely result
in a segfault.
Optimize arena_prof_tctx_set() to avoid reading run metadata when
deciding whether it's actually necessary to write.
When junk filling is enabled, shrinking an allocation fills the bytes
that were previously allocated but now aren't. Purging the chunk before
doing that is just a waste of time.
This resolves #260.
Fix arenas_cache_cleanup() to handle allocation/deallocation within the
application's thread-specific data cleanup functions even after
arenas_cache is torn down.
This resolves #256.
Always leave decommit disabled on non-Windows systems.
Cascade from decommit to purge when purging unused dirty pages, so that
it is possible to decommit cleaned memory rather than just purging. For
non-Windows debug builds, decommit runs rather than purging them, so that
any access to a deallocated run segfaults.
This resolves #251.
Add the "arena.<i>.chunk_hooks" mallctl, which replaces and expands on
the "arena.<i>.chunk.{alloc,dalloc,purge}" mallctls. The chunk hooks
allow control over chunk allocation/deallocation, decommit/commit,
purging, and splitting/merging, such that the application can rely on
jemalloc's internal chunk caching and retaining functionality, yet
implement a variety of chunk management mechanisms and policies.
Merge the chunks_[sz]ad_{mmap,dss} red-black trees into
chunks_[sz]ad_retained. This slightly reduces how hard jemalloc tries
to honor the dss precedence setting; prior to this change the precedence
setting was also consulted when recycling chunks.
Fix chunk purging. Don't purge chunks in arena_purge_stashed(); instead
deallocate them in arena_unstash_purged(), so that the dirty memory
linkage remains valid until after the last time it is used.
This resolves #176 and #201.
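
To give a sense of the interface, here is a minimal sketch that assumes the jemalloc 4.x public chunk_hooks_t type; a real application would substitute its own functions for selected members before writing the struct back.

```c
#include <jemalloc/jemalloc.h>

static int
swap_chunk_hooks(void)
{
	chunk_hooks_t hooks;
	size_t sz = sizeof(hooks);

	/* Read arena 0's current chunk hooks. */
	if (mallctl("arena.0.chunk_hooks", &hooks, &sz, NULL, 0) != 0)
		return (-1);

	/*
	 * Replace selected members here, e.g.:
	 *     hooks.alloc = my_chunk_alloc;
	 *     hooks.dalloc = my_chunk_dalloc;
	 * keeping the remaining hooks (commit, decommit, purge, split,
	 * merge) at their defaults.
	 */

	/* Install the (possibly modified) hooks back into arena 0. */
	if (mallctl("arena.0.chunk_hooks", NULL, NULL, &hooks,
	    sizeof(hooks)) != 0)
		return (-1);
	return (0);
}
```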
- Do not reallocate huge objects in place if the number of backing
chunks would change.
- Do not cache multi-chunk mappings.
This resolves #213.
Fix huge_ralloc_no_move() to succeed if an allocation request results in
the same usable size as the existing allocation, even if the request
size is smaller than the usable size. This bug did not cause
correctness issues, but it could cause unnecessary moves during
reallocation.
Create and use FMT* macros that are equivalent to the PRI* macros that
inttypes.h defines. This allows uniform use of the Unix-specific format
specifiers, e.g. "%zu", as well as avoiding Windows-specific definitions
of e.g. PRIu64.
Add ffs()/ffsl() support for compiling with gcc.
Extract compatibility definitions of ENOENT, EINVAL, EAGAIN, EPERM,
ENOMEM, and ERANGE into include/msvc_compat/windows_extra.h and
use the file for tests as well as for core jemalloc code.
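
The shape of such macros, purely as an illustration (the actual definitions live in jemalloc's headers and differ in detail), is a platform-conditional mapping onto printf length specifiers:

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative only -- not jemalloc's actual definitions. */
#ifdef _WIN32
#  define FMTu64	"I64u"	/* Older MSVC runtimes lack C99 "%llu"/PRIu64. */
#  define FMTzu		"Iu"	/* size_t */
#else
#  include <inttypes.h>
#  define FMTu64	PRIu64
#  define FMTzu		"zu"
#endif

/* Used like the PRI* macros, with the '%' supplied at the call site. */
static void
report(size_t sz, uint64_t seq)
{
	printf("mapped: %"FMTzu" bytes, seq %"FMTu64"\n", sz, seq);
}
```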
This regression was introduced by
1b0e4abbfdbcc1c1a71d1f617adb19951109bfce (Port mq_get() to MinGW.).
Replace JEMALLOC_ATTR(format(printf, ...)) with
JEMALLOC_FORMAT_PRINTF(), so that configuration feature tests can
omit the attribute if it would cause extraneous compilation warnings.
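
Conceptually the wrapper looks something like the following sketch; the configure-generated macro names surrounding it are assumptions, not the verbatim headers.

```c
/* Sketch only; actual configure-generated names may differ. */
#ifndef JEMALLOC_ATTR
#  define JEMALLOC_ATTR(a)	__attribute__((a))
#endif

#ifdef JEMALLOC_HAVE_ATTR_FORMAT_PRINTF
#  define JEMALLOC_FORMAT_PRINTF(s, i)	JEMALLOC_ATTR(format(printf, s, i))
#else
#  define JEMALLOC_FORMAT_PRINTF(s, i)	/* unsupported: expand to nothing */
#endif

/* Declarations then carry the attribute only where it is supported: */
void	malloc_printf(const char *format, ...) JEMALLOC_FORMAT_PRINTF(1, 2);
```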
Add various function attributes to the exported functions to give the
compiler more information to work with during optimization, and also
specify throw() when compiling with C++ on Linux, in order to adequately
match what __THROW does in glibc.
This resolves #237.
This {bug,regression} was introduced by
155bfa7da18cab0d21d87aa2dce4554166836f5d (Normalize size classes.).
This resolves #241.
Fix size class overflow handling for malloc(), posix_memalign(),
memalign(), calloc(), and realloc() when profiling is enabled.
Remove an assertion that erroneously caused arena_sdalloc() to fail when
profiling was enabled.
This resolves #232.
Add mallctls:
- arenas.lg_dirty_mult is initialized via opt.lg_dirty_mult, and can be
modified to change the initial lg_dirty_mult setting for newly created
arenas.
- arena.<i>.lg_dirty_mult controls an individual arena's dirty page
purging threshold, and synchronously triggers any purging that may be
necessary to maintain the constraint.
- arena.<i>.chunk.purge allows the per-arena dirty page purging function
to be replaced.
This resolves #93.
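
As with the other mallctls, these can be driven from application code. A minimal sketch follows, assuming the standard mallctl() interface; the value 5 is an arbitrary example.

```c
#include <jemalloc/jemalloc.h>

static int
tighten_purging(void)
{
	/*
	 * Allow at most (active >> 5) pages to linger dirty in arena 0;
	 * writing -1 would disable dirty page purging entirely.
	 */
	ssize_t lg_dirty_mult = 5;
	if (mallctl("arena.0.lg_dirty_mult", NULL, NULL, &lg_dirty_mult,
	    sizeof(lg_dirty_mult)) != 0)
		return (-1);

	/* Also make 5 the initial setting for arenas created later. */
	if (mallctl("arenas.lg_dirty_mult", NULL, NULL, &lg_dirty_mult,
	    sizeof(lg_dirty_mult)) != 0)
		return (-1);
	return (0);
}
```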