Commit messages

Reported by Ricardo Nabinger Sanchez.

Fix chunk_record() to unlock chunks_mtx before deallocating a base
node, in order to avoid potential deadlock. This fix addresses the
second of two similar bugs.
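A minimal sketch of the lock ordering this fix enforces; the mutex wrapper and
node_dealloc() below are hypothetical stand-ins for jemalloc's internals, not
its actual code:

    #include <pthread.h>
    #include <stddef.h>

    static pthread_mutex_t chunks_mtx = PTHREAD_MUTEX_INITIALIZER;

    typedef struct extent_node_s extent_node_t;   /* stand-in for a base node */
    void node_dealloc(extent_node_t *node);       /* may take other locks */

    static void
    chunk_record_sketch(extent_node_t *unused_node)
    {
        pthread_mutex_lock(&chunks_mtx);
        /* ... insert/coalesce the chunk in the recycle trees, possibly
         * leaving one node unused ... */
        pthread_mutex_unlock(&chunks_mtx);
        /* Deallocate only after chunks_mtx is dropped, so any lock taken by
         * node_dealloc() can never nest inside chunks_mtx. */
        if (unused_node != NULL)
            node_dealloc(unused_node);
    }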

Fix chunk_record() to unlock chunks_mtx before deallocating a base node,
in order to avoid potential deadlock.
Reported by Tudor Bosman.

Fix a locking order bug that could cause deadlock during fork if heap
profiling were enabled.

Fix Valgrind integration to annotate all internally allocated memory in
a way that keeps Valgrind happy about internal data structure access.

Fix a chunk recycling bug that could cause the allocator to lose track
of whether a chunk was zeroed. On FreeBSD, NetBSD, and OS X, it could
cause corruption if allocating via sbrk(2) (unlikely unless running with
the "dss:primary" option specified). This was completely harmless on
Linux unless using mlockall(2) (and unlikely even then, unless the
--disable-munmap configure option or the "dss:primary" option was
specified). This regression was introduced in 3.1.0 by the
mlockall(2)/madvise(2) interaction fix.

Internal reallocation of the quarantined object array leaked the old array.
Reallocation failure for internal reallocation of the quarantined object
array (very unlikely) resulted in memory corruption.

Avoid writing to uninitialized TLS as a side effect of deallocation.
Initializing TLS during deallocation is unsafe because it is possible
that a thread never did any allocation, and that TLS has already been
deallocated by the threads library, resulting in write-after-free
corruption. These fixes affect prof_tdata and quarantine; all other
uses of TLS are already safe, whether intentionally (as for tcache) or
unintentionally (as for arenas).
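The safe pattern, roughly: read the thread-local pointer and skip the
bookkeeping when it was never set up, rather than lazily initializing TLS on
the deallocation path. All names below are hypothetical stand-ins for the
prof_tdata/quarantine state:

    #include <stddef.h>

    typedef struct quarantine_s quarantine_t;     /* hypothetical per-thread state */
    static __thread quarantine_t *quarantine_tls; /* NULL until first allocation */

    void quarantine_push(quarantine_t *q, void *ptr);
    void dalloc_direct(void *ptr);

    static void
    dalloc_sketch(void *ptr)
    {
        quarantine_t *q = quarantine_tls;
        if (q == NULL) {
            /* The thread never allocated, or the threads library may already
             * have torn its TLS down; do not create TLS state here. */
            dalloc_direct(ptr);
            return;
        }
        quarantine_push(q, ptr);
    }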

Revert refactoring of opt_abort and opt_junk declarations. clang
accepts the config_*-based declarations (and generates correct code),
but gcc complains with:
error: initializer element is not constant
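The underlying C rule, shown in isolation (JEMALLOC_DEBUG stands in for
whichever preprocessor symbol configure defines):

    #include <stdbool.h>

    static const bool config_debug = false;   /* how the config_* flags are expressed */

    #if 0
    /* clang accepts this file-scope initializer, but a 'static const' variable
     * is not a constant expression in C, so gcc rejects it with:
     *   error: initializer element is not constant */
    bool opt_junk = config_debug;
    #endif

    /* The reverted form initializes from the preprocessor symbol instead: */
    bool opt_abort =
    #ifdef JEMALLOC_DEBUG
        true
    #else
        false
    #endif
        ;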

Convert a couple of stragglers from JEMALLOC_* to use config_*.

Update hash from MurmurHash2 to MurmurHash3, primarily because the
latter generates 128 bits in a single call for no extra cost, which
simplifies integration with cuckoo hashing.

Add JEMALLOC_ALWAYS_INLINE and use it to guarantee that the entire fast
paths of the primary allocation/deallocation functions are inlined.
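One plausible shape for the macro (the real definition is selected by
configure; the helper below is only an illustration):

    #include <stddef.h>

    #if defined(__GNUC__) || defined(__clang__)
    #  define JEMALLOC_ALWAYS_INLINE \
          static inline __attribute__((always_inline))
    #else
    #  define JEMALLOC_ALWAYS_INLINE static inline
    #endif

    /* Hypothetical fast-path helper that the compiler is not allowed to
     * leave out-of-line. */
    JEMALLOC_ALWAYS_INLINE size_t
    s2u_sketch(size_t size)
    {
        return (size + 15) & ~(size_t)15;   /* round up to a 16-byte quantum */
    }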

Tighten Valgrind integration such that immediately after memory is
validated or zeroed, Valgrind is told to forget the memory's 'defined'
state. The only place newly allocated memory should be left marked as
'defined' is in the public functions (e.g. calloc() and realloc()).
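The discipline, sketched with the memcheck client requests (not jemalloc's
actual code):

    #include <string.h>
    #include <valgrind/memcheck.h>

    /* Internal code re-marks the range undefined right after touching it... */
    static void
    zero_pages_sketch(void *addr, size_t len)
    {
        memset(addr, 0, len);
        VALGRIND_MAKE_MEM_UNDEFINED(addr, len);
    }

    /* ...so that only a public entry point such as a calloc() wrapper ends up
     * declaring the returned range defined:
     *     VALGRIND_MAKE_MEM_DEFINED(ret, size);
     */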

Move validation of supposedly zeroed pages from chunk_alloc() to
chunk_recycle(). There is little point to validating newly mapped
memory returned by chunk_alloc_mmap(), and memory that comes from sbrk()
is explicitly zeroed, so there is little risk in assuming that
chunk_alloc_dss() actually does the zeroing properly.
This relaxation of validation can make a big difference to application
startup time and overall system usage on platforms that use jemalloc as
the system allocator (namely FreeBSD).
Submitted by Ian Lepore <ian@FreeBSD.org>.

This ensures POLA on FreeBSD (at least) as free(3) is generally assumed
to not fiddle around with errno.
Signed-off-by: Garrett Cooper <yanegomi@gmail.com>

Modify processing of the lg_chunk option so that it clips an
out-of-range input to the edge of the valid range. This makes it
possible to request the minimum possible chunk size without intimate
knowledge of allocator internals.
Submitted by Ian Lepore (see FreeBSD PR bin/174641).
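The clipping behaviour, sketched with hypothetical bounds (the real minimum
and maximum come from allocator internals):

    /* Hypothetical limits, for illustration only. */
    #define LG_CHUNK_MIN_SKETCH 14
    #define LG_CHUNK_MAX_SKETCH 30

    static unsigned
    lg_chunk_clip_sketch(long requested)
    {
        if (requested < LG_CHUNK_MIN_SKETCH)
            return LG_CHUNK_MIN_SKETCH;     /* e.g. MALLOC_CONF=lg_chunk:0 */
        if (requested > LG_CHUNK_MAX_SKETCH)
            return LG_CHUNK_MAX_SKETCH;
        return (unsigned)requested;
    }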

Fix chunk_recycle() to unconditionally inform Valgrind that returned
memory is undefined. This fixes Valgrind warnings that would result
from a huge allocation being freed, then recycled for use as an arena
chunk. The arena code would write metadata to the chunk header, and
Valgrind would consider these invalid writes.

Reported by Mike Hommey.

Refactor arena_prof_accum() and its callers to avoid arena locking when
prof_interval is 0 (as when profiling is disabled).
Reported by Ben Maurer.
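The shape of the refactoring, with a simplified arena stand-in (not jemalloc's
actual types):

    #include <pthread.h>
    #include <stdint.h>

    extern uint64_t prof_interval;   /* 0 whenever profiling is disabled */

    typedef struct {
        pthread_mutex_t lock;
        uint64_t        prof_accumbytes;
    } arena_sketch_t;

    static void
    arena_prof_accum_sketch(arena_sketch_t *arena, uint64_t accumbytes)
    {
        /* Common case: profiling disabled, so never touch the arena lock. */
        if (prof_interval == 0)
            return;
        pthread_mutex_lock(&arena->lock);
        arena->prof_accumbytes += accumbytes;
        /* ... dump a profile when prof_accumbytes crosses prof_interval ... */
        pthread_mutex_unlock(&arena->lock);
    }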

Tweak chunk purge order to purge unfragmented chunks from high to low
memory. This facilitates dirty run reuse.

the system default zone

Purge unused dirty pages in an order that first performs clean/dirty run
defragmentation, in order to mitigate available run fragmentation.
Remove the limitation that prevented purging unless at least one chunk
worth of dirty pages had accumulated in an arena. This limitation was
intended to avoid excessive purging for small applications, but the
threshold was arbitrary, and the effect of questionable utility.
Relax opt_lg_dirty_mult from 5 to 3. This compensates for increased
likelihood of allocating clean runs, given the same ratio of clean:dirty
runs, and reduces the potential for repeated purging in pathological
large malloc/free loops that push the active:dirty page ratio just over
the purge threshold.

Fix deadlock in the arenas.purge mallctl due to recursive mutex
acquisition.

Fix dss/mmap allocation precedence code to use recyclable mmap memory
only after primary dss allocation fails.

Add ctl_mutex protection to arena_i_dss_ctl(), since ctl_stats.narenas is
accessed.

Add the "arenas.extend" mallctl, so that it is possible to create new
arenas that are outside the set that jemalloc automatically multiplexes
threads onto.
Add the ALLOCM_ARENA() flag for {,r,d}allocm(), so that it is possible
to explicitly allocate from a particular arena.
Add the "opt.dss" mallctl, which controls the default precedence of dss
allocation relative to mmap allocation.
Add the "arena.<i>.dss" mallctl, which makes it possible to set the
default dss precedence on a per arena or global basis.
Add the "arena.<i>.purge" mallctl, which obsoletes "arenas.purge".
Add the "stats.arenas.<i>.dss" mallctl.

The Mozilla build hides everything by default using a visibility pragma and
unhides only explicitly listed headers. But this doesn't work on FreeBSD
because _pthread_mutex_init_calloc_cb is neither documented nor exposed
via any header.

Use JEMALLOC_USABLE_SIZE_CONST for the malloc_usable_size()
implementation as well as the prototype, for consistency's sake.

Fix mutex acquisition order inversion for the chunks rtree and the base
mutex. Chunks rtree acquisition was introduced by the previous commit,
so this bug was short-lived.

Add a library constructor for jemalloc that initializes the allocator.
This fixes a race that could occur if threads were created by the main
thread prior to any memory allocation, followed by fork(2), and then
memory allocation in the child process.
Fix the prefork/postfork functions to acquire/release the ctl, prof, and
rtree mutexes. This fixes various fork() child process deadlocks, but
one possible deadlock remains (intentionally) unaddressed: prof
backtracing can acquire runtime library mutexes, so deadlock is still
possible if heap profiling is enabled during fork(). This deadlock is
known to be a real issue in at least the case of libgcc-based
backtracing.
Reported by tfengjun.
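The constructor mechanism, sketched with the GCC/Clang attribute; the body
here simply forces allocator bootstrap rather than reproducing jemalloc's
internal initializer:

    #include <stdlib.h>

    __attribute__((constructor))
    static void
    init_allocator_sketch(void)
    {
        /* Any allocation is enough to drive initialization before the
         * application's main() can create threads or call fork(). */
        free(malloc(1));
    }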

mlockall(2) can cause purging via madvise(2) to fail. Fix purging code
to check whether madvise() succeeded, and base zeroed page metadata on
the result.
Reported by Olivier Lecomte.
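The check, roughly (Linux-flavored; the caller records the result in the
chunk's page metadata):

    #include <stdbool.h>
    #include <stddef.h>
    #include <sys/mman.h>

    static bool
    purge_sketch(void *addr, size_t length)
    {
        /* Under mlockall(MCL_FUTURE) this call can fail, in which case the
         * pages keep their old contents and must not be treated as zeroed. */
        bool unzeroed = (madvise(addr, length, MADV_DONTNEED) != 0);
        return unzeroed;
    }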

Reported by Corey Richardson.

should be

Remove const from __*_hook variable declarations, so that glibc can
modify them during process forking.

Disable tcache by default if running inside Valgrind, in order to avoid
making unallocated objects appear reachable to Valgrind.

Auto-detect whether running inside Valgrind, thus removing the need to
manually specify MALLOC_CONF=valgrind:true.
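The standard detection hook from the Valgrind headers (shown in isolation; the
surrounding option handling is jemalloc-specific):

    #include <stdbool.h>
    #include <valgrind/valgrind.h>

    static bool
    in_valgrind_sketch(void)
    {
        /* Evaluates to 0 when running natively, non-zero under Valgrind. */
        return RUNNING_ON_VALGRIND != 0;
    }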

Avoid mutex operations in _malloc_{pre,post}fork() unless jemalloc has
been initialized.
Reported by David Xu.

Refactor code such that arena_mapbits_{large,small}_set() always
preserves the unzeroed flag, and manually manipulate the unzeroed flag
in the one case where it actually gets reset (in arena_chunk_purge()).
This fixes unzeroed preservation bugs in arena_run_split() and
arena_ralloc_large_grow(). These bugs caused large calloc() to return
non-zeroed memory under some circumstances.

Refactor duplicated arena_run_alloc() code into
arena_run_alloc_helper().

Add the --enable-mremap option, and disable the use of mremap(2) by
default, for the same reason that freeing chunks via munmap(2) is
disabled by default on Linux: semi-permanent VM map fragmentation.

Fix chunk_recycle() to correctly compute trailsize and re-insert
trailing chunks. This fixes a major virtual memory leak.
Simplify chunk_record() to avoid dropping/re-acquiring chunks_mtx.

Simplify chunk_alloc_mmap() to no longer attempt map extension. The
extra complexity isn't warranted, because although in the success case
it saves one system call as compared to immediately falling back to
chunk_alloc_mmap_slow(), it also makes the failure case even more
expensive. This simplification removes two bugs:
- For Windows platforms, pages_unmap() wasn't being called for unaligned
mappings prior to falling back to chunk_alloc_mmap_slow(). This
caused permanent virtual memory leaks.
- For non-Windows platforms, alignment greater than chunksize caused
pages_map() to be called with size 0 when attempting map extension.
This always resulted in an mmap() error, and subsequent fallback to
chunk_alloc_mmap_slow().

Fix a base allocator deadlock due to chunk_recycle() calling back into
the base allocator.

In the alloca() case, this fails to be the right size.

If an application wants to override je_malloc_message, it is better to define
the symbol locally than to change its value in main(), which might be too late
for various reasons.
Due to je_malloc_message being initialized in util.c, statically linking
jemalloc with an application defining je_malloc_message fails due to
"multiple definition of" the symbol.
Defining it without a value (like je_malloc_conf) makes it more easily
overridable.
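A sketch of the intended override, assuming a build where the public symbol is
named je_malloc_message as in this message (my_message_cb is a hypothetical
application callback; the signature follows malloc_message(void *cbopaque,
const char *s)):

    #include <string.h>
    #include <unistd.h>

    static void
    my_message_cb(void *cbopaque, const char *s)
    {
        (void)cbopaque;
        (void)!write(STDERR_FILENO, s, strlen(s));
    }

    /* Defining the symbol in the application overrides jemalloc's
     * uninitialized definition at link time, even when linking statically. */
    void (*je_malloc_message)(void *cbopaque, const char *s) = my_message_cb;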

Further optimize arena_salloc() to only look at the binind chunk map
bits in the common case.
Add more sanity checks to arena_salloc() that detect chunk map
inconsistencies for large allocations (whether due to allocator bugs or
application bugs).