Commit log

Add the "arenas.extend" mallctl, so that it is possible to create new
arenas that are outside the set that jemalloc automatically multiplexes
threads onto.
Add the ALLOCM_ARENA() flag for {,r,d}allocm(), so that it is possible
to explicitly allocate from a particular arena.
Add the "opt.dss" mallctl, which controls the default precedence of dss
allocation relative to mmap allocation.
Add the "arena.<i>.dss" mallctl, which makes it possible to set the
default dss precedence on a per arena or global basis.
Add the "arena.<i>.purge" mallctl, which obsoletes "arenas.purge".
Add the "stats.arenas.<i>.dss" mallctl.
Use JEMALLOC_USABLE_SIZE_CONST for the malloc_usable_size()
implementation as well as the prototype, for consistency's sake.
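
In sketch form (the macro expands to const or to nothing, matching how
the system headers declare the function; isalloc() stands in for the
internal size lookup):

    size_t malloc_usable_size(JEMALLOC_USABLE_SIZE_CONST void *ptr);

    size_t
    malloc_usable_size(JEMALLOC_USABLE_SIZE_CONST void *ptr)
    {
        return (ptr != NULL ? isalloc(ptr) : 0);
    }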
Fix mutex acquisition order inversion for the chunks rtree and the base
mutex. Chunks rtree acquisition was introduced by the previous commit,
so this bug was short-lived.
Add a library constructor for jemalloc that initializes the allocator.
This fixes a race that could occur if threads were created by the main
thread prior to any memory allocation, followed by fork(2), and then
memory allocation in the child process.
Fix the prefork/postfork functions to acquire/release the ctl, prof, and
rtree mutexes. This fixes various fork() child process deadlocks, but
one possible deadlock remains (intentionally) unaddressed: prof
backtracing can acquire runtime library mutexes, so deadlock is still
possible if heap profiling is enabled during fork(). This deadlock is
known to be a real issue in at least the case of libgcc-based
backtracing.
Reported by tfengjun.
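
The constructor amounts to something like this sketch (spelled with the
GCC/Clang attribute; malloc_init() stands in for the bootstrap routine):

    __attribute__((constructor))
    static void
    jemalloc_constructor(void)
    {
        malloc_init();
    }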
should be
Remove const from __*_hook variable declarations, so that glibc can
modify them during process forking.
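
Roughly, the declarations lose their const qualifier (a sketch; the
je_* names stand in for jemalloc's implementations):

    void	(*__free_hook)(void *ptr) = je_free;
    void	*(*__malloc_hook)(size_t size) = je_malloc;
    void	*(*__realloc_hook)(void *ptr, size_t size) = je_realloc;
    void	*(*__memalign_hook)(size_t alignment, size_t size) = je_memalign;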
Disable tcache by default if running inside Valgrind, in order to avoid
making unallocated objects appear reachable to Valgrind.
Auto-detect whether running inside Valgrind, thus removing the need to
manually specify MALLOC_CONF=valgrind:true.
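
Detection can use the client-request macro from Valgrind's headers,
along these lines (sketch):

    #include <stdbool.h>
    #include <valgrind/valgrind.h>

    static bool
    detect_valgrind(void)
    {
        /* Nonzero when running under Valgrind, 0 when running natively. */
        return (RUNNING_ON_VALGRIND != 0);
    }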
Avoid mutex operations in _malloc_{pre,post}fork() unless jemalloc has
been initialized.
Reported by David Xu.
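
In sketch form (malloc_initialized stands in for jemalloc's bootstrap
flag):

    void
    _malloc_prefork(void)
    {
        if (!malloc_initialized)
            return; /* No mutexes exist yet; nothing to acquire. */
        jemalloc_prefork();
    }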
Tested with MSVC 8, both 32- and 64-bit.
These newly added macros will be used to implement the equivalents under
MSVC. Also, move the definitions to headers, where they make more sense
and where some of them (e.g. malloc) are even more useful.
Using errno on win32 doesn't quite work, because an errno value set in a
shared library can't be read from, e.g., an executable that calls the
function setting it.
At the same time, since buferror() always uses errno/GetLastError(),
don't pass an error code to it.
Fix a PROF_ALLOC_PREP() error path to initialize the return value to
NULL.
Remove mmap_unaligned, which was used to heuristically decide whether to
optimistically call mmap() in such a way that could reduce the total
number of system calls. If I remember correctly, the intention of
mmap_unaligned was to avoid always executing the slow path in the
presence of ASLR. However, that reasoning seems to have been based on a
flawed understanding of how ASLR actually works. Although ASLR
apparently causes mmap() to ignore address requests, it does not cause
total placement randomness, so there is a reasonable expectation that
iterative mmap() calls will start returning chunk-aligned mappings once
the first chunk has been properly aligned.
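
With mmap_unaligned gone, alignment falls to the straightforward slow
path: over-allocate, then unmap the misaligned head and tail. A
self-contained sketch (not jemalloc's exact code):

    #include <stdint.h>
    #include <sys/mman.h>

    static void *
    mmap_aligned(size_t size, size_t alignment)
    {
        size_t alloc_size = size + alignment;
        char *pages = mmap(NULL, alloc_size, PROT_READ | PROT_WRITE,
            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (pages == MAP_FAILED)
            return (NULL);
        /* Unmap the unaligned head, then any tail beyond size bytes. */
        size_t lead = (alignment - (uintptr_t)pages % alignment) % alignment;
        if (lead != 0)
            munmap(pages, lead);
        if (alloc_size - lead > size)
            munmap(pages + lead + size, alloc_size - lead - size);
        return (pages + lead);
    }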
Put CONF_HANDLE_*() keys in quotes, so that they aren't mangled when
--with-private-namespace is used.
Make special FreeBSD libc/libthr function overrides for
_malloc_prefork(), _malloc_postfork(), and _malloc_thread_cleanup()
visible.
Change the "opt.lg_prof_sample" default from 0 to 19 (1 B to 512 KiB).
Change the "opt.prof_accum" default from true to false.
Add the "opt.prof_final" mallctl, so that "opt.prof_prefix" need not be
abused to disable final profile dumping.
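
The new defaults are visible through the usual mallctl interface, e.g.
(sketch):

    #include <stdio.h>
    #include <jemalloc/jemalloc.h>

    static void
    print_prof_sample(void)
    {
        size_t lg_sample;
        size_t sz = sizeof(lg_sample);

        /* On average, one sample per 2^19 bytes (512 KiB) allocated. */
        if (mallctl("opt.lg_prof_sample", &lg_sample, &sz, NULL, 0) == 0)
            printf("lg_prof_sample: %zu\n", lg_sample);
    }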
Add a configure test to determine whether common mmap()/munmap()
patterns cause VM map holes, and only use munmap() to discard unused
chunks if the problem does not exist.
Unify the chunk caching for mmap and dss.
Fix options processing to limit lg_chunk to be large enough that
redzones will always fit.
Always disable redzone by default, even when --enable-debug is
specified. The memory overhead for redzones can be substantial, which
makes this feature something that should only be opted into.
chunk_boot0() calls rtree_new(), which calls base_alloc(), which locks
the base_mtx mutex. That mutex is initialized in base_boot(), so
base_boot() must run before chunk_boot0().
Normalize arena_palloc(), chunk_alloc_mmap_slow(), and
chunk_recycle_dss() to use the same algorithm for trimming
over-allocation.
Add the ALIGNMENT_ADDR2BASE(), ALIGNMENT_ADDR2OFFSET(), and
ALIGNMENT_CEILING() macros, and use them where appropriate.
Remove the run_size_p parameter from sa2u().
Fix a potential deadlock in chunk_recycle_dss() that was introduced by
eae269036c9f702d9fa9be497a1a2aa1be13a29e (Add alignment support to
chunk_alloc()).
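
The macros reduce to standard mask arithmetic, assuming power-of-two
alignments (sketch):

    /* Nearest aligned address at or below a. */
    #define ALIGNMENT_ADDR2BASE(a, alignment)				\
        ((void *)((uintptr_t)(a) & ~((uintptr_t)(alignment) - 1)))

    /* Offset of a above that base. */
    #define ALIGNMENT_ADDR2OFFSET(a, alignment)				\
        ((size_t)((uintptr_t)(a) & ((alignment) - 1)))

    /* Smallest multiple of alignment that is >= s. */
    #define ALIGNMENT_CEILING(s, alignment)				\
        (((s) + ((alignment) - 1)) & ~((alignment) - 1))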
Implement Valgrind support, as well as the redzone and quarantine
features, which help Valgrind detect memory errors. Redzones are only
implemented for small objects because the changes necessary to support
redzones around large and huge objects are complicated by in-place
reallocation, to the point that it isn't clear that the maintenance
burden is worth the incremental improvement to Valgrind support.
Merge arena_salloc() and arena_salloc_demote().
Refactor i[v]salloc() to expose the 'demote' option.
Rename labels from FOO to label_foo in order to avoid system macro
definitions, in particular OUT and ERROR on mingw.
Reported by Mike Hommey.
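
The pattern, in sketch form:

        if (ret == NULL)
            goto label_return;  /* Formerly "goto RETURN;". */
        /* ... */
    label_return:   /* "OUT:" and "ERROR:" collide with mingw system macros. */
        return (ret);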
Reported by Mike Hommey.
Add a0malloc(), a0calloc(), and a0free(), which are used by FreeBSD's
libc to allocate/deallocate TLS in static binaries.
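
A sketch of the interface (the a0 prefix denotes allocation served
directly from arena 0):

    void	*a0malloc(size_t size);
    void	*a0calloc(size_t num, size_t size);
    void	a0free(void *ptr);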
Postpone mutex initialization on FreeBSD until after base allocation is
safe.
s/PAGE_SHIFT/LG_PAGE/g and s/PAGE_SIZE/PAGE/g.
Remove remnants of the dynamic-page-shift code.
Rename the "arenas.pagesize" mallctl to "arenas.page".
Remove the "arenas.chunksize" mallctl, which is redundant with
"opt.lg_chunk".
This reverts commit 96d4120ac08db3f2d566e8e5c3bc134a24aa0afc.
ivsalloc() depends on chunks_rtree being initialized. This can be
worked around via a NULL pointer check. However,
thread_allocated_tsd_get() also depends on initialization having
occurred, and there is no way to guard its call in free() that is
cheaper than checking whether ptr is NULL.
Generalize isalloc() to handle NULL pointers in such a way that the NULL
checking overhead is only paid when introspecting huge allocations (or
NULL). This allows free() and malloc_usable_size() to no longer check
for NULL.
Submitted by Igor Bukanov and Mike Hommey.
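
The trick is that NULL is chunk-aligned, so it falls onto the huge path,
the only place a NULL check is needed. In sketch form (CHUNK_ADDR2BASE(),
arena_salloc(), and huge_salloc() are jemalloc internals):

    size_t
    isalloc(const void *ptr)
    {
        void *chunk = CHUNK_ADDR2BASE(ptr);

        if (chunk != ptr) {
            /* ptr lies inside a chunk: small or large, never NULL. */
            return (arena_salloc(ptr));
        }
        /* Chunk-aligned: a huge allocation, or NULL. */
        if (ptr == NULL)
            return (0);
        return (huge_salloc(ptr));
    }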
Remove code that validates malloc_vsnprintf() and malloc_strtoumax()
against their namesakes. The validation code has served its purpose, and
it isn't worth dealing with the different formatting of %p for NULL
pointers in glibc versus other implementations ("(nil)" vs. "0x0").
Reported by Mike Hommey.
OSX libc calls zone allocators' force_lock/force_unlock already.
Check for NULL ptr in malloc_usable_size(), rather than just asserting
that ptr is non-NULL. This matches the behavior of other implementations
(e.g. glibc and tcmalloc).
Use FreeBSD-specific functions (_pthread_mutex_init_calloc_cb(),
_malloc_{pre,post}fork()) to avoid bootstrapping issues due to
allocation in libc and libthr.
Add malloc_strtoumax() and use it instead of strtoul(). Disable
validation code in malloc_vsnprintf() and malloc_strtoumax() until
jemalloc is initialized. This is necessary because locale
initialization causes allocation for both vsnprintf() and strtoumax().
Force the lazy-lock feature on in order to avoid pthread_self(),
because it causes allocation.
Use syscall(SYS_write, ...) rather than write(...), because libthr wraps
write() and causes allocation. Without this workaround, it would not be
possible to print error messages in malloc_conf_init() without
substantially reworking bootstrapping.
Fix choose_arena_hard() to look at how many threads are assigned to the
candidate choice, rather than checking whether the arena is
uninitialized. This bug potentially caused more arenas to be
initialized than necessary.
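
The raw write boils down to (sketch):

    #include <string.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    static void
    wrtmessage(const char *s)
    {
        /* libthr wraps write() and can allocate; the raw syscall cannot. */
        syscall(SYS_write, STDERR_FILENO, s, strlen(s));
    }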
Remove ephemeral mutexes from the prof machinery, and remove
malloc_mutex_destroy(). This simplifies mutex management on systems
that call malloc()/free() inside pthread_mutex_{init,destroy}().
Add atomic_*_u() for operations on unsigned values.
Fix prof_printf() to call malloc_vsnprintf() rather than
malloc_snprintf().
Add JEMALLOC_CC_SILENCE_INIT(), which provides succinct syntax for
initializing a variable to avoid a spurious compiler warning.
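
The macro is just a conditional initializer (sketch):

    #ifdef JEMALLOC_CC_SILENCE
    #  define JEMALLOC_CC_SILENCE_INIT(v)	= v
    #else
    #  define JEMALLOC_CC_SILENCE_INIT(v)
    #endif

    /* Expands to "void *ret = NULL;" only when silencing is enabled. */
    void *ret JEMALLOC_CC_SILENCE_INIT(NULL);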
Implement tsd, a TLS/TSD abstraction that uses one or both internally.
Modify bootstrapping such that no tsds are used until allocation is
safe.
Remove malloc_[v]tprintf(), and use malloc_snprintf() instead.
Fix %p argument size handling in malloc_vsnprintf().
Fix a long-standing statistics-related bug in the "thread.arena"
mallctl that could cause crashes due to linked list corruption.
I tested a build from 10.7 run on 10.7 and 10.6, and a build from 10.6
run on 10.6. The AC_COMPILE_IFELSE limbo is to avoid running a program
during configure, which presumably makes it work when cross compiling
for iOS.
Acquire/release arena bin locks as part of the prefork/postfork. This
bug made it possible for the child to deadlock between fork and exec.
Split jemalloc_postfork() into jemalloc_postfork_{parent,child}() so
that the child can reinitialize mutexes rather than unlocking them. In
practice, this bug tended not to cause problems.
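
The handlers hang off pthread_atfork(3), registered once during
bootstrap (sketch):

    #include <pthread.h>

    static void
    register_fork_handlers(void)
    {
        /* The parent unlocks after fork; the child reinitializes mutexes. */
        pthread_atfork(jemalloc_prefork, jemalloc_postfork_parent,
            jemalloc_postfork_child);
    }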
Implement aligned_alloc(), which was added in the C11 standard. The
function is weakly specified to the point that a minimally compliant
implementation would be painful to use (size must be an integral
multiple of alignment!), which in practice makes posix_memalign() a
safer choice.
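
Concretely (the second call is invalid under a strict reading of C11,
which is exactly the pain point):

    #include <stdlib.h>

    static void
    example(void)
    {
        void *a = aligned_alloc(64, 128);	/* OK: 128 is a multiple of 64. */
        void *b = aligned_alloc(64, 100);	/* Invalid: 100 % 64 != 0. */
        void *c;

        if (posix_memalign(&c, 64, 100) == 0)	/* No such restriction. */
            free(c);
        free(a);
        free(b);
    }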
Revert JE_COMPILABLE() so that it detects link errors. Cross-compiling
should still work as long as a valid configure cache is provided.
Clean up some comments/whitespace.
Implement malloc_vsnprintf() (a subset of vsnprintf(3)) as well as
several other printing functions based on it, so that formatted printing
can be relied upon without concern for inducing a dependency on floating
point runtime support. Replace malloc_write() calls with
malloc_*printf() where doing so simplifies the code.
Add name mangling for library-private symbols in the data and BSS
sections. Adjust CONF_HANDLE_*() macros in malloc_conf_init() to expose
all opt_* variable use to cpp so that proper mangling occurs.
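
Usage mirrors snprintf(3), minus floating point; a sketch of the
internal API (malloc_write() is the low-level output routine):

    static void
    report(unsigned ind, size_t size)
    {
        char buf[64];

        /* Integer/pointer conversions only: no %f, hence no FP runtime. */
        malloc_snprintf(buf, sizeof(buf), "arena %u: %zu bytes\n", ind, size);
        malloc_write(buf);
    }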
Remove the lg_tcache_gc_sweep option, because it is no longer
very useful. Prior to the addition of dynamic adjustment of tcache fill
count, it was possible for fill/flush overhead to be a problem, but this
problem no longer occurs.