| Commit message | Author | Age | Files | Lines |
|
|
|
|
| |
This option enables transparent huge pages for base allocators (requires
MADV_HUGEPAGE support).
|
|
|
|
|
|
|
| |
The external linkage for spin_adaptive was not used, and the inline
declaration of spin_adaptive that was used caused a problem on FreeBSD,
where CPU_SPINWAIT is implemented as a call to a static function on x86
architectures.
|
|
|
|
|
|
| |
If ptr is not page aligned, we know the allocation was not sampled. In this
case, use the size passed into sdallocx directly, without accessing the rtree.
This improves sdallocx efficiency in the common case (not sampled && small
allocation).
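The fast-path test above can be sketched as follows. This is a minimal illustration, not jemalloc's actual code; it assumes sampled (heap-profiled) allocations are always page-aligned, and the names PAGE_SIZE and possibly_sampled are hypothetical:

```c
#include <stdbool.h>
#include <stdint.h>

/* Assumption: a sampled allocation is always backed by a page-aligned
 * extent, so a pointer that is NOT page-aligned cannot be sampled, and
 * the size argument of sdallocx can be trusted without an rtree lookup. */
#define PAGE_SIZE ((uintptr_t)4096)  /* assumed page size */

static bool possibly_sampled(const void *ptr) {
    /* Page-aligned pointers must take the slow path (rtree lookup). */
    return ((uintptr_t)ptr & (PAGE_SIZE - 1)) == 0;
}
```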
|
|
|
|
|
|
| |
When retain is enabled, we should not attempt mmap for in-place expansion
(large_ralloc_no_move), because it's virtually impossible to succeed, and causes
unnecessary syscalls (which can cause lock contention under load).
|
|
|
|
|
|
|
|
|
|
|
| |
Currently we have to log by writing something like:

    static log_var_t log_a_b_c = LOG_VAR_INIT("a.b.c");
    log(log_a_b_c, "msg");

This is sort of annoying. Let's just write:

    log("a.b.c", "msg");
|
|
|
|
|
| |
I noticed that the whole newImpl is inlined. Since OOM handling code is
rarely executed, we should only inline the hot path.
|
|
|
|
|
|
| |
Currently, the log macro requires at least one argument after the format string,
because of the way the preprocessor handles varargs macros. We can hide some of
that irritation by pushing the extra arguments into a varargs function.
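The trick can be sketched like this (a minimal illustration with made-up names, not jemalloc's actual code): a macro declared as `LOG(fmt, ...)` expanding to `f(fmt, __VA_ARGS__)` breaks on `LOG("msg")` because of the trailing comma, but forwarding everything through `__VA_ARGS__` lets the format string itself fill the mandatory argument slot, and a variadic function absorbs whatever follows.

```c
#include <stdarg.h>
#include <stdio.h>

/* Variadic back-end: formats into buf and returns the formatted length. */
static int log_impl(char *buf, size_t n, const char *fmt, ...) {
    va_list ap;
    va_start(ap, fmt);
    int written = vsnprintf(buf, n, fmt, ap);
    va_end(ap);
    return written;
}

/* The macro forwards ALL arguments, so zero "extra" arguments is legal. */
#define LOG(buf, n, ...) log_impl((buf), (n), __VA_ARGS__)
```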
|
| |
|
|
|
|
| |
I.e. malloc, free, the allocx API, and the posix extensions.
|
|
|
|
|
| |
This sets up a hierarchical logging facility, so that we can add logging
statements liberally, and turn them on in a fine-grained manner.
|
|
|
|
|
|
|
| |
Older Linux systems don't have O_CLOEXEC. If that's the case, we fcntl
immediately after open, to minimize the length of the racy period in which an
operation in another thread can leak a file descriptor to a child.
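The pattern can be sketched as follows (open_cloexec is a hypothetical helper for illustration, not jemalloc's API):

```c
#include <fcntl.h>
#include <unistd.h>

/* Prefer atomic O_CLOEXEC; on systems without it, set FD_CLOEXEC via
 * fcntl immediately after open(). The fallback shrinks, but cannot fully
 * eliminate, the window in which a concurrent fork+exec in another
 * thread could leak the descriptor to a child process. */
static int open_cloexec(const char *path, int flags) {
#ifdef O_CLOEXEC
    return open(path, flags | O_CLOEXEC);
#else
    int fd = open(path, flags);
    if (fd != -1) {
        (void)fcntl(fd, F_SETFD, fcntl(fd, F_GETFD) | FD_CLOEXEC);
    }
    return fd;
#endif
}
```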
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
On OS X, we rely on the zone machinery to call our prefork and postfork
handlers.
In zone_force_unlock, we call jemalloc_postfork_child, reinitializing all our
mutexes regardless of state, since the mutex implementation will assert if the
tid of the unlocker is different from that of the locker. This has the effect
of unlocking the mutexes, but also fails to wake any threads waiting on them in
the parent.
To fix this, we track whether or not we're the parent or child after the fork,
and unlock or reinit as appropriate.
This resolves #895.
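The parent/child split described above can be sketched with pthread_atfork (names are illustrative, not jemalloc's actual code):

```c
#include <pthread.h>
#include <sys/wait.h>
#include <unistd.h>

/* The parent must UNLOCK, so that threads waiting on the mutex wake up.
 * The child is single-threaded after fork, so no waiters exist there; it
 * may safely REINITIALIZE instead of unlocking from a tid that never
 * acquired the lock (which a checked mutex implementation would reject). */
static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;

static void prefork(void)         { pthread_mutex_lock(&big_lock); }
static void postfork_parent(void) { pthread_mutex_unlock(&big_lock); }
static void postfork_child(void)  { pthread_mutex_init(&big_lock, NULL); }

static int install_fork_handlers(void) {
    return pthread_atfork(prefork, postfork_parent, postfork_child);
}
```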
|
|
|
|
| |
This fixed an issue that could cause the child process to get stuck after fork.
|
| |
|
| |
|
| |
|
|
|
|
|
| |
Customized extent hooks may malloc / free and thus trigger reentrancy. Support
this behavior by allowing reentrancy in hook functions.
|
|
|
|
| |
Reported by Conrad Meyer.
|
| |
|
|
|
|
|
|
| |
Pass is_background_thread down the decay path, so that the background thread
itself won't attempt inactivity_check. This fixes an issue with the background
thread doing trylock on a mutex it already owns.
|
|
|
|
|
| |
This prevents signals from being inadvertently delivered to background
threads.
|
| |
|
|
|
|
|
|
|
|
|
| |
We use the minimal_initialized tsd (which requires no cleanup) specifically for
free(), if tsd hasn't been initialized yet.
Any other activity will transition the state from minimal to normal. This works
around the case where a thread has no malloc calls in its lifetime until thread
termination, when free() happens after TLS destructors.
|
| |
|
|
|
|
|
| |
During purging, we may unlock decay->mtx. Therefore we should finish logging
decay-related counters before attempting to purge.
|
|
|
|
|
| |
If neither background_thread nor lazy_lock is in use, do not abort on dlsym
errors.
|
|
|
|
|
|
|
| |
This issue caused the default extent alloc function to be incorrectly
used even when arena.<i>.extent_hooks is set. This bug was introduced
by 411697adcda2fd75e135cdcdafb95f2bd295dc7f (Use exponential series to
size extents.), which was first released in 5.0.0.
|
| |
|
|
|
|
| |
Avoid calling pthread_create in postfork handlers.
|
|
|
|
|
| |
To avoid complications, avoid invoking pthread_create "internally"; instead,
rely on thread0 to launch new threads, and also to terminate threads when
asked.
|
|
|
|
| |
Also fix a compilation error #ifndef JEMALLOC_PTHREAD_CREATE_WRAPPER.
|
| |
|
| |
|
|
|
|
|
|
| |
Avoid holding arenas_lock and background_thread_lock when creating background
threads, because pthread_create may take internal locks, and potentially cause
deadlock with jemalloc internal locks.
|
|
|
|
|
| |
Since tsd cleanup isn't guaranteed when reincarnated, we set up tsd in a way
that needs no cleanup, by making it go through the slow path instead.
|
|
|
|
|
| |
It's possible to customize the extent_hooks while still using part of the
default implementation.
|
| |
|
|
|
|
| |
This makes sure we go down the slow path with a0 during init.
|
| |
|
|
|
|
| |
The state initialization should be done before pthread_create.
|
|
|
|
|
| |
Refactor bootstrapping such that dlsym() is called during the
bootstrapping phase that can tolerate reentrant allocation.
|
|
|
|
|
| |
Previously we could still hit these assertions down error paths or in the
extended API.
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
| |
This resolves #528.
|
|
|
|
|
| |
Use a separate boolean to track the enabled status, instead of leaving the
global background thread status inconsistent.
|