...
Reported by Pat Lynch.
Consistently use malloc_mutex_prefork() instead of malloc_mutex_lock()
in all prefork functions.
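
For context, the prefork/postfork locking discipline that a dedicated
malloc_mutex_prefork() path exists to centralize can be sketched with
plain pthreads; the example_* names are hypothetical and this is not
jemalloc's internal code:

    #include <pthread.h>

    static pthread_mutex_t example_mtx = PTHREAD_MUTEX_INITIALIZER;

    static void
    example_prefork(void)
    {
        /* Acquire before fork(2) so the child never inherits a mutex
         * held by some other thread. */
        pthread_mutex_lock(&example_mtx);
    }

    static void
    example_postfork(void)    /* installed for both parent and child */
    {
        pthread_mutex_unlock(&example_mtx);
    }

    static void
    example_install_fork_hooks(void)
    {
        pthread_atfork(example_prefork, example_postfork, example_postfork);
    }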
Avoid writing to uninitialized TLS as a side effect of deallocation.
Initializing TLS during deallocation is unsafe because it is possible
that a thread never did any allocation, and that TLS has already been
deallocated by the threads library, resulting in write-after-free
corruption. These fixes affect prof_tdata and quarantine; all other
uses of TLS are already safe, whether intentionally (as for tcache) or
unintentionally (as for arenas).
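
A minimal sketch of the safe pattern, with hypothetical example_* names
(not the actual prof_tdata/quarantine code): a deallocation path may
consume thread-local state that already exists, but must never create it.

    typedef struct { long ndalloc; } example_tsd_t;

    static __thread example_tsd_t *example_tsd;   /* NULL until first alloc */

    static void
    example_on_dalloc(void *ptr)
    {
        example_tsd_t *tsd = example_tsd;   /* peek only; no lazy init here */

        if (tsd != NULL)
            tsd->ndalloc++;
        /* Otherwise this thread never allocated, or the threads library
         * already tore down its TLS; initializing it now could scribble
         * on freed memory. */
        (void)ptr;
    }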
Update hash from MurmurHash2 to MurmurHash3, primarily because the
latter generates 128 bits in a single call for no extra cost, which
simplifies integration with cuckoo hashing.
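
For reference, the public MurmurHash3 API from Austin Appleby's smhasher
yields both 64-bit halves in one call; jemalloc's internal wrapper may
differ, so treat this only as an illustration of the point above:

    #include <stdint.h>

    /* Declaration as in smhasher's MurmurHash3.h. */
    void MurmurHash3_x64_128(const void *key, int len, uint32_t seed,
        void *out);

    static void
    example_two_hashes(const void *key, int len, uint64_t *h1, uint64_t *h2)
    {
        uint64_t out[2];

        MurmurHash3_x64_128(key, len, 0x2545f491u, out);   /* arbitrary seed */
        *h1 = out[0];   /* e.g. primary cuckoo bucket */
        *h2 = out[1];   /* e.g. alternate cuckoo bucket */
    }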
Refactor arena_prof_accum() and its callers to avoid arena locking when
prof_interval is 0 (as when profiling is disabled).
Reported by Ben Maurer.
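
The shape of the change, as a self-contained sketch (the arena_t fields
and prof_interval below are simplified stand-ins for the jemalloc
internals named above):

    #include <stdint.h>
    #include <pthread.h>

    typedef struct {
        pthread_mutex_t lock;
        uint64_t        prof_accumbytes;
    } arena_t;

    static uint64_t prof_interval;   /* 0: interval-triggered dumps disabled */

    static void
    arena_prof_accum_sketch(arena_t *arena, uint64_t accumbytes)
    {
        if (prof_interval == 0)
            return;                  /* fast path: no arena lock taken */
        pthread_mutex_lock(&arena->lock);
        arena->prof_accumbytes += accumbytes;
        pthread_mutex_unlock(&arena->lock);
    }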
Add a library constructor for jemalloc that initializes the allocator.
This fixes a race that could occur if threads were created by the main
thread prior to any memory allocation, followed by fork(2), and then
memory allocation in the child process.
Fix the prefork/postfork functions to acquire/release the ctl, prof, and
rtree mutexes. This fixes various fork() child process deadlocks, but
one possible deadlock remains (intentionally) unaddressed: prof
backtracing can acquire runtime library mutexes, so deadlock is still
possible if heap profiling is enabled during fork(). This deadlock is
known to be a real issue in at least the case of libgcc-based
backtracing.
Reported by tfengjun.
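
A minimal sketch of such a constructor using the GCC/Clang attribute
(the function name is hypothetical, and forcing a throwaway allocation
is just one way to trigger full initialization):

    #include <stdlib.h>

    __attribute__((constructor))
    static void
    example_jemalloc_ctor(void)
    {
        /* Bootstrap the allocator before main() runs, so early thread
         * creation followed by fork(2) can no longer race with
         * first-time initialization in the child. */
        free(malloc(1));
    }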
Handle prof_tdata resurrection during thread shutdown, similarly to how
tcache and quarantine handle resurrection.
Don't set prof_tdata during thread cleanup, because doing so will cause
the cleanup function to be called again, the second time with a NULL
argument.
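
For background, POSIX TSD destructors receive the value that was current
for the key, and leaving a non-NULL value stored when a destructor pass
finishes causes the destructors to run again. A self-contained
illustration with hypothetical names (not the prof_tdata code):

    #include <pthread.h>
    #include <stdlib.h>

    static pthread_key_t example_key;

    static void
    example_cleanup(void *arg)
    {
        /* Work from 'arg' (the value at destruction time) rather than
         * re-reading the key, and do not re-arm the key with a live
         * value here; pthread_setspecific(example_key, ...) would make
         * the threads library invoke this destructor again. */
        free(arg);
    }

    static void
    example_boot(void)
    {
        pthread_key_create(&example_key, example_cleanup);
    }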
Fix a potential deadlock that could occur during interval- and
growth-triggered heap profile dumps.
Fix an off-by-one heap profile statistics bug that could be observed in
interval- and growth-triggered heap profiles.
Fix heap profile dump filename sequence numbers (regression during
conversion to malloc_snprintf()).
Change the "opt.lg_prof_sample" default from 0 to 19 (1 B to 512 KiB).
Change the "opt.prof_accum" default from true to false.
Add the "opt.prof_final" mallctl, so that "opt.prof_prefix" need not be
abused to disable final profile dumping.
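
For illustration, an application that wants the previous sampling and
accumulation behavior back can request it through jemalloc's option
string, either via the malloc_conf global shown below or via the
MALLOC_CONF environment variable (prof:true assumes a --enable-prof
build):

    /* Sample every allocation and keep cumulative counts, as before. */
    const char *malloc_conf = "prof:true,lg_prof_sample:0,prof_accum:true";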
Rename labels from FOO to label_foo in order to avoid system macro
definitions, in particular OUT and ERROR on mingw.
Reported by Mike Hommey.
s/PAGE_SHIFT/LG_PAGE/g and s/PAGE_SIZE/PAGE/g.
Remove remnants of the dynamic-page-shift code.
Rename the "arenas.pagesize" mallctl to "arenas.page".
Remove the "arenas.chunksize" mallctl, which is redundant with
"opt.lg_chunk".
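
Usage of the renamed mallctl, for illustration (assuming an unprefixed
build, where the public entry point is mallctl()):

    #include <stdio.h>
    #include <jemalloc/jemalloc.h>

    static void
    example_print_page_size(void)
    {
        size_t page, sz = sizeof(page);

        /* Formerly "arenas.pagesize"; the value is the page size in bytes. */
        if (mallctl("arenas.page", &page, &sz, NULL, 0) == 0)
            printf("page size: %zu\n", page);
    }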
Remove ephemeral mutexes from the prof machinery, and remove
malloc_mutex_destroy(). This simplifies mutex management on systems
that call malloc()/free() inside pthread_mutex_{init,destroy}().
Add atomic_*_u() for operations on unsigned values.
Fix prof_printf() to call malloc_vsnprintf() rather than
malloc_snprintf().
Implement tsd, a TLS/TSD abstraction that uses one or both mechanisms
internally, depending on what the platform supports. Modify
bootstrapping such that no tsds are used until allocation is safe.
Remove malloc_[v]tprintf(), and use malloc_snprintf() instead.
Fix %p argument size handling in malloc_vsnprintf().
Fix a long-standing statistics-related bug in the "thread.arena"
mallctl that could cause crashes due to linked list corruption.
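
A generic illustration of the TLS/TSD hybrid idea (hypothetical
example_* names, not jemalloc's tsd macros): fast __thread access where
the toolchain supports it, with a pthread key kept alongside solely so
the library still gets a destructor callback at thread exit.

    #include <pthread.h>

    typedef struct { unsigned nallocs; } example_tsd_t;

    static pthread_key_t example_key;
    static __thread example_tsd_t example_tls;
    static __thread int example_tls_booted;

    static void
    example_cleanup(void *arg)
    {
        (void)arg;   /* flush or free per-thread state here */
    }

    static example_tsd_t *
    example_tsd_get(void)
    {
        if (!example_tls_booted) {
            /* Arm the destructor; the data itself lives in __thread. */
            pthread_setspecific(example_key, &example_tls);
            example_tls_booted = 1;
        }
        return &example_tls;
    }

    static void
    example_boot(void)
    {
        pthread_key_create(&example_key, example_cleanup);
    }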
Implement malloc_vsnprintf() (a subset of vsnprintf(3)) as well as
several other printing functions based on it, so that formatted printing
can be relied upon without concern for inducing a dependency on floating
point runtime support. Replace malloc_write() calls with
malloc_*printf() where doing so simplifies the code.
Add name mangling for library-private symbols in the data and BSS
sections. Adjust CONF_HANDLE_*() macros in malloc_conf_init() to expose
all opt_* variable use to cpp so that proper mangling occurs.
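
The kind of simplification meant by replacing malloc_write() with
malloc_*printf(), as a before/after sketch (internal helper names per
the message, k standing in for a string variable at the call site; not
verbatim jemalloc code):

    /* before: message assembled piecewise */
    malloc_write("<jemalloc>: invalid conf value for ");
    malloc_write(k);
    malloc_write("\n");

    /* after: one formatted call, with no floating point machinery */
    malloc_printf("<jemalloc>: invalid conf value for %s\n", k);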
Rename prn to prng so that Windows doesn't choke when trying to create
a file named prn.h (PRN is a reserved device name on Windows).
Remove opt.lg_prof_bt_max, and hard-code it to 7 (a maximum backtrace
depth of 128 frames). The original intention of this option was to
enable faster backtracing by limiting backtrace depth. However, this
makes graphical pprof output very difficult to interpret. In practice,
decreasing sampling frequency is a better mechanism for limiting
profiling overhead.
Remove the opt.lg_prof_tcmax option and hard-code a cache size of 1024.
This setting is something that users just shouldn't have to worry about.
If lock contention actually ends up being a problem, the simple solution
available to the user is to reduce sampling frequency.
Convert configuration-related cpp conditional logic to use static
constant variables, e.g.:

    #ifdef JEMALLOC_DEBUG
        [...]
    #endif

becomes:

    if (config_debug) {
        [...]
    }

The advantage is clearer, more concise code. The main disadvantage is
that data structures no longer have conditionally defined fields, so
they pay the cost of all fields regardless of whether they are used. In
practice, this is only a minor concern; config_stats will go away in an
upcoming change, and config_prof is the only other major feature that
depends on more than a few special-purpose fields.
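
One way such a constant can be defined so that both branches stay
visible to the compiler (and the dead one is eliminated); the exact
jemalloc definitions may differ slightly:

    #include <stdbool.h>

    static const bool config_debug =
    #ifdef JEMALLOC_DEBUG
        true
    #else
        false
    #endif
        ;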
Fix prof_lookup() to artificially raise curobjs for all paths through
the code that creates a new entry in the per-thread bt2cnt hash table.
This fixes a race condition that could corrupt memory if prof_accum
were false and a non-default lg_prof_tcmax were used and/or threads
were destroyed.
Clean up some prof-related comments to more accurately reflect how the
code works.
Simplify OOM handling code in a couple of prof-related error paths.
Use the argument to prof_tdata_cleanup(), rather than calling
PROF_TCACHE_GET(). This fixes a bug in the NO_TLS case.
Add the LLU suffix for all 0x... 64-bit constants.
Reported by Jakob Blomer.
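
An example of the change (hypothetical constant name): without the
suffix, a 64-bit hexadecimal literal can draw warnings from older
compilers or ones where long is 32 bits, so the width is spelled out
explicitly.

    #include <stdint.h>

    static const uint64_t example_seed_old = 0x9e3779b97f4a7c13;      /* before */
    static const uint64_t example_seed_new = 0x9e3779b97f4a7c13LLU;   /* after  */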