Generalize ExtractSymbols to handle all cases of library address overlap
with the main binary.
|
Linux kernels have been capable of concurrent page table access since
2.6.27, so this hack is not necessary for modern kernels.
|
Now that JEMALLOC_OPTIONS=P isn't the only way to cause stats_print() to
be called, opt_stats_print must actually be checked when reporting the
state of the P/p option.
|
Don't build with -march=native by default, because the generated code
may perform especially poorly on ABI-compatible, but internally
different, systems.
|
Split library build rules up so that parallel building works.
Fix autoconf-related dependencies.
Remove obsolete JEMALLOC_VERSION definition.
|
Iterate downward through both libraries and PCs. This allows PCs to be
resolved even when library address ranges overlap.
|
Remove a duplicate prof_leave() call in an error path through
prof_dump().
|
Modify ExtractSymbols to operate on sorted PCs and libraries, in order
to reduce computational complexity from O(N*M) to O(N+M).
|
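A minimal sketch of the sorted merge, in C for illustration (ExtractSymbols
itself is a routine in the pprof script, and these names are hypothetical):
with PCs and library mappings both sorted by address, one synchronized pass
replaces the old all-pairs scan.

    #include <stddef.h>
    #include <stdint.h>

    typedef struct { uintptr_t base, end; } lib_t;

    /* Both arrays sorted ascending: one pass over PCs and mappings,
     * O(N+M), instead of testing every PC against every library, O(N*M). */
    static void assign_pcs(const uintptr_t *pcs, size_t npcs,
        const lib_t *libs, size_t nlibs, size_t *owner)
    {
        size_t i = 0, j = 0;

        while (i < npcs && j < nlibs) {
            if (pcs[i] < libs[j].base)
                owner[i++] = (size_t)-1; /* PC precedes remaining mappings. */
            else if (pcs[i] >= libs[j].end)
                j++;                     /* Advance to the next mapping. */
            else
                owner[i++] = j;          /* PC falls inside libs[j]. */
        }
        while (i < npcs)
            owner[i++] = (size_t)-1;     /* Past the last mapping. */
    }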
|
Fix a divide-by-zero error in pprof. A sample context can currently have
no associated objects, yet its cumulative statistics are still useful,
depending on how the user invokes pprof. Since jemalloc intentionally
does not filter out such contexts, take care not to divide by 0 when
re-scaling for v2 heap sampling.
Install pprof as part of 'make install'.
Update pprof documentation.
|
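A minimal sketch of the guard, in C for illustration (pprof itself is a
Perl script): v2 heap sampling re-scales each sampled (count, bytes) pair
by roughly 1/(1 - e^(-avg/R)), where avg = bytes/count and R is the mean
sample interval, so a context whose count is zero must be left unscaled.

    #include <math.h>

    /* Scale factor that converts sampled counts/bytes into estimates of
     * the true values.  A context with no objects has nothing to
     * re-scale, and computing avg = s / c for it was the original
     * divide-by-zero. */
    static double unsample_scale(double c, double s, double R)
    {
        double avg;

        if (c == 0.0)
            return 1.0;
        avg = s / c;
        return 1.0 / (1.0 - exp(-avg / R));
    }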
|
Leak reporting is useful even if sampling is enabled; some leaks may not
be reported, but those reported are still genuine leaks.
|
Add the E/e options to control whether the application starts with
sampling active/inactive (secondary control to F/f). Add the
prof.active mallctl so that the application can activate/deactivate
sampling on the fly.
|
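A minimal usage sketch, not part of the commit, using the modern
unprefixed mallctl() name (releases from this era wrapped the public
symbols in a configurable prefix):

    #include <stdbool.h>
    #include <stddef.h>
    #include <jemalloc/jemalloc.h>

    /* Toggle sampling at runtime; requires a build with profiling
     * enabled.  The same call reads back the previous setting. */
    static int set_prof_active(bool active)
    {
        bool old;
        size_t sz = sizeof(old);

        return mallctl("prof.active", &old, &sz, &active, sizeof(active));
    }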
|
Make it possible to disable interval-triggered profile dumping, even if
profiling is enabled. This is useful if the user only wants a single
dump at exit, or if the application manually triggers profile dumps.
|
If the mean heap sampling interval is larger than one page, simulate
sampled small objects with large objects. This allows profiling context
pointers to be omitted for small objects. As a result, the memory
overhead for sampling decreases as the sampling interval is increased.
Fix a compilation error in the profiling code.
|
Properly set/clear CHUNK_MAP_ZEROED for all purged pages, according to
whether the pages are (potentially) file-backed or anonymous. This was
merely a performance pessimization for the anonymous mapping case, but
was a calloc()-related bug for the swap_enabled case.
|
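A minimal sketch of the rule, with a hypothetical helper (the real change
flips a flag in jemalloc's per-chunk page map): anonymous pages purged via
madvise(MADV_DONTNEED) are refaulted zero-filled, while swap-file-backed
pages can come back with stale contents.

    #include <stdbool.h>

    /* Hypothetical predicate: may a just-purged page keep
     * CHUNK_MAP_ZEROED?  Getting this wrong for file-backed pages
     * breaks calloc(). */
    static bool purged_page_still_zeroed(bool file_backed)
    {
        if (file_backed)
            return false; /* Swap-backed: old contents may reappear. */
        return true;      /* Anonymous: refaulted pages are zero-filled. */
    }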
|
Split arena->runs_avail into arena->runs_avail_{clean,dirty}, and
preferentially allocate dirty runs.
|
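A minimal sketch of the preference, with toy stand-ins for jemalloc's
internals (the real runs_avail_{clean,dirty} structures are red-black
trees inside arena_t):

    #include <stddef.h>

    typedef struct arena_run_s arena_run_t;
    typedef struct {
        /* Stand-ins for the runs_avail_{dirty,clean} trees. */
        arena_run_t *(*search_dirty)(size_t size);
        arena_run_t *(*search_clean)(size_t size);
    } arena_t;

    /* Prefer dirty runs: their pages are already dirty, so reusing them
     * avoids purging them and avoids dirtying clean pages. */
    static arena_run_t *run_alloc(arena_t *arena, size_t size)
    {
        arena_run_t *run = arena->search_dirty(size);

        if (run == NULL)
            run = arena->search_clean(size);
        return run;
    }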
|
Remove medium size classes, because concurrent dirty page purging is
no longer capable of purging inactive dirty pages inside active runs
(due to recent arena/bin locking changes).
Enhance tcache to support caching large objects, so that the same range
of size classes is still cached, despite the removal of medium size
class support.
|
Initialize the small run header before dropping arena->lock, because
arena_chunk_purge() relies on valid small run headers during run
iteration.
Add some assertions.
|
Check for interior pointers in arena_[ds]alloc().
Check for corrupt pointers in tcache_alloc().
|
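A minimal sketch of an interior-pointer check, assuming fixed-size regions
packed into a run (a toy layout, not jemalloc's actual chunk map):

    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>

    /* A valid pointer into a small run lands exactly on a region
     * boundary; anything else is an interior pointer, i.e. a caller
     * bug. */
    static void check_region_pointer(uintptr_t run_addr,
        size_t reg0_offset, size_t reg_size, uintptr_t ptr)
    {
        uintptr_t diff = ptr - (run_addr + reg0_offset);

        assert(diff % reg_size == 0 && "interior pointer passed to dalloc");
    }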
|
Update arena->nactive when pseudo-allocating runs in
arena_chunk_purge(), since arena_run_dalloc() subtracts from
arena->nactive.
|
For bin-related allocation, protect data structures with bin locks
rather than arena locks. Arena locks remain for run
allocation/deallocation and other miscellaneous operations.
Restructure statistics counters to maintain per-bin
allocated/nmalloc/ndalloc counters, but continue to provide arena-wide
statistics via aggregation in the ctl code.
|
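A minimal sketch of the resulting layout, with illustrative field names
and a plain pthread mutex standing in for jemalloc's own mutex type:

    #include <pthread.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Each bin has its own lock and counters, so threads allocating from
     * different size classes in the same arena no longer contend. */
    typedef struct {
        pthread_mutex_t lock;      /* Protects this bin only. */
        uint64_t        nmalloc;   /* Allocations served by this bin. */
        uint64_t        ndalloc;   /* Deallocations returned to this bin. */
        size_t          allocated; /* Bytes currently allocated. */
    } bin_t;

    /* Arena-wide totals are produced by summing over bins in the ctl
     * code. */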
|
Use chained run free lists instead of bitmaps to track free objects
within small runs.
Remove reference counting for small object run pages.
|
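A minimal sketch of the idea, assuming the link is threaded through the
first word of each free region (toy code; the real run header also tracks
a count of free regions):

    #include <stddef.h>

    /* Free regions are chained through their own storage, so no side
     * bitmap is needed and allocation/deallocation are O(1) pops and
     * pushes. */
    typedef struct region_s { struct region_s *next; } region_t;
    typedef struct { region_t *free_head; } run_t;

    static void *run_reg_alloc(run_t *run)
    {
        region_t *reg = run->free_head;

        if (reg != NULL)
            run->free_head = reg->next; /* Pop the free-list head. */
        return reg;
    }

    static void run_reg_dalloc(run_t *run, void *ptr)
    {
        region_t *reg = ptr;

        reg->next = run->free_head;     /* Push onto the free list. */
        run->free_head = reg;
    }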
|
Use chains of cached objects, rather than using arrays of pointers.
Since tcache_bin_t is no longer dynamically sized, convert tcache_t's
tbin to an array of structures, rather than an array of pointers. This
implicitly removes tcache_bin_{create,destroy}(), which further
simplifies the fast path for malloc/free.
Use cacheline alignment for tcache_t allocations.
Remove runtime configuration option for number of tcache bin slots, and
replace it with a boolean option for enabling/disabling tcache.
Limit the number of tcache objects to the lesser of TCACHE_NSLOTS_MAX
and 2X the number of regions per run for the size class.
For GC-triggered flush, discard 3/4 of the objects below the low water
mark, rather than 1/2.
|
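A minimal sketch of a chained tcache bin and the GC flush rule, with
simplified fields (hypothetical names; the real tcache_bin_t carries more
state):

    typedef struct item_s { struct item_s *next; } item_t;

    /* The bin is fixed-size: cached objects are chained through their
     * own storage rather than held in a dynamically sized pointer
     * array. */
    typedef struct {
        item_t  *head;      /* Chain of cached objects. */
        unsigned ncached;   /* Current chain length. */
        unsigned low_water; /* Minimum ncached since the last GC pass. */
    } tbin_t;

    /* Objects below the low-water mark went unused for a whole GC epoch;
     * flush 3/4 of them rather than the previous 1/2. */
    static unsigned tbin_gc_nflush(const tbin_t *tbin)
    {
        return (tbin->low_water * 3) / 4;
    }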
|
Convert chunks_dirty from a red-black tree to a doubly linked list,
and use it to purge dirty pages from chunks in FIFO order.
Add a lock around the code that purges dirty pages via madvise(2), in
order to avoid kernel contention. If lock acquisition fails,
indefinitely postpone purging dirty pages.
Add a lower limit of one chunk worth of dirty pages per arena for
purging, in addition to the active:dirty ratio.
When purging, purge all dirty pages from at least one chunk, but rather
than purging enough pages to drop to half the purging threshold, merely
drop to the threshold.
|
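A minimal sketch of the trylock pattern, assuming a single process-wide
purge lock (hypothetical names; jemalloc wraps this in its own mutex
type):

    #include <pthread.h>
    #include <stddef.h>
    #include <sys/mman.h>

    static pthread_mutex_t purge_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Let one thread purge at a time; if the lock is busy, postpone
     * rather than queue up behind madvise(2) calls in the kernel. */
    static void maybe_purge(void *addr, size_t len)
    {
        if (pthread_mutex_trylock(&purge_lock) != 0)
            return; /* Another thread is purging; retry later. */
        madvise(addr, len, MADV_DONTNEED);
        pthread_mutex_unlock(&purge_lock);
    }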
|
Rather than passing four strings to malloc_message(), malloc_write4(),
and all the functions that use them, pass only one string.
|
Don't look for a shared libunwind if --with-static-libunwind is
specified.
Set SONAME when linking the shared libjemalloc.
Add DESTDIR support.
Add install_{include,lib,man} build targets.
Clean up compiler flag configuration.
|
Move prof_sample_threshold initialization into prof_alloc_prep(),
before using it to decide whether to capture a backtrace.
|
Base version string on 'git describe --long', and provide cpp
macros in jemalloc.h.
Add the version mallctl.
|
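A minimal usage sketch, not part of the commit, reading the new mallctl
with the modern unprefixed API:

    #include <stddef.h>
    #include <stdio.h>
    #include <jemalloc/jemalloc.h>

    int main(void)
    {
        const char *version;
        size_t sz = sizeof(version);

        /* "version" yields a pointer to the build's version string. */
        if (mallctl("version", &version, &sz, NULL, 0) == 0)
            printf("jemalloc %s\n", version);
        return 0;
    }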
|
Use left-leaning 2-3 red-black trees instead of left-leaning 2-3-4
red-black trees. This reduces maximum tree height from (3 lg n) to
(2 lg n).
Do lazy balance fixup, rather than transforming the tree during the down
pass. This improves insert/remove speed by ~30%.
Use callback-based iteration rather than macros.
|
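A minimal sketch of callback-based iteration, with hypothetical names and
a simple recursive walk (jemalloc's rb.h generates equivalent,
non-recursive functions per tree type):

    #include <stddef.h>

    typedef struct node_s {
        struct node_s *left, *right;
    } node_t;

    /* The callback may stop the walk early by returning non-NULL; that
     * value is propagated back to the caller. */
    typedef node_t *(*iter_cb_t)(node_t *node, void *arg);

    static node_t *tree_iter(node_t *node, iter_cb_t cb, void *arg)
    {
        node_t *ret;

        if (node == NULL)
            return NULL;
        if ((ret = tree_iter(node->left, cb, arg)) != NULL)
            return ret;
        if ((ret = cb(node, arg)) != NULL)
            return ret;
        return tree_iter(node->right, cb, arg);
    }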
|
Include mb.h after mutex.h, in case it actually has to use the
mutex-based memory barrier implementation.
|
Remove all functionality related to tracing. This functionality was
useful for understanding memory fragmentation during early algorithmic
design of jemalloc, but it had little utility for non-trivial
applications, due to the sheer volume of data written to disk.