path: root/jemalloc
Commit message (Author, Date; files changed, lines -/+)
* Avoid unnecessary isalloc() calls. (Jason Evans, 2010-05-12; 1 file, -12/+18)
  When heap profiling is enabled but deactivated, there is no need to call isalloc(ptr) in prof_{malloc,realloc}(). Avoid these calls, so that profiling overhead under such conditions is negligible.
* Fix next_arena initialization. (Jason Evans, 2010-05-11; 1 file, -1/+1)
  If there is more than one arena, initialize next_arena so that the first and second threads to allocate memory use arenas 0 and 1, rather than both using arena 0.
* Add MAP_NORESERVE support. (Jordan DeLong, 2010-05-11; 3 files, -14/+32)
  Add MAP_NORESERVE to the chunk_mmap() case being used by chunk_swap_enable(), if the system supports it.
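  A minimal sketch of a guarded mapping of this kind, assuming a Linux-style mmap(2); chunk_map() is a hypothetical stand-in for chunk_mmap(), and MAP_NORESERVE is defined away on systems that lack it:

  ```c
  #include <stddef.h>
  #include <sys/mman.h>

  /* Fall back to a no-op flag on systems without MAP_NORESERVE. */
  #ifndef MAP_NORESERVE
  #define MAP_NORESERVE 0
  #endif

  /* Map anonymous memory without reserving swap, when supported. */
  void *chunk_map(size_t size) {
      void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
      return (p == MAP_FAILED) ? NULL : p;
  }
  ```

  Defining the missing flag to 0 keeps one mmap() call path for both cases, which matches the "if the system supports it" wording above.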
* Fix junk filling of cached large objects. (Jason Evans, 2010-04-28; 1 file, -1/+1)
  Use the size argument to tcache_dalloc_large() to control the number of bytes set to 0x5a when junk filling is enabled, rather than accessing a non-existent arena bin. This bug was capable of corrupting an arbitrarily large memory region, depending on what followed the arena data structure in memory (typically zeroed memory, another arena_t, or a red-black tree node for a huge object).
* Fix tcache crash during thread cleanup. (Jason Evans, 2010-04-14; 1 file, -14/+12)
  Properly maintain tcache_bin_t's avail pointer such that it is NULL if no objects are cached. This only caused problems during thread cache destruction, since cache flushing otherwise never occurs on an empty bin.
* Fix profiling regression caused by bugfix. (Jason Evans, 2010-04-14; 1 file, -8/+9)
  Properly set the context associated with each allocated object, even when the object is not sampled. Remove debug print code that slipped in.
* Remove autom4te.cache in distclean (not relclean). (Jason Evans, 2010-04-14; 1 file, -1/+1)
* Fix arena chunk purge/dealloc race conditions. (Jason Evans, 2010-04-14; 1 file, -24/+30)
  Fix arena_chunk_dealloc() to put the new spare in a consistent state before dropping the arena mutex to deallocate the previous spare. Fix arena_run_dalloc() to insert a newly dirtied chunk into the chunks_dirty list before potentially deallocating the chunk, so that dirty page accounting is self-consistent.
* Fix threads-related profiling bugs. (Jason Evans, 2010-04-14; 8 files, -82/+118)
  Initialize bt2cnt_tsd so that cleanup at thread exit actually happens. Associate (prof_ctx_t *) with allocated objects, rather than (prof_thr_cnt_t *). Each thread must always operate on its own (prof_thr_cnt_t *), and an object may outlive the thread that allocated it.
* Update stale JEMALLOC_FILL code. (Jason Evans, 2010-04-14; 1 file, -1/+1)
  Fix a compilation error due to stale data structure access code in tcache_dalloc_large() for junk filling.
* Update documentation. [tag: 1.0.0] (Jason Evans, 2010-04-12; 3 files, -2/+13)
* Generalize ExtractSymbols optimization (pprof). (Jason Evans, 2010-04-09; 1 file, -17/+18)
  Generalize ExtractSymbols to handle all cases of library address overlap with the main binary.
* Revert re-addition of purge_lock. (Jason Evans, 2010-04-09; 2 files, -39/+48)
  Linux kernels have been capable of concurrent page table access since 2.6.27, so this hack is not necessary for modern kernels.
* Fix P/p reporting in stats_print(). (Jason Evans, 2010-04-09; 1 file, -1/+3)
  Now that JEMALLOC_OPTIONS=P isn't the only way to cause stats_print() to be called, opt_stats_print must actually be checked when reporting the state of the P/p option.
* Don't build with -march=native. (Jason Evans, 2010-04-08; 1 file, -1/+0)
  Don't build with -march=native by default, because the generated code may perform especially poorly on ABI-compatible, but internally different, systems.
* Fix build system problems. (Jason Evans, 2010-04-08; 5 files, -30/+20)
  Split library build rules up so that parallel building works. Fix autoconf-related dependencies. Remove obsolete JEMALLOC_VERSION definition.
* Improve ExtractSymbols (pprof). (Jason Evans, 2010-04-08; 1 file, -11/+4)
  Iterate downward through both libraries and PCs. This allows PCs to resolve even when library address ranges overlap.
* Fix error path in prof_dump(). (Jason Evans, 2010-04-06; 1 file, -1/+0)
  Remove a duplicate prof_leave() call in an error path through prof_dump().
* Report E/e option state in jemalloc_stats_print(). (Jason Evans, 2010-04-06; 2 files, -3/+6)
* Optimize ExtractSymbols (pprof). (Jason Evans, 2010-04-03; 1 file, -6/+19)
  Modify ExtractSymbols to operate on sorted PCs and libraries, in order to reduce computational complexity from O(N*M) to O(N+M).
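  The complexity drop comes from a single forward pass over two sorted sequences, instead of scanning every library range for every PC. A minimal sketch of that idea in C (count_resolved() and its array layout are illustrative only; pprof itself is not written in C):

  ```c
  #include <stddef.h>

  /* Count PCs that fall inside any [start, end) library range.
     Both inputs are sorted ascending, so one forward pass with two
     cursors suffices: O(N+M) instead of O(N*M). */
  size_t count_resolved(const unsigned long *pcs, size_t npcs,
                        const unsigned long (*libs)[2], size_t nlibs) {
      size_t i = 0, j = 0, hits = 0;
      while (i < npcs && j < nlibs) {
          if (pcs[i] < libs[j][0])
              i++;                    /* PC lies before this library */
          else if (pcs[i] >= libs[j][1])
              j++;                    /* move past this library range */
          else {
              hits++;                 /* PC inside [start, end) */
              i++;
          }
      }
      return hits;
  }
  ```

  Each iteration advances at least one cursor, so the loop runs at most N+M times.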
* Use addr2line only for --line option (pprof). (Jason Evans, 2010-04-03; 1 file, -1/+2)
* Import pprof from google-perftools, svn r91. (Jason Evans, 2010-04-02; 3 files, -3/+4358)
  Fix divide-by-zero error in pprof. It is possible for sample contexts to currently have no associated objects, but the cumulative statistics are still useful, depending on how the user invokes pprof. Since jemalloc intentionally does not filter such contexts, take care not to divide by 0 when re-scaling for v2 heap sampling. Install pprof as part of 'make install'. Update pprof documentation.
* Don't disable leak reporting due to sampling. (Jason Evans, 2010-04-02; 2 files, -11/+1)
  Leak reporting is useful even if sampling is enabled; some leaks may not be reported, but those reported are still genuine leaks.
* Add sampling activation/deactivation control. (Jason Evans, 2010-04-01; 5 files, -1/+68)
  Add the E/e options to control whether the application starts with sampling active/inactive (secondary control to F/f). Add the prof.active mallctl so that the application can activate/deactivate sampling on the fly.
* Make interval-triggered profile dumping optional. (Jason Evans, 2010-04-01; 6 files, -14/+24)
  Make it possible to disable interval-triggered profile dumping, even if profiling is enabled. This is useful if the user only wants a single dump at exit, or if the application manually triggers profile dumps.
* Reduce statistical heap sampling memory overhead. (Jason Evans, 2010-03-31; 7 files, -59/+224)
  If the mean heap sampling interval is larger than one page, simulate sampled small objects with large objects. This allows profiling context pointers to be omitted for small objects. As a result, the memory overhead for sampling decreases as the sampling interval is increased. Fix a compilation error in the profiling code.
* Re-add purge_lock to funnel madvise(2) calls. (Jason Evans, 2010-03-27; 2 files, -48/+39)
* Set/clear CHUNK_MAP_ZEROED in arena_chunk_purge(). (Jason Evans, 2010-03-22; 1 file, -11/+32)
  Properly set/clear CHUNK_MAP_ZEROED for all purged pages, according to whether the pages are (potentially) file-backed or anonymous. This was merely a performance pessimization for the anonymous mapping case, but was a calloc()-related bug for the swap_enabled case.
* Track dirty and clean runs separately. (Jason Evans, 2010-03-19; 4 files, -226/+285)
  Split arena->runs_avail into arena->runs_avail_{clean,dirty}, and preferentially allocate dirty runs.
* Remove medium size classes. (Jason Evans, 2010-03-17; 12 files, -483/+514)
  Remove medium size classes, because concurrent dirty page purging is no longer capable of purging inactive dirty pages inside active runs (due to recent arena/bin locking changes). Enhance tcache to support caching large objects, so that the same range of size classes is still cached, despite the removal of medium size class support.
* Fix a run initialization race condition. (Jason Evans, 2010-03-16; 2 files, -15/+24)
  Initialize the small run header before dropping arena->lock, because arena_chunk_purge() relies on valid small run headers during run iteration. Add some assertions.
* Add assertions. (Jason Evans, 2010-03-15; 3 files, -1/+11)
  Check for interior pointers in arena_[ds]alloc(). Check for corrupt pointers in tcache_alloc().
* Widen malloc_stats_print() output columns. (Jason Evans, 2010-03-15; 1 file, -14/+15)
* arena_chunk_purge() arena->nactive fix. (Jason Evans, 2010-03-15; 1 file, -0/+1)
  Update arena->nactive when pseudo-allocating runs in arena_chunk_purge(), since arena_run_dalloc() subtracts from arena->nactive.
* Change xmallctl() --> CTL_GET() where possible. (Jason Evans, 2010-03-15; 1 file, -3/+3)
* Fix malloc_stats_print() man page prototype. (Jason Evans, 2010-03-15; 1 file, -2/+2)
* mmap()/munmap() without arena->lock or bin->lock. (Jason Evans, 2010-03-15; 1 file, -41/+118)
* Purge dirty pages without arena->lock. (Jason Evans, 2010-03-15; 2 files, -76/+255)
* Push locks into arena bins. (Jason Evans, 2010-03-15; 9 files, -191/+368)
  For bin-related allocation, protect data structures with bin locks rather than arena locks. Arena locks remain for run allocation/deallocation and other miscellaneous operations. Restructure statistics counters to maintain per-bin allocated/nmalloc/ndalloc, but continue to provide arena-wide statistics via aggregation in the ctl code.
* Simplify small object allocation/deallocation. (Jason Evans, 2010-03-14; 2 files, -343/+135)
  Use chained run free lists instead of bitmaps to track free objects within small runs. Remove reference counting for small object run pages.
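  A chained free list threads a next pointer through the free regions themselves, so no separate bitmap is needed and push/pop are O(1). A minimal sketch of the technique (freelist_t, fl_push(), and fl_pop() are illustrative names, not jemalloc's actual structures):

  ```c
  #include <stddef.h>

  /* LIFO free list threaded through the free regions themselves:
     the first word of each free region stores the next free region. */
  typedef struct { void *head; } freelist_t;

  static void fl_push(freelist_t *fl, void *region) {
      *(void **)region = fl->head;    /* old head becomes the next link */
      fl->head = region;
  }

  static void *fl_pop(freelist_t *fl) {
      void *r = fl->head;
      if (r != NULL)
          fl->head = *(void **)r;     /* unlink the first free region */
      return r;
  }
  ```

  Because the links live inside otherwise-unused free memory, the bookkeeping costs zero extra space, whereas a bitmap needs one bit per region plus a scan to find a free slot.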
* Simplify tcache object caching. (Jason Evans, 2010-03-14; 8 files, -247/+172)
  Use chains of cached objects, rather than using arrays of pointers. Since tcache_bin_t is no longer dynamically sized, convert tcache_t's tbin to an array of structures, rather than an array of pointers. This implicitly removes tcache_bin_{create,destroy}(), which further simplifies the fast path for malloc/free.
  Use cacheline alignment for tcache_t allocations.
  Remove the runtime configuration option for the number of tcache bin slots, and replace it with a boolean option for enabling/disabling tcache.
  Limit the number of tcache objects to the lesser of TCACHE_NSLOTS_MAX and 2X the number of regions per run for the size class.
  For GC-triggered flush, discard 3/4 of the objects below the low water mark, rather than 1/2.
* Modify dirty page purging algorithm. (Jason Evans, 2010-03-05; 4 files, -75/+80)
  Convert chunks_dirty from a red-black tree to a doubly linked list, and use it to purge dirty pages from chunks in FIFO order.
  Add a lock around the code that purges dirty pages via madvise(2), in order to avoid kernel contention. If lock acquisition fails, indefinitely postpone purging dirty pages.
  Add a lower limit of one chunk worth of dirty pages per arena for purging, in addition to the active:dirty ratio. When purging, purge all dirty pages from at least one chunk, but rather than purging enough pages to drop to half the purging threshold, merely drop to the threshold.
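  The postpone-on-contention behavior can be sketched with pthread_mutex_trylock(): if another thread already holds the purge lock, give up immediately instead of blocking. purge_dirty_pages() and its counter are illustrative stand-ins; the real code walks the FIFO chunk list and calls madvise(2):

  ```c
  #include <pthread.h>

  static pthread_mutex_t purge_lock = PTHREAD_MUTEX_INITIALIZER;
  static int npurges = 0;         /* stand-in for the actual purging work */

  /* Returns 1 if this call purged, 0 if purging was postponed. */
  int purge_dirty_pages(void) {
      if (pthread_mutex_trylock(&purge_lock) != 0)
          return 0;               /* another thread is purging: postpone */
      npurges++;                  /* real code: madvise(2) over dirty chunks */
      pthread_mutex_unlock(&purge_lock);
      return 1;
  }
  ```

  Trylock rather than lock is the key design choice: purging is best-effort maintenance, so a contending thread loses nothing by deferring to whoever already holds the lock.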
* Print version in malloc_stats_print(). (Jason Evans, 2010-03-04; 1 file, -0/+5)
* Simplify malloc_message(). (Jason Evans, 2010-03-04; 13 files, -248/+270)
  Rather than passing four strings to malloc_message(), malloc_write4(), and all the functions that use them, only pass one string.
* Fix various config/build issues. (Jason Evans, 2010-03-04; 3 files, -17/+35)
  Don't look for a shared libunwind if --with-static-libunwind is specified. Set SONAME when linking the shared libjemalloc. Add DESTDIR support. Add install_{include,lib/man} build targets. Clean up compiler flag configuration.
* Move sampling init into prof_alloc_prep(). (Jason Evans, 2010-03-03; 1 file, -39/+51)
  Move prof_sample_threshold initialization into prof_alloc_prep(), before using it to decide whether to capture a backtrace.
* Add the --with-static-libunwind configure option. (Jason Evans, 2010-03-02; 2 files, -1/+18)
* Add release versioning support. [tag: 0.0.0] (Jason Evans, 2010-03-02; 6 files, -2/+32)
  Base version string on 'git describe --long', and provide cpp macros in jemalloc.h. Add the version mallctl.
* Allow prof.dump mallctl to specify filename. (Jason Evans, 2010-03-02; 4 files, -78/+134)
* Edit rb documentation. (Jason Evans, 2010-03-02; 1 file, -7/+6)