path: root/jemalloc/src/stats.c
Commit log (most recent first). Each entry: commit message (author, date; files changed, lines -/+).
* Replace JEMALLOC_OPTIONS with MALLOC_CONF. (Jason Evans, 2010-10-24; 1 file, -77/+123)

  Replace the single-character run-time flags with key/value pairs, which can be set via the malloc_conf global, /etc/malloc.conf, and the MALLOC_CONF environment variable. Replace the JEMALLOC_PROF_PREFIX environment variable with the "opt.prof_prefix" option. Replace umax2s() with u2s().
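The key/value syntax this commit introduced is still how jemalloc is configured today. A minimal sketch of the three mechanisms named above; the specific option names (stats_print, narenas) are modern jemalloc options used for illustration and may not all have existed at this commit:

```shell
# 1. Environment variable: comma-separated key:value pairs.
MALLOC_CONF="stats_print:true,narenas:2" ./myprog

# 2. Compiled-in default via the global the commit message mentions:
#      const char *malloc_conf = "stats_print:true";
#    (defined in the application, picked up by jemalloc at startup)

# 3. /etc/malloc.conf, conventionally a symlink whose *target string*
#    is read as the option list:
#      ln -s 'stats_print:true' /etc/malloc.conf
```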
* Make cumulative heap profile data optional. (Jason Evans, 2010-10-03; 1 file, -0/+14)

  Add the R option to control whether cumulative heap profile data are maintained. Add the T option to control the size of per thread backtrace caches, primarily because when the R option is specified, backtraces that no longer have allocations associated with them are discarded as soon as no thread caches refer to them.
* Fix P/p reporting in stats_print(). (Jason Evans, 2010-04-09; 1 file, -1/+3)

  Now that JEMALLOC_OPTIONS=P isn't the only way to cause stats_print() to be called, opt_stats_print must actually be checked when reporting the state of the P/p option.
* Report E/e option state in jemalloc_stats_print(). (Jason Evans, 2010-04-06; 1 file, -1/+4)
* Make interval-triggered profile dumping optional. (Jason Evans, 2010-04-01; 1 file, -5/+8)

  Make it possible to disable interval-triggered profile dumping, even if profiling is enabled. This is useful if the user only wants a single dump at exit, or if the application manually triggers profile dumps.
* Remove medium size classes. (Jason Evans, 2010-03-17; 1 file, -39/+29)

  Remove medium size classes, because concurrent dirty page purging is no longer capable of purging inactive dirty pages inside active runs (due to recent arena/bin locking changes). Enhance tcache to support caching large objects, so that the same range of size classes is still cached, despite the removal of medium size class support.
* Widen malloc_stats_print() output columns. (Jason Evans, 2010-03-15; 1 file, -14/+15)
* Change xmallctl() --> CTL_GET() where possible. (Jason Evans, 2010-03-15; 1 file, -3/+3)
* Push locks into arena bins. (Jason Evans, 2010-03-15; 1 file, -28/+43)

  For bin-related allocation, protect data structures with bin locks rather than arena locks. Arena locks remain for run allocation/deallocation and other miscellaneous operations. Restructure statistics counters to maintain per bin allocated/nmalloc/ndalloc, but continue to provide arena-wide statistics via aggregation in the ctl code.
* Simplify tcache object caching. (Jason Evans, 2010-03-14; 1 file, -13/+8)

  Use chains of cached objects, rather than using arrays of pointers. Since tcache_bin_t is no longer dynamically sized, convert tcache_t's tbin to an array of structures, rather than an array of pointers. This implicitly removes tcache_bin_{create,destroy}(), which further simplifies the fast path for malloc/free. Use cacheline alignment for tcache_t allocations. Remove runtime configuration option for number of tcache bin slots, and replace it with a boolean option for enabling/disabling tcache. Limit the number of tcache objects to the lesser of TCACHE_NSLOTS_MAX and 2X the number of regions per run for the size class. For GC-triggered flush, discard 3/4 of the objects below the low water mark, rather than 1/2.
* Print version in malloc_stats_print(). (Jason Evans, 2010-03-04; 1 file, -0/+5)
* Simplify malloc_message(). (Jason Evans, 2010-03-04; 1 file, -131/+162)

  Rather than passing four strings to malloc_message(), malloc_write4(), and all the functions that use them, only pass one string.
* Implement sampling for heap profiling. (Jason Evans, 2010-03-02; 1 file, -5/+10)
* Wrap mallctl* references with JEMALLOC_P(). (Jason Evans, 2010-02-11; 1 file, -19/+28)
* Restructure source tree. (Jason Evans, 2010-02-11; 1 file, -0/+658)