path: root/jemalloc/src/ctl.c
Commit log (most recent first; each entry: message, author, date, files changed, lines -removed/+added)
* Add the "stats.cactive" mallctl. (Jason Evans, 2011-03-19; 1 file, -0/+3)
  Add the "stats.cactive" mallctl, which can be used to efficiently and repeatedly query approximately how much active memory the application is utilizing.
* Improve thread-->arena assignment. (Jason Evans, 2011-03-18; 1 file, -0/+13)
  Rather than blindly assigning threads to arenas in round-robin fashion, choose the lowest-numbered arena that currently has the smallest number of threads assigned to it. Add the "stats.arenas.<i>.nthreads" mallctl.
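The selection policy described above can be sketched in stand-alone C. choose_arena() and nthreads[] are illustrative stand-ins, not jemalloc's actual data structures; the per-arena thread counts correspond to what "stats.arenas.<i>.nthreads" reports.

```c
/* Pick the lowest-numbered arena with the fewest assigned threads,
 * instead of blind round-robin assignment. */
static unsigned choose_arena(const unsigned *nthreads, unsigned narenas) {
    unsigned choose = 0;
    for (unsigned i = 1; i < narenas; i++) {
        if (nthreads[i] < nthreads[choose])
            choose = i;  /* strictly fewer threads: switch arenas */
    }
    return choose;       /* ties keep the lowest-numbered arena */
}
```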
* Create arena_bin_info_t. (Jason Evans, 2011-03-15; 1 file, -3/+3)
  Move read-only fields from arena_bin_t into arena_bin_info_t, primarily in order to avoid false cacheline sharing.
* Fix a "thread.arena" mallctl bug. (Jason Evans, 2011-03-14; 1 file, -2/+2)
  Fix a variable reversal bug in mallctl("thread.arena", ...).
* Fix "thread.{de,}allocatedp" mallctl. (Jason Evans, 2011-02-14; 1 file, -2/+2)
  For the non-TLS case (as on OS X), if the "thread.{de,}allocatedp" mallctl was called before any allocation occurred for that thread, the TSD was still NULL, thus putting the application at risk of dereferencing NULL. Fix this by refactoring the initialization code, and making it part of the conditional logic for all per thread allocation counter accesses.
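The hazard and its fix can be illustrated with a minimal stand-alone sketch of the lazy-initialization pattern (thread_counters_get() and thread_counters_t are hypothetical names, not jemalloc's real TSD machinery):

```c
#include <stddef.h>
#include <stdint.h>

/* Per-thread allocation counters, lazily initialized. In the buggy
 * version, the "...allocatedp" path could hand out a pointer before
 * any allocation had initialized the TSD, so callers could
 * dereference NULL. The fix: every access path funnels through an
 * initializer first, so a valid pointer is always returned. */
typedef struct { uint64_t allocated, deallocated; } thread_counters_t;

static _Thread_local thread_counters_t *counters = NULL;
static _Thread_local thread_counters_t storage;

static thread_counters_t *thread_counters_get(void) {
    if (counters == NULL) {
        storage.allocated = 0;
        storage.deallocated = 0;
        counters = &storage;  /* initialize on first touch */
    }
    return counters;
}
```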
* Fix a "thread.arena" mallctl bug. (Jason Evans, 2010-12-29; 1 file, -0/+5)
  When setting a new arena association for the calling thread, also update the tcache's cached arena pointer, primarily so that tcache_alloc_small_hard() uses the intended arena.
* Add the "thread.[de]allocatedp" mallctl's. (Jason Evans, 2010-12-03; 1 file, -1/+7)
* Push down ctl_mtx. (Jason Evans, 2010-11-24; 1 file, -74/+124)
  Many mallctl*() end points require no locking, so push the locking down to just the functions that need it. This is of particular import for "thread.allocated" and "thread.deallocated", which are intended as a low-overhead way to introspect per thread allocation activity.
* Replace JEMALLOC_OPTIONS with MALLOC_CONF. (Jason Evans, 2010-10-24; 1 file, -31/+36)
  Replace the single-character run-time flags with key/value pairs, which can be set via the malloc_conf global, /etc/malloc.conf, and the MALLOC_CONF environment variable. Replace the JEMALLOC_PROF_PREFIX environment variable with the "opt.prof_prefix" option. Replace umax2s() with u2s().
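Of the three mechanisms, the compiled-in one is the malloc_conf global, which the application simply defines; jemalloc reads it at startup. The same "key:value,key:value" string works in the MALLOC_CONF environment variable and as the target of the /etc/malloc.conf symlink. The option names below are examples; the exact set varies by jemalloc version.

```c
/* Application-provided jemalloc configuration: use 4 arenas, and
 * prefix heap-profile dump filenames with "jeprof.out". jemalloc
 * picks this global up at startup if it is linked in. */
const char *malloc_conf = "narenas:4,prof_prefix:jeprof.out";
```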
* Add per thread allocation counters, and enhance heap sampling. (Jason Evans, 2010-10-21; 1 file, -0/+14)
  Add the "thread.allocated" and "thread.deallocated" mallctls, which can be used to query the total number of bytes ever allocated/deallocated by the calling thread. Add s2u() and sa2u(), which can be used to compute the usable size that will result from an allocation request of a particular size/alignment. Re-factor ipalloc() to use sa2u(). Enhance the heap profiler to trigger samples based on usable size, rather than request size. This has a subtle, but important, impact on the accuracy of heap sampling. For example, previous to this change, 16- and 17-byte objects were sampled at nearly the same rate, but 17-byte objects actually consume 32 bytes each. Therefore it was possible for the sample to be somewhat skewed compared to actual memory usage of the allocated objects.
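The 16-versus-17-byte example follows from size-class rounding. A minimal sketch of the idea behind s2u(), assuming 16-byte-spaced small size classes (the real s2u() also handles tiny, large, and huge classes):

```c
#include <stddef.h>

/* Round a request up to its usable size, assuming small size classes
 * spaced 16 bytes apart. A 17-byte request really consumes 32 bytes,
 * which is why sampling by usable size is more accurate than sampling
 * by request size. */
static size_t s2u_sketch(size_t size) {
    const size_t quantum = 16;
    return (size + quantum - 1) & ~(quantum - 1);
}
```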
* Move variable declaration out of for loop header. (Jason Evans, 2010-10-07; 1 file, -1/+2)
  Move a loop variable declaration out of for (unsigned i = 0; ...) in order to avoid the need for C99 compilation.
* Make cumulative heap profile data optional. (Jason Evans, 2010-10-03; 1 file, -0/+6)
  Add the R option to control whether cumulative heap profile data are maintained. Add the T option to control the size of per thread backtrace caches, primarily because when the R option is specified, backtraces that no longer have allocations associated with them are discarded as soon as no thread caches refer to them.
* Add the "arenas.purge" mallctl. (Jason Evans, 2010-09-30; 1 file, -1/+40)
* Port to Mac OS X. (Jason Evans, 2010-09-12; 1 file, -11/+3)
  Add Mac OS X support, based in large part on the OS X support in Mozilla's version of jemalloc.
* Add the thread.arena mallctl. (Jason Evans, 2010-08-14; 1 file, -0/+52)
  Make it possible for each thread to manage which arena it is associated with. Implement the 'tests' and 'check' build targets.
* Add sampling activation/deactivation control. (Jason Evans, 2010-04-01; 1 file, -0/+29)
  Add the E/e options to control whether the application starts with sampling active/inactive (secondary control to F/f). Add the prof.active mallctl so that the application can activate/deactivate sampling on the fly.
* Make interval-triggered profile dumping optional. (Jason Evans, 2010-04-01; 1 file, -1/+1)
  Make it possible to disable interval-triggered profile dumping, even if profiling is enabled. This is useful if the user only wants a single dump at exit, or if the application manually triggers profile dumps.
* Remove medium size classes. (Jason Evans, 2010-03-17; 1 file, -54/+35)
  Remove medium size classes, because concurrent dirty page purging is no longer capable of purging inactive dirty pages inside active runs (due to recent arena/bin locking changes). Enhance tcache to support caching large objects, so that the same range of size classes is still cached, despite the removal of medium size class support.
* Push locks into arena bins. (Jason Evans, 2010-03-15; 1 file, -16/+120)
  For bin-related allocation, protect data structures with bin locks rather than arena locks. Arena locks remain for run allocation/deallocation and other miscellaneous operations. Restructure statistics counters to maintain per bin allocated/nmalloc/ndalloc, but continue to provide arena-wide statistics via aggregation in the ctl code.
* Simplify tcache object caching. (Jason Evans, 2010-03-14; 1 file, -3/+3)
  Use chains of cached objects, rather than using arrays of pointers. Since tcache_bin_t is no longer dynamically sized, convert tcache_t's tbin to an array of structures, rather than an array of pointers. This implicitly removes tcache_bin_{create,destroy}(), which further simplifies the fast path for malloc/free. Use cacheline alignment for tcache_t allocations. Remove runtime configuration option for number of tcache bin slots, and replace it with a boolean option for enabling/disabling tcache. Limit the number of tcache objects to the lesser of TCACHE_NSLOTS_MAX and 2X the number of regions per run for the size class. For GC-triggered flush, discard 3/4 of the objects below the low water mark, rather than 1/2.
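"Chains of cached objects" means each freed object's first word links to the next, so a bin needs only a head pointer and a count rather than an array of pointer slots. A stand-alone sketch of that freelist shape (tbin_sketch_t and the helper names are illustrative, not jemalloc's real types):

```c
#include <stddef.h>

/* Cached objects are chained through their own first word. */
typedef struct cached_obj { struct cached_obj *next; } cached_obj_t;
typedef struct { cached_obj_t *head; unsigned ncached; } tbin_sketch_t;

/* Cache a freed object: push it onto the chain. */
static void tbin_push(tbin_sketch_t *tbin, void *obj) {
    cached_obj_t *o = obj;
    o->next = tbin->head;
    tbin->head = o;
    tbin->ncached++;
}

/* Satisfy an allocation from the cache, or return NULL if empty. */
static void *tbin_pop(tbin_sketch_t *tbin) {
    cached_obj_t *o = tbin->head;
    if (o == NULL)
        return NULL;
    tbin->head = o->next;
    tbin->ncached--;
    return o;
}
```

Because the link lives inside the object itself, the cache costs no memory beyond the objects it holds, and push/pop are a handful of instructions on the malloc/free fast path.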
* Add release versioning support. (tag: 0.0.0; Jason Evans, 2010-03-02; 1 file, -0/+4)
  Base version string on 'git describe --long', and provide cpp macros in jemalloc.h. Add the version mallctl.
* Allow prof.dump mallctl to specify filename. (Jason Evans, 2010-03-02; 1 file, -4/+13)
* Implement sampling for heap profiling. (Jason Evans, 2010-03-02; 1 file, -0/+3)
* Restructure source tree. (Jason Evans, 2010-02-11; 1 file, -0/+1352)