path: root/include/jemalloc/internal/ctl.h
* Split up and standardize naming of stats code. (David T. Goldblatt, 2017-12-19; 1 file, -2/+2)
    The arena-associated stats are now all prefixed with arena_stats_, and
    live in their own file. Likewise, malloc_bin_stats_t -> bin_stats_t,
    also in its own file.
* Add stats for metadata_thp. (Qi Wang, 2017-08-30; 1 file, -0/+1)
    Report number of THPs used in arena and aggregated stats.
* Switch ctl to explicitly use tsd instead of tsdn. (Qi Wang, 2017-06-23; 1 file, -2/+1)
* Add background thread related stats. (Qi Wang, 2017-05-23; 1 file, -0/+1)
* Refactor *decay_time into *decay_ms. (Jason Evans, 2017-05-18; 1 file, -2/+2)
    Support millisecond resolution for decay times. Among other use cases,
    this makes it possible to specify a short initial dirty-->muzzy decay
    phase, followed by a longer muzzy-->clean decay phase. This resolves
    #812.
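For illustration, setting the renamed controls through jemalloc's public
mallctl() interface might look like the following minimal sketch (assuming
the jemalloc 5.x names arenas.dirty_decay_ms and arenas.muzzy_decay_ms
that this rename leads to):

    #include <sys/types.h>
    #include <jemalloc/jemalloc.h>

    /* Short dirty->muzzy phase, longer muzzy->clean phase. */
    int set_decay_ms(void) {
        ssize_t dirty_ms = 100;    /* 100 ms */
        ssize_t muzzy_ms = 10000;  /* 10 s */
        if (mallctl("arenas.dirty_decay_ms", NULL, NULL, &dirty_ms,
            sizeof(dirty_ms)) != 0) {
            return 1;
        }
        return mallctl("arenas.muzzy_decay_ms", NULL, NULL, &muzzy_ms,
            sizeof(muzzy_ms));
    }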
* Refactor (MALLOCX_ARENA_MAX + 1) to be MALLOCX_ARENA_LIMIT. (Jason Evans, 2017-05-14; 1 file, -2/+2)
    This resolves #673.
* Header refactoring: ctl - unify and remove from catchall. (David Goldblatt, 2017-04-25; 1 file, -0/+130)
    In order to do this, we introduce the mutex_prof module, which breaks
    a circular dependency between ctl and prof.
* Break up headers into constituent parts. (David Goldblatt, 2017-01-12; 1 file, -127/+0)
    This is part of a broader change to make header files better represent
    the dependencies between one another (see
    https://github.com/jemalloc/jemalloc/issues/533). It breaks up
    component headers into smaller parts that can be made to have a
    simpler dependency graph. For the autogenerated headers (smoothstep.h
    and size_classes.h), no splitting was necessary, so I didn't add
    support to emit multiple headers.
* Implement arena.<i>.destroy. (Jason Evans, 2017-01-07; 1 file, -1/+11)
    Add MALLCTL_ARENAS_DESTROYED for accessing destroyed arena stats as an
    analogue to MALLCTL_ARENAS_ALL. This resolves #382.
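For illustration, a create/destroy round trip driven through mallctl()
might look like this sketch (assuming the jemalloc 5.x arenas.create
mallctl as the way to obtain a fresh arena index):

    #include <stdio.h>
    #include <jemalloc/jemalloc.h>

    int arena_roundtrip(void) {
        unsigned ind;
        size_t sz = sizeof(ind);
        char cmd[64];

        if (mallctl("arenas.create", &ind, &sz, NULL, 0) != 0) {
            return 1;
        }
        /* ... allocate via mallocx(size, MALLOCX_ARENA(ind)) ... */
        snprintf(cmd, sizeof(cmd), "arena.%u.destroy", ind);
        return mallctl(cmd, NULL, NULL, NULL, 0);
    }

Stats for destroyed arenas can then be read under the stats.arenas.<i>
namespace with MALLCTL_ARENAS_DESTROYED substituted for <i>, analogously
to MALLCTL_ARENAS_ALL.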
* Range-check mib[1] --> arena_ind casts. (Jason Evans, 2017-01-07; 1 file, -1/+1)
* Move static ctl_epoch variable into ctl_stats_t (as epoch). (Jason Evans, 2017-01-07; 1 file, -0/+1)
* Refactor ctl_stats_t. (Jason Evans, 2017-01-07; 1 file, -1/+1)
    Refactor ctl_stats_t to be a demand-zeroed non-growing data structure.
    To keep the size from being onerous (~60 MiB) on 32-bit systems,
    convert the arenas field to contain pointers rather than directly
    embedded ctl_arena_stats_t elements.
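A hypothetical sketch of the size tradeoff (names and sizes here are
illustrative, not jemalloc's actual definitions):

    #define ARENAS_MAX 4096

    typedef struct ctl_arena_stats_s {
        char payload[16 * 1024];  /* stand-in for the real stats fields */
    } ctl_arena_stats_t;

    /* Embedded elements: the table alone costs ARENAS_MAX * 16 KiB,
     * i.e. ~64 MiB, whether or not the arenas exist. */
    ctl_arena_stats_t arenas_embedded[ARENAS_MAX];

    /* Pointers: the table costs ARENAS_MAX * sizeof(void *) (16-32 KiB);
     * each ctl_arena_stats_t is allocated only when its arena exists. */
    ctl_arena_stats_t *arenas_ptrs[ARENAS_MAX];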
* Remove ratio-based purging. (Jason Evans, 2016-10-12; 1 file, -1/+0)
    Make decay-based purging the default (and only) mode. Remove
    associated mallctls:
    - opt.purge
    - opt.lg_dirty_mult
    - arena.<i>.lg_dirty_mult
    - arenas.lg_dirty_mult
    - stats.arenas.<i>.lg_dirty_mult
    This resolves #385.
* Rename huge to large. (Jason Evans, 2016-06-06; 1 file, -1/+1)
* Use huge size class infrastructure for large size classes. (Jason Evans, 2016-06-06; 1 file, -2/+1)
* Resolve bootstrapping issues when embedded in FreeBSD libc. (Jason Evans, 2016-05-11; 1 file, -5/+5)
    b2c0d6322d2307458ae2b28545f8a5c9903d7ef5 (Add witness, a simple online
    locking validator.) caused a broad propagation of tsd throughout the
    internal API, but tsd_fetch() was designed to fail prior to tsd
    bootstrapping. Fix this by splitting tsd_t into non-nullable tsd_t and
    nullable tsdn_t, and modifying all internal APIs that do not
    critically rely on tsd to take nullable pointers. Furthermore, add the
    tsd_booted_get() function so that tsdn_fetch() can probe whether tsd
    bootstrapping is complete and return NULL if not. All dangerous
    conversions of nullable pointers are tsdn_tsd() calls that assert-fail
    on invalid conversion.
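A hypothetical sketch of the nullable-handle pattern described here (the
real jemalloc types and tsd_booted_get() carry much more machinery):

    #include <assert.h>
    #include <stdbool.h>
    #include <stddef.h>

    typedef struct tsd_s tsd_t;   /* non-nullable: tsd is booted */
    typedef struct tsd_s tsdn_t;  /* nullable: may be NULL pre-boot */

    extern bool tsd_booted;       /* set once bootstrapping completes */
    extern tsd_t *tsd_fetch(void);

    /* Probe instead of failing before tsd is bootstrapped. */
    static tsdn_t *tsdn_fetch(void) {
        return tsd_booted ? (tsdn_t *)tsd_fetch() : NULL;
    }

    /* The one sanctioned nullable -> non-nullable conversion. */
    static tsd_t *tsdn_tsd(tsdn_t *tsdn) {
        assert(tsdn != NULL);
        return (tsd_t *)tsdn;
    }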
* Add the stats.retained and stats.arenas.<i>.retained statistics. (Jason Evans, 2016-05-04; 1 file, -0/+1)
    This resolves #367.
* Add witness, a simple online locking validator. (Jason Evans, 2016-04-14; 1 file, -11/+13)
    This resolves #358.
* Fix stats.arenas.<i>.[...] for --disable-stats case. (Jason Evans, 2016-02-28; 1 file, -0/+3)
    Add missing stats.arenas.<i>.{dss,lg_dirty_mult,decay_time}
    initialization. Fix stats.arenas.<i>.{pactive,pdirty} to read under
    the protection of the arena mutex.
* Implement decay-based unused dirty page purging. (Jason Evans, 2016-02-20; 1 file, -0/+1)
    This is an alternative to the existing ratio-based unused dirty page
    purging, and is intended to eventually become the sole purging
    mechanism. Add mallctls:
    - opt.purge
    - opt.decay_time
    - arena.<i>.decay
    - arena.<i>.decay_time
    - arenas.decay_time
    - stats.arenas.<i>.decay_time
    This resolves #325.
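For illustration, configuring and triggering decay through mallctl()
might look like this sketch (using the pre-rename *decay_time names this
commit adds; the *decay_ms refactor higher up the log renames them):

    #include <sys/types.h>
    #include <jemalloc/jemalloc.h>

    int decay_arena0(void) {
        ssize_t decay_time = 10;  /* seconds; -1 disables purging */
        if (mallctl("arenas.decay_time", NULL, NULL, &decay_time,
            sizeof(decay_time)) != 0) {
            return 1;
        }
        /* arena.<i>.decay is a void mallctl: all-NULL arguments. */
        return mallctl("arena.0.decay", NULL, NULL, NULL, 0);
    }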
* Add the "stats.arenas.<i>.lg_dirty_mult" mallctl.Jason Evans2015-03-241-0/+1
|
* Add the "stats.allocated" mallctl.Jason Evans2015-03-241-0/+1
|
* Move centralized chunk management into arenas. (Jason Evans, 2015-02-12; 1 file, -5/+0)
    Migrate all centralized data structures related to huge allocations
    and recyclable chunks into arena_t, so that each arena can manage huge
    allocations and recyclable virtual memory completely independently of
    other arenas.

    Add chunk node caching to arenas, in order to avoid contention on the
    base allocator.

    Use chunks_rtree to look up huge allocations rather than a red-black
    tree. Maintain a per arena unsorted list of huge allocations (which
    will be needed to enumerate huge allocations during arena reset).

    Remove the --enable-ivsalloc option, make ivsalloc() always available,
    and use it for size queries if --enable-debug is enabled. The only
    practical implications to this removal are that 1) ivsalloc() is now
    always available during live debugging (and the underlying radix tree
    is available during core-based debugging), and 2) size query
    validation can no longer be enabled independent of --enable-debug.

    Remove the stats.chunks.{current,total,high} mallctls, and replace
    their underlying statistics with simpler atomically updated counters
    used exclusively for gdump triggering. These statistics are no longer
    very useful because each arena manages chunks independently, and per
    arena statistics provide similar information.

    Simplify chunk synchronization code, now that base chunk allocation
    cannot cause recursive lock acquisition.
* Implement metadata statistics. (Jason Evans, 2015-01-24; 1 file, -0/+1)
    There are three categories of metadata:
    - Base allocations are used for bootstrap-sensitive internal allocator
      data structures.
    - Arena chunk headers comprise pages which track the states of the
      non-metadata pages.
    - Internal allocations differ from application-originated allocations
      in that they are for internal use, and that they are omitted from
      heap profiles.
    The metadata statistics comprise the metadata categories as follows:
    - stats.metadata: All metadata -- base + arena chunk headers +
      internal allocations.
    - stats.arenas.<i>.metadata.mapped: Arena chunk headers.
    - stats.arenas.<i>.metadata.allocated: Internal allocations. This is
      reported separately from the other metadata statistics because it
      overlaps with the allocated and active statistics, whereas the other
      metadata statistics do not.
    Base allocations are not reported separately, though their magnitude
    can be computed by subtracting the arena-specific metadata. This
    resolves #163.
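For illustration, reading the aggregate counter via mallctl(): statistics
are cached, so an "epoch" write is needed first to refresh them (this
usage pattern follows the jemalloc manual):

    #include <stdint.h>
    #include <stdio.h>
    #include <jemalloc/jemalloc.h>

    int print_metadata(void) {
        uint64_t epoch = 1;
        size_t sz = sizeof(epoch);
        size_t metadata;

        /* Refresh cached statistics. */
        if (mallctl("epoch", &epoch, &sz, &epoch, sz) != 0) {
            return 1;
        }
        sz = sizeof(metadata);
        if (mallctl("stats.metadata", &metadata, &sz, NULL, 0) != 0) {
            return 1;
        }
        printf("stats.metadata: %zu bytes\n", metadata);
        return 0;
    }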
* Add per size class huge allocation statistics. (Jason Evans, 2014-10-13; 1 file, -0/+1)
    Add per size class huge allocation statistics, and normalize various
    stats:
    - Change the arenas.nlruns type from size_t to unsigned.
    - Add the arenas.nhchunks and arenas.hchunks.<i>.size mallctls.
    - Replace the stats.arenas.<i>.bins.<j>.allocated mallctl with
      stats.arenas.<i>.bins.<j>.curregs.
    - Add the stats.arenas.<i>.hchunks.<j>.nmalloc,
      stats.arenas.<i>.hchunks.<j>.ndalloc,
      stats.arenas.<i>.hchunks.<j>.nrequests, and
      stats.arenas.<i>.hchunks.<j>.curhchunks mallctls.
* Refactor huge allocation to be managed by arenas. (Jason Evans, 2014-05-16; 1 file, -5/+0)
    Refactor huge allocation to be managed by arenas (though the global
    red-black tree of huge allocations remains for lookup during
    deallocation). This is the logical conclusion of recent changes that
    1) made per arena dss precedence apply to huge allocation, and 2) made
    it possible to replace the per arena chunk allocation/deallocation
    functions.

    Remove the top level huge stats, and replace them with per arena huge
    stats.

    Normalize function names and types to *dalloc* (some were *dealloc*).

    Remove the --enable-mremap option. As jemalloc currently operates,
    this is a performance regression for some applications, but planned
    work to logarithmically space huge size classes should provide similar
    amortized performance. The motivation for this change was that
    mremap-based huge reallocation forced leaky abstractions that
    prevented refactoring.
* Add arena-specific and selective dss allocation. (Jason Evans, 2012-10-13; 1 file, -0/+2)
    Add the "arenas.extend" mallctl, so that it is possible to create new
    arenas that are outside the set that jemalloc automatically
    multiplexes threads onto.

    Add the ALLOCM_ARENA() flag for {,r,d}allocm(), so that it is possible
    to explicitly allocate from a particular arena.

    Add the "opt.dss" mallctl, which controls the default precedence of
    dss allocation relative to mmap allocation.

    Add the "arena.<i>.dss" mallctl, which makes it possible to set the
    default dss precedence on a per arena or global basis.

    Add the "arena.<i>.purge" mallctl, which obsoletes "arenas.purge".

    Add the "stats.arenas.<i>.dss" mallctl.
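For illustration, combining arenas.extend with ALLOCM_ARENA() under the
experimental API of this era (allocm() was later superseded by mallocx();
this is a sketch, not current usage):

    #include <jemalloc/jemalloc.h>

    int alloc_from_new_arena(void **ptr) {
        unsigned ind;
        size_t sz = sizeof(ind);

        /* Create an arena outside the automatic multiplexing set. */
        if (mallctl("arenas.extend", &ind, &sz, NULL, 0) != 0) {
            return 1;
        }
        /* Explicitly allocate 4 KiB from that arena. */
        return allocm(ptr, NULL, 4096, ALLOCM_ARENA(ind));
    }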
* Fix fork(2)-related deadlocks. (Jason Evans, 2012-10-09; 1 file, -0/+3)
    Add a library constructor for jemalloc that initializes the allocator.
    This fixes a race that could occur if threads were created by the main
    thread prior to any memory allocation, followed by fork(2), and then
    memory allocation in the child process.

    Fix the prefork/postfork functions to acquire/release the ctl, prof,
    and rtree mutexes. This fixes various fork() child process deadlocks,
    but one possible deadlock remains (intentionally) unaddressed: prof
    backtracing can acquire runtime library mutexes, so deadlock is still
    possible if heap profiling is enabled during fork(). This deadlock is
    known to be a real issue in at least the case of libgcc-based
    backtracing.

    Reported by tfengjun.
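A hypothetical sketch of the constructor plus prefork/postfork pattern
described here (the mutex set and names are illustrative; the real
handlers cover every allocator mutex):

    #include <pthread.h>

    static pthread_mutex_t ctl_mtx = PTHREAD_MUTEX_INITIALIZER;

    /* Hold allocator locks across fork() so the child never inherits
     * a mutex frozen in the locked state. */
    static void prefork(void)         { pthread_mutex_lock(&ctl_mtx); }
    static void postfork_parent(void) { pthread_mutex_unlock(&ctl_mtx); }
    static void postfork_child(void)  { pthread_mutex_unlock(&ctl_mtx); }

    static void __attribute__((constructor)) jemalloc_ctor(void) {
        pthread_atfork(prefork, postfork_parent, postfork_child);
    }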
* Fix ctl regression. (Jason Evans, 2012-04-24; 1 file, -6/+6)
    Fix ctl to correctly compute the number of children at each level of
    the ctl tree.
* Avoid using a union for ctl_node_s. (Mike Hommey, 2012-04-23; 1 file, -12/+15)
    MSVC doesn't support C99, and as such doesn't support designated
    initialization of structs and unions. As there is never a mix of
    indexed and named nodes, it is pretty straightforward to use a
    different type for each.
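A sketch of the portability issue (field layout is simplified; the real
ctl node types carry more state):

    #include <stddef.h>

    /* C99 designated initialization of a union member -- what MSVC of
     * this era rejects:
     *
     *     static const ctl_node_t node = {.named = {"epoch"}};
     */

    /* Portable alternative: a distinct struct type per node kind, since
     * named and indexed nodes are never mixed. */
    typedef struct ctl_named_node_s {
        const char *name;
    } ctl_named_node_t;

    typedef struct ctl_indexed_node_s {
        const void *(*index)(size_t i);
    } ctl_indexed_node_t;

    /* Positional initialization works everywhere. */
    static const ctl_named_node_t epoch_node = { "epoch" };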
* Implement malloc_vsnprintf(). (Jason Evans, 2012-03-08; 1 file, -7/+5)
    Implement malloc_vsnprintf() (a subset of vsnprintf(3)) as well as
    several other printing functions based on it, so that formatted
    printing can be relied upon without concern for inducing a dependency
    on floating point runtime support. Replace malloc_write() calls with
    malloc_*printf() where doing so simplifies the code.

    Add name mangling for library-private symbols in the data and BSS
    sections. Adjust CONF_HANDLE_*() macros in malloc_conf_init() to
    expose all opt_* variable use to cpp so that proper mangling occurs.
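For illustration, the motivating usage might look like this sketch
(assuming the internal signatures mirror their libc counterparts, minus
floating point conversions):

    #include <stddef.h>

    /* Internal printing API (declarations as assumed here). */
    size_t malloc_snprintf(char *str, size_t size, const char *format,
        ...);
    void malloc_write(const char *s);

    static void report_allocated(size_t nbytes) {
        char buf[128];
        /* %zu needs no floating point runtime support. */
        malloc_snprintf(buf, sizeof(buf), "allocated: %zu\n", nbytes);
        malloc_write(buf);
    }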
* Add --with-mangling. (Jason Evans, 2012-03-02; 1 file, -3/+3)
    Add the --with-mangling configure option, which can be used to specify
    name mangling on a per public symbol basis that takes precedence over
    --with-jemalloc-prefix.

    Expose the memalign() and valloc() overrides even if
    --with-jemalloc-prefix is specified. This change does no real harm,
    and simplifies the code.
* Simplify small size class infrastructure. (Jason Evans, 2012-02-29; 1 file, -1/+1)
    Program-generate small size class tables for all valid combinations of
    LG_TINY_MIN, LG_QUANTUM, and PAGE_SHIFT. Use the appropriate table to
    generate all relevant data structures, and remove the distinction
    between tiny/quantum/cacheline/subpage bins.

    Remove --enable-dynamic-page-shift. This option didn't prove useful in
    practice, and it prevented optimizations.

    Add Tilera architecture support.
* Remove the swap feature. (Jason Evans, 2012-02-13; 1 file, -1/+0)
    Remove the swap feature, which enabled per application swap files. In
    practice this feature has not proven itself useful to users.
* Reduce cpp conditional logic complexity. (Jason Evans, 2012-02-11; 1 file, -6/+0)
    Convert configuration-related cpp conditional logic to use static
    constant variables, e.g.:

        #ifdef JEMALLOC_DEBUG
            [...]
        #endif

    becomes:

        if (config_debug) {
            [...]
        }

    The advantage is clearer, more concise code. The main disadvantage is
    that data structures no longer have conditionally defined fields, so
    they pay the cost of all fields regardless of whether they are used.
    In practice, this is only a minor concern; config_stats will go away
    in an upcoming change, and config_prof is the only other major feature
    that depends on more than a few special-purpose fields.
* Move repo contents in jemalloc/ to top level. (Jason Evans, 2011-04-01; 1 file, -0/+118)