path: root/src/ctl.c
Commit message | Author | Age | Files | Lines
* Rename huge_threshold to oversize_threshold. (Qi Wang, 2019-01-25, 1 file, -3/+3)
  The keyword "huge" tends to remind people of huge pages, which is not relevant to this feature.
* Un-experimental the huge_threshold feature. (Qi Wang, 2019-01-16, 1 file, -1/+1)
* Avoid creating bg thds for the huge arena alone. (Qi Wang, 2019-01-16, 1 file, -0/+11)
  For low arena count settings, the huge threshold feature may trigger an unwanted bg thd creation. Given that the huge arena does eager purging by default, bypass bg thd creation when initializing the huge arena.
* Add stats for arenas.bin.i.nshards. (Qi Wang, 2018-12-04, 1 file, -1/+4)
* Add support for sharded bins within an arena. (Qi Wang, 2018-12-04, 1 file, -2/+4)
  This makes it possible to have multiple sets of bins in an arena, which improves arena scalability because the bins (especially the small ones) are always the limiting factor in production workloads. A bin shard is picked on allocation; each extent tracks the bin shard id for deallocation. The shard size will be determined using runtime options.
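  The sharding scheme described above can be sketched as follows. This is a minimal self-contained illustration, not jemalloc's actual code; the names, the shard count, and the hash are all hypothetical:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical shard count; in jemalloc this comes from a runtime option. */
#define N_BIN_SHARDS 4

typedef struct bin_s {
    size_t nmalloc; /* allocations served by this shard */
} bin_t;

typedef struct bin_set_s {
    bin_t shards[N_BIN_SHARDS]; /* multiple bins for one size class */
} bin_set_t;

/* A cheap, deterministic hash of the thread id picks the shard on
 * allocation, spreading lock contention across shards. */
static inline unsigned
pick_bin_shard(uint64_t thread_id, unsigned n_shards) {
    /* Fibonacci hashing; any cheap mixer would do for this sketch. */
    return (unsigned)((thread_id * 11400714819323198485ULL) >> 32) % n_shards;
}

/* On allocation, the chosen shard id would also be recorded in the owning
 * extent, so deallocation returns memory to the same shard. */
static bin_t *
bin_shard_get(bin_set_t *set, uint64_t thread_id) {
    unsigned shard = pick_bin_shard(thread_id, N_BIN_SHARDS);
    set->shards[shard].nmalloc++;
    return &set->shards[shard];
}
```

  The key property is that the shard choice is cheap and deterministic per thread, so two threads usually contend on different shards.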
* Add stats for the size of the extent_avail heap. (Tyler Etzel, 2018-08-02, 1 file, -0/+8)
* Add extents information to mallocstats output. (Tyler Etzel, 2018-08-02, 1 file, -2/+80)
  - Show the number and bytes of extents of each size that are dirty, muzzy, or retained.
* Add logging for sampled allocations. (Tyler Etzel, 2018-08-01, 1 file, -1/+43)
  - prof_opt_log flag starts logging automatically at runtime.
  - prof_log_{start,stop} mallctl for manual control.
* Hide size class computation behind a layer of indirection. (David Goldblatt, 2018-07-13, 1 file, -14/+14)
  This change removes almost all the dependencies on size_classes.h, accessing the data there only via the new module sc.h, which does not depend on any configuration options. In a subsequent commit, we'll remove the configure-time size class computations, doing them at boot time instead.
* Clean compilation with -Wextra. (gnzlbg, 2018-07-10, 1 file, -61/+75)
  Before this commit jemalloc produced many warnings when compiled with -Wextra with both Clang and GCC. This commit fixes the issues raised by these warnings, or suppresses them if they were spurious, at least for the Clang and GCC versions covered by CI. This commit:
  * adds `JEMALLOC_DIAGNOSTIC` macros: `JEMALLOC_DIAGNOSTIC_{PUSH,POP}` are used to modify the stack of enabled diagnostics. The `JEMALLOC_DIAGNOSTIC_IGNORE_...` macros are used to ignore a concrete diagnostic.
  * adds a `JEMALLOC_FALLTHROUGH` macro to explicitly state that falling through `case` labels in a `switch` statement is intended.
  * removes all UNUSED annotations on function parameters. The warning -Wunused-parameter is now disabled globally in `jemalloc_internal_macros.h` for all translation units that include that header. It is never re-enabled, since that header cannot be included by users.
  * locally suppresses some -Wextra diagnostics:
    * `-Wmissing-field-initializers` is buggy in older Clang and GCC versions, which do not understand that, in C, `= {0}` is a common idiom to initialize a struct to zero.
    * `-Wtype-limits` is suppressed in a particular situation where a generic macro, used in multiple different places, compares an unsigned integer for smaller than zero, which is always false.
    * `-Walloc-size-larger-than=` diagnostics warn when an allocation function is called with a size that is too large (out-of-range). These are suppressed in the parts of the tests where `jemalloc` explicitly does this to test that the allocation functions fail properly.
  * adds a new CI build bot that runs the log unit test on CI.
  Closes #1196.
* Rename huge_threshold to experimental, and tweak documentation. (Qi Wang, 2018-06-29, 1 file, -1/+1)
* Add ctl and stats for opt.huge_threshold. (Qi Wang, 2018-06-29, 1 file, -0/+3)
* Fall back to the default pthread_create if RTLD_NEXT fails. (Qi Wang, 2018-06-28, 1 file, -14/+0)
* Mallctl: Add experimental.hooks.[install|remove]. (David Goldblatt, 2018-05-18, 1 file, -1/+58)
* Fix background thread index issues with max_background_threads. (Qi Wang, 2018-05-15, 1 file, -4/+2)
* Mallctl: Add arenas.lookup. (Latchesar Ionkov, 2018-05-01, 1 file, -1/+33)
  Implement a new mallctl operation that allows looking up the arena that a region of memory belongs to.
* Allow setting extent hooks on uninitialized auto arenas. (Qi Wang, 2018-04-12, 1 file, -12/+33)
  Setting extent hooks can now trigger initialization of an unused auto arena. This is useful for installing extent hooks on auto arenas from the beginning.
* background_thread: add max thread count config. (Dave Watson, 2018-04-10, 1 file, -0/+70)
  Looking at the thread counts in our services, jemalloc's background thread is useful, but mostly idle. Add a config option to tune down the number of threads.
* Add opt.thp which allows explicit hugepage usage. (Qi Wang, 2018-03-08, 1 file, -0/+3)
  "always" marks all user mappings as MADV_HUGEPAGE, while "never" marks all mappings as MADV_NOHUGEPAGE. The default setting, "default", does not change any settings. Note that all the madvise calls are part of the default extent hooks by design, so that customized extent hooks have complete control over the mappings, including hugepage settings.
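  The three modes above map naturally onto madvise(2) advice values. A hedged sketch of that mapping (illustrative only; jemalloc performs this inside its default extent hooks, and `thp_mode_to_advice` is a made-up name):

```c
#define _DEFAULT_SOURCE
#include <string.h>
#include <sys/mman.h> /* MADV_HUGEPAGE / MADV_NOHUGEPAGE (Linux) */

/* Return the MADV_* advice to apply to a user mapping for a given opt.thp
 * mode, or -1 meaning "do not call madvise at all". */
static int
thp_mode_to_advice(const char *mode) {
    if (strcmp(mode, "always") == 0) {
        return MADV_HUGEPAGE;   /* mark mappings as THP-eligible */
    }
    if (strcmp(mode, "never") == 0) {
        return MADV_NOHUGEPAGE; /* opt mappings out of THP */
    }
    return -1;                  /* "default": leave kernel policy alone */
}
```

  Keeping the madvise calls inside the default extent hooks means replacing the hooks replaces the hugepage policy too, which is the design point the commit message makes.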
* Remove config.thp, which wasn't in use. (Qi Wang, 2018-03-08, 1 file, -3/+0)
* Split up and standardize naming of stats code. (David T. Goldblatt, 2017-12-19, 1 file, -39/+43)
  The arena-associated stats are now all prefixed with arena_stats_, and live in their own file. Likewise, malloc_bin_stats_t -> bin_stats_t, also in its own file.
* Pull out arena_bin_info_t and arena_bin_t into their own file. (David T. Goldblatt, 2017-12-19, 1 file, -4/+4)
  In the process, kill arena_bin_index, which is unused. Several diffs continuing this separation will follow.
* Add opt.lg_extent_max_active_fit. (Qi Wang, 2017-11-16, 1 file, -0/+4)
  When allocating from dirty extents (which we always prefer if available), large active extents can get split even if the new allocation is much smaller, in which case the introduced fragmentation causes high long-term damage. This new option controls the threshold for reusing and splitting an existing active extent: we avoid using a large extent for much smaller sizes, in order to reduce fragmentation. In some workloads, adding the threshold improves virtual memory usage by >10x.
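  The threshold test itself is a one-liner. A minimal sketch, assuming the option bounds the extent/request size ratio to a power of two (the function name is illustrative, not jemalloc's):

```c
#include <stdbool.h>
#include <stddef.h>

/* Only reuse (and split) an active dirty extent if it is at most
 * 2^lg_max_fit times larger than the requested size; otherwise leave
 * the large extent intact to limit long-term fragmentation. */
static bool
extent_fit_ok(size_t extent_size, size_t request_size, unsigned lg_max_fit) {
    return extent_size <= (request_size << lg_max_fit);
}
```

  With lg_max_fit = 6, a 1 MiB active extent is not split for a 4 KiB request (a 256x mismatch), which is exactly the fragmentation pattern the option is meant to avoid.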
* Add arena.i.retain_grow_limit. (Qi Wang, 2017-11-03, 1 file, -2/+40)
  This option controls the maximum size used by grow_retained. This is useful when we have customized extent hooks reserving physical memory (e.g. 1G huge pages). Without this feature, the default increasing sequence could result in fragmented and wasted physical memory.
* Add stats for metadata_thp. (Qi Wang, 2017-08-30, 1 file, -0/+12)
  Report the number of THPs used in per-arena and aggregated stats.
* Change opt.metadata_thp to [disabled,auto,always]. (Qi Wang, 2017-08-30, 1 file, -1/+2)
  To avoid the high RSS caused by THP combined with a low-usage arena (i.e. where THP becomes a significant percentage of RSS), add a new "auto" option which only starts using THP after a base allocator has used up its first THP region. Starting from the second hugepage (in a single arena), "auto" behaves the same as "always", i.e. it madvises hugepage right away.
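  The "auto" policy reduces to a small predicate. A hedged sketch of the decision logic (names and the enum are illustrative, not jemalloc's internals):

```c
#include <stdbool.h>

typedef enum {
    METADATA_THP_DISABLED,
    METADATA_THP_AUTO,
    METADATA_THP_ALWAYS
} metadata_thp_t;

/* Decide whether the next metadata region should be madvised as a
 * hugepage. "auto" waits until the first region has been used up, so a
 * low-usage arena never pays the RSS cost of a mostly-empty hugepage. */
static bool
metadata_should_use_thp(metadata_thp_t mode, unsigned n_regions_used) {
    switch (mode) {
    case METADATA_THP_ALWAYS:
        return true;                 /* madvise hugepage right away */
    case METADATA_THP_AUTO:
        return n_regions_used >= 1;  /* from the second region onward */
    default:
        return false;                /* disabled */
    }
}
```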
* Implement opt.metadata_thp. (Qi Wang, 2017-08-11, 1 file, -0/+3)
  This option enables transparent huge pages for base allocators (requires MADV_HUGEPAGE support).
* Switch ctl to explicitly use tsd instead of tsdn. (Qi Wang, 2017-06-23, 1 file, -19/+19)
* Fix assertion typos. (Jason Evans, 2017-06-23, 1 file, -1/+1)
  Reported by Conrad Meyer.
* Pass tsd to tcache_flush(). (Qi Wang, 2017-06-16, 1 file, -1/+1)
* Only abort on dlsym when necessary. (Qi Wang, 2017-06-14, 1 file, -0/+7)
  If neither background_thread nor lazy_lock is in use, do not abort on dlsym errors.
* Combine background_thread started/paused into state. (Qi Wang, 2017-06-12, 1 file, -4/+4)
* Move background thread creation to background_thread_0. (Qi Wang, 2017-06-12, 1 file, -4/+8)
  To avoid complications, avoid invoking pthread_create "internally"; instead rely on thread 0 to launch new threads, and also to terminate threads when asked.
* Drop high-rank locks when creating threads. (Qi Wang, 2017-06-08, 1 file, -0/+3)
  Avoid holding arenas_lock and background_thread_lock when creating background threads, because pthread_create may take internal locks, potentially causing deadlock with jemalloc internal locks.
* Take background thread lock when setting extent hooks. (Qi Wang, 2017-06-05, 1 file, -1/+1)
* Set isthreaded when enabling background_thread. (Qi Wang, 2017-06-02, 1 file, -0/+1)
* Refactor/fix background_thread/percpu_arena bootstrapping. (Jason Evans, 2017-06-01, 1 file, -3/+4)
  Refactor bootstrapping such that dlsym() is called during the bootstrapping phase that can tolerate reentrant allocation.
* Header refactoring: Pull size helpers out of jemalloc module. (David Goldblatt, 2017-05-31, 1 file, -2/+3)
* Header refactoring: unify and de-catchall extent_mmap module. (David Goldblatt, 2017-05-31, 1 file, -0/+1)
* Header refactoring: unify and de-catchall extent_dss. (David Goldblatt, 2017-05-31, 1 file, -0/+1)
* Add the --disable-thp option to support cross compiling. (Jason Evans, 2017-05-30, 1 file, -0/+3)
  This resolves #669.
* Add opt.stats_print_opts. (Qi Wang, 2017-05-29, 1 file, -0/+3)
  The value is passed to atexit(3)-triggered malloc_stats_print() calls.
* Added opt_abort_conf: abort on invalid config options. (Qi Wang, 2017-05-27, 1 file, -0/+3)
* Header refactoring: unify and de-catchall mutex module. (David Goldblatt, 2017-05-24, 1 file, -0/+1)
* Add profiling for the background thread mutex. (Qi Wang, 2017-05-23, 1 file, -0/+12)
* Add background thread related stats. (Qi Wang, 2017-05-23, 1 file, -0/+30)
* Implement opt.background_thread. (Qi Wang, 2017-05-23, 1 file, -3/+89)
  Added opt.background_thread to enable background threads, which currently handle purging. When enabled, decay ticks do not trigger purging (which is left to the background threads). We limit the max number of threads to NCPUs. When percpu arena is enabled, we also set CPU affinity for the background threads. The sleep interval of background threads is dynamic, determined by computing the number of pages to purge in the future (based on the backlog).
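  The backlog-driven sleep interval can be sketched in a few lines. This is an illustrative model only (the constants, scaling, and function name are hypothetical, not jemalloc's actual computation): the more pages the decay backlog says must be purged soon, the shorter the thread sleeps.

```c
#include <stddef.h>
#include <stdint.h>

#define BG_SLEEP_MIN_NS ((uint64_t)1000 * 1000)              /* 1 ms floor */
#define BG_SLEEP_MAX_NS ((uint64_t)10 * 1000 * 1000 * 1000)  /* 10 s cap */

/* Pick a sleep interval for the purging background thread based on how
 * many pages the decay backlog predicts will need purging. */
static uint64_t
bg_thread_sleep_ns(size_t npages_to_purge) {
    if (npages_to_purge == 0) {
        return BG_SLEEP_MAX_NS;  /* nothing queued: sleep long */
    }
    /* Scale inversely with the backlog, then clamp to the floor. */
    uint64_t ns = BG_SLEEP_MAX_NS / (uint64_t)npages_to_purge;
    if (ns < BG_SLEEP_MIN_NS) {
        ns = BG_SLEEP_MIN_NS;
    }
    return ns;
}
```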
* Allow mutexes to take a lock ordering enum at construction. (David Goldblatt, 2017-05-19, 1 file, -1/+2)
  This lets us specify whether and how mutexes of the same rank are allowed to be acquired. Currently, we only allow two policies (only a single mutex at a given rank at a time, or mutexes acquired in ascending order), but we could plausibly allow more (e.g. "release uncontended mutexes before blocking").
* Refactor *decay_time into *decay_ms. (Jason Evans, 2017-05-18, 1 file, -48/+47)
  Support millisecond resolution for decay times. Among other use cases, this makes it possible to specify a short initial dirty-->muzzy decay phase, followed by a longer muzzy-->clean decay phase. This resolves #812.
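  A short sketch of the two-phase configuration this enables. The struct, the helper, and the example values (100 ms / 10 s) are all illustrative, not jemalloc defaults:

```c
#include <stdint.h>

/* Two decay phases, each with its own millisecond-resolution timeout. */
typedef struct {
    int64_t dirty_decay_ms; /* dirty -> muzzy */
    int64_t muzzy_decay_ms; /* muzzy -> clean (purged) */
} decay_config_t;

/* A short dirty phase followed by a much longer muzzy phase, which
 * second-resolution decay times could not express. */
static const decay_config_t example_cfg = {
    .dirty_decay_ms = 100,   /* demote dirty pages aggressively */
    .muzzy_decay_ms = 10000, /* keep muzzy pages around for 10 s */
};

/* Convert to nanoseconds for timer machinery; -1 means "never decay". */
static int64_t
decay_ms_to_ns(int64_t decay_ms) {
    return decay_ms < 0 ? -1 : decay_ms * 1000 * 1000;
}
```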
* Add stats: arena uptime. (Qi Wang, 2017-05-18, 1 file, -0/+8)