path: root/src/ctl.c
Commit message (author, date; files changed, lines -/+)
* Split up and standardize naming of stats code. (David T. Goldblatt, 2017-12-19; 1 file, -39/+43)
  The arena-associated stats are now all prefixed with arena_stats_, and live in their own file. Likewise, malloc_bin_stats_t -> bin_stats_t, also in its own file.
* Pull out arena_bin_info_t and arena_bin_t into their own file. (David T. Goldblatt, 2017-12-19; 1 file, -4/+4)
  In the process, kill arena_bin_index, which is unused. To follow are several diffs continuing this separation.
* Add opt.lg_extent_max_active_fit (Qi Wang, 2017-11-16; 1 file, -0/+4)
  When allocating from dirty extents (which we always prefer if available), large active extents can get split even if the new allocation is much smaller, in which case the introduced fragmentation causes severe long-term damage. This new option controls the threshold at which an existing active extent is reused and split: we avoid using a large extent for much smaller sizes, in order to reduce fragmentation. In some workloads, adding the threshold improves virtual memory usage by >10x.
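  A minimal sketch of how the new option might be exercised: it is read-only at runtime, so a deployment would set it at startup (e.g. MALLOC_CONF=lg_extent_max_active_fit:6) and can check the effective value through mallctl. The size_t type and the unprefixed API names are assumptions about the build, not stated in the commit.

    #include <stdio.h>
    #include <jemalloc/jemalloc.h>

    int main(void) {
        size_t lg_fit;
        size_t sz = sizeof(lg_fit);
        /* Read back the threshold chosen at startup (read-only control). */
        if (mallctl("opt.lg_extent_max_active_fit", &lg_fit, &sz, NULL, 0) == 0) {
            printf("lg_extent_max_active_fit: %zu\n", lg_fit);
        }
        return 0;
    }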
* Add arena.i.retain_grow_limit (Qi Wang, 2017-11-03; 1 file, -2/+40)
  This option controls the maximum size used by grow_retained. It is useful when customized extent hooks reserve physical memory (e.g. 1G huge pages); without it, the default increasing growth sequence could result in fragmented and wasted physical memory.
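  A hedged sketch of using the new control: arena.<i>.retain_grow_limit is a per-arena read/write mallctl, but the size_t type, arena index 0, and the 1 GiB value below are illustrative assumptions.

    #include <stdio.h>
    #include <jemalloc/jemalloc.h>

    int main(void) {
        size_t limit = (size_t)1 << 30;  /* cap grow_retained at 1 GiB */
        size_t old;
        size_t sz = sizeof(old);
        /* Write the new limit for arena 0 and read back the previous one. */
        if (mallctl("arena.0.retain_grow_limit", &old, &sz, &limit,
            sizeof(limit)) != 0) {
            fprintf(stderr, "arena.0.retain_grow_limit not available\n");
            return 1;
        }
        printf("previous grow limit: %zu bytes\n", old);
        return 0;
    }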
* Add stats for metadata_thp. (Qi Wang, 2017-08-30; 1 file, -0/+12)
  Report the number of THPs used, both per arena and in the aggregated stats.
* Change opt.metadata_thp to [disabled,auto,always]. (Qi Wang, 2017-08-30; 1 file, -1/+2)
  To avoid the high RSS caused by THP in low-usage arenas (where THP overhead becomes a significant percentage), add a new "auto" option which only starts using THP after a base allocator has used up its first THP region. Starting from the second hugepage (in a single arena), "auto" behaves the same as "always", i.e. madvise hugepage right away.
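  A minimal sketch of selecting the "auto" mode, assuming the usual application-provided malloc_conf string and an unprefixed build; the option is read-only once the allocator has bootstrapped.

    #include <stdio.h>
    #include <jemalloc/jemalloc.h>

    /* jemalloc reads this global during bootstrap. */
    const char *malloc_conf = "metadata_thp:auto";

    int main(void) {
        const char *mode;
        size_t sz = sizeof(mode);
        /* Confirm which metadata_thp mode is in effect. */
        if (mallctl("opt.metadata_thp", &mode, &sz, NULL, 0) == 0) {
            printf("metadata_thp: %s\n", mode);
        }
        return 0;
    }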
* Implement opt.metadata_thp (Qi Wang, 2017-08-11; 1 file, -0/+3)
  This option enables transparent huge pages for base allocators (requires MADV_HUGEPAGE support).
* Switch ctl to explicitly use tsd instead of tsdn. (Qi Wang, 2017-06-23; 1 file, -19/+19)
* Fix assertion typos. (Jason Evans, 2017-06-23; 1 file, -1/+1)
  Reported by Conrad Meyer.
* Pass tsd to tcache_flush(). (Qi Wang, 2017-06-16; 1 file, -1/+1)
* Only abort on dlsym when necessary. (Qi Wang, 2017-06-14; 1 file, -0/+7)
  If neither background_thread nor lazy_lock is in use, do not abort on dlsym errors.
* Combine background_thread started / paused into state. (Qi Wang, 2017-06-12; 1 file, -4/+4)
* Move background thread creation to background_thread_0. (Qi Wang, 2017-06-12; 1 file, -4/+8)
  To avoid complications, avoid invoking pthread_create "internally"; instead rely on thread 0 to launch new threads, and to terminate threads when asked.
* Drop high rank locks when creating threads. (Qi Wang, 2017-06-08; 1 file, -0/+3)
  Avoid holding arenas_lock and background_thread_lock when creating background threads, because pthread_create may take internal locks and potentially cause deadlock with jemalloc internal locks.
* Take background thread lock when setting extent hooks. (Qi Wang, 2017-06-05; 1 file, -1/+1)
* Set isthreaded when enabling background_thread. (Qi Wang, 2017-06-02; 1 file, -0/+1)
* Refactor/fix background_thread/percpu_arena bootstrapping. (Jason Evans, 2017-06-01; 1 file, -3/+4)
  Refactor bootstrapping such that dlsym() is called during the bootstrapping phase that can tolerate reentrant allocation.
* Header refactoring: Pull size helpers out of jemalloc module. (David Goldblatt, 2017-05-31; 1 file, -2/+3)
* Header refactoring: unify and de-catchall extent_mmap module. (David Goldblatt, 2017-05-31; 1 file, -0/+1)
* Header refactoring: unify and de-catchall extent_dss. (David Goldblatt, 2017-05-31; 1 file, -0/+1)
* Add the --disable-thp option to support cross compiling. (Jason Evans, 2017-05-30; 1 file, -0/+3)
  This resolves #669.
* Add opt.stats_print_opts. (Qi Wang, 2017-05-29; 1 file, -0/+3)
  The value is passed to atexit(3)-triggered malloc_stats_print() calls.
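  A sketch of combining the new option with opt.stats_print so the atexit(3)-triggered report is customized; the specific option letters ("J" for JSON, "a" to skip per-arena detail) are those accepted by malloc_stats_print() and are an assumption about this version.

    #include <stdlib.h>
    #include <jemalloc/jemalloc.h>

    /* Print stats at exit, in JSON, without per-arena sections. */
    const char *malloc_conf = "stats_print:true,stats_print_opts:Ja";

    int main(void) {
        void *p = malloc(1024);  /* some activity to report */
        free(p);
        return 0;  /* the report is emitted from the atexit hook */
    }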
* Added opt_abort_conf: abort on invalid config options. (Qi Wang, 2017-05-27; 1 file, -0/+3)
* Header refactoring: unify and de-catchall mutex module (David Goldblatt, 2017-05-24; 1 file, -0/+1)
* Add profiling for the background thread mutex. (Qi Wang, 2017-05-23; 1 file, -0/+12)
* Add background thread related stats. (Qi Wang, 2017-05-23; 1 file, -0/+30)
* Implementing opt.background_thread. (Qi Wang, 2017-05-23; 1 file, -3/+89)
  Added opt.background_thread to enable background threads, which currently handle purging. When enabled, decay ticks do not trigger purging (which is left to the background threads). The maximum number of threads is limited to the number of CPUs. When percpu arena is enabled, CPU affinity is set for the background threads as well. The sleep interval of background threads is dynamic, determined by computing the number of pages to purge in the future (based on backlog).
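  A minimal sketch of turning the feature on, assuming an unprefixed build: either at startup via MALLOC_CONF=background_thread:true, or at runtime through the boolean "background_thread" mallctl as below.

    #include <stdbool.h>
    #include <stdio.h>
    #include <jemalloc/jemalloc.h>

    int main(void) {
        bool enable = true;
        /* Hand purging over to jemalloc's background threads. */
        if (mallctl("background_thread", NULL, NULL, &enable,
            sizeof(enable)) != 0) {
            fprintf(stderr, "background_thread not supported\n");
            return 1;
        }
        return 0;
    }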
* Allow mutexes to take a lock ordering enum at construction. (David Goldblatt, 2017-05-19; 1 file, -1/+2)
  This lets us specify whether and how mutexes of the same rank are allowed to be acquired. Currently we only allow two policies (only a single mutex at a given rank at a time, and mutexes acquired in ascending order), but we can plausibly allow more (e.g. "release uncontended mutexes before blocking").
* Refactor *decay_time into *decay_ms. (Jason Evans, 2017-05-18; 1 file, -48/+47)
  Support millisecond resolution for decay times. Among other use cases, this makes it possible to specify a short initial dirty-->muzzy decay phase, followed by a longer muzzy-->clean decay phase. This resolves #812.
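  A sketch of the short dirty-->muzzy / longer muzzy-->clean configuration the commit mentions, applied to arena 0; the ssize_t type and the specific millisecond values are illustrative assumptions.

    #include <sys/types.h>
    #include <jemalloc/jemalloc.h>

    int main(void) {
        ssize_t dirty_ms = 1000;   /* start purging dirty pages after ~1s */
        ssize_t muzzy_ms = 30000;  /* fully release muzzy pages after ~30s */
        mallctl("arena.0.dirty_decay_ms", NULL, NULL, &dirty_ms, sizeof(dirty_ms));
        mallctl("arena.0.muzzy_decay_ms", NULL, NULL, &muzzy_ms, sizeof(muzzy_ms));
        return 0;
    }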
* Add stats: arena uptime. (Qi Wang, 2017-05-18; 1 file, -0/+8)
* Refactor !opt.munmap to opt.retain. (Jason Evans, 2017-04-29; 1 file, -3/+3)
* Header refactoring: ctl - unify and remove from catchall. (David Goldblatt, 2017-04-25; 1 file, -9/+10)
  In order to do this, we introduce the mutex_prof module, which breaks a circular dependency between ctl and prof.
* Replace --disable-munmap with opt.munmap. (Jason Evans, 2017-04-25; 1 file, -3/+3)
  Control use of munmap(2) via a run-time option rather than a compile-time option (with the same per platform default). The old behavior of --disable-munmap can be achieved with --with-malloc-conf=munmap:false. This partially resolves #580.
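  A sketch of requesting the old --disable-munmap behavior per application, assuming the standard malloc_conf mechanism; note that the newer commit above renames the option to opt.retain.

    #include <stdlib.h>
    #include <jemalloc/jemalloc.h>

    /* Retain virtual memory instead of calling munmap(2). */
    const char *malloc_conf = "munmap:false";

    int main(void) {
        void *p = malloc(1 << 20);
        free(p);  /* with munmap disabled, mappings are kept for reuse rather than unmapped */
        return 0;
    }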
* Header refactoring: size_classes module - remove from the catchall (David Goldblatt, 2017-04-24; 1 file, -0/+1)
* Get rid of most of the various inline macros. (David Goldblatt, 2017-04-24; 1 file, -3/+3)
* Remove --disable-tls. (Jason Evans, 2017-04-21; 1 file, -3/+0)
  This option is no longer useful, because TLS is correctly configured automatically on all supported platforms. This partially resolves #580.
* Remove --disable-tcache. (Jason Evans, 2017-04-21; 1 file, -47/+19)
  Simplify configuration by removing the --disable-tcache option, but replace the testing for that configuration with --with-malloc-conf=tcache:false. Fix the thread.arena and thread.tcache.flush mallctls to work correctly if tcache is disabled. This partially resolves #580.
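  A minimal sketch of the thread.tcache.flush mallctl touched by this fix; whether it succeeds depends on whether the build/runtime configuration (e.g. --with-malloc-conf=tcache:false) leaves tcache enabled.

    #include <stdio.h>
    #include <stdlib.h>
    #include <jemalloc/jemalloc.h>

    int main(void) {
        void *p = malloc(64);
        free(p);  /* likely cached in the calling thread's tcache */
        /* Flush the calling thread's tcache back to the arena. */
        if (mallctl("thread.tcache.flush", NULL, NULL, NULL, 0) != 0) {
            fprintf(stderr, "tcache disabled or flush unavailable\n");
        }
        return 0;
    }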
* Bypass extent tracking for auto arenas. (Qi Wang, 2017-04-21; 1 file, -6/+1)
  Tracking extents is required by arena_reset. To support this, the extent linkage was used for tracking 1) large allocations, and 2) full slabs. However, modifying the extent linkage could be an expensive operation as it likely incurs cache misses. Since we forbid arena_reset on auto arenas, bypass the linkage operations for auto arenas.
* Header refactoring: unify nstime.h and move it out of the catch-all (David Goldblatt, 2017-04-19; 1 file, -0/+1)
* Header refactoring: move assert.h out of the catch-all (David Goldblatt, 2017-04-19; 1 file, -0/+1)
* Header refactoring: move util.h out of the catchall (David Goldblatt, 2017-04-19; 1 file, -0/+2)
* Prefer old/low extent_t structures during reuse. (Jason Evans, 2017-04-17; 1 file, -1/+1)
  Rather than using a LIFO queue to track available extent_t structures, use a red-black tree, and always choose the oldest/lowest available during reuse.
* Header refactoring: Split up jemalloc_internal.h (David Goldblatt, 2017-04-11; 1 file, -1/+2)
  This is a biggy. jemalloc_internal.h has been doing multiple jobs for a while now:
  - The source of system-wide definitions.
  - The catch-all include file.
  - The module header file for jemalloc.c
  This commit splits up this functionality. The system-wide definitions responsibility has moved to jemalloc_preamble.h. The catch-all include file is now jemalloc_internal_includes.h. The module headers for jemalloc.c are now in jemalloc_internal_[externs|inlines|types].h, just as they are for the other modules.
* Integrate auto tcache into TSD. (Qi Wang, 2017-04-07; 1 file, -3/+3)
  The embedded tcache is initialized upon tsd initialization. The avail arrays for the tbins will be allocated / deallocated accordingly during init / cleanup. With this change, the pointer to the auto tcache will always be available, as long as we have access to the TSD. tcache_available() (called in tcache_get()) is provided to check if we should use tcache.
* Profile per arena base mutex, instead of just a0. (Qi Wang, 2017-03-23; 1 file, -5/+4)
* Refactor mutex profiling code with x-macros. (Qi Wang, 2017-03-23; 1 file, -118/+49)
* Switch to nstime_t for the time related fields in mutex profiling. (Qi Wang, 2017-03-23; 1 file, -2/+2)
* Added extents_dirty / _muzzy mutexes, as well as decay_dirty / _muzzy. (Qi Wang, 2017-03-23; 1 file, -33/+48)
* Added "stats.mutexes.reset" mallctl to reset all mutex stats. (Qi Wang, 2017-03-23; 1 file, -76/+128)
  Also switched from the term "lock" to "mutex".
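  A sketch of using the new mallctl to isolate mutex contention to one phase of a program; the phase function is hypothetical and statistics support is assumed to be compiled in.

    #include <jemalloc/jemalloc.h>

    static void run_phase(void) {
        /* ... hypothetical workload whose contention we want to measure ... */
    }

    int main(void) {
        run_phase();
        /* Discard mutex counters accumulated so far. */
        mallctl("stats.mutexes.reset", NULL, NULL, NULL, 0);
        run_phase();
        /* The mutex stats now reflect only the second phase. */
        return 0;
    }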
* Added JSON output for lock stats. (Qi Wang, 2017-03-23; 1 file, -1/+3)
  Also added option 'x' to malloc_stats_print() to bypass the lock section.
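  A sketch combining the two features from this commit, assuming an unprefixed build: JSON-formatted output with the mutex section suppressed via the 'x' option character.

    #include <stdlib.h>
    #include <jemalloc/jemalloc.h>

    int main(void) {
        void *p = malloc(4096);
        free(p);
        /* NULL write callback: output goes through malloc_message (stderr). */
        malloc_stats_print(NULL, NULL, "Jx");
        return 0;
    }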