path: root/include/jemalloc/internal/arena_structs_b.h
* Split up and standardize naming of stats code. (David T. Goldblatt, 2017-12-19; 1 file changed, -2/+2)
  The arena-associated stats are now all prefixed with arena_stats_, and live in their own file. Likewise, malloc_bin_stats_t -> bin_stats_t, also in its own file.
* Pull out arena_bin_info_t and arena_bin_t into their own file. (David T. Goldblatt, 2017-12-19; 1 file changed, -63/+2)
  In the process, kill arena_bin_index, which is unused. To follow are several diffs continuing this separation.
* Add arena.i.retain_grow_limit (Qi Wang, 2017-11-03; 1 file changed, -0/+5)
  This option caps the size used by grow_retained when expanding retained memory. It is useful when customized extent hooks reserve physical memory (e.g. 1G huge pages); without such a cap, the default increasing growth sequence could result in fragmented and wasted physical memory.
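  For illustration, a minimal sketch of tuning this limit through the mallctl interface. It assumes the arena.<i>.retain_grow_limit mallctl added by this commit is writable as a size_t; the arena index and the 1 GiB value are only examples.

      #include <stdio.h>
      #include <jemalloc/jemalloc.h>

      int main(void) {
          /* Cap retained-memory growth for arena 0 at 1 GiB (illustrative value). */
          size_t limit = (size_t)1 << 30;
          if (mallctl("arena.0.retain_grow_limit", NULL, NULL, &limit,
              sizeof(limit)) != 0) {
              fprintf(stderr, "setting retain_grow_limit failed\n");
              return 1;
          }
          return 0;
      }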
* Make arena stats collection go through cache bins. (David Goldblatt, 2017-08-17; 1 file changed, -5/+6)
  This eliminates the need for the arena stats code to "know" about tcaches; all that it needs is a cache_bin_array_descriptor_t to tell it where to find cache_bins whose stats it should aggregate.
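  As a rough illustration of the descriptor idea only (field names and linkage are assumptions, not jemalloc's exact definition), the stats code merely needs a small record pointing at a thread cache's bin arrays that can be linked into a per-arena list:

      /* Illustrative sketch; not jemalloc's actual declaration. */
      typedef struct cache_bin_s cache_bin_t;  /* opaque here */

      typedef struct cache_bin_array_descriptor_s {
          struct cache_bin_array_descriptor_s *link;  /* per-arena list linkage */
          cache_bin_t *bins_small;                    /* small size-class bins */
          cache_bin_t *bins_large;                    /* large size-class bins */
      } cache_bin_array_descriptor_t;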
* Header refactoring: unify and de-catchall extent_dss. (David Goldblatt, 2017-05-31; 1 file changed, -0/+1)
* Fix extent_grow_next management. (Jason Evans, 2017-05-30; 1 file changed, -2/+3)
  Fix management of extent_grow_next to serialize operations that may grow retained memory. This assures that the sizes of the newly allocated extents correspond to the size classes in the intended growth sequence.
  Fix management of extent_grow_next to skip size classes if a request is too large to be satisfied by the next size in the growth sequence. This avoids the potential for an arbitrary number of requests to bypass triggering extent_grow_next increases.
  This resolves #858.
* Header refactoring: unify and de-catchall mutex module (David Goldblatt, 2017-05-24; 1 file changed, -0/+1)
* Fix # of unpurged pages in decay algorithm. (Qi Wang, 2017-05-23; 1 file changed, -0/+2)
  When the number of dirty pages moves below npages_limit (e.g. because they are reused), we should not lower the number of unpurged pages, because that would cause the reused pages to be double counted in the backlog (and, as a result, decay to happen more slowly than it should). Instead, set the number of unpurged pages to the greater of the current npages and npages_limit.
  Added an assertion: the ceiling number of pages should be greater than npages_limit.
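  A compressed sketch of the corrected bookkeeping (names are illustrative and do not match jemalloc's internals):

      #include <stddef.h>

      /* When dirty pages are reused and the current count drops below
       * npages_limit, keep the unpurged count at npages_limit so the reused
       * pages are not double counted in the decay backlog. */
      static size_t
      decay_npages_unpurged(size_t current_npages, size_t npages_limit) {
          return current_npages > npages_limit ? current_npages : npages_limit;
      }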
* Refactor *decay_time into *decay_ms. (Jason Evans, 2017-05-18; 1 file changed, -2/+2)
  Support millisecond resolution for decay times. Among other use cases this makes it possible to specify a short initial dirty-->muzzy decay phase, followed by a longer muzzy-->clean decay phase.
  This resolves #812.
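  For example, the millisecond-granularity knobs can be set at startup through the application-defined malloc_conf string (the same keys work in the MALLOC_CONF environment variable), assuming the default unprefixed jemalloc API; the values below are only illustrative:

      #include <jemalloc/jemalloc.h>

      /* Short dirty->muzzy decay (1 s) followed by a longer muzzy->clean
       * decay (10 s); values are illustrative. */
      const char *malloc_conf = "dirty_decay_ms:1000,muzzy_decay_ms:10000";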
* Add stats: arena uptime. (Qi Wang, 2017-05-18; 1 file changed, -0/+2)
* Refactor !opt.munmap to opt.retain. (Jason Evans, 2017-04-29; 1 file changed, -1/+1)
* Replace --disable-munmap with opt.munmap. (Jason Evans, 2017-04-25; 1 file changed, -4/+4)
  Control use of munmap(2) via a run-time option rather than a compile-time option (with the same per-platform default). The old behavior of --disable-munmap can be achieved with --with-malloc-conf=munmap:false.
  This partially resolves #580.
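  A minimal sketch of recovering the old behavior at run time from the application side (note the option is later renamed to retain; see the "Refactor !opt.munmap to opt.retain" entry above):

      #include <jemalloc/jemalloc.h>

      /* Equivalent of the old --disable-munmap build-time default. */
      const char *malloc_conf = "munmap:false";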
* Header refactoring: bitmap - unify and remove from catchall. (David Goldblatt, 2017-04-24; 1 file changed, -0/+1)
* Header refactoring: stats - unify and remove from catchall (David Goldblatt, 2017-04-24; 1 file changed, -0/+1)
* Header refactoring: move smoothstep.h out of the catchall. (David Goldblatt, 2017-04-24; 1 file changed, -0/+1)
* Header refactoring: size_classes module - remove from the catchall (David Goldblatt, 2017-04-24; 1 file changed, -0/+2)
* Header refactoring: ticker module - remove from the catchall and unify. (David Goldblatt, 2017-04-24; 1 file changed, -0/+1)
* Header refactoring: unify nstime.h and move it out of the catch-all (David Goldblatt, 2017-04-19; 1 file changed, -0/+1)
* Prefer old/low extent_t structures during reuse. (Jason Evans, 2017-04-17; 1 file changed, -4/+5)
  Rather than using a LIFO queue to track available extent_t structures, use a red-black tree, and always choose the oldest/lowest available during reuse.
* Pass alloc_ctx down profiling path. (Qi Wang, 2017-04-12; 1 file changed, -2/+2)
  With this change, when profiling is enabled, we avoid doing redundant rtree lookups. Also changed dalloc_ctx_t to alloc_ctx_t, as it's now used on the allocation path as well (to speed up profiling).
* Header refactoring: move atomic.h out of the catch-all (David Goldblatt, 2017-04-11; 1 file changed, -0/+1)
* Header refactoring: break out ql.h dependencies (David Goldblatt, 2017-04-11; 1 file changed, -0/+3)
* Pass dealloc_ctx down free() fast path. (Qi Wang, 2017-04-11; 1 file changed, -0/+6)
  This gets rid of the redundant rtree lookup on the fast path.
* Transition arena struct fields to C11 atomics (David Goldblatt, 2017-04-05; 1 file changed, -6/+10)
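  A generic C11 sketch of the kind of conversion involved (jemalloc actually uses its own atomic wrapper types and explicit memory-order arguments; the names here are illustrative, not jemalloc's):

      #include <stdatomic.h>
      #include <stddef.h>

      struct arena_counters {
          atomic_size_t nactive;  /* pages in active extents */
      };

      static void
      arena_nactive_add(struct arena_counters *c, size_t add_pages) {
          /* Relaxed ordering suffices for a statistics-style counter. */
          atomic_fetch_add_explicit(&c->nactive, add_pages, memory_order_relaxed);
      }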
* Convert prng module to use C11-style atomics (David Goldblatt, 2017-04-04; 1 file changed, -1/+1)
* Implement two-phase decay-based purging. (Jason Evans, 2017-03-15; 1 file changed, -10/+19)
  Split decay-based purging into two phases, the first of which uses lazy purging to convert dirty pages to "muzzy", and the second of which uses forced purging, decommit, or unmapping to convert pages to clean or destroy them altogether.
  Not all operating systems support lazy purging, yet the application may provide extent hooks that implement lazy purging, so care must be taken to dynamically omit the first phase when necessary.
  The mallctl interfaces change as follows:
  - opt.decay_time --> opt.{dirty,muzzy}_decay_time
  - arena.<i>.decay_time --> arena.<i>.{dirty,muzzy}_decay_time
  - arenas.decay_time --> arenas.{dirty,muzzy}_decay_time
  - stats.arenas.<i>.pdirty --> stats.arenas.<i>.p{dirty,muzzy}
  - stats.arenas.<i>.{npurge,nmadvise,purged} --> stats.arenas.<i>.{dirty,muzzy}_{npurge,nmadvise,purged}
  This resolves #521.
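  For example, the renamed per-phase statistics can be read through mallctl after refreshing the stats epoch; the arena index and minimal error handling below are illustrative:

      #include <inttypes.h>
      #include <stdint.h>
      #include <stdio.h>
      #include <jemalloc/jemalloc.h>

      int main(void) {
          /* Refresh cached statistics before reading them. */
          uint64_t epoch = 1;
          size_t sz = sizeof(epoch);
          mallctl("epoch", &epoch, &sz, &epoch, sizeof(epoch));

          /* Per-phase purge counters for arena 0 (illustrative index). */
          uint64_t dirty_npurge = 0, muzzy_npurge = 0;
          sz = sizeof(uint64_t);
          mallctl("stats.arenas.0.dirty_npurge", &dirty_npurge, &sz, NULL, 0);
          sz = sizeof(uint64_t);
          mallctl("stats.arenas.0.muzzy_npurge", &muzzy_npurge, &sz, NULL, 0);
          printf("dirty_npurge=%" PRIu64 " muzzy_npurge=%" PRIu64 "\n",
              dirty_npurge, muzzy_npurge);
          return 0;
      }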
* Move arena_t's purging field into arena_decay_t. (Jason Evans, 2017-03-15; 1 file changed, -7/+5)
* Refactor decay-related function parametrization. (Jason Evans, 2017-03-15; 1 file changed, -7/+7)
  Refactor most of the decay-related functions to take as parameters the decay_t and associated extents_t structures to operate on. This prepares for supporting both lazy and forced purging on different decay schedules.
* Convert arena_t's purging field to non-atomic bool. (Jason Evans, 2017-03-10; 1 file changed, -8/+7)
  The decay mutex already protects all accesses.
* Implement per-CPU arena. (Qi Wang, 2017-03-09; 1 file changed, -0/+7)
  The new feature, opt.percpu_arena, determines thread-arena association dynamically based on CPU id. Three modes are supported: "percpu", "phycpu" and disabled. "percpu" uses the current core id (with help from sched_getcpu()) directly as the arena index, while "phycpu" will assign threads on the same physical CPU to the same arena. In other words, "percpu" means # of arenas == # of CPUs, while "phycpu" has # of arenas == 1/2 * (# of CPUs). Note that no runtime check on whether hyperthreading is enabled has been added yet.
  When enabled, threads will be migrated between arenas when a CPU change is detected. In the current design, to reduce the overhead of reading the CPU id, each arena tracks the thread that accessed it most recently. When a new thread comes in, we will read the CPU id and update the arena if necessary.
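  For example, the mode can be selected at startup via the application-defined malloc_conf string; per the commit, the accepted values are "percpu", "phycpu", or disabled:

      #include <jemalloc/jemalloc.h>

      /* One arena per logical CPU; use "phycpu" to share an arena between
       * hyperthread siblings instead. */
      const char *malloc_conf = "percpu_arena:percpu";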
* Change arena to use the atomic functions for ssize_t instead of the union strategy (David Goldblatt, 2017-03-07; 1 file changed, -6/+1)
* Convert arena_decay_t's time to be atomically synchronized. (Jason Evans, 2017-03-03; 1 file changed, -2/+9)
* Synchronize arena->decay with arena->decay.mtx. (Jason Evans, 2017-02-16; 1 file changed, -6/+2)
  This removes the last use of arena->lock.
* Synchronize arena->tcache_ql with arena->tcache_ql_mtx. (Jason Evans, 2017-02-16; 1 file changed, -1/+2)
  This replaces arena->lock synchronization.
* Convert arena->stats synchronization to atomics. (Jason Evans, 2017-02-16; 1 file changed, -1/+2)
* Convert arena->prof_accumbytes synchronization to atomics. (Jason Evans, 2017-02-16; 1 file changed, -1/+2)
* Convert arena->dss_prec synchronization to atomics. (Jason Evans, 2017-02-16; 1 file changed, -1/+1)
* Disentangle arena and extent locking. (Jason Evans, 2017-02-02; 1 file changed, -45/+72)
  Refactor arena and extent locking protocols such that arena and extent locks are never held when calling into the extent_*_wrapper() API. This requires extra care during purging since the arena lock no longer protects the inner purging logic. It also requires extra care to protect extents from being merged with adjacent extents.
  Convert extent_t's 'active' flag to an enumerated 'state', so that retained extents are explicitly marked as such, rather than depending on ring linkage state.
  Refactor the extent collections (and their synchronization) for cached and retained extents into extents_t. Incorporate LRU functionality to support purging. Incorporate page count accounting, which replaces arena->ndirty and arena->stats.retained.
  Assert that no core locks are held when entering any internal [de]allocation functions. This is in addition to existing assertions that no locks are held when entering external [de]allocation functions.
  Audit and document synchronization protocols for all arena_t fields. This fixes a potential deadlock due to recursive allocation during gdump, in a similar fashion to b49c649bc18fff4bd10a1c8adbaf1f25f6453cb6 (Fix lock order reversal during gdump.), but with a necessarily much broader code impact.
* Break up headers into constituent parts (David T. Goldblatt, 2017-01-12; 1 file changed, -0/+214)
  This is part of a broader change to make header files better represent the dependencies between one another (see https://github.com/jemalloc/jemalloc/issues/533). It breaks up component headers into smaller parts that can be made to have a simpler dependency graph. For the autogenerated headers (smoothstep.h and size_classes.h), no splitting was necessary, so I didn't add support to emit multiple headers.