path: root/include/jemalloc/internal/extent_structs.h
Commit log (newest first). Each entry: subject (author, date; files changed, lines -/+).
* extent_t bitpacking logic refactoring (Rajeev Misra, 2018-01-04; 1 file, -36/+36)
* Pull out arena_bin_info_t and arena_bin_t into their own file. (David T. Goldblatt, 2017-12-19; 1 file, -1/+1)
    In the process, kill arena_bin_index, which is unused. To follow are
    several diffs continuing this separation.
* Add a "dumpable" bit to the extent state. (David Goldblatt, 2017-10-16; 1 file, -7/+29)
    Currently, this is unused (i.e. all extents are always marked
    dumpable). In the future, we'll begin using this functionality.
* Use ph instead of rb tree for extents_avail_ (Dave Watson, 2017-10-04; 1 file, -15/+13)
    There does not seem to be any overlap between usage of extent_avail
    and extent_heap, so we can use the same hook. The only remaining usage
    of rb trees is in the profiling code, which has some 'interesting'
    iteration constraints.

    Fixes #888
* Header refactoring: Pull size helpers out of jemalloc module. (David Goldblatt, 2017-05-31; 1 file, -1/+1)
* Header refactoring: unify and de-catchall mutex module (David Goldblatt, 2017-05-24; 1 file, -0/+1)
* Refactor !opt.munmap to opt.retain. (Jason Evans, 2017-04-29; 1 file, -4/+3)
* Header refactoring: bitmap - unify and remove from catchall. (David Goldblatt, 2017-04-24; 1 file, -0/+1)
* Header refactoring: size_classes module - remove from the catchall (David Goldblatt, 2017-04-24; 1 file, -0/+1)
* Prefer old/low extent_t structures during reuse. (Jason Evans, 2017-04-17; 1 file, -10/+15)
    Rather than using a LIFO queue to track available extent_t structures,
    use a red-black tree, and always choose the oldest/lowest available
    during reuse.
* Track extent structure serial number (esn) in extent_t. (Jason Evans, 2017-04-17; 1 file, -3/+15)
    This enables stable sorting of extent_t structures.
* Header refactoring: move atomic.h out of the catch-all (David Goldblatt, 2017-04-11; 1 file, -0/+1)
* Header refactoring: break out ql.h dependencies (David Goldblatt, 2017-04-11; 1 file, -0/+1)
* Header refactoring: break out ph.h dependencies (David Goldblatt, 2017-04-11; 1 file, -0/+2)
* Transition e_prof_tctx in struct extent to C11 atomics (David Goldblatt, 2017-04-04; 1 file, -5/+5)
* Move arena_slab_data_t's nfree into extent_t's e_bits. (Jason Evans, 2017-03-28; 1 file, -19/+31)
    Compact extent_t to 128 bytes on 64-bit systems by moving
    arena_slab_data_t's nfree into extent_t's e_bits.

    Cacheline-align extent_t structures so that they always cross the
    minimum number of cacheline boundaries.

    Re-order extent_t fields such that all fields except the slab bitmap
    (and overlaid heap profiling context pointer) are in the first
    cacheline.

    This resolves #461.
* Pack various extent_t fields into a bitfield. (Jason Evans, 2017-03-26; 1 file, -45/+70)
    This reduces sizeof(extent_t) from 160 to 136 on x64.
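The bit-packing idea behind these two commits (several small fields sharing one 64-bit `e_bits` word, accessed via shift/mask accessors) can be sketched as below. This is a minimal illustration only: the field widths, shift values, and all `EX_*`/`extent_sketch_t` names are invented for the example and do not reflect jemalloc's actual layout.

```c
/* Sketch of packing several small fields into one uint64_t, in the style
 * of extent_t's e_bits. Widths and names are illustrative, not jemalloc's. */
#include <stdint.h>

#define EX_ARENA_WIDTH 12
#define EX_ARENA_SHIFT 0
#define EX_ARENA_MASK  ((((uint64_t)1 << EX_ARENA_WIDTH) - 1) << EX_ARENA_SHIFT)

#define EX_SLAB_WIDTH  1
#define EX_SLAB_SHIFT  (EX_ARENA_SHIFT + EX_ARENA_WIDTH)
#define EX_SLAB_MASK   ((((uint64_t)1 << EX_SLAB_WIDTH) - 1) << EX_SLAB_SHIFT)

#define EX_NFREE_WIDTH 10
#define EX_NFREE_SHIFT (EX_SLAB_SHIFT + EX_SLAB_WIDTH)
#define EX_NFREE_MASK  ((((uint64_t)1 << EX_NFREE_WIDTH) - 1) << EX_NFREE_SHIFT)

typedef struct {
    uint64_t e_bits; /* packed: arena index, slab flag, nfree, ... */
} extent_sketch_t;

static inline unsigned ex_arena_get(const extent_sketch_t *e) {
    return (unsigned)((e->e_bits & EX_ARENA_MASK) >> EX_ARENA_SHIFT);
}
static inline void ex_arena_set(extent_sketch_t *e, unsigned ind) {
    /* Clear the field, then OR in the new value; other fields untouched. */
    e->e_bits = (e->e_bits & ~EX_ARENA_MASK)
        | (((uint64_t)ind << EX_ARENA_SHIFT) & EX_ARENA_MASK);
}
static inline unsigned ex_nfree_get(const extent_sketch_t *e) {
    return (unsigned)((e->e_bits & EX_NFREE_MASK) >> EX_NFREE_SHIFT);
}
static inline void ex_nfree_set(extent_sketch_t *e, unsigned nfree) {
    e->e_bits = (e->e_bits & ~EX_NFREE_MASK)
        | (((uint64_t)nfree << EX_NFREE_SHIFT) & EX_NFREE_MASK);
}
```

The payoff is the size reduction the commit message cites: replacing several word-sized fields with one word shrinks the struct, at the cost of a shift and mask on each access.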
* Store arena index rather than (arena_t *) in extent_t. (Jason Evans, 2017-03-26; 1 file, -2/+2)
* Use a bitmap in extents_t to speed up search. (Jason Evans, 2017-03-25; 1 file, -0/+7)
    Rather than iteratively checking all sufficiently large heaps during
    search, maintain and use a bitmap in order to skip empty heaps.
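The bitmap-assisted search can be sketched as follows. The names (`extents_sketch_t`, `extents_first_fit`), the single 64-bit bitmap word, and the use of the GCC/Clang builtin `__builtin_ctzll` are all assumptions for illustration; jemalloc's real extents_t uses its internal bitmap module rather than a bare word.

```c
/* Sketch: one bit per size-class heap; a set bit means that heap is
 * nonempty, so a fit search can jump straight to the first candidate
 * instead of probing every heap. Illustrative names, not jemalloc's API. */
#include <stdint.h>

#define NHEAPS 64

typedef struct {
    uint64_t bitmap;        /* bit i set <=> heaps[i] is nonempty */
    int heap_count[NHEAPS]; /* stand-in for per-class extent heaps */
} extents_sketch_t;

static void extents_insert(extents_sketch_t *ex, unsigned i) {
    ex->heap_count[i]++;
    ex->bitmap |= (uint64_t)1 << i;
}

static void extents_remove(extents_sketch_t *ex, unsigned i) {
    if (--ex->heap_count[i] == 0)
        ex->bitmap &= ~((uint64_t)1 << i); /* heap emptied: clear its bit */
}

/* Index of the first nonempty heap with index >= min_ind, or -1 if none.
 * Requires min_ind < NHEAPS. */
static int extents_first_fit(const extents_sketch_t *ex, unsigned min_ind) {
    uint64_t masked = ex->bitmap & ~(((uint64_t)1 << min_ind) - 1);
    if (masked == 0)
        return -1;
    return __builtin_ctzll(masked); /* trailing zeros = lowest set bit */
}
```

This turns the "check every sufficiently large heap" loop into a mask plus one count-trailing-zeros instruction.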
* Convert extent_t's usize to szind. (Jason Evans, 2017-03-23; 1 file, -2/+4)
    Rather than storing usize only for large (and prof-promoted)
    allocations, store the size class index for allocations that reside
    within the extent, such that the size class index is valid for all
    extents that contain extant allocations, and invalid otherwise (mainly
    to make debugging simpler).
* Implement two-phase decay-based purging. (Jason Evans, 2017-03-15; 1 file, -1/+2)
    Split decay-based purging into two phases, the first of which uses
    lazy purging to convert dirty pages to "muzzy", and the second of
    which uses forced purging, decommit, or unmapping to convert pages to
    clean or destroy them altogether.

    Not all operating systems support lazy purging, yet the application
    may provide extent hooks that implement lazy purging, so care must be
    taken to dynamically omit the first phase when necessary.

    The mallctl interfaces change as follows:
    - opt.decay_time --> opt.{dirty,muzzy}_decay_time
    - arena.<i>.decay_time --> arena.<i>.{dirty,muzzy}_decay_time
    - arenas.decay_time --> arenas.{dirty,muzzy}_decay_time
    - stats.arenas.<i>.pdirty --> stats.arenas.<i>.p{dirty,muzzy}
    - stats.arenas.<i>.{npurge,nmadvise,purged} -->
      stats.arenas.<i>.{dirty,muzzy}_{npurge,nmadvise,purged}

    This resolves #521.
* Convert extents_t's npages field to use C11-style atomics (David Goldblatt, 2017-03-09; 1 file, -2/+5)
    In the process, we can do some strength reduction, changing the
    fetch-adds and fetch-subs to be simple loads followed by stores, since
    the modifications all occur while holding the mutex.
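The strength reduction this commit describes can be sketched with C11 atomics. The names below are illustrative, not jemalloc's actual API: because every modification happens while the mutex is held, the read-modify-write `fetch_add`/`fetch_sub` can be replaced by a plain relaxed load followed by a store, while lock-free readers still get a tear-free value.

```c
/* Sketch of replacing fetch-add/fetch-sub with load+store when all
 * writers serialize on a mutex. Names are illustrative only. */
#include <stdatomic.h>
#include <stddef.h>

typedef struct {
    /* Writers hold the extents mutex; readers may load without it. */
    atomic_size_t npages;
} extents_stats_t;

/* Caller must hold the extents mutex. */
static void extents_npages_add(extents_stats_t *st, size_t n) {
    size_t cur = atomic_load_explicit(&st->npages, memory_order_relaxed);
    atomic_store_explicit(&st->npages, cur + n, memory_order_relaxed);
}

/* Caller must hold the extents mutex. */
static void extents_npages_sub(extents_stats_t *st, size_t n) {
    size_t cur = atomic_load_explicit(&st->npages, memory_order_relaxed);
    atomic_store_explicit(&st->npages, cur - n, memory_order_relaxed);
}

/* Safe without the mutex: a single atomic load cannot tear. */
static size_t extents_npages_get(extents_stats_t *st) {
    return atomic_load_explicit(&st->npages, memory_order_relaxed);
}
```

The design point: an atomic fetch-add is a full read-modify-write (often a locked instruction), which is unnecessary when the mutex already makes writers mutually exclusive; only the individual load and store need to be atomic.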
* Perform delayed coalescing prior to purging. (Jason Evans, 2017-03-07; 1 file, -2/+5)
    Rather than purging uncoalesced extents, perform just enough
    incremental coalescing to purge only fully coalesced extents. In the
    absence of cached extent reuse, the immediate versus delayed
    incremental purging algorithms result in the same purge order.

    This resolves #655.
* Disable coalescing of cached extents. (Jason Evans, 2017-02-17; 1 file, -0/+3)
    Extent splitting and coalescing is a major component of large
    allocation overhead, and disabling coalescing of cached extents
    provides a simple and effective hysteresis mechanism. Once two-phase
    purging is implemented, it will probably make sense to leave
    coalescing disabled for the first phase, but coalesce during the
    second phase.
* Disentangle arena and extent locking. (Jason Evans, 2017-02-02; 1 file, -11/+47)
    Refactor arena and extent locking protocols such that arena and extent
    locks are never held when calling into the extent_*_wrapper() API.
    This requires extra care during purging since the arena lock no longer
    protects the inner purging logic. It also requires extra care to
    protect extents from being merged with adjacent extents.

    Convert extent_t's 'active' flag to an enumerated 'state', so that
    retained extents are explicitly marked as such, rather than depending
    on ring linkage state.

    Refactor the extent collections (and their synchronization) for cached
    and retained extents into extents_t. Incorporate LRU functionality to
    support purging. Incorporate page count accounting, which replaces
    arena->ndirty and arena->stats.retained.

    Assert that no core locks are held when entering any internal
    [de]allocation functions. This is in addition to existing assertions
    that no locks are held when entering external [de]allocation
    functions.

    Audit and document synchronization protocols for all arena_t fields.

    This fixes a potential deadlock due to recursive allocation during
    gdump, in a similar fashion to
    b49c649bc18fff4bd10a1c8adbaf1f25f6453cb6 (Fix lock order reversal
    during gdump.), but with a necessarily much broader code impact.
* Break up headers into constituent parts (David Goldblatt, 2017-01-12; 1 file, -0/+84)
    This is part of a broader change to make header files better represent
    the dependencies between one another (see
    https://github.com/jemalloc/jemalloc/issues/533). It breaks up
    component headers into smaller parts that can be made to have a
    simpler dependency graph. For the autogenerated headers (smoothstep.h
    and size_classes.h), no splitting was necessary, so I didn't add
    support to emit multiple headers.