path: root/include/jemalloc/internal/extent_inlines.h
Commit log (newest first); each entry shows the subject, author, date, and diffstat (files changed, lines removed/added).
* Fix an extent coalesce bug. (Qi Wang, 2017-11-16; 1 file, -0/+5)

  When coalescing, we should take both extents off the LRU list; otherwise decay can grab the existing outer extent through extents_evict.
* Use tsd offset_state instead of atomic (Dave Watson, 2017-11-14; 1 file, -3/+10)

  While working on #852, I noticed the prng state is atomic. This is the only atomic use of prng in all of jemalloc. Instead, use a thread-local prng state where possible, to avoid unnecessary cache line contention.
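The contrast this commit describes can be sketched as follows. This is an illustrative comparison, not jemalloc's actual prng code: the function names are invented, and the LCG constants are Knuth's MMIX ones, chosen only for the sketch. The point is that the atomic variant pays for a CAS loop and shared cache-line traffic on every call, while the thread-local variant is a plain load and store.

```c
#include <stdint.h>
#include <stdatomic.h>
#include <assert.h>

/* One LCG step (illustrative constants, not jemalloc's). */
static uint64_t prng_step(uint64_t state) {
    return state * 6364136223846793005ULL + 1442695040888963407ULL;
}

/* Shared atomic state: every caller contends on one cache line
 * and may loop if another thread advanced the state first. */
static _Atomic uint64_t shared_state = 42;
uint64_t prng_next_atomic(void) {
    uint64_t old = atomic_load_explicit(&shared_state, memory_order_relaxed);
    uint64_t next;
    do {
        next = prng_step(old);
    } while (!atomic_compare_exchange_weak_explicit(&shared_state, &old, next,
        memory_order_relaxed, memory_order_relaxed));
    return next;
}

/* Thread-local state: no contention and no CAS loop. */
static _Thread_local uint64_t tls_state = 42;
uint64_t prng_next_tls(void) {
    tls_state = prng_step(tls_state);
    return tls_state;
}
```

Both variants yield the same sequence from the same seed; the thread-local one simply stops sharing the state across threads.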
* Add a "dumpable" bit to the extent state. (David Goldblatt, 2017-10-16; 1 file, -1/+15)

  Currently, this is unused (i.e. all extents are always marked dumpable). In the future, we'll begin using this functionality.
* Header refactoring: Pull size helpers out of jemalloc module. (David Goldblatt, 2017-05-31; 1 file, -1/+2)
* Header refactoring: unify and de-catchall mutex_pool. (David Goldblatt, 2017-05-31; 1 file, -1/+1)
* Header refactoring: unify and de-catchall mutex module (David Goldblatt, 2017-05-24; 1 file, -0/+1)
* Protect the rtree/extent interactions with a mutex pool. (David Goldblatt, 2017-05-19; 1 file, -0/+27)

  Instead of embedding a lock bit in rtree leaf elements, we associate extents with a small set of mutexes. This gets us two things:

  - We can use the system mutexes. This (hypothetically) protects us from priority inversion, and lets us stop doing a backoff/sleep loop, instead opting for precise wakeups from the mutex.
  - Cuts down on the number of mutex acquisitions we have to do (from four in the worst case to two).

  We end up simplifying most of the rtree code (which no longer has to deal with locking or concurrency at all), at the cost of additional complexity in the extent code: since the mutex protecting the rtree leaf elements is determined by reading the extent out of those elements, the initial read is racy, so we may acquire an out-of-date mutex. We re-check the extent in the leaf after acquiring the mutex to protect us from this race.
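The lock-then-recheck dance in the last paragraph can be sketched roughly as below. All names (`pool_mutex_for`, `leaf_extent_lock`, the pool size, the address hash) are hypothetical, not jemalloc's actual API; the sketch only shows the retry pattern: read the extent from the leaf, lock the mutex that extent hashes to, then re-read the leaf to confirm nothing changed in between.

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

#define POOL_SIZE 8

typedef struct extent_s { int id; } extent_t;
typedef struct { _Atomic(extent_t *) extent; } rtree_leaf_t;

static pthread_mutex_t pool[POOL_SIZE];

static void pool_init(void) {
    for (size_t i = 0; i < POOL_SIZE; i++)
        pthread_mutex_init(&pool[i], NULL);
}

/* The mutex guarding an extent is chosen by hashing its address. */
static pthread_mutex_t *pool_mutex_for(extent_t *e) {
    return &pool[((uintptr_t)e >> 4) % POOL_SIZE];
}

/* Lock the extent currently installed in a leaf.  The first read is
 * racy: the leaf may be repointed between the load and the lock, so
 * re-check under the mutex and retry on a mismatch. */
static extent_t *leaf_extent_lock(rtree_leaf_t *leaf,
    pthread_mutex_t **m_out) {
    for (;;) {
        extent_t *e = atomic_load(&leaf->extent);
        if (e == NULL)
            return NULL;
        pthread_mutex_t *m = pool_mutex_for(e);
        pthread_mutex_lock(m);
        if (atomic_load(&leaf->extent) == e) {  /* still current */
            *m_out = m;
            return e;
        }
        pthread_mutex_unlock(m);  /* raced with an update; retry */
    }
}
```

On return, the caller holds the mutex that covers the extent it was handed, which is exactly the invariant the rtree code used to maintain with per-leaf lock bits.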
* Refactor (MALLOCX_ARENA_MAX + 1) to be MALLOCX_ARENA_LIMIT. (Jason Evans, 2017-05-14; 1 file, -2/+2)

  This resolves #673.
* Header refactoring: pages.h - unify and remove from catchall. (David Goldblatt, 2017-04-25; 1 file, -0/+1)
* Header refactoring: prng module - remove from the catchall and unify. (David Goldblatt, 2017-04-24; 1 file, -0/+1)
* Get rid of most of the various inline macros. (David Goldblatt, 2017-04-24; 1 file, -109/+51)
* Prefer old/low extent_t structures during reuse. (Jason Evans, 2017-04-17; 1 file, -0/+31)

  Rather than using a LIFO queue to track available extent_t structures, use a red-black tree, and always choose the oldest/lowest available during reuse.
* Track extent structure serial number (esn) in extent_t. (Jason Evans, 2017-04-17; 1 file, -2/+42)

  This enables stable sorting of extent_t structures.
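A minimal sketch of the resulting "oldest/lowest first" ordering, assuming a comparator keyed on (serial number, address); the struct fields and function name here are hypothetical, for illustration only. Extents created earlier carry smaller serial numbers and sort first; ties are broken by base address, so reuse drifts toward old, low memory.

```c
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

/* Illustrative extent record: serial number plus base address. */
typedef struct {
    size_t esn;   /* extent serial number: creation order */
    void  *addr;  /* base address */
} extent_rec_t;

/* Total order: older (smaller esn) first, then lower address. */
static int extent_esn_addr_cmp(const extent_rec_t *a,
    const extent_rec_t *b) {
    if (a->esn != b->esn)
        return (a->esn < b->esn) ? -1 : 1;
    if (a->addr != b->addr)
        return ((uintptr_t)a->addr < (uintptr_t)b->addr) ? -1 : 1;
    return 0;
}
```

Plugged into a red-black tree, a comparator of this shape makes "take the tree minimum" mean "reuse the oldest, lowest extent", which is the policy the two commits above describe.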
* Header refactoring: break out ql.h dependencies (David Goldblatt, 2017-04-11; 1 file, -0/+2)
* Move arena-tracking atomics in jemalloc.c to C11-style (David Goldblatt, 2017-04-05; 1 file, -1/+1)
* Transition e_prof_tctx in struct extent to C11 atomics (David Goldblatt, 2017-04-04; 1 file, -3/+3)
* Move arena_slab_data_t's nfree into extent_t's e_bits. (Jason Evans, 2017-03-28; 1 file, -1/+31)

  Compact extent_t to 128 bytes on 64-bit systems by moving arena_slab_data_t's nfree into extent_t's e_bits.

  Cacheline-align extent_t structures so that they always cross the minimum number of cacheline boundaries.

  Re-order extent_t fields such that all fields except the slab bitmap (and overlaid heap profiling context pointer) are in the first cacheline.

  This resolves #461.
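The e_bits packing that this commit (and the bitfield commit below) describe can be sketched with shift/mask accessors like these. The field widths, positions, and macro names are illustrative assumptions, not jemalloc's actual layout; what matters is the pattern: several small fields live in one 64-bit word, and each getter/setter isolates its own bits so the others are untouched.

```c
#include <stdint.h>
#include <assert.h>

/* Illustrative layout: bit 0 = slab flag, bits 1..8 = szind,
 * bits 9..18 = nfree.  Not jemalloc's real field placement. */
#define EXTENT_BITS_SLAB_SHIFT   0
#define EXTENT_BITS_SLAB_MASK    (UINT64_C(1) << EXTENT_BITS_SLAB_SHIFT)
#define EXTENT_BITS_SZIND_SHIFT  1
#define EXTENT_BITS_SZIND_MASK   (UINT64_C(0xff) << EXTENT_BITS_SZIND_SHIFT)
#define EXTENT_BITS_NFREE_SHIFT  9
#define EXTENT_BITS_NFREE_MASK   (UINT64_C(0x3ff) << EXTENT_BITS_NFREE_SHIFT)

typedef struct { uint64_t e_bits; } extent_t;

static unsigned extent_nfree_get(const extent_t *e) {
    return (unsigned)((e->e_bits & EXTENT_BITS_NFREE_MASK)
        >> EXTENT_BITS_NFREE_SHIFT);
}

static void extent_nfree_set(extent_t *e, unsigned nfree) {
    /* Clear the old nfree bits, then OR in the new value. */
    e->e_bits = (e->e_bits & ~EXTENT_BITS_NFREE_MASK)
        | ((uint64_t)nfree << EXTENT_BITS_NFREE_SHIFT);
}
```

Folding nfree into an existing word this way is what lets the slab metadata ride along in extent_t without growing the structure.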
* Pack various extent_t fields into a bitfield. (Jason Evans, 2017-03-26; 1 file, -59/+85)

  This reduces sizeof(extent_t) from 160 to 136 on x64.
* Store arena index rather than (arena_t *) in extent_t. (Jason Evans, 2017-03-26; 1 file, -2/+2)
* Incorporate szind/slab into rtree leaves. (Jason Evans, 2017-03-23; 1 file, -12/+10)

  Expand and restructure the rtree API such that all common operations can be achieved with minimal work, regardless of whether the rtree leaf fields are independent versus packed into a single atomic pointer.
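One way such a packed leaf might look; this encoding is an assumption for illustration, not jemalloc's actual one. The extent pointer, size-class index, and slab bit share a single 64-bit word, so a lookup can recover all three from one atomic load. It relies on two properties that hold on common 64-bit platforms: user addresses fit in 48 bits, and extents are at least 2-byte aligned (freeing bit 0).

```c
#include <stdint.h>
#include <stdbool.h>
#include <assert.h>

typedef struct extent_s extent_t;  /* opaque for the sketch */

#define LEAF_ADDR_MASK ((UINT64_C(1) << 48) - 1)

/* High 8 bits: szind; bit 0: slab flag; bits 1..47: extent address. */
static uint64_t leaf_pack(extent_t *e, unsigned szind, bool slab) {
    return ((uint64_t)szind << 56)
        | ((uint64_t)(uintptr_t)e & LEAF_ADDR_MASK)
        | (slab ? 1 : 0);
}

static extent_t *leaf_extent(uint64_t bits) {
    return (extent_t *)(uintptr_t)(bits & (LEAF_ADDR_MASK & ~UINT64_C(1)));
}

static unsigned leaf_szind(uint64_t bits) {
    return (unsigned)(bits >> 56);
}

static bool leaf_slab(uint64_t bits) {
    return (bits & 1) != 0;
}
```

With this shape, the fast path of a deallocation can learn the size class and slab-ness of a pointer without touching the extent itself.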
* Convert extent_t's usize to szind. (Jason Evans, 2017-03-23; 1 file, -42/+49)

  Rather than storing usize only for large (and prof-promoted) allocations, store the size class index for allocations that reside within the extent, such that the size class index is valid for all extents that contain extant allocations, and invalid otherwise (mainly to make debugging simpler).
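The szind-instead-of-usize idea reduces to a small lookup: store a compact index, and recover the usable size from a table when needed. The table values, names, and the invalid-index convention below are illustrative assumptions, not jemalloc's real size classes.

```c
#include <stddef.h>
#include <assert.h>

/* Illustrative size-class table (not jemalloc's actual classes). */
static const size_t sz_index2size_tab[] = {
    8, 16, 32, 48, 64, 80, 96, 112, 128
};
#define NSIZES \
    (sizeof(sz_index2size_tab) / sizeof(sz_index2size_tab[0]))
#define SZIND_INVALID NSIZES  /* extent holds no live allocations */

typedef struct { unsigned e_szind; } extent_t;

/* Valid only while the extent contains extant allocations. */
static size_t extent_usize_get(const extent_t *e) {
    assert(e->e_szind < NSIZES);
    return sz_index2size_tab[e->e_szind];
}
```

An index fits in a handful of bits (and can ride in e_bits, per the packing commits above), whereas a full usize needs a word; the invalid value also gives debugging a cheap "this extent should be empty" check.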
* Perform delayed coalescing prior to purging. (Jason Evans, 2017-03-07; 1 file, -0/+9)

  Rather than purging uncoalesced extents, perform just enough incremental coalescing to purge only fully coalesced extents. In the absence of cached extent reuse, the immediate versus delayed incremental purging algorithms result in the same purge order.

  This resolves #655.
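The "just enough incremental coalescing" idea can be modeled in miniature. This is a toy with hypothetical names, not jemalloc's extent code: free ranges are (start, length) pairs, and before purging a candidate we repeatedly merge it with any adjacent free range until no neighbor merges, so only the fully coalesced result is purged.

```c
#include <stddef.h>
#include <stdbool.h>
#include <assert.h>

typedef struct { size_t start, len; } range_t;

/* Grow *r by absorbing any range in ranges[0..n) adjacent to it,
 * repeating until no neighbor merges.  Merged neighbors are removed
 * by swapping in the last element; returns the new count.  After
 * this, *r is fully coalesced and is the only purge candidate. */
static size_t coalesce(range_t *ranges, size_t n, range_t *r) {
    bool merged = true;
    while (merged) {
        merged = false;
        for (size_t i = 0; i < n; i++) {
            if (ranges[i].start + ranges[i].len == r->start) {
                r->start = ranges[i].start;   /* extend left */
                r->len += ranges[i].len;
            } else if (r->start + r->len == ranges[i].start) {
                r->len += ranges[i].len;      /* extend right */
            } else {
                continue;
            }
            ranges[i] = ranges[--n];          /* drop merged neighbor */
            merged = true;
            break;                            /* rescan from the top */
        }
    }
    return n;
}
```

The incremental flavor matters because purging (e.g. madvise) has per-call overhead: purging one large coalesced run beats purging several small uncoalesced pieces that were about to merge anyway.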
* Disentangle arena and extent locking. (Jason Evans, 2017-02-02; 1 file, -23/+35)

  Refactor arena and extent locking protocols such that arena and extent locks are never held when calling into the extent_*_wrapper() API. This requires extra care during purging, since the arena lock no longer protects the inner purging logic. It also requires extra care to protect extents from being merged with adjacent extents.

  Convert extent_t's 'active' flag to an enumerated 'state', so that retained extents are explicitly marked as such, rather than depending on ring linkage state.

  Refactor the extent collections (and their synchronization) for cached and retained extents into extents_t. Incorporate LRU functionality to support purging. Incorporate page count accounting, which replaces arena->ndirty and arena->stats.retained.

  Assert that no core locks are held when entering any internal [de]allocation functions. This is in addition to existing assertions that no locks are held when entering external [de]allocation functions.

  Audit and document synchronization protocols for all arena_t fields.

  This fixes a potential deadlock due to recursive allocation during gdump, in a similar fashion to b49c649bc18fff4bd10a1c8adbaf1f25f6453cb6 (Fix lock order reversal during gdump.), but with a necessarily much broader code impact.
* Remove extraneous parens around return arguments. (Jason Evans, 2017-01-21; 1 file, -25/+25)

  This resolves #540.
* Update brace style. (Jason Evans, 2017-01-21; 1 file, -72/+39)

  Add braces around single-line blocks, and remove line breaks before function-opening braces.

  This resolves #537.
* Remove leading blank lines from function bodies. (Jason Evans, 2017-01-13; 1 file, -31/+0)

  This resolves #535.
* Break up headers into constituent parts (David Goldblatt, 2017-01-12; 1 file, -0/+343)

  This is part of a broader change to make header files better represent the dependencies between one another (see https://github.com/jemalloc/jemalloc/issues/533). It breaks up component headers into smaller parts that can be made to have a simpler dependency graph.

  For the autogenerated headers (smoothstep.h and size_classes.h), no splitting was necessary, so I didn't add support to emit multiple headers.