Commit message | Author | Date | Files | Lines
...
* Fix a race in rtree_szind_slab_update() for RTREE_LEAF_COMPACT. | Jason Evans | 2017-03-27 | 2 | -13/+53
* Remove BITMAP_USE_TREE. | Jason Evans | 2017-03-27 | 5 | -307/+0
    Remove tree-structured bitmap support, in order to reduce complexity and
    ease maintenance. No bitmaps larger than 512 bits have been necessary
    since before 4.0.0, and there is no current plan that would increase
    maximum bitmap size. Although tree-structured bitmaps were used on 32-bit
    platforms prior to this change, the overall benefits were questionable
    (higher metadata overhead, higher bitmap modification cost, marginally
    lower search cost).
* Fix bitmap_ffu() to work with 3+ levels. | Jason Evans | 2017-03-27 | 2 | -41/+56
* Pack various extent_t fields into a bitfield. | Jason Evans | 2017-03-26 | 2 | -104/+155
    This reduces sizeof(extent_t) from 160 to 136 on x64.
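
    A minimal sketch of the packing idea (the field names and bit widths
    below are assumptions for illustration, not the actual extent_t layout):
    several small fields that previously occupied separate words share one
    64-bit bitfield, shrinking the struct.

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        /* Hypothetical packed header; the real extent_t layout differs. */
        typedef struct {
            uint64_t arena_ind : 12; /* owning arena index */
            uint64_t szind     : 8;  /* size class index */
            uint64_t slab      : 1;  /* small (slab) vs. large extent */
            uint64_t committed : 1;  /* pages committed? */
            uint64_t zeroed    : 1;  /* pages known to be zeroed? */
            uint64_t sn        : 41; /* serial number, remaining bits */
        } packed_bits_t;

        /* The same fields, unpacked, for comparison. */
        typedef struct {
            unsigned arena_ind;
            unsigned szind;
            bool slab, committed, zeroed;
            uint64_t sn;
        } unpacked_bits_t;

        int main(void) {
            printf("packed: %zu bytes, unpacked: %zu bytes\n",
                sizeof(packed_bits_t), sizeof(unpacked_bits_t));
            return 0;
        }
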
* Store arena index rather than (arena_t *) in extent_t. | Jason Evans | 2017-03-26 | 3 | -5/+5
* Fix BITMAP_USE_TREE version of bitmap_ffu(). | Jason Evans | 2017-03-26 | 2 | -5/+48
    This fixes an extent searching regression on 32-bit systems, caused by
    the initial bitmap_ffu() implementation in
    c8021d01f6efe14dc1bd200021a815638063cb5f (Implement bitmap_ffu(), which
    finds the first unset bit.), as first used in
    5d33233a5e6601902df7cddd8cc8aa0b135c77b2 (Use a bitmap in extents_t to
    speed up search.).
* Force inline ifree to avoid function call costs on fast path. | Qi Wang | 2017-03-25 | 1 | -2/+2
    Without ALWAYS_INLINE, sometimes ifree() gets compiled into its own
    function, which adds overhead on the fast path.
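
    A sketch of the technique, assuming a GCC/Clang-style attribute; the
    ifree_sketch() helper below is a stand-in, not jemalloc's ifree():

        #include <stdlib.h>

        /* Force inlining so the fast path does not pay a call/return. */
        #if defined(__GNUC__) || defined(__clang__)
        #define ALWAYS_INLINE __attribute__((always_inline)) static inline
        #else
        #define ALWAYS_INLINE static inline
        #endif

        /* Hypothetical stand-in for an allocator's deallocation fast path. */
        ALWAYS_INLINE void
        ifree_sketch(void *ptr) {
            if (ptr == NULL) {
                return;  /* fast path: nothing to do */
            }
            free(ptr);   /* slow-path stand-in */
        }

        int main(void) {
            ifree_sketch(malloc(16));
            return 0;
        }
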
* Use a bitmap in extents_t to speed up search. | Jason Evans | 2017-03-25 | 3 | -12/+44
    Rather than iteratively checking all sufficiently large heaps during
    search, maintain and use a bitmap in order to skip empty heaps.
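
    A minimal sketch of the search idea (the names and the single-word size
    are assumptions, not the extents_t code): one bit per size-class heap
    marks whether that heap is non-empty, so finding the first usable heap
    becomes a bit scan instead of a walk over every heap.

        #include <stdint.h>
        #include <stdio.h>

        #define NHEAPS 64 /* assumed heap count; fits in one word here */

        /* One bit per heap: set means the heap is non-empty. */
        static uint64_t heap_bitmap;

        static void heap_mark_nonempty(unsigned i) {
            heap_bitmap |= UINT64_C(1) << i;
        }

        static void heap_mark_empty(unsigned i) {
            heap_bitmap &= ~(UINT64_C(1) << i);
        }

        /* First non-empty heap with index >= min, or NHEAPS if none. */
        static unsigned heap_search(unsigned min) {
            uint64_t masked = heap_bitmap & ~((UINT64_C(1) << min) - 1);
            return (masked == 0) ? NHEAPS : (unsigned)__builtin_ctzll(masked);
        }

        int main(void) {
            heap_mark_nonempty(3);
            heap_mark_nonempty(17);
            printf("%u\n", heap_search(5)); /* 17 */
            heap_mark_empty(17);
            printf("%u\n", heap_search(5)); /* 64: none found */
            return 0;
        }
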
* Implement BITMAP_GROUPS(). | Jason Evans | 2017-03-25 | 1 | -0/+6
* Implement bitmap_ffu(), which finds the first unset bit. | Jason Evans | 2017-03-25 | 6 | -25/+136
* Use first fit layout policy instead of best fit. | Jason Evans | 2017-03-25 | 1 | -12/+42
    For extents which do not delay coalescing, use first fit layout policy
    rather than first-best fit layout policy. This packs extents toward older
    virtual memory mappings, but at the cost of higher search overhead in the
    common case. This resolves #711.
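
    An illustrative comparison of the two policies over a toy extent array
    (not the extents data structures): best fit picks the smallest extent
    that satisfies the request, while first fit picks the lowest-addressed
    (oldest) extent among all that are large enough.

        #include <stddef.h>
        #include <stdio.h>

        typedef struct {
            size_t addr; /* stand-in for the extent's base address */
            size_t size;
        } extent_sketch_t;

        /* Best fit: smallest extent with size >= request. */
        static const extent_sketch_t *
        best_fit(const extent_sketch_t *v, size_t n, size_t size) {
            const extent_sketch_t *best = NULL;
            for (size_t i = 0; i < n; i++) {
                if (v[i].size >= size &&
                    (best == NULL || v[i].size < best->size)) {
                    best = &v[i];
                }
            }
            return best;
        }

        /* First fit: lowest-addressed extent with size >= request; packs
         * allocations toward older mappings at the cost of scanning all
         * candidates. */
        static const extent_sketch_t *
        first_fit(const extent_sketch_t *v, size_t n, size_t size) {
            const extent_sketch_t *first = NULL;
            for (size_t i = 0; i < n; i++) {
                if (v[i].size >= size &&
                    (first == NULL || v[i].addr < first->addr)) {
                    first = &v[i];
                }
            }
            return first;
        }

        int main(void) {
            extent_sketch_t v[] = {
                {0x7000, 4096}, {0x1000, 65536}, {0x3000, 8192},
            };
            size_t n = sizeof(v) / sizeof(v[0]);
            printf("best fit:  0x%zx\n", best_fit(v, n, 8192)->addr);
            printf("first fit: 0x%zx\n", first_fit(v, n, 8192)->addr);
            return 0;
        }
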
* Added documentation for mutex profiling related mallctls. | Qi Wang | 2017-03-23 | 1 | -0/+206
* Profile per arena base mutex, instead of just a0. | Qi Wang | 2017-03-23 | 3 | -6/+7
* Refactor mutex profiling code with x-macros. | Qi Wang | 2017-03-23 | 7 | -232/+225
* Switch to nstime_t for the time related fields in mutex profiling. | Qi Wang | 2017-03-23 | 5 | -20/+24
* Added custom mutex spin. | Qi Wang | 2017-03-23 | 3 | -17/+27
    A fixed maximum spin count is used; benchmark results show that it
    resolves almost all contention cases. Because the benchmark used was
    rather intense, the upper bound could be a little high, but it should
    still offer a good tradeoff between spinning and blocking.
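
    A sketch of the spin-then-block pattern described above (the spin limit
    and helper names here are assumptions, not the jemalloc values):

        #include <pthread.h>

        #define MAX_SPIN 256 /* assumed fixed upper bound on spin iterations */

        static void cpu_spinwait(void) {
        #if defined(__x86_64__) || defined(__i386__)
            __asm__ volatile ("pause");
        #endif
        }

        /* Spin with trylock for a bounded number of iterations, then give
         * up and block on the mutex. */
        static void mutex_lock_spin(pthread_mutex_t *m) {
            for (int i = 0; i < MAX_SPIN; i++) {
                if (pthread_mutex_trylock(m) == 0) {
                    return;
                }
                cpu_spinwait();
            }
            pthread_mutex_lock(m); /* fall back to blocking */
        }

        int main(void) {
            pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
            mutex_lock_spin(&m);
            pthread_mutex_unlock(&m);
            return 0;
        }
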
* Added extents_dirty / _muzzy mutexes, as well as decay_dirty / _muzzy. | Qi Wang | 2017-03-23 | 4 | -41/+61
* Added "stats.mutexes.reset" mallctl to reset all mutex stats. | Qi Wang | 2017-03-23 | 12 | -189/+250
    Also switched from the term "lock" to "mutex".
* Added JSON output for lock stats. | Qi Wang | 2017-03-23 | 4 | -44/+124
    Also added option 'x' to malloc_stats() to bypass lock section.
* Added lock profiling and output for global locks (ctl, prof and base). | Qi Wang | 2017-03-23 | 9 | -78/+174
* Add arena lock stats output. | Qi Wang | 2017-03-23 | 9 | -51/+269
* Output bin lock profiling results to malloc_stats. | Qi Wang | 2017-03-23 | 8 | -34/+120
    Two counters are included for the small bins: lock contention rate, and
    max lock waiting time.
* First stage of mutex profiling. | Qi Wang | 2017-03-23 | 5 | -32/+149
    Switched to trylock; counters are updated based on the lock state.
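
    A sketch of the trylock-based counting described above (the counter
    names and the timing source are assumptions): an uncontended acquire
    takes the cheap trylock path, and only a failed trylock pays for timing
    and counter updates.

        #include <pthread.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <time.h>

        typedef struct {
            pthread_mutex_t lock;
            uint64_t n_lock_ops;  /* total acquisitions */
            uint64_t n_waits;     /* acquisitions that had to block */
            uint64_t max_wait_ns; /* longest observed wait */
        } prof_mutex_t;

        static uint64_t now_ns(void) {
            struct timespec ts;
            clock_gettime(CLOCK_MONOTONIC, &ts);
            return (uint64_t)ts.tv_sec * 1000000000u + (uint64_t)ts.tv_nsec;
        }

        static void prof_mutex_lock(prof_mutex_t *m) {
            if (pthread_mutex_trylock(&m->lock) != 0) {
                /* Contended: time the blocking wait. */
                uint64_t begin = now_ns();
                pthread_mutex_lock(&m->lock);
                uint64_t wait = now_ns() - begin;
                m->n_waits++; /* safe to update: the lock is now held */
                if (wait > m->max_wait_ns) {
                    m->max_wait_ns = wait;
                }
            }
            m->n_lock_ops++;
        }

        int main(void) {
            prof_mutex_t m = { PTHREAD_MUTEX_INITIALIZER, 0, 0, 0 };
            prof_mutex_lock(&m);
            pthread_mutex_unlock(&m.lock);
            printf("ops=%llu waits=%llu max_wait_ns=%llu\n",
                (unsigned long long)m.n_lock_ops,
                (unsigned long long)m.n_waits,
                (unsigned long long)m.max_wait_ns);
            return 0;
        }
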
* Further specialize arena_[s]dalloc() tcache fast path. | Jason Evans | 2017-03-23 | 3 | -45/+129
    Use tsd_rtree_ctx() rather than tsdn_rtree_ctx() when tcache is non-NULL,
    in order to avoid an extra branch (and potentially extra stack space) in
    the fast path.
* Push down iealloc() calls. | Jason Evans | 2017-03-23 | 9 | -227/+176
    Call iealloc() as deep into call chains as possible without causing
    redundant calls.
* Remove extent dereferences from the deallocation fast paths. | Jason Evans | 2017-03-23 | 8 | -87/+113
* Remove extent arg from isalloc() and arena_salloc(). | Jason Evans | 2017-03-23 | 6 | -51/+29
* Implement compact rtree leaf element representation. | Jason Evans | 2017-03-23 | 5 | -7/+163
    If a single virtual address pointer has enough unused bits to pack
    {szind_t, extent_t *, bool, bool}, use a single pointer-sized field in
    each rtree leaf element, rather than using three separate fields. This
    has little impact on access speed (fewer loads/stores, but more bit
    twiddling), except that the denser representation increases TLB
    effectiveness.
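
    A sketch of the packing idea (the bit layout is an assumption, and only
    one of the two flags is shown; this is not the actual rtree encoding).
    It assumes a 64-bit build with 48-bit virtual addresses, so the extent
    pointer's top 16 bits can carry the size class index and its low bit can
    carry the slab flag.

        #include <assert.h>
        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        typedef struct extent_s extent_t;

        static uintptr_t leaf_pack(extent_t *extent, unsigned szind,
            bool slab) {
            uintptr_t e = (uintptr_t)extent;
            /* The address must leave the borrowed bits unused. */
            assert((e & 1) == 0 && (e >> 48) == 0);
            return ((uintptr_t)szind << 48) | e | (slab ? 1 : 0);
        }

        static extent_t *leaf_extent(uintptr_t bits) {
            /* Clear the top 16 bits and the low slab bit. */
            return (extent_t *)(((bits << 16) >> 16) & ~(uintptr_t)1);
        }

        static unsigned leaf_szind(uintptr_t bits) {
            return (unsigned)(bits >> 48);
        }

        static bool leaf_slab(uintptr_t bits) {
            return (bits & 1) != 0;
        }

        int main(void) {
            extent_t *extent = (extent_t *)0x7f1234567000;
            uintptr_t bits = leaf_pack(extent, 42, true);
            printf("extent=%p szind=%u slab=%d\n",
                (void *)leaf_extent(bits), leaf_szind(bits),
                (int)leaf_slab(bits));
            return 0;
        }
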
* Embed root node into rtree_t. | Jason Evans | 2017-03-23 | 5 | -140/+86
    This avoids one atomic operation per tree access.
* Incorporate szind/slab into rtree leaves. | Jason Evans | 2017-03-23 | 13 | -224/+469
    Expand and restructure the rtree API such that all common operations can
    be achieved with minimal work, regardless of whether the rtree leaf
    fields are independent versus packed into a single atomic pointer.
* Split rtree_elm_t into rtree_{node,leaf}_elm_t. | Jason Evans | 2017-03-23 | 9 | -257/+458
    This allows leaf elements to differ in size from internal node elements.
    In principle it would be more correct to use a different type for each
    level of the tree, but due to implementation details related to atomic
    operations, we use casts anyway, thus counteracting the value of
    additional type correctness. Furthermore, such a scheme would require
    function code generation (via cpp macros), as well as either unwieldy
    type names for leaves or type aliases, e.g.

        typedef struct rtree_elm_d2_s rtree_leaf_elm_t;

    This alternate strategy would be more correct, and with less code
    duplication, but probably not worth the complexity.
* Remove binind field from arena_slab_data_t. | Jason Evans | 2017-03-23 | 3 | -22/+8
    binind is now redundant; the containing extent_t's szind field always
    provides the same value.
* Convert extent_t's usize to szind. | Jason Evans | 2017-03-23 | 13 | -238/+233
    Rather than storing usize only for large (and prof-promoted)
    allocations, store the size class index for allocations that reside
    within the extent, such that the size class index is valid for all
    extents that contain extant allocations, and invalid otherwise (mainly
    to make debugging simpler).
* Clamp LG_VADDR for 32-bit builds on x64. | Jason Evans | 2017-03-23 | 1 | -0/+3
* Do not re-bind iarena when migrating between arenas. | Qi Wang | 2017-03-21 | 1 | -1/+0
* Refactor tcaches flush/destroy to reduce lock duration. | Jason Evans | 2017-03-16 | 1 | -6/+13
    Drop tcaches_mtx before calling tcache_destroy().
* Propagate madvise() success/failure from pages_purge_lazy(). | Jason Evans | 2017-03-16 | 1 | -3/+3
* Implement two-phase decay-based purging. | Jason Evans | 2017-03-15 | 23 | -470/+1058
    Split decay-based purging into two phases, the first of which uses lazy
    purging to convert dirty pages to "muzzy", and the second of which uses
    forced purging, decommit, or unmapping to convert pages to clean or
    destroy them altogether.

    Not all operating systems support lazy purging, yet the application may
    provide extent hooks that implement lazy purging, so care must be taken
    to dynamically omit the first phase when necessary.

    The mallctl interfaces change as follows:
    - opt.decay_time --> opt.{dirty,muzzy}_decay_time
    - arena.<i>.decay_time --> arena.<i>.{dirty,muzzy}_decay_time
    - arenas.decay_time --> arenas.{dirty,muzzy}_decay_time
    - stats.arenas.<i>.pdirty --> stats.arenas.<i>.p{dirty,muzzy}
    - stats.arenas.<i>.{npurge,nmadvise,purged} -->
      stats.arenas.<i>.{dirty,muzzy}_{npurge,nmadvise,purged}

    This resolves #521.
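
    A usage sketch for the renamed controls listed above, using the public
    mallctl() interface (minimal error handling; the value type is assumed
    to be ssize_t seconds, as with the previous opt.decay_time):

        #include <stdio.h>
        #include <jemalloc/jemalloc.h>

        int main(void) {
            /* Read the defaults for both decay phases. */
            ssize_t dirty, muzzy;
            size_t sz = sizeof(ssize_t);
            if (mallctl("arenas.dirty_decay_time", &dirty, &sz, NULL, 0) == 0 &&
                mallctl("arenas.muzzy_decay_time", &muzzy, &sz, NULL, 0) == 0) {
                printf("dirty decay: %zd s, muzzy decay: %zd s\n",
                    dirty, muzzy);
            }

            /* Shorten the dirty-page decay for arena 0 (example value). */
            ssize_t new_dirty = 1;
            mallctl("arena.0.dirty_decay_time", NULL, NULL, &new_dirty,
                sizeof(new_dirty));
            return 0;
        }
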
* Move arena_t's purging field into arena_decay_t. | Jason Evans | 2017-03-15 | 2 | -12/+9
* Refactor decay-related function parametrization. | Jason Evans | 2017-03-15 | 2 | -93/+103
    Refactor most of the decay-related functions to take as parameters the
    decay_t and associated extents_t structures to operate on. This prepares
    for supporting both lazy and forced purging on different decay
    schedules.
* Convert remaining arena_stats_t fields to atomics. | David Goldblatt | 2017-03-14 | 4 | -57/+93
    These were all size_ts, for which we have atomics support on all
    platforms, so the conversion is straightforward. Left non-atomic is
    curlextents, which AFAICT is not used atomically anywhere.
* Switch atomic uint64_ts in arena_stats_t to C11 atomics. | David Goldblatt | 2017-03-14 | 3 | -56/+121
    I expect this to be the trickiest conversion we will see, since we want
    atomics on 64-bit platforms, but are also always able to piggyback on
    some sort of external synchronization on non-64-bit platforms.
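
    A sketch of the split the commit message describes (the macro, type, and
    field names are assumptions, not the jemalloc atomics layer): use a C11
    atomic counter where 64-bit atomics are usable, and otherwise fall back
    to a plain field that the caller protects with external synchronization.

        #include <stdint.h>

        #if defined(__SIZEOF_POINTER__) && __SIZEOF_POINTER__ == 8
        #define HAVE_64BIT_ATOMICS 1
        #include <stdatomic.h>
        #else
        #define HAVE_64BIT_ATOMICS 0
        #endif

        typedef struct {
        #if HAVE_64BIT_ATOMICS
            _Atomic uint64_t nmalloc; /* updated lock-free */
        #else
            uint64_t nmalloc;         /* protected by an external mutex */
        #endif
        } stats_sketch_t;

        /* On the fallback path the caller must hold the stats mutex. */
        static inline void stats_nmalloc_add(stats_sketch_t *stats,
            uint64_t n) {
        #if HAVE_64BIT_ATOMICS
            atomic_fetch_add_explicit(&stats->nmalloc, n,
                memory_order_relaxed);
        #else
            stats->nmalloc += n;
        #endif
        }

        int main(void) {
            stats_sketch_t stats = {0};
            stats_nmalloc_add(&stats, 1);
            return 0;
        }
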
* Prefer pages_purge_forced() over memset(). | Jason Evans | 2017-03-14 | 2 | -16/+30
    This has the dual advantages of allowing for sparsely used large
    allocations, and relying on the kernel to supply zeroed pages, which
    tends to be very fast on modern systems.
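
    A sketch contrasting the two ways of re-zeroing a range on Linux
    (illustrative only; the real pages_purge_forced() covers more platforms
    and error cases): madvise(MADV_DONTNEED) drops the pages so the kernel
    supplies fresh zeroed pages on the next touch, instead of dirtying every
    page with memset().

        #include <stdio.h>
        #include <sys/mman.h>

        int main(void) {
            size_t len = 1 << 20;
            char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (p == MAP_FAILED) {
                return 1;
            }
            p[0] = 1; /* dirty one page */

            /* memset(p, 0, len) would touch and keep resident every page.
             * Forced purge instead lets the kernel hand back zeroed pages
             * on the next access (Linux MADV_DONTNEED semantics). */
            if (madvise(p, len, MADV_DONTNEED) == 0) {
                printf("p[0] after purge: %d\n", p[0]); /* prints 0 */
            }
            munmap(p, len);
            return 0;
        }
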
* Add alignment/size assertions to pages_*(). | Jason Evans | 2017-03-14 | 1 | -0/+15
    These sanity checks prevent what otherwise might result in failed system
    calls and unintended fallback execution paths.
* Fix pages_purge_forced() to discard pages on non-Linux systems. | Jason Evans | 2017-03-14 | 4 | -5/+21
    madvise(..., MADV_DONTNEED) only causes demand-zeroing on Linux, so fall
    back to overlaying a new mapping.
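
    A sketch of the fallback described above (illustrative, not the jemalloc
    code): where MADV_DONTNEED does not demand-zero, overlaying the range
    with a fresh anonymous MAP_FIXED mapping discards the old pages.

        #include <sys/mman.h>

        /* Replace [addr, addr+size) with a fresh anonymous mapping,
         * discarding the old contents. Assumes addr and size are
         * page-aligned and the range was mapped PROT_READ|PROT_WRITE. */
        static int purge_forced_fallback(void *addr, size_t size) {
            void *p = mmap(addr, size, PROT_READ | PROT_WRITE,
                MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
            return (p == addr) ? 0 : -1;
        }

        int main(void) {
            size_t size = 1 << 16;
            void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (p == MAP_FAILED) {
                return 1;
            }
            int ret = purge_forced_fallback(p, size);
            munmap(p, size);
            return (ret == 0) ? 0 : 1;
        }
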
* Convert rtree code to use C11 atomics. | David Goldblatt | 2017-03-13 | 3 | -39/+62
    In the process, I changed the implementation of rtree_elm_acquire so
    that it won't even try to CAS if its initial read (getting the extent +
    lock bit) indicates that the CAS is doomed to fail. This can
    significantly improve performance under contention.
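
    A sketch of that optimization (the packed-word layout is an assumption):
    load the element first and only attempt the CAS when the lock bit is
    observed clear, so contended acquirers do not issue CAS operations that
    are doomed to fail.

        #include <stdatomic.h>
        #include <stdbool.h>
        #include <stdint.h>

        #define LOCK_BIT ((uintptr_t)1)

        /* Try to set the low "lock" bit of a packed pointer word. */
        static bool elm_try_acquire(_Atomic uintptr_t *elm) {
            uintptr_t cur = atomic_load_explicit(elm, memory_order_relaxed);
            if (cur & LOCK_BIT) {
                return false; /* already locked; skip the doomed CAS */
            }
            return atomic_compare_exchange_weak_explicit(elm, &cur,
                cur | LOCK_BIT, memory_order_acquire, memory_order_relaxed);
        }

        static void elm_release(_Atomic uintptr_t *elm) {
            uintptr_t cur = atomic_load_explicit(elm, memory_order_relaxed);
            atomic_store_explicit(elm, cur & ~LOCK_BIT, memory_order_release);
        }

        int main(void) {
            _Atomic uintptr_t elm = 0x1000; /* hypothetical packed bits */
            if (elm_try_acquire(&elm)) {
                elm_release(&elm);
            }
            return 0;
        }
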
* Convert arena_t's purging field to non-atomic bool. | Jason Evans | 2017-03-10 | 2 | -12/+12
    The decay mutex already protects all accesses.
* Fix ATOMIC_{ACQUIRE,RELEASE,ACQ_REL} definitions. | Jason Evans | 2017-03-09 | 1 | -3/+3
* Add documentation for percpu_arena in jemalloc.xml.in. | Qi Wang | 2017-03-09 | 1 | -0/+18
* Implement per-CPU arena. | Qi Wang | 2017-03-09 | 16 | -118/+414
    The new feature, opt.percpu_arena, determines thread-arena association
    dynamically based on CPU id. Three modes are supported: "percpu",
    "phycpu" and disabled.

    "percpu" uses the current core id (with help from sched_getcpu())
    directly as the arena index, while "phycpu" will assign threads on the
    same physical CPU to the same arena. In other words, "percpu" means # of
    arenas == # of CPUs, while "phycpu" has # of arenas == 1/2 * (# of
    CPUs). Note that no runtime check on whether hyperthreading is enabled
    is added yet.

    When enabled, threads will be migrated between arenas when a CPU change
    is detected. In the current design, to reduce overhead from reading the
    CPU id, each arena tracks the thread that accessed it most recently.
    When a new thread comes in, we will read the CPU id and update the arena
    if necessary.
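
    A sketch of the index calculation described above (the "phycpu" pairing
    assumes the common Linux numbering in which hyperthread siblings are cpu
    and cpu + ncpus/2; the real implementation does not hard-code this):

        #define _GNU_SOURCE
        #include <sched.h>
        #include <stdio.h>
        #include <unistd.h>

        /* "percpu": one arena per CPU. */
        static unsigned arena_ind_percpu(void) {
            int cpu = sched_getcpu();
            return (cpu < 0) ? 0 : (unsigned)cpu;
        }

        /* "phycpu": hyperthread siblings share an arena, assuming siblings
         * are numbered cpu and cpu + ncpus/2 (an assumption, not a
         * guarantee). */
        static unsigned arena_ind_phycpu(void) {
            long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
            int cpu = sched_getcpu();
            if (cpu < 0 || ncpus < 2) {
                return 0;
            }
            return (unsigned)(cpu % (ncpus / 2));
        }

        int main(void) {
            printf("percpu arena index: %u\n", arena_ind_percpu());
            printf("phycpu arena index: %u\n", arena_ind_phycpu());
            return 0;
        }
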