path: root/src
Entries: commit message (author, date; files changed, -/+ lines)
* Output 4 counters for bin mutexes instead of just 2. (Qi Wang, 2017-04-19; 1 file changed, -8/+24)
* Support --with-lg-page values larger than system page size. (Jason Evans, 2017-04-19; 3 files changed, -101/+145)
  All mappings continue to be PAGE-aligned, even if the system page size is smaller. This change is primarily intended to provide a mechanism for supporting multiple page sizes with the same binary; smaller page sizes work better in conjunction with jemalloc's design. This resolves #467.
* Revert "Remove BITMAP_USE_TREE." (Jason Evans, 2017-04-19; 1 file changed, -0/+78)
  Some systems use a native 64 KiB page size, which means that the bitmap for the smallest size class can be 8192 bits, not just 512 bits as when the page size is 4 KiB. Linear search in bitmap_{sfu,ffu}() is unacceptably slow for such large bitmaps. This reverts commit 7c00f04ff40a34627e31488d02ff1081c749c7ba.
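  As a rough illustration of why a tree-structured bitmap helps here, the following is a minimal two-level sketch (bitmap2_t and its helpers are invented names, not jemalloc's bitmap.h): a summary word records which 64-bit groups contain a set bit, so a find-first-set touches at most two words instead of scanning thousands of bits linearly.
      #include <stdint.h>

      #define GROUPS 64                       /* 64 groups x 64 bits = 4096 bits */
      typedef struct {
          uint64_t summary;                   /* bit g set => groups[g] != 0 */
          uint64_t groups[GROUPS];
      } bitmap2_t;

      static void bitmap2_set(bitmap2_t *b, unsigned bit) {
          unsigned g = bit / 64;
          b->groups[g] |= (uint64_t)1 << (bit % 64);
          b->summary |= (uint64_t)1 << g;
      }

      /* First set bit, or -1 if empty: two word reads, vs. a scan of all
       * GROUPS words for a flat bitmap. */
      static int bitmap2_ffs(const bitmap2_t *b) {
          if (b->summary == 0) return -1;
          unsigned g = (unsigned)__builtin_ctzll(b->summary);   /* GCC/Clang builtin */
          unsigned i = (unsigned)__builtin_ctzll(b->groups[g]);
          return (int)(g * 64 + i);
      }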
* Header refactoring: unify spin.h and move it out of the catch-all. (David Goldblatt, 2017-04-19; 3 files changed, -1/+4)
* Header refactoring: unify nstime.h and move it out of the catch-all (David Goldblatt, 2017-04-19; 2 files changed, -0/+3)
* Header refactoring: move jemalloc_internal_types.h out of the catch-all (David Goldblatt, 2017-04-19; 1 file changed, -0/+1)
* Header refactoring: move assert.h out of the catch-all (David Goldblatt, 2017-04-19; 20 files changed, -1/+32)
* Header refactoring: move util.h out of the catchall (David Goldblatt, 2017-04-19; 6 files changed, -0/+10)
* Header refactoring: move malloc_io.h out of the catchall (David Goldblatt, 2017-04-19; 7 files changed, -0/+12)
* Move CPP_PROLOGUE and CPP_EPILOGUE to the .cpp (David Goldblatt, 2017-04-19; 1 file changed, -0/+8)
  This lets us avoid having to specify them in every C file.
* Remove the function alignment of prof_backtrace. (Qi Wang, 2017-04-17; 1 file changed, -1/+0)
  This was an attempt to avoid triggering the slow path in libunwind; however, it turned out to be ineffective.
* Prefer old/low extent_t structures during reuse. (Jason Evans, 2017-04-17; 3 files changed, -18/+19)
  Rather than using a LIFO queue to track available extent_t structures, use a red-black tree, and always choose the oldest/lowest available during reuse.
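  A hedged sketch of the ordering this implies (the type and field names below, avail_extent_t/esn/addr, are illustrative rather than jemalloc's): a comparator keyed on serial number first and address second makes the tree minimum the oldest, lowest-address structure.
      #include <stdint.h>

      typedef struct {
          uint64_t esn;    /* extent structure serial number (creation order) */
          void    *addr;   /* address of the structure itself */
      } avail_extent_t;

      /* Order by age (esn), then by address; the tree minimum is reused first. */
      static int avail_extent_cmp(const avail_extent_t *a, const avail_extent_t *b) {
          if (a->esn != b->esn)
              return (a->esn < b->esn) ? -1 : 1;
          if ((uintptr_t)a->addr != (uintptr_t)b->addr)
              return ((uintptr_t)a->addr < (uintptr_t)b->addr) ? -1 : 1;
          return 0;
      }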
* Track extent structure serial number (esn) in extent_t. (Jason Evans, 2017-04-17; 2 files changed, -30/+44)
  This enables stable sorting of extent_t structures.
* Allocate increasingly large base blocks. (Jason Evans, 2017-04-17; 1 file changed, -26/+36)
  Limit the total number of base blocks by leveraging the exponential size class sequence, similarly to extent_grow_retained().
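  A minimal sketch of the growth idea, assuming geometric doubling as a stand-in for the actual size class sequence (the function name and 64 KiB starting size are made up): serving N bytes of metadata then needs only O(log N) blocks.
      #include <stddef.h>

      static size_t next_base_block_size(size_t prev_block_size, size_t need) {
          size_t next = prev_block_size ? prev_block_size * 2 : 65536;  /* assumed 64 KiB start */
          while (next < need)
              next *= 2;    /* grow geometrically until the request fits */
          return next;
      }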
* Update base_unmap() to match extent_dalloc_wrapper(). (Jason Evans, 2017-04-17; 1 file changed, -10/+10)
  Reverse the order of forced versus lazy purging attempts in base_unmap(), in order to match the order in extent_dalloc_wrapper(), which was reversed by 64e458f5cdd64f9b67cb495f177ef96bf3ce4e0e (Implement two-phase decay-based purging.).
* Improve rtree cache with a two-level cache design. (Qi Wang, 2017-04-17; 2 files changed, -6/+32)
  Two levels of rtree cache are implemented: a direct-mapped cache as L1, combined with an LRU cache as L2. The L1 cache offers low cost on a cache hit, but can suffer collisions in some circumstances. This is complemented by the L2 LRU cache, which is slower on cache access (overhead from linear search + reordering), but handles L1 collisions rather well.
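  The following is a simplified sketch of that lookup structure, not the real rtree_ctx (all names, sizes, and the hash are invented): a direct-mapped L1 indexed by a key hash, backed by a small L2 searched linearly and kept in LRU order by move-to-front.
      #include <stdint.h>
      #include <stddef.h>
      #include <string.h>

      #define L1_SIZE 16
      #define L2_SIZE 8

      typedef struct { uintptr_t key; void *val; } cache_entry_t;
      typedef struct {
          cache_entry_t l1[L1_SIZE];   /* direct-mapped: one probe */
          cache_entry_t l2[L2_SIZE];   /* LRU: linear search + move-to-front */
      } two_level_cache_t;

      static void *cache_lookup(two_level_cache_t *c, uintptr_t key) {
          cache_entry_t *e1 = &c->l1[(key >> 12) & (L1_SIZE - 1)];  /* cheap hit path */
          if (e1->key == key)
              return e1->val;
          for (size_t i = 0; i < L2_SIZE; i++) {
              if (c->l2[i].key == key) {
                  cache_entry_t hit = c->l2[i];
                  /* Promote the hit to L1; the displaced L1 entry becomes the
                   * most-recently-used L2 entry. */
                  memmove(&c->l2[1], &c->l2[0], i * sizeof(cache_entry_t));
                  c->l2[0] = *e1;
                  *e1 = hit;
                  return hit.val;
              }
          }
          return NULL;  /* miss: caller falls back to the full rtree walk */
      }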
* Switch to fine-grained reentrancy support. (Qi Wang, 2017-04-15; 3 files changed, -76/+55)
  Previously we had a general detection and support of reentrancy, at the cost of having branches and inc / dec operations on fast paths. To avoid taxing fast paths, we move the reentrancy operations onto tsd slow state, and only modify reentrancy level around external calls (that might trigger reentrancy).
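  A sketch of the fine-grained approach (the helper names pre_external_call / post_external_call and the TLS variable are illustrative, not jemalloc's API): the counter is only touched around calls that might re-enter the allocator, such as user-supplied extent hooks, so malloc/free fast paths carry no inc/dec.
      static __thread unsigned tls_reentrancy_level;   /* per-thread; assumes GCC/Clang __thread */

      static inline void pre_external_call(void)  { tls_reentrancy_level++; }
      static inline void post_external_call(void) { tls_reentrancy_level--; }
      static inline int  is_reentrant(void)       { return tls_reentrancy_level > 0; }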
* Bundle 3 branches on fast path into tsd_state. (Qi Wang, 2017-04-14; 3 files changed, -37/+106)
  Added tsd_state_nominal_slow, which on the malloc() fast path incorporates the tcache_enabled check, and on the free() fast path bundles both the malloc_slow and tcache_enabled branches.
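  Roughly, the idea is that the fast path tests a single state value instead of several independent flags; a minimal sketch (enum members other than tsd_state_nominal and tsd_state_nominal_slow, and the helper name, are illustrative):
      typedef enum {
          tsd_state_nominal = 0,   /* all fast-path conditions hold */
          tsd_state_nominal_slow,  /* nominal, but some slow-path condition is set */
          tsd_state_uninitialized
      } tsd_state_t;

      static inline int tsd_fast_path_ok(tsd_state_t state) {
          return state == tsd_state_nominal;  /* one branch covers the bundled checks */
      }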
* Pass alloc_ctx down profiling path. (Qi Wang, 2017-04-12; 3 files changed, -33/+64)
  With this change, when profiling is enabled, we avoid doing redundant rtree lookups. Also changed dalloc_ctx_t to alloc_ctx_t, as it's now used on the allocation path as well (to speed up profiling).
* Pass dalloc_ctx down the sdalloc path. (Qi Wang, 2017-04-12; 3 files changed, -4/+13)
  This avoids redundant rtree lookups.
* Header refactoring: move atomic.h out of the catch-all (David Goldblatt, 2017-04-11; 1 file changed, -0/+2)
* Header refactoring: Split up jemalloc_internal.h (David Goldblatt, 2017-04-11; 27 files changed, -27/+56)
  This is a biggy. jemalloc_internal.h has been doing multiple jobs for a while now:
  - The source of system-wide definitions.
  - The catch-all include file.
  - The module header file for jemalloc.c
  This commit splits up this functionality. The system-wide definitions responsibility has moved to jemalloc_preamble.h. The catch-all include file is now jemalloc_internal_includes.h. The module headers for jemalloc.c are now in jemalloc_internal_[externs|inlines|types].h, just as they are for the other modules.
* Header refactoring: break out ph.h dependencies (David Goldblatt, 2017-04-11; 1 file changed, -0/+2)
* Pass dealloc_ctx down free() fast path. (Qi Wang, 2017-04-11; 4 files changed, -23/+34)
  This gets rid of the redundant rtree lookup on the fast path.
* Move reentrancy_level to the beginning of TSD. (Qi Wang, 2017-04-07; 2 files changed, -2/+2)
* Add basic reentrancy-checking support, and allow arena_new to reenter. (David Goldblatt, 2017-04-07; 2 files changed, -12/+95)
  This checks whether or not we're reentrant using thread-local data, and, if we are, moves certain internal allocations to use arena 0 (which should be properly initialized after bootstrapping). The immediate thing this allows is spinning up threads in arena_new, which will enable spinning up background threads there.
* Add hooking functionality (David Goldblatt, 2017-04-07; 3 files changed, -0/+28)
  This allows us to hook chosen functions and do interesting things there (in particular: reentrancy checking).
* Optimizing TSD and thread cache layout. (Qi Wang, 2017-04-07; 2 files changed, -36/+56)
  1) Re-organize TSD so that frequently accessed fields are closer to the beginning and more compact. Assuming 64-bit, the first 2.5 cachelines now contain everything needed on the tcache fast path, except the tcache struct itself.
  2) Re-organize tcache and tbins. Take lg_fill_div out of tbin, and reduce tbin to 24 bytes (down from 32). Split tbins into tbins_small and tbins_large, and place tbins_small close to the beginning.
* Bypass witness_fork in TSD when !config_debug. (Qi Wang, 2017-04-07; 1 file changed, -0/+9)
  With the tcache change, we plan to leave some blank space when !config_debug (unused tbins, witnesses) at the end of the tsd. Let's not touch the memory.
* Get rid of tcache_enabled_t as we have runtime init support. (Qi Wang, 2017-04-07; 1 file changed, -3/+3)
* Integrate auto tcache into TSD. (Qi Wang, 2017-04-07; 4 files changed, -74/+160)
  The embedded tcache is initialized upon tsd initialization. The avail arrays for the tbins will be allocated / deallocated accordingly during init / cleanup. With this change, the pointer to the auto tcache will always be available, as long as we have access to the TSD. tcache_available() (called in tcache_get()) is provided to check if we should use tcache.
* Make prof's cum_gctx a C11-style atomic (David Goldblatt, 2017-04-05; 1 file changed, -2/+2)
* Make the mutex n_waiting_thds field a C11-style atomic (David Goldblatt, 2017-04-05; 1 file changed, -3/+4)
* Convert extent module to use C11-style atomics (David Goldblatt, 2017-04-05; 1 file changed, -8/+10)
* Convert accumbytes in prof_accum_t to C11 atomics, when possible (David Goldblatt, 2017-04-05; 1 file changed, -1/+3)
* Make extent_dss use C11-style atomics (David Goldblatt, 2017-04-05; 1 file changed, -15/+21)
* Make base_t's extent_hooks field C11-atomic (David Goldblatt, 2017-04-05; 1 file changed, -10/+4)
* Transition arena struct fields to C11 atomics (David Goldblatt, 2017-04-05; 2 files changed, -33/+38)
* Move arena-tracking atomics in jemalloc.c to C11-style (David Goldblatt, 2017-04-05; 1 file changed, -6/+8)
* Convert prng module to use C11-style atomics (David Goldblatt, 2017-04-04; 1 file changed, -2/+2)
* Make the tsd member init functions to take tsd_t * type. (Qi Wang, 2017-04-04; 2 files changed, -2/+7)
* Do proper cleanup for tsd_state_reincarnated. (Qi Wang, 2017-04-04; 2 files changed, -16/+8)
  Also enable arena_bind under non-nominal state, as the cleanup will be handled correctly now.
* Add init function support to tsd members. (Qi Wang, 2017-04-04; 2 files changed, -1/+29)
  This will facilitate embedding tcache into tsd, which requires proper initialization that cannot be done via the static initializer. Make tsd->rtree_ctx be initialized via rtree_ctx_data_init().
* Lookup extent once per time during tcache_flush_small / _large. (Qi Wang, 2017-03-28; 1 file changed, -14/+28)
  Cache the extents on the stack to avoid redundant lookup overhead.
* Move arena_slab_data_t's nfree into extent_t's e_bits. (Jason Evans, 2017-03-28; 2 files changed, -20/+20)
  Compact extent_t to 128 bytes on 64-bit systems by moving arena_slab_data_t's nfree into extent_t's e_bits. Cacheline-align extent_t structures so that they always cross the minimum number of cacheline boundaries. Re-order extent_t fields such that all fields except the slab bitmap (and overlaid heap profiling context pointer) are in the first cacheline. This resolves #461.
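  As an illustration of the packing technique (the field layout, widths, and macro names below are invented, not the real e_bits layout): small fields share one 64-bit word and are read/written with shift-and-mask accessors, so adding nfree does not enlarge the struct.
      #include <stdint.h>

      #define EBITS_NFREE_SHIFT  0
      #define EBITS_NFREE_WIDTH  9                        /* enough for up to 512 regions */
      #define EBITS_NFREE_MASK   (((uint64_t)1 << EBITS_NFREE_WIDTH) - 1)

      static inline unsigned ebits_nfree_get(uint64_t bits) {
          return (unsigned)((bits >> EBITS_NFREE_SHIFT) & EBITS_NFREE_MASK);
      }

      static inline uint64_t ebits_nfree_set(uint64_t bits, unsigned nfree) {
          return (bits & ~(EBITS_NFREE_MASK << EBITS_NFREE_SHIFT)) |
              (((uint64_t)nfree & EBITS_NFREE_MASK) << EBITS_NFREE_SHIFT);
      }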
* Remove BITMAP_USE_TREE. (Jason Evans, 2017-03-27; 1 file changed, -78/+0)
  Remove tree-structured bitmap support, in order to reduce complexity and ease maintenance. No bitmaps larger than 512 bits have been necessary since before 4.0.0, and there is no current plan that would increase maximum bitmap size. Although tree-structured bitmaps were used on 32-bit platforms prior to this change, the overall benefits were questionable (higher metadata overhead, higher bitmap modification cost, marginally lower search cost).
* Force inline ifree to avoid function call costs on fast path. (Qi Wang, 2017-03-25; 1 file changed, -2/+2)
  Without ALWAYS_INLINE, sometimes ifree() gets compiled into its own function, which adds overhead on the fast path.
* Use a bitmap in extents_t to speed up search. (Jason Evans, 2017-03-25; 1 file changed, -11/+30)
  Rather than iteratively checking all sufficiently large heaps during search, maintain and use a bitmap in order to skip empty heaps.
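  A minimal sketch of the technique (the array layout and function name are hypothetical, not extent.c's): one bit per size-class heap, set when that heap is non-empty, so the search can jump straight to the first sufficiently large non-empty heap.
      #include <stdint.h>
      #include <stddef.h>

      #define NHEAPS 256
      #define NWORDS (NHEAPS / 64)

      /* First non-empty heap with index >= min_index, or -1 if there is none. */
      static int first_nonempty_heap(const uint64_t nonempty[NWORDS], size_t min_index) {
          for (size_t w = min_index / 64; w < NWORDS; w++) {
              uint64_t word = nonempty[w];
              if (w == min_index / 64)
                  word &= ~(uint64_t)0 << (min_index % 64);   /* mask off smaller heaps */
              if (word != 0)
                  return (int)(w * 64 + (unsigned)__builtin_ctzll(word));
          }
          return -1;
      }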
* Implement bitmap_ffu(), which finds the first unset bit. (Jason Evans, 2017-03-25; 2 files changed, -7/+22)
* Use first fit layout policy instead of best fit. (Jason Evans, 2017-03-25; 1 file changed, -12/+42)
  For extents which do not delay coalescing, use first fit layout policy rather than first-best fit layout policy. This packs extents toward older virtual memory mappings, but at the cost of higher search overhead in the common case. This resolves #711.
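  A hedged sketch of the policy difference, with the data structures simplified to an array of per-size-class free lists (all names here are illustrative): best fit would take an extent from the smallest size class that fits, while first fit considers every size class that fits and takes the lowest-address extent, packing allocations toward older mappings.
      #include <stddef.h>
      #include <stdint.h>

      typedef struct { void *addr; } free_extent_t;

      #define NHEAPS 64

      static free_extent_t *first_fit(free_extent_t *heaps[NHEAPS], size_t min_heap) {
          free_extent_t *best = NULL;
          /* Assume each heap exposes its lowest-address extent at the head. */
          for (size_t i = min_heap; i < NHEAPS; i++) {
              free_extent_t *e = heaps[i];
              if (e != NULL && (best == NULL || (uintptr_t)e->addr < (uintptr_t)best->addr))
                  best = e;     /* lowest address across all fitting heaps wins */
          }
          return best;
      }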