path: root/src/base.c
Commit log, newest first; each entry shows the commit message, author, date, files changed, and lines removed/added.
* Fix base allocator THP auto mode locking and stats. (Qi Wang, 2017-11-10; 1 file, -21/+16)
  Added proper synchronization for switching to THP in auto mode. Also fixed the stats for the number of THPs used.
* Use hugepage alignment for base allocator. (Qi Wang, 2017-11-04; 1 file, -2/+2)
  This gives us an easier way to tell, in the extent hooks, whether an allocation is for metadata.
* metadata_thp: auto mode adjustment for a0. (Qi Wang, 2017-11-01; 1 file, -19/+22)
  We observed that arena 0 can have much more metadata allocated compared to other arenas. Tune the auto mode so that a0 only switches to huge pages on the 5th block (instead of the 3rd, as previously).
* Enable a0 metadata thp on the 3rd base block. (Qi Wang, 2017-10-05; 1 file, -21/+64)
  Since we allocate rtree nodes from a0's base, it is pushed past one block right away during initialization, which makes the auto THP mode less effective on a0. Change a0 to make the switch on the 3rd block instead.
* Add stats for metadata_thp. (Qi Wang, 2017-08-30; 1 file, -8/+43)
  Report the number of THPs used in per-arena and aggregated stats.
* Change opt.metadata_thp to [disabled,auto,always]. (Qi Wang, 2017-08-30; 1 file, -13/+33)
  To avoid the high RSS caused by combining THP with a low-usage arena (i.e. THP becoming a significant percentage of resident memory), add a new "auto" option, which only starts using THP after a base allocator has used up its first THP region. Starting from the second hugepage (in a single arena), "auto" behaves the same as "always", i.e. it madvises hugepage right away.
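  A minimal sketch of the "auto" policy in C; the mode enum, helper names, and block counter below are illustrative stand-ins, not jemalloc's actual internals:

      #include <stdbool.h>
      #include <stddef.h>
      #include <sys/mman.h>

      enum metadata_thp_mode { THP_DISABLED, THP_AUTO, THP_ALWAYS };

      /* Decide whether a freshly mapped base block should be madvised to
       * huge pages: "always" does so immediately, while "auto" waits until
       * the arena's base allocator has used up at least one block. */
      static bool
      metadata_thp_should_madvise(enum metadata_thp_mode mode,
          unsigned n_blocks_used) {
          switch (mode) {
          case THP_ALWAYS:
              return true;
          case THP_AUTO:
              return n_blocks_used >= 1;
          default:
              return false;
          }
      }

      static void
      base_block_thp_hint(void *addr, size_t size,
          enum metadata_thp_mode mode, unsigned n_blocks_used) {
      #ifdef MADV_HUGEPAGE
          if (metadata_thp_should_madvise(mode, n_blocks_used)) {
              madvise(addr, size, MADV_HUGEPAGE); /* best effort */
          }
      #else
          (void)addr; (void)size; (void)mode; (void)n_blocks_used;
      #endif
      }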
* Implement opt.metadata_thp (Qi Wang, 2017-08-11; 1 file, -14/+29)
  This option enables transparent huge pages for base allocators (requires MADV_HUGEPAGE support).
* Check arena in current context in pre_reentrancy. (Qi Wang, 2017-06-23; 1 file, -6/+7)
* Set reentrancy when invoking customized extent hooks. (Qi Wang, 2017-06-23; 1 file, -15/+24)
  Customized extent hooks may malloc/free and thus trigger reentrancy. Support this behavior by raising the reentrancy level around hook invocations.
* Header refactoring: Pull size helpers out of jemalloc module. (David Goldblatt, 2017-05-31; 1 file, -7/+8)
* Header refactoring: unify and de-catchall extent_mmap module. (David Goldblatt, 2017-05-31; 1 file, -0/+1)
* Header refactoring: unify and de-catchall mutex module (David Goldblatt, 2017-05-24; 1 file, -0/+1)
* Do not hold the base mutex while calling extent hooks. (Jason Evans, 2017-05-23; 1 file, -0/+6)
  Drop the base mutex while allocating new base blocks, because extent allocation can enter code that prohibits holding non-core mutexes, e.g. the extent_[d]alloc() and extent_purge_forced_wrapper() calls in extent_alloc_dss(). This partially resolves #802.
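  A sketch of the drop-the-lock-around-hooks pattern, using plain pthreads rather than jemalloc's malloc_mutex_t; the function and hook names are hypothetical simplifications:

      #include <pthread.h>
      #include <stddef.h>

      typedef void *(extent_alloc_hook_t)(size_t size);

      static pthread_mutex_t base_mtx = PTHREAD_MUTEX_INITIALIZER;

      /* Called with base_mtx held; temporarily releases it so the hook may
       * allocate recursively or take lower-ranked locks. */
      static void *
      base_block_alloc(extent_alloc_hook_t *alloc_hook, size_t size) {
          pthread_mutex_unlock(&base_mtx);
          void *block = alloc_hook(size);
          pthread_mutex_lock(&base_mtx);
          /* Any base state read before unlocking must be re-validated here,
           * since another thread may have grown the base meanwhile. */
          return block;
      }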
* Allow mutexes to take a lock ordering enum at construction. (David Goldblatt, 2017-05-19; 1 file, -1/+2)
  This lets us specify whether and how mutexes of the same rank are allowed to be acquired. Currently, we only allow two policies (only a single mutex at a given rank at a time, and mutexes acquired in ascending order), but we can plausibly allow more (e.g. "release uncontended mutexes before blocking").
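  A sketch of what rank-plus-ordering validation can look like; the enum, struct, and lock_order_check names are hypothetical, not jemalloc's witness API:

      #include <assert.h>
      #include <stddef.h>

      typedef enum {
          /* At most one mutex of a given rank may be held at a time. */
          LOCK_ORDER_EXCLUSIVE,
          /* Equal-rank mutexes must be acquired in ascending address
           * order. */
          LOCK_ORDER_ASCENDING
      } lock_order_t;

      typedef struct mutex_s {
          unsigned rank;
          lock_order_t order;
      } mutex_t;

      /* Validate acquiring 'next' while 'held' (possibly NULL) is owned. */
      static void
      lock_order_check(const mutex_t *held, const mutex_t *next) {
          if (held == NULL || held->rank < next->rank) {
              return; /* strictly ascending ranks are always legal */
          }
          assert(held->rank == next->rank);
          assert(next->order == LOCK_ORDER_ASCENDING);
          assert((const void *)held < (const void *)next);
      }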
* Header refactoring: move assert.h out of the catch-all (David Goldblatt, 2017-04-19; 1 file, -0/+2)
* Track extent structure serial number (esn) in extent_t. (Jason Evans, 2017-04-17; 1 file, -28/+43)
  This enables stable sorting of extent_t structures.
* Allocate increasingly large base blocks. (Jason Evans, 2017-04-17; 1 file, -26/+36)
  Limit the total number of base blocks by leveraging the exponential size class sequence, similarly to extent_grow_retained().
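  A sketch of the growth policy; the real code steps through jemalloc's size class sequence, so the doubling loop and BASE_BLOCK_MIN below are only approximations:

      #include <stddef.h>

      /* Hypothetical floor for the first block, e.g. one 2 MiB huge page. */
      #define BASE_BLOCK_MIN ((size_t)2 << 20)

      /* Pick the next block's size: at least large enough for the pending
       * allocation, and no smaller than an exponentially growing floor, so
       * the block count stays logarithmic in total metadata usage. */
      static size_t
      base_next_block_size(size_t floor, size_t need) {
          size_t size = floor < BASE_BLOCK_MIN ? BASE_BLOCK_MIN : floor;
          while (size < need) {
              size <<= 1; /* step along the (approximate) class sequence */
          }
          return size;
      }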
* Update base_unmap() to match extent_dalloc_wrapper(). (Jason Evans, 2017-04-17; 1 file, -10/+10)
  Reverse the order of forced versus lazy purging attempts in base_unmap(), in order to match the order in extent_dalloc_wrapper(), which was reversed by 64e458f5cdd64f9b67cb495f177ef96bf3ce4e0e (Implement two-phase decay-based purging.).
* Header refactoring: Split up jemalloc_internal.h (David Goldblatt, 2017-04-11; 1 file, -1/+2)
  This is a biggy. jemalloc_internal.h has been doing multiple jobs for a while now:
  - The source of system-wide definitions.
  - The catch-all include file.
  - The module header file for jemalloc.c.
  This commit splits up this functionality. The system-wide definitions responsibility has moved to jemalloc_preamble.h. The catch-all include file is now jemalloc_internal_includes.h. The module headers for jemalloc.c are now in jemalloc_internal_[externs|inlines|types].h, just as they are for the other modules.
* Make base_t's extent_hooks field C11-atomic (David Goldblatt, 2017-04-05; 1 file, -10/+4)
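  A sketch of lock-free hook publication with C11 stdatomic; jemalloc wraps such accesses in its own atomic helpers, so the raw calls and field layout here are illustrative:

      #include <stdatomic.h>

      typedef struct extent_hooks_s extent_hooks_t;

      typedef struct base_s {
          /* Written by the hooks-set path, read by allocation paths
           * without holding the base mutex. */
          _Atomic(extent_hooks_t *) extent_hooks;
      } base_t;

      static extent_hooks_t *
      base_extent_hooks_get(base_t *base) {
          return atomic_load_explicit(&base->extent_hooks,
              memory_order_acquire);
      }

      /* Swap in new hooks and return the old ones. */
      static extent_hooks_t *
      base_extent_hooks_set(base_t *base, extent_hooks_t *hooks) {
          return atomic_exchange_explicit(&base->extent_hooks, hooks,
              memory_order_acq_rel);
      }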
* Convert extent_t's usize to szind. (Jason Evans, 2017-03-23; 1 file, -4/+5)
  Rather than storing usize only for large (and prof-promoted) allocations, store the size class index for allocations that reside within the extent, such that the size class index is valid for all extents that contain extant allocations, and invalid otherwise (mainly to make debugging simpler).
* Disentangle arena and extent locking. (Jason Evans, 2017-02-02; 1 file, -2/+3)
  Refactor arena and extent locking protocols such that arena and extent locks are never held when calling into the extent_*_wrapper() API. This requires extra care during purging since the arena lock no longer protects the inner purging logic. It also requires extra care to protect extents from being merged with adjacent extents.
  Convert extent_t's 'active' flag to an enumerated 'state', so that retained extents are explicitly marked as such, rather than depending on ring linkage state.
  Refactor the extent collections (and their synchronization) for cached and retained extents into extents_t. Incorporate LRU functionality to support purging. Incorporate page count accounting, which replaces arena->ndirty and arena->stats.retained.
  Assert that no core locks are held when entering any internal [de]allocation functions. This is in addition to existing assertions that no locks are held when entering external [de]allocation functions.
  Audit and document synchronization protocols for all arena_t fields. This fixes a potential deadlock due to recursive allocation during gdump, in a similar fashion to b49c649bc18fff4bd10a1c8adbaf1f25f6453cb6 (Fix lock order reversal during gdump.), but with a necessarily much broader code impact.
* Replace tabs following #define with spaces. (Jason Evans, 2017-01-21; 1 file, -1/+1)
  This resolves #564.
* Remove extraneous parens around return arguments. (Jason Evans, 2017-01-21; 1 file, -14/+14)
  This resolves #540.
* Update brace style. (Jason Evans, 2017-01-21; 1 file, -52/+47)
  Add braces around single-line blocks, and remove line breaks before function-opening braces. This resolves #537.
* Remove leading blank lines from function bodies. (Jason Evans, 2017-01-13; 1 file, -9/+0)
  This resolves #535.
* Implement per arena base allocators. (Jason Evans, 2016-12-27; 1 file, -119/+288)
  Add/rename related mallctls:
  - Add stats.arenas.<i>.base .
  - Rename stats.arenas.<i>.metadata to stats.arenas.<i>.internal .
  - Add stats.arenas.<i>.resident .
  Modify the arenas.extend mallctl to take an optional (extent_hooks_t *) argument so that it is possible for all base allocations to be serviced by the specified extent hooks. This resolves #463.
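  A usage sketch for reading the new per-arena statistic through the public mallctl API; error handling is elided, and "epoch" is bumped first to refresh cached stats:

      #include <jemalloc/jemalloc.h>
      #include <stdint.h>
      #include <stdio.h>

      int
      main(void) {
          /* Bump the epoch so cached stats are refreshed. */
          uint64_t epoch = 1;
          size_t sz = sizeof(epoch);
          mallctl("epoch", &epoch, &sz, &epoch, sizeof(epoch));

          size_t base_bytes;
          sz = sizeof(base_bytes);
          if (mallctl("stats.arenas.0.base", &base_bytes, &sz, NULL, 0)
              == 0) {
              printf("arena 0 base: %zu bytes\n", base_bytes);
          }
          return 0;
      }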
* Add extent serial numbers. (Jason Evans, 2016-11-15; 1 file, -1/+11)
  Add extent serial numbers and use them where appropriate as a sort key that is higher priority than address, so that the allocation policy prefers older extents. This resolves #147.
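  A sketch of a serial-number-first comparator; the struct layout and function name are illustrative, not the actual extent_t definition:

      #include <stddef.h>
      #include <stdint.h>

      typedef struct extent_s {
          size_t sn;  /* serial number; lower means allocated earlier */
          void *addr; /* base address */
      } extent_t;

      /* Prefer older extents: compare serial numbers first, falling back
       * to address order only to break ties among equal-age extents. */
      static int
      extent_sn_addr_comp(const extent_t *a, const extent_t *b) {
          if (a->sn != b->sn) {
              return a->sn < b->sn ? -1 : 1;
          }
          uintptr_t aa = (uintptr_t)a->addr, ba = (uintptr_t)b->addr;
          return (aa > ba) - (aa < ba);
      }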
* Remove all vestiges of chunks. (Jason Evans, 2016-10-12; 1 file, -6/+6)
  Remove mallctls:
  - opt.lg_chunk
  - stats.cactive
  This resolves #464.
* Rename most remaining *chunk* APIs to *extent*. (Jason Evans, 2016-06-06; 1 file, -4/+4)
* Move slabs out of chunks. (Jason Evans, 2016-06-06; 1 file, -2/+1)
* Use huge size class infrastructure for large size classes. (Jason Evans, 2016-06-06; 1 file, -1/+2)
* Allow chunks to not be naturally aligned. (Jason Evans, 2016-06-03; 1 file, -1/+1)
  Precisely size extents for huge size classes that aren't multiples of chunksize.
* Add extent_dirty_[gs]et(). (Jason Evans, 2016-06-03; 1 file, -1/+1)
* Merge chunk_alloc_base() into its only caller. (Jason Evans, 2016-06-03; 1 file, -1/+9)
* Replace extent_tree_szad_* with extent_heap_*. (Jason Evans, 2016-06-03; 1 file, -12/+23)
* Replace extent_achunk_[gs]et() with extent_slab_[gs]et(). (Jason Evans, 2016-06-03; 1 file, -2/+2)
* Add extent_active_[gs]et(). (Jason Evans, 2016-06-03; 1 file, -2/+2)
  Always initialize extents' runs_dirty and chunks_cache linkage.
* Rename extent_node_t to extent_t. (Jason Evans, 2016-05-16; 1 file, -37/+37)
* Remove Valgrind support. (Jason Evans, 2016-05-13; 1 file, -3/+0)
* Resolve bootstrapping issues when embedded in FreeBSD libc. (Jason Evans, 2016-05-11; 1 file, -22/+23)
  b2c0d6322d2307458ae2b28545f8a5c9903d7ef5 (Add witness, a simple online locking validator.) caused a broad propagation of tsd throughout the internal API, but tsd_fetch() was designed to fail prior to tsd bootstrapping. Fix this by splitting tsd_t into non-nullable tsd_t and nullable tsdn_t, and modifying all internal APIs that do not critically rely on tsd to take nullable pointers.
  Furthermore, add the tsd_booted_get() function so that tsdn_fetch() can probe whether tsd bootstrapping is complete and return NULL if not. All dangerous conversions of nullable pointers are tsdn_tsd() calls that assert-fail on invalid conversion.
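  A much-simplified sketch of the nullable/non-nullable split; real jemalloc makes tsdn_t a distinct type and assert-fails in tsdn_tsd(), so everything below is illustrative:

      #include <stdbool.h>
      #include <stddef.h>

      typedef struct tsd_s { int initialized; } tsd_t;
      typedef tsd_t tsdn_t; /* "nullable tsd": NULL is a legal value */

      static bool tsd_booted = false;
      static tsd_t tsd_instance;

      static bool tsd_booted_get(void) { return tsd_booted; }

      /* Non-nullable fetch: only legal once tsd is bootstrapped. */
      static tsd_t *tsd_fetch(void) { return &tsd_instance; }

      /* Nullable fetch: safe before bootstrapping; callers handle NULL. */
      static tsdn_t *
      tsdn_fetch(void) {
          if (!tsd_booted_get()) {
              return NULL;
          }
          return tsd_fetch();
      }

      /* Conversion back to non-nullable; the real code assert-fails on
       * NULL rather than silently passing it through. */
      static tsd_t *
      tsdn_tsd(tsdn_t *tsdn) {
          return tsdn;
      }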
* Convert base_mtx locking protocol comments to assertions. (Jason Evans, 2016-04-17; 1 file, -10/+12)
* Add witness, a simple online locking validator. (Jason Evans, 2016-04-14; 1 file, -13/+13)
  This resolves #358.
* Implement chunk hook support for page run commit/decommit. (Jason Evans, 2015-08-07; 1 file, -1/+1)
  Cascade from decommit to purge when purging unused dirty pages, so that it is possible to decommit cleaned memory rather than just purging. For non-Windows debug builds, decommit runs rather than purging them, since this causes access of deallocated runs to segfault. This resolves #251.
* Generalize chunk management hooks. (Jason Evans, 2015-08-04; 1 file, -2/+2)
  Add the "arena.<i>.chunk_hooks" mallctl, which replaces and expands on the "arena.<i>.chunk.{alloc,dalloc,purge}" mallctls. The chunk hooks allow control over chunk allocation/deallocation, decommit/commit, purging, and splitting/merging, such that the application can rely on jemalloc's internal chunk caching and retaining functionality, yet implement a variety of chunk management mechanisms and policies.
  Merge the chunks_[sz]ad_{mmap,dss} red-black trees into chunks_[sz]ad_retained. This slightly reduces how hard jemalloc tries to honor the dss precedence setting; prior to this change the precedence setting was also consulted when recycling chunks.
  Fix chunk purging. Don't purge chunks in arena_purge_stashed(); instead deallocate them in arena_unstash_purged(), so that the dirty memory linkage remains valid until after the last time it is used. This resolves #176 and #201.
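  An abridged sketch of the hook table's shape as documented for jemalloc 4.x; consult the man page of that era for the authoritative signatures:

      #include <stdbool.h>
      #include <stddef.h>

      typedef void *(chunk_alloc_t)(void *chunk, size_t size,
          size_t alignment, bool *zero, bool *commit, unsigned arena_ind);
      typedef bool (chunk_dalloc_t)(void *chunk, size_t size,
          bool committed, unsigned arena_ind);
      typedef bool (chunk_commit_t)(void *chunk, size_t size,
          size_t offset, size_t length, unsigned arena_ind);
      typedef bool (chunk_decommit_t)(void *chunk, size_t size,
          size_t offset, size_t length, unsigned arena_ind);
      typedef bool (chunk_purge_t)(void *chunk, size_t size,
          size_t offset, size_t length, unsigned arena_ind);
      typedef bool (chunk_split_t)(void *chunk, size_t size,
          size_t size_a, size_t size_b, bool committed, unsigned arena_ind);
      typedef bool (chunk_merge_t)(void *chunk_a, size_t size_a,
          void *chunk_b, size_t size_b, bool committed, unsigned arena_ind);

      /* Installed wholesale via the "arena.<i>.chunk_hooks" mallctl. */
      typedef struct {
          chunk_alloc_t *alloc;
          chunk_dalloc_t *dalloc;
          chunk_commit_t *commit;
          chunk_decommit_t *decommit;
          chunk_purge_t *purge;
          chunk_split_t *split;
          chunk_merge_t *merge;
      } chunk_hooks_t;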
* Fix two valgrind integration regressions. (Jason Evans, 2015-06-22; 1 file, -1/+1)
  The regressions were never merged into the master branch.
* Clarify relationship between stats.resident and stats.mapped. (Jason Evans, 2015-05-30; 1 file, -0/+2)
* Add the "stats.allocated" mallctl.Jason Evans2015-03-241-8/+21
|
* Quantize szad trees by size class. (Jason Evans, 2015-03-07; 1 file, -2/+3)
  Treat sizes that round down to the same size class as size-equivalent in trees that are used to search for first best fit, so that there are only as many "firsts" as there are size classes. This comes closer to the ideal of first fit.
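  A toy illustration of quantization in a size/address ("szad") comparator; the popcount-based pseudo size classes below stand in for jemalloc's real size class floor (the helper uses the GCC/Clang __builtin_popcountll builtin):

      #include <stddef.h>
      #include <stdint.h>

      /* Toy quantization: round down to the nearest value with at most
       * two set bits; jemalloc instead rounds down to its real size class
       * boundaries. All sizes sharing a floor compare equal on size. */
      static size_t
      size_quantize_floor(size_t size) {
          size_t q = size;
          while (__builtin_popcountll((unsigned long long)q) > 2) {
              q &= q - 1; /* clear the lowest set bit */
          }
          return q;
      }

      typedef struct extent_node_s {
          size_t size;
          void *addr;
      } extent_node_t;

      /* szad ordering: quantized size first, then address, so first best
       * fit within a size class is lowest-address fit. */
      static int
      extent_szad_comp(const extent_node_t *a, const extent_node_t *b) {
          size_t qa = size_quantize_floor(a->size);
          size_t qb = size_quantize_floor(b->size);
          if (qa != qb) {
              return qa < qb ? -1 : 1;
          }
          uintptr_t aa = (uintptr_t)a->addr, ba = (uintptr_t)b->addr;
          return (aa > ba) - (aa < ba);
      }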
* Simplify extent_node_t and add extent_node_init(). (Jason Evans, 2015-02-17; 1 file, -4/+2)