path: root/src/base.c
Each entry: commit message (author, date; files changed, lines -deleted/+added).
* Header refactoring: Pull size helpers out of jemalloc module. (David Goldblatt, 2017-05-31; 1 file, -7/+8)
* Header refactoring: unify and de-catchall extent_mmap module. (David Goldblatt, 2017-05-31; 1 file, -0/+1)
* Header refactoring: unify and de-catchall mutex module (David Goldblatt, 2017-05-24; 1 file, -0/+1)
* Do not hold the base mutex while calling extent hooks. (Jason Evans, 2017-05-23; 1 file, -0/+6)
  Drop the base mutex while allocating new base blocks, because extent allocation can enter code that prohibits holding non-core mutexes, e.g. the extent_[d]alloc() and extent_purge_forced_wrapper() calls in extent_alloc_dss().
  This partially resolves #802.
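A minimal sketch of the lock-drop pattern this commit describes, using a plain POSIX mutex and a hypothetical hook callback rather than jemalloc's actual base_t/extent-hook types:

    #include <pthread.h>
    #include <stddef.h>

    /*
     * Illustrative only: the caller holds base_mtx on entry; release it
     * around the hook call so that code reached through the hook (which may
     * itself take other, non-core locks) never runs with the base mutex
     * held, then reacquire it before touching the protected state again.
     */
    static void *
    alloc_block_without_base_mtx(pthread_mutex_t *base_mtx,
        void *(*extent_hook_alloc)(size_t), size_t size) {
        void *block;

        pthread_mutex_unlock(base_mtx);      /* Drop the lock ... */
        block = extent_hook_alloc(size);     /* ... for the hook call. */
        pthread_mutex_lock(base_mtx);        /* Re-protect base state. */
        return block;
    }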
* Allow mutexes to take a lock ordering enum at construction. (David Goldblatt, 2017-05-19; 1 file, -1/+2)
  This lets us specify whether and how mutexes of the same rank are allowed to be acquired. Currently, we only allow two policies (only a single mutex at a given rank at a time, and mutexes acquired in ascending order), but we can plausibly allow more (e.g. "release uncontended mutexes before blocking").
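To illustrate the idea (hypothetical names, not jemalloc's witness code): a sketch of a per-mutex ordering policy chosen at construction and consulted when a second mutex of the same rank is about to be acquired.

    #include <stdbool.h>
    #include <stddef.h>

    /* Hypothetical analogue of the two policies described above. */
    typedef enum {
        LOCK_ORDER_EXCLUSIVE,  /* Only one mutex of a given rank at a time. */
        LOCK_ORDER_ASCENDING   /* Same-rank mutexes taken in ascending order. */
    } lock_order_t;

    typedef struct {
        unsigned     rank;   /* Witness rank. */
        lock_order_t order;  /* Policy fixed at construction time. */
    } ranked_mutex_t;

    /* May 'next' be acquired while 'held' (of the same rank) is owned? */
    static bool
    acquire_allowed(const ranked_mutex_t *held, const ranked_mutex_t *next) {
        if (held == NULL || held->rank != next->rank) {
            return true;  /* Cross-rank ordering is checked elsewhere. */
        }
        switch (next->order) {
        case LOCK_ORDER_EXCLUSIVE:
            return false;        /* Never two same-rank mutexes at once. */
        case LOCK_ORDER_ASCENDING:
            return held < next;  /* Ascending (here: address) order only. */
        }
        return false;
    }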
* Header refactoring: move assert.h out of the catch-all (David Goldblatt, 2017-04-19; 1 file, -0/+2)
* Track extent structure serial number (esn) in extent_t. (Jason Evans, 2017-04-17; 1 file, -28/+43)
  This enables stable sorting of extent_t structures.
* Allocate increasingly large base blocks. (Jason Evans, 2017-04-17; 1 file, -26/+36)
  Limit the total number of base blocks by leveraging the exponential size class sequence, similarly to extent_grow_retained().
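A rough sketch of the growth idea (illustrative numbers and names, not the actual implementation): pick each new block size from a geometric sequence so the number of blocks stays logarithmic in the total base memory.

    #include <stddef.h>

    /*
     * Illustrative only: double the candidate block size once per block
     * already allocated, and never return less than the current request,
     * so the block count grows logarithmically with base memory usage.
     */
    static size_t
    next_base_block_size(unsigned nblocks, size_t request_size) {
        size_t size = 4096;  /* Hypothetical minimum block size (one page). */

        for (unsigned i = 0; i < nblocks; i++) {
            size *= 2;
        }
        while (size < request_size) {
            size *= 2;
        }
        return size;
    }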
* Update base_unmap() to match extent_dalloc_wrapper(). (Jason Evans, 2017-04-17; 1 file, -10/+10)
  Reverse the order of forced versus lazy purging attempts in base_unmap(), in order to match the order in extent_dalloc_wrapper(), which was reversed by 64e458f5cdd64f9b67cb495f177ef96bf3ce4e0e (Implement two-phase decay-based purging.).
* Header refactoring: Split up jemalloc_internal.h (David Goldblatt, 2017-04-11; 1 file, -1/+2)
  This is a biggy. jemalloc_internal.h has been doing multiple jobs for a while now:
  - The source of system-wide definitions.
  - The catch-all include file.
  - The module header file for jemalloc.c
  This commit splits up this functionality. The system-wide definitions responsibility has moved to jemalloc_preamble.h. The catch-all include file is now jemalloc_internal_includes.h. The module headers for jemalloc.c are now in jemalloc_internal_[externs|inlines|types].h, just as they are for the other modules.
* Make base_t's extent_hooks field C11-atomic (David Goldblatt, 2017-04-05; 1 file, -10/+4)
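Illustrative sketch (stdatomic.h, hypothetical struct and function names): with the hooks pointer stored as a C11 atomic, it can be read and swapped without holding the base mutex.

    #include <stdatomic.h>

    typedef struct extent_hooks_s extent_hooks_t;  /* Opaque for the sketch. */

    typedef struct {
        _Atomic(extent_hooks_t *) extent_hooks;
        /* ... remaining base fields elided ... */
    } base_like_t;

    static extent_hooks_t *
    base_like_hooks_get(base_like_t *base) {
        return atomic_load_explicit(&base->extent_hooks, memory_order_acquire);
    }

    static extent_hooks_t *
    base_like_hooks_set(base_like_t *base, extent_hooks_t *hooks) {
        /* Swap, returning the previous hooks so a caller can restore them. */
        return atomic_exchange_explicit(&base->extent_hooks, hooks,
            memory_order_acq_rel);
    }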
* Convert extent_t's usize to szind. (Jason Evans, 2017-03-23; 1 file, -4/+5)
  Rather than storing usize only for large (and prof-promoted) allocations, store the size class index for allocations that reside within the extent, such that the size class index is valid for all extents that contain extant allocations, and invalid otherwise (mainly to make debugging simpler).
* Disentangle arena and extent locking. (Jason Evans, 2017-02-02; 1 file, -2/+3)
  Refactor arena and extent locking protocols such that arena and extent locks are never held when calling into the extent_*_wrapper() API. This requires extra care during purging since the arena lock no longer protects the inner purging logic. It also requires extra care to protect extents from being merged with adjacent extents.
  Convert extent_t's 'active' flag to an enumerated 'state', so that retained extents are explicitly marked as such, rather than depending on ring linkage state.
  Refactor the extent collections (and their synchronization) for cached and retained extents into extents_t. Incorporate LRU functionality to support purging. Incorporate page count accounting, which replaces arena->ndirty and arena->stats.retained.
  Assert that no core locks are held when entering any internal [de]allocation functions. This is in addition to existing assertions that no locks are held when entering external [de]allocation functions.
  Audit and document synchronization protocols for all arena_t fields.
  This fixes a potential deadlock due to recursive allocation during gdump, in a similar fashion to b49c649bc18fff4bd10a1c8adbaf1f25f6453cb6 (Fix lock order reversal during gdump.), but with a necessarily much broader code impact.
* Replace tabs following #define with spaces. (Jason Evans, 2017-01-21; 1 file, -1/+1)
  This resolves #564.
* Remove extraneous parens around return arguments. (Jason Evans, 2017-01-21; 1 file, -14/+14)
  This resolves #540.
* Update brace style. (Jason Evans, 2017-01-21; 1 file, -52/+47)
  Add braces around single-line blocks, and remove line breaks before function-opening braces.
  This resolves #537.
* Remove leading blank lines from function bodies. (Jason Evans, 2017-01-13; 1 file, -9/+0)
  This resolves #535.
* Implement per arena base allocators. (Jason Evans, 2016-12-27; 1 file, -119/+288)
  Add/rename related mallctls:
  - Add stats.arenas.<i>.base .
  - Rename stats.arenas.<i>.metadata to stats.arenas.<i>.internal .
  - Add stats.arenas.<i>.resident .
  Modify the arenas.extend mallctl to take an optional (extent_hooks_t *) argument so that it is possible for all base allocations to be serviced by the specified extent hooks.
  This resolves #463.
* Add extent serial numbers. (Jason Evans, 2016-11-15; 1 file, -1/+11)
  Add extent serial numbers and use them where appropriate as a sort key that is higher priority than address, so that the allocation policy prefers older extents.
  This resolves #147.
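As a sketch of what such a sort key can look like (hypothetical struct, not extent_t itself): compare by serial number first so older extents win, and fall back to address to make the order total.

    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
        size_t sn;    /* Serial number assigned at creation; lower is older. */
        void  *addr;  /* Base address of the extent. */
    } extent_key_t;

    /* Order by serial number, then by address; lower keys are preferred. */
    static int
    extent_key_cmp(const extent_key_t *a, const extent_key_t *b) {
        if (a->sn != b->sn) {
            return (a->sn < b->sn) ? -1 : 1;
        }
        uintptr_t aa = (uintptr_t)a->addr, ba = (uintptr_t)b->addr;
        if (aa != ba) {
            return (aa < ba) ? -1 : 1;
        }
        return 0;
    }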
* Remove all vestiges of chunks. (Jason Evans, 2016-10-12; 1 file, -6/+6)
  Remove mallctls:
  - opt.lg_chunk
  - stats.cactive
  This resolves #464.
* Rename most remaining *chunk* APIs to *extent*. (Jason Evans, 2016-06-06; 1 file, -4/+4)
* Move slabs out of chunks. (Jason Evans, 2016-06-06; 1 file, -2/+1)
* Use huge size class infrastructure for large size classes. (Jason Evans, 2016-06-06; 1 file, -1/+2)
* Allow chunks to not be naturally aligned. (Jason Evans, 2016-06-03; 1 file, -1/+1)
  Precisely size extents for huge size classes that aren't multiples of chunksize.
* Add extent_dirty_[gs]et(). (Jason Evans, 2016-06-03; 1 file, -1/+1)
* Merge chunk_alloc_base() into its only caller. (Jason Evans, 2016-06-03; 1 file, -1/+9)
* Replace extent_tree_szad_* with extent_heap_*. (Jason Evans, 2016-06-03; 1 file, -12/+23)
* Replace extent_achunk_[gs]et() with extent_slab_[gs]et(). (Jason Evans, 2016-06-03; 1 file, -2/+2)
* Add extent_active_[gs]et(). (Jason Evans, 2016-06-03; 1 file, -2/+2)
  Always initialize extents' runs_dirty and chunks_cache linkage.
* Rename extent_node_t to extent_t. (Jason Evans, 2016-05-16; 1 file, -37/+37)
* Remove Valgrind support. (Jason Evans, 2016-05-13; 1 file, -3/+0)
* Resolve bootstrapping issues when embedded in FreeBSD libc. (Jason Evans, 2016-05-11; 1 file, -22/+23)
  b2c0d6322d2307458ae2b28545f8a5c9903d7ef5 (Add witness, a simple online locking validator.) caused a broad propagation of tsd throughout the internal API, but tsd_fetch() was designed to fail prior to tsd bootstrapping. Fix this by splitting tsd_t into non-nullable tsd_t and nullable tsdn_t, and modifying all internal APIs that do not critically rely on tsd to take nullable pointers.
  Furthermore, add the tsd_booted_get() function so that tsdn_fetch() can probe whether tsd bootstrapping is complete and return NULL if not. All dangerous conversions of nullable pointers are tsdn_tsd() calls that assert-fail on invalid conversion.
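A bare-bones illustration of the nullable/non-nullable split described above (hypothetical layout and _sketch-suffixed names, not the real tsd implementation):

    #include <assert.h>
    #include <stdbool.h>
    #include <stddef.h>

    typedef struct { int depth; /* ... per-thread state ... */ } tsd_t;

    /* Wrapper type whose pointers are allowed to be NULL pre-bootstrap. */
    typedef struct { tsd_t tsd; } tsdn_t;

    static bool   tsd_booted = false;
    static tsdn_t tsd_instance;

    /* Probe bootstrapping; return NULL instead of failing if too early. */
    static tsdn_t *
    tsdn_fetch_sketch(void) {
        return tsd_booted ? &tsd_instance : NULL;
    }

    /* The one "dangerous" conversion: assert the pointer is non-NULL. */
    static tsd_t *
    tsdn_tsd_sketch(tsdn_t *tsdn) {
        assert(tsdn != NULL);
        return &tsdn->tsd;
    }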
* Convert base_mtx locking protocol comments to assertions. (Jason Evans, 2016-04-17; 1 file, -10/+12)
* Add witness, a simple online locking validator. (Jason Evans, 2016-04-14; 1 file, -13/+13)
  This resolves #358.
* Implement chunk hook support for page run commit/decommit. (Jason Evans, 2015-08-07; 1 file, -1/+1)
  Cascade from decommit to purge when purging unused dirty pages, so that it is possible to decommit cleaned memory rather than just purging. For non-Windows debug builds, decommit runs rather than purging them, since this causes access of deallocated runs to segfault.
  This resolves #251.
* Generalize chunk management hooks. (Jason Evans, 2015-08-04; 1 file, -2/+2)
  Add the "arena.<i>.chunk_hooks" mallctl, which replaces and expands on the "arena.<i>.chunk.{alloc,dalloc,purge}" mallctls. The chunk hooks allow control over chunk allocation/deallocation, decommit/commit, purging, and splitting/merging, such that the application can rely on jemalloc's internal chunk caching and retaining functionality, yet implement a variety of chunk management mechanisms and policies.
  Merge the chunks_[sz]ad_{mmap,dss} red-black trees into chunks_[sz]ad_retained. This slightly reduces how hard jemalloc tries to honor the dss precedence setting; prior to this change the precedence setting was also consulted when recycling chunks.
  Fix chunk purging. Don't purge chunks in arena_purge_stashed(); instead deallocate them in arena_unstash_purged(), so that the dirty memory linkage remains valid until after the last time it is used.
  This resolves #176 and #201.
* Fix two valgrind integration regressions. (Jason Evans, 2015-06-22; 1 file, -1/+1)
  The regressions were never merged into the master branch.
* Clarify relationship between stats.resident and stats.mapped. (Jason Evans, 2015-05-30; 1 file, -0/+2)
* Add the "stats.allocated" mallctl. (Jason Evans, 2015-03-24; 1 file, -8/+21)
* Quantize szad trees by size class. (Jason Evans, 2015-03-07; 1 file, -2/+3)
  Treat sizes that round down to the same size class as size-equivalent in trees that are used to search for first best fit, so that there are only as many "firsts" as there are size classes. This comes closer to the ideal of first fit.
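To make the quantization concrete, a toy size/address comparator (illustrative power-of-two "size classes", not jemalloc's real class sequence): sizes are first rounded to their class, so all sizes in one class compare equal on the size dimension and only the address breaks ties.

    #include <stddef.h>
    #include <stdint.h>

    /* Toy quantization: round a size down to a power of two. */
    static size_t
    size_class_floor(size_t size) {
        size_t class = 1;

        while (class * 2 <= size) {
            class *= 2;
        }
        return class;
    }

    /* Size-class-then-address ordering for a szad-style search tree. */
    static int
    szad_cmp(size_t a_size, uintptr_t a_addr,
        size_t b_size, uintptr_t b_addr) {
        size_t a_class = size_class_floor(a_size);
        size_t b_class = size_class_floor(b_size);

        if (a_class != b_class) {
            return (a_class < b_class) ? -1 : 1;
        }
        if (a_addr != b_addr) {
            return (a_addr < b_addr) ? -1 : 1;
        }
        return 0;
    }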
* Simplify extent_node_t and add extent_node_init(). (Jason Evans, 2015-02-17; 1 file, -4/+2)
* Integrate whole chunks into unused dirty page purging machinery. (Jason Evans, 2015-02-17; 1 file, -8/+8)
  Extend per arena unused dirty page purging to manage unused dirty chunks in addition to unused dirty runs. Rather than immediately unmapping deallocated chunks (or purging them in the --disable-munmap case), store them in a separate set of trees, chunks_[sz]ad_dirty. Preferentially allocate dirty chunks. When excessive unused dirty pages accumulate, purge runs and chunks in integrated LRU order (and unmap chunks in the --enable-munmap case).
  Refactor extent_node_t to provide accessor functions.
* Move centralized chunk management into arenas. (Jason Evans, 2015-02-12; 1 file, -45/+20)
  Migrate all centralized data structures related to huge allocations and recyclable chunks into arena_t, so that each arena can manage huge allocations and recyclable virtual memory completely independently of other arenas.
  Add chunk node caching to arenas, in order to avoid contention on the base allocator.
  Use chunks_rtree to look up huge allocations rather than a red-black tree. Maintain a per arena unsorted list of huge allocations (which will be needed to enumerate huge allocations during arena reset).
  Remove the --enable-ivsalloc option, make ivsalloc() always available, and use it for size queries if --enable-debug is enabled. The only practical implications to this removal are that 1) ivsalloc() is now always available during live debugging (and the underlying radix tree is available during core-based debugging), and 2) size query validation can no longer be enabled independent of --enable-debug.
  Remove the stats.chunks.{current,total,high} mallctls, and replace their underlying statistics with simpler atomically updated counters used exclusively for gdump triggering. These statistics are no longer very useful because each arena manages chunks independently, and per arena statistics provide similar information.
  Simplify chunk synchronization code, now that base chunk allocation cannot cause recursive lock acquisition.
* Refactor base_alloc() to guarantee demand-zeroed memory. (Jason Evans, 2015-02-05; 1 file, -56/+91)
  Refactor base_alloc() to guarantee that allocations are carved from demand-zeroed virtual memory. This supports sparse data structures such as multi-page radix tree nodes.
  Enhance base_alloc() to keep track of fragments which were too small to support previous allocation requests, and try to consume them during subsequent requests. This becomes important when request sizes commonly approach or exceed the chunk size (as could radix tree node allocations).
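A compact sketch of the two ideas in this commit, using mmap() directly and a single saved fragment (the real code tracks more state; names are hypothetical): allocations are carved from demand-zeroed mappings, and the unused tail of each block is remembered so a later, smaller request can consume it.

    #include <stddef.h>
    #include <sys/mman.h>

    typedef struct {
        void  *frag;       /* Unused tail of the most recent block. */
        size_t frag_size;
    } base_sketch_t;

    static void *
    base_sketch_alloc(base_sketch_t *base, size_t size) {
        /* First try to satisfy the request from the saved fragment. */
        if (size <= base->frag_size) {
            void *ret = base->frag;
            base->frag = (char *)base->frag + size;
            base->frag_size -= size;
            return ret;  /* Already demand-zeroed; no memset() needed. */
        }
        /* Otherwise map a fresh demand-zeroed block and keep its tail. */
        size_t block_size = (size + 4095) & ~(size_t)4095;  /* Page-align. */
        void *block = mmap(NULL, block_size, PROT_READ | PROT_WRITE,
            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (block == MAP_FAILED) {
            return NULL;
        }
        base->frag = (char *)block + size;
        base->frag_size = block_size - size;
        return block;
    }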
* Implement metadata statistics. (Jason Evans, 2015-01-24; 1 file, -0/+15)
  There are three categories of metadata:
  - Base allocations are used for bootstrap-sensitive internal allocator data structures.
  - Arena chunk headers comprise pages which track the states of the non-metadata pages.
  - Internal allocations differ from application-originated allocations in that they are for internal use, and that they are omitted from heap profiles.
  The metadata statistics comprise the metadata categories as follows:
  - stats.metadata: All metadata -- base + arena chunk headers + internal allocations.
  - stats.arenas.<i>.metadata.mapped: Arena chunk headers.
  - stats.arenas.<i>.metadata.allocated: Internal allocations. This is reported separately from the other metadata statistics because it overlaps with the allocated and active statistics, whereas the other metadata statistics do not.
  Base allocations are not reported separately, though their magnitude can be computed by subtracting the arena-specific metadata.
  This resolves #163.
* Refactor huge allocation to be managed by arenas. (Jason Evans, 2014-05-16; 1 file, -10/+2)
  Refactor huge allocation to be managed by arenas (though the global red-black tree of huge allocations remains for lookup during deallocation). This is the logical conclusion of recent changes that 1) made per arena dss precedence apply to huge allocation, and 2) made it possible to replace the per arena chunk allocation/deallocation functions.
  Remove the top level huge stats, and replace them with per arena huge stats.
  Normalize function names and types to *dalloc* (some were *dealloc*).
  Remove the --enable-mremap option. As jemalloc currently operates, this is a performance regression for some applications, but planned work to logarithmically space huge size classes should provide similar amortized performance. The motivation for this change was that mremap-based huge reallocation forced leaky abstractions that prevented refactoring.
* Add support for user-specified chunk allocators/deallocators. (aravind, 2014-05-12; 1 file, -1/+1)
  Add new mallctl endpoints "arena<i>.chunk.alloc" and "arena<i>.chunk.dealloc" to allow userspace to configure jemalloc's chunk allocator and deallocator on a per-arena basis.
* Optimize Valgrind integration. (Jason Evans, 2014-04-15; 1 file, -3/+4)
  Forcefully disable tcache if running inside Valgrind, and remove Valgrind calls in tcache-specific code.
  Restructure Valgrind-related code to move most Valgrind calls out of the fast path functions.
  Take advantage of static knowledge to elide some branches in JEMALLOC_VALGRIND_REALLOC().
* Fix Valgrind integration. (Jason Evans, 2013-02-01; 1 file, -0/+3)
  Fix Valgrind integration to annotate all internally allocated memory in a way that keeps Valgrind happy about internal data structure access.
* Add arena-specific and selective dss allocation. (Jason Evans, 2012-10-13; 1 file, -1/+2)
  Add the "arenas.extend" mallctl, so that it is possible to create new arenas that are outside the set that jemalloc automatically multiplexes threads onto.
  Add the ALLOCM_ARENA() flag for {,r,d}allocm(), so that it is possible to explicitly allocate from a particular arena.
  Add the "opt.dss" mallctl, which controls the default precedence of dss allocation relative to mmap allocation.
  Add the "arena.<i>.dss" mallctl, which makes it possible to set the default dss precedence on a per arena or global basis.
  Add the "arena.<i>.purge" mallctl, which obsoletes "arenas.purge".
  Add the "stats.arenas.<i>.dss" mallctl.