path: root/src/chunk.c

Commit history, most recent first. Each entry lists the commit subject, followed by (author, date; files changed, lines -removed/+added), with the commit body indented beneath it.
* Remove all vestiges of chunks. (Jason Evans, 2016-10-12; 1 file changed, -51/+0)
    Remove mallctls:
    - opt.lg_chunk
    - stats.cactive
    This resolves #464.
* Rename most remaining *chunk* APIs to *extent*. (Jason Evans, 2016-06-06; 1 file changed, -936/+0)
* s/chunk_lookup/extent_lookup/g, s/chunks_rtree/extents_rtree/g (Jason Evans, 2016-06-06; 1 file changed, -23/+18)
* s/CHUNK_HOOKS_INITIALIZER/EXTENT_HOOKS_INITIALIZER/g (Jason Evans, 2016-06-06; 1 file changed, -1/+1)
* Rename chunks_{cached,retained,mtx} to extents_{cached,retained,mtx}. (Jason Evans, 2016-06-06; 1 file changed, -16/+16)
* Rename chunk_*_t hooks to extent_*_t. (Jason Evans, 2016-06-06; 1 file changed, -29/+29)
* s/chunk_hook/extent_hook/g (Jason Evans, 2016-06-06; 1 file changed, -86/+89)
* Move slabs out of chunks. (Jason Evans, 2016-06-06; 1 file changed, -6/+5)
* Use huge size class infrastructure for large size classes. (Jason Evans, 2016-06-06; 1 file changed, -8/+19)
* Implement cache-oblivious support for huge size classes. (Jason Evans, 2016-06-03; 1 file changed, -41/+57)
* Allow chunks to not be naturally aligned. (Jason Evans, 2016-06-03; 1 file changed, -41/+16)
    Precisely size extents for huge size classes that aren't multiples of chunksize.
* Remove CHUNK_ADDR2BASE() and CHUNK_ADDR2OFFSET(). (Jason Evans, 2016-06-03; 1 file changed, -5/+0)
* Add extent_dirty_[gs]et(). (Jason Evans, 2016-06-03; 1 file changed, -5/+6)
* Convert rtree from per chunk to per page. (Jason Evans, 2016-06-03; 1 file changed, -29/+79)
    Refactor [de]registration to maintain interior rtree entries for slabs.
* Refactor chunk_purge_wrapper() to take extent argument. (Jason Evans, 2016-06-03; 1 file changed, -4/+5)
* Refactor chunk_[de]commit_wrapper() to take extent arguments. (Jason Evans, 2016-06-03; 1 file changed, -4/+6)
* Refactor chunk_dalloc_{cache,wrapper}() to take extent arguments. (Jason Evans, 2016-06-03; 1 file changed, -66/+65)
    Rename arena_extent_[d]alloc() to extent_[d]alloc(). Move all chunk [de]registration responsibility into chunk.c.
* Add/use chunk_split_wrapper(). (Jason Evans, 2016-06-03; 1 file changed, -100/+141)
    Remove redundant ptr/oldsize args from huge_*(). Refactor huge/chunk/arena code boundaries.
* Add/use chunk_merge_wrapper(). (Jason Evans, 2016-06-03; 1 file changed, -32/+47)
* Add/use chunk_commit_wrapper(). (Jason Evans, 2016-06-03; 1 file changed, -0/+9)
* Add/use chunk_decommit_wrapper(). (Jason Evans, 2016-06-03; 1 file changed, -0/+9)
* Merge chunk_alloc_base() into its only caller. (Jason Evans, 2016-06-03; 1 file changed, -20/+0)
* Replace extent_tree_szad_* with extent_heap_*. (Jason Evans, 2016-06-03; 1 file changed, -32/+53)
* Use rtree rather than [sz]ad trees for chunk split/coalesce operations. (Jason Evans, 2016-06-03; 1 file changed, -152/+221)
* Remove redundant chunk argument from chunk_{,de,re}register(). (Jason Evans, 2016-06-03; 1 file changed, -10/+12)
* Replace extent_achunk_[gs]et() with extent_slab_[gs]et(). (Jason Evans, 2016-06-03; 1 file changed, -5/+5)
* Add extent_active_[gs]et(). (Jason Evans, 2016-06-03; 1 file changed, -5/+6)
    Always initialize extents' runs_dirty and chunks_cache linkage.
* Set/unset rtree node for last chunk of extents. (Jason Evans, 2016-06-03; 1 file changed, -4/+41)
    Set/unset the rtree node for the last chunk of each extent, so that the rtree can be used for chunk coalescing.
* Refactor rtree to always use base_alloc() for node allocation. (Jason Evans, 2016-06-03; 1 file changed, -12/+5)
* Use rtree-based chunk lookups rather than pointer bit twiddling. (Jason Evans, 2016-06-03; 1 file changed, -0/+9)
    Look up chunk metadata via the radix tree rather than using CHUNK_ADDR2BASE(). Propagate a pointer's containing extent, and minimize extent lookups by doing a single lookup (e.g. in free()) and propagating the pointer's extent into nearly all the functions that may need it.
* Add element acquire/release capabilities to rtree. (Jason Evans, 2016-06-03; 1 file changed, -7/+5)
    This makes it possible to acquire short-term "ownership" of rtree elements, so that it is possible to read an extent pointer *and* read the extent's contents with a guarantee that the element will not be modified until the ownership is released. This is intended as a mechanism for resolving rtree read/write races, rather than as a way to lock extents.
* Rename extent_node_t to extent_t. (Jason Evans, 2016-05-16; 1 file changed, -95/+93)
* Remove Valgrind support. (Jason Evans, 2016-05-13; 1 file changed, -10/+0)
* Resolve bootstrapping issues when embedded in FreeBSD libc. (Jason Evans, 2016-05-11; 1 file changed, -73/+73)
    Commit b2c0d6322d2307458ae2b28545f8a5c9903d7ef5 (Add witness, a simple online locking validator.) caused a broad propagation of tsd throughout the internal API, but tsd_fetch() was designed to fail prior to tsd bootstrapping. Fix this by splitting tsd_t into the non-nullable tsd_t and the nullable tsdn_t, and by modifying all internal APIs that do not critically rely on tsd to take nullable pointers. Furthermore, add the tsd_booted_get() function so that tsdn_fetch() can probe whether tsd bootstrapping is complete and return NULL if not. All dangerous conversions of nullable pointers are tsdn_tsd() calls that assert-fail on invalid conversion.
* Add the stats.retained and stats.arenas.<i>.retained statistics. (Jason Evans, 2016-05-04; 1 file changed, -2/+11)
    This resolves #367.
* Add witness, a simple online locking validator. (Jason Evans, 2016-04-14; 1 file changed, -89/+97)
    This resolves #358.
* Fix potential chunk leaks. (Jason Evans, 2016-03-31; 1 file changed, -35/+16)
    Move chunk_dalloc_arena()'s implementation into chunk_dalloc_wrapper(), so that if the dalloc hook fails, proper decommit/purge/retain cascading occurs. This fixes three potential chunk leaks on OOM paths: one during dss-based chunk allocation, one during chunk header commit (currently relevant only on Windows), and one during rtree write (e.g. if rtree node allocation fails). Merge chunk_purge_arena() into chunk_purge_default() (a refactor with no change to functionality).
* Move retaining out of default chunk hooks. (buchgr, 2016-02-26; 1 file changed, -11/+25)
    This fixes chunk allocation to reuse retained memory even if an application-provided chunk allocation function is in use. This resolves #307.
* Refactor arenas array (fixes deadlock). (Jason Evans, 2016-02-25; 1 file changed, -3/+1)
    Refactor the arenas array, which contains pointers to all extant arenas, so that it starts out as a sparse array of maximum size, and use double-checked atomics-based reads as the basis for a fast and simple arena_get(). Additionally, reduce arenas_lock's role so that it only protects against arena initialization races. These changes remove the possibility of arena lookups triggering locking, which resolves at least one known (fork-related) deadlock. This resolves #315.
* Attempt mmap-based in-place huge reallocation. (Jason Evans, 2016-02-25; 1 file changed, -7/+4)
    Attempt mmap-based in-place huge reallocation by plumbing new_addr into chunk_alloc_mmap(). This can dramatically speed up incremental huge reallocation. This resolves #335.
* Silence miscellaneous 64-to-32-bit data loss warnings. (Jason Evans, 2016-02-24; 1 file changed, -2/+2)
* Refactor jemalloc_ffs*() into ffs_*(). (Jason Evans, 2016-02-24; 1 file changed, -1/+1)
    Use appropriate versions to resolve 64-to-32-bit data loss warnings.
* Fix a strict aliasing violation. (Jason Evans, 2015-08-12; 1 file changed, -1/+6)
* Fix chunk_dalloc_arena() re: zeroing due to purge. (Jason Evans, 2015-08-12; 1 file changed, -1/+1)
* Arena chunk decommit cleanups and fixes. (Jason Evans, 2015-08-11; 1 file changed, -2/+2)
    Decommit the arena chunk header during chunk deallocation if the rest of the chunk is decommitted.
* Implement chunk hook support for page run commit/decommit. (Jason Evans, 2015-08-07; 1 file changed, -46/+80)
    Cascade from decommit to purge when purging unused dirty pages, so that it is possible to decommit cleaned memory rather than just purging. For non-Windows debug builds, decommit runs rather than purging them, since purging causes access of deallocated runs to segfault. This resolves #251.
* Generalize chunk management hooks. (Jason Evans, 2015-08-04; 1 file changed, -100/+246)
    Add the "arena.<i>.chunk_hooks" mallctl, which replaces and expands on the "arena.<i>.chunk.{alloc,dalloc,purge}" mallctls. The chunk hooks allow control over chunk allocation/deallocation, decommit/commit, purging, and splitting/merging, so that the application can rely on jemalloc's internal chunk caching and retaining functionality yet implement a variety of chunk management mechanisms and policies.
    Merge the chunks_[sz]ad_{mmap,dss} red-black trees into chunks_[sz]ad_retained. This slightly reduces how hard jemalloc tries to honor the dss precedence setting; prior to this change the precedence setting was also consulted when recycling chunks.
    Fix chunk purging: don't purge chunks in arena_purge_stashed(); instead, deallocate them in arena_unstash_purged(), so that the dirty memory linkage remains valid until after the last time it is used. This resolves #176 and #201.
* Implement support for non-coalescing maps on MinGW. (Jason Evans, 2015-07-25; 1 file changed, -0/+6)
    - Do not reallocate huge objects in place if the number of backing chunks would change.
    - Do not cache multi-chunk mappings.
    This resolves #213.
* Revert to first-best-fit run/chunk allocation. (Jason Evans, 2015-07-16; 1 file changed, -35/+9)
    This effectively reverts 97c04a93838c4001688fe31bf018972b4696efe2 (Use first-fit rather than first-best-fit run/chunk allocation.). In some pathological cases, first-fit search dominates allocation time, and it also tends not to converge as readily on a steady state of memory layout, since precise allocation order has a bigger effect than for first-best-fit.
* Use jemalloc_ffs() rather than ffs(). (Jason Evans, 2015-07-08; 1 file changed, -4/+12)