path: root/src/arena.c
Commit history (subject, author, date; files changed, lines -/+)
* Do not advance decay epoch when time goes backwards. (Jason Evans, 2016-10-11; 1 file, -4/+17)
  Instead, move the epoch backward in time. Additionally, add nstime_monotonic() and use it in debug builds to assert that time only goes backward if nstime_update() is using a non-monotonic time source.
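  A minimal sketch of the idea described in that commit message; the types, fields, and names below are illustrative stand-ins, not jemalloc's real decay or nstime internals:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative only; not jemalloc's real types or API. */
typedef struct {
	uint64_t epoch_ns;    /* start of the current decay epoch */
	uint64_t interval_ns; /* length of one epoch */
} decay_sketch_t;

/* Stand-in for what nstime_monotonic() would report. */
static bool time_source_monotonic = true;

static void
decay_maybe_advance_epoch(decay_sketch_t *decay, uint64_t now_ns)
{
	if (now_ns < decay->epoch_ns) {
		/*
		 * Time went backwards.  A monotonic source must never do
		 * this, hence the debug assertion; with a non-monotonic
		 * source, move the epoch backward instead of advancing it,
		 * so later updates stay consistent with the rewound clock.
		 */
		assert(!time_source_monotonic);
		decay->epoch_ns = now_ns;
		return;
	}
	/* Normal case: advance the epoch by whole elapsed intervals. */
	while (now_ns - decay->epoch_ns >= decay->interval_ns) {
		decay->epoch_ns += decay->interval_ns;
		/* ... per-epoch purge bookkeeping would go here ... */
	}
}
```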
* Refactor arena->decay_* into arena->decay.* (arena_decay_t). (Jason Evans, 2016-10-11; 1 file, -38/+38)
* Fix size class overflow bugs. (Jason Evans, 2016-10-03; 1 file, -2/+2)
  Avoid calling s2u() on raw extent sizes in extent_recycle(). Clamp psz2ind() (implemented as psz2ind_clamp()) when inserting/removing into/from size-segregated extent heaps.
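  A toy sketch of the clamping idea, assuming a made-up linear set of page-multiple size classes (the _sketch names are not jemalloc's real size-class machinery):

```c
#include <stddef.h>

#define NPSIZES_SKETCH 64 /* number of page-size-multiple classes */

/* Unclamped index; assumes psz is at least one 4 KiB page. */
static size_t
psz2ind_sketch(size_t psz)
{
	return (psz >> 12) - 1;
}

/*
 * Clamped variant: sizes beyond the largest class map to the last heap
 * instead of producing an out-of-range index into the size-segregated
 * extent heaps.
 */
static size_t
psz2ind_clamp_sketch(size_t psz)
{
	size_t ind = psz2ind_sketch(psz);
	return (ind < NPSIZES_SKETCH) ? ind : NPSIZES_SKETCH - 1;
}
```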
* Add various mutex ownership assertions. (Jason Evans, 2016-09-23; 1 file, -0/+2)
* Protect extents_dirty access with extents_mtx. (Jason Evans, 2016-09-22; 1 file, -9/+20)
  This fixes race conditions during purging.
* Fix locking order reversal in arena_reset(). (Jason Evans, 2016-06-06; 1 file, -5/+13)
* Modify extent hook functions to take an (extent_t *) argument. (Jason Evans, 2016-06-06; 1 file, -24/+24)
  This facilitates the application accessing its own extent allocator metadata during hook invocations. This resolves #259.
* Remove obsolete stats.arenas.<i>.metadata.mapped mallctl. (Jason Evans, 2016-06-06; 1 file, -2/+1)
  Rename stats.arenas.<i>.metadata.allocated mallctl to stats.arenas.<i>.metadata.
* Rename most remaining *chunk* APIs to *extent*. (Jason Evans, 2016-06-06; 1 file, -29/+29)
* s/CHUNK_HOOKS_INITIALIZER/EXTENT_HOOKS_INITIALIZER/g (Jason Evans, 2016-06-06; 1 file, -4/+4)
* Rename chunks_{cached,retained,mtx} to extents_{cached,retained,mtx}. (Jason Evans, 2016-06-06; 1 file, -9/+10)
* s/chunk_hook/extent_hook/g (Jason Evans, 2016-06-06; 1 file, -33/+33)
* Rename huge to large. (Jason Evans, 2016-06-06; 1 file, -71/+72)
* Move slabs out of chunks. (Jason Evans, 2016-06-06; 1 file, -1311/+308)
* Improve interval-based profile dump triggering. (Jason Evans, 2016-06-06; 1 file, -0/+14)
  When an allocation is large enough to trigger multiple dumps, use modular math rather than subtraction to reset the interval counter. Prior to this change, it was possible for a single allocation to cause many subsequent allocations to all trigger profile dumps. When updating usable size for a sampled object, try to cancel out the difference between LARGE_MINCLASS and usable size from the interval counter.
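  A small sketch of the modular-reset idea from that commit message; the counter, interval value, and function names are assumptions for illustration, not jemalloc's real profiling internals:

```c
#include <stdint.h>
#include <stdio.h>

static uint64_t prof_accum_bytes = 0;
static const uint64_t prof_interval = 1 << 20; /* e.g. dump every 1 MiB */

static int
prof_accum_and_maybe_dump(uint64_t alloc_bytes)
{
	prof_accum_bytes += alloc_bytes;
	if (prof_accum_bytes < prof_interval)
		return 0;
	/*
	 * Modular reset: keep only the remainder past the last interval
	 * boundary.  The old behavior (prof_accum_bytes -= prof_interval)
	 * could leave many intervals' worth of residue after one large
	 * allocation, so every following allocation also triggered a dump.
	 */
	prof_accum_bytes %= prof_interval;
	return 1; /* caller would trigger a profile dump */
}

int
main(void)
{
	/* One 10 MiB allocation triggers a single dump... */
	printf("dump=%d\n", prof_accum_and_maybe_dump(10u << 20));
	/* ...and a following small allocation does not. */
	printf("dump=%d\n", prof_accum_and_maybe_dump(64));
	return 0;
}
```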
* Use huge size class infrastructure for large size classes. (Jason Evans, 2016-06-06; 1 file, -770/+100)
* Implement cache-oblivious support for huge size classes. (Jason Evans, 2016-06-03; 1 file, -53/+54)
* Allow chunks to not be naturally aligned. (Jason Evans, 2016-06-03; 1 file, -33/+8)
  Precisely size extents for huge size classes that aren't multiples of chunksize.
* Remove CHUNK_ADDR2BASE() and CHUNK_ADDR2OFFSET(). (Jason Evans, 2016-06-03; 1 file, -77/+166)
* Add extent_dirty_[gs]et(). (Jason Evans, 2016-06-03; 1 file, -1/+1)
* Convert rtree from per chunk to per page. (Jason Evans, 2016-06-03; 1 file, -17/+11)
  Refactor [de]registration to maintain interior rtree entries for slabs.
* Refactor chunk_purge_wrapper() to take extent argument. (Jason Evans, 2016-06-03; 1 file, -2/+2)
* Refactor chunk_[de]commit_wrapper() to take extent arguments. (Jason Evans, 2016-06-03; 1 file, -8/+6)
* Refactor chunk_dalloc_{cache,wrapper}() to take extent arguments. (Jason Evans, 2016-06-03; 1 file, -84/+22)
  Rename arena_extent_[d]alloc() to extent_[d]alloc(). Move all chunk [de]registration responsibility into chunk.c.
* Add/use chunk_split_wrapper(). (Jason Evans, 2016-06-03; 1 file, -261/+250)
  Remove redundant ptr/oldsize args from huge_*(). Refactor huge/chunk/arena code boundaries.
* Add/use chunk_merge_wrapper(). (Jason Evans, 2016-06-03; 1 file, -44/+46)
* Add/use chunk_commit_wrapper(). (Jason Evans, 2016-06-03; 1 file, -30/+31)
* Add/use chunk_decommit_wrapper(). (Jason Evans, 2016-06-03; 1 file, -7/+7)
* Replace extent_tree_szad_* with extent_heap_*. (Jason Evans, 2016-06-03; 1 file, -6/+8)
* Use rtree rather than [sz]ad trees for chunk split/coalesce operations. (Jason Evans, 2016-06-03; 1 file, -2/+0)
* Remove redundant chunk argument from chunk_{,de,re}register(). (Jason Evans, 2016-06-03; 1 file, -2/+2)
* Replace extent_achunk_[gs]et() with extent_slab_[gs]et(). (Jason Evans, 2016-06-03; 1 file, -3/+3)
* Add extent_active_[gs]et(). (Jason Evans, 2016-06-03; 1 file, -3/+3)
  Always initialize extents' runs_dirty and chunks_cache linkage.
* Refactor rtree to always use base_alloc() for node allocation. (Jason Evans, 2016-06-03; 1 file, -27/+32)
* Use rtree-based chunk lookups rather than pointer bit twiddling. (Jason Evans, 2016-06-03; 1 file, -110/+135)
  Look up chunk metadata via the radix tree, rather than using CHUNK_ADDR2BASE(). Propagate pointer's containing extent. Minimize extent lookups by doing a single lookup (e.g. in free()) and propagating the pointer's extent into nearly all the functions that may need it.
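  A rough sketch of the lookup-once-and-propagate pattern; the _sketch types and stub bodies are assumptions, not jemalloc's real rtree or extent API:

```c
#include <stddef.h>
#include <stdlib.h>

/* Illustrative extent metadata record. */
typedef struct {
	void	*addr;
	size_t	size;
} extent_sketch_t;

/* Stand-in for a radix-tree lookup keyed by the pointer's page address. */
static extent_sketch_t *
rtree_lookup_sketch(const void *ptr)
{
	(void)ptr;
	return NULL; /* a real implementation walks the rtree here */
}

/* Deallocation helper that already knows the pointer's extent. */
static void
dalloc_with_extent_sketch(extent_sketch_t *extent, void *ptr)
{
	(void)extent;
	free(ptr);
}

/*
 * The point of the change: perform the metadata lookup exactly once at the
 * public boundary (e.g. free()), then hand the extent to every helper that
 * needs it, instead of each helper re-deriving it from the pointer via
 * address masking (CHUNK_ADDR2BASE()-style bit twiddling).
 */
void
free_sketch(void *ptr)
{
	if (ptr == NULL)
		return;
	extent_sketch_t *extent = rtree_lookup_sketch(ptr);
	dalloc_with_extent_sketch(extent, ptr);
}
```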
* Rename extent_node_t to extent_t. (Jason Evans, 2016-05-16; 1 file, -88/+87)
* Simplify run quantization. (Jason Evans, 2016-05-16; 1 file, -150/+29)
* Refactor runs_avail. (Jason Evans, 2016-05-16; 1 file, -38/+23)
  Use pszind_t size classes rather than szind_t size classes, and always reserve space for NPSIZES elements. This removes unused heaps that are not multiples of the page size, and adds (currently) unused heaps for all huge size classes, with the immediate benefit that the size of arena_t allocations is constant (no longer dependent on chunk size).
* Implement pz2ind(), pind2sz(), and psz2u(). (Jason Evans, 2016-05-13; 1 file, -2/+3)
  These compute size classes and indices similarly to size2index(), index2size() and s2u(), respectively, but using the subset of size classes that are multiples of the page size. Note that pszind_t and szind_t are not interchangeable.
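  A toy model of how the three functions relate. Real jemalloc size classes are grouped geometrically; here every class is simply one more page than the previous, which is enough to show the relationship. All names carry a _sketch suffix to make clear they are not jemalloc's real API:

```c
#include <stddef.h>
#include <stdio.h>

#define PAGE_SKETCH ((size_t)4096)

/* Smallest class >= psz (cf. psz2u()). */
static size_t
psz2u_sketch(size_t psz)
{
	return (psz + PAGE_SKETCH - 1) & ~(PAGE_SKETCH - 1);
}

/* Index of the smallest class >= psz (cf. pz2ind()). */
static size_t
pz2ind_sketch(size_t psz)
{
	return psz2u_sketch(psz) / PAGE_SKETCH - 1;
}

/* Size of class ind (cf. pind2sz()). */
static size_t
pind2sz_sketch(size_t ind)
{
	return (ind + 1) * PAGE_SKETCH;
}

int
main(void)
{
	size_t psz = 5000;
	size_t ind = pz2ind_sketch(psz);
	printf("psz2u(%zu) = %zu, pz2ind = %zu, pind2sz = %zu\n",
	    psz, psz2u_sketch(psz), ind, pind2sz_sketch(ind));
	/* Prints: psz2u(5000) = 8192, pz2ind = 1, pind2sz = 8192 */
	return 0;
}
```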
* Initialize arena_bin_info at compile time rather than at boot time. (Jason Evans, 2016-05-13; 1 file, -79/+33)
  This resolves #370.
* Remove redzone support. (Jason Evans, 2016-05-13; 1 file, -140/+13)
  This resolves #369.
* Remove quarantine support. (Jason Evans, 2016-05-13; 1 file, -23/+7)
* Remove Valgrind support. (Jason Evans, 2016-05-13; 1 file, -38/+3)
* Resolve bootstrapping issues when embedded in FreeBSD libc. (Jason Evans, 2016-05-11; 1 file, -287/+298)
  b2c0d6322d2307458ae2b28545f8a5c9903d7ef5 (Add witness, a simple online locking validator.) caused a broad propagation of tsd throughout the internal API, but tsd_fetch() was designed to fail prior to tsd bootstrapping. Fix this by splitting tsd_t into non-nullable tsd_t and nullable tsdn_t, and modifying all internal APIs that do not critically rely on tsd to take nullable pointers. Furthermore, add the tsd_booted_get() function so that tsdn_fetch() can probe whether tsd bootstrapping is complete and return NULL if not. All dangerous conversions of nullable pointers are tsdn_tsd() calls that assert-fail on invalid conversion.
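  A compact sketch of the nullable/non-nullable split that commit describes; the _sketch types, the boolean flag, and the single static instance are stand-ins, not jemalloc's real tsd machinery:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct {
	int dummy; /* real tsd carries per-thread allocator state */
} tsd_sketch_t;

/* "tsd or NULL": safe to request before tsd bootstrapping has completed. */
typedef tsd_sketch_t tsdn_sketch_t;

static bool tsd_booted_sketch = false;	/* cf. tsd_booted_get() */
static tsd_sketch_t tsd_instance_sketch;

/* May be called before bootstrapping; returns NULL in that case. */
static tsdn_sketch_t *
tsdn_fetch_sketch(void)
{
	if (!tsd_booted_sketch)
		return NULL;
	return &tsd_instance_sketch;
}

/*
 * The only "dangerous" conversion from nullable to non-nullable: it
 * assert-fails on an invalid (NULL) conversion, mirroring tsdn_tsd().
 */
static tsd_sketch_t *
tsdn_tsd_sketch(tsdn_sketch_t *tsdn)
{
	assert(tsdn != NULL);
	return tsdn;
}
```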
* Optimize the fast paths of calloc() and [m,d,sd]allocx(). (Jason Evans, 2016-05-06; 1 file, -1/+1)
  This is a broader application of optimizations to malloc() and free() in f4a0f32d340985de477bbe329ecdaecd69ed1055 (Fast-path improvement: reduce # of branches and unnecessary operations.). This resolves #321.
* Add the stats.retained and stats.arenas.<i>.retained statistics. (Jason Evans, 2016-05-04; 1 file, -0/+1)
  This resolves #367.
* Fix huge_palloc() regression. (Jason Evans, 2016-05-04; 1 file, -2/+2)
  Split arena_choose() into arena_[i]choose() and use arena_ichoose() for arena lookup during internal allocation. This fixes huge_palloc() so that it always succeeds during extent node allocation. This regression was introduced by 66cd953514a18477eb49732e40d5c2ab5f1b12c5 (Do not allocate metadata via non-auto arenas, nor tcaches.).
* Fix fork()-related lock rank ordering reversals. (Jason Evans, 2016-04-26; 1 file, -5/+23)
* Fix arena reset effects on large/huge stats. (Jason Evans, 2016-04-25; 1 file, -5/+24)
  Reset large curruns to 0 during arena reset. Do not increase huge ndalloc stats during arena reset.
* Implement the arena.<i>.reset mallctl. (Jason Evans, 2016-04-22; 1 file, -37/+188)
  This makes it possible to discard all of an arena's allocations in a single operation. This resolves #146.