| Commit message | Author | Age | Files | Lines |
|---|---|---|---|---|
| ... | | | | |
| Add rtree lookup path caching. | Jason Evans | 2016-06-06 | 7 | -95/+268 |
| rtree-based extent lookups remain more expensive than chunk-based run lookups, but with this optimization the fast-path slowdown is ~3 CPU cycles per metadata lookup (on an Intel Core i7-4980HQ), versus ~11 cycles prior. The path-caching speedup tends to degrade gracefully unless allocated memory is spread far apart (as is the case when using a mixture of sbrk() and mmap()). *(See the rtree path-cache sketch after this log.)* | | | | |
| Make tsd cleanup functions optional, remove noop cleanup functions. | Jason Evans | 2016-06-06 | 11 | -81/+23 |
| Remove some unnecessary locking. | Jason Evans | 2016-06-06 | 1 | -20/+2 |
| Reduce NSZS, since NSIZES (was nsizes) cannot be so large. | Jason Evans | 2016-06-06 | 1 | -1/+1 |
| Fix rallocx() sampling code to not eagerly commit sampler update. | Jason Evans | 2016-06-06 | 1 | -3/+3 |
| rallocx() for an alignment-constrained request may end up with a smaller-than-worst-case size if in-place reallocation succeeds due to serendipitous alignment. In such cases, sampling may not happen. | | | | |
| Add a missing prof_alloc_rollback() call. | Jason Evans | 2016-06-06 | 1 | -0/+1 |
| In the case where prof_alloc_prep() is called with an over-estimate of the allocation size, and sampling doesn't end up being triggered, the tctx must be discarded. | | | | |
| Miscellaneous s/chunk/extent/ updates. | Jason Evans | 2016-06-06 | 8 | -19/+17 |
| Relax NBINS constraint (max 255 --> max 256). | Jason Evans | 2016-06-06 | 1 | -4/+2 |
| Relax opt_lg_chunk clamping constraints. | Jason Evans | 2016-06-06 | 1 | -10/+2 |
| Remove obsolete stats.arenas.<i>.metadata.mapped mallctl. | Jason Evans | 2016-06-06 | 8 | -73/+34 |
| Rename the stats.arenas.<i>.metadata.allocated mallctl to stats.arenas.<i>.metadata. | | | | |
| Better document --enable-ivsalloc. | Jason Evans | 2016-06-06 | 3 | -6/+14 |
| Rename most remaining *chunk* APIs to *extent*. | Jason Evans | 2016-06-06 | 19 | -1159/+1151 |
| s/chunk_lookup/extent_lookup/g, s/chunks_rtree/extents_rtree/g | Jason Evans | 2016-06-06 | 8 | -44/+55 |
| s/CHUNK_HOOKS_INITIALIZER/EXTENT_HOOKS_INITIALIZER/g | Jason Evans | 2016-06-06 | 5 | -16/+16 |
| Rename chunks_{cached,retained,mtx} to extents_{cached,retained,mtx}. | Jason Evans | 2016-06-06 | 4 | -34/+35 |
| Rename chunk_*_t hooks to extent_*_t. | Jason Evans | 2016-06-06 | 3 | -126/+129 |
| s/chunk_hook/extent_hook/g | Jason Evans | 2016-06-06 | 12 | -191/+200 |
| Rename huge to large. | Jason Evans | 2016-06-06 | 37 | -626/+587 |
| Update private symbols. | Jason Evans | 2016-06-06 | 2 | -13/+21 |
| Move slabs out of chunks. | Jason Evans | 2016-06-06 | 21 | -2327/+591 |
| Improve interval-based profile dump triggering. | Jason Evans | 2016-06-06 | 2 | -1/+15 |
| When an allocation is large enough to trigger multiple dumps, use modular arithmetic rather than subtraction to reset the interval counter. Prior to this change, a single large allocation could cause many subsequent allocations to each trigger a profile dump. When updating the usable size for a sampled object, try to cancel out the difference between LARGE_MINCLASS and the usable size from the interval counter. *(See the interval-accumulator sketch after this log.)* | | | | |
| Use huge size class infrastructure for large size classes. | Jason Evans | 2016-06-06 | 34 | -1975/+459 |
| Implement cache-oblivious support for huge size classes. | Jason Evans | 2016-06-03 | 12 | -170/+298 |
| Allow chunks to not be naturally aligned. | Jason Evans | 2016-06-03 | 11 | -268/+105 |
| Precisely size extents for huge size classes that aren't multiples of chunksize. | | | | |
| Remove CHUNK_ADDR2BASE() and CHUNK_ADDR2OFFSET(). | Jason Evans | 2016-06-03 | 6 | -183/+190 |
| Make extent_prof_tctx_[gs]et() atomic. | Jason Evans | 2016-06-03 | 1 | -3/+7 |
| Add extent_dirty_[gs]et(). | Jason Evans | 2016-06-03 | 6 | -10/+34 |
| Convert rtree from per chunk to per page. | Jason Evans | 2016-06-03 | 5 | -52/+94 |
| Refactor [de]registration to maintain interior rtree entries for slabs. | | | | |
| Refactor chunk_purge_wrapper() to take extent argument. | Jason Evans | 2016-06-03 | 4 | -12/+10 |
| Refactor chunk_[de]commit_wrapper() to take extent arguments. | Jason Evans | 2016-06-03 | 3 | -16/+14 |
| Refactor chunk_dalloc_{cache,wrapper}() to take extent arguments. | Jason Evans | 2016-06-03 | 10 | -198/+147 |
| Rename arena_extent_[d]alloc() to extent_[d]alloc(). Move all chunk [de]registration responsibility into chunk.c. | | | | |
| Add/use chunk_split_wrapper(). | Jason Evans | 2016-06-03 | 7 | -519/+563 |
| Remove redundant ptr/oldsize args from huge_*(). Refactor huge/chunk/arena code boundaries. | | | | |
| Add/use chunk_merge_wrapper(). | Jason Evans | 2016-06-03 | 6 | -93/+101 |
| Add/use chunk_commit_wrapper(). | Jason Evans | 2016-06-03 | 4 | -30/+44 |
| Add/use chunk_decommit_wrapper(). | Jason Evans | 2016-06-03 | 4 | -7/+20 |
| Merge chunk_alloc_base() into its only caller. | Jason Evans | 2016-06-03 | 4 | -23/+9 |
| Replace extent_tree_szad_* with extent_heap_*. | Jason Evans | 2016-06-03 | 9 | -103/+332 |
| Use rtree rather than [sz]ad trees for chunk split/coalesce operations. | Jason Evans | 2016-06-03 | 6 | -197/+233 |
| Dodge ivsalloc() assertion in test code. | Jason Evans | 2016-06-03 | 1 | -1/+16 |
| Remove redundant chunk argument from chunk_{,de,re}register(). | Jason Evans | 2016-06-03 | 4 | -25/+25 |
| Fix opt_zero-triggered in-place huge reallocation zeroing. | Jason Evans | 2016-06-03 | 1 | -4/+4 |
| Fix huge_ralloc_no_move_expand() to update the extent's zeroed attribute based on the intersection of the previous value and that of the newly merged trailing extent. | | | | |
| Add extent_past_get(). | Jason Evans | 2016-06-03 | 2 | -0/+9 |
| Replace extent_achunk_[gs]et() with extent_slab_[gs]et(). | Jason Evans | 2016-06-03 | 8 | -33/+33 |
| Add extent_active_[gs]et(). | Jason Evans | 2016-06-03 | 7 | -21/+37 |
| Always initialize extents' runs_dirty and chunks_cache linkage. | | | | |
| Move *PAGE* definitions to pages.h. | Jason Evans | 2016-06-03 | 2 | -15/+15 |
| Set/unset rtree node for last chunk of extents. | Jason Evans | 2016-06-03 | 1 | -4/+41 |
| Set/unset the rtree node for the last chunk of each extent, so that the rtree can be used for chunk coalescing. | | | | |
| Add rtree element witnesses. | Jason Evans | 2016-06-03 | 10 | -40/+241 |
| Refactor rtree to always use base_alloc() for node allocation. | Jason Evans | 2016-06-03 | 15 | -217/+315 |
| Use rtree-based chunk lookups rather than pointer bit twiddling. | Jason Evans | 2016-06-03 | 14 | -504/+548 |
| Look up chunk metadata via the radix tree rather than using CHUNK_ADDR2BASE(). Propagate the pointer's containing extent, and minimize extent lookups by doing a single lookup (e.g. in free()) and propagating the pointer's extent into nearly all of the functions that may need it. *(See the single-lookup free() sketch after this log.)* | | | | |
| Add element acquire/release capabilities to rtree. | Jason Evans | 2016-06-03 | 6 | -136/+303 |
| This makes it possible to acquire short-term "ownership" of rtree elements, so that an extent pointer *and* the extent's contents can be read with a guarantee that the element will not be modified until the ownership is released. This is intended as a mechanism for resolving rtree read/write races rather than as a way to lock extents. *(See the acquire/release sketch after this log.)* | | | | |
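
The "Add rtree lookup path caching" entry above describes caching the result of the previous radix-tree walk so that repeated metadata lookups into the same region skip the full walk. Below is a minimal, hypothetical C sketch of that idea; the `demo_*` names, the two-level layout, and the fanout constants are invented for illustration and are not jemalloc's actual rtree code.

```c
/*
 * Illustrative sketch (not jemalloc's code): a one-entry "last leaf"
 * cache in front of a two-level radix tree keyed by page address.
 * Single-threaded sketch; a real allocator would keep the cache in
 * per-thread state.
 */
#include <stdint.h>
#include <stddef.h>

#define DEMO_LG_PAGE   12
#define DEMO_LEAF_BITS 8                       /* low-level fanout */
#define DEMO_LEAF_SIZE (1u << DEMO_LEAF_BITS)
#define DEMO_ROOT_BITS 8                       /* high-level fanout */
#define DEMO_ROOT_SIZE (1u << DEMO_ROOT_BITS)

typedef struct { void *slots[DEMO_LEAF_SIZE]; } demo_leaf_t;

typedef struct {
    demo_leaf_t *root[DEMO_ROOT_SIZE];
    /* Path cache: the subkey that selected the last leaf, and the leaf. */
    uintptr_t    cache_subkey;
    demo_leaf_t *cache_leaf;
} demo_rtree_t;

static inline uintptr_t demo_root_subkey(uintptr_t key) {
    return (key >> (DEMO_LG_PAGE + DEMO_LEAF_BITS)) & (DEMO_ROOT_SIZE - 1);
}
static inline uintptr_t demo_leaf_subkey(uintptr_t key) {
    return (key >> DEMO_LG_PAGE) & (DEMO_LEAF_SIZE - 1);
}

void *
demo_rtree_lookup(demo_rtree_t *rt, const void *ptr)
{
    uintptr_t key = (uintptr_t)ptr;
    uintptr_t root_sub = demo_root_subkey(key);

    /*
     * Fast path: if the previous lookup walked to the same leaf, reuse it
     * and skip the tree walk entirely.
     */
    if (rt->cache_leaf != NULL && rt->cache_subkey == root_sub)
        return rt->cache_leaf->slots[demo_leaf_subkey(key)];

    /* Slow path: full radix walk, then refresh the cache. */
    demo_leaf_t *leaf = rt->root[root_sub];
    if (leaf == NULL)
        return NULL;
    rt->cache_subkey = root_sub;
    rt->cache_leaf = leaf;
    return leaf->slots[demo_leaf_subkey(key)];
}
```

The cache only helps when consecutive lookups map to the same leaf, which matches the commit's note that the speedup degrades when allocated memory is spread far apart (e.g. a mix of sbrk() and mmap()).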
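
The "Improve interval-based profile dump triggering" entry switches the interval counter's reset from subtraction to modular arithmetic. A small worked sketch of why that matters, using hypothetical names (this is not jemalloc's prof code):

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical interval accumulator: `accum` counts bytes allocated since
 * the last interval-triggered dump, `interval` is the dump interval in
 * bytes (assumed > 0).  Returns true if this allocation should trigger a
 * profile dump.
 */
static bool
accum_update(uint64_t *accum, uint64_t interval, uint64_t alloc_size)
{
    uint64_t a = *accum + alloc_size;
    bool trigger = (a >= interval);

    /*
     * A subtraction-based reset (a - interval) can leave the counter at or
     * above the interval after one very large allocation, so many of the
     * following allocations would each trigger another dump.  Keeping only
     * the remainder caps the effect of a single allocation at one dump.
     */
    *accum = trigger ? (a % interval) : a;
    return trigger;
}
```

With an interval of 1 MiB, a single 10 MiB allocation under the subtraction scheme leaves ~9 MiB in the counter, so roughly the next nine allocations each trigger another dump; the modular reset keeps only the remainder, so one large allocation triggers at most one dump.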
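
"Use rtree-based chunk lookups rather than pointer bit twiddling" mentions doing a single lookup in free() and propagating the pointer's extent to downstream functions. Here is a toy sketch of that call shape; all `demo_*` names and the stubbed lookup are invented for illustration and stand in for the real rtree lookup and deallocation paths.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Hypothetical extent metadata. */
typedef struct {
    void  *addr;
    size_t size;
    bool   slab;   /* small-object slab vs. large extent */
} extent_t;

/* Stand-in for the rtree lookup; a real allocator consults the rtree. */
static extent_t demo_only_extent = { NULL, 4096, true };
static extent_t *
demo_extent_lookup(const void *ptr)
{
    (void)ptr;
    return &demo_only_extent;
}

static void
demo_small_dalloc(extent_t *extent, void *ptr)
{
    printf("small dalloc %p from slab extent %p\n", ptr, (void *)extent);
}

static void
demo_large_dalloc(extent_t *extent, void *ptr)
{
    printf("large dalloc %p from extent %p\n", ptr, (void *)extent);
}

/*
 * Single metadata lookup at the entry point; the extent is then passed to
 * every downstream function rather than re-derived from the pointer.
 */
void
demo_free(void *ptr)
{
    if (ptr == NULL)
        return;
    extent_t *extent = demo_extent_lookup(ptr);
    if (extent->slab)
        demo_small_dalloc(extent, ptr);
    else
        demo_large_dalloc(extent, ptr);
}
```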
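
"Add element acquire/release capabilities to rtree" describes taking short-term ownership of an rtree element so its extent pointer and the extent's contents can be read consistently. One way such a protocol could look in C11 atomics is sketched below, stealing the low pointer bit as a lock flag; the names and representation are hypothetical and this is not jemalloc's implementation.

```c
#include <stdatomic.h>
#include <stdint.h>

typedef struct extent_s extent_t;   /* opaque extent metadata */

typedef struct {
    /* Extent pointer; bit 0 doubles as a "locked" flag (extents are
     * assumed to be at least word-aligned, so bit 0 is otherwise unused). */
    _Atomic(uintptr_t) val;
} demo_rtree_elm_t;

#define DEMO_LOCKED_BIT ((uintptr_t)1)

/* Spin until the element is unlocked, then mark it locked and return the
 * extent pointer it held.  The element cannot change until release. */
static extent_t *
demo_rtree_elm_acquire(demo_rtree_elm_t *elm)
{
    for (;;) {
        uintptr_t v = atomic_load_explicit(&elm->val, memory_order_acquire);
        if (v & DEMO_LOCKED_BIT)
            continue;   /* another thread holds the element */
        if (atomic_compare_exchange_weak_explicit(&elm->val, &v,
            v | DEMO_LOCKED_BIT, memory_order_acquire, memory_order_relaxed))
            return (extent_t *)v;
    }
}

/* Publish the (possibly updated) extent pointer and drop ownership. */
static void
demo_rtree_elm_release(demo_rtree_elm_t *elm, extent_t *extent)
{
    atomic_store_explicit(&elm->val, (uintptr_t)extent, memory_order_release);
}
```

A reader would acquire the element, examine the returned extent, then release it with the same (or an updated) pointer; concurrent writers to the same element spin until the flag clears.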
