path: root/src/huge.c
Commit log (newest first). Each entry: commit message (author, date; files changed, lines -removed/+added).
* Rename huge to large. (Jason Evans, 2016-06-06; 1 file, -352/+0)
* Update private symbols. (Jason Evans, 2016-06-06; 1 file, -2/+2)
* Move slabs out of chunks. (Jason Evans, 2016-06-06; 1 file, -2/+2)
* Use huge size class infrastructure for large size classes. (Jason Evans, 2016-06-06; 1 file, -18/+49)
* Implement cache-oblivious support for huge size classes. (Jason Evans, 2016-06-03; 1 file, -36/+52)
* Allow chunks to not be naturally aligned. (Jason Evans, 2016-06-03; 1 file, -123/+22)
  Precisely size extents for huge size classes that aren't multiples of chunksize.
* Convert rtree from per chunk to per page. (Jason Evans, 2016-06-03; 1 file, -2/+2)
  Refactor [de]registration to maintain interior rtree entries for slabs.
* Refactor chunk_purge_wrapper() to take extent argument. (Jason Evans, 2016-06-03; 1 file, -4/+2)
* Refactor chunk_dalloc_{cache,wrapper}() to take extent arguments. (Jason Evans, 2016-06-03; 1 file, -26/+11)
  Rename arena_extent_[d]alloc() to extent_[d]alloc(). Move all chunk [de]registration responsibility into chunk.c.
* Add/use chunk_split_wrapper(). (Jason Evans, 2016-06-03; 1 file, -123/+137)
  Remove redundant ptr/oldsize args from huge_*(). Refactor huge/chunk/arena code boundaries.
* Add/use chunk_merge_wrapper(). (Jason Evans, 2016-06-03; 1 file, -16/+4)
* Remove redundant chunk argument from chunk_{,de,re}register(). (Jason Evans, 2016-06-03; 1 file, -8/+8)
* Fix opt_zero-triggered in-place huge reallocation zeroing. (Jason Evans, 2016-06-03; 1 file, -4/+4)
  Fix huge_ralloc_no_move_expand() to update the extent's zeroed attribute based on the intersection of the previous value and that of the newly merged trailing extent.
* Replace extent_achunk_[gs]et() with extent_slab_[gs]et(). (Jason Evans, 2016-06-03; 1 file, -1/+1)
* Add extent_active_[gs]et(). (Jason Evans, 2016-06-03; 1 file, -1/+1)
  Always initialize extents' runs_dirty and chunks_cache linkage.
* Refactor rtree to always use base_alloc() for node allocation. (Jason Evans, 2016-06-03; 1 file, -8/+10)
* Use rtree-based chunk lookups rather than pointer bit twiddling. (Jason Evans, 2016-06-03; 1 file, -100/+49)
  Look up chunk metadata via the radix tree, rather than using CHUNK_ADDR2BASE(). Propagate pointer's containing extent. Minimize extent lookups by doing a single lookup (e.g. in free()) and propagating the pointer's extent into nearly all the functions that may need it.
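  The lookup-once pattern this describes can be sketched as follows; the function names here are illustrative, not jemalloc's exact internals:

      /* Hypothetical sketch: one radix-tree lookup per operation, with the
       * resulting extent threaded through the call chain. */
      void
      dalloc_path(void *ptr) {
          extent_t *extent = rtree_extent_lookup(ptr); /* single lookup */
          arena_dalloc(extent, ptr); /* extent propagated; no re-lookup */
      }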
* Rename extent_node_t to extent_t. (Jason Evans, 2016-05-16; 1 file, -73/+74)
* Remove quarantine support. (Jason Evans, 2016-05-13; 1 file, -6/+4)
* Guard tsdn_tsd() call with tsdn_null() check. (Jason Evans, 2016-05-11; 1 file, -2/+2)
* Fix chunk accounting related to triggering gdump profiles. (Jason Evans, 2016-05-11; 1 file, -0/+15)
  Fix in-place huge reallocation to update the chunk counters that are used for triggering gdump profiles.
* Resolve bootstrapping issues when embedded in FreeBSD libc. (Jason Evans, 2016-05-11; 1 file, -73/+79)
  b2c0d6322d2307458ae2b28545f8a5c9903d7ef5 (Add witness, a simple online locking validator.) caused a broad propagation of tsd throughout the internal API, but tsd_fetch() was designed to fail prior to tsd bootstrapping. Fix this by splitting tsd_t into non-nullable tsd_t and nullable tsdn_t, and modifying all internal APIs that do not critically rely on tsd to take nullable pointers.
  Furthermore, add the tsd_booted_get() function so that tsdn_fetch() can probe whether tsd bootstrapping is complete and return NULL if not. All dangerous conversions of nullable pointers are tsdn_tsd() calls that assert-fail on invalid conversion.
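  The resulting guard pattern can be sketched as follows (the surrounding function is hypothetical; tsdn_null() and tsdn_tsd() are the names given above):

      static void
      example_internal_op(tsdn_t *tsdn) {
          if (!tsdn_null(tsdn)) {
              tsd_t *tsd = tsdn_tsd(tsdn); /* safe: tsdn is non-NULL */
              /* ... work that critically relies on tsd ... */
          } else {
              /* Pre-bootstrap path: proceed without tsd. */
          }
      }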
* Optimize the fast paths of calloc() and [m,d,sd]allocx(). (Jason Evans, 2016-05-06; 1 file, -1/+1)
  This is a broader application of optimizations to malloc() and free() in f4a0f32d340985de477bbe329ecdaecd69ed1055 (Fast-path improvement: reduce # of branches and unnecessary operations.). This resolves #321.
* Fix huge_palloc() regression. (Jason Evans, 2016-05-04; 1 file, -2/+3)
  Split arena_choose() into arena_[i]choose() and use arena_ichoose() for arena lookup during internal allocation. This fixes huge_palloc() so that it always succeeds during extent node allocation. This regression was introduced by 66cd953514a18477eb49732e40d5c2ab5f1b12c5 (Do not allocate metadata via non-auto arenas, nor tcaches.).
* Do not allocate metadata via non-auto arenas, nor tcaches. (Jason Evans, 2016-04-22; 1 file, -15/+13)
  This assures that all internally allocated metadata come from the first opt_narenas arenas, i.e. the automatically multiplexed arenas.
* Add witness, a simple online locking validator. (Jason Evans, 2016-04-14; 1 file, -52/+54)
  This resolves #358.
* Add JEMALLOC_ALLOC_JUNK and JEMALLOC_FREE_JUNK macros. (Chris Peterson, 2016-03-31; 1 file, -7/+8)
  Replace hardcoded 0xa5 and 0x5a junk values with JEMALLOC_ALLOC_JUNK and JEMALLOC_FREE_JUNK macros, respectively.
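  The definitions are presumably along these lines (a sketch based on the values named above, not the exact header):

      #define JEMALLOC_ALLOC_JUNK ((uint8_t)0xa5) /* fill for junked allocations */
      #define JEMALLOC_FREE_JUNK  ((uint8_t)0x5a) /* fill for junked freed memory */

      /* Usage replaces the hardcoded constants, e.g.: */
      memset(ptr, JEMALLOC_ALLOC_JUNK, usize);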
* Make *allocx() size class overflow behavior defined. (Jason Evans, 2016-02-25; 1 file, -17/+17)
  Limit supported size and alignment to HUGE_MAXCLASS, which in turn is now limited to be less than PTRDIFF_MAX. This resolves #278 and #295.
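  From the caller's perspective, an overflowing request now fails cleanly rather than invoking undefined behavior; a minimal sketch (assuming <jemalloc/jemalloc.h> and <assert.h>):

      void *p = mallocx(SIZE_MAX, MALLOCX_ALIGN(4096));
      assert(p == NULL); /* defined behavior: the allocation fails */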
* Implement decay-based unused dirty page purging. (Jason Evans, 2016-02-20; 1 file, -6/+19)
  This is an alternative to the existing ratio-based unused dirty page purging, and is intended to eventually become the sole purging mechanism.
  Add mallctls:
  - opt.purge
  - opt.decay_time
  - arena.<i>.decay
  - arena.<i>.decay_time
  - arenas.decay_time
  - stats.arenas.<i>.decay_time
  This resolves #325.
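  A minimal sketch of the new controls (error handling omitted; the 10-second value is just an example):

      #include <jemalloc/jemalloc.h>

      ssize_t decay_time = 10; /* seconds; -1 disables decay-based purging */
      mallctl("arenas.decay_time", NULL, NULL, &decay_time, sizeof(decay_time));

      /* Explicitly trigger decay-based purging for arena 0. */
      mallctl("arena.0.decay", NULL, NULL, NULL, 0);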
* Fast-path improvement: reduce # of branches and unnecessary operations. (Qi Wang, 2015-11-10; 1 file, -3/+3)
  - Combine multiple runtime branches into a single malloc_slow check.
  - Avoid calling arena_choose / size2index / index2size on fast path.
  - A few micro optimizations.
* Fix xallocx(..., MALLOCX_ZERO) bugs. (Jason Evans, 2015-09-24; 1 file, -14/+16)
  Zero all trailing bytes of large allocations when the --enable-cache-oblivious configure option is enabled. This regression was introduced by 8a03cf039cd06f9fa6972711195055d865673966 (Implement cache index randomization for large allocations.).
  Zero trailing bytes of huge allocations when resizing from/to a size class that is not a multiple of the chunk size.
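  A hedged sketch of the usage being fixed:

      /* Attempt an in-place resize; with MALLOCX_ZERO, any bytes beyond the
       * old usable size must come back zeroed. */
      size_t usize = xallocx(p, new_size, 0, MALLOCX_ZERO);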
* Resolve an unsupported special case in arena_prof_tctx_set(). (Jason Evans, 2015-09-15; 1 file, -0/+7)
  Add arena_prof_tctx_reset() and use it instead of arena_prof_tctx_set() when resetting the tctx pointer during reallocation, which happens whenever an originally sampled reallocated object is not sampled during reallocation. This regression was introduced by 594c759f37c301d0245dc2accf4d4aaf9d202819 (Optimize arena_prof_tctx_set().)
* Fix xallocx() bugs. (Jason Evans, 2015-09-12; 1 file, -63/+46)
  Fix xallocx() bugs related to the 'extra' parameter when specified as non-zero.
* Don't purge junk-filled chunks when shrinking huge allocations. (Mike Hommey, 2015-08-28; 1 file, -6/+8)
  When junk filling is enabled, shrinking an allocation fills the bytes that were previously allocated but now aren't. Purging the chunk before doing that is just a waste of time. This resolves #260.
* Fix chunk purge hook calls for in-place huge shrinking reallocation. (Mike Hommey, 2015-08-28; 1 file, -2/+2)
  Fix chunk purge hook calls for in-place huge shrinking reallocation to specify the old chunk size rather than the new chunk size. This bug caused no correctness issues for the default chunk purge function, but was visible to custom functions set via the "arena.<i>.chunk_hooks" mallctl. This resolves #264.
* Implement chunk hook support for page run commit/decommit. (Jason Evans, 2015-08-07; 1 file, -1/+1)
  Cascade from decommit to purge when purging unused dirty pages, so that it is possible to decommit cleaned memory rather than just purging. For non-Windows debug builds, decommit runs rather than purging them, since this causes access of deallocated runs to segfault. This resolves #251.
* Generalize chunk management hooks. (Jason Evans, 2015-08-04; 1 file, -23/+21)
  Add the "arena.<i>.chunk_hooks" mallctl, which replaces and expands on the "arena.<i>.chunk.{alloc,dalloc,purge}" mallctls. The chunk hooks allow control over chunk allocation/deallocation, decommit/commit, purging, and splitting/merging, such that the application can rely on jemalloc's internal chunk caching and retaining functionality, yet implement a variety of chunk management mechanisms and policies.
  Merge the chunks_[sz]ad_{mmap,dss} red-black trees into chunks_[sz]ad_retained. This slightly reduces how hard jemalloc tries to honor the dss precedence setting; prior to this change the precedence setting was also consulted when recycling chunks.
  Fix chunk purging. Don't purge chunks in arena_purge_stashed(); instead deallocate them in arena_unstash_purged(), so that the dirty memory linkage remains valid until after the last time it is used.
  This resolves #176 and #201.
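  Installing custom hooks would look roughly like this (my_chunk_* are hypothetical application-supplied functions; the member order follows the jemalloc 4.x manual):

      chunk_hooks_t hooks = {
          my_chunk_alloc, my_chunk_dalloc, my_chunk_commit, my_chunk_decommit,
          my_chunk_purge, my_chunk_split, my_chunk_merge
      };
      chunk_hooks_t old_hooks;
      size_t sz = sizeof(chunk_hooks_t);
      mallctl("arena.0.chunk_hooks", &old_hooks, &sz, &hooks, sizeof(hooks));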
* Implement support for non-coalescing maps on MinGW. (Jason Evans, 2015-07-25; 1 file, -0/+3)
  - Do not reallocate huge objects in place if the number of backing chunks would change.
  - Do not cache multi-chunk mappings.
  This resolves #213.
* Fix huge_ralloc_no_move() to succeed more often. (Jason Evans, 2015-07-25; 1 file, -1/+1)
  Fix huge_ralloc_no_move() to succeed if an allocation request results in the same usable size as the existing allocation, even if the request size is smaller than the usable size. This bug did not cause correctness issues, but it could cause unnecessary moves during reallocation.
* Fix huge_palloc() to handle size rather than usize input. (Jason Evans, 2015-07-24; 1 file, -6/+12)
  huge_ralloc() passes a size that may not be precisely a size class, so make huge_palloc() handle the more general case of a size input rather than usize. This regression appears to have been introduced by the addition of in-place huge reallocation; as such it was never incorporated into a release.
* Avoid atomic operations for dependent rtree reads. (Jason Evans, 2015-05-16; 1 file, -1/+1)
* Fix in-place shrinking huge reallocation purging bugs. (Jason Evans, 2015-03-26; 1 file, -15/+16)
  Fix the shrinking case of huge_ralloc_no_move_similar() to purge the correct number of pages, at the correct offset. This regression was introduced by 8d6a3e8321a7767cb2ca0930b85d5d488a8cc659 (Implement dynamic per arena control over dirty page purging.).
  Fix huge_ralloc_no_move_shrink() to purge the correct number of pages. This bug was introduced by 9673983443a0782d975fbcb5d8457cfd411b8b56 (Purge/zero sub-chunk huge allocations as necessary.).
* Implement dynamic per arena control over dirty page purging. (Jason Evans, 2015-03-19; 1 file, -13/+25)
  Add mallctls:
  - arenas.lg_dirty_mult is initialized via opt.lg_dirty_mult, and can be modified to change the initial lg_dirty_mult setting for newly created arenas.
  - arena.<i>.lg_dirty_mult controls an individual arena's dirty page purging threshold, and synchronously triggers any purging that may be necessary to maintain the constraint.
  - arena.<i>.chunk.purge allows the per arena dirty page purging function to be replaced.
  This resolves #93.
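  For example (a sketch; the threshold value is arbitrary):

      ssize_t lg_dirty_mult = 5; /* purge when dirty > active / 2^5 */
      mallctl("arena.0.lg_dirty_mult", NULL, NULL,
              &lg_dirty_mult, sizeof(lg_dirty_mult));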
* Simplify extent_node_t and add extent_node_init(). (Jason Evans, 2015-02-17; 1 file, -5/+1)
* Integrate whole chunks into unused dirty page purging machinery. (Jason Evans, 2015-02-17; 1 file, -29/+32)
  Extend per arena unused dirty page purging to manage unused dirty chunks in addition to unused dirty runs. Rather than immediately unmapping deallocated chunks (or purging them in the --disable-munmap case), store them in a separate set of trees, chunks_[sz]ad_dirty. Preferentially allocate dirty chunks. When excessive unused dirty pages accumulate, purge runs and chunks in integrated LRU order (and unmap chunks in the --enable-munmap case).
  Refactor extent_node_t to provide accessor functions.
* Normalize *_link and link_* fields to all be *_link. (Jason Evans, 2015-02-16; 1 file, -3/+3)
* Move centralized chunk management into arenas. (Jason Evans, 2015-02-12; 1 file, -88/+81)
  Migrate all centralized data structures related to huge allocations and recyclable chunks into arena_t, so that each arena can manage huge allocations and recyclable virtual memory completely independently of other arenas.
  Add chunk node caching to arenas, in order to avoid contention on the base allocator.
  Use chunks_rtree to look up huge allocations rather than a red-black tree. Maintain a per arena unsorted list of huge allocations (which will be needed to enumerate huge allocations during arena reset).
  Remove the --enable-ivsalloc option, make ivsalloc() always available, and use it for size queries if --enable-debug is enabled. The only practical implications to this removal are that 1) ivsalloc() is now always available during live debugging (and the underlying radix tree is available during core-based debugging), and 2) size query validation can no longer be enabled independent of --enable-debug.
  Remove the stats.chunks.{current,total,high} mallctls, and replace their underlying statistics with simpler atomically updated counters used exclusively for gdump triggering. These statistics are no longer very useful because each arena manages chunks independently, and per arena statistics provide similar information.
  Simplify chunk synchronization code, now that base chunk allocation cannot cause recursive lock acquisition.
* Implement explicit tcache support. (Jason Evans, 2015-02-10; 1 file, -20/+16)
  Add the MALLOCX_TCACHE() and MALLOCX_TCACHE_NONE macros, which can be used in conjunction with the *allocx() API.
  Add the tcache.create, tcache.flush, and tcache.destroy mallctls.
  This resolves #145.
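  Typical usage of the explicit tcache API (error handling omitted):

      unsigned tci;
      size_t sz = sizeof(tci);
      mallctl("tcache.create", &tci, &sz, NULL, 0);

      void *p = mallocx(4096, MALLOCX_TCACHE(tci));
      dallocx(p, MALLOCX_TCACHE(tci));

      mallctl("tcache.flush", NULL, NULL, &tci, sizeof(tci));
      mallctl("tcache.destroy", NULL, NULL, &tci, sizeof(tci));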
* huge_node_locked doesn't have to unlock huge_mtx (Sébastien Marie, 2015-01-25; 1 file, -1/+0)
  In src/huge.c, after each call of huge_node_locked(), huge_mtx is already unlocked. Don't unlock it twice (it is undefined behaviour).
* Implement metadata statistics. (Jason Evans, 2015-01-24; 1 file, -64/+49)
  There are three categories of metadata:
  - Base allocations are used for bootstrap-sensitive internal allocator data structures.
  - Arena chunk headers comprise pages which track the states of the non-metadata pages.
  - Internal allocations differ from application-originated allocations in that they are for internal use, and that they are omitted from heap profiles.
  The metadata statistics comprise the metadata categories as follows:
  - stats.metadata: All metadata -- base + arena chunk headers + internal allocations.
  - stats.arenas.<i>.metadata.mapped: Arena chunk headers.
  - stats.arenas.<i>.metadata.allocated: Internal allocations. This is reported separately from the other metadata statistics because it overlaps with the allocated and active statistics, whereas the other metadata statistics do not.
  Base allocations are not reported separately, though their magnitude can be computed by subtracting the arena-specific metadata.
  This resolves #163.
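  Reading the new statistics looks roughly like this (refresh via the pre-existing "epoch" mallctl first; error handling omitted):

      uint64_t epoch = 1;
      size_t sz = sizeof(epoch);
      mallctl("epoch", &epoch, &sz, &epoch, sizeof(epoch));

      size_t metadata;
      sz = sizeof(metadata);
      mallctl("stats.metadata", &metadata, &sz, NULL, 0);
      printf("total metadata: %zu bytes\n", metadata);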