path: root/src/jemalloc.c
Commit log (each entry: commit message, author, date, files changed, lines -/+)
...
* Remove all vestiges of chunks. (Jason Evans, 2016-10-12, 1 file, -4/+0)
  Remove mallctls:
  - opt.lg_chunk
  - stats.cactive
  This resolves #464.
* Remove ratio-based purging. (Jason Evans, 2016-10-12, 1 file, -19/+0)
  Make decay-based purging the default (and only) mode. Remove associated mallctls:
  - opt.purge
  - opt.lg_dirty_mult
  - arena.<i>.lg_dirty_mult
  - arenas.lg_dirty_mult
  - stats.arenas.<i>.lg_dirty_mult
  This resolves #385.
* Fix arena_bind(). (Qi Wang, 2016-09-22, 1 file, -6/+7)
  When tsd is not in nominal state (e.g. during thread termination), we should not increment nthreads.
* Make tsd cleanup functions optional, remove noop cleanup functions. (Jason Evans, 2016-06-06, 1 file, -28/+0)
* Fix rallocx() sampling code to not eagerly commit sampler update. (Jason Evans, 2016-06-06, 1 file, -3/+3)
  rallocx() for an alignment-constrained request may end up with a smaller-than-worst-case size if in-place reallocation succeeds due to serendipitous alignment. In such cases, sampling may not happen.
* Relax opt_lg_chunk clamping constraints. (Jason Evans, 2016-06-06, 1 file, -10/+2)
* Rename most remaining *chunk* APIs to *extent*. (Jason Evans, 2016-06-06, 1 file, -4/+4)
* s/chunk_lookup/extent_lookup/g, s/chunks_rtree/extents_rtree/g (Jason Evans, 2016-06-06, 1 file, -0/+2)
* s/chunk_hook/extent_hook/g (Jason Evans, 2016-06-06, 1 file, -8/+10)
* Rename huge to large. (Jason Evans, 2016-06-06, 1 file, -16/+16)
* Move slabs out of chunks. (Jason Evans, 2016-06-06, 1 file, -22/+24)
* Use huge size class infrastructure for large size classes. (Jason Evans, 2016-06-06, 1 file, -24/+22)
* Refactor rtree to always use base_alloc() for node allocation. (Jason Evans, 2016-06-03, 1 file, -26/+30)
* Use rtree-based chunk lookups rather than pointer bit twiddling. (Jason Evans, 2016-06-03, 1 file, -92/+118)
  Look up chunk metadata via the radix tree, rather than using CHUNK_ADDR2BASE(). Propagate the pointer's containing extent. Minimize extent lookups by doing a single lookup (e.g. in free()) and propagating the pointer's extent into nearly all the functions that may need it.
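  A minimal sketch of the lookup-once-and-propagate pattern this commit describes; extent_t, rtree_lookup_stub(), and the other names below are hypothetical stand-ins, not jemalloc's actual API:

      #include <stddef.h>

      typedef struct { size_t usize; } extent_t;      /* hypothetical metadata */

      /* Stand-in for a radix-tree lookup keyed on the pointer's address. */
      static extent_t *rtree_lookup_stub(const void *ptr) {
          static extent_t e = { 4096 };
          (void)ptr;
          return &e;
      }

      static void dalloc_with_extent(extent_t *extent, void *ptr) {
          (void)extent; (void)ptr;   /* ...free using the extent's metadata... */
      }

      static void free_sketch(void *ptr) {
          if (ptr == NULL)
              return;
          extent_t *extent = rtree_lookup_stub(ptr);  /* single lookup */
          dalloc_with_extent(extent, ptr);            /* propagated; no re-lookup */
      }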
* Simplify run quantization. (Jason Evans, 2016-05-16, 1 file, -2/+1)
* Refactor runs_avail. (Jason Evans, 2016-05-16, 1 file, -3/+14)
  Use pszind_t size classes rather than szind_t size classes, and always reserve space for NPSIZES elements. This removes unused heaps that are not multiples of the page size, and adds (currently) unused heaps for all huge size classes, with the immediate benefit that the size of arena_t allocations is constant (no longer dependent on chunk size).
* Implement pz2ind(), pind2sz(), and psz2u(). (Jason Evans, 2016-05-13, 1 file, -2/+2)
  These compute size classes and indices similarly to size2index(), index2size() and s2u(), respectively, but using the subset of size classes that are multiples of the page size. Note that pszind_t and szind_t are not interchangeable.
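  An illustrative sketch of page-granular size-class helpers, under the simplifying assumption that every page multiple is its own class (jemalloc's real pszind_t spacing is coarser for large sizes); the _sketch names mark them as hypothetical:

      #include <stddef.h>

      #define PAGE_SKETCH ((size_t)4096)

      /* psz2u(): round a request (>= 1 byte) up to a page-multiple usable size. */
      static size_t psz2u_sketch(size_t size) {
          return (size + PAGE_SKETCH - 1) & ~(PAGE_SKETCH - 1);
      }

      /* pz2ind(): map a size to its page size-class index. */
      static size_t pz2ind_sketch(size_t size) {
          return psz2u_sketch(size) / PAGE_SKETCH - 1;
      }

      /* pind2sz(): map a page size-class index back to its usable size. */
      static size_t pind2sz_sketch(size_t pind) {
          return (pind + 1) * PAGE_SKETCH;
      }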
* Initialize arena_bin_info at compile time rather than at boot time. (Jason Evans, 2016-05-13, 1 file, -2/+2)
  This resolves #370.
* Remove redzone support. (Jason Evans, 2016-05-13, 1 file, -9/+6)
  This resolves #369.
* Remove quarantine support. (Jason Evans, 2016-05-13, 1 file, -46/+21)
* Remove Valgrind support. (Jason Evans, 2016-05-13, 1 file, -86/+9)
* Use TSDN_NULL rather than NULL as appropriate. (Jason Evans, 2016-05-13, 1 file, -2/+2)
* Resolve bootstrapping issues when embedded in FreeBSD libc. (Jason Evans, 2016-05-11, 1 file, -244/+270)
  b2c0d6322d2307458ae2b28545f8a5c9903d7ef5 (Add witness, a simple online locking validator.) caused a broad propagation of tsd throughout the internal API, but tsd_fetch() was designed to fail prior to tsd bootstrapping. Fix this by splitting tsd_t into non-nullable tsd_t and nullable tsdn_t, and modifying all internal APIs that do not critically rely on tsd to take nullable pointers. Furthermore, add the tsd_booted_get() function so that tsdn_fetch() can probe whether tsd bootstrapping is complete and return NULL if not. All dangerous conversions of nullable pointers are tsdn_tsd() calls that assert-fail on invalid conversion.
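  A minimal sketch of the nullable/non-nullable split described here; the types and helpers below are illustrative stand-ins rather than jemalloc's actual tsd API:

      #include <assert.h>
      #include <stdbool.h>
      #include <stddef.h>

      typedef struct { int depth; } tsd_t;   /* non-nullable once booted */
      typedef tsd_t tsdn_t;                  /* nullable before bootstrapping */

      static bool tsd_booted = false;
      static tsd_t tsd_instance;

      static bool tsd_booted_get_sketch(void) { return tsd_booted; }

      static tsdn_t *tsdn_fetch_sketch(void) {
          /* Probe bootstrapping state instead of failing hard pre-boot. */
          if (!tsd_booted_get_sketch())
              return NULL;
          return &tsd_instance;
      }

      static tsd_t *tsdn_tsd_sketch(tsdn_t *tsdn) {
          /* The one "dangerous" conversion: assert rather than use NULL. */
          assert(tsdn != NULL);
          return tsdn;
      }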
* Fix tsd bootstrapping for a0malloc(). (Jason Evans, 2016-05-07, 1 file, -27/+31)
* Optimize the fast paths of calloc() and [m,d,sd]allocx(). (Jason Evans, 2016-05-06, 1 file, -186/+114)
  This is a broader application of optimizations to malloc() and free() in f4a0f32d340985de477bbe329ecdaecd69ed1055 (Fast-path improvement: reduce # of branches and unnecessary operations.). This resolves #321.
* Modify pages_map() to support mapping uncommitted virtual memory. (Jason Evans, 2016-05-06, 1 file, -0/+1)
  If the OS overcommits:
  - Commit all mappings in pages_map() regardless of whether the caller requested committed memory.
  - Linux-specific: Specify MAP_NORESERVE to avoid unfortunate interactions with heuristic overcommit mode during fork(2).
  This resolves #193.
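  A sketch of how a mapping routine can behave under the two regimes described above; this is illustrative rather than jemalloc's pages_map(), and the overcommit flag is assumed to be detected elsewhere:

      #define _DEFAULT_SOURCE
      #include <stddef.h>
      #include <sys/mman.h>

      static void *pages_map_sketch(size_t size, int commit, int os_overcommits) {
          int prot, flags = MAP_PRIVATE | MAP_ANONYMOUS;

          if (os_overcommits) {
              /* Overcommitting OS: commit everything up front... */
              prot = PROT_READ | PROT_WRITE;
      #ifdef MAP_NORESERVE
              /* ...and avoid heuristic-overcommit interactions across fork(2). */
              flags |= MAP_NORESERVE;
      #endif
          } else {
              /* Respect the caller; PROT_NONE leaves the range uncommitted. */
              prot = commit ? (PROT_READ | PROT_WRITE) : PROT_NONE;
          }
          void *ret = mmap(NULL, size, prot, flags, -1, 0);
          return (ret == MAP_FAILED) ? NULL : ret;
      }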
* Fix witness/fork() interactions. (Jason Evans, 2016-04-26, 1 file, -3/+3)
  Fix witness to clear its list of owned mutexes in the child if platform-specific malloc_mutex code re-initializes mutexes rather than unlocking them.
* Fix fork()-related lock rank ordering reversals. (Jason Evans, 2016-04-26, 1 file, -12/+29)
* Fix arena_choose_hard() regression. (Jason Evans, 2016-04-23, 1 file, -1/+1)
  This regression was caused by 66cd953514a18477eb49732e40d5c2ab5f1b12c5 (Do not allocate metadata via non-auto arenas, nor tcaches.).
* Do not allocate metadata via non-auto arenas, nor tcaches. (Jason Evans, 2016-04-22, 1 file, -37/+75)
  This assures that all internally allocated metadata come from the first opt_narenas arenas, i.e. the automatically multiplexed arenas.
* Add witness, a simple online locking validator. (Jason Evans, 2016-04-14, 1 file, -147/+230)
  This resolves #358.
* Fix a potential tsd cleanup leak. (Jason Evans, 2016-02-28, 1 file, -0/+3)
  Prior to 767d85061a6fb88ec977bbcd9b429a43aff391e6 (Refactor arenas array (fixes deadlock).), it was possible under some circumstances for arena_get() to trigger recreation of the arenas cache during tsd cleanup, and the arenas cache would then be leaked. In principle a similar issue could still occur as a side effect of decay-based purging, which calls arena_tdata_get(). Fix arenas_tdata_cleanup() by setting tsd->arenas_tdata_bypass to true, so that arena_tdata_get() will gracefully fail (an expected behavior) rather than recreating tsd->arena_tdata.
  Reported by Christopher Ferris <cferris@google.com>.
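  A sketch of the bypass-flag pattern this fix describes: during per-thread cleanup, set a flag so the lazy-initialization path fails gracefully instead of recreating state that is being torn down. Structure and names are illustrative:

      #include <stdbool.h>
      #include <stdlib.h>

      typedef struct {
          void *arenas_tdata;
          bool arenas_tdata_bypass;
      } tsd_sketch_t;

      static void *arena_tdata_get_sketch(tsd_sketch_t *tsd) {
          if (tsd->arenas_tdata == NULL && !tsd->arenas_tdata_bypass)
              tsd->arenas_tdata = calloc(1, 64);   /* lazy (re)creation */
          return tsd->arenas_tdata;                /* NULL during cleanup is expected */
      }

      static void arenas_tdata_cleanup_sketch(tsd_sketch_t *tsd) {
          tsd->arenas_tdata_bypass = true;         /* block recreation from here on */
          free(tsd->arenas_tdata);
          tsd->arenas_tdata = NULL;
      }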
* Add more HUGE_MAXCLASS overflow checks. (Jason Evans, 2016-02-26, 1 file, -23/+34)
  Add HUGE_MAXCLASS overflow checks that are specific to heap profiling code paths. This fixes test failures that were introduced by 0c516a00c4cb28cff55ce0995f756b5aae074c9e (Make *allocx() size class overflow behavior defined.).
* Make *allocx() size class overflow behavior defined. (Jason Evans, 2016-02-25, 1 file, -24/+44)
  Limit supported size and alignment to HUGE_MAXCLASS, which in turn is now limited to be less than PTRDIFF_MAX. This resolves #278 and #295.
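  A sketch of the kind of check these two commits describe: reject any request whose usable size could exceed the largest size class (itself kept below PTRDIFF_MAX) before any rounding that could wrap. The constant and helper are illustrative, not jemalloc's:

      #include <stddef.h>
      #include <stdint.h>

      /* Hypothetical stand-in for HUGE_MAXCLASS; only its "< PTRDIFF_MAX"
       * property matters here. */
      #define HUGE_MAXCLASS_SKETCH (((size_t)PTRDIFF_MAX >> 1) + 1)

      static int allocx_size_ok_sketch(size_t size, size_t alignment) {
          if (size == 0 || size > HUGE_MAXCLASS_SKETCH)
              return 0;
          if (alignment != 0) {
              /* Worst case for an aligned request: size + (alignment - 1). */
              if (alignment > HUGE_MAXCLASS_SKETCH
                  || size > HUGE_MAXCLASS_SKETCH - (alignment - 1))
                  return 0;
          }
          return 1;
      }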
* Refactor arenas array (fixes deadlock). (Jason Evans, 2016-02-25, 1 file, -151/+90)
  Refactor the arenas array, which contains pointers to all extant arenas, such that it starts out as a sparse array of maximum size, and use double-checked atomics-based reads as the basis for fast and simple arena_get(). Additionally, reduce arenas_lock's role such that it only protects against arena initialization races. These changes remove the possibility for arena lookups to trigger locking, which resolves at least one known (fork-related) deadlock. This resolves #315.
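  A sketch of the double-checked read that keeps the lookup fast path lock-free: an atomic load first, with the lock taken only when the slot is still empty. The array size, types, and initialization below are illustrative:

      #include <pthread.h>
      #include <stdatomic.h>
      #include <stdlib.h>

      #define ARENAS_MAX_SKETCH 4096   /* sparse array of maximum size */

      typedef struct { unsigned ind; } arena_sketch_t;

      static _Atomic(arena_sketch_t *) arenas_sketch[ARENAS_MAX_SKETCH];
      static pthread_mutex_t arenas_lock_sketch = PTHREAD_MUTEX_INITIALIZER;

      static arena_sketch_t *arena_get_sketch(unsigned ind, int init_if_missing) {
          arena_sketch_t *a = atomic_load_explicit(&arenas_sketch[ind],
              memory_order_acquire);
          if (a != NULL || !init_if_missing)
              return a;                               /* fast path: no locking */
          pthread_mutex_lock(&arenas_lock_sketch);    /* only guards init races */
          a = atomic_load_explicit(&arenas_sketch[ind], memory_order_acquire);
          if (a == NULL && (a = malloc(sizeof(*a))) != NULL) {
              a->ind = ind;
              atomic_store_explicit(&arenas_sketch[ind], a, memory_order_release);
          }
          pthread_mutex_unlock(&arenas_lock_sketch);
          return a;
      }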
* Silence miscellaneous 64-to-32-bit data loss warnings. (Jason Evans, 2016-02-24, 1 file, -1/+1)
* Use ssize_t for readlink() rather than int. (Jason Evans, 2016-02-24, 1 file, -1/+1)
* Make opt_narenas unsigned rather than size_t. (Jason Evans, 2016-02-24, 1 file, -8/+12)
* Refactor time_* into nstime_*. (Jason Evans, 2016-02-22, 1 file, -1/+1)
  Use a single uint64_t in nstime_t to store nanoseconds rather than using struct timespec. This reduces fragility around conversions between long and uint64_t, especially missing casts that only cause problems on 32-bit platforms.
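  A sketch of the representation change: store the time as one uint64_t nanosecond count and convert at the edges, casting before multiplying so 32-bit long fields cannot truncate intermediates. Helper names are illustrative:

      #include <stdint.h>
      #include <time.h>

      #define NSEC_PER_SEC_SKETCH UINT64_C(1000000000)

      static uint64_t nstime_from_timespec_sketch(const struct timespec *ts) {
          /* Cast before multiplying; tv_sec may be 32 bits wide. */
          return (uint64_t)ts->tv_sec * NSEC_PER_SEC_SKETCH + (uint64_t)ts->tv_nsec;
      }

      static void nstime_to_timespec_sketch(uint64_t ns, struct timespec *ts) {
          ts->tv_sec = (time_t)(ns / NSEC_PER_SEC_SKETCH);
          ts->tv_nsec = (long)(ns % NSEC_PER_SEC_SKETCH);
      }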
* Implement decay-based unused dirty page purging. (Jason Evans, 2016-02-20, 1 file, -11/+42)
  This is an alternative to the existing ratio-based unused dirty page purging, and is intended to eventually become the sole purging mechanism. Add mallctls:
  - opt.purge
  - opt.decay_time
  - arena.<i>.decay
  - arena.<i>.decay_time
  - arenas.decay_time
  - stats.arenas.<i>.decay_time
  This resolves #325.
* Refactor arenas_cache tsd. (Jason Evans, 2016-02-20, 1 file, -62/+87)
  Refactor arenas_cache tsd into arenas_tdata, which is a structure of type arena_tdata_t.
* Add --with-malloc-conf. (Jason Evans, 2016-02-20, 1 file, -3/+6)
  Add --with-malloc-conf, which makes it possible to embed a default options string during configuration.
* Call malloc_tsd_boot0() from malloc_init_hard_recursible(). (Cosmin Paraschiv, 2016-01-11, 1 file, -5/+16)
  When using LinuxThreads, malloc bootstrapping deadlocks, since malloc_tsd_boot0() ends up calling pthread_setspecific(), which causes recursive allocation. Fix it by moving the malloc_tsd_boot0() call to malloc_init_hard_recursible(). The deadlock was introduced by 8bb3198f72fc7587dc93527f9f19fb5be52fa553 (Refactor/fix arenas manipulation.), when tsd_boot() was split and the top half, tsd_boot0(), got an extra tsd_wrapper_set() call.
* Fast-path improvement: reduce # of branches and unnecessary operations. (Qi Wang, 2015-11-10, 1 file, -53/+133)
  - Combine multiple runtime branches into a single malloc_slow check.
  - Avoid calling arena_choose / size2index / index2size on fast path.
  - A few micro optimizations.
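  A sketch of folding several rarely-true options into one cached flag so the fast path tests a single branch; the option flags and slow-path hook are illustrative stand-ins:

      #include <stdbool.h>
      #include <stdlib.h>

      static bool opt_junk_sketch, opt_zero_sketch, opt_xmalloc_sketch;
      static bool malloc_slow_sketch;   /* folded once at boot */

      static void malloc_slow_flag_init_sketch(void) {
          malloc_slow_sketch = opt_junk_sketch || opt_zero_sketch || opt_xmalloc_sketch;
      }

      static void *slow_path_alloc_sketch(size_t size) {
          return malloc(size);          /* junk/zero/xmalloc handling would go here */
      }

      static void *fast_path_alloc_sketch(size_t size) {
          if (!malloc_slow_sketch)
              return malloc(size);      /* one branch; no per-option checks */
          return slow_path_alloc_sketch(size);
      }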
* Add mallocx() OOM tests. (Jason Evans, 2015-09-17, 1 file, -0/+2)
* Simplify imallocx_prof_sample(). (Jason Evans, 2015-09-17, 1 file, -26/+13)
  Simplify imallocx_prof_sample() to always operate on usize rather than sometimes using size. This avoids redundant usize computations and more closely fits the style adopted by i[rx]allocx_prof_sample() to fix sampling bugs.
* Fix irallocx_prof_sample(). (Jason Evans, 2015-09-17, 1 file, -5/+5)
  Fix irallocx_prof_sample() to always allocate large regions, even when alignment is non-zero.
* Fix ixallocx_prof_sample(). (Jason Evans, 2015-09-17, 1 file, -17/+4)
  Fix ixallocx_prof_sample() to never modify nor create sampled small allocations. xallocx() is in general incapable of moving small allocations, so this fix removes buggy code without loss of generality.
* Centralize xallocx() size[+extra] overflow checks. (Jason Evans, 2015-09-15, 1 file, -7/+11)
* Fix ixallocx_prof() to check for size greater than HUGE_MAXCLASS. (Jason Evans, 2015-09-15, 1 file, -1/+5)