path: root/src/chunk_dss.c
Commit history (each entry: subject, author, date, files changed, lines -/+)
* Rename most remaining *chunk* APIs to *extent*. (Jason Evans, 2016-06-06, 1 file, -216/+0)
* s/CHUNK_HOOKS_INITIALIZER/EXTENT_HOOKS_INITIALIZER/g (Jason Evans, 2016-06-06, 1 file, -1/+1)
* s/chunk_hook/extent_hook/g (Jason Evans, 2016-06-06, 1 file, -2/+2)
* Rename huge to large. (Jason Evans, 2016-06-06, 1 file, -1/+1)
* Move slabs out of chunks. (Jason Evans, 2016-06-06, 1 file, -1/+1)
* Use huge size class infrastructure for large size classes. (Jason Evans, 2016-06-06, 1 file, -1/+1)
* Allow chunks to not be naturally aligned. (Jason Evans, 2016-06-03, 1 file, -36/+32)
    Precisely size extents for huge size classes that aren't multiples of
    chunksize.

* Remove CHUNK_ADDR2BASE() and CHUNK_ADDR2OFFSET(). (Jason Evans, 2016-06-03, 1 file, -2/+2)
* Add extent_dirty_[gs]et(). (Jason Evans, 2016-06-03, 1 file, -1/+1)
* Refactor chunk_dalloc_{cache,wrapper}() to take extent arguments. (Jason Evans, 2016-06-03, 1 file, -5/+14)
    Rename arena_extent_[d]alloc() to extent_[d]alloc(). Move all chunk
    [de]registration responsibility into chunk.c.

* Remove Valgrind support. (Jason Evans, 2016-05-13, 1 file, -4/+1)
* Resolve bootstrapping issues when embedded in FreeBSD libc. (Jason Evans, 2016-05-11, 1 file, -21/+21)
    b2c0d6322d2307458ae2b28545f8a5c9903d7ef5 (Add witness, a simple online
    locking validator.) caused a broad propagation of tsd throughout the
    internal API, but tsd_fetch() was designed to fail prior to tsd
    bootstrapping.

    Fix this by splitting tsd_t into non-nullable tsd_t and nullable tsdn_t,
    and modifying all internal APIs that do not critically rely on tsd to
    take nullable pointers. Furthermore, add the tsd_booted_get() function so
    that tsdn_fetch() can probe whether tsd bootstrapping is complete and
    return NULL if not. All dangerous conversions of nullable pointers are
    tsdn_tsd() calls that assert-fail on invalid conversion.

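    A minimal sketch of the nullable/non-nullable split described above. The
    type layouts and function shapes below are assumptions for illustration
    (jemalloc's real tsd_t/tsdn_t, tsd_fetch(), tsd_booted_get(), tsdn_fetch(),
    and tsdn_tsd() differ in detail); only the pattern is the point.

        #include <assert.h>
        #include <stdbool.h>
        #include <stddef.h>

        /* Placeholder thread-state type; the real tsd_t carries far more. */
        typedef struct tsd_s { bool initialized; } tsd_t;
        /* Nullable view of the same state: a tsdn_t pointer may be NULL. */
        typedef tsd_t tsdn_t;

        static bool tsd_booted = false;
        static tsd_t tsd_global;

        static bool
        tsd_booted_get(void)
        {
            return tsd_booted;
        }

        static tsd_t *
        tsd_fetch(void)
        {
            /* The non-nullable fetch is only legal after bootstrapping. */
            assert(tsd_booted);
            return &tsd_global;
        }

        static tsdn_t *
        tsdn_fetch(void)
        {
            /* Probe bootstrapping state; return NULL rather than failing. */
            if (!tsd_booted_get())
                return NULL;
            return (tsdn_t *)tsd_fetch();
        }

        static tsd_t *
        tsdn_tsd(tsdn_t *tsdn)
        {
            /* The only "dangerous" conversion; assert-fails on NULL. */
            assert(tsdn != NULL);
            return (tsd_t *)tsdn;
        }

    Callers that can run before bootstrapping take tsdn_t * and tolerate NULL;
    only code that genuinely needs tsd converts via tsdn_tsd().
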
* Add witness, a simple online locking validator. (Jason Evans, 2016-04-14, 1 file, -23/+23)
    This resolves #358.

* Fix potential chunk leaks. (Jason Evans, 2016-03-31, 1 file, -1/+1)
    Move chunk_dalloc_arena()'s implementation into chunk_dalloc_wrapper(), so
    that if the dalloc hook fails, proper decommit/purge/retain cascading
    occurs. This fixes three potential chunk leaks on OOM paths, one during
    dss-based chunk allocation, one during chunk header commit (currently
    relevant only on Windows), and one during rtree write (e.g. if rtree node
    allocation fails).

    Merge chunk_purge_arena() into chunk_purge_default() (refactor, no change
    to functionality).

* Reduce variable scope. (Dmitry-Me, 2015-09-15, 1 file, -5/+3)
    This resolves #274.

* Try to decommit new chunks. (Jason Evans, 2015-08-12, 1 file, -1/+2)
    Always leave decommit disabled on non-Windows systems.

* Implement chunk hook support for page run commit/decommit. (Jason Evans, 2015-08-07, 1 file, -2/+4)
    Cascade from decommit to purge when purging unused dirty pages, so that it
    is possible to decommit cleaned memory rather than just purging. For
    non-Windows debug builds, decommit runs rather than purging them, since
    this causes access of deallocated runs to segfault.

    This resolves #251.

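    The purge/decommit distinction above maps onto ordinary POSIX primitives.
    The helper names and exact call choices below are illustrative assumptions,
    not jemalloc's internal functions, but they show why decommit is preferred
    in debug builds: a decommitted run faults on any stray access.

        #include <stdbool.h>
        #include <stddef.h>
        #include <sys/mman.h>

        /* Purge: discard page contents but keep the pages accessible. */
        static bool
        pages_purge(void *addr, size_t length)
        {
            /* Returns true on error.  After MADV_DONTNEED the mapping
             * survives; anonymous pages read back as zero on next touch. */
            return madvise(addr, length, MADV_DONTNEED) != 0;
        }

        /* Decommit: additionally remove access so stray use segfaults. */
        static bool
        pages_decommit(void *addr, size_t length)
        {
            return mprotect(addr, length, PROT_NONE) != 0;
        }
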
* Generalize chunk management hooks. (Jason Evans, 2015-08-04, 1 file, -4/+4)
    Add the "arena.<i>.chunk_hooks" mallctl, which replaces and expands on the
    "arena.<i>.chunk.{alloc,dalloc,purge}" mallctls. The chunk hooks allow
    control over chunk allocation/deallocation, decommit/commit, purging, and
    splitting/merging, such that the application can rely on jemalloc's
    internal chunk caching and retaining functionality, yet implement a
    variety of chunk management mechanisms and policies.

    Merge the chunks_[sz]ad_{mmap,dss} red-black trees into
    chunks_[sz]ad_retained. This slightly reduces how hard jemalloc tries to
    honor the dss precedence setting; prior to this change the precedence
    setting was also consulted when recycling chunks.

    Fix chunk purging. Don't purge chunks in arena_purge_stashed(); instead
    deallocate them in arena_unstash_purged(), so that the dirty memory
    linkage remains valid until after the last time it is used.

    This resolves #176 and #201.

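    A sketch of how an application talks to the "arena.<i>.chunk_hooks"
    mallctl introduced here, assuming a jemalloc 4.x-era build whose public
    header exposes chunk_hooks_t (and whose API is unprefixed; otherwise use
    je_mallctl). A real consumer would replace the hook members
    (alloc/dalloc/commit/decommit/purge/split/merge) before writing them back.

        #include <stdio.h>
        #include <jemalloc/jemalloc.h>

        int
        main(void)
        {
            /* Read arena 0's current hooks, then write them back unchanged;
             * this demonstrates only the read-modify-write plumbing. */
            chunk_hooks_t hooks;
            size_t len = sizeof(hooks);

            if (mallctl("arena.0.chunk_hooks", &hooks, &len, NULL, 0) != 0) {
                fprintf(stderr, "reading arena.0.chunk_hooks failed\n");
                return 1;
            }
            if (mallctl("arena.0.chunk_hooks", NULL, NULL, &hooks,
                sizeof(hooks)) != 0) {
                fprintf(stderr, "installing arena.0.chunk_hooks failed\n");
                return 1;
            }
            return 0;
        }
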
* Rename "dirty chunks" to "cached chunks".Jason Evans2015-02-181-1/+1
| | | | | | | | | | Rename "dirty chunks" to "cached chunks", in order to avoid overloading the term "dirty". Fix the regression caused by 339c2b23b2d61993ac768afcc72af135662c6771 (Fix chunk_unmap() to propagate dirty state.), and actually address what that change attempted, which is to only purge chunks once, and propagate whether zeroed pages resulted into chunk_record().
* Integrate whole chunks into unused dirty page purging machinery. (Jason Evans, 2015-02-17, 1 file, -2/+6)
    Extend per arena unused dirty page purging to manage unused dirty chunks
    in addition to unused dirty runs. Rather than immediately unmapping
    deallocated chunks (or purging them in the --disable-munmap case), store
    them in a separate set of trees, chunks_[sz]ad_dirty. Preferentially
    allocate dirty chunks. When excessive unused dirty pages accumulate, purge
    runs and chunks in integrated LRU order (and unmap chunks in the
    --enable-munmap case).

    Refactor extent_node_t to provide accessor functions.

* Move centralized chunk management into arenas. (Jason Evans, 2015-02-12, 1 file, -2/+3)
    Migrate all centralized data structures related to huge allocations and
    recyclable chunks into arena_t, so that each arena can manage huge
    allocations and recyclable virtual memory completely independently of
    other arenas.

    Add chunk node caching to arenas, in order to avoid contention on the
    base allocator.

    Use chunks_rtree to look up huge allocations rather than a red-black
    tree. Maintain a per arena unsorted list of huge allocations (which will
    be needed to enumerate huge allocations during arena reset).

    Remove the --enable-ivsalloc option, make ivsalloc() always available,
    and use it for size queries if --enable-debug is enabled. The only
    practical implications to this removal are that 1) ivsalloc() is now
    always available during live debugging (and the underlying radix tree is
    available during core-based debugging), and 2) size query validation can
    no longer be enabled independent of --enable-debug.

    Remove the stats.chunks.{current,total,high} mallctls, and replace their
    underlying statistics with simpler atomically updated counters used
    exclusively for gdump triggering. These statistics are no longer very
    useful because each arena manages chunks independently, and per arena
    statistics provide similar information.

    Simplify chunk synchronization code, now that base chunk allocation
    cannot cause recursive lock acquisition.

* teach the dss chunk allocator to handle new_addr (Daniel Micay, 2014-11-29, 1 file, -1/+10)
    This provides in-place expansion of huge allocations when the end of the
    allocation is at the end of the sbrk heap. There's already the ability to
    extend in-place via recycled chunks but this handles the initial growth of
    the heap via repeated vector / string reallocations.

    A possible future extension could allow realloc to go from the following:

        | huge allocation | recycled chunks |
                                            ^ dss_end

    To a larger allocation built from recycled *and* new chunks:

        |           huge allocation          |
                                             ^ dss_end

    Doing that would involve teaching the chunk recycling code to request new
    chunks to satisfy the request. The chunk_dss code wouldn't require any
    further changes.

        #include <stdlib.h>

        int main(void) {
            size_t chunk = 4 * 1024 * 1024;
            void *ptr = NULL;
            for (size_t size = chunk; size < chunk * 128; size *= 2) {
                ptr = realloc(ptr, size);
                if (!ptr) return 1;
            }
        }

        dss:secondary: 0.083s
        dss:primary: 0.083s

        After:

        dss:secondary: 0.083s
        dss:primary: 0.003s

    The dss heap grows in the upwards direction, so the oldest chunks are at
    the low addresses and they are used first. Linux prefers to grow the mmap
    heap downwards, so the trick will not work in the *current* mmap chunk
    allocator as a huge allocation will only be at the top of the heap in a
    contrived case.

* Convert to uniform style: cond == false --> !cond (Jason Evans, 2014-10-03, 1 file, -2/+2)
* Optimize Valgrind integration. (Jason Evans, 2014-04-15, 1 file, -1/+2)
    Forcefully disable tcache if running inside Valgrind, and remove Valgrind
    calls in tcache-specific code.

    Restructure Valgrind-related code to move most Valgrind calls out of the
    fast path functions.

    Take advantage of static knowledge to elide some branches in
    JEMALLOC_VALGRIND_REALLOC().

* Make dss non-optional, and fix an "arena.<i>.dss" mallctl bug. (Jason Evans, 2014-04-15, 1 file, -10/+10)
    Make dss non-optional on all platforms which support sbrk(2).

    Fix the "arena.<i>.dss" mallctl to return an error if "primary" or
    "secondary" precedence is specified, but sbrk(2) is not supported.

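    For context, dss allocation means carving chunks out of the data segment
    by moving the program break with sbrk(2). The sketch below is illustrative
    only (the helper name and details are assumptions); jemalloc's real
    chunk_alloc_dss() additionally retries on races with other sbrk() callers,
    honors the dss precedence setting, and guards against overflow.

        #include <stddef.h>
        #include <stdint.h>
        #include <unistd.h>

        /* Grow the break enough to return "size" bytes aligned to the
         * power-of-two "alignment". */
        static void *
        dss_alloc_sketch(size_t size, size_t alignment)
        {
            void *brk_cur = sbrk(0);          /* current end of the dss */
            if (brk_cur == (void *)-1)
                return NULL;                  /* sbrk(2) unsupported/failed */
            uintptr_t base = (uintptr_t)brk_cur;
            /* Pad so the returned address is alignment-aligned. */
            uintptr_t pad = (alignment - (base & (alignment - 1))) &
                (alignment - 1);
            if (sbrk((intptr_t)(pad + size)) == (void *)-1)
                return NULL;
            return (void *)(base + pad);
        }

        int
        main(void)
        {
            /* Request one 4 MiB, 4 MiB-aligned block from the dss. */
            return dss_alloc_sketch(4 << 20, 4 << 20) == NULL;
        }
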
* Avoid deprecated sbrk(2) on OS X. (Jason Evans, 2013-12-04, 1 file, -7/+8)
    Avoid referencing sbrk(2) on OS X, because it is deprecated as of OS X
    10.9 (Mavericks), and the compiler warns against using it.

* Fix Valgrind integration. (Jason Evans, 2013-02-01, 1 file, -1/+0)
    Fix Valgrind integration to annotate all internally allocated memory in a
    way that keeps Valgrind happy about internal data structure access.

* Tighten valgrind integration. (Jason Evans, 2013-01-22, 1 file, -0/+1)
    Tighten valgrind integration such that immediately after memory is
    validated or zeroed, valgrind is told to forget the memory's 'defined'
    state. The only place newly allocated memory should be left marked as
    'defined' is in the public functions (e.g. calloc() and realloc()).

* Add arena-specific and selective dss allocation. (Jason Evans, 2012-10-13, 1 file, -1/+36)
    Add the "arenas.extend" mallctl, so that it is possible to create new
    arenas that are outside the set that jemalloc automatically multiplexes
    threads onto.

    Add the ALLOCM_ARENA() flag for {,r,d}allocm(), so that it is possible to
    explicitly allocate from a particular arena.

    Add the "opt.dss" mallctl, which controls the default precedence of dss
    allocation relative to mmap allocation.

    Add the "arena.<i>.dss" mallctl, which makes it possible to set the
    default dss precedence on a per arena or global basis.

    Add the "arena.<i>.purge" mallctl, which obsoletes "arenas.purge".

    Add the "stats.arenas.<i>.dss" mallctl.

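    A hedged usage sketch tying these pieces together, assuming a jemalloc
    3.x-era build with the (since removed) experimental allocm()/dallocm()
    API and an unprefixed public namespace; the exact entry points and flag
    names are era-specific assumptions, and later releases replaced them with
    mallocx()/MALLOCX_ARENA().

        #include <stdio.h>
        #include <jemalloc/jemalloc.h>

        int
        main(void)
        {
            /* Create a fresh arena outside the automatically multiplexed
             * set. */
            unsigned arena_ind;
            size_t len = sizeof(arena_ind);
            if (mallctl("arenas.extend", &arena_ind, &len, NULL, 0) != 0)
                return 1;

            /* Prefer dss (sbrk) over mmap for this arena only. */
            const char *dss = "primary";
            char name[64];
            snprintf(name, sizeof(name), "arena.%u.dss", arena_ind);
            if (mallctl(name, NULL, NULL, &dss, sizeof(dss)) != 0)
                return 1;

            /* Allocate explicitly from that arena. */
            void *p;
            if (allocm(&p, NULL, 4096, ALLOCM_ARENA(arena_ind)) !=
                ALLOCM_SUCCESS)
                return 1;
            dallocm(p, 0);
            return 0;
        }
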
* Fix chunk allocation/deallocation bugs. (Jason Evans, 2012-04-21, 1 file, -0/+4)
    Fix chunk_alloc_dss() to zero memory when requested.

    Fix chunk_dealloc() to avoid chunk_dealloc_mmap() for dss-allocated
    memory.

    Fix huge_palloc() to always junk fill when requested.

    Improve chunk_recycle() to report that memory is zeroed as a side effect
    of pages_purge().

* Fix a memory corruption bug in chunk_alloc_dss(). (Jason Evans, 2012-04-21, 1 file, -1/+0)
    Fix a memory corruption bug in chunk_alloc_dss() that was due to claiming
    newly allocated memory is zeroed.

    Reverse order of preference between mmap() and sbrk() to prefer mmap().

    Clean up management of 'zero' parameter in chunk_alloc*().

* Disable munmap() if it causes VM map holes. (Jason Evans, 2012-04-13, 1 file, -224/+6)
    Add a configure test to determine whether common mmap()/munmap() patterns
    cause VM map holes, and only use munmap() to discard unused chunks if the
    problem does not exist.

    Unify the chunk caching for mmap and dss.

    Fix options processing to limit lg_chunk to be large enough that redzones
    will always fit.

* Use a stub replacement and disable dss when sbrk is not supported (Mike Hommey, 2012-04-12, 1 file, -0/+11)
* Normalize aligned allocation algorithms. (Jason Evans, 2012-04-12, 1 file, -37/+57)
    Normalize arena_palloc(), chunk_alloc_mmap_slow(), and chunk_recycle_dss()
    to use the same algorithm for trimming over-allocation.

    Add the ALIGNMENT_ADDR2BASE(), ALIGNMENT_ADDR2OFFSET(), and
    ALIGNMENT_CEILING() macros, and use them where appropriate.

    Remove the run_size_p parameter from sa2u().

    Fix a potential deadlock in chunk_recycle_dss() that was introduced by
    eae269036c9f702d9fa9be497a1a2aa1be13a29e (Add alignment support to
    chunk_alloc()).

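    The shared trimming algorithm rests on a few power-of-two alignment
    helpers. The definitions below are plausible sketches of the macros named
    above (assuming power-of-two alignment), not verbatim copies from
    jemalloc.

        #include <assert.h>
        #include <stddef.h>
        #include <stdint.h>

        /* Round a size up to the nearest multiple of the alignment. */
        #define ALIGNMENT_CEILING(s, alignment) \
            (((s) + ((alignment) - 1)) & ~((size_t)(alignment) - 1))

        /* Largest alignment-aligned address at or below a. */
        #define ALIGNMENT_ADDR2BASE(a, alignment) \
            ((void *)((uintptr_t)(a) & ~((uintptr_t)(alignment) - 1)))

        /* Byte offset of a above its alignment-aligned base. */
        #define ALIGNMENT_ADDR2OFFSET(a, alignment) \
            ((size_t)((uintptr_t)(a) & ((uintptr_t)(alignment) - 1)))

        int
        main(void)
        {
            assert(ALIGNMENT_CEILING((size_t)5, (size_t)4) == 8);
            assert(ALIGNMENT_ADDR2OFFSET((void *)0x1003, 0x1000) == 3);
            assert(ALIGNMENT_ADDR2BASE((void *)0x1003, 0x1000) ==
                (void *)0x1000);
            return 0;
        }

    With helpers like these, trimming an over-allocation reduces to computing
    a lead size up to the first aligned address, keeping the aligned interior,
    and releasing or recycling the lead and trail portions.
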
* Rename labels. (Jason Evans, 2012-04-10, 1 file, -2/+2)
    Rename labels from FOO to label_foo in order to avoid system macro
    definitions, in particular OUT and ERROR on mingw.

    Reported by Mike Hommey.

* Add alignment support to chunk_alloc(). (Mike Hommey, 2012-04-10, 1 file, -28/+52)
* Fix fork-related bugs. (Jason Evans, 2012-03-13, 1 file, -4/+32)
    Acquire/release arena bin locks as part of the prefork/postfork. This bug
    made deadlock in the child between fork and exec a possibility.

    Split jemalloc_postfork() into jemalloc_postfork_{parent,child}() so that
    the child can reinitialize mutexes rather than unlocking them. In
    practice, this bug tended not to cause problems.

* Reduce cpp conditional logic complexity. (Jason Evans, 2012-02-11, 1 file, -2/+12)
    Convert configuration-related cpp conditional logic to use static
    constant variables, e.g.:

        #ifdef JEMALLOC_DEBUG
          [...]
        #endif

    becomes:

        if (config_debug) {
          [...]
        }

    The advantage is clearer, more concise code. The main disadvantage is
    that data structures no longer have conditionally defined fields, so they
    pay the cost of all fields regardless of whether they are used. In
    practice, this is only a minor concern; config_stats will go away in an
    upcoming change, and config_prof is the only other major feature that
    depends on more than a few special-purpose fields.

* Move repo contents in jemalloc/ to top level. (Jason Evans, 2011-04-01, 1 file, -0/+284)