path: root/src
Commit message | Author | Age | Files | Lines
...
* Fix chunk_unmap() to propagate dirty state. | Jason Evans | 2015-02-18 | 2 | -7/+13
    Fix chunk_unmap() to propagate whether a chunk is dirty, and modify dirty chunk purging to record this information so it can be passed to chunk_unmap(). Since the broken version of chunk_unmap() claimed that all chunks were clean, this resulted in potential memory corruption for purging implementations that do not zero (e.g. MADV_FREE). This regression was introduced by ee41ad409a43d12900a5a3108f6c14f84e4eb0eb (Integrate whole chunks into unused dirty page purging machinery.).
* arena_chunk_dirty_node_init() --> extent_node_dirty_linkage_init() | Jason Evans | 2015-02-18 | 1 | -11/+3
* Simplify extent_node_t and add extent_node_init(). | Jason Evans | 2015-02-17 | 4 | -30/+16
* Integrate whole chunks into unused dirty page purging machinery. | Jason Evans | 2015-02-17 | 7 | -216/+437
    Extend per arena unused dirty page purging to manage unused dirty chunks in addition to unused dirty runs. Rather than immediately unmapping deallocated chunks (or purging them in the --disable-munmap case), store them in a separate set of trees, chunks_[sz]ad_dirty. Preferentially allocate dirty chunks. When excessive unused dirty pages accumulate, purge runs and chunks in integrated LRU order (and unmap chunks in the --enable-munmap case). Refactor extent_node_t to provide accessor functions.
* Normalize *_link and link_* fields to all be *_link. | Jason Evans | 2015-02-16 | 3 | -10/+9
* Remove redundant tcache_boot() call. | Jason Evans | 2015-02-15 | 1 | -2/+0
* If MALLOCX_ARENA(a) is specified, use it during tcache fill. | Jason Evans | 2015-02-13 | 1 | -9/+10
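    A minimal sketch of how MALLOCX_ARENA() is used from the *allocx() API (illustrative only, not from the commit; it assumes the arenas.extend mallctl of this era for obtaining an arena index, and omits most error handling):
        #include <stdlib.h>
        #include <jemalloc/jemalloc.h>

        int main(void) {
            unsigned arena_ind;
            size_t sz = sizeof(arena_ind);

            /* Create a fresh arena and get its index. */
            if (mallctl("arenas.extend", &arena_ind, &sz, NULL, 0) != 0)
                return 1;

            /* Allocate from that arena explicitly; with the fix above, tcache
             * fills triggered by this call also draw from arena_ind. */
            void *p = mallocx(4096, MALLOCX_ARENA(arena_ind));
            if (p == NULL)
                return 1;
            dallocx(p, 0);
            return 0;
        }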
* Refactor huge_*() calls into arena internals. | Jason Evans | 2015-02-12 | 1 | -56/+104
    Make redirects to the huge_*() API the arena code's responsibility, since arenas now take responsibility for all allocation sizes.
* add missing check for new_addr chunk size | Daniel Micay | 2015-02-12 | 1 | -1/+1
    8ddc93293cd8370870f221225ef1e013fbff6d65 switched this over to using the address tree in order to avoid false negatives, so it now needs to check that the size of the free extent is large enough to satisfy the request.
* Move centralized chunk management into arenas. | Jason Evans | 2015-02-12 | 9 | -368/+281
    Migrate all centralized data structures related to huge allocations and recyclable chunks into arena_t, so that each arena can manage huge allocations and recyclable virtual memory completely independently of other arenas.
    Add chunk node caching to arenas, in order to avoid contention on the base allocator.
    Use chunks_rtree to look up huge allocations rather than a red-black tree. Maintain a per arena unsorted list of huge allocations (which will be needed to enumerate huge allocations during arena reset).
    Remove the --enable-ivsalloc option, make ivsalloc() always available, and use it for size queries if --enable-debug is enabled. The only practical implications to this removal are that 1) ivsalloc() is now always available during live debugging (and the underlying radix tree is available during core-based debugging), and 2) size query validation can no longer be enabled independent of --enable-debug.
    Remove the stats.chunks.{current,total,high} mallctls, and replace their underlying statistics with simpler atomically updated counters used exclusively for gdump triggering. These statistics are no longer very useful because each arena manages chunks independently, and per arena statistics provide similar information.
    Simplify chunk synchronization code, now that base chunk allocation cannot cause recursive lock acquisition.
* Update ckh to support metadata allocation tracking. | Jason Evans | 2015-02-12 | 1 | -9/+11
* Fix a regression in tcache_bin_flush_small(). | Jason Evans | 2015-02-12 | 1 | -1/+1
    Fix a serious regression in tcache_bin_flush_small() that was introduced by 1cb181ed632e7573fb4eab194e4d216867222d27 (Implement explicit tcache support.).
* Test and fix tcache ID recycling. | Jason Evans | 2015-02-10 | 1 | -1/+1
* Implement explicit tcache support. | Jason Evans | 2015-02-10 | 8 | -187/+362
    Add the MALLOCX_TCACHE() and MALLOCX_TCACHE_NONE macros, which can be used in conjunction with the *allocx() API.
    Add the tcache.create, tcache.flush, and tcache.destroy mallctls.
    This resolves #145.
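    A usage sketch of the new interface (illustrative, not taken from the commit): create an explicit tcache via the tcache.create mallctl, route allocations to it with MALLOCX_TCACHE(), and destroy it when done.
        #include <jemalloc/jemalloc.h>

        int use_private_tcache(void) {
            unsigned tci;
            size_t sz = sizeof(tci);

            /* tcache.create returns the index of a new explicit tcache. */
            if (mallctl("tcache.create", &tci, &sz, NULL, 0) != 0)
                return -1;

            /* Direct this allocation and deallocation through the explicit tcache. */
            void *p = mallocx(256, MALLOCX_TCACHE(tci));
            if (p != NULL)
                dallocx(p, MALLOCX_TCACHE(tci));

            /* tcache.destroy flushes the tcache and releases its index. */
            mallctl("tcache.destroy", NULL, NULL, &tci, sizeof(tci));
            return 0;
        }
    Passing MALLOCX_TCACHE_NONE instead bypasses thread caching entirely for a given *allocx() call.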
* Refactor rtree to be lock-free. | Jason Evans | 2015-02-05 | 2 | -71/+92
    Recent huge allocation refactoring associates huge allocations with arenas, but it remains necessary to quickly look up huge allocation metadata during reallocation/deallocation. A global radix tree remains a good solution to this problem, but locking would have become the primary bottleneck after (upcoming) migration of chunk management from global to per arena data structures.
    This lock-free implementation uses double-checked reads to traverse the tree, so that in the steady state, each read or write requires only a single atomic operation.
    This implementation also assures that no more than two tree levels actually exist, through a combination of careful virtual memory allocation which makes large sparse nodes cheap, and skipping the root node on x64 (possible because the top 16 bits are all 0 in practice).
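    The double-checked read pattern described above, sketched in generic C (the names and structure here are hypothetical, not jemalloc's actual rtree code): load the child pointer without a lock, and only fall back to a lock plus a second check when the slot is empty.
        #include <stdatomic.h>
        #include <pthread.h>
        #include <stdlib.h>

        typedef struct node { _Atomic(struct node *) child[256]; } node_t;
        static pthread_mutex_t init_mtx = PTHREAD_MUTEX_INITIALIZER;

        /* One atomic load in the common case; lock only when lazily initializing. */
        static node_t *child_get(node_t *parent, unsigned i) {
            node_t *c = atomic_load_explicit(&parent->child[i], memory_order_acquire);
            if (c != NULL)
                return c;                       /* fast path: no locking */
            pthread_mutex_lock(&init_mtx);
            c = atomic_load_explicit(&parent->child[i], memory_order_acquire);
            if (c == NULL) {                    /* double-check under the lock */
                c = calloc(1, sizeof(node_t));
                if (c != NULL)
                    atomic_store_explicit(&parent->child[i], c, memory_order_release);
            }
            pthread_mutex_unlock(&init_mtx);
            return c;
        }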
* Refactor base_alloc() to guarantee demand-zeroed memory. | Jason Evans | 2015-02-05 | 3 | -66/+104
    Refactor base_alloc() to guarantee that allocations are carved from demand-zeroed virtual memory. This supports sparse data structures such as multi-page radix tree nodes.
    Enhance base_alloc() to keep track of fragments which were too small to support previous allocation requests, and try to consume them during subsequent requests. This becomes important when request sizes commonly approach or exceed the chunk size (as could radix tree node allocations).
* Fix chunk_recycle()'s new_addr functionality. | Jason Evans | 2015-02-05 | 1 | -2/+6
    Fix chunk_recycle()'s new_addr functionality to search by address rather than just size if new_addr is specified. The functionality added by a95018ee819abf897562d9d1f3bc31d4dd725a8d (Attempt to expand huge allocations in-place.) only worked if the two search orders happened to return the same results (e.g. in simple test cases).
* Make opt.lg_dirty_mult work as documented | Mike Hommey | 2015-02-03 | 1 | -0/+2
    The documentation for opt.lg_dirty_mult says:
        Per-arena minimum ratio (log base 2) of active to dirty pages. Some dirty unused pages may be allowed to accumulate, within the limit set by the ratio (or one chunk worth of dirty pages, whichever is greater) (...)
    The restriction in parentheses currently doesn't happen. This makes jemalloc aggressively madvise(), which in turn increases the amount of page faults significantly.
    For instance, this resulted in several(!) hundred(!) milliseconds startup regression on Firefox for Android.
    This may require further tweaking, but starting with actually doing what the documentation says is a good start.
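    For reference, the ratio is a startup option; a hedged example of relaxing the dirty-page limit from application code in a non-prefixed build (the value 5 is arbitrary, and the same string can be supplied via the MALLOC_CONF environment variable instead):
        /* jemalloc reads this application-provided string when it bootstraps. */
        const char *malloc_conf = "lg_dirty_mult:5";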
* util.c: strerror_r returns char* only on glibc | Felix Janda | 2015-02-03 | 1 | -1/+1
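    The portability issue: with _GNU_SOURCE under glibc, strerror_r() returns a char * that may not point into the caller's buffer, while the POSIX version returns an int and always fills the buffer. A hedged sketch of a wrapper that handles both (the name xstrerror is hypothetical, not jemalloc's helper):
        #include <string.h>

        static const char *
        xstrerror(int err, char *buf, size_t buflen) {
        #if defined(__GLIBC__) && defined(_GNU_SOURCE)
            /* GNU variant: returns a pointer, possibly to a static string. */
            return strerror_r(err, buf, buflen);
        #else
            /* POSIX variant: returns an int and writes into buf. */
            if (strerror_r(err, buf, buflen) != 0)
                buf[0] = '\0';
            return buf;
        #endif
        }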
* Implement the prof.gdump mallctl. | Jason Evans | 2015-01-26 | 3 | -1/+63
    This feature makes it possible to toggle the gdump feature on/off during program execution, whereas the opt.prof_gdump option can only be set during program startup.
    This resolves #72.
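    A minimal sketch of toggling the new control at runtime (illustrative; it assumes a build with profiling enabled):
        #include <stdbool.h>
        #include <jemalloc/jemalloc.h>

        static int set_gdump(bool enable) {
            /* prof.gdump is a writable bool; pass the new value via newp/newlen. */
            return mallctl("prof.gdump", NULL, NULL, &enable, sizeof(enable));
        }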
* Avoid pointless chunk_recycle() call. | Jason Evans | 2015-01-26 | 1 | -21/+29
    Avoid calling chunk_recycle() for mmap()ed chunks if config_munmap is disabled, in which case there are never any recyclable chunks.
    This resolves #164.
* huge_node_locked() doesn't have to unlock huge_mtx | Sébastien Marie | 2015-01-25 | 1 | -1/+0
    In src/huge.c, after each call to huge_node_locked(), huge_mtx is already unlocked. Don't unlock it twice (it is undefined behaviour).
* Implement metadata statistics. | Jason Evans | 2015-01-24 | 10 | -118/+148
    There are three categories of metadata:
    - Base allocations are used for bootstrap-sensitive internal allocator data structures.
    - Arena chunk headers comprise pages which track the states of the non-metadata pages.
    - Internal allocations differ from application-originated allocations in that they are for internal use, and that they are omitted from heap profiles.
    The metadata statistics comprise the metadata categories as follows:
    - stats.metadata: All metadata -- base + arena chunk headers + internal allocations.
    - stats.arenas.<i>.metadata.mapped: Arena chunk headers.
    - stats.arenas.<i>.metadata.allocated: Internal allocations. This is reported separately from the other metadata statistics because it overlaps with the allocated and active statistics, whereas the other metadata statistics do not.
    Base allocations are not reported separately, though their magnitude can be computed by subtracting the arena-specific metadata.
    This resolves #163.
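    Reading the new counters via mallctl looks roughly like this (a sketch; statistics are refreshed via the epoch mallctl, and a stats-enabled build is assumed):
        #include <stdint.h>
        #include <stdio.h>
        #include <jemalloc/jemalloc.h>

        static void print_metadata_stats(void) {
            uint64_t epoch = 1;
            size_t sz = sizeof(epoch);
            mallctl("epoch", &epoch, &sz, &epoch, sizeof(epoch));  /* refresh stats */

            size_t metadata;
            sz = sizeof(metadata);
            if (mallctl("stats.metadata", &metadata, &sz, NULL, 0) == 0)
                printf("total metadata: %zu bytes\n", metadata);
        }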
* Use the correct type for opt.junk when printing stats. | Guilherme Goncalves | 2015-01-23 | 1 | -1/+1
* Refactor bootstrapping to delay tsd initialization. | Jason Evans | 2015-01-22 | 3 | -119/+196
    Refactor bootstrapping to delay tsd initialization, primarily to support integration with FreeBSD's libc.
    Refactor a0*() for internal-only use, and add the bootstrap_{malloc,calloc,free}() API for use by FreeBSD's libc. This separation limits use of the a0*() functions to metadata allocation, which doesn't require malloc/calloc/free API compatibility.
    This resolves #170.
* Fix arenas_cache_cleanup(). | Jason Evans | 2015-01-22 | 1 | -1/+1
    Fix arenas_cache_cleanup() to check whether arenas_cache is NULL before deallocation, rather than checking arenas.
* Fix OOM handling in memalign() and valloc(). | Jason Evans | 2015-01-17 | 1 | -2/+4
    Fix memalign() and valloc() to heed imemalign()'s return value.
    Reported by Kurt Wampler.
* Fix an infinite recursion bug related to a0/tsd bootstrapping. | Jason Evans | 2015-01-15 | 1 | -1/+3
    This resolves #184.
* Move variable declaration to the top of its block for MSVC compatibility. | Guilherme Goncalves | 2014-12-17 | 1 | -2/+2
* Introduce two new modes of junk filling: "alloc" and "free". | Guilherme Goncalves | 2014-12-15 | 5 | -40/+83
    In addition to true/false, opt.junk can now be either "alloc" or "free", giving applications the possibility of junking memory only on allocation or deallocation.
    This resolves #172.
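    A hedged configuration example (non-prefixed build assumed): junk-fill only on allocation, set either from the application or via MALLOC_CONF="junk:alloc" in the environment.
        /* Compiled into the application; jemalloc reads it at startup. */
        const char *malloc_conf = "junk:alloc";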
* Ignore MALLOC_CONF in set{uid,gid,cap} binaries. | Daniel Micay | 2014-12-14 | 1 | -1/+22
    This eliminates the malloc tunables as tools for an attacker.
    Closes #173
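    The general technique, sketched here with a hypothetical helper (not the commit's exact code): treat the environment as untrusted whenever the process has gained privileges.
        #define _GNU_SOURCE           /* for secure_getenv() on glibc 2.17+ */
        #include <stdlib.h>
        #include <unistd.h>

        /* Return NULL for environment lookups in set{uid,gid} processes. */
        static const char *getenv_if_unprivileged(const char *name) {
        #ifdef __GLIBC__
            return secure_getenv(name);   /* NULL when AT_SECURE is set (setuid/setgid/setcap) */
        #else
            if (getuid() != geteuid() || getgid() != getegid())
                return NULL;
            return getenv(name);
        #endif
        }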
* Style and spelling fixes. | Jason Evans | 2014-12-09 | 6 | -8/+8
* Fix OOM cleanup in huge_palloc(). | Jason Evans | 2014-12-05 | 1 | -6/+2
    Fix OOM cleanup in huge_palloc() to call idalloct() rather than base_node_dalloc(). This bug is a result of incomplete refactoring, and has no impact other than leaking memory during OOM.
* teach the dss chunk allocator to handle new_addr | Daniel Micay | 2014-11-29 | 2 | -8/+15
    This provides in-place expansion of huge allocations when the end of the allocation is at the end of the sbrk heap. There's already the ability to extend in-place via recycled chunks but this handles the initial growth of the heap via repeated vector / string reallocations.
    A possible future extension could allow realloc to go from the following:
        | huge allocation | recycled chunks |
                                            ^ dss_end
    To a larger allocation built from recycled *and* new chunks:
        |            huge allocation        |
                                            ^ dss_end
    Doing that would involve teaching the chunk recycling code to request new chunks to satisfy the request. The chunk_dss code wouldn't require any further changes.
        #include <stdlib.h>

        int main(void) {
            size_t chunk = 4 * 1024 * 1024;
            void *ptr = NULL;
            for (size_t size = chunk; size < chunk * 128; size *= 2) {
                ptr = realloc(ptr, size);
                if (!ptr) return 1;
            }
        }
        dss:secondary: 0.083s
        dss:primary: 0.083s
    After:
        dss:secondary: 0.083s
        dss:primary: 0.003s
    The dss heap grows in the upwards direction, so the oldest chunks are at the low addresses and they are used first. Linux prefers to grow the mmap heap downwards, so the trick will not work in the *current* mmap chunk allocator as a huge allocation will only be at the top of the heap in a contrived case.
* Fix more pointer arithmetic undefined behavior. | Jason Evans | 2014-11-17 | 1 | -4/+4
    Reported by Guilherme Gonçalves.
    This resolves #166.
* Fix pointer arithmetic undefined behavior. | Jason Evans | 2014-11-17 | 2 | -17/+31
    Reported by Denis Denisov.
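    The kind of change these fixes involve, in a hedged generic form (not the exact diff): do offset arithmetic on uintptr_t instead of forming intermediate pointers that may be NULL or out of bounds.
        #include <stddef.h>
        #include <stdint.h>

        /* Undefined behaviour: arithmetic on a NULL (or out-of-bounds) pointer.
         *   void *bad = (void *)((char *)NULL + offset);
         * Well-defined alternative: compute in uintptr_t, convert once at the end. */
        static void *offset_ptr(void *base, size_t offset) {
            return (void *)((uintptr_t)base + offset);
        }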
* Make quarantine_init() static. | Jason Evans | 2014-11-07 | 1 | -1/+1
* Fix two quarantine regressions. | Jason Evans | 2014-11-05 | 1 | -0/+22
    Fix quarantine to actually update tsd when expanding, and to avoid double initialization (leaking the first quarantine) due to recursive initialization.
    This resolves #161.
* Disable arena_dirty_count() validation. | Jason Evans | 2014-11-01 | 1 | -2/+6
* Don't dereference NULL tdata in prof_{enter,leave}(). | Jason Evans | 2014-11-01 | 1 | -13/+18
    It is possible for the thread's tdata to be NULL late during thread destruction, so take care not to dereference a NULL pointer in such cases.
* rm unused arena wrangling from xallocx | Daniel Micay | 2014-10-31 | 1 | -16/+8
    It has no use for the arena_t since unlike rallocx it never makes a new memory allocation. It's just an unused parameter in ixalloc_helper.
* Miscellaneous cleanups. | Jason Evans | 2014-10-31 | 2 | -4/+6
* avoid redundant chunk header reads | Daniel Micay | 2014-10-31 | 1 | -28/+26
    * use sized deallocation in iralloct_realign
    * iralloc and ixalloc always need the old size, so pass it in from the caller where it's often already calculated
* mark huge allocations as unlikely | Daniel Micay | 2014-10-31 | 2 | -4/+4
    This cleans up the fast path a bit more by moving away more code.
* Fix prof_{enter,leave}() calls to pass tdata_self. | Jason Evans | 2014-10-30 | 1 | -19/+24
* Use JEMALLOC_INLINE_C everywhere it's appropriate. | Jason Evans | 2014-10-30 | 4 | -15/+15
* Merge pull request #151 from thestinger/ralloc | Jason Evans | 2014-10-16 | 2 | -2/+2
    use sized deallocation internally for ralloc
* use sized deallocation internally for ralloc | Daniel Micay | 2014-10-16 | 2 | -2/+2
    The size of the source allocation is known at this point, so reading the chunk header can be avoided for the small size class fast path. This is not very useful right now, but it provides a significant performance boost with an alternate ralloc entry point taking the old size.
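    For comparison, the public sized-deallocation entry point from the same API family; the internal change routes ralloc's frees through the sized path so the chunk header read can be skipped. This is a sketch, not the commit's code:
        #include <jemalloc/jemalloc.h>

        void sized_free_example(void) {
            size_t sz = 100;
            void *p = mallocx(sz, 0);
            if (p != NULL) {
                /* sdallocx() passes the known size, avoiding a metadata lookup. */
                sdallocx(p, sz, 0);
            }
        }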
* Initialize chunks_mtx for all configurations. | Jason Evans | 2014-10-16 | 1 | -4/+3
    This resolves #150.
* Purge/zero sub-chunk huge allocations as necessary. | Jason Evans | 2014-10-16 | 1 | -24/+51
    Purge trailing pages during shrinking huge reallocation when resulting size is not a multiple of the chunk size. Similarly, zero pages if necessary during growing huge reallocation when the resulting size is not a multiple of the chunk size.