path: root/src/chunk_mmap.c
Commit message (Author, Date, Files changed, Lines -/+)
* Rename most remaining *chunk* APIs to *extent*. (Jason Evans, 2016-06-06, 1 file, -76/+0)
* Allow chunks to not be naturally aligned. (Jason Evans, 2016-06-03, 1 file, -1/+0)
  Precisely size extents for huge size classes that aren't multiples of chunksize.
* Refactor chunk_dalloc_{cache,wrapper}() to take extent arguments. (Jason Evans, 2016-06-03, 1 file, -1/+0)
  Rename arena_extent_[d]alloc() to extent_[d]alloc().
  Move all chunk [de]registration responsibility into chunk.c.
* Modify pages_map() to support mapping uncommitted virtual memory. (Jason Evans, 2016-05-06, 1 file, -7/+3)
  If the OS overcommits:
  - Commit all mappings in pages_map() regardless of whether the caller requested committed memory.
  - Linux-specific: Specify MAP_NORESERVE to avoid unfortunate interactions with heuristic overcommit mode during fork(2).
  This resolves #193.
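  A minimal sketch of the behavior this commit describes, assuming an
  os_overcommits flag (jemalloc detects overcommit internally; the flag
  name and function name here are illustrative):

    #include <stdbool.h>
    #include <stddef.h>
    #include <sys/mman.h>

    static bool os_overcommits; /* assumed: set during bootstrap */

    static void *
    pages_map_sketch(void *addr, size_t size, bool *commit) {
        int prot = *commit ? (PROT_READ | PROT_WRITE) : PROT_NONE;
        int flags = MAP_PRIVATE | MAP_ANON;

        if (os_overcommits) {
            /* Commit everything; the kernel overcommits anyway. */
            prot = PROT_READ | PROT_WRITE;
            *commit = true;
    #ifdef MAP_NORESERVE
            /* Sidestep heuristic overcommit accounting around fork(2). */
            flags |= MAP_NORESERVE;
    #endif
        }
        void *ret = mmap(addr, size, prot, flags, -1, 0);
        return ((ret == MAP_FAILED) ? NULL : ret);
    }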
* Support --with-lg-page values larger than actual page size. (Jason Evans, 2016-04-11, 1 file, -1/+1)
  During over-allocation in preparation for creating aligned mappings, allocate one more page than necessary if PAGE is the actual page size, so that trimming still succeeds even if the system returns a mapping that has less than PAGE alignment. This allows compiling with e.g. 64 KiB "pages" on systems that actually use 4 KiB pages.
  Note that for e.g. --with-lg-page=21, it is also necessary to increase the chunk size (e.g. --with-malloc-conf=lg_chunk:22) so that there are at least two "pages" per chunk. In practice this isn't a particularly compelling configuration because so much (unusable) virtual memory is dedicated to chunk headers.
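  The over-allocate-and-trim pattern this relies on, sketched under the
  assumption of a PAGE constant that may exceed the real page size (the
  helper and macro names are illustrative, not the actual jemalloc code):

    #include <stdint.h>
    #include <stddef.h>
    #include <sys/mman.h>

    #define PAGE ((size_t)(64 * 1024)) /* assumed --with-lg-page=16 */
    /* Round up to a PAGE boundary (PAGE is a power of two). */
    #define PAGE_CEILING(a) (((a) + PAGE - 1) & ~(PAGE - 1))

    static void *
    map_aligned_sketch(size_t size) {
        /* One extra "page" of slack guarantees an aligned region of
         * `size` bytes fits, even if the kernel's mapping is aligned
         * only to the real (smaller) page size. */
        size_t alloc_size = size + PAGE;
        char *map = mmap(NULL, alloc_size, PROT_READ | PROT_WRITE,
            MAP_PRIVATE | MAP_ANON, -1, 0);
        if (map == MAP_FAILED)
            return (NULL);
        char *ret = (char *)PAGE_CEILING((uintptr_t)map);
        size_t lead = (size_t)(ret - map);
        /* Trim the leading and trailing gaps. */
        if (lead != 0)
            munmap(map, lead);
        if (alloc_size - lead - size != 0)
            munmap(ret + size, alloc_size - lead - size);
        return (ret);
    }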
* Attempt mmap-based in-place huge reallocation. (Jason Evans, 2016-02-25, 1 file, -4/+6)
  Attempt mmap-based in-place huge reallocation by plumbing new_addr into chunk_alloc_mmap(). This can dramatically speed up incremental huge reallocation.
  This resolves #335.
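  A sketch of the idea, with an assumed helper name (the real change
  threads new_addr through chunk_alloc_mmap() and its callers):

    #include <stddef.h>
    #include <sys/mman.h>

    /* Try to obtain `size` bytes exactly at new_addr (the end of the
     * existing huge allocation).  Without MAP_FIXED the address is only
     * a hint, so the result must be checked. */
    static void *
    try_extend_in_place(void *new_addr, size_t size) {
        void *ret = mmap(new_addr, size, PROT_READ | PROT_WRITE,
            MAP_PRIVATE | MAP_ANON, -1, 0);
        if (ret == MAP_FAILED)
            return (NULL);
        if (ret != new_addr) {
            /* Kernel chose elsewhere; in-place extension failed. */
            munmap(ret, size);
            return (NULL);
        }
        return (ret);
    }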
* Reduce variable scope. (Dmitry-Me, 2015-09-15, 1 file, -2/+4)
  This resolves #274.
* Try to decommit new chunks. (Jason Evans, 2015-08-12, 1 file, -2/+4)
  Always leave decommit disabled on non-Windows systems.
* Implement chunk hook support for page run commit/decommit. (Jason Evans, 2015-08-07, 1 file, -3/+5)
  Cascade from decommit to purge when purging unused dirty pages, so that it is possible to decommit cleaned memory rather than just purging. For non-Windows debug builds, decommit runs rather than purging them, since this causes access of deallocated runs to segfault.
  This resolves #251.
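  A sketch of POSIX commit/decommit in this spirit (the names and the
  true-on-success convention are illustrative, not jemalloc's actual
  helpers):

    #include <stdbool.h>
    #include <stddef.h>
    #include <sys/mman.h>

    /* Decommit: overlay fresh PROT_NONE pages so the range loses its
     * backing and faults on access; MAP_FIXED replaces in place. */
    static bool
    decommit_sketch(void *addr, size_t size) {
        return (mmap(addr, size, PROT_NONE,
            MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0) != MAP_FAILED);
    }

    /* Commit: restore readable/writable zero-filled pages. */
    static bool
    commit_sketch(void *addr, size_t size) {
        return (mmap(addr, size, PROT_READ | PROT_WRITE,
            MAP_PRIVATE | MAP_ANON | MAP_FIXED, -1, 0) != MAP_FAILED);
    }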
* Generalize chunk management hooks. (Jason Evans, 2015-08-04, 1 file, -131/+0)
  Add the "arena.<i>.chunk_hooks" mallctl, which replaces and expands on the "arena.<i>.chunk.{alloc,dalloc,purge}" mallctls. The chunk hooks allow control over chunk allocation/deallocation, decommit/commit, purging, and splitting/merging, such that the application can rely on jemalloc's internal chunk caching and retaining functionality, yet implement a variety of chunk management mechanisms and policies.
  Merge the chunks_[sz]ad_{mmap,dss} red-black trees into chunks_[sz]ad_retained. This slightly reduces how hard jemalloc tries to honor the dss precedence setting; prior to this change the precedence setting was also consulted when recycling chunks.
  Fix chunk purging. Don't purge chunks in arena_purge_stashed(); instead deallocate them in arena_unstash_purged(), so that the dirty memory linkage remains valid until after the last time it is used.
  This resolves #176 and #201.
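  A usage sketch for the new mallctl; chunk_hooks_t is the type this
  commit exposes, while my_chunk_alloc is a hypothetical hook that just
  delegates to the original:

    #include <stdbool.h>
    #include <stddef.h>
    #include <jemalloc/jemalloc.h>

    static chunk_hooks_t orig_hooks;

    /* Hypothetical hook: instrument here, then delegate. */
    static void *
    my_chunk_alloc(void *chunk, size_t size, size_t alignment, bool *zero,
        bool *commit, unsigned arena_ind) {
        return (orig_hooks.alloc(chunk, size, alignment, zero, commit,
            arena_ind));
    }

    static void
    install_hooks(void) {
        chunk_hooks_t hooks;
        size_t sz = sizeof(hooks);

        /* Read arena 0's hooks, override alloc, write them back. */
        mallctl("arena.0.chunk_hooks", &hooks, &sz, NULL, 0);
        orig_hooks = hooks;
        hooks.alloc = my_chunk_alloc;
        mallctl("arena.0.chunk_hooks", NULL, NULL, &hooks, sizeof(hooks));
    }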
* We have pages_unmap(ret, size) so we use it. (Igor Podlesny, 2015-03-24, 1 file, -9/+1)
* Convert to uniform style: cond == false --> !cond (Jason Evans, 2014-10-03, 1 file, -2/+2)
* Add check for madvise(2) to configure.ac. (Richard Diamond, 2014-06-03, 1 file, -2/+5)
  Some platforms, such as Google's Portable Native Client, use Newlib and thus lack access to madvise(2). In those instances, pages_purge() is transformed into a no-op.
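  The shape of the resulting conditional compilation, sketched under the
  assumption that the configure check defines JEMALLOC_HAVE_MADVISE
  (MADV_DONTNEED stands in for the configure-selected purge advice):

    #include <stddef.h>
    #include <sys/mman.h>

    static void
    pages_purge_noop_sketch(void *addr, size_t length) {
    #ifdef JEMALLOC_HAVE_MADVISE
        madvise(addr, length, MADV_DONTNEED);
    #else
        /* No madvise(2) (e.g. Newlib): purging is a no-op. */
        (void)addr; (void)length;
    #endif
    }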
* Refactor huge allocation to be managed by arenas. (Jason Evans, 2014-05-16, 1 file, -1/+1)
  Refactor huge allocation to be managed by arenas (though the global red-black tree of huge allocations remains for lookup during deallocation). This is the logical conclusion of recent changes that 1) made per arena dss precedence apply to huge allocation, and 2) made it possible to replace the per arena chunk allocation/deallocation functions.
  Remove the top level huge stats, and replace them with per arena huge stats.
  Normalize function names and types to *dalloc* (some were *dealloc*).
  Remove the --enable-mremap option. As jemalloc currently operates, this is a performance regression for some applications, but planned work to logarithmically space huge size classes should provide similar amortized performance. The motivation for this change was that mremap-based huge reallocation forced leaky abstractions that prevented refactoring.
* Refactor tests. (Jason Evans, 2013-12-09, 1 file, -2/+2)
  Refactor tests to use explicit testing assertions, rather than diff'ing test output. This makes the test code a bit shorter, more explicitly encodes testing intent, and makes test failure diagnosis more straightforward.
* Fix mlockall()/madvise() interaction. (Jason Evans, 2012-10-09, 1 file, -2/+10)
  mlockall(2) can cause purging via madvise(2) to fail. Fix purging code to check whether madvise() succeeded, and base zeroed page metadata on the result.
  Reported by Olivier Lecomte.
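  The essence of the fix, sketched with illustrative names: the zeroed
  flag must derive from madvise()'s return value rather than being
  assumed true.

    #include <stdbool.h>
    #include <stddef.h>
    #include <sys/mman.h>

    /* Returns "unzeroed": true when purging failed and the pages keep
     * their old contents (as can happen under mlockall(2)). */
    static bool
    purge_checked_sketch(void *addr, size_t length) {
        return (madvise(addr, length, MADV_DONTNEED) != 0);
    }

  A caller would then record e.g. zeroed = !purge_checked_sketch(addr,
  len), where previously it assumed zeroed unconditionally.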
* Fix chunk_alloc_mmap() bugs. (Jason Evans, 2012-05-09, 1 file, -35/+10)
  Simplify chunk_alloc_mmap() to no longer attempt map extension. The extra complexity isn't warranted, because although in the success case it saves one system call as compared to immediately falling back to chunk_alloc_mmap_slow(), it also makes the failure case even more expensive. This simplification removes two bugs:
  - For Windows platforms, pages_unmap() wasn't being called for unaligned mappings prior to falling back to chunk_alloc_mmap_slow(). This caused permanent virtual memory leaks.
  - For non-Windows platforms, alignment greater than chunksize caused pages_map() to be called with size 0 when attempting map extension. This always resulted in an mmap() error, and subsequent fallback to chunk_alloc_mmap_slow().
* Use Get/SetLastError on Win32 (Mike Hommey, 2012-04-30, 1 file, -2/+2)
  Using errno on win32 doesn't quite work, because the value set in a shared library can't be read from e.g. an executable calling the function setting errno.
  At the same time, since buferror always uses errno/GetLastError, don't pass it.
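  A sketch of the resulting accessor pattern (jemalloc's internal
  helpers follow this shape; the _sketch names are illustrative):

    #ifdef _WIN32
    #include <windows.h>
    /* errno set inside a DLL is not visible to the calling module on
     * Windows, so the thread's last-error slot is used instead. */
    static int  get_errno_sketch(void) { return ((int)GetLastError()); }
    static void set_errno_sketch(int e) { SetLastError((DWORD)e); }
    #else
    #include <errno.h>
    static int  get_errno_sketch(void) { return (errno); }
    static void set_errno_sketch(int e) { errno = e; }
    #endif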
* Add support for Mingw (Mike Hommey, 2012-04-22, 1 file, -29/+79)
* Remove mmap_unaligned. (Jason Evans, 2012-04-22, 1 file, -74/+26)
  Remove mmap_unaligned, which was used to heuristically decide whether to optimistically call mmap() in such a way that could reduce the total number of system calls.
  If I remember correctly, the intention of mmap_unaligned was to avoid always executing the slow path in the presence of ASLR. However, that reasoning seems to have been based on a flawed understanding of how ASLR actually works. Although ASLR apparently causes mmap() to ignore address requests, it does not cause total placement randomness, so there is a reasonable expectation that iterative mmap() calls will start returning chunk-aligned mappings once the first chunk has been properly aligned.
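  The optimistic path that remains after this change, sketched with
  illustrative names: always try a plain mmap() first, and fall back to
  the over-allocate-and-trim slow path only when the result is not
  chunk-aligned.

    #include <stdint.h>
    #include <stddef.h>
    #include <sys/mman.h>

    static void *
    chunk_alloc_optimistic(size_t size, size_t alignment) {
        void *ret = mmap(NULL, size, PROT_READ | PROT_WRITE,
            MAP_PRIVATE | MAP_ANON, -1, 0);
        if (ret == MAP_FAILED)
            return (NULL);
        if (((uintptr_t)ret & (alignment - 1)) != 0) {
            /* Unaligned; undo and let the caller take the slow path.
             * Per the reasoning above, later calls tend to return
             * aligned addresses once one chunk has been aligned. */
            munmap(ret, size);
            return (NULL);
        }
        return (ret);
    }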
* Fix a memory corruption bug in chunk_alloc_dss(). (Jason Evans, 2012-04-21, 1 file, -6/+10)
  Fix a memory corruption bug in chunk_alloc_dss() that was due to claiming newly allocated memory is zeroed.
  Reverse order of preference between mmap() and sbrk() to prefer mmap().
  Clean up management of 'zero' parameter in chunk_alloc*().
* Add a pages_purge function to wrap madvise(JEMALLOC_MADV_PURGE) calls (Mike Hommey, 2012-04-19, 1 file, -0/+14)
  This will be used to implement the feature on mingw, which doesn't have madvise.
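  A sketch of such a wrapper with the Windows case filled in as this
  commit anticipates; MEM_RESET is the usual Windows analogue, and
  MADV_DONTNEED stands in for the configure-selected JEMALLOC_MADV_PURGE
  (the exact implementation here is illustrative):

    #include <stddef.h>
    #ifdef _WIN32
    #include <windows.h>
    #else
    #include <sys/mman.h>
    #endif

    static void
    pages_purge_wrapper(void *addr, size_t length) {
    #ifdef _WIN32
        /* No madvise(); MEM_RESET marks the pages as discardable. */
        VirtualAlloc(addr, length, MEM_RESET, PAGE_READWRITE);
    #else
        madvise(addr, length, MADV_DONTNEED);
    #endif
    }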
* Disable munmap() if it causes VM map holes. (Jason Evans, 2012-04-13, 1 file, -2/+5)
  Add a configure test to determine whether common mmap()/munmap() patterns cause VM map holes, and only use munmap() to discard unused chunks if the problem does not exist.
  Unify the chunk caching for mmap and dss.
  Fix options processing to limit lg_chunk to be large enough that redzones will always fit.
* Normalize aligned allocation algorithms. (Jason Evans, 2012-04-12, 1 file, -32/+18)
  Normalize arena_palloc(), chunk_alloc_mmap_slow(), and chunk_recycle_dss() to use the same algorithm for trimming over-allocation.
  Add the ALIGNMENT_ADDR2BASE(), ALIGNMENT_ADDR2OFFSET(), and ALIGNMENT_CEILING() macros, and use them where appropriate.
  Remove the run_size_p parameter from sa2u().
  Fix a potential deadlock in chunk_recycle_dss() that was introduced by eae269036c9f702d9fa9be497a1a2aa1be13a29e (Add alignment support to chunk_alloc()).
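  The macros named above have definitions to this effect (alignment must
  be a power of two):

    #include <stdint.h>
    #include <stddef.h>

    /* Largest alignment multiple <= the address. */
    #define ALIGNMENT_ADDR2BASE(a, alignment)                        \
        ((void *)((uintptr_t)(a) & ~((uintptr_t)(alignment) - 1)))
    /* Distance from the address down to that base. */
    #define ALIGNMENT_ADDR2OFFSET(a, alignment)                      \
        ((size_t)((uintptr_t)(a) & ((alignment) - 1)))
    /* Smallest alignment multiple >= s. */
    #define ALIGNMENT_CEILING(s, alignment)                          \
        (((s) + ((alignment) - 1)) & ~((alignment) - 1))

  For example, ALIGNMENT_CEILING((uintptr_t)map, chunksize) yields the
  first chunk-aligned address at or above map, which is the basis of the
  shared trimming algorithm.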
* Add alignment support to chunk_alloc(). (Mike Hommey, 2012-04-10, 1 file, -18/+17)
* Remove MAP_NORESERVE support (Mike Hommey, 2012-04-10, 1 file, -27/+14)
  It was only used by the swap feature, and that is gone.
* Implement tsd. (Jason Evans, 2012-03-23, 1 file, -22/+18)
  Implement tsd, which is a TLS/TSD abstraction that uses one or both internally. Modify bootstrapping such that no tsd's are utilized until allocation is safe.
  Remove malloc_[v]tprintf(), and use malloc_snprintf() instead.
  Fix %p argument size handling in malloc_vsnprintf().
  Fix a long-standing statistics-related bug in the "thread.arena" mallctl that could cause crashes due to linked list corruption.
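  A sketch of the TLS/TSD duality the abstraction papers over; jemalloc's
  tsd layer adds bootstrap-safety on top of this, and the names here are
  illustrative:

    #include <pthread.h>

    typedef struct arena_s arena_t;

    #ifdef JEMALLOC_TLS
    static __thread arena_t *arena_tls;
    #define ARENA_GET()  (arena_tls)
    #define ARENA_SET(a) (arena_tls = (a))
    #else
    /* Requires pthread_key_create(&arena_tsd, NULL) once at startup. */
    static pthread_key_t arena_tsd;
    #define ARENA_GET()  ((arena_t *)pthread_getspecific(arena_tsd))
    #define ARENA_SET(a) (pthread_setspecific(arena_tsd, (void *)(a)))
    #endif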
* Invert NO_TLS to JEMALLOC_TLS. (Jason Evans, 2012-03-19, 1 file, -2/+2)
* Implement malloc_vsnprintf(). (Jason Evans, 2012-03-08, 1 file, -6/+3)
  Implement malloc_vsnprintf() (a subset of vsnprintf(3)) as well as several other printing functions based on it, so that formatted printing can be relied upon without concern for inducing a dependency on floating point runtime support. Replace malloc_write() calls with malloc_*printf() where doing so simplifies the code.
  Add name mangling for library-private symbols in the data and BSS sections.
  Adjust CONF_HANDLE_*() macros in malloc_conf_init() to expose all opt_* variable use to cpp so that proper mangling occurs.
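  A usage sketch of the internal API named above; malloc_snprintf() and
  malloc_write() are jemalloc-internal functions, and this fragment
  assumes that context:

    #include <stddef.h>

    static void
    report_chunk(void *ptr, size_t size) {
        char buf[64];
        /* Integer/pointer conversions only, so no floating-point
         * runtime support is pulled in. */
        malloc_snprintf(buf, sizeof(buf), "chunk %p, size %zu\n",
            ptr, size);
        malloc_write(buf);
    }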
* Move repo contents in jemalloc/ to top level. (Jason Evans, 2011-04-01, 1 file, -0/+239)