path: root/src
Commit message | Author | Age | Files | Lines
...
* Silence an unused variable warning. | Jason Evans | 2013-10-20 | 1 | -1/+1
    Reported by Ricardo Nabinger Sanchez.
* malloc_conf_init: revert errno value when readlink(2) fails. | Alexandre Perrin | 2013-10-13 | 1 | -14/+14
* Fix another deadlock related to chunk_record(). | Jason Evans | 2013-04-23 | 1 | -8/+11
    Fix chunk_record() to unlock chunks_mtx before deallocating a base node, in order to avoid potential deadlock. This fix addresses the second of two similar bugs.
* Fix deadlock related to chunk_record(). | Jason Evans | 2013-04-17 | 1 | -4/+11
    Fix chunk_record() to unlock chunks_mtx before deallocating a base node, in order to avoid potential deadlock. Reported by Tudor Bosman.
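    A minimal sketch of the pattern both chunk_record() fixes apply, using plain pthreads rather than jemalloc's internals (list_mtx and node_dealloc() are illustrative stand-ins for chunks_mtx and base_node_dealloc()):

        #include <pthread.h>
        #include <stdlib.h>

        static pthread_mutex_t list_mtx = PTHREAD_MUTEX_INITIALIZER;   /* plays chunks_mtx */
        static pthread_mutex_t alloc_mtx = PTHREAD_MUTEX_INITIALIZER;  /* plays base_mtx */

        struct node { struct node *next; };

        static void node_dealloc(struct node *n) {
            pthread_mutex_lock(&alloc_mtx);
            free(n);
            pthread_mutex_unlock(&alloc_mtx);
        }

        static void record(struct node *n) {
            struct node *spare;

            pthread_mutex_lock(&list_mtx);
            /* ... coalescing may leave n unused ... */
            spare = n;
            pthread_mutex_unlock(&list_mtx);

            /* Deallocate only after dropping list_mtx: node_dealloc()
             * takes alloc_mtx, and taking alloc_mtx while holding
             * list_mtx would invert the lock order and risk deadlock. */
            if (spare != NULL)
                node_dealloc(spare);
        }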
* Fix a prof-related locking order bug. | Jason Evans | 2013-02-06 | 3 | -12/+22
    Fix a locking order bug that could cause deadlock during fork if heap profiling were enabled.
* Fix Valgrind integration. | Jason Evans | 2013-02-01 | 4 | -30/+37
    Fix Valgrind integration to annotate all internally allocated memory in a way that keeps Valgrind happy about internal data structure access.
* Fix a chunk recycling bug. | Jason Evans | 2013-02-01 | 1 | -0/+1
    Fix a chunk recycling bug that could cause the allocator to lose track of whether a chunk was zeroed. On FreeBSD, NetBSD, and OS X, it could cause corruption if allocating via sbrk(2) (unlikely unless running with the "dss:primary" option specified). This was completely harmless on Linux unless using mlockall(2) (and unlikely even then, unless the --disable-munmap configure option or the "dss:primary" option was specified). This regression was introduced in 3.1.0 by the mlockall(2)/madvise(2) interaction fix.
* Fix two quarantine bugs. | Jason Evans | 2013-01-31 | 1 | -10/+19
    Internal reallocation of the quarantined object array leaked the old array. Reallocation failure for internal reallocation of the quarantined object array (very unlikely) resulted in memory corruption.
* Fix potential TLS-related memory corruption. | Jason Evans | 2013-01-31 | 3 | -57/+43
    Avoid writing to uninitialized TLS as a side effect of deallocation. Initializing TLS during deallocation is unsafe because it is possible that a thread never did any allocation, and that TLS has already been deallocated by the threads library, resulting in write-after-free corruption. These fixes affect prof_tdata and quarantine; all other uses of TLS are already safe, whether intentionally (as for tcache) or unintentionally (as for arenas).
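    The safe shape of the fix, sketched with a hypothetical quarantine_tsd standing in for the affected prof_tdata/quarantine TLS: the deallocation path may read thread-local state but never lazily creates it.

        #include <stdlib.h>

        struct quarantine;                                   /* opaque per-thread state */
        static __thread struct quarantine *quarantine_tsd;   /* NULL until first allocation */

        void quarantine_free_sketch(void *ptr) {
            struct quarantine *q = quarantine_tsd;
            if (q == NULL) {
                /* This thread never allocated, or the threads library
                 * already tore its TLS down; writing to it here could
                 * corrupt freed memory, so just free immediately. */
                free(ptr);
                return;
            }
            /* ... enqueue ptr in q's quarantine ring ... */
        }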
* Revert opt_abort and opt_junk refactoring. | Jason Evans | 2013-01-23 | 1 | -2/+14
    Revert refactoring of opt_abort and opt_junk declarations. clang accepts the config_*-based declarations (and generates correct code), but gcc complains with: error: initializer element is not constant
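    The incompatibility in miniature (config_debug mirrors jemalloc's static const configuration flags, and JEMALLOC_DEBUG the corresponding configure macro):

        #include <stdbool.h>

        static const bool config_debug = false;

        /* The refactored form: clang accepts it, but in C gcc rejects it
         * with "initializer element is not constant", because a static
         * const object is not a constant expression:
         *
         *     bool opt_abort = config_debug;
         */

        /* The reverted, preprocessor-based form is a constant expression
         * everywhere: */
        #ifdef JEMALLOC_DEBUG
        bool opt_abort = true;
        #else
        bool opt_abort = false;
        #endif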
* Use config_* instead of JEMALLOC_*. | Jason Evans | 2013-01-22 | 2 | -14/+4
    Convert a couple of stragglers from JEMALLOC_* to use config_*.
* Update hash from MurmurHash2 to MurmurHash3. | Jason Evans | 2013-01-22 | 2 | -88/+21
    Update hash from MurmurHash2 to MurmurHash3, primarily because the latter generates 128 bits in a single call for no extra cost, which simplifies integration with cuckoo hashing.
* Add and use JEMALLOC_ALWAYS_INLINE. | Jason Evans | 2013-01-22 | 1 | -3/+3
    Add JEMALLOC_ALWAYS_INLINE and use it to guarantee that the entire fast paths of the primary allocation/deallocation functions are inlined.
* Tighten valgrind integration. | Jason Evans | 2013-01-22 | 2 | -22/+29
    Tighten valgrind integration such that immediately after memory is validated or zeroed, valgrind is told to forget the memory's 'defined' state. The only place newly allocated memory should be left marked as 'defined' is in the public functions (e.g. calloc() and realloc()).
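    A sketch of that discipline using Valgrind's public client requests from <valgrind/memcheck.h>; the function names are illustrative:

        #include <string.h>
        #include <valgrind/memcheck.h>

        static void zero_internal(void *p, size_t len) {
            memset(p, 0, len);
            /* Immediately forget the 'defined' state memset created, so
             * internal metadata accesses don't inherit it. */
            VALGRIND_MAKE_MEM_UNDEFINED(p, len);
        }

        void *public_calloc_path(void *p, size_t len) {
            zero_internal(p, len);
            /* Only at the public boundary is memory marked defined. */
            VALGRIND_MAKE_MEM_DEFINED(p, len);
            return p;
        }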
* Avoid validating freshly mapped memory. | Jason Evans | 2013-01-22 | 1 | -17/+17
    Move validation of supposedly zeroed pages from chunk_alloc() to chunk_recycle(). There is little point to validating newly mapped memory returned by chunk_alloc_mmap(), and memory that comes from sbrk() is explicitly zeroed, so there is little risk to assuming that chunk_alloc_dss() actually does the zeroing properly.
    This relaxation of validation can make a big difference to application startup time and overall system usage on platforms that use jemalloc as the system allocator (namely FreeBSD).
    Submitted by Ian Lepore <ian@FreeBSD.org>.
* Don't mangle errno with free(3) if utrace(2) fails. | Garrett Cooper | 2012-12-24 | 1 | -0/+2
    This ensures POLA on FreeBSD (at least) as free(3) is generally assumed to not fiddle around with errno.
    Signed-off-by: Garrett Cooper <yanegomi@gmail.com>
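    The errno-preservation pattern the commit applies, sketched with a stub trace_record() standing in for the utrace(2) call made on the free(3) path:

        #include <errno.h>

        static void trace_record(const void *rec, unsigned len) {
            (void)rec; (void)len;   /* stand-in; may fail and clobber errno */
        }

        void free_sketch(void *ptr) {
            int saved_errno = errno;
            /* ... actual deallocation of ptr ... */
            trace_record(ptr, 0);
            errno = saved_errno;    /* free(3) must leave errno untouched */
        }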
* Add clipping support to lg_chunk option processing. | Jason Evans | 2012-12-23 | 1 | -19/+23
    Modify processing of the lg_chunk option so that it clips an out-of-range input to the edge of the valid range. This makes it possible to request the minimum possible chunk size without intimate knowledge of allocator internals.
    Submitted by Ian Lepore (see FreeBSD PR bin/174641).
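    The clipping behavior as a sketch; the bounds here are hypothetical, since the real limits derive from allocator internals:

        #include <stddef.h>

        #define LG_CHUNK_MIN 14   /* hypothetical lower bound */
        #define LG_CHUNK_MAX 30   /* hypothetical upper bound */

        static size_t clip_lg_chunk(long requested) {
            if (requested < LG_CHUNK_MIN)
                return LG_CHUNK_MIN;    /* clip rather than reject */
            if (requested > LG_CHUNK_MAX)
                return LG_CHUNK_MAX;
            return (size_t)requested;
        }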
* Fix chunk_recycle() Valgrind integration. | Jason Evans | 2012-12-12 | 1 | -3/+2
    Fix chunk_recycle() to unconditionally inform Valgrind that returned memory is undefined. This fixes Valgrind warnings that would result from a huge allocation being freed, then recycled for use as an arena chunk. The arena code would write metadata to the chunk header, and Valgrind would consider these invalid writes.
* Fix "arenas.extend" mallctl to return the number of arenas.Jason Evans2012-11-301-9/+11
| | | | Reported by Mike Hommey.
* Avoid arena_prof_accum()-related locking when possible. | Jason Evans | 2012-11-13 | 3 | -35/+7
    Refactor arena_prof_accum() and its callers to avoid arena locking when prof_interval is 0 (as when profiling is disabled).
    Reported by Ben Maurer.
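    The shape of the refactor, sketched with pthreads; prof_interval is read-only after startup, so testing it before touching the lock is safe:

        #include <pthread.h>
        #include <stdint.h>

        static uint64_t prof_interval;   /* 0 when profiling is disabled */
        static pthread_mutex_t arena_lock = PTHREAD_MUTEX_INITIALIZER;
        static uint64_t accum_bytes;

        static void prof_accum(uint64_t bytes) {
            if (prof_interval == 0)
                return;                  /* common case: no locking at all */
            pthread_mutex_lock(&arena_lock);
            accum_bytes += bytes;
            pthread_mutex_unlock(&arena_lock);
        }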
* Tweak chunk purge order according to fragmentation. | Jason Evans | 2012-11-07 | 1 | -11/+34
    Tweak chunk purge order to purge unfragmented chunks from high to low memory. This facilitates dirty run reuse.
* Don't register jemalloc's zone allocator if something else already replaced the system default zone. | Mike Hommey | 2012-11-07 | 1 | -1/+11
* Purge unused dirty pages in a fragmentation-reducing order. | Jason Evans | 2012-11-06 | 1 | -191/+307
    Purge unused dirty pages in an order that first performs clean/dirty run defragmentation, in order to mitigate available run fragmentation.
    Remove the limitation that prevented purging unless at least one chunk worth of dirty pages had accumulated in an arena. This limitation was intended to avoid excessive purging for small applications, but the threshold was arbitrary, and the effect of questionable utility.
    Relax opt_lg_dirty_mult from 5 to 3. This compensates for increased likelihood of allocating clean runs, given the same ratio of clean:dirty runs, and reduces the potential for repeated purging in pathological large malloc/free loops that push the active:dirty page ratio just over the purge threshold.
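    The active:dirty threshold implied by opt_lg_dirty_mult, as a sketch: with the new value of 3, purging begins once dirty pages exceed 1/8 of active pages (1/32 under the old value of 5).

        #include <stdbool.h>
        #include <stddef.h>

        static size_t opt_lg_dirty_mult = 3;

        static bool should_purge(size_t nactive, size_t ndirty) {
            /* dirty > active / 2^lg_dirty_mult triggers purging */
            return ndirty > (nactive >> opt_lg_dirty_mult);
        }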
* Fix deadlock in the arenas.purge mallctl. | Jason Evans | 2012-11-04 | 1 | -26/+22
    Fix deadlock in the arenas.purge mallctl due to recursive mutex acquisition.
* Fix dss/mmap allocation precedence code. | Jason Evans | 2012-10-17 | 1 | -26/+14
    Fix dss/mmap allocation precedence code to use recyclable mmap memory only after primary dss allocation fails.
* Add ctl_mutex protection to arena_i_dss_ctl(). | Jason Evans | 2012-10-15 | 1 | -0/+2
    Add ctl_mutex protection to arena_i_dss_ctl(), since ctl_stats.narenas is accessed.
* Add arena-specific and selective dss allocation. | Jason Evans | 2012-10-13 | 9 | -189/+603
    Add the "arenas.extend" mallctl, so that it is possible to create new arenas that are outside the set that jemalloc automatically multiplexes threads onto.
    Add the ALLOCM_ARENA() flag for {,r,d}allocm(), so that it is possible to explicitly allocate from a particular arena.
    Add the "opt.dss" mallctl, which controls the default precedence of dss allocation relative to mmap allocation.
    Add the "arena.<i>.dss" mallctl, which makes it possible to set the default dss precedence on a per arena or global basis.
    Add the "arena.<i>.purge" mallctl, which obsoletes "arenas.purge".
    Add the "stats.arenas.<i>.dss" mallctl.
* Mark _pthread_mutex_init_calloc_cb as public explicitly. | Jan Beich | 2012-10-10 | 1 | -1/+1
    Mozilla's build hides everything by default using a visibility pragma and unhides only explicitly listed headers. But this doesn't work on FreeBSD because _pthread_mutex_init_calloc_cb is neither documented nor exposed via any header.
* Make malloc_usable_size() implementation consistent with prototype. | Jason Evans | 2012-10-09 | 1 | -1/+1
    Use JEMALLOC_USABLE_SIZE_CONST for the malloc_usable_size() implementation as well as the prototype, for consistency's sake.
* Fix fork(2)-related mutex acquisition order. | Jason Evans | 2012-10-09 | 1 | -3/+3
    Fix mutex acquisition order inversion for the chunks rtree and the base mutex. Chunks rtree acquisition was introduced by the previous commit, so this bug was short-lived.
* Fix fork(2)-related deadlocks. | Jason Evans | 2012-10-09 | 5 | -3/+144
    Add a library constructor for jemalloc that initializes the allocator. This fixes a race that could occur if threads were created by the main thread prior to any memory allocation, followed by fork(2), and then memory allocation in the child process.
    Fix the prefork/postfork functions to acquire/release the ctl, prof, and rtree mutexes. This fixes various fork() child process deadlocks, but one possible deadlock remains (intentionally) unaddressed: prof backtracing can acquire runtime library mutexes, so deadlock is still possible if heap profiling is enabled during fork(). This deadlock is known to be a real issue in at least the case of libgcc-based backtracing.
    Reported by tfengjun.
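    A simplified sketch of the two mechanisms combined here, with one stand-in mutex for the ctl/prof/rtree set:

        #include <pthread.h>

        static pthread_mutex_t ctl_mtx = PTHREAD_MUTEX_INITIALIZER;

        static void prefork(void)  { pthread_mutex_lock(&ctl_mtx); }
        static void postfork(void) { pthread_mutex_unlock(&ctl_mtx); }

        /* The constructor runs before main(), so the allocator (and its
         * atfork handlers) are in place before any thread can fork. */
        __attribute__((constructor))
        static void jemalloc_ctor_sketch(void) {
            /* malloc_init(); */
            pthread_atfork(prefork, postfork, postfork);
        }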
* Fix mlockall()/madvise() interaction. | Jason Evans | 2012-10-09 | 3 | -40/+44
    mlockall(2) can cause purging via madvise(2) to fail. Fix purging code to check whether madvise() succeeded, and base zeroed page metadata on the result.
    Reported by Olivier Lecomte.
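    The core of the fix, sketched (page metadata handling omitted): pages count as zeroed only when madvise(2) reports success, because under mlockall(2) the call can fail and leave old contents intact.

        #include <sys/mman.h>
        #include <stdbool.h>

        static bool purge_pages(void *addr, size_t len) {
            /* On Linux, MADV_DONTNEED makes anonymous pages read back as
             * zero on next touch -- but only if the call succeeded. */
            bool zeroed = (madvise(addr, len, MADV_DONTNEED) == 0);
            return zeroed;   /* caller records this in the page metadata */
        }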
* Fix error return value in thread_tcache_enabled_ctl(). | Jason Evans | 2012-10-08 | 1 | -1/+1
    Reported by Corey Richardson.
* If sysconf() fails, the number of CPUs is reported as UINT_MAX, not 1 as it should be. | Corey Richardson | 2012-10-08 | 1 | -3/+4
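    The bug's shape, as a sketch: sysconf(3) returns -1 (a long) on failure, and converting that to an unsigned CPU count yields UINT_MAX unless it is checked first.

        #include <unistd.h>

        static unsigned ncpus_sketch(void) {
            long result = sysconf(_SC_NPROCESSORS_ONLN);
            if (result == -1)
                return 1;            /* sane fallback, per the fix */
            return (unsigned)result;
        }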
* Remove unused variable and branch (reported by clang-analyzer). | Corey Richardson | 2012-10-08 | 1 | -5/+0
* Remove const from __*_hook variable declarations. | Jason Evans | 2012-05-23 | 1 | -5/+4
    Remove const from __*_hook variable declarations, so that glibc can modify them during process forking.
* Update a comment. | Jason Evans | 2012-05-16 | 1 | -1/+1
* Disable tcache by default if running inside Valgrind. | Jason Evans | 2012-05-16 | 1 | -0/+2
    Disable tcache by default if running inside Valgrind, in order to avoid making unallocated objects appear reachable to Valgrind.
* Auto-detect whether running inside Valgrind. | Jason Evans | 2012-05-15 | 1 | -14/+15
    Auto-detect whether running inside Valgrind, thus removing the need to manually specify MALLOC_CONF=valgrind:true.
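    A sketch tying this commit and the tcache one above together, using Valgrind's public RUNNING_ON_VALGRIND macro; the opt_* names are illustrative:

        #include <stdbool.h>
        #include <valgrind/valgrind.h>

        static bool opt_valgrind;
        static bool opt_tcache = true;

        static void detect_valgrind(void) {
            if (RUNNING_ON_VALGRIND) {
                opt_valgrind = true;
                /* Cached objects would look unreachable to Valgrind, so
                 * tcache defaults to off under it. */
                opt_tcache = false;
            }
        }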
* Return early in _malloc_{pre,post}fork() if uninitialized. | Jason Evans | 2012-05-12 | 1 | -0/+14
    Avoid mutex operations in _malloc_{pre,post}fork() unless jemalloc has been initialized.
    Reported by David Xu.
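    The guard, sketched:

        #include <stdbool.h>

        static bool malloc_initialized;   /* set at the end of initialization */

        void _malloc_prefork_sketch(void) {
            if (!malloc_initialized)
                return;   /* no mutexes exist yet; nothing to acquire */
            /* ... acquire allocator mutexes ... */
        }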
* Fix large calloc() zeroing bugs. | Jason Evans | 2012-05-11 | 1 | -25/+18
    Refactor code such that arena_mapbits_{large,small}_set() always preserves the unzeroed flag, and manually manipulate the unzeroed flag in the one case where it actually gets reset (in arena_chunk_purge()). This fixes unzeroed preservation bugs in arena_run_split() and arena_ralloc_large_grow(). These bugs caused large calloc() to return non-zeroed memory under some circumstances.
* Add arena chunk map assertions. | Jason Evans | 2012-05-11 | 1 | -15/+30
* Refactor arena_run_alloc(). | Jason Evans | 2012-05-11 | 1 | -34/+24
    Refactor duplicated arena_run_alloc() code into arena_run_alloc_helper().
* Add the --enable-mremap option. | Jason Evans | 2012-05-09 | 2 | -1/+4
    Add the --enable-mremap option, and disable the use of mremap(2) by default, for the same reason that freeing chunks via munmap(2) is disabled by default on Linux: semi-permanent VM map fragmentation.
* Fix chunk_recycle() to stop leaking trailing chunks. | Jason Evans | 2012-05-09 | 1 | -40/+38
    Fix chunk_recycle() to correctly compute trailsize and re-insert trailing chunks. This fixes a major virtual memory leak.
    Simplify chunk_record() to avoid dropping/re-acquiring chunks_mtx.
* Fix chunk_alloc_mmap() bugs. | Jason Evans | 2012-05-09 | 2 | -35/+11
    Simplify chunk_alloc_mmap() to no longer attempt map extension. The extra complexity isn't warranted, because although in the success case it saves one system call as compared to immediately falling back to chunk_alloc_mmap_slow(), it also makes the failure case even more expensive. This simplification removes two bugs:
    - For Windows platforms, pages_unmap() wasn't being called for unaligned mappings prior to falling back to chunk_alloc_mmap_slow(). This caused permanent virtual memory leaks.
    - For non-Windows platforms, alignment greater than chunksize caused pages_map() to be called with size 0 when attempting map extension. This always resulted in an mmap() error, and subsequent fallback to chunk_alloc_mmap_slow().
* Fix a base allocator deadlock. | Jason Evans | 2012-05-03 | 1 | -3/+14
    Fix a base allocator deadlock due to chunk_recycle() calling back into the base allocator.
* Don't use sizeof() on a VARIABLE_ARRAY. | Mike Hommey | 2012-05-02 | 1 | -2/+2
    In the alloca() case, this fails to be the right size.
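    Why sizeof() goes wrong, sketched with both expansions of a VARIABLE_ARRAY-style macro (alloca() is declared in <alloca.h> on glibc, in <stdlib.h> on FreeBSD):

        #include <alloca.h>
        #include <stdio.h>

        #define VARIABLE_ARRAY_VLA(type, name, n)    type name[n]
        #define VARIABLE_ARRAY_ALLOCA(type, name, n) \
            type *name = alloca((n) * sizeof(type))

        void demo(int n) {
            VARIABLE_ARRAY_VLA(int, a, n);      /* a is an array */
            VARIABLE_ARRAY_ALLOCA(int, b, n);   /* b is a pointer */
            /* sizeof(a) is n * sizeof(int); sizeof(b) is just the size
             * of a pointer, which is not "the right size". */
            printf("%zu vs %zu\n", sizeof(a), sizeof(b));
        }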
* Allow je_malloc_message to be overridden when linking statically. | Mike Hommey | 2012-05-02 | 2 | -19/+14
    If an application wants to override je_malloc_message, it is better to define the symbol locally than to change its value in main(), which might be too late for various reasons.
    Due to je_malloc_message being initialized in util.c, statically linking jemalloc with an application defining je_malloc_message fails due to "multiple definition of" the symbol. Defining it without a value (like je_malloc_conf) makes it more easily overridable.
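    What the change enables, sketched; the signature follows jemalloc 3.x's malloc_message callback (an opaque pointer plus the message string), but check the installed headers before relying on it:

        #include <stdio.h>

        static void my_write_cb(void *cbopaque, const char *s) {
            (void)cbopaque;
            fputs(s, stderr);   /* route allocator messages anywhere */
        }

        /* Defining the symbol in the application resolves it at link
         * time, well before main() runs. */
        void (*je_malloc_message)(void *cbopaque, const char *s) = my_write_cb;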
* Further optimize and harden arena_salloc(). | Jason Evans | 2012-05-02 | 1 | -4/+5
    Further optimize arena_salloc() to only look at the binind chunk map bits in the common case.
    Add more sanity checks to arena_salloc() that detect chunk map inconsistencies for large allocations (whether due to allocator bugs or application bugs).