path: root/src
Commit message | Author | Age | Files | Lines
* Refactor prng to not use 64-bit atomics on 32-bit platforms. (Jason Evans, 2016-11-07; 3 files, -6/+8)
  This resolves #495.
* Fix run leak. (Jason Evans, 2016-11-07; 1 file, -5/+7)
  Fix arena_run_first_best_fit() to search all potentially non-empty
  runs_avail heaps, rather than ignoring the heap that contains runs
  larger than large_maxclass but less than chunksize. This fixes a
  regression caused by f193fd80cf1f99bce2bc9f5f4a8b149219965da2
  (Refactor runs_avail.). This resolves #493.
* Fix arena data structure size calculation. (Jason Evans, 2016-11-04; 1 file, -2/+2)
  Fix paren placement so that QUANTUM_CEILING() applies to the correct
  portion of the expression that computes how much memory to
  base_alloc(). In practice this bug had no impact.

  This was caused by 5d8db15db91c85d47b343cfc07fc6ea736f0de48 (Simplify
  run quantization.), which in turn fixed an over-allocation regression
  caused by 3c4d92e82a31f652a7c77ca937a02d0185085b06 (Add per size class
  huge allocation statistics.).
* Fix large allocation to search optimal size class heap. (Jason Evans, 2016-11-04; 1 file, -1/+1)
  Fix arena_run_alloc_large_helper() to not convert size to usize when
  searching for the first best fit via arena_run_first_best_fit(). This
  allows the search to consider the optimal quantized size class, so
  that e.g. allocating and deallocating 40 KiB in a tight loop can
  reuse the same memory.

  This regression was nominally caused by
  5707d6f952c71baa2f19102479859012982ac821 (Quantize szad trees by size
  class.), but it did not commonly cause problems until
  8a03cf039cd06f9fa6972711195055d865673966 (Implement cache index
  randomization for large allocations.). These regressions were first
  released in 4.0.0. This resolves #487.
* Fix chunk_alloc_cache() to support decommitted allocation. (Jason Evans, 2016-11-04; 2 files, -11/+13)
  Fix chunk_alloc_cache() to support decommitted allocation, and use
  this ability in arena_chunk_alloc_internal() and arena_stash_dirty(),
  so that chunks don't get permanently stuck in a hybrid state. This
  resolves #487.
* Check for existence of CPU_COUNT macro before using it. (Dave Watson, 2016-11-03; 1 file, -1/+7)
  This resolves #485.
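  A minimal sketch of the guarded pattern this implies, assuming
  glibc's sched.h/pthread interfaces (the exact jemalloc code is not
  shown here):

      #define _GNU_SOURCE
      #include <sched.h>
      #include <pthread.h>
      #include <unistd.h>

      static unsigned
      ncpus_get(void) {
      #if defined(CPU_COUNT)
          cpu_set_t set;
          /* Count only the CPUs this thread is allowed to run on. */
          if (pthread_getaffinity_np(pthread_self(), sizeof(set),
              &set) == 0)
              return (unsigned)CPU_COUNT(&set);
      #endif
          /* Fall back where the CPU_COUNT macro is unavailable. */
          return (unsigned)sysconf(_SC_NPROCESSORS_ONLN);
      }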
* Do not use syscall(2) on OS X 10.12 (deprecated). (Jason Evans, 2016-11-03; 2 files, -4/+4)
* Add os_unfair_lock support. (Jason Evans, 2016-11-03; 1 file, -0/+2)
  OS X 10.12 deprecated OSSpinLock; os_unfair_lock is the recommended
  replacement.
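  For context, a minimal sketch of the replacement API (Apple's
  documented os/lock.h interface; this is not the jemalloc wrapper
  itself):

      #include <os/lock.h>

      static os_unfair_lock lock = OS_UNFAIR_LOCK_INIT;

      static void
      critical_section(void) {
          os_unfair_lock_lock(&lock);   /* was OSSpinLockLock() */
          /* ... critical section ... */
          os_unfair_lock_unlock(&lock); /* was OSSpinLockUnlock() */
      }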
* Fix/refactor zone allocator integration code. (Jason Evans, 2016-11-03; 1 file, -85/+107)
  Fix zone_force_unlock() to reinitialize, rather than unlock, mutexes,
  since OS X 10.12 cannot tolerate a child unlocking mutexes that were
  locked by its parent.

  Refactor; this was a side effect of experimenting with zone
  {de,re}registration during fork(2).
* Add "J" (JSON) support to malloc_stats_print().Jason Evans2016-11-011-377/+854
| | | | This resolves #474.
* Use CLOCK_MONOTONIC_COARSE rather than CLOCK_MONOTONIC_RAW. (Jason Evans, 2016-10-30; 1 file, -2/+2)
  The raw clock variant is slow (even relative to plain
  CLOCK_MONOTONIC), whereas the coarse clock variant is faster than
  CLOCK_MONOTONIC and still has resolution (~1 ms) that is adequate for
  our purposes. This resolves #479.
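  A sketch of the resulting preference, assuming Linux clock_gettime(2)
  (the surrounding nstime plumbing is elided):

      #include <time.h>

      static void
      now_monotonic(struct timespec *ts) {
      #ifdef CLOCK_MONOTONIC_COARSE
          /* Fast, ~1 ms resolution: adequate for decay timing. */
          clock_gettime(CLOCK_MONOTONIC_COARSE, ts);
      #else
          clock_gettime(CLOCK_MONOTONIC, ts);
      #endif
      }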
* Use syscall(2) rather than {open,read,close}(2) during boot. (Jason Evans, 2016-10-30; 1 file, -0/+19)
  Some applications wrap various system calls, and if they call the
  allocator in their wrappers, unexpected reentry can result. This is
  not a general solution (many other syscalls are spread throughout the
  code), but it resolves a bootstrapping issue that is apparently
  common. This resolves #443.
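  A hedged sketch of the technique: read a file via raw syscalls so
  that application-level open/read/close wrappers are bypassed
  (SYS_open is assumed available; some newer architectures expose only
  SYS_openat):

      #include <fcntl.h>
      #include <sys/syscall.h>
      #include <unistd.h>

      static ssize_t
      boot_read_file(const char *path, char *buf, size_t len) {
          long fd = syscall(SYS_open, path, O_RDONLY);
          if (fd < 0)
              return -1;
          ssize_t n = syscall(SYS_read, (int)fd, buf, len);
          syscall(SYS_close, (int)fd);
          return n;
      }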
* Do not mark malloc_conf as weak on Windows. (Jason Evans, 2016-10-29; 1 file, -1/+1)
  This works around malloc_conf not being properly initialized by at
  least the cygwin toolchain. Prior build system changes to use
  -Wl,--[no-]whole-archive may be necessary for malloc_conf resolution
  to work properly as a non-weak symbol (not tested).
* Do not mark malloc_conf as weak for unit tests. (Jason Evans, 2016-10-29; 1 file, -1/+5)
  This is generally correct (no need for weak symbols since no jemalloc
  library is involved in the link phase), and avoids linking problems
  (apparently uninitialized non-NULL malloc_conf) when using cygwin
  with gcc.
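  For context, the weak-symbol mechanism being disabled in these two
  cases, sketched with plain GCC attribute syntax (jemalloc's actual
  declaration goes through its own attribute macros):

      /* Weak default: a strong definition of malloc_conf in the
       * application wins at link time; without weak linkage this
       * definition is authoritative. */
      __attribute__((weak)) const char *malloc_conf = NULL;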
* Support static linking of jemalloc with glibc. (Dave Watson, 2016-10-28; 1 file, -0/+31)
  glibc defines its malloc implementation with several weak and strong
  symbols:

    strong_alias (__libc_calloc, __calloc)
    weak_alias (__libc_calloc, calloc)
    strong_alias (__libc_free, __cfree)
    weak_alias (__libc_free, cfree)
    strong_alias (__libc_free, __free)
    strong_alias (__libc_free, free)
    strong_alias (__libc_malloc, __malloc)
    strong_alias (__libc_malloc, malloc)

  The issue is not with the weak symbols, but that other parts of glibc
  depend on __libc_malloc explicitly. Defining them in terms of
  jemalloc APIs allows the linker to drop glibc's malloc.o completely
  from the link, and static linking no longer results in symbol
  collisions, as sketched below.

  Another wrinkle: during initialization jemalloc calls sysconf() to
  get the number of CPUs. glibc allocates for the first time before
  setting up the isspace() (and other related) tables, which are used
  by sysconf(). Instead, use the pthread API to get the number of CPUs
  with glibc, which seems to work.

  This resolves #442.
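  A minimal sketch of the alias technique (GCC alias attributes;
  je_malloc and je_free stand in for jemalloc's prefixed entry points,
  and the precise set of aliases in the commit may differ):

      #include <stddef.h>
      #include <stdlib.h>

      /* Stand-ins for jemalloc's real entry points, which are defined
       * elsewhere in the library. */
      void *je_malloc(size_t size) { return malloc(size); }
      void je_free(void *ptr) { free(ptr); }

      #define ALIAS(je_fn) __attribute__((alias(#je_fn), used))

      /* Satisfy glibc-internal references so that glibc's malloc.o
       * can be dropped from a static link. */
      void *__libc_malloc(size_t size) ALIAS(je_malloc);
      void __libc_free(void *ptr) ALIAS(je_free);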
* Fix over-sized allocation of rtree leaf nodes. (Jason Evans, 2016-10-28; 1 file, -1/+1)
  Use the correct level metadata when allocating child nodes so that
  leaf nodes don't end up over-sized (2^16 elements vs 2^4 elements).
* Do not (recursively) allocate within tsd_fetch(). (Jason Evans, 2016-10-21; 6 files, -73/+79)
  Refactor tsd so that tsdn_fetch() does not trigger allocation, since
  allocation could cause infinite recursion. This resolves #458.
* Make dss operations lockless. (Jason Evans, 2016-10-13; 6 files, -127/+118)
  Rather than protecting dss operations with a mutex, use atomic
  operations. This has negligible impact on synchronization overhead
  during typical dss allocation, but is a substantial improvement for
  chunk_in_dss() and the newly added chunk_dss_mergeable(), which can
  be called multiple times during chunk deallocations.

  This change also has the advantage of avoiding tsd in deallocation
  paths associated with purging, which resolves potential deadlocks
  during thread exit due to attempted tsd resurrection. This resolves
  #425.
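  The read-side win can be sketched with C11 atomics (illustrative
  stand-ins for jemalloc's internal atomic wrappers and dss state):

      #include <stdatomic.h>
      #include <stdbool.h>
      #include <stdint.h>

      static _Atomic(uintptr_t) dss_base; /* lowest dss address */
      static _Atomic(uintptr_t) dss_max;  /* highest dss address yet */

      static bool
      chunk_in_dss_sketch(void *chunk) {
          /* Two plain atomic loads; no mutex on this hot path, so
           * repeated calls during chunk deallocation stay cheap. */
          uintptr_t addr = (uintptr_t)chunk;
          return addr >= atomic_load(&dss_base)
              && addr < atomic_load(&dss_max);
      }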
* Add/use adaptive spinning. (Jason Evans, 2016-10-13; 3 files, -2/+10)
  Add spin_t and spin_{init,adaptive}(), which provide a simple
  abstraction for adaptive spinning. Adaptively spin during busy waits
  in bootstrapping and rtree node initialization.
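  A sketch of what such an abstraction plausibly looks like (the
  iteration limit and backoff shape are illustrative, not the commit's
  exact values):

      #include <sched.h>

      typedef struct {
          unsigned iteration;
      } spin_t;

      static void
      spin_init(spin_t *s) {
          s->iteration = 0;
      }

      static void
      spin_adaptive(spin_t *s) {
          if (s->iteration < 5) {
              /* Busy-wait, roughly doubling the cost each round. */
              for (volatile unsigned i = 0;
                  i < (1U << s->iteration); i++)
                  ;
              s->iteration++;
          } else {
              /* Past the threshold, yield to the scheduler instead. */
              sched_yield();
          }
      }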
* Disallow 0x5a junk filling when running in Valgrind. (Jason Evans, 2016-10-13; 1 file, -6/+28)
  Explicitly disallow junk:true and junk:free runtime settings when
  running in Valgrind, since deallocation-time junk filling and redzone
  validation cause false positive Valgrind reports. This resolves #470.
* Fix and simplify decay-based purging. (Jason Evans, 2016-10-11; 1 file, -51/+58)
  Simplify decay-based purging attempts to only be triggered when the
  epoch is advanced, rather than every time purgeable memory increases.
  In a correctly functioning system (not previously the case; see
  below), this only causes a behavior difference if, during subsequent
  purge attempts, the least recently used (LRU) purgeable memory extent
  is initially too large to be purged, but that memory is reused
  between attempts and one or more of the next LRU purgeable memory
  extents are small enough to be purged. In practice this is an
  arbitrary behavior change that is within the set of acceptable
  behaviors.

  As for the purging fix, ensure that arena->decay.ndirty is recorded
  *after* the epoch advance and associated purging occurs. Prior to
  this fix, it was possible for purging during epoch advance to cause a
  substantially underrepresentative (arena->ndirty -
  arena->decay.ndirty), i.e. the number of dirty pages attributed to
  the current epoch was too low, and a series of unintended purges
  could result. This fix is also relevant in the context of the
  simplification described above, but the bug's impact would be limited
  to over-purging at epoch advances.
* Do not advance decay epoch when time goes backwards. (Jason Evans, 2016-10-11; 2 files, -4/+39)
  Instead, move the epoch backward in time. Additionally, add
  nstime_monotonic() and use it in debug builds to assert that time
  only goes backward if nstime_update() is using a non-monotonic time
  source.
* Refactor arena->decay_* into arena->decay.* (arena_decay_t). (Jason Evans, 2016-10-11; 1 file, -38/+38)
* Refine nstime_update(). (Jason Evans, 2016-10-10; 1 file, -27/+49)
  Add missing #include <time.h>. The critical time facilities appear to
  have been transitively included via unistd.h and sys/time.h, but in
  principle this omission was capable of having caused
  clock_gettime(CLOCK_MONOTONIC, ...) to have been overlooked in favor
  of gettimeofday(), which in turn could cause spurious non-monotonic
  time updates.

  Refactor nstime_get() out of nstime_update() and add configure tests
  for all variants.

  Add CLOCK_MONOTONIC_RAW support (Linux-specific) and
  mach_absolute_time() support (OS X-specific).

  Do not fall back to clock_gettime(CLOCK_REALTIME, ...). This was a
  fragile Linux-specific workaround, which we're unlikely to use at all
  now that clock_gettime(CLOCK_MONOTONIC_RAW, ...) is supported, and if
  we have no choice besides non-monotonic clocks, gettimeofday() is
  only incrementally worse.
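  The resulting priority order, sketched with configure-style feature
  guards (the JEMALLOC_HAVE_* names are illustrative stand-ins for the
  configure results, and the OS X mach_absolute_time() branch is
  omitted for brevity):

      #include <time.h>
      #include <sys/time.h>

      static void
      nstime_get_sketch(struct timespec *ts) {
      #if defined(JEMALLOC_HAVE_CLOCK_MONOTONIC_RAW) /* Linux */
          clock_gettime(CLOCK_MONOTONIC_RAW, ts);
      #elif defined(JEMALLOC_HAVE_CLOCK_MONOTONIC)
          clock_gettime(CLOCK_MONOTONIC, ts);
      #else
          /* Last resort: non-monotonic wall clock. */
          struct timeval tv;
          gettimeofday(&tv, NULL);
          ts->tv_sec = tv.tv_sec;
          ts->tv_nsec = tv.tv_usec * 1000;
      #endif
      }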
* Simplify run quantization. (Jason Evans, 2016-10-06; 2 files, -153/+30)
* Refactor runs_avail. (Jason Evans, 2016-10-05; 2 files, -41/+37)
  Use pszind_t size classes rather than szind_t size classes, and
  always reserve space for NPSIZES elements. This removes unused heaps
  that are not multiples of the page size, and adds (currently) unused
  heaps for all huge size classes, with the immediate benefit that the
  size of arena_t allocations is constant (no longer dependent on chunk
  size).
* Implement pz2ind(), pind2sz(), and psz2u(). (Jason Evans, 2016-10-04; 2 files, -4/+4)
  These compute size classes and indices similarly to size2index(),
  index2size() and s2u(), respectively, but using the subset of size
  classes that are multiples of the page size. Note that pszind_t and
  szind_t are not interchangeable.
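  A deliberately simplified illustration of psz2u()-style rounding
  (real page size classes are not every page multiple, so this only
  matches the smallest classes):

      #include <stddef.h>

      #define PAGE ((size_t)4096) /* illustrative; jemalloc derives it */

      /* Round a size up to a multiple of the page size. */
      static size_t
      psz2u_simplified(size_t psz) {
          return (psz + PAGE - 1) & ~(PAGE - 1);
      }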
* Use TSDN_NULL rather than NULL as appropriate. (Jason Evans, 2016-10-04; 2 files, -7/+7)
* Close file descriptor after reading "/proc/sys/vm/overcommit_memory". (Jason Evans, 2016-09-26; 1 file, -0/+1)
  This bug was introduced by c2f970c32b527660a33fa513a76d913c812dcf7c
  (Modify pages_map() to support mapping uncommitted virtual memory.).
  This resolves #399.
* Formatting fixes. (Jason Evans, 2016-09-26; 1 file, -9/+12)
* Change how the default zone is found. (Mike Hommey, 2016-09-26; 1 file, -2/+31)
  On OS X 10.12, malloc_default_zone() returns a special zone that is
  not present in the list of registered zones. That zone uses a "lite
  zone" if one is present (apparently enabled when malloc stack logging
  is enabled), or the first registered zone otherwise. In practice this
  means that unless malloc stack logging is enabled, the first
  registered zone is the default. So get the list of zones to find the
  first one, instead of relying on malloc_default_zone().
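  A sketch of enumerating registered zones instead of trusting
  malloc_default_zone() (malloc_get_all_zones() is the documented
  enumeration API; error handling is elided):

      #include <malloc/malloc.h>
      #include <mach/mach.h>

      static malloc_zone_t *
      first_registered_zone(void) {
          vm_address_t *zones;
          unsigned nzones;
          /* The first registered zone is the de facto default. */
          if (malloc_get_all_zones(mach_task_self(), NULL, &zones,
              &nzones) == KERN_SUCCESS && nzones > 0)
              return (malloc_zone_t *)zones[0];
          return malloc_default_zone();
      }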
* Fix a Valgrind regression in chunk_recycle(). (Elliot Ronaghan, 2016-09-26; 1 file, -1/+2)
  Fix a latent Valgrind bug exposed by
  d412624b25eed2b5c52b7d94a71070d3aab03cb4 (Move retaining out of
  default chunk hooks).
* Fix arena_bind(). (Qi Wang, 2016-09-23; 1 file, -6/+7)
  When tsd is not in nominal state (e.g. during thread termination), we
  should not increment nthreads.
* Fix rallocx() sampling code to not eagerly commit sampler update. (Jason Evans, 2016-06-08; 1 file, -3/+3)
  rallocx() for an alignment-constrained request may end up with a
  smaller-than-worst-case size if in-place reallocation succeeds due to
  serendipitous alignment. In such cases, sampling may not happen.
* Fix opt_zero-triggered in-place huge reallocation zeroing. (Jason Evans, 2016-06-08; 1 file, -5/+5)
  Fix huge_ralloc_no_move_expand() to update the extent's zeroed
  attribute based on the intersection of the previous value and that of
  the newly merged trailing extent.
* Fix a Valgrind regression in chunk_alloc_wrapper(). (Elliot Ronaghan, 2016-06-07; 1 file, -2/+4)
  This regression was caused by d412624b25eed2b5c52b7d94a71070d3aab03cb4
  (Move retaining out of default chunk hooks).
* Fix a Valgrind regression in calloc(). (Elliot Ronaghan, 2016-06-07; 1 file, -1/+1)
  This regression was caused by 3ef51d7f733ac6432e80fa902a779ab5b98d74f6
  (Optimize the fast paths of calloc() and [m,d,sd]allocx().).
* Fix potential VM map fragmentation regression. (Jason Evans, 2016-06-07; 2 files, -2/+2)
  Revert 245ae6036c09cc11a72fab4335495d95cddd5beb (Support
  --with-lg-page values larger than actual page size.), because it
  could cause VM map fragmentation if the kernel grows mmap()ed memory
  downward. This resolves #391.
* Fix mixed decl in nstime.c. (Elliot Ronaghan, 2016-06-07; 1 file, -3/+5)
  Fix a mixed declaration in the gettimeofday() branch of
  nstime_update().
* Propagate tsdn to default chunk hooks. (Jason Evans, 2016-06-07; 1 file, -20/+62)
  This avoids bootstrapping issues for configurations that require
  allocation during tsd initialization. This resolves #390.
* Guard tsdn_tsd() call with tsdn_null() check. (Jason Evans, 2016-05-11; 1 file, -2/+2)
* Mangle tested functions as n_witness_* rather than witness_*_impl. (Jason Evans, 2016-05-11; 1 file, -9/+8)
* Optimize witness fast path. (Jason Evans, 2016-05-11; 1 file, -118/+4)
  Short-circuit commonly called witness functions so that they only
  execute in debug builds, and remove equivalent guards from mutex
  functions. This avoids pointless code execution in
  witness_assert_lockless(), which is typically called twice per
  allocation/deallocation function invocation.

  Inline commonly called witness functions so that optimized builds can
  completely remove calls as dead code.
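  The short-circuit pattern described, sketched generically
  (config_debug mirrors jemalloc's compile-time debug flag; the witness
  internals are elided):

      #include <stdbool.h>

      /* Compile-time constant: false in optimized builds. */
      static const bool config_debug =
      #ifdef JEMALLOC_DEBUG
          true
      #else
          false
      #endif
          ;

      static void
      check_locks_slow(void) {
          /* expensive lock-order validation elided */
      }

      static inline void
      check_locks(void) {
          /* With config_debug a compile-time false, the optimizer
           * removes the call below as dead code. */
          if (!config_debug)
              return;
          check_locks_slow();
      }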
* Fix chunk accounting related to triggering gdump profiles. (Jason Evans, 2016-05-11; 1 file, -0/+15)
  Fix in-place huge reallocation to update the chunk counters that are
  used for triggering gdump profiles.
* Resolve bootstrapping issues when embedded in FreeBSD libc. (Jason Evans, 2016-05-11; 14 files, -1196/+1257)
  b2c0d6322d2307458ae2b28545f8a5c9903d7ef5 (Add witness, a simple
  online locking validator.) caused a broad propagation of tsd
  throughout the internal API, but tsd_fetch() was designed to fail
  prior to tsd bootstrapping. Fix this by splitting tsd_t into
  non-nullable tsd_t and nullable tsdn_t, and modifying all internal
  APIs that do not critically rely on tsd to take nullable pointers.
  Furthermore, add the tsd_booted_get() function so that tsdn_fetch()
  can probe whether tsd bootstrapping is complete and return NULL if
  not. All dangerous conversions of nullable pointers are tsdn_tsd()
  calls that assert-fail on invalid conversion.
* Fix tsd bootstrapping for a0malloc(). (Jason Evans, 2016-05-07; 1 file, -27/+31)
* Optimize the fast paths of calloc() and [m,d,sd]allocx(). (Jason Evans, 2016-05-06; 3 files, -188/+116)
  This is a broader application of optimizations to malloc() and free()
  in f4a0f32d340985de477bbe329ecdaecd69ed1055 (Fast-path improvement:
  reduce # of branches and unnecessary operations.). This resolves
  #321.
* Modify pages_map() to support mapping uncommitted virtual memory. (Jason Evans, 2016-05-06; 3 files, -25/+102)
  If the OS overcommits:
  - Commit all mappings in pages_map() regardless of whether the caller
    requested committed memory.
  - Linux-specific: Specify MAP_NORESERVE to avoid unfortunate
    interactions with heuristic overcommit mode during fork(2).
  This resolves #193.
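  A sketch of the overcommit branch described above (Linux mmap(2)
  flags; the commit's actual logic also covers the non-overcommit case
  and other platforms):

      #include <stddef.h>
      #include <sys/mman.h>

      static void *
      pages_map_overcommit_sketch(size_t size) {
          /* Commit up front; MAP_NORESERVE avoids swap reservation,
           * which otherwise interacts badly with heuristic overcommit
           * mode during fork(2). */
          void *ret = mmap(NULL, size, PROT_READ | PROT_WRITE,
              MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
          return (ret == MAP_FAILED) ? NULL : ret;
      }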
* Scale leak report summary according to sampling probability. (Jason Evans, 2016-05-04; 1 file, -18/+38)
  This makes the numbers reported in the leak report summary closely
  match those reported by jeprof. This resolves #356.
* Add the stats.retained and stats.arenas.<i>.retained statistics. (Jason Evans, 2016-05-04; 4 files, -6/+30)
  This resolves #367.