...
* Fix extent_alloc_cache[_locked]() to support decommitted allocation. (Jason Evans, 2016-11-04; 4 files, -20/+19)

  Fix extent_alloc_cache[_locked]() to support decommitted allocation, and use this ability in arena_stash_dirty(), so that decommitted extents are not needlessly committed during purging. In practice this does not happen on any currently supported system, because it would require both extent merging and decommit to be implemented, and each supported system implements exactly one of the two.

* Update symbol mangling. (Jason Evans, 2016-11-03; 1 file, -0/+2)

* Update ChangeLog for 4.3.0. (Jason Evans, 2016-11-03; 1 file, -0/+37)

* Support Debian GNU/kFreeBSD. (Samuel Moritz, 2016-11-03; 1 file, -1/+1)

  Treat it exactly like Linux, since they both use GNU libc.

* Fix long spinning in rtree_node_init. (Dave Watson, 2016-11-03; 4 files, -19/+15)

  rtree_node_init() spinlocks the node, allocates, and then sets the node. This is under heavy contention at the top of the tree if many threads start to allocate at the same time. Instead, take a per-rtree sleeping mutex to reduce spinning. Tested with both pthreads and OS X OSSpinLock; both reduce spinning adequately.

  Previous benchmark time: ./ttest1 500 100 took ~15s
  New benchmark time:      ./ttest1 500 100 took 0.57s

* Check for existence of the CPU_COUNT macro before using it. (Dave Watson, 2016-11-03; 1 file, -1/+7)

  This resolves #485.

* Fix syscall(2) configure test for Linux. (Jason Evans, 2016-11-03; 1 file, -2/+1)

* Do not use syscall(2) on OS X 10.12 (deprecated). (Jason Evans, 2016-11-03; 4 files, -4/+24)

* Add os_unfair_lock support. (Jason Evans, 2016-11-03; 7 files, -0/+42)

  OS X 10.12 deprecated OSSpinLock; os_unfair_lock is the recommended replacement.

* Fix/refactor zone allocator integration code. (Jason Evans, 2016-11-03; 2 files, -85/+108)

  Fix zone_force_unlock() to reinitialize mutexes rather than unlock them, since OS X 10.12 cannot tolerate a child unlocking mutexes that were locked by its parent. Also refactor; this was a side effect of experimenting with zone {de,re}registration during fork(2).

* Call _exit(2) rather than exit(3) in forked child. (Jason Evans, 2016-11-03; 1 file, -1/+1)

  _exit(2) is async-signal-safe, whereas exit(3) is not.

* Force no lazy-lock on Windows. (Jason Evans, 2016-11-02; 1 file, -5/+11)

  Monitoring thread creation is unimplemented for Windows, which means lazy-lock cannot function correctly. This resolves #310.

* malloc_stats_print() fixes/cleanups. (Jason Evans, 2016-11-01; 1 file, -18/+3)

  Fix and clean up various malloc_stats_print() issues caused by 0ba5b9b6189e16a983d8922d8c5cb6ab421906e8 (Add "J" (JSON) support to malloc_stats_print().).

* Use <quote>...</quote> rather than &ldquo;...&rdquo; or "..." in XML. (Jason Evans, 2016-11-01; 2 files, -31/+33)

* Add "J" (JSON) support to malloc_stats_print(). (Jason Evans, 2016-11-01; 2 files, -335/+738)

  This resolves #474.

* Fix extent_rtree_acquire() to release element on error. (Jason Evans, 2016-10-31; 1 file, -1/+3)

  This resolves #480.

* Add an assertion in witness_owner(). (Jason Evans, 2016-10-31; 1 file, -0/+3)

* Refactor witness_unlock() to fix undefined test behavior. (Jason Evans, 2016-10-31; 2 files, -11/+29)

  This resolves #396.

* Use CLOCK_MONOTONIC_COARSE rather than CLOCK_MONOTONIC_RAW. (Jason Evans, 2016-10-30; 3 files, -10/+10)

  The raw clock variant is slow (even relative to plain CLOCK_MONOTONIC), whereas the coarse clock variant is faster than CLOCK_MONOTONIC, yet still has resolution (~1 ms) that is adequate for our purposes. This resolves #479.

* Use syscall(2) rather than {open,read,close}(2) during boot. (Jason Evans, 2016-10-30; 1 file, -0/+19)

  Some applications wrap various system calls, and if they call the allocator in their wrappers, unexpected reentry can result. This is not a general solution (many other syscalls are spread throughout the code), but it resolves a bootstrapping issue that is apparently common. This resolves #443.

* Fix EXTRA_CFLAGS to not affect configuration. (Jason Evans, 2016-10-30; 2 files, -5/+4)

* Do not mark malloc_conf as weak on Windows. (Jason Evans, 2016-10-29; 1 file, -1/+1)

  This works around malloc_conf not being properly initialized by at least the Cygwin toolchain. Prior build system changes to use -Wl,--[no-]whole-archive may be necessary for malloc_conf resolution to work properly as a non-weak symbol (not tested).

* Do not mark malloc_conf as weak for unit tests. (Jason Evans, 2016-10-29; 1 file, -1/+5)

  This is generally correct (no need for weak symbols, since no jemalloc library is involved in the link phase), and avoids linking problems (apparently uninitialized non-NULL malloc_conf) when using Cygwin with gcc.

* Support static linking of jemalloc with glibc. (Dave Watson, 2016-10-28; 2 files, -0/+34)

  glibc defines its malloc implementation with several weak and strong symbols:

    strong_alias (__libc_calloc, __calloc)
    weak_alias (__libc_calloc, calloc)
    strong_alias (__libc_free, __cfree)
    weak_alias (__libc_free, cfree)
    strong_alias (__libc_free, __free)
    strong_alias (__libc_free, free)
    strong_alias (__libc_malloc, __malloc)
    strong_alias (__libc_malloc, malloc)

  The issue is not with the weak symbols, but that other parts of glibc depend on __libc_malloc explicitly. Defining them in terms of jemalloc's APIs allows the linker to drop glibc's malloc.o completely from the link, and static linking no longer results in symbol collisions.

  Another wrinkle: during initialization, jemalloc calls sysconf(3) to get the number of CPUs. glibc allocates for the first time before setting up the isspace() (and other related) tables, which are used by sysconf(3). Instead, use the pthread API to get the number of CPUs with glibc, which seems to work.

  This resolves #442.

* Reduce memory requirements for regression tests. (Jason Evans, 2016-10-28; 3 files, -35/+55)

  This is intended to drop memory usage to a level that AppVeyor test instances can handle. This resolves #393.

* Periodically purge in memory-intensive integration tests. (Jason Evans, 2016-10-28; 1 file, -0/+7)

  This resolves #393.

* Periodically purge in memory-intensive integration tests. (Jason Evans, 2016-10-28; 3 files, -6/+27)

  This resolves #393.

* Fix over-sized allocation of rtree leaf nodes. (Jason Evans, 2016-10-28; 1 file, -1/+1)

  Use the correct level metadata when allocating child nodes, so that leaf nodes don't end up over-sized (2^16 elements vs. 2^4 elements).

* Uniformly cast mallctl[bymib]() oldp/newp arguments to (void *). (Jason Evans, 2016-10-28; 25 files, -317/+358)

  This avoids warnings in some cases, and is otherwise generally good hygiene.

* Explicitly cast negative constants meant for use as unsigned. (Jason Evans, 2016-10-28; 1 file, -3/+5)

* Add cast to silence (harmless) conversion warning. (Jason Evans, 2016-10-28; 1 file, -1/+1)

* Avoid negation of unsigned numbers. (Jason Evans, 2016-10-28; 1 file, -2/+2)

  Rather than relying on two's complement negation for alignment mask generation, use bitwise not and addition. This dodges warnings from MSVC, and should be strength-reduced by compiler optimization anyway.

* Only link with libm (-lm) if necessary. (Jason Evans, 2016-10-28; 2 files, -6/+16)

  This fixes warnings when building with MSVC.

* Only use --whole-archive with gcc. (Jason Evans, 2016-10-28; 3 files, -3/+7)

  Conditionalize use of --whole-archive on the platform plus compiler, rather than on the ABI. This fixes a regression caused by 7b24c6e5570062495243f1e55131b395adb31e33 (Use --whole-archive when linking integration tests on MinGW.).

* Do not force lazy lock on Windows. (Jason Evans, 2016-10-27; 1 file, -1/+0)

  This reverts 13473c7c66a81a4dc1cf11a97e9c8b1dbb785b64, which was intended to work around bootstrapping issues when linking statically. However, it actually causes problems in various other configurations, so this reversion may force a future fix for the underlying problem, if it still exists.

* Use --whole-archive when linking integration tests on MinGW. (Jason Evans, 2016-10-26; 1 file, -1/+10)

  Prior to this change, the malloc_conf weak symbol provided by the jemalloc dynamic library was always used, even if the application provided its own malloc_conf symbol. Use the --whole-archive linker option to allow the weak symbol to be overridden.

* Do not (recursively) allocate within tsd_fetch(). (Jason Evans, 2016-10-21; 13 files, -132/+172)

  Refactor tsd so that tsdn_fetch() does not trigger allocation, since allocation could cause infinite recursion. This resolves #458.

* Make dss operations lockless. (Jason Evans, 2016-10-13; 11 files, -147/+131)

  Rather than protecting dss operations with a mutex, use atomic operations. This has negligible impact on synchronization overhead during typical dss allocation, but is a substantial improvement for extent_in_dss() and the newly added extent_dss_mergeable(), which can be called multiple times during extent deallocations.

  This change also has the advantage of avoiding tsd in deallocation paths associated with purging, which resolves potential deadlocks during thread exit due to attempted tsd resurrection.

  This resolves #425.

* Add/use adaptive spinning. (Jason Evans, 2016-10-13; 6 files, -2/+66)

  Add spin_t and spin_{init,adaptive}(), which provide a simple abstraction for adaptive spinning. Adaptively spin during busy waits in bootstrapping and rtree node initialization.

* Remove all vestiges of chunks. (Jason Evans, 2016-10-12; 23 files, -270/+26)

  Remove mallctls:
  - opt.lg_chunk
  - stats.cactive

  This resolves #464.

* Remove ratio-based purging. (Jason Evans, 2016-10-12; 11 files, -485/+38)

  Make decay-based purging the default (and only) mode. Remove associated mallctls:
  - opt.purge
  - opt.lg_dirty_mult
  - arena.<i>.lg_dirty_mult
  - arenas.lg_dirty_mult
  - stats.arenas.<i>.lg_dirty_mult

  This resolves #385.

* Fix and simplify decay-based purging. (Jason Evans, 2016-10-11; 2 files, -69/+69)

  Simplify decay-based purging attempts to only be triggered when the epoch is advanced, rather than every time purgeable memory increases. In a correctly functioning system (not previously the case; see below), this only causes a behavior difference if, during subsequent purge attempts, the least recently used (LRU) purgeable memory extent is initially too large to be purged, but that memory is reused between attempts and one or more of the next LRU purgeable memory extents are small enough to be purged. In practice this is an arbitrary behavior change that is within the set of acceptable behaviors.

  As for the purging fix, assure that arena->decay.ndirty is recorded *after* the epoch advance and associated purging occur. Prior to this fix, it was possible for purging during epoch advance to cause a substantially underrepresentative (arena->ndirty - arena->decay.ndirty), i.e. the number of dirty pages attributed to the current epoch was too low, so that a series of unintended purges could result. This fix is also relevant in the context of the simplification described above, but the bug's impact would be limited to over-purging at epoch advances.

* Fix decay tests to all adapt to nstime_monotonic(). (Jason Evans, 2016-10-11; 1 file, -6/+9)

* Do not advance decay epoch when time goes backwards. (Jason Evans, 2016-10-11; 6 files, -6/+63)

  Instead, move the epoch backward in time. Additionally, add nstime_monotonic() and use it in debug builds to assert that time only goes backward if nstime_update() is using a non-monotonic time source.

* Refactor arena->decay_* into arena->decay.* (arena_decay_t). (Jason Evans, 2016-10-11; 2 files, -84/+91)

* Refine nstime_update(). (Jason Evans, 2016-10-10; 5 files, -38/+109)

  Add a missing #include <time.h>. The critical time facilities appear to have been transitively included via unistd.h and sys/time.h, but in principle this omission was capable of causing clock_gettime(CLOCK_MONOTONIC, ...) to be overlooked in favor of gettimeofday(), which in turn could cause spurious non-monotonic time updates.

  Refactor nstime_get() out of nstime_update() and add configure tests for all variants.

  Add CLOCK_MONOTONIC_RAW support (Linux-specific) and mach_absolute_time() support (OS X-specific).

  Do not fall back to clock_gettime(CLOCK_REALTIME, ...). This was a fragile Linux-specific workaround, which we're unlikely to use at all now that clock_gettime(CLOCK_MONOTONIC_RAW, ...) is supported, and if we have no choice besides non-monotonic clocks, gettimeofday() is only incrementally worse.

* Reduce "thread.arena" mallctl contention. (Jason Evans, 2016-10-04; 1 file, -3/+1)

  This resolves #460.

* Remove a size class assertion from extent_size_quantize_floor(). (Jason Evans, 2016-10-03; 1 file, -1/+0)

  Extent coalescence can result in legitimate calls to extent_size_quantize_floor() with size larger than LARGE_MAXCLASS.

* Fix size class overflow bugs. (Jason Evans, 2016-10-03; 4 files, -8/+30)

  Avoid calling s2u() on raw extent sizes in extent_recycle(). Clamp psz2ind() (implemented as psz2ind_clamp()) when inserting into, or removing from, size-segregated extent heaps.

* Verify extent hook functions receive correct extent_hooks pointer. (Jason Evans, 2016-09-29; 1 file, -17/+52)