...
* Dodge 32-bit-clang-specific backtracing failure. (Jason Evans, 2017-02-28; 1 file, -0/+4)
  This disables run_tests.sh configurations that use the combination of 32-bit clang and heap profiling.
* Put -D_REENTRANT in CPPFLAGS rather than CFLAGS. (Jason Evans, 2017-02-28; 1 file, -1/+1)
  This regression was introduced by 194d6f9de8ff92841b67f38a2a6a06818e3240dd (Restructure *CFLAGS/*CXXFLAGS configuration.).
* Fix {allocated,nmalloc,ndalloc,nrequests}_large stats regression. (Jason Evans, 2017-02-27; 2 files, -15/+3)
  This fixes a regression introduced by d433471f581ca50583c7a99f9802f7388f81aa36 (Derive {allocated,nmalloc,ndalloc,nrequests}_large stats.).
* Tidy up extent quantization. (Jason Evans, 2017-02-27; 2 files, -25/+5)
  Remove obsolete unit test scaffolding for extent quantization. Remove redundant assertions. Add an assertion to extents_first_best_fit_locked() that should help prevent aligned allocation regressions.
* Update a comment. (Jason Evans, 2017-02-26; 1 file, -4/+4)
* Get rid of witness in malloc_mutex_t when !(configured w/ debug). (Qi Wang, 2017-02-24; 3 files, -14/+34)
  We don't touch witness at all when config_debug == false. Let's only pay the memory cost in malloc_mutex_s when needed. Note that when !config_debug, we keep the field in a union so that we don't have to do #ifdefs in multiple places.
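The union approach can be sketched roughly as follows. This is a hypothetical layout with a stand-in `witness_t`, not jemalloc's exact structure: in non-debug builds the witness member overlays the lock storage in an anonymous union, which is safe only because non-debug code never touches it, so the struct pays no extra memory while `mutex->witness` remains valid in source without scattering #ifdefs.

```c
#include <pthread.h>

typedef struct {
	const char	*name;	/* stand-in for jemalloc's real witness_t */
	unsigned	rank;
} witness_t;

#ifdef JEMALLOC_DEBUG
/* Debug build: witness is a real field that is read and written. */
typedef struct {
	pthread_mutex_t	lock;
	witness_t	witness;
} malloc_mutex_t;
#else
/*
 * Non-debug build: witness shares storage with the lock.  No code path
 * touches witness when config_debug is false, so the overlay is never
 * observed, and sizeof(malloc_mutex_t) does not grow.
 */
typedef struct {
	union {
		pthread_mutex_t	lock;
		witness_t	witness;
	};
} malloc_mutex_t;
#endif
```

In the non-debug variant the struct is exactly the size of the lock (the witness stand-in is smaller), which is the memory saving the commit describes.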
* Use MALLOC_CONF rather than malloc_conf for tests. (Jason Evans, 2017-02-23; 35 files, -80/+119)
  malloc_conf does not reliably work with MSVC, which complains of "inconsistent dll linkage", i.e. its inability to support the application overriding malloc_conf when dynamically linking/loading. Work around this limitation by adding test harness support for per test shell script sourcing, and converting all tests to use MALLOC_CONF instead of malloc_conf.
* Remove remainder of mb (memory barrier). (Jason Evans, 2017-02-22; 3 files, -4/+0)
  This complements 94c5d22a4da7844d0bdc5b370e47b1ba14268af2 (Remove mb.h, which is unused).
* Avoid -lgcc for heap profiling if unwind.h is missing. (Jason Evans, 2017-02-21; 1 file, -1/+3)
  This removes an unneeded library dependency when falling back to intrinsics-based backtracing (or failing to enable heap profiling at all).
* Remove obsolete arena_maybe_purge() call. (Jason Evans, 2017-02-21; 1 file, -4/+0)
  Remove a call to arena_maybe_purge() that was necessary for ratio-based purging, but is obsolete in the context of decay-based purging.
* Move arena_basic_stats_merge() prototype (hygienic cleanup). (Jason Evans, 2017-02-21; 1 file, -3/+3)
* Disable coalescing of cached extents. (Jason Evans, 2017-02-17; 4 files, -24/+43)
  Extent splitting and coalescing is a major component of large allocation overhead, and disabling coalescing of cached extents provides a simple and effective hysteresis mechanism. Once two-phase purging is implemented, it will probably make sense to leave coalescing disabled for the first phase, but coalesce during the second phase.
* Optimize extent coalescing. (Jason Evans, 2017-02-17; 1 file, -20/+23)
  Refactor extent_can_coalesce(), extent_coalesce(), and extent_record() to avoid needlessly repeating extent [de]activation operations.
* Fix arena->stats.mapped accounting. (Jason Evans, 2017-02-16; 4 files, -26/+61)
  Mapped memory increases when extent_alloc_wrapper() succeeds, and decreases when extent_dalloc_wrapper() is called (during purging).
* Synchronize arena->decay with arena->decay.mtx. (Jason Evans, 2017-02-16; 4 files, -33/+35)
  This removes the last use of arena->lock.
* Derive {allocated,nmalloc,ndalloc,nrequests}_large stats. (Jason Evans, 2017-02-16; 2 files, -26/+27)
  This mildly reduces stats update overhead during normal operation.
* Synchronize arena->tcache_ql with arena->tcache_ql_mtx. (Jason Evans, 2017-02-16; 5 files, -22/+32)
  This replaces arena->lock synchronization.
* Convert arena->stats synchronization to atomics. (Jason Evans, 2017-02-16; 9 files, -228/+326)
* Convert arena->prof_accumbytes synchronization to atomics. (Jason Evans, 2017-02-16; 15 files, -59/+128)
* Convert arena->dss_prec synchronization to atomics. (Jason Evans, 2017-02-16; 4 files, -17/+10)
* Do not generate unused tsd_*_[gs]et() functions. (Jason Evans, 2017-02-13; 4 files, -33/+31)
  This avoids a gcc diagnostic note:
    note: The ABI for passing parameters with 64-byte alignment has changed in GCC 4.6
  This note relates to the cacheline alignment of rtree_ctx_t, which was introduced by 4a346f55939af4f200121cc4454089592d952f18 (Replace rtree path cache with LRU cache.).
* Fix extent_alloc_dss() regression. (Jason Evans, 2017-02-10; 1 file, -19/+29)
  Fix extent_alloc_dss() to account for bytes that are not a multiple of the page size. This regression was introduced by 577d4572b0821a15e5370f9bf566d884b7cf707c (Make dss operations lockless.), which was first released in 4.3.0.
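Accounting for bytes that are not a page multiple comes down to rounding sizes up to a page boundary. A minimal sketch of the standard power-of-two rounding (the macro names mirror jemalloc's conventions, but the page size here is an assumption):

```c
#include <stddef.h>

#define LG_PAGE 12			/* assume 4 KiB pages */
#define PAGE ((size_t)1 << LG_PAGE)
#define PAGE_MASK (PAGE - 1)

/* Round s up to the nearest page boundary; s need not be page-aligned. */
#define PAGE_CEILING(s) (((s) + PAGE_MASK) & ~PAGE_MASK)
```

Any allocation size that is not already a page multiple must be charged as the next full page, or mapped-memory accounting drifts.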
* Fix rtree_subkey() regression. (Jason Evans, 2017-02-10; 1 file, -1/+1)
  Fix rtree_subkey() to use uintptr_t rather than unsigned for key bitmasking. This regression was introduced by 4a346f55939af4f200121cc4454089592d952f18 (Replace rtree path cache with LRU cache.).
* Enable mutex witnesses even when !isthreaded. (Jason Evans, 2017-02-10; 1 file, -9/+5)
  This fixes interactions with witness_assert_depth[_to_rank](), which was added in d0e93ada51e20f4ae394ff4dbdcf96182767c89c (Add witness_assert_depth[_to_rank]().).
* Spin adaptively in rtree_elm_acquire(). (Jason Evans, 2017-02-09; 1 file, -10/+11)
* Enhance spin_adaptive() to yield after several iterations. (Jason Evans, 2017-02-09; 3 files, -6/+28)
  This avoids worst case behavior if e.g. another thread is preempted while owning the resource the spinning thread is waiting for.
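The adaptive-spin idea can be sketched as follows (hypothetical cutoff constant and a simplified pause loop, not the exact jemalloc implementation): busy-wait with an exponentially growing pause for the first few rounds, then fall back to yielding the CPU so a preempted resource owner gets a chance to run.

```c
#include <sched.h>

#define SPIN_LIMIT 5	/* assumed number of busy-wait rounds before yielding */

typedef struct {
	unsigned	iteration;
} spin_t;

#define SPIN_INITIALIZER {0}

void
spin_adaptive(spin_t *spin) {
	if (spin->iteration < SPIN_LIMIT) {
		/* Busy-wait, doubling the pause each round. */
		volatile unsigned i;
		for (i = 0; i < (1U << spin->iteration); i++) {
			/* spin */
		}
		spin->iteration++;
	} else {
		/* Stop burning CPU; let a preempted lock holder run. */
		sched_yield();
	}
}
```

Without the yield, a spinner on a single CPU could burn its whole timeslice waiting for a thread that cannot run, which is the worst case the commit message describes.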
* Replace spin_init() with SPIN_INITIALIZER. (Jason Evans, 2017-02-09; 5 files, -12/+4)
* Remove rtree support for 0 (NULL) keys. (Jason Evans, 2017-02-09; 3 files, -45/+43)
  NULL can never actually be inserted in practice, and removing support allows a branch to be removed from the fast path.
* Determine rtree levels at compile time. (Jason Evans, 2017-02-09; 9 files, -272/+248)
  Rather than dynamically building a table to aid per level computations, define a constant table at compile time. Omit both high and low insignificant bits. Use one to three tree levels, depending on the number of significant bits.
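The level-count computation can be illustrated as below. The parameters are assumptions for illustration (a 48-bit virtual address space, 4 KiB pages, and a per-level fanout); the real constants are compile-time definitions in the rtree headers:

```c
/*
 * Significant key bits = virtual-address bits minus the low page-offset
 * bits.  The tree height is the ceiling of significant bits over bits
 * consumed per level, clamped to the 1..3 range described above.
 */
static unsigned
rtree_height(unsigned lg_vaddr, unsigned lg_page, unsigned bits_per_level) {
	unsigned significant = lg_vaddr - lg_page;	/* e.g. 48 - 12 = 36 */
	unsigned h = (significant + bits_per_level - 1) / bits_per_level;
	return (h < 1) ? 1 : ((h > 3) ? 3 : h);
}
```

Because every input is a compile-time constant, the same arithmetic can be evaluated by the preprocessor, eliminating the dynamically built per-level table.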
* Remove rtree leading 0 bit optimization. (Jason Evans, 2017-02-09; 2 files, -84/+16)
  A subsequent change instead ignores insignificant high bits.
* Make non-essential inline rtree functions static functions. (Jason Evans, 2017-02-09; 4 files, -119/+85)
* Split rtree_elm_lookup_hard() out of rtree_elm_lookup(). (Jason Evans, 2017-02-09; 4 files, -101/+111)
  Anything but a hit in the first element of the lookup cache is expensive enough to negate the benefits of inlining.
* Replace rtree path cache with LRU cache. (Jason Evans, 2017-02-09; 4 files, -124/+108)
  Rework rtree_ctx_t to encapsulate an rtree leaf LRU lookup cache rather than a single-path element lookup cache. The replacement is logically much simpler, as well as slightly faster in the fast path case and less prone to degraded performance during non-trivial sequences of lookups.
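A leaf LRU lookup cache of this kind can be sketched as a small move-to-front array (hypothetical types and cache size; the real rtree_ctx_t differs): a hit in slot 0 is the fast path, and any other hit shifts entries so the most recently used leaf stays first.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define CACHE_SLOTS 8	/* assumed cache size */

typedef struct {
	uintptr_t	key;	/* leaf key; 0 marks an empty slot */
	void		*leaf;
} cache_elm_t;

typedef struct {
	cache_elm_t	cache[CACHE_SLOTS];
} lookup_ctx_t;

/* Return the cached leaf for key, or NULL on a miss; hits move to front. */
void *
cache_lookup(lookup_ctx_t *ctx, uintptr_t key) {
	for (unsigned i = 0; i < CACHE_SLOTS; i++) {
		if (key != 0 && ctx->cache[i].key == key) {
			cache_elm_t hit = ctx->cache[i];
			/* Move-to-front: shift slots [0, i) down by one. */
			memmove(&ctx->cache[1], &ctx->cache[0],
			    i * sizeof(cache_elm_t));
			ctx->cache[0] = hit;
			return hit.leaf;
		}
	}
	return NULL;	/* miss: the slow path would walk the tree */
}

/* Insert at the front, evicting the least recently used slot. */
void
cache_insert(lookup_ctx_t *ctx, uintptr_t key, void *leaf) {
	memmove(&ctx->cache[1], &ctx->cache[0],
	    (CACHE_SLOTS - 1) * sizeof(cache_elm_t));
	ctx->cache[0].key = key;
	ctx->cache[0].leaf = leaf;
}
```

Unlike a single-path cache, a sequence of lookups alternating between a handful of leaves keeps hitting here, which is why the LRU variant degrades less under non-trivial lookup patterns.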
* Optimize a branch out of rtree_read() if !dependent. (Jason Evans, 2017-02-09; 1 file, -1/+1)
* Conditionalize lg_tcache_max use on JEMALLOC_TCACHE. (Jason Evans, 2017-02-07; 1 file, -1/+5)
* Fix extent_record(). (Jason Evans, 2017-02-07; 1 file, -18/+33)
  Read adjacent rtree elements while holding element locks, since the extents mutex only protects against relevant like-state extent mutation. Fix management of the 'coalesced' loop state variable to merge forward/backward results, rather than overwriting the result of forward coalescing if attempting to coalesce backward. In practice this caused no correctness issues, but could cause extra iterations in rare cases. These regressions were introduced by d27f29b468ae3e9d2b1da4a9880351d76e5a1662 (Disentangle arena and extent locking.).
* Fix a race in extent_grow_retained(). (Jason Evans, 2017-02-04; 1 file, -9/+14)
  Set extent as active prior to registration so that other threads can't modify it in the absence of locking. This regression was introduced by d27f29b468ae3e9d2b1da4a9880351d76e5a1662 (Disentangle arena and extent locking.), via non-obvious means. Removal of extents_mtx protection during extent_grow_retained() execution opened up the race, but in the presence of that locking, the code was safe. This resolves #599.
* Optimize compute_size_with_overflow(). (Jason Evans, 2017-02-04; 1 file, -5/+16)
  Do not check for overflow unless it is actually a possibility.
* Fix compute_size_with_overflow(). (Jason Evans, 2017-02-04; 1 file, -1/+1)
  Fix compute_size_with_overflow() to use a high_bits mask that has the high bits set, rather than the low bits. This regression was introduced by 5154ff32ee8c37bacb6afd8a07b923eb33228357 (Unify the allocation paths).
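The optimization and the fix from the two entries above can be sketched together (hypothetical function name; the real compute_size_with_overflow takes additional flag bits): build a mask with the HIGH half of the word's bits set; if neither operand has any of those bits set, the product cannot overflow and the division-based check is skipped.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Return true iff num * size overflows; *result holds the wrapped product. */
static bool
size_mul_overflow(size_t num, size_t size, size_t *result) {
	/*
	 * High half of the bits set, e.g. 0xffffffff00000000 on 64-bit.
	 * (The regression being fixed had the LOW bits set instead, which
	 * made the fast path fire for exactly the wrong operands.)
	 */
	const size_t high_bits = SIZE_MAX << (sizeof(size_t) * 4);
	*result = num * size;
	if (((num | size) & high_bits) == 0) {
		/* Both operands fit in a half-word: product fits in a word. */
		return false;
	}
	/* Overflow is possible: fall back to the division check. */
	return num != 0 && *result / num != size;
}
```

The common case (both operands small) costs one OR, one AND, and one branch, with no division at all.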
* Disentangle arena and extent locking. (Jason Evans, 2017-02-02; 19 files, -645/+767)
  Refactor arena and extent locking protocols such that arena and extent locks are never held when calling into the extent_*_wrapper() API. This requires extra care during purging, since the arena lock no longer protects the inner purging logic. It also requires extra care to protect extents from being merged with adjacent extents.
  Convert extent_t's 'active' flag to an enumerated 'state', so that retained extents are explicitly marked as such, rather than depending on ring linkage state.
  Refactor the extent collections (and their synchronization) for cached and retained extents into extents_t. Incorporate LRU functionality to support purging. Incorporate page count accounting, which replaces arena->ndirty and arena->stats.retained.
  Assert that no core locks are held when entering any internal [de]allocation functions. This is in addition to existing assertions that no locks are held when entering external [de]allocation functions.
  Audit and document synchronization protocols for all arena_t fields.
  This fixes a potential deadlock due to recursive allocation during gdump, in a similar fashion to b49c649bc18fff4bd10a1c8adbaf1f25f6453cb6 (Fix lock order reversal during gdump.), but with a necessarily much broader code impact.
* Fix/refactor tcaches synchronization. (Jason Evans, 2017-02-02; 6 files, -29/+101)
  Synchronize tcaches with tcaches_mtx rather than ctl_mtx. Add missing synchronization for tcache flushing. This bug was introduced by 1cb181ed632e7573fb4eab194e4d216867222d27 (Implement explicit tcache support.), which was first released in 4.0.0.
* Add witness_assert_depth[_to_rank](). (Jason Evans, 2017-02-02; 6 files, -26/+84)
  This makes it possible to make lock state assertions about precisely which locks are held.
* Synchronize extent_grow_next accesses. (Jason Evans, 2017-02-02; 1 file, -3/+15)
  This should have been part of 411697adcda2fd75e135cdcdafb95f2bd295dc7f (Use exponential series to size extents.), which introduced extent_grow_next.
* Call prof_gctx_create() without owning bt2gctx_mtx. (Jason Evans, 2017-02-02; 1 file, -12/+29)
  This reduces the probability of allocating (and thereby indirectly making a system call) while owning bt2gctx_mtx. Unfortunately it is an incomplete solution, because ckh insertion/deletion can also allocate/deallocate, which requires more extensive changes to address.
* Conditionalize prof fork handling on config_prof. (Jason Evans, 2017-02-02; 1 file, -4/+4)
  This allows the compiler to completely remove dead code.
* Handle race in stats_arena_bins_print. (Qi Wang, 2017-02-01; 1 file, -2/+11)
  When multiple threads call stats_print, a race can happen as we read the counters in separate mallctl calls, and the removed assertion could fail when other operations happened in between the mallctl calls. For simplicity, output "race" in the utilization field in this case.
* Silence harmless warnings discovered via run_tests.sh. (Jason Evans, 2017-02-01; 1 file, -2/+5)
* CI: Run --enable-debug builds on windows. (David Goldblatt, 2017-02-01; 1 file, -1/+15)
  This will hopefully catch some windows-specific bugs.
* Introduce scripts to run all possible tests. (David Goldblatt, 2017-01-31; 2 files, -0/+45)
  In 6e7d0890 we added better travis continuous integration tests. This is nice, but has two problems:
  - We run only a subset of interesting tests.
  - The travis builds can take hours to give us back results (especially on OS X).
  This adds scripts/gen_run_tests.py, and its output, run_tests.sh, which builds and runs a larger portion of possible configurations on the local machine. While a travis run takes several hours to complete, I can run these scripts on my (OS X) laptop and (Linux) devserver, and get a more exhaustive set of results back in around 10 minutes.
* Beef up travis CI integration testing. (David Goldblatt, 2017-01-27; 2 files, -11/+159)
  Introduces gen_travis.py, which generates .travis.yml, and updates .travis.yml to be the generated version. The travis build matrix approach doesn't play well with mixing and matching various different environment settings, so we generate every build explicitly, rather than letting them do it for us. To avoid abusing travis resources (and save us time waiting for CI results), we don't test every possible combination of options; we only check up to 2 unusual settings at a time.