path: root/test
Commit log (each entry: commit message, author, date; files changed, lines -/+)
...
* Make *allocx() size class overflow behavior defined. (Jason Evans, 2016-02-25; 3 files, -2/+137)
  Limit supported size and alignment to HUGE_MAXCLASS, which in turn is now limited to be less than PTRDIFF_MAX. This resolves #278 and #295.
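For context, a minimal sketch of what the now-defined behavior looks like from the public API; the exact HUGE_MAXCLASS bound is internal, so the request size below is simply one that no size class could satisfy:

```c
#include <stdint.h>
#include <stdio.h>
#include <jemalloc/jemalloc.h>

int main(void) {
	/* A request this large necessarily exceeds HUGE_MAXCLASS, so with
	 * this change mallocx() must fail cleanly instead of letting the
	 * size-class computation overflow. */
	void *p = mallocx(SIZE_MAX - 4096, 0);
	if (p == NULL)
		printf("oversized request rejected as expected\n");
	else
		dallocx(p, 0);
	return 0;
}
```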
* Silence miscellaneous 64-to-32-bit data loss warnings. (Jason Evans, 2016-02-24; 2 files, -7/+7)
* Make opt_narenas unsigned rather than size_t. (Jason Evans, 2016-02-24; 1 file, -1/+1)
* Fix Windows build issues (Dmitri Smirnov, 2016-02-24; 2 files, -4/+0)
  This resolves #333.
* Remove rbt_nil (Dave Watson, 2016-02-24; 1 file, -19/+22)
  Since this is an intrusive tree, rbt_nil is the whole size of the node and can be quite large. For example, miscelm is ~100 bytes.
* Use table lookup for run_quantize_{floor,ceil}(). (Jason Evans, 2016-02-23; 1 file, -10/+2)
  Reduce run quantization overhead by generating lookup tables during bootstrapping, and using the tables for all subsequent run quantization.
* Test run quantization. (Jason Evans, 2016-02-22; 1 file, -0/+157)
  Also rename run_quantize_*() to improve clarity. These tests demonstrate that run_quantize_ceil() is flawed.
* Refactor time_* into nstime_*. (Jason Evans, 2016-02-22; 7 files, -260/+257)
  Use a single uint64_t in nstime_t to store nanoseconds rather than using struct timespec. This reduces fragility around conversions between long and uint64_t, especially missing casts that only cause problems on 32-bit platforms.
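nstime_t is internal to jemalloc, but the representation change is easy to illustrate; the sketch below mirrors the commit's description and is not the actual header:

```c
#include <stdint.h>

#define NANOSECONDS_PER_SECOND 1000000000ULL

/* Illustrative only: a time type backed by a single uint64_t nanosecond
 * count, so arithmetic never mixes long/time_t/uint64_t widths. */
typedef struct {
	uint64_t ns;
} nstime_t;

static inline void nstime_init2(nstime_t *t, uint64_t sec, uint64_t nsec) {
	t->ns = sec * NANOSECONDS_PER_SECOND + nsec;
}

static inline uint64_t nstime_sec(const nstime_t *t) {
	return t->ns / NANOSECONDS_PER_SECOND;
}

static inline void nstime_add(nstime_t *t, const nstime_t *addend) {
	t->ns += addend->ns;
}
```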
* Handle unaligned keys in hash(). (Jason Evans, 2016-02-20; 1 file, -3/+16)
  Reported by Christopher Ferris <cferris@google.com>.
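The standard remedy for this class of strict-alignment bug is to read potentially misaligned words through memcpy(), which compilers lower to a plain load where that is safe; a generic sketch of the technique (not the actual hash.h change):

```c
#include <stdint.h>
#include <string.h>

/* Read a 64-bit value from a possibly unaligned pointer without
 * invoking undefined behavior. */
static inline uint64_t
load_u64_unaligned(const void *p) {
	uint64_t v;
	memcpy(&v, p, sizeof(v));
	return v;
}
```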
* Increase test coverage in test_decay_ticks. (Jason Evans, 2016-02-20; 1 file, -123/+98)
* Implement decay-based unused dirty page purging. (Jason Evans, 2016-02-20; 2 files, -0/+465)
  This is an alternative to the existing ratio-based unused dirty page purging, and is intended to eventually become the sole purging mechanism.
  Add mallctls:
  - opt.purge
  - opt.decay_time
  - arena.<i>.decay
  - arena.<i>.decay_time
  - arenas.decay_time
  - stats.arenas.<i>.decay_time
  This resolves #325.
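A sketch of how the new knobs are reachable from application code via mallctl(); the value type (ssize_t seconds, with -1 meaning dirty pages never decay) and the writability of "arenas.decay_time" are assumptions based on the option names:

```c
#include <stdio.h>
#include <sys/types.h>
#include <jemalloc/jemalloc.h>

int main(void) {
	ssize_t decay_time;
	size_t sz = sizeof(decay_time);

	/* Read the configured default decay time (assumed to be ssize_t). */
	if (mallctl("opt.decay_time", &decay_time, &sz, NULL, 0) == 0)
		printf("opt.decay_time: %zd\n", decay_time);

	/* Assumed writable: change the decay time used for arenas created
	 * afterward. */
	ssize_t new_decay = 30;
	mallctl("arenas.decay_time", NULL, NULL, &new_decay, sizeof(new_decay));
	return 0;
}
```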
* Implement smoothstep table generation. (Jason Evans, 2016-02-20; 1 file, -0/+106)
  Check in a generated smootherstep table as smoothstep.h rather than generating it at configure time, since not all systems (e.g. Windows) have dc.
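For reference, the table tabulates the smootherstep polynomial smootherstep(t) = 6t^5 - 15t^4 + 10t^3 over [0, 1]; a rough sketch of an equivalent generator follows (the checked-in smoothstep.h stores fixed-point entries, and the step count below is an assumption):

```c
#include <stdio.h>

/* smootherstep(t) = 6t^5 - 15t^4 + 10t^3, evaluated via Horner's rule. */
static double
smootherstep(double t) {
	return t * t * t * (t * (t * 6.0 - 15.0) + 10.0);
}

int main(void) {
	const int nsteps = 200; /* table size is an assumption */
	for (int i = 1; i <= nsteps; i++) {
		double t = (double)i / nsteps;
		printf("%.17g\n", smootherstep(t));
	}
	return 0;
}
```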
* Refactor prng* from cpp macros into inline functions. (Jason Evans, 2016-02-20; 2 files, -23/+114)
  Remove 32-bit variant, convert prng64() to prng_lg_range(), and add prng_range().
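A sketch of what the inline replacements for the old prng64() macro might look like; the LCG constants (Knuth's MMIX multiplier/increment) and the retry loop are illustrative assumptions rather than jemalloc's exact code:

```c
#include <assert.h>
#include <stdint.h>

/* Advance a 64-bit LCG and return its top lg_range bits, which are the
 * best-distributed bits of an LCG. */
static inline uint64_t
prng_lg_range(uint64_t *state, unsigned lg_range) {
	assert(lg_range > 0 && lg_range <= 64);
	*state = *state * 6364136223846793005ULL + 1442695040888963407ULL;
	return *state >> (64 - lg_range);
}

/* Return a value in [0, range) by drawing just enough bits and retrying
 * until the result fits. */
static inline uint64_t
prng_range(uint64_t *state, uint64_t range) {
	assert(range > 0);
	unsigned lg_range = 64;
	while (lg_range > 1 && (range - 1) >> (lg_range - 1) == 0)
		lg_range--;
	uint64_t r;
	do {
		r = prng_lg_range(state, lg_range);
	} while (r >= range);
	return r;
}
```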
* Implement ticker. (Jason Evans, 2016-02-20; 1 file, -0/+76)
  Implement ticker, which provides a simple API for ticking off some number of events before indicating that the ticker has hit its limit.
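A sketch of the ticker idea described above; the type and function names follow the commit message, but the exact signatures are assumptions:

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
	int32_t tick;   /* ticks remaining until the limit is hit */
	int32_t nticks; /* reload value */
} ticker_t;

static inline void
ticker_init(ticker_t *ticker, int32_t nticks) {
	ticker->tick = nticks;
	ticker->nticks = nticks;
}

/* Count one event; return true when the ticker hits its limit, and
 * rearm it for the next period. */
static inline bool
ticker_tick(ticker_t *ticker) {
	if (--ticker->tick == 0) {
		ticker->tick = ticker->nticks;
		return true;
	}
	return false;
}
```

The decay-based purging change listed above paces its periodic checks with a ticker of this kind (exercised by test_decay_ticks).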
* Flesh out time_*() API. (Jason Evans, 2016-02-20; 4 files, -49/+214)
* Add time_update(). (Cameron Evans, 2016-02-20; 1 file, -0/+23)
* Add --with-malloc-conf. (Jason Evans, 2016-02-20; 1 file, -16/+17)
  Add --with-malloc-conf, which makes it possible to embed a default options string during configuration.
* Fix test_stats_arenas_summary fragility. (Jason Evans, 2016-02-20; 1 file, -4/+4)
  Fix test_stats_arenas_summary to deallocate before asserting that purging must have happened.
* Don't rely on unpurged chunks in xallocx() test. (Jason Evans, 2016-02-20; 1 file, -20/+20)
* Add test for tree destruction (Joshua Kahn, 2015-11-09; 1 file, -1/+16)
* Allow const keys for lookup (Joshua Kahn, 2015-11-09; 1 file, -1/+1)
  Signed-off-by: Steve Dougherty <sdougherty@barracuda.com>
  This resolves #281.
* Fix intermittent xallocx() test failures. (Jason Evans, 2015-10-01; 1 file, -43/+65)
  Modify xallocx() tests that expect to expand in place to use a separate arena. This avoids the potential for interposed internal allocations from e.g. heap profile sampling to disrupt the tests. This resolves #286.
* Remove fragile xallocx() test case. (Jason Evans, 2015-09-25; 1 file, -9/+0)
  In addition to depending on map coalescing, the test depended on munmap() being disabled so that chunk recycling would always succeed.
* Make mallocx() OOM test more robust. (Jason Evans, 2015-09-24; 1 file, -3/+14)
  Make mallocx() OOM testing work correctly even on systems that can allocate the majority of virtual address space in a single contiguous region.
* Fix xallocx(..., MALLOCX_ZERO) bugs. (Jason Evans, 2015-09-24; 1 file, -1/+118)
  Zero all trailing bytes of large allocations when --enable-cache-oblivious configure option is enabled. This regression was introduced by 8a03cf039cd06f9fa6972711195055d865673966 (Implement cache index randomization for large allocations.).
  Zero trailing bytes of huge allocations when resizing from/to a size class that is not a multiple of the chunk size.
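A sketch of the usage pattern this fix protects; whether the resize actually happens in place depends on allocator internals, so the check below only inspects bytes that xallocx() reports as newly usable:

```c
#include <stdio.h>
#include <jemalloc/jemalloc.h>

int main(void) {
	unsigned char *p = mallocx(4096, MALLOCX_ZERO);
	if (p == NULL)
		return 1;
	size_t old_usize = sallocx(p, 0);

	/* Try to grow in place; MALLOCX_ZERO asks for any newly usable
	 * trailing bytes to be zeroed. */
	size_t new_usize = xallocx(p, 2 * 1024 * 1024, 0, MALLOCX_ZERO);
	for (size_t i = old_usize; i < new_usize; i++) {
		if (p[i] != 0) {
			printf("non-zero trailing byte at offset %zu\n", i);
			break;
		}
	}
	dallocx(p, 0);
	return 0;
}
```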
* Add mallocx() OOM tests. (Jason Evans, 2015-09-17; 1 file, -0/+70)
* Loosen expected xallocx() results. (Jason Evans, 2015-09-15; 1 file, -9/+9)
  Systems that do not support chunk split/merge cannot shrink/grow huge allocations in place.
* Add more xallocx() overflow tests. (Jason Evans, 2015-09-15; 1 file, -0/+64)
* Rename arena_maxclass to large_maxclass. (Jason Evans, 2015-09-12; 3 files, -11/+11)
  arena_maxclass is no longer an appropriate name, because arenas also manage huge allocations.
* Fix xallocx() bugs. (Jason Evans, 2015-09-12; 2 files, -2/+242)
  Fix xallocx() bugs related to the 'extra' parameter when specified as non-zero.
* Fix "prof.reset" mallctl-related corruption. (Jason Evans, 2015-09-10; 1 file, -15/+66)
  Fix heap profiling to distinguish among otherwise identical sample sites with interposed resets (triggered via the "prof.reset" mallctl). This bug could cause data structure corruption that would most likely result in a segfault.
* Optimize arena_prof_tctx_set(). (Jason Evans, 2015-09-02; 1 file, -17/+32)
  Optimize arena_prof_tctx_set() to avoid reading run metadata when deciding whether it's actually necessary to write.
* Don't purge junk filled chunks when shrinking huge allocations (Mike Hommey, 2015-08-28; 1 file, -0/+4)
  When junk filling is enabled, shrinking an allocation fills the bytes that were previously allocated but now aren't. Purging the chunk before doing that is just a waste of time. This resolves #260.
* Fix arenas_cache_cleanup(). (Christopher Ferris, 2015-08-21; 1 file, -0/+6)
  Fix arenas_cache_cleanup() to handle allocation/deallocation within the application's thread-specific data cleanup functions even after arenas_cache is torn down.
* Rename index_t to szind_t to avoid an existing type on Solaris. (Jason Evans, 2015-08-19; 1 file, -1/+1)
  This resolves #256.
* Fix test for MinGW. (Jason Evans, 2015-08-12; 1 file, -11/+15)
* Fix assertion in test. (Jason Evans, 2015-08-12; 1 file, -1/+1)
* Try to decommit new chunks. (Jason Evans, 2015-08-12; 1 file, -11/+14)
  Always leave decommit disabled on non-Windows systems.
* Add no-OOM assertions to test. (Jason Evans, 2015-08-07; 1 file, -6/+12)
* Implement chunk hook support for page run commit/decommit. (Jason Evans, 2015-08-07; 1 file, -18/+48)
  Cascade from decommit to purge when purging unused dirty pages, so that it is possible to decommit cleaned memory rather than just purging. For non-Windows debug builds, decommit runs rather than purging them, since this causes access of deallocated runs to segfault. This resolves #251.
* Generalize chunk management hooks. (Jason Evans, 2015-08-04; 1 file, -39/+177)
  Add the "arena.<i>.chunk_hooks" mallctl, which replaces and expands on the "arena.<i>.chunk.{alloc,dalloc,purge}" mallctls. The chunk hooks allow control over chunk allocation/deallocation, decommit/commit, purging, and splitting/merging, such that the application can rely on jemalloc's internal chunk caching and retaining functionality, yet implement a variety of chunk management mechanisms and policies.
  Merge the chunks_[sz]ad_{mmap,dss} red-black trees into chunks_[sz]ad_retained. This slightly reduces how hard jemalloc tries to honor the dss precedence setting; prior to this change the precedence setting was also consulted when recycling chunks.
  Fix chunk purging. Don't purge chunks in arena_purge_stashed(); instead deallocate them in arena_unstash_purged(), so that the dirty memory linkage remains valid until after the last time it is used.
  This resolves #176 and #201.
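A hedged sketch of reading the hook table through the new mallctl; treat the availability of chunk_hooks_t in the public header and its member names (which follow the jemalloc 4.x manual) as assumptions:

```c
#include <stdio.h>
#include <jemalloc/jemalloc.h>

int main(void) {
	/* chunk_hooks_t bundles the alloc/dalloc/commit/decommit/purge/
	 * split/merge hooks described above (assumed to be declared by
	 * jemalloc.h). */
	chunk_hooks_t hooks;
	size_t sz = sizeof(hooks);

	/* Read arena 0's current hooks; writing a modified struct back
	 * through the same mallctl would install custom hooks. */
	if (mallctl("arena.0.chunk_hooks", &hooks, &sz, NULL, 0) != 0) {
		fprintf(stderr, "arena.0.chunk_hooks lookup failed\n");
		return 1;
	}
	printf("purge hook is %s\n", hooks.purge != NULL ? "set" : "unset");
	return 0;
}
```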
* Implement support for non-coalescing maps on MinGW. (Jason Evans, 2015-07-25; 1 file, -3/+3)
  - Do not reallocate huge objects in place if the number of backing chunks would change.
  - Do not cache multi-chunk mappings.
  This resolves #213.
* Fix huge_ralloc_no_move() to succeed more often. (Jason Evans, 2015-07-25; 1 file, -2/+3)
  Fix huge_ralloc_no_move() to succeed if an allocation request results in the same usable size as the existing allocation, even if the request size is smaller than the usable size. This bug did not cause correctness issues, but it could cause unnecessary moves during reallocation.
* Fix MinGW-related portability issues. (Jason Evans, 2015-07-23; 9 files, -88/+88)
  Create and use FMT* macros that are equivalent to the PRI* macros that inttypes.h defines. This allows uniform use of the Unix-specific format specifiers, e.g. "%zu", as well as avoiding Windows-specific definitions of e.g. PRIu64.
  Add ffs()/ffsl() support for compiling with gcc.
  Extract compatibility definitions of ENOENT, EINVAL, EAGAIN, EPERM, ENOMEM, and ENORANGE into include/msvc_compat/windows_extra.h and use the file for tests as well as for core jemalloc code.
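The FMT* idea boils down to choosing each format-specifier string once per platform and then using it everywhere; the macro names and values below are illustrative stand-ins, not jemalloc's actual definitions:

```c
#include <stdio.h>

/* Illustrative stand-ins for the real per-platform definitions in
 * jemalloc's headers (names and values here are assumptions). */
#ifdef _WIN32
#  define FMTzu "Iu"   /* the MSVC runtime spells size_t as %Iu */
#else
#  define FMTzu "zu"
#endif

int main(void) {
	size_t s = 12345;
	/* One format string builds cleanly on both Windows and Unix. */
	printf("s=%" FMTzu "\n", s);
	return 0;
}
```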
* Fix a compilation error. (Jason Evans, 2015-07-22; 1 file, -8/+10)
  This regression was introduced by 1b0e4abbfdbcc1c1a71d1f617adb19951109bfce (Port mq_get() to MinGW.).
* Add JEMALLOC_FORMAT_PRINTF(). (Jason Evans, 2015-07-22; 2 files, -4/+4)
  Replace JEMALLOC_ATTR(format(printf, ...)) with JEMALLOC_FORMAT_PRINTF(), so that configuration feature tests can omit the attribute if it would cause extraneous compilation warnings.
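A minimal sketch of what such a macro amounts to; jemalloc decides availability via configure feature tests rather than the compile-time check shown here:

```c
#include <stdarg.h>
#include <stdio.h>

/* Emit the printf-format attribute only for compilers that accept it;
 * everywhere else the macro expands to nothing. */
#if defined(__GNUC__) || defined(__clang__)
#  define JEMALLOC_FORMAT_PRINTF(s, a) __attribute__((format(printf, s, a)))
#else
#  define JEMALLOC_FORMAT_PRINTF(s, a)
#endif

static void log_msg(const char *fmt, ...) JEMALLOC_FORMAT_PRINTF(1, 2);

static void
log_msg(const char *fmt, ...) {
	va_list ap;
	va_start(ap, fmt);
	vfprintf(stderr, fmt, ap);
	va_end(ap);
}

int main(void) {
	log_msg("answer: %d\n", 42); /* format string checked by the compiler */
	return 0;
}
```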
* Port mq_get() to MinGW. (Jason Evans, 2015-07-21; 2 files, -10/+36)
* Fix more MinGW build warnings. (Jason Evans, 2015-07-18; 4 files, -43/+46)
* Add the config.cache_oblivious mallctl. (Jason Evans, 2015-07-17; 1 file, -0/+1)
* Add timer support for Windows. (Jason Evans, 2015-07-13; 2 files, -10/+24)