path: root/test
Commit log (most recent first).  Each entry: commit message  (Author, Date; files changed, lines -/+)
* Make mallocx() OOM test more robust.  (Jason Evans, 2015-09-24; 1 file, -3/+14)

  Make mallocx() OOM testing work correctly even on systems that can
  allocate the majority of virtual address space in a single contiguous
  region.
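  A minimal sketch (illustrative, not the test itself) of the behavior
  being exercised: mallocx() reports OOM by returning NULL, and the
  process must remain usable afterward.

      #include <stdint.h>
      #include <stdio.h>
      #include <jemalloc/jemalloc.h>

      int main(void) {
          /* A near-SIZE_MAX request cannot be satisfied; mallocx()
           * must return NULL rather than crash or corrupt state. */
          void *p = mallocx(SIZE_MAX & ~(size_t)0xfff, 0);
          if (p == NULL)
              printf("OOM reported cleanly\n");
          else
              dallocx(p, 0);
          return 0;
      }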
* Fix xallocx(..., MALLOCX_ZERO) bugs.  (Jason Evans, 2015-09-24; 1 file, -1/+118)

  Zero all trailing bytes of large allocations when the
  --enable-cache-oblivious configure option is enabled.  This regression
  was introduced by 8a03cf039cd06f9fa6972711195055d865673966 (Implement
  cache index randomization for large allocations.).

  Zero trailing bytes of huge allocations when resizing from/to a size
  class that is not a multiple of the chunk size.
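  A sketch of the invariant the new tests check (sizes are
  illustrative): after an in-place resize with MALLOCX_ZERO, every byte
  of the resulting usable size must be zero.

      #include <assert.h>
      #include <jemalloc/jemalloc.h>

      int main(void) {
          unsigned char *p = mallocx(4096, MALLOCX_ZERO);
          if (p == NULL)
              return 1;
          /* Attempt an in-place grow; any newly usable trailing
           * bytes must be zero-filled. */
          size_t usize = xallocx(p, 8192, 0, MALLOCX_ZERO);
          for (size_t i = 0; i < usize; i++)
              assert(p[i] == 0);
          dallocx(p, 0);
          return 0;
      }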
* Add mallocx() OOM tests.  (Jason Evans, 2015-09-17; 1 file, -0/+70)
* Loosen expected xallocx() results.  (Jason Evans, 2015-09-15; 1 file, -9/+9)

  Systems that do not support chunk split/merge cannot shrink/grow huge
  allocations in place.
* Add more xallocx() overflow tests.  (Jason Evans, 2015-09-15; 1 file, -0/+64)
* Rename arena_maxclass to large_maxclass.  (Jason Evans, 2015-09-12; 3 files, -11/+11)

  arena_maxclass is no longer an appropriate name, because arenas also
  manage huge allocations.
* Fix xallocx() bugs.  (Jason Evans, 2015-09-12; 2 files, -2/+242)

  Fix xallocx() bugs related to the 'extra' parameter when specified as
  non-zero.
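  For context, a sketch of the 'extra' semantics (sizes are
  illustrative): xallocx() resizes in place only and returns the
  resulting usable size whether or not the resize succeeded.

      #include <stdio.h>
      #include <jemalloc/jemalloc.h>

      int main(void) {
          void *p = mallocx(4096, 0);
          if (p == NULL)
              return 1;
          /* Grow in place to at least 6144 bytes, opportunistically
           * up to 6144 + 2048; the allocation never moves. */
          size_t usize = xallocx(p, 6144, 2048, 0);
          printf("usable size is now %zu\n", usize);
          dallocx(p, 0);
          return 0;
      }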
* Fix "prof.reset" mallctl-related corruption.Jason Evans2015-09-101-15/+66
| | | | | | | Fix heap profiling to distinguish among otherwise identical sample sites with interposed resets (triggered via the "prof.reset" mallctl). This bug could cause data structure corruption that would most likely result in a segfault.
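  The trigger is the "prof.reset" mallctl; a sketch of invoking it
  (heap_profile_reset is a hypothetical wrapper, and a build configured
  with --enable-prof plus profiling active at startup is assumed):

      #include <jemalloc/jemalloc.h>

      /* Discard all current heap profile data. */
      static void heap_profile_reset(void) {
          mallctl("prof.reset", NULL, NULL, NULL, 0);
      }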
* Optimize arena_prof_tctx_set().  (Jason Evans, 2015-09-02; 1 file, -17/+32)

  Optimize arena_prof_tctx_set() to avoid reading run metadata when
  deciding whether it's actually necessary to write.
* Don't purge junk-filled chunks when shrinking huge allocations.  (Mike Hommey, 2015-08-28; 1 file, -0/+4)

  When junk filling is enabled, shrinking an allocation fills the bytes
  that were previously allocated but now aren't.  Purging the chunk
  before doing that is just a waste of time.

  This resolves #260.
* Fix arenas_cache_cleanup().  (Christopher Ferris, 2015-08-21; 1 file, -0/+6)

  Fix arenas_cache_cleanup() to handle allocation/deallocation within
  the application's thread-specific data cleanup functions even after
  arenas_cache is torn down.
* Rename index_t to szind_t to avoid an existing type on Solaris.  (Jason Evans, 2015-08-19; 1 file, -1/+1)

  This resolves #256.
* Fix test for MinGW.  (Jason Evans, 2015-08-12; 1 file, -11/+15)
* Fix assertion in test.  (Jason Evans, 2015-08-12; 1 file, -1/+1)
* Try to decommit new chunks.  (Jason Evans, 2015-08-12; 1 file, -11/+14)

  Always leave decommit disabled on non-Windows systems.
* Add no-OOM assertions to test.  (Jason Evans, 2015-08-07; 1 file, -6/+12)
* Implement chunk hook support for page run commit/decommit.  (Jason Evans, 2015-08-07; 1 file, -18/+48)

  Cascade from decommit to purge when purging unused dirty pages, so
  that it is possible to decommit cleaned memory rather than just
  purging.  For non-Windows debug builds, decommit runs rather than
  purging them, since this causes access of deallocated runs to
  segfault.

  This resolves #251.
* Generalize chunk management hooks.  (Jason Evans, 2015-08-04; 1 file, -39/+177)

  Add the "arena.<i>.chunk_hooks" mallctl, which replaces and expands on
  the "arena.<i>.chunk.{alloc,dalloc,purge}" mallctls.  The chunk hooks
  allow control over chunk allocation/deallocation, decommit/commit,
  purging, and splitting/merging, such that the application can rely on
  jemalloc's internal chunk caching and retaining functionality, yet
  implement a variety of chunk management mechanisms and policies.

  Merge the chunks_[sz]ad_{mmap,dss} red-black trees into
  chunks_[sz]ad_retained.  This slightly reduces how hard jemalloc tries
  to honor the dss precedence setting; prior to this change the
  precedence setting was also consulted when recycling chunks.

  Fix chunk purging.  Don't purge chunks in arena_purge_stashed();
  instead deallocate them in arena_unstash_purged(), so that the dirty
  memory linkage remains valid until after the last time it is used.

  This resolves #176 and #201.
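  A sketch of using the new mallctl to override a single hook, assuming
  the chunk_hooks_t layout documented for jemalloc 4.x (my_purge and
  install_purge_hook are hypothetical):

      #include <stdbool.h>
      #include <stddef.h>
      #include <jemalloc/jemalloc.h>

      /* Returning true means "not purged", so jemalloc falls back to
       * its default behavior. */
      static bool my_purge(void *chunk, size_t size, size_t offset,
          size_t length, unsigned arena_ind) {
          return true;
      }

      static void install_purge_hook(void) {
          chunk_hooks_t hooks;
          size_t sz = sizeof(hooks);
          /* Read-modify-write: keep defaults, override only purge. */
          if (mallctl("arena.0.chunk_hooks", &hooks, &sz, NULL, 0) != 0)
              return;
          hooks.purge = my_purge;
          mallctl("arena.0.chunk_hooks", NULL, NULL, &hooks,
              sizeof(hooks));
      }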
* Implement support for non-coalescing maps on MinGW.  (Jason Evans, 2015-07-25; 1 file, -3/+3)

  - Do not reallocate huge objects in place if the number of backing
    chunks would change.
  - Do not cache multi-chunk mappings.

  This resolves #213.
* Fix huge_ralloc_no_move() to succeed more often.  (Jason Evans, 2015-07-25; 1 file, -2/+3)

  Fix huge_ralloc_no_move() to succeed if an allocation request results
  in the same usable size as the existing allocation, even if the
  request size is smaller than the usable size.  This bug did not cause
  correctness issues, but it could cause unnecessary moves during
  reallocation.
* Fix MinGW-related portability issues.  (Jason Evans, 2015-07-23; 9 files, -88/+88)

  Create and use FMT* macros that are equivalent to the PRI* macros that
  inttypes.h defines.  This allows uniform use of the Unix-specific
  format specifiers, e.g. "%zu", as well as avoiding Windows-specific
  definitions of e.g. PRIu64.

  Add ffs()/ffsl() support for compiling with gcc.

  Extract compatibility definitions of ENOENT, EINVAL, EAGAIN, EPERM,
  ENOMEM, and ERANGE into include/msvc_compat/windows_extra.h and use
  the file for tests as well as for core jemalloc code.
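  A sketch of the idea behind the FMT* macros (the real definitions live
  in jemalloc's headers and cover more types; FMTzu here is
  illustrative): map one portable token to the platform's format
  modifier.

      #include <stdio.h>

      #ifdef _WIN32
      #  define FMTzu "Iu"   /* MSVC runtime spelling for size_t */
      #else
      #  define FMTzu "zu"   /* C99 spelling */
      #endif

      int main(void) {
          size_t n = 42;
          printf("n = %" FMTzu "\n", n);
          return 0;
      }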
* Fix a compilation error.  (Jason Evans, 2015-07-22; 1 file, -8/+10)

  This regression was introduced by
  1b0e4abbfdbcc1c1a71d1f617adb19951109bfce (Port mq_get() to MinGW.).
* Add JEMALLOC_FORMAT_PRINTF().  (Jason Evans, 2015-07-22; 2 files, -4/+4)

  Replace JEMALLOC_ATTR(format(printf, ...)) with
  JEMALLOC_FORMAT_PRINTF(), so that configuration feature tests can omit
  the attribute if it would cause extraneous compilation warnings.
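  A sketch of what such a macro reduces to when the compiler supports
  the attribute (the real definition is selected by configure; log_msg
  is a hypothetical function):

      #ifdef __GNUC__
      #  define JEMALLOC_FORMAT_PRINTF(s, i) \
             __attribute__((format(printf, s, i)))
      #else
      #  define JEMALLOC_FORMAT_PRINTF(s, i)
      #endif

      /* The compiler can now type-check format arguments at call
       * sites of log_msg(). */
      void log_msg(const char *fmt, ...) JEMALLOC_FORMAT_PRINTF(1, 2);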
* Port mq_get() to MinGW.  (Jason Evans, 2015-07-21; 2 files, -10/+36)
* Fix more MinGW build warnings.  (Jason Evans, 2015-07-18; 4 files, -43/+46)
* Add the config.cache_oblivious mallctl.  (Jason Evans, 2015-07-17; 1 file, -0/+1)
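  Reading it follows the usual pattern for bool-valued config.* mallctls
  (a sketch):

      #include <stdbool.h>
      #include <stdio.h>
      #include <jemalloc/jemalloc.h>

      int main(void) {
          bool co;
          size_t sz = sizeof(co);
          if (mallctl("config.cache_oblivious", &co, &sz, NULL, 0) == 0)
              printf("cache_oblivious: %s\n", co ? "true" : "false");
          return 0;
      }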
* Add timer support for Windows.  (Jason Evans, 2015-07-13; 2 files, -10/+24)
* Avoid function prototype incompatibilities.  (Jason Evans, 2015-07-10; 2 files, -5/+5)

  Add various function attributes to the exported functions to give the
  compiler more information to work with during optimization, and also
  specify throw() when compiling with C++ on Linux, in order to
  adequately match what __THROW does in glibc.

  This resolves #237.
* Fix an integer overflow bug in {size2index,s2u}_compute().  (Jason Evans, 2015-07-10; 1 file, -0/+89)

  This {bug,regression} was introduced by
  155bfa7da18cab0d21d87aa2dce4554166836f5d (Normalize size classes.).

  This resolves #241.
* Fix indentation.  (Jason Evans, 2015-07-09; 1 file, -1/+1)
* Fix size class overflow handling when profiling is enabled.  (Jason Evans, 2015-06-24; 3 files, -5/+59)

  Fix size class overflow handling for malloc(), posix_memalign(),
  memalign(), calloc(), and realloc() when profiling is enabled.

  Remove an assertion that erroneously caused arena_sdalloc() to fail
  when profiling was enabled.

  This resolves #232.
* Clarify relationship between stats.resident and stats.mapped.  (Jason Evans, 2015-05-30; 1 file, -3/+7)
* Avoid atomic operations for dependent rtree reads.  (Jason Evans, 2015-05-16; 1 file, -14/+14)
* Fix signed/unsigned comparison in arena_lg_dirty_mult_valid().  (Jason Evans, 2015-03-24; 1 file, -3/+3)
* Implement dynamic per arena control over dirty page purging.  (Jason Evans, 2015-03-19; 3 files, -19/+123)

  Add mallctls (see the sketch after this list):
  - arenas.lg_dirty_mult is initialized via opt.lg_dirty_mult, and can
    be modified to change the initial lg_dirty_mult setting for newly
    created arenas.
  - arena.<i>.lg_dirty_mult controls an individual arena's dirty page
    purging threshold, and synchronously triggers any purging that may
    be necessary to maintain the constraint.
  - arena.<i>.chunk.purge allows the per arena dirty page purging
    function to be replaced.

  This resolves #93.
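  A sketch of adjusting one arena's threshold (the arena index and value
  are illustrative; tighten_purging is a hypothetical wrapper):

      #include <sys/types.h>
      #include <jemalloc/jemalloc.h>

      /* Keep at most active/2^3 dirty pages in arena 0; any purging
       * needed to satisfy the new constraint happens synchronously
       * inside the mallctl call. */
      static void tighten_purging(void) {
          ssize_t lg_dirty_mult = 3;
          mallctl("arena.0.lg_dirty_mult", NULL, NULL, &lg_dirty_mult,
              sizeof(lg_dirty_mult));
      }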
* Use CLOCK_MONOTONIC in the timer if it's available.  (Daniel Micay, 2015-03-13; 2 files, -0/+27)

  Linux sets _POSIX_MONOTONIC_CLOCK to 0, meaning it *might* be
  available, so a sysconf check is necessary at runtime, with a fallback
  to the mandatory CLOCK_REALTIME clock.
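  A sketch of that runtime check (timer_clock is a hypothetical helper):

      #include <time.h>
      #include <unistd.h>

      /* Prefer the monotonic clock when sysconf confirms it exists. */
      static clockid_t timer_clock(void) {
      #if defined(CLOCK_MONOTONIC) && defined(_SC_MONOTONIC_CLOCK)
          if (sysconf(_SC_MONOTONIC_CLOCK) > 0)
              return CLOCK_MONOTONIC;
      #endif
          return CLOCK_REALTIME;
      }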
* Remove obsolete (incorrect) assertions.  (Jason Evans, 2015-02-16; 1 file, -21/+24)

  This regression was introduced by
  88fef7ceda6269598cef0cee8b984c8765673c27 (Refactor huge_*() calls into
  arena internals.), and went undetected because of the --enable-debug
  regression.
* Move centralized chunk management into arenas.  (Jason Evans, 2015-02-12; 1 file, -27/+0)

  Migrate all centralized data structures related to huge allocations
  and recyclable chunks into arena_t, so that each arena can manage huge
  allocations and recyclable virtual memory completely independently of
  other arenas.

  Add chunk node caching to arenas, in order to avoid contention on the
  base allocator.

  Use chunks_rtree to look up huge allocations rather than a red-black
  tree.  Maintain a per arena unsorted list of huge allocations (which
  will be needed to enumerate huge allocations during arena reset).

  Remove the --enable-ivsalloc option, make ivsalloc() always available,
  and use it for size queries if --enable-debug is enabled.  The only
  practical implications to this removal are that 1) ivsalloc() is now
  always available during live debugging (and the underlying radix tree
  is available during core-based debugging), and 2) size query
  validation can no longer be enabled independent of --enable-debug.

  Remove the stats.chunks.{current,total,high} mallctls, and replace
  their underlying statistics with simpler atomically updated counters
  used exclusively for gdump triggering.  These statistics are no longer
  very useful because each arena manages chunks independently, and per
  arena statistics provide similar information.

  Simplify chunk synchronization code, now that base chunk allocation
  cannot cause recursive lock acquisition.
* Test and fix tcache ID recycling.  (Jason Evans, 2015-02-10; 1 file, -0/+12)
* Implement explicit tcache support.  (Jason Evans, 2015-02-10; 1 file, -0/+110)

  Add the MALLOCX_TCACHE() and MALLOCX_TCACHE_NONE macros, which can be
  used in conjunction with the *allocx() API.

  Add the tcache.create, tcache.flush, and tcache.destroy mallctls.

  This resolves #145.
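  A sketch of the lifecycle these additions enable:

      #include <jemalloc/jemalloc.h>

      int main(void) {
          unsigned tci;
          size_t sz = sizeof(tci);
          /* Create an explicit cache, allocate and deallocate through
           * it, then destroy it. */
          if (mallctl("tcache.create", &tci, &sz, NULL, 0) != 0)
              return 1;
          void *p = mallocx(64, MALLOCX_TCACHE(tci));
          if (p != NULL)
              dallocx(p, MALLOCX_TCACHE(tci));
          mallctl("tcache.destroy", NULL, NULL, &tci, sizeof(tci));
          return 0;
      }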
* Refactor rtree to be lock-free.  (Jason Evans, 2015-02-05; 1 file, -25/+52)

  Recent huge allocation refactoring associates huge allocations with
  arenas, but it remains necessary to quickly look up huge allocation
  metadata during reallocation/deallocation.  A global radix tree
  remains a good solution to this problem, but locking would have become
  the primary bottleneck after (upcoming) migration of chunk management
  from global to per arena data structures.

  This lock-free implementation uses double-checked reads to traverse
  the tree, so that in the steady state, each read or write requires
  only a single atomic operation.

  This implementation also assures that no more than two tree levels
  actually exist, through a combination of careful virtual memory
  allocation which makes large sparse nodes cheap, and skipping the root
  node on x64 (possible because the top 16 bits are all 0 in practice).
* Implement more atomic operations.  (Jason Evans, 2015-02-05; 1 file, -30/+55)

  - atomic_*_p().
  - atomic_cas_*().
  - atomic_write_*().
* Implement the prof.gdump mallctl.  (Jason Evans, 2015-01-26; 1 file, -2/+27)

  This feature makes it possible to toggle the gdump feature on/off
  during program execution, whereas the opt.prof_gdump mallctl value can
  only be set during program startup.

  This resolves #72.
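  A sketch of the runtime toggle (profiling must be compiled in and
  enabled for it to have any effect; set_gdump is a hypothetical
  wrapper):

      #include <stdbool.h>
      #include <jemalloc/jemalloc.h>

      static void set_gdump(bool enable) {
          mallctl("prof.gdump", NULL, NULL, &enable, sizeof(enable));
      }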
* Introduce two new modes of junk filling: "alloc" and "free".  (Guilherme Goncalves, 2014-12-15; 4 files, -16/+33)

  In addition to true/false, opt.junk can now be either "alloc" or
  "free", giving applications the possibility of junking memory only on
  allocation or deallocation.

  This resolves #172.
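  A sketch of selecting one of the new modes via the application-defined
  malloc_conf string (the MALLOC_CONF environment variable works the
  same way):

      #include <jemalloc/jemalloc.h>

      /* Junk-fill on deallocation only. */
      const char *malloc_conf = "junk:free";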
* Style and spelling fixes.  (Jason Evans, 2014-12-09; 3 files, -5/+3)
* Fix test_stats_arenas_bins for 32-bit builds.  (Yuriy Kaminskiy, 2014-12-03; 1 file, -0/+1)
* Thwart compiler optimizations.  (Jason Evans, 2014-10-15; 1 file, -0/+12)
* Add per size class huge allocation statistics.  (Jason Evans, 2014-10-13; 2 files, -10/+114)

  Add per size class huge allocation statistics, and normalize various
  stats (a read sketch follows the list):
  - Change the arenas.nlruns type from size_t to unsigned.
  - Add the arenas.nhchunks and arenas.hchunks.<i>.size mallctls.
  - Replace the stats.arenas.<i>.bins.<j>.allocated mallctl with
    stats.arenas.<i>.bins.<j>.curregs.
  - Add the stats.arenas.<i>.hchunks.<j>.nmalloc,
    stats.arenas.<i>.hchunks.<j>.ndalloc,
    stats.arenas.<i>.hchunks.<j>.nrequests, and
    stats.arenas.<i>.hchunks.<j>.curhchunks mallctls.
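  A sketch of reading one of the renamed counters, assuming a build with
  --enable-stats (arena and bin indices are illustrative):

      #include <stdint.h>
      #include <stdio.h>
      #include <jemalloc/jemalloc.h>

      int main(void) {
          /* Statistics are cached; bump the epoch to refresh them. */
          uint64_t epoch = 1;
          size_t sz = sizeof(epoch);
          mallctl("epoch", &epoch, &sz, &epoch, sizeof(epoch));

          size_t curregs;
          sz = sizeof(curregs);
          if (mallctl("stats.arenas.0.bins.0.curregs", &curregs, &sz,
              NULL, 0) == 0)
              printf("bin 0 current regions: %zu\n", curregs);
          return 0;
      }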
* Don't fetch tsd in a0{d,}alloc().  (Jason Evans, 2014-10-11; 1 file, -0/+1)

  Don't fetch tsd in a0{d,}alloc(), because doing so can cause infinite
  recursion on systems that require an allocated tsd wrapper.
* Add configure options.  (Jason Evans, 2014-10-10; 2 files, -1/+27)

  Add:
    --with-lg-page
    --with-lg-page-sizes
    --with-lg-size-class-group
    --with-lg-quantum

  Get rid of STATIC_PAGE_SHIFT, in favor of directly setting LG_PAGE.

  Fix various edge conditions exposed by the configure options.