path: root/test/unit
Commit log (subject, author, date, files changed, lines removed/added):
...
* Refactor/fix ph.  (Jason Evans, 2016-04-11; 1 file, -20/+237)

    Refactor ph to support configurable comparison functions. Use a cpp macro
    code generation form equivalent to the rb macros so that pairing heaps can
    be used for both run heaps and chunk heaps.

    Remove per node parent pointers, and instead use leftmost siblings' prev
    pointers to track parents.

    Fix multi-pass sibling merging to iterate over intermediate results using
    a FIFO, rather than a LIFO. Use this fixed sibling merging implementation
    for both merge phases of the auxiliary twopass algorithm (first merging
    the aux list, then replacing the root with its merged children). This
    fixes both degenerate merge behavior and the potential for deep recursion.

    This regression was introduced by 6bafa6678fc36483e638f1c3a0a9bf79fb89bfc9
    (Pairing heap).

    This resolves #371.
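    A minimal sketch of the FIFO-based multi-pass sibling merge described
    above, using an assumed node layout and helper names rather than
    jemalloc's actual ph code: merged pairs are appended to the tail of the
    pending list and revisited later, so no pass recurses or degenerates into
    one long chain of merges.

        #include <stddef.h>

        typedef struct node_s node_t;
        struct node_s {
            int     key;
            node_t  *child;   /* leftmost child */
            node_t  *sibling; /* next sibling; doubles as the FIFO link here */
        };

        /* Merge two min-heaps; the smaller key becomes the parent. */
        static node_t *
        merge(node_t *a, node_t *b) {
            node_t *parent, *child;

            if (a == NULL) return b;
            if (b == NULL) return a;
            if (a->key <= b->key) { parent = a; child = b; }
            else                  { parent = b; child = a; }
            child->sibling = parent->child;
            parent->child = child;
            return parent;
        }

        /* Merge a NULL-terminated sibling list treated as a FIFO: repeatedly
         * dequeue two subtrees, merge them, and enqueue the result at the
         * tail, until a single heap remains. */
        static node_t *
        merge_siblings_fifo(node_t *head) {
            node_t *tail;

            if (head == NULL || head->sibling == NULL)
                return head;
            for (tail = head; tail->sibling != NULL; tail = tail->sibling)
                ;                       /* find the current tail */
            while (head->sibling != NULL) {
                node_t *a = head;
                node_t *b = a->sibling;
                node_t *m;

                head = b->sibling;      /* dequeue two subtrees */
                a->sibling = NULL;
                b->sibling = NULL;
                m = merge(a, b);
                if (head == NULL) {
                    head = m;           /* queue drained; m is the result */
                } else {
                    tail->sibling = m;  /* enqueue merged result at the tail */
                    tail = m;
                }
            }
            return head;
        }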
* Fix a compilation warning in the ph test code.  (Jason Evans, 2016-04-05; 1 file, -20/+1)

* Add JEMALLOC_ALLOC_JUNK and JEMALLOC_FREE_JUNK macros  (Chris Peterson, 2016-03-31; 1 file, -3/+3)

    Replace hardcoded 0xa5 and 0x5a junk values with JEMALLOC_ALLOC_JUNK and
    JEMALLOC_FREE_JUNK macros, respectively.
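    A small sketch of the change; only the macro names and byte values come
    from the commit, and the surrounding memset helpers are invented for
    illustration.

        #include <stdint.h>
        #include <string.h>

        #define JEMALLOC_ALLOC_JUNK ((uint8_t)0xa5)
        #define JEMALLOC_FREE_JUNK  ((uint8_t)0x5a)

        static void
        junk_on_alloc(void *ptr, size_t usize) {
            memset(ptr, JEMALLOC_ALLOC_JUNK, usize); /* was: memset(ptr, 0xa5, usize) */
        }

        static void
        junk_on_free(void *ptr, size_t usize) {
            memset(ptr, JEMALLOC_FREE_JUNK, usize);  /* was: memset(ptr, 0x5a, usize) */
        }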
* Refactor out signed/unsigned comparisons.  (Jason Evans, 2016-03-15; 2 files, -6/+6)

* Unittest for pairing heap  (Dave Watson, 2016-03-08; 1 file, -0/+92)
* Fix stack corruption and uninitialized var warning  (Dmitri Smirnov, 2016-02-29; 1 file, -6/+7)

    Stack corruption happens in x64 builds.

    This resolves #347.
* Fix decay tests for --disable-tcache case.  (Jason Evans, 2016-02-28; 1 file, -6/+14)

* Fix stats.arenas.<i>.[...] for --disable-stats case.  (Jason Evans, 2016-02-28; 1 file, -1/+3)

    Add missing stats.arenas.<i>.{dss,lg_dirty_mult,decay_time}
    initialization.

    Fix stats.arenas.<i>.{pactive,pdirty} to read under the protection of the
    arena mutex.

* Fix decay tests for --disable-stats case.  (Jason Evans, 2016-02-28; 1 file, -11/+18)

* Miscellaneous bitmap refactoring.  (Jason Evans, 2016-02-26; 1 file, -9/+13)
* Cast PTRDIFF_MAX to size_t before adding 1.  (Jason Evans, 2016-02-26; 1 file, -2/+2)

    This fixes compilation warnings regarding integer overflow that were
    introduced by 0c516a00c4cb28cff55ce0995f756b5aae074c9e (Make *allocx()
    size class overflow behavior defined.).
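    An illustration of the pattern; the variable is invented for context, and
    only the cast itself reflects the commit.

        #include <stdint.h>
        #include <stddef.h>

        /* size_t limit = PTRDIFF_MAX + 1; */     /* signed overflow warning */
        size_t limit = (size_t)PTRDIFF_MAX + 1;   /* well-defined unsigned addition */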
* Make *allocx() size class overflow behavior defined.  (Jason Evans, 2016-02-25; 1 file, -1/+24)

    Limit supported size and alignment to HUGE_MAXCLASS, which in turn is now
    limited to be less than PTRDIFF_MAX.

    This resolves #278 and #295.
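    A sketch of the now-defined behavior; the specific oversized request is
    just an example, and error handling is omitted.

        #include <assert.h>
        #include <stdint.h>
        #include <jemalloc/jemalloc.h>

        int
        main(void) {
            /* A request that cannot fit under HUGE_MAXCLASS must fail
             * cleanly rather than overflowing internal size computations. */
            void *p = mallocx(SIZE_MAX, 0);
            assert(p == NULL);
            return 0;
        }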
* Silence miscellaneous 64-to-32-bit data loss warnings.  (Jason Evans, 2016-02-24; 1 file, -2/+2)

* Make opt_narenas unsigned rather than size_t.  (Jason Evans, 2016-02-24; 1 file, -1/+1)

* Remove rbt_nil  (Dave Watson, 2016-02-24; 1 file, -19/+22)

    Since this is an intrusive tree, rbt_nil is the whole size of the node
    and can be quite large. For example, miscelm is ~100 bytes.

* Use table lookup for run_quantize_{floor,ceil}().  (Jason Evans, 2016-02-23; 1 file, -10/+2)

    Reduce run quantization overhead by generating lookup tables during
    bootstrapping, and using the tables for all subsequent run quantization.

* Test run quantization.  (Jason Evans, 2016-02-22; 1 file, -0/+157)

    Also rename run_quantize_*() to improve clarity. These tests demonstrate
    that run_quantize_ceil() is flawed.
* Refactor time_* into nstime_*.  (Jason Evans, 2016-02-22; 4 files, -249/+246)

    Use a single uint64_t in nstime_t to store nanoseconds rather than using
    struct timespec. This reduces fragility around conversions between long
    and uint64_t, especially missing casts that only cause problems on 32-bit
    platforms.
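    A minimal sketch of the representation change; the function names follow
    the nstime_* convention from the commit, but the exact signatures are
    assumptions.

        #include <stdint.h>

        #define NS_PER_SEC UINT64_C(1000000000)

        typedef struct {
            uint64_t ns;    /* nanoseconds, replacing struct timespec */
        } nstime_t;

        static inline void
        nstime_init2(nstime_t *time, uint64_t sec, uint64_t nsec) {
            time->ns = sec * NS_PER_SEC + nsec;
        }

        static inline uint64_t
        nstime_sec(const nstime_t *time) {
            return time->ns / NS_PER_SEC;
        }

        static inline uint64_t
        nstime_nsec(const nstime_t *time) {
            return time->ns % NS_PER_SEC;
        }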
* Handle unaligned keys in hash().  (Jason Evans, 2016-02-20; 1 file, -3/+16)

    Reported by Christopher Ferris <cferris@google.com>.
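    The commit message does not spell out the fix, but the usual way to
    handle unaligned keys is to load each block through memcpy instead of
    dereferencing a cast pointer; the helper below is a hypothetical sketch
    of that pattern, not jemalloc's code.

        #include <stdint.h>
        #include <string.h>

        static inline uint32_t
        load_block_32(const void *p, size_t i) {
            uint32_t block;

            /* memcpy compiles to a plain load on most targets, but is safe
             * even when p + i*4 is not 4-byte aligned. */
            memcpy(&block, (const uint8_t *)p + i * sizeof(uint32_t),
                sizeof(block));
            return block;
        }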
* Increase test coverage in test_decay_ticks.  (Jason Evans, 2016-02-20; 1 file, -123/+98)

* Implement decay-based unused dirty page purging.  (Jason Evans, 2016-02-20; 2 files, -0/+465)

    This is an alternative to the existing ratio-based unused dirty page
    purging, and is intended to eventually become the sole purging mechanism.

    Add mallctls:
    - opt.purge
    - opt.decay_time
    - arena.<i>.decay
    - arena.<i>.decay_time
    - arenas.decay_time
    - stats.arenas.<i>.decay_time

    This resolves #325.
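    A usage sketch for the new mallctls named above; error handling is
    omitted, and the ssize_t type used for decay_time is an assumption.

        #include <stdio.h>
        #include <sys/types.h>
        #include <jemalloc/jemalloc.h>

        int
        main(void) {
            ssize_t decay_time;
            size_t sz = sizeof(decay_time);

            /* Read the default decay time applied to newly created arenas. */
            mallctl("opt.decay_time", &decay_time, &sz, NULL, 0);
            printf("opt.decay_time: %zd\n", decay_time);

            /* Change the decay time used for arenas created from now on. */
            decay_time = 10;
            mallctl("arenas.decay_time", NULL, NULL, &decay_time,
                sizeof(decay_time));

            /* Trigger decay-based purging on arena 0. */
            mallctl("arena.0.decay", NULL, NULL, NULL, 0);
            return 0;
        }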
* Implement smoothstep table generation.  (Jason Evans, 2016-02-20; 1 file, -0/+106)

    Check in a generated smootherstep table as smoothstep.h rather than
    generating it at configure time, since not all systems (e.g. Windows)
    have dc.
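    The checked-in table is produced by a generation step (dc on Unix); the C
    sketch below only illustrates the standard smootherstep polynomial
    6t^5 - 15t^4 + 10t^3 and a fixed-point conversion, with the table size
    and precision chosen arbitrarily.

        #include <inttypes.h>
        #include <stdint.h>
        #include <stdio.h>

        #define NSTEPS 200  /* number of table entries (assumed) */
        #define BFP    24   /* fixed-point fractional bits (assumed) */

        int
        main(void) {
            unsigned i;

            for (i = 1; i <= NSTEPS; i++) {
                double t = (double)i / NSTEPS;
                /* smootherstep: 6t^5 - 15t^4 + 10t^3, in Horner form */
                double s = t * t * t * (t * (t * 6.0 - 15.0) + 10.0);

                printf("0x%07" PRIx64 ",\n", (uint64_t)(s * (1U << BFP)));
            }
            return 0;
        }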
* Refactor prng* from cpp macros into inline functions.  (Jason Evans, 2016-02-20; 2 files, -23/+114)

    Remove 32-bit variant, convert prng64() to prng_lg_range(), and add
    prng_range().
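    A sketch of the two-function split; the LCG constants and signatures here
    are assumptions, not copied from jemalloc.

        #include <assert.h>
        #include <stdint.h>

        /* Return a pseudo-random number in [0, 2^lg_range). */
        static inline uint64_t
        prng_lg_range(uint64_t *state, unsigned lg_range) {
            assert(lg_range >= 1 && lg_range <= 64);
            /* 64-bit LCG step; keep the high bits, which are the strongest. */
            *state = *state * UINT64_C(6364136223846793005)
                + UINT64_C(1442695040888963407);
            return *state >> (64 - lg_range);
        }

        /* Return a pseudo-random number in [0, range), for range > 1. */
        static inline uint64_t
        prng_range(uint64_t *state, uint64_t range) {
            uint64_t ret;
            unsigned lg_range = 1;

            assert(range > 1);
            /* Smallest lg_range such that 2^lg_range >= range. */
            while (lg_range < 64 && (UINT64_C(1) << lg_range) < range) {
                lg_range++;
            }
            /* Rejection sampling keeps the result unbiased. */
            do {
                ret = prng_lg_range(state, lg_range);
            } while (ret >= range);
            return ret;
        }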
* Implement ticker.  (Jason Evans, 2016-02-20; 1 file, -0/+76)

    Implement ticker, which provides a simple API for ticking off some number
    of events before indicating that the ticker has hit its limit.
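    A minimal sketch of the idea; the struct layout and function names are
    assumptions.

        #include <stdbool.h>
        #include <stdint.h>

        typedef struct {
            int32_t tick;    /* events remaining in the current period */
            int32_t nticks;  /* configured period length */
        } ticker_t;

        static inline void
        ticker_init(ticker_t *ticker, int32_t nticks) {
            ticker->tick = nticks;
            ticker->nticks = nticks;
        }

        /* Record one event; return true when the configured limit is hit. */
        static inline bool
        ticker_tick(ticker_t *ticker) {
            if (--ticker->tick == 0) {
                ticker->tick = ticker->nticks;  /* rearm for the next period */
                return true;
            }
            return false;
        }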
* Flesh out time_*() API.  (Jason Evans, 2016-02-20; 1 file, -3/+203)

* Add time_update().  (Cameron Evans, 2016-02-20; 1 file, -0/+23)

* Add --with-malloc-conf.  (Jason Evans, 2016-02-20; 1 file, -16/+17)

    Add --with-malloc-conf, which makes it possible to embed a default
    options string during configuration.

* Fix test_stats_arenas_summary fragility.  (Jason Evans, 2016-02-20; 1 file, -4/+4)

    Fix test_stats_arenas_summary to deallocate before asserting that purging
    must have happened.

* Add test for tree destruction  (Joshua Kahn, 2015-11-09; 1 file, -1/+16)

* Allow const keys for lookup  (Joshua Kahn, 2015-11-09; 1 file, -1/+1)

    Signed-off-by: Steve Dougherty <sdougherty@barracuda.com>

    This resolves #281.

* Rename arena_maxclass to large_maxclass.  (Jason Evans, 2015-09-12; 3 files, -11/+11)

    arena_maxclass is no longer an appropriate name, because arenas also
    manage huge allocations.

* Fix "prof.reset" mallctl-related corruption.  (Jason Evans, 2015-09-10; 1 file, -15/+66)

    Fix heap profiling to distinguish among otherwise identical sample sites
    with interposed resets (triggered via the "prof.reset" mallctl). This bug
    could cause data structure corruption that would most likely result in a
    segfault.

* Optimize arena_prof_tctx_set().  (Jason Evans, 2015-09-02; 1 file, -17/+32)

    Optimize arena_prof_tctx_set() to avoid reading run metadata when
    deciding whether it's actually necessary to write.

* Fix arenas_cache_cleanup().  (Christopher Ferris, 2015-08-21; 1 file, -0/+6)

    Fix arenas_cache_cleanup() to handle allocation/deallocation within the
    application's thread-specific data cleanup functions even after
    arenas_cache is torn down.

* Rename index_t to szind_t to avoid an existing type on Solaris.  (Jason Evans, 2015-08-19; 1 file, -1/+1)

    This resolves #256.

* Fix assertion in test.  (Jason Evans, 2015-08-12; 1 file, -1/+1)

* Add no-OOM assertions to test.  (Jason Evans, 2015-08-07; 1 file, -6/+12)
* Fix MinGW-related portability issues.  (Jason Evans, 2015-07-23; 5 files, -47/+44)

    Create and use FMT* macros that are equivalent to the PRI* macros that
    inttypes.h defines. This allows uniform use of the Unix-specific format
    specifiers, e.g. "%zu", as well as avoiding Windows-specific definitions
    of e.g. PRIu64.

    Add ffs()/ffsl() support for compiling with gcc.

    Extract compatibility definitions of ENOENT, EINVAL, EAGAIN, EPERM,
    ENOMEM, and ENORANGE into include/msvc_compat/windows_extra.h and use the
    file for tests as well as for core jemalloc code.
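    A sketch of the FMT* idea; the macro names mirror the PRI* naming
    mentioned above, but the exact names and definitions here are
    assumptions.

        #include <inttypes.h>
        #include <stdint.h>
        #include <stdio.h>

        #ifdef _WIN32
        #  define FMTu64 "I64u"
        #  define FMTzu  "Iu"
        #else
        #  define FMTu64 PRIu64
        #  define FMTzu  "zu"
        #endif

        static void
        report(uint64_t count, size_t size) {
            /* The same format string works on Unix and Windows toolchains. */
            printf("count: %" FMTu64 ", size: %" FMTzu "\n", count, size);
        }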
* Fix more MinGW build warnings.  (Jason Evans, 2015-07-18; 3 files, -31/+34)

* Add the config.cache_oblivious mallctl.  (Jason Evans, 2015-07-17; 1 file, -0/+1)

* Avoid function prototype incompatibilities.  (Jason Evans, 2015-07-10; 2 files, -5/+5)

    Add various function attributes to the exported functions to give the
    compiler more information to work with during optimization, and also
    specify throw() when compiling with C++ on Linux, in order to adequately
    match what __THROW does in glibc.

    This resolves #237.

* Fix an integer overflow bug in {size2index,s2u}_compute().  (Jason Evans, 2015-07-10; 1 file, -0/+89)

    This {bug,regression} was introduced by
    155bfa7da18cab0d21d87aa2dce4554166836f5d (Normalize size classes.).

    This resolves #241.

* Fix indentation.  (Jason Evans, 2015-07-09; 1 file, -1/+1)

* Clarify relationship between stats.resident and stats.mapped.  (Jason Evans, 2015-05-30; 1 file, -3/+7)

* Avoid atomic operations for dependent rtree reads.  (Jason Evans, 2015-05-16; 1 file, -14/+14)

* Fix signed/unsigned comparison in arena_lg_dirty_mult_valid().  (Jason Evans, 2015-03-24; 1 file, -3/+3)

* Implement dynamic per arena control over dirty page purging.  (Jason Evans, 2015-03-19; 2 files, -5/+71)

    Add mallctls:
    - arenas.lg_dirty_mult is initialized via opt.lg_dirty_mult, and can be
      modified to change the initial lg_dirty_mult setting for newly created
      arenas.
    - arena.<i>.lg_dirty_mult controls an individual arena's dirty page
      purging threshold, and synchronously triggers any purging that may be
      necessary to maintain the constraint.
    - arena.<i>.chunk.purge allows the per arena dirty page purging function
      to be replaced.

    This resolves #93.

* Move centralized chunk management into arenas.  (Jason Evans, 2015-02-12; 1 file, -27/+0)

    Migrate all centralized data structures related to huge allocations and
    recyclable chunks into arena_t, so that each arena can manage huge
    allocations and recyclable virtual memory completely independently of
    other arenas.

    Add chunk node caching to arenas, in order to avoid contention on the
    base allocator.

    Use chunks_rtree to look up huge allocations rather than a red-black
    tree. Maintain a per arena unsorted list of huge allocations (which will
    be needed to enumerate huge allocations during arena reset).

    Remove the --enable-ivsalloc option, make ivsalloc() always available,
    and use it for size queries if --enable-debug is enabled. The only
    practical implications to this removal are that 1) ivsalloc() is now
    always available during live debugging (and the underlying radix tree is
    available during core-based debugging), and 2) size query validation can
    no longer be enabled independent of --enable-debug.

    Remove the stats.chunks.{current,total,high} mallctls, and replace their
    underlying statistics with simpler atomically updated counters used
    exclusively for gdump triggering. These statistics are no longer very
    useful because each arena manages chunks independently, and per arena
    statistics provide similar information.

    Simplify chunk synchronization code, now that base chunk allocation
    cannot cause recursive lock acquisition.

* Test and fix tcache ID recycling.  (Jason Evans, 2015-02-10; 1 file, -0/+12)
* Implement explicit tcache support.  (Jason Evans, 2015-02-10; 1 file, -0/+110)

    Add the MALLOCX_TCACHE() and MALLOCX_TCACHE_NONE macros, which can be
    used in conjunction with the *allocx() API.

    Add the tcache.create, tcache.flush, and tcache.destroy mallctls.

    This resolves #145.
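    A usage sketch for the explicit tcache API added here; error handling is
    omitted for brevity.

        #include <jemalloc/jemalloc.h>

        int
        main(void) {
            unsigned tcache;
            size_t sz = sizeof(tcache);
            void *p;

            /* Create an explicit tcache and obtain its ID. */
            mallctl("tcache.create", &tcache, &sz, NULL, 0);

            /* Allocate and deallocate through that tcache. */
            p = mallocx(4096, MALLOCX_TCACHE(tcache));
            dallocx(p, MALLOCX_TCACHE(tcache));

            /* Flush its cached objects, then destroy it. */
            mallctl("tcache.flush", NULL, NULL, &tcache, sizeof(tcache));
            mallctl("tcache.destroy", NULL, NULL, &tcache, sizeof(tcache));
            return 0;
        }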