path: root/test
* Refactor prng to not use 64-bit atomics on 32-bit platforms. (Jason Evans, 2016-11-07; 1 file, -12/+207)
  This resolves #495.
* Fix run leak. (Jason Evans, 2016-11-07; 1 file, -1/+1)
  Fix arena_run_first_best_fit() to search all potentially non-empty
  runs_avail heaps, rather than ignoring the heap that contains runs larger
  than large_maxclass, but less than chunksize. This fixes a regression
  caused by f193fd80cf1f99bce2bc9f5f4a8b149219965da2 (Refactor runs_avail.).
  This resolves #493.
* Add os_unfair_lock support. (Jason Evans, 2016-11-03; 2 files, -0/+9)
  OS X 10.12 deprecated OSSpinLock; os_unfair_lock is the recommended
  replacement.
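  For reference, a minimal sketch of the replacement API on macOS 10.12+
  (plain <os/lock.h>, not jemalloc's own mutex wrapper):

  ```c
  #include <os/lock.h>

  static os_unfair_lock lock = OS_UNFAIR_LOCK_INIT;

  static void
  critical_section(void)
  {
      os_unfair_lock_lock(&lock);
      /* ... protected work ... */
      os_unfair_lock_unlock(&lock);
  }
  ```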
* Call _exit(2) rather than exit(3) in forked child. (Jason Evans, 2016-11-03; 1 file, -1/+1)
  _exit(2) is async-signal-safe, whereas exit(3) is not.
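  A minimal sketch of the pattern (plain POSIX; the child's test work is
  elided):

  ```c
  #include <sys/types.h>
  #include <sys/wait.h>
  #include <unistd.h>

  int
  main(void)
  {
      pid_t pid = fork();
      if (pid == 0) {
          /* Child: run the test body, then terminate without running
           * atexit(3) handlers or flushing stdio buffers; exit(3) is not
           * async-signal-safe in the child of a threaded parent. */
          _exit(0);
      }
      int status;
      waitpid(pid, &status, 0);
      return (WIFEXITED(status) && WEXITSTATUS(status) == 0) ? 0 : 1;
  }
  ```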
* Reduce memory requirements for regression tests. (Jason Evans, 2016-10-28; 3 files, -35/+55)
  This is intended to drop memory usage to a level that AppVeyor test
  instances can handle. This resolves #393.
* Periodically purge in memory-intensive integration tests. (Jason Evans, 2016-10-28; 1 file, -0/+7)
  This resolves #393.
* Periodically purge in memory-intensive integration tests. (Jason Evans, 2016-10-28; 3 files, -6/+27)
  This resolves #393.
* Do not (recursively) allocate within tsd_fetch(). (Jason Evans, 2016-10-21; 2 files, -24/+24)
  Refactor tsd so that tsdn_fetch() does not trigger allocation, since
  allocation could cause infinite recursion. This resolves #458.
* Make dss operations lockless. (Jason Evans, 2016-10-13; 1 file, -2/+2)
  Rather than protecting dss operations with a mutex, use atomic operations.
  This has negligible impact on synchronization overhead during typical dss
  allocation, but is a substantial improvement for chunk_in_dss() and the
  newly added chunk_dss_mergeable(), which can be called multiple times
  during chunk deallocations. This change also has the advantage of avoiding
  tsd in deallocation paths associated with purging, which resolves
  potential deadlocks during thread exit due to attempted tsd resurrection.
  This resolves #425.
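  A simplified, hypothetical sketch of the idea using C11 atomics (the real
  code uses jemalloc's atomic wrappers and sbrk(2); dss_base/dss_max and the
  function names here are illustrative only, and are assumed to be
  initialized by boot code):

  ```c
  #include <stdatomic.h>
  #include <stdbool.h>
  #include <stddef.h>

  static _Atomic(char *) dss_base;  /* lowest dss address */
  static _Atomic(char *) dss_max;   /* current dss frontier */

  /* Reserve `size` bytes by advancing dss_max with a CAS loop. */
  static char *
  dss_extend(size_t size)
  {
      char *old = atomic_load(&dss_max);
      while (!atomic_compare_exchange_weak(&dss_max, &old, old + size)) {
          /* `old` was refreshed by the failed CAS; retry. */
      }
      return old;
  }

  /* Lock-free membership query in the spirit of chunk_in_dss(). */
  static bool
  in_dss(const void *addr)
  {
      const char *p = addr;
      return p >= atomic_load(&dss_base) && p < atomic_load(&dss_max);
  }
  ```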
* Fix decay tests to all adapt to nstime_monotonic(). (Jason Evans, 2016-10-11; 1 file, -6/+9)
* Do not advance decay epoch when time goes backwards. (Jason Evans, 2016-10-11; 2 files, -2/+20)
  Instead, move the epoch backward in time. Additionally, add
  nstime_monotonic() and use it in debug builds to assert that time only
  goes backward if nstime_update() is using a non-monotonic time source.
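  A hedged sketch of the policy (names and the flat nanosecond epoch
  representation are illustrative, not jemalloc's internals):

  ```c
  #include <assert.h>
  #include <stdbool.h>
  #include <stdint.h>

  /* `*epoch` and `now` are nanosecond timestamps here. */
  static void
  decay_epoch_advance(uint64_t *epoch, uint64_t now, bool monotonic_clock)
  {
      if (now < *epoch) {
          /* Time went backwards: pull the epoch back to `now` rather than
           * advancing it.  With a monotonic time source this should never
           * happen, hence the debug-build assertion. */
          assert(!monotonic_clock);
          *epoch = now;
          return;
      }
      /* Normal case: advance forward (the real code advances in whole
       * decay-interval steps; elided here). */
      *epoch = now;
  }
  ```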
* Refactor runs_avail. (Jason Evans, 2016-10-05; 1 file, -1/+1)
  Use pszind_t size classes rather than szind_t size classes, and always
  reserve space for NPSIZES elements. This removes unused heaps that are not
  multiples of the page size, and adds (currently) unused heaps for all huge
  size classes, with the immediate benefit that the size of arena_t
  allocations is constant (no longer dependent on chunk size).
* Implement pz2ind(), pind2sz(), and psz2u(). (Jason Evans, 2016-10-04; 1 file, -11/+83)
  These compute size classes and indices similarly to size2index(),
  index2size() and s2u(), respectively, but using the subset of size classes
  that are multiples of the page size. Note that pszind_t and szind_t are
  not interchangeable.
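  A deliberately simplified sketch of the page-size-class rounding (assuming
  4 KiB pages and pretending every page multiple is a size class, which the
  real psz classes are not):

  ```c
  #include <stddef.h>

  #define PAGE ((size_t)4096)  /* assumption: 4 KiB pages */

  /* Round a request up to a page multiple; the real psz2u() rounds up to
   * the next page-aligned size class, which is coarser for large sizes. */
  static size_t
  psz2u_simplified(size_t psz)
  {
      return (psz + PAGE - 1) & ~(PAGE - 1);
  }
  ```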
* Work around a weird pgi bug in test/unit/math.c (Elliot Ronaghan, 2016-09-26; 1 file, -0/+4)
  pgi fails to compile math.c, reporting that `-INFINITY` in
  `pt_norm_expected[]` is a "Non-constant" expression. A simplified version
  of this failure is:

  ```c
  #include <math.h>

  static double inf1, inf2 = INFINITY; // no complaints
  static double inf3 = INFINITY;       // suddenly INFINITY is "Non-constant"

  int main() { }
  ```

  ```sh
  PGC-S-0074-Non-constant expression in initializer (t.c: 4)
  ```

  pgi errors on the declaration of inf3, and will compile fine if that line
  is removed. I've reported this bug to pgi, but in the meantime I just
  switched to using (DBL_MAX + DBL_MAX) to work around this bug.
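  The workaround mentioned above, in isolation (DBL_MAX + DBL_MAX overflows
  to +inf under IEEE 754 default rounding, and pgi folds it as a constant
  expression; the variable name here is illustrative):

  ```c
  #include <float.h>

  /* Constant-expression infinity that pgi accepts, used in place of the
   * INFINITY macro in a static initializer. */
  static const double pt_norm_inf = DBL_MAX + DBL_MAX;
  ```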
* Disable junk filling for tests that could otherwise easily OOM. (Jason Evans, 2016-05-11; 2 files, -0/+8)
* Resolve bootstrapping issues when embedded in FreeBSD libc. (Jason Evans, 2016-05-11; 5 files, -88/+88)
  b2c0d6322d2307458ae2b28545f8a5c9903d7ef5 (Add witness, a simple online
  locking validator.) caused a broad propagation of tsd throughout the
  internal API, but tsd_fetch() was designed to fail prior to tsd
  bootstrapping. Fix this by splitting tsd_t into non-nullable tsd_t and
  nullable tsdn_t, and modifying all internal APIs that do not critically
  rely on tsd to take nullable pointers. Furthermore, add the
  tsd_booted_get() function so that tsdn_fetch() can probe whether tsd
  bootstrapping is complete and return NULL if not. All dangerous
  conversions of nullable pointers are tsdn_tsd() calls that assert-fail on
  invalid conversion.
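  A hedged, self-contained sketch of the nullable/non-nullable split
  (simplified types and signatures; the real tsd machinery carries much more
  state):

  ```c
  #include <assert.h>
  #include <stdbool.h>
  #include <stddef.h>

  typedef struct { int dummy; } tsd_t;  /* non-nullable once booted */
  typedef tsd_t tsdn_t;                 /* nullable before bootstrap */

  static bool tsd_booted = false;
  static tsd_t tsd_instance;

  static bool tsd_booted_get(void) { return tsd_booted; }
  static tsd_t *tsd_fetch(void) { return &tsd_instance; }

  /* Nullable fetch: safe to call before tsd bootstrapping completes. */
  static tsdn_t *
  tsdn_fetch(void)
  {
      if (!tsd_booted_get())
          return NULL;
      return tsd_fetch();
  }

  /* The one dangerous conversion: assert-fails on a NULL tsdn_t. */
  static tsd_t *
  tsdn_tsd(tsdn_t *tsdn)
  {
      assert(tsdn != NULL);
      return tsdn;
  }
  ```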
* Fix tsd bootstrapping for a0malloc(). (Jason Evans, 2016-05-07; 5 files, -16/+69)
* Don't test fork() on Windows. (Jason Evans, 2016-05-04; 1 file, -0/+6)
* Update mallocx() OOM test to deal with smaller hugemax. (Jason Evans, 2016-05-03; 1 file, -10/+19)
  Depending on virtual memory resource limits, it is necessary to attempt
  allocating three maximally sized objects to trigger OOM rather than just
  two, since the maximum supported size is slightly less than half the total
  virtual memory address space. This fixes a test failure that was
  introduced by 0c516a00c4cb28cff55ce0995f756b5aae074c9e (Make *allocx()
  size class overflow behavior defined.). This resolves #379.
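  A hedged sketch of the adjusted test logic, using only the public
  mallocx()/dallocx() API (the hugemax value and the helper name are
  placeholders):

  ```c
  #include <jemalloc/jemalloc.h>
  #include <stddef.h>

  /* Attempt up to three maximally sized allocations; because hugemax is
   * just under half the address space, two may succeed before OOM. */
  static int
  oom_within_three(size_t hugemax)
  {
      void *ptrs[3] = {NULL, NULL, NULL};
      int oom = 0;

      for (int i = 0; i < 3; i++) {
          ptrs[i] = mallocx(hugemax, 0);
          if (ptrs[i] == NULL) {
              oom = 1;
              break;
          }
      }
      for (int i = 0; i < 3; i++) {
          if (ptrs[i] != NULL)
              dallocx(ptrs[i], 0);
      }
      return oom;
  }
  ```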
* Fix witness/fork() interactions. (Jason Evans, 2016-04-26; 1 file, -3/+22)
  Fix witness to clear its list of owned mutexes in the child if
  platform-specific malloc_mutex code re-initializes mutexes rather than
  unlocking them.
* Fix fork()-related lock rank ordering reversals. (Jason Evans, 2016-04-26; 1 file, -0/+39)
* Use separate arena for chunk tests. (Jason Evans, 2016-04-26; 1 file, -28/+45)
  This assures that side effects of internal allocation don't impact tests.
* Fix arena_reset() test to avoid tcache. (Jason Evans, 2016-04-25; 1 file, -10/+9)
* Implement the arena.<i>.reset mallctl. (Jason Evans, 2016-04-22; 1 file, -0/+160)
  This makes it possible to discard all of an arena's allocations in a
  single operation. This resolves #146.
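  A sketch of how the new mallctl might be invoked from application code
  (error handling abbreviated; the arena must satisfy the preconditions
  documented for arena.<i>.reset):

  ```c
  #include <jemalloc/jemalloc.h>
  #include <stdio.h>

  /* Discard every allocation backed by arena `arena_ind` in one call. */
  static int
  reset_arena(unsigned arena_ind)
  {
      char cmd[64];

      snprintf(cmd, sizeof(cmd), "arena.%u.reset", arena_ind);
      return mallctl(cmd, NULL, NULL, NULL, 0);
  }
  ```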
* Fix style nits. (Jason Evans, 2016-04-17; 3 files, -4/+4)
* Add witness, a simple online locking validator. (Jason Evans, 2016-04-14; 3 files, -3/+282)
  This resolves #358.
* Fix more 64-to-32 conversion warnings. (Jason Evans, 2016-04-12; 2 files, -11/+11)
* Clean up char vs. uint8_t in junk filling code. (Jason Evans, 2016-04-11; 2 files, -15/+17)
  Consistently use uint8_t rather than char for junk filling code.
* Refactor/fix ph. (Jason Evans, 2016-04-11; 1 file, -20/+237)
  Refactor ph to support configurable comparison functions. Use a cpp macro
  code generation form equivalent to the rb macros so that pairing heaps can
  be used for both run heaps and chunk heaps. Remove per node parent
  pointers, and instead use leftmost siblings' prev pointers to track
  parents. Fix multi-pass sibling merging to iterate over intermediate
  results using a FIFO, rather than a LIFO. Use this fixed sibling merging
  implementation for both merge phases of the auxiliary twopass algorithm
  (first merging the aux list, then replacing the root with its merged
  children). This fixes both degenerate merge behavior and the potential for
  deep recursion. This regression was introduced by
  6bafa6678fc36483e638f1c3a0a9bf79fb89bfc9 (Pairing heap). This resolves
  #371.
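  A hedged, self-contained sketch of the FIFO-style multi-pass sibling merge
  described above (plain C rather than the cpp-macro-generated ph code;
  nodes are compared by an integer key for illustration):

  ```c
  #include <stddef.h>

  typedef struct phn_s {
      int           key;
      struct phn_s *child;    /* leftmost child */
      struct phn_s *sibling;  /* next sibling; doubles as the queue link */
  } phn_t;

  /* Merge two heaps; the root with the smaller key wins. */
  static phn_t *
  phn_merge(phn_t *a, phn_t *b)
  {
      if (a == NULL) return b;
      if (b == NULL) return a;
      if (b->key < a->key) { phn_t *t = a; a = b; b = t; }
      b->sibling = a->child;
      a->child = b;
      return a;
  }

  /* Multi-pass merge of a sibling list: repeatedly pop two heaps from the
   * front of a FIFO, merge them, and append the result at the back.  This
   * avoids both deep recursion and the degenerate behavior of folding the
   * list LIFO-style. */
  static phn_t *
  phn_merge_siblings(phn_t *head)
  {
      if (head == NULL || head->sibling == NULL)
          return head;

      phn_t *tail = head;            /* the sibling list is the queue */
      while (tail->sibling != NULL)
          tail = tail->sibling;

      while (head->sibling != NULL) {
          phn_t *a = head;
          phn_t *b = head->sibling;
          head = b->sibling;         /* pop two from the front */
          a->sibling = NULL;
          b->sibling = NULL;
          phn_t *m = phn_merge(a, b);
          if (head == NULL)
              return m;              /* queue exhausted */
          tail->sibling = m;         /* append the merged heap at the back */
          tail = m;
      }
      return head;
  }
  ```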
* Fix a compilation warning in the ph test code. (Jason Evans, 2016-04-05; 1 file, -20/+1)
* Add JEMALLOC_ALLOC_JUNK and JEMALLOC_FREE_JUNK macros (Chris Peterson, 2016-03-31; 1 file, -3/+3)
  Replace hardcoded 0xa5 and 0x5a junk values with JEMALLOC_ALLOC_JUNK and
  JEMALLOC_FREE_JUNK macros, respectively.
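  The named constants, roughly as described in the commit message (the
  memset() helpers are illustrative, not the tree's junk-filling routines):

  ```c
  #include <stddef.h>
  #include <stdint.h>
  #include <string.h>

  #define JEMALLOC_ALLOC_JUNK ((uint8_t)0xa5)
  #define JEMALLOC_FREE_JUNK  ((uint8_t)0x5a)

  /* Example use: fill a region on allocation and on deallocation. */
  static void junk_alloc(void *p, size_t n) { memset(p, JEMALLOC_ALLOC_JUNK, n); }
  static void junk_free(void *p, size_t n)  { memset(p, JEMALLOC_FREE_JUNK, n); }
  ```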
* Avoid blindly enabling assertions for header code when testing. (Jason Evans, 2016-03-23; 1 file, -33/+45)
  Restructure the test program master header to avoid blindly enabling
  assertions. Prior to this change, assertion code in e.g. arena.h was
  always enabled for tests, which could skew performance-related testing.
* Code formatting fixes. (Jason Evans, 2016-03-23; 1 file, -1/+2)
* Refactor out signed/unsigned comparisons. (Jason Evans, 2016-03-15; 3 files, -9/+8)
* Add (size_t) casts to MALLOCX_ALIGN(). (Jason Evans, 2016-03-11; 1 file, -13/+10)
  Add (size_t) casts to MALLOCX_ALIGN() macros so that passing the integer
  constant 0x80000000 does not cause a compiler warning about invalid shift
  amount. This resolves #354.
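  An illustrative (not verbatim) stand-in for the class of macro being
  fixed: without the (size_t) cast, the constant 0x80000000 has bit 31 set
  and is handled as a 32-bit int, so deriving its log2 can provoke
  shift/sign-conversion warnings.

  ```c
  #include <stddef.h>

  /* Hypothetical alignment-to-log2 macro (GCC/Clang builtin, not the real
   * MALLOCX_ALIGN definition); the (size_t) cast keeps the argument
   * unsigned before any bit manipulation. */
  #define LG_ALIGN(a) ((int)__builtin_ctzl((unsigned long)(size_t)(a)))

  /* e.g. LG_ALIGN(0x80000000) == 31, with no warnings. */
  ```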
* Unittest for pairing heap (Dave Watson, 2016-03-08; 1 file, -0/+92)
* Fix stack corruption and uninitialized var warning (Dmitri Smirnov, 2016-02-29; 1 file, -6/+7)
  Stack corruption happens in x64 builds. This resolves #347.
* Fix decay tests for --disable-tcache case. (Jason Evans, 2016-02-28; 1 file, -6/+14)
* Fix stats.arenas.<i>.[...] for --disable-stats case. (Jason Evans, 2016-02-28; 1 file, -1/+3)
  Add missing stats.arenas.<i>.{dss,lg_dirty_mult,decay_time}
  initialization. Fix stats.arenas.<i>.{pactive,pdirty} to read under the
  protection of the arena mutex.
* Fix decay tests for --disable-stats case. (Jason Evans, 2016-02-28; 1 file, -11/+18)
* Remove invalid tests. (Jason Evans, 2016-02-27; 2 files, -18/+2)
  Remove invalid tests that were intended to be tests of (hugemax+1) OOM,
  for which tests already exist.
* Miscellaneous bitmap refactoring. (Jason Evans, 2016-02-26; 1 file, -9/+13)
* Cast PTRDIFF_MAX to size_t before adding 1. (Jason Evans, 2016-02-26; 3 files, -10/+10)
  This fixes compilation warnings regarding integer overflow that were
  introduced by 0c516a00c4cb28cff55ce0995f756b5aae074c9e (Make *allocx()
  size class overflow behavior defined.).
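  The pattern in isolation:

  ```c
  #include <stddef.h>
  #include <stdint.h>

  /* PTRDIFF_MAX + 1 overflows as signed arithmetic and draws a warning;
   * casting to size_t first makes the addition well-defined unsigned
   * arithmetic. */
  static const size_t just_too_big = (size_t)PTRDIFF_MAX + 1;
  ```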
* Make *allocx() size class overflow behavior defined. (Jason Evans, 2016-02-25; 3 files, -2/+137)
  Limit supported size and alignment to HUGE_MAXCLASS, which in turn is now
  limited to be less than PTRDIFF_MAX. This resolves #278 and #295.
* Silence miscellaneous 64-to-32-bit data loss warnings. (Jason Evans, 2016-02-24; 2 files, -7/+7)
* Make opt_narenas unsigned rather than size_t. (Jason Evans, 2016-02-24; 1 file, -1/+1)
* Fix Windows build issues (Dmitri Smirnov, 2016-02-24; 2 files, -4/+0)
  This resolves #333.
* Remove rbt_nil (Dave Watson, 2016-02-24; 1 file, -19/+22)
  Since this is an intrusive tree, rbt_nil is the whole size of the node and
  can be quite large. For example, miscelm is ~100 bytes.
* Use table lookup for run_quantize_{floor,ceil}(). (Jason Evans, 2016-02-23; 1 file, -10/+2)
  Reduce run quantization overhead by generating lookup tables during
  bootstrapping, and using the tables for all subsequent run quantization.
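  A hedged sketch of the table-lookup idea (the table size, page size, and
  quantization function below are placeholders, not the real run size
  classes):

  ```c
  #include <stddef.h>

  #define PAGE   ((size_t)4096)  /* assumption: 4 KiB pages */
  #define NTABLE ((size_t)1024)  /* placeholder: covers runs up to 4 MiB */

  static size_t run_quantize_floor_tab[NTABLE];

  /* Placeholder for the comparatively expensive computed quantization. */
  static size_t
  run_quantize_floor_compute(size_t size)
  {
      return size & ~(PAGE - 1);  /* stand-in only */
  }

  /* Fill the table once during bootstrapping... */
  static void
  run_quantize_boot(void)
  {
      for (size_t i = 1; i < NTABLE; i++)
          run_quantize_floor_tab[i] = run_quantize_floor_compute(i * PAGE);
  }

  /* ...so that each subsequent quantization is a single lookup. */
  static size_t
  run_quantize_floor(size_t size)
  {
      return run_quantize_floor_tab[size / PAGE];
  }
  ```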
* Test run quantization. (Jason Evans, 2016-02-22; 1 file, -0/+157)
  Also rename run_quantize_*() to improve clarity. These tests demonstrate
  that run_quantize_ceil() is flawed.