path: root/test/unit
Commit message | Author | Age | Files | Lines
...
* Rename the arenas.extend mallctl to arenas.create.  (Jason Evans, 2017-01-07, 3 files, -8/+8)

* Add MALLCTL_ARENAS_ALL.  (Jason Evans, 2017-01-07, 1 file, -0/+8)
  Add the MALLCTL_ARENAS_ALL cpp macro as a fixed index for use in accessing the arena.<i>.{purge,decay,dss} and stats.arenas.<i>.* mallctls, and deprecate access via the arenas.narenas index (to be removed in 6.0.0).
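  As an illustration of the fixed index, here is a minimal sketch (assuming a jemalloc recent enough to ship MALLCTL_ARENAS_ALL; the purge target and error handling are just for demonstration):

```c
#include <stddef.h>
#include <jemalloc/jemalloc.h>

/* Purge unused dirty pages of every arena via the fixed index. */
int
purge_all_arenas(void) {
    size_t mib[3];
    size_t miblen = sizeof(mib) / sizeof(size_t);

    /* Translate the name once, then patch in the "all arenas" index. */
    if (mallctlnametomib("arena.0.purge", mib, &miblen) != 0) {
        return -1;
    }
    mib[1] = MALLCTL_ARENAS_ALL; /* instead of a specific arena index */
    return mallctlbymib(mib, miblen, NULL, NULL, NULL, 0);
}
```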
* Implement per arena base allocators.  (Jason Evans, 2016-12-27, 1 file, -0/+274)
  Add/rename related mallctls:
  - Add stats.arenas.<i>.base.
  - Rename stats.arenas.<i>.metadata to stats.arenas.<i>.internal.
  - Add stats.arenas.<i>.resident.
  Modify the arenas.extend mallctl to take an optional (extent_hooks_t *) argument so that it is possible for all base allocations to be serviced by the specified extent hooks.
  This resolves #463.
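  A hedged sketch of passing extent hooks through this mallctl, written against the arenas.create name the control acquires in the rename above; my_hooks is a hypothetical, fully populated extent_hooks_t:

```c
#include <limits.h>
#include <stddef.h>
#include <jemalloc/jemalloc.h>

extern extent_hooks_t my_hooks; /* hypothetical custom hooks */

/* Create a new arena whose base allocations go through my_hooks. */
unsigned
create_hooked_arena(void) {
    unsigned arena_ind;
    size_t sz = sizeof(arena_ind);
    extent_hooks_t *hooks = &my_hooks;

    if (mallctl("arenas.create", (void *)&arena_ind, &sz,
        (void *)&hooks, sizeof(hooks)) != 0) {
        return UINT_MAX; /* arbitrary failure sentinel */
    }
    return arena_ind;
}
```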
* Add huge page configuration and pages_[no]huge().  (Jason Evans, 2016-12-27, 1 file, -0/+30)
  Add the --with-lg-hugepage configure option, but automatically configure LG_HUGEPAGE even if it isn't specified. Add the pages_[no]huge() functions, which toggle huge page state via madvise(..., MADV_[NO]HUGEPAGE) calls.
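  A minimal sketch of the madvise()-based toggling described above (names are illustrative stand-ins; the real pages_[no]huge() implementations also deal with alignment and additional platforms):

```c
#include <stdbool.h>
#include <stddef.h>
#include <sys/mman.h>

/* Request transparent huge pages for a region. */
static bool
my_pages_huge(void *addr, size_t size) {
#ifdef MADV_HUGEPAGE
    return (madvise(addr, size, MADV_HUGEPAGE) == 0);
#else
    (void)addr; (void)size;
    return false;
#endif
}

/* Revoke the huge page hint for a region. */
static bool
my_pages_nohuge(void *addr, size_t size) {
#ifdef MADV_NOHUGEPAGE
    return (madvise(addr, size, MADV_NOHUGEPAGE) == 0);
#else
    (void)addr; (void)size;
    return false;
#endif
}
```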
* Simplify arena_slab_regind().  (Jason Evans, 2016-12-23, 1 file, -0/+35)
  Rewrite arena_slab_regind() to provide sufficient constant data for the compiler to perform division strength reduction. This replaces more general manual strength reduction that was implemented before arena_bin_info was compile-time-constant. It would be possible to slightly improve on the compiler-generated division code by taking advantage of range limits that the compiler doesn't know about.
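  To see why constant data matters, compare the two illustrative helpers below: with a compile-time-constant divisor the compiler strength-reduces the division to a multiply-and-shift sequence, whereas a runtime divisor forces an actual divide instruction.

```c
#include <stddef.h>

/* Constant divisor: strength-reduced by the compiler. */
static inline size_t
regind_const(size_t diff) {
    return diff / 48; /* 48 stands in for a fixed region size */
}

/* Runtime divisor: the compiler must emit a real divide. */
static inline size_t
regind_variable(size_t diff, size_t interval) {
    return diff / interval;
}
```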
* Add a_type parameter to qr_{meld,split}().  (Jason Evans, 2016-12-13, 1 file, -6/+6)

* Uniformly cast mallctl[bymib]() oldp/newp arguments to (void *).  (Jason Evans, 2016-11-15, 1 file, -6/+6)
  This avoids warnings in some cases, and is otherwise generally good hygiene.
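  The convention in question looks like this sketch (thread.tcache.enabled is just a convenient read-write control for demonstration):

```c
#include <stdbool.h>
#include <stddef.h>
#include <jemalloc/jemalloc.h>

/* oldp and newp are void *, so typed locals get explicit casts at the
 * call site rather than relying on implicit conversion. */
void
toggle_thread_cache(bool enable) {
    bool old;
    size_t sz = sizeof(old);

    mallctl("thread.tcache.enabled", (void *)&old, &sz,
        (void *)&enable, sizeof(enable));
}
```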
* Add packing test, which verifies stable layout policy.  (Jason Evans, 2016-11-15, 1 file, -0/+167)

* Fix test_prng_lg_range_zu() to work on 32-bit systems.  (Jason Evans, 2016-11-07, 1 file, -10/+10)

* Rename atomic_*_{uint32,uint64,u}() to atomic_*_{u32,u64,zu}().  (Jason Evans, 2016-11-07, 1 file, -12/+12)
  This change conforms to naming conventions throughout the codebase.

* Refactor prng to not use 64-bit atomics on 32-bit platforms.  (Jason Evans, 2016-11-07, 1 file, -22/+187)
  This resolves #495.
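  A hedged sketch of the underlying technique: advance the PRNG state with a native-width compare-and-swap, so 32-bit platforms never need 64-bit atomics. The LCG constants here are the classic rand(3) ones, not jemalloc's:

```c
#include <stdatomic.h>
#include <stddef.h>

/* size_t is 32 bits on 32-bit platforms, so this CAS loop never
 * requires a 64-bit atomic operation. */
static size_t
prng_next_zu(_Atomic size_t *state) {
    size_t old = atomic_load_explicit(state, memory_order_relaxed);
    size_t next;

    do {
        next = old * 1103515245u + 12345u; /* illustrative LCG step */
    } while (!atomic_compare_exchange_weak_explicit(state, &old, next,
        memory_order_relaxed, memory_order_relaxed));
    return next;
}
```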
* Fix psz/pind edge cases.  (Jason Evans, 2016-11-04, 1 file, -15/+17)
  Add an "over-size" extent heap in which to store extents which exceed the maximum size class (plus cache-oblivious padding, if enabled). Remove psz2ind_clamp() and use psz2ind() instead so that trying to allocate the maximum size class can in principle succeed. In practice, this allows assertions to hold so that OOM errors can be successfully generated.
* Fix long spinning in rtree_node_init().  (Dave Watson, 2016-11-03, 1 file, -0/+2)
  rtree_node_init() spinlocks the node, allocates, and then sets the node. This is under heavy contention at the top of the tree if many threads start to allocate at the same time. Instead, take a per-rtree sleeping mutex to reduce spinning. Tested both pthreads mutexes and OS X OSSpinLock; both reduce spinning adequately.
  Previous benchmark time: ./ttest1 500 100  ~15s
  New benchmark time:      ./ttest1 500 100  0.57s
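  A hedged sketch of the pattern (double-checked initialization under a sleeping mutex; types and names are illustrative, not jemalloc's internals):

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdlib.h>

typedef struct node_s { struct node_s *child[2]; } node_t;

typedef struct {
    pthread_mutex_t init_lock; /* per-tree sleeping mutex */
    _Atomic(node_t *) root;
} rtree_t;

node_t *
rtree_root_get(rtree_t *rtree) {
    node_t *node = atomic_load_explicit(&rtree->root, memory_order_acquire);
    if (node != NULL) {
        return node; /* fast path: already initialized */
    }
    pthread_mutex_lock(&rtree->init_lock); /* sleep instead of spinning */
    node = atomic_load_explicit(&rtree->root, memory_order_relaxed);
    if (node == NULL) {
        node = calloc(1, sizeof(*node)); /* stand-in for the real allocator */
        atomic_store_explicit(&rtree->root, node, memory_order_release);
    }
    pthread_mutex_unlock(&rtree->init_lock);
    return node;
}
```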
* Call _exit(2) rather than exit(3) in forked child.  (Jason Evans, 2016-11-03, 1 file, -1/+1)
  _exit(2) is async-signal-safe, whereas exit(3) is not.
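  The pattern in test code looks roughly like this sketch (run_in_child is a hypothetical helper):

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Run fn in a forked child; the child leaves via async-signal-safe
 * _exit(2), so no atexit(3) handlers or stdio flushing run on state
 * inherited from a possibly multi-threaded parent. */
int
run_in_child(void (*fn)(void)) {
    pid_t pid = fork();
    if (pid == -1) {
        return -1;
    }
    if (pid == 0) {
        fn();
        _exit(0); /* not exit(3) */
    }
    int status;
    waitpid(pid, &status, 0);
    return status;
}
```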
* Uniformly cast mallctl[bymib]() oldp/newp arguments to (void *).  (Jason Evans, 2016-10-28, 12 files, -235/+263)
  This avoids warnings in some cases, and is otherwise generally good hygiene.

* Explicitly cast negative constants meant for use as unsigned.  (Jason Evans, 2016-10-28, 1 file, -3/+5)

* Add cast to silence (harmless) conversion warning.  (Jason Evans, 2016-10-28, 1 file, -1/+1)

* Do not (recursively) allocate within tsd_fetch().  (Jason Evans, 2016-10-21, 2 files, -24/+24)
  Refactor tsd so that tsdn_fetch() does not trigger allocation, since allocation could cause infinite recursion.
  This resolves #458.

* Make dss operations lockless.  (Jason Evans, 2016-10-13, 1 file, -2/+2)
  Rather than protecting dss operations with a mutex, use atomic operations. This has negligible impact on synchronization overhead during typical dss allocation, but is a substantial improvement for extent_in_dss() and the newly added extent_dss_mergeable(), which can be called multiple times during extent deallocations. This change also has the advantage of avoiding tsd in deallocation paths associated with purging, which resolves potential deadlocks during thread exit due to attempted tsd resurrection.
  This resolves #425.
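  A hedged sketch of the lockless shape of such a check: publish the dss (sbrk) region bounds with atomic stores so that range queries need no mutex. Names are illustrative, not jemalloc's internals:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

static _Atomic(void *) dss_base; /* lowest address obtained via sbrk */
static _Atomic(void *) dss_max;  /* current upper bound of the dss */

/* Mutex-free membership test against the published bounds. */
bool
my_extent_in_dss(void *addr) {
    void *base = atomic_load_explicit(&dss_base, memory_order_acquire);
    void *max = atomic_load_explicit(&dss_max, memory_order_acquire);

    return base != NULL && (uintptr_t)addr >= (uintptr_t)base &&
        (uintptr_t)addr < (uintptr_t)max;
}
```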
* Remove all vestiges of chunks.  (Jason Evans, 2016-10-12, 6 files, -43/+11)
  Remove mallctls:
  - opt.lg_chunk
  - stats.cactive
  This resolves #464.

* Remove ratio-based purging.  (Jason Evans, 2016-10-12, 2 files, -84/+1)
  Make decay-based purging the default (and only) mode. Remove associated mallctls:
  - opt.purge
  - opt.lg_dirty_mult
  - arena.<i>.lg_dirty_mult
  - arenas.lg_dirty_mult
  - stats.arenas.<i>.lg_dirty_mult
  This resolves #385.

* Fix decay tests to all adapt to nstime_monotonic().  (Jason Evans, 2016-10-11, 1 file, -6/+9)

* Do not advance decay epoch when time goes backwards.  (Jason Evans, 2016-10-11, 2 files, -2/+20)
  Instead, move the epoch backward in time. Additionally, add nstime_monotonic() and use it in debug builds to assert that time only goes backward if nstime_update() is using a non-monotonic time source.
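  In sketch form, the policy is (types and names illustrative; jemalloc tracks the epoch as an nstime_t):

```c
#include <stdint.h>

void
decay_epoch_advance(uint64_t *epoch_ns, uint64_t now_ns) {
    if (now_ns < *epoch_ns) {
        *epoch_ns = now_ns; /* time went backward: move the epoch back */
        return;
    }
    /* Normal path: advance the epoch and purge per elapsed time. */
}
```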
* Work around a weird pgi bug in test/unit/math.c.  (Elliot Ronaghan, 2016-06-08, 1 file, -0/+4)
  pgi fails to compile math.c, reporting that `-INFINITY` in `pt_norm_expected[]` is a "Non-constant" expression. A simplified version of this failure is:

```c
#include <math.h>

static double inf1, inf2 = INFINITY; // no complaints
static double inf3 = INFINITY;       // suddenly INFINITY is "Non-constant"

int main() { }
```

```sh
PGC-S-0074-Non-constant expression in initializer (t.c: 4)
```

  pgi errors on the declaration of inf3, and will compile fine if that line is removed. I've reported this bug to pgi, but in the meantime I just switched to using (DBL_MAX + DBL_MAX) to work around this bug.
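  The workaround amounts to an initializer pgi accepts as constant while still evaluating to +inf under IEEE-754 semantics (the variable name here is illustrative):

```c
#include <float.h>

/* Overflows to +inf; unlike INFINITY, pgi accepts this in a static
 * initializer. */
static const double my_inf = (DBL_MAX + DBL_MAX);
```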
* Remove a stray memset(), and fix a junk filling test regression.  (Jason Evans, 2016-06-06, 1 file, -5/+19)

* Add rtree lookup path caching.  (Jason Evans, 2016-06-06, 1 file, -35/+48)
  rtree-based extent lookups remain more expensive than chunk-based run lookups, but with this optimization the fast path slowdown is ~3 CPU cycles per metadata lookup (on Intel Core i7-4980HQ), versus ~11 cycles prior. The path caching speedup tends to degrade gracefully unless allocated memory is spread far apart (as is the case when using a mixture of sbrk() and mmap()).
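  A hedged sketch of the caching idea: remember the last leaf visited, and if the next key shares that leaf's key prefix, skip the tree walk entirely. Constants and names are illustrative:

```c
#include <stddef.h>
#include <stdint.h>

#define LEAF_BITS 9 /* illustrative: 2^9 slots per leaf */
#define LEAF_MASK ((((uintptr_t)1) << LEAF_BITS) - 1)

typedef struct {
    uintptr_t leafkey; /* key prefix shared by everything in the leaf */
    void **leaf;       /* most recently visited leaf node */
} rtree_cache_t;

void *slow_lookup(rtree_cache_t *cache, uintptr_t key); /* full tree walk */

void *
cached_lookup(rtree_cache_t *cache, uintptr_t key) {
    uintptr_t leafkey = key & ~LEAF_MASK;

    if (cache->leaf != NULL && cache->leafkey == leafkey) {
        return cache->leaf[key & LEAF_MASK]; /* fast path, no walk */
    }
    return slow_lookup(cache, key); /* slow path refreshes the cache */
}
```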
* Miscellaneous s/chunk/extent/ updates.  (Jason Evans, 2016-06-06, 1 file, -3/+3)

* s/chunk_lookup/extent_lookup/g, s/chunks_rtree/extents_rtree/g  (Jason Evans, 2016-06-06, 1 file, -1/+1)

* Rename huge to large.  (Jason Evans, 2016-06-06, 8 files, -81/+81)

* Move slabs out of chunks.  (Jason Evans, 2016-06-06, 3 files, -15/+16)

* Use huge size class infrastructure for large size classes.  (Jason Evans, 2016-06-06, 9 files, -434/+73)

* Implement cache-oblivious support for huge size classes.  (Jason Evans, 2016-06-03, 1 file, -12/+42)

* Replace extent_tree_szad_* with extent_heap_*.  (Jason Evans, 2016-06-03, 1 file, -0/+155)

* Dodge ivsalloc() assertion in test code.  (Jason Evans, 2016-06-03, 1 file, -1/+16)

* Add rtree element witnesses.  (Jason Evans, 2016-06-03, 2 files, -19/+25)

* Refactor rtree to always use base_alloc() for node allocation.  (Jason Evans, 2016-06-03, 1 file, -35/+82)

* Add element acquire/release capabilities to rtree.  (Jason Evans, 2016-06-03, 1 file, -34/+119)
  This makes it possible to acquire short-term "ownership" of rtree elements so that it is possible to read an extent pointer *and* read the extent's contents with a guarantee that the element will not be modified until the ownership is released. This is intended as a mechanism for resolving rtree read/write races rather than as a way to lock extents.
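  A hedged sketch of one way to implement such element ownership, using a lock bit carved out of the element's pointer-sized payload (illustrative; jemalloc's actual encoding may differ):

```c
#include <stdatomic.h>
#include <stdint.h>

/* One element: an extent pointer with a lock bit in its low bit. */
typedef struct {
    _Atomic uintptr_t bits;
} rtree_elm_t;

uintptr_t
rtree_elm_acquire(rtree_elm_t *elm) {
    uintptr_t unlocked;

    do {
        unlocked = atomic_load_explicit(&elm->bits,
            memory_order_relaxed) & ~(uintptr_t)1; /* expect lock bit clear */
    } while (!atomic_compare_exchange_weak_explicit(&elm->bits, &unlocked,
        unlocked | 1, memory_order_acquire, memory_order_relaxed));
    return unlocked; /* extent pointer, now owned by this thread */
}

void
rtree_elm_release(rtree_elm_t *elm) {
    atomic_fetch_and_explicit(&elm->bits, ~(uintptr_t)1,
        memory_order_release); /* clear the lock bit */
}
```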
* Rename extent_node_t to extent_t.  (Jason Evans, 2016-05-16, 1 file, -12/+13)

* Refactor runs_avail.  (Jason Evans, 2016-05-16, 1 file, -1/+1)
  Use pszind_t size classes rather than szind_t size classes, and always reserve space for NPSIZES elements. This removes unused heaps that are not multiples of the page size, and adds (currently) unused heaps for all huge size classes, with the immediate benefit that the size of arena_t allocations is constant (no longer dependent on chunk size).

* Implement pz2ind(), pind2sz(), and psz2u().  (Jason Evans, 2016-05-13, 1 file, -11/+83)
  These compute size classes and indices similarly to size2index(), index2size() and s2u(), respectively, but using the subset of size classes that are multiples of the page size. Note that pszind_t and szind_t are not interchangeable.
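  The documented relationships can be captured as assertions (declarations only; the implementations and the exact class spacing live in jemalloc's size-class machinery):

```c
#include <assert.h>
#include <stddef.h>

typedef unsigned pszind_t;
pszind_t pz2ind(size_t psz);   /* index of the page size class holding psz */
size_t pind2sz(pszind_t pind); /* size backing a page size class index */
size_t psz2u(size_t psz);      /* round psz up to the nearest page class */

/* Expected relationships for sizes within the supported range. */
static void
check_page_size_class(size_t psz) {
    assert(psz2u(psz) >= psz);                  /* rounding never shrinks */
    assert(pind2sz(pz2ind(psz)) == psz2u(psz)); /* index and size agree */
}
```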
* Initialize arena_bin_info at compile time rather than at boot time.  (Jason Evans, 2016-05-13, 1 file, -1/+1)
  This resolves #370.

* Implement BITMAP_INFO_INITIALIZER(nbits).  (Jason Evans, 2016-05-13, 1 file, -109/+296)
  This allows static initialization of bitmap_info_t structures.
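  Usage then looks like this sketch (assuming jemalloc's internal headers are on the include path, as in its own test suite):

```c
/* Internal header, reachable when building jemalloc's tests. */
#include "jemalloc/internal/bitmap.h"

#define TEST_NBITS 512
/* Lives in static storage; no bitmap_info_init() call needed at boot. */
static bitmap_info_t test_binfo = BITMAP_INFO_INITIALIZER(TEST_NBITS);
```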
* Remove redzone support.  (Jason Evans, 2016-05-13, 3 files, -48/+3)
  This resolves #369.

* Remove quarantine support.  (Jason Evans, 2016-05-13, 5 files, -113/+2)

* Remove Valgrind support.  (Jason Evans, 2016-05-13, 2 files, -3/+1)

* Resolve bootstrapping issues when embedded in FreeBSD libc.  (Jason Evans, 2016-05-11, 5 files, -88/+88)
  b2c0d6322d2307458ae2b28545f8a5c9903d7ef5 (Add witness, a simple online locking validator.) caused a broad propagation of tsd throughout the internal API, but tsd_fetch() was designed to fail prior to tsd bootstrapping. Fix this by splitting tsd_t into non-nullable tsd_t and nullable tsdn_t, and modifying all internal APIs that do not critically rely on tsd to take nullable pointers. Furthermore, add the tsd_booted_get() function so that tsdn_fetch() can probe whether tsd bootstrapping is complete and return NULL if not. All dangerous conversions of nullable pointers are tsdn_tsd() calls that assert-fail on invalid conversion.
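  In sketch form, the split behaves like this (tsdn_t is really a distinct wrapper type in jemalloc; the alias and body here are illustrative):

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct tsd_s tsd_t; /* non-nullable: only valid after bootstrap */
typedef tsd_t tsdn_t;       /* nullable alias for illustration */

bool tsd_booted_get(void); /* true once tsd bootstrapping has completed */
tsd_t *tsd_fetch(void);    /* must not be called before bootstrapping */

tsdn_t *
tsdn_fetch(void) {
    if (!tsd_booted_get()) {
        return NULL; /* safe during early bootstrap / libc embedding */
    }
    return tsd_fetch();
}
```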
* Fix tsd bootstrapping for a0malloc().  (Jason Evans, 2016-05-07, 3 files, -1/+24)

* Don't test fork() on Windows.  (Jason Evans, 2016-05-04, 1 file, -0/+6)

* Fix witness/fork() interactions.  (Jason Evans, 2016-04-26, 1 file, -3/+22)
  Fix witness to clear its list of owned mutexes in the child if platform-specific malloc_mutex code re-initializes mutexes rather than unlocking them.

* Fix fork()-related lock rank ordering reversals.  (Jason Evans, 2016-04-26, 1 file, -0/+39)