path: root/test/unit/junk.c
Each entry: commit message (author, date, files changed, lines -/+)
* Pull out arena_bin_info_t and arena_bin_t into their own file. (David T. Goldblatt, 2017-12-19, 1 file, -1/+1)
  In the process, kill arena_bin_index, which is unused. To follow are several diffs continuing this separation.
* Test with background_thread:true. (Jason Evans, 2017-06-01, 1 file, -4/+7)
  Add testing for background_thread:true, and condition a xallocx() --> rallocx() escalation assertion to allow for spurious in-place rallocx() following xallocx() failure.
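  A minimal sketch of the xallocx() --> rallocx() escalation pattern the test exercises, assuming jemalloc's non-standard API; the helper name grow_or_move() is illustrative and not from the commit.

      #include <jemalloc/jemalloc.h>
      #include <stddef.h>

      /* Try to grow in place first; escalate to rallocx() only if xallocx()
       * cannot reach the requested size.  rallocx() may itself happen to
       * resize in place, which is the spurious case the assertion now allows. */
      static void *
      grow_or_move(void *p, size_t new_size) {
          if (xallocx(p, new_size, 0, 0) >= new_size) {
              return p;                    /* grown in place */
          }
          return rallocx(p, new_size, 0);  /* may move the allocation */
      }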
* Header refactoring: move util.h out of the catchall (David Goldblatt, 2017-04-19, 1 file, -0/+2)
* Use MALLOC_CONF rather than malloc_conf for tests. (Jason Evans, 2017-02-23, 1 file, -8/+0)
  malloc_conf does not reliably work with MSVC, which complains of "inconsistent dll linkage", i.e. its inability to support the application overriding malloc_conf when dynamically linking/loading. Work around this limitation by adding test harness support for per-test shell script sourcing, and converting all tests to use MALLOC_CONF instead of malloc_conf.
* Remove extraneous parens around return arguments. (Jason Evans, 2017-01-21, 1 file, -2/+2)
  This resolves #540.
* Update brace style. (Jason Evans, 2017-01-21, 1 file, -19/+14)
  Add braces around single-line blocks, and remove line breaks before function-opening braces. This resolves #537.
* Remove leading blank lines from function bodies. (Jason Evans, 2017-01-13, 1 file, -5/+0)
  This resolves #535.
* Make dss operations lockless. (Jason Evans, 2016-10-13, 1 file, -2/+2)
  Rather than protecting dss operations with a mutex, use atomic operations. This has negligible impact on synchronization overhead during typical dss allocation, but is a substantial improvement for extent_in_dss() and the newly added extent_dss_mergeable(), which can be called multiple times during extent deallocations. This change also has the advantage of avoiding tsd in deallocation paths associated with purging, which resolves potential deadlocks during thread exit due to attempted tsd resurrection. This resolves #425.
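  A hedged sketch of the lockless idea described above: the dss bounds are published with atomic stores, so an extent_in_dss()-style range check needs only atomic loads rather than a mutex. Variable and function names here are illustrative, not jemalloc's internals.

      #include <stdatomic.h>
      #include <stdbool.h>

      static _Atomic(void *) dss_base;   /* lowest address obtained via sbrk */
      static _Atomic(void *) dss_max;    /* current dss frontier */

      static bool
      ptr_in_dss(const void *ptr) {
          /* Acquire loads pair with the release stores made when the dss grows. */
          char *base = atomic_load_explicit(&dss_base, memory_order_acquire);
          char *max = atomic_load_explicit(&dss_max, memory_order_acquire);
          return (const char *)ptr >= base && (const char *)ptr < max;
      }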
* Remove all vestiges of chunks. (Jason Evans, 2016-10-12, 1 file, -1/+1)
  Remove mallctls:
  - opt.lg_chunk
  - stats.cactive
  This resolves #464.
* Remove a stray memset(), and fix a junk filling test regression. (Jason Evans, 2016-06-06, 1 file, -5/+19)
* Rename huge to large. (Jason Evans, 2016-06-06, 1 file, -8/+8)
* Use huge size class infrastructure for large size classes. (Jason Evans, 2016-06-06, 1 file, -80/+11)
* Initialize arena_bin_info at compile time rather than at boot time. (Jason Evans, 2016-05-13, 1 file, -1/+1)
  This resolves #370.
* Remove redzone support. (Jason Evans, 2016-05-13, 1 file, -46/+2)
  This resolves #369.
* Remove quarantine support. (Jason Evans, 2016-05-13, 1 file, -1/+1)
* Resolve bootstrapping issues when embedded in FreeBSD libc. (Jason Evans, 2016-05-11, 1 file, -2/+2)
  b2c0d6322d2307458ae2b28545f8a5c9903d7ef5 (Add witness, a simple online locking validator.) caused a broad propagation of tsd throughout the internal API, but tsd_fetch() was designed to fail prior to tsd bootstrapping. Fix this by splitting tsd_t into non-nullable tsd_t and nullable tsdn_t, and modifying all internal APIs that do not critically rely on tsd to take nullable pointers. Furthermore, add the tsd_booted_get() function so that tsdn_fetch() can probe whether tsd bootstrapping is complete and return NULL if not. All dangerous conversions of nullable pointers are tsdn_tsd() calls that assert-fail on invalid conversion.
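  A rough sketch of the nullable-tsd split described above, reusing the names from the commit message (tsd_booted_get(), tsd_fetch(), tsdn_fetch(), tsdn_tsd()); the types and bodies are heavily simplified stand-ins, not jemalloc's internal definitions.

      #include <assert.h>
      #include <stdbool.h>
      #include <stddef.h>

      /* Simplified stand-ins; the real tsd_t carries per-thread state. */
      typedef struct tsd_s { int depth; } tsd_t;
      typedef tsd_t tsdn_t;           /* "nullable tsd": pointer may be NULL */

      static bool tsd_booted = false;
      static tsd_t tsd_instance;

      static bool tsd_booted_get(void) { return tsd_booted; }
      static tsd_t *tsd_fetch(void) { assert(tsd_booted); return &tsd_instance; }

      /* Probe whether tsd bootstrapping is complete; return NULL if not. */
      static tsdn_t *
      tsdn_fetch(void) {
          if (!tsd_booted_get()) {
              return NULL;
          }
          return tsd_fetch();
      }

      /* The only conversion back to non-nullable tsd_t asserts on NULL. */
      static tsd_t *
      tsdn_tsd(tsdn_t *tsdn) {
          assert(tsdn != NULL);
          return tsdn;
      }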
* Fix tsd bootstrapping for a0malloc(). (Jason Evans, 2016-05-07, 1 file, -1/+0)
* Add witness, a simple online locking validator. (Jason Evans, 2016-04-14, 1 file, -2/+2)
  This resolves #358.
* Clean up char vs. uint8_t in junk filling code. (Jason Evans, 2016-04-11, 1 file, -8/+8)
  Consistently use uint8_t rather than char for junk filling code.
* Add JEMALLOC_ALLOC_JUNK and JEMALLOC_FREE_JUNK macros (Chris Peterson, 2016-03-31, 1 file, -3/+3)
  Replace hardcoded 0xa5 and 0x5a junk values with JEMALLOC_ALLOC_JUNK and JEMALLOC_FREE_JUNK macros, respectively.
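  A brief sketch of what such named constants look like; the 0xa5/0x5a values come from the commit text, while the exact definitions in jemalloc's headers may differ.

      #include <stdint.h>

      /* Byte written into newly allocated memory when junk filling is enabled. */
      #define JEMALLOC_ALLOC_JUNK ((uint8_t)0xa5)
      /* Byte written into memory as it is freed. */
      #define JEMALLOC_FREE_JUNK  ((uint8_t)0x5a)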
* Rename arena_maxclass to large_maxclass. (Jason Evans, 2015-09-12, 1 file, -6/+6)
  arena_maxclass is no longer an appropriate name, because arenas also manage huge allocations.
* Fix assertion in test. (Jason Evans, 2015-08-12, 1 file, -1/+1)
* Fix MinGW-related portability issues. (Jason Evans, 2015-07-23, 1 file, -13/+13)
  Create and use FMT* macros that are equivalent to the PRI* macros that inttypes.h defines. This allows uniform use of the Unix-specific format specifiers, e.g. "%zu", as well as avoiding Windows-specific definitions of e.g. PRIu64. Add ffs()/ffsl() support for compiling with gcc. Extract compatibility definitions of ENOENT, EINVAL, EAGAIN, EPERM, ENOMEM, and ENORANGE into include/msvc_compat/windows_extra.h and use the file for tests as well as for core jemalloc code.
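  A hedged illustration of the FMT* idea only (these are not jemalloc's exact definitions): wrap the platform difference once so call sites can always write, e.g., "%"FMTu64.

      #include <inttypes.h>
      #include <stdio.h>

      /* Illustrative only: choose a 64-bit format spelling per platform. */
      #ifdef _WIN32
      #  define FMTu64 "I64u"      /* MSVC/MinGW printf spelling */
      #else
      #  define FMTu64 PRIu64      /* inttypes.h on Unix-like systems */
      #endif

      int
      main(void) {
          uint64_t nmalloc = 42;
          printf("nmalloc: %"FMTu64"\n", nmalloc);
          return 0;
      }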
* Fix more MinGW build warnings. (Jason Evans, 2015-07-18, 1 file, -13/+13)
* Introduce two new modes of junk filling: "alloc" and "free". (Guilherme Goncalves, 2014-12-15, 1 file, -15/+26)
  In addition to true/false, opt.junk can now be either "alloc" or "free", giving applications the possibility of junking memory only on allocation or deallocation. This resolves #172.
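  A hedged sketch of how a program or test can observe which mode is active: after this change "opt.junk" is a string option readable through mallctl(), reporting "true", "false", "alloc", or "free".

      #include <jemalloc/jemalloc.h>
      #include <stdio.h>

      int
      main(void) {
          const char *junk;
          size_t sz = sizeof(junk);
          /* Read-only option; set at startup, e.g. MALLOC_CONF="junk:alloc". */
          if (mallctl("opt.junk", (void *)&junk, &sz, NULL, 0) == 0) {
              printf("opt.junk = %s\n", junk);
          }
          return 0;
      }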
* Use regular arena allocation for huge tree nodes. (Daniel Micay, 2014-10-08, 1 file, -7/+20)
  This avoids grabbing the base mutex, as a step towards fine-grained locking for huge allocations. The thread cache also provides a tiny (~3%) improvement for serial huge allocations.
* Normalize size classes. (Jason Evans, 2014-10-06, 1 file, -3/+14)
  Normalize size classes to use the same number of size classes per size doubling (currently hard coded to 4), across the entire range of size classes. Small size classes already used this spacing, but in order to support this change, additional small size classes now fill [4 KiB .. 16 KiB). Large size classes range from [16 KiB .. 4 MiB). Huge size classes now support non-multiples of the chunk size in order to fill (4 MiB .. 16 MiB).
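  A small worked illustration of the "4 size classes per size doubling" spacing, assuming nothing beyond the arithmetic in the commit message: within each doubling [2^n, 2^(n+1)), classes are spaced 2^(n-2) apart, so [4 KiB, 8 KiB) holds 4, 5, 6, and 7 KiB.

      #include <stdio.h>

      int
      main(void) {
          /* Small classes added by this change fill [4 KiB .. 16 KiB). */
          for (size_t base = 4096; base < 16384; base *= 2) {
              size_t delta = base / 4;   /* 4 classes per doubling */
              for (int i = 0; i < 4; i++) {
                  printf("%zu\n", base + (size_t)i * delta);
              }
          }
          return 0;
      }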
* Refactor huge allocation to be managed by arenas. (Jason Evans, 2014-05-16, 1 file, -6/+3)
  Refactor huge allocation to be managed by arenas (though the global red-black tree of huge allocations remains for lookup during deallocation). This is the logical conclusion of recent changes that 1) made per arena dss precedence apply to huge allocation, and 2) made it possible to replace the per arena chunk allocation/deallocation functions. Remove the top level huge stats, and replace them with per arena huge stats. Normalize function names and types to *dalloc* (some were *dealloc*). Remove the --enable-mremap option. As jemalloc currently operates, this is a performance regression for some applications, but planned work to logarithmically space huge size classes should provide similar amortized performance. The motivation for this change was that mremap-based huge reallocation forced leaky abstractions that prevented refactoring.
* Fix message formatting errors uncovered by p_test_fail() refactoring. (Jason Evans, 2014-03-30, 1 file, -1/+1)
* Fix junk filling for mremap(2)-based huge reallocation. (Jason Evans, 2014-02-25, 1 file, -3/+6)
  If mremap(2) is used for huge reallocation, physical pages are mapped to new virtual addresses rather than data being copied to new pages. This bypasses the normal junk filling that would happen during allocation, so add junk filling that is specific to this case.
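  A hedged sketch of the fix's shape: when mremap(2) remaps pages instead of copying, the newly exposed tail never passes through the allocation path, so the alloc-junk pattern (0xa5) has to be written explicitly. The helper name is illustrative, not from the commit.

      #include <stdint.h>
      #include <string.h>

      /* After an in-kernel remap grows a huge allocation from oldsize to
       * newsize, fill the grown tail with the allocation junk byte. */
      static void
      huge_junk_grown_tail(void *ptr, size_t oldsize, size_t newsize) {
          if (newsize > oldsize) {
              memset((uint8_t *)ptr + oldsize, 0xa5, newsize - oldsize);
          }
      }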
* Add junk/zero filling unit tests, and fix discovered bugs. (Jason Evans, 2014-01-08, 1 file, -0/+219)
  Fix growing large reallocation to junk fill new space. Fix huge deallocation to junk fill when munmap is disabled.
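  A minimal sketch of the kind of check such a junk-filling unit test performs, assuming jemalloc's non-standard API and junk filling enabled at startup (e.g. MALLOC_CONF="junk:true"); this is not the actual contents of test/unit/junk.c.

      #include <jemalloc/jemalloc.h>
      #include <assert.h>
      #include <stdint.h>

      /* With junk filling on, freshly allocated (non-zeroed) memory should
       * carry the allocation junk pattern (0xa5). */
      static void
      check_alloc_junk(size_t size) {
          uint8_t *p = mallocx(size, 0);
          assert(p != NULL);
          for (size_t i = 0; i < size; i++) {
              assert(p[i] == 0xa5);
          }
          dallocx(p, 0);
      }

      int
      main(void) {
          check_alloc_junk(42);
          check_alloc_junk(4096);
          return 0;
      }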