path: root/test
Commit message (Author, Date; files changed, lines -/+)
* Logging: log using the log var names directly. (David Goldblatt, 2017-07-24; 1 file, -2/+1)
    Currently we have to log by writing something like:

        static log_var_t log_a_b_c = LOG_VAR_INIT("a.b.c");
        log(log_a_b_c, "msg");

    This is sort of annoying. Let's just write:

        log("a.b.c", "msg");
* Logging: allow logging with empty varargs. (David Goldblatt, 2017-07-22; 1 file, -2/+14)
    Currently, the log macro requires at least one argument after the format
    string, because of the way the preprocessor handles varargs macros. We
    can hide some of that irritation by pushing the extra arguments into a
    varargs function.
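    A minimal sketch of that trick; the names here are illustrative, not
    jemalloc's actual internals:

        #include <stdarg.h>
        #include <stdio.h>

        /* The variadic function absorbs whatever follows the log name,
         * including nothing beyond the format string itself. */
        static void
        log_impl(const char *log_var_name, const char *format, ...) {
            va_list ap;

            fprintf(stderr, "%s: ", log_var_name);
            va_start(ap, format);
            vfprintf(stderr, format, ap);
            va_end(ap);
            fputc('\n', stderr);
        }

        /* A C99 "..." must receive at least one argument; folding the
         * format string into __VA_ARGS__ satisfies that even for a call
         * like log("a.b.c", "msg") with no further arguments. */
        #define log(name, ...) log_impl(name, __VA_ARGS__)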
* Add a logging facility. (David T. Goldblatt, 2017-07-21; 1 file, -0/+182)
    This sets up a hierarchical logging facility, so that we can add logging
    statements liberally, and turn them on in a fine-grained manner.
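    One illustrative reading of "hierarchical": enabling a name such as
    "a.b" also enables descendants like "a.b.c". The matching rule below is
    an assumption for the sketch, not necessarily jemalloc's exact
    semantics:

        #include <stdbool.h>
        #include <string.h>

        /* True iff `name` equals `enabled` or lies below it in the
         * dot-separated hierarchy. */
        static bool
        log_name_enabled(const char *enabled, const char *name) {
            size_t len = strlen(enabled);

            if (strncmp(enabled, name, len) != 0) {
                return false;
            }
            /* Match only at a '.' boundary, so "a.b" does not
             * accidentally enable "a.bc". */
            return name[len] == '\0' || name[len] == '.';
        }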
* Add a test of behavior under multi-threaded forking. (David Goldblatt, 2017-07-11; 1 file, -21/+87)
    Forking a multithreaded process is dangerous but allowed, so long as the
    child only executes async-signal-safe functions (e.g. exec). Add a test
    to ensure that we don't break this behavior.
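    A condensed sketch of the hazardous pattern the test covers (this is an
    illustration, not the actual test code):

        #include <pthread.h>
        #include <stdlib.h>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <unistd.h>

        /* Keep the allocator busy on another thread while we fork. */
        static void *
        churn(void *arg) {
            for (;;) {
                free(malloc(64));
            }
            return NULL;
        }

        int
        main(void) {
            pthread_t thd;
            pthread_create(&thd, NULL, churn, NULL);

            pid_t pid = fork();
            if (pid == 0) {
                /* Child: the churn thread may have held an allocator
                 * lock at fork time, so malloc() here could deadlock;
                 * execve() and _exit() are async-signal-safe. */
                char *argv[] = {"/bin/true", NULL};
                char *envp[] = {NULL};
                execve("/bin/true", argv, envp);
                _exit(1);
            }
            waitpid(pid, NULL, 0);
            return 0;
        }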
* Set reentrancy when invoking customized extent hooks. (Qi Wang, 2017-06-23; 1 file, -9/+6)
    Customized extent hooks may malloc / free, and thus trigger reentrancy.
    Support this behavior by setting reentrancy when invoking hook
    functions.
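    The extent_hooks_t table and the extent_alloc_t signature below are
    jemalloc's documented public API; the hook body is a contrived
    illustration of the reentrancy in question:

        #include <jemalloc/jemalloc.h>
        #include <stdbool.h>
        #include <stddef.h>
        #include <stdlib.h>

        /* Default hooks, saved via the "arena.<i>.extent_hooks" mallctl. */
        static extent_hooks_t *orig_hooks;

        static void *
        reentrant_alloc(extent_hooks_t *hooks, void *new_addr, size_t size,
            size_t alignment, bool *zero, bool *commit, unsigned arena_ind) {
            /* This malloc()/free() pair re-enters jemalloc from inside an
             * extent hook; it is exactly what the reentrancy support
             * makes safe. */
            free(malloc(64));
            return orig_hooks->alloc(orig_hooks, new_addr, size, alignment,
                zero, commit, arena_ind);
        }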
* Add alloc hook test in test/integration/extent. (Qi Wang, 2017-06-14; 1 file, -0/+3)
* Prevent background threads from running in post_reset(). (Qi Wang, 2017-06-12; 1 file, -5/+13)
    We look up freed extents for testing in post_reset. Take the
    background_thread lock so that the extents are not modified at the same
    time.
* Combine background_thread started / paused into state. (Qi Wang, 2017-06-12; 1 file, -1/+1)
* Make tsd no-cleanup during tsd reincarnation. (Qi Wang, 2017-06-07; 1 file, -2/+2)
    Since tsd cleanup isn't guaranteed when reincarnated, we set up tsd in a
    way that needs no cleanup, by making it go through the slow path
    instead.
* Test with background_thread:true. (Jason Evans, 2017-06-01; 1 file, -4/+7)
    Add testing for background_thread:true, and condition an xallocx() -->
    rallocx() escalation assertion to allow for spurious in-place rallocx()
    following xallocx() failure.
* Refactor/fix background_thread/percpu_arena bootstrapping. (Jason Evans, 2017-06-01; 1 file, -6/+6)
    Refactor bootstrapping such that dlsym() is called during the
    bootstrapping phase that can tolerate reentrant allocation.
* Skip default tcache testing if !opt_tcache. (Jason Evans, 2017-06-01; 1 file, -4/+4)
* Header refactoring: Pull size helpers out of jemalloc module. (David Goldblatt, 2017-05-31; 4 files, -97/+103)
* Header refactoring: unify and de-catchall extent_mmap module. (David Goldblatt, 2017-05-31; 1 file, -0/+1)
* Header refactoring: unify and de-catchall rtree module. (David Goldblatt, 2017-05-31; 3 files, -0/+6)
* Add the --disable-thp option to support cross compiling. (Jason Evans, 2017-05-30; 1 file, -1/+1)
    This resolves #669.
* Add test for excessive retained memory. (Jason Evans, 2017-05-30; 1 file, -0/+179)
* Make test/unit/background_thread not flaky. (Qi Wang, 2017-05-27; 1 file, -3/+5)
* Cleanup smoothstep.sh / .h. (Qi Wang, 2017-05-25; 1 file, -1/+1)
    h_step_sum was used to compute a moving sum; it is no longer in use.
* Header refactoring: unify and de-catchall witness code. (David Goldblatt, 2017-05-24; 1 file, -92/+80)
* Add tests for background threads. (Qi Wang, 2017-05-23; 2 files, -0/+118)
* Add background thread related stats. (Qi Wang, 2017-05-23; 1 file, -0/+36)
* Implementing opt.background_thread. (Qi Wang, 2017-05-23; 4 files, -3/+39)
    Added opt.background_thread to enable background threads, which
    currently handle purging. When enabled, decay ticks will not trigger
    purging (which will be left to the background threads). We limit the
    max number of threads to NCPUs. When percpu arena is enabled, we set
    CPU affinity for the background threads as well.

    The sleep interval of background threads is dynamic, determined by
    computing the number of pages to purge in the future (based on the
    backlog).
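    For reference, jemalloc 5 also documents a read/write background_thread
    mallctl, so the switch can be flipped at run time as well as via
    MALLOC_CONF=background_thread:true at startup; a minimal sketch:

        #include <jemalloc/jemalloc.h>
        #include <stdbool.h>

        /* Enable purging via background threads. Returns 0 on success. */
        static int
        enable_background_threads(void) {
            bool enable = true;
            return mallctl("background_thread", NULL, NULL, &enable,
                sizeof(enable));
        }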
* Protect the rtree/extent interactions with a mutex pool. (David Goldblatt, 2017-05-19; 1 file, -90/+3)
    Instead of embedding a lock bit in rtree leaf elements, we associate
    extents with a small set of mutexes. This gets us two things:

    - We can use the system mutexes. This (hypothetically) protects us from
      priority inversion, and lets us stop doing a backoff/sleep loop,
      instead opting for precise wakeups from the mutex.
    - It cuts down on the number of mutex acquisitions we have to do (from
      four in the worst case to two).

    We end up simplifying most of the rtree code (which no longer has to
    deal with locking or concurrency at all), at the cost of additional
    complexity in the extent code: since the mutex protecting an rtree leaf
    element is determined by reading the extent out of that element, the
    initial read is racy, so we may acquire an out-of-date mutex. We
    re-check the extent in the leaf after acquiring the mutex to protect us
    from this race.
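    The lock-then-recheck pattern described above, sketched with invented
    names (jemalloc's actual mutex pool and hash differ):

        #include <pthread.h>
        #include <stddef.h>
        #include <stdint.h>

        #define MUTEX_POOL_SIZE 256

        static pthread_mutex_t mutex_pool[MUTEX_POOL_SIZE];

        static void
        mutex_pool_boot(void) {
            for (size_t i = 0; i < MUTEX_POOL_SIZE; i++) {
                pthread_mutex_init(&mutex_pool[i], NULL);
            }
        }

        /* Map an extent pointer to its pool mutex. */
        static pthread_mutex_t *
        mutex_for(const void *extent) {
            return &mutex_pool[((uintptr_t)extent >> 4) % MUTEX_POOL_SIZE];
        }

        /* Lock the mutex guarding the extent currently stored in *leaf.
         * The first read is racy; re-check under the lock and retry if
         * the leaf changed in the meantime. */
        static void *
        lock_extent_of(void *volatile *leaf) {
            for (;;) {
                void *extent = *leaf;
                pthread_mutex_t *mtx = mutex_for(extent);
                pthread_mutex_lock(mtx);
                if (*leaf == extent) {
                    /* Caller later unlocks mutex_for(extent). */
                    return extent;
                }
                pthread_mutex_unlock(mtx);
            }
        }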
* Refactor *decay_time into *decay_ms. (Jason Evans, 2017-05-18; 4 files, -102/+98)
    Support millisecond resolution for decay times. Among other use cases
    this makes it possible to specify a short initial dirty-->muzzy decay
    phase, followed by a longer muzzy-->clean decay phase.

    This resolves #812.
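    A sketch of that use case via the documented arenas.dirty_decay_ms and
    arenas.muzzy_decay_ms mallctls, which set the defaults used by arenas
    created afterwards (the specific intervals are arbitrary examples):

        #include <jemalloc/jemalloc.h>
        #include <sys/types.h>

        static int
        set_decay_defaults(void) {
            ssize_t dirty_ms = 100;    /* dirty -> muzzy after 100 ms. */
            ssize_t muzzy_ms = 10000;  /* muzzy -> clean after 10 s. */
            int err = mallctl("arenas.dirty_decay_ms", NULL, NULL,
                &dirty_ms, sizeof(dirty_ms));
            if (err == 0) {
                err = mallctl("arenas.muzzy_decay_ms", NULL, NULL,
                    &muzzy_ms, sizeof(muzzy_ms));
            }
            return err;
        }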
* Header refactoring: tsd - cleanup and dependency breaking. (David Goldblatt, 2017-05-01; 1 file, -44/+33)
    This removes the tsd macros (which are used only for tsd_t in real
    builds). We break up the circular dependencies involving tsd.

    We also move all tsd access through getters and setters. This allows us
    to assert that we only touch data when tsd is in a valid state.

    We simplify the usages of the x macro trick, removing all the
    customizability (get/set, init, cleanup), moving the lifetime logic to
    tsd_init and tsd_cleanup. This lets us make initialization order
    independent of order within tsd_t.
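    The x-macro trick mentioned above, in reduced form; the field list is
    invented for the example, and jemalloc's real tsd_t has many more
    fields. One list expands into both the struct and its accessors, so
    they can never drift out of sync:

        #include <stdbool.h>

        #define TSD_FIELDS(O)                \
            O(bool,     in_hook)             \
            O(unsigned, reentrancy_level)

        typedef struct {
        #define O(type, name) type name;
            TSD_FIELDS(O)
        #undef O
        } tsd_t;

        #define O(type, name)                                           \
        static inline type tsd_##name##_get(tsd_t *tsd) {               \
            return tsd->name;                                           \
        }                                                               \
        static inline void tsd_##name##_set(tsd_t *tsd, type val) {     \
            tsd->name = val;                                            \
        }
        TSD_FIELDS(O)
        #undef O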
* Add extent_destroy_t and use it during arena destruction. (Jason Evans, 2017-04-29; 3 files, -1/+32)
    Add the extent_destroy_t extent destruction hook to extent_hooks_t, and
    use it during arena destruction. This hook explicitly communicates to
    the callee that the extent must be destroyed or tracked for later
    reuse, lest it be permanently leaked. Prior to this change, retained
    extents could unintentionally be leaked if extent retention was
    enabled.

    This resolves #560.
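    A hook sketch matching the documented extent_destroy_t signature; the
    body is a minimal illustration, assuming the extent is an ordinary
    private mapping:

        #include <jemalloc/jemalloc.h>
        #include <stdbool.h>
        #include <sys/mman.h>

        /* Unlike dalloc, destroy tells the callee the extent will never
         * be reused, so the mapping can be released unconditionally. */
        static void
        extent_destroy_impl(extent_hooks_t *hooks, void *addr, size_t size,
            bool committed, unsigned arena_ind) {
            munmap(addr, size);
        }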
* Refactor !opt.munmap to opt.retain. (Jason Evans, 2017-04-29; 2 files, -2/+2)
* Header refactoring: hash - unify and remove from catchall. (David Goldblatt, 2017-04-25; 1 file, -0/+1)
* Replace --disable-munmap with opt.munmap. (Jason Evans, 2017-04-25; 2 files, -3/+3)
    Control use of munmap(2) via a run-time option rather than a
    compile-time option (with the same per-platform default). The old
    behavior of --disable-munmap can be achieved with
    --with-malloc-conf=munmap:false.

    This partially resolves #580.
* Header refactoring: ticker module - remove from the catchall and unify. (David Goldblatt, 2017-04-24; 2 files, -0/+4)
* Get rid of most of the various inline macros. (David Goldblatt, 2017-04-24; 6 files, -90/+65)
* Output MALLOC_CONF and debug cmd when test failure happens. (Qi Wang, 2017-04-22; 1 file, -9/+10)
* Remove --disable-tls. (Jason Evans, 2017-04-21; 1 file, -1/+0)
    This option is no longer useful, because TLS is correctly configured
    automatically on all supported platforms.

    This partially resolves #580.
* Remove --disable-tcache. (Jason Evans, 2017-04-21; 7 files, -108/+69)
    Simplify configuration by removing the --disable-tcache option, but
    replace the testing for that configuration with
    --with-malloc-conf=tcache:false.

    Fix the thread.arena and thread.tcache.flush mallctls to work correctly
    if tcache is disabled.

    This partially resolves #580.
* Support --with-lg-page values larger than system page size. (Jason Evans, 2017-04-19; 2 files, -2/+2)
    All mappings continue to be PAGE-aligned, even if the system page size
    is smaller. This change is primarily intended to provide a mechanism
    for supporting multiple page sizes with the same binary; smaller page
    sizes work better in conjunction with jemalloc's design.

    This resolves #467.
* Revert "Remove BITMAP_USE_TREE."Jason Evans2017-04-191-0/+16
| | | | | | | | | Some systems use a native 64 KiB page size, which means that the bitmap for the smallest size class can be 8192 bits, not just 512 bits as when the page size is 4 KiB. Linear search in bitmap_{sfu,ffu}() is unacceptably slow for such large bitmaps. This reverts commit 7c00f04ff40a34627e31488d02ff1081c749c7ba.
* Header refactoring: unify nstime.h and move it out of the catch-all. (David Goldblatt, 2017-04-19; 1 file, -3/+1)
* Header refactoring: move util.h out of the catchall. (David Goldblatt, 2017-04-19; 3 files, -0/+6)
* Header refactoring: move bit_util.h out of the catchall. (David Goldblatt, 2017-04-19; 1 file, -0/+2)
* Track extent structure serial number (esn) in extent_t. (Jason Evans, 2017-04-17; 1 file, -2/+2)
    This enables stable sorting of extent_t structures.
* Pass alloc_ctx down profiling path. (Qi Wang, 2017-04-12; 1 file, -2/+2)
    With this change, when profiling is enabled, we avoid doing redundant
    rtree lookups. Also changed dalloc_ctx_t to alloc_ctx_t, as it's now
    used on the allocation path as well (to speed up profiling).
* Header refactoring: Split up jemalloc_internal.h. (David Goldblatt, 2017-04-11; 1 file, -3/+4)
    This is a biggy. jemalloc_internal.h has been doing multiple jobs for a
    while now:
    - The source of system-wide definitions.
    - The catch-all include file.
    - The module header file for jemalloc.c

    This commit splits up this functionality. The system-wide definitions
    responsibility has moved to jemalloc_preamble.h. The catch-all include
    file is now jemalloc_internal_includes.h. The module headers for
    jemalloc.c are now in jemalloc_internal_[externs|inlines|types].h, just
    as they are for the other modules.
* Header refactoring: break out ql.h dependencies. (David Goldblatt, 2017-04-11; 1 file, -0/+2)
* Header refactoring: break out qr.h dependencies. (David Goldblatt, 2017-04-11; 1 file, -0/+2)
* Header refactoring: break out rb.h dependencies. (David Goldblatt, 2017-04-11; 1 file, -0/+2)
* Header refactoring: break out ph.h dependencies. (David Goldblatt, 2017-04-11; 1 file, -0/+2)
* Add basic reentrancy-checking support, and allow arena_new to reenter. (David Goldblatt, 2017-04-07; 4 files, -28/+55)
    This checks whether or not we're reentrant using thread-local data,
    and, if we are, moves certain internal allocations to use arena 0
    (which should be properly initialized after bootstrapping).

    The immediate thing this allows is spinning up threads in arena_new,
    which will enable spinning up background threads there.
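    One possible shape of such a check, with all names invented for the
    illustration (jemalloc's actual counter lives in tsd, not a bare
    thread-local):

        #include <stdbool.h>

        /* Depth counter: nonzero means we are inside the allocator, so
         * internal allocations should be routed to arena 0. */
        static __thread unsigned reentrancy_level;

        static inline bool
        is_reentrant(void) {
            return reentrancy_level > 0;
        }

        static inline void
        pre_reentrant_call(void) {
            reentrancy_level++;
        }

        static inline void
        post_reentrant_call(void) {
            reentrancy_level--;
        }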
* Add hooking functionality. (David Goldblatt, 2017-04-07; 10 files, -9/+124)
    This allows us to hook chosen functions and do interesting things there
    (in particular: reentrancy checking).
* Integrate auto tcache into TSD. (Qi Wang, 2017-04-07; 1 file, -0/+6)
    The embedded tcache is initialized upon tsd initialization. The avail
    arrays for the tbins will be allocated / deallocated accordingly during
    init / cleanup.

    With this change, the pointer to the auto tcache will always be
    available, as long as we have access to the TSD. tcache_available()
    (called in tcache_get()) is provided to check if we should use tcache.
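    The shape of that lookup, with types and fields invented for the
    illustration: the tcache lives inside tsd, so a pointer to it is always
    obtainable, and a separate availability check decides whether it should
    actually be used.

        #include <stdbool.h>
        #include <stddef.h>

        typedef struct {
            bool tcache_enabled;
            /* ... avail arrays, tbins, etc. ... */
        } tcache_t;

        typedef struct {
            tcache_t tcache;    /* Embedded, initialized with tsd. */
        } tsd_t;

        static inline bool
        tcache_available(tsd_t *tsd) {
            return tsd->tcache.tcache_enabled;
        }

        static inline tcache_t *
        tcache_get(tsd_t *tsd) {
            if (!tcache_available(tsd)) {
                return NULL;
            }
            return &tsd->tcache;
        }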