path: root/src
Commit log, newest first. Each entry lists the commit message, followed by (author, date, files changed, lines -deleted/+added).
* Header refactoring: unify and de-catchall rtree module. (David Goldblatt, 2017-05-31, 5 files, -0/+5)
* Pass the O_CLOEXEC flag to open(2). (Jason Evans, 2017-05-31, 2 files, -4/+5)
    This resolves #528.
* Track background thread status separately at fork. (Qi Wang, 2017-05-31, 1 file, -3/+8)
    Use a separate boolean to track the enabled status, instead of leaving the global background thread status inconsistent.
* Output total_wait_ns for bin mutexes. (Qi Wang, 2017-05-31, 1 file, -19/+5)
* Explicitly say so when aborting on opt_abort_conf. (Qi Wang, 2017-05-31, 1 file, -2/+10)
* Add the --disable-thp option to support cross compiling. (Jason Evans, 2017-05-30, 2 files, -0/+4)
    This resolves #669.
* Fix npages during arena_decay_epoch_advance(). (Qi Wang, 2017-05-30, 1 file, -20/+14)
    We do not lock extents while advancing epoch. This change makes sure that we only read npages from extents once in order to avoid any inconsistency.
* Fix extent_grow_next management. (Jason Evans, 2017-05-30, 2 files, -150/+211)
    Fix management of extent_grow_next to serialize operations that may grow retained memory. This assures that the sizes of the newly allocated extents correspond to the size classes in the intended growth sequence.
    Fix management of extent_grow_next to skip size classes if a request is too large to be satisfied by the next size in the growth sequence. This avoids the potential for an arbitrary number of requests to bypass triggering extent_grow_next increases.
    This resolves #858.
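    The size-class skipping described in the entry above can be pictured with a short sketch. This is illustrative only; grow_sizes, grow_next, and grow_pick are hypothetical names, not jemalloc's internals, and the real code serializes updates with a mutex.

        #include <stddef.h>

        #define NGROW 8
        /* A hypothetical growth sequence of increasingly large extent sizes. */
        static const size_t grow_sizes[NGROW] = {
            1UL << 21, 1UL << 22, 1UL << 23, 1UL << 24,
            1UL << 25, 1UL << 26, 1UL << 27, 1UL << 28
        };
        /* Index of the next size class to use in the growth sequence. */
        static size_t grow_next = 0;

        static size_t
        grow_pick(size_t request) {
            size_t i = grow_next;
            /* Skip size classes that cannot satisfy this request
             * (capped at the largest class). */
            while (i + 1 < NGROW && grow_sizes[i] < request) {
                i++;
            }
            /* Advance past the class we are about to hand out, so later
             * requests continue the growth sequence from there. */
            grow_next = (i + 1 < NGROW) ? i + 1 : i;
            return grow_sizes[i];
        }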
* Fix OOM paths in extent_grow_retained(). (Jason Evans, 2017-05-30, 1 file, -2/+8)
* Add opt.stats_print_opts. (Qi Wang, 2017-05-29, 3 files, -43/+52)
    The value is passed to atexit(3)-triggered malloc_stats_print() calls.
* Added opt_abort_conf: abort on invalid config options. (Qi Wang, 2017-05-27, 3 files, -0/+22)
* Cleanup smoothstep.sh / .h. (Qi Wang, 2017-05-25, 1 file, -1/+1)
    h_step_sum was used to compute the moving sum; it is no longer in use.
* Fix stats.mapped during deallocation. (Qi Wang, 2017-05-24, 1 file, -1/+1)
* Header refactoring: unify and de-catchall mutex module (David Goldblatt, 2017-05-24, 12 files, -0/+24)
* Header refactoring: unify and de-catchall witness code. (David Goldblatt, 2017-05-24, 6 files, -74/+100)
* Do not assume dss never decreases. (Jason Evans, 2017-05-23, 1 file, -38/+34)
    An sbrk() caller outside jemalloc can decrease the dss, so add a separate atomic boolean to explicitly track whether jemalloc is concurrently calling sbrk(), rather than depending on state outside jemalloc's full control.
    This resolves #802.
* Do not hold the base mutex while calling extent hooks. (Jason Evans, 2017-05-23, 1 file, -0/+6)
    Drop the base mutex while allocating new base blocks, because extent allocation can enter code that prohibits holding non-core mutexes, e.g. the extent_[d]alloc() and extent_purge_forced_wrapper() calls in extent_alloc_dss().
    This partially resolves #802.
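    As a general illustration of the pattern in the entry above (drop a lock around a callback that may itself take other locks), here is a minimal pthreads sketch; base_block_alloc and extent_alloc_hook are made-up names, not jemalloc's base allocator API.

        #include <pthread.h>
        #include <stddef.h>

        static pthread_mutex_t base_mtx = PTHREAD_MUTEX_INITIALIZER;

        /* Placeholder for an extent hook that may acquire non-core locks. */
        extern void *extent_alloc_hook(size_t size);

        static void *
        base_block_alloc(size_t size) {
            /* Caller holds base_mtx; release it across the hook call so
             * the hook is free to take whatever locks it needs. */
            pthread_mutex_unlock(&base_mtx);
            void *block = extent_alloc_hook(size);
            pthread_mutex_lock(&base_mtx);
            return block;
        }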
* Fix # of unpurged pages in decay algorithm. (Qi Wang, 2017-05-23, 1 file, -10/+26)
    When the number of dirty pages moves below npages_limit (e.g. because pages are reused), we should not lower the number of unpurged pages, because that would double-count the reused pages in the backlog (and, as a result, decay would happen more slowly than it should). Instead, set the number of unpurged pages to the greater of the current npages and npages_limit.
    Also added an assertion: the ceiling number of pages should be greater than npages_limit.
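    A minimal sketch of the clamping rule described above; the names npages_current and npages_limit come from the description, not from jemalloc's source.

        #include <stddef.h>

        /* Return the unpurged-page count to record: never drop below
         * npages_limit, so reused dirty pages are not double-counted in
         * the decay backlog. */
        static size_t
        decay_nunpurged(size_t npages_current, size_t npages_limit) {
            return (npages_current > npages_limit) ? npages_current : npages_limit;
        }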
* Check for background thread inactivity on extents_dalloc. (Qi Wang, 2017-05-23, 2 files, -19/+46)
    To avoid background threads sleeping forever with idle arenas, we eagerly check background threads' sleep time after extents_dalloc, and signal the thread if necessary.
* Add profiling for the background thread mutex. (Qi Wang, 2017-05-23, 2 files, -0/+14)
* Add background thread related stats. (Qi Wang, 2017-05-23, 4 files, -21/+162)
* Implementing opt.background_thread. (Qi Wang, 2017-05-23, 6 files, -79/+814)
    Added opt.background_thread to enable background threads, which handle purging currently. When enabled, decay ticks will not trigger purging (which will be left to the background threads). We limit the max number of threads to NCPUs. When percpu arena is enabled, set CPU affinity for the background threads as well.
    The sleep interval of background threads is dynamic and determined by computing the number of pages to purge in the future (based on backlog).
* Protect the rtree/extent interactions with a mutex pool. (David Goldblatt, 2017-05-19, 3 files, -214/+160)
    Instead of embedding a lock bit in rtree leaf elements, we associate extents with a small set of mutexes. This gets us two things:
    - We can use the system mutexes. This (hypothetically) protects us from priority inversion, and lets us stop doing a backoff/sleep loop, instead opting for precise wakeups from the mutex.
    - Cuts down on the number of mutex acquisitions we have to do (from four in the worst case to two).
    We end up simplifying most of the rtree code (which no longer has to deal with locking or concurrency at all), at the cost of additional complexity in the extent code: since the mutex protecting the rtree leaf elements is determined by reading the extent out of those elements, the initial read is racy, so that we may acquire an out-of-date mutex. We re-check the extent in the leaf after acquiring the mutex to protect us from this race.
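    The read/lock/re-check dance described above can be sketched as follows. This is illustrative only: the pool size, the pointer hash, and the rtree_leaf_read() helper are assumptions, not jemalloc's actual interfaces.

        #include <pthread.h>
        #include <stdint.h>

        #define POOL_SIZE 256
        static pthread_mutex_t mutex_pool[POOL_SIZE];

        static void
        mutex_pool_init(void) {
            for (int i = 0; i < POOL_SIZE; i++) {
                pthread_mutex_init(&mutex_pool[i], NULL);
            }
        }

        /* Map an extent pointer to one of the pooled mutexes. */
        static pthread_mutex_t *
        extent_mutex(const void *extent) {
            uintptr_t h = (uintptr_t)extent;
            h ^= h >> 16;
            return &mutex_pool[h % POOL_SIZE];
        }

        /* Placeholder for the (racy) rtree leaf lookup. */
        extern void *rtree_leaf_read(const void *key);

        /* Lock the mutex covering the extent currently mapped at key.  The
         * first read is racy, so after taking the mutex we re-read the leaf
         * and retry if the extent changed in the meantime. */
        static pthread_mutex_t *
        extent_lock_from_key(const void *key, void **extent_out) {
            for (;;) {
                void *extent = rtree_leaf_read(key);
                pthread_mutex_t *mtx = extent_mutex(extent);
                pthread_mutex_lock(mtx);
                if (rtree_leaf_read(key) == extent) {
                    *extent_out = extent;
                    return mtx; /* caller unlocks when done */
                }
                pthread_mutex_unlock(mtx); /* stale read; retry */
            }
        }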
* Allow mutexes to take a lock ordering enum at construction. (David Goldblatt, 2017-05-19, 9 files, -27/+60)
    This lets us specify whether and how mutexes of the same rank are allowed to be acquired. Currently, we only allow two policies (only a single mutex at a given rank at a time, and mutexes acquired in ascending order), but we can plausibly allow more (e.g. a "release uncontended mutexes before blocking" policy).
* Refactor *decay_time into *decay_ms. (Jason Evans, 2017-05-18, 4 files, -143/+136)
    Support millisecond resolution for decay times. Among other use cases this makes it possible to specify a short initial dirty-->muzzy decay phase, followed by a longer muzzy-->clean decay phase.
    This resolves #812.
* Add stats: arena uptime. (Qi Wang, 2017-05-18, 3 files, -0/+25)
* Refactor (MALLOCX_ARENA_MAX + 1) to be MALLOCX_ARENA_LIMIT. (Jason Evans, 2017-05-14, 1 file, -5/+5)
    This resolves #673.
* Automatically generate private symbol name mangling macros. (Jason Evans, 2017-05-12, 1 file, -18/+29)
    Rather than using a manually maintained list of internal symbols to drive name mangling, add a compilation phase to automatically extract the list of internal symbols.
    This resolves #677.
* Stop depending on JEMALLOC_N() for function interception during testing. (Jason Evans, 2017-05-12, 6 files, -167/+51)
    Instead, always define function pointers for interceptable functions, but mark them const unless testing, so that the compiler can optimize out the pointer dereferences.
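    A minimal sketch of the "const unless testing" trick described above; TEST_BUILD, TEST_MUTABLE, and junk_free_hook are hypothetical names, not jemalloc's actual macros or symbols.

        #include <stddef.h>

        #ifdef TEST_BUILD
        #  define TEST_MUTABLE          /* tests may overwrite the pointer */
        #else
        #  define TEST_MUTABLE const    /* production: compiler can see through it */
        #endif

        static void
        junk_free_hook_impl(void *ptr, size_t usize) {
            (void)ptr;
            (void)usize;
            /* default behavior goes here */
        }

        /* Always defined as a function pointer, but const outside test builds,
         * so the compiler can optimize out the indirection. */
        void (* TEST_MUTABLE junk_free_hook)(void *, size_t) = junk_free_hook_impl;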
* Revert "Use trylock in tcache_bin_flush when possible."Qi Wang2017-05-011-123/+48
| | | | | This reverts commit 8584adc451f31adfc4ab8693d9189cf3a7e5d858. Production results not favorable. Will investigate separately.
* Header refactoring: tsd - cleanup and dependency breaking.David Goldblatt2017-05-015-39/+64
| | | | | | | | | | | | This removes the tsd macros (which are used only for tsd_t in real builds). We break up the circular dependencies involving tsd. We also move all tsd access through getters and setters. This allows us to assert that we only touch data when tsd is in a valid state. We simplify the usages of the x macro trick, removing all the customizability (get/set, init, cleanup), moving the lifetime logic to tsd_init and tsd_cleanup. This lets us make initialization order independent of order within tsd_t.
* Add extent_destroy_t and use it during arena destruction.Jason Evans2017-04-292-12/+55
| | | | | | | | | | Add the extent_destroy_t extent destruction hook to extent_hooks_t, and use it during arena destruction. This hook explicitly communicates to the callee that the extent must be destroyed or tracked for later reuse, lest it be permanently leaked. Prior to this change, retained extents could unintentionally be leaked if extent retention was enabled. This resolves #560.
* Refactor !opt.munmap to opt.retain.Jason Evans2017-04-297-14/+14
|
* Inline tcache_bin_flush_small_impl / _large_impl.Qi Wang2017-04-281-2/+2
|
* Use trylock in tcache_bin_flush when possible.Qi Wang2017-04-261-48/+123
| | | | | During tcache gc, use tcache_bin_try_flush_small / _large so that we can skip items with their bins locked already.
* Remove redundant extent lookup in tcache_bin_flush_large.Qi Wang2017-04-251-1/+1
|
* Avoid prof_dump during reentrancy. (Qi Wang, 2017-04-25, 1 file, -12/+20)
* Header refactoring: pages.h - unify and remove from catchall. (David Goldblatt, 2017-04-25, 1 file, -0/+3)
* Header refactoring: hash - unify and remove from catchall. (David Goldblatt, 2017-04-25, 2 files, -0/+2)
* Header refactoring: ctl - unify and remove from catchall. (David Goldblatt, 2017-04-25, 3 files, -37/+41)
    In order to do this, we introduce the mutex_prof module, which breaks a circular dependency between ctl and prof.
* Replace --disable-munmap with opt.munmap. (Jason Evans, 2017-04-25, 7 files, -32/+37)
    Control use of munmap(2) via a run-time option rather than a compile-time option (with the same per platform default). The old behavior of --disable-munmap can be achieved with --with-malloc-conf=munmap:false.
    This partially resolves #580.
* Use trylock in arena_decay_impl(). (Qi Wang, 2017-04-24, 1 file, -8/+16)
    If another thread is working on decay, we don't have to wait for the mutex.
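    A minimal sketch of the skip-if-contended idea above, using plain pthreads rather than jemalloc's malloc_mutex_trylock(); decay_mtx, decay_work, and arena_try_decay are illustrative names.

        #include <pthread.h>
        #include <stdbool.h>

        static pthread_mutex_t decay_mtx = PTHREAD_MUTEX_INITIALIZER;

        /* Placeholder for the actual purging work. */
        extern void decay_work(void);

        static bool
        arena_try_decay(void) {
            if (pthread_mutex_trylock(&decay_mtx) != 0) {
                /* Another thread is already decaying this arena; skip. */
                return false;
            }
            decay_work();
            pthread_mutex_unlock(&decay_mtx);
            return true;
        }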
* Implement malloc_mutex_trylock() w/ proper stats update. (Qi Wang, 2017-04-24, 1 file, -2/+2)
* Header refactoring: size_classes module - remove from the catchall (David Goldblatt, 2017-04-24, 4 files, -0/+4)
* Header refactoring: ckh module - remove from the catchall and unify. (David Goldblatt, 2017-04-24, 2 files, -0/+4)
* Header refactoring: ticker module - remove from the catchall and unify. (David Goldblatt, 2017-04-24, 1 file, -0/+1)
* Header refactoring: prng module - remove from the catchall and unify. (David Goldblatt, 2017-04-24, 1 file, -0/+1)
* Get rid of most of the various inline macros. (David Goldblatt, 2017-04-24, 6 files, -35/+34)
* Enable -Wundef, when supported. (David Goldblatt, 2017-04-22, 2 files, -12/+4)
    This can catch bugs in which one header defines a numeric constant, and another uses it without including the defining header. Undefined preprocessor symbols expand to '0', so that this will compile fine, silently doing the math wrong.
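    A tiny example of the bug class mentioned above; the header and macro names are made up.

        /* stats_config.h (NOT included below) would provide:
         *     #define CONFIG_STATS 1
         */

        void record_stats(void);

        void
        update_counters(void) {
        #if CONFIG_STATS   /* forgotten include: CONFIG_STATS silently expands to 0 */
            record_stats();  /* compiled out with no diagnostic unless -Wundef is on */
        #endif
        }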
* Remove --enable-ivsalloc. (Jason Evans, 2017-04-21, 1 file, -4/+11)
    Continue to use ivsalloc() when --enable-debug is specified (and add assertions to guard against 0 size), but stop providing a documented explicit semantics-changing band-aid to dodge undefined behavior in sallocx() and malloc_usable_size(). ivsalloc() remains compiled in, unlike when #211 restored --enable-ivsalloc, and if JEMALLOC_FORCE_IVSALLOC is defined during compilation, sallocx() and malloc_usable_size() will still use ivsalloc().
    This partially resolves #580.