path: root/src
Commits (most recent first); each entry lists author, date, files changed, and lines removed/added.
* Set isthreaded manually. [dev]  (Qi Wang, 2017-10-06; 1 file, -5/+6)
    Avoid relying on pthread_once, which creates a dependency during init.
* Delay background_thread_ctl_init to right before thread creation.  (Qi Wang, 2017-10-06; 2 files, -4/+6)
    ctl_init sets isthreaded, which means it should be done without holding any
    locks.
* Enable a0 metadata thp on the 3rd base block.  (Qi Wang, 2017-10-05; 1 file, -21/+64)
    Since we allocate rtree nodes from a0's base, it's pushed to over 1 block on
    initialization right away, which makes the auto thp mode less effective on
    a0. We change a0 to make the switch on the 3rd block instead.
* Power: disable the CPU_SPINWAIT macro.  (David Goldblatt, 2017-10-05; 1 file, -1/+2)
    Quoting from https://github.com/jemalloc/jemalloc/issues/761:
        [...] reading the Power ISA documentation [1], the assembly in [the
        CPU_SPINWAIT macro] isn't correct anyway (as @marxin points out): the
        setting of the program-priority register is "sticky", and we never undo
        the lowering. We could do something similar, but given that we don't
        have testing here in the first place, I'm inclined to simply not try.
        I'll put something up reverting the problematic commit tomorrow.
        [1] Book II, chapter 3 of the 2.07B or 3.0B ISA documents.
* Use ph instead of rb tree for extents_avail_  (Dave Watson, 2017-10-04; 1 file, -1/+1)
    There does not seem to be any overlap between usage of extent_avail and
    extent_heap, so we can use the same hook. The only remaining usage of rb
    trees is in the profiling code, which has some 'interesting' iteration
    constraints.
    Fixes #888
* Logging: capitalize log macro.  (David Goldblatt, 2017-10-03; 1 file, -48/+48)
    Dodge a name conflict with the math.h logarithm function. D'oh.
* Add runtime detection of lazy purging support.  (Qi Wang, 2017-09-27; 1 file, -0/+24)
    It's possible to build with lazy purging enabled but deploy to systems
    without such support. In that case, rely on boot-time detection instead of
    making unnecessary madvise calls (which all return EINVAL).
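    For illustration, a boot-time probe for the detection described above could
    look roughly like this (a minimal sketch, not jemalloc's actual code; the
    helper name is made up):

        #include <stdbool.h>
        #include <sys/mman.h>
        #include <unistd.h>

        /* Hypothetical probe: true if MADV_FREE (lazy purging) works here. */
        static bool
        probe_lazy_purge(void) {
        #ifdef MADV_FREE
            size_t sz = (size_t)sysconf(_SC_PAGESIZE);
            void *p = mmap(NULL, sz, PROT_READ | PROT_WRITE,
                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (p == MAP_FAILED) {
                return false;
            }
            /* EINVAL here means the kernel lacks MADV_FREE support. */
            bool ok = (madvise(p, sz, MADV_FREE) == 0);
            munmap(p, sz);
            return ok;
        #else
            return false;
        #endif
        }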
* Put static keyword first.  (Qi Wang, 2017-09-21; 1 file, -1/+1)
    Fix a warning from -Wold-style-declaration.
* Clear cache bin ql postfork.  (Qi Wang, 2017-09-12; 1 file, -0/+7)
    This fixes a regression in 9c05490, which introduced the new cache bin ql.
    The list needs to be cleaned up after fork, same as tcache_ql.
* Relax constraints on reentrancy for extent hooks.  (Qi Wang, 2017-08-31; 1 file, -1/+12)
    If we guarantee no malloc activity in extent hooks, it's possible to make
    customized hooks work on arena 0. Remove the non-a0 assertion to enable
    such use cases.
* Add stats for metadata_thp.  (Qi Wang, 2017-08-30; 4 files, -14/+76)
    Report the number of THPs used in per-arena and aggregated stats.
* Change opt.metadata_thp to [disabled,auto,always].  (Qi Wang, 2017-08-30; 5 files, -17/+54)
    To avoid the high RSS caused by THP in low-usage arenas (i.e. where THP
    becomes a significant percentage of memory), add a new "auto" option which
    only starts using THP after a base allocator has used up the first THP
    region. Starting from the second hugepage (in a single arena), "auto"
    behaves the same as "always", i.e. it madvises hugepage right away.
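    The new value can be selected through jemalloc's usual configuration
    channels; for example (a sketch assuming an unprefixed build, where the
    compiled-in malloc_conf string and the MALLOC_CONF environment variable
    both apply):

        /* Compiled-in default inside the application: */
        const char *malloc_conf = "metadata_thp:auto";

        /* Or per run, without rebuilding:
         *   MALLOC_CONF=metadata_thp:auto ./app
         */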
* Make arena stats collection go through cache bins.  (David Goldblatt, 2017-08-17; 2 files, -4/+13)
    This eliminates the need for the arena stats code to "know" about tcaches;
    all it needs is a cache_bin_array_descriptor_t to tell it where to find the
    cache_bins whose stats it should aggregate.
* Pull out caching for a bin into its own file.  (David Goldblatt, 2017-08-17; 2 files, -22/+22)
    This is the first step towards breaking up the tcache and arena (since they
    interact primarily at the bin level). It should also make a future arena
    caching implementation more straightforward.
* Fix test/unit/pages.  (Qi Wang, 2017-08-11; 1 file, -6/+7)
    As part of the metadata_thp support, we now have a separate switch
    (JEMALLOC_HAVE_MADVISE_HUGE) for MADV_HUGEPAGE availability. Use that
    instead of JEMALLOC_THP (which doesn't guard pages_huge anymore) in tests.
* Implement opt.metadata_thp.  (Qi Wang, 2017-08-11; 5 files, -16/+85)
    This option enables transparent huge pages for base allocators (requires
    MADV_HUGEPAGE support).
* Remove external linkage for spin_adaptive.  (Ryan Libby, 2017-08-08; 1 file, -4/+0)
    The external linkage for spin_adaptive was not used, and the inline
    declaration of spin_adaptive that was used caused a problem on FreeBSD,
    where CPU_SPINWAIT is implemented as a call to a static procedure for x86
    architectures.
* Only read szind if ptr is not page-aligned in sdallocx.  (Qi Wang, 2017-07-31; 1 file, -2/+22)
    If ptr is not page-aligned, we know the allocation was not sampled. In this
    case, use the size passed into sdallocx directly without accessing the
    rtree. This improves sdallocx efficiency in the common case (not sampled &&
    small allocation).
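    The core of the fast path is just an alignment test; roughly (a simplified
    sketch with an assumed page size, not the actual jemalloc code):

        #include <stdbool.h>
        #include <stdint.h>

        #define ASSUMED_PAGE 4096  /* illustration only; real code uses the
                                      build-time page size */

        /* Sampled (heap-profiled) allocations are page-aligned, so a pointer
         * that is not page-aligned must be an ordinary allocation, and the
         * caller-provided size can be used directly, skipping the rtree. */
        static inline bool
        could_be_sampled(const void *ptr) {
            return ((uintptr_t)ptr & (ASSUMED_PAGE - 1)) == 0;
        }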
* Bypass extent_alloc_wrapper_hard for no_move_expand.  (Qi Wang, 2017-07-31; 1 file, -0/+9)
    When retain is enabled, we should not attempt mmap for in-place expansion
    (large_ralloc_no_move), because it's virtually impossible to succeed, and
    it causes unnecessary syscalls (which can cause lock contention under
    load).
* Logging: log using the log var names directly.  (David Goldblatt, 2017-07-24; 1 file, -151/+47)
    Currently we have to log by writing something like:
        static log_var_t log_a_b_c = LOG_VAR_INIT("a.b.c");
        log(log_a_b_c, "msg");
    This is sort of annoying. Let's just write:
        log("a.b.c", "msg");
* Split out cold code path in newImpl.  (Qinfan Wu, 2017-07-24; 1 file, -7/+16)
    I noticed that the whole newImpl is inlined. Since the OOM handling code is
    rarely executed, we should only inline the hot path.
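    The usual shape of such a split, sketched in C (the real change is in
    jemalloc's C++ operator new wrapper; the names and the GCC/Clang attributes
    here are illustrative):

        #include <stdlib.h>

        /* Cold path kept out of line so the hot path stays small enough to
         * inline at call sites. */
        __attribute__((noinline, cold))
        static void *
        alloc_oom_slow(size_t size) {
            /* Rarely executed: retry, run handlers, or give up. */
            (void)size;
            return NULL;
        }

        static inline void *
        alloc_impl(size_t size) {
            void *p = malloc(size);
            if (p != NULL) {        /* hot path */
                return p;
            }
            return alloc_oom_slow(size);
        }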
* Logging: allow logging with empty varargs.  (David Goldblatt, 2017-07-22; 2 files, -9/+9)
    Currently, the log macro requires at least one argument after the format
    string, because of the way the preprocessor handles varargs macros. We can
    hide some of that irritation by pushing the extra arguments into a varargs
    function.
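    A minimal sketch of the trick (not jemalloc's exact macro): let the macro
    forward everything, and let a real varargs function cope with there being
    no extra arguments.

        #include <stdarg.h>
        #include <stdio.h>

        /* A varargs function is happy with zero extra arguments, so the macro
         * never has to splice a possibly-empty __VA_ARGS__ after a comma. */
        static void
        log_impl(const char *fmt, ...) {
            va_list ap;
            va_start(ap, fmt);
            vfprintf(stderr, fmt, ap);
            va_end(ap);
            fputc('\n', stderr);
        }

        #define LOG_SKETCH(...) log_impl(__VA_ARGS__)

        /* Both of these now compile:
         *   LOG_SKETCH("booted");
         *   LOG_SKETCH("arena %u ready", 3);
         */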
* Validates fd before calling fcntl.  (Y. T. Chung, 2017-07-22; 2 files, -4/+12)
* Add entry and exit logging to all core functions.  (David T. Goldblatt, 2017-07-21; 1 file, -1/+198)
    I.e. malloc, free, the allocx API, the posix extensions.
* Add a logging facility.  (David T. Goldblatt, 2017-07-21; 2 files, -0/+90)
    This sets up a hierarchical logging facility, so that we can add logging
    statements liberally, and turn them on in a fine-grained manner.
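    For intuition, hierarchical filtering can be as simple as a prefix match on
    dot-separated names (an illustrative sketch, not the actual matching code):

        #include <stdbool.h>
        #include <string.h>

        /* A statement named "a.b.c" fires when the enabled prefix is "a",
         * "a.b", or "a.b.c": match the prefix, then require a '.' or the end
         * of the name so that "a.bc" is not enabled by "a.b". */
        static bool
        log_enabled(const char *enabled_prefix, const char *name) {
            size_t n = strlen(enabled_prefix);
            if (strncmp(enabled_prefix, name, n) != 0) {
                return false;
            }
            return name[n] == '\0' || name[n] == '.';
        }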
* Fall back to FD_CLOEXEC when O_CLOEXEC is unavailable.  (Y. T. Chung, 2017-07-20; 2 files, -5/+28)
    Older Linux systems don't have O_CLOEXEC. If that's the case, we fcntl
    immediately after open, to minimize the length of the racy period in which
    an operation in another thread can leak a file descriptor to a child.
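    The fallback amounts to roughly the following (a sketch; the helper name is
    made up). It also reflects the "Validates fd before calling fcntl" change
    above: fcntl is only attempted on a valid descriptor.

        #include <fcntl.h>
        #include <unistd.h>

        static int
        open_cloexec(const char *path, int flags) {
        #ifdef O_CLOEXEC
            /* Atomic: no window in which a fork+exec can inherit the fd. */
            return open(path, flags | O_CLOEXEC);
        #else
            /* Racy but best-effort: set close-on-exec right after open. */
            int fd = open(path, flags);
            if (fd != -1) {
                fcntl(fd, F_SETFD, fcntl(fd, F_GETFD) | FD_CLOEXEC);
            }
            return fd;
        #endif
        }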
* Fix deadlock in multithreaded fork in OS X.  (David Goldblatt, 2017-07-11; 1 file, -6/+24)
    On OS X, we rely on the zone machinery to call our prefork and postfork
    handlers. In zone_force_unlock, we call jemalloc_postfork_child,
    reinitializing all our mutexes regardless of state, since the mutex
    implementation will assert if the tid of the unlocker is different from
    that of the locker. This has the effect of unlocking the mutexes, but it
    also fails to wake any threads waiting on them in the parent.
    To fix this, we track whether we're the parent or the child after the
    fork, and unlock or reinit as appropriate.
    This resolves #895.
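    A generic sketch of the parent/child distinction (the flag and helpers are
    illustrative and shown with ordinary fork handlers; the actual mechanism on
    OS X is the malloc zone force_lock/force_unlock callbacks):

        #include <pthread.h>
        #include <stdbool.h>

        static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;
        static bool forked_child = false;

        static void prefork(void)    { pthread_mutex_lock(&big_lock); }
        static void mark_child(void) { forked_child = true; }

        /* Shared post-fork cleanup, called in both parent and child. */
        static void
        force_unlock_sketch(void) {
            if (forked_child) {
                /* Child: the locking thread no longer exists; reinit. */
                pthread_mutex_init(&big_lock, NULL);
                forked_child = false;
            } else {
                /* Parent: a plain unlock also wakes waiting threads. */
                pthread_mutex_unlock(&big_lock);
            }
        }

        /* prefork/mark_child/force_unlock_sketch would be wired up via fork
         * handlers (pthread_atfork, or the zone callbacks on OS X). */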
* Add extent_grow_mtx in pre_ / post_fork handlers.  (Qi Wang, 2017-06-30; 2 files, -5/+15)
    This fixes an issue that could cause the child process to get stuck after
    fork.
* Fix pthread_sigmask() usage to block all signals.  (Qi Wang, 2017-06-26; 1 file, -1/+1)
* Switch ctl to explicitly use tsd instead of tsdn.  (Qi Wang, 2017-06-23; 2 files, -24/+23)
* Check arena in current context in pre_reentrancy.  (Qi Wang, 2017-06-23; 6 files, -46/+47)
* Set reentrancy when invoking customized extent hooks.  (Qi Wang, 2017-06-23; 3 files, -25/+102)
    Customized extent hooks may malloc / free and thus trigger reentry. Support
    this behavior by setting reentrancy around hook invocations.
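    For context, a user-supplied hook that allocates for its own bookkeeping
    might look like the sketch below (the alloc-hook signature follows the
    public extent_hooks_t interface; the bookkeeping itself is made up). Such
    hooks are installed per arena via the arena.<i>.extent_hooks mallctl.

        #include <stdbool.h>
        #include <stdlib.h>
        #include <jemalloc/jemalloc.h>

        /* Custom alloc hook that itself calls malloc(); with this change such
         * reentry is supported, because jemalloc marks reentrancy around the
         * hook call. */
        static void *
        my_extent_alloc(extent_hooks_t *hooks, void *new_addr, size_t size,
            size_t alignment, bool *zero, bool *commit, unsigned arena_ind) {
            void *note = malloc(64);   /* reentrant allocation, now allowed */
            /* ... record the request, then map memory however you like ... */
            free(note);
            (void)hooks; (void)new_addr; (void)size; (void)alignment;
            (void)zero; (void)commit; (void)arena_ind;
            return NULL;               /* sketch only: NULL means failure */
        }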
* Fix assertion typos.  (Jason Evans, 2017-06-23; 2 files, -2/+2)
    Reported by Conrad Meyer.
* Add thread name for background threads.  (Qi Wang, 2017-06-23; 1 file, -1/+3)
* Avoid inactivity_check within background threads.  (Qi Wang, 2017-06-22; 1 file, -17/+22)
    Pass is_background_thread down the decay path, so that a background thread
    won't attempt inactivity_check on itself. This fixes an issue where a
    background thread did a trylock on a mutex it already owns.
* Mask signals during background thread creation.  (Jason Evans, 2017-06-21; 1 file, -3/+35)
    This prevents signals from being inadvertently delivered to background
    threads.
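    The standard recipe is to block all signals around pthread_create so the
    new thread inherits a fully blocked mask, then restore the caller's mask.
    A sketch (the wrapper name is made up):

        #include <pthread.h>
        #include <signal.h>

        static int
        create_thread_all_signals_blocked(pthread_t *thd,
            void *(*body)(void *), void *arg) {
            sigset_t all, old;
            sigfillset(&all);
            /* The new thread inherits the (fully blocked) signal mask. */
            pthread_sigmask(SIG_SETMASK, &all, &old);
            int err = pthread_create(thd, NULL, body, arg);
            /* Restore the caller's original mask. */
            pthread_sigmask(SIG_SETMASK, &old, NULL);
            return err;
        }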
* Clear tcache_ql after fork in child.  (Qi Wang, 2017-06-20; 1 file, -0/+17)
* Add minimal initialized TSD.  (Qi Wang, 2017-06-16; 2 files, -16/+38)
    We use the minimal_initialized tsd (which requires no cleanup) for free()
    specifically, if tsd hasn't been initialized yet. Any other activity will
    transition the state from minimal to normal. This works around the case
    where a thread makes no malloc calls in its lifetime until thread
    termination, when free() happens after the TLS destructors.
* Pass tsd to tcache_flush().  (Qi Wang, 2017-06-16; 2 files, -3/+2)
* Log decay->nunpurged before purging.  (Qi Wang, 2017-06-15; 1 file, -2/+3)
    During purging, we may unlock decay->mtx. Therefore we should finish
    logging decay-related counters before attempting to purge.
* Only abort on dlsym when necessary.  (Qi Wang, 2017-06-14; 2 files, -3/+18)
    If neither background_thread nor lazy_lock is in use, do not abort on
    dlsym errors.
* Fix extent_hooks in extent_grow_retained().  (Qi Wang, 2017-06-14; 1 file, -3/+12)
    This issue caused the default extent alloc function to be used incorrectly
    even when arena.<i>.extent_hooks is set. The bug was introduced by
    411697adcda2fd75e135cdcdafb95f2bd295dc7f (Use exponential series to size
    extents.), which was first released in 5.0.0.
* Combine background_thread started / paused into state.  (Qi Wang, 2017-06-12; 2 files, -29/+50)
* Do not re-enable background thread after fork.  (Qi Wang, 2017-06-12; 2 files, -36/+46)
    Avoid calling pthread_create in postfork handlers.
* Move background thread creation to background_thread_0.  (Qi Wang, 2017-06-12; 2 files, -144/+249)
    To avoid complications, avoid invoking pthread_create "internally"; instead
    rely on thread 0 to launch new threads, and also to terminate threads when
    asked.
* Normalize background thread configuration.  (Jason Evans, 2017-06-09; 1 file, -0/+2)
    Also fix a compilation error under #ifndef JEMALLOC_PTHREAD_CREATE_WRAPPER.
* Update a UTRACE() size argument.  (Jason Evans, 2017-06-08; 1 file, -1/+1)
* Add internal tsd for background_thread.  (Qi Wang, 2017-06-08; 2 files, -6/+14)
* Drop high rank locks when creating threads.  (Qi Wang, 2017-06-08; 4 files, -13/+42)
    Avoid holding arenas_lock and background_thread_lock when creating
    background threads, because pthread_create may take internal locks and
    could potentially deadlock with jemalloc's internal locks.
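    In outline, it is the usual drop/call/reacquire pattern (a generic sketch;
    the lock name is illustrative and the caller is assumed to hold it):

        #include <pthread.h>

        static pthread_mutex_t arenas_lock_sketch = PTHREAD_MUTEX_INITIALIZER;

        /* pthread_create may take libc-internal locks; holding our own
         * high-rank locks across the call risks a lock-order inversion, so
         * release them first and reacquire afterwards (revalidating any state
         * that may have changed in between). */
        static int
        spawn_with_locks_dropped(pthread_t *thd, void *(*body)(void *),
            void *arg) {
            pthread_mutex_unlock(&arenas_lock_sketch);
            int err = pthread_create(thd, NULL, body, arg);
            pthread_mutex_lock(&arenas_lock_sketch);
            return err;
        }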
* Make tsd no-cleanup during tsd reincarnation.  (Qi Wang, 2017-06-07; 2 files, -21/+48)
    Since tsd cleanup isn't guaranteed when reincarnated, set up tsd in a way
    that needs no cleanup, by making it go through the slow path instead.