path: root/src
Commit message | Author | Age | Files | Lines
* Modify configure to determine return value of strerror_r. (Christopher Ferris, 2018-01-11; 1 file, -1/+1)
  On glibc and Android's bionic, strerror_r returns char* when _GNU_SOURCE is defined. Add a configure check for this rather than assume glibc is the only libc that behaves this way.
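  The two strerror_r ABIs the configure check distinguishes can be sketched as follows. This is an illustrative wrapper, not jemalloc's code; the name safe_strerror and the macro-based dispatch are assumptions for the example.

  ```c
  #define _GNU_SOURCE
  #include <assert.h>
  #include <stdio.h>
  #include <string.h>

  /*
   * With _GNU_SOURCE, glibc and bionic expose the GNU strerror_r
   * (returns char *, may ignore buf); other libcs expose the POSIX
   * variant (returns int, fills buf).  A configure-time compile test
   * can pick the right branch; here we dispatch on libc macros.
   */
  static const char *
  safe_strerror(int err, char *buf, size_t buflen) {
  #if defined(__GLIBC__) || defined(__BIONIC__)
  	return strerror_r(err, buf, buflen);	/* GNU variant */
  #else
  	if (strerror_r(err, buf, buflen) != 0)	/* POSIX variant */
  		snprintf(buf, buflen, "errno %d", err);
  	return buf;
  #endif
  }

  int
  main(void) {
  	char buf[128];
  	const char *msg = safe_strerror(2 /* ENOENT */, buf, sizeof(buf));
  	assert(msg != NULL && msg[0] != '\0');
  	printf("%s\n", msg);
  	return 0;
  }
  ```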
* Improve the fit for aligned allocation. (Qi Wang, 2018-01-05; 1 file, -10/+61)
  We compute the max size required to satisfy an alignment. However, this can be quite pessimistic, especially with frequent reuse (and combined with state-based fragmentation). This commit adds one more fit step specific to aligned allocations, searching in all potential fit size classes.
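  The pessimism of the max-size bound can be illustrated with a small sketch. The function name and the quantum parameter are assumptions for the example, not jemalloc's internals.

  ```c
  #include <assert.h>
  #include <stddef.h>

  /*
   * If the allocator only guarantees `quantum` alignment, the worst-case
   * size that is certain to contain an align-aligned region of sz bytes
   * is sz + align - quantum: the desired boundary may lie up to
   * align - quantum bytes past the allocation's start.
   */
  static size_t
  aligned_worst_case(size_t sz, size_t align, size_t quantum) {
  	return sz + align - quantum;
  }

  int
  main(void) {
  	/*
  	 * 64 bytes at 64-byte alignment from a 16-byte-aligned allocator:
  	 * the worst case asks for 112 bytes, nearly 2x the request.  The
  	 * extra fit step tries tighter size classes before paying this.
  	 */
  	assert(aligned_worst_case(64, 64, 16) == 112);
  	return 0;
  }
  ```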
* Handle 32-bit mutex counters. (Rajeev Misra, 2018-01-04; 1 file, -36/+47)
* Implement arena regind computation using div_info_t. (David Goldblatt, 2017-12-21; 1 file, -17/+16)
  This eliminates the need to generate an enormous switch statement in arena_slab_regind.
* Add the div module, which allows fast division by dynamic values. (David Goldblatt, 2017-12-21; 2 files, -1/+57)
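  The idea behind fast division by a dynamic value can be sketched as a precomputed multiply-and-shift. The struct layout and function names below are illustrative, not jemalloc's exact div API; the math is exact whenever n is a multiple of d and n < 2^32, which holds for arena_slab_regind (a slab offset divided by the region size).

  ```c
  #include <assert.h>
  #include <stdint.h>

  /* Precompute ceil(2^32 / d) once; divide by d with a multiply+shift. */
  typedef struct {
  	uint64_t magic;	/* ceil(2^32 / d) */
  	uint32_t d;	/* kept only for the sanity assertion */
  } div_info_t;

  static void
  div_init(div_info_t *info, uint32_t d) {
  	info->magic = (((uint64_t)1 << 32) + d - 1) / d;
  	info->d = d;
  }

  static uint32_t
  div_compute(const div_info_t *info, uint32_t n) {
  	assert(n % info->d == 0);	/* exactness precondition */
  	return (uint32_t)(((uint64_t)n * info->magic) >> 32);
  }

  int
  main(void) {
  	div_info_t div;
  	div_init(&div, 48);	/* e.g. a 48-byte region size */
  	assert(div_compute(&div, 48 * 7) == 7);
  	assert(div_compute(&div, 0) == 0);
  	return 0;
  }
  ```

  Since region sizes are only known at runtime (they depend on the size-class configuration), this replaces a giant compile-time switch with one multiply per lookup.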
* Split up and standardize naming of stats code. (David T. Goldblatt, 2017-12-19; 3 files, -181/+46)
  The arena-associated stats are now all prefixed with arena_stats_, and live in their own file. Likewise, malloc_bin_stats_t -> bin_stats_t, also in its own file.
* Rename cache_alloc_easy to cache_bin_alloc_easy. (David T. Goldblatt, 2017-12-19; 1 file, -1/+1)
  This lives in the cache_bin module; the old name was just a typo.
* Move bin stats code from arena to bin module. (David T. Goldblatt, 2017-12-19; 1 file, -14/+1)
* Move bin forking code from arena to bin module. (David T. Goldblatt, 2017-12-19; 2 files, -4/+19)
* Move bin initialization from arena module to bin module. (David T. Goldblatt, 2017-12-19; 2 files, -10/+17)
* Pull out arena_bin_info_t and arena_bin_t into their own file. (David T. Goldblatt, 2017-12-19; 4 files, -67/+70)
  In the process, kill arena_bin_index, which is unused. Several diffs continuing this separation will follow.
* Over purge by 1 extent always. (Qi Wang, 2017-12-18; 2 files, -6/+4)
  When purging, large allocations are usually the ones that cross the npages_limit threshold, simply because they are "large". This means we often leave the large extent around for a while, which has the downsides of: 1) high RSS and 2) more chance of them getting fragmented. Given that they are not likely to be reused very soon (LRU), let's over purge by 1 extent (which is often large and not reused frequently).
* Output opt.lg_extent_max_active_fit in stats. (Qi Wang, 2017-12-14; 1 file, -0/+3)
* Fix extent deregister on the leak path. (Qi Wang, 2017-12-09; 1 file, -4/+14)
  On the leak path we should not adjust gdump when deregistering.
* Add more tests for extent hooks failure paths. (Qi Wang, 2017-11-29; 1 file, -0/+3)
* Add missing deregister before extents_leak. (Qi Wang, 2017-11-20; 1 file, -0/+1)
  This fixes a regression introduced by 211b1f3 (refactor extent split).
* Avoid setting zero and commit if split fails in extent_recycle. (Qi Wang, 2017-11-20; 1 file, -14/+10)
* Eagerly coalesce large extents. (Qi Wang, 2017-11-16; 1 file, -1/+15)
  Coalescing is a small price to pay for large allocations since they happen less frequently. This reduces fragmentation while also potentially improving locality.
* Fix an extent coalesce bug. (Qi Wang, 2017-11-16; 1 file, -7/+13)
  When coalescing, we should take both extents off the LRU list; otherwise decay can grab the existing outer extent through extents_evict.
* Add opt.lg_extent_max_active_fit. (Qi Wang, 2017-11-16; 3 files, -0/+16)
  When allocating from dirty extents (which we always prefer if available), large active extents can get split even if the new allocation is much smaller, in which case the introduced fragmentation causes high long-term damage. This new option controls the threshold to reuse and split an existing active extent. We avoid using a large extent for much smaller sizes, in order to reduce fragmentation. In some workloads, adding the threshold improves virtual memory usage by >10x.
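  The threshold can be sketched as a simple size-ratio check. The function name and the exact comparison are illustrative assumptions; only the shape of the test (extent at most request << lg_max_fit) follows from the description above.

  ```c
  #include <assert.h>
  #include <stdbool.h>
  #include <stddef.h>

  /*
   * Reuse an active extent for a request only when the extent is at most
   * request << lg_max_fit bytes, so a huge dirty extent is not split up
   * to serve a tiny allocation.
   */
  static bool
  extent_active_fit_ok(size_t extent_size, size_t request,
      unsigned lg_max_fit) {
  	return (extent_size >> lg_max_fit) <= request;
  }

  int
  main(void) {
  	/*
  	 * With lg_max_fit = 6, a 4 KiB extent may serve a 64-byte request
  	 * (ratio exactly 64x) but not a 32-byte one (ratio 128x).
  	 */
  	assert(extent_active_fit_ok(4096, 64, 6));
  	assert(!extent_active_fit_ok(4096, 32, 6));
  	return 0;
  }
  ```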
* Use extent_heap_first for best fit. (Qi Wang, 2017-11-16; 1 file, -1/+1)
  extent_heap_any makes the layout less predictable and as a result incurs more fragmentation.
* Use tsd offset_state instead of atomic. (Dave Watson, 2017-11-14; 1 file, -0/+10)
  While working on #852, I noticed the prng state is atomic. This is the only atomic use of prng in all of jemalloc. Instead, use a thread-local prng state if possible to avoid unnecessary cache line contention.
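  The contention-avoidance idea can be sketched with a thread-local generator state. The splitmix64 step below is an illustrative stand-in; jemalloc has its own prng functions, and this is not its code.

  ```c
  #include <assert.h>
  #include <stdint.h>

  /*
   * Per-thread PRNG state: no atomic read-modify-write on a shared
   * cache line; each thread advances its own counter independently.
   */
  static _Thread_local uint64_t prng_state = 0x9e3779b97f4a7c15ULL;

  static uint64_t
  prng_next(void) {
  	uint64_t z = (prng_state += 0x9e3779b97f4a7c15ULL);
  	z = (z ^ (z >> 30)) * 0xbf58476d1ce4e5b9ULL;
  	z = (z ^ (z >> 27)) * 0x94d049bb133111ebULL;
  	return z ^ (z >> 31);
  }

  int
  main(void) {
  	/* splitmix64 is a bijection of the counter, so successive
  	 * outputs from one thread are always distinct. */
  	uint64_t a = prng_next();
  	uint64_t b = prng_next();
  	assert(a != b);
  	return 0;
  }
  ```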
* Fix base allocator THP auto mode locking and stats. (Qi Wang, 2017-11-10; 1 file, -21/+16)
  Added proper synchronization for switching to using THP in auto mode. Also fixed stats for the number of THPs used.
* Fix unbounded increase in stash_decayed. (Qi Wang, 2017-11-09; 2 files, -14/+21)
  Added an upper bound on how many pages we can decay during the current run. Without this, the number of stashed pages could grow without bound, since other threads can keep adding new pages into the extents.
* Use hugepage alignment for base allocator. (Qi Wang, 2017-11-04; 1 file, -2/+2)
  This gives us an easier way to tell if the allocation is for metadata in the extent hooks.
* Add arena.i.retain_grow_limit. (Qi Wang, 2017-11-03; 3 files, -5/+72)
  This option controls the max size used by grow_retained. This is useful when we have customized extent hooks reserving physical memory (e.g. 1G huge pages). Without this feature, the default increasing sequence could result in fragmented and wasted physical memory.
* Try to use sysctl(3) instead of sysctlbyname(3). (Edward Tomasz Napierala, 2017-11-03; 1 file, -0/+13)
  This attempts to use the VM_OVERCOMMIT OID - newly introduced in -CURRENT a few days ago, specifically for this purpose - instead of querying the sysctl by its string name. Due to how sysctlbyname(3) works, this means we do one syscall during binary startup instead of two.
  Signed-off-by: Edward Tomasz Napierala <trasz@FreeBSD.org>
* Use getpagesize(3) under FreeBSD. (Edward Tomasz Napierala, 2017-11-03; 1 file, -0/+2)
  This avoids a sysctl(2) syscall during binary startup, using the value passed in the ELF aux vector instead.
  Signed-off-by: Edward Tomasz Napierala <trasz@FreeBSD.org>
* metadata_thp: auto mode adjustment for a0. (Qi Wang, 2017-11-01; 1 file, -19/+22)
  We observed that arena 0 can have much more metadata allocated compared to other arenas. Tune the auto mode to only switch to huge pages on the 5th block (instead of the 3rd previously) for a0.
* Output all counters for bin mutex stats. (Qi Wang, 2017-10-19; 1 file, -4/+7)
  The saved space is not worth the trouble of missing counters.
* Add a "dumpable" bit to the extent state. (David Goldblatt, 2017-10-16; 2 files, -8/+16)
  Currently, this is unused (i.e. all extents are always marked dumpable). In the future, we'll begin using this functionality.
* Add pages_dontdump and pages_dodump. (David Goldblatt, 2017-10-16; 1 file, -0/+23)
  This will, eventually, enable us to avoid dumping eden regions.
* Factor out extent-splitting core from extent lifetime management. (David Goldblatt, 2017-10-16; 1 file, -81/+149)
  Before this commit, extent_recycle_split intermingles the splitting of an extent and the return of parts of that extent to a given extents_t. After it, that logic is separated. This will enable splitting extents that don't live in any extents_t (as the grow retained region soon will).
* Document some of the internal extent functions. (David Goldblatt, 2017-10-16; 1 file, -0/+35)
* Define MADV_FREE on our own when needed. (Qi Wang, 2017-10-11; 1 file, -1/+7)
  On x86 Linux, we define our own MADV_FREE if madvise(2) is available but MADV_FREE itself is not detected. This allows the feature to be built in and enabled with runtime detection.
* Set isthreaded manually. (Qi Wang, 2017-10-06; 1 file, -5/+6)
  Avoid relying on pthread_once, which creates a dependency during init.
* Delay background_thread_ctl_init to right before thread creation. (Qi Wang, 2017-10-06; 2 files, -4/+6)
  ctl_init sets isthreaded, which means it should be done without holding any locks.
* Enable a0 metadata thp on the 3rd base block. (Qi Wang, 2017-10-05; 1 file, -21/+64)
  Since we allocate rtree nodes from a0's base, it's pushed to over 1 block on initialization right away, which makes the auto thp mode less effective on a0. We change a0 to make the switch on the 3rd block instead.
* Power: disable the CPU_SPINWAIT macro. (David Goldblatt, 2017-10-05; 1 file, -1/+2)
  Quoting from https://github.com/jemalloc/jemalloc/issues/761 :
  [...] reading the Power ISA documentation[1], the assembly in [the CPU_SPINWAIT macro] isn't correct anyway (as @marxin points out): the setting of the program-priority register is "sticky", and we never undo the lowering. We could do something similar, but given that we don't have testing here in the first place, I'm inclined to simply not try. I'll put something up reverting the problematic commit tomorrow.
  [1] Book II, chapter 3 of the 2.07B or 3.0B ISA documents.
* Use ph instead of rb tree for extents_avail_ (Dave Watson, 2017-10-04; 1 file, -1/+1)
  There does not seem to be any overlap between usage of extent_avail and extent_heap, so we can use the same hook. The only remaining usage of rb trees is in the profiling code, which has some 'interesting' iteration constraints.
  Fixes #888
* Logging: capitalize log macro. (David Goldblatt, 2017-10-03; 1 file, -48/+48)
  Dodge a name-conflict with the math.h logarithm function. D'oh.
* Add runtime detection of lazy purging support. (Qi Wang, 2017-09-27; 1 file, -0/+24)
  It's possible to build with lazy purge enabled but deploy to systems without such support. In this case, rely on the boot time detection instead of repeatedly making unnecessary madvise calls (which all return EINVAL).
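  The boot-time probe can be sketched as a one-shot madvise on a scratch mapping, combined with the fallback MADV_FREE definition from the earlier commit. The function name is an illustrative assumption; whether the running kernel accepts MADV_FREE decides the result.

  ```c
  #define _DEFAULT_SOURCE
  #include <stdbool.h>
  #include <stdio.h>
  #include <sys/mman.h>
  #include <unistd.h>

  #ifndef MADV_FREE
  #  define MADV_FREE 8	/* Linux x86 value; the kernel may still reject it. */
  #endif

  /*
   * Probe madvise(MADV_FREE) once on a throwaway page.  If the kernel
   * returns EINVAL (old kernel, feature compiled in but unsupported),
   * report false and never issue MADV_FREE again.
   */
  static bool
  madv_free_supported(void) {
  	size_t sz = (size_t)sysconf(_SC_PAGESIZE);
  	void *p = mmap(NULL, sz, PROT_READ | PROT_WRITE,
  	    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  	if (p == MAP_FAILED)
  		return false;
  	bool ok = (madvise(p, sz, MADV_FREE) == 0);
  	munmap(p, sz);
  	return ok;
  }

  int
  main(void) {
  	printf("MADV_FREE %s\n", madv_free_supported() ?
  	    "supported" : "unsupported (would fall back)");
  	return 0;
  }
  ```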
* Put static keyword first. (Qi Wang, 2017-09-21; 1 file, -1/+1)
  Fix a warning from -Wold-style-declaration.
* Clear cache bin ql postfork. (Qi Wang, 2017-09-12; 1 file, -0/+7)
  This fixes a regression in 9c05490, which introduced the new cache bin ql. The list needs to be cleaned up after fork, same as tcache_ql.
* Relax constraints on reentrancy for extent hooks. (Qi Wang, 2017-08-31; 1 file, -1/+12)
  If we guarantee no malloc activity in extent hooks, it's possible to make customized hooks work on arena 0. Remove the non-a0 assertion to enable such use cases.
* Add stats for metadata_thp. (Qi Wang, 2017-08-30; 4 files, -14/+76)
  Report the number of THPs used in arena and aggregated stats.
* Change opt.metadata_thp to [disabled,auto,always]. (Qi Wang, 2017-08-30; 5 files, -17/+54)
  To avoid the high RSS caused by THP + low usage arena (i.e. THP becomes a significant percentage), added a new "auto" option which will only start using THP after a base allocator used up the first THP region. Starting from the second hugepage (in a single arena), "auto" behaves the same as "always", i.e. madvise hugepage right away.
* Make arena stats collection go through cache bins. (David Goldblatt, 2017-08-17; 2 files, -4/+13)
  This eliminates the need for the arena stats code to "know" about tcaches; all that it needs is a cache_bin_array_descriptor_t to tell it where to find cache_bins whose stats it should aggregate.
* Pull out caching for a bin into its own file. (David Goldblatt, 2017-08-17; 2 files, -22/+22)
  This is the first step towards breaking up the tcache and arena (since they interact primarily at the bin level). It should also make a future arena caching implementation more straightforward.
* Fix test/unit/pages. (Qi Wang, 2017-08-11; 1 file, -6/+7)
  As part of the metadata_thp support, we now have a separate switch (JEMALLOC_HAVE_MADVISE_HUGE) for MADV_HUGEPAGE availability. Use that instead of JEMALLOC_THP (which doesn't guard pages_huge anymore) in tests.