Commit messages
|
The hook tests should only be run without background threads. This was introduced in 6e841f6.
|
This is a temporary workaround until we add some beefier CI machines. Right
now, we're seeing too many OOMs for this to be useful.
|
Added opt.background_thread to enable background threads, which currently handle purging. When enabled, decay ticks will not trigger purging (that is left to the background threads). We limit the maximum number of threads to the number of CPUs. When percpu arena is enabled, CPU affinity is set for the background threads as well.
The sleep interval of background threads is dynamic, determined by computing the number of pages to purge in the future (based on the backlog).
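
A rough sketch of how the new option surfaces through the standard mallctl() interface; the option name follows the description above, and enabling it at startup would be done via something like MALLOC_CONF=background_thread:true:

    #include <stdbool.h>
    #include <stdio.h>
    #include <jemalloc/jemalloc.h>

    /* Sketch: query whether background threads were enabled at startup.
     * opt.* controls are read-only; the setting itself comes from the
     * MALLOC_CONF environment variable or the malloc_conf symbol. */
    int
    main(void) {
        bool enabled;
        size_t sz = sizeof(enabled);

        if (mallctl("opt.background_thread", &enabled, &sz, NULL, 0) == 0) {
            printf("background_thread: %s\n", enabled ? "true" : "false");
        }
        return 0;
    }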
|
Simplify configuration by removing the --disable-tcache option, replacing testing of that configuration with --with-malloc-conf=tcache:false.
Fix the thread.arena and thread.tcache.flush mallctls to work correctly when tcache is disabled.
This partially resolves #580.
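
A minimal sketch of exercising the affected mallctls through the usual mallctl() interface; the opt.tcache read is illustrative only, and the tcache-disabled configuration itself comes from building with --with-malloc-conf=tcache:false:

    #include <stdbool.h>
    #include <stddef.h>
    #include <jemalloc/jemalloc.h>

    /* Sketch: flush the calling thread's tcache. Per the fix above, this is
     * expected to behave correctly even when tcache is disabled. */
    static bool
    flush_thread_tcache(void) {
        bool tcache_enabled = false;
        size_t sz = sizeof(tcache_enabled);

        mallctl("opt.tcache", &tcache_enabled, &sz, NULL, 0);
        mallctl("thread.tcache.flush", NULL, NULL, NULL, 0);
        return tcache_enabled;
    }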
|
The new feature, opt.percpu_arena, determines thread-arena association dynamically based on CPU id. Three modes are supported: "percpu", "phycpu", and disabled.
"percpu" uses the current core id (with help from sched_getcpu()) directly as the arena index, while "phycpu" assigns threads on the same physical CPU to the same arena. In other words, with "percpu" the number of arenas equals the number of CPUs, while with "phycpu" it equals half the number of CPUs. Note that no runtime check for whether hyperthreading is enabled has been added yet.
When enabled, threads are migrated between arenas when a CPU change is detected. In the current design, to reduce the overhead of reading the CPU id, each arena tracks the thread that accessed it most recently. When a new thread comes in, we read the CPU id and update the arena if necessary.
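
As an illustration of the mapping described above (a sketch only, not the implementation), assuming sched_getcpu() is available and that hyperthread siblings are enumerated in the upper half of the CPU id space:

    #define _GNU_SOURCE
    #include <sched.h>

    /* "percpu" uses the CPU id directly as the arena index; "phycpu" folds
     * hyperthread siblings onto the same arena, halving the arena count.
     * Sibling enumeration is an assumption here; as noted above, there is
     * no runtime hyperthreading check. */
    static unsigned
    cpu_to_arena_index(unsigned ncpus, int phycpu_mode) {
        int cpu = sched_getcpu();

        if (cpu < 0) {
            return 0; /* fall back to arena 0 if the CPU id is unknown */
        }
        if (phycpu_mode && ncpus >= 2) {
            return (unsigned)cpu % (ncpus / 2);
        }
        return (unsigned)cpu;
    }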
|
malloc_conf does not work reliably with MSVC, which complains of "inconsistent dll linkage", i.e. its inability to support the application overriding malloc_conf when dynamically linking/loading. Work around this limitation by adding test harness support for per-test shell script sourcing, and by converting all tests to use MALLOC_CONF instead of malloc_conf.
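
For contrast, a sketch of the compiled-in channel that MSVC cannot reliably override; the tests now pass the equivalent settings through the MALLOC_CONF environment variable, sourced from per-test shell scripts, instead:

    #include <jemalloc/jemalloc.h>

    /* The application-provided configuration symbol; under MSVC, overriding
     * this across a DLL boundary triggers the "inconsistent dll linkage"
     * complaint described above. The option string is just an example. */
    const char *malloc_conf = "tcache:false";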
|
This resolves #564.
|
This resolves #540.
|
Add braces around single-line blocks, and remove line breaks before
function-opening braces.
This resolves #537.
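
Purely as an illustration of the adopted style (hypothetical code, not taken from the change itself):

    /* Braces around single-line blocks, and no line break before the
     * function-opening brace. */
    static int
    clamp_nonnegative(int x) {
        if (x < 0) {
            return 0;
        }
        return x;
    }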
|
This resolves #535.
|
Move test extent hook code from the extent integration test into a
header, and normalize the out-of-band controls and introspection.
Also refactor the base unit test to use the header.
|
Add/rename related mallctls:
- Add stats.arenas.<i>.base .
- Rename stats.arenas.<i>.metadata to stats.arenas.<i>.internal .
- Add stats.arenas.<i>.resident .
Modify the arenas.extend mallctl to take an optional (extent_hooks_t *)
argument so that it is possible for all base allocations to be serviced
by the specified extent hooks.
This resolves #463.
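
A sketch of using the extended mallctl as described above; my_hooks stands in for an application-defined hook table and is assumed to exist elsewhere:

    #include <limits.h>
    #include <stddef.h>
    #include <jemalloc/jemalloc.h>

    extern extent_hooks_t my_hooks; /* assumed application-defined hook table */

    /* Sketch: create a new arena whose allocations, including base
     * allocations, are serviced by the supplied extent hooks. */
    static unsigned
    create_hooked_arena(void) {
        unsigned arena_ind;
        size_t sz = sizeof(arena_ind);
        extent_hooks_t *hooks = &my_hooks;

        if (mallctl("arenas.extend", &arena_ind, &sz, &hooks, sizeof(hooks)) != 0) {
            return UINT_MAX; /* creation failed */
        }
        return arena_ind;
    }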
|
Split purging into lazy and forced variants. Use the forced variant for
zeroing dss.
Add support for NULL function pointers as an opt-out mechanism for the
dalloc, commit, decommit, purge_lazy, purge_forced, split, and merge
fields of extent_hooks_t.
Add short-circuiting checks in large_ralloc_no_move_{shrink,expand}() so
that no attempt is made if splitting/merging is not supported.
This resolves #268.
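
A sketch of the opt-out mechanism, assuming an extent_hooks_t with the fields named above and an alloc hook shaped like jemalloc's extent_alloc_t (both assumptions here); unspecified fields default to NULL under designated initialization:

    #include <stdbool.h>
    #include <stddef.h>
    #include <jemalloc/jemalloc.h>

    /* Assumed application-defined alloc hook; signature sketched after
     * jemalloc's extent_alloc_t. */
    extern void *my_extent_alloc(extent_hooks_t *extent_hooks, void *new_addr,
        size_t size, size_t alignment, bool *zero, bool *commit,
        unsigned arena_ind);

    /* NULL fields opt out of the corresponding operations, so jemalloc will
     * not attempt, e.g., splitting or merging extents from such an arena. */
    static extent_hooks_t minimal_hooks = {
        .alloc = my_extent_alloc,
        .dalloc = NULL,
        .commit = NULL,
        .decommit = NULL,
        .purge_lazy = NULL,
        .purge_forced = NULL,
        .split = NULL,
        .merge = NULL
    };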
|
Adds C++ bindings for jemalloc, along with the necessary autoconf settings. This is mostly to add sized deallocation support, which can't be added from C directly. Sized deallocation is a ~10% microbenchmark improvement.
* Import ax_cxx_compile_stdcxx.m4 from the autoconf repo; this seems like the easiest way to get C++14 detection.
* Adds various other changes, like CXXFLAGS, to configure.ac.
* Adds new rules to Makefile.in for src/jemalloc-cpp.cpp, and a basic unit test.
* Both new and delete are overridden, to ensure jemalloc is used for both.
* TODO, future enhancement: avoid extra PLT thunks for new and delete. sdallocx and malloc are publicly exported jemalloc symbols, so using an alias would link them directly; unfortunately, I had trouble getting it to play nicely with jemalloc's namespace support.
Testing:
Tested gcc 4.8, gcc 5, gcc 5.2, and clang 4.0. Only gcc >= 5 has sized deallocation support; verified that the rest build correctly.
Tested Mac OS X and CentOS.
Tested --with-jemalloc-prefix and --without-export.
This resolves #202.
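
At the C API level, the sized-deallocation benefit mentioned above boils down to pairing an allocation with sdallocx() rather than free(); a sized operator delete can forward to it. A minimal sketch:

    #include <stddef.h>
    #include <jemalloc/jemalloc.h>

    /* Sketch: when the size is known at deallocation time, sdallocx() lets
     * jemalloc skip the size lookup that a plain free() would perform. */
    static void
    sized_roundtrip(void) {
        size_t size = 128;
        void *p = mallocx(size, 0);

        if (p != NULL) {
            sdallocx(p, size, 0);
        }
    }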
|
This is intended to drop memory usage to a level that AppVeyor test
instances can handle.
This resolves #393.
|
This resolves #393.
|
This resolves #393.
|
This avoids warnings in some cases, and is otherwise generally good
hygiene.
|
Fix a fundamental extent_split_wrapper() bug in an error path.
Fix extent_recycle() to deregister unsplittable extents before leaking
them.
Relax xallocx() test assertions so that unsplittable extents don't cause
test failures.
|
With the removal of subchunk size class infrastructure, there are no
large size classes that are guaranteed to be re-expandable in place
unless munmap() is disabled. Work around these legitimate failures with
rallocx() fallback calls. If there were no test configuration for which
the xallocx() calls succeeded, it would be important to override the
extent hooks for testing purposes, but by default these tests don't use
the rallocx() fallbacks on Linux, so test coverage is still sufficient.
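
The fallback pattern referred to above looks roughly like this (a sketch; the tests' actual assertions are omitted):

    #include <stddef.h>
    #include <jemalloc/jemalloc.h>

    /* Sketch: try to grow in place with xallocx(); if the extent cannot be
     * expanded (e.g. because split/merge is unsupported), fall back to
     * rallocx(), which may move the allocation. */
    static void *
    grow(void *p, size_t new_size) {
        if (xallocx(p, new_size, 0, 0) >= new_size) {
            return p; /* expanded in place */
        }
        /* Returns NULL on failure, in which case p is still valid. */
        return rallocx(p, new_size, 0);
    }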
|
This facilitates the application accessing its own extent allocator
metadata during hook invocations.
This resolves #259.
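
Sketch of what this enables: because hooks now receive the extent_hooks_t pointer itself, an application can embed the hook table in a larger structure and recover its own metadata inside a hook. The wrapper type and hook below are hypothetical, with the alloc signature sketched after jemalloc's extent_alloc_t:

    #include <stdbool.h>
    #include <stddef.h>
    #include <jemalloc/jemalloc.h>

    /* Hypothetical wrapper: hook table plus application metadata. */
    typedef struct {
        extent_hooks_t hooks; /* must be the first member for the cast below */
        size_t bytes_requested;
    } my_allocator_t;

    static void *
    my_alloc(extent_hooks_t *extent_hooks, void *new_addr, size_t size,
        size_t alignment, bool *zero, bool *commit, unsigned arena_ind) {
        /* Recover the enclosing application state from the hook pointer. */
        my_allocator_t *self = (my_allocator_t *)extent_hooks;

        self->bytes_requested += size;
        (void)new_addr; (void)alignment; (void)zero; (void)commit;
        (void)arena_ind;
        /* ... obtain memory here; returning NULL reports failure. */
        return NULL;
    }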
|
Precisely size extents for huge size classes that aren't multiples of
chunksize.
|
Depending on virtual memory resource limits, it is necessary to attempt
allocating three maximally sized objects to trigger OOM rather than just
two, since the maximum supported size is slightly less than half the
total virtual memory address space.
This fixes a test failure that was introduced by
0c516a00c4cb28cff55ce0995f756b5aae074c9e (Make *allocx() size class
overflow behavior defined.).
This resolves #379.
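
The resulting test shape, roughly (a sketch; huge_maxclass stands in for the maximum supported allocation size discussed above):

    #include <assert.h>
    #include <stddef.h>
    #include <jemalloc/jemalloc.h>

    /* Sketch: since the maximum size class is just under half the virtual
     * address space, two such allocations may both succeed, but a third
     * cannot. */
    static void
    expect_oom(size_t huge_maxclass) {
        void *a = mallocx(huge_maxclass, 0);
        void *b = mallocx(huge_maxclass, 0);
        void *c = mallocx(huge_maxclass, 0);

        assert(c == NULL); /* the third maximally sized object must fail */

        if (b != NULL) dallocx(b, 0);
        if (a != NULL) dallocx(a, 0);
    }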
|
This assures that side effects of internal allocation don't impact
tests.
|
Add (size_t) casts to MALLOCX_ALIGN() macros so that passing the integer
constant 0x80000000 does not cause a compiler warning about invalid
shift amount.
This resolves #354.
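
The situation the cast addresses, roughly: requesting a large power-of-two alignment via an integer constant.

    #include <stddef.h>
    #include <jemalloc/jemalloc.h>

    /* With the (size_t) cast inside MALLOCX_ALIGN(), passing a constant such
     * as 0x80000000 no longer provokes an invalid-shift-amount warning. */
    static void *
    alloc_2gb_aligned(size_t size) {
        return mallocx(size, MALLOCX_ALIGN(0x80000000));
    }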
|
Remove invalid tests that were intended to be tests of (hugemax+1) OOM,
for which tests already exist.
|
This fixes compilation warnings regarding integer overflow that were
introduced by 0c516a00c4cb28cff55ce0995f756b5aae074c9e (Make *allocx()
size class overflow behavior defined.).
|
Limit supported size and alignment to HUGE_MAXCLASS, which in turn is
now limited to be less than PTRDIFF_MAX.
This resolves #278 and #295.
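
A sketch of the now-defined behavior under the stated limit: an over-limit request fails cleanly rather than overflowing.

    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <jemalloc/jemalloc.h>

    /* Sketch: sizes beyond the supported maximum (itself below PTRDIFF_MAX)
     * are rejected, so the request returns NULL instead of wrapping around. */
    static void
    check_overflow_is_defined(void) {
        void *p = mallocx((size_t)PTRDIFF_MAX + 1, 0);

        assert(p == NULL);
    }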
|
Modify xallocx() tests that expect to expand in place to use a separate
arena. This avoids the potential for interposed internal allocations
from e.g. heap profile sampling to disrupt the tests.
This resolves #286.
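
Roughly what allocating from a dedicated test arena looks like (a sketch; error handling beyond the mallctl check is elided):

    #include <stddef.h>
    #include <jemalloc/jemalloc.h>

    /* Sketch: create a fresh arena and allocate from it, bypassing the
     * tcache, so interposed internal allocations cannot disturb in-place
     * expansion of the test's objects. */
    static void *
    alloc_in_private_arena(size_t size) {
        unsigned arena_ind;
        size_t sz = sizeof(arena_ind);

        if (mallctl("arenas.extend", &arena_ind, &sz, NULL, 0) != 0) {
            return NULL;
        }
        return mallocx(size, MALLOCX_ARENA(arena_ind) | MALLOCX_TCACHE_NONE);
    }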
|
In addition to depending on map coalescing, the test depended on
munmap() being disabled so that chunk recycling would always succeed.
|
Make mallocx() OOM testing work correctly even on systems that can
allocate the majority of virtual address space in a single contiguous
region.
|
Zero all trailing bytes of large allocations when
--enable-cache-oblivious configure option is enabled. This regression
was introduced by 8a03cf039cd06f9fa6972711195055d865673966 (Implement
cache index randomization for large allocations.).
Zero trailing bytes of huge allocations when resizing from/to a size
class that is not a multiple of the chunk size.
|
Systems that do not support chunk split/merge cannot shrink/grow huge
allocations in place.