| Commit message | Author | Age | Files | Lines |
Only use __declspec(nothrow) in C++ mode.
This resolves #244.
As per gcc documentation:
The alloc_size attribute is used to tell the compiler that the function
return value points to memory (...)
This resolves #245.
This effectively reverts 97c04a93838c4001688fe31bf018972b4696efe2 (Use
first-fit rather than first-best-fit run/chunk allocation.). In some
pathological cases, first-fit search dominates allocation time, and it
also tends not to converge as readily on a steady state of memory
layout, since precise allocation order has a bigger effect than for
first-best-fit.
Add various function attributes to the exported functions to give the
compiler more information to work with during optimization, and also
specify throw() when compiling with C++ on Linux, in order to adequately
match what __THROW does in glibc.
This resolves #237.
Conditionally define ENOENT, EINVAL, etc. (was unconditional).
Add/use PRIzu, PRIzd, and PRIzx for use in malloc_printf() calls. gcc issued
(harmless) warnings since e.g. "%zu" should be "%Iu" on Windows, and the
alternative to this workaround would have been to disable the function
attributes which cause gcc to look for type mismatches in formatted printing
function calls.
- Set opt_lg_chunk based on run-time OS setting
- Verify LG_PAGE is compatible with run-time OS setting
- When targeting Windows Vista or newer, use SRWLOCK instead of CRITICAL_SECTION
- When targeting Windows Vista or newer, statically initialize init_lock
Fix size class overflow handling for malloc(), posix_memalign(),
memalign(), calloc(), and realloc() when profiling is enabled.
Remove an assertion that erroneously caused arena_sdalloc() to fail when
profiling was enabled.
This resolves #232.
This resolves #235.
The regressions were never merged into the master branch.
This avoids the potential surprise of deallocating an object with one
tcache specified, and having the object cached in a different tcache
once it drains from the quarantine.
Now that small allocation runs have fewer regions due to run metadata
residing in chunk headers, an explicit minimum tcache count is needed to
make sure that tcache adequately amortizes synchronization overhead.
Pass large allocation requests to arena_malloc() when possible. This
regression was introduced by 155bfa7da18cab0d21d87aa2dce4554166836f5d
(Normalize size classes.).
This regression was introduced by
155bfa7da18cab0d21d87aa2dce4554166836f5d (Normalize size classes.).
Extract szad size quantization into {extent,run}_quantize(), and
quantize szad run sizes to the union of valid small region run sizes and
large run sizes.
Refactor iteration in arena_run_first_fit() to use
run_quantize{,_first,_next}(), and add support for padded large runs.
For large allocations that have no specified alignment constraints,
compute a pseudo-random offset from the beginning of the first backing
page that is a multiple of the cache line size. Under typical
configurations with 4-KiB pages and 64-byte cache lines this results in
a uniform distribution among 64 page boundary offsets.
Add the --disable-cache-oblivious option, primarily intended for
performance testing.
This resolves #13.
This rename avoids installation collisions with the upstream gperftools.
Additionally, jemalloc's per thread heap profile functionality
introduced an incompatible file format, so it's now worthwhile to
clearly distinguish jemalloc's version of this script from the upstream
version.
This resolves #229.
This resolves #227.
Fix the shrinking case of huge_ralloc_no_move_similar() to purge the
correct number of pages, at the correct offset. This regression was
introduced by 8d6a3e8321a7767cb2ca0930b85d5d488a8cc659 (Implement
dynamic per arena control over dirty page purging.).
Fix huge_ralloc_no_move_shrink() to purge the correct number of pages.
This bug was introduced by 9673983443a0782d975fbcb5d8457cfd411b8b56
(Purge/zero sub-chunk huge allocations as necessary.).
Fix arena_get() calls that specify refresh_if_missing=false. In
ctl_refresh() and ctl.c's arena_purge(), these calls attempted to only
refresh once, but did so in an unreliable way.
arena_i_lg_dirty_mult_ctl() was simply wrong to pass
refresh_if_missing=false.
This regression was introduced by
8d6a3e8321a7767cb2ca0930b85d5d488a8cc659 (Implement dynamic per arena
control over dirty page purging.).
This resolves #215.
However, unlike before it was removed, do not force --enable-ivsalloc
when Darwin zone allocator integration is enabled, since the zone
allocator code uses ivsalloc() regardless of whether
malloc_usable_size() and sallocx() do.
This resolves #211.
Add mallctls:
- arenas.lg_dirty_mult is initialized via opt.lg_dirty_mult, and can be
modified to change the initial lg_dirty_mult setting for newly created
arenas.
- arena.<i>.lg_dirty_mult controls an individual arena's dirty page
purging threshold, and synchronously triggers any purging that may be
necessary to maintain the constraint.
- arena.<i>.chunk.purge allows the per arena dirty page purging function
to be replaced.
This resolves #93.
Remove the prof_tctx_state_destroying transitory state and instead add
the tctx_uid field, so that the tuple <thr_uid, tctx_uid> uniquely
identifies a tctx. This assures that tctx's are well ordered even when
more than two with the same thr_uid coexist. A previous attempted fix
based on prof_tctx_state_destroying was only sufficient for protecting
against two coexisting tctx's, but it also introduced a new dumping
race.
These regressions were introduced by
602c8e0971160e4b85b08b16cf8a2375aa24bc04 (Implement per thread heap
profiling.) and 764b00023f2bc97f240c3a758ed23ce9c0ad8526 (Fix a heap
profiling regression.).
Add the prof_tctx_state_destroying transitory state to fix a race
between a thread destroying a tctx and another thread creating a new
equivalent tctx.
This regression was introduced by
602c8e0971160e4b85b08b16cf8a2375aa24bc04 (Implement per thread heap
profiling.).
a14bce85 made buferror not take an error code, and made the Windows
code path for buferror use GetLastError, while the alternative code
paths used errno. Then 2a83ed02 made buferror take an error code
again, and while it changed the non-Windows code paths to use that
error code, the Windows code path was not changed accordingly.
Fix prof_tctx_comp() to incorporate tctx state into the comparison.
During a dump it is possible for both a purgatory tctx and an otherwise
equivalent nominal tctx to reside in the tree at the same time.
This regression was introduced by
602c8e0971160e4b85b08b16cf8a2375aa24bc04 (Implement per thread heap
profiling.).
These bugs only affected tests and debug builds.
This regression was introduced by
97c04a93838c4001688fe31bf018972b4696efe2 (Use first-fit rather than
first-best-fit run/chunk allocation.).
This tends to more effectively pack active memory toward low addresses.
However, additional tree searches are required in many cases, so whether
this change stands the test of time will depend on real-world
benchmarks.
Treat sizes that round down to the same size class as size-equivalent
in trees that are used to search for first best fit, so that there are
only as many "firsts" as there are size classes. This comes closer to
the ideal of first fit.
These regressions were introduced by
ee41ad409a43d12900a5a3108f6c14f84e4eb0eb (Integrate whole chunks into
unused dirty page purging machinery.).
Rename "dirty chunks" to "cached chunks", in order to avoid overloading
the term "dirty".
Fix the regression caused by 339c2b23b2d61993ac768afcc72af135662c6771
(Fix chunk_unmap() to propagate dirty state.), and actually address what
that change attempted, which is to only purge chunks once, and propagate
whether zeroed pages resulted into chunk_record().