- Decorate public functions with __declspec(allocator) and __declspec(restrict), just like MSVC 1900 does
- Support JEMALLOC_HAS_RESTRICT by defining the restrict keyword
- Move __declspec(nothrow) between 'void' and '*' so it compiles once more
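A minimal sketch of what this decoration can look like; the macro and function names below are illustrative assumptions, not jemalloc's actual headers:

    #include <stddef.h>

    #if defined(_MSC_VER) && _MSC_VER >= 1900
    /* MSVC 1900 decorates its own allocator entry points this way. */
    #  define ALLOCATOR_ATTRS __declspec(allocator) __declspec(restrict)
    #  define NOTHROW_ATTR    __declspec(nothrow)
    #  ifndef __cplusplus
    #    define restrict __restrict  /* make the restrict keyword available */
    #  endif
    #else
    #  define ALLOCATOR_ATTRS
    #  define NOTHROW_ATTR
    #endif

    /* The nothrow attribute has to sit between "void" and "*" to compile. */
    ALLOCATOR_ATTRS void NOTHROW_ATTR *my_malloc(size_t size);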
Add the "arena.<i>.chunk_hooks" mallctl, which replaces and expands on
the "arena.<i>.chunk.{alloc,dalloc,purge}" mallctls. The chunk hooks
allow control over chunk allocation/deallocation, decommit/commit,
purging, and splitting/merging, such that the application can rely on
jemalloc's internal chunk caching and retaining functionality, yet
implement a variety of chunk management mechanisms and policies.
Merge the chunks_[sz]ad_{mmap,dss} red-black trees into
chunks_[sz]ad_retained. This slightly reduces how hard jemalloc tries
to honor the dss precedence setting; prior to this change the precedence
setting was also consulted when recycling chunks.
Fix chunk purging. Don't purge chunks in arena_purge_stashed(); instead
deallocate them in arena_unstash_purged(), so that the dirty memory
linkage remains valid until after the last time it is used.
This resolves #176 and #201.
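A sketch of how an application might install a custom hook through the new mallctl, assuming the chunk_hooks_t layout and hook signatures documented for jemalloc 4.x and an unprefixed mallctl(); only the allocation hook is overridden, and it simply wraps the previously installed hooks:

    #include <stdbool.h>
    #include <stddef.h>
    #include <jemalloc/jemalloc.h>

    static chunk_hooks_t default_hooks;

    /* Custom allocation hook: delegate to the hooks that were installed
     * before; real code could use its own backing store instead. */
    static void *
    my_chunk_alloc(void *new_addr, size_t size, size_t alignment, bool *zero,
        bool *commit, unsigned arena_ind)
    {
        return default_hooks.alloc(new_addr, size, alignment, zero, commit,
            arena_ind);
    }

    int
    install_chunk_hooks(void)
    {
        chunk_hooks_t hooks;
        size_t len = sizeof(hooks);

        /* Read arena 0's current hooks, override the allocation hook, and
         * write the modified table back. */
        if (mallctl("arena.0.chunk_hooks", &hooks, &len, NULL, 0) != 0)
            return 1;
        default_hooks = hooks;
        hooks.alloc = my_chunk_alloc;
        return mallctl("arena.0.chunk_hooks", NULL, NULL, &hooks, sizeof(hooks));
    }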
- Do not reallocate huge objects in place if the number of backing
chunks would change.
- Do not cache multi-chunk mappings.
This resolves #213.
Fix huge_ralloc_no_move() to succeed if an allocation request results in
the same usable size as the existing allocation, even if the request
size is smaller than the usable size. This bug did not cause
correctness issues, but it could cause unnecessary moves during
reallocation.
huge_ralloc() passes a size that may not be precisely a size class, so
make huge_palloc() handle the more general case of a size input rather
than usize.
This regression appears to have been introduced by the addition of
in-place huge reallocation; as such it was never incorporated into a
release.
Take large_pad into account when determining whether an aligned
allocation can be satisfied by a large size class.
This regression was introduced by
8a03cf039cd06f9fa6972711195055d865673966 (Implement cache index
randomization for large allocations.).
This change merely documents that arena_palloc_large() always receives
usize as its argument.
Macro expansion happens too late for the #undef directives to work as a
mechanism for preventing accidental direct use of the PRI* macros.
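A small illustration of the problem with hypothetical macro names: a wrapper macro defined in terms of PRIu64 expands only at its point of use, so an #undef placed after the definition still breaks every later use.

    #include <inttypes.h>
    #include <stdio.h>

    #define FMTu64 PRIu64   /* expands to PRIu64 at the *use* site */
    #undef PRIu64           /* meant to forbid direct PRIu64 use... */

    void
    print_u64(uint64_t x)
    {
        /* ...but this no longer compiles: FMTu64 -> PRIu64, which is gone. */
        printf("%" FMTu64 "\n", x);
    }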
This resolves #83.
Create and use FMT* macros that are equivalent to the PRI* macros that
inttypes.h defines. This allows uniform use of the Unix-specific format
specifiers, e.g. "%zu", as well as avoiding Windows-specific definitions
of e.g. PRIu64.

Add ffs()/ffsl() support for compiling with gcc.

Extract compatibility definitions of ENOENT, EINVAL, EAGAIN, EPERM,
ENOMEM, and ENORANGE into include/msvc_compat/windows_extra.h and
use the file for tests as well as for core jemalloc code.
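A sketch of the approach with illustrative values (not jemalloc's exact definitions): the FMT* macros carry the literal format strings themselves instead of forwarding to PRI*, so both the MSVC/MinGW CRT and Unix toolchains get specifiers they understand.

    #include <stdio.h>
    #include <stdint.h>

    #ifdef _WIN32
    #  define FMTu64 "I64u"   /* msvcrt spelling of a 64-bit unsigned */
    #  define FMTzu  "Iu"     /* msvcrt spelling of %zu */
    #else
    #  define FMTu64 "llu"    /* assumes uint64_t == unsigned long long */
    #  define FMTzu  "zu"
    #endif

    void
    report(size_t nbytes, uint64_t nrequests)
    {
        printf("allocated %" FMTzu " bytes over %" FMTu64 " requests\n",
            nbytes, nrequests);
    }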
This regression was introduced by
1b0e4abbfdbcc1c1a71d1f617adb19951109bfce (Port mq_get() to MinGW.).
Replace JEMALLOC_ATTR(format(printf, ...)) with
JEMALLOC_FORMAT_PRINTF(), so that configuration feature tests can
omit the attribute if it would cause extraneous compilation warnings.
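A sketch of such a wrapper; the configure-time symbol name is an assumption, and the declared function is only illustrative. The attribute is applied only when the feature test says it is safe.

    #include <stddef.h>

    /* Defined (or not) by a configure-time feature test. */
    #ifdef HAVE_ATTR_FORMAT_PRINTF
    #  define JEMALLOC_FORMAT_PRINTF(s, i) __attribute__((format(printf, s, i)))
    #else
    #  define JEMALLOC_FORMAT_PRINTF(s, i)
    #endif

    /* Argument 3 is the format string; variadic arguments start at 4. */
    JEMALLOC_FORMAT_PRINTF(3, 4)
    void log_printf(char *buf, size_t buflen, const char *format, ...);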
Only use __declspec(nothrow) in C++ mode.
This resolves #244.
As per gcc documentation:
The alloc_size attribute is used to tell the compiler that the function
return value points to memory (...)
This resolves #245.
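For reference, this is how the attribute is typically applied (function names here are illustrative): the indices name the parameters that give the size of the memory the returned pointer refers to.

    #include <stddef.h>

    /* Single size parameter (argument 1). */
    void *my_malloc(size_t size)
        __attribute__((malloc, alloc_size(1)));

    /* Total size is the product of arguments 1 and 2, as with calloc(). */
    void *my_calloc(size_t nmemb, size_t size)
        __attribute__((malloc, alloc_size(1, 2)));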
Fixes warning with newer GCCs:
include/jemalloc/jemalloc.h:229:2: warning: extra ';' [-Wpedantic]
};
^
This change improves interaction with transparent huge pages, e.g.
reduced page faults (at least in the absence of unused dirty page
purging).
This effectively reverts 97c04a93838c4001688fe31bf018972b4696efe2 (Use
first-fit rather than first-best-fit run/chunk allocation.). In some
pathological cases, first-fit search dominates allocation time, and it
also tends not to converge as readily on a steady state of memory
layout, since precise allocation order has a bigger effect than for
first-best-fit.
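A toy sketch of the distinction over an array of free runs sorted by (size, address); jemalloc keeps these in red-black trees, so treat the data structure here as an illustrative assumption.

    #include <stddef.h>
    #include <stdint.h>

    typedef struct { size_t size; uintptr_t addr; } run_t;

    /* First-best-fit: smallest adequate size, lowest address within that
     * size.  With (size, address) ordering this is simply the first
     * adequate element, i.e. a single lower-bound lookup in a tree. */
    static run_t *
    first_best_fit(run_t *runs, size_t n, size_t want)
    {
        for (size_t i = 0; i < n; i++) {
            if (runs[i].size >= want)
                return &runs[i];
        }
        return NULL;
    }

    /* First-fit: lowest address among *all* adequate runs, which requires
     * examining candidates across every adequate size class. */
    static run_t *
    first_fit(run_t *runs, size_t n, size_t want)
    {
        run_t *best = NULL;
        for (size_t i = 0; i < n; i++) {
            if (runs[i].size >= want &&
                (best == NULL || runs[i].addr < best->addr))
                best = &runs[i];
        }
        return best;
    }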
Add various function attributes to the exported functions to give the
compiler more information to work with during optimization, and also
specify throw() when compiling with C++ on Linux, in order to adequately
match what __THROW does in glibc.
This resolves #237.
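A sketch of the C++ part; the macro name is an assumption and the declarations are abbreviated. The prototypes pick up throw() so they stay compatible with glibc's __THROW-decorated declarations.

    #include <stddef.h>

    #ifdef __cplusplus
    #  define JEMALLOC_CXX_THROW throw()   /* matches glibc's __THROW on Linux */
    #else
    #  define JEMALLOC_CXX_THROW
    #endif

    void *je_malloc(size_t size) JEMALLOC_CXX_THROW;
    void  je_free(void *ptr) JEMALLOC_CXX_THROW;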
This {bug,regression} was introduced by
155bfa7da18cab0d21d87aa2dce4554166836f5d (Normalize size classes.).
This resolves #241.
Conditionally define ENOENT, EINVAL, etc. (was unconditional).
Add/use PRIzu, PRIzd, and PRIzx for use in malloc_printf() calls. gcc issued
(harmless) warnings since e.g. "%zu" should be "%Iu" on Windows, and the
alternative to this workaround would have been to disable the function
attributes which cause gcc to look for type mismatches in formatted printing
function calls.
Fix typos in ChangeLog.
- Set opt_lg_chunk based on run-time OS setting
- Verify LG_PAGE is compatible with run-time OS setting
- When targeting Windows Vista or newer, use SRWLOCK instead of CRITICAL_SECTION
- When targeting Windows Vista or newer, statically initialize init_lock
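A sketch of the lock selection with simplified type and helper names: SRWLOCK is available from Vista on and, unlike CRITICAL_SECTION, can be statically initialized, which is what lets init_lock avoid runtime setup.

    #include <windows.h>

    #if _WIN32_WINNT >= 0x0600           /* Windows Vista or newer */
    typedef SRWLOCK bootstrap_lock_t;
    static bootstrap_lock_t init_lock = SRWLOCK_INIT;   /* static init */
    #  define bootstrap_lock(l)   AcquireSRWLockExclusive(l)
    #  define bootstrap_unlock(l) ReleaseSRWLockExclusive(l)
    #else
    typedef CRITICAL_SECTION bootstrap_lock_t;
    static bootstrap_lock_t init_lock;   /* needs InitializeCriticalSection() */
    #  define bootstrap_lock(l)   EnterCriticalSection(l)
    #  define bootstrap_unlock(l) LeaveCriticalSection(l)
    #endif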
Fix size class overflow handling for malloc(), posix_memalign(),
memalign(), calloc(), and realloc() when profiling is enabled.
Remove an assertion that erroneously caused arena_sdalloc() to fail when
profiling was enabled.
This resolves #232.
This resolves #235.
The regressions were never merged into the master branch.
This avoids the potential surprise of deallocating an object with one
tcache specified, and having the object cached in a different tcache
once it drains from the quarantine.
Now that small allocation runs have fewer regions due to run metadata
residing in chunk headers, an explicit minimum tcache count is needed to
make sure that tcache adequately amortizes synchronization overhead.
Take into account large_pad when computing whether to pass the
deallocation request to tcache_dalloc_large(), so that the largest
cacheable size makes it back to tcache. This regression was introduced
by 8a03cf039cd06f9fa6972711195055d865673966 (Implement cache index
randomization for large allocations.).
Pass large allocation requests to arena_malloc() when possible. This
regression was introduced by 155bfa7da18cab0d21d87aa2dce4554166836f5d
(Normalize size classes.).
This regression was introduced by
155bfa7da18cab0d21d87aa2dce4554166836f5d (Normalize size classes.).
Extract szad size quantization into {extent,run}_quantize(), and
quantize szad run sizes to the union of valid small region run sizes and
large run sizes.

Refactor iteration in arena_run_first_fit() to use
run_quantize{,_first,_next}(), and add support for padded large runs.

For large allocations that have no specified alignment constraints,
compute a pseudo-random offset from the beginning of the first backing
page that is a multiple of the cache line size. Under typical
configurations with 4-KiB pages and 64-byte cache lines this results in
a uniform distribution among 64 page boundary offsets.

Add the --disable-cache-oblivious option, primarily intended for
performance testing.

This resolves #13.
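The offset computation described above reduces to a small amount of arithmetic; a sketch under the typical 4-KiB page / 64-byte cache line assumption, with illustrative names:

    #include <stddef.h>
    #include <stdint.h>

    #define PAGE      4096   /* typical page size */
    #define CACHELINE 64     /* typical cache line size */

    /* Map a pseudo-random value to one of PAGE/CACHELINE = 64 possible
     * cache-line-aligned offsets within the first backing page. */
    static size_t
    large_run_offset(uint64_t prng)
    {
        return (size_t)(prng % (PAGE / CACHELINE)) * CACHELINE;
    }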