Commit messages
This will prevent accidental creation of potential integer truncation
bugs when developing on LP64 systems.
Use appropriate versions to resolve 64-to-32-bit data loss warnings.
This resolves #333.
These tree types converged to become identical, yet they still had
independently generated red-black tree implementations.
Separate run trees by index, replacing the previous quantize logic.
Quantization by index is now performed only on insertion/removal from
the tree, and not on node comparison, saving some CPU. This also means
we don't have to dereference the miscelm* pointers, saving half of the
memory loads from miscelms/mapbits that have fallen out of cache. A
linear scan of the indices appears to be fast enough.
The only cost of this is an extra tree array in each arena.
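A rough sketch of the idea follows; the names and the bucket function are illustrative, and simple linked lists stand in for the per-index red-black trees:

```c
#include <assert.h>
#include <stddef.h>

#define NQIDX 8     /* illustrative number of run size indices */
#define PAGE 4096

typedef struct run_s {
    size_t size;
    struct run_s *next;
} run_t;

/* Illustrative quantization: page-granular buckets. The point is that
 * the index is computed once per insert/remove, never inside a tree
 * comparator that would have to chase node pointers. */
static size_t run_quantize_index(size_t size) {
    size_t idx = size / PAGE;
    return idx < NQIDX ? idx : NQIDX - 1;
}

typedef struct {
    /* One container per index: the "extra tree array" in each arena.
     * Lists stand in for the trees here. */
    run_t *runs_avail[NQIDX];
} arena_t;

static void arena_run_insert(arena_t *arena, run_t *run) {
    size_t idx = run_quantize_index(run->size);  /* quantize once */
    run->next = arena->runs_avail[idx];
    arena->runs_avail[idx] = run;
}

/* First fit via a linear scan of the indices. */
static run_t *arena_run_first_fit(arena_t *arena, size_t size) {
    for (size_t i = run_quantize_index(size); i < NQIDX; i++) {
        run_t *run = arena->runs_avail[i];
        if (run != NULL) {
            arena->runs_avail[i] = run->next;
            return run;
        }
    }
    return NULL;
}
```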
Since this is an intrusive tree, rbt_nil is the whole size of the node
and can be quite large. For example, miscelm is ~100 bytes.
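To illustrate why that matters (the field sizes below are made up; the point is that a sentinel declared with the element type pays for a whole element):

```c
#include <assert.h>
#include <stddef.h>

/* Intrusive link embedded in the element itself. */
typedef struct rb_node_s {
    struct rb_node_s *left;
    struct rb_node_s *right_red;  /* low bit doubles as the color */
} rb_node_t;

/* Stand-in for miscelm: the payload dwarfs the embedded link. */
typedef struct miscelm_s {
    rb_node_t link;
    unsigned char payload[96];  /* illustrative; "miscelm is ~100 bytes" */
} miscelm_t;

/* A nil sentinel of the element type costs a full element per tree
 * type, whereas encoding empty children as NULL costs nothing. */
static miscelm_t rbt_nil;
```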
Reduce run quantization overhead by generating lookup tables during
bootstrapping, and using the tables for all subsequent run quantization.
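A minimal sketch of the table approach, assuming (purely for illustration) that quantized run sizes are plain page multiples; the real quantized size set is more involved:

```c
#include <assert.h>
#include <stddef.h>

#define PAGE 4096
#define MAX_RUN_PAGES 32  /* illustrative bound */

static size_t run_quantize_floor_tab[MAX_RUN_PAGES + 1];
static size_t run_quantize_ceil_tab[MAX_RUN_PAGES + 1];

/* Generate the tables once, during bootstrapping. */
static void run_quantize_init(void) {
    for (size_t i = 0; i <= MAX_RUN_PAGES; i++) {
        run_quantize_floor_tab[i] = i * PAGE;
        run_quantize_ceil_tab[i] = i * PAGE;
    }
}

/* Every subsequent quantization is a table lookup rather than a
 * recomputation. */
static size_t run_quantize_floor(size_t size) {
    return run_quantize_floor_tab[size / PAGE];
}

static size_t run_quantize_ceil(size_t size) {
    return run_quantize_ceil_tab[(size + PAGE - 1) / PAGE];
}
```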
In practice this bug had limited impact (and then only by increasing
chunk fragmentation) because run_quantize_ceil() returned correct
results except for inputs that could only arise from aligned allocation
requests that required more than page alignment.
This bug existed in the original run quantization implementation, which
was introduced by 8a03cf039cd06f9fa6972711195055d865673966 (Implement
cache index randomization for large allocations.).
Also rename run_quantize_*() to improve clarity. These tests
demonstrate that run_quantize_ceil() is flawed.
Use a single uint64_t in nstime_t to store nanoseconds rather than using
struct timespec. This reduces fragility around conversions between long
and uint64_t, especially missing casts that only cause problems on
32-bit platforms.
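A sketch of that representation (function names follow the nstime_* convention mentioned above, but the exact API here is assumed):

```c
#include <assert.h>
#include <stdint.h>

#define BILLION UINT64_C(1000000000)

/* Time is a single unsigned 64-bit nanosecond count, so there is no
 * long/uint64_t mixing as with struct timespec's tv_sec/tv_nsec. */
typedef struct {
    uint64_t ns;
} nstime_t;

static void nstime_init2(nstime_t *time, uint64_t sec, uint64_t nsec) {
    time->ns = sec * BILLION + nsec;
}

static uint64_t nstime_sec(const nstime_t *time) {
    return time->ns / BILLION;
}

static uint64_t nstime_nsec(const nstime_t *time) {
    return time->ns % BILLION;
}

/* Arithmetic needs no carry handling between two fields. */
static void nstime_add(nstime_t *time, const nstime_t *addend) {
    time->ns += addend->ns;
}
```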
struct timespec is already defined by the system (at least on MinGW).
Add jemalloc_ffs64() and use it instead of jemalloc_ffsl() in
prng_range(), since long is not guaranteed to be a 64-bit type.
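A sketch of such a helper, assuming a GCC/Clang-style builtin (the real implementation selects an available builtin at configure time):

```c
#include <assert.h>
#include <stdint.h>

/* 1-based index of the least significant set bit, 0 if none. The
 * operand type is explicitly 64-bit: long long is at least 64 bits,
 * unlike long, which is 32 bits on LLP64 systems such as 64-bit
 * Windows. */
static int jemalloc_ffs64(uint64_t bitmap) {
    return __builtin_ffsll((long long)bitmap);
}
```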
in Cygwin x64
Reported by Christopher Ferris <cferris@google.com>.
This is an alternative to the existing ratio-based unused dirty page
purging, and is intended to eventually become the sole purging
mechanism.
Add mallctls:
- opt.purge
- opt.decay_time
- arena.<i>.decay
- arena.<i>.decay_time
- arenas.decay_time
- stats.arenas.<i>.decay_time
This resolves #325.
Check in a generated smootherstep table as smoothstep.h rather than
generating it at configure time, since not all systems (e.g. Windows)
have dc.
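The table holds samples of the smootherstep polynomial; generating it in C rather than dc would look roughly like this (step count and output format are illustrative):

```c
#include <assert.h>
#include <stdio.h>

#define SMOOTHSTEP_NSTEPS 200  /* illustrative step count */

/* Smootherstep: 6x^5 - 15x^4 + 10x^3, rising smoothly from 0 to 1 on
 * [0, 1] with zero first and second derivatives at both endpoints. */
static double smootherstep(double x) {
    return x * x * x * (x * (x * 6.0 - 15.0) + 10.0);
}

/* Emit one table entry per step, suitable for checking in as a
 * generated header. */
static void emit_table(void) {
    for (int i = 1; i <= SMOOTHSTEP_NSTEPS; i++) {
        double h = smootherstep((double)i / SMOOTHSTEP_NSTEPS);
        printf("    %.17g,\n", h);
    }
}
```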
Refactor out arena_compute_npurge() by integrating its logic into
arena_stash_dirty() as an incremental computation.
Refactor arenas_cache tsd into arenas_tdata, which is a structure of
type arena_tdata_t.
Refactor early return logic in arena_ralloc_no_move() to return early on
failure rather than on success.
Remove 32-bit variant, convert prng64() to prng_lg_range(), and add
prng_range().
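Sketched below with an illustrative 64-bit LCG; the constants and the rejection loop are assumptions, not the commit's exact code:

```c
#include <assert.h>
#include <stdint.h>

/* One LCG step; return the top lg_range bits (the strongest ones).
 * Requires 1 <= lg_range <= 64. Constants are Knuth's MMIX LCG,
 * chosen here for illustration. */
static uint64_t prng_lg_range(uint64_t *state, unsigned lg_range) {
    *state = *state * UINT64_C(6364136223846793005)
        + UINT64_C(1442695040888963407);
    return *state >> (64 - lg_range);
}

/* Uniform in [0, range) for an arbitrary range, via rejection
 * sampling over the smallest power of two >= range. */
static uint64_t prng_range(uint64_t *state, uint64_t range) {
    if (range < 2) {
        return 0;
    }
    /* Smallest lg_range such that 2^lg_range >= range. */
    unsigned lg_range = 64;
    while ((UINT64_C(1) << (lg_range - 1)) >= range) {
        lg_range--;
    }
    uint64_t ret;
    do {
        ret = prng_lg_range(state, lg_range);
    } while (ret >= range);
    return ret;
}
```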
Implement ticker, which provides a simple API for ticking off some
number of events before indicating that the ticker has hit its limit.
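A sketch of such an API; the countdown-and-reset behavior is the natural reading of the message, but the exact semantics here are assumed:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    int32_t tick;    /* events remaining before the limit */
    int32_t nticks;  /* limit to reset to */
} ticker_t;

static void ticker_init(ticker_t *ticker, int32_t nticks) {
    ticker->tick = nticks;
    ticker->nticks = nticks;
}

/* Returns true once every nticks calls, resetting the countdown. */
static bool ticker_tick(ticker_t *ticker) {
    if (--ticker->tick == 0) {
        ticker->tick = ticker->nticks;
        return true;
    }
    return false;
}
```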
Add --with-malloc-conf, which makes it possible to embed a default
options string during configuration.
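For example, baking in the decay-based purging options added above (option names mirror the opt.* mallctls; the exact string is illustrative):

```shell
# Embed a default options string into the library at configure time.
./configure --with-malloc-conf="purge:decay,decay_time:30"
```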
Fix test_stats_arenas_summary to deallocate before asserting that
purging must have happened.
Pass the retain and exclude parameters to the /pprof/symbol pprof server
endpoint so that the server has the opportunity to optimize which
symbols it looks up and/or returns mappings for.