Replace the single-character run-time flags with key/value pairs, which
can be set via the malloc_conf global, /etc/malloc.conf, and the
MALLOC_CONF environment variable.
Replace the JEMALLOC_PROF_PREFIX environment variable with the
"opt.prof_prefix" option.
Replace umax2s() with u2s().
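
A minimal sketch of the key/value interface described above, assuming an unprefixed jemalloc build with profiling compiled in; the option string is only an example:

```c
#include <stdlib.h>

/* Compile-time default options, read by jemalloc at initialization; the
 * same "key:value,key:value" string may instead come from /etc/malloc.conf
 * or the MALLOC_CONF environment variable, e.g.
 *   MALLOC_CONF="prof_prefix:jeprof.out" ./a.out
 */
const char *malloc_conf = "prof_prefix:jeprof.out";

int main(void) {
    void *p = malloc(64);
    free(p);
    return 0;
}
```
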
Base dynamic structure size on offsetof(), rather than subtracting the
size of the dynamic structure member. Results could differ on systems
with strict data structure alignment requirements.
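
A small sketch of the offsetof()-based sizing, using an illustrative structure rather than jemalloc's actual types:

```c
#include <stddef.h>
#include <stdlib.h>

/* Illustrative structure with a dynamically sized trailing member. */
typedef struct {
    size_t len;
    unsigned char data[1];  /* actually 'len' bytes long */
} blob_t;

blob_t *blob_new(size_t nbytes) {
    /* offsetof() yields the true start of the trailing member, whereas
     * sizeof(blob_t) - sizeof(((blob_t *)0)->data) bakes struct padding
     * into the result and can differ on systems with strict alignment
     * requirements. */
    blob_t *b = malloc(offsetof(blob_t, data) + nbytes);
    if (b != NULL)
        b->len = nbytes;
    return b;
}

int main(void) {
    free(blob_new(10));
    return 0;
}
```
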
Omit the first map_bias elements of the map in arena_chunk_t. This
avoids barely spilling over into an extra chunk header page for common
chunk sizes.
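
A back-of-the-envelope illustration of the saving, with made-up sizes (these are not jemalloc's actual header or map-element sizes):

```c
#include <stdio.h>

int main(void) {
    const size_t page = 4096, chunk_npages = 256;
    const size_t hdr_fixed = 48, map_elem = 32;  /* made-up sizes */
    const size_t map_bias = 2;   /* pages occupied by the header itself */

    /* Keeping a map element for every page in the chunk barely spills the
     * header into a third page; omitting the first map_bias elements
     * (which describe the header's own pages) keeps it to two. */
    size_t full   = hdr_fixed + chunk_npages * map_elem;
    size_t biased = hdr_fixed + (chunk_npages - map_bias) * map_elem;
    printf("header pages, full map:   %zu\n", (full + page - 1) / page);
    printf("header pages, biased map: %zu\n", (biased + page - 1) / page);
    return 0;
}
```
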
Add allocm(), rallocm(), sallocm(), and dallocm(), which are a
functional superset of malloc(), calloc(), posix_memalign(),
malloc_usable_size(), and free().
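
A hedged usage sketch of this experimental interface, assuming the header installs as <jemalloc/jemalloc.h>; the flag and return-value names (ALLOCM_ALIGN, ALLOCM_ZERO, ALLOCM_NO_MOVE, ALLOCM_SUCCESS) follow the API of that era as best recalled, so treat the details as an approximation rather than a reference:

```c
#include <stdio.h>
#include <jemalloc/jemalloc.h>

int main(void) {
    void *p;
    size_t rsize;

    /* 4 KiB-aligned, zeroed allocation of at least 1000 bytes; rsize
     * receives the usable size, subsuming malloc_usable_size(). */
    if (allocm(&p, &rsize, 1000, ALLOCM_ALIGN(4096) | ALLOCM_ZERO) != ALLOCM_SUCCESS)
        return 1;
    printf("usable size: %zu\n", rsize);

    /* Try to grow in place; ALLOCM_NO_MOVE forbids relocation. */
    if (rallocm(&p, &rsize, 2000, 0, ALLOCM_NO_MOVE) == ALLOCM_SUCCESS)
        printf("grown in place to %zu bytes\n", rsize);

    dallocm(p, 0);
    return 0;
}
```
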
Add Mac OS X support, based in large part on the OS X support in
Mozilla's version of jemalloc.

Properly maintain tcache_bin_t's avail pointer such that it is NULL if
no objects are cached. This only caused problems during thread cache
destruction, since cache flushing otherwise never occurs on an empty
bin.

Split arena->runs_avail into arena->runs_avail_{clean,dirty}, and
preferentially allocate dirty runs.
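
A self-contained sketch of the policy, using an illustrative free-list representation rather than jemalloc's run trees: the allocator consults the dirty list first, so already-touched pages are reused before clean ones.

```c
#include <stddef.h>
#include <stdio.h>

typedef struct run { size_t npages; struct run *next; } run_t;

/* First-fit search over a singly linked list (stand-in for a run tree). */
static run_t *take_first_fit(run_t **list, size_t npages) {
    for (run_t **rp = list; *rp != NULL; rp = &(*rp)->next) {
        if ((*rp)->npages >= npages) {
            run_t *r = *rp;
            *rp = r->next;
            return r;
        }
    }
    return NULL;
}

int main(void) {
    run_t clean = { 8, NULL }, dirty = { 8, NULL };
    run_t *runs_avail_clean = &clean, *runs_avail_dirty = &dirty;

    /* Preferentially allocate from the dirty runs; fall back to clean. */
    run_t *r = take_first_fit(&runs_avail_dirty, 4);
    if (r == NULL)
        r = take_first_fit(&runs_avail_clean, 4);
    printf("allocated from the %s list\n", r == &dirty ? "dirty" : "clean");
    return 0;
}
```
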
Remove medium size classes, because concurrent dirty page purging is
no longer capable of purging inactive dirty pages inside active runs
(due to recent arena/bin locking changes).
Enhance tcache to support caching large objects, so that the same range
of size classes is still cached, despite the removal of medium size
class support.

Initialize small run header before dropping arena->lock;
arena_chunk_purge() relies on valid small run headers during run
iteration.
Add some assertions.

For bin-related allocation, protect data structures with bin locks
rather than arena locks. Arena locks remain for run
allocation/deallocation and other miscellaneous operations.
Restructure statistics counters to maintain per-bin
allocated/nmalloc/ndalloc counts, but continue to provide arena-wide
statistics via aggregation in the ctl code.
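
A minimal sketch of the locking and statistics arrangement described above, with illustrative types rather than jemalloc's actual structures: each bin carries its own mutex and counters, and arena-wide totals are produced by aggregation.

```c
#include <pthread.h>
#include <stdint.h>
#include <stddef.h>

#define NBINS 4

typedef struct {
    pthread_mutex_t lock;        /* protects this bin's runs and stats */
    uint64_t nmalloc, ndalloc;
    size_t allocated;
} bin_t;

typedef struct {
    pthread_mutex_t lock;        /* retained for run alloc/dealloc, etc. */
    bin_t bins[NBINS];
} arena_t;

/* Aggregate per-bin counters into arena-wide statistics, as the ctl
 * code would when reporting. */
static void arena_stats_merge(arena_t *a, uint64_t *nmalloc,
                              uint64_t *ndalloc, size_t *allocated) {
    *nmalloc = *ndalloc = 0;
    *allocated = 0;
    for (int i = 0; i < NBINS; i++) {
        bin_t *bin = &a->bins[i];
        pthread_mutex_lock(&bin->lock);
        *nmalloc += bin->nmalloc;
        *ndalloc += bin->ndalloc;
        *allocated += bin->allocated;
        pthread_mutex_unlock(&bin->lock);
    }
}

int main(void) {
    arena_t a;
    pthread_mutex_init(&a.lock, NULL);
    for (int i = 0; i < NBINS; i++) {
        pthread_mutex_init(&a.bins[i].lock, NULL);
        a.bins[i].nmalloc = a.bins[i].ndalloc = 0;
        a.bins[i].allocated = 0;
    }
    uint64_t nmalloc, ndalloc;
    size_t allocated;
    arena_stats_merge(&a, &nmalloc, &ndalloc, &allocated);
    return 0;
}
```
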
Use chains of cached objects, rather than arrays of pointers.
Since tcache_bin_t is no longer dynamically sized, convert tcache_t's
tbin to an array of structures, rather than an array of pointers. This
implicitly removes tcache_bin_{create,destroy}(), which further
simplifies the fast path for malloc/free.
Use cacheline alignment for tcache_t allocations.
Remove runtime configuration option for number of tcache bin slots, and
replace it with a boolean option for enabling/disabling tcache.
Limit the number of tcache objects to the lesser of TCACHE_NSLOTS_MAX
and 2X the number of regions per run for the size class.
For GC-triggered flush, discard 3/4 of the objects below the low water
mark, rather than 1/2.
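
An illustrative sketch of caching freed objects on a chain threaded through the objects themselves, so the bin no longer needs a separately sized slot array; the types and field names are stand-ins, not jemalloc's exact layout:

```c
#include <stdio.h>
#include <stdlib.h>

typedef struct { void *next; } tcache_obj_t;  /* overlaid on a free object */

typedef struct {
    void    *avail;     /* head of the chain; NULL when the bin is empty */
    unsigned ncached;
} tcache_bin_t;

static void tbin_push(tcache_bin_t *tbin, void *obj) {
    ((tcache_obj_t *)obj)->next = tbin->avail;
    tbin->avail = obj;
    tbin->ncached++;
}

static void *tbin_pop(tcache_bin_t *tbin) {
    void *obj = tbin->avail;
    if (obj != NULL) {
        tbin->avail = ((tcache_obj_t *)obj)->next;
        tbin->ncached--;
    }
    return obj;
}

int main(void) {
    tcache_bin_t tbin = { NULL, 0 };
    tbin_push(&tbin, malloc(32));
    tbin_push(&tbin, malloc(32));
    printf("cached objects: %u\n", tbin.ncached);
    free(tbin_pop(&tbin));
    free(tbin_pop(&tbin));
    return 0;
}
```
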
Rather than passing four strings to malloc_message(), malloc_write4(),
and all the functions that use them, only pass one string.
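
A hedged sketch of the simplification: with a single-string hook, error paths emit pieces one string at a time instead of threading four arguments everywhere. The hook signature and helper names here are illustrative, not necessarily jemalloc's exact ones.

```c
#include <string.h>
#include <unistd.h>

/* Application-overridable hook taking a single string (illustrative). */
static void (*malloc_message)(const char *s) = NULL;

static void wrtmessage(const char *s) {
    /* Use write(2) rather than stdio, since this may run while the
     * allocator itself is in a failed or partially initialized state. */
    (void)write(STDERR_FILENO, s, strlen(s));
}

static void malloc_write(const char *s) {
    if (malloc_message != NULL)
        malloc_message(s);
    else
        wrtmessage(s);
}

int main(void) {
    malloc_write("<jemalloc>: ");
    malloc_write("example error message\n");
    return 0;
}
```
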