path: root/jemalloc/src/tcache.c
Commit log
* Replace JEMALLOC_OPTIONS with MALLOC_CONF.
  Jason Evans, 2010-10-24 (1 file changed, +6/-6)

  Replace the single-character run-time flags with key/value pairs, which can be set via the malloc_conf global, /etc/malloc.conf, and the MALLOC_CONF environment variable. Replace the JEMALLOC_PROF_PREFIX environment variable with the "opt.prof_prefix" option. Replace umax2s() with u2s().
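  The key/value configuration described above can be exercised from a program. A minimal sketch follows; the "tcache:false" option string is illustrative, valid keys depend on the jemalloc version, and the malloc_conf symbol only takes effect when jemalloc is actually linked (otherwise it is silently ignored):

  ```c
  #include <stdio.h>
  #include <stdlib.h>

  /* With jemalloc linked in, run-time options can be supplied through
   * this global. When the system allocator is used instead, the symbol
   * is simply unused, so the program still runs. */
  const char *malloc_conf = "tcache:false";

  int main(void) {
      void *p = malloc(64);
      if (p != NULL)
          printf("allocation succeeded\n");
      free(p);
      return 0;
  }
  ```

  The same string could instead be placed in /etc/malloc.conf or the MALLOC_CONF environment variable, per the commit message above.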
* Use offsetof() when sizing dynamic structures.
  Jason Evans, 2010-10-02 (1 file changed, +1/-1)

  Base dynamic structure size on offsetof(), rather than subtracting the size of the dynamic structure member. Results could differ on systems with strict data structure alignment requirements.
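  The distinction can be seen with any structure that ends in a dynamically sized member. A sketch with illustrative types (not jemalloc's actual structures): offsetof() yields the exact prefix size, while subtracting the member size from sizeof() keeps any tail padding the compiler inserted.

  ```c
  #include <stddef.h>
  #include <stdio.h>

  /* Illustrative dynamically sized structure; the trailing array's
   * real length is chosen at run time. */
  typedef struct {
      void *owner;
      char  tag;
      char  data[1]; /* dynamically sized in practice */
  } dyn_t;

  int main(void) {
      size_t n = 16;
      /* offsetof() gives the exact byte offset of the dynamic member,
       * independent of any tail padding included in sizeof(dyn_t). */
      size_t via_offsetof = offsetof(dyn_t, data) + n;
      /* Subtraction retains the tail padding, so it can over-count on
       * targets with strict alignment requirements. */
      size_t via_subtract = sizeof(dyn_t) - sizeof(((dyn_t *)0)->data) + n;
      printf("offsetof-based: %zu, subtraction-based: %zu\n",
          via_offsetof, via_subtract);
      printf("offsetof never larger: %s\n",
          via_offsetof <= via_subtract ? "yes" : "no");
      return 0;
  }
  ```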
* Omit chunk header in arena chunk map.
  Jason Evans, 2010-10-02 (1 file changed, +6/-6)

  Omit the first map_bias elements of the map in arena_chunk_t. This avoids barely spilling over into an extra chunk header page for common chunk sizes.
* Add {,r,s,d}allocm().
  Jason Evans, 2010-09-17 (1 file changed, +3/-1)

  Add allocm(), rallocm(), sallocm(), and dallocm(), which are a functional superset of malloc(), calloc(), posix_memalign(), malloc_usable_size(), and free().
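  The calling convention these functions share can be sketched with a stand-in: a status code is returned, the new pointer is written through *ptr, and the usable size is optionally reported through *rsize. The stand-in below is backed by plain malloc() so the example is self-contained; it is not jemalloc's implementation, and the ex_ names are hypothetical.

  ```c
  #include <stdio.h>
  #include <stdlib.h>

  #define EX_ALLOCM_SUCCESS 0
  #define EX_ALLOCM_ERR_OOM 1

  /* Stand-in mimicking the allocm()-style contract. */
  static int ex_allocm(void **ptr, size_t *rsize, size_t size, int flags) {
      (void)flags; /* the real API accepts alignment/zeroing flags */
      void *p = malloc(size);
      if (p == NULL)
          return EX_ALLOCM_ERR_OOM;
      *ptr = p;
      if (rsize != NULL)
          *rsize = size; /* jemalloc would report the true usable size */
      return EX_ALLOCM_SUCCESS;
  }

  int main(void) {
      void *p;
      size_t usable;
      if (ex_allocm(&p, &usable, 100, 0) == EX_ALLOCM_SUCCESS) {
          printf("got %zu usable bytes\n", usable);
          free(p);
      }
      return 0;
  }
  ```

  Combining allocation, size reporting, and flags in one call is what makes the family a functional superset of the standard malloc()/posix_memalign()/malloc_usable_size() trio.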
* Port to Mac OS X.
  Jason Evans, 2010-09-12 (1 file changed, +20/-6)

  Add Mac OS X support, based in large part on the OS X support in Mozilla's version of jemalloc.
* Fix tcache crash during thread cleanup.
  Jason Evans, 2010-04-14 (1 file changed, +12/-14)

  Properly maintain tcache_bin_t's avail pointer such that it is NULL if no objects are cached. This only caused problems during thread cache destruction, since cache flushing otherwise never occurs on an empty bin.
* Track dirty and clean runs separately.
  Jason Evans, 2010-03-19 (1 file changed, +2/-2)

  Split arena->runs_avail into arena->runs_avail_{clean,dirty}, and preferentially allocate dirty runs.
* Remove medium size classes.
  Jason Evans, 2010-03-17 (1 file changed, +131/-12)

  Remove medium size classes, because concurrent dirty page purging is no longer capable of purging inactive dirty pages inside active runs (due to recent arena/bin locking changes).

  Enhance tcache to support caching large objects, so that the same range of size classes is still cached, despite the removal of medium size class support.
* Fix a run initialization race condition.
  Jason Evans, 2010-03-16 (1 file changed, +7/-6)

  Initialize the small run header before dropping arena->lock, because arena_chunk_purge() relies on valid small run headers during run iteration. Add some assertions.
* Push locks into arena bins.
  Jason Evans, 2010-03-15 (1 file changed, +36/-45)

  For bin-related allocation, protect data structures with bin locks rather than arena locks. Arena locks remain for run allocation/deallocation and other miscellaneous operations.

  Restructure statistics counters to maintain per-bin allocated/nmalloc/ndalloc, but continue to provide arena-wide statistics via aggregation in the ctl code.
* Simplify tcache object caching.
  Jason Evans, 2010-03-14 (1 file changed, +77/-120)

  Use chains of cached objects, rather than using arrays of pointers.

  Since tcache_bin_t is no longer dynamically sized, convert tcache_t's tbin to an array of structures, rather than an array of pointers. This implicitly removes tcache_bin_{create,destroy}(), which further simplifies the fast path for malloc/free. Use cacheline alignment for tcache_t allocations.

  Remove the runtime configuration option for the number of tcache bin slots, and replace it with a boolean option for enabling/disabling tcache.

  Limit the number of tcache objects to the lesser of TCACHE_NSLOTS_MAX and 2X the number of regions per run for the size class.

  For GC-triggered flush, discard 3/4 of the objects below the low water mark, rather than 1/2.
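  The chained-object scheme can be sketched as an intrusive LIFO free list: each cached region's first word stores the link to the next region, so a bin needs only a head pointer and a count, with no separately sized pointer array. Names below are illustrative, not jemalloc's actual structures.

  ```c
  #include <assert.h>
  #include <stdio.h>
  #include <stdlib.h>

  typedef struct {
      void    *head;    /* LIFO chain of cached regions; NULL if empty */
      unsigned ncached;
  } cache_bin_t;

  static void bin_push(cache_bin_t *bin, void *region) {
      *(void **)region = bin->head; /* link through the region itself */
      bin->head = region;
      bin->ncached++;
  }

  static void *bin_pop(cache_bin_t *bin) {
      void *region = bin->head;
      if (region == NULL)
          return NULL; /* caller falls back to the arena */
      bin->head = *(void **)region;
      bin->ncached--;
      return region;
  }

  int main(void) {
      cache_bin_t bin = { NULL, 0 };
      void *a = malloc(32), *b = malloc(32);
      bin_push(&bin, a);
      bin_push(&bin, b);
      assert(bin_pop(&bin) == b); /* most recently cached comes back first */
      assert(bin_pop(&bin) == a);
      assert(bin_pop(&bin) == NULL);
      printf("LIFO chain ok, ncached=%u\n", bin.ncached);
      free(a);
      free(b);
      return 0;
  }
  ```

  Storing the link inside the free region itself is what lets tcache_bin_t shrink to a fixed size, which in turn allows tbin to become an array of structures rather than an array of pointers.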
* Simplify malloc_message().
  Jason Evans, 2010-03-04 (1 file changed, +2/-2)

  Rather than passing four strings to malloc_message(), malloc_write4(), and all the functions that use them, only pass one string.
* Restructure source tree.
  Jason Evans, 2010-02-11 (1 file changed, +335/-0)