path: root/include/jemalloc/internal/base.h
Commit message | Author | Age | Files | Lines
* Resolve bootstrapping issues when embedded in FreeBSD libc. (Jason Evans, 2016-05-11; 1 file, -5/+5)
  b2c0d6322d2307458ae2b28545f8a5c9903d7ef5 (Add witness, a simple online locking validator.) caused a broad propagation of tsd throughout the internal API, but tsd_fetch() was designed to fail prior to tsd bootstrapping. Fix this by splitting tsd_t into non-nullable tsd_t and nullable tsdn_t, and modifying all internal APIs that do not critically rely on tsd to take nullable pointers. Furthermore, add the tsd_booted_get() function so that tsdn_fetch() can probe whether tsd bootstrapping is complete and return NULL if not. All dangerous conversions of nullable pointers are tsdn_tsd() calls that assert-fail on invalid conversion.
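
  As an illustration of the tsd/tsdn split described above, here is a minimal C sketch. Only the function names (tsd_fetch, tsd_booted_get, tsdn_fetch, tsdn_tsd) come from the commit message; the stub types and bodies are simplifying assumptions, not jemalloc's actual tsd machinery:

      #include <assert.h>
      #include <stdbool.h>
      #include <stddef.h>

      typedef struct { int dummy; } tsd_t; /* non-nullable once bootstrapped */
      typedef tsd_t tsdn_t;                /* nullable before bootstrapping */

      static bool tsd_booted = false;      /* set once tsd bootstrap completes */
      static tsd_t tsd_instance;           /* stand-in for real thread-local data */

      static bool tsd_booted_get(void) { return tsd_booted; }
      static tsd_t *tsd_fetch(void) { return &tsd_instance; }

      /* Probe whether bootstrapping is complete; NULL means "not yet". */
      static tsdn_t *
      tsdn_fetch(void) {
          if (!tsd_booted_get()) {
              return NULL;
          }
          return (tsdn_t *)tsd_fetch();
      }

      /* The only sanctioned nullable -> non-nullable conversion. */
      static tsd_t *
      tsdn_tsd(tsdn_t *tsdn) {
          assert(tsdn != NULL);
          return (tsd_t *)tsdn;
      }

  APIs that must work before bootstrapping take a tsdn_t * and tolerate NULL; APIs that genuinely need tsd take tsd_t * and convert via tsdn_tsd(), which asserts rather than silently dereferencing NULL.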
* Add witness, a simple online locking validator. (Jason Evans, 2016-04-14; 1 file, -5/+6)
  This resolves #358.
* Add the "stats.allocated" mallctl.Jason Evans2015-03-241-1/+1
* Move centralized chunk management into arenas. (Jason Evans, 2015-02-12; 1 file, -2/+0)
  Migrate all centralized data structures related to huge allocations and recyclable chunks into arena_t, so that each arena can manage huge allocations and recyclable virtual memory completely independently of other arenas.

  Add chunk node caching to arenas, in order to avoid contention on the base allocator.

  Use chunks_rtree to look up huge allocations rather than a red-black tree. Maintain a per arena unsorted list of huge allocations (which will be needed to enumerate huge allocations during arena reset).

  Remove the --enable-ivsalloc option, make ivsalloc() always available, and use it for size queries if --enable-debug is enabled. The only practical implications of this removal are that 1) ivsalloc() is now always available during live debugging (and the underlying radix tree is available during core-based debugging), and 2) size query validation can no longer be enabled independently of --enable-debug.

  Remove the stats.chunks.{current,total,high} mallctls, and replace their underlying statistics with simpler atomically updated counters used exclusively for gdump triggering. These statistics are no longer very useful because each arena manages chunks independently, and per arena statistics provide similar information.

  Simplify chunk synchronization code, now that base chunk allocation cannot cause recursive lock acquisition.
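
  A hedged C11 sketch of the kind of "atomically updated counter used exclusively for gdump triggering" mentioned above: a dump fires only when a new high water mark is set. The names (curchunks, highchunks, prof_gdump_stub) and the high-water semantics are illustrative assumptions, not the commit's exact code:

      #include <stdatomic.h>
      #include <stddef.h>
      #include <stdio.h>

      /* Illustrative stand-in for jemalloc's profile dump hook. */
      static void prof_gdump_stub(void) { fprintf(stderr, "gdump\n"); }

      static _Atomic size_t curchunks;  /* chunks currently mapped */
      static _Atomic size_t highchunks; /* high water mark seen so far */

      static void
      chunks_add(size_t n) {
          size_t cur = atomic_fetch_add(&curchunks, n) + n;
          size_t high = atomic_load(&highchunks);
          /* Trigger a dump only when a new high water mark is set; on CAS
           * failure, `high` is reloaded and the condition is re-checked. */
          while (cur > high) {
              if (atomic_compare_exchange_weak(&highchunks, &high, cur)) {
                  prof_gdump_stub();
                  break;
              }
          }
      }

  Compared with the removed stats.chunks.* mallctls, this keeps only what gdump needs and avoids any lock around the counters.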
* Refactor base_alloc() to guarantee demand-zeroed memory. (Jason Evans, 2015-02-05; 1 file, -1/+0)
  Refactor base_alloc() to guarantee that allocations are carved from demand-zeroed virtual memory. This supports sparse data structures such as multi-page radix tree nodes.

  Enhance base_alloc() to keep track of fragments which were too small to support previous allocation requests, and try to consume them during subsequent requests. This becomes important when request sizes commonly approach or exceed the chunk size (as could radix tree node allocations).
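
  A toy model of the strategy described above, assuming mmap() as the demand-zeroed source: carve requests out of anonymous mappings and remember trailing fragments for reuse. All names and sizes are assumptions; the real base.c also handles alignment and locking, which this sketch ignores:

      #include <stddef.h>
      #include <sys/mman.h>

      #define FRAG_SLOTS 16
      #define REGION_SIZE (2u << 20) /* 2 MiB regions; an arbitrary choice */

      static struct { void *addr; size_t size; } frags[FRAG_SLOTS];

      static void *
      base_alloc_sketch(size_t size) {
          /* First try to satisfy the request from a recorded fragment;
           * fragment memory was never handed out, so it is still zeroed. */
          for (int i = 0; i < FRAG_SLOTS; i++) {
              if (frags[i].size >= size) {
                  void *ret = frags[i].addr;
                  frags[i].addr = (char *)ret + size;
                  frags[i].size -= size;
                  return ret;
              }
          }
          /* Otherwise map a fresh demand-zeroed region... */
          size_t mapsize = size > REGION_SIZE ? size : REGION_SIZE;
          void *ret = mmap(NULL, mapsize, PROT_READ | PROT_WRITE,
              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
          if (ret == MAP_FAILED) {
              return NULL;
          }
          /* ...and record the trailing fragment for later requests. */
          for (int i = 0; i < FRAG_SLOTS; i++) {
              if (frags[i].size == 0) {
                  frags[i].addr = (char *)ret + size;
                  frags[i].size = mapsize - size;
                  break;
              }
          }
          return ret;
      }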
* Implement metadata statistics. (Jason Evans, 2015-01-24; 1 file, -0/+1)
  There are three categories of metadata:

  - Base allocations are used for bootstrap-sensitive internal allocator data structures.
  - Arena chunk headers comprise pages which track the states of the non-metadata pages.
  - Internal allocations differ from application-originated allocations in that they are for internal use, and that they are omitted from heap profiles.

  The metadata statistics comprise the metadata categories as follows:

  - stats.metadata: All metadata -- base + arena chunk headers + internal allocations.
  - stats.arenas.<i>.metadata.mapped: Arena chunk headers.
  - stats.arenas.<i>.metadata.allocated: Internal allocations. This is reported separately from the other metadata statistics because it overlaps with the allocated and active statistics, whereas the other metadata statistics do not.

  Base allocations are not reported separately, though their magnitude can be computed by subtracting the arena-specific metadata.

  This resolves #163.
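
  A minimal sketch of reading these statistics via mallctl() and deriving the base component by subtraction, as suggested above. It assumes a jemalloc build with an unprefixed API, a single arena (index 0), and a jemalloc version that still exposes these exact mallctl names:

      #include <jemalloc/jemalloc.h>
      #include <stdint.h>
      #include <stdio.h>

      static size_t
      read_stat(const char *name) {
          size_t val = 0, len = sizeof(val);
          if (mallctl(name, &val, &len, NULL, 0) != 0) {
              return 0; /* name unavailable in this build/version */
          }
          return val;
      }

      int
      main(void) {
          /* Refresh the cached statistics before reading them. */
          uint64_t epoch = 1;
          size_t elen = sizeof(epoch);
          mallctl("epoch", &epoch, &elen, &epoch, elen);

          size_t total   = read_stat("stats.metadata");
          size_t mapped  = read_stat("stats.arenas.0.metadata.mapped");
          size_t alloced = read_stat("stats.arenas.0.metadata.allocated");
          printf("base metadata ~= %zu bytes\n", total - mapped - alloced);
          return 0;
      }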
* Refactor huge allocation to be managed by arenas. (Jason Evans, 2014-05-16; 1 file, -1/+1)
  Refactor huge allocation to be managed by arenas (though the global red-black tree of huge allocations remains for lookup during deallocation). This is the logical conclusion of recent changes that 1) made per arena dss precedence apply to huge allocation, and 2) made it possible to replace the per arena chunk allocation/deallocation functions.

  Remove the top level huge stats, and replace them with per arena huge stats.

  Normalize function names and types to *dalloc* (some were *dealloc*).

  Remove the --enable-mremap option. As jemalloc currently operates, this is a performance regression for some applications, but planned work to logarithmically space huge size classes should provide similar amortized performance. The motivation for this change was that mremap-based huge reallocation forced leaky abstractions that prevented refactoring.
* Port to FreeBSD. (Jason Evans, 2012-02-03; 1 file, -0/+1)
  Use FreeBSD-specific functions (_pthread_mutex_init_calloc_cb(), _malloc_{pre,post}fork()) to avoid bootstrapping issues due to allocation in libc and libthr.

  Add malloc_strtoumax() and use it instead of strtoul(). Disable validation code in malloc_vsnprintf() and malloc_strtoumax() until jemalloc is initialized. This is necessary because locale initialization causes allocation for both vsnprintf() and strtoumax().

  Force the lazy-lock feature on in order to avoid pthread_self(), because it causes allocation.

  Use syscall(SYS_write, ...) rather than write(...), because libthr wraps write() and causes allocation. Without this workaround, it would not be possible to print error messages in malloc_conf_init() without substantially reworking bootstrapping.

  Fix choose_arena_hard() to look at how many threads are assigned to the candidate choice, rather than checking whether the arena is uninitialized. This bug potentially caused more arenas to be initialized than necessary.
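
  A sketch of the syscall(SYS_write, ...) workaround described above: issuing the system call directly bypasses libthr's write() wrapper, so error output during bootstrapping never allocates. The wrapper name is an illustrative assumption:

      #include <sys/syscall.h>
      #include <string.h>
      #include <unistd.h>

      static void
      malloc_write_sketch(const char *s) {
          /* Direct syscall: no libc/libthr interposition, no allocation. */
          (void)syscall(SYS_write, STDERR_FILENO, s, strlen(s));
      }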
* Fix fork-related bugs. (Jason Evans, 2012-03-13; 1 file, -2/+3)
  Acquire/release arena bin locks as part of the prefork/postfork. This bug made deadlock in the child between fork and exec a possibility.

  Split jemalloc_postfork() into jemalloc_postfork_{parent,child}() so that the child can reinitialize mutexes rather than unlocking them. In practice, this bug tended not to cause problems.
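
  A hedged sketch of the prefork/postfork split described above, using a single representative mutex: the parent unlocks after fork(), while the child reinitializes its copy of the lock instead of unlocking it. Handler names mirror the commit; the bodies are simplified assumptions:

      #include <pthread.h>

      static pthread_mutex_t arena_bin_lock = PTHREAD_MUTEX_INITIALIZER;

      static void
      jemalloc_prefork(void) {
          pthread_mutex_lock(&arena_bin_lock); /* quiesce before fork() */
      }

      static void
      jemalloc_postfork_parent(void) {
          pthread_mutex_unlock(&arena_bin_lock); /* parent still owns it */
      }

      static void
      jemalloc_postfork_child(void) {
          /* The child cannot trust inherited lock state; reinitialize. */
          pthread_mutex_init(&arena_bin_lock, NULL);
      }

      /* Registered once during initialization, e.g.:
       * pthread_atfork(jemalloc_prefork, jemalloc_postfork_parent,
       *     jemalloc_postfork_child); */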
* Move repo contents in jemalloc/ to top level. (Jason Evans, 2011-04-01; 1 file, -0/+24)