path: root/src/tsd.c
* Header refactoring: move assert.h out of the catch-all (David Goldblatt, 2017-04-19; 1 file changed, -0/+2)
* Switch to fine-grained reentrancy support. (Qi Wang, 2017-04-15; 1 file changed, -1/+3)
  Previously we had general detection and support of reentrancy, at the cost of extra branches and increment/decrement operations on fast paths. To avoid taxing fast paths, we move the reentrancy operations onto the tsd slow state, and only modify the reentrancy level around external calls (which might trigger reentrancy).
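  A minimal sketch of that pattern (illustrative names, not jemalloc's actual code): the reentrancy level lives in thread-specific data and is touched only around calls that might re-enter the allocator, so the common fast path carries no extra branch.
      typedef struct {
          int reentrancy_level; /* bumped only around external calls */
      } tsd_sketch_t;

      static void
      call_external(tsd_sketch_t *tsd, void (*hook)(void)) {
          tsd->reentrancy_level++; /* enter the "slow" state */
          hook();                  /* may call back into the allocator */
          tsd->reentrancy_level--; /* restore; fast paths never touch this */
      }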
* Bundle 3 branches on fast path into tsd_state. (Qi Wang, 2017-04-14; 1 file changed, -1/+39)
  Added tsd_state_nominal_slow, which on the malloc() fast path incorporates the tcache_enabled check, and on the free() fast path bundles both the malloc_slow and tcache_enabled branches.
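  A rough sketch of the effect (state names from the message; the placeholder allocators are hypothetical): the fast path tests one state word, and anything unusual funnels to the slow path.
      #include <stddef.h>

      void *fastpath_alloc(size_t size); /* placeholder */
      void *slowpath_alloc(size_t size); /* placeholder */

      enum { tsd_state_nominal, tsd_state_nominal_slow };

      static void *
      malloc_sketch(int tsd_state, size_t size) {
          if (tsd_state == tsd_state_nominal) {
              /* One branch stands in for three: nominal state, tcache
               * enabled, and no malloc_slow flag. */
              return fastpath_alloc(size);
          }
          /* tsd_state_nominal_slow (or worse): the slow path re-checks
           * tcache_enabled and malloc_slow individually. */
          return slowpath_alloc(size);
      }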
* Header refactoring: Split up jemalloc_internal.h (David Goldblatt, 2017-04-11; 1 file changed, -1/+2)
  This is a biggy. jemalloc_internal.h has been doing multiple jobs for a while now:
  - The source of system-wide definitions.
  - The catch-all include file.
  - The module header file for jemalloc.c.
  This commit splits up this functionality. The system-wide definitions responsibility has moved to jemalloc_preamble.h. The catch-all include file is now jemalloc_internal_includes.h. The module headers for jemalloc.c are now in jemalloc_internal_[externs|inlines|types].h, just as they are for the other modules.
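  Under the new layout, a module's .c file would start roughly like this (a sketch inferred from the description, not an exact copy of the sources):
      #include "jemalloc/internal/jemalloc_preamble.h"          /* system-wide definitions */
      #include "jemalloc/internal/jemalloc_internal_includes.h" /* catch-all includes */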
* Add hooking functionality (David Goldblatt, 2017-04-07; 1 file changed, -0/+9)
  This allows us to hook chosen functions and do interesting things there (in particular: reentrancy checking).
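  One plausible shape for such a hook point (a hypothetical macro; the commit's actual interface may differ): evaluate an optional callback for its side effects, then the real expression.
      #define HOOKED_CALL(expr, hook) \
          ((void)((hook) != NULL && ((hook)(), 0)), (expr))
  A test can then install hook to observe each call, which is what makes checks like reentrancy detection possible.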
* Integrate auto tcache into TSD. (Qi Wang, 2017-04-07; 1 file changed, -5/+0)
  The embedded tcache is initialized upon tsd initialization. The avail arrays for the tbins will be allocated/deallocated accordingly during init/cleanup. With this change, the pointer to the auto tcache will always be available, as long as we have access to the TSD. tcache_available() (called in tcache_get()) is provided to check if we should use tcache.
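  Schematically, internal call sites would follow this pattern (a sketch using the functions named above; surrounding details elided):
      tsd_t *tsd = tsd_fetch();
      if (tcache_available(tsd)) {
          tcache_t *tcache = tcache_get(tsd); /* embedded in tsd, never NULL here */
          /* ... allocate through the tcache ... */
      } else {
          /* ... bypass the tcache, e.g. during init or cleanup ... */
      }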
* Make the tsd member init functions take a tsd_t * argument. (Qi Wang, 2017-04-04; 1 file changed, -1/+1)
* Do proper cleanup for tsd_state_reincarnated. (Qi Wang, 2017-04-04; 1 file changed, -9/+6)
  Also enable arena_bind under non-nominal state, as the cleanup will be handled correctly now.
* Add init function support to tsd members. (Qi Wang, 2017-04-04; 1 file changed, -1/+18)
  This will facilitate embedding tcache into tsd, which will require proper initialization that cannot be done via the static initializer. Make tsd->rtree_ctx be initialized via rtree_ctx_data_init().
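  A hedged sketch of the mechanism (structure and function names other than rtree_ctx_data_init() are illustrative): members whose initial value cannot be a compile-time constant get an init function that runs when the tsd itself is initialized.
      typedef struct {
          rtree_ctx_t rtree_ctx; /* no usable static initializer */
          /* ... members that do have static initializers ... */
      } tsd_sketch_t;

      static void
      tsd_data_init_sketch(tsd_sketch_t *tsd) {
          /* Run per-member init functions at tsd initialization time. */
          rtree_ctx_data_init(&tsd->rtree_ctx);
      }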
* Do not generate unused tsd_*_[gs]et() functions. (Jason Evans, 2017-02-13; 1 file changed, -1/+1)
  This avoids a gcc diagnostic note:
      note: The ABI for passing parameters with 64-byte alignment has changed in GCC 4.6
  This note relates to the cacheline alignment of rtree_ctx_t, which was introduced by 4a346f55939af4f200121cc4454089592d952f18 (Replace rtree path cache with LRU cache.).
* Replace tabs following #define with spaces. (Jason Evans, 2017-01-21; 1 file changed, -4/+4)
  This resolves #564.
* Remove extraneous parens around return arguments. (Jason Evans, 2017-01-21; 1 file changed, -6/+6)
  This resolves #540.
* Update brace style. (Jason Evans, 2017-01-21; 1 file changed, -25/+17)
  Add braces around single-line blocks, and remove line breaks before function-opening braces. This resolves #537.
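  For illustration (constructed examples, not lines from the diff), the two rules look like this:
      /* Braces added around single-line blocks: */
      if (ret == NULL) {
          return;
      }

      /* No line break before a function's opening brace: */
      void
      tsd_cleanup_example(void *arg) {
          /* ... */
      }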
* Remove leading blank lines from function bodies. (Jason Evans, 2017-01-13; 1 file changed, -7/+0)
  This resolves #535.
* Make tsd cleanup functions optional, remove noop cleanup functions. (Jason Evans, 2016-06-06; 1 file changed, -1/+6)
* Use TSDN_NULL rather than NULL as appropriate. (Jason Evans, 2016-05-13; 1 file changed, -5/+5)
* Fix style nits. (Jason Evans, 2016-04-17; 1 file changed, -1/+1)
* Add witness, a simple online locking validator. (Jason Evans, 2016-04-14; 1 file changed, -9/+11)
  This resolves #358.
* Prevent MSVC from optimizing away tls_callback (resolves #318) (rustyx, 2016-02-20; 1 file changed, -1/+3)
* Refactor arenas_cache tsd. (Jason Evans, 2016-02-20; 1 file changed, -2/+2)
  Refactor arenas_cache tsd into arenas_tdata, which is a structure of type arena_tdata_t.
* Work around an NPTL-specific TSD issue. (Jason Evans, 2015-09-24; 1 file changed, -0/+3)
  Work around a potentially bad thread-specific data initialization interaction with NPTL (glibc's pthreads implementation). This resolves #283.
* Fix a variable declaration typo. (Jason Evans, 2015-07-08; 1 file changed, -1/+1)
* Fix an assignment type warning for tls_callback. (Jason Evans, 2015-07-08; 1 file changed, -2/+2)
* Implement metadata statistics. (Jason Evans, 2015-01-24; 1 file changed, -1/+1)
  There are three categories of metadata:
  - Base allocations are used for bootstrap-sensitive internal allocator data structures.
  - Arena chunk headers comprise pages which track the states of the non-metadata pages.
  - Internal allocations differ from application-originated allocations in that they are for internal use, and that they are omitted from heap profiles.
  The metadata statistics comprise the metadata categories as follows:
  - stats.metadata: All metadata -- base + arena chunk headers + internal allocations.
  - stats.arenas.<i>.metadata.mapped: Arena chunk headers.
  - stats.arenas.<i>.metadata.allocated: Internal allocations. This is reported separately from the other metadata statistics because it overlaps with the allocated and active statistics, whereas the other metadata statistics do not.
  Base allocations are not reported separately, though their magnitude can be computed by subtracting the arena-specific metadata. This resolves #163.
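  Reading these statistics goes through the standard mallctl() interface; for example (a usage sketch with error handling elided):
      #include <stdint.h>
      #include <jemalloc/jemalloc.h>

      static size_t
      read_metadata_bytes(void) {
          uint64_t epoch = 1;
          size_t metadata, sz = sizeof(metadata);

          /* Refresh jemalloc's cached statistics, then read the total. */
          mallctl("epoch", NULL, NULL, &epoch, sizeof(epoch));
          mallctl("stats.metadata", &metadata, &sz, NULL, 0);
          return metadata;
      }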
* Refactor bootstrapping to delay tsd initialization. (Jason Evans, 2015-01-22; 1 file changed, -2/+2)
  Refactor bootstrapping to delay tsd initialization, primarily to support integration with FreeBSD's libc.
  Refactor a0*() for internal-only use, and add the bootstrap_{malloc,calloc,free}() API for use by FreeBSD's libc. This separation limits use of the a0*() functions to metadata allocation, which doesn't require malloc/calloc/free API compatibility. This resolves #170.
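  The bootstrap entry points mirror the standard allocation signatures so that FreeBSD's libc can call them before tsd is usable; their declarations would look roughly like this (a sketch based on the names in the message):
      void *bootstrap_malloc(size_t size);
      void *bootstrap_calloc(size_t num, size_t size);
      void bootstrap_free(void *ptr);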
* Refactor/fix arenas manipulation. (Jason Evans, 2014-10-08; 1 file changed, -6/+13)
  Abstract arenas access to use arena_get() (or a0get() where appropriate) rather than directly reading e.g. arenas[ind]. Prior to the addition of the arenas.extend mallctl, the worst possible outcome of directly accessing arenas was a stale read, but arenas.extend may allocate and assign a new array to arenas.
  Add a tsd-based arenas_cache, which amortizes arenas reads. This introduces some subtle bootstrapping issues, with tsd_boot() now being split into tsd_boot[01]() to support tsd wrapper allocation bootstrapping, as well as an arenas_cache_bypass tsd variable which dynamically terminates allocation of arenas_cache itself.
  Promote a0malloc(), a0calloc(), and a0free() to be generally useful for internal allocation, and use them in several places (more may be appropriate).
  Abstract arena->nthreads management and fix a missing decrement during thread destruction (recent tsd refactoring left arenas_cleanup() unused).
  Change arena_choose() to propagate OOM, and handle OOM in all callers. This is important for providing consistent allocation behavior when the MALLOCX_ARENA() flag is being used. Prior to this fix, an OOM could cause allocation to silently come from a different arena than the one specified.
* Fix tsd cleanup regressions. (Jason Evans, 2014-10-04; 1 file changed, -5/+0)
  Fix tsd cleanup regressions that were introduced in 5460aa6f6676c7f253bfcb75c028dfd38cae8aaf (Convert all tsd variables to reside in a single tsd structure.). These regressions were twofold:
  1) tsd_tryget() should never (and need never) return NULL. Rename it to tsd_fetch() and simplify all callers.
  2) tsd_*_set() must only be called when tsd is in the nominal state, because cleanup happens during the nominal-->purgatory transition, and re-initialization must not happen while in the purgatory state. Add tsd_nominal() and use it as needed.
  Note that tsd_*{p,}_get() can still be used as long as no re-initialization that would require cleanup occurs. This means that e.g. the thread_allocated counter can be updated unconditionally.
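  In caller terms the resulting pattern is roughly (a sketch of the rule stated above; tsd_arena_set() stands for any tsd setter):
      tsd_t *tsd = tsd_fetch();      /* never returns NULL */
      if (tsd_nominal(tsd)) {
          tsd_arena_set(tsd, arena); /* setters only in the nominal state */
      }
      /* Getters stay safe even during cleanup, e.g. for unconditionally
       * updating the thread_allocated counter. */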
* Convert all tsd variables to reside in a single tsd structure. (Jason Evans, 2014-09-23; 1 file changed, -2/+49)
* Implement the *allocx() API. (Jason Evans, 2013-12-13; 1 file changed, -1/+1)
  Implement the *allocx() API, which is a successor to the *allocm() API. The *allocx() functions are slightly simpler to use because they have fewer parameters, they directly return the results of primary interest, and mallocx()/rallocx() avoid the strict aliasing pitfall that allocm()/rallocm() share with posix_memalign(). The following code violates strict aliasing rules:
      foo_t *foo;
      allocm((void **)&foo, NULL, 42, 0);
  whereas the following is safe:
      foo_t *foo;
      void *p;
      allocm(&p, NULL, 42, 0);
      foo = (foo_t *)p;
  mallocx() does not have this problem:
      foo_t *foo = (foo_t *)mallocx(42, 0);
* Fix a potential infinite loop during thread exit. (Jason Evans, 2013-11-20; 1 file changed, -1/+1)
  Fix malloc_tsd_dalloc() to bypass tcache when deallocating, so that there is no danger of causing tcache reincarnation during thread exit. Whether this infinite loop occurs depends on the pthreads TSD implementation; it is known to occur on Solaris. Submitted by Markus Eberspächer.
* Add support for LinuxThreads. (Leonard Crestez, 2013-10-25; 1 file changed, -0/+34)
  When using LinuxThreads, pthread_setspecific triggers recursive allocation on all threads. Work around this by creating a global linked list of in-progress tsd initializations.
  This modifies the _tsd_get_wrapper macro-generated function. When it has to initialize a TSD object, it will push the item onto the linked list first. If this causes a recursive allocation, then the _get_wrapper request is satisfied from the list. When pthread_setspecific returns, the item is removed from the list.
  This effectively adds a very poor substitute for real TLS, used only during pthread_setspecific allocation recursion.
  Signed-off-by: Crestez Dan Leonard <lcrestez@ixiacom.com>
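  The mechanism amounts to something like the following (a simplified sketch; the real code is macro-generated and protects the list with a lock):
      #include <pthread.h>

      typedef struct tsd_init_block_s {
          struct tsd_init_block_s *next;
          pthread_t thread; /* owner of the in-progress initialization */
          void *data;       /* the tsd wrapper being installed */
      } tsd_init_block_t;

      static tsd_init_block_t *tsd_init_head;

      /* If this thread re-enters allocation while its own
       * pthread_setspecific() call is still in flight, satisfy the
       * request from the in-progress list instead of recursing. */
      static void *
      tsd_init_check_recursion(void) {
          tsd_init_block_t *b;
          for (b = tsd_init_head; b != NULL; b = b->next) {
              if (pthread_equal(b->thread, pthread_self())) {
                  return b->data;
              }
          }
          return NULL;
      }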
* Optimize malloc() and free() fast paths. (Jason Evans, 2012-05-02; 1 file changed, -1/+1)
  Embed the bin index for small page runs into the chunk page map, in order to omit [...] in the following dependent load sequence:
      ptr-->mapelm-->[run-->bin-->]bin_info
  Move various non-critical code out of the inlined function chain into helper functions (tcache_event_hard(), arena_dalloc_small(), and locking).
* Add support for MSVC (Mike Hommey, 2012-05-01; 1 file changed, -0/+8)
  Tested with MSVC 8, 32 and 64 bits.
* Replace JEMALLOC_ATTR with various different macros when it makes sense (Mike Hommey, 2012-05-01; 1 file changed, -2/+4)
  These newly added macros will be used to implement the equivalent under MSVC. Also, move the definitions to headers, where they make more sense, and for some, are even more useful there (e.g. malloc).
* Avoid variable length arrays and remove declarations within code (Mike Hommey, 2012-04-29; 1 file changed, -1/+1)
  MSVC doesn't support C99, and building as C++ to be able to use them is dangerous, as C++ and C99 are incompatible. Introduce a VARIABLE_ARRAY macro that either uses VLA when supported, or alloca() otherwise. Note that using alloca() inside loops doesn't quite work like VLAs, so the use of VARIABLE_ARRAY there is discouraged. It might be worth investigating ways to check whether VARIABLE_ARRAY is used in such a context at runtime in debug builds, and bail out if that happens.
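  A plausible definition of such a macro (a sketch assuming a C99 feature test; jemalloc's actual definition may differ):
      #ifdef _WIN32
      #  include <malloc.h> /* alloca() on MSVC/MinGW */
      #else
      #  include <alloca.h>
      #endif

      #if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
      #  define VARIABLE_ARRAY(type, name, count) type name[(count)]
      #else
      #  define VARIABLE_ARRAY(type, name, count) \
             type *name = (type *)alloca(sizeof(type) * (count))
      #endif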
* Add support for Mingw (Mike Hommey, 2012-04-22; 1 file changed, -1/+26)
* Remove extra argument for malloc_tsd_cleanup_register (Mike Hommey, 2012-04-19; 1 file changed, -4/+3)
  Keeping an extra argument that only stores a function pointer for a function we already have is not very useful.
* Make special FreeBSD function overrides visible. (Jason Evans, 2012-04-19; 1 file changed, -0/+1)
  Make special FreeBSD libc/libthr function overrides for _malloc_prefork(), _malloc_postfork(), and _malloc_thread_cleanup() visible.
* Remove arena_malloc_prechosen(). (Jason Evans, 2012-04-06; 1 file changed, -1/+1)
  Remove arena_malloc_prechosen(), now that arena_malloc() can be invoked in a way that is semantically equivalent.
* Implement tsd. (Jason Evans, 2012-03-23; 1 file changed, -0/+72)
  Implement tsd, which is a TLS/TSD abstraction that uses one or both internally. Modify bootstrapping such that no tsd's are utilized until allocation is safe.
  Remove malloc_[v]tprintf(), and use malloc_snprintf() instead.
  Fix %p argument size handling in malloc_vsnprintf().
  Fix a long-standing statistics-related bug in the "thread.arena" mallctl that could cause crashes due to linked list corruption.
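  The abstraction described in the first paragraph boils down to choosing between compiler TLS and pthreads TSD at build time; a minimal sketch (illustrative macros; tsd_t stands for the tsd structure):
      #include <pthread.h>

      #ifdef JEMALLOC_TLS
      /* Fast case: the compiler provides thread-local storage. */
      static __thread tsd_t tsd_tls;
      #  define TSD_GET() (&tsd_tls)
      #else
      /* Portable fallback: pthreads thread-specific data. */
      static pthread_key_t tsd_key;
      #  define TSD_GET() ((tsd_t *)pthread_getspecific(tsd_key))
      #endif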