path: root/src/mutex.c
Commit message | Author | Age | Files | Lines
* Power: disable the CPU_SPINWAIT macro. (David Goldblatt, 2017-10-05, 1 file, +2/-1)

  Quoting from https://github.com/jemalloc/jemalloc/issues/761:

      [...] reading the Power ISA documentation [1], the assembly in [the
      CPU_SPINWAIT macro] isn't correct anyway (as @marxin points out): the
      setting of the program-priority register is "sticky", and we never undo
      the lowering. We could do something similar, but given that we don't
      have testing here in the first place, I'm inclined to simply not try.
      I'll put something up reverting the problematic commit tomorrow.

  [1] Book II, chapter 3 of the 2.07B or 3.0B ISA documents.
* Refactor/fix background_thread/percpu_arena bootstrapping. (Jason Evans, 2017-06-01, 1 file, +1/-11)

  Refactor bootstrapping such that dlsym() is called during the bootstrapping
  phase that can tolerate reentrant allocation.
* Use real pthread_create for creating background threads. (Qi Wang, 2017-05-31, 1 file, +1/-1)
* Add profiling for the background thread mutex. (Qi Wang, 2017-05-23, 1 file, +2/-0)
* Implementing opt.background_thread. (Qi Wang, 2017-05-23, 1 file, +1/-18)

  Added opt.background_thread to enable background threads, which currently
  handle purging. When enabled, decay ticks will not trigger purging (which
  is left to the background threads). We limit the max number of threads to
  the number of CPUs. When percpu arena is enabled, set CPU affinity for the
  background threads as well.

  The sleep interval of background threads is dynamic and determined by
  computing the number of pages to purge in the future (based on backlog).
* Allow mutexes to take a lock ordering enum at construction. (David Goldblatt, 2017-05-19, 1 file, +25/-3)

  This lets us specify whether and how mutexes of the same rank are allowed
  to be acquired. Currently, we only allow two policies (only a single mutex
  at a given rank at a time, and mutexes acquired in ascending order), but we
  can plausibly allow more (e.g. "release uncontended mutexes before
  blocking").
* Implement malloc_mutex_trylock() w/ proper stats update. (Qi Wang, 2017-04-24, 1 file, +2/-2)
* Header refactoring: move assert.h out of the catch-all. (David Goldblatt, 2017-04-19, 1 file, +1/-0)
* Header refactoring: move malloc_io.h out of the catchall. (David Goldblatt, 2017-04-19, 1 file, +2/-0)
* Header refactoring: Split up jemalloc_internal.h. (David Goldblatt, 2017-04-11, 1 file, +2/-1)

  This is a biggy. jemalloc_internal.h has been doing multiple jobs for a
  while now:
  - The source of system-wide definitions.
  - The catch-all include file.
  - The module header file for jemalloc.c

  This commit splits up this functionality. The system-wide definitions
  responsibility has moved to jemalloc_preamble.h. The catch-all include file
  is now jemalloc_internal_includes.h. The module headers for jemalloc.c are
  now in jemalloc_internal_[externs|inlines|types].h, just as they are for
  the other modules.
* Make the mutex n_waiting_thds field a C11-style atomic. (David Goldblatt, 2017-04-05, 1 file, +4/-3)
* Switch to nstime_t for the time related fields in mutex profiling. (Qi Wang, 2017-03-23, 1 file, +14/-12)
* Added custom mutex spin. (Qi Wang, 2017-03-23, 1 file, +14/-2)

  A fixed max spin count is used; benchmark results show it solves almost all
  problems. As the benchmark used was rather intense, the upper bound could
  be a little high. However, it should offer a good tradeoff between spinning
  and blocking.
* Added "stats.mutexes.reset" mallctl to reset all mutex stats. (Qi Wang, 2017-03-23, 1 file, +10/-4)

  Also switched from the term "lock" to "mutex".
* Add arena lock stats output. (Qi Wang, 2017-03-23, 1 file, +26/-20)
* First stage of mutex profiling. (Qi Wang, 2017-03-23, 1 file, +43/-0)

  Switched to trylock and updated counters based on state.
* Replace tabs following #define with spaces. (Jason Evans, 2017-01-21, 1 file, +2/-2)

  This resolves #564.
* Remove extraneous parens around return arguments. (Jason Evans, 2017-01-21, 1 file, +8/-8)

  This resolves #540.
* Update brace style. (Jason Evans, 2017-01-21, 1 file, +20/-20)

  Add braces around single-line blocks, and remove line breaks before
  function-opening braces. This resolves #537.
* Remove leading blank lines from function bodies. (Jason Evans, 2017-01-13, 1 file, +0/-6)

  This resolves #535.
* Add os_unfair_lock support. (Jason Evans, 2016-11-03, 1 file, +2/-0)

  OS X 10.12 deprecated OSSpinLock; os_unfair_lock is the recommended
  replacement.
* Add rtree element witnesses. (Jason Evans, 2016-06-03, 1 file, +1/-1)
* Resolve bootstrapping issues when embedded in FreeBSD libc. (Jason Evans, 2016-05-11, 1 file, +6/-6)

  b2c0d6322d2307458ae2b28545f8a5c9903d7ef5 (Add witness, a simple online
  locking validator.) caused a broad propagation of tsd throughout the
  internal API, but tsd_fetch() was designed to fail prior to tsd
  bootstrapping. Fix this by splitting tsd_t into non-nullable tsd_t and
  nullable tsdn_t, and modifying all internal APIs that do not critically
  rely on tsd to take nullable pointers.

  Furthermore, add the tsd_booted_get() function so that tsdn_fetch() can
  probe whether tsd bootstrapping is complete and return NULL if not. All
  dangerous conversions of nullable pointers are tsdn_tsd() calls that
  assert-fail on invalid conversion.
* Add witness, a simple online locking validator. (Jason Evans, 2016-04-14, 1 file, +12/-9)

  This resolves #358.
* Optimizations for Windows. (Matthijs, 2015-06-25, 1 file, +4/-0)

  - Set opt_lg_chunk based on run-time OS setting
  - Verify LG_PAGE is compatible with run-time OS setting
  - When targeting Windows Vista or newer, use SRWLOCK instead of
    CRITICAL_SECTION
  - When targeting Windows Vista or newer, statically initialize init_lock
* Refactor base_alloc() to guarantee demand-zeroed memory. (Jason Evans, 2015-02-05, 1 file, +3/-3)

  Refactor base_alloc() to guarantee that allocations are carved from
  demand-zeroed virtual memory. This supports sparse data structures such as
  multi-page radix tree nodes.

  Enhance base_alloc() to keep track of fragments which were too small to
  support previous allocation requests, and try to consume them during
  subsequent requests. This becomes important when request sizes commonly
  approach or exceed the chunk size (as could radix tree node allocations).
* Normalize #define whitespace. (Jason Evans, 2013-12-09, 1 file, +1/-1)

  Consistently use a tab rather than a space following #define.
* mark _pthread_mutex_init_calloc_cb as public explicitly. (Jan Beich, 2012-10-10, 1 file, +1/-1)

  The Mozilla build hides everything by default using a visibility pragma
  and unhides only explicitly listed headers. But this doesn't work on
  FreeBSD because _pthread_mutex_init_calloc_cb is neither documented nor
  exposed via any header.
* Replace JEMALLOC_ATTR with various different macros when it makes sense. (Mike Hommey, 2012-05-01, 1 file, +2/-3)

  These newly added macros will be used to implement the equivalent under
  MSVC. Also, move the definitions to headers, where they make more sense,
  and where some are even more useful (e.g. malloc).
* Add support for Mingw. (Mike Hommey, 2012-04-22, 1 file, +12/-4)
* Postpone mutex initialization on FreeBSD. (Jason Evans, 2012-04-04, 1 file, +30/-4)

  Postpone mutex initialization on FreeBSD until after base allocation is
  safe.
* Port to FreeBSD. (Jason Evans, 2012-02-03, 1 file, +13/-5)

  Use FreeBSD-specific functions (_pthread_mutex_init_calloc_cb(),
  _malloc_{pre,post}fork()) to avoid bootstrapping issues due to allocation
  in libc and libthr.

  Add malloc_strtoumax() and use it instead of strtoul(). Disable validation
  code in malloc_vsnprintf() and malloc_strtoumax() until jemalloc is
  initialized. This is necessary because locale initialization causes
  allocation for both vsnprintf() and strtoumax().

  Force the lazy-lock feature on in order to avoid pthread_self(), because
  it causes allocation.

  Use syscall(SYS_write, ...) rather than write(...), because libthr wraps
  write() and causes allocation. Without this workaround, it would not be
  possible to print error messages in malloc_conf_init() without
  substantially reworking bootstrapping.

  Fix choose_arena_hard() to look at how many threads are assigned to the
  candidate choice, rather than checking whether the arena is uninitialized.
  This bug potentially caused more arenas to be initialized than necessary.
* Remove ephemeral mutexes. (Jason Evans, 2012-03-24, 1 file, +0/-12)

  Remove ephemeral mutexes from the prof machinery, and remove
  malloc_mutex_destroy(). This simplifies mutex management on systems that
  call malloc()/free() inside pthread_mutex_{create,destroy}().

  Add atomic_*_u() for operation on unsigned values.

  Fix prof_printf() to call malloc_vsnprintf() rather than malloc_snprintf().
* Fix fork-related bugs. (Jason Evans, 2012-03-13, 1 file, +26/-0)

  Acquire/release arena bin locks as part of the prefork/postfork. This bug
  made deadlock in the child between fork and exec a possibility.

  Split jemalloc_postfork() into jemalloc_postfork_{parent,child}() so that
  the child can reinitialize mutexes rather than unlocking them. In
  practice, this bug tended not to cause problems.
* Use glibc allocator hooks. (Jason Evans, 2012-02-29, 1 file, +4/-0)

  When jemalloc is used as a libc malloc replacement (i.e. not prefixed),
  some particular setups may end up inconsistently calling malloc from libc
  and free from jemalloc, or the other way around. glibc provides hooks to
  make its functions use alternative implementations. Use them.

  Submitted by Karl Tomlinson and Mike Hommey.
* Move repo contents in jemalloc/ to top level. (Jason Evans, 2011-04-01, 1 file, +90/-0)