* MSVC compatibility changes (Matthijs, 2015-08-04; 4 files, +45/-16)

  - Decorate public functions with __declspec(allocator) and __declspec(restrict), just like MSVC 1900.
  - Support JEMALLOC_HAS_RESTRICT by defining the restrict keyword.
  - Move __declspec(nothrow) between 'void' and '*' so it compiles once more.
* Generalize chunk management hooks. (Jason Evans, 2015-08-04; 20 files, +1022/-553)

  Add the "arena.<i>.chunk_hooks" mallctl, which replaces and expands on the "arena.<i>.chunk.{alloc,dalloc,purge}" mallctls. The chunk hooks allow control over chunk allocation/deallocation, decommit/commit, purging, and splitting/merging, such that the application can rely on jemalloc's internal chunk caching and retaining functionality, yet implement a variety of chunk management mechanisms and policies.

  Merge the chunks_[sz]ad_{mmap,dss} red-black trees into chunks_[sz]ad_retained. This slightly reduces how hard jemalloc tries to honor the dss precedence setting; prior to this change the precedence setting was also consulted when recycling chunks.

  Fix chunk purging. Don't purge chunks in arena_purge_stashed(); instead deallocate them in arena_unstash_purged(), so that the dirty memory linkage remains valid until after the last time it is used.

  This resolves #176 and #201.
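  As a rough illustration of the hook mechanism this commit introduces, the sketch below declares a minimal hook table with an mmap-backed alloc hook. The typedefs mirror the shape described in the commit but are declared locally so the sketch compiles standalone; a real program would include <jemalloc/jemalloc.h> and install the full chunk_hooks_t via mallctl("arena.<i>.chunk_hooks", ...). Note that plain mmap does not honor the alignment request, which a real hook must.

  ```c
  #include <stdbool.h>
  #include <stddef.h>
  #include <stdio.h>
  #include <sys/mman.h>

  /* Local stand-ins for the jemalloc chunk hook function types. */
  typedef void *(chunk_alloc_t)(void *new_addr, size_t size, size_t alignment,
      bool *zero, bool *commit, unsigned arena_ind);
  typedef bool (chunk_dalloc_t)(void *chunk, size_t size, bool committed,
      unsigned arena_ind);

  typedef struct {
  	chunk_alloc_t	*alloc;
  	chunk_dalloc_t	*dalloc;
  	/* ...commit, decommit, purge, split, and merge hooks omitted... */
  } my_chunk_hooks_t;

  /* Custom alloc hook: back chunks with anonymous mmap.  (Ignores the
   * alignment request for brevity; a real hook must honor it.) */
  static void *
  my_chunk_alloc(void *new_addr, size_t size, size_t alignment, bool *zero,
      bool *commit, unsigned arena_ind)
  {
  	void *ret;

  	(void)alignment; (void)arena_ind;
  	ret = mmap(new_addr, size, PROT_READ|PROT_WRITE,
  	    MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
  	if (ret == MAP_FAILED)
  		return NULL;
  	*zero = true;	/* fresh anonymous mappings are zeroed */
  	*commit = true;
  	return ret;
  }

  /* Custom dalloc hook; returning false indicates success. */
  static bool
  my_chunk_dalloc(void *chunk, size_t size, bool committed, unsigned arena_ind)
  {
  	(void)committed; (void)arena_ind;
  	return (munmap(chunk, size) != 0);
  }

  int
  main(void)
  {
  	my_chunk_hooks_t hooks = { my_chunk_alloc, my_chunk_dalloc };
  	bool zero = false, commit = false;
  	void *chunk = hooks.alloc(NULL, 1 << 21, 1 << 21, &zero, &commit, 0);

  	printf("%s\n", (chunk != NULL && zero && commit) ? "ok" : "fail");
  	if (chunk != NULL)
  		hooks.dalloc(chunk, 1 << 21, commit, 0);
  	return 0;
  }
  ```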
* Implement support for non-coalescing maps on MinGW. (Jason Evans, 2015-07-25; 7 files, +44/-4)

  - Do not reallocate huge objects in place if the number of backing chunks would change.
  - Do not cache multi-chunk mappings.

  This resolves #213.
* Fix huge_ralloc_no_move() to succeed more often. (Jason Evans, 2015-07-25; 2 files, +4/-3)

  Fix huge_ralloc_no_move() to succeed if an allocation request results in the same usable size as the existing allocation, even if the request size is smaller than the usable size. This bug did not cause correctness issues, but it could cause unnecessary moves during reallocation.
* Fix huge_palloc() to handle size rather than usize input. (Jason Evans, 2015-07-24; 2 files, +13/-7)

  huge_ralloc() passes a size that may not be precisely a size class, so make huge_palloc() handle the more general case of a size input rather than usize. This regression appears to have been introduced by the addition of in-place huge reallocation; as such it was never incorporated into a release.
* Fix sa2u() regression. (Jason Evans, 2015-07-24; 1 file, +1/-1)

  Take large_pad into account when determining whether an aligned allocation can be satisfied by a large size class. This regression was introduced by 8a03cf039cd06f9fa6972711195055d865673966 (Implement cache index randomization for large allocations.).
* Change arena_palloc_large() parameter from size to usize. (Jason Evans, 2015-07-24; 1 file, +12/-12)

  This change merely documents that arena_palloc_large() always receives usize as its argument.
* Leave PRI* macros defined after using them to define FMT*. (Jason Evans, 2015-07-23; 1 file, +0/-11)

  Macro expansion happens too late for the #undef directives to work as a mechanism for preventing accidental direct use of the PRI* macros.
* Force lazy_lock on MinGW. (Jason Evans, 2015-07-23; 1 file, +1/-0)

  This resolves #83.
* Fix MinGW-related portability issues. (Jason Evans, 2015-07-23; 18 files, +224/-494)

  Create and use FMT* macros that are equivalent to the PRI* macros that inttypes.h defines. This allows uniform use of the Unix-specific format specifiers, e.g. "%zu", as well as avoiding Windows-specific definitions of e.g. PRIu64.

  Add ffs()/ffsl() support for compiling with gcc.

  Extract compatibility definitions of ENOENT, EINVAL, EAGAIN, EPERM, ENOMEM, and ERANGE into include/msvc_compat/windows_extra.h and use the file for tests as well as for core jemalloc code.
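  A simplified sketch of the FMT* idea: build format-specifier macros on top of the PRI* macros from <inttypes.h>, so the same format strings work regardless of how the platform spells its conversion specifiers. (The macro names follow the commit message; jemalloc's real definitions live in its internal headers.)

  ```c
  #include <inttypes.h>
  #include <stdio.h>

  /* FMT* wraps the platform's PRI* specifier in a complete "%..." string,
   * so call sites can concatenate it directly into a format literal. */
  #define FMTu64 "%" PRIu64
  #define FMTx64 "%" PRIx64

  int
  main(void)
  {
  	uint64_t v = 255;

  	/* Expands to the correct specifier on every platform. */
  	printf(FMTu64 " " FMTx64 "\n", v, v);	/* prints "255 ff" */
  	return 0;
  }
  ```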
* Fix a compilation error. (Jason Evans, 2015-07-22; 1 file, +10/-8)

  This regression was introduced by 1b0e4abbfdbcc1c1a71d1f617adb19951109bfce (Port mq_get() to MinGW.).
* Add JEMALLOC_FORMAT_PRINTF(). (Jason Evans, 2015-07-22; 8 files, +54/-20)

  Replace JEMALLOC_ATTR(format(printf, ...)) with JEMALLOC_FORMAT_PRINTF(), so that configuration feature tests can omit the attribute if it would cause extraneous compilation warnings.
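  The pattern behind such a macro can be sketched as follows: the attribute is supplied only when the compiler supports it, so a configure-time test can disable it without touching every declaration. The macro name matches the commit; the configure plumbing is elided and the fallback condition here is just a compiler check.

  ```c
  #include <stdarg.h>
  #include <stdio.h>

  /* Expand to the format attribute only where it is known to work. */
  #ifdef __GNUC__
  #define JEMALLOC_FORMAT_PRINTF(s, i) __attribute__((format(printf, s, i)))
  #else
  #define JEMALLOC_FORMAT_PRINTF(s, i)	/* omitted when unsupported */
  #endif

  /* The compiler now type-checks callers' format strings like printf's. */
  static void log_printf(const char *fmt, ...) JEMALLOC_FORMAT_PRINTF(1, 2);

  static void
  log_printf(const char *fmt, ...)
  {
  	va_list ap;

  	va_start(ap, fmt);
  	vprintf(fmt, ap);
  	va_end(ap);
  }

  int
  main(void)
  {
  	log_printf("%d ok\n", 42);	/* prints "42 ok" */
  	return 0;
  }
  ```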
* Port mq_get() to MinGW. (Jason Evans, 2015-07-21; 3 files, +39/-13)
* Move JEMALLOC_NOTHROW just after return type. (Jason Evans, 2015-07-21; 3 files, +69/-74)

  Only use __declspec(nothrow) in C++ mode.

  This resolves #244.
* Remove JEMALLOC_ALLOC_SIZE annotations on functions not returning pointers. (Mike Hommey, 2015-07-21; 2 files, +4/-4)

  As per the gcc documentation: "The alloc_size attribute is used to tell the compiler that the function return value points to memory (...)"

  This resolves #245.
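  The rule can be illustrated with a small sketch: alloc_size describes the buffer behind the *returned pointer*, so it is only meaningful on pointer-returning functions. The wrapper names below are hypothetical (jemalloc's real macro is JEMALLOC_ALLOC_SIZE).

  ```c
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  #ifdef __GNUC__
  #define ALLOC_SIZE(i) __attribute__((alloc_size(i)))
  #else
  #define ALLOC_SIZE(i)
  #endif

  /* Correct use: the return value points to `size` bytes, which lets the
   * compiler's object-size machinery check subsequent accesses. */
  static void *my_alloc(size_t size) ALLOC_SIZE(1);

  static void *
  my_alloc(size_t size)
  {
  	return malloc(size);
  }

  /* A sized deallocation function returns no pointer, so alloc_size does
   * not apply here; this is the kind of annotation the commit removed. */
  static void
  my_sized_free(void *ptr, size_t size)
  {
  	(void)size;
  	free(ptr);
  }

  int
  main(void)
  {
  	char *p = my_alloc(16);

  	memcpy(p, "ok", 3);	/* within the 16 bytes the attribute declares */
  	puts(p);
  	my_sized_free(p, 16);
  	return 0;
  }
  ```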
* Fix more MinGW build warnings. (Jason Evans, 2015-07-18; 4 files, +46/-43)
* Add the config.cache_oblivious mallctl. (Jason Evans, 2015-07-17; 4 files, +16/-1)
* Remove extraneous ';' on closing 'extern "C"'. (Dave Rigby, 2015-07-16; 1 file, +1/-1)

  Fixes a warning with newer GCCs:

    include/jemalloc/jemalloc.h:229:2: warning: extra ';' [-Wpedantic]
     };
     ^
* Change default chunk size from 256 KiB to 2 MiB. (Jason Evans, 2015-07-16; 2 files, +2/-2)

  This change improves interaction with transparent huge pages, e.g. reduced page faults (at least in the absence of unused dirty page purging).
* Revert to first-best-fit run/chunk allocation. (Jason Evans, 2015-07-16; 3 files, +27/-78)

  This effectively reverts 97c04a93838c4001688fe31bf018972b4696efe2 (Use first-fit rather than first-best-fit run/chunk allocation.). In some pathological cases, first-fit search dominates allocation time, and it also tends not to converge as readily on a steady state of memory layout, since precise allocation order has a bigger effect than for first-best-fit.
* Add timer support for Windows. (Jason Evans, 2015-07-13; 2 files, +24/-10)

* Fix alloc_size configure test. (Jason Evans, 2015-07-10; 1 file, +2/-3)

* Add configure test for alloc_size attribute. (Jason Evans, 2015-07-10; 3 files, +21/-2)
* Avoid function prototype incompatibilities. (Jason Evans, 2015-07-10; 7 files, +100/-49)

  Add various function attributes to the exported functions to give the compiler more information to work with during optimization, and also specify throw() when compiling with C++ on Linux, in order to adequately match what __THROW does in glibc.

  This resolves #237.
* Fix an integer overflow bug in {size2index,s2u}_compute(). (Jason Evans, 2015-07-10; 3 files, +96/-2)

  This {bug,regression} was introduced by 155bfa7da18cab0d21d87aa2dce4554166836f5d (Normalize size classes.).

  This resolves #241.
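  An illustrative sketch (not jemalloc's actual code) of the class of bug fixed here: rounding a request size up to a size-class boundary can wrap around zero when the size is near SIZE_MAX, so the computation must be checked before rounding.

  ```c
  #include <stdint.h>
  #include <stdio.h>

  /* Round `size` up to a multiple of power-of-two `align`, returning 0
   * on overflow so the caller can fail the allocation. */
  static size_t
  round_up_checked(size_t size, size_t align)
  {
  	if (size > SIZE_MAX - (align - 1))
  		return 0;	/* size + align - 1 would wrap */
  	return (size + align - 1) & ~(align - 1);
  }

  int
  main(void)
  {
  	printf("%zu\n", round_up_checked(100, 64));		/* 128 */
  	printf("%zu\n", round_up_checked(SIZE_MAX - 1, 4096));	/* 0 */
  	return 0;
  }
  ```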
* Fix indentation. (Jason Evans, 2015-07-09; 1 file, +1/-1)

* Add a missing ChangeLog entry. (Jason Evans, 2015-07-09; 1 file, +3/-0)

* Fix a variable declaration typo. (Jason Evans, 2015-07-08; 1 file, +1/-1)

* Use jemalloc_ffs() rather than ffs(). (Jason Evans, 2015-07-08; 1 file, +12/-4)

* Fix MinGW build warnings. (Jason Evans, 2015-07-08; 4 files, +82/-57)

  Conditionally define ENOENT, EINVAL, etc. (was unconditional).

  Add/use PRIzu, PRIzd, and PRIzx for use in malloc_printf() calls. gcc issued (harmless) warnings since e.g. "%zu" should be "%Iu" on Windows, and the alternative to this workaround would have been to disable the function attributes which cause gcc to look for type mismatches in formatted printing function calls.

* Fix an assignment type warning for tls_callback. (Jason Evans, 2015-07-08; 1 file, +2/-2)
* Fix ChangeLog typos. (charsyam, 2015-07-07; 1 file, +1/-1)

* Minor ChangeLog edit. (Jason Evans, 2015-07-07; 1 file, +2/-3)

* Move a variable declaration closer to its use. (Jason Evans, 2015-07-07; 1 file, +2/-1)
* Optimizations for Windows (Matthijs, 2015-06-25; 4 files, +36/-2)

  - Set opt_lg_chunk based on run-time OS setting.
  - Verify LG_PAGE is compatible with run-time OS setting.
  - When targeting Windows Vista or newer, use SRWLOCK instead of CRITICAL_SECTION.
  - When targeting Windows Vista or newer, statically initialize init_lock.
* Fix size class overflow handling when profiling is enabled. (Jason Evans, 2015-06-24; 9 files, +86/-18)

  Fix size class overflow handling for malloc(), posix_memalign(), memalign(), calloc(), and realloc() when profiling is enabled.

  Remove an assertion that erroneously caused arena_sdalloc() to fail when profiling was enabled.

  This resolves #232.
* Convert arena_maybe_purge() recursion to iteration. (Jason Evans, 2015-06-23; 2 files, +27/-10)

  This resolves #235.

* Add alignment assertions to public aligned allocation functions. (Jason Evans, 2015-06-23; 1 file, +33/-28)
* Fix two valgrind integration regressions. (Jason Evans, 2015-06-22; 2 files, +9/-3)

  The regressions were never merged into the master branch.
* Update a comment. (Jason Evans, 2015-06-15; 1 file, +2/-1)

* Clarify relationship between stats.resident and stats.mapped. (Jason Evans, 2015-05-30; 3 files, +15/-7)
* Bypass tcache when draining quarantined allocations. (Jason Evans, 2015-05-30; 1 file, +3/-3)

  This avoids the potential surprise of deallocating an object with one tcache specified, and having the object cached in a different tcache once it drains from the quarantine.
* Fix type errors in C11 versions of atomic_*() functions. (Chi-hung Hsieh, 2015-05-28; 1 file, +8/-8)
* Impose a minimum tcache count for small size classes. (Jason Evans, 2015-05-20; 2 files, +10/-1)

  Now that small allocation runs have fewer regions due to run metadata residing in chunk headers, an explicit minimum tcache count is needed to make sure that the tcache adequately amortizes synchronization overhead.
* Fix arena_dalloc() performance regression. (Jason Evans, 2015-05-20; 1 file, +2/-1)

  Take into account large_pad when computing whether to pass the deallocation request to tcache_dalloc_large(), so that the largest cacheable size makes it back to the tcache.

  This regression was introduced by 8a03cf039cd06f9fa6972711195055d865673966 (Implement cache index randomization for large allocations.).
* Fix performance regression in arena_palloc(). (Jason Evans, 2015-05-20; 1 file, +13/-2)

  Pass large allocation requests to arena_malloc() when possible. This regression was introduced by 155bfa7da18cab0d21d87aa2dce4554166836f5d (Normalize size classes.).

* Fix nhbins calculation. (Jason Evans, 2015-05-20; 1 file, +1/-1)

  This regression was introduced by 155bfa7da18cab0d21d87aa2dce4554166836f5d (Normalize size classes.).
* Avoid atomic operations for dependent rtree reads. (Jason Evans, 2015-05-16; 5 files, +43/-26)

* Fix type punning in calls to atomic operation functions. (Jason Evans, 2015-05-08; 2 files, +15/-8)
* Implement cache index randomization for large allocations. (Jason Evans, 2015-05-06; 10 files, +279/-73)

  Extract szad size quantization into {extent,run}_quantize(), and quantize szad run sizes to the union of valid small region run sizes and large run sizes. Refactor iteration in arena_run_first_fit() to use run_quantize{,_first,_next}(), and add support for padded large runs.

  For large allocations that have no specified alignment constraints, compute a pseudo-random offset from the beginning of the first backing page that is a multiple of the cache line size. Under typical configurations with 4-KiB pages and 64-byte cache lines this results in a uniform distribution among 64 page boundary offsets.

  Add the --disable-cache-oblivious option, primarily intended for performance testing.

  This resolves #13.
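  The offset computation described above can be sketched as follows: with 4-KiB pages and 64-byte cache lines there are 4096/64 = 64 cache-line-aligned offsets within a page, and a pseudo-random one is chosen per large allocation. The LCG below is a placeholder, not jemalloc's internal PRNG.

  ```c
  #include <stdio.h>

  #define PAGE		4096
  #define CACHELINE	64

  static unsigned long prng_state = 42;

  /* Simple 64-bit LCG stand-in for jemalloc's internal PRNG. */
  static unsigned long
  prng_next(void)
  {
  	prng_state = prng_state * 6364136223846793005UL
  	    + 1442695040888963407UL;
  	return prng_state >> 33;
  }

  int
  main(void)
  {
  	int i;

  	for (i = 0; i < 4; i++) {
  		/* Pick one of the 64 cache-line boundaries in the page. */
  		size_t offset = (prng_next() % (PAGE / CACHELINE)) * CACHELINE;

  		/* Every offset is cache-line aligned and within one page. */
  		printf("offset=%zu ok=%d\n", offset,
  		    offset % CACHELINE == 0 && offset < PAGE);
  	}
  	return 0;
  }
  ```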