path: root/src
Commit log, most recent first. Each entry shows the commit message, author, date, and files/lines changed.
* Fix ixallocx_prof_sample() argument order reversal. (Jason Evans, 2015-09-15; 1 file, -1/+1)
  Fix ixallocx_prof() to pass usize_max and zero to ixallocx_prof_sample() in the correct order.

* s/max_usize/usize_max/g (Jason Evans, 2015-09-15; 1 file, -6/+6)

* s/oldptr/old_ptr/g (Jason Evans, 2015-09-15; 1 file, -15/+15)

* Make one call to prof_active_get_unlocked() per allocation event. (Jason Evans, 2015-09-15; 1 file, -10/+19)
  Make one call to prof_active_get_unlocked() per allocation event, and use the result throughout the relevant functions that handle an allocation event. Also add a missing check in prof_realloc(). These fixes protect allocation events against concurrent prof_active changes.

* Fix irealloc_prof() to prof_alloc_rollback() on OOM. (Jason Evans, 2015-09-15; 1 file, -1/+3)

* Optimize irallocx_prof() to optimistically update the sampler state. (Jason Evans, 2015-09-15; 1 file, -3/+3)
* Fix ixallocx_prof() size+extra overflow. (Jason Evans, 2015-09-15; 1 file, -0/+3)
  Fix ixallocx_prof() to clamp the extra parameter if size+extra would overflow HUGE_MAXCLASS.
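The clamping idea can be sketched as follows. This is an illustrative helper, not jemalloc's actual code: the HUGE_MAXCLASS value and the function name are stand-ins, and the sketch assumes size has already been validated to be at most HUGE_MAXCLASS.

```c
#include <stddef.h>

/* Illustrative stand-in for jemalloc's HUGE_MAXCLASS. */
#define HUGE_MAXCLASS ((size_t)1 << 30)

/*
 * Clamp extra so that size + extra cannot exceed HUGE_MAXCLASS.
 * Comparing against HUGE_MAXCLASS - size avoids computing size + extra,
 * which could itself wrap around in size_t arithmetic.
 * Assumes size <= HUGE_MAXCLASS.
 */
static size_t
clamp_extra(size_t size, size_t extra)
{
	if (extra > HUGE_MAXCLASS - size)
		extra = HUGE_MAXCLASS - size;
	return (extra);
}
```

The key design point is that the bound is checked by subtraction on the known-safe side rather than by adding size and extra first.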
* Rename arena_maxclass to large_maxclass. (Jason Evans, 2015-09-12; 2 files, -13/+13)
  arena_maxclass is no longer an appropriate name, because arenas also manage huge allocations.

* Fix xallocx() bugs. (Jason Evans, 2015-09-12; 2 files, -171/+140)
  Fix xallocx() bugs related to the 'extra' parameter when specified as non-zero.

* Fix "prof.reset" mallctl-related corruption. (Jason Evans, 2015-09-10; 1 file, -3/+11)
  Fix heap profiling to distinguish among otherwise identical sample sites with interposed resets (triggered via the "prof.reset" mallctl). This bug could cause data structure corruption that would most likely result in a segfault.
* Reduce variables' scope. (Dmitry-Me, 2015-09-04; 1 file, -9/+10)
* Force initialization of the init_lock in malloc_init_hard on Windows XP. (Mike Hommey, 2015-09-04; 1 file, -1/+15)
  This resolves #269.

* Optimize arena_prof_tctx_set(). (Jason Evans, 2015-09-02; 1 file, -1/+1)
  Optimize arena_prof_tctx_set() to avoid reading run metadata when deciding whether it's actually necessary to write.

* Don't purge junk filled chunks when shrinking huge allocations. (Mike Hommey, 2015-08-28; 1 file, -6/+8)
  When junk filling is enabled, shrinking an allocation fills the bytes that were previously allocated but now aren't. Purging the chunk before doing that is just a waste of time. This resolves #260.

* Fix chunk purge hook calls for in-place huge shrinking reallocation. (Mike Hommey, 2015-08-28; 1 file, -2/+2)
  Fix chunk purge hook calls for in-place huge shrinking reallocation to specify the old chunk size rather than the new chunk size. This bug caused no correctness issues for the default chunk purge function, but was visible to custom functions set via the "arena.<i>.chunk_hooks" mallctl. This resolves #264.

* Fix arenas_cache_cleanup() and arena_get_hard(). (Jason Evans, 2015-08-28; 1 file, -6/+5)
  Fix arenas_cache_cleanup() and arena_get_hard() to handle allocation/deallocation within the application's thread-specific data cleanup functions even after arenas_cache is torn down. This is a more general fix that complements 45e9f66c280e1ba8bebf7bed387a43bc9e45536d (Fix arenas_cache_cleanup().).

* Fix arenas_cache_cleanup(). (Christopher Ferris, 2015-08-21; 1 file, -1/+5)
  Fix arenas_cache_cleanup() to handle allocation/deallocation within the application's thread-specific data cleanup functions even after arenas_cache is torn down.

* Rename index_t to szind_t to avoid an existing type on Solaris. (Jason Evans, 2015-08-19; 2 files, -27/+27)
  This resolves #256.
* Don't bitshift by negative amounts. (Jason Evans, 2015-08-19; 1 file, -4/+3)
  Don't bitshift by negative amounts when encoding/decoding run sizes in chunk header maps. This affected systems with page sizes greater than 8 KiB. Reported by Ingvar Hagelund <ingvar@redpill-linpro.com>.
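A minimal sketch of the hazard: in C, shifting by a negative amount is undefined behavior, so a shift count computed from constants like the page size must be branched on rather than applied directly. The helper below is illustrative only and is not jemalloc's actual map-bits code.

```c
#include <stddef.h>

/*
 * When a shift count is derived from configuration constants (e.g.
 * involving LG_PAGE), it can come out negative on large-page systems.
 * `x << shift` with a negative shift is undefined behavior; decide the
 * shift direction explicitly instead.
 */
static size_t
shift_by(size_t x, int shift)
{
	return (shift >= 0) ? (x << shift) : (x >> -shift);
}
```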
* Fix a strict aliasing violation. (Jason Evans, 2015-08-12; 1 file, -1/+6)

* Fix chunk_dalloc_arena() re: zeroing due to purge. (Jason Evans, 2015-08-12; 1 file, -1/+1)

* Try to decommit new chunks. (Jason Evans, 2015-08-12; 3 files, -4/+13)
  Always leave decommit disabled on non-Windows systems.

* Refactor arena_mapbits_{small,large}_set() to not preserve unzeroed. (Jason Evans, 2015-08-11; 2 files, -43/+67)
  Fix arena_run_split_large_helper() to treat newly committed memory as zeroed.

* Refactor arena_mapbits unzeroed flag management. (Jason Evans, 2015-08-11; 2 files, -22/+23)
  Only set the unzeroed flag when initializing the entire mapbits entry, rather than mutating just the unzeroed bit. This simplifies the possible mapbits state transitions.

* Arena chunk decommit cleanups and fixes. (Jason Evans, 2015-08-11; 2 files, -27/+51)
  Decommit the arena chunk header during chunk deallocation if the rest of the chunk is decommitted.

* Implement chunk hook support for page run commit/decommit. (Jason Evans, 2015-08-07; 6 files, -151/+350)
  Cascade from decommit to purge when purging unused dirty pages, so that it is possible to decommit cleaned memory rather than just purging. For non-Windows debug builds, decommit runs rather than purging them, since this causes access of deallocated runs to segfault. This resolves #251.

* Fix an in-place growing large reallocation regression. (Jason Evans, 2015-08-07; 1 file, -5/+6)
  Fix arena_ralloc_large_grow() to properly account for large_pad, so that in-place large reallocation succeeds when possible, rather than always failing. This regression was introduced by 8a03cf039cd06f9fa6972711195055d865673966 (Implement cache index randomization for large allocations.).
* MSVC compatibility changes. (Matthijs, 2015-08-04; 1 file, -8/+16)
  - Decorate public functions with __declspec(allocator) and __declspec(restrict), just like MSVC 1900.
  - Support JEMALLOC_HAS_RESTRICT by defining the restrict keyword.
  - Move __declspec(nothrow) between 'void' and '*' so it compiles once more.
* Generalize chunk management hooks. (Jason Evans, 2015-08-04; 8 files, -403/+556)
  Add the "arena.<i>.chunk_hooks" mallctl, which replaces and expands on the "arena.<i>.chunk.{alloc,dalloc,purge}" mallctls. The chunk hooks allow control over chunk allocation/deallocation, decommit/commit, purging, and splitting/merging, such that the application can rely on jemalloc's internal chunk caching and retaining functionality, yet implement a variety of chunk management mechanisms and policies.

  Merge the chunks_[sz]ad_{mmap,dss} red-black trees into chunks_[sz]ad_retained. This slightly reduces how hard jemalloc tries to honor the dss precedence setting; prior to this change the precedence setting was also consulted when recycling chunks.

  Fix chunk purging. Don't purge chunks in arena_purge_stashed(); instead deallocate them in arena_unstash_purged(), so that the dirty memory linkage remains valid until after the last time it is used.

  This resolves #176 and #201.
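The shape of the hook table can be sketched as below. Field and typedef names follow the commit description, but the signatures are abbreviated for illustration; consult the jemalloc 4.x man page for the authoritative chunk_hooks_t definition. The opt-out convention (a hook returning true to refuse, letting jemalloc fall back or cascade) mirrors the behavior described in these commits.

```c
#include <stdbool.h>
#include <stddef.h>

/* Abbreviated sketch of the per-operation hook signatures. */
typedef bool (chunk_commit_t)(void *chunk, size_t size, size_t offset,
    size_t length, unsigned arena_ind);
typedef bool (chunk_decommit_t)(void *chunk, size_t size, size_t offset,
    size_t length, unsigned arena_ind);
typedef bool (chunk_purge_t)(void *chunk, size_t size, size_t offset,
    size_t length, unsigned arena_ind);

/*
 * Sketch of the table installed via the "arena.<i>.chunk_hooks"
 * mallctl; alloc/dalloc/split/merge hooks are omitted for brevity.
 */
typedef struct {
	chunk_commit_t *commit;
	chunk_decommit_t *decommit;
	chunk_purge_t *purge;
} chunk_hooks_sketch_t;

/*
 * A decommit hook that opts out: returning true means "not done",
 * so the caller cascades to purging instead of decommitting.
 */
static bool
decommit_opt_out(void *chunk, size_t size, size_t offset, size_t length,
    unsigned arena_ind)
{
	(void)chunk; (void)size; (void)offset; (void)length; (void)arena_ind;
	return (true);
}
```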
* Implement support for non-coalescing maps on MinGW. (Jason Evans, 2015-07-25; 2 files, -0/+9)
  - Do not reallocate huge objects in place if the number of backing chunks would change.
  - Do not cache multi-chunk mappings.
  This resolves #213.

* Fix huge_ralloc_no_move() to succeed more often. (Jason Evans, 2015-07-25; 1 file, -1/+1)
  Fix huge_ralloc_no_move() to succeed if an allocation request results in the same usable size as the existing allocation, even if the request size is smaller than the usable size. This bug did not cause correctness issues, but it could cause unnecessary moves during reallocation.

* Fix huge_palloc() to handle size rather than usize input. (Jason Evans, 2015-07-24; 1 file, -6/+12)
  huge_ralloc() passes a size that may not be precisely a size class, so make huge_palloc() handle the more general case of a size input rather than usize. This regression appears to have been introduced by the addition of in-place huge reallocation; as such it was never incorporated into a release.

* Change arena_palloc_large() parameter from size to usize. (Jason Evans, 2015-07-24; 1 file, -12/+12)
  This change merely documents that arena_palloc_large() always receives usize as its argument.
* Fix MinGW-related portability issues. (Jason Evans, 2015-07-23; 4 files, -60/+59)
  Create and use FMT* macros that are equivalent to the PRI* macros that inttypes.h defines. This allows uniform use of the Unix-specific format specifiers, e.g. "%zu", as well as avoiding Windows-specific definitions of e.g. PRIu64.

  Add ffs()/ffsl() support for compiling with gcc.

  Extract compatibility definitions of ENOENT, EINVAL, EAGAIN, EPERM, ENOMEM, and ENORANGE into include/msvc_compat/windows_extra.h and use the file for tests as well as for core jemalloc code.
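The FMT* idea can be sketched like this. The macro names and definitions here are illustrative, not jemalloc's actual ones: the non-Windows branch simply forwards to inttypes.h's PRI* macros, while the Windows branch uses MSVC's "I"-style length modifiers.

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative FMT* macros in the spirit of the commit: one spelling
 * per type that expands to the right length modifier on each platform,
 * so call sites never mention PRIu64 or "%Iu" directly.
 */
#ifdef _WIN32
#  define FMTu64 "I64u"
#  define FMTzu  "Iu"
#else
#  define FMTu64 PRIu64
#  define FMTzu  "zu"
#endif
```

Usage relies on string-literal concatenation, e.g. `printf("id %" FMTu64 ", %" FMTzu " bytes\n", (uint64_t)42, (size_t)16);`.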
* Add JEMALLOC_FORMAT_PRINTF(). (Jason Evans, 2015-07-22; 2 files, -5/+5)
  Replace JEMALLOC_ATTR(format(printf, ...)) with JEMALLOC_FORMAT_PRINTF(), so that configuration feature tests can omit the attribute if it would cause extraneous compilation warnings.
* Move JEMALLOC_NOTHROW just after return type. (Jason Evans, 2015-07-21; 1 file, -36/+27)
  Only use __declspec(nothrow) in C++ mode. This resolves #244.
* Remove JEMALLOC_ALLOC_SIZE annotations on functions not returning pointers. (Mike Hommey, 2015-07-21; 1 file, -2/+2)
  As per the gcc documentation: "The alloc_size attribute is used to tell the compiler that the function return value points to memory (...)". This resolves #245.
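For illustration, this is how alloc_size is meant to be used: on a function that returns a pointer to the allocated block, which is exactly why the attribute is meaningless (and removed) on non-pointer-returning functions. The wrapper and macro names below are hypothetical, not jemalloc's declarations.

```c
#include <stdlib.h>

/*
 * alloc_size(1) tells gcc/clang that the returned pointer refers to a
 * block whose size is given by argument 1; this feeds warnings like
 * out-of-bounds access detection and __builtin_object_size().
 */
#if defined(__GNUC__)
#  define EXAMPLE_ALLOC_SIZE(n) __attribute__((alloc_size(n)))
#else
#  define EXAMPLE_ALLOC_SIZE(n)
#endif

static void *example_mallocx(size_t size) EXAMPLE_ALLOC_SIZE(1);

/* Hypothetical pointer-returning allocator wrapper. */
static void *
example_mallocx(size_t size)
{
	return (malloc(size));
}
```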
* Add the config.cache_oblivious mallctl. (Jason Evans, 2015-07-17; 1 file, -0/+3)

* Revert to first-best-fit run/chunk allocation. (Jason Evans, 2015-07-16; 2 files, -77/+26)
  This effectively reverts 97c04a93838c4001688fe31bf018972b4696efe2 (Use first-fit rather than first-best-fit run/chunk allocation.). In some pathological cases, first-fit search dominates allocation time, and it also tends not to converge as readily on a steady state of memory layout, since precise allocation order has a bigger effect than for first-best-fit.

* Avoid function prototype incompatibilities. (Jason Evans, 2015-07-10; 1 file, -20/+40)
  Add various function attributes to the exported functions to give the compiler more information to work with during optimization, and also specify throw() when compiling with C++ on Linux, in order to adequately match what __THROW does in glibc. This resolves #237.

* Fix a variable declaration typo. (Jason Evans, 2015-07-08; 1 file, -1/+1)

* Use jemalloc_ffs() rather than ffs(). (Jason Evans, 2015-07-08; 1 file, -4/+12)

* Fix MinGW build warnings. (Jason Evans, 2015-07-08; 3 files, -49/+52)
  Conditionally define ENOENT, EINVAL, etc. (was unconditional). Add/use PRIzu, PRIzd, and PRIzx for use in malloc_printf() calls. gcc issued (harmless) warnings since e.g. "%zu" should be "%Iu" on Windows, and the alternative to this workaround would have been to disable the function attributes which cause gcc to look for type mismatches in formatted printing function calls.

* Fix an assignment type warning for tls_callback. (Jason Evans, 2015-07-08; 1 file, -2/+2)

* Move a variable declaration closer to its use. (Jason Evans, 2015-07-07; 1 file, -1/+2)

* Optimizations for Windows. (Matthijs, 2015-06-25; 3 files, -2/+24)
  - Set opt_lg_chunk based on run-time OS setting.
  - Verify LG_PAGE is compatible with run-time OS setting.
  - When targeting Windows Vista or newer, use SRWLOCK instead of CRITICAL_SECTION.
  - When targeting Windows Vista or newer, statically initialize init_lock.

* Fix size class overflow handling when profiling is enabled. (Jason Evans, 2015-06-24; 1 file, -4/+12)
  Fix size class overflow handling for malloc(), posix_memalign(), memalign(), calloc(), and realloc() when profiling is enabled. Remove an assertion that erroneously caused arena_sdalloc() to fail when profiling was enabled. This resolves #232.

* Convert arena_maybe_purge() recursion to iteration. (Jason Evans, 2015-06-23; 1 file, -10/+24)
  This resolves #235.

* Add alignment assertions to public aligned allocation functions. (Jason Evans, 2015-06-23; 1 file, -28/+33)

* Fix two valgrind integration regressions. (Jason Evans, 2015-06-22; 2 files, -3/+9)
  The regressions were never merged into the master branch.