path: root/src
Commit message (Author, Date; Files changed, Lines -/+)
...
* Allow const keys for lookup. (Joshua Kahn, 2015-11-09; 2 files, -5/+6)
  Signed-off-by: Steve Dougherty <sdougherty@barracuda.com>
  This resolves #281.
* Remove arena_run_dalloc_decommit(). (Mike Hommey, 2015-11-09; 1 file, -23/+2)
  This resolves #284.
* Fix an xallocx(..., MALLOCX_ZERO) bug. (Jason Evans, 2015-09-25; 1 file, -3/+9)
  Fix xallocx(..., MALLOCX_ZERO) to zero the last full trailing page of large
  allocations that have been randomly assigned an offset of 0 when the
  --enable-cache-oblivious configure option is enabled. This addresses a
  special case missed in d260f442ce693de4351229027b37b3293fcbfd7d (Fix
  xallocx(..., MALLOCX_ZERO) bugs.).
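  As context for this fix and the related one below, a minimal sketch of the
  xallocx() contract being repaired: with MALLOCX_ZERO, any newly usable
  trailing bytes must read as zero. This is an illustrative test, not code
  from the jemalloc test suite.

      #include <assert.h>
      #include <stdlib.h>
      #include <jemalloc/jemalloc.h>

      int main(void) {
          /* Allocate a large region, then ask to grow it in place. */
          char *p = mallocx(4 * 4096, 0);
          assert(p != NULL);
          size_t old_usize = sallocx(p, 0);
          /* xallocx() never moves p; it returns the resulting usable size. */
          size_t new_usize = xallocx(p, old_usize + 4096, 0, MALLOCX_ZERO);
          if (new_usize > old_usize) {
              /* The bytes beyond the old usable size must now be zero. */
              assert(p[old_usize] == 0);
          }
          dallocx(p, 0);
          return 0;
      }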
* Work around an NPTL-specific TSD issue. (Jason Evans, 2015-09-24; 1 file, -0/+3)
  Work around a potentially bad thread-specific data initialization
  interaction with NPTL (glibc's pthreads implementation).
  This resolves #283.
* Fix xallocx(..., MALLOCX_ZERO) bugs. (Jason Evans, 2015-09-24; 2 files, -14/+26)
  Zero all trailing bytes of large allocations when the
  --enable-cache-oblivious configure option is enabled. This regression was
  introduced by 8a03cf039cd06f9fa6972711195055d865673966 (Implement cache
  index randomization for large allocations.).
  Zero trailing bytes of huge allocations when resizing from/to a size class
  that is not a multiple of the chunk size.
* Fix prof_tctx_dump_iter() to filter. (Jason Evans, 2015-09-22; 1 file, -5/+17)
  Fix prof_tctx_dump_iter() to filter out nodes that were created after heap
  profile dumping started. Prior to this fix, spurious entries with arbitrary
  object/byte counts could appear in heap profiles, which resulted in jeprof
  inaccuracies or failures.
* Make arena_dalloc_large_locked_impl() static. (Jason Evans, 2015-09-20; 1 file, -1/+1)
* Add mallocx() OOM tests. (Jason Evans, 2015-09-17; 1 file, -0/+2)
* Fix prof_alloc_rollback(). (Jason Evans, 2015-09-17; 1 file, -1/+1)
  Fix prof_alloc_rollback() to read tdata from thread-specific data rather
  than dereferencing a potentially invalid tctx.
* Simplify imallocx_prof_sample(). (Jason Evans, 2015-09-17; 1 file, -26/+13)
  Simplify imallocx_prof_sample() to always operate on usize rather than
  sometimes using size. This avoids redundant usize computations and more
  closely fits the style adopted by i[rx]allocx_prof_sample() to fix
  sampling bugs.
* Fix irallocx_prof_sample(). (Jason Evans, 2015-09-17; 1 file, -5/+5)
  Fix irallocx_prof_sample() to always allocate large regions, even when
  alignment is non-zero.
* Fix ixallocx_prof_sample(). (Jason Evans, 2015-09-17; 1 file, -17/+4)
  Fix ixallocx_prof_sample() to never modify nor create sampled small
  allocations. xallocx() is in general incapable of moving small
  allocations, so this fix removes buggy code without loss of generality.
* Centralize xallocx() size[+extra] overflow checks. (Jason Evans, 2015-09-15; 2 files, -14/+11)
* Reduce variable scope. (Dmitry-Me, 2015-09-15; 3 files, -9/+9)
  This resolves #274.
* Fix ixallocx_prof() to check for size greater than HUGE_MAXCLASS. (Jason Evans, 2015-09-15; 1 file, -1/+5)
* Resolve an unsupported special case in arena_prof_tctx_set(). (Jason Evans, 2015-09-15; 2 files, -3/+10)
  Add arena_prof_tctx_reset() and use it instead of arena_prof_tctx_set()
  when resetting the tctx pointer during reallocation, which happens
  whenever an originally sampled reallocated object is not sampled during
  reallocation. This regression was introduced by
  594c759f37c301d0245dc2accf4d4aaf9d202819 (Optimize arena_prof_tctx_set().).
* Fix ixallocx_prof_sample() argument order reversal. (Jason Evans, 2015-09-15; 1 file, -1/+1)
  Fix ixallocx_prof() to pass usize_max and zero to ixallocx_prof_sample()
  in the correct order.
* s/max_usize/usize_max/g (Jason Evans, 2015-09-15; 1 file, -6/+6)
* s/oldptr/old_ptr/g (Jason Evans, 2015-09-15; 1 file, -15/+15)
* Make one call to prof_active_get_unlocked() per allocation event. (Jason Evans, 2015-09-15; 1 file, -10/+19)
  Make one call to prof_active_get_unlocked() per allocation event, and use
  the result throughout the relevant functions that handle an allocation
  event. Also add a missing check in prof_realloc(). These fixes protect
  allocation events against concurrent prof_active changes.
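  The fix follows a common pattern for racing against a mutable flag: read
  it once, then use the snapshot consistently. A generic sketch of the idea;
  the names below are illustrative stand-ins, not jemalloc's internals.

      #include <stdatomic.h>
      #include <stdbool.h>
      #include <stddef.h>

      /* Hypothetical stand-in for jemalloc's prof_active state. */
      static atomic_bool prof_active;

      static void on_alloc_event(size_t usize) {
          /* Read the flag once per allocation event... */
          bool active = atomic_load(&prof_active);
          /* ...and use that single snapshot for every decision that
           * follows, so a concurrent toggle cannot leave the event
           * half-sampled. */
          if (active) {
              /* record a sample covering usize bytes */
          } else {
              /* mark the allocation as not sampled */
          }
          (void)usize;
      }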
* Fix irealloc_prof() to call prof_alloc_rollback() on OOM. (Jason Evans, 2015-09-15; 1 file, -1/+3)
* Optimize irallocx_prof() to optimistically update the sampler state. (Jason Evans, 2015-09-15; 1 file, -3/+3)
* Fix ixallocx_prof() size+extra overflow. (Jason Evans, 2015-09-15; 1 file, -0/+3)
  Fix ixallocx_prof() to clamp the extra parameter if size+extra would
  overflow HUGE_MAXCLASS.
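  The clamping arithmetic is easy to get wrong, so here is a minimal sketch
  of the idea. HUGE_MAXCLASS stands in for jemalloc's largest supported size
  class; the helper name and the value are hypothetical.

      #include <stddef.h>

      /* Illustrative value; the real limit derives from the size-class
       * configuration. */
      #define HUGE_MAXCLASS ((size_t)1 << 46)

      static size_t clamp_extra(size_t size, size_t extra) {
          /* Assuming size <= HUGE_MAXCLASS: if size + extra would exceed
           * the limit (or wrap around), shrink extra so the sum stays
           * representable. */
          if (extra > HUGE_MAXCLASS - size)
              extra = HUGE_MAXCLASS - size;
          return extra;
      }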
* Rename arena_maxclass to large_maxclass. (Jason Evans, 2015-09-12; 2 files, -13/+13)
  arena_maxclass is no longer an appropriate name, because arenas also
  manage huge allocations.
* Fix xallocx() bugs. (Jason Evans, 2015-09-12; 2 files, -171/+140)
  Fix xallocx() bugs related to the 'extra' parameter when specified as
  non-zero.
* Fix "prof.reset" mallctl-related corruption.Jason Evans2015-09-101-3/+11
| | | | | | | Fix heap profiling to distinguish among otherwise identical sample sites with interposed resets (triggered via the "prof.reset" mallctl). This bug could cause data structure corruption that would most likely result in a segfault.
* Reduce variable scope. (Dmitry-Me, 2015-09-04; 1 file, -9/+10)
* Force initialization of the init_lock in malloc_init_hard on Windows XP. (Mike Hommey, 2015-09-04; 1 file, -1/+15)
  This resolves #269.
* Optimize arena_prof_tctx_set(). (Jason Evans, 2015-09-02; 1 file, -1/+1)
  Optimize arena_prof_tctx_set() to avoid reading run metadata when deciding
  whether it's actually necessary to write.
* Don't purge junk-filled chunks when shrinking huge allocations. (Mike Hommey, 2015-08-28; 1 file, -6/+8)
  When junk filling is enabled, shrinking an allocation fills the bytes that
  were previously allocated but now aren't. Purging the chunk before doing
  that is just a waste of time.
  This resolves #260.
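  A sketch of the behavior described above; the junk byte and the function
  shape are illustrative (jemalloc conventionally junk-fills freed memory
  with 0x5a), not the actual huge-shrink path.

      #include <stddef.h>
      #include <string.h>

      #define JUNK_BYTE 0x5a /* assumed free-junk pattern */

      /* On shrink, only the bytes that fall out of the allocation need
       * junk filling; purging the chunk first would be redundant work. */
      static void shrink_junk_fill(void *ptr, size_t old_usize,
          size_t new_usize) {
          if (new_usize < old_usize) {
              memset((char *)ptr + new_usize, JUNK_BYTE,
                  old_usize - new_usize);
          }
      }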
* Fix chunk purge hook calls for in-place huge shrinking reallocation. (Mike Hommey, 2015-08-28; 1 file, -2/+2)
  Fix chunk purge hook calls for in-place huge shrinking reallocation to
  specify the old chunk size rather than the new chunk size. This bug caused
  no correctness issues for the default chunk purge function, but was
  visible to custom functions set via the "arena.<i>.chunk_hooks" mallctl.
  This resolves #264.
* Fix arenas_cache_cleanup() and arena_get_hard(). (Jason Evans, 2015-08-28; 1 file, -6/+5)
  Fix arenas_cache_cleanup() and arena_get_hard() to handle
  allocation/deallocation within the application's thread-specific data
  cleanup functions even after arenas_cache is torn down. This is a more
  general fix that complements 45e9f66c280e1ba8bebf7bed387a43bc9e45536d (Fix
  arenas_cache_cleanup().).
* Fix arenas_cache_cleanup(). (Christopher Ferris, 2015-08-21; 1 file, -1/+5)
  Fix arenas_cache_cleanup() to handle allocation/deallocation within the
  application's thread-specific data cleanup functions even after
  arenas_cache is torn down.
* Rename index_t to szind_t to avoid an existing type on Solaris. (Jason Evans, 2015-08-19; 2 files, -27/+27)
  This resolves #256.
* Don't bitshift by negative amounts. (Jason Evans, 2015-08-19; 1 file, -4/+3)
  Don't bitshift by negative amounts when encoding/decoding run sizes in
  chunk header maps. This affected systems with page sizes greater than
  8 KiB.
  Reported by Ingvar Hagelund <ingvar@redpill-linpro.com>.
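  For background: in C, shifting by a negative amount is undefined behavior,
  so a shift whose direction depends on configuration must branch instead of
  negating. A generic sketch of the pitfall; the constants and encoding are
  illustrative, not jemalloc's actual chunk map layout.

      #include <stddef.h>

      #define LG_PAGE  13 /* illustrative: an 8 KiB page system */
      #define LG_FIELD 12 /* illustrative field width in the encoding */

      static size_t encode_run_size(size_t size) {
          /* BAD: size << (LG_FIELD - LG_PAGE) is undefined behavior when
           * the difference is negative, as it is with these constants. */
      #if LG_FIELD >= LG_PAGE
          return size << (LG_FIELD - LG_PAGE);
      #else
          /* GOOD: pick the shift direction explicitly. */
          return size >> (LG_PAGE - LG_FIELD);
      #endif
      }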
* Fix a strict aliasing violation. (Jason Evans, 2015-08-12; 1 file, -1/+6)
* Fix chunk_dalloc_arena() re: zeroing due to purge. (Jason Evans, 2015-08-12; 1 file, -1/+1)
* Try to decommit new chunks. (Jason Evans, 2015-08-12; 3 files, -4/+13)
  Always leave decommit disabled on non-Windows systems.
* Refactor arena_mapbits_{small,large}_set() to not preserve unzeroed. (Jason Evans, 2015-08-11; 2 files, -43/+67)
  Fix arena_run_split_large_helper() to treat newly committed memory as
  zeroed.
* Refactor arena_mapbits unzeroed flag management. (Jason Evans, 2015-08-11; 2 files, -22/+23)
  Only set the unzeroed flag when initializing the entire mapbits entry,
  rather than mutating just the unzeroed bit. This simplifies the possible
  mapbits state transitions.
* Arena chunk decommit cleanups and fixes. (Jason Evans, 2015-08-11; 2 files, -27/+51)
  Decommit the arena chunk header during chunk deallocation if the rest of
  the chunk is decommitted.
* Implement chunk hook support for page run commit/decommit. (Jason Evans, 2015-08-07; 6 files, -151/+350)
  Cascade from decommit to purge when purging unused dirty pages, so that it
  is possible to decommit cleaned memory rather than just purging. For
  non-Windows debug builds, decommit runs rather than purging them, since
  this causes access of deallocated runs to segfault.
  This resolves #251.
* Fix an in-place growing large reallocation regression. (Jason Evans, 2015-08-07; 1 file, -5/+6)
  Fix arena_ralloc_large_grow() to properly account for large_pad, so that
  in-place large reallocation succeeds when possible, rather than always
  failing. This regression was introduced by
  8a03cf039cd06f9fa6972711195055d865673966 (Implement cache index
  randomization for large allocations.).
* MSVC compatibility changes. (Matthijs, 2015-08-04; 1 file, -8/+16)
  - Decorate public functions with __declspec(allocator) and
    __declspec(restrict), just like MSVC 1900.
  - Support JEMALLOC_HAS_RESTRICT by defining the restrict keyword.
  - Move __declspec(nothrow) between 'void' and '*' so it compiles once
    more.
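  A sketch of what that decoration can look like on a public allocation
  function; the macro names and the function are hypothetical, not
  jemalloc's actual headers.

      #include <stddef.h>

      /* Hypothetical portability macros for MSVC-style decoration. */
      #if defined(_MSC_VER) && _MSC_VER >= 1900
      #  define ATTR_ALLOCATOR __declspec(allocator) __declspec(restrict)
      #  define ATTR_NOTHROW   __declspec(nothrow)
      #else
      #  define ATTR_ALLOCATOR
      #  define ATTR_NOTHROW
      #endif

      /* Note the placement: the nothrow declspec sits between 'void' and
       * '*', which is exactly what the commit above fixes. */
      ATTR_ALLOCATOR void ATTR_NOTHROW *example_malloc(size_t size);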
* Generalize chunk management hooks. (Jason Evans, 2015-08-04; 8 files, -403/+556)
  Add the "arena.<i>.chunk_hooks" mallctl, which replaces and expands on the
  "arena.<i>.chunk.{alloc,dalloc,purge}" mallctls. The chunk hooks allow
  control over chunk allocation/deallocation, decommit/commit, purging, and
  splitting/merging, such that the application can rely on jemalloc's
  internal chunk caching and retaining functionality, yet implement a
  variety of chunk management mechanisms and policies.
  Merge the chunks_[sz]ad_{mmap,dss} red-black trees into
  chunks_[sz]ad_retained. This slightly reduces how hard jemalloc tries to
  honor the dss precedence setting; prior to this change the precedence
  setting was also consulted when recycling chunks.
  Fix chunk purging. Don't purge chunks in arena_purge_stashed(); instead
  deallocate them in arena_unstash_purged(), so that the dirty memory
  linkage remains valid until after the last time it is used.
  This resolves #176 and #201.
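  A minimal sketch of overriding one hook through the new mallctl, assuming
  the chunk_hooks_t layout and chunk allocation hook signature documented
  for jemalloc 4.x; the logging wrapper itself is illustrative.

      #include <stdbool.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <jemalloc/jemalloc.h>

      static chunk_hooks_t default_hooks;

      /* Log each chunk allocation, then defer to the default hook. */
      static void *log_chunk_alloc(void *new_addr, size_t size,
          size_t alignment, bool *zero, bool *commit, unsigned arena_ind) {
          fprintf(stderr, "chunk alloc: %zu bytes (arena %u)\n", size,
              arena_ind);
          return default_hooks.alloc(new_addr, size, alignment, zero,
              commit, arena_ind);
      }

      int main(void) {
          size_t sz = sizeof(chunk_hooks_t);
          /* Read arena 0's current hooks, then write back a modified
           * copy. */
          mallctl("arena.0.chunk_hooks", &default_hooks, &sz, NULL, 0);
          chunk_hooks_t hooks = default_hooks;
          hooks.alloc = log_chunk_alloc;
          mallctl("arena.0.chunk_hooks", NULL, NULL, &hooks, sizeof(hooks));
          free(malloc(1 << 22)); /* trigger some chunk activity */
          return 0;
      }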
* Implement support for non-coalescing maps on MinGW. (Jason Evans, 2015-07-25; 2 files, -0/+9)
  - Do not reallocate huge objects in place if the number of backing chunks
    would change.
  - Do not cache multi-chunk mappings.
  This resolves #213.
* Fix huge_ralloc_no_move() to succeed more often. (Jason Evans, 2015-07-25; 1 file, -1/+1)
  Fix huge_ralloc_no_move() to succeed if an allocation request results in
  the same usable size as the existing allocation, even if the request size
  is smaller than the usable size. This bug did not cause correctness
  issues, but it could cause unnecessary moves during reallocation.
* Fix huge_palloc() to handle size rather than usize input. (Jason Evans, 2015-07-24; 1 file, -6/+12)
  huge_ralloc() passes a size that may not be precisely a size class, so
  make huge_palloc() handle the more general case of a size input rather
  than usize. This regression appears to have been introduced by the
  addition of in-place huge reallocation; as such it was never incorporated
  into a release.
* Change arena_palloc_large() parameter from size to usize. (Jason Evans, 2015-07-24; 1 file, -12/+12)
  This change merely documents that arena_palloc_large() always receives
  usize as its argument.
* Fix MinGW-related portability issues. (Jason Evans, 2015-07-23; 4 files, -60/+59)
  Create and use FMT* macros that are equivalent to the PRI* macros that
  inttypes.h defines. This allows uniform use of the Unix-specific format
  specifiers, e.g. "%zu", as well as avoiding Windows-specific definitions
  of e.g. PRIu64.
  Add ffs()/ffsl() support for compiling with gcc.
  Extract compatibility definitions of ENOENT, EINVAL, EAGAIN, EPERM,
  ENOMEM, and ERANGE into include/msvc_compat/windows_extra.h and use the
  file for tests as well as for core jemalloc code.
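  A sketch of the FMT* idea, modeled on the PRI* convention from
  inttypes.h; the macro names and the MSVC length modifiers are
  illustrative, not copied from jemalloc's headers.

      #include <inttypes.h>
      #include <stdio.h>

      /* Map each conversion to the platform's length modifier,
       * PRI*-style. */
      #ifdef _WIN32
      #  define FMTu64 "I64u" /* MSVC-style 64-bit modifier */
      #  define FMTzu  "Iu"   /* MSVC-style size_t modifier */
      #else
      #  define FMTu64 PRIu64
      #  define FMTzu  "zu"
      #endif

      int main(void) {
          size_t n = 4096;
          uint64_t v = (uint64_t)1 << 40;
          /* Call sites stay uniform regardless of platform. */
          printf("n=%" FMTzu ", v=%" FMTu64 "\n", n, v);
          return 0;
      }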