Commit log (subject, author, date, files changed, lines -removed/+added)
* Fix prof_{malloc,free}_sample_object() call order in prof_realloc().  (Jason Evans, 2015-09-15, 2 files, -3/+11)
    Fix prof_realloc() to call prof_free_sampled_object() after calling prof_malloc_sample_object(). Prior to this fix, if tctx and old_tctx were the same, the tctx could have been prematurely destroyed.
* Fix ixallocx_prof_sample() argument order reversal.  (Jason Evans, 2015-09-15, 2 files, -1/+3)
    Fix ixallocx_prof() to pass usize_max and zero to ixallocx_prof_sample() in the correct order.
* s/max_usize/usize_max/g  (Jason Evans, 2015-09-15, 1 file, -6/+6)
* s/oldptr/old_ptr/g  (Jason Evans, 2015-09-15, 1 file, -15/+15)
* Make one call to prof_active_get_unlocked() per allocation event.  (Jason Evans, 2015-09-15, 3 files, -18/+33)
    Make one call to prof_active_get_unlocked() per allocation event, and use the result throughout the relevant functions that handle an allocation event. Also add a missing check in prof_realloc(). These fixes protect allocation events against concurrent prof_active changes.
* Fix irealloc_prof() to call prof_alloc_rollback() on OOM.  (Jason Evans, 2015-09-15, 2 files, -1/+4)
* Optimize irallocx_prof() to optimistically update the sampler state.  (Jason Evans, 2015-09-15, 1 file, -3/+3)
* Fix ixallocx_prof() size+extra overflow.  (Jason Evans, 2015-09-15, 1 file, -0/+3)
    Fix ixallocx_prof() to clamp the extra parameter if size+extra would overflow HUGE_MAXCLASS.
* Remove check_stress from check target's dependencies.  (Jason Evans, 2015-09-12, 1 file, -4/+4)
    Prior to this change the debug build/test command needed to look like:

        make all tests && make check_unit && make check_integration && \
        make check_integration_prof

    This is now simply:

        make check

    Rename the check_stress target to stress.
* Rename arena_maxclass to large_maxclass.  (Jason Evans, 2015-09-12, 8 files, -28/+28)
    arena_maxclass is no longer an appropriate name, because arenas also manage huge allocations.
* Fix xallocx() bugs.  (Jason Evans, 2015-09-12, 9 files, -179/+394)
    Fix xallocx() bugs related to the 'extra' parameter when specified as non-zero.
* Fix "prof.reset" mallctl-related corruption.  (Jason Evans, 2015-09-10, 4 files, -20/+84)
    Fix heap profiling to distinguish among otherwise identical sample sites with interposed resets (triggered via the "prof.reset" mallctl). This bug could cause data structure corruption that would most likely result in a segfault.
* Reduce variable scope.  (Dmitry-Me, 2015-09-04, 1 file, -9/+10)
* Force initialization of the init_lock in malloc_init_hard on Windows XP.  (Mike Hommey, 2015-09-04, 1 file, -1/+15)
    This resolves #269.
* Fix pointer comparison with undefined behavior.  (Jason Evans, 2015-09-04, 1 file, -2/+2)
    This didn't cause bad code generation in the one case spot-checked (gcc 4.8.1), but had the potential to do so. This bug was introduced by 594c759f37c301d0245dc2accf4d4aaf9d202819 (Optimize arena_prof_tctx_set().).
* Optimize arena_prof_tctx_set().  (Jason Evans, 2015-09-02, 4 files, -28/+56)
    Optimize arena_prof_tctx_set() to avoid reading run metadata when deciding whether it's actually necessary to write.
* Fix TLS configuration.  (Jason Evans, 2015-09-02, 2 files, -8/+16)
    Fix TLS configuration such that it is enabled by default for platforms on which it works correctly. This regression was introduced by ac5db02034c01357a4ce90504886046a58117921 (Make --enable-tls and --enable-lazy-lock take precedence over configure.ac-hardcoded defaults).
* Don't purge junk-filled chunks when shrinking huge allocations.  (Mike Hommey, 2015-08-28, 2 files, -6/+12)
    When junk filling is enabled, shrinking an allocation fills the bytes that were previously allocated but now aren't. Purging the chunk before doing that is just a waste of time. This resolves #260.
* Fix chunk purge hook calls for in-place huge shrinking reallocation.  (Mike Hommey, 2015-08-28, 2 files, -2/+6)
    Fix chunk purge hook calls for in-place huge shrinking reallocation to specify the old chunk size rather than the new chunk size. This bug caused no correctness issues for the default chunk purge function, but was visible to custom functions set via the "arena.<i>.chunk_hooks" mallctl. This resolves #264.
* Fix arenas_cache_cleanup() and arena_get_hard().  (Jason Evans, 2015-08-28, 2 files, -9/+8)
    Fix arenas_cache_cleanup() and arena_get_hard() to handle allocation/deallocation within the application's thread-specific data cleanup functions even after arenas_cache is torn down. This is a more general fix that complements 45e9f66c280e1ba8bebf7bed387a43bc9e45536d (Fix arenas_cache_cleanup().).
* Add JEMALLOC_CXX_THROW to the memalign() function prototype.  (Jason Evans, 2015-08-26, 2 files, -1/+4)
    Add JEMALLOC_CXX_THROW to the memalign() function prototype, in order to match glibc and avoid compilation errors when including both jemalloc/jemalloc.h and malloc.h in C++ code. This change was unintentionally omitted from ae93d6bf364e9db9f9ee69c3e5f9df110d8685a4 (Avoid function prototype incompatibilities.).
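The pattern looks roughly like the following header fragment. The macro name is jemalloc's; this particular expansion is an illustrative sketch, since the shipped definition may differ:

```c
#include <stddef.h>

/* glibc's <malloc.h> declares memalign() with an exception specification
 * when compiled as C++, so a C header consumed from C++ must use a
 * matching prototype to avoid a declaration conflict. */
#ifdef __cplusplus
#  define JEMALLOC_CXX_THROW throw()
#else
#  define JEMALLOC_CXX_THROW
#endif

void *memalign(size_t alignment, size_t size) JEMALLOC_CXX_THROW;
```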
* Fix arenas_cache_cleanup().  (Christopher Ferris, 2015-08-21, 3 files, -2/+15)
    Fix arenas_cache_cleanup() to handle allocation/deallocation within the application's thread-specific data cleanup functions even after arenas_cache is torn down.
* Silence compiler warnings for unreachable code.  (Jason Evans, 2015-08-20, 1 file, -12/+14)
    Reported by Ingvar Hagelund.
* Rename index_t to szind_t to avoid an existing type on Solaris.  (Jason Evans, 2015-08-19, 7 files, -70/+71)
    This resolves #256.
* Don't bitshift by negative amounts.  (Jason Evans, 2015-08-19, 4 files, -13/+50)
    Don't bitshift by negative amounts when encoding/decoding run sizes in chunk header maps. This affected systems with page sizes greater than 8 KiB. Reported by Ingvar Hagelund <ingvar@redpill-linpro.com>.
* Merge branch 'dev'  (tag: 4.0.0)  (Jason Evans, 2015-08-17, 131 files, -9046/+16699)
|\
| * Update ChangeLog for 4.0.0.  (Jason Evans, 2015-08-17, 1 file, -2/+1)
| * Improve arena.<i>.chunk_hooks documentation formatting.  (Jason Evans, 2015-08-14, 1 file, -37/+46)
| * Update in-place reallocation documentation.  (Jason Evans, 2015-08-14, 1 file, -3/+9)
| * Update large/huge size class cutoff documentation.  (Jason Evans, 2015-08-14, 1 file, -9/+9)
| * Fix a comment.  (Jason Evans, 2015-08-13, 1 file, -1/+1)
| * Fix gcc build failure (define __has_builtin).  (Jason Evans, 2015-08-12, 1 file, -0/+3)
| * Check whether gcc version supports __builtin_unreachable().  (Jason Evans, 2015-08-12, 1 file, -0/+11)
| * Fix a strict aliasing violation.  (Jason Evans, 2015-08-12, 1 file, -1/+6)
| * Fix test for MinGW.  (Jason Evans, 2015-08-12, 1 file, -11/+15)
| * Fix chunk_dalloc_arena() re: zeroing due to purge.  (Jason Evans, 2015-08-12, 1 file, -1/+1)
| * Update list of private symbols.  (Jason Evans, 2015-08-12, 1 file, -25/+14)
| * Fix assertion in test.  (Jason Evans, 2015-08-12, 1 file, -1/+1)
| * Remove obsolete entry.  (Jason Evans, 2015-08-12, 1 file, -4/+0)
| * Stop forcing --enable-munmap on MinGW.  (Jason Evans, 2015-08-12, 2 files, -8/+1)
|     This is no longer necessary because of the more general chunk merge/split approach to dealing with map coalescing.
| * Try to decommit new chunks.  (Jason Evans, 2015-08-12, 4 files, -15/+27)
|     Always leave decommit disabled on non-Windows systems.
| * Refactor arena_mapbits_{small,large}_set() to not preserve unzeroed.  (Jason Evans, 2015-08-11, 3 files, -54/+73)
|     Fix arena_run_split_large_helper() to treat newly committed memory as zeroed.
| * Fix build failure.  (Jason Evans, 2015-08-11, 1 file, -1/+1)
|     This regression was introduced by de249c8679a188065949f2560b1f0015ea6534b4 (Arena chunk decommit cleanups and fixes.). This resolves #254.
| * Make --enable-tls and --enable-lazy-lock take precedence over configure.ac-hardcoded defaults.  (Mike Hommey, 2015-08-11, 1 file, -5/+9)
| * Refactor arena_mapbits unzeroed flag management.  (Jason Evans, 2015-08-11, 4 files, -37/+35)
|     Only set the unzeroed flag when initializing the entire mapbits entry, rather than mutating just the unzeroed bit. This simplifies the possible mapbits state transitions.
| * Arena chunk decommit cleanups and fixes.  (Jason Evans, 2015-08-11, 5 files, -29/+55)
|     Decommit the arena chunk header during chunk deallocation if the rest of the chunk is decommitted.
| * Add no-OOM assertions to test.  (Jason Evans, 2015-08-07, 1 file, -6/+12)
| * Implement chunk hook support for page run commit/decommit.  (Jason Evans, 2015-08-07, 15 files, -267/+545)
|     Cascade from decommit to purge when purging unused dirty pages, so that it is possible to decommit cleaned memory rather than just purging. For non-Windows debug builds, decommit runs rather than purging them, since decommitting causes accesses to deallocated runs to segfault. This resolves #251.
| * Fix an in-place growing large reallocation regression.  (Jason Evans, 2015-08-07, 1 file, -5/+6)
|     Fix arena_ralloc_large_grow() to properly account for large_pad, so that in-place large reallocation succeeds when possible, rather than always failing. This regression was introduced by 8a03cf039cd06f9fa6972711195055d865673966 (Implement cache index randomization for large allocations.).
| * Work around a _FORTIFY_SOURCE false positive.  (Daniel Micay, 2015-08-04, 1 file, -0/+3)
|     In builds with profiling disabled (the default), the opt_prof_prefix array has a one-byte length as a micro-optimization. This causes the use of write in the unused profiling code to be statically detected as a buffer overflow by Bionic's _FORTIFY_SOURCE implementation, since it tries to detect read overflows in addition to write overflows. This works around the problem by informing the compiler that not_reached() means the code is unreachable in release builds.