path: root/jemalloc/src
Commits (most recent first; each entry shows subject, author, date, files changed, and lines -/+):
* Use madvise(..., MADV_FREE) on OS X. (Jason Evans, 2010-10-24; 1 file changed, -3/+0)
  Use madvise(..., MADV_FREE) rather than msync(..., MS_KILLPAGES) on OS X,
  since it works for at least OS X 10.5 and 10.6.
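  For context, a minimal sketch of the purge call this commit switches to,
  assuming a page-aligned region previously obtained from mmap()
  (pages_purge() is a hypothetical name, not the actual jemalloc function):

      #include <stddef.h>
      #include <sys/mman.h>

      /* Hint that pages in [addr, addr+len) no longer hold useful data;
       * the kernel may reclaim them lazily.  MADV_FREE behaves this way
       * on OS X 10.5/10.6, making the msync(..., MS_KILLPAGES) variant
       * unnecessary. */
      static void
      pages_purge(void *addr, size_t len)
      {
          madvise(addr, len, MADV_FREE);
      }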
* Add missing #ifdef JEMALLOC_PROF. (Jason Evans, 2010-10-24; 1 file changed, -0/+2)
  Only call prof_boot0() if profiling is enabled.
* Replace JEMALLOC_OPTIONS with MALLOC_CONF. (Jason Evans, 2010-10-24; 7 files changed, -497/+575)
  Replace the single-character run-time flags with key/value pairs, which
  can be set via the malloc_conf global, /etc/malloc.conf, and the
  MALLOC_CONF environment variable.
  Replace the JEMALLOC_PROF_PREFIX environment variable with the
  "opt.prof_prefix" option.
  Replace umax2s() with u2s().
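  A hedged sketch of the new configuration mechanism (the option string
  shown is illustrative; see the manual for the full set of keys):

      /* Key/value pairs may come from the MALLOC_CONF environment
       * variable (e.g. MALLOC_CONF="prof_prefix:jeprof.out"), from
       * /etc/malloc.conf, or from the malloc_conf global, which jemalloc
       * reads during bootstrap. */
      const char *malloc_conf = "prof_prefix:jeprof.out";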
* Fix heap profiling bugs. (Jason Evans, 2010-10-22; 3 files changed, -66/+40)
  Fix a regression due to the recent heap profiling accuracy improvements:
  prof_{m,re}alloc() must set the object's profiling context regardless of
  whether it is sampled.
  Fix management of the CHUNK_MAP_CLASS chunk map bits, such that all large
  object (re-)allocation paths correctly initialize the bits. Prior to this
  fix, in-place realloc() cleared the bits, resulting in incorrect reported
  object size from arena_salloc_demote(). After this fix the non-demoted bit
  pattern is all zeros (instead of all ones), which makes it easier to
  assure that the bits are properly set.
* Fix a heap profiling regression. (Jason Evans, 2010-10-21; 1 file changed, -99/+0)
  Call prof_ctx_set() in all paths through prof_{m,re}alloc().
  Inline arena_prof_ctx_get().
* Inline the fast path for heap sampling. (Jason Evans, 2010-10-21; 1 file changed, -479/+74)
  Inline the heap sampling code that is executed for every allocation event
  (regardless of whether a sample is taken).
  Combine all prof TLS data into a single data structure, in order to reduce
  the TLS lookup volume.
* Add per thread allocation counters, and enhance heap sampling. (Jason Evans, 2010-10-21; 4 files changed, -67/+262)
  Add the "thread.allocated" and "thread.deallocated" mallctls, which can be
  used to query the total number of bytes ever allocated/deallocated by the
  calling thread.
  Add s2u() and sa2u(), which can be used to compute the usable size that
  will result from an allocation request of a particular size/alignment.
  Re-factor ipalloc() to use sa2u().
  Enhance the heap profiler to trigger samples based on usable size, rather
  than request size. This has a subtle, but important, impact on the
  accuracy of heap sampling. For example, previous to this change, 16- and
  17-byte objects were sampled at nearly the same rate, but 17-byte objects
  actually consume 32 bytes each. Therefore it was possible for the sample
  to be somewhat skewed compared to actual memory usage of the allocated
  objects.
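  A minimal sketch of reading the new counters through mallctl() (error
  handling elided; releases of this era could also expose the function under
  a configurable symbol prefix):

      #include <stdint.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <jemalloc/jemalloc.h>

      int
      main(void)
      {
          uint64_t alloc, dalloc;
          size_t len = sizeof(uint64_t);
          void *p = malloc(100);  /* make the counters move */

          free(p);
          /* oldp/oldlenp receive the calling thread's lifetime totals. */
          mallctl("thread.allocated", &alloc, &len, NULL, 0);
          mallctl("thread.deallocated", &dalloc, &len, NULL, 0);
          printf("allocated %llu, deallocated %llu\n",
              (unsigned long long)alloc, (unsigned long long)dalloc);
          return (0);
      }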
* Fix a bug in arena_dalloc_bin_run(). (Jason Evans, 2010-10-19; 1 file changed, -13/+53)
  Fix the newsize argument to arena_run_trim_tail() that
  arena_dalloc_bin_run() passes. Previously, oldsize-newsize (i.e. the
  complement) was passed, which could erroneously cause dirty pages to be
  returned to the clean available runs tree. Prior to the CHUNK_MAP_ZEROED
  --> CHUNK_MAP_UNZEROED conversion, this bug merely caused dirty pages to
  be unaccounted for (and therefore never get purged), but with
  CHUNK_MAP_UNZEROED, this could cause dirty pages to be treated as zeroed
  (i.e. memory corruption).
* Fix arena bugs. (Jason Evans, 2010-10-18; 1 file changed, -6/+19)
  Split arena_dissociate_bin_run() out of arena_dalloc_bin_run(), so that
  arena_bin_malloc_hard() can avoid dissociation when recovering from losing
  a race. This fixes a bug introduced by a recent attempted fix.
  Fix a regression in arena_ralloc_large_grow() that was introduced by
  recent fixes.
* Fix arena bugs. (Jason Evans, 2010-10-18; 1 file changed, -43/+58)
  Move part of arena_bin_lower_run() into the callers, since the conditions
  under which it should be called differ slightly between callers.
  Fix arena_chunk_purge() to omit run size in the last map entry for each
  run it temporarily allocates.
* Add assertions to run coalescing. (Jason Evans, 2010-10-18; 1 file changed, -7/+17)
  Assert that the chunk map bits at the ends of the runs that participate in
  coalescing are self-consistent.
* Fix numerous arena bugs. (Jason Evans, 2010-10-18; 1 file changed, -76/+170)
  In arena_ralloc_large_grow(), update the map element for the end of the
  newly grown run, rather than the interior map element that was the
  beginning of the appended run. This is a long-standing bug, and it had the
  potential to cause massive corruption, but triggering it required roughly
  the following sequence of events:
    1) Large in-place growing realloc(), with left-over space in the run
       that followed the large object.
    2) Allocation of the remainder run left over from (1).
    3) Deallocation of the remainder run *before* deallocation of the large
       run, with unfortunate interior map state left over from previous run
       allocation/deallocation activity, such that one or more pages of
       allocated memory would be treated as part of the remainder run during
       run coalescing.
  In summary, this was a bad bug, but it was difficult to trigger.
  In arena_bin_malloc_hard(), if another thread wins the race to allocate a
  bin run, dispose of the spare run via arena_bin_lower_run() rather than
  arena_run_dalloc(), since the run has already been prepared for use as a
  bin run. This bug has existed since March 14, 2010:
    e00572b384c81bd2aba57fac32f7077a34388915
    mmap()/munmap() without arena->lock or bin->lock.
  Fix bugs in arena_dalloc_bin_run(), arena_trim_head(), arena_trim_tail(),
  and arena_ralloc_large_grow() that could cause the CHUNK_MAP_UNZEROED map
  bit to become corrupted. These are all long-standing bugs, but the chances
  of them actually causing problems were much lower before the
  CHUNK_MAP_ZEROED --> CHUNK_MAP_UNZEROED conversion.
  Fix a large run statistics regression in arena_ralloc_large_grow() that
  was introduced on September 17, 2010:
    8e3c3c61b5bb676a705450708e7e79698cdc9e0c
    Add {,r,s,d}allocm().
  Add debug code to validate that supposedly pre-zeroed memory really is.
* Preserve CHUNK_MAP_UNZEROED for small runs. (Jason Evans, 2010-10-16; 1 file changed, -4/+8)
  Preserve CHUNK_MAP_UNZEROED when allocating small runs, because it is
  possible that untouched pages will be returned to the tree of clean runs,
  where the CHUNK_MAP_UNZEROED flag matters. Prior to the conversion from
  CHUNK_MAP_ZEROED, this was already a bug, but in the worst case extra
  zeroing occurred. After the conversion, this bug made it possible to
  incorrectly treat pages as pre-zeroed.
* Fix a regression in CHUNK_MAP_UNZEROED change. (Jason Evans, 2010-10-14; 1 file changed, -2/+3)
  Fix a regression added by revision:
    3377ffa1f4f8e67bce1e36624285e5baf5f9ecef
    Change CHUNK_MAP_ZEROED to CHUNK_MAP_UNZEROED.
  A modified chunk->map dereference was missing the subtraction of map_bias,
  which caused incorrect chunk map initialization, as well as potential
  corruption of the first non-header page of memory within each chunk.
* Move variable declaration out of for loop header. (Jason Evans, 2010-10-07; 1 file changed, -1/+2)
  Move a loop variable declaration out of for (unsigned i = 0; ...) in order
  to avoid the need for C99 compilation.
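  An illustrative sketch of the C89-compatible form (hypothetical function,
  not the actual jemalloc code):

      /* C99 form: for (unsigned i = 0; i < n; i++) { ... }
       * C89-compatible form: declare the loop variable at block scope. */
      static unsigned
      sum_first_n(unsigned n)
      {
          unsigned i;        /* declared here, not in the for header */
          unsigned sum = 0;

          for (i = 0; i < n; i++)
              sum += i;
          return (sum);
      }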
* Increase PRN 'a' and 'c' constants. (Jason Evans, 2010-10-03; 1 file changed, -1/+1)
  Increase PRN 'a' and 'c' constants, so that high bits tend to cascade
  more.
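  For background, the PRN here is a linear congruential generator of the
  form state = a*state + c (mod 2^32); a sketch with illustrative constants
  (not the ones from the commit):

      #include <stdint.h>

      static uint32_t prn_state = 42;  /* seed */

      /* Low bits of an LCG are weak, so callers take the high bits;
       * larger 'a' and 'c' make input changes cascade into those high
       * bits faster.  Assumes 0 < lg_range <= 32. */
      static uint32_t
      prn32(unsigned lg_range)
      {
          prn_state = 1103515241U * prn_state + 12347U;
          return (prn_state >> (32 - lg_range));
      }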
* Fix leak context count reporting. (Jason Evans, 2010-10-03; 1 file changed, -3/+3)
  Fix a bug in leak context count reporting that tended to cause the number
  of contexts to be underreported. The reported numbers of leaked objects
  and bytes were not affected by this bug.
* Increase default backtrace depth from 4 to 128. (Jason Evans, 2010-10-03; 1 file changed, -5/+51)
  Increase the default backtrace depth, because shallow backtraces tend to
  result in confusing pprof output graphs.
* Make cumulative heap profile data optional. (Jason Evans, 2010-10-03; 5 files changed, -103/+230)
  Add the R option to control whether cumulative heap profile data are
  maintained. Add the T option to control the size of per thread backtrace
  caches, primarily because when the R option is specified, backtraces that
  no longer have allocations associated with them are discarded as soon as
  no thread caches refer to them.
* Remove malloc_swap_enable(). (Jason Evans, 2010-10-02; 1 file changed, -17/+0)
  Remove malloc_swap_enable(), which was obsoleted by the "swap.fds"
  mallctl. The prototype for malloc_swap_enable() was removed from
  jemalloc/jemalloc.h, but the function itself was accidentally left in
  place.
* Use offsetof() when sizing dynamic structures. (Jason Evans, 2010-10-02; 3 files changed, -6/+7)
  Base dynamic structure size on offsetof(), rather than subtracting the
  size of the dynamic structure member. Results could differ on systems with
  strict data structure alignment requirements.
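  A sketch of the pattern (struct and function names are hypothetical):

      #include <stddef.h>
      #include <stdlib.h>

      typedef struct {
          size_t nelems;
          int    elems[1];  /* dynamically sized trailing member */
      } dyn_t;

      static dyn_t *
      dyn_new(size_t nelems)
      {
          /* offsetof() yields the true offset of the trailing member,
           * padding included, whereas sizeof(dyn_t) - sizeof(int [1])
           * can miscount on platforms with strict alignment. */
          dyn_t *d = malloc(offsetof(dyn_t, elems) +
              nelems * sizeof(int));

          if (d != NULL)
              d->nelems = nelems;
          return (d);
      }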
* Change CHUNK_MAP_ZEROED to CHUNK_MAP_UNZEROED. (Jason Evans, 2010-10-02; 1 file changed, -20/+26)
  Invert the chunk map bit that tracks whether a page is zeroed, so that for
  zeroed arena chunks, the interior of the page map does not need to be
  initialized (as it consists entirely of zero bytes).
* Omit chunk header in arena chunk map. (Jason Evans, 2010-10-02; 3 files changed, -150/+173)
  Omit the first map_bias elements of the map in arena_chunk_t. This avoids
  barely spilling over into an extra chunk header page for common chunk
  sizes.
* Add the "arenas.purge" mallctl.Jason Evans2010-09-302-8/+57
|
* Fix compiler warnings and errors. (Jason Evans, 2010-09-21; 1 file changed, -49/+67)
  Use INT_MAX instead of MAX_INT in ALLOCM_ALIGN(), and #include <limits.h>
  in order to get its definition.
  Modify prof code related to hash tables to avoid aliasing warnings from
  gcc 4.1.2 (gcc 4.4.0 and 4.4.3 do not warn).
* Fix compiler warnings. (Jason Evans, 2010-09-21; 3 files changed, -15/+70)
  Add --enable-cc-silence, which can be used to silence harmless warnings.
  Fix an aliasing bug in ckh_pointer_hash().
* Add memalign() and valloc() overrides. (Jason Evans, 2010-09-20; 1 file changed, -0/+43)
  If memalign() and/or valloc() are present on the system, override them in
  order to avoid mixed allocator usage.
* Wrap strerror_r(). (Jason Evans, 2010-09-20; 3 files changed, -8/+29)
  Create the buferror() function, which wraps strerror_r(). This is
  necessary because glibc provides a non-standard strerror_r().
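  A hedged sketch of such a wrapper (the actual buferror() signature and
  feature-test logic may differ; glibc's GNU strerror_r() returns a char *
  that may not point into the caller's buffer, while the XSI version
  returns an int and always fills it):

      #include <string.h>

      /* Assumes buflen > 0. */
      static int
      buferror(int err, char *buf, size_t buflen)
      {
      #ifdef _GNU_SOURCE
          char *s = strerror_r(err, buf, buflen);

          if (s != buf) {
              strncpy(buf, s, buflen - 1);
              buf[buflen - 1] = '\0';
          }
          return (0);
      #else
          return (strerror_r(err, buf, buflen));
      #endif
      }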
* Remove bad assertions in malloc_{pre,post}fork(). (Jason Evans, 2010-09-20; 1 file changed, -7/+1)
  Remove assertions that malloc_{pre,post}fork() are only called if
  threading is enabled. This was true of these functions in the context of
  FreeBSD's libc, but now the functions are called unconditionally as a
  result of registering them with pthread_atfork().
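  For reference, a minimal sketch of that registration pattern (handler
  bodies are illustrative, not the actual jemalloc ones):

      #include <pthread.h>

      static pthread_mutex_t alloc_lock = PTHREAD_MUTEX_INITIALIZER;

      /* Hold allocator locks across fork() so the child does not inherit
       * a lock owned by some other thread. */
      static void malloc_prefork(void)  { pthread_mutex_lock(&alloc_lock); }
      static void malloc_postfork(void) { pthread_mutex_unlock(&alloc_lock); }

      static void
      register_fork_handlers(void)
      {
          /* prefork runs in the parent before fork(); postfork runs in
           * both parent and child afterward. */
          pthread_atfork(malloc_prefork, malloc_postfork, malloc_postfork);
      }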
* Add {,r,s,d}allocm(). (Jason Evans, 2010-09-17; 6 files changed, -90/+353)
  Add allocm(), rallocm(), sallocm(), and dallocm(), which are a functional
  superset of malloc(), calloc(), posix_memalign(), malloc_usable_size(),
  and free().
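  A hedged usage sketch of the experimental API (flag and return-value
  names follow jemalloc's experimental interface of this era; consult
  jemalloc.h for the exact spellings):

      #include <jemalloc/jemalloc.h>

      static int
      example(void)
      {
          void *p;
          size_t rsize;

          /* >= 4096 bytes, 64-byte aligned, zeroed; rsize receives the
           * usable size, subsuming malloc_usable_size(). */
          if (allocm(&p, &rsize, 4096, ALLOCM_ALIGN(64) | ALLOCM_ZERO) !=
              ALLOCM_SUCCESS)
              return (-1);

          /* Try to grow to 8192 bytes (plus up to 4096 extra) without
           * moving the object. */
          rallocm(&p, &rsize, 8192, 4096, ALLOCM_NO_MOVE);

          dallocm(p, 0);
          return (0);
      }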
* Move size class table to man page. (Jason Evans, 2010-09-12; 1 file changed, -82/+0)
  Move the table of size classes from jemalloc.c to the manual page. When
  manually formatting the manual page, it is now necessary to use:
    nroff -man -t jemalloc.3
* Port to Mac OS X. (Jason Evans, 2010-09-12; 12 files changed, -129/+653)
  Add Mac OS X support, based in large part on the OS X support in Mozilla's
  version of jemalloc.
* Add the thread.arena mallctl. (Jason Evans, 2010-08-14; 1 file changed, -0/+52)
  Make it possible for each thread to manage which arena it is associated
  with.
  Implement the 'tests' and 'check' build targets.
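  A sketch of the read/write form of mallctl() applied to the new control
  (error handling elided; pin_thread_to_arena() is a hypothetical helper):

      #include <jemalloc/jemalloc.h>

      static void
      pin_thread_to_arena(unsigned arena)
      {
          unsigned old;
          size_t len = sizeof(unsigned);

          /* oldp/oldlenp read the current arena index; newp/newlen write
           * the new one, all in a single call. */
          mallctl("thread.arena", &old, &len, &arena, sizeof(unsigned));
      }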
* Move assert() calls up in arena_run_reg_alloc(). (Jason Evans, 2010-08-05; 1 file changed, -1/+1)
  Move assert() calls up in arena_run_reg_alloc(), so that a corrupt pointer
  will likely be caught by an assertion *before* it is dereferenced.
* Add a missing mutex unlock in malloc_init_hard(). (Jason Evans, 2010-07-22; 1 file changed, -0/+1)
  If multiple threads race to initialize malloc, the loser(s) busy-wait
  until initialization is complete. Add a missing mutex unlock so that the
  loser(s) properly release the initialization mutex. Under some race
  conditions, this flaw could have caused one or more threads to become
  permanently blocked.
  Reported by Terrell Magee.
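  A simplified sketch of the race being described (not the actual jemalloc
  code; names are illustrative):

      #include <pthread.h>
      #include <sched.h>
      #include <stdbool.h>

      static pthread_mutex_t init_lock = PTHREAD_MUTEX_INITIALIZER;
      static bool initialized = false;
      static bool init_in_progress = false;

      static void
      malloc_init_hard(void)
      {
          pthread_mutex_lock(&init_lock);
          if (init_in_progress) {
              /* Lost the race: busy-wait for the winner to finish. */
              while (!initialized) {
                  pthread_mutex_unlock(&init_lock);
                  sched_yield();
                  pthread_mutex_lock(&init_lock);
              }
              /* Without this unlock, the loser would return holding
               * init_lock and block every later caller. */
              pthread_mutex_unlock(&init_lock);
              return;
          }
          init_in_progress = true;
          /* ... perform one-time initialization ... */
          initialized = true;
          pthread_mutex_unlock(&init_lock);
      }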
* Fix the libunwind version of prof_backtrace(). (Jason Evans, 2010-06-04; 1 file changed, -5/+4)
  Fix the libunwind version of prof_backtrace() to set the backtrace depth
  for all possible code paths. This fixes the zero-length backtrace problem
  when using libunwind.
* Avoid unnecessary isalloc() calls. (Jason Evans, 2010-05-12; 1 file changed, -12/+18)
  When heap profiling is enabled but deactivated, there is no need to call
  isalloc(ptr) in prof_{malloc,realloc}(). Avoid these calls, so that
  profiling overhead under such conditions is negligible.
* Fix next_arena initialization. (Jason Evans, 2010-05-11; 1 file changed, -1/+1)
  If there is more than one arena, initialize next_arena so that the first
  and second threads to allocate memory use arenas 0 and 1, rather than both
  using arena 0.
* Add MAP_NORESERVE support. (Jordan DeLong, 2010-05-11; 2 files changed, -14/+31)
  Add MAP_NORESERVE to the chunk_mmap() case being used by
  chunk_swap_enable(), if the system supports it.
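  A minimal sketch of the flag in use, guarded because MAP_NORESERVE is not
  universally available (the function name is hypothetical):

      #include <stddef.h>
      #include <sys/mman.h>

      /* With MAP_NORESERVE the kernel skips swap-space accounting for
       * the mapping, so large reservations do not fail up front. */
      static void *
      chunk_mmap_noreserve(size_t size)
      {
          int flags = MAP_PRIVATE | MAP_ANON;
          void *ret;

      #ifdef MAP_NORESERVE
          flags |= MAP_NORESERVE;
      #endif
          ret = mmap(NULL, size, PROT_READ | PROT_WRITE, flags, -1, 0);
          return (ret == MAP_FAILED ? NULL : ret);
      }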
* Fix tcache crash during thread cleanup. (Jason Evans, 2010-04-14; 1 file changed, -14/+12)
  Properly maintain tcache_bin_t's avail pointer such that it is NULL if no
  objects are cached. This only caused problems during thread cache
  destruction, since cache flushing otherwise never occurs on an empty bin.
* Fix profiling regression caused by bugfix. (Jason Evans, 2010-04-14; 1 file changed, -8/+9)
  Properly set the context associated with each allocated object, even when
  the object is not sampled.
  Remove debug print code that slipped in.
* Fix arena chunk purge/dealloc race conditions. (Jason Evans, 2010-04-14; 1 file changed, -24/+30)
  Fix arena_chunk_dealloc() to put the new spare in a consistent state
  before dropping the arena mutex to deallocate the previous spare.
  Fix arena_run_dalloc() to insert a newly dirtied chunk into the
  chunks_dirty list before potentially deallocating the chunk, so that dirty
  page accounting is self-consistent.
* Fix threads-related profiling bugs. (Jason Evans, 2010-04-14; 4 files changed, -72/+105)
  Initialize bt2cnt_tsd so that cleanup at thread exit actually happens.
  Associate (prof_ctx_t *) with allocated objects, rather than
  (prof_thr_cnt_t *). Each thread must always operate on its own
  (prof_thr_cnt_t *), and an object may outlive the thread that allocated
  it.
* Revert re-addition of purge_lock. (Jason Evans, 2010-04-09; 1 file changed, -37/+43)
  Linux kernels have been capable of concurrent page table access since
  2.6.27, so this hack is not necessary for modern kernels.
* Fix P/p reporting in stats_print(). (Jason Evans, 2010-04-09; 1 file changed, -1/+3)
  Now that JEMALLOC_OPTIONS=P isn't the only way to cause stats_print() to
  be called, opt_stats_print must actually be checked when reporting the
  state of the P/p option.
* Fix error path in prof_dump(). (Jason Evans, 2010-04-06; 1 file changed, -1/+0)
  Remove a duplicate prof_leave() call in an error path through prof_dump().
* Report E/e option state in jemalloc_stats_print(). (Jason Evans, 2010-04-06; 1 file changed, -1/+4)
* Don't disable leak reporting due to sampling. (Jason Evans, 2010-04-02; 1 file changed, -8/+0)
  Leak reporting is useful even if sampling is enabled; some leaks may not
  be reported, but those reported are still genuine leaks.
* Add sampling activation/deactivation control. (Jason Evans, 2010-04-01; 3 files changed, -1/+40)
  Add the E/e options to control whether the application starts with
  sampling active/inactive (secondary control to F/f).
  Add the prof.active mallctl so that the application can
  activate/deactivate sampling on the fly.
* Make interval-triggered profile dumping optional. (Jason Evans, 2010-04-01; 4 files changed, -10/+18)
  Make it possible to disable interval-triggered profile dumping, even if
  profiling is enabled. This is useful if the user only wants a single dump
  at exit, or if the application manually triggers profile dumps.