path: root/jemalloc/src
Format: * <commit message> (<author>, <date>; <files changed>, -<lines removed>/+<lines added>)
...
* Move variable declaration out of for loop header. (Jason Evans, 2010-10-07; 1 file, -1/+2)
  Move a loop variable declaration out of "for (unsigned i = 0; ...)" in order to avoid the need for C99 compilation.
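  A minimal illustration of the change (hypothetical loop body; the actual jemalloc code differs):

    #include <stddef.h>

    static void process(unsigned i) { (void)i; }

    /* Before: requires C99, because the declaration sits in the loop header. */
    static void iterate_c99(size_t n) {
        for (unsigned i = 0; i < n; i++)
            process(i);
    }

    /* After: also compiles as C89/C90. */
    static void iterate_c89(size_t n) {
        unsigned i;
        for (i = 0; i < n; i++)
            process(i);
    }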
* Increase PRN 'a' and 'c' constants. (Jason Evans, 2010-10-03; 1 file, -1/+1)
  Increase the PRN 'a' and 'c' constants, so that high bits tend to cascade more.
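  For context, a PRN of this kind is a linear congruential generator of the form x <- (a*x + c) mod 2^32, where poorly chosen constants leave the high bits weakly mixed. A hedged sketch; the constants below are illustrative placeholders, not the values this commit adopts:

    #include <stdint.h>

    /* Linear congruential PRNG: x <- (a * x + c) mod 2^32.
     * High-bit quality depends heavily on 'a' and 'c'; these
     * constants are placeholders for illustration only. */
    #define PRN_A 1103515241U
    #define PRN_C 12347U

    static uint32_t prn_state = 42;

    static uint32_t prn_next(void) {
        prn_state = prn_state * PRN_A + PRN_C; /* mod 2^32 via wraparound */
        return prn_state;
    }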
* Fix leak context count reporting. (Jason Evans, 2010-10-03; 1 file, -3/+3)
  Fix a bug in leak context count reporting that tended to cause the number of contexts to be under-reported. The reported numbers of leaked objects and bytes were not affected by this bug.
* Increase default backtrace depth from 4 to 128. (Jason Evans, 2010-10-03; 1 file, -5/+51)
  Increase the default backtrace depth, because shallow backtraces tend to result in confusing pprof output graphs.
* Make cumulative heap profile data optional. (Jason Evans, 2010-10-03; 5 files, -103/+230)
  Add the R option to control whether cumulative heap profile data are maintained. Add the T option to control the size of per-thread backtrace caches, primarily because when the R option is specified, backtraces that no longer have allocations associated with them are discarded as soon as no thread caches refer to them.
* Remove malloc_swap_enable(). (Jason Evans, 2010-10-02; 1 file, -17/+0)
  Remove malloc_swap_enable(), which was obsoleted by the "swap.fds" mallctl. The prototype for malloc_swap_enable() was removed from jemalloc/jemalloc.h, but the function itself was accidentally left in place.
* Use offsetof() when sizing dynamic structures. (Jason Evans, 2010-10-02; 3 files, -6/+7)
  Base dynamic structure sizes on offsetof(), rather than subtracting the size of the dynamic structure member. The results can differ on systems with strict data structure alignment requirements.
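  A sketch of the difference, using a hypothetical structure (not one of jemalloc's actual types):

    #include <stddef.h>
    #include <stdlib.h>

    typedef struct {
        size_t len;
        double items[1]; /* dynamic member, C89-style */
    } dyn_t;

    static dyn_t *dyn_alloc(size_t n) {
        /* Fragile: sizeof(dyn_t) may include trailing padding, so
         * sizeof(dyn_t) - sizeof(double) can differ from the true
         * offset of the dynamic member on strict-alignment systems:
         *
         *   malloc(sizeof(dyn_t) - sizeof(double) + n * sizeof(double));
         *
         * Robust: offsetof() names the member's real starting offset. */
        dyn_t *d = malloc(offsetof(dyn_t, items) + n * sizeof(double));
        if (d != NULL)
            d->len = n;
        return d;
    }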
* Change CHUNK_MAP_ZEROED to CHUNK_MAP_UNZEROED. (Jason Evans, 2010-10-02; 1 file, -20/+26)
  Invert the chunk map bit that tracks whether a page is zeroed, so that for zeroed arena chunks, the interior of the page map does not need to be initialized (as it consists entirely of zero bytes).
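  The trick, in a hedged sketch with simplified flags (the real chunk map packs more state, and lives inside the chunk header itself, so a zero-filled chunk implies a zero-filled map): pick the flag's sense so that all-zero map bytes already mean "page is zeroed", making initialization of zeroed chunks free.

    #include <stdint.h>
    #include <string.h>

    /* Inverted sense: bit SET means the page is NOT known to be zeroed. */
    #define CHUNK_MAP_UNZEROED ((uint8_t)0x01)
    #define PAGES_PER_CHUNK 512

    typedef struct {
        uint8_t map[PAGES_PER_CHUNK];
    } chunk_map_sketch_t;

    static void map_init(chunk_map_sketch_t *m, int mem_is_zeroed) {
        if (mem_is_zeroed) {
            /* Nothing to do: an all-zero map already encodes
             * "every page zeroed" under the inverted flag. */
        } else {
            memset(m->map, CHUNK_MAP_UNZEROED, sizeof(m->map));
        }
    }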
* Omit chunk header in arena chunk map. (Jason Evans, 2010-10-02; 3 files, -150/+173)
  Omit the first map_bias elements of the map in arena_chunk_t. This avoids barely spilling over into an extra chunk header page for common chunk sizes.
* Add the "arenas.purge" mallctl.Jason Evans2010-09-302-8/+57
|
* Fix compiler warnings and errors. (Jason Evans, 2010-09-21; 1 file, -49/+67)
  Use INT_MAX instead of MAX_INT in ALLOCM_ALIGN(), and #include <limits.h> in order to get its definition. Modify prof code related to hash tables to avoid aliasing warnings from gcc 4.1.2 (gcc 4.4.0 and 4.4.3 do not warn).
* Fix compiler warnings. (Jason Evans, 2010-09-21; 3 files, -15/+70)
  Add --enable-cc-silence, which can be used to silence harmless warnings. Fix an aliasing bug in ckh_pointer_hash().
* Add memalign() and valloc() overrides. (Jason Evans, 2010-09-20; 1 file, -0/+43)
  If memalign() and/or valloc() are present on the system, override them in order to avoid mixed allocator usage.
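  A minimal sketch of such an override, written here in terms of the standard posix_memalign() (an assumption for brevity; jemalloc's real overrides call its internal allocation path directly):

    #include <stdlib.h>
    #include <unistd.h>

    /* Route legacy memalign()/valloc() calls into this allocator, so a
     * program cannot obtain memory from one allocator and free() it
     * with another. Sketch only. */
    void *memalign(size_t alignment, size_t size) {
        void *ret = NULL;
        if (posix_memalign(&ret, alignment, size) != 0)
            return NULL;
        return ret;
    }

    void *valloc(size_t size) {
        /* valloc() is memalign() with page-size alignment. */
        return memalign((size_t)sysconf(_SC_PAGESIZE), size);
    }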
* Wrap strerror_r(). (Jason Evans, 2010-09-20; 3 files, -8/+29)
  Create the buferror() function, which wraps strerror_r(). This is necessary because glibc provides a non-standard strerror_r().
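  The portability problem: the XSI strerror_r() returns int and fills the caller's buffer, while glibc's GNU variant returns a char * that may point at a static string rather than the buffer. A hedged sketch of a wrapper in the spirit of buferror() (the real function's name and signature may differ):

    #include <string.h>
    #include <stdio.h>

    static int buferror_sketch(int err, char *buf, size_t buflen) {
    #if defined(__GLIBC__) && defined(_GNU_SOURCE)
        /* GNU variant: returns char *, possibly not pointing into buf. */
        char *s = strerror_r(err, buf, buflen);
        if (s != buf)
            snprintf(buf, buflen, "%s", s);
        return 0;
    #else
        /* XSI variant: returns 0 on success, an error number on failure. */
        return strerror_r(err, buf, buflen);
    #endif
    }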
* Remove bad assertions in malloc_{pre,post}fork(). (Jason Evans, 2010-09-20; 1 file, -7/+1)
  Remove assertions that malloc_{pre,post}fork() are only called if threading is enabled. This was true of these functions in the context of FreeBSD's libc, but now the functions are called unconditionally as a result of registering them with pthread_atfork().
* Add {,r,s,d}allocm(). (Jason Evans, 2010-09-17; 6 files, -90/+353)
  Add allocm(), rallocm(), sallocm(), and dallocm(), which are a functional superset of malloc(), calloc(), posix_memalign(), malloc_usable_size(), and free().
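  A hedged usage sketch of the experimental interface; the signatures and flag names below are recalled from the API of that era, so consult the jemalloc/jemalloc.h shipped with this commit for the authoritative prototypes:

    #include <jemalloc/jemalloc.h>

    static int demo(void) {
        void *p;
        size_t rsize; /* usable size, reported back by the allocator */

        /* 4 KiB, zeroed, 64-byte aligned: one call covering what would
         * otherwise need posix_memalign() + memset() + malloc_usable_size(). */
        if (allocm(&p, &rsize, 4096, ALLOCM_ALIGN(64) | ALLOCM_ZERO)
            != ALLOCM_SUCCESS)
            return -1;

        /* Grow to 8 KiB; rallocm() may move p unless ALLOCM_NO_MOVE. */
        if (rallocm(&p, &rsize, 8192, 0, 0) != ALLOCM_SUCCESS) {
            dallocm(p, 0);
            return -1;
        }

        dallocm(p, 0);
        return 0;
    }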
* Move size class table to man page. (Jason Evans, 2010-09-12; 1 file, -82/+0)
  Move the table of size classes from jemalloc.c to the manual page. When manually formatting the manual page, it is now necessary to use: nroff -man -t jemalloc.3
* Port to Mac OS X. (Jason Evans, 2010-09-12; 12 files, -129/+653)
  Add Mac OS X support, based in large part on the OS X support in Mozilla's version of jemalloc.
* Add the thread.arena mallctl. (Jason Evans, 2010-08-14; 1 file, -0/+52)
  Make it possible for each thread to manage which arena it is associated with. Implement the 'tests' and 'check' build targets.
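  A hedged sketch of driving this control through mallctl(); the function is shown unprefixed, though builds of that era may have exposed it under a jemalloc_ prefix:

    #include <stdio.h>
    #include <jemalloc/jemalloc.h>

    static void pin_thread_to_arena(unsigned arena) {
        unsigned old;
        size_t sz = sizeof(old);

        /* One mallctl both reads (oldp) the calling thread's current
         * arena and writes (newp) its new binding. */
        if (mallctl("thread.arena", &old, &sz, &arena, sizeof(arena)) == 0)
            printf("moved thread from arena %u to %u\n", old, arena);
    }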
* Move assert() calls up in arena_run_reg_alloc(). (Jason Evans, 2010-08-05; 1 file, -1/+1)
  Move assert() calls up in arena_run_reg_alloc(), so that a corrupt pointer will likely be caught by an assertion *before* it is dereferenced.
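  The general pattern, as a hypothetical sketch (not the actual arena code):

    #include <assert.h>
    #include <stddef.h>

    typedef struct { unsigned magic; unsigned nfree; } run_sk_t;
    #define RUN_SK_MAGIC 0x384adf93U /* illustrative magic value */

    static void reg_alloc_sketch(run_sk_t *run) {
        /* Validate first: a corrupt run is likely caught by a clear
         * assertion failure instead of acting on garbage values below. */
        assert(run != NULL);
        assert(run->magic == RUN_SK_MAGIC);

        run->nfree--;
    }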
* Add a missing mutex unlock in malloc_init_hard(). (Jason Evans, 2010-07-22; 1 file, -0/+1)
  If multiple threads race to initialize malloc, the loser(s) busy-wait until initialization is complete. Add the missing mutex unlock so that the loser(s) properly release the initialization mutex. Under some race conditions, this flaw could have caused one or more threads to become permanently blocked. Reported by Terrell Magee.
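  A hedged sketch of the pattern (simplified; the busy-wait loop and recursive-initialization handling of the real code are elided):

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t init_lock = PTHREAD_MUTEX_INITIALIZER;
    static volatile bool malloc_initialized = false;

    /* Returns false on success, mirroring the init-function convention. */
    static bool malloc_init_hard_sketch(void) {
        pthread_mutex_lock(&init_lock);
        if (malloc_initialized) {
            /* Another thread won the race. This unlock is the fix:
             * without it the initialization mutex stays held forever,
             * permanently blocking any later contender. */
            pthread_mutex_unlock(&init_lock);
            return false;
        }

        /* ... perform one-time initialization ... */
        malloc_initialized = true;

        pthread_mutex_unlock(&init_lock);
        return false;
    }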
* Fix the libunwind version of prof_backtrace(). (Jason Evans, 2010-06-04; 1 file, -5/+4)
  Fix the libunwind version of prof_backtrace() to set the backtrace depth for all possible code paths. This fixes the zero-length backtrace problem when using libunwind.
* Avoid unnecessary isalloc() calls. (Jason Evans, 2010-05-12; 1 file, -12/+18)
  When heap profiling is enabled but deactivated, there is no need to call isalloc(ptr) in prof_{malloc,realloc}(). Avoid these calls, so that profiling overhead under such conditions is negligible.
* Fix next_arena initialization. (Jason Evans, 2010-05-11; 1 file, -1/+1)
  If there is more than one arena, initialize next_arena so that the first and second threads to allocate memory use arenas 0 and 1, rather than both using arena 0.
* Add MAP_NORESERVE support. (Jordan DeLong, 2010-05-11; 2 files, -14/+31)
  Add MAP_NORESERVE to the chunk_mmap() case used by chunk_swap_enable(), if the system supports it.
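  A hedged sketch of the guarded flag usage (hypothetical helper; jemalloc's chunk_mmap() carries more logic):

    #include <sys/mman.h>
    #include <stddef.h>

    static void *chunk_mmap_sketch(size_t size, int noreserve) {
        int flags = MAP_PRIVATE | MAP_ANON;
    #ifdef MAP_NORESERVE
        /* Skip swap-space reservation for mappings that will be
         * explicitly backed by swap files; the #ifdef keeps the
         * code building on systems without the flag. */
        if (noreserve)
            flags |= MAP_NORESERVE;
    #endif
        void *p = mmap(NULL, size, PROT_READ | PROT_WRITE, flags, -1, 0);
        return (p == MAP_FAILED) ? NULL : p;
    }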
* Fix tcache crash during thread cleanup. (Jason Evans, 2010-04-14; 1 file, -14/+12)
  Properly maintain tcache_bin_t's avail pointer such that it is NULL if no objects are cached. This only caused problems during thread cache destruction, since cache flushing otherwise never occurs on an empty bin.
* Fix profiling regression caused by bugfix. (Jason Evans, 2010-04-14; 1 file, -8/+9)
  Properly set the context associated with each allocated object, even when the object is not sampled. Remove debug print code that slipped in.
* Fix arena chunk purge/dealloc race conditions. (Jason Evans, 2010-04-14; 1 file, -24/+30)
  Fix arena_chunk_dealloc() to put the new spare in a consistent state before dropping the arena mutex to deallocate the previous spare. Fix arena_run_dalloc() to insert a newly dirtied chunk into the chunks_dirty list before potentially deallocating the chunk, so that dirty page accounting is self-consistent.
* Fix threads-related profiling bugs. (Jason Evans, 2010-04-14; 4 files, -72/+105)
  Initialize bt2cnt_tsd so that cleanup at thread exit actually happens. Associate (prof_ctx_t *) with allocated objects, rather than (prof_thr_cnt_t *). Each thread must always operate on its own (prof_thr_cnt_t *), and an object may outlive the thread that allocated it.
* Revert re-addition of purge_lock. (Jason Evans, 2010-04-09; 1 file, -37/+43)
  Linux kernels have been capable of concurrent page table access since 2.6.27, so this hack is not necessary for modern kernels.
* Fix P/p reporting in stats_print(). (Jason Evans, 2010-04-09; 1 file, -1/+3)
  Now that JEMALLOC_OPTIONS=P isn't the only way to cause stats_print() to be called, opt_stats_print must actually be checked when reporting the state of the P/p option.
* Fix error path in prof_dump(). (Jason Evans, 2010-04-06; 1 file, -1/+0)
  Remove a duplicate prof_leave() call in an error path through prof_dump().
* Report E/e option state in jemalloc_stats_print(). (Jason Evans, 2010-04-06; 1 file, -1/+4)
* Don't disable leak reporting due to sampling. (Jason Evans, 2010-04-02; 1 file, -8/+0)
  Leak reporting is useful even if sampling is enabled; some leaks may not be reported, but those reported are still genuine leaks.
* Add sampling activation/deactivation control. (Jason Evans, 2010-04-01; 3 files, -1/+40)
  Add the E/e options to control whether the application starts with sampling active/inactive (a secondary control to F/f). Add the prof.active mallctl so that the application can activate/deactivate sampling on the fly.
* Make interval-triggered profile dumping optional. (Jason Evans, 2010-04-01; 4 files, -10/+18)
  Make it possible to disable interval-triggered profile dumping, even if profiling is enabled. This is useful if the user only wants a single dump at exit, or if the application manually triggers profile dumps.
* Reduce statistical heap sampling memory overhead. (Jason Evans, 2010-03-31; 3 files, -52/+183)
  If the mean heap sampling interval is larger than one page, simulate sampled small objects with large objects. This allows profiling context pointers to be omitted for small objects. As a result, the memory overhead for sampling decreases as the sampling interval is increased. Fix a compilation error in the profiling code.
* Re-add purge_lock to funnel madvise(2) calls. (Jason Evans, 2010-03-27; 1 file, -43/+37)
* Set/clear CHUNK_MAP_ZEROED in arena_chunk_purge(). (Jason Evans, 2010-03-22; 1 file, -11/+32)
  Properly set/clear CHUNK_MAP_ZEROED for all purged pages, according to whether the pages are (potentially) file-backed or anonymous. This was merely a performance pessimization for the anonymous mapping case, but was a calloc()-related bug for the swap_enabled case.
* Track dirty and clean runs separately. (Jason Evans, 2010-03-19; 2 files, -195/+245)
  Split arena->runs_avail into arena->runs_avail_{clean,dirty}, and preferentially allocate dirty runs.
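  The motivation, in a hedged sketch with toy free lists (jemalloc actually uses red-black trees here): serving dirty runs first reuses pages that already cost physical memory, leaving clean runs untouched and dirty pages easier to purge.

    #include <stddef.h>

    typedef struct run_s { struct run_s *next; } run_t;

    typedef struct {
        run_t *runs_avail_dirty; /* runs whose pages were written before */
        run_t *runs_avail_clean; /* runs backed by untouched/purged pages */
    } arena_sketch_t;

    static run_t *run_alloc_sketch(arena_sketch_t *arena) {
        /* Preferentially allocate from the dirty list. */
        run_t **head = (arena->runs_avail_dirty != NULL)
            ? &arena->runs_avail_dirty
            : &arena->runs_avail_clean;
        run_t *run = *head;
        if (run != NULL)
            *head = run->next;
        return run;
    }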
* Remove medium size classes. (Jason Evans, 2010-03-17; 5 files, -325/+263)
  Remove medium size classes, because concurrent dirty page purging is no longer capable of purging inactive dirty pages inside active runs (due to recent arena/bin locking changes). Enhance tcache to support caching large objects, so that the same range of size classes is still cached, despite the removal of medium size class support.
* Fix a run initialization race condition. (Jason Evans, 2010-03-16; 2 files, -15/+24)
  Initialize the small run header before dropping arena->lock, because arena_chunk_purge() relies on valid small run headers during run iteration. Add some assertions.
* Add assertions. (Jason Evans, 2010-03-15; 1 file, -0/+4)
  Check for interior pointers in arena_[ds]alloc(). Check for corrupt pointers in tcache_alloc().
* Widen malloc_stats_print() output columns. (Jason Evans, 2010-03-15; 1 file, -14/+15)
* Fix arena->nactive accounting in arena_chunk_purge(). (Jason Evans, 2010-03-15; 1 file, -0/+1)
  Update arena->nactive when pseudo-allocating runs in arena_chunk_purge(), since arena_run_dalloc() subtracts from arena->nactive.
* Change xmallctl() --> CTL_GET() where possible. (Jason Evans, 2010-03-15; 1 file, -3/+3)
* mmap()/munmap() without arena->lock or bin->lock. (Jason Evans, 2010-03-15; 1 file, -41/+118)
* Purge dirty pages without arena->lock. (Jason Evans, 2010-03-15; 1 file, -68/+230)
* Push locks into arena bins. (Jason Evans, 2010-03-15; 4 files, -170/+283)
  For bin-related allocation, protect data structures with bin locks rather than arena locks. Arena locks remain for run allocation/deallocation and other miscellaneous operations. Restructure statistics counters to maintain per-bin allocated/nmalloc/ndalloc, but continue to provide arena-wide statistics via aggregation in the ctl code.
* Simplify small object allocation/deallocation. (Jason Evans, 2010-03-14; 1 file, -314/+123)
  Use chained run free lists instead of bitmaps to track free objects within small runs. Remove reference counting for small object run pages.
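  The underlying technique, as a hedged sketch with a toy layout (the real run header differs): thread an intrusive free list through the free regions themselves, so allocation and deallocation become O(1) pops and pushes instead of bitmap scans.

    #include <stddef.h>

    /* Each free region stores a pointer to the next free region; no
     * side bitmap, and no scan to locate a free slot. */
    typedef struct region_s { struct region_s *next_free; } region_t;

    typedef struct {
        region_t *free_head;
        unsigned nfree;
    } run_sketch_t;

    static void *run_reg_alloc(run_sketch_t *run) {
        region_t *reg = run->free_head;
        if (reg == NULL)
            return NULL;
        run->free_head = reg->next_free; /* O(1) pop */
        run->nfree--;
        return reg;
    }

    static void run_reg_dalloc(run_sketch_t *run, void *ptr) {
        region_t *reg = ptr;
        reg->next_free = run->free_head; /* O(1) push */
        run->free_head = reg;
        run->nfree++;
    }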