path: root/doc
...
* Clarify how to use malloc_conf. (Jason Evans, 2013-03-19; 1 file, -1/+8)

  Clarify that malloc_conf is intended only for compile-time configuration, since jemalloc may be initialized before main() is entered.
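  For illustration, a minimal sketch of the usage being clarified, assuming jemalloc is the process allocator; the option names are illustrative (see jemalloc(3)), and the key point is that the string is a compile-time constant:

      /* jemalloc reads this symbol while initializing, possibly before
       * main() runs, so it cannot be assigned at run time. */
      #include <stdlib.h>

      const char *malloc_conf = "narenas:4,lg_chunk:24"; /* illustrative options */

      int main(void) {
          void *p = malloc(64); /* honors the options above */
          free(p);
          return 0;
      }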
* Add clipping support to lg_chunk option processing. (Jason Evans, 2012-12-23; 1 file, -2/+5)

  Modify processing of the lg_chunk option so that it clips an out-of-range input to the edge of the valid range. This makes it possible to request the minimum possible chunk size without intimate knowledge of allocator internals.

  Submitted by Ian Lepore (see FreeBSD PR bin/174641).
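  A sketch of the new behavior, assuming the usual jemalloc header layout: pass a deliberately out-of-range value and read the clipped result back through the "opt.lg_chunk" mallctl:

      #include <stdio.h>
      #include <jemalloc/jemalloc.h>

      /* An impossibly small chunk size; option processing now clips it
       * to the minimum of the valid range instead of ignoring it. */
      const char *malloc_conf = "lg_chunk:0";

      int main(void) {
          size_t lg_chunk, sz = sizeof(lg_chunk);
          if (mallctl("opt.lg_chunk", &lg_chunk, &sz, NULL, 0) == 0)
              printf("effective lg_chunk: %zu\n", lg_chunk);
          return 0;
      }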
* Document what stats.active does not track. (Jan Beich, 2012-11-07; 1 file, -2/+4)

  Based on http://www.canonware.com/pipermail/jemalloc-discuss/2012-March/000164.html
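  For context, a sketch of reading the statistic in question alongside "stats.allocated"; the "epoch" refresh step follows the jemalloc manual:

      #include <stdio.h>
      #include <stdint.h>
      #include <jemalloc/jemalloc.h>

      int main(void) {
          uint64_t epoch = 1;
          size_t allocated, active, sz = sizeof(size_t);

          /* Refresh the statistics snapshot before reading it. */
          mallctl("epoch", NULL, NULL, &epoch, sizeof(epoch));
          mallctl("stats.allocated", &allocated, &sz, NULL, 0);
          mallctl("stats.active", &active, &sz, NULL, 0);
          printf("allocated: %zu, active: %zu\n", allocated, active);
          return 0;
      }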
* Purge unused dirty pages in a fragmentation-reducing order. (Jason Evans, 2012-11-06; 1 file, -1/+1)

  Purge unused dirty pages in an order that first performs clean/dirty run defragmentation, in order to mitigate available run fragmentation.

  Remove the limitation that prevented purging unless at least one chunk worth of dirty pages had accumulated in an arena. This limitation was intended to avoid excessive purging for small applications, but the threshold was arbitrary and its effect of questionable utility.

  Relax opt_lg_dirty_mult from 5 to 3. This compensates for the increased likelihood of allocating clean runs, given the same clean:dirty run ratio, and reduces the potential for repeated purging in pathological large malloc/free loops that push the active:dirty page ratio just over the purge threshold.
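  A hypothetical sketch of the threshold relationship implied above (should_purge() is an illustrative helper, not jemalloc's internal function); with opt_lg_dirty_mult relaxed from 5 to 3, purging starts once dirty pages exceed one eighth of active pages rather than one thirty-second:

      #include <stdbool.h>
      #include <stddef.h>

      /* Hypothetical: purge once the dirty page count crosses the
       * fraction of active pages selected by lg_dirty_mult. */
      static bool should_purge(size_t nactive, size_t ndirty, unsigned lg_dirty_mult) {
          return ndirty > (nactive >> lg_dirty_mult);
      }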
* Add arena-specific and selective dss allocation. (Jason Evans, 2012-10-13; 1 file, -9/+80)

  Add the "arenas.extend" mallctl, so that it is possible to create new arenas that are outside the set that jemalloc automatically multiplexes threads onto.

  Add the ALLOCM_ARENA() flag for {,r,d}allocm(), so that it is possible to explicitly allocate from a particular arena.

  Add the "opt.dss" mallctl, which controls the default precedence of dss allocation relative to mmap allocation.

  Add the "arena.<i>.dss" mallctl, which makes it possible to set the default dss precedence on a per arena or global basis.

  Add the "arena.<i>.purge" mallctl, which obsoletes "arenas.purge".

  Add the "stats.arenas.<i>.dss" mallctl.
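  A sketch combining two of the new interfaces, assuming a build with the experimental allocm() API available:

      #include <jemalloc/jemalloc.h>

      int main(void) {
          unsigned arena_ind;
          size_t sz = sizeof(arena_ind);
          void *p;

          /* Create an arena outside the automatically multiplexed set. */
          if (mallctl("arenas.extend", &arena_ind, &sz, NULL, 0) != 0)
              return 1;

          /* Allocate explicitly from that arena. */
          if (allocm(&p, NULL, 4096, ALLOCM_ARENA(arena_ind)) != ALLOCM_SUCCESS)
              return 1;

          dallocm(p, 0);
          return 0;
      }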
* Disable tcache by default if running inside Valgrind. (Jason Evans, 2012-05-16; 1 file, -1/+2)

  Disable tcache by default if running inside Valgrind, in order to avoid making unallocated objects appear reachable to Valgrind.
* Auto-detect whether running inside Valgrind. (Jason Evans, 2012-05-15; 1 file, -16/+11)

  Auto-detect whether running inside Valgrind, thus removing the need to manually specify MALLOC_CONF=valgrind:true.
* Generalize "stats.mapped" documentation. (Jason Evans, 2012-05-10; 1 file, -2/+2)

  Generalize "stats.mapped" documentation to state that all inactive chunks are omitted, now that it is possible for mmap'ed chunks to be omitted in addition to DSS chunks.
* Add the --enable-mremap option. (Jason Evans, 2012-05-09; 1 file, -0/+10)

  Add the --enable-mremap option, and disable the use of mremap(2) by default, for the same reason that freeing chunks via munmap(2) is disabled by default on Linux: semi-permanent VM map fragmentation.
* Fix Valgrind URL in documentation. (Jason Evans, 2012-04-26; 1 file, -20/+20)

  Reported by Daichi GOTO.
* Fix a memory corruption bug in chunk_alloc_dss(). (Jason Evans, 2012-04-21; 1 file, -2/+2)

  Fix a memory corruption bug in chunk_alloc_dss() that was due to wrongly claiming that newly allocated memory is zeroed.

  Reverse the order of preference between mmap() and sbrk() to prefer mmap().

  Clean up management of the 'zero' parameter in chunk_alloc*().
* Update prof defaults to match common usage. (Jason Evans, 2012-04-17; 1 file, -17/+28)

  Change the "opt.lg_prof_sample" default from 0 to 19 (1 B to 512 KiB).

  Change the "opt.prof_accum" default from true to false.

  Add the "opt.prof_final" mallctl, so that "opt.prof_prefix" need not be abused to disable final profile dumping.
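  Spelled out as a configuration sketch (2^19 B = 512 KiB, i.e. roughly one sample per 512 KiB allocated; assumes a build with profiling support):

      /* Illustrative profiling configuration using the new default
       * sample interval and the new final-dump knob. */
      const char *malloc_conf = "prof:true,lg_prof_sample:19,prof_final:true";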
* Update pprof (from gperftools 2.0). (Jason Evans, 2012-04-17; 1 file, -1/+1)
* Add the --disable-munmap option. (Jason Evans, 2012-04-17; 1 file, -0/+10)

  Add the --disable-munmap option, remove the configure test that attempted to detect the VM allocation quirk known to exist on Linux x86[_64], and make --disable-munmap implicit on Linux.
* Always disable redzone by default. (Jason Evans, 2012-04-13; 1 file, -3/+1)

  Always disable redzone by default, even when --enable-debug is specified. The memory overhead for redzones can be substantial, so this feature should only be enabled by explicit opt-in.
* Implement Valgrind support, redzones, and quarantine. (Jason Evans, 2012-04-11; 1 file, -4/+75)

  Implement Valgrind support, as well as the redzone and quarantine features, which help Valgrind detect memory errors. Redzones are only implemented for small objects because the changes necessary to support redzones around large and huge objects are complicated by in-place reallocation, to the point that it isn't clear that the maintenance burden is worth the incremental improvement to Valgrind support.

  Merge arena_salloc() and arena_salloc_demote().

  Refactor i[v]salloc() to expose the 'demote' option.
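  A sketch of opting into the new features explicitly (quarantine is a per-thread size in bytes; the values here are illustrative):

      /* Illustrative opt-in: redzones around small objects plus a
       * 4 MiB per-thread quarantine of recently freed memory. */
      const char *malloc_conf = "redzone:true,quarantine:4194304";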
* Add utrace(2)-based tracing (--enable-utrace). (Jason Evans, 2012-04-05; 1 file, -0/+25)
* Remove obsolete "config.dynamic_page_shift" mallctl documentation. (Jason Evans, 2012-04-03; 1 file, -10/+0)
* Clean up *PAGE* macros. (Jason Evans, 2012-04-02; 1 file, -10/+1)

  s/PAGE_SHIFT/LG_PAGE/g and s/PAGE_SIZE/PAGE/g.

  Remove remnants of the dynamic-page-shift code.

  Rename the "arenas.pagesize" mallctl to "arenas.page".

  Remove the "arenas.chunksize" mallctl, which is redundant with "opt.lg_chunk".
* Add the "thread.tcache.enabled" mallctl.Jason Evans2012-03-271-0/+14
|
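  A sketch of per-thread cache control using this mallctl together with "thread.tcache.flush" (whose rename appears further down this log):

      #include <stdbool.h>
      #include <jemalloc/jemalloc.h>

      /* Turn off the calling thread's cache and release whatever it holds. */
      void disable_thread_cache(void) {
          bool enable = false, was_enabled;
          size_t sz = sizeof(was_enabled);

          /* Installs the new setting and reports the previous one. */
          mallctl("thread.tcache.enabled", &was_enabled, &sz, &enable, sizeof(enable));
          mallctl("thread.tcache.flush", NULL, NULL, NULL, 0);
      }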
* Fix various documentation formatting regressions. (Jason Evans, 2012-03-19; 1 file, -18/+20)
* Rename the "tcache.flush" mallctl to "thread.tcache.flush". (Jason Evans, 2012-03-17; 1 file, -18/+18)
* Implement aligned_alloc(). (Jason Evans, 2012-03-13; 1 file, -0/+35)

  Implement aligned_alloc(), which was added in the C11 standard. The function is weakly specified to the point that a minimally compliant implementation would be painful to use (size must be an integral multiple of alignment!), which in practice makes posix_memalign() a safer choice.
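  The constraint in question, side by side:

      #include <stdlib.h>

      int main(void) {
          /* Valid C11: 128 is an integral multiple of the 64-byte alignment. */
          void *a = aligned_alloc(64, 128);

          /* Invalid under a strict reading of C11: 100 is not a multiple of
           * 64. posix_memalign() accepts the same shape of request. */
          void *b;
          if (posix_memalign(&b, 64, 100) != 0)
              b = NULL;

          free(a);
          free(b);
          return 0;
      }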
* Remove the lg_tcache_gc_sweep option. (Jason Evans, 2012-03-05; 1 file, -19/+1)

  Remove the lg_tcache_gc_sweep option, because it is no longer very useful. Prior to the addition of dynamic adjustment of tcache fill count, it was possible for fill/flush overhead to be a problem, but this problem no longer occurs.
* Add the --disable-experimental option. (Jason Evans, 2012-03-03; 1 file, -1/+3)
* Add nallocm(). (Jason Evans, 2012-02-29; 1 file, -8/+30)

  Add nallocm(), which computes the real allocation size that would result from the corresponding allocm() call. nallocm() is a functional superset of OS X's malloc_good_size(), in that it takes alignment constraints into account.
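  A sketch of the call (assuming the experimental API is enabled): query the real size a matching allocm() would produce, without allocating anything.

      #include <stdio.h>
      #include <jemalloc/jemalloc.h>

      int main(void) {
          size_t rsize;

          /* Real usable size of a 100-byte, 64-byte-aligned allocation. */
          if (nallocm(&rsize, 100, ALLOCM_ALIGN(64)) == ALLOCM_SUCCESS)
              printf("real size: %zu bytes\n", rsize);
          return 0;
      }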
* Remove the sysv option. (Jason Evans, 2012-02-29; 1 file, -26/+0)
* Simplify small size class infrastructure. (Jason Evans, 2012-02-29; 1 file, -175/+23)

  Program-generate small size class tables for all valid combinations of LG_TINY_MIN, LG_QUANTUM, and PAGE_SHIFT. Use the appropriate table to generate all relevant data structures, and remove the distinction between tiny/quantum/cacheline/subpage bins.

  Remove --enable-dynamic-page-shift. This option didn't prove useful in practice, and it prevented optimizations.

  Add Tilera architecture support.
* Remove the opt.lg_prof_bt_max option. (Jason Evans, 2012-02-14; 1 file, -16/+0)

  Remove opt.lg_prof_bt_max, and hard code it to 7. The original intention of this option was to enable faster backtracing by limiting backtrace depth. However, this makes graphical pprof output very difficult to interpret. In practice, decreasing sampling frequency is a better mechanism for limiting profiling overhead.
* Remove the opt.lg_prof_tcmax option. (Jason Evans, 2012-02-14; 1 file, -24/+2)

  Remove the opt.lg_prof_tcmax option and hard-code a cache size of 1024. This setting is something that users just shouldn't have to worry about. If lock contention actually ends up being a problem, the simple solution available to the user is to reduce sampling frequency.
* Remove highruns statistics. (Jason Evans, 2012-02-13; 1 file, -22/+0)
* Make 8-byte tiny size class non-optional. (Jason Evans, 2012-02-13; 1 file, -17/+6)

  When tiny size class support was first added, it was intended to support truly tiny size classes (even 2 bytes). However, this wasn't very useful in practice, so the minimum tiny size class has been limited to sizeof(void *) for a long time now. This is too small to be standards compliant, but other commonly used malloc implementations do not even bother using a 16-byte quantum on systems with vector units (SSE2+, AltiVEC, etc.). As such, it is safe in practice to support an 8-byte tiny size class on 64-bit systems that support 16-byte types.
* Remove the swap feature. (Jason Evans, 2012-02-13; 1 file, -92/+2)

  Remove the swap feature, which enabled per-application swap files. In practice this feature has not proven useful to users.
* Document swap.fds mallctl as read-write. (Jason Evans, 2011-08-12; 1 file, -1/+1)

  Fix the manual page to document the swap.fds mallctl as read-write, rather than read-only.
* Move repo contents in jemalloc/ to top level. (Jason Evans, 2011-04-01; 4 files, -0/+2295)