path: root/src
Commit log (author, date, diffstat: files changed, -lines removed/+lines added):
* Simplify zone_good_size().  (Jason Evans, 2012-02-29; 1 file, -15/+3)
    Simplify zone_good_size() to avoid memory allocation. Submitted by Mike Hommey.
* Add nallocm().  (Jason Evans, 2012-02-29; 1 file, -0/+22)
    Add nallocm(), which computes the real allocation size that would result from the corresponding allocm() call. nallocm() is a functional superset of OS X's malloc_good_size(), in that it takes alignment constraints into account.
* Use glibc allocator hooks.  (Jason Evans, 2012-02-29; 2 files, -0/+28)
    When jemalloc is used as a libc malloc replacement (i.e. not prefixed), some particular setups may end up inconsistently calling malloc from libc and free from jemalloc, or the other way around. glibc provides hooks to make its functions use alternative implementations. Use them. Submitted by Karl Tomlinson and Mike Hommey.
* Do not enforce minimum alignment in memalign().  (Jason Evans, 2012-02-29; 1 file, -6/+8)
    Do not enforce minimum alignment in memalign(). This is a non-standard function, and there is disagreement over whether to enforce minimum alignment. Solaris documentation (whence memalign() originated) says that minimum alignment is required:

        The value of alignment must be a power of two and must be greater than or equal to the size of a word.

    However, Linux's manual page says in its NOTES section:

        memalign() may not check that the boundary parameter is correct.

    This is descriptive rather than prescriptive, but applications with bad assumptions about memalign() exist, so be as forgiving as possible. Reported by Mike Hommey.
* Remove unused variables in stats_print().  (Jason Evans, 2012-02-29; 1 file, -4/+0)
    Submitted by Mike Hommey.
* Remove unused variable in arena_run_split().  (Jason Evans, 2012-02-29; 1 file, -2/+1)
    Submitted by Mike Hommey.
* Enable the stats configuration option by default.  (Jason Evans, 2012-02-29; 1 file, -2/+0)
* Remove the sysv option.  (Jason Evans, 2012-02-29; 3 files, -54/+7)
* Fix realloc(p, 0) to act like free(p).  (Jason Evans, 2012-02-29; 1 file, -13/+19)
    Reported by Yoni Londer.
* Simplify small size class infrastructure.  (Jason Evans, 2012-02-29; 5 files, -508/+86)
    Program-generate small size class tables for all valid combinations of LG_TINY_MIN, LG_QUANTUM, and PAGE_SHIFT. Use the appropriate table to generate all relevant data structures, and remove the distinction between tiny/quantum/cacheline/subpage bins.

    Remove --enable-dynamic-page-shift. This option didn't prove useful in practice, and it prevented optimizations.

    Add Tilera architecture support.
* Remove the opt.lg_prof_bt_max option.  (Jason Evans, 2012-02-14; 4 files, -25/+8)
    Remove opt.lg_prof_bt_max, and hard-code it to 7. The original intention of this option was to enable faster backtracing by limiting backtrace depth. However, this makes graphical pprof output very difficult to interpret. In practice, decreasing sampling frequency is a better mechanism for limiting profiling overhead.
* Remove the opt.lg_prof_tcmax option.  (Jason Evans, 2012-02-14; 4 files, -24/+3)
    Remove the opt.lg_prof_tcmax option and hard-code a cache size of 1024. This setting is something that users just shouldn't have to worry about. If lock contention actually ends up being a problem, the simple solution available to the user is to reduce sampling frequency.
* Fix bin->runcur management.  (Jason Evans, 2012-02-14; 1 file, -62/+72)
    Fix an interaction between arena_dissociate_bin_run() and arena_bin_lower_run() that made it possible for bin->runcur to point to a run other than the lowest non-full run. This bug violated jemalloc's layout policy, but did not affect correctness.
* Remove highruns statistics.  (Jason Evans, 2012-02-13; 3 files, -56/+11)
* Make 8-byte tiny size class non-optional.  (Jason Evans, 2012-02-13; 2 files, -79/+31)
    When tiny size class support was first added, it was intended to support truly tiny size classes (even 2 bytes). However, this wasn't very useful in practice, so the minimum tiny size class has been limited to sizeof(void *) for a long time now. This is too small to be standards compliant, but other commonly used malloc implementations do not even bother using a 16-byte quantum on systems with vector units (SSE2+, AltiVec, etc.). As such, it is safe in practice to support an 8-byte tiny size class on 64-bit systems that support 16-byte types.
* Silence compiler warnings.  (Jason Evans, 2012-02-13; 1 file, -5/+25)
* Streamline tcache-related malloc/free fast paths.  (Jason Evans, 2012-02-13; 2 files, -32/+1)
    tcache_get() is inlined, so do the config_tcache check inside tcache_get() and simplify its callers. Make arena_malloc() an inline function, since it is part of the malloc() fast path. Remove conditional logic that caused build issues if --disable-tcache was specified.
* Remove the swap feature.  (Jason Evans, 2012-02-13; 7 files, -571/+19)
    Remove the swap feature, which enabled per-application swap files. In practice this feature has not proven itself useful to users.
* Remove magic.  (Jason Evans, 2012-02-13; 2 files, -24/+0)
    Remove structure magic, because 1) it is no longer conditional, and 2) it stopped being very effective at detecting memory corruption several years ago.
* Reduce cpp conditional logic complexity.  (Jason Evans, 2012-02-11; 12 files, -1473/+949)
    Convert configuration-related cpp conditional logic to use static constant variables, e.g.:

        #ifdef JEMALLOC_DEBUG
            [...]
        #endif

    becomes:

        if (config_debug) {
            [...]
        }

    The advantage is clearer, more concise code. The main disadvantage is that data structures no longer have conditionally defined fields, so they pay the cost of all fields regardless of whether they are used. In practice, this is only a minor concern; config_stats will go away in an upcoming change, and config_prof is the only other major feature that depends on more than a few special-purpose fields.
* Fix malloc_stats_print(..., "a") output.  (Jason Evans, 2011-11-11; 1 file, -1/+1)
    Fix the logic in stats_print() such that if the "a" flag is passed in without the "m" flag, merged statistics will be printed even if only one arena is initialized.
* Fix huge_ralloc to maintain chunk statistics.  (Jason Evans, 2011-11-11; 3 files, -13/+16)
    Fix huge_ralloc() to properly maintain chunk statistics when using mremap(2).
* Fix huge_ralloc() race when using mremap(2).  (Jason Evans, 2011-11-09; 1 file, -3/+9)
    Fix huge_ralloc() to remove the old memory region from the tree of huge allocations *before* calling mremap(2), in order to make sure that no other thread acquires the old memory region via mmap() and encounters stale metadata in the tree. Reported by: Rich Prohaska
* Fix rallocm() test to support >4KiB pages.  (Jason Evans, 2011-11-06; 1 file, -1/+1)
* Initialize arenas_tsd before setting it.  (Jason Evans, 2011-11-04; 1 file, -8/+8)
    Reported by: Ethan Burns, Rich Prohaska, Tudor Bosman
* Fix a prof-related race condition.  (Jason Evans, 2011-08-31; 1 file, -6/+19)
    Fix prof_lookup() to artificially raise curobjs for all paths through the code that creates a new entry in the per-thread bt2cnt hash table. This fixes a race condition that could corrupt memory if prof_accum were false, and a non-default lg_prof_tcmax were used and/or threads were destroyed.
* Fix a prof-related bug in realloc().  (Jason Evans, 2011-08-31; 1 file, -3/+8)
    Fix realloc() such that it only records the object passed in as freed if no OOM error occurs.
* Add missing prof_malloc() call in allocm().  (Jason Evans, 2011-08-13; 1 file, -3/+2)
    Add a missing prof_malloc() call in allocm(). Before this fix, negative object/byte counts could be observed in heap profiles for applications that use allocm().
* Fix off-by-one backtracing issues.  (Jason Evans, 2011-08-12; 1 file, -13/+36)
    Rewrite prof_alloc_prep() as a cpp macro, PROF_ALLOC_PREP(), in order to remove any doubt as to whether an additional stack frame is created. Prior to this change, it was assumed that inlining would reduce the total number of frames in the backtrace, but in practice behavior wasn't completely predictable.

    Create imemalign() and call it from posix_memalign(), memalign(), and valloc(), so that all entry points require the same number of stack frames to be ignored during backtracing.
* Conditionalize an isalloc() call in rallocm().  (Jason Evans, 2011-08-12; 1 file, -2/+2)
    Conditionalize an isalloc() call in rallocm() that may be unnecessary.
* Fix two prof-related bugs in rallocm().  (Jason Evans, 2011-08-12; 2 files, -3/+11)
    Properly handle boundary conditions for sampled region promotion in rallocm(). Prior to this fix, some combinations of 'size' and 'extra' values could cause erroneous behavior. Additionally, size class recording for promoted regions was incorrect.
* Clean up prof-related comments.  (Jason Evans, 2011-08-10; 1 file, -23/+16)
    Clean up some prof-related comments to more accurately reflect how the code works. Simplify OOM handling code in a couple of prof-related error paths.
* Use prof_tdata_cleanup() argument.  (Jason Evans, 2011-08-09; 1 file, -24/+19)
    Use the argument to prof_tdata_cleanup(), rather than calling PROF_TCACHE_GET(). This fixes a bug in the NO_TLS case.
* Fix assertions in arena_purge().  (Jason Evans, 2011-06-13; 1 file, -2/+2)
    Fix assertions in arena_purge() to accurately reflect the constraints in arena_maybe_purge(). There were two bugs here, one of which merely weakened the assertion, and the other of which referred to an uninitialized variable (typo; used npurgatory instead of arena->npurgatory).
* Use LLU suffix for all 64-bit constants.  (Jason Evans, 2011-05-22; 2 files, -2/+2)
    Add the LLU suffix for all 0x... 64-bit constants. Reported by Jakob Blomer.
* Move repo contents in jemalloc/ to top level.  (Jason Evans, 2011-04-01; 21 files, -0/+11560)