path: root/src
...
* Postpone mutex initialization on FreeBSD. (Jason Evans, 2012-04-04; 2 files, -4/+35)

  Postpone mutex initialization on FreeBSD until after base allocation is safe.

* Finish renaming "arenas.pagesize" to "arenas.page". (Jason Evans, 2012-04-02; 1 file, -11/+10)

* Clean up *PAGE* macros. (Jason Evans, 2012-04-02; 8 files, -153/+118)

  s/PAGE_SHIFT/LG_PAGE/g and s/PAGE_SIZE/PAGE/g. Remove remnants of the dynamic-page-shift code. Rename the "arenas.pagesize" mallctl to "arenas.page". Remove the "arenas.chunksize" mallctl, which is redundant with "opt.lg_chunk".

* Revert "Avoid NULL check in free() and malloc_usable_size()." (Jason Evans, 2012-04-02; 1 file, -11/+15)

  This reverts commit 96d4120ac08db3f2d566e8e5c3bc134a24aa0afc. ivsalloc() depends on chunks_rtree being initialized, which can be worked around via a NULL pointer check. However, thread_allocated_tsd_get() also depends on initialization having occurred, and there is no way to guard its call in free() that is cheaper than checking whether ptr is NULL.

* Avoid NULL check in free() and malloc_usable_size(). (Jason Evans, 2012-04-02; 1 file, -15/+11)

  Generalize isalloc() to handle NULL pointers in such a way that the NULL-checking overhead is only paid when introspecting huge allocations (or NULL). This allows free() and malloc_usable_size() to no longer check for NULL. Submitted by Igor Bukanov and Mike Hommey.

* Move last bit of zone initialization in zone.c, and lazy-initialize (Mike Hommey, 2012-04-02; 2 files, -11/+1)

* Remove vsnprintf() and strtoumax() validation. (Jason Evans, 2012-04-02; 2 files, -28/+1)

  Remove code that validates malloc_vsnprintf() and malloc_strtoumax() against their namesakes. The validation code has served its purpose, and it isn't worth dealing with the different formatting of %p for NULL pointers in glibc versus other implementations ("(nil)" vs. "0x0"). Reported by Mike Hommey.

* Avoid crashes when system libraries use the purgeable zone allocator (Mike Hommey, 2012-03-30; 2 files, -6/+27)

* Move zone registration to zone.c (Mike Hommey, 2012-03-30; 2 files, -25/+21)

* Add a SYS_write definition on systems where it is not defined in headers (Mike Hommey, 2012-03-30; 1 file, -0/+10)

  Namely, in the Android NDK headers, SYS_write is not defined, but __NR_write is.

* Don't use pthread_atfork to register prefork/postfork handlers on OSX (Mike Hommey, 2012-03-28; 1 file, -1/+1)

  OSX libc already calls the zone allocators' force_lock/force_unlock.

* Add the "thread.tcache.enabled" mallctl. (Jason Evans, 2012-03-27; 2 files, -20/+43)

* Check for NULL ptr in malloc_usable_size(). (Jason Evans, 2012-03-26; 1 file, -4/+2)

  Check for NULL ptr in malloc_usable_size(), rather than just asserting that ptr is non-NULL. This matches the behavior of other implementations (e.g., glibc and tcmalloc).
* Make zone_{free,realloc,free_definite_size} fall back to the system allocator if they are called with a pointer that jemalloc didn't allocate (Mike Hommey, 2012-03-26; 1 file, -4/+17)

  It turns out some OSX system libraries (like CoreGraphics on 10.6) like to call malloc_zone_* functions, but pass them pointers that weren't allocated with the zone they are using. Possibly, they do malloc_zone_malloc(malloc_default_zone()) before we register the jemalloc zone, and malloc_zone_realloc(malloc_default_zone()) after, with malloc_default_zone() returning a different value in each case.

* Fix glibc hooks when using both --with-jemalloc-prefix and --with-mangling (Mike Hommey, 2012-03-26; 1 file, -1/+9)

* Port to FreeBSD. (Jason Evans, 2012-02-03; 5 files, -40/+197)

  Use FreeBSD-specific functions (_pthread_mutex_init_calloc_cb(), _malloc_{pre,post}fork()) to avoid bootstrapping issues due to allocation in libc and libthr.

  Add malloc_strtoumax() and use it instead of strtoul(). Disable validation code in malloc_vsnprintf() and malloc_strtoumax() until jemalloc is initialized. This is necessary because locale initialization causes allocation for both vsnprintf() and strtoumax().

  Force the lazy-lock feature on in order to avoid pthread_self(), because it causes allocation.

  Use syscall(SYS_write, ...) rather than write(...), because libthr wraps write() and causes allocation. Without this workaround, it would not be possible to print error messages in malloc_conf_init() without substantially reworking bootstrapping.

  Fix choose_arena_hard() to look at how many threads are assigned to the candidate choice, rather than checking whether the arena is uninitialized. This bug potentially caused more arenas to be initialized than necessary.
* Remove ephemeral mutexes. (Jason Evans, 2012-03-24; 3 files, -35/+46)

  Remove ephemeral mutexes from the prof machinery, and remove malloc_mutex_destroy(). This simplifies mutex management on systems that call malloc()/free() inside pthread_mutex_{create,destroy}().

  Add atomic_*_u() for operations on unsigned values.

  Fix prof_printf() to call malloc_vsnprintf() rather than malloc_snprintf().

* Add JEMALLOC_CC_SILENCE_INIT(). (Jason Evans, 2012-03-23; 2 files, -39/+11)

  Add JEMALLOC_CC_SILENCE_INIT(), which provides succinct syntax for initializing a variable to avoid a spurious compiler warning.

* Implement tsd. (Jason Evans, 2012-03-23; 8 files, -244/+274)

  Implement tsd, which is a TLS/TSD abstraction that uses one or both internally. Modify bootstrapping such that no tsd's are utilized until allocation is safe.

  Remove malloc_[v]tprintf(), and use malloc_snprintf() instead.

  Fix %p argument size handling in malloc_vsnprintf().

  Fix a long-standing statistics-related bug in the "thread.arena" mallctl that could cause crashes due to linked-list corruption.

* Improve zone support for OSX (Mike Hommey, 2012-03-20; 2 files, -174/+37)

  I tested a build from 10.7 run on 10.7 and 10.6, and a build from 10.6 run on 10.6. The AC_COMPILE_IFELSE limbo is to avoid running a program during configure, which presumably makes it work when cross-compiling for iOS.

* Unbreak mac after commit 4e2e3dd (Mike Hommey, 2012-03-20; 1 file, -1/+1)

* Invert NO_TLS to JEMALLOC_TLS. (Jason Evans, 2012-03-19; 4 files, -9/+9)

* Rename the "tcache.flush" mallctl to "thread.tcache.flush". (Jason Evans, 2012-03-17; 1 file, -6/+6)

* Fix fork-related bugs. (Jason Evans, 2012-03-13; 6 files, -26/+158)

  Acquire/release arena bin locks as part of the prefork/postfork. This bug made deadlock in the child between fork and exec a possibility.

  Split jemalloc_postfork() into jemalloc_postfork_{parent,child}() so that the child can reinitialize mutexes rather than unlocking them. In practice, this bug tended not to cause problems.

* Modify malloc_vsnprintf() validation code. (Jason Evans, 2012-03-13; 1 file, -4/+3)

  Modify malloc_vsnprintf() validation code to verify that output is identical to vsnprintf() output, even if both outputs are truncated due to buffer exhaustion.

* Implement aligned_alloc(). (Jason Evans, 2012-03-13; 1 file, -10/+27)

  Implement aligned_alloc(), which was added in the C11 standard. The function is weakly specified to the point that a minimally compliant implementation would be painful to use (size must be an integral multiple of alignment!), which in practice makes posix_memalign() a safer choice.
* Fix a regression in JE_COMPILABLE(). (Jason Evans, 2012-03-13; 1 file, -4/+1)

  Revert JE_COMPILABLE() so that it detects link errors. Cross-compiling should still work as long as a valid configure cache is provided. Clean up some comments/whitespace.

* Fix malloc_stats_print() option support. (Jason Evans, 2012-03-13; 1 file, -6/+8)

  Fix malloc_stats_print() to honor 'b' and 'l' in the opts parameter.

* s/PRIx64/PRIxPTR/ for uintptr_t printf() argument. (Jason Evans, 2012-03-12; 1 file, -1/+1)
* Remove extra '}'. (Jason Evans, 2012-03-12; 1 file, -1/+0)

* Implement malloc_vsnprintf(). (Jason Evans, 2012-03-08; 6 files, -470/+760)

  Implement malloc_vsnprintf() (a subset of vsnprintf(3)) as well as several other printing functions based on it, so that formatted printing can be relied upon without concern for inducing a dependency on floating point runtime support. Replace malloc_write() calls with malloc_*printf() where doing so simplifies the code.

  Add name mangling for library-private symbols in the data and BSS sections.

  Adjust CONF_HANDLE_*() macros in malloc_conf_init() to expose all opt_* variable use to cpp so that proper mangling occurs.

* Remove the lg_tcache_gc_sweep option. (Jason Evans, 2012-03-05; 4 files, -27/+0)

  Remove the lg_tcache_gc_sweep option, because it is no longer very useful. Prior to the addition of dynamic adjustment of tcache fill count, it was possible for fill/flush overhead to be a problem, but this problem no longer occurs.

* Use UINT64_C() rather than LLU for 64-bit constants. (Jason Evans, 2012-03-05; 2 files, -8/+9)
* Add the --disable-experimental option. (Jason Evans, 2012-03-03; 1 file, -1/+11)

* Rename prn to prng. (Jason Evans, 2012-03-02; 2 files, -4/+4)

  Rename prn to prng so that Windows doesn't choke when trying to create a file named prn.h.

* Add --with-mangling. (Jason Evans, 2012-03-02; 3 files, -67/+59)

  Add the --with-mangling configure option, which can be used to specify name mangling on a per-public-symbol basis; it takes precedence over --with-jemalloc-prefix.

  Expose the memalign() and valloc() overrides even if --with-jemalloc-prefix is specified. This change does no real harm, and simplifies the code.

* Simplify zone_good_size(). (Jason Evans, 2012-02-29; 1 file, -15/+3)

  Simplify zone_good_size() to avoid memory allocation. Submitted by Mike Hommey.

* Add nallocm(). (Jason Evans, 2012-02-29; 1 file, -0/+22)

  Add nallocm(), which computes the real allocation size that would result from the corresponding allocm() call. nallocm() is a functional superset of OS X's malloc_good_size(), in that it takes alignment constraints into account.
* Use glibc allocator hooks. (Jason Evans, 2012-02-29; 2 files, -0/+28)

  When jemalloc is used as a libc malloc replacement (i.e. not prefixed), some particular setups may end up inconsistently calling malloc from libc and free from jemalloc, or the other way around. glibc provides hooks to make its functions use alternative implementations. Use them. Submitted by Karl Tomlinson and Mike Hommey.

* Do not enforce minimum alignment in memalign(). (Jason Evans, 2012-02-29; 1 file, -6/+8)

  Do not enforce minimum alignment in memalign(). This is a non-standard function, and there is disagreement over whether to enforce minimum alignment. Solaris documentation (whence memalign() originated) says that minimum alignment is required:

    The value of alignment must be a power of two and must be greater than or equal to the size of a word.

  However, Linux's manual page says in its NOTES section:

    memalign() may not check that the boundary parameter is correct.

  This is descriptive rather than prescriptive, but applications with bad assumptions about memalign() exist, so be as forgiving as possible. Reported by Mike Hommey.

* Remove unused variables in stats_print(). (Jason Evans, 2012-02-29; 1 file, -4/+0)

  Submitted by Mike Hommey.

* Remove unused variable in arena_run_split(). (Jason Evans, 2012-02-29; 1 file, -2/+1)

  Submitted by Mike Hommey.

* Enable the stats configuration option by default. (Jason Evans, 2012-02-29; 1 file, -2/+0)

* Remove the sysv option. (Jason Evans, 2012-02-29; 3 files, -54/+7)

* Fix realloc(p, 0) to act like free(p). (Jason Evans, 2012-02-29; 1 file, -13/+19)

  Reported by Yoni Londer.
* Simplify small size class infrastructure. (Jason Evans, 2012-02-29; 5 files, -508/+86)

  Program-generate small size class tables for all valid combinations of LG_TINY_MIN, LG_QUANTUM, and PAGE_SHIFT. Use the appropriate table to generate all relevant data structures, and remove the distinction between tiny/quantum/cacheline/subpage bins.

  Remove --enable-dynamic-page-shift. This option didn't prove useful in practice, and it prevented optimizations.

  Add Tilera architecture support.
* Remove the opt.lg_prof_bt_max option. (Jason Evans, 2012-02-14; 4 files, -25/+8)

  Remove opt.lg_prof_bt_max, and hard-code it to 7. The original intention of this option was to enable faster backtracing by limiting backtrace depth. However, this makes graphical pprof output very difficult to interpret. In practice, decreasing the sampling frequency is a better mechanism for limiting profiling overhead.

* Remove the opt.lg_prof_tcmax option. (Jason Evans, 2012-02-14; 4 files, -24/+3)

  Remove the opt.lg_prof_tcmax option and hard-code a cache size of 1024. This setting is something that users just shouldn't have to worry about. If lock contention actually ends up being a problem, the simple solution available to the user is to reduce the sampling frequency.

* Fix bin->runcur management. (Jason Evans, 2012-02-14; 1 file, -62/+72)

  Fix an interaction between arena_dissociate_bin_run() and arena_bin_lower_run() that made it possible for bin->runcur to point to a run other than the lowest non-full run. This bug violated jemalloc's layout policy, but did not affect correctness.

* Remove highruns statistics. (Jason Evans, 2012-02-13; 3 files, -56/+11)