path: root/src
Format: commit message (author, date; files changed, lines -deleted/+added)
...
* Clean up a few config-related conditionals/asserts. (Jason Evans, 2012-04-18; 2 files, -6/+8)
  Clean up a few config-related conditionals to avoid unnecessary dependencies on prof symbols. Use cassert() rather than assert() everywhere that it's appropriate.
* Update prof defaults to match common usage. (Jason Evans, 2012-04-17; 4 files, -2/+8)
  Change the "opt.lg_prof_sample" default from 0 to 19 (1 B to 512 KiB). Change the "opt.prof_accum" default from true to false. Add the "opt.prof_final" mallctl, so that "opt.prof_prefix" need not be abused to disable final profile dumping.
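
  For reference, a minimal sketch of how an application can pick its own values for these options through jemalloc's configuration string (assuming an unprefixed build with profiling compiled in; the option names are the ones listed above, the values are just examples):

      /* Illustrative only: override profiling defaults at application scope. */
      const char *malloc_conf =
          "prof:true,lg_prof_sample:19,prof_accum:false,prof_final:true";
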
* Add the --disable-munmap option. (Jason Evans, 2012-04-17; 1 file, -0/+3)
  Add the --disable-munmap option, remove the configure test that attempted to detect the VM allocation quirk known to exist on Linux x86[_64], and make --disable-munmap implicit on Linux.
* Disable munmap() if it causes VM map holes. (Jason Evans, 2012-04-13; 5 files, -241/+187)
  Add a configure test to determine whether common mmap()/munmap() patterns cause VM map holes, and only use munmap() to discard unused chunks if the problem does not exist. Unify the chunk caching for mmap and dss. Fix options processing to limit lg_chunk to be large enough that redzones will always fit.
* Always disable redzone by default. (Jason Evans, 2012-04-13; 1 file, -3/+1)
  Always disable redzone by default, even when --enable-debug is specified. The memory overhead for redzones can be substantial, which makes this feature something that should only be opted into.
* Call base_boot before chunk_boot0 (Mike Hommey, 2012-04-12; 1 file, -2/+2)
  Chunk_boot0 calls rtree_new, which calls base_alloc, which locks the base_mtx mutex. That mutex is initialized in base_boot.
* Use a stub replacement and disable dss when sbrk is not supported (Mike Hommey, 2012-04-12; 1 file, -0/+11)
* Normalize aligned allocation algorithms. (Jason Evans, 2012-04-12; 5 files, -114/+110)
  Normalize arena_palloc(), chunk_alloc_mmap_slow(), and chunk_recycle_dss() to use the same algorithm for trimming over-allocation. Add the ALIGNMENT_ADDR2BASE(), ALIGNMENT_ADDR2OFFSET(), and ALIGNMENT_CEILING() macros, and use them where appropriate.
  Remove the run_size_p parameter from sa2u().
  Fix a potential deadlock in chunk_recycle_dss() that was introduced by eae269036c9f702d9fa9be497a1a2aa1be13a29e (Add alignment support to chunk_alloc()).
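
  For readers unfamiliar with this idiom, here is a sketch of how alignment macros of this kind are typically written, assuming alignment is always a power of two; it illustrates the bit-twiddling pattern rather than quoting the commit's exact definitions:

      #include <stddef.h>
      #include <stdint.h>

      /* Round an address down to the nearest multiple of alignment. */
      #define ALIGNMENT_ADDR2BASE(a, alignment)                            \
              ((void *)((uintptr_t)(a) & ~((uintptr_t)(alignment) - 1)))

      /* Offset of an address above its alignment base. */
      #define ALIGNMENT_ADDR2OFFSET(a, alignment)                          \
              ((size_t)((uintptr_t)(a) & ((uintptr_t)(alignment) - 1)))

      /* Round a size up to the nearest multiple of alignment. */
      #define ALIGNMENT_CEILING(s, alignment)                              \
              (((s) + ((size_t)(alignment) - 1)) & ~((size_t)(alignment) - 1))
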
* Implement Valgrind support, redzones, and quarantine. (Jason Evans, 2012-04-11; 8 files, -115/+457)
  Implement Valgrind support, as well as the redzone and quarantine features, which help Valgrind detect memory errors. Redzones are only implemented for small objects because the changes necessary to support redzones around large and huge objects are complicated by in-place reallocation, to the point that it isn't clear that the maintenance burden is worth the incremental improvement to Valgrind support.
  Merge arena_salloc() and arena_salloc_demote(). Refactor i[v]salloc() to expose the 'demote' option.
* Rename labels. (Jason Evans, 2012-04-10; 7 files, -107/+107)
  Rename labels from FOO to label_foo in order to avoid system macro definitions, in particular OUT and ERROR on mingw.
  Reported by Mike Hommey.
* Add alignment support to chunk_alloc(). (Mike Hommey, 2012-04-10; 6 files, -135/+84)
* Remove MAP_NORESERVE support (Mike Hommey, 2012-04-10; 1 file, -27/+14)
  It was only used by the swap feature, and that is gone.
* Always initialize tcache data structures. (Jason Evans, 2012-04-06; 1 file, -46/+38)
  Always initialize tcache data structures if the tcache configuration option is enabled, regardless of opt_tcache. This fixes "thread.tcache.enabled" mallctl manipulation in the case when opt_tcache is false.
* Remove arena_malloc_prechosen(). (Jason Evans, 2012-04-06; 1 file, -1/+1)
  Remove arena_malloc_prechosen(), now that arena_malloc() can be invoked in a way that is semantically equivalent.
* Add utrace(2)-based tracing (--enable-utrace). (Jason Evans, 2012-04-05; 3 files, -1/+44)
* Fix threaded initialization and enable it on Linux. (Jason Evans, 2012-04-05; 1 file, -3/+5)
  Reported by Mike Hommey.
* Add missing "opt.lg_tcache_max" mallctl implementation. (Jason Evans, 2012-04-04; 1 file, -0/+3)
* Add a0malloc(), a0calloc(), and a0free(). (Jason Evans, 2012-04-04; 3 files, -5/+56)
  Add a0malloc(), a0calloc(), and a0free(), which are used by FreeBSD's libc to allocate/deallocate TLS in static binaries.
* Postpone mutex initialization on FreeBSD. (Jason Evans, 2012-04-04; 2 files, -4/+35)
  Postpone mutex initialization on FreeBSD until after base allocation is safe.
* Finish renaming "arenas.pagesize" to "arenas.page". (Jason Evans, 2012-04-02; 1 file, -11/+10)
* Clean up *PAGE* macros. (Jason Evans, 2012-04-02; 8 files, -153/+118)
  s/PAGE_SHIFT/LG_PAGE/g and s/PAGE_SIZE/PAGE/g.
  Remove remnants of the dynamic-page-shift code.
  Rename the "arenas.pagesize" mallctl to "arenas.page".
  Remove the "arenas.chunksize" mallctl, which is redundant with "opt.lg_chunk".
* Revert "Avoid NULL check in free() and malloc_usable_size()." (Jason Evans, 2012-04-02; 1 file, -11/+15)
  This reverts commit 96d4120ac08db3f2d566e8e5c3bc134a24aa0afc.
  ivsalloc() depends on chunks_rtree being initialized. This can be worked around via a NULL pointer check. However, thread_allocated_tsd_get() also depends on initialization having occurred, and there is no way to guard its call in free() that is cheaper than checking whether ptr is NULL.
* Avoid NULL check in free() and malloc_usable_size(). (Jason Evans, 2012-04-02; 1 file, -15/+11)
  Generalize isalloc() to handle NULL pointers in such a way that the NULL checking overhead is only paid when introspecting huge allocations (or NULL). This allows free() and malloc_usable_size() to no longer check for NULL.
  Submitted by Igor Bukanov and Mike Hommey.
* Move last bit of zone initialization in zone.c, and lazy-initialize (Mike Hommey, 2012-04-02; 2 files, -11/+1)
* Remove vsnprintf() and strtoumax() validation. (Jason Evans, 2012-04-02; 2 files, -28/+1)
  Remove code that validates malloc_vsnprintf() and malloc_strtoumax() against their namesakes. The validation code has adequately served its usefulness at this point, and it isn't worth dealing with the different formatting for %p with glibc versus other implementations for NULL pointers ("(nil)" vs. "0x0").
  Reported by Mike Hommey.
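
  The %p discrepancy in question is easy to reproduce; the outputs named above are exactly what this standard C one-liner prints on the respective libcs:

      #include <stdio.h>

      int
      main(void)
      {
              /* glibc prints "(nil)" for a NULL %p argument, while some other
               * implementations print "0x0", so byte-for-byte validation
               * against vsnprintf() is not portable. */
              printf("%p\n", (void *)0);
              return (0);
      }
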
* Avoid crashes when system libraries use the purgeable zone allocator (Mike Hommey, 2012-03-30; 2 files, -6/+27)
* Move zone registration to zone.c (Mike Hommey, 2012-03-30; 2 files, -25/+21)
* Add a SYS_write definition on systems where it is not defined in headers (Mike Hommey, 2012-03-30; 1 file, -0/+10)
  Namely, in the Android NDK headers, SYS_write is not defined; but __NR_write is.
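
  A compatibility shim of roughly this shape covers that case (a sketch of the obvious guard implied by the entry above, not a quote of the patch):

      #include <sys/syscall.h>

      /* The Android NDK headers define __NR_write but not SYS_write. */
      #if !defined(SYS_write) && defined(__NR_write)
      #  define SYS_write __NR_write
      #endif
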
* Don't use pthread_atfork to register prefork/postfork handlers on OSX (Mike Hommey, 2012-03-28; 1 file, -1/+1)
  OSX libc calls zone allocators' force_lock/force_unlock already.
* Add the "thread.tcache.enabled" mallctl. (Jason Evans, 2012-03-27; 2 files, -20/+43)
* Check for NULL ptr in malloc_usable_size(). (Jason Evans, 2012-03-26; 1 file, -4/+2)
  Check for NULL ptr in malloc_usable_size(), rather than just asserting that ptr is non-NULL. This matches behavior of other implementations (e.g., glibc and tcmalloc).
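
  Schematically, the function now degrades gracefully instead of asserting; a sketch, with isalloc() standing in for the internal size lookup (an assumed name, not a quoted prototype):

      #include <stddef.h>

      size_t isalloc(const void *ptr);   /* assumed internal size query */

      size_t
      malloc_usable_size(const void *ptr)
      {
              /* Match glibc/tcmalloc: report 0 for NULL rather than asserting. */
              if (ptr == NULL)
                      return (0);
              return (isalloc(ptr));
      }
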
* Make zone_{free, realloc, free_definite_size} fallback to the system allocator if they are called with a pointer that jemalloc didn't allocate (Mike Hommey, 2012-03-26; 1 file, -4/+17)
  It turns out some OSX system libraries (like CoreGraphics on 10.6) like to call malloc_zone_* functions, but giving them pointers that weren't allocated with the zone they are using. Possibly, they do malloc_zone_malloc(malloc_default_zone()) before we register the jemalloc zone, and malloc_zone_realloc(malloc_default_zone()) after, and malloc_default_zone() returns a different value in each case.
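
  The defensive pattern amounts to an ownership check before dispatch; a sketch, where ivsalloc() (jemalloc's ownership/size query, mentioned elsewhere in this log) and je_free() are assumed internals, and the fallback call may differ from the commit's actual choice:

      #include <malloc/malloc.h>
      #include <stddef.h>

      size_t ivsalloc(const void *ptr);   /* 0 if jemalloc does not own ptr */
      void   je_free(void *ptr);          /* jemalloc's own free (assumed name) */

      static void
      zone_free(malloc_zone_t *zone, void *ptr)
      {
              (void)zone;
              if (ivsalloc(ptr) != 0) {
                      je_free(ptr);
                      return;
              }
              /* Not a jemalloc pointer: hand it to whichever zone owns it. */
              malloc_zone_free(malloc_zone_from_ptr(ptr), ptr);
      }
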
* Fix glibc hooks when using both --with-jemalloc-prefix and --with-mangling (Mike Hommey, 2012-03-26; 1 file, -1/+9)
* Port to FreeBSD. (Jason Evans, 2012-02-03; 5 files, -40/+197)
  Use FreeBSD-specific functions (_pthread_mutex_init_calloc_cb(), _malloc_{pre,post}fork()) to avoid bootstrapping issues due to allocation in libc and libthr.
  Add malloc_strtoumax() and use it instead of strtoul().
  Disable validation code in malloc_vsnprintf() and malloc_strtoumax() until jemalloc is initialized. This is necessary because locale initialization causes allocation for both vsnprintf() and strtoumax().
  Force the lazy-lock feature on in order to avoid pthread_self(), because it causes allocation.
  Use syscall(SYS_write, ...) rather than write(...), because libthr wraps write() and causes allocation. Without this workaround, it would not be possible to print error messages in malloc_conf_init() without substantially reworking bootstrapping.
  Fix choose_arena_hard() to look at how many threads are assigned to the candidate choice, rather than checking whether the arena is uninitialized. This bug potentially caused more arenas to be initialized than necessary.
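
  The allocation-free error output boils down to writing through the raw syscall interface; a minimal sketch (the helper name is illustrative, not necessarily the one in the tree):

      #include <string.h>
      #include <unistd.h>
      #include <sys/syscall.h>

      static void
      wrtmessage(const char *s)
      {
              /* Go through syscall() so that libthr's write() wrapper, which
               * can allocate, is bypassed during bootstrapping. */
              syscall(SYS_write, STDERR_FILENO, s, strlen(s));
      }
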
* Remove ephemeral mutexes. (Jason Evans, 2012-03-24; 3 files, -35/+46)
  Remove ephemeral mutexes from the prof machinery, and remove malloc_mutex_destroy(). This simplifies mutex management on systems that call malloc()/free() inside pthread_mutex_{create,destroy}().
  Add atomic_*_u() for operation on unsigned values.
  Fix prof_printf() to call malloc_vsnprintf() rather than malloc_snprintf().
* Add JEMALLOC_CC_SILENCE_INIT(). (Jason Evans, 2012-03-23; 2 files, -39/+11)
  Add JEMALLOC_CC_SILENCE_INIT(), which provides succinct syntax for initializing a variable to avoid a spurious compiler warning.
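
  The usual shape of such a macro is a conditionally empty initializer; a sketch keyed off an assumed JEMALLOC_CC_SILENCE feature flag:

      #ifdef JEMALLOC_CC_SILENCE
      #  define JEMALLOC_CC_SILENCE_INIT(v) = v
      #else
      #  define JEMALLOC_CC_SILENCE_INIT(v)
      #endif

      int
      example(int cond)
      {
              /* ret is assigned on every path below, but some compilers cannot
               * see that and warn; the macro adds a dead store only when
               * warning-silencing is enabled. */
              int ret JEMALLOC_CC_SILENCE_INIT(0);

              if (cond)
                      ret = 1;
              else
                      ret = 2;
              return (ret);
      }
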
* Implement tsd. (Jason Evans, 2012-03-23; 8 files, -244/+274)
  Implement tsd, which is a TLS/TSD abstraction that uses one or both internally. Modify bootstrapping such that no tsd's are utilized until allocation is safe.
  Remove malloc_[v]tprintf(), and use malloc_snprintf() instead.
  Fix %p argument size handling in malloc_vsnprintf().
  Fix a long-standing statistics-related bug in the "thread.arena" mallctl that could cause crashes due to linked list corruption.
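
  Conceptually, such an abstraction uses compiler TLS when it is available and falls back to pthread thread-specific data otherwise; a simplified sketch of the idea (names are illustrative, and JEMALLOC_TLS is the configuration macro named later in this log):

      #include <pthread.h>

      #ifdef JEMALLOC_TLS
      /* Fast path: compiler-supported thread-local storage. */
      static __thread void *example_tsd;
      #  define example_tsd_get()   (example_tsd)
      #  define example_tsd_set(v)  (example_tsd = (v))
      #else
      /* Fallback: POSIX TSD; example_tsd_key must be created during boot,
       * before any thread touches the value. */
      static pthread_key_t example_tsd_key;
      #  define example_tsd_get()   pthread_getspecific(example_tsd_key)
      #  define example_tsd_set(v)  pthread_setspecific(example_tsd_key, (v))
      #endif
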
* Improve zone support for OSX (Mike Hommey, 2012-03-20; 2 files, -174/+37)
  I tested a build from 10.7 run on 10.7 and 10.6, and a build from 10.6 run on 10.6. The AC_COMPILE_IFELSE limbo is to avoid running a program during configure, which presumably makes it work when cross compiling for iOS.
* Unbreak mac after commit 4e2e3dd (Mike Hommey, 2012-03-20; 1 file, -1/+1)
* Invert NO_TLS to JEMALLOC_TLS. (Jason Evans, 2012-03-19; 4 files, -9/+9)
* Rename the "tcache.flush" mallctl to "thread.tcache.flush". (Jason Evans, 2012-03-17; 1 file, -6/+6)
* Fix fork-related bugs. (Jason Evans, 2012-03-13; 6 files, -26/+158)
  Acquire/release arena bin locks as part of the prefork/postfork. This bug made deadlock in the child between fork and exec a possibility.
  Split jemalloc_postfork() into jemalloc_postfork_{parent,child}() so that the child can reinitialize mutexes rather than unlocking them. In practice, this bug tended not to cause problems.
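
  The parent/child split maps directly onto pthread_atfork(3)'s three-handler interface; schematically (handler bodies are placeholders for the lock operations described above, and the registration helper is hypothetical):

      #include <pthread.h>

      static void jemalloc_prefork(void) {
              /* Acquire all allocator mutexes, including arena bin locks. */
      }
      static void jemalloc_postfork_parent(void) {
              /* Release the same mutexes in the parent. */
      }
      static void jemalloc_postfork_child(void) {
              /* Reinitialize (not merely unlock) the mutexes in the child. */
      }

      static void
      register_fork_handlers(void)
      {
              pthread_atfork(jemalloc_prefork, jemalloc_postfork_parent,
                  jemalloc_postfork_child);
      }
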
* Modify malloc_vsnprintf() validation code. (Jason Evans, 2012-03-13; 1 file, -4/+3)
  Modify malloc_vsnprintf() validation code to verify that output is identical to vsnprintf() output, even if both outputs are truncated due to buffer exhaustion.
* Implement aligned_alloc(). (Jason Evans, 2012-03-13; 1 file, -10/+27)
  Implement aligned_alloc(), which was added in the C11 standard. The function is weakly specified to the point that a minimally compliant implementation would be painful to use (size must be an integral multiple of alignment!), which in practice makes posix_memalign() a safer choice.
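
  The restriction is easiest to see in a side-by-side usage comparison (standard C11 and POSIX calls, nothing jemalloc-specific):

      #include <stdlib.h>

      void
      example(void)
      {
              /* C11: for a strictly conforming call, size must be an integral
               * multiple of alignment, so 100 bytes at 64-byte alignment has
               * to be rounded up to 128. */
              void *a = aligned_alloc(64, 128);

              /* POSIX: no such restriction on size. */
              void *b;
              if (posix_memalign(&b, 64, 100) != 0)
                      b = NULL;

              free(a);
              free(b);
      }
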
* Fix a regression in JE_COMPILABLE(). (Jason Evans, 2012-03-13; 1 file, -4/+1)
  Revert JE_COMPILABLE() so that it detects link errors. Cross-compiling should still work as long as a valid configure cache is provided.
  Clean up some comments/whitespace.
* Fix malloc_stats_print() option support. (Jason Evans, 2012-03-13; 1 file, -6/+8)
  Fix malloc_stats_print() to honor 'b' and 'l' in the opts parameter.
* s/PRIx64/PRIxPTR/ for uintptr_t printf() argument. (Jason Evans, 2012-03-12; 1 file, -1/+1)
* Remove extra '}'. (Jason Evans, 2012-03-12; 1 file, -1/+0)
* Implement malloc_vsnprintf(). (Jason Evans, 2012-03-08; 6 files, -470/+760)
  Implement malloc_vsnprintf() (a subset of vsnprintf(3)) as well as several other printing functions based on it, so that formatted printing can be relied upon without concern for inducing a dependency on floating point runtime support. Replace malloc_write() calls with malloc_*printf() where doing so simplifies the code.
  Add name mangling for library-private symbols in the data and BSS sections.
  Adjust CONF_HANDLE_*() macros in malloc_conf_init() to expose all opt_* variable use to cpp so that proper mangling occurs.
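
  Presumably the replacement call sites read much like their libc counterparts; an assumed usage sketch, with the prototypes taken to mirror snprintf(3) plus a plain string writer (both declarations here are assumptions, not quoted headers):

      #include <stddef.h>

      int  malloc_snprintf(char *str, size_t size, const char *format, ...);
      void malloc_write(const char *s);

      static void
      report(unsigned arena_ind, size_t mapped, void *chunk)
      {
              char buf[128];

              /* Integer, size, and pointer conversions only: no floating
               * point, hence no FP runtime dependency. */
              malloc_snprintf(buf, sizeof(buf),
                  "arena %u: mapped %zu bytes at %p\n",
                  arena_ind, mapped, chunk);
              malloc_write(buf);
      }
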
* Remove the lg_tcache_gc_sweep option. (Jason Evans, 2012-03-05; 4 files, -27/+0)
  Remove the lg_tcache_gc_sweep option, because it is no longer very useful. Prior to the addition of dynamic adjustment of tcache fill count, it was possible for fill/flush overhead to be a problem, but this problem no longer occurs.