path: root/include/jemalloc
Commit log (each entry: commit message, author, date, files changed, lines -deleted/+added):
* Add no-op bodies to VALGRIND_*() macro stubs. (Jason Evans, 2013-03-06; 1 file, -9/+11)
  Add no-op bodies to VALGRIND_*() macro stubs so that they can be used in
  contexts like the following without generating a compiler warning about the
  'if' statement having an empty body:
      if (config_valgrind) VALGRIND_MAKE_MEM_UNDEFINED(ret, size);
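  As a rough sketch (not the exact jemalloc stub, and assuming the real macros
  come from <valgrind/memcheck.h> when Valgrind support is enabled), a
  do/while (0) body is the usual way to make such stubs safe inside an 'if':

      #ifdef JEMALLOC_VALGRIND
      #  include <valgrind/memcheck.h>
      #else
      /* No-op stubs with bodies, so "if (cond) MACRO(...);" compiles cleanly. */
      #  define VALGRIND_MAKE_MEM_UNDEFINED(addr, len) do {} while (0)
      #  define VALGRIND_MAKE_MEM_DEFINED(addr, len)   do {} while (0)
      #endif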
* fix building for s390 systems (Mike Frysinger, 2013-03-06; 1 file, -1/+1)
  Checking for __s390x__ only covers s390x, not 32-bit s390 systems, so use
  __s390__, which works for both. With this, `make check` passes on s390.
  Signed-off-by: Mike Frysinger <vapier@gentoo.org>
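  The change boils down to testing the macro both ABIs define; a minimal sketch
  (the LG_QUANTUM value shown is illustrative, not necessarily jemalloc's):

      /* __s390__ is defined on both 32-bit s390 and 64-bit s390x;
       * __s390x__ is defined only on the 64-bit ABI. */
      #if defined(__s390__)
      #  define LG_QUANTUM 4
      #endif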
* Fix a prof-related locking order bug. (Jason Evans, 2013-02-06; 1 file, -13/+20)
  Fix a locking order bug that could cause deadlock during fork if heap
  profiling were enabled.
* Fix Valgrind integration. (Jason Evans, 2013-02-01; 2 files, -2/+3)
  Fix Valgrind integration to annotate all internally allocated memory in a way
  that keeps Valgrind happy about internal data structure access.
* Fix potential TLS-related memory corruption. (Jason Evans, 2013-01-31; 3 files, -8/+55)
  Avoid writing to uninitialized TLS as a side effect of deallocation.
  Initializing TLS during deallocation is unsafe because it is possible that a
  thread never did any allocation, and that TLS has already been deallocated by
  the threads library, resulting in write-after-free corruption.
  These fixes affect prof_tdata and quarantine; all other uses of TLS are
  already safe, whether intentionally (as for tcache) or unintentionally (as
  for arenas).
* Specify 'inline' in addition to always_inline attribute. (Jason Evans, 2013-01-23; 1 file, -1/+1)
  Specify both inline and __attribute__((always_inline)), in order to avoid
  warnings when using newer versions of gcc.
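  The combination amounts to something like the following sketch of the macro
  that the JEMALLOC_ALWAYS_INLINE entry below refers to (the exact definition
  in jemalloc may differ; the example function is hypothetical):

      #include <stddef.h>

      /* Newer gcc warns if always_inline is applied without 'inline'. */
      #define JEMALLOC_ALWAYS_INLINE \
          static inline __attribute__((always_inline))

      JEMALLOC_ALWAYS_INLINE size_t
      example_fast_path(size_t size)
      {
          return (size);
      }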
* Update hash from MurmurHash2 to MurmurHash3. (Jason Evans, 2013-01-22; 4 files, -40/+315)
  Update hash from MurmurHash2 to MurmurHash3, primarily because the latter
  generates 128 bits in a single call for no extra cost, which simplifies
  integration with cuckoo hashing.
* Add and use JEMALLOC_ALWAYS_INLINE. (Jason Evans, 2013-01-22; 3 files, -48/+56)
  Add JEMALLOC_ALWAYS_INLINE and use it to guarantee that the entire fast paths
  of the primary allocation/deallocation functions are inlined.
* Tighten valgrind integration. (Jason Evans, 2013-01-22; 1 file, -0/+2)
  Tighten valgrind integration such that immediately after memory is validated
  or zeroed, valgrind is told to forget the memory's 'defined' state. The only
  place newly allocated memory should be left marked as 'defined' is in the
  public functions (e.g. calloc() and realloc()).
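  A hedged sketch of the pattern, using Valgrind's client-request macros from
  <valgrind/memcheck.h> (the helper name is made up):

      #include <stddef.h>
      #include <string.h>
      #include <valgrind/memcheck.h>

      /* Zero a region internally, then tell Valgrind to forget its 'defined'
       * state so reads of logically uninitialized memory are still reported.
       * Public entry points such as calloc() re-mark the memory as defined. */
      static void
      zero_region(void *addr, size_t size)
      {
          memset(addr, 0, size);
          VALGRIND_MAKE_MEM_UNDEFINED(addr, size);
      }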
* Fix build break on *BSD (Garrett Cooper, 2012-12-24; 2 files, -1/+10)
  Linux uses alloca.h; many other operating systems define alloca(3) in
  stdlib.h.
  Signed-off-by: Garrett Cooper <yanegomi@gmail.com>
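  The portable include pattern is roughly the following (jemalloc's actual
  check is driven by configure, so treat this as a sketch):

      /* Linux declares alloca() in <alloca.h>; the BSDs declare it in <stdlib.h>. */
      #ifdef __linux__
      #  include <alloca.h>
      #else
      #  include <stdlib.h>
      #endif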
* Avoid arena_prof_accum()-related locking when possible. (Jason Evans, 2012-11-13; 2 files, -1/+43)
  Refactor arena_prof_accum() and its callers to avoid arena locking when
  prof_interval is 0 (as when profiling is disabled).
  Reported by Ben Maurer.
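  The shape of the refactoring is an early return before any locking; a
  self-contained sketch with stand-in types (not jemalloc's real ones):

      #include <pthread.h>
      #include <stdint.h>

      typedef struct {
          pthread_mutex_t lock;
          uint64_t        prof_accumbytes;
      } arena_t;

      static uint64_t prof_interval;  /* 0 when profiling is disabled */

      static void
      arena_prof_accum(arena_t *arena, uint64_t accumbytes)
      {
          if (prof_interval == 0)
              return;  /* common case: no arena lock taken at all */
          pthread_mutex_lock(&arena->lock);
          arena->prof_accumbytes += accumbytes;
          pthread_mutex_unlock(&arena->lock);
      }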
* Purge unused dirty pages in a fragmentation-reducing order. (Jason Evans, 2012-11-06; 1 file, -29/+24)
  Purge unused dirty pages in an order that first performs clean/dirty run
  defragmentation, in order to mitigate available run fragmentation.
  Remove the limitation that prevented purging unless at least one chunk worth
  of dirty pages had accumulated in an arena. This limitation was intended to
  avoid excessive purging for small applications, but the threshold was
  arbitrary and its effect was of questionable utility.
  Relax opt_lg_dirty_mult from 5 to 3. This compensates for the increased
  likelihood of allocating clean runs, given the same ratio of clean:dirty
  runs, and reduces the potential for repeated purging in pathological large
  malloc/free loops that push the active:dirty page ratio just over the purge
  threshold.
* Add arena-specific and selective dss allocation. (Jason Evans, 2012-10-13; 8 files, -35/+145)
  - Add the "arenas.extend" mallctl, so that it is possible to create new
    arenas that are outside the set that jemalloc automatically multiplexes
    threads onto.
  - Add the ALLOCM_ARENA() flag for {,r,d}allocm(), so that it is possible to
    explicitly allocate from a particular arena.
  - Add the "opt.dss" mallctl, which controls the default precedence of dss
    allocation relative to mmap allocation.
  - Add the "arena.<i>.dss" mallctl, which makes it possible to set the default
    dss precedence on a per arena or global basis.
  - Add the "arena.<i>.purge" mallctl, which obsoletes "arenas.purge".
  - Add the "stats.arenas.<i>.dss" mallctl.
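  From an application's perspective, the new controls can be exercised roughly
  as follows (error handling trimmed; allocm()/dallocm() and ALLOCM_ARENA()
  belong to the experimental API of this era and were removed in later
  releases):

      #include <stdio.h>
      #include <jemalloc/jemalloc.h>

      int
      main(void)
      {
          unsigned arena_ind;
          size_t sz = sizeof(arena_ind);
          void *p;

          /* Create a fresh arena outside the automatic round-robin set. */
          if (mallctl("arenas.extend", &arena_ind, &sz, NULL, 0) != 0)
              return (1);

          /* Allocate explicitly from that arena. */
          if (allocm(&p, NULL, 4096, ALLOCM_ARENA(arena_ind)) != ALLOCM_SUCCESS)
              return (1);

          printf("allocated from arena %u\n", arena_ind);
          dallocm(p, 0);
          return (0);
      }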
* Drop const from malloc_usable_size() argument on Linux. (Jason Evans, 2012-10-09; 2 files, -1/+11)
  Drop const from malloc_usable_size() argument on Linux, in order to match the
  prototype in Linux's malloc.h.
* Fix fork(2)-related deadlocks. (Jason Evans, 2012-10-09; 5 files, -0/+31)
  Add a library constructor for jemalloc that initializes the allocator. This
  fixes a race that could occur if threads were created by the main thread
  prior to any memory allocation, followed by fork(2), and then memory
  allocation in the child process.
  Fix the prefork/postfork functions to acquire/release the ctl, prof, and
  rtree mutexes. This fixes various fork() child process deadlocks, but one
  possible deadlock remains (intentionally) unaddressed: prof backtracing can
  acquire runtime library mutexes, so deadlock is still possible if heap
  profiling is enabled during fork(). This deadlock is known to be a real issue
  in at least the case of libgcc-based backtracing.
  Reported by tfengjun.
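  The general mechanism is a library constructor plus pthread_atfork() hooks
  that take and release the allocator mutexes around fork(); a minimal sketch
  with a single stand-in mutex instead of jemalloc's full prefork/postfork
  lists:

      #include <pthread.h>

      static pthread_mutex_t ctl_mtx = PTHREAD_MUTEX_INITIALIZER;

      /* Hold the allocator mutexes across fork() so the child never inherits
       * a mutex locked by a thread that does not exist in the child. */
      static void prefork(void)         { pthread_mutex_lock(&ctl_mtx); }
      static void postfork_parent(void) { pthread_mutex_unlock(&ctl_mtx); }
      static void postfork_child(void)  { pthread_mutex_init(&ctl_mtx, NULL); }

      /* Constructor: initialize the allocator (and register the atfork hooks)
       * before the application can create threads and then fork(). */
      __attribute__((constructor)) static void
      allocator_constructor(void)
      {
          pthread_atfork(prefork, postfork_parent, postfork_child);
      }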
* Fix mlockall()/madvise() interaction. (Jason Evans, 2012-10-09; 2 files, -1/+4)
  mlockall(2) can cause purging via madvise(2) to fail. Fix purging code to
  check whether madvise() succeeded, and base zeroed page metadata on the
  result.
  Reported by Olivier Lecomte.
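  The defensive check amounts to the following (function and variable names
  assumed for illustration):

      #include <stdbool.h>
      #include <stddef.h>
      #include <sys/mman.h>

      /* Under mlockall(MCL_FUTURE), madvise(MADV_DONTNEED) can fail; in that
       * case the pages were not discarded and must not be recorded as zeroed. */
      static bool
      pages_purge(void *addr, size_t length)
      {
          bool unzeroed = (madvise(addr, length, MADV_DONTNEED) != 0);
          return (unzeroed);
      }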
* Define LG_QUANTUM for hppa. (Jason Evans, 2012-10-08; 1 file, -0/+3)
  Submitted by Jory Pratt.
* Auto-detect whether running inside Valgrind. (Jason Evans, 2012-05-15; 1 file, -0/+1)
  Auto-detect whether running inside Valgrind, thus removing the need to
  manually specify MALLOC_CONF=valgrind:true.
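  Valgrind ships a client-request macro for exactly this; a sketch of the
  detection (the option variable and helper name are assumptions):

      #include <stdbool.h>
      #include <valgrind/valgrind.h>

      static bool opt_valgrind = false;

      static void
      valgrind_autodetect(void)
      {
          /* RUNNING_ON_VALGRIND is nonzero when executing under Valgrind, so
           * MALLOC_CONF=valgrind:true no longer has to be set by hand. */
          if (RUNNING_ON_VALGRIND)
              opt_valgrind = true;
      }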
* Fix heap profiling crash for realloc(p, 0) case. (Jason Evans, 2012-05-15; 1 file, -1/+1)
  Fix prof_realloc() to not call prof_ctx_set() if a sampled object is being
  freed via realloc(p, 0).
* Fix large calloc() zeroing bugs. (Jason Evans, 2012-05-11; 1 file, -5/+10)
  Refactor code such that arena_mapbits_{large,small}_set() always preserves
  the unzeroed flag, and manually manipulate the unzeroed flag in the one case
  where it actually gets reset (in arena_chunk_purge()). This fixes unzeroed
  preservation bugs in arena_run_split() and arena_ralloc_large_grow(). These
  bugs caused large calloc() to return non-zeroed memory under some
  circumstances.
* Update a comment. (Jason Evans, 2012-05-10; 1 file, -9/+9)
* Export je_memalign and je_valloc (Mike Hommey, 2012-05-09; 1 file, -0/+9)
  Commit da99e31 removed the attributes on je_memalign and je_valloc while they
  had no definition in the jemalloc.h header, which made them non-exported.
  Export them again by defining them in the jemalloc.h header.
* Add the --enable-mremap option. (Jason Evans, 2012-05-09; 2 files, -6/+16)
  Add the --enable-mremap option, and disable the use of mremap(2) by default,
  for the same reason that freeing chunks via munmap(2) is disabled by default
  on Linux: semi-permanent VM map fragmentation.
* Further optimize and harden arena_salloc(). (Jason Evans, 2012-05-02; 2 files, -34/+69)
  Further optimize arena_salloc() to only look at the binind chunk map bits in
  the common case.
  Add more sanity checks to arena_salloc() that detect chunk map
  inconsistencies for large allocations (whether due to allocator bugs or
  application bugs).
* Fix partial rename of s/EXPORT/JEMALLOC_EXPORT/g. (Jason Evans, 2012-05-02; 1 file, -5/+5)
* Update private namespace mangling. (Jason Evans, 2012-05-02; 1 file, -12/+11)
* Make malloc_write() non-inline. (Jason Evans, 2012-05-02; 1 file, -11/+1)
  Make malloc_write() non-inline, in order to resolve its dependency on
  je_malloc_write().
* Make CACHELINE a raw constant. (Jason Evans, 2012-05-02; 1 file, -1/+4)
  Make CACHELINE a raw constant in order to work around a __declspec(align())
  limitation.
  Submitted by Mike Hommey.
* Optimize malloc() and free() fast paths. (Jason Evans, 2012-05-02; 4 files, -134/+320)
  Embed the bin index for small page runs into the chunk page map, in order to
  omit [...] in the following dependent load sequence:
      ptr-->mapelm-->[run-->bin-->]bin_info
  Move various non-critical code out of the inlined function chain into helper
  functions (tcache_event_hard(), arena_dalloc_small(), and locking).
* Add support for MSVC (Mike Hommey, 2012-05-01; 3 files, -4/+55)
  Tested with MSVC 8, both 32-bit and 64-bit.
* Replace JEMALLOC_ATTR with various different macros when it makes sense (Mike Hommey, 2012-05-01; 3 files, -28/+37)
  These newly added macros will be used to implement the equivalent under MSVC.
  Also, move the definitions to headers, where they make more sense, and for
  some, are even more useful there (e.g. malloc).
* Few configure.ac adjustments (Mike Hommey, 2012-05-01; 1 file, -2/+2)
  - Use the extensions autoconf finds for object and executable files.
  - Remove the sorev variable, and replace SOREV definition with sorev's.
  - Default to je_ prefix on win32.
* Use Get/SetLastError on Win32 (Mike Hommey, 2012-04-30; 2 files, -3/+37)
  Using errno on win32 doesn't quite work, because a value set in a shared
  library can't be read from e.g. an executable calling the function that sets
  errno.
  At the same time, since buferror() always uses errno/GetLastError() itself,
  don't pass the error value to it.
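  The usual workaround is a pair of helpers that map onto GetLastError() and
  SetLastError() on Windows and plain errno elsewhere; a sketch with assumed
  names:

      #ifdef _WIN32
      #  include <windows.h>
      #else
      #  include <errno.h>
      #endif

      /* On Windows, errno can live in per-module CRT state, so a value set
       * inside a DLL may be invisible to the calling executable; the Win32
       * last-error value is shared process-wide. */
      static int
      get_errno(void)
      {
      #ifdef _WIN32
          return ((int)GetLastError());
      #else
          return (errno);
      #endif
      }

      static void
      set_errno(int e)
      {
      #ifdef _WIN32
          SetLastError(e);
      #else
          errno = e;
      #endif
      }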
* Avoid variable length arrays and remove declarations within code (Mike Hommey, 2012-04-29; 2 files, -1/+16)
  MSVC doesn't support C99, and building as C++ to be able to use variable
  length arrays is dangerous, as C++ and C99 are incompatible.
  Introduce a VARIABLE_ARRAY macro that either uses VLA when supported, or
  alloca() otherwise. Note that using alloca() inside loops doesn't quite work
  like VLAs, thus the use of VARIABLE_ARRAY there is discouraged. It might be
  worth investigating ways to check whether VARIABLE_ARRAY is used in such a
  context at runtime in debug builds and bail out if that happens.
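  A sketch of such a macro (the feature-test macro name is hypothetical; the
  real decision comes from configure):

      #include <stdlib.h>      /* alloca() on the BSDs */
      #ifdef __linux__
      #  include <alloca.h>
      #endif

      #ifdef HAVE_C99_VLA      /* hypothetical configure result */
      #  define VARIABLE_ARRAY(type, name, count) type name[(count)]
      #else
      /* alloca() memory is only released on function return, so avoid this
       * form inside loops. */
      #  define VARIABLE_ARRAY(type, name, count) \
          type *name = alloca(sizeof(type) * (count))
      #endif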
* Fix more prof_tdata resurrection corner cases. (Jason Evans, 2012-04-29; 1 file, -5/+7)
* Handle prof_tdata resurrection. (Jason Evans, 2012-04-29; 1 file, -3/+15)
  Handle prof_tdata resurrection during thread shutdown, similarly to how
  tcache and quarantine handle resurrection.
* Fix a PROF_ALLOC_PREP() error path. (Jason Evans, 2012-04-25; 1 file, -1/+3)
  Fix a PROF_ALLOC_PREP() error path to initialize the return value to NULL.
* Fix ctl regression. (Jason Evans, 2012-04-24; 1 file, -6/+6)
  Fix ctl to correctly compute the number of children at each level of the ctl
  tree.
* Avoid using a union for ctl_node_s (Mike Hommey, 2012-04-23; 1 file, -12/+15)
  MSVC doesn't support C99, and as such doesn't support designated
  initialization of structs and unions. As there is never a mix of indexed and
  named nodes, it is pretty straightforward to use a different type for each.
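  The shape of the change, sketched with simplified fields (not the exact
  jemalloc declarations):

      #include <stdbool.h>
      #include <stddef.h>

      /* Instead of one union relying on C99 designated initializers (which
       * MSVC rejects), use two concrete node types that share a small header
       * recording which kind of node this is. */
      typedef struct ctl_node_s {
          bool named;
      } ctl_node_t;

      typedef struct ctl_named_node_s {
          ctl_node_t  node;
          const char *name;
      } ctl_named_node_t;

      typedef struct ctl_indexed_node_s {
          ctl_node_t         node;
          const ctl_node_t *(*index)(size_t i);  /* simplified lookup callback */
      } ctl_indexed_node_t;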
* Fix heap profiling bugs. (Jason Evans, 2012-04-22; 2 files, -9/+35)
  - Fix a potential deadlock that could occur during interval- and
    growth-triggered heap profile dumps.
  - Fix an off-by-one heap profile statistics bug that could be observed in
    interval- and growth-triggered heap profiles.
  - Fix heap profile dump filename sequence numbers (regression during
    conversion to malloc_snprintf()).
* Remove unused #includes (Mike Hommey, 2012-04-22; 1 file, -2/+0)
* Add support for Mingw (Mike Hommey, 2012-04-22; 3 files, -13/+125)
* Remove mmap_unaligned. (Jason Evans, 2012-04-22; 3 files, -11/+2)
  Remove mmap_unaligned, which was used to heuristically decide whether to
  optimistically call mmap() in such a way that could reduce the total number
  of system calls.
  If I remember correctly, the intention of mmap_unaligned was to avoid always
  executing the slow path in the presence of ASLR. However, that reasoning
  seems to have been based on a flawed understanding of how ASLR actually
  works. Although ASLR apparently causes mmap() to ignore address requests, it
  does not cause total placement randomness, so there is a reasonable
  expectation that iterative mmap() calls will start returning chunk-aligned
  mappings once the first chunk has been properly aligned.
* Fix chunk allocation/deallocation bugs. (Jason Evans, 2012-04-21; 1 file, -1/+1)
  - Fix chunk_alloc_dss() to zero memory when requested.
  - Fix chunk_dealloc() to avoid chunk_dealloc_mmap() for dss-allocated memory.
  - Fix huge_palloc() to always junk fill when requested.
  - Improve chunk_recycle() to report that memory is zeroed as a side effect of
    pages_purge().
* Fix a memory corruption bug in chunk_alloc_dss(). (Jason Evans, 2012-04-21; 1 file, -1/+1)
  - Fix a memory corruption bug in chunk_alloc_dss() that was due to claiming
    newly allocated memory is zeroed.
  - Reverse order of preference between mmap() and sbrk() to prefer mmap().
  - Clean up management of 'zero' parameter in chunk_alloc*().
* Fix isthreaded-related build breakage. (Jason Evans, 2012-04-20; 1 file, -0/+1)
* Add missing private namespace mangling. (Jason Evans, 2012-04-20; 1 file, -0/+46)
* Don't mangle pthread_create(). (Jason Evans, 2012-04-20; 1 file, -1/+0)
  Don't mangle pthread_create(); it's an exported symbol when defined.
* Make arena_salloc() an inline function. (Jason Evans, 2012-04-20; 3 files, -10/+50)
* Remove extra argument for malloc_tsd_cleanup_register (Mike Hommey, 2012-04-19; 1 file, -10/+5)
  Bookkeeping an extra argument that actually only stores a function pointer
  for a function we already have is not very useful.