Commit message  Author  Date  Files  Lines (-/+)
* Merge branch 'dev' (tag: 2.1.2)  Jason Evans  2011-03-02  8 files, -59/+142
|\
| * Update ChangeLog for 2.1.2.  je  2011-03-02  1 file, -0/+6
| |
| * Build both PIC and non-PIC static libraries.  Arun Sharma  2011-03-02  2 files, -11/+19
| |   When jemalloc is linked into an executable (as opposed to a shared library), compiling with -fno-pic can have significant advantages, mainly because we don't have to go through the GOT (global offset table). Users who want to link jemalloc into a shared library that could be dlopened need to link with libjemalloc_pic.a or libjemalloc.so.
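The distinction above matters at link time. A hypothetical Makefile fragment (the targets and object files are invented for illustration; only the library names libjemalloc.a, libjemalloc_pic.a, and libjemalloc.so come from the commit message):

```make
# Executable: static-link the default (non-PIC) archive for GOT-free code.
app: app.o
	$(CC) app.o libjemalloc.a -lpthread -o app

# dlopen-able shared library: must use the PIC archive (or libjemalloc.so).
plugin.so: plugin.o
	$(CC) -shared plugin.o libjemalloc_pic.a -o plugin.so
```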
| * Fix style nits.  Jason Evans  2011-02-14  3 files, -6/+8
| |
| * Fix "thread.{de,}allocatedp" mallctl.  Jason Evans  2011-02-14  5 files, -32/+99
| |   For the non-TLS case (as on OS X), if the "thread.{de,}allocatedp" mallctl was called before any allocation occurred for that thread, the TSD was still NULL, thus putting the application at risk of dereferencing NULL. Fix this by refactoring the initialization code and making it part of the conditional logic for all per-thread allocation counter accesses.
| * Add release dates to ChangeLog.  Jason Evans  2011-02-08  1 file, -10/+10
| |
* | Merge branch 'dev' (tag: 2.1.1)  Jason Evans  2011-02-01  12 files, -89/+97
|\ \
| |/
| * Update ChangeLog for 2.1.1.  Jason Evans  2011-02-01  1 file, -1/+9
| |
| * Fix an alignment-related bug in huge_ralloc().  Jason Evans  2011-02-01  1 file, -3/+3
| |   Fix huge_ralloc() to call huge_palloc() only if alignment requires it. This bug caused under-sized allocation for aligned huge reallocation (via rallocm()) if the requested alignment was less than the chunk size (4 MiB by default).
| * Fix ALLOCM_LG_ALIGN definition.  Jason Evans  2011-01-26  1 file, -1/+1
| |   Fix ALLOCM_LG_ALIGN to take a parameter and use it. Apparently, an editing error left ALLOCM_LG_ALIGN with the same definition as ALLOCM_LG_ALIGN_MASK.
| * Fix assertion typos.  Jason Evans  2011-01-15  2 files, -8/+8
| |   s/=/==/ in several assertions, as well as fixing spelling errors.
| * Fix a heap dumping deadlock.  Jason Evans  2011-01-15  1 file, -8/+22
| |   Restructure the ctx initialization code such that the ctx isn't locked across portions of the initialization code where allocation could occur. Instead, artificially inflate the cnt_merged.curobjs field, just as is done elsewhere to avoid similar races to the one that would otherwise be created by the reduction in locking scope. This bug affected interval- and growth-triggered heap dumping, but not manual heap dumping.
| * Fix a "thread.arena" mallctl bug.  Jason Evans  2010-12-29  1 file, -0/+5
| |   When setting a new arena association for the calling thread, also update the tcache's cached arena pointer, primarily so that tcache_alloc_small_hard() uses the intended arena.
| * Update various comments.  Jason Evans  2010-12-18  2 files, -49/+40
| |
| * Remove an arena_bin_run_size_calc() constraint.  Jason Evans  2010-12-16  1 file, -3/+1
| |   Remove the constraint that small run headers fit in one page. This constraint was necessary to avoid dirty page purging issues for unused pages within runs for medium size classes (which no longer exist).
| * Edit INSTALL.  Jason Evans  2010-12-16  1 file, -8/+8
| |
| * Remove high_water from tcache_bin_t.  Jason Evans  2010-12-16  2 files, -8/+0
| |   Remove the high_water field from tcache_bin_t, since it is not useful for anything.
* | Merge branch 'dev' (tag: 2.1.0)  Jason Evans  2010-12-04  22 files, -1866/+2675
|\ \
| |/
| * Updated ChangeLog for 2.1.0.  Jason Evans  2010-12-04  1 file, -0/+17
| |
| * Add the "thread.[de]allocatedp" mallctls.  Jason Evans  2010-12-03  2 files, -2/+36
| |
| * Use mremap(2) for huge realloc().  Jason Evans  2010-12-01  14 files, -17/+182
| |   If mremap(2) is available and supports MREMAP_FIXED, use it for huge realloc(). Initialize rtree later during bootstrapping, so that --enable-debug --enable-dss works. Fix a minor swap_avail stats bug.
| * Convert man page from roff to DocBook.  Jason Evans  2010-11-27  9 files, -1774/+2317
| |   Convert the man page source from roff to DocBook, and generate html and roff output. Modify the build system such that the documentation can be built as part of the release process, so that users need not have DocBook tools installed.
| * Push down ctl_mtx.  Jason Evans  2010-11-24  1 file, -74/+124
| |   Many mallctl*() end points require no locking, so push the locking down to just the functions that need it. This is of particular import for "thread.allocated" and "thread.deallocated", which are intended as a low-overhead way to introspect per-thread allocation activity.
| * Fix mallctlnametomib() documentation.  Jason Evans  2010-11-05  1 file, -2/+2
| |   Fix the prototype for mallctlnametomib() in the manual page to correspond to reality.
* | Merge branch 'dev' (tag: 2.0.1)  Jason Evans  2010-10-30  3 files, -34/+67
|\ \
| |/
| * Update ChangeLog for 2.0.1.  Jason Evans  2010-10-30  1 file, -0/+9
| |
| * Fix prof bugs.  Jason Evans  2010-10-28  1 file, -6/+29
| |   Fix a race condition in ctx destruction that could cause undefined behavior (deadlock observed). Add mutex unlocks to some OOM error paths.
| * Fix compilation error.  Jason Evans  2010-10-25  1 file, -1/+3
| |   Don't declare the loop variable inside the for (...) clause.
| * Re-indent ChangeLog.  Jason Evans  2010-10-24  1 file, -27/+26
| |   Fix indentation inconsistencies in ChangeLog.
* | Merge branch 'dev' (tag: 2.0.0)  Jason Evans  2010-10-24  50 files, -2494/+6075
|\ \
| |/
| * Document groff commands for manpage formatting.  Jason Evans  2010-10-24  1 file, -2/+6
| |   Document how to format the manpage for the terminal, pdf, and html.
| * Bump library version number.  Jason Evans  2010-10-24  1 file, -1/+1
| |
| * Add ChangeLog.  Jason Evans  2010-10-24  3 files, -18/+149
| |   Add ChangeLog, which briefly summarizes releases. Edit README and INSTALL.
| * Use madvise(..., MADV_FREE) on OS X.  Jason Evans  2010-10-24  3 files, -10/+4
| |   Use madvise(..., MADV_FREE) rather than msync(..., MS_KILLPAGES) on OS X, since it works for at least OS X 10.5 and 10.6.
| * Edit manpage.  Jason Evans  2010-10-24  1 file, -11/+19
| |   Make various minor edits to the manpage.
| * Re-format size class table.  Jason Evans  2010-10-24  1 file, -51/+17
| |   Use a more compact layout for the size class table in the man page. This avoids layout glitches due to approaching the single-page table size limit.
| * Add missing #ifdef JEMALLOC_PROF.  Jason Evans  2010-10-24  1 file, -0/+2
| |   Only call prof_boot0() if profiling is enabled.
| * Replace JEMALLOC_OPTIONS with MALLOC_CONF.  Jason Evans  2010-10-24  18 files, -967/+999
| |   Replace the single-character run-time flags with key/value pairs, which can be set via the malloc_conf global, /etc/malloc.conf, and the MALLOC_CONF environment variable. Replace the JEMALLOC_PROF_PREFIX environment variable with the "opt.prof_prefix" option. Replace umax2s() with u2s().
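The three configuration channels named above could be exercised roughly like this (the option strings are illustrative; prof and prof_prefix correspond to options mentioned in this log, and the symlink-target convention for /etc/malloc.conf is an assumption about the lookup mechanism):

```sh
# 1) Environment variable:
MALLOC_CONF="prof:true,prof_prefix:/tmp/jeprof" ./myapp

# 2) System-wide: /etc/malloc.conf as a symlink whose *target* is the
#    option string:
ln -s 'prof:true' /etc/malloc.conf

# 3) Compiled into the application, via the malloc_conf global (C):
#    const char *malloc_conf = "prof:true";
```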
| * Fix heap profiling bugs.  Jason Evans  2010-10-22  6 files, -83/+85
| |   Fix a regression due to the recent heap profiling accuracy improvements: prof_{m,re}alloc() must set the object's profiling context regardless of whether it is sampled. Fix management of the CHUNK_MAP_CLASS chunk map bits, such that all large object (re-)allocation paths correctly initialize the bits. Prior to this fix, in-place realloc() cleared the bits, resulting in incorrect reported object size from arena_salloc_demote(). After this fix the non-demoted bit pattern is all zeros (instead of all ones), which makes it easier to assure that the bits are properly set.
| * Fix a heap profiling regression.  Jason Evans  2010-10-21  3 files, -101/+110
| |   Call prof_ctx_set() in all paths through prof_{m,re}alloc(). Inline arena_prof_ctx_get().
| * Inline the fast path for heap sampling.  Jason Evans  2010-10-21  3 files, -504/+447
| |   Inline the heap sampling code that is executed for every allocation event (regardless of whether a sample is taken). Combine all prof TLS data into a single data structure, in order to reduce the TLS lookup volume.
| * Add per-thread allocation counters, and enhance heap sampling.  Jason Evans  2010-10-21  10 files, -155/+563
| |   Add the "thread.allocated" and "thread.deallocated" mallctls, which can be used to query the total number of bytes ever allocated/deallocated by the calling thread. Add s2u() and sa2u(), which can be used to compute the usable size that will result from an allocation request of a particular size/alignment. Re-factor ipalloc() to use sa2u(). Enhance the heap profiler to trigger samples based on usable size, rather than request size. This has a subtle, but important, impact on the accuracy of heap sampling. For example, prior to this change, 16- and 17-byte objects were sampled at nearly the same rate, but 17-byte objects actually consume 32 bytes each. Therefore it was possible for the sample to be somewhat skewed compared to actual memory usage of the allocated objects.
| * Fix a bug in arena_dalloc_bin_run().  Jason Evans  2010-10-19  1 file, -13/+53
| |   Fix the newsize argument that arena_dalloc_bin_run() passes to arena_run_trim_tail(). Previously, oldsize-newsize (i.e. the complement) was passed, which could erroneously cause dirty pages to be returned to the clean available runs tree. Prior to the CHUNK_MAP_ZEROED --> CHUNK_MAP_UNZEROED conversion, this bug merely caused dirty pages to be unaccounted for (and therefore never get purged), but with CHUNK_MAP_UNZEROED, it could cause dirty pages to be treated as zeroed (i.e. memory corruption).
| * Fix arena bugs.  Jason Evans  2010-10-18  1 file, -6/+19
| |   Split arena_dissociate_bin_run() out of arena_dalloc_bin_run(), so that arena_bin_malloc_hard() can avoid dissociation when recovering from losing a race. This fixes a bug introduced by a recent attempted fix. Fix a regression in arena_ralloc_large_grow() that was introduced by recent fixes.
| * Fix arena bugs.  Jason Evans  2010-10-18  1 file, -43/+58
| |   Move part of arena_bin_lower_run() into the callers, since the conditions under which it should be called differ slightly between callers. Fix arena_chunk_purge() to omit run size in the last map entry for each run it temporarily allocates.
| * Add assertions to run coalescing.  Jason Evans  2010-10-18  1 file, -7/+17
| |   Assert that the chunk map bits at the ends of the runs that participate in coalescing are self-consistent.
| * Fix numerous arena bugs.  Jason Evans  2010-10-18  2 files, -80/+172
| |   In arena_ralloc_large_grow(), update the map element for the end of the newly grown run, rather than the interior map element that was the beginning of the appended run. This is a long-standing bug, and it had the potential to cause massive corruption, but triggering it required roughly the following sequence of events:
| |     1) Large in-place growing realloc(), with left-over space in the run that followed the large object.
| |     2) Allocation of the remainder run left over from (1).
| |     3) Deallocation of the remainder run *before* deallocation of the large run, with unfortunate interior map state left over from previous run allocation/deallocation activity, such that one or more pages of allocated memory would be treated as part of the remainder run during run coalescing.
| |   In summary, this was a bad bug, but it was difficult to trigger.
| |   In arena_bin_malloc_hard(), if another thread wins the race to allocate a bin run, dispose of the spare run via arena_bin_lower_run() rather than arena_run_dalloc(), since the run has already been prepared for use as a bin run. This bug has existed since March 14, 2010: e00572b384c81bd2aba57fac32f7077a34388915 (mmap()/munmap() without arena->lock or bin->lock.)
| |   Fix bugs in arena_dalloc_bin_run(), arena_trim_head(), arena_trim_tail(), and arena_ralloc_large_grow() that could cause the CHUNK_MAP_UNZEROED map bit to become corrupted. These are all long-standing bugs, but the chances of them actually causing problems were much lower before the CHUNK_MAP_ZEROED --> CHUNK_MAP_UNZEROED conversion.
| |   Fix a large run statistics regression in arena_ralloc_large_grow() that was introduced on September 17, 2010: 8e3c3c61b5bb676a705450708e7e79698cdc9e0c (Add {,r,s,d}allocm().)
| |   Add debug code to validate that supposedly pre-zeroed memory really is.
| * Preserve CHUNK_MAP_UNZEROED for small runs.  Jason Evans  2010-10-16  1 file, -4/+8
| |   Preserve CHUNK_MAP_UNZEROED when allocating small runs, because it is possible that untouched pages will be returned to the tree of clean runs, where the CHUNK_MAP_UNZEROED flag matters. Prior to the conversion from CHUNK_MAP_ZEROED, this was already a bug, but in the worst case extra zeroing occurred. After the conversion, this bug made it possible to incorrectly treat pages as pre-zeroed.
| * Fix a regression in CHUNK_MAP_UNZEROED change.  Jason Evans  2010-10-14  1 file, -2/+3
| |   Fix a regression added by revision 3377ffa1f4f8e67bce1e36624285e5baf5f9ecef (Change CHUNK_MAP_ZEROED to CHUNK_MAP_UNZEROED.) A modified chunk->map dereference was missing the subtraction of map_bias, which caused incorrect chunk map initialization, as well as potential corruption of the first non-header page of memory within each chunk.
| * Re-organize prof-libgcc configuration.  Jason Evans  2010-10-07  1 file, -7/+10
| |   Re-organize code for --enable-prof-libgcc so that configure doesn't report both libgcc and libunwind support as being configured in. This change has no impact on how jemalloc is actually configured/built.