Commit message — Author, Date [Files changed, +added/-deleted]
...
* Merge pull request #129 from daverigby/msvc_lg_floor — Jason Evans, 2014-09-29 [1 file, +15/-0]

    Use MSVC intrinsics for lg_floor

  * Use MSVC intrinsics for lg_floor — Dave Rigby, 2014-09-24 [1 file, +15/-0]

      When using MSVC, make use of its intrinsic functions (supported on
      x86, amd64 & ARM) for lg_floor.
* Mark malloc_conf as a weak symbol — Dave Rigby, 2014-09-29 [1 file, +1/-1]

    This fixes issue #113: je_malloc_conf is not respected on OS X.
* Move small run metadata into the arena chunk header. — Jason Evans, 2014-09-29 [3 files, +233/-261]

    Move small run metadata into the arena chunk header, with multiple
    expected benefits:
    - Lower run fragmentation due to reduced run sizes; runs are more
      likely to completely drain when there are fewer total regions.
    - Improved cache behavior. Prior to this change, run headers were
      always page-aligned, which put extra pressure on some CPU cache
      sets. The degree to which this was a problem was hardware
      dependent, but it likely hurt some even for the most advanced
      modern hardware.
    - Buffer overruns/underruns are less likely to corrupt allocator
      metadata.
    - Size classes between 4 KiB and 16 KiB become reasonable to support
      without any special handling, and the runs are small enough that
      dirty unused pages aren't a significant concern.
* Implement compile-time bitmap size computation. — Jason Evans, 2014-09-28 [3 files, +54/-26]

* Fix profile dumping race. — Jason Evans, 2014-09-25 [2 files, +10/-1]

    Fix a race that caused a non-critical assertion failure. To trigger
    the race, a thread had to be part way through initializing a new
    sample, such that it was discoverable by the dumping thread, but not
    yet linked into its gctx by the time a later dump phase would
    normally have reset its state to 'nominal'.

    Additionally, lock access to the state field during modification to
    transition to the dumping state. It's not apparent that this
    oversight could have caused an actual problem due to outer locking
    that protects the dumping machinery, but the added locking
    pedantically follows the stated locking protocol for the state
    field.
* Add instructions for installing from non-packaged sources. — Jason Evans, 2014-09-23 [1 file, +15/-2]

* Convert all tsd variables to reside in a single tsd structure. — Jason Evans, 2014-09-23 [22 files, +1027/-935]

* Ignore jemalloc.pc. — Jason Evans, 2014-09-22 [1 file, +2/-0]

* Generate a pkg-config file — Nick White, 2014-09-19 [3 files, +23/-1]
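A generated jemalloc.pc typically looks something like the following sketch (prefix, version, and URL are illustrative placeholders filled in by configure, not values from this commit):

```
prefix=/usr/local
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
includedir=${prefix}/include

Name: jemalloc
Description: A general purpose malloc(3) implementation
Version: 3.6.0
Cflags: -I${includedir}
Libs: -L${libdir} -ljemalloc
```

Consumers can then build against it with `pkg-config --cflags --libs jemalloc`.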
* fix tls_model autoconf test — Daniel Micay, 2014-09-16 [1 file, +1/-1]

    It has an unused variable, so it was always failing (at least with
    gcc 4.9.1). Alternatively, the `-Werror` flag could be removed if it
    isn't strictly necessary.
* Fixed iOS build after OR1 changes — Valerii Hiora, 2014-09-12 [1 file, +3/-0]

* Fix prof regressions. — Jason Evans, 2014-09-12 [1 file, +22/-1]

    Don't use atomic_add_uint64(), because it isn't available on 32-bit
    platforms.

    Fix forking support functions to manage all prof-related mutexes.

    These regressions were introduced by
    602c8e0971160e4b85b08b16cf8a2375aa24bc04 (Implement per thread heap
    profiling.), which did not make it into any releases prior to these
    fixes.

* Fix irallocx_prof() sample logic. — Jason Evans, 2014-09-12 [1 file, +3/-3]

    Fix irallocx_prof() sample logic to only update the threshold
    counter after it knows what size the allocation ended up being.
    This regression was caused by
    6e73dc194ee9682d3eacaf725a989f04629718f7 (Fix a profile sampling
    race.), which did not make it into any releases prior to this fix.

* Apply likely()/unlikely() to allocation/deallocation fast paths. — Jason Evans, 2014-09-12 [8 files, +138/-129]

* Fix mallocx() to always honor MALLOCX_ARENA() when profiling. — Jason Evans, 2014-09-11 [1 file, +1/-2]

* mark some conditions as unlikely — Daniel Micay, 2014-09-11 [4 files, +31/-31]

    Conditions marked as unlikely:
    - assertion failure
    - malloc_init failure
    - malloc not already initialized (in malloc_init)
    - running in valgrind
    - thread cache disabled at runtime

    Clang and GCC already consider a comparison with NULL or -1 to be
    cold, so many branches (out-of-memory) are already correctly
    considered as cold and marking them is not important.
* add likely / unlikely macros — Daniel Micay, 2014-09-10 [1 file, +8/-0]
* Add sdallocx() to list of functions to prune in pprof. — Jason Evans, 2014-09-10 [1 file, +1/-0]

* Fix a profile sampling race. — Jason Evans, 2014-09-10 [4 files, +109/-73]

    Fix a profile sampling race that was due to preparing to sample, yet
    doing nothing to assure that the context remains valid until the
    stats are updated.

    These regressions were caused by
    602c8e0971160e4b85b08b16cf8a2375aa24bc04 (Implement per thread heap
    profiling.), which did not make it into any releases prior to these
    fixes.
* Fix prof_tdata_get()-related regressions. — Jason Evans, 2014-09-09 [2 files, +26/-30]

    Fix prof_tdata_get() to avoid dereferencing an invalid tdata pointer
    (when it's PROF_TDATA_STATE_{REINCARNATED,PURGATORY}).

    Fix prof_tdata_get() callers to check for invalid results besides
    NULL (PROF_TDATA_STATE_{REINCARNATED,PURGATORY}).

    These regressions were caused by
    602c8e0971160e4b85b08b16cf8a2375aa24bc04 (Implement per thread heap
    profiling.), which did not make it into any releases prior to these
    fixes.

* Fix threaded heap profile bug in pprof. — Jason Evans, 2014-09-09 [1 file, +1/-1]

    Fix ReadThreadedHeapProfile to pass the correct parameters to
    AdjustSamples.

* Fix sdallocx() assertion. — Jason Evans, 2014-09-09 [1 file, +18/-16]

    Refactor sdallocx() and nallocx() to share inallocx(), and fix an
    sdallocx() assertion to check usize rather than size.

* Support threaded heap profiles in pprof — Bert Maher, 2014-09-09 [1 file, +251/-126]

    - Add a --thread N option to select the profile for thread N
      (otherwise, all threads will be printed)
    - The $profile map now has a {threads} element that is a map from
      thread id to a profile that has the same format as the {profile}
      element
    - Refactor ReadHeapProfile into smaller components and use them to
      implement ReadThreadedHeapProfile

* Merge pull request #115 from thestinger/isqalloct — Jason Evans, 2014-09-09 [1 file, +1/-1]

    fix isqalloct (should call isdalloct)

  * fix isqalloct (should call isdalloct) — Daniel Micay, 2014-09-09 [1 file, +1/-1]
* Add support for sized deallocation. — Daniel Micay, 2014-09-09 [10 files, +201/-5]

    This adds a new `sdallocx` function to the external API, allowing
    the size to be passed by the caller. It avoids some extra reads in
    the thread cache fast path. In the case where stats are enabled,
    this avoids the work of calculating the size from the pointer.

    An assertion validates the size that's passed in, so enabling
    debugging will allow users of the API to debug cases where an
    incorrect size is passed in.

    The performance win for a contrived microbenchmark doing an
    allocation and immediately freeing it is ~10%. It may have a
    different impact on a real workload.

    Closes #28
* Add relevant function attributes to [msn]allocx(). — Jason Evans, 2014-09-08 [2 files, +15/-20]

* Thwart optimization of free(malloc(1)) in microbench. — Jason Evans, 2014-09-08 [1 file, +25/-19]

* Merge pull request #114 from thestinger/timer — Jason Evans, 2014-09-08 [3 files, +11/-11]

    avoid conflict with the POSIX timer_t type

  * avoid conflict with the POSIX timer_t type — Daniel Micay, 2014-09-08 [3 files, +11/-11]

      It hits a compilation error with glibc 2.19 without a rename.
* Add microbench tests. — Jason Evans, 2014-09-08 [2 files, +143/-1]

* Add a simple timer implementation for use in benchmarking. — Jason Evans, 2014-09-08 [4 files, +75/-1]

* Move typedefs from jemalloc_protos.h.in to jemalloc_typedefs.h.in. — Jason Evans, 2014-09-08 [5 files, +7/-4]

    Move typedefs from jemalloc_protos.h.in to jemalloc_typedefs.h.in,
    so that typedefs aren't redefined when compiling stress tests.

* Optimize [nmd]alloc() fast paths. — Jason Evans, 2014-09-07 [7 files, +172/-131]

    Optimize [nmd]alloc() fast paths such that the (flags == 0) case is
    streamlined, flags decoding only happens to the minimum degree
    necessary, and no conditionals are repeated.

* Whitespace cleanups. — Jason Evans, 2014-09-05 [4 files, +21/-21]
* Refactor chunk map. — Qinfan Wu, 2014-09-05 [7 files, +186/-149]

    Break the chunk map into two separate arrays, in order to improve
    cache locality. This is related to issue #23.
* Disable autom4te cache. — Jason Evans, 2014-09-03 [3 files, +3/-3]

* Make VERSION generation more robust. — Jason Evans, 2014-09-02 [1 file, +26/-4]

    Relax the "are we in a git repo?" check to succeed even if the top
    level jemalloc directory is not at the top level of the git repo.

    Add git tag filtering so that only version triplets match when
    generating VERSION.

    Add fallback bogus VERSION creation, so that in the worst case,
    rather than generating empty values for e.g.
    JEMALLOC_VERSION_MAJOR, configuration ends up generating useless
    constants.

* Merge pull request #108 from wqfish/dev — Jason Evans, 2014-08-27 [1 file, +0/-4]

    Remove junk filling in tcache_bin_flush_small().

  * Remove junk filling in tcache_bin_flush_small(). — Qinfan Wu, 2014-08-27 [1 file, +0/-4]

      Junk filling is done in arena_dalloc_bin_locked(), so
      arena_alloc_junk_small() is redundant. Also, we should use
      arena_dalloc_junk_small() instead of arena_alloc_junk_small().

* Test for availability of malloc hooks via autoconf — Sara Golemon, 2014-08-22 [3 files, +40/-1]

    __*_hook() is glibc, but on at least one glibc platform (homebrew),
    the __GLIBC__ define isn't set correctly and we miss being able to
    use these hooks. Do a feature test for it during configuration so
    that we enable it anywhere the hooks are actually available.

* Implement per thread heap profiling. — Jason Evans, 2014-08-20 [11 files, +1217/-706]

    Rename data structures (prof_thr_cnt_t --> prof_tctx_t,
    prof_ctx_t --> prof_gctx_t), and convert to storing a prof_tctx_t
    for sampled objects.

    Convert PROF_ALLOC_PREP() to prof_alloc_prep(), since precise
    backtrace depth within jemalloc functions is no longer an issue
    (pprof prunes irrelevant frames).

    Implement mallctls:
    - prof.reset implements full sample data reset, and optional change
      of sample interval.
    - prof.lg_sample reads the current sample interval
      (opt.lg_prof_sample was the permanent source of truth prior to
      prof.reset).
    - thread.prof.name provides naming capability for threads within
      heap profile dumps.
    - thread.prof.active makes it possible to activate/deactivate heap
      profiling for individual threads.

    Modify the heap dump files to contain per thread heap profile data.
    This change is incompatible with the existing pprof, which will
    require enhancements to read and process the enriched data.
* Add rb_empty(). — Jason Evans, 2014-08-20 [2 files, +16/-0]

* Dump heap profile backtraces in a stable order. — Jason Evans, 2014-08-20 [2 files, +119/-62]

    Also iterate over per thread stats in a stable order, which
    prepares the way for stable ordering of per thread heap profile
    dumps.

* Directly embed prof_ctx_t's bt. — Jason Evans, 2014-08-20 [2 files, +26/-56]

* Convert prof_tdata_t's bt2cnt to a comprehensive map. — Jason Evans, 2014-08-20 [2 files, +25/-66]

    Treat prof_tdata_t's bt2cnt as a comprehensive map of the thread's
    extant allocation samples (do not limit the total number of
    entries). This helps prepare the way for per thread heap profiling.

* Fix arena.<i>.dss mallctl to handle read-only calls. — Jason Evans, 2014-08-15 [2 files, +42/-23]

* Fix and refactor runs_dirty-based purging. — Jason Evans, 2014-08-14 [2 files, +91/-127]

    Fix runs_dirty-based purging to also purge dirty pages in the spare
    chunk.

    Refactor runs_dirty manipulation into arena_dirty_{insert,remove}(),
    and move the arena->ndirty accounting into those functions.

    Remove the u.ql_link field from arena_chunk_map_t, and get rid of
    the enclosing union for u.rb_link, since only rb_link remains.
    Remove the ndirty field from arena_chunk_t.

* arena->npurgatory is no longer needed since we drop arena's lock after stashing all the purgeable runs — Qinfan Wu, 2014-08-12 [2 files, +3/-20]