* Implement metadata statistics. (Jason Evans, 2015-01-24; 18 files changed, -167/+356)

  There are three categories of metadata:

  - Base allocations are used for bootstrap-sensitive internal allocator
    data structures.
  - Arena chunk headers comprise pages which track the states of the
    non-metadata pages.
  - Internal allocations differ from application-originated allocations
    in that they are for internal use, and that they are omitted from
    heap profiles.

  The metadata statistics comprise the metadata categories as follows:

  - stats.metadata: All metadata -- base + arena chunk headers + internal
    allocations.
  - stats.arenas.<i>.metadata.mapped: Arena chunk headers.
  - stats.arenas.<i>.metadata.allocated: Internal allocations. This is
    reported separately from the other metadata statistics because it
    overlaps with the allocated and active statistics, whereas the other
    metadata statistics do not.

  Base allocations are not reported separately, though their magnitude
  can be computed by subtracting the arena-specific metadata.

  This resolves #163.
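  A minimal sketch (not part of the commit) of reading the new top-level
  statistic through jemalloc's mallctl() interface:

      #include <stdio.h>
      #include <stdint.h>
      #include <jemalloc/jemalloc.h>

      int main(void) {
          uint64_t epoch = 1;
          size_t metadata, sz = sizeof(metadata);

          /* Refresh jemalloc's stats snapshot before reading it. */
          mallctl("epoch", NULL, NULL, &epoch, sizeof(epoch));
          if (mallctl("stats.metadata", &metadata, &sz, NULL, 0) == 0)
              printf("total metadata: %zu bytes\n", metadata);
          return 0;
      }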
* Use the correct type for opt.junk when printing stats. (Guilherme Goncalves, 2015-01-23; 1 file changed, -1/+1)
* Implement the jemalloc-config script. (Jason Evans, 2015-01-23; 4 files changed, -4/+89)

  This resolves #133.
* Update copyright dates for 2015. (Jason Evans, 2015-01-23; 1 file changed, -2/+2)
* Document under what circumstances in-place resizing succeeds. (Jason Evans, 2015-01-22; 1 file changed, -0/+16)

  This resolves #100.
* Refactor bootstrapping to delay tsd initialization. (Jason Evans, 2015-01-22; 6 files changed, -125/+203)

  Refactor bootstrapping to delay tsd initialization, primarily to
  support integration with FreeBSD's libc.

  Refactor a0*() for internal-only use, and add the
  bootstrap_{malloc,calloc,free}() API for use by FreeBSD's libc. This
  separation limits use of the a0*() functions to metadata allocation,
  which doesn't require malloc/calloc/free API compatibility.

  This resolves #170.
* Fix arenas_cache_cleanup(). (Jason Evans, 2015-01-22; 1 file changed, -1/+1)

  Fix arenas_cache_cleanup() to check whether arenas_cache is NULL before
  deallocation, rather than checking arenas.
* Add missing symbols to private_symbols.txt. (Abhishek Kulkarni, 2015-01-21; 1 file changed, -0/+4)

  This resolves #185.
* Fix OOM handling in memalign() and valloc(). (Jason Evans, 2015-01-17; 1 file changed, -2/+4)

  Fix memalign() and valloc() to heed imemalign()'s return value.

  Reported by Kurt Wampler.
* Fix an infinite recursion bug related to a0/tsd bootstrapping. (Jason Evans, 2015-01-15; 1 file changed, -1/+3)

  This resolves #184.
* Add an isblank() definition for MSVC < 2013. (Guilherme Goncalves, 2015-01-09; 1 file changed, -0/+8)
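  A plausible sketch of such a shim (assumed; the commit's exact code may
  differ). MSVC prior to Visual Studio 2013 (_MSC_VER < 1800) ships a C
  library without isblank():

      #if defined(_MSC_VER) && _MSC_VER < 1800
      static int
      isblank(int c)
      {
          return (c == '\t' || c == ' ');
      }
      #endif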
* Make mixed declarations an error. (Mike Hommey, 2014-12-18; 1 file changed, -0/+1)

  It often happens that code changes introduce mixed declarations, which
  then break building with Visual Studio. Since the code style is to not
  use mixed declarations anyway, we might as well enforce it with
  -Werror.
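  A hypothetical example of the pattern being banned (the exact flag
  spelling is an assumption; the commit only says -Werror, most likely
  GCC's -Wdeclaration-after-statement promoted to an error):

      int sum(const int *v, int n) {
          int total = 0;
          int i;               /* OK: declarations at the top of the block */
          for (i = 0; i < n; i++)
              total += v[i];
          int copy = total;    /* mixed declaration: a declaration after a
                                * statement, rejected under the new flag */
          return copy;
      }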
* Move variable declaration to the top of its block for MSVC compatibility. (Guilherme Goncalves, 2014-12-17; 1 file changed, -2/+2)
* [pprof] Produce global profile unless thread-local profile requested. (Bert Maher, 2014-12-15; 1 file changed, -2/+3)

  Currently pprof will print output for all threads if a single thread is
  not specified, but this doesn't play well with many output formats
  (e.g., any of the dot-based formats). Instead, default to printing just
  the overall profile when no specific thread is requested.

  This resolves #157.
* Introduce two new modes of junk filling: "alloc" and "free". (Guilherme Goncalves, 2014-12-15; 14 files changed, -71/+139)

  In addition to true/false, opt.junk can now be either "alloc" or
  "free", giving applications the possibility of junking memory only on
  allocation or deallocation.

  This resolves #172.
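  A minimal sketch of selecting the new modes. An application can embed
  the setting in jemalloc's compile-time malloc_conf string (the
  MALLOC_CONF environment variable is the run-time equivalent); the
  option values below are the ones the commit message documents:

      #include <jemalloc/jemalloc.h>

      /* Junk-fill only on allocation; "junk:free" junk-fills only on
       * deallocation, and "junk:true" restores the old both-ways
       * behavior. */
      const char *malloc_conf = "junk:alloc";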
* Ignore MALLOC_CONF in set{uid,gid,cap} binaries. (Daniel Micay, 2014-12-14; 3 files changed, -1/+50)

  This eliminates the malloc tunables as tools for an attacker.

  Closes #173.
* Style and spelling fixes. (Jason Evans, 2014-12-09; 20 files changed, -40/+36)
* Add a C11 atomics-based implementation of the atomic.h API. (Chih-hung Hsieh, 2014-12-07; 4 files changed, -0/+56)
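  A general sketch of the technique (illustrative; the commit's exact
  code may differ): building one of jemalloc's atomic.h operations, a
  fetch-add that returns the post-add value, on top of C11 <stdatomic.h>:

      #include <stdatomic.h>
      #include <stdint.h>

      static inline uint64_t
      atomic_add_uint64(uint64_t *p, uint64_t x)
      {
          volatile atomic_uint_least64_t *a =
              (volatile atomic_uint_least64_t *)p;
          /* atomic_fetch_add() returns the old value, so add x to match
           * the post-add return convention. */
          return atomic_fetch_add(a, x) + x;
      }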
* Style fixes. (Jason Evans, 2014-12-06; 1 file changed, -2/+2)
* Fix OOM cleanup in huge_palloc(). (Jason Evans, 2014-12-05; 1 file changed, -6/+2)

  Fix OOM cleanup in huge_palloc() to call idalloct() rather than
  base_node_dalloc(). This bug is a result of incomplete refactoring, and
  has no impact other than leaking memory during OOM.
* Fix test_stats_arenas_bins for 32-bit builds. (Yuriy Kaminskiy, 2014-12-03; 1 file changed, -0/+1)
* teach the dss chunk allocator to handle new_addr (Daniel Micay, 2014-11-29; 3 files changed, -9/+17)

  This provides in-place expansion of huge allocations when the end of
  the allocation is at the end of the sbrk heap. There's already the
  ability to extend in-place via recycled chunks, but this handles the
  initial growth of the heap via repeated vector / string reallocations.

  A possible future extension could allow realloc to go from the
  following:

      | huge allocation | recycled chunks |
                                          ^ dss_end

  To a larger allocation built from recycled *and* new chunks:

      |          huge allocation          |
                                          ^ dss_end

  Doing that would involve teaching the chunk recycling code to request
  new chunks to satisfy the request. The chunk_dss code wouldn't require
  any further changes.

      #include <stdlib.h>

      int main(void) {
          size_t chunk = 4 * 1024 * 1024;
          void *ptr = NULL;
          for (size_t size = chunk; size < chunk * 128; size *= 2) {
              ptr = realloc(ptr, size);
              if (!ptr) return 1;
          }
      }

  Before:
      dss:secondary: 0.083s
      dss:primary: 0.083s

  After:
      dss:secondary: 0.083s
      dss:primary: 0.003s

  The dss heap grows in the upwards direction, so the oldest chunks are
  at the low addresses and they are used first. Linux prefers to grow the
  mmap heap downwards, so the trick will not work in the *current* mmap
  chunk allocator, as a huge allocation will only be at the top of the
  heap in a contrived case.
* Remove extra definition of je_tsd_boot on win32. (Guilherme Goncalves, 2014-11-18; 1 file changed, -6/+0)
* Fix more pointer arithmetic undefined behavior. (Jason Evans, 2014-11-17; 1 file changed, -4/+4)

  Reported by Guilherme Gonçalves.

  This resolves #166.
* Fix pointer arithmetic undefined behavior. (Jason Evans, 2014-11-17; 2 files changed, -17/+31)

  Reported by Denis Denisov.
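  A generic sketch of this class of fix (illustrative; not the commit's
  diff). Pointer arithmetic that can step outside the underlying object
  is undefined behavior in C even if the result is never dereferenced,
  whereas arithmetic on uintptr_t is merely implementation-defined:

      #include <stdint.h>

      /* UB-prone when the offset can leave the object:
       *     void *q = (char *)p + offset;
       * Defined alternative used by this style of fix: */
      static inline void *
      ptr_offset(void *p, size_t offset)
      {
          return (void *)((uintptr_t)p + offset);
      }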
* Make quarantine_init() static. (Jason Evans, 2014-11-07; 3 files changed, -4/+2)
* Fix two quarantine regressions. (Jason Evans, 2014-11-05; 3 files changed, -2/+26)

  Fix quarantine to actually update tsd when expanding, and to avoid
  double initialization (leaking the first quarantine) due to recursive
  initialization.

  This resolves #161.
* Disable arena_dirty_count() validation. (Jason Evans, 2014-11-01; 1 file changed, -2/+6)
* Don't dereference NULL tdata in prof_{enter,leave}(). (Jason Evans, 2014-11-01; 1 file changed, -13/+18)

  It is possible for the thread's tdata to be NULL late during thread
  destruction, so take care not to dereference a NULL pointer in such
  cases.
* Fix arena_sdalloc() to use promoted size (second attempt). (Jason Evans, 2014-11-01; 1 file changed, -8/+11)

  Unlike the preceding attempted fix, this version avoids the potential
  for converting an invalid bin index to a size class.
* Fix arena_sdalloc() to use promoted size. (Jason Evans, 2014-11-01; 1 file changed, -7/+15)
* rm unused arena wrangling from xallocx (Daniel Micay, 2014-10-31; 1 file changed, -16/+8)

  xallocx has no use for the arena_t since, unlike rallocx, it never
  makes a new memory allocation. It's just an unused parameter in
  ixalloc_helper.
* Miscellaneous cleanups. (Jason Evans, 2014-10-31; 3 files changed, -10/+10)
* avoid redundant chunk header reads (Daniel Micay, 2014-10-31; 2 files changed, -45/+42)

  * use sized deallocation in iralloct_realign
  * iralloc and ixalloc always need the old size, so pass it in from the
    caller, where it's often already calculated
* mark huge allocations as unlikely (Daniel Micay, 2014-10-31; 4 files changed, -16/+16)

  This cleans up the fast path a bit more by moving away more code.
* Fix prof_{enter,leave}() calls to pass tdata_self. (Jason Evans, 2014-10-30; 1 file changed, -19/+24)
* Use JEMALLOC_INLINE_C everywhere it's appropriate. (Jason Evans, 2014-10-30; 4 files changed, -15/+15)
* Merge pull request #154 from guilherme-pg/implicit-int. (Jason Evans, 2014-10-20; 1 file changed, -1/+1)

  Fix variable declaration with no type in the configure script.

  * Fix variable declaration with no type in the configure script. (Guilherme Goncalves, 2014-10-20; 1 file changed, -1/+1)
* Merge pull request #151 from thestinger/ralloc. (Jason Evans, 2014-10-16; 2 files changed, -2/+2)

  use sized deallocation internally for ralloc

  * use sized deallocation internally for ralloc (Daniel Micay, 2014-10-16; 2 files changed, -2/+2)

    The size of the source allocation is known at this point, so reading
    the chunk header can be avoided for the small size class fast path.
    This is not very useful right now, but it provides a significant
    performance boost with an alternate ralloc entry point taking the old
    size.
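    The public analogue of this internal change is sdallocx(); a minimal
    usage sketch (not from the commit) of sized deallocation letting
    jemalloc skip looking the size up in the chunk header:

        #include <jemalloc/jemalloc.h>

        int main(void) {
            void *p = mallocx(64, 0);
            if (p != NULL)
                sdallocx(p, 64, 0); /* size matches the original request */
            return 0;
        }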
* Initialize chunks_mtx for all configurations. (Jason Evans, 2014-10-16; 1 file changed, -4/+3)

  This resolves #150.
* Purge/zero sub-chunk huge allocations as necessary. (Jason Evans, 2014-10-16; 1 file changed, -24/+51)

  Purge trailing pages during shrinking huge reallocation when the
  resulting size is not a multiple of the chunk size. Similarly, zero
  pages if necessary during growing huge reallocation when the resulting
  size is not a multiple of the chunk size.
* Add small run utilization to stats output. (Jason Evans, 2014-10-15; 1 file changed, -16/+34)

  Add the 'util' column, which reports the proportion of available
  regions that are currently in use for each small size class. Small run
  utilization is the complement of external fragmentation. For example,
  utilization of 0.75 indicates that 25% of small run memory is consumed
  by external fragmentation; measured against the memory actually in
  use, that is a 33% external fragmentation overhead (0.25/0.75 ~= 0.33).

  This resolves #27.
* Thwart compiler optimizations. (Jason Evans, 2014-10-15; 1 file changed, -0/+12)
* Fix line wrapping. (Jason Evans, 2014-10-15; 1 file changed, -10/+10)
* Fix huge allocation statistics. (Jason Evans, 2014-10-15; 5 files changed, -160/+252)
* Update size class documentation. (Jason Evans, 2014-10-15; 1 file changed, -26/+84)
* Add per size class huge allocation statistics. (Jason Evans, 2014-10-13; 10 files changed, -338/+724)

  Add per size class huge allocation statistics, and normalize various
  stats:

  - Change the arenas.nlruns type from size_t to unsigned.
  - Add the arenas.nhchunks and arenas.hchunks.<i>.size mallctls.
  - Replace the stats.arenas.<i>.bins.<j>.allocated mallctl with
    stats.arenas.<i>.bins.<j>.curregs.
  - Add the stats.arenas.<i>.hchunks.<j>.nmalloc,
    stats.arenas.<i>.hchunks.<j>.ndalloc,
    stats.arenas.<i>.hchunks.<j>.nrequests, and
    stats.arenas.<i>.hchunks.<j>.curhchunks mallctls.
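  A sketch of reading one of the new indexed statistics (standard mallctl
  MIB usage; the statistic name is taken from the commit, and arena and
  size class indices of 0 are arbitrary):

      #include <stdio.h>
      #include <stdint.h>
      #include <jemalloc/jemalloc.h>

      int main(void) {
          size_t mib[6], miblen = sizeof(mib) / sizeof(mib[0]);
          uint64_t nmalloc;
          size_t sz = sizeof(nmalloc);

          /* Translate the name once, then fill in the <i> and <j>
           * components. */
          if (mallctlnametomib("stats.arenas.0.hchunks.0.nmalloc", mib,
              &miblen) != 0)
              return 1;
          mib[2] = 0; /* arena index <i> */
          mib[4] = 0; /* huge size class index <j> */
          if (mallctlbymib(mib, miblen, &nmalloc, &sz, NULL, 0) == 0)
              printf("nmalloc: %llu\n", (unsigned long long)nmalloc);
          return 0;
      }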
* Fix a prof_tctx_t/prof_tdata_t cleanup race. (Jason Evans, 2014-10-12; 2 files changed, -5/+11)

  Fix a prof_tctx_t/prof_tdata_t cleanup race by storing a copy of
  thr_uid in prof_tctx_t, so that the associated tdata need not be
  present during tctx teardown.