path: root/src/stats.c
Commit message | Author | Age | Files | Lines
* Handle race in stats_arena_bins_print (Qi Wang, 2017-02-01, 1 file, -2/+11)
  When multiple threads call stats_print, a race can occur because the counters are read in separate mallctl calls, and the removed assertion could fail when other operations happen between the mallctl calls. For simplicity, output "race" in the utilization field in this case.
* Replace tabs following #define with spaces. (Jason Evans, 2017-01-21, 1 file, -11/+11)
  This resolves #564.
* Update brace style. (Jason Evans, 2017-01-21, 1 file, -21/+24)
  Add braces around single-line blocks, and remove line breaks before function-opening braces. This resolves #537.
* Test JSON output of malloc_stats_print() and fix bugs. (Jason Evans, 2017-01-19, 1 file, -28/+37)
  Implement and test a JSON validation parser. Use the parser to validate JSON output from malloc_stats_print(), with a significant subset of supported output options. This resolves #551.
* Add stats for the number of bytes currently cached in tcache. (Qi Wang, 2017-01-18, 1 file, -0/+13)
* Implement arena.<i>.destroy. (Jason Evans, 2017-01-07, 1 file, -4/+33)
  Add MALLCTL_ARENAS_DESTROYED for accessing destroyed arena stats as an analogue to MALLCTL_ARENAS_ALL. This resolves #382.
* Replace the arenas.initialized mallctl with arena.<i>.initialized. (Jason Evans, 2017-01-07, 1 file, -4/+8)
* Add MALLCTL_ARENAS_ALL. (Jason Evans, 2017-01-07, 1 file, -1/+1)
  Add the MALLCTL_ARENAS_ALL cpp macro as a fixed index for use in accessing the arena.<i>.{purge,decay,dss} and stats.arenas.<i>.* mallctls, and deprecate access via the arenas.narenas index (to be removed in 6.0.0).
* Implement per arena base allocators. (Jason Evans, 2016-12-27, 1 file, -4/+23)
  Add/rename related mallctls:
  - Add stats.arenas.<i>.base.
  - Rename stats.arenas.<i>.metadata to stats.arenas.<i>.internal.
  - Add stats.arenas.<i>.resident.
  Modify the arenas.extend mallctl to take an optional (extent_hooks_t *) argument so that it is possible for all base allocations to be serviced by the specified extent hooks. This resolves #463.
* Fix JSON-mode output for !config_stats and/or !config_prof cases. (Jason Evans, 2016-12-23, 1 file, -10/+11)
  These bugs were introduced by 0ba5b9b6189e16a983d8922d8c5cb6ab421906e8 (Add "J" (JSON) support to malloc_stats_print().), which was backported as b599b32280e1142856b0b96293a71e1684b1ccfb (with the same bugs except the inapplicable "metatata" misspelling) and first released in 4.3.0.
* Uniformly cast mallctl[bymib]() oldp/newp arguments to (void *). (Jason Evans, 2016-11-15, 1 file, -3/+4)
  This avoids warnings in some cases, and is otherwise generally good hygiene.
* malloc_stats_print() fixes/cleanups. (Jason Evans, 2016-11-01, 1 file, -18/+3)
  Fix and clean up various malloc_stats_print() issues caused by 0ba5b9b6189e16a983d8922d8c5cb6ab421906e8 (Add "J" (JSON) support to malloc_stats_print().).
* Add "J" (JSON) support to malloc_stats_print(). (Jason Evans, 2016-11-01, 1 file, -313/+716)
  This resolves #474.
* Uniformly cast mallctl[bymib]() oldp/newp arguments to (void *). (Jason Evans, 2016-10-28, 1 file, -18/+26)
  This avoids warnings in some cases, and is otherwise generally good hygiene.
* Remove all vestiges of chunks. (Jason Evans, 2016-10-12, 1 file, -11/+0)
  Remove mallctls:
  - opt.lg_chunk
  - stats.cactive
  This resolves #464.
* Remove ratio-based purging. (Jason Evans, 2016-10-12, 1 file, -42/+10)
  Make decay-based purging the default (and only) mode. Remove associated mallctls:
  - opt.purge
  - opt.lg_dirty_mult
  - arena.<i>.lg_dirty_mult
  - arenas.lg_dirty_mult
  - stats.arenas.<i>.lg_dirty_mult
  This resolves #385.
* Remove obsolete stats.arenas.<i>.metadata.mapped mallctl. (Jason Evans, 2016-06-06, 1 file, -9/+4)
  Rename the stats.arenas.<i>.metadata.allocated mallctl to stats.arenas.<i>.metadata.
* Rename huge to large. (Jason Evans, 2016-06-06, 1 file, -35/+36)
* Move slabs out of chunks. (Jason Evans, 2016-06-06, 1 file, -22/+23)
* Use huge size class infrastructure for large size classes. (Jason Evans, 2016-06-06, 1 file, -78/+8)
* Remove redzone support. (Jason Evans, 2016-05-13, 1 file, -1/+0)
  This resolves #369.
* Remove quarantine support. (Jason Evans, 2016-05-13, 1 file, -1/+0)
* Remove Valgrind support. (Jason Evans, 2016-05-13, 1 file, -1/+0)
* Add the stats.retained and stats.arenas.<i>.retained statistics. (Jason Evans, 2016-05-04, 1 file, -4/+8)
  This resolves #367.
* Fix malloc_stats_print() to print correct opt.narenas value. (Jason Evans, 2016-04-12, 1 file, -1/+1)
  This regression was caused by 8f683b94a751c65af8f9fa25970ccf2917b96bb8 (Make opt_narenas unsigned rather than size_t.).
* Make opt_narenas unsigned rather than size_t. (Jason Evans, 2016-02-24, 1 file, -2/+8)
* Implement decay-based unused dirty page purging. (Jason Evans, 2016-02-20, 1 file, -18/+42)
  This is an alternative to the existing ratio-based unused dirty page purging, and is intended to eventually become the sole purging mechanism. Add mallctls:
  - opt.purge
  - opt.decay_time
  - arena.<i>.decay
  - arena.<i>.decay_time
  - arenas.decay_time
  - stats.arenas.<i>.decay_time
  This resolves #325.
* Add --with-malloc-conf. (Jason Evans, 2016-02-20, 1 file, -0/+2)
  Add --with-malloc-conf, which makes it possible to embed a default options string during configuration.
* Fix MinGW-related portability issues. (Jason Evans, 2015-07-23, 1 file, -45/+44)
  Create and use FMT* macros that are equivalent to the PRI* macros that inttypes.h defines. This allows uniform use of the Unix-specific format specifiers, e.g. "%zu", as well as avoiding Windows-specific definitions of e.g. PRIu64. Add ffs()/ffsl() support for compiling with gcc. Extract compatibility definitions of ENOENT, EINVAL, EAGAIN, EPERM, ENOMEM, and ENORANGE into include/msvc_compat/windows_extra.h and use the file for tests as well as for core jemalloc code.
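  The FMT* scheme this entry describes can be sketched in portable C. The definition below is an illustrative assumption based on the commit message (wrapping inttypes.h's PRI* specifiers under a single name), not jemalloc's verbatim code:

  ```c
  #include <assert.h>
  #include <inttypes.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  /*
   * Illustrative sketch: one macro name per integer type, so call sites
   * never need Windows-specific specifiers such as "%I64u" directly.
   */
  #define FMTu64 "%" PRIu64

  int main(void) {
      char buf[32];
      /* Adjacent string literals concatenate, so FMTu64 drops into a format. */
      snprintf(buf, sizeof(buf), "nmalloc: " FMTu64, (uint64_t)42);
      assert(strcmp(buf, "nmalloc: 42") == 0);
      printf("%s\n", buf);
      return 0;
  }
  ```

  The same pattern extends to signed and hexadecimal variants (PRId64, PRIx64, and so on).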
* Fix MinGW build warnings. (Jason Evans, 2015-07-08, 1 file, -46/+49)
  Conditionally define ENOENT, EINVAL, etc. (was unconditional). Add/use PRIzu, PRIzd, and PRIzx for use in malloc_printf() calls. gcc issued (harmless) warnings since e.g. "%zu" should be "%Iu" on Windows, and the alternative to this workaround would have been to disable the function attributes which cause gcc to look for type mismatches in formatted printing function calls.
* Add the "stats.arenas.<i>.lg_dirty_mult" mallctl. (Jason Evans, 2015-03-24, 1 file, -10/+1)
* Add the "stats.allocated" mallctl. (Jason Evans, 2015-03-24, 1 file, -3/+5)
* Fix a compile error caused by mixed declarations and code. (Qinfan Wu, 2015-03-21, 1 file, -2/+3)
* Fix lg_dirty_mult-related stats printing. (Jason Evans, 2015-03-21, 1 file, -66/+82)
  This regression was introduced by 8d6a3e8321a7767cb2ca0930b85d5d488a8cc659 (Implement dynamic per arena control over dirty page purging.). This resolves #215.
* Implement dynamic per arena control over dirty page purging. (Jason Evans, 2015-03-19, 1 file, -0/+10)
  Add mallctls:
  - arenas.lg_dirty_mult is initialized via opt.lg_dirty_mult, and can be modified to change the initial lg_dirty_mult setting for newly created arenas.
  - arena.<i>.lg_dirty_mult controls an individual arena's dirty page purging threshold, and synchronously triggers any purging that may be necessary to maintain the constraint.
  - arena.<i>.chunk.purge allows the per arena dirty page purging function to be replaced.
  This resolves #93.
* Move centralized chunk management into arenas. (Jason Evans, 2015-02-12, 1 file, -12/+0)
  Migrate all centralized data structures related to huge allocations and recyclable chunks into arena_t, so that each arena can manage huge allocations and recyclable virtual memory completely independently of other arenas.
  Add chunk node caching to arenas, in order to avoid contention on the base allocator. Use chunks_rtree to look up huge allocations rather than a red-black tree. Maintain a per arena unsorted list of huge allocations (which will be needed to enumerate huge allocations during arena reset).
  Remove the --enable-ivsalloc option, make ivsalloc() always available, and use it for size queries if --enable-debug is enabled. The only practical implications of this removal are that 1) ivsalloc() is now always available during live debugging (and the underlying radix tree is available during core-based debugging), and 2) size query validation can no longer be enabled independent of --enable-debug.
  Remove the stats.chunks.{current,total,high} mallctls, and replace their underlying statistics with simpler atomically updated counters used exclusively for gdump triggering. These statistics are no longer very useful because each arena manages chunks independently, and per arena statistics provide similar information.
  Simplify chunk synchronization code, now that base chunk allocation cannot cause recursive lock acquisition.
* Implement metadata statistics. (Jason Evans, 2015-01-24, 1 file, -3/+11)
  There are three categories of metadata:
  - Base allocations are used for bootstrap-sensitive internal allocator data structures.
  - Arena chunk headers comprise pages which track the states of the non-metadata pages.
  - Internal allocations differ from application-originated allocations in that they are for internal use, and that they are omitted from heap profiles.
  The metadata statistics comprise the metadata categories as follows:
  - stats.metadata: All metadata -- base + arena chunk headers + internal allocations.
  - stats.arenas.<i>.metadata.mapped: Arena chunk headers.
  - stats.arenas.<i>.metadata.allocated: Internal allocations. This is reported separately from the other metadata statistics because it overlaps with the allocated and active statistics, whereas the other metadata statistics do not.
  Base allocations are not reported separately, though their magnitude can be computed by subtracting the arena-specific metadata. This resolves #163.
* Use the correct type for opt.junk when printing stats. (Guilherme Goncalves, 2015-01-23, 1 file, -1/+1)
* Add small run utilization to stats output. (Jason Evans, 2014-10-15, 1 file, -16/+34)
  Add the 'util' column, which reports the proportion of available regions that are currently in use for each small size class. Small run utilization is the complement of external fragmentation. For example, utilization of 0.75 indicates that 25% of small run memory is consumed by external fragmentation; in other (more obtuse) words, 33% external fragmentation overhead. This resolves #27.
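  The arithmetic in that example can be checked with a few lines of plain C (illustrative only, not jemalloc code; the run geometry below is a made-up example):

  ```c
  #include <assert.h>
  #include <stdio.h>

  int main(void) {
      /* Hypothetical small run: 64 regions, 48 of them currently in use. */
      double nregs = 64.0, curregs = 48.0;
      double util = curregs / nregs;     /* 0.75: the 'util' column value */
      double frag = 1.0 - util;          /* 0.25: external fragmentation */
      double overhead = frag / util;     /* ~0.33: wasted space per byte of live data */
      assert(util == 0.75);
      assert(frag == 0.25);
      printf("util=%.2f frag=%.2f overhead=%.2f\n", util, frag, overhead);
      return 0;
  }
  ```

  The "33% overhead" phrasing measures fragmented memory relative to live regions (0.25/0.75), whereas "25%" measures it relative to the whole run.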
* Add per size class huge allocation statistics. (Jason Evans, 2014-10-13, 1 file, -81/+134)
  Add per size class huge allocation statistics, and normalize various stats:
  - Change the arenas.nlruns type from size_t to unsigned.
  - Add the arenas.nhchunks and arenas.hchunks.<i>.size mallctls.
  - Replace the stats.arenas.<i>.bins.<j>.allocated mallctl with stats.arenas.<i>.bins.<j>.curregs.
  - Add the stats.arenas.<i>.hchunks.<j>.nmalloc, stats.arenas.<i>.hchunks.<j>.ndalloc, stats.arenas.<i>.hchunks.<j>.nrequests, and stats.arenas.<i>.hchunks.<j>.curhchunks mallctls.
* Implement/test/fix prof-related mallctls. (Jason Evans, 2014-10-04, 1 file, -14/+19)
  Implement/test/fix the opt.prof_thread_active_init, prof.thread_active_init, and thread.prof.active mallctls. Test/fix the thread.prof.name mallctl. Refactor opt_prof_active to be read-only and move mutable state into the prof_active variable. Stop leaning on ctl-related locking for protection.
* Convert to uniform style: cond == false --> !cond (Jason Evans, 2014-10-03, 1 file, -1/+1)
* Implement per thread heap profiling. (Jason Evans, 2014-08-20, 1 file, -1/+1)
  Rename data structures (prof_thr_cnt_t-->prof_tctx_t, prof_ctx_t-->prof_gctx_t), and convert to storing a prof_tctx_t for sampled objects. Convert PROF_ALLOC_PREP() to prof_alloc_prep(), since precise backtrace depth within jemalloc functions is no longer an issue (pprof prunes irrelevant frames). Implement mallctls:
  - prof.reset implements full sample data reset, and optional change of sample interval.
  - prof.lg_sample reads the current sample interval (opt.lg_prof_sample was the permanent source of truth prior to prof.reset).
  - thread.prof.name provides naming capability for threads within heap profile dumps.
  - thread.prof.active makes it possible to activate/deactivate heap profiling for individual threads.
  Modify the heap dump files to contain per thread heap profile data. This change is incompatible with the existing pprof, which will require enhancements to read and process the enriched data.
* Refactor huge allocation to be managed by arenas. (Jason Evans, 2014-05-16, 1 file, -16/+13)
  Refactor huge allocation to be managed by arenas (though the global red-black tree of huge allocations remains for lookup during deallocation). This is the logical conclusion of recent changes that 1) made per arena dss precedence apply to huge allocation, and 2) made it possible to replace the per arena chunk allocation/deallocation functions. Remove the top level huge stats, and replace them with per arena huge stats. Normalize function names and types to *dalloc* (some were *dealloc*). Remove the --enable-mremap option. As jemalloc currently operates, this is a performance regression for some applications, but planned work to logarithmically space huge size classes should provide similar amortized performance. The motivation for this change was that mremap-based huge reallocation forced leaky abstractions that prevented refactoring.
* Normalize #define whitespace. (Jason Evans, 2013-12-09, 1 file, -4/+4)
  Consistently use a tab rather than a space following #define.
* Add arena-specific and selective dss allocation. (Jason Evans, 2012-10-13, 1 file, -2/+8)
  Add the "arenas.extend" mallctl, so that it is possible to create new arenas that are outside the set that jemalloc automatically multiplexes threads onto. Add the ALLOCM_ARENA() flag for {,r,d}allocm(), so that it is possible to explicitly allocate from a particular arena. Add the "opt.dss" mallctl, which controls the default precedence of dss allocation relative to mmap allocation. Add the "arena.<i>.dss" mallctl, which makes it possible to set the default dss precedence on a per arena or global basis. Add the "arena.<i>.purge" mallctl, which obsoletes "arenas.purge". Add the "stats.arenas.<i>.dss" mallctl.
* Don't use sizeof() on a VARIABLE_ARRAY (Mike Hommey, 2012-05-02, 1 file, -2/+2)
  In the alloca() case, this fails to be the right size.
* Allow je_malloc_message to be overridden when linking statically (Mike Hommey, 2012-05-02, 1 file, -15/+7)
  If an application wants to override je_malloc_message, it is better to define the symbol locally than to change its value in main(), which might be too late for various reasons. Due to je_malloc_message being initialized in util.c, statically linking jemalloc with an application defining je_malloc_message fails due to "multiple definition of" the symbol. Defining it without a value (like je_malloc_conf) makes it more easily overridable.
* Avoid variable length arrays and remove declarations within code (Mike Hommey, 2012-04-29, 1 file, -2/+2)
  MSVC doesn't support C99, and building as C++ to be able to use them is dangerous, as C++ and C99 are incompatible. Introduce a VARIABLE_ARRAY macro that either uses VLA when supported, or alloca() otherwise. Note that using alloca() inside loops doesn't quite work like VLAs, thus the use of VARIABLE_ARRAY there is discouraged. It might be worth investigating ways to check whether VARIABLE_ARRAY is used in such a context at runtime in debug builds, and bail out if that happens.
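  A minimal sketch of such a VARIABLE_ARRAY macro follows. The selection logic and fallback here are assumptions based on the entry above; jemalloc's actual definition may differ:

  ```c
  #include <stdio.h>

  #if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 199901L && \
      !defined(__STDC_NO_VLA__)
  /* C99 with VLA support: a true variable length array. */
  #define VARIABLE_ARRAY(type, name, count) type name[(count)]
  #else
  #include <alloca.h>
  /*
   * Fallback: stack allocation via alloca().  Its lifetime differs from a
   * VLA's (freed at function return, not block exit), which is why using
   * VARIABLE_ARRAY inside loops is discouraged.
   */
  #define VARIABLE_ARRAY(type, name, count) \
      type *name = alloca(sizeof(type) * (count))
  #endif

  static int sum_first(int n) {
      VARIABLE_ARRAY(int, vals, n);
      int total = 0;
      for (int i = 0; i < n; i++) {
          vals[i] = i + 1;
          total += vals[i];
      }
      return total;
  }

  int main(void) {
      printf("%d\n", sum_first(4)); /* 1+2+3+4 */
      return 0;
  }
  ```

  Note that in the alloca() branch, `name` is a pointer, so `sizeof(name)` yields the pointer size rather than the array size: the pitfall fixed by the "Don't use sizeof() on a VARIABLE_ARRAY" entry above.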
* Update prof defaults to match common usage. (Jason Evans, 2012-04-17, 1 file, -0/+1)
  Change the "opt.lg_prof_sample" default from 0 to 19 (1 B to 512 KiB). Change the "opt.prof_accum" default from true to false. Add the "opt.prof_final" mallctl, so that "opt.prof_prefix" need not be abused to disable final profile dumping.
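  Since lg_prof_sample is a base-2 logarithm of the sample interval in bytes, the quoted range follows directly; a quick check (plain C, illustrative only):

  ```c
  #include <assert.h>
  #include <stdio.h>

  int main(void) {
      /* lg_prof_sample is the base-2 log of the byte interval between samples. */
      size_t old_interval = (size_t)1 << 0;   /* lg = 0  -> 1 B */
      size_t new_interval = (size_t)1 << 19;  /* lg = 19 -> 524288 B = 512 KiB */
      assert(new_interval == 512 * 1024);
      printf("%zu -> %zu bytes\n", old_interval, new_interval);
      return 0;
  }
  ```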