* Use linear scan for small bitmaps (Dave Watson, 2016-02-26; 2 files, -3/+88)
  For small bitmaps, a linear scan of the bitmap is slightly faster than a
  tree search - bitmap_t is more compact, and there are fewer writes since we
  don't have to propagate state transitions up the tree. On x86_64 with the
  current settings, I'm seeing ~0.5-1% CPU improvement in production canaries
  with this change. The old tree code is left in since 32-bit sizes are much
  larger (and ffsl smaller), and maybe the run sizes will change in the
  future. This resolves #339.

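  A minimal sketch of the linear-scan idea, with hypothetical types and names
  rather than jemalloc's actual bitmap API: walk the words of the bitmap and
  find the first set bit with a single ffs per word, with no tree levels to
  read or update.

    #include <stddef.h>

    typedef unsigned long bitmap_word_t;
    #define BITS_PER_WORD (sizeof(bitmap_word_t) * 8)

    /* Return the index of the first set bit, or (size_t)-1 if none.
     * A flat scan touches only the words themselves; a tree search
     * would also read (and, on state changes, write) interior nodes.
     * __builtin_ffsl() is the GCC/Clang builtin behind ffsl(). */
    static size_t
    bitmap_sfu_linear(const bitmap_word_t *bits, size_t nwords)
    {
        size_t i;
        for (i = 0; i < nwords; i++) {
            if (bits[i] != 0) {
                return (i * BITS_PER_WORD) +
                    (size_t)__builtin_ffsl((long)bits[i]) - 1;
            }
        }
        return (size_t)-1;
    }
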
* Miscellaneous bitmap refactoring. (Jason Evans, 2016-02-26; 4 files, -39/+38)

* Improve test_threads performance (rustyx, 2016-02-26; 1 file, -4/+4)

* Fix MSVC project (rustyx, 2016-02-26; 2 files, -0/+4)

* Silence miscellaneous 64-to-32-bit data loss warnings. (Jason Evans, 2016-02-26; 4 files, -15/+14)
  This resolves #341.

* Remove a superfluous comment. (Jason Evans, 2016-02-26; 1 file, -1/+0)

* Add more HUGE_MAXCLASS overflow checks. (Jason Evans, 2016-02-26; 1 file, -23/+34)
  Add HUGE_MAXCLASS overflow checks that are specific to heap profiling code
  paths. This fixes test failures that were introduced by
  0c516a00c4cb28cff55ce0995f756b5aae074c9e (Make *allocx() size class
  overflow behavior defined.).

* Cast PTRDIFF_MAX to size_t before adding 1. (Jason Evans, 2016-02-26; 3 files, -10/+10)
  This fixes compilation warnings regarding integer overflow that were
  introduced by 0c516a00c4cb28cff55ce0995f756b5aae074c9e (Make *allocx()
  size class overflow behavior defined.).

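  The cast matters because PTRDIFF_MAX + 1 is evaluated in the signed type
  before any conversion happens, which is signed overflow; converting to
  size_t first makes the arithmetic well defined. A minimal illustration:

    #include <stddef.h>
    #include <stdint.h>

    size_t
    huge_limit(void)
    {
        /* Wrong: PTRDIFF_MAX + 1 is computed in the signed type, which
         * overflows (undefined behavior, and compilers warn):
         *
         *     return PTRDIFF_MAX + 1;
         *
         * Right: convert to size_t first, so the addition is unsigned
         * and well defined. */
        return (size_t)PTRDIFF_MAX + 1;
    }
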
* Make *allocx() size class overflow behavior defined. (Jason Evans, 2016-02-25; 14 files, -89/+247)
  Limit supported size and alignment to HUGE_MAXCLASS, which in turn is now
  limited to be less than PTRDIFF_MAX. This resolves #278 and #295.

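  A sketch of the kind of check this implies; the names here are
  illustrative, not jemalloc's internals: reject any request whose
  worst-case padded size would exceed the supported maximum, so size-class
  arithmetic can never wrap.

    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative bound; the commit constrains the real HUGE_MAXCLASS
     * to be less than PTRDIFF_MAX. */
    #define HUGE_MAXCLASS_EXAMPLE ((size_t)PTRDIFF_MAX)

    /* Return nonzero if (size, alignment) cannot be satisfied. */
    static int
    allocx_overflow(size_t size, size_t alignment)
    {
        if (size == 0 || size > HUGE_MAXCLASS_EXAMPLE)
            return 1;
        /* Worst-case alignment padding must not push the request
         * past the supported maximum. */
        if (alignment > 1 &&
            size > HUGE_MAXCLASS_EXAMPLE - (alignment - 1))
            return 1;
        return 0;
    }
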
* Refactor arenas array (fixes deadlock). (Jason Evans, 2016-02-25; 9 files, -211/+159)
  Refactor the arenas array, which contains pointers to all extant arenas,
  such that it starts out as a sparse array of maximum size, and use
  double-checked atomics-based reads as the basis for fast and simple
  arena_get(). Additionally, reduce arenas_lock's role such that it only
  protects against arena initialization races. These changes remove the
  possibility for arena lookups to trigger locking, which resolves at least
  one known (fork-related) deadlock. This resolves #315.

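  A minimal sketch of the double-checked, atomics-based lookup described
  above, using C11 atomics and hypothetical names (arena_init() is assumed
  defined elsewhere): the fast path is a single atomic load, and the lock
  serializes only initialization.

    #include <stdatomic.h>
    #include <pthread.h>
    #include <stddef.h>

    typedef struct arena_s arena_t;

    #define NARENAS_MAX 4096
    static _Atomic(arena_t *) arenas[NARENAS_MAX]; /* sparse, fixed max */
    static pthread_mutex_t arenas_lock = PTHREAD_MUTEX_INITIALIZER;

    arena_t *arena_init(unsigned ind); /* slow path, defined elsewhere */

    static arena_t *
    arena_get_example(unsigned ind)
    {
        /* Fast path: lock-free read. */
        arena_t *ret = atomic_load_explicit(&arenas[ind],
            memory_order_acquire);
        if (ret != NULL)
            return ret;
        /* Slow path: the lock covers only initialization races. */
        pthread_mutex_lock(&arenas_lock);
        ret = atomic_load_explicit(&arenas[ind], memory_order_acquire);
        if (ret == NULL) {
            ret = arena_init(ind);
            atomic_store_explicit(&arenas[ind], ret,
                memory_order_release);
        }
        pthread_mutex_unlock(&arenas_lock);
        return ret;
    }
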
* Fix arena_size computation. (Dave Watson, 2016-02-25; 1 file, -1/+1)
  Fix the arena_size computation in arena_new() to incorporate
  runs_avail_nclasses elements for runs_avail, rather than
  (runs_avail_nclasses - 1) elements. Since offsetof(arena_t, runs_avail)
  is used rather than sizeof(arena_t) for the first term of the
  computation, all of the runs_avail elements must be added into the second
  term. This bug was introduced (by Jason Evans) while merging pull request
  #330 as 3417a304ccde61ac1f68b436ec22c03f1d6824ec (Separate arena_avail
  trees).

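  This bug class is easy to hit with trailing arrays: when a struct is
  allocated as offsetof(type, last_member) plus element storage, all N
  elements must be counted in the second term. A toy illustration
  (hypothetical struct, not arena_t's real layout):

    #include <stddef.h>
    #include <stdlib.h>

    typedef struct {
        int header;
        double runs_avail[1]; /* trailing array, really nclasses long */
    } toy_arena_t;

    toy_arena_t *
    toy_arena_new(size_t nclasses)
    {
        /* Because the base is offsetof(..., runs_avail) rather than
         * sizeof(toy_arena_t), the second term must count all nclasses
         * elements, not (nclasses - 1). */
        size_t size = offsetof(toy_arena_t, runs_avail) +
            nclasses * sizeof(double);
        return malloc(size);
    }
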
* Fix arena_run_first_best_fit (Dave Watson, 2016-02-25; 1 file, -1/+1)
  The merge of 3417a304ccde61ac1f68b436ec22c03f1d6824ec introduced a small
  bug: first_best_fit doesn't scan through all the classes, since ind is
  offset from runs_avail_nclasses by run_avail_bias.

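  Illustratively, the failure mode is an unbiased loop bound applied to a
  biased index (toy names and a stubbed tree lookup; the real code scans
  red-black trees):

    #include <stddef.h>

    #define NCLASSES 8
    #define BIAS 3

    void *tree_first(unsigned tree_index); /* stub: first run in tree i */

    /* Scan every size class >= start, where ind carries a fixed bias. */
    void *
    first_best_fit_example(unsigned start)
    {
        unsigned ind;
        /* The buggy bound "ind < NCLASSES" would stop BIAS classes
         * early, because ind = tree index + BIAS; the bound must be
         * biased the same way as the index. */
        for (ind = start + BIAS; ind < NCLASSES + BIAS; ind++) {
            void *run = tree_first(ind - BIAS);
            if (run != NULL)
                return run;
        }
        return NULL;
    }
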
* Attempt mmap-based in-place huge reallocation. (Jason Evans, 2016-02-25; 3 files, -13/+12)
  Attempt mmap-based in-place huge reallocation by plumbing new_addr into
  chunk_alloc_mmap(). This can dramatically speed up incremental huge
  reallocation. This resolves #335.

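  A sketch of the general technique (not jemalloc's actual chunk layer):
  ask mmap() for the pages immediately after the existing mapping, and back
  out if the kernel places them anywhere else.

    #include <sys/mman.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Try to extend [addr, addr+oldsize) in place by mapping the pages
     * immediately after it. Returns 0 on success (Linux/BSD mmap). */
    static int
    try_extend_in_place(void *addr, size_t oldsize, size_t newsize)
    {
        void *hint = (void *)((uintptr_t)addr + oldsize);
        void *p = mmap(hint, newsize - oldsize, PROT_READ | PROT_WRITE,
            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
            return -1;
        if (p != hint) {
            /* The kernel placed it elsewhere; undo and fail, so the
             * caller falls back to allocate-copy-free. */
            munmap(p, newsize - oldsize);
            return -1;
        }
        return 0;
    }
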
* Document the heap profile format. (Jason Evans, 2016-02-24; 1 file, -1/+49)
  This resolves #258.

* Update manual to reflect removal of global huge object tree. (Jason Evans, 2016-02-24; 1 file, -16/+11)
  This resolves #323.

* Fix ffs_zu() compilation error on MinGW. (Jason Evans, 2016-02-24; 1 file, -3/+5)
  This regression was caused by 9f4ee6034c3ac6a8c8b5f9a0d76822fb2fd90c41
  (Refactor jemalloc_ffs*() into ffs_*().).

* Silence miscellaneous 64-to-32-bit data loss warnings. (Jason Evans, 2016-02-24; 2 files, -2/+6)

* Compile with -Wshorten-64-to-32. (Jason Evans, 2016-02-24; 1 file, -0/+1)
  This will prevent accidental creation of potential integer truncation
  bugs when developing on LP64 systems.

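  For context, the class of bug the flag catches looks like this
  (illustrative; -Wshorten-64-to-32 is the clang spelling of the warning):

    #include <stddef.h>

    unsigned
    count_items(size_t n)
    {
        /* On LP64, size_t is 64-bit and unsigned is 32-bit; this
         * implicit conversion silently truncates values >= 2^32.
         * -Wshorten-64-to-32 turns it into a warning, forcing an
         * explicit cast where truncation is actually intended. */
        return n;
    }
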
* Silence miscellaneous 64-to-32-bit data loss warnings. (Jason Evans, 2016-02-24; 13 files, -41/+49)

* Change lg_floor() return type from size_t to unsigned. (Jason Evans, 2016-02-24; 2 files, -17/+18)

* Use ssize_t for readlink() rather than int. (Jason Evans, 2016-02-24; 1 file, -1/+1)

* Make opt_narenas unsigned rather than size_t. (Jason Evans, 2016-02-24; 6 files, -14/+24)

* Make nhbins unsigned rather than size_t. (Jason Evans, 2016-02-24; 2 files, -2/+2)

* Explicitly cast mib[] elements to unsigned where appropriate. (Jason Evans, 2016-02-24; 1 file, -9/+9)

* Refactor jemalloc_ffs*() into ffs_*(). (Jason Evans, 2016-02-24; 8 files, -40/+70)
  Use appropriate versions to resolve 64-to-32-bit data loss warnings.

* Fix Windows build issues (Dmitri Smirnov, 2016-02-24; 4 files, -6/+31)
  This resolves #333.

* Collapse arena_avail_tree_* into arena_run_tree_*. (Jason Evans, 2016-02-24; 2 files, -13/+8)
  These tree types converged to become identical, yet they still had
  independently generated red-black tree implementations.

* Separate arena_avail trees (Dave Watson, 2016-02-24; 2 files, -94/+56)
  Separate run trees by index, replacing the previous quantize logic.
  Quantization by index is now performed only on insertion / removal from
  the tree, and not on node comparison, saving some CPU. This also means we
  don't have to dereference the miscelm* pointers, saving half of the
  memory loads from miscelms/mapbits that have fallen out of cache. A
  linear scan of the indices appears to be fast enough. The only cost of
  this is an extra tree array in each arena.

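  A sketch of the keyed-by-index idea, with hypothetical names and singly
  linked lists standing in for the per-class red-black trees: quantization
  happens once at insert time, and the search never dereferences run
  metadata just to compare sizes.

    #include <stddef.h>

    #define NCLASSES 64
    typedef struct run_s run_t;
    struct run_s { size_t size; run_t *next; /* intrusive link */ };

    static run_t *runs_avail[NCLASSES]; /* one container per class */

    unsigned run_quantize_index(size_t size); /* size -> class index */

    /* Quantize once, on insertion. */
    void
    avail_insert(run_t *run)
    {
        unsigned ind = run_quantize_index(run->size);
        run->next = runs_avail[ind];
        runs_avail[ind] = run;
    }

    /* Lookups scan class indices linearly; no node comparisons. */
    run_t *
    avail_first_fit(size_t size)
    {
        unsigned ind;
        for (ind = run_quantize_index(size); ind < NCLASSES; ind++) {
            if (runs_avail[ind] != NULL)
                return runs_avail[ind];
        }
        return NULL;
    }
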
* Remove rbt_nil (Dave Watson, 2016-02-24; 2 files, -109/+86)
  Since this is an intrusive tree, rbt_nil is the whole size of the node
  and can be quite large. For example, miscelm is ~100 bytes.

* Use table lookup for run_quantize_{floor,ceil}(). (Jason Evans, 2016-02-23; 4 files, -32/+90)
  Reduce run quantization overhead by generating lookup tables during
  bootstrapping, and using the tables for all subsequent run quantization.

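  A minimal sketch of the pattern, with illustrative names and sizes:
  precompute the answer for every page-multiple run size once at bootstrap,
  then each quantization is a single array index.

    #include <stddef.h>

    #define LG_PAGE 12
    #define NSIZES  128 /* illustrative: number of page-multiple sizes */

    static size_t floor_tab[NSIZES], ceil_tab[NSIZES];

    size_t run_quantize_floor_compute(size_t size); /* slow path */
    size_t run_quantize_ceil_compute(size_t size);  /* slow path */

    /* Run once during bootstrapping. */
    void
    run_quantize_boot(void)
    {
        size_t i;
        for (i = 1; i <= NSIZES; i++) {
            floor_tab[i - 1] = run_quantize_floor_compute(i << LG_PAGE);
            ceil_tab[i - 1] = run_quantize_ceil_compute(i << LG_PAGE);
        }
    }

    /* Fast path: size is a nonzero multiple of the page size. */
    size_t
    run_quantize_floor_lookup(size_t size)
    {
        return floor_tab[(size >> LG_PAGE) - 1];
    }
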
* Fix run_quantize_ceil(). (Jason Evans, 2016-02-23; 1 file, -1/+1)
  In practice this bug had limited impact (and then only by increasing
  chunk fragmentation) because run_quantize_ceil() returned correct results
  except for inputs that could only arise from aligned allocation requests
  that required more than page alignment. This bug existed in the original
  run quantization implementation, which was introduced by
  8a03cf039cd06f9fa6972711195055d865673966 (Implement cache index
  randomization for large allocations.).

* Test run quantization. (Jason Evans, 2016-02-22; 5 files, -10/+194)
  Also rename run_quantize_*() to improve clarity. These tests demonstrate
  that run_quantize_ceil() is flawed.

* Indentation style cleanup. (Jason Evans, 2016-02-22; 1 file, -13/+13)

* Refactor time_* into nstime_*. (Jason Evans, 2016-02-22; 17 files, -557/+526)
  Use a single uint64_t in nstime_t to store nanoseconds rather than using
  struct timespec. This reduces fragility around conversions between long
  and uint64_t, especially missing casts that only cause problems on 32-bit
  platforms.

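  Roughly, the representation change looks like this (a sketch; the names
  loosely mirror jemalloc's nstime module, which has many more operations):

    #include <stdint.h>

    /* Before: struct timespec { time_t tv_sec; long tv_nsec; }, whose
     * field widths vary by platform. After: one 64-bit nanosecond
     * counter, so all arithmetic stays in uint64_t. */
    typedef struct {
        uint64_t ns;
    } nstime_t;

    #define NS_PER_SEC UINT64_C(1000000000)

    static inline void
    nstime_init2(nstime_t *time, uint64_t sec, uint64_t nsec)
    {
        time->ns = sec * NS_PER_SEC + nsec;
    }

    static inline uint64_t
    nstime_ns(const nstime_t *time)
    {
        return time->ns;
    }
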
* Fix Windows-specific prof-related compilation portability issues. (Jason Evans, 2016-02-21; 2 files, -5/+16)

* Fix time_update() to compile and work on MinGW. (Jason Evans, 2016-02-21; 1 file, -6/+9)

* Remove _WIN32-specific struct timespec declaration. (Jason Evans, 2016-02-21; 1 file, -6/+0)
  struct timespec is already defined by the system (at least on MinGW).

* Fix overflow in prng_range(). (Jason Evans, 2016-02-21; 5 files, -6/+40)
  Add jemalloc_ffs64() and use it instead of jemalloc_ffsl() in
  prng_range(), since long is not guaranteed to be a 64-bit type.

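  A portable sketch of a 64-bit ffs; a real implementation would use
  compiler builtins, but the point is the explicit uint64_t operand, since
  long may be only 32 bits (e.g. Windows LLP64 and 32-bit platforms), so an
  ffsl-based version would silently truncate the PRNG state.

    #include <stdint.h>

    /* Return the 1-based index of the least significant set bit,
     * or 0 if x is zero (same convention as ffs()). */
    static inline unsigned
    ffs64(uint64_t x)
    {
        unsigned bit;
        if (x == 0)
            return 0;
        for (bit = 1; (x & 1) == 0; bit++)
            x >>= 1;
        return bit;
    }
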
* Add symbol mangling for prng_[lg_]range(). (Jason Evans, 2016-02-20; 1 file, -0/+2)

* Add MS Visual Studio 2015 support (rustyx, 2016-02-20; 10 files, -0/+1204)

* Fix warning in ipalloc (rustyx, 2016-02-20; 1 file, -2/+2)

* Prevent MSVC from optimizing away tls_callback (resolves #318) (rustyx, 2016-02-20; 1 file, -1/+3)

* getpid() fix for Win32 (rustyx, 2016-02-20; 2 files, -0/+4)

* Add CPU "pause" intrinsic for MSVCrustyx2016-02-201-6/+16
|
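  For reference, a sketch of the usual shape of such a spin-wait shim on
  x86/x86_64 (the macro name is illustrative; MSVC exposes the pause
  instruction as the _mm_pause() intrinsic, GCC/Clang via inline asm):

    #ifdef _MSC_VER
    #  include <intrin.h>
    #  define CPU_SPINWAIT _mm_pause()
    #else
    #  define CPU_SPINWAIT __asm__ volatile("pause")
    #endif
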
* Fix error "+ 2")syntax error: invalid arithmetic operator (error token is " ↵rustyx2016-02-201-1/+1
| | | | in Cygwin x64
* Detect LG_SIZEOF_PTR depending on MSVC platform target (rustyx, 2016-02-20; 2 files, -6/+19)

* Fix a typo in the ckh_search() prototype. (Christopher Ferris, 2016-02-20; 1 file, -1/+1)

* Handle unaligned keys in hash(). (Jason Evans, 2016-02-20; 2 files, -4/+33)
  Reported by Christopher Ferris <cferris@google.com>.

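  The standard fix for unaligned key reads is a memcpy-based load, which
  compilers lower to a single load on architectures that permit unaligned
  access (a sketch; the function name mirrors the block reads a hash
  function performs):

    #include <stdint.h>
    #include <string.h>

    /* Read a 32-bit block from a possibly unaligned key.
     * Dereferencing a misaligned uint32_t* is undefined behavior and
     * faults on some architectures; memcpy() is always safe and
     * compiles to a plain load where the hardware allows it. */
    static inline uint32_t
    hash_get_block_32(const void *p)
    {
        uint32_t v;
        memcpy(&v, p, sizeof(v));
        return v;
    }
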
* Increase test coverage in test_decay_ticks. (Jason Evans, 2016-02-20; 1 file, -123/+98)

* Implement decay-based unused dirty page purging. (Jason Evans, 2016-02-20; 18 files, -112/+1268)
  This is an alternative to the existing ratio-based unused dirty page
  purging, and is intended to eventually become the sole purging mechanism.

  Add mallctls:
  - opt.purge
  - opt.decay_time
  - arena.<i>.decay
  - arena.<i>.decay_time
  - arenas.decay_time
  - stats.arenas.<i>.decay_time

  This resolves #325.

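  For example, the new knobs are reached through jemalloc's usual mallctl()
  interface (a sketch using the names listed above; opt.* mallctls are
  read-only, while arenas.decay_time sets the default for new arenas):

    #include <jemalloc/jemalloc.h>
    #include <sys/types.h>
    #include <stdio.h>

    int
    main(void)
    {
        ssize_t decay_time;
        size_t sz = sizeof(decay_time);

        /* Read the configured default decay time (seconds). */
        if (mallctl("opt.decay_time", &decay_time, &sz, NULL, 0) == 0)
            printf("opt.decay_time: %zd\n", decay_time);

        /* Change the default used by subsequently created arenas. */
        decay_time = 5;
        mallctl("arenas.decay_time", NULL, NULL, &decay_time,
            sizeof(decay_time));
        return 0;
    }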