path: root/include/jemalloc/internal/size_classes.sh
Commit log (newest first; each entry shows subject, author, date, and files/lines changed)
* Header refactoring: Pull size helpers out of jemalloc module. (David Goldblatt, 2017-05-31; 1 file, -0/+1)
* Remove --with-lg-tiny-min. (Jason Evans, 2017-04-24; 1 file, -0/+2)
  This option isn't useful in practice. This partially resolves #580.
* Header refactoring: move jemalloc_internal_types.h out of the catch-all (David Goldblatt, 2017-04-19; 1 file, -2/+3)
* Implement compact rtree leaf element representation. (Jason Evans, 2017-03-23; 1 file, -0/+15)
  If a single virtual address pointer has enough unused bits to pack {szind_t, extent_t *, bool, bool}, use a single pointer-sized field in each rtree leaf element, rather than using three separate fields. This has little impact on access speed (fewer loads/stores, but more bit twiddling), except that the denser representation increases TLB effectiveness.
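  A rough sketch of the packing idea in shell arithmetic (the subject file here is a shell script). The widths used (48 significant virtual-address bits, an 8-bit szind, and a single bool in the always-zero low bit; the second bool is omitted) are illustrative assumptions, not jemalloc's exact encoding:

    lg_vaddr=48                      # assumed significant VA bits
    szind=5                          # assumed 8-bit size class index
    extent=$(( 0x7f0000001000 ))     # page-aligned, so low 12 bits are zero
    slab=1                           # one of the two bools; the other is omitted
    packed=$(( (szind << lg_vaddr) | extent | slab ))
    echo $(( (packed >> lg_vaddr) & 0xff ))             # recover szind: 5
    echo $(( packed & (((1 << lg_vaddr) - 1) & ~1) ))   # recover extent address
    echo $(( packed & 1 ))                              # recover slab: 1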
* Replace tabs following #define with spaces. (Jason Evans, 2017-01-21; 1 file, -13/+13)
  This resolves #564.
* Break up headers into constituent parts (David Goldblatt, 2017-01-12; 1 file, -19/+4)
  This is part of a broader change to make header files better represent the dependencies between one another (see https://github.com/jemalloc/jemalloc/issues/533). It breaks up component headers into smaller parts that can be made to have a simpler dependency graph. For the autogenerated headers (smoothstep.h and size_classes.h), no splitting was necessary, so I didn't add support to emit multiple headers.
* Relax NBINS constraint (max 255 --> max 256). (Jason Evans, 2016-06-06; 1 file, -4/+2)
* Rename huge to large. (Jason Evans, 2016-06-06; 1 file, -4/+4)
* Move slabs out of chunks. (Jason Evans, 2016-06-06; 1 file, -11/+11)
* Simplify run quantization. (Jason Evans, 2016-05-16; 1 file, -16/+0)
* Refactor runs_avail. (Jason Evans, 2016-05-16; 1 file, -4/+11)
  Use pszind_t size classes rather than szind_t size classes, and always reserve space for NPSIZES elements. This removes unused heaps that are not multiples of the page size, and adds (currently) unused heaps for all huge size classes, with the immediate benefit that the size of arena_t allocations is constant (no longer dependent on chunk size).
* Implement pz2ind(), pind2sz(), and psz2u(). (Jason Evans, 2016-05-13; 1 file, -5/+38)
  These compute size classes and indices similarly to size2index(), index2size(), and s2u(), respectively, but using the subset of size classes that are multiples of the page size. Note that pszind_t and szind_t are not interchangeable.
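  A hedged shell rendition of the rounding rule psz2u() implements, assuming LG_PAGE=12 and four size classes per doubling; the helper below is an illustration, not the actual C implementation:

    lg_page=12
    lg_grp_width=2                   # four classes per size doubling
    psz2u() {
      s=$1
      x=0
      t=$(( (s << 1) - 1 ))
      while [ $(( t >> (x + 1) )) -gt 0 ] ; do x=$(( x + 1 )) ; done  # x = lg_floor(2s-1)
      if [ ${x} -lt $(( lg_page + lg_grp_width + 1 )) ] ; then
        lg_d=${lg_page}              # small page counts: page-sized steps
      else
        lg_d=$(( x - lg_grp_width - 1 ))
      fi
      echo $(( (s + (1 << lg_d) - 1) & ~((1 << lg_d) - 1) ))   # round up to class
    }
    psz2u 20480    # 20 KiB -> 20480 (already a page-spaced class)
    psz2u 90112    # 88 KiB -> 98304 (rounds up to 96 KiB)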
* Initialize arena_bin_info at compile time rather than at boot time. (Jason Evans, 2016-05-13; 1 file, -7/+56)
  This resolves #370.
* Make *allocx() size class overflow behavior defined. (Jason Evans, 2016-02-25; 1 file, -2/+2)
  Limit supported size and alignment to HUGE_MAXCLASS, which in turn is now limited to be less than PTRDIFF_MAX. This resolves #278 and #295.
* Fix xallocx() bugs. (Jason Evans, 2015-09-12; 1 file, -0/+5)
  Fix xallocx() bugs related to the 'extra' parameter when specified as non-zero.
* Update a comment. (Jason Evans, 2015-06-15; 1 file, -1/+2)
* Add --with-lg-tiny-min, generalize --with-lg-quantum. (Jason Evans, 2014-10-11; 1 file, -5/+5)
* Add configure options. (Jason Evans, 2014-10-10; 1 file, -3/+9)
  Add:
    --with-lg-page
    --with-lg-page-sizes
    --with-lg-size-class-group
    --with-lg-quantum
  Get rid of STATIC_PAGE_SHIFT in favor of directly setting LG_PAGE. Fix various edge conditions exposed by the configure options.
* Normalize size classes. (Jason Evans, 2014-10-06; 1 file, -5/+10)
  Normalize size classes to use the same number of size classes per size doubling (currently hard coded to 4) across the entire range of size classes. Small size classes already used this spacing, but in order to support this change, additional small size classes now fill [4 KiB .. 16 KiB). Large size classes range from [16 KiB .. 4 MiB). Huge size classes now support non-multiples of the chunk size in order to fill (4 MiB .. 16 MiB).
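  A minimal shell sketch of that spacing rule: with four classes per doubling, the group based at 2^lg_grp advances in steps of 2^(lg_grp - 2). Variable names are illustrative, not the script's own:

    lg_grp=6                        # group base: 64 bytes
    lg_delta=$(( lg_grp - 2 ))      # 4 classes per doubling => delta = base/4
    ndelta=1
    while [ ${ndelta} -le 4 ] ; do
      echo $(( (1 << lg_grp) + ndelta * (1 << lg_delta) ))   # 80 96 112 128
      ndelta=$(( ndelta + 1 ))
    done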
* Optimize [nmd]alloc() fast paths. (Jason Evans, 2014-09-07; 1 file, -0/+3)
  Optimize [nmd]alloc() fast paths such that the (flags == 0) case is streamlined, flags decoding only happens to the minimum degree necessary, and no conditionals are repeated.
* Refactor chunk map. (Qinfan Wu, 2014-09-05; 1 file, -1/+1)
  Break the chunk map into two separate arrays in order to improve cache locality. This is related to issue #23.
* Add size class computation capability. (Jason Evans, 2014-05-29; 1 file, -58/+203)
  Add size class computation capability, currently used only as validation of the size class lookup tables. Generalize the size class spacing used for bins, for eventual use throughout the full range of allocation sizes.
* Remove support for non-prof-promote heap profiling metadata. (Jason Evans, 2014-04-11; 1 file, -3/+2)
  Make promotion of sampled small objects to large objects mandatory, so that profiling metadata can always be stored in the chunk map, rather than requiring one pointer per small region in each small-region page run. In practice the non-prof-promote code was only useful when using jemalloc to track all objects and report them as leaks at program exit. However, Valgrind is at least as good a tool for this particular use case. Furthermore, the non-prof-promote code is getting in the way of some optimizations that will make heap profiling much cheaper for the predominant use case (sampling a small representative proportion of all allocations).
* Use echo instead of cat in loops in size_classes.sh. (Mike Hommey, 2012-04-17; 1 file, -21/+11)
  This avoids fork/exec()ing in loops, as echo is a builtin, and makes size_classes.sh much faster (from > 10 s to < 0.2 s on mingw on my machine).
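  An illustrative before/after of that pattern (a simplified stand-in; the real loop bodies emit generated header lines):

    foo=42                           # stand-in value for illustration
    # Before: each loop iteration fork/exec'd cat just to print a line:
    #   cat <<EOF
    #   #define FOO ${foo}
    #   EOF
    # After: echo is a shell builtin, so the loop stays in-process:
    echo "#define FOO ${foo}"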
* Use $((...)) instead of expr. (Jason Evans, 2012-04-03; 1 file, -15/+15)
  Use $((...)) for math in size_classes.sh rather than expr, because it is much faster. This syntax is not supported by the classic Bourne shell, but all modern sh implementations support it, including bash, zsh, and ash.
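  A before/after sketch; both increments compute the same value, but expr costs a fork/exec per call while $((...)) is evaluated by the shell itself:

    i=0
    i=$(expr ${i} + 1)   # before: spawns the external expr utility
    i=$(( i + 1 ))       # after: builtin arithmetic expansion, no fork
    echo ${i}            # prints 2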
* Clean up *PAGE* macros. (Jason Evans, 2012-04-02; 1 file, -1/+1)
  s/PAGE_SHIFT/LG_PAGE/g and s/PAGE_SIZE/PAGE/g. Remove remnants of the dynamic-page-shift code. Rename the "arenas.pagesize" mallctl to "arenas.page". Remove the "arenas.chunksize" mallctl, which is redundant with "opt.lg_chunk".
* Remove bashism. (Jason Evans, 2012-03-12; 1 file, -1/+1)
  Submitted by Mike Hommey.
* Simplify small size class infrastructure. (Jason Evans, 2012-02-29; 1 file, -0/+132)
  Program-generate small size class tables for all valid combinations of LG_TINY_MIN, LG_QUANTUM, and PAGE_SHIFT. Use the appropriate table to generate all relevant data structures, and remove the distinction between tiny/quantum/cacheline/subpage bins.
  Remove --enable-dynamic-page-shift. This option didn't prove useful in practice, and it prevented optimizations.
  Add Tilera architecture support.
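  A minimal sketch of that program-generation structure; the parameter ranges below are hypothetical placeholders, while the real script covers all valid combinations:

    for lg_tmin in 3 4 ; do
      for lg_q in 3 4 ; do
        if [ ${lg_tmin} -le ${lg_q} ] ; then   # tiny min cannot exceed quantum
          for lg_p in 12 13 16 ; do
            echo "/* small size class table for lg_tmin=${lg_tmin} lg_q=${lg_q} lg_p=${lg_p} */"
          done
        fi
      done
    done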