path: root/include/jemalloc/internal/prng.h
Commit log (most recent first):
* Header refactoring: prng module - remove from the catchall and unify. (David Goldblatt, 2017-04-24, 1 file, -0/+185)
* Break up headers into constituent parts. (David Goldblatt, 2017-01-12, 1 file, -207/+0)
  This is part of a broader change to make header files better represent the dependencies between one another (see https://github.com/jemalloc/jemalloc/issues/533). It breaks up component headers into smaller parts that can be made to have a simpler dependency graph. For the autogenerated headers (smoothstep.h and size_classes.h), no splitting was necessary, so I didn't add support to emit multiple headers.
* Rename atomic_*_{uint32,uint64,u}() to atomic_*_{u32,u64,zu}(). (Jason Evans, 2016-11-07, 1 file, -4/+4)
  This change conforms to naming conventions throughout the codebase.
* Refactor prng to not use 64-bit atomics on 32-bit platforms. (Jason Evans, 2016-11-07, 1 file, -16/+127)
  This resolves #495.
* Implement cache-oblivious support for huge size classes. (Jason Evans, 2016-06-03, 1 file, -9/+26)
* Refactor jemalloc_ffs*() into ffs_*(). (Jason Evans, 2016-02-24, 1 file, -1/+1)
  Use appropriate versions to resolve 64-to-32-bit data loss warnings.
* Fix overflow in prng_range(). (Jason Evans, 2016-02-21, 1 file, -1/+1)
  Add jemalloc_ffs64() and use it instead of jemalloc_ffsl() in prng_range(), since long is not guaranteed to be a 64-bit type.
* Refactor prng* from cpp macros into inline functions. (Jason Evans, 2016-02-20, 1 file, -24/+43)
  Remove the 32-bit variant, convert prng64() to prng_lg_range(), and add prng_range().
* Implement cache index randomization for large allocations. (Jason Evans, 2015-05-06, 1 file, -6/+6)
  Extract szad size quantization into {extent,run}_quantize(), and quantize szad run sizes to the union of valid small region run sizes and large run sizes. Refactor iteration in arena_run_first_fit() to use run_quantize{,_first,_next}(), and add support for padded large runs. For large allocations that have no specified alignment constraints, compute a pseudo-random offset from the beginning of the first backing page that is a multiple of the cache line size. Under typical configurations with 4-KiB pages and 64-byte cache lines this results in a uniform distribution among 64 page boundary offsets. Add the --disable-cache-oblivious option, primarily intended for performance testing. This resolves #13.
* Whitespace cleanups. (Jason Evans, 2014-09-05, 1 file, -1/+1)
* Normalize #define whitespace. (Jason Evans, 2013-12-09, 1 file, -2/+2)
  Consistently use a tab rather than a space following #define.
* Rename prn to prng. (Jason Evans, 2012-03-02, 1 file, -0/+60)
  Rename prn to prng so that Windows doesn't choke when trying to create a file named prn.h.