author    Jason Evans <jasone@canonware.com>  2013-12-16 05:49:40 (GMT)
committer Jason Evans <jasone@canonware.com>  2013-12-16 05:57:09 (GMT)
commit    6e62984ef6ca4312cf0a2e49ea2cc38feb94175b (patch)
tree      7fbee95e1bdd18181509c331225b2390561898b9 /include
parent    665769357cd77b74e00a146f196fff19243b33c4 (diff)
Don't junk-fill reallocations unless usize changes.
Don't junk fill reallocations for which the request size is less than the current usable size, but not enough smaller to cause a size class change. Unlike malloc()/calloc()/realloc(), *allocx() contractually treats the full usize as the allocation, so a caller can ask for zeroed memory via mallocx() and a series of rallocx() calls that all specify MALLOCX_ZERO, and be assured that all newly allocated bytes will be zeroed and made available to the application without danger of allocator mutation until the size class decreases enough to cause usize reduction.
Diffstat (limited to 'include')
-rw-r--r--  include/jemalloc/internal/tcache.h | 1 +
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/include/jemalloc/internal/tcache.h b/include/jemalloc/internal/tcache.h
index d4eecde..c3d4b58 100644
--- a/include/jemalloc/internal/tcache.h
+++ b/include/jemalloc/internal/tcache.h
@@ -297,6 +297,7 @@ tcache_alloc_small(tcache_t *tcache, size_t size, bool zero)
binind = SMALL_SIZE2BIN(size);
assert(binind < NBINS);
tbin = &tcache->tbins[binind];
+ size = arena_bin_info[binind].reg_size;
ret = tcache_alloc_easy(tbin);
if (ret == NULL) {
ret = tcache_alloc_small_hard(tcache, tbin, binind);