author    | Jason Evans <jasone@canonware.com> | 2014-02-25 19:58:50 (GMT)
committer | Jason Evans <jasone@canonware.com> | 2014-02-25 20:37:25 (GMT)
commit    | 940fdfd5eef45f5425f9124e250fddde5c5c48bf (patch)
tree      | 21a3ebf0fa30c95ad97291e5eccb0201574cb83f /src
parent    | cb657e3170349a27e753cdf6316513f56550205e (diff)
Fix junk filling for mremap(2)-based huge reallocation.
If mremap(2) is used for huge reallocation, physical pages are mapped to
new virtual addresses rather than data being copied to new pages. This
bypasses the normal junk filling that would happen during allocation, so
add junk filling that is specific to this case.
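The scenario the commit describes can be reproduced outside jemalloc. The sketch below is not jemalloc's code: `grow_with_junk` is a hypothetical helper, and the only details taken from the patch are the idea of junk-filling the trailing region after an mremap(2) grow and jemalloc's 0xa5 junk byte. It assumes Linux (mremap with MREMAP_MAYMOVE is Linux-specific).

```c
/* Minimal sketch of the fix, assuming Linux mremap(2). When a mapping
 * is grown with mremap, the old physical pages are remapped rather
 * than copied, and the bytes past oldsize are fresh demand-zeroed
 * pages that never went through allocation-time junk filling, so the
 * junk fill must be redone by hand. grow_with_junk is a hypothetical
 * helper, not a jemalloc API. */
#define _GNU_SOURCE
#include <stddef.h>
#include <string.h>
#include <sys/mman.h>

static void *
grow_with_junk(void *ptr, size_t oldsize, size_t newsize)
{
	void *ret = mremap(ptr, oldsize, newsize, MREMAP_MAYMOVE);
	if (ret == MAP_FAILED)
		return NULL;
	/* Junk-fill only the newly exposed tail, mirroring the patch;
	 * 0xa5 is jemalloc's junk byte. */
	memset((char *)ret + oldsize, 0xa5, newsize - oldsize);
	return ret;
}
```

Note that only the tail `[oldsize, newsize)` needs filling: the original bytes are the caller's live data and must not be clobbered, and zero filling is unnecessary because the kernel already demand-zeroes the new pages.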
Diffstat (limited to 'src')
-rw-r--r-- | src/huge.c | 10 |
1 file changed, 10 insertions, 0 deletions
```diff
@@ -171,6 +171,16 @@ huge_ralloc(void *ptr, size_t oldsize, size_t size, size_t extra,
 				abort();
 			memcpy(ret, ptr, copysize);
 			chunk_dealloc_mmap(ptr, oldsize);
+		} else if (config_fill && zero == false && opt_junk && oldsize
+		    < newsize) {
+			/*
+			 * mremap(2) clobbers the original mapping, so
+			 * junk/zero filling is not preserved.  There is no
+			 * need to zero fill here, since any trailing
+			 * uninititialized memory is demand-zeroed by the
+			 * kernel, but junk filling must be redone.
+			 */
+			memset(ret + oldsize, 0xa5, newsize - oldsize);
 		}
 	} else
 #endif
```