| author | Jason Evans <jasone@canonware.com> | 2010-10-18 00:51:37 (GMT) |
|---|---|---|
| committer | Jason Evans <jasone@canonware.com> | 2010-10-18 00:52:14 (GMT) |
| commit | 940a2e02b27b264cc92e8ecbf186a711ce05ad04 | |
| tree | 3fc3c27dc5d95816eee38d6ab3f5330f1527de3c | |
| parent | 397e5111b5efd49f61f73c1bad0375c7885a6128 | |
Fix numerous arena bugs.
In arena_ralloc_large_grow(), update the map element for the end of the
newly grown run, rather than the interior map element that was the
beginning of the appended run. This is a long-standing bug, and it had
the potential to cause massive corruption, but triggering it required
roughly the following sequence of events:
1) Large in-place growing realloc(), with left-over space in the run
that followed the large object.
2) Allocation of the remainder run left over from (1).
3) Deallocation of the remainder run *before* deallocation of the
large run, with unfortunate interior map state left over from
previous run allocation/deallocation activity, such that one or
more pages of allocated memory would be treated as part of the
remainder run during run coalescing.
In summary, this was a bad bug, but it was difficult to trigger.
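The indexing error is easiest to see in a simplified model. The sketch below is not jemalloc's actual chunk map (map_t, run_grow, and the size-recorded-at-both-boundaries convention are illustrative assumptions); it only shows why a grown run's metadata must land at the run's new end, not at the interior element where the appended run began:

```c
#include <assert.h>
#include <stddef.h>

#define NPAGES 16

typedef struct {
	size_t run_pages; /* Nonzero only at a run's first and last page. */
	int allocated;    /* Page belongs to an allocated run. */
} map_t;

static map_t map[NPAGES];

/* Grow the run starting at page 'first' from old_pages to new_pages. */
static void
run_grow(size_t first, size_t old_pages, size_t new_pages)
{

	assert(new_pages > old_pages);
	assert(first + new_pages <= NPAGES);

	/*
	 * The fix: record the size at the end of the *grown* run.  The
	 * buggy version updated map[first + old_pages] -- the interior
	 * element where the appended run used to begin -- so the true
	 * trailing boundary went stale, and a later coalescing pass
	 * could treat allocated pages as part of an adjacent free run.
	 */
	map[first].run_pages = new_pages;
	map[first + new_pages - 1].run_pages = new_pages;
	for (size_t i = first; i < first + new_pages; i++)
		map[i].allocated = 1;
}

int
main(void)
{

	run_grow(2, 4, 6);             /* A 4-page run at page 2 grows to 6. */
	assert(map[7].run_pages == 6); /* Metadata moved to the new end. */
	return (0);
}
```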
In arena_bin_malloc_hard(), if another thread wins the race to allocate
a bin run, dispose of the spare run via arena_bin_lower_run() rather
than arena_run_dalloc(), since the run has already been prepared for use
as a bin run. This bug has existed since March 14, 2010:
e00572b384c81bd2aba57fac32f7077a34388915
mmap()/munmap() without arena->lock or bin->lock.
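The pattern is generic enough to sketch: a thread drops the bin lock to allocate and format a run, and if another thread installs one first, the loser must tear down its spare with the routine that matches its prepared state. Everything below (bin_t, prepare_bin_run, lower_run) is a hypothetical stand-in for the jemalloc internals, not the real API:

```c
#include <pthread.h>
#include <stdlib.h>

typedef struct run_s { int formatted; } run_t;

typedef struct {
	pthread_mutex_t lock;
	run_t *current_run; /* Run currently servicing this bin. */
} bin_t;

/* Allocate and format a run for use as a bin run (toy stand-in). */
static run_t *
prepare_bin_run(void)
{
	run_t *run = malloc(sizeof(*run));

	if (run != NULL)
		run->formatted = 1;
	return (run);
}

/* Dispose of a run that has already been formatted as a bin run. */
static void
lower_run(run_t *run)
{

	free(run);
}

/* Called with bin->lock held and no usable run installed. */
static run_t *
bin_malloc_hard(bin_t *bin)
{
	run_t *run;

	/* Drop the lock while allocating a new run. */
	pthread_mutex_unlock(&bin->lock);
	run = prepare_bin_run();
	pthread_mutex_lock(&bin->lock);

	if (bin->current_run != NULL) {
		/*
		 * Another thread won the race while the lock was
		 * dropped.  The spare run is already formatted as a bin
		 * run, so dispose of it via the bin-run path
		 * (arena_bin_lower_run() in jemalloc); handing it to the
		 * raw run deallocator (arena_run_dalloc()) would
		 * mis-handle its prepared state.
		 */
		if (run != NULL)
			lower_run(run);
		return (bin->current_run);
	}
	bin->current_run = run;
	return (run);
}

int
main(void)
{
	bin_t bin;

	pthread_mutex_init(&bin.lock, NULL);
	bin.current_run = NULL;
	pthread_mutex_lock(&bin.lock);
	lower_run(bin_malloc_hard(&bin));
	pthread_mutex_unlock(&bin.lock);
	return (0);
}
```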
Fix bugs in arena_dalloc_bin_run(), arena_trim_head(),
arena_trim_tail(), and arena_ralloc_large_grow() that could cause the
CHUNK_MAP_UNZEROED map bit to become corrupted. These are all
long-standing bugs, but the chances of them actually causing problems
were much lower before the CHUNK_MAP_ZEROED --> CHUNK_MAP_UNZEROED
conversion.
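The corruption pattern is a classic read-modify-write omission: map elements pack a page count together with flag bits, and any code that rewrites an element must carry the unzeroed flag forward. The bit layout below is an assumption for illustration, not jemalloc's actual map encoding:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative layout: low 12 bits are flags, the rest is a page count. */
#define MAP_UNZEROED	((size_t)0x1)
#define MAP_FLAGS_MASK	((size_t)0xfff)

/* Rewrite the run size stored in *elm while preserving its flag bits. */
static void
map_set_size(size_t *elm, size_t run_pages)
{

	/* Buggy variant: *elm = run_pages << 12;  (drops MAP_UNZEROED). */
	*elm = (run_pages << 12) | (*elm & MAP_FLAGS_MASK);
}

int
main(void)
{
	size_t elm = ((size_t)4 << 12) | MAP_UNZEROED;

	map_set_size(&elm, 6);
	assert(elm & MAP_UNZEROED); /* Flag survives the rewrite. */
	return (0);
}
```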
Fix a large run statistics regression in arena_ralloc_large_grow() that
was introduced on September 17, 2010:
8e3c3c61b5bb676a705450708e7e79698cdc9e0c
Add {,r,s,d}allocm().
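A minimal sketch of what such a statistics fix involves, assuming per-size-class counters along the lines of jemalloc's large-run statistics (the structure and field names here are invented for illustration): an in-place grow must migrate the live-run count from the old size class to the new one, or the counters drift.

```c
#include <assert.h>
#include <stddef.h>

#define NCLASSES 8

typedef struct {
	size_t curruns;   /* Large runs of this class currently live. */
	size_t nrequests; /* Cumulative requests for this class. */
} large_stats_t;

static large_stats_t lstats[NCLASSES];

/* Account for a large run growing in place from old_class to new_class. */
static void
grow_stats(size_t old_class, size_t new_class)
{

	lstats[old_class].curruns--;  /* No longer counted here... */
	lstats[new_class].curruns++;  /* ...now counted in the new class. */
	lstats[new_class].nrequests++;
}

int
main(void)
{

	lstats[2].curruns = 1;
	grow_stats(2, 3);
	assert(lstats[2].curruns == 0 && lstats[3].curruns == 1);
	return (0);
}
```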
Add debug code to validate that supposedly pre-zeroed memory really is.
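A sketch of the kind of check this adds (assert_zeroed is a hypothetical name, not the function the commit introduces): before handing out memory believed to be pre-zeroed, scan it and assert that every byte really is zero.

```c
#include <assert.h>
#include <stddef.h>

/* Assert that len bytes starting at ptr are actually zero. */
static void
assert_zeroed(const void *ptr, size_t len)
{
	const unsigned char *p = ptr;

	for (size_t i = 0; i < len; i++)
		assert(p[i] == 0);
}

int
main(void)
{
	static unsigned char buf[4096]; /* Static storage is zeroed. */

	assert_zeroed(buf, sizeof(buf));
	return (0);
}
```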
