Make malloc_write() non-inline, in order to resolve its dependency on
je_malloc_write().
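
A minimal sketch of the shape of this change; the file placement and
the je_malloc_write() signature are assumptions, not jemalloc's exact
code:

    /* Before: defined in a header, so every includer must be able to
     * resolve je_malloc_write() at that point:
     * static inline void malloc_write(const char *s) {
     *     je_malloc_write(s);
     * } */

    /* After: a normal function in one translation unit; the header only
     * declares it, confining the je_malloc_write() dependency here. */
    void je_malloc_write(const char *s); /* assumed signature */

    void
    malloc_write(const char *s)
    {
        je_malloc_write(s);
    }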
Embed the bin index for small page runs into the chunk page map, in
order to omit [...] in the following dependent load sequence:
ptr-->mapelm-->[run-->bin-->]bin_info
Move various non-critical code out of the inlined function chain into
helper functions (tcache_event_hard(), arena_dalloc_small(), and
locking).
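
A sketch of the idea, with illustrative types and bit layout (the
actual map bits in jemalloc differ):

    #include <stddef.h>

    typedef struct { size_t reg_size; /* ... */ } bin_info_t;

    /* Page map entry with the bin index embedded in its flag bits. */
    typedef struct { unsigned bits; } arena_chunk_map_t;

    #define BININD_SHIFT 4
    #define BININD_MASK  (0xffU << BININD_SHIFT)

    /* Before: ptr --> mapelm --> run --> bin --> bin_info
     * After:  ptr --> mapelm ------------------> bin_info */
    static const bin_info_t *
    mapelm_bin_info(const arena_chunk_map_t *mapelm,
        const bin_info_t *bin_infos)
    {
        unsigned binind = (mapelm->bits & BININD_MASK) >> BININD_SHIFT;

        return (&bin_infos[binind]);
    }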
Tested with MSVC 8, in both 32- and 64-bit builds.
These newly added macros will be used to implement the equivalent under
MSVC. Also, move the definitions to headers, where they make more sense,
and for some, are even more useful there (e.g. malloc).
Using errno on win32 doesn't quite work, because a value set in a
shared library can't be read from e.g. an executable calling the
function that sets errno.
At the same time, since buferror() always uses errno/GetLastError(),
don't pass it the error code.
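
A sketch of what the resulting interface could look like; the names
and flags here are assumptions rather than a quote of the patch:

    #include <errno.h>
    #include <stddef.h>
    #include <string.h>
    #ifdef _WIN32
    #  include <windows.h>
    #endif

    /* The error code is read inside the function, so callers in another
     * module never observe the library's errno/last-error themselves. */
    static int
    buferror(char *buf, size_t buflen)
    {
    #ifdef _WIN32
        DWORD err = GetLastError();

        FormatMessageA(FORMAT_MESSAGE_FROM_SYSTEM |
            FORMAT_MESSAGE_IGNORE_INSERTS, NULL, err, 0, buf,
            (DWORD)buflen, NULL);
        return (0);
    #else
        return (strerror_r(errno, buf, buflen)); /* XSI strerror_r */
    #endif
    }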
Windows headers define a VOID macro.
MSVC doesn't support C99, and building as C++ to be able to use VLAs is
dangerous, as C++ and C99 are incompatible.
Introduce a VARIABLE_ARRAY macro that either uses a VLA when supported,
or alloca() otherwise. Note that alloca() inside a loop doesn't quite
work like a VLA, so the use of VARIABLE_ARRAY there is discouraged. It
might be worth investigating ways to check, at runtime in debug builds,
whether VARIABLE_ARRAY is used in such a context, and bail out if that
happens.
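
A minimal sketch of such a macro, assuming a __STDC_VERSION__ check is
enough to detect VLA support (real detection may need compiler checks):

    #if __STDC_VERSION__ >= 199901L
    /* C99: a true VLA, scoped to the enclosing block. */
    #  define VARIABLE_ARRAY(type, name, count) type name[(count)]
    #else
    /* MSVC: stack allocation that is only reclaimed on function return,
     * which is why VARIABLE_ARRAY inside a loop is discouraged. */
    #  include <malloc.h> /* _alloca; <alloca.h> on other platforms */
    #  define VARIABLE_ARRAY(type, name, count) \
        type *name = _alloca(sizeof(type) * (count))
    #endif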
Handle prof_tdata resurrection during thread shutdown, similarly to how
tcache and quarantine handle resurrection.
Don't set prof_tdata during thread cleanup, because doing so will cause
the cleanup function to be called again, the second time with a NULL
argument.
Fix a PROF_ALLOC_PREP() error path to initialize the return value to
NULL.
Fix the "epoch" mallctl to update cached stats even if the passed in
epoch is 0.
Handle quarantine resurrection during thread exit in much the same way
as tcache resurrection is handled.
Fix ctl to correctly compute the number of children at each level of the
ctl tree.
MSVC doesn't support C99, and as such doesn't support designated
initialization of structs and unions. As there is never a mix of
indexed and named nodes, it is pretty straightforward to use a
different type for each.
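
A sketch of the split, with illustrative fields:

    #include <stddef.h>

    /* One struct per node kind, so portable positional initializers can
     * replace C99 designated initializers into a union. */
    typedef struct ctl_named_node_s ctl_named_node_t;
    struct ctl_named_node_s {
        const char *name;
        const ctl_named_node_t *children;
        unsigned nchildren;
    };

    typedef struct ctl_indexed_node_s {
        const void *(*index)(size_t i); /* e.g. "arenas.<i>" lookups */
    } ctl_indexed_node_t;

    /* C89-compatible: {"epoch", NULL, 0} rather than {.name = "epoch"}. */
    static const ctl_named_node_t epoch_node = {"epoch", NULL, 0};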
Fix a potential deadlock that could occur during interval- and
growth-triggered heap profile dumps.
Fix an off-by-one heap profile statistics bug that could be observed in
interval- and growth-triggered heap profiles.
Fix heap profile dump filename sequence numbers (regression during
conversion to malloc_snprintf()).
Commit 4eeb52f removed vsnprintf() validation, but left a now-unused
va_copy(). It so happens that MSVC doesn't support va_copy().
Remove mmap_unaligned, which was used to heuristically decide whether to
optimistically call mmap() in such a way that could reduce the total
number of system calls. If I remember correctly, the intention of
mmap_unaligned was to avoid always executing the slow path in the
presence of ASLR. However, that reasoning seems to have been based on a
flawed understanding of how ASLR actually works. Although ASLR
apparently causes mmap() to ignore address requests, it does not cause
total placement randomness, so there is a reasonable expectation that
iterative mmap() calls will start returning chunk-aligned mappings once
the first chunk has been properly aligned.
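
A sketch of the optimistic pattern described above; chunk_mmap_slow()
stands in for the hypothetical over-allocate-and-trim fallback:

    #include <stddef.h>
    #include <stdint.h>
    #include <sys/mman.h>

    void *chunk_mmap_slow(size_t size, size_t chunksize);

    static void *
    chunk_mmap_optimistic(size_t size, size_t chunksize)
    {
        void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
            MAP_PRIVATE | MAP_ANON, -1, 0);

        if (p == MAP_FAILED)
            return (NULL);
        /* Once one chunk is aligned, subsequent calls tend to return
         * aligned mappings too, so this fast path usually succeeds. */
        if (((uintptr_t)p & (chunksize - 1)) == 0)
            return (p);
        munmap(p, size);
        return (chunk_mmap_slow(size, chunksize));
    }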
Fix chunk_alloc_dss() to zero memory when requested.
Fix chunk_dealloc() to avoid chunk_dealloc_mmap() for dss-allocated
memory.
Fix huge_palloc() to always junk fill when requested.
Improve chunk_recycle() to report that memory is zeroed as a side effect
of pages_purge().
Fix a memory corruption bug in chunk_alloc_dss() that was due to
claiming newly allocated memory is zeroed.
Reverse order of preference between mmap() and sbrk() to prefer mmap().
Clean up management of 'zero' parameter in chunk_alloc*().
Put CONF_HANDLE_*() keys in quotes, so that they aren't mangled when
--with-private-namespace is used.
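
An illustrative reproduction of the mangling, not jemalloc's code: once
the private-namespace header #defines a name, any use of the bare token
is rewritten, including through stringification helpers, while string
literals are left alone:

    #define abort je_abort /* what a private-namespace header might do */

    #define STR_(x) #x
    #define STR(x) STR_(x) /* expands x, then stringifies */

    static const char *k_bad = STR(abort); /* becomes "je_abort" */
    static const char *k_good = "abort";   /* string literal: untouched */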
Keeping an extra argument around that only stores a pointer to a
function we already have is not very useful.
Make special FreeBSD libc/libthr function overrides for
_malloc_prefork(), _malloc_postfork(), and _malloc_thread_cleanup()
visible.
These flags take unsigned values, but they were fed with signed values
taken with va_arg, and that led to sign extension in cases where the
corresponding value has the most significant bit set.
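
An illustrative example of the failure mode (names hypothetical):

    #include <stdarg.h>
    #include <stdint.h>

    static uint64_t
    read_flags(int n, ...)
    {
        va_list ap;
        uint64_t flags;

        va_start(ap, n);
        /* Wrong: if bit 31 is set, the int is sign-extended, setting
         * all the high bits of the wider unsigned result:
         *     flags = (uint64_t)va_arg(ap, int);
         * Right: read the value as unsigned, so no extension occurs. */
        flags = va_arg(ap, unsigned int);
        va_end(ap);
        return (flags);
    }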
This will be used to implement the feature on mingw, which doesn't have
madvise.
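
A plausible sketch of the Windows side, assuming MEM_RESET serves as
the stand-in for madvise(MADV_DONTNEED); this is an assumption about
the approach, not a quote of the patch:

    #ifdef _WIN32
    #include <windows.h>

    static void
    pages_purge(void *addr, size_t length)
    {
        /* MEM_RESET marks the pages' contents as no longer needed; the
         * kernel may discard them rather than page them out. */
        VirtualAlloc(addr, length, MEM_RESET, PAGE_READWRITE);
    }
    #endif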
Clean up a few config-related conditionals to avoid unnecessary
dependencies on prof symbols. Use cassert() rather than assert()
everywhere that it's appropriate.
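
A sketch of the pattern, with illustrative definitions of the config_*
constant and of cassert():

    #include <stdbool.h>
    #include <stdlib.h>

    /* A compile-time constant, so "if (config_prof)" branches are
     * removed by dead-code elimination in non-prof builds, without any
     * #ifdef around the callers. */
    static const bool config_prof =
    #ifdef JEMALLOC_PROF
        true
    #else
        false
    #endif
        ;

    /* Unlike assert(), cassert() stays active in all build types; it is
     * for "this code path requires feature X to be compiled in" checks. */
    #define cassert(c) do { if (!(c)) abort(); } while (0)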
Change the "opt.lg_prof_sample" default from 0 to 19 (1 B to 512 KiB).
Change the "opt.prof_accum" default from true to false.
Add the "opt.prof_final" mallctl, so that "opt.prof_prefix" need not be
abused to disable final profile dumping.
Add the --disable-munmap option, remove the configure test that
attempted to detect the VM allocation quirk known to exist on Linux
x86[_64], and make --disable-munmap implicit on Linux.
Add a configure test to determine whether common mmap()/munmap()
patterns cause VM map holes, and only use munmap() to discard unused
chunks if the problem does not exist.
Unify the chunk caching for mmap and dss.
Fix options processing to limit lg_chunk to be large enough that
redzones will always fit.
Always disable redzone by default, even when --enable-debug is
specified. The memory overhead for redzones can be substantial, which
makes this feature something that should only be opted into.
chunk_boot0() calls rtree_new(), which calls base_alloc(), which locks
the base_mtx mutex. That mutex is initialized in base_boot(), so
base_boot() has to run first.
Normalize arena_palloc(), chunk_alloc_mmap_slow(), and
chunk_recycle_dss() to use the same algorithm for trimming
over-allocation.
Add the ALIGNMENT_ADDR2BASE(), ALIGNMENT_ADDR2OFFSET(), and
ALIGNMENT_CEILING() macros, and use them where appropriate.
Remove the run_size_p parameter from sa2u().
Fix a potential deadlock in chunk_recycle_dss() that was introduced by
eae269036c9f702d9fa9be497a1a2aa1be13a29e (Add alignment support to
chunk_alloc()).
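
Plausible definitions matching the macro names (alignment must be a
power of two); the actual definitions may differ in their casts:

    #include <stddef.h>
    #include <stdint.h>

    /* Round an address down to an alignment boundary. */
    #define ALIGNMENT_ADDR2BASE(a, alignment) \
        ((void *)((uintptr_t)(a) & ~((uintptr_t)(alignment) - 1)))

    /* Distance from the previous alignment boundary. */
    #define ALIGNMENT_ADDR2OFFSET(a, alignment) \
        ((size_t)((uintptr_t)(a) & ((uintptr_t)(alignment) - 1)))

    /* Round a size up to a multiple of the alignment. */
    #define ALIGNMENT_CEILING(s, alignment) \
        (((s) + ((alignment) - 1)) & ~((size_t)(alignment) - 1))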
Implement Valgrind support, as well as the redzone and quarantine
features, which help Valgrind detect memory errors. Redzones are only
implemented for small objects because the changes necessary to support
redzones around large and huge objects are complicated by in-place
reallocation, to the point that it isn't clear that the maintenance
burden is worth the incremental improvement to Valgrind support.
Merge arena_salloc() and arena_salloc_demote().
Refactor i[v]salloc() to expose the 'demote' option.
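
A sketch of how redzones and Valgrind interact, using real memcheck
client macros but a hypothetical layout and function name:

    #include <stddef.h>
    #include <valgrind/memcheck.h>

    /* Given a small object carved out as
     *     [redzone | usize bytes of user data | redzone],
     * mark the redzones inaccessible so Valgrind reports any touch. */
    static void
    redzones_arm(void *ptr, size_t usize, size_t redzone_size)
    {
        VALGRIND_MAKE_MEM_NOACCESS((char *)ptr - redzone_size,
            redzone_size);
        VALGRIND_MAKE_MEM_UNDEFINED(ptr, usize);
        VALGRIND_MAKE_MEM_NOACCESS((char *)ptr + usize, redzone_size);
    }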
Rename labels from FOO to label_foo in order to avoid system macro
definitions, in particular OUT and ERROR on mingw.
Reported by Mike Hommey.
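
An illustrative collision: mingw's <windows.h> chain defines ERROR as
an object-like macro (0 in wingdi.h), which breaks a label of the same
name:

    /* With "#define ERROR 0" in effect:
     *     goto ERROR;    expands to    goto 0;
     * ERROR:             expands to    0:
     * Lower-case, prefixed labels cannot collide with such macros:
     *     goto label_error;
     * label_error:
     */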
It was only used by the swap feature, and that is gone.
Always initialize tcache data structures if the tcache configuration
option is enabled, regardless of opt_tcache. This fixes
"thread.tcache.enabled" mallctl manipulation in the case when opt_tcache
is false.
Remove arena_malloc_prechosen(), now that arena_malloc() can be invoked
in a way that is semantically equivalent.
Reported by Mike Hommey.
Add a0malloc(), a0calloc(), and a0free(), which are used by FreeBSD's
libc to allocate/deallocate TLS in static binaries.