Commit message
larger than uint. Reverts r65975. Reviewed by Brett Cannon.
64bit OSes. Reviewed by Benjamin Peterson.
was not always being done properly in some Python types and extension
modules. PyMem_MALLOC, PyMem_REALLOC, PyMem_NEW and PyMem_RESIZE have
all been updated to perform better checks, and places in the code that
would previously leak memory on the error path when such an allocation
failed have been fixed.
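
A rough illustration of the kind of check being added here (the function and
names below are made up for the sketch; the real checks live in the PyMem_*
macros):

```c
#include <stddef.h>
#include <stdlib.h>

/* Illustrative sketch of the overflow check the entry describes: refuse to
 * compute num_items * item_size when the product would not fit in a size_t,
 * instead of letting it silently wrap around and under-allocate. */
static void *
checked_array_alloc(size_t num_items, size_t item_size)
{
    if (item_size != 0 && num_items > (size_t)-1 / item_size)
        return NULL;                    /* would overflow: report failure */
    return malloc(num_items * item_size);
}
```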
Added checks for integer overflows, contributed by Google. Some are
only available if asserts are left in the code, in cases where they
can't be triggered from Python code.
backwards compatibility. Add Py_Refcnt, Py_Type, Py_Size, and
PyVarObject_HEAD_INIT.
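
A minimal sketch of how these spellings are used from extension code, with a
made-up type for illustration (the accessors were later renamed to the
all-caps Py_TYPE/Py_SIZE/Py_REFCNT forms):

```c
#include <Python.h>

/* Hypothetical variable-size type, shown only to illustrate the new
 * PyVarObject_HEAD_INIT macro and the accessor spellings named above. */
static PyTypeObject Example_Type = {
    PyVarObject_HEAD_INIT(NULL, 0)   /* replaces PyObject_HEAD_INIT + ob_size */
    "example.Example",               /* tp_name */
    sizeof(PyVarObject),             /* tp_basicsize */
};

static int
is_example(PyObject *op)
{
    /* Py_Type(op) reads op->ob_type without poking at the struct directly. */
    return Py_Type(op) == &Example_Type;
}
```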
Will backport.
2*sizeof(size_t) now, not 8. This probably accounts for
current disasters on the 64-bit buildbot slaves.
to each allocated block. This was using 4 bytes for each such
piece of info regardless of platform. This didn't really matter
before (proof: no bug reports, and the debug-build obmalloc would
have assert-failed if it was ever asked for a chunk of memory
>= 2**32 bytes), since container indices were plain ints. But after
the Py_ssize_t changes, it's at least theoretically possible to
allocate a list or string whose guts exceed 2**32 bytes, and the
PYMALLOC_DEBUG routines would fail then (having only 4 bytes
to record the originally requested size).
Now we use sizeof(size_t) bytes for each of a PYMALLOC_DEBUG
build's extra debugging fields. This won't make any difference
on 32-bit boxes, but will add 16 bytes to each allocation in
a debug build on a 64-bit box.
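
The arithmetic behind those numbers, assuming four extra debug fields per
block (which is what the 16-byte overhead quoted in an older entry below
implies):

```c
#include <stdio.h>

/* Back-of-the-envelope check of the sizes quoted above: with four fixed
 * 4-byte fields the debug overhead was 16 bytes everywhere; with four
 * sizeof(size_t)-sized fields it stays 16 bytes on a 32-bit box but grows
 * to 32 bytes (i.e. 16 more) on a 64-bit box. */
int main(void)
{
    size_t old_overhead = 4 * 4;               /* four fixed 4-byte fields  */
    size_t new_overhead = 4 * sizeof(size_t);  /* four size_t-sized fields  */
    printf("extra bytes per debug allocation: %zu (was %zu)\n",
           new_overhead, old_overhead);
    return 0;
}
```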
solution) in the same way as listobject.c got changed. Hoping for a better
solution.
This is a heavily altered derivative of SF patch 1123430, Evan
Jones's heroic effort to make obmalloc return unused arenas to
the system free(), with some heuristic strategies to make it
more likely that arenas eventually _can_ be freed.
Perhaps Py_NO_INLINE should be moved to pyport.h or some other header?
managed by C, because it's possible for the block to be smaller than the
new requested size, and at the end of allocated VM. Trying to copy over
nbytes bytes to a Python small-object block can segfault then, and there's
no portable way to avoid this (we would have to know how many bytes
starting at p are addressable, and std C has no means to determine that).
Bugfix candidate. Should be backported to 2.4, but I'm out of time.
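
In other words, when pymalloc does not own the block the safe move is to
leave it with the platform allocator instead of copying it into a
small-object block. A minimal sketch of that decision, with illustrative
names rather than the actual obmalloc code:

```c
#include <stdlib.h>

/* Illustrative only (not the obmalloc source): if a pointer did not come
 * from our own allocator we cannot know how many bytes starting at it are
 * addressable, so copying `nbytes` out of it may fault.  The safe choice
 * is to hand the block straight back to the allocator that created it. */
void *
sketch_realloc(void *p, size_t nbytes, int owned_by_pymalloc)
{
    if (!owned_by_pymalloc)
        return realloc(p, nbytes);  /* let C continue to manage this block */

    /* Otherwise the block's true size is known to us, so it can be resized
     * (and, if needed, copied) safely within the small-object allocator;
     * that path is elided in this sketch. */
    return NULL;
}
```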
larger than the system page size.
Two of these were real bugs.
(Silences compiler warning for Compaq C++ 6.5 on Tru64.)
"special builds" I ever use. If you use others, document them here, or
don't be surprised if I rip out the code for them <0.5 wink>.
copying it if at least 25% of the input block can be reclaimed.
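
The heuristic boils down to a comparison along these lines (a sketch, not the
literal source): copy only when the new size is at most three quarters of the
old block.

```c
#include <stddef.h>

/* Sketch of the shrink heuristic: `size` is the capacity of the existing
 * small-object block, `nbytes` the newly requested size.  Copying to a
 * smaller size class only pays off when at least a quarter of the old
 * block would be given back; otherwise keep the block as-is. */
static int
worth_copying_on_shrink(size_t size, size_t nbytes)
{
    if (nbytes > size)
        return 0;                   /* growing: not a shrink at all */
    return 4 * nbytes <= 3 * size;  /* reclaim >= 25% of the old block */
}
```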
display a msg warning that the count of bytes requested may be bogus,
and that a segfault may happen next.
consistency checks, enabled only in a debug (Py_DEBUG) build. Note that
this never gets called automatically unless PYMALLOC_DEBUG is #define'd
too, and the envar PYTHONMALLOCSTATS exists.
doesn't return NULL.
PyObject_Realloc: better comment for why we don't call PyObject_Malloc(0).
Added code to call this when PYMALLOC_DEBUG is enabled, and envar
PYTHONMALLOCSTATS is set, whenever a new arena is obtained and once
late in the Python shutdown process.
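
The trigger described here amounts to something like the following sketch
(names are illustrative; the real stats routine is the
_PyObject_DebugDumpStats mentioned in an older entry below):

```c
#include <stdlib.h>

/* Sketch of the hook described above: whenever a PYMALLOC_DEBUG build
 * obtains a new arena, dump allocator statistics if the user asked for
 * them via the PYTHONMALLOCSTATS environment variable.  dump_stats()
 * stands in for the real stats routine. */
static void dump_stats(void) { /* print arena/pool/block counts to stderr */ }

static void
on_new_arena(void)
{
#ifdef PYMALLOC_DEBUG
    if (getenv("PYTHONMALLOCSTATS") != NULL)
        dump_stats();
#endif
}
```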
_PyObject_DebugMalloc: explicitly cast PyObject_Malloc's result to the
target pointer type.
_PyObject_DebugDumpStats: change decl of arena_alignment from unsigned
int to unsigned long.
This is for the 2.3 release only (it's new code).
most of the work. In particular, if the underlying realloc is able to
grow the memory block in place, great (this routine used to do a fresh
malloc + memcpy every time a block grew). BTW, I'm not so keen here on
avoiding possible quadratic-time realloc patterns as I am on making
the debug pymalloc more invisible (the more it uses memory "just like"
the underlying allocator, the better the chance that a suspected memory
corruption bug won't vanish when the debug malloc is turned on).
prefix. These symbols are private to the file, and the PYMALLOC_ gets
in the way (overly long code lines, comments, and error messages).
PyMalloc_ prefix and use PyObject_ instead. I'm not sure about the
debugging functions. Perhaps they should stay as PyMalloc_.
The bug report pointed out a bogosity in the comment block explaining
thread safety for arena management. Repaired that comment, repaired a
couple others while I was at it, and added an assert.
_PyMalloc_DebugRealloc: If this needed to get more memory, but couldn't,
it erroneously freed the original memory. Repaired that.
This is for 2.3 only (unless we decide to backport the new pymalloc).
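
The contract the fix restores is the usual realloc rule: on failure, return
NULL and leave the caller's block untouched. A minimal sketch of the correct
shape, with illustrative names:

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative: when growing fails, return NULL *without* touching the
 * caller's block -- freeing it here is exactly the bug this entry fixes. */
static void *
grow_copy(void *p, size_t old_size, size_t new_size)
{
    void *q = malloc(new_size);
    if (q == NULL)
        return NULL;          /* p is still owned by the caller, untouched */
    memcpy(q, p, old_size < new_size ? old_size : new_size);
    free(p);                  /* only release the old block on success */
    return q;
}
```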
the big numbers.
accounts for every byte.
runtime multiplications and divisions, via the scheme developed with
Vladimir Marangozov on Python-Dev. The pool_header struct loses its
capacity member, but gains nextoffset and maxnextoffset members; this
still leaves it at 32 bytes on a 32-bit box (it has to be padded to a
multiple of 8 bytes).
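
A rough sketch of the idea: handing out the next never-used block becomes a
compare and an add on precomputed byte offsets, with no multiplication or
division at allocation time (the field names follow this entry; the rest is
illustrative):

```c
#include <stddef.h>

/* Rough sketch: carve the next never-used block out of a pool using only
 * precomputed byte offsets.  `nextoffset` is where the next fresh block
 * starts; `maxnextoffset` is the largest offset at which a whole block
 * still fits inside the pool. */
struct pool_sketch {
    unsigned int nextoffset;     /* byte offset of next virgin block  */
    unsigned int maxnextoffset;  /* largest valid value of nextoffset */
};

static void *
carve_block(struct pool_sketch *pool, char *pool_base, unsigned int block_size)
{
    if (pool->nextoffset > pool->maxnextoffset)
        return NULL;                       /* pool is full */
    {
        void *bp = pool_base + pool->nextoffset;
        pool->nextoffset += block_size;    /* just an add, no multiply */
        return bp;
    }
}
```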
what these do given a 0 size argument. This is so that when pymalloc
is enabled, we don't need to wrap pymalloc calls in goofy little
routines special-casing 0. Note that it's virtually impossible to meet
the doc's promise that malloc(0) will never return NULL; this makes a
best effort, but not an insane effort. The code does promise that
realloc(not-NULL, 0) will never return NULL (malloc(0) is much harder).
_PyMalloc_Realloc: Changed to take over all requests for 0 bytes, and
rearranged to be a little quicker in expected cases.
All over the place: when resorting to the platform allocator, call
free/malloc/realloc directly, without indirecting thru macros. This
should avoid needing a nightmarish pile of #ifdef-ery if PYMALLOC_DEBUG
is changed so that pymalloc takes over all Py{Mem, Object} memory
operations (which would add useful debugging info to PyMem_xyz
allocations too).
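
A sketch of the 0-byte convention described here (not the literal routines):
bump a request for 0 bytes up to 1 byte so a distinct, freeable pointer can
be returned.

```c
#include <stdlib.h>

/* Sketch of the 0-size convention: never hand out NULL for a 0-byte request
 * if we can help it -- bump the request to 1 byte so the caller gets a
 * distinct pointer that can later be passed to free(). */
static void *
malloc_never_zero(size_t nbytes)
{
    if (nbytes == 0)
        nbytes = 1;
    return malloc(nbytes);
}
```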
possible pool states. I think it's much clearer now.
Added a new long overdue block-management overview comment block.
I believe the comments are in good shape now.
Added two comments about possible small optimizations (one getting rid
of runtime multiplications at the cost of a new pool_header member; the
other getting rid of runtime divisions and the pool_header capacity
member, at the cost of a static const vector of 32 uints).
This displays stats about the # of arenas, pools, blocks and bytes, to
stderr, both used and reserved but unused.
CAUTION: Because PYMALLOC_DEBUG is on, the debug malloc routine adds
16 bytes to each request. This makes each block appear two size classes
higher than it would be if PYMALLOC_DEBUG weren't on.
So far, playing with this confirms the obvious: there's a lot of activity
in the "small dict" size class, but nothing in the core makes any use of
the 8-byte or 16-byte classes.
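
The "two size classes higher" remark follows from the 8-byte spacing of the
size classes that the surrounding entries imply: 16 extra debug bytes is
exactly two classes. A quick illustration, treating the index formula as a
sketch:

```c
#include <stdio.h>

/* Size classes are 8 bytes apart, so a request of n bytes lands in class
 * (n - 1) >> 3.  The 16 bytes PYMALLOC_DEBUG tacks on therefore pushes
 * every request up by exactly two classes, as noted above. */
int main(void)
{
    unsigned int n = 24;                     /* e.g. a 24-byte request    */
    unsigned int plain = (n - 1) >> 3;       /* class 2 (17..24 bytes)    */
    unsigned int debug = (n + 16 - 1) >> 3;  /* class 4 (33..40 bytes)    */
    printf("class without debug: %u, with debug: %u\n", plain, debug);
    return 0;
}
```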
with each other.
the code so that the most frequent cases come first. Added comments.
Found a hidden assumption that a pool contains room for at least two
blocks, and added an assert to catch a violation if it ever happens in
a place where that matters. Gave the normal "I allocated this block"
case a longer basic block to work with before it has to do its first
branch (via breaking apart an embedded assignment in an "if", and
hoisting common code out of both branches).
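
The hidden assumption mentioned here, written as the kind of assert the entry
describes (constant names are illustrative stand-ins, not the real ones):

```c
#include <assert.h>

/* Sketch of the invariant: after the pool header is subtracted, a pool must
 * still hold at least two blocks of the size class it serves.  POOL_SIZE
 * and POOL_OVERHEAD here are illustrative stand-ins. */
#define POOL_SIZE     4096u   /* one pool, typically a system page */
#define POOL_OVERHEAD 32u     /* size of the pool header           */

static void
check_room_for_two_blocks(unsigned int block_size)
{
    assert(POOL_SIZE - POOL_OVERHEAD >= 2u * block_size);
}
```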
and terminology, plus explanation of some extreme obscurities.
|