of classic classes now. Alas, new-style classes are still a problem; they
didn't need fixing in 2.3 because they were already immune there (thanks to
the new-in-2.3 tp_del slot).
The move_finalizers() routine checks each object in the unreachable
list to see if it has a finalizer. If it does, it is moved to the
finalizers list. The collector checks by calling, effectively,
hasattr(obj, "__del__"). The hasattr() call can result in an
arbitrary amount of Python code being run, because it will invoke
getattr hooks on obj.
If a getattr() hook is run from move_finalizers(), it may end up
resurrecting or deallocating objects in the unreachable list. In
fact, it's possible that the hook causes the next object in the list
to be deallocated. That is, the object pointed to by gc->gc.gc_next
may be freed before has_finalizer() returns.
The problem with the previous revision is that it followed
gc->gc.gc_next before calling has_finalizer(). If has_finalizer()
happened to deallocate the object FROM_GC(gc->gc.gc_next), then
the next time through the loop gc would point to freed memory. The
fix is to always follow the next pointer after calling
has_finalizer().
Note that Python 2.3 does not have this problem, because
has_finalizer() checks the tp_del slot and never runs Python code.
Tim, Barry, and I peed away the better part of two days tracking this
down.
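To make the ordering concrete, here is a minimal self-contained C sketch of
the same hazard, using a mock linked list and made-up names (node, check)
rather than the real PyGC_Head/has_finalizer machinery: the callback may free
the node that a too-early-saved next pointer would refer to, so the next
pointer must be read only after the call returns.

    #include <stdio.h>
    #include <stdlib.h>

    /* Mock list node -- a stand-in for PyGC_Head, not the real thing. */
    typedef struct node {
        struct node *next;
        int id;
    } node;

    /* Stand-in for has_finalizer(): as a side effect it may deallocate the
     * FOLLOWING node, the way a getattr hook can.  The dying node is unlinked
     * from the list first, just as a deallocated GC object unlinks itself. */
    static int check(node *n)
    {
        if (n->id == 1 && n->next != NULL) {
            node *victim = n->next;
            n->next = victim->next;   /* unlink the victim ... */
            free(victim);             /* ... then free it */
        }
        return n->id % 2;             /* arbitrary yes/no answer */
    }

    int main(void)
    {
        node *head = NULL, *cur;
        int i;

        /* Build the list 0 -> 1 -> 2 -> 3. */
        for (i = 3; i >= 0; i--) {
            node *n = malloc(sizeof(node));
            n->id = i;
            n->next = head;
            head = n;
        }

        /* Broken order (the previous revision): save cur->next, call check(),
         * then follow the saved pointer -- which may now be freed memory.
         * Safe order (the fix): call check() first, follow cur->next after. */
        for (cur = head; cur != NULL; ) {
            int has_fin = check(cur);
            printf("node %d: has_finalizer=%d\n", cur->id, has_fin);
            cur = cur->next;          /* read the next pointer AFTER the call */
        }

        while (head != NULL) {        /* cleanup */
            cur = head->next;
            free(head);
            head = cur;
        }
        return 0;
    }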
2.2.1 has another bug that prevents the regression (which isn't a
regression at all) from showing up. "The regression" is actually a
glitch in cyclic gc that's been there forever.
As the generation being collected is analyzed, objects that can't be
collected (because, e.g., we find they're externally referenced, or
are in an unreachable cycle but have a __del__ method) are moved out
of the list of candidates. A tricksy scheme uses negative values of
gc_refs to mark such objects as being moved. However, the exact
negative value set at the start may become "more negative" over time
for objects not in the generation being collected, and the scheme was
checking for an exact match on the negative value originally assigned.
As a result, objects in generations older than the one being collected
could get scanned too, and yanked back into a younger generation. Doing
so doesn't lead to an error, but doesn't do any good, and can burn an
unbounded amount of time doing useless work.
A test case is simple (thanks to Kevin Jacobs for finding it!):
    x = []
    for i in xrange(200000):
        x.append((1,))
Without the patch, this ends up scanning all of x on every gen0 collection,
scans all of x twice on every gen1 collection, and x gets yanked back into
gen1 on every gen0 collection. With the patch, once x gets to gen2, it's
never scanned again until another gen2 collection, and stays in gen2.
Opened another bug report about the fact that 2.2.1 isn't actually tracking
(at least) iterators, generators, and bound method objects, because those
places still use the 2.1 gc API internally (which #defines itself out of
existence in 2.2.x).
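A tiny hedged illustration of the check being described; the field name
gc_refs comes from the log, but GC_MOVED and the struct are made up and this
is not the actual 2.2.1 code. The repair amounts to treating any negative
gc_refs as "not a candidate in this collection" instead of demanding the exact
sentinel assigned at the start.

    #include <stdio.h>

    #define GC_MOVED (-123)   /* hypothetical sentinel assigned at collection start */

    struct gcobj { int gc_refs; };   /* mock object: only the field being tested */

    /* Buggy test: misses objects whose gc_refs has drifted "more negative". */
    static int moved_exact(const struct gcobj *g)   { return g->gc_refs == GC_MOVED; }

    /* Fixed test: any negative value means "leave this object alone". */
    static int moved_any_neg(const struct gcobj *g) { return g->gc_refs < 0; }

    int main(void)
    {
        struct gcobj older = { GC_MOVED - 7 };   /* older-generation object, drifted */

        printf("exact match: %d   any negative: %d\n",
               moved_exact(&older), moved_any_neg(&older));
        /* Prints "exact match: 0   any negative: 1": the exact-match test fails
         * to recognize the older object, so it gets rescanned and yanked back. */
        return 0;
    }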
(Some tweaking for the different _PyObject_GC_Malloc() call in 2.2; checked
that it still returns the same thing on failure...)
_PyObject_GC_New: Could call PyObject_INIT with a NULL 1st argument.
_PyObject_GC_NewVar: Could call PyObject_INIT_VAR likewise.
Bugfix candidate.
Original patch(es):
python/dist/src/Modules/gcmodule.c:2.40
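The shape of the fix, as a self-contained sketch with mock types and a made-up
allocator name (gc_malloc); the real code calls _PyObject_GC_Malloc() and
PyObject_INIT(), but the point is only the guard: never hand a failed
allocation to the INIT step.

    #include <stdio.h>
    #include <stdlib.h>

    /* Mock stand-ins for PyTypeObject / PyObject -- hypothetical, not the real API. */
    typedef struct { const char *tp_name; size_t tp_basicsize; } type_t;
    typedef struct { const type_t *ob_type; long ob_refcnt; } object_t;

    /* Mock GC-aware allocator: returns NULL on failure, like malloc. */
    static object_t *gc_malloc(const type_t *tp)
    {
        return (object_t *)malloc(tp->tp_basicsize);
    }

    /* Mock of PyObject_INIT: dereferences op, so op must never be NULL. */
    static object_t *object_init(object_t *op, const type_t *tp)
    {
        op->ob_type = tp;
        op->ob_refcnt = 1;
        return op;
    }

    /* Corrected shape of _PyObject_GC_New: initialize only if allocation worked. */
    static object_t *gc_new(const type_t *tp)
    {
        object_t *op = gc_malloc(tp);
        if (op != NULL)               /* the previously missing NULL check */
            op = object_init(op, tp);
        return op;                    /* NULL propagates cleanly to the caller */
    }

    int main(void)
    {
        type_t t = { "mock", sizeof(object_t) };
        object_t *op = gc_new(&t);

        if (op != NULL) {
            printf("allocated %s object, refcnt=%ld\n", op->ob_type->tp_name,
                   op->ob_refcnt);
            free(op);
        }
        return 0;
    }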
WITH_CYCLE_GC. (Neil pointed this out before the weekend, and I fixed
it right away, but forgot to check it in.)
This is Neil's fix for SF bug 535905 (Evil Trashcan and GC interaction).
The fix makes it possible to call PyObject_GC_UnTrack() more than once on the
same object, and then moves the PyObject_GC_UnTrack() call to *before* the
point where the trashcan code is invoked.
BUGFIX CANDIDATE!
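One plausible way to get that behavior, sketched with mock structures (not the
real PyGC_Head or the actual tracking macros): record whether the object is
currently linked into a GC list, and make untracking a no-op when it isn't, so
a second call is harmless.

    #include <assert.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Mock gc list node -- not the real PyGC_Head. */
    typedef struct gc_head {
        struct gc_head *gc_next;   /* NULL means "not currently tracked" */
        struct gc_head *gc_prev;
    } gc_head;

    static gc_head list = { &list, &list };   /* circular doubly-linked list head */

    static void gc_track(gc_head *g)
    {
        assert(g->gc_next == NULL);           /* must not already be tracked */
        g->gc_next = &list;
        g->gc_prev = list.gc_prev;
        list.gc_prev->gc_next = g;
        list.gc_prev = g;
    }

    /* Safe to call any number of times: unlinking happens only if tracked. */
    static void gc_untrack(gc_head *g)
    {
        if (g->gc_next != NULL) {
            g->gc_prev->gc_next = g->gc_next;
            g->gc_next->gc_prev = g->gc_prev;
            g->gc_next = NULL;                /* mark as untracked */
        }
    }

    int main(void)
    {
        gc_head obj = { NULL, NULL };
        gc_track(&obj);
        gc_untrack(&obj);
        gc_untrack(&obj);   /* second call is a harmless no-op */
        printf("untracked twice without corrupting the list\n");
        return 0;
    }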
"complicated_bool".
objects to save in gc.garbage. This should be the last change needed to
fix SF bug 477059: "__del__ on new classes vs. GC".
Note that this slightly changes the behavior of the collector.
Before, if a cycle was found that contained instances with __del__
methods then all instance objects in that cycle were saved in
gc.garbage. Now, only objects with __del__ methods are saved in
gc.garbage.
When moving objects with a __del__ attribute to a special list, look
for __del__ on new-style classes with the HEAPTYPE flag set as well.
(HEAPTYPE means the class was created by a class statement.)
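A schematic reconstruction, against the Python 2.2-era C API (it needs the
Python 2 headers; PyInstance_Check is gone in Python 3), of the kind of check
described here; the has_finalizer name is taken from elsewhere in this log,
but this is a sketch of the idea, not the actual gcmodule.c code, and the
main() exists only to exercise it.

    #include <Python.h>

    /* Sketch: a classic-class instance, or an instance of a new-style class
     * created by a class statement (HEAPTYPE), counts as having a finalizer
     * when it answers hasattr(obj, "__del__"). */
    static int has_finalizer(PyObject *op)
    {
        if (PyInstance_Check(op))                       /* classic-class instance */
            return PyObject_HasAttrString(op, "__del__");
        if (PyType_HasFeature(op->ob_type, Py_TPFLAGS_HEAPTYPE))
            return PyObject_HasAttrString(op, "__del__");   /* new-style, class stmt */
        return 0;                                       /* everything else: no */
    }

    int main(void)
    {
        Py_Initialize();
        PyObject *ns = PyDict_New();
        PyDict_SetItemString(ns, "__builtins__", PyEval_GetBuiltins());
        PyObject *res = PyRun_String("class New(object):\n"
                                     "    def __del__(self): pass\n"
                                     "obj = New()\n", Py_file_input, ns, ns);
        Py_XDECREF(res);
        PyObject *obj = PyDict_GetItemString(ns, "obj");   /* borrowed reference */
        printf("has_finalizer(obj) = %d\n", obj ? has_finalizer(obj) : -1);
        Py_DECREF(ns);
        Py_Finalize();
        return 0;
    }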
SF bug 476129: "gc.collect sometimes hangs".
up GCC warnings.
The platform requires 8-byte alignment for doubles, but the GC header
was 12 bytes and that threw off the natural alignment of the double
members of a subtype of complex. The fix puts the GC header into a
union with a double as the other member, to force no-looser-than
double alignment of GC headers. On boxes that require 8-byte alignment
for doubles, this may add pad bytes to the GC header accordingly; ditto
for platforms that *prefer* 8-byte alignment for doubles. On platforms
that don't care, it shouldn't change the memory layout (because the
size of the old GC header is certainly greater than the size of a double
on all platforms, so unioning with a double shouldn't change size or
alignment on such boxes).
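A self-contained illustration of the union trick (the field names are
illustrative; the real PyGC_Head layout differs in detail). On today's 64-bit
platforms the two pointers alone already force 8-byte alignment, so
reproducing the original 12-byte header would take a 32-bit build, but the
technique reads the same.

    #include <stdio.h>
    #include <stddef.h>

    /* Roughly the old header shape that threw off double alignment. */
    struct gc_fields {
        struct gc_fields *gc_next;
        struct gc_fields *gc_prev;
        int gc_refs;
    };

    /* The fix: union the header with a double, so the header's size and
     * alignment are never looser than a double's. */
    typedef union gc_head {
        struct {
            union gc_head *gc_next;
            union gc_head *gc_prev;
            int gc_refs;
        } gc;
        double dummy;    /* forces no-looser-than-double alignment */
    } gc_head;

    /* A GC-managed object is laid out as [gc_head][object proper], so with
     * the union the object body starts on a double boundary. */
    struct complex_subtype {
        gc_head gc;
        double real, imag;   /* properly aligned thanks to the union */
    };

    int main(void)
    {
        printf("sizeof(struct gc_fields) = %zu\n", sizeof(struct gc_fields));
        printf("sizeof(gc_head)          = %zu\n", sizeof(gc_head));
        printf("offset of 'real'         = %zu\n",
               offsetof(struct complex_subtype, real));
        return 0;
    }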
This simplifies the rounding in _PyObject_VAR_SIZE, makes it possible to
restore the pre-rounding calling sequence, and allows some nice little
simplifications in its callers. I'm still making it return a size_t, though.
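A hedged sketch of the rounding arithmetic involved; the macro name and the
sizes below are made up, not the actual _PyObject_VAR_SIZE definition. Per the
entries further down in this log, the rounding exists so that a __dict__
pointer stored after a variable-length data blob lands on a pointer boundary.

    #include <stdio.h>
    #include <stddef.h>

    /* Hypothetical helper: round n up to a multiple of the pointer size. */
    #define ROUND_UP_TO_PTR(n) \
        (((n) + sizeof(void *) - 1) & ~(sizeof(void *) - 1))

    int main(void)
    {
        size_t basicsize = 16;   /* made-up tp_basicsize */
        size_t itemsize  = 1;    /* made-up tp_itemsize: one byte per item */
        size_t nitems    = 5;

        size_t raw     = basicsize + nitems * itemsize;   /* 21 in this example */
        size_t rounded = ROUND_UP_TO_PTR(raw);            /* 24 with 8-byte pointers */

        printf("raw size %zu -> rounded %zu (pointer size %zu)\n",
               raw, rounded, sizeof(void *));
        /* A __dict__ slot placed at offset 'rounded' is pointer-aligned;
         * one placed at 'raw' would not be. */
        return 0;
    }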
As Guido suggested, this makes the new subclassing code substantially
simpler. But the mechanics of doing it w/ C macro semantics are a mess,
and _PyObject_VAR_SIZE has a new calling sequence now.
Question: The PyObject_NEW_VAR macro appears to be part of the public API.
Regardless of what it expands to, the notion that it has to round up the
memory it allocates is new, and extensions containing the old
_PyObject_VAR_SIZE macro expansion (which was embedded in the
PyObject_NEW_VAR expansion) won't do this rounding. But the rounding
isn't actually *needed* except for new-style instances with dict pointers
after a variable-length blob of embedded data. So my guess is that we do
not need to bump the API version for this (as the rounding isn't needed
for anything an extension can do unless it's recompiled anyway). What's
your guess?
pad memory to properly align the __dict__ pointer in all cases.
gcmodule.c/objimpl.h, _PyObject_GC_Malloc:
+ Added a "padding" argument so that this flavor of malloc can allocate
enough bytes for alignment padding (it can't know this is needed, but
its callers do).
typeobject.c, PyType_GenericAlloc:
+ Allocated enough bytes to align the __dict__ pointer.
+ Sped and simplified the round-up-to-PTRSIZE logic.
+ Added blank lines so I could parse the if/else blocks <0.7 wink>.
no way to talk the debugger into showing me how many bytes were being
allocated.
visit_finalizer_reachable since it's the same as visit_reachable.
Rename visit_reachable to visit_move. Objects can now have the GC type
flag set, reachable by tp_traverse and not be in a GC linked list. This
should make the collector more robust and easier to use by extension
module writers. Add memory management functions for container objects
(new, del, resize).
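A self-contained mock of the traverse/visit protocol in question (hypothetical
names; the real tp_traverse and visitproc take PyObject*, mirrored here with a
mock object type): the container's traverse function reports every object it
references, and a visit_move-style callback relocates only objects that are
actually sitting in a GC list, which is what makes the "GC flag set but not in
a list" state safe.

    #include <stdio.h>
    #include <stddef.h>

    typedef struct object object;
    typedef int (*visitproc)(object *, void *);

    struct object {
        int tracked;            /* is this object in a GC list right now? */
        object *children[2];    /* references this container holds */
    };

    /* What an extension type's traverse function looks like: call visit() on
     * every object the container references, stopping on a nonzero return. */
    static int container_traverse(object *self, visitproc visit, void *arg)
    {
        for (size_t i = 0; i < 2; i++) {
            if (self->children[i] != NULL) {
                int err = visit(self->children[i], arg);
                if (err)
                    return err;
            }
        }
        return 0;
    }

    /* A visit_move-style callback: traversable-but-untracked objects are
     * simply skipped instead of being unlinked from a list they aren't in. */
    static int visit_move(object *op, void *arg)
    {
        int *moved = (int *)arg;
        if (op->tracked)
            (*moved)++;         /* stand-in for relinking the object */
        return 0;
    }

    int main(void)
    {
        object tracked_child   = { 1, { NULL, NULL } };
        object untracked_child = { 0, { NULL, NULL } };
        object box = { 1, { &tracked_child, &untracked_child } };

        int moved = 0;
        container_traverse(&box, visit_move, &moved);
        printf("moved %d of 2 referenced objects\n", moved);   /* prints 1 */
        return 0;
    }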
of PyList_Append.
get_referents, and is not yet documented in the library manual).
Suggestions for a better name welcome.
collector will be saved in gc.garbage. This is useful for debugging a
program that creates reference cycles.
- Fix else statements in gcmodule.c to conform to Python coding standards.
we don't need to run gc frequently
add sanity check to gc: if an exception occurs during GC, call
PyErr_WriteUnraisable and then call Py_FatalError.
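A hedged sketch of that sanity check, written against the embedding API
(PyErr_Occurred, PyErr_WriteUnraisable, and Py_FatalError are real calls; the
surrounding function and the simulated error are made up, and this is not the
actual gcmodule.c code).

    #include <Python.h>

    /* If some hook run during collection left an exception set, report it as
     * unraisable and abort: a live exception here means the collector's own
     * invariants can no longer be trusted. */
    static void check_no_exception(PyObject *where)
    {
        if (PyErr_Occurred()) {
            PyErr_WriteUnraisable(where);      /* prints the pending exception */
            Py_FatalError("unexpected exception during garbage collection");
        }
    }

    int main(void)
    {
        Py_Initialize();
        /* Simulate a hook raising during collection. */
        PyErr_SetString(PyExc_RuntimeError, "boom from a __del__/getattr hook");
        check_no_exception(Py_None);           /* never returns: Py_FatalError aborts */
        Py_Finalize();
        return 0;
    }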
also initialize the static debug variable to 0
Small stylistic changes by VM:
- is_enabled() -> isenabled()
- static ... Py_<func> -> static ... gc_<func>
debug_cycle(), and don't cast the pointer to a long. Neither needs
the literal `0x' prefix as %p automatically inserts this (on Linux at
least).
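A small made-up example of the formatting point: whether %p prints a leading
0x is implementation-defined (glibc does, hence the "on Linux at least"
caveat), while the cast-to-long alternative breaks where pointers are wider
than long.

    #include <stdio.h>

    int main(void)
    {
        int x = 42;
        void *p = &x;

        /* Portable: let %p format the pointer; glibc adds the 0x itself. */
        printf("object at %p\n", p);

        /* The replaced style: cast to long and format by hand.  Works on
         * Linux, but the cast is not portable where pointers are wider than
         * long (e.g. 64-bit Windows). */
        printf("object at 0x%lx\n", (unsigned long)p);
        return 0;
    }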
so we get better error messages.
Change a cast, initialize a local, and make some sprintf() format strings
type-appropriate (add the "l" to "%d").
Closes SourceForge patch #100737.
conditionally in the code.