commit f6b8045ca5262f0d83eea3a279828a3f136f167f (patch)
parent df875b99fcb69c18168fb761ddaa722a034175dd
Author: Tim Peters <tim.peters@gmail.com>
Date:   2003-04-07 19:21:15 (GMT)
Reworked has_finalizer() to use the new _PyObject_Lookup() instead
of PyObject_HasAttr(); the former promises never to execute
arbitrary Python code. Undid many of the changes recently made to
work around the worst consequences of the fact that PyObject_HasAttr()
could execute arbitrary Python code.
Compatibility is hard to discuss, because the dangerous cases are
so perverse, and much of this appears to rely on implementation
accidents.
To start with, using hasattr() to check for __del__ wasn't only
dangerous; in some cases it was wrong: if an instance of an old-
style class didn't have "__del__" in its instance dict or in any
base class dict, but a getattr hook said __del__ existed, then
hasattr() said "yes, this object has a __del__". But
instance_dealloc() ignores the possibility of getattr hooks when
looking for a __del__, so while object.__del__ succeeds, no
__del__ method is called when the object is deleted. gc was
therefore incorrect in believing that the object had a finalizer.
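Old-style classes no longer exist, but the first half of that mismatch is easy to reproduce on modern Python 3 (a minimal sketch; the class name `Liar` is invented for illustration): a `__getattr__` hook can make `hasattr()` report a `__del__` that no class in the MRO actually defines, and the MRO dicts are what the interpreter's finalization machinery really consults.

```python
class Liar:
    def __getattr__(self, name):
        # Invoked only after normal lookup fails; claims __del__ exists.
        if name == "__del__":
            return lambda: None
        raise AttributeError(name)

obj = Liar()
print(hasattr(obj, "__del__"))  # True: answered by the __getattr__ hook
# A raw scan of the MRO dicts finds no real finalizer anywhere:
print(any("__del__" in klass.__dict__ for klass in type(obj).__mro__))  # False
```

When `obj` is destroyed, no `__del__` runs, despite what `hasattr()` said.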
The new method doesn't suffer that problem (like instance_dealloc(),
_PyObject_Lookup() doesn't believe __del__ exists in that case), but
does suffer a somewhat opposite, and even more obscure, oddity:
if an instance of an old-style class doesn't have "__del__" in its
instance dict, and a base class does have "__del__" in its dict,
and the first base class with a "__del__" associates it with a
descriptor (an object with a __get__ method), *and* if that
descriptor raises an exception when __get__ is called, then
(a) the current method believes the instance does have a __del__,
but (b) hasattr() does not believe the instance has a __del__.
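The disagreement can be sketched on modern Python 3 as well, with one adjustment: since `hasattr()` now only swallows AttributeError (any other exception propagates), the descriptor below raises AttributeError rather than an arbitrary exception. Class names are invented for illustration.

```python
class Trap:
    def __get__(self, obj, objtype=None):
        # Raised when anyone tries to retrieve __del__ via getattr().
        raise AttributeError("refusing to produce __del__")

class Base:
    __del__ = Trap()

class Child(Base):
    pass

c = Child()
print(hasattr(c, "__del__"))  # False: the descriptor's __get__ raised
# A raw MRO dict scan, roughly what _PyObject_Lookup did, disagrees:
print(any("__del__" in klass.__dict__ for klass in type(c).__mro__))  # True
```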
While these disagree, I believe the new method is "more correct":
because the descriptor *will* be called when the object is
destructed, it can execute arbitrary Python code at the time the
object is destructed, and that's really what gc means by "has a
finalizer": not specifically a __del__ method, but more generally
the possibility of executing arbitrary Python code at object
destruction time. Code in a descriptor's __get__() executed at
destruction time can be just as problematic as code in a
__del__() executed then.
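That reasoning still holds in current CPython, where the `__del__` lookup at finalization goes through the descriptor protocol. A sketch of `__get__` running at destruction time (names invented; this relies on CPython's reference counting destroying the object at `del`):

```python
calls = []

class Recording:
    def __get__(self, obj, objtype=None):
        calls.append("__get__")  # runs when __del__ is looked up
        return lambda *args: calls.append("__del__")

class Victim:
    __del__ = Recording()

v = Victim()
del v  # destruction triggers the __del__ lookup, invoking __get__
print(calls)
```

Both the descriptor's `__get__` and the callable it returns execute while the object is being torn down, which is exactly the window gc cares about.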
So I believe the new method is better on all counts.
Bugfix candidate, but it's unclear to me how all this differs in
the 2.2 branch (e.g., new-style and old-style classes already
took different gc paths in 2.3 before this last round of patches,
but don't in the 2.2 branch).
Diffstat (limited to 'Lib/test/test_gc.py')
 Lib/test/test_gc.py | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/Lib/test/test_gc.py b/Lib/test/test_gc.py
index e225881..1fbd508 100644
--- a/Lib/test/test_gc.py
+++ b/Lib/test/test_gc.py
@@ -272,8 +272,9 @@ def test_boom():
     # the internal "attr" attributes as a side effect.  That causes the
     # trash cycle to get reclaimed via refcounts falling to 0, thus mutating
     # the trash graph as a side effect of merely asking whether __del__
-    # exists.  This used to (before 2.3b1) crash Python.
-    expect(gc.collect(), 0, "boom")
+    # exists.  This used to (before 2.3b1) crash Python.  Now __getattr__
+    # isn't called.
+    expect(gc.collect(), 4, "boom")
     expect(len(gc.garbage), garbagelen, "boom")
 
 class Boom2:
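Stripped of the test harness, the Boom scenario from the hunk above can be run standalone on a modern CPython, where the collector likewise never calls `__getattr__` (a sketch; CPython's own test suite asserts the collected count of 4):

```python
import gc

class Boom:
    def __getattr__(self, name):
        # Looking up any missing attribute tears the cycle down as a
        # side effect -- exactly the hazard the commit message describes.
        del self.attr
        raise AttributeError(name)

a, b = Boom(), Boom()
a.attr = b
b.attr = a

gc.collect()      # flush unrelated garbage first
del a, b          # a<->b is now an unreachable cycle
n = gc.collect()  # collected without ever consulting __getattr__
print(n)          # 4: the two instances plus their two __dict__s
```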