commit 499ab6a65381256e24bf99ef897e8058c19d54ec
parent abe2c62bdb26742379a7082f31821bacfc22106f
Author: Jeremy Hylton <jeremy@alum.mit.edu>
Date:   2001-10-15 21:37:58 (GMT)
Better fix for core dumps on recursive objects in fast mode.
Raise ValueError when an object contains an arbitrarily nested
reference to itself. (The previous fix just produced invalid
pickles.)
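The new behavior can be demonstrated by enabling fast mode and pickling a self-referential list (a minimal sketch using the `fast` attribute exposed by the C pickler; the exact error text is an implementation detail):

```python
import io
import pickle

buf = io.BytesIO()
p = pickle.Pickler(buf)
p.fast = True        # fast mode: skip the memo for speed

lst = []
lst.append(lst)      # a list containing a reference to itself

try:
    p.dump(lst)
except ValueError as exc:
    # With this fix, the recursion is detected and reported
    # instead of producing an invalid pickle (or a core dump).
    print("refused:", exc)
```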
The solution is very much like Py_ReprEnter() and Py_ReprLeave():
fast_save_enter() and fast_save_leave() track the nesting depth in
fast_container and, once the limit is exceeded, keep a fast_memo of
objects currently being pickled.
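In Python terms, the enter/leave bookkeeping might be sketched as follows (illustrative only, not the actual C code; the names fast_container, fast_memo, and FAST_LIMIT come from the commit message, but the structure and the FAST_LIMIT value here are assumptions):

```python
FAST_LIMIT = 2000  # assumed threshold; the real value lives in the C source


class FastSaveGuard:
    def __init__(self):
        self.fast_container = 0  # current nesting depth
        self.fast_memo = {}      # ids of objects currently being pickled

    def fast_save_enter(self, obj):
        # Cheap path: just count depth until the limit is exceeded.
        self.fast_container += 1
        if self.fast_container > FAST_LIMIT:
            key = id(obj)
            if key in self.fast_memo:
                # The same object is already on the save stack: a cycle.
                raise ValueError("fast mode: can't pickle cyclic objects")
            self.fast_memo[key] = obj

    def fast_save_leave(self, obj):
        if self.fast_container > FAST_LIMIT:
            del self.fast_memo[id(obj)]
        self.fast_container -= 1
```

Below the limit only a counter is touched, which is why the common case stays close to unchecked fast-mode speed; the per-object memo work kicks in only for deeply nested structures.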
The solution is moderately expensive for deeply nested structures, but
it still seems to be faster than normal pickling, based on tests with
deeply nested lists.
Once FAST_LIMIT is exceeded, the new code is about twice as slow as
fast-mode code that doesn't check for recursion. It's still twice as
fast as the normal pickling code. In the absence of deeply nested
structures, I couldn't measure a difference.