path: root/Lib/test
author     Jeremy Hylton <jeremy@alum.mit.edu>   2001-10-15 21:37:58 (GMT)
committer  Jeremy Hylton <jeremy@alum.mit.edu>   2001-10-15 21:37:58 (GMT)
commit     499ab6a65381256e24bf99ef897e8058c19d54ec (patch)
tree       27079f8852cb90dbb261424b7315250f94d27e26 /Lib/test
parent     abe2c62bdb26742379a7082f31821bacfc22106f (diff)
Better fix for core dumps on recursive objects in fast mode.
Raise ValueError when an object contains an arbitrarily nested reference to itself. (The previous fix just produced invalid pickles.)

The solution is very much like Py_ReprEnter() and Py_ReprLeave(): fast_save_enter() and fast_save_leave() track the fast_container nesting level and keep a fast_memo of objects currently being pickled.

The cost of the solution is moderately expensive for deeply nested structures, but it still seems to be faster than normal pickling, based on tests with deeply nested lists. Once FAST_LIMIT is exceeded, the new code is about twice as slow as fast-mode code that doesn't check for recursion. It's still twice as fast as the normal pickling code. In the absence of deeply nested structures, I couldn't measure a difference.
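
For illustration, here is a minimal Python sketch of the bookkeeping described above: a nesting counter that only starts consulting a memo of in-progress objects once a limit is exceeded, and raises ValueError when the same object is entered twice. The names (FastSaver, FAST_NESTING_LIMIT, enter, leave, save) are hypothetical stand-ins for the C identifiers, and the traversal is a toy dispatch rather than the real pickler.

# Minimal sketch of the recursion check; names are illustrative,
# not the actual identifiers in cPickle.c.

FAST_NESTING_LIMIT = 50        # stand-in for FAST_LIMIT

class FastSaver:
    def __init__(self):
        self.depth = 0         # plays the role of fast_container
        self.memo = {}         # plays the role of fast_memo: id -> object

    def enter(self, obj):
        # Called before saving a container.  Below the nesting limit no
        # bookkeeping is done at all, so shallow pickles pay nothing.
        self.depth += 1
        if self.depth <= FAST_NESTING_LIMIT:
            return
        if id(obj) in self.memo:
            self.depth -= 1
            raise ValueError("fast mode: can't pickle cyclic object %r"
                             % type(obj).__name__)
        self.memo[id(obj)] = obj   # keep obj alive so its id stays unique

    def leave(self, obj):
        # Called after saving a container; undoes enter().
        if self.depth > FAST_NESTING_LIMIT:
            del self.memo[id(obj)]
        self.depth -= 1

    def save(self, obj):
        # Toy traversal standing in for the pickler's save() dispatch.
        if isinstance(obj, (list, tuple)):
            self.enter(obj)
            try:
                for item in obj:
                    self.save(item)
            finally:
                self.leave(obj)
        # scalars need no cycle check

if __name__ == "__main__":
    nested = leaf = []
    for _ in range(200):       # deeply nested but acyclic: saves fine
        leaf.append([])
        leaf = leaf[0]
    FastSaver().save(nested)

    cyclic = []
    cyclic.append(cyclic)      # self-referential list
    try:
        FastSaver().save(cyclic)
    except ValueError as exc:
        print(exc)             # fast mode: can't pickle cyclic object 'list'

Run as a script, the sketch traverses the 200-level acyclic list without complaint and reports the ValueError for the self-referential one, which matches the trade-off described above: no extra work until the nesting limit is passed, then one memo lookup per container.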
Diffstat (limited to 'Lib/test')
0 files changed, 0 insertions, 0 deletions