though I tried to be very careful. This is a slight simplification, and it
adds a new feature: a small stack-allocated "recycled" array for the cases
when we don't remove too many items.
It allows PyList_SetSlice() to never fail if:
* you are sure that the object is a list; and
* you either do not remove more than 8 items, or clear the list.
This makes a number of other places in the source code correct again -- there
are some places that delete a single item without checking for MemoryErrors
raised by PyList_SetSlice(), or that clear the whole list, and sometimes the
context doesn't allow an error to be propagated.
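
A minimal standalone sketch of the stack-buffer-with-heap-fallback idea described above; the function name, the plain-C types, and the use of free() in place of Py_DECREF are stand-ins, not the actual list_ass_slice() code:

```c
#include <stdlib.h>
#include <string.h>

/* Delete items[lo:hi] from an array of n owned pointers.  Only the
   large-deletion case can fail, mirroring the guarantee above. */
static int
delete_slice(void **items, size_t n, size_t lo, size_t hi)
{
    void *recycle_on_stack[8];
    void **recycle = recycle_on_stack;
    size_t d = hi - lo;
    size_t i;

    if (d > 8) {                        /* too big for the stack buffer */
        recycle = malloc(d * sizeof(void *));
        if (recycle == NULL)
            return -1;                  /* the only failure point */
    }
    /* Park the doomed pointers first, so the array is consistent before
       releasing them (a release can run arbitrary destructor code). */
    memcpy(recycle, items + lo, d * sizeof(void *));
    memmove(items + lo, items + hi, (n - hi) * sizeof(void *));
    for (i = 0; i < d; i++)
        free(recycle[i]);               /* stand-in for Py_DECREF */
    if (recycle != recycle_on_stack)
        free(recycle);
    return 0;
}
```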
|
The invariant checks would break.
|
invariants allows the ob_item != NULL check to be replaced with an
assertion.
* Added assertions to list_init() which document and verify that the
tp_new slot establishes the invariants. This may preclude a future
bug if a custom tp_new slot is written.
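
A sketch of the kind of assertions meant here, assuming the invariants documented in listobject.h; the exact statements in list_init() may differ:

```c
/* On entry to list_init(), tp_new must already have established the
   list invariants; asserting them makes a custom tp_new slot that
   breaks them fail loudly in a debug build. */
assert(self != NULL && PyList_Check(self));
assert(0 <= self->ob_size);
assert(self->ob_size <= self->allocated);
assert(self->ob_item != NULL || self->allocated == 0);
```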
|
to NULL during the lifetime of the object.
* listobject.c nevertheless did not conform to the other invariants,
either; fixed.
* listobject.c now uses list_clear() as the obvious internal way to clear
a list, instead of abusing list_ass_slice() for that. It makes it easier
to enforce the invariant about ob_item == NULL.
* listsort() sets allocated to -1 during sort; any mutation will set it
to a value >= 0, so it is a safe way to detect mutation. A negative
value for allocated does not cause a problem elsewhere currently.
test_sort.py has a new test for this fix.
* listsort() leak: if items were added to the list during the sort, AND
  these items had a __del__ that put still more stuff into the list, then
  that extra stuff (and the PyObject** array holding it) was overwritten
  at the end of listsort() and never released.
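
A sketch of the detection idiom, with the surrounding sort machinery elided (illustrative, not the verbatim listsort() code):

```c
self->allocated = -1;          /* impossible outside listsort() */
/* ... run the actual sort on a private copy of the item array ... */
if (self->allocated != -1) {
    /* Any append/resize during the sort stored a value >= 0, so the
       list was mutated while it was being sorted. */
    PyErr_SetString(PyExc_ValueError, "list modified during sort");
}
```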
|
mutation during list.sort() used to rely on the fact that listobject.c
always NULLed ob_item when ob_size fell to 0. That's no longer true, so
the test for list mutation during a sort is no longer reliable. Changed
the test to rely instead on the fact that listobject.c now never NULLs
out ob_item once (if ever) ob_item gets a non-NULL value. This new
assumption is also documented now, as a required invariant in listobject.h.
The new assumption allowed some real simplification to some of the
hairier code in listsort(), so is a Good Thing on that count.
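
One way to use that invariant for the mutation test, sketched from the description above (not the verbatim listsort() code; Py_ssize_t stands in for the plain int sizes of that era):

```c
/* Since ob_item is never reset to NULL once set, its identity can only
   change through a mutation, so a pointer/size snapshot is enough. */
PyObject **saved_ob_item = self->ob_item;
Py_ssize_t saved_ob_size = self->ob_size;
/* ... run the sort ... */
if (self->ob_item != saved_ob_item || self->ob_size != saved_ob_size)
    PyErr_SetString(PyExc_ValueError, "list modified during sort");
```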
|
the size_t nbytes, and passed nbytes to malloc, so it was confusing to
effectively recompute the same thing from scratch in the memset call.
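
The shape of the cleanup being described, sketched from PyList_New() (variable names illustrative):

```c
size_t nbytes = size * sizeof(PyObject *);
PyObject **items = (PyObject **)PyMem_MALLOC(nbytes);
if (items == NULL)
    return PyErr_NoMemory();
memset(items, 0, nbytes);    /* reuse nbytes; don't recompute the size */
```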
|
__oct__, and __hex__. Raise TypeError if an invalid type is
returned. Note that PyNumber_Int and PyNumber_Long can still
return ints or longs. Fixes SF bug #966618.
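
A sketch of the added check, using the Python 2 C API of the time (the exact error-message text is illustrative):

```c
PyObject *res = PyObject_CallMethod(v, "__hex__", NULL);
if (res != NULL && !PyString_Check(res)) {
    /* The special method ran fine but returned the wrong type. */
    PyErr_Format(PyExc_TypeError,
                 "__hex__ returned non-string (type %.200s)",
                 res->ob_type->tp_name);
    Py_DECREF(res);
    res = NULL;
}
```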
|
modules and objects.
|
methods on string and unicode objects. Added unicode.decode()
which was missing for no apparent reason.
|
Need to return -1 on error.
Needs backport.
|
- weakref.ref and weakref.ReferenceType will become aliases for each
other
- weakref.ref will be a modern, new-style class with proper __new__
and __init__ methods
- weakref.WeakValueDictionary will have a lighter memory footprint,
using a new weakref.ref subclass to associate the key with the
value, allowing us to have only a single object of overhead for each
dictionary entry (currently, there are 3 objects of overhead per
entry: a weakref to the value, a weakref to the dictionary, and a
function object used as a weakref callback; the weakref to the
dictionary could be avoided without this change)
- a new macro, PyWeakref_CheckRefExact(), will be added
- PyWeakref_CheckRef() will check for subclasses of weakref.ref
This closes SF patch #983019.
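
The two checks would look roughly like this (written from memory of Include/weakrefobject.h; treat it as a sketch rather than the committed definitions):

```c
/* exact type only */
#define PyWeakref_CheckRefExact(op) \
        ((op)->ob_type == &_PyWeakref_RefType)

/* weakref.ref or any subclass of it */
#define PyWeakref_CheckRef(op) \
        PyObject_TypeCheck(op, &_PyWeakref_RefType)
```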
|
The builtin eval() function now accepts any mapping for the locals argument.
Time-sensitive steps are guarded by PyDict_CheckExact() to keep from
slowing down the normal case. My timings show no measurable impact.
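
The guard pattern referred to, sketched (the lookup site and the variable names are illustrative):

```c
if (PyDict_CheckExact(locals)) {
    /* Fast path: the common case stays on the plain-dict API. */
    value = PyDict_GetItem(locals, key);     /* borrowed reference */
    Py_XINCREF(value);
}
else {
    /* Slow path: any object implementing the mapping protocol. */
    value = PyObject_GetItem(locals, key);   /* new reference */
    if (value == NULL && PyErr_ExceptionMatches(PyExc_KeyError))
        PyErr_Clear();                       /* treat as "not found" */
}
```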
|
places it's just noise.
|
* Change an XDECREF to DECREF (adding an assertion just to be sure).
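
In miniature, the change is this (illustrative variable name):

```c
assert(item != NULL);   /* provably non-NULL here; make that checkable */
Py_DECREF(item);        /* so Py_XDECREF's NULL test was dead weight */
```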
|
have a __module__. Test for this case.
Bugfix candidate, will backport.
|
tests which nicely highlight weaknesses).
* Initial value is now a large prime.
* Pre-multiply by the set length to add one more basis of differentiation.
* Work a bit harder inside the loop to scatter bits from sources that
may have closely spaced hash values.
All of this is necessary to make up for keeping the hash function commutative.
Fortunately, the hash value is cached so the call to frozenset_hash() will
only occur once per set.
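
A standalone sketch of that shape, written over precomputed entry hashes; the constants are the ones I recall from setobject.c and should be treated as illustrative:

```c
static long
set_hash_sketch(const long *entry_hashes, size_t n)
{
    long hash = 1927868237L;             /* large prime initial value */
    size_t i;

    hash *= (long)n + 1;                 /* pre-multiply by the length */
    for (i = 0; i < n; i++) {
        long h = entry_hashes[i];
        /* XOR keeps the result order-independent (commutative), so
           work harder here to scatter closely spaced hash values. */
        hash ^= (h ^ (h << 16) ^ 89869747L) * 3644798167UL;
    }
    hash = hash * 69069L + 907133923L;   /* final permutation */
    if (hash == -1)
        hash = 590923713L;               /* -1 is reserved for errors */
    return hash;
}
```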
|
* Non-zero initial value so that hash(frozenset()) != hash(0).
* Final permutation to differentiate nested sets.
* Add logic to make sure that -1 is not a possible hash value.
|
Prevents a collision pattern that occurs with nested tuples.
(Yitz Gale provided code that repeatably demonstrated the weakness.)
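
A sketch of the fix's idea, over precomputed item hashes (constants recalled from tupleobject.c; treat as illustrative rather than the committed code):

```c
static long
tuple_hash_sketch(const long *item_hashes, long len)
{
    long x = 0x345678L;
    long mult = 1000003L;

    while (--len >= 0) {
        long y = item_hashes[len];
        x = (x ^ y) * mult;
        /* Varying the multiplier per position breaks the self-similar
           structure that let nested tuples collide systematically. */
        mult += 82520L + len + len;
    }
    x += 97531L;
    if (x == -1)
        x = -2;                 /* -1 is reserved for errors */
    return x;
}
```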
|
many more prime multipliers and that performs well on collision tests.
|
iswide() for East Asian width manipulation. (Inspired by David
Goodger, Reviewed by Martin v. Loewis)
- Move _PyUnicode_TypeRecord.flags to the end of the struct so that
no padding is added for UCS-4 builds. (Suggested by Martin v. Loewis)
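
A generic illustration of the padding point, not the actual _PyUnicode_TypeRecord layout; sizes assume a typical ABI where short is 2 bytes and int is 4:

```c
#include <stdio.h>

struct padded  { unsigned short flags; unsigned int upper; unsigned short x; };
struct compact { unsigned int upper; unsigned short flags; unsigned short x; };

int main(void)
{
    /* Typically prints "12 8": the misplaced 2-byte field costs
       4 bytes of padding per record. */
    printf("%zu %zu\n", sizeof(struct padded), sizeof(struct compact));
    return 0;
}
```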
|
(Basic approach and test concept by Tim Peters.)
* Improved the hash to reduce collisions.
* Added the torture test to the test suite.
|
[ 899109 ] 1==float('nan')
which can now finally be closed, I think.
|
Minor wording fix.
|
- Neatened the braces in PyList_New().
- Made sure "indexerr" was initialized to NULL.
- Factored if blocks in PyList_Append().
- Made sure "allocated" is initialized in list_init().
|
Re-use list object bodies. Saves calls to malloc() and free() for
faster list instantiation and deallocation.
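
The scheme is roughly the following (the names and the size constant are illustrative; the real code lives in listobject.c's PyList_New() and list_dealloc()):

```c
#include "Python.h"

#define MAX_FREE_LISTS 80
static PyListObject *free_lists[MAX_FREE_LISTS];
static int num_free_lists = 0;

static PyListObject *
alloc_list_body(void)
{
    if (num_free_lists) {
        PyListObject *op = free_lists[--num_free_lists];
        _Py_NewReference((PyObject *)op);   /* reuse: refcount back to 1 */
        return op;
    }
    return PyObject_GC_New(PyListObject, &PyList_Type);
}

static void
free_list_body(PyListObject *op)
{
    if (num_free_lists < MAX_FREE_LISTS && PyList_CheckExact(op))
        free_lists[num_free_lists++] = op;  /* park the body for reuse */
    else
        op->ob_type->tp_free((PyObject *)op);
}
```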